
Towards Advanced User Guidance and Context Awareness in Augmented Reality-guided Procedures


PubDate: May 2023

Teams: ETH Zurich

Writers: Wolf, Julian

PDF: Towards Advanced User Guidance and Context Awareness in Augmented Reality-guided Procedures

Abstract

Procedural tasks are common in many professions, such as maintenance, assembly, or surgery, and are characterized by an operator performing a predefined sequence of steps to achieve a specific goal. Because these tasks often involve complex machines, devices, or even patients, they place high demands on correct task execution.
Augmented reality (AR) head-mounted displays (HMDs) have been shown to provide effective support during procedural tasks. Compared to conventional information media, where information is often spread across multiple documents (e.g., maintenance) or external screens (e.g., surgery), AR HMDs display contextual information directly in the operator's field of view without occupying the operator's hands. While in conventional AR the displayed information changes only in response to manual user input, context-aware AR promises to further improve support by automatically adapting the displayed information to the operator's current needs and by providing feedback. Understanding the strengths and weaknesses of these two technologies is key to developing support systems that can improve the quality of task execution, making procedural tasks safer and improving outcomes. Previous studies on context-aware systems have focused primarily on manual execution without considering an important part of human interaction: perception. Eye tracking allows perception to be measured, provides deep insights into cognitive processes, and might therefore bring benefits to context-aware systems that warrant investigation.

This work investigates different concepts of how AR and context-aware AR support systems can be designed, how they work, and how they affect operators’ task performance. It further aims to advance context-aware AR support by integrating eye tracking and by deriving a suitable system model to describe the relationships between human behavior, AR, and context-aware AR. Three studies are presented in this work.

Study I investigates the benefits of contextual information in AR over traditional information media for providing training instructions. A study was conducted with 21 medical students performing an extracorporeal membrane oxygenation (ECMO) cannulation on a physical simulator setup. The evaluation comprised a detailed error protocol with both a categorization into knowledge- and handling-related errors and an error severity ranking. The results showed clear benefits of AR over conventional instructions while pointing out certain limitations that context-aware AR might address.

Study II investigates effective visualization strategies when real-time feedback is provided continuously. A study was conducted with 4 expert surgeons and 10 surgical residents performing surgical drilling on a physical simulator setup. The results show that continuous performance feedback generally levels task performance between novice and expert operators, reveal clear advantages and preferences of certain AR visualizations, and give insights into how AR visualizations guide visual attention. In particular, the peripheral field around the area of execution proved promising for displaying information, as the operator can simultaneously perceive feedback and coordinate hand movement.

Study III investigates the suitability of eye and hand tracking features for predicting and preventing an operator's erroneous actions. A study was conducted on a memory card game to explore the potential and limitations of this approach. The first experiment, which involved 10 participants, recorded participants' eye and hand movements to derive a method for target prediction. The second experiment, with 12 participants, examined the timeliness and accuracy of the implemented method end-to-end and showed it to be highly effective in preventing a user's erroneous hand actions.
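The abstract does not detail how the target prediction in Study III works, but one common approach to gaze-based target prediction is to accumulate dwell time on candidate targets and fire a prediction once a threshold is reached. The following is a minimal illustrative sketch of that idea, not the thesis's actual method; all names and thresholds are assumptions.

```python
# Hypothetical sketch of gaze-based target prediction by dwell-time
# accumulation. A prediction fires as soon as accumulated gaze time on
# one candidate exceeds a threshold, which would allow a support system
# to warn the user before the hand action completes.
from collections import defaultdict

def predict_target(gaze_samples, dwell_threshold=0.3):
    """Return the first candidate whose accumulated dwell time reaches
    the threshold, or None if no candidate qualifies.

    gaze_samples: iterable of (target_id, dt_seconds) pairs, where
    target_id is the candidate the gaze ray currently hits (or None
    when the gaze is on no candidate).
    """
    dwell = defaultdict(float)
    for target_id, dt in gaze_samples:
        if target_id is None:
            continue  # gaze not on any candidate; ignore this sample
        dwell[target_id] += dt
        if dwell[target_id] >= dwell_threshold:
            return target_id  # fire early to enable pre-emptive feedback
    return None
```

For example, a stream such as `[("card_3", 0.1), (None, 0.05), ("card_3", 0.25)]` would yield `"card_3"` once its total dwell time passes 0.3 s. A real system would additionally combine such gaze features with hand-trajectory features, as the study's end-to-end evaluation implies.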

One of the key conclusions of this work is that context-aware AR support can significantly improve procedural outcomes and even raise the task performance of less experienced operators to the level of experts. In addition, analyzing hand-eye coordination patterns in real-time allows for predictive AR support and error prevention, which might eventually provide a safety net for operators performing their first independent task executions. For future work, important research directions include integrating and advancing predictive AR support for more complex procedures, investigating effective visualization strategies in environments with multiple dynamic visual stimuli, as well as effective feedback and support strategies while operators transition from their first training to independent execution and eventually become experts.
