A study on improving performance in gesture training through visual guidance based on learners’ errors

Note: We do not have the ability to review papers.

PubDate: November 2017

Teams: Sorbonne Universités and Université de Technologie de Compiègne

Writers: Florian Jeanne, Indira Thouvenin, Alban Lenglet

PDF: A study on improving performance in gesture training through visual guidance based on learners’ errors

Abstract

Gesture training, especially for technical gestures, requires supervisors to point out errors made by trainees. Virtual reality (VR) makes it possible to reduce reliance on supervisors (fewer interventions and of shorter duration) and to reduce the length of training, using extrinsic feedback that provides training or learning assistance using different modalities (visual, auditory, and haptic). Visual feedback has received much attention in recent decades. Users can be guided by a metaphor in a virtual environment. This metaphor may be a 3D trace of canonical movements, a visual cue pointing in the right direction, or gestures by an avatar that the trainee must mimic. However, with many kinds of feedback, trainees are not aware of their errors while performing gestures. Our hypothesis is that guiding users with a dynamic metaphor based on the visualization of errors will reduce these errors and improve performance. To this end, in a previous work we designed and implemented a new 3D metaphor called EBAGG to guide users in real time.

In the present paper we evaluate EBAGG against two other visual cues: first, a feedforward technique that displays the trace of a reference movement, and, second, concurrent orientation feedback. The results of the user study show that EBAGG outperformed the other two techniques in improving users’ performance over a training session. Moreover, the information assimilated during training with this dynamic feedback had a persistent effect once the metaphor was no longer displayed.
