Disambiguation Techniques for Freehand Object Manipulations in Virtual Reality
PubDate: May 2020
Teams: University of Toronto
Writers: Di Laura Chen; Ravin Balakrishnan; Tovi Grossman
Manipulating virtual objects with bare hands is an attractive interaction paradigm in virtual and augmented reality because of its intuitive nature. However, one limitation of freehand input is the ambiguity of its resulting effect: the same gesture performed on a virtual object can invoke different operations depending on the context, the object's properties, and the user's intention. We present an experimental analysis of a set of disambiguation techniques in a virtual reality environment, comparing three input modalities (head gaze, speech, and foot tap) paired with three timings at which options appear to resolve the ambiguity (before, during, and after an interaction). The results indicate that using head gaze for disambiguation during the interaction with the object achieved the best performance.
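The comparison crosses each input modality with each timing, yielding nine conditions. A minimal sketch of that enumeration (the condition labels come from the abstract; the variable names are illustrative, not from the paper):

```python
from itertools import product

# Three input modalities and three timings, as listed in the abstract.
modalities = ["head gaze", "speech", "foot tap"]
timings = ["before", "during", "after"]

# Cross the two factors to enumerate the nine study conditions.
conditions = list(product(modalities, timings))

for modality, timing in conditions:
    print(f"{modality} / {timing}")

print(len(conditions))  # 9 modality-by-timing conditions
```

The condition the abstract reports as best-performing corresponds to the pair `("head gaze", "during")`.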