Mutual Disambiguation of 3D Multimodal Interaction in Augmented and Virtual Reality
Teams: Microsoft
Writers: Ed Kaiser, Alex Olwal, David McGee, Hrvoje Benko, Andrea Corradini, Xiaoguang Li, Phil Cohen, Steven Feiner
Publication date: November 2003
Abstract
We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents. The referential agents employ visible or invisible volumes that can be attached to 3D trackers in the environment, and which use a time-stamped history of the objects that intersect them to derive statistics for ranking potential referents. We discuss the means by which the system supports mutual disambiguation of these modalities and information sources, and show through a user study how mutual disambiguation accounts for over 45% of the successful 3D multimodal interpretations. An accompanying video demonstrates the system in action.
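The referential agents described above rank candidate referents from a time-stamped history of objects intersecting a tracked volume. As an illustration only, the following sketch shows one plausible way such ranking could work; the class name, recency-weighting scheme, and time window are assumptions, not the paper's actual implementation.

```python
import time
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ReferentialAgent:
    """Hypothetical sketch: a volume attached to a 3D tracker that logs
    which scene objects intersect it, and ranks likely referents."""
    history: list = field(default_factory=list)  # (timestamp, object_id) pairs

    def record_intersection(self, object_id, timestamp=None):
        # Append a time-stamped intersection event to the history.
        self.history.append(
            (timestamp if timestamp is not None else time.time(), object_id)
        )

    def rank_referents(self, now=None, window=2.0):
        # Score each object by recency-weighted intersection counts within
        # the window; normalize so scores form a probability-like ranking.
        now = now if now is not None else time.time()
        scores = Counter()
        for ts, obj in self.history:
            age = now - ts
            if 0 <= age <= window:
                scores[obj] += 1.0 - age / window  # newer events weigh more
        total = sum(scores.values()) or 1.0
        return [(obj, s / total) for obj, s in scores.most_common()]
```

A speech or gesture interpreter could then fuse these normalized scores with its own hypotheses, letting a confident modality override an ambiguous one, which is the essence of mutual disambiguation.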