
Mixing Modalities of 3D Sketching and Speech for Interactive Model Retrieval in Virtual Reality

Note: We do not have the ability to review papers

PubDate: May 2021

Teams: University College London; Istituto Italiano di Tecnologia

Writers: Daniele Giunchi, Alejandro Sztrajman, Stuart James, Anthony Steed

PDF: Mixing Modalities of 3D Sketching and Speech for Interactive Model Retrieval in Virtual Reality

Abstract

Sketch and speech are intuitive interaction methods that convey complementary information and have been independently used for 3D model retrieval in virtual environments. While sketch has been shown to be an effective retrieval method, not all collections are easily navigable using this modality alone. We design a new, challenging database for sketch retrieval composed of 3D chairs in which each component (arms, legs, seat, back) is independently colored. To overcome this limitation, we implement a multimodal interface for querying 3D model databases within a virtual environment. We base the sketch interaction on the state of the art in 3D sketch retrieval, and use a Wizard-of-Oz style experiment to process the voice input. In this way, we avoid the complexities of natural language processing, which frequently requires fine-tuning to be robust. We conduct two user studies and show that hybrid search strategies emerge from the combination of interactions, fostering the advantages provided by both modalities.
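To make the idea of a hybrid query concrete, the toy sketch below (not the authors' implementation; database entries, descriptor vectors, and the keyword-spotting helper are all hypothetical) shows one way sketch-based retrieval and speech could be combined: chairs are ranked by distance in a precomputed sketch-descriptor space, while part/color constraints extracted from the spoken transcript act as a filter, roughly standing in for the Wizard-of-Oz operator.

```python
# Hypothetical sketch: combine sketch-embedding nearest-neighbour search with
# part/colour filters taken from a speech transcript, over a toy chair database.
import numpy as np

# Toy database: each chair has a precomputed sketch-descriptor vector and
# per-part colour labels (arms, legs, seat, back), mirroring the paper's
# independently coloured chair components. Values are made up.
DATABASE = [
    {"id": "chair_01", "descriptor": np.array([0.90, 0.10, 0.00]),
     "parts": {"arms": "red", "legs": "black", "seat": "red", "back": "red"}},
    {"id": "chair_02", "descriptor": np.array([0.20, 0.80, 0.10]),
     "parts": {"arms": "blue", "legs": "black", "seat": "blue", "back": "blue"}},
    {"id": "chair_03", "descriptor": np.array([0.85, 0.20, 0.05]),
     "parts": {"arms": "red", "legs": "wood", "seat": "green", "back": "green"}},
]

PARTS = {"arms", "legs", "seat", "back"}
COLOURS = {"red", "blue", "green", "black", "wood"}


def speech_filters(transcript: str) -> dict:
    """Naive keyword spotting: pair each mentioned part with the most recent
    preceding colour word (a crude stand-in for the Wizard-of-Oz operator)."""
    filters, current_colour = {}, None
    for word in transcript.lower().split():
        word = word.strip(",.")
        if word in COLOURS:
            current_colour = word
        elif word in PARTS and current_colour is not None:
            filters[word] = current_colour
    return filters


def query(sketch_descriptor: np.ndarray, transcript: str, top_k: int = 2):
    """Keep only chairs satisfying every spoken part-colour constraint,
    then rank the survivors by sketch-descriptor distance."""
    constraints = speech_filters(transcript)
    candidates = [
        chair for chair in DATABASE
        if all(chair["parts"].get(p) == c for p, c in constraints.items())
    ]
    candidates.sort(
        key=lambda chair: np.linalg.norm(chair["descriptor"] - sketch_descriptor)
    )
    return [chair["id"] for chair in candidates[:top_k]]


if __name__ == "__main__":
    # A rough sketch descriptor near chair_01/chair_03, plus a spoken hint.
    print(query(np.array([0.88, 0.15, 0.02]), "the chair with a green seat"))
    # -> ['chair_03']
```

The speech constraints prune the candidate set before the geometric ranking runs, which is one simple way the two modalities can complement each other: sketch resolves shape, speech resolves attributes that are hard to draw.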
