Multi-Modal Interaction for Space Telescience of Fluid Experiments

Note: We do not have the ability to review this paper.

PubDate: November 2018

Teams: Chinese Academy of Sciences

Writers: Ge Yu; Ji Liang; Lili Guo

PDF: Multi-Modal Interaction for Space Telescience of Fluid Experiments

Abstract

In this paper, a novel multi-modal interaction strategy for the sequential multi-step operation processes of space telescience experiments is proposed to provide a realistic 'virtual presence' and a natural human-computer interface at the telescience ground facility. Because fluid properties in space differ from those on the ground, the fluid is first modeled with data-driven, physically based dynamic particles and rendered as a 3D stereoscopic scene in the CAVE and on the Oculus Rift. A single-channel speech separation method based on Deep Clustering with local optimization is then proposed to recover two or more individual speech signals from a mixed-speech environment. Speech recognition and speech synthesis are also implemented so that telecommands can be issued by voice. Next, a task-command hierarchical interaction scheme and a recognition algorithm for a set of easily understood hand gestures, captured with the Leap Motion for somatosensory control, are proposed to reduce mental workload. Finally, these interaction interfaces are integrated into the telescience experiment system. The results show that the proposed multi-modal interaction method provides a more efficient, natural, and intuitive user experience than traditional interaction.
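The speech separation step follows the general Deep Clustering recipe: a trained network embeds every time-frequency (T-F) bin of the mixture spectrogram, the embeddings are clustered, and each cluster becomes a binary mask for one speaker. The sketch below illustrates only that inference pipeline; `embed_tf_bins` is a hypothetical stand-in for the paper's trained network, the local-optimization refinement is not shown, and the toy projection is for shape-level illustration rather than the authors' implementation.

```python
# Minimal sketch of deep-clustering-style separation: embed each T-F bin,
# cluster the embeddings, and use cluster assignments as binary masks.
import numpy as np

def embed_tf_bins(log_mag: np.ndarray, dim: int = 20) -> np.ndarray:
    """Hypothetical stand-in for a trained embedding network.

    Maps a (T, F) log-magnitude spectrogram to one `dim`-dimensional,
    unit-norm embedding per T-F bin, shape (T*F, dim). A real system
    would run a network trained with the deep-clustering objective here.
    """
    T, F = log_mag.shape
    t = np.repeat(np.arange(T) / T, F)          # time coordinate per bin
    f = np.tile(np.arange(F) / F, T)            # frequency coordinate per bin
    feats = np.stack([log_mag.ravel(), t, f], axis=1)   # (T*F, 3)
    proj = np.random.default_rng(0).standard_normal((3, dim))
    emb = feats @ proj
    return emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-8)

def kmeans(x: np.ndarray, k: int, iters: int = 50) -> np.ndarray:
    """Plain k-means on the embeddings; returns a cluster id per row."""
    rng = np.random.default_rng(1)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

def separate(mix_spec: np.ndarray, n_speakers: int = 2) -> list[np.ndarray]:
    """Split a complex mixture spectrogram (T, F) into per-speaker parts."""
    log_mag = np.log1p(np.abs(mix_spec))
    labels = kmeans(embed_tf_bins(log_mag), n_speakers)
    # Binary masking: each T-F bin is assigned to exactly one speaker;
    # an inverse STFT of each masked spectrogram would yield the waveforms.
    masks = [(labels == j).reshape(mix_spec.shape) for j in range(n_speakers)]
    return [mix_spec * m for m in masks]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mix = rng.standard_normal((100, 129)) + 1j * rng.standard_normal((100, 129))
    print([p.shape for p in separate(mix)])     # two (100, 129) spectrograms
```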
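The task-command hierarchy is likewise only named in the abstract. A plausible reading is that one gesture selects a task context and a second selects a command within it, so N + M gestures cover N × M telecommands and the gesture vocabulary stays small, which is the stated mental-workload benefit. The sketch below is hypothetical throughout: the gesture names, telecommands, and mapper are illustrative, and a real system would feed it from Leap Motion hand-tracking events.

```python
# Toy two-level gesture vocabulary: first gesture picks a task context,
# second gesture picks a command within it. All names are illustrative,
# not the paper's actual gesture set or telecommand list.
from dataclasses import dataclass
from typing import Optional

TASKS = {
    "swipe_left":  "camera_control",
    "swipe_right": "heater_control",
}
COMMANDS = {
    "camera_control": {"pinch": "CAM_ZOOM_IN", "open_palm": "CAM_RESET"},
    "heater_control": {"pinch": "HEATER_ON",   "open_palm": "HEATER_OFF"},
}

@dataclass
class HierarchicalMapper:
    """Tracks the current task context and maps gesture pairs to telecommands."""
    task: Optional[str] = None

    def on_gesture(self, gesture: str) -> Optional[str]:
        if self.task is None:
            # First gesture: select a task context (unknown input is ignored).
            self.task = TASKS.get(gesture)
            return None
        # Second gesture: resolve a telecommand, then reset the context.
        command = COMMANDS.get(self.task, {}).get(gesture)
        self.task = None
        return command

mapper = HierarchicalMapper()
mapper.on_gesture("swipe_left")          # enters the camera_control context
print(mapper.on_gesture("pinch"))        # -> "CAM_ZOOM_IN"
```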
