
eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues

Note: We do not have the ability to review this paper.

PubDate: December 2021

Teams: University of South Australia

Writers: Allison Jing, Brandon Matthews, Kieran May, Thomas Clarke, Gun Lee, Mark Billinghurst

PDF: eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues

Abstract

In this poster we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user in the real world in an Augmented Reality (AR) view and a remote collaborator in Virtual Reality (VR). Using specific speech phrases to turn on virtual gaze visualisations, the system enables contextual speech-gaze interaction between collaborators. The overall benefit is more natural gaze awareness, leading to better communication and more effective collaboration.
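To make the speech-triggered mechanism concrete, here is a minimal Python sketch of how an utterance might toggle a shared gaze cue. This is an illustrative assumption, not the paper's implementation: the trigger phrases, the `GazeCueMessage` fields, and the `eye_tracker`/`network_send` callbacks are all hypothetical stand-ins for whatever speech-recognition, eye-tracking, and networking APIs the actual system uses.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Callable, Optional, Tuple

# Hypothetical trigger phrases; the paper only says "specific speech phrases".
GAZE_ON_PHRASES = ("look here", "look at this", "over here")
GAZE_OFF_PHRASES = ("stop looking", "never mind")

Vec3 = Tuple[float, float, float]

@dataclass
class GazeCueMessage:
    """Assumed message format sent to the other collaborator's AR/VR view."""
    sender: str            # e.g. "local-AR" or "remote-VR"
    visible: bool          # whether the shared gaze cue should be drawn
    gaze_origin: Vec3      # gaze ray origin in shared coordinates
    gaze_direction: Vec3   # gaze ray direction in shared coordinates
    timestamp: float

def phrase_toggles_gaze(transcript: str) -> Optional[bool]:
    """Return True/False if the utterance should show/hide gaze cues, else None."""
    text = transcript.lower()
    if any(p in text for p in GAZE_ON_PHRASES):
        return True
    if any(p in text for p in GAZE_OFF_PHRASES):
        return False
    return None

def on_speech_recognised(
    transcript: str,
    sender: str,
    eye_tracker: Callable[[], Tuple[Vec3, Vec3]],
    network_send: Callable[[str], None],
) -> None:
    """Handle one recognised utterance: if it contains a trigger phrase,
    sample the current gaze ray and send a visibility update to the peer."""
    visible = phrase_toggles_gaze(transcript)
    if visible is None:
        return  # utterance carried no gaze-cue trigger
    origin, direction = eye_tracker()
    msg = GazeCueMessage(sender, visible, origin, direction, time.time())
    network_send(json.dumps(asdict(msg)))
```

The key design point this sketch tries to capture is the coupling of modalities: gaze data is streamed or sampled continuously, but it is only *visualised* for the partner when speech provides the conversational context for it.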
