VRMoVi: Towards an Expressive Visualization for Human Motion and Object Interaction in Virtual Reality

Note: We don't have the ability to review papers

PubDate: Mar 2023

Teams: Chapman University

Writers: Di Qi, LouAnne Boyd, Scott Fitzpatrick, Meghna Raswan

PDF

Abstract

Virtual reality (VR)-based immersive analysis has emerged as an alternative to traditional approaches for analyzing complex, multidimensional human motion data. However, existing VR-based methods lack detailed information about hand motion and object interaction, which is essential for interpreting human activities and identifying users' needs. To address this gap, we present a new VR system, VRMoVi, with a unique design of three expressive visualization layers: 1) a 3D tube layer for general hand/object motion, 2) a hand-object avatar layer for animating hand-object interaction, and 3) a particle-with-arrow layer for detailed hand positions and orientations. We validated VRMoVi with a real-world VR human motion dataset and conducted a user study with 24 participants. Across the visualization conditions tested, VRMoVi performed significantly better than the traditional 2D condition and slightly better than the standard VR-based condition; users found VRMoVi comprehensible, immersive, easy to use, and useful for interpreting human activity data.
