Conveying spatial awareness cues in xR collaborations
PubDate: August 2019
Teams: Australian Research Centre for Interactive and Virtual Environments; University of Canterbury
Writers: Andrew Irlitti; Thammathip Piumsomboon; Daniel Jackson; Bruce H. Thomas
PDF: Conveying spatial awareness cues in xR collaborations
Abstract
Spatial Augmented Reality (SAR) systems can be combined with other existing Extended Reality (xR) technologies to support collaboration. In existing strategies, users unencumbered by a viewing technology, such as a tablet interface or a head-mounted display, must rely on interpreting their collaborators’ positioning through a first-person camera view. This design creates a seam between a user’s experience of the augmented physical environment in SAR and their collaborators’ experience inside the virtual environment. To assist in the development and evaluation of spatial cues that support spatial awareness in SAR environments, an egocentric spatial-communication taxonomy is presented, defined by two dimensions: a cue’s attachment (physical/virtual) and its animation (local/world). We developed four egocentric cues, one for each quadrant of the resulting matrix: arrow, path, glow, and radial, along with a single exocentric world-in-miniature visualization. Our study shows that virtually attached cues are preferred, providing the highest accuracy, the best performance when collaborators are occluded, and the least mental effort when used with a single virtual collaborator. With multiple collaborators, however, the virtually attached, world-animated radial cue produces significant increases in mental load and reductions in preference, demonstrating the impact of visual augmentation clutter. The single exocentric visualization produced higher levels of head movement and poorer accuracy; however, its novelty produced positive qualitative results.
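The two-dimensional taxonomy described above can be sketched as a small lookup structure. This is an illustrative sketch only: the radial cue's cell (virtual attachment, world animation) is stated in the abstract, but the cell assignments for arrow, path, and glow below are placeholder assumptions, and the helper function name is invented for this example.

```python
from enum import Enum
from typing import Dict, List, Tuple


class Attachment(Enum):
    """First taxonomy dimension: what the cue is anchored to."""
    PHYSICAL = "physical"
    VIRTUAL = "virtual"


class Animation(Enum):
    """Second taxonomy dimension: the frame the cue animates in."""
    LOCAL = "local"
    WORLD = "world"


# Each cue occupies one cell of the 2x2 matrix. Only the radial cue's cell
# is given in the abstract; the other three assignments are assumptions
# made purely to populate the example.
CUE_TAXONOMY: Dict[str, Tuple[Attachment, Animation]] = {
    "arrow":  (Attachment.PHYSICAL, Animation.LOCAL),   # assumed cell
    "path":   (Attachment.PHYSICAL, Animation.WORLD),   # assumed cell
    "glow":   (Attachment.VIRTUAL,  Animation.LOCAL),   # assumed cell
    "radial": (Attachment.VIRTUAL,  Animation.WORLD),   # per the abstract
}


def cues_with_attachment(attachment: Attachment) -> List[str]:
    """Return the names of cues sharing the given attachment dimension."""
    return [name for name, (att, _) in CUE_TAXONOMY.items() if att == attachment]
```

A structure like this makes it easy to group cues along either dimension when comparing study conditions, e.g. `cues_with_attachment(Attachment.VIRTUAL)` selects the virtually attached cues.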