Glanceable AR: Evaluating Information Access Methods for Head-Worn Augmented Reality
PubDate: May 2020
Team: Virginia Tech
Authors: Feiyu Lu; Shakiba Davari; Lee Lisle; Yuan Li; Doug A. Bowman
Augmented reality head-worn displays (AR HWDs) have the potential to support personal computing and everyday information acquisition. In this research, we propose Glanceable AR, an interaction paradigm for accessing information on AR HWDs. In Glanceable AR, secondary information resides at the periphery of vision to remain unobtrusive and can be accessed with a quick glance whenever needed. We propose two novel hands-free interfaces: “head-glance,” in which virtual content is fixed to the user’s body and accessed by head rotation, and “gaze-summon,” in which content is “summoned” into central vision by eye-tracked gazing at the periphery. We compared these techniques with a baseline heads-up display (HUD), which we call the “eye-glance” interface, in two dual-task scenarios. We found that for discretionary information access, the head-glance and eye-glance interfaces were preferred over, and more efficient than, the gaze-summon interface; for a continuous monitoring task, the eye-glance interface was preferred. We discuss the implications of our findings for designing Glanceable AR interfaces on AR HWDs.