
Bring-Your-Own Input: Context-Aware Multi-Modal Input for More Accessible Virtual Reality

Note: We don't have the ability to review papers

PubDate: April 2023

Teams: University of Waterloo

Writers: Johann Wentzel

PDF: Bring-Your-Own Input: Context-Aware Multi-Modal Input for More Accessible Virtual Reality

Abstract

Virtual reality applications make assumptions about user ability which may be difficult or even impossible to meet by people with limited mobility. However, we can increase the accessibility of these applications by taking advantage of the device combinations and usage contexts that people with mobility limitations already employ. By designing context-aware multi-modal interfaces which gracefully adapt not only to the user’s input devices, but also to surrounding usage context like body or workspace position, we can meaningfully improve the overall accessibility of spatial computing. My research plan is threefold: first, qualitative research reveals how people with mobility limitations combine input devices to overcome accessibility barriers (published at CHI 2022). Next, we categorize these combinations based on their input dimensions, and develop a study of gracefully degrading input fidelity to understand how device combinations’ differing input spaces affect VR usage. Finally, we examine how the user’s surrounding context affects VR input and output, by exploring the design space of context-aware interfaces which adapt to changes in the user’s body position, output device (headset or desktop), or workspace proximity. My overall goal is to show how intelligent adaptation to input device combinations and surrounding input context can lead to more accessible spatial interfaces, and to provide actionable recommendations for designers and researchers creating accessible VR experiences.
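To make the idea of graceful, context-aware adaptation more concrete, below is a minimal Python sketch. It is a hypothetical illustration, not the paper's method or implementation: the device names, context fields, and interaction-technique labels are invented for the example. It shows one way an interface could pick an interaction technique from whatever devices the user brings, falling back to lower-fidelity input when higher-fidelity input is unavailable.

```python
# Hypothetical sketch (not from the paper): a context-aware input mapper that
# selects an interaction technique from the user's available devices and
# surrounding usage context, degrading gracefully when fidelity is limited.
from dataclasses import dataclass


@dataclass
class UsageContext:
    """Surrounding context the interface adapts to (example fields only)."""
    devices: set            # e.g. {"gamepad", "eye_tracker"}
    body_position: str      # e.g. "seated", "lying", "standing"
    output_device: str      # "headset" or "desktop"


def choose_pointing_technique(ctx: UsageContext) -> str:
    """Prefer higher-fidelity input when present; otherwise fall back to
    lower-dimensional device combinations (illustrative rules)."""
    if "motion_controller" in ctx.devices and ctx.body_position == "standing":
        return "direct_ray_pointing"
    if "eye_tracker" in ctx.devices and "gamepad" in ctx.devices:
        return "gaze_plus_button_confirm"
    if ctx.output_device == "desktop":
        return "mouse_cursor_raycast"
    return "head_gaze_dwell"  # lowest-effort fallback


if __name__ == "__main__":
    ctx = UsageContext(devices={"eye_tracker", "gamepad"},
                       body_position="seated",
                       output_device="headset")
    print(choose_pointing_technique(ctx))  # -> gaze_plus_button_confirm
```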
