Video See-Through Mixed Reality with Focus Cues
PubDate: February 2022
Teams: Graz University of Technology; Stanford University
Writers: Christoph Ebner; Shohei Mori; Peter Mohr; Yifan Peng; Dieter Schmalstieg; Gordon Wetzstein; Denis Kalkofen
PDF: Video See-Through Mixed Reality with Focus Cues
Abstract
This work introduces the first approach to video see-through mixed reality with full support for focus cues. By combining the flexibility to adjust the focus distance found in varifocal designs with the robustness to eye-tracking error found in multifocal designs, our novel display architecture reliably delivers focus cues over a large workspace. In particular, we introduce gaze-contingent layered displays and mixed reality focal stacks, an efficient representation of mixed reality content that lends itself to fast processing for driving layered displays in real time. We thoroughly evaluate this approach by building a complete end-to-end pipeline for the capture, rendering, and display of focus cues in video see-through displays that uses only off-the-shelf hardware and computing components.
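The abstract describes a focal-stack representation that decomposes mixed reality content across a small set of focal planes so it can drive a layered display in real time. The snippet below is a minimal illustrative sketch of that general idea, not the authors' method: it assumes an RGB-D frame is split into per-plane images by linearly blending each pixel between its two nearest focal planes (in diopters). The function name, plane distances, and blending rule are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's implementation) of decomposing an
# RGB-D mixed reality frame into a focal stack: one image per focal plane,
# with per-pixel weights based on how close the pixel's depth (in diopters)
# lies to each plane's focal distance.
import numpy as np

def mixed_reality_focal_stack(rgb, depth_m, plane_diopters):
    """rgb: (H, W, 3) color image; depth_m: (H, W) depth in meters;
    plane_diopters: sorted 1-D array of focal-plane distances in diopters."""
    d = 1.0 / np.clip(depth_m, 1e-3, None)          # per-pixel depth in diopters
    planes = np.asarray(plane_diopters, dtype=float)
    stack = np.zeros((len(planes),) + rgb.shape, dtype=rgb.dtype)

    # Linearly blend each pixel between its two nearest focal planes,
    # so the per-plane images sum back to the original frame.
    idx = np.clip(np.searchsorted(planes, d) - 1, 0, len(planes) - 2)
    lo, hi = planes[idx], planes[idx + 1]
    t = np.clip((d - lo) / np.maximum(hi - lo, 1e-6), 0.0, 1.0)

    for k in range(len(planes)):
        w = np.where(idx == k, 1.0 - t, 0.0) + np.where(idx + 1 == k, t, 0.0)
        stack[k] = rgb * w[..., None]
    return stack

# Example: decompose a synthetic frame onto four focal planes
# spanning roughly 0.25 m to 4 m (given here in diopters).
rgb = np.random.rand(480, 640, 3).astype(np.float32)
depth = np.random.uniform(0.3, 3.0, size=(480, 640)).astype(np.float32)
stack = mixed_reality_focal_stack(rgb, depth, plane_diopters=[0.25, 1.0, 2.0, 4.0])
```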