
Saccade-Contingent Rendering

Note: We do not have the ability to review this paper.

PubDate: Jan 2024

Teams: Reality Labs

Writers: Yuna Kwak, Eric Penner, Xuan Wang, Mohammad R. Saeedpour-Parizi, Olivier Mercier, Xiuyun Wu, T. Scott Murdison, Phillip Guan

PDF: Saccade-Contingent Rendering

Abstract

Battery-constrained power consumption, compute limitations, and high frame rate requirements in head-mounted displays present unique challenges in the drive to present increasingly immersive and comfortable imagery in virtual reality. However, humans are not equally sensitive to all regions of the visual field, and perceptually-optimized rendering techniques are increasingly utilized to address these bottlenecks. Many of these techniques are gaze-contingent and often render reduced detail away from a user’s fixation. Such techniques are dependent on spatio-temporally-accurate gaze tracking and can result in obvious visual artifacts when eye tracking is inaccurate. In this work we present a gaze-contingent rendering technique which only requires saccade detection, bypassing the need for highly-accurate eye tracking. In our first experiment, we show that visual acuity is reduced for several hundred milliseconds after a saccade. In our second experiment, we use these results to reduce the rendered image resolution after saccades in a controlled psychophysical setup, and find that observers cannot discriminate between saccade-contingent reduced-resolution rendering and full-resolution rendering. Finally, in our third experiment, we introduce a 90 pixels per degree headset and validate our saccade-contingent rendering method under typical VR viewing conditions.
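The core mechanism described in the second and third experiments is simple: detect a saccade, then render at reduced resolution only for a brief post-saccadic window while visual acuity is suppressed. The sketch below illustrates that control logic only; it is not the authors' implementation, and the velocity threshold, window duration, and resolution scale are illustrative assumptions loosely based on the abstract's "several hundred milliseconds" finding.

import math

SACCADE_VELOCITY_DEG_PER_S = 180.0   # assumed saccade-detection threshold
REDUCED_RES_WINDOW_S = 0.3           # assumed "several hundred ms" window
REDUCED_RES_SCALE = 0.5              # assumed post-saccade resolution scale


class SaccadeContingentRenderer:
    def __init__(self):
        self.prev_gaze = None              # previous gaze angle (deg, deg)
        self.time_since_saccade = math.inf

    def update(self, gaze_deg, dt):
        """Return the resolution scale to use for this frame.

        gaze_deg: (x, y) gaze angle in degrees. It is only used to estimate
                  eye velocity for saccade detection, so spatial accuracy
                  requirements on the eye tracker are low.
        dt:       frame time in seconds.
        """
        self.time_since_saccade += dt
        if self.prev_gaze is not None and dt > 0:
            dx = gaze_deg[0] - self.prev_gaze[0]
            dy = gaze_deg[1] - self.prev_gaze[1]
            velocity = math.hypot(dx, dy) / dt
            if velocity > SACCADE_VELOCITY_DEG_PER_S:
                # Saccade detected: open the reduced-resolution window.
                self.time_since_saccade = 0.0
        self.prev_gaze = gaze_deg

        if self.time_since_saccade < REDUCED_RES_WINDOW_S:
            return REDUCED_RES_SCALE       # render at reduced resolution
        return 1.0                         # full resolution otherwise

In use, the returned scale would multiply the render-target resolution for that frame; a production system would presumably also gate on eye-tracker signal confidence before dropping resolution.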
