Saccade Landing Position Prediction for Gaze-Contingent Rendering

Affiliations: (1) Saarland University, MMCI; (2) MPI Informatik; (3) Intel Visual Computing Institute

Authors: Elena Arabadzhiyska (1,3), Okan Tarhan Tursun (2), Karol Myszkowski (2), Hans-Peter Seidel (2), Piotr Didyk (1,2)

Publication date: July 2017

Abstract

Gaze-contingent rendering shows promise in improving perceived quality by providing a better match between image quality and the human visual system requirements. For example, information about fixation allows rendering quality to be reduced in peripheral vision, and the additional resources can be used to improve the quality in the foveal region. Gaze-contingent rendering can also be used to compensate for certain limitations of display devices, such as reduced dynamic range or lack of accommodation cues. Despite this potential and the recent drop in the prices of eye trackers, the adoption of such solutions is hampered by system latency which leads to a mismatch between image quality and the actual gaze location. This is especially apparent during fast saccadic movements when the information about gaze location is significantly delayed, and the quality mismatch can be noticed. To address this problem, we suggest a new way of updating images in gaze-contingent rendering during saccades. Instead of rendering according to the current gaze position, our technique predicts where the saccade is likely to end and provides an image for the new fixation location as soon as the prediction is available. While the quality mismatch during the saccade remains unnoticed due to saccadic suppression, a correct image for the new fixation is provided before the fixation is established. This paper describes the derivation of a model for predicting saccade landing positions and demonstrates how it can be used in the context of gaze-contingent rendering to reduce the influence of system latency on the perceived quality. The technique is validated in a series of experiments for various combinations of display frame rate and eye-tracker sampling rate.
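To make the idea concrete, here is a minimal sketch of predicting a saccade landing position from the first few gaze samples after saccade onset. It assumes a stereotyped exponential displacement model, x(t) = x0 + A·(1 − exp(−t/τ)), with a fixed time constant τ; this is an illustrative simplification, not the authors' actual fitted model, and the function name and parameters are hypothetical.

```python
import math

def predict_landing(samples, dt, tau=0.02):
    """Predict the 1-D saccade landing position from early gaze samples.

    Illustrative sketch (not the paper's model): assumes the gaze position
    follows x(t) = x0 + A * (1 - exp(-t / tau)) with known time constant
    tau. 'samples' are gaze positions taken at interval dt (seconds),
    starting at saccade onset. The landing position is the model's
    asymptote x0 + A, estimated by least squares over the early samples.
    """
    x0 = samples[0]
    # Closed-form least squares for amplitude A with tau fixed:
    # minimize sum_i ((x_i - x0) - A * g_i)^2,  g_i = 1 - exp(-t_i / tau)
    num = den = 0.0
    for i, x in enumerate(samples):
        g = 1.0 - math.exp(-(i * dt) / tau)
        num += (x - x0) * g
        den += g * g
    amplitude = num / den if den > 0.0 else 0.0
    return x0 + amplitude
```

Once the prediction stabilizes, the renderer can place the high-quality foveal region at the predicted landing point while the saccade is still in flight, so a correct image is ready before the new fixation begins.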
