EyeHacker: Gaze-Based Automatic Reality Manipulation

Title: EyeHacker: Gaze-Based Automatic Reality Manipulation

Teams: The University of Tokyo Ishikawa College

Writers: Daichi Ito; Sohei Wakisaka; Atsushi Izumihara; Tomoya Yamaguchi; Atsushi Hiyama; Masahiko Inami

Publication date: July 2019

Abstract

In this study, we introduce EyeHacker, an immersive virtual reality (VR) system that spatiotemporally mixes live and recorded/edited scenes based on measurements of the users' gaze. The system updates a transition risk in real time using the users' gaze information (i.e., the locus of attention) and the optical flow of the scenes. Scene transitions are allowed when the risk falls below a threshold, which is modulated by the users' head movement data (the faster the head movement, the higher the threshold). Using this algorithm and an experience scenario prepared in advance, visual reality can be manipulated without users noticing (i.e., eye hacking). For example, consider a situation in which the objects around the users perpetually disappear and appear. The users would often have a strange feeling that something was wrong and would sometimes even discover what had happened, but only later; they cannot visually perceive the changes in real time. Furthermore, with other variants of the risk algorithm, the system can implement a variety of experience scenarios, resulting in reality confusion.
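The gating logic described in the abstract (a risk score from gaze and optical flow, compared against a head-movement-modulated threshold) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the Gaussian attention weighting, and the particular way gaze and flow are combined into a single risk value are all assumptions.

```python
import numpy as np

def transition_risk(gaze_point, flow_field, sigma=0.15):
    """Illustrative transition-risk estimate.

    gaze_point: (x, y) in normalized [0, 1] image coordinates.
    flow_field: H x W x 2 optical-flow vectors for the current scene.

    As one plausible combination, risk is taken to be high when strong
    scene motion lies near the locus of attention; the paper's actual
    formula may differ.
    """
    h, w = flow_field.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Gaussian attention map centered on the gaze point.
    d2 = (xs / w - gaze_point[0]) ** 2 + (ys / h - gaze_point[1]) ** 2
    attention = np.exp(-d2 / (2 * sigma ** 2))
    flow_mag = np.linalg.norm(flow_field, axis=2)
    # Attention-weighted average flow magnitude.
    return float((attention * flow_mag).sum() / attention.sum())

def allow_transition(risk, head_speed, base_threshold=0.5, gain=0.1):
    """Allow a scene transition when risk is below the threshold.

    Faster head movement raises the threshold (per the abstract), since
    changes made during rapid head motion are harder to notice.
    """
    return risk < base_threshold + gain * head_speed
```

In use, the system would evaluate `transition_risk` every frame and trigger a prepared scene swap only in frames where `allow_transition` returns true, so the manipulation coincides with moments of low noticeability.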
