EyeHacker: Gaze-Based Automatic Reality Manipulation
Teams: The University of Tokyo; Ishikawa College
Authors: Daichi Ito; Sohei Wakisaka; Atsushi Izumihara; Tomoya Yamaguchi; Atsushi Hiyama; Masahiko Inami
Publication date: July 2019
Abstract
In this study, we introduce EyeHacker, an immersive virtual reality (VR) system that spatiotemporally mixes live and recorded/edited scenes based on measurements of the users’ gaze. The system updates a transition risk in real time using the users’ gaze information (i.e., the locus of attention) and the optical flow of the scenes. Scene transitions are allowed when the risk falls below a threshold, which is modulated by the users’ head movement (the faster the head movement, the higher the threshold). Using this algorithm together with an experience scenario prepared in advance, visual reality can be manipulated without users noticing (i.e., eye hacking). For example, consider a situation in which objects around the users repeatedly disappear and reappear. Users often have the strange feeling that something is wrong and sometimes even discover what happened, but only after the fact; they cannot visually perceive the changes in real time. Furthermore, with other variants of the risk algorithm, the system can implement a variety of experience scenarios that induce reality confusion.
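The gating rule described in the abstract (compute a transition risk from gaze and optical flow, compare it against a head-movement-dependent threshold, and swap scenes only when the risk is low) can be sketched as follows. This Python snippet is a minimal illustration under stated assumptions: the risk model, the function names `transition_risk` and `transition_allowed`, and all parameters (`sigma_px`, `masking_gain`, `base_threshold`, `head_gain`) are hypothetical and are not taken from the paper.

```python
import math


def transition_risk(gaze_xy, change_xy, mean_flow,
                    sigma_px=150.0, masking_gain=2.0):
    """Estimate the risk that a scene swap would be noticed right now.

    Assumed model (not the authors' formula): risk grows when the changed
    region lies near the gaze point (the locus of attention) and shrinks
    when strong optical flow can mask the change.
    """
    gx, gy = gaze_xy
    cx, cy = change_xy
    # Gaussian falloff of attention around the gaze point.
    proximity = math.exp(-((cx - gx) ** 2 + (cy - gy) ** 2) / (2.0 * sigma_px ** 2))
    # Stronger global scene motion makes the change less perceivable.
    return proximity / (1.0 + masking_gain * mean_flow)


def transition_allowed(risk, head_speed_deg_s,
                       base_threshold=0.2, head_gain=0.01):
    """Gate scene transitions as described in the abstract: a transition is
    permitted when the risk falls below a threshold, and the threshold is
    raised in proportion to head-movement speed (the faster the head moves,
    the higher the threshold)."""
    threshold = base_threshold + head_gain * head_speed_deg_s
    return risk < threshold


# Example per-frame check (all numbers are illustrative):
if transition_allowed(
        transition_risk(gaze_xy=(640, 360), change_xy=(200, 500), mean_flow=1.5),
        head_speed_deg_s=45.0):
    print("swap to the recorded/edited scene")
```

In a running system, a check of this kind would be evaluated every frame against the scenario's pending transition, so that the swap fires at the first moment the gaze- and head-movement-dependent condition is satisfied.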