
Gaze-Adaptive Subtitles Considering the Balance among Vertical/Horizontal and Depth of Eye Movement

Note: We don't have the ability to review papers

PubDate: November 2021

Teams: Kobe University

Writers: Yusuke Shimizu; Ayumi Ohnishi; Tsutomu Terada; Masahiko Tsukamoto

PDF: Gaze-Adaptive Subtitles Considering the Balance among Vertical/Horizontal and Depth of Eye Movement

Abstract

Subtitles (captions displayed on the screen) are important in 3D content, such as virtual reality (VR) and 3D movies, to help users understand the content. However, an optimal display method and framework for subtitles have not been established for 3D content because 3D has a depth factor. To determine how to place text in 3D content, we propose four methods of moving subtitles dynamically, considering the balance between the vertical/horizontal and depth components of gaze shift. These methods are used to reduce the difference in depth or distance between the gaze position and the subtitles. Additionally, we evaluate the readability of the text and participants’ fatigue. The results show that aligning the text horizontally and vertically to eye movements improves visibility and readability. It is also shown that eyestrain is related to the distance between the object and the subtitles. This evaluation provides basic knowledge for presenting text in 3D content.
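The paper does not include code, and the sketch below is not the authors' four methods. It is only a minimal illustration of the general idea described in the abstract: keeping a subtitle close to the viewer's gaze by blending its horizontal/vertical position toward the gaze point and matching its depth to the fixated object. The `Vec3` type, the `update_subtitle_position` function, and the gain values are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float  # z is depth (distance from the viewer)


def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b."""
    return a + (b - a) * t


def update_subtitle_position(subtitle: Vec3, gaze: Vec3,
                             xy_gain: float = 0.3,
                             depth_gain: float = 0.5) -> Vec3:
    """Move the subtitle part of the way toward the current gaze point.

    xy_gain controls how strongly the subtitle follows vertical/horizontal
    gaze shifts; depth_gain controls how strongly its depth is matched to
    the depth of the fixated object. Both gains are illustrative values,
    not parameters from the paper.
    """
    return Vec3(
        x=lerp(subtitle.x, gaze.x, xy_gain),
        y=lerp(subtitle.y, gaze.y, xy_gain),
        z=lerp(subtitle.z, gaze.z, depth_gain),
    )


# Example: one update step for a subtitle placed at 2 m depth while the
# viewer fixates an object at 5 m, slightly up and to the right.
subtitle = Vec3(0.0, -0.2, 2.0)
gaze = Vec3(0.3, 0.1, 5.0)
print(update_subtitle_position(subtitle, gaze))
```

Calling the update once per frame would gradually pull the subtitle toward the gaze position; separate gains for the vertical/horizontal and depth components reflect the trade-off the paper evaluates between those two kinds of eye movement.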
