
Understanding Visual-Haptic Integration of Avatar Hands Using a Fitts’ Law Task in Virtual Reality

Note: We do not have the ability to review papers.

PubDate: September 2019

Teams: University of Regensburg

Writers: Valentin Schwind; Jan Leusmann; Niels Henze

PDF: Understanding Visual-Haptic Integration of Avatar Hands Using a Fitts’ Law Task in Virtual Reality

Abstract

Virtual reality (VR) is becoming increasingly ubiquitous for interacting with digital content and often requires renderings of avatars, as they enable improved spatial localization and high levels of presence. Previous work shows that visual-haptic integration of virtual avatars depends on body ownership and spatial localization in VR. However, there are different conclusions about how and which stimuli of one's own appearance are integrated into the own body scheme. In this work, we investigate whether systematic changes to the model and texture of a user's avatar affect input performance measured in a two-dimensional Fitts' law target selection task. Interestingly, we found that throughput remained constant between our conditions and that neither the model nor the texture of the avatar significantly affected the average duration to complete the task, even when participants felt different levels of presence and body ownership. In line with previous work, we found that the illusion of virtual limb ownership does not necessarily correlate with the degree to which vision and haptics are integrated into the own body scheme. Our work supports findings indicating that body ownership and spatial localization are potentially independent mechanisms in visual-haptic integration.
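For context, the throughput reported in Fitts' law studies is conventionally the index of difficulty (Shannon formulation) divided by movement time. A minimal sketch of that standard calculation, assuming the conventional formulation rather than the paper's exact procedure, with illustrative (hypothetical) target values:

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Throughput in bits/s: Shannon index of difficulty ID = log2(D/W + 1),
    divided by movement time in seconds."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return index_of_difficulty / movement_time

# Hypothetical trial: 0.24 m target distance, 0.03 m target width, 0.9 s movement time
tp = fitts_throughput(0.24, 0.03, 0.9)
print(round(tp, 2))  # ID = log2(9) ≈ 3.17 bits → ≈ 3.52 bits/s
```

A constant throughput across avatar conditions, as the abstract reports, means this ratio stayed stable even as presence and body-ownership ratings varied.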
