
Putting Vision and Touch Into Conflict: Results from a Multimodal Mixed Reality Setup

Note: We do not have the ability to review papers.

PubDate: Sep 2022

Teams: Korea University

Writers: Hyeokmook Kang; Taeho Kang; Christian Wallraven

PDF: Putting Vision and Touch Into Conflict: Results from a Multimodal Mixed Reality Setup

Abstract

What happens if we put vision and touch into conflict? Which modality “wins”? Although several previous studies have addressed this topic, they have focused solely on the integration of vision and touch for low-level object properties (such as curvature, slant, or depth). In the present study, we introduce a multimodal mixed-reality setup based on real-time hand-tracking, which was used to display real-world, haptic exploration of objects in a virtual environment through a head-mounted display (HMD). With this setup we studied multimodal conflict situations for objects varying along higher-level, parametrically controlled global shape properties. Participants explored these objects in both unimodal and multimodal settings, with the latter including congruent and incongruent conditions as well as differing instructions for weighting the input modalities. Results demonstrated a surprisingly clear touch dominance throughout all experiments, which, in addition, was only marginally influenced by instructions to bias the modality weighting. We also present an initial analysis of the hand-tracking patterns that illustrates the potential of our setup to investigate exploration behavior in more detail.
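For context on what "which modality wins" means quantitatively, the question is usually framed in terms of cue weights: how strongly a bimodal shape judgment depends on the touch estimate versus the visual estimate. The sketch below is not the paper's analysis; it is a minimal, hypothetical illustration of the standard reliability-weighted cue-combination framework and of how an empirical touch weight might be recovered by regressing bimodal judgments onto the two unimodal judgments (all function names and numbers are made up for illustration).

```python
import numpy as np

# Hypothetical illustration of reliability-weighted cue combination.
# Touch "dominance" corresponds to an empirical touch weight well above
# the weight predicted from the unimodal reliabilities alone.

def cue_weights(sigma_touch, sigma_vision):
    """Reliability-based (inverse-variance) weights for touch and vision; they sum to 1."""
    r_t, r_v = 1.0 / sigma_touch**2, 1.0 / sigma_vision**2
    w_t = r_t / (r_t + r_v)
    return w_t, 1.0 - w_t

def empirical_touch_weight(bimodal, touch_only, vision_only):
    """Regress bimodal judgments onto the unimodal judgments and return the
    normalized weight actually given to touch (illustrative analysis only)."""
    X = np.column_stack([touch_only, vision_only])
    coef, *_ = np.linalg.lstsq(X, bimodal, rcond=None)
    return coef[0] / coef.sum()

# Made-up example: equally reliable cues predict a touch weight of 0.5,
# whereas an empirical weight near 1.0 would indicate touch dominance.
print(cue_weights(sigma_touch=1.0, sigma_vision=1.0))   # (0.5, 0.5)
rng = np.random.default_rng(0)
touch = rng.normal(0.0, 1.0, 50)
vision = rng.normal(0.0, 1.0, 50)
bimodal = 0.9 * touch + 0.1 * vision + rng.normal(0.0, 0.1, 50)
print(round(empirical_touch_weight(bimodal, touch, vision), 2))  # ~0.9
```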
