
Predicting Hand-Object Interaction for Improved Haptic Feedback in Mixed Reality

Note: We do not have the ability to review this paper.

PubDate: February 2022

Teams: Stanford University

Writers: M. Salvato; Negin Heravi; Allison M. Okamura; Jeannette Bohg

PDF: Predicting Hand-Object Interaction for Improved Haptic Feedback in Mixed Reality

Abstract

Accurately detecting when a user begins interaction with virtual objects is necessary for compelling multi-sensory experiences in mixed reality. To address inherent sensing, computation, display, and actuation latency, we propose to predict when a user will begin touch interaction with a virtual object before it occurs. We hypothesize that the sequence of hand poses when approaching an object, combined with object pose, contains sufficient information to predict when the user will begin contact. By leveraging this information, we could reduce or eliminate latency in providing haptic feedback during virtual object interaction. We focus on small time horizons, on the order of 100 ms, to overcome sense-to-actuation latency for haptic feedback in mixed reality systems. We use a time series of tracked hand poses, along with virtual object geometry, to perform our prediction. By calculating minimum hand-object distance and feeding these distances, along with hand poses, to a self-attention-based network, we achieve approximately 52.8 ms of timing error for a 100 ms prediction horizon. Additionally, we test our system against different levels of tracking and hand-object alignment noise, finding minimal change in timing error. By contrast, when only extrapolating joint and hand velocities, we find that timing error consistently exceeds the prediction horizon.
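The abstract describes computing the minimum hand-object distance as an input feature for the prediction network. A minimal sketch of that feature, assuming the hand is represented as tracked joint positions and the object as points sampled from its geometry (the function name and array shapes are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def min_hand_object_distance(joints, object_points):
    """Minimum Euclidean distance between any tracked hand joint
    and any point sampled on the virtual object's surface.

    joints        : (J, 3) array of hand joint positions
    object_points : (P, 3) array of points on the object geometry
    """
    # Pairwise differences between every joint and every object point: (J, P, 3)
    diffs = joints[:, None, :] - object_points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)  # (J, P) pairwise distances
    return dists.min()

# Toy example: a two-joint "hand" 10 cm from a single object point.
hand = np.array([[0.0, 0.0, 0.10],
                 [0.0, 0.0, 0.15]])
obj = np.array([[0.0, 0.0, 0.0]])
print(round(float(min_hand_object_distance(hand, obj)), 3))  # 0.1
```

In a full pipeline, this scalar would be computed per frame over the tracked pose history and concatenated with the hand pose sequence before being fed to the self-attention network.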
