
Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction

Note: We do not have the ability to review papers.

PubDate: March 2014

Teams: Bielefeld University

Writers: Patrick Renner; Thies Pfeiffer

PDF: Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction

Abstract

To solve complex tasks cooperatively in close interaction with humans, robots need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work, the focus lies on shared-space interactions in a path-planning task, and thus 3D gaze directions and hand movements are of particular interest.

However, the analysis of gaze and gestures is a time-consuming task: usually, the eye tracker's scene camera video has to be annotated manually, frame by frame. To tackle this issue, an automatic approach for annotating interactions based on the EyeSee3D method is presented: a combination of geometric modeling and 3D marker tracking serves to align real-world stimuli with virtual proxies, using the scene camera images of the mobile eye tracker alone. In addition to the EyeSee3D approach, face detection is used to automatically detect fixations on the interlocutor. For the acquisition of the gestures, an optical marker tracking system is integrated and fused into the multimodal representation of the communicative situation.
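The abstract only sketches the pipeline, but the core idea can be illustrated compactly. The following Python sketch is not the authors' implementation: it assumes OpenCV, an arbitrary marker size, made-up camera intrinsics, and a spherical proxy shape, and merely shows how a scene-camera pose recovered from a fiducial marker, a gaze ray tested against a virtual proxy, and a 2D face detector could be combined to label fixations automatically.

```python
# Illustrative sketch of an EyeSee3D-style annotation step (assumptions, not the paper's code):
# 1) recover the scene-camera pose from a known fiducial marker,
# 2) test the gaze ray against a virtual proxy of a real-world object,
# 3) label a fixation as "on the interlocutor" if it lands inside a detected face box.
import numpy as np
import cv2

MARKER_SIZE = 0.05                                   # marker edge length in metres (assumed)
K = np.array([[800.0, 0.0, 320.0],                   # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)                                   # assume no lens distortion

# 3D corners of the square marker in its own coordinate frame (z = 0 plane)
MARKER_CORNERS_3D = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float32)

def camera_pose_from_marker(corners_2d):
    """Estimate the camera pose relative to a tracked marker from its 2D corners."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_CORNERS_3D, corners_2d, K, DIST)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec                                   # marker-to-camera rotation and translation

def gaze_hits_proxy(gaze_origin, gaze_dir, proxy_center, proxy_radius):
    """Ray/sphere intersection test against a virtual proxy of a real object."""
    oc = gaze_origin - proxy_center
    b = 2.0 * np.dot(gaze_dir, oc)                   # gaze_dir assumed unit length
    c = np.dot(oc, oc) - proxy_radius ** 2
    return b * b - 4.0 * c >= 0.0                    # discriminant >= 0 means the ray hits

def fixation_on_face(frame_gray, gaze_px, cascade):
    """Return True if the 2D gaze point falls inside any detected face box."""
    for (x, y, w, h) in cascade.detectMultiScale(frame_gray, 1.1, 5):
        if x <= gaze_px[0] <= x + w and y <= gaze_px[1] <= y + h:
            return True
    return False
```

Run per frame, such a loop replaces manual frame-by-frame annotation: each fixation is assigned either to a virtual proxy (a modeled object in the shared space) or to the interlocutor, and hand-marker poses from an external tracking system could be fused into the same per-frame record.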
