
MuModaR: Multi-modal Framework for Human-Robot Collaboration in Cyber-physical Systems


Date: March 2024

Team: Cornell University

Author: Jovan Clive Menezes

PDF: MuModaR: Multi-modal Framework for Human-Robot Collaboration in Cyber-physical Systems

Abstract

Real-Time Human Autonomous Systems Collaborations (RealTHASC) is a novel extended reality (XR) testbed that interfaces humans and robots with photorealistic simulated environments. The testbed serves as an innovative facility for conducting experiments on human-robot collaboration (HRC), using high-fidelity virtual environments to bridge the gap between the traditional laboratory settings in which such experiments are typically run and real-world robot deployment scenarios. This paper presents the early-stage development of Multi-Modal RealTHASC (MuModaR), a framework that augments the testbed's original architecture to enable simultaneous multi-modal interactions among Human Multi-robot Autonomy Teams (HMATs). To demonstrate the framework's effectiveness, a preliminary experiment is conducted in a multi-target detection scenario in which HMATs exchange information through visual and auditory modalities of perception. MuModaR combines a vision transformer with a large language model, leveraging the complementary strengths of these large-scale models to provide real-time feedback. This new framework enriches the testbed's potential by enabling more robust assessments of HMATs and easing the transition of HRC from simulation and laboratory testing to real-world scenarios.
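The abstract describes fusing a vision transformer's detections with auditory input through a large language model. As a rough illustration only, the sketch below shows one way such a fusion step could be structured; `detect_objects` and `build_llm_prompt` are hypothetical stand-ins (the paper's actual models and interfaces are not described here), with the detector stubbed out so the example runs without any model weights.

```python
# Hedged sketch of multi-modal fusion: a vision-modality detector's output and an
# auditory-modality transcript are combined into a single prompt for an LLM.
# detect_objects() and build_llm_prompt() are illustrative names, not MuModaR APIs.

def detect_objects(frame):
    """Stand-in for a vision transformer detector; returns (label, confidence) pairs."""
    # A real system would run inference on the camera frame here.
    return [("target_1", 0.92), ("target_2", 0.81)]

def build_llm_prompt(detections, audio_transcript):
    """Fuse the two perception modalities into one query for the language model."""
    seen = ", ".join(f"{label} ({conf:.0%})" for label, conf in detections)
    return (
        f"Robot camera detected: {seen}. "
        f"Teammate said: '{audio_transcript}'. "
        "Report which targets are confirmed and suggest the team's next action."
    )

if __name__ == "__main__":
    prompt = build_llm_prompt(detect_objects(None), "check the second target again")
    print(prompt)  # this prompt would then be sent to the LLM for real-time feedback
```

The design point is simply that each modality is reduced to text before fusion, letting the LLM arbitrate between what the robots see and what the human teammates say.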
