Deep Learning and Mixed Reality to Autocomplete Teleoperation

Note: We do not have the ability to review papers

PubDate: October 2021

Teams: American University of Beirut

Writers: Mohammad Kassem Zein; Majd Al Aawar; Daniel Asmar; Imad H. Elhajj

PDF: Deep Learning and Mixed Reality to Autocomplete Teleoperation

Abstract

Teleoperation of robots can be challenging, especially for novice users with little to no experience at such tasks. The difficulty is largely due to the numerous degrees of freedom users must control and their limited perception bandwidth. To help mitigate these challenges, we propose in this paper a solution which relies on artificial intelligence to understand user intended motion and then on mixed reality to communicate the estimated trajectories to the users in an intuitive manner. User intended motion is estimated using a deep learning network trained on a dataset of motion primitives. During teleoperation, the estimated motions are augmented onto a first-person live video feed from the robot. Finally, if a suggested motion is accepted by the user, the robot is driven along that trajectory in an autonomous manner. We validate our proposed mixed reality teleoperation scheme with simulation experiments on a drone and demonstrate, through subjective and objective evaluation, its advantages over other teleoperation methods.
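The core idea in the abstract — estimate the user's intended motion from their partial input, then offer the remaining trajectory as an autocomplete suggestion — can be sketched in a few lines. The paper uses a deep network trained on motion primitives; the snippet below substitutes a simple prefix-matching stand-in over a handful of hypothetical 2-D primitives (the primitive names, shapes, and matching rule are all illustrative assumptions, not taken from the paper).

```python
import numpy as np

# Hypothetical motion-primitive library (illustrative, not from the paper):
# each primitive is a short sequence of 2-D velocity commands.
PRIMITIVES = {
    "forward":   np.tile([1.0, 0.0], (10, 1)),
    "left_arc":  np.array([[np.cos(t),  np.sin(t)] for t in np.linspace(0, 1, 10)]),
    "right_arc": np.array([[np.cos(t), -np.sin(t)] for t in np.linspace(0, 1, 10)]),
}

def estimate_intent(partial_motion):
    """Stand-in for the paper's deep network: match the operator's partial
    input against each primitive's prefix and return the best match plus
    the remaining motion as the autocomplete suggestion."""
    n = len(partial_motion)
    best_name, best_err = None, float("inf")
    for name, traj in PRIMITIVES.items():
        err = np.linalg.norm(traj[:n] - partial_motion)  # prefix distance
        if err < best_err:
            best_name, best_err = name, err
    # Suggestion = the unexecuted remainder of the matched primitive;
    # in the paper this would be rendered onto the robot's live video feed
    # and executed autonomously if the user accepts it.
    return best_name, PRIMITIVES[best_name][n:]

# A few noisy "forward" commands from the operator:
partial = np.tile([1.0, 0.0], (3, 1)) + 0.01
name, completion = estimate_intent(partial)
```

Here `completion` holds the seven remaining commands of the matched primitive; a real system would replace the prefix-distance matcher with the trained network and close the loop with the mixed-reality overlay and user confirmation described above.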
