Collision-Aware AR Telemanipulation Using Depth Mesh

PubDate: Aug 2022

Team: Kyushu Institute of Technology

Authors: Chanapol Piyavichayanon; Masanobu Koga; Eiji Hayashi; Sakmongkon Chumkamon

Abstract

Remotely operating a robot in Augmented Reality (AR) is a challenging problem due to the limited information about the environment around the robot. Current AR teleoperation interfaces lack collision checking between the virtual robot model and the environment. This work aims to overcome that problem by using depth mesh generation to reconstruct the environment from a single pair of RGB and depth images. By presenting the generated mesh with the virtual manipulator model in AR, we introduce three collision-aware features, i.e., collision checking, AR guidance, and ray-casting distance calculation, to support the operator in the manipulation task. The reconstruction can be done instantly on a smartphone, allowing the system to be used in mobile AR applications. We evaluate our system on a pick-and-place task. The accuracy of the reconstruction is sufficient for the user to complete the operation. In addition, the collision-aware features reduce task completion time, lower workload, and enhance the system's usability.
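The core idea of depth mesh generation, reconstructing a triangle mesh of the scene from a single depth image, can be sketched as below. This is a minimal illustration assuming a standard pinhole camera model with known intrinsics (`fx`, `fy`, `cx`, `cy`); the paper's actual on-device pipeline is not specified in this summary and may differ.

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3D vertex grid and triangulate it.

    depth: (H, W) array of metric depth values.
    fx, fy, cx, cy: pinhole camera intrinsics (assumed known).
    Returns (vertices, faces): (H*W, 3) points and (M, 3) triangle indices.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Two triangles per 2x2 pixel quad, indexing into the flattened grid.
    idx = np.arange(h * w).reshape(h, w)
    tl, tr = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    bl, br = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([
        np.stack([tl, bl, tr], axis=-1),
        np.stack([tr, bl, br], axis=-1),
    ])
    return vertices, faces
```

Once such a mesh is available in the AR scene, collision checking against the virtual manipulator and ray-casting distance queries reduce to standard mesh intersection tests; a production system would typically also discard triangles that span depth discontinuities.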
