Learning Depth Vision-Based Personalized Robot Navigation From Dynamic Demonstrations in Virtual Reality

PubDate: Oct 2022

Teams: University of Bonn

Writers: Jorge de Heuvel, Nathan Corral, Benedikt Kreis, Maren Bennewitz

PDF: Learning Depth Vision-Based Personalized Robot Navigation From Dynamic Demonstrations in Virtual Reality

Abstract

For the best human-robot interaction experience, the robot's navigation policy should take into account the personal preferences of the user. In this paper, we present a learning framework complemented by a perception pipeline to train a depth vision-based, personalized navigation controller from user demonstrations. Our refined virtual reality interface enables the demonstration of robot navigation trajectories under motion of the user for dynamic interaction scenarios. In a detailed analysis, we evaluate different configurations of the perception pipeline. As the experiments demonstrate, our new pipeline compresses the perceived depth images to a latent state representation and thus enables efficient reasoning about the robot's dynamic environment for the learning agent. Employing a variational autoencoder in combination with a motion predictor, we discuss the robot's navigation performance in various virtual scenes and demonstrate the first personalized robot navigation controller that relies solely on depth images.
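The abstract describes a perception pipeline in which a variational autoencoder compresses depth images into a latent state and a motion predictor anticipates the user's movement. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation: the network sizes, the 64x64 depth resolution, the 5-step user-position history, and the class names `DepthVAEEncoder` and `MotionPredictor` are all assumptions made for the example.

```python
# Illustrative sketch (assumed architecture, not the paper's code): a depth-image
# VAE encoder whose latent vector, concatenated with a predicted user motion,
# forms the state fed to a navigation policy.
import torch
import torch.nn as nn

class DepthVAEEncoder(nn.Module):
    """Compresses a single-channel depth image into a latent vector."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)

    def forward(self, depth: torch.Tensor):
        h = self.conv(depth)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

class MotionPredictor(nn.Module):
    """Predicts the user's next relative position from a short position history."""
    def __init__(self, history_len: int = 5, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history_len * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # predicted (x, y) offset of the user
        )

    def forward(self, user_history: torch.Tensor):
        return self.net(user_history.flatten(start_dim=1))

# Forming the policy input: latent depth state + predicted user motion.
encoder = DepthVAEEncoder(latent_dim=32)
predictor = MotionPredictor(history_len=5)

depth = torch.rand(1, 1, 64, 64)   # one 64x64 depth image (assumed resolution)
history = torch.rand(1, 5, 2)      # last 5 user positions in the robot frame (assumed)

z, _, _ = encoder(depth)
motion = predictor(history)
policy_state = torch.cat([z, motion], dim=-1)  # input to the navigation controller
print(policy_state.shape)  # torch.Size([1, 34])
```

In this reading, the controller never sees raw depth frames; it reasons over the compact latent vector plus the predicted user motion, which is what makes learning in a dynamic environment tractable.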
