Dex-Net AR: Distributed Deep Grasp Planning Using an Augmented Reality Application and a Smartphone Camera

Note: We do not have the ability to review this paper.

PubDate: June 2020

Teams: UC Berkeley

Writers: Harry Zhang, Jeffrey Ichnowski, Yahav Avigal, Joseph Gonzalez, Ion Stoica, and Ken Goldberg

PDF: Dex-Net AR: Distributed Deep Grasp Planning Using an Augmented Reality Application and a Smartphone Camera

Project: Dex-Net

Abstract

Recent consumer demand for augmented reality in mobile phone applications has accelerated the performance of structure-from-motion methods that build a point cloud from a sequence of RGB images taken by the camera on a mobile phone as it is moved around an object. Smartphone apps such as Apple's ARKit have the potential to expand access to deep grasp planning systems such as Dex-Net. However, the resulting point clouds are often noisy due to estimation errors. We present a distributed pipeline, Dex-Net AR, that allows point clouds to be sent to our lab, cleaned, and evaluated by the Dex-Net grasp planner to generate a grasp axis that is returned and displayed as an overlay on the original object. We implement Dex-Net AR using an iPhone and ARKit to generate point clouds and compare the results with those generated by high-performance depth sensors.
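
The abstract describes a distributed capture-clean-plan-overlay pipeline. Below is a minimal, hypothetical sketch of the server-side steps (cleaning a noisy uploaded point cloud and returning a grasp axis). The function names, the statistical outlier filter, and the toy PCA-based "planner" are illustrative assumptions standing in for the actual Dex-Net AR code, which is not shown in the post.

```python
# Hypothetical server-side sketch: clean an uploaded point cloud, then return
# a grasp axis for the phone app to overlay on the object.
# `plan_grasp` is a placeholder, not the real Dex-Net grasp planner.
import numpy as np
from scipy.spatial import cKDTree


def clean_point_cloud(points: np.ndarray, k: int = 20, std_ratio: float = 2.0) -> np.ndarray:
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbors exceeds (mean + std_ratio * std) over the whole cloud."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)          # k+1 because the first hit is the point itself
    mean_dists = dists[:, 1:].mean(axis=1)
    threshold = mean_dists.mean() + std_ratio * mean_dists.std()
    return points[mean_dists <= threshold]


def plan_grasp(points: np.ndarray) -> dict:
    """Placeholder grasp planner: returns an axis through the cloud centroid
    along the direction of least variance, purely to illustrate the kind of
    result (a grasp axis) that would be sent back to the phone."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return {"center": centroid.tolist(), "axis": vt[-1].tolist()}


def handle_upload(raw_points: np.ndarray) -> dict:
    """Server handler: clean the uploaded cloud, plan a grasp, return the axis."""
    cleaned = clean_point_cloud(raw_points)
    return plan_grasp(cleaned)


if __name__ == "__main__":
    # Simulated noisy capture: points inside a flat box plus scattered outliers.
    rng = np.random.default_rng(0)
    box = rng.uniform([-0.05, -0.05, -0.02], [0.05, 0.05, 0.02], size=(2000, 3))
    outliers = rng.uniform(-0.5, 0.5, size=(50, 3))
    print(handle_upload(np.vstack([box, outliers])))
```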
