Ditto: Building Digital Twins of Articulated Objects from Interaction
PubDate: Feb 2022
Team: The University of Texas at Austin
Authors: Zhenyu Jiang, Cheng-Chun Hsu, Yuke Zhu
PDF: Ditto: Building Digital Twins of Articulated Objects from Interaction
Abstract
Digitizing physical objects into the virtual world has the potential to unlock new research and applications in embodied AI and mixed reality. This work focuses on recreating interactive digital twins of real-world articulated objects, which can be directly imported into virtual environments. We introduce Ditto, which learns to estimate the articulation model and reconstruct the 3D geometry of an articulated object through interactive perception. Given a pair of visual observations of an articulated object before and after interaction, Ditto reconstructs part-level geometry and estimates the articulation model of the object. We employ implicit neural representations for joint geometry and articulation modeling. Our experiments show that Ditto effectively builds digital twins of articulated objects in a category-agnostic way. We also apply Ditto to real-world objects and deploy the recreated digital twins in physical simulation. Code and additional results are available at this https URL.
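The articulation models estimated here are the standard revolute and prismatic joint parameterizations (a joint axis, a pivot point, and a joint-state change between the two observations). As a rough illustration of what such an estimate lets you do with the reconstructed part geometry, the sketch below applies a revolute joint's motion to a movable part's point cloud via Rodrigues' rotation formula. This is a generic sketch, not code from the Ditto repository; the function name and interface are illustrative assumptions.

```python
import numpy as np

def apply_revolute_joint(points, axis, pivot, angle):
    """Rotate a movable part's points by `angle` radians about a joint
    axis `axis` passing through `pivot` (Rodrigues' rotation formula).

    Illustrative helper, not part of the Ditto codebase.
    points: (N, 3) array; axis: (3,) direction; pivot: (3,) point.
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)           # unit joint axis
    p = np.asarray(points, dtype=float) - pivot  # express points relative to pivot
    cos, sin = np.cos(angle), np.sin(angle)
    rotated = (p * cos
               + np.cross(axis, p) * sin
               + axis * ((p @ axis)[:, None]) * (1 - cos))
    return rotated + pivot                       # back to world frame

def apply_prismatic_joint(points, axis, displacement):
    """Translate a movable part's points by `displacement` along the
    joint axis (e.g. a drawer sliding out). Also illustrative."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.asarray(points, dtype=float) + displacement * axis
```

For example, swinging a point at (1, 0, 0) by 90 degrees about the z-axis through the origin moves it to (0, 1, 0), which is how an estimated hinge parameterization would animate a reconstructed cabinet door inside a simulator.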