Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences
PubDate: June 2021
Teams: 1 Technical University of Munich, 2 University College London, 3 Adobe Research
Writers: Norman Müller1, Yu-Shiang Wong2, Niloy J. Mitra2,3, Angela Dai1, Matthias Nießner1
PDF: Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences
Abstract
Multi-object tracking from RGB-D video sequences is a challenging problem due to the combination of changing viewpoints, motion, and occlusions over time. Our key insight is that inferring the complete geometry of objects significantly aids in tracking them, and we therefore propose to jointly complete and track rigidly moving objects over time. By hallucinating unseen regions of objects, we obtain additional correspondences between observations of the same instance, enabling robust tracking even under strong changes in appearance. From a sequence of RGB-D frames, we detect objects in each frame and learn to predict their complete geometry as well as a dense correspondence mapping into a canonical space. This allows us to derive 6DoF poses for the objects in each frame, along with their correspondences across frames, providing robust object tracking throughout the RGB-D sequence. Experiments on both synthetic and real-world RGB-D data demonstrate that we achieve state-of-the-art performance on dynamic object tracking. Furthermore, we show that object completion significantly improves tracking, yielding a gain of 6.5% in mean MOTA.
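The abstract describes recovering a per-frame 6DoF object pose from predicted dense correspondences between the observed (and hallucinated) object geometry and a canonical space. The sketch below illustrates one standard way to recover such a pose from correspondence pairs, the Kabsch/Procrustes alignment; it is a minimal illustration under assumed inputs (NumPy arrays of paired points), not the paper's actual pose solver, and the function and variable names are hypothetical.

```python
import numpy as np

def kabsch_pose(canon_pts, obs_pts):
    """Estimate a rigid 6DoF transform (R, t) such that obs ~= R @ canon + t,
    given (N, 3) arrays of corresponding canonical- and camera-space points."""
    mu_c, mu_o = canon_pts.mean(axis=0), obs_pts.mean(axis=0)
    H = (canon_pts - mu_c).T @ (obs_pts - mu_o)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_c
    return R, t

# Hypothetical usage: `canon` holds predicted canonical coordinates for a
# detected object's (possibly completed) points, `obs` their camera-space
# positions from the depth map; (R, t) is the object's pose in that frame.
canon = np.random.rand(100, 3)
obs = canon @ np.eye(3).T + np.array([0.5, 0.0, 1.0])
R, t = kabsch_pose(canon, obs)
```

In practice the predicted correspondences are noisy, so a robust variant (for example, wrapping this closed-form solve in RANSAC) would typically be preferred; frame-to-frame tracking then follows by comparing the per-frame poses and completed geometries of each instance.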