Consistency Guided Scene Flow Estimation

Note: We do not have the ability to review this paper.

PubDate: Aug 2020

Teams: Google Research;ETH Zurich

Writers: Yuhua Chen, Luc Van Gool, Cordelia Schmid, Cristian Sminchisescu

PDF: Consistency Guided Scene Flow Estimation

Abstract

Consistency Guided Scene Flow Estimation (CGSF) is a self-supervised framework for the joint reconstruction of 3D scene structure and motion from stereo video. The model takes two temporal stereo pairs as input and predicts disparity and scene flow. The model self-adapts at test time by iteratively refining its predictions. The refinement process is guided by a consistency loss, which combines stereo and temporal photo-consistency with a geometric term that couples disparity and 3D motion. To handle inherent modeling error in the consistency loss (e.g., Lambertian assumptions) and to improve generalization, we further introduce a learned output refinement network, which takes the initial predictions, the loss, and the gradient as input, and efficiently predicts a correlated output update. In multiple experiments, including ablation studies, we show that the proposed model can reliably predict disparity and scene flow in challenging imagery, achieves better generalization than the state of the art, and adapts quickly and robustly to unseen domains.
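The test-time adaptation the abstract describes, iteratively updating predictions to reduce a consistency loss, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the quadratic loss stands in for the stereo/temporal photo-consistency terms, and plain gradient descent stands in for the learned refinement network that maps (prediction, loss, gradient) to a correlated update. All names here (`consistency_loss`, `refine`, etc.) are assumptions for the sketch.

```python
import numpy as np

def consistency_loss(pred, target):
    """Toy stand-in for CGSF's consistency loss.

    In the paper this combines stereo and temporal photo-consistency
    with a geometric term coupling disparity and 3D motion; a simple
    quadratic keeps the refinement loop runnable here.
    """
    return float(np.sum((pred - target) ** 2))

def loss_gradient(pred, target):
    # Gradient of the toy quadratic loss with respect to the prediction.
    return 2.0 * (pred - target)

def refine(pred, target, steps=50, lr=0.1):
    """Iterative test-time refinement guided by the consistency loss.

    The paper learns a network that takes the prediction, the loss, and
    its gradient and outputs a correlated update; gradient descent is
    used here as a simple stand-in for that learned update.
    """
    for _ in range(steps):
        grad = loss_gradient(pred, target)
        pred = pred - lr * grad  # learned update network in the paper
    return pred

# Hypothetical initial network outputs (e.g. a disparity estimate) and
# the configuration that minimizes the consistency loss.
initial = np.array([3.0, -2.0, 5.0])
target = np.array([1.0, 0.0, 2.0])
refined = refine(initial, target)
```

After 50 steps the toy loss shrinks by roughly `0.64` per step, so `refined` ends up very close to `target`; the point is only the loop structure, not the numbers.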
