Back to the Feature: Learning Robust Camera Localization from Pixels to Pose

Note: We do not have the ability to review papers.

PubDate: Apr 2021

Teams: ETH Zurich; Chalmers University of Technology; Eigenvision; École des Ponts; Microsoft; Czech Technical University in Prague

Writers: Paul-Edouard Sarlin, Ajaykumar Unagar, Måns Larsson, Hugo Germain, Carl Toft, Viktor Larsson, Marc Pollefeys, Vincent Lepetit, Lars Hammarstrand, Fredrik Kahl, Torsten Sattler

PDF: Back to the Feature: Learning Robust Camera Localization from Pixels to Pose

Abstract

Camera pose estimation in known scenes is a 3D geometry task recently tackled by multiple learning algorithms. Many regress precise geometric quantities, like poses or 3D points, from an input image. This either fails to generalize to new viewpoints or ties the model parameters to a specific scene. In this paper, we go Back to the Feature: we argue that deep networks should focus on learning robust and invariant visual features, while the geometric estimation should be left to principled algorithms. We introduce PixLoc, a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model. Our approach is based on the direct alignment of multiscale deep features, casting camera localization as metric learning. PixLoc learns strong data priors by end-to-end training from pixels to pose and exhibits exceptional generalization to new scenes by separating model parameters and scene geometry. The system can localize in large environments given coarse pose priors but also improve the accuracy of sparse feature matching by jointly refining keypoints and poses with little overhead. The code will be publicly available at this https URL.
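To make the core idea of direct feature alignment more concrete, below is a minimal illustrative sketch, not the authors' PixLoc implementation. It shows a single-scale, feature-metric Gauss-Newton refinement: 3D points are projected into the query image under a candidate pose, deep features are sampled bilinearly, and the 6-DoF pose is updated to minimize the residual against reference features. All names (project, sample_bilinear, fmap_query, etc.) are assumptions for this example; PixLoc itself uses multiscale CNN features and a learned damped (Levenberg-Marquardt-style) optimizer trained end-to-end, whereas this sketch uses finite-difference Jacobians for brevity.

```python
# Minimal sketch of feature-metric direct alignment (illustrative, not PixLoc's code).
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points_w, R, t, K):
    """Project world points into the image with pose (R, t) and intrinsics K."""
    p_cam = points_w @ R.T + t
    uv = p_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def sample_bilinear(fmap, uv):
    """Bilinearly sample a C x H x W feature map at pixel coordinates uv (N, 2)."""
    C, H, W = fmap.shape
    u = np.clip(uv[:, 0], 0, W - 1.001)
    v = np.clip(uv[:, 1], 0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    f = (fmap[:, v0, u0] * (1 - du) * (1 - dv)
         + fmap[:, v0, u0 + 1] * du * (1 - dv)
         + fmap[:, v0 + 1, u0] * (1 - du) * dv
         + fmap[:, v0 + 1, u0 + 1] * du * dv)
    return f.T  # (N, C)

def refine_pose(points_w, f_ref, fmap_query, K, R, t, iters=10, eps=1e-4):
    """Gauss-Newton on the 6-DoF pose; Jacobians by finite differences for brevity."""
    for _ in range(iters):
        def residual(delta):
            R_new = so3_exp(delta[:3]) @ R
            t_new = t + delta[3:]
            uv = project(points_w, R_new, t_new, K)
            return (sample_bilinear(fmap_query, uv) - f_ref).ravel()

        r0 = residual(np.zeros(6))
        J = np.stack([(residual(eps * np.eye(6)[i]) - r0) / eps for i in range(6)], axis=1)
        delta = np.linalg.lstsq(J, -r0, rcond=None)[0]
        R, t = so3_exp(delta[:3]) @ R, t + delta[3:]
    return R, t
```

In the paper's formulation the features themselves are what the network learns, so that this alignment stays well behaved under viewpoint and illumination changes; the geometric optimization remains a principled, scene-agnostic solver, which is why the same weights generalize to new scenes given only a 3D model and a coarse pose prior.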
