VS-Net: Voting with Segmentation for Visual Localization

Note: We do not have the ability to review papers.

PubDate: June 2021

Teams: Zhejiang University; The Chinese University of Hong Kong; Xidian University

Writers: Zhaoyang Huang, Xiaowei Zhou, Han Zhou, Hujun Bao, Yijin Li, Guofeng Zhang, Bangbang Yang, Hongsheng Li, Yan Xu

PDF: VS-Net: Voting with Segmentation for Visual Localization

Abstract

Visual localization is of great importance in robotics and computer vision. Recently, scene coordinate regression based methods have shown good performance in visual localization in small static scenes. However, these methods still estimate camera poses from many inferior scene coordinates. To address this problem, we propose a novel visual localization framework that establishes 2D-to-3D correspondences between the query image and the 3D map with a series of learnable scene-specific landmarks. In the landmark generation stage, the 3D surfaces of the target scene are oversegmented into mosaic patches whose centers are regarded as the scene-specific landmarks. To robustly and accurately recover the scene-specific landmarks, we propose the Voting with Segmentation Network (VS-Net) to segment the pixels into different landmark patches with a segmentation branch and estimate the landmark locations within each patch with a landmark location voting branch. Since the number of landmarks in a scene may reach up to 5000, training a segmentation network with such a large number of classes is both computation and memory costly with the commonly used cross-entropy loss. We propose a novel prototype-based triplet loss with hard negative mining, which is able to train semantic segmentation networks with a large number of labels efficiently. Our proposed VS-Net is extensively tested on multiple public benchmarks and outperforms state-of-the-art visual localization methods. Code and models are available at https://github.com/zju3dv/VS-Net.
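To make the key training idea concrete, here is a minimal numpy sketch of a prototype-based triplet loss with hard negative mining, in the spirit the abstract describes: each landmark class keeps one learnable prototype, each pixel embedding is pulled toward its own prototype, and the hardest (closest) wrong prototype serves as the negative. This is an illustrative assumption about the loss shape, not the paper's exact implementation; function names and the margin value are hypothetical.

```python
import numpy as np

def prototype_triplet_loss(embeddings, labels, prototypes, margin=1.0):
    """Sketch of a prototype-based triplet loss with hard negative mining.

    embeddings: (N, D) pixel embeddings
    labels:     (N,)   landmark-patch label per pixel
    prototypes: (C, D) one prototype vector per landmark class

    Instead of comparing each pixel against all other pixels (O(N^2)),
    each pixel is compared only against the C class prototypes, which
    keeps memory manageable even with thousands of landmark classes.
    """
    # Squared distance from every pixel to every prototype: shape (N, C)
    d = ((embeddings[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)

    idx = np.arange(len(labels))
    pos = d[idx, labels]              # distance to the pixel's own prototype

    # Hard negative mining: the closest *wrong-class* prototype per pixel
    d_neg = d.copy()
    d_neg[idx, labels] = np.inf
    neg = d_neg.min(axis=1)

    # Standard triplet hinge, averaged over pixels
    return np.maximum(pos - neg + margin, 0.0).mean()

# Toy usage: two classes, pixels sitting exactly on their prototypes
protos = np.array([[0.0, 0.0], [10.0, 0.0]])
emb = np.array([[0.0, 0.0], [10.0, 0.0]])
lbl = np.array([0, 1])
loss_easy = prototype_triplet_loss(emb, lbl, protos)   # well-separated -> 0

# A pixel drifting toward the wrong prototype incurs a positive loss
emb_hard = np.array([[9.0, 0.0], [10.0, 0.0]])
loss_hard = prototype_triplet_loss(emb_hard, lbl, protos)
```

The prototype trick is what makes the large label count tractable: memory scales with the number of classes C rather than with pairwise pixel comparisons, and mining only the hardest negative prototype keeps the gradient focused on confusable landmark patches.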
