
Long-Term Visual Localization with Semantic Enhanced Global Retrieval

Note: We are unable to review this paper.

PubDate: April 2022

Teams: Beihang University

Writers: Hongrui Chen; Yuan Xiong; Jingru Wang; Zhong Zhou

PDF: Long-Term Visual Localization with Semantic Enhanced Global Retrieval

Abstract

Visual localization under varying conditions such as changes in illumination, season and weather is a fundamental task for applications such as autonomous navigation. In this paper, we present a novel method of using semantic information for global image retrieval. By exploiting the distribution of different classes in a semantic scene, the discriminative features of the scene’s structural layout are embedded into a normalized vector that can be used for retrieval, i.e. semantic retrieval. Color image retrieval is based on low-level visual features extracted by algorithms or Convolutional Neural Networks (CNNs), while semantic retrieval is based on high-level semantic features that are robust to variations in scene appearance. By combining semantic retrieval with color image retrieval in the global retrieval step, we show that the two methods complement each other and significantly improve localization performance. Experiments on the challenging CMU Seasons dataset show that our method is robust across large variations in appearance and achieves state-of-the-art localization performance.
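The abstract's core idea, embedding the distribution of semantic classes into a normalized retrieval vector and fusing it with a visual similarity score, can be illustrated with a minimal sketch. This is not the authors' implementation: the histogram descriptor, the cosine similarity, and the fusion weight `alpha` are all simplifying assumptions for illustration.

```python
import numpy as np

def semantic_descriptor(label_map, num_classes):
    # Normalized histogram of semantic class frequencies: a simple
    # stand-in for the paper's semantic scene-layout embedding.
    hist = np.bincount(label_map.ravel(), minlength=num_classes).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def combined_score(query_sem, db_sem, query_vis, db_vis, alpha=0.5):
    # Fuse semantic and visual (e.g. CNN-based) similarities.
    # alpha is a hypothetical weighting parameter, not from the paper.
    sem_sim = float(query_sem @ db_sem)   # cosine similarity (unit vectors)
    vis_sim = float(query_vis @ db_vis)
    return alpha * sem_sim + (1 - alpha) * vis_sim
```

In a retrieval loop, each database image would be ranked by `combined_score`, letting the appearance-invariant semantic term compensate when low-level visual features fail under seasonal or lighting changes.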
