
Guiding Monocular Depth Estimation Using Depth-Attention Volume

Note: We do not have the ability to review papers.

PubDate: Aug 2020

Teams: University of Oulu, Czech Technical University, Tampere University

Writers: Lam Huynh, Phong Nguyen-Ha, Jiri Matas, Esa Rahtu, Janne Heikkila

PDF: Guiding Monocular Depth Estimation Using Depth-Attention Volume

Project: Guiding Monocular Depth Estimation Using Depth-Attention Volume

Abstract

Recovering the scene depth from a single image is an ill-posed problem that requires additional priors, often referred to as monocular depth cues, to disambiguate different 3D interpretations. In recent works, those priors have been learned in an end-to-end manner from large datasets by using deep neural networks. In this paper, we propose guiding depth estimation to favor planar structures that are ubiquitous especially in indoor environments. This is achieved by incorporating a non-local coplanarity constraint to the network with a novel attention mechanism called depth-attention volume (DAV). Experiments on two popular indoor datasets, namely NYU-Depth-v2 and ScanNet, show that our method achieves state-of-the-art depth estimation results while using only a fraction of the number of parameters needed by the competing methods.
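To make the idea of a depth-attention volume more concrete, below is a minimal sketch (not the authors' code) of a DAV-style non-local aggregation: a dense attention volume holds one weight per pixel pair, expressing how useful pixel j's depth is for estimating the depth at pixel i, and a coarse depth map is refined as the attention-weighted average over all pixels. The class name `DepthAttentionAggregation`, the 1x1 query/key convolutions, and the softmax scoring are illustrative assumptions, not the paper's exact architecture or training setup.

```python
# Minimal sketch of a DAV-style non-local depth aggregation (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthAttentionAggregation(nn.Module):
    def __init__(self, in_channels, key_channels=64):
        super().__init__()
        # 1x1 convolutions produce query/key embeddings used to score
        # pairwise affinities (e.g. coplanarity-like relations) between pixels.
        self.query = nn.Conv2d(in_channels, key_channels, kernel_size=1)
        self.key = nn.Conv2d(in_channels, key_channels, kernel_size=1)

    def forward(self, features, coarse_depth):
        # features:     (B, C, H, W) encoder feature map
        # coarse_depth: (B, 1, H, W) initial depth prediction to be refined
        b, _, h, w = features.shape
        q = self.query(features).flatten(2).transpose(1, 2)   # (B, HW, K)
        k = self.key(features).flatten(2)                     # (B, K, HW)

        # Dense attention volume: one weight per pixel pair (i, j).
        attn = torch.bmm(q, k) / (q.shape[-1] ** 0.5)          # (B, HW, HW)
        attn = F.softmax(attn, dim=-1)

        # Refined depth at pixel i = sum_j attn[i, j] * coarse_depth[j].
        d = coarse_depth.flatten(2).transpose(1, 2)            # (B, HW, 1)
        refined = torch.bmm(attn, d).transpose(1, 2).view(b, 1, h, w)
        return refined, attn


if __name__ == "__main__":
    feats = torch.randn(1, 128, 32, 32)       # downsampled feature map
    coarse = torch.rand(1, 1, 32, 32) * 5.0   # coarse depth in metres
    module = DepthAttentionAggregation(in_channels=128)
    refined, attention_volume = module(feats, coarse)
    print(refined.shape, attention_volume.shape)  # (1,1,32,32), (1,1024,1024)
```

Because the attention volume is computed between every pixel pair, such an aggregation is typically run on a downsampled feature map, which is consistent with the paper's emphasis on keeping the parameter count small.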
