Multi-level Feature Maps Attention for Monocular Depth Estimation
PubDate: December 2021
Teams: Yonsei University
Writers: Seunghoon Lee; Minhyeok Lee; Sangyoon Lee
PDF: Multi-level Feature Maps Attention for Monocular Depth Estimation
Abstract
Monocular depth estimation is a fundamental task in autonomous driving, robotics, and virtual reality. It is attracting research interest because it can efficiently predict a depth map from a single RGB image. However, monocular depth estimation is an ill-posed problem and is sensitive to image conditions such as lighting, occlusion, and noise. We propose an encoder-decoder based network that uses multi-level attention and aggregates densely weighted feature maps. Our model is evaluated on NYU Depth v2. Experimental results demonstrate that our model achieves promising performance.
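The abstract does not spell out the attention module, but the core idea of aggregating multi-level feature maps with attention weights can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the function name, the global-average-pooling score, and the level-wise softmax are stand-ins, not the paper's actual design.

```python
import numpy as np

def multi_level_attention(features):
    """Hypothetical sketch: fuse feature maps from several encoder levels
    into one map via attention-style weighting. `features` is a list of
    equally shaped (C, H, W) arrays, one per level."""
    stacked = np.stack(features)                      # (L, C, H, W)
    # Per-level score via global average pooling (stand-in for a learned gate).
    scores = stacked.mean(axis=(1, 2, 3))             # (L,)
    # Softmax over levels yields the attention weights.
    weights = np.exp(scores) / np.exp(scores).sum()   # (L,), sums to 1
    # Weighted aggregation of the level feature maps.
    fused = (weights[:, None, None, None] * stacked).sum(axis=0)  # (C, H, W)
    return fused, weights

# Usage: three 8-channel 4x4 feature maps from hypothetical encoder levels.
feats = [np.random.rand(8, 4, 4) for _ in range(3)]
fused, w = multi_level_attention(feats)
```

In the actual network the scores would come from learned layers and the encoder levels would first be resized to a common resolution; this sketch only shows the weighted-aggregation step.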