360° Image Reference-Based Super-Resolution Using Latitude-Aware Convolution Learned From Synthetic to Real
PubDate: November 2021
Teams: Ewha Womans University
Writers: Hee-Jae Kim; Je-Won Kang; Byung-Uk Lee
Abstract
High-resolution (HR) 360° images offer great advantages wherever an omnidirectional view is necessary, such as in autonomous robot systems and virtual reality (VR) applications. One or more 360° images from adjacent views can be utilized to significantly improve the resolution of a target 360° image. In this paper, we propose an efficient reference-based 360° image super-resolution (RefSR) technique that exploits the wide field of view (FoV) shared among adjacent 360° cameras. Effective exploitation of spatial correlation is critical to achieving high quality, even though the distortion inherent in the equirectangular projection (ERP) is a nontrivial problem. Accordingly, we develop a long-range 360 disparity estimator (DE360) to handle the large and distorted disparities, particularly near the poles. A latitude-aware convolution (LatConv) is designed to generate features that are more robust to the distortion and preserve image quality. We also develop synthetic 360° image datasets and introduce a synthetic-to-real learning scheme that transfers knowledge learned from synthetic 360° images to a deep neural network performing super-resolution (SR) of camera-captured images. The proposed network learns useful features in the ERP domain from a sufficient number of synthetic samples and is then adapted to camera-captured images through a transfer layer using a limited amount of real-world data.
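The abstract does not detail how LatConv is implemented. The PyTorch snippet below is a minimal sketch of one plausible latitude-aware layer: it modulates the output of a standard convolution with a learned per-row (per-latitude) scale, motivated by the fact that ERP distortion grows toward the poles. The class name `LatConvSketch` and the per-row scaling scheme are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class LatConvSketch(nn.Module):
    """Hypothetical latitude-aware convolution (illustrative only):
    a standard 2D convolution whose output is rescaled per image row,
    since ERP distortion varies with latitude and is strongest near the poles."""

    def __init__(self, in_ch, out_ch, height, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        # One learnable scale per image row (latitude), initialized to 1.
        self.lat_scale = nn.Parameter(torch.ones(1, 1, height, 1))

    def forward(self, x):
        # x: (B, C, H, W) feature map in the ERP domain; H must equal `height`.
        return self.conv(x) * self.lat_scale


if __name__ == "__main__":
    # Toy usage on a small ERP-like tensor (batch=1, 3 channels, 64x128).
    layer = LatConvSketch(in_ch=3, out_ch=16, height=64)
    y = layer(torch.randn(1, 3, 64, 128))
    print(y.shape)  # torch.Size([1, 16, 64, 128])
```

In practice, such a per-latitude modulation is one simple way to make features location dependent in the ERP domain; the published LatConv may instead adapt the kernel shape or sampling pattern with latitude.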