
Multi-Dimension Fusion Network for Light Field Spatial Super-Resolution using Dynamic Filters

Note: We do not have the ability to review papers.

PubDate: Aug 2020

Teams: Beijing Jiaotong University

Writers: Qingyan Sun, Shuo Zhang, Song Chang, Lixi Zhu, Youfang Lin

PDF: Multi-Dimension Fusion Network for Light Field Spatial Super-Resolution using Dynamic Filters

Abstract

Light field cameras have proved to be powerful tools for 3D reconstruction and virtual reality applications. However, the limited resolution of light field images makes further information display and extraction difficult. In this paper, we introduce a novel learning-based framework to improve the spatial resolution of light fields. First, features from different dimensions are extracted in parallel and fused together in our multi-dimension fusion architecture. These features are then used to generate dynamic filters, which extract subpixel information from micro-lens images and implicitly account for disparity information. Finally, high-frequency details learned in the residual branch are added to the upsampled images to obtain the final super-resolved light fields. Experimental results show that the proposed method uses fewer parameters yet achieves better performance than other state-of-the-art methods on various datasets. Our reconstructed images also show sharp details and distinct lines in both sub-aperture images and epipolar plane images.
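The dynamic-filter step described in the abstract can be illustrated with a minimal sketch: a small network predicts a per-pixel k×k kernel for each of the r×r sub-pixel positions, the kernels are applied to local patches of the low-resolution input, and the result is rearranged into a higher-resolution image. This is not the authors' code; the module structure, layer sizes, and names below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterUpsample(nn.Module):
    """Hypothetical dynamic-filter upsampling block (illustrative, not the paper's network)."""
    def __init__(self, channels=1, k=5, scale=2, hidden=32):
        super().__init__()
        self.k, self.scale = k, scale
        # Predict k*k filter weights per output sub-pixel position (scale*scale of them).
        self.filter_net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, k * k * scale * scale, 3, padding=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        k, r = self.k, self.scale
        # Per-pixel dynamic filters, normalized so each kernel sums to 1.
        filters = self.filter_net(x)                         # (B, k*k*r*r, H, W)
        filters = filters.view(b, r * r, k * k, h, w)
        filters = F.softmax(filters, dim=2)
        # Extract k x k neighborhoods around every low-resolution pixel.
        patches = F.unfold(x, k, padding=k // 2)             # (B, C*k*k, H*W)
        patches = patches.view(b, c, 1, k * k, h, w)
        # Apply each neighborhood's kernels, once per sub-pixel position.
        out = (patches * filters.unsqueeze(1)).sum(dim=3)    # (B, C, r*r, H, W)
        out = out.view(b, c * r * r, h, w)
        # Rearrange sub-pixel channels into the upsampled spatial grid.
        return F.pixel_shuffle(out, r)                       # (B, C, r*H, r*W)

if __name__ == "__main__":
    lr = torch.rand(1, 1, 32, 32)          # a single low-resolution view
    sr = DynamicFilterUpsample()(lr)
    print(sr.shape)                        # torch.Size([1, 1, 64, 64])
```

In the paper's pipeline, the filter-generating features come from the multi-dimension fusion architecture and a residual branch adds further high-frequency detail on top of this dynamically filtered upsampling.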
