
360-Degree Image Super-Resolution Based on Single Image Sample and Progressive Residual Generative Adversarial Network

Note: We don't have the ability to review papers

PubDate: Sep 2022

Teams: China Telecom Research Institute

Writers: Liuyihui Qian; Xiaojun Liu; Juan Wu; Xiaoqing Xu

PDF: 360-Degree Image Super-Resolution Based on Single Image Sample and Progressive Residual Generative Adversarial Network

Abstract

The restriction of network resources has forced cloud Virtual Reality service providers to transmit only low-resolution 360-degree images to Virtual Reality devices, leading to an unpleasant user experience. Deep learning-based single image super-resolution approaches are commonly used for transforming low-resolution images into high-resolution versions, but these approaches cannot handle a dataset with an extremely low number of training image samples. Moreover, current single image training models cannot deal with 360-degree images of very large size. Therefore, we propose a 360-degree image super-resolution method which can train a super-resolution model on a single 360-degree image sample by using image patching techniques and a generative adversarial network. We also propose an improved Generative Adversarial Network (GAN) model structure named Progressive Residual GAN (PRGAN), which learns the image in a rough-to-fine way using progressively growing residual blocks and preserves structural and textural information with multi-level skip connections. Experiments on a street view panorama image dataset show that our image super-resolution method outperforms several baseline methods on multiple image quality evaluation metrics, while keeping the generator model computationally efficient.
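The abstract describes a residual GAN generator that is trained on patches cropped from a single 360-degree image, grows progressively from rough to fine, and uses multi-level skip connections. The paper's exact architecture is not given here, so the following PyTorch sketch is only an assumption-laden illustration of that idea: the block count, channel width, 4x upscaling factor, and patch size are all hypothetical, and the progressive (stage-wise) training schedule is only indicated in comments rather than implemented.

```python
# Minimal sketch of a residual super-resolution generator with multi-level
# skip connections, in the spirit of the PRGAN generator described above.
# All hyperparameters below are illustrative assumptions, not the authors'.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a local skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class Generator(nn.Module):
    """Residual generator with a long-range skip connection and 4x upsampling."""
    def __init__(self, channels: int = 64, n_blocks: int = 8):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.PReLU())
        # In a progressive (rough-to-fine) schedule, blocks would be enabled
        # stage by stage during training; here all blocks are active.
        self.blocks = nn.ModuleList(ResidualBlock(channels) for _ in range(n_blocks))
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
        )
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        out = feat
        for block in self.blocks:
            out = block(out)
        out = self.fuse(out) + feat  # skip connection preserving structural detail
        return self.tail(self.upsample(out))


if __name__ == "__main__":
    # A low-resolution patch cropped from a single 360-degree training image.
    lr_patch = torch.randn(1, 3, 64, 64)
    sr_patch = Generator()(lr_patch)
    print(sr_patch.shape)  # torch.Size([1, 3, 256, 256])
```

In this sketch, patch-based training sidesteps the very large size of full equirectangular panoramas, since the generator only ever sees fixed-size crops; an adversarial discriminator and perceptual losses, which the abstract implies but does not detail, would be added on top.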
