
Event Synthesis for Light Field Videos using Recurrent Neural Networks

Note: We do not have the ability to review papers.

PubDate: April 2022

Teams: The University of Sydney; Beijing Technology and Business University; University of Science and Technology of China

Writers: Zhicheng Lu; Xiaoming Chen; Yuk Ying Chung; Sen Liu

PDF: Event Synthesis for Light Field Videos using Recurrent Neural Networks

Abstract

Light field videos (LFVs), consisting of multiple angles of view, introduce higher complexity for computer vision tasks. Emerging event cameras offer a new means for lightweight processing of LFVs, but building an LFV capture device from multiple event cameras is infeasible due to their high cost. In this poster, we propose a novel "event synthesis for light field videos" (ES4LFV) model based on recurrent neural networks and build a preliminary dataset for training. ES4LFV synthesizes events for light field video (E-LFV) from an LFV camera array and a single event camera. The experimental results show that ES4LFV outperforms the traditional method by 3.1 dB in PSNR.
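The abstract reports synthesis quality in PSNR. As a reference, below is a minimal sketch of how PSNR is commonly computed between a ground-truth signal and a synthesized estimate; the event-frame shape, value range, and peak value used here are assumptions for illustration, since the paper's exact evaluation protocol is not given in this post.

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference signal and an estimate."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage: compare a synthesized event frame against ground truth.
rng = np.random.default_rng(0)
gt = rng.random((256, 256))                                   # assumed ground-truth event frame in [0, 1]
synth = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0.0, 1.0)  # assumed synthesized frame
print(f"PSNR: {psnr(gt, synth):.2f} dB")
```

A 3.1 dB gain in this metric corresponds to roughly halving the mean squared error relative to the baseline, since PSNR is logarithmic in MSE.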
