
Federated Multi-View Synthesizing for Metaverse

Note: We are unable to review this paper.

PubDate: Dec 2023

Teams: Tsinghua University; Imperial College London

Writers: Yiyu Guo, Zhijin Qin, Xiaoming Tao, Geoffrey Ye Li

PDF: Federated Multi-View Synthesizing for Metaverse

Abstract

The metaverse is expected to provide immersive entertainment, education, and business applications. However, virtual reality (VR) transmission over wireless networks is data- and computation-intensive, making it critical to introduce novel solutions that meet stringent quality-of-service requirements. With recent advances in edge intelligence and deep learning, we have developed a novel multi-view synthesizing framework that can efficiently provide computation, storage, and communication resources for wireless content delivery in the metaverse. We propose a three-dimensional (3D)-aware generative model that uses collections of single-view images. These single-view images are transmitted to a group of users with overlapping fields of view, which avoids massive content transmission compared to transmitting tiles or whole 3D models. We then present a federated learning approach to guarantee an efficient learning process. The training performance can be improved by characterizing the vertical and horizontal data samples with a large latent feature space, while low-latency communication can be achieved with a reduced number of transmitted parameters during federated learning. We also propose a federated transfer learning framework to enable fast domain adaptation to different target domains. Simulation results have demonstrated the effectiveness of our proposed federated multi-view synthesizing framework for VR content delivery.
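The abstract highlights that communication latency is reduced by transmitting only a subset of model parameters during federated learning. The paper's exact protocol is not described here, so below is a minimal sketch of that general idea: federated averaging applied only to a designated "shared" portion of each client's model, with the remaining layers kept on-device. The `SynthesizerStub` model, the shared/local split, and the helper functions are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: federated averaging over a reduced parameter set.
# SynthesizerStub, the shared/local split, and the helpers below are
# illustrative assumptions; they do not reproduce the paper's method.

import copy
from typing import Dict, List

import torch
import torch.nn as nn


class SynthesizerStub(nn.Module):
    """Stand-in for a 3D-aware generative model; only the 'shared'
    layers are exchanged with the server to reduce uplink traffic."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.shared = nn.Linear(latent_dim, latent_dim)  # transmitted
        self.local = nn.Linear(latent_dim, 3)            # kept on-device

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.local(torch.relu(self.shared(z)))


def shared_state(model: nn.Module) -> Dict[str, torch.Tensor]:
    """Extract only the parameters that would be transmitted."""
    return {k: v.detach().clone()
            for k, v in model.state_dict().items()
            if k.startswith("shared")}


def fed_avg(states: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Server-side averaging of the clients' shared parameters."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg


# One communication round over three clients (dummy local updates).
clients = [SynthesizerStub() for _ in range(3)]
for model in clients:
    with torch.no_grad():  # placeholder for a real local training step
        model.shared.weight.add_(0.01 * torch.randn_like(model.shared.weight))

global_shared = fed_avg([shared_state(m) for m in clients])
for model in clients:
    # strict=False lets the untransmitted local layers stay unchanged
    model.load_state_dict(global_shared, strict=False)
```

Under this kind of split, each round uploads only the shared layers, so the communication cost scales with the size of that subset rather than the full model, which is consistent with the low-latency goal stated in the abstract.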
