Data-Centric Video for Mixed Reality

PubDate: September 2019


Authors: Peter Gusev; Jeff Thompson; Jeff Burke

PDF: Data-Centric Video for Mixed Reality


Network video streaming abstractions tend to replicate the paradigms of hardwired video dating back to analog broadcast. With IP video distribution becoming increasingly realistic for a variety of low-latency applications, this paper looks ahead to a data-centric architecture for video that can provide a superset of features from existing abstractions, supporting how video is increasingly being used: for non-linear retrieval, variable-speed and spatially selective playback, machine analysis, and other new approaches. As a case study, the paper describes the use of the Named Data Networking (NDN) network architecture within an experimental theatrical work being developed at UCLA. The work, a new play, Entropy Bound, uses NDN to enable a hybrid design paradigm for real-time video that combines properties of streams, buses, and stores. This approach unifies real-time live and historical playback, and is used to support edge-assisted machine learning. The paper introduces the play and its requirements (as well as the NDN components applied and developed), discusses key design patterns enabled and explored and their influence on the application architecture, and describes what was learned through practical implementation in a real-world production setting. The paper intends to inform future experimentation with real-time media over information-centric networking and to elaborate on the benefits and challenges of using NDN in practice for mixed reality applications today.
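The abstract's hybrid of streams, buses, and stores rests on a data-centric idea: when each video frame is published under a hierarchical name, the same fetch-by-name interface serves both live and historical playback. The sketch below illustrates that idea only; the name components, class, and methods are illustrative assumptions, not the paper's or NDN's actual API.

```python
# Minimal sketch of data-centric frame naming, loosely modeled on
# NDN-style hierarchical names. All names and methods are hypothetical.

class FrameStore:
    """In-memory store keyed by hierarchical names. The same
    fetch-by-name path serves historical (exact sequence) and
    live (latest sequence) retrieval."""

    def __init__(self, prefix: str):
        self.prefix = prefix      # e.g. "/entropy-bound/camera0/video"
        self.frames = {}          # sequence number -> frame payload
        self.latest = -1          # highest sequence published so far

    def publish(self, seq: int, payload: str) -> None:
        # A producer names each frame by sequence number as it is captured.
        self.frames[seq] = payload
        self.latest = max(self.latest, seq)

    def name(self, seq: int) -> str:
        # Hierarchical name for one frame.
        return f"{self.prefix}/frames/{seq}"

    def fetch(self, name: str):
        # Historical playback: an exact name retrieves an exact frame,
        # enabling non-linear and variable-speed access.
        seq = int(name.rsplit("/", 1)[1])
        return self.frames.get(seq)

    def fetch_latest(self):
        # Live playback: discover the newest name, then fetch it
        # through the same interface as historical access.
        return self.fetch(self.name(self.latest))
```

In this model, "rewinding" is just requesting older names, and a machine-learning consumer at the edge can fetch the same named frames a human viewer does, without a separate distribution path.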