Ultra-low-latency Video Coding Method for Autonomous Vehicles and Virtual Reality Devices
PubDate: January 2019
Teams: Nihon University; Kanazawa University
Writers: Seiji Mochizuki; Kousuke Imamura; Kaito Mori; Yoshio Matsuda; Tetsuya Matsumura
PDF: Ultra-low-latency Video Coding Method for Autonomous Vehicles and Virtual Reality Devices
Abstract
Applications such as autonomous driving and virtual reality (VR) require low-latency transfer of high-definition (HD) video. The proposed ultra-low-latency video coding method, which adopts line-based processing, achieves a minimum latency of 0.44 μs for Full-HD video. With multiple line-based image-prediction methods, image-adaptive quantization, and optimized entropy coding, the proposed method compresses the data to 39.0% of its original size at an image quality of 45.4 dB. The basic algorithm and the optional 1D-DCT mode achieve compression to 33% and 20% of the original size, respectively, without significant visual degradation. These results are comparable to those of H.264 Intra coding, even though the latency of the proposed method is roughly one thousandth as long. With the proposed video coding, autonomous vehicles and VR devices can transfer HD video using 20% of the source video's bandwidth without significant latency or visual degradation.
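To make the line-based idea concrete, the sketch below shows one hypothetical way a single image line could be coded with prediction followed by quantization. It assumes a left-neighbor predictor and a fixed uniform quantization step; the paper's actual prediction modes, image-adaptive quantization, entropy coder, and 1D-DCT mode are not reproduced here.

```python
# A minimal illustrative sketch (not the authors' algorithm): line-based
# predictive coding of one image line. Each pixel is predicted from its
# left neighbor, the residual is quantized with a uniform step, and the
# resulting symbols would then be entropy-coded (omitted here).

def encode_line(line, qstep=4):
    """Encode one line of 8-bit pixels into quantized prediction residuals."""
    symbols = []
    prev = 128                       # assumed initial predictor value
    for pixel in line:
        residual = pixel - prev      # left-neighbor (line-based) prediction
        q = round(residual / qstep)  # adaptive coding would vary qstep per region
        symbols.append(q)
        # Track the decoder-side reconstruction so predictions stay in sync.
        prev = min(255, max(0, prev + q * qstep))
    return symbols

def decode_line(symbols, qstep=4):
    """Reconstruct one line of pixels from the quantized residuals."""
    line = []
    prev = 128
    for q in symbols:
        prev = min(255, max(0, prev + q * qstep))
        line.append(prev)
    return line

if __name__ == "__main__":
    original = [120, 122, 125, 200, 201, 199, 60, 61]
    coded = encode_line(original)
    print(coded)                     # small residuals compress well under entropy coding
    print(decode_line(coded))        # reconstruction stays close to the original
```

Because each line is coded as soon as it arrives, latency is bounded by the time to process a single line rather than a whole frame, which is the property that allows sub-microsecond figures like the 0.44 μs reported for Full-HD video.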