HLS-based 360 VR using spatial segmented adaptive streaming

Note: We don't have the ability to review this paper.

PubDate: March 2018

Teams: Kwangwoon University

Writers: Hyeon Su Kim; Sang Bum Nam; Sang Geun Choi; Chang Hyung Kim; Tegg Tae Kyong Sung; Chae-Bong Sohn

PDF: HLS-based 360 VR using spatial segmented adaptive streaming


Recent advances in VR (Virtual Reality) content and HMDs (Head Mounted Displays) have driven active research and development on 360 VR video. Moreover, most recent VR content is delivered at ultra-high definition, at 4K (UHD) and above, up to 8K (SUHD). Even when the most efficient video codec, H.265, is used to compress such 360 VR videos, transmission efficiency suffers because network streaming services transmit fields the user never sees. In this paper, the server and network load problem is addressed by extracting and utilizing information about the user's FOV (Field of View). Based on this concept, we propose the Spatial Segmented Adaptive Streaming (SSAS) method. By transmitting original-quality video for the field currently being viewed while transmitting degraded-quality video for the other fields, network load can be reduced. However, this selective transmission causes a quality-switching delay when the FOV moves. We therefore propose an HLS-based real-time adaptive streaming method that spatially segments the video and pre-encodes each segment at every quality level.
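The FOV-driven quality selection described above can be sketched as follows. This is a minimal illustration only: the tile grid size, FOV width, quality labels, and the function itself are assumptions for demonstration, not details taken from the paper.

```python
# Illustrative sketch of SSAS-style per-tile quality selection.
# Assumptions (not from the paper): a horizontal grid of 8 tiles,
# a 90-degree FOV, and two quality labels, "original" and "degraded".

def select_tile_qualities(view_yaw_deg, fov_deg=90.0, num_tiles=8):
    """Return a quality label for each horizontal tile of a 360 video.

    Tiles whose angular span overlaps the viewer's FOV get the original
    quality; all other tiles get a degraded quality to reduce network load.
    """
    tile_span = 360.0 / num_tiles
    half_fov = fov_deg / 2.0
    qualities = []
    for i in range(num_tiles):
        tile_center = (i + 0.5) * tile_span
        # Shortest angular distance between tile center and view direction.
        diff = abs((tile_center - view_yaw_deg + 180.0) % 360.0 - 180.0)
        # A tile is visible if any part of its span falls inside the FOV.
        in_fov = diff <= half_fov + tile_span / 2.0
        qualities.append("original" if in_fov else "degraded")
    return qualities

# Example: viewer looking at yaw 0 with a 90-degree FOV over 8 tiles.
print(select_tile_qualities(0.0))
```

In a real HLS deployment, each label would map to a different pre-encoded variant playlist for that spatial tile, and the client would re-request variants as the FOV moves.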