
Real-Time Lens Distortion Algorithm on an Edge Device With GPU

Note: We are unable to review this paper.

PubDate: April 2022

Teams: Korea University of Technology and Education

Writers: Young-Woo Kim; Hyeon-Seok Yang; Duksu Kim

PDF: Real-Time Lens Distortion Algorithm on an Edge Device With GPU

Abstract

The lens distortion process is essential for displaying VR contents on a head-mounted display (HMD) with a distorted display surface. This paper proposes a novel lens distortion algorithm to achieve real-time performance on edge devices with an embedded GPU. We employ unified memory space to reduce the data transfer overhead based on an architectural characteristic: an integrated CPU and GPU memory system. The lens distortion kernel is based on the lookup table-based mapping algorithm whose performance is bounded by memory operations rather than computations. To improve the kernel’s performance, we propose a compressed lookup table approach that reduces the memory transactions on the kernel while slightly increasing computation. We tested our method on three different edge devices and a desktop system while varying the image resolution from 720p (1,280×720) to 8K (7,680×4,320). Compared with prior GPU-based lookup table algorithms, our method achieved up to 1.72-times higher performance while consuming up to 28.93% less power. Also, our method demonstrates real-time performance for up to a 4K image with a low-end edge device (e.g., 56 FPS on Jetson Nano) and up to an 8K image with a mid-range device (e.g., 94 FPS on Jetson NX). These results demonstrate the benefits of our approach from the perspectives of both performance and energy.
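To illustrate the lookup table-based mapping the abstract refers to, here is a minimal Python sketch. The radial distortion model (`k1` term) and the `build_lut`/`remap` names are placeholders for illustration; the paper's actual HMD distortion profile and its compressed-LUT encoding are not reproduced here.

```python
# Hypothetical sketch of lookup-table-based lens distortion remapping.
# The barrel-distortion model below (k1 * r^2) is an assumed placeholder,
# not the paper's distortion profile or compression scheme.

def build_lut(width, height, k1=0.25):
    """Precompute, for each output pixel, the flat index of the source
    pixel to sample. Done once; reused for every frame."""
    cx, cy = width / 2.0, height / 2.0
    lut = []
    for y in range(height):
        for x in range(width):
            # Normalized coordinates relative to the image center.
            nx, ny = (x - cx) / cx, (y - cy) / cy
            r2 = nx * nx + ny * ny
            scale = 1.0 + k1 * r2  # simple radial (barrel) model
            sx = int(cx + nx * scale * cx)
            sy = int(cy + ny * scale * cy)
            # Clamp out-of-range samples to the image border.
            sx = min(max(sx, 0), width - 1)
            sy = min(max(sy, 0), height - 1)
            lut.append(sy * width + sx)
    return lut

def remap(image, lut):
    """Apply the precomputed mapping: one memory gather per output pixel."""
    return [image[src] for src in lut]
```

Because the per-frame work in `remap` is almost entirely memory gathers, the kernel is memory-bound, which is why shrinking the table (as the paper's compressed-LUT approach does) can pay off even at the cost of a little extra arithmetic.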
