
BlissCam: Boosting Eye Tracking Efficiency with Learned In-Sensor Sparse Sampling

Note: We do not have the ability to review papers.

PubDate: Apr 2024

Teams: University of Rochester; Washington University in St. Louis; Northeastern University

Writers: Yu Feng, Tianrui Ma, Yuhao Zhu, Xuan Zhang

PDF: BlissCam: Boosting Eye Tracking Efficiency with Learned In-Sensor Sparse Sampling

Abstract

Eye tracking is becoming an increasingly important task domain in emerging computing platforms such as Augmented/Virtual Reality (AR/VR). Today’s eye tracking systems suffer from long end-to-end tracking latency and can easily eat up half of the power budget of a mobile VR device. Most existing optimization efforts focus exclusively on the computation pipeline, optimizing the algorithm and/or designing dedicated accelerators, while largely ignoring the front end of any eye tracking pipeline: the image sensor. This paper makes a case for co-designing the imaging system with the computing system. In particular, we propose the notion of “in-sensor sparse sampling”, whereby the pixels are drastically downsampled (by 20x) within the sensor. Such in-sensor sampling enhances overall tracking efficiency by significantly reducing 1) the power consumption of the sensor readout chain and the sensor-host communication interfaces, two major power contributors, and 2) the work done on the host, which receives and operates on far fewer pixels. With careful reuse of existing pixel circuitry, our proposed BLISSCAM requires little hardware augmentation to support the in-sensor operations. Our synthesis results show up to an 8.2x energy reduction and a 1.4x latency reduction over existing eye tracking pipelines.
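To make the data-volume argument concrete, below is a minimal Python sketch of what ~20x in-sensor sparse sampling amounts to from the readout's perspective. It is not the paper's actual circuitry or learned sampling policy: the frame resolution, the random stand-in "saliency" scores, and all variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical frame resolution; the paper reports roughly 20x in-sensor
# downsampling, so keep about 1/20 of the pixels.
H, W = 480, 640
KEEP_RATIO = 1.0 / 20.0

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(H, W), dtype=np.uint8)  # stand-in eye image

# Stand-in for the learned selection: in BLISSCAM the sampling decision is
# produced by a learned, in-sensor mechanism; here we just use random scores.
scores = rng.random((H, W))
threshold = np.quantile(scores, 1.0 - KEEP_RATIO)
mask = scores >= threshold  # binary sampling mask, ~5% of pixels selected

# "Read out" only the selected pixels: their values plus coordinates.
ys, xs = np.nonzero(mask)
values = frame[ys, xs]

print(f"pixels read out: {values.size} / {frame.size} "
      f"({frame.size / values.size:.1f}x fewer)")
```

Because the readout chain and the sensor-host interface are two of the dominant power contributors, shrinking the number of pixels that ever leave the pixel array by this ratio is where the claimed energy savings come from; the host also does proportionally less work on the smaller stream.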
