
Binocular Feature Fusion and Spatial Attention Mechanism Based Gaze Tracking

Note: We don't have the ability to review papers

PubDate: February 2022

Teams: Shenyang Institute of Automation

Writers: Lihong Dai; Jinguo Liu; Zhaojie Ju

PDF: Binocular Feature Fusion and Spatial Attention Mechanism Based Gaze Tracking

Abstract

Gaze tracking is widely used in driver safety monitoring, visual impairment detection, virtual reality, human-robot interaction, and reading-process tracking. However, varying illumination, various head poses, different distances between the user and the camera, occlusion by hair or glasses, and low-quality images pose huge challenges to accurate gaze tracking. In this article, a novel gaze-tracking method based on binocular feature fusion and a convolutional neural network is proposed, in which a local binocular spatial attention mechanism (LBSAM) and a global binocular spatial attention mechanism (GBSAM) are integrated into the network model to improve accuracy. Furthermore, the proposed method is validated on the GazeCapture database. In addition, four groups of comparative experiments have been conducted: between the binocular feature fusion model and the binocular data fusion model; among the local binocular spatial attention model, the local binocular channel attention model, and the model without a local binocular attention mechanism; between the model with GBSAM and that without GBSAM; and between the proposed method and other state-of-the-art approaches. The experimental results verify the advantages of binocular feature fusion, LBSAM, and GBSAM, and the effectiveness of the proposed method.
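The abstract names the attention modules only at a high level. For readers who want a concrete picture of what "per-eye spatial attention followed by binocular feature fusion" can look like, below is a minimal PyTorch sketch. The backbone, layer sizes, module names, and the 2-D gaze output are illustrative assumptions, not the architecture from the paper.

```python
# Illustrative sketch only: the paper's exact LBSAM/GBSAM layers are not
# specified here, so this shows one generic way spatial attention could be
# applied to per-eye CNN features before binocular fusion.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial attention: reweight each location of a feature map."""
    def __init__(self):
        super().__init__()
        # 2 input channels: per-location max and mean pooled over channels
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                        # x: (B, C, H, W)
        max_map, _ = x.max(dim=1, keepdim=True)  # (B, 1, H, W)
        mean_map = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([max_map, mean_map], dim=1)))
        return x * attn                          # attention-weighted features

class BinocularFusionGaze(nn.Module):
    """Per-eye CNN features -> local attention -> fusion -> global attention -> gaze."""
    def __init__(self):
        super().__init__()
        self.eye_cnn = nn.Sequential(            # shared backbone for both eyes (assumption)
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.local_attn = SpatialAttention()     # applied per eye ("local")
        self.global_attn = SpatialAttention()    # applied after fusion ("global")
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 2),                   # 2-D gaze point (assumed output)
        )

    def forward(self, left_eye, right_eye):      # each: (B, 3, H, W)
        fl = self.local_attn(self.eye_cnn(left_eye))
        fr = self.local_attn(self.eye_cnn(right_eye))
        fused = torch.cat([fl, fr], dim=1)       # binocular feature fusion
        return self.head(self.global_attn(fused))

model = BinocularFusionGaze()
out = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
print(out.shape)  # torch.Size([4, 2])
```

The key design point the sketch illustrates is fusing features (after each eye's CNN branch) rather than fusing raw eye images at the input, which is the distinction the paper's first comparative experiment evaluates.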
