A multimodal virtual keyboard using eye-tracking and hand gesture detection

Note: We are unable to review this paper.

PubDate: October 2018

Teams: Fresno State University; University of Ulster

Writers: H. Cecotti; Y. K. Meena; G. Prasad

PDF: A multimodal virtual keyboard using eye-tracking and hand gesture detection

Abstract

A large number of people with disabilities rely on assistive technologies to communicate with their families, to use social media, and to have a social life. Despite a significant increase in novel assistive technologies, robust, non-invasive, and inexpensive solutions should be proposed and optimized in relation to the physical abilities of the users. Reliable and robust identification of intentional visual commands is an important issue in the development of eye-movement-based user interfaces. The detection of a command with an eye-tracking system can be achieved with a dwell time. Yet, a large number of people can use simple hand gestures as a switch to select a command. We propose a new virtual keyboard based on the detection of ten commands. The keyboard includes all the letters of the Latin script (upper and lower case), punctuation marks, digits, and a delete button. To select an item in the keyboard, the user points at it with the gaze and selects it with a hand gesture. The system has been evaluated with eight healthy subjects using five predefined hand gestures, and a button for the selection. The results support the conclusion that a subject's performance, in terms of speed and information transfer rate (ITR), depends on the choice of the hand gesture. The best gesture for each subject provides a mean performance of 8.77 ± 2.90 letters per minute, which corresponds to an ITR of 57.04 ± 14.55 bits per minute. The results highlight that the hand gesture best suited for the selection of an item is subject-dependent.
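
For context, the reported figures are consistent with the common Wolpaw definition of ITR, where bits per minute equals selections per minute times the bits carried by each selection. The sketch below reproduces the arithmetic under two assumptions not stated in the abstract: roughly 90 selectable symbols (52 letters, 10 digits, punctuation marks, and delete) and error-free selection.

```python
import math

def itr_bits_per_minute(selections_per_minute: float, n_symbols: int,
                        accuracy: float = 1.0) -> float:
    """Wolpaw information transfer rate in bits per minute.

    With accuracy == 1.0 this reduces to
    selections_per_minute * log2(n_symbols).
    """
    if accuracy >= 1.0:
        bits_per_selection = math.log2(n_symbols)
    else:
        bits_per_selection = (
            math.log2(n_symbols)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_symbols - 1))
        )
    return selections_per_minute * bits_per_selection

# Reported mean speed: 8.77 letters per minute.
# Assuming ~90 selectable symbols and error-free selection:
print(itr_bits_per_minute(8.77, 90))  # ~56.9 bits/min, close to the reported 57.04
```

With these assumptions, 8.77 letters per minute yields about 56.9 bits per minute, matching the reported 57.04 bits per minute to within rounding.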
