Navigating in Virtual Reality using Thought: The Development and Assessment of a Motor Imagery based Brain-Computer Interface
PubDate: Dec 2019
Teams: University of Toronto
Writers: Behnam Reyhani-Masoleh, Tom Chau
Abstract
Brain-computer interface (BCI) systems have potential as assistive technologies for individuals with severe motor impairments. Nevertheless, individuals must first participate in many training sessions to collect sufficient data for optimizing the classification algorithm before acquiring brain-based control. Such traditional training paradigms have been described as unengaging and unmotivating for users. In recent years, it has been shown that combining virtual reality (VR) with a BCI can increase user engagement. This study developed a 3-class BCI with a machine-learning-based EEG signal processing pipeline. The BCI initially presented sham feedback but was eventually driven by EEG activity associated with motor imagery. The BCI tasks consisted of motor imagery of the feet, left hand, and right hand, which were used to navigate a single-path maze in VR. Ten of the eleven recruited participants achieved online performance superior to chance (p < 0.01), and the majority successfully completed more than 70% of the prescribed navigational tasks. These results indicate that the proposed paradigm warrants further consideration as a neurofeedback BCI training tool: one that, from the user's perspective, affords control from the outset without the need for prior data collection sessions.
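
As context for the 3-class motor imagery setup described above, the sketch below outlines a minimal EEG classification pipeline (band-pass filtering, log band-power features, linear discriminant analysis) together with a binomial test against 3-class chance. It is an illustrative stand-in on synthetic data: the sampling rate, channel count, trial counts, class encoding, and feature/classifier choices are assumptions for the example and are not the authors' actual pipeline.

```python
# Minimal, hypothetical sketch of a 3-class motor imagery pipeline:
# band-pass filter (mu/beta band) -> log band-power features -> LDA.
# Not the paper's actual signal processing chain.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import binomtest
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

FS = 250  # sampling rate in Hz (assumed)

def bandpass(epochs, low=8.0, high=30.0, fs=FS, order=4):
    """Band-pass filter epochs of shape (n_trials, n_channels, n_samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def log_bandpower(epochs):
    """Log-variance of each channel, a standard motor imagery feature."""
    return np.log(np.var(epochs, axis=-1) + 1e-12)

# Synthetic placeholder data: 90 trials, 16 channels, 3-second epochs.
# Labels: 0 = left hand, 1 = right hand, 2 = feet (assumed encoding).
rng = np.random.default_rng(0)
X = rng.standard_normal((90, 16, 3 * FS))
y = np.repeat([0, 1, 2], 30)

clf = make_pipeline(
    FunctionTransformer(bandpass),
    FunctionTransformer(log_bandpower),
    LinearDiscriminantAnalysis(),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} (3-class chance ≈ 0.33)")

# Whether online accuracy exceeds 3-class chance (cf. the p < 0.01 claim)
# can be assessed with a one-sided binomial test; the counts here are hypothetical.
print(binomtest(k=35, n=45, p=1 / 3, alternative="greater").pvalue)
```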