BRAIN DECODING: TOWARD REAL-TIME RECONSTRUCTION OF VISUAL PERCEPTION
PubDate: Oct 2023
Teams: Meta; PSL University
Writers: Yohann Benchetrit; Hubert Banville; Jean-Remi King
PDF: BRAIN DECODING: TOWARD REAL-TIME RECONSTRUCTION OF VISUAL PERCEPTION
Abstract
In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz), which fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold. First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the real-time decoding of the visual processes continuously unfolding within the human brain.
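
To make the three-module recipe in the abstract concrete, below is a minimal PyTorch sketch of the trainable piece: an MEG encoder that maps a sensor-by-time window into a pretrained image-embedding space, optimized with a combined contrastive (CLIP-style) and regression objective, plus a cosine-similarity retrieval step. This is not the authors' released code; the architecture, dimensions (e.g. 272 sensors, a 768-dimensional embedding), loss weighting, and function names are illustrative assumptions.

```python
# Minimal sketch of the approach described in the abstract (assumed details,
# not the authors' implementation): an MEG module trained end-to-end to predict
# frozen pretrained image embeddings, with contrastive + regression losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MEGEncoder(nn.Module):
    """Maps an MEG window (sensors x time) into the pretrained image-embedding space."""
    def __init__(self, n_sensors=272, embed_dim=768):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(n_sensors, 256, kernel_size=9, padding=4),
            nn.GELU(),
            nn.Conv1d(256, 256, kernel_size=9, padding=4),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),           # pool over the time dimension
        )
        self.head = nn.Linear(256, embed_dim)   # project to the image-embedding dimension

    def forward(self, meg):                     # meg: (batch, sensors, time)
        z = self.backbone(meg).squeeze(-1)      # (batch, 256)
        return self.head(z)                     # (batch, embed_dim)

def combined_loss(meg_emb, img_emb, temperature=0.07, lambda_reg=1.0):
    """CLIP-style contrastive loss plus regression to the frozen image embeddings."""
    meg_n = F.normalize(meg_emb, dim=-1)
    img_n = F.normalize(img_emb, dim=-1)
    logits = meg_n @ img_n.t() / temperature                   # (batch, batch) similarities
    targets = torch.arange(len(logits), device=logits.device)  # each MEG window matches its own image
    contrastive = F.cross_entropy(logits, targets)
    regression = F.mse_loss(meg_emb, img_emb)                  # direct regression term
    return contrastive + lambda_reg * regression

@torch.no_grad()
def retrieve(meg_emb, gallery_img_emb, k=5):
    """Image retrieval: rank a gallery of candidate image embeddings by cosine similarity."""
    sims = F.normalize(meg_emb, dim=-1) @ F.normalize(gallery_img_emb, dim=-1).t()
    return sims.topk(k, dim=-1).indices         # indices of the top-k retrieved images
```

At test time, the predicted embedding would either be matched against a gallery of candidate images (retrieval, as above) or passed to a pretrained image generator conditioned on that embedding space to reconstruct the perceived image.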