The Brain's Bitter Lesson: Scaling Speech Decoding With Self-Supervised Learning

16 citations · #359 of 3340 papers in ICML 2025

Abstract

The past few years have seen remarkable progress in the decoding of speech from brain activity, primarily driven by large single-subject datasets. However, due to individual variation, such as anatomy, and differences in task design and scanning hardware, leveraging data across subjects and datasets remains challenging. In turn, the field has not benefited from the growing number of open neural data repositories to exploit large-scale deep learning. To address this, we develop neuroscience-informed self-supervised objectives, together with an architecture, for learning from heterogeneous brain recordings. Scaling to nearly 400 hours of MEG data and 900 subjects, our approach shows generalisation across participants, datasets, tasks, and even to novel subjects. It achieves improvements of 15-27% over state-of-the-art models and matches surgical decoding performance with non-invasive data. These advances unlock the potential for scaling speech decoding models beyond the current frontier.

Citation History

Jan 28, 2026: 0
Feb 13, 2026: 16