The BrAIn2Lang website is now online.
This project explores how speech and language can be decoded from brain activity, bringing together neuroimaging and speech technologies.
https://aholab.ehu.eus/brain2lang
Aholab is the short name of the Signal Processing Laboratory of the University of the Basque Country (UPV/EHU), located in Bilbao. We are a university research team focusing on Text-to-Speech Conversion, Speech and Speaker Recognition, and Speech Processing in general. Since 2005 we have been a recognized research group of the Basque Research Network. The laboratory is part of the Basque Center for Language Technology (HiTZ) and the Department of Communications Engineering of the Faculty of Engineering of Bilbao (ETSI).
Within the framework of the BrAIn2Lang project, we are looking for a highly motivated candidate with:
– MSc in AI, Computer Science, Biomedical Engineering, Computational Neuroscience, or a related field
– Strong background in machine learning and deep learning
– Solid Python skills (e.g., PyTorch or TensorFlow)
– Interest in speech processing and/or neural signal decoding
– Excellent written and spoken English skills
Experience with speech technologies or EEG/MEG is a plus.
Odyssey: The Speaker and Language Recognition Workshop
📅 June 23–26, 2026, Lisbon, Portugal
We’re pleased to announce the Special Session on Speech and Language Technologies in Healthcare at Odyssey 2026: a dedicated forum bridging speech & language research with real-world health applications.
This session brings together interdisciplinary work on clinically grounded and deployable speech and language technologies, from assistive communication systems and speech-based diagnosis and monitoring to emerging paradigms such as silent speech interfaces and personalized, inclusive solutions that make a tangible impact in healthcare settings.
💡 Whether your work focuses on assistive tech, health monitoring through voice, or AI-driven speech interfaces that support patient communication, this is your chance to contribute to a vibrant community where research meets real needs.
📥 Submission Deadline: March 15, 2026
🔗 Call & details: https://odyssey2026.inesc-id.pt/speech-and-language-technologies-in-healthcare/
Join us in Lisbon to help shape the future of speech technology in healthcare! 🌍🗣️❤️
The EMG-Voc ReSSint Database has been made publicly available through ELRA.
https://islrn.org/resources/057-914-072-202-4
https://catalog.elra.info/en-us/repository/browse/ELRA-S0498
Aholab researchers presented two contributions at the X Congreso Internacional de Fonética Experimental (CIFE 2026), showcasing recent advances in neural voice personalization and dialect-aware speech synthesis.
Iñigo Hierro Muga
Evaluación sistemática de estrategias de personalización de voces basadas en redes neuronales (Systematic evaluation of neural network-based voice personalization strategies).
Mariana Flores Ríos
Personalización dialectal de sistemas de síntesis de voz: experimentos para el español mexicano (Dialectal personalization of speech synthesis systems: experiments for Mexican Spanish).
These presentations reflect Aholab’s ongoing commitment to advancing personalized and linguistically adaptive speech technologies.
This year, we took part in the LibriBrain Competition 2025, an international challenge presented at NeurIPS 2025 aimed at advancing the decoding of language from non-invasive brain signals using the large-scale LibriBrain dataset. The competition seeks to foster progress in brain–computer interfaces, with the long-term goal of restoring communication abilities in individuals with speech impairments and enabling novel forms of human–machine interaction based on neural data.
Our system, neural2speech, achieved first place in the Phoneme Classification Standard Track. This track focuses on the prediction of phoneme classes directly from MEG (magnetoencephalography) recordings, under a constrained setting where only the official training data can be used, making robustness and generalization key challenges.
In our paper, “MEGConformer: Conformer-Based MEG Decoder for Robust Speech and Phoneme Classification”, we describe the technical approach behind our solution. We adapt a Conformer architecture, originally designed for automatic speech recognition, to operate directly on raw MEG signals from 306 sensors, effectively capturing both temporal dependencies and spectral characteristics of neural activity. Our method incorporates instance-level normalization to reduce distribution shifts across data splits, a dynamic chunk-averaging data loader to improve phoneme classification performance, and class-balancing strategies based on inverse square-root frequency weighting to address class imbalance.
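As a rough, self-contained sketch of these ideas (not the neural2speech implementation itself), the snippet below wires torchaudio's generic Conformer encoder to 306-channel MEG input and applies instance-level normalization and inverse square-root frequency class weighting; the phoneme inventory size, segment length, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchaudio

# Illustrative sketch only; NOT the neural2speech code. All sizes,
# counts, and hyperparameters below are assumptions for the example.

N_SENSORS = 306     # MEG channels in the LibriBrain recordings
N_PHONEMES = 39     # hypothetical phoneme inventory size

def instance_normalize(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Standardize each segment per channel over time (x: batch, time,
    channels); meant to reduce distribution shift across data splits."""
    mean = x.mean(dim=1, keepdim=True)
    std = x.std(dim=1, keepdim=True)
    return (x - mean) / (std + eps)

def inverse_sqrt_class_weights(counts: torch.Tensor) -> torch.Tensor:
    """Loss weights proportional to 1/sqrt(class count), normalized to
    mean 1, so rare phonemes are up-weighted without exploding."""
    w = 1.0 / torch.sqrt(counts.float().clamp(min=1))
    return w / w.mean()

class MEGPhonemeClassifier(nn.Module):
    """Generic Conformer encoder over raw multichannel MEG, mean-pooled
    into one phoneme prediction per segment."""
    def __init__(self) -> None:
        super().__init__()
        self.encoder = torchaudio.models.Conformer(
            input_dim=N_SENSORS, num_heads=6, ffn_dim=512,
            num_layers=6, depthwise_conv_kernel_size=31,
        )
        self.head = nn.Linear(N_SENSORS, N_PHONEMES)

    def forward(self, x: torch.Tensor, lengths: torch.Tensor) -> torch.Tensor:
        x = instance_normalize(x)
        h, _ = self.encoder(x, lengths)      # (batch, time, N_SENSORS)
        return self.head(h.mean(dim=1))      # (batch, N_PHONEMES)

# Usage with dummy data and made-up phoneme counts:
model = MEGPhonemeClassifier()
counts = torch.randint(100, 50_000, (N_PHONEMES,))
criterion = nn.CrossEntropyLoss(weight=inverse_sqrt_class_weights(counts))
meg = torch.randn(8, 250, N_SENSORS)         # (batch, time, channels)
lengths = torch.full((8,), 250)
loss = criterion(model(meg, lengths), torch.randint(0, N_PHONEMES, (8,)))
```

The dynamic chunk-averaging data loader described in the paper is omitted here, since its details depend on the dataset pipeline.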
These design choices result in a robust and competitive system, allowing neural2speech to stand out among the submitted solutions and demonstrating a meaningful step forward at the intersection of neural signal processing and speech technology.