The EMG-Voc ReSSint Database has been made publicly available through ELRA.
https://islrn.org/resources/057-914-072-202-4
https://catalog.elra.info/en-us/repository/browse/ELRA-S0498
Aholab is the short name of the Signal Processing Laboratory of the University of the Basque Country (UPV/EHU). The laboratory is located in Bilbao. We are a university research team focusing on Text-to-Speech Conversion, Speech and Speaker Recognition, and Speech Processing in general. Since 2005 we have been a recognized research group of the Basque Research Network. The laboratory is part of the Basque Center for Language Technology (HiTZ) and the Department of Communications Engineering of the Faculty of Engineering of Bilbao (ETSI).
This year, we took part in the LibriBrain Competition 2025, an international challenge presented at NeurIPS 2025 aimed at advancing the decoding of language from non-invasive brain signals using the large-scale LibriBrain dataset. The competition seeks to foster progress in brain–computer interfaces, with the long-term goal of restoring communication abilities in individuals with speech impairments and enabling novel forms of human–machine interaction based on neural data.
Our system, neural2speech, achieved first place in the Phoneme Classification Standard Track. This track focuses on the prediction of phoneme classes directly from MEG (magnetoencephalography) recordings, under a constrained setting where only the official training data can be used, making robustness and generalization key challenges.
In our paper, “MEGConformer: Conformer-Based MEG Decoder for Robust Speech and Phoneme Classification”, we describe the technical approach behind our solution. We adapt a Conformer architecture, originally designed for automatic speech recognition, to operate directly on raw MEG signals from 306 sensors, effectively capturing both temporal dependencies and spectral characteristics of neural activity. Our method incorporates instance-level normalization to reduce distribution shifts across data splits, a dynamic chunk-averaging data loader to improve phoneme classification performance, and class-balancing strategies based on inverse square-root frequency weighting to address class imbalance.
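As a rough illustration of two of these components, the sketch below shows how inverse square-root frequency class weights and instance-level normalization of a raw MEG window might be implemented in PyTorch. The function names, tensor shapes, and the use of a weighted cross-entropy loss are our own illustrative assumptions for this post, not the released neural2speech code, and the chunk-averaging data loader is not sketched here.

# Minimal sketch (illustrative assumptions, not the authors' released code):
# inverse square-root frequency class weights and per-example normalization
# for raw MEG windows of shape (num_sensors, num_timesteps).
from collections import Counter

import torch


def inverse_sqrt_class_weights(labels, num_classes):
    """Weight each phoneme class by 1 / sqrt(frequency), normalized to mean 1."""
    counts = Counter(labels)
    freqs = torch.tensor(
        [counts.get(c, 1) for c in range(num_classes)],  # floor of 1 avoids division by zero
        dtype=torch.float32,
    )
    weights = freqs.rsqrt()          # 1 / sqrt(count) per class
    return weights / weights.mean()  # keep the overall loss scale roughly unchanged


def instance_normalize(meg, eps=1e-5):
    """Standardize each sensor channel within a single example.

    meg: tensor of shape (num_sensors, num_timesteps), e.g. (306, T) for the
    MEG recordings described above. Normalizing per example, rather than with
    global training statistics, reduces distribution shift across data splits.
    """
    mean = meg.mean(dim=-1, keepdim=True)
    std = meg.std(dim=-1, keepdim=True)
    return (meg - mean) / (std + eps)


# Usage: plug the weights into a standard weighted cross-entropy loss.
labels = [0, 0, 0, 1, 2, 2]                      # toy phoneme labels
weights = inverse_sqrt_class_weights(labels, 3)  # rarer classes get larger weights
criterion = torch.nn.CrossEntropyLoss(weight=weights)
x = instance_normalize(torch.randn(306, 250))    # one normalized MEG window

Inverse square-root weighting is gentler than full inverse-frequency weighting, so rare phoneme classes are boosted without letting them dominate the loss.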
These design choices result in a robust and competitive system, allowing neural2speech to stand out among the submitted solutions and demonstrating a meaningful step forward at the intersection of neural signal processing and speech technology.
WARNING: THESE POSITIONS HAVE ALREADY BEEN FILLED
Available positions: 3
Job Description: We are looking for three people with a background in IT or telecommunications and a passion for artificial intelligence and neural networks.
Responsibilities:
Collaborate with the team to improve the accuracy and efficiency of speech recognition systems.
Research and apply the latest techniques in the field of artificial intelligence.
Work closely with other team members to achieve project objectives.
Requirements:
Previous experience in programming and algorithm development.
Ability to work in a team and communicate effectively.
Solid knowledge of neural networks and speech signal processing will be an advantage.
Benefits:
Collaborative and creative work environment.
Continuous training and professional development.
Gross salary of about €32,000 per year.
If you are interested in joining our team and contributing to the future of speech recognition technology, we look forward to your application. Get in touch!
Tuesday, June 12th
17:30
ADELA – ARABA
In the course of certain diseases, such as ALS, the ability to speak fluently can be lost. However, this should not mean losing communication with those around us: family members, caregivers, and others. Electronic devices that produce messages with a synthetic voice can make this communication possible. In this talk, we will present the AhomyTTS project, which makes it possible for that synthetic voice to resemble the voice we have now.
We are now active on the social network Bluesky.
From now on, you can find us at https://bsky.app/profile/aholab.bsky.social.