Student: Xabier Román
Silent speech interfaces allow the generation of acoustic speech from articulatory data captured by sensors. In this master thesis we propose to develop a neural-network-based system able to produce speech from electromyographic (EMG) data. These data will come from a freely available database (https://www.uni-bremen.de/en/csl/research/silent-speech-communication/).
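As an illustration of the proposed approach, the sketch below maps frames of EMG features to mel-spectrogram frames with a tiny feed-forward network. All names, shapes, and the network itself are hypothetical assumptions for illustration; a real EMG-to-speech model would be recurrent or convolutional, trained on aligned EMG/audio pairs, and followed by a vocoder to produce a waveform.

```python
import numpy as np

# Hypothetical shapes: 8 EMG channels per frame, 80 mel bins per output frame.
N_EMG_CHANNELS = 8
N_MEL_BINS = 80
HIDDEN = 64

rng = np.random.default_rng(0)

# Randomly initialized weights for a one-hidden-layer MLP (untrained;
# training on aligned EMG/audio data is the actual thesis work).
W1 = rng.normal(scale=0.1, size=(N_EMG_CHANNELS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_MEL_BINS))
b2 = np.zeros(N_MEL_BINS)

def emg_to_mel(emg_frames: np.ndarray) -> np.ndarray:
    """Map (T, N_EMG_CHANNELS) EMG feature frames to (T, N_MEL_BINS) mel frames."""
    h = np.tanh(emg_frames @ W1 + b1)   # hidden layer with tanh nonlinearity
    return h @ W2 + b2                  # linear output: estimated log-mel frames

# Example: 100 frames of synthetic EMG features.
mel = emg_to_mel(rng.normal(size=(100, N_EMG_CHANNELS)))
print(mel.shape)  # (100, 80)
```

The frame-by-frame mapping shown here is the simplest possible formulation; the mel-spectrogram output would then be converted to audio with a separate vocoder.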
Goal: Obtain natural-sounding speech from EMG data.
Requirements: Basic knowledge of neural networks and some experience in Python programming are mandatory.
Advisors: Eva Navas
Presented: 05/10/21
Xabier Roman: Direct speech synthesis from EMG data