Silent speech interfaces allow the generation of acoustic speech from articulatory data captured by sensors. In this master's thesis we propose to develop a neural-network-based system able to produce speech from electromyographic (EMG) data. These data will come from a freely available database (https://www.uni-bremen.de/en/csl/research/silent-speech-communication/).
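As a rough illustration of the kind of mapping involved, the sketch below regresses windows of multi-channel EMG features onto acoustic (e.g. mel-spectrogram) frames with a small feed-forward network. All dimensions, layer sizes, and the NumPy-only training step are illustrative assumptions, not specifications of the thesis work or of the Bremen corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the proposal or corpus):
N_CHANNELS = 6   # number of EMG channels (placeholder)
WIN = 15         # EMG feature frames stacked per input window
N_MEL = 80       # acoustic target: mel-spectrogram bins per frame
HIDDEN = 128
IN_DIM = N_CHANNELS * WIN

# Tiny two-layer regression network: one EMG window -> one acoustic frame.
W1 = rng.normal(0, 0.1, (IN_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_MEL))
b2 = np.zeros(N_MEL)

def forward(x):
    """x: (batch, IN_DIM) EMG feature windows -> (batch, N_MEL) mel frames."""
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2                 # linear output for spectrogram regression

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# One manual gradient step on synthetic data, just to show the loop's shape.
x = rng.normal(size=(32, IN_DIM))      # fake EMG windows
y = rng.normal(size=(32, N_MEL))       # fake target mel frames

h = np.maximum(0.0, x @ W1 + b1)
pred = h @ W2 + b2
loss_before = mse(pred, y)

grad_out = 2.0 * (pred - y) / y.size   # dMSE/dpred
gW2 = h.T @ grad_out
grad_h = (grad_out @ W2.T) * (h > 0)   # backprop through ReLU
gW1 = x.T @ grad_h

lr = 0.1
W1 -= lr * gW1; b1 -= lr * grad_h.sum(0)
W2 -= lr * gW2; b2 -= lr * grad_out.sum(0)
loss_after = mse(forward(x), y)
```

In practice the frame-level spectral output would be converted back to a waveform with a vocoder, and a sequence model (e.g. recurrent or convolutional layers) would replace the plain feed-forward stack; this sketch only shows the regression framing of the task.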
Goal: Obtain natural-sounding speech from EMG data.
Requirements: Basic knowledge of neural networks and some experience in Python programming are mandatory.
Advisor: Eva Navas