Computational Neuroscience of Speech & Hearing


The research group "Computational Neuroscience of Speech & Hearing" investigates the neural and cognitive underpinnings of speech and language. Our research focuses primarily on clinical and healthy populations who have difficulty processing and understanding language (e.g., due to hearing loss, aging, or cognitive impairment). We develop technology to diagnose and rehabilitate language pathology and related impairments in an individualized, context-dependent manner. To improve current interventions for language pathology, we use neurophysiology-based technology such as neurofeedback, as well as tools such as virtual reality, app-based applications (e.g., for lip-reading training), and gamification elements in auditory-cognitive training. Furthermore, we use machine learning approaches on electroencephalography (EEG) data from listeners to detect individuals at risk of dementia at an early stage.

Our research is highly interdisciplinary, anchored mainly in cognitive neuroscience, linguistics, and (clinical) neuropsychology. In our experiments, we use various neuropsychological and psychoacoustic tests together with a range of neuroimaging techniques in humans, such as magnetic resonance imaging (MRI) and EEG.

The research group "Computational Neuroscience of Speech & Hearing" is funded by the Swiss National Science Foundation (SNSF).