We are interested in how the brain processes spoken language, with a particular focus on the cognitive aspects of language processing.
In everyday conversation, we are often confronted with a mixture of different speech signals (for example, at a cocktail party). This means that we have to direct our attention selectively to our interlocutor and suppress signals from other, distracting speakers. The aim of our research is to better understand how the human brain extracts relevant linguistic information from the auditory speech signal and how attention helps to recognize and enhance relevant content. Furthermore, we investigate how multimodal information provided by co-speech gestures supports speech perception and can be used to enhance spoken language comprehension.
For this research, we apply a wide variety of neuroscientific methods, including eye-tracking, non-invasive brain stimulation (tACS, TMS), neuroimaging (EEG, fMRI, lesion mapping), and neurofeedback training.
The long-term vision of the group is to translate the neuromodulation of speech and language processing to populations with speech processing difficulties such as hard-of-hearing listeners and speakers with aphasia.
Courses for students of linguistics and psychology