How can we make sense of our acoustic environment, given that different sound sources in complex surroundings (e.g. the classically cited "cocktail party") often activate the same receptors simultaneously? Only some of this information is relevant for guiding our behaviour, while other information is irrelevant or even distracting.


Beyond a "faithful" transduction process and feedforward transmission of information, this listening challenge can only be solved by the appropriate deployment of top-down processes, such as attention or (anticipatory) prediction. The Auditory Neuroscience Group at the Salzburg Brain Dynamics lab is particularly interested in neural processes linked to prediction, which can be derived e.g. from statistical regularities in previous input or from contextual cues (e.g. lip movements accompanying speech). A particular focus of our group in this context is to shed light on the underinvestigated role of corticofugal processes, which include subcortical and even cochlear processes. We aim to link these processes to the higher-order cortical systems that are more commonly investigated in cognitive neuroscience. For this purpose we are advancing simultaneous non-invasive neuro-cochlear measurements, which include, among others, combinations of measurement instruments (e.g. MEG and otoacoustic emissions) and innovative signal processing strategies applied to M/EEG data.
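As a purely illustrative sketch of the kind of analysis such combined recordings could afford (using simulated signals, not our actual pipeline or lab data), one could quantify frequency-specific coupling between a cortical M/EEG channel and a cochlear (otoacoustic emission) recording via spectral coherence:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000  # sampling rate in Hz (assumed for this toy example)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Simulate a shared 40 Hz component plus independent noise, standing in
# for a cortical (MEG) signal and a cochlear (OAE microphone) signal.
shared = np.sin(2 * np.pi * 40 * t)
meg = shared + rng.standard_normal(t.size)
oae = 0.5 * shared + rng.standard_normal(t.size)

# Magnitude-squared coherence: values near 1 at a given frequency
# indicate strong frequency-specific coupling between the recordings.
f, cxy = coherence(meg, oae, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(cxy)]
```

In this toy example the coherence spectrum peaks at the shared 40 Hz component; real neuro-cochlear data would of course require careful artifact handling before any such spectral measure is meaningful.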


Our research has strong clinical implications, which we are currently pursuing in separate projects funded by the European Commission (Marie Curie Actions) and the FWF (see below).

a) Feature specificity of auditory predictions

Prediction processes are frequency-specific during omission periods. Furthermore, frequency-tuned predictions are exerted in an anticipatory manner.
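The logic of such an omission paradigm can be sketched as follows (a hypothetical stimulus-sequence generator for illustration only, not our experimental code): a regular sequence of tones at a fixed frequency establishes a frequency-specific expectation, and occasional unpredictable omissions probe what the brain predicted in the absence of input.

```python
import numpy as np

def omission_sequence(n_trials, tone_hz, p_omit, seed=0):
    """Build a trial list: each entry is the tone frequency in Hz,
    or None for an unexpected omission that probes the prediction."""
    rng = np.random.default_rng(seed)
    omitted = rng.random(n_trials) < p_omit
    return [None if o else tone_hz for o in omitted]

# Example: 200 trials of a 440 Hz tone with ~10% random omissions.
seq = omission_sequence(n_trials=200, tone_hz=440, p_omit=0.1)
n_omissions = sum(s is None for s in seq)
```

Responses recorded during the `None` trials contain no stimulus-evoked activity, so any frequency-specific signal there can be attributed to the prediction itself.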

b) Corticofugal processes in service of auditory perception

We develop innovative approaches for capturing concurrent cortical, subcortical and cochlear processes.

c) Prediction processes and tinnitus


d) Visuo-phonological transformations

Visual cortex transforms visual speech into phonological code via top-down control.