Multistability in auditory stream segregation: A predictive coding view

István Winkler, Susan Denham, Robert Mill, Tamás M. Bőhm, Alexandra Bendixen

Research output: Contribution to journal › Article

68 Citations (Scopus)


Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm.
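The core idea of the abstract — parallel descriptions of a sound sequence that each predict upcoming events, with prediction error measuring how well each description fits the scene — can be illustrated with a minimal toy sketch. This is not the authors' model; the cyclic-pattern hypotheses, the exponential error decay, and the winner-take-all readout are all simplifying assumptions made here purely for illustration:

```python
# Illustrative sketch only (not the model from the paper): each
# candidate description of the sound sequence is a cyclic pattern
# that predicts the next event.  A running (exponentially decaying)
# prediction error measures how well each description fits the
# ongoing input; the description with the lowest error dominates
# "awareness" at each moment.

def running_errors(sequence, hypotheses, decay=0.8):
    """Return, for each time step, the hypothesis with the lowest
    exponentially weighted running prediction error."""
    errors = {name: 0.0 for name in hypotheses}
    winners = []
    for t, event in enumerate(sequence):
        for name, pattern in hypotheses.items():
            predicted = pattern[t % len(pattern)]
            err = 0.0 if predicted == event else 1.0
            errors[name] = decay * errors[name] + (1 - decay) * err
        winners.append(min(errors, key=errors.get))
    return winners

# Hypothetical stimulus: a repeating ABA- pattern ("_" = silent gap),
# after which a new organization, AB--, takes over.
sequence = list("ABA_" * 5 + "AB__" * 5)

hypotheses = {
    "old pattern (ABA-)": "ABA_",
    "new pattern (AB--)": "AB__",
}

winners = running_errors(sequence, hypotheses)
# Early on, the established ABA- description predicts the input
# perfectly and dominates; after the change, its prediction error
# grows while the AB-- description's error decays, so the new
# description takes over, signalling the emergence of a new pattern.
```

The sketch captures two points from the abstract: predictive success stabilises the currently dominant organization, and a persistent rise in its prediction error signals that a competing description now fits the scene better.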

Original language: English
Pages (from-to): 1001-1012
Number of pages: 12
Journal: Philosophical Transactions of the Royal Society B: Biological Sciences
Issue number: 1591
Publication status: Published - Jan 1 2012


Keywords

  • Auditory grouping
  • Auditory object representation
  • Auditory scene analysis
  • Computational models
  • Perceptual bistability
  • Predictive processing

ASJC Scopus subject areas

  • Biochemistry, Genetics and Molecular Biology (all)
  • Agricultural and Biological Sciences (all)
