Unimodal statistical learning produces multimodal object-like representations

Gábor Lengyel, Goda Žalalytė, Alexandros Pantelides, James N. Ingram, József Fiser, M. Lengyel, Daniel M. Wolpert

Research output: Contribution to journal › Article

Abstract

The concept of objects is fundamental to cognition and is defined by a consistent set of sensory properties and physical affordances. Although it is unknown how the abstract concept of an object emerges, most accounts assume that visual or haptic boundaries are crucial in this process. Here, we tested an alternative hypothesis: that boundaries are not essential but simply reflect a more fundamental principle, consistent visual or haptic statistical properties. Using a novel visuo-haptic statistical learning paradigm, we familiarised participants with objects defined solely by across-scene statistics, provided either visually or through physical interactions. We then tested them on both a visual familiarity task and a haptic pulling task, thus measuring both within-modality learning and across-modality generalisation. Participants showed strong within-modality learning and ‘zero-shot’ across-modality generalisation, which were highly correlated. Our results demonstrate that humans can segment scenes into objects, without any explicit boundary cues, using purely statistical information.

Original language: English
Article number: e43942
Journal: eLife
Volume: 8
DOI: 10.7554/eLife.43942
Publication status: Published - May 2019

ASJC Scopus subject areas

  • Neuroscience (all)
  • Immunology and Microbiology (all)
  • Biochemistry, Genetics and Molecular Biology (all)

Lengyel, G., Žalalytė, G., Pantelides, A., Ingram, J. N., Fiser, J., Lengyel, M., & Wolpert, D. M. (2019). Unimodal statistical learning produces multimodal object-like representations. eLife, 8, [e43942]. https://doi.org/10.7554/eLife.43942