Unimodal statistical learning produces multimodal object-like representations

Gábor Lengyel, Goda Žalalytė, Alexandros Pantelides, James N. Ingram, J. Fiser, Máté Lengyel, Daniel M. Wolpert

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

The concept of objects is fundamental to cognition and is defined by a consistent set of sensory properties and physical affordances. Although it is unknown how the abstract concept of an object emerges, most accounts assume that visual or haptic boundaries are crucial in this process. Here, we tested an alternative hypothesis: that boundaries are not essential but simply reflect a more fundamental principle, consistent visual or haptic statistical properties. Using a novel visuo-haptic statistical learning paradigm, we familiarised participants with objects defined solely by across-scene statistics, provided either visually or through physical interactions. We then tested them on both a visual familiarity task and a haptic pulling task, thus measuring both within-modality learning and across-modality generalisation. Participants showed strong within-modality learning and 'zero-shot' across-modality generalisation, which were highly correlated. Our results demonstrate that humans can segment scenes into objects, without any explicit boundary cues, using purely statistical information.

Original language: English
Journal: eLife
Volume: 8
DOI: 10.7554/eLife.43942
Publication status: Published - May 1, 2019

Fingerprint

  • Statistics
  • Learning
  • Cognition
  • Cues
  • Generalization (Psychology)
  • Recognition (Psychology)

Keywords

  • haptic statistical learning
  • human
  • neuroscience
  • object representations
  • statistical learning
  • visual statistical learning
  • zero-shot generalization

ASJC Scopus subject areas

  • Neuroscience (all)
  • Biochemistry, Genetics and Molecular Biology (all)
  • Immunology and Microbiology (all)

Cite this

Lengyel, G., Žalalytė, G., Pantelides, A., Ingram, J. N., Fiser, J., Lengyel, M., & Wolpert, D. M. (2019). Unimodal statistical learning produces multimodal object-like representations. eLife, 8. https://doi.org/10.7554/eLife.43942

ISSN: 2050-084X
Publisher: eLife Sciences Publications
PMID: 31042148
Scopus ID: 85066457391