Sparsified and Twisted Residual Autoencoders: Mapping Cartesian Factors to the Entorhinal-Hippocampal Complex

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Previously, we have put forth the concept of Cartesian abstraction and argued that it can yield ‘cognitive maps’. We suggested a general mechanism and presented deep learning based numerical simulations: an observed factor (head direction) was non-linearly projected to form a discretized representation (head direction cells). That representation, in turn, enabled the development of a complementing factor (place cells) from high dimensional (visual) inputs. It has been shown that a related metric, in the form of oriented hexagonal grids, may also be derived. Elements of the algorithms were connected to the entorhinal-hippocampal complex (EHC loop). Here, we make one step further in the mapping to the neural substrate. We consider (i) the features of signals arriving at deep and superficial CA1 pyramidal cells, (ii) the interplay between lateral and medial entorhinal cortex efferents, and (iii) the nature of ‘instructive’ input timing-dependent plasticity, a feature of the loop. We suggest that the circuitry corresponds to a special form of Residual Networks that we call Sparsified and Twisted Residual Autoencoder (ST-RAE). We argue that ST-RAEs can learn Cartesian Factors and fit the structure and the working of the entorhinal-hippocampal complex to a reasonable extent, including certain oscillatory properties. We put forth the idea that the factor learning architecture of ST-RAEs has a double role in serving goal-oriented behavior, namely (a) lowering the dimensionality of the task and (b) mitigating the problem of partial observation.
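The abstract names the ingredients of the proposed architecture: an autoencoder with a sparsified hidden code and residual (skip) connections. The paper's exact ST-RAE construction is not given here, so the following is only a minimal, hypothetical sketch of those ingredients, assuming top-k sparsification of the hidden code and an additive skip connection; the dimensions, the ReLU encoder, and the function names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_sparsify(z, k):
    """Keep the k largest-magnitude activations, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

# Illustrative dimensions (not from the paper)
d_in, d_hid, k = 16, 32, 4
W_enc = rng.standard_normal((d_hid, d_in)) * 0.1
W_dec = rng.standard_normal((d_in, d_hid)) * 0.1

def sparse_residual_ae(x):
    z = np.maximum(W_enc @ x, 0.0)    # encoder with ReLU nonlinearity
    z_sparse = top_k_sparsify(z, k)   # sparsified hidden representation
    x_hat = W_dec @ z_sparse          # decoder reconstruction
    return x + x_hat                  # residual (skip) connection

x = rng.standard_normal(d_in)
y = sparse_residual_ae(x)             # same shape as the input
```

The design point the sketch isolates: the skip connection means the decoder only has to model the residual between the input and its sparse-code reconstruction, which is the generic mechanism that Residual Networks and, per the abstract, the EHC loop mapping both exploit.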

Original language: English
Title of host publication: Biologically Inspired Cognitive Architectures 2019 - Proceedings of the 10th Annual Meeting of the BICA Society
Editors: Alexei V. Samsonovich
Publisher: Springer Verlag
Pages: 321-332
Number of pages: 12
ISBN (Print): 9783030257187
DOIs: 10.1007/978-3-030-25719-4_41
Publication status: Published - Jan 1 2020
Event: 10th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2019 - Seattle, United States
Duration: Aug 15 2019 - Aug 18 2019

Publication series

Name: Advances in Intelligent Systems and Computing
Volume: 948
ISSN (Print): 2194-5357
ISSN (Electronic): 2194-5365

Conference

Conference: 10th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2019
Country: United States
City: Seattle
Period: 8/15/19 - 8/18/19

Keywords

  • Entorhinal-hippocampal loop
  • Factor learning
  • Residual networks
  • Skip connections
  • Sparsification

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science (all)

Cite this

Lőrincz, A. (2020). Sparsified and Twisted Residual Autoencoders: Mapping Cartesian Factors to the Entorhinal-Hippocampal Complex. In A. V. Samsonovich (Ed.), Biologically Inspired Cognitive Architectures 2019 - Proceedings of the 10th Annual Meeting of the BICA Society (pp. 321-332). (Advances in Intelligent Systems and Computing; Vol. 948). Springer Verlag. https://doi.org/10.1007/978-3-030-25719-4_41

@inproceedings{3965eaf9cf194f7ea7a717dd24658a1e,
title = "Sparsified and Twisted Residual Autoencoders: Mapping Cartesian Factors to the Entorhinal-Hippocampal Complex",
abstract = "Previously, we have put forth the concept of Cartesian abstraction and argued that it can yield ‘cognitive maps’. We suggested a general mechanism and presented deep learning based numerical simulations: an observed factor (head direction) was non-linearly projected to form a discretized representation (head direction cells). That representation, in turn, enabled the development of a complementing factor (place cells) from high dimensional (visual) inputs. It has been shown that a related metric, in the form of oriented hexagonal grids, may also be derived. Elements of the algorithms were connected to the entorhinal-hippocampal complex (EHC loop). Here, we make one step further in the mapping to the neural substrate. We consider (i) the features of signals arriving at deep and superficial CA1 pyramidal cells, (ii) the interplay between lateral and medial entorhinal cortex efferents, and the nature of ‘instructive’ input timing-dependent plasticity, a feature of the loop. We suggest that the circuitry corresponds to a special form of Residual Networks that we call Sparsified and Twisted Residual Autoencoder (ST-RAE). We argue that ST-RAEs can learn Cartesian Factors and fit the structure and the working of the entorhinal-hippocampal complex to a reasonable extent, including certain oscillatory properties. We put forth the idea that the factor learning architecture of ST-RAEs has a double role in serving goal-oriented behavior, such as (a) the lowering the dimensionality of the task and (b) the mitigation of the problem of partial observation.",
keywords = "Entorhinal-hippocampal loop, Factor learning, Residual networks, Skip connections, Sparsification",
author = "A. Lőrincz",
year = "2020",
month = "1",
day = "1",
doi = "10.1007/978-3-030-25719-4_41",
language = "English",
isbn = "9783030257187",
series = "Advances in Intelligent Systems and Computing",
publisher = "Springer Verlag",
pages = "321--332",
editor = "Samsonovich, {Alexei V.}",
booktitle = "Biologically Inspired Cognitive Architectures 2019 - Proceedings of the 10th Annual Meeting of the BICA Society",

}

TY - GEN

T1 - Sparsified and Twisted Residual Autoencoders

T2 - Mapping Cartesian Factors to the Entorhinal-Hippocampal Complex

AU - Lőrincz, A.

PY - 2020/1/1

Y1 - 2020/1/1

N2 - Previously, we have put forth the concept of Cartesian abstraction and argued that it can yield ‘cognitive maps’. We suggested a general mechanism and presented deep learning based numerical simulations: an observed factor (head direction) was non-linearly projected to form a discretized representation (head direction cells). That representation, in turn, enabled the development of a complementing factor (place cells) from high dimensional (visual) inputs. It has been shown that a related metric, in the form of oriented hexagonal grids, may also be derived. Elements of the algorithms were connected to the entorhinal-hippocampal complex (EHC loop). Here, we make one step further in the mapping to the neural substrate. We consider (i) the features of signals arriving at deep and superficial CA1 pyramidal cells, (ii) the interplay between lateral and medial entorhinal cortex efferents, and the nature of ‘instructive’ input timing-dependent plasticity, a feature of the loop. We suggest that the circuitry corresponds to a special form of Residual Networks that we call Sparsified and Twisted Residual Autoencoder (ST-RAE). We argue that ST-RAEs can learn Cartesian Factors and fit the structure and the working of the entorhinal-hippocampal complex to a reasonable extent, including certain oscillatory properties. We put forth the idea that the factor learning architecture of ST-RAEs has a double role in serving goal-oriented behavior, such as (a) the lowering the dimensionality of the task and (b) the mitigation of the problem of partial observation.

AB - Previously, we have put forth the concept of Cartesian abstraction and argued that it can yield ‘cognitive maps’. We suggested a general mechanism and presented deep learning based numerical simulations: an observed factor (head direction) was non-linearly projected to form a discretized representation (head direction cells). That representation, in turn, enabled the development of a complementing factor (place cells) from high dimensional (visual) inputs. It has been shown that a related metric, in the form of oriented hexagonal grids, may also be derived. Elements of the algorithms were connected to the entorhinal-hippocampal complex (EHC loop). Here, we make one step further in the mapping to the neural substrate. We consider (i) the features of signals arriving at deep and superficial CA1 pyramidal cells, (ii) the interplay between lateral and medial entorhinal cortex efferents, and the nature of ‘instructive’ input timing-dependent plasticity, a feature of the loop. We suggest that the circuitry corresponds to a special form of Residual Networks that we call Sparsified and Twisted Residual Autoencoder (ST-RAE). We argue that ST-RAEs can learn Cartesian Factors and fit the structure and the working of the entorhinal-hippocampal complex to a reasonable extent, including certain oscillatory properties. We put forth the idea that the factor learning architecture of ST-RAEs has a double role in serving goal-oriented behavior, such as (a) the lowering the dimensionality of the task and (b) the mitigation of the problem of partial observation.

KW - Entorhinal-hippocampal loop

KW - Factor learning

KW - Residual networks

KW - Skip connections

KW - Sparsification

UR - http://www.scopus.com/inward/record.url?scp=85070233444&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85070233444&partnerID=8YFLogxK

U2 - 10.1007/978-3-030-25719-4_41

DO - 10.1007/978-3-030-25719-4_41

M3 - Conference contribution

AN - SCOPUS:85070233444

SN - 9783030257187

T3 - Advances in Intelligent Systems and Computing

SP - 321

EP - 332

BT - Biologically Inspired Cognitive Architectures 2019 - Proceedings of the 10th Annual Meeting of the BICA Society

A2 - Samsonovich, Alexei V.

PB - Springer Verlag

ER -