Learning the states: A brain inspired neural model

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

AGI relies on Markov Decision Processes, which assume deterministic states. However, such states must be learned. We propose that states are deterministic spatio-temporal chunks of observations and note that the learning of such episodic memory is attributed to the entorhinal-hippocampal complex (EHC) in the brain. The EHC receives information from the neocortex and encodes learned episodes into neocortical memory traces; it thus changes its own input without changing its emergent representations. Motivated by recent results in exact matrix completion, we argue that the step-wise decomposition of observations into 'typical' (deterministic) and 'atypical' (stochastic) constituents is the EHC's trick for learning episodic memory.
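
The abstract does not spell out the decomposition algorithm, but the 'typical' (low-dimensional, deterministic) versus 'atypical' (sparse, stochastic) split it invokes is closely related to robust PCA / principal component pursuit from the exact-matrix-completion literature. The sketch below is an illustrative assumption, not the paper's method: it splits a matrix of stacked observations into a low-rank 'typical' part and a sparse 'atypical' part using a standard augmented-Lagrangian iteration. The function name typical_atypical_split, the parameter defaults, and the toy data are all hypothetical.

import numpy as np

def svd_shrink(X, tau):
    # Singular value thresholding: shrink singular values of X by tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_shrink(X, tau):
    # Element-wise soft thresholding (promotes sparsity).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def typical_atypical_split(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    # Split an observation matrix M into a low-rank part L ('typical')
    # and a sparse part S ('atypical') by approximately minimizing
    #     ||L||_* + lam * ||S||_1   subject to   L + S = M,
    # via a simple augmented-Lagrangian (ADMM-style) iteration.
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # usual PCP weight
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                        # dual variable
    norm_M = np.linalg.norm(M)
    for _ in range(n_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = soft_shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                           # constraint residual
        Y = Y + mu * R
        if np.linalg.norm(R) <= tol * norm_M:
            break
    return L, S

# Toy usage: a rank-2 'typical' signal corrupted by sparse 'atypical' spikes.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 80))
spikes = np.zeros((60, 80))
mask = rng.random((60, 80)) < 0.05
spikes[mask] = 10 * rng.standard_normal(mask.sum())
L, S = typical_atypical_split(low_rank + spikes)
print("rank of typical part:", np.linalg.matrix_rank(L, tol=1e-3 * np.linalg.norm(L, 2)))
print("fraction of atypical entries:", float((np.abs(S) > 1e-6).mean()))

The default weight lam = 1/sqrt(max(m, n)) follows the usual principal component pursuit recommendation; in practice it would be tuned to how much of each observation is expected to be 'atypical'.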

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 315-320
Number of pages: 6
Volume: 6830 LNAI
DOIs: 10.1007/978-3-642-22887-2_36
Publication status: Published - 2011
Event: 4th International Conference on Artificial General Intelligence, AGI 2011 - Mountain View, CA, United States
Duration: Aug 3, 2011 - Aug 6, 2011

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 6830 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 4th International Conference on Artificial General Intelligence, AGI 2011
Country: United States
City: Mountain View, CA
Period: 8/3/11 - 8/6/11

Keywords

  • exact matrix completion
  • hippocampus
  • MDP
  • sparse coding

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Lőrincz, A. (2011). Learning the states: A brain inspired neural model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6830 LNAI, pp. 315-320). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 6830 LNAI). https://doi.org/10.1007/978-3-642-22887-2_36

@inproceedings{41f6066741dd4130a01a845fa521bea1,
title = "Learning the states: A brain inspired neural model",
abstract = "AGI relies on Markov Decision Processes, which assume deterministic states. However, such states must be learned. We propose that states are deterministic spatio-temporal chunks of observations and notice that learning of such episodic memory is attributed to the entorhinal hippocampal complex in the brain. EHC receives information from the neocortex and encodes learned episodes into neocortical memory traces thus it changes its input without changing its emerged representations. Motivated by recent results in exact matrix completion we argue that step-wise decomposition of observations into 'typical' (deterministic) and 'atypical' (stochastic) constituents is EHC's trick of learning episodic memory.",
keywords = "exact matrix completion, hippocampus, MDP, sparse coding",
author = "A. Lőrincz",
year = "2011",
doi = "10.1007/978-3-642-22887-2_36",
language = "English",
isbn = "9783642228865",
volume = "6830 LNAI",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "315--320",
booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",

}

TY - GEN
T1 - Learning the states
T2 - A brain inspired neural model
AU - Lőrincz, A.
PY - 2011
Y1 - 2011
N2 - AGI relies on Markov Decision Processes, which assume deterministic states. However, such states must be learned. We propose that states are deterministic spatio-temporal chunks of observations and notice that learning of such episodic memory is attributed to the entorhinal hippocampal complex in the brain. EHC receives information from the neocortex and encodes learned episodes into neocortical memory traces thus it changes its input without changing its emerged representations. Motivated by recent results in exact matrix completion we argue that step-wise decomposition of observations into 'typical' (deterministic) and 'atypical' (stochastic) constituents is EHC's trick of learning episodic memory.
AB - AGI relies on Markov Decision Processes, which assume deterministic states. However, such states must be learned. We propose that states are deterministic spatio-temporal chunks of observations and notice that learning of such episodic memory is attributed to the entorhinal hippocampal complex in the brain. EHC receives information from the neocortex and encodes learned episodes into neocortical memory traces thus it changes its input without changing its emerged representations. Motivated by recent results in exact matrix completion we argue that step-wise decomposition of observations into 'typical' (deterministic) and 'atypical' (stochastic) constituents is EHC's trick of learning episodic memory.
KW - exact matrix completion
KW - hippocampus
KW - MDP
KW - sparse coding
UR - http://www.scopus.com/inward/record.url?scp=79961189968&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79961189968&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-22887-2_36
DO - 10.1007/978-3-642-22887-2_36
M3 - Conference contribution
AN - SCOPUS:79961189968
SN - 9783642228865
VL - 6830 LNAI
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 315
EP - 320
BT - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ER -