Reinforcement learning with echo state networks

István Szita, Viktor Gyenes, A. Lőrincz

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

25 Citations (Scopus)

Abstract

Function approximators are often used in reinforcement learning tasks with large or continuous state spaces. Artificial neural networks, among them recurrent neural networks, are popular function approximators, especially in tasks where some kind of memory is needed, as in real-world partially observable scenarios. However, convergence guarantees for such methods are rarely available. Here, we propose a method based on a novel class of RNNs, the echo state networks. A proof of convergence to a bounded region is provided for k-order Markov decision processes. Runs on POMDPs were performed to test and illustrate the workings of the architecture.
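To make the idea concrete, below is a minimal Python sketch of an echo state network used as a Q-value function approximator, roughly in the spirit of the abstract: the recurrent reservoir is random and fixed, the observation history is summarized in the reservoir state, and only a linear readout is adapted with a SARSA-style temporal-difference update. The class name, hyperparameters, and the specific update rule are illustrative assumptions; the paper's exact architecture, learning rule, and convergence conditions are not reproduced here.

import numpy as np

class EchoStateQ:
    """Echo state network as a Q-value approximator (illustrative sketch).

    The recurrent reservoir weights are random and fixed; only the linear
    readout W_out is trained, here with a SARSA-style TD update.
    """

    def __init__(self, n_inputs, n_actions, n_reservoir=100,
                 spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Scale the reservoir so its spectral radius stays below 1,
        # which yields the fading-memory "echo state" property.
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.W_out = np.zeros((n_actions, n_reservoir))  # trainable readout
        self.x = np.zeros(n_reservoir)                    # reservoir state

    def reset(self):
        """Clear the reservoir state at the start of an episode."""
        self.x = np.zeros_like(self.x)

    def step(self, obs):
        """Fold the current observation into the reservoir state."""
        self.x = np.tanh(self.W_in @ np.asarray(obs, dtype=float)
                         + self.W @ self.x)
        return self.x

    def q_values(self):
        """Q-value estimates for all actions, given the current reservoir state."""
        return self.W_out @ self.x

    def td_update(self, action, target, lr=0.05):
        """SARSA-style update of the readout weights only."""
        td_error = target - self.q_values()[action]
        self.W_out[action] += lr * td_error * self.x

if __name__ == "__main__":
    # Smoke test with random observations (no real environment attached).
    esn = EchoStateQ(n_inputs=4, n_actions=2)
    esn.reset()
    esn.step(np.random.rand(4))
    a = int(np.argmax(esn.q_values()))
    esn.td_update(a, target=1.0)  # pretend reward 1.0 on a terminal step
    print("Q-values after one update:", esn.q_values())

Because only the readout is adapted while the reservoir features stay fixed, learning is linear in the features, which is, roughly speaking, why boundedness and convergence arguments become tractable for this class of approximators.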

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 830-839
Number of pages: 10
Volume: 4131 LNCS - I
ISBN (Print): 3540386254, 9783540386254
Publication status: Published - 2006
Event: 16th International Conference on Artificial Neural Networks, ICANN 2006 - Athens, Greece
Duration: Sep 10 2006 to Sep 14 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4131 LNCS - I
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 16th International Conference on Artificial Neural Networks, ICANN 2006
Country: Greece
City: Athens
Period: 9/10/06 to 9/14/06


ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Szita, I., Gyenes, V., & Lőrincz, A. (2006). Reinforcement learning with echo state networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4131 LNCS - I, pp. 830-839). Springer Verlag.
