Reinforcement learning in a distributed market-based production control system

Balázs Csanád Csáji, László Monostori, Botond Kádár

Research output: Contribution to journal › Article

23 Citations (Scopus)

Abstract

The paper presents an adaptive, iterative, distributed scheduling algorithm that operates in a market-based production control system. The manufacturing system is agentified: every machine and job is associated with its own software agent. Each agent learns to select presumably good schedules, thereby reducing the size of the search space. To achieve both adaptive behavior and search-space reduction, a triple-level learning mechanism is proposed: the top level incorporates a simulated annealing algorithm, the middle (and most important) level contains a reinforcement learning system, and the bottom level is realized by a numerical function approximator, such as an artificial neural network. The paper also suggests a cooperation technique for the agents, analyzes the time and space complexity of the solution, and presents experimental results.
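The triple-level mechanism described in the abstract can be sketched roughly as follows. This is a minimal illustrative toy, not the authors' algorithm: the job data, feature choice, learning rate, and cooling schedule are all assumptions, and a simple linear approximator stands in for the neural network at the bottom level.

```python
import math
import random

random.seed(0)

# Toy instance (hypothetical data): job processing times, two machines,
# and makespan as the cost to be minimized.
JOBS = [3, 5, 2, 7, 4, 6]
N_MACHINES = 2

def makespan(assignment):
    loads = [0.0] * N_MACHINES
    for job, m in zip(JOBS, assignment):
        loads[m] += job
    return max(loads)

# Bottom level: a linear function approximator (standing in for the
# paper's neural network) that predicts the cost of placing a job on a
# machine with a given current load.
weights = [0.0, 0.0]  # features: [machine load, job length]

def q_value(load, job):
    return weights[0] * load + weights[1] * job

def update(load, job, target, lr=0.001):
    # One gradient step toward the observed episode cost.
    err = target - q_value(load, job)
    weights[0] += lr * err * load
    weights[1] += lr * err * job

# Middle level: a reinforcement-learning episode in which each job is
# assigned to the machine with the lowest predicted cost, softened by
# Boltzmann exploration at temperature T.
def episode(T):
    loads = [0.0] * N_MACHINES
    assignment = []
    for job in JOBS:
        qs = [q_value(loads[m], job) for m in range(N_MACHINES)]
        qmin = min(qs)  # shift exponents for numerical stability
        ps = [math.exp(-(q - qmin) / max(T, 1e-3)) for q in qs]
        m = random.choices(range(N_MACHINES), weights=ps)[0]
        assignment.append(m)
        loads[m] += job
    cost = makespan(assignment)
    # Credit every decision of the episode with the final makespan.
    loads = [0.0] * N_MACHINES
    for job, m in zip(JOBS, assignment):
        update(loads[m], job, cost)
        loads[m] += job
    return cost

# Top level: a simulated-annealing-style cooling schedule that shrinks
# the exploration temperature across episodes.
T, best = 5.0, float("inf")
for _ in range(300):
    best = min(best, episode(T))
    T *= 0.98

print(best)  # the optimum for this toy instance is 14
```

Here the cooling schedule plays the role of the paper's top level, the Boltzmann-explored episode loop the middle reinforcement-learning level, and the linear predictor the bottom approximation level; the paper itself uses a neural network there and runs one agent per machine and job rather than this single-process loop.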

Original language: English
Pages (from-to): 279-288
Number of pages: 10
Journal: Advanced Engineering Informatics
Volume: 20
Issue number: 3
DOIs
Publication status: Published - Jul 1 2006

Keywords

  • Dynamic scheduling
  • Multi-agent systems
  • Reinforcement learning

ASJC Scopus subject areas

  • Information Systems
  • Artificial Intelligence
