Learning and innovative elements of strategy adoption rules expand cooperative network topologies

Shijun Wang, Máté S. Szalay, Changshui Zhang, Peter Csermely

Research output: Contribution to journal › Article

56 Citations (Scopus)

Abstract

Cooperation plays a key role in the evolution of complex systems. However, the level of cooperation varies extensively with the topology of agent networks in the widely used models of repeated games. Here we show that cooperation remains rather stable when applying the reinforcement-learning strategy adoption rule, Q-learning, on a variety of random, regular, small-world, scale-free and modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove games. Furthermore, we found that in the above model systems other long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) into the strategy adoption rules makes the level of cooperation less dependent on the actual network topology. Our results demonstrate that long-term learning and random elements in the strategy adoption rules, when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations. These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during the re-organization of real-world networks, and may play a prominent role in the evolution of self-organizing complex systems.
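The abstract's core mechanism, a Q-learning strategy adoption rule with a small random ("innovative") element, can be illustrated with a minimal two-agent iterated Prisoner's Dilemma. This is a hedged sketch, not the paper's implementation: the payoff values (R=3, S=0, T=5, P=1), the learning parameters, and the choice of the opponent's last move as the state are all illustrative assumptions; the paper runs multi-agent games on full network topologies.

```python
import random

# Illustrative Prisoner's Dilemma payoffs (not the paper's exact parameters):
# moves: 0 = cooperate, 1 = defect; PAYOFF[(my_move, opp_move)] = my payoff.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}


class QLearner:
    """Agent choosing cooperate/defect by Q-learning; state = opponent's last move."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05, rng=None):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = rng or random.Random()
        # Q[state][action], states 0/1 = opponent's previous move
        self.Q = [[0.0, 0.0], [0.0, 0.0]]

    def act(self, state):
        # Epsilon-greedy choice: the low-level noise plays the role of
        # the "innovative element" in the strategy adoption rule.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(2)
        q = self.Q[state]
        return 0 if q[0] >= q[1] else 1

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.Q[next_state])
        td_error = reward + self.gamma * best_next - self.Q[state][action]
        self.Q[state][action] += self.alpha * td_error


def play(rounds=5000, seed=0):
    """Run an iterated game between two Q-learners; return the cooperation level."""
    master = random.Random(seed)
    a = QLearner(rng=random.Random(master.random()))
    b = QLearner(rng=random.Random(master.random()))
    state_a = state_b = 0  # each agent's state: what the opponent did last round
    coop_moves = 0
    for _ in range(rounds):
        move_a, move_b = a.act(state_a), b.act(state_b)
        a.update(state_a, move_a, PAYOFF[(move_a, move_b)], move_b)
        b.update(state_b, move_b, PAYOFF[(move_b, move_a)], move_a)
        state_a, state_b = move_b, move_a
        coop_moves += (move_a == 0) + (move_b == 0)
    return coop_moves / (2 * rounds)  # fraction of cooperative moves
```

Calling `play()` returns the fraction of cooperative moves over the run; the paper's observation corresponds to this level staying comparatively stable when such long-term learning (here, the discounted Q-update) and noise act together, across different underlying network topologies.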

Original language: English
Article number: e1917
Journal: PLoS ONE
Volume: 3
Issue number: 4
DOIs
Publication status: Published - Apr 9 2008

ASJC Scopus subject areas

  • Biochemistry, Genetics and Molecular Biology (all)
  • Agricultural and Biological Sciences (all)
  • General
