Kagan Tumer's Publications

General Principles of Learning-Based Multiagent Systems. D. H. Wolpert, K. Wheeler, and K. Tumer. In Proceedings of the Third International Conference on Autonomous Agents, pp. 77–83, May 1999.

Abstract

We consider the problem of how to design large decentralized multi-agent systems (MAS's) in an automated fashion, with little or no hand-tuning. Our approach has each agent run a reinforcement learning algorithm. This converts the problem into one of how to automatically set/update the reward functions for each of the agents so that the global goal is achieved. In particular we do not want the agents to "work at cross-purposes" as far as the global goal is concerned. We use the term artificial COllective INtelligence (COIN) to refer to systems that embody solutions to this problem. In this paper we present a summary of a mathematical framework for COINs. We then investigate the real-world applicability of the core concepts of that framework via two computer experiments: we show that our COINs perform near optimally in a difficult variant of Arthur's bar problem (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance for our COINs in the leader-follower problem.
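The abstract describes the setup rather than the reward design itself: every agent runs its own reinforcement learner, and the design question is which per-agent reward keeps the learners aligned with the global goal. The sketch below is a hypothetical illustration of that kind of testbed, not code from the paper: a bar-problem-style world with an illustrative crowding utility, independent epsilon-greedy learners, and the naive choice of paying every agent the global utility directly, the baseline that a COIN-style reward design is meant to improve on. All parameter values, the crowding curve, and the learning rule are assumptions made for this example.

# Hypothetical sketch (not from the paper): N agents independently learn
# whether to attend a bar whose enjoyment drops when it is overcrowded,
# in the flavor of Arthur's bar problem mentioned in the abstract.
import math
import random

N_AGENTS = 60      # illustrative values, not the paper's settings
CAPACITY = 30
EPSILON = 0.1      # exploration rate
ALPHA = 0.1        # learning rate for the running value estimates
ROUNDS = 2000

def world_utility(attendance: int) -> float:
    # An illustrative crowding curve: utility peaks near CAPACITY, then decays.
    return attendance * math.exp(-attendance / CAPACITY)

# Each agent keeps a value estimate for its two actions: 0 = stay home, 1 = attend.
values = [[0.0, 0.0] for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    actions = []
    for v in values:
        if random.random() < EPSILON:
            actions.append(random.randint(0, 1))
        else:
            actions.append(0 if v[0] >= v[1] else 1)
    attendance = sum(actions)
    g = world_utility(attendance)
    # Naive reward choice: every agent is paid the world utility directly.
    # The COIN framework is precisely about replacing this with per-agent
    # reward functions that stay aligned with g but are easier to learn from.
    for v, a in zip(values, actions):
        v[a] += ALPHA * (g - v[a])

print(f"final attendance: {attendance}, world utility: {g:.2f}")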

Download

[PDF] (237.9 kB)

BibTeX Entry

@inproceedings{tumer-wolpert_agents99,
	author = {D. H. Wolpert and K. Wheeler and K. Tumer},
	title = {General Principles of Learning-Based Multiagent Systems},
	booktitle = {Proceedings of the Third International Conference on
		Autonomous Agents},
	pages = {77--83},
	month = {May},
	abstract = {We consider the problem of how to design large decentralized multi-agent systems (MAS's) in an automated fashion, with little or no hand-tuning. Our approach has each agent run a reinforcement learning algorithm. This converts the problem into one of how to automatically set/update the reward functions for each of the agents so that the global goal is achieved. In particular we do not want the agents to "work at cross-purposes" as far as the global goal is concerned. We use the term artificial COllective INtelligence (COIN) to refer to systems that embody solutions to this problem. In this paper we present a summary of a mathematical framework for COINs. We then investigate the real-world applicability of the core concepts of that framework via two computer experiments: we show that our COINs perform near optimally in a difficult variant of Arthur's bar problem (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance for our COINs in the leader-follower problem.},
	bib2html_pubtype = {Refereed Conference Papers},
	bib2html_rescat = {Multiagent Systems, Reinforcement Learning},
	year = {1999}
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 01, 2020 17:39:43