Kagan Tumer's Publications



Fast Multiagent Learning from Actions Not Taken for Heterogeneous Agents. C. Rebhuhn and K. Tumer. In AAMAS-2012 Workshop on Adaptive and Learning Agents, Valencia, Spain, June 2012.

Abstract

In a cooperative multiagent system, information discovered by one agent may be useful to others. Defining a reward structure that can tap into the information discovered by other agents can promote faster learning through group exploration of the state space. Previous work has shown that collective rewards of this nature, referred to as Actions Not Taken (ANT) rewards, outperform classic reward structures on the Bar Problem in both learning speed and robustness to system changes. However, the applicability of ANT rewards is severely limited by the fact that they only consider learning in a system in which agents have identical properties and action spaces. Because most real-world systems do not involve identical agents, we propose a new ANT reward which better accommodates heterogeneous agents using a similarity weighting. In addition, we show the applicability of this approach by extending it to more complex domains: (i) a version of the El Farol Bar Problem with agents that have heterogeneous actions, and (ii) a network routing domain. We show that our formulation of ANT for heterogeneous agents achieves up to 55\% better performance with 30\% faster convergence in the network routing domain.
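The abstract's core idea, weighting information from other agents by how similar their action spaces are, can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual formulation: the Jaccard-overlap similarity metric, the `action_similarity` and `weighted_ant_reward` names, and the additive blending of rewards are all assumptions made for the sake of the example.

```python
def action_similarity(actions_i, actions_j):
    """Similarity between two agents' action sets; here a simple
    Jaccard overlap is used as a stand-in metric (an assumption,
    not the paper's definition)."""
    si, sj = set(actions_i), set(actions_j)
    return len(si & sj) / len(si | sj)

def weighted_ant_reward(agent, action_sets, rewards):
    """Blend the agent's own reward with the rewards other agents
    obtained, each scaled by action-space similarity, so agents with
    comparable capabilities contribute more to each other's learning."""
    total = rewards[agent]
    for other in rewards:
        if other != agent:
            w = action_similarity(action_sets[agent], action_sets[other])
            total += w * rewards[other]
    return total
```

Under this sketch, an agent whose action set is identical to a peer's receives that peer's reward at full weight, while an agent with a disjoint action set ignores it entirely; the interesting cases fall in between.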

Download

(unavailable)

BibTeX Entry

@incollection{tumer-rebhuhn_ala12,
        author = {C. Rebhuhn and K. Tumer},
        title = {Fast Multiagent Learning from Actions Not Taken for Heterogeneous Agents},
        booktitle = {AAMAS-2012 Workshop on Adaptive and Learning Agents},
        month = {June},
        address = {Valencia, Spain},
        editor = {E. Howley and P. Vrancx and M. Knudson},
        abstract = {In a cooperative multiagent system, information discovered by one agent may be useful to others. Defining a reward structure that can tap into the information discovered by other agents can promote faster learning through group exploration of the state space. Previous work has shown that collective rewards of this nature, referred to as Actions Not Taken (ANT) rewards, outperform classic reward structures on the Bar Problem in both learning speed and robustness to system changes. However, the applicability of ANT rewards is severely limited by the fact that they only consider learning in a system in which agents have identical properties and action spaces. Because most real-world systems do not involve identical agents, we propose a new ANT reward which better accommodates heterogeneous agents using a similarity weighting. In addition, we show the applicability of this approach by extending it to more complex domains: (i) a version of the El Farol Bar Problem with agents that have heterogeneous actions, and (ii) a network routing domain. We show that our formulation of ANT for heterogeneous agents achieves up to 55\% better performance with 30\% faster convergence in the network routing domain.},
        bib2html_pubtype = {Workshop/Symposium Papers},
        bib2html_rescat = {Multiagent Systems, Reinforcement Learning},
        year = {2012}
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Jun 26, 2018 19:10:42