Kagan Tumer's Publications


Exploiting Structure and Utilizing Agent-Centric Rewards to Promote Coordination in Large Multiagent Systems (extended abstract). C. HolmesParker, A. Agogino, and K. Tumer. In Proceedings of the Twelfth International Joint Conference on Autonomous Agents and Multiagent Systems, Minneapolis, MN, May 2013.

Abstract

A goal within the field of multiagent systems is to scale to large systems involving hundreds or thousands of agents. In such systems, both the agents' communication requirements and the individual agents' ability to make decisions play critical roles in performance. We take an incremental step towards improving scalability in such systems by introducing a novel algorithm that combines three well-known existing techniques to address both agent communication requirements and decision making in large multiagent systems. In particular, we couple a Factored-Action Factored Markov Decision Process (FA-FMDP) framework, which exploits problem structure and establishes localized rewards for agents (reducing communication requirements), with reinforcement learning using agent-centric difference rewards, which addresses agent decision making and promotes coordination by tackling the structural credit assignment problem. We demonstrate our algorithm's performance compared to two other popular reward techniques (global and local rewards) on systems with up to 10,000 agents.
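The agent-centric difference reward mentioned in the abstract is commonly defined as D_i = G(z) - G(z_-i): the global utility G of the joint action z, minus G evaluated with agent i's action replaced by a default (null) action. The sketch below illustrates that idea on a toy congestion problem; the names (`global_utility`, `NULL_ACTION`) and the specific objective are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a difference reward D_i = G(z) - G(z_-i).
# The toy global objective and the null-action counterfactual are
# assumptions for demonstration, not the paper's actual domain.

NULL_ACTION = 0  # counterfactual default action (assumed)

def global_utility(actions):
    """Toy congestion-style objective: using the resource (action 1) is
    rewarded up to a capacity, then over-use is penalized."""
    used = sum(1 for a in actions if a == 1)
    capacity = 2
    return used if used <= capacity else 2 * capacity - used

def difference_reward(actions, i):
    """D_i = G(z) - G(z with agent i's action replaced by NULL_ACTION)."""
    counterfactual = list(actions)
    counterfactual[i] = NULL_ACTION
    return global_utility(actions) - global_utility(counterfactual)

actions = [1, 1, 1, 0]  # three agents use the resource, one abstains
print(global_utility(actions))                                   # -> 1
print([difference_reward(actions, i) for i in range(4)])         # -> [-1, -1, -1, 0]
```

Each over-contributing agent receives a negative difference reward while the abstaining agent receives zero, so agents learning from D_i are steered toward the coordinated joint action even though each reward only requires evaluating one counterfactual.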

Download

[PDF] (186.9 kB)

BibTeX Entry

@inproceedings{tumer-holmesparker-structure_aamas13,
        author = {C. HolmesParker and A. Agogino and K. Tumer},
        title = {Exploiting Structure and Utilizing Agent-Centric Rewards to Promote Coordination in Large Multiagent Systems (extended abstract)},
        booktitle = {Proceedings of the Twelfth International Joint Conference on Autonomous Agents and Multiagent Systems},
        month = {May},
        address = {Minneapolis, MN},
        abstract = {A goal within the field of multiagent systems is to scale to large systems involving hundreds or thousands of agents. In such systems, both the agents' communication requirements and the individual agents' ability to make decisions play critical roles in performance. We take an incremental step towards improving scalability in such systems by introducing a novel algorithm that combines three well-known existing techniques to address both agent communication requirements and decision making in large multiagent systems. In particular, we couple a Factored-Action Factored Markov Decision Process (FA-FMDP) framework, which exploits problem structure and establishes localized rewards for agents (reducing communication requirements), with reinforcement learning using agent-centric difference rewards, which addresses agent decision making and promotes coordination by tackling the structural credit assignment problem. We demonstrate our algorithm's performance compared to two other popular reward techniques (global and local rewards) on systems with up to 10,000 agents.},
        bib2html_pubtype = {Refereed Conference Papers},
        bib2html_rescat = {Multiagent Systems, Reinforcement Learning},
        year = {2013}
}

Generated by bib2html.pl (written by Patrick Riley) on Tue Jun 26, 2018 19:10:42