Kagan Tumer's Publications

Learning from Actions Not Taken in Multiagent Systems. K. Tumer and N. Khani. Advances in Complex Systems, 12(4-5):455–473, 2009.

Abstract

In large cooperative multiagent systems, coordinating the actions of the agents is critical to the overall system achieving its intended goal. Even when the agents aim to cooperate, ensuring that their actions lead to good system-level behavior becomes increasingly difficult as systems grow larger. One of the fundamental difficulties in such multiagent systems is the slow learning process, in which an agent not only needs to learn how to behave in a complex environment, but also needs to account for the actions of the other learning agents. In this paper, we present a multiagent learning approach that significantly improves the learning speed in multiagent systems by allowing an agent to update its reward estimates (e.g., its value function in reinforcement learning) for all of its available actions, not just the action that was taken. This approach is based on an agent estimating the counterfactual reward it would have received had it taken a particular action. Our results show that rewards for such "actions not taken" are beneficial early in training, particularly when only certain "key" actions are used. We then present results where agent teams are leveraged to estimate those rewards. Finally, we show that the improved learning speed is critical in dynamic environments, where fast learning is essential for tracking the underlying processes.
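
To make the core idea concrete, here is a minimal sketch in Python (not the paper's algorithm): a bandit-style value learner that, after each step, also updates its estimates for the actions it did not take, using an externally supplied counterfactual-reward estimate. The class name, learning parameters, and the estimate_reward hook are all assumptions made for illustration.

import random

class CounterfactualLearner:
    """Value learner that also updates estimates for actions not taken.

    Minimal sketch: assumes a fixed action set and an external
    estimate_reward(alt_action, taken_action, observed_reward) callback
    that approximates the counterfactual reward. The paper's own
    estimators (e.g., team-based estimates) are not reproduced here.
    """

    def __init__(self, actions, alpha=0.1, epsilon=0.1):
        self.actions = list(actions)
        self.alpha = alpha        # learning rate
        self.epsilon = epsilon    # exploration rate
        self.values = {a: 0.0 for a in self.actions}

    def select_action(self):
        # Epsilon-greedy selection over the current value estimates.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=self.values.get)

    def update(self, taken, reward, estimate_reward):
        # Standard update for the action actually taken ...
        self.values[taken] += self.alpha * (reward - self.values[taken])
        # ... plus an update for every action *not* taken, driven by an
        # estimated counterfactual reward for that action.
        for a in self.actions:
            if a != taken:
                r_hat = estimate_reward(a, taken, reward)
                self.values[a] += self.alpha * (r_hat - self.values[a])

# Toy demo: the reward structure is known here, so the counterfactual is
# exact; the paper studies the realistic case where it must be estimated.
def reward_fn(action):
    return 1.0 if action == 3 else 0.0

learner = CounterfactualLearner(actions=range(5))
for _ in range(200):
    a = learner.select_action()
    learner.update(a, reward_fn(a),
                   estimate_reward=lambda alt, taken, obs: reward_fn(alt))

The point of the sketch is the second half of update(): every action's estimate receives feedback on every step, which is what accelerates early learning relative to updating only the action taken.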

Download

[PDF] (351.7 kB)

BibTeX Entry

@article{tumer-khani_acs09,
	author = {K. Tumer and N. Khani},
	title = {Learning from Actions Not Taken in Multiagent Systems},
	journal = {Advances in Complex Systems},
	volume = {12},
	number = {4-5},
	pages = {455--473},
	bib2html_pubtype = {Journal Articles},
	bib2html_rescat = {Multiagent Systems},
	abstract = {In large cooperative multiagent systems, coordinating the actions of the agents is critical to the overall system achieving its intended goal. Even when the agents aim to cooperate, ensuring that their actions lead to good system-level behavior becomes increasingly difficult as systems grow larger. One of the fundamental difficulties in such multiagent systems is the slow learning process, in which an agent not only needs to learn how to behave in a complex environment, but also needs to account for the actions of the other learning agents. In this paper, we present a multiagent learning approach that significantly improves the learning speed in multiagent systems by allowing an agent to update its reward estimates (e.g., its value function in reinforcement learning) for all of its available actions, not just the action that was taken. This approach is based on an agent estimating the counterfactual reward it would have received had it taken a particular action. Our results show that rewards for such "actions not taken" are beneficial early in training, particularly when only certain "key" actions are used. We then present results where agent teams are leveraged to estimate those rewards. Finally, we show that the improved learning speed is critical in dynamic environments, where fast learning is essential for tracking the underlying processes.},
	year = {2009}
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 01, 2020 17:39:43