Kagan Tumer's Publications



Reinforcement Learning in Distributed Domains: Beyond Team Games. D. H. Wolpert, J. Sill, and K. Tumer. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, pp. 819–824, Seattle, WA, 2001.

Abstract

Using a distributed algorithm rather than a centralized one can be extremely beneficial in large search problems. In addition, the incorporation of machine learning techniques like Reinforcement Learning (RL) into search algorithms has often been found to improve their performance. In this article we investigate a search algorithm that combines these properties by employing RL in a distributed manner, essentially using the team game approach. We then present bi-utility search, which interleaves our distributed algorithm with (centralized) simulated annealing, by using the distributed algorithm to guide the exploration step of the simulated annealing. We investigate using these algorithms in the domain of minimizing the loss of importance-weighted communication data traversing a constellation of communication satellites. To do this we introduce the idea of running these algorithms "on top" of an underlying, learning-free routing algorithm. They do this by having the actions of the distributed learners be the introduction of virtual "ghost" traffic into the decision-making of the underlying routing algorithm, traffic that "misleads" the routing algorithm in a way that actually improves performance. We find that using our original distributed RL algorithm to set ghost traffic improves performance, and that bi-utility search -- a semi-distributed search algorithm that is widely applicable -- substantially outperforms both that distributed RL algorithm and (centralized) simulated annealing in our problem domain.
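The abstract's "ghost traffic" mechanism can be pictured with a toy example. The sketch below is not from the paper: the `route` function, the link-cost model, and the 4-node graph are all illustrative assumptions. It shows only the underlying idea that adding fictitious load to a link's cost can divert a learning-free shortest-path router onto a different route; in the paper the ghost-traffic levels are set by distributed RL learners, which is not modeled here.

```python
import heapq

def route(graph, src, dst, ghost):
    """Dijkstra routing where each link's cost is its real load plus
    any virtual 'ghost' traffic placed on it (a dict keyed by (u, v))."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, load in graph[u].items():
            nd = d + load + ghost.get((u, v), 0.0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the chosen route from dst back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy 4-node network with two candidate routes from A to D
# (hypothetical numbers, purely for illustration).
links = {
    "A": {"B": 1.0, "C": 1.5},
    "B": {"D": 1.0},
    "C": {"D": 1.5},
    "D": {},
}

# With no ghost traffic the router takes the cheaper A-B-D route.
print(route(links, "A", "D", {}))                  # ['A', 'B', 'D']
# Ghost traffic on the A-B link "misleads" the router onto A-C-D.
print(route(links, "A", "D", {("A", "B"): 2.0}))   # ['A', 'C', 'D']
```

In the paper's setting, diverting some traffic this way can reduce the system-wide loss of importance-weighted data even though the routing algorithm itself never learns.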

Download

[PDF] (257.3kB)

BibTeX Entry

@inproceedings{tumer-wolpert_ijcai01,
	author = {D. H. Wolpert and J. Sill and K. Tumer},
	title = {Reinforcement Learning in Distributed Domains: Beyond
		Team Games},
	booktitle={Proceedings of the Seventeenth International Joint 
	Conference on Artificial Intelligence},
	pages = {819--824},
	address = {Seattle, WA},
	abstract = {Using a distributed algorithm rather than a centralized one can be extremely beneficial in large search problems. In addition, the incorporation of machine learning techniques like Reinforcement Learning (RL) into search algorithms has often been found to improve their performance. In this article we investigate a search algorithm that combines these properties by employing RL in a distributed manner, essentially using the team game approach. We then present bi-utility search, which interleaves our distributed algorithm with (centralized) simulated annealing, by using the distributed algorithm to guide the exploration step of the simulated annealing. We investigate using these algorithms in the domain of minimizing the loss of importance-weighted communication data traversing a constellation of communication satellites. To do this we introduce the idea of running these algorithms "on top" of an underlying, learning-free routing algorithm. They do this by having the actions of the distributed learners be the introduction of virtual "ghost" traffic into the decision-making of the underlying routing algorithm, traffic that "misleads" the routing algorithm in a way that actually improves performance. We find that using our original distributed RL algorithm to set ghost traffic improves performance, and that bi-utility search -- a semi-distributed search algorithm that is widely applicable -- substantially outperforms both that distributed RL algorithm and (centralized) simulated annealing in our problem domain.},
	bib2html_pubtype = {Refereed Conference Papers},
	bib2html_rescat = {Multiagent Systems, Reinforcement Learning},
	year = {2001}
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 01, 2020 17:39:43