Kagan Tumer's Publications



Approximating Difference Evaluations with Local Knowledge (Extended Abstract). M. Colby, W. Curran, C. Rebhuhn, and K. Tumer. In Proceedings of the Thirteenth International Joint Conference on Autonomous Agents and Multiagent Systems, Paris, France, May 2014.

Abstract

Difference evaluation functions have resulted in excellent multiagent behavior in many domains, including air traffic control, mobile robot control, and distributed sensor network control. In addition to empirical evidence, there is theoretical evidence that suggests difference evaluation functions help shape private agent utilities/objectives in order to promote coordination on a system-wide level. However, calculating difference evaluation functions requires determining the value of a counterfactual system objective function in which an agent took an alternate action. That step is often difficult when the system objective function is unknown or global state and action information is unavailable. In this work, we demonstrate that a local estimate of the system evaluation function may be used to locally compute difference evaluations, allowing for difference evaluations to be computed in multiagent systems where the mathematical form of the objective function is not known. This approximation technique is tested in two domains, and we demonstrate that approximating difference evaluation functions results in better performance and faster learning than when using global evaluation functions.
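The counterfactual step described above can be sketched in a few lines of code. This is a minimal illustration only, not the paper's implementation: the toy system objective (counting distinct covered targets) and all names are hypothetical, chosen just to show how an agent's action is replaced by a counterfactual when computing its difference evaluation.

```python
def global_eval(actions):
    # Toy system objective G(z): number of distinct targets covered.
    # (Hypothetical stand-in for the unknown system objective.)
    return len({a for a in actions if a is not None})

def difference_eval(actions, i, counterfactual=None):
    # D_i = G(z) - G(z_{-i} + c_i): agent i's action is swapped for a
    # counterfactual (here, the null action) and the change in the
    # system objective is agent i's marginal contribution.
    z_counter = list(actions)
    z_counter[i] = counterfactual
    return global_eval(actions) - global_eval(z_counter)

actions = ["A", "B", "B", "C"]
print(difference_eval(actions, 0))  # agent 0 uniquely covers "A" -> 1
print(difference_eval(actions, 1))  # agent 1's "B" is redundant  -> 0
```

The point of the paper is that when `global_eval` is unknown or requires global state, an agent can substitute a locally learned estimate of it and still compute a useful difference evaluation; the swap-and-subtract structure shown here is unchanged.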

Download

[PDF] (191.5kB)

BibTeX Entry

@inproceedings{tumer-colby_aamas14,
	author = {M. Colby and W. Curran and C. Rebhuhn and K. Tumer},
	title = {Approximating Difference Evaluations with Local Knowledge (Extended Abstract)},
	booktitle = {Proceedings of the Thirteenth International Joint Conference on Autonomous Agents and Multiagent Systems},
	month = {May},
	pages = {},
	address = {Paris, France},
	abstract = {Difference evaluation functions have resulted in excellent multiagent behavior in many domains, including air traffic control, mobile robot control, and distributed sensor network control. In addition to empirical evidence, there is theoretical evidence that suggests difference evaluation functions help shape private agent utilities/objectives in order to promote coordination on a system-wide level. However, calculating difference evaluation functions requires determining the value of a counterfactual system objective function in which an agent took an alternate action. That step is often difficult when the system objective function is unknown or global state and action information is unavailable. In this work, we demonstrate that a local estimate of the system evaluation function may be used to locally compute difference evaluations, allowing for difference evaluations to be computed in multiagent systems where the mathematical form of the objective function is not known. This approximation technique is tested in two domains, and we demonstrate that approximating difference evaluation functions results in better performance and faster learning than when using global evaluation functions.},
	bib2html_pubtype = {Refereed Conference Papers},
	bib2html_rescat = {Reinforcement Learning, Evolutionary Algorithms, Multiagent Systems},
	year = {2014}
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 01, 2020 17:39:43