Kagan Tumer's Publications



Multi-Robot Coordination for Space Exploration. L. Yliniemi, A. Agogino, and K. Tumer. AI Magazine, 2014. To appear.

Abstract

Teams of artificially intelligent planetary rovers have tremendous potential for space exploration, allowing for reduced cost, increased flexibility, and increased reliability. However, having multiple autonomous devices acting simultaneously leads to a problem of coordination: to achieve the best results, they should work together. This is not a simple task. Due to the large distances and harsh environments, a rover must be able to perform a wide variety of tasks with a wide variety of potential teammates in uncertain and unsafe environments. Directly coding all the rules needed to reliably handle this coordination and uncertainty is problematic. Instead, this article examines tackling the problem through coordinated reinforcement learning: rather than being programmed with what to do, the rovers iteratively learn, through trial and error, to take actions that lead to high overall system return. To allow for coordination while still letting each agent learn and act independently, we employ state-of-the-art reward shaping techniques. The article uses visualization techniques to break down complex performance indicators into an accessible form, and identifies key future research directions.
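
The abstract does not specify which reward shaping is used; purely as an illustration, the following is a minimal sketch of difference-reward shaping (D_i = G(z) - G(z_{-i})), one common way to let independent learners credit their own contribution to the overall system return. The toy domain (rovers choosing among points of interest, with G counting distinct POIs covered), the constants, and all names are hypothetical and are not the article's experimental setup.

# Minimal sketch (not the authors' code) of difference-reward shaping for
# independent learners. Assumed toy domain: each "rover" picks one of several
# points of interest (POIs); the system return G counts distinct POIs covered.
# Each agent learns from D_i = G(z) - G(z_{-i}), the change in system return
# attributable to its own action.
import random

N_AGENTS, N_POIS, EPISODES = 5, 5, 2000
ALPHA, EPSILON = 0.1, 0.1

def G(actions):
    """System return: number of distinct POIs observed by the team."""
    return len(set(actions))

# One action-value table per agent (stateless, bandit-style learner).
Q = [[0.0] * N_POIS for _ in range(N_AGENTS)]

for _ in range(EPISODES):
    # Each agent independently picks a POI (epsilon-greedy on its own values).
    actions = [
        random.randrange(N_POIS) if random.random() < EPSILON
        else max(range(N_POIS), key=lambda a: Q[i][a])
        for i in range(N_AGENTS)
    ]
    g = G(actions)
    for i in range(N_AGENTS):
        # Difference reward: remove agent i from the joint action and compare.
        d = g - G(actions[:i] + actions[i + 1:])
        Q[i][actions[i]] += ALPHA * (d - Q[i][actions[i]])

greedy = [max(range(N_POIS), key=lambda a: Q[i][a]) for i in range(N_AGENTS)]
print("Final system return:", G(greedy))

Because D_i subtracts the return the team would have earned without agent i, each learner's update reflects only its own contribution, so the agents can learn fully independently while their incentives stay aligned with the overall system return.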

Download

[PDF] (1.1MB)

BibTeX Entry

@article{tumer-yliniemi_AI14,
	author = {L. Yliniemi and A. Agogino and K. Tumer},
	title = {Multi-Robot Coordination for Space Exploration},
	journal = {AI Magazine},
	volume = {},
	number = {},
	abstract={Teams of artificially intelligent planetary rovers have tremendous potential for space exploration, allowing for reduced cost, increased flexibility, and increased reliability. However, having multiple autonomous devices acting simultaneously leads to a problem of coordination: to achieve the best results, they should work together. This is not a simple task. Due to the large distances and harsh environments, a rover must be able to perform a wide variety of tasks with a wide variety of potential teammates in uncertain and unsafe environments. Directly coding all the rules needed to reliably handle this coordination and uncertainty is problematic. Instead, this article examines tackling the problem through coordinated reinforcement learning: rather than being programmed with what to do, the rovers iteratively learn, through trial and error, to take actions that lead to high overall system return. To allow for coordination while still letting each agent learn and act independently, we employ state-of-the-art reward shaping techniques. The article uses visualization techniques to break down complex performance indicators into an accessible form, and identifies key future research directions.},
	bib2html_pubtype = {Magazine Articles},
	bib2html_rescat = {Robotics,Multiagent Systems},
	note = {to appear},
	year = {2014}
}
