Kagan Tumer's Publications



Efficient State Spaces and Policy Transfer for Robot Navigation. M. Knudson and K. Tumer. In L. Garcia, editor, Advanced Robotics, Concept Press, 2012.

Abstract

Autonomous mobile robots are critical in many real-world applications, ranging from planetary exploration to search and rescue. Indeed, the application domains using such robots are steadily increasing, and as the robots get smaller, navigation policies that operate with reduced computational and memory requirements become critical. In addition, as tasks become more complex, directly developing (in this case evolving) navigation policies for such robots becomes slow at best and impossible at worst. In this paper, we first revisit a state/action representation that allows robots to learn good navigation policies, but also allows them to transfer those policies to new and more complex situations. In particular, we show how the evolved policies can transfer to situations with: (i) new tasks (different obstacle and target configurations and densities); (ii) new sets of sensors (different resolution); and (iii) new sensor/actuation noise levels (failures or changes in sensing and/or actuation fidelity). Our results show that in all three cases, policies evolved in simple environments and transferred to more complex situations outperform policies directly evolved in the complex situation, both in terms of overall performance (up to 30%) and convergence speed (up to 90%).

Download

(unavailable)

BibTeX Entry

@incollection{tumer-knudson-transfer12,
	author = {M. Knudson and K. Tumer},
	title = {Efficient State Spaces and Policy Transfer for Robot Navigation},
	booktitle = {Advanced Robotics},
	editor = {L. Garcia},
	publisher = {Concept Press},
	abstract={Autonomous mobile robots are critical in many real world applications, ranging from planetary exploration to search and rescue. Indeed, the application domains using such robots are steadily increasing, and as the robots get smaller, navigation policies that operate with reduced computational and memory requirements become critical. In addition, as tasks become more complex, directly developing (in this case evolving) navigation policies for such robots becomes slow at best and impossible at worst.
In this paper, we first revisit a state/action representation that allows robots to learn good navigation policies, but also allows them to transfer the policy to new and more complex situations.  In particular, we show how the evolved policies can transfer to situations with: (i) new tasks (different obstacle and target configurations and densities); (ii) new sets of sensors (different resolution); and (iii) new sensor/actuation noise levels (failures or changes in sensing and/or actuation fidelity). Our results show that in all three cases, policies evolved in simple environments and transferred to more complex situations outperform policies directly evolved in the complex situation both in terms of overall performance (up to 30\%) and convergence speed (up to 90\%).},
	bib2html_pubtype = {Book Chapters},
	bib2html_rescat = {Robotics, Evolutionary Algorithms},
	year = {2012}
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 01, 2020 17:39:43