Kagan Tumer's Publications



Policy Transfer in Mobile Robots using Neuro-Evolutionary Navigation (Extended Abstract). M. Knudson and K. Tumer. In Proceedings of the Genetic and Evolutionary Computation Conference, Philadelphia, PA, July 2012.

Abstract

Mobile robots are a key component of many real-world applications, ranging from planetary exploration to search and rescue. Indeed, the application domains using such robots are steadily increasing, and as the robots get smaller, we need navigation policies that operate with reduced computational and memory requirements. In addition, as tasks become more complex, directly developing (in this case, evolving) navigation policies for such robots becomes slow at best and impossible at worst. In this paper, we first present a state/action representation that allows robots to learn good navigation policies, but also allows them to transfer those policies to new and more complex situations. In particular, we show how the evolved policies can transfer to situations with: (i) new tasks (different obstacle and target configurations and densities); (ii) new sets of sensors (different resolutions); and (iii) new sensor/actuation noise levels (failures or changes in sensing and/or actuation fidelity). Our results show that in all three cases, policies evolved in simple environments and transferred to more complex situations outperform policies evolved directly in the complex situation, both in terms of overall performance (up to 30%) and convergence speed (up to 90%).
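The two-stage transfer scheme the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the paper's method: it assumes a 1D navigation world, a linear policy in place of an evolved neural controller, and a simple (mu+lambda) evolution strategy; all names and parameters are invented for illustration.

```python
import random

def simulate(weights, noise=0.0, steps=20):
    """Roll out a linear navigation policy in a toy 1D world.

    The robot starts at position 0 and must reach a target at 10,
    reading a (possibly noisy) range sensor each step. Returns the
    fitness: negative final distance to the target (0 is best).
    """
    rng = random.Random(0)  # fixed seed: deterministic noise per rollout
    pos, target = 0.0, 10.0
    for _ in range(steps):
        sensor = (target - pos) + rng.gauss(0.0, noise)
        action = max(-1.0, min(1.0, weights[0] * sensor + weights[1]))
        pos += action
    return -abs(target - pos)

def evolve(pop, noise, generations=30, rng=None):
    """Minimal (mu+lambda) evolution strategy over policy weights."""
    rng = rng or random.Random(1)
    for _ in range(generations):
        children = [[w + rng.gauss(0.0, 0.1) for w in p] for p in pop]
        ranked = sorted(pop + children,
                        key=lambda p: simulate(p, noise), reverse=True)
        pop = ranked[:len(pop)]  # keep the best mu individuals
    return pop

rng = random.Random(1)
fresh = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(10)]

# Stage 1: evolve policies in the simple (noise-free) environment.
simple_pop = evolve([p[:] for p in fresh], noise=0.0, rng=random.Random(2))

# Stage 2: transfer -- seed evolution in the harder (noisy) environment
# with the policies from stage 1, versus evolving there from scratch.
transferred = evolve([p[:] for p in simple_pop], noise=1.0,
                     generations=10, rng=random.Random(3))
scratch = evolve([p[:] for p in fresh], noise=1.0,
                 generations=10, rng=random.Random(3))

print(simulate(transferred[0], noise=1.0), simulate(scratch[0], noise=1.0))
```

The transferred population starts stage 2 already near good behavior, which is the source of the convergence-speed advantage the paper reports; the paper's actual experiments use evolved neural network controllers and richer obstacle/target environments.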

Download

(unavailable)

BibTeX Entry

@inproceedings{tumer-knudson-gecco12,
	author = {M. Knudson and K. Tumer},
	title = {Policy Transfer in Mobile Robots using Neuro-Evolutionary Navigation (Extended Abstract)},
	booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
	month = {July},
	address = {Philadelphia, PA},
	abstract={Mobile robots are a key component to many real world applications, ranging from planetary exploration to search and rescue. Indeed, the application domains using such robots are steadily increasing, and as the robots get smaller, we need navigation policies that operate with reduced computational and memory requirements. In addition, as tasks become more complex, directly developing (in this case evolving) navigation policies for such robots becomes slow at best and impossible at worst.
In this paper, we first present a state/action representation that allows robots to learn good navigation policies, but also allows them to transfer the policy to new and more complex situations.  In particular, we show how the evolved policies can transfer to situations with: (i) new tasks (different obstacle and target configurations and densities); (ii) new sets of sensors (different resolution); and (iii) new sensor/actuation noise levels (failures or changes in sensing and/or actuation fidelity). Our results show that in all three cases, policies evolved in simple environments and transferred to more complex situations outperform policies directly evolved in the complex situation both in terms of overall performance (up to 30\%) and convergence speed (up to 90\%).},
	bib2html_pubtype = {Refereed Conference Papers},
	bib2html_rescat = {Robotics, Evolutionary Algorithms},
	year = {2012}
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 01, 2020 17:39:43