
Receding Horizon Differential Dynamic Programming

Yuval Tassa, Tom Erez, and William D. Smart.
In "Advances in Neural Information Processing Systems 20", John C. Platt, Daphne Koller, Yoram Singer, and Sam Roweis (eds)., pages 1465-1472, Cambridge, MA, 2008.

The control of high-dimensional, continuous, non-linear systems is a key problem in reinforcement learning and control. Local, trajectory-based methods, using techniques such as Differential Dynamic Programming (DDP), are not directly subject to the curse of dimensionality, but generate only local controllers. In this paper, we introduce Receding Horizon DDP (RH-DDP), an extension to the classic DDP algorithm, which allows us to construct stable and robust controllers based on a library of local-control trajectories. We demonstrate the effectiveness of our approach on a series of high-dimensional control problems using a simulated multi-link swimming robot. These experiments show that our approach effectively circumvents dimensionality issues, and is capable of dealing with problems with (at least) 34 state and 14 action dimensions.
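The receding-horizon idea the abstract describes can be illustrated with a minimal sketch: at every time step, re-solve a fixed-horizon optimal-control problem by a backward pass, apply only the first control, and shift the window forward. The sketch below is not the paper's RH-DDP implementation; it specializes the DDP backward pass to a linear system with quadratic cost (where one Riccati sweep suffices, no iteration) on a hypothetical double-integrator test system.

```python
import numpy as np

def backward_pass(A, B, Q, R, N):
    """Backward Riccati recursion: the DDP backward pass specialized
    to linear dynamics and quadratic cost (a single sweep suffices)."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[k] is the feedback gain at horizon step k

def receding_horizon(A, B, Q, R, x0, horizon=20, steps=50):
    """At each step, re-plan over a fixed horizon and apply only the
    first control before shifting the window forward one step."""
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(steps):
        gains = backward_pass(A, B, Q, R, horizon)
        u = -gains[0] @ x          # first control of the re-planned sequence
        x = A @ x + B @ u          # advance the (here: known, linear) dynamics
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical double-integrator test system (not from the paper)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])
traj = receding_horizon(A, B, Q, R, np.array([1.0, 0.0]))
print(np.linalg.norm(traj[-1]))  # state driven toward the origin
```

On a non-linear system such as the paper's swimming robot, the backward pass would instead use local quadratic expansions of the dynamics and cost around the current trajectory, iterated to convergence, which is what distinguishes DDP from the one-shot linear-quadratic sweep above.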

Paper: [PDF]

@inproceedings{tassa2008receding,
  author = {Tassa, Yuval and Erez, Tom and Smart, William D.},
  editor = {Platt, John C. and Koller, Daphne and Singer, Yoram and Roweis, Sam},
  title = {Receding Horizon Differential Dynamic Programming},
  booktitle = {Advances in Neural Information Processing Systems 20},
  pages = {1465--1472},
  publisher = {{MIT} Press},
  address = {Cambridge, {MA}},
  year = {2008}
}