Publications

Why (PO)MDPs Lose for Spatial Tasks and What to Do About It

Terran Lane and William D. Smart.
In "Proceedings of the ICML Workshop on Rich Representations for Reinforcement Learning", Bonn, Germany, August 2005.

In this deliberately inflammatory paper, we claim that everything you believe about (PO)MDPs is wrong. More specifically, we claim that (PO)MDPs are so general as to be nearly useless in many cases of practical interest, and that we should specialize rather than generalize. We are mostly concerned with problems involving a real, physical world (the same real, physical world that we live in). In particular, we are interested in spatial navigation, but we believe that this claim holds for a number of other key problem areas as well. Our abstraction efforts to date have focused on extending the reach of (PO)MDP models while maintaining their basic world-view. We claim that a more profitable approach for the future is to cleave RL into a number of sub-disciplines, each studying important "special cases". By doing so, we will be able to take advantage of the properties of these cases in ways that current (PO)MDP frameworks cannot.

Paper: [PDF]

@inproceedings{icml05,
  author = {Lane, Terran and Smart, William D.},
  title = {Why {(PO)MDP}s Lose for Spatial Tasks and What to Do About It},
  booktitle = {Proceedings of the {ICML} Workshop on Rich Representations for Reinforcement Learning},
  address = {Bonn, Germany},
  month = {August},
  year = {2005}
}