
Real-Time Scheduling via Reinforcement Learning

Robert Glaubius, Terry Tidwell, Christopher D. Gill, and William D. Smart.
In "Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010)", Peter Grünwald and Peter Spirtes (eds.), Catalina Island, CA, 2010.

Cyber-physical systems, such as mobile robots, must respond adaptively to dynamic operating conditions. Effective operation of these systems requires that sensing and actuation tasks are performed in a timely manner. Additionally, execution of mission specific tasks such as imaging a room must be balanced against the need to perform more general tasks such as obstacle avoidance. This problem has been addressed by maintaining relative utilization of shared resources among tasks near a user-specified target level. Producing optimal scheduling strategies requires complete prior knowledge of task behavior, which is unlikely to be available in practice. Instead, suitable scheduling strategies must be learned online through interaction with the system. We consider the sample complexity of reinforcement learning in this domain, and demonstrate that while the problem state space is countably infinite, we may leverage the problem's structure to guarantee efficient learning.
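The utilization-target idea in the abstract can be illustrated with a small sketch. The greedy heuristic below (an assumption for illustration only; the paper instead *learns* such policies via reinforcement learning, without assuming known task durations) dispatches whichever task keeps the resource-share vector closest to the user-specified target:

```python
def schedule_step(utilizations, targets, durations):
    """Greedy utilization-target dispatch (illustrative sketch).

    utilizations: cumulative resource use of each task so far
    targets:      desired long-run share for each task (sums to 1)
    durations:    expected duration of each task's next job
    Returns the index of the task to run next.
    """
    best_task, best_dev = None, float("inf")
    total = sum(utilizations)
    for i, d in enumerate(durations):
        # Hypothetical post-dispatch utilization if task i runs next.
        new_u = list(utilizations)
        new_u[i] += d
        new_total = total + d
        # Squared deviation of resulting shares from the target vector.
        dev = sum((u / new_total - t) ** 2 for u, t in zip(new_u, targets))
        if dev < best_dev:
            best_task, best_dev = i, dev
    return best_task
```

For example, with targets of an even 50/50 split, cumulative utilizations of (3, 1), and unit durations, the heuristic dispatches the under-served second task, since that yields shares (0.6, 0.4) rather than (0.8, 0.2). In the paper's setting the durations are stochastic and initially unknown, which is what motivates learning the policy online.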

Paper: [PDF]

@inproceedings{glaubius2010realtime,
  author = {Glaubius, Robert and Tidwell, Terry and Gill, Christopher D. and Smart, William D.},
  editor = {Gr\"{u}nwald, Peter and Spirtes, Peter},
  title = {Real-Time Scheduling via Reinforcement Learning},
  booktitle = {Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence ({UAI} 2010)},
  address = {Catalina Island, {CA}},
  year = {2010}
}