Distributed Coordination of Agents
for Air Traffic Flow Management
Oregon State University
A. K. Agogino, UCSC
This project is supported by the National Science Foundation under grants 0930168 and 0931591.
This project addresses the management of the air traffic system, a cyber-physical system in which a tight connection between the computational algorithms and the physical system is critical to safe, reliable and efficient performance. Indeed, the lack of this tight connection is one of the reasons current systems are overwhelmed by ever-increasing traffic and suffer when there is any deviation from the expected (e.g., changing weather).
Estimates put delays induced by weather, routing decisions and airport conditions at 1,682,700 hours in 2007, resulting in an economic loss of over $41 billion.
Because infrastructure improvements are neither affordable nor likely to address the root of the problem, the needed capacity improvements have to come almost entirely from more efficient algorithms.
Multiagent coordination algorithms are ideally suited to address this problem. Yet many such approaches consider only the computational problem and, by ignoring the role of current systems and procedures, yield solutions that are difficult or impossible to implement (e.g., free flight).
Our previous work on agent-based air traffic management has shown that if agents are used within the current air traffic system, the careful selection of the agents (e.g., fixes), their actions (e.g., setting aircraft separation), and their reward functions (e.g., their impact on system performance) can provide significant improvements over the current state of the art. However, that early work was based on a limited set of agent actions, used simulated data, and relied heavily on a computationally costly simulator. In real-world systems, there is significant interaction among the agents, particularly when the action set is expanded (e.g., reroutes or ground delays). In this proposal, we aim to study the impact of agent actions, rewards, and interactions on system performance using data from real air traffic systems.
The key contribution of this project lies in addressing the agent coordination problem in a physical setting by shifting the focus from "how to learn" to "what to learn." This paradigm shift allows us to separate the advances in learning algorithms from the reward functions used to tie those learning systems into physical systems. By exploring agent reward functions that implicitly model agent interactions based on feedback from the real world, we aim to build cyber-physical systems where an agent that learns to optimize its own reward also optimizes the system objective function. In addition, by focusing on coordinating agents in the complex air traffic management domain, the proposed work aims to demonstrate the benefits of cyber-physical systems both in yielding new computational advances and in providing new solutions to complex real-world problems.
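Reward functions of the kind described above are commonly built as difference rewards, D_i = G(z) - G(z_{-i}), which reward each agent by its marginal impact on the system objective G. The sketch below illustrates the idea under an assumed toy congestion objective (sector traffic counts against sector capacities); the function names and objective are illustrative, not the project's actual model:

```python
import numpy as np

def system_objective(counts, capacity):
    """Toy congestion objective: quadratic penalty for traffic above capacity."""
    overflow = np.maximum(counts - capacity, 0)
    return -float(np.sum(overflow ** 2))

def difference_reward(agent, actions, capacity):
    """D_i = G(z) - G(z_{-i}): system objective with and without agent i."""
    counts = np.bincount(actions, minlength=capacity.size)
    g = system_objective(counts, capacity)
    counterfactual = counts.copy()
    counterfactual[actions[agent]] -= 1  # remove agent i's aircraft from its sector
    return g - system_objective(counterfactual, capacity)
```

An agent contributing to an over-capacity sector receives a negative difference reward, while an agent in an uncongested sector receives zero, so an agent maximizing D_i also improves G.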
- Address the impact of exploration noise in a multiagent system and devise reward functions that minimize its deleterious effects.
- Introduce agent partitioning based on rewards to allow scaling to large multiagent systems.
- Model system objectives with tabular functions, linear functions, and neural networks to enable faster learning and scaling to large systems.
- Cluster agents to allow scaling to large systems. By finding agents that interfere with each other, we aim to create clusters whose performance can be optimized independently.
- Manage congestion in communication among Unmanned Aerial Vehicles sharing the airspace.
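The first goal above can be illustrated with a CLEAN-style reward: every agent executes its greedy action in the environment, so the system never observes exploration noise, and each agent scores its exploratory action only counterfactually, offline. A minimal sketch assuming a toy congestion objective; the function names and objective are ours, not the published algorithm's:

```python
import numpy as np

def system_objective(counts, capacity):
    """Toy congestion objective: quadratic penalty for traffic above capacity."""
    overflow = np.maximum(counts - capacity, 0)
    return -float(np.sum(overflow ** 2))

def clean_reward(agent, greedy_actions, explore_action, capacity):
    """Score an exploratory action against the executed greedy joint action."""
    counts = np.bincount(greedy_actions, minlength=capacity.size)
    # Counterfactual: swap in the exploratory action for this agent only;
    # the other agents' executed actions are left untouched.
    cf = counts.copy()
    cf[greedy_actions[agent]] -= 1
    cf[explore_action] += 1
    return system_objective(cf, capacity) - system_objective(counts, capacity)
```

Because the exploratory action is evaluated but never executed, the other agents' learning updates are not corrupted by it.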
- Combining Reward Shaping and Hierarchies for Scaling to Large Multiagent Systems. C. HolmesParker, A. Agogino, and K. Tumer. Knowledge Engineering Review, 2015.
- Using Reward/Utility Based Impact Scores in Partitioning (Extended Abstract). W. Curran, A. Agogino, and K. Tumer. In Proceedings of the Thirteenth International Joint Conference on Autonomous Agents and Multiagent Systems, Paris, France, May 2014.
- CLEAN Rewards to Improve Coordination by Removing Exploratory Action Noise. C. HolmesParker, M. Taylor, A. Agogino, and K. Tumer. In International Conference on Intelligent
Agent Technology, Warsaw, Poland, August 2014.
- Evolutionary Agent-Based Simulation of the Introduction of New Technologies in Air Traffic Management. L. Yliniemi, A. Agogino, and K. Tumer. In
Proceedings of the Genetic and Evolutionary Computation Conference, Vancouver, Canada, July 2014.
- CLEANing the Reward: Counterfactual Actions Remove Exploratory Action Noise in Multiagent Learning (Extended Abstract).
C. HolmesParker, M. Taylor, A. Agogino, and K. Tumer. In Proceedings of the Thirteenth International Joint Conference on Autonomous Agents and Multiagent Systems, Paris, France, May 2014.
- Coordinating Actions in Congestion Problems: Impact of Top-Down and Bottom-Up Utilities. S. Proper and K. Tumer. Autonomous Agents and MultiAgent Systems, 27(3):419–443, 2013. (DOI: 10.1007/s10458-012-9211-z)
- Multiagent Learning with Noisy Global Reward Signal. S. Proper and K. Tumer. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2013), Bellevue, WA, July 2013.
- A Multiagent Approach to Managing Air Traffic Flow. A. Agogino and K. Tumer.
Autonomous Agents and MultiAgent Systems, 24:1–25,
2012. (DOI: 10.1007/s10458-010-9142-5)
- Evolving Large Scale UAV Communication Systems. A. Agogino, C. HolmesParker, and K. Tumer. In Proceedings of the Genetic and Evolutionary Computation Conference, Philadelphia, PA, July 2012. Best "Real World Applications" paper award.
- Modeling Difference Rewards for Multiagent Learning (Extended Abstract). S. Proper and K. Tumer.
In Proceedings of the Eleventh International Joint Conference on Autonomous Agents and Multiagent Systems,
Valencia, Spain, June 2012.
- Robustness of Two Air Traffic Scheduling Approaches to Departure Uncertainty. A. Agogino and J. Rios. In Digital Avionics Systems Conference, 2011.
- Learning Indirect Actions in Complex Domains: Action Suggestions for Air Traffic Control. A. Agogino and K. Tumer. Advances in Complex Systems, 12():493–512, 2009.
- Multiagent Learning for Black Box System Reward Functions.
K. Tumer and A. Agogino. Advances in Complex Systems, 12():475–492, 2009.
- Component Evolution for Large Scale Air Traffic Optimization. A. Agogino. In Proceedings of the Genetic and Evolutionary Computation Conference, 2010 (extended abstract).