Distributed Coordination of Agents

for Air Traffic Flow Management

Principal Investigators:

Kagan Tumer, Oregon State University

A. K. Agogino, UCSC

This project is supported by the National Science Foundation under grants 0930168 and 0931591.

Project Overview

This project addresses the management of the air traffic system, a cyber-physical system in which a tight connection between the computational algorithms and the physical system is critical to safe, reliable and efficient performance. Indeed, the lack of this tight connection is one of the reasons current systems are overwhelmed by ever-increasing traffic and suffer when there is any deviation from expected conditions (e.g., changing weather). Estimates put delays induced by weather, routing decisions and airport conditions at 1,682,700 hours in 2007, resulting in a staggering economic loss of over $41 billion. Because infrastructure improvements are neither affordable nor likely to address the root of the problem, the needed capacity improvements have to come almost entirely from more efficient algorithms.

Multiagent coordination algorithms are ideally suited to address this problem. Yet many such approaches consider only the computational problem and produce solutions that are difficult or impossible to implement because they ignore the role of current systems and procedures (e.g., free flight). Our previous work on agent-based air traffic management has shown that if agents are used within the current air traffic system, the careful selection of the agents (e.g., fixes), their actions (e.g., setting aircraft separation) and their reward functions (e.g., their impact on system performance) can provide significant improvements over the current state of the art. However, that early work was based on a limited set of agent actions, used simulated data and relied heavily on a computationally costly simulator. In real-world systems, there is significant interaction among the agents, particularly when the action set is expanded (e.g., reroutes or ground delays). In this project, we aim to study the impact of agent actions, rewards and interactions on system performance using data from real air traffic systems.

The key contribution of this project lies in addressing the agent coordination problem in a physical setting by shifting the focus from "how to learn" to "what to learn." This paradigm shift allows us to separate advances in learning algorithms from the reward functions used to tie those learning systems into physical systems. By exploring agent reward functions that implicitly model agent interactions based on feedback from the real world, we aim to build cyber-physical systems in which an agent that learns to optimize its own reward also optimizes the system objective function. In addition, by focusing on coordinating agents in the complex air traffic management domain, the proposed work aims to demonstrate the benefits of cyber-physical systems both in yielding new computational advances and in providing new solutions to complex real-world problems.
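The "what to learn" idea can be sketched with a toy congestion problem. This is a minimal illustration, not the project's actual simulator or reward definitions: the route count, capacity, and the simple bandit learner below are all illustrative assumptions. Each agent picks a route, a global objective G penalizes over-capacity traffic, and each agent receives a difference-style reward that isolates its own contribution to G, so that reducing its own reward also reduces G.

```python
import random

# Illustrative toy problem (assumed values, not from the project):
# agents pick one of two routes; each route has a capacity, and the
# system objective G penalizes traffic above capacity.
CAPACITY = 3
N_ROUTES = 2
N_AGENTS = 6

def system_cost(counts):
    """Global objective G(z): total over-capacity congestion."""
    return sum(max(0, c - CAPACITY) for c in counts)

def difference_reward(counts, my_route):
    """D_i = G(z) - G(z with agent i removed): the system cost with the
    agent's aircraft present minus the cost without it, isolating the
    agent's own impact on the global objective."""
    without = list(counts)
    without[my_route] -= 1
    return system_cost(counts) - system_cost(without)

def run(episodes=500, eps=0.1, lr=0.2, seed=0):
    """Epsilon-greedy bandit agents, each minimizing its own D_i."""
    rng = random.Random(seed)
    q = [[0.0] * N_ROUTES for _ in range(N_AGENTS)]  # per-agent estimates
    for _ in range(episodes):
        acts = [rng.randrange(N_ROUTES) if rng.random() < eps
                else min(range(N_ROUTES), key=lambda r: q[i][r])
                for i in range(N_AGENTS)]
        counts = [acts.count(r) for r in range(N_ROUTES)]
        for i, a in enumerate(acts):
            d = difference_reward(counts, a)  # a cost: lower is better
            q[i][a] += lr * (d - q[i][a])
    greedy = [min(range(N_ROUTES), key=lambda r: q[i][r])
              for i in range(N_AGENTS)]
    return system_cost([greedy.count(r) for r in range(N_ROUTES)])

if __name__ == "__main__":
    print("final system cost:", run())
```

The point of the difference reward is alignment: an agent whose aircraft adds nothing to congestion sees D_i = 0, while one on an over-capacity route sees a positive cost, so selfish learning pushes the joint action toward lower G without any agent modeling the others explicitly.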

Recent Results

Key Publications
