Kagan Tumer is the director of the Collaborative Robotics and Intelligent Systems (CoRIS) Institute, and a professor in the School of Mechanical, Industrial and Manufacturing Engineering at Oregon State University.


Our work in the Autonomous Agents and Distributed Intelligence Lab focuses on collective decision-making and multiagent coordination to satisfy long-term, dynamic, opaque, and possibly ill-defined objectives. Broadly, we develop methods that address the following research areas:

  • Multiagent Learning: Who learns what?
  • Co-Evolutionary Algorithms: Who impacts whose fitness?
  • Artificial General Intelligence: What matters when?
  • Long-term learning and teaming: What behavior supports future intent?
  • Societal impact of AI and robotics: How do we incorporate societal considerations into AI systems?

Multiagent Coordination and Reward Shaping

Achieving coordination in large multiagent systems is challenging because it often requires many agents to discover good joint actions simultaneously. The problem becomes even more pronounced when the only feedback for success is a single, sparsely available, team-wide signal: each agent must isolate the impact of its own actions on the team's performance [1, 2, 3]. Reward shaping addresses these difficulties by giving each agent a way to evaluate the impact of its own contribution on system performance [4, 5, 6].
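One well-known shaped reward in this family is the difference reward, which credits each agent with the change in the global evaluation caused by its own action. The sketch below is a minimal toy illustration, not any specific published implementation; the objective `G`, the `NULL_ACTION` counterfactual, and the example joint action are all illustrative assumptions.

```python
# Toy difference-reward sketch: an agent's reward is the global
# evaluation G of the joint action, minus G of a counterfactual joint
# action in which that agent's action is replaced by a null (no-op)
# action. G, NULL_ACTION, and the action encoding are illustrative.

NULL_ACTION = 0

def G(joint_action):
    # Toy team objective: number of distinct non-null "slots" covered.
    return len({a for a in joint_action if a != NULL_ACTION})

def difference_reward(joint_action, i):
    counterfactual = list(joint_action)
    counterfactual[i] = NULL_ACTION  # remove agent i's contribution
    return G(joint_action) - G(counterfactual)

joint = [1, 1, 2]  # agents 0 and 1 duplicate each other's work
rewards = [difference_reward(joint, i) for i in range(len(joint))]
# rewards == [0, 0, 1]: the redundant agents receive no credit, while
# the agent covering a unique slot is credited for its contribution.
```

This captures the credit-assignment intuition in the paragraph above: a team-wide signal alone (here, `G(joint) == 2` for every agent) cannot distinguish the redundant agents from the useful one, whereas the counterfactual comparison can.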

Multi-Objective Multiagent Decision-Making

Many complex real-world problems are naturally characterized by multiple objectives. For instance, to manage traffic in an urban area, one might want to maximize throughput, minimize latency, enforce fairness for all drivers, and minimize environmental impact. Research in multiagent multi-objective learning aims to facilitate collective decision-making in such real-world problems [7, 8, 9]. In addition, multi-objective learning is a natural framework for introducing human-centered and societal considerations into problem formulations [10].
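Because these objectives typically conflict, there is usually no single best policy, only a set of Pareto-optimal trade-offs. The sketch below shows a standard Pareto-dominance filter on hypothetical traffic-policy scores; the objective tuples and their values are invented for illustration.

```python
# Toy Pareto-front computation for multi-objective decision-making.
# Each candidate is a tuple of objective scores, all to be maximized
# (latency is negated so larger is better). Values are illustrative.

def dominates(a, b):
    # a dominates b if it is at least as good on every objective
    # and strictly better on at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (throughput, -latency, fairness) for three hypothetical policies
policies = [(10, -5, 0.8), (8, -3, 0.9), (7, -6, 0.7)]
front = pareto_front(policies)
# front == [(10, -5, 0.8), (8, -3, 0.9)]: the third policy is worse
# on every objective and is filtered out; the first two trade off
# throughput against latency and fairness, so both survive.
```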

Diversity and Teaming

As teams grow larger and tasks become more complex, agents need a better understanding of the actions and responses of the other agents on their team. To sustain cooperation, they must also learn diverse strategies that can be conditioned on task dynamics, team composition, and the behaviors of other agents [11, 12, 13]. Research in this thread focuses on the informed discovery, preservation, and improvement of diverse behaviors that allow agents to radically adapt to the needs of their teams [14, 15].
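One common ingredient in diversity-driven search is a novelty score: a candidate behavior is valued by how far it lies from previously discovered behaviors, encouraging the search to keep expanding its repertoire. The sketch below is a generic toy version with scalar behavior descriptors; the descriptors, the distance measure, and `k` are all illustrative assumptions, not a specific method from the cited papers.

```python
# Toy novelty score: a behavior's novelty is the mean distance to its
# k nearest neighbors in an archive of previously seen behaviors.
# Scalar descriptors and k=2 are illustrative simplifications.

def novelty(behavior, archive, k=2):
    if not archive:
        return float("inf")  # anything is novel against an empty archive
    dists = sorted(abs(behavior - other) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

archive = [0.0, 0.1, 0.9]
score_common = novelty(0.05, archive)  # close to existing behaviors
score_novel = novelty(0.5, archive)    # far from everything archived
# score_novel > score_common: selecting for novelty steers the search
# toward behaviors unlike those already discovered.
```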


Paper Awards

  • Best Real World Application Paper Award: “Evolving Large Scale UAV Communication Systems”
    A. Agogino, C. HolmesParker, and K. Tumer

    GECCO 2012.

  • Best Paper Award: “Distributed Agent-Based Air Traffic Flow Management”
    K. Tumer and A. Agogino

    AAMAS 2007.

  • Nomination for Best Paper Award: “Informed Diversity Search for Learning in Asymmetric Multiagent Systems”
    G. Dixit and K. Tumer

    GECCO 2024.

  • Nomination for Best Paper Award: “Reinforcing Inter-Class Dependencies in the Asymmetric Island Model”
    A. Festa, G. Dixit, and K. Tumer

    GECCO 2024.

  • Best Paper Award Finalist: “Diversifying Behaviors for Learning in Asymmetric Multiagent Systems”
    G. Dixit, E. Gonzalez, and K. Tumer

    GECCO 2022.

  • Best Paper Award Finalist: “Evolving Memory-Augmented Neural Architectures for Deep Memory Problems”
    S. Khadka, J. J. Chung, and K. Tumer

    GECCO 2017.

  • Best Real World Applications Paper Award Finalist: “Robust Neuro-Control for a Micro Quadrotor”
    J. Shepherd III and K. Tumer

    GECCO 2010.

  • Best Real World Applications Paper Award Finalist: “A Neuro-Evolutionary Approach to Micro Aerial Vehicle Control”
    M. Salichon and K. Tumer

    GECCO 2010.