Complex Interactive Networks: Toward Self-healing Infrastructures
Massoud Amin
Electric Power Research Institute
Societies Solve Dilemmas with Groups:
Non-Cooperative Coalition Formation with Internal Redistribution
Rob Axtell
The Brookings Institution
Relation to collective decision-making/performance: Individual,
self-interested agents are fully represented. Each agent has an
explicit, private, scalar utility function that it evaluates in deciding
how to act. Societal welfare can be conceived, without loss of generality,
as the sum of individual utilities. In lieu of explicit design, collective
performance is achieved via individual adaptation to the social
environment.
Abstract: We claim that human societies have evolved social institutions,
mainly in the form of multi-agent groups with internal redistribution, in
order to reach high levels of inter-agent cooperation. This is the
so-called 'evolution of cooperation' problem and has heretofore been
analyzed from purely autarkic perspectives--e.g., non-cooperative game
theory and political science--or in aggregate terms--e.g., sociology.
Empirically, internal redistribution is a feature of essentially all
human groupings having any coherence. Here we treat this explicitly. In
a society of agents, groups are permitted to form in order for their
members to engage in non-cooperative play of some game. A fraction of
the rewards from such interactions is given to the group, whence it is
subsequently redistributed. Permitting agents to migrate
between groups yields an evolution of group sizes and redistribution
rates toward more efficient outcomes. Asymptotically, the society of
agents partitioned into groups reaches Pareto efficiency,
i.e., it extracts all available surplus, despite constant adaptation of
behavior at the agent level. Thus, 'social dilemmas' cease to exist in
the presence of such evolved groups. Implications for group selection
arguments are briefly drawn out.
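The incentive effect of internal redistribution can be seen in a back-of-envelope sketch (our arithmetic, not the paper's model): if a fraction r of every payoff goes into a pool split equally among the n group members, an agent keeps only (1 - r) + r/n of any extra payoff it grabs by defecting.

```python
def retained_fraction(r, n):
    """Share of a marginal payoff an agent keeps under redistribution
    rate r in an n-member equal-sharing group."""
    return (1.0 - r) + r / n

# Prisoner's Dilemma temptation: defecting against a cooperator earns
# 5 instead of 3, a marginal gain of 2 (standard illustrative payoffs).
temptation_gain = 5 - 3
kept_no_tax = temptation_gain * retained_fraction(0.0, n=10)
kept_high_tax = temptation_gain * retained_fraction(0.9, n=10)
# high redistribution shrinks the private gain from defection toward gain/n
```

As r rises toward 1, the private gain from defection shrinks toward gain/n, which is the intuition behind social dilemmas dissolving inside evolved redistributive groups.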
Developmental Stability and Evolution
Aviv Bergman
Stanford University
Competition Between Adaptive Agents: Learning and Collective Efficiency
Damien Challet
Oxford University
We use the Minority Game and its variants to show how efficiency depends on
the learning procedure in models of agents competing for limited resources.
Exact results from statistical physics give a deep understanding of the
phenomenology.
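For readers unfamiliar with the model, a minimal Minority Game simulation looks like the following (all parameter values are illustrative, not those of the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

N, S, M, T = 101, 2, 3, 2000   # agents (odd), strategies per agent, memory, rounds
P = 2 ** M                     # number of distinguishable histories

# Each agent draws S fixed random strategies: lookup tables from the
# encoded history of the last M winning sides to an action in {-1, +1}.
strategies = rng.choice([-1, 1], size=(N, S, P))
scores = np.zeros((N, S))       # virtual score of each strategy
history = int(rng.integers(P))  # encoded recent history
attendance = []

for _ in range(T):
    best = scores.argmax(axis=1)                      # each agent follows its best strategy
    actions = strategies[np.arange(N), best, history]
    A = int(actions.sum())                            # "attendance"; minority side wins
    attendance.append(A)
    scores -= strategies[:, :, history] * np.sign(A)  # reward strategies on the minority side
    history = (2 * history + (1 if A < 0 else 0)) % P

sigma2_over_N = np.var(attendance) / N  # the standard (in)efficiency measure
```

The quantity sigma2_over_N, the variance of the attendance per agent, is the efficiency measure whose dependence on memory and learning rules the statistical-physics analysis explains.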
Rigor and Robustness in Collective Dynamics
John Doyle
Caltech
Self-Play Can Improve the Performance of Certain Kinds
of Collectives, but Which Kinds?
David Fogel
Natural Selection Inc.
Asynchronous Learning in Decentralized Environments:
A Game Theoretic Approach
Eric Friedman
Cornell University
We are interested in designing protocols for the Internet which will work
with self-interested users or agents. Our main tools are those from game
theory and mechanism design. Formally, each agent has a utility function
over outcomes (the set of feasible allocations of resources) and is trying
to maximize its own utility using any "reasonable" learning algorithm to
find the "optimal" action. Our forward problem is to find the set of
outcomes which arise from this process. We call this set the solution
concept. Our inverse problem (which in our terminology is the mechanism
design problem) is to design the network, such that the outcomes according
to the solution concept maximize some social choice function, typically the
sum of the utilities.
Our contribution is to show that the solution concept is NOT the Nash
equilibrium or even the serially undominated set, but is contained inside
the serially unoverwhelmed set. This is based on theoretical analyses,
simulations and experiments with human subjects. We then show that this
has strong implications for the kinds of social choice functions for which
it is possible to design good mechanisms.
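The solution-concept machinery can be made concrete for small games. Below, iterated elimination of strictly dominated strategies computes the serially undominated set for an illustrative 2x2 game (a Prisoner's Dilemma); the serially unoverwhelmed set of the talk is a larger, more permissive set not computed here.

```python
payoffs = [            # payoffs[i][j] = (row player's payoff, column player's payoff)
    [(3, 3), (0, 4)],
    [(4, 0), (1, 1)],
]

def eliminate(payoffs):
    """Iterated elimination of strictly dominated strategies."""
    rows = set(range(len(payoffs)))
    cols = set(range(len(payoffs[0])))
    changed = True
    while changed:
        changed = False
        for i in list(rows):
            # remove row i if some surviving row is strictly better vs all cols
            if any(all(payoffs[k][j][0] > payoffs[i][j][0] for j in cols)
                   for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in list(cols):
            if any(all(payoffs[i][k][1] > payoffs[i][j][1] for i in rows)
                   for k in cols if k != j):
                cols.remove(j)
                changed = True
    return sorted(rows), sorted(cols)

undominated = eliminate(payoffs)   # only (defect, defect) survives
```

In this Prisoner's Dilemma the serially undominated set collapses to the single outcome (1, 1); the point of the talk is that learning agents need not land inside this set, only inside the larger serially unoverwhelmed set.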
Multi-Agent Control of Modular Self-Reconfigurable Robots
Tad Hogg (Joint work with Arancha Casal)
Hewlett Packard
Modular self-reconfigurable (MSR) robots consist of large numbers of
identical modules that can move, attach and detach relative to each other,
thereby changing the robot's overall shape. This paper presents general
design techniques for the
multiagent control algorithms of MSR robots. These techniques
are illustrated with simulation experiments on two types of
MSR robots: Proteo and Telecube.
Our experiments show that distributed control based purely on local rules
results in the desired global behavior in systems with hundreds to
thousands of modules. Controlling such large numbers of modules is
impractical using centralized control techniques. We show results for
various tasks, such as static and dynamic structure generation, locomotion
and navigation.
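The flavor of purely local control can be conveyed with a toy sketch: each module repeatedly applies the same local rule, and the desired global shape emerges. Real MSR systems such as Proteo and Telecube must also preserve connectivity, which this sketch deliberately ignores.

```python
# Modules occupy grid cells; the single local rule is "step toward the
# target column if the adjacent cell is free". No module knows the global
# configuration, yet the collective reassembles at the target.
modules = {(0, y) for y in range(10)}   # modules start in one column
TARGET_X = 8                            # goal: reassemble at column 8

for _ in range(TARGET_X):
    for x, y in sorted(modules):
        if x < TARGET_X and (x + 1, y) not in modules:
            modules.remove((x, y))
            modules.add((x + 1, y))
```

After the loop every module sits in the target column, a (much simplified) instance of a global structure produced without any centralized controller.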
Dynamics of Large Autonomous Computational Systems
Bernardo A. Huberman (Joint work with Tad Hogg)
Hewlett Packard
Distributed large scale computation gives rise to a wide range of
behaviors, from the simple to the chaotic. This diversity of
behaviors stems from the fact that the agents and programs have
incomplete knowledge and imperfect information on the state of the
system. We describe an instantiation of such systems based on
market mechanisms which provides an interesting example of
autonomous control. We also show that when agents choose among
several resources, the dynamics of the system can be oscillatory
and even chaotic. Furthermore, we describe a mechanism for
achieving global stability through local controls.
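The oscillatory regime can be illustrated with a small sketch of resource-choice dynamics under delayed information: agents reallocate toward the resource that looks better, but evaluate it on stale utilization data. With fast adaptation and long delays the balanced equilibrium destabilizes. All parameter values here are illustrative, not from the talk.

```python
import math

def simulate(alpha, delay, steps=400):
    f = [0.2] * (delay + 1)          # fraction of agents on resource 1
    for _ in range(steps):
        stale = f[-1 - delay]        # utilization as perceived after the delay
        # preference for resource 1 falls sharply once it looks congested
        rho = 1.0 / (1.0 + math.exp(20.0 * (stale - 0.5)))
        f.append(f[-1] + alpha * (rho - f[-1]))
    return f

calm = simulate(alpha=0.1, delay=0)  # converges to the balanced state 0.5
wild = simulate(alpha=0.9, delay=4)  # sustained oscillations
```

Slowing adaptation (smaller alpha) is one form of the local control mentioned above: it restores stability without any global coordinator.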
Optimal Collectives of Autonomous Defects
Neil Johnson
Oxford University
Imperfection is an integral part of Nature, but it cannot always be
tolerated. High-technology devices, for example, must be precise and
dependable. A problem of significant economic and ecological importance is
what to do with a component that is already known to be defective. Such
components are usually considered useless and hence wasted. Our work
considers how to make best use of imperfect
objects, such as defective analog and digital components. In addition to its
practical applications, our 'defect combination problem' (DCP) provides a
novel generalization of classical optimization problems. As such, it is
amenable to investigation using the COIN (COllective INtelligence)
techniques developed by Wolpert, Tumer and co-workers. Specifically, Wolpert
and Tumer have shown that one can treat the DCP within the COIN paradigm, by
taking the average error as the world utility, G. There are then N
individual agents, each setting one of the errors or distortions n_j. The
goal is to give those agents private utilities so that the maximizer of G is
found as they learn to maximize their private utilities.
We present and extend the DCP work, showing that perfect, or near-perfect,
devices can be constructed by taking combinations of such defects. Any
remaining objects can be recycled efficiently. Although combining simple
analog devices is not attractive since it is usually much easier and cheaper
to subtract the errors from the outputs, such active error-correction may
not be practical in more complex systems, particularly next-generation
technologies in the ultrasmall nano/micro regime. It is in these fields, and
in particular the fields of nano-computers and nano-robotics, that we foresee
significant potential application. Our results imply that the 'quality' of a
component is not determined solely by its own intrinsic error. Instead error
becomes a collective property, which is determined by the 'environment'
corresponding to the other defective components. Finally, we present an
agent-based discussion of these problems and propose extensions for future
study within the collectives framework.
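A toy version of the defect combination problem makes the "error as a collective property" point concrete. We assume (our simplification) that each component has a known scalar error and that a composite device built from two components has the average of their errors; a direct greedy pairing is shown as a baseline, whereas the talk treats the problem with learned private utilities in the COIN framework.

```python
import random

random.seed(3)

errors = [random.uniform(-1.0, 1.0) for _ in range(100)]  # defective components

ordered = sorted(errors)
# pair the most negative error with the most positive, and so on inward,
# so the two defects largely cancel in each composite device
pairs = [(ordered[i], ordered[-1 - i]) for i in range(len(ordered) // 2)]
pair_errors = [abs(a + b) / 2.0 for a, b in pairs]

worst_single = max(abs(e) for e in errors)
worst_pair = max(pair_errors)
```

The worst composite device is far more accurate than the worst raw component: the effective quality of a defect depends on which other defects it is combined with, not on its intrinsic error alone.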
Large-Scale System Optimization: Designing Collectives
Ilan Kroo
Stanford University
Two Paradigms for the Design of Artificial Collectives
Kristina Lerman (Joint work with Aram Galstyan)
University of Southern California/ISI
Our research goal is to understand the collective behavior of artificial
collectives, such as multi-robot and other multi-agent systems. We study
systems composed of very simple agents in which beneficial behavior
emerges only on a collective level. Our approach is to model such agents
as stochastic Markov elements, where each agent's future state depends
only on its present state. Once this mapping is made, we can employ the
machinery of stochastic processes used by chemists and physicists to
create mathematical models of collective behavior. Specifically, we
describe the dynamics of collective behavior using a rate-equation
approach. We have applied this analysis to two different robot systems.
Another direction we are pursuing is to study distributed mechanisms for
coordination among agents using iterative game dynamics. Here again,
robust global or collective coordination arises in a system of locally
interacting agents. We have shown this behavior in a simple resource
allocation task where the resource capacity changes in time.
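A minimal example of the rate-equation approach described above: robots stochastically switch between a "searching" state and an "avoiding" (collision-recovery) state, and the macroscopic dynamics track only the average occupation numbers. The rates alpha (collisions per unit time) and tau (mean avoidance duration) are illustrative values, not taken from the actual robot systems.

```python
alpha, tau = 0.6, 2.0
N = 20.0                  # total number of robots (conserved)
n_avoid = 0.0             # robots currently in the avoiding state
dt, T = 0.01, 30.0

t = 0.0
while t < T:
    # dn/dt = (inflow from searching robots) - (outflow back to searching)
    dn = alpha * (N - n_avoid) - n_avoid / tau
    n_avoid += dn * dt
    t += dt

# analytic steady state: alpha*(N - n) = n/tau  =>  n = N*alpha*tau/(1 + alpha*tau)
steady = N * alpha * tau / (1.0 + alpha * tau)
```

Once agents are mapped to Markov elements, such equations predict collective quantities (here, the steady-state number of avoiding robots) without simulating individual robots.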
Cooperation in Iterated-Game Collectives
Kristian Lindgren (Joint work with Anders Eriksson)
Goteborg University
We investigate a new type of repeated game in which the payoff
matrix is randomly generated for each round of the game. A player may
observe what it looks like, and she may also remember actions taken in
previous rounds of the game. Each round thus presents a completely
new situation: some cases may resemble the PD game, but others may be
completely different.
In order to investigate whether this type of repeated game can lead to
cooperative behaviour, we study various types of evolutionary models in
which agents have strategies represented by finite state automata. We
present results of a model of a mixed population in which all play
against all in the iterated game, and a model of a spatially extended
system in which interactions are with nearest neighbours only. The
results show, in both cases, that cooperative behaviour does evolve, but
not as easily as in the iterated PD game. In our model, cooperation
means that players aim for the part of the payoff matrix where the sum
of own and opponent payoff is the largest. If both act according to this
strategy, they will in the long run share the highest possible total
payoff, thus maximizing population utility.
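The notion of cooperation used above can be sketched directly: in each round a fresh random bimatrix is drawn, a cooperator aims for the cell maximizing the sum of both payoffs, and a self-interested baseline (here a maximin player, our choice of contrast) ignores the opponent. The finite-state-automaton strategies of the actual model are not reproduced here.

```python
import random

random.seed(1)

def random_payoffs():
    # a fresh random 2x2 bimatrix each round: cell (i, j) holds
    # (row player's payoff, column player's payoff)
    return [[(random.random(), random.random()) for _ in range(2)]
            for _ in range(2)]

def cooperative(p, role):
    # cooperation in this game: aim for the cell that maximizes the SUM
    # of both players' payoffs
    i, j = max(((i, j) for i in range(2) for j in range(2)),
               key=lambda c: p[c[0]][c[1]][0] + p[c[0]][c[1]][1])
    return (i, j)[role]

def greedy(p, role):
    # self-interested baseline: maximize one's own worst-case payoff
    if role == 0:
        return max(range(2), key=lambda i: min(p[i][j][0] for j in range(2)))
    return max(range(2), key=lambda j: min(p[i][j][1] for i in range(2)))

def mean_total(strat, rounds=5000):
    total = 0.0
    for _ in range(rounds):
        p = random_payoffs()
        i, j = strat(p, 0), strat(p, 1)
        total += p[i][j][0] + p[i][j][1]
    return total / rounds

coop_pair = mean_total(cooperative)   # two cooperators share the max total
greedy_pair = mean_total(greedy)
```

Two cooperators obtain the highest possible total payoff every round, so their long-run average total strictly exceeds that of two self-interested players.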
Adaptive Compilation in Randomly Assembled Computers
Mark Millonas (Joint work with David Wolpert)
NASA Ames Research Center
As the basic components that make up computers are miniaturized it
will become increasingly difficult and expensive to assemble them
according to exactingly pre-specified blueprints. Molecule-sized
electronic components are much more likely to be fabricated into
computational devices through a process that, to a greater or
lesser degree, can result in computers with random physical and
dynamical properties. Similarly, defects in components created
either before or after the fabrication of a computer will result
in such random properties, even in non-nano-scale computers.
Here we outline a scheme for adaptive programming of such random
computers. As an illustration we show that a random network of coupled
maps, modeling molecular electronic components with a specified response
to externally applied fields, can be adaptively programmed to perform
certain computations.
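One way to sketch adaptive programming of a randomly assembled computer: the coupled-map network below has fixed random couplings, standing in for a randomly fabricated device, and only a linear readout is adapted so that the device computes a target function. This reservoir-style scheme is our illustration, not necessarily the authors' exact adaptive-compilation method.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 30
W = rng.normal(0.0, 0.4, (n, n)) / np.sqrt(n)  # random internal couplings
w_in = rng.normal(0.0, 1.0, n)                 # how the external field couples in

def run(u, steps=20):
    """Drive the network with scalar input u; return the final state."""
    x = np.full(n, 0.3)
    for _ in range(steps):
        y = 0.5 * (np.tanh(W @ x + w_in * u) + 1.0)  # squash drive into (0, 1)
        x = 3.7 * y * (1.0 - y)                      # logistic-map update
    return x

us = np.linspace(-1.0, 1.0, 50)
X = np.array([run(u) for u in us])
target = us ** 2                                 # illustrative target computation
w_out, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ w_out
err = float(np.max(np.abs(pred - target)))
```

The physical substrate is never rewired; only the cheap, adaptable part (the readout) is fit, which is the essential move when blueprints cannot be specified exactly.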
Efficiency and Equity in Collective Action of
Interacting Heterogeneous Agents
Akira Namatame
National Defense Academy, Japan
In this paper, we address the issue of realizing efficient and equitable
utilization of limited resources by the collective decisions of interacting
heterogeneous agents. We especially address the forward problem of
collectives, namely how interacting heterogeneous agents may self-organize
into collectives, and the inverse problem of designing rules of information
guidance so that collectives self-organize toward efficiency and equity.
There is no presumption that the collective action of interacting agents
leads to collectively satisfactory results without any central authority.
How well agents do in adapting to their environment is not the same thing
as how satisfactory an environment they collectively create. Agents
normally react to others' decisions, and the resulting volatile collective
decision is often far from efficient. By means of experiments, we show that
the overall performance of the system depends crucially on the types of
interaction as well as the heterogeneity of preferences. We also show that
the most crucial factor in improving performance is the way information is
presented to agents. It is shown that if each agent adapts to global
information, performance is poor. The optimal guidance strategy for
improving both efficiency and equity depends on the mode of interaction.
With symmetric interaction, local information from agents of the same
preference type yields the highest performance. With asymmetric
interaction, however, local information from agents of the opposite
preference type yields the highest performance.
Mechanism Design for Complex Systems: Towards Automatic Configuration
David Parkes
Harvard University
Computation is increasingly distributed, and performed on open
networks by autonomous agents. A new challenge for computer science
is to develop a new mathematics to analyze and understand
these distributed and anarchic systems, and to construct,
or grow, good distributed systems. In the terminology of collectives,
each agent has its own private utility function, and as a
system designer we wish to implement an outcome that maximizes
some social-choice function, given agents' preferences. Moreover,
each agent is self-interested, and distributed mechanisms are
analyzed using the tools of game theory. Classic mechanism design
provides some useful suggestions for methods to construct games with good
equilibrium properties, but remains a brittle tool for the design of
complex and highly distributed adaptive systems. First, mechanism
design is performed off-line, for a given set of assumptions about
the environment, about agent rationality, agent information, etc.
Computational agents have varying degrees of rationality,
can fail arbitrarily, and can adapt to and learn about their environments.
Second, mechanism design is typically applied to one-shot problems,
and little is known about mechanism design for sequentially
evolving systems. Interesting problems are multi-stage, and can be
highly combinatorial and require approximation.
In this work I choose to focus on two orthogonal
problems: mechanism design for a repeated problem in which agents
are adaptive, and learn across rounds; and mechanism design for
a dynamic problem in complex networks, in which agents are myopically
rational and the goal is to grow networks with desirable
properties. The contribution is mainly to frame a research agenda,
and make a few initial observations.
Solving the Evolution-of-Complexity Problem
Jordan Pollack
Brandeis University
Man and Superman: Human Limitations, Innovation and Emergence in
Resource Competition
Robert Savit
University of Michigan
It's Not Your Father's Mechanism Design
Yoav Shoham
Stanford University
Effects of Interagent Communication on Collectives
Zoltan Toroczkai
Los Alamos National Laboratory
An Introduction to Collectives
Kagan Tumer (Joint work with David Wolpert)
NASA Ames Research Center
Many systems of self-interested agents have an associated performance
criterion that rates the dynamic behavior of the overall system. This
presentation introduces collectives, defined as any system having the
following two characteristics: first, the system must contain one or more
agents, each of which we view as trying to maximize an associated payoff
utility; second, the system must have an associated world utility function
that rates the possible behaviors of the overall system.
In this presentation we discuss the fundamental properties
that the payoff utilities need to meet in order for the collective to
achieve high world utility.
We then show that designing a collective using these properties
significantly outperforms collectives designed in conventional manners on
a host of different domains, including congestion games, coordination of
multiple rovers, data download across a constellation of satellites, and
data routing.
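A congestion-game sketch of the two design choices at issue: giving every agent the world utility itself (the "team game") versus a difference utility that isolates each agent's own contribution. The epsilon-greedy learners and all parameter values are illustrative choices of ours, not the experiments of the talk.

```python
import random

random.seed(4)

N, K, T = 60, 4, 300        # agents, resources, learning rounds
CAP = N / K                 # ideal load per resource

def world_utility(counts):
    # world utility penalizes squared deviation from the ideal load
    return -sum((c - CAP) ** 2 for c in counts)

def train(use_difference):
    q = [[0.0] * K for _ in range(N)]
    per_round = []
    for _ in range(T):
        acts = []
        for i in range(N):
            if random.random() < 0.2:     # epsilon-greedy exploration
                acts.append(random.randrange(K))
            else:
                acts.append(max(range(K), key=lambda a: q[i][a]))
        counts = [acts.count(a) for a in range(K)]
        G = world_utility(counts)
        per_round.append(G)
        for i in range(N):
            if use_difference:
                without = counts[:]
                without[acts[i]] -= 1     # the system with agent i removed
                payoff = G - world_utility(without)   # difference utility
            else:
                payoff = G                # team game: private utility = G
            q[i][acts[i]] += 0.1 * (payoff - q[i][acts[i]])
    return sum(per_round[-100:]) / 100.0  # average world utility, late rounds

g_team = train(use_difference=False)
g_diff = train(use_difference=True)
```

Under the team game every agent receives the same noisy signal regardless of its own action, so credit assignment fails; the difference utility rewards exactly an agent's marginal effect on congestion and reaches much higher world utility.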
The Mathematics of Collectives
David Wolpert
NASA Ames Research Center
We consider the problem of designing (perhaps massively
distributed) collectives of computational processes to maximize a
provided "world" utility function. We concentrate on the situation
where the behavior of each process in the collective can be cast as
striving to maximize its own private utility function. For such
situations the central design issue is how to initialize/update those
private utility functions of the individual processes so as to induce
behavior of the entire collective having good values of the world
utility. Traditional "team game" approaches to this problem simply
assign to each process the world utility as its private utility
function. The "Collective Intelligence" (COIN) framework is a
semi-formal set of heuristics that recently have been used to
construct private utility functions that in many experiments have
resulted in world utility performance up to orders of magnitude
superior to that ensuing from use of the team game utility. In this
paper we introduce a formal mathematics for analyzing collectives. We
use it to explain these previous results concerning the superiority of
COIN heuristics in the domains in which they were tested. We also use
that framework to make predictions that can be tested in
experiments. We also use this framework to suggest new utilities that
should outperform the COIN heuristics in certain kinds of domains. In
this way we establish the study of collectives as a proper science,
involving experimental explanation, experimental prediction, and
engineering insights.
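A central construction in this line of work, stated here in the plain notation of the COIN literature (the wording of this summary is ours), is the difference utility. For a process eta with joint system state z,

    D_eta(z) = G(z) - G(z with eta's component clamped to a fixed value)

Because the subtracted term does not depend on eta's own behavior, any change eta makes moves D_eta and G by exactly the same amount, so a process improving its private utility D_eta can never lower the world utility G. At the same time, the subtraction cancels much of the effect of the other processes, making D_eta far easier for an individual process to learn than G itself. These two properties, alignment with G and learnability, are the kind of formal criteria the mathematics uses to explain past results and predict when a private-utility design will outperform the team game.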