**Abductive Inference ARO MURI 2010 Workshop**

hosted by Oregon State University, School of Electrical Engineering and Computer Science

Poster Abstracts

Aniruddh Nath

University of Washington

Online Probabilistic Inference for Markov Logic

Many problems require repeated inference on probabilistic graphical models, with different values for evidence variables or other changes. Since small changes to the evidence typically affect only a small region of the network, repeatedly performing inference from scratch can be massively redundant. We propose two algorithms that address this problem. 'Expanding Frontier Belief Propagation' (EFBP) is an extension of loopy BP in which each run of inference reuses results from the previous ones instead of starting from scratch with the new evidence; messages are only propagated in the regions of the network affected by the changes. 'Delta-LNC' is an online lifted inference algorithm that efficiently updates the structure of an existing lifted network. This allows us to construct the lifted network once for the initial inference problem and amortize the cost over the subsequent problems. Experiments in the viral marketing, video segmentation, and probabilistic auction domains show that these algorithms greatly reduce the cost of inference without significantly affecting the quality of the solutions.
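The frontier idea can be illustrated with a toy sketch. This is not the actual EFBP algorithm (which propagates BP messages); the function names and the scalar update rule are illustrative assumptions. Starting from the variables whose evidence changed, a node is recomputed only when an update actually moves its value, and its neighbours are queued only when it does, so untouched regions of the network are never revisited.

```python
from collections import deque

def frontier_update(graph, values, changed, update, tol=1e-6):
    """Recompute values only where a change in evidence actually propagates.

    graph:   dict node -> list of neighbours
    values:  dict node -> value from the previous inference run
    changed: nodes whose evidence changed
    update:  function(node, values) -> new value for node
    Returns the set of nodes that were recomputed.
    """
    queue, touched = deque(changed), set()
    while queue:
        node = queue.popleft()
        new = update(node, values)
        if abs(new - values[node]) > tol:
            values[node] = new
            touched.add(node)
            queue.extend(graph[node])  # expand the frontier to the neighbours
    return touched

# Two disconnected pairs; `values` is converged for the old evidence[0] == 1.0.
graph = {0: [1], 1: [0], 2: [3], 3: [2]}
evidence = {0: 0.0, 1: 1.0, 2: 0.0, 3: 0.0}  # evidence[0] just changed to 0.0
values = {0: 1.0, 1: 1.0, 2: 0.0, 3: 0.0}
upd = lambda n, v: 0.5 * evidence[n] + 0.5 * v[graph[n][0]]
touched = frontier_update(graph, values, {0}, upd)
```

Only the component containing the changed node is recomputed; the other component is never visited, which is the source of the savings over re-running inference from scratch.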

Yucheng Low

Carnegie Mellon University

GraphLab: A New Framework For Parallel Machine Learning

Exponentially increasing dataset sizes have driven Machine Learning (ML) experts to make use of parallel and distributed computing for their research. However, due to the complexities involved in distributed programming, ML researchers have turned to high-level abstractions such as Hadoop/MapReduce. These abstractions, though useful, are limited in expressiveness and unable to express a broad range of ML problems efficiently. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while providing powerful data consistency guarantees. We demonstrate that these guarantees can be extended to the distributed setting by addressing several key challenges, such as the management of distributed state, network bandwidth, and network latency, achieving state-of-the-art performance on a variety of real-world problems.
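The kind of pattern targeted here, an asynchronous iterative algorithm with sparse dependencies, can be sketched as a vertex update function plus a dynamic scheduler. The PageRank-style program below is a serial toy simulation of that execution model, not GraphLab's actual API; the graph, function names, and tolerance are illustrative assumptions.

```python
from collections import deque

# Toy directed graph: who links to whom.
edges = [(0, 1), (1, 2), (2, 0), (2, 1)]
verts = {0, 1, 2}
in_nbrs = {v: [u for (u, w) in edges if w == v] for v in verts}
out_nbrs = {v: [w for (u, w) in edges if u == v] for v in verts}

def pagerank_update(v, rank, d=0.85, tol=1e-6):
    """A vertex program: read the scope (v and its in-neighbours), write v's
    data, and return the vertices that should be rescheduled."""
    new = (1 - d) + d * sum(rank[u] / len(out_nbrs[u]) for u in in_nbrs[v])
    moved = abs(new - rank[v]) > tol
    rank[v] = new
    return out_nbrs[v] if moved else []  # reschedule only if v's value moved

def run(update, schedule):
    """Serial stand-in for the engine: drain the scheduler queue."""
    rank = {v: 1.0 for v in verts}
    queue = deque(schedule)
    while queue:
        queue.extend(update(queue.popleft(), rank))
    return rank

rank = run(pagerank_update, sorted(verts))
```

Unlike a MapReduce round, vertices run asynchronously against the latest neighbour values and the scheduler touches only vertices whose inputs changed, which is exactly the sparse-dependency pattern the abstract describes.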

Ethan Dereszynski

Oregon State University

Initial Analysis of Strategies in Real-Time Strategy Games

Real-time strategy (RTS) games provide good examples of tactical fighting under conditions of uncertainty and partial observation. In this domain, players must manage resource collection, build offensive and defensive units, and then attack their opponent successfully in order to win the game. Proficient players conduct surveillance in order to locate their opponent's holdings, including structure locations and army composition. This reconnaissance information can also be used to infer an opponent's overall strategy, which can provide a significant advantage in designing a counter-strategy. We collected complete logs of 43 two-person Wargus games (86 perspectives) between strong players over the course of two human user-studies. Wargus is an open-source clone of Warcraft II: Tides of Darkness, a popular RTS title. This poster presents an initial analysis of this data, a taxonomy of the strategies employed (including strategy shifts and adaptations), and statistics on the amount of surveillance performed. The poster also describes our initial approach to abductive inference of strategies from instantaneous information about the game state.

Jesse Davis

University of Washington

Bottom-Up Learning of Markov Network Structure

The structure of a Markov network is typically learned using top-down search. At each step, the search specializes a feature by conjoining it to the variable or feature that most improves the score. This is inefficient, testing many feature variations with no support in the data, and highly prone to local optima. We propose bottom-up search as an alternative, inspired by the analogous approach in the field of rule induction. Our BLM algorithm starts with each complete training example as a long feature, and repeatedly generalizes a feature to match its k nearest examples by dropping variables. An extensive empirical evaluation demonstrates that BLM is both faster and more accurate than the standard top-down approach, and also outperforms other state-of-the-art methods.
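The bottom-up generalization step can be sketched as follows. This toy, with hypothetical propositional features represented as variable/value dictionaries, shows only the drop-variables-to-cover-neighbours move; the actual BLM algorithm scores candidate generalizations (e.g., by pseudo-likelihood) before accepting them.

```python
def hamming(feature, example):
    """Distance between a partial feature and an example: disagreeing tests."""
    return sum(1 for var, val in feature.items() if example[var] != val)

def generalize(feature, examples, k=2):
    """One bottom-up step: drop from the feature every variable on which it
    disagrees with any of its k nearest examples, so the result matches all k."""
    nearest = sorted(examples, key=lambda e: hamming(feature, e))[:k]
    return {var: val for var, val in feature.items()
            if all(e[var] == val for e in nearest)}

examples = [
    {"smokes": 1, "cancer": 1, "friends_bob": 1},
    {"smokes": 1, "cancer": 1, "friends_bob": 0},
    {"smokes": 0, "cancer": 0, "friends_bob": 1},
]
f = dict(examples[0])          # start with a complete example as a long feature
g = generalize(f, examples[1:], k=1)
```

Starting from a whole training example and generalizing toward its neighbours keeps every intermediate feature supported by data, which is the efficiency argument against top-down specialization made above.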

Anton Chechetka

Carnegie Mellon University

Evidence-specific Structures for Rich Tractable CRFs

We present a simple and effective approach to learning tractable conditional random fields with structure that depends on the evidence. Our approach retains the advantages of tractable discriminative models, namely efficient exact inference and exact parameter learning. At the same time, our algorithm does not suffer the large expressive-power penalty inherent in fixed tractable structures. On real-life relational datasets, our approach matches or exceeds the state-of-the-art accuracy of dense models while providing an order-of-magnitude speedup.

Henry Kautz

University of Rochester

Recognizing Joint Activities in Capture the Flag

Recent research has shown that surprisingly rich models of human behavior can be learned from GPS (positional) data. However, most research to date has concentrated on modeling single individuals or aggregate statistical properties of groups of people. Given noisy real-world GPS data, we, in contrast, consider the problem of modeling and recognizing activities that involve multiple related individuals playing a variety of roles. Our test domain is the game of capture the flag, an outdoor game that involves many distinct cooperative and competitive joint activities. We model the domain using Markov logic, a statistical relational language, and learn a theory that jointly denoises the data and infers occurrences of high-level activities, such as capturing a player.

Parag Singla

University of Texas at Austin

Probabilistic Abduction in Relational Domains using Markov Logic

Abduction is inference to the best explanation of a given set of evidence. It is an important task for a number of applications, including intent recognition and plan recognition. Traditional approaches to abduction have either used some subset of first-order logic, which is unable to handle uncertainty, or probabilistic models (such as Bayesian networks), which cannot handle relations. In this work, we show how to perform probabilistic abductive reasoning in relational domains using Markov logic, a powerful framework that combines first-order logic and probability. Markov logic is inherently deductive and hence cannot be used as-is on a given knowledge base (KB) for abduction. We propose a novel way to augment the KB to make it amenable to abductive reasoning using Markov logic. We evaluate our formulation on two plan recognition datasets and compare it with another approach based on BALPs (Bayesian Abductive Logic Programs) as well as several other existing approaches.
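The basic trick for making a deductive formalism abduce, in the spirit of the Kate and Mooney (2009) translation used in related work, can be sketched by generating reverse clauses: each observable implies the disjunction of its possible causes. The predicate names below are hypothetical, and the full translation also adds mutual-exclusivity clauses and attaches weights, which this sketch omits.

```python
from collections import defaultdict

def abductive_clauses(rules):
    """Given forward (cause, effect) rules, build the reverse clauses that
    let a deductive engine infer causes from observed effects."""
    causes = defaultdict(list)
    for cause, effect in rules:
        causes[effect].append(cause)
    # Each observed effect implies the disjunction of its candidate causes.
    return [f"{effect} => {' v '.join(cs)}" for effect, cs in causes.items()]

rules = [("hurricane(x)", "damaged(x)"), ("earthquake(x)", "damaged(x)")]
clauses = abductive_clauses(rules)
```

With such clauses added to the KB, observing `damaged(x)` makes the possible explanations probable, and soft mutual-exclusivity clauses (not shown) push inference toward the single best explanation.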

Jerry Hobbs, University of Southern California

Ekaterina Ovchinnikova, University of Southern California and University of Osnabrueck

Abductive Reasoning for Recognizing Textual Entailment using Lexical-Semantic Resources

The poster describes an application of an abductive reasoning procedure to the recognizing textual entailment task. The knowledge base used in abduction consists of axioms derived from lexical-semantic resources, such as FrameNet and WordNet. The described inference system is participating in the current RTE-6 competition.

Rutu Mulkar-Mehta and Jerry R. Hobbs

University of Southern California.

Causal Granularity: Answering "How" Events Happen

Granularity concerns breaking a coarse-grained description down into concepts at a finer level of detail. It can be thought of as providing a hierarchy of varying levels of information. This poster describes our evaluation of granularity identification by humans, showing that humans can seamlessly shift through various levels of granularity. The main focus of the poster is on causal granularity, and how it can be used for answering "how" questions, which remain among the most difficult questions for automatic question answering.

James Blythe and Jerry R. Hobbs

University of Southern California.

A Unified Approach to Logical, Probabilistic and Weighted Abduction with application to Textual Entailment

We describe an approach to abduction that combines logical, probabilistic and weighted abduction in a single coherent framework, bringing together the benefits of all three and providing a sound probabilistic semantics for weighted abduction. Our framework is based on Markov logic (Richardson and Domingos 2006) and extends an earlier translation for logical abduction (Kate and Mooney 2009). We apply the approach to the RTE set of textual entailment problems using background knowledge derived from FrameNet encoded as axioms of weighted abduction.

Henry Kautz

University of Rochester

Multi-agent Interactions: Capture the Flag

Most interesting behavior can be fully understood only in the context of the actions of other agents:

- Capturing an enemy?

- Chasing someone?

- Attempting to free an ally?

- Waiting in line? …

Location alone provides a surprisingly rich source of information about people's behavior. Previous work leveraged GPS to learn probabilistic models of the activities of "isolated" individuals. Our work focuses on learning models of multi-agent activities and attempted activities.

Chris Baker

MIT

Bayesian Theory of Mind

We present a computational framework for Theory of Mind (ToM): the human ability to make joint inferences about the unobservable beliefs and preferences underlying the observed actions of other agents. These mental state attributions can be understood as Bayesian inferences in a probabilistic generative model for rational action, or planning under uncertain and incomplete information, formalized as a Partially Observable Markov Decision Process (POMDP). We posit that ToM inferences approximately reconstruct the combination of a reward function and belief state trajectory for an agent based on observing that agent's action sequence in a given environment. We test this POMDP model by showing human subjects the trajectories of agents moving in simple spatial environments and asking for joint inferences about the agents' utilities and beliefs about unobserved aspects of the environment. Our model performs substantially better than two simpler variants: one in which preferences are inferred without reference to an agent's beliefs, and another in which beliefs are inferred without reference to the agent's dynamic observations in the environment. We find that preference inferences are substantially more robust and consistent with our model's predictions than are belief inferences, in line with classic work showing that the ability to infer goals is more concretely grounded in visual data, develops earlier in infancy, and can be localized to specific neurons in the primate brain.

Stanley Kok

University of Washington

Learning Markov Logic Networks Using Structural Motifs

Markov logic networks (MLNs) use first-order formulas to define features of Markov networks. Current MLN structure learners can only learn short clauses (4-5 literals) due to extreme computational costs, and thus are unable to represent complex regularities in data. To address this problem, we present LSM, the first MLN structure learner capable of efficiently and accurately learning long clauses. LSM is based on the observation that relational data typically contains patterns that are variations of the same structural motifs. By constraining the search for clauses to occur within motifs, LSM can greatly speed up the search and thereby reduce the cost of finding long clauses. LSM uses random walks to identify densely connected objects in data, and groups them and their associated relations into a motif. Our experiments on three real-world datasets show that our approach is 2-5 orders of magnitude faster than the state-of-the-art ones, while achieving the same or better predictive performance.
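The random-walk grouping can be sketched with a toy heuristic: nodes that short random walks from a source object hit frequently are grouped with it into a motif candidate. LSM's actual procedure is more refined (it uses hitting times and the symmetry of walk distributions to cluster objects); the graph, threshold, and names below are illustrative assumptions.

```python
import random

def motif_candidates(adj, start, walks=2000, length=4, min_freq=0.5, seed=0):
    """Group with `start` the nodes that short random walks hit often."""
    rng = random.Random(seed)
    hits = {v: 0 for v in adj}
    for _ in range(walks):
        v = start
        visited = {start}
        for _ in range(length):
            v = rng.choice(adj[v])  # uniform step to a neighbour
            visited.add(v)
        for u in visited:
            hits[u] += 1
    return {v for v, h in hits.items() if h / walks >= min_freq}

# Two triangles joined by a single bridge edge (2-3): walks from node 0
# stay mostly inside its own triangle.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
motif = motif_candidates(adj, start=0)
```

Because short walks rarely cross the bridge, the densely connected triangle around the start node is recovered while the other cluster is excluded, which is the locality property that lets LSM restrict its clause search to within motifs.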