Explanation-Oriented Programming

Project Description

The goal of this project is to investigate the concept of explanation and to derive from it principles for explanation-oriented programming, which can be applied in three major ways.

First, we can design domain-specific languages to build explanations for specific domains that are traditionally hard to understand. For example, we have designed a DSL for building explanations of probabilistic reasoning. The underlying idea and the DSL are described in the paper "A DSL for Explaining Probabilistic Reasoning" listed below.
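
To give a flavor of this approach, the following is a minimal Haskell sketch (all names, such as Dist, Explained, uniform, and observe, are hypothetical and do not reproduce the paper's actual DSL): each operation on a distribution records a textual step describing how the current distribution arose.

  type Prob = Double

  -- A distribution is a list of values paired with probabilities.
  newtype Dist a = Dist [(a, Prob)] deriving Show

  -- An explained distribution additionally records the reasoning
  -- steps that produced it.
  data Explained a = Explained [String] (Dist a) deriving Show

  -- A uniform distribution, annotated with how it arose.
  uniform :: Show a => String -> [a] -> Explained a
  uniform label xs =
    Explained [label ++ ": uniform over " ++ show xs]
              (Dist [ (x, 1 / fromIntegral (length xs)) | x <- xs ])

  -- Conditioning keeps only the matching values, renormalizes,
  -- and explains itself as a new step.
  observe :: String -> (a -> Bool) -> Explained a -> Explained a
  observe label p (Explained steps (Dist xs)) =
    let kept  = [ (x, q) | (x, q) <- xs, p x ]
        total = sum (map snd kept)
    in  Explained (steps ++ [label ++ ": condition and renormalize"])
                  (Dist [ (x, q / total) | (x, q) <- kept ])

  main :: IO ()
  main = print (observe "evidence: even" even
                        (uniform "die roll" [1..6 :: Int]))

Printing the result shows the final distribution together with the list of reasoning steps, so the derivation can be read off rather than reconstructed.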

Second, we can identify the notion of "explainability" as a language design criterion in the sense of the cognitive dimensions framework. The language designer can then use this principle to shape basic language structures and constructs in a way that leads to programs with a higher degree of explainability. This idea is explored in a second paper.

Third, we can rethink the objective in the design of general-purpose languages. Currently, the purpose of a program is to compute a value or an effect. Whenever a program fails to meet a user's expectations, the questions are "Why did this happen?" and "What went wrong?". In such a situation we typically have to resort to debuggers to understand how the value or effect was produced, which is often a tedious process. The idea of explanation-oriented programming is to design languages so that their constructs produce not only values, but also explanations of how and why those values were obtained.
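
As a rough illustration (a sketch under assumed names such as Traced and explain, not the design of an actual language), the following Haskell fragment pairs each result with a trace of the steps that produced it; in an explanation-oriented language, the constructs themselves would perform this bookkeeping.

  -- A value together with an explanation of how it was obtained.
  data Traced a = Traced a [String]

  instance Functor Traced where
    fmap f (Traced x es) = Traced (f x) es

  instance Applicative Traced where
    pure x = Traced x []
    Traced f es <*> Traced x es' = Traced (f x) (es ++ es')

  instance Monad Traced where
    Traced x es >>= f = let Traced y es' = f x in Traced y (es ++ es')

  -- Record one explanation step.
  explain :: String -> Traced ()
  explain e = Traced () [e]

  -- A small computation that explains itself as it runs.
  area :: Double -> Double -> Traced Double
  area w h = do
    explain ("width = " ++ show w)
    explain ("height = " ++ show h)
    let a = w * h
    explain ("area = width * height = " ++ show a)
    return a

  main :: IO ()
  main = do
    let Traced v es = area 3 4
    mapM_ putStrLn es
    putStrLn ("result: " ++ show v)

Instead of turning to a debugger after the fact, a user can ask the program itself why the result has the value it does.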

Selected Publications

Explainable Reinforcement Learning via Reward Decomposition, Zoe Juozapaitis, Anurag Koul, Alan Fern, Martin Erwig, and Finale Doshi-Velez
IJCAI Workshop on Explainable Artificial Intelligence, 47-53, 2019

Explaining Spreadsheets with Spreadsheets, Jacome Cunha, Mihai Dan, Martin Erwig, Danila Fedorin, and Alex Grejuc
ACM SIGPLAN Conf. on Generative Programming: Concepts & Experiences, 161-167, 2018

Explaining Deep Adaptive Programs via Reward Decomposition, Martin Erwig, Alan Fern, Magesh Murali, and Anurag Koul
IJCAI/ECAI Workshop on Explainable Artificial Intelligence, 40-44, 2018

Systematic Identification and Communication of Type Errors, Sheng Chen and Martin Erwig
Journal of Functional Programming, Vol. 28, 1-48, 2018

Guided Type Debugging, Sheng Chen and Martin Erwig
Int. Symp. on Functional and Logic Programming, LNCS 8475, 35-51, 2014

Counter-Factual Typing for Debugging Type Errors, Sheng Chen and Martin Erwig
ACM SIGPLAN-SIGACT Symp. on Principles of Programming Languages, 583-594, 2014

A Visual Language for Explaining Probabilistic Reasoning, Martin Erwig and Eric Walkingshaw
Journal of Visual Languages and Computing, Vol. 24, No. 2, 88-109, 2013

Explanations for Regular Expressions, Martin Erwig and Rahul Gopinath
Int. Conference on Fundamental Approaches to Software Engineering, LNCS 7212, 394-408, 2012

A DSEL for Studying and Explaining Causation, Eric Walkingshaw and Martin Erwig
IFIP Working Conference on Domain Specific Languages, 143-167, 2011

Causal Reasoning with Neuron Diagrams, Martin Erwig and Eric Walkingshaw
IEEE Int. Symp. on Visual Languages and Human-Centric Computing, 101-108, 2010

Visual Explanations of Probabilistic Reasoning, Martin Erwig and Eric Walkingshaw
IEEE Int. Symp. on Visual Languages and Human-Centric Computing, 23-27, 2009

A DSL for Explaining Probabilistic Reasoning, Martin Erwig and Eric Walkingshaw
IFIP Working Conference on Domain Specific Languages, LNCS 5658, 335-359, 2009
Best Paper Award

A Visual Language for Representing and Explaining Strategies in Game Theory, Martin Erwig and Eric Walkingshaw
IEEE Int. Symp. on Visual Languages and Human-Centric Computing, 101-108, 2008

Participating Researchers

Divya Bajaj, Sheng Chen, Jacome Cunha, Mihai Dan, Martin Erwig, Danila Fedorin, Alan Fern, Alex Grejuc, Prashant Kumar, Magesh Murali, Eric Walkingshaw