Explanation-oriented programming (XOP) is motivated by two observations:
- Programs often produce unexpected results.
- Programs have value not only for instructing computers, but as a medium of communication between people.
When a program produces an unexpected result, a user is presented with several questions. Is the result correct? If so, what is the user’s misunderstanding? If not, what is wrong and how can it be fixed? In these situations an explanation of how the result was generated or why it is correct would be very helpful. Although some tools exist for addressing these questions, such as debuggers, their explanations (e.g. stepping through the program and observing its state) are expensive to produce and have low explanatory value, especially to non-programmers.
One goal of XOP is to shift the concern of explaining programs into the language design phase, promoting explainability as an explicit design goal. In particular, when defining a language, designers should consider not only how the syntax relates to the production of results (an execution semantics), but also how it relates to explanations of how those results are produced and why they are correct (an explanation semantics).
Besides applications to debugging, XOP suggests a new class of domain-specific languages where the explanation itself, rather than the final value, is the primary output of a program. This emphasizes the second observation above, that programs are useful for communication between people. Using such a DSL, an explanation designer, who is an expert in the application domain, can create and distribute explanation artifacts (programs) to explain problems to non-expert explanation consumers.
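The core idea can be illustrated with a small sketch: a computation that returns its result together with a structured account of how that result was produced. This is a hypothetical illustration only; the function and step format below are not drawn from any of the papers listed here.

```python
# Hypothetical sketch of the XOP idea: a computation whose output includes
# a structured explanation of how the result was produced, not just the value.

def mean_with_explanation(xs):
    """Compute the mean of xs, recording each step as an explanation entry."""
    steps = []
    total = sum(xs)
    steps.append(f"Sum the {len(xs)} values: {' + '.join(map(str, xs))} = {total}")
    result = total / len(xs)
    steps.append(f"Divide the sum by the count: {total} / {len(xs)} = {result}")
    return result, steps

result, explanation = mean_with_explanation([2, 4, 9])
print(result)  # 5.0
for step in explanation:
    print("-", step)
```

In an explanation-oriented DSL the explanation would be a first-class, manipulable artifact rather than a list of strings, but the inversion of priorities is the same: the trace of steps, not the number, is the primary output.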
- A Domain Analysis of Data Structure and Algorithm Explanations in the Wild. ACM SIGCSE Technical Symp. on Computer Science Education (SIGCSE), 2018, 870–875
Explanations of data structures and algorithms are complex interactions of several notations, including natural language, mathematics, pseudocode, and diagrams. Currently, such explanations are created ad hoc using a variety of tools and the resulting artifacts are static, reducing explanatory value. We envision a domain-specific language for developing rich, interactive explanations of data structures and algorithms. In this paper, we analyze this domain to sketch requirements for our language. We perform a grounded theory analysis to generate a qualitative coding system for explanation artifacts collected online. This coding system implies a common structure among explanations of algorithms and data structures. We believe this structure can be reused as the semantic basis of a domain-specific language for creating interactive explanation artifacts. This work is part of our effort to develop the paradigm of explanation-oriented programming, which shifts the focus of programming from computing results to producing rich explanations of how those results were computed.
- A Visual Language for Explaining Probabilistic Reasoning. Journal of Visual Languages and Computing (JVLC), vol. 24, no. 2, 2013, 88–109
We present an explanation-oriented, domain-specific, visual language for explaining probabilistic reasoning. Explanation-oriented programming is a new paradigm that shifts the focus of programming from the computation of results to explanations of how those results were computed. Programs in this language therefore describe explanations of probabilistic reasoning problems. The language relies on a storytelling metaphor of explanation, where the reader is guided through a series of well-understood steps from some initial state to the final result. Programs can also be manipulated according to a set of laws to automatically generate equivalent explanations from one explanation instance. This increases the explanatory value of the language by allowing readers to cheaply derive alternative explanations if they do not understand the first. The language comprises two parts: a formal textual notation for specifying explanation-producing programs and a more elaborate visual notation for presenting those explanations. We formally define the abstract syntax of explanations and define the semantics of the textual notation in terms of the explanations that are produced.
- A DSEL for Studying and Explaining Causation. IFIP Working Conf. on Domain-Specific Languages (DSL), 2011, 143–167
We present a domain-specific embedded language (DSEL) in Haskell that supports the philosophical study and practical explanation of causation. The language provides constructs for modeling situations composed of events and functions for reliably determining the complex causal relationships that emerge between these events. It enables the creation of visual explanations of these causal relationships and a means to systematically generate alternative, related scenarios, along with corresponding outcomes and causes. The DSEL is based on neuron diagrams, a visual notation that is well established in practice and has been successfully employed for causation explanation and research. In addition to its immediate applicability by users of neuron diagrams, the DSEL is extensible, allowing causation experts to extend the notation to introduce special-purpose causation constructs. The DSEL also extends the notation of neuron diagrams to operate over non-boolean values, improving its expressiveness and offering new possibilities for causation research and its applications.
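The boolean core of the neuron-diagram model can be sketched in a few lines. The paper's DSEL is implemented in Haskell and is considerably richer; the Python below only illustrates the standard firing rule that such diagrams are built on (a neuron fires if at least one stimulating input fires and no inhibiting input does), using the classic rock-throwing preemption example.

```python
# Minimal sketch of boolean neuron-diagram evaluation. A neuron fires if
# at least one of its stimulating inputs fires and none of its inhibiting
# inputs do. This illustrates the standard semantics, not the paper's
# Haskell implementation.

def evaluate(diagram, exogenous):
    """diagram: {name: (stimulators, inhibitors)}, listed in topological
    order; exogenous: set of source neurons that fire spontaneously.
    Returns the set of all firing neurons."""
    firing = set(exogenous)
    for name, (stim, inhib) in diagram.items():
        if any(s in firing for s in stim) and not any(i in firing for i in inhib):
            firing.add(name)
    return firing

# Late preemption: Suzy (s) and Billy (b) both throw rocks at a bottle.
# Suzy's rock hits first (h), which prevents Billy's from hitting (g).
diagram = {
    "h": (["s"], []),        # Suzy's throw hits
    "g": (["b"], ["h"]),     # Billy's throw would hit, preempted by h
    "e": (["h", "g"], []),   # bottle shatters
}
print(evaluate(diagram, {"s", "b"}))  # {'s', 'b', 'h', 'e'}
```

Counterfactual scenarios fall out by varying the exogenous set: `evaluate(diagram, {"b"})` yields a run in which Billy's rock shatters the bottle instead, which is the kind of systematic scenario generation the DSEL automates.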
- Causal Reasoning with Neuron Diagrams. IEEE Int. Symp. on Visual Languages and Human-Centric Computing (VL/HCC), 2010, 101–108
The principle of causation is fundamental to science and society and has remained an active topic of discourse in philosophy for over two millennia. Modern philosophers often rely on “neuron diagrams”, a domain-specific visual language for discussing and reasoning about causal relationships and the concept of causation itself. In this paper we formalize the syntax and semantics of neuron diagrams. We discuss existing algorithms for identifying causes in neuron diagrams, show how these approaches are flawed, and propose solutions to these problems. We separate the standard representation of a dynamic execution of a neuron diagram from its static definition and define two separate, but related semantics, one for the causal effects of neuron diagrams and one for the identification of causes themselves. Most significantly, we propose a simple language extension that supports a clear, consistent, and comprehensive algorithm for automatic causal inference.
- Visual Explanations of Probabilistic Reasoning. IEEE Int. Symp. on Visual Languages and Human-Centric Computing (VL/HCC), 2009, 23–27
Continuing our research in explanation-oriented language design, we present a domain-specific visual language for explaining probabilistic reasoning. Programs in this language, called explanation objects, can be manipulated according to a set of laws to automatically generate many equivalent explanation instances. We argue that this increases the explanatory power of our language by allowing a user to view a problem from many different perspectives.
- A DSL for Explaining Probabilistic Reasoning. IFIP Working Conf. on Domain-Specific Languages (DSL), LNCS vol. 5658, Springer, 2009, 335–359. Best paper.
We propose a new focus in language design where languages provide constructs that not only describe the computation of results, but also produce explanations of how and why those results were obtained. We posit that if users are to understand computations produced by a language, that language should provide explanations to the user. As an example of such an explanation-oriented language we present a domain-specific language for explaining probabilistic reasoning, a domain that is not well understood by non-experts. We show the design of the DSL in several steps. Based on a story-telling metaphor of explanations, we identify generic constructs for building stories out of events, and obtaining explanations by applying stories to specific examples. These generic constructs are then adapted to the particular explanation domain of probabilistic reasoning. Finally, we develop a visual notation for explaining probabilistic reasoning.
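The storytelling metaphor can be made concrete with a small sketch: a story is a sequence of named steps, each transforming a probability distribution, and running the story yields both the final distribution and a narrated trace. This Python sketch is only illustrative; the paper's actual DSL and its constructs differ.

```python
# Illustrative sketch of the storytelling metaphor for explaining
# probabilistic reasoning: a story is a list of (description, step) pairs,
# where each step transforms a distribution. Running the story produces
# the final distribution together with a step-by-step explanation.

from fractions import Fraction

def uniform(outcomes):
    p = Fraction(1, len(outcomes))
    return {o: p for o in outcomes}

def condition(pred):
    """Step: discard outcomes failing pred, then renormalize."""
    def step(dist):
        kept = {o: p for o, p in dist.items() if pred(o)}
        total = sum(kept.values())
        return {o: p / total for o, p in kept.items()}
    return step

def tell(story, dist):
    """Apply each step in turn, narrating the intermediate states."""
    trace = [f"Start: {dist}"]
    for description, step in story:
        dist = step(dist)
        trace.append(f"{description}: {dist}")
    return dist, trace

# Example: roll a fair die, learn the outcome is even, ask P(roll > 3).
story = [("Observe an even roll", condition(lambda n: n % 2 == 0))]
final, trace = tell(story, uniform([1, 2, 3, 4, 5, 6]))
print(sum(p for n, p in final.items() if n > 3))  # 2/3
```

The explanatory payoff is that the trace walks the reader from the initial uniform distribution through each conditioning step, rather than presenting only the final probability.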
- A Visual Language for Representing and Explaining Strategies in Game Theory. IEEE Int. Symp. on Visual Languages and Human-Centric Computing (VL/HCC), 2008, 101–108
We present a visual language for strategies in game theory, which has potential applications in economics, the social sciences, and general science education. This language facilitates explanations of strategies by visually representing the interaction of players’ strategies with game execution. We have utilized the cognitive dimensions framework in the design phase and recognized the need for a new cognitive dimension of “traceability” that considers how well a language can represent the execution of a program. We consider how traceability interacts with other cognitive dimensions and demonstrate its use in analyzing existing languages. We conclude that the design of a visual representation for execution traces should be an integral part of the design of visual languages because understanding a program is often tightly coupled to its execution.