new paper in a new journal

David Poole (poole@cs.ubc.ca)
Thu, 9 Jul 1998 09:50:38 -0700

You may be interested in the paper:
"Decision Theory, the Situation Calculus and Conditional Plans"
by David Poole (see the abstract below).

This has been received by a new style of journal, the "Electronic
Transactions on Artificial Intelligence", which has an interesting
format of publishing articles when they are received and allowing for
a public discussion of each paper over the Internet. When the
discussion period is over, the paper can be revised and is then either
accepted into the journal or rejected. The accepted articles are
published online and in a paper edition by the Royal Swedish Academy
of Sciences.

The paper is in the ETAI area "Reasoning about Actions and
Change". There are other papers in this area that may be of interest
to the readers of this mailing list, as well as ongoing electronic
discussion about these articles. Your input to these discussions is
encouraged.

The home page for the Electronic Transactions on Artificial
Intelligence is:
http://www.ida.liu.se/ext/etai/

The home page for the above article is:
http://www.ep.liu.se/ea/cis/1998/008/

Have a look,
David

--------------------
Here is the abstract of the above paper:

Decision Theory, the Situation Calculus and Conditional Plans
David Poole

This paper shows how to combine decision theory and logical
representations of actions in a manner that seems natural for both.
In particular, we assume an axiomatization of the domain in terms of
the situation calculus, using what is essentially Reiter's solution to the
frame problem, in terms of the completion of the axioms defining the
state change. Uncertainty is handled in terms of the independent
choice logic, which allows for independent choices and a logic program
that gives the consequences of the choices. These consequences include
a specification of the utility of (final) states and of how (possibly
noisy) sensors depend on the state. The robot adopts conditional plans,
similar to programs in the GOLOG programming language. Within
this logic, we can define the expected utility of a conditional plan,
based on the axiomatization of the actions, the sensors and the
utility. Sensors can be noisy and actions can be stochastic. The
planning problem is to find the plan with the highest expected
utility. This representation is related to recent structured
representations for partially observable Markov decision processes
(POMDPs); here we use stochastic situation calculus rules to specify
the state transition function and the reward/value function. Finally,
we show that with stochastic frame axioms, action representations in
probabilistic STRIPS are exponentially larger than with the
representation proposed here.
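
As a rough illustration of the kind of computation the abstract
describes (summing over independent choices to get the expected
utility of a conditional plan), here is a minimal sketch in Python.
Everything in it (the choice alternatives, probabilities, actions,
sensor and utility) is an invented toy example, not taken from the
paper, and it leaves out the logical machinery (the situation calculus
axiomatization and the logic program) entirely.

# Toy sketch: expected utility of a conditional plan under independent
# choices. All names and numbers are hypothetical, for illustration only.

from itertools import product

# Independent choices: each alternative maps an outcome to its probability;
# outcomes of different alternatives are chosen independently.
choices = {
    "pickup_works": {"yes": 0.9, "no": 0.1},   # stochastic action outcome
    "sensor_ok":    {"yes": 0.8, "no": 0.2},   # noisy sensor behaviour
}

def transition(state, action, world):
    # Consequences of an action given the world's choices (a stand-in for
    # the logic program / stochastic frame axioms in the paper).
    state = dict(state)
    if action == "pickup" and world["pickup_works"] == "yes":
        state["holding"] = True
    return state

def sense(state, world):
    # A possibly noisy sensor: reports "holding" correctly only if sensor_ok.
    truth = state.get("holding", False)
    return truth if world["sensor_ok"] == "yes" else not truth

def utility(state):
    # Utility of a (final) state.
    return 10.0 if state.get("holding", False) else 0.0

def run_plan(state, world):
    # A small conditional plan: do "pickup", then branch on the sensed value.
    state = transition(state, "pickup", world)
    if sense(state, world):
        return state                                 # believe we succeeded
    return transition(state, "pickup", world)        # otherwise try once more

def expected_utility(initial_state):
    # Sum over all total choices (one outcome per alternative), weighting the
    # utility of the resulting final state by the product of probabilities.
    names = list(choices)
    eu = 0.0
    for outcomes in product(*(choices[n] for n in names)):
        world = dict(zip(names, outcomes))
        p = 1.0
        for name, outcome in world.items():
            p *= choices[name][outcome]
        eu += p * utility(run_plan(initial_state, world))
    return eu

print(expected_utility({"holding": False}))

In the paper the consequences are given by a logic program over
situations rather than a hand-written transition function; the sketch
above only makes the sum-over-independent-choices structure of the
expected utility explicit.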