Re: Bayesian Networks and Belief Functions

David Poole (poole@cs.ubc.ca)
Thu, 10 Jun 1999 15:40:37 -0700

Here is an alternate semantics for belief and plausibility for belief
functions that doesn't rely on the probability of provability. Hopefully
it is understandable by Bayesians. [I have no idea if it is standard or
not, but I suppose I'll find out soon enough.]

It makes no sense to many of us (maybe just the Bayesians) to be unsure
about our own beliefs. It does make sense to be unsure about someone
else's beliefs. I will cast the semantics in terms of multiple-agents.
As Joe Halpern keeps reminding us, for multi-agent problems we have to
be careful about the protocols of the various agents. I will be explicit
about protocols to get the definitions of belief and plausibility.

Let's consider an agent A that gets to observe Q and decides whether to
set R or not (in an influence diagram, think of R as a decision node
with Q as a parent).

There are four possible policies or strategies for agent A:
s1: Q --> R,  ¬Q --> R
s2: Q --> R,  ¬Q --> ¬R
s3: Q --> ¬R, ¬Q --> R
s4: Q --> ¬R, ¬Q --> ¬R
A is going to choose a mixed strategy with Pr(s1)+Pr(s2) = 0.9; this
corresponds to P(R|Q)=0.9. When Q is true, the agent will set R true
0.9 of the time.
Similarly Pr(s1)+Pr(s3) = 0.8 (which corresponds to P(R|¬Q)=0.8).
Let's assume (for some reason, unknown to me) that agent A chooses the
components of the strategies independently, so that Pr(s1)=0.9*0.8=0.72,
Pr(s2)=0.9*0.2=0.18, Pr(s3)=0.1*0.8=0.08, and Pr(s4)=0.1*0.2=0.02.
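The independence assumption above can be spelled out in a few lines of
Python (my own illustration, not code from the post): each pure strategy's
probability is the product of the probabilities of its two components.

```python
# The four pure strategies map the observation (Q or ~Q) to a setting of R.
# Under the independence assumption, Pr(s1) = P(R|Q) * P(R|~Q), and so on.
p_r_given_q = 0.9      # Pr(s1) + Pr(s2)
p_r_given_not_q = 0.8  # Pr(s1) + Pr(s3)

strategy_probs = {
    "s1": p_r_given_q * p_r_given_not_q,              # Q->R,  ~Q->R
    "s2": p_r_given_q * (1 - p_r_given_not_q),        # Q->R,  ~Q->~R
    "s3": (1 - p_r_given_q) * p_r_given_not_q,        # Q->~R, ~Q->R
    "s4": (1 - p_r_given_q) * (1 - p_r_given_not_q),  # Q->~R, ~Q->~R
}
print({s: round(p, 2) for s, p in strategy_probs.items()})
# {'s1': 0.72, 's2': 0.18, 's3': 0.08, 's4': 0.02}
```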

Obviously (as various writers have pointed out), no matter whether Q is
true or false, R is true at least 0.8 of the time.

Suppose that Q is chosen by another agent B. We have to be careful about
what information is available when B gets to decide whether Q is true or
not. Suppose that agent B gets to observe what policy agent A has chosen
before deciding whether to make Q true.

It turns out that, given the constraints, R must be true at least 0.72
of the time (in particular, B chooses ¬Q if A chooses s2 and chooses Q
if A chooses s3), and R can be true at most 0.98 of the time. That is,
minimising over all strategies of B, the probability of R is at least
0.72; maximising over all strategies of B, the probability of R is at
most 0.98.
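These two bounds can be checked by brute force (a sketch of my own, not
from the post): for each pure strategy of A, an adversarial B who sees
A's choice picks Q or ¬Q; minimising gives the belief in R, maximising
gives the plausibility.

```python
# Mixed strategy of A under the independence assumption.
strategy_probs = {"s1": 0.72, "s2": 0.18, "s3": 0.08, "s4": 0.02}
# Each strategy: (R when B makes Q true, R when B makes Q false).
outcomes = {"s1": (True, True), "s2": (True, False),
            "s3": (False, True), "s4": (False, False)}

# B minimises P(R): R survives only if it is true for BOTH choices of Q.
belief = sum(p for s, p in strategy_probs.items() if min(outcomes[s]))
# B maximises P(R): R counts if it is true for SOME choice of Q.
plausibility = sum(p for s, p in strategy_probs.items() if max(outcomes[s]))

print(round(belief, 2), round(plausibility, 2))  # 0.72 0.98
```

Only s1 guarantees R against every choice of Q, giving belief 0.72, while
s1, s2, and s3 all allow R for some choice of Q, giving plausibility 0.98.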

This seems to explain where some of the "leaked" probability goes: the
unknown probability can be set to trick you, but some of it cannot be
used against you, and that is what the plausibility captures.

Note that I did make independence assumptions to get exactly the belief
and plausibility of D-S. However, without the independence assumptions,
if A and B both choose strategies to minimize the probability of R, R
will still be true in 0.7 of the cases (the strategy s1 must be chosen
at least 0.7 of the time).

Does this make sense?

David

p.s. I do like these discussions. I always learn a lot!