Hello Bob, Milan, Rich, David, and everybody,
I'd like to weigh in on the side of the "bottom-up" definition
of a Bayesian network, that is, defining a Bayesian network in terms
of conditional distributions.
I think everyone agrees bottom-up is more practical, since we all find
it easier to think about local properties. However, I would argue that
it is also consistent with the conventional way of stating mathematical
definitions.
The bottom-up definition is analogous to a differential equation,
as Bob pointed out. One can state all kinds of diff eqs, but only
a subset of them will be well-behaved. (E.g., some nonlinear diff
eqs will not have a continuous solution.) The same remark goes for
integrals, infinite series, systems of equations, and so on.
At the root of the problem, you must state what you know. From
that you try to derive the characteristics of a solution. These
characteristics follow as theorems from your problem statement.
Given a collection of conditional probabilities, one might
be able to prove that a consistent joint distribution exists,
that none exists, or perhaps even that infinitely many joint
distributions are consistent with the conditionals.
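(For a quick two-variable illustration: if only p(x|y) is given,
then any marginal p(y) makes p(x,y) = p(x|y) p(y) a consistent
joint, so there are infinitely many; if both p(x|y) and p(y|x) are
given, they may pin down a unique joint, or they may contradict
one another so that no joint exists at all.)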
The counter-example (a collection of conditionals not consistent
with any joint) mentioned by Milan strengthens my point. If every
set of conditionals had a consistent joint distribution, then
the bottom-up and top-down approaches would be equivalent. But they
are not, so we have to choose, and I'll take the bottom-up approach: it is
putting the cart before the horse to restrict one's attention
only to the examples which have nice solutions.
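To make that concrete, here is a little Python sketch of the
simplest two-variable version of the phenomenon (this is just my
own toy check, not Milan's example; the function name and the
numbers are made up for illustration). The idea is that searching
for a joint consistent with given tables p(x|y) and p(y|x) is a
linear feasibility problem: the requirement p(x,y) = p(x|y) p(y)
becomes linear in the joint once p(y) is written as a column sum,
and likewise for p(y|x), so an off-the-shelf LP solver can either
produce a consistent joint or certify that none exists.

# Feasibility check: do tables p(x|y) and p(y|x) admit any joint p(x,y)?
import numpy as np
from scipy.optimize import linprog

def find_joint(p_x_given_y, p_y_given_x):
    """p_x_given_y[i, j] = p(X=i | Y=j); p_y_given_x[j, i] = p(Y=j | X=i).
    Returns a consistent joint as an (nx, ny) array, or None if none exists."""
    nx, ny = p_x_given_y.shape
    n = nx * ny                      # unknowns: the joint, flattened row-major
    A_eq, b_eq = [], []
    for i in range(nx):
        for j in range(ny):
            # p(i,j) - p(X=i|Y=j) * sum_k p(k,j) = 0
            row = np.zeros(n)
            row[i * ny + j] += 1.0
            for k in range(nx):
                row[k * ny + j] -= p_x_given_y[i, j]
            A_eq.append(row); b_eq.append(0.0)
            # p(i,j) - p(Y=j|X=i) * sum_k p(i,k) = 0
            row = np.zeros(n)
            row[i * ny + j] += 1.0
            for k in range(ny):
                row[i * ny + k] -= p_y_given_x[j, i]
            A_eq.append(row); b_eq.append(0.0)
    A_eq.append(np.ones(n)); b_eq.append(1.0)   # the joint must sum to 1
    res = linprog(np.zeros(n), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x.reshape(nx, ny) if res.status == 0 else None

# A compatible pair: both tables computed from the same joint.
joint = np.array([[0.1, 0.2],
                  [0.3, 0.4]])
p_xy = joint / joint.sum(axis=0)                     # p(x|y), columns sum to 1
p_yx = (joint / joint.sum(axis=1, keepdims=True)).T  # p(y|x), indexed [j, i]
print(find_joint(p_xy, p_yx))      # recovers the joint above

# An incompatible pair: same p(x|y), but a p(y|x) no joint can satisfy.
p_yx_bad = np.array([[0.9, 0.1],
                     [0.1, 0.9]])
print(find_joint(p_xy, p_yx_bad))  # prints None

The second call comes back empty because no single joint can
reproduce both tables at once, which is exactly the kind of
situation where one has to say the network is not well-defined.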
Taking a bottom-up approach means that it will be necessary to
state, in some cases, "This Bayesian network does not correspond
to any well-defined joint distribution". I don't see a problem
with that, any more than saying "This integral diverges"
or "This infinite series does not converge".
On a tangential note, I wonder what regularity conditions
would be sufficient to ensure that a collection of conditionals
does have a consistent joint distribution. I also wonder if
one could prove theorems of the form "The conditional p(T|G)
does exist even though the joint over A,...,Z does not".
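(One sufficient condition I can think of, at least for finitely
many discrete variables: if the conditionals are organized along an
ordering x1, ..., xn, each variable conditioned only on variables
earlier in the ordering -- that is, along a DAG -- then the product

    p(x1, ..., xn) = p(x1) p(x2 | x1) ... p(xn | x1, ..., x(n-1))

always sums to one, since summing out xn, then x(n-1), and so on,
collapses one factor at a time. So a consistent joint always exists
in that case; the delicate cases are collections that do not fit
any such ordering, say p(x|y) and p(y|x) given together.)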
Certainly yours,
Robert Dodier