Re: [UAI] Definition of Bayesian network

From: Bob Welch (indianpeaks@home.com)
Date: Sun Jul 29 2001 - 13:11:13 PDT

    Robert:

    The "bottom up approach" works only if your set of conditionals satisfy some
    global constraints as in Rich's two definitions. If they don't then you
    might as well build expert systems with an inconsistent set of rules.
    Furthermore, Rich's definitions are equivalent to the existence of a joint,
    for all practical purposes. (Even in the counter example there is a joint
    measure. Only its guaranteed to be finitely additive, but may not be
    countable additive).
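
    To make the "global constraint" point concrete, here is a toy sketch with
    made-up numbers (not Rich's actual definitions): when the conditionals are
    organized acyclically along a DAG, one table per node given its parents,
    the chain-rule product is automatically a proper joint.

        # Toy sketch, my own numbers: a chain A -> B -> C with one
        # conditional table per node.  Multiplying the local tables
        # yields a joint that sums to 1 -- the global constraint in
        # disguise.
        from itertools import product

        p_a = {0: 0.3, 1: 0.7}                      # P(A)
        p_b_given_a = {0: {0: 0.9, 1: 0.1},         # P(B | A)
                       1: {0: 0.4, 1: 0.6}}
        p_c_given_b = {0: {0: 0.8, 1: 0.2},         # P(C | B)
                       1: {0: 0.5, 1: 0.5}}

        joint = {}
        for a, b, c in product((0, 1), repeat=3):
            joint[(a, b, c)] = p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

        print(sum(joint.values()))   # 1.0 (up to rounding): the product is a joint

    With arbitrary, non-acyclically-organized conditionals, no such guarantee
    holds, which is exactly where the trouble starts.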

    Sure, the bottom-up approach is what we all want. Nobody is advocating
    finding the joint distribution (and after all, if you have the joint, who
    needs a Bayesian network or needs to bother with Dave's simulation?),
    especially when dealing with 1000-node networks, or networks that may grow
    dynamically, on the fly.
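
    For a sense of scale (rough arithmetic, with a made-up bound on the number
    of parents per node):

        # Back-of-the-envelope for the 1000-node remark: the explicit joint
        # over n binary variables has 2**n entries, while the network stores
        # one conditional table per node, sized by its parents only.
        n, max_parents = 1000, 5          # illustrative numbers only
        joint_entries = 2 ** n            # astronomically large
        network_entries = n * 2 ** (max_parents + 1)
        print(joint_entries)
        print(network_entries)            # 64,000 -- perfectly manageable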

    But for cutting your audience's baby teeth, keep it simple, focused initially
    on what your audience understands, and be aware of getting boxed in by
    criticisms that are likely to come up. Once the theoretical concepts are
    well understood, then start talking about knowledge engineering. And by the
    way, I don't believe that the knowledge engineering task of satisfying
    Rich's constraints is all that simple (excluding the no-feedback causal
    network).

    As many have said, your audience should dictate the development. This may
    be different for computer science AI majors than for chemical control
    engineers with many years' experience in the field who are very rusty on
    probability theory, much less graph theory. (Yet I feel that, somehow,
    considering both points of view, one should be able to write to both.)

    I bring these issues up for the following reason. The Bayesian network is an
    elegant representation of uncertainty and belief. However, except in the
    case of the no-feedback causal network, it is intuitively difficult to grasp
    as a network. The arcs don't have a simple interpretation, especially their
    direction. In a sense the network stands as a representation of conditional
    independence, through the absence of arcs. But not all conditional
    independencies are represented. And if you reverse an arc and follow through
    with adding all the additional arcs that are consequent to that action, the
    displayed conditional independence is altered considerably. As a consequence,
    networks that are trained on data may not turn out to be the most intuitive
    ones. For me, what makes sense in dealing with a network in those
    circumstances is understanding it as a representation of a joint
    probabilistic relationship among a set of variables.
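
    A toy sketch of the arc-reversal point, again with made-up numbers: in the
    "collider" network A -> C <- B, the missing arc between A and B displays
    their marginal independence; reverse the arc B -> C and that display is
    gone, even though the distribution has not changed.

        from itertools import product

        # A -> C <- B, with A and B independent by construction.
        p_a = {0: 0.6, 1: 0.4}
        p_b = {0: 0.3, 1: 0.7}
        p_c_given_ab = {(a, b): {0: 0.5 + 0.1 * a - 0.2 * b,
                                 1: 0.5 - 0.1 * a + 0.2 * b}
                        for a in (0, 1) for b in (0, 1)}

        joint = {(a, b, c): p_a[a] * p_b[b] * p_c_given_ab[(a, b)][c]
                 for a, b, c in product((0, 1), repeat=3)}

        # The absent arc A-B displays a real independence: P(a,b) = P(a)P(b).
        for a, b in product((0, 1), repeat=2):
            p_ab = sum(joint[(a, b, c)] for c in (0, 1))
            assert abs(p_ab - p_a[a] * p_b[b]) < 1e-12

        # Reverse B -> C: the same joint now factors as
        # P(A) P(C | A) P(B | A, C), so an arc A -> B must be added.
        # The new graph no longer displays the independence of A and B,
        # although it still holds in the (unchanged) distribution.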

    Bob

    ----- Original Message -----
    From: "Robert Dodier" <RobertD@athenesoft.com>
    To: <uai@cs.orst.edu>
    Sent: Friday, July 27, 2001 2:33 PM
    Subject: Re: [UAI] Definition of Bayesian network

    > Hello Bob, Milan, Rich, David, and everybody,
    >
    > I'd like to weigh in on the side of the "bottom up" definition
    > of a Bayesian network, that is, defining a Bayesian network in terms
    > of conditional distributions.
    >
    > I think everyone agrees bottom-up is more practical, since we all find
    > it easier to think about local properties. However, I would argue that
    > it is also similar to the conventional ways of stating mathematical
    > definitions.
    >
    > The bottom-up definition is analogous to a differential equation,
    > as Bob pointed out. One can state all kinds of diff eqs, but only
    > a subset of them will be well-behaved. (E.g., some nonlinear diff
    > eqs will not have a continuous solution.) The same remark goes for
    > integrals, infinite series, systems of equations, and so on.
    >
    > At the root of the problem, you must state what you know. From
    > that you try to derive the characteristics of a solution. These
    > characteristics follow as theorems from your problem statement.
    > Concerning a collection of conditional probabilities, one might
    > be able to prove the existence of a joint distribution, or the
    > nonexistence, or perhaps even the existence of an infinite number
    > of joint distributions consistent with the conditionals.
    >
    > The counter-example (a collection of conditionals not consistent
    > with any joint) mentioned by Milan strengthens my point. If every
    > set of conditionals had a consistent joint distribution, then
    > the bottom-up and top-down approaches would be equivalent. But
    > since we have to choose, I'll take the bottom-up approach: it is
    > putting the cart before the horse to restrict one's attention
    > only to the examples which have nice solutions.
    >
    > Taking a bottom-up approach means that it will be necessary to
    > state, in some cases, "This Bayesian network does not correspond
    > to any well-defined joint distribution". I don't see a problem
    > with that, any more than saying "This integral has no solution"
    > or "This infinite series does not converge".
    >
    > On a tangential note, I wonder what regularity conditions
    > would be sufficient to ensure that a collection of conditionals
    > does have a consistent joint distribution. I also wonder if
    > one could prove theorems of the form "The conditional p(T|G)
    > does exist even though the joint over A,...,Z does not".
    >
    > Certainly yours,
    > Robert Dodier
    >
    >


