Re: [UAI] Mixed DBN propagation

From: Kevin S. Van Horn (Kevin_VanHorn@ndsu.nodak.edu)
Date: Tue Nov 27 2001 - 14:08:01 PST

    On Tue, 27 Nov 2001, Todd Stephenson wrote:

    > I am working with dynamic Bayesian networks with a mixture
    > of continuous and discrete variables. [...]
    >
    > Consider the following DBN, with discrete variables D[1] . . . D[N]
    > and continuous variables C[1] . . . C[N], where any variable can be
    > unobserved:
    >
    >
    > D[1] ---> D[2] ---> D[3] ---> D[4] ---> . . . ---> D[N]
    >  |         |         |         |                    |
    >  |         |         |         |                    |
    >  |         |         |         |                    |
    >  |         |         |         |                    |
    >  V         V         V         V                    V
    > C[1] ---> C[2] ---> C[3] ---> C[4] ---> . . . ---> C[N]

    This bears some resemblance to my current work on maximum-entropy acoustic
    models for speech recognition. In my case it's an undirected graphical model,
    and the potentials are defined on groups of variables { D[i] }, { D[i],
    D[i+1] }, or { D[i], C[i-w], ..., C[i+w] }. I'd be interested in knowing more
    about what you're doing; there might be sufficient similarity in the problems
    we're tackling for some useful synergy.
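
    To make the factor structure concrete, the joint in my model can be
    written (roughly -- this is only a sketch of the clique structure, not
    the exact feature parameterization) as

        P(d, c)  propto  prod_i  phi_i(d[i]) * psi_i(d[i], d[i+1])
                                 * chi_i(d[i], c[i-w], ..., c[i+w]),

    where w is the half-width of the acoustic context window and phi, psi,
    chi are the three kinds of potentials listed above.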

    > Is there a method to avoid having to fully connect all these
    > discrete variables and thus have to have a large clique containing
    > all of the discrete variables?

    You could try the approach that I'm working on for my model, which is to
    look at this from the viewpoint of elimination algorithms [1]. We can obtain
    the marginal over { D[2], C[2], ..., D[N], C[N] } by first summing over
    the possible values for D[1], then integrating out C[1]. Doing this
    repeatedly gives us marginals over { D[k], C[k], ..., D[N], C[N] } for 1 <= k
    <= N, and a similar backward pass gives us marginals over each { D[k], C[k] }.
    If each D[k] can take M possible values, then in the forward pass the
    marginal for C[k+1] is essentially a mixture of M^k Gaussians. To keep this
    tractable, we apply some pruning and merging heuristics, relying on the fact
    that P(C[k] | D[1], ..., D[k]) has (hopefully) diminishing dependence on D[j]
    as |k - j| increases. If the pruning and merging loses too much accuracy,
    this method still gives us an efficient means of sampling from a distribution
    close to the desired one; one can then apply importance sampling to improve
    the accuracy of the computed marginals.
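
    In case a sketch helps, here is roughly what one forward elimination step
    looks like in Python, assuming for concreteness a conditional
    linear-Gaussian parameterization with scalar C[k] (the names T, a, b, q,
    max_components, and weight_floor, and the particular pruning and merging
    heuristics -- a weight floor and nearest-mean moment matching -- are just
    illustrative placeholders, not a commitment to any specific scheme):

    # One forward step for the chain D[k] -> D[k+1], D[k+1] -> C[k+1],
    # C[k] -> C[k+1], assuming (for illustration only):
    #   P(D[k+1] = j | D[k] = i)  given by an M x M matrix T
    #   C[k+1] | C[k] = c, D[k+1] = j  ~  N(a[j]*c + b[j], q[j])
    # The message for (D[k], C[k]) is kept, per discrete value, as a list
    # of weighted Gaussian components (w, mu, var).

    def forward_step(msg, T, a, b, q, max_components=16, weight_floor=1e-6):
        """Sum out D[k] and integrate out C[k], then prune and merge."""
        M = len(T)
        new_msg = {j: [] for j in range(M)}
        for i, comps in msg.items():
            for (w, mu, var) in comps:
                for j in range(M):
                    w2 = w * T[i][j]
                    if w2 < weight_floor:            # pruning heuristic
                        continue
                    mu2 = a[j] * mu + b[j]           # push the Gaussian through
                    var2 = a[j] * a[j] * var + q[j]  # the linear-Gaussian CPD
                    new_msg[j].append((w2, mu2, var2))
        return {j: merge(c, max_components) for j, c in new_msg.items()}

    def merge(comps, max_components):
        """Crude merging: repeatedly collapse the two components with the
        closest means by moment matching, until at most max_components
        remain."""
        comps = sorted(comps, key=lambda c: c[1])     # sort by mean
        while len(comps) > max_components:
            k = min(range(len(comps) - 1),
                    key=lambda j: comps[j + 1][1] - comps[j][1])
            (w1, m1, v1), (w2, m2, v2) = comps[k], comps[k + 1]
            w = w1 + w2
            m = (w1 * m1 + w2 * m2) / w
            v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
            comps[k : k + 2] = [(w, m, v)]
        return comps

    Observed C[k]'s would be folded in by reweighting and conditioning each
    component on the evidence before the transition, and the backward pass is
    analogous; I've left both out to keep the sketch short.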

    As I look at this, I think this technique may work *better* for your
    problem (which has a simpler structure) than for mine. We should definitely
    discuss this in more detail outside of the mailing list.

    --------------------------
    [1]
    @article{li94elim,
      author  = {Z. Li and B. D'Ambrosio},
      title   = {Efficient Inference in Bayes Networks as a Combinatorial Optimization Problem},
      journal = {International Journal of Approximate Reasoning},
      volume  = {11},
      year    = {1994},
      pages   = {55--81}
    }


