To add a few more references: the algorithm proposed by Shenoy and
Shafer (1990) is actually the two-way variable elimination method;
the only difference is that the potentials are multiplied off-line.
All the triangulation machinery is about finding an optimal
elimination sequence: if you find one tractable sequence for
eliminating down to one variable, then you have a tractable sequence
down to any variable. Li and D'Ambrosio (1994) suggest a method very
similar to what Dechter later called bucket elimination, and Madsen
and Jensen (1999) extend it to a two-pass lazy method (called lazy
propagation). I think that the latter is close to the method Fabio
Cozman is referring to. My experience, too, is that this method is by
far the easiest one to teach.
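
To make the elimination-sequence point concrete, here is a minimal
sketch (in Python; the dict-of-neighbour-sets graph representation and
the example graph are chosen purely for illustration) of the standard
greedy min-fill heuristic. Triangulating the moral graph and choosing
an elimination sequence are two views of the same computation:

    # Greedy min-fill: repeatedly eliminate the variable whose
    # elimination adds the fewest fill-in edges to the moral graph.

    def fill_in(graph, v):
        """Edges needed to make v's neighbours a clique."""
        nbrs = list(graph[v])
        return sum(1 for i, a in enumerate(nbrs)
                   for b in nbrs[i + 1:] if b not in graph[a])

    def min_fill_order(graph):
        """Return an elimination sequence chosen greedily by min-fill."""
        graph = {v: set(ns) for v, ns in graph.items()}  # private copy
        order = []
        while graph:
            v = min(graph, key=lambda u: fill_in(graph, u))
            nbrs = graph.pop(v)
            for a in nbrs:                # connect v's neighbours
                graph[a] |= nbrs - {a}
                graph[a].discard(v)
            order.append(v)
        return order

    # A 4-cycle, the smallest graph that needs a fill-in edge:
    g = {'A': {'B', 'C'}, 'B': {'A', 'D'},
         'C': {'A', 'D'}, 'D': {'B', 'C'}}
    print(min_fill_order(g))              # e.g. ['A', 'B', 'C', 'D']
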
Best
Finn V. Jensen
G. R. Shafer and P. P. Shenoy (1990). Probability propagation. Annals of
Mathematics and Artificial Intelligence 2, 327-352.
Z. Li and B. D'Ambrosio (1994). Efficient inference in Bayes networks as a
combinatorial optimization problem. International Journal of Approximate
Reasoning 11(1), 55-81.
A. L. Madsen and F. V. Jensen (1999). Lazy propagation: a junction tree
inference algorithm based on lazy evaluation. Artificial Intelligence 113,
203-245.
Fabio Gagliardi Cozman wrote:
> Dear colleagues,
>
> The following answer to one of Will Briggs's questions may be
> of interest to others (question: where to find a general, easy
> algorithm for inference in general Bayesian networks).
>
> The easiest inference algorithm to understand and teach is (in my
> opinion, anyway) the variable elimination algorithm, also called
> bucket elimination. The problem with variable elimination is
> that it does not generate marginal probabilities for all variables
> in a network, just for a query variable.
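>
> To make the one-pass algorithm concrete, here is a minimal sketch in
> Python (the factor representation and the tiny two-variable example
> are illustrative choices, not JavaBayes code; every variable is
> assumed binary to keep the sketch short):
>
>     from itertools import product
>
>     # A factor is a pair (vars, table): 'vars' is a tuple of variable
>     # names, 'table' maps value tuples (aligned with 'vars') to numbers.
>
>     def multiply(f, g):
>         """Pointwise product of two factors."""
>         fv, ft = f
>         gv, gt = g
>         vs = fv + tuple(v for v in gv if v not in fv)
>         table = {}
>         for vals in product((0, 1), repeat=len(vs)):
>             asg = dict(zip(vs, vals))
>             table[vals] = (ft[tuple(asg[v] for v in fv)] *
>                            gt[tuple(asg[v] for v in gv)])
>         return vs, table
>
>     def sum_out(f, var):
>         """Marginalize 'var' out of a factor."""
>         fv, ft = f
>         vs = tuple(v for v in fv if v != var)
>         table = {}
>         for vals, p in ft.items():
>             key = tuple(x for v, x in zip(fv, vals) if v != var)
>             table[key] = table.get(key, 0.0) + p
>         return vs, table
>
>     def eliminate(factors, order):
>         """Sum out the variables in 'order'; return the normalized
>         factor over the remaining (query) variables."""
>         for var in order:
>             bucket = [f for f in factors if var in f[0]]
>             factors = [f for f in factors if var not in f[0]]
>             if not bucket:
>                 continue
>             prod = bucket[0]
>             for f in bucket[1:]:
>                 prod = multiply(prod, f)
>             factors.append(sum_out(prod, var))
>         result = factors[0]
>         for f in factors[1:]:
>             result = multiply(result, f)
>         z = sum(result[1].values())
>         return result[0], {k: p / z for k, p in result[1].items()}
>
>     # Tiny example network A -> B:
>     prior_a = (('A',), {(0,): 0.6, (1,): 0.4})
>     cpt_b = (('A', 'B'), {(0, 0): 0.9, (0, 1): 0.1,
>                           (1, 0): 0.2, (1, 1): 0.8})
>     print(eliminate([prior_a, cpt_b], order=['A']))
>     # -> (('B',), {(0,): 0.62, (1,): 0.38})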
>
> But it is actually possible to derive a more
> general form of variable elimination, in which a second pass
> generates *all* marginal probabilities. The complexity is
> similar to that of the usual junction tree algorithms, and the
> result is essentially the same. The bonus is that the operations
> are much easier to understand, without any mention of triangulation
> and other graph-theoretic techniques. The result is similar
> to a tree propagation algorithm, but in a tree of clusters.
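>
> To see what the second pass buys, compare it with the naive way of
> getting every marginal out of plain variable elimination: run the
> whole algorithm once per query variable, redoing most of the
> multiplications each time. The two-pass scheme computes the same
> numbers while reusing the intermediate factors of the first pass.
> A sketch of the naive baseline, reusing 'eliminate' and the two
> factors from the sketch above:
>
>     def all_marginals_naive(factors, variables):
>         """One full elimination run per variable; this is the
>         duplicated work that the second pass avoids."""
>         marginals = {}
>         for q in variables:
>             order = [v for v in variables if v != q]
>             marginals[q] = eliminate(list(factors), order)
>         return marginals
>
>     print(all_marginals_naive([prior_a, cpt_b], ['A', 'B']))
>     # -> {'A': (('A',), {(0,): 0.6, (1,): 0.4}),
>     #     'B': (('B',), {(0,): 0.62, (1,): 0.38})}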
>
> The more general variable elimination is presented in
>
> Fabio G. Cozman, Generalizing Variable Elimination in
> Bayesian Networks, Workshop on Probabilistic Reasoning in
> Artificial Intelligence, pp. 27-32, Atibaia, Brazil, 2000.
>
> (get this paper from
> http://www.cs.cmu.edu/~javabayes/Download/jb-heading.ps.gz)
>
> This paper presents standard variable elimination and then
> adds the second pass that generates all marginal probabilities.
> I tried to make the presentation compact and understandable,
> without mathematical complications (no theorems, just
> explanations). I have used the paper as teaching material
> and I think it works well. I would of course be happy to
> receive comments.
>
> This general variable elimination scheme is implemented
> in the JavaBayes system (a system for manipulation of
> Bayesian networks available at http://www.cs.cmu.edu/~javabayes).
> Because JavaBayes is distributed under the GNU license,
> the source code is available and you can read the
> implementation of the general variable elimination
> scheme.
>
> I hope this is useful; as I said, I would appreciate
> comments on the algorithm/paper/JavaBayes.
>
> Best,
>
> Fabio Cozman
>
>
> PS: Variable elimination uses a simple idea, already mentioned
> in the peeling algorithms by Cannings, Thompson and Skolnick;
> the idea is also mentioned in the 1988
> paper by Lauritzen and Spiegelhalter. The idea resurfaced in 1996
> in two papers, one by Zhang and Poole (who used "variable elimination")
> and another by Dechter (who used "bucket elimination"). There have
> been efforts to derive inference algorithms that use junction trees
> without the triangulation step (I know of results obtained by
> Draper and by Darwiche), but the general variable elimination
> scheme seems simpler and more direct. The papers
> containing these results are:
>
> C. Cannings and E. A. Thompson and M. H. Skolnick, Probability
> Functions in Complex Pedigrees, Advances in Applied Probability,
> 10, pp. 26-61, 1978.
>
> S. L. Lauritzen and D. J. Spiegelhalter, Local Computations with
> Probabilities on Graphical Structures and their Application to
> Expert Systems, Journal of the Royal Statistical Society, Series B,
> 50(2):157-224, 1988.
>
> N. L. Zhang and D. Poole, Exploiting Causal Independence in Bayesian
> Network Inference, Journal of Artificial Intelligence Research, 5,
> pp. 301-328, 1996.
>
> R. Dechter, Bucket Elimination: A Unifying Framework for
> Probabilistic Inference, XII Conference on Uncertainty in Artificial
> Intelligence, E. Horvitz and F. Jensen (eds.), pp. 211-219, Morgan
> Kaufmann, San Francisco, 1996. (There is also an extended version
> in the book Learning in Graphical Models, MIT Press, 1999.)
>
> D. Draper, Clustering without (thinking about) triangulation,
> XI Conference on Uncertainty in Artificial Intelligence,
> P. Besnard and S. Hanks (eds.), Morgan Kaufmann, San Francisco,
> 1995.
>
> Adnan Darwiche, Dynamic Jointrees, Proceedings of the Fourteenth
> Conference on Uncertainty in Artificial Intelligence, G. F. Cooper
> and S. Moral (eds.), Morgan Kaufmann, San Francisco, 1998.
>
> On Tue, 3 Apr 2001, Briggs, Will wrote:
> > Is there an algorithm that would handle any belief net?
> > Has anyone written it down or implemented it?
> > I had thought of using the one Russell & Norvig present for polytrees, on
> > nets that _aren't_ polytrees. (I do realize this will give the wrong
> > answer, but hope we can tweak it to fix the problem.)