Learning BNs with hidden variables

Frank Wittig (fwittig@cs.uni-sb.de)
Tue, 06 Apr 1999 14:36:18 +0000

Dear colleagues,

We have been using the Adaptive Probabilistic Networks method
of Binder, Koller, Russell and Kanazawa (1997) to learn Bayesian
networks with hidden variables. (The context is the modeling of
unobservable properties of computer users.) We use
the Polak-Ribière method to choose the direction of each
hill-climbing step.
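
For concreteness, here is a minimal sketch of the kind of update
step we mean (Python-style, not our actual code; the function
grad(), the fixed step size, and the omitted renormalization of the
CPT entries are all simplifying assumptions):

    import numpy as np

    def polak_ribiere_step(w, d_prev, g_prev, grad, step_size=0.01):
        g = grad(w)                                  # gradient of the log-likelihood w.r.t. the CPT entries
        beta = g @ (g - g_prev) / (g_prev @ g_prev)  # Polak-Ribiere coefficient
        beta = max(beta, 0.0)                        # common safeguard: restart when beta turns negative
        d = g + beta * d_prev                        # new conjugate search direction
        w_new = w + step_size * d                    # ascent step (line search omitted)
        return w_new, d, g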

Although the results so far have been reasonable, the learned
nets appear to correspond to local optima that could be improved upon. Also,
the algorithm seems to concentrate its learning mainly on the
CPTs that link a hidden variable with its observable children,
leaving the CPT for the hidden variable largely unchanged, even
when this initial CPT does not represent a particularly promising
choice.

We'd be interested to hear of methods that others have used to
get good results with this type of algorithm - for example, by
specifying constraints that the learned network ought to satisfy.

We are also interested in experiences with implementations of
the EM algorithm for learning Bayesian networks, and in how these
or similar problems have been solved in that context.
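
To be explicit about what we mean by EM here: the E-step computes
expected counts for each family by inference over the hidden
variables in the current network, and the M-step renormalizes them
per parent configuration. A rough sketch of the M-step (the names
and the dictionary layout are only illustrative assumptions):

    def m_step(expected_counts):
        # expected_counts[j][k]: expected number of cases with parent
        # configuration j and child value k, computed in the E-step
        cpt = {}
        for j, counts in expected_counts.items():
            total = sum(counts.values())
            cpt[j] = {k: n / total for k, n in counts.items()}
        return cpt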

Thanks in advance,

Frank

Binder, J., Koller, D., Russell, S., & Kanazawa, K. (1997).
Adaptive probabilistic networks with hidden variables.
Machine Learning, 29, 213-244.

==================================================================
Frank Wittig                  Email: fwittig@cs.uni-sb.de
University of Saarbruecken    WWW:   http://w5.cs.uni-sb.de/~fwittig
P.O. Box 15 11 50             Phone: +49 681 302 4135
D-66041 Saarbruecken          Fax:   +49 681 302 4136
==================================================================