Bob Welch wrote:
>
> Hello Mohamed, Denver, et al.
>
> These are all very interesting methods for getting the joint probability of
> a small set of variables. Another method is the push operation that
> effectively moves variables into parent cliques (i.e., toward the root) and
> separators until all of the desired variables are members of the same
> clique.
>
> However, I wonder if this discussion misses the mark of the original query,
> which is in the context of training networks to data. If the data set is
> complete (i.e., there are no hidden nodes), then it seems the joint can be
> calculated very efficiently by multiplying the appropriate entries in each
> network node's cp-table. No need for any solution algorithm or join tree
> here.
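[Editor's sketch of the complete-data case Bob describes: with every node observed, the joint is one CPT lookup per node, multiplied together. The two-node network and CPT dictionaries below are invented for illustration.]

```python
# Toy two-node network A -> B with binary variables; CPTs as plain dicts.
# These numbers are made up for illustration.
cpt_a = {0: 0.6, 1: 0.4}                      # P(A)
cpt_b = {(0, 0): 0.7, (1, 0): 0.3,            # P(B=b | A=a) keyed by (b, a)
         (0, 1): 0.2, (1, 1): 0.8}

def joint(a, b):
    """With complete data, P(A=a, B=b) is one CPT entry per node,
    multiplied together: P(A=a) * P(B=b | A=a)."""
    return cpt_a[a] * cpt_b[(b, a)]

print(joint(1, 1))   # P(A=1, B=1)
```

No join tree or inference algorithm is needed; the cost is linear in the number of nodes.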
>
> If the context is learning, and only one or two nodes are missing data,
> what, then, is the most efficient way to obtain the joint probability? If
> only one node is hidden, and the network is discrete, one can sum this
> product over each of the possible values of the hidden node. Furthermore,
> the calculation can be effectively restricted to the Markov neighborhood of
> the hidden node. This seems to turn the approach around: instead of
> seeking a structure that contains the vast majority of nodes that have
> data, one seeks a structure that contains the nodes that are missing data.
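[Editor's sketch of the single-hidden-node case: sum the CPT product over the hidden node's values, noting that only the factors in its Markov neighborhood vary with it. The chain A -> H -> B and all numbers are invented.]

```python
# Toy chain A -> H -> B where H is hidden and A, B are observed.
# Numbers are made up; CPTs are keyed (child_value, parent_value).
cpt_a = {0: 0.5, 1: 0.5}                      # P(A)
cpt_h = {(0, 0): 0.9, (1, 0): 0.1,            # P(H | A)
         (0, 1): 0.3, (1, 1): 0.7}
cpt_b = {(0, 0): 0.8, (1, 0): 0.2,            # P(B | H)
         (0, 1): 0.4, (1, 1): 0.6}

def likelihood(a, b):
    """P(A=a, B=b): sum the CPT product over the hidden node's values.
    Only the factors touching H (P(H|A) and P(B|H)) depend on h, so the
    sum is restricted to H's Markov neighborhood."""
    return cpt_a[a] * sum(cpt_h[(h, a)] * cpt_b[(b, h)] for h in (0, 1))

print(likelihood(1, 1))
```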
>
> But I am still a little puzzled by the meaning of the original query:
>
> > >> > Hi,
> > >> > I am trying to program the BDe scoring metric for learning Bayesian
> > >> > networks. Could you point me to a reference on the algorithmic
> > >> > aspects of calculating priors on parameters? I do not understand how
> > >> > I can efficiently calculate the prior joint probabilities from the
> > >> > prior network.
> > >> >
>
> Mohamed:
>
> Is this a question as to where the priors over network parameter values
> come from? Or is the question how to calculate the prior probabilities of
> the newly trained parameters from the parameter priors? It seems the
> discussion has been more about the likelihood of the data given the new
> parameters than about the parameter priors.
Bob,
I can at least tell you what question I thought I was answering (it
may be that Mohamed has missed this thread due to my renaming it).
During the search for structure, candidate networks are being
repeatedly generated and scored. If they are scored with a Bayesian
metric then each generated network needs a prior, not only for the
structure itself but for the parameters.
Geiger and Heckerman (I don't have the reference handy) showed that
given some assumptions, non-uniform parameter priors can be calculated
on the fly using only a single elicited prior network and an
equivalent sample size for the expert. This method requires you to
use the prior network to calculate the probability P(X,Pa(X)) where
Pa(X) are the parents of X in the network being tested (not the prior
network). Therefore, in the prior network, the variables {X,Pa(X)}
are not necessarily all in the same clique.
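[Editor's sketch of how such a prior is then used, assuming the standard BDe form alpha_ijk = N' * P(X=k, Pa(X)=j); the function name and toy numbers are mine, not from the thread.]

```python
# Dirichlet pseudo-counts for the BDe score of a candidate structure:
# alpha_ijk = N' * P(X = k, Pa(X) = j), with the joint taken from the
# elicited prior network and N' the equivalent sample size.
def bde_pseudo_counts(joint_x_pa, n_prime):
    """joint_x_pa: {(k, j): P(X=k, Pa(X)=j)} computed from the prior
    network; n_prime: equivalent sample size elicited from the expert."""
    return {kj: n_prime * p for kj, p in joint_x_pa.items()}

# Toy joint over a binary X with one binary parent, N' = 10.
counts = bde_pseudo_counts({(0, 0): 0.3, (1, 0): 0.2,
                            (0, 1): 0.1, (1, 1): 0.4}, 10)
print(counts)
```

The hard part, as this thread discusses, is computing P(X=k, Pa(X)=j) from the prior network when {X, Pa(X)} do not all share a clique.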
Hope this helps. Maybe Mohamed can correct me or elaborate if I
misinterpreted his question.
Cheers,
Denver.
> ----- Original Message -----
> From: Rina Dechter <dechter@ramat-aviv.ICS.UCI.EDU>
> To: <uai@CS.ORST.EDU>
> Sent: Thursday, March 30, 2000 10:55 PM
> Subject: Re: Calculating joint over arbitrary sets of variables
>
> > Hi,
> > You can just use bucket-elimination algorithm where the subset
> > of variables queried are processed last (initiate the ordering).
> > The complexity will be exponential in the induced-width
> > of the moral graph computed along this restricted
> > ordering.
> >
> > ------Rina
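[Editor's sketch of Rina's suggestion on a toy three-node chain, with hand-rolled factor operations (all binary variables; the names and numbers are illustrative, not from any particular library). The non-queried variable sits early in the ordering and is summed out first, so the last factor is over exactly the queried subset.]

```python
from itertools import product

# Factors are (scope, table): scope is a list of variable names and
# table maps assignment tuples (in scope order) to values.

def multiply(f, g):
    (vs_f, tab_f), (vs_g, tab_g) = f, g
    vs = vs_f + [v for v in vs_g if v not in vs_f]
    tab = {}
    for asg in product((0, 1), repeat=len(vs)):
        env = dict(zip(vs, asg))
        tab[asg] = (tab_f[tuple(env[v] for v in vs_f)]
                    * tab_g[tuple(env[v] for v in vs_g)])
    return (vs, tab)

def sum_out(f, var):
    vs, tab = f
    i = vs.index(var)
    out = {}
    for asg, p in tab.items():
        key = asg[:i] + asg[i + 1:]
        out[key] = out.get(key, 0.0) + p
    return (vs[:i] + vs[i + 1:], out)

# Chain A -> B -> C; query {A, C}. Under the restricted ordering, the
# non-queried variable B is eliminated first, leaving P(A, C).
fa = (['A'], {(0,): 0.6, (1,): 0.4})
fb = (['B', 'A'], {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8})
fc = (['C', 'B'], {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.5, (1, 1): 0.5})

joint_ac = sum_out(multiply(multiply(fa, fb), fc), 'B')
print(joint_ac)
```

As Rina notes, the cost is exponential only in the induced width of the moral graph along this restricted ordering, not in the total number of variables.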
> > >Hi, this is a repost of a previous thread (BDe scoring metric). I
> > >thought I would repost it under this title to see if it gets a better
> > >response.
> > >Basically I'm looking for an efficient way to calculate the joint of an
> > >arbitrary subset of nodes in a Bayes net. Below is one algorithm I
> > >proposed, but it's not terribly efficient.
> > >
> > >Thanks,
> > >Denver.
> > >
> > >----- Original Message -----
> > >From: "Denver Dash" <ddash@isp.pitt.edu>
> > >To: <uai@CS.ORST.EDU>
> > >Sent: Thursday, March 23, 2000 5:23 PM
> > >Subject: Re: [UAI] BDe scoring metric
> > >
> > >
> > >> Basically you need to calculate the joint over an arbitrary subset of
> > >> variables: P(V1,V2,...Vm) where m<n (the total number of variables).
> > >>
> > >> One way to do this is to do the following:
> > >>
> > >> Make use of the fact that P(V1,V2,...,Vm) is equal to
> > >> P(V1)P(V2|V1)P(V3|V1,V2)...P(Vm|V1,...,Vm-1).
> > >>
> > >> This suggests the following algorithm to calculate the joint of a
> > >> subset S of nodes:
> > >>
> > >> Evidence_Set = {};
> > >> jointProb = 1;
> > >>
> > >> For each node V in S do:
> > >>     Update beliefs in network given Evidence_Set.
> > >>     Query node V to get P(V|Evidence_Set).
> > >>     jointProb = jointProb * P(V|Evidence_Set)
> > >>     Evidence_Set += V
> > >>
> > >> Of course, if the nodes are all in the same clique then you can
> > >> calculate their joint more efficiently (marginalize the potentials
> > >> over the non-included variables), but in general this won't be the
> > >> case.
> > >>
> > >> I would also be interested if anybody knows of a more efficient way
> > >> to do this calculation in general.
> > >>
> > >> Cheers,
> > >> Denver.
> > >> --
> > >> Denver Dash http://www.sis.pitt.edu/~ddash
> > >>
> > >> ----- Original Message -----
> > >> From: "Mohamed Bendou" <mohamed@esiea-ouest.fr>
> > >> To: <uai@CS.ORST.EDU>
> > >> Sent: Thursday, March 23, 2000 12:58 PM
> > >> Subject: [UAI] BDe scoring metric
> > >>
> > >>
> > >> > Hi,
> > >> > I am trying to program the BDe scoring metric for learning Bayesian
> > >> > networks. Could you point me to a reference on the algorithmic
> > >> > aspects of calculating priors on parameters? I do not understand how
> > >> > I can efficiently calculate the prior joint probabilities from the
> > >> > prior network.
> > >> >
> > >> > Thank you.
> > >> >
> > >> >
> > >> > Mohamed BENDOU
> > >> >
> > >> >
> > >> > --
> > >> > Mohamed BENDOU
> > >> >
> > >> >
> > >>
> > >
> > >
> > >------- End of Forwarded Message
> > >
> >
>
> ------- End of Forwarded Message
--
Denver Dash (http://www.lis.pitt.edu/~ddash)
------- End of Forwarded Message
This archive was generated by hypermail 2b29 : Sun Apr 02 2000 - 08:56:35 PDT