Denver Dash,
The traditional method for learning HMM structure is to use EM (known as
Baum-Welch or forward-backward in its HMM version) to learn parameters for
the HMM. Structure shows up in the transition probabilities: allowed
transitions get non-zero probabilities, while disallowed ones end up at
(or near) zero.
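For concreteness, here is a minimal numpy sketch of that procedure for a
discrete-output HMM (my own illustration, not taken from the references
below; the scaling needed to avoid underflow on long sequences is omitted):

import numpy as np

def forward(A, B, pi, obs):
    # alpha[t, i] = P(o_1..o_t, state_t = i)
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    # beta[t, i] = P(o_{t+1}..o_T | state_t = i)
    beta = np.ones((len(obs), A.shape[0]))
    for t in range(len(obs) - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch(obs, n_states, n_symbols, n_iter=100, seed=0):
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states))      # transition matrix
    A /= A.sum(axis=1, keepdims=True)
    B = rng.random((n_states, n_symbols))     # output (emission) matrix
    B /= B.sum(axis=1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)    # initial distribution
    for _ in range(n_iter):
        alpha, beta = forward(A, B, pi, obs), backward(A, B, obs)
        p_obs = alpha[-1].sum()               # P(obs | current model)
        gamma = alpha * beta / p_obs          # P(state_t = i | obs)
        xi = np.zeros((n_states, n_states))   # expected transition counts
        for t in range(len(obs) - 1):
            xi += alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])
        xi /= p_obs
        pi = gamma[0]
        A = xi / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= B.sum(axis=1, keepdims=True)
    return A, B, pi

After fitting, print np.round(A, 3): entries that EM has driven to (near)
zero mark transitions the data doesn't support, and that pattern of zeros
is the learned structure.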
Many of the papers are in speech recognition journals, apparently because
HMMs first saw widespread use in speech recognition. I believe these
also have some discussion of metrics.
Rabiner, L. R., "A tutorial on hidden Markov models and selected
applications in speech recognition", Proceedings of the IEEE, vol. 77,
pp. 257-286, February 1989.
Juang, B. H., and Rabiner, L. R., "Hidden Markov Models for Speech
Recognition", Technometrics, vol. 33, pp. 251-272, August 1991.
It's worth noting that structure in an HMM can be deceptive. It is often
the case that the same stochastic process can be represented by more than
one HMM, and the structures can be different. Belief about the future of
the process is what's important, and the states of an HMM are best thought
of merely as basis vectors with which to represent the space of belief
states.
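As a toy illustration of that point (all numbers invented): a one-state
HMM for a fair coin and a two-state HMM whose states both emit heads and
tails with probability 1/2 assign the same probability to every
observation sequence, no matter what the 2x2 transition matrix is, yet
their structures differ:

import numpy as np
from itertools import product

def seq_prob(A, B, pi, obs):
    # P(obs) via the forward recursion
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# One-state HMM: a fair coin.
A1, B1, pi1 = np.array([[1.0]]), np.array([[0.5, 0.5]]), np.array([1.0])

# Two-state HMM with nontrivial transition structure, but the same
# output distribution in both states.
A2 = np.array([[0.9, 0.1],
               [0.3, 0.7]])
B2 = np.array([[0.5, 0.5],
               [0.5, 0.5]])
pi2 = np.array([0.6, 0.4])

for obs in product([0, 1], repeat=4):
    assert np.isclose(seq_prob(A1, B1, pi1, obs),
                      seq_prob(A2, B2, pi2, obs))

Both models describe the identical fair-coin process, so a structure
learner could report either one; the state diagram itself is not the
invariant.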
My dissertation research was an attempt to create an algorithm which
learned HMMs from data by working from first principles. I could learn the
transition-output probabilities as a function of belief state, but I
couldn't find a way of choosing a basis which made all the probabilities
positive.
My dissertation is available at:
http://www.santafe.edu/projects/CompMech/papers/TAHMMGHMM.html
Dan Upper
> Hi,
>
> Can anybody point me to some references for learning the structure of
> Markov chains and Hidden Markov Models from data?
>
> Also of interest would be some standard references discussing metrics or
> scoring functions for HMMs.
>
> Thanks,
> Denver.
> --
> http://www.sis.pitt.edu/~ddash
>