Michael,
>As a general remark on some of the discussions on probability theory
>that recur on the UAI list, I think that it's important to emphasize
>that probability theory is best viewed as a special case of measure
>theory, and it's not a conceit of the mathematicians that they settled
>on the machinery of measurable spaces, random-variables-as-functions
>and the like. In case you don't believe this, read Section 1 of
>Billingsley, which will convince you that without measure theory even
>some elementary results regarding coin-tossing are out of reach.
I studied measure theory out of Billingsley, and I share Mike's
enthusiasm for the book as a good intro to the subject. I agree that
serious study of measure theory is important for any kind of general
treatment of continuous random variables.
However, it's worth noting that Phil Dawid, Vladimir Vovk, and Glenn
Shafer have been working on an alternative formulation of probability
("prequential games"), based on agents engaged in sequential games.
(Shafer and Vovk just published a book, which they announced on this
list.) They have re-proven a bunch of the standard limit theorems
and other results of probability theory without recourse to measure
theory.

This doesn't make measure theory unimportant -- their theory is an
alternative viewpoint which reproduces many of the same mathematical
results (and they will probably reproduce more of them as time goes
on). Anyone seriously interested in foundational issues, or anyone
working in areas requiring an understanding of subtleties, needs a
thorough understanding of measure theory. Whichever way the
foundational debate plays out, measure theory is still valid
mathematics and will continue to be used to prove interesting results.

Nevertheless, as an ontology for probability, prequential theory is
likely to be more palatable than measure theory to people who have an
affinity for constructivist theories of mathematics. Moreover, it has
a more natural semantics (IMHO) than measure theory. In prequential
theory, probability claims (e.g., X1, ..., XN are iid draws from
distribution P(X)) are evaluated based only on actual observed
outcomes, rather than on what would have occurred if outcomes other
than those actually observed had happened. In this way, prequential
theory is similar in spirit to the likelihood principle.

A probabilistic claim such as the one above is correct in the
prequential interpretation if and only if an agent named "skeptic"
(playing the role of a skeptical scientist who looks for flaws in
"forecaster's" reasoning) has no strategy in an appropriately defined
"prequential game" that can make him/her infinitely rich. It is a
theorem that any two "prequential players" whose forecasts are
"prequentially calibrated" (the prequential version of having a
long-run limit, which applies even for processes having no stationary
distribution) will converge to the same probability forecasts -- thus
providing a constructivist interpretation of "objective probability"
(actually, I prefer the term "intersubjectively verifiable", but the
idea is the same).
Kathy