Re: [UAI] Fuzzy sets vs. Bayesian Network

From: Kathryn Blackmond Laskey (klaskey@gmu.edu)
Date: Mon Feb 28 2000 - 06:37:39 PST


    Scott,

    >> > First, remember that it's not the *probability* of being
    >> > tall or small. It's not a probability at all. It's something
    >> > else, sometimes called "possibility", which measures the
    >> > degree something is true (not its frequency, or even one's
    >> > belief that it's true).
    >> This is true, but allow me to remark that I haven't seen any better
    >> `definition' than ``it's something else''. No axiomatic foundations, such
    >> that you can never be sure whether it's your calculus or your algorithm
    >> that leads to bad results....
    >
    >Well, they do have a clear axiomatic foundation. I agree however
    >that the fuzzy types have not given a clear interpretation of what
    >possibility really *is*. What is this measure really measuring?
    >...
    >> > In a fuzzy set theory, the set of tall people
    >> > and the set of small people could well be not mutually
    >> > exclusive. I'm tall for a jockey, but pretty small for a
    >> > basketball player. It makes a difference what the sets
    >> > were constructed to represent.
    >...
    >So you think vagueness is "nothing more than incomplete
    >information"? It's easy to show that it has nothing to do with
    >incomplete information. I could have all the heights of every
    >single individual in the population down to the nanometer,
    >yet still not be sure whether someone deserves the appellation
    >of "tall". There are still borderline cases. Or did you mean to
    >say it is nothing more than incomplete *specification*? That's
    >the more common argument.

    Probability is appropriate for sets satisfying the "clarity test." That
    is, could a clairvoyant who knows the entire state of the world, past,
    present, and future, down to the wave function of every quark, unambiguously
    specify the value of the variable in question? For heights measured in
    centimeters, the answer is yes (leaving out quantum fuzziness, which is
    there but matters only in the fifteenth decimal place or so). For example,
    our clairvoyant can easily answer questions such as whether my son Robbie
    will be between 175 and 176 centimeters tall when he reaches his full adult
    height. Therefore, it is fully appropriate to use a probability density
    function on his adult height (at least in the classical physics
    approximation where people have definite heights -- which will serve most
    of our modeling purposes just fine).
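    (As a toy illustration -- the density and its parameters below are pure
    invention on my part -- a few lines of Python make the clairvoyant's
    question concrete: under an assumed normal density for adult height, the
    probability of landing in the 175-176 cm band is a perfectly well-defined
    number.)

        # Toy illustration: P(175 <= adult height <= 176) under an assumed
        # normal density.  The mean and spread below are invented numbers.
        from scipy.stats import norm

        adult_height = norm(loc=178.0, scale=7.0)   # hypothetical density
        p = adult_height.cdf(176.0) - adult_height.cdf(175.0)
        print(f"P(175 <= height <= 176) = {p:.4f}")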

    However, as pointed out, even if we knew Robbie's adult height, we wouldn't
    know whether he will be tall or not. I agree with the fuzzy folks that
    there *is* something there that's important to capture. However, I've
    tried in vain to get a number of different people in the fuzzy community to
    tell me what a fuzzy membership actually means in operational terms. If
    I'm going to use something in a serious engineering application, as opposed
    to academic philosophizing, it is *very* useful to know what I'm doing in
    theory, even if I do put in plenty of engineering hacks. As my thesis
    advisor used to tell me, "First figure out what you would do if you could
    do it right, and then figure out how to approximate it." If I don't KNOW
    what the thing I'm trying to approximate with my engineering hacks would
    mean if I could do it right, I'm rather uncomfortable.

    For probability theory we have several competing ontologies that have clear
    operational meaning in the domains to which they apply. The most commonly
    cited are (1) propensities based on physical symmetries; (2) limiting
    frequencies of "random" events; (3) beliefs about uncertain phenomena. All
    of these give clear operational criteria for connecting the referents of
    the model to entities in the world and for recognizing when they do and
    don't apply. Moreover, on nearly all problems to which more than one of
    them is applicable, when applied by a competent modeler, they give nearly
    indistinguishable answers to most questions of practical modeling interest.

    I have heard exactly one proposed ontology for fuzzy membership functions
    (proposed by Judea Pearl, among others) that makes sense to me. Under this
    proposed ontology, the fuzzy membership of Robbie's adult height in the set
    "tall," in a given context, should be taken as proportional to the
    probability that a generic person in that context would use the label
    "tall" to describe Robbie. Thus, fuzzy memberships are likelihood
    functions. We can think of them as soft evidence applied to crisp
    numerical height measurements.
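    (To make that reading concrete, here is a toy sketch in Python -- the
    membership curve and the prior density are my own invented numbers, not
    anything from the fuzzy literature.  The membership of each crisp height
    in "tall" enters Bayes' rule exactly where a likelihood would, yielding a
    posterior density over the height of someone described as "tall".)

        # Toy sketch: a fuzzy membership for "tall" used as a likelihood
        # (soft evidence) over crisp height measurements.  All numbers invented.
        import numpy as np
        from scipy.stats import norm

        heights = np.linspace(150.0, 210.0, 601)          # grid of crisp heights (cm)
        prior = norm(loc=178.0, scale=7.0).pdf(heights)   # hypothetical prior density

        def mu_tall(h):
            # Hypothetical membership: chance a generic speaker calls height h "tall"
            return 1.0 / (1.0 + np.exp(-(h - 185.0) / 3.0))

        # Membership as likelihood: P(h | described as "tall") is proportional
        # to mu_tall(h) * P(h), then renormalized over the grid.
        posterior = mu_tall(heights) * prior
        posterior /= posterior.sum()

        print("E[height | called 'tall'] =", float((heights * posterior).sum()))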

    I might go beyond this and suggest an alternate criterion: that the fuzzy
    membership be proportional to the *utility*, for an appropriately defined
    decision maker in that context, of using the term "tall" to describe Robbie
    (this, for example, would allow us to weigh the costs of inappropriate
    usage of the term).
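    (Continuing the toy sketch with utilities I have simply invented: an
    appropriately defined decision maker could choose whether to apply the
    label by expected utility, so that the cost of calling a clearly short
    person "tall" is weighed against the cost of withholding the label from a
    genuinely tall one.)

        # Toy sketch: choosing whether to use the label "tall" by expected
        # utility.  The density and utility numbers are invented.
        import numpy as np
        from scipy.stats import norm

        heights = np.linspace(150.0, 210.0, 601)
        density = norm(loc=178.0, scale=7.0).pdf(heights)
        density /= density.sum()                          # normalize over the grid

        # Hypothetical utilities of using / withholding the label, as a
        # function of the crisp true height.
        def utility_say_tall(h):
            return np.where(h >= 185.0, 1.0, -2.0)        # heavy penalty for misuse

        def utility_stay_silent(h):
            return np.where(h >= 185.0, -0.5, 0.2)        # mild penalty for missed usage

        eu_tall = (utility_say_tall(heights) * density).sum()
        eu_silent = (utility_stay_silent(heights) * density).sum()
        print(f"EU(say 'tall') = {eu_tall:.3f}, EU(stay silent) = {eu_silent:.3f}")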

    This proposed ontology makes a lot of sense at a surface level. However, I
    know it is not what most fuzzy set researchers think they are talking about
    when they use fuzzy memberships. I've never seen its mathematical
    implications worked out, or seen any discussions about whether or under
    what circumstances it gives rise to combination rules that look anything
    like what the fuzzy people now use.

    I therefore find myself in the difficult position of being highly
    sympathetic to the concerns that drove people to invent fuzzy sets in the
    first place, but extremely skeptical about whether what they've developed
    solves the problem they set out to solve in an acceptable way.

    Kathy Laskey


