Re: [UAI] how to evaluate approximate algorithms when exact solution is not available?

From: Eugene Santos (eugene@mailsrv.engr.uconn.edu)
Date: Wed May 09 2001 - 11:32:30 PDT


    I also agree with Rina and Marek on both points they raised below. I would
    also add that another important analysis is to determine which structural
    characteristics of the network most affect your algorithm. These include,
    for example, the distribution of probability values at each node, the graph
    connectivity, and graph topology such as localized clusters. Once these are
    identified, you can then possibly prove lower and upper bounds under such
    structural assumptions, but that is still very hard. On the other hand, you
    can at least perform additional empirical analyses by testing larger
    randomly generated networks, some of which conform to those structures and
    some of which do not. This will at least give some verification of your
    structural hypotheses and even let you examine scalability to a degree. My
    colleague Solomon Shimony and I have some results along these lines; you
    can find the info at
    http://www.cse.uconn.edu/cse/IDIS
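
    As a purely illustrative sketch of that kind of experiment, here is roughly
    what the comparison could look like for small binary networks: generate
    random networks whose connectivity you control, compute the exact marginals
    by brute-force enumeration, approximate them (here simply by forward
    sampling, as a stand-in for whatever algorithm is under study), and compare
    the error on "conforming" (sparse) versus non-conforming (dense) networks.
    The generator, the sampler, and the error metric below are only
    placeholders, not the algorithms from the work referenced above.

    import itertools
    import random

    def random_network(n_nodes, max_parents, rng):
        """Random binary Bayesian network: node i draws parents from 0..i-1."""
        parents, cpts = [], []
        for i in range(n_nodes):
            k = rng.randint(0, min(max_parents, i))
            pa = sorted(rng.sample(range(i), k))
            # One P(X_i = 1 | parent configuration) entry per configuration.
            cpt = {cfg: rng.random()
                   for cfg in itertools.product((0, 1), repeat=k)}
            parents.append(pa)
            cpts.append(cpt)
        return parents, cpts

    def joint_prob(x, parents, cpts):
        """P(x_0, ..., x_{n-1}) via the chain rule of the network."""
        p = 1.0
        for i, xi in enumerate(x):
            cfg = tuple(x[j] for j in parents[i])
            p1 = cpts[i][cfg]
            p *= p1 if xi == 1 else 1.0 - p1
        return p

    def exact_marginals(parents, cpts):
        """Brute-force P(X_i = 1) for every node (small networks only)."""
        n = len(parents)
        marg = [0.0] * n
        for x in itertools.product((0, 1), repeat=n):
            p = joint_prob(x, parents, cpts)
            for i, xi in enumerate(x):
                if xi == 1:
                    marg[i] += p
        return marg

    def sampled_marginals(parents, cpts, n_samples, rng):
        """Forward-sampling estimate of P(X_i = 1), a stand-in approximation."""
        n = len(parents)
        counts = [0] * n
        for _ in range(n_samples):
            x = [0] * n
            for i in range(n):
                cfg = tuple(x[j] for j in parents[i])
                x[i] = 1 if rng.random() < cpts[i][cfg] else 0
                if x[i]:
                    counts[i] += 1
        return [c / n_samples for c in counts]

    if __name__ == "__main__":
        rng = random.Random(0)
        # Sparse networks "conform" to the structural hypothesis; dense do not.
        for label, max_parents in [("sparse", 2), ("dense", 6)]:
            errors = []
            for _ in range(20):
                parents, cpts = random_network(10, max_parents, rng)
                exact = exact_marginals(parents, cpts)
                approx = sampled_marginals(parents, cpts, 2000, rng)
                errors.append(max(abs(e - a) for e, a in zip(exact, approx)))
            print(label, "mean max-abs marginal error:",
                  round(sum(errors) / len(errors), 4))

    Reporting such an error measure separately for the conforming and the
    non-conforming networks, as the network size grows, is one way to check a
    structural hypothesis empirically.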

    Cheers!
    -Gene

         Eugene Santos, Jr. ********* Computer Science & Engineering Dept
        eugene@cse.uconn.edu * ? * University of Connecticut
        "It's not our fault!" * ? * UTEB, 191 Auditorium Rd, U-155
          - Kei & Yuri ********* Storrs, CT 06269-3155
                     http://www.cse.uconn.edu/~eugene
                   (860) 486-1458, Fax: (860) 486-4817

    > From owner-uai@ghost.cs.orst.edu Tue May 8 17:54 EDT 2001
    > To: uai@cs.orst.edu
    > Subject: Re: [UAI] how to evaluate approximate algorithms when exact solution is not available?
    >
    > --On Monday, May 07, 2001, 11:56 AM -0700 Rina Dechter
    > <dechter@ics.uci.edu> wrote:
    >
    > > The question of evaluating the quality of an approximation to
    > > probabilistic inference is really a very important one and a difficult
    > > one, with which we also struggle in Irvine.
    > >
    > > Sometimes it is possible for an approximation to generate upper and
    > > lower bounds on the exact probability, which can provide some estimate
    > > of the accuracy. However, such methods are often very inaccurate, and
    > > therefore one produces arbitrary approximations (no guarantees).
    > > In that case, I don't see any other way but to test your approximation
    > > algorithm on relatively small or moderate networks and compare against
    > > the exact figure (computed by an exact algorithm), with the hope that
    > > the information gained from such a comparison scales up. You can then
    > > try to show superiority relative to other competing approximation
    > > algorithms.
    >
    > I agree with Rina here. I suggest that you find some networks that are
    > large or complex enough for your algorithm to be challenging and yet
    > solvable using exact methods. This is the approach that my doctoral
    > student, Jian Cheng, and I followed when testing stochastic sampling
    > algorithms (e.g., http://www.jair.org/abstracts/cheng00a.html). Unless you
    > have a good theory for putting bounds on your posteriors (we have a
    > forthcoming UAI paper along these lines; still we test this approach by
    > comparing the results to exact answers!), only when you have exact
    > answers can you say anything meaningful about your approximate results.
    >
    > > I have seen some people comparing approximation results to each other
    > > (comparing to Gibbs sampling, for instance); however, such comparisons
    > > are meaningless, I think.
    >
    > I agree here.
    >
    > Marek
    > --------------------------------------------------------------------------
    > Marek J. Druzdzel http://www.pitt.edu/~druzdzel
    >
    >
    >



    This archive was generated by hypermail 2b29 : Wed May 09 2001 - 11:43:05 PDT