[UAI] Two new papers in the Journal of Machine Learning Research

From: JMLR (cohn+jmlr@cs.cmu.edu)
Date: Fri Sep 21 2001 - 08:30:09 PDT

    The Journal of Machine Learning Research (www.jmlr.org) is pleased to
    announce the availability of two new papers in electronic form:

    Tracking the Best Linear Predictor
    Mark Herbster and Manfred K. Warmuth
    Journal of Machine Learning Research 1(Sep):281-309, 2001

    Prior Knowledge and Preferential Structures in Gradient Descent Learning
    Algorithms
    Robert E. Mahony and Robert C. Williamson
    Journal of Machine Learning Research 1(Sep):311-355, 2001

    ----------------------------------------
    Tracking the Best Linear Predictor
    Mark Herbster and Manfred K. Warmuth
    Journal of Machine Learning Research 1(Sep):281-309, 2001

    Abstract

    In most on-line learning research, the total on-line loss of the algorithm is
    compared to the total loss of the best off-line predictor u from a
    comparison class of predictors. We call such bounds static bounds. The
    interesting feature of these bounds is that they hold for an arbitrary
    sequence of examples. Recently, some work has been done where the predictor
    u_t at each trial t is allowed to change with time, and the total on-line
    loss of the algorithm is compared to the sum of the losses of u_t at each
    trial plus the total "cost" of shifting to successive predictors. This is
    to model situations in which the examples change over time, and different
    predictors from the comparison class are best for different segments of the
    sequence of examples. We call such bounds shifting bounds. They hold for
    arbitrary sequences of examples and arbitrary sequences of predictors.
    Naturally, shifting bounds are much harder to prove. The only known bounds
    are for the case when the comparison class consists of sequences of
    experts or Boolean disjunctions. In this paper we develop the methodology
    for lifting known static bounds to the shifting case. In particular, we
    obtain bounds when the comparison class consists of linear neurons (linear
    combinations of experts). Our essential technique is to project the
    hypothesis of the static algorithm at the end of each trial into a suitably
    chosen convex region. This keeps the hypothesis of the algorithm
    well-behaved, and the static bounds can then be converted to shifting
    bounds.
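
    As a rough illustration of the projection technique described above, here
    is a minimal Python sketch, not the paper's algorithm: on-line gradient
    descent for a linear predictor under square loss, with the hypothesis
    projected back into an L2 ball after every trial. The step size eta, the
    radius, and the choice of an L2 ball are illustrative assumptions; the
    paper derives suitable convex regions for each algorithm it analyzes.

        import numpy as np

        def project_l2_ball(w, radius):
            # Project w onto {v : ||v||_2 <= radius}, one convenient convex
            # region (an illustrative choice; the paper selects regions to
            # match each algorithm's analysis).
            norm = np.linalg.norm(w)
            return w if norm <= radius else w * (radius / norm)

        def tracking_gd(examples, eta=0.05, radius=1.0):
            # On-line gradient descent under square loss, with the hypothesis
            # projected at the end of each trial so it stays well-behaved,
            # the step that lets static bounds convert to shifting bounds.
            w = None
            total_loss = 0.0
            for x, y in examples:
                if w is None:
                    w = np.zeros_like(x, dtype=float)
                y_hat = w @ x                        # predict
                total_loss += (y_hat - y) ** 2       # square loss this trial
                w = w - eta * 2.0 * (y_hat - y) * x  # static (GD) update
                w = project_l2_ball(w, radius)       # projection step
            return w, total_loss

    On a sequence whose best linear predictor changes between segments, the
    projected iterate can follow those shifts instead of being pinned down by
    early trials.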

    ----------------------------------------
    Prior Knowledge and Preferential Structures in Gradient Descent Learning
    Algorithms
    Robert E. Mahony and Robert C. Williamson
    Journal of Machine Learning Research 1(Sep):311-355, 2001

    Abstract

    A family of gradient descent algorithms for learning linear functions in an
    online setting is considered. The family includes the classical LMS
    algorithm as well as new variants such as the Exponentiated Gradient (EG)
    algorithm due to Kivinen and Warmuth. The algorithms are based on prior
    distributions defined on the weight space. Techniques from differential
    geometry are used to develop the algorithms as gradient descent iterations
    with respect to the natural gradient in the Riemannian structure induced by
    the prior distribution. The proposed framework subsumes the notion of
    "link-functions".

    ----------------------------------------
    These and earlier papers in Volume 1 are available electronically at
    http://www.jmlr.org in PostScript, PDF, and HTML formats; a bound hardcopy
    edition of Volume 1 will be available later this year.

    -David Cohn, <david.cohn@acm.org>
      Managing Editor, Journal of Machine Learning Research

    -------
    This message has been sent to the mailing list "jmlr-announce@ai.mit.edu",
    which is maintained automatically by majordomo. To subscribe to the list,
    send mail to listserv@ai.mit.edu with the line "subscribe jmlr-announce" in
    the body; to unsubscribe, send email to listserv@ai.mit.edu with the line
    "unsubscribe jmlr-announce" in the body.


