* * * Post-NIPS*2001 Workshops * * *
* * * Whistler, BC, CANADA * * *
* * * December 7-8, 2001 * * *
The NIPS*2001 Workshops will be on Friday and Saturday, December 7/8,
in Whistler, BC, Canada, following the main NIPS conference in
Vancouver Monday-Thursday, December 3-6.
This year there are 19 workshops:
Activity-Dependent Synaptic Plasticity
Artificial Neural Networks in Safety-Related Areas
Brain-Computer Interfaces
Causal Learning and Inference in Humans & Machines
Competition: Unlabeled Data for Supervised Learning
Computational Neuropsychology
Geometric Methods in Learning
Information & Statistical Structure in Spike Trains
Kernel-Based Learning
Knowledge Representation in Meta-Learning
Machine Learning in Bioinformatics
Machine Learning Methods for Images and Text
Minimum Description Length
Multi-sensory Perception & Learning
Neuroimaging: Tools, Methods & Modeling
Occam's Razor & Parsimony in Learning
Preference Elicitation
Quantum Neural Computing
Variable & Feature Selection
Some workshops span both days, while others will be only one day long.
One-day workshops will be assigned to Friday or Saturday by October 14;
please check the web page after that date for individual dates.
All workshops are open to all registered attendees. Many workshops
also invite submissions. Submissions and questions about individual
workshops should be directed to the workshop organizers.
Included below is a short description of most of the workshops.
Additional information (including web pages for the individual
workshops) is available at the NIPS*2001 Web page:
http://www.cs.cmu.edu/Groups/NIPS/
Information about registration, travel, and accommodations for the
main conference and the workshops is also available there.
Whistler is a ski resort a few hours' drive from Vancouver. The daily
workshop schedule is designed to allow participants to ski half days,
or enjoy other extra-curricular activities. Some may wish to extend
their visit to take advantage of the relatively low pre-season rates.
We look forward to seeing you in Whistler.
Virginia de Sa and Barak Pearlmutter
NIPS Workshops Co-chairs
-------------------------------------------------------------------------
Activity-dependent Synaptic Plasticity
Paul Munro, Larry Abbott
http://www.pitt.edu/~pwm/plasticity
While the mathematical and cognitive aspects of rate-based
Hebb-like rules have been broadly explored, relatively little is
known about the possible role of spike-timing-dependent plasticity
(STDP) at the computational level. Hebbian learning in neural
networks requires both
correlation-based synaptic plasticity and a mechanism that induces
competition between different synapses. Spike-timing-dependent
synaptic plasticity is especially interesting because it combines
both of these elements in a single synaptic modification
rule. Some recent work has examined the possibility that STDP may
underlie older models, such as Hopfield networks or the BCM
rule. Temporally dependent synaptic plasticity is attracting a
rapidly growing amount of attention in the computational
neuroscience community. The change in synaptic efficacy arising
from this form of plasticity is highly sensitive to temporal
correlations between different presynaptic spike
trains. Furthermore, it can generate asymmetric and directionally
selective receptive fields, a result supported by experiments on
experience-dependent modifications of hippocampal place
fields. Finally, spike-timing-dependent plasticity automatically
balances excitation and inhibition producing a state in which
neuronal responses are rapid but highly variable. The major goals
of the workshop are:
1. To review current experimental results on
spike-timing-dependent synaptic plasticity and related effects.
2. To discuss models and mechanisms for this form of synaptic plasticity.
3. To explore the relationship of STDP with other approaches.
4. To reconcile the rate-based and spike-based plasticity data
with a unified theoretical framework (very optimistic!).
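For concreteness, here is a minimal sketch of the standard
exponential STDP window that many models of this kind use; the
amplitudes and time constants below are hypothetical placeholders,
not values endorsed by the workshop.

import numpy as np

# Minimal sketch of the exponential STDP window (hypothetical constants).
# dt_ms = t_post - t_pre, in milliseconds.
A_PLUS, A_MINUS = 0.01, 0.012     # slightly stronger depression balances excitation
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for a single pre/post spike pair."""
    if dt_ms > 0:    # pre fires before post: potentiation
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:  # post fires before pre: depression
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)
    return 0.0

# The temporal asymmetry is what makes the rule both correlation-sensitive
# and competitive: inputs that reliably precede the postsynaptic spike are
# strengthened at the expense of those that follow it.
for dt in (-40, -10, 10, 40):
    print(dt, round(stdp_dw(dt), 5))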
-------------------------------------------------------------------------
Artificial Neural Networks in Safety-Related Areas:
Applications and Methods for Validation and Certification
J. Schumann, P. Lisboa, R. Knaus
http://ase.arc.nasa.gov/people/schumann/workshops/NIPS2001
Over recent years, Artificial Neural Networks have found their
way into various safety-related and safety-critical areas, for
example, power generation and transmission, transportation,
avionics, environmental monitoring and control, medical
applications, and consumer products. Applications range from
classification to monitoring and control. Quite often, these
applications proved to be highly successful, evolving from pure
research prototypes into serious experimental systems (e.g., a
neural-network-based flight-control system test-flown on a NASA
F-15 ACTIVE) or commercial products (e.g., Sharp's
Logi-cook). However, the general question of how to make sure that
the ANN-based system performs as expected in all cases has not yet
been addressed satisfactorily. All safety-related software
applications require careful verification and validation (V&V) of
the software components, ranging from extended testing to
full-fledged certification procedures. However, for neural-network
based systems, a number of specific issues have to be
addressed. For example, the lack of a concise plant model, often a
major reason to use an ANN in the first place, makes traditional
approaches to V&V impossible.
In this workshop, we will address such issues. In particular, we
will discuss the following (non-exhaustive) list of topics:
* theoretical methodologies to characterise the properties of ANN
solutions, e.g., multiple realisations of a particular network and
ways of managing this
* fundamental software approaches to V&V and their implications
for ANNs, e.g., the application of FMEA
* statistical (Bayesian) methods and symbolic techniques, such as
rule extraction with subsequent V&V, to assess and guarantee the
performance of an ANN
* dynamic monitoring of the ANN's behavior (see the sketch below)
* stability proofs for control of dynamical systems with ANNs
* principled approaches to design assurance, risk assessment, and
performance evaluation of systems with ANNs
* experience of application and certification of ANNs for
safety-related applications
* V&V techniques suitable for on-line trained and adaptive systems
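To make the monitoring topic concrete, here is a minimal,
hypothetical sketch of one simple approach, input-envelope
monitoring; the workshop itself does not prescribe any particular
technique.

import numpy as np

# Minimal sketch (hypothetical): a run-time monitor that flags inputs falling
# outside the envelope of the training data, where the ANN's behavior has not
# been validated.
class EnvelopeMonitor:
    def __init__(self, train_inputs, margin=0.1):
        # Per-dimension bounds of the training set, padded by a margin.
        pad = margin * (train_inputs.max(axis=0) - train_inputs.min(axis=0))
        self.lo = train_inputs.min(axis=0) - pad
        self.hi = train_inputs.max(axis=0) + pad

    def check(self, x):
        """Return True if x lies inside the validated input envelope."""
        return bool(np.all(x >= self.lo) and np.all(x <= self.hi))

# Usage: wrap the network so out-of-envelope inputs trigger a safe fallback.
monitor = EnvelopeMonitor(np.random.rand(1000, 4))
print(monitor.check(np.array([0.5, 0.5, 0.5, 0.5])))  # True: interpolation
print(monitor.check(np.array([2.0, 0.5, 0.5, 0.5])))  # False: extrapolation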
This workshop aims to bring together researchers who have applied
ANNs in safety-related areas and actually addressed questions of
demonstrating flawless operation of the ANN, researchers working
on theoretical topics of convergence and performance assessment,
researchers in the area of nonlinear adaptive control, and
researchers from the area of formal methods in software design for
safety-critical systems. Many prototypical and experimental
applications of neural networks in safety-related areas have
successfully demonstrated their usefulness. But ANN applicability
in safety-critical areas is substantially limited by a lack of
methods and techniques for verification and
validation. Currently, there is no silver bullet for V&V even of
traditional software, and given the more complicated situation for
ANNs, none is expected here in the short run. However, any result
can have a substantial impact on this field.
-------------------------------------------------------------------------
Brain-Computer Interfaces
Lucas Parra, Paul Sajda, Klaus-Robert Mueller
http://newton.bme.columbia.edu/bci
-------------------------------------------------------------------------
Causal learning and inference in humans and machines
T. Griffiths, J. Tenenbaum, T. Kushnir, K. Murphy, A. Gopnik
http://www-psych.stanford.edu/~jbt/causal-workshop.html
The topic of causality has recently leapt to the forefront of
theorizing in the fields of cognitive science, statistics, and
artificial intelligence. The main objective of this workshop is to
explore the potential connections between research on causality in
these three fields. There has already been much productive
cross-fertilization: the development of causal Bayes nets in the
AI community has often had a strong psychological motivation, and
recent work by several groups in cognitive science has shown that
some elementary but important aspects of how people learn and
reason about causes may be best explained by theories based on
causal Bayes nets. Yet the most important questions lie wide
open. Some examples of the questions we hope to address in this
workshop include:
* Can we scale up Bayes-net models of human causal learning and
inference from microdomains with one or two causes and effects to
more realistic large-scale domains?
* What would constitute strong empirical tests of large-scale
Bayes net models of human causal reasoning?
* Do approximation methods for inference and learning on large
Bayes nets have anything to do with human cognitive processes?
* What are the relative roles of passive observation and active
manipulation in causal learning? (See the sketch after this list.)
* What is the relation between psychological and computational
notions of causal independence?
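The observation/manipulation question can be made concrete with a
toy causal Bayes net. The numbers below are hypothetical; the point
is only that conditioning and intervening give different answers
when a confounder is present.

# Toy causal Bayes net U -> C, U -> E, C -> E, where U confounds C and E.
# All probabilities are made-up illustration values.
P_U = {0: 0.5, 1: 0.5}
P_C_given_U = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}              # P(C=c | U=u)
P_E_given_CU = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.5, (1, 1): 0.9}   # P(E=1 | C=c, U=u)

# Passive observation: P(E=1 | C=1). Conditioning lets the confounder leak in.
num = sum(P_U[u] * P_C_given_U[u][1] * P_E_given_CU[(1, u)] for u in (0, 1))
den = sum(P_U[u] * P_C_given_U[u][1] for u in (0, 1))
p_obs = num / den

# Active manipulation: P(E=1 | do(C=1)). Cut the U -> C arrow and set C = 1.
p_do = sum(P_U[u] * P_E_given_CU[(1, u)] for u in (0, 1))

print(f"P(E=1 | C=1)     = {p_obs:.3f}")  # 0.856: inflated by confounding
print(f"P(E=1 | do(C=1)) = {p_do:.3f}")   # 0.700: the causal effect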
The workshop will last one day. Most of the talks will be
invited, but we welcome contributions for short talks by
researchers in AI, statistics, or cognitive science who would like
to make connections between these fields. Please contact one of the
organizers if you are interested in participating. For more
information contact Josh Tenenbaum (jbt@psych.stanford.edu) or
Alison Gopnik (gopnik@socrates.berkeley.edu).
-------------------------------------------------------------------------
Competition: Unlabeled Data for Supervised Learning
Stefan C. Kremer, Deborah A. Stacey
http://q.cis.uoguelph.ca/~skremer/NIPS2001/
Recently, there has been much interest in applying techniques that
incorporate knowledge from unlabeled data into systems performing
supervised learning. The potential advantages of such techniques
are obvious in domains where labeled data is expensive and
unlabeled data is cheap. Many such techniques have been proposed,
but only recently has any effort been made to compare the
effectiveness of different approaches on real world problems.
This web-site presents a challenge to the proponents of methods to
incorporate unlabeled data into supervised learning. Can you
really use unlabeled data to help train a supervised
classification (or regression) system? Do recent (and not so
recent) theories stand up to the data test?
On this web-site you can find challenge problems where you can try
out your methods head-to-head against anyone brave enough to face
you. Then, at the end of the contest, we will release the results
and find out who really knows something about using unlabeled
data, and whether unlabeled data are really useful or we are all
just wasting our time. So ask yourself, are you (and your theory) up to
the challenge?? Feeling lucky???
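For readers new to the area, here is a minimal sketch of
self-training, one of the simplest ways to fold unlabeled data into
a supervised learner. It is purely illustrative (using scikit-learn
for the base classifier); the competition does not prescribe or
endorse any method.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=5, threshold=0.95):
    """Iteratively pseudo-label confident unlabeled points and retrain."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        sure = proba.max(axis=1) >= threshold   # trust only confident predictions
        if not sure.any():
            break
        X = np.vstack([X, pool[sure]])
        y = np.concatenate([y, clf.classes_[proba[sure].argmax(axis=1)]])
        pool = pool[~sure]                      # remove newly labeled points
    return clf

# Tiny demo: two Gaussian blobs, only 10 of 200 labels revealed.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
lab = rng.choice(200, size=10, replace=False)
unlab = np.setdiff1d(np.arange(200), lab)
print(self_train(X[lab], y[lab], X[unlab]).score(X, y))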
-------------------------------------------------------------------------
Computational Neuropsychology
Sara Solla, Michael Mozer, Martha Farah
http://www.cs.colorado.edu/~mozer/nips2001workshop.html
The 1980s saw two important developments in the sciences of the
mind: the development of neural network models in cognitive
psychology, and the rise of cognitive neuroscience. In the 1990s,
these two separate approaches converged, and one of the results
was a new field that we call "Computational Neuropsychology." In
contrast to traditional cognitive neuropsychology, computational
neuropsychology uses the concepts and methods of computational
modeling to infer the normal cognitive architecture from the
behavior of brain-damaged patients. In contrast to traditional
neural network modeling in psychology, computational
neuropsychology derives constraints on network architectures and
dynamics from functional neuroanatomy and neurophysiology.
Unfortunately, work in computational neuropsychology has had
relatively little contact with the Neural Information Processing
Systems (NIPS) community. Our workshop aims to expose the NIPS
community to the unusual patient cases in neuropsychology and the
sorts of inferences that can be drawn from these patients based on
computational models, and to expose researchers in computational
neuropsychology to some of the more sophisticated modeling
techniques and concepts that have emerged from the NIPS community
in recent years.
We are interested in speakers from all aspects of neuropsychology,
including:
* attention (neglect)
* visual and auditory perception (agnosia)
* reading (acquired dyslexia)
* face recognition (prosopagnosia)
* memory (Alzheimer's, amnesia, category-specific deficits)
* language (aphasia)
* executive function (schizophrenia, frontal deficits).
Contact Sara Solla (solla@nwu.edu) or Mike Mozer
(mozer@colorado.edu) if you are interested in speaking at the
workshop.
-------------------------------------------------------------------------
Geometric Methods in Learning workshop
Amir Assadi
http://www.lmcg.wisc.edu/bioCVG/events/NIPS2001/NIPS2001Wkshp.htm
http://www.lmcg.wisc.edu/bioCVG
The purpose of this workshop is to attract the attention of the
learning community to geometric methods and to take on an
endeavor:
1. To lay out a geometric paradigm for formulating profound ideas
in learning;
2. To facilitate the development of geometric methods suitable
for the investigation of new ideas in learning theory.
Today's continuing advances in computation make it possible to
infuse geometric ideas into learning that otherwise would have
been computationally prohibitive. Nonlinear dynamics in brain-like
complex systems has created great excitement, offering a broad
spectrum of new ideas for discovery of parallel-distributed
algorithms, a hallmark of learning theory. Because they overlap
greatly, geometry and nonlinear dynamics together offer a
complementary and more profound picture of the physical world and
how it interacts with the brain, the ultimate learning system.
Among the discussion topics, we envision the following:
information geometry, differential topological methods for turning
local estimates into global quantities and invariants, Riemannian
geometry and Feynman path integration as a framework to explore
nonlinearity, advances in complex dynamical systems theory in the
context of learning and dynamic information processing in the brain,
and information theory of massive data sets. As before, in our
discussion sessions we will also examine the potential impact of
learning theory on the future development of geometry, and report
on new examples of the impact of learning-theoretic
parallel-distributed algorithms on research in mathematics.
After three years of meetings, we are in a position to plan a
volume based on the workshop materials and other contributions, to
be proposed to the NIPS Program Committee.
-------------------------------------------------------------------------
Information and Statistical Structure in Spike Trains
Jonathan D. Victor
http://www-users.med.cornell.edu/~jdvicto/nips2001.html
Understanding how neurons represent and manipulate information in
their spike trains is one of the major fundamental problems in
neuroscience. Moreover, advances towards its solution will rely
on a combination of appropriate theoretical, computational, and
experimental strategies. Meaningful and reliable statistical
analyses, including calculation of information and related
quantities, are at the basis of understanding neural information
processing. The accuracy and precision of statistical analyses and
empirical information estimates depend strongly on the amount and
quality of the data available, and on the assumptions that are
made in order to apply the formalisms to a laboratory data
set. These assumptions typically relate to the neural transduction
itself (e.g., linearity or stationarity) and to the statistics of
the spike trains (e.g., correlation structure). There are numerous
approaches to conducting statistical analyses and estimating
information-theoretic quantities, and there are also some major
differences in findings across preparations. It is unclear to what
extent these differences represent fundamental biological
differences, differences in what is being measured, or
methodological biases.
Specific areas of focus will include:
* Theoretical and experimental approaches to analyzing
multineuronal spiking activity
* Bursting, rhythms, and other endogenous patterns
* Is "Poisson-like" a reasonable approximation to spike train
stochastic structure? (See the sketch below.)
* How do we formulate alternatives to the Poisson model?
* How do we evaluate model goodness-of-fit?
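One standard first-pass check on the Poisson question is the Fano
factor, the variance-to-mean ratio of spike counts, which equals 1
for a homogeneous Poisson process. The sketch below uses simulated
data and an arbitrary counting window, purely for illustration.

import numpy as np

def fano_factor(spike_times, t_stop, window=0.1):
    """Variance/mean of spike counts in non-overlapping windows (seconds)."""
    edges = np.arange(0.0, t_stop + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var() / counts.mean()

# A homogeneous Poisson train at 20 Hz for 100 s: Fano factor should be ~1.
# Bursty trains give values > 1; regular (clock-like) trains give values < 1.
rng = np.random.default_rng(0)
rate, t_stop = 20.0, 100.0
poisson_train = np.sort(rng.uniform(0.0, t_stop, rng.poisson(rate * t_stop)))
print(f"Fano factor: {fano_factor(poisson_train, t_stop):.2f}")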
A limited number of slots are available for contributed
presentations. Individuals interested in presenting a talk
(approximately 20 minutes, with 10 to 20 minutes for discussion)
should submit a title and abstract, 200-300 words, to the
organizers, Jonathan D. Victor (jdvicto@med.cornell.edu) and Emery
Brown (brown@neurostat.mgh.harvard.edu) by October 12, 2001.
-------------------------------------------------------------------------
Workshop on New Directions in Kernel-Based Learning Methods
Chris Williams, Craig Saunders, Matthias Seeger, John Shawe-Taylor
http://www.cs.rhul.ac.uk/colt/nipskernel.html
The aim of the workshop is to present new perspectives and new
directions in kernel methods for machine learning. Recent
theoretical advances and experimental results have drawn
considerable attention to the use of kernel functions in learning
systems. Support Vector Machines, Gaussian Processes, kernel PCA,
kernel Gram-Schmidt, Bayes Point Machines, Relevance and Leverage
Vector Machines, are just some of the algorithms that make crucial
use of kernels for problems of classification, regression, density
estimation, novelty detection and clustering. At the same time as
these algorithms have been under development, novel techniques
specifically designed for kernel-based systems have resulted in
methods for assessing generalisation, implementing model
selection, and analysing performance. The choice of model may be
simply determined by parameters of the kernel, as for example the
width of a Gaussian kernel. More recently, however, methods for
designing and combining kernels have created a toolkit of options
for choosing a kernel in a particular application. These methods
have extended the applicability of the techniques beyond the
natural Euclidean spaces to more general discrete structures.
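As a small illustration of the model choice mentioned above, the
sketch below computes the Gram matrix of a Gaussian kernel and
shows how its width acts as the main model-selection parameter; the
data and widths are arbitrary.

import numpy as np

# Gaussian (RBF) kernel: k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)).
def gaussian_gram(X, sigma=1.0):
    """Gram matrix K[i, j] = k(X[i], X[j])."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma**2))

X = np.random.rand(5, 3)
for sigma in (0.1, 1.0, 10.0):
    K = gaussian_gram(X, sigma)
    # Small sigma: K approaches the identity (very flexible, risks overfitting).
    # Large sigma: K approaches all-ones (very smooth, risks underfitting).
    print(sigma, round(float(K.mean()), 3))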
The workshop will provide a forum for discussing results and
problems in any of the above mentioned areas. But more
importantly, by the structure of the workshop we hope to examine
the future directions and new perspectives that will keep the
field lively and growing.
We seek two types of contributions:
1) Contributed 20-minute talks that offer new directions
(serving as a focal point for the general discussions)
2) Posters of new ongoing work, with associated spotlight
presentations (summarising current work and serving as a
springboard for individual discussion).
Important Dates:
Submission of extended abstracts: 15th October 2001.
Notification of acceptance: Early November.
Submission Procedure: Extended abstracts in .ps or .pdf formats
(only) should be e-mailed to nips-kernel-workshop@cs.rhul.ac.uk
-------------------------------------------------------------------------
Knowledge Representation In Meta-Learning
Ricardo Vilalta
http://www.research.ibm.com/MetaLearning
Learning across multiple related tasks, or improving learning
performance over time, requires that knowledge be transferred
across tasks. In many classification algorithms, successive
applications of the algorithm to the same data always produce the
same hypothesis; no knowledge is extracted across tasks. Knowledge
across tasks can be used to construct meta-learners able to
improve the quality of the inductive bias through experience. To
attain this goal, different pieces of knowledge are needed. For
example, how can we characterize those tasks that are most
favorable to a particular classification algorithm? On the other
hand, what forms of bias are most favorable for certain tasks? Are
there invariant transformations inherent to a domain that can be
captured when learning across tasks? The goal of the workshop is
to discuss alternative ways of knowledge representation in
meta-learning with the idea of achieving new forms of bias
adaptation.
Important Dates: Paper submission: Nov 1, 2001. Notification of
acceptance: Nov 12, 2001. Camera-ready copy: Nov 26, 2001.
-------------------------------------------------------------------------
Machine Learning Techniques for Bioinformatics
Colin Campbell, Sayan Mukherjee
http://lara.enm.bris.ac.uk/cig/nips01/nips01.htm
There has been significant recent interest in the development of
new methods for functional interpretation of gene expression data
derived from cDNA microarrays and related technologies. Analysis
frequently involves classification, regression, feature selection,
outlier detection and cluster analysis, for example. To provide a
focus, this topic will be the main theme of this one-day workshop,
though contributions in related areas of bioinformatics are
welcome. Contributed papers should ideally be in the area of new
algorithmic or theoretical approaches to analysing such datasets
as well as biologically interesting applications and validation of
existing algorithms. To make sure the Workshop relates to issues
of real importance to experimentalists there will be four invited
tutorial talks to introduce microarray technology, illustrate
particular case studies and discuss issues relevant to eventual
clinical application. The invited speakers are Pablo Tamayo or
Todd Golub (Whitehead Institute, MIT), Dan Notterman (Princeton
University), Roger Bumgarner (University of Washington) and
Richard Simon (National Cancer Institute). The invited speakers
have been involved in the preparation of well-known datasets and
studies of expression analysis for a variety of cancers. Authors
wishing to contribute papers should submit a title and extended
abstract to both organisers (C.Campbell@bris.ac.uk and
sayan@mit.edu) before 14th October 2001. Further details about
this workshop and the final schedule are available from the
workshop webpage.
-------------------------------------------------------------------------
Machine Learning Methods for Images and Text
Thomas Hofmann, Jaz Kandola, Tomaso Poggio, John Shawe-Taylor
http://www.cs.rhul.ac.uk/colt/nipstext.html
The aim of the workshop is to present new perspectives and new
directions in information extraction from structured and
semi-structured data for machine learning. The goal of this
workshop is to investigate extensions of modern statistical
learning techniques for applications in the domains of
categorization and retrieval of information, for example text,
video and sound, as well as their combination --
multimedia. The focus will be on exploring innovative and
potentially groundbreaking machine learning technologies as well
as on identifying key challenges in information access, such as
multi-class classification, partially labeled examples and the
combination of evidence from separate multimedia domains. The
workshop aims to bring together an interdisciplinary group of
international researchers from machine learning, information
retrieval, computational linguistics, human-computer interaction,
and digital libraries for discussing results and dissemination of
ideas, with the objective of highlighting new research
directions. The workshop will provide a forum for discussing
results and problems in any of the above mentioned areas. But more
importantly, by the structure of the workshop we hope to examine
the future directions and new perspectives that will keep the
field lively and growing. We seek two types of contributions:
1) Contributed 20-minute talks that offer new directions (serving
as a focal point for the general discussions)
2) Posters of new ongoing work, with associated spotlight
presentations (summarising current work and serving as a
springboard for individual discussion).
Important Dates: Submission of extended abstracts: 15th October
2001. Notification of acceptance: 2nd November 2001.
Submission Procedure: Extended abstracts in .ps or .pdf formats
(only) should be e-mailed to nips-text-workshop@cs.rhul.ac.uk by
15th October 2001. Extended abstracts should be 2-4 sides of A4.
Highlighting a conference-style subject group for the paper is not
necessary; however, indicating a group and/or keywords would be
helpful.
-------------------------------------------------------------------------
Minimum Description Length: Developments in Theory and New Applications
Peter Grunwald, In-Jae Myung, Mark Pitt
http://quantrm2.psy.ohio-state.edu/injae/workshop.htm
Inductive inference, the process of inferring a general law from
observed instances, is at the core of science. The Minimum
Description Length (MDL) Principle, which was originally proposed
by Jorma Rissanen in 1978 as a computable approximation of
Kolmogorov complexity, is a powerful method for inductive
inference. The MDL principle states that the best explanation
(i.e., model) given a limited set of observed data is the one that
permits the greatest compression of the data. That is, the more we
are able to compress the data, the more we learn about the
underlying regularities that generated the data. This
conceptualization originated in algorithmic information theory
from the notion that the existence of regularities underlying data
necessarily implies redundancy in the information from successive
observations. Since 1978, significant strides have been made in
both the mathematics and application of MDL. For example, MDL is
now being applied in machine learning, statistical inference,
model selection, and psychological modeling. The purpose of this
workshop is to bring together researchers, both theorists and
practitioners, to discuss the latest developments and share new
ideas. In doing so, our intent is to introduce to the broader
NIPS community the current state of the art in the field.
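As a concrete illustration of the principle, here is a minimal
sketch of crude two-part MDL model selection for polynomial
regression. The (k/2) log n + (n/2) log(RSS/n) code length is a
standard textbook approximation; modern refined one-part codes, a
likely workshop topic, improve on it. Data and constants are made up.

import numpy as np

# Two-part MDL: total length = L(model) + L(data | model). For a degree-d
# polynomial with Gaussian residuals, approximately
#   (k/2) log n  +  (n/2) log(RSS / n),   with k = d + 1 parameters.
rng = np.random.default_rng(1)
n = 50
x = np.linspace(-1, 1, n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.2 * rng.standard_normal(n)  # true degree: 2

def description_length(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1
    return 0.5 * k * np.log(n) + 0.5 * n * np.log(rss / n)

# The degree permitting the greatest compression wins; higher degrees fit
# noise, and their extra parameter cost outweighs the shrinking residual.
for d in range(6):
    print(d, round(description_length(d), 1))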
-------------------------------------------------------------------------
Multi-sensory Perception & Learning
J. Fisher, L. Shams, V. de Sa, M. Slaney, T. Darrell
http://www.ai.mit.edu/people/fisher/nips01/perceptwshop/description/
All perception is multi-sensory perception. Situations where animals
are exposed to information from a single modality exist only in
experimental settings in the laboratory. For a variety of reasons,
research on perception has focused on processing within one sensory
modality. Consequently, the state of knowledge about multi-sensory
fusion in mammals is largely at the level of phenomenology, and the
underlying mechanisms and principles are poorly understood. Recently,
however, there has been a surge of interest in this topic, and this
field is emerging as one of fast growing areas of research in
perception.
Simultaneously, with the advent of low-cost, low-power
multi-media sensors, there has been renewed interest in automated
multi-modal data processing. Whether in an intelligent room
environment, a heterogeneous sensor array, or an autonomous robot,
robust integrated processing of multiple modalities has the potential to
solve perception problems more efficiently by leveraging complementary
sensor information.
The goals of this workshop are to further the understanding of
both the cognitive mechanisms by which humans (and other animals)
integrate multi-modal data and the means by which automated
systems may similarly function. It is not our contention that one
should follow the other. It is our contention that researchers in
these different communities stand to gain much through interaction
with each other. This workshop aims to bring these researchers
together to compare methods and performance and to develop a common
understanding of the underlying principles which might be used to
analyze both human and machine perception of multi-modal
data. Discussions and presentations will span theory and
application, as well as relevant aspects of animal/machine perception.
The workshop will emphasize a moderated discussion format with
short presentations prefacing each of the discussions. Please
see the web page for some of the specific questions to be addressed.
-------------------------------------------------------------------------
Neuroimaging: Tools, Methods & Modeling
B. M. Bly, L. K. Hansen, S. J. Hanson, S. Makeig, S. Strother
http://psychology.rutgers.edu/Users/ben/nips2001/nips2001workshop.html
Advances in the mathematical description of neuroimaging data are
currently a topic of great interest. Last June, at the 7th Annual
Meeting of the Organization for Human Brain Mapping in Brighton
UK, the number of statistical modeling abstracts virtually
exploded (30 abstracts were submitted on ICA alone). Because of
its high relevance for researchers in statistical modeling,
neuroimaging has been the topic of several NIPS workshops.
Neuroinformatics is an emerging research field which, besides a
rich modeling activity, is also concerned with database and
datamining issues as well as ongoing discussions of data and model
sharing. Several groups now distribute statistical modeling tools,
and advanced exploratory approaches are finding increasing use in
neuroimaging labs. NIPS is a rich arena for multivariate and
neural modeling; the intersection of neuroimaging and neural
models is important for both fields.
This workshop will discuss the underlying methods and software
tools related to a variety of strategies for modeling and
inference in neuroimaging data analysis (Morning, Day 1).
Discussants will also present methods for comparison, evaluation,
and meta-analysis in neuroimaging (Afternoon, Day 1). On the
second day of the workshop, we will continue the discussion with a
focus on multivariate strategies (Morning, Day 2). The workshop
will include a discussion of hemodynamic and neural models and
their role in mathematical modeling of neuroimaging data
(Afternoon, Day 2). Each session of the two-day workshop will
include discussion. Talks are intended to last roughly 20 minutes
each, followed by 10 minutes of discussion. At the end of each
day, there will be a discussion of themes by all participants,
with the presenters acting as a panel.
-------------------------------------------------------------------------
Foundations of Occam's razor and parsimony in learning
David G. Stork
http://www.rii.ricoh.com/~stork/OccamWorkshop.html
"Entia non sunt multiplicanda praeter necessitatem"
-- William of Occam (1285?-1349?)
Occam's razor is generally interpreted as counselling the use of
"simpler" models rather than complex ones, fewer parameters rather
than more, and "smoother" generalizers rather than those that are
less smooth. The mathematical descendants of this philosophical
principle of parsimony appear in the minimum description length
principle, the Akaike information criterion, Kolmogorov
complexity, and related formulations, having numerous
manifestations in learning, for instance regularization,
the training data, in the absence of other information should we
favor "simpler" models, and if so, why? How do we measure
simplicity, and which representation should we use when doing so?
What assumptions are made -- explicitly or implicitly -- by these
methods and when are such assumptions valid? What are the minimum
assumptions or conditions -- for instance that by increasing the
amount of training data we will improve a classifier's performance
-- that yield Occam's razor? Support Vector Machines and some
neural networks contain a very large number of free parameters,
more than might be permitted by the size of the training data and
in seeming contradiction to Occam's razor; nevertheless, such
classifiers can work exceedingly well. Why? Bayesian techniques
such as ML-II reduce a classifier's complexity in a data-dependent
way. Does this comport with Occam's razor? Can we characterize
problems for which Occam's razor should or should not apply? Even
if we abandon the search for the "true" model that generated the
training data, can Occam's razor improve our chances of finding a
"useful" model?
It has been said that Occam's razor is either profound and true,
or vacuous and false -- it just isn't clear which. Rather than
address specific implementation techniques or applications, the
goal of this workshop is to shed light on, and if possible
resolve, the theoretical questions associated with Occam's razor,
some of the deepest in the intellectual foundations of machine
learning and pattern recognition.
-------------------------------------------------------------------------
Quantum Neural Computing
Elizabeth Behrman
-------------------------------------------------------------------------
Variable and Feature Selection
Isabelle Guyon, David Lewis
http://www.clopinet.com/isabelle/Projects/NIPS2001/
Variable selection has recently received a lot of attention from
the machine learning and neural network community because of its
applications in genomics and text processing. Variable selection
refers to the problem of selecting input variables that are most
predictive of a given outcome. Variable selection problems are
found in all machine learning tasks, supervised or unsupervised
(clustering), classification, regression, or time-series
prediction, two-class or multi-class, each posing its own level of
challenge. The
objective of variable selection is two-fold: improving the
prediction performance of the predictors and providing a better
understanding of the underlying process that generated the
data. This last objective is particularly important in biology,
where the process may be a living organism and the variables gene
expression coefficients. One of the goals of the workshop is to
explore alternative statements of the problem, including: (i)
discovering all the variables relevant to the concept (e.g., to
identify all candidate drug targets), and (ii) finding a minimum
subset of variables that are useful to the predictor (e.g., to
identify the best biomarkers for diagnosis or prognosis). The
workshop will
also be a forum to compare the best existing algorithms and to
discuss the organization of a potential competition on variable
selection for a future workshop. Prospective participants are
invited to submit a one- or two-page summary. Theory, algorithm,
and application contributions are welcome. After the workshop, the
participants will be offered the possibility of submitting a full
paper to a special issue of the Journal of Machine Learning
Research on variable selection. Deadline for submission: October
15, 2001. Email submissions to: Isabelle Guyon at
isabelle@clopinet.com.
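The two problem statements above can be contrasted with a small
sketch: univariate ranking (toward "all relevant" variables) keeps
redundant variables, while greedy forward selection (toward a
"minimum useful subset") skips them. Synthetic data, purely for
illustration.

import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 10
X = rng.standard_normal((n, d))
X[:, 2] = X[:, 0] + 0.1 * rng.standard_normal(n)   # variable 2: redundant copy of 0
y = X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(n)

# (i) "All relevant": univariate correlation ranks variable 2 near the top
# even though it adds nothing beyond variable 0.
scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(d)]
print("univariate ranking:", np.argsort(scores)[::-1][:3])

# (ii) "Minimum subset": greedy forward selection on the residual picks one
# of the correlated pair and then skips its redundant twin.
selected, residual = [], y.copy()
for _ in range(2):
    gains = [abs(np.corrcoef(X[:, j], residual)[0, 1]) if j not in selected
             else -1.0 for j in range(d)]
    j = int(np.argmax(gains))
    selected.append(j)
    beta = X[:, j] @ residual / (X[:, j] @ X[:, j])  # regress residual on X[:, j]
    residual = residual - beta * X[:, j]
print("greedy subset:", selected)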
-------------------------------------------------------------------------
New Methods for Preference Elicitation
Craig Boutilier, Holger Hoos, David Poole (chair), Qiang Yang
http://www.cs.ubc.ca/spider/poole/NIPS/Preferences2001.html
As intelligent agents become more and more adept at making (or
recommending) decisions for users in various domains, the need for
effective methods for the representation, elicitation, and
discovery of preference and utility functions becomes more
pressing. Deciding on the best course of action for a user
depends critically on that user's preferences. While there has
been much work on representing and learning models of the world
(e.g., system dynamics), there has been comparatively little
similar research with respect to preferences. The need to reason
about preferences arises in electronic commerce, collaborative
filtering, user interface design, task-oriented mobile robotics,
reinforcement learning, and many other areas. Many areas of research
bring interesting tools to the table that can be used to tackle
these issues: machine learning (classification, reinforcement
learning), decision theory and control theory (Markov decision
processes, filtering techniques), Bayesian networks and
probabilistic inferences, economics and game theory, among
others. The aim of this workshop is to bring together a diverse
group of researchers to discuss both the practical and
theoretical problems associated with effective preference
elicitation and to highlight avenues for future research.
The deadline for extended abstracts and statements of interest is
October 19.