Methods for improving generalizers, such as stacking, bagging,
boosting, and error-correcting output codes (ECOCs), have
recently received a great deal of attention.
We call such techniques "turnkey" techniques, reflecting
the fact that they were designed to improve the
generalization ability of generic learning algorithms
without requiring detailed knowledge of the inner workings
of those learners.
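To make the "turnkey" idea concrete, here is a minimal sketch of one such technique, bagging, written so that it treats the base learner as a black box: it only calls hypothetical fit/predict functions and never inspects the learner's internals. The toy 1-nearest-neighbour base learner and the data are illustrative assumptions, not part of the original announcement.

```python
import random
from collections import Counter

def bagging_predict(train, x, base_fit, base_predict, n_models=25, seed=0):
    """Bagging as a turnkey wrapper: it needs only fit/predict
    callables, with no knowledge of the learner's inner workings."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        # bootstrap sample: draw len(train) points with replacement
        sample = [rng.choice(train) for _ in range(len(train))]
        model = base_fit(sample)
        votes.append(base_predict(model, x))
    # combine the ensemble by majority vote
    return Counter(votes).most_common(1)[0][0]

# hypothetical base learner: 1-nearest neighbour on 1-D inputs
def fit_1nn(sample):
    return sample  # the "model" is just the memorized sample

def predict_1nn(model, x):
    return min(model, key=lambda p: abs(p[0] - x))[1]

train = [(0.0, 'a'), (0.2, 'a'), (0.9, 'b'), (1.1, 'b')]
print(bagging_predict(train, 1.0, fit_1nn, predict_1nn))
```

The same wrapper would work unchanged with any other base learner, which is precisely what makes such techniques "turnkey".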
Whether one particular turnkey technique is, in general,
"better" than all others, and if so under what circumstances,
is a hotly debated issue.
Furthermore, it isn't clear whether it is meaningful
to ask that question without specific prior assumptions
(e.g., specific domain knowledge).
This workshop aims to investigate these issues,
build a solid understanding of how and when turnkey
techniques improve generalization ability, and lay out a
road map for where turnkey methods should go.
This workshop is of interest to anyone seeking to improve
generalization performance (e.g., better classification or
better function approximation). The target audience is
researchers who explore or apply learning/generalization
methods.
If you have any questions or comments, please email Kagan Tumer.
Return to the turnkey algorithms main page.