Kagan Tumer's Publications



Estimating the Bayes Error Rate Through Classifier Combining. K. Tumer and J. Ghosh. In Proceedings of the Thirteenth International Conference on Pattern Recognition, pp. IV:695–699, Vienna, Austria, August 1996.

Abstract

The Bayes error provides the lowest achievable error rate for a given pattern classification problem. There are several classical approaches for estimating or finding bounds for the Bayes error. One type of approach focuses on obtaining analytical bounds, which are both difficult to calculate and dependent on distribution parameters that may not be known. Another strategy is to estimate the class densities through non-parametric methods, and use these estimates to obtain bounds on the Bayes error. This article presents two approaches to estimating performance limits based on classifier combining techniques. First, we present a framework that estimates the Bayes error when multiple classifiers are combined through averaging. Then we discuss a more general method that provides error limits based on disagreements among classifiers. The methods are illustrated for both artificial data and a difficult four-class problem involving underwater acoustic data. In both cases, the Bayes error estimates introduced in this article clearly outperform the existing methods.
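The averaging-based framework can be illustrated with a minimal sketch. The assumptions here (not taken from the paper's exact derivation) are that each classifier's error decomposes into the Bayes error plus an added model-specific term, and that the added terms are uncorrelated across classifiers, so averaging N classifiers shrinks the added error by a factor of N. The function name `bayes_error_estimate` and the numeric error rates are illustrative, not from the article.

```python
# Sketch under the stated assumptions: with uncorrelated added errors,
# the averaged ensemble's error follows
#
#     E_ensemble = E_bayes + (E_single - E_bayes) / N
#
# so measuring E_single and E_ensemble on held-out data lets us solve
# for E_bayes.

def bayes_error_estimate(e_single: float, e_ensemble: float, n: int) -> float:
    """Invert the 1/N added-error reduction model to estimate the Bayes rate."""
    if n < 2:
        raise ValueError("need at least two combined classifiers")
    return (n * e_ensemble - e_single) / (n - 1)

# Hypothetical held-out error rates: a single classifier at 15% error and
# an average of 10 such classifiers at 10.5% error imply a Bayes rate of
# roughly 10%.
print(bayes_error_estimate(0.15, 0.105, 10))  # ≈ 0.10
```

In practice the uncorrelated-error assumption rarely holds exactly, so estimates obtained this way should be read as approximate limits rather than exact values.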

Download

(unavailable)

BibTeX Entry

@inproceedings{tumer-ghosh_icpr96,
	author = {K. Tumer and J. Ghosh},
	title = {Estimating the Bayes Error Rate Through Classifier Combining},
	booktitle = {Proceedings of the Thirteenth International Conference on Pattern Recognition},
	pages = {IV:695--699},
	month = {August},
	address = {Vienna, Austria},
	abstract = {The Bayes error provides the lowest achievable error rate for a given pattern classification problem. There are several classical approaches for estimating or finding bounds for the Bayes error. One type of approach focuses on obtaining analytical bounds, which are both difficult to calculate and dependent on distribution parameters that may not be known. Another strategy is to estimate the class densities through non-parametric methods, and use these estimates to obtain bounds on the Bayes error. This article presents two approaches to estimating performance limits based on classifier combining techniques. First, we present a framework that estimates the Bayes error when multiple classifiers are combined through averaging. Then we discuss a more general method that provides error limits based on disagreements among classifiers. The methods are illustrated for both artificial data and a difficult four-class problem involving underwater acoustic data. In both cases, the Bayes error estimates introduced in this article clearly outperform the existing methods.},
	bib2html_pubtype = {Refereed Conference Papers},
	bib2html_rescat = {Bayes Error Estimation, Classifier Ensembles},
	year = {1996}
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 01, 2020 17:39:43