Kagan Tumer's Publications



Analysis of Decision Boundaries in Linearly Combined Neural Classifiers. K. Tumer and J. Ghosh. Pattern Recognition, 29(2):341–348, 1996.

Abstract

Combining or integrating the outputs of several pattern classifiers has led to improved performance in a multitude of applications. This paper provides an analytical framework to quantify the improvements in classification results due to combining. We show that combining networks linearly in output space reduces the variance of the actual decision region boundaries around the optimum boundary. This result is valid under the assumption that the a posteriori probability distributions for each class are locally monotonic around the Bayes optimum boundary. In the absence of classifier bias, the error is shown to be proportional to the boundary variance, resulting in a simple expression for error rate improvements. In the presence of bias, the error reduction, expressed in terms of a bias reduction factor, is shown to be less than or equal to the reduction obtained in the absence of bias. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space.
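The following is a minimal sketch, not taken from the paper, that illustrates the effect the abstract describes: linearly combining (averaging) the output-space estimates of several classifiers reduces the variance of the resulting decision boundary around the optimum. The one-dimensional two-class problem, the Gaussian noise model for classifier outputs, and all constants below are assumptions chosen purely for illustration.

	import numpy as np

	rng = np.random.default_rng(0)
	x = np.linspace(-3.0, 3.0, 601)             # 1-D feature axis
	p1 = 1.0 / (1.0 + np.exp(-2.0 * x))         # true P(class 1 | x); Bayes boundary at x = 0
	n_classifiers, n_trials, noise_sd = 7, 2000, 0.08

	def boundary(post):
	    """Location where the estimated posterior crosses 0.5 (decision boundary)."""
	    return x[np.argmin(np.abs(post - 0.5))]

	single_b, combined_b = [], []
	for _ in range(n_trials):
	    # Each classifier returns the true posterior corrupted by independent noise.
	    estimates = p1 + noise_sd * rng.standard_normal((n_classifiers, x.size))
	    single_b.append(boundary(estimates[0]))               # one classifier alone
	    combined_b.append(boundary(estimates.mean(axis=0)))   # linear (averaged) combiner

	print("boundary variance, single  :", np.var(single_b))
	print("boundary variance, combined:", np.var(combined_b))

With unbiased, independent errors the combined boundary variance comes out roughly 1/N of the single-classifier variance, which is the kind of reduction the paper's analysis quantifies when it relates error rate to boundary variance.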

Download

[PDF] (156.2 kB)

BibTeX Entry

@article{tumer-ghosh_pr96,
	author = {K. Tumer and J. Ghosh},
	title = {Analysis of Decision Boundaries in Linearly Combined Neural Classifiers},
	journal = {Pattern Recognition},
	volume = {29},
	number = {2},
	pages = {341--348},
	abstract={Combining or integrating the outputs of several pattern classifiers has led to improved performance in a multitude of applications. This paper provides an analytical framework to quantify the improvements in classification results due to combining. We show that combining networks linearly in output space reduces the variance of the actual decision region boundaries around the optimum boundary. This result is valid under the assumption that the a posteriori probability distributions for each class are locally monotonic around the Bayes optimum boundary. In the absence of classifier bias, the error is shown to be proportional to the boundary variance, resulting in a simple expression for error rate improvements. In the presence of bias, the error reduction, expressed in terms of a bias reduction factor, is shown to be less than or equal to the reduction obtained in the absence of bias. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space.},
	bib2html_pubtype = {Journal Articles},
	bib2html_rescat = {Classifier Ensembles},
	year = 1996
}
