Kagan Tumer's Publications



Boundary Variance Reduction for Improved Classification through Hybrid Networks (Invited Paper). K. Tumer and J. Ghosh. In Applications and Science of Artificial Neural Networks, Proceedings of the SPIE (Vol. 2492), pp. 573–584, April 1995.

Abstract

Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This paper provides an analytical framework that quantifies the improvements in classification results due to linear combining. We show that combining networks in output space reduces the variance of the actual decision region boundaries around the optimum boundary. In the absence of network bias, the added classification error is directly proportional to the boundary variance. Moreover, if the network errors are independent, then the variance of the boundary location is reduced by a factor of N, the number of classifiers that are combined. In the presence of network bias, the reductions are less than or equal to N, depending on the interaction between network biases. We discuss how the individual networks can be selected to achieve significant gains through combining, and support this discussion with experimental results on 25-dimensional sonar data. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space.
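The factor-of-N variance reduction for independent errors can be illustrated with a small simulation (a sketch, not the paper's method): each classifier's estimate of the boundary location is modeled as the true location plus independent zero-mean Gaussian noise, and linear combining is modeled as averaging. The function name `boundary_estimates` and all parameter values are illustrative assumptions.

```python
import random
import statistics

def boundary_estimates(n_classifiers, n_trials, sigma=1.0, seed=0):
    """Simulate the boundary location obtained by averaging the outputs of
    n_classifiers whose errors are independent zero-mean Gaussians (std sigma).
    Returns one combined estimate per trial."""
    rng = random.Random(seed)
    return [
        statistics.mean(rng.gauss(0.0, sigma) for _ in range(n_classifiers))
        for _ in range(n_trials)
    ]

# Variance of a single classifier's boundary vs. a 10-classifier combiner:
single = statistics.variance(boundary_estimates(1, 20000))
combined = statistics.variance(boundary_estimates(10, 20000, seed=1))
print(single / combined)  # ratio is close to N = 10 for independent errors
```

Correlated or biased network errors would shrink this ratio below N, which is the regime the paper analyzes in its bias discussion.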

Download

(unavailable)

BibTeX Entry

@inproceedings{tumer-ghosh_spie95,
        author = {K. Tumer and J. Ghosh},
        title = {Boundary Variance Reduction for Improved Classification through Hybrid Networks ({I}nvited Paper)},
        booktitle = {Applications and Science of Artificial Neural Networks, Proceedings of the SPIE (Vol. 2492)},
        pages = {573--584},
        month = {April},
        year = {1995},
        abstract = {Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This paper provides an analytical framework that quantifies the improvements in classification results due to linear combining. We show that combining networks in output space reduces the variance of the actual decision region boundaries around the optimum boundary. In the absence of network bias, the added classification error is directly proportional to the boundary variance. Moreover, if the network errors are independent, then the variance of the boundary location is reduced by a factor of N, the number of classifiers that are combined. In the presence of network bias, the reductions are less than or equal to N, depending on the interaction between network biases. We discuss how the individual networks can be selected to achieve significant gains through combining, and support this discussion with experimental results on 25-dimensional sonar data. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space.},
        bib2html_pubtype = {Refereed Conference Papers},
        bib2html_rescat = {Classifier Ensembles}
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Apr 01, 2020 17:39:43