Using Diversity for Classifier Ensemble Pruning: An Empirical Investigation

Muhammad Atta Othman Ahmed, Luca Didaci, Bahram Lavi, Giorgio Fumera

Abstract


The concept of "diversity" has been one of the main open issues in the field of multiple classifier systems. In this paper we address a facet of diversity related to its effectiveness for ensemble construction, namely, explicitly using diversity measures in ensemble construction techniques based on the overproduce-and-choose strategy known as ensemble pruning. This strategy consists of selecting a (hopefully) more accurate subset of classifiers out of an original, larger ensemble. Whereas several existing pruning methods use some combination of individual classifiers' accuracy and diversity, it is still unclear whether such an evaluation function is better than the bare estimate of ensemble accuracy. We empirically investigate this issue by comparing two evaluation functions in the context of ensemble pruning: the estimate of ensemble accuracy, and its linear combination with several well-known diversity measures. This can also be viewed as using diversity as a regularizer, as suggested by some authors. To this end we use a pruning method based on forward selection, since it allows a direct comparison between different evaluation functions. Experiments on thirty-seven benchmark data sets, four diversity measures and three base classifiers provide evidence that using diversity measures for ensemble pruning can be advantageous over using only ensemble accuracy, and that diversity measures can act as regularizers in this context.
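As a concrete illustration of the forward-selection pruning scheme the abstract describes, the sketch below greedily grows a sub-ensemble by maximizing the validation accuracy of the majority vote plus a weighted diversity term. The trade-off weight `lam` and the choice of pairwise disagreement as the diversity measure are illustrative assumptions, not necessarily the evaluation functions or measures used in the paper:

```python
import numpy as np

def disagreement(preds):
    """Average pairwise disagreement among classifiers' label predictions.

    preds: (n_classifiers, n_samples) array of integer labels.
    Returns 0.0 when fewer than two classifiers are present.
    """
    n = preds.shape[0]
    if n < 2:
        return 0.0
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.mean(preds[i] != preds[j])
            pairs += 1
    return total / pairs

def majority_vote(preds):
    """Plurality vote per sample (ties broken toward the lowest label)."""
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

def forward_prune(preds, y_val, target_size, lam=0.5):
    """Greedy forward selection: at each step add the classifier that
    maximizes  accuracy(majority vote) + lam * disagreement  on a
    validation set, until target_size classifiers are selected."""
    selected = []
    remaining = list(range(preds.shape[0]))
    while remaining and len(selected) < target_size:
        best, best_score = None, -np.inf
        for c in remaining:
            subset = preds[selected + [c]]
            score = (np.mean(majority_vote(subset) == y_val)
                     + lam * disagreement(subset))
            if score > best_score:
                best, best_score = c, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

Setting `lam = 0` recovers pruning on the bare accuracy estimate, so the two evaluation functions compared in the paper can be contrasted within the same selection loop.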


References


Z.-H. Zhou. Ensemble Methods: Foundations and Algorithms. CRC press, 1st edition, 2012.

L. I. Kuncheva. Combining Pattern Classifiers: Methods and Algorithms. John Wiley & Sons, 2014. DOI: 10.1002/9781118914564

L. I. Kuncheva and Ch. J. Whitaker. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning, 51(2):181-207, 2003. DOI: 10.1023/A:1022859003006

E. K. Tang, P. N. Suganthan, and X. Yao. An analysis of diversity measures. Machine Learning, 65(1):247-271, 2006. DOI: 10.1007/s10994-006-9449-2

G. Brown, J. Wyatt, R. Harris, and X. Yao. Diversity creation methods: a survey and categorisation. Information Fusion, 6(1):5-20, mar 2005. DOI: 10.1016/j.inffus.2004.04.004

L. Didaci, G. Fumera, and F. Roli. Diversity in classifier ensembles: Fertile concept or dead end? In Multiple Classifier Systems, pages 37-48. Springer Berlin Heidelberg, 2013. DOI: 10.1007/978-3-642-38067-9_4

N. Ueda and R. Nakano. Generalization error of ensemble estimators. In Proceedings of International Conference on Neural Networks (ICNN'96). IEEE, 1996. DOI: 10.1109/icnn.1996.548872

G. Brown and L. I. Kuncheva. "Good" and "bad" diversity in majority vote ensembles. In Multiple Classifier Systems, pages 124-133. Springer Berlin Heidelberg, 2010. DOI: 10.1007/978-3-642-12127-2_13

P. Sollich and A. Krogh. Learning with ensembles: How over-fitting can be useful. In Proceedings of the 8th International Conference on Neural Information Processing Systems, NIPS'95, pages 190-196. MIT Press, 1995.

L. I. Kuncheva. A bound on kappa-error diagrams for analysis of classifier ensembles. IEEE Transactions on Knowledge and Data Engineering, 25(3):494-501, 2013. DOI: 10.1109/tkde.2011.234

G. Martinez-Munoz and A. Suarez. Pruning in ordered bagging ensembles. In Proceedings of the 23rd International Conference on Machine Learning - ICML'06. ACM Press, 2006. DOI: 10.1145/1143844.1143921

Z.-H. Zhou and W. Tang. Selective ensemble of decision trees. In Lecture Notes in Computer Science, pages 476-483. Springer Berlin Heidelberg. DOI: 10.1007/3-540-39205-x_81

Z.-H. Zhou, J. Wu, and W. Tang. Ensembling neural networks: Many could be better than all. Artificial Intelligence, 137(1-2):239-263, 2002. DOI: 10.1016/s0004-3702(02)00190-x

N. Li, Y. Yu, and Z.-H. Zhou. Diversity regularized ensemble pruning. In Machine Learning and Knowledge Discovery in Databases, pages 330-345. Springer Berlin Heidelberg, 2012. DOI: 10.1007/978-3-642-33460-3_27

M. A. O. Ahmed, L. Didaci, G. Fumera, and F. Roli. An empirical investigation on the use of diversity for creation of classifier ensembles. In Multiple Classifier Systems, pages 206-219. Springer International Publishing, 2015. DOI: 10.1007/978-3-319-20248-8_18

Y. Yu, Y.-F. Li, and Z.-H. Zhou. Diversity regularized machine. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Two, IJCAI'11, pages 1603-1608. AAAI Press, 2011. DOI: 10.5591/978-1-57735-516-8/IJCAI11-269

G. Tsoumakas, I. Partalas, and I. Vlahavas. An ensemble pruning primer. In Studies in Computational Intelligence, pages 1-13. Springer Berlin Heidelberg, 2009. DOI: 10.1007/978-3-642-03999-7_1

D. Partridge and W. B. Yates. Engineering multiversion neural-net systems. Neural Computation, 8(4):869-893, 1996. DOI: 10.1162/neco.1996.8.4.869

D. D. Margineantu and T. G. Dietterich. Pruning adaptive boosting. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 211-218. Morgan Kaufmann Publishers Inc., 1997.

A. L. Prodromidis and S. J. Stolfo. Pruning meta-classifiers in a distributed data mining system. In Proceedings of the First National Conference on New Information Technologies, pages 151-160, 1998.

R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes. Ensemble selection from libraries of models. In Proceedings of the Twenty-First International Conference on Machine Learning - ICML'04. ACM Press, 2004. DOI: 10.1145/1015330.1015432

G. Martinez-Munoz and A. Suarez. Aggregation ordering in bagging. In Proc. of the IASTED International Conference on Artificial Intelligence and Applications, pages 258-263. Citeseer, 2004.

R. E. Banfield, L. O. Hall, K. W. Bowyer, and W. P. Kegelmeyer. Ensemble diversity measures and their application to thinning. Information Fusion, 6(1):49-62, 2005. DOI: 10.1016/j.inffus.2004.04.005

I. Partalas, G. Tsoumakas, and I. Vlahavas. An ensemble uncertainty aware measure for directed hill climbing ensemble pruning. Machine Learning, 81(3):257-282, 2010. DOI: 10.1007/s10994-010-5172-0

G. D. C. Cavalcanti, L. S. Oliveira, T. J.M. Moura, and G. V. Carvalho. Combining diversity measures for ensemble pruning. Pattern Recognition Letters, 74:38-45, 2016. DOI: 10.1016/j.patrec.2016.01.029

H. Guo, H. Liu, R. Li, Ch. Wu, Y. Guo, and M. Xu. Margin & diversity based ordering ensemble pruning. Neurocomputing, 275:237-246, 2018. DOI: 10.1016/j.neucom.2017.06.052

G. Zenobi and P. Cunningham. Using diversity in preparing ensembles of classifiers based on different feature subsets to minimize generalization error. In Machine Learning: ECML 2001, pages 576-587. Springer Berlin Heidelberg, 2001. DOI: 10.1007/3-540-44795-4_49

A. Tsymbal, M. Pechenizkiy, and P. Cunningham. Diversity in search strategies for ensemble feature selection. Information Fusion, 6(1):83-98, 2005. DOI: 10.1016/j.inffus.2004.04.003

T. K. Ho. The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):832-844, 1998. DOI: 10.1109/34.709601

G. Giacinto and F. Roli. Design of effective neural network ensembles for image classification purposes. Image and Vision Computing, 19(9-10):699-707, 2001. DOI: 10.1016/s0262-8856(01)00045-2

D. Partridge and W. Krzanowski. Software diversity: practical statistics for its measurement and exploitation. Information and Software Technology, 39(10):707-717, 1997. DOI: 10.1016/s0950-5849(97)00023-2

D. B. Skalak. The sources of increased accuracy for two proposed boosting algorithms. In Proc. American Association for Artificial Intelligence, AAAI-96, Integrating Multiple Learned Models Workshop, pages 120-125, 1996.

L. K. Hansen and P. Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(10):993-1001, 1990. DOI: 10.1109/34.58871

G. Martinez-Munoz, D. Hernandez-Lobato, and A. Suarez. An analysis of ensemble pruning techniques based on ordered aggregation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):245-259, 2009. DOI: 10.1109/tpami.2008.78

L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996. DOI: 10.1007/bf00058655

J. Demsar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1-30, 2006.




DOI: http://dx.doi.org/10.20904/291-2025



Copyright (c) 2018 Muhammad Atta Othman Ahmed, Luca Didaci, Giorgio Fumera, Fabio Roli

This work is licensed under a Creative Commons Attribution 4.0 International License.

ISSN: 1896-5334 (print), 2300-889X (online)

Open Access · CrossRef · Indexed in DOAJ