Less Is More: A Comprehensive Framework for the Number of Components of Ensemble Classifiers
The number of component classifiers chosen for an ensemble strongly affects its predictive performance. In this paper, we use a geometric framework for determining the ensemble size a priori, applicable to most existing batch and online ensemble classifiers. Only a limited number of studies address the ensemble size under Majority Voting (MV) and Weighted Majority Voting (WMV), and almost all of them are designed for batch mode, barely addressing online environments. The dimensions of big data and resource limitations in time and memory make determining the ensemble size crucial, especially in online environments. Our framework proves, for the MV aggregation rule, that adding more strong components to the ensemble yields more accurate predictions. For the WMV aggregation rule, on the other hand, we prove the existence of an ideal number of components equal to the number of class labels, provided that the components are completely independent of each other and sufficiently strong. While giving an exact definition of a strong and independent classifier in the context of an ensemble is a challenging task, our proposed geometric framework provides a theoretical explanation of diversity and its impact on prediction accuracy. We conduct an experimental evaluation with two different scenarios to show the practical value of our theorems.
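The MV claim above echoes the classical Condorcet jury argument: if components are independent and each is "strong" (better than chance), majority voting becomes more accurate as components are added. The sketch below is not the paper's framework; it is a minimal illustration of that effect under assumed conditions (binary classification, fully independent components, odd ensemble size n, each component correct with probability p > 0.5), computing the exact majority-vote accuracy from the binomial distribution.

```python
from math import comb

def majority_vote_accuracy(n: int, p: float) -> float:
    """Exact probability that a majority of n independent binary
    classifiers, each correct with probability p, votes correctly.
    Assumes n is odd, so ties cannot occur."""
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(n // 2 + 1, n + 1))

# Accuracy of each individual component (assumed, must exceed 0.5).
p = 0.6
for n in (1, 5, 15, 51):
    print(f"n={n:3d}  MV accuracy = {majority_vote_accuracy(n, p):.4f}")
```

With p = 0.6 the ensemble accuracy rises monotonically with n (e.g. from 0.6 at n = 1 to about 0.68 at n = 5), illustrating why, under MV, more strong independent components help; the paper's contribution is a geometric framework that makes this precise and contrasts it with the WMV case, where the ideal size is instead tied to the number of class labels.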