Large, James, Lines, Jason ORCID: https://orcid.org/0000-0002-1496-5941 and Bagnall, Anthony (2019) A probabilistic classifier ensemble weighting scheme based on cross-validated accuracy estimates. Data Mining and Knowledge Discovery, 33 (6). pp. 1674-1709. ISSN 1384-5810
PDF (Accepted_Manuscript) - Accepted Version. Available under License Creative Commons Attribution. Download (2MB)
PDF (Published_Version) - Published Version. Available under License Creative Commons Attribution. Download (1MB)
Abstract
Our hypothesis is that building ensembles of small sets of strong classifiers constructed with different learning algorithms is, on average, the best approach to classification for real-world problems. We propose a simple mechanism for building small heterogeneous ensembles based on exponentially weighting the probability estimates of the base classifiers with an estimate of accuracy formed through cross-validation on the train data. We demonstrate through extensive experimentation that, given the same small set of base classifiers, this method has measurable benefits over commonly used alternative weighting, selection or meta-classifier approaches to heterogeneous ensembles. We also show that an ensemble of five well-known, fast classifiers is not significantly worse than large homogeneous ensembles and tuned individual classifiers on datasets from the UCI archive. We provide evidence that the performance of the Cross-validation Accuracy Weighted Probabilistic Ensemble (CAWPE) generalises to a completely separate set of datasets, the UCR time series classification archive, and we also demonstrate that our ensemble technique can significantly improve the state-of-the-art classifier for this problem domain. We investigate the performance in more detail, and find that the improvement is most marked in problems with smaller train sets. We perform a sensitivity analysis and an ablation study to demonstrate the robustness of the ensemble and the significant contribution of each design element of the classifier. We conclude that it is, on average, better to ensemble strong classifiers with a weighting scheme than to perform extensive tuning, and that CAWPE is a sensible starting point for combining classifiers.
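The weighting scheme the abstract describes can be sketched compactly: estimate each base classifier's accuracy by cross-validation on the train data, raise it to an exponent to exaggerate differences, and use the result to weight the classifiers' probability estimates. A minimal illustration follows, assuming scikit-learn and an exponent `alpha=4`; the three base classifiers used here are illustrative stand-ins, not the paper's exact set of five.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier


def cawpe_predict(classifiers, X_train, y_train, X_test, alpha=4):
    """CAWPE-style combination: weight each base classifier's
    probability estimates by its cross-validated train accuracy
    raised to the power alpha, then predict the argmax class."""
    weights, probs = [], []
    for clf in classifiers:
        # Accuracy estimate from cross-validation on the train data only
        acc = cross_val_score(clf, X_train, y_train, cv=10).mean()
        weights.append(acc ** alpha)
        # Refit on the full train data before predicting
        clf.fit(X_train, y_train)
        probs.append(clf.predict_proba(X_test))
    # Weighted average of the probability estimates
    combined = sum(w * p for w, p in zip(weights, probs)) / sum(weights)
    return combined.argmax(axis=1)


X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
base = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(random_state=0),
    KNeighborsClassifier(),
]
preds = cawpe_predict(base, X_tr, y_tr, X_te)
```

The exponent is what distinguishes this from plain accuracy weighting: raising accuracies to a power amplifies small differences between strong classifiers, so the better-performing members dominate the vote without weaker ones being discarded entirely.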
Item Type: Article
Faculty \ School: Faculty of Science > School of Computing Sciences
UEA Research Groups: Faculty of Science > Research Groups > Data Science and Statistics; Faculty of Science > Research Groups > Smart Emerging Technologies
Depositing User: LivePure Connector
Date Deposited: 21 May 2019 15:30
Last Modified: 13 May 2023 00:57
URI: https://ueaeprints.uea.ac.uk/id/eprint/71086
DOI: 10.1007/s10618-019-00638-y