A comparison of machine learning methods for classification using simulation with multiple real data examples from mental health studies

Khondoker, Mizanur, Dobson, Richard, Skirrow, Caroline, Simmons, Andrew, Stahl, Daniel and , Alzheimer's Disease Neuroimaging Initiative (2016) A comparison of machine learning methods for classification using simulation with multiple real data examples from mental health studies. Statistical Methods in Medical Research, 25 (5). pp. 1804-1823. ISSN 0962-2802

PDF (Published manuscript) - Published Version, 485kB

Abstract

Background: Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets and sampling error in performance measures estimated from single samples are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or at the population level, and simulation studies may be a better alternative for objectively comparing the performance of machine learning algorithms. Methods: We compare the classification performance of a number of important and widely used machine learning algorithms, namely Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbour (kNN). Using massively parallel processing on high-performance supercomputers, we compare generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. Results: For smaller numbers of correlated features, with the number of features not exceeding approximately half the sample size, LDA was found to be the method of choice in terms of both average generalisation error and the stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA as well as RF and kNN by a clear margin as the feature set grows, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows, and surpasses that of LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN, and only in some instances where the data are more variable and have smaller effect sizes; in those cases it also provides more stable error estimates than kNN and LDA.
Applications to a number of real datasets supported the findings from the simulation study.
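As an illustration of the kind of simulation-based comparison the abstract describes (a minimal sketch only, not the authors' code; the simulation parameters, effect size and classifier settings below are hypothetical), the following stdlib-only Python estimates the generalisation error of a simple k-Nearest Neighbour classifier on an independent test set drawn from simulated two-class Gaussian data:

```python
import random
import math

def simulate(n, p, effect, seed=None):
    """Simulate n samples with p independent Gaussian features;
    class-1 feature means are shifted by `effect` (the effect size)."""
    rng = random.Random(seed)
    X, y = [], []
    for i in range(n):
        label = i % 2  # balanced classes
        X.append([rng.gauss(effect * label, 1.0) for _ in range(p)])
        y.append(label)
    return X, y

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, xt), yt) for xt, yt in zip(X_train, y_train))
    votes = [yt for _, yt in dists[:k]]
    return max(set(votes), key=votes.count)

def generalisation_error(n_train=40, n_test=200, p=10, effect=1.0, k=3, seed=1):
    """Estimate out-of-sample error on an independently simulated test set,
    rather than relying on a single observed dataset."""
    X_tr, y_tr = simulate(n_train, p, effect, seed)
    X_te, y_te = simulate(n_test, p, effect, seed + 1)
    n_errors = sum(knn_predict(X_tr, y_tr, x, k) != y
                   for x, y in zip(X_te, y_te))
    return n_errors / n_test

print(f"estimated generalisation error: {generalisation_error():.3f}")
```

In a full study, this would be repeated across many replicated datasets and across levels of sample size, number of features, variation and effect size, averaging errors over replicates to reduce the single-sample sampling error the abstract warns about.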

Item Type: Article
Additional Information: This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (http://www.uk.sagepub.com/aboutus/openaccess.htm).
Uncontrolled Keywords: machine learning, cross-validation, generalisation error, truncated distribution, microarrays, electroencephalogram (EEG), magnetic resonance imaging (MRI)
Faculty \ School: Faculty of Medicine and Health Sciences > Norwich Medical School
Related URLs:
Depositing User: Pure Connector
Date Deposited: 24 Sep 2016 00:39
Last Modified: 17 Mar 2020 22:19
URI: https://ueaeprints.uea.ac.uk/id/eprint/60167
DOI: 10.1177/0962280213502437
