A comparison of machine learning methods for classification using simulation with multiple real data examples from mental health studies

Khondoker, Mizanur ORCID: https://orcid.org/0000-0002-1801-1635, Dobson, Richard, Skirrow, Caroline, Simmons, Andrew, Stahl, Daniel and Alzheimer's Disease Neuroimaging Initiative (2016) A comparison of machine learning methods for classification using simulation with multiple real data examples from mental health studies. Statistical Methods in Medical Research, 25 (5). pp. 1804-1823. ISSN 0962-2802


Abstract

Background: Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets and sampling error in performance measures estimated from single samples are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or at the population level, and simulation studies may be a better alternative for objectively comparing the performance of machine learning algorithms.

Methods: We compare the classification performance of a number of important and widely used machine learning algorithms, namely Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbour (kNN). Using massively parallel processing on high-performance supercomputers, we compare generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features.

Results: For smaller numbers of correlated features (number of features not exceeding approximately half the sample size), LDA was found to be the method of choice in terms of both average generalisation error and stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA, RF and kNN by a clear margin as the feature set grows larger, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows, exceeding that of LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN, in some instances where the data are more variable and effect sizes are smaller, in which cases it also provides more stable error estimates than kNN and LDA. Applications to a number of real datasets supported the findings from the simulation study.
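To make the comparison described above concrete, the following is a minimal sketch (not the authors' code; the data generator, factor levels and hyperparameters are illustrative assumptions) of estimating cross-validated generalisation errors for RF, SVM with an RBF kernel, LDA and kNN on replicated simulated datasets while varying the number of features, written in Python with scikit-learn:

```python
# Illustrative sketch only: compare average generalisation error (1 - CV accuracy)
# of RF, SVM (RBF), LDA and kNN across replicated simulated datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

classifiers = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM (RBF)": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

rng = np.random.RandomState(0)
n_reps = 20          # replicate datasets per setting (illustrative value)
sample_size = 40     # training sample size (illustrative value)

for n_features in (10, 50, 200):          # one of the factors varied: number of features
    errors = {name: [] for name in classifiers}
    for _ in range(n_reps):
        X, y = make_classification(
            n_samples=sample_size, n_features=n_features,
            n_informative=min(10, n_features), n_redundant=0,
            class_sep=1.0,                 # rough proxy for effect size
            random_state=rng.randint(1_000_000))
        for name, clf in classifiers.items():
            # 5-fold cross-validated accuracy -> generalisation error estimate
            acc = cross_val_score(clf, X, y, cv=5).mean()
            errors[name].append(1.0 - acc)
    # mean and spread over replicates: average error and its stability (precision)
    summary = {k: (np.mean(v), np.std(v)) for k, v in errors.items()}
    print(n_features, {k: f"{m:.3f} +/- {s:.3f}" for k, (m, s) in summary.items()})
```

The same loop structure extends to the other factors considered in the study (training sample size, biological and experimental variation, effect size, replication and correlation between features) by varying the corresponding simulation parameters.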

Item Type: Article
Additional Information: This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (http://www.uk.sagepub.com/aboutus/openaccess.htm).
Uncontrolled Keywords: machine learning, cross-validation, generalisation error, truncated distribution, microarrays, electroencephalogram (EEG), magnetic resonance imaging (MRI), SDG 3 - good health and well-being
Faculty \ School: Faculty of Medicine and Health Sciences > Norwich Medical School
UEA Research Groups: Faculty of Medicine and Health Sciences > Research Groups > Public Health and Health Services Research (former - to 2023)
Faculty of Medicine and Health Sciences > Research Groups > Epidemiology and Public Health
Faculty of Science > Research Groups > Norwich Epidemiology Centre
Faculty of Medicine and Health Sciences > Research Groups > Norwich Epidemiology Centre
Faculty of Medicine and Health Sciences > Research Centres > Population Health
Related URLs:
Depositing User: Pure Connector
Date Deposited: 24 Sep 2016 00:39
Last Modified: 19 Oct 2023 01:46
URI: https://ueaeprints.uea.ac.uk/id/eprint/60167
DOI: 10.1177/0962280213502437
