On over-fitting in model selection and subsequent selection bias in performance evaluation

Cawley, Gavin C. ORCID: https://orcid.org/0000-0002-4118-9095 and Talbot, Nicola L. C. (2010) On over-fitting in model selection and subsequent selection bias in performance evaluation. Journal of Machine Learning Research, 11 (70). pp. 2079-2107. ISSN 1533-7928

Full text not available from this repository.

Abstract

Model selection strategies for machine learning algorithms typically involve the numerical optimisation of an appropriate model selection criterion, often based on an estimator of generalisation performance, such as k-fold cross-validation. The error of such an estimator can be broken down into bias and variance components. While unbiasedness is often cited as a beneficial quality of a model selection criterion, we demonstrate that a low variance is at least as important, as a non-negligible variance introduces the potential for over-fitting in model selection as well as in training the model. While this observation is in hindsight perhaps rather obvious, the degradation in performance due to over-fitting the model selection criterion can be surprisingly large, an observation that appears to have received little attention in the machine learning literature to date. In this paper, we show that the effects of this form of over-fitting are often of comparable magnitude to differences in performance between learning algorithms, and thus cannot be ignored in empirical evaluation. Furthermore, we show that some common performance evaluation practices are susceptible to a form of selection bias as a result of this form of over-fitting and hence are unreliable. We discuss methods to avoid over-fitting in model selection and subsequent selection bias in performance evaluation, which we hope will be incorporated into best practice. While this study concentrates on cross-validation based model selection, the findings are quite general and apply to any model selection practice involving the optimisation of a model selection criterion evaluated over a finite sample of data, including maximisation of the Bayesian evidence and optimisation of performance bounds.
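
The abstract's central contrast, between optimising a cross-validation criterion during model selection and obtaining an unbiased estimate of the selected model's performance, can be made concrete with a short sketch. The code below is not from the paper: it assumes scikit-learn, a synthetic dataset, and an illustrative SVM hyper-parameter grid, and it compares the biased protocol (reporting the best score found during the hyper-parameter search) with nested cross-validation, the kind of remedy the paper recommends.

```python
# Minimal sketch of selection bias in cross-validation based model
# selection, and the nested cross-validation remedy. The dataset,
# model, and parameter grid are illustrative assumptions, not the
# paper's experimental setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}
inner = KFold(n_splits=5, shuffle=True, random_state=1)  # model selection
outer = KFold(n_splits=5, shuffle=True, random_state=2)  # evaluation

search = GridSearchCV(SVC(), param_grid, cv=inner)

# Biased protocol: report the best cross-validation score found while
# optimising the model selection criterion. Because the criterion has
# non-negligible variance, this score is over-fitted by the search.
search.fit(X, y)
print("selection-criterion estimate (biased):", search.best_score_)

# Unbiased protocol: nest the entire model-selection procedure inside
# an outer cross-validation loop, so the data used to assess the
# selected model never influences the choice of hyper-parameters.
nested_scores = cross_val_score(search, X, y, cv=outer)
print("nested cross-validation estimate:", nested_scores.mean())
```

On repeated runs the nested estimate will usually sit below the selection-criterion score; that gap is the optimistic selection bias the abstract describes.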

Item Type: Article
Faculty \ School: Faculty of Science > School of Computing Sciences

UEA Research Groups: Faculty of Science > Research Groups > Data Science and Statistics
Faculty of Science > Research Groups > Computational Biology
Faculty of Science > Research Groups > Centre for Ocean and Atmospheric Sciences
Depositing User: EPrints Services
Date Deposited: 01 Oct 2010 13:42
Last Modified: 21 Apr 2023 06:32
URI: https://ueaeprints.uea.ac.uk/id/eprint/3640
