Sparse multinomial logistic regression via Bayesian L1 regularisation

Cawley, G. C., Talbot, N. L. C. and Girolami, M. (2007) Sparse multinomial logistic regression via Bayesian L1 regularisation. In: Advances in Neural Information Processing Systems. MIT Press, pp. 209-216. ISBN 9780262195683

Full text not available from this repository.

Abstract

Multinomial logistic regression provides the standard penalised maximum-likelihood solution to multi-class pattern recognition problems. More recently, sparse multinomial logistic regression models have found application in text processing and microarray classification, where explicit identification of the most informative features is of value. In this paper, we propose a sparse multinomial logistic regression method in which the sparsity arises from the use of a Laplace prior, but where the usual regularisation parameter is integrated out analytically. Evaluation over a range of benchmark datasets reveals that this approach achieves generalisation performance similar to that obtained using cross-validation, but at greatly reduced computational expense.
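To illustrate the sparsity mechanism the abstract refers to, the sketch below fits an L1-penalised multinomial logistic regression with proximal gradient descent (soft-thresholding) in NumPy. This is a minimal illustration, not the paper's algorithm: a Laplace prior corresponds to an L1 penalty, but here the regularisation parameter `lam` is fixed by hand, whereas the paper's contribution is to integrate it out analytically. All names (`fit_sparse_mlr`, the toy data) are hypothetical.

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax, shifted for numerical stability."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_sparse_mlr(X, y, lam=0.05, lr=0.1, iters=500):
    """L1-penalised multinomial logistic regression via proximal
    gradient descent (ISTA).  `lam` stands in for the Laplace-prior
    regularisation parameter (fixed here, integrated out in the paper)."""
    n, d = X.shape
    k = int(y.max()) + 1
    Y = np.eye(k)[y]                      # one-hot targets, shape (n, k)
    W = np.zeros((d, k))
    for _ in range(iters):
        P = softmax(X @ W)                # class probabilities
        grad = X.T @ (P - Y) / n          # gradient of the mean NLL
        W = W - lr * grad                 # gradient step
        # proximal (soft-threshold) step: drives small weights to exactly 0,
        # which is what makes the solution sparse
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)
    return W

# Toy data: features 0-1 are informative, features 2-4 are pure noise,
# so a sparse solution should zero out the noise columns of W.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

W = fit_sparse_mlr(X, y, lam=0.05)
acc = (softmax(X @ W).argmax(axis=1) == y).mean()
```

On this toy problem the informative features receive much larger weight mass than the noise features, the latter being shrunk towards (typically exactly) zero by the soft-thresholding step — the "explicit identification of the most informative features" the abstract mentions.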

Item Type: Book Section
Faculty \ School: Faculty of Science > School of Computing Sciences

University of East Anglia > Faculty of Science > Research Groups > Computational Biology > Machine learning in computational biology
Depositing User: Vishal Gautam
Date Deposited: 04 Apr 2011 13:32
Last Modified: 22 Apr 2020 10:05
URI: https://ueaeprints.uea.ac.uk/id/eprint/23361
