Mandic, Danilo P. and Chambers, Jonathon A. (1999) Towards the optimal learning rate for backpropagation. Neural Processing Letters, 11 (1). pp. 1-5. ISSN 1370-4621
Full text not available from this repository.

Abstract
A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error and does not include any of the simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The backpropagation algorithm with an adaptive learning rate, derived from the Taylor series expansion of the instantaneous output error, is shown to exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, the derived optimal adaptive learning rate of a neural network trained by backpropagation degenerates to the learning rate of the NLMS for a linear activation function of a neuron. By continuity, the optimal adaptive learning rate for neural networks imposes additional stabilisation effects on the traditional backpropagation learning algorithm.
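To make the abstract's central claim concrete, the following is a minimal Python sketch of the single-neuron case only, not the authors' full derivation for multilayer networks. It sizes a delta-rule step by a first-order Taylor expansion of the instantaneous error, which gives eta(k) = 1 / (act'(net)^2 * ||x||^2); for a linear activation this degenerates to the NLMS step 1 / ||x||^2. The function names and the `eps` regulariser are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def adaptive_lr_update(w, x, d, act, act_deriv, eps=1e-8):
    """One delta-rule step for a single neuron y = act(x @ w).

    A first-order Taylor expansion of the instantaneous error
    e = d - y in the weights gives
        e(k+1) ~= e(k) * (1 - eta * act'(net)^2 * ||x||^2),
    so the error-annihilating learning rate is
        eta(k) = 1 / (act'(net)^2 * ||x||^2).
    For a linear activation (act' == 1) this reduces to the
    NLMS step 1 / ||x||^2, matching the abstract's claim.
    """
    net = x @ w
    e = d - act(net)
    g = act_deriv(net)
    eta = 1.0 / (g * g * (x @ x) + eps)  # eps guards against a zero input
    return w + eta * g * e * x, e, eta

rng = np.random.default_rng(0)
x = rng.normal(size=4)
w = rng.normal(size=4)

# Nonlinear neuron: adaptive step sized by the Taylor-expansion argument.
w_new, e, eta = adaptive_lr_update(w, x, d=0.7, act=sigmoid, act_deriv=sigmoid_deriv)

# Linear activation: the same formula reduces to the NLMS learning rate.
_, _, eta_lin = adaptive_lr_update(w, x, d=0.7, act=lambda z: z, act_deriv=lambda z: 1.0)
assert np.isclose(eta_lin, 1.0 / (x @ x + 1e-8))
```

For a multilayer network the same expansion is taken with respect to all the weights; the single-neuron case above is simply the smallest instance in which the NLMS correspondence described in the abstract can be seen.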
| Item Type | Article |
|---|---|
| Faculty \ School | Faculty of Science > School of Computing Sciences |
| Depositing User | Vishal Gautam |
| Date Deposited | 10 Mar 2011 09:50 |
| Last Modified | 15 Dec 2022 01:58 |
| URI | https://ueaeprints.uea.ac.uk/id/eprint/23715 |
| DOI | 10.1023/A:1009686825582 |