Recently, a number of publications have proposed alternative methods for least mean square (LMS) algorithms in order to improve convergence rate. It has also been shown that variable step size methods can provide better convergence speed than fixed step size ones. This paper introduces a new algorithm for the ongoing calculation of the step size and investigates its applicability to the training of multilayer neural networks. The proposed method appears to be efficient, at least in the case of low-level additive input noise.
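The abstract does not give the paper's specific step-size recursion, but the general idea of a variable step size LMS filter can be sketched as follows. This is a minimal illustration using the well-known Kwong-Johnston style rule (step size driven by the squared error), not the method proposed in this paper; the function name `vss_lms` and all parameter values are assumptions for the example.

```python
import numpy as np

def vss_lms(x, d, num_taps=4, mu_init=0.01, alpha=0.97, gamma=4.8e-4,
            mu_min=1e-3, mu_max=0.1):
    """Adaptive FIR filtering with a variable step size LMS update.

    The step size mu is recomputed every iteration from the squared
    a priori error (a Kwong-Johnston style rule, used here only as an
    illustration of the variable step size idea): a large error keeps
    mu high for fast convergence, a small error shrinks mu for low
    steady-state misadjustment.
    """
    w = np.zeros(num_taps)          # adaptive filter weights
    mu = mu_init                    # current step size
    errors = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ u                      # a priori error
        w += mu * e * u                       # standard LMS weight update
        # variable step size update, clipped to a safe range
        mu = float(np.clip(alpha * mu + gamma * e * e, mu_min, mu_max))
        errors[n] = e
    return w, errors
```

A typical use is system identification: feed the same input through an unknown FIR system and the adaptive filter, and the weights converge toward the unknown impulse response while the step size decays as the error shrinks.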
Number of pages: 11
Journal: Periodica Polytechnica Electrical Engineering
Publication status: Published - Jan 1 1993
ASJC Scopus subject areas
- Electrical and Electronic Engineering