This study investigates the use of several variants of conjugate gradient (CG) optimization and line search methods to accelerate the convergence of an MLP neural network learning two medical signal classification problems. Much of the previous work has used artificial problems with little relevance to real-world tasks, and results on real-world problems have been variable. The effectiveness of CG compared with standard backpropagation (BP) depended on the degree to which the learning task required finding a global minimum. When learning was stopped once the training set had been learned to an acceptable error tolerance (the typical pattern classification setting), standard BP was faster than CG and did not display the convergence difficulties usually attributed to it. When learning required finding a global minimum (as in function minimization or function estimation tasks), CG methods were faster, but performance depended heavily on careful selection of 'tuning' parameters and the line search. This meta-optimization requirement was more difficult for CG than for BP because of CG's larger number of parameters.
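To make the contrast concrete, the sketch below trains a small MLP two ways: standard BP as fixed-step gradient descent stopped at an error tolerance, and Polak-Ribière CG with a backtracking (Armijo) line search. This is a minimal illustration of the two training regimes discussed above, not the paper's implementation; the toy data, the 2-5-1 network, and the line-search constants are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))            # toy inputs (assumption)
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # toy binary targets (assumption)

def unpack(w):
    """Split the flat parameter vector into a 2-5-1 MLP's weights."""
    W1 = w[:10].reshape(2, 5); b1 = w[10:15]
    W2 = w[15:20].reshape(5, 1); b2 = w[20:21]
    return W1, b1, W2, b2

def loss_grad(w):
    """Mean squared error of a tanh/sigmoid MLP and its gradient (backprop)."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    o = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    e = o[:, 0] - y
    loss = 0.5 * np.mean(e ** 2)
    do = (e[:, None] * o * (1 - o)) / len(y)   # delta at output layer
    gW2 = h.T @ do; gb2 = do.sum(0)
    dh = (do @ W2.T) * (1 - h ** 2)            # delta at hidden layer
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    return loss, np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

def train_bp(w, lr=0.5, tol=1e-2, max_iter=5000):
    """Standard BP: fixed learning rate, stop at an error tolerance."""
    for k in range(max_iter):
        loss, g = loss_grad(w)
        if loss < tol:
            return w, loss, k
        w = w - lr * g
    return w, loss, max_iter

def train_cg(w, tol=1e-2, max_iter=5000):
    """Polak-Ribiere CG with a backtracking Armijo line search."""
    loss, g = loss_grad(w)
    d = -g
    for k in range(max_iter):
        if loss < tol:
            return w, loss, k
        # Backtracking line search along d until the Armijo condition holds
        alpha, c = 1.0, 1e-4
        while True:
            new_loss, new_g = loss_grad(w + alpha * d)
            if new_loss <= loss + c * alpha * (g @ d) or alpha < 1e-10:
                break
            alpha *= 0.5
        w = w + alpha * d
        beta = max(0.0, new_g @ (new_g - g) / (g @ g))  # Polak-Ribiere+
        d = -new_g + beta * d
        loss, g = new_loss, new_g
    return w, loss, max_iter

w0 = rng.standard_normal(21) * 0.5
print("BP:", train_bp(w0.copy())[1:])   # (final loss, iterations)
print("CG:", train_cg(w0.copy())[1:])
```

Even in this toy form, CG exposes more knobs than BP: the Armijo constant, the backtracking factor, and the choice of beta formula (Polak-Ribière here; Fletcher-Reeves is another common variant), whereas BP exposes only a learning rate. This mirrors the meta-optimization burden the abstract attributes to CG.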
Number of pages: 6
Publication status: Published - Dec 1 1995
Event: Proceedings of the 1995 IEEE International Conference on Neural Networks, Part 1 (of 6), Perth, Australia, Nov 27 1995 - Dec 1 1995