Original contribution
Learning in neural networks by using tangent planes to constraint surfaces
Cited by (4)
Learning in fully recurrent neural networks by approaching tangent planes to constraint surfaces
2012, Neural Networks

Citation Excerpt: Finally, the experimental results are given in Section 4. Lee (1993, 1997) proposed an algorithm for supervised learning in multilayered feedforward neural networks which gives significantly faster convergence than the gradient descent back-propagation algorithm. This tangent plane algorithm treats each teaching value as a constraint which defines a surface in the weight space of the network.
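The constraint idea in the excerpt can be illustrated with a minimal sketch. This is not Lee's full algorithm: it assumes a single linear unit y = w·x, for which the constraint surface {w : w·x = t} defined by a teaching value t is a hyperplane, so one orthogonal step onto its tangent plane satisfies the constraint exactly (in a nonlinear network the surface is curved and the step is only approximate). The function name `tangent_plane_step` is illustrative.

```python
import numpy as np

def tangent_plane_step(w, x, t):
    """One tangent-plane update for a single linear unit y = w.x.

    The teaching value t defines the constraint surface {w : w.x = t}
    in weight space. Moving along the gradient of w.x, scaled by the
    residual, projects w orthogonally onto that surface (which is flat
    here, so it coincides with its own tangent plane).
    """
    g = x                       # gradient of w.x with respect to w
    err = t - w @ x             # residual of the teaching constraint
    return w + (err / (g @ g)) * g

# Usage: the projected weights satisfy the constraint exactly.
w = np.array([0.5, -0.2, 0.1])
x = np.array([1.0, 2.0, -1.0])
w_new = tangent_plane_step(w, x, t=1.0)
```

For a multilayered network the same residual-over-gradient-norm step is applied with the backpropagated gradient in place of x, which is where the reported speed-up over plain gradient descent comes from.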
A transformation strategy for implementing distributed, multilayer feedforward neural networks: Backpropagation transformation
1997, Future Generation Computer Systems

Training feedforward neural networks: An algorithm giving improved generalization
1997, Neural Networks

Deterministic neural classification
2008, Neural Computation
Copyright © 1993 Published by Elsevier Ltd.