Original contribution
Learning in neural networks by using tangent planes to constraint surfaces

https://doi.org/10.1016/0893-6080(93)90005-H

Abstract

The principal disadvantage of the back-propagation gradient descent learning algorithm for multilayer feedforward neural networks is its relatively slow rate of convergence. An alternative method, which adjusts the weights by moving to the tangent planes to the constraint surfaces, is shown to give significantly faster convergence whilst preserving the system of back-propagating errors through the network.
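
The abstract does not give the update rule, but the tangent-plane idea admits a simple reading: each teaching value t defines a constraint surface y(w) = t in the weight space of the network, and the weights are moved onto the tangent plane of that surface, which yields the Kaczmarz-style step Δw = (t − y) ∇y / ‖∇y‖². The sketch below illustrates this reading for a one-hidden-layer network with a single sigmoidal output; the network shape, function names, and training loop are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, w2):
    """Hidden activations and scalar output for input x."""
    h = sigmoid(W1 @ x)          # hidden layer
    y = sigmoid(w2 @ h)          # single sigmoidal output
    return h, y

def tangent_plane_step(x, t, W1, w2):
    """One assumed tangent-plane update toward the constraint y(w) = t.

    Gradients come from ordinary back-propagation of the output error,
    so the backward pass is preserved; only the step length differs
    from a fixed-learning-rate gradient descent step.
    """
    h, y = forward(x, W1, w2)
    # Back-propagate to obtain dy/dw for every weight.
    dy_dz2 = y * (1.0 - y)                # output sigmoid derivative
    g_w2 = dy_dz2 * h                     # gradient w.r.t. w2
    dz1 = dy_dz2 * w2 * h * (1.0 - h)     # error back-propagated to hidden layer
    g_W1 = np.outer(dz1, x)               # gradient w.r.t. W1
    # Move onto the tangent plane: step length (t - y) / ||grad y||^2.
    norm_sq = np.sum(g_w2**2) + np.sum(g_W1**2)
    step = (t - y) / norm_sq
    return W1 + step * g_W1, w2 + step * g_w2

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))   # 2 inputs, 3 hidden units
w2 = rng.normal(scale=0.5, size=3)
x, t = np.array([0.2, 0.9]), 0.8          # one pattern and its teaching value
for _ in range(20):
    W1, w2 = tangent_plane_step(x, t, W1, w2)
print(forward(x, W1, w2)[1])              # output converges toward t = 0.8
```

Note that, consistent with the abstract, the error signal is still back-propagated through the network; in this sketch the tangent-plane step merely rescales it by (t − y) / ‖∇y‖² in place of a fixed learning rate.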


Cited by (4)

  • Learning in fully recurrent neural networks by approaching tangent planes to constraint surfaces

    2012, Neural Networks
Citation excerpt:

    Finally, the experimental results are given in Section 4. Lee (1993, 1997) proposed an algorithm for supervised learning in multilayered feedforward neural networks which gives significantly faster convergence than the gradient descent back-propagation algorithm. This tangent plane algorithm treats each teaching value as a constraint which defines a surface in the weight space of the network.

  • Deterministic neural classification

    2008, Neural Computation