Paper
Feedforward neural nets and one-dimensional representation
Laurence C. W. Dixon, David Mills
1 July 1992
Abstract
Feedforward nets can be trained to represent any continuous function, and training is equivalent to solving a nonlinear optimization problem. Unfortunately, this frequently leads to an error function whose Hessian matrix is effectively singular at the solution. Traditional quadratic-based optimization algorithms do not converge superlinearly on functions with a singular Hessian, but results on univariate functions show that, even so, they are more efficient and reliable than backpropagation. A feedforward net is used to represent a superposition of its own sigmoid activation function, and the results identify some conditions under which the Hessian of the error function is effectively singular.
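To make the setting concrete, the following sketch (not from the paper; the network size, target function, and all parameter values are illustrative assumptions) shows one condition under which the error Hessian becomes singular: when the net carries more sigmoid units than the target superposition needs, the redundant unit's input weight and bias are unconstrained at an exact solution, so the error surface is flat along those directions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target: a single sigmoid, which a two-hidden-unit net represents redundantly.
def target(x):
    return 1.5 * sigmoid(2.0 * x - 1.0)

# Net output y(x) = c1*sig(a1*x + b1) + c2*sig(a2*x + b2),
# with parameters theta = [a1, b1, c1, a2, b2, c2].
def net(theta, x):
    a1, b1, c1, a2, b2, c2 = theta
    return c1 * sigmoid(a1 * x + b1) + c2 * sigmoid(a2 * x + b2)

def residuals(theta, x):
    return net(theta, x) - target(x)

def jacobian(theta, x, eps=1e-6):
    # Forward-difference Jacobian of the residual vector w.r.t. theta.
    J = np.empty((x.size, theta.size))
    r0 = residuals(theta, x)
    for k in range(theta.size):
        tp = theta.copy()
        tp[k] += eps
        J[:, k] = (residuals(tp, x) - r0) / eps
    return J

x = np.linspace(-3.0, 3.0, 100)

# An exact solution with a redundant unit: c2 = 0 makes a2 and b2 irrelevant.
theta_star = np.array([2.0, -1.0, 1.5, 0.7, 0.3, 0.0])

J = jacobian(theta_star, x)
H = J.T @ J  # equals the Hessian of 0.5*||r||^2 here, since the residuals vanish
print("singular values of the Hessian:", np.linalg.svd(H, compute_uv=False))

The two trailing singular values are zero: those are the flat directions belonging to the redundant unit, and on such a surface quadratic-based methods lose their superlinear convergence rate, which is the regime the abstract describes.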
© 1992 Society of Photo-Optical Instrumentation Engineers (SPIE).
Laurence C. W. Dixon and David Mills "Feedforward neural nets and one-dimensional representation", Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); https://doi.org/10.1117/12.140100
KEYWORDS: Artificial neural networks, Neural networks, Superposition, Algorithm development, Data hiding, Brain, Complex systems