Abstract
The approximation of an arbitrary continuous function by a neural network is, although possible in principle, an uncertain and time-consuming undertaking. First, a training procedure usually requires an unknown and generally large number of steps. Second, it is difficult to determine the size of a network capable of learning the desired function; it may therefore be necessary to repeat the learning sequence in order to find a network of suitable size. Third, even if the network is capable of approximating the desired function, the learning procedure may, depending on the initialization, end up in a local minimum far from the global optimum. Again, the solution may be to repeat the training with different initializations.
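The restart strategy mentioned last can be sketched as follows. This is a minimal pure-Python illustration of training a tiny tanh network from several random initializations and keeping the lowest final error; it is not the deterministic DEFAnet procedure the paper advances, and all function names, hyperparameters, and the target function are illustrative assumptions.

```python
import math
import random

def train_once(data, hidden=4, epochs=2000, lr=0.1, seed=0):
    """Train a tiny 1-input/1-output tanh network by batch gradient
    descent from one random initialization; return the final MSE."""
    rng = random.Random(seed)
    # Parameters: input->hidden weights/biases, hidden->output weights/bias.
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = rng.uniform(-1, 1)
    n = len(data)
    for _ in range(epochs):
        gw1 = [0.0] * hidden; gb1 = [0.0] * hidden
        gw2 = [0.0] * hidden; gb2 = 0.0
        for x, t in data:
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            y = sum(w2[j] * h[j] for j in range(hidden)) + b2
            e = y - t                      # error drives all gradients
            gb2 += e
            for j in range(hidden):
                gw2[j] += e * h[j]
                dh = e * w2[j] * (1.0 - h[j] * h[j])  # tanh' = 1 - tanh^2
                gw1[j] += dh * x
                gb1[j] += dh
        for j in range(hidden):
            w1[j] -= lr * gw1[j] / n; b1[j] -= lr * gb1[j] / n
            w2[j] -= lr * gw2[j] / n
        b2 -= lr * gb2 / n
    # Final mean squared error over the training set.
    mse = 0.0
    for x, t in data:
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        y = sum(w2[j] * h[j] for j in range(hidden)) + b2
        mse += (y - t) ** 2
    return mse / n

def best_of_restarts(data, restarts=5):
    """Mitigate bad local minima by retraining from several random
    initializations and keeping the lowest final error."""
    return min(train_once(data, seed=s) for s in range(restarts))

# Target: approximate sin on [-1, 1] from 21 samples.
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-10, 11)]
print(best_of_restarts(data))
```

The restart loop trades computation for robustness: each seed is an independent gradient-descent run, and only the best outcome is kept. A deterministic scheme such as DEFAnet aims to avoid this repetition altogether.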
Copyright information
© 1993 Springer-Verlag London Limited
Cite this paper
Daunicht, W.J. (1993). Defanet2 — Advancements of a Deterministic Function Approximator. In: Gielen, S., Kappen, B. (eds) ICANN ’93. ICANN 1993. Springer, London. https://doi.org/10.1007/978-1-4471-2063-6_136
Publisher Name: Springer, London
Print ISBN: 978-3-540-19839-0
Online ISBN: 978-1-4471-2063-6