Abstract
In previous work [1, 2] we proposed a method for finding new learning rules for neural networks: the rule is treated as a parametric function, and its parameters are selected with any standard optimization method (such as genetic algorithms, gradient descent, or simulated annealing).
A longer version of this paper is available by ftp in the neuroprose archive, file bengio.general.ps.Z.
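To make the abstract's description concrete, here is a minimal sketch of the general idea, not the authors' formulation from [1, 2]: a synaptic update rule with four free parameters is applied to a toy one-layer network, and simulated annealing (one of the optimizers named above) selects the parameters by maximizing task performance. The particular form of the rule, the toy task, and all function names are assumptions for illustration.

```python
# Hypothetical sketch of a parametric learning rule; the rule's form,
# the toy task, and all names are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def parametric_update(w, x, y, t, theta, lr=0.1):
    # One application of the parametric rule: a weighted sum of simple
    # local terms (error-driven, Hebbian, error bias, weight decay).
    err = t - y
    dw = (theta[0] * np.outer(err, x)                 # delta-rule-like term
          + theta[1] * np.outer(y, x)                 # Hebbian term
          + theta[2] * np.outer(err, np.ones_like(x)) # error bias term
          + theta[3] * w)                             # weight-decay term
    return w + lr * dw

def evaluate(theta, epochs=30):
    # Train a tiny one-layer sigmoid network on a toy task (AND and OR
    # of two bits) using only the parametric rule; return negative MSE.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0., 0.], [0., 1.], [0., 1.], [1., 1.]])
    w = rng.normal(0.0, 0.1, (2, 2))
    for _ in range(epochs):
        for x, t in zip(X, T):
            y = 1.0 / (1.0 + np.exp(-(w @ x)))
            w = parametric_update(w, x, y, t, theta)
    Y = 1.0 / (1.0 + np.exp(-(X @ w.T)))
    return -np.mean((Y - T) ** 2)

# Select the rule's parameters by simulated annealing; genetic algorithms
# or gradient descent, as named in the abstract, could be substituted.
theta = rng.normal(0.0, 0.5, 4)
score, temp = evaluate(theta), 1.0
for _ in range(500):
    cand = theta + rng.normal(0.0, 0.2, 4)
    s = evaluate(cand)
    if s > score or rng.random() < np.exp((s - score) / temp):
        theta, score = cand, s
    temp *= 0.99
print("selected rule parameters:", theta, "final score:", score)
```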
References
[1] S. Bengio, Y. Bengio, J. Cloutier, and J. Gecsei, Aspects théoriques de l’optimisation d’une règle d’apprentissage, in Actes de la conférence Neuro-Nîmes 1992, Nîmes, France, 1992.
[2] Y. Bengio and S. Bengio, Learning a synaptic learning rule, Tech. Rep. 751, Département d’Informatique et de Recherche Opérationnelle, Université de Montréal, Montréal, QC, Canada, 1990.
[3] V. N. Vapnik, Estimation of Dependencies Based on Empirical Data, Springer-Verlag, New York, NY, USA, 1982.
Copyright information
© 1993 Springer-Verlag London Limited
About this paper
Cite this paper
Bengio, S., Bengio, Y., Cloutier, J., Gecsei, J. (1993). Generalization of a Parametric Learning Rule. In: Gielen, S., Kappen, B. (eds) ICANN ’93. Springer, London. https://doi.org/10.1007/978-1-4471-2063-6_131
DOI: https://doi.org/10.1007/978-1-4471-2063-6_131
Publisher Name: Springer, London
Print ISBN: 978-3-540-19839-0
Online ISBN: 978-1-4471-2063-6