Abstract
This paper describes a new scheme for the binary codification of artificial neural networks, designed to automatically generate neural networks using any optimization method. Instead of directly mapping bit strings to network connectivities, this codification abstracts the binary encoding so that it does not reference the artificial indexing of network nodes; it employs shorter strings and avoids illegal points in the search space, while excluding no legal neural network. With these goals in mind, an Abelian semigroup structure with a neutral element is obtained on the set of artificial neural networks under a particular internal operation, called superimposition, that allows complex neural networks to be built from minimal useful structures. The scheme also preserves the significant property that similar neural networks differ by only one bit, which is desirable when using search algorithms. Experimental results obtained by using this codification with genetic algorithms are reported and compared with other codification methods in terms of convergence speed and the size of the networks obtained as solutions.
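As an illustration only (the paper's actual codification is not reproduced here), the algebraic structure the abstract names can be sketched by treating each network as a fixed-length bit string and superimposition as bitwise OR. Under that assumption, the all-zeros string (the empty network) is the neutral element, and the operation is commutative and associative, giving an Abelian semigroup with neutral element:

```python
# Hedged sketch: assumes a network is encoded as a fixed-length bit string
# and that "superimposition" combines two networks by bitwise OR.
# These representation choices are illustrative, not the paper's own encoding.

def superimpose(a: str, b: str) -> str:
    """Combine two equal-length binary codifications bit by bit (OR)."""
    assert len(a) == len(b), "codifications must have the same length"
    return "".join("1" if x == "1" or y == "1" else "0" for x, y in zip(a, b))

net1 = "0101"
net2 = "0011"
neutral = "0000"  # empty network: superimposing it changes nothing

combined = superimpose(net1, net2)                          # "0111"
assert superimpose(net1, neutral) == net1                   # neutral element
assert superimpose(net1, net2) == superimpose(net2, net1)   # commutativity
assert (superimpose(superimpose(net1, net2), "1000")
        == superimpose(net1, superimpose(net2, "1000")))    # associativity
```

Under this toy encoding, two networks that differ in a single connection differ in exactly one bit, matching the locality property the abstract emphasizes.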
Cite this article
Barrios, D., Manrique, D., Plaza, M.R. et al. An Algebraic Model for Generating and Adapting Neural Networks by Means of Optimization Methods. Annals of Mathematics and Artificial Intelligence 33, 93–111 (2001). https://doi.org/10.1023/A:1012337000887