
An Algebraic Model for Generating and Adapting Neural Networks by Means of Optimization Methods

Annals of Mathematics and Artificial Intelligence

Abstract

This paper describes a new scheme of binary codification of artificial neural networks, designed to generate neural networks automatically using any optimization method. Instead of directly mapping bit strings onto network connectivities, this codification abstracts the binary encoding so that it does not depend on the arbitrary indexing of network nodes; it uses shorter strings and avoids illegal points in the search space without excluding any legal neural network. To this end, an Abelian semigroup structure with a neutral element is defined on the set of artificial neural networks by means of an internal operation called superimposition, which allows complex neural networks to be built from minimal useful structures. The scheme preserves the significant property that similar neural networks differ in only one bit, which is desirable when applying search algorithms. Experimental results obtained using this codification with genetic algorithms are reported and compared with other codification methods in terms of convergence speed and the size of the networks obtained as solutions.
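To make the algebraic claim concrete, the following is a minimal sketch, assuming superimposition can be modeled as the union of connection sets; the names Net, superimpose, and EMPTY are hypothetical and not taken from the paper, whose actual encoding operates on binary strings over minimal useful structures. Under this reading, superimposition is commutative and associative, and the empty network is the neutral element, which is exactly the Abelian semigroup with neutral element (i.e., commutative monoid) structure the abstract describes.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Net:
    """A network abstracted as a frozenset of directed connections (source, target).

    Illustrative model only: the paper's codification works on binary strings,
    not explicit connection sets.
    """
    connections: frozenset = frozenset()

    def superimpose(self, other):
        """Union of connection sets: builds a complex net from simpler pieces."""
        return Net(self.connections | other.connections)


# Neutral element: the empty network.
EMPTY = Net()

# Three hypothetical "minimal useful structures".
a = Net(frozenset({(0, 2), (1, 2)}))
b = Net(frozenset({(1, 3), (2, 3)}))
c = Net(frozenset({(2, 4)}))

# Commutative monoid laws the abstract attributes to superimposition:
assert a.superimpose(b) == b.superimpose(a)                                # commutativity
assert a.superimpose(b).superimpose(c) == a.superimpose(b.superimpose(c)) # associativity
assert a.superimpose(EMPTY) == a                                           # neutral element
```

Because the union of two connection sets can only add connections, repeatedly superimposing minimal structures grows a legal network monotonically, which is consistent with the abstract's claim that complex nets are built from minimal useful ones without ever leaving the space of legal networks.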




Cite this article

Barrios, D., Manrique, D., Plaza, M.R. et al. An Algebraic Model for Generating and Adapting Neural Networks by Means of Optimization Methods. Annals of Mathematics and Artificial Intelligence 33, 93–111 (2001). https://doi.org/10.1023/A:1012337000887
