
Framework for the Training of Deep Neural Networks in TensorFlow Using Metaheuristics

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11314)

Abstract

Artificial neural networks (ANNs) are again playing a leading role in machine learning, especially in classification and regression tasks, owing to the emergence of deep learning (ANNs with more than four hidden layers), which allows them to encode increasingly complex features. The growing number of hidden layers has, in turn, posed major challenges for training. Variants (e.g. RMSProp) of classical algorithms such as backpropagation with stochastic gradient descent remain the state of the art for training deep ANNs; however, prior research suggests that the potential advantages of metaheuristics in this area deserve more detailed study. We summarize the design and use of a framework for optimizing the learning of deep neural networks in TensorFlow using metaheuristics. The framework is implemented in Python, trains networks on CPU or GPU depending on the TensorFlow configuration, and allows easy integration of diverse classification and regression problems, different neural network architectures (conventional, convolutional, and recurrent), and new metaheuristics. It initially includes Particle Swarm Optimization, Global-best Harmony Search, and Differential Evolution, and it further enables the conversion of metaheuristics into memetic algorithms by adding exploitation (local search) steps based on the optimizers available in TensorFlow: RMSProp, Adam, Adadelta, Momentum, and Adagrad.
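The framework's actual API is not shown in this preview, but the core idea the abstract describes, treating the network's flattened weight vector as the search space of a population-based metaheuristic while using TensorFlow only to evaluate the loss, can be sketched. The following minimal Python sketch is an illustration under stated assumptions, not the authors' implementation: it trains a small Keras network on a toy sin(x) regression task with a standard Particle Swarm Optimization loop, then, mirroring the memetic variant mentioned above, refines the best particle with a few Adam gradient steps. All hyperparameters, function names, and the toy task are illustrative assumptions.

import numpy as np
import tensorflow as tf

# Toy regression task: fit y = sin(x) with a small feedforward network.
X = np.linspace(-3.0, 3.0, 64).reshape(-1, 1).astype("float32")
y = np.sin(X)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()

def set_flat_weights(model, flat):
    # Unpack a flat parameter vector into the model's weight arrays.
    flat = flat.astype("float32")
    weights, i = [], 0
    for w in model.get_weights():
        n = int(np.prod(w.shape))
        weights.append(flat[i:i + n].reshape(w.shape))
        i += n
    model.set_weights(weights)

def fitness(flat):
    # Loss of the network whose parameters are the given flat vector.
    set_flat_weights(model, flat)
    return float(loss_fn(y, model(X)))

dim = sum(int(np.prod(w.shape)) for w in model.get_weights())
rng = np.random.default_rng(0)

# Standard PSO update: inertia weight, cognitive c1, social c2.
n_particles, inertia, c1, c2 = 20, 0.7, 1.5, 1.5
pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1 = rng.random((n_particles, 1))
    r2 = rng.random((n_particles, 1))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    for i, p in enumerate(pos):
        f = fitness(p)
        if f < pbest_val[i]:
            pbest[i], pbest_val[i] = p.copy(), f
    gbest = pbest[pbest_val.argmin()].copy()

# Memetic variant (sketch): refine the global best with a few gradient
# steps from one of the TensorFlow optimizers named in the abstract.
set_flat_weights(model, gbest)
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
for _ in range(50):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(X, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))

print("final MSE:", float(loss_fn(y, model(X))))

Because every fitness evaluation is a full forward pass over the data, a practical implementation would evaluate the whole population on the GPU in one batched TensorFlow call; the per-particle loop here is kept only for readability.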



Author information

Correspondence to Carlos Cobos.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Muñoz-Ordóñez, J., Cobos, C., Mendoza, M., Herrera-Viedma, E., Herrera, F., Tabik, S. (2018). Framework for the Training of Deep Neural Networks in TensorFlow Using Metaheuristics. In: Yin, H., Camacho, D., Novais, P., Tallón-Ballesteros, A. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2018. Lecture Notes in Computer Science, vol 11314. Springer, Cham. https://doi.org/10.1007/978-3-030-03493-1_83


  • DOI: https://doi.org/10.1007/978-3-030-03493-1_83

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03492-4

  • Online ISBN: 978-3-030-03493-1

  • eBook Packages: Computer Science, Computer Science (R0)
