
A Levy flight-based grey wolf optimizer combined with back-propagation algorithm for neural network training

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

In the present study, a new algorithm is developed for neural network training by combining a gradient-based and a meta-heuristic algorithm. The new algorithm benefits from simultaneous local and global search, reducing the risk of getting stuck in local optima. For this purpose, the global search ability of the grey wolf optimizer (GWO) is first improved with the Levy flight, a random walk whose jump sizes follow the Levy distribution; the occasional long jumps make global exploration of the search space more efficient. This improved algorithm is then combined with back-propagation (BP), so that neural network training benefits both from the enhanced global search ability of GWO and from the local search ability of BP. The performance of the proposed algorithm has been evaluated against a number of well-known meta-heuristic algorithms on twelve classification and function-approximation datasets.
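The full paper gives the precise update rules; as a rough illustration of the Levy-flight ingredient only, a common way to draw Levy-distributed step sizes is Mantegna's algorithm. The sketch below is a minimal, hedged example of that general technique, not code from the paper, and the function name and parameters are illustrative.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-flight step vector via Mantegna's algorithm.

    beta is the Levy index (1 < beta <= 2); smaller beta gives
    heavier tails, i.e. more frequent long jumps.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Mantegna's scale factor for u (v uses unit variance)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    # Heavy-tailed step: mostly small moves, occasionally a long jump
    return u / np.abs(v) ** (1 / beta)
```

In a Levy-enhanced population-based optimizer, such a step would typically perturb a candidate solution, e.g. `pos_new = pos + step_scale * levy_step(pos.size)` (with `step_scale` a small constant); the heavy tail is what lets the search escape local optima that a purely Gaussian perturbation tends to stay trapped in.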


Figures 1–6 appear in the full article.


Author information


Corresponding author

Correspondence to Shima Amirsadri.

Ethics declarations

Conflict of interest

The authors (Shima Amirsadri, Seyed Jalaleddin Mousavirad, and Hossein Ebrahimpour-Komleh) declare that they have no conflict of interest.


About this article


Cite this article

Amirsadri, S., Mousavirad, S.J. & Ebrahimpour-Komleh, H. A Levy flight-based grey wolf optimizer combined with back-propagation algorithm for neural network training. Neural Comput & Applic 30, 3707–3720 (2018). https://doi.org/10.1007/s00521-017-2952-5

