
Learning Efficiency Improvement of Back Propagation Algorithm by Adaptively Changing Gain Parameter together with Momentum and Learning Rate

  • Conference paper
Software Engineering and Computer Systems (ICSECS 2011)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 181)

Abstract

In some practical neural network (NN) applications, a fast response to external events within an extremely short time is highly demanded. However, the back propagation (BP) algorithm, being based on gradient descent optimization, clearly fails to meet this demand in several applications because of its serious drawbacks: slow learning convergence and confinement to shallow local minima. Over the years, many improvements and modifications of the back propagation learning algorithm have been reported. In this research, we modify an existing back propagation learning algorithm with adaptive gain by adaptively changing the momentum coefficient and the learning rate. In learning the patterns, the simulation results indicate that the proposed algorithm hastens convergence and slides the network through shallow local minima compared with the conventional BP algorithm. We use three common benchmark classification problems to illustrate the improvement of the proposed algorithm.
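
The abstract only outlines the general scheme: gradient-descent BP in which the sigmoid gain, the momentum coefficient, and the learning rate are all adjusted during training. As a rough illustration, the Python sketch below trains a small network on XOR with a gain-scaled sigmoid and adapts the three parameters with simple placeholder heuristics. The network size, initial values, and adaptation constants are assumptions for the demo; the authors' actual update rules are given in the full paper, not here.

import numpy as np

def sigmoid(x, gain):
    # Logistic activation with a gain (slope) parameter.
    return 1.0 / (1.0 + np.exp(-gain * x))

def train_xor(epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer with 4 units (architecture chosen arbitrarily for the demo).
    W1 = rng.normal(scale=0.5, size=(2, 4))
    W2 = rng.normal(scale=0.5, size=(4, 1))
    dW1_prev = np.zeros_like(W1)
    dW2_prev = np.zeros_like(W2)

    lr, momentum, gain = 0.5, 0.9, 1.0   # assumed starting values
    prev_error = np.inf

    for _ in range(epochs):
        # Forward pass.
        h = sigmoid(X @ W1, gain)
        y = sigmoid(h @ W2, gain)
        error = 0.5 * np.sum((T - y) ** 2)

        # Backward pass; the gain scales the sigmoid derivative.
        delta_out = (y - T) * gain * y * (1 - y)
        delta_hid = (delta_out @ W2.T) * gain * h * (1 - h)

        # Weight updates with momentum.
        dW2 = -lr * (h.T @ delta_out) + momentum * dW2_prev
        dW1 = -lr * (X.T @ delta_hid) + momentum * dW1_prev
        W2 += dW2
        W1 += dW1
        dW1_prev, dW2_prev = dW1, dW2

        # Placeholder adaptation: nudge the parameters up while the error keeps
        # falling, pull them back when it rises. The paper adapts gain, momentum,
        # and learning rate together, but its actual rules differ from these.
        if error < prev_error:
            lr, momentum, gain = lr * 1.01, min(momentum * 1.005, 0.99), gain * 1.01
        else:
            lr, momentum, gain = lr * 0.7, momentum * 0.9, max(gain * 0.9, 0.1)
        prev_error = error

    return error

if __name__ == "__main__":
    print("final SSE:", train_xor())

XOR is used here only as a compact stand-in for the benchmark classification problems mentioned in the abstract; the adaptation constants (1.01, 0.7, and so on) are illustrative, not the paper's.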




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Abdul Hamid, N., Mohd Nawi, N., Ghazali, R., Mohd Salleh, M.N. (2011). Learning Efficiency Improvement of Back Propagation Algorithm by Adaptively Changing Gain Parameter together with Momentum and Learning Rate. In: Zain, J.M., Wan Mohd, W.M.b., El-Qawasmeh, E. (eds) Software Engineering and Computer Systems. ICSECS 2011. Communications in Computer and Information Science, vol 181. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22203-0_68


  • DOI: https://doi.org/10.1007/978-3-642-22203-0_68

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-22202-3

  • Online ISBN: 978-3-642-22203-0

  • eBook Packages: Computer Science, Computer Science (R0)
