
An Adaptive Learning Algorithm for Supervised Neural Network with Contour Preserving Classification

  • Conference paper
Artificial Intelligence and Computational Intelligence (AICI 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5855)

Abstract

A study of the noise tolerance characteristics of an adaptive learning algorithm for a supervised neural network is presented in this paper. The algorithm allows existing knowledge to age out at a slow rate as the supervised neural network is gradually retrained with consecutive sets of new samples, resembling a change of application locality within a consistent environment. The algorithm uses the contour preserving classification algorithm to pre-process the training data, improving both classification accuracy and noise tolerance. The experimental results confirm the effectiveness of the algorithm and the improvement in noise tolerance. A minimal sketch of the retraining idea is given below.
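
The following is a minimal Python sketch of one aging-and-retraining step in the spirit of the abstract. The function names, the fixed retain_ratio decay scheme, and the contour_preserve stub are illustrative assumptions, not the authors' actual implementation.

    import numpy as np

    def contour_preserve(X, y):
        # Placeholder for the contour preserving classification pre-processing.
        # The real algorithm synthesizes additional samples near the class
        # boundaries; this stub simply returns the data unchanged.
        return X, y

    def retrain_with_aging(model, old_X, old_y, new_X, new_y,
                           retain_ratio=0.7, rng=None):
        # Keep only a fraction of the previously seen samples so that prior
        # knowledge ages out slowly instead of being discarded at once.
        rng = rng or np.random.default_rng(0)
        keep = rng.random(len(old_X)) < retain_ratio
        X = np.vstack([old_X[keep], new_X])
        y = np.concatenate([old_y[keep], new_y])
        X, y = contour_preserve(X, y)   # boundary-aware pre-processing (stub)
        model.fit(X, y)                 # ordinary supervised retraining
        return model, X, y              # carried forward as the next "old" set

In use, the returned (X, y) set would serve as the "old" data for the next retraining round, so the earliest samples are progressively thinned out; model can be any estimator exposing a fit method, for example scikit-learn's MLPClassifier.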

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fuangkhon, P., Tanprasert, T. (2009). An Adaptive Learning Algorithm for Supervised Neural Network with Contour Preserving Classification. In: Deng, H., Wang, L., Wang, F.L., Lei, J. (eds) Artificial Intelligence and Computational Intelligence. AICI 2009. Lecture Notes in Computer Science (LNAI), vol 5855. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05253-8_43


  • DOI: https://doi.org/10.1007/978-3-642-05253-8_43

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-05252-1

  • Online ISBN: 978-3-642-05253-8

  • eBook Packages: Computer Science (R0)
