Extension of the Generalization Complexity Measure to Real Valued Input Data Sets

  • Conference paper
Advances in Neural Networks - ISNN 2010 (ISNN 2010)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6063)

Abstract

This paper studies the extension of the Generalization Complexity (GC) measure to real-valued input problems. The GC measure, defined in Boolean space, was proposed as a simple tool for estimating the generalization ability that can be obtained when a data set is learnt by a neural network. Using two different discretization methods, the real-valued inputs are transformed into binary values, from which the generalization complexity can be computed straightforwardly. The discretization is carried out both with a very simple method based on equal-width intervals (EW) and with a more sophisticated supervised method (the CAIM algorithm) that uses much more information about the data. The relationship between data complexity and the generalization ability obtained is studied, together with the relationship between the size of the best neural architecture and complexity.
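The pipeline the abstract describes, binarizing the real-valued inputs through discretization and then evaluating the GC measure on the resulting Boolean patterns, can be sketched in a few lines. The Python sketch below is illustrative only: the function names, the bin and bit counts, and the synthetic data are assumptions, and gc_first_order implements just one plausible reading of the first-order GC term (the fraction of example pairs at Hamming distance 1 with opposite class labels, in the spirit of the pairs-of-neighbours formulation of Franco and Anthony); the paper's exact normalization may differ.

```python
import numpy as np

def equal_width_discretize(x, n_bins=4):
    """Split the range of one real-valued feature into n_bins equal-width
    intervals (EW) and return the bin index of each value."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    # Use interior edges only, so indices run 0 .. n_bins - 1.
    return np.digitize(x, edges[1:-1])

def to_binary(codes, n_bits):
    """Encode integer bin indices as n_bits binary (0/1) columns."""
    return (codes[:, None] >> np.arange(n_bits)[::-1]) & 1

def gc_first_order(X_bin, y):
    """One reading of the first-order GC term: the fraction of example
    pairs at Hamming distance 1 whose class labels differ."""
    pairs = diff = 0
    for i in range(len(X_bin)):
        for j in range(i + 1, len(X_bin)):
            if np.count_nonzero(X_bin[i] != X_bin[j]) == 1:
                pairs += 1
                diff += int(y[i] != y[j])
    return diff / pairs if pairs else 0.0

# Toy demonstration on synthetic data (not the paper's benchmarks).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))             # 3 real-valued features
y = (X.sum(axis=1) > 0).astype(int)       # binary class labels
codes = np.column_stack(
    [equal_width_discretize(X[:, k], n_bins=4) for k in range(X.shape[1])])
X_bin = np.hstack(
    [to_binary(codes[:, k], n_bits=2) for k in range(X.shape[1])])
print("estimated first-order GC:", gc_first_order(X_bin, y))
```

A supervised discretizer such as CAIM would take the place of equal_width_discretize here, selecting interval boundaries that maximize class-attribute interdependence rather than splitting each feature's range uniformly.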

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gómez, I., Franco, L., Jerez, J.M., Subirats, J.L. (2010). Extension of the Generalization Complexity Measure to Real Valued Input Data Sets. In: Zhang, L., Lu, BL., Kwok, J. (eds) Advances in Neural Networks - ISNN 2010. ISNN 2010. Lecture Notes in Computer Science, vol 6063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13278-0_12

  • DOI: https://doi.org/10.1007/978-3-642-13278-0_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13277-3

  • Online ISBN: 978-3-642-13278-0

  • eBook Packages: Computer Science, Computer Science (R0)
