Function Decomposition Network

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5768)

Abstract

A novel neural network architecture is proposed for solving the nonlinear function decomposition problem. A top-down approach is applied that requires no prior knowledge of the function's properties. The capabilities of the method are demonstrated on synthetic test functions and confirmed on a real-world problem. Possible directions for further development of the presented approach are discussed.
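To make the notion of function decomposition concrete (the paper's actual network architecture is not reproduced here), the sketch below shows the classic idea the abstract alludes to: a multivariate function can sometimes be represented exactly as a superposition of univariate functions and addition. The function names `inner`, `outer`, and `decomposed_product` are illustrative choices, not the paper's notation.

```python
import numpy as np

# Illustrative sketch only, not the paper's architecture.
# "Function decomposition" means representing a multivariate function as a
# superposition of simpler (here univariate) functions. Classic example:
# for positive inputs, multiplication decomposes as x * y = exp(ln x + ln y),
# i.e. univariate transforms plus addition.

def inner(x):
    """Univariate inner function applied to each input separately."""
    return np.log(x)

def outer(s):
    """Univariate outer function applied to the sum of inner outputs."""
    return np.exp(s)

def decomposed_product(x, y):
    # f(x, y) = outer(inner(x) + inner(y)) reproduces x * y for x, y > 0.
    return outer(inner(x) + inner(y))

x, y = 3.0, 7.0
assert abs(decomposed_product(x, y) - x * y) < 1e-9
```

A decomposition network, roughly speaking, learns such inner and outer mappings from data instead of assuming them in advance.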




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bodyanskiy, Y., Popov, S., Titov, M. (2009). Function Decomposition Network. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_74

  • DOI: https://doi.org/10.1007/978-3-642-04274-4_74

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04273-7

  • Online ISBN: 978-3-642-04274-4

  • eBook Packages: Computer Science, Computer Science (R0)
