
An Efficient Algorithm for Complex-Valued Neural Networks Through Training Input Weights

  • Conference paper
Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10637)

Abstract

Complex-valued neural networks extend conventional neural networks from the real domain to the complex domain. The fully complex extreme learning machine (CELM) is an efficient algorithm that converges faster than the common complex backpropagation (CBP) networks; however, it requires more hidden neurons to reach competitive performance. Recently, an efficient learning algorithm called the upper-layer-solution-aware algorithm (USA) was proposed for single-hidden-layer feed-forward neural networks. Motivated by USA, this paper proposes an efficient algorithm (GGICNN) that trains the input weights of split complex-valued neural networks. Detailed experiments comparing the proposed algorithm with CELM and CBP show that it achieves better generalization ability and a more compact architecture.
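To make the setting concrete, the following is a minimal sketch of the CELM-style baseline the abstract refers to: a single-hidden-layer network with a split-complex activation (the real activation applied separately to real and imaginary parts), random complex input weights, and output weights solved in one step by a least-squares pseudoinverse. All names, dimensions, and the toy data here are my own illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_tanh(z):
    # Split-complex activation: apply tanh to the real and
    # imaginary parts independently.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# Toy complex-valued regression problem (illustrative only).
n, d, h = 200, 3, 20                      # samples, inputs, hidden neurons
X = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
w_true = rng.standard_normal((d, 1)) + 1j * rng.standard_normal((d, 1))
T = X @ w_true                            # targets

# Random complex input weights and biases (fixed, not trained, as in CELM).
W = rng.standard_normal((d, h)) + 1j * rng.standard_normal((d, h))
b = rng.standard_normal((1, h)) + 1j * rng.standard_normal((1, h))

H = split_tanh(X @ W + b)                 # hidden-layer output matrix
beta = np.linalg.pinv(H) @ T              # least-squares output weights

# Relative training error of the one-shot solution.
err = np.linalg.norm(H @ beta - T) / np.linalg.norm(T)
```

The proposed GGICNN, by contrast, additionally trains the input weights `W` (in the spirit of USA) rather than leaving them random, which is what allows a more compact hidden layer.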



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 61305075), the China Postdoctoral Science Foundation (No. 2012M520624), the Natural Science Foundation of Shandong Province (No. ZR2013FQ004, ZR2013DM015, ZR2015AL014), the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20130133120014) and the Fundamental Research Funds for the Central Universities (Nos. 14CX05042A, 15CX05053A, 15CX02079A, 15CX08011A, 15CX02064A).

Author information

Corresponding author: Huaqing Zhang.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Liu, Q., Sang, Z., Chen, H., Wang, J., Zhang, H. (2017). An Efficient Algorithm for Complex-Valued Neural Networks Through Training Input Weights. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds.) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10637. Springer, Cham. https://doi.org/10.1007/978-3-319-70093-9_16


  • DOI: https://doi.org/10.1007/978-3-319-70093-9_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70092-2

  • Online ISBN: 978-3-319-70093-9

  • eBook Packages: Computer Science, Computer Science (R0)
