
Convergence analysis of fully complex backpropagation algorithm based on Wirtinger calculus

  • Brief Communication
  • Published in Cognitive Neurodynamics

Abstract

This paper considers the fully complex backpropagation algorithm (FCBPA) for training fully complex-valued neural networks. We prove both weak convergence and strong convergence of FCBPA under mild conditions, and we show that the error function decreases monotonically during training. The derivation and analysis are carried out within the framework of Wirtinger calculus, which greatly reduces the descriptive complexity. The theoretical results are substantiated by a simulation example.
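To make the setting concrete: in Wirtinger calculus a real-valued cost E of a complex weight vector w is treated as a function of both w and its conjugate w*, the steepest-descent direction is -dE/dw*, and a fully complex (holomorphic) activation f satisfies df/dz* = 0, which is what keeps the backpropagated terms compact. The sketch below trains a single complex neuron by gradient descent on the conjugate Wirtinger gradient. It is a minimal illustration under assumed choices (one neuron, complex tanh activation, synthetic data, learning rate 0.5), not the paper's exact FCBPA or its simulation example.

```python
# Minimal sketch of a fully complex gradient-descent update via Wirtinger
# calculus for one complex neuron. All concrete choices (model size,
# activation, step size, data) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: complex inputs X and targets D = tanh(X @ w_true).
n_samples, n_in = 200, 3
X = 0.3 * (rng.standard_normal((n_samples, n_in))
           + 1j * rng.standard_normal((n_samples, n_in)))
w_true = 0.5 * (rng.standard_normal(n_in) + 1j * rng.standard_normal(n_in))
D = np.tanh(X @ w_true)            # fully complex (holomorphic) activation

w = np.zeros(n_in, dtype=complex)  # trainable complex weights
eta = 0.5                          # learning rate

for epoch in range(1001):
    z = X @ w
    y = np.tanh(z)                 # holomorphic, so dy/dz* = 0
    e = D - y                      # complex error
    # Cost E = (1/2) * mean |e|^2.  Because tanh is holomorphic, the
    # conjugate Wirtinger gradient per sample reduces to
    #   dE/dw* = -(1/2) * e * conj(tanh'(z)) * conj(x)
    fprime = 1.0 - y ** 2          # tanh'(z) = 1 - tanh(z)^2
    grad_conj = -0.5 * np.mean(
        e[:, None] * np.conj(fprime)[:, None] * np.conj(X), axis=0)
    w = w - eta * grad_conj        # steepest descent along -dE/dw*
    if epoch % 200 == 0:
        print(f"epoch {epoch:4d}  E = {0.5 * np.mean(np.abs(e)**2):.6f}")
```

With a sufficiently small learning rate the printed error decreases from epoch to epoch, mirroring (in this toy setting) the monotonicity property the paper proves for FCBPA under its stated conditions.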




Acknowledgments

This research was supported by the National Natural Science Foundation of China (61101228, 10871220), the China Postdoctoral Science Foundation (No. 2012M520623), the Research Fund for the Doctoral Program of Higher Education of China (No. 20122304120028), and the Fundamental Research Funds for the Central Universities.

Author information

Corresponding author

Correspondence to Huisheng Zhang.


About this article

Cite this article

Zhang, H., Liu, X., Xu, D. et al. Convergence analysis of fully complex backpropagation algorithm based on Wirtinger calculus. Cogn Neurodyn 8, 261–266 (2014). https://doi.org/10.1007/s11571-013-9276-7
