
A solving method based on neural network for a class of multi-leader–follower games

  • Original Article
  • Published in Neural Computing and Applications

Abstract

In this paper, we present a new solution approach for a class of multi-leader–follower games. For the problem studied, we first propose a neural network model. Then, based on Lyapunov and LaSalle theories, we prove that the trajectory of the neural network model converges to an equilibrium point, which corresponds to the Nash equilibrium of the problem studied. Numerical results show that the proposed neural network approach is feasible for the problem studied.
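Although the paper's model is only summarized in this excerpt, the core idea — drive a dynamical system \(\dot{z} = -k\nabla E(z)\) toward a minimizer of an energy function E — can be sketched numerically. The quadratic energy below is a hypothetical stand-in for the paper's merit function (which is built from the game's optimality system and is not reproduced here), and `solve_by_neural_network` is an illustrative name, not the authors' code.

```python
import numpy as np

# Hypothetical quadratic energy standing in for the paper's merit function
# E(z); the real E comes from the game's optimality conditions.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])

def E(z):
    """Toy energy; shifted so that min E = 0, matching E(z*) = 0 in the paper."""
    return 0.5 * z @ A @ z - b @ z + 0.5 * b @ np.linalg.solve(A, b)

def grad_E(z):
    return A @ z - b

def solve_by_neural_network(z0, k=1.0, dt=0.01, steps=5000):
    """Forward-Euler integration of the dynamics dz/dt = -k * grad E(z)."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z - dt * k * grad_E(z)
    return z

z_star = solve_by_neural_network([5.0, -3.0])
print(np.linalg.norm(grad_E(z_star)) < 1e-6)   # True: an equilibrium point
```

For a strictly convex E, any initial point gives a trajectory that settles at the unique minimizer; the paper's contribution is constructing an E whose minimizers are exactly the Nash equilibria of the game.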


References

  1. Nash JF (1950) Equilibrium points in \(n\)-person games. Proc Natl Acad Sci USA 36:48–49

  2. Nash JF (1951) Non-cooperative games. Ann Math 54:286–295

  3. Bai X, Shahidehpour SM, Ramesh VC, Yu E (1997) Transmission analysis by Nash game method. IEEE Trans Power Syst 12:1046–1052

  4. Song H, Liu CC, Lawarree J (2002) Nash equilibrium bidding strategies in a bilateral electricity market. IEEE Trans Power Syst 17:73–79

  5. Hobbs BF (2001) Linear complementarity models of Nash–Cournot competition in bilateral and POOLCO power markets. IEEE Trans Power Syst 16:194–202

  6. Hu M, Fukushima M (2011) Variational inequality formulation of a class of multi-leader-follower games. J Optim Theory Appl 151:455–473

  7. Pang JS, Fukushima M (2005) Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games. Comput Manag Sci 2:21–56

  8. Hu X, Ralph D (2007) Using EPECs to model bilevel games in restructured electricity markets with locational prices. Oper Res 55:809–827

  9. Ding XP (2012) Equilibrium existence theorems for multi-leader-follower generalized multiobjective games in FC-spaces. J Glob Optim 53:381–390

  10. Jia W, Xiang S, Hao J, Yang Y (2015) Existence and stability of weakly Pareto-Nash equilibrium for generalized multiobjective multi-leader-follower games. J Glob Optim. doi:10.1007/s10898-014-0178-y

  11. Hobbs BF, Metzler C, Pang JS (2000) Strategic gaming analysis for electric power networks: an MPEC approach. IEEE Trans Power Syst 15:638–645

  12. Kanzow C, Fukushima M (1998) Solving box constrained variational inequalities by using the natural residual with D-gap function globalization. Oper Res Lett 23:45–51

  13. Kennedy MP, Chua LO (1988) Neural networks for nonlinear programming. IEEE Trans Circuits Syst 35:554–562

  14. Chen KZ, Leung Y, Leung KS et al (2002) A neural network for solving nonlinear programming problems. Neural Comput Appl 11:103–111

  15. Xia YS, Wang J (1998) A general methodology for designing globally convergent optimization neural networks. IEEE Trans Neural Netw 9(6):1331–1343

  16. Gao XB (2004) A novel neural network for nonlinear convex programming. IEEE Trans Neural Netw 15(3):613–621

  17. Lv Y, Hu T, Wang G, Wan Z (2008) A neural network approach for solving nonlinear bilevel programming problem. Comput Math Appl 58(12):2823–2829

  18. Lv Y, Chen Z, Wan Z (2010) A neural network approach for solving a convex quadratic bilevel programming problem. J Comput Appl Math 234:505–511

  19. Kinderlehrer D, Stampacchia G (1980) An introduction to variational inequalities and their applications. Academic, New York

  20. Hale JK (1980) Ordinary differential equations, 2nd edn. Krieger, Huntington, NY

  21. LaSalle JP (1976) The stability of dynamical systems. Springer, New York


Acknowledgements

Supported by the National Natural Science Foundation of China (11201039, 71471140, 61273179).

Author information

Corresponding author: Yibing Lv.

Appendix

A. Proof of Proposition 3.1

Proposition 3.1 follows immediately from Proposition 2 in [18]. \(\square \)

B. Proof of Theorem 3.4

It is obvious that \(E(z^{*})=0\) if and only if \(z^{*}\) solves (8). By Theorem 3.3, \(z^{*}\) satisfies (8) \(\Leftrightarrow \) \(x^{*}\) solves the variational inequality problem (7). Therefore, Theorem 3.4 is proved. \(\square \)

C. Proof of Theorem 3.5

Let \(z^{*}\in \Theta \) be arbitrary. By Proposition 3.1, \(E(z^{*})=0\), which means that \(z^{*}\) solves the unconstrained programming problem \(\min \nolimits _{z\in R^{n_{i}+n_{ii}+s_{i}+s_{ii}+t_{i}+t_{ii}}}E(z)\). By the first-order necessary optimality condition, \(\nabla E(z^{*})=0\), and hence \(z^{*}\in \Omega \). Since \(z^{*}\) is an arbitrary point of \(\Theta \), it follows that \(\Theta \subseteq \Omega \). \(\square \)

D. Proof of Theorem 3.6

If \(F_{0}(x_{i},x_{ii})\) is strictly monotone, then by Theorem 3.3 the system (8) has solutions. Moreover, \(E(z)>0\) for all \(z\ne z^{*}\), i.e., E(z) is positive definite. Let \(z=z(t,z^{0})\) denote the trajectory of the neural network (10) corresponding to the initial point \(z^{0}\); along this trajectory we have

$$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t}E(z)&=\frac{{\text {d}}}{{\text {d}}t}E[z(t,z^{0})]\\ &=\nabla E(z)^{T}\cdot \frac{{\text {d}}z}{{\text {d}}t}\\ &=-k\Vert \nabla E(z)\Vert ^{2}<0. \end{aligned}$$

Following the Lyapunov Theorem [20], we have the result. \(\square \)
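The Lyapunov decrease established above can be checked numerically on a toy energy: along an Euler discretization of the dynamics \(\dot{z}=-k\nabla E(z)\), the finite difference of E per unit time should match \(-k\Vert \nabla E(z)\Vert ^{2}\) up to discretization error. The quadratic E below is an assumed stand-in, not the paper's merit function.

```python
import numpy as np

# Hypothetical quadratic stand-in for the paper's merit function E(z).
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])
E = lambda z: 0.5 * z @ A @ z - b @ z
grad_E = lambda z: A @ z - b

# Along an Euler step of dz/dt = -k * grad E(z), the finite difference
# (E(z_next) - E(z)) / dt should match -k * ||grad E(z)||^2.
k, dt = 1.0, 1e-5
z = np.array([2.0, 1.0])
worst_rel_err = 0.0
for _ in range(10):
    predicted = -k * np.linalg.norm(grad_E(z)) ** 2   # dE/dt from the identity
    z_next = z - dt * k * grad_E(z)                   # one Euler step
    observed = (E(z_next) - E(z)) / dt                # finite-difference dE/dt
    worst_rel_err = max(worst_rel_err, abs(observed - predicted) / abs(predicted))
    z = z_next
print(worst_rel_err < 1e-3)   # True: the identity holds up to discretization error
```

Since the observed dE/dt is strictly negative away from equilibrium, E serves as a Lyapunov function for the dynamics, which is the content of Theorem 3.6.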

E. Proof of Theorem 3.7

Firstly, we prove the following two results.

  • (a) The function \(E(z(t,z^{0}))\) is monotone non-increasing in \(t\).

    Actually, following Theorem 3.6, the result (a) is obvious.

  • (b) Let \(\Upsilon =\{z(t,z^{0}):t\ge 0\}\); then \(z(t,z^{0})\) is a bounded positive semi-trajectory.

    First, E(z) is bounded from below. By the continuity of E(z) together with (a), the set \(L(z^{0})\) is compact and

    $$\begin{aligned} \Upsilon \subseteq L(z^{0}). \end{aligned}$$

    Hence, the result (b) is obtained.

Now, we prove that \(\lim \nolimits _{n\rightarrow +\infty }z(t_{n},z^{0})={\bar{z}}\).

First, \(\Upsilon \) is a bounded set. Take a strictly increasing time sequence \({\bar{t}}_{n}\), \(0\le {\bar{t}}_{1}<{\bar{t}}_{2}<\cdots <{\bar{t}}_{n}\rightarrow +\infty \); then \(\{z({\bar{t}}_{n},z^{0})\}\) is a bounded sequence consisting of infinitely many points. Hence, there exists a subsequence \(z(t_{n},z^{0})\) with a strictly increasing time sequence \(\{t_{n}\}\subseteq \{{\bar{t}}_{n}\}\), \(t_{n}\rightarrow +\infty \), satisfying

$$\begin{aligned}&\lim \limits _{n\rightarrow +\infty }z(t_{n},z^{0})={\bar{z}}. \end{aligned}$$

We now prove the second part of the theorem.

By the result in (a), we know that \(E({\bar{z}})=0\Leftrightarrow \nabla E({\bar{z}})=0\). By the LaSalle invariance principle [21], the trajectory \(\Upsilon \) converges to a set \(\vartheta \) as \(t\rightarrow \infty \), where \(\vartheta \) denotes the largest invariant set contained in the set of equilibrium points. That is, there exists a time sequence \(\{t_{n}\}\) satisfying

$$\begin{aligned}&\lim \limits _{n\rightarrow +\infty }z(t_{n},z^{0})={\bar{z}}, \end{aligned}$$

and \({\bar{z}}\) is the equilibrium point of the neural network (10). The proof is completed. \(\square \)
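The LaSalle-style argument can be illustrated on a toy energy (an assumed quadratic stand-in for the paper's E): sampling the Euler-discretized trajectory of \(\dot{z}=-k\nabla E(z)\) at a strictly increasing time sequence \(t_{n}\) gives points whose successive gaps shrink and whose limit has vanishing gradient, i.e., an equilibrium point of the dynamics. All names and constants below are illustrative.

```python
import numpy as np

# Toy quadratic energy; sample the discretized trajectory at strictly
# increasing times t_n and watch the sampled points converge.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad_E = lambda z: A @ z - b

k, dt = 1.0, 0.01
z = np.array([4.0, -2.0])
samples, step = [], 0
for n in range(1, 8):
    t_n = 2.0 * n                        # strictly increasing times t_n -> infinity
    while step * dt < t_n:
        z = z - dt * k * grad_E(z)       # one Euler step of the dynamics
        step += 1
    samples.append(z.copy())

gaps = [np.linalg.norm(samples[i + 1] - samples[i]) for i in range(len(samples) - 1)]
print(all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1)))  # True: gaps shrink
print(np.linalg.norm(grad_E(samples[-1])) < 1e-6)  # True: limit is an equilibrium
```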

F. Proof of Theorem 3.8

Theorem 3.8 follows immediately from Theorems 3.5, 3.6 and 3.7. \(\square \)


Cite this article

Lv, Y., Wan, Z. A solving method based on neural network for a class of multi-leader–follower games. Neural Comput & Applic 29, 1475–1483 (2018). https://doi.org/10.1007/s00521-016-2648-2
