A neural network for a generalized vertical complementarity problem

In this paper, an efficient artificial neural network is proposed for solving a generalized vertical complementarity problem. Based on the properties of the log-exponential function, the generalized vertical complementarity problem is reformulated as an unconstrained minimization problem. The existence and convergence of the trajectory of the neural network are addressed in detail. In addition, it is proved that if the neural network has an equilibrium point under some initial condition, then the equilibrium point is asymptotically stable or exponentially stable under certain conditions. At the end of the paper, simulation results for a generalized bimatrix game are presented to show the efficiency of the neural network.

The traditional study of optimization was restricted to theoretical investigation and numerical implementation. Since the 1980s, however, neural networks have been used to solve optimization problems [27-29]. The LCP and the NCP were studied through neural networks in [30] and [31], respectively. Another neural network was proposed in [19] for solving the LCP. In recent years, neural networks have remained popular among scholars, and many complex optimization problems have been solved by neural networks [18, 32-40]. As shown in [30], the significant and unique advantage of neural networks for optimization is that they admit simple, real-time hardware implementations.
In this paper, we focus on solving the generalized vertical complementarity problem (1.1) through neural networks. First, based on the properties of the log-exponential function, we reformulate problem (1.1) as an unconstrained minimization problem and study the consistency of this reformulation. Then a neural network is used to solve the unconstrained minimization problem, and under certain conditions its equilibrium point is asymptotically stable or exponentially stable. The obtained results are finally applied to an example of a generalization of the bimatrix game.
The main contributions of this paper are as follows. First, unlike the neural networks in [30] and [31], our neural network is constructed from a smoothing reformulation of the nonsmooth problem (1.1), which avoids computing subgradients in the nonsmooth case. Second, conditions are provided that ensure the consistency of the equilibrium point of the neural network with the solution of (1.1). Moreover, the asymptotic stability and exponential stability of the equilibrium point of the differential equation with the initial condition are studied under certain conditions. Third, the neural network is applied to a generalized vertical complementarity problem arising from a generalization of the bimatrix game.
The main difficulties of this paper are as follows. First, the proof of consistency combines the definition of convergence of a sequence of sets, the properties of the log-exponential function, and the properties of the solution mapping, which requires rather involved techniques from variational analysis. Second, in approximating the original problem, Φ is smooth when α > 0 but not smooth when α = 0; hence, to ensure the convergence of solutions, both cases are investigated in the consistency analysis relating the original problem (1.1), the approximating problem (3.3), and the differential Eq (3.5). Third, the study of the stability of the neural network in the Lyapunov sense involves many definitions and theorems on stability, and some differential equation techniques are needed to combine them into the desired conclusions.
This paper is organized as follows. In Section 2, some preliminary knowledge is provided. In Section 3, we reformulate the complementarity problem as an unconstrained minimization problem and construct a neural network. The consistency and stability results are discussed in Sections 4 and 5, respectively. Simulation results are presented in Section 6. Finally, some concluding remarks are drawn in Section 7.

Preliminaries
In this section, some background material is provided. Let ‖·‖ denote the Euclidean norm of a vector or the Frobenius norm of a matrix. Let B denote the closed unit ball and B(x, δ) the closed ball around x of radius δ > 0. Let ∇φ(x) denote the gradient of φ : R^n → R at x. For a mapping ϕ : R^n → R^m, Jϕ(x) denotes the Jacobian of ϕ at x. The Clarke generalized Jacobian [41] is defined as follows.
Definition 2.1. Let F : R^n → R^m be locally Lipschitz at x ∈ R^n. By Rademacher's theorem, F is differentiable almost everywhere. Let ω_F denote the set of points where F is differentiable. We define the B-subdifferential of F at x as

∂_B F(x) = { V ∈ R^{m×n} : V = lim_{k→∞} JF(x_k), x_k → x, x_k ∈ ω_F },

and the Clarke subdifferential of F at x as

∂F(x) = co ∂_B F(x),

where co denotes the convex hull of a set.
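For instance (an illustration we add here, not from [41]), for F(x) = |x| on R, F is differentiable at every x ≠ 0 with JF(x) = sign(x), so ∂_B F(0) = {−1, 1} and ∂F(0) = co{−1, 1} = [−1, 1].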
The log-exponential function is a smoothing function for max-type functions. Let V : R^n → R be defined by

V(x) = max_{1≤i≤m} v_i(x),

where each v_i : R^n → R is continuously differentiable; in general, V is not differentiable everywhere. The log-exponential function is defined as follows.
Definition 2.2. [42,43] For any α > 0, the log-exponential function of V(x), denoted V(α, x) : R^{n+1} → R, is defined by

V(α, x) = α ln( Σ_{i=1}^m e^{v_i(x)/α} ). (2.1)

Notice that

0 ≤ V(α, x) − V(x) ≤ α ln m, (2.2)

which implies lim_{α→0} V(α, x) = V(x), and the convergence is uniform with respect to x. From the definition, we know that V(α, x) is a smoothing function of V with respect to x for any α > 0.
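As a quick numerical illustration (ours, not from [42,43]), the following MATLAB snippet checks the bound (2.2) for fixed values v_i(x); the shifted form of the logarithm below is algebraically identical to (2.1) but avoids overflow for small α:

% Check 0 <= V(alpha,x) - V(x) <= alpha*log(m) for m = 3 fixed values v_i(x).
v = [0.7; -0.2; 1.5];                                       % values v_i(x) at some fixed x
V = @(alpha) max(v) + alpha * log(sum(exp((v - max(v)) ./ alpha)));  % stable form of (2.1)
for alpha = [1, 0.1, 0.01, 0.001]
    gap = V(alpha) - max(v);                                % lies in [0, alpha*log(3)]
    fprintf('alpha = %6.3f   gap = %.6f   bound = %.6f\n', alpha, gap, alpha*log(3));
end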
Definition 2.3. [43] For sets X_n and X in R^n with X closed, the sequence {X_n} is said to converge to X, written lim_{n→∞} X_n = X, if lim sup_{n→∞} X_n ⊆ X ⊆ lim inf_{n→∞} X_n, where

lim inf_{n→∞} X_n = { x : there exist x_n ∈ X_n for all sufficiently large n ∈ N such that x_n → x },

lim sup_{n→∞} X_n = { x : there exist an infinite subset N' ⊆ N and x_n ∈ X_n for n ∈ N' such that x_n → x },

and N denotes the set of all positive integers.

Definition 2.4. [44] Let h : R^n → R be locally Lipschitz. We call h̃ : R^n × (0, +∞) → R a smoothing function of h if h̃ satisfies the following conditions: 1) for any fixed µ ∈ (0, +∞), h̃(·, µ) is continuously differentiable in R^n; 2) for any fixed x ∈ R^n, lim_{µ→0+} h̃(x, µ) = h(x).

Some fundamental definitions about differential equations from [45] are now reviewed. Consider the differential equation

dx(t)/dt = g(x(t)), x(t_0) = x_0. (2.3)

Definition 2.6. An isolated equilibrium point x* of (2.3) is asymptotically stable if it is stable in the sense of Lyapunov and there exists a δ > 0 such that for any maximal solution x(t), t ∈ [t_0, t_1), if ‖x_0 − x*‖ < δ, then t_1 = +∞ and lim_{t→∞} x(t) = x*.
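For instance (an illustration we add here), for the scalar equation dx/dt = −x, the origin is an isolated equilibrium point and is asymptotically stable, since every solution x(t) = x_0 e^{−(t−t_0)} tends to 0 as t → ∞.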

Neural network
In this section, (1.1) is approximated by an unconstrained minimization problem using the log-exponential function. Then, a neural network in the form of a differential equation is proposed for solving the unconstrained minimization problem.
For convenience, in this paper we only consider (1.1) for l = 3, that is, finding a vector x ∈ R^n such that

min{F_k1(x), F_k2(x), F_k3(x)} = 0, k = 1, 2, ..., n. (3.1)

Notice that for each k, min{F_k1(x), F_k2(x), F_k3(x)} = 0 can be approximated by the following equation:

φ_α(F_k1(x), F_k2(x), F_k3(x)) = 0, (3.2)

in the sense that lim_{α→0+} φ_α(a_1, a_2, a_3) = min{a_1, a_2, a_3}, where

φ_α(a_1, a_2, a_3) = −α ln( e^{−a_1/α} + e^{−a_2/α} + e^{−a_3/α} ).

Then we obtain the following approximation of problem (3.1):

Φ(x, α) = 0, where Φ(x, α) = ( φ_α(F_11(x), F_12(x), F_13(x)), ..., φ_α(F_n1(x), F_n2(x), F_n3(x)) )^T. (3.3)

Suppose that x_α satisfies Φ(x_α, α) = 0 for some α > 0. Then one has φ_α(F_k1(x_α), F_k2(x_α), F_k3(x_α)) = 0 for each k. Consequently, (3.3) has a solution if and only if the following least squares problem has zero minimum:

min_{x ∈ R^n} f(x, α) := (1/2) ‖Φ(x, α)‖². (3.4)

Next, the frame structure of the neural network for solving problem (3.1) is given, based on the steepest descent method for the reformulated problem (3.4):

dx(t)/dt = −τ ∇_x f(x(t), α), x(t_0) = x_0, (3.5)

where τ is a scale factor; τ > 1 indicates that a longer step can be taken in simulation. To simplify our analysis, let τ = 1. A block diagram of the model is shown in Figure 1. We know from Fermat's theorem that, for fixed α > 0, if x(α) is an optimal solution of (3.4), then ∇_x f(x(α), α) = 0, which means that x(α) is an equilibrium point of (3.5).
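To make the construction concrete, a minimal MATLAB sketch of the gradient ∇_x f(x, α) = J_x Φ(x, α)^T Φ(x, α) follows (our illustration; the interface of F and JF is an assumption, not from the paper). It uses the fact that ∂φ_α/∂a_i = e^{−a_i/α} / Σ_j e^{−a_j/α}, so the k-th row of J_x Φ is the convex combination Σ_i w_ki ∇F_ki(x)^T with softmin weights w_ki:

function gradf = grad_f(x, alpha, F, JF)
% Gradient of f(x,alpha) = 0.5*||Phi(x,alpha)||^2 for the smoothed system (3.3).
% Assumed interface (ours): F(x) returns the n-by-3 matrix of values F_ki(x);
% JF(x) returns a 1-by-3 cell array, JF{i} being the n-by-n Jacobian of
% the mapping x -> (F_1i(x), ..., F_ni(x))^T.
Fx  = F(x);
E   = exp(-Fx ./ alpha);           % (a max-shift as in Section 2 can be added for stability)
S   = sum(E, 2);
Phi = -alpha .* log(S);            % Phi(x, alpha), an n-by-1 vector
W   = E ./ S;                      % softmin weights w_ki = dphi_alpha/da_i
Jc  = JF(x);
J   = zeros(numel(Phi));
for i = 1:3
    J = J + W(:, i) .* Jc{i};      % scale the k-th row of Jc{i} by w_ki
end
gradf = J' * Phi;                  % grad_x f = (J_x Phi)' * Phi
end

The network (3.5) with τ = 1 is then the flow dx/dt = -grad_f(x, alpha, F, JF), which can be integrated with a standard ODE solver, as done in Section 6.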
The existence of an equilibrium point of (3.5) is established by the next theorem.

Consistency analysis
In this section, we investigate the relationship between the solutions of problem (3.1) and the solutions of problem (3.3), as well as the relationship between the solutions of problem (3.1) and the equilibrium points of (3.5).
Denote by X_0 the solution set of problem (3.1) and, for α > 0, by X_α the solution set of problem (3.3). For x ∈ X_0, let the index set

K(x) = { k ∈ {1, 2, ..., n} : F_ki(x) = 0 for at least two indices i ∈ {1, 2, 3} }.

Next, inspired by [24], a theorem is given about the convergence of the set X_α as α tends to zero.
Theorem 4.2. Suppose that the following regularity condition holds: for any x̄ ∈ X_0 and any a ∈ R^3 satisfying Σ_{i=1}^3 a_i = 1 and a_i ≥ 0 for each i,

the matrix Ξ(x̄, a) ∈ R^{n×n} is nonsingular, (4.1)

where the k-th row of Ξ(x̄, a) is ∇F_ki(x̄)^T (with i the unique index such that F_ki(x̄) = 0) if k ∉ K(x̄), and Σ_{i=1}^3 a_i ∇F_ki(x̄)^T if k ∈ K(x̄). Then lim_{α→0+} X_α = X_0.

Proof. At first, we show that lim sup_{α→0+} X_α ⊆ X_0. It suffices to prove that if there exist a number sequence {α_n} → 0 and a sequence {x_n} converging to x̄ as n → ∞ such that x_n ∈ X_{α_n} for each n, then x̄ ∈ X_0. Indeed, by (2.2), we have for each k,

| φ_{α_n}(F_k1(x_n), F_k2(x_n), F_k3(x_n)) − min{F_k1(x_n), F_k2(x_n), F_k3(x_n)} | ≤ α_n ln 3,

and φ_{α_n}(F_k1(x_n), F_k2(x_n), F_k3(x_n)) = 0; letting n → ∞ yields min{F_k1(x̄), F_k2(x̄), F_k3(x̄)} = 0, hence x̄ ∈ X_0.

Next we show that for any x̄ ∈ X_0, x̄ ∈ lim inf_{α→0+} X_α. It suffices to show that for any α_n → 0 there exists a sequence {x_n} satisfying x_n ∈ X_{α_n} for each n such that x_n → x̄. According to Lemma 4.1, if the condition

every matrix in ∂_x Φ(x̄, 0) is nonsingular (4.2)

holds at x̄ ∈ X_0, where Φ is defined as in (3.3), then there exist numbers µ > 0, ε > 0 and δ > 0 such that

d(x, X_α) ≤ µ d(0, Φ(x, α)), ∀x ∈ B(x̄, ε), α ∈ B(0, δ).
In particular, taking x = x̄ we have d(x̄, X_{α_n}) ≤ µ d(0, Φ(x̄, α_n)) for all sufficiently large n, which means that X_{α_n} ≠ ∅ for each such n. Moreover, d(0, Φ(x̄, α_n)) → 0 as n → ∞, so there exists a sequence {x_n} satisfying x_n ∈ X_{α_n} for each n and x_n → x̄ as n → ∞. Then, under condition (4.2), x̄ ∈ lim inf_{α→0+} X_α. As a result, to obtain the conclusion, we only need to show that condition (4.1) is sufficient for (4.2). Consider the index set K(x̄). Since for each index k ∈ {1, 2, ..., n}\K(x̄) there exists only one index i ∈ {1, 2, 3} such that F_ki(x̄) = 0, the k-th row of any matrix in ∂_x Φ(x̄, 0) equals ∇F_ki(x̄)^T for that index i. For an index k ∈ K(x̄), by Definition 2.1, the k-th row is a convex combination Σ_{i=1}^3 a_i ∇F_ki(x̄)^T with Σ_{i=1}^3 a_i = 1 and a_i ≥ 0. Hence every matrix in ∂_x Φ(x̄, 0) is of the form Ξ(x̄, a), and condition (4.1) implies condition (4.2). According to Theorem 4.2, under the above conditions we have lim_{α→0+} X_α = X_0. A natural question now is the relationship between X_1α and X_0, where X_1α denotes the set of equilibrium points of (3.5), that is, X_1α = { x ∈ R^n : ∇_x f(x, α) = 0 }.
At the beginning, let us consider the following problem:

min_{x ∈ R^n} g(x) := (1/2) ‖g_1(x)‖²,

where g_1(x) = ( min{F_11(x), F_12(x), F_13(x)}, ..., min{F_n1(x), F_n2(x), F_n3(x)} )^T. It is clear that g_1(x) is not differentiable. Although the squared norm is differentiable, the classical chain rule does not allow us to conclude that the composition g(x) is differentiable. In fact, g(x) is continuously differentiable.
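A one-dimensional example (our illustration) shows why this can happen: take h(x) = min{x, 2x}, which equals 2x for x < 0 and x for x ≥ 0, so h is not differentiable at 0; yet (1/2)h(x)² equals 2x² for x < 0 and x²/2 for x ≥ 0, whose derivative (4x for x < 0, x for x ≥ 0) is continuous at 0. Squaring the residual smooths out the kink of the inner function at its zeros.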
Theorem 4.4. Suppose that x(α) ∈ X_1α for each α > 0, that lim_{α→0+} x(α) = x̄, and that V_0 = lim_{α→0+} J_x Φ(x(α), α) exists and is nonsingular. Then x̄ ∈ X_0.

Proof. We know that

∇_x f(x(α), α) = J_x Φ(x(α), α)^T Φ(x(α), α) = 0 for each α > 0.

According to Definition 2.4, we have Φ(x(α), α) → g_1(x̄) as α → 0+. We know from the proof of Theorem 4.2 that J_x Φ(x(α), α) → V_0 as α → 0+. Therefore, as α → 0+,

J_x Φ(x(α), α)^T Φ(x(α), α) → V_0^T g_1(x̄).

That is, V_0^T g_1(x̄) = 0. Since V_0 is nonsingular, g_1(x̄) = 0, i.e., x̄ ∈ X_0. We know from Theorem 4.4 that, under its conditions, every cluster point of {x(α)} is a solution of the generalized vertical complementarity problem (3.1), where x(α) is an equilibrium point of the neural network (3.5) for α > 0.
In Sections 3 and 4, for the sake of illustration, the consistency analysis is carried out for problem (1.1) with l = 3. In fact, the consistency analysis in this paper extends to (1.1) for an arbitrary choice of l; one only needs to replace 3 with l in the corresponding theorems and proofs.

Stability analysis
In this section, the asymptotic stability and exponential stability of the differential Eq (3.5) are studied. Let α > 0, suppose that x* is an isolated equilibrium point of (3.5), and let Ω* ⊆ R^n be a neighborhood of x* containing no other equilibrium point of (3.5). First, we give a theorem to show the convergence of solutions of the differential Eq (3.5).
Theorem 5.4. Let α > 0 and let x* be an isolated equilibrium point of (3.5) satisfying f(x*, α) = 0. Then x* is asymptotically stable for (3.5).

Proof. First, we need to prove that f(x, α) is a Lyapunov function over the set Ω* for Eq (3.5). From the definition of f(x, α), we know that f(x, α) is nonnegative over R^n. Since x* is an isolated equilibrium point and f(x*, α) = 0, we have f(x, α) > 0 for any x ∈ Ω* \ {x*}; indeed, if f(x, α) = 0 for some x ∈ Ω* \ {x*}, then x would be a minimizer of f(·, α) and hence ∇_x f(x, α) = 0, contradicting the isolation of x*. Now we check the second condition in the definition of a Lyapunov function [45]. Notice that, along any solution x(t) of (3.5) with τ = 1,

d f(x(t), α)/dt = ∇_x f(x(t), α)^T dx(t)/dt = −‖∇_x f(x(t), α)‖² ≤ 0.

Hence, the function f(x, α) is a Lyapunov function for (3.5) over the set Ω*. Because x* is an isolated equilibrium point, we conclude that x* is asymptotically stable for (3.5).
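The monotone decrease of f along trajectories is easy to observe numerically. In the following MATLAB sketch (our illustration; the one-dimensional functions F_11(x) = x, F_12(x) = x² + 1, F_13(x) = 2x are made up for this purpose), the network (3.5) is integrated with ode45 and f(x(t), α) is checked to be nonincreasing:

alpha = 0.05;
a     = @(x) [x; x^2 + 1; 2*x];                 % made-up F_k1, F_k2, F_k3 (n = 1)
da    = @(x) [1; 2*x; 2];                       % their derivatives
Phi   = @(x) -alpha * log(sum(exp(-a(x) / alpha)));
w     = @(x) exp(-a(x) / alpha) / sum(exp(-a(x) / alpha));   % softmin weights
gradf = @(x) (da(x)' * w(x)) * Phi(x);          % grad f = Phi * dPhi/dx
f     = @(x) 0.5 * Phi(x)^2;
[t, x] = ode45(@(t, x) -gradf(x), [0, 20], 3.0);  % network (3.5) with tau = 1
fvals  = arrayfun(f, x);
assert(all(diff(fvals) <= 1e-8));               % Lyapunov decrease along the trajectory
fprintf('x(end) = %.6f,  f(end) = %.3e\n', x(end), fvals(end));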
Theorem 5.5. Let α > 0 and let x* be an isolated equilibrium point of (3.5) such that J_x Φ(x*, α) is nonsingular. Then x* is exponentially stable for (3.5).

Proof. Since J_x Φ(x*, α) is nonsingular, x* satisfies Φ(x*, α) = 0. Notice that x* is an isolated equilibrium point of (3.5); therefore, x* is asymptotically stable by Theorem 5.4. Let δ > 0 be sufficiently small such that, for any solution x(t) with x(t_0) ∈ B(x*, δ), x(t) → x* as t → +∞. Notice that J_x Φ(x, α) J_x Φ(x, α)^T is an n × n nonsingular matrix; hence there exist κ_1 > 0 and κ_2 > 0 such that

κ_1 ‖v‖² ≤ v^T J_x Φ(x, α) J_x Φ(x, α)^T v ≤ κ_2 ‖v‖², ∀v ∈ R^n, x ∈ B(x*, δ).

By the smoothness of the function Φ(x, α), we have the following expansion:

Φ(x, α) = Φ(x*, α) + J_x Φ(x*, α)(x − x*) + o(‖x − x*‖).

Suppose that δ is small enough such that, for some 0 < ε < κ_1, the remainder satisfies ‖o(‖x − x*‖)‖ ≤ ε ‖x − x*‖ for all x ∈ B(x*, δ). Now let τ be the first exit time of the solution from the ball B(x*, δ). Combining the above formulas with Φ(x*, α) = 0, for arbitrary t ∈ Ī = [t_0, τ) we obtain

d‖x(t) − x*‖²/dt = −2 (x(t) − x*)^T ∇_x f(x(t), α) ≤ −2 (κ_1 − ε) ‖x(t) − x*‖².

According to [45, Corollary 2.1], the following inequality is equivalent to the definition of exponential stability:

‖x(t) − x*‖ ≤ c e^{−ω(t − t_0)} ‖x(t_0) − x*‖, t ∈ Ī,

where c > 0 and ω > 0 are constants; here it holds with c = 1 and ω = κ_1 − ε. If τ < +∞, then ‖x(τ) − x*‖ = δ, while the above inequality gives ‖x(τ) − x*‖ < δ; that is a contradiction. Thus τ = +∞ and we complete the proof of exponential stability.

Application to a generalized bimatrix game and simulation
In this section, an example of a generalized bimatrix game is used to test our neural network. At first, the generalized bimatrix game is described; it is based on the bimatrix game introduced in [49] and the generalized SER-SIT stochastic game model in [50]. Let A and B be two nonempty sets of matrices in R^{m×n}. If R_i ∈ {A_i : A ∈ A} for i = 1, 2, ..., m, then the m × n matrix R is said to be a row representative of A, where the subscript denotes the corresponding row. In the same way, a column representative of B is defined through its columns. If

x̄^T R ȳ ≤ u^T R ȳ and x̄^T S ȳ ≤ x̄^T S v (6.1)

for all probability vectors u and v, for all row representative matrices R of A, and for all column representative matrices S of B, then the pair of mixed strategies (x̄, ȳ) is said to be a generalized Nash equilibrium pair for the generalized bimatrix game.
The game-theoretic implication of the above generalized Nash equilibrium pair is as follows. In the game denoted by Λ(A, B), player I deals with the rows of matrices in A while player II deals with the columns of matrices in B. Player I chooses a row representative A of A and a probability distribution x on the set {1, 2}. Player II chooses a column representative B of B and a probability distribution y on the set {1, 2}. Then the first player's cost is x^T A y while the second player's cost is x^T B y. The existence of an equilibrium for Λ(A, B) describes the stage at which no player can decrease his cost by unilaterally changing his row/column selection and probability distribution.
By [49, Proposition 1], the existence of a pair of probability vectors (x̄, ȳ) satisfying (6.1) is equivalent to the existence of a solution (x̄, ȳ) to the equation F(x, y) = 0, (6.2), whose explicit form involves the vector e = (1, −1)^T. Moreover, if F(x̄, ȳ) = 0 and ‖x̄‖ ‖ȳ‖ ≠ 0, then (x̄/‖x̄‖, ȳ/‖ȳ‖) is a generalized Nash equilibrium pair for the generalized bimatrix game. Notice that (6.2) is just a generalized vertical complementarity problem of the form (3.1); we refer to this reformulation as (6.3). To solve this problem, by introducing the log-exponential function with α > 0, we obtain a least squares problem approximating (6.3):

min_z f(z, α) = (1/2) ‖Φ(z, α)‖², (6.4)

where z = (x, y). The following neural network is constructed for solving problem (6.4):

dz(t)/dt = −τ ∇_z f(z(t), α), z(t_0) = z_0.

The simulation is based on Matlab version 9.5, and the solver ode45 in Matlab is used to integrate the differential equations. The starting point z_0 is a uniformly distributed random four-dimensional vector. Some detailed simulation results are given in the following tables. Problem (6.4) has a unique solution z* = (1, 0, 2, 0)^T. According to Table 1, where τ is set to 1000 and α ≤ 0.3, the approximate solutions obtained by the neural network are almost the same as z*, but as α gets bigger, the distance between the approximate solutions and z* becomes larger. From Table 2, where α is set to 0.01 and τ = 1, 10, 100, 1000, the approximate solutions are almost the same as z*; only the convergence time changes. The following observations can be made from the above tables and figures: (i) All trajectories converge to their corresponding steady states, and the convergence is faster when a larger scaling factor is applied. (ii) The smaller α is (for α ≥ 0.01), the closer the approximate solutions are to the solution of the true problem; as α gets bigger, the distance between the approximate solutions and the solution of the true problem becomes larger. These limited numerical experiments verify the stability of the proposed neural network.
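For readers who wish to reproduce this kind of experiment, the following MATLAB driver is a sketch of the setup (our illustration; since the explicit form of F in (6.2) is not reproduced above, Fmat and JF below are placeholder functions to be filled in with the game data, assumed to follow the l = 3 interface of the grad_f routine sketched in Section 3):

% Driver sketch for integrating the network with ode45.
n     = 4;                                   % z = (x, y) is four-dimensional here
alpha = 0.01;                                % smoothing parameter
tau   = 1000;                                % scale factor
z0    = rand(n, 1);                          % random starting point
rhs   = @(t, z) -tau * grad_f(z, alpha, @Fmat, @JF);   % dynamics of the network
opts  = odeset('RelTol', 1e-8, 'AbsTol', 1e-10);
[t, Z] = ode45(rhs, [0, 1], z0, opts);
disp(Z(end, :));                             % approximate solution; compare with z*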

Conclusions
In this paper, a neural network is constructed to solve a generalized vertical complementarity problem. The log-exponential function is introduced to reformulate the generalized vertical complementarity problem as an unconstrained minimization problem, and a neural network is then constructed on the basis of this unconstrained minimization problem. Conditions ensuring the consistency of the equilibrium point of the neural network with the solution of the generalized vertical complementarity problem are provided. Moreover, the asymptotic stability and exponential stability of the equilibrium point of the neural network are studied in detail. Finally, an example of a generalized bimatrix game is presented to test our neural network.

Figure 1. Block diagram of the neural network.