Inertial Extragradient Method for Solving a Family of Strongly Pseudomonotone Equilibrium Problems in Real Hilbert Spaces with an Application to Variational Inequality Problems

In this paper, we propose a new method, obtained by incorporating an inertial step into the extragradient method, for solving a family of strongly pseudomonotone equilibrium problems. The method requires that the bifunction satisfy a strong pseudomonotonicity property and a certain Lipschitz-type condition. A strong convergence result is provided under mild conditions, and the iterative sequence is constructed without prior knowledge of the Lipschitz-type constants of the cost bifunction. The key ingredient is that the method operates with a stepsize sequence that diminishes to zero and is non-summable. As numerical illustrations, we analyze a well-known equilibrium model to support the established convergence result, and the proposed method shows a significant, consistent improvement over the performance of existing methods.


Introduction
Let C be a nonempty closed, convex subset of a real Hilbert space E and let f : E × E → R be a bifunction such that f(u, u) = 0 for all u ∈ C. The equilibrium problem [1] for the bifunction f on C is defined as follows:

Find u* ∈ C such that f(u*, v) ≥ 0, for all v ∈ C. (1)

The equilibrium problem (shortly, EP) was originally introduced in this unifying form by Blum and Oettli [1] in 1994, who provided a comprehensive study of its theoretical properties. This formulation offers a remarkably versatile way to deal with a wide range of topics emerging from the social sciences, economics, finance, image restoration, ecology, transport, networking, elasticity and optimization (for more details see [2][3][4]). The equilibrium problem covers several mathematical problems as special cases, namely minimization problems, fixed point problems, variational inequality problems (shortly, VIP), Nash equilibria of non-cooperative games, complementarity problems, vector minimization problems and saddle point problems (see, e.g., [1,[5][6][7]).

On the other hand, iterative methods are basic and powerful tools for computing numerical solutions of equilibrium problems. In this direction, two well-established approaches are used: the proximal point method [8] and the auxiliary problem principle [9]. The proximal point strategy was originally developed by Martinet [10] for monotone variational inequalities, and later Rockafellar [11] extended this idea to monotone operators. Moudafi [8] proposed the proximal point method for monotone equilibrium problems. Furthermore, Konnov [12] provided a different variant of the proximal point method under weaker assumptions for equilibrium problems.
Several numerical methods based on these techniques have been developed to solve different classes of equilibrium problems in finite- and infinite-dimensional spaces (for more details see [12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]). More specifically, Hieu et al. [30] generated an iterative sequence {u_n} recursively as

v_n = argmin_{v ∈ C} { ξ_n f(u_n, v) + (1/2)‖u_n − v‖² },
u_{n+1} = argmin_{v ∈ C} { ξ_n f(v_n, v) + (1/2)‖u_n − v‖² }.

In addition, inertial-type methods are based on the heavy-ball approach for second-order dynamical systems in time. Polyak first considered inertial extrapolation as an acceleration technique for smooth convex minimization problems. Inertial-type methods are two-step iterative schemes in which the next iterate is obtained from the previous two iterates (see [31,32]). The inertial extrapolation term is used to boost the iterative sequence towards the desired solution. Numerical studies suggest that inertial effects often improve the performance of an algorithm in terms of both the number of iterations and the execution time. These two advantages have increased researchers' interest in developing new inertial methods, and many such methods have already been established for different classes of variational inequality problems (for more details see [33][34][35][36][37]).
In this article, we focus on the second direction, which consists of projection methods; these are well recognized and easy to implement owing to their convenient mathematical computations. Building on the work of Hieu et al. [30] and Vinh et al. [38], we introduce an inertial extragradient method for solving a specific class of equilibrium problems in which f is a strongly pseudomonotone bifunction. Our method works without any knowledge of the Lipschitz-type and strong pseudomonotonicity constants of the bifunction; in other words, these constants are not input parameters of the method. The key ingredient is a stepsize sequence that diminishes to zero and is non-summable. Owing to this choice and the strong pseudomonotonicity of the bifunction, strong convergence of the method is achieved. Finally, numerical experiments indicate that the proposed method is more effective than the existing methods in [30,39,40].
The rest of this paper is organized as follows: Section 2 provides some preliminaries and basic results used throughout the paper. Section 3 presents our proposed method and the corresponding convergence result. Section 4 contains applications of our results to variational inequality problems. Section 5 reports numerical experiments illustrating the efficiency of the proposed algorithm.

Background
Now we provide important lemmas, definitions and other concepts that are used throughout the convergence analysis. As before, C denotes a nonempty closed, convex subset of the real Hilbert space E, and ⟨·, ·⟩ and ‖·‖ denote the inner product and the norm on E, respectively. Let G : E → E be a well-defined operator and let VI(G, C) be the solution set of the variational inequality problem corresponding to the operator G over the set C. Moreover, EP(f, C) stands for the solution set of the equilibrium problem (1) over the set C, and u* denotes an arbitrary element of EP(f, C) or VI(G, C).
In addition, let h : C → R be a convex function. The subdifferential of h at u ∈ C is defined by ∂h(u) = {z ∈ E : h(v) − h(u) ≥ ⟨z, v − u⟩, for all v ∈ C}. The normal cone of C at u ∈ C is given as N_C(u) = {z ∈ E : ⟨z, v − u⟩ ≤ 0, for all v ∈ C}.

Definition 1 ([41]). The metric projection P_C(u) of u ∈ E onto the closed, convex subset C of E is defined as P_C(u) = argmin_{v ∈ C} ‖v − u‖.

Next, we recall various notions of monotonicity for bifunctions (see [1,42] for more details).

Definition 2.
The bifunction f : E × E → R on C, for γ > 0, is said to be:

(i) strongly monotone if f(u, v) + f(v, u) ≤ −γ‖u − v‖², for all u, v ∈ C;
(ii) monotone if f(u, v) + f(v, u) ≤ 0, for all u, v ∈ C;
(iii) strongly pseudomonotone if f(u, v) ≥ 0 implies f(v, u) ≤ −γ‖u − v‖², for all u, v ∈ C;
(iv) pseudomonotone if f(u, v) ≥ 0 implies f(v, u) ≤ 0, for all u, v ∈ C;
(v) satisfying the Lipschitz-type condition on C if there are two positive real numbers c1, c2 such that f(u, w) ≤ f(u, v) + f(v, w) + c1‖u − v‖² + c2‖v − w‖², for all u, v, w ∈ C.

Remark 1. We obtain the following implications from the above definitions:

strongly monotone ⟹ monotone ⟹ pseudomonotone;
strongly monotone ⟹ strongly pseudomonotone ⟹ pseudomonotone.

This section concludes with a few lemmas that are useful in the convergence analysis of our proposed method.

Lemma 1 ([43]). Let C be a nonempty, closed and convex subset of a real Hilbert space E and h : C → R be a convex, subdifferentiable and lower semicontinuous function on C. Then u ∈ C is a minimizer of h if and only if 0 ∈ ∂h(u) + N_C(u), where ∂h(u) and N_C(u) denote the subdifferential of h at u and the normal cone of C at u, respectively.

Lemma 2 ([44]). For every a, b ∈ E and µ ∈ R, the following identity holds: ‖µa + (1 − µ)b‖² = µ‖a‖² + (1 − µ)‖b‖² − µ(1 − µ)‖a − b‖².

Lemma 3 ([45]). Suppose {a_n} and {t_n} are two sequences of nonnegative real numbers satisfying the inequality a_{n+1} ≤ a_n + t_n for all n ∈ N. If ∑ t_n < ∞, then lim_{n→∞} a_n exists.
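The identity in Lemma 2, ‖µa + (1 − µ)b‖² = µ‖a‖² + (1 − µ)‖b‖² − µ(1 − µ)‖a − b‖², can be checked numerically; the following is a minimal sketch in R³ with randomly sampled vectors (the finite-dimensional setting and the sampling are illustrative choices, not part of the lemma):

```python
# Numerical check of the identity in Lemma 2:
# ||mu*a + (1-mu)*b||^2 = mu*||a||^2 + (1-mu)*||b||^2 - mu*(1-mu)*||a-b||^2
import random

random.seed(0)

def norm_sq(x):
    # squared Euclidean norm of a vector given as a list of floats
    return sum(t * t for t in x)

for _ in range(100):
    a = [random.gauss(0, 1) for _ in range(3)]
    b = [random.gauss(0, 1) for _ in range(3)]
    mu = random.uniform(-2, 2)   # the identity holds for every real mu
    lhs = norm_sq([mu * ai + (1 - mu) * bi for ai, bi in zip(a, b)])
    rhs = (mu * norm_sq(a) + (1 - mu) * norm_sq(b)
           - mu * (1 - mu) * norm_sq([ai - bi for ai, bi in zip(a, b)]))
    assert abs(lhs - rhs) < 1e-9
print("identity verified")
```

Note that µ is sampled from [−2, 2] rather than [0, 1]: the identity is valid for every real µ, not only for convex combinations.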

Convergence Analysis for an Algorithm
We develop an algorithmic procedure consisting of two strongly convex minimization problems together with an inertial term used to improve the convergence speed of the iterative sequence; accordingly, it is classified as an inertial extragradient method for strongly pseudomonotone equilibrium problems. The following hypotheses on the bifunction are needed to achieve the strong convergence of the iterative sequence generated by Algorithm 1.
f satisfies the Lipschitz-type condition with two positive constants c1 and c2.
f(u, ·) is convex and subdifferentiable on C for each fixed u ∈ C.
In addition, let {ξ_n} be a sequence of positive real numbers that satisfies the following conditions:

i. lim_{n→∞} ξ_n = 0;
ii. ∑_{n=0}^{∞} ξ_n = +∞.

Algorithm 1
Initialization: Choose u_{−1}, u_0 ∈ C, θ ∈ [0, 1), and a nonnegative sequence {ε_n} with ∑_{n=0}^{∞} ε_n < +∞.
Iterative steps: Choose ϑ_n such that 0 ≤ ϑ_n ≤ β_n, where
β_n = min{ θ, ε_n / ‖u_n − u_{n−1}‖ } if u_n ≠ u_{n−1}, and β_n = θ otherwise.
Step 1: Compute w_n = u_n + ϑ_n (u_n − u_{n−1}) and v_n = argmin_{v ∈ C} { ξ_n f(w_n, v) + (1/2)‖w_n − v‖² }. If v_n = w_n, then stop: w_n is a solution. Otherwise, go to Step 2.
Step 2: Compute u_{n+1} = argmin_{v ∈ C} { ξ_n f(v_n, v) + (1/2)‖w_n − v‖² }.
Set n := n + 1 and go back to Iterative steps.
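To make the steps concrete, the following is a minimal sketch of the iteration, assuming a one-dimensional illustrative bifunction f(u, v) = (v − u)(2u + v) on C = [0, 5]; this toy problem, the stepsize ξ_n = 1/(n + 1) and the summable sequence ε_n = 1/n² are our own assumptions, not the paper's test problems. For this f, each proximal subproblem has a closed form:

```python
# Inertial extragradient sketch for a 1-D equilibrium problem (illustrative,
# not the paper's test problem): f(u, v) = (v - u) * (2*u + v) on C = [0, 5].
# Since f(0, v) = v**2 >= 0 for all v, the point u* = 0 solves problem (1).

def prox_step(anchor, center, xi):
    # argmin_{v in [0,5]} of xi * f(anchor, v) + 0.5 * (center - v)**2:
    # setting the derivative xi*(anchor + 2*v) + (v - center) to zero gives
    # v = (center - xi*anchor) / (1 + 2*xi), then clip to C = [0, 5].
    v = (center - xi * anchor) / (1.0 + 2.0 * xi)
    return min(max(v, 0.0), 5.0)

def inertial_extragradient(u_prev, u0, n_iters=2000, theta=0.3):
    u_old, u = u_prev, u0
    for n in range(1, n_iters + 1):
        xi = 1.0 / (n + 1)           # diminishing, non-summable stepsize
        eps = 1.0 / (n * n)          # summable sequence controlling the inertia
        diff = abs(u - u_old)
        beta = theta if diff == 0 else min(theta, eps / diff)
        w = u + beta * (u - u_old)           # inertial step
        v = prox_step(w, w, xi)              # Step 1
        u_old, u = u, prox_step(v, w, xi)    # Step 2
    return u

print(inertial_extragradient(2.0, 2.0))   # approaches u* = 0
```

Note that the inertial weight β_n is capped by ε_n/|u_n − u_{n−1}|, so the total perturbation injected by the inertial term is summable, which is what Lemma 3 requires of the error terms.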

Remark 2.
(i) Notice that if θ = 0, the above method reduces to the standard extragradient method in, e.g., [30].
Next, we prove the validity of the stopping criterion of Algorithm 1.

Lemma 5.
If v_n = w_n in Algorithm 1, then w_n is a solution of problem (1) on C.
Proof. By the definition of v_n together with Lemma 1, we can write Thus, we have Since η ∈ N_C(v_n), we have ⟨η, y − v_n⟩ ≤ 0, which together with the above expression implies that Furthermore, by η ∈ ∂f(w_n, v_n) and the definition of the subdifferential, we have Combining expressions (8) and (9) with ξ_n ∈ (0, +∞) implies that By v_n = w_n and assumption f1, it follows that f(w_n, y) ≥ 0, for all y ∈ C.
Lemma 6. The iterates of Algorithm 1 satisfy the following inequality.
Proof. By the definition of u_{n+1}, we have The above expression can be written as Since η ∈ N_C(u_{n+1}), we have ⟨η, y − u_{n+1}⟩ ≤ 0 for all y ∈ C. Thus, we obtain By η ∈ ∂f(v_n, u_{n+1}) and the definition of the subdifferential, we obtain Combining expressions (11) and (12), we obtain

Lemma 7. The iterates of Algorithm 1 also satisfy the following inequality.
Proof. The proof follows the same steps as that of Lemma 6.
Next, we give a crucial inequality that is used to prove the boundedness of the iterative sequence generated by Algorithm 1.

Lemma 8.
Suppose that assumptions f1-f4 of Assumption 1 hold and EP(f, C) ≠ ∅. Then, for each u* ∈ EP(f, C), we obtain Proof. It follows from Lemma 7, substituting y = u_{n+1}, that Next, substituting y = u* in Lemma 6, we have Since By the Lipschitz-type continuity of the bifunction f, we have From expressions (16) and (17), we have From expressions (14) and (18), we get Furthermore, we have the following facts: From the above two facts and expression (19), we obtain

Theorem 1.
Let the bifunction f : E × E → R satisfy Assumption 1 with EP(f, C) ≠ ∅. Then the sequences {w_n}, {u_n} and {v_n} generated by Algorithm 1 converge strongly to some u* ∈ EP(f, C).
Proof. By using the above condition in Lemma 8, we have Furthermore, for all n ≥ n_0 the above expression can be written as By Lemma 3, together with expressions (6) and (22), this implies that lim_{n→∞} ‖u_n − u*‖ = l for some finite l ≥ 0.
By the definition of w_n in Algorithm 1, we have The above expression, together with (23) and (7), implies that lim_{n→∞} ‖w_n − u*‖ = l.
From Lemma 8 and expression (24), we have which further implies that Taking the limit as n → ∞ in expression (27), we get Thus, expressions (25) and (28) give From expressions (23), (25) and (29) it follows that the sequences {u_n}, {w_n} and {v_n} are bounded and that, for each u* ∈ EP(f, C), the limits lim_{n→∞} ‖u_n − u*‖², lim_{n→∞} ‖v_n − u*‖² and lim_{n→∞} ‖w_n − u*‖² exist. Next, we prove that the sequence {u_n} converges strongly to u*. It follows from expression (26), for each n ≥ n_0, that Applying (30) recursively for some k > n_0 gives, for some M ≥ 0, and letting k → ∞ leads to It follows from Lemma 4 and expression (32) that Thus, expressions (29) and (33) give From expressions (7) and (28), and using the Cauchy inequality, we have Finally, we get lim_{n→∞} ‖u_n − u*‖ = 0. This completes the proof.
Corollary 1. If we take θ = 0 in Algorithm 1, we obtain the method that appeared in Hieu et al. [30]: the sequences {u_n} and {v_n} converge strongly to the solution u* ∈ EP(f, C).

Application to Variational Inequality Problems
Now we discuss the application of our results to variational inequality problems involving a strongly pseudomonotone, Lipschitz continuous operator. An operator G : E → E is said to be strongly pseudomonotone if there exists γ > 0 such that ⟨G(u), v − u⟩ ≥ 0 implies ⟨G(v), v − u⟩ ≥ γ‖u − v‖² for all u, v ∈ C, and L-Lipschitz continuous if ‖G(u) − G(v)‖ ≤ L‖u − v‖ for all u, v ∈ E. The variational inequality problem is to find u* ∈ C such that ⟨G(u*), v − u*⟩ ≥ 0 for all v ∈ C. Setting f(u, v) := ⟨G(u), v − u⟩, the minimization step for v_n in Algorithm 1 reduces to the projection v_n = P_C(w_n − ξ_n G(w_n)), (38) and likewise u_{n+1} in Algorithm 1 reduces to u_{n+1} = P_C(w_n − ξ_n G(v_n)).
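The projection form above can be sketched directly in code. The following toy example assumes an affine operator G(u) = Mu + q with a positive definite matrix M and the box constraint C = [0, 1]²; the matrix M, the vector q, the stepsize ξ_n = 1/(n + 2) and the inertial parameters are illustrative assumptions, not data from the paper:

```python
# Sketch of the projected form of the method for a variational inequality
# (hypothetical test data): G(u) = M u + q with positive definite M, and
# C = [0, 1]^2, so P_C is coordinate-wise clipping.

M = [[3.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]   # the solution of M u = -q is (0.2, 0.4), which lies inside C

def G(u):
    return [M[0][0] * u[0] + M[0][1] * u[1] + q[0],
            M[1][0] * u[0] + M[1][1] * u[1] + q[1]]

def proj_C(u):
    # metric projection onto the box [0, 1]^2
    return [min(max(x, 0.0), 1.0) for x in u]

def solve(u_prev, u0, n_iters=4000, theta=0.3):
    u_old, u = u_prev, u0
    for n in range(1, n_iters + 1):
        xi = 1.0 / (n + 2)        # diminishing, non-summable stepsize
        eps = 1.0 / (n * n)       # summable control of the inertial term
        diff = max(abs(a - b) for a, b in zip(u, u_old))
        beta = theta if diff == 0 else min(theta, eps / diff)
        w = [a + beta * (a - b) for a, b in zip(u, u_old)]   # inertial step
        gw = G(w)
        v = proj_C([w[i] - xi * gw[i] for i in range(2)])    # v_n = P_C(w_n - xi_n G(w_n))
        gv = G(v)
        u_old, u = u, proj_C([w[i] - xi * gv[i] for i in range(2)])  # u_{n+1} = P_C(w_n - xi_n G(v_n))
    return u

print(solve([1.0, 1.0], [1.0, 1.0]))  # approaches the solution (0.2, 0.4)
```

Since M is symmetric positive definite, G is strongly monotone (hence strongly pseudomonotone) and Lipschitz continuous, so the hypotheses of this section hold and the iterates approach the unique solution of the variational inequality.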

Assumption 2.
We assume that G satisfies the following conditions:
G1. G is strongly pseudomonotone on C and VI(G, C) ≠ ∅;
G2. G is L-Lipschitz continuous on C for some constant L > 0.
Thus, Algorithm 1 reduces to the following algorithm for solving strongly pseudomonotone variational inequality problems.
Corollary 2. Assume that G : C → E satisfies (G1)-(G2) of Assumption 2. Let {w_n}, {u_n} and {v_n} be the sequences generated as follows: choose ϑ_n with 0 ≤ ϑ_n ≤ β_n as in Algorithm 1 and compute
w_n = u_n + ϑ_n(u_n − u_{n−1}), v_n = P_C(w_n − ξ_n G(w_n)), u_{n+1} = P_C(w_n − ξ_n G(v_n)).
In addition, let {ξ_n} be a sequence of positive real numbers meeting the following criteria:
i. lim_{n→∞} ξ_n = 0; ii. ∑_{n=0}^{∞} ξ_n = +∞.
Then the sequences {w_n}, {u_n} and {v_n} converge strongly to u* ∈ VI(G, C).
Setting θ = 0 in Corollary 2, we get the following result.
Corollary 3. Assume that G : C → E satisfies (G1)-(G2) of Assumption 2. Let {u_n} and {v_n} be the sequences generated as follows: choose u_0 ∈ C and compute
v_n = P_C(u_n − ξ_n G(u_n)), u_{n+1} = P_C(u_n − ξ_n G(v_n)),
where {ξ_n} is a sequence of positive real numbers satisfying the conditions
T1. lim_{n→∞} ξ_n = 0 and T2. ∑_{n=0}^{∞} ξ_n = +∞.
Then the sequences {u_n} and {v_n} converge strongly to u* ∈ VI(G, C).

Computational Experiment
We present some numerical results to illustrate the efficiency of the proposed method. All MATLAB codes were run in MATLAB version 9.5 (R2018b) on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz and 8.00 GB RAM. In all examples we use u_{−1} = u_0 = v_0 = (1, 1, · · · , 1, 1)^T; the x-axis indicates the number of iterations or the elapsed time (in seconds), while the y-axis shows the value of D_n. For each method, the corresponding stopping criterion is used, which drives the iterative sequence towards an element of the solution set. Moreover, we use the following notation (n: dimension of the space; N: total number of samples; iter.: average number of iterations; time: average execution time).

Example 1
Assume that there are n firms producing the same product. Let u denote a vector whose entry u_i is the quantity of the product produced by firm i. Choose the price P_i as a decreasing affine function of the total production S = ∑_{i=1}^{n} u_i, i.e., P_i(S) = φ_i − ψ_i S, where φ_i > 0, ψ_i > 0. The profit function of firm i is given by F_i(u) = P_i(S)u_i − t_i(u_i), where t_i(u_i) is the tax and production cost for generating u_i. Assume that C_i = [u_i^min, u_i^max] is the action set of firm i, so the strategy set of the whole model takes the form C := C_1 × C_2 × · · · × C_n. Each firm tries to maximize its profit by choosing the corresponding production level under the assumption that the production of the other firms is a parameter. The technique generally employed to handle this type of model is based on the well-known Nash equilibrium concept. We recall that a point u* ∈ C = C_1 × C_2 × · · · × C_n is an equilibrium point of the model if F_i(u*) ≥ F_i(u*[u_i]) for all u_i ∈ C_i and all i, where the vector u*[u_i] is obtained from u* by replacing u*_i with u_i. The problem of finding a Nash equilibrium point of the model can then be formulated as an equilibrium problem of the form (1). In addition, we assume that both the tax and the unit production fee increase as the production quantity increases. Following [19,22], the bifunction f can be taken as f(u, v) = ⟨Pu + Qv + q, v − u⟩, where q ∈ R^n, Q − P is symmetric negative definite and Q is symmetric positive semidefinite of order n, with Lipschitz-type constants c1 = c2 = (1/2)‖P − Q‖ (for more details see [22]). In this example, both P and Q are randomly generated: choose two random diagonal matrices A1 and A2 with entries from [0, 2] and [−2, 0], respectively.
Two random orthogonal matrices B1 and B2 (RandOrthMat(n)) are then used to generate a positive semidefinite matrix M1 = B1 A1 B1^T and a negative semidefinite matrix M2 = B2 A2 B2^T. Finally, set Q = M1 + M1^T, S = M2 + M2^T and P = Q − S. The entries of q are randomly chosen from [−1, 1], and the constraint set C ⊂ R^n is taken to be closed and convex. The numerical results for the example of Section 5.1 are shown in Figures 1-9 and Table 1.
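The random construction above can be sketched as follows; this is a pure-Python illustration with a small n, in which Gram-Schmidt orthogonalization stands in for MATLAB's RandOrthMat, and the dimension is an illustrative assumption:

```python
# Sketch of the random data generation described above: build orthogonal
# matrices by Gram-Schmidt, then form Q = M1 + M1^T, S = M2 + M2^T and
# P = Q - S, so that P - Q = -S is positive semidefinite.
import random

random.seed(0)

def rand_orth(n):
    # Gram-Schmidt on random Gaussian vectors -> matrix with orthonormal columns
    cols = []
    while len(cols) < n:
        v = [random.gauss(0, 1) for _ in range(n)]
        for u in cols:
            d = sum(vi * ui for vi, ui in zip(v, u))
            v = [vi - d * ui for vi, ui in zip(v, u)]
        norm = sum(vi * vi for vi in v) ** 0.5
        if norm > 1e-8:
            cols.append([vi / norm for vi in v])
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def conjugate(B, diag):
    # B * diag(d) * B^T: the sign of the diagonal entries fixes (semi)definiteness
    n = len(B)
    D = [[diag[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    return mat_mul(mat_mul(B, D), transpose(B))

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

n = 4
A1 = [random.uniform(0, 2) for _ in range(n)]    # nonnegative spectrum -> PSD
A2 = [random.uniform(-2, 0) for _ in range(n)]   # nonpositive spectrum -> NSD
M1 = conjugate(rand_orth(n), A1)
M2 = conjugate(rand_orth(n), A2)
Q = add(M1, transpose(M1))
S = add(M2, transpose(M2))
P = [[Q[i][j] - S[i][j] for j in range(n)] for i in range(n)]

# sanity check: x^T (P - Q) x = -x^T S x >= 0 for any x
x = [random.gauss(0, 1) for _ in range(n)]
quad = sum(x[i] * (P[i][j] - Q[i][j]) * x[j] for i in range(n) for j in range(n))
print(quad >= -1e-9)
```

The check at the end confirms that P − Q = −S is positive semidefinite (equivalently, Q − P is negative semidefinite), as the model requires.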

Example 2
Consider a bifunction f defined on the convex set in terms of an n × n diagonal matrix D with nonnegative entries, an n × n skew-symmetric matrix S and an n × n matrix B. The constraint set C ⊂ R^n is defined through a 100 × n matrix A and a nonnegative vector b. The bifunction f is γ-strongly monotone with γ = min{eig(BB^T + S + D)}, and the Lipschitz-type constants are c1 = c2 = (1/2) max{eig(BB^T + S + D)}. In our experiments, the random matrices are generated as B = rand(n), C = rand(n), S = 0.5C − 0.5C^T, D = diag(rand(n, 1)), and the numerical results for the example of Section 5.2 are shown in Figures 10-19 and Table 2.

Remark 3. From our numerical experiments, we make the following observations:

1. Prior knowledge of the Lipschitz-type constants is not needed to run the algorithms in MATLAB.
2. The convergence rate of the iterative sequence depends on the convergence rate of the stepsize sequence.
3. The convergence rate of the iterative sequence also depends on the nature and the size of the problem.
4. Due to the variable stepsize sequence, a particular stepsize value that is not suited to the current iterate may cause oscillations and humps in the behaviour of the iterative sequence.

Conclusions
In this article, we established a new method by combining an inertial term with the extragradient method for solving a family of strongly pseudomonotone equilibrium problems. The proposed method uses a diminishing, non-summable stepsize sequence and is carried out without prior knowledge of the Lipschitz-type constants and the strong pseudomonotonicity modulus of the bifunction. Two numerical experiments were reported to measure the computational efficiency of our method in comparison with existing methods. The experiments indicate that the method with the inertial scheme performs better than its counterparts without an inertial scheme.