A self-adaptive inertial extragradient method for a class of split pseudomonotone variational inequality problems

Abstract: In this article, we study a class of pseudomonotone split variational inequality problems (VIPs) with non-Lipschitz operators. We propose a new inertial extragradient method with self-adaptive step sizes for finding the solution to the aforementioned problem in the framework of Hilbert spaces. Moreover, we prove a strong convergence result for the proposed algorithm without prior knowledge of the operator norm and under mild conditions on the control parameters. The main advantages of our algorithm are: the strong convergence result obtained without prior knowledge of the operator norm and without the Lipschitz continuity condition often assumed by authors; the minimized number of projections per iteration compared to related results in the literature; the inertial technique employed, which speeds up the rate of convergence; and, unlike several of the existing results in the literature on VIPs with non-Lipschitz operators, our method does not require any linesearch technique for its implementation. Finally, we present several numerical examples to illustrate the usefulness and applicability of our algorithm.


Introduction
Let C be a nonempty, closed, and convex subset of a real Hilbert space H with induced norm ‖⋅‖ and inner product ⟨⋅,⋅⟩. The variational inequality problem (VIP) for f on C is defined as follows: find x* ∈ C such that
⟨f(x*), y − x*⟩ ≥ 0 for all y ∈ C. (1)
If f is monotone, problem (1) is known as a monotone VIP, while it is known as a pseudomonotone VIP if f is pseudomonotone. We denote the solution set of VIP (1) by VI(C, f). In the early 1960s, Stampacchia [1] and Fichera [2] independently introduced the theory of the VIP. The VIP is a fundamental problem that has a wide range of applications in applied mathematics, such as network equilibrium problems, complementarity problems, optimization theory, and systems of nonlinear equations (see [3,4]). As a result of its wide applications, several authors have proposed many iterative algorithms for approximating the solution of the VIP and related optimization problems (see [5-11] and the references therein). The VIP is widely known to be equivalent to the following fixed point equation:
x* = P_C(x* − λf(x*)), (2)
for λ > 0, where P_C is the metric projection from H onto C.
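To make the fixed-point characterization (2) concrete, here is a minimal numerical sketch (our own illustration, not taken from the paper), assuming the box C = [0, 1]² and the affine operator f(x) = x − a, for which the VIP solution is simply P_C(a):

```python
import numpy as np

# Metric projection onto the box C = [0, 1]^2 (componentwise clipping).
def proj_C(x):
    return np.clip(x, 0.0, 1.0)

# Affine, strongly monotone operator f(x) = x - a; the VIP solution is P_C(a).
a = np.array([1.5, 0.3])
f = lambda x: x - a

x_star = proj_C(a)   # candidate solution of VIP (1)
lam = 0.7            # any lambda > 0 may be used in the test below

# Characterization (2): x* solves the VIP iff x* = P_C(x* - lam * f(x*)).
residual = np.linalg.norm(x_star - proj_C(x_star - lam * f(x_star)))
print(residual)  # 0.0
```

Here the residual vanishes exactly, confirming that x* = P_C(a) satisfies (2) for this toy choice of C and f.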
A simple iterative formula that extends (2) is the projection gradient method:
x_{n+1} = P_C(x_n − λf(x_n)), (3)
where f is α-strongly monotone and L-Lipschitz continuous. It is known that Algorithm (3) converges only under rather strict conditions, namely that the operator f is strongly monotone or inverse strongly monotone, and it may fail to converge when f is merely monotone.
In order to overcome this barrier, a famous method called the extragradient method (EgM) was introduced by Korpelevich [12] for solving the VIP in finite dimensional Euclidean spaces. It is defined as follows:
y_n = P_C(x_n − λf(x_n)),
x_{n+1} = P_C(x_n − λf(y_n)), n ≥ 1, (4)
where f is monotone and L-Lipschitz continuous, λ ∈ (0, 1/L), and C is a nonempty, closed, and convex set. If the solution set VI(C, f) is nonempty, then the sequence {x_n} generated by the EgM converges to an element of VI(C, f).
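As a sketch (our own toy example, not one from the paper), the two projection steps of (4) can be run on the monotone, 1-Lipschitz rotation operator f(x) = Mx, a standard case where the plain projected gradient method fails but the EgM converges:

```python
import numpy as np

# Rotation operator: monotone and 1-Lipschitz, with unique solution x* = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda x: M @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)   # C = [-1, 1]^2

lam = 0.5                                   # lambda in (0, 1/L) with L = 1
x = np.array([1.0, 1.0])
for _ in range(300):
    y = proj_C(x - lam * f(x))              # prediction step of (4)
    x = proj_C(x - lam * f(y))              # correction step of (4)

print(np.linalg.norm(x))                    # tends to 0
```

For this linear skew operator the EgM map contracts with factor roughly 0.9 per iteration, so the iterates approach the unique solution x* = 0.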
In recent years, the EgM has received great attention from numerous authors who have improved it in various ways (see, for instance, [13][14][15]). It is observed that the EgM requires the computation of two projections onto the closed convex set C per iteration. However, projection onto an arbitrary closed convex set C is often very difficult to compute. In order to overcome this barrier, authors have developed more efficient iterative algorithms; some of these algorithms are discussed below.
In 2000, Tseng [16] proposed the following iterative scheme, known as Tseng's extragradient method (TEgM):
y_n = P_C(x_n − λAx_n),
x_{n+1} = y_n − λ(Ay_n − Ax_n),
where A is a monotone and Lipschitz continuous operator and λ ∈ (0, 1/L). Clearly, the TEgM requires only one projection per iteration and hence has an advantage over the EgM in computing projections.
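The single-projection structure of the TEgM can be sketched on the same kind of monotone rotation operator (again our own illustrative example):

```python
import numpy as np

# Monotone, 1-Lipschitz operator A(x) = M x; unique solution x* = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)   # C = [-1, 1]^2

lam = 0.5                                   # lambda in (0, 1/L), L = 1
x = np.array([1.0, 1.0])
for _ in range(300):
    y = proj_C(x - lam * A(x))              # the only projection per iteration
    x = y - lam * (A(y) - A(x))             # explicit Tseng correction step

print(np.linalg.norm(x))
```

The correction step is an explicit evaluation rather than a projection, which is the computational advantage over the EgM noted above.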
Furthermore, Censor et al. [8] introduced a new method that modifies one of the projections in the EgM by replacing it with a projection onto a half-space. This method is called the subgradient extragradient method (SEgM) and is defined as follows:
y_n = P_C(x_n − λAx_n),
T_n = {x ∈ H : ⟨x_n − λAx_n − y_n, x − y_n⟩ ≤ 0},
x_{n+1} = P_{T_n}(x_n − λAy_n).
Censor et al. [8,9] proved that, provided the solution set is nonempty, the sequence {x_n} generated by the SEgM converges weakly to an element of VI(C, A). Also, Maingé and Gobinddass [17] obtained a weak convergence result using only a single projection, by means of a projected reflected gradient-type method [14] and an inertial term, for finding the solution of the VIP in real Hilbert spaces.
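The computational point of the SEgM is that projecting onto the half-space T_n has a closed form, unlike a second projection onto a general closed convex set C. A small sketch (our own example) with the same rotation operator:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection of x onto {z : <a, z> <= b} (assumes a != 0)."""
    viol = a @ x - b
    return x if viol <= 0.0 else x - (viol / (a @ a)) * a

M = np.array([[0.0, 1.0], [-1.0, 0.0]])     # monotone rotation operator
A_op = lambda x: M @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)    # C = [-1, 1]^2
lam = 0.5

x = np.array([1.0, 1.0])
for _ in range(300):
    w = x - lam * A_op(x)
    y = proj_C(w)
    n = w - y                                # normal of T_n = {z : <n, z - y> <= 0}
    step = x - lam * A_op(y)
    # If w already lies in C, then T_n is the whole space: no projection needed.
    x = step if n @ n < 1e-30 else proj_halfspace(step, n, n @ y)

print(np.linalg.norm(x))
```

Since C ⊆ T_n by the projection characterization, replacing P_C with P_{T_n} preserves convergence while making the second projection a one-line formula.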
Another related problem is the fixed point problem (FPP). Let S : C → C be a nonlinear mapping. A point p ∈ C is called a fixed point of S if Sp = p. We denote by F(S) the set of fixed points of S, that is, F(S) = {p ∈ C : Sp = p}. Many problems in science and engineering can be formulated as finding the solution of the FPP of a nonlinear operator.
Recently, Thong and Hieu [18] introduced the following viscosity-type subgradient extragradient algorithm for approximating a common solution of the VIP and the FPP in Hilbert spaces: compute x_{n+1} as follows. Step 1. Compute y_n = P_C(x_n − λ_n f(x_n)). Step 2. Construct the half-space T_n = {x ∈ H : ⟨x_n − λ_n f(x_n) − y_n, x − y_n⟩ ≤ 0} and compute z_n = P_{T_n}(x_n − λ_n f(y_n)).
Step 3. Compute x_{n+1} = α_n g(x_n) + (1 − α_n)Sz_n, where g is a contraction. Censor et al. [7] introduced another problem called the split variational inequality problem (SVIP). The SVIP, which is a more general problem than the VIP, is formulated as follows: find x* ∈ C such that
⟨f(x*), x − x*⟩ ≥ 0 for all x ∈ C, (8)
and such that y* = Ax* ∈ Q solves
⟨g(y*), y − y*⟩ ≥ 0 for all y ∈ Q, (9)
where C and Q are nonempty, closed, and convex subsets of real Hilbert spaces H_1 and H_2, respectively; f and g are nonlinear mappings on C and Q, respectively; and A : H_1 → H_2 is a bounded linear operator. Observe that the SVIP can be viewed as a pair of VIPs in which the image, under the bounded linear operator A, of the solution of one VIP in the space H_1 is a solution of another VIP in the space H_2.
It is clear that Algorithm 10 fully exploits the splitting structure of the SVIP (equations (8) and (9)). However, the weak convergence of this method was proved under some strong assumptions, such as assumption (11) and the fact that both mappings are required to be co-coercive (inverse strongly monotone). It is worth mentioning that assumption (11), which depends on the averaged operator technique, has been dispensed with by other authors for solving the SVIP and related problems (see, e.g., [19][20][21][22]), but their methods also relied on the co-coercivity of the cost operators.
In order to overcome some of these weaknesses, He et al. [23] proposed an easily implementable relaxed projection method, which fully exploits the splitting structure of the SVIP, for solving the SVIP (equations (8) and (9)) when the underlying operators are monotone and Lipschitz continuous in finite dimensional spaces. However, this method still requires the reformulation of the original problem into a VIP in a product space (for more details, see [23]).
Tian and Jiang in [24] studied a more general class of SVIP. Precisely, the authors investigated the following problem: find x* ∈ C such that
⟨f(x*), x − x*⟩ ≥ 0 for all x ∈ C and Ax* ∈ F(S), (12)
where f : C → H_1 is monotone and Lipschitz continuous, S : H_2 → H_2 is a nonexpansive mapping, and A : H_1 → H_2 is a bounded linear operator. Moreover, the authors proposed an extragradient-type algorithm, (13), for approximating the solution of problem (12), and proved a convergence theorem for it. It is clear that the class of SVIP (12) considered by Tian and Jiang [24] generalizes the class of SVIP (equations (8) and (9)) considered by Censor et al. [7]. However, we observe that the result of Tian and Jiang [24] is only applicable when the associated cost operator f is monotone and Lipschitz continuous and S is nonexpansive. Moreover, the implementation of their Algorithm (13) requires knowledge of the Lipschitz constant of the cost operator f and prior knowledge of the operator norm ‖A‖. In several instances, these parameters are unknown or difficult to estimate, which can hinder the implementation of their algorithm. In spite of all these stringent conditions, the authors were only able to obtain a weak convergence result for their proposed algorithm. It is known that in solving optimization problems, strong convergence results are more applicable, and therefore more desirable, than weak convergence results.
In order to remedy some of the above limitations, Ogwo et al. [25] proposed and analyzed the convergence of an algorithm for solving SVIP (12) when the underlying operator f is pseudomonotone (a weaker assumption than monotonicity) and Lipschitz continuous, and T is a strictly pseudocontractive mapping.
We observe that while the result of Ogwo et al. [25] improves on the result of Tian and Jiang [24], their result has the following drawbacks: (1) it is not applicable when the cost operator f is non-Lipschitz and/or not sequentially weakly continuous, or when the operator T is more general than a strict pseudocontraction; (2) the proposed algorithm involves a linesearch technique, which could be computationally expensive to implement due to its loop nature; (3) the implementation of their algorithm requires knowledge of the operator norm.
One of our goals in this article is to remedy the above drawbacks. More precisely, we propose a new iterative method for approximating the solution of SVIP (12) with the following features: (1) Unlike the results of Ogwo et al. [25] and Tian and Jiang [24], our proposed algorithm is applicable when the cost operator f is a non-Lipschitz pseudomonotone operator and T is a quasi-pseudocontraction. Moreover, our method does not require the sequential weak continuity condition assumed in [25] and in several other existing results in the literature on VIPs with pseudomonotone operators. (2) Our proposed algorithm does not involve any linesearch technique; it uses a simple but efficient self-adaptive step size technique that generates a nonmonotonic sequence of step sizes. (3) Our algorithm does not require prior knowledge of the operator norm for its implementation. Finally, we present some applications and numerical examples to illustrate the usefulness and efficiency of our proposed method in comparison with some related methods in the literature. Subsequent sections of this article are organized as follows: in Section 2, we recall some basic definitions and lemmas that are relevant to establishing our main result; in Section 3, we present our proposed method; in Section 4, we first establish some lemmas that are useful in proving the strong convergence of our proposed algorithm and then prove the strong convergence theorem for the algorithm; in Section 5, we present some numerical examples to illustrate the performance of our method and compare it with some related methods in the literature; finally, in Section 6, we give concluding remarks.

Preliminaries
In this section, we recall some basic lemmas and definitions required to establish our results. We denote by x_n ⇀ x and x_n → x the weak and strong convergence, respectively, of a sequence {x_n} in H to a point x ∈ H. Let C be a nonempty, closed, and convex subset of a real Hilbert space H. The metric projection [26] P_C : H → C assigns to each x ∈ H the unique point P_C x ∈ C such that ‖x − P_C x‖ ≤ ‖x − y‖ for all y ∈ C. The operator P_C is nonexpansive and satisfies further well-known properties [27,28]. Let T : C → C be a mapping. Then, T is called

(i) a contraction mapping if there exists k ∈ [0, 1) such that ‖Tx − Ty‖ ≤ k‖x − y‖ for all x, y ∈ C, and nonexpansive if this inequality holds with k = 1;
(ii) quasi-nonexpansive if F(T) ≠ ∅ and ‖Tx − p‖ ≤ ‖x − p‖ for all x ∈ C and p ∈ F(T);
(iii) a k-strictly pseudocontractive mapping if there exists k ∈ [0, 1) such that ‖Tx − Ty‖² ≤ ‖x − y‖² + k‖(I − T)x − (I − T)y‖² for all x, y ∈ C;
averaged if it is of the form (1 − α)I + αN, where α ∈ (0, 1) and N is nonexpansive, see [29];
(ix) uniformly continuous if for every ε > 0 there exists δ > 0 such that ‖Tx − Ty‖ < ε whenever ‖x − y‖ < δ. In this connection, see Proposition 11.2 on page 42 of [30] and the early article by Bruck and Reich [31]. It is known that firmly nonexpansive mappings are 1/2-averaged, while averaged mappings are nonexpansive.
Also, every α-inverse strongly monotone mapping is (1/α)-Lipschitz continuous. Moreover, if f is α-strongly monotone and L-Lipschitz continuous, then f is (α/L²)-inverse strongly monotone. Furthermore, both α-strongly monotone and α-inverse strongly monotone mappings are monotone, while monotone mappings are pseudomonotone.
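The second implication above follows from a one-line estimate: strong monotonicity bounds ⟨fx − fy, x − y⟩ from below by ‖x − y‖², and Lipschitz continuity, via ‖x − y‖ ≥ ‖fx − fy‖/L, converts that bound into one in terms of ‖fx − fy‖:

```latex
\langle fx - fy,\; x - y\rangle \;\ge\; \alpha\|x - y\|^{2}
\;\ge\; \alpha\left(\frac{\|fx - fy\|}{L}\right)^{2}
\;=\; \frac{\alpha}{L^{2}}\,\|fx - fy\|^{2}.
```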
However, the converse implications do not hold. For instance, there are mappings that are pseudomonotone but not monotone. In addition, we note that uniform continuity is a weaker notion than Lipschitz continuity. For more examples of pseudomonotone operators that are not monotone, see [32,33].
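A standard stand-in for this phenomenon (our own illustration; the specific mapping intended above may differ) is f(x) = 1/(1 + x²) on ℝ: it is strictly positive, hence pseudomonotone, yet decreasing on (0, ∞), hence not monotone. A quick numerical check:

```python
import numpy as np

# f(x) = 1/(1 + x^2) is positive everywhere, hence pseudomonotone on R:
# f(y)(x - y) >= 0 forces x >= y, and then f(x)(x - y) >= 0 as well.
# But f is decreasing for x > 0, so monotonicity fails.
f = lambda x: 1.0 / (1.0 + x**2)

x, y = 2.0, 1.0
mono_gap = (f(x) - f(y)) * (x - y)      # negative: monotonicity fails here
print(mono_gap)

# Pseudomonotonicity on a grid: whenever f(y)(x - y) >= 0, f(x)(x - y) >= 0.
grid = np.linspace(-5, 5, 101)
ok = all(f(b) * (a - b) < 0 or f(a) * (a - b) >= 0
         for a in grid for b in grid)
print(ok)   # True
```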
Clearly, the class of quasi-pseudocontractive mappings includes the class of pseudo-contractive mappings with nonempty fixed points set, and it contains several other classes of mappings.
It is known that a mapping f is uniformly continuous if and only if, for every ε > 0, there exists a constant K < +∞ such that ‖fx − fy‖ ≤ K‖x − y‖ + ε for all x, y ∈ C. Lemma 2.3.
[30] Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Given x ∈ H and z ∈ C, then z = P_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C.
Lemma 2.4. Let {α_n} be a sequence of nonnegative real numbers satisfying α_{n+1} ≤ (1 − δ_n)α_n + δ_n σ_n for all n ≥ 1, where {δ_n} ⊂ (0, 1) and {σ_n} satisfy the following conditions: Σ_{n=1}^∞ δ_n = ∞ and lim sup_{n→∞} σ_n ≤ 0. Then lim_{n→∞} α_n = 0.
Lemma 2.5. If S is a Lipschitz quasi-pseudocontractive mapping, then the mapping T defined in Algorithm 3.1 is quasi-nonexpansive.
Lemma 2.6. [25,35] Let H be a real Hilbert space. Then, for all x, y ∈ H and β ∈ ℝ, the following hold:
(i) ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩;
(ii) ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖²;
(iii) ‖βx + (1 − β)y‖² = β‖x‖² + (1 − β)‖y‖² − β(1 − β)‖x − y‖².
Furthermore, let {Γ_n}_{n≥n₀} be a sequence of real numbers that does not decrease at infinity, and consider the sequence of integers {τ(n)}_{n≥n₀} defined by τ(n) = max{k ≤ n : Γ_k ≤ Γ_{k+1}}. Then {τ(n)}_{n≥n₀} is a nondecreasing sequence such that τ(n) → ∞ as n → ∞, and for all n ≥ n₀, the following hold: Γ_{τ(n)} ≤ Γ_{τ(n)+1} and Γ_n ≤ Γ_{τ(n)+1}. Finally, if f is pseudomonotone and continuous, then x* ∈ C is a solution of (1) if and only if ⟨f(y), y − x*⟩ ≥ 0 for all y ∈ C.

Proposed algorithm
In this section, we present our proposed algorithm. Let H_1 and H_2 be real Hilbert spaces, and let C be a nonempty, closed, and convex subset of H_1.
Let f : H_1 → H_1 be a pseudomonotone and uniformly continuous operator, let A : H_1 → H_2 be a bounded linear operator, and let S : H_2 → H_2 be a quasi-pseudocontractive mapping such that I − S is demiclosed at zero. We assume that the solution set Γ ≔ {z ∈ VI(C, f) : Az ∈ F(S)} is nonempty. We establish the strong convergence of our proposed algorithm under mild conditions on the control parameters. Now we present our proposed algorithm as follows: choose initial points x_0, x_1 ∈ H_1 and set n = 1.
Step 1: Given the (n − 1)th and nth iterates, choose θ_n such that 0 ≤ θ_n ≤ θ̄_n, with θ̄_n defined as follows: θ̄_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ̄_n = θ otherwise.
Step 2: Compute w_n = x_n + θ_n(x_n − x_{n−1}).
Step 3: Compute t_n = w_n + γ_n A*(T − I)Aw_n, where γ_n = ‖(T − I)Aw_n‖²/‖A*(T − I)Aw_n‖² if (T − I)Aw_n ≠ 0, and γ_n is any nonnegative real number otherwise.
Step 4: Compute y_n = P_C(t_n − λ_n f(t_n)) and z_n = y_n − λ_n(f(y_n) − f(t_n)).
Step 5: Compute x_{n+1} = α_n g(x_n) + (1 − α_n)z_n, and update the step size by λ_{n+1} = min{μ‖t_n − y_n‖/‖f(t_n) − f(y_n)‖, λ_n + φ_n} if f(t_n) ≠ f(y_n), and λ_{n+1} = λ_n + φ_n otherwise, φ_n being any nonnegative real number from a chosen summable sequence. Set n ≔ n + 1 and return to Step 1.
(2) Observe that, by Lemma 2.5, the mapping T is quasi-nonexpansive and I − T is demiclosed at zero.

Remark 3.3.
(i) We point out that condition (C2)(a) is a much weaker assumption than the sequential weak continuity assumption used in several of the existing results in the literature. (ii) Observe that although the pseudomonotone operator f is not necessarily Lipschitz, our proposed method does not require any linesearch technique; instead, it uses a simple step size rule, (67), which generates a nonmonotonic sequence of step sizes. The step size is constructed so as to reduce the dependence of the algorithm on the initial step size λ_1.
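The flavor of such a nonmonotonic rule can be sketched as follows (a generic template under our own assumptions, not the paper's exact rule (67); here μ ∈ (0, 1), and the nonnegative increment φ allows the step size to occasionally grow, which is what makes the generated sequence nonmonotonic and removes any need for a Lipschitz constant):

```python
# Generic self-adaptive step-size template (a sketch, not the exact rule (67)):
# mu in (0, 1); phi >= 0 permits increases, so the step sizes are nonmonotonic.
def next_step(lam, mu, phi, w, y, fw, fy):
    denom = abs(fw - fy)
    if denom > 0.0:
        # A local curvature estimate stands in for the unknown Lipschitz constant.
        return min(mu * abs(w - y) / denom, lam + phi)
    return lam + phi  # no information at this step: relax the step size

lam = next_step(1.0, mu=0.5, phi=0.01, w=2.0, y=1.0, fw=4.0, fy=1.0)
print(lam)  # 0.1666... = min(0.5 * 1/3, 1.01)
```

Because the cap λ_n + φ_n can exceed the current step size, a poor initial choice λ_1 is gradually corrected, which is the dependence-reducing effect described in the remark above.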

Convergence analysis
In this section, we analyze the convergence of our proposed algorithm. First, we establish some lemmas required to prove our strong convergence theorem.
Let q ∈ Γ. By applying the triangle inequality, and then Lemma 2.6(iii) together with the Cauchy–Schwarz inequality, we obtain the first two estimates. Next, by the definition of y_n and the firm nonexpansiveness of P_C, we obtain an inequality which, combined with (24) and (28), yields the required bound. Since y_n ∈ C and q ∈ Γ, we obtain ⟨y_n − q, f(q)⟩ ≥ 0; consequently, the estimate holds for all n > N. Finally, by Lemma 2.6(iii) and the nonexpansiveness of P_C, the claimed inequality follows.
By Lemma 2.6(iii) and the quasi-nonexpansivity of T, we have

2⟨Aw − Aq, (I − T)Aw⟩ = ‖(I − T)Aw‖² + ‖Aw − Aq‖² − ‖TAw − Aq‖² ≥ ‖(I − T)Aw‖²,
since ‖TAw − Aq‖ ≤ ‖Aw − Aq‖ by the quasi-nonexpansivity of T.
By applying (27), (31), and (36), and then (26), (33), and (37), we obtain the corresponding estimates. By the pseudomonotonicity of f, we obtain the next inequality. We assume that fp ≠ 0 (otherwise, p is a solution). Since f satisfies condition (C2)(a), the claim follows. Proof. Let z = P_Γ g(z); we consider two cases to prove the theorem. From (38), and by applying (45) together with the fact that lim_{n→∞} α_n = 0, we obtain the first limit. By the definition of τ_n and the condition on ϕ_n, we obtain the next one. From (34) and (37), we obtain (53). Now, by applying Lemma 2.6 and equations (27), (33), and (53), and then invoking Remark 3.2 and (62), the proof is complete. □ If we set g(x) = v for an arbitrary but fixed v ∈ H_1 and for all x ∈ H_1 in Algorithm 3.1, we obtain the following result as a corollary to Theorem 4.4: choose initial points x_0, x_1 ∈ H_1 and set n = 1.
Step 1: Given the (n − 1)th and nth iterates, choose θ_n such that 0 ≤ θ_n ≤ θ̄_n, with θ̄_n defined as follows: θ̄_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ̄_n = θ otherwise.
Steps 2-5: Compute the iterates as in Algorithm 3.1, with g(x_n) replaced by the fixed vector v. Set n ≔ n + 1 and go to Step 1.
Example 5.1. In this example, we consider Example 5.2 of He et al. [23], a separable convex quadratic programming problem. Problem (68) can be rewritten as SVIP (equations (8) and (9)). The numerical results are reported in Table 1.
Now, let the operator and the set C be defined on ℓ_2, with a = 3; then C is a nonempty, closed, and convex subset of ℓ_2. The numerical results are reported in Table 2.

Conclusion
In this article, we introduced and studied an inertial iterative method for approximating the solution of a class of pseudomonotone SVIPs in the framework of Hilbert spaces. We established that the sequence generated by our method converges strongly to a solution of the SVIP when the cost operator is uniformly continuous, without prior knowledge of the operator norm. We gave some numerical examples to illustrate the efficacy and advantages of our method and compared it with related methods in the literature.

A.3 Appendix
Algorithm of He et al. [23]. Step 0. Given a symmetric positive definite matrix, where κ and Γ are nonempty, closed, and convex subsets of ℝ^N and ℝ^m, respectively.
Step 1. Generate a predictor