Convergence Theorems for the Variational Inequality Problems and Split Feasibility Problems in Hilbert Spaces

In this paper, we establish an iterative algorithm by combining Yamada's hybrid steepest descent method and Wang's algorithm for finding the common solutions of variational inequality problems and split feasibility problems. The strong convergence of the sequence generated by our suggested iterative algorithm to such a common solution is proved in the setting of Hilbert spaces under suitable assumptions imposed on the parameters. Moreover, we propose iterative algorithms for finding the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we also give numerical examples illustrating our algorithms.


Introduction
In 2005, Censor et al. [1] introduced the multiple-sets split feasibility problem (MSSFP), which is formulated as follows: find a point x* such that

x* ∈ ∩_{i=1}^N C_i and Ax* ∈ ∩_{j=1}^M Q_j, (1)

where C_i (i = 1, 2, …, N) and Q_j (j = 1, 2, …, M) are nonempty closed convex subsets of Hilbert spaces H_1 and H_2, respectively, and A: H_1 → H_2 is a bounded linear mapping. Denote by Ω the set of solutions of MSSFP (1). Many iterative algorithms have been developed to solve the MSSFP (see [1][2][3]). Moreover, it arises in many fields in the real world, such as the inverse problem of intensity-modulated radiation therapy, image reconstruction, and signal processing (see [1,4,5] and the references therein). When N = M = 1, the MSSFP is known as the split feasibility problem (SFP); it was first introduced by Censor and Elfving [5] and is formulated as follows: find a point x such that

x ∈ C and Ax ∈ Q. (2)

Denote by Γ the set of solutions of SFP (2). Assume that the SFP is consistent (i.e., (2) has a solution). It is well known that x ∈ C solves (2) if and only if it solves the fixed point equation

x = P_C(I − cA*(I − P_Q)A)x, (3)

where c is a positive constant, A* is the adjoint operator of A, and P_C and P_Q are the metric projections of H_1 and H_2 onto C and Q, respectively (for more details, see [6]). The variational inequality problem (VIP) was introduced by Stampacchia [7]; it consists of finding a point x* ∈ C such that

〈Fx*, x − x*〉 ≥ 0, for all x ∈ C, (4)

where C is a nonempty closed convex subset of a Hilbert space H and F: C → H is a mapping. The ideas of the VIP are applied in many fields, including mechanics, nonlinear programming, game theory, and economic equilibrium (see [8][9][10][11][12]).
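To make the fixed point characterization (3) concrete, the following Python sketch implements the operator T = P_C(I − cA*(I − P_Q)A), assuming for illustration that C and Q are Euclidean balls (so their metric projections have closed forms); the function names and the test data are our own illustrative choices, not from the paper.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection of x onto the closed ball B(center, radius)."""
    d = np.asarray(x, float) - center
    n = np.linalg.norm(d)
    return np.asarray(x, float) if n <= radius else center + radius * d / n

def sfp_operator(x, A, proj_C, proj_Q, c):
    """The fixed point map T = P_C(I - c A^T (I - P_Q) A) of SFP (2)."""
    Ax = A @ x
    return proj_C(x - c * A.T @ (Ax - proj_Q(Ax)))

# Illustration: H_1 = H_2 = R^2, A = I, C = Q = the unit ball, c = 0.5.
A = np.eye(2)
pc = lambda x: proj_ball(x, np.zeros(2), 1.0)
pq = pc
T = lambda x: sfp_operator(x, A, pc, pq, 0.5)
```

Here any point of C ∩ A^{-1}(Q) is a fixed point of T; e.g., one application of T maps (3, 0) to the solution (1, 0), which T then leaves unchanged.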
In [13], we see that x ∈ C solves (4) if and only if it solves the fixed point equation

x = P_C(I − λF)x, λ > 0. (5)

Moreover, it is well known that if F is k-Lipschitz continuous and η-strongly monotone, then VIP (4) has a unique solution (see, e.g., [14]).
The SFP and the VIP include several problems as special cases (see [15,16]); indeed, the convex linear inverse problem and the split equality problem are special cases of the SFP, and the zero point problem and the minimization problem are special cases of the VIP. Jung [17] studied the common solution of the variational inequality problem and the split feasibility problem: find a point

x* ∈ Γ such that 〈Fx*, x − x*〉 ≥ 0, for all x ∈ Γ, (6)

where Γ is the solution set of SFP (2) and F: H_1 → H_1 is an η-strongly monotone and k-Lipschitz continuous mapping. After that, for solving problem (6), Buong [2] considered the following algorithms, which were proposed in [14,18], respectively:

x_{n+1} = (I − t_n μF)Tx_n, n ≥ 0, (7)

x_{n+1} = α_n x_n + (1 − α_n)(I − t_n μF)Tx_n, n ≥ 0, (8)

where T = P_C(I − cA*(I − P_Q)A), under the following conditions:

(C1) t_n ∈ (0, 1), t_n → 0 as n → ∞, and Σ_{n=1}^∞ t_n = ∞.
(C2) 0 < liminf_{n→∞} α_n ≤ limsup_{n→∞} α_n < 1.
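As a runnable illustration of the one-step method (8), here is a minimal sketch of our own: T is taken as the projection onto the unit ball, F(x) = x (which is 1-strongly monotone and 1-Lipschitz), and the parameter choices t_n = 1/(n + 1) and α_n = 0.5 satisfy (C1) and (C2); none of these concrete choices come from the paper.

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def algorithm_8(x0, T, F, mu=1.0, n_iter=500):
    """One-step method (8): x_{n+1} = a_n x_n + (1 - a_n)(I - t_n mu F) T x_n,
    with t_n = 1/(n + 1) and a_n = 0.5."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        t, a = 1.0 / (n + 1), 0.5
        Tx = T(x)
        x = a * x + (1 - a) * (Tx - t * mu * F(Tx))
    return x

# The unique solution of VIP (4) with F(x) = x over the unit ball is x* = 0;
# the iterates approach it slowly, since t_n -> 0 with divergent sum.
x_approx = algorithm_8([2.0, 3.0], proj_unit_ball, lambda x: x)
```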
Moreover, Buong [2] considered the sequence x_n generated by the following algorithm, which converges weakly to a solution of MSSFP (1):

x_{n+1} = P_1(I − cA*(I − P_2)A)x_n, n ≥ 0, (9)

where P_1 = P_{C_1} ··· P_{C_N} and P_2 = P_{Q_1} ··· P_{Q_M}, or P_1 = Σ_{i=1}^N α_i P_{C_i} and P_2 = Σ_{j=1}^M β_j P_{Q_j}, in which α_i and β_j, for 1 ≤ i ≤ N and 1 ≤ j ≤ M, are positive real numbers such that Σ_{i=1}^N α_i = Σ_{j=1}^M β_j = 1. Motivated by the aforementioned works, we establish an iterative algorithm by combining algorithms (7) and (8) for finding the solution of problem (6), and we prove the strong convergence of the sequence generated by our iterative algorithm to the solution of problem (6) in the setting of Hilbert spaces. Moreover, we propose iterative algorithms for finding the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we also give numerical examples illustrating our algorithms.

Preliminaries
In order to establish our results, we now recall the following definitions and preliminary results that will be used in the sequel. Throughout this section, let C be a nonempty closed convex subset of a real Hilbert space H with inner product 〈·, ·〉 and norm ‖·‖.
It is known [5] that the metric projection P_C: H → C is firmly nonexpansive and (1/2)-averaged.
We collect some basic properties of averaged mappings in the following results.
Lemma 1 (see [16]). We have the following:
(i) The composite of finitely many averaged mappings is averaged.
(ii) If the mappings S_1, …, S_N are averaged and have a common fixed point, then ∩_{i=1}^N Fix(S_i) = Fix(S_1 ··· S_N).

Proposition 1 (see [19]). Let D be a nonempty subset of H, m ≥ 2 be an integer, and ϕ: (0, 1)^m → (0, 1) be defined as in [19].

The following properties of nonexpansive mappings are very convenient and helpful to use.
Lemma 2 (see [20]).

Proposition 2 (see [19]). Let C be a nonempty subset of H, and let {T_i}_{i∈I} be a finite family of nonexpansive mappings from C to H. Assume that {α_i}_{i∈I} ⊂ (0, 1) and {δ_i}_{i∈I} ⊂ (0, 1] are such that Σ_{i∈I} δ_i = 1. Suppose that, for every i ∈ I, T_i is α_i-averaged; then T = Σ_{i∈I} δ_i T_i is α-averaged, where α = Σ_{i∈I} δ_i α_i.

The following results play a crucial role in the next section.
Theorem 1. Then, the sequence x_n defined by the following algorithm converges strongly to the unique solution x* of the variational inequality (4) under the following conditions: …

Theorem 2 (see [22]). Let … be as in Theorem 1. Then, the sequence x_n defined by the following algorithm: … converges strongly to the unique solution x* of variational inequality (4).

Main Results
In this section, we consider the following iterative algorithm, obtained by combining Yamada's hybrid steepest descent method [14] and Wang's algorithm [18], for solving problem (6):

y_n = (1 − α_n)x_n + α_n(I − t_n μF)Tx_n,
x_{n+1} = (I − t_n μF)Ty_n, ∀n ≥ 1, (15)

where T = P_C(I − cA*(I − P_Q)A). If we set α_n = 0 for all n ∈ N, then (15) reduces to (7), studied by Buong [2]. On the other hand, in the Numerical Example section, we present an example illustrating that the two-step method (15) is more efficient than the one-step method (8) studied by Buong [2]: the sequence generated by (15) requires fewer iterations and converges faster than the sequence generated by (8). Throughout our results, unless otherwise stated, we assume that H_1 and H_2 are two real Hilbert spaces and A: H_1 → H_2 is a bounded linear mapping. Let F be an η-strongly monotone and k-Lipschitz continuous mapping on H_1 with some positive constants η and k. Assume that μ ∈ (0, 2η/k²) is a fixed number.

Theorem 3. Let C and Q be two closed convex subsets in H_1 and H_2, respectively. Then, as n → ∞, the sequence x_n defined by (15), where the sequences t_n and α_n satisfy conditions (C1) and (C2), respectively, converges strongly to the solution of (6).
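Under the same toy setting used for the one-step method earlier (F(x) = x, T the projection onto the unit ball, t_n = 1/(n + 1), α_n = 0.5 — illustrative choices of ours, not from the paper), the two-step method (15) can be sketched as:

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def algorithm_15(x0, T, F, mu=1.0, n_iter=500):
    """Two-step method (15):
       y_n     = (1 - a_n) x_n + a_n (I - t_n mu F) T x_n,
       x_{n+1} = (I - t_n mu F) T y_n,
    with t_n = 1/(n + 1) and a_n = 0.5."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        t, a = 1.0 / (n + 1), 0.5
        Tx = T(x)
        y = (1 - a) * x + a * (Tx - t * mu * F(Tx))
        Ty = T(y)
        x = Ty - t * mu * F(Ty)
    return x

# The unique solution of VIP (4) with F(x) = x over the unit ball is x* = 0.
x_approx = algorithm_15([2.0, 3.0], proj_unit_ball, lambda x: x)
```

In this toy run the two-step iterates contract toward x* = 0 noticeably faster than those of the one-step method (8), which is consistent with the comparison reported in the Numerical Example section.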
Proof. Since (1 − λ)I + λS and I − t_n μF are nonexpansive, (I − t_n μF)T is also nonexpansive. Therefore, the strong convergence of (15) to the element x* in the solution set of (6) follows from Theorem 2. □
In [23], Miao and Li showed the weak convergence of the sequence x_n to an element of Fix(T), where x_n is generated by the following algorithm:

y_n = (1 − β_n)x_n + β_n(I − t_n μF)Tx_n,
x_{n+1} = (1 − α_n)x_n + α_n(I − t_n μF)Ty_n, ∀n ≥ 1, (17)

in which t_n satisfies condition (C3): Σ_{n=1}^∞ t_n < +∞. Next, we show the strong convergence of (17) when t_n satisfies condition (C1).

Theorem 4. Let C and Q be two closed convex subsets in H_1 and H_2, respectively. Then, as n → ∞, the sequence x_n defined by (17), where the sequence t_n satisfies condition (C1) and β_n and α_n satisfy condition (C2), converges strongly to the solution of (6).
Proof. Since (I − t_n μF)T is nonexpansive, the strong convergence of (17) to the element x* in the solution set of (6) follows from Theorem 1. □

Moreover, we obtain the following results, which solve the common solution of the variational inequality problem and the multiple-sets split feasibility problem, i.e., find a point x* ∈ Ω such that

〈Fx*, x − x*〉 ≥ 0, for all x ∈ Ω, (19)

where Ω is the solution set of (1) and F: H_1 → H_1 is an η-strongly monotone and k-Lipschitz continuous mapping.
This problem has been studied in [2].

Theorem 5. Let C_i (i = 1, …, N) and Q_j (j = 1, …, M) be closed convex subsets in H_1 and H_2, respectively. Assume that c ∈ (0, 1/‖A‖²), t_n and α_n satisfy conditions (C1) and (C2), respectively, and the parameters δ_i and ζ_j satisfy the following conditions: … Then, as n → ∞, the sequence x_n, defined by

y_n = (1 − α_n)x_n + α_n(I − t_n μF)P_1(I − cA*(I − P_2)A)x_n,
x_{n+1} = (I − t_n μF)P_1(I − cA*(I − P_2)A)y_n, ∀n ≥ 1, (20)

with one of the following cases (A1)–(A4), in which each of P_1 and P_2 is taken either as the composition P_{C_1} ··· P_{C_N} (respectively, P_{Q_1} ··· P_{Q_M}) or as the convex combination Σ_{i=1}^N δ_i P_{C_i} (respectively, Σ_{j=1}^M ζ_j P_{Q_j}), converges to the element x* in the solution set of (19).
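The two admissible constructions of P_1 and P_2 (composition versus convex combination of the individual projections) can be sketched as follows; the interval projections at the end are purely illustrative data of our own.

```python
import numpy as np

def compose(projs):
    """Composition P = P_m ∘ ... ∘ P_1 of finitely many projections
    (averaged, by Lemma 1(i))."""
    def P(x):
        for p in projs:
            x = p(x)
        return x
    return P

def convex_combination(projs, weights):
    """Convex combination P = sum_i w_i P_i with w_i in (0, 1], sum w_i = 1
    ((1/2)-averaged when each P_i is a metric projection, by Proposition 2)."""
    def P(x):
        return sum(w * p(x) for w, p in zip(weights, projs))
    return P

# Illustration with projections onto intervals of the real line:
p1 = lambda x: float(np.clip(x, 0.0, 2.0))   # P_{C_1}, C_1 = [0, 2]
p2 = lambda x: float(np.clip(x, 1.0, 3.0))   # P_{C_2}, C_2 = [1, 3]
```

For x = 5, the composition yields p2(p1(5)) = 2, while the equal-weight convex combination yields 0.5·2 + 0.5·3 = 2.5; the two constructions generally produce different averaged operators with the same fixed point set on C_1 ∩ C_2.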
Proof. Let T = P_1(I − cA*(I − P_2)A). We will show that T is averaged.
If P_1 = Σ_{i=1}^N δ_i P_{C_i} and P_2 = Σ_{j=1}^M ζ_j P_{Q_j}, then by Proposition 2 and condition (a), we obtain that P_1 is (1/2)-averaged. From condition (b), and taking into account that each P_{Q_j}, j = 1, …, M, is nonexpansive, we see that P_2 is also nonexpansive. It follows from Lemma 2 that I − cA*(I − P_2)A is c‖A‖²-averaged. Thus, T is λ-averaged with λ = (1 + c‖A‖²)/2.
Since (1 − λ)I + λS and I − t_n μF are nonexpansive, (I − t_n μF)T is nonexpansive. Thus, the strong convergence of (20) to the element x* in the solution set of (19) follows from Theorem 2. □
Theorem 6. Let C_i, Q_j, c, t_n, δ_i, and ζ_j be as in Theorem 5. Then, as n → ∞, the sequence x_n, defined by

y_n = (1 − β_n)x_n + β_n(I − t_n μF)P_1(I − cA*(I − P_2)A)x_n,
x_{n+1} = (1 − α_n)x_n + α_n(I − t_n μF)P_1(I − cA*(I − P_2)A)y_n, ∀n ≥ 1, (23)

with one of the cases (A1)–(A4), converges strongly to an element in the solution set of (19).
Proof. Since (I − t_n μF)T is nonexpansive, the strong convergence of (23) to the element x* in the solution set of (19) follows from Theorem 1. □

Numerical Example
In this section, we present a numerical example comparing algorithm (8), given by Buong [2], with algorithm (15) (the new method) on the following test problem from [2]: find an element x* ∈ Ω such that 〈φ′(x*), x − x*〉 ≥ 0 for all x ∈ Ω, where φ is a convex function having a strongly monotone and Lipschitz continuous derivative φ′(x) on Euclidean space, the sets C_i = {x ∈ R^n : Σ_{k=1}^n a_i^k x_k ≤ b_i} are half-spaces with a_i^k, b_i ∈ (−∞, +∞) for 1 ≤ k ≤ n and 1 ≤ i ≤ N, and the sets Q_j are balls with centers a_j and radii R_j. For each algorithm, we set a_i = (1/i, −1), b_i = 0 for all i = 1, …, N, and a_j = (1/j, 0), R_j = 1 for all j = 1, …, M. Taking a = 0.5 and c = 0.3, the stopping criterion is defined by E_n = ‖x_{n+1} − x_n‖ < ε, where ε = 10^{−4} and 10^{−6}. The numerical results are listed in Table 1 with different initial points x_1, where n is the number of iterations and s is the CPU time in seconds. In Figures 1 and 2, we present graphs illustrating the number of iterations for both methods using the stopping criterion defined above with the different initial points shown in Table 1.

Example 2. Let …, φ, a, and A be as in Example 1. In the numerical experiment, we take the stopping criterion E_n < 10^{−4}. The numerical results are listed in Table 2 with different cases of P_1 and P_2. In Figures 3 and 4, we present graphs illustrating the number of iterations for all cases of P_1 and P_2 using the stopping criterion above with the different initial points appearing in Table 2. Moreover, Table 3 shows the effect of different choices of c.
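For readers who wish to reproduce experiments of this kind, the following sketch gives closed-form projections onto the half-spaces C_i and the balls Q_j of the test problem, using the data a_i = (1/i, −1), b_i = 0 and a_j = (1/j, 0), R_j = 1 stated above; the choice N = M = 3 is our own arbitrary illustration, not the paper's setting.

```python
import numpy as np

def proj_halfspace(a, b):
    """Projection onto the half-space {x : <a, x> <= b}."""
    a = np.asarray(a, dtype=float)
    def P(x):
        v = a @ x - b
        return np.asarray(x, float) if v <= 0 else x - (v / (a @ a)) * a
    return P

def proj_ball(center, radius):
    """Projection onto the closed ball {x : ||x - center|| <= radius}."""
    center = np.asarray(center, dtype=float)
    def P(x):
        d = np.asarray(x, float) - center
        n = np.linalg.norm(d)
        return np.asarray(x, float) if n <= radius else center + radius * d / n
    return P

# Data of the test problem (N half-spaces C_i, M balls Q_j):
N, M = 3, 3
PC = [proj_halfspace((1.0 / i, -1.0), 0.0) for i in range(1, N + 1)]  # C_i
PQ = [proj_ball((1.0 / j, 0.0), 1.0) for j in range(1, M + 1)]        # Q_j
```

These projections supply the building blocks P_{C_i} and P_{Q_j} for either the composition or the convex-combination choices of P_1 and P_2 in algorithms (20) and (23).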
Remark 2. We observe from the numerical results in Table 2 that algorithm (23) converges fastest when P_1 and P_2 satisfy (A4) and slowest when P_1 and P_2 satisfy (A3). Moreover, fewer iteration steps and less CPU time are required for convergence when c is chosen very small and close to zero.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest.