Several self-adaptive inertial projection algorithms for solving split variational inclusion problems

This paper analyzes the approximate solution of a split variational inclusion problem in the framework of infinite-dimensional Hilbert spaces. For this purpose, several inertial hybrid and shrinking projection algorithms are proposed under self-adaptive stepsizes which do not require knowledge of the norms of the given operators. Strong convergence properties of the proposed algorithms are obtained under mild constraints. Finally, an experimental application is given to illustrate the performance of the proposed methods by comparison with existing results.


Introduction
Inspired by the split variational inequality problem proposed by Censor et al. [1], Moudafi [2] introduced a more general form of this problem, namely the split monotone variational inclusion problem (for short, SMVIP). It is worth noting that an important special case of the split monotone variational inclusion problem is the split variational inclusion problem (for short, SVIP), which is to find a zero of a maximal monotone mapping in one space whose image under a given bounded linear transformation is a zero of another maximal monotone mapping in another space. Moreover, the split variational inclusion problem is also a generalized form of many problems, such as the split variational inequality problem, the split minimization problem, the split equilibrium problem, the split saddle point problem and the split feasibility problem; see, for instance, [2-6] and the references therein. As applications, these problems are widely applied to radiation therapy treatment planning, image recovery and signal recovery; for details, we refer to [7-9]. In SVIP, when the two spaces are the same and the given bounded linear operator is the identity mapping, SVIP is equivalent to the well-known common solution problem, i.e., the problem of finding a common solution of two variational inclusion problems. Naturally, common solution problems in other settings can be obtained, such as for the variational inequality problem, the minimization problem and the equilibrium problem. In general, the above common solution problems can be regarded as instances of the distinguished convex feasibility problem.
In particular, finding a zero of a maximal monotone mapping is known as the variational inclusion problem (for short, VIP), which is a special case of the SVIP. Furthermore, the resolvent mapping of the maximal monotone mapping is used to compute approximate solutions of the VIP. With the help of this resolvent mapping and the attention of a large number of scholars, the variational inclusion problem and the split variational inclusion problem have yielded quite a few remarkable results; see, e.g., [10-14], etc. On the other hand, based on the idea of the implicit discretization of a differential system of the second order in time, Alvarez and Attouch [15] introduced an inertial proximal point algorithm to approximate a solution of the VIP. Under the effect of the inertial technique, iterative sequences for the SVIP and other problems converge rapidly to approximate solutions of the corresponding problems, such as the split variational inclusion problem [4-6], the split common fixed point problem [7,16] and the monotone inclusion problem [17-19].
From the existing results on the split variational inclusion problem, we find that weak convergence is easy to obtain, while strong convergence is usually proved only with the aid of additional methods, such as the viscosity method, the Halpern method, the Mann-type method and the hybrid steepest descent method; for details, see [3,4,6,13]. Unfortunately, the stepsize sequences in these existing results often depend on the norm of the bounded linear operator. Hence, the work of this paper can be summarized in two aspects. The first is to construct some new inertial iterative algorithms that converge strongly to a solution of the SVIP. For this purpose, we consider two projection methods in our algorithms, namely hybrid projection [20] and shrinking projection [21]. The second is to design a new stepsize sequence which does not need prior knowledge of the norm of the bounded linear operator.
The rest of the article is outlined as follows. Section 2 introduces the split variational inclusion problem and some preliminaries. Several new iterative algorithms and their convergence theorems for the SVIP are proposed in Section 3. Theoretical applications to other mathematical problems are given in Section 4. Finally, in Section 5, the convergence behavior of the proposed algorithms is demonstrated by some applicable numerical examples.

Preliminaries
Throughout, the notations → and ⇀ stand for strong convergence and weak convergence, respectively. The symbol Fix(S) denotes the fixed point set of a mapping S, and ω_w(x_n) represents the set of weak cluster points of a sequence {x_n}. Let H be a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ‖·‖.

Lemma 2.1 [22,23] The resolvent mapping J^B_β of a maximal monotone mapping B with β > 0 is defined as J^B_β(x) = (I + βB)^{-1}(x) for all x ∈ H. The following properties associated with J^B_β hold:
(1) the mapping J^B_β is single-valued and firmly nonexpansive;
(2) Fix(J^B_β) = B^{-1}(0).

Definition 2.2 The notation P_C denotes the metric projection from H onto a nonempty closed convex subset C, that is, P_C x = argmin_{y ∈ C} ‖x − y‖ for all x ∈ H. Naturally, P_C enjoys the following properties:
(P1) ⟨x − P_C x, y − P_C x⟩ ≤ 0 for all x ∈ H and y ∈ C;
(P2) ‖P_C x − y‖² ≤ ‖x − y‖² − ‖x − P_C x‖² for all x ∈ H and y ∈ C.

Lemma 2.2 [23] Let C be a nonempty closed convex subset of H and S : C → C be a nonexpansive mapping with Fix(S) ≠ ∅. Then I − S is demiclosed at zero; that is, for any sequence {x_n} in C satisfying x_n ⇀ x and x_n − Sx_n → 0, we have x ∈ Fix(S).

Lemma 2.3 [24] Let C be a nonempty closed convex subset of H, let {x_n} be a sequence in H and let u ∈ H. If ω_w(x_n) ⊂ C and ‖x_n − u‖ ≤ ‖u − P_C u‖ for all n, then x_n → P_C u.


Several self-adaptive inertial hybrid and shrinking projection algorithms

Combining the inertial technique with projection methods, two types of projection algorithms are given for approximating a solution of the split variational inclusion problem (SVIP). Throughout, we assume that the following conditions are satisfied:
(C1) H_1, H_2 are two Hilbert spaces and A : H_1 → H_2 is a bounded linear operator with adjoint operator A*;
(C2) B_1 : H_1 → 2^{H_1} and B_2 : H_2 → 2^{H_2} are set-valued maximal monotone mappings, and the solution set Ω = {x ∈ B_1^{-1}(0) : Ax ∈ B_2^{-1}(0)} is nonempty.
Inertial hybrid projection algorithms and inertial shrinking projection algorithms are introduced below, and their strong convergence is guaranteed under appropriate parameter conditions. In particular, the self-adaptive stepsize is chosen as γ_n = σ_n ‖(I − J^{B_2}_{β_n})Az_n‖² / ‖A*(I − J^{B_2}_{β_n})Az_n‖² with σ_n ∈ (0, 2) whenever A*(I − J^{B_2}_{β_n})Az_n ≠ 0; otherwise, γ_n = 0.
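To make Lemma 2.1 concrete, the resolvent can be computed in closed form when B is a linear monotone operator. The following Python sketch is an illustration only (the dimension, the matrix M with B(x) = Mx, and β are assumptions made here): it evaluates J^B_β = (I + βB)^{-1} by solving a linear system and numerically checks the firmly nonexpansive inequality ‖Jx − Jy‖² ≤ ⟨Jx − Jy, x − y⟩.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
# B(x) = M x with M = C^T C positive semidefinite, hence maximal monotone on R^m
C = rng.standard_normal((m, m))
M = C.T @ C
beta = 1.0

def resolvent(x):
    # J_beta^B(x) = (I + beta*B)^{-1} x, computed by solving a linear system
    return np.linalg.solve(np.eye(m) + beta * M, x)

x, y = rng.standard_normal(m), rng.standard_normal(m)
Jx, Jy = resolvent(x), resolvent(y)
# firm nonexpansiveness: ||Jx - Jy||^2 <= <Jx - Jy, x - y>
lhs = np.dot(Jx - Jy, Jx - Jy)
rhs = np.dot(Jx - Jy, x - y)
print(lhs <= rhs + 1e-12)  # True
```

Solving the linear system rather than forming (I + βM)^{-1} explicitly is the standard numerically stable choice.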

The strong convergence of inertial hybrid projection algorithm
Algorithm 3.1 Given appropriate parameter sequences {α_n}, {β_n} and {γ_n}, for any x_0, x_1 ∈ H_1, the sequence {x_n} is constructed by the following iterative form:

z_n = x_n + α_n(x_n − x_{n−1}),
u_n = J^{B_1}_{β_n}(z_n − γ_n A*(I − J^{B_2}_{β_n})Az_n),
C_n = {v ∈ H_1 : ‖u_n − v‖ ≤ ‖z_n − v‖},
Q_n = {v ∈ H_1 : ⟨x_n − v, x_1 − x_n⟩ ≥ 0},
x_{n+1} = P_{C_n ∩ Q_n} x_1,

where γ_n is the self-adaptive stepsize with σ_n ∈ (0, 2).

Lemma 3.1 For any x̄ ∈ Ω, ‖u_n − x̄‖ ≤ ‖z_n − x̄‖; consequently, Ω ⊂ C_n for all n.

Proof For any x̄ ∈ Ω, we have x̄ ∈ B_1^{-1}(0) and Ax̄ ∈ B_2^{-1}(0). The claim follows from the firmly nonexpansive property of the mappings J^{B_1}_{β_n}, J^{B_2}_{β_n} and I − J^{B_2}_{β_n}, together with the choice of γ_n. □

Theorem 3.1 The sequence {x_n} generated by Algorithm 3.1 converges strongly to x* = P_Ω x_1.

Proof Firstly, we show that P_{C_n ∩ Q_n} is well defined. From the definitions of C_n and Q_n, it is obvious that the sets C_n and Q_n are closed and convex, which implies that P_{C_n ∩ Q_n} is well defined. For any p ∈ Ω, it follows from Lemma 3.1 that Ω ⊂ C_n. In addition, from the property of the metric projection and x_n = P_{C_{n−1} ∩ Q_{n−1}} x_1, we get ⟨x_n − p, x_1 − x_n⟩ ≥ 0, that is, Ω ⊂ Q_n.

Next, we show that the iterative sequence {x_n} is bounded and lim_{n→∞} ‖x_n − x_{n+1}‖ = 0. Since Ω is a nonempty closed convex set, there exists a unique point x* = P_Ω x_1 ∈ C_n ∩ Q_n; hence ‖x_{n+1} − x_1‖ ≤ ‖x* − x_1‖ and the sequence {x_n} is bounded. From the definition of Q_n and x_{n+1} = P_{C_n ∩ Q_n} x_1 ∈ Q_n, we get ‖x_n − x_1‖ ≤ ‖x_{n+1} − x_1‖. These indicate that lim_{n→∞} ‖x_1 − x_n‖ exists. Further, it follows from the property of the metric projection P_{Q_n} that ‖x_{n+1} − x_n‖² ≤ ‖x_{n+1} − x_1‖² − ‖x_n − x_1‖², which implies lim_{n→∞} ‖x_n − x_{n+1}‖ = 0.

Lastly, we prove that the sequence {x_n} converges strongly to x* = P_Ω x_1. From the boundedness of {x_n}, there exists a subsequence {x_{n_l}} of {x_n} converging weakly to some q. Since ‖z_n − x_n‖ = α_n ‖x_n − x_{n−1}‖ → 0, the sequence {z_n} is bounded and z_{n_l} ⇀ q. From (P2) and Algorithm 3.1, the sequence {u_n} is also bounded. Using Lemma 3.1 for any p ∈ Ω, together with the definition of u_n and the firmly nonexpansive property of J^{B_1}_{β_n}, we obtain lim_{n→∞} ‖(I − J^{B_2}_{β_n})Az_n‖ = 0. Since A is a bounded linear operator, we get Az_{n_l} ⇀ Aq. By Remark 2.1 and Lemma 2.2, it follows that Aq ∈ B_2^{-1}(0); arguing in the same way for J^{B_1}_{β_n}, we can also get q ∈ B_1^{-1}(0). In summary, we have ω_w(x_n) ⊂ Ω and ‖x_n − x_1‖ ≤ ‖x* − x_1‖. By virtue of Lemma 2.3, we obtain that {x_n} converges strongly to x* = P_Ω x_1. □
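A notable feature of the self-adaptive stepsize γ_n = σ_n ‖r_n‖² / ‖A* r_n‖² with r_n = (I − J^{B_2}_{β_n})Az_n is that the half-step z_n − γ_n A* r_n never moves farther from a solution, no matter how large ‖A‖ is; this is what frees the algorithms from knowing the operator norm. The Python sketch below checks this Fejér-type inequality numerically for an assumed linear maximal monotone B_2 (so that p = 0 is a solution) with a deliberately large operator; it is an illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
A = 50.0 * rng.standard_normal((m, m))   # deliberately huge ||A||: gamma_n never needs it
C2 = rng.standard_normal((m, m))
M2 = C2.T @ C2                           # B2(y) = M2 y, maximal monotone
beta, sigma = 1.0, 1.5

def J2(y):
    # resolvent of B2: (I + beta * M2)^{-1} y
    return np.linalg.solve(np.eye(m) + beta * M2, y)

ok = True
for _ in range(1000):
    z = rng.standard_normal(m)
    r = A @ z - J2(A @ z)                # r_n = (I - J^{B2}_beta) A z_n
    g = A.T @ r                          # A* r_n
    gg = np.dot(g, g)
    gamma = sigma * np.dot(r, r) / gg if gg > 1e-30 else 0.0
    # the step z - gamma*g is never farther from the solution p = 0 than z itself
    ok = ok and np.linalg.norm(z - gamma * g) <= np.linalg.norm(z) + 1e-9
print(ok)  # True
```

The inequality follows from the firm nonexpansiveness of I − J^{B_2}_β and σ ∈ (0, 2), which is exactly the mechanism the convergence proofs rely on.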

The strong convergence of inertial shrinking projection algorithms
Algorithm 3.2 Given appropriate parameter sequences {α_n}, {β_n} and {γ_n}, for any x_0, x_1 ∈ H_1, the sequence {x_n} is constructed by the following iterative process.

Algorithm 3.3 Given appropriate parameter sequences {α_n}, {β_n} and {γ_n}, for any x_0, x_1 ∈ H_1, the sequence {x_n} is constructed by the following iterative process.

Theorem 3.2 The sequence {x_n} generated by Algorithm 3.2 converges strongly to x* = P_Ω x_1.

Proof Firstly, it is obvious that the half space C_n (n ≥ 1) is closed and convex, so P_{C_n} is well defined. By Lemma 3.1, we can easily see that the solution set satisfies Ω ⊂ C_n. Using the nested structure of the sets {C_n}, x_n = P_{C_n} x_1, x_{n+1} = P_{C_{n+1}} x_1 and the property of the metric projection, we obtain for any p ∈ Ω that ‖x_n − x_1‖ ≤ ‖x_{n+1} − x_1‖ ≤ ‖p − x_1‖, that is, {x_n} is bounded. These imply that lim_{n→∞} ‖x_n − x_1‖ exists. Following the proof of Theorem 3.1, we also conclude that the sequence {x_n} converges strongly to x* = P_Ω x_1. □

Theorem 3.3 The sequence {x_n} generated by Algorithm 3.3 converges strongly to x* = P_Ω x_1.

Proof Similarly, we obtain that C_n (n ≥ 1) is closed and convex, P_{C_n} is well defined and Ω ⊂ C_n. Using the proofs of Theorems 3.1 and 3.2, we have that {x_n} converges strongly to x* = P_Ω x_1. □

Theoretical applications
In this section, we give several interesting special cases of the split variational inclusion problem (SVIP). At the same time, Algorithms 3.1, 3.2 and 3.3 are applied to these problems. Further, the same strong convergence property as in Theorems 3.1, 3.2 and 3.3 is obtained.

Split variational inequality problem
Let C and Q be nonempty closed convex subsets of Hilbert spaces H_1 and H_2, respectively. Let F : H_1 → H_1 and G : H_2 → H_2 be given operators and A : H_1 → H_2 be a bounded linear operator. The split variational inequality problem is to find a point x* ∈ C such that ⟨F(x*), y − x*⟩ ≥ 0 for all y ∈ C and Ax* ∈ Q satisfies ⟨G(Ax*), z − Ax*⟩ ≥ 0 for all z ∈ Q. Especially, when H_1 = H_2, F = G and A = I, the split variational inequality problem is transformed into the classical variational inequality problem, which is to find a point x* ∈ C such that ⟨F(x*), y − x*⟩ ≥ 0 for all y ∈ C. The solution set of the variational inequality problem is represented by VI(F,C). Then, the split variational inequality problem is formulated as finding x* ∈ VI(F,C) such that Ax* ∈ VI(G,Q). To solve the variational inequality problem, the normal cone N_C(x) of C at a point x ∈ C is defined as N_C(x) = {w ∈ H_1 : ⟨w, y − x⟩ ≤ 0, ∀y ∈ C}. Further, the set-valued mapping S_F related to the normal cone N_C(x) is defined by S_F(x) = F(x) + N_C(x) for x ∈ C and S_F(x) = ∅ otherwise. In this setting, if F is an α-inverse strongly monotone operator (i.e., for any x, z ∈ C, ⟨F(x) − F(z), x − z⟩ ≥ α‖F(x) − F(z)‖²), then S_F is a maximal monotone mapping. More importantly, x ∈ VI(F,C) if and only if 0 ∈ S_F(x). Consequently, let F and G be α-inverse strongly monotone operators and let the set-valued mappings S_F and S_G be associated with F and G, respectively. In SVIP, when B_1 = S_F and B_2 = S_G, we obtain the above split variational inequality problem.
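For intuition on VI(F,C), a minimal Python sketch (an illustration with an assumed affine operator F(x) = Mx + q, M symmetric positive definite, and a box constraint C; not one of the paper's algorithms) solves a variational inequality by the classical projected iteration y_{k+1} = P_C(y_k − λF(y_k)), whose fixed points are exactly the points of VI(F,C).

```python
import numpy as np

# F(x) = M x + q with M symmetric positive definite,
# hence F is (1/lambda_max(M))-inverse strongly monotone
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, -1.0])
lo, hi = np.zeros(2), np.ones(2)      # C = [0,1]^2, a closed convex box

def P_C(x):
    # metric projection onto the box: componentwise clipping
    return np.clip(x, lo, hi)

lam = 0.4   # any stepsize in (0, 2/lambda_max(M)) makes the iteration contractive
y = np.zeros(2)
for _ in range(500):
    y = P_C(y - lam * (M @ y + q))

# fixed-point residual: y solves VI(F, C) iff y = P_C(y - lam * F(y))
res = np.linalg.norm(y - P_C(y - lam * (M @ y + q)))
print(res < 1e-8)  # True
```

The fixed-point characterization used in the residual check is precisely the equivalence x ∈ VI(F,C) ⇔ 0 ∈ S_F(x) expressed through the projection.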

Split saddle point problem
Let X and Y be Hilbert spaces. A bifunction L : X × Y → R is called convex-concave if L(•, y) is convex for any y ∈ Y and L(x, •) is concave for any x ∈ X. The operator T_L is defined as T_L(x, y) = (∂_1 L(x, y), ∂_2 L(x, y)), where ∂_1 is the subdifferential of L with respect to x and ∂_2 is the subdifferential of −L with respect to y.
It is worth noting that T_L is maximal monotone if and only if L is closed and proper; for details, see [25].
Naturally, the zeros of T_L coincide with the saddle points of L. Therefore, for Hilbert spaces X_1, Y_1, X_2, Y_2, let A : X_1 × Y_1 → X_2 × Y_2 be a bounded linear operator with adjoint operator A*. Let L_1 : X_1 × Y_1 → R and L_2 : X_2 × Y_2 → R be closed proper convex-concave bifunctions. Then, the split saddle point problem is to find a point (x*, y*) ∈ X_1 × Y_1 such that (x*, y*) is a saddle point of L_1 and A(x*, y*) is a saddle point of L_2. In other words, when B_1 = T_{L_1} and B_2 = T_{L_2}, the split variational inclusion problem is reduced to the split saddle point problem.
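To make the operator T_L concrete, consider the hypothetical bilinear bifunction L(x, y) = xy on R × R (convex in x, concave in y), for which the construction gives T_L(x, y) = (y, −x). The short Python check below verifies numerically that this T_L is monotone and that its zero (0, 0) coincides with the saddle point of L; it is purely illustrative.

```python
import numpy as np

def T_L(x, y):
    # T_L(x, y) = (d/dx L(x, y), d/dy(-L)(x, y)) for L(x, y) = x * y
    return np.array([y, -x])

rng = np.random.default_rng(2)
ok = True
for _ in range(1000):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    # monotonicity test: <T_L(u) - T_L(v), u - v> >= 0
    gap = np.dot(T_L(*u) - T_L(*v), u - v)
    ok = ok and gap >= -1e-12
print(ok, np.allclose(T_L(0.0, 0.0), 0.0))  # True True
```

For this rotation-like operator the monotonicity gap is in fact exactly zero, the borderline case of monotonicity.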

Split minimization problem
Let H_1 and H_2 be Hilbert spaces. Let φ : H_1 → R and ψ : H_2 → R be lower semi-continuous convex functions and A : H_1 → H_2 be a bounded linear operator. The split minimization problem is to find a point x* ∈ H_1 such that x* minimizes φ and Ax* minimizes ψ. Recall that the proximal operator prox_{γφ} of φ with γ > 0 is given by prox_{γφ}(x) = argmin_{y ∈ H_1} {φ(y) + (1/(2γ))‖y − x‖²}. It is very important that prox_{γφ}(x) = (I + γ∂φ)^{-1}(x) = J^{∂φ}_γ(x). In addition, ∂φ is a maximal monotone mapping and prox_{γφ} is a firmly nonexpansive mapping. In view of this, when B_1 = ∂φ and B_2 = ∂ψ in SVIP, the split variational inclusion problem is transformed into the split minimization problem.
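The identity prox = (I + γ∂φ)^{-1} = J^{∂φ}_γ can be verified numerically. As an assumed concrete choice, take φ = |·| on R, whose proximal operator is the well-known soft-thresholding map; the Python sketch compares the closed form against a brute-force minimization of |y| + (1/(2γ))(y − x)².

```python
import numpy as np

gamma = 0.3

def prox_l1(x, g):
    # closed form of the prox of g*|.|: componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - g, 0.0)

def prox_by_minimization(x, g, grid=np.linspace(-5, 5, 200001)):
    # brute-force evaluation of argmin_y |y| + (1/(2g)) (y - x)^2, scalar case
    vals = np.abs(grid) + (grid - x) ** 2 / (2 * g)
    return grid[np.argmin(vals)]

x = np.array([1.2, -0.1, 0.05, -2.0])
closed = prox_l1(x, gamma)
brute = np.array([prox_by_minimization(t, gamma) for t in x])
print(np.allclose(closed, brute, atol=1e-4))  # True
```

The entries with |x| ≤ γ are mapped to zero, illustrating why this prox is central to sparse signal recovery, one of the applications cited in the introduction.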

Numerical example
In this section, a numerical example is provided to illustrate the effectiveness and convergence behavior of Algorithms 3.1, 3.2 and 3.3. All codes were written in Matlab R2018b and run on a Lenovo ideapad 720S with a 1.6 GHz Intel Core i5 processor and 8 GB of RAM. Our results are compared with the existing conclusions below. Firstly, let H_1 and H_2 be Hilbert spaces and A : H_1 → H_2 be a bounded linear operator with adjoint operator A*. Let B_1 : H_1 → 2^{H_1} and B_2 : H_2 → 2^{H_2} be two set-valued maximal monotone mappings. Many existing conclusions on the split variational inclusion problem have been proven in this setting, as follows.
where f : H_1 → H_1 is a contraction mapping, the iterative sequence {x_n} converges strongly to a point x* ∈ Ω.
where {θ_n} is a sequence in (0, 1) with lim_{n→∞} θ_n = 0. We use E_n = ‖x_n − x*‖ to measure the iteration error of all algorithms. The stopping condition is E_n < ε or a maximum of 300 iterations. First, choose ε = 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}; we test the convergence behavior of all algorithms under these different stopping conditions. The numerical results are shown in Table 1 and Figure 1. Second, Figure 2 describes the numerical behavior of all algorithms in different dimensions under the same stopping criterion ε = 10^{-4}.
Table 1 The number of termination iterations of all algorithms under different stopping criteria.

Remark 4.1 Through the above results, the split variational inclusion problem can be transformed into other problems, such as the split variational inequality problem, the split saddle point problem and the split minimization problem. Using the same algorithms and techniques as in Theorems 3.1, 3.2 and 3.3, the strong convergence property for these problems is obtained under the corresponding conditions in Subsections 4.1, 4.2 and 4.3.
The iterative sequence {x_n} converges strongly to a point x* ∈ Ω. Example 5.1 Assume that the entries of A, A_1, A_2 : R^m → R^m are created from a normal distribution with mean zero and unit variance. Let B_1 : R^m → R^m and B_2 : R^m → R^m be defined by B_1(x) = A_1* A_1 x and B_2(y) = A_2* A_2 y, respectively. Consider the problem of finding a point x̂ = (x̂_1, ..., x̂_m)^T ∈ R^m such that B_1(x̂) = (0, ..., 0)^T and B_2(Ax̂) = (0, ..., 0)^T. It is easy to see that the minimum norm solution of the above-mentioned problem is x* = (0, ..., 0)^T. Our parameter settings are as follows. In our Algorithms 3.1-3.3, set α_n = 0.5, β_n = 1 and σ_n = 1.5. Take β = 1, δ_n = 1/(n+1) and γ_n = 1.5/‖A*A‖ in Algorithm 4.4 proposed by Byrne et al. [3].
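For reference, a Python (NumPy) rendering of the data of Example 5.1 is sketched below. It omits the inertial term and the hybrid/shrinking projection steps of Algorithms 3.1-3.3 and simply iterates the resolvent step x_{n+1} = J^{B_1}_β(x_n − γ_n A*(I − J^{B_2}_β)Ax_n) with the self-adaptive γ_n (β = 1, σ = 1.5 as in the example), so only the problem setup and the stepsize are rendered faithfully; it illustrates that the error E_n = ‖x_n − x*‖ decreases on this problem, not the paper's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5
# data drawn from a standard normal distribution, as in Example 5.1
A  = rng.standard_normal((m, m))
A1 = rng.standard_normal((m, m))
A2 = rng.standard_normal((m, m))
M1, M2 = A1.T @ A1, A2.T @ A2          # B1(x) = M1 x,  B2(y) = M2 y

def J(M, beta, x):
    # resolvent (I + beta*M)^{-1} x of the linear monotone mapping x -> M x
    return np.linalg.solve(np.eye(m) + beta * M, x)

beta, sigma = 1.0, 1.5
x = rng.standard_normal(m)
errs = [np.linalg.norm(x)]             # E_n = ||x_n - x*|| with x* = 0
for n in range(2000):
    r = A @ x - J(M2, beta, A @ x)     # (I - J^{B2}_beta) A x_n
    g = A.T @ r                        # A* (I - J^{B2}_beta) A x_n
    gg = np.dot(g, g)
    gamma = sigma * np.dot(r, r) / gg if gg > 1e-30 else 0.0
    x = J(M1, beta, x - gamma * g)
    errs.append(np.linalg.norm(x))

print(errs[-1] < errs[0])  # True
```

Since J^{B_1}_β is a strict linear contraction here and the self-adaptive half-step is nonexpansive toward x* = 0, the error sequence decreases monotonically, mirroring the Fejér-type estimates in the convergence proofs.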