An inertially constructed forward–backward splitting algorithm in Hilbert spaces

In this paper, we develop an iterative algorithm whose architecture combines a modified version of the forward–backward splitting algorithm with the hybrid shrinking projection algorithm. We provide theoretical results concerning weak and strong convergence of the proposed algorithm towards a common solution of the fixed point problem associated with a finite family of demicontractive operators, the split equilibrium problem and the monotone inclusion problem in Hilbert spaces. Moreover, we conduct a numerical experiment to show the efficiency of the proposed algorithm. As a consequence, our results improve various existing results in the current literature.


Introduction
The theory of mathematical optimization provides a quantitative optimal solution for various real-world problems emerging in the fields of engineering, medicine, economics, management, industry, and other branches of science. One of the main advantages of mathematical optimization is that it provides effective iterative algorithms together with the corresponding analysis of these algorithms. Moreover, the viability of such iterative algorithms is evaluated in terms of computational performance and complexity. As a consequence, the theory of mathematical optimization has not only emerged as an independent subject for solving real-world problems but also serves as an interdisciplinary bridge between various branches of the sciences.
Monotone operator theory is a fascinating field of research in nonlinear functional analysis and has found valuable applications in convex optimization, subgradient methods, partial differential equations, variational inequalities, signal and image processing, and evolution equations and inclusions; see, for instance, [4,12,14,30] and the references cited therein. It is noted that the convex optimization problem can be translated into the problem of finding a zero of a maximal monotone operator defined on a Hilbert space. On the other hand, the problem of finding a zero of the sum of two (maximal) monotone operators is of fundamental importance in convex optimization and variational analysis [23,27,33]. The forward-backward algorithm is prominent among the various splitting algorithms for finding a zero of the sum of two maximal monotone operators [23]. The class of splitting algorithms admits parallel computing architectures, thus reducing the complexity of the problems under consideration. Moreover, the forward-backward algorithm efficiently handles problems involving smooth and/or nonsmooth functions. It is worth mentioning that the forward-backward algorithm has been modified by employing the heavy ball method [28] for convex optimization problems.
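The classical forward-backward iteration described above can be sketched in a few lines. The scalar operators A(x) = 4x and B(x) = 3x below are illustrative choices (not tied to any particular result in this paper); B is (1/3)-cocoercive, so any step size m in (0, 2/3) is admissible.

```python
# Illustrative sketch of the forward-backward iteration
#   x_{k+1} = J_m(x_k - m*B(x_k)),   J_m = (Id + m*A)^{-1},
# for the inclusion 0 in (A + B)x with toy operators A(x) = 4x, B(x) = 3x on R.
def resolvent_A(x, m):
    # for A(x) = 4x, the resolvent (Id + m*A)^{-1} is x / (1 + 4m)
    return x / (1.0 + 4.0 * m)

def B(x):
    # B is monotone and 3-Lipschitz, hence (1/3)-cocoercive on R
    return 3.0 * x

def forward_backward(x0, m=0.1, iters=200):
    x = x0
    for _ in range(iters):
        # forward (explicit) step on B, then backward (resolvent) step on A
        x = resolvent_A(x - m * B(x), m)
    return x

print(forward_backward(5.0))  # approaches 0, the unique zero of A + B
```

Each iteration here contracts the iterate by the factor 1/2, so convergence to the zero of A + B is geometric in this toy case.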
Fixed point theory has been studied extensively in the current literature owing to its rich abstract structures. These structures and the associated tools elegantly handle various mathematical problems from areas such as control theory, game theory, mathematical economics, image recovery, and signal and image processing. In 2015, the problem of finding a common solution of the zero point problem and the fixed point problem was studied by Takahashi et al. [32]. It is well known that the class of demicontractive operators [15] includes various classes of nonlinear operators and exhibits comparatively powerful applications. It is therefore natural to study fixed point problems associated with the class of demicontractive operators.
The theory of equilibrium problems is a systematic approach to the study of a diverse range of problems arising in physics, optimization, variational inequalities, transportation, economics, networks and noncooperative games; see, for example, [5, 11–13] and the references cited therein. The classical theory of equilibrium problems has been generalized in several interesting ways to solve real-world problems. In 2012, Censor et al. [9] proposed the split variational inequality problem (SVIP), which aims to solve a pair of variational inequality problems in such a way that the solution of one variational inequality problem, under a given bounded linear operator, solves the other.
In 2011, Moudafi [26] introduced the concept of split monotone variational inclusions (SMVIP), which includes, as special cases, the split variational inequality problem, the split common fixed point problem, the split zeros problem, the split equilibrium problem (SEP) and the split feasibility problem. These problems have already been studied and successfully employed as models in intensity-modulated radiation therapy treatment planning; see [6,8]. This formalism is also at the core of the modeling of many inverse problems arising in phase retrieval and other real-world settings, for instance, in sensor networks, computerized tomography and data compression; see, for example, [10,12]. Several methods have been proposed and analyzed for solving the SEP and generalized SEP in Hilbert spaces; see, for example, [2, 3, 16–22] and the references cited therein.
Inspired and motivated by the above-mentioned results and the ongoing research in this direction, we aim to employ the modified inertial forward-backward algorithm to find a common solution of the fixed point problem associated with a finite family of demicontractive operators, the SEP and the monotone inclusion problem in Hilbert spaces. The rest of the paper is organized as follows: Section 2 contains preliminary concepts and results regarding fixed point theory, equilibrium problem theory and monotone operator theory. Section 3 comprises weak and strong convergence results for the proposed algorithm. Section 4 deals with the efficiency of the proposed algorithm through a numerical experiment, together with theoretical applications to the split feasibility problem, the split variational inequality problem and the split minimization problem.

Preliminaries
In this section, we recall concepts and results regarding fixed point theory, equilibrium problem theory and monotone operator theory. Throughout this paper, let H₁ be a real Hilbert space with inner product ⟨·, ·⟩ and associated norm ∥·∥. The symbols ⇀ and → denote weak and strong convergence, respectively. An operator P_C is said to be the metric projection of H₁ onto a nonempty, closed and convex subset C if, for every x ∈ H₁, there exists a unique nearest point in C, denoted by P_C x, such that ∥x − P_C x∥ ≤ ∥x − y∥ for all y ∈ C. It is noted that P_C is a firmly nonexpansive operator and P_C x is characterized by the following property: ⟨x − P_C x, P_C x − y⟩ ≥ 0 for all x ∈ H₁ and y ∈ C.
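The variational characterization of the metric projection is easy to check numerically. The interval C = [−1, 1] below is an assumed toy set, chosen because its projection has a simple closed form.

```python
# Minimal numerical check (illustrative, not from the paper) of the metric
# projection onto C = [-1, 1] in R and its variational characterization
#   <x - P_C x, P_C x - y> >= 0   for all y in C.
def proj_C(x, lo=-1.0, hi=1.0):
    # metric projection of R onto the closed convex interval [lo, hi]
    return min(max(x, lo), hi)

x = 3.5
px = proj_C(x)
# the characterization must hold for every y in C; sample a grid of y values
ok = all((x - px) * (px - y) >= 0 for y in [-1.0, -0.5, 0.0, 0.5, 1.0])
print(px, ok)
```

In one dimension the inner product reduces to an ordinary product, which makes the inequality transparent: the residual x − P_C x always points away from C.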
Next, we recall the definitions of nonexpansive and related operators. An operator T : C → C is said to be:

nonexpansive if ∥Tx − Ty∥ ≤ ∥x − y∥ for all x, y ∈ C;

firmly nonexpansive if ∥Tx − Ty∥² ≤ ⟨x − y, Tx − Ty⟩ for all x, y ∈ C.

It follows immediately that a firmly nonexpansive operator is nonexpansive. We now define the concept of SEP. Let 𝒜 : H₁ → H₂ be a bounded linear operator. Let F₁ : C × C → R and F₂ : Q × Q → R be two bifunctions; then the SEP is to find x* ∈ C such that F₁(x*, x) ≥ 0 for all x ∈ C, (1) and y* = 𝒜x* ∈ Q such that F₂(y*, y) ≥ 0 for all y ∈ Q. (2)
The solution set of the SEP (1) and (2) is denoted by Γ := {x* ∈ EP(F₁) : 𝒜x* ∈ EP(F₂)}, where EP(F₁) denotes the solution set of the equilibrium problem for F₁. Now, we recall some important concepts related to monotone operator theory [4]. Let A : H₁ → 2^{H₁} be a set-valued operator. We denote its domain, range, graph and set of zeros by Dom A = {x ∈ H₁ : Ax ≠ ∅}, Ran A = {u ∈ H₁ : u ∈ Ax for some x ∈ Dom A}, Graph A = {(x, u) ∈ H₁ × H₁ : u ∈ Ax} and A⁻¹(0) = {x ∈ H₁ : 0 ∈ Ax}, respectively. The operator A is said to be monotone if ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ Graph A. Moreover, A is said to be maximal monotone if its graph is not strictly contained in the graph of any other monotone operator on H₁. A well-known example of a maximal monotone operator is the subgradient operator ∂f of a proper, lower semicontinuous convex function f : H₁ → R ∪ {+∞}. For a maximal monotone operator A, the associated resolvent operator with index m > 0 is defined as J_m := (Id + mA)⁻¹, where Id denotes the identity operator.
It is well known that the resolvent operator J_m is well-defined everywhere on the Hilbert space H₁. Furthermore, J_m is single-valued and firmly nonexpansive. Finally, x ∈ A⁻¹(0) if and only if x = J_m(x).
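The fixed-point characterization x ∈ A⁻¹(0) ⇔ x = J_m(x) can be illustrated with an assumed example: A taken as the subdifferential of the absolute value, whose resolvent is the well-known soft-thresholding operator.

```python
# Illustrative example (assumption: A = subdifferential of |.| on R, a
# maximal monotone operator).  Its resolvent J_m = (Id + m*A)^{-1} is the
# soft-thresholding operator, and x is a zero of A iff x = J_m(x).
def J(x, m):
    # resolvent of the subdifferential of f(x) = |x| with index m > 0
    return (abs(x) - m) * (1 if x > 0 else -1) if abs(x) > m else 0.0

# 0 is the unique zero of A (since the subdifferential of |.| at 0 is [-1, 1]),
# and indeed 0 is a fixed point of J_m, while nonzero points are not:
print(J(0.0, 0.5))   # 0.0
print(J(3.0, 0.5))   # 2.5, so 3 is not a zero of A
```

Soft thresholding is single-valued and firmly nonexpansive, matching the general properties of resolvents recalled above.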
Let f : H₁ → R ∪ {+∞} be a proper, convex and lower semicontinuous function and let g : H₁ → R be a convex, differentiable and Lipschitz continuous function; then the convex minimization problem for f and g is to find x* ∈ H₁ such that f(x*) + g(x*) ≤ f(x) + g(x) for all x ∈ H₁. An operator B : H₁ → H₁ is said to be γ-inverse strongly monotone (γ-ism) for some γ > 0 if ⟨x − y, Bx − By⟩ ≥ γ∥Bx − By∥² for all x, y ∈ H₁.
A γ-ism operator is also called a γ-cocoercive operator. Moreover, a γ-ism operator is (1/γ)-Lipschitz continuous. In connection with the problem (4), the monotone inclusion problem with respect to a maximal monotone operator A and an arbitrary operator B is to find x* ∈ H₁ such that 0 ∈ (A + B)x*. In the sequel, we list some important results, in the form of lemmas, for the convergence analysis.

Lemma 2.1 ([4]) Let C be a nonempty, closed and convex subset of a real Hilbert space H 1 .
Let T : C → C be a nonexpansive operator. Then T is demiclosed at zero; that is, for any sequence (x_k) in C that converges weakly to x such that (Id − T)x_k converges strongly to zero, we have x ∈ Fix(T). Lemma 2.2 Let x, y ∈ H₁ and β ∈ R; then the following relations hold: ∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩ and ∥βx + (1 − β)y∥² = β∥x∥² + (1 − β)∥y∥² − β(1 − β)∥x − y∥².

Lemma 2.5 ([24]) Let E be a uniformly convex and q-uniformly smooth Banach space for some q ∈ (0, 2]. Let A : E → 2^E be an m-accretive operator and let B : E → E be an α-inverse strongly accretive operator. Then, given r > 0, there exists a continuous, strictly increasing and convex function ϕ_q : [0, ∞) → [0, ∞) with ϕ_q(0) = 0, where k_q is the q-uniform smoothness coefficient of E.

Lemma 2.7 ([25]) Let C be a nonempty, closed and convex subset of a real Hilbert space H₁. For every x, y ∈ H₁ and a ∈ R, the set {v ∈ C : ∥y − v∥² ≤ ∥x − v∥² + a} is closed and convex.

Assumption 2.8 Let C be a nonempty, closed and convex subset of a Hilbert space H₁. Let F₁ : C × C → R be a bifunction satisfying the following conditions: (A1) F₁(x, x) = 0 for all x ∈ C; (A2) F₁ is monotone, i.e., F₁(x, y) + F₁(y, x) ≤ 0 for all x, y ∈ C; (A3) for each x, y, z ∈ C, lim sup_{t↓0} F₁(tz + (1 − t)x, y) ≤ F₁(x, y); (A4) for each x ∈ C, the function y ↦ F₁(x, y) is convex and lower semicontinuous. Moreover, for r > 0, define an operator T^{F₁}_r : H₁ → C by T^{F₁}_r(x) = {z ∈ C : F₁(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C} for all x ∈ H₁. Then we have the following observations: T^{F₁}_r is single-valued and firmly nonexpansive, Fix(T^{F₁}_r) = EP(F₁), and EP(F₁) is closed and convex. It is noted that if F₂ : Q × Q → R is a bifunction satisfying Assumption 2.8, where Q is a nonempty, closed and convex subset of a Hilbert space H₂, then, for each s > 0 and w ∈ H₂, we define the operator T^{F₂}_s(w) = {z ∈ Q : F₂(z, y) + (1/s)⟨y − z, z − w⟩ ≥ 0 for all y ∈ Q}. Similarly, T^{F₂}_s is single-valued and firmly nonexpansive, Fix(T^{F₂}_s) = EP(F₂), and EP(F₂) is closed and convex.
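The defining inequality of the equilibrium resolvent admits a closed form for simple bifunctions. The choice F(x, y) = 2x(y − x) on R below is purely illustrative (it satisfies the conditions of Assumption 2.8): solving F(z, y) + (1/r)(y − z)(z − x) ≥ 0 for all y forces 2z + (z − x)/r = 0, i.e. z = x/(1 + 2r).

```python
# Sketch under an assumed bifunction F(x, y) = 2x(y - x) on R.
# The resolvent z = T_r^F(x) must satisfy, for every y,
#   F(z, y) + (1/r)*(y - z)*(z - x) >= 0,
# which forces 2z + (z - x)/r = 0 and yields the closed form below.
def T_F(x, r):
    # equilibrium resolvent T_r^F for F(x, y) = 2x(y - x)
    return x / (1.0 + 2.0 * r)

# verify the defining inequality on a sample of test points y
x, r = 4.0, 0.5
z = T_F(x, r)
ok = all(2 * z * (y - z) + (1 / r) * (y - z) * (z - x) >= -1e-12
         for y in [-3, -1, 0, 1, 2, 5])
print(z, ok)
```

For this bifunction the two terms of the inequality cancel exactly at z = x/(1 + 2r), which also makes it evident that the resolvent is single-valued here.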

Algorithm and convergence analysis
In this section, we present an approach to the convergence analysis of the inertial forward-backward splitting method for solving the fixed point problem associated with a finite family of demicontractive operators, the SEP and the monotone inclusion problem in Hilbert spaces. First, we set out the hypotheses required in the sequel. Let H₁, H₂ be two real Hilbert spaces and let C ⊆ H₁, Q ⊆ H₂ be nonempty, closed and convex subsets of H₁ and H₂, respectively. We consider the following hypotheses: (H1) Let F₁ : C × C → R and F₂ : Q × Q → R be two bifunctions satisfying Assumption 2.8 such that F₂ is upper semicontinuous;
(H2) let 𝒜 : H₁ → H₂ be a bounded linear operator with adjoint 𝒜*; (H3) let A : H₁ → 2^{H₁} be a maximal monotone operator and let B : H₁ → H₁ be a γ-ism operator.

Theorem 3.1 If Ω ≠ ∅, where Ω denotes the set of common solutions of the fixed point, split equilibrium and monotone inclusion problems, then, under hypotheses (H1)–(H5), the sequence (x_k) generated by Algorithm 1 converges weakly to an element x̄ ∈ Ω, provided the following conditions hold:

Proof First we show that 𝒜*(Id − T^{F₂}_{u_k})𝒜 is a (1/L)-ism operator, where L = ∥𝒜∥². For this, we utilize the firm nonexpansiveness of T^{F₂}_{u_k}, which implies that Id − T^{F₂}_{u_k} is 1-ism for all x, y ∈ H₁. So we observe that 𝒜*(Id − T^{F₂}_{u_k})𝒜 is (1/L)-ism. Moreover, Id − δ𝒜*(Id − T^{F₂}_{u_k})𝒜 is nonexpansive provided δ ∈ (0, 1/L). We divide the rest of the proof into the following steps: Step 1. Show that lim_{k→∞} ∥x_k − x̄∥ exists for every x̄ ∈ Ω.
Step 2. Show that x_k ⇀ x̄ ∈ (A + B)⁻¹(0). Since x̄ = J_{m_k} x̄, it follows from Lemma 2.2 and Lemma 2.5 that As lim_{k→∞} ∥x_k − x̄∥ exists, utilizing (C1), (C4), (C5) and (15) we get Also, from (15), we get Using (16), (17) and the triangle inequality, Since lim inf_{k→∞} m_k > 0, there exists m > 0 such that m_k ≥ m for all k ≥ 0. It follows from Lemma 2.4(b) that Now, utilizing (18), the above estimate implies that As a consequence, we have Again, from (15), we have Rearranging the above estimate and using (C1), (C2), we get This implies that Again, by Lemma 2.2, Lemma 2.6 and (11), we have Rearranging the above estimate, we have Since δ(Lδ − 1) < 0, it follows from (C1) and (23) that Note that T^{F₁}_{u_k} is firmly nonexpansive and Id − δ𝒜*(Id − T^{F₂}_{u_k})𝒜 is nonexpansive; therefore we have Therefore, we have Utilizing (24) and (C2), we have From the definition of (b_k) and (27), we have By the definition of (b_k) and (C1), we have Since (x_k) is bounded and H₁ is reflexive, ν_ω(x_k) = {x ∈ H₁ : x_{k_n} ⇀ x for some subsequence (x_{k_n}) ⊂ (x_k)} is nonempty. Let x̂ ∈ ν_ω(x_k) be arbitrary. Then there exists a subsequence (x_{k_n}) ⊂ (x_k) converging weakly to x̂. Let x̃ ∈ ν_ω(x_k) and (x_{k_m}) ⊂ (x_k) be such that x_{k_m} ⇀ x̃. From (24), we also have y_{k_n} ⇀ x̂ and y_{k_m} ⇀ x̃. Since T^{A,B}_m is nonexpansive, from (19) and Lemma 2.1 we have x̂, x̃ ∈ (A + B)⁻¹(0). By applying Lemma 2.3, we obtain x̂ = x̃.
Step 3. Show that x̂ ∈ EP(F₁) and 𝒜x̂ ∈ EP(F₂). Let y_t = ty + (1 − t)x̂ for some 0 < t ≤ 1 and y ∈ H₁. Since x̂ ∈ H₁, consequently y_t ∈ H₁ and hence F₁(y_t, x̂) ≤ 0. Using Assumption 2.8 ((A1) and (A4)), it follows that This implies that Letting t → 0, we have Thus, x̂ ∈ EP(F₁). Next, we show that 𝒜x̂ ∈ EP(F₂). Since 𝒜 is a bounded linear operator, we have 𝒜x_{k_n} ⇀ 𝒜x̂. It follows from (26) that Now, from Lemma 2.7 we have for all y ∈ H₂. Since F₂ is upper semicontinuous in the first argument, from (31) we have for all y ∈ H₂. This implies that 𝒜x̂ ∈ EP(F₂). Therefore, x̂ ∈ Ω.
Step 4. From (21) and by using the demiclosedness principle for S_i (it is evident that x_{k_n} ⇀ x̂ and lim_{k→∞} (Id − S_i)x_{k_n} = 0), we have x̂ ∈ ∩_{i=1}^N Fix(S_i), and hence x̂ ∈ Ω. This completes the proof.

Now we establish strong convergence results of Algorithm 2. Recall that Algorithm 2 stops at step k, with x_k a solution of the problem, if y_k = b_k = z_k = w_k = x_k; otherwise it sets x_{k+1} = P_{C_{k+1}} x_1 for all k ≥ 1, updates k := k + 1, and returns to Step 1.

Proof The proof is divided into the following steps: Step 1. Show that the sequence (x_k) defined in Algorithm 2 is well-defined. We know that (A + B)⁻¹(0) and Fix(S_i) are closed and convex by Lemma 2.4 and Lemma 2.9. Moreover, from Lemma 2.7 we see that C_{k+1} is closed and convex for each k ≥ 1. Hence the projection P_{C_{k+1}} x_1 is well-defined. For any x̄ ∈ Ω, it follows from Algorithm 2 and the estimates (6), (12) and (13) that It follows from the estimate (32) that Ω ⊂ C_{k+1}. Summing up these facts, we conclude that C_{k+1} is nonempty, closed and convex for all k ≥ 1, and hence the sequence (x_k) is well-defined.
Step 2. Show that lim_{k→∞} ∥x_k − x_1∥ exists. Since Ω is a nonempty, closed and convex subset of H₁, there exists a unique x* ∈ Ω such that x* = P_Ω x_1. Since x_{k+1} = P_{C_{k+1}} x_1, we have ∥x_{k+1} − x_1∥ ≤ ∥x̄ − x_1∥ for all x̄ ∈ Ω ⊂ C_{k+1}.
In particular, ∥x_{k+1} − x_1∥ ≤ ∥P_Ω x_1 − x_1∥. This proves that the sequence (x_k) is bounded. On the other hand, from x_k = P_{C_k} x_1 and x_{k+1} = P_{C_{k+1}} x_1 ∈ C_{k+1} ⊂ C_k, we get This implies that (∥x_k − x_1∥) is nondecreasing, and hence lim_{k→∞} ∥x_k − x_1∥ exists.
Step 3. Show that x̄ ∈ (A + B)⁻¹(0). In order to proceed, we first calculate the following estimates, which are required in the sequel: Taking lim sup on both sides of the above estimate and utilizing (33), we have lim sup_{k→∞} ∥x_{k+1} − x_k∥² = 0. That is, Note that x_{k+1} ∈ C_{k+1}; therefore we have Utilizing (34) and (C1), the above estimate implies that From (34), (35) and the triangle inequality, Also, from Lemma 2.2 and (21), we have Rearranging the above estimate, we have The above estimate, by using (C1) and (36), implies that Making use of (37), we have the following estimate: Reasoning as above, utilizing the estimate (37), the estimate (38) implies that In a similar fashion, we have Reasoning as above (Theorem 3.1, Step 2), we have the desired result.
Step 5. Show that x̄ ∈ ∩_{i=1}^N Fix(S_i). See the proof of Step 4 in Theorem 3.1.
Step 6. Show that x̄ = P_Ω x_1. Note that x* = P_Ω x_1 ∈ Ω ⊂ C_{k+1}. Since x_{k+1} = P_{C_{k+1}} x_1 ∈ C_{k+1}, we have On the other hand, we have That is, Therefore, we conclude that lim_{k→∞} x_k = x̄ = P_Ω x_1. This completes the proof.
The following remark gives us a stopping criterion of Algorithm 2.
Remark 3.3 We remark here that condition (C1) is easily implemented in numerical computation, since the value of ∥x_k − x_{k−1}∥ is known before choosing θ_k. The inertial parameter θ_k can be taken as 0 ≤ θ_k ≤ θ̄_k, where θ̄_k = min{θ, ν_k/∥x_k − x_{k−1}∥} if x_k ≠ x_{k−1} and θ̄_k = θ otherwise, {ν_k} is a positive sequence such that Σ_{k=1}^∞ ν_k < ∞, and θ ∈ [0, 1).
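The parameter rule of Remark 3.3 can be sketched as follows; the symbols θ and ν_k follow our notation above, and the concrete values fed to the function are illustrative only.

```python
# Sketch of the inertial-parameter cap from Remark 3.3: choose
#   theta_k <= min(theta, nu_k / |x_k - x_{k-1}|)
# so that theta_k * |x_k - x_{k-1}| <= nu_k, a summable sequence.
def theta_bar(theta, nu_k, x_k, x_km1):
    # when the iterates coincide, any theta_k in [0, theta] is admissible
    diff = abs(x_k - x_km1)
    return theta if diff == 0 else min(theta, nu_k / diff)

# with theta = 0.9 and a summable choice such as nu_k = 1/k^2:
print(theta_bar(0.9, 1.0, 2.0, 2.0))   # no movement: full inertia allowed
print(theta_bar(0.9, 0.01, 5.0, 1.0))  # cap shrinks with nu_k
```

Because Σ ν_k < ∞, the accumulated inertial perturbation Σ θ_k ∥x_k − x_{k−1}∥ stays finite, which is what the convergence analysis requires.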

Numerical experiment and results
This section shows the effectiveness of Algorithm 2 through the following example.
Example 4.1 Let H₁ = H₂ = R be the set of all real numbers, with the inner product defined by ⟨x, y⟩ = xy for all x, y ∈ R and the usual induced norm |·|. Let F₁ : R × R → R be a bifunction defined as F₁(x, y) = 2x(y − x) and let F₂ : R × R → R be a bifunction defined as F₂(p, q) = p(q − p). For all x ∈ R, let the operators 𝒜, A, B : R → R be defined as 𝒜x = 3x, Ax = 4x and Bx = 3x, respectively. Let S_i : R → R be a finite family of demicontractive operators defined piecewise on x ∈ [0, ∞) and x ∈ (−∞, 0).
Then the sequence (x_k) generated by Algorithm 2 strongly converges to a point in Ω.
Proof It is easy to verify that the bifunctions F₁ and F₂ satisfy Assumption 2.8 and that F₂ is upper semicontinuous, with Ω = {0}. Moreover, 𝒜 is a bounded linear operator on R with adjoint operator 𝒜* such that ∥𝒜∥ = ∥𝒜*∥ = 3, A is a maximal monotone operator, and B is a monotone and γ-Lipschitz operator for some γ > 0 with (A + B)⁻¹(0) = {0}. Note that S_i is a finite family of demicontractive operators. For the rest of the numerical experiment, we proceed as follows: Step 1. Find z ∈ Q such that F₂(z, y) + (1/u)⟨y − z, z − x⟩ ≥ 0 for all y ∈ Q. We write for all y ∈ Q. Thus, by Lemma 2.9(2), we know that T^{F₂}_u x is single-valued for each x ∈ H₂.
Step 4. Compute the numerical results for x_{k+1}. We provide a numerical comparison between our Inertial Forward-Backward Splitting Algorithm (IFBSA) defined in Algorithm 2 (i.e., θ_k ≠ 0) and the Forward-Backward Splitting Algorithm (FBSA) (i.e., θ_k = 0). The stopping criterion is defined as Error = E_k = ∥x_{k+1} − x_k∥ < 10⁻⁶. The different choices of x_0 and x_1 are given in the tables and figures.
The error plot of E_k and (x_k) against the iteration number k, for θ_k ≠ 0 and θ_k = 0 and for each choice in Table 1, is shown in Fig. 1. We can see from Table 1 and Figs. 1 and 2 that IFBSA performs better as compared to FBSA; the error analysis in Table 1 elaborates this behavior of the algorithm.
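The operators of Example 4.1 can be composed into a short iteration for reproducing this kind of experiment. The sketch below is a simplified, non-inertial composition of those operators (a sketch, not the paper's Algorithm 2, whose precise update rule is not reproduced here); the step sizes delta, u, s and m are illustrative choices.

```python
# Simplified composition of the operators of Example 4.1 on R:
# an SEP correction followed by a forward-backward step.  All maps
# contract toward 0, the common solution of the example.
def iterate(x, delta=0.1, u=1.0, s=1.0, m=0.1):
    # SEP correction: x - delta * A*(Id - T^{F2}_s) A x, with  Ax = 3x  and
    # T^{F2}_s w = w / (1 + s)  (resolvent of F2(p, q) = p(q - p))
    w = 3.0 * x
    x = x - delta * 3.0 * (w - w / (1.0 + s))
    # equilibrium resolvent of F1(x, y) = 2x(y - x): T^{F1}_u x = x / (1 + 2u)
    x = x / (1.0 + 2.0 * u)
    # forward-backward step for A(x) = 4x, B(x) = 3x:
    # J_m(x - m*B x)  with  J_m y = y / (1 + 4m)
    return (x - m * 3.0 * x) / (1.0 + 4.0 * m)

x = 10.0
for _ in range(50):
    x = iterate(x)
print(abs(x) < 1e-6)  # the iterates approach the common solution 0
```

Here δ = 0.1 respects the bound δ ∈ (0, 1/L) with L = ∥𝒜∥² = 9, and each full sweep contracts the iterate by roughly a factor of ten.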

Applications
In this section, we illustrate the theoretical results which we have obtained in the previous section.

Split feasibility problems
Let H₁ and H₂ be two real Hilbert spaces and 𝒜 : H₁ → H₂ be a bounded linear operator. Let C and Q be closed, convex and nonempty subsets of H₁ and H₂, respectively. The split feasibility problem aims to find x̄ ∈ C such that 𝒜x̄ ∈ Q. We represent the solution set by ω := C ∩ 𝒜⁻¹(Q) = {ȳ ∈ C : 𝒜ȳ ∈ Q}. Censor and Elfving [7] introduced it to solve inverse problems and their applications to medical image reconstruction and radiation therapy in finite-dimensional Hilbert spaces. For the set C, recall the indicator function b_C, defined by b_C(x) = 0 if x ∈ C and b_C(x) = +∞ otherwise. Let P_Q be the projection of H₂ onto the nonempty, convex and closed subset Q. Take f(x) = ½∥𝒜x − P_Q(𝒜x)∥² and g(x) = b_C(x). Then we can treat the split feasibility problem via the following result.
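One standard scheme for the split feasibility problem is Byrne's CQ iteration x_{k+1} = P_C(x_k − δ𝒜*(Id − P_Q)𝒜x_k). The intervals C = [0, 2], Q = [0, 3] and the operator 𝒜x = 3x below are assumed toy data, not taken from the paper; δ = 0.05 respects the usual bound δ ∈ (0, 2/∥𝒜∥²).

```python
# Byrne's CQ iteration on R with assumed sets C = [0, 2], Q = [0, 3]
# and bounded linear operator A x = 3x (so A* = 3 and ||A||^2 = 9).
def P(x, lo, hi):
    # metric projection onto the interval [lo, hi]
    return min(max(x, lo), hi)

def cq_step(x, delta=0.05):
    Ax = 3.0 * x
    grad = 3.0 * (Ax - P(Ax, 0.0, 3.0))  # A*(Id - P_Q) A x
    return P(x - delta * grad, 0.0, 2.0)

x = 2.0
for _ in range(200):
    x = cq_step(x)
print(x)  # lands in C with A x in Q; here any x in [0, 1] is feasible
```

When the current point is already feasible the gradient term vanishes and the iteration becomes stationary, mirroring the fixed-point characterization of the solution set ω.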

Monotone variational inequality problems
Let H₁ be a Hilbert space and C a nonempty, closed and convex subset of H₁. Let B : C → H₁ be a nonlinear monotone operator. The variational inequality problem aims to find a point x̄ ∈ C such that ⟨Bx̄, ȳ − x̄⟩ ≥ 0 for all ȳ ∈ C.
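A minimal way to approximate a solution of this variational inequality is the projection method x_{k+1} = P_C(x_k − λBx_k). The set C = [1, 5] and the operator B(x) = x − 3 below are illustrative assumptions (B is strongly monotone and Lipschitz, so this simple scheme converges).

```python
# Projection-method sketch for the variational inequality
#   <B x, y - x> >= 0  for all y in C,
# with assumed data C = [1, 5] and B(x) = x - 3 on R.
def P_C(x):
    # metric projection onto C = [1, 5]
    return min(max(x, 1.0), 5.0)

def B(x):
    # strongly monotone, 1-Lipschitz operator
    return x - 3.0

def solve_vi(x0, lam=0.5, iters=100):
    x = x0
    for _ in range(iters):
        x = P_C(x - lam * B(x))
    return x

print(solve_vi(5.0))  # converges to 3, where B vanishes inside C
```

Since B(3) = 0 and 3 lies in the interior of C, the variational inequality is satisfied trivially at the limit point; had the zero of B fallen outside C, the method would instead converge to a boundary point of C.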

Conclusions
In this paper, we have devised an inertially constructed forward-backward splitting algorithm for computing a common solution of the fixed point problem associated with a finite family of demicontractive operators, the SEP and the monotone inclusion problem in Hilbert spaces. The theoretical framework of the algorithm has been strengthened with an appropriate numerical example. Moreover, this framework has also been applied to various instances of monotone inclusion problems. We would like to emphasize that the above-mentioned problems occur naturally in many applications; therefore, iterative algorithms are inevitable in this field of investigation. As a consequence, our theoretical framework constitutes an important topic for future research.