Shrinking Projection Methods for Accelerating Relaxed Inertial Tseng-Type Algorithm with Applications

Our main goal in this manuscript is to accelerate the relaxed inertial Tseng-type (RITT) algorithm by adding a shrinking projection (SP) term to it. Strong convergence results are thereby obtained in a real Hilbert space (RHS). A novel structure is used to solve an inclusion problem and a minimization problem under suitable hypotheses. Finally, numerical experiments that illustrate the applicability, performance, speed, and effectiveness of our procedure are discussed.

Abubakar et al. [45] introduced the RITT method as follows:
ϕ_{n+1} = (1 − β)I_n + βψ_n + βℓ_n(¥I_n − ¥ψ_n), n ≥ 1,
where Λ and β are the extrapolation and relaxation parameters, respectively. Under this algorithm, they discussed weak convergence to a solution point of VIP (1) and to the problem of image recovery. Note that the extrapolation step accelerates the method, but not to the desired degree. The concept of the SP method was discussed by Takahashi et al. [46] in the following algorithm: ϑ_0 ∈ ℸ is arbitrarily fixed. They selected just one closed convex (CC) set for a family of nonexpansive mappings Z_n to modify Mann's iteration method [47] and proved that the sequence ϑ_n converges strongly to P_{Fix(Z)}ϑ_0, provided Λ_n ≤ e for all n ≥ 1 and for some 0 < e < 1.
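For illustration, one RITT step can be sketched in code as follows. The operator F (standing for ¥), the trivial resolvent, and the toy data are our own assumptions for the sketch, not the setting of [45].

```python
import numpy as np

def ritt_step(phi_prev, phi, F, resolvent, ell, Lam, beta):
    """One relaxed inertial Tseng-type (RITT) step (sketch).

    F         : the single-valued operator (the ¥ of the text)
    resolvent : x -> (I + ell*Upsilon)^{-1} x for the set-valued part
    Lam, beta : extrapolation (inertial) and relaxation parameters
    """
    I_n = phi + Lam * (phi - phi_prev)        # inertial extrapolation
    psi = resolvent(I_n - ell * F(I_n))       # forward-backward step
    # Tseng correction term ell*(F(I_n) - F(psi)), relaxed by beta:
    return (1 - beta) * I_n + beta * psi + beta * ell * (F(I_n) - F(psi))

# Toy check: F(x) = x - b (1-Lipschitz, monotone), trivial resolvent;
# the iterates should approach b, the zero of F.
b = np.array([1.0, -2.0])
x_prev = x = np.zeros(2)
for _ in range(200):
    x_prev, x = x, ritt_step(x_prev, x, lambda v: v - b, lambda v: v,
                             ell=0.5, Lam=0.1, beta=0.5)
```

With these parameters the iteration is a contraction, so the toy run settles at b.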
In 2019, Yang and Liu [48] proposed a stepsize sequence for an iterative algorithm for monotone variational inequalities, based on Tseng's extragradient method and Moudafi's viscosity scheme, which requires neither knowledge of the Lipschitz constant of the operator nor additional projections.
With the incorporation of the results of [45, 46, 48], we accelerate the RITT algorithm by adding the SP method to algorithm (12). In a RHS, strong convergence results are given for the proposed algorithm. As applications, our algorithm is used to find the solution of a VIP and of a minimization problem under certain conditions. Eventually, numerical experiments illustrating the applications, performance, acceleration, and effectiveness of the proposed algorithm are presented.

Preparatory Lemmas and Definitions
Suppose that C is a nonempty closed convex subset (CCS) of a RHS ℸ; we shall refer to "⟶" as strong convergence, and P_C: ℸ ⟶ C is the nearest point projection, that is, for all ϑ ∈ ℸ and ω ∈ C, ‖ϑ − P_Cϑ‖ ≤ ‖ϑ − ω‖. P_C is called the metric projection. It is obvious that P_C verifies the following inequality:
⟨ϑ − ω, P_Cϑ − P_Cω⟩ ≥ ‖P_Cϑ − P_Cω‖², for all ϑ, ω ∈ ℸ.
In other words, the metric projection P_C is firmly nonexpansive. Hence, ⟨ϑ − P_Cϑ, ω − P_Cϑ⟩ ≤ 0 holds for all ϑ ∈ ℸ and ω ∈ C; see [49, 50]. The following inequality holds in a HS [51]:
‖l + m‖² ≤ ‖l‖² + 2⟨m, l + m⟩, for all l, m ∈ ℸ. (16)
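As a concrete illustration, the projection characterization and firm nonexpansiveness above can be checked numerically for C the Euclidean unit ball, where P_C has the closed form ϑ/max{1, ‖ϑ‖}; the finite-dimensional setting is, of course, only a special case of ℸ.

```python
import numpy as np

def proj_ball(x):
    """Metric projection P_C onto the closed Euclidean unit ball C."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=5) * 3.0            # arbitrary point of the space
    u = rng.normal(size=5) * 3.0
    w = proj_ball(rng.normal(size=5) * 3.0) # arbitrary point of C
    p, q = proj_ball(v), proj_ball(u)
    # characterization: <v - P_C v, w - P_C v> <= 0 for all w in C
    assert np.dot(v - p, w - p) <= 1e-12
    # firm nonexpansiveness: ||P_C v - P_C u||^2 <= <v - u, P_C v - P_C u>
    assert np.dot(p - q, p - q) <= np.dot(v - u, p - q) + 1e-12
```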
Lemma 2 (see [38]). Let C be a nonempty CCS of a RHS ℸ and P_C: ℸ ⟶ C be the metric projection. Then, for all ϑ ∈ ℸ and ω ∈ C,
‖ϑ − P_Cϑ‖² + ‖ω − P_Cϑ‖² ≤ ‖ϑ − ω‖².
Proof. For all ϑ, ω ∈ ℸ, expanding ‖ϑ − ω‖² = ‖(ϑ − P_Cϑ) − (ω − P_Cϑ)‖² and applying the characterization ⟨ϑ − P_Cϑ, ω − P_Cϑ⟩ ≤ 0, we get the required inequality. The proof is ended.

Shrinking Projection Relaxed Inertial Tseng-Type Algorithm
We provide a method consisting of the forward-backward splitting method with an inertial factor and an explicit stepsize formula, which are used to improve the convergence rate of the iterative scheme and to make the method independent of the Lipschitz constant. The detailed method is provided in Algorithm 1. Note the following:
(i) Since ¥ is an Λ-ism operator, it is a Lipschitz function with constant L. When ¥I_n ≠ ¥ψ_n, the stepsize rule yields ℓ_{n+1} ≤ ℓ_n, and it is obvious for ¥I_n = ¥ψ_n that inequality (25) is satisfied as well. Hence, ℓ_n ≥ min{ρ/L, ℓ_0}; that is, the sequence ℓ_n is monotonically nonincreasing and bounded below by min{ρ/L, ℓ_0}.
(ii) By (i) and (25), the update (28) is well defined.
(iii) If we delete the shrinking projection term from our algorithm, we recover the algorithms of [22, 45, 53].

Theorem 1. Let ℸ be a RHS, and let the operators ¥ and Υ be as in problem (1). If the solution set Ω of problem (1) is a nonempty CCS of ℸ, then the sequence ϑ_n generated by Algorithm 1 converges strongly to the point τ = P_Ω(ϑ_1), provided that conditions (i) and (ii) hold.
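The adaptive stepsize behavior described in remark (i) can be sketched as follows; the exact formula of (25) is paraphrased here in its standard Tseng-type form, which is an assumption on our part.

```python
import numpy as np

def update_stepsize(ell, rho, I_n, psi_n, F_I, F_psi):
    """Adaptive stepsize update (sketch): the sequence is non-increasing by
    construction and, when the operator is L-Lipschitz, bounded below by
    min{rho/L, ell_0}, so knowledge of L itself is never needed."""
    denom = np.linalg.norm(F_I - F_psi)
    if denom > 0.0:
        return min(rho * np.linalg.norm(I_n - psi_n) / denom, ell)
    return ell                      # keep the stepsize when ¥I_n = ¥ψ_n

# With F(x) = 2x (so L = 2) and rho = 0.5, the rule caps at rho/L = 0.25:
I_n, psi_n = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ell = update_stepsize(1.0, 0.5, I_n, psi_n, 2 * I_n, 2 * psi_n)
```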
Mathematical Problems in Engineering

Proof. The proof is divided into the following parts.

Part 1. Demonstrate that P_{C_{n+1}}ϑ_1 is well defined for each ϑ_1 ∈ ℸ, n ≥ 1, and that Ω ⊂ C_{n+1}. It follows from condition (i) that the associated mapping is nonexpansive. Lemma 3 implies that Ω is a closed and convex set, and Lemma 1 clarifies that C_{n+1} is closed and convex for all n ≥ 1. Let η ∈ Ω. Since the resolvent (I + ℓ_nΥ)^{-1} is a firmly nonexpansive mapping, by Lemma 3, by (28), and by applying (31) in (30) together with the definition of ϕ_n, we obtain the estimates needed below.

Algorithm 1 (Initialization): select initial ϑ_0, ϑ_1 ∈ ℸ, ρ ∈ (0, 1), Λ ≥ 0, ℓ_0 > 0, and 0 < β < 1.

Part 2.
Illustrate that ϑ_n is bounded. Since Ω is a nonempty, closed, and convex subset of ℸ, there is a unique u ∈ Ω such that u = P_Ω ϑ_1.
This leads to ϑ_n = P_{C_n}ϑ_1, C_{n+1} ⊂ C_n, and ϑ_{n+1} ∈ C_n for all n ≥ 1, and we have the corresponding estimates. Furthermore, as Ω ⊂ C_n for all n ≥ 1, it follows from (38) and (39) that lim_{n⟶∞} ‖ϑ_n − ϑ_1‖ exists. Hence, ϑ_n is bounded.
Part 3. Show that lim_{n⟶∞} ϑ_n = τ. By the definition of C_n, for m > n, we observe that ϑ_m = P_{C_m}ϑ_1 ∈ C_m ⊂ C_n. From Lemma 2 and Part 2, we conclude that lim_{n,m⟶∞} ‖ϑ_m − ϑ_n‖² = 0. Thus, ϑ_n is a Cauchy sequence. Hence, lim_{n⟶∞} ϑ_n = τ, and we obtain relation (41).

Part 4. Prove that τ ∈ Ω. It follows from (41), (42), and condition (ii) that the required limit relations hold, which yield τ ∈ Ω. □

Solve a Minimization Problem
As an application of our theorem, we solve the following constrained convex minimization problem:
min_{ϑ ∈ C} ℶ(ϑ), (51)
where ℶ: ℸ ⟶ R is a convex function. We suppose that ℶ is differentiable and that ∇ℶ is an Λ-ism operator.
It is easy to see that problem (51) is equivalent to the following unconstrained problem:
min_{ϑ ∈ ℸ} [ℶ(ϑ) + ℘_C(ϑ)],
where ℘_C is the indicator function of C. Thus, this problem becomes the problem of finding an element ϑ* ∈ ℸ such that
0 ∈ ∇ℶ(ϑ*) + ∂℘_C(ϑ*),
where ∂℘_C is the subdifferential of ℘_C. We know that ∂℘_C is a maximal monotone operator and that (I + m∂℘_C)^{-1} = P_C for all m > 0. For solving problem (51), we state the following theorem, which is analogous to Theorem 1.
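To make the reduction concrete, the following is a minimal sketch with an assumed objective ℶ(ϑ) = ½‖ϑ − b‖² (so ∇ℶ(ϑ) = ϑ − b is 1-ism) and C the unit ball, for which the resolvent (I + m∂℘_C)^{-1} is exactly P_C; the data b is hypothetical.

```python
import numpy as np

def proj_ball(x):                 # P_C for C the closed Euclidean unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

b = np.array([3.0, 4.0])          # hypothetical data; b lies outside C, so
x = np.zeros(2)                   # the constrained minimizer is P_C(b)
m = 0.5                           # stepsize for the 1-ism gradient
for _ in range(100):
    # forward-backward step:
    # x <- (I + m ∂℘_C)^{-1}(x - m ∇ℶ(x)) = P_C(x - m (x - b))
    x = proj_ball(x - m * (x - b))
```

The iterates settle at P_C(b) = b/‖b‖ = (0.6, 0.8), the solution of (51) for this choice of ℶ and C.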

Solve a Split Feasibility Problem
In this section, we investigate the application of our proposed methods to the split convex feasibility problem (SCFP). Let T: ℸ_1 ⟶ ℸ_2 be a bounded linear operator and T* its adjoint, defined on the two RHSs ℸ_1 and ℸ_2. Assume that C ⊂ ℸ_1 and Q ⊂ ℸ_2 are nonempty CCSs. The SCFP [54] takes the following form:
find ϑ* ∈ C such that Tϑ* ∈ Q. (55)
In a HS, the SCFP was initiated by Censor and Elfving [54], who used a multidistance approach to find an adaptive method for resolving it. Many of the problems that emerge from state retrieval and restoration of medical images can be formulated as an SCFP [55, 56]. The SCFP is also used in a variety of disciplines such as dynamic emission tomographic image reconstruction, image restoration, and radiation therapy treatment planning [57–59]. Let us consider ¥ = T*(I − P_Q)T for the metric projection P_Q onto Q, the gradient ∇, and Υ = ∂i_C. Due to the above construction, problem (55) has the inclusion format described in (1). It can be seen that ¥ is Lipschitz continuous with constant L = ‖T‖², and Υ is maximal monotone; see, e.g., [60]. Let C be a nonempty CCS of a RHS ℸ. The normal cone of C at ϑ ∈ C is defined by
N_C(ϑ) = {z ∈ ℸ: ⟨z, y − ϑ⟩ ≤ 0, ∀y ∈ C}.
Suppose g: ℸ ⟶ (−∞, +∞] is a proper, lower semicontinuous, and convex function. For each ϑ ∈ ℸ, the subdifferential ∂g of g is given by
∂g(ϑ) = {z ∈ ℸ: g(y) − g(ϑ) ≥ ⟨z, y − ϑ⟩, ∀y ∈ ℸ}.
For any nonempty CCS C of ℸ, the indicator function i_C of C is defined by
i_C(ϑ) = 0 if ϑ ∈ C, and i_C(ϑ) = +∞ otherwise.
It is obvious that the indicator function i_C is proper, convex, and lower semicontinuous on ℸ. The subdifferential ∂i_C of i_C is a maximal monotone operator, and
∂i_C(ϑ) = {z ∈ ℸ: i_C(y) − i_C(ϑ) ≥ ⟨z, y − ϑ⟩, ∀y ∈ C} = {z ∈ ℸ: ⟨z, y − ϑ⟩ ≤ 0, ∀y ∈ C} = N_C(ϑ).
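A small numerical check of the claim L = ‖T‖² (the spectral norm squared) for the operator ¥ = T*(I − P_Q)T; Q is taken as a box and the data are randomly generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 3))                 # bounded linear operator ℸ1 -> ℸ2
L = np.linalg.norm(T, 2) ** 2               # claimed Lipschitz constant ||T||^2

def proj_Q(y):                              # P_Q for Q = [-1, 1]^4, a CCS of ℸ2
    return np.clip(y, -1.0, 1.0)

def F(x):                                   # ¥ x = T*(I - P_Q) T x
    y = T @ x
    return T.T @ (y - proj_Q(y))

# I - P_Q is nonexpansive, so ||F(x) - F(z)|| <= ||T||^2 ||x - z||:
for _ in range(100):
    x, z = rng.normal(size=3), rng.normal(size=3)
    assert np.linalg.norm(F(x) - F(z)) <= L * np.linalg.norm(x - z) + 1e-12
```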

For each ϑ ∈ ℸ, we now define the resolvent of the subdifferential ∂i_C for each λ > 0 as (I + λ∂i_C)^{-1}(ϑ). Hence, we can observe that (I + λ∂i_C)^{-1} = P_C. Now, on the basis of the above, Algorithm 1 may be reduced to the following scheme. Theorem 3. Let ϑ_n be a sequence generated by the following scheme: choose ϑ_{-1}, ϑ_0 ∈ C, ρ ∈ (0, 1), Λ ≥ 0, ℓ_0 > 0, and 0 < β < 1.
St. (i): compute I_n and the next iterate as in Algorithm 1, with P_C in place of the resolvent, where ℓ_{n+1} is the stepsize sequence revised accordingly. Put n = n + 1, and return to St. (i). If the solution set Γ_SFP is nonempty, then the sequence ϑ_n converges weakly to an element of Γ_SFP.
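A minimal, self-contained sketch of such a scheme on a toy SFP; the exact update order and parameter names follow the structure of Algorithm 1 but are our assumptions, and T, C, and Q are trivial choices for illustration only.

```python
import numpy as np

T = np.eye(2)                                        # trivial bounded linear operator
proj_C = lambda x: x / max(1.0, np.linalg.norm(x))   # C: closed unit ball
proj_Q = lambda y: np.clip(y, 0.0, 1.0)              # Q: the box [0, 1]^2
F = lambda x: T.T @ (T @ x - proj_Q(T @ x))          # ¥ = T*(I - P_Q)T

x_prev = x = np.array([-2.0, 1.5])                   # infeasible starting point
ell, rho, Lam, beta = 1.0, 0.5, 0.1, 0.5
for _ in range(1000):
    I_n = x + Lam * (x - x_prev)                     # St. (i): inertial extrapolation
    psi = proj_C(I_n - ell * F(I_n))                 # forward-backward step with P_C
    x_prev, x = x, (1 - beta) * I_n + beta * psi + beta * ell * (F(I_n) - F(psi))
    d = np.linalg.norm(F(I_n) - F(psi))              # adaptive stepsize revision
    if d > 0.0:
        ell = min(rho * np.linalg.norm(I_n - psi) / d, ell)
# x now (approximately) satisfies x in C and Tx in Q
```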

Numerical Discussion
This part is devoted to presenting a numerical solution to an SCFP in an infinite-dimensional HS, which is a special inclusion problem, as explained in Section 5. The problem setting is taken from [61]. We provide a comparison of Algorithm 1 of [45] (Alg1) and our proposed Algorithm 1 (Alg2).
Remark 1. It is well known that the success of any iterative method depends on two main factors: first, the number of iterations (a method that needs fewer iterations saves effort); second, the execution time (a method that needs less time is preferable to one that requires much more). From the figures and tables, we observe that our algorithm needs fewer iterations and less time than Algorithm 1 of [45]. This illustrates that our method succeeds in accelerating Algorithm 1 of [45] and in solving problem (55). The performance of our algorithm is therefore good, as it saves time and effort in studying the convergence rate.
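The two measures discussed in Remark 1 can be instrumented as in the following sketch; the contraction maps `fast` and `slow` are hypothetical stand-ins for the compared algorithms, not Alg1 and Alg2 themselves.

```python
import time
import numpy as np

def benchmark(step, x0, tol=1e-8, max_iter=10_000):
    """Count iterations and wall time until successive iterates differ by
    less than tol -- the two success measures of Remark 1. `step` is any
    iteration map x -> x_next."""
    x, t0 = x0, time.perf_counter()
    for k in range(1, max_iter + 1):
        x_new = step(x)
        if np.linalg.norm(x_new - x) < tol:
            return k, time.perf_counter() - t0
        x = x_new
    return max_iter, time.perf_counter() - t0

# Two contractions with different rates: the faster one stops in fewer iterations.
fast = lambda x: 0.5 * x
slow = lambda x: 0.9 * x
k_fast, _ = benchmark(fast, np.ones(3))
k_slow, _ = benchmark(slow, np.ones(3))
```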