CC BY 4.0 license · Open Access · Published by De Gruyter, June 10, 2022

Inertial iterative method with self-adaptive step size for finite family of split monotone variational inclusion and fixed point problems in Banach spaces

  • Grace N. Ogwo, Timilehin O. Alakoya and Oluwatosin T. Mewomo
From the journal Demonstratio Mathematica

Abstract

In this paper, we propose and study a new inertial iterative algorithm with self-adaptive step size for approximating a common solution of a finite family of split monotone variational inclusion problems and the fixed point problem of a nonexpansive mapping between a Banach space and a Hilbert space. This method combines the inertial technique with the viscosity method and a self-adaptive step size for solving the common solution problem. We prove a strong convergence result for the proposed method under some mild conditions. Moreover, we apply our result to study the split feasibility problem and the split minimization problem. Finally, we provide some numerical experiments to demonstrate the efficiency of our method in comparison with some well-known methods in the literature. Our method does not require prior knowledge or an estimate of the operator norm, which makes it easily implementable, unlike many other methods in the literature that require this knowledge for their implementation.

MSC 2010: 47H06; 47H09; 46N10

1 Introduction

Let $X$ be a vector space and let $A$ be a multivalued operator on $X$. The zero point problem of $A$ is defined as:

find an element $x \in X$ such that $0 \in Ax$.

The set of all zero points of $A$ is denoted by $A^{-1}(0)$ and defined as $A^{-1}(0) = \{x \in X : 0 \in Ax\}$.

Recently, the split inverse problem (SIP) has attracted the attention of several authors due to its wide areas of applications, for example, in phase retrieval, image recovery, signal processing, data compression, intensity-modulated radiation therapy, among others (see [1,2] and references therein). The SIP model is formulated as follows: find a point

$x^* \in X$ that solves IP$_1$

such that

$y^* = Tx^* \in Y$ that solves IP$_2$,

where $X$ and $Y$ are two vector spaces, $T : X \to Y$ is a linear operator, and IP$_1$ and IP$_2$ denote inverse problems in $X$ and $Y$, respectively. In 1994, Censor and Elfving [2] introduced the first instance of the SIP, known as the split feasibility problem (SFP). The SFP is defined as: find

$x^* \in D$ such that $Tx^* \in Q$,

where $D$ and $Q$ are nonempty, closed, and convex subsets of real Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, and $T : \mathcal{H}_1 \to \mathcal{H}_2$ is a bounded linear operator. It is known that the SFP has wide applications in many fields such as signal processing, treatment planning, phase retrieval, among others (see [3,4,5] and other references therein).

In 2011, Moudafi [6] introduced another instance of the SIP called the split monotone variational inclusion problem (SMVIP). The SMVIP is formulated as follows:

(1) Find $x^* \in \mathcal{H}_1$ such that $0 \in f_1(x^*) + B_1(x^*)$, and $y^* = Ax^* \in \mathcal{H}_2$ such that $0 \in f_2(y^*) + B_2(y^*)$,

where $f_1 : \mathcal{H}_1 \to \mathcal{H}_1$ and $f_2 : \mathcal{H}_2 \to \mathcal{H}_2$ are single-valued mappings, $B_1 : \mathcal{H}_1 \to 2^{\mathcal{H}_1}$ and $B_2 : \mathcal{H}_2 \to 2^{\mathcal{H}_2}$ are multivalued maximal monotone mappings, and $A : \mathcal{H}_1 \to \mathcal{H}_2$ is a bounded linear operator. For more details on the SMVIP, the reader can see [7] and other references therein.

If $f_1 = f_2 = 0$, then problem (1) reduces to the split variational inclusion problem (SVIP), defined as follows: find $x^* \in \mathcal{H}_1$ such that

(2) $0 \in B_1(x^*)$

and

(3) $y^* = Ax^* \in \mathcal{H}_2$ such that $0 \in B_2(y^*)$,

where $0$ is the zero vector, $\mathcal{H}_1$ and $\mathcal{H}_2$ are real Hilbert spaces, $B_1 : \mathcal{H}_1 \to 2^{\mathcal{H}_1}$ and $B_2 : \mathcal{H}_2 \to 2^{\mathcal{H}_2}$ are multivalued mappings, and $A : \mathcal{H}_1 \to \mathcal{H}_2$ is a bounded linear operator. We denote the solution set of SVIP (2) by SOLVIP$(B_1)$ and the solution set of SVIP (3) by SOLVIP$(B_2)$. Hence, we denote the solution set of the SVIP by $\Omega = \{x^* \in \mathcal{H}_1 : x^* \in \mathrm{SOLVIP}(B_1) \text{ and } Ax^* \in \mathrm{SOLVIP}(B_2)\}$. The SVIP is also known as the split null point problem or the split zero point problem (see [8,9,10,11,12,13,14,15]). It includes as special cases the split fixed point problem, the split variational inequality problem and the SFP (see [16,17,18] and other references therein), and it has wide applications in different fields such as the medical treatment of intensity-modulated radiation therapy, data compression, medical image reconstruction, signal processing and phase retrieval (e.g., see [2,3,19]). The SVIP is considered a central problem in optimization and nonlinear analysis and has attracted the attention of several researchers because the theory provides a simple, natural and unified framework for the general treatment of many important mathematical problems such as minimization problems, network equilibrium problems, complementarity problems and systems of nonlinear equations (see [19,20,21,22,23,24,25,26,27,28] and other references therein).

Byrne et al. [29] proposed and studied the following algorithms for solving the SVIP for two maximal monotone operators $A$ and $B$ in Hilbert spaces. Take $x_0 \in \mathcal{H}_1$ such that

(4) $x_{n+1} = J_\mu^A(x_n + \lambda T^*(J_\mu^B - I)Tx_n)$

and

(5) $x_{n+1} = \alpha_n x_0 + (1 - \alpha_n) J_\mu^A(x_n + \lambda T^*(J_\mu^B - I)Tx_n)$

for $\mu > 0$, $T^*$ the adjoint of $T$, $\lambda \in \left(0, \frac{2}{L}\right)$, $L = \|T^*T\|$, where $J_\mu^A = (I + \mu A)^{-1}$ and $J_\mu^B = (I + \mu B)^{-1}$ are the resolvent operators of $A$ and $B$, respectively. Under certain conditions, the authors obtained a weak convergence result for Algorithm (4) and a strong convergence result for Algorithm (5).
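To make the structure of iteration (4) concrete, here is a minimal numerical sketch (a toy example of ours, not from the paper): we take $\mathcal{H}_1 = \mathcal{H}_2 = \mathbb{R}$, let $A$ and $B$ be the normal cones of the intervals $[0,1]$ and $[2,3]$, so that their resolvents reduce to projections (clips) onto those intervals, and let $T$ be multiplication by 2. The names `Tm`, `JA`, `JB` are ours.

```python
# Iteration (4) of Byrne et al. in H1 = H2 = R, with A and B the normal cones
# of [0, 1] and [2, 3]: their resolvents J_mu^A, J_mu^B are then the metric
# projections onto those intervals, independent of mu.
Tm = 2.0                                  # T: multiplication by 2
JA = lambda x: min(max(x, 0.0), 1.0)      # J_mu^A = P_[0,1]
JB = lambda y: min(max(y, 2.0), 3.0)      # J_mu^B = P_[2,3]
L = Tm * Tm                               # ||T* T|| = 4
lam = 0.3                                 # step size in (0, 2/L) = (0, 0.5)

x = 0.0
for _ in range(100):
    # x_{n+1} = J_mu^A( x_n + lam * T* (J_mu^B - I) T x_n )
    x = JA(x + lam * Tm * (JB(Tm * x) - Tm * x))
```

The unique solution of $0 \in Ax$, $0 \in B(Tx)$ here is $x = 1$ (the only point of $[0,1]$ whose image $2x$ lies in $[2,3]$), and the iterates reach it.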

Moudafi [30] first introduced the viscosity approximation method, which is defined as follows: choose $x_0 \in \mathcal{H}$ and generate the sequence $\{x_n\}$ by

(6) $x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) S x_n, \quad n \geq 1$,

where $\{\alpha_n\} \subset (0, 1)$ and $f$ is a contraction mapping. He proved that the sequence $\{x_n\}$ generated by (6) converges strongly to a fixed point of the nonexpansive mapping $S$ under some suitable control conditions.
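As a small illustration of (6) (a toy instance we constructed, not taken from [30]), take $S$ to be a plane rotation, which is nonexpansive with $F(S) = \{0\}$, and $f$ any contraction. The viscosity limit is the unique fixed point of $P_{F(S)} \circ f$, which here is $0$ regardless of $f$:

```python
import numpy as np

# Viscosity iteration (6): x_{n+1} = a_n f(x_n) + (1 - a_n) S x_n,
# with S a rotation of R^2 (an isometry, hence nonexpansive, F(S) = {0})
# and f a contraction with constant 0.3.
theta = 0.7
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
f = lambda v: 0.3 * v + np.array([1.0, -2.0])

x = np.array([5.0, 3.0])
for n in range(1, 3001):
    a = 1.0 / (n + 1)            # a_n -> 0 and sum a_n = infinity
    x = a * f(x) + (1 - a) * S @ x

# x should approach the viscosity limit P_{F(S)} f(limit) = 0
```

With $\alpha_n = 1/(n+1)$ the convergence is slow (roughly of order $\alpha_n$ plus a polynomially decaying transient), which is what the loose tolerance below reflects.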

Very recently, Suantai et al. [31] proposed the following viscosity iterative scheme to approximate the solution of the SVIP between a Banach space $X$ and a Hilbert space $\mathcal{H}$:

(7) $x_{n+1} = \alpha_n f(x_n) + \beta_n x_n + \gamma_n J_{\lambda_n}^A(x_n + \lambda_n T^* J_X (Q_{\mu_n}^B - I)Tx_n), \quad n \geq 1$,

where $\{\mu_n\}, \{\lambda_n\} \subset (0, \infty)$, $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subset (0, 1)$ with $\alpha_n + \beta_n + \gamma_n = 1$, $f : \mathcal{H} \to \mathcal{H}$ is a contraction, $J_X$ is the duality mapping on $X$, $J_\lambda^A$ is the resolvent of $A$ for $\lambda > 0$, and $Q_\mu^B$ is the metric resolvent of $B$ for $\mu > 0$. They proved that the sequence $\{x_n\}$ defined by (7) converges strongly to a solution of the SVIP under the following conditions, for some $a, b, c, d, k \in \mathbb{R}^+$:

  1. $0 < a \leq \lambda_n \|T\|^2 \leq b < 2$;

  2. $0 < k \leq \mu_n$;

  3. $0 < c \leq \gamma_n \leq d < 1$;

  4. $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$.

The fixed point problem (FPP) is another related problem, which is defined as follows:

(8) find $x \in C$ such that $Sx = x$,

where $C$ is a nonempty closed convex subset of a Hilbert space $\mathcal{H}$ and $S : \mathcal{H} \to \mathcal{H}$ is a nonlinear mapping. We denote by $F(S)$ the fixed point set of $S$, that is,

(9) $F(S) = \{x \in C : Sx = x\}$.

Many problems in science and engineering can be formulated as the problem of finding the solution of an FPP for a nonlinear mapping.

Motivated by the work of Byrne et al. [32], Kazmi and Rizvi [33] proposed the following algorithm for approximating a solution of the SVIP which is also a fixed point of a nonexpansive mapping.

Theorem 1.1

[33] Let $\mathcal{H}_1$ and $\mathcal{H}_2$ be two real Hilbert spaces and $T : \mathcal{H}_1 \to \mathcal{H}_2$ be a bounded linear operator. Let $f : \mathcal{H}_1 \to \mathcal{H}_1$ be a contraction mapping with constant $\rho \in [0, 1)$ and $S : \mathcal{H}_1 \to \mathcal{H}_1$ be a nonexpansive mapping such that $F(S) \cap \Omega \neq \emptyset$. For $x_0 \in \mathcal{H}_1$, let the sequences $\{u_n\}$ and $\{x_n\}$ be generated by

(10) $u_n = J_r^B(x_n + \lambda T^*(J_r^C - I)Tx_n)$, $\quad x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) S u_n$, $\quad n \geq 0$,

where $r > 0$ and $\lambda \in \left(0, \frac{1}{L}\right)$, $L$ is the spectral radius of the operator $T^*T$, $T^*$ is the adjoint of $T$, and $\{\alpha_n\}$ is a sequence in $(0, 1)$ such that $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$ and $\sum_{n=1}^{\infty} |\alpha_n - \alpha_{n-1}| < \infty$. Then the sequences $\{u_n\}$ and $\{x_n\}$ both converge strongly to an element in the solution set $\Omega \cap F(S)$.

All the above algorithms for solving the SVIP share a common feature, which is also their computational weakness: their step size $\lambda$ (or $\lambda_n$) depends on the norm of the operator $T$, which in most cases is unknown or very difficult to calculate or even estimate. This is a major drawback of the above algorithms and of several existing algorithms in the literature.

Let $\mathcal{H}$ be a real Hilbert space and $X$ be a uniformly convex and smooth Banach space. Let $A_i : \mathcal{H} \to \mathcal{H}$ be a finite family of $\alpha_i$-inverse strongly monotone operators, and let $B_i : \mathcal{H} \to 2^{\mathcal{H}}$ and $C_i : X \to 2^{X^*}$ be finite families of maximal monotone operators for each $i = 1, 2, \ldots, m$. Let $S : \mathcal{H} \to \mathcal{H}$ be a nonexpansive mapping and let $T : \mathcal{H} \to X$ be a bounded linear operator. In this paper, we study the following common solution problem for a finite family of SMVIPs and the FPP of a nonexpansive mapping:

(11) find $x^* \in \mathcal{H}$ such that $0 \in \bigcap_{i=1}^m (A_i + B_i)x^*$ and $Sx^* = x^*$;

and

(12) $y^* = Tx^*$ such that $0 \in \bigcap_{i=1}^m C_i y^*$.

We denote the solution set of problems (11)–(12) by

$\Gamma \coloneqq \bigcap_{i=1}^m (A_i + B_i)^{-1}(0) \cap \bigcap_{i=1}^m T^{-1}(C_i^{-1}0) \cap F(S)$.

To accelerate the rate of convergence of iterative methods, authors often employ the inertial technique. Polyak [34] studied the convergence of the following inertial extrapolation algorithm:

$x_{n+1} = x_n + \beta_n(x_n - x_{n-1}) - \alpha_n A x_n, \quad n \geq 0$,

where $\{\alpha_n\}$ and $\{\beta_n\}$ are two real sequences. Recently, there has been increased interest in studying inertial-type algorithms (see [35,36,37,38,39,40,41] and other references therein).
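A minimal sketch of Polyak's inertial (heavy-ball) iteration, with $A$ taken as the gradient of a convex quadratic so that the zeros of $A$ solve a linear system; the matrix and the constant step sizes below are our own illustrative choices:

```python
import numpy as np

# Heavy-ball iteration: x_{n+1} = x_n + beta*(x_n - x_{n-1}) - alpha*A(x_n),
# with A(x) = Qx - b the gradient of the convex quadratic 0.5 x'Qx - b'x,
# so the iterates approach the solution of Qx = b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
A = lambda v: Q @ v - b

x_prev = x = np.zeros(2)
alpha, beta = 0.2, 0.5                   # admissible for these eigenvalues
for _ in range(200):
    x, x_prev = x + beta * (x - x_prev) - alpha * A(x), x
```

The inertial term $\beta_n(x_n - x_{n-1})$ is what distinguishes this from the plain gradient step; with complex characteristic roots the per-step contraction factor is $\sqrt{\beta}$, so 200 iterations reach machine precision here.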

Very recently, Long et al. [42] introduced a new algorithm, which combines the inertial technique and the viscosity method for solving the SVIP in Hilbert spaces. They proved that the sequence { x n } generated by the algorithm converges strongly to an element in the solution set of the SVIP.

Motivated by the aforementioned works and the ongoing research activities in this direction, we propose and study a new inertial iterative algorithm with self-adaptive step size for approximating a common solution of a finite family of SMVIPs and the FPP of a nonexpansive mapping between a Banach space and a Hilbert space. This method combines the inertial technique with the viscosity method for solving the common solution problem without prior knowledge of the operator norm. We prove a strong convergence result for the sequence generated by our proposed method under some mild conditions and apply our results to study the split feasibility problem and the split minimization problem. Finally, we provide numerical experiments for the proposed method and compare the performance of our algorithm with some existing methods in the literature.

The remainder of the paper is organized as follows. Section 2 contains basic definitions and results needed in subsequent sections. In Section 3, we present the proposed method. We investigate the convergence of the method in Section 4. The application of our proposed method is given in Section 5, while in Section 6 we perform some numerical experiments and compare the performance of our method with other methods in the literature. We then conclude in Section 7.

2 Preliminaries

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $\mathcal{H}$ with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. We denote the weak and strong convergence of a sequence $\{x_n\}$ to a point $x$ by $x_n \rightharpoonup x$ and $x_n \to x$, respectively. Let $X$ be a real Banach space with norm $\|\cdot\|$ and let $X^*$ be the dual space of $X$. We denote the value of $y \in X^*$ at $x \in X$ by $\langle x, y \rangle$. Also, we denote the set of weak limits of $\{x_n\}$ by $w_\omega(x_n)$, that is,

$w_\omega(x_n) \coloneqq \{x : x_{n_j} \rightharpoonup x \text{ for some subsequence } \{x_{n_j}\} \text{ of } \{x_n\}\}$.

The modulus of convexity $\delta_X : [0, 2] \to [0, 1]$ is defined as

$\delta_X(\varepsilon) = \inf\left\{1 - \left\|\dfrac{x + y}{2}\right\| : \|x\| = 1 = \|y\|, \ \|x - y\| \geq \varepsilon\right\}$.

A Banach space $X$ is called uniformly convex if $\delta_X(\varepsilon) > 0$ for any $\varepsilon \in (0, 2]$, and $X$ is said to be strictly convex if for $x, y \in S(X) \coloneqq \{x \in X : \|x\| = 1\}$ with $x \neq y$, one has $\left\|\dfrac{x + y}{2}\right\| < 1$. It is well known that every uniformly convex Banach space is reflexive and strictly convex, and that every Hilbert space is uniformly convex. The modulus of smoothness $\rho_X : [0, \infty) \to [0, \infty)$ is defined by

$\rho_X(\sigma) = \sup\left\{\dfrac{\|x + \sigma y\| + \|x - \sigma y\|}{2} - 1 : \|x\| = \|y\| = 1\right\}$.

Then $X$ is called uniformly smooth if $\lim_{\sigma \to 0} \dfrac{\rho_X(\sigma)}{\sigma} = 0$, and $q$-uniformly smooth if there is a constant $C_q > 0$ such that $\rho_X(\sigma) \leq C_q \sigma^q$ for any $\sigma > 0$. It is known that $X$ is $p$-uniformly convex if and only if its dual $X^*$ is $q$-uniformly smooth.

The normalized duality mapping $J_X : X \to 2^{X^*}$ is defined by

$J_X(x) = \{x^* \in X^* : \langle x, x^* \rangle = \|x\|^2 = \|x^*\|^2\}$

for every $x \in X$. If $X$ is a real Hilbert space, then $J_X = I$, where $I$ is the identity mapping, and if the Banach space $X$ is smooth, then $J_X$ is a single-valued mapping of $X$ into $X^*$. Also, if $J_X$ is surjective, then $X$ is reflexive, while $X$ is strictly convex if and only if $J_X$ is one-to-one (see [26]).

The norm of $X$ is said to be Gâteaux differentiable if for each $x, y \in X$ with $\|x\| = \|y\| = 1$, the limit

$\lim_{t \to 0} \dfrac{\|x + ty\| - \|x\|}{t}$

exists. In this case, $X$ is called smooth. Let $X$ be a uniformly convex and smooth Banach space with a Gâteaux differentiable norm and let $B : X \to 2^{X^*}$ be a maximal monotone operator. We consider the metric resolvent of $B$,

$Q_\mu^B = (I + \mu J_X^{-1} B)^{-1}, \quad \mu > 0$.

It is known that the operator $Q_\mu^B$ is firmly nonexpansive and that the fixed points of $Q_\mu^B$ are the null points of $B$ (see [43,44]). The resolvent plays an important role in the approximation theory for zero points of maximal monotone operators in Banach spaces.

The resolvent has the following property (see [45]):

(13) $\langle Q_\mu^B x - x^*, J_X(x - Q_\mu^B x) \rangle \geq 0, \quad \forall x \in X, \ x^* \in B^{-1}(0)$;

in particular, if $X$ is a real Hilbert space, then

$\langle J_\mu^B x - x^*, x - J_\mu^B x \rangle \geq 0, \quad \forall x \in X, \ x^* \in B^{-1}(0)$,

where $J_\mu^B = (I + \mu B)^{-1}$ is the general resolvent and $B^{-1}(0) = \{z \in X : 0 \in Bz\}$ is the set of null points of $B$. Also, we know that $B^{-1}(0)$ is closed and convex (see [46]).

Definition 2.1

Let $A : \mathcal{H} \to \mathcal{H}$ be a mapping. Then, $A$ is said to be

  1. $L$-Lipschitz continuous, if there exists $L \geq 0$ such that

    $\|Ax - Ay\| \leq L\|x - y\|, \quad \forall x, y \in \mathcal{H}$;

    if $L \in [0, 1)$, then $A$ is called a contraction mapping;

  2. nonexpansive, if $A$ is $1$-Lipschitz continuous;

  3. $L$-cocoercive (or $L$-inverse strongly monotone), if there exists $L > 0$ such that

    $\langle Ax - Ay, x - y \rangle \geq L\|Ax - Ay\|^2, \quad \forall x, y \in \mathcal{H}$;

  4. monotone, if

    $\langle Ax - Ay, x - y \rangle \geq 0, \quad \forall x, y \in \mathcal{H}$.

Clearly, $L$-inverse strongly monotone mappings are $\frac{1}{L}$-Lipschitz continuous and monotone, but the converse is not always true. We have the following result on inverse strongly monotone mappings.

Lemma 2.2

[47] Let $A : \mathcal{H} \to \mathcal{H}$ be a $k$-inverse strongly monotone mapping. Then:

  1. $A$ is a $\frac{1}{k}$-Lipschitz continuous and monotone mapping;

  2. if $\lambda$ is any constant in $(0, 2k]$, then the mapping $I - \lambda A$ is nonexpansive, where $I$ is the identity mapping on $\mathcal{H}$.
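Lemma 2.2(2) can be checked numerically in $\mathbb{R}^3$ (a sanity check under our own choices, not a proof): for $A(x) = Qx$ with $Q$ symmetric positive semidefinite, $A$ is $k$-inverse strongly monotone with $k = 1/\lambda_{\max}(Q)$, and $I - \lambda A$ should be nonexpansive even at the extreme admissible step $\lambda = 2k$:

```python
import numpy as np

# A(x) = Qx with Q symmetric PSD is k-ism for k = 1/lambda_max(Q);
# Lemma 2.2(2) then says I - lam*A is nonexpansive for 0 < lam <= 2k.
rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3))
Q = M.T @ M                              # symmetric positive semidefinite
k = 1.0 / np.linalg.eigvalsh(Q)[-1]      # eigvalsh returns ascending order
lam = 2.0 * k                            # extreme admissible step size

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = np.linalg.norm((x - lam * Q @ x) - (y - lam * Q @ y))
    assert lhs <= np.linalg.norm(x - y) + 1e-10
```

At $\lambda = 2k$ the eigenvalues of $I - \lambda Q$ lie in $[-1, 1]$, which is exactly the nonexpansiveness threshold.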

Recall that for a nonempty, closed and convex subset $C$ of $\mathcal{H}$, the metric projection, denoted by $P_C$, is the map defined on $\mathcal{H}$ onto $C$ which assigns to each $x \in \mathcal{H}$ the unique point $P_C x \in C$ such that

$\|x - P_C x\| = \inf\{\|x - y\| : y \in C\}$.

We have the following result on the metric projection map.

Lemma 2.3

[48] Let $C$ be a nonempty closed convex subset of a real Hilbert space $\mathcal{H}$. For any $x \in \mathcal{H}$ and $z \in C$, we have

$z = P_C x \iff \langle x - z, z - y \rangle \geq 0 \quad \text{for all } y \in C$.
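The characterization in Lemma 2.3 is easy to test numerically for the closed unit ball, whose metric projection has the closed form $P_C x = x / \max\{1, \|x\|\}$ (the helper name `proj_ball` is ours):

```python
import numpy as np

# Projection onto the ball C = {y : ||y|| <= r} and a check of Lemma 2.3:
# z = P_C x iff <x - z, z - y> >= 0 for every y in C.
def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

x = np.array([3.0, 4.0])
z = proj_ball(x)                      # = (0.6, 0.8)

rng = np.random.default_rng(0)
for _ in range(1000):                 # sample test points y in C
    y = proj_ball(rng.normal(size=2))
    assert np.dot(x - z, z - y) >= -1e-12
```

The inequality holds with equality only in the limiting direction along the boundary, which is consistent with $z$ being the nearest point of $C$ to $x$.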

Definition 2.4

A function $c : \mathcal{H} \to \mathbb{R}$ is called convex if for all $t \in [0, 1]$ and $x, y \in \mathcal{H}$,

$c(tx + (1 - t)y) \leq t c(x) + (1 - t) c(y)$.

Definition 2.5

A convex function $c : \mathcal{H} \to \mathbb{R}$ is said to be subdifferentiable at a point $x \in \mathcal{H}$ if the set

(14) $\partial c(x) = \{u \in \mathcal{H} : c(y) \geq c(x) + \langle u, y - x \rangle, \ \forall y \in \mathcal{H}\}$

is nonempty, where each element in $\partial c(x)$ is called a subgradient of $c$ at $x$, $\partial c(x)$ is called the subdifferential of $c$ at $x$, and the inequality in (14) is called the subdifferential inequality of $c$ at $x$. We say that $c$ is subdifferentiable on $\mathcal{H}$ if $c$ is subdifferentiable at each $x \in \mathcal{H}$ [49].

Lemma 2.6

[50] Let $\mathcal{H}$ be a real Hilbert space. Then the following assertions hold:

  1. $2\langle x, y \rangle = \|x\|^2 + \|y\|^2 - \|x - y\|^2 = \|x + y\|^2 - \|x\|^2 - \|y\|^2$, $\forall x, y \in \mathcal{H}$;

  2. $\|\alpha x + (1 - \alpha)y\|^2 = \alpha\|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2$, $\forall x, y \in \mathcal{H}$, $\alpha \in \mathbb{R}$;

  3. $\|x + y\|^2 \leq \|x\|^2 + 2\langle y, x + y \rangle$, $\forall x, y \in \mathcal{H}$.
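The three assertions of Lemma 2.6 can be verified numerically on random vectors (a quick sanity check, not a proof); the residuals of (1) and (2) should vanish, and the gap in (3) equals exactly $\|y\|^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=5), rng.normal(size=5)
a = 0.37
n2 = lambda v: float(np.dot(v, v))     # squared norm ||v||^2

# (1): 2<x,y> = ||x||^2 + ||y||^2 - ||x - y||^2
id1 = abs(2 * np.dot(x, y) - (n2(x) + n2(y) - n2(x - y)))
# (2): ||a x + (1-a) y||^2 = a||x||^2 + (1-a)||y||^2 - a(1-a)||x - y||^2
id2 = abs(n2(a*x + (1-a)*y) - (a*n2(x) + (1-a)*n2(y) - a*(1-a)*n2(x - y)))
# (3): ||x + y||^2 <= ||x||^2 + 2<y, x + y>; the slack is exactly ||y||^2
gap3 = n2(x) + 2 * np.dot(y, x + y) - n2(x + y)
```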

Definition 2.7

Let $X$ be a Banach space and $B$ a mapping of $X$ into $2^{X^*}$. The effective domain of $B$, denoted by $\mathrm{dom}(B)$, is given by $\mathrm{dom}(B) = \{x \in X : Bx \neq \emptyset\}$. Let $B : X \to 2^{X^*}$ be a multivalued operator on $X$. Then:

  1. the graph $G(B)$ is defined by

    $G(B) \coloneqq \{(x, u) : u \in B(x)\}$;

  2. the operator $B$ is said to be monotone if $\langle x - y, u - v \rangle \geq 0$ for all $x, y \in \mathrm{dom}(B)$, $u \in Bx$ and $v \in By$;

  3. a monotone operator $B$ on $X$ is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on $X$.

Lemma 2.8

Let $X$ be a smooth, strictly convex and reflexive Banach space. Let $C$ be a nonempty, closed and convex subset of $X$, and let $x_1 \in X$ and $z \in C$. Then, $z = P_C x_1$ if and only if

$\langle z - y, J_X(x_1 - z) \rangle \geq 0, \quad \forall y \in C$.

Lemma 2.9

[51,52] For each $x_1, \ldots, x_m \in \mathcal{H}$ and $\alpha_1, \ldots, \alpha_m \in [0, 1]$ with $\sum_{i=1}^m \alpha_i = 1$, the equality

$\|\alpha_1 x_1 + \cdots + \alpha_m x_m\|^2 = \sum_{i=1}^m \alpha_i \|x_i\|^2 - \sum_{1 \leq i < j \leq m} \alpha_i \alpha_j \|x_i - x_j\|^2$

holds.

Lemma 2.10

[53] Let $\mathcal{H}$ be a real Hilbert space and let $S : \mathcal{H} \to \mathcal{H}$ be a nonexpansive mapping with $F(S) \neq \emptyset$. Then the mapping $I - S$ is demiclosed at zero; that is, for any sequence $\{x_n\}$ in $\mathcal{H}$, $x_n \rightharpoonup x$ and $\|x_n - Sx_n\| \to 0$ imply $x \in F(S)$.

Lemma 2.11

[54] Each Hilbert space $\mathcal{H}$ satisfies the Opial condition; that is, for any sequence $\{x_n\}$ with $x_n \rightharpoonup x$, the inequality $\liminf_{n \to \infty} \|x_n - x\| < \liminf_{n \to \infty} \|x_n - y\|$ holds for every $y \in \mathcal{H}$ with $y \neq x$.

Lemma 2.12

[55] Let $\{a_n\}$ be a sequence of non-negative real numbers, $\{\gamma_n\}$ a sequence of real numbers in $(0, 1)$ with $\sum_{n=1}^{\infty} \gamma_n = \infty$, and $\{d_n\}$ a sequence of real numbers. Assume that

$a_{n+1} \leq (1 - \gamma_n) a_n + \gamma_n d_n, \quad n \geq 1$.

If $\limsup_{k \to \infty} d_{n_k} \leq 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying $\liminf_{k \to \infty}(a_{n_k+1} - a_{n_k}) \geq 0$, then $\lim_{n \to \infty} a_n = 0$.

Lemma 2.13

[56] Let $\mathcal{H}$ be a real Hilbert space. Let $B : \mathcal{H} \to 2^{\mathcal{H}}$ be a maximal monotone operator and $A : \mathcal{H} \to \mathcal{H}$ a $k$-inverse strongly monotone mapping on $\mathcal{H}$. Define $T_\lambda = (I + \lambda B)^{-1}(I - \lambda A)$, $\lambda > 0$, where $(I + \lambda B)^{-1}$ is the resolvent of $B$ of order $\lambda > 0$. Then we have:

  1. $F(T_\lambda) = (A + B)^{-1}(0)$;

  2. for $0 < s \leq \lambda$ and $x \in \mathcal{H}$, $\|x - T_s x\| \leq 2\|x - T_\lambda x\|$.
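A concrete instance of Lemma 2.13 in $\mathcal{H} = \mathbb{R}$ (our own toy example): take $A(x) = x - 1$, which is $1$-inverse strongly monotone as the gradient of $\frac{1}{2}(x-1)^2$, and $B = \partial(0.5\,|\cdot|)$, whose resolvent is the soft-thresholding map. Iterating $T_\lambda$ with $\lambda \in (0, 2k)$ then converges to the unique zero of $A + B$:

```python
import numpy as np

# Forward-backward operator T_lam = (I + lam*B)^{-1}(I - lam*A) of Lemma 2.13.
def soft(v, t):                        # (I + t * d|.|)^{-1}: soft-thresholding
    return np.sign(v) * max(abs(v) - t, 0.0)

def T(x, lam=0.5, mu=0.5):             # A(x) = x - 1, B = d(mu*|.|)
    return soft(x - lam * (x - 1.0), lam * mu)

x = 5.0
for _ in range(100):
    x = T(x)

# F(T_lam) = (A + B)^{-1}(0): the zero of 0 = x - 1 + 0.5*sign(x) is x = 0.5
```

One can check directly that $x = 0.5$ is a fixed point of `T` and that the iteration contracts toward it with factor $1 - \lambda = 0.5$ per step.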

3 Proposed methods

Let $\mathcal{H}$ be a real Hilbert space and let $X$ be a uniformly convex and smooth Banach space. Let $J_X$ be the duality mapping on $X$. Let $T : \mathcal{H} \to X$ be a bounded linear operator such that $T \neq 0$, and let $T^*$ be the adjoint operator of $T$. Let $f : \mathcal{H} \to \mathcal{H}$ be a contraction mapping with constant $\rho \in [0, 1)$, and let $S : \mathcal{H} \to \mathcal{H}$ be a nonexpansive mapping. For each $i = 1, 2, \ldots, m$, let $A_i : \mathcal{H} \to \mathcal{H}$ be a finite family of $\alpha_i$-inverse strongly monotone operators, let $B_i : \mathcal{H} \to 2^{\mathcal{H}}$ and $C_i : X \to 2^{X^*}$ be finite families of maximal monotone operators, and let $J_{r_i}^{B_i}$ be the resolvent of $B_i$ for $r_i > 0$ and $Q_{\mu_i}^{C_i}$ the metric resolvent of $C_i$ for $\mu_i > 0$. Suppose that the solution set $\Gamma \neq \emptyset$. We establish the convergence of the algorithm under the following assumptions on the control parameters:

  1. $\{\alpha_n\}, \{\delta_n\}, \{\gamma_n\} \subset (0, 1)$ such that $\alpha_n + \delta_n + \gamma_n = 1$, $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;

  2. $\liminf_{n \to \infty} \delta_n > 0$, $\liminf_{n \to \infty} \gamma_n > 0$, $\{\beta_{n,i}\} \subset (0, 1)$ with $\sum_{i=0}^{m} \beta_{n,i} = 1$ and $\liminf_{n \to \infty} \beta_{n,0}\beta_{n,i} > 0$;

  3. $0 < a \leq \tau_n \leq b < 2$, $0 < c \leq \mu_{n,i}$, and $0 < d \leq r_{n,i} \leq e < 2\alpha_i$ for each $i = 1, 2, \ldots, m$;

  4. $\theta > 0$ and $\{\varepsilon_n\}$ is a nonnegative sequence such that $\varepsilon_n = o(\alpha_n)$, i.e., $\lim_{n \to \infty} \frac{\varepsilon_n}{\alpha_n} = 0$.

Algorithm 3.1

Step 0: Select initial points $x_0, x_1 \in \mathcal{H}$ and set $n = 1$.

Step 1: Given the iterates $x_{n-1}$ and $x_n$ for each $n \geq 1$, choose $\theta_n$ such that $0 \leq \theta_n \leq \bar{\theta}_n$, where

(15) $\bar{\theta}_n \coloneqq \begin{cases} \min\left\{\theta, \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1}, \\ \theta, & \text{otherwise.} \end{cases}$

Step 2: Compute

$w_n = x_n + \theta_n(x_n - x_{n-1})$.

Step 3: Compute

$z_{n,i} = w_n - \lambda_{n,i} T^* J_X (I - Q_{\mu_{n,i}}^{C_i}) T w_n$,

where

(16) $\lambda_{n,i} = \begin{cases} \dfrac{\tau_n \|J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^2}{\|T^* J_X (I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^2}, & \text{if } T w_n \neq Q_{\mu_{n,i}}^{C_i} T w_n, \\ \lambda_i, & \text{otherwise } (\lambda_i \text{ being any nonnegative real number for each } i). \end{cases}$

Step 4: Compute

$u_n = \beta_{n,0} w_n + \sum_{i=1}^m \beta_{n,i} J_{r_{n,i}}^{B_i}(I - r_{n,i} A_i) z_{n,i}$.

Step 5: Compute

$x_{n+1} = \alpha_n f(x_n) + \delta_n x_n + \gamma_n S u_n$.

Set $n \leftarrow n + 1$ and return to Step 1.
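To illustrate the mechanics of Steps 0–5, here is a drastically simplified numerical sketch (entirely our own choices, not the paper's experiments): we take $X = \mathbb{R}^2$, which is a Hilbert space so $J_X = I$, with $m = 1$, $A_1 = 0$, $S = I$, and $B_1$, $C_1$ the normal cones of two boxes, so both resolvents reduce to clips. The algorithm then solves a split feasibility problem, and with $f \equiv 0$ the expected viscosity limit is $P_\Gamma(0) = (0.5, 0.75)$.

```python
import numpy as np

T = np.array([[1.0, 0.0], [0.0, 2.0]])                     # bounded linear operator
proj_D = lambda v: np.clip(v, [0.0, 0.0], [1.0, 1.0])      # J_r^{B_1}: resolvent of N_D
proj_Q = lambda v: np.clip(v, [0.5, 1.5], [1.0, 2.0])      # Q_mu^{C_1}: metric resolvent of N_Q
f = lambda v: np.zeros(2)                                  # contraction (rho = 0)
S = lambda v: v                                            # nonexpansive

x_prev, x = np.array([5.0, -3.0]), np.array([4.0, 2.0])
theta_max, tau = 0.5, 1.0
for n in range(1, 2001):
    alpha, eps = 1.0 / (n + 1), 1.0 / (n + 1) ** 2          # eps_n = o(alpha_n)
    delta = gamma = (1 - alpha) / 2
    # Steps 1-2: inertial extrapolation with theta_n <= theta_bar_n from (15)
    dx = np.linalg.norm(x - x_prev)
    theta = theta_max if dx == 0 else min(theta_max, eps / dx)
    w = x + theta * (x - x_prev)
    # Step 3: self-adaptive step size (16), J_X = I
    r = T @ w - proj_Q(T @ w)
    g = T.T @ r
    lam = tau * np.dot(r, r) / np.dot(g, g) if np.dot(r, r) > 0 else 1.0
    z = w - lam * g
    # Step 4: beta_{n,0} = beta_{n,1} = 1/2, A_1 = 0
    u = 0.5 * w + 0.5 * proj_D(z)
    # Step 5: viscosity step
    x_prev, x = x, alpha * f(x) + delta * x + gamma * S(u)
```

Note that no estimate of $\|T\|$ is used anywhere: the step size `lam` is computed from the current residual, which is exactly the self-adaptive feature of the method.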

Remark 3.2

By conditions (A1) and (A4), one can easily verify from (15) that

(17) $\lim_{n \to \infty} \theta_n \|x_n - x_{n-1}\| = 0 \quad \text{and} \quad \lim_{n \to \infty} \dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$.

Next, we show that the sequence of step sizes $\{\lambda_{n,i}\}$ of the algorithm defined by (16) is well defined.

Lemma 3.3

The step-size sequence $\{\lambda_{n,i}\}$ of Algorithm 3.1, defined by (16), is well defined.

Proof

Let $p \in \Gamma$. Then, we have $p = J_{r_{n,i}}^{B_i}(I - r_{n,i}A_i)p$ and $Tp = Q_{\mu_{n,i}}^{C_i}Tp$ for all $n \in \mathbb{N}$ and $i = 1, 2, \ldots, m$. From property (13) of the metric resolvent, we have that

$\langle Q_{\mu_{n,i}}^{C_i} T w_n - Tp, J_X(Tw_n - Q_{\mu_{n,i}}^{C_i} T w_n) \rangle \geq 0, \quad Tp \in C_i^{-1}(0), \ i = 1, 2, \ldots, m$.

Therefore, it follows that

$\|w_n - p\| \|T^* J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\| \geq \langle w_n - p, T^* J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n \rangle = \langle Tw_n - Tp, J_X(Tw_n - Q_{\mu_{n,i}}^{C_i} T w_n) \rangle = \langle Tw_n - Q_{\mu_{n,i}}^{C_i} Tw_n, J_X(Tw_n - Q_{\mu_{n,i}}^{C_i} T w_n) \rangle + \langle Q_{\mu_{n,i}}^{C_i} Tw_n - Tp, J_X(Tw_n - Q_{\mu_{n,i}}^{C_i} T w_n) \rangle \geq \|Tw_n - Q_{\mu_{n,i}}^{C_i} T w_n\|^2$.

Hence, it follows that $\|T^* J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\| > 0$ whenever $Tw_n - Q_{\mu_{n,i}}^{C_i} T w_n \neq 0$. Consequently, $\{\lambda_{n,i}\}$ is well defined.□

4 Convergence analysis

Lemma 4.1

Let { x n } be a sequence generated by Algorithm 3.1 under Assumptions (A1)–(A4). Then, { x n } is bounded.

Proof

Let $p \in \Gamma$. Then, it follows that

$p = J_{r_{n,i}}^{B_i}(I - r_{n,i}A_i)p, \quad Tp = Q_{\mu_{n,i}}^{C_i}Tp, \quad \text{and} \quad p = Sp$

for all $n \in \mathbb{N}$ and $i = 1, 2, \ldots, m$. From the definition of $w_n$ in Step 2 and by applying the triangle inequality, we have

(18) $\|w_n - p\| = \|x_n + \theta_n(x_n - x_{n-1}) - p\| \leq \|x_n - p\| + \theta_n\|x_n - x_{n-1}\| = \|x_n - p\| + \alpha_n \cdot \dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|$.

Since, by Remark 3.2, $\lim_{n \to \infty} \frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, it follows that there exists a constant $M_1 > 0$ such that $\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \leq M_1$ for all $n \geq 1$. Hence, it follows from (18) that

(19) $\|w_n - p\| \leq \|x_n - p\| + \alpha_n M_1$.

From Step 3 and property (13) of the resolvent, we have

(20) $\|z_{n,i} - p\|^2 = \|w_n - \lambda_{n,i} T^* J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n - p\|^2 = \|w_n - p\|^2 - 2\lambda_{n,i}\langle Tw_n - Q_{\mu_{n,i}}^{C_i} Tw_n, J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n \rangle - 2\lambda_{n,i}\langle Q_{\mu_{n,i}}^{C_i} Tw_n - Tp, J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n \rangle + \lambda_{n,i}^2 \|T^* J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^2 \leq \|w_n - p\|^2 - \lambda_{n,i}\left[2\|J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^2 - \lambda_{n,i}\|T^* J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^2\right]$.

From the definition of $\lambda_{n,i}$ and (20), we have

(21) $\|z_{n,i} - p\|^2 \leq \|w_n - p\|^2 - \tau_n(2 - \tau_n)\dfrac{\|J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^4}{\|T^* J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^2}$.

Thus, by the assumption on $\tau_n$, we have that

(22) $\|z_{n,i} - p\|^2 \leq \|w_n - p\|^2$.

From the definition of u n in Step 4, the nonexpansivity of J r n , i B i and Lemma 2.9 we obtain

(23) u n p 2 = β n , 0 w n + i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i J r n , i B i ( I r n , i A i ) p 2 = β n , 0 w n J r n , i B i ( I r n , i A i ) p 2 + i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i J r n , i B i ( I r n , i A i ) p 2 β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 β n , 0 w n p 2 + i = 1 m β n , i z n , i p r n , i ( A i z n , i A i p ) 2 β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 .

Next, by applying the inversely strongly monotonicity of the A i ’s, Lemma 2.6 and (22), for each i = 1 , 2 , , m , we observe that

(24) z n , i p r n , i ( A i z n , i A i p ) 2 = z n , i p 2 2 r n , i A i z n , i A i p , z n , i p + r n , i 2 A i z n , i A i p 2 z n , i p 2 2 r n , i α i A i z n , i A i p 2 + r n , i 2 A i z n , i A i p 2 = z n , i p 2 ( 2 α i r n , i ) r n , i A i z n , i A i p 2 .

Now, by applying (21), it follows from (23) and (24) that

(25) u n p 2 β n , 0 w n p 2 + i = 1 m β n , i z n , i p 2 i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2 β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 β n , 0 w n p 2 + i = 1 m β n , i w n p 2 i = 1 m β n , i τ n ( 2 τ n ) J X ( I Q Q μ n , i C i ) T w n 4 T J X ( I Q μ n , i C i ) T w n 2 i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2 β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 = w n p 2 i = 1 m β n , i τ n ( 2 τ n ) J X ( I Q Q μ n , i C i ) T w n 4 T J X ( I Q μ n , i C i ) T w n 2 i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2

(26) β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 w n p 2 .

Consequently, we have that

(27) $\|u_n - p\| \leq \|w_n - p\| \leq \|x_n - p\| + \alpha_n M_1$.

From Step 5, (27) and the nonexpansivity of S , we have

x n + 1 p = α n f ( x n ) + δ n x n + γ n S u n p α n f ( x n ) p + δ n x n p + γ n S u n p α n f ( x n ) f ( p ) + α n f ( p ) p + δ n x n p + γ n u n p α n f ( x n ) f ( p ) + α n f ( p ) p + δ n x n p + γ n ( x n p + α n M 1 ) α n ρ x n p + α n f ( p ) p + ( 1 α n ) x n p + γ n α n M 1 ( 1 α n ( 1 ρ ) ) x n p + α n ( 1 ρ ) f ( p ) p 1 ρ + M 1 1 ρ max x n p , f ( p ) p + M 1 1 ρ max x 0 p , f ( p ) p + M 1 1 ρ .

Hence, the sequence $\{x_n\}$ is bounded. Consequently, $\{w_n\}$, $\{z_{n,i}\}$ and $\{u_n\}$ are bounded.

Lemma 4.2

Let $\{x_n\}$ be a sequence generated by Algorithm 3.1 under Assumptions (A1)–(A4). Then, for all $p \in \Gamma$ and $n \in \mathbb{N}$, we have:

$\|x_{n+1} - p\|^2 \leq (1 - \eta_n)\|x_n - p\|^2 + \eta_n\left[\dfrac{\alpha_n}{2(1-\rho)}M_3 + \dfrac{3M_2\gamma_n(1-\alpha_n)}{2(1-\rho)}\cdot\dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \dfrac{1}{1-\rho}\langle f(p) - p, x_{n+1} - p\rangle\right] - \sigma_n\left[\sum_{i=1}^m \beta_{n,i}\tau_n(2-\tau_n)\dfrac{\|J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^4}{\|T^* J_X(I - Q_{\mu_{n,i}}^{C_i}) T w_n\|^2} + \sum_{i=1}^m \beta_{n,i}(2\alpha_i - r_{n,i})r_{n,i}\|A_i z_{n,i} - A_i p\|^2 + \beta_{n,0}\sum_{i=1}^m \beta_{n,i}\|J_{r_{n,i}}^{B_i}(I - r_{n,i}A_i) z_{n,i} - w_n\|^2\right]$,

where $\eta_n = \dfrac{2\alpha_n(1-\rho)}{1 - \alpha_n\rho}$ and $\sigma_n = \dfrac{\gamma_n(1-\alpha_n)}{1 - \alpha_n\rho}$.

Proof

Let p Γ . Then from Step 2, by applying the Cauchy-Schwartz inequality and Lemma 2.6, we obtain

(28) w n p 2 = x n + θ n ( x n x n 1 ) p 2 = x n p 2 + θ n 2 x n x n 1 2 + 2 θ n x n p , x n x n 1 x n p 2 + θ n 2 x n x n 1 2 + 2 θ n x n x n 1 x n p = x n p 2 + θ n x n x n 1 ( θ n x n x n 1 + 2 x n p ) x n p 2 + 3 M 2 θ n x n x n 1 = x n p 2 + 3 M 2 α n θ n α n x n x n 1 ,

where M 2 sup n N { x n p , θ n x n x n 1 } > 0 .

By applying Lemma 2.6, (25), and (28) we have

x n + 1 p 2 = α n f ( x n ) + δ n x n + γ n S u n p 2 = α n ( f ( x n ) p ) + δ n ( x n p ) + γ n ( S u n p ) 2 δ n ( x n p ) + γ n ( S u n p ) 2 + 2 α n f ( x n ) p , x n + 1 p = δ n 2 x n p 2 + γ n 2 S u n p 2 + 2 δ n γ n x n p , S u n p + 2 α n f ( x n ) p , x n + 1 p δ n 2 x n p 2 + γ n 2 S u n p 2 + 2 δ n γ n x n p S u n p + 2 α n f ( x n ) p , x n + 1 p δ n 2 x n p 2 + γ n 2 S u n p 2 + δ n γ n ( x n p 2 + S u n p 2 ) + 2 α n f ( x n ) p , x n + 1 p δ n ( δ n + γ n ) x n p 2 + γ n ( γ n + δ n ) u n p 2 + 2 α n f ( x n ) p , x n + 1 p = δ n ( 1 α n ) x n p 2 + γ n ( 1 α n ) u n p 2 + 2 α n f ( x n ) f ( p ) , x n + 1 p + 2 α n f ( p ) p , x n + 1 p δ n ( 1 α n ) x n p 2 + γ n ( 1 α n ) w n p 2 i = 1 m β n , i τ n ( 2 τ n ) J X ( I Q Q μ n , i C i ) T w n 4 T J X ( I Q μ n , i C i ) T w n 2 i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2 β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 + 2 α n ρ x n p x n + 1 p + 2 α n f ( p ) p , x n + 1 p δ n ( 1 α n ) x n p 2 + γ n ( 1 α n ) x n p 2 + 3 M 2 α n θ n α n x n x n 1 i = 1 m β n , i τ n ( 2 τ n ) J X ( I Q Q μ n , i C i ) T w n 4 T J X ( I Q μ n , i C i ) T w n 2 i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2 β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 + α n ρ ( x n p 2 + x n + 1 p 2 ) + 2 α n f ( p ) p , x n + 1 p = ( ( 1 α n ) 2 + α n ρ ) x n p 2 + α n ρ x n + 1 p 2 + 3 M 2 γ n ( 1 α n ) α n θ n α n x n x n 1 + 2 α n f ( p ) p , x n + 1 p γ n ( 1 α n ) i = 1 m β n , i τ n ( 2 τ n ) J X ( I Q Q μ n , i C i ) T w n 4 T J X ( I Q μ n , i C i ) T w n 2 + i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2 + β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 .

From this we obtain

x n + 1 p 2 ( 1 2 α n + α n 2 + α n ρ ) ( 1 α n ρ ) x n p 2 + 3 M 2 γ n ( 1 α n ) ( 1 α n ρ ) α n θ n α n x n x n 1 + 2 α n ( 1 α n ρ ) f ( p ) p , x n + 1 p γ n ( 1 α n ) ( 1 α n ρ ) i = 1 m β n , i τ n ( 2 τ n ) J X ( I Q Q μ n , i C i ) T w n 4 T J X ( I Q μ n , i C i ) T w n 2 + i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2 + β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 = ( 1 2 α n + α n ρ ) ( 1 α n ρ ) x n p 2 + α n 2 ( 1 α n ρ ) x n p 2 + 3 M 2 γ n ( 1 α n ) ( 1 α n ρ ) α n θ n α n x n x n 1 + 2 α n ( 1 α n ρ ) f ( p ) p , x n + 1 p γ n ( 1 α n ) ( 1 α n ρ ) i = 1 m β n , i τ n ( 2 τ n ) J X ( I Q Q μ n , i C i ) T w n 4 T J X ( I Q μ n , i C i ) T w n 2 + i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2 + β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 1 2 α n ( 1 ρ ) ( 1 α n ρ ) x n p 2 + 2 α n ( 1 ρ ) ( 1 α n ρ ) α n 2 ( 1 ρ ) M 3 + 3 M 2 γ n ( 1 α n ) 2 ( 1 ρ ) θ n α n x n x n 1 + 1 ( 1 ρ ) f ( p ) p , x n + 1 p γ n ( 1 α n ) ( 1 α n ρ ) i = 1 m β n , i τ n ( 2 τ n ) J X ( I Q Q μ n , i C i ) T w n 4 T J X ( I Q μ n , i C i ) T w n 2 + i = 1 m β n , i ( 2 α i r n , i ) r n , i A i z n , i A i p 2 + β n , 0 i = 1 m β n , i J r n , i B i ( I r n , i A i ) z n , i w n 2 ,

where M 3 = sup { x n p 2 : n N } . By taking η n = 2 α n ( 1 ρ ) ( 1 α n ρ ) and σ n = γ n ( 1 α n ) ( 1 α n ρ ) , we obtain the desired result.□

Lemma 4.3

Let $p \in \Gamma$ and suppose $\{x_n\}$ is a sequence generated by Algorithm 3.1. Under Assumptions (A1)–(A4), the following inequality holds for all $n \in \mathbb{N}$:

$\|x_{n+1} - p\|^2 \leq \alpha_n\|f(x_n) - p\|^2 + (1 - \alpha_n)\|x_n - p\|^2 + 3M_2\gamma_n\alpha_n \cdot \dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| - \delta_n\gamma_n\|x_n - S u_n\|^2$.

Proof

Let p Γ . By applying Lemma 2.9, (26) and (28), from Step 5 we have

x n + 1 p 2 = α n f ( x n ) + δ n x n + γ n S u n p 2 α n f ( x n ) p 2 + δ n x n p 2 + γ n S u n p 2 δ n γ n x n S u n 2 α n f ( x n ) p 2 + δ n x n p 2 + γ n u n p 2 δ n γ n x n S u n 2 α n f ( x n ) p 2 + δ n x n p 2 + γ n x n p 2 + 3 M 2 α n θ n α n x n x n 1 δ n γ n x n S u n 2 = α n f ( x n ) p 2 + ( 1 α n ) x n p 2 + 3 M 2 γ n α n θ n α n x n x n 1 δ n γ n x n S u n 2 ,

which is the desired inequality.□

We are now in the position to give the strong convergence theorem for Algorithm 3.1.

Theorem 4.4

Let $\mathcal{H}$ be a Hilbert space, $X$ a uniformly convex and smooth Banach space and $J_X$ the duality mapping on $X$. Let $\{x_n\}$ be generated by Algorithm 3.1 and suppose Assumptions (A1)–(A4) are satisfied. Then, the sequence $\{x_n\}$ converges strongly to a point $\bar{x} \in \Gamma$, where $\bar{x} = P_\Gamma f(\bar{x})$.

Proof

Let $\bar{x} = P_\Gamma f(\bar{x})$. From Lemma 4.2, we have

(29) $\|x_{n+1} - \bar{x}\|^2 \leq (1 - \eta_n)\|x_n - \bar{x}\|^2 + \eta_n\left[\dfrac{\alpha_n}{2(1-\rho)}M_3 + \dfrac{3M_2\gamma_n(1-\alpha_n)}{2(1-\rho)}\cdot\dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \dfrac{1}{1-\rho}\langle f(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle\right]$.

Now, we claim that the sequence $\{\|x_n - \bar{x}\|\}$ converges to zero. To establish this, by Lemma 2.12 it suffices to show that $\limsup_{k \to \infty} \langle f(\bar{x}) - \bar{x}, x_{n_k+1} - \bar{x} \rangle \leq 0$ for every subsequence $\{\|x_{n_k} - \bar{x}\|\}$ of $\{\|x_n - \bar{x}\|\}$ satisfying

$\liminf_{k \to \infty} (\|x_{n_k+1} - \bar{x}\| - \|x_{n_k} - \bar{x}\|) \geq 0$.

Suppose that $\{\|x_{n_k} - \bar{x}\|\}$ is a subsequence of $\{\|x_n - \bar{x}\|\}$ such that

(30) $\liminf_{k \to \infty} (\|x_{n_k+1} - \bar{x}\| - \|x_{n_k} - \bar{x}\|) \geq 0$.

From Lemma 4.2, we have

$\sigma_{n_k}\sum_{i=1}^m \beta_{n_k,i}\tau_{n_k}(2-\tau_{n_k})\dfrac{\|J_X(I - Q_{\mu_{n_k,i}}^{C_i}) T w_{n_k}\|^4}{\|T^* J_X(I - Q_{\mu_{n_k,i}}^{C_i}) T w_{n_k}\|^2} \leq (1 - \eta_{n_k})\|x_{n_k} - \bar{x}\|^2 - \|x_{n_k+1} - \bar{x}\|^2 + \eta_{n_k}\left[\dfrac{\alpha_{n_k}}{2(1-\rho)}M_3 + \dfrac{3M_2\gamma_{n_k}(1-\alpha_{n_k})}{2(1-\rho)}\cdot\dfrac{\theta_{n_k}}{\alpha_{n_k}}\|x_{n_k} - x_{n_k-1}\| + \dfrac{1}{1-\rho}\langle f(\bar{x}) - \bar{x}, x_{n_k+1} - \bar{x}\rangle\right]$.

Since $\lim_{k \to \infty} \alpha_{n_k} = 0$, then $\lim_{k \to \infty} \eta_{n_k} = 0$. Hence, by applying (30) we obtain

$\sigma_{n_k}\sum_{i=1}^m \beta_{n_k,i}\tau_{n_k}(2-\tau_{n_k})\dfrac{\|J_X(I - Q_{\mu_{n_k,i}}^{C_i}) T w_{n_k}\|^4}{\|T^* J_X(I - Q_{\mu_{n_k,i}}^{C_i}) T w_{n_k}\|^2} \to 0, \quad \text{as } k \to \infty$.

By the conditions on the control parameters, it follows that

$\beta_{n_k,i}\tau_{n_k}(2-\tau_{n_k})\dfrac{\|J_X(I - Q_{\mu_{n_k,i}}^{C_i}) T w_{n_k}\|^4}{\|T^* J_X(I - Q_{\mu_{n_k,i}}^{C_i}) T w_{n_k}\|^2} \to 0, \quad \text{as } k \to \infty, \quad i = 1, 2, \ldots, m$.

Since 0 < a ≤ τ_{n_k} ≤ b < 2, liminf_{k→∞} β_{n_k,0}β_{n_k,i} > 0 and {‖T*J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}‖} is bounded for all i = 1, 2, …, m, we have that

(31) lim_{k→∞} ‖J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}‖ = 0, ∀ i = 1, 2, …, m.

Consequently, for i = 1 , 2 , , m we obtain

(32) ‖T*J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}‖ ≤ ‖T*‖‖J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}‖ = ‖T‖‖J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}‖ → 0, k → ∞.

Following similar argument, from Lemma 4.2 we obtain

(33) ‖A_i z_{n_k,i} − A_i x̄‖ → 0, k → ∞, i = 1, 2, …, m,

and

(34) ‖J_{r_{n_k,i}}^{B_i}(I − r_{n_k,i}A_i)z_{n_k,i} − w_{n_k}‖ → 0, k → ∞, i = 1, 2, …, m.

Also, from Lemma 4.3 we obtain

δ_{n_k}γ_{n_k}‖x_{n_k} − Su_{n_k}‖² ≤ α_{n_k}‖f(x_{n_k}) − x̄‖² + (1 − α_{n_k})‖x_{n_k} − x̄‖² − ‖x_{n_k+1} − x̄‖² + 3M₂γ_{n_k}α_{n_k}(θ_{n_k}/α_{n_k})‖x_{n_k} − x_{n_k−1}‖.

By Remark 3.2, (30) and the conditions on the control parameters, we obtain

(35) ‖x_{n_k} − Su_{n_k}‖ → 0, as k → ∞.

From Step 3 and (32), we have

(36) ‖z_{n_k,i} − w_{n_k}‖ = ‖w_{n_k} − λ_{n_k,i}T*J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k} − w_{n_k}‖ = λ_{n_k,i}‖T*J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}‖ → 0, k → ∞, i = 1, 2, …, m.

Also, from Step 4 and (34) we obtain

(37) ‖u_{n_k} − w_{n_k}‖ ≤ β_{n_k,0}‖w_{n_k} − w_{n_k}‖ + Σ_{i=1}^m β_{n_k,i}‖J_{r_{n_k,i}}^{B_i}(I − r_{n_k,i}A_i)z_{n_k,i} − w_{n_k}‖ → 0, as k → ∞.

From (36) and (37), we obtain

‖z_{n_k,i} − u_{n_k}‖ → 0, as k → ∞.

Also from Remark 3.2, we have

(38) ‖w_{n_k} − x_{n_k}‖ = θ_{n_k}‖x_{n_k} − x_{n_k−1}‖ → 0, as k → ∞.

From (36), (37), and (38) we have

(39) ‖z_{n_k,i} − x_{n_k}‖ → 0, ‖u_{n_k} − x_{n_k}‖ → 0, as k → ∞.

Also, from (35) and (39) we obtain

(40) ‖u_{n_k} − Su_{n_k}‖ → 0, k → ∞.

By applying (35) and the condition on α n , from Step 5 we obtain

(41) ‖x_{n_k+1} − x_{n_k}‖ = ‖α_{n_k}f(x_{n_k}) + δ_{n_k}x_{n_k} + γ_{n_k}Su_{n_k} − x_{n_k}‖ ≤ α_{n_k}‖f(x_{n_k}) − x_{n_k}‖ + δ_{n_k}‖x_{n_k} − x_{n_k}‖ + γ_{n_k}‖Su_{n_k} − x_{n_k}‖ → 0, as k → ∞.

To complete the proof, we need to establish that w_ω(x_n) ⊂ Γ. Let x* ∈ w_ω(x_n) be an arbitrary element. Then, there exists a subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ x* as k → ∞. From (38), we have that w_{n_k} ⇀ x* as k → ∞. Since T is bounded and linear, we have

(42) Tw_{n_k} ⇀ Tx*.

From (31), we have

(43) lim_{k→∞} ‖(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}‖ = lim_{k→∞} ‖J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}‖ = 0, ∀ i = 1, 2, …, m.

This together with (42) implies that Q_{μ_{n_k,i}}^{C_i}Tw_{n_k} ⇀ Tx*, as k → ∞, for each i = 1, 2, …, m. Since Q_{μ_{n_k,i}}^{C_i} is the metric resolvent of C_i for μ_{n_k,i} > 0, we have that

(1/μ_{n_k,i}) J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k} ∈ C_i Q_{μ_{n_k,i}}^{C_i}Tw_{n_k}

for all i = 1, 2, …, m. Let v ∈ X. By the monotonicity of each C_i, it follows that

0 ≤ ⟨v − Q_{μ_{n_k,i}}^{C_i}Tw_{n_k}, v* − (1/μ_{n_k,i})J_X(I − Q_{μ_{n_k,i}}^{C_i})Tw_{n_k}⟩,

for all v* ∈ C_i(v). Passing to the limit as k → ∞, and applying (43) together with the fact that 0 < c ≤ μ_{n,i}, we obtain 0 ≤ ⟨v − Tx*, v*⟩ for all v* ∈ C_i(v), i = 1, 2, …, m. Since each C_i is maximal monotone, we have that Tx* ∈ C_i^{-1}(0) for all i = 1, 2, …, m. This implies that

(44) x* ∈ ∩_{i=1}^m T^{-1}(C_i^{-1}(0)).

Next, we show that x* ∈ ∩_{i=1}^m (A_i + B_i)^{-1}(0). Let T_{r_{n,i}} = J_{r_{n,i}}^{B_i}(I − r_{n,i}A_i); then, by applying (34) and (36), we have

(45) ‖T_{r_{n_k,i}}z_{n_k,i} − z_{n_k,i}‖ ≤ ‖T_{r_{n_k,i}}z_{n_k,i} − w_{n_k}‖ + ‖w_{n_k} − z_{n_k,i}‖ → 0, k → ∞, for all i = 1, 2, …, m.

By the condition on r_{n,i}, there exists r_i > 0 such that r_{n_k,i} ≥ r_i for all k ≥ 1 and i = 1, 2, …, m. Applying Lemma 2.13(ii), we have

lim_{k→∞} ‖T_{r_i}z_{n_k,i} − z_{n_k,i}‖ ≤ 2 lim_{k→∞} ‖T_{r_{n_k,i}}z_{n_k,i} − z_{n_k,i}‖ = 0, ∀ i = 1, 2, …, m.

We know that T_{r_i} is nonexpansive and, from (39), that z_{n_k,i} ⇀ x* for all i = 1, 2, …, m. By the demiclosedness of I − T_{r_i}, we have that x* ∈ F(T_{r_i}) for all i = 1, 2, …, m. By Lemma 2.13(i), we have that x* ∈ (A_i + B_i)^{-1}(0) for all i = 1, 2, …, m. This implies that

(46) x* ∈ ∩_{i=1}^m (A_i + B_i)^{-1}(0).

Next, we show that x* ∈ F(S). From (39), we have that u_{n_k} ⇀ x*. By (40) and the demiclosedness of I − S at zero, we have that x* ∈ F(S). Hence, by (44) and (46), we have that w_ω(x_n) ⊂ Γ.

From (39), we have w_ω{z_{n,i}} = w_ω{x_n}. By the boundedness of {x_{n_k}}, there exists a subsequence {x_{n_k_j}} of {x_{n_k}} such that x_{n_k_j} ⇀ x† and

(47) lim_{j→∞} ⟨f(x̄) − x̄, x_{n_k_j} − x̄⟩ = limsup_{k→∞} ⟨f(x̄) − x̄, x_{n_k} − x̄⟩ = limsup_{k→∞} ⟨f(x̄) − x̄, z_{n_k,i} − x̄⟩.

Since x̄ = P_Γ f(x̄), it follows that

(48) limsup_{k→∞} ⟨f(x̄) − x̄, x_{n_k} − x̄⟩ = lim_{j→∞} ⟨f(x̄) − x̄, x_{n_k_j} − x̄⟩ = ⟨f(x̄) − x̄, x† − x̄⟩ ≤ 0.

Thus, from (41) and (48) we have

(49) limsup_{k→∞} ⟨f(x̄) − x̄, x_{n_k+1} − x̄⟩ ≤ limsup_{k→∞} ⟨f(x̄) − x̄, x_{n_k+1} − x_{n_k}⟩ + limsup_{k→∞} ⟨f(x̄) − x̄, x_{n_k} − x̄⟩ = ⟨f(x̄) − x̄, x† − x̄⟩ ≤ 0.

Applying Lemma 2.12 to (29), by Remark 3.2 and (49), together with the fact that lim_{n→∞} α_n = 0, we have lim_{n→∞} ‖x_n − x̄‖ = 0, as required. This completes the proof.□

Taking A_i = 0 for all i = 1, 2, …, m, we obtain the following consequence.

Algorithm 4.5

Step 0: Select initial points x₀, x₁ ∈ ℋ and set n = 1.

Step 1: Given the iterates x_{n−1} and x_n for each n ≥ 1, choose θ_n such that 0 ≤ θ_n ≤ θ̄_n, where

(50) θ̄_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ̄_n = θ otherwise.

Step 2: Compute

w_n = x_n + θ_n(x_n − x_{n−1}).

Step 3: Compute

z_{n,i} = w_n − λ_{n,i} T*J_X(I − Q_{μ_{n,i}}^{C_i})Tw_n,

where

λ_{n,i} = τ_n ‖J_X(I − Q_{μ_{n,i}}^{C_i})Tw_n‖² / ‖T*J_X(I − Q_{μ_{n,i}}^{C_i})Tw_n‖².

Step 4: Compute

u_n = β_{n,0}w_n + Σ_{i=1}^m β_{n,i} J_{r_{n,i}}^{B_i} z_{n,i}.

Step 5: Compute

x_{n+1} = α_n f(x_n) + δ_n x_n + γ_n Su_n.

Set n ← n + 1 and return to Step 1.
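To make the structure of Algorithm 4.5 concrete, here is a minimal finite-dimensional sketch in Python. It assumes ℋ = X = R^d (so J_X is the identity and the metric resolvent Q_μ^{C_i} reduces to (I + μC_i)^{-1}); the operators T, B_i, C_i, S and f below are simple illustrative choices, not objects taken from the paper, and the common solution of the model problem is 0.

```python
import numpy as np

d, m = 4, 2
rng = np.random.default_rng(0)
T = rng.standard_normal((d, d))                    # bounded linear operator (adjoint T* = T.T)
S = lambda x: 0.5 * x                              # nonexpansive mapping with F(S) = {0}
f = lambda x: x / 3.0                              # contraction with coefficient rho = 1/3
res_B = lambda x, r, i: x / (1.0 + 3.0 * r / (2 * (i + 1)))   # J_r^{B_i} for B_i x = 3x/(2(i+1))
res_C = lambda y, mu, i: y / (1.0 + 5.0 * mu / (2 * (i + 1))) # Q_mu^{C_i} for C_i y = 5y/(2(i+1))

mu = r = 0.01
x_prev = rng.standard_normal(d)                    # x_0
x = rng.standard_normal(d)                         # x_1
for n in range(1, 201):
    alpha = 1.0 / (3 * n + 1)
    delta, gamma = n / (3 * n + 1.0), 2 * n / (3 * n + 1.0)
    tau, eps = n / (2 * n + 1.0), 1.0 / (3 * n + 1) ** 3
    # Step 1: inertial parameter theta_n chosen as in (50)
    dist = np.linalg.norm(x - x_prev)
    theta = min(0.6, eps / dist) if dist > 0 else 0.6
    # Step 2: inertial extrapolation
    w = x + theta * (x - x_prev)
    # Steps 3-4: self-adaptive step sizes and averaged resolvent step
    u = w / (m + 1.0)                              # beta_{n,0} = beta_{n,i} = 1/(m+1) here
    for i in range(m):
        b = T @ w - res_C(T @ w, mu, i)            # (I - Q_mu^{C_i}) T w
        g = T.T @ b                                # T* J_X (I - Q_mu^{C_i}) T w
        gn2 = np.dot(g, g)
        lam = tau * np.dot(b, b) / gn2 if gn2 > 0 else 0.0   # self-adaptive lambda_{n,i}
        z = w - lam * g
        u += res_B(z, r, i) / (m + 1.0)
    # Step 5: viscosity step
    x_prev, x = x, alpha * f(x) + delta * x + gamma * S(u)

print(np.linalg.norm(x))                           # the iterates approach the common solution 0
```

Note that the step size λ_{n,i} is computed from the current iterate alone, so no estimate of ‖T‖ is needed; the guard `gn2 > 0` covers the degenerate case T*J_X(I − Q)Tw = 0.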

Corollary 4.6

Let ℋ be a Hilbert space, X a uniformly convex and smooth Banach space and J_X the duality mapping on X. Let {x_n} be generated by Algorithm 4.5 and suppose Assumptions (A1)–(A4) are satisfied. Then, the sequence {x_n} converges strongly to a point x̄ ∈ Ω, where x̄ = P_Ω f(x̄) and Ω = ∩_{i=1}^m B_i^{-1}(0) ∩ ∩_{i=1}^m T^{-1}(C_i^{-1}(0)) ∩ F(S).

5 Application

In this section, we apply our result to the split feasibility problem (SFP) and the split minimization problem (SMP).

5.1 SFP

Let ℋ₁ and ℋ₂ be two real Hilbert spaces and let D and Q be nonempty closed convex subsets of ℋ₁ and ℋ₂, respectively. The SFP is defined as follows:

(51) find x* ∈ D such that Tx* ∈ Q,

where T : ℋ₁ → ℋ₂ is a bounded linear operator.

Let Q be a nonempty closed convex subset of a real Hilbert space ℋ and let i_Q be the indicator function of Q, that is,

i_Q(x) = 0 if x ∈ Q; i_Q(x) = +∞ if x ∉ Q.

Moreover, we define the normal cone N_Q u of Q at u ∈ Q as follows:

N_Q u = {z ∈ ℋ : ⟨z, v − u⟩ ≤ 0, ∀ v ∈ Q}.

It is known that i_Q is a proper, lower semicontinuous and convex function on ℋ. Hence, the subdifferential ∂i_Q of i_Q is a maximal monotone operator. Therefore, we can define the resolvent J_r^{∂i_Q} of ∂i_Q for r > 0 as follows:

J_r^{∂i_Q} x = (I + r∂i_Q)^{-1} x, ∀ x ∈ ℋ.

Moreover, for each x ∈ Q, we have

∂i_Q x = {z ∈ ℋ : i_Q x + ⟨z, u − x⟩ ≤ i_Q u, ∀ u ∈ ℋ} = {z ∈ ℋ : ⟨z, u − x⟩ ≤ 0, ∀ u ∈ Q} = N_Q x.

Hence, for all r > 0, we derive

u = J_r^{∂i_Q} x ⟺ x ∈ u + r∂i_Q u ⟺ x − u ∈ rN_Q u ⟺ ⟨x − u, z − u⟩ ≤ 0, ∀ z ∈ Q ⟺ u = P_Q x.
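The chain of equivalences above says that, for the indicator function, the resolvent is exactly the metric projection, independently of r. A quick numerical check in Python, for the illustrative choice Q = [0, 1]^d (a set chosen for the demo, not one from the paper):

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # P_Q for the box Q = [lo, hi]^d: coordinate-wise clipping
    return np.clip(x, lo, hi)

def resolvent_iQ(x, r, lo=0.0, hi=1.0, grid=2001):
    # Brute-force J_r^{di_Q}: minimize i_Q(z) + ||x - z||^2/(2r) coordinate-wise,
    # where restricting the grid to [lo, hi] enforces i_Q(z) = 0 on Q (and +inf off Q).
    zs = np.linspace(lo, hi, grid)
    return np.array([zs[np.argmin((xi - zs) ** 2 / (2 * r))] for xi in x])

x = np.array([-0.7, 0.4, 1.9])
for r in (0.1, 1.0, 10.0):             # the identity J_r^{di_Q} = P_Q holds for every r > 0
    assert np.allclose(resolvent_iQ(x, r), proj_box(x), atol=1e-3)
print(proj_box(x))                     # the projection of x onto [0,1]^3
```

The loop over r makes the point explicit: scaling the quadratic term by 1/(2r) does not move the constrained minimizer, which is why J_r^{∂i_Q} = P_Q for all r > 0.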

Now, by applying Corollary 4.6 to the case in which the sequences {μ_{n,i}} and {r_{n,i}} are taken to be the constant sequences {μ_i} and {r_i}, respectively, for each i = 1, 2, …, m, we obtain the following result for approximating a common solution of a finite family of SFPs and an FPP for a nonexpansive mapping between Hilbert and Banach spaces.

Theorem 5.1

Let D_i and Q_i be finite families of nonempty, closed and convex subsets of a Hilbert space ℋ and of a uniformly convex and smooth Banach space X, respectively, for i = 1, 2, …, m. Let J_X be the duality mapping on X. Let T : ℋ → X be a bounded linear operator such that T ≠ 0 and let T* be the adjoint operator of T. Let f : ℋ → ℋ be a contraction mapping with coefficient ρ ∈ (0, 1). Let S : ℋ → ℋ be a nonexpansive mapping and suppose that the solution set Γ = Φ ∩ F(S) ≠ ∅, where Φ = {x ∈ ∩_{i=1}^m D_i : Tx ∈ ∩_{i=1}^m Q_i}. Let {x_n} be a sequence generated as follows:

Algorithm 5.2

Step 0: Select initial points x₀, x₁ ∈ ℋ and set n = 1.

Step 1: Given the iterates x_{n−1} and x_n for each n ≥ 1, choose θ_n such that 0 ≤ θ_n ≤ θ̄_n, where

(52) θ̄_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ̄_n = θ otherwise.

Step 2: Compute

w_n = x_n + θ_n(x_n − x_{n−1}).

Step 3: Compute

z_{n,i} = w_n − λ_{n,i} T*J_X(I − P_{Q_i})Tw_n,

where

λ_{n,i} = τ_n ‖J_X(I − P_{Q_i})Tw_n‖² / ‖T*J_X(I − P_{Q_i})Tw_n‖².

Step 4: Compute

u_n = β_{n,0}w_n + Σ_{i=1}^m β_{n,i} P_{D_i} z_{n,i}.

Step 5: Compute

x_{n+1} = α_n f(x_n) + δ_n x_n + γ_n Su_n.

Set n ← n + 1 and return to Step 1.

Suppose Assumptions (A1)–(A4) are satisfied. Then the sequence {x_n} generated by Algorithm 5.2 converges strongly to a point x̄ ∈ Γ, where x̄ = P_Γ f(x̄).

5.2 Split minimization problem (SMP)

Let ℋ₁ and ℋ₂ be real Hilbert spaces and let T : ℋ₁ → ℋ₂ be a bounded linear operator. Given proper, lower semicontinuous and convex functions f₁ : ℋ₁ → ℝ ∪ {+∞} and f₂ : ℋ₂ → ℝ ∪ {+∞}, the SMP is defined as: find x* ∈ ℋ₁ such that

(53) x* ∈ argmin_{x∈ℋ₁} f₁(x) and Tx* ∈ argmin_{y∈ℋ₂} f₂(y).

Moudafi and Thakur [17] first introduced the SMP, which has attracted a lot of attention from researchers in recent years (see [17,57,58] and references therein). The SMP has been applied to the study of many problems in applied science, including Fourier regularization, multi-resolution sparse regularization and alternating projection signal synthesis problems, among others.

In a real Hilbert space ℋ, the proximal operator of f is defined by

prox_{λ,f}(x) = argmin_{z∈ℋ} { f(z) + (1/(2λ))‖x − z‖² }, ∀ x ∈ ℋ, λ > 0.

It is well known that

(54) prox_{λ,f}(x) = (I + λ∂f)^{-1}(x) = J_λ^{∂f}(x),

where f is the subdifferential of f defined by

∂f(x) = {z ∈ ℋ : f(x) − f(y) ≤ ⟨z, x − y⟩, ∀ y ∈ ℋ},

for each x ∈ ℋ. From [59], ∂f is a maximal monotone operator and prox_{λ,f} is firmly nonexpansive.
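As a concrete instance of (54), the proximal operator of the ℓ₁-norm f(x) = ‖x‖₁ (a standard textbook example, not one used in the paper) is coordinate-wise soft-thresholding. The sketch below checks this closed form against a brute-force minimization of f(z) + ‖x − z‖²/(2λ):

```python
import numpy as np

def prox_l1(x, lam):
    # Closed form of prox_{lam, ||.||_1}: soft-thresholding, coordinate-wise
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_brute(x, lam, grid=4001, span=5.0):
    # Direct minimization of |z| + (x - z)^2/(2*lam) over a fine grid, per coordinate
    zs = np.linspace(-span, span, grid)
    return np.array([zs[np.argmin(np.abs(zs) + (xi - zs) ** 2 / (2 * lam))] for xi in x])

x, lam = np.array([2.0, -0.3, 0.05]), 0.5
assert np.allclose(prox_l1(x, lam), prox_brute(x, lam), atol=1e-2)
print(prox_l1(x, lam))        # entries with |x_j| <= lam are sent to 0
```

The firm nonexpansiveness noted above is what makes such prox maps usable in place of the resolvents J_{r}^{B_i} in the algorithm.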

By setting B_i = ∂f_i and C_i = ∂g_i in Corollary 4.6 for each i = 1, 2, …, m, we obtain the following result for approximating a common solution of a finite family of SMPs and an FPP for a nonexpansive mapping between Hilbert and Banach spaces.

Theorem 5.3

Let ℋ be a real Hilbert space and let X be a uniformly convex and smooth Banach space. Let J_X be the duality mapping on X. Let T : ℋ → X be a bounded linear operator such that T ≠ 0 and let T* be the adjoint operator of T. Let f : ℋ → ℋ be a contraction mapping with coefficient ρ ∈ (0, 1), and let S : ℋ → ℋ be a nonexpansive mapping. Let f_i : ℋ → ℝ ∪ {+∞} and g_i : X → ℝ ∪ {+∞} be finite families of proper, convex and lower semicontinuous functions for i = 1, 2, …, m. Suppose that the solution set Γ = Φ ∩ F(S) ≠ ∅, where Φ = {x ∈ ∩_{i=1}^m argmin f_i : Tx ∈ ∩_{i=1}^m argmin g_i}. Let {x_n} be a sequence generated as follows:

Algorithm 5.4

Step 0: Select initial points x₀, x₁ ∈ ℋ and set n = 1.

Step 1: Given the iterates x_{n−1} and x_n for each n ≥ 1, choose θ_n such that 0 ≤ θ_n ≤ θ̄_n, where

(55) θ̄_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ̄_n = θ otherwise.

Step 2: Compute

w_n = x_n + θ_n(x_n − x_{n−1}).

Step 3: Compute

z_{n,i} = w_n − λ_{n,i} T*J_X(I − prox_{μ_{n,i},g_i})Tw_n,

where

λ_{n,i} = τ_n ‖J_X(I − prox_{μ_{n,i},g_i})Tw_n‖² / ‖T*J_X(I − prox_{μ_{n,i},g_i})Tw_n‖².

Step 4: Compute

u_n = β_{n,0}w_n + Σ_{i=1}^m β_{n,i} prox_{r_{n,i},f_i} z_{n,i}.

Step 5: Compute

x_{n+1} = α_n f(x_n) + δ_n x_n + γ_n Su_n.

Set n ← n + 1 and return to Step 1.

Suppose Assumptions (A1)–(A4) are satisfied. Then the sequence {x_n} generated by Algorithm 5.4 converges strongly to a point x̄ ∈ Γ, where x̄ = P_Γ f(x̄).

6 Numerical experiment

In this section, we provide a numerical example in infinite dimensional spaces to demonstrate the performance of Algorithm 3.1 and then compare it with the method of Kazmi and Rizvi [33, Theorem 3.1] (see Appendix 7.1), the method of Sitthithakerngkiet et al. [60, Theorem 3.4] (see Appendix 7.2), the method of Long et al. [42, Algorithm 5] (see Appendix 7.3) and the method of Byrne et al. [29, Algorithm 4.4] (see Appendix 7.4).

We perform all implementations using Matlab R2016b, installed on a personal computer with an Intel(R) Core(TM) i5-2600 CPU @ 2.30 GHz and 8.00 GB RAM running the Windows 10 operating system. In Table 1, "Iter." denotes the number of iterations and "CPU" the CPU time in seconds.

Example 6.1

Let ℋ = l₂(ℝ) = X, where l₂(ℝ) = {x = (x₁, x₂, …, x_n, …), x_j ∈ ℝ : Σ_{j=1}^∞ |x_j|² < ∞}, with ‖x‖₂ = (Σ_{j=1}^∞ |x_j|²)^{1/2} for all x ∈ l₂(ℝ). Let T : ℋ → X be defined by Tx = (3/2)x. For i = 1, 2, …, 5, define A_i : ℋ → ℋ by A_i x = x/(2i), B_i : ℋ → ℋ by B_i x = (3/(2i))x, and C_i : X → X by C_i x = (5/(2i))x. Then {A_i} is a finite family of inverse strongly monotone operators, and B_i and C_i are maximal monotone operators for each i = 1, 2, …, 5. Let S : ℋ → ℋ be defined by Sx = x/2. Observe that S is nonexpansive. We set f(x) = x/3, β_{n,0} = 5n/(2(3n + 2)), β_{n,i} = (n + 4)/(10(3n + 2)), α_n = 1/(3n + 1), δ_n = n/(3n + 1), γ_n = 2n/(3n + 1), ε_n = 1/(3n + 1)³, τ_n = n/(2n + 1), μ_{n,i} = r_{n,i} = r = 0.01 and θ = 0.6 in Algorithm 3.1 for each n ∈ ℕ. We take S_n(x) = x/(2n) and D = I in Appendix 7.2, θ_n = 1/(3n + 1)² in Appendix 7.3 and λ = λ_n = 0.05.

We consider the following cases for the numerical experiments of this example.

Case I: Take x₀ = (5, 1, 1/5, …), x₁ = (1/10, 1/100, 1/1000, …);

Case II: Take x₀ = (9, 1, 1/9, …), x₁ = (0.1, 0.01, 0.001, …);

Case III: Take x₀ = (2, 4/5, 8/25, …), x₁ = (0.1, 0.01, 0.001, …);

Case IV: Take x₀ = (2, 4/3, 8/9, …), x₁ = (1/10, 1/100, 1/1000, …).

We compare the performance of the algorithms using the stopping criterion ‖x_{n+1} − x_n‖ < 10⁻². We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Table 1 and Figure 1.
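For illustration, the following Python sketch re-creates the Case I run of this example in a truncated l₂ space (first 50 coordinates). It assumes the scalar operators Tx = (3/2)x, A_i x = x/(2i), B_i x = (3/(2i))x, C_i x = (5/(2i))x, Sx = x/2 and f(x) = x/3, all acting coordinate-wise; since the space is truncated and the implementation is a sketch, the iteration count need not coincide with Table 1.

```python
import numpy as np

N, m, theta_bar, r = 50, 5, 0.6, 0.01
x_prev = np.array([5.0 / 5 ** j for j in range(N)])    # Case I: x_0 = (5, 1, 1/5, ...)
x = np.array([1.0 / 10 ** (j + 1) for j in range(N)])  # x_1 = (1/10, 1/100, ...)

n, step = 0, np.inf
while step >= 1e-2 and n < 500:            # stopping criterion ||x_{n+1} - x_n|| < 10^-2
    n += 1
    alpha, tau = 1.0 / (3 * n + 1), n / (2 * n + 1.0)
    delta, gamma = n / (3 * n + 1.0), 2 * n / (3 * n + 1.0)
    eps = 1.0 / (3 * n + 1) ** 3
    d = np.linalg.norm(x - x_prev)
    theta = min(theta_bar, eps / d) if d > 0 else theta_bar
    w = x + theta * (x - x_prev)           # inertial extrapolation
    beta0 = 5 * n / (2.0 * (3 * n + 2))
    beta = (n + 4) / (10.0 * (3 * n + 2))  # beta0 + 5*beta = 1
    u = beta0 * w
    for i in range(1, m + 1):
        Tw = 1.5 * w
        resC = Tw / (1.0 + 5 * r / (2 * i))          # Q_r^{C_i} T w
        g = 1.5 * (Tw - resC)                         # T*(I - Q_r^{C_i})T w  (T* = T)
        gn = np.linalg.norm(g)
        lam = tau * np.linalg.norm(Tw - resC) ** 2 / gn ** 2 if gn > 0 else 0.0
        z = w - lam * g
        fb = (z - r * z / (2 * i)) / (1.0 + 3 * r / (2 * i))  # J_r^{B_i}(I - r A_i) z
        u += beta * fb
    x_new = alpha * (x / 3.0) + delta * x + gamma * (0.5 * u)  # viscosity step
    step = np.linalg.norm(x_new - x)
    x_prev, x = x, x_new

print(n, np.linalg.norm(x))                # terminates after a handful of iterations
```

All operators here are positive multiples of the identity, so the unique common solution is 0 and the iterates decay geometrically toward it.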

Table 1

Numerical results for Example 6.1

                        Appendix 7.1  Appendix 7.2  Appendix 7.3  Appendix 7.4  Algorithm 3.1
Case I    CPU time (s)  0.0656        0.0104        0.0188        0.0152        0.0226
          No. of Iter.  9             5             5             30            4
Case II   CPU time (s)  0.0100        0.0069        0.0109        0.0110        0.0137
          No. of Iter.  10            6             5             31            4
Case III  CPU time (s)  0.1421        0.0232        0.0341        0.0234        0.0361
          No. of Iter.  8             5             5             28            4
Case IV   CPU time (s)  0.0103        0.0072        0.0118        0.0134        0.0167
          No. of Iter.  8             5             5             28            4
Figure 1

Top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV.

7 Conclusion

A new inertial viscosity algorithm with self-adaptive step size is proposed for finding a common solution of a finite family of SMVIPs and an FPP for a nonexpansive mapping between a Banach space and a Hilbert space. The sequence generated by our proposed method was proved to converge strongly to a solution of the problem. Our method does not require prior knowledge or an estimate of the operator norm, which makes it easily implementable, unlike many other methods in the literature that require prior knowledge of the operator norm for their implementation. We applied our results to study the SFP and the SMP. Finally, we carried out some numerical experiments and compared the performance of our proposed method with some existing methods in the literature. The results show that our method is easy to implement and outperforms some well-known methods in the literature.



Acknowledgements

The authors sincerely thank the reviewers for their careful reading, constructive comments and fruitful suggestions that improved the manuscript. Grace N. Ogwo acknowledges with thanks the scholarship and financial support from the University of KwaZulu-Natal (UKZN) Doctoral Scholarship. The research of Timilehin O. Alakoya was wholly supported by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. He is grateful for the funding and financial support. Oluwatosin T. Mewomo was supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903).

  1. Funding information: Grace N. Ogwo acknowledged the scholarship and financial support from the University of KwaZulu-Natal (UKZN) Doctoral Scholarship. The research of Timilehin O. Alakoya was wholly supported by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. Oluwatosin T. Mewomo was supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903).

  2. Conflict of interest: The authors declare that they have no competing interests.

Appendix

Appendix 7.1

The Algorithm in [33].

Initialization: Given λ ∈ (0, 1/L) and r > 0, let x₀ ∈ ℋ₁ be arbitrary.

Iterative steps: Calculate x n + 1 as follows:

Step 1: Compute

u_n = J_r^B(x_n + λT*(J_r^C − I)Tx_n).

Step 2: Compute

x_{n+1} = α_n f(x_n) + (1 − α_n)Su_n,

where f : ℋ₁ → ℋ₁ is a contraction mapping with constant ρ ∈ (0, 1) and S is a nonexpansive mapping such that F(S) ∩ Γ ≠ ∅.

Set n ← n + 1 and return to Step 1.

Appendix 7.2

The Algorithm in [60].

Initialization: Given λ ∈ (0, 1/L) and r > 0, let x₀ ∈ ℋ₁ be arbitrary.

Iterative steps: Calculate x n + 1 as follows:

Step 1: Compute

y_n = J_r^B(x_n + λT*(J_r^C − I)Tx_n).

Step 2: Compute

x_{n+1} = α_n f(x_n) + (I − α_n D)S_n y_n,

where f : ℋ₁ → ℋ₁ is a contraction mapping with constant α ∈ (0, 1), {S_n} is a sequence of nonexpansive mappings such that ∩_n F(S_n) ∩ Γ ≠ ∅, and D is a strongly positive bounded linear operator with coefficient γ̄ > 0.

Set n ← n + 1 and return to Step 1.

Appendix 7.3

The Algorithm in [42].

Initialization: Let r > 0 .

Iterative steps: Calculate x n + 1 as follows: Let { x n } be a sequence in 1 defined by

(56) x₀, x₁ ∈ ℋ₁,
w_n = x_n + θ_n(x_n − x_{n−1}),
y_n = J_r^B(I − λ_n T*(I − J_r^C)T)w_n,
x_{n+1} = α_n f(x_n) + (1 − α_n)y_n,

where f : ℋ₁ → ℋ₁ is a contraction mapping with constant α ∈ (0, 1).

Set n ← n + 1 and return to Step 1.

Appendix 7.4

The Algorithm in [29].

(57) v ∈ ℋ₁, x_{n+1} = α_n v + (1 − α_n)J_r^B(x_n − λT*(I − J_r^C)Tx_n), n ∈ ℕ.
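Scheme (57) is a Halpern-type iteration anchored at the point v. A one-dimensional Python sketch, with illustrative scalar operators Bx = x, Cy = y and Ty = 2y (choices made for the demo, not taken from [29]), whose unique common solution is 0:

```python
# Halpern-type iteration (57) in one dimension: the resolvents of the
# scalar maps B x = x and C y = y are J_r^B x = x/(1+r) and J_r^C y = y/(1+r).
B_res = lambda x, r: x / (1.0 + r)     # J_r^B
C_res = lambda y, r: y / (1.0 + r)     # J_r^C
Tmap = lambda x: 2.0 * x               # T y = 2y, self-adjoint so T* = T

r, lam, v = 1.0, 0.05, 1.0             # resolvent parameter, step lambda, anchor v
x = 5.0                                # x_1
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)              # alpha_n -> 0 with divergent sum, as Halpern requires
    inner = x - lam * Tmap(Tmap(x) - C_res(Tmap(x), r))   # x_n - lam T*(I - J_r^C) T x_n
    x = alpha * v + (1.0 - alpha) * B_res(inner, r)

print(abs(x))                          # drifts (slowly) toward the solution 0
```

The anchored term α_n v is what distinguishes (57) from the viscosity schemes above: the limit is the solution nearest the fixed anchor v rather than a fixed point of a viscosity selection.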

References

[1] Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, A unified approach for inversion problems in intensity modulated radiation therapy, Phys. Med. Biol. 51 (2006), 2353–2365. 10.1088/0031-9155/51/10/001Search in Google Scholar PubMed

[2] Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in product space, Numer. Algorithms 8 (1994), 221–239. 10.1007/BF02142692Search in Google Scholar

[3] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems 20 (2004), 103–120. 10.1088/0266-5611/20/1/006Search in Google Scholar

[4] A. Taiwo, L. O. Jolaoso, and O. T. Mewomo, Viscosity approximation method for solving the multiple-set split equality common fixed-point problems for quasi-pseudocontractive mappings in Hilbert spaces, J. Ind. Manag. Optim. 17 (2021), no. 5, 2733–2759. 10.3934/jimo.2020092Search in Google Scholar

[5] O. T. Mewomo and F. U. Ogbuisi, Convergence analysis of an iterative method for solving multiple-set split feasibility problems in certain Banach spaces, Quaest. Math. 41 (2018), no. 1, 129–148. 10.2989/16073606.2017.1375569Search in Google Scholar

[6] A. Moudafi, Split monotone variational inclusions, J. Optim. Theory Appl. 150 (2011), 275–283. 10.1007/s10957-011-9814-6Search in Google Scholar

[7] X. Zhao, J. C. Yao, and Y. Yao, A proximal algorithm for solving split monotone variational inclusions, Politehn. Univ. Bucharest Sci. Bull. Ser. A Appl. Math. Phys. 82 (2020), no. 3, 43–52. Search in Google Scholar

[8] T. O. Alakoya and O. T. Mewomo, Viscosity S-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems, Comput. Appl. Math. 41 (2021), 39. 10.1007/s40314-021-01749-3Search in Google Scholar

[9] H. Dehghan, C. Izuchukwu, O. T. Mewomo, D. A. Taba, and G. C. Ugwunnadi, Iterative algorithm for a family of monotone inclusion problems in CAT(0) spaces, Quaest. Math. 43 (2020), no. 7, 975–998. 10.2989/16073606.2019.1593255Search in Google Scholar

[10] S. Reich and T. M. Tuyen, Iterative methods for solving the generalized split common null point problem in Hilbert spaces, Optimization 69 (2020), 1013–1038. 10.1080/02331934.2019.1655562Search in Google Scholar

[11] S. Reich and T. M. Tuyen, Two projection methods for solving the multiple-set split common null point problem in Hilbert spaces, Optimization 69 (2020), no. 9, 1913–1934. 10.1080/02331934.2019.1686633Search in Google Scholar

[12] S. Reich and T. M. Tuyen, Parallel iterative methods for solving the generalized split common null point problem in Hilbert spaces, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 114 (2020), 180. 10.1007/s13398-020-00901-8Search in Google Scholar

[13] T. M. Tuyen, N. T. T. Thuy, and N. M. Trang, A strong convergence theorem for a parallel iterative method for solving the split common null point problem in Hilbert spaces, J. Optim. Theory Appl. 138 (2019), no. 2, 271–291. 10.1007/s10957-019-01523-wSearch in Google Scholar

[14] T. M. Tuyen, A strong convergence theorem for the split common null point problem in Banach spaces, Appl. Math. Optim. 79 (2019), 207–227. 10.1007/s00245-017-9427-zSearch in Google Scholar

[15] T. M. Tuyen, N. S. Ha, and N. T. T. Thuy, A shrinking projection method for solving the split common null point problem in Banach spaces, Numer. Algorithms 81 (2019), 813–832. 10.1007/s11075-018-0572-5Search in Google Scholar

[16] P. E. Maingé, A viscosity method with no spectral radius requirements for the split common fixed point problem, Eur. J. Oper. Res. 235 (2014), 17–27. 10.1016/j.ejor.2013.11.028Search in Google Scholar

[17] A. Moudafi and B. S. Thakur, Solving proximal split feasibility problems without prior knowledge of operator norms, Optim. Lett. 8 (2014), no. 7, 2099–2110. 10.1007/s11590-013-0708-4Search in Google Scholar

[18] S. Reich and T. M. Tuyen, A new algorithm for solving the split common null point problem in Hilbert spaces, Numer. Algorithms 83 (2020), 789–805. 10.1007/s11075-019-00703-zSearch in Google Scholar

[19] Y. Censor, A. Gibali, and S. Reich, Algorithms for the split variational inequality problem, Numer. Algor. 59 (2012), 301–323. 10.1007/s11075-011-9490-5Search in Google Scholar

[20] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul. 4 (2005), 1168–1200. 10.1137/050626090Search in Google Scholar

[21] A. Gibali, A new non-Lipschitzian projection method for solving variational inequalities in Euclidean spaces, J. Nonlinear Anal. Optim. 6 (2015), 41–51. Search in Google Scholar

[22] L. O. Jolaoso, A. Taiwo, T. O. Alakoya, and O. T. Mewomo, A self adaptive inertial subgradient extragradient algorithm for variational inequality and common fixed point of multivalued mappings in Hilbert spaces, Demonstr. Math. 52 (2019), 183–203. 10.1515/dema-2019-0013Search in Google Scholar

[23] S. H. Khan, T. O. Alakoya, and O. T. Mewomo, Relaxed projection methods with self-adaptive step size for solving variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in Banach spaces, Math. Comput. Appl. 25 (2020), 54. 10.3390/mca25030054Search in Google Scholar

[24] C. C. Okeke and O. T. Mewomo, On split equilibrium problem, variational inequality problem and fixed point problem for multi-valued mappings, Ann. Acad. Rom. Sci. Ser. Math. Appl. 9 (2017), no. 2, 223–248. Search in Google Scholar

[25] H. Raguet, J. Fadili, and G. Peyré, A generalized forward-backward splitting, SIAM J. Imaging Sci. 6 (2013), 1199–1226. 10.1137/120872802Search in Google Scholar

[26] A. Taiwo, T. O. Alakoya, and O. T. Mewomo, Strong convergence theorem for solving equilibrium problem and fixed point of relatively nonexpansive multi-valued mappings in a Banach space with applications, Asian-Eur. J. Math. 14 (2021), no. 8, 2150137. 10.1142/S1793557121501370Search in Google Scholar

[27] G. N. Ogwo, C. Izuchukwu, and O. T. Mewomo, A modified extragradient algorithm for a certain class of split pseudo-monotone variational inequality problem, Numer. Algebra Control Optim. 12 (2022), no. 2, 373–393. 10.3934/naco.2021011Search in Google Scholar

[28] G. N. Ogwo, T. O. Alakoya, and O. T. Mewomo, Iterative algorithm with self-adaptive step size for approximating the common solution of variational inequality and fixed point problems, Optimization (2021), DOI: https://doi.org/10.1080/02331934.2021.1981897. 10.1080/02331934.2021.1981897Search in Google Scholar

[29] C. Byrne, Y. Censor, A. Gibali, and S. Reich, The split common null point problem, J. Nonlinear Convex Anal. 13 (2012), no. 4, 759–775. Search in Google Scholar

[30] A. Moudafi, Viscosity approximation method for fixed points problems, J. Math. Anal. Appl. 241 (2000), 46–55. 10.1006/jmaa.1999.6615Search in Google Scholar

[31] S. Suantai, K. Srisap, N. Naprang, M. Mamat, V. Yundon, and P. Cholamjiak, Convergence theorems for finding the split common null point in Banach spaces, Appl. Gen. Topol. 18 (2017), no. 2, 345–360. 10.4995/agt.2017.7257Search in Google Scholar

[32] C. Byrne, Y. Censor, A. Gibali, and S. Reich, Weak and strong convergence of algorithms for the split common null point problem, J. Nonlinear Convex Anal. 13 (2012), 759–775. Search in Google Scholar

[33] K. R. Kazmi and S. H. Rizvi, An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping, Optim. Lett. 8 (2014), no. 3, 1113–1124. 10.1007/s11590-013-0629-2Search in Google Scholar

[34] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, U.S.S.R. Comput. Math. Math. Phys. 4 (1964), no. 5, 1–17. 10.1016/0041-5553(64)90137-5Search in Google Scholar

[35] H. Attouch, J. Peypouquet, and P. Redont, A dynamical approach to an inertial forward-backward algorithm for convex minimization, SIAM J. Optim. 24 (2014), 232–256. 10.1137/130910294Search in Google Scholar

[36] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci. 2 (2009), 183–202. 10.1137/080716542Search in Google Scholar

[37] G. N. Ogwo, C. Izuchukwu, and O. T. Mewomo, Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity, Numer. Algorithms 88 (2021), no. 3, 1419–1456. 10.1007/s11075-021-01081-1Search in Google Scholar

[38] G. N. Ogwo, C. Izuchukwu, Y. Shehu, and O. T. Mewomo, Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems, J. Sci. Comput. 90 (2022), 10. 10.1007/s10915-021-01670-1Search in Google Scholar

[39] T. O. Alakoya, A. O. E. Owolabi, and O. T. Mewomo, An inertial algorithm with a self-adaptive step size for a split equilibrium problem and a fixed point problem of an infinite family of strict pseudo-contractions, J. Nonlinear Var. Anal. 5 (2021), 803–829. Search in Google Scholar

[40] T. O. Alakoya, A. O. E. Owolabi, and O. T. Mewomo, Inertial algorithm for solving split mixed equilibrium and fixed point problems for hybrid-type multivalued mappings with no prior knowledge of operator norm, J. Nonlinear Convex Anal. (2021), (accepted, to appear). Search in Google Scholar

[41] D. V. Thong and D. V. Hieu, Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems, Numer. Algorithms 80 (2019), 1283–1307. 10.1007/s11075-018-0527-xSearch in Google Scholar

[42] L. V. Long, D. V. Thong, and V. T. Dung, New algorithms for the split variational inclusion problems and application to split feasibility problems, Optimization 68 (2019), no. 12, 2339–2367. 10.1080/02331934.2019.1631821Search in Google Scholar

[43] F. Kohsaka and W. Takahashi, Existence and approximation of fixed points of firmly nonexpansive-type mappings in Banach spaces, SIAM J. Optim. 19 (2008), no. 2, 824–835. 10.1137/070688717Search in Google Scholar

[44] F. Kohsaka and W. Takahashi, Fixed point theorems for a class of nonlinear mappings related to maximal monotone operators in Banach spaces, Arch. Math. 91 (2008), no. 2, 166–177. 10.1007/s00013-008-2545-8Search in Google Scholar

[45] K. Aoyama, F. Kohsaka, and W. Takahashi, Three generalizations of firmly nonexpansive mappings: their relations and continuity properties, J. Nonlinear Convex Anal. 10 (2009), 131–147. Search in Google Scholar

[46] W. Takahashi, Convex Analysis and Approximation of Fixed Points, Yokohama Publishers, Yokohama, 2000. (in Japanese)Search in Google Scholar

[47] W. Takahashi and M. Toyoda, Weak convergence theorems for nonexpansive mappings and monotone mappings, J. Optim. Theory Appl. 118 (2003), 417–428. 10.1023/A:1025407607560Search in Google Scholar

[48] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, Marcel Dekker, New York, 1984. Search in Google Scholar

[49] J. B. Hiriart-Urruty and C. Lemarchal, Fundamentals of Convex Analysis, Springer, Berlin, 2001. 10.1007/978-3-642-56468-0Search in Google Scholar

[50] M. A. Olona, T. O. Alakoya, A. O.-E. Owolabi, and O. T. Mewomo, Inertial shrinking projection algorithm with self-adaptive step size for split generalized equilibrium and fixed point problems for a countable family of nonexpansive multivalued mappings, Demonstr. Math. 54 (2021), 47–67. 10.1515/dema-2021-0006Search in Google Scholar

[51] M. A. Olona, T. O. Alakoya, A. O.-E. Owolabi, and O. T. Mewomo, Inertial algorithm for solving equilibrium, variational inclusion and fixed point problems for an infinite family of strictly pseudocontractive mappings, J. Nonlinear Funct. Anal. 2021 (2021), 10. 10.23952/jnfa.2021.10Search in Google Scholar

[52] A. Taiwo, L. O. Jolaoso, and O. T. Mewomo, Viscosity approximation method for solving the multiple-set split equality common fixed-point problems for quasi-pseudocontractive mappings in Hilbert Spaces, J. Ind. Manag. Optim. 17 (2021), no. 5, 2733–2759. 10.3934/jimo.2020092Search in Google Scholar

[53] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28, Cambridge University Press, Cambridge, United Kingdom, 1990. 10.1017/CBO9780511526152Search in Google Scholar

[54] Z. Opial, Weak convergence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc. 73 (1967), 591–597. 10.1090/S0002-9904-1967-11761-0Search in Google Scholar

[55] T. O. Alakoya, L. O. Jolaoso, and O. T. Mewomo, Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems, Optimization 70 (2021), no. 2, 545–574. 10.1080/02331934.2020.1723586Search in Google Scholar

[56] G. López, M. V. Márquez, F. Wang, and H. K. Xu, Forward-backward splitting methods for accretive operators in Banach spaces, Abstr. Appl. Anal. 2012 (2012), 109236. 10.1155/2012/109236Search in Google Scholar

[57] M. Abbas, M. AlSharani, Q. H. Ansari, G. S. Iyiola, and Y. Shehu, Iterative methods for solving proximal split minimization problem, Numer. Algorithms 78 (2018), 193–215. 10.1007/s11075-017-0372-3Search in Google Scholar

[58] Y. Yao, M. Postolache, X. Qin, and J.-C. Yao, Iterative algorithm for proximal split feasibility problem, U.P.B. Sci. Bull. Series A 80 (2018), no. 3, 37–44. Search in Google Scholar

[59] D. Butnariu and A. N. Iusem, Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization, Kluwer Academic Publishers, London, 2000. 10.1007/978-94-011-4066-9Search in Google Scholar

[60] K. Sitthithakerngkiet, J. Deepho, and P. Kumam, A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems, Appl. Math. Comput. 250 (2015), 986–1001. 10.1016/j.amc.2014.10.130Search in Google Scholar

Received: 2021-08-13
Revised: 2021-11-18
Accepted: 2021-12-28
Published Online: 2022-06-10

© 2022 Grace N. Ogwo et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
