Article

A Generalized Viscosity Inertial Projection and Contraction Method for Pseudomonotone Variational Inequality and Fixed Point Problems

by Lateef Olakunle Jolaoso * and Maggie Aphane
Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Pretoria 0204, South Africa
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(11), 2039; https://doi.org/10.3390/math8112039
Submission received: 28 September 2020 / Revised: 21 October 2020 / Accepted: 23 October 2020 / Published: 16 November 2020

Abstract: We introduce a new projection and contraction method with inertial and self-adaptive techniques for solving variational inequalities and split common fixed point problems in real Hilbert spaces. The stepsize of the algorithm is selected via a self-adaptive rule and does not require a prior estimate of the norm of the bounded linear operator. Moreover, the cost operator of the variational inequality need not satisfy a Lipschitz condition. We prove a strong convergence result under some mild conditions and provide an application of our result to split common null point problems. Some numerical experiments are reported to illustrate the performance of the algorithm and to compare it with some existing methods.

1. Introduction

Let $H$ be a real Hilbert space endowed with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. Let $\Omega$ be a nonempty, closed and convex subset of $H$ and $A:\Omega\to H$ an operator. We study the Variational Inequality Problem (shortly, VIP) defined by
$$\text{find } x^*\in\Omega \text{ such that } \langle Ax^*,\, u - x^*\rangle \ge 0 \quad \forall u\in\Omega. \tag{1}$$
The solution set of (1) is denoted by $S$. The VIP is a powerful tool for studying many nonlinear problems arising in mechanics, optimization, network control, equilibrium problems, and so forth; see References [1,2,3]. Owing to this importance, the problem has drawn the attention of many researchers, who have studied the existence of its solutions and proposed various iterative methods, such as the extragradient method [4,5,6,7,8,9], the subgradient extragradient method [10,11,12,13,14], the projection and contraction method [15,16], Tseng's extragradient method [17,18] and the Bregman projection method [19,20], for approximating its solutions in various settings.
The operator $A:\Omega\to H$ is said to be:
  • $\beta$-strongly monotone on $\Omega$ if there exists $\beta>0$ such that $\langle Ax - Ay,\, x - y\rangle \ge \beta\|x-y\|^2$ for all $x,y\in\Omega$;
  • monotone on $\Omega$ if $\langle Ax - Ay,\, x - y\rangle \ge 0$ for all $x,y\in\Omega$;
  • $\gamma$-strongly pseudomonotone on $\Omega$ if there exists $\gamma>0$ such that $\langle Ax,\, y - x\rangle \ge 0 \implies \langle Ay,\, y - x\rangle \ge \gamma\|x-y\|^2$ for all $x,y\in\Omega$;
  • pseudomonotone on $\Omega$ if, for all $x,y\in\Omega$, $\langle Ax,\, y - x\rangle \ge 0 \implies \langle Ay,\, y - x\rangle \ge 0$;
  • $L$-Lipschitz continuous on $\Omega$ if there exists a constant $L>0$ such that $\|Ax - Ay\| \le L\|x-y\|$ for all $x,y\in\Omega$; when $L\in(0,1)$, $A$ is called a contraction;
  • weakly sequentially continuous if, for any $\{x_n\}\subset H$, $x_n \rightharpoonup \bar{x}$ implies $Ax_n \rightharpoonup A\bar{x}$.
It is easy to see that strong monotonicity implies both monotonicity and strong pseudomonotonicity, and that each of these in turn implies pseudomonotonicity; the converse implications do not hold in general; see, for instance, References [16,19].
For solving the VIP (1) in finite-dimensional spaces, Korpelevich [21] introduced the Extragradient Method (EM) as follows:
$$x_0\in\Omega\subset\mathbb{R}^n,\qquad y_n = P_\Omega(x_n - \beta Ax_n),\qquad x_{n+1} = P_\Omega(x_n - \beta Ay_n), \tag{2}$$
where $\beta\in\left(0,\frac{1}{L}\right)$, $P_\Omega$ is the metric projection from $H$ onto $\Omega$ and $A:\Omega\to H$ is a monotone and $L$-Lipschitz operator; see, for example, References [4,5,22,23] for extensions of the EM to infinite-dimensional Hilbert spaces. A major drawback of the EM is that one needs to compute at least two projections onto the closed convex set $\Omega$ per iteration, which can be very costly if $\Omega$ does not have a simple structure. Censor et al. [10,11] introduced an improved method, called the Subgradient Extragradient Method (SEM), by replacing the second projection in the EM with a projection onto a half-space:
$$x_0\in H,\qquad y_n = P_\Omega(x_n - \beta Ax_n),\qquad \Gamma_n = \{\omega\in H : \langle (x_n - \beta Ax_n) - y_n,\, \omega - y_n\rangle \le 0\},\qquad x_{n+1} = P_{\Gamma_n}(x_n - \beta Ay_n), \tag{3}$$
where $\beta\in\left(0,\frac{1}{L}\right)$. The authors proved that the sequence generated by (3) converges weakly to a solution of the VIP. Furthermore, He [24] introduced a Projection and Contraction Method (PCM) which does not involve a projection onto a half-space:
$$x_0\in H,\qquad y_n = P_\Omega(x_n - \beta Ax_n),\qquad \Theta(x_n,y_n) = (x_n - y_n) - \beta(Ax_n - Ay_n),\qquad x_{n+1} = x_n - \eta\gamma_n\Theta(x_n,y_n), \tag{4}$$
where $\eta\in(0,2)$, $\beta\in\left(0,\frac{1}{L}\right)$ and $\gamma_n = \frac{\langle x_n - y_n,\,\Theta(x_n,y_n)\rangle}{\|\Theta(x_n,y_n)\|^2}$. He [24] also proved that the sequence $\{x_n\}$ generated by (4) converges weakly to a solution of the VIP. The PCM (4) has been modified by many authors, who proved strong convergence to a solution of the VIP; see, for instance, References [16,18,25,26]. In particular, Cholamjiak et al. [27] introduced the following inertial PCM for solving the VIP with a pseudomonotone operator:
$$\begin{cases} x_0, x_1\in H,\\ w_n = x_n + \theta_n(x_n - x_{n-1}),\\ y_n = P_\Omega(w_n - \beta Aw_n),\\ \Theta(w_n,y_n) = (w_n - y_n) - \beta(Aw_n - Ay_n),\\ \gamma_n = \frac{\langle w_n - y_n,\,\Theta(w_n,y_n)\rangle}{\|\Theta(w_n,y_n)\|^2},\\ z_n = w_n - \eta\gamma_n\Theta(w_n,y_n),\\ x_{n+1} = (1-\alpha_n-\delta_n)x_n + \alpha_n z_n, \end{cases} \tag{5}$$
where $\eta\in(0,2)$, $\beta\in\left(0,\frac{1}{L}\right)$, $\{\tau_n\}\subset(0,\infty)$ is such that $\tau_n = o(\alpha_n)$, $\{\alpha_n\}\subset(a, 1-\delta_n)$ for some $a>0$, $\{\delta_n\}\subset(0,1)$, $\theta>0$, and $\theta_n$ is chosen such that $0\le\theta_n\le\bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases} \min\left\{\theta,\, \frac{\tau_n}{\|x_n - x_{n-1}\|}\right\} & \text{if } x_n\neq x_{n-1},\\ \theta & \text{otherwise}. \end{cases} \tag{6}$$
The authors of Reference [27] proved that the sequence $\{x_n\}$ generated by Algorithm (5) converges strongly to a solution of the VIP provided that $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$. Note that the inertial extrapolation term $\theta_n(x_n - x_{n-1})$ in (5) is regarded as a means of improving the convergence speed of the algorithm. This device was first introduced by Polyak [28] as a discretization of a second-order time dynamical system and has since been employed by many researchers; see, for instance, References [16,17,25,29,30,31,32,33,34].
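To make the mechanics of (4) and (5) concrete, the following is a minimal numerical sketch of the projection and contraction update with a basic inertial step, assuming an affine monotone operator on a box (both choices are illustrative assumptions, not data from this paper):

```python
import numpy as np

# Illustrative data: A(x) = M x + q with M positive semidefinite (monotone),
# and Omega = [-1, 1]^n, whose projection is a componentwise clip.
rng = np.random.default_rng(0)
n = 5
P = rng.standard_normal((n, n))
M = P @ P.T                               # symmetric PSD => A is monotone
q = rng.standard_normal(n)
A = lambda x: M @ x + q
L = np.linalg.norm(M, 2)                  # Lipschitz constant of A
proj = lambda x: np.clip(x, -1.0, 1.0)    # P_Omega

def pcm_inertial(x0, x1, beta, eta=1.9, theta=0.3, iters=500, tol=1e-8):
    """Projection and contraction step (4) preceded by an inertial step as in (5)."""
    x_prev, x = x0, x1
    for _ in range(iters):
        w = x + theta * (x - x_prev)              # inertial extrapolation
        y = proj(w - beta * A(w))
        if np.linalg.norm(w - y) < tol:           # w is (almost) a solution
            return y
        Theta = (w - y) - beta * (A(w) - A(y))
        gamma = np.dot(w - y, Theta) / np.dot(Theta, Theta)
        x_prev, x = x, w - eta * gamma * Theta    # contraction step
    return x

x = pcm_inertial(np.zeros(n), np.zeros(n), beta=0.9 / L)
print("residual:", np.linalg.norm(x - proj(x - A(x))))   # ~0 at a solution
```

This sketch uses the plain update $x_{n+1} = w_n - \eta\gamma_n\Theta(w_n,y_n)$ rather than the relaxed convex combination of (5), to keep the core mechanism visible.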
The viscosity approximation method was introduced by Moudafi [35] for finding a fixed point of a nonexpansive mapping $T$, that is, a point $x\in H$ such that $Tx = x$. We denote the set of fixed points of $T$ by $F(T) = \{x\in H : Tx = x\}$. Let $f:H\to H$ be a contraction and, for an arbitrary $x_0\in H$, let $\{x_n\}$ be generated recursively by
$$x_{n+1} = \alpha_n f(x_n) + (1-\alpha_n)Tx_n, \quad n\ge 0, \tag{7}$$
where $\{\alpha_n\}\subset(0,1)$. Xu [36] later proved that if $\{\alpha_n\}$ satisfies certain conditions, the sequence $\{x_n\}$ generated by (7) converges to the unique fixed point of $T$ which also solves the variational inequality
$$\langle (I-f)x^*,\, x - x^*\rangle \ge 0, \quad \forall x\in F(T). \tag{8}$$
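As a quick illustration of scheme (7), here is a small sketch assuming $T$ is the metric projection onto the unit ball (a nonexpansive map) and $f(x) = x/2$ (a contraction); these choices are ours, not the paper's:

```python
import numpy as np

# Viscosity iteration (7): x_{n+1} = a_n f(x_n) + (1 - a_n) T x_n.
T = lambda x: x / max(1.0, np.linalg.norm(x))   # projection onto unit ball
f = lambda x: 0.5 * x                            # contraction with k = 1/2

x = np.array([3.0, -4.0])
for n in range(1, 2000):
    alpha = 1.0 / (n + 1)                        # alpha_n -> 0, sum = infinity
    x = alpha * f(x) + (1 - alpha) * T(x)
print(x)   # approaches the fixed point of T selected by (8); here x* = 0
```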
Moreover, the problem of finding a common solution of the VIP and the fixed point problem for a nonlinear mapping $T$, that is,
$$\text{find } x^*\in\Omega \text{ such that } x^*\in S\cap F(T), \tag{9}$$
has become very important in optimization theory due to its applications to mathematical models whose constraints can be expressed as both problems. In particular, such models arise in practical problems such as signal processing, network resource allocation and image recovery; see, for instance, References [37,38,39].
Recently, Thong and Hieu [25] introduced the following modified SEMs for solving (9):
$$x_0\in H,\qquad y_n = P_\Omega(x_n - \beta Ax_n),\qquad \Gamma_n = \{\omega\in H : \langle x_n - \beta Ax_n - y_n,\, \omega - y_n\rangle \le 0\},\qquad z_n = P_{\Gamma_n}(x_n - \beta Ay_n),\qquad x_{n+1} = (1-\alpha_n-\beta_n)z_n + \beta_n Sz_n, \tag{10}$$
and
$$x_0\in H,\qquad y_n = P_\Omega(x_n - \beta Ax_n),\qquad \Gamma_n = \{\omega\in H : \langle x_n - \beta Ax_n - y_n,\, \omega - y_n\rangle \le 0\},\qquad z_n = P_{\Gamma_n}(x_n - \beta Ay_n),\qquad x_{n+1} = (1-\beta_n)\alpha_n z_n + \beta_n Sz_n, \tag{11}$$
where $\beta\in\left(0,\frac{1}{L}\right)$, $S:H\to H$ is a $\kappa$-demicontractive mapping with $\kappa\in[0,1)$ and $\{\alpha_n\},\{\beta_n\}\subset(0,1)$. The authors proved that the sequences generated by (10) and (11) converge strongly to a solution of (9) under certain mild conditions. Also, Dong et al. [31] introduced an inertial PCM for solving (9) with a nonexpansive mapping $S$:
$$\begin{cases} x_0, x_1\in H,\\ w_n = x_n + \theta_n(x_n - x_{n-1}),\\ y_n = P_\Omega(w_n - \beta Aw_n),\\ \Theta(w_n,y_n) = (w_n - y_n) - \beta(Aw_n - Ay_n),\\ \gamma_n = \frac{\langle w_n - y_n,\,\Theta(w_n,y_n)\rangle}{\|\Theta(w_n,y_n)\|^2},\\ x_{n+1} = (1-\alpha_n)w_n + \alpha_n S(w_n - \eta\gamma_n\Theta(w_n,y_n)), \end{cases} \tag{12}$$
where $\eta\in(0,2)$, $\beta\in\left(0,\frac{1}{L}\right)$, $\{\theta_n\}$ is a non-decreasing sequence with $\theta_1 = 0$ and $0\le\theta_n\le\theta<1$, and $\sigma,\delta>0$ are constants such that
$$\delta > \frac{\theta^2(1+\theta) + \theta\sigma}{1-\theta^2}, \qquad 0 < \underline{\alpha} \le \alpha_n \le \frac{\delta - \theta\left[(1+\theta) + \theta\delta + \sigma\right]}{\delta\left[1 + \theta(1+\theta) + \theta\delta + \sigma\right]} =: \bar{\alpha}. \tag{13}$$
We note that Algorithm (12) improves (10) and (11); however, it has the following drawbacks:
(i) the stepsize $\beta$ depends on a prior estimate of the Lipschitz constant $L$, which is very difficult to determine in practice; moreover, in many practical problems the cost operator may not even satisfy a Lipschitz condition; see, for example, Reference [19];
(ii) condition (13) weakens the convergence of the algorithm;
(iii) the algorithm converges only weakly to a solution of (9).
Motivated by these results, in this paper we introduce a new inertial projection and contraction method for finding a common solution of the VIP and the split common fixed point problem, that is,
$$\text{find } x\in\Omega \text{ such that } x\in S\cap F(T) \text{ and } Dx\in F(U), \tag{14}$$
where $H_1, H_2$ are real Hilbert spaces, $\Omega\subset H_1$ is a nonempty closed convex set, $D:H_1\to H_2$ is a bounded linear operator, and $T:H_1\to H_1$ and $U:H_2\to H_2$ are $\varrho$-demicontractive mappings. Observe that when $H_1 = H_2$ and $U = D = I$ (the identity operator on $H_2$), Problem (14) reduces to (9); thus (14) is more general than (9). Our algorithm is designed so that the stepsize is determined by an Armijo line-search technique, and its convergence does not require a prior estimate of the Lipschitz constant. We also employ a generalized viscosity method and prove a strong convergence result for the sequence generated by our algorithm under certain mild conditions. We then provide some numerical examples to illustrate the performance of our algorithm. We highlight the main contributions of this paper as follows:
  • The authors of References [18,25,26,27,32] introduced inertial PCMs which require a prior estimate of the Lipschitz constant of the operator $A$. Finding such an estimate is known to be difficult, and it also slows down the convergence of the algorithm. In this paper, we propose a new inertial PCM which does not require a prior estimate of the Lipschitz constant of $A$.
  • The authors of Reference [16] proposed an effective PCM for solving the pseudomonotone VIP in real Hilbert spaces. When $\alpha_n = \theta_n = 0$ in our Algorithm 1, we obtain the method of Reference [16].
  • In Reference [26], the authors proposed a hybrid inertial PCM for solving the monotone VIP in real Hilbert spaces. This method requires computing an extra projection onto the intersection of two closed convex subsets of $H$, which can be computationally costly. Our algorithm performs only one projection onto $\Omega$ per iteration and no extra projection onto any subset of $H$.

2. Preliminaries

In this section, we give some basic definitions and results which are needed to establish our results. In the sequel, $H$ is a real Hilbert space and $\Omega$ is a nonempty, closed and convex subset of $H$; we write $x_n\to x$ to denote that $\{x_n\}$ converges strongly to $x$, and $x_n\rightharpoonup x$ to denote that $\{x_n\}$ converges weakly to $x$.
The metric projection of $x\in H$ onto $\Omega$ is defined as the unique vector $P_\Omega(x)$ satisfying
$$\|x - P_\Omega x\| \le \|x - y\| \quad \forall y\in\Omega.$$
It is well known that $P_\Omega$ has the following properties (see, e.g., Reference [40]); a numerical illustration is given after the list.
(i) For each $x\in H$ and $v\in\Omega$,
$$v = P_\Omega x \iff \langle x - v,\, v - y\rangle \ge 0 \quad \forall y\in\Omega. \tag{15}$$
(ii) For any $x,y\in H$,
$$\langle P_\Omega x - P_\Omega y,\, x - y\rangle \ge \|P_\Omega x - P_\Omega y\|^2.$$
(iii) For any $x\in H$ and $y\in\Omega$,
$$\|P_\Omega x - y\|^2 \le \|x - y\|^2 - \|x - P_\Omega x\|^2.$$
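For concreteness, the following sketch computes the projection onto a closed ball (an assumed $\Omega$ with a closed-form projection) and numerically checks the characterization (15):

```python
import numpy as np

# Metric projection onto the closed ball Omega = B(c, r), plus a numerical
# check of (15): <x - P(x), P(x) - y> >= 0 for all y in Omega.
def proj_ball(x, c, r):
    d = x - c
    nd = np.linalg.norm(d)
    return x if nd <= r else c + r * d / nd

c, r = np.zeros(3), 1.0
x = np.array([2.0, 1.0, -2.0])
p = proj_ball(x, c, r)

rng = np.random.default_rng(1)
for _ in range(5):
    y = proj_ball(c + rng.standard_normal(3), c, r)   # a point of Omega
    assert np.dot(x - p, p - y) >= -1e-12             # property (i), i.e. (15)
```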
For any real Hilbert space $H$, the following identities hold (see, e.g., Reference [41]).
Lemma 1. For all $u,v\in H$:
(i) $\|u+v\|^2 = \|u\|^2 + 2\langle u,v\rangle + \|v\|^2$;
(ii) $\|u+v\|^2 \le \|u\|^2 + 2\langle v,\, u+v\rangle$;
(iii) $\|\lambda u + (1-\lambda)v\|^2 = \lambda\|u\|^2 + (1-\lambda)\|v\|^2 - \lambda(1-\lambda)\|u-v\|^2$ for all $\lambda\in[0,1]$.
The following are the types of nonlinear mappings we consider; a small numerical check of item (iv) is given after the definition.
Definition 1 ([42]). A mapping $T:H\to H$ is called
(i) nonexpansive if $\|Tu - Tv\| \le \|u - v\|$ for all $u,v\in H$;
(ii) quasi-nonexpansive if $F(T)\neq\emptyset$ and $\|Tu - z\| \le \|u - z\|$ for all $u\in H$, $z\in F(T)$;
(iii) $\mu$-strictly pseudocontractive if there exists a constant $\mu\in[0,1)$ such that $\|Tu - Tv\|^2 \le \|u - v\|^2 + \mu\|(I-T)u - (I-T)v\|^2$ for all $u,v\in H$;
(iv) $\varrho$-demicontractive if there exists $\varrho\in[0,1)$ such that $F(T)\neq\emptyset$ and $\|Tu - z\|^2 \le \|u - z\|^2 + \varrho\|u - Tu\|^2$ for all $u\in H$, $z\in F(T)$.
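As an illustration of item (iv), the map $T(u) = -2u$ (our example, not the paper's) has $F(T) = \{0\}$ and is $\frac{1}{3}$-demicontractive although it is not quasi-nonexpansive; the sketch below verifies the inequality numerically:

```python
import numpy as np

# Check: T(u) = -2u is (1/3)-demicontractive with F(T) = {0}; indeed
# ||Tu||^2 = 4||u||^2 = ||u||^2 + (1/3)*||u - Tu||^2, with equality.
rng = np.random.default_rng(2)
rho = 1.0 / 3.0
for _ in range(1000):
    u = rng.standard_normal(4)
    Tu, z = -2.0 * u, np.zeros(4)
    lhs = np.linalg.norm(Tu - z) ** 2
    rhs = np.linalg.norm(u - z) ** 2 + rho * np.linalg.norm(u - Tu) ** 2
    assert lhs <= rhs + 1e-9        # demicontractive inequality holds
    # quasi-nonexpansiveness ||Tu - z|| <= ||u - z|| fails for every u != 0
```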
It is well known that demicontractive mappings possess the following property.
Lemma 2 ([38], Remark 4.2, p. 1506). Suppose $F(T)\neq\emptyset$, where $T$ is a $\varrho$-demicontractive self-mapping on $H$. Define $T_\lambda := (1-\lambda)I + \lambda T$, where $\lambda\in(0,1]$. Then
(i) $T_\lambda$ is quasi-nonexpansive if $\lambda\in[0, 1-\varrho]$;
(ii) $F(T)$ is closed and convex.
Lemma 3 ([7]). Let $\Omega$ be a nonempty, closed and convex subset of a real Hilbert space $H$. For any $w\in H$ and $\lambda>0$, denote $r_\lambda(w) := w - P_\Omega(w - \lambda Aw)$. Then
$$\min\{1,\lambda\}\,\|r_1(w)\| \le \|r_\lambda(w)\| \le \max\{1,\lambda\}\,\|r_1(w)\|.$$
Lemma 4 ([6]). Given $u\in H$ and $\beta \ge \gamma > 0$, we have
$$\frac{\|u - P_\Omega(u - \beta Au)\|}{\beta} \le \frac{\|u - P_\Omega(u - \gamma Au)\|}{\gamma}$$
and
$$\|u - P_\Omega(u - \gamma Au)\| \le \|u - P_\Omega(u - \beta Au)\|.$$
Lemma 5 ([43], Lemma 2.1). Consider the VIP (1) with $\Omega$ a nonempty, closed and convex subset of $H$ and $A:\Omega\to H$ pseudomonotone and continuous. Then $w\in S$ if and only if
$$\langle Ax,\, x - w\rangle \ge 0 \quad \forall x\in\Omega.$$
Lemma 6 ([44]). Let $S:\Omega\to H$ be a nonexpansive mapping and $T = (I - \alpha\mu F)S$, where $F$ is $k$-Lipschitz and $\eta$-strongly monotone and $\alpha\in(0,1]$. Then $T$ is a contraction if $0 < \mu < \frac{2\eta}{k^2}$; that is,
$$\|Tu - Tv\| \le (1 - \alpha\tau)\|u - v\| \quad \forall u,v\in H,$$
where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)} \in (0,1]$.
Lemma 7 ([45], Lemma 3.1). Let $\{\bar{a}_n\}$ and $\{c_n\}$ be sequences of nonnegative real numbers such that
$$\bar{a}_{n+1} \le (1 - \bar{\delta}_n)\bar{a}_n + b_n + c_n, \quad n\ge 1,$$
where $\{\bar{\delta}_n\}$ is a sequence in $(0,1)$ and $\{b_n\}$ is a real sequence. Assume that $\sum_{n=0}^\infty c_n < \infty$. Then the following results hold:
(i) If $b_n \le \bar{\delta}_n M$ for some $M\ge 0$, then $\{\bar{a}_n\}$ is a bounded sequence.
(ii) If $\sum_{n=0}^\infty \bar{\delta}_n = \infty$ and $\limsup_{n\to\infty}\frac{b_n}{\bar{\delta}_n} \le 0$, then $\lim_{n\to\infty}\bar{a}_n = 0$.
Lemma 8 ([42], Lemma 3.1). Let $\{a_n\}$ be a sequence of real numbers that admits a subsequence $\{a_{n_i}\}$ with $a_{n_i} < a_{n_i+1}$ for all $i\in\mathbb{N}$. Let $\{m_k\}$ be the integers defined by
$$m_k = \max\{j \le k : a_j < a_{j+1}\}.$$
Then $\{m_k\}$ is a non-decreasing sequence with $\lim_{k\to\infty} m_k = \infty$, and for all $k\in\mathbb{N}$ the following estimates hold:
$$a_{m_k} \le a_{m_k+1} \quad \text{and} \quad a_k \le a_{m_k+1}.$$

3. Results

In this section, we propose a new inertial projection and contraction method for solving the pseudomonotone variational inequality and split common fixed point problem.
Let $H_1, H_2$ be real Hilbert spaces, $\Omega$ a nonempty, closed and convex subset of $H_1$, $D:H_1\to H_2$ a bounded linear operator, $A:H_1\to H_1$ a pseudomonotone operator which is weakly sequentially continuous on $\Omega$, and let $T:H_1\to H_1$ and $U:H_2\to H_2$ be $\varrho_1$- and $\varrho_2$-demicontractive mappings, respectively. Let $f:H_1\to H_1$ be a contraction with constant $k\in(0,1)$ and $B:H_1\to H_1$ a Lipschitz and strongly monotone operator with coefficients $\lambda\in(0,1)$ and $\sigma>0$, respectively, such that $\nu k < \bar{\tau} = 1 - \sqrt{1 - \xi(2\sigma - \xi\lambda^2)}$ for $\nu\ge 0$ and $\xi\in\left(0,\frac{2\sigma}{\lambda^2}\right)$. Suppose that the solution set
$$\Gamma = \{x\in\Omega : x\in S\cap F(T) \text{ and } Dx\in F(U)\}$$
is nonempty.
Let $\{\delta_n\}, \{\theta_n\}, \{\zeta_n\}$ and $\{\tau_n\}$ be sequences in $(0,1)$ such that:
(C1) $\lim_{n\to\infty}\delta_n = 0$ and $\sum_{n=0}^\infty \delta_n = +\infty$;
(C2) $0 < \liminf_{n\to\infty}\theta_n \le \limsup_{n\to\infty}\theta_n < 1$;
(C3) $0 < \liminf_{n\to\infty}\zeta_n \le \limsup_{n\to\infty}\zeta_n < 1 - \varrho_1$;
(C4) $\tau_n = o(\delta_n)$, that is, $\lim_{n\to\infty}\frac{\tau_n}{\delta_n} = 0$.
We now present our algorithm as follows:
Algorithm 1: GVIPCM
Initialization: Choose $\eta\in(0,2)$, $\rho,\vartheta\in(0,1)$, $\epsilon>0$ and $\alpha>3$; pick $x_0, x_1\in H_1$ arbitrarily.
Iterative steps: Given the iterates $x_{n-1}$ and $x_n$, for each $n\ge 1$ calculate the iterate $x_{n+1}$ as follows.
Step 1: Choose $\alpha_n$ such that $0\le\alpha_n\le\bar{\alpha}_n$, where
$$\bar{\alpha}_n = \begin{cases} \min\left\{\frac{n-1}{n+\alpha-1},\, \frac{\tau_n}{\|x_n - x_{n-1}\|}\right\} & \text{if } x_n\neq x_{n-1},\\ \frac{n-1}{n+\alpha-1} & \text{otherwise}. \end{cases} \tag{21}$$
Step 2: Compute
$$w_n = x_n + \alpha_n(x_n - x_{n-1}), \qquad y_n = P_\Omega(w_n - \beta_n Aw_n),$$
where $\beta_n = \rho^{\ell_n}$ and $\ell_n$ is the smallest non-negative integer satisfying
$$\beta_n\|Aw_n - Ay_n\| \le \vartheta\|w_n - y_n\|. \tag{22}$$
If $w_n = y_n$: set $z_n = w_n$ and go to Step 4. Else: do Step 3.
Step 3: Calculate
$$\Theta(w_n,y_n) = w_n - y_n - \beta_n(Aw_n - Ay_n),\qquad \gamma_n = \begin{cases} \frac{\langle w_n - y_n,\,\Theta(w_n,y_n)\rangle}{\|\Theta(w_n,y_n)\|^2} & \text{if } \Theta(w_n,y_n)\neq 0,\\ 0 & \text{otherwise}, \end{cases}\qquad z_n = w_n - \eta\gamma_n\Theta(w_n,y_n). \tag{23}$$
Step 4: Calculate $x_{n+1}$ as follows:
$$u_n = (I - \mu_n D^*(I-U)D)z_n,\qquad x_{n+1} = \delta_n\nu f(x_n) + \theta_n x_n + ((1-\theta_n)I - \delta_n\xi B)T_{\zeta_n}u_n, \tag{24}$$
where $T_{\zeta_n} = (1-\zeta_n)I + \zeta_n T$ for $\zeta_n\in(0,1)$ and
$$\mu_n = \begin{cases} \min\left\{\epsilon,\, \frac{(1-\varrho_2)\|(I-U)Dz_n\|^2}{\|D^*(I-U)Dz_n\|^2}\right\} & \text{if } Dz_n\neq U(Dz_n),\\ \epsilon & \text{otherwise}. \end{cases} \tag{25}$$
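For readers who wish to experiment, the following is a compact numerical sketch of Algorithm 1 under simplifying assumptions: finite-dimensional spaces, an affine monotone (hence pseudomonotone) operator, projections as the $0$-demicontractive maps, $f(x) = x/2$, $B = I$ and $\nu = \xi = 1$. All problem data are illustrative, not taken from the paper:

```python
import numpy as np

# Sketch of Algorithm 1 (GVIPCM) on R^m and R^p with illustrative data.
rng = np.random.default_rng(0)
m, p = 5, 4
P = rng.standard_normal((m, m)); M = P @ P.T
q = rng.standard_normal(m)
A = lambda x: M @ x + q                            # monotone => pseudomonotone
D = rng.standard_normal((p, m))                    # bounded linear operator
proj_Omega = lambda x: np.clip(x, -2.0, 2.0)       # P_Omega
T = lambda x: np.clip(x, -1.0, 1.0)                # 0-demicontractive (rho1 = 0)
U = lambda y: y / max(1.0, np.linalg.norm(y))      # 0-demicontractive (rho2 = 0)
f = lambda x: 0.5 * x                              # contraction, k = 1/2

eta, rho, vth, eps, alpha = 1.5, 0.5, 0.35, 0.1, 4.0
x_prev, x = np.zeros(m), rng.standard_normal(m)
for n in range(1, 300):
    delta, tau, theta, zeta = 1.0/(n+1), 1.0/(n+1)**2, 0.4, 0.5
    # Step 1: inertial parameter (21)
    dx = np.linalg.norm(x - x_prev)
    a_bar = (n-1)/(n+alpha-1) if dx == 0 else min((n-1)/(n+alpha-1), tau/dx)
    # Step 2: inertial step and Armijo-type rule (22)
    w = x + a_bar * (x - x_prev)
    Aw, l = A(w), 0
    while True:
        beta = rho ** l
        y = proj_Omega(w - beta * Aw)
        if beta * np.linalg.norm(Aw - A(y)) <= vth * np.linalg.norm(w - y) or l > 50:
            break
        l += 1
    # Step 3: projection and contraction step (23)
    Th = (w - y) - beta * (Aw - A(y))
    gam = np.dot(w - y, Th) / np.dot(Th, Th) if np.dot(Th, Th) > 0 else 0.0
    z = w - eta * gam * Th
    # Step 4: split step with self-adaptive mu_n (25), then viscosity step (24)
    Dz = D @ z
    res = Dz - U(Dz)
    g = D.T @ res
    mu = eps if np.dot(g, g) == 0 else min(eps, np.dot(res, res) / np.dot(g, g))
    u = z - mu * g
    Tz = (1 - zeta) * u + zeta * T(u)              # T_{zeta_n} u_n
    x_prev, x = x, delta * f(x) + theta * x + ((1 - theta) - delta) * Tz
print(x)
```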
Remark 1.
Note that we are at a solution of Problem (14) whenever $w_n = y_n = z_n$. In our convergence analysis we implicitly assume that this does not occur after finitely many iterations, so that the algorithm produces infinite sequences. Moreover, we show in the next result that the stepsize defined by (22) is well defined.
Lemma 9.
Suppose $\{x_n\}$ is generated by Algorithm 1. Then there exists a non-negative integer $\ell_n$ satisfying (22). In addition,
$$\gamma_n \ge \frac{1-\vartheta}{(1+\vartheta)^2}. \tag{19}$$
Proof. If $r_{\rho^\ell}(w_n) = w_n - P_\Omega(w_n - \rho^\ell Aw_n) = 0$ for some $\ell\ge 0$, then (22) is satisfied with $\ell_n = \ell$. So suppose $r_{\rho^\ell}(w_n)\neq 0$ for all $\ell\ge 0$ and assume, for contradiction, that (22) does not hold for any $\ell$, that is,
$$\rho^\ell\|Aw_n - A(P_\Omega(w_n - \rho^\ell Aw_n))\| > \vartheta\|r_{\rho^\ell}(w_n)\| \quad \forall \ell\ge 0.$$
Using Lemma 3 and the fact that $\rho\in(0,1)$, we have
$$\|Aw_n - A(P_\Omega(w_n - \rho^\ell Aw_n))\| > \frac{\vartheta}{\rho^\ell}\|r_{\rho^\ell}(w_n)\| \ge \frac{\vartheta}{\rho^\ell}\min\{1,\rho^\ell\}\|r_1(w_n)\| = \vartheta\|r_1(w_n)\|. \tag{20}$$
Recall that $P_\Omega$ is continuous; hence $P_\Omega(w_n - \rho^\ell Aw_n)\to P_\Omega(w_n)$ as $\ell\to\infty$. We now consider the following possible cases.
Case I: Suppose $w_n\in\Omega$. Then $w_n = P_\Omega(w_n)$. Since $r_{\rho^\ell}(w_n)\neq 0$ and $\rho^\ell\le 1$, it follows from Lemma 3 that
$$0 < \|r_{\rho^\ell}(w_n)\| \le \max\{1,\rho^\ell\}\|r_1(w_n)\| = \|r_1(w_n)\|.$$
Passing to the limit as $\ell\to\infty$ in (20), we obtain
$$0 = \|Aw_n - Aw_n\| \ge \vartheta\|r_1(w_n)\| > 0,$$
a contradiction; hence (22) is valid.
Case II: Assume that $w_n\notin\Omega$. Then
$$\rho^\ell\|Aw_n - A(P_\Omega(w_n - \rho^\ell Aw_n))\| \to 0 \quad \text{as } \ell\to\infty.$$
Also,
$$\lim_{\ell\to\infty}\vartheta\|r_{\rho^\ell}(w_n)\| = \lim_{\ell\to\infty}\vartheta\|w_n - P_\Omega(w_n - \rho^\ell Aw_n)\| = \vartheta\|w_n - P_\Omega(w_n)\| > 0.$$
This is again a contradiction. Therefore, we conclude that the line search (22) is well defined.
Furthermore, from (22) we have
$$\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle = \|w_n - y_n\|^2 - \beta_n\langle w_n - y_n,\, Aw_n - Ay_n\rangle \ge \|w_n - y_n\|^2 - \beta_n\|w_n - y_n\|\,\|Aw_n - Ay_n\| \ge (1-\vartheta)\|w_n - y_n\|^2. \tag{27}$$
Also,
$$\|\Theta(w_n,y_n)\| = \|w_n - y_n + \beta_n(Ay_n - Aw_n)\| \le \|w_n - y_n\| + \beta_n\|Ay_n - Aw_n\| \le (1+\vartheta)\|w_n - y_n\|. \tag{28}$$
Hence, from (27) and (28), we have
$$\gamma_n = \frac{\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle}{\|\Theta(w_n,y_n)\|^2} \ge \frac{1-\vartheta}{(1+\vartheta)^2}. \qquad\square$$
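Lemma 9 can be checked numerically: the backtracking loop implementing (22) terminates finitely, and the resulting $\gamma_n$ obeys the bound (19). The sketch below does both, with an assumed operator, constraint set and parameters (and assuming the starting point is not already a solution):

```python
import numpy as np

# Numerical check of Lemma 9 on random illustrative data.
rng = np.random.default_rng(3)
P = rng.standard_normal((6, 6)); M = P @ P.T; q = rng.standard_normal(6)
A = lambda x: M @ x + q
proj = lambda x: np.clip(x, -1.0, 1.0)
rho, vth = 0.5, 0.35

w = 3.0 * rng.standard_normal(6)             # generic point, w != P(w - A(w))
Aw, l = A(w), 0
while True:                                   # Armijo-type line search (22)
    beta = rho ** l
    y = proj(w - beta * Aw)
    if beta * np.linalg.norm(Aw - A(y)) <= vth * np.linalg.norm(w - y):
        break
    l += 1
Th = (w - y) - beta * (Aw - A(y))
gamma = np.dot(w - y, Th) / np.dot(Th, Th)
assert gamma >= (1 - vth) / (1 + vth) ** 2    # the bound (19) of Lemma 9
print(l, beta, gamma)
```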
Lemma 10.
Let $\{x_n\}$ be the sequence generated by Algorithm 1. Then $\{x_n\}$ is bounded.
Proof. Let $w^*\in\Gamma$; then $w^*\in S$, $T(w^*) = w^*$ and $U(Dw^*) = Dw^*$. Thus we have
$$\begin{aligned} \|z_n - w^*\|^2 &= \|w_n - w^* - \eta\gamma_n\Theta(w_n,y_n)\|^2\\ &= \|w_n - w^*\|^2 - 2\eta\gamma_n\langle w_n - w^*,\, \Theta(w_n,y_n)\rangle + \eta^2\gamma_n^2\|\Theta(w_n,y_n)\|^2\\ &= \|w_n - w^*\|^2 - 2\eta\gamma_n\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle - 2\eta\gamma_n\langle y_n - w^*,\, \Theta(w_n,y_n)\rangle + \eta^2\gamma_n^2\|\Theta(w_n,y_n)\|^2. \end{aligned}$$
Since $A$ is pseudomonotone and $w^*\in S$, we have
$$\langle Ay_n,\, y_n - w^*\rangle \ge 0. \tag{29}$$
Also, from (15) we have
$$\langle w_n - \beta_n Aw_n - y_n,\, y_n - w^*\rangle \ge 0. \tag{30}$$
Since $\beta_n > 0$, combining (29) and (30) we obtain
$$\langle w_n - \beta_n Aw_n - y_n,\, y_n - w^*\rangle + \beta_n\langle Ay_n,\, y_n - w^*\rangle \ge 0.$$
This implies that
$$\langle w_n - y_n - \beta_n(Aw_n - Ay_n),\, y_n - w^*\rangle \ge 0.$$
Hence
$$\langle y_n - w^*,\, \Theta(w_n,y_n)\rangle \ge 0. \tag{31}$$
From (31) and the expansion of $\|z_n - w^*\|^2$ above, it follows that
$$\|z_n - w^*\|^2 \le \|w_n - w^*\|^2 - 2\eta\gamma_n\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle + \eta^2\gamma_n^2\|\Theta(w_n,y_n)\|^2.$$
Using the definition of $\gamma_n$, we obtain
$$\|z_n - w^*\|^2 \le \|w_n - w^*\|^2 - 2\eta\gamma_n\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle + \eta^2\gamma_n\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle = \|w_n - w^*\|^2 - \eta(2-\eta)\gamma_n\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle. \tag{32}$$
Moreover, from (23) we get
$$\gamma_n\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle = \gamma_n^2\|\Theta(w_n,y_n)\|^2 = \frac{1}{\eta^2}\|w_n - z_n\|^2. \tag{33}$$
Substituting (33) into (32), we have
$$\|z_n - w^*\|^2 \le \|w_n - w^*\|^2 - \frac{2-\eta}{\eta}\|w_n - z_n\|^2. \tag{34}$$
Since $\eta\in(0,2)$, we obtain
$$\|z_n - w^*\|^2 \le \|w_n - w^*\|^2.$$
Furthermore, using Lemma 1(i), we have
$$\begin{aligned} \|u_n - w^*\|^2 &= \|z_n - w^* - \mu_n D^*(I-U)Dz_n\|^2\\ &= \|z_n - w^*\|^2 - 2\mu_n\langle (I-U)Dz_n,\, Dz_n - Dw^*\rangle + \mu_n^2\|D^*(I-U)Dz_n\|^2\\ &= \|z_n - w^*\|^2 - 2\mu_n\langle (I-U)Dz_n,\, U(Dz_n) - Dw^*\rangle - 2\mu_n\|(I-U)Dz_n\|^2 + \mu_n^2\|D^*(I-U)Dz_n\|^2\\ &= \|z_n - w^*\|^2 - \mu_n\|Dz_n - Dw^*\|^2 - \mu_n\|(I-U)Dz_n\|^2 + \mu_n\|U(Dz_n) - Dw^*\|^2 + \mu_n^2\|D^*(I-U)Dz_n\|^2\\ &\le \|z_n - w^*\|^2 - \mu_n\|Dz_n - Dw^*\|^2 - \mu_n\|(I-U)Dz_n\|^2 + \mu_n\big(\|Dz_n - Dw^*\|^2 + \varrho_2\|(I-U)Dz_n\|^2\big) + \mu_n^2\|D^*(I-U)Dz_n\|^2\\ &= \|z_n - w^*\|^2 - \mu_n\big[(1-\varrho_2)\|(I-U)Dz_n\|^2 - \mu_n\|D^*(I-U)Dz_n\|^2\big], \end{aligned}\tag{35}$$
where the fourth line uses the identity $2\langle a,b\rangle = \|a+b\|^2 - \|a\|^2 - \|b\|^2$ with $a = (I-U)Dz_n$ and $b = U(Dz_n) - Dw^*$, and the inequality uses the $\varrho_2$-demicontractivity of $U$. Using (25), we obtain
$$\|u_n - w^*\|^2 \le \|z_n - w^*\|^2.$$
Moreover,
$$\begin{aligned} \|T_{\zeta_n}u_n - w^*\|^2 &= \|(u_n - w^*) + \zeta_n(Tu_n - u_n)\|^2\\ &= \|u_n - w^*\|^2 - 2\zeta_n\langle u_n - w^*,\, u_n - Tu_n\rangle + \zeta_n^2\|u_n - Tu_n\|^2\\ &\le \|u_n - w^*\|^2 - \zeta_n(1-\varrho_1)\|u_n - Tu_n\|^2 + \zeta_n^2\|u_n - Tu_n\|^2\\ &= \|u_n - w^*\|^2 - \zeta_n(1 - \varrho_1 - \zeta_n)\|u_n - Tu_n\|^2. \end{aligned}\tag{36}$$
Using condition (C3), we obtain
$$\|T_{\zeta_n}u_n - w^*\|^2 \le \|u_n - w^*\|^2.$$
Therefore, from Lemma 6 we have
$$\begin{aligned} \|x_{n+1} - w^*\| &= \|\delta_n(\nu f(x_n) - \xi Bw^*) + \theta_n(x_n - w^*) + ((1-\theta_n)I - \delta_n\xi B)(T_{\zeta_n}u_n - w^*)\|\\ &\le \delta_n\nu\|f(x_n) - f(w^*)\| + \delta_n\|\nu f(w^*) - \xi Bw^*\| + \theta_n\|x_n - w^*\| + (1-\theta_n)\Big\|\Big(I - \tfrac{\delta_n}{1-\theta_n}\xi B\Big)(T_{\zeta_n}u_n - w^*)\Big\|\\ &\le \delta_n\nu k\|x_n - w^*\| + \delta_n\|\nu f(w^*) - \xi Bw^*\| + \theta_n\|x_n - w^*\| + (1-\theta_n)\Big(1 - \tfrac{\delta_n\bar{\tau}}{1-\theta_n}\Big)\|T_{\zeta_n}u_n - w^*\|\\ &\le \delta_n\nu k\|x_n - w^*\| + \delta_n\|\nu f(w^*) - \xi Bw^*\| + \theta_n\|x_n - w^*\| + (1-\theta_n-\delta_n\bar{\tau})\|w_n - w^*\|\\ &\le (\delta_n\nu k + \theta_n)\|x_n - w^*\| + \delta_n\|\nu f(w^*) - \xi Bw^*\| + (1-\theta_n-\delta_n\bar{\tau})\big(\|x_n - w^*\| + \alpha_n\|x_n - x_{n-1}\|\big)\\ &= (1 - \delta_n(\bar{\tau}-\nu k))\|x_n - w^*\| + \delta_n(\bar{\tau}-\nu k)\bigg[\frac{\|\nu f(w^*) - \xi Bw^*\|}{\bar{\tau}-\nu k} + \frac{1-\theta_n-\delta_n\bar{\tau}}{\bar{\tau}-\nu k}\cdot\frac{\alpha_n}{\delta_n}\|x_n - x_{n-1}\|\bigg]. \end{aligned}\tag{37}$$
Putting
$$\sigma_n = \frac{1-\theta_n-\delta_n\bar{\tau}}{\bar{\tau}-\nu k}\cdot\frac{\alpha_n}{\delta_n}\|x_n - x_{n-1}\|,$$
it follows from Step 1 and condition (C4) that $\lim_{n\to\infty}\sigma_n = 0$, since $\alpha_n\|x_n - x_{n-1}\| \le \tau_n = o(\delta_n)$; thus $\{\sigma_n\}$ is bounded. Let
$$M_1 = \max\left\{\frac{\|\nu f(w^*) - \xi Bw^*\|}{\bar{\tau}-\nu k},\ \sup_{n\in\mathbb{N}}\sigma_n\right\}.$$
Then from (37) we obtain
$$\|x_{n+1} - w^*\| \le (1 - \delta_n(\bar{\tau}-\nu k))\|x_n - w^*\| + \delta_n(\bar{\tau}-\nu k)M_1. \tag{38}$$
Putting $\bar{a}_n = \|x_n - w^*\|$, $\bar{\delta}_n = \delta_n(\bar{\tau}-\nu k)$, $M = M_1$ and $c_n = 0$ in Lemma 7(i), it follows from (38) that $\{\|x_n - w^*\|\}$ is bounded. This implies that $\{x_n\}$ is bounded and, consequently, $\{w_n\}$, $\{y_n\}$, $\{z_n\}$ and $\{u_n\}$ are bounded too. $\square$
Lemma 11.
Let $\{w_{n_j}\}$ and $\{y_{n_j}\}$ be subsequences of the sequences $\{w_n\}$ and $\{y_n\}$ generated by Algorithm 1, respectively, such that $w_{n_j}\rightharpoonup\bar{x}\in\Omega$ and $\|w_{n_j} - y_{n_j}\|\to 0$ as $j\to\infty$. Then
(i) $0 \le \liminf_{j\to\infty}\langle Aw_{n_j},\, w - w_{n_j}\rangle$ for all $w\in\Omega$;
(ii) $\bar{x}\in S$.
Proof. (i) Since $y_{n_j} = P_\Omega(w_{n_j} - \beta_{n_j}Aw_{n_j})$, it follows from (15) that
$$\langle w_{n_j} - \beta_{n_j}Aw_{n_j} - y_{n_j},\, w - y_{n_j}\rangle \le 0 \quad \forall w\in\Omega.$$
Thus,
$$\frac{1}{\beta_{n_j}}\langle w_{n_j} - y_{n_j},\, w - y_{n_j}\rangle \le \langle Aw_{n_j},\, w - y_{n_j}\rangle = \langle Aw_{n_j},\, w_{n_j} - y_{n_j}\rangle + \langle Aw_{n_j},\, w - w_{n_j}\rangle \quad \forall w\in\Omega.$$
Hence
$$\frac{1}{\beta_{n_j}}\langle w_{n_j} - y_{n_j},\, w - y_{n_j}\rangle + \langle Aw_{n_j},\, y_{n_j} - w_{n_j}\rangle \le \langle Aw_{n_j},\, w - w_{n_j}\rangle \quad \forall w\in\Omega. \tag{40}$$
Next, we consider two possible cases based on $\{\beta_{n_j}\}$.
Case I: Assume that $\liminf_{j\to\infty}\beta_{n_j} = 0$. Let $v_{n_j} = P_\Omega(w_{n_j} - \beta_{n_j}\rho^{-1}Aw_{n_j})$. Note that $\beta_{n_j}\rho^{-1} > \beta_{n_j}$; hence, by Lemma 4, we obtain
$$\|w_{n_j} - v_{n_j}\| \le \rho^{-1}\|w_{n_j} - y_{n_j}\| \to 0 \quad \text{as } j\to\infty.$$
Moreover, $v_{n_j}\rightharpoonup\bar{x}\in\Omega$, which implies that $\{v_{n_j}\}$ is a bounded sequence. By the uniform continuity of $A$, we have
$$\|Aw_{n_j} - Av_{n_j}\| \to 0 \quad \text{as } j\to\infty. \tag{41}$$
Since $\beta_{n_j}\rho^{-1}$ does not satisfy the line search (22), we have
$$\frac{1}{\vartheta}\|Aw_{n_j} - Av_{n_j}\| > \frac{\|w_{n_j} - v_{n_j}\|}{\beta_{n_j}\rho^{-1}}. \tag{42}$$
Combining (41) and (42), we have
$$\lim_{j\to\infty}\frac{\|w_{n_j} - v_{n_j}\|}{\beta_{n_j}\rho^{-1}} = 0.$$
Moreover, from (15) we get
$$\langle w_{n_j} - \beta_{n_j}\rho^{-1}Aw_{n_j} - v_{n_j},\, w - v_{n_j}\rangle \le 0 \quad \forall w\in\Omega.$$
Hence
$$\frac{1}{\beta_{n_j}\rho^{-1}}\langle w_{n_j} - v_{n_j},\, w - v_{n_j}\rangle + \langle Aw_{n_j},\, v_{n_j} - w_{n_j}\rangle \le \langle Aw_{n_j},\, w - w_{n_j}\rangle \quad \forall w\in\Omega.$$
Taking the limit as $j\to\infty$ in the above inequality, we get
$$\liminf_{j\to\infty}\langle Aw_{n_j},\, w - w_{n_j}\rangle \ge 0.$$
Case II: On the other hand, suppose $\liminf_{j\to\infty}\beta_{n_j} > 0$. Passing to the limit in (40) and noting that $\|w_{n_j} - y_{n_j}\|\to 0$ as $j\to\infty$, we have
$$\liminf_{j\to\infty}\langle Aw_{n_j},\, w - w_{n_j}\rangle \ge 0 \quad \forall w\in\Omega.$$
This establishes (i). Next, we show (ii).
Now fix $y\in\Omega$ and let $\{\varepsilon_j\}\subset(0,1)$ be such that $\varepsilon_j\to 0$ as $j\to\infty$. Denote by $N$ the smallest non-negative integer such that
$$\langle Aw_{n_j},\, y - w_{n_j}\rangle + \varepsilon_j \ge 0 \quad \forall j\ge N,$$
where the existence of $N$ follows from (i). Thus
$$\langle Aw_{n_j},\, y + \varepsilon_j k_{n_j} - w_{n_j}\rangle \ge 0 \quad \forall j\ge N,$$
for some $k_{n_j}\in H_1$ satisfying $\langle Aw_{n_j},\, k_{n_j}\rangle = 1$ (such $k_{n_j}$ exists since $Aw_{n_j}\neq 0$ for these $j$). Since $A$ is pseudomonotone, we have
$$\langle A(y + \varepsilon_j k_{n_j}),\, y + \varepsilon_j k_{n_j} - w_{n_j}\rangle \ge 0 \quad \forall j\ge N.$$
This implies that
$$\langle Ay,\, y - w_{n_j}\rangle \ge \langle Ay - A(y + \varepsilon_j k_{n_j}),\, y + \varepsilon_j k_{n_j} - w_{n_j}\rangle - \varepsilon_j\langle Ay,\, k_{n_j}\rangle \quad \forall j\ge N. \tag{43}$$
As $j\to\infty$ and since $A$ is continuous, the right-hand side of (43) tends to zero; thus we obtain
$$\liminf_{j\to\infty}\langle Ay,\, y - w_{n_j}\rangle \ge 0 \quad \forall y\in\Omega.$$
Then
$$\langle Ay,\, y - \bar{x}\rangle = \lim_{j\to\infty}\langle Ay,\, y - w_{n_j}\rangle \ge 0 \quad \forall y\in\Omega.$$
Hence, in view of Lemma 5, we obtain that $\bar{x}\in S$. $\square$
Lemma 12.
Let $\{x_n\}$ be the sequence generated by Algorithm 1. Then the following inequality holds for all $w^*\in\Gamma$ and $n\in\mathbb{N}$:
$$S_{n+1} \le (1 - \bar{\alpha}_n)S_n + \bar{\alpha}_n b_n + c_n,$$
where $S_n = \|x_n - w^*\|^2$, $\bar{\alpha}_n = \frac{\delta_n(\bar{\tau} - 2\nu k)}{1 - \delta_n\nu k}$, $b_n = \frac{2\langle\nu f(w^*) - \xi Bw^*,\, x_{n+1} - w^*\rangle}{\bar{\tau} - 2\nu k}$ and $c_n = \frac{1-\theta_n-\delta_n\bar{\tau}}{1 - \delta_n\nu k}\alpha_n M_2\|x_n - x_{n-1}\|$ for some $M_2 > 0$.
Proof. Clearly,
$$\begin{aligned} \|w_n - w^*\|^2 &= \|x_n - w^* + \alpha_n(x_n - x_{n-1})\|^2\\ &= \|x_n - w^*\|^2 + \alpha_n^2\|x_n - x_{n-1}\|^2 + \alpha_n\big(\|x_n - w^*\|^2 + \|x_n - x_{n-1}\|^2 - \|x_{n-1} - w^*\|^2\big)\\ &\le \|x_n - w^*\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2 + \alpha_n\big(\|x_n - w^*\| + \|x_{n-1} - w^*\|\big)\|x_n - x_{n-1}\|\\ &\le \|x_n - w^*\|^2 + \alpha_n M_2\|x_n - x_{n-1}\|, \end{aligned}\tag{44}$$
where $M_2 = \sup_{n\ge 0}\{2\|x_n - x_{n-1}\| + \|x_n - w^*\| + \|x_{n-1} - w^*\|\}$.
Also,
$$\begin{aligned} \|x_{n+1} - w^*\|^2 &= \|\delta_n(\nu f(x_n) - \xi Bw^*) + \theta_n(x_n - w^*) + ((1-\theta_n)I - \delta_n\xi B)(T_{\zeta_n}u_n - w^*)\|^2\\ &\le \|\theta_n(x_n - w^*) + ((1-\theta_n)I - \delta_n\xi B)(T_{\zeta_n}u_n - w^*)\|^2 + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle\\ &\le \theta_n^2\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})^2\|T_{\zeta_n}u_n - w^*\|^2 + 2\theta_n(1-\theta_n-\delta_n\bar{\tau})\|x_n - w^*\|\,\|T_{\zeta_n}u_n - w^*\|\\ &\qquad + 2\delta_n\nu\|f(x_n) - f(w^*)\|\,\|x_{n+1} - w^*\| + 2\delta_n\langle\nu f(w^*) - \xi Bw^*,\, x_{n+1} - w^*\rangle\\ &\le \theta_n\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\|T_{\zeta_n}u_n - w^*\|^2 + 2\delta_n\nu k\|x_n - w^*\|\,\|x_{n+1} - w^*\| + 2\delta_n\langle\nu f(w^*) - \xi Bw^*,\, x_{n+1} - w^*\rangle\\ &\le \theta_n\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\|w_n - w^*\|^2 + \delta_n\nu k\big(\|x_n - w^*\|^2 + \|x_{n+1} - w^*\|^2\big) + 2\delta_n\langle\nu f(w^*) - \xi Bw^*,\, x_{n+1} - w^*\rangle. \end{aligned}\tag{45}$$
Using (44) in the expression above, we get
$$\|x_{n+1} - w^*\|^2 \le (1 - \delta_n(\bar{\tau}-\nu k))\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\alpha_n M_2\|x_n - x_{n-1}\| + \delta_n\nu k\|x_{n+1} - w^*\|^2 + 2\delta_n\langle\nu f(w^*) - \xi Bw^*,\, x_{n+1} - w^*\rangle.$$
Rearranging yields
$$\|x_{n+1} - w^*\|^2 \le \left(1 - \frac{\delta_n(\bar{\tau} - 2\nu k)}{1 - \delta_n\nu k}\right)\|x_n - w^*\|^2 + \frac{\delta_n(\bar{\tau} - 2\nu k)}{1 - \delta_n\nu k}\cdot\frac{2\langle\nu f(w^*) - \xi Bw^*,\, x_{n+1} - w^*\rangle}{\bar{\tau} - 2\nu k} + \frac{1-\theta_n-\delta_n\bar{\tau}}{1 - \delta_n\nu k}\alpha_n M_2\|x_n - x_{n-1}\|. \qquad\square$$
Now we present our main theorem.
Theorem 1.
Let $\{x_n\}$ be the sequence generated by Algorithm 1. Then $\{x_n\}$ converges strongly to a point $\bar{x}$, where $\bar{x} = P_\Gamma(I - \xi B + \nu f)(\bar{x})$ is the unique solution of the variational inequality
$$\langle(\xi B - \nu f)\bar{x},\, w - \bar{x}\rangle \ge 0 \quad \forall w\in\Gamma.$$
Proof. Let $w^*\in\Gamma$ and $S_n = \|x_n - w^*\|^2$. We consider the following two cases.
Case A: Suppose $\{S_n\}$ is monotonically non-increasing. Then, since $\{S_n\}$ is bounded, we obtain
$$S_n - S_{n+1} \to 0 \quad \text{as } n\to\infty.$$
From (36), (44) and (45), we have
$$\begin{aligned} \|x_{n+1} - w^*\|^2 &\le \theta_n\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\big[\|u_n - w^*\|^2 - \zeta_n(1-\varrho_1-\zeta_n)\|u_n - Tu_n\|^2\big] + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle\\ &\le (1 - \delta_n\bar{\tau})\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\alpha_n M_2\|x_n - x_{n-1}\| - (1-\theta_n-\delta_n\bar{\tau})\zeta_n(1-\varrho_1-\zeta_n)\|u_n - Tu_n\|^2\\ &\qquad + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle. \end{aligned}$$
Since $\delta_n\to 0$ and $\frac{\alpha_n}{\delta_n}\|x_n - x_{n-1}\|\to 0$ as $n\to\infty$, we have
$$(1-\theta_n-\delta_n\bar{\tau})\zeta_n(1-\varrho_1-\zeta_n)\|u_n - Tu_n\|^2 \le S_n - S_{n+1} - \delta_n\bar{\tau}\|x_n - w^*\|^2 + \delta_n(1-\theta_n-\delta_n\bar{\tau})\frac{\alpha_n}{\delta_n}M_2\|x_n - x_{n-1}\| + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle \to 0.$$
Using conditions (C2) and (C3), we obtain
$$\lim_{n\to\infty}\|u_n - Tu_n\| = 0.$$
Also, from (35), (44) and (45), we have
$$\begin{aligned} \|x_{n+1} - w^*\|^2 &\le \theta_n\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\Big[\|z_n - w^*\|^2 - \mu_n\big[(1-\varrho_2)\|(I-U)Dz_n\|^2 - \mu_n\|D^*(I-U)Dz_n\|^2\big]\Big]\\ &\qquad + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle\\ &\le \theta_n\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\big[\|x_n - w^*\|^2 + \alpha_n M_2\|x_n - x_{n-1}\|\big]\\ &\qquad - (1-\theta_n-\delta_n\bar{\tau})\mu_n\big[(1-\varrho_2)\|(I-U)Dz_n\|^2 - \mu_n\|D^*(I-U)Dz_n\|^2\big] + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle. \end{aligned}$$
This implies that
$$\mu_n\big[(1-\varrho_2)\|(I-U)Dz_n\|^2 - \mu_n\|D^*(I-U)Dz_n\|^2\big] \le S_n - S_{n+1} - \delta_n\bar{\tau}\|x_n - w^*\|^2 + \delta_n(1-\theta_n-\delta_n\bar{\tau})\frac{\alpha_n}{\delta_n}M_2\|x_n - x_{n-1}\| + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle \to 0. \tag{48}$$
From (25) and (48), we obtain
$$\lim_{n\to\infty}\|(I-U)Dz_n\| = 0. \tag{49}$$
Moreover, from (34), (44) and (45), we get
$$\begin{aligned} \|x_{n+1} - w^*\|^2 &\le \theta_n\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\Big[\|w_n - w^*\|^2 - \frac{2-\eta}{\eta}\|w_n - z_n\|^2\Big] + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle\\ &\le \theta_n\|x_n - w^*\|^2 + (1-\theta_n-\delta_n\bar{\tau})\big(\|x_n - w^*\|^2 + \alpha_n M_2\|x_n - x_{n-1}\|\big) - (1-\theta_n-\delta_n\bar{\tau})\frac{2-\eta}{\eta}\|w_n - z_n\|^2\\ &\qquad + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle. \end{aligned}$$
Hence, we have
$$(1-\theta_n-\delta_n\bar{\tau})\frac{2-\eta}{\eta}\|w_n - z_n\|^2 \le S_n - S_{n+1} - \delta_n\bar{\tau}\|x_n - w^*\|^2 + \delta_n(1-\theta_n-\delta_n\bar{\tau})\frac{\alpha_n}{\delta_n}M_2\|x_n - x_{n-1}\| + 2\delta_n\langle\nu f(x_n) - \xi Bw^*,\, x_{n+1} - w^*\rangle \to 0.$$
Since $\eta\in(0,2)$, $\delta_n\to 0$ and condition (C2) holds, we obtain
$$\lim_{n\to\infty}\|w_n - z_n\| = 0. \tag{50}$$
Also, from (19) and (33), we have
$$\langle w_n - y_n,\, \Theta(w_n,y_n)\rangle = \frac{1}{\eta^2\gamma_n}\|z_n - w_n\|^2 \le \frac{(1+\vartheta)^2}{\eta^2(1-\vartheta)}\|z_n - w_n\|^2.$$
Hence, using (27) in the above expression, we get
$$\|w_n - y_n\|^2 \le \frac{(1+\vartheta)^2}{\eta^2(1-\vartheta)^2}\|z_n - w_n\|^2.$$
This implies that
$$\lim_{n\to\infty}\|w_n - y_n\| = 0. \tag{51}$$
Clearly,
$$\lim_{n\to\infty}\|w_n - x_n\| = \lim_{n\to\infty}\delta_n\cdot\frac{\alpha_n}{\delta_n}\|x_n - x_{n-1}\| = 0. \tag{52}$$
Then, from (51) and (52), we have
$$\lim_{n\to\infty}\|y_n - x_n\| \le \lim_{n\to\infty}\big(\|w_n - x_n\| + \|y_n - w_n\|\big) = 0. \tag{53}$$
Similarly, from (50) and (52), we have
$$\lim_{n\to\infty}\|z_n - x_n\| = 0. \tag{54}$$
On the other hand, from (49) we have
$$\|u_n - z_n\| = \mu_n\|D^*(I-U)Dz_n\| \le \mu_n\|D\|\,\|(I-U)Dz_n\| \to 0.$$
Hence
$$\lim_{n\to\infty}\|u_n - x_n\| = 0. \tag{55}$$
Moreover,
$$\|T_{\zeta_n}u_n - u_n\| = \|(1-\zeta_n)u_n + \zeta_n Tu_n - u_n\| \le \zeta_n\|u_n - Tu_n\| \to 0,$$
and
$$\|x_{n+1} - u_n\| = \|\delta_n(\nu f(x_n) - \xi Bu_n) + \theta_n(x_n - u_n) + ((1-\theta_n)I - \delta_n\xi B)(T_{\zeta_n}u_n - u_n)\| \le \delta_n\|\nu f(x_n) - \xi Bu_n\| + \theta_n\|x_n - u_n\| + (1-\theta_n-\delta_n\bar{\tau})\|T_{\zeta_n}u_n - u_n\| \to 0.$$
Hence
$$\|x_{n+1} - x_n\| \le \|x_{n+1} - u_n\| + \|u_n - x_n\| \to 0. \tag{56}$$
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j}\rightharpoonup\bar{x}\in\Omega$. It follows from (52), (53) and (54) that $w_{n_j}\rightharpoonup\bar{x}$, $y_{n_j}\rightharpoonup\bar{x}$ and $z_{n_j}\rightharpoonup\bar{x}$, respectively. Since $\|w_n - y_n\|\to 0$ and $w_{n_j}\rightharpoonup\bar{x}$, it follows from Lemma 11 that $\bar{x}\in S$. Also, since $\|u_{n_j} - x_{n_j}\|\to 0$, we have $u_{n_j}\rightharpoonup\bar{x}$; since $\|u_n - Tu_n\|\to 0$, it follows from the demiclosedness of $I - T$ at zero that $\bar{x}\in F(T)$. Moreover, since $D$ is a bounded linear operator, $Dz_{n_j}\rightharpoonup D\bar{x}\in H_2$; then it follows from (49) and the demiclosedness of $I - U$ that $D\bar{x}\in F(U)$. Therefore $\bar{x}\in\Gamma$. We now show that the sequence $\{x_n\}$ converges strongly to the point $\hat{w} = P_\Gamma(I - \xi B + \nu f)(\hat{w})$. It follows from (15) and (56) that
$$\limsup_{n\to\infty}\langle\nu f(\hat{w}) - \xi B\hat{w},\, x_{n+1} - \hat{w}\rangle = \lim_{j\to\infty}\langle\nu f(\hat{w}) - \xi B\hat{w},\, x_{n_j+1} - \hat{w}\rangle = \langle\nu f(\hat{w}) - \xi B\hat{w},\, \bar{x} - \hat{w}\rangle = \langle(I - \xi B + \nu f)\hat{w} - \hat{w},\, \bar{x} - \hat{w}\rangle \le 0.$$
Hence, from Lemmas 7 and 12, we have $\|x_n - \hat{w}\|\to 0$ as $n\to\infty$. Thus $\{x_n\}$ converges strongly to $\hat{w}$.
Case B: Suppose $\{S_n\}$ is not monotonically non-increasing. Let $\tau:\mathbb{N}\to\mathbb{N}$ be the map defined by
$$\tau(n) = \max\{k\in\mathbb{N} : k\le n,\ S_k\le S_{k+1}\}$$
for all $n\ge n_0$ (for some $n_0$ large enough). By Lemma 8, $\tau$ is a non-decreasing sequence such that $\tau(n)\to\infty$ as $n\to\infty$ and
$$S_{\tau(n)} \le S_{\tau(n)+1}$$
for all $n\ge n_0$. Hence, from Lemma 12, we have
$$0 \le S_{\tau(n)+1} - S_{\tau(n)} \le (1 - \bar{\alpha}_{\tau(n)})S_{\tau(n)} + \bar{\alpha}_{\tau(n)}b_{\tau(n)} + c_{\tau(n)} - S_{\tau(n)},$$
where
$$\bar{\alpha}_{\tau(n)} = \frac{\delta_{\tau(n)}(\bar{\tau} - 2\nu k)}{1 - \delta_{\tau(n)}\nu k},\qquad b_{\tau(n)} = \frac{2\langle\nu f(w^*) - \xi Bw^*,\, x_{\tau(n)+1} - w^*\rangle}{\bar{\tau} - 2\nu k},\qquad c_{\tau(n)} = \frac{(1-\theta_{\tau(n)}-\delta_{\tau(n)}\bar{\tau})\alpha_{\tau(n)}M_2\|x_{\tau(n)} - x_{\tau(n)-1}\|}{1 - \delta_{\tau(n)}\nu k}$$
for some $M_2 > 0$. Then we have
$$S_{\tau(n)} \le b_{\tau(n)} + \frac{c_{\tau(n)}}{\bar{\alpha}_{\tau(n)}}.$$
Following arguments similar to those in Case A, we can show that
$$\|x_{\tau(n)} - y_{\tau(n)}\|\to 0,\quad \|x_{\tau(n)} - z_{\tau(n)}\|\to 0,\quad \|x_{\tau(n)} - u_{\tau(n)}\|\to 0,\quad \|(I-T)u_{\tau(n)}\|\to 0,\quad \|(I-U)Dz_{\tau(n)}\|\to 0,\quad \|x_{\tau(n)+1} - x_{\tau(n)}\|\to 0 \tag{57}$$
and
$$\limsup_{n\to\infty}\langle\nu f(w^*) - \xi Bw^*,\, x_{\tau(n)+1} - w^*\rangle \le 0. \tag{58}$$
Also,
$$\lim_{n\to\infty}\frac{c_{\tau(n)}}{\bar{\alpha}_{\tau(n)}} = \lim_{n\to\infty}\frac{1-\theta_{\tau(n)}-\delta_{\tau(n)}\bar{\tau}}{\bar{\tau} - 2\nu k}\cdot\frac{\alpha_{\tau(n)}}{\delta_{\tau(n)}}M_2\|x_{\tau(n)} - x_{\tau(n)-1}\| = 0. \tag{59}$$
Hence, from (57)–(59), we have
$$\lim_{n\to\infty}\|x_{\tau(n)} - w^*\| = 0,$$
which implies that
$$\lim_{n\to\infty}\|x_{\tau(n)+1} - w^*\| = 0.$$
Moreover, for all $n\ge n_0$, we have $S_n\le S_{\tau(n)+1}$ if $n\neq\tau(n)$ (that is, $\tau(n) < n$), because $S_j > S_{j+1}$ for $\tau(n)+1\le j\le n-1$. Therefore, for all $n\ge n_0$,
$$0 \le S_n \le \max\{S_{\tau(n)}, S_{\tau(n)+1}\} = S_{\tau(n)+1}.$$
So $\lim_{n\to\infty}S_n = 0$; that is, $\{x_n\}$ converges strongly to $w^*$. This completes the proof. $\square$
The following results can be obtained as consequences of our main result.
Corollary 1.
Let $H_1, H_2$ be real Hilbert spaces, $\Omega$ a nonempty closed convex subset of $H_1$, $D:H_1\to H_2$ a bounded linear operator, $A:H_1\to H_1$ a pseudomonotone operator which is weakly sequentially continuous on $\Omega$, and let $T:H_1\to H_1$ and $U:H_2\to H_2$ be quasi-nonexpansive mappings. Let $f:H_1\to H_1$ be a contraction with constant $k\in(0,1)$ and $B:H_1\to H_1$ a Lipschitz and strongly monotone operator with coefficients $\lambda\in(0,1)$ and $\sigma>0$, respectively, such that $\nu k < \bar{\tau} = 1 - \sqrt{1 - \xi(2\sigma - \xi\lambda^2)}$ for $\nu\ge 0$ and $\xi\in\left(0,\frac{2\sigma}{\lambda^2}\right)$. Suppose the solution set $\Gamma = \{x\in\Omega : x\in S\cap F(T) \text{ and } Dx\in F(U)\}$ is nonempty. Let $\{\delta_n\}, \{\theta_n\}, \{\tau_n\}$ and $\{\zeta_n\}$ be sequences in $(0,1)$ such that conditions (C1)–(C4) are satisfied. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a point $\bar{x}$, where $\bar{x} = P_\Gamma(I - \xi B + \nu f)(\bar{x})$ is the unique solution of the variational inequality
$$\langle(\xi B - \nu f)\bar{x},\, w - \bar{x}\rangle \ge 0 \quad \forall w\in\Gamma.$$
Also, by setting $H_1 = H_2 = H$ (a real Hilbert space) and $U = D = I$ (the identity mapping on $H$), we obtain the following result for finding a common solution of the pseudomonotone VIP (1) and a fixed point of a demicontractive mapping.
Corollary 2.
Let $H$ be a real Hilbert space, $\Omega$ a nonempty closed convex subset of $H$, $A:H\to H$ a pseudomonotone operator which is weakly sequentially continuous on $\Omega$, and let $T:H\to H$ be a $\varrho$-demicontractive mapping with $\varrho\in[0,1)$ such that $I - T$ is demiclosed at zero. Let $f:H\to H$ be a contraction with constant $k\in(0,1)$ and $B:H\to H$ a Lipschitz and strongly monotone operator with coefficients $\lambda\in(0,1)$ and $\sigma>0$, respectively, such that $\nu k < \bar{\tau} = 1 - \sqrt{1 - \xi(2\sigma - \xi\lambda^2)}$ for $\nu\ge 0$ and $\xi\in\left(0,\frac{2\sigma}{\lambda^2}\right)$. Suppose the solution set $\Gamma = \{x\in\Omega : x\in S\cap F(T)\}$ is nonempty. Let $\{\delta_n\}, \{\theta_n\}, \{\tau_n\}$ and $\{\zeta_n\}$ be sequences in $(0,1)$ such that conditions (C1)–(C4) are satisfied. Then the sequence $\{x_n\}$ generated by the following Algorithm 2 converges strongly to a point $\bar{x}$, where $\bar{x} = P_\Gamma(I - \xi B + \nu f)(\bar{x})$ is the unique solution of the variational inequality
$$\langle(\xi B - \nu f)\bar{x},\, w - \bar{x}\rangle \ge 0 \quad \forall w\in\Gamma.$$
Algorithm 2: GVIPCM
Initialization: Choose $\eta\in(0,2)$, $\rho,\vartheta\in(0,1)$ and $\alpha>3$; pick $x_0, x_1\in H$ arbitrarily.
Iterative steps: Given the iterates $x_{n-1}$ and $x_n$, for each $n\ge 1$ calculate the iterate $x_{n+1}$ as follows.
Step 1: Choose $\alpha_n$ such that $0\le\alpha_n\le\bar{\alpha}_n$, where
$$\bar{\alpha}_n = \begin{cases} \min\left\{\frac{n-1}{n+\alpha-1},\, \frac{\tau_n}{\|x_n - x_{n-1}\|}\right\} & \text{if } x_n\neq x_{n-1},\\ \frac{n-1}{n+\alpha-1} & \text{otherwise}. \end{cases}$$
Step 2: Compute
$$w_n = x_n + \alpha_n(x_n - x_{n-1}),\qquad y_n = P_\Omega(w_n - \beta_n Aw_n),$$
where $\beta_n = \rho^{\ell_n}$ and $\ell_n$ is the smallest non-negative integer satisfying
$$\beta_n\|Aw_n - Ay_n\| \le \vartheta\|w_n - y_n\|.$$
Step 3: Calculate
$$\Theta(w_n,y_n) = w_n - y_n - \beta_n(Aw_n - Ay_n),\qquad \gamma_n = \frac{\langle w_n - y_n,\,\Theta(w_n,y_n)\rangle}{\|\Theta(w_n,y_n)\|^2},\qquad z_n = w_n - \eta\gamma_n\Theta(w_n,y_n).$$
Step 4: Calculate $x_{n+1}$ as follows:
$$x_{n+1} = \delta_n\nu f(x_n) + \theta_n x_n + ((1-\theta_n)I - \delta_n\xi B)T_{\zeta_n}z_n,$$
where $T_{\zeta_n} = (1-\zeta_n)I + \zeta_n T$ for $\zeta_n\in(0,1)$.

4. Application

In this section, we apply our result to finding solutions of the Split Null Point Problem (SNPP) in real Hilbert spaces.
We first recall some basic concepts concerning monotone operators.
Definition 2.
  • A multivalued mapping $\varphi: H\to 2^H$ is called monotone if, for all $u,v\in H$,
$$\langle u - v,\, f - g\rangle \ge 0 \quad \forall f\in\varphi(u),\ g\in\varphi(v);$$
  • The graph of $\varphi$ is defined by
$$Gr(\varphi) = \{(u,v)\in H\times H : v\in\varphi(u)\};$$
  • When $Gr(\varphi)$ is not properly contained in the graph of any other monotone operator, we say that $\varphi$ is maximal monotone. Equivalently, $\varphi$ is maximal if and only if, for $(u,f)\in H\times H$, $\langle u - v,\, f - g\rangle \ge 0$ for all $(v,g)\in Gr(\varphi)$ implies that $f\in\varphi(u)$.
The resolvent operator associated with $\varphi$ and $\lambda>0$ is the mapping $J_\lambda^\varphi: H\to H$ defined by
$$J_\lambda^\varphi(x) = (I + \lambda\varphi)^{-1}(x), \quad x\in H.$$
It is well known that the resolvent operator $J_\lambda^\varphi$ is single-valued and nonexpansive, and that the set of zeros of $\varphi$, that is, $\varphi^{-1}(0) = \{x\in H : 0\in\varphi(x)\}$, coincides with the set of fixed points of $J_\lambda^\varphi$; see, for instance, Reference [46].
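As a concrete instance of a resolvent, consider $\varphi = \partial\|\cdot\|_1$, the subdifferential of the $\ell^1$-norm (our illustrative choice, not from the paper); its resolvent is the well-known soft-thresholding map, and its fixed points are exactly the zeros of $\varphi$:

```python
import numpy as np

# Resolvent J_lambda of phi = subdifferential of the l1-norm:
# J_lambda(x) = argmin_u lam*||u||_1 + 0.5*||u - x||^2  (soft-thresholding).
def resolvent_l1(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([1.5, -0.2, 0.7])
print(resolvent_l1(x, 0.5))          # [1.0, 0.0, 0.2]
# Fixed points of J_lambda = zeros of phi: here 0 is in d||u||_1 iff u = 0,
# and indeed u = 0 is the only point left unchanged by soft-thresholding.
print(resolvent_l1(np.zeros(3), 0.5))
```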
Let $H_1$ and $H_2$ be real Hilbert spaces and $D:H_1\to H_2$ a bounded linear operator. Let $F:H_1\to 2^{H_1}$ and $G:H_2\to 2^{H_2}$ be maximal monotone operators. The Split Null Point Problem (SNPP) is formulated as:
$$\text{find } x^*\in H_1 \text{ such that } 0\in F(x^*) \text{ and } y^* = Dx^*\in H_2 \text{ solves } 0\in G(y^*). \tag{63}$$
We denote the set of solutions of the SNPP (63) by $\Delta$. The SNPP covers many other important problems, such as the split variational inequality problem, the split equilibrium problem and the split feasibility problem. The split feasibility problem was first introduced by Censor and Elfving [47] and has found numerous applications in many real-life problems such as intensity-modulated radiation therapy, medical image phase retrieval, tomography and image reconstruction; see, for instance, References [46,48,49,50,51,52,53]. Using our Algorithm 1, we obtain the following method for solving the SNPP.
Theorem 2.
Let $H_1, H_2$ be real Hilbert spaces, $\Omega$ a nonempty closed convex subset of $H_1$, $D:H_1\to H_2$ a bounded linear operator, $A:H_1\to H_1$ a pseudomonotone operator which is weakly sequentially continuous on $\Omega$, and let $F:H_1\to 2^{H_1}$ and $G:H_2\to 2^{H_2}$ be maximal monotone operators. Let $f:H_1\to H_1$ be a contraction with constant $k\in(0,1)$ and $B:H_1\to H_1$ a Lipschitz and strongly monotone operator with coefficients $\lambda\in(0,1)$ and $\sigma>0$, respectively, such that $\nu k < \bar{\tau} = 1 - \sqrt{1 - \xi(2\sigma - \xi\lambda^2)}$ for $\nu\ge 0$ and $\xi\in\left(0,\frac{2\sigma}{\lambda^2}\right)$. Suppose the solution set
$$\Gamma = \{x\in\Omega : x\in S\cap\Delta\}$$
is nonempty. Let $\{\delta_n\}, \{\theta_n\}, \{\tau_n\}$ and $\{\zeta_n\}$ be sequences in $(0,1)$ such that conditions (C1)–(C4) are satisfied with $\varrho_1 = 0$ in (C3). Then the sequence $\{x_n\}$ generated by the following Algorithm 3 converges strongly to a point $\bar{x}$, where $\bar{x} = P_\Gamma(I - \xi B + \nu f)(\bar{x})$ is the unique solution of the variational inequality
$$\langle(\xi B - \nu f)\bar{x},\, w - \bar{x}\rangle \ge 0 \quad \forall w\in\Gamma.$$
Algorithm 3: GVIPCM
Initialization: Choose $\eta\in(0,2)$, $\rho,\vartheta\in(0,1)$, $\epsilon>0$ and $\alpha>3$; pick $x_0, x_1\in H_1$ arbitrarily.
Iterative steps: Given the iterates $x_{n-1}$ and $x_n$, for each $n\ge 1$ calculate the iterate $x_{n+1}$ as follows.
Step 1: Choose $\alpha_n$ such that $0\le\alpha_n\le\bar{\alpha}_n$, where
$$\bar{\alpha}_n = \begin{cases} \min\left\{\frac{n-1}{n+\alpha-1},\, \frac{\tau_n}{\|x_n - x_{n-1}\|}\right\} & \text{if } x_n\neq x_{n-1},\\ \frac{n-1}{n+\alpha-1} & \text{otherwise}. \end{cases}$$
Step 2: Compute
$$w_n = x_n + \alpha_n(x_n - x_{n-1}),\qquad y_n = P_\Omega(w_n - \beta_n Aw_n),$$
where $\beta_n = \rho^{\ell_n}$ and $\ell_n$ is the smallest non-negative integer satisfying
$$\beta_n\|Aw_n - Ay_n\| \le \vartheta\|w_n - y_n\|.$$
Step 3: Calculate
$$\Theta(w_n,y_n) = w_n - y_n - \beta_n(Aw_n - Ay_n),\qquad \gamma_n = \frac{\langle w_n - y_n,\,\Theta(w_n,y_n)\rangle}{\|\Theta(w_n,y_n)\|^2},\qquad z_n = w_n - \eta\gamma_n\Theta(w_n,y_n).$$
Step 4: Calculate $x_{n+1}$ as follows:
$$u_n = (I - \mu_n D^*(I - J_\lambda^G)D)z_n,\qquad x_{n+1} = \delta_n\nu f(x_n) + \theta_n x_n + ((1-\theta_n)I - \delta_n\xi B)\Lambda_{\zeta_n}u_n,$$
where $\Lambda_{\zeta_n} = (1-\zeta_n)I + \zeta_n J_\lambda^F$ for $\zeta_n\in(0,1)$ and
$$\mu_n = \begin{cases} \min\left\{\epsilon,\, \frac{\|(I - J_\lambda^G)Dz_n\|^2}{\|D^*(I - J_\lambda^G)Dz_n\|^2}\right\} & \text{if } Dz_n\neq J_\lambda^G(Dz_n),\\ \epsilon & \text{otherwise}. \end{cases}$$
Proof. Set $T = J_\lambda^F$ and $U = J_\lambda^G$ in Algorithm 1. Then $T$ and $U$ are nonexpansive, and thus $0$-demicontractive, and $I - T$, $I - U$ are demiclosed at zero. The desired result therefore follows along the lines of the proof of Theorem 1. $\square$

5. Numerical Examples

In this section, we give some numerical examples to show the performance and efficiency of the proposed algorithm.
Example 1.
First, we consider a generalized Nash–Cournot oligopolistic equilibrium problem in electricity markets, described as follows.
Suppose there are $m$ companies, with company $l$ possessing $I_l$ generating units. Denote by $u$ the vector whose entry $u_j$ corresponds to the power generated by unit $j$, and let $p_l(t)$ denote the price, which is assumed to be a decreasing affine function of $t = \sum_{j=1}^N u_j$, where $N$ is the number of all generating units; that is, $p_l(t) = \alpha_l - \delta_l t$. The profit made by company $l$ is given by $f_l(u) = p_l(t)\sum_{j\in I_l}u_j - \sum_{j\in I_l}c_j(u_j)$, where $c_j(u_j)$ denotes the cost of generating $u_j$ by unit $j$. Denote by $\Delta_l$ the strategy set of company $l$; thus the strategy set of the model is $\Omega = \Delta_1\times\Delta_2\times\cdots\times\Delta_m$. Each company $l$ seeks to maximize its profit by choosing a corresponding production level, under the presumption that the production of the other companies is a parametric input. A commonly used approach for treating this model is the Nash equilibrium concept (see References [54,55]).
Recall that a point $u^*\in\Omega = \Delta_1\times\Delta_2\times\cdots\times\Delta_m$ is called an equilibrium point of the Nash equilibrium model if
$$f_l(u^*) \ge f_l(u^*[u_l]) \quad \forall u_l\in\Delta_l,\ l = 1,2,\ldots,m,$$
where the vector $u^*[u_l]$ stands for the vector obtained from $u^*$ by replacing $u_l^*$ with $u_l$. Defining
$$f(u,v) = G(u,v) - G(u,u), \quad \text{with } G(u,v) = \sum_{l=1}^m f_l(u[v_l]),$$
the problem of finding a Nash equilibrium point of the model can be formulated as
$$\text{find } u^*\in\Omega : f(u^*, u) \ge 0 \quad \forall u\in\Omega. \tag{66}$$
Furthermore, we suppose that the cost $c_j$ for each unit $j$ used in production and the environmental fee $g$ are increasing convex functions; this implies that both the cost $c_j$ and the environmental fee $g$ for producing a unit of production by each unit $j$ increase as the quantity of production increases. Under this assumption, Problem (66) can be formulated as the variational inequality
$$\text{find } u\in\Omega : \langle\bar{D}u - \alpha + \nabla\varphi(u),\, v - u\rangle \ge 0 \quad \forall v\in\Omega,$$
where $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_m)^T$,
where α = ( α 1 , α 2 , , α m ) T ,
D ¯ 1 = δ 1 0 0 0 0 δ 2 0 0 0 0 0 δ m , D ¯ = 0 δ 1 δ 1 δ 1 δ 2 0 δ 2 δ 2 δ m δ m δ m δ m ,
and
φ ( u ) = u T D ¯ 1 u + j = 1 N c j ( u j ) .
Note that the function $c_j$ is convex and differentiable for each $j$. In this case, we test the proposed Algorithm 1 with the cost function given by
$$c_j(u_j) = \frac{1}{2}u_j^T\bar{D}u_j + d^Tu_j.$$
The matrix $\bar{D}$, the vector $d$ and the parameters $\delta_j$ ($j = 1,\ldots,m$) are randomly generated in the intervals $[1,30]$, $[1,30]$ and $(0,1]$, respectively. Also, we use different choices of $N = 5, 10, 30, 50$ with different initial points $x_0, x_1$ generated randomly in $[1,30]$, and $m = 10$. Moreover, we assume that each company has the same production level as the other companies, that is,
$$\Delta_l = \{u_l : 1 \le u_l \le 30\}, \quad l = 1, 2, \ldots, 10.$$
We take $T = U = P_\Omega$ (which is $0$-demicontractive), $D = I$, $f(x) = \frac{x}{2}$ and $Bx = 2x$ for all $x\in\mathbb{R}^N$, with $\eta = 1.99$, $\rho = 0.01$, $\vartheta = 0.35$, $\alpha = 0.0001$, $\epsilon = 10^{-5}$, $\ell_n = 2$, $\delta_n = \frac{1}{n+1}$, $\tau_n = \frac{1}{(n+1)^2}$, $\theta_n = \frac{3n}{8n+3}$ and $\zeta_n = \frac{1}{2}$ for all $n\in\mathbb{N}$. We compare the performance of our Algorithm 1 with Algorithm (5) of Cholamjiak et al. [27] and Algorithm (12) of Dong et al. [32]. In (5), we take $\alpha_n = \frac{1}{n+1}$, $\theta_n = \frac{1}{(n+1)^2}$, $\delta_n = \frac{3n}{8n+3}$, $\beta = 0.01$ and $\eta = 1.99$. Also, for (12), we choose $\theta_n = 0.02$, $\beta = 0.01$, $\eta = 1.9$, $S = P_\Omega$ and $\alpha_n = \frac{1}{n+1}$. The computations were stopped when each algorithm satisfied $\|x_{n+1} - x_n\| < 10^{-4}$. The numerical results are shown in Table 1 and Figure 1; in Figure 1, Algorithm 3.1 refers to Algorithm 1.
Example 2.
Next, we consider the min-max problem, which can be formulated as a variational inequality problem with a skew-symmetric matrix. This problem is to determine the shortest network in a given full Steiner topology (see Reference [56], Example 1). The compact form of the min-max problem is given as
$$\min_{x\in\mathcal{R}}\max_{z\in\mathcal{B}} z^T(Ax - b), \tag{67}$$
where
$$x^T = \big(x_{[1]}^T, \ldots, x_{[8]}^T\big),\qquad z^T = \big(z_{[1]}^T, \ldots, z_{[17]}^T\big),\qquad \mathcal{R} = \underbrace{\mathbb{R}^2\times\cdots\times\mathbb{R}^2}_{8\ \text{times}},\qquad \mathcal{B} = \underbrace{B_2\times\cdots\times B_2}_{17\ \text{times}},$$
$A$ is a $17\times 8$ block matrix whose nonzero blocks are $\pm I_2$ (the $2\times 2$ identity matrix), with the block structure given in Reference [56], and $b^T = \big(b_{[1]}^T, b_{[2]}^T, \ldots, b_{[9]}^T, b_{[10]}^T, 0, \ldots, 0\big)$.
Problem (67) is equivalent to the following linear variational inequality (see Reference [15]):
$$LVI(\Omega, M, q):\quad \text{find } u^*\in\Omega \text{ such that } (u - u^*)^T(Mu^* + q) \ge 0 \quad \forall u\in\Omega, \tag{68}$$
where
$$u = \begin{pmatrix}u_1\\ u_2\end{pmatrix},\qquad M = \begin{pmatrix}0 & A^T\\ -A & 0\end{pmatrix},\qquad q = \begin{pmatrix}0\\ b\end{pmatrix},\qquad \Omega = \mathcal{R}\times\mathcal{B}.$$
Note that $M$ is skew-symmetric, so the LVI is monotone; moreover, the mapping $Au = Mu + q$ in (68) is Lipschitz continuous. We set $B_2 = \{x\in\mathbb{R}^2 : \|x\|\le 1\}$, and define the mappings $T:\mathbb{R}^2\to\mathbb{R}^2$ and $U:\mathbb{R}^2\to\mathbb{R}^2$ by
$$T(u_1, u_2) = \begin{cases}(u_1, u_2) & \text{if } u_1 < 0,\\ (-2u_1, u_2) & \text{if } u_1 \ge 0,\end{cases}$$
and
$$Ux = P_\Delta(x) = \begin{cases} d + r\dfrac{x - d}{\|x - d\|} & \text{if } x\notin\Delta,\\ x & \text{if } x\in\Delta,\end{cases}$$
where $\Delta$ is the closed ball in $\mathbb{R}^2$ centered at $d\in\mathbb{R}^2$ with radius $r>0$, that is, $\Delta = \{x\in\mathbb{R}^2 : \|x - d\|\le r\}$. It is easy to see that $T$ is $\frac{1}{3}$-demicontractive and not nonexpansive, while $U$ is nonexpansive and thus $\frac{1}{3}$-demicontractive. We compare our method with the projection and contraction method of Cai et al. [15]. We take $\eta = 1.78$, $\rho = 0.02$, $\vartheta = 0.67$, $\alpha = \epsilon = 10^{-4}$, $\ell_n = 5$, $\delta_n = \frac{1}{(n+1)^{0.4}}$, $\tau_n = \delta_n^2$, $\theta_n = \frac{2n}{5n+7}$, $\zeta_n = 0.45$, $f(x) = \frac{x}{2}$, $Dx = x$, and choose the various initial values as follows:
Case I: $x_0 = [0, 5]$, $x_1 = [1, 5]$;
Case II: $x_0 = [2, 2]$, $x_1 = [5, 5]$;
Case III: $x_0 = [3, 7]$, $x_1 = [0, 9]$;
Case IV: $x_0 = [1, 8]$, $x_1 = [3, 4]$.
For the algorithm of Reference [15], we used Correction of PC Method 1 and took $\gamma = 1.79$. We used $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion. The numerical results are shown in Table 2 and Figure 2.
Finally, we give an example in an infinite-dimensional space to support our strong convergence result.
Example 3.
Let $H_1 = H_2 = L^2([0,1])$ with inner product $\langle x, y\rangle = \int_0^1 x(t)y(t)\,dt$ and norm $\|x\| := \left(\int_0^1 |x(t)|^2\,dt\right)^{1/2}$ for $x, y\in L^2([0,1])$. Let $\Omega = \{x\in L^2([0,1]) : \|x\|\le 1\}$ and let $A: L^2([0,1])\to L^2([0,1])$ be given by $Ax(t) = \max\{0, x(t)\}$. Then $A$ is monotone and uniformly continuous, and
$$P_\Omega(x) = \begin{cases}\dfrac{x}{\|x\|} & \text{if } \|x\| > 1,\\ x & \text{if } \|x\|\le 1.\end{cases}$$
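As a practical note, a sketch of this example on a uniform grid (grid size and quadrature are our assumptions) shows how the $L^2$ norm, the operator $A$ and the projection $P_\Omega$ can be discretized:

```python
import numpy as np

# Discretize L^2([0,1]) on N nodes so that <x, y> ~ h * sum(x * y).
N = 200; h = 1.0 / N
t = np.linspace(0.0, 1.0, N)
norm_L2 = lambda x: np.sqrt(h * np.sum(x * x))
A = lambda x: np.maximum(0.0, x)                         # A x(t) = max{0, x(t)}
proj_Omega = lambda x: x / norm_L2(x) if norm_L2(x) > 1 else x

x0 = t**2 - 2*t + 3                                      # initial point, Case I
print(norm_L2(x0), norm_L2(proj_Omega(x0)))              # projection lands in Omega
```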
We define the mappings $T = U$ by $(Tx)(t) = \int_0^1 \frac{x(s)}{2}\,ds$ for $t\in[0,1]$ and $x\in L^2([0,1])$; then $T = U$ is $0$-demicontractive. We take $\eta = 1.75$, $\vartheta = 0.48$, $\rho = 0.01$, $\alpha = \epsilon = 10^{-3}$, $\ell_n = 2$, $\delta_n = \frac{1}{n+1}$, $\tau_n = \frac{1}{n+1}$, $\theta_n = \frac{3n}{7n+9}$, $\zeta_n = \frac{2n}{5n+1}$, $f(x) = \frac{x}{2}$, $Dx = x$. We also compare the performance of our Algorithm 1 with Algorithm (5) of Reference [27] and Algorithm (12) of Reference [32]. For (5), we take $\eta = 1.75$, $\beta = 0.55$, $\theta = 10^{-3}$, $\alpha_n = \frac{1}{n+1}$, $\tau_n = \frac{1}{n+1}$, $\delta_n = \frac{2n}{5n+7}$. Also, for (12), we take $\eta = 1.75$, $\beta = 0.55$, $\theta = 0.001$, $\alpha_n = \frac{1}{n+1}$. We test each algorithm using the following initial values and $\|x_{n+1} - x_n\| < 10^{-5}$ as the stopping criterion:
Case I: $x_0 = t^2 - 2t + 3$, $x_1 = (2t+1)^3$;
Case II: $x_0 = \exp(3t)$, $x_1 = \sin(2t)/3$;
Case III: $x_0 = \cos(5t)/10$, $x_1 = \sin(2t)$;
Case IV: $x_0 = t^3 + t - 1$, $x_1 = \exp(4t)/4$.
The numerical results are shown in Table 3 and Figure 3.

6. Conclusions

In this paper, we presented a new generalized inertial viscosity approximation method for solving pseudomonotone variational inequality and split common fixed point problems in real Hilbert spaces. The algorithm is designed so that the stepsize of the variational inequality part is determined by a line-search process, and its convergence does not require the norm of the bounded linear operator. A strong convergence result is proved under mild conditions, and some numerical experiments are given to illustrate the efficiency and accuracy of the proposed method. This result improves and extends the results of References [16,17,18,26,27,32] and other related results in the literature.

Author Contributions

Conceptualization, L.O.J.; methodology, L.O.J.; validation, M.A. and L.O.J.; formal analysis, L.O.J.; writing—original draft preparation, L.O.J.; writing—review and editing, M.A.; visualization, L.O.J.; supervision, M.A.; project administration, M.A.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sefako Makgatho Health Sciences University Postdoctoral Research Fund, and the APC was funded by the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Pretoria, South Africa.

Acknowledgments

The authors acknowledge with thanks, the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University for making their facilities available for the research.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Alber, Y.; Ryazantseva, I. Nonlinear Ill-Posed Problems of Monotone Type; Springer: Dordrecht, The Netherlands, 2006. [Google Scholar]
  2. Bigi, G.; Castellani, M.; Pappalardo, M.; Passacantando, M. Nonlinear Programming Technique for Equilibria; Spinger Nature: Cham, Switzerland, 2019. [Google Scholar]
  3. Shehu, Y. Single projection algorithm for variational inequalities in Banach spaces with applications to contact problems. Acta Math. Sci. 2020, 40, 1045–1063. [Google Scholar] [CrossRef]
  4. Ceng, L.C.; Hadjisavas, N.; Weng, N.C. Strong convergence theorems by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46, 635–646. [Google Scholar] [CrossRef]
  5. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for variational inequality problems in Euclidean space. Optimization 2012, 61, 119–1132. [Google Scholar] [CrossRef]
  6. Denisov, S.; Semenov, V.; Chabak, L. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765. [Google Scholar] [CrossRef]
  7. Fang, C.; Chen, S. Some extragradient algorithms for variational inequalities. In Advances in Variational and Hemivariational Inequalities; Springer: Cham, Switzerland, 2015; Volume 33, pp. 145–171. [Google Scholar]
  8. Hammad, H.A.; Ur-Rehman, H.; La Sen, M.D. Advanced algorithms and common solutions to variational inequalities. Symmetry 1198, 12, 1198. [Google Scholar] [CrossRef]
  9. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776. [Google Scholar] [CrossRef]
  10. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845. [Google Scholar] [CrossRef]
  11. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef] [Green Version]
  12. Hieu, D.V. Parallel and cyclic hybrid subgradient extragradient methods for variational inequalities. Afr. Mat. 2017, 28, 677–679. [Google Scholar] [CrossRef]
  13. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A self adaptive inertial subgradient extragradient algorithm for variational inequality and common fixed point of multivalued mappings in Hilbert spaces. Demonstr. Math. 2019, 52, 183–203. [Google Scholar] [CrossRef]
  14. Yang, J.; Liu, H. The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optim. Lett. 2020, 14, 1803–1816. [Google Scholar] [CrossRef]
  15. Cai, X.; Gu, G.; He, B. On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators. Comput. Optim. Appl. 2014, 57, 339–363. [Google Scholar] [CrossRef]
  16. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A unified algorithm for solving variational inequality and fixed point problems with application to the split equality problem. Comput. Appl. Math. 2019, 39. [Google Scholar] [CrossRef]
  17. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Modified inertial subgradient extragradient method with self-adaptive stepsize for solving monotone variational inequality and fixed point problems. Optimization 2020. [Google Scholar] [CrossRef]
  18. Thong, D.V.; Vinh, N.T.; Cho, Y.J. New strong convergence theorem of the inertial projection and contraction method for variational inequality problems. Numer. Algorithms 2020, 84, 285–305. [Google Scholar] [CrossRef]
  19. Jolaoso, L.O.; Aphane, M. Weak and strong convergence Bregman extragradient schemes for solving pseudo-monotone and non-Lipschitz variational inequalities. J. Ineq. Appl. 2020, 2020, 195. [Google Scholar] [CrossRef]
  20. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A strong convergence theorem for solving pseudo-monotone variational inequalities using projection methods in a reflexive Banach space. J. Optim. Theory Appl. 2020, 185, 744–766. [Google Scholar] [CrossRef]
  21. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody. 1976, 12, 747–756. (In Russian) [Google Scholar]
  22. Nadezhkina, N.; Takahashi, W. Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128, 191–201. [Google Scholar] [CrossRef]
  23. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409. [Google Scholar] [CrossRef] [Green Version]
  24. He, B.S. A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35, 69–76. [Google Scholar] [CrossRef]
25. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient algorithms for variational inequality problems and fixed point problems. Optimization 2018, 67, 83–102.
26. Tian, M.; Jiang, B.N. Inertial hybrid algorithm for variational inequality problems in Hilbert spaces. J. Inequal. Appl. 2020, 2020, 12.
27. Cholamjiak, P.; Thong, D.V.; Cho, Y.J. A novel inertial projection and contraction method for solving pseudomonotone variational inequality problem. Acta Appl. Math. 2020, 169, 217–245.
28. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. U.S.S.R. Comput. Math. Math. Phys. 1964, 4, 1–17.
29. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
30. Chambolle, A.; Dossal, C.H. On the convergence of the iterates of the "Fast Iterative Shrinkage/Thresholding Algorithm". J. Optim. Theory Appl. 2015, 166, 968–982.
31. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
32. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
33. Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. An inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert spaces. Optimization 2020.
34. Jolaoso, L.O.; Oyewole, K.O.; Okeke, C.C.; Mewomo, O.T. A unified algorithm for solving split generalized mixed equilibrium problem and fixed point of nonspreading mapping in Hilbert space. Demonstr. Math. 2018, 51, 211–232.
35. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
36. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
37. Iiduka, H. Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping. Math. Program. Ser. A 2015, 149, 131–165.
38. Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515.
39. Maingé, P.E. Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. Eur. J. Oper. Res. 2010, 205, 501–506.
40. Rudin, W. Functional Analysis; McGraw-Hill Series in Higher Mathematics; McGraw-Hill: New York, NY, USA, 1991.
41. Marino, G.; Xu, H.K. Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329, 336–346.
42. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
43. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
44. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; North-Holland: Amsterdam, The Netherlands, 2001; pp. 473–504.
45. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
46. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
47. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
48. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A.A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
49. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
50. Dong, Q.L.; Jiang, D.; Cholamjiak, P.; Shehu, Y. A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 2017, 19, 3097–3118.
51. Cholamjiak, P.; Suantai, S.; Sunthrayuth, P. Strong convergence of a general viscosity explicit rule for the sum of two monotone operators in Hilbert spaces. J. Appl. Anal. Comput. 2019, 9, 2137–2155.
52. Cholamjiak, P.; Suantai, S.; Sunthrayuth, P. An explicit parallel algorithm for solving variational inclusion problem and fixed point problem in Banach spaces. Banach J. Math. Anal. 2020, 14, 20–40.
53. Kesornprom, S.; Cholamjiak, P. Proximal type algorithms involving linesearch and inertial technique for split variational inclusion problem in Hilbert spaces with applications. Optimization 2019, 68, 2365–2391.
54. Shehu, Y. On a modified extragradient method for variational inequality problem with application to industrial electricity production. J. Ind. Manag. Optim. 2019, 15, 319–342.
55. Yen, L.H.; Muu, L.D.; Huyen, N.T.T. An algorithm for a class of split feasibility problems: Application to a model in electricity production. Math. Meth. Oper. Res. 2016, 84, 549–565.
56. Xue, G.L.; Ye, Y.Y. An efficient algorithm for minimizing a sum of Euclidean norms with applications. SIAM J. Optim. 1997, 7, 1017–1036.
Figure 1. Example 1, Top Left: N = 5; Top Right: N = 10; Bottom Left: N = 30; Bottom Right: N = 50.
Figure 2. Example 2, Top Left: Case I; Top Right: Case II; Bottom Left: Case III; Bottom Right: Case IV.
Figure 3. Example 3, Top Left: N = 5; Top Right: N = 10; Bottom Left: N = 30; Bottom Right: N = 50.
Table 1. Computational result for Example 1.

                          Algorithm 1   Cholamjiak et al. [27]   Dong et al. [32]
N = 5    No. of Iter.         30                 40                    80
         Time (sec)         0.0136             0.0181                0.0471
N = 10   No. of Iter.         30                 40                    80
         Time (sec)         0.0156             0.0195                0.0442
N = 30   No. of Iter.         28                 33                    73
         Time (sec)         0.0141             0.0172                0.0370
N = 50   No. of Iter.         27                 32                    69
         Time (sec)         0.0163             0.0201                0.0516
Table 2. Computational result for Example 2.

                          Algorithm 1   Cai et al. [15]
Case I     No. of Iter.       22             202
           Time (sec)       0.0463          1.9129
Case II    No. of Iter.       31              95
           Time (sec)       0.0097          0.0477
Case III   No. of Iter.       32             208
           Time (sec)       0.0110          1.1696
Case IV    No. of Iter.       24             184
           Time (sec)       0.0057          1.1250
Table 3. Computational result for Example 3.

                          Algorithm 1   Cholamjiak et al. [27]   Dong et al. [32]
Case I     No. of Iter.        4                  8                    10
           Time (sec)       0.5669             0.9998                1.7530
Case II    No. of Iter.        3                  7                     7
           Time (sec)       0.5101             0.6461                0.6706
Case III   No. of Iter.        3                  5                     6
           Time (sec)       0.4019             0.5444                0.6242
Case IV    No. of Iter.        3                  4                     5
           Time (sec)       0.2101             0.5938                0.7895