A Parallel-Viscosity-Type Subgradient Extragradient-Line Method for Finding the Common Solution of Variational Inequality Problems Applied to Image Restoration Problems

Abstract: In this paper, we study a modified viscosity-type subgradient extragradient-line method with a parallel monotone hybrid algorithm for approximating a common solution of variational inequality problems. Under suitable conditions in Hilbert spaces, we prove the strong convergence of the proposed algorithm to such a common solution. We then give numerical examples in both finite- and infinite-dimensional spaces to illustrate our main theorem. Finally, we show that the proposed algorithm is flexible and performs well on common types of blur effects in image recovery.


Introduction
Let H be a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ‖·‖. Let C be a nonempty, closed, and convex subset of H. A mapping f : C → C is said to be a strict contraction if there exists k ∈ [0, 1) such that ‖f(x) − f(y)‖ ≤ k‖x − y‖ for all x, y ∈ C. A mapping A : C → H is said to be

1. monotone if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ C;
2. pseudo-monotone if ⟨Ay, x − y⟩ ≥ 0 ⇒ ⟨Ax, x − y⟩ ≥ 0 for all x, y ∈ C;
3. L-Lipschitz continuous if there exists a positive constant L such that ‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ C.
In this paper, we study the following variational inequality problem (VIP) for the operator A: find x* ∈ C such that

⟨Ax*, y − x*⟩ ≥ 0, ∀y ∈ C. (5)
The set of solutions of VIP (5) is denoted by VI(A, C). The VIP was introduced and studied by Hartman and Stampacchia in 1966 [1]. Variational inequality theory is an important tool for studying a wide class of unilateral and equilibrium problems arising in structural analysis, economics, optimization, operations research, and the engineering sciences (see [2][3][4][5][6][7] and the references therein). Several algorithms have been developed for solving variational inequality and related optimization problems (see [6,[8][9][10][11][12][13][14][15] and the references therein). It is well known that x is a solution of VIP (5) if and only if x is a fixed point of the mapping P_C(I − rA), r > 0 (see [4] for details). Therefore, finding a fixed point of P_C(I − rA) can replace finding a solution of VIP (5) (see [7,9]). For solving VIP (5), projections onto closed and convex sets have been used. The gradient method is the simplest projection method, in which only one projection onto the feasible set needs to be computed; however, strong monotonicity or inverse strong monotonicity of the operator is required to obtain convergence. In 1976, Korpelevich [16] proposed another projection method, called the extragradient method, for finding a saddle point; it was later extended to finding a solution of the VIP for Lipschitz continuous and monotone (even pseudo-monotone) mappings A. The extragradient method is designed as follows:

y_n = P_C(x_n − λA(x_n)),  x_{n+1} = P_C(x_n − λA(y_n)), (6)

where P_C is the metric projection onto C and λ is a suitable parameter. When the structure of C is simple, the extragradient method is computable and very useful because the projection onto C can be found easily. However, the computation of a projection onto a general closed convex subset is difficult, and two distance optimization problems must be solved in each iteration of the extragradient method to obtain the next approximation x_{n+1}.
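As a concrete illustration, the two-projection scheme (6) can be sketched in a few lines of Python. The feasible set (a box, so P_C is a simple clip) and the affine monotone operator A(x) = Mx are illustrative assumptions for this sketch, not part of the original method.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^n (a simple C)."""
    return np.clip(x, lo, hi)

def extragradient(A, x0, lam, n_iters=1000):
    """Korpelevich's extragradient method (6):
    y_n = P_C(x_n - lam*A(x_n)),  x_{n+1} = P_C(x_n - lam*A(y_n)).
    Note: two projections onto C are computed per iteration."""
    x = x0
    for _ in range(n_iters):
        y = proj_box(x - lam * A(x))
        x = proj_box(x - lam * A(y))
    return x

# Monotone Lipschitz operator A(x) = M x (skew part plus a small symmetric
# part); the unique VIP solution over the box is x* = 0 since A(0) = 0.
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
x_star = extragradient(lambda x: M @ x, np.array([1.0, 1.0]), lam=0.1)
```

With λ below 1/L the iterates contract toward the solution x* = 0.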
This can be costly and seriously affect the efficiency of the method. In 2011, the subgradient extragradient method was proposed in [17] for solving VIPs in Hilbert spaces. The projection onto the closed convex subset is needed only in the first step, while the second step projects onto a specially constructed half-space. The method is generated as follows:

y_n = P_C(x_n − λA(x_n)),  x_{n+1} = P_{T_n}(x_n − λA(y_n)), (7)

where T_n is a half-space whose bounding hyperplane supports C at y_n, that is,

T_n = {v ∈ H : ⟨(x_n − λA(x_n)) − y_n, v − y_n⟩ ≤ 0}.
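The computational point of the half-space T_n is that projecting onto it has a closed form, unlike the projection onto C itself. A minimal sketch, reusing the same illustrative box constraint and affine operator as above (assumptions, not the paper's setting):

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Projection onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def subgradient_extragradient(A, x0, lam, n_iters=1000):
    """Method (7): only the first step projects onto C; the second step
    projects onto the half-space T_n, which has an explicit formula."""
    x = x0
    for _ in range(n_iters):
        w = x - lam * A(x)
        y = proj_box(w)
        a = w - y                # outward normal of T_n's bounding hyperplane
        v = x - lam * A(y)
        if a @ a > 1e-16:        # P_{T_n}(v): clip v to the half-space
            x = v - max(a @ (v - y), 0.0) / (a @ a) * a
        else:                    # w already lies in C, so T_n is all of H
            x = v
    return x

M = np.array([[0.1, 1.0], [-1.0, 0.1]])   # same toy monotone operator
x_star = subgradient_extragradient(lambda x: M @ x,
                                   np.array([1.0, 1.0]), lam=0.1)
```

Only one potentially expensive projection onto C remains per iteration.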
The authors in [17] proved that the two sequences {x_n}, {y_n} generated by (7) converge weakly to a solution of the VIP. Recently, Gibali [18] proposed a new subgradient extragradient method adopting Armijo-like searches, called the self-adaptive subgradient extragradient method, and proved its convergence for VIP (5) in R^n under the assumptions of pseudo-monotonicity and continuity of the operator. Gibali [9] remarked that the Armijo-like search can be viewed as a local approximation of the Lipschitz constant of A.
Very recently, Shehu and Iyiola [19] proposed the following viscosity-type subgradient extragradient-like method for solving VIP (5) in Hilbert spaces when A is a Lipschitz continuous monotone mapping whose Lipschitz constant is unknown:
y_n = P_C(x_n − λ_n Ax_n), where λ_n = ρ^{l_n} and l_n is the smallest non-negative integer l such that λ_n‖Ax_n − Ay_n‖ ≤ µ‖x_n − y_n‖,
z_n = P_{T_n}(x_n − λ_n Ay_n),  x_{n+1} = α_n f(x_n) + (1 − α_n)z_n, (9)

where T_n = {z ∈ H : ⟨x_n − λ_n Ax_n − y_n, z − y_n⟩ ≤ 0}, with ρ, µ ∈ (0, 1) and {α_n} ⊆ (0, 1). It was proved that the sequence {x_n} generated by (9) converges strongly to x* ∈ VI(C, A) with x* = P_{VI(C,A)} f(x*), where f : H → H is a strict contraction mapping with constant k ∈ [0, 1), under suitable conditions on {α_n}. Our interest in this paper is the problem of finding common solutions of variational inequality problems (CSVIPs). The CSVIP is stated as follows. Let C be a nonempty, closed, and convex subset of H and let A_i : H → H, i = 1, 2, ..., N, be mappings. The CSVIP is to find x* ∈ C such that

⟨A_i x*, y − x*⟩ ≥ 0, ∀y ∈ C, i = 1, 2, ..., N. (11)

If N = 1, CSVIP (11) becomes VIP (5). The CSVIP has received a great deal of attention due to its applications to a large variety of problems arising in structural analysis, convex feasibility problems, common fixed point problems, common minimizer problems, common saddle-point problems, and common variational inequality problems [20]. These problems have practical applications in signal processing, network resource allocation, image processing, and many other fields [21,22]. Recently, many mathematicians have studied this problem both theoretically and algorithmically; see [23][24][25][26][27] and the references therein.
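The backtracking step-size rule λ_n = ρ^{l_n} used above can be coded as a short search; for an L-Lipschitz operator the test necessarily passes once ρ^l ≤ µ/L, so the loop terminates. The operator A(x) = 2x and the trivial feasible set C = H below are illustrative assumptions for the sketch.

```python
import numpy as np

def armijo_stepsize(A, proj_C, x, rho=0.5, mu=0.4, l_max=60):
    """Return (lam, y) with lam = rho**l for the smallest l >= 0 such that
    lam * ||A(x) - A(y)|| <= mu * ||x - y||, where y = P_C(x - lam*A(x)).
    For an L-Lipschitz A the test holds whenever lam <= mu/L."""
    Ax = A(x)
    lam, y = 1.0, x
    for l in range(l_max):
        lam = rho ** l
        y = proj_C(x - lam * Ax)
        if lam * np.linalg.norm(Ax - A(y)) <= mu * np.linalg.norm(x - y) + 1e-15:
            break
    return lam, y

# A(x) = 2x is 2-Lipschitz; with C = H the search stops at the first
# lam with lam <= mu/L = 0.2, i.e., lam = 0.5**3 = 0.125.
lam, y = armijo_stepsize(lambda v: 2.0 * v, lambda v: v, np.array([1.0]))
```

Here y = x − 0.125·A(x) = 0.75·x once the search stops.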
Very recently, Anh and Hieu [28,29] proposed an important method, called the parallel monotone hybrid algorithm, for finding a common fixed point of a finite family of quasi-φ-nonexpansive mappings {S_i}_{i=1}^N in Banach spaces. In Hilbert spaces, the algorithm reads as follows:

x_0 ∈ C,  y_n^i = α_n x_n + (1 − α_n)S_i x_n,  i = 1, ..., N,
i_n = argmax{‖y_n^i − x_n‖ : i = 1, ..., N},  ȳ_n := y_n^{i_n},

where 0 < α_n < 1 and lim sup_{n→∞} α_n < 1. A parallel algorithm is one that can execute several computations simultaneously on different processing devices and then combine the individual outputs.
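The parallel step can be sketched directly: the N intermediate points y_n^i are computed concurrently, then the "farthest" one is selected. The thread pool and the two toy nonexpansive mappings are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_hybrid_step(mappings, x, alpha):
    """One parallel step: y^i = alpha*x + (1 - alpha)*S_i(x) for each S_i,
    evaluated concurrently; then select i_n = argmax ||y^i - x|| and return
    y_bar = y^{i_n}, the farthest intermediate approximation."""
    with ThreadPoolExecutor() as pool:
        ys = list(pool.map(lambda S: alpha * x + (1 - alpha) * S(x), mappings))
    i_n = max(range(len(ys)), key=lambda i: np.linalg.norm(ys[i] - x))
    return ys[i_n]

# Two toy nonexpansive mappings with common fixed point 0: here S2 moves
# the point farther, so y_bar = 0.5*x + 0.5*(-x) = 0 is selected.
S1 = lambda v: 0.5 * v
S2 = lambda v: -v
y_bar = parallel_hybrid_step([S1, S2], np.array([1.0, 0.0]), alpha=0.5)
```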
Inspired and encouraged by the previous results, in this paper we introduce a modified parallel method with a viscosity-type subgradient extragradient-like method for finding a common solution of variational inequality problems. Numerical experiments are also conducted to illustrate the efficiency of the proposed algorithms. Moreover, the problem of multiblur effects in an image is solved by applying our proposed algorithm.

Preliminaries
In order to prove our main result, we recall some basic definitions and lemmas needed for further investigation. In a Hilbert space H, let C be a nonempty, closed, and convex subset of H. For every point x ∈ H, there exists a unique nearest point of C, denoted by P_C x, such that ‖x − P_C x‖ ≤ ‖x − y‖ for all y ∈ C. Such a P_C is called the metric projection from H onto C.
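For sets with simple structure, the metric projection has a closed form. The closed ball is a standard example (an illustration added here, not taken from the paper): for x outside the ball, the nearest point lies on the boundary along the ray from the center to x.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection P_C onto the closed ball C = B(center, radius):
    returns x itself if x is already in C, otherwise the unique nearest
    boundary point along the ray from the center to x."""
    x, center = np.asarray(x, float), np.asarray(center, float)
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + (radius / dist) * d

p = proj_ball([3.0, 4.0], [0.0, 0.0], 1.0)   # nearest point of the unit ball
```

The point (3, 4) has distance 5 from the origin, so its projection onto the unit ball is (0.6, 0.8).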

Lemma 1 ([30]). Let C be a nonempty, closed, and convex subset of a real Hilbert space H and let P_C be the metric projection of H onto C. Let x ∈ H and z ∈ C. Then, z = P_C x if and only if ⟨x − z, z − y⟩ ≥ 0 for all y ∈ C.

Lemma 2 ([30]). The following statements hold in any real Hilbert space H:
(i) ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖² for all x, y ∈ H;
(ii) ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩ for all x, y ∈ H.

Lemma 3 (Xu, [31]). Let {a_n}_{n=0}^∞ be a sequence of non-negative real numbers satisfying the following relation:

a_{n+1} ≤ (1 − α_n)a_n + α_n σ_n + γ_n,  n ≥ 1,

where {α_n} ⊆ (0, 1) with ∑_{n=1}^∞ α_n = ∞, lim sup_{n→∞} σ_n ≤ 0, and γ_n ≥ 0 with ∑_{n=1}^∞ γ_n < ∞. Then, a_n → 0 as n → ∞.

Lemma 4 ([8]). Let C be a nonempty, closed, and convex subset of a real Hilbert space H and let P_C : H → C be the metric projection from H onto C. Then, the following inequality holds for all x ∈ H and y ∈ C:

‖x − P_C x‖² + ‖y − P_C x‖² ≤ ‖x − y‖².

Lemma 5 ([19]). There exists a non-negative integer l_n satisfying (9).

Main Results
In this section, we propose the parallel method combined with the modified viscosity-type subgradient extragradient-like method for solving common variational inequality problems. Let C be a nonempty, closed, and convex subset of a real Hilbert space H. For each i = 1, 2, ..., N, let A_i : C → H be a monotone mapping that is L_i-Lipschitz continuous on H with L_i unknown, and assume that F := ∩_{i=1}^N VI(A_i, C) ≠ ∅. Let f : H → H be a strict contraction with constant k ∈ [0, 1). Suppose {x_n}_{n=1}^∞ is generated by the following Algorithm 1: Given ρ ∈ (0, 1), µ ∈ (0, 1). Let {α_n}_{n=1}^∞ be a real sequence in (0, 1). Let x_1 ∈ H be arbitrary.
Step 1: For each i = 1, 2, ..., N, compute

y_n^i = P_C(x_n − λ_n^i A_i x_n),

where λ_n^i = ρ^{l_n^i} and l_n^i is the smallest non-negative integer l_i such that

λ_n^i ‖A_i x_n − A_i y_n^i‖ ≤ µ ‖x_n − y_n^i‖.

Step 2: Compute

z_n^i = P_{T_n^i}(x_n − λ_n^i A_i y_n^i),  where T_n^i := {z ∈ H : ⟨x_n − λ_n^i A_i x_n − y_n^i, z − y_n^i⟩ ≤ 0}.

Step 3: Compute

x_{n+1} = α_n f(x_n) + (1 − α_n) z̄_n,  where i_n = argmax{‖z_n^i − x_n‖ : i = 1, ..., N} and z̄_n := z_n^{i_n}.

Set n + 1 → n and go to Step 1.
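A compact numerical sketch of the three steps, under illustrative assumptions: C is a box, each A_i is affine with a common VI solution at the origin, and the viscosity weights are taken as α_n = 1/(n+2). This is one reading of the parallel scheme for demonstration, not the authors' implementation.

```python
import numpy as np

def parallel_viscosity_sem(ops, f, proj_C, x0, rho=0.5, mu=0.4, n_iters=300):
    """Sketch of the parallel viscosity-type subgradient extragradient-line
    method: per operator an Armijo line search (Step 1) and a half-space
    projection (Step 2), then selection of the farthest z^i followed by a
    viscosity step (Step 3)."""
    x = x0
    for n in range(n_iters):
        zs = []
        for A in ops:
            Ax = A(x)
            lam, y = 1.0, x
            for l in range(60):                    # Step 1: lam = rho**l
                lam = rho ** l
                y = proj_C(x - lam * Ax)
                if lam * np.linalg.norm(Ax - A(y)) <= mu * np.linalg.norm(x - y) + 1e-15:
                    break
            a = (x - lam * Ax) - y                 # normal of T_n^i
            v = x - lam * A(y)
            if a @ a > 1e-16:                      # Step 2: v = P_{T_n^i}(v)
                v = v - max(a @ (v - y), 0.0) / (a @ a) * a
            zs.append(v)
        z_bar = max(zs, key=lambda z: np.linalg.norm(z - x))
        a_n = 1.0 / (n + 2)                        # Step 3: viscosity step
        x = a_n * f(x) + (1 - a_n) * z_bar
    return x

# Two monotone Lipschitz operators with common solution x* = 0 over the box.
ops = [lambda v: v, lambda v: np.array([[2.0, 0.0], [0.0, 1.0]]) @ v]
x_out = parallel_viscosity_sem(ops, f=lambda v: 0.5 * v,
                               proj_C=lambda v: np.clip(v, -1.0, 1.0),
                               x0=np.array([1.0, 1.0]))
```

Since both operators vanish at the origin and f(0) = 0, the iterates contract to the common solution x* = 0.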
Then the sequence {x_n}_{n=1}^∞ generated by Algorithm 1 converges strongly to x* ∈ F, where x* = P_F f(x*) is the unique solution of the variational inequality ⟨(I − f)x*, y − x*⟩ ≥ 0 for all y ∈ F.

Proof. By the characterization of the metric projection P_{T_n^i} and since x* ∈ F ⊆ C ⊆ T_n^i, we get (20). We then obtain from the definition of Algorithm 1 that (21) holds. By the monotonicity of the operator A_i, and using (20) in (21), we obtain (22). Using this inequality in (22) together with Lemma 5, we have (23). It follows from (15) and (23) that (24) and (25) hold. Furthermore, using Lemma 2 (ii) in (15), we obtain (26), which implies that, for some M > 0, (27) holds. We divide the rest of the proof into two cases.
Case 1: Assume that {‖x_n − z‖} is eventually monotonically decreasing. It follows from our assumption (i) and (29) that (30) and (31) hold, and from (28) that (32) holds. It follows from (30), (31), and (32) that ‖x_n − y_n^i‖ → 0. Since {x_n} is bounded, it has a subsequence {x_{n_j}} that converges weakly to some ω ∈ H and attains lim sup_{n→∞} ⟨f(z) − z, x_n − z⟩. This implies that y_{n_j}^i ⇀ ω, and since y_n^i ∈ C, we obtain ω ∈ C. For all x ∈ C, using the property of the projection P_C and the monotonicity of A_i, we have (33). Taking j → ∞ (recall that inf_{n≥1} λ_{n_j} > 0 by Remark 3.2 in [19]), this implies that ω ∈ F. Since z = P_F f(z), we have lim sup_{n→∞} ⟨f(z) − z, x_n − z⟩ ≤ 0. In (26), let a_n := ‖x_n − z‖², β_n := 2(1 − k)α_n / (1 − α_n k), and let σ_n denote the corresponding remainder term. Then, we can write (26) as a_{n+1} ≤ (1 − β_n)a_n + β_n σ_n, and Lemma 3 yields x_n → z.
Case 2: Assume that {‖x_n − z‖} is not monotonically decreasing. Set F_n = ‖x_n − z‖² and let τ : N → N be the mapping defined for all n ≥ n_0 (for some n_0 large enough) by τ(n) := max{k ∈ N : k ≤ n, F_k ≤ F_{k+1}}. It is clear that τ is non-decreasing, τ(n) → ∞ as n → ∞, and F_{τ(n)} ≤ F_{τ(n)+1} for all n ≥ n_0. This implies that ‖x_{τ(n)} − z‖ ≤ ‖x_{τ(n)+1} − z‖ for all n ≥ n_0. Thus, lim_{n→∞} ‖x_{τ(n)} − z‖ exists. By (27), we obtain ‖x_{τ(n)} − y_{τ(n)}^i‖ → 0 as n → ∞. Since {x_{τ(n)}} is bounded, there exists a subsequence of {x_{τ(n)}} that converges weakly to ω; without loss of generality, we assume that x_{τ(n)} ⇀ ω. Observe that since lim_{n→∞} ‖x_{τ(n)} − y_{τ(n)}^i‖ = 0, we also have y_{τ(n)}^i ⇀ ω. By a similar argument to Case 1, we can show that ω ∈ F and lim sup_{n→∞} ⟨f(z) − z, x_{τ(n)} − z⟩ ≤ 0. By (26), we obtain F_{τ(n)+1} ≤ (1 − β_{τ(n)})F_{τ(n)} + β_{τ(n)}σ_{τ(n)}. Since α_{τ(n)} > 0 and k ∈ [0, 1), we have β_{τ(n)} > 0, so we get F_{τ(n)} ≤ σ_{τ(n)}. Therefore, lim_{n→∞} F_{τ(n)} = 0 and lim_{n→∞} F_{τ(n)+1} = 0. Furthermore, for n ≥ n_0, it is easy to see that F_n ≤ F_{τ(n)+1} if n ≠ τ(n) (that is, τ(n) < n), because F_j ≥ F_{j+1} for τ(n) + 1 ≤ j ≤ n. As a consequence, we obtain 0 ≤ F_n ≤ max{F_{τ(n)}, F_{τ(n)+1}} = F_{τ(n)+1} for all n ≥ n_0. Hence, lim_{n→∞} F_n = 0, that is, lim_{n→∞} ‖x_n − z‖ = 0. Hence, {x_n} converges strongly to z. This completes the proof.
We now give an example in the Euclidean space R³, where ‖·‖ is the ℓ₂-norm defined by ‖x‖ = (x₁² + x₂² + x₃²)^{1/2} for x = (x₁, x₂, x₃), to support the main theorem.
The stopping criterion is defined by ‖x_n − x_{n−1}‖ < 10^{−15} (see Figures 1–3). The different choices of x_1 in Example 1 are given in Table 1. For an infinite-dimensional space, we give an example in the function space L²[0, 1], where ‖·‖ is defined by ‖x‖ = (∫₀¹ |x(t)|² dt)^{1/2}. The stopping criterion is defined by ‖x_n(t) − x_{n−1}(t)‖ < 10^{−5} (see Figures 4–6).
From Tables 1 and 2, we see the advantage of the parallel viscosity-type subgradient extragradient-line method (Algorithm 1): when a common solution of two or more operators A_i is computed, the number of iterations is smaller than when only a single operator is used as input.

Application to Image Restoration Problems
The image restoration problem can be modeled in one-dimensional vectors by the following linear equation system:

Ax + υ = b, (34)

where x ∈ R^{n×1} is the original image, b ∈ R^{m×1} is the observed image, υ is additive noise, and A ∈ R^{m×n} is the blurring matrix. To solve problem (34), we aim to approximate the original image x by minimizing the additive noise, which leads to the following least squares (LS) problem:

min_x ‖Ax − b‖², (35)

where ‖·‖ is the ℓ₂-norm defined by ‖x‖ = (∑_{i=1}^n |x_i|²)^{1/2}. The solution of problem (35) can be approximated by many well-known iterative methods.
The Richardson iteration, often called the Landweber method [33][34][35][36], is generally used as an iterative regularization method to solve (35). The basic iteration takes the form

x_{n+1} = x_n + τ A^T(b − Ax_n), (36)

where the step size τ remains constant for each iteration. Convergence can be proved for step sizes τ with 0 < τ < 2/σ_max², where σ_max is the largest singular value of A. The goal in image restoration is to deblur an image without knowing the blurring operator. Thus, we focus on the following problem: find a common solution x of the subproblems

min_x ‖A_i x − b_i‖²,  i = 1, 2, ..., N, (37)

where x is the original true image, A_i is a blurring matrix, and b_i is the image blurred by A_i, for all i = 1, 2, ..., N. For solving this problem, we designed the flowchart seen in Figure 7, where X̂ is the deblurred image, that is, a common solution of problem (37). We can apply the algorithm in Theorem 1 to solve problem (37) because A_i^T(A_i x − b_i) is Lipschitz continuous for each i = 1, 2, ..., N. Let f : R^n → R^n be a strict contraction mapping with constant k ∈ [0, 1). Suppose {x_n}_{n=1}^∞ is generated by the following Algorithm 2: Given ρ ∈ (0, 1), µ ∈ (0, 1). Let {α_n}_{n=1}^∞ be a real sequence in (0, 1). Let x_1 ∈ H be arbitrary.
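A minimal sketch of the Landweber iteration (36); the tiny 2 × 2 system below is an illustrative assumption standing in for a real blurring matrix.

```python
import numpy as np

def landweber(A, b, tau, n_iters=2000, x0=None):
    """Landweber/Richardson iteration x_{n+1} = x_n + tau * A^T (b - A x_n);
    converges to a least-squares solution when 0 < tau < 2/sigma_max(A)**2."""
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for _ in range(n_iters):
        x = x + tau * (A.T @ (b - A @ x))
    return x

# Toy consistent system: the LS solution of A x = b is x = (1, 3).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
tau = 1.9 / np.linalg.norm(A, 2) ** 2    # safe step size below 2/sigma_max^2
x_ls = landweber(A, b, tau)
```

The matrix 2-norm `np.linalg.norm(A, 2)` equals the largest singular value σ_max, so τ here satisfies the stated step-size bound.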
Step 1: For each i = 1, 2, ..., N, compute

y_n^i = P_C(x_n − λ_n^i A_i x_n),

where λ_n^i = ρ^{l_n^i} and l_n^i is the smallest non-negative integer l_i such that

λ_n^i ‖A_i x_n − A_i y_n^i‖ ≤ µ ‖x_n − y_n^i‖.

Step 2: Compute

z_n^i = P_{T_n^i}(x_n − λ_n^i A_i y_n^i),  where T_n^i := {z ∈ H : ⟨x_n − λ_n^i A_i x_n − y_n^i, z − y_n^i⟩ ≤ 0}.
Step 3: Compute

x_{n+1} = α_n f(x_n) + (1 − α_n) z̄_n,  where i_n = argmax{‖z_n^i − x_n‖ : i = 1, ..., N} and z̄_n := z_n^{i_n}.

Set n + 1 → n and go to Step 1.
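In this application, the role of the monotone operator for the i-th subproblem is played by the least-squares gradient map x ↦ A_i^T(A_i x − b_i). Its Lipschitz constant equals σ_max(A_i)², but thanks to the line search in Step 1 it never needs to be known. A quick numerical check, using a seeded random matrix as a stand-in for a blurring matrix (an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((6, 4))        # stand-in for a blurring matrix
b1 = rng.standard_normal(6)             # stand-in for a blurred image

op = lambda x: A1.T @ (A1 @ x - b1)     # VI operator of the LS subproblem

# op is monotone (its Jacobian A1^T A1 is positive semidefinite) and
# Lipschitz continuous with constant L = sigma_max(A1)**2:
L = np.linalg.norm(A1, 2) ** 2
u, v = rng.standard_normal(4), rng.standard_normal(4)
ratio = np.linalg.norm(op(u) - op(v)) / np.linalg.norm(u - v)
```

For any pair of points the observed ratio never exceeds L, illustrating why the Armijo search terminates for these operators.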
We will present the restoration of images corrupted by the following blur types: (1) Gaussian blur with filter size 9 × 9 and standard deviation σ = 4 (the original image was degraded by the blurring matrix A_1); (2) out-of-focus blur (disk) with radius r = 6 (the original image was degraded by the blurring matrix A_2); (3) motion blur with motion length of 21 pixels (len = 21) and motion orientation 11° (θ = 11°) (the original image was degraded by the blurring matrix A_3).
The performance of the proposed Algorithm 2 was tested on the original grey and RGB images, as can be seen in Figures 8 and 9. Figure 11 shows the three degraded grey images produced by the blurring matrices A_1, A_2, and A_3, respectively. Next, we found the common solutions of the deblurring problem (VIP) with N = 2 using the proposed algorithm. We consider the results of the proposed algorithm after 10,000 iterations in the following three cases:

Case I: Inputting A 1 and A 2 in the proposed algorithm; Case II: Inputting A 1 and A 3 in the proposed algorithm; and Case III: Inputting A 2 and A 3 in the proposed algorithm.
It can be seen from Figures 22–27 that the quality of the images restored by the proposed algorithm when solving for the common solutions of the deblurring problem (VIP) with N = 2 was improved compared with the previous results in Figures 16–21.
Finally, the common solution of the deblurring problem (VIP) with (N = 3) using the proposed algorithm was also tested (inputting A 1 , A 2 , and A 3 in the proposed algorithm).
For Case I, PSNR = 28.596. The Cauchy error is defined as ‖x_n − x_{n−1}‖ < 10^{−8}. The figure error is defined as ‖x_n − x̄‖, where x̄ is the solution of the problem (VIP). The performance of the proposed algorithm at x_n in the image restoration process was measured quantitatively by means of the peak signal-to-noise ratio (PSNR), which is defined by

PSNR = 10 log₁₀(255²/MSE),

where MSE is the mean squared error between the restored and original images. The Cauchy error plot shows the validity of the proposed method, the figure error plot confirms its convergence, and the PSNR plot quantifies the quality of the restored image. From Figures 30–35, it is clearly seen that the common solution of the deblurring problem (VIP) with N ≥ 2 yields quality improvements in the reconstructed grey and RGB images. Another advantage of the proposed method is that, when the common solution of two or more image deblurring problems is used to restore the image, the received image is more consistent than usual (see Figures 36–49). Figures 36–49 show the grey and RGB images reconstructed using the proposed algorithm to obtain the common solution of the following problems with the same PSNR.
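The PSNR can be computed as follows, assuming 8-bit images with peak intensity 255 (the usual convention; an assumption of this sketch):

```python
import numpy as np

def psnr(restored, original, peak=255.0):
    """Peak signal-to-noise ratio in decibels: 10*log10(peak^2 / MSE),
    where MSE is the mean squared error between the two images."""
    r = np.asarray(restored, float)
    o = np.asarray(original, float)
    mse = np.mean((r - o) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A higher PSNR indicates a restored image closer to the original; identical images give an infinite PSNR, so in practice the metric is reported only for imperfect restorations.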
(2) Deblurring problem (VIP) with N = 2 by inputting A_1 and A_2, A_1 and A_3, and A_2 and A_3 in the proposed algorithm, respectively.

Conclusions
In this work, we considered the problem of finding a common solution of variational inequalities with monotone and Lipschitz continuous operators in a Hilbert space. Under suitable conditions imposed on the parameters, we proved the strong convergence of the proposed algorithm. Several numerical examples in both finite- and infinite-dimensional spaces were performed to illustrate its performance (see Tables 1 and 2 and Figures 1–6). We applied our proposed algorithm to image recovery in a situation where the type of blurring matrix is unknown, to demonstrate its computational performance. We found that the advantage of our proposed algorithm is its ability to restore an image corrupted by two or more blur effects, giving a restoration performance better than with a single blur operator (see Figures 36–49).
Author Contributions: Supervision, S.S.; formal analysis and writing, K.K.; editing and software, P.C. All authors have read and agreed to the published version of the manuscript.