A relaxed projection method using a new linesearch for the split feasibility problem

1 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 School of Science, University of Phayao, Phayao 56000, Thailand
3 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, P. R. China, and Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea


Introduction
In this work, we study the split feasibility problem (shortly, (SFP)), which is formulated as follows: find a point x^* ∈ C such that

Ax^* ∈ Q, (1.1)

where C and Q are nonempty closed convex subsets of real Hilbert spaces H₁ and H₂, respectively, and A : H₁ → H₂ is a bounded linear operator. This problem was first proposed by Censor and Elfving [5] in Euclidean spaces (for recent results on the problem (SFP), see [10,12,13,25]); for related splitting problems, see [14,17,22,24]. Many real-world applications, such as image reconstruction, signal recovery and intensity-modulated radiation therapy (shortly, (IMRT)), can be cast as solving the problem (SFP). Byrne [3,4] proposed the following projection algorithm for solving the problem (SFP):

x^{k+1} = P_C(x^k − αA^*(I − P_Q)Ax^k), (1.2)

where α ∈ (0, 2/‖A^*A‖), A^* is the adjoint operator of A, and P_C and P_Q are the metric projections onto C and Q, respectively. This method is often called the CQ-algorithm. In general, however, the projections P_C and P_Q are not easy to compute.
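The CQ-iteration (1.2) can be sketched in a few lines for a toy instance in which both projections have closed forms. Here C and Q are Euclidean balls; the radii, dimensions, stepsize choice and stopping tolerance are our illustrative assumptions, not values from the paper.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the closed ball {z : ||z|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def cq_algorithm(A, x0, r_C, r_Q, max_iter=500, tol=1e-8):
    """CQ-iteration x_{k+1} = P_C(x_k - alpha * A^T (I - P_Q) A x_k)."""
    # alpha must lie in (0, 2/||A^T A||); here ||A^T A|| = ||A||_2^2.
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.copy()
    for _ in range(max_iter):
        Ax = A @ x
        residual = Ax - project_ball(Ax, r_Q)          # (I - P_Q) A x_k
        x_next = project_ball(x - alpha * A.T @ residual, r_C)
        if np.linalg.norm(x_next - x) < tol:           # iterates have settled
            return x_next
        x = x_next
    return x
```

With a fixed stepsize the method only needs one multiplication by A and one by A^T per iteration, but it requires knowing ‖A‖ in advance; this is the cost that the linesearch methods below avoid.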
To eliminate this difficulty, Fukushima [15] introduced the level sets C and Q, which are defined by

C = {x ∈ H₁ : c(x) ≤ 0} and Q = {y ∈ H₂ : q(y) ≤ 0}, (1.3)

where c : H₁ → R and q : H₂ → R are convex and subdifferentiable functions, and ∂c and ∂q are bounded operators. Fukushima [15] also introduced a relaxed projection algorithm for solving the variational inequality problem and gave numerical experiments supporting the main result.
In 2004, Yang [27] presented a relaxed CQ-algorithm for solving the problem (SFP) by replacing P_C and P_Q with P_{C_k} and P_{Q_k}, respectively. In this case, C_k and Q_k are defined by

C_k = {x ∈ H₁ : c(x^k) + ⟨ξ^k, x − x^k⟩ ≤ 0}, where ξ^k ∈ ∂c(x^k), (1.4)

and

Q_k = {y ∈ H₂ : q(Ax^k) + ⟨η^k, y − Ax^k⟩ ≤ 0}, where η^k ∈ ∂q(Ax^k). (1.5)

It is easily seen that C ⊆ C_k and Q ⊆ Q_k for all k ≥ 1. We note that P_{C_k} and P_{Q_k} are easily computed since these sets are half-spaces.
Precisely, Yang [27] proposed the following relaxed CQ-algorithm:

x^{k+1} = P_{C_k}(x^k − α_k A^*(I − P_{Q_k})Ax^k), (1.6)

where {α_k} is a sequence in (0, 2/‖A^*A‖), A^* is the adjoint operator of A, and C_k and Q_k are given by (1.4) and (1.5), respectively. Recently, Gibali et al. [16] (see also Qu and Xiu [20]) proposed a modification of the relaxed CQ-algorithm using a linesearch technique in real Hilbert spaces. For each k ≥ 1, define a mapping F_k by

F_k(x) = ½‖(I − P_{Q_k})Ax‖², so that ∇F_k(x) = A^*(I − P_{Q_k})Ax. (1.7)

They constructed the following algorithm.

Algorithm 1.1. Given constants γ > 0, ℓ ∈ (0, 1) and μ ∈ (0, 1), let x^1 be arbitrary in H₁. For each k ≥ 1, calculate

y^k = P_{C_k}(x^k − α_k ∇F_k(x^k)), (1.8)

where α_k = γℓ^{m_k} and m_k is the smallest nonnegative integer such that

α_k ‖∇F_k(x^k) − ∇F_k(y^k)‖ ≤ μ‖x^k − y^k‖. (1.9)

Construct the next iterate x^{k+1} by

x^{k+1} = P_{C_k}(x^k − α_k ∇F_k(y^k)). (1.10)

Various methods have been invented for solving the problem (SFP) (see, for example, [5-12,14-19,21,26]). Observe that, in each iteration, we have to find the integer m_k such that (1.9) holds, which costs CPU time and iterations before convergence. Our aim is to design a new linesearch that takes less CPU time than Algorithm 1.1 and other methods.
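The backtracking step behind a linesearch of the form (1.9) can be sketched as follows. Here `grad_F` and `proj_C` stand for ∇F_k and P_{C_k} and are supplied by the caller; the parameter defaults and all function names are our illustrative choices.

```python
import numpy as np

def armijo_linesearch(x, grad_F, proj_C, gamma=2.0, ell=0.5, mu=0.2,
                      max_backtracks=50):
    """Shrink alpha = gamma * ell**m until the trial point y(alpha) =
    proj_C(x - alpha * grad_F(x)) satisfies
    alpha * ||grad_F(x) - grad_F(y)|| <= mu * ||x - y||."""
    g = grad_F(x)
    alpha = gamma
    for _ in range(max_backtracks):
        y = proj_C(x - alpha * g)
        # Accept alpha once the gradient variation is controlled by the step.
        if alpha * np.linalg.norm(g - grad_F(y)) <= mu * np.linalg.norm(x - y):
            return alpha, y
        alpha *= ell  # move to the next power: alpha = gamma * ell**(m+1)
    return alpha, proj_C(x - alpha * g)
```

Every pass through the loop costs one projection and one gradient evaluation, which is exactly the per-iteration overhead the paper sets out to reduce.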
Recently, Tian and Zhang [23] proposed a new regularized CQ-algorithm, without a priori knowledge of the operator norm, to solve the split feasibility problem, and provided a strong convergence theorem in Hilbert spaces. Later, Dang et al. [11] introduced inertial relaxed CQ-algorithms for the split feasibility problem and proved their asymptotic convergence. In 2018, Dong et al. [13] introduced projection and contraction methods for solving the split feasibility problem using the inverse strongly monotone property of the underlying operator of the SFP.
Motivated by the previous works, we modify the algorithm of Gibali et al. [16], propose a relaxed CQ-algorithm with a new linesearch in real Hilbert spaces, and prove weak convergence theorems for the proposed method under mild assumptions. Finally, we give some computational results comparing our algorithm with those of Gibali et al. [16] and Yang [27]. Numerical experiments show that our algorithm performs better than the others.

Preliminaries
In this section, we give some definitions and lemmas which are used in the proofs. Throughout this paper, we use the following notation:
• x^k ⇀ x denotes the weak convergence of {x^k} to x;
• ω_w(x^k) = {x : there exists a subsequence {x^{k_n}} of {x^k} such that x^{k_n} ⇀ x} denotes the weak ω-limit set of {x^k}.
Let H be a real Hilbert space. We recall the following definition.
(1) A mapping T : H → H is said to be firmly nonexpansive if, for all x, y ∈ H,
‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩.
(2) A function f : H → R is said to be convex if
f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y)
for all λ ∈ (0, 1) and for all x, y ∈ H.
(3) A differentiable function f is convex if and only if
f(z) ≥ f(x) + ⟨∇f(x), z − x⟩ for all x, z ∈ H.
More generally, an element ξ ∈ H is called a subgradient of f at x if
f(z) ≥ f(x) + ⟨ξ, z − x⟩ for all z ∈ H,
which is called the subdifferential inequality. The set of subgradients of f at x is denoted by ∂f(x).
(4) A function f : H → R is said to be subdifferentiable at x if it has at least one subgradient at x. A function f is said to be subdifferentiable if it is subdifferentiable at all x ∈ H.
(5) A function f : H → R is said to be weakly lower semicontinuous (w-lsc) at x if x^n ⇀ x implies
f(x) ≤ lim inf_{n→∞} f(x^n);
f is said to be w-lsc on H if it is w-lsc at every x ∈ H. Next, we give some useful lemmas for our proofs.
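Firm nonexpansiveness, as in definition (1), can be illustrated numerically for a concrete mapping: the metric projection onto a closed convex set is firmly nonexpansive. The sketch below checks the defining inequality for the projection onto a Euclidean ball at random points; the choice of set and test points is ours.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection onto the closed ball {z : ||z|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def firm_nonexpansiveness_gap(x, y):
    """<x - y, Px - Py> - ||Px - Py||^2; nonnegative when the inequality holds."""
    px, py = project_ball(x), project_ball(y)
    return (x - y) @ (px - py) - np.linalg.norm(px - py) ** 2

rng = np.random.default_rng(1)
gaps = [firm_nonexpansiveness_gap(rng.standard_normal(3), rng.standard_normal(3))
        for _ in range(200)]
assert min(gaps) >= -1e-12  # the inequality holds up to floating-point rounding
```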
Lemma 2.1. Let C be a nonempty closed convex subset of a real Hilbert space H. Then, for any x ∈ H, the following assertions hold:
(a) ⟨x − P_C(x), z − P_C(x)⟩ ≤ 0 for all z ∈ C;
(b) ‖P_C(x) − P_C(y)‖² ≤ ⟨P_C(x) − P_C(y), x − y⟩ for all y ∈ H;
(c) ‖P_C(x) − z‖² ≤ ‖x − z‖² − ‖x − P_C(x)‖² for all z ∈ C.
From Lemma 2.1, we see that the operator I − P_C is also firmly nonexpansive, where I denotes the identity operator, i.e., for any x, y ∈ H,
‖(I − P_C)x − (I − P_C)y‖² ≤ ⟨(I − P_C)x − (I − P_C)y, x − y⟩. (2.7)
Lemma 2.2.
[1] Let S be a nonempty closed convex subset of a real Hilbert space H and let {x^k} be a sequence in H satisfying the following properties:
(i) lim_{k→∞} ‖x^k − z‖ exists for every z ∈ S;
(ii) ω_w(x^k) ⊆ S.
Then {x^k} converges weakly to a point in S. Now, we give the main results of this paper.

Main results
In this section, we introduce a new relaxed projection algorithm using a linesearch technique and prove its weak convergence. The mappings F_k and ∇F_k are defined as in (1.7). Our algorithm, Algorithm 3.1, constructs the next iterate x^{k+1} by a two-step projection, where α_k = γℓ^{m_k} and m_k is the smallest nonnegative integer such that the linesearch condition (3.3) holds. It is remarked that Algorithm 3.1 is based on an extragradient-type method, which was first introduced by Noor [28,29].
Following the line of proof in [20], we obtain the following lemma.

Lemma 3.2. Let γ > 0, ℓ ∈ (0, 1) and μ ∈ (0, 1/4). Then the linesearch (3.3) terminates after a finite number of steps; in addition, the estimates stated below hold.

Proof. Both estimates follow from [20]. Hence the linesearch (3.3) is well-defined. This completes the proof.
In what follows, we denote by S the solution set of the problem (SFP) and assume that S is nonempty.

Theorem 3.3. The sequence {x^k} generated by Algorithm 3.1 converges weakly to a point in S.

Proof. Let z ∈ S. Since C ⊆ C_k and Q ⊆ Q_k, it follows that z = P_C(z) = P_{C_k}(z) and Az = P_Q(Az) = P_{Q_k}(Az). Hence ∇F_k(z) = 0. Using Lemma 2.1 (c) together with (2.7) and ∇F_k(z) = 0, we derive the estimates (3.8)-(3.13). Combining these with Lemma 3.2, we obtain (3.14).

Using (3.3) and (3.4), we have
This implies that ‖x^{k+1} − x^k‖ → 0 and ‖(I − P_{Q_k})Ax^k‖ → 0 as k → ∞. (3.20)

Since {x^k} is bounded, ω_w(x^k) is nonempty. Let x^* ∈ ω_w(x^k). Then there exists a subsequence {x^{k_n}} of {x^k} such that x^{k_n} ⇀ x^*. Now, we show that x^* ∈ S. Since x^{k_n+1} ∈ C_{k_n}, by the definition of C_{k_n} we get

c(x^{k_n}) ≤ ⟨ξ^{k_n}, x^{k_n} − x^{k_n+1}⟩, (3.21)

where ξ^{k_n} ∈ ∂c(x^{k_n}). By the boundedness of ∂c and (3.20), we have

c(x^{k_n}) ≤ ‖ξ^{k_n}‖ ‖x^{k_n} − x^{k_n+1}‖ → 0 (3.22)

as n → ∞. By the w-lsc of c, x^{k_n} ⇀ x^* and (3.22), it follows that

c(x^*) ≤ lim inf_{n→∞} c(x^{k_n}) ≤ 0, (3.23)

so x^* ∈ C. Next, we show that Ax^* ∈ Q. Since P_{Q_{k_n}}(Ax^{k_n}) ∈ Q_{k_n}, we have

q(Ax^{k_n}) ≤ ⟨η^{k_n}, Ax^{k_n} − P_{Q_{k_n}}(Ax^{k_n})⟩, (3.24)

where η^{k_n} ∈ ∂q(Ax^{k_n}). So, by the boundedness of ∂q and (3.20), we have

q(Ax^{k_n}) ≤ ‖η^{k_n}‖ ‖(I − P_{Q_{k_n}})Ax^{k_n}‖ → 0 (3.25)

as n → ∞. Since x^{k_n} ⇀ x^*, we have Ax^{k_n} ⇀ Ax^*. The w-lsc of q and (3.25) imply that q(Ax^*) ≤ lim inf_{n→∞} q(Ax^{k_n}) ≤ 0. Hence Ax^* ∈ Q. Using Lemma 2.2, we conclude that the sequence {x^k} converges weakly to a point in S. This completes the proof.
Remark 3.4. Algorithm 3.1 in Theorem 3.3 is quite different from that of Gibali et al. [16]. More precisely, the linesearch (3.3) is defined in a new way as a two-step method.
Remark 3.5. Theorem 3.3 is more convenient in practice than the results of Yang [27]. In fact, we do not require knowledge of the operator norm, which is not easy to compute.

Numerical experiments
In this section, we provide some numerical experiments in signal recovery to compare our algorithm with those of Yang [27] and Gibali et al. [16]. In signal processing, compressed sensing can be formulated as inverting the linear system

y = Ax + ε, (4.1)

where x ∈ R^N is a vector with m nonzero components to be recovered, y ∈ R^M is the observed or measured data contaminated by the noise ε, and A : R^N → R^M (M < N) is a bounded linear observation operator. In most inverse problems A is sparse and its range is not closed, so A is often ill-conditioned and the problem is ill-posed. When x has a sparse expansion, finding solutions of (4.1) can be cast as solving the following LASSO problem:

min_{x ∈ R^N} ½‖y − Ax‖² subject to ‖x‖₁ ≤ t, (4.2)

where t > 0 is a given constant. The sparse vector x ∈ R^N is generated from the uniform distribution on the interval [−2, 2] with m nonzero elements. The matrix A ∈ R^{M×N} is generated from a normal distribution with mean zero and variance one. The observation y is generated by adding white Gaussian noise with signal-to-noise ratio SNR = 40. The process is started with t = m, and the initial point x^1 is picked randomly.
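The experimental setup just described can be sketched as follows; the concrete sizes N, M, m and the helper name are our illustrative choices.

```python
import numpy as np

def make_instance(N=512, M=256, m=20, snr_db=40.0, rng=None):
    """An m-sparse signal with nonzeros uniform in [-2, 2], a Gaussian
    sensing matrix, and observations corrupted to a target SNR (in dB)."""
    rng = np.random.default_rng(rng)
    x = np.zeros(N)
    support = rng.choice(N, size=m, replace=False)
    x[support] = rng.uniform(-2.0, 2.0, size=m)   # m nonzero entries
    A = rng.standard_normal((M, N))               # N(0, 1) entries
    clean = A @ x
    # Scale white Gaussian noise so that ||clean|| / ||noise|| matches the SNR.
    noise = rng.standard_normal(M)
    noise *= np.linalg.norm(clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return A, x, clean + noise
```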
Let C = {x ∈ R^N : ‖x‖₁ ≤ t} and Q = {y}. Then the minimization problem (4.2) can be seen as the problem (SFP) and, since the projection onto the closed convex set C does not have a closed-form solution, we make use of the subgradient projection. Define the convex function c(x) = ‖x‖₁ − t and denote the level set C_k by

C_k = {x ∈ R^N : c(x^k) + ⟨ξ^k, x − x^k⟩ ≤ 0}, (4.3)

where ξ^k ∈ ∂c(x^k). Then the orthogonal projection onto C_k can be calculated by

P_{C_k}(x) = x − (max{0, c(x^k) + ⟨ξ^k, x − x^k⟩} / ‖ξ^k‖²) ξ^k. (4.4)

The subdifferential ∂c at x^k is given componentwise by

ξ_i = 1 if x^k_i > 0; ξ_i ∈ [−1, 1] if x^k_i = 0; ξ_i = −1 if x^k_i < 0. (4.5)

In our experiment, we consider two cases, Case 1 and Case 2. Next, we give numerical results for the relaxed CQ-algorithms of Yang [27] and Gibali et al. [16] and our algorithm (Algorithm 3.1).
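The subgradient projection described above can be sketched as follows: with c(x) = ‖x‖₁ − t, one subgradient at x^k is the sign vector, and the half-space C_k admits the explicit projection (4.4). Function and variable names are ours; we assume t > 0, as in the experiments.

```python
import numpy as np

def subgrad_l1(x):
    """One subgradient of ||.||_1 at x (0 is chosen at zero coordinates)."""
    return np.sign(x)

def project_halfspace(x, xk, t):
    """Project x onto C_k = {z : c(x^k) + <xi^k, z - x^k> <= 0}, c(x) = ||x||_1 - t."""
    xi = subgrad_l1(xk)
    c_k = np.linalg.norm(xk, 1) - t          # c(x^k)
    violation = c_k + xi @ (x - xk)          # value of the affine minorant at x
    if violation <= 0:
        return x                             # x already lies in the half-space
    return x - (violation / (xi @ xi)) * xi  # standard half-space projection
```

Note that when xk = 0 the violation equals −t ≤ 0 and the function returns x before the division, so no zero-denominator case arises.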
The iteration is stopped when the following criterion is satisfied, where x^* denotes the estimated signal of x.
In what follows, let α_k = 1/‖A‖² in the CQ-algorithm of Yang [27], and set γ = 2, ℓ = 0.5 and μ = 0.2 in the algorithm of Gibali et al. [16]. The numerical results are reported as follows. (1) In Figures 1 and 4, we see that the signal recovery by Algorithm 3.1, in both Case 1 and Case 2, requires fewer iterations and less CPU time than the algorithms of Yang [27] and Gibali et al. [16]. In the algorithm of Yang [27], the stepsize α_k depends on the operator norm ‖A‖; when the matrix has large dimensions, this norm may be very hard to compute and costly in CPU time. It is also observed that our new linesearch takes less CPU time than that of Gibali et al. [16].
(2) In Figures 2 and 5, we plot the error values per iteration. We see that the errors of Algorithm 3.1 decrease faster than those of the other algorithms. Also, in Figures 3 and 6, the objective function values obtained by Algorithm 3.1 show better convergence behaviour than the other algorithms.

Conclusions
In this paper, we proposed a relaxed CQ-algorithm with a new two-step linesearch in real Hilbert spaces. The computation of matrix inverses and operator norms is not required in our algorithm. We also showed, in a simple and novel way, that the sequence generated by the method converges weakly to a solution of the problem (SFP). All results were compared with the relaxed CQ-algorithms of Yang [27] and Gibali et al. [16]. We found that the proposed algorithm is effective and outperforms other known methods in the literature.