Alternating proximal penalization algorithm for the modified multiple-sets split feasibility problems

In this paper, we present an extended alternating proximal penalization algorithm for the modified multiple-sets split feasibility problem. For this method, we first show that the sequence of distances between consecutive iterates generated by the algorithm is summable, which guarantees that the distance between two adjacent iterates converges to zero, and then we establish the global convergence of the algorithm provided that the penalty parameter tends to zero.


Introduction
The multiple-sets split feasibility problem, abbreviated as MSFP, is to find a point closest to the intersection of a family of closed and convex sets in one space, such that its image under a linear operator is closest to the intersection of another family of closed and convex sets in the image space. More specifically, the MSFP consists in finding a point x* such that

x* ∈ C := ∩_{i=1}^{s} C_i and Ax* ∈ Q := ∩_{j=1}^{t} Q_j,

where C_i ⊂ R^m (i = 1, 2, …, s) and Q_j ⊂ R^n (j = 1, 2, …, t) are nonempty, closed, and convex sets in Euclidean spaces, and A ∈ R^{n×m} is a given matrix. The problem finds applications in fields such as image reconstruction and signal processing [1], and it has been studied intensively [2–11]. Numerous efficient iterative methods have been proposed for it; see [12–23] and the references therein.
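When the intersections may be empty, the "closest point" reading above is commonly formalized through the proximity function p(x) = ½ Σ_i ‖x − P_{C_i}(x)‖² + ½ Σ_j ‖Ax − P_{Q_j}(Ax)‖², which vanishes exactly at solutions of the MSFP. A minimal Python sketch under made-up data (the two balls C_1, C_2, the box Q_1, and the matrix A below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def proj_ball(x, center, radius):
    """Euclidean projection onto the ball {y : ||y - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_box(x, lo, hi):
    """Euclidean projection onto the box {y : lo <= y <= hi}."""
    return np.clip(x, lo, hi)

# Illustrative data: C_1, C_2 are balls in R^2, Q_1 is a box in R^2.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
C_projs = [lambda z: proj_ball(z, np.zeros(2), 1.0),
           lambda z: proj_ball(z, np.array([0.5, 0.0]), 1.0)]
Q_projs = [lambda z: proj_box(z, -1.0, 1.0)]

def proximity(x):
    """p(x) = 1/2 sum_i ||x - P_Ci(x)||^2 + 1/2 sum_j ||Ax - P_Qj(Ax)||^2."""
    val = sum(0.5 * np.linalg.norm(x - P(x)) ** 2 for P in C_projs)
    Ax = A @ x
    val += sum(0.5 * np.linalg.norm(Ax - P(Ax)) ** 2 for P in Q_projs)
    return val
```

A point such as x = (0.2, 0.1) lies in both balls with Ax in the box, so p(x) = 0, while any infeasible point gives p(x) > 0.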
To solve the problem, based on the multidistance idea, Censor and Elfving [1] established an iterative algorithm; it involves the inverse of the underlying matrix at each iteration and is therefore computationally expensive. To overcome this drawback, Byrne [24] presented a projection-type algorithm, called the CQ method, for solving the split feasibility problem. The algorithm is efficient when the orthogonal projections can be computed easily. To make the projection method more efficient when the projection is difficult to compute, Yang [25] established a relaxed CQ algorithm by modifying the projection region. Building on this method, to guarantee sufficient decrease of the objective function at each iteration, Qu and Xiu [26] presented a revised CQ method by introducing an Armijo-like stepsize rule into the iterative framework. To accelerate the algorithm further, Zhang and Wang [27] made a further modification by adopting a new search direction for the split feasibility problem.
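For a single pair (C, Q), the CQ iteration of Byrne [24] reads x_{k+1} = P_C(x_k − τ Aᵀ(Ax_k − P_Q(Ax_k))) with stepsize τ ∈ (0, 2/‖A‖²). A self-contained sketch (the box C = [0, 2]², the unit ball Q, and the matrix A are illustrative assumptions):

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto the box [lo, hi]^n (componentwise clip)."""
    return np.clip(x, lo, hi)

def proj_unit_ball(x):
    """Projection onto the unit Euclidean ball: rescale if outside."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def cq_method(A, x0, steps=2000):
    """CQ iteration: x <- P_C(x - tau * A^T (Ax - P_Q(Ax))), tau in (0, 2/||A||^2)."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # safely inside the admissible range
    x = x0.astype(float)
    for _ in range(steps):
        Ax = A @ x
        x = proj_box(x - tau * A.T @ (Ax - proj_unit_ball(Ax)), 0.0, 2.0)
    return x

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])
x = cq_method(A, np.array([2.0, 2.0]))
# x stays in C = [0,2]^2 while Ax is driven toward the unit ball Q.
```

Since this instance of the split feasibility problem is consistent, the iterates approach a point of C whose image lies in Q.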
Inspired by the work in [1] and the alternating proximal penalization algorithm in [28], in this paper we present a two-block alternating proximal penalization algorithm for this problem. Under mild conditions, we first show that the sequence of distances between consecutive iterates generated by our algorithm is summable, which guarantees that the distance between two adjacent iterates converges to zero, and then we establish the global convergence of the algorithm provided that the penalty parameter tends to zero.
The remainder of this paper is organized as follows. In Sect. 2, we give some basic definitions and lemmas that will be used in the subsequent sections. In Sect. 3, we present a new method for solving the split feasibility problem and establish its convergence. Some conclusions are drawn in the last section.

Preliminaries
In this section, we first present some definitions and then recall some existing conclusions which will be used in the subsequent analysis.
First, we give some definitions concerning a continuous function f : R^n → R and a mapping T : R^n → R^n; throughout, Ω ⊆ R^n is a nonempty set.
(1) T is called monotone on Ω if (T(x) − T(y))^T (x − y) ≥ 0 for all x, y ∈ Ω.
(2) T is called ν-inverse strongly monotone on Ω if there exists a constant ν > 0 such that (T(x) − T(y))^T (x − y) ≥ ν‖T(x) − T(y)‖² for all x, y ∈ Ω.
(3) T is called Lipschitz continuous on Ω if there exists a constant L > 0 such that ‖T(x) − T(y)‖ ≤ L‖x − y‖ for all x, y ∈ Ω.
(4) The subgradient set of f at x is given by ∂f(x) = {g ∈ R^n : f(y) ≥ f(x) + g^T(y − x) for all y ∈ R^n}.
For the proximity functions f(x) = (1/2) Σ_{i=1}^{s} ‖x − P_{C_i}(x)‖² and g(y) = (1/2) Σ_{j=1}^{t} ‖y − P_{Q_j}(y)‖², it holds that ∇f(x) and ∇g(y) are both inverse strongly monotone and Lipschitz continuous on X and Y, where P_{C_i}(x) denotes the projection of x onto C_i, i.e., P_{C_i}(x) = arg min{‖y − x‖ : y ∈ C_i}.
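The projections P_{C_i} above have closed forms for many standard sets. Two classical examples, sketched with a numerical check of the arg-min characterization (the sets here are chosen purely for illustration):

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Projection onto {y : ||y|| <= radius}: rescale if outside."""
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def proj_halfspace(x, a, b):
    """Projection onto {y : a^T y <= b}: shift along a if violated."""
    s = a @ x - b
    return x if s <= 0 else x - (s / (a @ a)) * a

# The projection is no farther from x than any other point of the set.
rng = np.random.default_rng(0)
x = 5.0 * rng.normal(size=3)
p = proj_ball(x)
for _ in range(100):
    y = rng.normal(size=3)
    y = rng.uniform(0.0, 1.0) * y / np.linalg.norm(y)  # a random point of the ball
    assert np.linalg.norm(x - p) <= np.linalg.norm(x - y) + 1e-12
```

Cheap closed-form projections of this kind are what make projection-type methods such as the CQ algorithm practical.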
To proceed, we give some conclusions that play a central role in the next section.

Lemma 2.2 [28]
Let {a_n} and {ε_n} be two real sequences such that {a_n} is bounded below, {ε_n} ∈ ℓ¹, and a_{n+1} ≤ a_n + ε_n for all n ∈ N. Then the sequence {a_n} converges.

Lemma 2.3 (Opial lemma [31])
Let {λ_k} be a nonsummable sequence of positive real numbers, and let {x^k} be any sequence in a Hilbert space H with weighted averages {z^k}. Assume that there exists a nonempty closed convex subset F of H such that every weak cluster point of {z^k} belongs to F and lim_{k→+∞} ‖x^k − f‖ exists for all f ∈ F. Then {z^k} converges weakly to an element of F.
To end this section, we define some operators that will be used in the following sections. On the product space X × Y, we define monotone operators F̂ and Ĝ, together with bounded linear operators, ϒ among them. Let Ω be a nonempty closed convex subset of R^n. Then the normal cone operator of Ω at x ∈ Ω is defined as N_Ω(x) = {d ∈ R^n : d^T(y − x) ≤ 0 for all y ∈ Ω}. For a function Φ : X × Y → R, the Fenchel conjugate Φ* of Φ at p ∈ X × Y is given by Φ*(p) := sup{p^T q − Φ(q) : q ∈ X × Y}.
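The Fenchel conjugate can be checked on a concrete function: writing Φ for the map being conjugated, the choice Φ(q) = ½‖q‖² gives Φ*(p) = ½‖p‖², and a crude grid search over q reproduces this value (the grid bounds and the test point p are arbitrary assumptions):

```python
import numpy as np

def conjugate_on_grid(phi, p, grid):
    """Approximate Phi*(p) = sup_q p^T q - phi(q) over a finite grid of q's."""
    return max(p @ q - phi(q) for q in grid)

phi = lambda q: 0.5 * (q @ q)             # Phi(q) = 1/2 ||q||^2
p = np.array([0.6, -0.3])
grid = [np.array([u, v])
        for u in np.linspace(-2.0, 2.0, 81)
        for v in np.linspace(-2.0, 2.0, 81)]

approx = conjugate_on_grid(phi, p, grid)  # supremum is attained near q = p
exact = 0.5 * (p @ p)                     # known conjugate value 1/2 ||p||^2
```

Because the supremum defining Φ* is attained at q = p, which lies on the grid, the approximation matches the closed-form value to floating-point accuracy.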

Algorithm and the convergence analysis
In [30], Zhang et al. proposed an alternating direction method for solving problem (1.1) based on the Lagrangian function. In contrast, in this paper we propose the following alternating proximal penalization algorithm: given the current iterate (x^k, y^k) and positive parameters α and β, the new point (x^{k+1}, y^{k+1}) is generated by (3.1). Note that the penalty parameter sequence satisfies {γ_k} ∈ ℓ² \ ℓ¹, i.e., Σ_k γ_k² < +∞ while Σ_k γ_k = +∞.
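A standard concrete choice satisfying {γ_k} ∈ ℓ² \ ℓ¹ is γ_k = 1/k: the squares sum to π²/6 while the partial sums of γ_k themselves diverge like log K. A quick numerical check of the two defining properties:

```python
import numpy as np

K = 10 ** 6
gammas = 1.0 / np.arange(1, K + 1)   # gamma_k = 1/k

sum_g = gammas.sum()                 # partial sum ~ log K + 0.577 -> diverges
sum_g2 = (gammas ** 2).sum()         # converges to pi^2 / 6 ~ 1.6449
```

Any sequence γ_k ~ 1/k^s with s ∈ (1/2, 1] works equally well; s = 1 is simply the most common choice.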

Lemma 3.2 For the operator ϒ defined at the end of Sect. 2 and the sequence {h_k} introduced above, estimate (3.7) gives h_{k+1} − h_k ≤ 2γ_{k+1}² Φ*(p, q). Since {γ_k} ∈ ℓ², the closedness of R(ϒ) and Lemma 2.2 imply that the sequence {h_k} converges, which means that lim_{k→+∞} h_k(x*, y*) exists and the sequence {(x^k, y^k)} is bounded.
In order to prove the convergence of the algorithm, we need the following notation. Setting τ_k = Σ_{n=1}^{k} γ_n, define the averages of the sequences {x^k} and {y^k} as x̂^k = (1/τ_k) Σ_{n=1}^{k} γ_n x^n and ŷ^k = (1/τ_k) Σ_{n=1}^{k} γ_n y^n. Now we are in a position to prove the convergence of the algorithm.
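The averages x̂^k can be maintained incrementally, updating τ_k and the weighted sum at each iteration. The sketch below uses a made-up oscillating stand-in for the iterate sequence {x^k}, with weights γ_n = 1/n:

```python
import numpy as np

def weighted_averages(xs, gammas):
    """Return the sequence xhat_k = (1/tau_k) * sum_{n<=k} gamma_n x_n,
    where tau_k = sum_{n<=k} gamma_n, computed incrementally."""
    tau, acc, out = 0.0, np.zeros_like(np.asarray(xs[0], dtype=float)), []
    for x, g in zip(xs, gammas):
        tau += g
        acc = acc + g * np.asarray(x, dtype=float)
        out.append(acc / tau)
    return out

# Made-up iterates whose first coordinate oscillates around 1.
xs = [np.array([1.0 + (-0.5) ** k, 2.0]) for k in range(1, 50)]
gammas = [1.0 / k for k in range(1, 50)]
zhat = weighted_averages(xs, gammas)
# Averaging damps the oscillation: the last average is close to (1, 2).
```

This ergodic averaging is exactly the device that lets the Opial-type lemma (Lemma 2.3) deliver weak convergence of {x̂^k, ŷ^k} even when the raw iterates oscillate.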
Second, we prove that the sequence {(x̂^k, ŷ^k)} has at most one cluster point.

Conclusion
In this paper, we presented an extended alternating proximal penalization algorithm for the modified multiple-sets split feasibility problem. For this method, we first showed that the sequence of distances between consecutive iterates generated by the algorithm is summable, which guarantees that the distance between two adjacent iterates converges to zero, and then we established the global convergence of the algorithm provided that the penalty parameter tends to zero.