REVISITING A PROJECTION ALGORITHM WITH VARIABLE STEPS FOR VARIATIONAL INEQUALITIES

Abstract. Projection-type methods are an important class of methods for solving variational inequalities (VI). This paper presents a new treatment of a classical projection algorithm with variable steps, which was first proposed by Auslender in the 1970s and later developed by Fukushima in the 1980s. The main purpose of this work is to weaken the assumptions under which the convergence of the original method remains valid.


1.
Introduction. Consider the following variational inequality problem: find x* ∈ C such that
⟨f(x*), y − x*⟩ ≥ 0 for all y ∈ C, (1.1)
where C is a closed convex subset of R^n and f(x) is a mapping from C to R^n. When C is taken as R^n_+ = {x ∈ R^n : x ≥ 0}, the variational inequality reduces to a special problem, the complementarity problem (CP): find x ≥ 0 such that f(x) ≥ 0 and ⟨x, f(x)⟩ = 0. In a recent two-volume monograph ([1]), F. Facchinei and J. S. Pang give a quite comprehensive, systematic treatment of VIs and CPs.
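As a small sanity check of the VI-to-CP reduction, the following sketch (a toy affine instance of our own, with f(x) = Ax + q; none of it comes from the paper) verifies numerically that a solution of the complementarity problem on R²_+ also satisfies the VI inequality:

```python
import numpy as np

# Toy affine map f(x) = A x + q on C = R^2_+ (our own instance, not the
# paper's): on the nonnegative orthant the VI is equivalent to the
# complementarity problem  x >= 0,  f(x) >= 0,  <x, f(x)> = 0.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 2.0])
f = lambda x: A @ x + q

x_star = np.array([0.5, 0.0])        # CP solution, found by hand

# complementarity conditions
assert np.all(x_star >= 0)
assert np.all(f(x_star) >= -1e-12)
assert abs(x_star @ f(x_star)) < 1e-12

# VI condition <f(x*), y - x*> >= 0 on sampled points y in C
rng = np.random.default_rng(0)
for _ in range(1000):
    y = rng.uniform(0.0, 5.0, size=2)
    assert f(x_star) @ (y - x_star) >= -1e-12
print("x* solves both the CP and the VI over R^2_+")
```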
There have been a number of algorithms for VI. Among them the projection-type methods are theoretically simple and practically useful, provided that the projection onto the set C is easily calculated. Though there are various projection algorithms and their variants, such as the basic projection algorithm, the extragradient projection algorithm, the hypersurface projection algorithm, etc. (see e.g. [1]-[4], [6]-[17]), the basic projection algorithm plays an elementary role and supplies a prototype for the more advanced projection algorithms. However, the basic projection algorithm requires quite strong conditions, which limit its scope of application. The present paper is concerned with a projection method with variable steps for solving VI, which is an improvement of the basic projection algorithm; it was first proposed by Auslender ([3]) and later modified by Fukushima ([4]). The main advantage of this method is that it does not require certain knowledge in advance, such as the strong monotonicity constant. However, the strong monotonicity of f(x), a strong requirement, is an essential assumption. For instance, commenting on Fukushima's algorithm, the authors of [1] write ([1, p. 1223]): "The computational advantages of the approach are evident but should be weighted against the rather strong assumptions needed for convergence." Therefore it is significant if the strong assumptions can be weakened. On the other hand, another projection algorithm with variable steps, described in [1, Algorithm 12.1.4], [14] and [15], which is a natural modification of the basic projection algorithm and has a scheme similar to Auslender's algorithm, applies to a broader class of VIs than the basic projection algorithm because it requires co-coercivity instead of the strong monotonicity and Lipschitz continuity of f(x). In this paper we revisit Auslender's algorithm by combining both algorithms.
Roughly speaking, we establish the convergence of Auslender's algorithm under weak co-coercivity. Therefore our result can be viewed as an extension of Auslender's result as well as of the result described in [1, Algorithm 12.1.4], [14] and [15]. This paper is organized as follows. In section 2, several conditions are given and weak co-coercivity is defined; then the algorithm is restated. In section 3, after several preliminary lemmas, the convergence of Auslender's algorithm is proved under our assumptions. We conclude in section 4.

2.
Assumptions and Algorithm. First we make some assumptions as follows.
(A) f(x) is a continuous mapping from C to R^n;
(B) there is a positive continuous function g(x, y) defined on C × C such that ⟨f(x) − f(y), x − y⟩ ≥ g(x, y)‖f(x) − f(y)‖² for all x, y ∈ C;
(C) there exist a point z ∈ C and positive constants β and M such that ⟨f(x), x − z⟩ ≥ β‖f(x)‖ for all x ∈ C with x ∉ B(z, M), where B(z, M) = {x ∈ R^n : ‖x − z‖_G ≤ M} and ‖·‖_G is the G-norm defined below;
(D) for any x ∈ C, f(x) ≠ 0.
Remark: If (B) holds, we call f(x) weakly co-coercive. If g(x, y) is a constant, or g(x, y) has a positive infimum on C × C, then obviously f(x) is co-coercive (see [1, p. 1111]), also called inverse strongly monotone (ism) (see [7], [8]). However, if C is unbounded and g(x, y) tends to zero as ‖x‖ or ‖y‖ approaches infinity, f(x) need not be co-coercive. So weak co-coercivity is weaker than co-coercivity. In Example 1 of the next section we discuss a variational inequality problem with a weakly co-coercive f(x) which is neither strongly monotone nor co-coercive. Here (C) is a modification of condition (c) in [4]; it plays a crucial role in ensuring the boundedness of the iterate sequence.
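To make the distinction concrete, here is a one-dimensional numerical illustration of our own (it is not the paper's Example 1): f(x) = e^x on C = R satisfies the weak co-coercivity inequality in (B) with g(x, y) = min(e^{-x}, e^{-y}), a positive continuous function whose infimum over C × C is 0, so f is weakly co-coercive without being co-coercive:

```python
import math
import random

# f(x) = e^x is monotone on R; by the mean value theorem
#   (f(x) - f(y))(x - y) = e^c (x - y)^2   for some c between x and y,
# while (f(x) - f(y))^2 = e^{2c} (x - y)^2, so the ratio e^{-c} is bounded
# below by min(e^{-x}, e^{-y}).  That is inequality (B) with
# g(x, y) = min(e^{-x}, e^{-y}).
f = math.exp
g = lambda x, y: min(math.exp(-x), math.exp(-y))

random.seed(1)
for _ in range(10_000):
    x, y = random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)
    lhs = (f(x) - f(y)) * (x - y)
    rhs = g(x, y) * (f(x) - f(y)) ** 2
    assert lhs >= rhs - 1e-9        # weak co-coercivity holds
print("inequality (B) verified on 10000 random pairs")
```

Because g(x, y) → 0 as x or y grows, no single positive constant can replace g, which is exactly the gap between weak co-coercivity and co-coercivity.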
For a given symmetric positive definite matrix G, the norm ‖·‖_G is defined by ‖x‖_G = ⟨x, Gx⟩^{1/2}; we denote by P_C the projection operator onto C with respect to the norm ‖·‖_G. Throughout, {ρ_k} is a sequence of positive numbers.
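For instance, when C = R^n_+ and G is diagonal, the squared G-distance separates across coordinates, so the G-projection is componentwise clipping just as in the Euclidean case. The sketch below (our own, covering this diagonal case only) also checks the standard variational characterization of the projection, ⟨x − P_C(x), G(y − P_C(x))⟩ ≤ 0 for all y ∈ C:

```python
import numpy as np

# Projection of x onto C = R^n_+ in the G-norm for diagonal positive
# definite G: since ||x - p||_G^2 = sum_i G_ii (x_i - p_i)^2 separates by
# coordinate, the minimizer over p >= 0 is componentwise max(x_i, 0).
def proj_orthant(x):
    return np.maximum(x, 0.0)

G = np.diag([1.0, 4.0, 0.25])
x = np.array([1.5, -2.0, -0.3])
p = proj_orthant(x)               # -> [1.5, 0.0, 0.0]

# variational characterization: <x - p, G (y - p)> <= 0 for every y in C
rng = np.random.default_rng(0)
for _ in range(1000):
    y = rng.uniform(0.0, 5.0, size=3)
    assert (x - p) @ G @ (y - p) <= 1e-12
print("p =", p)
```

For a non-diagonal G the projection is a genuine quadratic program and clipping no longer suffices.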
Next we restate Auslender's algorithm as follows.
Algorithm 1. For an arbitrary x^1 ∈ C, set k = 1. Calculate the projection x^{k+1} = P_C(x^k − ρ_k G^{-1} f(x^k)/‖f(x^k)‖). If x^{k+1} = x^k, terminate; otherwise set k = k + 1 and repeat.
Remark: Algorithm 1 was first proposed in [3]; Auslender proved its convergence by assuming the strong monotonicity and boundedness of f(x). Later, for practical considerations, Fukushima ([4]) presented a modification and established the convergence of the modified version under strong monotonicity and other conditions. Moreover, the positive parameter sequence {ρ_k} was assumed to tend to 0 in both algorithms, which affects the convergence rate of these two algorithms when the procedure executes many iterations. If we replace the normalized stepsize ρ_k/‖f(x^k)‖ by an unnormalized stepsize τ_k, we obtain the algorithm of [1, Algorithm 12.1.4], [14] and [15]. In this case, to ensure the convergence of the algorithm, the co-coercivity of f(x) was required, which is weaker than strong monotonicity together with Lipschitz continuity but stronger than weak co-coercivity, and τ_k was assumed to have a positive infimum.
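For concreteness, here is a runnable sketch of an Algorithm 1-style iteration with G = I. The projection formula is incomplete in our copy of the text, so the step x^{k+1} = P_C(x^k − ρ_k f(x^k)/‖f(x^k)‖) below is an assumption modeled on Fukushima's normalized scheme (the normalization is what makes condition (D) necessary), and the instance f(x) = x + c on C = R²_+ is a toy of our own with solution x* = 0:

```python
import numpy as np

# Sketch of an Algorithm 1-style iteration with G = I.  ASSUMED step:
#   x^{k+1} = P_C(x^k - rho_k f(x^k)/||f(x^k)||),
# terminating exactly when x^{k+1} = x^k.
def algorithm1(f, proj, x, rho, max_iter=10_000):
    for k in range(max_iter):
        fx = f(x)
        x_new = proj(x - rho(k) * fx / np.linalg.norm(fx))
        if np.array_equal(x_new, x):   # x^{k+1} = x^k: terminate
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Toy instance (ours, not the paper's): f(x) = x + c on C = R^2_+ with
# c > 0, so f never vanishes on C and the unique VI solution is x* = 0.
c = np.array([1.0, 0.5])
f = lambda x: x + c
proj = lambda x: np.maximum(x, 0.0)    # Euclidean projection onto R^2_+

x, iters = algorithm1(f, proj, np.array([3.0, 2.0]), rho=lambda k: 0.2)
print(x, iters)
```

Because both components of f are positive on C, each iterate decreases componentwise until the projection clips it to the solution (0, 0), at which point the stopping test fires; note that a constant ρ_k here has a positive infimum, matching the regime studied below.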
In the next section we will establish the convergence of Algorithm 1 under weak co-coercivity while ρ_k is assumed to have a positive infimum, which is beneficial for the convergence rate of the algorithm compared with the case of ρ_k tending to 0.
3. Convergence. Before we give our main result, several preliminary lemmas are needed.
for any given α > 0. Proof: Since f(x) ≠ 0 for any x ∈ C, it follows from (C) that x* ∈ C ∩ B(z, M). It immediately follows that x* is a solution of (1.1).
Proof: Let z be a vector satisfying (C). By Lemma 3.3 we have, for each x ∈ R^n, the inequality (3.3). If x^k ∉ B(z, M), then we deduce from (C) and (3.3) the bound (3.5), where ν is the minimal eigenvalue of the matrix G. Since inf{ρ_k} = ρ_0 > 0, this bound holds uniformly for x^k ∉ B(z, M). Consequently we conclude from (3.5) that, for any starting point x^1, some iterate x^k ∈ C ∩ B(z, M) is reached within finitely many steps. On the other hand, for each x^k ∈ C ∩ B(z, M) we have the estimate (3.6). Combining (3.5) and (3.6) yields that {x^k} is bounded. Furthermore, from (3.6) one gets x^{k+1} ∈ B(z, M + βν^{1/2}) whenever x^k ∈ C ∩ B(z, M). So for sufficiently large k we have x^k ∈ B(z, M + βν^{1/2}).
Obviously x^k is a solution of (1.1) provided that x^{k+1} = x^k for some k. So in what follows we assume that {x^k} is an infinite sequence.
We are ready to establish the convergence of Algorithm 1.
Theorem 3.1. Suppose that conditions (A)-(D) hold and that {ρ_k} satisfies inf{ρ_k} > 0 and ρ_k ≤ min{βν, ξ} for each k, where ξ = min{‖f(x)‖² g(x, y) : x, y ∈ C ∩ B(z, M + βν^{1/2})}. Then {x^k} produced by Algorithm 1 converges to a solution of (1.1).
Proof: By Lemma 3.2, (1.1) has a solution x* in C ∩ B(z, M). By Lemma 3.1 we have, for each k, the inequality (3.7). Then we obtain an estimate whose first inequality follows from Lemma 3.3 and whose last inequality follows from condition (B).
Then we have from (3.7) the bound (3.8). Because x* ∈ C ∩ B(z, M) ⊆ C ∩ B(z, M + βν^{1/2}), and {x^k} ⊆ C ∩ B(z, M + βν^{1/2}) for sufficiently large k by Lemma 3.4, the inequality (3.9) holds. Thus we deduce from (3.8) and (3.9), for sufficiently large k, an estimate which implies that {‖x^k − x*‖²_G} decreases monotonically for sufficiently large k and that ‖x^{k+1} − x^k‖ → 0 as k → +∞. Because {x^k} is bounded, there is a subsequence x^{k_i} → x̄ ∈ C. As a result x^{k_i+1} → x̄ as k_i → +∞, since ‖x^{k+1} − x^k‖ → 0 as k → +∞. By Algorithm 1 we have x^{k_i+1} = P_C(x^{k_i} − ρ_{k_i} G^{-1} f(x^{k_i})/‖f(x^{k_i})‖). Since inf{ρ_k} > 0, without loss of generality we may assume ρ_{k_i} → ρ̄ > 0. By (D) we have f(x̄) ≠ 0. Consequently, passing to the limit, one gets x̄ = P_C(x̄ − ρ̄ G^{-1} f(x̄)/‖f(x̄)‖), which means x̄ is a solution of (1.1). From Lemma 3.4 we know x̄ ∈ C ∩ B(z, M + βν^{1/2}). Using x̄ in place of x* in the above argument, we conclude that {‖x^k − x̄‖²_G} decreases monotonically for sufficiently large k. Since {‖x^k − x̄‖²_G} has a subsequence converging to 0, this implies x^k → x̄ as k → +∞. This completes the proof.
Now we give an illustrative example in which (A), (B), (C), (D) are all satisfied but f(x) is neither strongly monotone nor co-coercive. (1) Obviously f(x) is a continuous mapping from C to R². (2) For all x, y ∈ C the weak co-coercivity inequality holds with g(x, y) = min(e^{x_1}, e^{y_1}); thus (B) holds. (3) Take z = (−1, 0)^T and M = 2; then one verifies that (C) holds. Since conditions (A)-(D) all hold, this variational inequality problem has a solution, and the sequence generated by Algorithm 1 converges to a solution of the problem as long as ρ_k satisfies the requirement of Theorem 3.1. In detail, here β = 1, ν = 1 and ξ = min{‖f(x)‖² g(x, y) : x, y ∈ C ∩ B(z, 3)} = e^{-4}, so the sequence {x^k} produced by Algorithm 1 from any starting point x^1 tends to a solution of this variational inequality problem for any ρ_k such that inf{ρ_k} > 0 and ρ_k ≤ min{βν, ξ} = e^{-4} for each k.
On the other hand, f(x) is obviously not strongly monotone. Since the co-coercivity modulus ⟨f(x) − f(y), x − y⟩/‖f(x) − f(y)‖² → 0 as long as x_1 (or y_1) is bounded and y_1 (or x_1) → −∞, f(x) is not co-coercive either. As a result, the convergence of the original Auslender algorithm as well as of Algorithm 12.1.4 in [1] cannot be ensured.
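The failure of co-coercivity on an unbounded set is easy to observe numerically. Using again our one-dimensional stand-in f(x) = e^x (the formula of the paper's Example 1 is not reproduced in this copy), the co-coercivity modulus decays to zero along a ray, so no uniform positive constant can serve in the co-coercivity inequality:

```python
import math

# For the one-dimensional map f(x) = e^x (our own stand-in, not the
# paper's Example 1), the co-coercivity modulus
#   (f(x) - f(y))(x - y) / (f(x) - f(y))^2
# evaluated at y = x + 1 equals e^{-x}/(e - 1), which tends to 0 as
# x -> +infinity; hence no uniform positive lower bound exists.
f = math.exp

def modulus(x, y):
    return (f(x) - f(y)) * (x - y) / (f(x) - f(y)) ** 2

for x in (0.0, 5.0, 10.0, 20.0):
    print(f"x = {x:5.1f}: modulus = {modulus(x, x + 1):.3e}")
# the printed moduli decrease toward 0
```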

4.
Conclusion. The above result shows that Auslender's algorithm and Algorithm 12.1.4 of [1] may be applied in a broader scope. However, like Algorithm 12.1.4 of [1], the drawback of our result is the dependence of ρ_k on certain knowledge, such as g(x, y), β and M, which may not be available in advance in practice. As a trade-off, our result loses the advantage, enjoyed by the original Auslender algorithm, of ρ_k being independent of the strong monotonicity constant.
In [19], we establish the convergence of Auslender's algorithm and its relaxed version, Fukushima's algorithm, under weak co-coercivity, assuming ρ_k → 0 and Σ_k ρ_k = +∞ as in [4]. It seems that the assumption ρ_k → 0 is essential and cannot be removed for the convergence of the relaxed scheme (see [4]). However, in a recent paper on the Split Feasibility Problem (SFP) ([18]), which can be transformed into a special VI, a relaxed projection algorithm with a fixed positive stepsize is given and its convergence is established under mild conditions. So we believe that the convergence of the relaxed version of Algorithm 1 holds under the above assumptions with a suitable additional condition.