Abstract

We propose a class of new double projection algorithms for solving the variational inequality problem, which can be viewed as a framework extending the method of Solodov and Svaiter by adopting a class of new hyperplanes. By the separation property of hyperplanes, our method is proved to be globally convergent under very mild assumptions. In addition, we propose a modified version of our algorithm that finds a solution of the variational inequality which is also a fixed point of a given nonexpansive mapping. If, in addition, a certain local error bound holds, we analyze the convergence rate of the iterative sequence. Numerical experiments show that our algorithms are efficient.

1. Introduction

We consider the following variational inequality problem, denoted by VI$(F,C)$: find a vector $x^* \in C$ such that
$$\langle F(x^*),\, y - x^* \rangle \ge 0 \quad \text{for all } y \in C, \tag{1}$$
where $C$ is a nonempty closed convex set in $\mathbb{R}^n$, $F$ is a continuous mapping from $\mathbb{R}^n$ into itself, and $\langle \cdot,\cdot \rangle$ denotes the usual inner product in $\mathbb{R}^n$. Let $\mathrm{SOL}(F,C)$ denote the solution set of VI$(F,C)$, and let $P_C$ denote the projection onto $C$. Throughout this paper, we assume that $\mathrm{SOL}(F,C)$ is nonempty and that $F$ has the property that
$$\langle F(x),\, x - x^* \rangle \ge 0 \quad \text{for all } x \in C \text{ and } x^* \in \mathrm{SOL}(F,C). \tag{2}$$
The property (2) holds if $F$ is monotone or, more generally, pseudomonotone on $C$ in the sense of Karamardian [1].

Variational inequality problems have wide applications in many fields. In recent years, many numerical algorithms for VI$(F,C)$ have been proposed, including Newton methods, proximal point algorithms, projection algorithms, and their variants; see [2, 3]. Among these methods, projection-type methods are simple and efficient; the oldest algorithm of this class is the extragradient projection method introduced in [4] and later refined and extended in [5-7].

In 1999, Solodov and Svaiter [8] proposed a hyperplane projection algorithm for solving VI$(F,C)$ in Euclidean space, known also as the double projection algorithm due to the fact that one needs to implement two projections in each iteration: one onto the feasible set $C$, and the other onto the intersection of the feasible set and a half-space. More precisely, they presented the following algorithm.

Algorithm 1. Choose an initial point $x^0 \in C$, parameters $\sigma, \gamma \in (0,1)$, and set $k = 0$.
Step 1. Having $x^k$, compute
$$r(x^k) = x^k - P_C\bigl(x^k - F(x^k)\bigr).$$
Stop if $r(x^k) = 0$; otherwise, go to Step 2.
Step 2. Compute $z^k = x^k - \eta_k r(x^k)$, where $\eta_k = \gamma^{m_k}$ with $m_k$ being the smallest nonnegative integer $m$ such that
$$\bigl\langle F\bigl(x^k - \gamma^m r(x^k)\bigr),\, r(x^k)\bigr\rangle \ge \sigma\,\|r(x^k)\|^2.$$
Step 3. Compute $x^{k+1} = P_{C \cap H_k}(x^k)$, where
$$H_k = \bigl\{x \in \mathbb{R}^n : \langle F(z^k),\, x - z^k\rangle \le 0\bigr\}.$$

Let $k := k + 1$, and return to Step 1.
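For concreteness, the following is a minimal Python/NumPy sketch of Algorithm 1 for a box-shaped feasible set. The helper names and the use of Dykstra's alternating projection scheme to evaluate $P_{C \cap H_k}$ are our own illustrative choices, not part of the original method's specification (which only requires that this projection be computed somehow):

import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box C = {x : lo <= x <= hi}.
    return np.clip(x, lo, hi)

def project_halfspace(x, g, z):
    # Euclidean projection onto H = {x : <g, x - z> <= 0}.
    viol = g @ (x - z)
    if viol <= 0.0:
        return x
    return x - (viol / (g @ g)) * g

def project_intersection(x, proj_a, proj_b, iters=200):
    # Dykstra's alternating projections: converges to the projection onto
    # the intersection of two closed convex sets (truncated here).
    y = x.astype(float).copy()
    p = np.zeros_like(y)
    q = np.zeros_like(y)
    for _ in range(iters):
        u = proj_a(y + p)
        p = y + p - u
        y = proj_b(u + q)
        q = u + q - y
    return y

def solodov_svaiter(F, lo, hi, x0, sigma=0.5, gamma=0.5, tol=1e-8, max_iter=1000):
    proj_C = lambda y: project_box(y, lo, hi)
    x = proj_C(x0.astype(float))
    for _ in range(max_iter):
        r = x - proj_C(x - F(x))                     # Step 1: residual
        if np.linalg.norm(r) <= tol:
            break
        eta = 1.0                                    # Step 2: Armijo-type search
        while F(x - eta * r) @ r < sigma * (r @ r) and eta > 1e-12:
            eta *= gamma
        z = x - eta * r
        g = F(z)                                     # Step 3: project onto C ∩ H_k
        x = project_intersection(
            x, proj_C, lambda y: project_halfspace(y, g, z))
    return x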

Wang et al. [9] show that Algorithm 1 admits a longer step size than the extragradient method proposed by Korpelevich [4], and hence it is theoretically a better algorithm. The convergence rate of the iterative sequence generated by a hyperplane projection method depends mainly on the choice of the hyperplane and the way of projecting. We note that the hyperplane of Algorithm 1 is constructed by means of $F(z^k)$. In 2006, He [10] constructed a new hyperplane by a linear combination of $r(x^k)$ and $F(z^k)$ and hence modified Algorithm 1.

Inspired by the above, in this paper we construct a class of new hyperplanes by a linear combination of $F(x^k)$, $F(z^k)$, and $r(x^k)$ and hence present a class of new double projection algorithms. Using the proof method proposed by He in [10], our algorithms are proved to be globally convergent under the assumptions that $F$ is continuous and pseudomonotone. Numerical experiments show that the construction of the hyperplane noticeably affects the convergence rate of the iterative sequence. In addition, we propose a modified version of our algorithm that finds a solution of the variational inequality which is also a fixed point of a given nonexpansive mapping.

The organization of this paper is as follows. In the next section, we give some preliminaries. The details of the double projection algorithm are presented, and its global convergence is proved, in Section 3. The modified double projection algorithm and its convergence analysis are given in Section 4. Numerical results are reported in Section 5. Finally, conclusions together with some further studies are summarized in the last section.

2. Preliminaries

Let $\mu > 0$ be a parameter. The natural residual function $r_\mu$ is defined by
$$r_\mu(x) = x - P_C\bigl(x - \mu F(x)\bigr).$$

A well-known fact is that $x$ is a solution of VI$(F,C)$ if and only if $x$ is a root of $r_\mu$.
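In code, the residual and the induced stopping test are immediate; `proj_C` below is a caller-supplied projector onto $C$ (a hypothetical argument, since the projector depends on the concrete feasible set):

import numpy as np

def natural_residual(x, F, proj_C, mu=1.0):
    # r_mu(x) = x - P_C(x - mu * F(x)); x solves VI(F, C) iff r_mu(x) = 0.
    return x - proj_C(x - mu * F(x))

def is_approx_solution(x, F, proj_C, mu=1.0, tol=1e-8):
    # Stopping test used by the algorithms below: ||r_mu(x)|| <= tol.
    return np.linalg.norm(natural_residual(x, F, proj_C, mu)) <= tol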

Lemma 2. Let $C$ be a closed convex set. Then it holds that
(1) $\langle x - P_C(x),\, y - P_C(x)\rangle \le 0$ for all $x \in \mathbb{R}^n$ and $y \in C$;
(2) $\|P_C(x) - y\|^2 \le \|x - y\|^2 - \|P_C(x) - x\|^2$ for all $x \in \mathbb{R}^n$ and $y \in C$.
By (1) of Lemma 2, it is easy to prove that
$$\langle F(x),\, r_\mu(x)\rangle \ge \frac{1}{\mu}\,\|r_\mu(x)\|^2 \quad \text{for all } x \in C. \tag{9}$$

Lemma 3. For every $x \in \mathbb{R}^n$, $\mu > 0$, and $x^* \in \mathrm{SOL}(F,C)$, one has
$$\bigl\langle F\bigl(x - r_\mu(x)\bigr),\, x - r_\mu(x) - x^* \bigr\rangle \ge 0.$$

Proof. Since $x - r_\mu(x) = P_C(x - \mu F(x)) \in C$, by property (2),
$$\langle F(y),\, y - x^*\rangle \ge 0 \quad \text{for all } y \in C. \tag{10}$$
In inequality (10), substitute $y$ with $x - r_\mu(x)$, and we obtain the desired result.

Lemma 4. Let $K$ be a closed convex set in $\mathbb{R}^n$, let $h$ be a real-valued function on $\mathbb{R}^n$, and let $D$ be the set $D = \{x \in K : h(x) \le 0\}$. If $D$ is nonempty and $h$ is Lipschitz continuous on $K$ with modulus $\theta > 0$, then
$$\operatorname{dist}(x, D) \ge \theta^{-1}\max\{h(x), 0\} \quad \text{for all } x \in K,$$
where $\operatorname{dist}(x, D)$ denotes the distance from $x$ to $D$.

Proof. See [10, Lemma 2.3].

Lemma 5. Let $\{\alpha_k\}$ be a real sequence satisfying $0 < a \le \alpha_k \le b < 1$ for all $k$, and let $\{v^k\}$ and $\{w^k\}$ be two sequences in $\mathbb{R}^n$ such that, for some $c \ge 0$,
$$\limsup_{k\to\infty}\|v^k\| \le c, \qquad \limsup_{k\to\infty}\|w^k\| \le c, \qquad \lim_{k\to\infty}\bigl\|\alpha_k v^k + (1-\alpha_k)w^k\bigr\| = c.$$
Then $\lim_{k\to\infty}\|v^k - w^k\| = 0$.

Proof. See [11, Lemma 3.1].

3. The Double Projection Algorithm and Convergence Analysis

Algorithm 6. Select $x^0 \in C$, $\mu > 0$, $\sigma \in (0,1)$, $\gamma \in (0,1)$, and coefficients $\lambda_1, \lambda_2, \lambda_3 \ge 0$ not all zero. Set $k = 0$.

Step 1. For $x^k$, compute
$$r_\mu(x^k) = x^k - P_C\bigl(x^k - \mu F(x^k)\bigr).$$
If $r_\mu(x^k) = 0$, stop; else go to Step 2.

Step 2. Compute $z^k = x^k - \eta_k r_\mu(x^k)$, where $\eta_k = \gamma^{m_k}$ with $m_k$ being the smallest nonnegative integer $m$ satisfying
$$\bigl\langle F\bigl(x^k - \gamma^m r_\mu(x^k)\bigr),\, r_\mu(x^k)\bigr\rangle \ge \frac{\sigma}{\mu}\,\|r_\mu(x^k)\|^2. \tag{15}$$

Step 3. Compute $x^{k+1} = P_{C \cap H_k}(x^k)$, where $H_k = \{x \in \mathbb{R}^n : h_k(x) \le 0\}$ is a half-space defined by the function
$$h_k(x) = \langle d_k,\, x - z^k\rangle, \tag{16}$$
with normal vector $d_k = \lambda_1 F(x^k) + \lambda_2 F(z^k) + \lambda_3 r_\mu(x^k)$.

Let $k := k + 1$, and return to Step 1.
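Algorithm 6 differs from Algorithm 1 only through the half-space used in Step 3. The following sketch of the new normal vector reuses the helpers from the sketch of Algorithm 1 above; the names `lam1`, `lam2`, `lam3` stand for $\lambda_1$, $\lambda_2$, $\lambda_3$ and are placeholders for whatever weights one selects within the class:

def direction(F, x, z, r, lam1, lam2, lam3):
    # Normal vector d_k of the new half-space: a nonnegative linear
    # combination of F(x^k), F(z^k), and r_mu(x^k).  Taking lam2 = 1 and
    # lam1 = lam3 = 0 recovers the Solodov-Svaiter normal F(z^k).
    return lam1 * F(x) + lam2 * F(z) + lam3 * r

# In the sketch of Algorithm 1 above, Step 3 then uses
#   g = direction(F, x, z, r, lam1, lam2, lam3)
# in place of g = F(z) and projects onto C ∩ {y : <g, y - z> <= 0} as before.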

Remark 7. In Algorithm 6, the searching direction is taken as $d_k = \lambda_1 F(x^k) + \lambda_2 F(z^k) + \lambda_3 r_\mu(x^k)$, which is a linear combination of $F(x^k)$, $F(z^k)$, and $r_\mu(x^k)$. When the coefficients are chosen so that only the $F(z^k)$ term survives ($\lambda_2 = 1$, $\lambda_1 = \lambda_3 = 0$), $d_k$ degrades into the direction introduced by Iusem and Svaiter [7], Solodov and Svaiter [8], and Wang et al. [9]; other choices of $(\lambda_1, \lambda_2, \lambda_3)$ recover the directions introduced by He [10], by Noor et al. [12], and by Wang et al. [13]. This shows that our direction provides a framework unifying the directions of the above projection methods.

Lemma 8. Let $x^* \in \mathrm{SOL}(F,C)$, and let the function $h_k$ be defined by (16). Then
$$h_k(x^k) \ge c_0\,\eta_k\,\|r_\mu(x^k)\|^2, \quad \text{where } c_0 = \frac{\lambda_1 + \sigma\lambda_2}{\mu} + \lambda_3 > 0,$$
and $h_k(x^*) \le 0$. In particular, if $r_\mu(x^k) \ne 0$, then $h_k(x^k) > 0$.

Proof. Consider that
$$h_k(x^k) = \langle d_k,\, x^k - z^k\rangle = \eta_k\,\langle d_k,\, r_\mu(x^k)\rangle \ge c_0\,\eta_k\,\|r_\mu(x^k)\|^2,$$
where the inequality follows from (9) and the line search (15). If $r_\mu(x^k) \ne 0$, then $h_k(x^k) > 0$ because $c_0 > 0$. Next, we prove that $h_k(x^*) \le 0$. Since $z^k \in C$, (1) of Lemma 2 yields one inequality and property (2) yields another; adding the two and invoking (9) and the line search (15), we obtain $h_k(x^*) \le 0$.
It follows that $\mathrm{SOL}(F,C) \subseteq H_k$. The proof is completed.

Now we turn to the convergence of Algorithm 6. Certainly, if Algorithm 6 terminates at Step 1 with $r_\mu(x^k) = 0$, then $x^k$ is a solution of VI$(F,C)$. Therefore, in the following analysis, we assume that Algorithm 6 always generates an infinite sequence.

Theorem 9. If $C$ is a nonempty closed and convex set in $\mathbb{R}^n$ and the property (2) holds, then the sequence $\{x^k\}$ generated by Algorithm 6 is bounded and $\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0$.

Proof. Let $x^* \in \mathrm{SOL}(F,C)$. Since $x^{k+1} = P_{C \cap H_k}(x^k)$ and, by Lemma 8, $x^* \in C \cap H_k$, it follows from (2) of Lemma 2 that
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \|x^{k+1} - x^k\|^2. \tag{27}$$
It follows that the sequence $\{\|x^k - x^*\|\}$ is nonincreasing, and hence it is a convergent sequence. Therefore, $\{x^k\}$ is bounded and $\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0$. The proof is completed.

Theorem 10. If $C$ is a nonempty closed and convex set in $\mathbb{R}^n$, $F$ is a continuous mapping from $\mathbb{R}^n$ into itself, and the property (2) holds, then Algorithm 6 generates an infinite sequence $\{x^k\}$ converging to a solution of VI$(F,C)$.

Proof. By Theorem 9, the sequence $\{x^k\}$ is bounded. Since $F$ and the projection operator $P_C$ are continuous, the sequences $\{r_\mu(x^k)\}$ and $\{z^k\}$ are bounded, and hence $\{F(x^k)\}$, $\{F(z^k)\}$, and $\{d_k\}$ are bounded sequences; that is, there exists some $M > 0$ such that $\|d_k\| \le M$ for all $k$. Clearly, each function $h_k$ is Lipschitz continuous on $\mathbb{R}^n$ with modulus $M$. By Lemmas 8 and 4, we obtain
$$\|x^{k+1} - x^k\| = \operatorname{dist}(x^k, C \cap H_k) \ge \operatorname{dist}(x^k, H_k) \ge M^{-1}h_k(x^k) \ge M^{-1}c_0\,\eta_k\,\|r_\mu(x^k)\|^2. \tag{29}$$
Thus, by Theorem 9, we have $\lim_{k\to\infty}\eta_k\|r_\mu(x^k)\|^2 = 0$, which implies that there exists a subsequence $\{x^{k_i}\}$ of $\{x^k\}$ such that either $\lim_{i\to\infty}\|r_\mu(x^{k_i})\| = 0$ or $\lim_{i\to\infty}\eta_{k_i} = 0$.
Suppose first that $\lim_{i\to\infty}\|r_\mu(x^{k_i})\| = 0$. Since $r_\mu$ is continuous and $\{x^{k_i}\}$ is a bounded sequence, there exists an accumulation point $\hat{x}$ of $\{x^{k_i}\}$ such that $r_\mu(\hat{x}) = 0$, which implies that $\hat{x}$ solves VI$(F,C)$. Replacing $x^*$ by $\hat{x}$ in (27), we obtain that the sequence $\{\|x^k - \hat{x}\|\}$ is also nonincreasing and hence converges. Since $\hat{x}$ is an accumulation point of $\{x^k\}$, some subsequence of $\{\|x^k - \hat{x}\|\}$ converges to zero. This shows that the whole sequence $\{\|x^k - \hat{x}\|\}$ converges to zero, and hence $\lim_{k\to\infty} x^k = \hat{x}$.
If $\lim_{i\to\infty}\eta_{k_i} = 0$, then, by the minimality of $m_{k_i}$ in the search procedure (15),
$$\bigl\langle F\bigl(x^{k_i} - \gamma^{m_{k_i}-1}r_\mu(x^{k_i})\bigr),\, r_\mu(x^{k_i})\bigr\rangle < \frac{\sigma}{\mu}\,\|r_\mu(x^{k_i})\|^2.$$
Since $F$ and $r_\mu$ are continuous, we obtain by letting $i \to \infty$ that $\langle F(\hat{x}),\, r_\mu(\hat{x})\rangle \le (\sigma/\mu)\|r_\mu(\hat{x})\|^2$ for any accumulation point $\hat{x}$ of $\{x^{k_i}\}$, which together with (9) and $\sigma < 1$ forces $r_\mu(\hat{x}) = 0$. A similar discussion to the first case then obtains the desired result.

4. The Modified Double Projection Algorithm and Convergence Analysis

In this section, we present the modified double projection algorithm, which finds a solution of VI$(F,C)$ that is also a fixed point of a given nonexpansive mapping. Let $T : C \to C$ be a nonexpansive mapping, and denote by $\mathrm{Fix}(T)$ its fixed point set; that is,
$$\mathrm{Fix}(T) = \{x \in C : T(x) = x\}.$$
Let $\alpha_k \in [a, b]$ for some $a, b \in (0, 1)$.

Algorithm 11. Select $x^0 \in C$, $\mu > 0$, $\sigma \in (0,1)$, $\gamma \in (0,1)$, coefficients $\lambda_1, \lambda_2, \lambda_3 \ge 0$ not all zero, and a sequence $\{\alpha_k\} \subset [a, b]$ with $0 < a \le b < 1$. Set $k = 0$.

Step 1. Compute $r_\mu(x^k)$ and $z^k = x^k - \eta_k r_\mu(x^k)$, where $\eta_k = \gamma^{m_k}$ with $m_k$ being the smallest nonnegative integer satisfying the line search (15).

Step 2. Compute
$$x^{k+1} = \alpha_k x^k + (1 - \alpha_k)\,T\bigl(P_{C \cap H_k}(x^k)\bigr),$$
where $H_k = \{x \in \mathbb{R}^n : h_k(x) \le 0\}$ is a half-space defined by the function $h_k$ in (16).

Let $k := k + 1$, and return to Step 1.
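Assuming the update has the convex-combination form made explicit in the proof of Theorem 12 below, the modified step is a one-liner; `T` is the given nonexpansive mapping and `y_proj` stands for $y^k = P_{C \cap H_k}(x^k)$:

def modified_update(x, y_proj, T, alpha):
    # x^{k+1} = alpha_k x^k + (1 - alpha_k) T(y^k), with alpha_k in [a, b]
    # and 0 < a <= b < 1, so the nonexpansive step is always damped.
    return alpha * x + (1.0 - alpha) * T(y_proj)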

We next establish a convergence theorem for Algorithm 11. We assume that the following condition holds:
$$\mathrm{Fix}(T) \cap \mathrm{SOL}(F,C) \ne \emptyset.$$

We also recall that in $\mathbb{R}^n$,
$$\|\alpha x + (1-\alpha)y\|^2 = \alpha\|x\|^2 + (1-\alpha)\|y\|^2 - \alpha(1-\alpha)\|x - y\|^2 \tag{36}$$
for all $x, y \in \mathbb{R}^n$ and $\alpha \in [0, 1]$.

Theorem 12. Suppose that the assumptions of Theorem 10 hold. Then any sequence $\{x^k\}$ generated by Algorithm 11 converges to some point of $\mathrm{Fix}(T) \cap \mathrm{SOL}(F,C)$.

Proof. Denote $y^k = P_{C \cap H_k}(x^k)$ for all $k$. Let $x^* \in \mathrm{Fix}(T) \cap \mathrm{SOL}(F,C)$. By the definition of $x^{k+1}$, we obtain
$$\|x^{k+1} - x^*\|^2 = \bigl\|\alpha_k(x^k - x^*) + (1-\alpha_k)\bigl(T(y^k) - x^*\bigr)\bigr\|^2 \le \|x^k - x^*\|^2 - (1-\alpha_k)\|y^k - x^k\|^2, \tag{39}$$
where we used (36), the nonexpansiveness of $T$, and (2) of Lemma 2. From (39) and a proof similar to that of Theorem 10, we obtain that $\{x^k\}$ converges to some solution $\bar{x}$ of VI$(F,C)$. It is now left to show that $\bar{x} \in \mathrm{Fix}(T)$. By (39), the sequence $\{\|x^k - x^*\|\}$ is convergent; that is, there exists some $c \ge 0$ such that
$$\lim_{k\to\infty}\|x^k - x^*\| = c. \tag{40}$$
Since $T$ is nonexpansive, we obtain
$$\|T(y^k) - x^*\| \le \|y^k - x^*\| \le \|x^k - x^*\|, \tag{41}$$
which, together with (40), means that $\limsup_{k\to\infty}\|T(y^k) - x^*\| \le c$. Furthermore,
$$\lim_{k\to\infty}\bigl\|\alpha_k(x^k - x^*) + (1-\alpha_k)\bigl(T(y^k) - x^*\bigr)\bigr\| = \lim_{k\to\infty}\|x^{k+1} - x^*\| = c.$$
So, applying Lemma 5, we obtain
$$\lim_{k\to\infty}\|T(y^k) - x^k\| = 0. \tag{45}$$
Since
$$\|T(x^k) - x^k\| \le \|T(x^k) - T(y^k)\| + \|T(y^k) - x^k\| \le \|x^k - y^k\| + \|T(y^k) - x^k\|,$$
it follows from $\lim_{k\to\infty}\|x^k - y^k\| = 0$ (a consequence of (39) and (40)) and (45) that $\lim_{k\to\infty}\|T(x^k) - x^k\| = 0$. Since $T$ is nonexpansive on $C$ (hence continuous) and $\{x^k\}$ converges to $\bar{x}$, we obtain $T(\bar{x}) = \bar{x}$, which means that $\bar{x} \in \mathrm{Fix}(T)$. Therefore, the sequence $\{x^k\}$ converges to $\bar{x} \in \mathrm{Fix}(T) \cap \mathrm{SOL}(F,C)$.

Next, we provide a result on the convergence rate of the iterative sequence generated by Algorithm 11. To establish this result, we need the following local error bound condition to hold.

There exist two positive constants $c$ and $\delta$ such that
$$\operatorname{dist}\bigl(x, \mathrm{SOL}(F,C)\bigr) \le c\,\|r_\mu(x)\| \quad \text{for all } x \in \{x \in C : \|r_\mu(x)\| \le \delta\}. \tag{48}$$

Theorem 13. In addition to the assumptions of Theorem 10, if $F$ is Lipschitz continuous with modulus $L > 0$ and the local error bound condition (48) holds, then there is a constant $\alpha > 0$ such that for sufficiently large $k$,
$$\operatorname{dist}\bigl(x^k, \mathrm{SOL}(F,C)\bigr) \le \frac{1}{\sqrt{\alpha k}}.$$

Proof. Set $\rho = \min\{1,\ \gamma(1-\sigma)/(\mu L)\}$. We first prove that $\eta_k \ge \rho$ for all $k$. By the definition of $\eta_k$, we have $\eta_k \le 1$. If $m_k = 0$, then $\eta_k = 1 \ge \rho$ holds. Now we assume that $m_k > 0$. It follows from the minimality of $m_k$ in the line search (15) that
$$\bigl\langle F\bigl(x^k - \gamma^{m_k-1}r_\mu(x^k)\bigr),\, r_\mu(x^k)\bigr\rangle < \frac{\sigma}{\mu}\,\|r_\mu(x^k)\|^2.$$
From the Lipschitz continuity of $F$ and inequality (9), we have
$$\frac{1-\sigma}{\mu}\,\|r_\mu(x^k)\|^2 < \bigl\langle F(x^k) - F\bigl(x^k - \gamma^{m_k-1}r_\mu(x^k)\bigr),\, r_\mu(x^k)\bigr\rangle \le L\,\gamma^{m_k-1}\,\|r_\mu(x^k)\|^2.$$
Thus, $\eta_k = \gamma^{m_k} > \gamma(1-\sigma)/(\mu L) \ge \rho$.
Denote $a_k = \operatorname{dist}^2(x^k, \mathrm{SOL}(F,C))$. By (29), (39), and the error bound (48), we obtain that for sufficiently large $k$ there is a constant $\alpha > 0$ with
$$a_{k+1} \le a_k - \alpha\,a_k^2.$$
Applying Lemma 6 in [14, Chapter 2], we have $a_k \le 1/(\alpha k)$ for all sufficiently large $k$, which is the desired result.
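For completeness, here is the elementary argument behind [14, Chapter 2, Lemma 6], sketched under our assumption that the recursion has the form $a_{k+1} \le a_k - \alpha a_k^2$ with $a_k \ge 0$ (if some $a_k = 0$, the conclusion is trivial). Taking reciprocals of $a_{k+1} \le a_k(1 - \alpha a_k)$ and using $(1-t)^{-1} \ge 1 + t$ for $t \in [0, 1)$,
$$\frac{1}{a_{k+1}} \ge \frac{1}{a_k}\cdot\frac{1}{1 - \alpha a_k} \ge \frac{1}{a_k} + \alpha,$$
and summing gives $1/a_k \ge 1/a_0 + \alpha k$, that is, $a_k \le 1/(\alpha k)$. With $a_k = \operatorname{dist}^2(x^k, \mathrm{SOL}(F,C))$, this is exactly the $O(1/\sqrt{k})$ rate stated in Theorem 13.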

5. Numerical Experiments

In this section, we present some numerical results to show the effectiveness of the proposed algorithm. The MATLAB codes are run on a notebook computer with an Intel Core 2 CPU at 2.10 GHz and 2.00 GB of RAM under MATLAB Version 7.0. We compare the performance of our Algorithm 6 with [10, Algorithm 2.1] and [8, Algorithm 2.2]. In all tables, Dim denotes the dimension of the problem, CPU denotes the total runtime of the computer in seconds, and iter denotes the total number of iterations. The tolerance $\varepsilon$ means that the procedure stops when $\|r_\mu(x^k)\| \le \varepsilon$. One set of parameter values of Algorithm 6 is used for Tables 1 and 2 and another for Table 3; the parameters for Algorithm 2.1 in [10] and for Algorithm 2.2 in [8] are those proposed in the corresponding references.

Example 14. Consider the affine variational inequality VI$(F,C)$ with $F(x) = Mx + q$ for a given matrix $M$ and vector $q$. This problem was first tested in [15].
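As a usage illustration only (with synthetic data, not the test data of [15]), an affine VI can be passed directly to the sketch of Algorithm 1 given in Section 1:

import numpy as np

rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))
M = A @ A.T + np.eye(n)                  # positive definite, so F is monotone
q = rng.standard_normal(n)
F = lambda x: M @ x + q

lo, hi = np.zeros(n), np.full(n, 10.0)   # box feasible set
x = solodov_svaiter(F, lo, hi, x0=np.ones(n))
r = x - np.clip(x - F(x), lo, hi)        # residual at the returned point
print(np.linalg.norm(r))                 # should be below the tolerance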

From Tables 1 and 2, we can see that our Algorithm 6 is more efficient than Algorithm 2.2 in [8] and Algorithm 2.1 in [10]: the number of iterations of our algorithm is much smaller than those of [8, 10], and the CPU time of our method is shorter than those of [8, 10]. In addition, for a set of similar problems, the number of iterations and the CPU time of the above three methods are not very sensitive to the starting point. In fact, the initial points of the methods can be chosen randomly.

Example 15. The Kojima-Shindo nonlinear complementarity problem (NCP) (with $n = 4$) was considered first in [16], where the function $F$ is defined by
$$F(x) = \begin{pmatrix} 3x_1^2 + 2x_1x_2 + 2x_2^2 + x_3 + 3x_4 - 6 \\ 2x_1^2 + x_1 + x_2^2 + 10x_3 + 2x_4 - 2 \\ 3x_1^2 + x_1x_2 + 2x_2^2 + 2x_3 + 9x_4 - 9 \\ x_1^2 + 3x_2^2 + 2x_3 + 3x_4 - 3 \end{pmatrix}.$$
Let the feasible set be the simplex $C = \{x \in \mathbb{R}^4 : x \ge 0,\ x_1 + x_2 + x_3 + x_4 = 4\}$.
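For reference, the Kojima-Shindo mapping as it is commonly stated in the complementarity literature (a transcription that should be checked against [16]):

import numpy as np

def kojima_shindo(x):
    # Four-dimensional Kojima-Shindo NCP mapping.
    x1, x2, x3, x4 = x
    return np.array([
        3.0*x1**2 + 2.0*x1*x2 + 2.0*x2**2 + x3 + 3.0*x4 - 6.0,
        2.0*x1**2 + x1 + x2**2 + 10.0*x3 + 2.0*x4 - 2.0,
        3.0*x1**2 + x1*x2 + 2.0*x2**2 + 2.0*x3 + 9.0*x4 - 9.0,
        x1**2 + 3.0*x2**2 + 2.0*x3 + 3.0*x4 - 3.0,
    ])

To run the double projection sketch on this example, `project_box` would be replaced by a projector onto the simplex $\{x \ge 0 : \sum_i x_i = 4\}$, which is computable by the classical $O(n\log n)$ sorting-based algorithm.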

Example 16. This example comes from the problem of computing the Cournot-Nash equilibria of $N$-firm noncooperative games. The defining mapping $F$ is of the form $F(x) = (F_1(x), \dots, F_N(x))$ with
$$F_i(x) = c_i'(x_i) - p\Bigl(\sum_{j=1}^N x_j\Bigr) - x_i\,p'\Bigl(\sum_{j=1}^N x_j\Bigr), \quad i = 1, \dots, N,$$
where $p$ is the inverse demand function and $c_i$ is the cost function of the $i$th firm. The constants appearing in $p$ and $c_i$ are positive scalars whose data are taken from [17].
In Table 3, Mathiesen's test problem is Example 15. The test problems Harnash5 and Harnash10 are Example 16 with $N = 5$ and $N = 10$, respectively.

From Table 3, we can see that the number of iterations of our algorithm is smaller than those of the methods in [8, 10]. For Mathiesen's test problem, Table 3 shows that the CPU time of our algorithm is also shorter than those of [8, 10].

Tables 1, 2, and 3 show that constructing the hyperplane by means of the direction $d_k$ noticeably affects the convergence rate of the iterative sequence.

6. Conclusion

In this paper, a class of double projection algorithms for solving pseudomonotone variational inequalities is proposed on the basis of the algorithms in [8, 10]. The global convergence of the proposed algorithms is proved under the condition that $F$ is continuous and pseudomonotone. The numerical results show that our algorithms are more efficient when the direction is chosen properly. How to choose a suitable direction for different kinds of variational inequality problems would be an interesting topic for further research.

In addition, in Algorithm 6, the searching direction is taken as $d_k$, which is a linear combination of $F(x^k)$, $F(z^k)$, and $r_\mu(x^k)$ with nonnegative coefficients. Whether there exist other effective combinations for the searching direction of projection methods is also a topic for further research.

Acknowledgment

This work was supported by the Educational Science Foundation of Chongqing, China (Grant KJ111309).