General fixed-point method for solving the linear complementarity problem

In this paper, we consider numerical methods for the linear complementarity problem (LCP). By introducing a positive diagonal parameter matrix, the LCP is transformed into an equivalent fixed-point equation, and the equivalence is proved. Based on this equation, the general fixed-point (GFP) method, with two iteration forms, is proposed and analyzed when the system matrix is a P-matrix. In addition, we provide several concrete sufficient conditions for the proposed method when the system matrix is a symmetric positive definite matrix or an H+-matrix. We also discuss the optimal case for the proposed method. The numerical experiments show that the GFP method is effective and practical.


Introduction
In this paper, we consider the linear complementarity problem, which is to find a vector x ∈ R^n such that

x^T(Ax + q) = 0, x ≥ 0 and Ax + q ≥ 0, (1.1)

where A = (a_ij) ∈ R^{n×n} and q ∈ R^n are given. For convenience, this problem is usually abbreviated as LCP(A, q). It is related to many practical problems, such as American option pricing problems, market equilibrium problems and free boundary problems for journal bearings; see [1][2][3][4][5] and the references therein.
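As a quick illustration of definition (1.1), the following Python sketch (the function name and the 2×2 instance are our own, not from the paper) checks the three conditions x ≥ 0, Ax + q ≥ 0 and x^T(Ax + q) = 0 for a candidate vector:

```python
import numpy as np

def is_lcp_solution(A, q, x, tol=1e-10):
    """Check whether x solves LCP(A, q): x >= 0, Ax + q >= 0, x'(Ax + q) = 0."""
    w = A @ x + q
    return bool((x >= -tol).all() and (w >= -tol).all() and abs(x @ w) <= tol)

# Tiny hand-made instance (illustrative only):
A = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
x = np.array([0.5, 0.0])          # then Ax + q = (0, 1.5), complementary to x
print(is_lcp_solution(A, q, x))   # True
```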
To obtain the numerical solution of the LCP(A, q) with a large and sparse system matrix, many kinds of iteration methods have been proposed and analyzed in recent decades. The projected methods form a well-known class of iteration methods, which includes the smoothing projected gradient method [6], the projected gradient method [7], the partial projected Newton method [8], the improved projected successive over-relaxation (IPSOR) method [9], and so on. For more detailed material on projected-type iteration methods, we refer readers to [10][11][12][13] and the references therein. There are other efficient iteration methods for solving the LCP, e.g., the modulus-based matrix splitting iteration methods [2], the nonstationary extrapolated modulus algorithms [14], the two-sweep modulus-based matrix splitting iteration methods [15], the general modulus-based matrix splitting method [5], the two-step modulus-based matrix splitting iteration method [16], and the accelerated modulus-based matrix splitting iteration methods [17]. The main difference between the two classes is that the projected-type iteration methods construct their iterative forms directly from an equivalent fixed-point equation based on projection approaches and matrix splittings, whereas the modulus-based matrix splitting iteration methods reformulate the LCP(A, q) as an implicit fixed-point equation by introducing positive diagonal parameter matrices and then construct iterative forms from various matrix splittings, avoiding projections altogether. For other iteration methods for solving complementarity problems, we refer readers to [18][19][20][21][22][23][24] and the references therein.
In [4], Shi, Yang, and Huang presented a fixed-point (FP) method for solving a concrete LCP(A, q) arising in American option pricing problems. The FP method is based on a fixed-point equation and belongs to the projected-type iteration methods. However, the numerical examples in [4] show that the number of iteration steps is very large before a suitable approximate solution is obtained. In this paper, we further study the projected-type iteration methods and consider a general fixed-point equation obtained by introducing a positive diagonal parameter matrix Ω; we note that [4] considered the special case Ω = αI, where α > 0 and I is the identity matrix. We prove the equivalence between the general fixed-point equation and the linear complementarity problem. Based on the new fixed-point equation, we propose the general fixed-point (GFP) method with two iteration forms. We discuss the convergence conditions and provide concrete convergence domains for the proposed method. Moreover, we discuss the optimal parameter problem and obtain an optimal parameter value.
The rest of this paper is organized as follows. In Section 2, we introduce some notations and concepts briefly, and then provide two lemmas which are required to derive the new fixed-point method. In Section 3, we propose the general fixed-point method with convergence analysis. In Section 4, we present numerical examples to illustrate the efficiency of the proposed method. Finally, we give the concluding remark in Section 5.

Preliminaries
In this section, we first briefly review some notations and concepts, then provide two lemmas that will be used in Section 3.
A matrix A = (a_ij) ∈ R^{n×n} is denoted by A ≥ 0 (or A > 0) if a_ij ≥ 0 (or a_ij > 0), and the absolute value matrix of A = (a_ij) ∈ R^{n×n} is denoted by |A| = (|a_ij|). The spectral radius of a square matrix A is denoted by ρ(A). A matrix A = (a_ij) ∈ R^{n×n} is called a Z-matrix if a_ij ≤ 0 for i ≠ j, i, j = 1, 2, ..., n; an M-matrix if A is a Z-matrix with A^{-1} ≥ 0; and a P-matrix if all of its principal minors are positive ([25]). A matrix A = (a_ij) ∈ R^{n×n} is called an H-matrix if its comparison matrix ⟨A⟩ is an M-matrix, and an H+-matrix if it is an H-matrix with all positive diagonal elements, where the comparison matrix ⟨A⟩ = (⟨a⟩_ij) is defined by ⟨a⟩_ii = |a_ii| and ⟨a⟩_ij = −|a_ij| for i ≠ j, i, j = 1, 2, ..., n ([2]). For a given matrix A ∈ R^{n×n}, the splitting A = F − G is called an M-splitting if F is an M-matrix and G ≥ 0 ([26]). For a given vector x ∈ R^n, the symbols x_+ and x_− denote the vectors x_+ = max{0, x} and x_− = max{0, −x}, respectively. For x, x_+ and x_−, we have x = x_+ − x_− and x_+^T x_− = 0.

Lemma 2.1 [27,28] Let A ∈ R^{n×n} be an M-matrix and A = F − G be an M-splitting. Then ρ(F^{-1}G) < 1.

For the equation

x = x_+ − Ω(Ax_+ + q), (2.1)

where Ω is a given positive diagonal matrix, we have the following conclusion.

Lemma 2.2 Let Ω be a positive diagonal matrix.
(i) If x* is a solution of (1.1) and x** = Ax* + q, then x*** = x* − Ωx** is a solution of (2.1).
(ii) If x* is a solution of (2.1), then x*_+ is a solution of (1.1).

Proof of (i). Since x* ≥ 0, x** ≥ 0 and x*^T x** = 0, we have (x***)_+ = x*, and hence (x***)_+ − Ω(A(x***)_+ + q) = x* − Ωx** = x***. That is, x*** is a solution of (2.1).
From Lemma 2.2 (ii), we know that the solution of (1.1) can be obtained by solving Eq (2.1).
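Lemma 2.2 can be checked numerically. The sketch below assumes (2.1) is the fixed-point form x = x_+ − Ω(Ax_+ + q) suggested by Lemma 2.2 (i); the 2×2 instance and all names are illustrative only:

```python
import numpy as np

# Hand-made instance (not from the paper):
A = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
Omega = np.diag([0.5, 2.0])          # any positive diagonal matrix
x_star = np.array([0.5, 0.0])        # solves LCP(A, q)

# (i) x*** = x* - Omega(Ax* + q) should satisfy x = x_+ - Omega(A x_+ + q):
x3 = x_star - Omega @ (A @ x_star + q)
plus = np.maximum(x3, 0.0)
lhs = plus - Omega @ (A @ plus + q)
print(np.allclose(x3, lhs))          # True

# (ii) the positive part of x*** recovers the LCP solution:
print(np.allclose(plus, x_star))     # True
```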

General fixed-point method
In this section, we first prove that the solution of equation (2.1) is unique when the system matrix is a P-matrix, then propose the general fixed-point (GFP) method for solving (1.1) based on (2.1), and discuss the convergence conditions.

Theorem 3.1 If A is a P-matrix, then for any positive diagonal matrix Ω, (2.1) has a unique solution.
Based on (2.1) and Lemma 2.2 (ii), we get an iterative method for solving (1.1):

x^(k+1) = x^(k)_+ − Ω(Ax^(k)_+ + q), k = 0, 1, 2, ..., (3.1)

where the approximate solution of (1.1) is x^(k)_+. We state the algorithm for (3.1) as follows.

Algorithm 1 Iterative method based on (3.1)
1: Given x^(0) ∈ R^n, ε > 0
2: for k = 0, 1, 2, ... do
3:   compute x^(k+1) by (3.1) and RES
4:   if RES < ε then
5:     x = x^(k+1)_+, stop
6:   end if
7: end for

Theorem 3.2 Let {x^(k)}_{k=1}^{+∞} be the sequence generated by (3.1) and x* be the solution of (2.1). If ρ(|I − ΩA|) < 1, then {x^(k)} converges to x* for any initial vector x^(0) ∈ R^n.

Proof. Since x* is the unique solution of (2.1), we have x* = x*_+ − Ω(Ax*_+ + q). Combining with Eq (3.1), we get

|x^(k+1) − x*| = |(I − ΩA)(x^(k)_+ − x*_+)| ≤ |I − ΩA| |x^(k)_+ − x*_+| ≤ |I − ΩA| |x^(k) − x*|,

where the last step uses |x_+ − y_+| ≤ |x − y|. Hence, when ρ(|I − ΩA|) < 1, {x^(k)} converges to x* for any initial vector x^(0) ∈ R^n.
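A minimal Python sketch of Algorithm 1, assuming iteration (3.1) has the form x^(k+1) = x^(k)_+ − Ω(Ax^(k)_+ + q), and using the step difference as a simple stopping test in place of RES (the 2×2 instance is our own):

```python
import numpy as np

def gfp_basic(A, q, Omega, x0, tol=1e-8, max_iter=1000):
    """Assumed iteration (3.1): x^{k+1} = x^k_+ - Omega (A x^k_+ + q);
    the approximate LCP solution is the positive part of the iterate."""
    x = x0.copy()
    for k in range(max_iter):
        xp = np.maximum(x, 0.0)
        x_new = xp - Omega @ (A @ xp + q)
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return np.maximum(x_new, 0.0), k + 1
        x = x_new
    return np.maximum(x, 0.0), max_iter

# Strictly diagonally dominant (hence H+) example with Omega = omega * D^{-1}:
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-2.0, 1.0])
Omega = np.diag(1.0 / np.diag(A))      # omega = 1
x, it = gfp_basic(A, q, Omega, np.zeros(2))
print(x, it)                           # solution and iteration count
```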
Since ρ(A) ≤ ‖A‖ for any matrix norm ‖·‖, we can get the following corollary easily from Theorem 3.2.
Corollary 3.1 Let {x^(k)}_{k=1}^{+∞} be the sequence generated by (3.1) and x* be the solution of (2.1). If ‖I − ΩA‖ < 1, then {x^(k)} converges to x* for any initial vector x^(0) ∈ R^n, where ‖·‖ is any matrix norm.

In the following, we consider two special cases: A is an H+-matrix with Ω = ωD^{-1}, where D = diag(A), and A is a symmetric positive definite matrix with Ω = ωI, where I is the identity matrix.
Theorem 3.3 Suppose A is an H+-matrix. Set Ω = ωD^{-1} with ω > 0, where D = diag(A), and write B = D − A. Let {x^(k)}_{k=1}^{+∞} be the sequence generated by (3.1) and x* be the solution of (2.1). If 0 < ω < 2/(1 + ρ(D^{-1}|B|)), then {x^(k)} converges to x* for any initial vector x^(0) ∈ R^n.

Proof. Since A is an H+-matrix, ⟨A⟩ = D − |B| is an M-matrix and D − |B| is an M-splitting of ⟨A⟩, so ρ(D^{-1}|B|) < 1 by Lemma 2.1. For Ω = ωD^{-1}, we have

|I − ΩA| = |(1 − ω)I + ωD^{-1}B| ≤ |1 − ω|I + ωD^{-1}|B|. (3.4)

It can be easily seen from (3.4) that ρ(|I − ΩA|) ≤ 1 − ω + ωρ(D^{-1}|B|) < 1 for ω ∈ (0, 1], and, for ω > 1 with ω < 2/(1 + ρ(D^{-1}|B|)), ρ(|I − ΩA|) ≤ ω − 1 + ωρ(D^{-1}|B|) < 1. Hence {x^(k)} converges to x* for any initial vector x^(0) ∈ R^n.

Theorem 3.4 Suppose A is a symmetric positive definite matrix. Set Ω = ωI with ω > 0 and denote the smallest and the largest eigenvalues of A by λ_min and λ_max, respectively. Let {x^(k)}_{k=1}^{+∞} be the sequence generated by (3.1) and x* be the solution of (2.1). If 0 < ω < 2/λ_max, then {x^(k)} converges to x* for any initial vector x^(0) ∈ R^n.

Proof. From the proof of Theorem 3.2, similarly, we have

‖x^(k+1) − x*‖_2 ≤ ‖I − ωA‖_2 ‖x^(k) − x*‖_2.

Then

‖I − ωA‖_2 = max{|1 − ωλ_min|, |1 − ωλ_max|},

where ‖·‖_2 is the spectral norm of a matrix. So ‖I − ωA‖_2 < 1 yields the convergence conditions of (3.1), that is 0 < ω ≤ 2/(λ_min + λ_max) and 2/(λ_min + λ_max) ≤ ω < 2/λ_max, which can be combined into 0 < ω < 2/λ_max. Thus, the conclusion is proved.
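The spectral-norm bound used for the symmetric positive definite case in Theorem 3.4 is easy to check numerically: for Ω = ωI, ‖I − ωA‖_2 = max{|1 − ωλ_min|, |1 − ωλ_max|}, which is below 1 precisely when 0 < ω < 2/λ_max. A small sketch (the 2×2 matrix is our own):

```python
import numpy as np

# Symmetric positive definite test matrix (illustrative only):
A = np.array([[3.0, 1.0], [1.0, 2.0]])
lmin, lmax = np.linalg.eigvalsh(A)      # eigenvalues in ascending order

def norm2(omega):
    """Spectral norm of I - omega*A."""
    return np.linalg.norm(np.eye(2) - omega * A, 2)

print(norm2(1.0 / lmax) < 1)            # True: inside 0 < omega < 2/lmax
print(norm2(2.0 / lmax + 0.1) >= 1)     # True: outside the domain
```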
Let A be split as A = D − L − U, where D, −L and −U are the diagonal, strictly lower triangular and strictly upper triangular parts of A, respectively. Using the idea of Gauss-Seidel, we can derive another iterative method for solving (1.1) based on (2.1):

x^(k+1) = x^(k)_+ − Ω(−Lx^(k+1)_+ + (D − U)x^(k)_+ + q), k = 0, 1, 2, ..., (3.5)

which is computed entry by entry: since L is strictly lower triangular, the ith entry of x^(k+1) uses only the already updated entries 1, ..., i − 1. We state the algorithm for (3.5) as follows.
We call iterative methods (3.1) and (3.5) the general fixed-point method. We note that in [4], Shi, Yang, and Huang introduced a fixed-point method, which is a special case of (3.1) with Ω = αI (α > 0). In the following, we analyze the convergence of iterative method (3.5). In particular, we consider the convergence domain of Ω when A is an H + -matrix.
Algorithm 2 Iterative method based on (3.5)
1: Given x^(0) ∈ R^n, ε > 0
2: for k = 0, 1, 2, ... do
3:   compute x^(k+1) by (3.5) entry by entry, and RES
4:   if RES < ε then
5:     x = x^(k+1)_+, stop
6:   end if
7: end for

Theorem 3.5 Suppose A is a P-matrix. Let {x^(k)}_{k=1}^{+∞} be the sequence generated by (3.5) and x* be the unique solution of (2.1). If ρ((I − |ΩL|)^{-1}|I − Ω(D − U)|) < 1, then {x^(k)} converges to x* for any initial vector x^(0) ∈ R^n.

Proof. Since x* is the unique solution of (2.1), we have x* = x*_+ − Ω(−Lx*_+ + (D − U)x*_+ + q). From the above formula and equation (3.5), we have

x^(k+1) − x* = ΩL(x^(k+1)_+ − x*_+) + (I − Ω(D − U))(x^(k)_+ − x*_+).

Thus

|x^(k+1) − x*| ≤ |ΩL| |x^(k+1) − x*| + |I − Ω(D − U)| |x^(k) − x*|.

It follows that

(I − |ΩL|) |x^(k+1) − x*| ≤ |I − Ω(D − U)| |x^(k) − x*|.

Since I − |ΩL| is an M-matrix, i.e., (I − |ΩL|)^{-1} ≥ 0, we have

|x^(k+1) − x*| ≤ (I − |ΩL|)^{-1}|I − Ω(D − U)| |x^(k) − x*|,

so {x^(k)} converges to x* for any initial vector x^(0) ∈ R^n.

We now consider the convergence domain of Ω for iterative method (3.5) when A is an H+-matrix.

Corollary 3.2 Suppose A is an H+-matrix. Set Ω = ωD^{-1} with ω > 0, where D = diag(A), and write B = D − A. If 0 < ω < 2/(1 + ρ(D^{-1}|B|)), then the sequence {x^(k)}_{k=1}^{+∞} generated by (3.5) converges to the unique solution x* of (2.1) for any initial vector x^(0) ∈ R^n.
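A sketch of the entry-by-entry sweep behind Algorithm 2, under the assumption that (3.5) applies the update of (3.1) row by row, so already-updated entries enter the later rows of the sweep (the function name and the 2×2 instance are our own):

```python
import numpy as np

def gfp_gs(A, q, omega_diag, x0, tol=1e-8, max_iter=1000):
    """Assumed Gauss-Seidel variant (3.5): the update
    x_i <- (x_i)_+ - omega_i * (row_i(A) @ x_+ + q_i) is applied entry by
    entry, so rows i+1, ..., n already see the new entries 1, ..., i."""
    n = len(q)
    x = x0.copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            xp = np.maximum(x, 0.0)          # uses updated entries j < i
            x[i] = xp[i] - omega_diag[i] * (A[i] @ xp + q[i])
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return np.maximum(x, 0.0), k + 1
    return np.maximum(x, 0.0), max_iter

# Strictly diagonally dominant (hence H+) example with Omega = D^{-1}:
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-2.0, 1.0])
x, it = gfp_gs(A, q, 1.0 / np.diag(A), np.zeros(2))
print(x, it)                                 # solution and iteration count
```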

Numerical experiments
In this section, we present some numerical examples. Most projected methods involve parameters, and it is not easy to select proper values in practice; moreover, the implicit equations underlying different projected methods differ, so it is difficult to compare the GFP method with other projected methods fairly. We therefore compare the GFP method with no projected method other than the FP method. The modulus-based approaches for solving the LCP(A, q) include many cases; we select the modulus-based SOR (MSOR) iteration method in its best cases for comparison. Besides, we use the GFP method to solve the LCP(A, q) arising in the American option pricing problem ([4]). The number of iteration steps, the elapsed time and the norm of the residual vector are denoted by IT, CPU and RES, respectively, where

RES(x^(k)_+) = ‖min(Ax^(k)_+ + q, x^(k)_+)‖_2

and x^(k)_+ is the kth approximate solution of (1.1). The iteration process stops if RES(x^(k)_+) < 10^{−5} or the number of iteration steps reaches 1000.
The system matrix A in the first two examples is generated in the same way.

Example 4.1 In this example, we compare the GFP method with the MSOR iteration method ([2]), which is a very effective method for solving the LCP(A, q). Both the GFP method and the MSOR iteration method involve a parameter matrix Ω, which has many choices, and it is not appropriate to take the same Ω for both. We set Ω = ωD^{-1} in the GFP method and Ω = ωD in the MSOR iteration method, respectively. For fairness, we compare the two methods in their best cases, found by performing many experiments. Set x^(0) = zeros(n, 1) and n = 10000; the results are reported in Table 1, where Alg 1 and Alg 2 denote Algorithm 1 and Algorithm 2, respectively. From Table 1, we find that iterative method (3.1) is better than the MSOR iteration method in running time, and iterative method (3.5) is better than the MSOR iteration method in iteration steps. Since Algorithm 2 computes the numerical solution entry by entry, fast matrix-vector computation techniques cannot be used, which makes the method more time-consuming. For Algorithm 1, the single-step running time is short, but the number of iteration steps is relatively large. Besides, the best parameters for the MSOR iteration method are selected by many experiments, whereas the best parameter for iterative method (3.1) is near or equal to 1, in line with Theorem 3.3.
From Table 2 and Figure 1, we find that Theorem 3.3 and Corollary 3.2 provide the convergence domains of ω for iterative methods (3.1) and (3.5), respectively, and the initial iteration vector is arbitrary when ω falls in these domains. When ω exceeds the convergence domain, the two iterative methods still converge sometimes when x^(0) = zeros(n, 1). Meanwhile, iterative method (3.5) is usually faster than iterative method (3.1) in terms of IT when ω takes the same value. In addition, this example illustrates the optimal-parameter conclusion of Theorem 3.3, i.e., ω = 1 is a good parameter for iterative method (3.1).

Example 4.3
In this example, we apply the GFP method to solve the LCP(A, q) in [4], where the matrix A satisfies (3.9). We set Ω = ωD^{-1} and λθ = 0.5, 1, 1.5, 2, respectively. As in Example 4.2, ω ranges over an equally spaced grid of values. From Figure 2, we find that both iterative method (3.1) and iterative method (3.5) perform well when ω = 1, i.e., the two iterative methods solve the LCP(A, q) in very few iteration steps. Meanwhile, this example also verifies that iterative method (3.5) is faster than iterative method (3.1) when ω = 1, which is a conclusion given at the end of Section 3.
Example 4.4 In this example, we compare the GFP method with the FP method ([4]). The system matrix is generated with C = diag([1 2 · · · n]) and n = 900. The initial iteration vector is x^(0) = zeros(n, 1). For the FP method, the parameter matrix is Ω = ωI, and for the GFP method, the parameter matrix is Ω = ωD^{-1} = ω(diag(A))^{-1}. We consider two cases in our experiments, namely η = 0 and η = 1; in both cases the matrices are H+-matrices. Since the convergence domains of ω are different, we cannot use the same ω values for the two methods. For the GFP method, based on Theorem 3.3 and Corollary 3.2, the convergence domain is 0 < ω < 2/(1 + ρ(D^{-1}|B|)). For η = 0, since the system matrix is symmetric positive definite, Theorem 3.4 gives the convergence domain 0 < ω < 2/λ_max, and we set ω to be 1/(3λ_max) : 1/(3λ_max) : 2/λ_max. For η = 1, by performing many experiments, we set ω to be 0.0003 : 0.0002 : 0.0013, which includes the convergent parameter values. The numerical results are shown in Tables 3 and 4, respectively. From Tables 3 and 4, we find that, since ρ is very large, the FP method cannot obtain an approximate solution even when the number of iteration steps reaches 1000. On the contrary, for the GFP method, both (3.1) and (3.5) obtain the approximate solution in fewer iteration steps. Therefore, the GFP method is clearly better than the FP method.

Conclusions
In this paper, based on an equivalent fixed-point equation with a parameter matrix Ω, we present the general fixed-point (GFP) method for solving the LCP(A, q), which generalizes the fixed-point (FP) method. For this method, we discussed two iterative forms: the basic form and a converted form associated with matrix splitting. Both iterative forms preserve the sparse structure of A in the iteration process, so the sparsity of A can be exploited to improve the efficiency of the method. For the GFP method, the convergence conditions are proved, and some concrete convergence domains of Ω as well as the optimal cases are presented. The iteration form of the GFP method is simple, and the convergence rate is governed by the spectral radius of the iteration matrix. The numerical experiments show that the GFP method is an effective and competitive iterative method.