A new relaxed acceleration two-sweep modulus-based matrix splitting iteration method for solving linear complementarity problems

Abstract: A new relaxed acceleration two-sweep modulus-based matrix splitting (NRATMMS) iteration method is developed for solving linear complementarity problems. The convergence of the NRATMMS method is established when the system matrix A is an H + -matrix. Numerical experiments show that the proposed method is superior to some existing algorithms under appropriate conditions.


Introduction
Over the past decades, owing to a broad variety of applications in engineering, the sciences and economics, the linear complementarity problem (LCP) has been an active topic in the optimization community and has garnered considerable interest. The LCP is a powerful mathematical model that is intimately related to many significant scientific problems, such as primal-dual linear programming, the bimatrix game, convex quadratic programming, the American option pricing problem and others; see, e.g., [1][2][3] for more details. The LCP consists in determining a vector z ∈ R n such that

z ≥ 0, Az + q ≥ 0, z ⊤ (Az + q) = 0, (1.1)

where A ∈ R n×n and q ∈ R n are given; this problem is denoted by LCP(A, q). The LCP(A, q) of form (1.1), together with its extensions, has been extensively investigated in recent years, and designing efficient numerical algorithms that obtain the solution of the LCP(A, q) (1.1) quickly and economically is of great significance. Numerous iterative algorithms have been developed for solving the LCP(A, q) (1.1) over the past decades, such as the pivot algorithms [1,2,4], the projected iterative methods [5][6][7][8], the multisplitting methods [9][10][11][12][13][14], the Newton-type iteration methods [15,16] and others; see, e.g., [17][18][19] and the references therein.

The modulus-based matrix splitting (MMS) iteration method, first introduced in [20], is particularly attractive for solving the LCP(A, q) (1.1). By the variable transformation z = (|x| + x)/γ and v = (Ω/γ)(|x| − x), and letting A = M − N, Bai [20] reformulated the LCP(A, q) (1.1) as the equivalent modulus equation

(Ω + M)x = N x + (Ω − A)|x| − γq,

where γ > 0 and Ω ∈ R n×n is a positive diagonal matrix. He then skillfully designed a general framework of MMS iteration methods for the large-scale sparse LCP(A, q) (1.1), which exhibits the following formal formulation. Assume that x 0 ∈ R n is an arbitrary initial guess. For k = 0, 1, 2, · · · , compute x k+1 by solving the linear system (Ω + M)x k+1 = N x k + (Ω − A)|x k | − γq, and then set z k+1 = (|x k+1 | + x k+1 )/γ, until the iteration sequence {z k } is convergent.
Here, Ω ∈ R n×n is a positive diagonal matrix and γ is a positive constant.
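As an illustration, the MMS sweep above fits in a few lines of NumPy. This is a minimal dense-matrix sketch (the method targets large sparse systems in practice); the Jacobi-type splitting M = D_A, N = D_A − A and the choice Ω = D_A below are assumptions made purely for the example, not prescriptions from [20].

```python
import numpy as np

def mms(A, q, Omega, gamma=1.0, M=None, N=None, tol=1e-10, max_it=500):
    """MMS sweep: solve (Omega + M) x_{k+1} = N x_k + (Omega - A)|x_k| - gamma*q,
    then recover z_{k+1} = (|x_{k+1}| + x_{k+1}) / gamma."""
    if M is None:                       # default: Jacobi-type splitting A = M - N
        M = np.diag(np.diag(A))
        N = M - A
    x = np.zeros_like(q)
    for k in range(max_it):
        x = np.linalg.solve(Omega + M, N @ x + (Omega - A) @ np.abs(x) - gamma * q)
        z = (np.abs(x) + x) / gamma
        if np.linalg.norm(np.minimum(A @ z + q, z)) < tol:   # LCP residual
            return z, k + 1
    return z, max_it

# Toy LCP(A, q) with a strictly diagonally dominant M-matrix (hence an H_+-matrix):
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
q = np.array([-1.0, 2.0, -3.0])
z, its = mms(A, q, Omega=np.diag(np.diag(A)))
```

With Ω = D_A the iteration map is a contraction for this example, and the sweep converges to the unique solution of the toy problem.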
The MMS iteration method not only covers some existing iteration methods, such as the nonstationary extrapolated modulus method [21] and the modified modulus method [22], as special cases, but also yields a series of modulus-based relaxation methods, such as the modulus-based Jacobi (MJ), modulus-based Gauss-Seidel (MGS), modulus-based successive overrelaxation (MSOR) and modulus-based accelerated overrelaxation (MAOR) methods. Owing to its promising behavior and elegant mathematical properties, the MMS iterative scheme immediately received considerable attention, and diverse versions of it have appeared. For instance, Zheng and Yin [23] established a new class of accelerated MMS (AMMS) iteration methods for solving the large-scale sparse LCP(A, q) (1.1) and explored the convergence of the AMMS method when the system matrix A is a positive definite matrix or an H + -matrix. In order to further accelerate the MMS method, Zheng et al. [24] combined the relaxation strategy with the matrix splitting technique in the modulus equation of [25] and presented a relaxation MMS (RMMS) iteration method for solving the LCP(A, q) (1.1). The parametric selection strategies of the RMMS method were discussed in depth in [24]. In addition, the RMMS method covers the general MMS (GMMS) method [25] as a special case. In the sequel, by extending the two-sweep iteration methods [26,27], Wu and Li [28] developed a general framework of two-sweep MMS (TMMS) iteration methods to solve the LCP(A, q) (1.1), and the convergence of the TMMS method was established when the system matrix A is either an H + -matrix or a positive definite matrix. Ren et al. [29] proposed a class of general two-sweep MMS (GTMMS) iteration methods to solve the LCP(A, q) (1.1), which encompasses the TMMS method for appropriate choices of the parameter matrices. Peng et al.
[30] presented a relaxation two-sweep MMS (RTMMS) iteration method for solving the LCP(A, q) (1.1) and gave its convergence theories with the system matrix A being an H + -matrix or a positive-definite matrix. Huang et al. [31] combined the parametric strategy, the relaxation technique and the acceleration technique to construct an accelerated relaxation MMS (ARMMS) iteration method for solving the LCP(A, q) (1.1). The ARMMS method can be regarded as a generalization of some existing methods, such as the MMS [20], the GMMS [25] and the RMMS [24]. For more modulus-based matrix splitting type iteration methods, see [32][33][34][35][36][37][38][39][40][41] and the references therein.
On the other hand, Bai and Tong [42] equivalently transformed the LCP(A, q) (1.1) into a nonlinear equation without using any variable transformation and proposed an efficient iterative algorithm by combining matrix splittings with extrapolation acceleration techniques. Relaxed versions of the method of [42] were then constructed by Bai and Huang [43], and their convergence theories were established under mild conditions. Recently, Wu and Li [44] recast the LCP(A, q) (1.1) into the implicit fixed-point equation

(Ω + M)z = N z + |(Ω − A)z − q| − q, (1.2)

where A = M − N and Ω ∈ R n×n is a positive diagonal matrix. In fact, if M = A and Ω = I, then (1.2) reduces to the fixed-point equation proposed in [42]. Based on (1.2), the new MMS (NMMS) method for solving the LCP(A, q) (1.1) was constructed in [44].
Let A = M − N be a splitting of the matrix A ∈ R n×n such that the matrix Ω + M is nonsingular, where Ω ∈ R n×n is a positive diagonal matrix. Given a nonnegative initial vector z 0 ∈ R n , for k = 0, 1, 2, · · · until the iteration sequence {z k } is convergent, compute z k+1 ∈ R n by solving the linear system

(Ω + M)z k+1 = N z k + |(Ω − A)z k − q| − q.

It is obvious that the NMMS method does not need any variable transformation, which distinguishes it from the above-mentioned MMS-type iteration methods. Nevertheless, the NMMS method inherits the merits of the MMS-type iteration methods, and several relaxation versions of it have been studied.
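The NMMS sweep admits an equally short sketch. The fixed-point form used below, (Ω + M)z = N z + |(Ω − A)z − q| − q, is a reconstruction of (1.2) that should be checked against [44]; the Gauss-Seidel-type splitting M = D_A − L_A, N = U_A and the choice Ω = D_A (the NMGS configuration used in the experiments of Section 4) are illustrative assumptions.

```python
import numpy as np

def nmgs(A, q, tol=1e-10, max_it=500):
    """NMMS iteration with M = D_A - L_A, N = U_A and Omega = D_A, assuming
    the fixed-point form (Omega + M) z = N z + |(Omega - A) z - q| - q."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                 # A = D_A - L_A - U_A
    U = -np.triu(A, 1)
    Omega, M, N = D, D - L, U
    z = np.zeros_like(q)
    for k in range(max_it):
        z = np.linalg.solve(Omega + M, N @ z + np.abs((Omega - A) @ z - q) - q)
        if np.linalg.norm(np.minimum(A @ z + q, z)) < tol:   # LCP residual
            return z, k + 1
    return z, max_it

# Same toy H_+-matrix LCP as before:
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
q = np.array([-1.0, 2.0, -3.0])
z, its = nmgs(A, q)
```

Note that, unlike the MMS sweep, no auxiliary variable x and no final transformation back to z are needed: the iterate itself approximates the solution.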
Consider the splitting A = D A − L A − U A , where D A , −L A and −U A are the diagonal, strictly lower-triangular and strictly upper-triangular parts of A, respectively. It has been mentioned in [44] that Algorithm 1.2 can reduce to a series of special methods, such as the new modulus-based Gauss-Seidel (NMGS) method, for particular choices of the matrix splitting.
The goal of this paper is to further improve the computing efficiency of Algorithm 1.2 for solving the LCP(A, q) (1.1). To this end, we combine the two-sweep matrix splitting iteration technique of [28,29] with the relaxation technique and construct a new relaxed acceleration two-sweep MMS (NRATMMS) iteration method for solving the LCP(A, q) (1.1). The convergence of the NRATMMS iteration method is analyzed in detail. By choosing suitable parameter matrices, the NRATMMS iteration method generates a family of relaxation methods. Numerical results are reported to demonstrate the efficiency of the NRATMMS iteration method.
The remainder of this paper is organized as follows. In Section 2, we present some notations and definitions used hereinafter. Section 3 is devoted to establishing the NRATMMS iteration method for solving the LCP(A, q) (1.1) and the global linear convergence of the proposed method is explored. Section 4 reports the numerical results. Finally, some concluding remarks are given in Section 5.

Preliminaries
In this section, we collect some notations, classical definitions and some auxiliary results which lay the foundation of our developments.
R n×n denotes the set of all n × n real matrices and R n = R n×1 . I is the identity matrix of suitable dimension. | · | denotes the absolute value of a real scalar or the modulus of a complex scalar. For x ∈ R n , x i refers to its i-th entry, and |x| = (|x 1 |, |x 2 |, · · · , |x n |) ⊤ ∈ R n represents the componentwise absolute value of x. tridiag(a, b, c) denotes a tridiagonal matrix with a, b and c as its subdiagonal, main diagonal and superdiagonal entries, respectively. Tridiag(A, B, C) denotes a block tridiagonal matrix with A, B and C as its subdiagonal, main diagonal and superdiagonal blocks, respectively.
Let P = (p i j ) ∈ R m×n and Q = (q i j ) ∈ R m×n be two matrices; we write P ≥ Q (P > Q) if p i j ≥ q i j (p i j > q i j ) holds for all i and j. For A = (a i j ) ∈ R m×n , A ⊤ and |A| = (|a i j |) ∈ R m×n denote the transpose and the componentwise absolute value of A, respectively. For A = (a i j ) ∈ R n×n , ρ(A) denotes its spectral radius. Moreover, the comparison matrix ⟨A⟩ = (⟨a⟩ i j ) ∈ R n×n of A is defined by ⟨a⟩ i i = |a i i | and ⟨a⟩ i j = −|a i j | for i ≠ j.

A matrix A ∈ R n×n is called a Z-matrix if all of its off-diagonal entries are nonpositive, and a P-matrix if all of its principal minors are positive; A is called an M-matrix if it is a Z-matrix with A −1 ≥ 0, and an H-matrix if its comparison matrix ⟨A⟩ is an M-matrix. In particular, an H-matrix with positive diagonal entries is called an H + -matrix [9]. In addition, a sufficient condition for A to be a P-matrix is that A is an H + -matrix [45]. Finally, the following lemmas are needed in the convergence analysis of the proposed method.

Lemma 1. ( [46]) Let A ∈ R n×n be an H + -matrix. Then the LCP(A, q) (1.1) has a unique solution for any q ∈ R n .

Lemma 2. ( [47]) Let B ∈ R n×n be a strictly diagonally dominant matrix. Then, for all C ∈ R n×n , ∥B −1 C∥ ∞ ≤ max 1≤i≤n (|C|e) i /(⟨B⟩e) i , where e = (1, 1, · · · , 1) ⊤ .
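The definitions above translate directly into a numerical test. The sketch below uses the standard characterization that a Z-matrix B with positive diagonal is an M-matrix if and only if the Jacobi spectral radius ρ(I − D_B^{-1} B) is less than 1; the eigenvalue-based check is practical only for small dense matrices and is meant as an illustration, not a production routine.

```python
import numpy as np

def comparison_matrix(A):
    """<A>: keep |a_ii| on the diagonal, negate the moduli off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_h_plus(A):
    """A is an H_+-matrix iff diag(A) > 0 and <A> is an M-matrix."""
    d = np.diag(A)
    if np.any(d <= 0):
        return False
    C = comparison_matrix(A)
    J = np.eye(A.shape[0]) - C / d[:, None]    # Jacobi iteration matrix of <A>
    return np.max(np.abs(np.linalg.eigvals(J))) < 1.0

A1 = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])  # H_+-matrix
A2 = np.array([[1.0, 2.0], [2.0, 1.0]])                                 # not an H-matrix
```

For A1 the comparison matrix equals A1 itself and the Jacobi radius is about 0.35; for A2 the radius is 2, so the test rejects it.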

The method and convergence
In this section, the NRATMMS iteration method for solving the LCP(A, q) (1.1) is developed, and the general convergence analysis of the NRATMMS iteration method will be explored.
Let A = M 1 − N 1 = M 2 − N 2 be two splittings of the matrix A, and let Ω = Ω 1 − Ω 2 = Ω 3 − Ω 4 , with Ω i (i = 1, 2, 3, 4) being nonnegative diagonal matrices. Then (1.2) can be reformulated as the fixed-point form (3.1), where θ ≥ 0 is a relaxation parameter. Based on (3.1), the NRATMMS iteration method is established as the following Algorithm 3.1.
Let A = M 1 − N 1 = M 2 − N 2 be two splittings of A and Ω i (i = 1, 2, 3, 4) be nonnegative diagonal matrices such that M 1 + Ω 1 is nonsingular. Given two initial guesses z 0 , z 1 ∈ R n and a nonnegative relaxation parameter θ, the iteration sequence {z k } is generated, for k = 1, 2, · · · until convergence, by the iteration scheme derived from (3.1).
Algorithm 3.1 provides a general framework of NMMS iteration methods for solving the LCP(A, q) (1.1), and it yields a series of NMMS-type iteration methods for suitable choices of the matrix splittings and the relaxation parameter. For instance, when θ = 1 and Ω i = 0 (i = 1, 2, 3, 4), Algorithm 3.1 reduces to the new accelerated two-sweep MMS (NATMMS) iteration method. The convergence analysis for Algorithm 3.1 is carried out with the system matrix A of the LCP(A, q) (1.1) being an H + -matrix.
Let A = M 1 − N 1 and A = M 2 − N 2 be an H-splitting and a general splitting of A, respectively, and let Ω i (i = 1, 2, 3, 4) be four nonnegative diagonal matrices such that M 1 + Ω 1 is nonsingular. With L denoting the associated iteration matrix, the iteration sequence {z k } generated by Algorithm 3.1 converges to the unique solution z * for arbitrary two initial vectors if ρ(L) < 1.
Proof. Let z * be the exact solution of the LCP(A, q) (1.1); then z * satisfies the fixed-point equation (3.1). Subtracting the fixed-point relation from the iteration scheme and bounding the error shows that the iteration sequence {z k } converges to the unique solution z * if ρ(L) < 1. Since L ≥ 0, according to Lemma 6, ρ(F + G) < 1 implies ρ(L) < 1. Hence, to prove the convergence of Algorithm 3.1, it suffices to prove ρ(F + G) < 1.
Since A is an H + -matrix and A = M 1 − N 1 is an H-splitting of A, i.e., ⟨M 1 ⟩ − |N 1 | is an M-matrix, Lemma 5 together with ⟨M 1 ⟩ ≥ ⟨M 1 ⟩ − |N 1 | implies that M 1 is an H-matrix, and hence Ω 1 + M 1 is also an H-matrix. In light of Lemma 3, it follows that

Recalling (3.4) and (3.5), we obtain
which yields ρ(F + G) < 1. As a consequence, based on the monotonicity of the spectral radius, the iteration sequence {z k } converges, and the proof is completed. Moreover, the iteration sequence {z k } generated by Algorithm 3.1 converges to the unique solution z * of the LCP(A, q) (1.1) for arbitrary two initial vectors if one of the following two conditions holds. Here, Ω = Ω 1 − Ω 2 = Ω 3 − Ω 4 and V is an arbitrary positive diagonal matrix such that (Ω 1 + ⟨M 1 ⟩)V is a strictly diagonally dominant matrix.

Proof. For the NRATMAOR iteration method, we have
where α, β > 0 are parameters. In order to apply Lemma 8, we need A = M 1 − N 1 to be an H-splitting of A. Since A is an H + -matrix, we have D A > 0. It follows from (3.14) and Lemma 7 that S is an M-matrix if the corresponding inequality holds, which is satisfied if 0 < α ≤ 1 and ϱ < 1, or 1 < α < 2 and ϱ < (2 − α)/α.
In conclusion, A = M 1 − N 1 is an H-splitting of A (or, equivalently, S is an M-matrix) if one of the following four conditions holds. In the following, let Â be defined as above. In order to prove the convergence of the NRATMAOR iteration method, based on Lemma 8, it suffices to prove ρ(L) < 1. Since the first matrix in the splitting of Â is a lower triangular matrix with positive diagonal entries and nonpositive off-diagonal entries, it is an M-matrix. In addition, N̂ ≥ 0. According to Lemma 7, if Â is an M-matrix, then ρ(L) < 1. Thus, we prove in the following that the Z-matrix Â is an M-matrix.
Case I: 0 < θ ≤ 1. In this case, we obtain the expression (3.22) for Â. It is easy to prove that the first term of (3.22) is nonnegative under the stated condition. It then follows from (3.22) and Lemma 5 that Â is an M-matrix whenever T is. By Lemma 7, T is an M-matrix if the corresponding inequality holds, which is satisfied if 0 < α ≤ 1 and ϱ < 1/2, or 1 < α < 2 and ϱ < (2 − α)/(2α).
In Case I, it can be concluded from (i) and (ii) that Â is an M-matrix if one of the following four conditions holds. Case II: θ > 1. In this case, we obtain the expression (3.30) for Â. The first term of (3.30) is nonnegative under the stated condition, and it then follows from (3.30) that: (a) if 0 < β ≤ α, then by (3.33) and Lemma 5, Â is an M-matrix whenever R is, and it follows from Lemma 7 that R is an M-matrix under the corresponding condition; (b) if β > α, then, analogously, Â is an M-matrix whenever R̂ is, and it follows from Lemma 7 that R̂ is an M-matrix under the corresponding condition. In Case II, it can be concluded from (a) and (b) that Â is an M-matrix if one of the following four conditions holds. The proof is completed by combining Cases I and II.

Numerical results
In this section, three numerical examples are performed to validate the effectiveness of the NRATMMS iteration method.
All test problems are run in MATLAB R2016a on a personal computer with a 1.19 GHz central processing unit (Intel(R) Core(TM) i5-1035U), 8.00 GB memory and the Windows 10 operating system. In the numerical results, we report the number of iteration steps (denoted by "IT"), the elapsed CPU time in seconds (denoted by "CPU") and the norm of the absolute residual vector (denoted by "RES"). Here, RES is defined by RES(z k ) = ∥min{Az k + q, z k }∥ 2 , where the minimum is taken componentwise.
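The residual above is straightforward to compute; a minimal sketch follows, checked against a toy LCP whose solution is known in closed form (the matrix and right-hand side are illustrative choices, not one of the paper's test problems).

```python
import numpy as np

def res(A, q, z):
    """RES(z) = || min(Az + q, z) ||_2, with the minimum taken componentwise;
    RES(z) = 0 exactly when z solves the LCP(A, q)."""
    return np.linalg.norm(np.minimum(A @ z + q, z))

# Toy LCP with known solution z* = (1/4, 0, 3/4):
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
q = np.array([-1.0, 2.0, -3.0])
z_star = np.array([0.25, 0.0, 0.75])
```

At z* the residual vanishes, while at z = 0 it equals ∥min{q, 0}∥ 2 = √10 for this data.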
As shown in [44], the NMGS method performs best among the six methods tested there when solving the LCP(A, q) in the three examples. Therefore, in this paper we focus on comparing the NMGS method of [44] with the NRATMGS method. For the NMGS iteration method, Ω = D A is used [44]. For the NRATMGS method, we take Ω 1 = Ω 3 = D A , Ω 2 = Ω 4 = 0, M 2 = A, N 2 = 0 and α = β = 1. In addition, the experimentally optimal parameter θ exp of the NRATMGS iteration method is obtained by scanning θ from 0 to 2 (with step size 0.1 for Examples 4.1 and 4.2, and step size 0.01 for Example 4.3) and minimizing the corresponding iteration count. For the sake of fairness, each method is run 10 times and the average computing time is reported as the CPU time. Both methods are started from the initial vectors z 0 = z 1 = (1, 0, 1, 0, · · · ) ⊤ and stopped if RES(z k ) < 10 −5 or the prescribed maximal iteration number k max = 500 is exceeded. The involved linear systems are solved by MATLAB's backslash operator "\". Numerical results for Examples 4.1-4.3 are reported in Tables 1-3. It follows from Tables 1-3 that the NRATMGS method outperforms the NMGS method (and the other methods tested in [44]) in terms of both iteration count and CPU time when the parameter θ exp is selected appropriately.
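The selection of θ exp described above is simply an argmin over a grid of relaxation parameters. The sketch below captures that protocol generically; the quadratic iteration-count profile in the demo is purely hypothetical, standing in for an actual solver run.

```python
def best_theta(run, thetas):
    """Pick the relaxation parameter minimizing the iteration count,
    where run(theta) returns a pair (solution, iteration_count)."""
    counts = {theta: run(theta)[1] for theta in thetas}
    return min(counts, key=counts.get), counts

# Hypothetical count profile with its minimum at theta = 0.7 (illustration only):
demo = lambda theta: (None, round(100 * (theta - 0.7) ** 2) + 3)
grid = [round(0.1 * i, 1) for i in range(21)]        # 0, 0.1, ..., 2.0
theta_exp, counts = best_theta(demo, grid)
```

In practice `run` would wrap one full NRATMGS solve at the given θ, so the scan costs one solve per grid point.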

Conclusions
In this paper, by applying matrix splitting, the relaxation technique and the two-sweep iteration form to the new modulus-based matrix splitting formula, we propose a new relaxed acceleration two-sweep modulus-based matrix splitting (NRATMMS) iteration method for solving the LCP(A, q) (1.1). We investigate the convergence of the NRATMMS iteration method when the system matrix A of the LCP(A, q) (1.1) is an H + -matrix. Numerical experiments illustrate that the NRATMMS iteration method is effective and can be superior to some existing methods. However, the theoretically optimal choices of the parameters deserve further investigation.