Two effective inexact iteration methods for solving the generalized absolute value equations

Abstract: Modified Newton-type methods are efficient for addressing the generalized absolute value equations. In this paper, to further speed up the modified Newton-type methods, two new inexact modified Newton-type iteration methods are proposed. Sufficient conditions for the convergence of the two proposed inexact iteration methods are given. Moreover, to demonstrate the efficacy of the new methods, several numerical examples are provided.


Introduction
The main research of this article concerns the generalized absolute value equation (GAVE)

Ax − B|x| = b,   (1.1)

where A, B ∈ R^{n×n}, b ∈ R^n and |x| := (|x_1|, |x_2|, ..., |x_n|)^T. When B is the identity matrix, the GAVE (1.1) reduces to the absolute value equation (AVE)

Ax − |x| = b.   (1.2)

The GAVE (1.1) was first proposed by Rohn as a class of nonlinear nondifferentiable optimization problems [1]. Since then, the AVE (1.2) and the GAVE (1.1) have found applications in many areas of optimization, including the linear complementarity problem, bimatrix games, constrained least squares problems, and so on; see [1-6]. For instance, the famous linear complementarity problem (LCP) [5] asks to determine z such that

Mz + q ≥ 0, z ≥ 0 and z^T(Mz + q) = 0, with M ∈ R^{n×n}, q ∈ R^n.   (1.3)

In [3], Mangasarian proved that the GAVE (1.1) and the LCP (1.3) can be transformed into each other under some conditions, and he also established the unique solvability of the AVE. Moreover, many scholars have further studied the solvability and unique-solution theory of the AVE (1.2) [7-9]. In early algorithmic research, some efficient iteration methods were proposed to address the GAVE (1.1) and the AVE (1.2); see Mangasarian [2, 10, 11], Rohn [12], and so on. Then, in order to overcome the difficulty of constructing the gradient of the absolute value term, Mangasarian [13] presented the generalized Newton (GN) algorithm using the generalized Jacobian matrix. This method may be summarized as follows:

x^{(k+1)} = (A − D(x^{(k)}))^{−1} b, k = 0, 1, 2, ...,   (1.4)

where D(x) := diag(sgn(x)), x ∈ R^n, and sgn(x) denotes the vector whose components equal 1, 0 or −1 depending on whether the corresponding component of x is positive, zero or negative. Moreover, the GN algorithm also applies to the GAVE (1.1) if BD(x^{(k)}) is used instead of D(x^{(k)}). Based on the GN method, Li [14] proposed a modified generalized Newton method.
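As a concrete illustration, the GN step (1.4), with the BD(x^{(k)}) substitution for the GAVE, can be sketched in a few lines of NumPy. The function name, tolerance handling and iteration cap below are our own illustrative choices, not from the cited papers.

```python
import numpy as np

def generalized_newton(A, B, b, tol=1e-10, max_iter=100):
    """Generalized Newton sketch for the GAVE Ax - B|x| = b.

    Each step solves (A - B D(x_k)) x_{k+1} = b, where
    D(x) = diag(sgn(x)) is a generalized Jacobian of |x|.
    """
    x = np.zeros(len(b))
    for k in range(max_iter):
        D = np.diag(np.sign(x))
        x_new = np.linalg.solve(A - B @ D, b)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

Note that the coefficient matrix A − BD(x^{(k)}) changes at every step, which is exactly the computational drawback discussed next.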
It should be noticed that the Jacobian matrix changes with the iteration index k, which greatly increases the amount of computation. To avoid the changing Jacobian matrix, many methods for the AVE (1.2) and the GAVE (1.1) have been proposed. For instance, Rohn et al. [15] presented the Picard method for invertible A:

x^{(k+1)} = A^{−1}(B|x^{(k)}| + b), k = 0, 1, 2, ....   (1.5)

In [16], Wang et al. added a new term Ωx (where Ω is positive semi-definite and guarantees that A + Ω is invertible) to the general problem; noticing that Ax + Ωx is differentiable while Ωx + B|x| + b is nondifferentiable, they studied the modified Newton-type (MN) iteration method:

x^{(k+1)} = (A + Ω)^{−1}(Ωx^{(k)} + B|x^{(k)}| + b), k = 0, 1, 2, ....   (1.6)

To further speed up the AOR-type method, Li et al. [17] presented a generalized version of the algorithm. Actually, the above algorithms all belong to the Newton-based matrix splitting methods and their relaxed versions proposed in [18]. Sometimes the coefficient matrix A has special properties, for example when A is an M-matrix, under which condition Ali et al. [19] proposed two new generalized Gauss-Seidel iteration methods. In [20], a new iteration method was presented by equivalently rewriting the AVE (1.2) as a 2 × 2 block nonlinear equation. Also using matrix blocking, the block-diagonal and anti-block-diagonal splitting (BAS) method [21] and the modified BAS method [22] were proposed.
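The MN iteration above can be sketched directly; a minimal NumPy version follows, with illustrative parameter names. Its key practical advantage over the GN method is visible in the code: the coefficient matrix A + Ω is fixed, so in practice it would be factorized once outside the loop (we simply call np.linalg.solve to keep the sketch short).

```python
import numpy as np

def modified_newton(A, B, b, Omega, tol=1e-10, max_iter=500):
    """Modified Newton-type (MN) sketch for the GAVE Ax - B|x| = b:

        x_{k+1} = (A + Omega)^{-1} (Omega x_k + B|x_k| + b).

    The matrix A + Omega does not depend on k.
    """
    x = np.zeros(len(b))
    M = A + Omega
    for k in range(max_iter):
        x_new = np.linalg.solve(M, Omega @ x + B @ np.abs(x) + b)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```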
Recently, to further improve the efficiency of solving the AVE (1.2), some inexact iteration algorithms have been studied. Each step of the Picard iteration can be viewed as solving a linear system with coefficient matrix A. Therefore, Salkuyeh [23] presented the Picard-HSS method for the AVE (1.2), which uses the HSS method [24] to approximate the iterate x^{(k+1)} at each Picard step. On this basis, Miao et al. [25] used the single-step HSS (SHSS) method [26] to solve the linear system in the Picard iteration and proposed the Picard-SHSS method, a new Picard-type method. Inspired by these, we adopt the modified Newton-type iteration method, with its faster convergence speed, as the outer iteration, and the HSS and SHSS methods as the inner iterations, respectively; we thereby obtain two new inexact iterative algorithms, abbreviated as the MN-HSS method and the MN-SHSS method.
The remainder of this article is laid out as follows. In Section 2, the MN-HSS method is described, and we investigate its convergence property. Section 3 is devoted to introducing the MN-SHSS method and analyzing its convergence. Section 4 contains several numerical experiments, showing the feasibility and efficiency of the two new inexact iteration methods. Section 5 draws some conclusions.

The MN-HSS method for solving GAVE
At the beginning of this section, some notations are given below. ones(n, 1) represents the vector (1, 1, ..., 1)^T ∈ R^n. ρ(A) indicates the spectral radius of A. The symbol (·)^T represents the transpose of a vector or matrix. Moreover, lim_{k→+∞} A^k = 0 if and only if ρ(A) < 1 (see [27, 28]).
Let A ∈ R^{n×n} be a non-Hermitian positive definite matrix, and let H = (1/2)(A + A^T) and S = (1/2)(A − A^T) be the Hermitian and skew-Hermitian parts of A, so that A = H + S. The HSS iteration method was presented by Bai et al. [24] for addressing the positive definite system Ax = b. Its iterative process can be expressed as follows:

(αI + H)x^{(k+1/2)} = (αI − S)x^{(k)} + b,
(αI + S)x^{(k+1)} = (αI − H)x^{(k+1/2)} + b,   (2.1)

where α is a positive constant. Section 1 has introduced the MN method, which is equivalent to the following form:

x^{(k+1)} = (A + Ω)^{−1}(Ωx^{(k)} + B|x^{(k)}| + b), k = 0, 1, 2, ....   (2.2)

The MN-HSS method approximates each MN step by applying l_k steps of the HSS iteration, with H and S now the Hermitian and skew-Hermitian parts of A + Ω, to the linear system (A + Ω)x = Ωx^{(k)} + B|x^{(k)}| + b, starting from x^{(k)}. Writing T_α := (αI + S)^{−1}(αI − H)(αI + H)^{−1}(αI − S), the resulting scheme is

x^{(k+1)} = T_α^{l_k} x^{(k)} + (I − T_α^{l_k})(A + Ω)^{−1}(Ωx^{(k)} + B|x^{(k)}| + b), k = 0, 1, 2, ....   (2.3)

Theorem 2.1. Let A, B ∈ R^{n×n}, let α be a positive constant, and choose Ω ∈ R^{n×n} such that A + Ω is positive definite and

η := ||(A + Ω)^{−1}||_2 (||Ω||_2 + ||B||_2) < 1.

Then the GAVE (1.1) has a unique solution x*, and for any starting vector x^{(0)} ∈ R^n and any sequence of positive integers l_k, k = 0, 1, ..., the iteration sequence {x^{(k)}}_{k=0}^∞ produced by the MN-HSS iteration method converges to x*, provided that l = lim inf_{k→∞} l_k ≥ N, where N is a natural number satisfying

||T_α^s||_2 < (1 − η)/(1 + η), ∀s ≥ N.

Proof. According to the iteration formula (2.3), the (k+1)th iterate x^{(k+1)} of the MN-HSS iteration method satisfies

x^{(k+1)} = T_α^{l_k} x^{(k)} + (I − T_α^{l_k})(A + Ω)^{−1}(Ωx^{(k)} + B|x^{(k)}| + b).   (2.4)

On the other hand, the unique solution x* of the GAVE (1.1) satisfies

x* = T_α^{l_k} x* + (I − T_α^{l_k})(A + Ω)^{−1}(Ωx* + B|x*| + b).   (2.5)

By subtracting (2.5) from (2.4), we have

x^{(k+1)} − x* = T_α^{l_k}(x^{(k)} − x*) + (I − T_α^{l_k})(A + Ω)^{−1}(Ω(x^{(k)} − x*) + B(|x^{(k)}| − |x*|)).   (2.6)

It follows from [24] that ρ(T_α) < 1. It is obvious that || |x| − |y| ||_2 ≤ ||x − y||_2 for any x, y ∈ R^n. Therefore, taking 2-norms in (2.6) yields

||x^{(k+1)} − x*||_2 ≤ ( ||T_α^{l_k}||_2 + (1 + ||T_α^{l_k}||_2) η ) ||x^{(k)} − x*||_2.   (2.7)

On the other hand, since ρ(T_α) < 1, lim_{s→∞} T_α^s = 0. Therefore, there is a natural number N such that ||T_α^s||_2 < (1 − η)/(1 + η), ∀s ≥ N, and for such s the contraction factor in (2.7) is less than 1.
Thus, assuming that l = lim inf_{k→∞} l_k ≥ N, we obtain the convergence result. This completes the proof.
The MN-HSS method can also be expressed in the residual-updating form.

Algorithm (the residual-updating variant of the MN-HSS iteration method). For k = 0, 1, ..., until convergence, do:
1. Compute the residual r^{(k)} = b + B|x^{(k)}| − Ax^{(k)}.
2. Approximately solve (A + Ω)s^{(k)} = r^{(k)} by l_k steps of the HSS iteration, starting from s^{(k,0)} = 0.
3. Set x^{(k+1)} = x^{(k)} + s^{(k)}.
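The residual-updating steps above can be sketched with dense NumPy matrices as follows. The parameter names (alpha, inner_l) and the relative-residual stopping test are illustrative choices; a serious implementation would factorize αI + H and αI + S once and reuse the factors, as described in the experiments section.

```python
import numpy as np

def mn_hss(A, B, b, Omega, alpha, inner_l=5, tol=1e-7, max_iter=500):
    """Residual-updating MN-HSS sketch for the GAVE Ax - B|x| = b.

    Outer loop: MN correction (A + Omega) s = r_k with
    r_k = b + B|x_k| - A x_k; inner loop: l_k HSS sweeps on that
    linear system, starting from s = 0.
    """
    M = A + Omega
    H = (M + M.T) / 2.0            # Hermitian part of A + Omega
    S = (M - M.T) / 2.0            # skew-Hermitian part of A + Omega
    n = len(b)
    I = np.eye(n)
    x = np.zeros(n)
    for k in range(max_iter):
        r = b + B @ np.abs(x) - A @ x          # outer residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        s = np.zeros(n)
        for _ in range(inner_l):               # l_k HSS sweeps
            s_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ s + r)
            s = np.linalg.solve(alpha * I + S, (alpha * I - H) @ s_half + r)
        x = x + s
    return x, max_iter
```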

The MN-SHSS method for solving GAVE
From the iteration scheme (2.3), we can see that each inner HSS step is equivalent to solving two linear subsystems, whose coefficient matrices are Hermitian and skew-Hermitian, respectively. The first subsystem, with a Hermitian coefficient matrix, can be solved quickly by the conjugate gradient method or the Cholesky factorization. However, the other subsystem needs much more computation. To accelerate the algorithm, we replace the inner iteration method with the single-step HSS (SHSS) method, and thus present a new inexact method in this section, named the MN-SHSS iteration method.
Compared with the HSS method, the SHSS method omits the second subsystem with the skew-Hermitian coefficient matrix, and can be expressed as

(αI + H)x^{(k+1)} = (αI − S)x^{(k)} + b, k = 0, 1, 2, ....   (3.1)

By suitably restricting the iteration parameter α, Li et al. [26] proved that the SHSS method converges to the unique solution of Ax = b for any initial guess x^{(0)}. Replacing the inner HSS iteration of the MN-HSS method by the SHSS iteration, and writing K_α := (αI + H)^{−1}(αI − S), where H and S are now the Hermitian and skew-Hermitian parts of A + Ω, the MN-SHSS method can be written as

x^{(k+1)} = K_α^{l_k} x^{(k)} + (I − K_α^{l_k})(A + Ω)^{−1}(Ωx^{(k)} + B|x^{(k)}| + b), k = 0, 1, 2, ....   (3.2)
Theorem 3.1. Let A, B ∈ R^{n×n}, and choose a suitable Ω ∈ R^{n×n} such that A + Ω is a positive definite matrix; then H = (1/2)((A + Ω) + (A + Ω)^T) and S = (1/2)((A + Ω) − (A + Ω)^T) are the Hermitian and skew-Hermitian parts of A + Ω, respectively. Let also ||A^{−1}B||_2 < 1, let

η := ||(A + Ω)^{−1}||_2 (||Ω||_2 + ||B||_2) < 1,

and let α be a constant such that α > max{0, (δ_max^2 − λ_min^2)/(2λ_min)}, where δ_max is the largest singular value of S and λ_min is the smallest eigenvalue of H. Then the GAVE (1.1) has a unique solution x*, and for any starting vector x^{(0)} ∈ R^n and any sequence of positive integers l_k, k = 0, 1, ..., the iteration sequence {x^{(k)}}_{k=0}^∞ produced by the MN-SHSS iteration method converges to x*, provided that l = lim inf_{k→∞} l_k ≥ N, where N is a natural number satisfying

||K_α^s||_2 < (1 − η)/(1 + η), ∀s ≥ N.

Proof. According to the iteration formula (3.2), the (k+1)th iterate x^{(k+1)} of the MN-SHSS method is

x^{(k+1)} = K_α^{l_k} x^{(k)} + (I − K_α^{l_k})(A + Ω)^{−1}(Ωx^{(k)} + B|x^{(k)}| + b).   (3.3)

On the other hand, the unique solution x* of the GAVE (1.1) satisfies

x* = K_α^{l_k} x* + (I − K_α^{l_k})(A + Ω)^{−1}(Ωx* + B|x*| + b).   (3.4)

Subtracting (3.4) from (3.3), we have

x^{(k+1)} − x* = K_α^{l_k}(x^{(k)} − x*) + (I − K_α^{l_k})(A + Ω)^{−1}(Ω(x^{(k)} − x*) + B(|x^{(k)}| − |x*|)).   (3.5)

In [26], Li et al. proved that ρ(K_α) < 1 when α > max{0, (δ_max^2 − λ_min^2)/(2λ_min)}. Since || |x| − |y| ||_2 ≤ ||x − y||_2 for any x, y ∈ R^n, taking 2-norms in (3.5) gives

||x^{(k+1)} − x*||_2 ≤ ||K_α^{l_k}||_2 ||x^{(k)} − x*||_2 + (1 + ||K_α^{l_k}||_2) η ||x^{(k)} − x*||_2.   (3.6)

Therefore, we can calculate that

||x^{(k+1)} − x*||_2 ≤ ( ||K_α^{l_k}||_2 + (1 + ||K_α^{l_k}||_2) η ) ||x^{(k)} − x*||_2.   (3.7)

Due to the condition ρ(K_α) < 1, we get K_α^s → 0 as s → ∞. So there is a natural number N such that ||K_α^s||_2 < (1 − η)/(1 + η), ∀s ≥ N, and for such s the contraction factor in (3.7) is less than 1.
That is to say, the iteration sequence {x^{(k)}}_{k=0}^∞ produced by the MN-SHSS iteration method converges to x*. This completes the proof.
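A residual-updating sketch of the MN-SHSS method mirrors the MN-HSS one, except that each inner sweep keeps only the Hermitian half-step and therefore needs a single solve with the symmetric positive definite matrix αI + H. Parameter names and the stopping test are again illustrative choices.

```python
import numpy as np

def mn_shss(A, B, b, Omega, alpha, inner_l=5, tol=1e-7, max_iter=500):
    """Residual-updating MN-SHSS sketch for the GAVE Ax - B|x| = b.

    Inner sweep (one SPD solve per sweep):
        (alpha*I + H) s_new = (alpha*I - S) s + r_k.
    """
    M = A + Omega
    H = (M + M.T) / 2.0            # Hermitian part of A + Omega
    S = (M - M.T) / 2.0            # skew-Hermitian part of A + Omega
    n = len(b)
    I = np.eye(n)
    x = np.zeros(n)
    for k in range(max_iter):
        r = b + B @ np.abs(x) - A @ x          # outer residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        s = np.zeros(n)
        for _ in range(inner_l):               # l_k SHSS sweeps
            s = np.linalg.solve(alpha * I + H, (alpha * I - S) @ s + r)
        x = x + s
    return x, max_iter
```

In practice the Cholesky factor of αI + H would be computed once and reused for every sweep and every outer step.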

Numerical experiments
In this section, the effectiveness of the MN-HSS and MN-SHSS methods is shown by comparing them with other existing inexact iteration methods for solving the GAVE (1.1): the Picard-HSS method and the Picard-SHSS method. In addition, we also compare our methods with a descent method [29]. For this purpose, we compare the following aspects: the number of outer iteration steps (denoted by 'IT'), the elapsed CPU time (denoted by 'CPU') and the residual error (denoted by 'RES'), where

RES := ||Ax^{(k)} − B|x^{(k)}| − b||_2 / ||b||_2.

In our computations, we always use the residual-updating versions, the optimal parameters α resulting in the fewest iteration steps for the algorithms mentioned above, and the zero vector as the starting iteration vector. We use the Cholesky factorization to solve the equations whose coefficient matrix is αI + H, and the LU factorization for the other subsystem. All runs are terminated once the residual error satisfies RES ≤ 10^{−7} or the maximum iteration number k_max = 500 is exceeded. The stopping criteria of the inner iterations are set accordingly for each method.

Example 1. Consider the LCP (1.3), in which M = M̂ + μI and q = −Mz*, where M̂ = tridiag(−I, S, −I) ∈ R^{n×n} is a block tridiagonal matrix, S = tridiag(−1, 4, −1) ∈ R^{m×m} is a tridiagonal matrix, and n = m^2. The LCP (1.3) can be transformed into the GAVE

(M + I)x − (I − M)|x| = −q,   (4.1)

with z = |x| + x, so the GAVE coefficient matrices are A = M + I and B = I − M. Then we can know that z* = 1.2 * ones(n, 1) is the unique solution of the LCP (1.3) and the exact solution of the GAVE (4.1) is x* = 0.6 * ones(n, 1).
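The LCP-to-GAVE conversion with z = |x| + x and w = |x| − x can be checked numerically on a small instance. The assembly below follows the Example 1 construction as reconstructed above (M = M̂ + μI, q = −Mz*); the function name is our own.

```python
import numpy as np

def lcp_to_gave(M, q):
    """Convert LCP(q, M) into a GAVE A x - B|x| = b via z = |x| + x,
    w = |x| - x, which gives A = M + I, B = I - M, b = -q."""
    I = np.eye(M.shape[0])
    return M + I, I - M, -q

# Small Example 1-style instance: Mhat = tridiag(-I, S, -I), S = tridiag(-1, 4, -1).
m = 3
S = 4 * np.eye(m) - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1)
E = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
Mhat = np.kron(np.eye(m), S) - np.kron(E, np.eye(m))
mu = 4.0
M = Mhat + mu * np.eye(m * m)
zstar = 1.2 * np.ones(m * m)
q = -M @ zstar                       # makes z* the LCP solution (w* = 0)
A, B, b = lcp_to_gave(M, q)
xstar = 0.6 * np.ones(m * m)         # expected GAVE solution
```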
In this experiment, we pick three scenarios for the parameter µ = 4, −1, −4. The matrices A and M are both symmetric positive definite when µ = 4, and both symmetric indefinite when µ = −4. The matrix A is symmetric positive definite when µ = −1, but M is symmetric indefinite. We choose four different sizes of n (n = 3600, 4900, 6400, 8100) for each parameter µ. Considering the simplicity and efficiency of our experiments, we use the same matrix Ω = 5I in the MN-HSS and MN-SHSS methods. In the methods we test, the value of α is chosen to result in the fewest iteration steps; see Table 1. In Tables 2-4, we list the numerical results of the four studied inexact iteration methods for µ = 4, −1, −4. In these tables, "-" indicates that the relevant algorithm does not converge to the solution within the maximum number k_max of iteration steps, or even diverges.

Table 3. Numerical results of the testing methods for Example 1 with µ = −1.
From Table 2, we can see that when µ = 4, the five evaluated methods successfully produce an approximate solution of the GAVE (4.1) for all matrix scales. For each tested method, the CPU time increases with the scale n of the coefficient matrix. The numerical results show that the two inexact iteration methods proposed here outperform the Picard-HSS and Picard-SHSS methods in both the number of iteration steps and the CPU time. Although the descent method needs fewer iteration steps, our methods require less CPU time. When µ = −1, −4, the convergence of the Picard-HSS and Picard-SHSS iteration methods cannot be guaranteed; accordingly, Tables 3 and 4 show that the two Picard-type methods converge slowly or do not converge at all, while the MN-HSS and MN-SHSS methods still converge quickly. The freedom in choosing the matrix Ω gives the algorithms we propose greater stability. When µ = −1, our methods still generally outperform the descent method in elapsed CPU time, and the descent method cannot converge within the maximum iteration number.

Table 4. Numerical results of the testing methods for Example 1 with µ = −4.
Example 2. Consider the two-dimensional convection-diffusion equation

−(u_xx + u_yy) + q(u_x + u_y) + pu = f(x, y), (x, y) ∈ S, u = 0 on ∂S,

where S = (0, 1) × (0, 1), ∂S is its boundary, q is a constant and p is a real number. For the discretization, the five-point finite difference scheme is used for the diffusive terms and the central difference scheme for the convective terms, respectively; we then obtain the coefficient matrix A.
A = I_m ⊗ T_x + T_y ⊗ I_m + pI_n, where I_m and I_n are the identity matrices of order m and n with n = m^2, ⊗ denotes the Kronecker product, and T_x = tridiag(a_2, a_1, a_3) ∈ R^{m×m} and T_y = tridiag(a_2, 0, a_3) ∈ R^{m×m} with a_1 = 4, a_2 = −1 − Re, a_3 = −1 + Re. Here Re = qh/2 and h = 1/(m+1) are the mesh Reynolds number and the equidistant step size, respectively, and q is a positive constant. Let the coefficient matrix B = I and the exact solution x* = (−1, 2, −3, ..., (−1)^i i, ..., (−1)^n n)^T; then the right-hand side vector can also be determined. In this instance the GAVE actually becomes an AVE (1.2).
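The Example 2 problem can be assembled with a few Kronecker products. The sketch below uses the assumed form A = I_m ⊗ T_x + T_y ⊗ I_m + pI_n reconstructed above, with tridiag(a_2, a_1, a_3) meaning subdiagonal a_2, diagonal a_1 and superdiagonal a_3; the function name is our own.

```python
import numpy as np

def build_example2(m, q, p):
    """Assemble the Example 2 matrix A, right-hand side b and exact
    solution x* of the AVE Ax - |x| = b (assumed assembly)."""
    n = m * m
    h = 1.0 / (m + 1)
    Re = q * h / 2.0                      # mesh Reynolds number
    a1, a2, a3 = 4.0, -1.0 - Re, -1.0 + Re
    sub = np.diag(np.ones(m - 1), -1)     # subdiagonal pattern
    sup = np.diag(np.ones(m - 1), 1)      # superdiagonal pattern
    Tx = a1 * np.eye(m) + a2 * sub + a3 * sup
    Ty = a2 * sub + a3 * sup
    A = np.kron(np.eye(m), Tx) + np.kron(Ty, np.eye(m)) + p * np.eye(n)
    x_star = np.array([(-1.0) ** (i + 1) * (i + 1) for i in range(n)])
    b = A @ x_star - np.abs(x_star)       # B = I, so b follows from x*
    return A, b, x_star
```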
In our experiments, we test different values of p (p = 0, −1), q (q = 0, 1) and n (n = 100, 400, 1600, 6400). The parameters α of the four tested methods are chosen to result in the fewest iteration steps, while the matrix Ω in our methods is chosen as Ω = ωI; see Tables 5 and 6. We refer to [25] for the parameters of the Picard-HSS and Picard-SHSS methods. In Tables 7 and 8, the iteration counts reveal that the Picard-type methods and the descent method converge faster than the MN-HSS method, but in terms of elapsed CPU time the newly proposed MN-SHSS method generally has better computational efficiency. In Tables 9 and 10, that is when p = −1, the iteration counts and CPU time of the Picard-type methods increase dramatically with the matrix scale, and this behavior is even more pronounced for the descent method. In contrast, our methods still maintain low iteration counts and high computational efficiency. Therefore, the newly proposed methods are very effective.

Conclusions
In this article, to accelerate the MN iteration method, we propose the MN-HSS and MN-SHSS iteration algorithms for addressing the generalized absolute value equation. Moreover, we give sufficient conditions under which our methods converge for the GAVE. In the end, we provide some experiments to confirm the feasibility and effectiveness of our newly presented methods.