Two CSCS-based iteration methods for solving absolute value equations

Recently, two families of HSS-based iteration methods have been constructed for solving systems of absolute value equations (AVEs), a class of non-differentiable NP-hard problems. In this study, we establish the Picard-CSCS iteration method and the nonlinear CSCS-like iteration method for AVEs involving Toeplitz matrices. We then analyze the convergence of the Picard-CSCS iteration method for solving AVEs. Using the theory of nonsmooth analysis, we in particular prove the convergence of the nonlinear CSCS-like iteration solver for AVEs. The advantage of these methods is that they do not require storing the coefficient matrix at all, and the sub-systems of linear equations can be solved efficiently via fast Fourier transforms (FFTs). Therefore, computational cost and storage can be saved in practical implementations. Numerical examples, including numerical solutions of nonlinear fractional diffusion equations, are reported to show the effectiveness of the proposed methods in comparison with some existing methods.


Introduction
In the present paper, we are interested in the efficient solution of absolute value equations (AVEs), i.e.,

Ax − |x| = b,  (1.1)

where A is a non-Hermitian Toeplitz matrix and |x| = (|x_1|, |x_2|, ..., |x_n|)^T denotes the component-wise absolute value of the vector x = (x_1, x_2, ..., x_n)^T. Here the transpose and the conjugate transpose of a matrix A are denoted by A^T and A^H, respectively. Both theoretical and numerical aspects of such problems have been extensively studied in the recent literature [1][2][3][4][5][6][7]. Additionally, a slightly more general form of AVEs,

Ax + B|x| = b,  (1.2)

was also discussed in [8] and investigated in a more general context [7,9]. On the other hand, the system of AVEs (1.1), which is in general equivalent to the linear complementarity problem (LCP) [4,10], arises from linear programming, quadratic programming, bimatrix games and other engineering problems (see e.g. [11] for fractional diffusion equations); as a consequence of this equivalence, the system of AVEs is NP-hard in its general form [2,4,7]. If B = 0, then the extended AVEs (1.2) reduce to a linear system Ax = b, which has many applications in scientific computing [4]. Recent research on AVEs can be summarized in two aspects. One is theoretical analysis, which focuses on theorems of the alternative, various equivalent reformulations, and the existence and nonexistence of solutions; refer, e.g., to [1,[8][9][10] for details. The other is how to solve AVEs numerically. In the last decade, based on the fact that the LCP can be reduced to AVEs, which enjoy a special and simple structure, many numerical methods for solving AVEs (1.1) have appeared in the literature; see e.g. [3,4,[12][13][14] and references therein.
For example, a finite computational algorithm based on a finite succession of linear programs (SLP) was proposed in [2], and a semi-smooth Newton method and its inexact variants were introduced in [15] and [16], respectively, which largely shorten the computation time compared with the SLP method. Furthermore, a smoothing Newton algorithm was presented in [10], which was proved to be globally convergent with a quadratic convergence rate under the condition that the singular values of A exceed 1. This condition is weaker than the one established in [15].
In recent years, the Picard-HSS iteration method and the nonlinear HSS-like iteration method were established to solve AVEs in [17,18], respectively. A sufficient condition guaranteeing the convergence of the Picard-HSS iteration method was given, and numerical experiments illustrate the effectiveness of the Picard-HSS and nonlinear HSS-like iteration methods. However, the number of inner HSS iteration steps is often problem-dependent and difficult to determine in actual computations. Moreover, the iterates cannot be updated in a timely manner. It has been shown that the nonlinear HSS-like iteration method, originally designed for solving weakly nonlinear systems in [19], is more efficient than the Picard-HSS iteration method with respect to the drawbacks mentioned above. To improve the nonlinear HSS-like iteration method, Zhang [20] extended the preconditioned HSS (PHSS) method [21] to solve AVEs and used a relaxation technique to accelerate the proposed methods. Meanwhile, he proved the convergence of the nonlinear PHSS-like iteration method, which had not been addressed in previous work; numerical results in [20] also show the effectiveness of his method. In this paper we consider the special case in which A has non-Hermitian Toeplitz structure, so that A admits the circulant and skew-circulant splitting (CSCS) [22]. Inspired by the strategies of [17,18], two kinds of CSCS-based iteration methods are established to solve AVEs (1.1) efficiently. First, the convergence conditions of the Picard-CSCS iteration method are investigated. Then we follow Zhang's analytical techniques [20] to prove a convergence condition for the nonlinear CSCS-like iteration method.
The rest of this paper is organized as follows. In Section 2, we first introduce several preliminary results from nonsmooth analysis and then briefly review the CSCS iteration method. In Section 3, we introduce two CSCS-based iteration methods for solving the AVEs (1.1) and investigate their convergence properties. Numerical experiments are reported in Section 4 to demonstrate the feasibility and effectiveness of the proposed CSCS-based iteration methods. Finally, the paper closes with some conclusions in Section 5.

Preliminaries
In this section, similarly to [20], we review some notation and properties related to nonsmooth analysis, which are useful for discussing the convergence of the proposed methods. Then we briefly recall the CSCS iteration method for solving the non-Hermitian Toeplitz system of linear equations Ax = b.

Preliminary results
Let Ψ : R^n → R^n be a given function, and let x be a given point in R^n. The function Ψ is said to be locally Lipschitzian near x if there exist a scalar κ > 0 and δ > 0 such that, for all y, z ∈ R^n with ‖y − x‖ < δ and ‖z − x‖ < δ, the following inequality holds:

‖Ψ(y) − Ψ(z)‖ ≤ κ ‖y − z‖.

Let Ψ : R^n → R^n be a locally Lipschitzian function. By Rademacher's theorem [23, pp. 18-23], Ψ is differentiable almost everywhere. Denote by D_Ψ the set of points at which Ψ is differentiable, and write Ψ′(x) for the usual n × n Jacobian matrix of partial derivatives whenever x is a point at which the necessary partial derivatives exist. Then the Bouligand subdifferential of Ψ at x ∈ R^n, denoted by ∂_B Ψ(x), is

∂_B Ψ(x) = { lim_{x_k → x, x_k ∈ D_Ψ} Ψ′(x_k) }.  (2.1)

Clarke's generalized Jacobian [24, pp. 69-75] of Ψ at x is the convex hull of ∂_B Ψ(x), i.e., ∂Ψ(x) = conv{∂_B Ψ(x)}. Since Ψ is locally Lipschitzian, the sets ∂_B Ψ(x) and ∂Ψ(x) are bounded; by definition, ∂_B Ψ(x) is also closed. Therefore, ∂_B Ψ(x) and ∂Ψ(x) are compact.
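As a concrete illustration (ours, not part of the original development), consider the scalar absolute value ψ(t) = |t|, which is globally Lipschitzian with κ = 1 and differentiable everywhere except at t = 0:

```latex
\partial_B \psi(t) = \partial \psi(t) = \{\operatorname{sign}(t)\} \quad (t \neq 0),
\qquad
\partial_B \psi(0) = \{-1,\, 1\},
\qquad
\partial \psi(0) = \operatorname{conv}\{-1, 1\} = [-1, 1].
```

Componentwise, every element of ∂_B|x| for the vector map x ↦ |x| is therefore a diagonal matrix whose i-th diagonal entry is sign(x_i) when x_i ≠ 0 and ±1 when x_i = 0, a fact used later in the convergence analysis.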
The function Ψ is said to be semismooth at x if Ψ is locally Lipschitzian at x and, for any h ∈ R^n, the limit

lim_{V ∈ ∂Ψ(x + t h′), h′ → h, t ↓ 0} V h′  (2.2)

exists. If Ψ is semismooth at all points of a given set, we say that Ψ is semismooth in this set. If Ψ is semismooth at x, then Ψ must be directionally differentiable at x; indeed (see [25,26]), if Ψ is semismooth at x, then the classical directional derivative exists and is equal to the limit in (2.2).
Semismoothness was originally introduced by Mifflin [27] for functionals, and Qi and Sun [25] later generalized the concept to vector-valued functions. It was proved in [25, Corollary 2.4] that Ψ is semismooth at x if and only if all its component functions are semismooth at x. The class of semismooth functions is very broad; it includes smooth functions, all convex functions, and piecewise-smooth functions. Moreover, sums, differences, products, and composites of semismooth functions are still semismooth; refer, e.g., to [26][27][28] for details.

The CSCS iteration method
Here let A ∈ C^{n×n} be a non-Hermitian Toeplitz matrix of the following form

$$A = \begin{bmatrix} a_0 & a_{-1} & \cdots & a_{2-n} & a_{1-n} \\ a_1 & a_0 & a_{-1} & \cdots & a_{2-n} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ a_{n-2} & \cdots & a_1 & a_0 & a_{-1} \\ a_{n-1} & a_{n-2} & \cdots & a_1 & a_0 \end{bmatrix},$$

i.e., A is constant along its diagonals; refer to [22,29]. If B ∈ C^{n×n} is the zero matrix, the general AVEs (1.2) reduce to the system of linear equations

Ax = b.  (2.3)

It is well known that a Toeplitz matrix A enjoys a circulant and skew-circulant splitting [22], i.e., A = C + S, where

$$C = \frac{1}{2}\begin{bmatrix} a_0 & a_{-1}+a_{n-1} & \cdots & a_{2-n}+a_2 & a_{1-n}+a_1 \\ a_1+a_{1-n} & a_0 & \ddots & & a_{2-n}+a_2 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ a_{n-2}+a_{-2} & & \ddots & a_0 & a_{-1}+a_{n-1} \\ a_{n-1}+a_{-1} & a_{n-2}+a_{-2} & \cdots & a_1+a_{1-n} & a_0 \end{bmatrix}  \quad (2.4)$$

and

$$S = \frac{1}{2}\begin{bmatrix} a_0 & a_{-1}-a_{n-1} & \cdots & a_{2-n}-a_2 & a_{1-n}-a_1 \\ a_1-a_{1-n} & a_0 & \ddots & & a_{2-n}-a_2 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ a_{n-2}-a_{-2} & & \ddots & a_0 & a_{-1}-a_{n-1} \\ a_{n-1}-a_{-1} & a_{n-2}-a_{-2} & \cdots & a_1-a_{1-n} & a_0 \end{bmatrix}. \quad (2.5)$$

As is known, C is a circulant matrix, which can be diagonalized by the discrete Fourier transform matrix F, and S is a skew-circulant matrix, which can be diagonalized by a discrete Fourier transform matrix with diagonal scaling, i.e., F̂ = F Ω, where Ω = diag(1, e^{−πι/n}, ..., e^{−(n−1)πι/n}) and ι = √−1 is the imaginary unit. That is to say,

C = F^H Λ_C F and S = F̂^H Λ_S F̂,

where Λ_C and Λ_S are diagonal matrices formed by the eigenvalues of C and S, respectively, which can be obtained in O(n log n) operations by using FFTs [29, pp. 37-39]. Furthermore, Ng [22] established the following CSCS iteration scheme to solve the non-Hermitian Toeplitz linear system (2.3).
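To make the splitting concrete, the following Python sketch (our own illustration, not from the paper; function names are ours) forms the first columns of C and S from the diagonals of A and solves a shifted circulant system (σI + C)y = r with FFTs, matching the O(n log n) cost noted above.

```python
import numpy as np

def cscs_first_columns(col, row):
    """First columns c, s of the circulant part C and skew-circulant part S
    of the Toeplitz matrix with first column col = (a_0, a_1, ..., a_{n-1})
    and first row row = (a_0, a_{-1}, ..., a_{1-n}), so that A = C + S."""
    n = len(col)
    c = np.zeros(n, dtype=complex)
    s = np.zeros(n, dtype=complex)
    c[0] = s[0] = col[0] / 2.0            # diagonal is split as a_0/2 + a_0/2
    for k in range(1, n):
        a_k, a_kmn = col[k], row[n - k]   # a_k and a_{k-n}
        c[k] = (a_k + a_kmn) / 2.0        # circulant entry:      (a_k + a_{k-n})/2
        s[k] = (a_k - a_kmn) / 2.0        # skew-circulant entry: (a_k - a_{k-n})/2
    return c, s

def shifted_circulant_solve(c, sigma, r):
    """Solve (sigma*I + C) y = r, with C circulant with first column c.
    Since C = F^H diag(fft(c)) F, the solve costs a pair of FFTs."""
    lam = np.fft.fft(c)                   # eigenvalues of C
    return np.fft.ifft(np.fft.fft(r) / (sigma + lam))
```

A dense check that A = C + S and that the FFT solve matches a direct solve is straightforward; the shifted skew-circulant solve is analogous after the diagonal scaling Ω.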

Algorithm 1 The CSCS iteration method.
Given an initial guess x^{(0)} ∈ C^n, compute x^{(k)} for k = 0, 1, 2, ... by the following iterative scheme until {x^{(k)}}_{k=0}^∞ converges:

(σI + C) x^{(k+1/2)} = (σI − S) x^{(k)} + b,
(σI + S) x^{(k+1)} = (σI − C) x^{(k+1/2)} + b,

where σ is a positive constant and I is the identity matrix of order n.
On the other hand, eliminating x^{(k+1/2)} shows that the CSCS iteration can be written as the stationary scheme x^{(k+1)} = M(σ) x^{(k)} + G(σ) b with

M(σ) = (σI + S)^{-1} (σI − C)(σI + C)^{-1} (σI − S),

which corresponds to the splitting A = B(σ) − C(σ), where

B(σ) = (1/2σ)(σI + C)(σI + S), C(σ) = (1/2σ)(σI − C)(σI − S),

and G(σ) = B(σ)^{-1}. Here, M(σ) is the iteration matrix of the CSCS iteration method. We mention that the CSCS iteration method is very similar to the HSS iteration method [30] and its variants; see e.g. [31] and references therein. When the circulant part C and the skew-circulant part S of A ∈ C^{n×n} are both positive definite, Ng proved that the spectral radius ρ(M(σ)) of M(σ) is less than 1 for any parameter σ > 0, i.e., the CSCS iteration method converges unconditionally to the exact solution of Ax = b for any initial guess x^{(0)} ∈ C^n; refer to [22, Theorem 1] for details.
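The two half-steps above can be sketched end-to-end as follows; this is our own illustrative Python code (the paper's experiments are in MATLAB), assuming C and S are given by their first columns c and s, with every shifted solve and matrix-vector product carried out by FFTs.

```python
import numpy as np

def cscs_iteration(c, s, b, sigma, tol=1e-12, maxit=1000):
    """CSCS iteration for A x = b, A = C + S, with circulant C (first
    column c) and skew-circulant S (first column s).  The skew-circulant
    part is diagonalized after scaling by Omega = diag(d)."""
    n = len(b)
    d = np.exp(-1j * np.pi * np.arange(n) / n)    # diagonal of Omega
    lam_c = np.fft.fft(c)                         # eigenvalues of C
    lam_s = np.fft.fft(d * s)                     # eigenvalues of S
    Cv = lambda v: np.fft.ifft(lam_c * np.fft.fft(v))           # C @ v
    Sv = lambda v: np.fft.ifft(lam_s * np.fft.fft(d * v)) / d   # S @ v
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        # (sigma I + C) x_half = (sigma I - S) x + b
        rhs = sigma * x - Sv(x) + b
        x_half = np.fft.ifft(np.fft.fft(rhs) / (sigma + lam_c))
        # (sigma I + S) x_new = (sigma I - C) x_half + b
        rhs = sigma * x_half - Cv(x_half) + b
        x = np.fft.ifft(np.fft.fft(d * rhs) / (sigma + lam_s)) / d
        if np.linalg.norm(Cv(x) + Sv(x) - b) <= tol * np.linalg.norm(b):
            break
    return x, k + 1
```

With C and S both positive definite, the iteration converges for any σ > 0, consistent with [22, Theorem 1]; each sweep costs a fixed number of length-n FFTs.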

Two CSCS-based iteration methods for AVEs
Motivated by the pioneering work of [17,18], we extend the conventional CSCS iteration method to two types of CSCS-based iteration methods for solving AVEs (1.1). These methods fully exploit the Toeplitz structure to accelerate computation and save storage. Next, we establish these two new methods, i.e., the Picard-CSCS iteration method and the nonlinear CSCS-like iteration method.

The Picard-CSCS iteration method
Recalling that the Picard iteration method is a fixed-point iteration in which the linear term Ax and the nonlinear term |x| + b are separated [17,18], the AVEs (1.1) can be solved by the Picard iteration

A x^{(k+1)} = |x^{(k)}| + b, k = 0, 1, 2, ....

We assume that the non-Hermitian Toeplitz matrix A is positive definite. In this case, the next iterate x^{(k+1)} can be computed approximately by the CSCS iteration method applied with the splitting A = B(σ) − C(σ), as the following scheme (see [32]):

B(σ) x^{(k,ℓ+1)} = C(σ) x^{(k,ℓ)} + |x^{(k)}| + b, ℓ = 0, 1, ..., l_k − 1, k = 0, 1, 2, ...,

where B(σ) and C(σ) are the matrices defined in the previous section, σ is a positive constant, {l_k}_{k=0}^∞ is a prescribed sequence of positive integers, and x^{(k,0)} = x^{(k)} is the starting point of the inner CSCS iteration at the k-th outer Picard iteration. This leads to an inexact Picard iteration method, called the Picard-CSCS iteration method, for solving AVEs (1.1), which can be summarized as follows; refer to [32].

Algorithm 2 The Picard-CSCS iteration method
Let A = C + S ∈ C^{n×n} be a non-Hermitian Toeplitz matrix, where C and S are the circulant and skew-circulant parts of A given in (2.4) and (2.5), both positive definite. Given an initial guess x^{(0)} ∈ C^n and a sequence {l_k}_{k=0}^∞ of positive integers, compute x^{(k+1)} for k = 0, 1, ... by the following iterative scheme until {x^{(k)}} satisfies the stopping criterion:

(a) Set x^{(k,0)} := x^{(k)}.

(b) For ℓ = 0, 1, ..., l_k − 1, solve the following linear systems to obtain x^{(k,ℓ+1)}:

(σI + C) x^{(k,ℓ+1/2)} = (σI − S) x^{(k,ℓ)} + |x^{(k)}| + b,
(σI + S) x^{(k,ℓ+1)} = (σI − C) x^{(k,ℓ+1/2)} + |x^{(k)}| + b,

where σ is a given positive constant.

(c) Set x^{(k+1)} := x^{(k,l_k)}.
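A compact Python sketch of the Picard-CSCS loop follows (our own illustration; dense solves are used for clarity, whereas in the Toeplitz case each shifted solve reduces to FFTs as discussed below):

```python
import numpy as np

def picard_cscs(C, S, b, sigma, l_k=15, tol=1e-10, maxit=200):
    """Picard-CSCS for A x - |x| = b with A = C + S.
    The shifted matrices sigma*I + C and sigma*I + S are fixed over all
    iterations; here they are factored by dense solves for readability."""
    n = len(b)
    I = np.eye(n)
    A = C + S
    x = np.zeros(n)
    for k in range(maxit):
        rhs = np.abs(x) + b                  # nonlinear term frozen at x^(k)
        y = x.copy()                         # x^(k,0) = x^(k)
        for _ in range(l_k):                 # l_k inner CSCS sweeps
            y_half = np.linalg.solve(sigma * I + C, (sigma * I - S) @ y + rhs)
            y = np.linalg.solve(sigma * I + S, (sigma * I - C) @ y_half + rhs)
        x = y                                # x^(k+1) = x^(k, l_k)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```

The choice l_k = 15 mirrors the inner-iteration cap used in the experiments of Section 4.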
The numerical advantages of the Picard-CSCS iteration method are obvious. First, the two linear subsystems in all inner CSCS iteration steps have the same shifted circulant coefficient matrix σI + C and the same shifted skew-circulant coefficient matrix σI + S, which are constant with respect to the iteration index k. Second, their exact solutions can be obtained efficiently via FFTs in O(n log n) operations [22,33]. Hence, the computational cost of the Picard-CSCS iteration method can be much lower than that of the Picard-HSS iteration method.
The next theorem gives sufficient conditions for the convergence of the Picard-CSCS iteration method for solving the AVEs (1.1).
Theorem 3.1. Let A = C + S ∈ C^{n×n} be a non-Hermitian Toeplitz matrix, where C and S are the circulant and skew-circulant parts of A given in (2.4)-(2.5), both positive definite. Let also η = ‖A^{-1}‖_2 < 1. Then the AVEs (1.1) have a unique solution x*, and for any initial guess x^{(0)} ∈ C^n and any sequence of positive integers l_k, k = 0, 1, 2, ..., the iteration sequence {x^{(k)}} generated by the Picard-CSCS iteration method converges to x*, provided that l = lim inf_{k→∞} l_k ≥ N, where N is a natural number such that ‖(M(σ))^s‖_2 < (1 − η)/(1 + η) for all s ≥ N.

Proof. Since η < 1, it follows from [7, Proposition 4] that the system of AVEs (1.1) has a unique solution x* ∈ C^n. As seen from Eq. (2.7), the (k+1)-th iterate of the Picard-CSCS iteration can be written as

x^{(k+1)} = (M(σ))^{l_k} x^{(k)} + Σ_{j=0}^{l_k−1} (M(σ))^j B(σ)^{-1} (|x^{(k)}| + b).

On the other hand, since x* is the solution of AVEs (1.1), it satisfies

x* = (M(σ))^{l_k} x* + Σ_{j=0}^{l_k−1} (M(σ))^j B(σ)^{-1} (|x*| + b).

Furthermore, since ρ(M(σ)) < 1, we obtain

Σ_{j=0}^{l_k−1} (M(σ))^j B(σ)^{-1} = (I − (M(σ))^{l_k}) A^{-1}.

Subtracting the two identities above yields

x^{(k+1)} − x* = (M(σ))^{l_k} (x^{(k)} − x*) + (I − (M(σ))^{l_k}) A^{-1} (|x^{(k)}| − |x*|).

Now, we can obtain

‖x^{(k+1)} − x*‖_2 ≤ [ ‖(M(σ))^{l_k}‖_2 (1 + η) + η ] ‖x^{(k)} − x*‖_2,

where the inequality uses the fact that ‖|x| − |y|‖_2 ≤ ‖x − y‖_2 for any x, y ∈ C^n. Since ρ(M(σ)) < 1, we have lim_{s→∞}(M(σ))^s = 0. Thus, there exists a natural number N such that

‖(M(σ))^s‖_2 < (1 − η)/(1 + η) for all s ≥ N.

At this stage, if we suppose that l = lim inf_{k→∞} l_k ≥ N, then the targeted result follows immediately.

The nonlinear CSCS-like iteration method
In the Picard-CSCS iteration method, the numbers l_k, k = 0, 1, 2, ..., of inner CSCS iteration steps are often problem-dependent and difficult to determine in actual computations [17,18,32]. Moreover, the iterate cannot be updated in a timely manner. Thus, to avoid this deficiency while preserving the advantages of the Picard-CSCS iteration method, based on the nonlinear fixed-point equations

(σI + C) x = (σI − S) x + |x| + b and (σI + S) x = (σI − C) x + |x| + b,

we propose the following nonlinear CSCS-like iteration method.

Algorithm 3 The nonlinear CSCS-like iteration method
Let A = C + S ∈ C^{n×n} be a non-Hermitian Toeplitz matrix, where C and S are the circulant and skew-circulant parts of A given in (2.4) and (2.5), both positive definite. Choose an initial guess x^{(0)} ∈ C^n and compute x^{(k+1)} for k = 0, 1, 2, ... by the following iteration scheme until {x^{(k)}} satisfies the stopping criterion:

(σI + C) x^{(k+1/2)} = (σI − S) x^{(k)} + |x^{(k)}| + b,
(σI + S) x^{(k+1)} = (σI − C) x^{(k+1/2)} + |x^{(k+1/2)}| + b,

where σ is a given positive constant. Define

V(x) = (σI + C)^{-1} [(σI − S) x + |x| + b],  (3.8)
U(x) = (σI + S)^{-1} [(σI − C) x + |x| + b],  (3.9)

and Θ(x) = U(V(x)). Then the nonlinear CSCS-like iterative scheme can be equivalently expressed as

x^{(k+1)} = Θ(x^{(k)}) = U(V(x^{(k)})), k = 0, 1, 2, ....

The Ostrowski theorem, i.e., Theorem 10.1.3 in [34, pp. 300-301], provides a local convergence theory for one-step stationary nonlinear iterations. Based on it, Zhu and Zhang established a local convergence theory for the nonlinear CSCS-like iteration method in [32]. However, this convergence theory has the strict requirement that Θ be F-differentiable at the solution. Obviously, the absolute value function |x| is non-differentiable. To remedy this difficulty, Zhu, Zhang and Liang [18] introduced a smoothing approximation function ϕ [35] for |x| and then presented the convergence of the nonlinear HSS-like iteration method based on the convergence of the smoothed iteration scheme; their convergence result thus depends on the smoothing approximation ϕ(x) of |x|, not on |x| itself. Recently, Zhang [20] exploited the theory of nonsmooth analysis to give a framework that completely proves the convergence of his proposed relaxed nonlinear HSS-like iteration method. Inspired by Zhang's framework, we analyze the (local) convergence of the nonlinear CSCS-like iteration method below. First, the following definition from [34, pp. 299-300] is needed: x* is said to be a point of attraction of the iteration x^{(k+1)} = Θ(x^{(k)}) if there is an open neighborhood of x* such that, for any initial guess x^{(0)} in this neighborhood, the iterates are well defined and converge to x*. Based on this definition, we can obtain the following proposition, which is useful for studying the convergence of the nonlinear CSCS-like iteration method. Following [20], let x* satisfy Ax* − |x*| = b; we compute the Bouligand subdifferential of Θ(x), defined by (3.8)-(3.9), at x*.
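For comparison with Algorithm 2, here is a minimal dense Python sketch of the nonlinear CSCS-like sweep (our own illustration; the absolute value is refreshed at every half-step, so no inner iteration counter l_k is needed):

```python
import numpy as np

def nonlinear_cscs_like(C, S, b, sigma, tol=1e-10, maxit=500):
    """Nonlinear CSCS-like iteration for A x - |x| = b, A = C + S.
    Each step applies x <- U(V(x)); dense solves are used for clarity,
    FFT solves apply in the Toeplitz case."""
    n = len(b)
    I = np.eye(n)
    A = C + S
    x = np.zeros(n)
    for k in range(maxit):
        # x^(k+1/2) = V(x^(k))
        x_half = np.linalg.solve(sigma * I + C,
                                 (sigma * I - S) @ x + np.abs(x) + b)
        # x^(k+1) = U(x^(k+1/2)); note |.| is updated immediately
        x = np.linalg.solve(sigma * I + S,
                            (sigma * I - C) @ x_half + np.abs(x_half) + b)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```

In contrast to the Picard-CSCS loop, the nonlinear term is not frozen over an inner cycle, which is exactly the "timely update" advantage discussed above.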
Due to the special forms of V and U, it is easy to verify that x* = U(x*), x* = V(x*), and x* = Θ(x*). Observing the special form of Θ, we can characterize the elements of ∂_B Θ(x*). Using the above discussion and Proposition 3.1, we immediately obtain the following conclusion about the convergence of the nonlinear CSCS-like iteration solver.
Proof. It is clear that U and V are semismooth, and hence Θ is semismooth. Let E ∈ ∂_B |x*|; then it is not hard to see that E is a diagonal matrix. Combining this with condition (3.10) completes the desired proof.

If condition (3.10) holds, then x* is a point of attraction of the nonlinear CSCS-like iteration method.
Remark. An attractive feature of the nonlinear CSCS-like iteration method is that it avoids the use of derivatives in the actual iterative scheme. Although we present the convergence analysis of the nonlinear CSCS-like iteration method for real matrices and vectors, this restriction is not necessary in the actual implementation used in the numerical experiments of the next section.

Numerical results
In this section, the numerical performance of the Picard-CSCS and nonlinear CSCS-like iterative solvers is investigated and compared experimentally on a suite of test problems. All tests are performed in MATLAB R2014a (64-bit) on an Intel(R) Core(TM) i5-3470 CPU @ 3.2 GHz with 8.00 GB of RAM and machine precision 10^{-16}. The iterations are terminated when the current residual of the computed solution x^{(k)} at iteration step k satisfies the prescribed stopping criterion, with a maximum of 200 iterations. Moreover, a stopping criterion is also imposed on the inner iterations of the Picard-CSCS iteration method, where l_k is the number of inner iteration steps and η_k is the prescribed tolerance controlling the accuracy of the inner iterations at the k-th outer iteration step. If η_k is fixed for all k, it is simply denoted by η̃. In our numerical experiments, we use the zero vector as the initial guess; the inner tolerance η_k for both the Picard-CSCS and Picard-HSS iterative methods is fixed at η̃ = 0.01, with a maximum of 15 inner iterations (l_k = 15, k = 0, 1, 2, ...); and the right-hand side vector b of the AVEs (1.1) is taken such that the vector x* = (x_1, x_2, ..., x_n)^T specified in Eq. (4.1) is the exact solution. On the other hand, Mangasarian modified the classical Newton iteration method for solving AVEs by introducing the auxiliary diagonal matrix D̃(x) = ∂|x| = diag(sign(x)); refer to [15] for details. He established the generalized Newton iterative scheme with initial guess x^{(0)}, in which at each step one needs to solve a system of linear equations with the coefficient matrix J^{(k)} = A − D̃(x^{(k)}), i.e., Eq. (4.2). If the matrix J^{(k)} = A − D̃(x^{(k)}) is very sparse, Eq. (4.2) can be solved using MATLAB's function "\". If J^{(k)} = A − D̃(x^{(k)}) is large-scale (or even dense), Eq. (4.2) can be solved using Krylov subspace methods such as GMRES [36] and TFQMR [37].
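Mangasarian's generalized Newton scheme described above can be sketched in a few lines of Python (our own illustration; MATLAB's "\" is replaced here by `numpy.linalg.solve`):

```python
import numpy as np

def generalized_newton_ave(A, b, tol=1e-12, maxit=100):
    """Generalized Newton iteration for A x - |x| = b:
    solve (A - D(x^(k))) x^(k+1) = b with D(x) = diag(sign(x))."""
    n = len(b)
    x = np.zeros(n)                          # sign(0) = 0, so J^(0) = A
    for k in range(maxit):
        J = A - np.diag(np.sign(x))          # J^(k) = A - D(x^(k))
        x_new = np.linalg.solve(J, b)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x_new)):
            return x_new, k + 1
        x = x_new
    return x, maxit
```

For large or dense J^{(k)}, the `solve` call would be replaced by a Krylov method as noted above; this sketch corresponds to the sparse, direct-solve variant.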
This consideration follows the recent inexact semi-smooth Newton method introduced in [16]. In our numerical experiments, we also compare the proposed methods with the above generalized Newton iterative scheme. In practical implementations, the optimal parameter σ_HSS = √(λ_max λ_min) recommended in [30] is employed for the Picard-HSS and nonlinear HSS-like iteration methods, where λ_min and λ_max are the minimum and maximum eigenvalues of the Hermitian part H of the matrix A. Similarly, we adopt the optimal parameter σ_CSCS given in [22,38] for the Picard-CSCS and nonlinear CSCS-like iteration methods. More precisely, in our calculations σ_CSCS is chosen according to the formula of [22,38], stated in terms of γ_min and γ_max, the lower and upper bounds of the real parts of the eigenvalues of the matrices C and S, and ζ_max, the upper bound of the absolute values of the imaginary parts of the eigenvalues of C and S. Meanwhile, it should be mentioned that the two optimal parameters σ_HSS and σ_CSCS only minimize upper bounds of the convergence factors (not the spectral radii themselves) of the HSS and CSCS iteration matrices, respectively [32]. Admittedly, optimal parameters are crucial for guaranteeing fast convergence of these parameter-dependent iteration methods, but they are generally difficult to determine; see e.g. [17,19,30,32] for a discussion of these issues.
To show that the proposed iteration methods can also be applied efficiently to complex systems of AVEs (1.1), we first construct and test the following example, a system of AVEs with a complex Toeplitz matrix.

Example 1. We consider A ∈ C^{n×n} to be the complex non-Hermitian, sparse and positive definite Toeplitz matrix of the form (4.3), where ι = √−1 and c, d, γ ∈ R are three given parameters; the matrices A in the targeted AVEs are defined by Eq. (4.3). Given the performance of HSS-based methods relative to other earlier methods (see [17,18,20]), we compare the proposed CSCS-based methods with HSS-based methods in Example 1. We choose different parameters c and d and present the corresponding numerical results in Tables 2-3. First, the optimal parameters σ_CSCS and σ_HSS for Example 1 are listed in Table 1. It is worth mentioning that as the matrix dimension n increases, the optimal parameters σ_CSCS and σ_HSS remain almost fixed or decrease slightly. Moreover, in Tables 2-3 we report numerical results for the Picard-HSS, nonlinear HSS-like, Picard-CSCS and nonlinear CSCS-like iterations, and for the generalized Newton iteration using MATLAB's function "\" (referred to as GN). We also present the elapsed CPU time in seconds (denoted CPU) and the number of outer, inner and total iteration steps (outer and inner iterations only for Picard-HSS and Picard-CSCS), denoted IT_out, IT_inn and IT, respectively.
As seen from Tables 2-3, except for the GN method, the Picard-HSS, nonlinear HSS-like, Picard-CSCS and nonlinear CSCS-like iterative methods all successfully compute approximate solutions of the AVEs for all matrix dimensions. As the dimension n increases, the numbers of outer and inner iteration steps are almost fixed for all iteration methods, and the number of total iteration steps behaves similarly, but the total CPU time of every method grows quickly. Moreover, in terms of outer iteration steps, the Picard-HSS and Picard-CSCS iteration methods give almost the same results, but the Picard-CSCS iteration method outperforms the Picard-HSS iteration method in terms of inner iteration steps. As a result, the Picard-CSCS iteration method is also more competitive than the Picard-HSS iteration method in terms of elapsed CPU time.
On the other hand, Tables 2-3 also show that both the nonlinear CSCS-like and Picard-CSCS iteration methods are better than the nonlinear HSS-like and Picard-HSS iteration methods in terms of iteration counts and elapsed CPU time for solving AVEs. In particular, the nonlinear CSCS-like method often performs better than the Picard-CSCS method in our implementations. Moreover, the convergence histories of the residual 2-norms of these four iterative algorithms are displayed in Fig. 1, and the performance profile based on CPU time for Example 1 with increasing matrix size n is illustrated in Fig. 2. In conclusion, the nonlinear CSCS-like iteration method is the best choice for the AVEs considered in Example 1, and the Picard-CSCS iteration method can be regarded as an acceptable alternative.
Example 2. To evaluate the performance of the proposed methods comprehensively, we consider a family of practical AVE problems arising in the numerical solution of the following one-dimensional nonlinear space-fractional diffusion equation, specially modified from Refs. [11,39], where α ∈ (1, 2) is the order of the fractional derivative, ς > 0, the diffusion coefficients d_± are nonnegative, i.e., d_± ≥ 0, and φ(x) is a known function. To solve Eq. (4.4) numerically, let N and M be positive integers, and let h = 1/(N + 1) and τ = 1/M be the spatial grid size and the time step, respectively. We define the spatial and temporal partitions x_j = jh for j = 0, 1, ..., N + 1 and t_m = mτ for m = 0, 1, ..., M, and let u_j^{(m)} = u(x_j, t_m). In [39], Meerschaert and Tadjeran proposed the shifted Grünwald approximation (4.5), where the coefficients g_k^{(α)} are given by g_0^{(α)} = 1 and the recurrence g_k^{(α)} = (1 − (α + 1)/k) g_{k−1}^{(α)} for k ≥ 1. By arguments similar to those in [11], it is not difficult to prove that the numerical scheme (4.6) is unconditionally stable, which we will not pursue here. Let u^{(m)} = (u_1^{(m)}, u_2^{(m)}, ..., u_N^{(m)})^T, where we take ς = τ, and let G_α ∈ R^{N×N} be the nonsymmetric Toeplitz matrix defined in [40]. According to Eq. (4.6), we need to handle a system of nonlinear equations of the AVE form (1.1) at each time step, i.e., we must solve the AVEs Au − |u| = u^{(0)}, where A = I_N − (τ/h^α)(d_+ G_α + d_− G_α^T) is also a nonsymmetric Toeplitz matrix. Meanwhile, for simplicity, the vector u^{(0)} is chosen so that the same x* as in Eq. (4.1) is the solution of the AVEs (1.1).
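The shifted Grünwald coefficients used in (4.5)-(4.6) can be generated by the standard recurrence; this snippet is our own and simply implements the well-known identity g_k^{(α)} = (−1)^k C(α, k):

```python
import numpy as np

def grunwald_weights(alpha, n_weights):
    """Gruenwald-Letnikov weights g_k^(alpha) = (-1)^k * binom(alpha, k),
    computed stably by g_0 = 1, g_k = (1 - (alpha + 1)/k) * g_{k-1}."""
    g = np.empty(n_weights + 1)
    g[0] = 1.0
    for k in range(1, n_weights + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g
```

For α ∈ (1, 2) one has g_0 = 1, g_1 = −α < 0 and g_k > 0 for k ≥ 2, the sign pattern underlying the structure of the Toeplitz matrix G_α.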
Next, for the first temporal level m = 1, we employ the two CSCS-based iteration methods to solve the resulting AVEs. A necessary condition for the convergence analysis of the CSCS-based iteration methods is that both the circulant part C and the skew-circulant part S of A be positive definite. In fact, both the circulant part C and the skew-circulant part S of the matrix A = I_N − (τ/h^α)(d_+ G_α + d_− G_α^T) are positive definite (see [41] for details), via analysis similar to that in [38]. Hence it is reasonable to exploit the CSCS-based iteration methods for solving the resulting AVEs. At the same time, it is worth mentioning that HSS-based iteration methods are not well suited to Example 2, because the dense Toeplitz coefficient matrix leads to costly computations for solving the two sub-systems with the dense coefficient matrices σI + H and σI + S. In this example, since the matrix J^{(k)} is a Toeplitz-plus-diagonal matrix, there are no fast direct solvers for J^{(k)} x = u^{(0)}. Fortunately, matrix-vector products with J^{(k)} can be implemented via FFTs thanks to the Toeplitz part A, so Krylov subspace methods can be exploited for solving J^{(k)} x = u^{(0)} at each iteration step; we denote the resulting methods as GN-TFQMR and GN-GMRES. In conclusion, we compare the proposed CSCS-based iteration methods with both the GN-GMRES and GN-TFQMR methods for solving the resulting AVEs in Example 2. Numerical results are reported in the following tables for different values of α, d_± and h = τ. The total number of (inner) iteration steps used by the GMRES and TFQMR methods, which solve J^{(k)} x = u^{(0)}, is at most 15 in our practical implementations. First of all, the optimal parameters σ_CSCS for Example 2 are listed in Table 4.
We remark that as the matrix dimension n increases, the optimal parameters σ_CSCS are almost fixed or increase slightly for the cases α = 1.2 and α = 1.5. Since the coefficient matrix A for α = 1.8 is very ill-conditioned, the corresponding optimal parameters σ_CSCS vary strongly. Moreover, in Tables 5-7 we report the numerical results for the Picard-CSCS, nonlinear CSCS-like, GN-GMRES and GN-TFQMR iterative methods. As in Example 1, we report the elapsed CPU time in seconds and the numbers of outer, inner and total iteration steps (outer and inner iterations only for Picard-CSCS, GN-GMRES and GN-TFQMR) to show the convergence performance.
Based on the numerical results in Tables 5-7, the two iterative solvers, i.e., the Picard-CSCS and the nonlinear CSCS-like methods, successfully obtain approximate solutions of the AVEs for all matrix dimensions, whereas both the GN-GMRES and GN-TFQMR iterative methods fail to converge. This is mainly because Newton-like iterative methods are usually sensitive to the initial guess and to the accuracy with which the inner linear system (4.2) is solved at each iterative step. As the matrix dimension N increases, the number of outer iteration steps is almost fixed or increases slightly for all iteration methods, whereas the number of inner iteration steps shows the contrary behavior for the cases α = 1.2 and α = 1.5. Meanwhile, the total CPU time and the total iteration counts of both the Picard-CSCS and the nonlinear CSCS-like iteration methods increase quickly, except for the cases of α = 1.8 with N = 1024 and N = 2048. On the other hand, Tables 5-7 also show that the nonlinear CSCS-like method is almost always more competitive than the Picard-CSCS iteration method in terms of iteration counts and elapsed CPU time for solving the AVEs. In particular, the nonlinear CSCS-like iteration method requires slightly fewer iterations to converge than the Picard-CSCS solver, whereas the Picard-CSCS solver saves a little elapsed CPU time compared with the nonlinear CSCS-like iteration method in our implementations. Nevertheless, we still conclude that the nonlinear CSCS-like iterative method is the first choice for solving the AVEs arising in Example 2.

Conclusions
In this paper, we have constructed two CSCS-based iteration methods for solving AVEs (1.1) with a non-Hermitian Toeplitz matrix. The two methods are based on the separability of the linear term Ax and the nonlinear term |x| + b, as well as on the CSCS of the involved non-Hermitian positive definite Toeplitz matrix A. By leveraging the theory of nonsmooth analysis, the local convergence of the nonlinear CSCS-like iteration method has been investigated. Numerical experiments have shown that the Picard-CSCS and nonlinear CSCS-like iteration methods are feasible and efficient nonlinear solvers for AVEs; in particular, the nonlinear CSCS-like iteration method often outperforms the Picard-CSCS iteration method. Finally, it is worth mentioning that how to employ suitable acceleration techniques [20,31,42] to enhance the convergence of the CSCS-based iteration methods, which belong to the class of fixed-point iterations, remains an interesting topic for further research.