A GENERALIZATION OF THE POSITIVE-DEFINITE AND SKEW-HERMITIAN SPLITTING ITERATION

Abstract. In this paper, a generalization of the positive-definite and skew-Hermitian splitting (GPSS) iteration is considered for solving non-Hermitian positive-definite systems of linear equations. Theoretical analysis shows that the GPSS method converges unconditionally to the exact solution of the linear system, with the upper bound of its convergence factor dependent only on the spectrum of the positive-definite splitting matrices. In some situations, this new scheme can outperform the Hermitian and skew-Hermitian splitting (HSS) method, the positive-definite and skew-Hermitian splitting (PSS) method, and the generalized HSS (GHSS) method, and it can also be used as an efficient preconditioner. Numerical experiments using discretizations of convection-diffusion-reaction equations demonstrate the effectiveness of the new method.

1. Introduction. We consider the iterative solution of the large sparse system of linear equations

Ax = b, (1)

where A ∈ C^{n×n} is a non-Hermitian positive-definite matrix and x, b ∈ C^n. Iterative methods for the system of linear equations (1) require efficient splittings of the coefficient matrix A [21]. In this paper, we consider a class of splitting iterative methods based on the Hermitian and skew-Hermitian splitting (HSS). The HSS iterative method was first proposed by Bai, Golub and Ng in [8] for the solution of a broad class of non-Hermitian positive-definite linear systems (1). The matrix A naturally possesses the Hermitian and skew-Hermitian splitting

A = H + S, (2)

where H = (A + A*)/2 is the Hermitian part of A and S = (A − A*)/2 is the skew-Hermitian part.

YANG CAO, WEIWEI TAN AND MEIQUN JIANG
Then, based on the splitting (2) and motivated by the classical alternating direction implicit (ADI) iteration technique [20], the HSS iteration can be obtained:

(αI + H)x^{k+1/2} = (αI − S)x^k + b,
(αI + S)x^{k+1} = (αI − H)x^{k+1/2} + b,

where α is a given positive constant and x^0 is an arbitrary initial guess. It was demonstrated in [8] that the HSS iteration method converges unconditionally to the unique solution of the non-Hermitian system of linear equations (1). Moreover, it was shown in the same paper that choosing α* = √(λ_min(H)λ_max(H)), where λ_min(H) and λ_max(H) are the extreme eigenvalues of H, minimizes an upper bound on the spectral radius of the HSS iteration matrix. Due to its promising performance and elegant mathematical properties, the HSS iteration has attracted much attention, and numerous papers have been devoted to various aspects of the algorithm. In one direction, many effective algorithms based on this idea have been designed for solving non-Hermitian systems of linear equations, such as the preconditioned HSS (PHSS) method [11,22], the normal and skew-Hermitian splitting (NSS) method [9], the positive-definite and skew-Hermitian splitting (PSS) method [7], the asymmetric and skew-Hermitian splitting method [18], the generalized HSS (GHSS) method [12], the accelerated HSS method [5], the modified HSS (MHSS) method [3], the preconditioned MHSS (PMHSS) method [4], the local HSS method [17], the dimensional splitting (DS) iteration method [15], and so on. These iterative methods were first used as stationary iterative solvers, but they can also be used as preconditioners for Krylov subspace methods [21], resulting in a far more efficient and robust class of solvers. Other significant developments include extensions to certain singular systems [2], inexact variants [10], studies on the optimal parameter [1,6,13], and analyses of the spectral properties of HSS preconditioners [14,16,19], often with excellent results.
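As an illustration, the two half-steps of the HSS iteration can be sketched in a few lines of NumPy. This is not the authors' code; the dense solves, the zero initial guess, and the small 2×2 test matrix are illustrative assumptions.

```python
import numpy as np

def hss_iteration(A, b, alpha, tol=1e-6, max_it=500):
    """Sketch of the HSS iteration for a non-Hermitian positive-definite A."""
    n = A.shape[0]
    I = np.eye(n)
    H = (A + A.conj().T) / 2                    # Hermitian part of A
    S = (A - A.conj().T) / 2                    # skew-Hermitian part of A
    x = np.zeros(n)                             # zero initial guess
    r0 = np.linalg.norm(b - A @ x)
    for k in range(max_it):
        # (alpha*I + H) x^{k+1/2} = (alpha*I - S) x^k + b
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # (alpha*I + S) x^{k+1} = (alpha*I - H) x^{k+1/2} + b
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * r0:
            break
    return x, k + 1

# illustrative 2x2 positive-real test matrix with exact solution (1, 1)
A = np.array([[4.0, 1.0], [-1.0, 4.0]])
b = A @ np.ones(2)
ev = np.linalg.eigvalsh((A + A.T) / 2)
alpha_star = np.sqrt(ev[0] * ev[-1])            # sqrt(lambda_min(H) * lambda_max(H))
x, it = hss_iteration(A, b, alpha_star)
```

In practice the two shifted systems would be solved by sparse or inexact inner solvers rather than dense factorizations; the dense `solve` calls here only keep the sketch short.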
Based on the HSS iterative method and motivated by practical considerations, Bai, Golub, Lu and Yin in [7] proposed the positive-definite and skew-Hermitian splitting (PSS) iterative method for solving (1). This method is also an ADI-type iterative method. Any non-Hermitian positive-definite matrix A possesses a splitting of the form

A = P̃ + S̃, (3)

where P̃ ∈ C^{n×n} is a positive-definite matrix and S̃ ∈ C^{n×n} is a skew-Hermitian matrix. Then, similarly to the HSS iteration, the PSS iterative method can be defined as

(αI + P̃)x^{k+1/2} = (αI − S̃)x^k + b,
(αI + S̃)x^{k+1} = (αI − P̃)x^{k+1/2} + b, (4)

where α is a given positive constant. Unlike the HSS iteration, in the PSS iteration there are many possible choices of the matrices P̃ and S̃. One useful choice is

P̃ = D_H + 2L_H, S̃ = S + L*_H − L_H, (5)

where D_H and L_H are the diagonal matrix and the strictly lower triangular matrix of H, respectively. Theoretical analysis in [7] shows that the PSS iterative method also converges unconditionally to the unique solution of (1). Recently, based on the HSS iteration and the PSS iteration, Benzi in [12] proposed a generalized HSS (GHSS) iteration for a class of non-Hermitian positive-definite matrices arising from convection-diffusion problems. In this class of problems, the discretized matrix A has the following structure:

A = G + K + S, (6)

where G and K are generally Hermitian positive (semi-)definite matrices and S is a skew-Hermitian matrix. Then the GHSS iteration can be defined as

(αI + G)x^{k+1/2} = (αI − K − S)x^k + b,
(αI + K + S)x^{k+1} = (αI − G)x^{k+1/2} + b,

where α is a given positive constant. It was shown in [12] that the GHSS iterative method also converges unconditionally to the unique solution of (1) with coefficient matrix (6). One advantage of the GHSS iterative method is that the solution of systems with coefficient matrix αI + S + K by inner iterations is made easier, since this matrix is more diagonally dominant and typically better conditioned than αI + S.
In this paper, we design a new splitting iterative method based on the previous HSS-based iterative methods. Motivated by the PSS and GHSS methods, a generalized PSS (GPSS) scheme is proposed for solving a class of non-Hermitian positive-definite systems of linear equations arising from convection-diffusion-reaction problems. Theoretical analysis shows that the GPSS iterative method preserves the favorable properties of both the PSS and GHSS methods. The remainder of the paper is organized as follows. In Section 2, a generalization of the PSS iteration is established and its convergence theory is studied. Preconditioned variants are considered in Section 3. In Section 4, some numerical examples using discretizations of convection-diffusion-reaction equations demonstrate the effectiveness of the proposed method. Finally, we end this paper with some conclusions and outlook in Section 5.
2. The generalized PSS iteration and its convergence. As discussed in Section 1 and in [12], using the finite difference method (FDM) or the finite element method (FEM), many differential equations, such as the unsteady convection-diffusion equation and the steady convection-diffusion-reaction equation, can be discretized into a system of linear equations with a non-Hermitian positive-definite coefficient matrix of the form

A = G + K + S, (7)

where G and K are generally Hermitian positive-definite matrices and S is a skew-Hermitian matrix. In fact, the matrix A in (7) can be split into

A = P_1 + P_2, (8)

where P_1 and P_2 are positive-definite matrices. Two typical choices of the splitting (8) are

P_1 = G, P_2 = K + S, (9)

and

P_1 = D + 2L_G, P_2 = K + S + L*_G − L_G, (10)

where D and L_G are the diagonal matrix and the strictly lower triangular matrix of G, respectively. Other practical choices of P_1 and P_2 can be obtained from particular problems and deserve further in-depth study. We then consider the following splittings of A based on (8):

A = (αI + P_1) − (αI − P_2) = (αI + P_2) − (αI − P_1), (11)

where α is a positive constant. The corresponding alternating iterative scheme, called the GPSS iterative method, can be described as follows.

Method 1. (The GPSS iteration)
Let α be a given positive constant and let x^0 ∈ C^n be an initial guess vector. For k = 0, 1, 2, · · · , until a stopping criterion is satisfied, compute

(αI + P_1)x^{k+1/2} = (αI − P_2)x^k + b,
(αI + P_2)x^{k+1} = (αI − P_1)x^{k+1/2} + b.

Evidently, just like the existing HSS-like methods, each iterate of the GPSS iteration alternates between the two positive-definite matrices P_1 and P_2, analogously to the classical ADI iteration for partial differential equations. In fact, the roles of the matrices P_1 and P_2 can be reversed, so that we may first solve the system of linear equations with coefficient matrix αI + P_2 and then solve the system of linear equations with coefficient matrix αI + P_1. In the GPSS iteration, with a special splitting of the coefficient matrix A such as (10), the first step is very easy to implement, since αI + P_1 is then a triangular matrix. The second step, however, is as difficult as in the PSS iteration and the GHSS iteration; some inexact solvers, such as incomplete LU factorization, can be used. One advantage of the GPSS scheme is that the solution of systems with coefficient matrix αI + K + S + L*_G − L_G by inner iterations is made easier, since this matrix is more "diagonally dominant" and typically better conditioned than the matrix αI + S + L*_G − L_G used in the PSS iteration. In the following, we turn to the study of the convergence of the splitting method, assuming that the solves in the splitting iteration are performed exactly (rather than approximately, as in an inexact inner-outer setting).
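Method 1 translates directly into code from the two half-steps. The following NumPy fragment is an illustrative sketch (the dense solves, zero initial guess, and small test splitting A = P_1 + P_2 are assumptions for demonstration only, not the experiments of Section 4):

```python
import numpy as np

def gpss_iteration(P1, P2, b, alpha, tol=1e-6, max_it=500):
    """Sketch of the GPSS iteration for A = P1 + P2, with P1, P2 positive definite."""
    A = P1 + P2
    n = A.shape[0]
    I = np.eye(n)
    x = np.zeros(n)                             # zero initial guess
    r0 = np.linalg.norm(b - A @ x)
    for k in range(max_it):
        # (alpha*I + P1) x^{k+1/2} = (alpha*I - P2) x^k + b
        x_half = np.linalg.solve(alpha * I + P1, (alpha * I - P2) @ x + b)
        # (alpha*I + P2) x^{k+1} = (alpha*I - P1) x^{k+1/2} + b
        x = np.linalg.solve(alpha * I + P2, (alpha * I - P1) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * r0:
            break
    return x, k + 1

# toy splitting: P1 lower triangular with positive-definite symmetric part, P2 = I
P1 = np.array([[2.0, 0.0], [-2.0, 2.0]])
P2 = np.eye(2)
b = (P1 + P2) @ np.ones(2)                      # exact solution is (1, 1)
x, it = gpss_iteration(P1, P2, b, alpha=1.0)
```

With a triangular choice of P_1 as in (10), the first solve would reduce to forward substitution; the generic dense solve is kept here only for brevity.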
In matrix-vector form, the above GPSS iterative method can be equivalently rewritten as

x^{k+1} = Γ(α)x^k + M(α)^{-1}b, (12)

where

Γ(α) = (αI + P_2)^{-1}(αI − P_1)(αI + P_1)^{-1}(αI − P_2) (13)

and

M(α) = (1/(2α))(αI + P_1)(αI + P_2), N(α) = (1/(2α))(αI − P_1)(αI − P_2).

Here, Γ(α) is the iteration matrix of the GPSS iteration. In fact, (12) also results from the splitting A = M(α) − N(α), and we note that Γ(α) = M(α)^{-1}N(α). The matrix M(α) can serve as a preconditioner for the system of linear equations (1). We call it the GPSS preconditioner and will discuss it in the next section. In order to prove the convergence of the GPSS iterative method, we first give a lemma.
Lemma 2.1. Let α > 0. If P ∈ C^{n×n} is a positive semi-definite matrix, then it holds that

‖(αI − P)(αI + P)^{-1}‖_2 ≤ 1.

Furthermore, if P ∈ C^{n×n} is a positive-definite matrix, then it holds that

‖(αI − P)(αI + P)^{-1}‖_2 < 1.

Theorem 2.2. Let A ∈ C^{n×n} be a positive-definite matrix and A = P_1 + P_2, where P_1 and P_2 are also positive-definite matrices. Let ρ(Γ(α)) be the spectral radius of the GPSS iteration matrix. Then it holds that

ρ(Γ(α)) < 1, ∀α > 0;

i.e., the GPSS iteration converges unconditionally to the unique solution of the system of linear equations (1).
Proof. From the expression (13) of the GPSS iteration matrix Γ(α), we know that Γ(α) is similar to

Γ̃(α) = (αI − P_1)(αI + P_1)^{-1}(αI − P_2)(αI + P_2)^{-1}.

Then, from Lemma 2.1, we have

ρ(Γ(α)) = ρ(Γ̃(α)) ≤ ‖(αI − P_1)(αI + P_1)^{-1}‖_2 ‖(αI − P_2)(αI + P_2)^{-1}‖_2 < 1

for any positive constant α. Thus, the GPSS iteration method converges unconditionally to the unique solution of the system of linear equations (1). This completes the proof.

Theorem 2.2 shows that the GPSS iteration method converges unconditionally to the exact solution of (1). Moreover, the upper bound of its contraction factor depends on the spectra of the positive-definite matrices P_1 and P_2 but is independent of the eigenvectors of the matrices P_1, P_2 and A.
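The bound in the proof is easy to check numerically. The sketch below, an illustration using randomly generated positive-real matrices (not data from the paper), forms Γ(α) explicitly and verifies that ρ(Γ(α)) stays below the product of the two norms from Lemma 2.1, and hence below one, for several values of α:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

def positive_real(n):
    """Random matrix with Hermitian positive-definite part plus a skew part."""
    B = rng.standard_normal((n, n))
    W = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n) + (W - W.T) / 2

P1, P2 = positive_real(n), positive_real(n)
I = np.eye(n)
for alpha in (0.1, 1.0, 10.0):
    T1 = (alpha * I - P1) @ np.linalg.inv(alpha * I + P1)
    T2 = (alpha * I - P2) @ np.linalg.inv(alpha * I + P2)
    Gamma = np.linalg.inv(alpha * I + P2) @ (alpha * I - P1) \
            @ np.linalg.inv(alpha * I + P1) @ (alpha * I - P2)
    rho = max(abs(np.linalg.eigvals(Gamma)))    # spectral radius of Gamma(alpha)
    bound = np.linalg.norm(T1, 2) * np.linalg.norm(T2, 2)
    assert rho <= bound + 1e-10 < 1.0           # rho(Gamma(alpha)) < 1 for all alpha > 0
```

Note that Γ(α) is formed explicitly only for this small verification; in computation one never builds the inverses.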
As with the PSS iteration, there are two important problems that need further study. The first is how to choose the positive-definite matrices P_1 and P_2. Above, we listed two practical choices, which may have better computational efficiency than the HSS, PSS and GHSS iteration methods; there may exist other, even better choices. The second is how to choose the optimal parameter α. As with all parameter-dependent iterative methods, it is very difficult to obtain the optimal choice of the parameter. In the previous literature, a great deal of effort has been put into determining the value of α that minimizes the spectral radius of the iteration matrix Γ(α), or some upper bound on it. However, the practical usefulness of such estimates is questionable. According to theoretical and numerical results reported in many HSS-based papers, a small value of α (usually between 0.01 and 1.5) gives good results. In this paper, the optimal parameters are obtained experimentally.
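The experimental selection of α can be sketched as a simple sweep over a candidate grid, keeping the value that gives the smallest spectral radius of Γ(α). The grid, the toy splitting, and the dense computation below are illustrative assumptions:

```python
import numpy as np

def gpss_spectral_radius(P1, P2, alpha):
    """Spectral radius of the GPSS iteration matrix Gamma(alpha)."""
    n = P1.shape[0]
    I = np.eye(n)
    Gamma = np.linalg.inv(alpha * I + P2) @ (alpha * I - P1) \
            @ np.linalg.inv(alpha * I + P1) @ (alpha * I - P2)
    return max(abs(np.linalg.eigvals(Gamma)))

# toy positive-definite splitting (illustrative data only)
P1 = np.array([[2.0, 0.0], [-2.0, 2.0]])
P2 = np.eye(2)
candidates = np.linspace(0.01, 1.5, 30)         # the "small alpha" range from the text
radii = [gpss_spectral_radius(P1, P2, a) for a in candidates]
alpha_best = candidates[int(np.argmin(radii))]
```

For large problems one would of course estimate the iteration counts directly rather than form Γ(α); this sweep only illustrates the experimental tuning described above.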

3. Preconditioned GPSS iteration.
In this section, two aspects of the preconditioned GPSS iteration are considered. First, we can use the matrix M(α) as a preconditioner; second, we can apply the GPSS iteration method to a preconditioned linear system. Both aspects are very useful for accelerating the convergence of the GPSS iteration method.
As already mentioned, the HSS iteration, the PSS iteration and the GPSS iteration are stationary iterations and are unconditionally convergent. However, the convergence of these stationary iterations is typically too slow for the methods to be competitive, even with the optimal parameter. They can, however, be accelerated by a nonsymmetric Krylov subspace method such as GMRES [21]. The same is true of the GPSS iteration. To see this, note that the linear system Ax = b is equivalent to the linear system

M(α)^{-1}Ax = M(α)^{-1}b, (14)

where M(α) = (1/(2α))(αI + P_1)(αI + P_2). Equation (14) is an equivalent (left-preconditioned) linear system and can be solved by GMRES. Here, M(α) can be seen as a preconditioner for GMRES, which we call the GPSS preconditioner. This preconditioner is based on the splitting (11) of the matrix A, which induces a fixed-point iteration that we proved to be unconditionally convergent in Section 2. This is very useful, since it implies that the spectrum of the preconditioned matrix lies entirely in a circle centered at (1, 0) with unit radius, a desirable property for Krylov subspace acceleration. It should also be noted that the prefactor 1/(2α) in the preconditioner M(α) has no effect on the preconditioned system and can be neglected.
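As an illustration of applying M(α) within GMRES (with the 1/(2α) prefactor dropped), the following SciPy sketch wraps the two shifted solves in a LinearOperator; the random positive-real test matrices are assumptions for demonstration only:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(2)
n = 50
B = rng.standard_normal((n, n))
P1 = B @ B.T / n + np.eye(n)                    # Hermitian positive definite
W = rng.standard_normal((n, n))
P2 = np.eye(n) + (W - W.T) / 2                  # identity plus a skew-Hermitian part
A = P1 + P2
b = A @ np.ones(n)                              # exact solution is the vector of ones

alpha = 1.0
I = np.eye(n)

def apply_Minv(v):
    # M(alpha)^{-1} v via two shifted solves; the 1/(2*alpha) prefactor is dropped
    y = np.linalg.solve(alpha * I + P1, v)
    return np.linalg.solve(alpha * I + P2, y)

M = LinearOperator((n, n), matvec=apply_Minv)
x, info = gmres(A, b, M=M)                      # info == 0 signals convergence
```

In a realistic setting the two solves inside `apply_Minv` would use sparse triangular or incomplete factorizations; the dense solves here only keep the sketch self-contained.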
As discussed in [8], it is possible to apply the stationary HSS iteration method to the symmetrically preconditioned system

B^{-1}AB^{-*}y = B^{-1}b, y = B^*x,

resulting in the preconditioned HSS (PHSS) iteration method. Here, A = H + S is assumed to be positive real, and the matrix B should be chosen such that the Hermitian positive-definite matrix D = BB^* is spectrally equivalent to H. In fact, the PHSS iteration is mathematically equivalent to an alternating iteration in which the shift matrix αI is replaced by αD; see [11] for detailed analyses and discussions. This idea can be applied to other HSS-based iteration methods. For the GPSS iteration, it is likewise possible to develop a preconditioned GPSS (PGPSS) method, based on the analogous splittings of the preconditioned coefficient matrix. The convergence analysis of the PGPSS iteration method can be obtained analogously to [11], with only technical modifications.

4. Numerical experiments. In this section, we present numerical experiments to illustrate the effectiveness of the GPSS iterative method. All runs are started from the initial zero vector and terminated when the current iterate satisfies ERR = ‖r_k‖_2/‖r_0‖_2 ≤ 10^{-6} (where r_k = b − Ax_k is the residual at the kth iteration) or when the prescribed maximum number of iterations k_max = 500 is exceeded. All runs are performed in MATLAB 7.0 on an Intel Pentium 4 (1 GB RAM) Windows XP system.
The problem under consideration arises from the steady-state convection-diffusion-reaction equation

−ν(u_xx + u_yy) + ω · ∇u + ru = f on Ω = (0, 1) × (0, 1),

with homogeneous Dirichlet boundary conditions, where ν is a viscosity scalar, ω = [ω_x, ω_y] and r is a positive function. The stepsizes along the x and y directions are the same, i.e., h = 1/m. We use a centered difference scheme to discretize this equation and obtain a discretized linear system of the form

(G + S + K)x = b,

where the matrices G, S and K correspond to the second-order diffusion term, the convection term and the zeroth-order reaction term, respectively. In the computations, ω = [ω_x, ω_y] = [1, 1] and r = 1 are taken for simplicity, and three values of ν are chosen: ν = 1, 0.1, 0.01. For this problem, the matrix G is a symmetric positive-definite matrix, S is a skew-symmetric matrix and K is a diagonal positive-definite matrix. The right-hand side vector b is chosen so that the exact solution of the discretized linear system is (1, 1, · · · , 1)^T ∈ R^{(m−1)²}. In Figure 1, we plot the eigenvalues of the iteration matrices of the HSS, PSS, GHSS and GPSS iteration methods for ν = 1 and m = 24. Figure 2 depicts the eigenvalues of the HSS, PSS, GHSS and GPSS iteration matrices for ν = 0.01 and m = 24. From these figures, we can see that the eigenvalues of the HSS and GHSS iteration matrices are almost the same, and a similar observation holds for the PSS and GPSS iteration matrices, but the eigenvalues of the (G)HSS iteration matrices are quite different from those of the (G)PSS iteration matrices. When ν = 1, the eigenvalues of the (G)HSS iteration matrices are more clustered than those of the (G)PSS iteration matrices; when ν = 0.01, the eigenvalues of the (G)PSS iteration matrices are more clustered than those of the (G)HSS iteration matrices. For the HSS iteration method, there has been some discussion on the choice of optimal parameters; see [1,6,13].
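For concreteness, a centered-difference discretization of this test problem can be assembled as follows. This NumPy sketch assumes ω = [1, 1] and constant r = 1 as in the text; the dense Kronecker-product construction is for illustration only (the actual experiments use sparse matrices):

```python
import numpy as np

def cdr_matrices(m, nu=1.0, omega=(1.0, 1.0), r=1.0):
    """Centered differences for -nu*(u_xx + u_yy) + omega.grad(u) + r*u = f
    on the unit square with homogeneous Dirichlet boundary conditions."""
    h = 1.0 / m
    n = m - 1                                   # interior grid points per direction
    I = np.eye(n)
    T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    C = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * h)
    G = nu * (np.kron(I, T) + np.kron(T, I))    # diffusion: symmetric positive definite
    S = omega[0] * np.kron(I, C) + omega[1] * np.kron(C, I)  # convection: skew-symmetric
    K = r * np.eye(n * n)                       # reaction: diagonal positive definite
    return G, S, K

G, S, K = cdr_matrices(8)
A = G + S + K
sym_ok = np.allclose(G, G.T) and np.all(np.linalg.eigvalsh(G) > 0)
skew_ok = np.allclose(S, -S.T)
```

The sanity flags confirm the structural properties claimed in the text: G is symmetric positive definite, S is skew-symmetric, and K is diagonal, so A = G + S + K is positive real.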
To the best of our knowledge, however, there is no corresponding discussion of the optimal parameters of the PSS and GHSS iteration methods, and it is also very difficult to obtain the optimal parameters for the GPSS iteration method. From the numerical results listed in [7], we can see that for the HSS iteration method the optimal parameters, which minimize the spectral radius of the iteration matrix, are close to those used in the PSS iteration method, especially for large problems. In this paper, we use the same parameters for the HSS, PSS, GHSS and GPSS iteration methods for each problem. In the actual computations, we solve the shifted skew-Hermitian subsystem at each iteration by Gaussian elimination. Tables I-III list the numerical results of the four iteration methods with respect to IT (iteration steps), CPU (elapsed time) and ERR (relative residual error) for ν = 1, ν = 0.1 and ν = 0.01. From these tables, we can see that as m increases or ν decreases, the parameter α decreases. For large ν (ν = 1, 0.1), the IT of (G)PSS is larger than that of (G)HSS; however, because (G)PSS has a smaller computational workload than (G)HSS at each iteration, the actual CPU of (G)PSS is less than that of (G)HSS. For small ν (ν = 0.01), both the IT and the CPU of (G)PSS are less than those of (G)HSS. When ν = 1 and ν = 0.01, the IT of PSS and GPSS are almost the same, but when ν = 0.1, both the IT and CPU of GPSS are less than those of PSS. These facts imply that PSS and GPSS are more efficient than HSS and GHSS, and that GPSS may be more efficient than PSS. We also compare the efficiency of HSS, PSS, GHSS and GPSS as preconditioners for Krylov subspace methods such as GMRES(♯), where the integer ♯ denotes that the algorithm is restarted after every ♯ iterations. The "optimal" parameters that minimize the spectral radii of the HSS iteration matrices are also used for the other iteration methods.
The numerical results with respect to IT and CPU for each case are listed in Table IV. From Table IV, we can see that for large ν (ν = 1, 0.1), the IT of the (G)HSS-preconditioned methods are much less than those of the (G)PSS-preconditioned methods. For small ν (ν = 0.01), the IT of (G)PSS is close to that of (G)HSS, but the actual CPU is less than that of (G)HSS. As ν decreases, the IT of (G)HSS increases while the IT of (G)PSS decreases. This may indicate that (G)PSS is more efficient than (G)HSS for small viscosity ν.