Three effective preconditioners for double saddle point problems

Abstract: In this paper, we propose three preconditioners for solving double saddle point problems, which arise in a number of practical applications. First, the solvability of this class of problems is investigated under suitable assumptions. Next, we prove that all the eigenvalues of the three preconditioned matrices are equal to 1. Furthermore, we analyze the eigenvector distributions and the upper bounds on the degrees of the minimal polynomials of the corresponding preconditioned matrices. Finally, numerical experiments are carried out to show the effectiveness of the proposed preconditioners.


Introduction
In this paper, we consider iterative methods for solving the following double saddle point problem:

\[
\begin{bmatrix} A & B^{T} & C^{T} \\ B & 0 & 0 \\ C & 0 & -D \end{bmatrix}
\begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \end{bmatrix}
=
\begin{bmatrix} f \\ g \\ h \end{bmatrix},
\tag{1.1}
\]

where A ∈ R^{n×n} and D ∈ R^{p×p} are symmetric positive definite (SPD) matrices, B ∈ R^{m×n} has full row rank, C ∈ R^{p×n} is a rectangular matrix, and n ≥ m + p. Linear systems of the form (1.1) arise in many fields of science and engineering, such as the mixed finite element approximation of fluid flow problems [7,13], finite element models of liquid crystals [9], interior point methods for quadratic programming [10,15], and mixed formulations of second-order elliptic equations [7,9]. Obviously, we can regard the coefficient matrix in (1.1) as a block two-by-two matrix and choose solution methods for standard saddle point problems [6] to solve the corresponding double saddle point problem. However, owing to its more complex structure, a straightforward application of solution methods for standard block two-by-two saddle point problems usually leads to poor numerical performance. In recent years, much effort has been devoted to solving the double saddle point problem (1.1). In 2018, Beik and Benzi [3] discussed Krylov subspace methods with block preconditioners in detail; moreover, they established solvability conditions for the coefficient matrix and proposed block preconditioners for solving the double saddle point problem (1.1) [4]. In 2019, Benzi and Beik [5] proposed Uzawa-type and augmented Lagrangian methods for solving double saddle point systems. Furthermore, Liang and Zhang [8] proposed alternating positive semi-definite splitting (APSS) preconditioners and proved that the corresponding iterative scheme is unconditionally convergent; in order to improve its effectiveness, relaxed APSS (RAPSS) preconditioners were also given.
It is worth noting that the linear system (1.1) can be equivalently restated, by reordering the unknowns and the block equations and negating the last block row, as

\[
\mathcal{A}u \equiv
\begin{bmatrix} D & -C & 0 \\ C^{T} & A & B^{T} \\ 0 & B & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix} b_{1} \\ b_{2} \\ b_{3} \end{bmatrix}
\equiv b,
\tag{1.2}
\]

where x ∈ R^p, y ∈ R^n and z ∈ R^m, which allows the use of certain specific iterative solution methods. Our discussion in this paper is based on the linear system (1.2), whose coefficient matrix we denote by 𝒜. Some solvability conditions for the double saddle point problem (1.2) were discussed in detail in [4].
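To make the block structure of (1.2) concrete, the following Python/NumPy sketch assembles 𝒜 from small random blocks satisfying the stated assumptions (A and D SPD, B of full row rank); the sizes and blocks are illustrative only and are not taken from the paper's test problems.

import numpy as np

rng = np.random.default_rng(0)
n, m, p = 8, 3, 2                                              # must satisfy n >= m + p

A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD block A
D = rng.standard_normal((p, p)); D = D @ D.T + p * np.eye(p)   # SPD block D
B = rng.standard_normal((m, n))                                # full row rank (generically)
C = rng.standard_normal((p, n))                                # rectangular block

# Coefficient matrix of the reordered system (1.2).
calA = np.block([
    [D,                -C,  np.zeros((p, m))],
    [C.T,               A,  B.T             ],
    [np.zeros((m, p)),  B,  np.zeros((m, m))],
])
print(calA.shape, np.linalg.matrix_rank(calA) == n + m + p)    # nonsingular in this instance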
In this work, we establish three new preconditioners for solving the double saddle point problem (1.2). Theoretical analysis shows that the eigenvalues of the three preconditioned matrices are all equal to 1. In addition, we derive the eigenvector distributions and upper bounds on the degrees of the minimal polynomials of the three preconditioned matrices.
The rest of this work is organized as follows. In Section 2, we discuss the invertibility conditions of the matrix 𝒜. In Section 3, we propose three new block preconditioners for 𝒜 and show that the eigenvalues of the corresponding preconditioned matrices are all equal to 1. At the end of Section 3, we also analyze the eigenvector distributions and the upper bounds on the degrees of the minimal polynomials of the corresponding preconditioned matrices. In Section 4, brief discussions are given on the practical implementation of the three preconditioners. In Section 5, numerical experiments are performed to demonstrate the effectiveness of the proposed preconditioners.
Notations. For an arbitrary matrix A, we write its transpose, null space and range space as A^T, N(A) and R(A), respectively. Moreover, if A is symmetric positive (semi)definite, we write A ≻ 0 (A ⪰ 0). Finally, we write (x; y; z) to denote the column vector (x^T, y^T, z^T)^T.

Invertibility conditions
In this section, we investigate the solvability of the double saddle point problem (1.2). Beik and Benzi discussed invertibility conditions for the coefficient matrix of the double saddle point problem (1.2) in [4], where four cases were considered. It is worthwhile to improve the invertibility conditions for the coefficient matrix in (1.2) on that basis. Therefore, we present the following two propositions for the double saddle point problem (1.2).

Proposition 2.1 Assume that A ≻ 0 and D ⪰ 0, and that B^T and C^T have full column rank. If R(B^T) ∩ R(C^T) = {0}, then the coefficient matrix 𝒜 of (1.2) is nonsingular.

Proof. Assume that 𝒜u = 0 for u = (x; y; z), i.e.,

\[
Dx - Cy = 0, \qquad C^{T}x + Ay + B^{T}z = 0, \qquad By = 0.
\tag{2.1}
\]

Multiplying the first equation in (2.1) by x^T from the left, we have x^T Dx = x^T Cy. Then multiplying the second equation in (2.1) by y^T from the left, we can obtain y^T C^T x + y^T Ay + y^T B^T z = 0. Since y^T C^T x = x^T Cy = x^T Dx ≥ 0 and y^T B^T z = (By)^T z = 0, it follows that y^T Ay = -x^T Dx ≤ 0; hence the positive definiteness of A yields y = 0. From the second equation in (2.1), we can then get C^T x + B^T z = 0. Note that C^T x + B^T z = 0 together with the assumption R(B^T) ∩ R(C^T) = {0} implies that C^T x = 0 and B^T z = 0. Since B^T and C^T have full column rank, we have x = 0 and z = 0. Hence u = (x; y; z) = 0, which implies that 𝒜 is nonsingular. The proof is completed.

Proposition 2.2 Assume that A ⪰ 0, D ≻ 0 and B has full row rank. Then the coefficient matrix 𝒜 of (1.2) is nonsingular if and only if N(C) ∩ N(B) = {0}.

Proof. Assume that 𝒜u = 0 for u = (x; y; z), i.e.,

\[
Dx - Cy = 0, \qquad C^{T}x + Ay + B^{T}z = 0, \qquad By = 0.
\tag{2.5}
\]

Multiplying the first equation in (2.5) by x^T from the left, we have

\[
x^{T}Dx = x^{T}Cy.
\tag{2.6}
\]

Then multiplying the second equation in (2.5) by y^T from the left, we can obtain

\[
y^{T}C^{T}x + y^{T}Ay + y^{T}B^{T}z = 0.
\tag{2.7}
\]

Substituting (2.6) into (2.7) and noting that y^T B^T z = (By)^T z = 0, we deduce that x^T Dx + y^T Ay = 0. In view of the positive definiteness of D and the positive semidefiniteness of A, the preceding equality implies that x = 0 and Ay = 0. Consequently, (2.5) reduces to

\[
Cy = 0, \qquad B^{T}z = 0, \qquad By = 0.
\tag{2.9}
\]

Because B has full row rank, it follows from the second equation in (2.9) that z = 0. From the first equation and the third equation in (2.9), we have y ∈ N(C) and y ∈ N(B). Since N(C) ∩ N(B) = {0}, we obtain y ∈ N(C) ∩ N(B) = {0}, i.e., y = 0. Hence u = (x; y; z) is the zero vector, which shows the invertibility of 𝒜.
Finally, let us show that N(C) ∩ N(B) = {0} is a necessary condition for the invertibility of 𝒜. If there exists a nonzero vector y ∈ N(C) ∩ N(B), then for u = (0; y; 0) we have 𝒜u = 0, which leads to a contradiction. The proof is completed.
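As a quick numerical sanity check of Proposition 2.1, the following Python/NumPy sketch (sizes and random blocks are illustrative assumptions, not the paper's test data) verifies the hypothesis R(B^T) ∩ R(C^T) = {0} through a rank test and then confirms that the assembled matrix 𝒜 is nonsingular.

import numpy as np

rng = np.random.default_rng(1)
n, m, p = 8, 3, 2                                              # n >= m + p

A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # A > 0
D = rng.standard_normal((p, p)); D = D @ D.T                   # D >= 0 suffices here
B = rng.standard_normal((m, n))                                # B^T of full column rank (generically)
C = rng.standard_normal((p, n))                                # C^T of full column rank (generically)

# R(B^T) and R(C^T) intersect trivially iff [B^T, C^T] has full column rank m + p.
assert np.linalg.matrix_rank(np.hstack([B.T, C.T])) == m + p

calA = np.block([
    [D,                -C,  np.zeros((p, m))],
    [C.T,               A,  B.T             ],
    [np.zeros((m, p)),  B,  np.zeros((m, m))],
])
assert np.linalg.matrix_rank(calA) == n + m + p   # nonsingular, as Proposition 2.1 predicts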

Three effective block preconditioners and spectral analysis of the corresponding preconditioned matrices
In this section, we describe three effective block preconditioners which can be used within Krylov subspace methods for solving the double saddle point problem (1.2). Inspired by [12], we establish three preconditioners as follows. Firstly, multiplying (1.2) from the left by a suitable invertible matrix yields the equivalent linear system (3.1). We propose three preconditioners P_1, P_2 and P_3 for the linear system (3.1), given in (3.2); through the expressions of P_1, P_2 and P_3, these also serve as preconditioners for the linear system (1.2). Next, we analyze the spectral properties of the preconditioned matrices associated with P_1, P_2 and P_3. Firstly, the spectral distribution of P_1^{-1}𝒜 is described as follows.

Theorem 3.1 Assume that A ∈ R^{n×n} and D ∈ R^{p×p} are symmetric positive definite matrices, B ∈ R^{m×n} has full row rank, C ∈ R^{p×n} is a rectangular matrix and n ≥ m + p. Then the eigenvalues of the preconditioned matrix P_1^{-1}𝒜 are all equal to 1.
Proof. According to the above analysis, the preconditioner P_1 of the linear system (1.2) can be reformulated accordingly, and the preconditioned matrix P_1^{-1}𝒜 takes the form given in (3.3). It follows directly from (3.3) that the eigenvalues of the preconditioned matrix P_1^{-1}𝒜 are all equal to 1. The proof is completed.
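Although the explicit forms of P_1, P_2 and P_3 are those given in (3.2), the mechanism behind Theorem 3.1 can be illustrated on the generic block two-by-two case (a sketch for intuition only, not the paper's exact preconditioners). For a saddle point matrix M with SPD (1,1) block A and Schur complement S = BA^{-1}B^T, one has the factorization

\[
M=\begin{bmatrix} A & B^{T}\\ B & 0\end{bmatrix}
 =\underbrace{\begin{bmatrix} I & 0\\ BA^{-1} & I\end{bmatrix}}_{L}
  \underbrace{\begin{bmatrix} A & B^{T}\\ 0 & -S\end{bmatrix}}_{P},
\qquad
MP^{-1}=L,\qquad (L-I)^{2}=0,
\]

so MP^{-1} (and the similar matrix P^{-1}M) has all eigenvalues equal to 1 and minimal polynomial of degree at most 2. The block three-by-three factorization of 𝒜 behaves analogously, which is consistent with the degree bound of 3 in Theorem 3.3 below.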
It is well known that the convergence of Krylov subspace methods is related not only to the eigenvalue distribution of the preconditioned matrix, but also to the number of corresponding linearly independent eigenvectors. The eigenvector distribution of the preconditioned matrix P_1^{-1}𝒜 is presented in the following theorem (Theorem 3.2).
Proof. Let λ be an eigenvalue of the preconditioned matrix P_1^{-1}𝒜 and let (x; y; z) be a corresponding eigenvector. From (3.3), we obtain the eigenvalue equations (3.6). From the second equation in (3.6), we get that B^T z = 0, i.e., z = 0.
The purpose of preconditioning is to make the eigenvalue distribution more clustered and to reduce the number of iterations needed to solve the corresponding problem to within a given tolerance. Among all iterative methods, Krylov subspace methods, whose favorable properties include the fact that the dimension of the corresponding Krylov subspace is bounded by the degree of the minimal polynomial [1,2,14], are extremely useful when combined with an appropriate preconditioner for solving the underlying system. In the following, we study an upper bound on the degree of the minimal polynomial of the preconditioned matrix P_1^{-1}𝒜.

Theorem 3.3 Let the preconditioner P_1 be defined as in (3.2). Then the degree of the minimal polynomial of the preconditioned matrix P_1^{-1}𝒜 is at most 3.

Proof. From (3.3), we can obtain the characteristic polynomial of the matrix P_1^{-1}𝒜. Moreover, it is easy to check that (P_1^{-1}𝒜 - I)^3 = 0. Therefore, the degree of the minimal polynomial of the preconditioned matrix P_1^{-1}𝒜 is at most 3.

Theorem 3.4 Under the assumptions of Theorem 3.1, the eigenvalues of the preconditioned matrix P_2^{-1}𝒜 are all equal to 1, and the degree of its minimal polynomial is at most 2. Moreover, the preconditioned matrix P_2^{-1}𝒜 has p + n linearly independent eigenvectors, namely: (1) p eigenvectors (x_l; 0; 0) (l = 1, 2, ..., p) corresponding to the eigenvalue 1, where x_l (l = 1, 2, ..., p) are arbitrary linearly independent vectors; (2) n eigenvectors (0; y_l; 0) (l = 1, 2, ..., n) corresponding to the eigenvalue 1, where y_l (l = 1, 2, ..., n) are arbitrary linearly independent vectors.
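Theorems 3.3 and 3.4 translate directly into iteration bounds for GMRES. The following short derivation (a standard argument in the spirit of [1,14]; the symbol T is introduced here only for illustration) makes this explicit. If the minimal polynomial of T = P^{-1}𝒜 is (t - 1)^k, then expanding (T - I)^k = 0 and isolating the identity term gives

\[
T^{-1}=\sum_{j=1}^{k}(-1)^{j+1}\binom{k}{j}\,T^{\,j-1},
\]

i.e., T^{-1} is a polynomial in T of degree k - 1 (for k = 2 this reads T^{-1} = 2I - T). Hence the solution of Tu = \tilde{b} satisfies u = T^{-1}\tilde{b} ∈ span{\tilde{b}, T\tilde{b}, ..., T^{k-1}\tilde{b}}, and GMRES with a zero initial guess terminates with the exact solution in at most k iterations; for P_1, P_2 and P_3 this gives at most 3, 2 and 2 iterations, respectively, in exact arithmetic.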

Implementation of the three preconditioners
In the practical implementation of P_1 within Krylov subspace acceleration, we need to solve the residual equation P_1 z = r at each iteration, which leads to the following algorithmic implementation for P_1.

Algorithm 4.1: Solution of P_1 z = r, where r = (r_1; r_2; r_3) and z = (z_1; z_2; z_3) are the given residual vector and the current vector, respectively, with r_1, z_1 ∈ R^p, r_2, z_2 ∈ R^n and r_3, z_3 ∈ R^m, by the following procedure:
(1) solve z_1 from D z_1 = r_1;

Similarly, the implementations of P_2 and P_3 within a Krylov subspace method can be described as follows.

Algorithm 4.2: Solution of P_2 z = r, where r = (r_1; r_2; r_3) and z = (z_1; z_2; z_3) are the given residual vector and the current vector, respectively, with r_1, z_1 ∈ R^p, r_2, z_2 ∈ R^n and r_3, z_3 ∈ R^m, by the following procedure:
(2) solve z_1 from D z_1 = r_1 + C z_2;
(3) solve z_3 from B(A + C^T D^{-1} C)^{-1} B^T z_3 = r_3 + B z_2.

Algorithm 4.3: We solve P_3 z = r for the preconditioner P_3 by the following steps:
(1) solve z_1 from D z_1 = r_1;
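For concreteness, the following Python/SciPy sketch shows how a block preconditioner solve of the above type can be wrapped as a linear operator and passed to GMRES. The particular block forward substitution in prec_solve is an illustrative assumption in the spirit of Algorithms 4.1-4.3, built from the Schur complements A + C^T D^{-1} C and B(A + C^T D^{-1} C)^{-1} B^T that appear above; it is not the paper's exact P_1, P_2 or P_3.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def make_prec(D, C, A, B):
    p, n = C.shape
    m = B.shape[0]
    S = A + C.T @ np.linalg.solve(D, C)         # Schur complement A + C^T D^{-1} C
    T = B @ np.linalg.solve(S, B.T)             # Schur complement B (A + C^T D^{-1} C)^{-1} B^T
    def prec_solve(r):
        r1, r2, r3 = r[:p], r[p:p + n], r[p + n:]
        z1 = np.linalg.solve(D, r1)             # step (1): solve D z1 = r1, as in Algorithm 4.1
        z2 = np.linalg.solve(S, r2 - C.T @ z1)  # illustrative second step
        z3 = np.linalg.solve(T, r3 - B @ z2)    # illustrative third step
        return np.concatenate([z1, z2, z3])
    N = p + n + m
    return LinearOperator((N, N), matvec=prec_solve)

# Usage with the assembled coefficient matrix calA and right-hand side b:
#   M = make_prec(D, C, A, B)
#   u, info = gmres(calA, b, M=M, maxiter=2000)

In practice, the matrices D, A + C^T D^{-1} C and B(A + C^T D^{-1} C)^{-1} B^T would be factored once (e.g., by a sparse Cholesky or LU decomposition, as done in Section 5) and the factors reused in every call of the preconditioner solve.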

Numerical experiments
In this section, we present two numerical examples to demonstrate the effectiveness of the three preconditioners P_1, P_2 and P_3 proposed in Section 3; their structures are generalizations of the examples in [11]. In order to better show the advantages of the proposed preconditioners, we apply the GMRES method with no preconditioner, with the APSS and RAPSS preconditioners proposed in [8], and with our three proposed preconditioners to solve the linear system (1.2). All experiments are performed in MATLAB 2016a on an Intel Core (8G RAM) Windows 10 system. We report the CPU time (denoted 'CPU'), the number of iteration steps (denoted 'IT') and the iteration residual (denoted 'RES') to show the effect of the preconditioners applied to the GMRES method. In all tests, runs are started from the zero vector and stopped once the relative residual satisfies the prescribed tolerance, or once the number of iteration steps exceeds k_max = 2000. It should be noted that we use the preconditioned GMRES method to solve the linear system (1.2). In Example 1, we use a sparse Cholesky decomposition to solve the linear subsystems arising when solving the residual equations; similarly, in Example 2, we use a sparse LU decomposition. Example 1. Consider the double saddle point problem (1.2), in which

Example 2. Consider the double saddle point problem (1.2), in which
Here ⊗ denotes the Kronecker product and h = 1/(p + 1). For this problem, the total dimension is 4p^2. Numerical experiments are performed for different dimensions. Note that we take b = 𝒜 · 1, where 1 denotes the vector of all ones. To show the effectiveness of the new preconditioners, we compare the proposed preconditioners with the APSS and RAPSS preconditioners by applying them within the GMRES method.
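A minimal driver for experiments of this kind is sketched below in Python (the blocks D, C, A, B, the assembled matrix calA and the routine make_prec are assumed from the earlier sketches, and the stopping parameters follow the setup described above; the paper's actual MATLAB implementation is not reproduced here).

import time
import numpy as np
from scipy.sparse.linalg import gmres

def run_test(calA, D, C, A, B):
    N = calA.shape[0]
    b = calA @ np.ones(N)                  # right-hand side b = calA * 1 (exact solution: all ones)
    M = make_prec(D, C, A, B)              # preconditioner solve from the earlier sketch

    iters = {"n": 0}
    def count(_):                          # GMRES callback: count iteration steps
        iters["n"] += 1

    t0 = time.time()
    u, info = gmres(calA, b, x0=np.zeros(N), M=M, maxiter=2000, callback=count)
    cpu = time.time() - t0                 # tolerance argument name (tol/rtol) depends on the SciPy version

    res = np.linalg.norm(b - calA @ u) / np.linalg.norm(b)     # relative residual 'RES'
    print("IT = %d, CPU = %.3fs, RES = %.2e" % (iters["n"], cpu, res))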
In Tables 1 and 2, we list the numerical results of the different preconditioned GMRES methods for Examples 1 and 2. Here, I denotes the GMRES method without preconditioning. Besides, we omit the results if the number of iteration steps exceeds 2000. From the numerical results in Table 1, we can see that the unpreconditioned GMRES method converges very slowly, while the five preconditioned methods are all effective. Comparing their iteration steps, residuals and CPU times, it follows that the three preconditioners P_1, P_2 and P_3 are more effective than the other two preconditioners, and their iteration counts are markedly reduced. In particular, the iteration counts of the P_1, P_2 and P_3 preconditioned GMRES methods remain constant even as the dimension of the double saddle point system increases, which shows that the three proposed preconditioners are advantageous for solving the double saddle point problem (1.2). It can be seen from Table 2 that, as the dimension increases, the iteration time of the GMRES method under the three proposed preconditioners grows considerably. The reason may be that, in addition to the solution of sublinear systems with the matrix D, solving the residual equations under the three preconditioners also involves the solution of sublinear systems with the matrices A + C^T D^{-1} C and B(A + C^T D^{-1} C)^{-1} B^T. Both A + C^T D^{-1} C and B(A + C^T D^{-1} C)^{-1} B^T are dense matrices, and it is expensive to solve the corresponding sublinear systems by matrix decompositions. Nevertheless, the CPU times of the three preconditioners are still less than those of the APSS and RAPSS preconditioners. Theoretical analysis shows that the degrees of the minimal polynomials of the three preconditioned matrices are at most 3, 2 and 2, respectively, whereas the iteration counts observed in the numerical results for the three preconditioners are 4, 3 and 3, respectively. In fact, this is in accordance with common practice; we refer to [14]. In Figures 1 and 2, we plot the eigenvalue distributions of the original coefficient matrix and of the P_1, P_2 and P_3 preconditioned matrices for Example 1. In Figure 1, we test a problem of size 1176; in Figure 2, a problem of size 4656. From these two figures, we observe that the P_1, P_2 and P_3 preconditioners greatly improve the eigenvalue distribution of the original coefficient matrix. Most importantly, the eigenvalues of these three preconditioned matrices are all clustered at a single point, which agrees with the theoretical analysis in Section 3. Moreover, the three preconditioned matrices have tightly clustered spectra, which leads to stable numerical performance. In Figures 3 and 4, we plot the eigenvalue distributions of the original coefficient matrix and of the P_1, P_2 and P_3 preconditioned matrices with different dimensions for Example 2. From these figures, it is easy to observe that the P_1, P_2 and P_3 preconditioners greatly improve the eigenvalue distribution of the original coefficient matrix.