Projected Krylov Methods for Solving Non-Symmetric Two-by-Two Block Linear Systems Arising from Fictitious Domain Formulations

The paper deals with the solution of large non-symmetric two-by-two block linear systems with a singular leading submatrix. Our algorithm consists of two levels. The outer level combines the Schur complement reduction with orthogonal projectors, which leads to a linear equation on subspaces. To solve this equation, we use a Krylov-type method representing the inner level of the algorithm. We propose a general technique for deriving from standard Krylov methods their projected variants, which generate iterations on subspaces. Then we derive the projected GMRES. The efficiency of our approach is illustrated by examples arising from the combination of the fictitious domain and FETI methods.


Introduction
We consider two-by-two block linear systems

    A u + B_1^T λ = f,
    B_2 u + C λ = g,        (1)

with A ∈ R^{n×n} singular, C ∈ R^{m×m}, B_1, B_2 ∈ R^{m×n}, f, u ∈ R^n, g, λ ∈ R^m, and m ≪ n. Systems of this type arise in a variety of scientific and engineering applications [2]. For instance, when the FETI (Finite Element Tearing and Interconnecting) domain decomposition method [8], [6], [20] is used for the numerical solution of elliptic PDEs, we get a saddle-point linear system, i.e., Eq. (1) with A symmetric positive semidefinite, B_1 = B_2, and C = 0. The FETI algorithm consists of two levels. The outer level combines the Schur complement reduction, which requires a generalized inverse of A, with the null-space method performed by orthogonal projectors. It results in a linear equation whose matrix is singular but is, fortunately, a symmetric positive definite operator on a subspace V ⊂ R^m. Therefore, this equation can be solved in V by the projected CGM, which represents the inner level of the FETI algorithm. Note that the projected CGM was also developed in the context of optimization problems [12].
The extension of the FETI algorithm for solving Eq. (1), called the PSCM (Projected Schur Complement Method), was proposed in [16] (for C = 0). Although the outer level is based on the same ideas as in the FETI algorithm, we arrive at a linear equation whose matrix is an invertible operator between two different subspaces V_1 and V_2 of R^m. Therefore, a projected iterative method for non-symmetric, indefinite operators is needed in the inner level of the PSCM. In this paper, we develop a general technique enabling us to derive from standard Krylov methods their projected variants, which generate iterations on subspaces. Then we derive the projected GMRES (ProjGMRES). Projected Krylov methods for solving Eq. (1) with A non-symmetric, B_1 = B_2, and C = 0 have also been developed in [23] (without the Schur complement reduction).
Our research is motivated by the development of the fictitious domain (FD) method for solving elliptic PDEs. The main idea consists in an extension of the original PDE problem defined in a domain ω to a larger domain Ω ⊃ ω with a simple shape, e.g., a box. The new problem is chosen in such a way that its solution restricted to ω coincides with the solution of the original problem. Since Ω has a simple shape, one can use specific partitions based on non-fitted uniform meshes that do not respect the geometry of ω. Therefore, the resulting stiffness matrix, represented in Eq. (1) by A, does not depend on ω. Moreover, one can combine the FD method with other techniques, e.g., with the FETI domain decomposition method, which enables an efficient parallelization of computations. There are several ways to define the new problem in Ω. One of them uses boundary Lagrange multipliers introduced on the boundary γ := ∂ω [9], [10], [13], [14]. The linear system Eq. (1) arising from the finite element approximation is typically symmetric. Unfortunately, this approach suffers from a serious drawback: the computed solution generally has a non-zero jump of its normal derivative across γ. If non-fitted meshes are used, this singularity appears inside some elements of the partition, namely those whose interior is cut by γ. Consequently, the theoretical rate of convergence of approximate solutions is slow, at most 1/2 [16], [15]. To improve the accuracy in ω, the authors of [16] proposed the smooth FD method. Instead of Lagrange multipliers on γ, control variables defined on a closed curve Γ in Ω having a positive distance from ω are used. The solution is still singular in Ω, but the singularity is shifted away from ω to Γ and, as a result, convergence in ω becomes faster. The algebraic formulation leads to the non-symmetric linear system Eq. (1), in which B_1 and B_2 play the role of the "trace matrices" (i.e., approximations of boundary operators) on Γ and γ, respectively, and C = 0.
Let us introduce some conventions that we use throughout the whole paper. If M ∈ R^{k×l} is a matrix, then N(M) := {v ∈ R^l : M v = 0} and R(M) := {M v : v ∈ R^l} denote its null-space and range-space, respectively, and M^T is the transpose of M. The Moore-Penrose inverse of M will be denoted by M†. The symbol ‖·‖ stands for the Euclidean norm of vectors, i.e., ‖v‖ = (v^T v)^{1/2} for v ∈ R^k. The spectral condition number of a symmetric, positive definite matrix N ∈ R^{k×k} is given by κ(N) = σ_max(N)/σ_min(N), where 0 < σ_min(N) ≤ σ_max(N) are the smallest and largest eigenvalues of N, respectively. Finally, I and 0 denote the identity and zero matrices of an appropriate order, respectively.
It is easy to show that (A1) implies

    N(A^T) ∩ N(B_1) = {0},   N(A) ∩ N(B_2) = {0}.        (2)

Further, (A2) yields the existence of R_A, R'_A ∈ R^{n×l} whose columns span N(A) and N(A^T), respectively, where l := l(A) is the nullity of A. By X ∈ R^{n×n} we denote an arbitrary generalized inverse of A satisfying A = AXA. Our algorithm requires that X, R_A, and R'_A be at our disposal so that one can efficiently compute the actions of these matrices on vectors; see [19] for computational details.
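As an illustration, for a small dense A the objects X, R_A, and R'_A can be obtained from the SVD; this is only a minimal NumPy sketch with random test data (the implementation of [19] combines a Cholesky factorization with the SVD and never forms these matrices densely):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a singular non-symmetric test matrix A of order n with nullity l.
n, l = 8, 2
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
s = np.concatenate([rng.uniform(1.0, 2.0, n - l), np.zeros(l)])
A = U @ np.diag(s) @ V.T

# SVD-based construction: columns of R_A span N(A), columns of R_At
# span N(A^T); X = A^+ is one admissible generalized inverse
# (any X with A X A = A would do).
Us, ss, Vts = np.linalg.svd(A)
R_A = Vts[n - l:, :].T        # right singular vectors for zero sigma
R_At = Us[:, n - l:]          # left singular vectors for zero sigma
X = np.linalg.pinv(A)         # Moore-Penrose inverse

assert np.allclose(A @ R_A, 0)        # R_A spans N(A)
assert np.allclose(A.T @ R_At, 0)     # R_At spans N(A^T)
assert np.allclose(A @ X @ A, A)      # generalized-inverse identity
```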
To describe the outer level of the PSCM, we start with the Schur complement reduction. The first block equation in Eq. (1) is satisfied iff

    f − B_1^T λ ∈ R(A),  i.e.,  R'_A^T (f − B_1^T λ) = 0.        (3)

The first component of the solution is then given by

    u = X (f − B_1^T λ) + R_A α,        (4)

where α ∈ R^l is an appropriate vector. Substituting Eq. (4) into the second block equation in Eq. (1), we arrive at

    B_2 X B_1^T λ − C λ − B_2 R_A α = B_2 X f − g.        (5)

Let us denote

    F := B_2 X B_1^T − C,  d := B_2 X f − g,  G_1 := −R_A^T B_2^T,  G_2 := R'_A^T B_1^T,  e := R'_A^T f.        (6)

The matrix F is the (negative) Schur complement of the block A in the whole matrix of Eq. (1). Obviously, F is not unique, since it depends on the choice of X, R_A, and R'_A. Nevertheless, one can prove that each F is non-singular due to (A1). Summarizing Eq. (3) and Eq. (5), we see that the pair (λ, α) satisfies

    F λ + G_1^T α = d,
    G_2 λ = e.        (7)

To solve Eq. (1), one computes (λ, α) from Eq. (7) and then u by Eq. (4).
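The whole reduction can be checked numerically on random, consistent test data. The following NumPy sketch assumes the block convention A u + B_1^T λ = f, B_2 u + C λ = g of Eq. (1) and the definitions F = B_2 X B_1^T − C, G_1 = −R_A^T B_2^T, G_2 = R'_A^T B_1^T used above; all data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, l = 8, 3, 2

# Singular non-symmetric A with nullity l, plus random coupling blocks.
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = U[:, :n - l] @ np.diag(rng.uniform(1.0, 2.0, n - l)) @ V[:, :n - l].T
B1 = rng.standard_normal((m, n))
B2 = rng.standard_normal((m, n))
C = rng.standard_normal((m, m))

# Consistent right-hand sides from a known solution of the full system.
u_true = rng.standard_normal(n)
lam_true = rng.standard_normal(m)
f = A @ u_true + B1.T @ lam_true
g = B2 @ u_true + C @ lam_true

X = np.linalg.pinv(A)          # one admissible generalized inverse
R_A = V[:, n - l:]             # columns span N(A)
R_At = U[:, n - l:]            # columns span N(A^T)

# Reduced quantities of the Schur complement reduction.
F = B2 @ X @ B1.T - C
G1 = -(R_A.T @ B2.T)
G2 = R_At.T @ B1.T
d = B2 @ X @ f - g
e = R_At.T @ f

# Solve the reduced system for (lam, alpha), then recover u.
K = np.block([[F, G1.T], [G2, np.zeros((l, l))]])
sol = np.linalg.solve(K, np.concatenate([d, e]))
lam, alpha = sol[:m], sol[m:]
u = X @ (f - B1.T @ lam) + R_A @ alpha

assert np.allclose(A @ u + B1.T @ lam, f)   # first block equation
assert np.allclose(B2 @ u + C @ lam, g)     # second block equation
assert np.allclose(lam, lam_true)           # multiplier recovered
```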
Assuming λ to be known, α is given by

    α = (G_1 G_1^T)^{-1} G_1 (d − F λ),        (8)

as follows from the first block equation in Eq. (7).
The inverse (G_1 G_1^T)^{-1} in Eq. (8) is well-defined, since the second relation in Eq. (2) guarantees the full row-rank of G_1 (similarly for G_2 due to the first relation in Eq. (2)). It remains to show how to compute λ.
We use the null-space method performed by the orthogonal projectors P_i onto N(G_i):

    P_i := I − G_i^T (G_i G_i^T)^{-1} G_i,   i = 1, 2.        (9)

Applying P_1 to the first block equation in Eq. (7) and using P_1 G_1^T = 0, we arrive at the problem in terms of λ alone:

    P_1 F λ = P_1 d,   G_2 λ = e.

To get a linear equation in N(G_2), we decompose λ into two orthogonal components,

    λ = λ_N + λ_R,   λ_N ∈ N(G_2),   λ_R := G_2^T (G_2 G_2^T)^{-1} e ∈ R(G_2^T).        (10)

Hence, λ_R is easily computable. Assuming that λ_R is known, we find that λ_N satisfies the linear equation

    P_1 F λ_N = q,        (11)

where q := P_1 (d − F λ_R). Although the matrix P_1 F is singular, the following theorem guarantees the solvability of Eq. (11).
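For a small dense G with full row rank, the projector of Eq. (9) can be formed explicitly; this minimal sketch (random G, names ours) verifies its defining properties, although in practice only the action of P_i on a vector is implemented:

```python
import numpy as np

rng = np.random.default_rng(1)
l_dim, m = 2, 6
G = rng.standard_normal((l_dim, m))   # full row rank (generic)

# Orthogonal projector onto N(G), cf. Eq. (9).
P = np.eye(m) - G.T @ np.linalg.solve(G @ G.T, G)

assert np.allclose(G @ P, 0)          # range of P lies in N(G)
assert np.allclose(P @ P, P)          # idempotent
assert np.allclose(P, P.T)            # symmetric, hence orthogonal projector
```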
Theorem 1 [19] The linear operator P 1 F is invertible between N (G 2 ) and N (G 1 ).
Step 6: The matrices F, P_1, and P_2 need not be assembled explicitly if an appropriate iterative method is used in Step 4. We need only actions on μ ∈ R^m, evaluated successively as indicated by the parentheses:

    P_1 F μ = P_1 (B_2 (X (B_1^T μ)) − C μ).

The following theorem shows that P_1 F is invariant with respect to the generalized inverse X.
Theorem 2 [19] Let X be an arbitrary generalized inverse of A and let A† be the Moore-Penrose one. Let us denote μ_X := P_1 B_2 X B_1^T μ and μ_{A†} := P_1 B_2 A† B_1^T μ. Then μ_X = μ_{A†} for each μ ∈ N(G_2).

Projected Krylov Methods
In this section, we present a general technique enabling us to derive projected Krylov methods for solving Eq. (11). Since our approach is general, we replace Eq. (11) by the following abstract problem: find x ∈ V_2 such that

    M x = q,        (12)

where M ∈ R^{m×m} represents an invertible operator from V_2 onto V_1, V_1 and V_2 are subspaces of R^m of dimension m − l, and q ∈ V_1. These assumptions guarantee that there is a unique solution to Eq. (12). Note that the matrix M may be singular (on R^m). Let Z_1, Z_2 ∈ R^{m×(m−l)} be matrices whose columns span V_1 and V_2, respectively. Substituting x = Z_2 x̄ into Eq. (12) and multiplying by Z_1^T, we find that Eq. (12) reduces to the system of linear equations:

    N x̄ = q̄,   N := Z_1^T M Z_2,   q̄ := Z_1^T q.        (13)

Lemma 1 The matrix N in Eq. (13) is non-singular.
Proof. Let Z_1^T M Z_2 ȳ = 0 be the homogeneous system. Since M Z_2 ȳ ∈ V_1 and the columns of Z_1 form a basis of V_1, this yields M Z_2 ȳ = 0 (indeed, Z_1^T v = 0 with v = Z_1 c gives Z_1^T Z_1 c = 0, so c = 0). Setting y_2 := Z_2 ȳ, the equality M y_2 = 0 implies y_2 = 0 due to the invertibility of M on V_2. Finally, Z_2 ȳ = 0 yields ȳ = 0, as Z_2 has full column-rank. Hence, the solution to the homogeneous system is trivial.
To propose a projected method for solving Eq. (12), we start from a (standard) Krylov method applied to Eq. (13) that generates approximations to the solution of Eq. (13) in R^{m−l}. Our aim is to transform these approximations to get directly approximations of the solution to Eq. (12). We will also replace Z_1 and Z_2 by the respective orthogonal projectors so that Z_1 and Z_2 are not explicitly needed in computations.
First of all we show how to transform three vector operations performed by Krylov methods from R^{m−l} to V_2, namely, the linear combination and the scalar product of vectors and the matrix-vector multiplication. We will use the notation introduced already in Eq. (12) and Eq. (13): if x̄ ∈ R^{m−l} is a "short" vector, then x ∈ V_2 denotes the "extended" vector given by x = Z_2 x̄. As x̄ is the vector of coordinates of x with respect to the columns of Z_2, there is a one-to-one correspondence between x̄ and x (similarly for ȳ, z̄ ∈ R^{m−l} and y, z ∈ V_2, respectively).
Let a, b ∈ R. The linear combination is easy to transform, since

    a x + b y = a Z_2 x̄ + b Z_2 ȳ = Z_2 (a x̄ + b ȳ).        (14)

Without loss of generality, we may assume that Z_2 is orthogonal, i.e.,

    Z_2^T Z_2 = I.        (15)

Using Eq. (15) in the scalar product of x̄ and ȳ, we get

    x̄^T ȳ = x̄^T Z_2^T Z_2 ȳ = x^T y.        (16)

From Eq. (14) and Eq. (16) we see that linear combinations and scalar products are identical for "short" and "extended" vectors, i.e., the respective formulas will be the same in a Krylov method and in its projected variant.
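A two-line numerical check of Eq. (14) and Eq. (16) with a random orthonormal Z_2 (test data ours):

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 7, 4
# Orthonormal basis Z2 of a k-dimensional subspace V2, cf. Eq. (15).
Z2 = np.linalg.qr(rng.standard_normal((m, k)))[0]

xb, yb = rng.standard_normal(k), rng.standard_normal(k)
x, y = Z2 @ xb, Z2 @ yb               # "extended" vectors in V2

# Eq. (14): linear combinations commute with the extension.
assert np.allclose(2 * x - 3 * y, Z2 @ (2 * xb - 3 * yb))
# Eq. (16): scalar products of "short" and "extended" vectors agree.
assert np.isclose(xb @ yb, x @ y)
```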
Let us discuss the matrix-vector multiplication by N from Eq. (13). We get:

    Z_2 (N x̄) = Z_2 Z_1^T M Z_2 x̄ = Z_2 Z_1^T M x.        (17)

To replace Z_2 Z_1^T by an orthogonal projector, we consider M_P ∈ R^{m×m} representing another invertible operator from V_2 onto V_1. Then we introduce Z_1 by

    Z_1 := M_P Z_2.        (18)

It follows immediately from the invertibility of M_P that the columns of Z_1 span V_1. Therefore, Z_1 is well-defined by Eq. (18). Inserting Eq. (18) into the last equality in Eq. (17), we arrive at

    Z_2 (N x̄) = Z_2 Z_2^T M_P^T M x = P_2 M_P^T M x,        (19)

where

    P_2 := Z_2 Z_2^T        (20)

is the orthogonal projector from R^m onto V_2 due to Eq. (15). When solving Eq. (11), one can replace Eq. (20) by Eq. (9). This avoids the explicit knowledge of Z_2.
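The identity of Eq. (19), i.e., that the extended matvec can be evaluated without any explicit basis matrices, is easy to verify numerically (random test data, Variant 1 choice M_P = M for concreteness):

```python
import numpy as np

rng = np.random.default_rng(3)
m, k = 7, 4
Z2 = np.linalg.qr(rng.standard_normal((m, k)))[0]   # orthonormal basis of V2
M = rng.standard_normal((m, m))                     # generically invertible

MP = M                       # one admissible choice of M_P
Z1 = MP @ Z2                 # Eq. (18): columns span V1
N = Z1.T @ M @ Z2            # reduced matrix, Eq. (13)
P2 = Z2 @ Z2.T               # projector onto V2, Eq. (20)

xb = rng.standard_normal(k)
x = Z2 @ xb
# Eq. (19): the "extended" product equals a projected product without Z's.
assert np.allclose(Z2 @ (N @ xb), P2 @ MP.T @ (M @ x))
```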
It remains to show how to choose M P in practice.We propose two variants.
Variant 1: M_P = M, so that Z_1 = M Z_2.

Variant 2: M_P = P_1, where P_1 is the orthogonal projector from R^m onto V_1. One can compute P_1 again by Eq. (9).

Below we analyze the matrix N in Eq. (13) for both variants using the smallest and largest singular values of M on V_2, defined by:

    σ_min(M|V_2) := min_{x ∈ V_2, x ≠ 0} ‖M x‖/‖x‖,   σ_max(M|V_2) := max_{x ∈ V_2, x ≠ 0} ‖M x‖/‖x‖.

Variant 1
In this case, N = Z_2^T M^T M Z_2 is symmetric, positive definite so that the CGM can be applied to Eq. (13). It is well-known that its convergence rate is determined by the spectral condition number κ(N) of N [11].
Theorem 3 Let Z_2 be orthogonal. It holds:

    κ(N) = κ(M|V_2)²,   κ(M|V_2) := σ_max(M|V_2)/σ_min(M|V_2).

Proof. Recall that κ(N) = σ_max(N)/σ_min(N), where σ_min(N), σ_max(N) are the smallest and largest eigenvalues of N, respectively. We get:

    σ_max(N) = max_{x̄ ≠ 0} (x̄^T Z_2^T M^T M Z_2 x̄)/(x̄^T x̄) = max_{x ∈ V_2, x ≠ 0} ‖M x‖²/‖x‖² = σ_max(M|V_2)².

Analogously, σ_min(N) = σ_min(M|V_2)².

As M_P = M is invertible (between V_2 and V_1), N is non-singular for any choice of the input data. On the other hand, two expensive matrix-vector multiplications, by M and by its transpose, are needed in Eq. (19). Moreover, as κ(N) is the square of κ(M|V_2), it is usually too high, so the convergence rate of the CGM (and of other Krylov methods) may be slow.
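The squaring of the condition number stated in Theorem 3 can be observed directly: the singular values of M restricted to V_2 are those of M Z_2, and κ(N) equals their squared ratio (random test data):

```python
import numpy as np

rng = np.random.default_rng(4)
m, k = 7, 4
Z2 = np.linalg.qr(rng.standard_normal((m, k)))[0]
M = rng.standard_normal((m, m))

N = Z2.T @ M.T @ M @ Z2               # Variant 1 reduced matrix

# Singular values of M restricted to V2 are those of M @ Z2.
s = np.linalg.svd(M @ Z2, compute_uv=False)
kappa_restricted = s[0] / s[-1]

kappa_N = np.linalg.cond(N)           # spectral condition number (N is SPD)
assert np.isclose(kappa_N, kappa_restricted**2, rtol=1e-6)
```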

Variant 2
Now the invertibility of M_P = P_1 (as an operator from V_2 onto V_1) is not guaranteed. To get this property, we need the following theorem.
Theorem 4 Let P_1 be the orthogonal projector onto V_1. The restriction P_1|V_2 : V_2 → V_1 is invertible iff

    V_2 ∩ V_1^⊥ = {0}.        (21)

Proof. First we prove the implication "⇐". Any x ∈ V_2 can be split into two orthogonal components, x = x_{V_1} + x_{V_1^⊥} with x_{V_1} ∈ V_1, x_{V_1^⊥} ∈ V_1^⊥, and P_1 x = x_{V_1}. If x ≠ 0, then Eq. (21) yields x ∉ V_1^⊥, i.e., x_{V_1} ≠ 0, so that P_1 x ≠ 0. Therefore, the only solution of the homogeneous equation P_1 x = 0 on V_2 is trivial and, since dim V_1 = dim V_2 = m − l, the invertibility of P_1 from V_2 onto V_1 is proved. To prove the opposite implication "⇒", we assume that there is x ∈ V_2 ∩ V_1^⊥, x ≠ 0. Then x is a non-zero solution of the homogeneous equation P_1 x = 0 on V_2, which contradicts the invertibility of P_1 on V_2.

Condition Eq. (21) is equivalent to the fact that the angle θ between the subspaces V_2 and V_1^⊥ is non-zero. Recall that θ = arccos γ, where

    γ := max_{x ∈ V_2, y ∈ V_1^⊥, x ≠ 0, y ≠ 0} (x^T y)/(‖x‖ ‖y‖).
Since Eq. (21) yields 0 ≤ γ < 1, the strengthened Cauchy-Schwarz inequality holds:

    |x^T y| ≤ γ ‖x‖ ‖y‖,   x ∈ V_2,  y ∈ V_1^⊥.        (22)

The angle between the subspaces V_2, V_1^⊥ and between their orthogonal complements V_2^⊥, V_1 is the same [17]. Therefore,

    |x^T y| ≤ γ ‖x‖ ‖y‖,   x ∈ V_2^⊥,  y ∈ V_1,

with γ as above. Moreover, the equality is achieved for some non-zero vectors x* ∈ V_2^⊥, y* ∈ V_1:

    |x*^T y*| = γ ‖x*‖ ‖y*‖.        (23)

In Variant 2 we have Z_1 = P_1 Z_2, so that N = Z_2^T P_1 M Z_2 in Eq. (13). The following theorem proves, among others, that this N is close to a singular matrix when θ is small.

Theorem 5 Let Z_2 be orthogonal. Let θ be the angle between the subspaces V_2 and V_1^⊥. The smallest and largest singular values σ_min(N), σ_max(N) of N on R^{m−l}, respectively, satisfy:

    σ_min(M|V_2) sin θ ≤ σ_min(N) ≤ σ_max(M|V_2) sin θ,        (24)
    σ_min(M|V_2) sin θ ≤ σ_max(N) ≤ σ_max(M|V_2).        (25)

Proof. Since Z_2^T Z_2 = I and P_2 = Z_2 Z_2^T, we have, for any x = Z_2 x̄,

    ‖N x̄‖ = ‖Z_2^T P_1 M x‖ = ‖Z_2^T M x‖ = ‖P_2 M x‖,   ‖x̄‖ = ‖x‖,        (26)

where we used P_1 M x = M x, as M x ∈ V_1. Assuming y ∈ V_1 and using Eq. (22)-(23), we have

    ‖P_2 y‖² = ‖y‖² − ‖(I − P_2) y‖² ≥ (1 − γ²) ‖y‖² = ‖y‖² sin² θ,

and ‖P_2 y‖ ≤ ‖y‖. Since the lower bound is achieved for y* from Eq. (23), we get

    min_{y ∈ V_1, y ≠ 0} ‖P_2 y‖/‖y‖ = sin θ.        (27)

Now, by Eq. (26),

    σ_min(N) = min_{x ∈ V_2, x ≠ 0} ‖P_2 M x‖/‖x‖.        (28)

Let x in the last expression of Eq. (28) be chosen as the x ∈ V_2 satisfying M x = y*, where y* is the minimizer from Eq. (27). Then

    σ_min(N) ≤ ‖P_2 M x‖/‖x‖ = sin θ ‖M x‖/‖x‖ ≤ σ_max(M|V_2) sin θ

proves the upper bound in Eq. (24). Further, Eq. (27)-(28) yield the lower bound in Eq. (24):

    σ_min(N) ≥ sin θ min_{x ∈ V_2, x ≠ 0} ‖M x‖/‖x‖ = σ_min(M|V_2) sin θ.

To prove Eq. (25), we start from

    σ_max(N) = max_{x ∈ V_2, x ≠ 0} ‖P_2 M x‖/‖x‖.

The upper bound follows immediately from the fact that ‖P_2 y‖ ≤ ‖y‖. Using the same x and y* as above, we obtain the lower bound:

    σ_max(N) ≥ ‖P_2 M x‖/‖x‖ = sin θ ‖M x‖/‖x‖ ≥ σ_min(M|V_2) sin θ.
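The bounds of Eq. (24)-(25) can be checked numerically: with V_1 := M V_2 and Variant 2, one computes γ = cos θ as the largest correlation between V_2 and V_1^⊥ and compares the singular values of N with those of M restricted to V_2 (random test data, names ours):

```python
import numpy as np

rng = np.random.default_rng(5)
m, k = 8, 3
Z2 = np.linalg.qr(rng.standard_normal((m, k)))[0]    # orthonormal basis of V2
M = rng.standard_normal((m, m))                      # generically invertible

Q1 = np.linalg.qr(M @ Z2)[0]                         # orthonormal basis of V1
P1 = Q1 @ Q1.T                                       # projector onto V1
N = (P1 @ Z2).T @ M @ Z2                             # Variant 2: Z1 = P1 Z2

# Orthonormal basis of the complement of V1, then gamma = cos(theta).
Qfull = np.linalg.qr(np.hstack([Q1, rng.standard_normal((m, m - k))]))[0]
Q1perp = Qfull[:, k:]
gamma = np.linalg.svd(Z2.T @ Q1perp, compute_uv=False)[0]
sin_theta = np.sqrt(max(0.0, 1.0 - gamma**2))

sM = np.linalg.svd(M @ Z2, compute_uv=False)         # sigma(M|V2)
sN = np.linalg.svd(N, compute_uv=False)
tol = 1e-10
assert sM[-1] * sin_theta - tol <= sN[-1] <= sM[0] * sin_theta + tol  # (24)
assert sM[-1] * sin_theta - tol <= sN[0] <= sM[0] + tol               # (25)
```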
The GMRES method is initialized by x̄_0 ∈ R^{m−l} and stopped if the terminating tolerance ε > 0 or the maximal number of iterations k_max ≥ 1 is reached. The obtained result x̄_k̄ ∈ R^{m−l} approximates the solution of Eq. (13). Here and in what follows, k̄ stands for the final number of iterations. In step 16° of the scheme below, H = (h_{i,j}) ∈ R^{(k+1)×k} denotes the upper Hessenberg matrix and e_1 = (1, 0, . . ., 0)^T ∈ R^{k+1} for k = k̄. The minimizer v ∈ R^{k̄} in 16° can be efficiently computed using Givens rotations and, then, solving the resulting triangular linear system [24], [11]. The columns of Q̄ are the Arnoldi vectors q̄_k ∈ R^{m−l}, so that Q̄ = (q̄_1, . . ., q̄_k̄) ∈ R^{(m−l)×k̄} represents an orthonormal basis of the respective Krylov space. Note that the norm of the residual err in step 13° may be obtained without any explicit knowledge of x̄_k [21]. The computation of err is based on applying the Givens rotations to the extended matrix H̄ = (H | β e_1) ∈ R^{(k+1)×(k+1)}. Eliminating the subdiagonal of H̄, we get err as the absolute value of the last diagonal entry of the resulting triangular matrix T:

    err = |T_{k+1,k+1}|.        (29)

The GMRES method for solving Eq. (13) is introduced below. Our implementation of the ProjGMRES for solving Eq. (12) is derived from the GMRES using the following results of Section 3: (i) the linear combinations of vectors in the corresponding steps of the GMRES and the ProjGMRES are given by the same formulas; (ii) the scalar products, including the Euclidean norms of vectors, in the GMRES and the ProjGMRES give the same values; (iii) the matrix-vector multiplication by N in the GMRES is replaced by P_2 M_P^T M in the ProjGMRES. According to (i)-(iii), one can insert most of the GMRES formulas into the ProjGMRES, removing only the bars over vectors. The remaining formulas are discussed below. The computation of the residual in step 1° of the ProjGMRES is based on the fact that r_0 = Z_2 r̄_0, q = Z_1 q̄, and Eq. (18):

    r_0 = P_2 M_P^T (q − M x_0).

The analogous formula is formally used in step 13°; however, the efficient computation of err in the ProjGMRES is given again by the principle represented by Eq. (29). Similarly, the minimization problem in step 16° can be realized in the same way as in the GMRES. The matrix Q = (q_1, . . ., q_k̄) ∈ R^{m×k̄} consists of the extended Arnoldi vectors q_k ∈ V_2. The inputs M, q, P_2, and M_P are introduced in Section 3, and x_0 ∈ V_2. The meaning of ε and k_max is the same as in the GMRES. Obviously, if x_0 = Z_2 x̄_0, then the GMRES and the ProjGMRES converge in the same number of iterations and x_k = Z_2 x̄_k (in exact arithmetic).
GMRES(N, q̄, x̄_0, ε, k_max) → x̄_k̄

The Arnoldi vectors q_k generated in the ProjGMRES belong to V_2. This property may be violated due to round-off errors, so that q_k may be deflected from V_2. In order to remove this kind of instability, we propose to project each r_k into V_2, i.e., we add r_k := P_2 r_k in step 12°.
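The projected GMRES described above can be sketched in a few dozen lines of NumPy. This is only an illustrative sketch (function and variable names are ours): it uses the matvec P_2 M_P^T M, the initial residual r_0 = P_2 M_P^T (q − M x_0), the stabilizing re-projection onto V_2, and a dense least-squares solve in place of the Givens-rotation recursion of Eq. (29):

```python
import numpy as np

def proj_gmres(M, MP_T, P2, q, x0, eps=1e-10, k_max=50):
    """Sketch of ProjGMRES for M x = q, x in V2, q in V1.

    M, MP_T, P2: callables for the actions of M, M_P^T, and the
    orthogonal projector onto V2, respectively.
    """
    r = P2(MP_T(q - M(x0)))            # r0 = P2 M_P^T (q - M x0)
    beta = np.linalg.norm(r)
    if beta == 0.0:
        return x0
    Q = [r / beta]                     # extended Arnoldi vectors, kept in V2
    H = np.zeros((k_max + 1, k_max))
    k = 0
    while k < k_max:
        w = P2(MP_T(M(Q[k])))          # matvec; P2 also re-projects onto V2
        for i in range(k + 1):         # modified Gram-Schmidt
            H[i, k] = Q[i] @ w
            w = w - H[i, k] * Q[i]
        H[k + 1, k] = np.linalg.norm(w)
        k += 1
        if H[k, k - 1] < eps * beta:   # Krylov space exhausted (breakdown)
            break
        Q.append(w / H[k, k - 1])
    # Minimize ||beta e1 - H y|| (dense least squares instead of Givens).
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)[0]
    return x0 + np.column_stack(Q[:k]) @ y

# Hypothetical data: V2 random, M acting on V2, Variant 1 (M_P = M).
rng = np.random.default_rng(6)
m, kdim = 12, 5
Z2 = np.linalg.qr(rng.standard_normal((m, kdim)))[0]
P2m = Z2 @ Z2.T
Mm = rng.standard_normal((m, m)) @ P2m      # maps V2 onto V1 := M V2
x_true = Z2 @ rng.standard_normal(kdim)
q = Mm @ x_true

x = proj_gmres(lambda v: Mm @ v, lambda v: Mm.T @ v, lambda v: P2m @ v,
               q, np.zeros(m))
assert np.linalg.norm(x - x_true) < 1e-6
```

With M_P = M, the sketch runs CG-like normal-equation iterations in V_2, as in Variant 1; passing the projector onto V_1 as MP_T instead reproduces Variant 2.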
Let us turn our attention to problem Eq. (11), i.e., to Eq. (12) with M := P_1 F, V_1 := N(G_1), and V_2 := N(G_2). The formulas in steps 1°, 7° of the ProjGMRES take the form

    r_0 := P_2 F^T P_1 (q − P_1 F x_0),   r_k := P_2 F^T P_1 F q_k        (30)

for the Variant 1, or

    r_0 := P_2 P_1 (q − F x_0),   r_k := P_2 P_1 F q_k        (31)

for the Variant 2, respectively. The orthogonal projectors P_1, P_2 are given by Eq. (9).

Numerical Experiments
To test our algorithms, we shall solve linear systems Eq. (1) arising from the combination of the FD and FETI methods applied to finite element approximations of linear elasticity problems.

Formulation
Let us consider an elastic body represented by a bounded domain ω ⊂ R² with a sufficiently smooth boundary ∂ω consisting of two disjoint parts γ_u and γ_p, ∂ω = γ̄_u ∪ γ̄_p (see Fig. 1). Displacements g ∈ (L²(γ_u))² are prescribed on γ_u, while surface tractions of density p ∈ (L²(γ_p))² act on γ_p. Finally, we suppose that the body ω is subject to volume forces of density f|_ω, where f ∈ (L²_loc(R²))². We seek a displacement field u in ω satisfying the equilibrium equation in ω and the Dirichlet and Neumann conditions on the boundary:

    −div σ(u) = f   in ω,
    u = g           on γ_u,        (32)
    σ(u) ν = p      on γ_p,

where σ(u) is the stress tensor in ω corresponding to u and ν stands for the unit outward normal vector to ∂ω. The stress tensor σ(u) is related to the linearized strain tensor ε(u) = (∇u + ∇^T u)/2 by Hooke's law for elastic, homogeneous, and isotropic materials:

    σ(u) = c_1 tr(ε(u)) I + 2 c_2 ε(u),

where "tr" denotes the trace of matrices, I ∈ R^{2×2} is the identity matrix, and c_1, c_2 > 0 are the Lamé constants.
Let us take a box Ω such that ω̄ ⊂ Ω and construct a closed curve Γ surrounding ω; see Fig. 1. Instead of Eq. (33), the weak formulation of Eq. (32), we propose to solve a fictitious domain (FD) formulation of Eq. (32) posed in Ω: the solution û is sought in Ω together with a control variable λ_Γ defined on Γ, and the boundary conditions on γ_u and γ_p are enforced weakly through the duality pairings b_Γ, b_{γ_u}, b_{γ_p} between the respective trace spaces and their duals. It is readily seen that û|_ω solves Eq. (33). Notice that λ_Γ plays the role of the control variable defined on Γ enforcing the boundary conditions on γ to be satisfied. This reformulation of Eq. (33) enables us to use (non-fitted) uniform meshes on Ω and, in addition, it improves the convergence rate of finite element approximations; see [16], [15].
To increase the computational efficiency, we also apply the FETI method. We decompose Ω into sub-boxes Ω_i, i = 1, . . ., s, such that Ω̄ = ∪_{i=1}^s Ω̄_i; see Fig. 2. To ensure the global continuity of the solution, we add a "gluing" condition on the interfaces of the sub-boxes. A common interface between Ω_i and Ω_j, i ≠ j, is defined by ∆_ij := ∂Ω_i ∩ ∂Ω_j. Before giving the FD-FETI formulation of Eq. (32), we introduce notation: [v] := v_i − v_j denotes the jump of v on ∆_ij. The trace spaces X(π) on π ∈ {Γ, γ_u, γ_p} are decomposed according to the sub-boxes, where the Sobolev exponent of the corresponding spaces equals −1/2 for π = Γ, γ_u and 1/2 for π = γ_p; the respective duality pairings are given by b_π(μ, v) = Σ_{i=1}^s b_{π,i}(μ, v), where b_{π,i} are duality pairings on the subspaces and their duals.
The FD-FETI formulation of Eq. (32) reads as the FD formulation above posed sub-box-wise for ũ_i on Ω_i, i = 1, . . ., s, completed by the gluing condition [ũ] = 0 on all interfaces ∆_ij. Note that û satisfies û|_{Ω_i} = ũ_i, i = 1, . . ., s, since the last equation ensures the zero jump of ũ across all interfaces ∆_ij. The mixed finite element approximation leads to the two-by-two block linear system Eq. (1), in which A is symmetric, positive semi-definite with the block diagonal structure A = diag(A_1, . . ., A_s), where A_i ∈ R^{n_i×n_i}, n = Σ_{i=1}^s n_i, and C = 0. The off-diagonal blocks B_1, B_2 and the right-hand side g in Eq. (1) are assembled from the trace matrices B_Γ, B_{γ_u}, B_{γ_p} and the gluing matrix B_∆ introduced below. If the condition from [3] is satisfied, we get the linear system Eq. (1) with A non-singular.

Model Example
Let ω be the interior of a circle and Ω = (0, 1) × (0, 1). The data in Eq. (32) are chosen as follows: f = −div σ(u), g = u|_{γ_u}, and p = σ(u|_{γ_p})ν, where u(x, y) = (0.1xy, 0.1xy), (x, y) ∈ R², and γ_p is the (north-west) quarter of the circle γ := ∂ω as in Fig. 2. The decomposition of Ω into Ω_i is given by dividing the sides of Ω into n_Ω equidistant segments of length H so that s = n_Ω² is the total number of the sub-boxes. The mesh in each Ω_i is constructed analogously: we divide the sides of Ω_i into n_{Ω_i} equidistant segments of size h so that (n_{Ω_i} + 1)² is the total number of the finite element nodes in Ω_i. The finite element discretization of each H¹(Ω_i) is given by piecewise bilinear functions so that n = 2 n_Ω² (n_{Ω_i} + 1)². Note that the meshes used are conforming. The auxiliary curve Γ is constructed by shifting γ three h units in the direction of the outward normal vector to γ. To define finite element approximations of X(Γ), X(γ_u), and X(γ_p), we proceed as follows. First we find the intersection points of Γ, γ_u, and γ_p with the finite element meshes on Ω_i, i = 1, . . ., s, which define polygonal approximations of Γ, γ_u, and γ_p, respectively, with non-equidistant straight segments. Then we construct partitions of these approximations Γ̃, γ̃_u, and γ̃_p given by polygonal macro-segments, i.e., unions of a few consecutive straight segments. We assume that Γ̃, γ̃_u, and γ̃_p contain m_Γ, m_{γ_u}, and m_{γ_p} almost equidistant macro-segments, for which m_Γ = m_{γ_u} + m_{γ_p}. The spaces X(Γ), X(γ_u), and X(γ_p) are discretized by piecewise constant functions over Γ̃, γ̃_u, and γ̃_p, respectively. Next we shall suppose that h_γ/h = log₂ h, where h_γ denotes the maximal length of the macro-segments on γ̄ = γ̄_u ∪ γ̄_p; see [16]. The entries of the matrices B_Γ, B_{γ_u}, B_{γ_p} are given by:

    (B_Γ)_{ij} = b_Γ(ξ_i, φ_j),                        i = 1, . . ., m_Γ,     j = 1, . . ., n;
    (B_{γ_u})_{ij} = b_{γ_u}(ψ_i, φ_j),                i = 1, . . ., m_{γ_u}, j = 1, . . ., n;
    (B_{γ_p})_{ij} = b_{γ_p}(ψ_{i+m_{γ_u}}, σ(φ_j)ν),  i = 1, . . ., m_{γ_p}, j = 1, . . ., n;

where ξ_i, ψ_i, and φ_j are the basis functions of the finite element spaces on Γ̃, γ̃ = γ̃_u ∪ γ̃_p, and Ω, respectively. As the duality pairings are represented by integrals over Γ̃, γ̃_u, and γ̃_p, we evaluate their values by the composite trapezoidal rule [22]. Since γ and Γ are not far apart, the satisfaction of Eq. (21) is guaranteed. The space X(∆) is approximated by Dirac distributions defined at the matching nodes on the common interfaces ∆_ij. The discrete "gluing" condition at the matching node x_k on Ω_i and x_l on Ω_j reads as v_i(x_k) − v_j(x_l) = 0. Thus each row of B_∆ contains only two non-zero entries, 1 and −1, in appropriate positions.
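The signed-incidence structure of the gluing block is easy to sketch (hypothetical node numbering; each matched pair of interface nodes contributes one row with entries 1 and −1):

```python
import numpy as np

def gluing_matrix(pairs, n_dofs):
    """Each pair (k, l) enforces v(x_k) - v(x_l) = 0: one row of B_Delta."""
    B = np.zeros((len(pairs), n_dofs))
    for row, (k, l) in enumerate(pairs):
        B[row, k] = 1.0
        B[row, l] = -1.0
    return B

# Hypothetical matching of interface DOFs between two sub-boxes.
pairs = [(2, 7), (3, 8), (4, 9)]
B_delta = gluing_matrix(pairs, n_dofs=10)

v = np.arange(10, dtype=float)
v[7], v[8], v[9] = v[2], v[3], v[4]        # continuous across the interface
assert np.allclose(B_delta @ v, 0)          # gluing condition satisfied
assert (np.count_nonzero(B_delta, axis=1) == 2).all()
```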
The model example is computed with h = 4·10⁻³ and H = 2·10⁻¹. Fig. 3 and Fig. 4 show the deformation and the stress field, respectively. The pointwise errors between the exact and computed solutions are drawn in Fig. 5 and Fig. 6. One can observe that the most significant error is concentrated in the vicinity of γ, especially on the part γ_p.

Comparison of the Algorithms
We compare the efficiency of the PSCM with respect to different solvers used in the inner level. By ProjGMRES(P_1 F) and ProjGMRES(P_1) we denote our projected GMRES variants with M_P = M and M_P = P_1, respectively. Note that the former leads to the equation with a symmetric, positive definite operator, so that the projected CGM may also be used in this case. We refer to this method as ProjCGM(P_1 F). Its implementation was done with reorthogonalization of the Lanczos vectors [11].
The projected BiCGSTAB algorithm is denoted here by ProjBiCGSTAB(P_1). It was proposed in [16], starting from the standard BiCGSTAB [26] by analogy with the projected CGM [8], but only for the Variant 2. Actions of generalized inverses of A on vectors are realized by combining the Cholesky factorization with the singular value decomposition [4]. The choice of the generalized inverse has no influence on the convergence of the projected methods, as follows from Theorem 2 (see [20], [19] for numerical experiments). All computations were performed using 32 cores with 2 GB memory per core of an HP Blade system, model BLc7000.
Figure 7 and Fig. 8 show the convergence of the relative residual norm (i.e., err/β in the ProjGMRES) typical for small and large problems, respectively. Table 1 reports the number of iterations (iter) and the computational time in seconds (CPU_time). One can observe that ProjGMRES(P_1) is the most efficient method if a high accuracy of the computed solution is required. The progress is more distinct in CPU_time, since only one action of the generalized inverse of A per iteration is needed. Fig. 9 reports the spectral condition number κ for different numbers of the sub-boxes provided that the ratio H/h is fixed. The obtained results indicate the boundedness of κ = κ(P_2 F^T P_1 F|_{N(G_2)}), which corresponds to the scalability of the FETI method [25].

Fig. 9: Experimental scalability of the FD-FETI method, H/h = 50.

Conclusions
The paper deals with the solution of large non-symmetric two-by-two block linear systems with a singular leading submatrix by the PSCM algorithm. This algorithm extends the ideas used in the FETI domain decomposition methods. It leads to the necessity of solving a linear equation given by an invertible operator M between two different subspaces V_2 and V_1 of R^m. We propose a general technique that enables us to derive from standard Krylov methods for non-symmetric matrices their projected analogues generating iterations on V_2. Two variants of the auxiliary operator M_P that maps the basis of V_2 into V_1 are analyzed. The first variant with M_P = M guarantees convergence for any input data. Unfortunately, two expensive actions, of M and M^T, are necessary in each multiplication. In addition, the condition number is the square of κ(M|V_2) and is usually too high, so that the convergence rate may be slow. The second variant with M_P = P_1 uses the orthogonal projector P_1 onto V_1. In this case, the action of M^T is replaced by the action of P_1, which is considerably cheaper in computations. However, convergence is guaranteed only when the angle θ between V_2 and V_1^⊥ is non-zero. Moreover, if θ is small, then the problem is equivalent to a system of linear equations whose matrix is close to a singular one.
The second part of the paper is devoted to numerical experiments illustrated by examples arising from the FD-FETI method. We compare the two variants of the projected GMRES derived in this paper with the projected CGM (for M_P = M) and the projected BiCGSTAB (for M_P = P_1). We may conclude that the GMRES approach with M_P = P_1 leads to the best performance of the PSCM algorithm in almost all experiments, especially when high accuracy of the computed solution is needed.
* if iter > 2500, the default maximum number of iterations is reached