Indefinite Ruhe’s Variant of the Block Lanczos Method for Solving Systems of Linear Equations

In this paper, we equip C^n with an indefinite scalar product induced by a specific Hermitian matrix, and our aim is to extend some block Krylov methods to this indefinite setting. Starting from the block Arnoldi, block FOM, and block Lanczos methods, we design indefinite versions of these block Krylov methods; alongside the results obtained, we show how these methods apply to the solution of linear systems, and we support the discussion with numerical examples.


Introduction
First of all, we introduce the inner product space in which our work is carried out. Recall that the indefinite inner product [.,.] on C^n has all the features of a standard inner product except that it may be nonpositive. In other words, it is linear in the first argument, conjugate-symmetric, and nondegenerate. The latter means that if [x, y] = 0 for every y ∈ C^n, then x = 0. This kind of inner product appears in several areas of science and is commonly defined as [x, y] = y*Jx, where J ∈ C^(n×n) is a nonsingular Hermitian matrix. In some specific scientific areas, such as the theory of relativity or the study of polarized light, J may be taken exclusively as follows:

J = diag(j_1, ⋯, j_n), j_i = ±1. (1)

With this particular J, the indefinite inner product [.,.] is referred to as hyperbolic and takes the form

[x, y] = y*Jx = ∑_{i=1}^{n} j_i x_i ȳ_i. (2)

In [1] and [2], by considering C^n with the indefinite scalar product (2), a number of Krylov subspace methods have been reviewed and restructured, namely the indefinite Arnoldi, indefinite full orthogonalization, and indefinite Lanczos methods. In this paper, we extend these indefinite Krylov methods to their indefinite block versions, which are discussed in the subsequent sections.
Considering J as in (1), we will need the following definitions. A subspace M is said to be nondegenerate with respect to the indefinite inner product [.,.] if x ∈ M and [x, y] = 0 for all y ∈ M imply that x = 0; otherwise, M is degenerate. For example, the nondegeneracy of the indefinite inner product [.,.] ensures that C^n itself is always nondegenerate.
If M is any nondegenerate nonzero subspace of C^n, then a basis x_1, ⋯, x_k for M is said to be an orthogonal basis with respect to the indefinite inner product [.,.] if [x_i, x_j] = 0 for i ≠ j, and an orthonormal basis if, in addition to orthogonality, [x_i, x_i] = ±1 for i = 1, ⋯, k. If the indefinite inner product [.,.] in this definition is the special indefinite inner product presented in (2), then the above orthogonal and orthonormal bases are called J-orthogonal and J-orthonormal bases, respectively.
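For instance, with J a signature matrix as in (1), the standard basis of R^n is J-orthonormal; the small NumPy sketch below (our own, with illustrative names, not code from the paper) checks this:

```python
import numpy as np

# Sketch (illustrative): the hyperbolic inner product [x, y] = y* J x for a
# signature matrix J = diag(+-1), and a check that the standard basis is
# J-orthonormal: [e_i, e_j] = 0 for i != j and [e_i, e_i] = +-1.
def indef_inner(x, y, J):
    return y.conj().T @ J @ x

n = 4
J = np.diag([1.0, 1.0, -1.0, -1.0])

E = np.eye(n)
G = np.array([[indef_inner(E[:, i], E[:, j], J) for j in range(n)] for i in range(n)])
print(np.allclose(G, J))  # the Gram matrix w.r.t. [.,.] is J itself
```

Note that [e_3, e_3] = −1 here, so the "norm" induced by [.,.] can indeed be negative, which is exactly the indefiniteness the definitions above allow for.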
Defining A^[T] = JA^T J, a matrix A is said to be J-symmetric when A = A^[T], that is, A = JA^T J. The paper is organized as follows: the next section recounts the basic definitions together with the block Arnoldi algorithm and its Ruhe variant. In the third section, the indefinite versions of the algorithms of the previous section are designed, and the use of these methods in solving linear systems is discussed. The fourth section offers the indefinite version of the block Lanczos method and its application to linear systems with J-symmetric coefficient matrices, and finally, numerical examples are given in the fifth section.
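A quick numerical check of J-symmetry, with an illustrative J = diag(I, −I) and a block matrix built so that A = JA^T J holds (our own sketch, not a construction from the paper):

```python
import numpy as np

# Sketch of the J-symmetry test A = J A^T J; the construction of A below
# (symmetric diagonal blocks, A21 = -A12^T) is illustrative.
rng = np.random.default_rng(0)
k = 3
J = np.diag([1.0] * k + [-1.0] * k)

S1 = rng.random((k, k)); S1 = S1 + S1.T        # symmetric diagonal block
S2 = rng.random((k, k)); S2 = S2 + S2.T
B12 = rng.random((k, k))
A = np.block([[S1, B12], [-B12.T, S2]])        # A21 = -A12^T forces J-symmetry

print(np.allclose(A, J @ A.T @ J))             # True: A equals its J-adjoint
```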

Block FOM Method
In some areas of computational science, there may be a need to solve large sparse linear systems with several right-hand sides that are given at once. A nonsingular linear system with p right-hand sides can be written as

AX = B, (3)

with A ∈ C^(n×n) and X, B ∈ C^(n×p). Generally, we call such n × p matrices block vectors. Block Krylov methods are iterative methods designed for such problems. Note that for A ∈ C^(n×n) and X ∈ C^(n×p), the block Krylov subspace K_n(A, X) generated by A from X is

K_n(A, X) = block span{X, AX, ⋯, A^(n−1)X},

where the "block span" is defined as the set of all sums ∑_{k=0}^{n−1} A^k X γ_k with γ_k ∈ C^(p×p). In general, a block Krylov subspace method acts in the same manner as a Krylov subspace method, but at each iteration the operator is applied to a block of vectors instead of just one. Most Krylov methods can be generalized to block Krylov space solvers (for example, see [3][4][5][6][7]). Specifically, we recall the algorithms of the three block Krylov methods on which this paper focuses, i.e., the block Arnoldi, block FOM, and block Lanczos algorithms.
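As an illustration of the definition above, a basis for a block Krylov subspace can be formed by stacking the blocks X, AX, …, A^(m−1)X and orthonormalizing; the following NumPy sketch (our own, with illustrative names) does exactly that:

```python
import numpy as np

# Sketch: a basis for the block Krylov subspace
# K_m(A, X) = block span{X, AX, ..., A^{m-1}X},
# obtained by stacking the blocks and orthonormalizing (illustrative only).
rng = np.random.default_rng(1)
n, p, m = 8, 2, 3
A = rng.random((n, n))
X = rng.random((n, p))

blocks = [X]
for _ in range(m - 1):
    blocks.append(A @ blocks[-1])          # next block A^k X
K = np.hstack(blocks)                      # n x (m*p) matrix spanning K_m(A, X)
Q, _ = np.linalg.qr(K)                     # orthonormal basis of the subspace
print(K.shape, Q.shape)
```

The block Arnoldi algorithms recalled next build such an orthonormal basis directly, without ever forming the ill-conditioned stacked matrix K.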
The outcome of Algorithm 1 is an orthogonal basis for the block Krylov subspace K_n(A, V_1) ≡ block span{V_1, AV_1, ⋯, A^(n−1)V_1}, together with a band upper Hessenberg matrix with p subdiagonals.
The second algorithm, which results from the work of A. Ruhe [8], is shown in Algorithm 2.
In particular, the case p = 1 coincides with the usual Arnoldi process. According to the algorithm, the vector w satisfies the relation

w = Av_k − ∑_{i=1}^{j} h_{i,k} v_i,

where k = j − p + 1. By line 9, w = h_{j+1,k} v_{j+1}, which yields

Av_k = ∑_{i=1}^{j+1} h_{i,k} v_i.

Thus,

AV_m = V_{m+p} H̄_m, (8)

in which the matrix H̄_m is of size (m + p) × m and V_j represents the n × j matrix with columns v_1, ⋯, v_j.
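The recurrence above can be sketched in NumPy as follows; this is our own illustrative implementation of Ruhe's variant (0-based indices, so k = j − p), not code from the paper:

```python
import numpy as np

# Sketch of Ruhe's variant of the block Arnoldi process. It builds the vectors
# v_1, ..., v_{m+p} one at a time and a band upper Hessenberg matrix H of size
# (m+p) x m satisfying A V_m = V_{m+p} H.
def block_arnoldi_ruhe(A, V1, m):
    n, p = V1.shape
    V = np.zeros((n, m + p))
    V[:, :p], _ = np.linalg.qr(V1)          # p initial orthonormal vectors
    H = np.zeros((m + p, m))
    for j in range(p, m + p):               # generate v_{j+1} (0-based column j)
        k = j - p                           # k = j - p + 1 in 1-based indexing
        w = A @ V[:, k]
        for i in range(j):                  # modified Gram-Schmidt sweep
            H[i, k] = V[:, i] @ w
            w = w - H[i, k] * V[:, i]
        H[j, k] = np.linalg.norm(w)
        V[:, j] = w / H[j, k]
    return V, H

rng = np.random.default_rng(2)
n, p, m = 10, 2, 4
A = rng.random((n, n))
V, H = block_arnoldi_ruhe(A, rng.random((n, p)), m)
print(np.allclose(A @ V[:, :m], V @ H))     # the relation A V_m = V_{m+p} Hbar_m
```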
Another issue on which we will work in the indefinite mode is the solution of linear systems with multiple right-hand sides. The block generalizations of the FOM and Lanczos methods are defined in a straightforward way for linear systems posed in spaces with definite inner products. As a short reminder, consider the linear systems

Ax^(i) = b^(i), i = 1, ⋯, p,

or, in matrix notation, AX = B, in which X = (x^(1), ⋯, x^(p)) and B = (b^(1), ⋯, b^(p)), and assume that the initial block guess X_0 = (x_0^(1), ⋯, x_0^(p)) ∈ C^(n×p) of initial guesses x_0^(i) is given and the initial block residual R_0 is as follows:

R_0 = (r_0^(1), ⋯, r_0^(p)), (11)

in which r_0^(i) ≔ b^(i) − Ax_0^(i). Recall that in the case of a single system, that is, when p = 1, the approximate solution x_m is chosen such that the correction x_m − x_0 lies in the Krylov subspace K_m(A, r_0). The block Krylov space method for solving the p systems (3) is an iterative method that generates an approximate solution X_m such that X_m − X_0 lies in the block Krylov subspace generated from R_0, where R_0 is the initial residual as in (11). The block FOM algorithm computes the QR-factorization of R_0,

R_0 = [v_1, ⋯, v_p] R,

in which the matrix [v_1, ⋯, v_p] has orthonormal columns and R is p × p upper triangular. This factorization provides the first p vectors of the block Arnoldi basis. Each of the approximate solutions has the form

x^(i) = x_0^(i) + V_m y^(i). (15)

Writing X = (x^(1), ⋯, x^(p)) and Y = (y^(1), ⋯, y^(p)), we have X = X_0 + V_m Y. Let E_1 be the (m + p) × p matrix whose upper p × p principal block is an identity matrix. Then, the relation (8) results in

r_m^(i) = b^(i) − Ax^(i) = V_{m+p} (g^(i) − H̄_m y^(i)). (17)

Note that the vector g^(i) ≡ E_1 R e_i ∈ C^(m+p) is a vector whose entries are zero apart from those in positions 1 to i, which are derived from the ith column of the upper triangular matrix R. The matrix H̄_m is an (m + p) × m matrix. The block FOM approximation eliminates the last p rows of H̄_m and g^(i) and then solves the resulting system H_m y^(i) = g^(i). Then, the approximate solution x^(i) is calculated by (15), and finally, from the orthogonality of the column vectors of V_{m+p}, the relation (17) yields

‖b^(i) − Ax^(i)‖_2 = ‖g^(i) − H̄_m y^(i)‖_2.
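The steps just described, i.e., QR-factorization of R_0, m Arnoldi steps, deletion of the last p rows of H̄_m and g^(i), and the solves H_m y^(i) = g^(i), can be sketched as follows (our own illustrative NumPy code; the diagonally dominant test matrix is an assumption chosen so the demo converges, not a matrix from the paper):

```python
import numpy as np

# A minimal sketch of the block FOM step: QR-factorize R0, run Ruhe's block
# Arnoldi, drop the last p rows of the (m+p) x m Hessenberg matrix, solve
# H_m y^(i) = g^(i), and update the block iterate.
def block_fom(A, B, X0, m):
    n, p = B.shape
    R0 = B - A @ X0
    Vp, R = np.linalg.qr(R0)                 # R0 = [v_1, ..., v_p] R
    V = np.zeros((n, m + p))
    V[:, :p] = Vp
    H = np.zeros((m + p, m))
    for j in range(p, m + p):                # Ruhe's variant, one vector at a time
        k = j - p
        w = A @ V[:, k]
        for i in range(j):
            H[i, k] = V[:, i] @ w
            w = w - H[i, k] * V[:, i]
        H[j, k] = np.linalg.norm(w)
        V[:, j] = w / H[j, k]
    G = np.zeros((m + p, p))
    G[:p, :] = R                             # columns are g^(i) = E_1 R e_i
    Y = np.linalg.solve(H[:m, :], G[:m, :])  # square H_m after dropping last p rows
    return X0 + V[:, :m] @ Y

rng = np.random.default_rng(3)
n, p, m = 20, 2, 10
A = rng.random((n, n)) + 10 * n * np.eye(n)  # strongly diagonally dominant demo matrix
B = rng.random((n, p))
X = block_fom(A, B, np.zeros((n, p)), m)
print(np.linalg.norm(B - A @ X))             # residual norm: small for this A
```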

Indefinite Block FOM Method
In [2], the indefinite Arnoldi process builds a J-orthogonal basis for a nondegenerate Krylov subspace, as shown in Algorithm 3.
In the following, after recalling the indefinite Gram-Schmidt orthogonalization (Algorithm 4), we will express the block analogue of this algorithm.
Algorithm 4 gives

X = Q J̃ R,

in which Q^T J Q = J̃ and J̃ = diag(t(q_1), ⋯, t(q_m)). Note that the indefinite modified Gram-Schmidt algorithm is similar to the above algorithm, except that the seventh row is replaced; see Algorithm 5.
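A minimal NumPy sketch of this indefinite Gram-Schmidt factorization in the real case (our own code; names and the choice of J are illustrative, and we assume no breakdown, i.e., [w, w] ≠ 0 at each step):

```python
import numpy as np

# Sketch of indefinite (modified) Gram-Schmidt: it produces Q with
# [q_i, q_j] = 0 for i != j and [q_j, q_j] = t_j = +-1, so that
# Q^T J Q = Jt = diag(t_1, ..., t_p) and X = Q Jt R with R = Q^T J X
# upper triangular.
def indef_gram_schmidt(X, J):
    n, p = X.shape
    Q = np.zeros((n, p))
    t = np.zeros(p)
    for j in range(p):
        w = X[:, j].copy()
        for i in range(j):                   # modified variant: update w in place
            w -= (t[i] * (Q[:, i] @ J @ w)) * Q[:, i]
        s = w @ J @ w                        # [w, w]; may be negative, assumed nonzero
        t[j] = np.sign(s)
        Q[:, j] = w / np.sqrt(abs(s))
    return Q, np.diag(t)

rng = np.random.default_rng(4)
n, p = 8, 3
J = np.diag([1.0] * 4 + [-1.0] * 4)
X = rng.random((n, p))
Q, Jt = indef_gram_schmidt(X, J)
R = Q.T @ J @ X                              # upper triangular; X = Q Jt R
print(np.allclose(Q.T @ J @ Q, Jt), np.allclose(X, Q @ Jt @ R))
```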

Advances in Mathematical Physics
Note that J is the matrix appearing in the indefinite scalar product (2), and J̃_i′ is the matrix obtained by the indefinite Gram-Schmidt process. Now, a simple property of the algorithm is proved.

Proposition 1. Denote by I_k the k × k identity matrix and define the following matrices by the above algorithm. Then the following relation holds.

Proof. The relation is straightforward from the following equalities, which are derived from the algorithm.

Algorithm 6 is the modified version of Algorithm 5, for which the indefinite QR-factorization is calculated by the indefinite modified Gram-Schmidt process.
As shown in Algorithm 6, we express the indefinite version of the block Arnoldi algorithm (Ruhe's variant) as another Krylov subspace method in which A acts on a group of vectors instead of just one vector.
According to the algorithm, the vector w satisfies the relation

w = Av_k − ∑_{i=1}^{j} t_i h_{i,k} v_i,

where k = j − p + 1, and also w = t_{j+1} h_{j+1,k} v_{j+1}. Thus,

Av_k = ∑_{i=1}^{j+1} t_i h_{i,k} v_i,

in which V_j indicates the n × j matrix with columns v_1, ⋯, v_j. Finally, from there,

AV_m = V_{m+p} J̃_{m+p} H̄_m, (24)

for the (m + p) × m matrix H̄_m. Now, similar to the end of the previous section, consider the linear systems Ax^(i) = b^(i), i = 1, ⋯, p, or, in matrix notation, AX = B, in which X = (x^(1), ⋯, x^(p)) and B = (b^(1), ⋯, b^(p)), and assume that the initial block guess X_0 of initial guesses x_0^(i) and the initial block residual R_0 are defined similarly to the previous section.
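The indefinite Arnoldi recurrence of this section can be sketched as follows (our own illustrative real-arithmetic implementation with 0-based indices, so k = j − p; it assumes no breakdown, i.e., [w, w] ≠ 0 at every step):

```python
import numpy as np

# Sketch of the indefinite block Arnoldi process (Ruhe's variant). Starting
# from a J-orthonormalized block, it builds v_1, ..., v_{m+p} with
# [v_i, v_j] = 0 (i != j), [v_i, v_i] = t_i = +-1, and the relation
# A V_m = V_{m+p} Jt_{m+p} Hbar_m.
def j_orthonormalize(X, J):
    n, p = X.shape
    Q, t = np.zeros((n, p)), np.zeros(p)
    for j in range(p):
        w = X[:, j].copy()
        for i in range(j):
            w -= (t[i] * (Q[:, i] @ J @ w)) * Q[:, i]
        s = w @ J @ w
        t[j] = np.sign(s)
        Q[:, j] = w / np.sqrt(abs(s))
    return Q, t

def indef_block_arnoldi(A, V1, J, m):
    n, p = V1.shape
    V = np.zeros((n, m + p))
    t = np.zeros(m + p)
    V[:, :p], t[:p] = j_orthonormalize(V1, J)
    H = np.zeros((m + p, m))
    for j in range(p, m + p):
        k = j - p
        w = A @ V[:, k]
        for i in range(j):
            H[i, k] = V[:, i] @ J @ w        # h_{i,k} = [w, v_i]
            w -= t[i] * H[i, k] * V[:, i]
        s = w @ J @ w                        # assumed nonzero (no breakdown)
        t[j] = np.sign(s)
        V[:, j] = w / np.sqrt(abs(s))
        H[j, k] = t[j] * np.sqrt(abs(s))     # so that w = t_{j+1} h_{j+1,k} v_{j+1}
    return V, H, np.diag(t)

rng = np.random.default_rng(5)
n, p, m = 10, 2, 4
J = np.diag([1.0] * 5 + [-1.0] * 5)
A = rng.random((n, n))
V, H, Jt = indef_block_arnoldi(A, rng.random((n, p)), J, m)
print(np.allclose(A @ V[:, :m], V @ Jt @ H))  # A V_m = V_{m+p} Jt Hbar_m
```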
R_0 = (r_0^(1), ⋯, r_0^(p)), (28)

in which r_0^(i) ≔ b^(i) − Ax_0^(i). Recall that in the case of a single system, that is, when p = 1, the approximate solution x_m is chosen such that the correction x_m − x_0 lies in the Krylov subspace K_m(A, r_0). A block Krylov space method for solving the p systems (3) is an iterative method that generates an approximate solution X_m such that X_m − X_0 lies in the block Krylov subspace generated from R_0, where R_0 is the initial residual as in (28). The indefinite block FOM computes the indefinite QR-factorization of R_0,

R_0 = V_p J̃_p R,

such that V_p = [v_1, ⋯, v_p] and V_p^T J V_p = J̃_p. Each of the approximate solutions has the form

x^(i) = x_0^(i) + V_m y^(i). (32)

Writing X = (x^(1), ⋯, x^(p)) and Y = (y^(1), ⋯, y^(p)), we have X = X_0 + V_m Y. Let E_1 be the (m + p) × p matrix as before. Then, the relation (24) results in

r_m^(i) = b^(i) − Ax^(i) = V_{m+p} J̃_{m+p} (g^(i) − H̄_m y^(i)).

The vector g^(i) ≡ E_1 R e_i is a vector whose components are zero except those in positions 1 to i, which are extracted from the ith column of the upper triangular matrix R. With H̄_m and H_m defined as before, the indefinite block FOM (IBFOM) deletes the last p rows of H̄_m and g^(i) and solves the resulting system H_m y^(i) = g^(i). Then, the approximate solution x^(i) is computed by (32).

Block Lanczos and Indefinite Block Lanczos Methods
In 1950, Lanczos proposed an algorithm (Algorithm 8) [9] which designs an orthogonal transformation of a symmetric matrix into a tridiagonal matrix T. It can be applied in Krylov subspace methods for solving linear systems with symmetric matrices as well as for the symmetric eigenvalue problem. The block Lanczos algorithm was developed by Peter Montgomery and published in 1995 [10]; it is based on, and bears a strong resemblance to, the Lanczos algorithm for finding the eigenvalues of large sparse real matrices.
Due to the introduction of J and J̃ into the IBFOM algorithm, the BFOM algorithm can be expected to perform better than the IBFOM algorithm, and this is seen in practice in numerous examples. There is a Ruhe version of the block Lanczos algorithm that is used for symmetric coefficient matrices. Our purpose in this section is to build its indefinite analogue, named the indefinite block Lanczos (Ruhe's variant) algorithm, which is used for J-symmetric matrices. First, we state the block Lanczos algorithm, and then we design its indefinite version (Algorithm 8).
In Algorithm 9, we present the indefinite version of Algorithm 8. Algorithm 9 transforms the matrix A into a matrix T, and, as seen for the IBFOM algorithm, the following relation holds:

AV_m = V_{m+p} J̃_{m+p} T̄_m.

As for the IBFOM algorithm, by considering the linear system AX = B, we obtain relations similar to those at the end of the third section; by solving the linear systems T_m y^(i) = g^(i) and substituting y^(i) into the relation (32), the solution x^(i) is obtained for the system. Here, T̄_m and T_m are defined similarly to H̄_m and H_m.
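The band structure that makes the indefinite block Lanczos method cheaper than IBFOM for J-symmetric A can be observed numerically. The sketch below (our own, illustrative; it starts from the standard basis vectors e_1, …, e_p, which are J-orthonormal for a diagonal J, and assumes no breakdown) runs the indefinite Arnoldi recurrence on a J-symmetric A and checks that h_{i,k} = [Av_k, v_i] = [v_k, Av_i] vanishes for i < k − p (0-based):

```python
import numpy as np

# Sketch: for a J-symmetric A, the coefficients h_{i,k} = [A v_k, v_i] of the
# indefinite Arnoldi process vanish for i < k - p, so the (m+p) x m coefficient
# matrix is banded, exactly the block tridiagonal structure of the matrix T.
rng = np.random.default_rng(6)
half, p, m = 6, 2, 6
n = 2 * half
J = np.diag([1.0] * half + [-1.0] * half)

S1 = rng.random((half, half)); S1 += S1.T    # build a J-symmetric A = J A^T J
S2 = rng.random((half, half)); S2 += S2.T
B12 = rng.random((half, half))
A = np.block([[S1, B12], [-B12.T, S2]])

V = np.zeros((n, m + p)); t = np.zeros(m + p)
V[:, :p] = np.eye(n)[:, :p]                  # e_1, ..., e_p are J-orthonormal here
t[:p] = np.diag(J)[:p]
H = np.zeros((m + p, m))
for j in range(p, m + p):                    # indefinite Arnoldi, Ruhe's variant
    k = j - p
    w = A @ V[:, k]
    for i in range(j):
        H[i, k] = V[:, i] @ J @ w
        w -= t[i] * H[i, k] * V[:, i]
    s = w @ J @ w                            # assumed nonzero (no breakdown)
    t[j] = np.sign(s)
    V[:, j] = w / np.sqrt(abs(s))
    H[j, k] = t[j] * np.sqrt(abs(s))

print(all(abs(H[i, k]) < 1e-8 for k in range(m) for i in range(k - p)))
```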

Numerical Examples
Suppose that A is an n × n J-symmetric matrix and B is an n × p matrix. Our purpose is to solve the linear system AX = B via three methods: block FOM (BFOM), indefinite block FOM (IBFOM), and indefinite block Lanczos (IBLAN). In the following examples, the reported error is the average over the p systems,

ε = ∑_{i=1}^{p} ε_i / p.

We let m denote the number of iterations, which is chosen arbitrarily.
Example 2. Consider the matrix A as follows:

A = [ A_11  A_12
      A_21  A_22 ],

in which A_11, A_12, A_21, A_22 are all tridiagonal matrices with random entries in (0, 1). In addition, A_11 and A_22 are symmetric and A_21 = −A_12^T. With this selection, and by defining J as below,

J = [ I_{n/2}     0
        0     −I_{n/2} ],

the matrix A is J-symmetric. If n = 600, m = 300, and p = 5, then the BFOM, IBFOM, and IBLAN results are as shown in Table 1.
Example 3. Consider the n × n matrices A and J as follows:

A = [ A_11  A_12  A_13
      A_21  A_22  A_23
      A_31  A_32  A_33 ],   J = [ I_{n/3}     0        0
                                    0     −I_{n/3}     0
                                    0         0     I_{n/3} ],

in which A_11, A_22, A_33 are symmetric tridiagonal matrices with random entries in (0, 10). Besides, A_12, A_13, and A_23 are (n/3) × (n/3) matrices with zero entries except for their diagonal and subdiagonal entries, which are randomly selected in (0, 1). Also, suppose that A_21 = −A_12^T, A_32 = −A_23^T, and A_31 = A_13^T. Under these circumstances, A is J-symmetric. By letting n = 600, m = 300, and p = 5, we obtain the results shown in Table 2.
Example 4. Consider A and J as in the previous example. By letting n = 600, m = 500, and p = 20, we obtain the results shown in Table 3.
In the above examples, B and X_0 are n × p matrices with randomly chosen entries in (0, 1).

Conclusion
As can be seen from the algorithms discussed in this paper, the BFOM and IBFOM algorithms can be used to solve linear systems with arbitrary coefficient matrices. However, since the IBFOM algorithm uses the indefinite inner product instead of the standard inner product and the matrix J appears in the calculations, the number of arithmetic operations increases; as a result, the IBFOM algorithm does not perform better than the BFOM algorithm. Our goal, however, is to solve block linear systems with J-symmetric coefficient matrices, so the performance of the IBLAN algorithm for such systems is what matters to us. Block linear systems with J-symmetric coefficient matrices can be solved by all three of the BFOM, IBFOM, and IBLAN algorithms. As the numerical examples show, the performance of the IBLAN algorithm is far better than that of the IBFOM algorithm, and even better than that of the BFOM algorithm, owing to the difference in the number of arithmetic operations; this is what we achieved theoretically. In one sentence: the best way to solve block linear systems with J-symmetric coefficient matrices is the IBLAN algorithm.

Data Availability
All data is available.

Conflicts of Interest
The authors declare that they have no conflicts of interest.