Verified partial eigenvalue computations using contour integrals for Hermitian generalized eigenproblems

We propose a verified computation method for partial eigenvalues of a Hermitian generalized eigenproblem. The block Sakurai-Sugiura Hankel method, a contour integral-type eigensolver, reduces a given eigenproblem to a generalized eigenproblem of block Hankel matrices whose entries consist of complex moments. In this study, we evaluate all the errors in computing the complex moments. We derive a truncation error bound for the quadrature. Then, we take the numerical errors of the quadrature into account and rigorously enclose the entries of the block Hankel matrices. Each quadrature point gives rise to a linear system, and its structure enables us to develop an efficient technique for verifying the approximate solution. Numerical experiments show that the proposed method outperforms a standard method and indicate that the proposed method is potentially efficient in parallel.


Introduction
We consider verifying the m eigenvalues λ_i, counting multiplicity, of the Hermitian generalized eigenproblem Ax_i = λ_i Bx_i, x_i ∈ C^n \ {0}, i = 1, 2, . . . , m, in a prescribed interval Ω = [a, b] ⊂ R, where A = A^H ∈ C^{n×n}, B = B^H ∈ C^{n×n} is positive semidefinite, and the matrix pencil zB − A (z ∈ C) is regular, i.e., det(zB − A) is not identically equal to zero. We call λ_i an eigenvalue and x_i the corresponding eigenvector of problem (1) or of the matrix pencil zB − A, z ∈ C, interchangeably. Throughout, we assume that the number of eigenvalues in the interval Ω is known to be m and that no eigenvalues of (1) lie at the end points a, b ∈ R. We also denote the eigenvalues of (1) outside Ω by λ_i (i = m + 1, m + 2, . . . , r), where r = rank B.
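For orientation, the target problem can be solved approximately (without verification) for a small dense pencil using SciPy's eigh with an interval filter; the matrices, interval, and sizes below are illustrative assumptions, not data from this paper.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 50
M0 = rng.standard_normal((n, n))
A = (M0 + M0.T) / 2                  # Hermitian (real symmetric) A
L0 = rng.standard_normal((n, n))
B = L0 @ L0.T + n * np.eye(n)        # Hermitian positive definite B

# Approximate eigenvalues of A x = lambda B x restricted to the
# half-open interval (a, b].
a, b = -0.05, 0.05
lam = eigh(A, B, eigvals_only=True, subset_by_value=(a, b))
m = len(lam)                          # number of eigenvalues found in the interval
```

A verified method must additionally enclose all rounding errors; this snippet only locates approximate eigenvalues.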
There are many previous works on verification methods for eigenvalue problems (see, e.g., [22] and references therein). Previous works for symmetric generalized eigenvalue problems, in particular, fall into two categories: some aim at rigorously enclosing specific eigenvalues, while others aim at rigorously enclosing all eigenvalues. Different approaches have been taken for these purposes. Behnke [1] used Temple quotients, their generalizations, and the LDL^T decomposition to verify specific eigenvalues. Behnke [2] used the variational principle to verify specific eigenvalues. Watanabe et al. [24] used an approximate diagonalization and generalized Rump's method, avoiding the Cholesky factorization, to verify the eigenvalue with the maximum magnitude. Yamamoto [25] combined the LDL^T decomposition with Sylvester's law of inertia to verify specific eigenvalues. Maruyama et al. [9] used Geršgorin's theorem to verify all eigenpairs. Miyajima et al. [14] used the techniques in [12,13] and combined them with Rump and Wilkinson's bounds to verify all eigenpairs. See [11] and references therein for the non-Hermitian case.
In this study, we develop a verification method for partial eigenvalues using the block Hankel-type Sakurai-Sugiura method [7], which has received attention in recent years by virtue of its parallel scalability and versatility [8]. We shed light on a new perspective of this method. This method uses contour integrals to form complex moment matrices. Truncation error bounds for the trapezoidal rule of numerical quadrature were derived by Miyata et al. [15]. Thanks to their work, we derive a numerically computable enclosure of the complex moments. We point out that our verification method works for multiple eigenvalues in the prescribed region and for semidefinite B, whereas the previous methods [1,2,9,[12][13][14],25] work only for positive definite B. In addition, for each quadrature point, a structured linear system of equations must be solved. The structure enables us to develop an efficient verification technique when B is positive definite. Yamamoto [26] and Rump [23] derived componentwise and normwise bounds, respectively, of the error of the approximate solution. See also [22]. These methods need a numerically computed inverse of the coefficient matrix, whereas the proposed technique does not need such a numerical inverse; instead, it needs a lower bound of the smallest eigenvalue of B.
In the rest of the paper, we use the following notation: for a real matrix A = (a_ij) ∈ R^{m×n}, the nonnegative matrix consisting of the entrywise absolute values is denoted by |A| = (|a_ij|). For B = (b_ij) ∈ R^{m×n} and α ∈ R, the inequality A < B means that a_ij < b_ij holds entrywise, and the inequality A < α means that a_ij < α holds entrywise.
The rest of this paper is organized as follows. In Section 2, we briefly review the block Sakurai-Sugiura Hankel method and its error analysis derived by Miyata et al. [15]. Based on this result, we derive a computable rigorous error bound for the complex moments in Section 3. We give several remarks on the implementation of our method in Section 4. In Section 5, we present numerical examples illustrating the performance of our method. In Section 6, we conclude the paper by discussing the potential of our method for parallel implementation and future directions.

Block Hankel-type Sakurai-Sugiura method
We review the block Sakurai-Sugiura Hankel method [7], which is the basis of the proposed method. The block Sakurai-Sugiura Hankel method has the following parameters: the block size L ∈ N_+, the order of moments M ∈ N_+, a random matrix V ∈ C^{n×L} whose columns are linear combinations of all the eigenvectors and of the basis vectors of the kernel of B, say Ker B, and the scaling parameters (γ, ρ) ∈ R × R for the eigenvalues. The p-th complex moment matrix is given by

M_p = (1/(2πi)) ∮_Γ ((z − γ)/ρ)^p V^H B (zB − A)^{−1} B V dz, p = 0, 1, . . . , 2M − 1, (2)

defined on a closed Jordan curve Γ through the end points of the interval Ω = [a, b], where i = √−1 is the imaginary unit. Denote the block Hankel matrices consisting of the moments (2) by

H_M = [M_{i+j−2}]_{i,j=1,...,M} ∈ C^{LM×LM}, H^<_M = [M_{i+j−1}]_{i,j=1,...,M} ∈ C^{LM×LM}.

Then, the block Sakurai-Sugiura Hankel method can compute the eigenvalues in a prescribed domain and their corresponding eigenvectors [7, Theorems 5 and 6]: if rank(H_M) = m, then the eigenvalues θ_i of the generalized eigenproblem H^<_M y_i = θ_i H_M y_i recover the target eigenvalues via λ_i = γ + ρθ_i, i = 1, 2, . . . , m. We remark that the condition rank(H_M) = m implies LM ≥ m.

Next, we give a relationship between the target eigencomponents in the columns of V and the rank of H_M. Recall the Weierstrass canonical form of the matrix pencil zB − A [3, Proposition 7.8.3]: there exists a nonsingular matrix X ∈ C^{n×n} such that X^H (zB − A) X = zI_0 − Λ, where I_0 is a diagonal matrix whose leading r diagonal entries are one and whose trailing n − r diagonal entries are zero, and Λ is a diagonal matrix whose leading r diagonal entries are the eigenvalues of (1) and whose trailing n − r diagonal entries are one. Note that the columns of X = [x_1, x_2, . . . , x_n] are appropriately scaled eigenvectors of the matrix pencil zB − A, where x_1, x_2, . . . , x_r ∈ C^n correspond to the eigenvalues λ_1, λ_2, . . . , λ_r ∈ R, respectively, and x_{r+1}, x_{r+2}, . . . , x_n ∈ C^n form a basis of Ker B. Then, from X^H B X = I_0 and the residue theorem, the complex moment (2) is expressed as

M_p = Σ_{k=1}^{m} ((λ_k − γ)/ρ)^p V_k,

where V_k = V^H B x_k x_k^H B V ∈ C^{L×L}.
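As a non-rigorous illustration of the method just reviewed, the following Python sketch approximates the moments by an N-point trapezoidal rule on a circle z_j = γ + ρe^{iθ_j} and recovers the eigenvalues inside Γ from the Hankel pencil. The tiny diagonal test pencil (B = I), the parameter values, and the scaled quadrature weights e^{i(p+1)θ_j} are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.linalg import eig

# Small test problem: B = I; eigenvalues 0.5 and -0.3 lie inside the
# unit circle Gamma (gamma = 0, rho = 1), while 2.0 and 3.0 lie outside.
A = np.diag([0.5, -0.3, 2.0, 3.0])
n = A.shape[0]
gamma, rho = 0.0, 1.0
L_blk, M = 1, 2          # block size and moment order, L*M >= m = 2
N = 64                   # number of quadrature points

rng = np.random.default_rng(1)
V = rng.standard_normal((n, L_blk))

# Trapezoidal rule on the circle z_j = gamma + rho*exp(i*theta_j).
theta = 2 * np.pi * (np.arange(N) + 0.5) / N
z = gamma + rho * np.exp(1j * theta)
mu = np.zeros(2 * M, dtype=complex)   # approximate moments M_0, ..., M_{2M-1}
for j in range(N):
    Yj = np.linalg.solve(z[j] * np.eye(n) - A, V)  # (z_j B - A) Y_j = B V, B = I
    for p in range(2 * M):
        mu[p] += np.exp(1j * (p + 1) * theta[j]) * (V.T @ Yj)[0, 0] / N

# Block Hankel matrices (scalar blocks since L_blk = 1).
H = np.array([[mu[0], mu[1]], [mu[1], mu[2]]])
Hs = np.array([[mu[1], mu[2]], [mu[2], mu[3]]])
theta_eig = eig(Hs, H, right=False)
lam = np.sort((gamma + rho * theta_eig).real)   # approximately [-0.3, 0.5]
```

For eigenvalues well inside the contour and N large, the quadrature filter damps the outside eigencomponents geometrically, so the pencil eigenvalues match the inside eigenvalues to near machine precision.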
Since V_k = (x_k^H B V)^H (x_k^H B V) is Hermitian positive semidefinite of rank at most one, the rank of H_M is governed by the eigencomponents x_k^H B V of the columns of V. In particular, if we set V such that x_k^H B V = 0 for some k = 1, 2, . . . , m, then the rank drops below m and the Hankel matrix H_M becomes singular.
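The rank condition can be checked directly on exact moments. The sketch below assumes B = I, two eigenvalues inside Γ, and scalar weights |x_k^H B V|^2 standing in for the blocks V_k; it shows that zeroing one eigencomponent makes the 2×2 Hankel matrix singular.

```python
import numpy as np

# Two eigenvalues inside Gamma (gamma = 0, rho = 1).
lam_in = np.array([0.5, -0.3])
weights_full = np.array([1.0, 1.0])        # |x_k^H B V|^2 for a generic V
weights_defective = np.array([0.0, 1.0])   # V chosen with x_1^H B V = 0

def hankel2(w):
    # Exact moments M_0, M_1, M_2 and the corresponding 2x2 Hankel matrix.
    mu = [np.sum(w * lam_in**p) for p in range(3)]
    return np.array([[mu[0], mu[1]], [mu[1], mu[2]]])

rank_full = np.linalg.matrix_rank(hankel2(weights_full))            # 2
rank_defective = np.linalg.matrix_rank(hankel2(weights_defective))  # 1
```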

Error bound of the complex moment
Based on the error analysis in the previous section, we derive a rigorous error bound for each complex moment M_p. Let α_k (k = 1, 2, . . . , m) and β_k (k = m + 1, . . . , n) be the eigencomponent factors arising from the quadrature rule. Then, the rightmost sides of (9) and (10) become (λ_k − γ)^p α_k (k = 1, 2, . . . , m) and (λ_k − γ)^p β_k (k = m + 1, . . . , n), respectively. This simplifies the expression of the approximated complex moment to (11), and the truncation error (12) is the difference between (11) and the exact moment. We note that the following identities of the eigenvalues of a Hankel matrix pencil are useful for our verification methods.
The Hankel matrix pencils formed from the exact moments M_p and from the inside parts M_{p,in} have the same eigenvalues. The proof is already completed by the above discussion. The part M_{p,out} regarding the outside of Γ is bounded as follows: Let V ∈ C^{n×L} be an arbitrary matrix such that V = X_1 C_1 + X_0 C_0, where the columns of X_0 = [x_{r+1}, x_{r+2}, . . . , x_n] form a basis of Ker B (r = rank(B)), the columns of X_1 = [x_1, x_2, . . . , x_r] form a basis of (Ker B)^⊥, C_0 ∈ C^{(n−r)×L}, and C_1 ∈ C^{r×L} has at least one nonzero entry in each column and each row. Suppose that N > 2M − 1 ≥ p and that λ̃ satisfies |λ̃ − γ| = min_{k=m+1,m+2,...,r} |λ_k − γ|. Then, the complex moment (11) is bounded above as in (13) for p = 0, 1, . . . , 2M − 1, where ||·||_F denotes the Frobenius norm.
Proof. Regarding the fraction factor in (11) as a geometric series and noting that BV = BX_1C_1 + BX_0C_0 = BX_1C_1, since the columns of X_0 lie in Ker B, we can bound each term of the series, where e_k is the k-th standard basis vector of R^n, i.e., the k-th entry is one and the remaining entries are zero. Summing the resulting geometric series, we obtain (13).
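Bounds of the type (13) decay geometrically in the number of quadrature points N, with ratio q = |λ̃ − γ|/ρ > 1, so the smallest N meeting a tolerance δ can be computed in closed form. The constant C and ratio q below are placeholders, not the actual constants of (13).

```python
import math

def min_quadrature_points(C, q, delta):
    """Smallest N with C * q**(-N) <= delta, assuming decay ratio q > 1."""
    assert q > 1 and C > 0 and delta > 0
    return max(1, math.ceil(math.log(C / delta) / math.log(q)))

# Example with assumed constants: C = 10, q = 1.5, tolerance 1e-15.
N = min_quadrature_points(C=10.0, q=1.5, delta=1e-15)
```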

Implementation
In this section, we present an implementation of the block Sakurai-Sugiura Hankel-based method for numerically verifying the partial eigenvalues λ_i ∈ Ω, i = 1, 2, . . . , m. Suppose that the number of eigenvalues inside Γ is m. We set L and M such that m = LM. Note that if m is a prime number, either L or M must be one and the other must be m.

To rigorously enclose the eigenvalues, we verify each block of the Hankel matrices. The truncation error (12) can be bounded by using (13). The number of quadrature points N can be automatically determined from the error bound (13) via the criterion (14), where δ denotes the tolerance of the quadrature error. Hence, there is a trade-off between the accuracy of the quadrature and the central processing unit (CPU) time. The numerical error in evaluating (12) can also be bounded. To rigorously bound the numerical error, we need to verify a numerical solution of the linear system with multiple right-hand sides (z_jB − A)Y_j = BV, which arises from the quadrature rule applied to (2). The enclosure of Y*_j can be obtained by standard verification methods, e.g., [23], whereas we consider efficiently enclosing the solution Y*_j for positive definite B.

Theorem 4.1. Let A be a Hermitian matrix and B a Hermitian positive definite matrix, and let the quadrature points z_j, j = 1, 2, . . . , N, be defined as in (7). Denote the i-th entries of the solution y* = (z_jB − A)^{−1}b and of an approximate solution ỹ of (z_jB − A)y = b by y*_i and ỹ_i, respectively. If we denote the residual by r̃ = b − (z_jB − A)ỹ, then the error ỹ − y* satisfies

|ỹ_i − y*_i| ≤ ||r̃||_2 / (λ_min(B) |Im z_j|) (15)

for all i = 1, 2, . . . , n, where λ_min(·) is the smallest eigenvalue of a matrix and ||·||_2 denotes the Euclidean norm.
Proof. Denote the square root of B by B^{1/2}. Then, z_jB − A = B^{1/2}(z_jI − B^{−1/2}AB^{−1/2})B^{1/2}, and for all i = 1, 2, . . . , n we have

|ỹ_i − y*_i| ≤ ||ỹ − y*||_2 = ||(z_jB − A)^{−1}r̃||_2 ≤ ||B^{−1/2}||_2^2 ||(z_jI − B^{−1/2}AB^{−1/2})^{−1}||_2 ||r̃||_2 ≤ ||r̃||_2 / (λ_min(B) |Im z_j|).

The bound ||(z_jI − B^{−1/2}AB^{−1/2})^{−1}||_2 ≤ |Im z_j|^{−1} can be geometrically interpreted as in Figure 1: the distance from the quadrature point z_j to the nearest eigenvalue of B^{−1/2}AB^{−1/2} is bounded below by the absolute value of the imaginary part of z_j.
Note that z_jB − A is nonsingular for j = 1, 2, . . . , N, since z_j does not lie on the real axis by (7). Hence, we do not need to verify the regularity of the coefficient matrix z_jB − A as in [23]. In addition, the bound (15) can be efficiently evaluated for sparse A and B. On the other hand, the bound (15) shows that, if λ_min(B) is very small, the verification of ỹ will be loose and the subsequent verification may fail. This indicates that Theorem 4.1 works well for well-conditioned B. For ill-conditioned B, applying iterative refinement with multiple-precision arithmetic [18] to the linear system will potentially remedy the bound (15). Furthermore, if no entry of z_jB − A and b is a wide interval, one can use a staggered correction [17, Section 4.3]: one computes d̃ that solves (z_jB − A)d ≈ r̃ in a numerical (non-rigorous) sense, where d̃_i denotes the i-th entry of d̃, and verifies the corrected solution ỹ + d̃. This technique is expected to give sharper error bounds than (15) in Theorem 4.1.

We summarize the above procedures in Algorithm 1. In this implementation, we scale the target interval Ω to [−1, 1] by A' = (1/ρ)(A − γB) and compute the eigenvalues of A'x = λ'Bx for simplicity. Here, we denote interval quantities with square brackets.
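The following floating-point sketch illustrates the shape of the Theorem 4.1 bound, assuming it reads |ỹ_i − y*_i| ≤ ||r̃||_2/(λ_min(B)|Im z_j|), consistent with the proof via B^{1/2}. It is not a verified computation (no interval arithmetic), and the perturbed solution, matrices, and quadrature point are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
MA = rng.standard_normal((n, n))
A = (MA + MA.T) / 2                     # Hermitian A
ML = rng.standard_normal((n, n))
B = ML @ ML.T + n * np.eye(n)           # Hermitian positive definite B
zj = 0.2 + 0.7j                         # a quadrature point off the real axis
b = rng.standard_normal(n)

y_ref = np.linalg.solve(zj * B - A, b)            # floating-point proxy for y*
y_tilde = y_ref + 1e-8 * rng.standard_normal(n)   # plays the approximate solution
r = b - (zj * B - A) @ y_tilde                    # residual of y_tilde

lam_min_B = np.linalg.eigvalsh(B).min()           # computed, not rigorously bounded
bound = np.linalg.norm(r) / (lam_min_B * abs(zj.imag))

# The componentwise error is dominated by the normwise bound.
err = np.abs(y_tilde - y_ref)
assert np.all(err <= bound)
```

In the verified setting, ||r̃||_2 must itself be enclosed with directed rounding, and λ_min(B) must be replaced by a verified lower bound.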
The verification in line 4 of Algorithm 1 can be done by, e.g., the following steps: 1. Compute a numerical approximation λ̂ of λ̃ (defined in Section 3) using the MATLAB function eigs.

Numerical examples
To illustrate the effectiveness of the proposed method, we show three numerical examples (two artificially generated eigenproblems and one practical eigenproblem). In the first and third examples, we compared the proposed method with INTLAB's function verifyeig in terms of the CPU time. The second example was set up to illustrate the performance of the proposed method.

Algorithm 1 Proposed method.
Input: A ∈ C^{n×n}, B ∈ C^{n×n}, L, M ∈ N_+ such that m = LM, V ∈ C^{n×L}, γ, ρ ∈ R, and δ > 0.

All computations used INTLAB [20]. The matrix V ∈ R^{n×L} was generated by using the built-in MATLAB function randn. The tolerance of the quadrature error was δ = 10^{−15}. We determined the smallest N that satisfies (14). Note again that the number of eigenvalues in the interval is given in advance.
In this example, numerically computed solutions of the linear systems (z_jB − A)Y_j = BV were obtained by using the MATLAB function mldivide. The eigenvalues of H^{<,in}_M y = λ H^{in}_M y in line 9 of Algorithm 1 were verified by using the INTLAB function verifyeig.
Artificially generated eigenproblems 1

The test matrix pencil zB − A used was given by

A = tridiag(−1, 2, −1) ∈ R^{n×n}, B = diag(b_1, b_2, . . . , b_n) ∈ R^{n×n}, (16)

where tridiag(·, ·, ·) denotes the tridiagonal Toeplitz matrix consisting of a triplet and each b_i is normally distributed with mean 1 and variance 10^{−7}. The generalized eigenproblem of the matrix pencil (16) models harmonic oscillators consisting of mass points and springs. In particular, the matrix pencil (16) arises from an equation of motion of mass points in one dimension. Let u_i(t) be the displacement at time t of the i-th mass point, with mass b_i, from the equilibrium of spring i, connected with two springs with stiffnesses k_i = k_{i+1} = 1. Then, we have the equation of motion

b_i u_i''(t) = u_{i+1}(t) − 2u_i(t) + u_{i−1}(t).

Suppose that each mass point has a simple harmonic oscillation u_i(t) = x_i sin(ωt + φ), where ω is the angular frequency and φ is the phase, and that the homogeneous Dirichlet boundary condition u_0(t) = u_{n+1}(t) = 0 is imposed. Then, we have the eigenproblem Ax = ω²Bx with A and B as in (16).

The verification targets were the four eigenvalues near 2 of the matrix pencil (16) for n = 2^ℓ, ℓ = 5, 6, . . . , 20. We set the parameters L = 2 and M = 2. It is well known that the eigenvalues of A are given by λ_i(A) = 2 − 2 cos(iπ/(n + 1)) for i = 1, 2, . . . , n. Perturbation theory for Hermitian generalized eigenproblems [16, Theorem 8.3] gives the bound (17) between λ_i and λ_i(A), where ∆B = I − B. Then, we derived the lower bound of |λ̃| using the eigenvalue λ_i(A) with its bound (17).

Figure 2 shows the CPU times of the proposed method (Algorithm 1) and a standard method for verifying specific eigenvalues in MATLAB (the built-in MATLAB function eigs for the solution of the eigenproblem and the INTLAB function verifyeig for eigenvalue verification). As shown in Figure 2, the efficient verification technique based on Theorem 4.1 substantially improved the CPU time of the proposed method, and the proposed method using this technique was faster than the standard method when the matrix size n was larger than 2^{10}.
Furthermore, due to the limit of RAM, the standard method did not run for ℓ > 16. The proposed method tended to be more effective as the size n of the sparse matrices became large. On the other hand, the proposed method was inferior to verifyeig in terms of the sharpness of the error bounds. Table 1 gives the eigenvalues verified by the proposed method for each ℓ. For each ℓ, the digits in single lines are the same as those of the exact eigenvalues, whereas the digits in double lines denote the supremum and infimum of the exact eigenvalues. Table 1 shows that the proposed method succeeded in verifying the eigenvalues to at least 5 digits up to ℓ = 20. For comparison, for n = 2^{10}, verifyeig displayed 13 correct digits of the target eigenvalues. This difference is mainly due to an overestimation of the error Ỹ_j − Y*_j and in particular of ||(z_jB − A)^{−1}||_2 (see Theorem 4.1). In addition, we remark that this example (16) is ideal for showing the effectiveness of the proposed method, thanks to the sparsity of A and B and the simple structure of B.
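The closed-form spectrum of A quoted above is easy to confirm numerically; a small sketch (the size n = 32 is an arbitrary choice):

```python
import numpy as np

n = 32
# Stiffness matrix A = tridiag(-1, 2, -1) of the mass-spring chain.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam_numeric = np.linalg.eigvalsh(A)                        # ascending order
lam_analytic = 2 - 2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
assert np.allclose(lam_numeric, np.sort(lam_analytic))
```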
Artificially generated eigenproblems 2

Another test matrix pencil zB − A, built from a pentadiagonal Toeplitz matrix, was considered for the second numerical example, where "pentadiag" denotes the pentadiagonal Toeplitz matrix. We varied b_{100} over 0, 10^{−16}, 10^{−15}, . . . , 10^0 to illustrate the performance of our method in the case that B is positive semidefinite or ill-conditioned. We considered six (m = 6) eigenvalues in Ω = [0.95, 1.05] and set the parameters L = 3 and M = 2. For the scaled eigenproblem, we verified |λ̃| > 1.36 by using INTLAB's function isregular.

Figure 3 shows the transition of the verified partial eigenvalues with respect to the entry b_{100}. The six target eigenvalues are plotted in Figure 3 (a). As b_{100} changes, these values move slightly between b_{100} = 1 and 10^{−2}. Our proposed method succeeded in enclosing these eigenvalues with radii up to 10^{−9}, as shown in Figure 3 (b). This result implies that the proposed algorithm works well in the case of the matrix B being semidefinite or ill-conditioned. Finally, we remark that Theorem 4.1 cannot work in this case because λ_min(B)^{−1} becomes very large or infinite. One should use INTLAB's function verifylss or other verification methods for linear systems.

Practical eigenproblem

The third example is the generalized eigenvalue problem for VCNT900 [4][5][6], which is associated with a vibrating carbon nanotube within a supercell with spd orbitals. Both matrices A and B have nonzero density 42.8% and are not sparse. Figure 4 shows the distribution of the 52 eigenvalues and the outer eigenvalues nearest to [a, b]. To verify a lower bound of λ_min(B), we used Rump's method [21] via the INTLAB function isspd. That is, we first computed an approximate smallest eigenvalue of B (e.g., by the built-in MATLAB function eigs), say λ̂_min(B). We then checked the positive definiteness of B − cλ̂_min(B)I using isspd for a certain c ∈ (0, 1). If the matrix is positive definite, then cλ̂_min(B) is adopted as the desired lower bound of λ_min(B).
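The lower-bound procedure for λ_min(B) described above can be imitated in plain floating point with a Cholesky test; note that NumPy's Cholesky is not a rigorous positive-definiteness proof, unlike INTLAB's isspd with directed rounding, so this is only a sketch of the logic on an illustrative test matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
ML = rng.standard_normal((n, n))
B = ML @ ML.T + n * np.eye(n)           # Hermitian positive definite test matrix

lam_hat = np.linalg.eigvalsh(B).min()   # approximate smallest eigenvalue of B
c = 0.9                                 # shift factor c in (0, 1)
try:
    # Cholesky succeeds iff the shifted matrix is (numerically) positive definite.
    np.linalg.cholesky(B - c * lam_hat * np.eye(n))
    lower_bound = c * lam_hat           # lambda_min(B) > c * lam_hat (up to rounding)
except np.linalg.LinAlgError:
    lower_bound = 0.0                   # no bound obtained; retry with smaller c
```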
Furthermore, for the scaled eigenproblem, we verified |λ̃| > 1.19 by using INTLAB's function isregular. The execution time of this part was about 120 seconds because of our naive implementation. Indeed, there is room to improve this part. For example, we can use the efficient technique given in [25], which is based on Sylvester's law of inertia, to verify the non-existence of eigenvalues in a prescribed interval. The proposed method based on Theorem 4.1 successfully verified 37 of the 52 eigenvalues in 7.9 seconds and failed to obtain inclusions of the remaining 15 eigenvalues. This is due to an overestimation of the entries of Ỹ_j − Y*_j. When using a verification method in INTLAB (the so-called backslash '\') for the linear systems, the proposed method successfully verified all 52 eigenvalues in 36.0 seconds. The standard method (eigs + verifyeig) also succeeded in verifying all 52 eigenvalues in 5.2 seconds, since the sizes of the matrices are not so large. Although the most expensive part in Algorithm 1 is the verification of Ỹ_j, we note that this can be done in parallel for all j = 1, 2, . . . , N.