Analysis of eigenvalue condition numbers for a class of randomized numerical methods for singular matrix pencils

The numerical solution of the generalized eigenvalue problem for a singular matrix pencil is challenging due to the discontinuity of its eigenvalues. Classically, such problems are addressed by first extracting the regular part through the staircase form and then applying a standard solver, such as the QZ algorithm, to that regular part. Recently, several novel approaches have been proposed to transform the singular pencil into a regular pencil by relatively simple randomized modifications. In this work, we analyze three such methods by Hochstenbach, Mehl, and Plestenjak that modify, project, or augment the pencil using random matrices. All three methods rely on the normal rank and do not alter the finite eigenvalues of the original pencil. We show that the eigenvalue condition numbers of the transformed pencils are unlikely to be much larger than the δ-weak eigenvalue condition numbers, introduced by Lotz and Noferini, of the original pencil. This not only indicates favorable numerical stability but also reconfirms that these condition numbers are a reliable criterion for detecting simple finite eigenvalues. We also provide evidence that, from a numerical stability perspective, the use of complex instead of real random matrices is preferable even for real singular matrix pencils and real eigenvalues. As a side result, we provide sharp left tail bounds for a product of two independent random variables distributed with the generalized beta distribution of the first kind or Kumaraswamy distribution.


Introduction
The purpose of this work is to study three recent numerical methods, introduced in [11,12], for computing finite eigenvalues of a square singular matrix pencil A − λB, that is, A, B ∈ C^{n×n} and det(A − λB) ≡ 0. We say that λ_0 ∈ C is an eigenvalue of A − λB if rank(A − λ_0B) < nrank(A, B), where nrank(A, B) := max_{ζ∈C} rank(A − ζB) < n is the normal rank of the pencil. Similarly, if rank(B) < nrank(A, B) then we say that A − λB has infinite eigenvalue(s).
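These definitions are easy to probe numerically. The following sketch (the 3 × 3 pencil is our own toy construction, not one of the paper's examples) estimates the normal rank from a few random shifts ζ and then checks the rank-drop criteria for a finite and an infinite eigenvalue:

```python
import numpy as np

# Toy singular pencil: det(A - λB) ≡ 0, with a finite eigenvalue at λ0 = 2
# and an infinite eigenvalue (this example pencil is our own construction).
A = np.diag([2.0, 1.0, 0.0])
B = np.diag([1.0, 0.0, 0.0])

def normal_rank(A, B, trials=5, seed=0):
    """nrank(A, B) = max_ζ rank(A - ζB); a few random ζ suffice generically."""
    rng = np.random.default_rng(seed)
    zetas = rng.standard_normal(trials) + 1j * rng.standard_normal(trials)
    return max(np.linalg.matrix_rank(A - z * B) for z in zetas)

nr = normal_rank(A, B)                                 # normal rank (here: 2)
lam0_is_eig = np.linalg.matrix_rank(A - 2.0 * B) < nr  # rank drop at λ0 = 2
has_infinite = np.linalg.matrix_rank(B) < nr           # rank(B) < nrank(A, B)
print(nr, lam0_is_eig, has_infinite)
```

Note that det(A − λB) = (2 − λ) · 1 · 0 vanishes identically here, so the determinant carries no eigenvalue information and the rank-based definition is essential.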
A major difficulty when working with a singular pencil numerically is the discontinuity of its eigenvalues, that is, the existence of (arbitrarily small) perturbations of A, B that completely destroy the eigenvalue accuracy. To circumvent this phenomenon, it is common to first extract the regular part from the staircase form of the pencil [27] before applying the QZ algorithm [21,14] to compute the eigenvalues. Notably, the popular software GUPTRI [2,3] is based on this approach. Numerically, the computation of the staircase form requires several rank decisions and these decisions tend to become increasingly difficult as the algorithm proceeds, which can ultimately lead to a failure to correctly identify and extract the regular part [7,22].
Despite the discontinuity of the eigenvalues mentioned above, Wilkinson [29] observed that the QZ algorithm directly applied to the original singular pencil usually returns the eigenvalues of the regular part with reasonable accuracy. De Terán, Dopico, and Moro [1] explained this phenomenon by developing a perturbation theory for singular pencils, implying that the set of perturbation directions causing discontinuous eigenvalue changes has measure zero. Later on, Lotz and Noferini [17] turned this theory into quantitative statements, by measuring the set of perturbations leading to large eigenvalue changes and defining the notion of weak eigenvalue condition number for singular pencils.
It is important to note that Wilkinson's observation does not immediately lead to a practical algorithm, because the (approximate) eigenvalues returned by the QZ algorithm are mixed with the spurious eigenvalues originating from the (perturbed) singular part and it is a nontrivial task to distinguish these two sets. During the last few years, several methods have been proposed to circumvent this difficulty.
Inspired by the findings in [1], Hochstenbach, Mehl, and Plestenjak [11] proposed to introduce a modification of the form

Ã − λB̃ := (A + τ U D_A V^*) − λ(B + τ U D_B V^*)    (1)

for matrices D_A, D_B ∈ C^{k×k} with k := n − nrank(A, B), random matrices U, V ∈ C^{n×k}, and a scalar τ ≠ 0. Generically, Ã − λB̃ is a regular pencil and the regular part of A − λB is exactly preserved, i.e., if λ_i is a finite eigenvalue of A − λB then λ_i is an eigenvalue of Ã − λB̃ with the same partial multiplicities. More specifically, λ_i is an eigenvalue of A − λB if and only if λ_i is an eigenvalue of Ã − λB̃ such that its right/left eigenvectors x, y satisfy V^*x = 0 and U^*y = 0. The latter property is used to extract the eigenvalues of A − λB from the computed eigenvalues of Ã − λB̃.
In [12], two different variations of the approach from [11] described above are proposed. Instead of adding a modification, the pencil is projected to the generically regular pencil U_⊥^* A V_⊥ − λ U_⊥^* B V_⊥ for random matrices U_⊥, V_⊥ ∈ C^{n×(n−k)}, and the eigenvalues of A − λB are extracted from the computed eigenvalues of this smaller pencil. The third method analyzed in this work consists of computing the eigenvalues of A − λB from an augmented, generically regular pencil built with random matrices S_A, S_B, T_A, T_B ∈ C^{k×k} and U, V ∈ C^{n×k}. For both variants, it holds generically that the regular part of A − λB is fully preserved in exact arithmetic. Note that roundoff error affects this eigenvalue preservation property when forming the modified pencils (1) and U_⊥^* A V_⊥ − λ U_⊥^* B V_⊥ in finite precision.
One goal of this work is to show that the modifications introduced by the three methods above are numerically safe. More specifically, we show that, with high probability, the eigenvalue condition numbers of the modified pencils are not much larger than the weak eigenvalue condition numbers of the original pencil. In particular, these methods can be expected to return good accuracy for well-conditioned eigenvalues of A − λB in the presence of roundoff error. Another implication of our result is that the eigenvalue condition numbers of the modified pencils represent a reliable complementary criterion for identifying reasonably well-conditioned finite eigenvalues in any of the algorithms from [11] and [12].

Related work
Closer to the analyses in [1,17], it was recently suggested in [16] to perturb the full pencil: A + τE − λ(B + τF), where E, F ∈ C^{n×n} are random Gaussian matrices and τ > 0 is small but well above the level of machine precision. Unlike for the three methods mentioned above, the regular part of A − λB is not preserved by this perturbation. On the other hand, the direct connection to [17] makes it possible to adapt their analysis and to use the computed eigenvalue condition numbers of the perturbed pencil as a criterion to identify finite eigenvalues of the original pencil. In [16, p. 2], it was stated that a similar analysis would be more difficult for the method from [11] because of the structure imposed on the random perturbation in (1). In this work, we address this question and carry over the analysis from [16,17] to the three methods above. In particular, our analysis confirms that the computed eigenvalue condition numbers can be used as a reliable indicator for such algorithms as well.
Outline The structure of the paper is as follows. In Section 2 we review basic concepts for singular pencils as well as δ-weak condition numbers. In Section 3 we present the three randomized numerical methods that we analyze in Section 4, where we also obtain the new left tail bounds. This is followed by numerical examples in Section 5. In the appendix we provide results obtained with symbolic computation that verify the results from the preceding sections.

Reducing subspaces and eigenvectors
In order to define eigenvectors of a singular pencil according to [1,17], we first introduce the Kronecker canonical form (KCF) and the notion of minimal reducing subspaces, see, e.g., [9,28].
Theorem 1 (Kronecker canonical form) Let A, B ∈ C^{n×n}. Then there exist nonsingular matrices P, Q ∈ C^{n×n} such that

P(A − λB)Q = diag(R(λ), S(λ)),  R(λ) = diag(J − λI_r, I_s − λN),    (2)

where J and N are in Jordan canonical form with N nilpotent. Furthermore,

S(λ) = diag(L_{m_1}(λ), ..., L_{m_k}(λ), L_{n_1}(λ)^T, ..., L_{n_k}(λ)^T),

where L_j(λ) = [0 I_j] − λ[I_j 0] is of size j × (j + 1), and m_i, n_i ≥ 0 for i = 1, ..., k, where k = n − nrank(A, B).
The pencil R(λ) in (2) is called the regular part of A − λB and contains the eigenvalues of A − λB, where the Jordan part J − λI_r contains the Jordan blocks of the finite eigenvalues of A − λB. If λ_0 is a finite eigenvalue of A − λB, then its partial multiplicities are the sizes of the Jordan blocks associated with λ_0. A finite eigenvalue is called simple if it is a simple root of det R(λ). The pencil S(λ) is called the singular part of A − λB and contains right singular blocks L_{m_1}(λ), ..., L_{m_k}(λ) and left singular blocks L_{n_1}(λ)^T, ..., L_{n_k}(λ)^T, where m_1, ..., m_k and n_1, ..., n_k are called the right and left minimal indices of the pencil, respectively.
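As a concrete illustration (our own construction, not one of the paper's examples), the following sketch assembles a 5 × 5 pencil as the direct sum J_1(1/2) ⊕ N_1 ⊕ L_1 ⊕ L_1^T and verifies that its normal rank is 4 and that the rank drops at the finite eigenvalue 1/2:

```python
import numpy as np
from scipy.linalg import block_diag

# Kronecker blocks: J_1(1/2) (finite eigenvalue 1/2), N_1 (infinite eigenvalue),
# L_1(λ) = [0 I_1] - λ[I_1 0] (right singular), and L_1(λ)^T (left singular).
AJ, BJ = np.array([[0.5]]), np.array([[1.0]])
AN, BN = np.array([[1.0]]), np.array([[0.0]])
AL, BL = np.array([[0.0, 1.0]]), np.array([[1.0, 0.0]])

A = block_diag(AJ, AN, AL, AL.T)   # 5x5: row sizes 1+1+1+2, column sizes 1+1+2+1
B = block_diag(BJ, BN, BL, BL.T)

# generic shifts reveal the normal rank 4; the shift 0.5 hits the eigenvalue
ranks = [np.linalg.matrix_rank(A - z * B) for z in (0.3 + 0.1j, -1.2, 0.5)]
print(ranks)
```

Here k = 1 (one right and one left singular block), so the rank at a generic shift is n − k = 4, dropping to 3 at the finite eigenvalue 1/2.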
We say that a subspace M ⊆ C^n is a reducing subspace [28] for the pencil A − λB if dim(AM + BM) = dim(M) − k. The minimal reducing subspace M_RS(A, B) is the intersection of all reducing subspaces and is spanned by the columns of Q corresponding to the blocks L_{m_1}(λ), ..., L_{m_k}(λ). Analogously, L is a left reducing subspace for the pencil A − λB if dim(A^*L + B^*L) = dim(L) − k and the minimal left reducing subspace L_RS(A, B) is the intersection of all left reducing subspaces.
For an eigenvalue λ_0 ∈ C of A − λB, a nonzero vector x ∈ C^n is called a right eigenvector if (A − λ_0B)x = 0 and x ∉ M_RS(A, B). A nonzero vector y ∈ C^n such that y^*(A − λ_0B) = 0 and y ∉ L_RS(A, B) is called a left eigenvector. This agrees with the definition of eigenvectors from [5,17]. Compared to a regular pencil, the eigenvectors of a singular pencil have a much larger degree of non-uniqueness, due to components from the minimal reducing subspaces.

Eigenvalue perturbation theory
Suppose that we perturb an n × n matrix pencil A − λB into

(A + ϵE) − λ(B + ϵF),    (3)

where ϵ > 0. We define ∥(E, F)∥ := (∥E∥_F² + ∥F∥_F²)^{1/2}, where ∥·∥_F denotes the Frobenius norm of a matrix. When ∥(E, F)∥ = 1 we can identify the pencil E − λF with a point on the unit sphere in C^{2n²} and think of (E, F) as a direction of the perturbation (3).
Before addressing the singular case, let us first recall the classical eigenvalue perturbation theory [26] for a regular pencil A − λB. Consider a simple finite eigenvalue λ_0 ∈ C of A − λB with normalized right/left eigenvectors x, y. For ∥(E, F)∥ = 1 and sufficiently small ϵ > 0 there exists an eigenvalue λ_0(ϵ) of the perturbed pencil (3) satisfying the perturbation expansion

λ_0(ϵ) = λ_0 + ϵ (y^*(E − λ_0F)x)/(y^*Bx) + O(ϵ²)    (4)

as ϵ → 0. In particular, |λ_0(ϵ) − λ_0| ≤ ϵ (1 + |λ_0|²)^{1/2}/|y^*Bx| + O(ϵ²). Note that the inequality becomes an equality for the direction (E, F) = (yx^*, −λ̄_0 yx^*)/(1 + |λ_0|²)^{1/2}. In turn, the absolute condition number of λ_0, defined as

κ(λ_0) := lim_{ϵ→0} sup_{∥(E,F)∥≤1} |λ_0(ϵ) − λ_0|/ϵ,    (5)

satisfies

κ(λ_0) = (1 + |λ_0|²)^{1/2}/|y^*Bx| = 1/γ(λ_0),  where γ(λ_0) := |y^*Bx|(1 + |λ_0|²)^{−1/2}.    (6)

Theorem 2 Let λ_0 ∈ C be a simple finite eigenvalue of a singular pencil A − λB and let X_1, X, Y_1, Y be matrices with orthonormal columns such that: X_1 is a basis for ker(A − λ_0B) ∩ M_RS(A, B), X is a basis for ker(A − λ_0B), Y_1 is a basis for ker((A − λ_0B)^*) ∩ L_RS(A, B), and Y is a basis for ker((A − λ_0B)^*). If E − λF is such that det(Y_1^*(E − λ_0F)X_1) ≠ 0, then, for sufficiently small ϵ > 0, there exists an eigenvalue λ_0(ϵ) of the perturbed pencil (3) such that

λ_0(ϵ) = λ_0 + ϵ (y^*(E − λ_0F)x)/(y^*Bx) + O(ϵ²),    (7)

where x and y denote the last columns of X and Y, respectively. The expansion (7) allows one to define the directional sensitivity for perturbations E − λF, ∥(E, F)∥ = 1, satisfying the condition of Theorem 2:

σ(λ_0; E, F) := |y^*(E − λ_0F)x| / |y^*Bx|;    (8)

see [17, Definition 2.4 and Corollary 3.3]. We can now generalize the definition of γ(λ_0) in (6) to singular pencils by using the right/left eigenvectors x, y from Theorem 2.
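The first-order expansion (4) is easy to verify numerically. The sketch below uses a regular 2 × 2 pencil and perturbation directions of our own choosing; the mismatch between the perturbed eigenvalue and the first-order prediction should be of order ϵ²:

```python
import numpy as np

A = np.diag([1.0, 3.0]); B = np.eye(2)      # regular pencil, simple eigenvalue λ0 = 1
x = y = np.array([1.0, 0.0])                # right/left eigenvectors for λ0
E = np.array([[0.3, 0.7], [-0.2, 0.5]])     # arbitrary perturbation direction
F = np.array([[0.1, -0.4], [0.6, 0.2]])
eps = 1e-6

# first-order prediction from the expansion: λ0 + ϵ y*(E - λ0 F)x / (y*Bx)
pred = 1.0 + eps * (y @ (E - 1.0 * F) @ x) / (y @ B @ x)

# eigenvalue of the perturbed pencil nearest to λ0, via eig((B+ϵF)^{-1}(A+ϵE))
w = np.linalg.eigvals(np.linalg.solve(B + eps * F, A + eps * E))
lam_eps = w[np.argmin(np.abs(w - 1.0))]
print(abs(lam_eps - pred))   # O(ϵ²): many orders of magnitude below ϵ
```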
Because of the appearance of the factor y^*Bx in the denominator of (8), we expect that the quantity 1/γ(λ_0) continues to play a crucial role in determining the sensitivity of an eigenvalue. However, it is important to note that x, y are particular choices of eigenvectors made in Theorem 2: the right eigenvector x is orthogonal to the right minimal reducing subspace M_RS and the left eigenvector y is orthogonal to L_RS. Under these constraints, the eigenvectors x and y of the simple eigenvalue λ_0 become uniquely determined up to multiplication by unit complex numbers. It follows from Y^*BX_1 = 0 and Y_1^*BX = 0 that this particular choice maximizes |y^*Bx|, i.e., for all vectors z and w such that ∥z∥_2 = ∥w∥_2 = 1, (A − λ_0B)z = 0, and w^*(A − λ_0B) = 0, it holds that

|w^*Bz| ≤ |y^*Bx|.    (9)

Remark 3 If A, B are real matrices and λ_0 is a real simple eigenvalue, then the matrices X, X_1, Y, Y_1 and vectors x, y in Theorem 2 can be chosen to be real as well.
Clearly, the perturbations for which the quantity det(Y_1^*(E − λ_0F)X_1) appearing in Theorem 2 vanishes form a set of measure zero in the unit sphere in C^{2n²}. In other words, this event has zero probability if we draw (E, F) uniformly at random from the unit sphere in C^{2n²}, which we will denote by (E, F) ∼ U(2n²). The δ-weak condition number introduced by Lotz and Noferini [17, Definition 2.5] offers a more refined picture by measuring the tightest upper bound t such that the directional sensitivity (8) stays below t with probability at least 1 − δ.
Definition 1 Let λ_0 ∈ C be a finite simple eigenvalue of a singular pencil A − λB. The δ-weak condition number κ_w(λ_0; δ) of λ_0 is defined as the smallest t ≥ 0 such that the directional sensitivity (8) satisfies σ(λ_0; E, F) ≤ t with probability at least 1 − δ for (E, F) ∼ U(2n²).

If λ_0 is a finite simple eigenvalue of a regular pencil A − λB, it follows from (3) and (6) that the directional sensitivity (8) never exceeds κ(λ_0). Therefore, if we apply Definition 1 to a regular pencil, it follows from (5) that κ_w(λ_0; δ) converges to κ(λ_0) = 1/γ(λ_0) monotonically from below as δ ↓ 0. Theorem 4, taken from [17, Theorem 5.1] and [16, Theorem 3.1], bounds κ_w(λ_0; δ) in terms of 1/γ(λ_0) and thereby suggests to use 1/γ(λ_0) as a proxy for eigenvalue sensitivity in the singular case as well; here γ(λ_0) is defined as in (6) with the right/left eigenvectors x, y from Theorem 2.
The algorithms for the singular generalized eigenvalue problem presented in the next section use γ(λ_0) to identify (finite) eigenvalues numerically. We finish this section by remarking that the absolute condition number (5) is consistent with the definitions in [17,16]. Following, e.g., [10] one could also consider a notion of relative condition number that imposes ∥E∥_F ≤ ∥A∥_F and ∥F∥_F ≤ ∥B∥_F instead of ∥(E, F)∥ ≤ 1 on the perturbation direction in (3). In effect, both the standard and weak (absolute) condition numbers get multiplied by the factor (∥A∥_F² + ∥B∥_F²)^{1/2}. Under reasonable choices of the parameters involved, this additional factor does not differ significantly for the modified pencils. In particular, our results presented for absolute condition numbers easily extend to relative condition numbers.

Randomized numerical methods based on the normal rank
In this section, we describe in some detail the three numerical methods from [11] and [12] for computing the finite eigenvalues of an n × n singular pencil A − λB with nrank(A, B) = n − k for k ≥ 1. All three methods require knowledge of the exact normal rank in order to leave the regular part intact. If this quantity is not known a priori, it can be determined from rank(A − ξ_iB) for a small number of randomly chosen ξ_i ∈ C.

Rank-completing modification
We first consider the rank-completing method from [11], where a random pencil of normal rank k is added to yield the (generically regular) matrix pencil

Ã − λB̃ = (A + τ U D_A V^*) − λ(B + τ U D_B V^*),    (10)

with D_A, D_B ∈ C^{k×k}, random matrices U, V ∈ C^{n×k}, and τ ≠ 0 as in (1). Summary 5 classifies the eigenvalues of (10); its final two classes are:
3. Random right eigenvalues: There are N eigenvalues, which are all simple and such that V^*x = 0 for all right eigenvectors x and U^*y ≠ 0 for all left eigenvectors y.
4. Random left eigenvalues: There are N eigenvalues, which are all simple and such that V^*x ≠ 0 for all right eigenvectors x and U^*y = 0 for all left eigenvectors y.
Summary 5 has the following practical consequences. If we compute all eigenvalues λ_i of (10), together with the (normalized) right and left eigenvectors x_i and y_i for i = 1, ..., n, then max(∥V^*x_i∥_2, ∥U^*y_i∥_2) = 0 if and only if λ_i is an eigenvalue of A − λB. In numerical computations, we can use max(∥V^*x_i∥_2, ∥U^*y_i∥_2) < δ_1, where δ_1 is a prescribed threshold, as a criterion to extract the true eigenvalues in the first phase. Note that for a simple finite eigenvalue x_i and y_i are unique (up to multiplication by unit complex numbers) because, generically, the modified pencil is regular. They correspond to eigenvectors of the original singular pencil satisfying the orthogonality constraints V^*x_i = 0 and U^*y_i = 0.
In the second phase, we use the (reciprocal) eigenvalue sensitivities for extracting simple finite eigenvalues, that is, we compute

γ_i = |y_i^* B x_i| (1 + |λ_i|²)^{−1/2}    (11)

and identify λ_i as a finite eigenvalue if γ_i > δ_2 for a prescribed threshold δ_2. Note that 1/γ_i is the absolute condition number of λ_i as an eigenvalue of the (generically) regular pencil (10); see (6).
For different matrices U and V in (10) we obtain different eigenvectors x_i and y_i and thus different values of γ_i for the same eigenvalue, while changing τ, D_A, and D_B does not affect the eigenvectors [12, Lemma 3.4].
In Section 4 we will analyze these values for random U and V and compare them to the unique value γ(λ_i) that appears in the δ-weak condition number of λ_i as an eigenvalue of the singular pencil A − λB; see Theorem 4.
The considerations above lead to Algorithm 1 from [11]. In theory, the results returned by the algorithm are independent of τ ≠ 0. In practice, |τ| should be neither too small nor too large in order to limit the impact of roundoff error; in [11] it is suggested to scale A and B so that ∥A∥_1 = ∥B∥_1 = 1, where ∥A∥_1 = max_{1≤j≤n} Σ_{i=1}^{n} |a_{ij}|, and take τ = 10^{−2}. The quantity ε stands for the machine precision.
Algorithm 1: Eigenvalues of singular pencil by rank-completing modification.
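Since the listing of Algorithm 1 is not reproduced here, the following Python sketch outlines its two phases under illustrative assumptions: the toy test pencil, the thresholds δ_1, δ_2, and the particular choice of D_A, D_B are ours, not the paper's; see [11] for the authoritative formulation.

```python
import numpy as np
from scipy.linalg import eig

def rank_completing_eigs(A, B, k, tau=1e-2, delta1=1e-6, delta2=1e-8, seed=0):
    """Sketch of the rank-completing method: modify as in (10), run QZ, filter."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    gauss = lambda: rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
    U = np.linalg.qr(gauss())[0]            # random U, V with orthonormal columns
    V = np.linalg.qr(gauss())[0]
    # any regular pencil D_A - λ D_B with eigenvalues away from the true ones
    DA, DB = np.diag(3.0 + np.arange(k)), np.eye(k)
    At = A + tau * (U @ DA @ V.conj().T)
    Bt = B + tau * (U @ DB @ V.conj().T)
    lam, Yl, Xr = eig(At, Bt, left=True, right=True)
    found = []
    for i, li in enumerate(lam):
        if not np.isfinite(li):
            continue
        xi = Xr[:, i] / np.linalg.norm(Xr[:, i])
        yi = Yl[:, i] / np.linalg.norm(Yl[:, i])
        # phase 1: true eigenvalues satisfy V*x = 0 and U*y = 0
        sigma = max(np.linalg.norm(V.conj().T @ xi), np.linalg.norm(U.conj().T @ yi))
        # phase 2: reciprocal condition number, cf. (11) (note: original B)
        gamma = abs(yi.conj() @ B @ xi) / np.sqrt(1.0 + abs(li) ** 2)
        if sigma < delta1 and gamma > delta2:
            found.append(li)
    return np.sort_complex(np.array(found))

# toy singular 4x4 pencil with k = 2 and true finite eigenvalues {1, 2}
A = np.diag([2.0, 1.0, 0.0, 0.0])
B = np.diag([1.0, 1.0, 0.0, 0.0])
found = rank_completing_eigs(A, B, k=2)
print(found)   # ≈ [1, 2]
```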
By the theory in [11] and [12], the eigenvectors of the modified pencil (10) that correspond to true eigenvalues do not change if we replace U and V by UR and VS, where R and S are arbitrary nonsingular k × k matrices. Thus, choosing U, V to have orthonormal columns does not violate the genericity assumption in Summary 5.
Note that γ_i in line 4 was initially computed in [11] with B̃ in place of B; it was modified in [12] to be consistent with [17] and [16]. Since y_i^* B̃ x_i = y_i^* B x_i for true eigenvalues, we use B instead of B̃ to simplify the analysis.
We remark that the values γ_i from (11) were also used in [16] for computing finite eigenvalues of a singular pencil via unstructured random perturbations. The use of full-rank perturbations comes with two disadvantages: the orthogonality relations for the eigenvectors exploited above are not satisfied and, in contrast to Algorithm 1, the eigenvalues of the perturbed pencil in [16] differ from the exact eigenvalues of the original pencil. The latter forces one to choose τ > 0 very small, while at the same time it needs to stay well above the level of machine precision.
Let us also remark that Algorithm 1, with additional heuristic criteria, can in practice successfully compute multiple finite eigenvalues as well, due to a "positive" effect of roundoff error; for details see [12, Sec. 6].

Normal rank projections
In [12], a variant of Algorithm 1 was proposed that uses random projections to a pencil of smaller size, equal to the normal rank n − k. In addition, the method does not require choosing the matrices D_A, D_B and the parameter τ. The eigenvalues of A − λB are computed from the projected pencil U_⊥^* A V_⊥ − λ U_⊥^* B V_⊥ for random matrices U_⊥, V_⊥ ∈ C^{n×(n−k)}. To connect this to the modified pencil (10) used in Algorithm 1, let us assume that the columns of U_⊥ and V_⊥ span the orthogonal complements of the ranges of U and V, respectively, so that U_⊥^* U = 0, V_⊥^* V = 0. The pencil [U U_⊥]^*(Ã − λB̃)[V V_⊥] is then equivalent to (10) and we observe the following. Based on the above results, an algorithm is devised in [12]. Algorithm 2 is a simplified form that matches Algorithm 1 as closely as possible.
Algorithm 2: Eigenvalues of singular pencil by normal rank projection.
Input and output: See Algorithm 1.
1: Select random unitary n × n matrices [U U_⊥] and [V V_⊥], where U and V have k columns.
2: Compute the eigenvalues λ_i, i = 1, ..., n − k, and right and left normalized eigenvectors x_i and y_i of the projected pencil U_⊥^* A V_⊥ − λ U_⊥^* B V_⊥.
The following corollary shows that the reciprocal eigenvalue condition number γ_i computed in line 4 of Algorithm 2 matches the corresponding quantity of Algorithm 1.
Corollary 1 Let λ_i ∈ C be a simple eigenvalue of a singular pencil A − λB.
Under the assumptions of Proposition 1, if a) (λ_i, x_i, y_i) is an eigentriple of (10) such that ∥x_i∥_2 = 1, ∥y_i∥_2 = 1, V^*x_i = 0, and U^*y_i = 0; and b) (λ_i, w_i, z_i) is an eigentriple of the projected pencil U_⊥^* A V_⊥ − λ U_⊥^* B V_⊥ with ∥w_i∥_2 = ∥z_i∥_2 = 1, then both eigentriples yield the same value of γ_i.
Proof Since λ_i is simple, the vectors x_i, y_i from a) and w_i, z_i from b) are uniquely defined up to multiplication by unit complex numbers. If a) and b) both hold then it immediately follows that, up to such scalings, x_i = V_⊥w_i and y_i = U_⊥z_i. ⊓⊔
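A minimal sketch of the projection approach (the 4 × 4 test pencil and the random seed are illustrative assumptions of ours, not the paper's data):

```python
import numpy as np
from scipy.linalg import eig

def projected_eigs(A, B, k, seed=1):
    """Eigenvalues of the (n-k) x (n-k) projected pencil U⊥* A V⊥ - λ U⊥* B V⊥."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    haar = lambda: np.linalg.qr(rng.standard_normal((n, n))
                                + 1j * rng.standard_normal((n, n)))[0]
    Uperp, Vperp = haar()[:, k:], haar()[:, k:]   # trailing n-k columns
    return eig(Uperp.conj().T @ A @ Vperp, Uperp.conj().T @ B @ Vperp, right=False)

# toy singular 4x4 pencil: normal rank 2 (k = 2), true finite eigenvalues {1, 2}
A = np.diag([2.0, 1.0, 0.0, 0.0])
B = np.diag([1.0, 1.0, 0.0, 0.0])
w = projected_eigs(A, B, k=2)
print(np.sort(w.real))   # ≈ [1, 2]
```

In this toy case the projected pencil has size n − k = 2, which equals the size of the regular part, so its eigenvalues are exactly the true finite eigenvalues.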

Augmentation
The third method, also presented in [12], uses an (n + k) × (n + k) augmented (or bordered) matrix pencil (12), built from A − λB together with random matrices U, V ∈ C^{n×k} and S_A, S_B, T_A, T_B ∈ C^{k×k}. Algorithm 3 returns all eigenvalues λ_i, i = 1, ..., n + k, for which max(σ_i, τ_i) < δ_1 and γ_i > δ_2. Again, the reciprocal eigenvalue condition number γ_i computed in line 5 of Algorithm 3 matches the corresponding quantity of Algorithm 1.
Corollary 2 Under the assumptions of Proposition 2, if a) (λ_i, x_i, y_i) is an eigentriple of (10) such that ∥x_i∥_2 = 1, ∥y_i∥_2 = 1, V^*x_i = 0, and U^*y_i = 0; and b) (λ_i, w_i, z_i), where w_i = [w_{i1}^T w_{i2}^T]^T and z_i = [z_{i1}^T z_{i2}^T]^T with w_{i1}, z_{i1} ∈ C^n, is an eigentriple of the augmented pencil (12), then both eigentriples yield the same value of γ_i. Proof If a) and b) are both true then it immediately follows that, up to multiplication by unit complex numbers, x_i = w_{i1} and y_i = z_{i1}. ⊓⊔

Probabilistic analysis
Our goal is to analyze the behavior of the quantities γ_i in Algorithms 1-3 and show that they are unlikely to be much below γ(λ_i). It follows from Corollaries 1 and 2 that it is sufficient to consider Algorithm 1 and the quantity γ_i defined in (11).
In the following, we assume that λ_i is a simple eigenvalue of A − λB and let X = [X_1 x] and Y = [Y_1 y] denote the orthonormal bases for ker(A − λ_iB) and ker((A − λ_iB)^*) introduced in Theorem 2. We recall from Theorem 4 that the reciprocal of γ(λ_i) = |y^*Bx|(1 + |λ_i|²)^{−1/2} critically determines the sensitivity of λ_i as an eigenvalue of A − λB. The sensitivity of λ_i as an eigenvalue of Ã − λB̃ from (10) is given by 1/γ_i with γ_i = |y_i^*Bx_i|(1 + |λ_i|²)^{−1/2}; see (11). The eigenvectors x_i, y_i are normalized (∥x_i∥_2 = ∥y_i∥_2 = 1) and depend on the choices of U and V in Algorithm 1. Generically (with respect to D_A, D_B, U, V), item 1 of Summary 5 yields the relations

x_i = X_1a + αx,  y_i = Y_1b + βy,    (13)

for some vectors a, b and scalars α, β; in particular, the choice of V determines |α| if V^*X_1 is invertible. Analogously, the choice of U determines |β| if U^*Y_1 is invertible. Since Y^*BX_1 = 0 and Y_1^*BX = 0, we get |y_i^*Bx_i| = |α||β||y^*Bx| and thus γ_i = |α||β|γ(λ_i).
The relation 0 ≤ |α|, |β| ≤ 1 immediately gives γ_i ≤ γ(λ_i), in line with (9). A small value of |α||β| means an increased eigenvalue sensitivity for Ã − λB̃, potentially causing Algorithm 1 to yield unnecessarily inaccurate results. In the following, we will show that this is unlikely when random matrices U, V are used in Algorithm 1. This also implies that the (reciprocal) condition numbers computed in Algorithm 1 can be used with high probability to correctly identify finite simple eigenvalues.

Preliminary results
Let N_1(µ, σ²) denote the normal distribution with mean µ and variance σ². In particular, x ∼ N_1(0, 1) is a standard (real) normal random variable. We write z ∼ N_2(0, 1) if z = x + iy is a standard complex normal variable, that is, x, y ∼ N_1(0, 1/2) are independent. In the following, we will analyze real matrices (F = R) and complex matrices (F = C) simultaneously. For this purpose, we set ϕ = 1 for F = R and ϕ = 2 for F = C.
The matrices U and V from Algorithm 1 belong to the Stiefel manifold

V_k^n(F) := {Q ∈ F^{n×k} : Q^*Q = I_k}.

We will choose them randomly (and independently) from the uniform distribution on V_k^n(F). A common way to compute such a matrix is to perform the QR decomposition of an n × k Gaussian random matrix M, that is, the entries of M are i.i.d. real or complex standard normal variables; see, e.g., [20,25]. That this indeed yields the uniform distribution follows from the following variant of the well-known Bartlett decomposition theorem ([23, Theorem 3.2.14], [6, Proposition 7.2]); see also [17, Proposition 4.5].
Theorem 6 For F ∈ {R, C}, let M ∈ F^{n×k}, n ≥ k, be a Gaussian random matrix. Consider the QR decomposition M = QR, where Q ∈ V_k^n(F) and R ∈ F^{k×k} is upper triangular with non-negative diagonal entries. Then a) the entries of Q and the entries of the upper triangular part of R are all independent random variables; b) Q is distributed uniformly on V_k^n(F) and ϕ r_{jj}² ∼ χ²(ϕ(n − j + 1)) for j = 1, ..., k; where χ²(ℓ) denotes the chi-squared distribution with ℓ degrees of freedom.
Part b) of Theorem 6 implies that each column of Q is distributed uniformly over the unit sphere in F^n. Note that x = z/∥z∥_2, for a Gaussian random vector z ∈ F^n, has the same distribution. The following result provides the distribution of the entries of x; for F = R this result can be found in [4].
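The QR-based sampling described above can be sketched as follows; the phase correction that makes diag(R) nonnegative is needed so that the factorization, and hence Q, is drawn from the uniform distribution (dimensions, seed, and sample sizes below are illustrative):

```python
import numpy as np

def uniform_stiefel(n, k, rng, complex_field=True):
    """Draw Q uniformly from the Stiefel manifold V_k^n(F) via QR of a Gaussian matrix."""
    M = rng.standard_normal((n, k))
    if complex_field:
        M = M + 1j * rng.standard_normal((n, k))
    Q, R = np.linalg.qr(M)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))   # rotate columns so that diag(R) >= 0

rng = np.random.default_rng(0)
Q = uniform_stiefel(6, 3, rng)
print(np.allclose(Q.conj().T @ Q, np.eye(3)))   # orthonormal columns

# each column is uniform on the unit sphere, so E|q_ij|^2 = 1/n
m = np.mean([abs(uniform_stiefel(5, 2, rng)[0, 0]) ** 2 for _ in range(4000)])
print(m)   # ≈ 1/5
```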
Lemma 1 Consider a random vector x distributed uniformly over the unit sphere in F^n for n ≥ 2. Then the entries of x are i.i.d. with

|x_i|² ∼ Beta(ϕ/2, ϕ(n − 1)/2),  i = 1, ..., n,

where Beta denotes the beta distribution.
Proof By Theorem 6, the entries of x are independent. Without loss of generality, let i = 1. Using that x = z/∥z∥_2 for a Gaussian random vector z and setting w = [z_2, ..., z_n]^T, it follows that |x_1|² = |z_1|²/(|z_1|² + ∥w∥_2²), where z_1, w are independent and ϕ|z_1|² ∼ χ²(ϕ), ϕ∥w∥_2² ∼ χ²(ϕ(n − 1)). This implies the claimed result; see, e.g., [6, p. 320].
Our analysis will connect α and β from (13) to the nullspaces of k × (k + 1) standard Gaussian matrices, which are characterized by the following result.
Lemma 2 For a Gaussian random matrix Ω ∈ F^{k×(k+1)} with k ≥ 2, let x be a vector in the nullspace of Ω such that ∥x∥_2 = 1. Then, with probability one, |x_i| is uniquely determined and satisfies

|x_i|² ∼ Beta(ϕ/2, ϕk/2),  i = 1, ..., k + 1.    (14)

Proof We assume that Ω has rank k, which holds with probability 1. For a Gaussian random vector ω ∈ F^{k+1} independent of Ω^*, consider the QR decomposition [Ω^*, ω] = QR. Letting x denote the last column of Q, it follows that x is orthogonal to the columns of Ω^* or, in other words, x is in the nullspace of Ω. From Theorem 6, it follows that x is distributed uniformly over the unit sphere in F^{k+1}. The distribution (14) then follows from Lemma 1. Finally, note that |x_i|² is uniquely determined because the nullspace of Ω has dimension 1.
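The construction in the proof can be mirrored numerically as a sanity check of (14); the choice k = 3 and the sample size are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3

def nullvec(k):
    """Unit-norm null vector of a k x (k+1) complex Gaussian matrix, via QR."""
    Om = rng.standard_normal((k, k + 1)) + 1j * rng.standard_normal((k, k + 1))
    om = rng.standard_normal((k + 1, 1)) + 1j * rng.standard_normal((k + 1, 1))
    Q = np.linalg.qr(np.hstack([Om.conj().T, om]))[0]
    return Om, Q[:, -1]          # last column of Q spans null(Om)

Om, x = nullvec(k)
print(np.linalg.norm(Om @ x))    # ≈ 0: x is indeed a null vector

# by (14) with F = C, |x_1|^2 ~ Beta(1, k), so E|x_1|^2 = 1/(k+1)
m = np.mean([abs(nullvec(k)[1][0]) ** 2 for _ in range(5000)])
print(m)   # ≈ 0.25 for k = 3
```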
Proposition 3 For F ∈ {R, C}, let U, V be n × k independent random matrices from the uniform distribution on the Stiefel manifold V_k^n(F). Consider a finite simple eigenvalue λ_i ∈ F of a singular pencil A − λB with A, B ∈ F^{n×n}. Let x_i and y_i be the right and left normalized eigenvectors of the perturbed regular pencil (10) for the eigenvalue λ_i and let α, β be defined as in (13). Then |α| and |β| are independent random variables with

|α|², |β|² ∼ Beta(ϕ/2, ϕk/2),

where ϕ = 1 for F = R and ϕ = 2 for F = C.
Proof We will only prove the distribution for α; the derivation for β is entirely analogous. By the unitary invariance of the uniform distribution over the Stiefel manifold we may assume without loss of generality that X_1 = [e_1 ... e_k] and x = e_{k+1}, where e_i is the i-th vector of the standard canonical basis. By Theorem 6, the matrix V is obtained from the QR factorization Ω = VR of an n × k Gaussian random matrix Ω, with R being invertible almost surely. In particular, V^*x_i = 0 holds if and only if Ω^*x_i = 0, and writing x_i = X_1a + αx as in (13), this condition only involves the first k + 1 rows of Ω. Since submatrices of Gaussian random matrices are again Gaussian random matrices, this means that the vector [a^T, α]^T is in the nullspace of a k × (k + 1) Gaussian random matrix and has norm 1. Thus, the result on the distribution of α follows from Lemma 2. The independence of |α| and |β| follows from the independence of U and V combined with the fact that |α| does not depend on U and |β| does not depend on V.

Remark 7
It is important to emphasize that the case F = R in Proposition 3 not only requires A, B, D_A, D_B to be real but also the eigenvalue λ_i to be real.
Remark 8 An analysis similar to the one above was performed in [17, Proposition 6.5] and [16, Section 4.1] for unstructured perturbations. That analysis also starts from the relation (13) and then studies the distribution of |α| · |β|. One significant difference in our case is that α and β are independent due to the structure of the perturbation in (10), while this does not hold for the setting considered in [17] and [16].

Statistics of |α| • |β|
As explained above, we aim at showing that the random variable |α||β| is unlikely to become tiny. We start by computing the expected value of |α||β|. Since |α| and |β| are independent random variables, we have E[|α||β|] = E[|α|] E[|β|]. The factors can be computed using the following result from [17, Lemma A.1].

Lemma 3 Let Z ∼ Beta(a, b) with a, b > 0. Then E[√Z] = Γ(a + 1/2)Γ(a + b) / (Γ(a)Γ(a + b + 1/2)).
To simplify the presentation, we will from now on denote the scalars α and β, in the setting of Proposition 3, as α_F and β_F with F ∈ {R, C}. Combining Proposition 3 and Lemma 3 gives the following result.

Lemma 4 Under the assumptions of Proposition 3, the following holds:

E[|α_R||β_R|] = (Γ((k + 1)/2) / (√π Γ(k/2 + 1)))²,  E[|α_C||β_C|] = (√π Γ(k + 1) / (2 Γ(k + 3/2)))².

Table 1 contains the computed expected values for different k, using the results of Lemma 4.
Using the well-known bounds √x ≤ Γ(x + 1)/Γ(x + 1/2) ≤ √(x + 1/2) for x > 0, the expected values of |α||β| from Lemma 4 can be bounded as

2/(π(k + 1)) ≤ E[|α_R||β_R|] ≤ 2/(πk),  π/(4(k + 1)) ≤ E[|α_C||β_C|] ≤ π/(4(k + 1/2)).    (16)

One therefore expects that γ_i underestimates the true values γ(λ_i) by roughly a factor 1/k. For real matrices A and B, one would prefer to use real matrices in the perturbation (10) as well, because eigenvalue computations are performed more efficiently in real arithmetic. As we can see from Table 1 as well as from the bounds (16), the expected value of |α||β| for real perturbations is only slightly smaller than the one for complex perturbations. However, as we will see in the following, the left tail of |α||β| is less favorable in the real case and it appears to be safer to use complex modifications of the original pencil even for real data.
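A quick Monte Carlo check of these expected values (sample sizes are arbitrary): by Proposition 3, |α| and |β| are distributed like the modulus of one coordinate of a vector drawn uniformly from the unit sphere in F^{k+1}. For k = 2 the estimates should land near 0.2844 (complex) and 0.25 (real), consistent with the empirical averages reported in the numerical examples below.

```python
import numpy as np

rng = np.random.default_rng(0)
k, N = 2, 200000

def modulus_last_entry(complex_field):
    """|last entry| of a vector uniform on the unit sphere in F^(k+1)."""
    z = rng.standard_normal((N, k + 1))
    if complex_field:
        z = z + 1j * rng.standard_normal((N, k + 1))
    return np.abs(z[:, -1]) / np.linalg.norm(z, axis=1)

est_C = np.mean(modulus_last_entry(True) * modulus_last_entry(True))
est_R = np.mean(modulus_last_entry(False) * modulus_last_entry(False))
print(est_C, est_R)   # ≈ 0.2844 and 0.25 for k = 2
```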

Bounds on left tail of |α| • |β|
We will start with a simple tail bound that extends a result from [16, Proposition 4.3] to the complex case.
Corollary 3 Under the assumptions of Proposition 3, we have

P(|α_R||β_R| < t) ≤ 2√(2k/π) √t  and  P(|α_C||β_C| < t) ≤ 2kt

for every 0 ≤ t ≤ 1.
Although we know from Proposition 3 that |α| and |β| are independent random variables, this is not used in Corollary 3. The following proposition, where we exploit the fact that |α| and |β| are independent, significantly improves the results of Corollary 3.
Proposition 4 Under the assumptions of Proposition 3, it holds for every 0 < t ≤ 1 that

P(|α_C||β_C| < t) ≤ k²t²(1 − 2 ln t)    (19)

and

P(|α_R||β_R| < t) ≤ √(2k/π) t + (2k/π) t ln(1/t).    (20)

Proof To derive the bound (19) we first observe that, since |α_C| and |β_C| are independent by Proposition 3, it holds that

P(|α_C||β_C| < t) = E[ P(|α_C| < t/|β_C|) ].

Combining P(|α_C| < s) = 1 − (1 − s²)^k ≤ ks² for 0 ≤ s ≤ 1 with the density of |β_C| gives (19) after a short computation. For real perturbations we proceed in the same way, now bounding the density of |α_R| from above by √(2k/π), which gives (20).
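The gap between the real and complex left tails is easy to observe empirically. The Monte Carlo sketch below (parameters of our own choosing) again samples |α| and |β| as the modulus of the last entry of a uniform unit vector in F^{k+1}, per Proposition 3:

```python
import numpy as np

rng = np.random.default_rng(0)
k, N, t = 2, 200000, 1e-2

def product_sample(complex_field):
    """Samples of |alpha| * |beta| with independent factors."""
    def coord():
        z = rng.standard_normal((N, k + 1))
        if complex_field:
            z = z + 1j * rng.standard_normal((N, k + 1))
        return np.abs(z[:, -1]) / np.linalg.norm(z, axis=1)
    return coord() * coord()

p_real = np.mean(product_sample(False) < t)
p_complex = np.mean(product_sample(True) < t)
print(p_real, p_complex)   # the real left tail is heavier by an order of magnitude
```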
Proposition 4 indicates that, even for a real singular pencil, it would be better to use complex perturbations as they give a much smaller probability of obtaining tiny |α||β|. This is confirmed by the following lower bound for P(|α_R||β_R| < t).

Lemma 5 Under the assumptions of Proposition 3, it holds that

P(|α_R||β_R| < t) ≥ 2t/B(1/2, k/2) + O(t³)    (21)

as t ↓ 0, where B denotes the beta function. Proof It is easy to see that P(|α_R||β_R| < t) ≥ P(|α_R| < t). From the Taylor expansion of P(|α_R| < t) around t = 0 we get P(|α_R| < t) = 2t/B(1/2, k/2) + O(t³), and the bound follows.
From Lemma 5 and Proposition 4 we see that for sufficiently small t > 0 the lower bound (21) for P(|α_R||β_R| < t) is much larger than the upper bound (19) for P(|α_C||β_C| < t).

Summary Together with the discussion in the beginning of this section, Proposition 4 allows us to compare the (reciprocal) eigenvalue sensitivity γ(λ_i) of the original pencil with the corresponding quantity γ_i for any of the three modified pencils used in Algorithms 1-3. For complex perturbations, we obtain that

γ_i ≥ t γ(λ_i)

holds with probability at least 1 − k²t²(1 − 2 ln t) for any t > 0.

Numerical examples
All numerical examples were obtained with Matlab 2021b [18]. We used the implementations of Algorithms 1-3 available as the routine singgep in MultiParEig [24].
Example 1 We consider an 8 × 8 singular matrix pencil which is constructed so that the KCF contains blocks of all four possible types. It holds that nrank(A, B) = 6, and the KCF has blocks J_1(1/2), J_1(1/3), N_1, L_0, L_1, L_0^T, and L_2^T. Algorithm 1 was applied 10^5 times using random real and complex modifications; we compared the computed values of γ_1 to the exact value of γ(λ_1) for the eigenvalue λ_1 = 1/3. The histograms of γ_1/γ(λ_1) for real and complex modifications together with the corresponding probability density function (pdf) from Appendix A are presented in Figure 2. The histograms appear to be consistent with the pdfs. The computed average values of |α||β| are 0.28437 for complex and 0.24934 for real modifications, which are both close to the theoretically predicted values for k = 2 in Table 3. We note that we get almost identical results for the other eigenvalue λ_2 = 1/2.
Example 2 Algorithm 2 was applied 10^5 times using random real and complex projections. We compared the computed values of γ_1 to the exact value of γ(λ_1) for the real eigenvalue λ_1. In both cases the method successfully computed all finite eigenvalues. The histograms of γ_1/γ(λ_1), together with the corresponding pdfs from Appendix A, are presented in Figure 3. As in the previous example, both histograms appear to be consistent with the pdfs. The computed average values of |α||β| are 0.16413 for complex and 0.14124 for real projections; both are again very close to the values in Table 3 for k = 4. Let us note that we get essentially identical results if we exchange the methods and use Algorithm 2 or 3 in Example 1 and Algorithm 1 or 3 in Example 2, as expected from Corollary 1. However, if we apply Algorithm 1 or 3 with real modifications, or Algorithm 2 with real projections, to the real pencil ∆_1 − λ∆_0 and consider any of the 8 complex eigenvalues λ_2, ..., λ_9, we get results that cannot be explained by Proposition 3; they behave like the results in the next example.
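The projection approach of Algorithm 2 can also be sketched briefly. The sketch below uses the same illustrative 3 × 3 singular pencil as before (not the pencil ∆_1 − λ∆_0 of this example, whose definition is not reproduced here): orthonormal bases of the orthogonal complements of random k-dimensional subspaces compress the pencil to the (n − k) × (n − k) pencil A_22 − λB_22 = U_⊥^* (A − λB) V_⊥ of Proposition 1, whose eigenvalues here are exactly the true eigenvalues.

```python
import numpy as np
from scipy.linalg import eig, null_space

rng = np.random.default_rng(2)

# Illustrative 3x3 singular pencil with true eigenvalues 1/2 and 1/3
# (KCF = J_1(1/2) + J_1(1/3) + L_0 + L_0^T), normal rank 2, so k = 1.
A = np.diag([0.5, 1.0 / 3.0, 0.0])
B = np.diag([1.0, 1.0, 0.0])
n, k = 3, 1

# Random n x k matrices U, V; U_perp, V_perp span their orthogonal complements.
U = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
V = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
U_perp = null_space(U.conj().T)   # n x (n-k), orthonormal columns
V_perp = null_space(V.conj().T)

# Projected (n-k) x (n-k) pencil A22 - lambda*B22 = U_perp^* (A - lambda*B) V_perp.
A22 = U_perp.conj().T @ A @ V_perp
B22 = U_perp.conj().T @ B @ V_perp
lam = eig(A22, B22, right=False)
print(np.sort(lam.real))
```

Since this toy pencil has no left or right minimal indices larger than zero, the projected pencil has no random eigenvalues and returns exactly 1/3 and 1/2.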

Example 3
In this example we study the effect of real projections on a complex eigenvalue of a real pencil, a situation that is not covered by Proposition 3. By considering two equivalent singular pencils, we will show that the distribution function of |α_R||β_R|, when real projections are applied to complex eigenvalues of real pencils, depends on more than just the difference k between the size of the pencil and its normal rank, which is the key quantity in Proposition 3.
We take block diagonal matrices

where e_1, e_4, e_5 are standard basis vectors in R^5, and

The 10 × 10 pencil A_0 − λB_0 has nrank(A_0, B_0) = 6 and its KCF has blocks

We multiply the pencil on both sides by real matrices Q_i and Z_i, whose entries are independent random variables uniformly distributed on (0, 1), for i = 1, 2, to get two equivalent singular pencils A_i − λB_i of the same size, normal rank, and eigenvalues.
Algorithm 2 was applied 10^5 times using random real projections to each of the pencils A_1 − λB_1 and A_2 − λB_2, and the computed value of γ_1 was compared to the exact value of γ(λ_1) for the complex eigenvalue λ_1 = 1 + i. The histograms of γ_1/γ(λ_1), together with the theoretical distribution functions from Appendix A, are presented in Figure 4. We see that, although the pencils A_1 − λB_1 and A_2 − λB_2 are equivalent, the histograms are different. The histograms also look different from the case where real projections are used for a real eigenvalue with k = 4 (Figure 3 right) or complex projections are used (Figure 2 left). While the shape of the left histogram in Figure 4 resembles the shape expected for complex perturbations, the shape of the right histogram is more in line with the distribution function for real perturbations. The computed average values of |α_R||β_R| are also different: we get 0.12195 for A_1 − λB_1 and 0.13623 for A_2 − λB_2; both values are completely different from the values in Table 3 for k = 4.
Since we get histograms of different shapes for two equivalent real pencils, this shows that the distribution function of |α_R||β_R|, when real perturbations are applied to complex eigenvalues of real pencils, depends on more than just the structure of the KCF.
We remark that the histograms (not shown) for the real eigenvalue λ_3 = 2 look, for both pencils, identical to the right picture in Figure 3 from Example 2, where k = 4 as well. This agrees with Remark 3: even if some eigenvalues of a real pencil are complex, this does not affect the behavior of real perturbations applied to real eigenvalues. The computed average values of |α_R||β_R| for the real eigenvalue λ_3 are 0.14077 for A_1 − λB_1 and 0.14110 for A_2 − λB_2; both agree with the value in Table 3 for k = 4.

Fig. 4 Example 3: Histograms of γ_1/γ(λ_1) and theoretical distribution functions (real and complex) for k = 4 using real perturbations for a complex eigenvalue of a real pencil, for A_1 − λB_1 (left) and A_2 − λB_2 (right).
The last numerical example shows that the case of real perturbations of a complex eigenvalue of a real singular pencil is not covered by Proposition 3. Indeed, this case was not properly taken into account in [17], and it remains an open problem to derive an expression, or a tight simple bound, for the δ-weak condition number of a complex eigenvalue under real perturbations.

Conclusions
We have analyzed three randomization-based numerical methods for computing the finite eigenvalues of a singular matrix pencil. All three algorithms use random matrices to transform the original singular pencil into a regular one in such a way that the finite eigenvalues remain intact. Our analysis confirms that, with high probability, these methods behave in a numerically reliable way.
We also obtained sharp left tail bounds on the distribution of a product of two independent random variables, each following the generalized beta distribution of the first kind or the Kumaraswamy distribution.
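The Kumaraswamy distribution has a closed-form inverse CDF, which makes such tail statements easy to probe numerically. The following hedged sketch (the parameters a and b are arbitrary choices of ours) checks only the inverse-CDF sampler against the exact mean and reports the empirical left tail of the product of two independent samples; it does not reproduce the specific bounds derived in this work.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)
a, b, samples = 2.0, 3.0, 200_000

def kumaraswamy(size):
    # Inverse-CDF sampling: the Kumaraswamy CDF is F(x) = 1 - (1 - x**a)**b on (0, 1).
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

x, y = kumaraswamy(samples), kumaraswamy(samples)

# Sanity check of the sampler: the exact mean is b * B(1 + 1/a, b).
exact_mean = b * gamma(1.0 + 1.0 / a) * gamma(b) / gamma(1.0 + 1.0 / a + b)
print("mean error:", abs(x.mean() - exact_mean))

# Empirical left tail of the product X*Y for a few thresholds t.
for t in (0.01, 0.05, 0.1):
    print(t, np.mean(x * y < t))
```

A similar experiment works for the generalized beta distribution of the first kind, e.g. by transforming a Beta sample, so empirical left tails of both products can be compared against theoretical bounds.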
where p_{k-1} and q_{k-1} are polynomials of degree k − 1. Some explicit expressions for small k are

where p_{m-1} and q_{m-1} are polynomials of degree m − 1. In addition to the fact that this looks similar to (24), this conjecture is also supported by the following expressions for small m:

counts the number of right singular blocks. The minimal reducing subspace M_RS(A, B)

Summary 5
are matrices of rank k, and τ ∈ C is nonzero. Note that k is the smallest rank of such a modification that turns a singular pencil into a regular one. The following result [11, Summary 4.7] (see also [12, Remark 3.5]) characterizes the dependence of the eigenvalues and eigenvectors of the modified pencil (10) on τ, D_A, D_B, U, and V^*.

Let A − λB be an n × n singular pencil of normal rank n − k with left minimal indices n_1, ..., n_k and right minimal indices m_1, ..., m_k. Let N = n_1 + ··· + n_k and M = m_1 + ··· + m_k. Then the regular part of A − λB has size r := n − N − M − k and, generically (with respect to the entries of D_A, D_B, U, V^*), the modified pencil defined in (10) is regular and its eigenvalues are classified into the following four disjoint groups:

1. True eigenvalues: There are r eigenvalues, counted with their multiplicities, that coincide with the eigenvalues of the original pencil A − λB with the same partial multiplicities. The right/left eigenvectors x, y of the modified pencil belonging to these eigenvalues satisfy the orthogonality relations V^* x = 0 and U^* y = 0.
2. Prescribed eigenvalues: There are k eigenvalues such that V^* x ≠ 0 for all right eigenvectors x and U^* y ≠ 0 for all left eigenvectors y. These are the k eigenvalues of D_A − λD_B.
3. Random right eigenvalues: There are M eigenvalues, which are all simple
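The orthogonality relations of group 1 give a practical test for separating true eigenvalues from the other groups. A small Python sketch (using an illustrative 3 × 3 pencil of our own and a modification in the spirit of (10); the tolerance is also our choice): eigenvectors of the modified pencil for which V^* x vanishes belong to true eigenvalues, while the prescribed eigenvalue of D_A − λD_B has V^* x ≠ 0.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(4)

# Illustrative 3x3 singular pencil (our construction, not from the paper):
# KCF = J_1(1/2) + J_1(1/3) + L_0 + L_0^T, so nrank = 2 and k = 1.
A = np.diag([0.5, 1.0 / 3.0, 0.0])
B = np.diag([1.0, 1.0, 0.0])
n, k, tau = 3, 1, 1.0

def crandn(*shape):
    # Complex Gaussian samples.
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

U, _ = np.linalg.qr(crandn(n, k))
V, _ = np.linalg.qr(crandn(n, k))
DA, DB = np.diag(crandn(k)), np.diag(crandn(k))

# Modified pencil in the spirit of (10); columns of X are right eigenvectors.
lam, X = eig(A + tau * U @ DA @ V.conj().T, B + tau * U @ DB @ V.conj().T)

# True eigenvalues (group 1) satisfy V^* x = 0 up to roundoff; the prescribed
# eigenvalue DA/DB (group 2) has V^* x != 0.
for lam_i, xv in zip(lam, X.T):
    print(lam_i, np.linalg.norm(V.conj().T @ xv))
```

Here the two eigenvalues with vanishing V^* x are the true eigenvalues 1/2 and 1/3, and the remaining one is the prescribed eigenvalue of D_A − λD_B.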

Proposition 1 ([12, Proposition 4.1 and Theorem 4.2]) Let A − λB be a complex n × n singular pencil of normal rank n − k. Then, under the assumptions of Summary 5, the (n − k) × (n − k) pencil A_22 − λB_22 := U_⊥^* (A − λB) V_⊥ is generically regular, and the eigenvalues of A_22 − λB_22 are precisely: a) the random eigenvalues of (10) (groups 3 and 4 in Summary 5); b) the true eigenvalues of A − λB.

Algorithm 3: Eigenvalues of a singular pencil by augmentation. Input and output: see Algorithm 1.

where S_A, S_B, T_A, and T_B are k × k diagonal matrices and U, V are n × k matrices.

Proposition 2 ([12, Proposition 5.1]) Let A − λB be an n × n singular pencil of normal rank n − k such that all its eigenvalues are semisimple. Assume that the diagonal k × k pencils S_A − λS_B and T_A − λT_B are regular and that their 2k eigenvalues are pairwise distinct. Furthermore, let U, V ∈ C^{n×k} have orthonormal columns such that the augmented pencil (12) is regular. Then the pencil (12) has the following eigenvalues: a) 2k prescribed eigenvalues, which are precisely the eigenvalues of S_A − λS_B and T_A − λT_B; b) the random eigenvalues of (10) (groups 3 and 4 in Summary 5) with the same U and V and with D_A = T_A S_A and D_B = T_B S_B; c) the true eigenvalues of A − λB. The algorithm based on this proposition is given in Algorithm 3.

1: Select random n × k matrices U and V with orthonormal columns.
2: Select random diagonal k × k matrices T_A, T_B, S_A, and S_B.
3: Compute the eigenvalues λ_i, i = 1, ..., n + k, and normalized right and left eigenvectors [x_i1^T x_i2^T]^T and [y_i1^T y_i2^T]^T of the augmented pencil (12).
4: Compute
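Proposition 2 suggests a direct implementation of these steps. Since the block form of the augmented pencil (12) is not reproduced above, the Python sketch below assumes the form [[A − λB, U(S_A − λS_B)]; [(T_A − λT_B)V^*, 0]], which is consistent with the eigenvalue classification in Proposition 2; the test pencil is again our own illustrative 3 × 3 construction.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(5)

# Illustrative 3x3 singular pencil with true eigenvalues 1/2 and 1/3, k = 1.
A = np.diag([0.5, 1.0 / 3.0, 0.0])
B = np.diag([1.0, 1.0, 0.0])
n, k = 3, 1

U, _ = np.linalg.qr(rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k)))
V, _ = np.linalg.qr(rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k)))
# Diagonal k x k pencils with prescribed eigenvalues 2 (= SA/SB) and 3 (= TA/TB).
SA, SB = 2.0 * np.eye(k), np.eye(k)
TA, TB = 3.0 * np.eye(k), np.eye(k)

# Assumed block form of the augmented (n+k) x (n+k) pencil (12).
Aaug = np.block([[A, U @ SA], [TA @ V.conj().T, np.zeros((k, k))]])
Baug = np.block([[B, U @ SB], [TB @ V.conj().T, np.zeros((k, k))]])

lam = eig(Aaug, Baug, right=False)
# Expected: true eigenvalues 1/2, 1/3 plus prescribed eigenvalues 2 and 3.
print(np.sort(lam.real))
```

For this toy pencil there are no random eigenvalues, so the augmented pencil returns exactly the two true and the 2k prescribed eigenvalues.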

Figure 1 compares the obtained bounds to P(|α_C||β_C| < t) and P(|α_R||β_R| < t), computed for k = 4 and k = 8 using the probability density functions from Appendix A. The solid black line corresponds to P(|α_C||β_C| < t). The blue dotted and dashed lines are the refined bound (19) from Proposition 4 and the simple upper bound (17) from Corollary 3, respectively. The solid red line corresponds to P(|α_R||β_R| < t); the corresponding bounds are the magenta curves.

Fig. 3 Example 2: Histograms of γ_1/γ(λ_1) and pdfs using complex (left) and real (right) projections.
1: Select random n × k matrices U and V with orthonormal columns.
2: Select random diagonal k × k matrices D_A and D_B.
3: Compute the eigenvalues λ_i, i = 1, ..., n, and right and left normalized eigenvectors x_i and y_i of the perturbed pencil (10).
4: