Local law for random Gram matrices

We prove a local law in the bulk of the spectrum for random Gram matrices $XX^*$, a generalization of sample covariance matrices, where $X$ is a large matrix with independent, centered entries with arbitrary variances. The limiting eigenvalue density that generalizes the Marchenko-Pastur law is determined by solving a system of nonlinear equations. Our entrywise and averaged local laws are on the optimal scale with the optimal error bounds. They hold both in the square case (hard edge) and in the properly rectangular case (soft edge). In the latter case we also establish a macroscopic gap away from zero in the spectrum of $XX^*$.


Introduction
Random matrices were introduced in pioneering works by Wishart [43] and Wigner [42] for applications in mathematical statistics and nuclear physics, respectively. Wigner argued that the energy level statistics of large atomic nuclei could be described by the eigenvalues of a large Wigner matrix, i.e., a hermitian matrix $H = (h_{ij})_{i,j=1}^N$ with centered, identically distributed and independent entries (up to the symmetry constraint $H = H^*$). He proved that the empirical spectral measure (or density of states) converges to the semicircle law as the dimension of the matrix $N$ goes to infinity. Moreover, he postulated that the statistics of the gaps between consecutive eigenvalues depend only on the symmetry type of the matrix and are independent of the distribution of the entries in the large $N$ limit. The precise formulation of this phenomenon is called the Wigner-Dyson-Mehta universality conjecture, see [33].
Historically, the second main class of random matrices is that of sample covariance matrices. These are of the form $XX^*$, where $X$ is a $p \times n$ matrix with centered, identically distributed independent entries. In the statistics context, its columns contain $n$ samples of a $p$-dimensional data vector. In the regime of high dimensional data, i.e., in the limit when $n, p \to \infty$ in such a way that the ratio $p/n$ converges to a constant, the empirical spectral measure of $XX^*$ was explicitly identified by Marchenko and Pastur [32].

Random matrices of the form $XX^*$ also appear in the theory of wireless communication; the spectral density of these matrices is used to compute the transmission capacity of a Multiple Input Multiple Output (MIMO) channel. This fundamental connection between random matrix theory and wireless communication was established by Telatar [39] and Foschini [23,22] (see also [40] for a review). In this model, the element $x_{ij}$ of the channel matrix $X$ represents the transmission coefficient from the $j$-th transmitter to the $i$-th receiver antenna. The received signal is given by the linear relation $y = Xs + w$, where $s$ is the input signal and $w$ is a Gaussian noise with variance $\sigma^2$. In the case of i.i.d. Gaussian input signals, the channel capacity is given by the log-det formula
$$\mathrm{Cap} = \mathbb{E}\, \log \det\bigl( I + \sigma^{-2} XX^* \bigr). \qquad (1.1)$$

The assumption in these models that the matrix elements of $H$ or $X$ have identical distribution is a simplification that does not hold in many applications. In Wigner's model, the matrix elements $h_{ij}$ represent random quantum transition rates between physical states labelled by $i$ and $j$, and their distribution may depend on these states. Analogously, the transmission coefficients in $X$ may have different distributions. This leads to natural generalizations of both classes of random matrices by allowing for general variances, $s_{ij} := \mathbb{E}|h_{ij}|^2$ and $s_{ij} := \mathbb{E}|x_{ij}|^2$, respectively. We will still assume the independence of the matrix elements and their zero expectation.
Under mild conditions on the variance matrix S = (s ij ), the limiting spectral measure depends only on the second moments, i.e., on S, and otherwise it is independent of the fine details of the distributions of the matrix elements. However, in general there is no explicit formula for the limiting spectral measure. In fact, the only known way to find it in the general case is to solve a system of nonlinear deterministic equations, known as the Dyson (or Schwinger-Dyson) equation in this context, see [8,41,24,30].
For the generalization of Wigner's model, the Dyson equation is a system of equations of the form
$$-\frac{1}{m_i(z)} = z + \sum_{j=1}^{N} s_{ij} m_j(z), \qquad i = 1, \dots, N, \quad z \in \mathbb{H}, \qquad (1.2)$$
where $z$ is a complex parameter in the upper half plane $\mathbb{H} := \{z \in \mathbb{C} : \operatorname{Im} z > 0\}$. The average $m(z) = \frac{1}{N} \sum_i m_i(z)$ in the large $N$ limit gives the Stieltjes transform of the limiting spectral density, which can then be computed by inverting the Stieltjes transform. In fact, $m_i(z)$ approximates the individual diagonal matrix elements $G_{ii}(z)$ of the resolvent $G(z) = (H - z)^{-1}$, thus the solution of (1.2) gives much more information on $H$ than merely the spectral density. In the case when $S$ is a stochastic matrix, i.e., $\sum_j s_{ij} = 1$ for every $i$, the solution $m_i(z)$ to (1.2) is independent of $i$ and the density is still the semicircle law. The corresponding generalized Wigner matrix was introduced in [18] and the optimal local law was proven in [19,20]. For the general case, a detailed analysis of (1.2) and the shapes of the possible density profiles was given in [2,3], with the optimal local law in [4].
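As an illustration (not part of the paper's arguments), (1.2) can be solved numerically by plain fixed-point iteration; for a stochastic variance matrix the solution collapses to the Stieltjes transform of the semicircle law. The function name and parameters below are ours:

```python
import numpy as np

def solve_qve(S, z, tol=1e-10, max_iter=10000):
    """Plain fixed-point iteration for the Dyson equation (1.2):
    -1/m_i(z) = z + sum_j S_ij m_j(z), with Im z > 0.
    Each step maps the upper half-plane to itself."""
    N = S.shape[0]
    m = np.full(N, -1.0 / z, dtype=complex)   # initial guess: resolvent of 0
    for _ in range(max_iter):
        m_new = -1.0 / (z + S @ m)
        if np.max(np.abs(m_new - m)) < tol:
            break
        m = m_new
    return m_new

# Stochastic S (row sums 1): the solution is i-independent and equals the
# semicircle Stieltjes transform m_sc(z) = (-z + sqrt(z^2 - 4))/2.
N = 200
S = np.full((N, N), 1.0 / N)
z = 0.3 + 1.0j
m = solve_qve(S, z)
m_sc = (-z + np.sqrt(z * z - 4.0)) / 2.0      # branch with Im m_sc > 0 here
print(np.max(np.abs(m - m_sc)))               # agreement to solver tolerance
```

The iteration converges because each step is a self-map of the upper half-plane; no damping is needed at this distance from the real axis.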
Considering the $XX^*$ model with a general variance matrix for $X$, we note that in statistical applications the entries of $X$ within the same row still have the same variance, i.e., $s_{ik} = s_{il}$ for all $i$ and all $k, l$. However, beyond statistics, for example in modeling the capacity of MIMO channels, applications require analyzing the spectrum of $XX^*$ with a completely general variance profile for $X$ [28,13]. These are called random Gram matrices, see e.g. [24,26]. The corresponding Dyson equation is (see [24,13,40] and references therein)
$$\frac{1}{m_i(\zeta)} = -\zeta + \sum_{k=1}^{n} \frac{s_{ik}}{1 + \sum_{j=1}^{p} s_{jk} m_j(\zeta)}, \qquad i = 1, \dots, p, \quad \zeta \in \mathbb{H}. \qquad (1.3)$$
We have $m_i(\zeta) \approx [(XX^* - \zeta)^{-1}]_{ii}$, and the average of the $m_i(\zeta)$ yields the Stieltjes transform of the spectral density exactly as in the case of Wigner-type ensembles. In fact, there is a direct link between these two models: Girko's symmetrization trick reduces (1.3) to studying (1.2) on $\mathbb{C}^N$ with $N = n + p$, where $S$ and $H$ are replaced by
$$\mathbf{S} = \begin{pmatrix} 0 & S \\ S^t & 0 \end{pmatrix}, \qquad \mathbf{H} = \begin{pmatrix} 0 & X \\ X^* & 0 \end{pmatrix}, \qquad (1.4)$$
respectively, and $z^2 = \zeta$.

The limiting spectral density, also called the global law, is typically the first question one asks about a random matrix ensemble. It can be strengthened by considering its local versions. In most cases, it is expected that the deterministic density computed via the Dyson equation accurately describes the eigenvalue density down to the smallest possible scale, which is slightly above the typical eigenvalue spacing (we choose the standard normalization such that the spacing in the bulk spectrum is of order $1/N$). This requires understanding the trace of the resolvent $G(z)$ at a spectral parameter very close to the real axis, down to the scales $\operatorname{Im} z \gg 1/N$. Additionally, entrywise local laws and isotropic local laws, i.e., controlling individual matrix elements $G_{ij}(z)$ and bilinear forms $\langle v, G(z) w \rangle$, carry important information on eigenvectors and allow for perturbation theory. Moreover, effective error bounds on the speed of convergence as $N$ goes to infinity are also of great interest.
Local laws have also played a crucial role in the recent proofs of the Wigner-Dyson-Mehta conjecture. The three-step approach, developed in a series of works by Erdős, Schlein, Yau and Yin [15,16] (see [17] for a review), was based on establishing the local law as the first step. Similar input was necessary in the alternative approach by Tao and Vu in [37,38].
In this paper, we establish the optimal local law for random Gram matrices with a general variance matrix $S$ in the bulk spectrum; edge analysis and local spectral universality are deferred to a forthcoming work. We show that the empirical spectral measure of $XX^*$ can be approximated by a deterministic measure $\nu$ on $\mathbb{R}$ with a continuous density away from zero and possibly a point mass at zero. The convergence holds locally down to the smallest possible scale and with an optimal speed of order $1/N$. In the special case when $X$ is a square matrix, $n = p$, the measure $\nu$ does not have a point mass but the density has an inverse square-root singularity at zero (called the hard edge case). In the soft edge case, $n \neq p$, the continuous part of $\nu$ is supported away from zero and $\nu$ has a point mass of size $1 - n/p$ at zero if $p > n$. All these features are well known for the classical Marchenko-Pastur setup, but in the general case we need to establish them without any explicit formula.
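The point mass of size $1 - n/p$ is simply the macroscopic kernel of $XX^*$: the $p \times p$ matrix $XX^*$ has rank at most $n$. A quick numerical illustration (i.i.d. entries and parameter choices are ours, for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 300, 200                        # properly rectangular case: p > n
X = rng.standard_normal((p, n)) / np.sqrt(n)
lam = np.linalg.eigvalsh(X @ X.T)      # eigenvalues of XX^*

# XX^* is p x p but has rank at most n, so at least p - n eigenvalues
# vanish; the limiting measure carries a point mass of size 1 - n/p at zero.
frac_zero = np.mean(lam < 1e-8)
print(frac_zero, 1 - n / p)
```

The remaining nonzero eigenvalues stay bounded away from zero, in line with the macroscopic gap established for the properly rectangular case.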
We now summarize some previous related results on Gram matrices. If each entry of $X$ has the same variance, local Marchenko-Pastur laws were first proved in [16,34] for the soft edge case, and in [12,10] for the hard edge case. The isotropic local law was given in [9]. Relaxing the assumption of identical variances to a doubly stochastic variance matrix of $X$, the optimal local Marchenko-Pastur law has been established in [1] for the hard edge case. Sample correlation matrices in the soft edge case were considered in [5].
Motivated by the linear model in multivariate statistics, and to depart from identical distributions, random matrices of the form $TZZ^*T^*$ have been extensively studied, where $T$ is a deterministic matrix and the entries of $Z$ are independent, centered and have unit variance. If $T$ is diagonal, then these are generalizations of sample covariance matrices as $TZZ^*T^* = XX^*$ and the elements of $X = TZ$ are also independent. With this definition, all entries within one row of $X$ have the same variance since $s_{ij} = \mathbb{E}|x_{ij}|^2 = (TT^*)_{ii}$, i.e., it is a special case of our random Gram matrix. In this case the Dyson system of equations (1.3) can be reduced to a single equation for the average $m(z)$, i.e., the limiting density can still be obtained from a scalar self-consistent equation. This is even true for matrices of the form $XX^*$ with $X = TZ\widetilde{T}$, where both $T$ and $\widetilde{T}$ are deterministic, investigated for example in [14]. For general $T$ the elements of $X = TZ$ are not independent, so general sample covariance matrices are typically not Gram matrices. The global law for $TZZ^*T^*$ was proven by Silverstein and Bai in [36]. Knowles and Yin showed optimal local laws for a general deterministic $T$ in [31].
Finally, we review some existing results on random Gram matrices with a general variance matrix $S$, when (1.3) cannot be reduced to a simpler scalar equation. The global law, even with nonzero expectation of $X$, was determined by Girko [24] via (1.3); he also established the existence and uniqueness of the solution to (1.3). More recently, motivated by the theory of wireless communication, Hachem, Loubaton and Najim initiated a rigorous study of the asymptotic behaviour of the channel capacity (1.1) with a general variance matrix $S$ [27,28]. This required establishing the global law under more general conditions than Girko's; see also [26] for a review from the point of view of applications. Hachem et al. have also established Gaussian fluctuations of the channel capacity (1.1) around a deterministic limit in [29] for the centered case. For a nonzero expectation of $X$, a similar result was obtained in [25], where $S$ was restricted to a product form. Very recently in [7], a special $k$-fold clustered matrix $XX^*$ was considered, where the samples come from $k$ different clusters with possibly different distributions. The Dyson equation in this case reduces to a system of $k$ equations. In an information-plus-noise model of the form $(R + X)(R + X)^*$, the effect of adding a noise matrix to $X$ with identically distributed entries was studied in [11], assuming the limiting density of $RR^*$ is known.
In all previous works concerning general Gram matrices, the spectral parameter $z$ was fixed; in particular, $\operatorname{Im} z$ had a positive lower bound independent of the dimension of the matrix. Technically, this positive imaginary part provided the necessary contraction factor in the fixed point argument that led to the existence, uniqueness and stability of the solution to the Dyson equation (1.3). For local laws down to the optimal scales $\operatorname{Im} z \gg 1/N$, the regularizing effect of $\operatorname{Im} z$ is too weak. In the bulk spectrum, $\operatorname{Im} z$ is effectively replaced with the local density, i.e., with the average imaginary part $\operatorname{Im} m(z)$. The main difficulty with this heuristic is its apparent circularity: the yet unknown solution itself is necessary for regularizing the equation. This problem is present in all existing proofs of any local law. The circularity is broken by separating the analysis into three parts. First, we analyze the behavior of the solution $m(z)$ as $\operatorname{Im} z \to 0$. Second, we show that the solution is stable under small perturbations of the equation, where the stability is provided by $\operatorname{Im} m(E + i0)$ for any energy $E$ in the bulk spectrum. Finally, we show that the diagonal elements of the resolvent of the random matrix satisfy a perturbed version of (1.3), where the perturbation is controlled by large deviation estimates. Stability then yields the local law.
While this program could be carried out directly for the Gram matrix and its Dyson equation (1.3), the argument becomes much shorter if we use Girko's linearization (1.4) to reduce the problem to a Wigner-type matrix and rely on the comprehensive analysis of (1.2) from [2,3] and the local law from [4]. There are two major obstacles to this naive approach.
First, the results of [2,3] are not applicable, as $S$ does not satisfy the uniform primitivity assumption imposed in these papers (recall that a matrix $A$ is primitive if there is a positive integer $L$ such that all entries of $A^L$ are strictly positive). This property is crucial for many proofs in [2,3], but $S$ in (1.4) is a typical example of a nonprimitive matrix. This is not a mere technical subtlety; in fact, in the current paper the stability estimates for (1.2) require a completely different treatment, culminating in the key technical bound, the Rotation-Inversion lemma (see Lemma 3.6 later).
Second, Girko's transformation is singular around $z \approx 0$ since it involves the change of spectral parameter $z^2 = \zeta$. This accounts for the singular behavior near zero in the limiting density for Gram matrices, while the corresponding Wigner-type matrix has no singularity at zero. Thus, we need to perform a more accurate analysis near zero. If $p \neq n$, the soft edge case, we derive and analyze two new equations for the first coefficients in the expansion of $m$ around zero. The solutions to these new equations describe the point mass at zero and provide information about the gap above zero in the support of the approximating measure. In the hard edge case, $n = p$, an additional symmetry allows us to exclude a point mass at zero.

Acknowledgement: The authors thank Zhigang Bao for helpful discussions.
Notation For vectors $v, w \in \mathbb{C}^l$, product and absolute value are defined componentwise, i.e., $(vw)_i := v_i w_i$ and $|v| := (|v_i|)_{i=1}^l$; moreover, $\|w\|_\infty := \max_{i=1,\dots,l} |w_i|$ and $\|v\|_1 := \langle |v| \rangle$. Note that $\langle w \rangle = \langle 1, w \rangle$, where we used the convention that $1$ also denotes the vector $(1, \dots, 1) \in \mathbb{C}^l$. For a matrix $A \in \mathbb{C}^{l \times l}$, we use the short notations $\|A\|_\infty := \|A\|_{\infty \to \infty}$ and $\|A\|_2 := \|A\|_{2 \to 2}$ if the domain and the target are equipped with the same norm, whereas we use $\|A\|_{2 \to \infty}$ to denote the matrix norm of $A$ when it is understood as a map $(\mathbb{C}^l, \|\cdot\|_2) \to (\mathbb{C}^l, \|\cdot\|_\infty)$.

Main results
Let $X = (x_{ik})_{i,k}$ be a $p \times n$ matrix with independent, centered entries and variance matrix $S = (s_{ik})_{i,k}$, i.e., $\mathbb{E}\, x_{ik} = 0$ and $s_{ik} = \mathbb{E}|x_{ik}|^2$.

(B) There are $L_1, L_2 \in \mathbb{N}$ and $\psi_1, \psi_2 > 0$ controlling a lower bound on the entries of powers of $S$.

(D) The dimensions of $X$ are comparable with each other, i.e., there are constants $r_1, r_2 > 0$ such that $r_1 \le p/n \le r_2$.

In the following, we will assume that $s_*$, $L_1$, $L_2$, $\psi_1$, $\psi_2$, $r_1$, $r_2$ and the sequence $(\mu_m)_m$ are fixed constants which, together with some constants introduced later, we call model parameters. The constants in all our estimates will depend on the model parameters without further notice. We will use the notation $f \lesssim g$ if there is a constant $c > 0$, depending on the model parameters only, such that $f \le c g$; its counterpart $f \gtrsim g$ if $g \lesssim f$; and $f \sim g$ if both $f \lesssim g$ and $f \gtrsim g$. The model parameters will be kept fixed, whereas the parameters $p$ and $n$ are large numbers which will eventually be sent to infinity.
We start with a theorem about the deterministic density.
Theorem 2.1. (i) The system of equations
$$\frac{1}{m_i(\zeta)} = -\zeta + \sum_{k=1}^{n} \frac{s_{ik}}{1 + \sum_{j=1}^{p} s_{jk} m_j(\zeta)}, \qquad i = 1, \dots, p, \qquad (2.1)$$
has a unique solution $m = (m_1, \dots, m_p) : \mathbb{H} \to \mathbb{C}^p$ such that $\operatorname{Im} m_i(\zeta) > 0$ for all $\zeta \in \mathbb{H}$. (ii) Moreover, there is a probability measure $\nu$ on $\mathbb{R}$ whose support is contained in $[0, 4s_*]$ such that
$$\frac{1}{p} \sum_{i=1}^{p} m_i(\zeta) = \int_{\mathbb{R}} \frac{\nu(\mathrm{d}\omega)}{\omega - \zeta}$$
for all $\zeta \in \mathbb{H}$.
Part (i) of this theorem has already been proved in [28] and we will see that it also follows directly from [2,3]. We included this part only for completeness. Part (ii) is a new result.
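As a sanity check on (2.1), one can solve it numerically and compare with the classical Marchenko-Pastur law for a flat variance profile. The sketch below (function name and parameters ours) uses plain fixed-point iteration:

```python
import numpy as np

def solve_gram_dyson(S, zeta, tol=1e-10, max_iter=10000):
    """Plain fixed-point iteration for (2.1):
    1/m_i(zeta) = -zeta + sum_k S_ik / (1 + sum_j S_jk m_j(zeta)),
    for a p x n variance matrix S and Im zeta > 0."""
    p, n = S.shape
    m = np.full(p, -1.0 / zeta, dtype=complex)
    for _ in range(max_iter):
        m_new = 1.0 / (-zeta + S @ (1.0 / (1.0 + S.T @ m)))
        if np.max(np.abs(m_new - m)) < tol:
            break
        m = m_new
    return m_new

# Flat profile s_ik = 1/n with p = n: averaging (2.1) gives
# zeta*m^2 + zeta*m + 1 = 0, whose upper half-plane root is the
# Stieltjes transform of the Marchenko-Pastur law on [0, 4].
p = n = 200
S = np.full((p, n), 1.0 / n)
zeta = 2.0 + 1.0j
m = solve_gram_dyson(S, zeta)
m_mp = (-zeta + np.sqrt(zeta**2 - 4 * zeta)) / (2 * zeta)  # Im m_mp > 0 here
print(np.max(np.abs(m - m_mp)))
```

For this flat profile the solution is $i$-independent, so the vector iteration reduces to the scalar quadratic above; for a genuinely inhomogeneous $S$ the full vector of $m_i$ is needed.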
For $\zeta \in \mathbb{C} \setminus \mathbb{R}$, we denote the resolvent of $XX^*$ at $\zeta$ by $R(\zeta) := (XX^* - \zeta)^{-1}$ and its entries by $R_{ij}(\zeta)$ for $i, j = 1, \dots, p$.
We state our main result, the local law, i.e., optimal estimates on the resolvent $R$, both in entrywise and in averaged form. In both cases, we provide different estimates when the real part of the spectral parameter $\zeta$ is in the bulk and when it is away from the spectrum. As there may be many zero eigenvalues, and hence a point mass at zero in the density $\nu$, our analysis for spectral parameters $\zeta$ in the vicinity of zero requires special treatment. We thus first prove the local law under the general assumptions (A)-(D) for $\zeta$ away from zero. Some additional assumptions in the following subsections will allow us to extend our arguments to all $\zeta$.
All of our results are uniform in the spectral parameter $\zeta$, which is contained in some spectral domain $\mathbb{D}_\delta$ for some $\delta \ge 0$. In the first result, we assume $\delta > 0$. In the next section, under additional assumptions on $S$, we will work on the bigger spectral domain $\mathbb{D}_0$ that also includes a neighbourhood of zero.

Theorem 2.2 (Local Law for Gram matrices).
Let $\delta, \varepsilon_* > 0$ and $\gamma \in (0, 1)$. If $X$ is a random matrix satisfying (A)-(D), then for every $\varepsilon > 0$ and $D > 0$ there is a constant $C_{\varepsilon, D} > 0$ such that the entrywise local law holds for all $p \in \mathbb{N}$. Furthermore, for any sequence of deterministic vectors $w \in \mathbb{C}^p$ satisfying $\|w\|_\infty \le 1$, the averaged local law holds as well. These results are optimal up to the arbitrarily small tolerance exponents $\gamma > 0$ and $\varepsilon > 0$. We remark that under stronger (e.g. subexponential) moment conditions in (C), one may replace the $p^\gamma$ and $p^\varepsilon$ factors with high powers of $\log p$.
Owing to the symmetry of the assumptions (A) -(D) in X and X * , we can exchange X and X * in Theorem 2.2 and obtain a statement about X * X instead of XX * as well.
For the results in the upcoming subsections, we need the following notion of a sequence of high probability events.
We denote the eigenvalues of $XX^*$ by $\lambda_1 \le \dots \le \lambda_p$. For a spectral parameter $\chi \in \mathbb{R}$ in the bulk, the nonnegative integer $i(\chi)$ is defined as the index of an eigenvalue expected to be close to $\chi$.
(i) (Bulk rigidity away from zero) For every ε > 0 and D > 0, there exists a constant C ε,D > 0 such that holds true for all p ∈ N.
The constant C ε,D depends, in addition to ε and D, only on the model parameters as well as on δ and ε * .
(ii) Away from zero, all eigenvalues lie in the vicinity of the support of ν, i.e., a.w.o.p.
In the following two subsections, we distinguish between square Gram matrices, n = p, and properly rectangular Gram matrices, |p/n − 1| ≥ d * > 0, in order to extend the local law, Theorem 2.2, to include zero in the spectral domain D. Since the density of states behaves differently around zero in these two cases, separate statements and proofs are necessary.

Square Gram matrices
The following concept is well-known in linear algebra. For understanding singularities of the density of states in random matrix theory, it was introduced in [2].

Definition 2.5 (Fully indecomposable matrix).
A matrix $T = (t_{ij})_{i,j=1}^{K}$ with nonnegative entries is called fully indecomposable if, for any two subsets $I, J \subset \{1, \dots, K\}$ such that $\#I + \#J \ge K$, the submatrix $(t_{ij})_{i \in I, j \in J}$ contains a nonzero entry.
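For small matrices, Definition 2.5 can be checked directly by enumerating all pairs of index sets. The following brute-force helper (our own illustration, exponential in $K$) does exactly that:

```python
import numpy as np
from itertools import combinations

def is_fully_indecomposable(T):
    """Brute-force check of Definition 2.5 (for small K only): T is fully
    indecomposable iff every submatrix indexed by I, J with #I + #J >= K
    contains a nonzero entry, i.e. no all-zero submatrix of that size exists."""
    K = T.shape[0]
    for a in range(1, K + 1):
        for I in combinations(range(K), a):
            # only #J >= K - #I can violate the condition
            for b in range(max(1, K - a), K + 1):
                for J in combinations(range(K), b):
                    if not np.any(T[np.ix_(I, J)]):
                        return False
    return True

print(is_fully_indecomposable(np.ones((3, 3))))   # True
print(is_fully_indecomposable(np.eye(3)))         # False: zero 2 x 1 block
```

The identity matrix fails because, e.g., rows $\{1, 2\}$ and column $\{3\}$ give an all-zero $2 \times 1$ submatrix with $\#I + \#J = 3 \ge K$.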
For square Gram matrices, we add the following assumptions.
(iii) (Bulk rigidity down to zero) For every $\varepsilon_* > 0$, $\varepsilon > 0$ and $D > 0$, there exists a constant $C_{\varepsilon, D} > 0$ such that the corresponding rigidity estimate holds for all $p \in \mathbb{N}$. The constant $C_{\varepsilon, D}$ depends, in addition to $\varepsilon$ and $D$, only on the model parameters and on $\varepsilon_*$.
We remark that the bound on the individual resolvent entries (2.9) deteriorates as $\zeta$ gets close to zero, since $\operatorname{Im} m(\zeta) \sim |\zeta|^{-1/2}$ in this regime, while the averaged version (2.5), with $\delta = 0$, does not show this behaviour.

Properly rectangular Gram matrices
(F2) The matrix elements of S are bounded from below, i.e., there is a ϕ > 0 such that The constants d * and ϕ in (E2) and (F2), respectively, are also considered as model parameters. Note that (F2) is a simpler version of (F1). For properly rectangular Gram matrices we work under the stronger condition (F2) for simplicity but our analysis could be adjusted to some weaker condition as well.

The bound (2.12) holds for all $p \in \mathbb{N}$ if $p > n$, and (2.13) holds for all $p \in \mathbb{N}$ if $p < n$. Moreover, in both cases (2.14) holds for all $p \in \mathbb{N}$.
The constant C ε,D depends, in addition to ε and D, only on the model parameters and on ε * .
If $p > n$, then the Stieltjes transform of the empirical spectral measure of $XX^*$ has a term proportional to $1/\zeta$ due to the macroscopic kernel of $XX^*$. This is the origin of the additional factor $1/|\zeta|$ in (2.12).

Remark 2.10. As a consequence of Theorem 2.7 and Theorem 2.9, and under the same conditions, the standard methods of [9] and [4] can be used to prove an anisotropic law and delocalization of eigenvectors in the bulk.

Quadratic vector equation
For the rest of the paper, without loss of generality we will assume that s * = 1 in (A), which can be achieved by a simple rescaling of X. In the whole section, we will assume that the matrix S satisfies (A), (B) and (D) without further notice.

Self-consistent equation for resolvent entries
We introduce the random matrix $\mathbf{H}$ and the deterministic matrix $\mathbf{S}$ defined through
$$\mathbf{H} := \begin{pmatrix} 0 & X \\ X^* & 0 \end{pmatrix}, \qquad \mathbf{S} := \begin{pmatrix} 0 & S \\ S^t & 0 \end{pmatrix}.$$
Note that both matrices, $\mathbf{H}$ and $\mathbf{S}$, have dimensions $(p+n) \times (p+n)$. We denote their entries by $\mathbf{H} = (h_{xy})_{x,y}$ and $\mathbf{S} = (\sigma_{xy})_{x,y}$, respectively, where $\sigma_{xy} = \mathbb{E}|h_{xy}|^2$ with $x, y = 1, \dots, n+p$.
It is easy to see that condition (B) implies the analogous lower bound (3.2) on the entries of powers of $\mathbf{S}$, for all $x, y = 1, \dots, n+p$.
In the following, a crucial part of the analysis will be devoted to understanding the resolvent of $\mathbf{H}$ at $z \in \mathbb{H}$, i.e., the matrix $G(z) := (\mathbf{H} - z)^{-1}$, whose entries are denoted by $G_{xy}(z)$ for $x, y = 1, \dots, n+p$. For $V \subset \{1, \dots, n+p\}$, we use the notation $G^{(V)}$ for the resolvent of the matrix obtained from $\mathbf{H}$ by deleting the rows and columns indexed by $V$. The Schur complement formula and the resolvent identities applied to $G(z)$ yield the self-consistent equations
$$-\frac{1}{g_{1,i}(z)} = z + (S g_2(z))_i + d_{1,i}(z), \qquad (3.4a)$$
$$-\frac{1}{g_{2,k}(z)} = z + (S^t g_1(z))_k + d_{2,k}(z), \qquad (3.4b)$$
where $g_{1,i}(z) := G_{ii}(z)$ for $i = 1, \dots, p$ and $g_{2,k}(z) := G_{k+p, k+p}(z)$ for $k = 1, \dots, n$, with the error terms $d_{1,r}$ and $d_{2,m}$ for $r = 1, \dots, p$ and $m = 1, \dots, n$.
We will prove a local law which states that $g_{1,i}(z)$ and $g_{2,k}(z)$ can be approximated by $M_{1,i}(z)$ and $M_{2,k}(z)$, respectively, where $M_1 : \mathbb{H} \to \mathbb{C}^p$ and $M_2 : \mathbb{H} \to \mathbb{C}^n$ are the unique solution of
$$-\frac{1}{M_{1,i}(z)} = z + (S M_2(z))_i, \qquad (3.5a)$$
$$-\frac{1}{M_{2,k}(z)} = z + (S^t M_1(z))_k. \qquad (3.5b)$$
The system of self-consistent equations for $g_1$ and $g_2$ in (3.4) can be seen as a perturbation of the system (3.5). With the help of $\mathbf{S}$, equations (3.5a) and (3.5b) can be combined into a vector equation,
$$-\frac{1}{M(z)} = z + \mathbf{S} M(z), \qquad M = (M_1, M_2)^t. \qquad (3.6)$$
Since $\mathbf{S}$ is symmetric, has nonnegative entries and fulfills (A) with $s_* = 1$, Theorem 2.1 in [2] is applicable to (3.6); here, we take $a = 0$ in Theorem 2.1 of [2]. This theorem implies that (3.6) has a unique solution $M$ with $\operatorname{Im} M(z) > 0$ for any $z \in \mathbb{H}$. Moreover, by this theorem, each $M_x$ is the Stieltjes transform of a symmetric probability measure on $\mathbb{R}$ whose support is contained in $[-2, 2]$, for all $x = 1, \dots, n+p$. The average $\langle M \rangle$ is the Stieltjes transform of a symmetric probability measure on $\mathbb{R}$ which we denote by $\rho$. We combine (3.4a) and (3.4b) to obtain
$$-\frac{1}{g(z)} = z + \mathbf{S} g(z) + d(z), \qquad (3.9)$$
where $g = (g_1, g_2)^t$ and $d = (d_1, d_2)^t$. We think of (3.9) as a perturbation of (3.6), and most of the subsequent subsection is devoted to the study of (3.9) for an arbitrary perturbation $d$.
Before we start studying (3.6), we want to indicate how $m$ and $R$ are related to $M$ and $G$, respectively. The Stieltjes transforms as well as the resolvents are essentially related via the same transformation of the spectral parameter. If $G^{11}(z)$ denotes the upper left $p \times p$ block of $G(z)$, then $R(z^2) = (XX^* - z^2)^{-1} = G^{11}(z)/z$. In the proof of Theorem 2.1 in Subsection 3.4, we will see that $m(\zeta) = M_1(\sqrt{\zeta})/\sqrt{\zeta}$ (we always choose the branch of the square root satisfying $\operatorname{Im} \sqrt{\zeta} > 0$ for $\operatorname{Im} \zeta > 0$). Assuming this relation and introducing $m_2(\zeta) := M_2(\sqrt{\zeta})/\sqrt{\zeta}$, we obtain a closed system for $m$ and $m_2$ from (3.5). Solving the second equation for $m_2$ and plugging the result into the first one yields (2.1) immediately. In fact, $m_2$ is the analogue of $m$ corresponding to $X^*X$, i.e., the Stieltjes transform of the deterministic measure approximating the eigenvalue density of $X^*X$.
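The block-resolvent relation above is easy to test numerically. A minimal sketch (ours; real Gaussian entries, so that $X^* = X^t$) checks that the upper left block of $G(z)$, divided by $z$, is the resolvent of $XX^*$ at $\zeta = z^2$, and that the spectrum of the symmetrization consists of $\pm$ the singular values of $X$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 7
X = rng.standard_normal((p, n)) / np.sqrt(n)

# Girko's symmetrization: H is Hermitian of size (p+n) x (p+n) and its
# nonzero eigenvalues are the singular values of X and their negatives.
H = np.block([[np.zeros((p, p)), X],
              [X.T, np.zeros((n, n))]])

z = 0.8 + 0.5j
G = np.linalg.inv(H - z * np.eye(p + n))           # resolvent of H at z
R = np.linalg.inv(X @ X.T - z**2 * np.eye(p))      # resolvent of XX^* at z^2

# Upper-left p x p block of G(z), divided by z, recovers R(z^2):
print(np.max(np.abs(G[:p, :p] / z - R)))           # numerically zero
```

The identity follows from the Schur complement of the lower right block: the upper left block of $(\mathbf{H} - z)^{-1}$ equals $(-z + XX^*/z)^{-1} = z(XX^* - z^2)^{-1}$.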

Structure of the solution
We first notice that the inequality $s_{ik} \le 1/(n+p)$ implies $\|S^t w\|_\infty \le \|w\|_2$ for all $w \in \mathbb{C}^p$, i.e., $\|S^t\|_{2 \to \infty} \le 1$. Now, we establish some preliminary estimates on the solution of (3.6), valid for $z \in \mathbb{H}$ with $|z| \le 10$. In particular, the support of the measures representing the $M_x$ is independent of $x$ away from zero.
The proof essentially follows the same line of argument as the proof of Lemma 5.4 in [2]. However, instead of using the lower bound on the entries of $S^L$ as in [2], we have to make use of the lower bound on the entries of $\sum_{k=1}^{L} S^k$. In the study of the stability of (3.6) when perturbed by a vector $d$, as in (3.9), the linear operator $\mathbf{F}(z)$, acting on $v \in \mathbb{C}^{n+p}$, plays an important role. Before we collect some properties of operators of this type in the next lemma, we first recall the definition of the gap of an operator from [2]. In the next lemma, we study matrices of the form $\mathbf{F}(r)_{xy} := r_x \sigma_{xy} r_y$, where $r \in (0, \infty)^{n+p}$ and $x, y = 1, \dots, n+p$. If $\inf_x r_x > 0$, then (3.2) implies that all entries of $\sum_{k=1}^{L} \mathbf{F}(r)^k$ are strictly positive. Therefore, by the Perron-Frobenius theorem, the eigenspace corresponding to the largest eigenvalue $\lambda(r)$ of $\mathbf{F}(r)$ is one-dimensional and spanned by a unique nonnegative vector $f = f(r)$ such that $\langle f, f \rangle = 1$.
The block structure of $\mathbf{S}$ implies that there is a matrix $F(r) \in \mathbb{R}^{p \times n}$ such that
$$\mathbf{F}(r) = \begin{pmatrix} 0 & F(r) \\ F(r)^t & 0 \end{pmatrix}.$$
However, for this kind of operator we obtain $\sigma(\mathbf{F}(r)) = -\sigma(\mathbf{F}(r))$, i.e., $\mathrm{Gap}(\mathbf{F}(r)) = 0$ by the above definition. Therefore, we will compute $\mathrm{Gap}(F(r) F(r)^t)$ instead. We will apply these observations to $\mathbf{F}(z) := \mathbf{F}(|M(z)|)$, whose block $F(|M(z)|)$ will be denoted by $F(z)$.
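The symmetry $\sigma(\mathbf{F}(r)) = -\sigma(\mathbf{F}(r))$ and the Perron-Frobenius structure can be illustrated numerically; a sketch with a randomly chosen positive profile (all names and parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 4, 6
S = rng.uniform(0.5, 1.5, size=(p, n)) / (n + p)   # a generic positive profile
Sigma = np.block([[np.zeros((p, p)), S],
                  [S.T, np.zeros((n, n))]])

r = rng.uniform(0.5, 2.0, size=n + p)              # positive weights
F = r[:, None] * Sigma * r[None, :]                # F(r)_xy = r_x sigma_xy r_y

ev, V = np.linalg.eigh(F)
# The chiral block structure forces a symmetric spectrum, sigma(F) = -sigma(F),
# so the naive spectral gap of F vanishes, as discussed above.
print(np.allclose(np.sort(ev), np.sort(-ev)))      # True

# Perron-Frobenius: the top eigenvalue is simple with a positive eigenvector.
f = V[:, -1]
f = f * np.sign(f[np.argmax(np.abs(f))])           # fix the overall sign
print(np.all(f > 0))                               # True
```

This is why the text passes to $\mathrm{Gap}(F(r)F(r)^t)$: squaring removes the $\pm$ symmetry while keeping the Perron eigenvector information.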

Lemma 3.3.
For a vector $r \in (0, \infty)^{n+p}$ which is bounded by constants $r_+ \in (0, \infty)$ and $r_- \in (0, 1]$, i.e., $r_- \le r_x \le r_+$ for all $x = 1, \dots, n+p$, we define the matrix $\mathbf{F}(r)$ with entries $\mathbf{F}(r)_{xy} := r_x \sigma_{xy} r_y$ for $x, y = 1, \dots, n+p$. Then the eigenspace corresponding to $\lambda(r) := \|\mathbf{F}(r)\|_{2 \to 2}$ is one-dimensional, and $\lambda(r)$ satisfies the estimates (3.17). There is a unique eigenvector $f = f(r)$ corresponding to $\lambda(r)$ satisfying $f_x \ge 0$ and $\|f\|_2 = 1$; its components satisfy (3.18). The estimates in (3.17) and (3.18) can basically be proved following the proof of Lemma 5.6 in [2], where $S^L$ is replaced by $\sum_{k=1}^{L} S^k$. Therefore, we will only show (3.19), assuming the other estimates.
Proof. We write $f = (f_1, f_2)^t$ for $f_1 \in \mathbb{C}^p$ and $f_2 \in \mathbb{C}^n$, and define a linear operator $T$ on $\mathbb{C}^p$ through its matrix entries $t_{ij}$; thus $\|T\|_2 = L$ as $T f_1 = L f_1$. Using (B') we first estimate the entries $t_{ij}$. Estimating $\|f_1\|_2$ and $\|f_1\|_\infty$ from (3.18) and applying Lemma 5.6 in [3] or Lemma 5.7 in [2] yields the claim. Here we used (D), and we note that the factor $\inf_{i,j} t_{ij}$ in Lemma 5.6 in [3] is replaced by $p \inf_{i,j} t_{ij}$, as the $t_{ij}$ are considered as the matrix entries of $T$ and not as the kernel of an integral operator. Here, $f(z)$ is the unique eigenvector of $\mathbf{F}(z)$ associated to $\|\mathbf{F}(z)\|_2$; in particular, we obtain (3.20) for $z \in \mathbb{H}$ satisfying $|z| \le 10$.
Proof. The derivation of (3.20) follows the same steps as the proof of (4.4) in [3] (compare also Lemma 5.5 in [2]). We take the imaginary part of (3.6), multiply the result by $|M|$ and take the scalar product with $f$. Thus, we obtain (3.20).

Stability away from the edges and continuity
All estimates of $M - g$, when $M$ and $g$ satisfy (3.6) and (3.9), respectively, are based on inverting the linear operator $B(z)$, defined on $v \in \mathbb{C}^{n+p}$. The following lemma bounds $B^{-1}(z)$ in terms of $\operatorname{Im} M(z)$ if $z$ is away from zero. For $\delta > 0$, we use the notation $f \lesssim_\delta g$ if and only if there is an $r > 0$, which is allowed to depend on the model parameters, such that $f \lesssim \delta^{-r} g$.
There is a universal constant κ ∈ N such that for all δ > 0 we have for all z ∈ H satisfying δ ≤ |z| ≤ 10.
For the proof of this result, we will need the following two lemmata. We recall that, by the Perron-Frobenius theorem, an irreducible matrix with nonnegative entries has a unique $\ell^2$-normalized eigenvector with positive entries corresponding to its largest eigenvalue. By the definition of the spectral gap, Definition 3.2, we observe that if $AA^*$ is irreducible, then $\mathrm{Gap}(AA^*) = \|AA^*\|_2 - \max\bigl(\sigma(AA^*) \setminus \{\|AA^*\|_2\}\bigr)$.

Lemma 3.6 (Rotation-Inversion Lemma). There exists a positive constant $C$ such that for all $n, p \in \mathbb{N}$, unitary matrices $U_1 \in \mathbb{C}^{p \times p}$, $U_2 \in \mathbb{C}^{n \times n}$ and $A \in \mathbb{R}^{p \times n}$ with nonnegative entries such that $A^*A$ and $AA^*$ are irreducible and $\|A^*A\|_2 \in (0, 1]$, the bound (3.25) holds, where $v_1 \in \mathbb{C}^p$ and $v_2 \in \mathbb{C}^n$ are the unique positive, normalized eigenvectors of $AA^*$ and $A^*A$, respectively, corresponding to their largest eigenvalues. This lemma is proved in the appendix.
The proof of (3.26) follows along similar lines as the proof of (5.28) in [2]. We first prove the bound (3.27) for some universal $\kappa \in \mathbb{N}$; this will be a consequence of Lemma 3.6, which we apply with a suitable choice of $U_1$, $U_2$, $A$ and $v_1$. A similar relation holds for $\langle v_2, U_2 v_2 \rangle$. Thus, we compute (3.29) for $a, b \in \mathbb{R}$, and estimating the absolute value by the real part yields the required lower bound. To estimate the last factor in (3.29), we multiply the real part of (3.6) by $|M|$ and obtain (3.30); estimating the $\|\cdot\|_2$-norm of the resulting equation yields (3.31). Finally, we use (3.30) for the first factor in (3.29) and (3.31) for the last factor, and apply the last estimate in (3.12a) together with Jensen's inequality, $\langle (\operatorname{Im} M)^2 \rangle \ge \langle \operatorname{Im} M \rangle^2$, to estimate the second factor. This completes the proof of (3.27).
We conclude that the Stieltjes transform M has the same regularity by decomposing ρ into a measure supported around zero and a measure supported away from zero and using Lemma A.7 in [2].

Proof of Theorem 2.1
Proof of Theorem 2.1. We start by proving the existence of the solution $m$ of (2.1). Let $M = (M_1, M_2)^t$ be the solution of (3.6) satisfying $\operatorname{Im} M(z) > 0$ for $z \in \mathbb{H}$. For $\zeta \in \mathbb{H}$, we set $m(\zeta) := M_1(\sqrt{\zeta})/\sqrt{\zeta}$.
Then it is straightforward to check that m satisfies (2.1) by solving (3.5b) for M 2 and plugging the result into (3.5a). Note that Im m(ζ) > 0 for all ζ ∈ H since M 1,i is the Stieltjes transform of a symmetric measure on R (cf. the explanation before (3.7) for the symmetry of this measure).
For the following arguments, it is important that $M$ is purely imaginary on the imaginary axis, i.e., $M(i\eta) = i \operatorname{Im} M(i\eta)$ for all $\eta \in (0, \infty)$, due to (3.6). The study of this equation will imply the stability of the QVE at $z = 0$. The following proposition is the main result of this subsection. There are numbers $\lambda_*, \delta \sim 1$, depending only on $S$, such that the stated bounds hold for all $z \in \mathbb{H}$ satisfying $|z| \le 10$ and $\operatorname{Re} z \in [-\delta, \delta]$. Moreover, there is a matrix-valued function $T : \mathbb{H} \to \mathbb{C}^{2p \times 2p}$, depending only on $S$ and satisfying $\|T(z)\|_\infty \lesssim 1$, such that the corresponding expansion holds for all $w \in \mathbb{C}^{2p}$ and $z \in \mathbb{H}$ satisfying $|z| \le 10$ and $\operatorname{Re} z \in [-\delta, \delta]$.
The remainder of this subsection will be devoted to the proof of this proposition. Therefore, we will always assume that (A), (E1) and (F1) are satisfied.
The estimate in (3.44), with some minor modifications which we will explain next, is shown as in the proof of (6.30) of [2].
Proof. From (3.40) and the definition of $\mathbf{S}$, we obtain $\eta \langle v_1 \rangle - \eta \langle v_2 \rangle = \langle v_1, S v_2 \rangle - \langle v_2, S^t v_1 \rangle = 0$ for all $\eta \in (0, \infty)$, which proves (3.45). Differing from [2], the discrete functional $J$ is defined in (3.46) for $u \in (0, \infty)^{2K}$ (we use the notation $u(i)$ to denote the $i$-th entry of $u$), where $Z$ is the $2K \times 2K$ matrix with entries in $\{0, 1\}$ defined below. Decomposing $u = (u_1, u_2)^t$ for $u_1, u_2 \in (0, \infty)^K$, and writing $u_1(i) = u(i)$ and $u_2(j) = u(K + j)$ for their entries, we obtain a representation in terms of $J$ defined in (3.46). If $\langle u_1 \rangle = \langle u_2 \rangle$, then there is a constant $\Phi < \infty$, depending only on $(\Psi, \varphi, K)$, such that the desired bound holds, where we used $\langle u_1 \rangle = \langle u_2 \rangle$. This concludes the proof of Lemma 3.12.
Recalling the function $v$ in Lemma 3.11, we define $u$ accordingly. Then we have $\langle u_1 \rangle = \langle u_2 \rangle$ by (3.45) and since $I_1, \dots, I_{2K}$ is an equally sized partition of $\{1, \dots, 2p\}$. Therefore, the assumptions of Lemma 3.12 are met, which implies (3.44) of Lemma 3.11 as in [2].
We recall from Lemma 3.4 that f = (f_1, f_2) is the unique nonnegative, normalized eigenvector of F corresponding to the eigenvalue ‖F‖_2. Moreover, we define f_− := (f_1, −f_2), which clearly satisfies Ff_− = −‖F‖_2 f_−. Since the spectrum of F is symmetric, σ(F) = −σ(F) with multiplicities, and ‖F‖_2 is a simple eigenvalue of F, the same is true for the eigenvalue −‖F‖_2 of F, and f_− spans its associated eigenspace. Proof. In the whole proof, the quantities v, f, f_− and F are evaluated at z = iη for η > 0. Therefore, we will mostly suppress the z-dependence of all quantities. Differentiating (3.6) with respect to z and using (3.39), we obtain an identity involving 1 + F. As ‖F‖_2 < 1 by (3.20), the matrix 1 + F is invertible, which yields (3.51) for all η ∈ (0, ∞).
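The spectral symmetry σ(F) = −σ(F) and the eigenvector relation for f_− = (f_1, −f_2) both stem from the block off-diagonal structure of F. A minimal numerical sketch on a hypothetical positive block A (standing in for the actual off-diagonal block of F):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4
A = rng.uniform(0.5, 1.5, size=(p, p))  # hypothetical positive block

# Block off-diagonal symmetric matrix, the structure F inherits here.
F = np.block([[np.zeros((p, p)), A],
              [A.T, np.zeros((p, p))]])

eigvals, eigvecs = np.linalg.eigh(F)  # eigenvalues in ascending order

# Spectrum is symmetric: sorted eigenvalues pair up as lambda and -lambda.
assert np.allclose(np.sort(eigvals) + np.sort(eigvals)[::-1], 0.0)

# If f = (f1, f2) is the top eigenvector, then f_- = (f1, -f2) is an
# eigenvector for the negated eigenvalue.
f = eigvecs[:, -1]
f_minus = np.concatenate([f[:p], -f[p:]])
assert np.allclose(F @ f_minus, -eigvals[-1] * f_minus)
```

Flipping the sign of the second block of an eigenvector negates the eigenvalue precisely because F exchanges the two blocks.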
We decompose accordingly because of (3.53). As |M(iη)| ∼ 1 for η ∈ (0, 10] by (3.44), the bound (3.19) in Lemma 3.3 implies that there is an ε ∼ 1 such that the corresponding estimate holds for all η ∈ (0, 10]. Thus, using (3.45) and (3.55), this implies the claimed bound, which concludes the proof of (3.52). In the next lemma, we establish an expansion of M(z) on the upper half-plane around z = 0. The proof of this result, and later the stability estimates on g − M, will be a consequence of the equation (3.57) below. In particular, there is a δ ∼ 1 such that |M(z)| ∼ 1 uniformly for z ∈ H satisfying Re z ∈ [−δ, δ] and |z| ≤ 10.
Using the expansion of M in (3.58a) in a similar argument as in the preceding proof yields the first claim. Similarly, using (3.26), we obtain (3.59).
By a standard argument from perturbation theory and possibly reducing δ ∼ 1, we can assume that B(z) has a unique eigenvalue β(z) of smallest modulus for z ∈ H satisfying |Re z| ≤ δ and |z| ≤ 10, in the sense that |β′| − |β| ≳ 1 for all β′ ∈ σ(B(z)) with β′ ≠ β. This follows from |M| ∼ 1 and thus Gap(F(z)F(z)^t) ≳ 1 by Lemma 3.3. For z ∈ H satisfying |Re z| ≤ δ and |z| ≤ 10, we therefore find a unique (unnormalized) eigenvector b(z) ∈ C^{2p} associated to β(z). We introduce the spectral projection P onto the spectral subspace of B(z) in (C^{2p}, ‖·‖_∞) associated to β(z). Note that P is not an orthogonal projection in general. Let Q := 1 − P denote the complementary projection onto the spectral subspace of B(z) not containing β(z) (this Q is different from the one in the proof of Lemma 3.13). These definitions apply for z ∈ H satisfying |Re z| ≤ δ and |z| ≤ 10.
The estimate ‖B^{−1}(z)Q‖_∞ ≲ 1 in (3.64) follows from Gap(F(z)F(z)^t) ≳ 1 by Lemma 3.3, |M(z)| ∼ 1 and a standard argument from perturbation theory as presented in Lemma 8.1 of [2]. Here, it might be necessary to reduce δ. We remark that B^* = |M|²/M² − F and similarly P^* = ⟨b̄, ·⟩ b̄/⟨b̄²⟩, i.e., B^* and P^* emerge by the same constructions where M is replaced by M̄. Therefore, we obtain ‖(B^{−1}(z)Q)^*‖_∞ ≲ 1.
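That the spectral projection of a non-normal matrix onto a simple eigenvalue commutes with the matrix yet is in general not orthogonal can be seen on a toy 2 × 2 example; this illustrates only the generic linear-algebra fact used above, on a hypothetical matrix, not the actual operator B.

```python
import numpy as np

# Toy non-normal matrix with a simple eigenvalue of smallest modulus.
B = np.array([[0.1, 1.0],
              [0.0, 2.0]])

# Right eigenvector b and left eigenvector l for the eigenvalue beta = 0.1.
eigvals, right = np.linalg.eig(B)
b = right[:, np.argmin(np.abs(eigvals))]
eigvals_l, left = np.linalg.eig(B.T)  # eigenvectors of B^T are left eigvecs
l = left[:, np.argmin(np.abs(eigvals_l))]

# Rank-one spectral projection P = b <l, .> / <l, b>: it satisfies P^2 = P
# and commutes with B, but P != P^* since B is not normal.
P = np.outer(b, l) / (l @ b)

assert np.allclose(P @ P, P)           # P is a projection
assert np.allclose(P @ B, B @ P)       # P commutes with B
assert not np.allclose(P, P.conj().T)  # P is not an orthogonal projection
```

The left and right eigenvectors of a non-normal matrix differ, which is exactly why the resulting spectral projection fails to be self-adjoint.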
Proof of Proposition 3.10. Part (i) follows from the previous lemmata. Part (ii) has already been proved in Lemma 3.9 for |z| ≥ δ and any δ ∼ 1. Therefore, we restrict ourselves to |z| ≤ δ for a sufficiently small δ ∼ 1. We recall e^{iψ} = M/|M|.
Owing to Lemma 3.14 and (3.63), there are positive constants δ, Φ, Φ′ ∼ 1, which only depend on the model parameters, such that the corresponding bounds hold for all z ∈ H satisfying |z| ≤ δ. Here, we used ‖w‖_2 ≤ ‖w‖_∞ for all w ∈ C^{2p}. Note that we employed (3.63) for estimating ‖b − e_−‖_2 as well as to obtain ‖b‖_∞ ∼ 1 and |⟨b²⟩| ∼ 1 for all z ∈ H satisfying |z| ≤ δ if δ ≲ 1 is small enough. Lemma 3.15 implies the existence of Ψ, Ψ′ ∼ 1 such that the analogous bounds hold for all z ∈ H satisfying |z| ≤ δ′ if δ′ ≤ δ is sufficiently small. With these definitions, we set u := h/|M|. The estimate on h := g(z) − M(z) = u|M| will be obtained from inverting B in (3.57). In order to control the right-hand side of (3.57), we decompose it, according to 1 = P + Q, as in (3.67). Using ⟨e_− hSh⟩ = 0 and (3.66), we obtain a bound on the P-component. Similarly, due to (3.66) and ⟨e_− gd⟩ = ⟨g_1(z)d_1(z)⟩ − ⟨g_2(z)d_2(z)⟩ = 0 by the perturbed QVE (3.9), we get a bound on the Q-component. Thus, inverting B in (3.57), multiplying the result with |M|, taking its norm and using (3.67) yield the claimed estimate by the definition of λ_* in (3.68). This concludes the proof of (3.42).
For the proof of (3.43), inverting B in (3.57) and taking the scalar product with w yield (3.69), where we used ⟨e_− gd⟩ = 0 and defined Tw as the resulting expression.
Using (3.66) and (3.67), a similar argument as in the proof of (3.42) for the first term in the definition of T, and ‖(B^{−1}Q)^*‖_∞ ≲ 1 by (3.64) for the second term, we obtain ‖T‖_∞ ≲ 1. Moreover, as above, we see that the first term on the right-hand side of (3.69) is ≲ ‖w‖_∞ ‖h‖²_∞. The estimates (3.66) and (3.67) imply that the second term on the right-hand side of (3.69) satisfies an analogous bound. Applying (3.42) to these bounds yields (3.43).

Properly rectangular Gram matrices
In this subsection, we study the behaviour of M_1 and M_2 for z close to zero when p/n is different from one. We establish that the density of the limiting distribution vanishes in a neighbourhood of zero, a well-known feature of the Marchenko-Pastur distribution for p/n different from one.
We suppose that the assumptions (A), (C) and (D) are fulfilled, and we study the case p > n. More precisely, we assume that p/n ≥ 1 + d_* for some d_* > 0, which will imply that each component of M_1 diverges at z = 0 whereas each component of M_2 stays bounded at z = 0. Later, we will see that these properties carry over to m_1 and m_2. We use the notation D_δ(w) := {z ∈ C : |z − w| < δ} for δ > 0 and w ∈ C.
The ansatz (3.71) is motivated by the following heuristics. Considering H as an operator C^p ⊕ C^n → C^n ⊕ C^p, we expect that the first component, described by X^* : C^p → C^n, has a nontrivial kernel for dimensional reasons, whereas the second component has a trivial kernel. Since the nonzero eigenvalues of H² correspond to the nonzero eigenvalues of XX^* and X^*X, the Marchenko-Pastur distribution indicates that there is a constant δ_* ∼ 1 such that H has no nonzero eigenvalue in (−δ_*, δ_*). As the first component M_1 of M corresponds to the "first component" of H, the term −u/z in (3.71) implements the expected kernel. For dimensional reasons, the kernel should be (p − n)-dimensional, which agrees with part (i) of Proposition 3.16. The factor z in the terms za(z) and zb(z) in (3.71) realizes the expected gap in the eigenvalue distribution around zero.
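The dimension count behind the ansatz can be checked directly: for a generic p × n matrix X with p > n one has rank(XX^*) = rank(X) = n almost surely, so the kernel of XX^* is exactly (p − n)-dimensional. A toy check with hypothetical Gaussian entries:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 8, 5                      # properly rectangular case: p > n
X = rng.standard_normal((p, n))

gram = X @ X.T                   # p x p Gram matrix XX^*
eigenvalues = np.linalg.eigvalsh(gram)

# rank(XX^*) = rank(X) = n almost surely, so exactly p - n eigenvalues vanish.
num_zero = int(np.sum(np.abs(eigenvalues) < 1e-10))
assert num_zero == p - n
```

The same count shows dim ker XX^* ≥ p − n deterministically; only the matching upper bound needs the genericity of X.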
Proof of Proposition 3.16. We start with the defining equations for u and b. We assume that u ∈ (0, 1]^p fulfills (3.72) and that b fulfills (3.73) on D_{δ_*}(0) for some δ_* > 0. We then define a : D_{δ_*}(0) → C^p through z²a(z) = u − 1/(1 + Sb(z)) (3.74) and set M_1(z) := za(z) − u/z and M_2(z) := zb(z) for z ∈ D_{δ_*}(0). Thus, for z ∈ D_{δ_*}(0), we obtain the first defining relation, where we used (3.74) in the first step and (3.73) in the second step. Similarly, solving (3.74) for Sb(z) yields the second one. For the rigorous argument, we first establish the existence and uniqueness of u and b, which follow from the next two lemmata whose proofs are given later.
Given u and b(z), the formula (3.74) defines a(z) for z ≠ 0. To extend its definition to z = 0, we observe that the right-hand side of (3.74) is a holomorphic function for all z ∈ D_{δ_*}(0) by Lemma 3.18. Since b(0) = 1/(S^t u) and the derivative of the right-hand side of (3.74) vanishes at z = 0 because b′(0) = 0, the first two coefficients of the Taylor series of the right-hand side on D_{δ_*}(0) are zero by (3.72). Thus, (3.74) defines a holomorphic function a : D_{δ_*}(0) → C^p.
We conclude this subsection with the proofs of Lemma 3.17 and Lemma 3.18.
Proof of Lemma 3.17. We will show that the functional J has a unique minimizer u with u_i > 0 for all i = 1, . . . , p, which solves (3.72). Next, we show that u⋆_i < 1 for all i = 1, . . . , p. Assume that u⋆_i = 1 for some i ∈ {1, . . . , p}. Consider a vector ũ that agrees with u⋆ except that u⋆_i is replaced by λ ∈ (0, 1). An elementary calculation then shows that J(ũ) ≥ J(u⋆) implies s_{ik} = 0 for all k = 1, . . . , n, which contradicts (3.14).
To see the uniqueness of the solution of (3.72), we suppose that u⋆, v⋆ ∈ (0, 1]^p both satisfy (3.72) and compare them through a chain of estimates with some number c. Here we used 1. and 2. of Lemma A.2 in [3] in the second step and 3. of Lemma A.2 in [3] in the last step. Since we can choose c < 1 by (3.80), we conclude u⋆ = v⋆. This argument applies in particular to minimizers of J on (0, 1]^p. In the following, we denote the unique minimizer of J by u. To compute the sum of the components of u, we multiply (3.72) by u_i, sum over i = 1, . . . , p and obtain Σ_{i=1}^p u_i = p − n. Finally, we show that the components of the minimizer u are bounded from below by a positive constant which only depends on the model parameters. For k ∈ {1, . . . , n}, we obtain the bound (3.82), where we used (F2) in the first step, n ≤ p in the second step, Σ_{i=1}^p u_i = p − n in the third step and (3.70) in the last step. This implies the third bound in (3.76). Therefore, we obtain a lower bound on u_i for all i = 1, . . . , p from (3.72), where we used (A) with s_* = 1 in the last step. This shows that u_i is bounded from below by a positive constant which only depends on the model parameters, i.e., the second bound in (3.76).
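Numerically, u can be obtained by a damped fixed-point iteration. We assume here that (3.72) takes the componentwise form u = 1/(1 + S(1/(S^t u))), which is consistent with setting z = 0 in (3.74) together with b(0) = 1/(S^t u); summing this relation against u then reproduces Σ_{i=1}^p u_i = p − n exactly, which the sketch verifies. The variance profile S below is hypothetical.

```python
import numpy as np

def solve_u(S, damping=0.5, tol=1e-12, max_iter=50_000):
    """Damped fixed-point iteration for u = 1 / (1 + S (1 / (S^t u))).

    This componentwise form of (3.72) is an assumption consistent with
    (3.74) and b(0) = 1/(S^t u); for positive S with p > n the exact
    solution satisfies sum(u) = p - n.
    """
    p, n = S.shape
    u = np.full(p, (p - n) / p)          # start at the flat profile
    for _ in range(max_iter):
        u_new = 1.0 / (1.0 + S @ (1.0 / (S.T @ u)))
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = damping * u + (1.0 - damping) * u_new
    return u

rng = np.random.default_rng(2)
p, n = 7, 4
S = rng.uniform(0.5, 1.5, size=(p, n))  # hypothetical positive variance profile
u = solve_u(S)
assert np.all(u > 0) and np.all(u < 1)
assert abs(u.sum() - (p - n)) < 1e-8
```

The sum rule follows by pairing each factor (S^t u)_k with its reciprocal, so it holds for the exact fixed point independently of the particular positive S.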
Proof of Lemma 3.18. Instead of solving (3.73) directly, we solve a differential equation with the correctly chosen initial condition in order to obtain b. Note that b_0 := 1/(S^t u) fulfills (3.73) for z = 0 and b_0 ∼ 1 by (3.76) and (3.14).
for all i = 1, . . . , p, where we used the definition of b_0, (3.72) and u_i ≤ 1. Therefore, ‖(1 + Sb)^{−1}‖_∞ ≤ 1/2 for all b ∈ U_{δ′}, i.e., U_{δ′} → C^{n×n}, b ↦ L(b) is a holomorphic map. Hence, the right-hand side of the differential equation (3.85) is holomorphic in b; this equation is the derivative of (3.73). Since b(0) = b_0 fulfills (3.73) for z = 0, the unique solution of (3.85) with this initial condition is a solution of (3.73) for z ∈ D_{δ_*}(0). There is only one holomorphic solution of (3.73) due to the uniqueness of the solution of (3.85). This proves the existence and uniqueness of b(z) in Lemma 3.18.

Local law for H
In this section, we follow the approach of [4] to prove a local law for the Wigner-type matrix H. We will not give all details but refer the reader to [4]. Therefore, we consider (3.4) as a perturbed QVE of the form (3.9) with g := (g_1, g_2)^t : H → C^{p+n} and d := (d_1, d_2)^t : H → C^{p+n}; in particular, g(z) = (G_{xx}(z))_{x=1,...,n+p}, where G_{xx} are the diagonal entries of the resolvent of H defined in (3.3). We recall that ρ is the probability measure on R whose Stieltjes transform is ⟨M⟩, cf. (3.8), where M is the solution of (3.6) satisfying Im M(z) > 0 for z ∈ H. Definition 4.1 (Stochastic domination). Let P_0 : (0, ∞)² → N be a given function which depends only on the model parameters and the tolerance exponent γ. If ϕ = (ϕ^{(p)})_p and ψ = (ψ^{(p)})_p are two sequences of nonnegative random variables, then we say that ϕ is stochastically dominated by ψ, written ϕ ≺ ψ, if for all ε > 0 and D > 0 we have P(ϕ^{(p)} > p^ε ψ^{(p)}) ≤ p^{−D} for all p ≥ P_0(ε, D). In the following, we use the convention that τ := Re z and η := Im z for z ∈ C. The rigidity of eigenvalues is obtained as in [4], where the corresponding statements were proved. In fact, inspecting the proofs in [4], rigidity at a particular point τ_0 in the bulk requires only (i) the local law, (4.2a), around τ_0 = Re z, (ii) the local law, (4.2b), somewhere outside of the support of ρ, and (iii) a uniform global law with optimal convergence rate, (4.2b), for any z away from supp ρ.
Thus, we get a chain of inequalities, where we used Lemma 4.9 in the last step. As the left-hand side and the right-hand side are equal, all of the inequalities are in fact equalities, which concludes the proof of parts (i) and (ii).
We omit the proof of part (iii) of Theorem 4.7 as it is very similar to the proof of part (vi) of Theorem 2.9 below, which is independent of part (iii) of Theorem 4.7.
Proof of Theorem 2.9. Since δ_* is chosen as in Proposition 3.16, we conclude δ_π ≥ δ_*² ∼ 1 from part (iv) of this proposition. Parts (ii) and (iii) of the theorem follow immediately from (4.17) in Theorem 4.7.
If p > n, then dim ker XX^* = p − n a.w.o.p., since p − n ≤ dim ker XX^* ≤ dim ker H² = p − n a.w.o.p. by part (i) of Theorem 4.7. By Proposition 3.16, we obtain π_* = ⟨u⟩ = 1 − n/p, where u is defined as in this proposition. This proves part (iv). If p < n, then part (v) follows from interchanging the roles of X and X^* and following the same steps as in the proof of part (iv).
For the proof of (2.14), we observe that dim ker(XX^*) = pπ_* a.w.o.p. in both cases by (iv) and (v). Thus, the corresponding identity holds a.w.o.p. for ζ ∈ D_δ(0), with δ chosen as above, by (4.17), where a is the holomorphic function on D_{δ_*}(0) defined in Proposition 3.16. The right-hand side of the previous equation can therefore be uniquely extended to a holomorphic function on D_{δ_*}(0). As before, the estimate (2.4b) can be extended to ζ ∈ H with |ζ| ≤ δ by the maximum principle.
The local law for ζ around zero needed an extra argument, Theorem 2.9, due to the possible singularity at ζ = 0. We note that this separate treatment is necessary even if p < n, in which case XX^* does not have a kernel and R(ζ) is regular at ζ = 0, since we study XX^* and X^*X simultaneously. Our main stability results are formulated and proved in terms of H, as defined in (3.1). Therefore, these results are not sensitive to whether p or n is bigger, i.e., to whether XX^* or X^*X has a nontrivial kernel.

A. Appendix: Proof of the Rotation-Inversion lemma
In this appendix, we prove the Rotation-Inversion lemma, Lemma 3.6.
The last inequality follows from (A.2) and because b is orthogonal to a_±. Since β² ≥ κ/100, we conclude that (A.4) is satisfied in the first regime.

Regime 2:
In this regime we project onto the second component of (U + A)w.

Regime 3:
By the symmetry in a_± and α_±, and therefore in λ and 1 − λ, this regime is treated in the same way as Regime 2, by estimating the norm of the first component of (U + A)w from below.

Regime 4:
Here we project onto the orthogonal complement of the subspace spanned by a_+ and a_−. We compute the first term in this last expression more explicitly in (A.6). For the second equality we used that ‖Pu‖² = |⟨v_1, u_1⟩|² + |⟨v_2, u_2⟩|² for u = (u_1, u_2) ∈ C^{p+n}.
With this choice of variables, we minimize the last line in (A.6) under the restrictions that are satisfied in this regime. We use the resulting estimate in (A.5) and in this way arrive at the desired bound. In the second-to-last inequality we used β ≤ 1/5, which was already established in the consideration of Regime 2, and in the last inequality we used κ ≤ 2.

Regime 5:
In this regime we project onto the span of a_+ and a_−: ‖(U + A)w‖ = ‖(1 + U^*A)w‖ ≥ ‖P(1 + U^*A)(α_+ a_+ + α_− a_−)‖ − β‖P(1 + U^*A)b‖ ≥ √(|α_+|² + |α_−|²) κ − β‖P U^*Ab‖. (A.7) The second term in the last line is estimated by taking suprema over normalized vectors h and u in the 2-dimensional subspace spanned by a_± and its orthogonal complement, respectively. First we perform the supremum over h, where the vectors u_1 ∈ C^p and u_2 ∈ C^n are normalized; then we compute the remaining supremum, where we used the choice of Regime 5 in the last step. Plugging this bound into (A.7) and using β ≤ κ^{1/2}/10 as well as β ≤ 1/5 yields ‖(U + A)w‖ ≥ √(1 − β²) κ − β κ^{1/2} ≥ κ/2.
This finishes the proof of (A.4). In order to show (3.25), and thus the lemma, we notice that it suffices to take the infimum over normalized vectors u in the span of a_+ and a_−. Thus, it suffices to estimate the norm of the inverse of P(1 + U^*A)P, restricted to the 2-dimensional subspace with orthonormal basis (v_1, 0) and (0, v_2). In this basis, this linear operator takes the form of a simple 2 × 2 matrix. Its inverse is bounded by the right-hand side of (3.25), up to the factor Gap(AA^*) that we encountered in (A.4), and the lemma is proven.