Isotropic Local Laws for Sample Covariance and Generalized Wigner Matrices

We consider sample covariance matrices of the form $X^*X$, where $X$ is an $M \times N$ matrix with independent random entries. We prove the isotropic local Marchenko-Pastur law, i.e. we prove that the resolvent $(X^* X - z)^{-1}$ converges to a multiple of the identity in the sense of quadratic forms. More precisely, we establish sharp high-probability bounds on the quantity $\langle v, (X^* X - z)^{-1} w \rangle - \langle v,w\rangle m(z)$, where $m$ is the Stieltjes transform of the Marchenko-Pastur law and $v, w \in \mathbb C^N$. We require the logarithms of the dimensions $M$ and $N$ to be comparable. Our result holds down to scales $\operatorname{Im} z \geq N^{-1+\epsilon}$ and throughout the entire spectrum away from 0. We also prove analogous results for generalized Wigner matrices.


Introduction
The empirical density of eigenvalues of large $N \times N$ random matrices typically converges to a deterministic limiting law. For Wigner matrices this law is the celebrated Wigner semicircle law [22], and for sample covariance matrices it is the Marchenko-Pastur law [20]. Under some additional moment conditions this convergence also holds in very small spectral windows, all the way down to the scale of the eigenvalue spacing. In this paper we normalize the matrix so that the support of its spectrum remains bounded as $N$ tends to infinity. In particular, the typical eigenvalue spacing is of order $1/N$ away from the spectral edges. The empirical eigenvalue density is conveniently, and commonly, studied via its Stieltjes transform: the normalized trace of the resolvent, $m_N(z) := \frac{1}{N} \operatorname{Tr} (H - z)^{-1}$, where $z = E + i\eta$ is a spectral parameter with positive imaginary part $\eta$. Understanding the eigenvalue density on small scales of order $\eta$ around a fixed value $E \in \mathbb{R}$ is roughly equivalent to understanding its Stieltjes transform with spectral parameter $z = E + i\eta$. The smallest scale on which a deterministic limit is expected to emerge is $\eta \sim N^{-1}$; below this scale the empirical eigenvalue density remains a fluctuating object even in the limit of large $N$, driven by the fluctuations of individual eigenvalues. We remark that a local law on the optimal scale $1/N$ (up to logarithmic corrections) was first obtained in [11].
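The two scales discussed above can be seen in a small simulation. The following sketch is our illustration, not part of the paper: matrix size, the test energy $E$, and the scale choices are arbitrary. It compares the empirical Stieltjes transform of a Wigner matrix to its semicircle limit on a mesoscopic scale $1 \gg \eta \gg 1/N$, and then evaluates it below the eigenvalue spacing, where it is dominated by a single eigenvalue and no longer deterministic.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# A Wigner matrix normalized so that its spectrum concentrates on [-2, 2]:
# independent centred entries with variance 1/N.
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)
eigs = np.linalg.eigvalsh(H)

def m_emp(z):
    """Normalized trace of the resolvent, m_N(z) = N^{-1} Tr (H - z)^{-1}."""
    return np.mean(1.0 / (eigs - z))

def m_sc(z):
    """Stieltjes transform of the semicircle law (branch with Im m > 0)."""
    return (-z + np.sqrt(z * z - 4 + 0j)) / 2

E = 0.5
# Mesoscopic scale 1 >> eta >> 1/N: m_N is already close to its limit.
z_meso = E + 1j * N ** -0.5
err_meso = abs(m_emp(z_meso) - m_sc(z_meso))

# Scale below the eigenvalue spacing: centred on an actual eigenvalue,
# Im m_N blows up like 1/(N eta) instead of staying order one.
z_tiny = eigs[N // 2] + 1j * (0.1 / N)
im_tiny = m_emp(z_tiny).imag
```

With the fixed seed above, `err_meso` is small while `im_tiny` is of order $1/(N\eta) = 10$, an order of magnitude larger than $\operatorname{Im} m_{sc} \leq 1$.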
In recent years there has been substantial progress in understanding the local versions of the semicircle and the Marchenko-Pastur laws (see [9,5] for an overview and detailed references). This research was originally motivated by the Wigner-Dyson-Mehta universality conjecture for the local spectral statistics of random matrices. The celebrated sine kernel universality and related results for other symmetry classes concern higher-order correlation functions, and not just the eigenvalue density. Moreover, they pertain to scales of order 1/N , smaller than the scales on which local laws hold.
Nevertheless, local laws (with precise error bounds) are essential ingredients for proving universality. In particular, one of their consequences, the precise localization of the eigenvalues (called rigidity bounds), has played a fundamental role in the relaxation flow analysis of the Dyson Brownian Motion, which has led to the proof of the Wigner-Dyson-Mehta universality conjecture for all symmetry classes [12,13].
The basic approach behind the proofs of local laws is the analysis of a self-consistent equation for the Stieltjes transform, a scalar equation which controls the trace of the resolvent (and hence the empirical eigenvalue density). A vector self-consistent equation for the diagonal resolvent matrix entries, $[(H - z)^{-1}]_{ii}$, was introduced in [15]. Later, a matrix self-consistent equation was derived in [7]. Such self-consistent equations provide entrywise control of the resolvent and not only of its trace. This latter fact has proved to be a key ingredient in the Green function comparison method (introduced in [15] and extended to the spectral edge in [16]), which allows the comparison of local statistics via moment matching even below the scale of the eigenvalue spacing.
In this paper we are concerned with isotropic local laws, in which the control of the matrix entries $[(H - z)^{-1}]_{ij}$ is generalized to a control of quantities of the form $\langle v, (H - z)^{-1} w \rangle$, where $v, w \in \mathbb{C}^N$ are deterministic vectors. This may be interpreted as basis-independent control on the resolvent. The fact that the matrix entries are independent distinguishes the standard basis of $\mathbb{C}^N$ in the analysis of the resolvent. Unless the entries of $H$ are Gaussian, this independence of the matrix entries is destroyed after a change of basis, and the isotropic law is a nontrivial generalization of the entrywise law. The first isotropic local law was proved in [18], where it was established for Wigner matrices.
The main motivation for isotropic local laws is the study of deformed matrix ensembles. A simple example is the sum $H + A$ of a Wigner matrix $H$ and a deterministic finite-rank matrix $A$. As it turns out, a powerful means to study the eigenvalues and eigenvectors of such deformed matrices is to derive large deviation bounds and central limit theorems for quantities of the form $\langle v, (H - z)^{-1} w \rangle$, where $v$ and $w$ are eigenvectors of $A$. Deformed matrix ensembles are known to exhibit rather intricate spectra, depending on the spectrum of $A$. In particular, the spectrum of $H + A$ may contain outliers: lone eigenvalues separated from the bulk spectrum. The creation or annihilation of an outlier occurs at a sharp transition when an eigenvalue of $A$ crosses a critical value. This transition is often referred to as the BBP transition and was first established in [1] for unitary matrices and extended in [4,3] to other symmetry classes. Similarly to the above deformed Wigner matrices, one may introduce a class of deformed sample covariance matrices, commonly referred to as spiked population models [17], which describe populations with nontrivial correlations (or "spikes").
The isotropic local laws established in this paper serve as a key input in establishing detailed results about the eigenvalues and eigenvectors of deformed matrix models. These include: (a) A complete picture of the distribution of outlier eigenvalues/eigenvectors, as well as the non-outlier eigenvalues/eigenvectors near the spectral edge.
(b) An investigation of the BBP transition using that, thanks to the optimality of the high-probability bounds in the local laws, the results of (a) extend even to the case when some eigenvalues of A are very close to the critical value.
This programme for the eigenvalues of deformed Wigner matrices was carried out in [18,19]. In the upcoming paper [2], we shall carry out this programme for the eigenvectors of spiked population models.
In this paper we prove the isotropic Marchenko-Pastur law for sample covariance matrices as well as the isotropic semicircle law for generalized Wigner matrices. Our proofs are based on a novel method, which is considerably more robust than that of [18]. Both proofs (the one from [18] and the one presented here) crucially rely on the entrywise local law as input, but follow completely different approaches to obtain the isotropic law from the entrywise one. The basic idea of the proof in [18] is to use the Green function comparison method to compare the resolvent of a given Wigner matrix to the resolvent of a Gaussian random matrix, for which the isotropic law is a trivial corollary of the entrywise one (by basis transformation). Owing to various moment matching conditions imposed by the Green function comparison, the result of [18] required the variances of all matrix entries to coincide and, for results in the bulk spectrum, the third moments to vanish. In contrast, our current approach does not rely on Green function comparison. Instead, it consists of a precise analysis of the cancellation of fluctuations in Green functions. We use a graphical expansion method inspired by techniques recently developed in [6] to control fluctuations in Green functions of random band matrices.
Our first main result is the isotropic local Marchenko-Pastur law for sample covariance matrices $H = X^* X$, where $X$ is an $M \times N$ matrix. We allow the dimensions of $X$ to differ wildly: we only assume that $\log N \asymp \log M$. In particular, the aspect ratio $\phi = M/N$, a key parameter in the Marchenko-Pastur law, may scale as a power of $N$.
Our entrywise law (required as input for the proof of the isotropic law) is a generalization of the one given in [21]. In addition to generalizing the proof of [21], we simplify and streamline it, so as to obtain a short and self-contained proof.
Our second main result is the isotropic local semicircle law for generalized Wigner matrices. This extends the isotropic law of [18] from Wigner matrices to generalized Wigner matrices, in which the variances of the matrix entries need not coincide. It also dispenses with the third moment assumption of [18] mentioned previously. In fact, our proof applies to even more general matrix models, provided that an entrywise law has been established. As an application of the isotropic laws, we also prove a basis-independent version of eigenvector delocalization for both sample covariance and generalized Wigner matrices.
We conclude with an outline of the paper. In Section 2 we define our models and state our results, first for sample covariance matrices (Section 2.1) and then for generalized Wigner matrices (Section 2.2). The rest of the paper is devoted to the proofs. Since they are very similar for sample covariance matrices and generalized Wigner matrices, we only give the details for sample covariance matrices. Thus, Sections 3-6 are devoted to the proof of the isotropic Marchenko-Pastur law for sample covariance matrices; in Section 7, we describe how to modify the arguments to prove the isotropic semicircle law for generalized Wigner matrices. Section 3 collects some basic identities and estimates that we shall use throughout the proofs. In Section 4 we prove the entrywise local Marchenko-Pastur law, generalizing the results of [21]. The main argument and the bulk of the proof, i.e. the proof of the isotropic law, is given in Section 5. For a sketch of the argument we refer to Section 5.3. Finally, in Section 6 we draw some simple consequences from the isotropic law: optimal control outside of the spectrum and isotropic delocalization bounds.
EJP 19 (2014), paper 33.

Conventions
We use $C$ to denote a generic large positive constant, which may depend on some fixed parameters and whose value may change from one expression to the next. Similarly, we use $c$ to denote a generic small positive constant. For two positive quantities $A_N$ and $B_N$ depending on $N$ we use the notation $A_N \asymp B_N$ to mean $C^{-1} A_N \leq B_N \leq C A_N$ for some positive constant $C$.

Sample covariance matrix
Let $X$ be an $M \times N$ matrix whose entries $X_{i\mu}$ are independent complex-valued random variables satisfying
$$ \mathbb{E} X_{i\mu} = 0, \qquad \mathbb{E} |X_{i\mu}|^2 = \frac{1}{\sqrt{NM}}. \tag{2.1} $$
We shall study the $N \times N$ matrix $X^* X$; hence we regard $N$ as the fundamental large parameter, and write $M \equiv M_N$. Our results also apply to the matrix $XX^*$ provided one replaces $N \leftrightarrow M$; see Remark 2.11 below for more details. We always assume that $M$ and $N$ satisfy the bounds
$$ N^{1/C} \leq M \leq N^{C} \tag{2.2} $$
for some positive constant $C$. We define the ratio $\phi = \phi_N := M/N$, which may depend on $N$. Here, and throughout the following, in order to unclutter notation we omit the argument $N$ in quantities, such as $X$ and $\phi$, that depend on it. We make the following technical assumption on the tails of the entries of $X$. We assume that, for all $p \in \mathbb{N}$, the random variables $(NM)^{1/4} X_{i\mu}$ have a uniformly bounded $p$-th moment: there is a constant $C_p$ such that
$$ \mathbb{E} \big| (NM)^{1/4} X_{i\mu} \big|^p \leq C_p. \tag{2.3} $$
It is well known that the empirical distribution of the eigenvalues of the $N \times N$ matrix $X^* X$ has the same asymptotics as the Marchenko-Pastur law [20]
$$ \nu(dx) := [1 - \phi]_+ \, \delta(dx) + \frac{\sqrt{\phi}}{2\pi} \frac{\sqrt{[(x - \gamma_-)(\gamma_+ - x)]_+}}{x} \, dx, \tag{2.4} $$
where we defined
$$ \gamma_\pm := \sqrt{\phi} + \frac{1}{\sqrt{\phi}} \pm 2 \tag{2.5} $$
to be the edges of the limiting spectrum. Note that (2.4) is normalized so that its integral is equal to one. The Stieltjes transform of the Marchenko-Pastur law (2.4) is
$$ m_\phi(z) := \int \frac{\nu(dx)}{x - z} = \frac{\phi^{1/2} - \phi^{-1/2} - z + i \sqrt{(z - \gamma_-)(\gamma_+ - z)}}{2 \phi^{-1/2} z}, \tag{2.6} $$
where the square root is chosen so that $m_\phi$ is holomorphic in the upper half-plane and satisfies $m_\phi(z) \to 0$ as $z \to \infty$. The function $m_\phi = m_\phi(z)$ is also characterized as the unique solution of the equation
$$ m(z) + \frac{1}{z + z \phi^{-1/2} m(z) - (\phi^{1/2} - \phi^{-1/2})} = 0 \tag{2.7} $$
satisfying $\operatorname{Im} m(z) > 0$ for $\operatorname{Im} z > 0$. The formulas (2.4)-(2.7) were originally derived for the case when $\phi = M/N$ is independent of $N$ (or, more precisely, when $\phi$ has a limit in $(0, \infty)$ as $N \to \infty$). Our results allow $\phi$ to depend on $N$ under the constraint (2.2), so that the law (2.4) and its Stieltjes transform $m_\phi$ may also depend on $N$ through $\phi$.
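The normalization (2.1) and the formulas (2.4)-(2.7) can be checked numerically. The following sketch is our illustration (dimensions, seed, and the bulk test point are arbitrary choices): it samples $X$ with the variance normalization of (2.1), verifies that the spectrum of $X^*X$ concentrates on $[\gamma_-, \gamma_+]$ with mean $\sqrt{\phi}$, and checks that the explicit formula for $m_\phi$ solves the quadratic form of the self-consistent equation and matches the empirical Stieltjes transform.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 400, 800
phi = M / N

# Entries with E X = 0 and E|X|^2 = (NM)^{-1/2}, as in (2.1).
X = rng.standard_normal((M, N)) * (N * M) ** (-0.25)
lam = np.linalg.eigvalsh(X.T @ X)

# Spectral edges gamma_± = sqrt(phi) + 1/sqrt(phi) ± 2.
d = np.sqrt(phi) - 1 / np.sqrt(phi)
gm = np.sqrt(phi) + 1 / np.sqrt(phi) - 2
gp = np.sqrt(phi) + 1 / np.sqrt(phi) + 2

def m_phi(z):
    """Explicit Stieltjes transform of the Marchenko-Pastur law; the
    principal branch is adequate for z in the upper half-plane with
    Re z inside the bulk."""
    return (d - z + 1j * np.sqrt((z - gm) * (gp - z))) / (2 * z / np.sqrt(phi))

z = complex(np.sqrt(phi), 0.1)           # a bulk point with eta = 0.1
m = m_phi(z)

# m_phi solves the quadratic form of the self-consistent equation:
# (z/sqrt(phi)) m^2 + (z - d) m + 1 = 0.
resid = abs(z / np.sqrt(phi) * m ** 2 + (z - d) * m + 1)

# The empirical Stieltjes transform of X*X approximates m_phi.
err = abs(np.mean(1 / (lam - z)) - m)
```

The eigenvalues stay within an $O(N^{-2/3})$ fluctuation of the edges, and `err` is of the order of the local-law error $1/(N\eta)$.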
Throughout the following we use a spectral parameter $z = E + i\eta$, with $\eta > 0$, as the argument of Stieltjes transforms and resolvents. Define the resolvent
$$ R(z) := (X^* X - z)^{-1}. \tag{2.8} $$
For $z \in \mathbb{C}$, define $\kappa := \kappa(z)$ to be the distance of $E = \operatorname{Re} z$ to the spectral edges $\gamma_\pm$, i.e.
$$ \kappa := \min\big( |\gamma_+ - E|, \, |\gamma_- - E| \big). \tag{2.9} $$
The following notion of a high-probability bound was introduced in [6], and has been subsequently used in a number of works on random matrix theory. It provides a simple way of systematizing and making precise statements of the form "$\xi$ is bounded with high probability by $\zeta$ up to small powers of $N$".
Let $\xi = \big(\xi^{(N)}(u) : N \in \mathbb{N}, \, u \in U^{(N)}\big)$ and $\zeta = \big(\zeta^{(N)}(u) : N \in \mathbb{N}, \, u \in U^{(N)}\big)$ be two families of nonnegative random variables, where $U^{(N)}$ is a possibly $N$-dependent parameter set. We say that $\xi$ is stochastically dominated by $\zeta$, uniformly in $u$, if for all (small) $\varepsilon > 0$ and (large) $D > 0$ we have
$$ \sup_{u \in U^{(N)}} \mathbb{P}\Big[ \xi^{(N)}(u) > N^{\varepsilon} \, \zeta^{(N)}(u) \Big] \leq N^{-D} $$
for large enough $N \geq N_0(\varepsilon, D)$. Throughout this paper the stochastic domination will always be uniform in all parameters (such as matrix indices and the spectral parameter $z$) that are not explicitly fixed. Note that $N_0(\varepsilon, D)$ may depend on the constants from (2.2) and (2.3) as well as any constants fixed in the assumptions of our main results. If $\xi$ is stochastically dominated by $\zeta$, uniformly in $u$, we use the notation $\xi \prec \zeta$. Moreover, if for some complex family $\xi$ we have $|\xi| \prec \zeta$ we also write $\xi = O_\prec(\zeta)$.
Define $K := \min(M, N)$, which is the number of nontrivial (i.e. nonzero) eigenvalues of $X^* X$; the remaining $N - K$ eigenvalues of $X^* X$ are zero. (Note that the $K$ nontrivial eigenvalues of $X^* X$ coincide with those of $XX^*$.) Fix a (small) $\omega \in (0, 1)$ and define the domain
$$ \mathbf{S} \equiv \mathbf{S}(\omega, K) := \big\{ z = E + i\eta : \kappa \leq \omega^{-1}, \; K^{-1+\omega} \leq \eta \leq \omega^{-1}, \; |z| \geq \omega \big\}. $$
Throughout the following we regard $\omega$ as fixed once and for all, and do not track the dependence of constants on $\omega$.
uniformly in $z \in \mathbf{S}$ and any deterministic unit vectors $v, w \in \mathbb{C}^N$. Beyond the support of the limiting spectrum, one has stronger control all the way down to the real axis. For fixed (small) $\omega \in (0, 1)$ define the region $\widetilde{\mathbf{S}} \equiv \widetilde{\mathbf{S}}(\omega, K)$ of spectral parameters separated from the asymptotic spectrum by $K^{-2/3+\omega}$, which may have an arbitrarily small positive imaginary part $\eta$.
uniformly in $z \in \widetilde{\mathbf{S}}$ and any deterministic unit vectors $v, w \in \mathbb{C}^N$. Remark 2.6. All probabilistic estimates (2.12)-(2.15) of Theorems 2.4 and 2.5 may be strengthened to hold simultaneously for all $z \in \mathbf{S}$ and for all $z \in \widetilde{\mathbf{S}}$, respectively. For instance, (2.12) may be strengthened to a bound on the supremum over $z \in \mathbf{S}$ of the left-hand side, valid for all $\varepsilon > 0$, $D > 0$, and $N \geq N_0(\varepsilon, D)$.
In the case of Theorem 2.5 this generalization is an immediate consequence of its proof, and in the case of Theorem 2.4 it follows from a simple lattice argument combined with the Lipschitz continuity of $R$ and $m_\phi$ on $\mathbf{S}$. See e.g. [10, Corollary 3.19] for the details.
Remark 2.7. The right-hand side of (2.15) is stable under the limit $\eta \to 0$, and may therefore be extended to $\eta = 0$. Recalling the previous remark, we conclude that (2.15) also holds for $\eta = 0$. The next results are on the nontrivial eigenvalues of $X^* X$ as well as the corresponding eigenvectors. As remarked above, the matrix $X^* X$ has $K$ nontrivial eigenvalues, which we order according to $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_K$. Let $u^{(1)}, \ldots, u^{(K)} \in \mathbb{C}^N$ be the normalized eigenvectors of $X^* X$ associated with the nontrivial eigenvalues $\lambda_1, \ldots, \lambda_K$.
uniformly for $\alpha \leq (1 - \varepsilon) K$ and all normalized $v \in \mathbb{C}^N$. If in addition $|\phi - 1| \geq c$ for some constant $c > 0$, then (2.16) holds uniformly for all $\alpha \leq K$. Remark 2.9. Isotropic delocalization bounds in particular imply that the entries $u^{(\alpha)}_i$ of the eigenvectors $u^{(\alpha)}$ are strongly oscillating. The following result is on the rigidity of the nontrivial eigenvalues of $X^* X$, which coincide with the nontrivial eigenvalues of $XX^*$. Let $\gamma_1 \geq \gamma_2 \geq \cdots \geq \gamma_K$ be the classical eigenvalue locations according to the Marchenko-Pastur law (2.4); see (2.20).
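Both the entrywise and the isotropic flavour of eigenvector delocalization are easy to observe numerically. The following sketch is our illustration (dimensions, seed, and the flat test vector are arbitrary choices): for a sample covariance matrix, every eigenvector entry, and every overlap with a fixed deterministic unit vector, is of size $N^{-1/2}$ up to logarithmic factors, far below the order-one size a localized vector would produce.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 300, 600
X = rng.standard_normal((M, N)) * (N * M) ** (-0.25)
_, U = np.linalg.eigh(X.T @ X)        # columns are normalized eigenvectors

# Entrywise delocalization: sup norm of each eigenvector is
# O(N^{-1/2}) up to logarithmic corrections (N^{-1/2} ~ 0.058 here).
sup_norms = np.abs(U).max(axis=0)

# Isotropic delocalization: the overlap with a FIXED deterministic unit
# vector (here the flat vector, an arbitrary test choice) is also small.
v = np.ones(N) / np.sqrt(N)
overlaps = np.abs(U.T @ v)
```

A localized eigenvector would have `sup_norms` of order one; here both quantities stay at the $\sqrt{\log N / N} \approx 0.2$ scale.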

Generalized Wigner matrix
Let $H = H^*$ be an $N \times N$ Hermitian matrix whose entries $H_{ij}$ are independent complex-valued random variables for $i \leq j$. We always assume that the entries are centred, i.e. $\mathbb{E} H_{ij} = 0$. Moreover, we assume that the variances $S_{ij} := \mathbb{E} |H_{ij}|^2$ satisfy
$$ \sum_{j=1}^{N} S_{ij} = 1 \quad \text{for all } i, \qquad \frac{c}{N} \leq S_{ij} \leq \frac{C}{N} \tag{2.22} $$
for some positive constants $c$ and $C$; such matrices are called generalized Wigner matrices. The associated limiting eigenvalue distribution is the Wigner semicircle law
$$ \varrho(dx) := \frac{1}{2\pi} \sqrt{[4 - x^2]_+} \, dx, $$
with Stieltjes transform
$$ m(z) := \int \frac{\varrho(dx)}{x - z} = \frac{-z + \sqrt{z^2 - 4}}{2}; $$
here we chose the square root so that $m$ is holomorphic in the upper half-plane and satisfies $m(z) \to 0$ as $z \to \infty$. Note that $m = m(z)$ is also characterized as the unique solution of
$$ m(z) + \frac{1}{z + m(z)} = 0 $$
satisfying $\operatorname{Im} m(z) > 0$ for $\operatorname{Im} z > 0$. Fix a (small) $\omega \in (0, 1)$ and define
$$ \mathbf{S}_W \equiv \mathbf{S}_W(\omega, N) := \big\{ z = E + i\eta : |E| \leq \omega^{-1}, \; N^{-1+\omega} \leq \eta \leq \omega^{-1} \big\}. $$
Theorem 2.12 (Isotropic local semicircle law). We have
$$ \big\langle v, (H - z)^{-1} w \big\rangle - m(z) \langle v, w \rangle = O_\prec\!\bigg( \sqrt{\frac{\operatorname{Im} m(z)}{N \eta}} + \frac{1}{N \eta} \bigg) $$
uniformly in $z \in \mathbf{S}_W$ and any deterministic unit vectors $v, w \in \mathbb{C}^N$.
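The point of the generalized Wigner class is that the variances need not be constant, yet the semicircle law still governs the spectrum. The following sketch is our illustration, not part of the paper: the variance profile $S$ below is an arbitrary choice constructed to have exact row sums 1 and entries between $1/(2N)$ and $3/(2N)$.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 600

# A variance profile with exact row sums 1 and c/N <= S_ij <= C/N
# (an illustrative choice): v sums to 0 exactly over a full period.
v = np.cos(2 * np.pi * (np.arange(N) + 0.5) / N)
S = (1.0 + 0.5 * np.outer(v, v)) / N      # entries in [1/(2N), 3/(2N)]
row_sums = S.sum(axis=1)                  # identically 1 up to rounding

# Gaussian generalized Wigner matrix with Var H_ij = S_ij (off-diagonal).
A = rng.standard_normal((N, N))
H = ((A + A.T) / np.sqrt(2)) * np.sqrt(S)
eigs = np.linalg.eigvalsh(H)

# Compare the empirical Stieltjes transform to the semicircle transform.
z = 0.5j
m_sc = (-z + np.sqrt(z * z - 4)) / 2      # branch with Im m > 0
m_emp = np.mean(1 / (eigs - z))
```

Despite the non-constant profile, the spectrum concentrates on $[-2, 2]$ and `m_emp` agrees with `m_sc` up to an error of order $1/(N\eta)$.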
Theorem 2.12 is the isotropic generalization of the following result, proved in [9]. A similar result first appeared in [16]. Theorem 2.13 (Local semicircle law, [9, 16]). The corresponding entrywise estimates hold uniformly in $z \in \mathbf{S}_W$ and $i, j = 1, \ldots, N$. Theorem 2.14. Let $S \subset \mathbf{S}_W$ be an $N$-dependent spectral domain, $m(z)$ a deterministic function on $S$ satisfying $c \leq |m(z)| \leq C$ for $z \in S$, and $\Psi(z)$ a deterministic control parameter satisfying $c N^{-1} \leq \Psi(z) \leq N^{-c}$ for $z \in S$ and some constant $c > 0$. Suppose that the entrywise law holds on $S$ with control parameter $\Psi$. Then the corresponding isotropic law holds uniformly in $z \in S$ and any deterministic unit vectors $v, w \in \mathbb{C}^N$.
The proof of Theorem 2.14 is the same as that of Theorem 2.12. Below we give the proof of Theorem 2.12; it can be trivially adapted to yield Theorem 2.14.
Combining Theorem 2.14 with the entrywise local semicircle law from [9], we may for instance obtain an isotropic local semicircle law for matrices in which the lower bound of (2.22) is relaxed, so that some matrix entries may vanish.
Beyond the support of the limiting spectrum $[-2, 2]$, the statement of Theorem 2.12 may be improved to a bound that is stable all the way down to the real axis. For fixed (small) $\omega \in (0, 1)$ define the region $\widetilde{\mathbf{S}}_W$ of spectral parameters separated from the asymptotic spectrum by $N^{-2/3+\omega}$, which may have an arbitrarily small positive imaginary part $\eta$. The statements in Theorems 2.12 and 2.15 can also be strengthened to apply simultaneously for all $z \in \mathbf{S}_W$ and $z \in \widetilde{\mathbf{S}}_W$, respectively; see Remark 2.6. Let $u^{(1)}, \ldots, u^{(N)}$ denote the normalized eigenvectors of $H$ associated with the eigenvalues $\lambda_1, \ldots, \lambda_N$. Finally, in analogy to Theorem 2.10, we record the following rigidity result, which is a trivial consequence of [9, Theorem 7.6] with $X = C N^{-2/3}$ and $Y = C N^{-1}$; see also [16, Theorem 2.2]. Write $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N$ for the eigenvalues of $H$, and let $\gamma_1 \geq \gamma_2 \geq \cdots \geq \gamma_N$ be their classical locations according to the semicircle law.

Preliminaries
The rest of this paper is devoted to the proofs of our main results. They are similar for sample covariance matrices and generalized Wigner matrices, and in Sections 3-6 we give the argument for sample covariance matrices (hence proving the results of Section 2.1). How to modify these arguments to generalized Wigner matrices (and hence prove the results of Section 2.2) is explained in Section 7. We choose to present our method in the context of sample covariance matrices mainly for two reasons. First, we take this opportunity to give a version of the entrywise local law (Section 4) -required as input for the proof of the isotropic law -which is more general and has a simpler proof than the local law previously established in [21]. Second, the proof of the isotropic law in the case of sample covariance matrices is conceptually slightly clearer due to a natural splitting of summation indices into two categories (which we distinguish by the use of Latin and Greek letters); this splitting is an essential structure behind our proof in Section 5, and is also used in the case of generalized Wigner matrices, in which case it is however purely artificial.
We now move on to the proofs. In order to unclutter notation, we shall often omit the argument z from quantities that depend on it. Thus, we for instance often write G instead of G(z). We put the arguments z back when needed, typically if we are working with several different spectral parameters z.

Basic tools
We begin by recording some basic large deviations estimates. We consider complex-valued random variables $\xi$ satisfying $\mathbb{E} \xi = 0$ and $\mathbb{E} |\xi|^p \leq C_p$ for all $p \in \mathbb{N}$ and some constants $C_p$. The following lemma collects basic algebraic properties of stochastic domination $\prec$.
We shall use them tacitly throughout the following.

1. Suppose that $\xi(u, v) \prec \zeta(u, v)$ uniformly in $u$ and $v$. If $|V| \leq N^C$ for some constant $C$, then $\sum_{v \in V} \xi(u, v) \prec \sum_{v \in V} \zeta(u, v)$ uniformly in $u$. 2. Suppose that $\xi_1(u) \prec \zeta_1(u)$ uniformly in $u$ and $\xi_2(u) \prec \zeta_2(u)$ uniformly in $u$. Then $\xi_1(u) \, \xi_2(u) \prec \zeta_1(u) \, \zeta_2(u)$ uniformly in $u$. 3. Suppose that $\Psi(u) \geq N^{-C}$ is deterministic and $\xi(u)$ is a nonnegative random variable satisfying $\mathbb{E} \xi(u)^2 \leq N^C$ for all $u$. Then, provided that $\xi(u) \prec \Psi(u)$ uniformly in $u$, we have $\mathbb{E} \xi(u) \prec \Psi(u)$ uniformly in $u$. Proof. Claims 1 and 2 follow from a simple union bound. For claim 3, pick $\varepsilon > 0$ and assume to simplify notation that $\xi$ and $\Psi$ do not depend on $u$. Then
$$ \mathbb{E} \xi \;=\; \mathbb{E} \big[ \xi \, \mathbf{1}(\xi \leq N^\varepsilon \Psi) \big] + \mathbb{E} \big[ \xi \, \mathbf{1}(\xi > N^\varepsilon \Psi) \big] \;\leq\; N^\varepsilon \Psi + \sqrt{\mathbb{E} \xi^2} \sqrt{\mathbb{P}[\xi > N^\varepsilon \Psi]} \;\leq\; N^\varepsilon \Psi + N^{C/2 - D/2} $$
for arbitrary $D > 0$. Claim 3 therefore follows by choosing $D \geq 3C$.
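To fix ideas, the union bound behind the first two claims can be spelled out; for the product rule it reads as follows (this computation is our addition, under the standing assumption that all families are nonnegative):

```latex
% Product rule for stochastic domination, via a union bound.
% Given \xi_1 \prec \zeta_1 and \xi_2 \prec \zeta_2, fix \varepsilon > 0, D > 0.
% On the complement of the two bad events we have
% \xi_1 \xi_2 \le N^{2\varepsilon} \zeta_1 \zeta_2, so that
\begin{aligned}
\mathbb{P}\bigl[\xi_1 \xi_2 > N^{2\varepsilon} \zeta_1 \zeta_2\bigr]
  \;\le\; \mathbb{P}\bigl[\xi_1 > N^{\varepsilon} \zeta_1\bigr]
        + \mathbb{P}\bigl[\xi_2 > N^{\varepsilon} \zeta_2\bigr]
  \;\le\; 2 N^{-D}.
\end{aligned}
% Since \varepsilon and D are arbitrary (the factor 2 is absorbed by
% increasing D), this yields \xi_1 \xi_2 \prec \zeta_1 \zeta_2.
```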
Next, we give some basic facts about the Stieltjes transform $m_\phi$ of the Marchenko-Pastur law defined in (2.6). They have an especially simple form in the case $\phi \geq 1$; the complementary case $\phi < 1$ can be easily handled using (2.20). Recall the definition (2.9) of $\kappa$. We record the elementary properties of $m_\phi$ stated in (3.5) (bounds on $|m_\phi(z)|$) and (3.6) (bounds on $\operatorname{Im} m_\phi(z)$), which may be proved e.g. by starting from the explicit form (2.6). In addition to the resolvent $R$ from (2.8), we shall need another resolvent, $G$:
$$ G(z) := (XX^* - z)^{-1}. $$
Although our main results only pertain to $R$, the resolvent $G$ will play a crucial role in the proofs, in which we consider both $X^* X$ and $XX^*$ in tandem. In the following formulas the spectral parameter $z$ plays no explicit role, and we therefore omit it from the notation, as explained at the beginning of this section. The idea is that we are observing the statistics of a population of size $M$ by making $N$ independent measurements ("samples") of the population. Each observation is a column of $X$. Hence the population index $i$ labels the rows of $X$ and the sample index $\mu$ the columns of $X$.
Moreover, for $i, j \notin T$ we define the resolvent entries $G^{[T]}_{ij}$ of the corresponding minors. When $T = \{a\}$, we abbreviate $(\{a\})$ by $(a)$ in the above definitions; similarly, we write $(ab)$ instead of $(\{a, b\})$.
We shall use the following expansion formulas for G.
Moreover, for $i, j \notin T$ with $i \neq j$, and similarly for $\mu, \nu \notin T$ with $\mu \neq \nu$, we have the expansion formulas (3.10) and (3.11), which relate the entries of $G^{[T]}$ and $R^{[T]}$ to those of resolvents with additional indices removed. The following lemma is an immediate consequence of the fact that for $\phi \geq 1$ the spectrum of $XX^*$ is equal to the spectrum of $X^* X$ plus $M - N$ zero eigenvalues. (A similar result holds for $\phi \leq 1$, and if $X$ is replaced with $X^{[T]}$ or $X^{(U)}$.) In particular, we find
$$ \frac{1}{M} \operatorname{Tr} G(z) \;=\; \frac{1}{\phi} \cdot \frac{1}{N} \operatorname{Tr} R(z) - \Big( 1 - \frac{1}{\phi} \Big) \frac{1}{z}, \tag{3.13} $$
in agreement with (2.20) and the heuristics $M^{-1} \operatorname{Tr} G \sim m_{\phi^{-1}}$ and $N^{-1} \operatorname{Tr} R \sim m_\phi$.
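The trace relation between the two resolvents is an exact algebraic identity, since each of the $M - N$ zero eigenvalues of $XX^*$ contributes $-1/z$ to $\operatorname{Tr} G$. A quick numerical check (our illustration; dimensions, seed, and the test point $z$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 150, 250
X = rng.standard_normal((M, N)) * (N * M) ** (-0.25)
z = complex(1.0, 0.5)

R = np.linalg.inv(X.T @ X - z * np.eye(N))   # R(z) = (X*X - z)^{-1}
G = np.linalg.inv(X @ X.T - z * np.eye(M))   # G(z) = (XX* - z)^{-1}

# XX* shares the nonzero spectrum of X*X and carries M - N extra zero
# eigenvalues, each contributing -1/z to the trace of G.
lhs = np.trace(G) / M
rhs = (N / M) * (np.trace(R) / N) - (1 - N / M) / z
gap = abs(lhs - rhs)
```

Here `gap` vanishes up to floating-point rounding, confirming the identity for every $z$ off the spectrum, not just asymptotically.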
The following lemma is an easy consequence of the well-known interlacing property of the eigenvalues of $XX^*$ and $X^{(i)} (X^{(i)})^*$, as well as the eigenvalues of $X^* X$ and $(X^{(\mu)})^* X^{(\mu)}$. Finally, we record the fundamental identity
$$ \sum_{j} \big| G^{[T]}_{ij} \big|^2 \;=\; \frac{\operatorname{Im} G^{[T]}_{ii}}{\eta}, \tag{3.14} $$
which follows easily by spectral decomposition of $G^{[T]}$.
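The identity (3.14) (often called a Ward identity) holds exactly for the resolvent of any Hermitian matrix, as the spectral decomposition shows. A minimal numerical check (our illustration; the matrix, row index, and spectral parameter are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
M = 120
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
H = (B + B.conj().T) / 2                  # a Hermitian test matrix

eta = 0.3
z = complex(0.7, eta)
G = np.linalg.inv(H - z * np.eye(M))

# Ward identity: sum_j |G_ij|^2 = Im G_ii / eta, for every row i.
i = 5
lhs = np.sum(np.abs(G[i, :]) ** 2)
rhs = G[i, i].imag / eta
```

The two sides agree to machine precision; in the proofs this converts quadratic sums of resolvent entries into single diagonal entries, at the price of a factor $1/\eta$.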

Reduction to the case $\phi \geq 1$
We shall prove Theorems 2.4 and 2.5 by restricting ourselves to the case $\phi \geq 1$ but considering both $X^* X$ and $XX^*$ simultaneously. In this short section we give the details of this reduction. Define the control parameter
$$ \Psi(z) := \sqrt{\frac{\operatorname{Im} m_\phi(z)}{N \eta}} + \frac{1}{N \eta}. $$
We shall in fact prove the following. Recall the definitions (2.11) of $\mathbf{S}$ and (2.14) of $\widetilde{\mathbf{S}}$.
uniformly in $z \in \mathbf{S}$ and any deterministic unit vectors $v, w \in \mathbb{C}^N$ (and similarly with $\mathbf{S}$ replaced by $\widetilde{\mathbf{S}}$). Let $u^{(1)}, \ldots, u^{(N)} \in \mathbb{C}^N$ denote the normalized eigenvectors of $X^* X$ associated with the nontrivial eigenvalues $\lambda_1, \ldots, \lambda_N$, and let $\widetilde{u}^{(1)}, \ldots, \widetilde{u}^{(N)} \in \mathbb{C}^M$ denote the normalized eigenvectors of $XX^*$ associated with the same eigenvalues $\lambda_1, \ldots, \lambda_N$.
What remains therefore is to prove Theorems 2.10, 3.11, 3.12, and 3.13. We shall prove Theorem 2.10 in Section 4.3, Theorem 3.11 in Section 5, and Theorems 3.12 and 3.13 in Section 6.

The entrywise local Marchenko-Pastur law
In this section we prove an entrywise version of Theorem 3.11, in which the vectors $v$ and $w$ are taken to be standard basis vectors of $\mathbb{C}^N$. This result, Theorem 4.1 below, generalizes the entrywise local law of [21] in the following two ways. (i) The restriction $1 \leq \phi \leq C$ in [21] is relaxed to $1 \leq \phi \leq N^C$ (and hence, as explained in Section 3.2, to $N^{-C} \leq \phi \leq N^C$). (ii) The uniform subexponential decay assumption of [21] is relaxed to (2.3). On the other hand, thanks to the stronger subexponential decay assumption the statement of Theorem 3.1 of [21] is slightly stronger than Theorem 4.1: in Theorem 3.1 of [21], the error bounds $N^\varepsilon$ in the definition of $\prec$ are replaced with $(\log N)^{C \log \log N}$.
The difference (ii) given above is technical and amounts to using Lemma 3.1, which is tailored for random variables satisfying (2.3), for the large deviation estimates. We remark that all of the arguments of the current paper may be translated to the setup of [21], explained in (ii) above, by modifying the definition of ≺. The essence of the proofs remains unchanged; the only nontrivial difference is that in Section 5 we have to control moments whose power depends weakly on N ; this entails keeping track of some basic combinatorial bounds. We do not pursue this modification any further.
The difference (i) is more substantial, and requires keeping track of the $\phi$-dependence of all appropriately rescaled quantities throughout the proof. In addition, we take this opportunity to simplify and streamline the argument from [21]. This provides a short and self-contained proof of Theorem 4.1, up to a fluctuation averaging result, Lemma 4.9 below, which was proved in the current simple and general form in [9].

A weak local Marchenko-Pastur law
We begin with the proof of (4.1) and (3.18). For the following it will be convenient to use rescaled spectral parameters, in terms of which we may write the defining equation (2.7) of $m_\phi$ as (4.4). From the definition (2.11) of $\mathbf{S}$, we find (4.5) for all $z \in \mathbf{S}$. We remark that, as in [21], the Stieltjes transform $m_\phi$ satisfies $|m_\phi(z)| \asymp 1$ for $z \in \mathbf{S}$; see (3.5).
We define the $z$-dependent random control parameters $\Lambda$, $\Lambda_o$, and $\Theta$, expressed in terms of the Stieltjes transform of the empirical density of $X^* X$,
$$ m_R(z) := \frac{1}{N} \operatorname{Tr} R(z). $$
The goal of this subsection is to prove the following weaker variant of Theorem 4.1.
Proof. The proof is a simple induction argument using (3.10) and the bound $|m_\phi| \geq c$ from (3.5). We omit the details.
As in the works [16, 21], the main idea of the proof is to derive a self-consistent equation for $m_R = \frac{1}{N} \sum_\mu R_{\mu\mu}$ using the resolvent identity (3.11). To that end, we introduce the conditional expectation $\mathbb{E}_\mu$, i.e. the partial expectation in the randomness of the $\mu$-th column of $X$. We define $Z_\mu$ in (4.10); in the last step there we used (2.1) and (4.3). Using (3.11) with $T = \emptyset$ and Lemma 3.9, we obtain the self-consistent equation (4.11). The following lemma contains the key estimates needed to control the error terms $Z_\mu$ and $\Lambda_o$. The errors are controlled using the (random) control parameter $\Psi_\Theta$ defined in (4.12), whose analogue in the context of Wigner matrices first appeared in [16].
as well as (4.14). Proof. The proof is very similar to that of Theorems 6.8 and 6.9 of [21]. We consequently only give the details for the estimate of $\Lambda_o$; the estimate of $Z_\mu$ is similar.
On the event $\Xi$, we estimate the right-hand side using (4.16), where the first step follows from (3.14), the second from Lemma 3.9, the third from the definition (4.12) of $\Psi_\Theta$, and the fourth from the fact that $\operatorname{Im} m_\phi \geq c \eta$ by (3.6). Recalling (3.5), we have therefore proved the $\Lambda_o$-estimate of (4.13), which, together with the analogous bound for $Z_\mu$, concludes the proof of (4.13).
In order to prove the estimate $\Lambda_o \prec \Psi_\Theta$ from (4.14) for $\eta \geq (\log N)^{-1}$, we proceed similarly. From (4.15) and the trivial deterministic bound $|R_{\mu\mu}| \leq \|R\| \leq \eta^{-1}$, we obtain an estimate similar to (4.16), except that in the last step we use Lemma 3.10 to estimate $\operatorname{Tr} R^{(\mu\nu)} - \operatorname{Tr} R$. Since $\eta \geq (\log N)^{-1}$, we easily find that $|R_{\mu\nu}| \prec \Psi_\Theta$. This concludes the proof.
As in [21, Equation (6.13)], in order to analyse the stability of the equation (4.4) we introduce the operation $D$ acting on functions $u$ of the spectral parameter $z$. Note that, by (4.4), the function $m_\phi$ satisfies $D(m_\phi) = 0$. Next, we derive a stability result for $D^{-1}$. Roughly, we prove that if $D(u)$ is small then $u$ is close to $m_\phi$. Note that this result is entirely deterministic. It relies on a discrete continuity argument, whose essence is the existence of a sufficiently large gap between the two solutions of $D(\cdot) = 0$. Once this gap is established, then, together with the fact that $u$ is close to $m_\phi$ for large $\eta$, we may conclude that $u$ is close to $m_\phi$ for smaller $\eta$ as well. We use a discrete version of a continuity argument (as opposed to a continuous one used e.g. in [21]), which allows us to bypass several technical issues when applying it to estimating the random quantity $|m_R - m_\phi|$. For more details of this application, see the explanation following (4.34).
For $z \in \mathbf{S}$ introduce the discrete set $L(z)$. Thus, if $\operatorname{Im} z \geq 1$ then $L(z) = \{z\}$, and if $\operatorname{Im} z < 1$ then $L(z)$ is a one-dimensional lattice with spacing $N^{-5}$ plus the point $z$. Clearly, we have the bound (4.19) on $|L(z)|$. There exists a constant $\varepsilon > 0$ such that the following holds. Suppose that $\delta : \mathbf{S} \to \mathbb{R}_+$ satisfies $N^{-2} \leq \delta(z) \leq \varepsilon$ for $z \in \mathbf{S}$ and that $\delta$ is Lipschitz continuous with Lipschitz constant $N$. Suppose moreover that for each fixed $E$, the function $\eta \mapsto \delta(E + i\eta)$ is nonincreasing for $\eta > 0$. Suppose that $u : \mathbf{S} \to \mathbb{C}$ is the Stieltjes transform of a probability measure. Let $z \in \mathbf{S}$, and suppose that for all $w \in L(z)$ we have $|D(u)(w)| \leq \delta(w)$.
Then we have $|u(z) - m_\phi(z)| \leq C \delta(z)$ for some constant $C$ independent of $z$ and $N$. Proof. Let $u$ be as in Lemma 4.5, and abbreviate $R := D(u)$. Hence, by assumption on $u$, we have $|R| \leq \delta$. We introduce $u_1 \equiv u_1^R$ and $u_2 \equiv u_2^R$ by setting $u_1 := u$ and defining $u_2$ as the other solution of the quadratic equation $D(u) = R$. Note that each $u_i$ is continuous. Explicitly, for $|R| \leq 1/2$ we may solve the quadratic equation to get (4.20). (Note that the sign $\pm$ in the expression for $u_{1,2}$ bears no relation to the indices 1, 2, since we have not even specified which complex square root we take.) In particular, for $R = 0$ we have $\lambda_{\pm, R=0} = \gamma_\pm$, defined in (2.5). Observe that elementary perturbation bounds hold for any complex square root $\sqrt{\cdot}$ and arguments $w$, $\zeta$; we use these formulas to compare (4.20) with a small $R$ to (4.20) with $R = 0$. Thus we conclude from (4.20) and (4.5) that for $i = 1$ or for $i = 2$ we have (4.21) for some constant $C_0 \geq 2$. What remains is to show that (4.21) holds for $i = 1$. We shall prove this using a continuity argument. Note first that (4.20) and (4.5) yield (4.22) for some constant $C_1 \geq 1$.
Note that by the lower bound of (4.22) the two roots $u_1^R(i)$ and $u_2^R(i)$ are distinct, and they are continuous in $R$. Therefore there is an $\varepsilon \in (0, 1/2]$ such that for $|R(i)| \leq \varepsilon$ we have, after possibly increasing $C_0$, that (4.23) holds at $z = i$. Next, we note that (4.21) and (4.22) imply, for any $z$ with $\operatorname{Im} z \geq 1$, that $|u_i - m_\phi| \leq C_0 |R|$ for some $i \in \{1, 2\}$, and that $|u_1 - u_2| \geq (2 C_1)^{-1}$. Hence, requiring that $\varepsilon \leq (8 C_0 C_1)^{-1}$, we find from (4.23) with $z = i$ and using the continuity of $u_1$ that (4.23) holds provided $\operatorname{Im} z \geq 1$.
By the monotonicity assumption on $\delta$ we find that (4.24) holds for all $z_l \in L(z)$. We now prove (4.26) by induction on $l$; in the chain of inequalities establishing the induction step, the second step uses the induction assumption, the third step the Lipschitz continuity of $\delta$ and the bound $\eta \geq N^{-1}$, and the last step the bounds $\delta \geq N^{-2}$ and $\kappa + \eta + \delta \leq C$. Next, recalling (4.24), it is easy to deduce (4.26) with $l$ replaced by $l + 1$, using the bounds (4.25) and (4.27). This concludes the proof.
We may now combine the probabilistic estimates from Lemma 4.4 with the stability of $D^{-1}$ from Lemma 4.5 to get the following result for $\operatorname{Im} z \geq 1$, which will be used as the starting estimate in the bootstrapping in $\eta$. Lemma 4.6. We have $\Lambda \prec N^{-1/4}$ uniformly in $z \in \mathbf{S}$ satisfying $\operatorname{Im} z \geq 1$.
Next, we plug the estimates from Lemma 4.4 into (4.11) in order to obtain estimates on $m_R$. The summation in $m_R = \frac{1}{N} \sum_\mu R_{\mu\mu}$ will give rise to an error term of the form (4.30), uniformly in $\mu$ and $z \in \mathbf{S}$, as well as (4.31), uniformly in $z \in \mathbf{S}$.
where we absorbed the error term $N^{-1}$ on the right-hand side of (4.11) into $\Psi_\Theta^2$ using (3.6). Thus, using (4.13) we get (4.32). After taking the average $[\,\cdot\,] = \frac{1}{N} \sum_\mu (\cdot)$, the second term on the right-hand side vanishes.
Taking the average of (4.32) therefore yields, using (4.30) and (4.3),

from which the claim follows.
From (4.31) and (4.13) we get

uniformly in z ∈ S. In order to conclude the proof of Proposition 4.2, we use a continuity argument. The main ingredients are (4.33), Lemma 4.5, and Lemma 4.6. Choose ε < ω/4 and an arbitrary D > 0. It is convenient to introduce the random function

v(z) := max

Our goal is to prove that with high probability there is a gap in the range of v, i.e.
for all z ∈ S and large enough N ≥ N_0(ε, D). This equation says that with high probability the range of v has a gap: it cannot take values in the interval (N^{ε/2}, N^ε]. The basic idea behind the proof of (4.34) is to use the deterministic result from Lemma 4.5 to propagate smallness of the random variable Λ(z) from large values of η to smaller values of η. Since we are dealing with random variables, one has to keep track of probabilities of exceptional events. To that end, we only work on a discrete set of values of η, which allows us to control the exceptional probabilities by a simple union bound. We remark that the first instance of such a stochastic continuity argument combined with stability of a self-consistent equation was given in [11] in the context of Wigner matrices. Over the years it has been improved through several papers in the context of Wigner matrices [9, 15, 16] as well as in the context of sample covariance matrices [13, 21].
Next, we prove (4.34). Since {v(z) ≤ N^ε} ⊂ Ξ(z) ∩ Ξ(w) for all z ∈ S and w ∈ L(z), we find that (4.33) implies, for all z ∈ S and w ∈ L(z) and for large enough N ≥ N_0(ε, D) (independent of z and w), that

Using (4.19) and a union bound, we therefore get

(Here we used the trivial observation that the conclusion of Lemma 4.5 is valid not only at z but in the whole set L(z).) Using (4.13) and (4.30) we therefore get (4.34).
We conclude the proof of Proposition 4.2 by combining (4.34) and Lemma 4.6 with a continuity argument, similar to the proof of [9, Proposition 5.3]. We choose a lattice ∆ ⊂ S such that |∆| ≤ N^{10} and for each z ∈ S there exists a w ∈ ∆ satisfying |z − w| ≤ N^{-4}. Then (4.34) combined with a union bound yields

for some (in fact any) z ∈ S satisfying Im z ≥ 1. It is not hard to infer from (4.36) and
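The choice of the lattice parameters rests on elementary continuity and union bound estimates; schematically (the exponent D + 10 below stands in for the high-probability bound available at each fixed w):

```latex
% Each resolvent entry is Lipschitz in z with constant \|G\|^2 \le \eta^{-2} \le N^2,
% so for w \in \Delta with |z - w| \le N^{-4}:
|\Lambda(z) - \Lambda(w)| \;\le\; C N^{2}\,|z - w| \;\le\; C N^{-2}\,.
% A union bound over the lattice costs only a polynomial factor:
\mathbb{P}\Bigl(\,\exists\, w \in \Delta:\ \text{the bound fails at } w\Bigr)
\;\le\; |\Delta|\,\max_{w\in\Delta}\mathbb{P}(\text{failure at } w)
\;\le\; N^{10}\cdot N^{-(D+10)} \;=\; N^{-D}\,.
```

Thus a bound established on the lattice extends to all of S at the price of an error O(N^{-2}), which is negligible on the scales considered here.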

Fluctuation averaging and conclusion of the proof of Theorem 4.1
In order to improve the negative power of (Nη) in the estimates established so far, we prove the following lemma, whose estimate holds uniformly in z ∈ S.
In order to prove Lemma 4.8, we invoke the following fluctuation averaging result. We remark that the fluctuation averaging mechanism was first exploited in [14]. Here we use the result from [9], where a general version with a streamlined proof was given.
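The heuristic behind fluctuation averaging is that an average of N centred fluctuations, each of size one, is of size N^{-1/2} rather than one. In the caricature of independent Rademacher signs this can be verified exactly by exhaustive enumeration (the actual Lemma 4.9 concerns weakly dependent resolvent fluctuations and a stronger notion of smallness; this snippet only illustrates the averaging gain):

```python
from itertools import product

def avg_square_of_mean(n):
    """Exact value of E[((1/n) * sum_i s_i)^2] over i.i.d. Rademacher
    signs s_i in {-1, +1}, computed by exhaustive enumeration."""
    total = 0.0
    for signs in product((-1, 1), repeat=n):
        total += (sum(signs) / n) ** 2
    return total / 2 ** n

# Each individual sign has size one, but the average fluctuates on the
# scale n^{-1/2}: E[mean^2] = 1/n exactly.
for n in (2, 4, 8):
    assert abs(avg_square_of_mean(n) - 1.0 / n) < 1e-12
```
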
Recall the definition of the partial expectation E_{[µ]} from (4.9).

Proof. This result was given in a slightly different context in Theorem 4.7 in [9]. However, the proof of Theorem 4.7 in [9] carries over word for word to the present setting; see Remark B.3 in [9]. The proof relies only on the identity (3.10), which is the analogue of Equation (4.6) in [9].
Remark 4.10. The conclusion of Lemma 4.9 remains true under somewhat more general hypotheses, whereby Λ is not required to be small. Indeed, (4.41) holds provided that Φ_o is as in Lemma 4.9 and that

The proof is the same as that of Theorem 4.7 in [9].
Proof of Lemma 4.8. We apply Lemma 4.9 to

where in the second step we used (3.6). Summarizing, we have proved the self-improving estimate

What remains is the proof of (4.2). To that end, in analogy to the partial expectation E_{[µ]} defined above, we define E^{(i)}(·) := E(· | X^{(i)}). Introducing the right-hand side of (3.8) yields, after some elementary algebra using (4.4),

Moreover, using (3.5) it is not hard to see that

This concludes the proof of (4.2) for i = j.
In particular, |G_ii| ≺ φ^{-1/2}. The same argument applied to the matrix X^{(j)} instead of X yields |G^{(j)}_ii| ≺ φ^{-1/2}. Thus we get from (3.9) that for i ≠ j we have

where the last step follows using (3.3), exactly as in the proof of Lemma 4.4, and (4.1). This concludes the proof of (4.2), and hence of Theorem 4.1.

Proof of Theorem 2.10
The proof of Theorem 2.10 is similar to that of Theorem 2.2 in [16] and Theorem 3.3 in [21]; we therefore only sketch the argument. First we observe that, since the nontrivial eigenvalues λ_1, …, λ_K of X*X and XX* coincide and

for all γ > 0, it suffices to prove Theorem 2.10 for φ ≥ 1, i.e. K = N.
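The first observation, that the nontrivial spectra of X*X and XX* coincide, can be checked in the smallest nontrivial case of a 2 × 1 real matrix X, where both spectra are available in closed form (a toy verification, not part of the argument; the function name is ours):

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]],
    computed from the trace and determinant."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    return sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])

x, y = 0.6, -1.3                       # X is the 2x1 matrix (x, y)^T
xtx = x * x + y * y                    # X^T X is the 1x1 matrix (x^2 + y^2)
xxt = eig_sym2(x * x, x * y, y * y)    # spectrum of the 2x2 matrix X X^T

# X X^T has the trivial eigenvalue 0 plus the nontrivial eigenvalue of X^T X.
assert abs(xxt[0]) < 1e-12
assert abs(xxt[1] - xtx) < 1e-12
```
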
Define the normalized counting functions

The proof relies on the following key estimates.
What remains is the proof of (4.45) and (4.46). Here the argument from [21, Section 8] applies with trivial modifications. The key inputs in our case are (4.44), Lemma 4.5, (4.40), and Lemma 4.9 combined with Remark 4.10. We omit further details.
In this section we complete the proof of Theorem 3.11. Since (3.18) was proved in Section 4, we only need to prove (3.16) and (3.17). For definiteness, we give the details of the proof of (3.16); the proof of (3.17) is very similar, and the required modifications are outlined at the end of Section 5.15 below.

Rescaling
It is convenient to introduce the rescaled quantities

The reason for this scaling is that for z ∈ S the diagonal entries of G̃ as well as z̃ are of order one (see (4.5) as well as (5.2) and (5.3) below). Note that all formulas from Lemma 3.6 hold after the replacement (z, G) → (z̃, G̃).
We also introduce the rescaled quantity

The motivation behind this definition is that

for z ∈ S, as can be easily seen using (3.5) as well as

Proof. From (5.3) and (5.2) we easily find
The statement for general T satisfying the assumed bound on |T| then follows easily by induction on the size of T, using the identity (3.7) and the fact that φ^{-1/2}Ψ ≤ 1.

Reduction to off-diagonal entries
By linearity and polarization, in order to prove (3.16) it suffices to prove that ⟨v, Gv⟩ − φ^{-1/2} m_φ ≺ φ^{-1}Ψ for deterministic unit vectors v. All of our estimates will be trivially uniform in the unit vector v and in z ∈ S, and we shall not mention this uniformity any more. Thus, for the following we fix a deterministic unit vector v ∈ C^M.
Hence it suffices to prove that

The rest of this section is devoted to the proof of (5.7).

Sketch of the proof
The basic reason why (5.7) holds is that G_ab can be expanded, to leading order, as a sum of independent random variables using the identity (3.9). To simplify the presentation in this sketch, we set M = N, so that φ = 1 and the rescalings introduced above are trivial. If we could replace the diagonal entries by the deterministic value m_φ, it would suffice to estimate the sum Σ_{a≠b} Σ_{µ,ν} v_a X_{aµ} R^{(ab)}_{µν} X*_{νb} v_b. By the independence of the entries of X we have, using (3.3),

where we used the analogue of (3.14) for R, (4.1), (3.10), and the normalization of v. Hence, if we could ignore the error arising from the approximation G_aa ≈ m_φ, the proof of Theorem 3.11 would be very simple.
The error made in the approximation G_aa ≈ m_φ is of order Ψ by (5.3), so that the corresponding error term on the right-hand side of (5.8) may be bounded using (3.3) by

However, the vector v is normalized not in ℓ¹ but in ℓ². In general, all that can be said about its ℓ¹-norm is that ‖v‖_1 ≤ M^{1/2}‖v‖_2 = M^{1/2}, by the Cauchy-Schwarz inequality. We conclude that the simple replacement of G_aa with its deterministic approximation in (5.8) is not affordable. Not only the leading term but also every error term has to be expanded in the entries of X. This expansion is most effectively controlled if performed within a high-moment estimate. Thus, for large and even p we shall estimate

EJP 19 (2014), paper 33.
(To simplify notation we drop the unimportant complex conjugations on p/2 factors.) We shall show that the expectation forces many indices of the leading-order terms to coincide, at least in pairs, so that eventually every v_a appears at least to the second power, which consistently leads to estimates in terms of the ℓ²-norm of v. Any index that remains single gives rise to a small factor M^{-1/2}, which counteracts the large factor ‖v‖_1 ≤ M^{1/2}. The trivial bound (arising from estimating each entry |v_a| by 1 and the summation over a and b by M²) is affordable only at a very high order, when the number of factors Ψ ≤ N^{-ω/2} that have been generated is sufficient to compensate the loss from the trivial bound. This idea will be used to stop the expansion after a sufficiently large, but finite, number of steps.
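For orientation we record the standard reduction of high-probability bounds to high-moment bounds: by Markov's inequality, schematically,

```latex
\mathbb{E}|Z|^p \;\le\; \bigl(N^{\epsilon}\Psi\bigr)^p
\quad\Longrightarrow\quad
\mathbb{P}\bigl(|Z| > N^{2\epsilon}\Psi\bigr)
\;\le\; \frac{\mathbb{E}|Z|^p}{\bigl(N^{2\epsilon}\Psi\bigr)^p}
\;\le\; N^{-\epsilon p}\,.
```

Choosing p ≥ D/ε makes the right-hand side at most N^{-D}; since ε > 0 and D are arbitrary, a moment bound of this form for all fixed ε and large p is precisely what is needed to conclude Z ≺ Ψ.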
Before explaining the general strategy, we sketch a second moment calculation.
Here the first term is maximally expanded, but the second and third are not; we therefore continue to expand them in a similar fashion by applying (3.7) to each resolvent entry. In general, this procedure does not terminate, but it does generate finitely many maximally expanded terms with no more than a fixed number, say ℓ, of off-diagonal resolvent entries, in addition to finitely many terms that are not maximally expanded but contain more than ℓ off-diagonal entries. By choosing ℓ large enough, these latter terms may be estimated trivially. We therefore focus on the maximally expanded terms, and we write

We get a similar expression for G*_cd. We plug both of these expansions into (5.10) and multiply out the product. The leading term is

We now expand both resolvent entries using (3.9), which gives

The goal is to use the expectation to get a pairing (or a more general partition) of the entries of X. In order to do that, we shall require all terms that are not entries of X to be independent of the randomness in the rows a, b, c, d of X. While the entries of R satisfy this condition, the entries of G do not. We shall hence have to perform a further expansion on them using the identities (3.7) and (3.9). In fact, these two types of expansions will have to be performed in tandem, using a two-step recursive procedure.
The main reason behind this is that even if all entries of G are maximally expanded, each application of (3.9) produces a diagonal entry that is not maximally expanded; for such terms the expansion using (3.7) has to be repeated. For the purposes of this sketch, however, we omit the details of the further expansion of the entries of G, and replace them with their deterministic leading order, m_φ (see (5.3)). This approximation yields

Since all entries of R are independent of all entries of X, we can compute the expectation with respect to the rows a, b, c, d. Note that the only possible pairing is a = d, µ = β, b = c, and ν = α. This results in the expression

This calculation, while giving the right order, was in fact an oversimplification: the sum has to be split according to the coincidences among the indices a, b, c, d, as in (5.13), where a star over a summation indicates that all summation indices that are not explicitly equal to each other have to be distinct. The above calculation leading to (5.12) is valid for the first summation of (5.13), whose contribution (up to leading order) is zero, since the only possible pairing contradicts the condition that the indices a, b, c, d be all distinct. It is not too hard to see that, among the sums in (5.13), only the last one gives a nonzero contribution (up to leading order), and it is, going back to (5.10), equal to

here we used the bound (5.3). Notice that taking the expectation forced us to choose the pairing a = d, b = c to get a nonzero term. This example provides a glimpse into the mechanism that guarantees that the ℓ²-norm of v appears.
Next, we consider a subleading term from the first summation in (5.13), which has three off-diagonal entries:

We proceed as before, expanding all off-diagonal entries of G using (3.9). Up to leading order, we get

The expectation again renders this term zero if a, b, c, d are distinct.
Based on these preliminary heuristics, we outline the main steps in estimating a high moment of Z.
Step 1. Partition the indices in (5.9) according to their coincidence structure: indices in the same block of the partition are required to coincide and indices in different blocks are required to be distinct. This leads to a reduced family, T , of distinct indices.
Step 2. Make all entries of G maximally expanded by repeatedly applying the identity (3.7). Roughly, this entails adding upper indices from the family T to each entry of G using the identity (3.7). We stop the iteration if either (3.7) cannot be applied to any entry of G or we have generated a sufficiently large number of off-diagonal entries of G.
Step 3. Apply (3.9) to each maximally expanded off-diagonal entry of G. This yields factors of the form Σ_{µ,ν} X_{aµ} R^{(T)}_{µν} X*_{νb} with a, b ∈ T, where R^{(T)} is independent of all entries of X by construction. In addition, this application of (3.9) produces new diagonal entries of G that are not maximally expanded.
Step 4. Repeat Steps 2 and 3 recursively in tandem until we only have a sum of terms whose factors consist of maximally expanded diagonal entries of G, entries of R (T ) , and entries of X from the rows indexed by T .
Step 5. Apply (3.8) to each maximally expanded diagonal entry of G. We end up with factors consisting only of entries of R (T ) and entries of X from the rows indexed by T .
Step 6. Using the fact that all entries of R are independent of all entries of X, take a partial expectation over the rows of X indexed by the set T ; this only involves the entries of X. Only those terms give a nonzero contribution whose Greek indices have substantial coincidences.
Step 7. For entropy reasons, the leading-order term arises from the smallest number of constraints among the summation vertices that still results in a nonzero contribution. This corresponds to a pairing both among the Greek and the Latin indices.
This naturally leads to estimates in terms of the ℓ²-norm of v.
Step 8. Observe that if a Latin index i remained single in the partitioning of Step 1 (so that the corresponding weight factor will involve the ℓ¹-norm Σ_i |v_i|) then, by a simple parity argument, the number of appearances of the index i will remain odd along the expansion of Steps 2-5. This forces us to take at least a third (but in fact at least a fifth) moment of some entry X_iµ, which reduces the combinatorics of the summations compared with the fully paired situation from Step 7. This combinatorial gain offsets the factor M^{1/2} lost in taking the ℓ¹-norm of v.
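The pairing mechanism of Step 7 can be checked in the simplest toy case: for independent centred signs, only the paired terms survive the expectation, producing the ℓ²-norm of v. The following snippet verifies this exactly by enumeration (it is of course only a caricature of the expansion above):

```python
from itertools import product

def second_moment(v):
    """Exact value of E[(sum_a v_a X_a)^2] for i.i.d. Rademacher X_a,
    computed by exhaustive enumeration over all sign patterns."""
    m = len(v)
    total = 0.0
    for x in product((-1, 1), repeat=m):
        total += sum(va * xa for va, xa in zip(v, x)) ** 2
    return total / 2 ** m

# Only the paired terms a = b survive the expectation, so the result is
# the squared l2-norm sum_a v_a^2, not the (much larger) l1-norm.
v = [0.5, -0.25, 0.125, 1.0]
assert abs(second_moment(v) - sum(va * va for va in v)) < 1e-12
```
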
Steps 1-6 require a careful expansion algorithm and a meticulous bookkeeping of the resulting terms. We shall develop a graphical language that encodes the resulting monomials. Expansion steps will be recorded via operations on graphs such as merging certain vertices or replacing some vertex or edge by a small subgraph. Several ingredients of the graphical representation and the concept of graph operations are inspired by tools from [6] developed for random band matrices. Once the appropriate graphical language is in place and the expansion algorithm has been constructed, the observations in Steps 7 and 8 will yield the desired estimate by a power counting coupled with a parity argument.

The p-th moment of Z and introduction of graphs
We shall estimate Z with high probability by estimating its p-th moment for a large but fixed p. It is convenient to rename the summation variables in the definition of Z as (a, b) = (b_1, b_2). Let p be an even integer and write

where we recall the definition of Z from (5.6). We shall perform the summation by first fixing the partition P ∈ P_p and by deriving an upper bound that is uniform in P; at the very end we shall sum trivially over P ∈ P_p.
In order to handle expressions of the form (5.15), as well as more complicated ones required in later stages of the proof, we shall need to develop a graphical notation. The basic idea is to associate matrix indices with vertices and resolvent entries with edges. The following definition introduces graphs suitable for our purposes.

Definition 5.3 (Graphs). By a graph we mean a finite, directed, edge-coloured multigraph Γ = (V, E, ξ). Here V is a finite set of vertices, E a finite set of directed edges, and ξ is a "colouring of E", i.e. a mapping from E to some finite set of colours. The graph Γ may have multiple edges and loops. More precisely, E is some finite set with maps α, β : E → V; here α(e) and β(e) represent the source and target vertices of the edge e ∈ E. We denote by deg_Γ(i) the degree of the vertex i ∈ V(Γ).
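The structure in Definition 5.3 amounts to the following minimal data type (an illustrative sketch; the names are ours and play no role in the proof):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Edge:
    source: int     # alpha(e): source vertex
    target: int     # beta(e): target vertex
    colour: str     # xi(e): a formal symbol, e.g. "G" or "G*"

@dataclass
class Graph:
    vertices: set = field(default_factory=set)   # V
    edges: list = field(default_factory=list)    # E (a list, so multiple edges are allowed)

    def add_edge(self, source, target, colour):
        self.vertices.update((source, target))
        self.edges.append(Edge(source, target, colour))

    def degree(self, i):
        """deg_Gamma(i): number of edge endpoints at vertex i (a loop counts twice)."""
        return sum((e.source == i) + (e.target == i) for e in self.edges)

# Multiple edges and loops are allowed:
g = Graph()
g.add_edge(1, 2, "G")
g.add_edge(1, 2, "G*")   # a parallel edge with a different colour
g.add_edge(2, 2, "G")    # a loop
assert g.degree(1) == 2 and g.degree(2) == 4
```
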
We may now express the right-hand side of (5.15) using graphs. Fix the partition P ∈ P_p. We associate a graph ∆ ≡ ∆(P) with P as follows. The vertex set V(∆) is given by the blocks of P, i.e. V(∆) = P. The set of colours, i.e. the range of ξ, is {G, G*} (we emphasize that these two colours are simply formal symbols whose names are supposed to evoke their meaning). The set of edges E(∆) is parametrized as follows by the resolvent entries on the right-hand side of (5.15). Each resolvent entry G^#_{b_k1 b_k2} gives rise to an edge e ∈ E(∆) with colour ξ(e) = G if # is nothing and ξ(e) = G* if # is *. The source vertex α(e) of this edge is the unique block of P satisfying (k, 1) ∈ α(e), and its target vertex β(e) is the unique block of P satisfying (k, 2) ∈ β(e). Figure 1 illustrates the construction of ∆(P), where the two different types of line correspond to the two colours G, G*. The graph ∆ has no loops. The index associated with block i ∈ V(∆) is denoted by a_i, so that a_i = b_kr for any (k, r) in the block i of the partition P.
Using the graph ∆ ≡ ∆(P) we may rewrite the right-hand side of (5.15). Each vertex i ∈ V(∆), associated with a block of P, is assigned a summation index a_i ∈ {1, 2, …, M}, and we write a = (a_i)_{i∈V(∆)}. The indicator function on the right-hand side of (5.15) translates to the condition that a_i ≠ a_j for i ≠ j (where i, j ∈ V(∆)). We use a star over the summation sign to denote summation subject to this condition (distinct summation indices). Thus we may rewrite (5.15) as

The function w_a(∆) has the interpretation of a deterministic (complex) weight for the summation over a; it satisfies the basic estimate

We record the following basic properties of ∆.
Our first goal is to use the expansion formulas (3.7)-(3.9) to express A_a(∆) as a sum of monomials involving only entries of X and R, so that no entries of G remain. The entries of R and X will be independent by construction, which will make the evaluation of the expectation possible. The result will be given in Proposition 5.10 below, which expresses Y(∆) as a sum of terms associated with graphs, which are themselves conveniently indexed using a finite binary tree, denoted by T. To bookkeep this expansion, we shall need a more general class of graphs than ∆.

Generalized colours and encoding
For the following we fix p ∈ 2N and a partition P ∈ P p , and set ∆ = ∆(P ). We shall develop an expansion scheme for monomials of type A a (∆). A fundamental notion in our expansion is that of maximally expanded entries of G, given in Definition 5.4 below.
We shall need to enlarge the set of colours of edges, so as to be able to encode entries of not only G and G * , but also entries of R, R * , X, and X * ; in addition, we shall need to encode diagonal entries of G and G * that are in the denominator, as in the formulas (3.7), as well as to keep track of upper indices. We need all of these factors, since our expansion relies on a repeated application of the identities (3.7), (3.8), and (3.9).
In order to define the graphs precisely, we consider graphs Γ as in Definition 5.3. We shall only consider graphs Γ satisfying (5.20), an assumption we make throughout the following. This means that only new Greek summation indices but no new Latin indices are generated, corresponding to the repeated applications of (3.8) and (3.9). In particular, the vertex colouring for our graphs is very simple: the vertices of ∆ are black and all other vertices are white.
As our set of colours we choose

ξ = (ξ_1, ξ_2, ξ_3):  ξ_1 ∈ {G, G*, R, R*, X, X*},  ξ_2 ∈ {+, −},  ξ_3 ⊂ V_b(Γ).  (5.21)

Note that these colours are to be interpreted merely as lists of formal symbols; the choice of their names is supposed to evoke their meaning. The component ξ_1 determines whether the edge encodes an entry of G (corresponding to ξ_1 = G), of G* (ξ_1 = G*), of R (ξ_1 = R), of R* (ξ_1 = R*), of X (ξ_1 = X), or of X* (ξ_1 = X*). The component ξ_2 determines whether the entry is in the numerator (ξ_2 = +) or in the denominator (ξ_2 = −). Finally, the component ξ_3 is used to keep track of the upper indices of entries of G and G*, which we set to be a_{ξ_3} := {a_i : i ∈ ξ_3}. The entries of R and R* also have upper indices, but they always carry the maximal set a_b of upper indices, i.e. they always appear in the form R^{(a_b)} and R*^{(a_b)}. Hence, upper indices need not be tracked for the entries of R and R*, and for them we set ξ_3(e) = ∅. Let Γ be a graph with colour set (5.21).
Properties (i)-(iv) are straightforward compatibility conditions, which are obvious in light of the type of matrix entry that the edge e encodes. Property (v) states that only diagonal entries of G and G* may be in the denominator. Property (vi) states that only entries of G or G* may have a (nontrivial) upper index, and that the lower indices of an entry of G or G* may not coincide with its upper indices (by the definition of minors).
In order to give a precise definition of the monomial encoded by a coloured edge, and hence by a graph Γ, it is convenient to split the vertex indices as a = (a_i)_{i∈V(Γ)} = (a_b, a_w), where

When drawing graphs, we represent a black vertex as a black dot and a white vertex as a white dot. An edge with ξ_1 = G is represented as a solid directed line joining two black dots, and an edge with ξ_1 = G* as a dashed directed line joining two black dots. If ξ_2 = − we indicate this by decorating the edge with a white diamond (not to be confused with a white dot). Notice that such edges are always loops, according to property (v). Sometimes we also indicate the component ξ_3(e) in our graphs, simply by writing it next to the edge e. See Figure 2 for our graphical conventions when depicting edges with ξ_1 ∈ {G, G*}.
For the other edges, e ∈ E(Γ) with ξ_1(e) ∈ {R, R*, X, X*}, we set A_a(e, Γ) :=

When drawing graphs, we represent an edge with ξ_1 = R as a solid directed line joining two white vertices, an edge with ξ_1 = R* as a dashed directed line joining two white vertices, an edge with ξ_1 = X as a dotted directed line from a black to a white vertex, and an edge with ξ_1 = X* as a dotted directed line from a white to a black vertex. Note that we use the same line style to draw X- and X*-edges, since the orientation of the edge together with the vertex colouring distinguishes them uniquely. See Figure 3 for an illustration of these conventions, and Figure 6 for an illustration of (5.28).

Figure 3: The graphical conventions for entries of R^{(a_b)} (corresponding to ξ_1 = R), R*^{(a_b)} (corresponding to ξ_1 = R*), X (corresponding to ξ_1 = X), and X* (corresponding to ξ_1 = X*).

Having defined A_a(e, Γ) for an arbitrary graph Γ with colour set (5.21) and e ∈ E(Γ), we define the monomial encoded by Γ,

A_a(Γ) := ∏_{e∈E(Γ)} A_a(e, Γ). (5.23)

Note that (5.23) extends (5.18). At this point we introduce a convention that will simplify notation throughout the proof. We allow the monomial A_a(Γ) to be multiplied by a deterministic function of z that is bounded, i.e. in general we replace (5.23) with

A_a(Γ) := u(Γ) ∏_{e∈E(Γ)} A_a(e, Γ), (5.24)

where u(Γ) is some deterministic function of z satisfying |u(Γ, z)| ≤ C_Γ for z ∈ S. This will allow us to forget signs and various factors of z and m_φ that are generated along the expansion. The functions u(Γ) could be easily tracked throughout the proof, but all that we need to know about them is that they satisfy the conditions listed after (5.24). Not tracking the precise form of these prefactors is sufficient for our purposes, since after completing the graphical expansion we shall estimate each graph individually, without making use of further cancellations among different graphs.

R-groups
We define an R-group to be an induced subgraph of Γ consisting of three edges, e_1, e_2, e_3, such that e_1 and e_3 are X-edges and e_2 is an R-edge, and they form a chain in the sense that β(e_1) = α(e_2), β(e_2) = α(e_3), and both of these vertices have degree two. We call e_2 the centre of the R-group and define A(e_2) := α(e_1) and B(e_2) := β(e_3). If A(e_2) = B(e_2) we call the R-group diagonal; otherwise we call it off-diagonal. See Figure 4 for an illustration. We require that our graphs Γ satisfy the following property.

(vii) Each X-edge and R-edge of Γ belongs to some R-group of Γ. In particular, all white vertices have degree two, and an R-group is uniquely determined by its centre.
The R-groups constitute graphical representations of the monomials on the right-hand sides of (3.8) and (3.9). It is important to stress that there is no restriction on possible coincidences among the white-vertex indices (a_i)_{i∈V_w}; this means that even if two Greek summation indices arising from two different applications of (3.8) or (3.9) coincide, they will nevertheless be encoded by distinct white vertices. This allows us to keep the graphical structure involving R- and X-edges very simple. Note that the initial graph ∆ = ∆(P) trivially satisfies the properties (i)-(vii).

Maximally expanded edges and sketch of the expansion
The following definition introduces a notion that underlies our entire expansion.
Note that it only applies to G-edges. We conclude this section with an outline of the expansion algorithm that will ultimately yield a family of graphs, whose contributions can be explicitly estimated and whose encoded monomials sum up to the monomial encoded by ∆ = ∆(P) from Section 5.4. The goal of the expansion is to get rid of all G-edges, by replacing them with R-groups. Of course, this replacement has to be done in such a manner that the original monomial A_a(∆) can be expressed as a sum of the monomials encoded by the new graphs. Having done the expansion, we shall be able to exploit the fact that the R-entries and the X-entries are independent. This independence originates from the upper indices i and j in the entries of R in (3.8) and (3.9). It allows us to take the expectation in the X-variables. Combined with sufficient information about the graphs generated by the expansion, this yields a reduction in the summation that is sufficient to complete the proof.
The expansion relies on three main operations: (a) make one of the G-entries maximally expanded by adding upper indices using the identity (3.7);
(b) expand all off-diagonal maximally expanded G-entries in terms of X using the identity (3.9); (c) expand all diagonal maximally expanded G-entries in terms of X using (3.8).
We shall implement each ingredient by a graph surgery procedure. Operation (a) is the subject of Section 5.8; it creates two new graphs, τ_0(Γ) and τ_1(Γ), from an initial graph Γ. Operation (b) is the subject of Section 5.9; it creates one new graph, ρ(Γ), from an initial graph Γ. As it turns out, Operations (a) and (b) have to be performed in tandem using a coupled recursion, described by a tree T, which is the subject of Section 5.10.
Once this recursion has terminated, Operation (c) may be performed (see Section 5.11).

Operation (a): construction of the graphs τ_0(Γ) and τ_1(Γ)
which follow immediately from (3.7); here a, b, c ∈ a_b \ T and a, b ≠ c. The same identities hold for G*. The basic idea is to take some graph Γ with at least one G-entry that is not maximally expanded, to pick the first such G-entry, and to apply the first identity of (5.25) if this entry is in the numerator and the second identity if it is in the denominator. By Definition 5.4, if the G-entry is not maximally expanded, there is a c ∈ a_b such that (5.25) may be applied. The right-hand sides of (5.25) consist of two terms: the first one has one additional upper index, and the second one has at least one additional off-diagonal G-entry. These two terms can be described by two new graphs, derived from Γ, denoted by τ_0(Γ) and τ_1(Γ). The graph τ_0(Γ) is almost identical to Γ, except that the edge corresponding to the selected entry carries one additional upper index.

We now give the precise definition of Operation (a). Take a graph Γ that has a G-edge that is not maximally expanded. We shall define two new graphs, τ_0(Γ) and τ_1(Γ), as follows. Let e be the first G-edge of Γ that is not maximally expanded, and let i be the first vertex of V_b(Γ) \ (ξ_3(e) ∪ {α(e), β(e)}); note that by assumption on Γ and e this set of vertices is not empty. We now apply (5.25) to the entry A_a(e, Γ). We set a = a_{α(e)}, b = a_{β(e)}, c = a_i, and T = a_{ξ_3(e)} in (5.25), and express A_a(e, Γ) as a sum of two terms given by the right-hand sides of (5.25); we use the first identity of (5.25) if ξ_2(e) = + and the second if ξ_2(e) = −. This results in a splitting of the whole monomial into a sum of two monomials,

A_a(Γ) = A_{0,a}(Γ) + A_{1,a}(Γ),

in self-explanatory notation. By definition, τ_0(Γ) is the graph that encodes A_{0,a}(Γ) and τ_1(Γ) the graph that encodes A_{1,a}(Γ). Hence, by definition, we have

A_a(Γ) = A_a(τ_0(Γ)) + A_a(τ_1(Γ)).
Moreover, it follows immediately that the maps τ_0 and τ_1 do not change the vertices, so that we have

The procedure Γ → (τ_0(Γ), τ_1(Γ)) may also be described explicitly on the level of graphs alone, but we shall neither need nor do this. Instead, we give a graphical depiction of this process in Figure 5. We only draw the edge e and the vertices α(e), β(e), and i. All other edges of Γ are left unchanged by the operation, and are not drawn. The set ξ_3 is indicated in parentheses next to each edge, provided it is not empty. The first graph depicts the operation for the case α(e) ≠ β(e) (encoding an off-diagonal entry), the second for the case α(e) = β(e) and ξ_2(e) = + (encoding a diagonal entry in the numerator), and the third for the case α(e) = β(e) and ξ_2(e) = − (encoding a diagonal entry in the denominator). The first graph on the right-hand side in each identity encodes τ_0(Γ) and the second τ_1(Γ).
Recall that the graphs do not track irrelevant signs according to the convention made around (5.24).
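For orientation, we recall the standard form of the expansion identities used here (cf. (3.7)); in the usual conventions for minors they read, for a, b, c ∈ a_b \ T with a, b ≠ c,

```latex
G^{(T)}_{ab} \;=\; G^{(T c)}_{ab} \;+\; \frac{G^{(T)}_{ac}\,G^{(T)}_{cb}}{G^{(T)}_{cc}}\,,
\qquad\qquad
\frac{1}{G^{(T)}_{aa}} \;=\; \frac{1}{G^{(T c)}_{aa}}
\;-\; \frac{G^{(T)}_{ac}\,G^{(T)}_{ca}}{G^{(T)}_{aa}\,G^{(T c)}_{aa}\,G^{(T)}_{cc}}\,.
```

The first identity trades an entry in the numerator for the same entry with one additional upper index plus an off-diagonal correction; the second does the same for a reciprocal diagonal entry. This matches the description of the two terms on the right-hand sides of (5.25).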
The following result is trivial.

Operation (b): construction of the graph ρ(Γ)
In this section we define the second operation, (b), outlined in Section 5.7. The idea is that Operation (a) from Section 5.8 generates off-diagonal G-entries that are maximally expanded. These in turn have to be expanded further using (3.9), so as to extract their explicit X-dependence. Roughly, the map ρ replaces each maximally expanded off-diagonal G-edge by an off-diagonal R-group.
It will be convenient to have a shorthand for a maximally expanded entry of G. To that end, we define, for a, b ∈ a_b, the maximally expanded entry

Using (3.9) we may write, for a ≠ b,

where z* denotes the complex conjugate of z. Note that the first diagonal term on the right-hand side is not maximally expanded (while the second one is).
The identity (5.28) may also be formulated in terms of graphs. We denote by ρ(Γ) the graph encoding the monomial obtained from A_a(Γ) by applying the identity (5.28) to each maximally expanded off-diagonal G-entry of Γ. This replacement can be done in any order. By definition of ρ(Γ), we have

Σ_{a_w} A_{a_b,a_w}(Γ) = Σ_{a_w} A_{a_b,a_w}(ρ(Γ)). (5.29)

Note that both sides depend on a_b. Each application of (5.28) adds two white vertices, so that in general V_w(ρ(Γ)) ⊃ V_w(Γ). In particular, in (5.29) we slightly abuse notation by using the symbol a_w for different families on the left- and right-hand sides. The point is that we always perform an unrestricted summation over the Greek indices associated with the white vertices of the graph. However, the black vertices are left unchanged, so that we have

V_b(ρ(Γ)) = V_b(Γ). (5.30)

Like τ_0 and τ_1, the map ρ may be explicitly defined on the level of graphs, which we shall however not do in order to avoid unnecessarily heavy notation. See Figure 6 for an illustration of ρ.

Figure 6: A graphical depiction of the map ρ resulting from applications of (5.28). For simplicity, we draw a graph with a single edge. The indices a, b, µ, ν of (5.28) are associated with the vertices i, j, k, l, so that we have a = a_i, b = a_j, µ = a_k, and ν = a_l. In the picture we abbreviate V_b = V_b(Γ). Note that V_b(·) remains unchanged under ρ while V_w(·) is increased by the addition of two new white vertices, k, l. The prefactor z is omitted from the graphical representation.
The following result is an immediate corollary of the definition of ρ.

Constructing the tree T: recursion using (a) and (b)
We now apply Operations (a) and (b) alternately and recursively to the graph ∆ = ∆(P). We start by applying Operation (a) to ∆; the two new graphs thus produced may contain newly created maximally expanded off-diagonal G-entries. We then apply ρ to these edges. Along the procedure we get new R-groups and additional diagonal entries, some of which may not be maximally expanded. We then repeat the cycle: apply Operation (a) and then Operation (b). For some graphs the procedure stops because all G-edges have become maximally expanded. For other graphs, the algorithm would continue indefinitely, since Operation (b) keeps on producing diagonal G-entries that are not maximally expanded. We shall however show that in such graphs the number of off-diagonal G-edges and R-groups increases as the algorithm is run. Since both of these objects are small, after the accumulation of a sufficiently large number of them we can stop the recursion and estimate such terms brutally. In summary, the end result will be a family of graphs encoding monomials in the entries of R^{(a_b)}, R^{*(a_b)}, X, and X^*, as well as diagonal entries of G and G^*. In addition, owing to the brutal truncation in this procedure, the algorithm yields terms that do not satisfy this property but contain a large enough number of off-diagonal G-edges and R-groups to be negligible.
The algorithm generates a family of graphs Θ_σ indexed by finite binary strings σ or, equivalently, by the vertices of a rooted binary tree T = (V(T), E(T)). We start the algorithm with Θ_∅ := ∆, corresponding to the empty string, i.e. the root of the tree. The tree is constructed recursively according to Θ_0 := ρ(τ_0(∆)), Θ_1 := ρ(τ_1(∆)), Θ_00 := ρ(τ_0(Θ_0)), and so on, until a stopping rule is satisfied (see Definition 5.7 below). See Figure 7 for an illustration of the resulting tree.

Figure 7: The tree T whose vertices are binary strings σ. The root is the empty string ∅. Each vertex σ ∈ V(T) encodes a graph Θ_σ. The graphs associated with the two children of a vertex σ are obtained from Θ_σ using the maps τ_0, τ_1, and ρ. More precisely, an arrow towards the left corresponds to the map ρ ∘ τ_0 and an arrow towards the right to the map ρ ∘ τ_1. In this example, the graph Θ_11 satisfies the stopping rule from Definition 5.7, and is therefore a leaf of T.
We use the notation iσ, for i = 0, 1, to denote the binary string σ to which i has been appended on the left. The children in T of the vertex σ ∈ V(T) are 0σ and 1σ. The precise construction of Θ_σ and of the binary tree T is as follows. Let ℓ > 0 be a cutoff to be chosen later (see (5.39) below); it will be used as a threshold for the stopping rule, which ensures that the tree T is finite. Let d(Γ) denote the number of off-diagonal G-edges of Γ plus the number of its off-diagonal R-groups (5.31). The construction of the tree T relies on the following stopping rule.
EJP 19 (2014), paper 33.
Definition 5.7 (Stopping rule). We say that a graph Γ satisfies the stopping rule if d(Γ) ≥ ℓ or if all G-edges of Γ are maximally expanded.
We continue this recursion on each leaf until all leaves satisfy the stopping rule from Definition 5.7. By Lemma 5.9 below, the resulting tree T is finite, i.e. the recursion terminates after a finite number of steps.
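The recursion just described is a purely combinatorial branching process, so its termination can be illustrated with a toy model. In the sketch below, reducing a graph to two counters is our own illustrative assumption, not the actual graph structure of the paper: we track only d(·) and the total number of upper indices, and grow binary strings by appending 0 or 1 on the left, as in the construction of T.

```python
# Toy model of the expansion tree T. For illustration we reduce a
# "graph" to two counters (an assumption of this sketch, not the paper):
#   d : off-diagonal G-edges plus off-diagonal R-groups (never
#       decreased by tau_0, tau_1, rho),
#   u : total number of upper indices in the G-entries (tau_0
#       increases it by one; once u is maximal, all G-edges are
#       maximally expanded).
from collections import deque

def build_tree(ell, u_max):
    """Grow binary strings sigma until every leaf satisfies the toy
    stopping rule: d >= ell, or u >= u_max (standing in for 'all
    G-edges maximally expanded')."""
    leaves = []
    nodes = 0
    queue = deque([("", 0, 0)])            # (sigma, d, u); root = empty string
    while queue:
        sigma, d, u = queue.popleft()
        nodes += 1
        if d >= ell or u >= u_max:         # stopping rule (Definition 5.7)
            leaves.append(sigma)
            continue
        queue.append(("0" + sigma, d, u + 1))  # rho . tau_0: one more upper index
        queue.append(("1" + sigma, d + 1, u))  # rho . tau_1: one more off-diagonal group

    return leaves, nodes

leaves, nodes = build_tree(ell=3, u_max=5)
# The recursion terminates: every string has bounded length (cf. Lemma 5.9).
assert all(len(s) <= 5 + 3 for s in leaves)
```

Since each child map strictly increases one of the two bounded counters, every branch of the toy tree stops after at most u_max + ell steps; this is the same bookkeeping that drives the proof of Lemma 5.9.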
Lemma 5.8. The graphs Θ_σ have the following two properties. First, the summed values satisfy
$$\sum_{a_w} A_{a_b, a_w}(\Theta_\sigma) \;=\; \sum_{a_w} A_{a_b, a_w}(\Theta_{0\sigma}) + \sum_{a_w} A_{a_b, a_w}(\Theta_{1\sigma}) \,. \qquad (5.33)$$
Note that both sides depend on a_b, and we slightly abuse notation as explained after (5.29). Second, the set of black vertices remains unchanged throughout the recursion: V_b(Θ_σ) = V_b(∆) for all σ ∈ V(T). The interpretation of (5.33) is that the value of any graph Θ_σ is equal to the sum of the values of its two children, Θ_{0σ} and Θ_{1σ}.
The following estimate ensures that the tree T is finite, i.e. that the expansion procedure does not produce an infinite sequence of graphs whose value d(·) remains below ℓ indefinitely.
Next, let f = f(Γ) denote the number of G-edges minus the number of R-edges in the graph Γ. It follows immediately that f is left invariant by τ_0 and ρ, and is increased by at most 4 by τ_1: f(τ_0(Γ)) = f(ρ(Γ)) = f(Γ) and f(τ_1(Γ)) ≤ f(Γ) + 4. Moreover, each application of τ_1 increases d(·) by at least one, and the recursion is stopped once d(·) ≥ ℓ, so that τ_1 is applied at most ℓ times along any branch of T. Since in the initial graph there is no R-edge, so that f(∆) = |E(∆)| = p, we conclude that f(Θ_σ) ≤ |E(∆)| + 4ℓ = p + 4ℓ for all σ ∈ V(T). By Definition 5.7, the number of R-edges is bounded by ℓ. (Note that only off-diagonal R-groups have been created along the procedure, so that the number of R-edges is the same as the number of off-diagonal R-groups. Diagonal R-groups will appear later, in Section 5.12.) Hence we conclude that the number of G-edges of any Θ_σ is bounded by p + 5ℓ.
In order to estimate the number of zeros in the string σ, we note that, since each G-entry can have at most |V(∆)| ≤ 2p upper indices, the total number of upper indices in all the G-entries of A_a(Θ_σ) is bounded by 2p(p + 5ℓ). We conclude by noting that τ_1 and ρ do not decrease the total number of upper indices in the G-entries, while τ_0 increases this number by one. Hence the total number of zeros in any string σ is bounded by 2p(p + 5ℓ). Thus, the total length of σ is bounded by 2p(p + 5ℓ) + ℓ ≤ 2p(p + 6ℓ). This concludes the proof.
Next, we express Y(∆) from (5.17) in terms of the graphs we just introduced. By Lemma 5.8 and the fact that the value of any Θ_σ with σ not a leaf of T equals the sum of the values of its two children, we may propagate the identity (5.33) recursively from the root ∅, with Θ_∅ = ∆, down to the leaves. We conclude the identity (5.36), which expresses Y(∆) as a sum of the values of the graphs Θ_σ over the leaves σ of T. For the following we partition L(T) = L_0(T) ∪ L_1(T) into the trivial leaves L_0(T) and the nontrivial leaves L_1(T). By definition, the trivial leaves of T are those σ ∈ V(T) satisfying d(Θ_σ) ≥ ℓ. We shall estimate the contribution of the trivial leaves brutally in Section 5.11 below, using the fact that they contain a large enough number of small factors.
By Definition 5.7, if σ ∈ L_1(T) is a nontrivial leaf then all G-edges of Θ_σ are diagonal and maximally expanded. The estimate of the nontrivial leaves will be performed in Sections 5.12-5.14.

The trivial leaves
In this section we estimate the contribution of Θ_σ for a trivial leaf σ ∈ L_0(T). Thus, fix σ ∈ L_0(T). From (5.28) and Lemma 5.2 we find, for a ≠ b, that each off-diagonal R-group of Γ yields a contribution of size O_≺(φ^{-1/2} Ψ) after summation over the indices associated with the vertices incident to its centre. Moreover, by definition of T, each R-group of Θ_σ is off-diagonal. In addition, each off-diagonal G-edge yields a contribution of size O_≺(φ^{-1/2} Ψ) by Lemma 5.2. Thus, summing out all indices associated with white vertices (i.e. inner vertices of R-groups), we find that the contribution of Θ_σ to the right-hand side of (5.36) may be bounded by O_≺(M^{2p} (φ^{-1/2} Ψ)^ℓ), where we estimated the summation over a_b by M^{2p} using the trivial bound |w_{a_b}(∆)| ≤ 1 (from (5.19) and ‖v‖_2 = 1). In the last step we used Lemma 3.2 (i) and (iii). The assumption E Z^2 ≤ N^C of Lemma 3.2 (iii) for the random variable Z = Σ_{a_w} A_{a_b,a_w}(Θ_σ) follows from the following lemma combined with Hölder's inequality, and from the fact that the number of white vertices of Θ_σ is independent of N, so that the sum over a_w contains O(N^C) terms.
Using Lemma 5.9, we therefore conclude that the contribution of all trivial leaves to the right-hand side of (5.36) is bounded as in (5.38), where C_{p,ℓ} = 2^{2p(p+6ℓ)} estimates the number of vertices of T (see Lemma 5.9). The last step of (5.38) holds provided we choose ℓ as in (5.39). Here we used the bound Ψ ≤ C N^{-ω/2}, which follows from the definitions (3.15), (2.11), and (3.5).
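To see why a fixed (N-independent) cutoff suffices, note that the leaf count C_{p,ℓ} and the prefactor M^{2p} are polynomial in N, while (φ^{-1/2} Ψ)^ℓ decays at a rate proportional to ℓ. A numeric sanity check, in which the concrete values of ω, p, and the choice of ℓ are hypothetical stand-ins for (5.39) rather than the paper's actual constants:

```python
# Numeric sanity check: a large but fixed cutoff ell beats the
# polynomial prefactor M^{2p}. All concrete values below (omega, p,
# and the choice of ell) are hypothetical stand-ins for (5.39).
N = 10**6
omega, p = 0.1, 3
M = N                         # comparable dimensions: log M ~ log N
Psi = N ** (-omega / 2)       # toy stand-in for the bound Psi <= C N^{-omega/2}
ell = int(10 * p / omega)     # a choice of ell proportional to p / omega

# The trivial-leaf bound M^{2p} Psi^ell is then smaller than the
# target size Psi^{2p}:
assert M ** (2 * p) * Psi ** ell <= Psi ** (2 * p)
```

The point of the design is that p and ω are fixed, so ℓ is a large but fixed constant, and the combinatorial prefactor 2^{2p(p+6ℓ)} is likewise N-independent.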

The nontrivial leaves I: Operation (c)
From now on we focus on the nontrivial leaves σ ∈ L_1(T). Our goal is to prove the following estimate, Proposition 5.12, which is analogous to (5.38). Its proof will be the content of this and the two following subsections, and will be completed at the end of Section 5.14.
By definition of L_1(T), all G-edges of Θ_σ are diagonal and maximally expanded for any σ ∈ L_1(T). The first step behind the proof of Proposition 5.12 uses Operation (c) from Section 5.5, i.e. expanding all diagonal G-entries of A_a(Θ_σ) using (3.8). Roughly, this amounts to replacing diagonal G-edges by (a collection of) diagonal R-groups. More precisely, for entries in the denominator we use the identity (5.40). In order to handle entries in the numerator, we rewrite this identity in the form (5.42). Recall that all G-entries of A_a(Θ_σ) are diagonal and maximally expanded. We apply (5.40) or (5.42) to each G-entry of A_a(Θ_σ), and multiply everything out. The result may be written in terms of graphs as (5.43), where the error term O_≺(φ^{-1/2} Ψ) collects all terms containing at least one error term from the expansion (5.42). The sum on the right-hand side of (5.43) consists of monomials in the entries of R^{(a_b)}, R^{*(a_b)}, X, and X^* (note that entries of G and G^* no longer appear), and can hence be encoded using a family of graphs which we call G(Θ_σ). By construction, the family G(Θ_σ) is finite. (In fact, it satisfies |G(Θ_σ)| ≤ ℓ^{6ℓ}, where we used that the number of G-entries of A_a(Θ_σ) to which (5.40) or (5.42) is applied is bounded by p + 5ℓ ≤ 6ℓ; see the proof of Lemma 5.9.) Exactly as in Section 5.11, we may brutally estimate the contribution of the rest term on the right-hand side of (5.43), with ℓ defined in (5.39); we omit the details.
Hence, in order to complete the proof of Proposition 5.12, it suffices to prove that for all σ ∈ L_1(T) and all Γ ∈ G(Θ_σ) we have the bound (5.44). As before, the map Θ_σ → G(Θ_σ) may be explicitly given on the level of graphs, but we refrain from doing so. Instead, we illustrate this process for some simple cases in Figure 8.

Figure 8: An illustration of the expansions (5.40) and (5.42) that generate G(Θ_σ) from Θ_σ. A G-edge encoding an entry in the denominator is replaced by either nothing (leaving just the vertex) or a diagonal R-group. A G-edge encoding an entry in the numerator is replaced by either nothing or up to ℓ − 1 diagonal R-groups.

The nontrivial leaves II: taking the expectation
Let us now consider a nontrivial leaf σ ∈ L_1(T). By definition of L_1(T), all G-edges of Θ_σ are diagonal and maximally expanded. Therefore no Γ ∈ G(Θ_σ) contains any G-edges. This was the goal of the expansion generated by Operations (a)-(c). Hence, each Γ ∈ G(Θ_σ) consists solely of R-groups.
Let σ ∈ L_1(T) and Γ ∈ G(Θ_σ). Fix the summation indices a_b, and recall that a_i ≠ a_j for i, j ∈ V_b(Γ) with i ≠ j. By definition of R^{(a_b)}, the |V_b(Γ)| + 1 families $(R^{(a_b)}_{\mu\nu})_{\mu,\nu=1}^N$ and $(X_{a_i \mu})_{\mu=1}^N$, i ∈ V_b(Γ), are independent. Therefore we may take the expectation of the R-entries and the X-entries separately. The expectation of the X-entries may be kept track of using partitions, much as in Section 5.4, except that in this case the partition is on the white vertices. In fact, the combinatorics here are much simpler, since two white vertices may only be in the same block of the partition if they are adjacent to a common black vertex. Indeed, the (Latin) indices associated with two different black vertices are distinct, so that the two entries of X encoded by two X-edges incident to two different black vertices are independent: X_{aµ} and X_{bν} are independent whenever a ≠ b, for all µ and ν (even if µ = ν). The precise definition is the following.
We recall from Property (vii) in Section 5.6 that each white vertex j ∈ V_w(Γ) is adjacent in Γ to a unique black vertex π(j) ≡ π_Γ(j). For each i ∈ V_b(Γ) we introduce a partition ζ_i of the set of white vertices π^{-1}({i}), and constrain the values of the indices (a_j : π(j) = i) to be compatible with ζ_i. On the level of graphs, such a partition amounts to merging vertices in π^{-1}({i}). Abbreviate ζ = (ζ_i)_{i ∈ V_b(Γ)}, and denote by Γ_ζ the graph obtained from Γ by merging, for each i ∈ V_b(Γ), the vertices adjacent to i according to ζ_i. Note that, like Γ, each Γ_ζ satisfies the properties (i)-(vi) from Section 5.5, but, unlike Γ, in general Γ_ζ does not satisfy the property (vii) from Section 5.6. See Figure 9 for an illustration of the mapping Γ → Γ_ζ.
Define the indicator function
$$\chi_{a_w}(\Gamma) \;:=\; \prod_{i \in V_b(\Gamma)} \mathbf{1}\bigl(a_j \neq a_{j'} \text{ for } j, j' \in \pi_\Gamma^{-1}(\{i\}) \text{ with } j \neq j'\bigr) \,,$$
which constrains the summation indices associated with different white vertices adjacent to the same black vertex to have different values. By definition of Γ_ζ, we therefore obtain the splitting (5.45), where we used the independence described above and abbreviated E_R(·) for the set of R-edges and E_i(·) for the set of X-edges incident to i. Since E X_{aµ} = 0, we immediately get that W_{a_b,a_w}(Γ_ζ) = 0 unless, for each i ∈ V_b(Γ), each block of ζ_i has size at least two. By (2.3) we in fact get a bound containing the factor
$$\prod_{i \in V_b(\Gamma)} \mathbf{1}\bigl(\text{each block of } \zeta_i \text{ has size at least two}\bigr) \,. \qquad (5.46)$$

Figure 9: The process Γ → Γ_ζ. Since this operation is local at each black vertex, we only draw the neighbourhood of (more precisely, the unit ball around) a selected black vertex i ∈ V_b(Γ). The depicted black vertex is part of two diagonal R-groups and four off-diagonal R-groups; the latter are not drawn completely. The blocks of the partition ζ_i are drawn in grey. On the right we draw the corresponding neighbourhood of Γ_ζ.
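The vanishing mechanism behind (5.46) — a monomial in centered entries has zero expectation as soon as some entry appears exactly once — can be checked exactly for toy Rademacher entries, an assumption made purely for this illustration:

```python
# Exact expectation of a monomial prod_j X[a_j, mu_j] over independent
# Rademacher entries X[a, mu] in {-1, +1} (a toy stand-in for the
# centered entries of X), computed by enumerating all sign assignments.
import itertools

def expect_monomial(index_pairs):
    cells = sorted(set(index_pairs))
    total = 0
    for signs in itertools.product([-1, 1], repeat=len(cells)):
        x = dict(zip(cells, signs))
        prod = 1
        for pair in index_pairs:
            prod *= x[pair]
        total += prod
    return total / 2 ** len(cells)

# A singleton block: the entry X_{0,0} appears once, so the expectation vanishes.
assert expect_monomial([(0, 0), (0, 1), (0, 1)]) == 0.0
# Every block of size >= 2: X_{0,1}^2 X_{1,2}^2 has expectation 1,
# and the two rows (black indices) factorize by independence.
assert expect_monomial([(0, 1), (0, 1), (1, 2), (1, 2)]) == 1.0
```

Only the partitions ζ in which every block has size at least two survive the expectation, which is exactly the constraint recorded by the indicator in (5.46).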
The following result is the main power counting estimate for W_{a_b,a_w}. It shows that each black vertex of degree one in ∆ (corresponding to Latin indices that remained unpaired in the partition (5.15)) results in an extra factor M^{-1/2}. This will balance the passage from the ℓ^1-norm to the ℓ^2-norm of v, as explained in Section 5.3. Note that by definition of Γ_ζ we have V_b(Γ_ζ) = V_b(Γ) and deg_Γ(i) = deg_{Γ_ζ}(i) for all i ∈ V_b(Γ). For the following we therefore drop the argument of V_b. Define the subset V_b^* := {i ∈ V_b : deg_∆(i) = 1}.
For i ∈ V_b(Γ) let n_ζ(i) denote the number of vertices of Γ_ζ adjacent to i (these are all white, since there are no G-edges in Γ_ζ, and G-edges are the only edges that join two black vertices).
Proof. Recalling (5.46), we may assume without loss of generality that, for each i ∈ V_b, each block of ζ_i has size at least two; in particular, we may assume that deg_Γ(i) ≥ 2 for each i ∈ V_b. By definition, τ_0 and ρ leave deg(i) invariant, and τ_1 increases deg(i) by 0 or 4. In particular, they all leave the parity of deg(i) invariant for i ∈ V_b. We conclude that deg_Γ(i) is odd for each i ∈ V_b^*. Since each block of ζ_i has size at least two, we find that n_ζ(i) ≤ (deg_Γ(i) − 1)/2 for each i ∈ V_b^*. The proof is then completed by the following claim.
As observed above, τ_0 and ρ leave deg(i) invariant, and τ_1 increases deg(i) by 0 or 4. Let i ∈ V_b^*. Since by assumption deg_Γ(i) ≥ 2, we find that in fact deg_Γ(i) ≥ 5. This yields the claimed estimate, from which (5.47) follows.
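The parity gain used above can be verified by brute force: among set partitions of an odd-size set into blocks of size at least two, the number of blocks never exceeds (d − 1)/2. A small enumeration (generic partition code, not tied to the paper's notation):

```python
# Brute-force check of the parity counting: if deg(i) = d is odd and
# every block of the partition zeta_i has size >= 2, then the number
# of blocks n_zeta(i) is at most (d - 1)/2.
def partitions(collection):
    """Enumerate all set partitions of a nonempty list."""
    if len(collection) == 1:
        yield [collection]
        return
    first, rest = collection[0], collection[1:]
    for smaller in partitions(rest):
        # put `first` into an existing block ...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        # ... or into a block of its own
        yield [[first]] + smaller

for d in (3, 5, 7):
    admissible = [P for P in partitions(list(range(d)))
                  if all(len(b) >= 2 for b in P)]
    # One block must absorb the odd leftover element, so at most
    # (d - 1)/2 blocks survive; this is the source of the M^{-1/2} gain.
    assert max(len(P) for P in admissible) == (d - 1) // 2
```

Each block of ζ_i yields one free summation index, so losing one potential block at every odd-degree vertex is precisely the extra factor M^{-1/2} claimed by the lemma.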

5.14 The nontrivial leaves III: summing over a and conclusion of the proof of Proposition 5.12

As above, fix a tree vertex σ ∈ L_1(T), a graph Γ ∈ G(Θ_σ), and a partition ζ. In order to conclude the proof, we use Lemma 5.13 on each Γ_ζ to sum over a_w in (5.45).
Recall the quantity d from (5.31), defined as the number of off-diagonal G-edges plus the number of off-diagonal R-groups. By definition of ∆ we have d(∆) = p. Moreover, τ_0, τ_1, and ρ do not decrease d. Since by construction Γ has no G-entries, we conclude that Γ has at least p off-diagonal R-groups. We may therefore choose a set E_o(Γ) ⊂ E_R(Γ) of size at least p, such that each e ∈ E_o(Γ) is the centre of an off-diagonal R-group (see Section 5.6). The set E_o(Γ) is naturally mapped into E_R(Γ_ζ); its image is denoted by E_o(Γ_ζ). We denote by α(e) and β(e) the end points of e in Γ_ζ. By (5.4), we have
$$\prod_{e \in E_o(\Gamma_\zeta)} A_{a_b,a_w}(e, \Gamma_\zeta) \;\prec\; \prod_{e \in E_o(\Gamma_\zeta)} \Psi^{\mathbf{1}(\alpha(e) \neq \beta(e))} \,.$$
We may now sum over a_w on the right-hand side of (5.45). From Lemma 5.13 we obtain a bound in which each e ∈ E_o(Γ_ζ) contributes a factor
$$\Psi\,\mathbf{1}(\alpha(e) \neq \beta(e)) + \mathbf{1}(\alpha(e) = \beta(e)) \,.$$
In the second step we multiplied out the resulting p-fold product and classified all terms according to the number, k, of factors 1(α(e) = β(e)); we used that the total number of free summation variables is |V_w(Γ_ζ)| − k. In the third step we used that Σ_{i ∈ V_b} n_ζ(i) = |V_w(Γ_ζ)| and the bound Ψ ≥ N^{-1}.
We may now sum over a_b to prove (5.44). Using the bound (5.19), we obtain the claimed estimate, where the last step follows from the fact that, by definition of ∆, deg_∆(i) ≥ 1 for all i ∈ V_b, as well as the estimate $\sum_a |v_a|^k \le M^{1/2}$ if k = 1 (and $\sum_a |v_a|^k \le 1$ if k ≥ 2, by ‖v‖_2 = 1). Summing over σ ∈ L_1(T) concludes the proof of Proposition 5.12.
In order to prove the second estimate of Theorem 3.13, we use the same η = N^{-1+ω} as above and write z = λ_α + iη. Taking the imaginary part inside the absolute value on the left-hand side of (3.16), we get
$$\mathbf{1}(\Xi)\, \operatorname{Im} \langle w, G(z) w \rangle \;\prec\; \operatorname{Im} m_{\phi^{-1}}(z) \,,$$
where in the second step we used (3.22), (3.5), and the fact that z ∈ S with high probability; this latter estimate follows from (2.5), the fact that γ_α ≥ 2ω by assumption, and Theorem 2.10. Repeating the above argument, we therefore find $\mathbf{1}(\Xi)\, |\langle u^{(\alpha)}, w \rangle|^2 \prec \phi^{-1} N^{-1+\omega}$, and the second claim of Theorem 3.13 follows.
Note first that if η ≥ κ then it is easy to see that (3.19) follows from (3.16), (3.6), and the lower bounds η ≥ κ and η ≥ N^{-2/3}. For the following we therefore assume that η ≤ κ.

Step 6. Using the independence of the entries of H and G^{(a_b)}, we may take the partial expectation in the rows (or, equivalently, columns) indexed by a_b. Note that now we have two classes of H-edges: white-black (incident to a black and a white vertex) and black-black (incident to two black vertices). Since the white indices are distinct from the black ones, the expectation factorizes over these two classes of H-edges. Exactly as in Section 5, taking the expectation of the white-black H-edges yields, for each i ∈ V_b, a partition of the white vertices adjacent to i, whereby each block of the partition must contain at least two vertices. The expectation over the black-black H-edges imposes an additional constraint among the loops incident to the white vertices, which is unimportant for the argument.
Finally, for i ≠ j ∈ V_b, we have the constraint that the number of edges joining i and j cannot be one.
Steps 7 and 8. The parity argument from the proof of Theorem 5.14 may be taken over with minor modifications, which arise from the additional black-black H-edges described in Step 6. Recall that the goal is to gain a factor N^{-1/2} from each black vertex i ∈ V_b that has odd degree. If i is incident to a black-white H-edge, the counting from Section 5 applies unchanged and yields a power of N^{-1/2}. If i is not incident to a black-white H-edge, it must be incident to a black-black H-edge (recall that all graphs must be connected). By the constraints arising from the expectation in Step 6, i must then in fact be incident to at least two black-black H-edges which connect i to the same black vertex j. This yields a factor $E|H_{a_i a_j}|^2 \le C/N$, which is the desired small factor. (We may in general only allocate N^{-1/2} from the factor N^{-1} to the vertex i, since j may also be a vertex that has degree one in ∆, in which case we have to allocate the other factor in N^{-1} = N^{-1/2} \cdot N^{-1/2} to j.) This concludes the sketch of how the argument of Section 5 is to be modified for the proof of Theorem 2.12. We omit further details. Finally, Theorems 2.15 and 2.16 follow from Theorem 2.12 by repeating the arguments of Section 6 almost to the letter.