1 Introduction

We consider an array \((X_{ij})_{1 \le i \le j}\) of i.i.d. real random variables and set \(X_{ij} = X_{ji}\) for \(i > j\). Then, for each integer \(n \ge 1\), we may define the random symmetric matrix:

$$\begin{aligned} X = ( X_{ij} )_{1 \le i , j \le n}. \end{aligned}$$

The eigenvalues of the matrix \(X\) are real and are denoted by \(\lambda _n (X) \le \cdots \le \lambda _1(X)\). In the large \(n\) limit, the spectral properties of this matrix are now well understood as soon as \(X_{ij}\) has at least two finite moments; see e.g. [35, 8, 15, 28] for reviews, or [20, 21, 23, 27, 29] for recent results on universality. The starting point of this analysis is Wigner’s semi-circular law, which asserts that if the variance of \(X_{ij}\) is normalized to \(1\), then the empirical spectral measure

$$\begin{aligned} \frac{1}{n} \sum _{i=1}^n \delta _{\lambda _{i} (X) / \sqrt{n} } \end{aligned}$$

converges almost surely for the weak convergence topology to the semi-circular law \(\mu _2\) with support \([-2,2]\) and density \(f_2 ( x) = \frac{1}{2 \pi } \sqrt{ 4 - x ^2}\). As already advertised, many more properties of the spectrum are known. For example, if the entries are centered and have a subexponential tail, then, see [18, 19], for any \(p >2\) and \(\varepsilon > 0\),

$$\begin{aligned} \max \left\{ \Vert v \Vert _p : v \text{ eigenvector of } X \text{ with } \Vert v \Vert _2 =1\right\} \end{aligned}$$

is \(O ( n^{ 1 / p - 1/2 + \varepsilon })\), where \(\Vert v \Vert _p = \left( \sum _{i=1} ^n |v_i|^p \right)^{\frac{1}{p}}\). This implies that the eigenvectors are strongly delocalized.
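As a numerical sketch of these two facts (not part of the analysis; the dimension and bin count below are arbitrary illustrative choices), one can sample a Gaussian Wigner matrix, compare the rescaled spectrum with the density \(f_2\), and check the \(L^4\)-norm of the eigenvectors:

```python
import numpy as np

# Illustrative sketch: Gaussian Wigner matrix, semicircle law, and ||v||_4.
rng = np.random.default_rng(0)
n = 2000
G = rng.standard_normal((n, n))
X = np.triu(G) + np.triu(G, 1).T            # symmetric, i.i.d. entries for i <= j
evals, evecs = np.linalg.eigh(X / np.sqrt(n))

hist, edges = np.histogram(evals, bins=50, range=(-2.2, 2.2), density=True)
mids = (edges[:-1] + edges[1:]) / 2
f2 = np.sqrt(np.maximum(4 - mids**2, 0.0)) / (2 * np.pi)   # f_2(x) on [-2, 2]
print("max |empirical density - f_2| =", np.abs(hist - f2).max())

# Delocalization: ||v||_4 should be O(n^{1/4 - 1/2 + eps}) for unit eigenvectors.
norms4 = (np.abs(evecs) ** 4).sum(axis=0) ** 0.25
print("max ||v||_4 =", norms4.max(), "vs n^{-1/4} =", n ** -0.25)
```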

When the second moment is no longer finite, much less is known and the picture is different. Let \(0 < \alpha < 2\) and assume for simplicity that

$$\begin{aligned} \mathbb{P }( |X_{11} |\ge t ) \sim _{t \rightarrow \infty } t^{-\alpha }. \end{aligned}$$
(1)

Then we are no longer in the basin of attraction of Wigner’s semi-circular law: now the empirical spectral measure

$$\begin{aligned} \frac{1}{n} \sum _{i=1}^n \delta _{\lambda _i (X) / n^{ 1 / \alpha }} \end{aligned}$$

converges a.s. for the weak convergence topology to a new limit law \(\mu _\alpha \), see [7] and also [6, 10]. It is known that \(\mu _\alpha \) is symmetric, has full support, and has a bounded density \(f_\alpha \) which is analytic outside a finite set of points. Moreover, \(f_\alpha (0)\) has an explicit expression and, as \(x\) goes to \(\pm \infty \), \(f_\alpha ( x) \sim (\alpha / 2) |x|^{-\alpha -1}\). Finally, as \(\alpha \) goes to \(2\), \(\mu _\alpha \) converges for the weak convergence topology to \(\mu _2\). One of the difficulties with this type of random matrix is the lack of an exactly solvable model such as the Gaussian Unitary Ensemble or the Gaussian Orthogonal Ensemble in the finite variance case.
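For a numerical illustration of this regime, one can sample \(\alpha\)-stable entries via the classical Chambers–Mallows–Stuck method; the sketch below uses a normalization that matches (1) only up to a constant, which does not affect the \(n^{1/\alpha}\) scaling it illustrates.

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler for a standard symmetric alpha-stable
    # law; its tail matches (1) only up to a constant.
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
alpha, n = 0.8, 1000
Z = symmetric_stable(alpha, (n, n), rng)
X = np.triu(Z) + np.triu(Z, 1).T
evals = np.linalg.eigvalsh(X / n ** (1 / alpha))

# The bulk stays of order one under this scaling, with heavy tails
# reflecting f_alpha(x) ~ (alpha/2)|x|^{-alpha-1} at infinity.
print("median |lambda| =", np.median(np.abs(evals)))
print("fraction with |lambda| > 5 =", np.mean(np.abs(evals) > 5))
```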

In the present paper, we give a rate of local convergence to \(\mu _\alpha \) and investigate the behavior of the eigenvectors of \(X\). In a fascinating article [12], Bouchaud and Cizeau made predictions for the eigenvectors of \(X\). They argue that the situation is different for \(0< \alpha < 1\) and for \(1 < \alpha < 2\). They quantify the localized nature of a vector \(v\) with \(\Vert v \Vert _2 =1\) by two scalars: \( \Vert v \Vert _4\) and \( \Vert v \Vert _1\). If \(\Vert v \Vert _4 = o (1)\) the vector is said to be delocalized; if \(\Vert v \Vert _4 \ne o (1)\) but \(\Vert v \Vert _1 \gg 1\) then \(v\) is weakly delocalized (we might also say weakly localized); while if \(\Vert v \Vert _1 = O(1)\) then the vector is localized. Now suppose that \(v\) is an eigenvector of \(n^{-1/\alpha } X\) associated to an eigenvalue \(\lambda \). For \(1 < \alpha < 2\), we have proved that all but \(o(n)\) of the eigenvectors are delocalized (this disproves the prediction of [12]). Our result is in fact stronger, as we show that the \(L^\infty \) norm of these vectors goes to zero, which ensures that the \(L^p\) norm goes to zero for all \(p>2\) and to infinity for all \(p<2\) by duality.

For \(0 < \alpha < 1\), Bouchaud and Cizeau predict that with high probability, if \(|\lambda | < E_\alpha \) then \(v\) is weakly delocalized, while if \(|\lambda | > E_\alpha \), \(v\) is localized. It is reasonable to predict that \(E_\alpha \) goes to \(0\) as \(\alpha \downarrow 0\) and goes to infinity as \(\alpha \uparrow 2\). It is not clear whether this threshold \(E_\alpha \) depends on the choices of the norms \(L^1\) and \(L^4\) to quantify localization and delocalization. We are far from proving the existence of such a threshold within the spectrum. Nevertheless, for \(0 < \alpha < 2/3\), we have proved that there exists \(E_\alpha >0\) such that if \(|\lambda | \ge E_\alpha \) then localization occurs: the mass of \(v\) is carried by at most \( n^{1-\delta _\alpha }\) entries, for some \(\delta _\alpha >0\).

This heavy-tailed matrix model is in some sense similar to the adjacency matrix of Erdős–Rényi graphs with parameter \(p/n\), since its entries are of order one only with probability of order \(1/n\). In the regime where \(p\) goes to infinity faster than \(n^{2/3}\), these adjacency matrices were shown to belong to the universality class of Wigner random matrices [16, 17]. If \(pn / ( \log n )^c\) goes to infinity for some constant \(c\), the delocalization of eigenvectors was also proved in these articles. In the related model of the adjacency matrix of uniformly sampled \(d\)-regular graphs on \(n\) vertices, the delocalization of eigenvectors has been studied by Dumitriu and Pal [14] and Tran et al. [30]. It was also conjectured by Sarnak that as soon as \(d \ge 3\), this model belongs to the universality class of Wigner random matrices.

1.1 Main results

Let us now be more precise. Throughout the paper, the entries of the array \((X_{ij})_{1 \le i \le j}\) will be real i.i.d. symmetric \(\alpha \)-stable random variables such that for all \(t \in \mathbb{R }\),

$$\begin{aligned} \mathbb{E }\exp ( i t X_{11} ) = \exp ( - w_\alpha |t |^\alpha ), \end{aligned}$$

for some \(0 < \alpha < 2\) and \(w_\alpha = \pi / ( \sin (\pi \alpha / 2)\Gamma (\alpha )) \). With this choice, the random variables \((X_{ij})\) are normalized in the sense that (1) holds. The assumption that the random variables follow an \(\alpha \)-stable law should not be crucial for our results; it will however substantially simplify some proofs. We define the Hermitian matrix

$$\begin{aligned} A_n = a_n^{-1} X \quad \text{ with} \quad a_n = n^{1 / \alpha }. \end{aligned}$$

The eigenvalues of the matrix \(A\) are denoted by \(\lambda _n ( A) \le \cdots \le \lambda _1 (A)\). The empirical spectral measure of \(A\) is defined as

$$\begin{aligned} \mu _A = \frac{1}{n} \sum _{i=1}^n \delta _{\lambda _i ( A)} = \frac{1}{n} \sum _{i=1}^n \delta _{\lambda _i (X) / n^{ 1 / \alpha }}. \end{aligned}$$

The resolvent of \(A\) will be denoted by

$$\begin{aligned} R(z) = ( A - z ) ^{-1}, \end{aligned}$$

where \(z \in \mathbb{C }_+ = \{z \in \mathbb{C }: \mathrm{Im}(z) >0\}\). The Cauchy–Stieltjes transform of \(\mu _A\) is easily recovered from the resolvent:

$$\begin{aligned} g_{\mu _A} ( z) = \int \frac{1}{x - z } \mu _A (dx) = \frac{1}{n} {\mathrm{tr }}( R (z ) ). \end{aligned}$$
(2)
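Identity (2) is elementary but worth keeping in mind; the following sketch checks it numerically on an arbitrary symmetric matrix (the matrix used here is merely illustrative).

```python
import numpy as np

# Sanity check of identity (2): the Cauchy-Stieltjes transform of mu_A
# equals the normalized trace of the resolvent R(z).
rng = np.random.default_rng(1)
n = 300
M = rng.standard_normal((n, n))
A = (M + M.T) / np.sqrt(2 * n)
z = 0.3 + 0.1j
g_from_spectrum = np.mean(1.0 / (np.linalg.eigvalsh(A) - z))
g_from_resolvent = np.trace(np.linalg.inv(A - z * np.eye(n))) / n
assert abs(g_from_spectrum - g_from_resolvent) < 1e-8
```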

From [7, 10], for any fixed interval \(I \subset \mathbb{R }\), a.s. as \(n \rightarrow \infty \),

$$\begin{aligned} \frac{ \mu _{A_n} ( I )}{ | I | } - \frac{ \mu _{\alpha } ( I ) }{ |I | } \; \rightarrow \; 0, \end{aligned}$$
(3)

where \(| I |\) denotes the length of the interval \(I\). As in [20, 21], the opening move for proving statements about the eigenvectors of \(A\) is to reinforce the convergence (3) for small intervals whose length vanishes with \(n\). We will express our main results in terms of a scalar \(\rho \) depending on \(\alpha \):

$$\begin{aligned} \rho = \left\{ \begin{array}{l@{\quad }ll} \frac{1}{2}&\text{ if}&\frac{8}{5} \le \alpha < 2 \\ \frac{\alpha }{ 8 - 3 \alpha }&\text{ if}&1 < \alpha < \frac{8}{5} \\ \frac{\alpha }{ 2+ 3 \alpha }&\text{ if}&0 < \alpha \le 1. \end{array} \right. \end{aligned}$$

The scalar \(\rho \) depends continuously on \(\alpha \) and is non-decreasing. Roughly speaking, we are able to prove that the convergence (3) holds for all intervals of size larger than \(n^{ - \rho + o(1)}\). A precise statement is the following.
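For reference, the following small helper transcribes the display above and checks continuity at the breakpoints \(\alpha = 1\) and \(\alpha = 8/5\), together with monotonicity (a restatement of the definition, not new content).

```python
# rho(alpha) as defined above, with continuity and monotonicity checks.
def rho(alpha):
    if 8 / 5 <= alpha < 2:
        return 1 / 2
    if 1 < alpha < 8 / 5:
        return alpha / (8 - 3 * alpha)
    if 0 < alpha <= 1:
        return alpha / (2 + 3 * alpha)
    raise ValueError("alpha must lie in (0, 2)")

assert abs(rho(1.0) - 1 / 5) < 1e-12          # both branches give 1/5 at 1
assert abs(rho(8 / 5) - 1 / 2) < 1e-12        # both branches give 1/2 at 8/5
assert all(rho(k / 100) <= rho(k / 100 + 0.01) + 1e-12 for k in range(1, 198))
```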

Theorem 1.1

(Local convergence of the empirical spectral distribution) Let \(0 < \alpha < 2\). There exists a finite set \({\mathcal{E }}_\alpha \subset \mathbb{R }\) such that if \(K \subset \mathbb{R }\backslash {\mathcal{E }}_\alpha \) is a compact set and \(\delta >0\), the following holds. There are constants \(c_0, c_1 >0\) such that for all integers \(n \ge 1\), if \(I\subset K\) is an interval of length \( |I| \ge c_1 n^{-\rho } ( \log n)^2\), then

$$\begin{aligned} \left| \mu _{A} (I) - \mu _\alpha ( I) \right| \le \delta |I |, \end{aligned}$$

with probability at least \(1 - 2 \exp \left( - c_0 n \delta ^2 | I | ^2 \right)\).

In the forthcoming Theorem 3.5, we will give a slightly stronger form of Theorem 1.1: we will allow the parameter \(\delta \) to depend explicitly on \(n\) and \(| I |\), and the logarithmic correction in front of \(n^{-\rho }\) will be sharpened. The proof of Theorem 1.1 will be based on estimates of the diagonal coefficients of the resolvent matrix \(R(z)\) as \(z = E + i \eta \) gets close to the real axis, with \(\eta = n^{ - \rho + o(1)}\). For technical reasons, we have only been able to establish (3) for intervals outside the finite set \({\mathcal{E }}_\alpha \), which contains \(0\). The same type of result should hold for all sufficiently large intervals. In Proposition 2.1, we will give an upper bound on \(\mu _{A} (I) \) (i.e. a Wegner estimate) which will be valid for all intervals of size larger than \(n^{- (\alpha + 2) / 4} \). The threshold \(\rho \le \frac{1}{2}\) may be optimal, even though for Wigner’s matrices the corresponding exponent is simply one, since the spectral measure of heavy-tailed random matrices fluctuates like \(O(n^{-1/2})\) rather than like \(O(n^{-1})\) for Wigner’s matrices (see [9, 26]).

Theorem 1.1 will have the following corollary on the delocalization of the eigenvectors.

Theorem 1.2

(Delocalization of eigenvectors) Let \(1 < \alpha < 2\). There exist a finite set \({\mathcal{E }}_\alpha \subset \mathbb{R }\) and a constant \(c >0\) such that if \(K \subset \mathbb{R }\backslash {\mathcal{E }}_\alpha \) is a compact set, with probability tending to \(1\),

$$\begin{aligned} \max \{ \Vert v_k \Vert _\infty : 1 \le k \le n , \lambda _k (A) \in K\} \le n^{ -\rho (1 - \frac{1}{ \alpha })} ( \log n )^{c} , \end{aligned}$$
(4)

where \(v_1, \ldots , v_n\) is an orthonormal basis of eigenvectors of \(A\) associated to the eigenvalues \(\lambda _1 (A), \ldots , \lambda _n (A)\).

Notice that for \(p > 2\), \(\Vert v \Vert _p \le \Vert v \Vert ^{2/p}_2 \Vert v \Vert ^{1 - 2/p}_\infty \). Hence, Theorem 1.2 implies that the \(L^p\)-norm of any eigenvector associated to an eigenvalue in \(K\) goes to \(0\) in probability as soon as \(p > 2\). Similarly, from \(\Vert v \Vert ^2_2 \le \Vert v \Vert _1 \Vert v \Vert _\infty \), we get a lower bound of order \(n^{ \rho (1 - \frac{1}{\alpha }) + o(1)}\) on the \(L^1\)-norm of these eigenvectors. Note that our estimate becomes trivial as \(\alpha \downarrow 1\) and gives an upper bound of order \(n ^ { - 1 / 4 + o (1)}\) as \(\alpha \uparrow 2\). For any \(\kappa > 0\), we will see in the proof of Theorem 1.2 that, by increasing \(c\) suitably, the probability that the event (4) holds is at least \(1 - n^{-\kappa }\).
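Both interpolation inequalities used in this paragraph are easily checked numerically; a minimal sketch on random unit vectors:

```python
import numpy as np

# Check on random unit vectors: ||v||_p <= ||v||_2^{2/p} ||v||_inf^{1-2/p}
# for p > 2, and ||v||_2^2 <= ||v||_1 ||v||_inf.
rng = np.random.default_rng(2)
p = 4.0
for _ in range(100):
    v = rng.standard_normal(500)
    v /= np.linalg.norm(v)                          # ||v||_2 = 1
    norm_p = (np.abs(v) ** p).sum() ** (1 / p)
    vinf = np.abs(v).max()
    assert norm_p <= vinf ** (1 - 2 / p) + 1e-12    # ||v||_p -> 0 if vinf -> 0
    assert 1.0 <= np.abs(v).sum() * vinf + 1e-12    # ||v||_1 >= 1 / vinf
```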

We now present our result on localization of eigenvectors. We are not able to prove localization for all eigenvectors, but only for “typical” eigenvectors associated to an eigenvalue in a small interval. More precisely, we consider \(v_1, \ldots , v_n\) an orthonormal basis of eigenvectors of \(A\) associated to the eigenvalues \(\lambda _1 (A) , \ldots , \lambda _n (A)\). If \(I\) is an interval of \(\mathbb{R }\), we define \(\Lambda _I\) as the set of eigenvectors whose eigenvalues are in \(I\). Then, if \(\Lambda _I\) is not empty, for \( 1 \le k \le n\), set

$$\begin{aligned} W_I (k) = \frac{ n}{| \Lambda _I | } \sum _{v \in \Lambda _I} \langle v , e_k \rangle ^2, \end{aligned}$$

where, throughout this paper,

$$\begin{aligned} |\Lambda _I | = n \mu _A ( I) = N_I \end{aligned}$$
(5)

is the cardinality of \(\Lambda _I\). The quantity \(W_I(k) / n\) is the average amplitude of the \(k\)-th coordinate of the eigenvectors in \(\Lambda _I\). By construction, the average of \(W_I\) is \(1\):

$$\begin{aligned} \frac{1}{n} \sum _{k =1} ^ n W_I (k) = 1. \end{aligned}$$

If the eigenvectors in \(\Lambda _I\) are localized and \(I\) contains few eigenvalues, then we might expect that for some \(k\), \(W_I (k) \gg 1\), while for most of the others \(W_I(k) = o(1)\). More quantitatively, fix \(0 < \delta < 1\) and assume that for some \(0 < \kappa < 1\) and \(0 < \varepsilon < 1\),

$$\begin{aligned} \frac{1}{n} \sum _{k =1} ^ n W_I (k)^\kappa \le \varepsilon , \end{aligned}$$

then, setting \(J=\{ k : W_I (k) \ge (\delta ^{-1} \varepsilon )^{ - \frac{1}{ 1 - \kappa } } \}\), we find

$$\begin{aligned} \frac{1}{n} \sum _{k \in J} W_I (k) \ge 1 - \delta . \end{aligned}$$

In particular, all but a proportion \(\delta \) of the mass of \(W_I\) is carried by a set \(J\) of cardinality at most \(|J| \le n ( \delta ^{-1} \varepsilon )^{ \frac{1}{ 1 - \kappa } }\). If \(\varepsilon \) goes to \(0\) with \(n\), this indicates a localization phenomenon. With this in mind, we can state our result.
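The following sketch computes the statistic \(W_I\) and the set \(J\) on a sample matrix. For self-containedness it uses symmetrized Pareto entries, which satisfy (1) exactly for \(t \ge 1\) but are not \(\alpha\)-stable; the interval and \(\delta\) are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of W_I and J with symmetrized Pareto entries (illustrative only).
rng = np.random.default_rng(3)
alpha, n = 0.5, 1000
mag = rng.uniform(size=(n, n)) ** (-1 / alpha)      # P(mag > t) = t^{-alpha}
Z = rng.choice([-1.0, 1.0], size=(n, n)) * mag
X = np.triu(Z) + np.triu(Z, 1).T
evals, evecs = np.linalg.eigh(X / n ** (1 / alpha))

E, eta = 3.0, 0.25
in_I = np.abs(evals - E) <= eta                     # eigenvalues in I
if in_I.any():
    W = n * (evecs[:, in_I] ** 2).sum(axis=1) / in_I.sum()   # mean(W) = 1
    kappa, delta = alpha / 2, 0.1
    eps = (W ** kappa).mean()
    J = W >= (eps / delta) ** (-1 / (1 - kappa))
    print("mass on J:", W[J].sum() / n, " |J| =", int(J.sum()), "entries")
```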

Theorem 1.3

(Localization of eigenvectors) Let \(0 < \alpha < 2/3\), \(0 < \kappa < \alpha /2\) and let \(\rho \) be as above. There exists \(E_{\alpha ,\kappa }\) such that for any compact \(K \subset [-E_{\alpha ,\kappa },E_{\alpha ,\kappa }]^c\), there are constants \(c_0,c_1 >0\) such that, for all integers \(n \ge 1\), if \(I\subset K\) is an interval of length \( |I| \ge n^{-\rho } ( \log n)^2\), then

$$\begin{aligned} \frac{1}{n} \sum _{k =1} ^ n W_I (k)^{\frac{\alpha }{2}} \le c_1 |I|^ \kappa , \end{aligned}$$

with probability at least \(1 - 2 \exp \left( - c_0 n | I | ^4 \right)\).

This result is interesting when \(I = [ E - n^{-\rho + o(1)}, E + n^{-\rho + o(1)} ]\) is a small neighborhood of some large \(E\). It then shows that, for any \(0 < \kappa < \alpha /2\), the mass of the eigenvectors corresponding to eigenvalues around \(E\) is concentrated on approximately \(n^{ 1 - 2 \rho \kappa / ( 2 - \alpha ) }\) entries, as long as \(|E|\) is large enough. The proof of Theorem 1.3 will be done by showing that

$$\begin{aligned} \frac{1}{n} \sum _{k=1} ^ n \left( \mathrm{Im}R(E + i \eta )_{kk} \right)^ {\frac{\alpha }{2} } \end{aligned}$$
(6)

vanishes if \(\eta =n^{-\rho + o(1)} \), even though

$$\begin{aligned} \frac{1}{n} \sum _{k=1} ^ n \mathrm{Im}R(E + i \eta )_{kk} \end{aligned}$$

stays bounded away from \(0\). This phenomenon will have an interpretation in terms of a random self-adjoint operator introduced in [10] which is, in some sense, the limit of the matrices \(A\). We will prove that the imaginary part of its resolvent vanishes at \(E + i \eta \), with \(\eta = o(1)\) and \(|E|\) large enough, while its expectation does not, see Theorem 5.1. Note that if \(0 < \eta \ll n^{-1}\), then we necessarily have that, for almost all \(E\), \(\mathrm{Im}\,R(E + i \eta )_{kk}\) converges to \(0\). The fact that our estimate \( |I| \ge n^{-\rho }\) gets worse as \(\alpha \) goes to \(0\) is an artifact of the proof: our rate of convergence of \(R(E + i \eta )_{kk}\) to its limit gets worse as \(\alpha \) gets small. It is however intuitively clear that the localization should be stronger when \(\alpha \) is smaller. Nevertheless, in the forthcoming Theorem 5.9, we will prove that, for any \(0 < \alpha < 2/3\), the expression (6) goes to \(0\) if \(\eta =n^{-\frac{1}{6}} \). Finally, it is worth noticing that computing fractional moments of the resolvent matrix as a way to prove localization is already present in the literature on random Schrödinger operators, see e.g. [2].
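A rough numerical look at the diagnostic (6): the sketch below compares the fractional moments of the diagonal resolvent entries at a small and a large energy \(E\). The parameters are illustrative (they are not the thresholds of Theorem 1.3), and symmetrized Pareto entries stand in for the stable ones.

```python
import numpy as np

# Fractional moments of Im R_kk at z = E + i*eta, small vs large E.
rng = np.random.default_rng(4)
alpha, n = 0.5, 1500
Z = rng.choice([-1.0, 1.0], size=(n, n)) * rng.uniform(size=(n, n)) ** (-1 / alpha)
X = np.triu(Z) + np.triu(Z, 1).T
A = X / n ** (1 / alpha)
eta = n ** (-1 / 6)

for E in (0.5, 8.0):
    R = np.linalg.inv(A - (E + 1j * eta) * np.eye(n))
    im_diag = np.diag(R).imag                       # positive for z in C_+
    print(f"E={E}: mean Im R_kk = {im_diag.mean():.3f},",
          f"mean (Im R_kk)^(alpha/2) = {(im_diag ** (alpha / 2)).mean():.3f}")
```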

The remainder of the paper is organized as follows. In Sect. 2 we establish general upper bounds on \(N_I\) defined by (5). Section 3 contains the proof of Theorem 1.1. Section 4 is devoted to the proof of Theorem 1.2. The arguments developed in these two sections are based on ideas from the seminal work of Erdős, Schlein and Yau (see e.g. [18–20]) concerning Wigner’s matrices with enough moments, as well as on the analytic approach to heavy-tailed matrices initiated in [6, 7]. However, a technical gap had to be bridged, due to the lack of the concentration inequalities and simple loop equations that hold for finite second moment Wigner matrices. A few of the required new estimates, due to the specific nature of heavy-tailed matrices, are contained in the appendix on concentration inequalities and stable laws. In Sect. 5 we prove Theorem 1.3, which is based on the representation of the asymptotic spectral measure given in [10] and a new fixed point argument which allows us to prove the vanishing of the imaginary part of the resolvent in the regime \(\alpha \in (0,2/3)\).

The whole article is quite technical, but will hopefully be useful for further local studies of the spectrum of random matrices which do not belong to the universality class of Wigner’s semi-circle law.

2 Upper bound on the spectral counting measure

For \(\eta > 0\) and \( E \in \mathbb{R }\), we set \(I = [E - \eta , E + \eta ]\). The goal of this section is to provide a rough upper bound on \(N_I\) when \(\eta \) is large enough, where \(N_I\) was defined by (5). Let

$$\begin{aligned} \gamma = \left( \frac{1}{2} + \frac{1}{\alpha }\right)^{-1}. \end{aligned}$$
(7)

Proposition 2.1

(Upper bound on counting measure) Let \(0 < \alpha < 2\). There exist \(c,c^{\prime }>0\) depending only on \(\alpha \) such that if \(\eta \ge n^{- \frac{\alpha + 2}{ 4}}\), then, for all integers \(n\), for all \(t\ge c\),

$$\begin{aligned} \mathbb{P }\left( N_I \ge t n \eta ^{\gamma } \right)\le c\exp (-c^{\prime } t^{\frac{2}{2-\alpha }})+ 2n\exp (-c^{\prime } t^{\frac{4}{2+\alpha }}). \end{aligned}$$

This bound will be refined in the forthcoming Proposition 3.6. The upper bound on the eigenvalue counting measure implies an upper bound for the trace of the resolvent.

Corollary 2.2

(Trace of resolvent) Let \(0 < \alpha < 2\) and \(z = E + i \eta \in \mathbb{C }_{+}\). There exists \(c >0\) depending only on \(\alpha \) such that if \(\eta \ge n^{- \frac{\alpha + 2}{ 4}}\), then, for all integers \(n\),

$$\begin{aligned} \mathbb{E }{\mathrm{tr }}R(z) R^*(z) \le c(\log n)^{\frac{2+\alpha }{4}} n \eta ^{- \frac{4}{2 + \alpha }}. \end{aligned}$$

Proof

By the spectral theorem

$$\begin{aligned} {\mathrm{tr }}R(z) R^*(z) = \sum _{j=1} ^n |\lambda _j (A)- z|^{-2}. \end{aligned}$$

Let \(\eta \) be the imaginary part of \(z\). Define \(I_0 = [E - \eta , E + \eta ]\) and, for every integer \(k \ge 0\), \(I_{k+1} = [E - 2^{k+1} \eta , E + 2^{k+1} \eta ] \backslash [E - 2^{k} \eta , E + 2^{k} \eta ] \). By construction, if \(\lambda _j (A) \in I_k\) then \(|\lambda _j (A)- z|^{-2} \le 2^{- 2k + 1} \eta ^{-2}\). Therefore, if \(N_{I_k} = n \mu _{A} (I_k)\) is the number of eigenvalues in \(I_k\), then

$$\begin{aligned} {\mathrm{tr }}R(z) R^*(z) \le \sum _{k\ge 0 } 2^{- 2k + 1} \eta ^{-2} N_{I_k}. \end{aligned}$$
(8)

We write \(I_k = I^+_ k \cup I^-_k\), where \(I^\pm _k = \mathbb{R }_{\pm } \cap I_k\). To estimate \(\mathbb{E }[ N_{I_k^\pm }]\), we apply Proposition 2.1. Namely, it yields that for each interval \(I\) of length \(\eta \ge n^{-\frac{\alpha +2}{4}}\) and any \(\tau \ge c\),

$$\begin{aligned} \mathbb{E }[ N_{I}]&= \int _0^\infty \mathbb{P }\left(N_{I}\ge t\right) dt \\&\le \tau n \eta ^\gamma + n\eta ^\gamma \int _\tau ^\infty \mathbb{P }\left(N_{I}\ge t n \eta ^\gamma \right) dt\\&\le n \eta ^\gamma \left(\tau +\int _\tau ^\infty \left(\exp (-c^{\prime } t^{\frac{2}{2-\alpha }})+ 2n\exp (-c^{\prime } t^{\frac{4}{2+\alpha }})\right)dt\right)\\&\le c_0 n \eta ^\gamma \left(\tau + n\exp (-c^{\prime } \tau ^{\frac{4}{2+\alpha }})\right), \end{aligned}$$

for some finite constant \(c_0 > 0 \) and \(\tau \ge 1\). Therefore, taking \(\tau \) of order \((\log n)^{\frac{2+\alpha }{4}}\), we deduce that there exists some finite constant \(c_1 > 0 \) such that

$$\begin{aligned} \mathbb{E }[ N_{I}]\le c_1 (\log n)^{\frac{2+\alpha }{4}} n \eta ^\gamma . \end{aligned}$$

Therefore, we deduce from (8) that

$$\begin{aligned} \mathbb{E }{\mathrm{tr }}R(z) R^*(z)\le 2c_1 (\log n)^{\frac{2+\alpha }{4}}\sum _{k\ge 0 } 2^{- 2k} \eta ^{-2} n (\eta 2^k)^{\gamma }\le 2c_1(\log n)^{\frac{2+\alpha }{4}} \, n \, \eta ^{- \frac{4}{2 + \alpha }}\sum _{k \ge 0} 2^{- \frac{4k}{2 + \alpha }}. \end{aligned}$$

\(\square \)

The rest of this section is devoted to the proof of Proposition 2.1.

2.1 A geometric upper bound

In this paragraph, we recall a general upper bound for \(N_I\) due to Erdős, Schlein and Yau [20]. Namely, if we let \(A^{(k)}\) be the principal minor of \(A\) where the \(k\)-th row and column have been removed, and \(W^{(k)}\) be the vector space spanned by the eigenvectors of \(A^{(k)}\) corresponding to eigenvalues at distance greater than \(\eta \) from \(E\), then we have

$$\begin{aligned} N_I \le 4 \eta ^2 \, a_n^2 \, \sum _{k =1}^n {\mathrm{dist }}(X_k,W^{(k)})^{-2}\,. \end{aligned}$$
(9)

Let us prove (9). We start with the resolvent formula,

$$\begin{aligned} R(z)_{kk} = - \left( z - a_n^{-1} X_{kk} + a_n ^{-2} \langle X_k , R^{(k)}X_k \rangle \right)^{-1}\!, \end{aligned}$$
(10)

where \(X_k = ( X_{k1}, \ldots ,X_{k k-1}, X_{k k+1}, \ldots , X_{kn}) \in \mathbb{R }^{n-1}\) and \(R^{(k)} = ( A^{(k)} - z ) ^{-1}\).

Identifying the real and imaginary parts, we get, using the fact that \(z\) and the eigenvalues of \(R^{(k)}\) are in \(\mathbb{C }_+\),

$$\begin{aligned} \mathrm{Im}R(z)_{kk}&\le \left( \mathrm{Im}\left(z - a_n^{-1} X_{kk} + a_n ^{-2} \langle X_k , R^{(k)}X_k \rangle \right) \right)^{-1} \\&\le a_n ^{2} \langle X_k , \mathrm{Im}R^{(k)}X_k \rangle ^{-1}. \end{aligned}$$

Let \((\lambda _{i}^{(k)})_{1 \le i \le n-1}\) and \((u_{i}^{(k)})_{1 \le i \le n-1}\) be the eigenvalues and eigenvectors of \(A^{(k)}\). Choosing \(z = E + i \eta \), we have the spectral decomposition

$$\begin{aligned} \mathrm{Im}R^{(k)} = \sum _{i = 1} ^{n-1} \frac{\eta }{ ( \lambda _i^{(k)} - E)^2 + \eta ^2 } u_i^{(k)} {u_i^{(k)}}^*\!. \end{aligned}$$

If \( |\lambda _i^{(k)} - E| \le \eta \), then \( \frac{\eta }{ ( \lambda _i^{(k)} - E)^2 + \eta ^2 } \ge 1 / ( 2 \eta )\). Therefore, we deduce

$$\begin{aligned} \mathrm{Im}R (z )_{kk} \le 2 \eta a_n^{2} \left(\sum _{i=1}^{n-1} 1_{|\lambda _i^{(k)} - E| \le \eta } \langle X_k , u_i ^{(k)} \rangle ^2 \right)^{-1}\,. \end{aligned}$$

We rewrite the above expression as

$$\begin{aligned} \mathrm{Im}R (z )_{kk} \le 2 \eta a_n^{2} {\mathrm{dist }}^{-2}( X_k, W^{(k)} ), \end{aligned}$$
(11)

where

$$\begin{aligned} W^{(k)} = \mathrm{span} \left\{ u_{i}^{(k)} : 1 \le i \le n-1 , \lambda _i^{(k)} \notin [E - \eta , E + \eta ] \right\} . \end{aligned}$$

Since \(I = [E - \eta , E + \eta ]\), the inequalities (11) and

$$\begin{aligned} {\mathrm{tr }}\, \mathrm{Im}R(z) = n \int \frac{\eta }{(E - \lambda )^2 + \eta ^2} \mu _A ( d\lambda ) \ge n \int _I \frac{\eta }{(E - \lambda )^2 + \eta ^2} \mu _A ( d\lambda ) \ge \frac{N_ I }{2 \eta } \end{aligned}$$

give (9). We set

$$\begin{aligned} N^{(k)}_I = | \{1 \le i \le n - 1 : \lambda ^{(k)}_i \in I \} |= n - 1 - \mathrm{dim} (W^{(k)}) = (n -1) \mu _{A^{(k)}} ( I). \end{aligned}$$

From Weyl’s interlacing theorem, we have

$$\begin{aligned} N_I - 1 \le N^{(k)}_I = n - 1 - \mathrm{dim} (W^{(k)}) \le N_I +1. \end{aligned}$$
(12)
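The bound (9) is straightforward to verify numerically; the sketch below computes \(\mathrm{dist}(X_k, W^{(k)})\) from the eigendecomposition of each minor on a small matrix, with symmetrized Pareto entries as an illustrative stand-in for the stable entries.

```python
import numpy as np

# Verify (9): dist(X_k, W^(k))^2 is the mass of X_k on the eigenvectors of
# the minor A^(k) whose eigenvalues lie within eta of E.
rng = np.random.default_rng(5)
alpha, n = 1.2, 120
Z = rng.choice([-1.0, 1.0], size=(n, n)) * rng.uniform(size=(n, n)) ** (-1 / alpha)
X = np.triu(Z) + np.triu(Z, 1).T
a_n = n ** (1 / alpha)
A = X / a_n
E, eta = 0.0, 0.3                                   # E in the bulk
N_I = np.sum(np.abs(np.linalg.eigvalsh(A) - E) <= eta)

rhs = 0.0
for k in range(n):
    idx = np.r_[0:k, k + 1:n]
    evals_k, evecs_k = np.linalg.eigh(A[np.ix_(idx, idx)])
    close = np.abs(evals_k - E) <= eta              # complement of W^(k)
    coords = evecs_k[:, close].T @ X[k, idx]
    rhs += 1.0 / (coords @ coords)                  # = dist(X_k, W^(k))^{-2}
print(N_I, "<=", 4 * eta**2 * a_n**2 * rhs)
```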

2.2 Proof of Proposition 2.1

We note that, up to increasing the constants \(c\) and \(1/c^{\prime }\), it is sufficient to prove the proposition for all \(\eta \ge c_1 n^{- \frac{\alpha + 2}{ 4}}\) for some \(c_1\) (indeed, \(N_{I^{\prime }} \le N_I\) if \(I^{\prime } \subset I\), and if \(N_I \le t n | I |^\gamma \) then \(N_{I^{\prime }} \le t n |I^{\prime }|^\gamma (|I | / |I^{\prime } | )^\gamma \)).

In the sequel, we use the shorthand \({\mathrm{dist }}_k = {\mathrm{dist }}( X_k , W^{(k)})\). From (9)–(12), we write

$$\begin{aligned} N_I \le {\mathbf{1 }}_{ N_I \le \lfloor t n \eta ^{\gamma }\rfloor } t n \eta ^{\gamma } + 4 \eta ^2 a_n^2 \, \sum _{k =1}^n {\mathrm{dist }}^{-2}_k {\mathbf{1 }}_{ N^{(k)}_I >\lfloor t n \eta ^{\gamma }\rfloor } . \end{aligned}$$
(13)

We have \( {\mathrm{dist }}^2 _k = \langle X_k , P_k X_k \rangle \), where \(P_k\) is the orthogonal projection onto \(W^{(k)}\). We note that \(X_k\) is independent of \(W^{(k)}\). From Lemma 7.1, there exist a positive \(\alpha /2\)-stable random variable \(S_k\) and a standard Gaussian vector \(G_k\), independent of \(S_k\), such that

$$\begin{aligned} {\mathrm{dist }}^2 _k = \Vert P_k G_k \Vert _\alpha ^2 S_k. \end{aligned}$$

Note that \(\eta \ge c n^{- \frac{\alpha + 2}{ 4}}\) is equivalent to \(n \eta ^{\frac{2\gamma }{\alpha }} \ge c^{\frac{2\gamma }{\alpha }}\). By Corollary 6.2 (applied to \(A=P_k\)), there exist universal constants \(C,\delta \) such that if \(N^{(k)}_I\ge t c^{-\gamma } n \eta ^{\gamma } \ge t n^{1-\frac{\alpha }{2}}\) for \(t \ge C c^{\gamma }\), then with probability at least

$$\begin{aligned} 1 - 2 \exp ( - \delta n ( t \eta )^{\frac{2 \gamma }{ \alpha }} /2) \ge 1 - 2 \exp ( - c_0 t ^{\frac{4}{2 + \alpha }} ) \end{aligned}$$
(14)

we have,

$$\begin{aligned} \Vert P_k G_k \Vert _\alpha \ge \delta \left( t n \eta ^\gamma \right)^{\frac{1}{\alpha }}\!\!. \end{aligned}$$

Hence, if \(F_n\) denotes the event that \(N_I > \lfloor t n \eta ^{\gamma } \rfloor \) and for all \(k, \Vert P_k G_k \Vert _\alpha \ge \delta \left( t n \eta ^\gamma \right)^{\frac{1}{\alpha }}\), we have from (9) and (13)

$$\begin{aligned} N_I {\mathbf{1 }}_{F_n} \le 4 \eta ^2 \delta ^{-2} \left( t \eta ^\gamma \right)^{-\frac{2}{\alpha }} \, \sum _{k =1}^n S_k^{-1}. \end{aligned}$$

With our choice of \(\gamma \), \(2 - 2 \gamma / \alpha = \gamma \). Hence, with \(c_1 = 4 \delta ^{-2}\), we deduce

$$\begin{aligned} N_I {\mathbf{1 }}_{F_n} \le c_1 n \eta ^\gamma t^{-\frac{2}{\alpha }} \left( \frac{1}{n} \sum _{k =1}^n S_k^{-1} \right). \end{aligned}$$

The variables \((S_k)_{1 \le k \le n}\) have the same distribution but are correlated. Nevertheless, note that the function \(x \mapsto \exp ( x^\delta )\) is convex on \([b_\delta , +\infty ) \), with \(b_\delta = 0\) if \(\delta \ge 1\) and \(b_\delta = (1/\delta -1)^{1/\delta }\) if \(0 < \delta < 1\). Hence, from the Jensen inequality, we deduce

$$\begin{aligned} \exp \left\{ \left( \frac{1}{n} \sum _{k =1}^n S_k^{-1} \right)^{\delta } \right\} \le \exp \left\{ \left( \frac{1}{n} \sum _{k =1}^n S_k^{-1} \vee b_\delta \right)^{\delta } \right\} \le \frac{1}{n} \sum _{k =1}^n \exp \left( S_k ^{-\delta } \vee b^\delta _\delta \right). \end{aligned}$$

In particular for every \(c_2 > 0\),

$$\begin{aligned} \mathbb{E }\exp \left\{ c_2 \left( \frac{1}{n} \sum _{k =1}^n S_k^{-1} \right)^{\frac{\alpha }{2 - \alpha } }\right\} \le c_3 \mathbb{E }\exp \left\{ c_2 \left( S_1 ^{-\frac{\alpha }{2 - \alpha } } \right) \right\} \!\!, \end{aligned}$$

where \(c_3 = \exp ( c_2 b_{\alpha / (2 - \alpha ) } ^{\alpha / (2 - \alpha )} )\). By Lemma 7.3, for \(c_2\) small enough, the above is finite. Thus, from the Markov inequality, for some constants \(c_4,c_5 >0\),

$$\begin{aligned} \mathbb{P }\left( N_I {\mathbf{1 }}_{F_n} > t n \eta ^\gamma \right) \le \mathbb{P }\left( \frac{1}{n} \sum _{k =1}^n S_k^{-1} > c_1^{-1} t^{ \frac{2}{\alpha }} \right) \le c_4 \exp ( - c_5 t^{ \frac{2}{2 - \alpha }}). \end{aligned}$$

Therefore, by (14), we deduce

$$\begin{aligned} \mathbb{P }\left( N_I > t n \eta ^\gamma \right)&\le \mathbb{P }\left( \{N_I > t n \eta ^\gamma \}\cap F_n^c \right) + \mathbb{P }\left( N_I {\mathbf{1 }}_{F_n} > t n \eta ^\gamma \right) \\&\le 2n \exp ( - c_0 t ^{\frac{4}{2 + \alpha }} ) + c_4 \exp ( - c_5 t^{ \frac{2}{2 - \alpha }}) \end{aligned}$$

which completes the proof of the proposition.

3 Local convergence of the spectral measure

To prove the local convergence of the spectral measure, we shall prove that an observable of the resolvent nearly satisfies a fixed point equation, which also entails an approximate equation for the resolvent. Such an equation was already derived in [6, 7], but the error terms are here carefully estimated. This step will be crucial to obtain, in the second part of this section, a rate of convergence of the Stieltjes transform of the spectral measure toward its limit. The range of convergence will first be derived roughly, and then improved for \(\alpha >1\) thanks to bootstrap arguments.

3.1 Approximate fixed point equation

The observables we shall be interested in will be

$$\begin{aligned} Y(z) := \mathbb{E }[( - i R(z)_{11} ) ^\frac{\alpha }{2} ] \quad \text{ and} \quad X(z) := \mathbb{E }[- i R (z)_{11} ]\,. \end{aligned}$$
(15)

(For \(1 \le k , \ell \le n\), we will write interchangeably \(R_{ k \ell } (z)\) or \(R (z)_{ k \ell }\).) For \(\beta \in [0,2]\), we define \({\mathcal{K }}_\beta = \{z \in \mathbb{C }: | \arg (z) | \le \frac{\pi \beta }{2} \}\). By construction, \( - i R_{11} ( z) \in {\mathcal{K }}_{1}\) for \(z\in \mathbb{C }_+\), so that \( Y(z) \in {\mathcal{K }}_{\alpha / 2}\) and \( X(z) \in {\mathcal{K }}_{1}\). On \({\mathcal{K }}_{\alpha / 2}\), we may define the entire functions

$$\begin{aligned} \varphi _{\alpha ,z} (x) = \frac{1}{\Gamma (\frac{\alpha }{2})} \int _0^\infty t^{\frac{\alpha }{2} - 1}e^{itz} e^{- \Gamma ( 1 - \frac{\alpha }{2} ) t^{\frac{\alpha }{2}} x} dt \end{aligned}$$
(16)

and

$$\begin{aligned} \psi _{\alpha ,z} (x) = \int _0^\infty e^{itz} e^{- \Gamma ( 1 - \frac{\alpha }{2} ) t^{\frac{\alpha }{2}} x} dt. \end{aligned}$$
(17)
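Both integrals can be evaluated numerically after the substitution \(u = t^{\alpha/2}\), which removes the endpoint singularity of (16). The sketch below (truncation point and mesh are ad hoc choices, adequate when \(\mathrm{Im}(z)\) and \(\mathrm{Re}(x)\) are not too small) will also be reused for the fixed point equation of Sect. 3.2.

```python
import numpy as np
from math import gamma

# Quadrature sketch for phi_{alpha,z} and psi_{alpha,z} of (16)-(17),
# after substituting u = t^{alpha/2}.
def phi_psi(alpha, z, x, T=60.0, m=200000):
    u = np.linspace(1e-9, T, m)
    du = u[1] - u[0]
    t = u ** (2 / alpha)
    integrand = np.exp(1j * t * z - gamma(1 - alpha / 2) * u * x)
    phi = (2 / (alpha * gamma(alpha / 2))) * integrand.sum() * du
    psi = (2 / alpha) * (integrand * u ** (2 / alpha - 1)).sum() * du
    return phi, psi

phi, psi = phi_psi(1.5, 2.0 + 0.5j, 1.0 + 0.0j)
print(phi, psi)     # land in the cones K_{alpha/2} and K_1 respectively
```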

For further use, we define, with the notation of (10),

$$\begin{aligned} M_n (z) =\frac{1}{n-1} \mathbb{E }{\mathrm{tr }}\left\{ R^{(1)} (z) (R^{(1)}(z) )^*\right\} \!\! . \end{aligned}$$
(18)

Note that, writing explicitly the dependence on \(n\), \(R_n^{(1)} = ( A_n^{(1)} - z ) ^{-1}\), and \(\frac{a_n}{a_{n-1}} A_n^{(1)}\) has the same distribution as \(A_{n-1}\). We may thus apply Corollary 2.2 to \(R^{(1)}\) (it can be checked that the difference between \(a_{n}\) and \(a_{n-1}\) is harmless). For some constant \(c>0\), we therefore have the following upper bound for all \(z = E + i \eta \) with \(\eta \ge n^{ -\frac{\alpha +2}{4}}\):

$$\begin{aligned} M_n (z) \le c(\log n)^{\frac{2+\alpha }{4}}\eta ^{-\frac{4}{2+\alpha }}. \end{aligned}$$
(19)

The main result of this paragraph is the following set of approximate fixed point equations.

Proposition 3.1

(Approximate fixed point equation) Let \(0 < \alpha < 2\) and \(z = E + i \eta \in \mathbb{C }_+\) with \(E \ne 0\). There exists \(c =c(E)>0\), with \(c(E)\le \mathrm{const.}\,(|E|^{-\alpha }\vee |E|^{-\alpha /2})\), such that if \(n^{ -\frac{\alpha +2}{4}} \le \eta \le 1\) and

$$\begin{aligned} \varepsilon&= \eta ^{-\alpha \wedge 1} n ^{-\frac{1}{\alpha \vee 1}} + \frac{( \log n) {\mathbf{1 }}_{\alpha =1} }{\eta n}\nonumber \\&+ \left( \eta ^{-1} \sqrt{\frac{M_n(z)}{n} } \right)^{1 \wedge \alpha } \left( 1 + ( \log n ) {\mathbf{1 }}_{1 < \alpha \le 4/3} + ( \log n )^2 {\mathbf{1 }}_{0 < \alpha \le 1} \right)\!, \end{aligned}$$
(20)

then, for any integer \(n \ge 1\),

$$\begin{aligned} |Y ( z) - \varphi _{\alpha ,z}( Y ( z) )|\le c \eta ^{-\frac{\alpha }{2}} \varepsilon + c \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}}, \end{aligned}$$

and

$$\begin{aligned} |X ( z) - \psi _{\alpha ,z}( Y ( z) )|\le c \eta ^{-1} \varepsilon + c \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}}. \end{aligned}$$

We note that we could use the bound (19) to get an explicit upper bound on \(\varepsilon \). In the forthcoming Proposition 3.6 we will however improve this bound for some range of \(\eta \).

In the first step of the proof, we compare \(Y(z)\) with an expression which gets rid of the off-diagonal terms of \(R^{(1)}\) in (10). More precisely, with the notation of (10), we define

$$\begin{aligned} I (z) : = \mathbb{E }\left[\left( (- i z ) + a_n ^{-2} \sum _{k=2}^n X_{1k}^2 ( - i R^{(1)}_{kk} ) \right) ^{-\frac{ \alpha }{2} }\right]\; \in {\mathcal{K }}_{\alpha / 2}, \end{aligned}$$
(21)

and similarly,

$$\begin{aligned} J (z) : = \mathbb{E }\left[\left( (-i z ) + a_n ^{-2} \sum _{k=2}^n X_{1k}^2 ( - i R^{(1)}_{kk} ) \right) ^{-1}\right]\; \in {\mathcal{K }}_{1}. \end{aligned}$$
(22)

We shall prove the following lemma.

Lemma 3.2

(Diagonal approximation) Let \(0 < \alpha < 2\) and \(z = E + i \eta \in \mathbb{C }_+\). There exists \(c >0\) such that if \(\varepsilon \) is given by (20) then

$$\begin{aligned} | Y(z) - I (z) | \le c \eta ^{-\frac{\alpha }{2}} \varepsilon \quad \text{ and} \quad | X(z) - J (z) | \le c \eta ^{-1} \varepsilon . \end{aligned}$$

We start with a technical lemma.

Lemma 3.3

(Off-diagonal terms) Let \(B\) be a Hermitian matrix with resolvent \(G = ( B - z)^{-1}\). For any \(0 < \alpha < 2\), there exists a constant \(c = c(\alpha ) >0\) such that for \(n \ge 2\),

$$\begin{aligned} \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } G_{k\ell }\right| \ge \sqrt{ \frac{{\mathrm{tr }}( GG^* ) }{ n^2 } } t \right) \le c t^{ - \alpha } \log \left(n \left( 2 \vee t \right) \right)\log \left( 2 \vee t \right)\!, \end{aligned}$$

and if \(1 < \alpha < 2\),

$$\begin{aligned} \mathbb{E }\left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } G_{k\ell }\right| \le c \sqrt{ \frac{{\mathrm{tr }}( GG^* ) }{n^2 }} \left(1 + {\mathbf{1 }}_{1< \alpha \le 4/3} \log n \right)\!. \end{aligned}$$

Proof

Let \( 0 < \alpha \le 1\). We use a decoupling technique: from [13, Theorem 3.4.1] there exists a universal constant \(c > 0\) such that

$$\begin{aligned} \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } G_{k\ell }\right| \ge t \right)&\le \! c \, \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X^{\prime }_{1\ell } G_{k\ell }\right|\ge t / c \right)\nonumber \\&\le c \, \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X^{\prime }_{1\ell } \mathrm{Re}( G_{k\ell } ) \right|\!\ge \! t / {2c} \right) \qquad \nonumber \\&+ c \, \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X^{\prime }_{1\ell } \mathrm{Im}( G_{k\ell } ) \right|\!\ge \! t / {2c} \right)\!,\nonumber \\ \end{aligned}$$
(23)

where \(X^{\prime }_1\) is an independent copy of \(X_1\). From the stable property of \(X^{\prime }_1\), we deduce that

$$\begin{aligned} \sum _{2 \le k \ne \ell \le n } X_{1k} X^{\prime }_{1\ell } \mathrm{Re}( G_{k\ell }) \stackrel{d}{=} X^{\prime }_{11} \left( \sum _{\ell } \left| \sum _{k\ne \ell } X_{1k} \mathrm{Re}( G_{k\ell } ) \right| ^ \alpha \right)^{\frac{1}{\alpha }}, \end{aligned}$$
(24)

and similarly for the imaginary part. From the stable property of \(X_1\),

$$\begin{aligned} \sum _{\ell } \left| \sum _{k\ne \ell } X_{1k} \mathrm{Re}( G_{k\ell } ) \right| ^ \alpha \stackrel{d}{=} \sum _{\ell } | \hat{X}_\ell | ^ \alpha \sum _{k\ne \ell } | \mathrm{Re}( G_{k\ell } ) | ^ \alpha , \end{aligned}$$
(25)

where \((\hat{X}_\ell )_\ell \) is a random vector whose marginal distribution is again the law of \(X_{11}\) (note however that the entries of \((\hat{X}_\ell )_\ell \) are correlated). Let \(\rho _\ell = \sum _{k\ne \ell } | \mathrm{Re}( G_{k\ell } ) | ^ \alpha \) and \(\rho = \sum _\ell \rho _\ell \). For \( s \ge 2\) to be chosen later, we define \(Y_\ell = \hat{X}_{\ell } {\mathbf{1 }}( |\hat{X}_{\ell }| \le s a_n )\) and \(Y^{\prime }_1 = X^{\prime }_{11} {\mathbf{1 }}( | X^{\prime }_{11}| \le s)\). It is straightforward to check that

$$\begin{aligned} \mathbb{E }| Y_\ell |^ \alpha \le c \log ( s^\alpha n ) \quad \text{ and} \quad \mathbb{E }| Y^{\prime }_1 |^ \alpha \le c \log ( s ). \end{aligned}$$

Hence, from (24)–(25)

$$\begin{aligned} \mathbb{P }\left( |X^{\prime }_{11} | \left( \sum _{\ell } | \hat{X}_\ell | ^ \alpha \rho _\ell \right) ^{\frac{1}{\alpha }} \ge t \right)&\le \mathbb{P }\left( |Y^{\prime }_{1} | \left( \sum _{\ell } | Y_\ell | ^ \alpha \rho _\ell \right) ^{\frac{1}{\alpha }} \ge t \right) \\&+ \; \mathbb{P }\left( \max _{\ell } |\hat{X}_{\ell }| \ge s a_n \right) + \mathbb{P }\left( |\hat{X}^{\prime }_{11}| \ge s \right) \\&\le c s^{-\alpha } + \frac{c \rho \log (s^ \alpha n ) \log (s) }{t^\alpha }\!, \end{aligned}$$

where we have used the Markov inequality and the union bound

$$\begin{aligned} \mathbb{P }( \max _{\ell } |\hat{X}_{\ell }| \ge s a_n ) \le n \mathbb{P }( |X_{11}|\ge s a_n ) \le c s^{-\alpha }. \end{aligned}$$

Choosing \(s = 2 \vee ( t / \rho ^{1 / \alpha })\), we find that

$$\begin{aligned} \mathbb{P }\left( |X^{\prime }_{11} | \left( \sum _{\ell } | \hat{X}_\ell | ^ \alpha \rho _\ell \right) ^{\frac{1}{\alpha }} \ge t \rho ^{1 / \alpha } \right) \le \frac{c \log ( (2 \vee t )n ) \log ( 2 \vee t) }{t^\alpha } . \end{aligned}$$

The same statement holds for \(\mathrm{Im}(G)\). To sum up, we deduce from (23) that

$$\begin{aligned} \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } G_{k\ell }\right| \ge a_n ^{-2} \rho ^{ 1 / \alpha } t \right) \le c t^{ - \alpha } \log \left(n \left( 2 \vee t \right) \right)\log \left( 2 \vee t \right) \!. \end{aligned}$$

Then, the first statement of the lemma follows from Hölder’s inequality, which asserts that

$$\begin{aligned} a_n ^{-2} \rho ^{1 / \alpha } \le n^{ - \frac{2}{\alpha }} \left( \sum \limits _{\ell } \sum \limits _{k } | G_{k\ell } | ^ \alpha \right)^ { \frac{1}{\alpha }} \le n^{ - \frac{2}{\alpha }} \left( \sum \limits _{\ell } \sum \limits _{k } | G_{k\ell } | ^ 2 \right)^ { \frac{1}{2}} n^{\frac{2}{\alpha } -1} = \sqrt{ \frac{{\mathrm{tr }}( GG^* ) }{n^2 }}, \end{aligned}$$
(26)

where we used that \(G_{k\ell } =\bar{G}_{\ell k}\).

Now, assume \(\alpha > 1\). By integrating the above bound, we find easily

$$\begin{aligned} \mathbb{E }\left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } G_{k\ell }\right|&= \int _0 ^ \infty \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } G_{k\ell }\right| \ge t \right) dt\\&\le c \sqrt{ \frac{{\mathrm{tr }}(GG^* ) }{n^2 }} \log n. \end{aligned}$$

(In fact, with slightly more care, we may replace \(\log n\) by \((\log n)^{\frac{1}{\alpha }} \log (\log n)\).) It remains to check that we can remove the term \(\log n\) for \(\alpha > 4/3\). This will follow easily from the bound

$$\begin{aligned} \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } G_{k\ell }\right| \ge t \right) \le c \left( \frac{{\mathrm{tr }}( GG^* ) }{n^2 t ^2 } \right)^{ \frac{\alpha }{ 4 - \alpha }}. \end{aligned}$$
(27)

Let \(s \ge 1\) be a parameter to be chosen later. We now set \(Y_k = X_{1k} {\mathbf{1 }}( |X_{1k}| \le s a_n ) / a_n \). We write

$$\begin{aligned} \left| \sum _{2 \le k \ne \ell \le n } Y_k Y_\ell G_{k\ell } \right|^2 = \sum _{ k_1 \ne \ell _1 , k_2 \ne \ell _2 } Y_{k_1} Y_{\ell _1} Y_{k_2} Y_{\ell _2}G_{k_1\ell _1} G_{k_2 \ell _2 }^*. \end{aligned}$$

The variables \(Y_k\) are i.i.d. and, by symmetry, \(\mathbb{E }Y_1 = \mathbb{E }Y_1^3 = 0\). Hence, since \(G_{k\ell } = G_{\ell k}\), taking expectations we obtain

$$\begin{aligned} \mathbb{E }\left| \sum _{2 \le k \ne \ell \le n } Y_k Y_\ell G_{k\ell } \right|^2 = 2 \sum _{ k \ne \ell } \mathbb{E }[ Y^2_{1} ]^2 G_{k\ell } G_{\ell k}^* \le 2 \mathbb{E }[ Y^2_{1} ]^2 {\mathrm{tr }}GG^*. \end{aligned}$$

It is routine to check that, for \(p > \alpha \), \( \mathbb{E }[ |Y _{1} |^p ] \le c(p) s^{p - \alpha } n^{-1}\). Hence, arguing as above, we find

$$\begin{aligned} \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } G_{k\ell }\right| \ge t \right)&\le c \frac{ s^{2 ( 2 - \alpha ) } {\mathrm{tr }}GG^* }{n^{2} t ^2 } + c s ^{-\alpha }. \end{aligned}$$

We conclude by choosing \(s = 1 \vee ( n^2 t^2 / {\mathrm{tr }}GG^* ) ^{1 / (4 - \alpha )}\). \(\square \)

We can now turn to the proof of Lemma 3.2.

(Proof of Lemma 3.2)

Define

$$\begin{aligned} T(z) = - a_n^{-1} X_{11} + a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } R^{(1)}_{k\ell }. \end{aligned}$$
(28)

We notice that for any \(z, z^{\prime } \in \mathbb{C }_+\) and \(\alpha > 0\),

$$\begin{aligned} | (iz)^{-\alpha /2} - (iz^{\prime })^{-\alpha /2} |\le \frac{\alpha }{2} |z - z^{\prime }| ( \mathrm{Im}(z) \wedge \mathrm{Im}(z^{\prime })) ^{-\alpha /2-1}. \end{aligned}$$

With the notation of (10), we also note that \(\mathrm{Im}(\sum _{k=2}^n X_{1k}^2 R^{(1)}_{kk} ) \ge 0\) and \(\mathrm{Im}( - a_n^{-1} X_{11}+ \langle X_1 , R^{(1)}X_1 \rangle ) \ge 0\). Hence, from (10), for any event \(\Omega \),

$$\begin{aligned} \left| Y(z) - I (z) \right|&\le \frac{\alpha }{2} \eta ^{-\frac{\alpha }{2} - 1} \mathbb{E }[ |T(z)| {\mathbf{1 }}_{\Omega }] + \eta ^{-\frac{\alpha }{2}} \mathbb{P }( \Omega ^c). \end{aligned}$$
(29)

Applying the same argument with \(X(z),J(z)\), we get

$$\begin{aligned} \left| Y(z) - I (z) \right| \le \frac{\alpha }{2} \eta ^{-\frac{\alpha }{2} }D(z),\qquad \left| X(z) - J (z) \right| \le \eta ^{-1} D(z) \end{aligned}$$

with

$$\begin{aligned} D(z)= \eta ^{- 1} \mathbb{E }[ |T(z)| {\mathbf{1 }}_{\Omega }] + \mathbb{P }( \Omega ^c). \end{aligned}$$
(30)

We may bound \(D(z)\) by using Lemma 3.3. Indeed, since \(R^{(1)} \) is independent of \(X_1\), it gives

$$\begin{aligned} \mathbb{P }\left( \left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } R^{(1)}_{k\ell }(z)\right| \ge \sqrt{ \frac{M_ n (z)}{ n } } t \right) \le c t^{ - \alpha } \log \left(n \left( 2 \vee t \right) \right)\log \left( 2 \vee t \right)\! .\nonumber \\ \end{aligned}$$
(31)

and for \( 1 < \alpha < 2\),

$$\begin{aligned} \mathbb{E }\left| a_n ^{-2} \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } R^{(1)}_{k\ell }(z)\right| \le c \sqrt{ \frac{M_ n (z)}{ n } } \left(1 + {\mathbf{1 }}_{1< \alpha \le 4/3} \log n \right)\!. \end{aligned}$$
(32)
  • Let us first assume that \(1 < \alpha < 2\). Then, taking \(\Omega ^c = \emptyset \) in (29) and using (32), we find that for some constant \(c>0\),

    $$\begin{aligned} D(z) \le c \eta ^{ - 1} \left( n^{-\frac{1}{\alpha }} + \sqrt{\frac{M_n(z)}{n}} \left(1 + {\mathbf{1 }}_{1< \alpha \le 4/3} \log n \right) \right). \end{aligned}$$
  • Assume now that \(0< \alpha \le 1\). We take in (30)

    $$\begin{aligned} \Omega = \left\{ a_n^{-1} |X_{11}| \le t \, ; \; a_n ^{-2} \left| \sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } R_{k\ell }^{(1)}(z)\right| \le t \right\} . \end{aligned}$$

    Then, we have

    $$\begin{aligned} \mathbb{E }[|T(z)|{\mathbf{1 }}_{\Omega }] \le \int _0^t \mathbb{P }\left( a_n^{-1} |X_{11}|\ge y\right)dy + \int _0^t \mathbb{P }\left(\left| a_n ^{-2}\sum _{2 \le k \ne \ell \le n } X_{1k} X_{1\ell } R_{k\ell }^{(1)}(z)\right| \ge y\right)dy. \end{aligned}$$

    Assume that \(1 / n \le t \le 1\). Then, using (31), we find that \( \mathbb{E }[|T(z)|{\mathbf{1 }}_{\Omega }] \) is bounded, up to a multiplicative constant (depending on \(\alpha \)), by

    $$\begin{aligned} \frac{t^{1 - \alpha }}{n} + \frac{\log (n)}{n} {\mathbf{1 }}_{\alpha =1} + \left(\frac{M_n(z)}{n}\right)^{\frac{\alpha }{2} } \left( \log n \right)^2 t^{1 - \alpha }. \end{aligned}$$

    (using (20) we may safely bound the terms \( \log ( t n / M_n )\) by \(\log n\)). With \(t = \eta \) we find

    $$\begin{aligned} D(z) \le c \eta ^{-\alpha } n ^{-1} +c \frac{1_{\alpha =1} ( \log n) }{\eta n} + c \left(\frac{M_n(z)}{n\eta ^2}\right)^{\frac{\alpha }{2}} \left( \log n \right)^2 \!. \end{aligned}$$

This yields the claimed bounds. \(\square \)

We next relate \(I(z)\) and \(J(z)\) to the functions \(\varphi _{\alpha ,z}\) and \(\psi _{\alpha ,z}\) defined in (16)–(17), by using the well-known identities

$$\begin{aligned} x^{-\delta } = \frac{ 1}{ \Gamma ( \delta )} \int _0^\infty t^{\delta - 1} e^{ -x t} dt \quad \text{ and} \quad x^{\delta } = \frac{ \delta }{ \Gamma (1 - \delta )} \int _0^\infty t^{-\delta - 1} ( 1 - e^{ -x t} ) dt \end{aligned}$$
(33)

valid for \(x\) in the interior of \( {\mathcal{K }}_1\), for \(\delta > 0\) and \(0 <\delta < 1\) respectively. We get

$$\begin{aligned} I (z) = \frac{ 1}{ \Gamma ( \frac{\alpha }{2} )} \int _0^\infty t^{\frac{\alpha }{2} - 1} \mathbb{E }\exp \left\{ i t \left( z + a_n ^{-2} \sum _{k=2}^n X_{1k}^2 R^{(1)}_{kk} (z) \right) \right\} dt. \end{aligned}$$

We may apply Corollary 7.2 to take the expectation over \(X_1\) and get

$$\begin{aligned} I (z)&= \frac{ 1}{ \Gamma ( \frac{\alpha }{2} )} \int _0^\infty t^{\frac{\alpha }{2} - 1} e^{itz} \mathbb{E }\exp \left\{ - w_\alpha ^{\alpha } ( 2 t ) ^{\frac{\alpha }{2} } \frac{1}{n} \sum _{k=2}^n \left( - i R^{(1)}_{kk}(z)\right)^{\frac{\alpha }{2}} |g_k|^\alpha \right\} dt,\\&= \mathbb{E }\left[\varphi _{\alpha ,z}\left( \frac{1}{n} \sum _{k=2}^n \left( -i R^{(1)}_{kk}(z)\right)^{\frac{\alpha }{2}}\frac{ |g_k|^\alpha }{\mathbb{E }[|g_k|^\alpha ]} \right)\right] \end{aligned}$$

where \(w_\alpha > 0\) was defined in the introduction and \((g_i)_{i \ge 1}\) are i.i.d. standard Gaussian variables. Similarly, we find that

$$\begin{aligned} J (z)&= \int _0^\infty e^{itz} \mathbb{E }\exp \left\{ - w_\alpha ^{\alpha } (2 t ) ^{\frac{\alpha }{2} } \frac{1}{n} \sum _{k=2}^n \left( - i R^{(1)}_{kk}(z)\right)^{\frac{\alpha }{2}} |g_k|^\alpha \right\} dt\end{aligned}$$
(34)
$$\begin{aligned}&= \mathbb{E }\left[ \psi _{\alpha ,z} \left( \frac{1}{n} \sum _{k=2}^n \left( -i R^{(1)}_{kk}(z)\right)^{\frac{\alpha }{2}}\frac{ |g_k|^\alpha }{\mathbb{E }[|g_k|^\alpha ]} \right)\right]\,. \end{aligned}$$
(35)

The next lemma, due to Belinschi et al. [6], will be crucial in the sequel. Recall that the functions \(\varphi _{\alpha ,z}\) and \(\psi _{\alpha ,z}\) were defined in (16)–(17).

Lemma 3.4

([6, Lemma 3.6]) For any \(z \in \mathbb{C }_+\), the functions \(\varphi _{\alpha ,z}\) and \(\psi _{\alpha ,z}\) are Lipschitz with constant \(c = c(\alpha ) |z|^{ - \alpha } \) and \(c = c(\alpha ) |z|^{ - \alpha /2} \) on \({\mathcal{K }}_{\alpha /2}\). Moreover \(\varphi _{\alpha ,z}\) maps \({\mathcal{K }}_{\alpha /2}\) into \({\mathcal{K }}_{\alpha /2}\) and \(\psi _{\alpha ,z}\) maps \({\mathcal{K }}_{\alpha /2}\) into \({\mathcal{K }}_{1}\).

Proof

The first statement follows from [6, Lemma 3.6] by a change of variable. For the second, we note that if \(x \in {\mathcal{K }}_{\alpha /2}\) then \(x = ( - i w ) ^{\alpha /2}\) with \(w \in \mathbb{C }_+\), and from (33), \(\varphi _{\alpha ,z} ( x) = \mathbb{E }( -i z - i w S )^{- \alpha /2}\), where \(S\) is a non-negative \(\alpha /2\)-stable random variable with Laplace transform, for \(y > 0\), \(\mathbb{E }\exp ( - y S) = \exp ( - \Gamma ( 1- \alpha /2) y^{\alpha /2} )\). Similarly, \(\psi _{\alpha ,z} (x) = \mathbb{E }( -i z - i w S )^{- 1}\). In particular, \(\varphi _{\alpha ,z} (x) \in {\mathcal{K }}_{\alpha /2}\) and \(\psi _{\alpha ,z} (x) \in {\mathcal{K }}_{1}\). \(\square \)

We are now able to prove Proposition 3.1.

Proof of Proposition 3.1

We recall that \(I(z)\) and \(J(z)\) were defined by (21)–(22). The point is that the Lipschitz constant in Lemma 3.4 depends on \(|z|\) and not only on \(\mathrm{Im}(z)\). Hence, since \( \rho _k := \left( - i R^{(1)}_{kk}(z)\right)^{\frac{\alpha }{2}} \in {\mathcal{K }}_{\alpha /2}\), using exchangeability, we deduce that, with \(c=c(\alpha ) |z|^{ - \alpha /2} \) as in Lemma 3.4,

$$\begin{aligned} \left| I(z) -\varphi _{\alpha ,z} (\mathbb{E }\rho _2) \right|&= \left|\mathbb{E }\left[\varphi _{\alpha ,z}\left( \frac{1}{n} \sum _{k=2}^n \left( -i R^{(1)}_{kk}(z)\right)^{\frac{\alpha }{2}}\frac{ |g_k|^\alpha }{\mathbb{E }[|g_k|^\alpha ]} \right)\right] -\varphi _{\alpha ,z}\left(\frac{1}{n-1} \sum _{k=2}^n \mathbb{E }\left[\left(-i R^{(1)}_{kk}(z)\right)^{\frac{\alpha }{2}}\right]\right)\right|\\&\le c \, \mathbb{E }\left|\frac{1}{n-1} \sum _{k=2}^n \rho _k |g_k|^\alpha - \mathbb{E }\frac{1}{n-1} \sum _{k=2}^n \rho _k |g_k|^\alpha \right| +\frac{c\,\mathbb{E }[|\rho _2|]}{n}\\&\le c \left( \mathbb{E }\left|\frac{1}{n-1} \sum _{k=2}^n \rho _k |g_k|^\alpha - \frac{1}{n-1} \sum _{k=2}^n \rho _k \mathbb{E }|g_k|^\alpha \right| + \mathbb{E }\left|\frac{1}{n-1} \sum _{k=2}^n \rho _k - \mathbb{E }\frac{1}{n-1} \sum _{k=2}^n \rho _k \right|+\frac{\mathbb{E }[|\rho _2|]}{n}\right) . \end{aligned}$$

By the Cauchy–Schwarz inequality and Lemma 8.4, we obtain

$$\begin{aligned} \left| I(z) -\varphi _{\alpha ,z} (\mathbb{E }\rho _2) \right| \le c n^{-1} \sqrt{\mathbb{E }\left[\sum _{k=2}^n |\rho _k | ^ 2\right] } +c \left( n^{-1} \sqrt{\mathbb{E }\left[\sum _{k=2}^n |\rho _k | ^ 2\right] }\right)^2+ c \eta ^{- \frac{\alpha }{2} } n^{ - \frac{\alpha }{4} }. \end{aligned}$$

By applying the Jensen inequality, we also notice that since \(0 < \alpha \le 2\),

$$\begin{aligned} \mathbb{E }\left[ \frac{1 }{n-1} \sum _{k=2}^n |\rho _k | ^ 2\right] = \mathbb{E }\left[ \frac{1}{n-1} \sum _{k=2}^n |R^{(1)}_{kk} (z) | ^ \alpha \right] &\le \left( \mathbb{E }\left[\frac{1 }{n-1} \sum _{k=2}^n |R^{(1)}_{kk} (z) | ^ 2 \right] \right)^{ \frac{ \alpha }{2}} \\ &\le \left( \mathbb{E }\left[ \frac{1}{n-1} {\mathrm{tr }}\left\{ R ^{(1)}(z) {R^{(1)}}^* (z)\right\} \right] \right)^{ \frac{\alpha }{2}}=M_{n}(z)^{\frac{\alpha }{2}}. \end{aligned}$$

Hence we obtain an error of

$$\begin{aligned} \left| I(z) -\varphi _{\alpha ,z} (\mathbb{E }\rho _2) \right| \le c n^{-\frac{1}{2}} M_n(z)^{\frac{\alpha }{4}} + c n^{-1} M_n(z)^{\frac{\alpha }{2}} +c\eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}}. \end{aligned}$$
(36)

In the forthcoming computations, we shall always consider \(\eta \) such that the \(\varepsilon \) of (20) is smaller than one, so that \( n^{-1} M_n(z)^{\frac{\alpha }{2}}\) vanishes and is negligible compared to \( n^{-\frac{1}{2}} M_n(z)^{\frac{\alpha }{4}}\). Moreover, \(\mathbb{E }\rho _2\) and \(Y(z)\) are close. More precisely, by equation (97) (in the Appendix) applied with \(f(x)=(-ix)^{\frac{\alpha }{2}}\), we find that

$$\begin{aligned} \left|\sum _{k= 1}^n \left( - i R_{kk}(z)\right)^{\frac{\alpha }{2}} - \sum _{k=2} ^{n} \left( - i R^{(1)}_{kk}(z)\right)^{\frac{\alpha }{2}} + ( - iz ) ^{\frac{\alpha }{2}} \right| \le 2 n (n\eta )^{-\frac{\alpha }{2}}. \end{aligned}$$

Taking the expectation, we get

$$\begin{aligned} \left|\mathbb{E }\rho _2 - Y(z) \right| \le c (n\eta )^{-\frac{\alpha }{2}}. \end{aligned}$$

By Lemma 3.4, the function \(\varphi _{\alpha ,z}\) is Lipschitz with constant \(c = c(\alpha ,|z|)\) on \({\mathcal{K }}_{\alpha /2}\). We deduce from (36) that

$$\begin{aligned} \left| I(z) -\varphi _{\alpha ,z} (Y(z)) \right|&\le c n^{-\frac{1}{2}} M_n(z)^{\frac{\alpha }{4}} +c\eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}} + c (n\eta )^{-\frac{\alpha }{2}} \\&\le c^{\prime } \eta ^{-\frac{\alpha }{2+\alpha }} n^{-\frac{1}{2}} (\log n)^{\frac{\alpha (2+\alpha )}{16}} +2 c\eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}}, \end{aligned}$$

where we used the upper bound on \(M_n(z)\) given by (19). We finally note that, for \(n\) large enough and \(\eta \le 1\), the first term is always smaller than the second. We have thus proved that there exists \(c >0\) such that, for all \(n^{- \frac{\alpha + 2}{ 4}} \le \eta \le 1\) and all integers \(n\),

$$\begin{aligned} \left| I(z) -\varphi _{\alpha ,z} (Y(z)) \right| \le c \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}}. \end{aligned}$$

The statement of Proposition 3.1 on \(Y(z)\) follows by applying Lemma 3.2: we find

$$\begin{aligned} \left| Y(z) -\varphi _{\alpha ,z} (Y(z)) \right|&\le c \eta ^{ - \frac{ \alpha }{2}} n^{ - \frac{\alpha }{4} } + | Y (z) - I(z) | . \end{aligned}$$

Finally, we observe that the bound on \(X(z)\) follows similarly from (35):

$$\begin{aligned} \left| X(z) -\psi _{\alpha ,z} (Y(z)) \right|&\le c \eta ^{ - \frac{ \alpha }{2}} n^{ - \frac{\alpha }{4} } + | X (z) - J(z) | . \end{aligned}$$

We now use Lemma 3.2, and Proposition 3.1 is proved. \(\square \)

3.2 Rate of convergence of the resolvent

We will now use Proposition 3.1 as the stepping stone to obtain a quantitative rate of convergence of the spectral measure \(\mu _A\) toward its limit.

By Lemma 3.4, if \(z = E + i \eta \) and \(| z|\) is large enough, say \(|z| \ge E_0\), then \(\varphi _{\alpha ,z}\) and \(\psi _{\alpha ,z}\) are Lipschitz with constant \(L < 1\); in particular, \(\varphi _{\alpha ,z}\) has a unique fixed point

$$\begin{aligned} y(z) = \varphi _{\alpha ,z} ( y(z) ), \quad y (z) \in {\mathcal{K }}_{\frac{\alpha }{2}}. \end{aligned}$$
(37)

Existence and uniqueness of such a fixed point extend to all \(z\) outside a finite set, by the implicit function theorem (see [6]).

From [7, Theorem 1.4], the empirical measure \(\mu _A\) converges a.s. to a probability measure \(\mu _\alpha \) (for the topology of weak convergence). The Cauchy–Stieltjes transform of the limit measure \(\mu _\alpha \) is equal to

$$\begin{aligned} g_{\mu _\alpha } (z) = \int \frac{\mu _\alpha (dx) }{x - z} = i \psi _{\alpha , z} ( y(z)). \end{aligned}$$

The above identity characterizes the probability measure \(\mu _\alpha \).
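Combining the fixed point equation (37) with the quadrature sketch given after (17), one can compute \(g_{\mu_\alpha}\) numerically. Plain iteration is justified only in the contraction regime \(|z| \ge E_0\) of Lemma 3.4, so the choice of \(z\) below is illustrative.

```python
from math import pi

# Compute g_{mu_alpha}(z) = i * psi_{alpha,z}(y(z)) by iterating (37),
# reusing phi_psi from the quadrature sketch after (17).
def g_mu_alpha(alpha, z, iters=100):
    y = (-1j * z) ** (alpha / 2)        # a starting point in K_{alpha/2}
    for _ in range(iters):
        y = phi_psi(alpha, z, y)[0]     # y <- phi_{alpha,z}(y)
    return 1j * phi_psi(alpha, z, y)[1]

g = g_mu_alpha(1.5, 3.0 + 0.5j)
print("smoothed density Im(g)/pi near E = 3:", g.imag / pi)
```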

Theorem 3.5

(Convergence of Stieltjes transform) For all \(0 < \alpha < 2\), there exists a finite set \({\mathcal{E }}_\alpha \subset \mathbb{R }\) such that if \(K\) is a compact set with \(K \cap {\mathcal{E }}_\alpha = \emptyset \), the following holds for some constant \(c = c ( \alpha , K)\).

  1. (i)

    If \(1 < \alpha < 2\): for any integer \(n \ge 1\) and \(z = E + i \eta \) with \(E \in K\) and \(c \sqrt{ \frac{ \log n}{ n }} \vee \left( n^{ - \frac{\alpha }{ 8 - 3 \alpha } } ( 1 + {\mathbf{1 }}_{ 1 < \alpha < 4/3} ( \log n ) ^{ \frac{2 \alpha }{ 8 - 3 \alpha } })\right) \le \eta \le 1\),

    $$\begin{aligned} \left|\mathbb{E }g_{\mu _A} (z) - g_{\mu _\alpha } ( z) \right| \le c \delta , \end{aligned}$$
    (38)

    where \( \delta = \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}} + \eta ^{- \frac{8 - 3 \alpha }{2\alpha } } n^{-\frac{1}{2}} ( 1 + {\mathbf{1 }}_{ 1 < \alpha < 4/3} \log n ) + \eta ^{-1} \exp ( - \delta n \eta ^2 )\).

  2. (ii)

    If \(0 < \alpha \le 1\), the same statement holds with \(c n^{-\frac{\alpha }{2 +3 \alpha }} ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1\) and \(\delta = \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}} + \eta ^{ - \frac{2 + 3 \alpha }{2} } n^{-\frac{\alpha }{2}}( \log n ) ^2\).

Moreover for any interval \(I \subset K\) of length \(|I |\ge \eta \,( 1 \vee \delta | \log ( \delta ) |^{-1}) \),

$$\begin{aligned} \left|\mathbb{E }\mu _{A} (I) - \mu _\alpha ( I) \right| \le c \delta |I |. \end{aligned}$$

This result implies Theorem 1.1. Indeed, the presence of the expectation \(\mathbb{E }\mu _A(I)\) instead of \(\mu _A(I)\) does not pose a problem, due to Lemma 8.1 in the Appendix. We start the proof of Theorem 3.5 with a weaker statement.

Proposition 3.6

(Convergence of Stieltjes transform: weak form) Statement (ii) of Theorem 3.5 holds, and

  1. (i’)

    If \(1 < \alpha < 2\) and \( c n^{-1/5} \left( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} ( \log n )^{\frac{2 }{5}} \right) \le \eta \le 1 \) then (38) holds with \(\delta = \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}} + \eta ^{- 5/2 } n^{-1/2} \left( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} \log n \right)\).

Proof

Assume first that \(z = E + i \eta \) with \(| E | \ge E_0\) and \(\eta \le 1\). If we apply Lemma 3.4 to \(\varphi _{\alpha ,z}\) we find

$$\begin{aligned} |Y(z) - y (z) | \le \frac{1}{ 1 - L} |Y(z) - \varphi _{\alpha ,z} (Y(z))| . \end{aligned}$$

Also, by exchangeability, \(\mathbb{E }g_{\mu _A} (z) = \mathbb{E }R_{11} (z) = i X(z)\). Hence, applying Lemma 3.4 to \(\psi _{\alpha ,z}\), we deduce

$$\begin{aligned} |\mathbb{E }g_{\mu _A} (z) - g_{\mu _\alpha } ( z) | \le |X(z) - \psi _{\alpha ,z} (Y(z)) | + \frac{L}{ 1 - L} |Y(z) - \varphi _{\alpha ,z} (Y(z))|. \end{aligned}$$
(39)

Also, the Cauchy–Weyl interlacing theorem implies that the same type of bound holds for the minor \(A^{(1)}\) instead of \(A\). Indeed, applying Lemma 8.2 to \(f (x) = ( x - z)^{-1}\), we have

$$\begin{aligned} | g_{\mu _A} (z) - g_{\mu _{A^{(1)}} } (z) | \le 2 (n \eta )^{-1}. \end{aligned}$$

We recall that \(\mu _{\alpha }\) has a bounded density (see [6, 7, 10]). Hence \(\mathrm{Im}( g_{\mu _\alpha } (z))\) is uniformly bounded. We get, for any \(z = E + i \eta \) with \(| E | \ge E_0\) and \(\eta \ge n ^{-\frac{\alpha +2}{4}}\),

$$\begin{aligned} \mathbb{E }\mathrm{Im}( g_{\mu _{A^{(1)} }} (z) )\le \mathrm{Im}( g_{\mu _\alpha } (z))+\left|\mathbb{E }g_{\mu _{A^{(1)}}} (z)-g_{\mu _\alpha } ( z) \right| \le c+\left|\mathbb{E }g_{\mu _{A^{(1)}}} (z)-g_{\mu _\alpha } ( z)\right|. \end{aligned}$$
(40)

On the other hand, the spectral theorem implies the important identity

$$\begin{aligned} M_n (z)= \frac{1}{n-1} \mathbb{E }{\mathrm{tr }}\left\{ R^{(1)} (z)(R^{(1)})^*(z)\right\} = \eta ^{-1} \mathbb{E }\mathrm{Im}( g_{\mu _{A^{(1)} }} (z) ). \end{aligned}$$
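For the reader's convenience, the second equality is the elementary resolvent computation: since \(A^{(1)}\) is symmetric and \(z = E + i \eta \),

$$\begin{aligned} R^{(1)} (z) (R^{(1)})^*(z) = \left( (A^{(1)} - E)^2 + \eta ^2 \right)^{-1} = \eta ^{-1} \, \mathrm{Im}\, R^{(1)} (z), \end{aligned}$$

and it remains to take the normalized trace and the expectation.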

Then, by (39) and (40),

$$\begin{aligned} \eta M_n (z)\le 2 (n \eta )^{-1} + c + |X(z) - \psi _{\alpha ,z} (Y(z)) | + \frac{L}{ 1 - L} |Y(z) - \varphi _{\alpha ,z} (Y(z))|. \end{aligned}$$

We first consider the case \(1 < \alpha < 2\). Then, by Proposition 3.1, we obtain for \(\eta \ge n^{-1/{2\alpha }} \ge n^{-1/2}\),

$$\begin{aligned} \eta M_n(z) \le c + c\eta ^{-2} \sqrt{\frac{M_n(z)}{n}} \left( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} \log n \right)\!. \end{aligned}$$

By monotonicity, we find that \(\eta M_n(z) \) is upper bounded by \(x^*\) where \(x^*\) is the unique fixed point of

$$\begin{aligned} x = c + c \eta ^{-\frac{5}{2} } n^{-\frac{1}{2}} \left( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} \log n \right) \sqrt{x}. \end{aligned}$$

It is easy to check that the unique fixed point of \(x = a + b x^\beta \), with \(a,b >0\) and \(0 < \beta < 1\), is upper bounded by \(\kappa (\beta ) a\) if \(a \ge b^{\frac{1}{1 - \beta }}\) (the verification appears after the next display). We deduce that, for some constant \(c_1 > 0\) and all \(n^{-1/5} \left( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} ( \log n )^{\frac{2 }{5}} \right) \le \eta \le 1 \),

$$\begin{aligned} \eta M_n(z) \le c_1. \end{aligned}$$
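To verify the fixed point claim used here: if \(a \ge b^{\frac{1}{1-\beta }}\), that is \(b \le a^{1-\beta }\), and if \(\kappa = \kappa (\beta ) > 1\) solves \(\kappa = 1 + \kappa ^\beta \), then

$$\begin{aligned} a + b (\kappa a)^{\beta } \le a + a^{1-\beta } \kappa ^\beta a^{\beta } = (1 + \kappa ^\beta ) a = \kappa a, \end{aligned}$$

so that \(\kappa a\) is a super-solution of the increasing map \(x \mapsto a + b x^\beta \) and the unique fixed point satisfies \(x^* \le \kappa (\beta ) a\). In the present application, \(a = c\), \(b = c \eta ^{-\frac{5}{2}} n^{-\frac{1}{2}} ( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} \log n )\) and \(\beta = \frac{1}{2}\), and the condition \(a \ge b^2\) is, up to constants, the stated lower bound on \(\eta \).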

So finally, from Proposition 3.1, we find that for all \(n^{-1/5} \left( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} ( \log n )^{\frac{2 }{5}} \right) \le \eta \le 1 \),

$$\begin{aligned} |\mathbb{E }g_{\mu _A} (z) - g_{\mu _\alpha } ( z) | \le c_2 \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}} + c_2 \eta ^{-2}n^{-\frac{1}{ \alpha }} + c_2 \eta ^{-\frac{5 }{2} } n^{-\frac{1}{2}} \left( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} \log n \right)\!. \end{aligned}$$

We notice that the middle term is negligible compared to the last one for our range of \(\eta \), since \(\eta ^{-2}n^{-\frac{1}{\alpha }} = \eta ^{\frac{1}{2}} n^{\frac{1}{2} - \frac{1}{\alpha }} \cdot \eta ^{-\frac{5}{2}} n^{-\frac{1}{2}}\) and \(\frac{1}{2} - \frac{1}{\alpha } < 0\) for \(\alpha < 2\).

Assume finally that \(0 < \alpha \le 1\), then arguing as above for \(\eta \ge n^{-1/2} \ge n^{-1/{2\alpha }} \),

$$\begin{aligned} \eta M_n (z)\le c + c \eta ^{ - 1} \left(\frac{M_n(z)}{n\eta ^2}\right)^{\frac{\alpha }{2}} ( \log n ) ^2. \end{aligned}$$
(41)

We deduce that, for some \(c_1 >0\) and all \(n^{-\frac{\alpha }{2 +3 \alpha }} ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1\), \(\eta M_n(z) \le c_1\). We find that for all \(n^{-\frac{\alpha }{2 +3 \alpha }} ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1\),

$$\begin{aligned} |\mathbb{E }g_{\mu _A} (z) - g_{\mu _\alpha } ( z) |&\le c_2 \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}} + c_2 \eta ^{-1 - \alpha }n^{-1} ( 1 + (\log n) {\mathbf{1 }}_{\alpha = 1} ) \\&+\, c_2 n^{-\frac{\alpha }{2}} \eta ^{ - \frac{2 + 3 \alpha }{2} } ( \log n ) ^2. \end{aligned}$$

Again the middle term is negligible compared to the first for our range of \(\eta \).

We have proven the proposition if \(z = E + i \eta \) and \(|E| \ge E_0\) is large enough. It remains to prove the statement for all \(E\) outside of a finite set. It is proven in [6] that \(\varphi _{\alpha ,z} ( y ) = z^{-\alpha } g (y)\) where \(g\) is an analytic function on \({\mathcal{K }}_{\alpha /2} \backslash \{0\}\) (which depends on \(\alpha \)). Let \(y(z)\) be the fixed point defined in (37). It follows that the set of \(z \in \mathbb{C }\) such that \(\varphi ^{\prime }_{\alpha , z} (y (z) ) = 1\) is finite (for details see [6, §5.3]). We define \({\mathcal{E }}^{\prime }_\alpha \) as the set of real \(E\) such that there exists \(0 \le \eta \le E_0 + 1\) with \(\varphi _{\alpha , E + i \eta } ^{\prime } ( y ( E + i \eta )) = 1\). We set finally \({\mathcal{E }}_\alpha = \{ 0 \} \cup {\mathcal{E }}^{\prime }_\alpha \). This set is finite. Let \(K\) be a compact interval which does not intersect \({\mathcal{E }}_\alpha \). From the inverse function theorem, \(\varphi _{\alpha ,z}\) is invertible in a neighborhood of \(y\), and its inverse has a locally bounded derivative. Hence, there exist \(\tau , c_0 > 0\) such that for \(t \ge 0 , 0 \le \eta \le E_0 + 1, E \in K\),

$$\begin{aligned} \text{ if } \; | y - y (E + i \eta ) | \le \tau \; \text{ and } \; | y - \varphi _{\alpha , E + i \eta } ( y ) | \le t , \; \text{ then } \; | y - y (E + i \eta ) | \le c_0 t . \end{aligned}$$
(42)

Therefore, we may use Lemma 3.4 and an alternative version of (39): for any \(z = E + i \eta \) with \(E \in K \) and \(0 \le \eta \le E_0 + 1\), if \(| Y(z) - y (z) | \le \tau \) then

$$\begin{aligned} |\mathbb{E }g_{\mu _A} (z) - g_{\mu _\alpha } ( z) | \le |X(z) - \psi _{\alpha ,z} (Y(z)) | + \frac{ c_0 c(\alpha ) }{ | z| ^{\frac{\alpha }{2} } } |Y(z) - \varphi _{\alpha ,z} (Y(z))|. \end{aligned}$$
(43)

To apply Proposition 3.1, we shall use an inductive argument to ensure that the hypothesis \(| Y(z) - y (z) | \le \tau \) is satisfied. We set for integer \(\ell , \eta _0 = \eta , \eta _{\ell +1} = \eta _\ell + \frac{\tau }{3} ( 1 \wedge \eta _\ell ) ^2\) and \(z_\ell = E + i \eta _\ell \). Since the increments are at most \(\tau /3\) and \(\eta _k\) goes to infinity as \(k\) goes to infinity when \(\eta _0\ne 0\), there exists \(k\) such that \(E_0 \le \eta _k \le E_0 + \tau \). Then \( \varphi _{\alpha , z_k}\) is a contraction and the above argument proves that

$$\begin{aligned} |\mathbb{E }g_{\mu _A} (z_k) - g_{\mu _\alpha } ( z_k) | \le c \delta \quad \text{ and} \quad | Y(z_k) - y (z_k) | \le c \delta , \end{aligned}$$

(note that \(\delta \) is a pessimistic bound since \(\mathrm{Im}( z_k ) \) is bounded away from \(0\)). We notice that it is sufficient to prove the statement of the proposition in the range \(\kappa n^{-1/5} \left( 1 + {\mathbf{1 }}_{ 1 < \alpha \le \frac{4}{3}} ( \log n )^{\frac{2 }{5}} \right) \le \eta \le 1 \) for \(1 < \alpha < 2\), and \(\kappa n^{-\frac{\alpha }{2 +3 \alpha }} ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1\) for \(0 < \alpha \le 1\), where \( \kappa > 0\) is any fixed constant. Hence, up to increasing \( \kappa \), we may assume that \(s c \delta \le \tau /3\), where \(s \ge 1\) is a large number that will be chosen later on.

To obtain a priori bounds for \(Y(z)-y(z)\) and \(|\mathbb{E }g_{\mu _A} (z) - g_{\mu _\alpha } ( z) |\) at \(z=z_{k-1}\) from those at \(z=z_{k}\), observe that for any probability measure \(\mu \) on \(\mathbb{R }\) and \(0 \le \beta \le 1\),

$$\begin{aligned} | g_{\mu } (E + i \eta _\ell ) ^\beta - g_{\mu } ( E + i \eta _{\ell +1} ) ^\beta | \le \frac{ | \eta _ \ell - \eta _{\ell +1} | }{ \eta _\ell ^{1 + \beta }} \le \frac{ \tau }{3}. \end{aligned}$$
(44)

Using the above control with \(\mu =\sum _{k=1}^n \langle v_k, e_1\rangle ^2 \delta _{\lambda _k}\), so that \(g_\mu (z)=R_{11}(z)\), we deduce by applying (44) with \(\beta =\alpha /2\) that

$$\begin{aligned} |(-iR_{11}(E + i \eta _\ell ) )^{\frac{\alpha }{2}}- (-iR_{11}(E + i \eta _{\ell +1}) )^{\frac{\alpha }{2}}|\le \frac{\tau }{3} \end{aligned}$$

and thus \(| Y (z_{k-1}) - y ( z_{k-1}) | \le \tau \). We get a similar control for \(\mathbb{E }g_{\mu _A} (z_{k-1})\) by applying (44) with \(\beta =1\) so that

$$\begin{aligned} M_n (z_{k-1} ) \le \eta ^{-1} \left( \frac{\tau }{3} + c \delta + \sup _\ell \mathrm{Im}g_{\mu _\alpha } (z_\ell ) \right) \le \eta ^{-1} ( \tau + \sup _\ell \mathrm{Im}g_{\mu _\alpha } (z_\ell ) ) = c_1 \eta ^{-1}. \end{aligned}$$

Therefore, using Proposition 3.1, we find, for some constant \(c^{\prime } >0\),

$$\begin{aligned} | Y (z_{k-1}) - \varphi _{\alpha , z_{k-1}} ( Y ( z_{k-1}) ) | \vee | X (z_{k-1}) - \psi _{\alpha , z_{k-1}} ( Y ( z_{k-1}) ) | \le c^{\prime } \delta . \end{aligned}$$

From what precedes, it implies that \(| Y (z_{k-1}) - y ( z_{k-1}) | \le c_0 c^{\prime } \delta \). We choose \(s\) large enough so that \( c^{\prime }c_0 \le s c\), so that we have \( c^{\prime } c_0 \delta \le \tau /3\). Also, we may use (43). We find for some new constant \(c^{\prime \prime }\),

$$\begin{aligned} |\mathbb{E }g_{\mu _A} (z_{k-1}) - g_{\mu _\alpha } ( z_{k-1}) | \le c^{\prime \prime } \delta . \end{aligned}$$

Finally, if \(s\) was also chosen large enough so that \( c^{\prime \prime } \le s c\), then \(c^{\prime \prime } \delta \le \tau /3\), and we may repeat the above argument down to \(\ell = 0\). \(\square \)

When \(\alpha \in (1,2)\), a bootstrap argument allows us to improve Proposition 3.6 significantly. The idea, in the spirit of [20], is that if the imaginary part of \(\left\langle X, R^{(1)} X\right\rangle \) is a priori bounded below by a quantity going to zero more slowly than \(\eta \), then we can improve the result of the key Lemma 3.2. Before moving to the proof, we state a classical deconvolution lemma.

Lemma 3.7

(From Stieltjes transform to counting measure) Let \(L > 0, 0 < \varepsilon < 1, K\) be an interval of \(\mathbb{R }\) and \(\mu \) be a probability measure on \(\mathbb{R }\). We assume that for some \(\eta > 0 \) and all \(E \in K\), either

$$\begin{aligned} \mathrm{Im}g_\mu ( E + i \eta ) \le L \quad \text{ or} \quad \mu \left( \left[ E - \frac{\eta }{2} , E + \frac{\eta }{2} \right] \right) \le L \eta . \end{aligned}$$

Then, there exists a universal constant \(c\) such that for any interval \( I \subset K\) of size at least \(\eta \) and such that \({\mathrm{dist }}( I , K^c ) \ge \varepsilon \), we have

$$\begin{aligned} \left| \mu ( I ) - \frac{1}{\pi } \int _{I} \mathrm{Im}g_{\mu } (E + i \eta ) dE \right| \le c ( L \vee \varepsilon ^{-1}) \eta \log \left( 1 + \frac{| I |}{\eta } \right) . \end{aligned}$$

Proof

Let us prove the statement. We observe that

$$\begin{aligned} \frac{1}{\pi } \mathrm{Im}( g_{\mu } (y + i \eta ) ) = \frac{1 }{\pi } \int _{\mathbb{R }} \frac{\eta }{ (y - x )^2 + \eta ^2} \mu (dx) = P_\eta * \mu (y), \end{aligned}$$

where \(P_\eta \) is the Cauchy law with parameter \(\eta \). We thus need to perform a classical deconvolution. We may for example adapt Tao and Vu [29, Lemma 64] (see also e.g. [22, p. 15]). Define

$$\begin{aligned} F (y) = \frac{1}{\pi } \int _{I} \frac{\eta }{ (y - x )^2 + \eta ^2} dx = P_\eta ( I - y ). \end{aligned}$$

In particular

$$\begin{aligned} \left| \mu ( I ) - \frac{1}{\pi } \int _{I} \mathrm{Im}g_{\mu } (E + i \eta ) dE \right| = \left| \mu ( I ) - \int F(y) \mu (dy) \right|. \end{aligned}$$

Now, the Cauchy law has density \( P_{\eta } ( t ) = \frac{1}{\pi } \frac{\eta }{ \eta ^2+ t^2 }\). It follows that on the sets \(\{ y \in I\}\), \(\{ y \in I^c, {\mathrm{dist }}( y , I) \le |I | \}\) and \(\{ y \in I^c, {\mathrm{dist }}( y , I) \ge |I | \}\) we may use, respectively, the bounds

$$\begin{aligned}&\left| F(y) - 1 \right| \le \frac{ c }{ 1+ {\mathrm{dist }}( y , I^c ) \eta ^{-1} } , \quad | F(y) | \le \frac{ c }{ 1+ {\mathrm{dist }}( y , I) \eta ^{-1} } \quad \text{ and} \\&\qquad \qquad | F(y) | \le \frac{ c | I | \eta }{ {\mathrm{dist }}( y , I)^2 }. \end{aligned}$$

Writing \(I = [a , b]\), \(I_1 = I^c \cap [ a - | I | , b + | I | ] \cap K\) and \(I_2 = I^c \cap [ a - | I | , b + | I | ]^c \cap K \), we obtain

$$\begin{aligned} \left| \mu ( I ) - \frac{1}{\pi } \int _{I} \mathrm{Im}g_{\mu } (E + i \eta ) dE \right|&\le \int _I \frac{ c }{ 1+ {\mathrm{dist }}( y , I^c ) \eta ^{-1} } \mu (dy)\\&+ \int _{I_1 }\frac{ c }{ 1+ {\mathrm{dist }}( y , I ) \eta ^{-1} } \mu (dy) \\&+ \int _{I_2 } \frac{ c | I | \eta }{ {\mathrm{dist }}( y , I)^2 } \mu (dy) \\&+ \int _{K^c }\frac{ c }{ 1+ {\mathrm{dist }}( y , I ) \eta ^{-1} } \mu (dy). \end{aligned}$$

However, by assumption if \(J = [ E - \eta /2, E + \eta /2] \) is an interval of size \(\eta \) with \(E \in K\),

$$\begin{aligned} L \ge \mathrm{Im}g_\mu ( E + i \eta ) = \int _{\mathbb{R }} \frac{\eta }{ (E - x )^2 + \eta ^2} \mu (dx) \ge \frac{3}{ 4 \eta } \mu ( J). \end{aligned}$$

We deduce that \( \mu ( J) \le \frac{ 4L}{3} \eta .\) Now, consider a partition \({\mathcal{P }}\) of \(\mathbb{R }\) into intervals of size \( \eta \). We get from this last upper bound

$$\begin{aligned} \int _I \frac{ c }{ 1+ {\mathrm{dist }}( y , I^c ) \eta ^{-1} } \mu (dy)&\le \sum _{ J \in {\mathcal{P }}\cap I} \frac{ c \mu ( J) }{ 1 + {\mathrm{dist }}( J , I^c ) \eta ^{-1}}\\&\le \sum _{ k = 0 }^{ | I | \eta ^{-1}} \frac{ c^{\prime } L \eta }{ 1 + k} \le c^{\prime \prime } L \eta \log ( 1 + |I | \eta ^{-1} ). \end{aligned}$$

The other terms are bounded similarly. \(\square \)
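The smoothing estimate in Lemma 3.7 is easy to probe numerically. The following minimal sketch is illustrative only and no part of the proof: the Cauchy sample, the scale \(\eta \) and the interval are arbitrary choices. It compares \(\mu (I)\) with \(\frac{1}{\pi } \int _I \mathrm{Im}\, g_\mu (E + i \eta )\, dE\) for an empirical measure \(\mu \).

```python
import numpy as np

# Compare mu(I) with (1/pi) * int_I Im g_mu(E + i*eta) dE for an
# empirical measure mu; the gap is O(eta * log(1 + |I|/eta)).
rng = np.random.default_rng(0)
n, eta = 5000, 0.05
atoms = rng.standard_cauchy(n)      # atoms of mu; any real sample would do
a, b = -1.0, 1.0                    # the interval I = [a, b]

E_grid = np.linspace(a, b, 2001)
# Im g_mu(E + i*eta) = (1/n) * sum_j eta / ((E - x_j)^2 + eta^2)
im_g = np.array([np.mean(eta / ((E - atoms) ** 2 + eta**2)) for E in E_grid])
dE = E_grid[1] - E_grid[0]
smoothed = np.sum(0.5 * (im_g[1:] + im_g[:-1])) * dE / np.pi  # trapezoid rule
mass = np.mean((atoms >= a) & (atoms <= b))
print(mass, smoothed, abs(mass - smoothed))
```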

Proof of Theorem 3.5

In view of Proposition 3.6 and Lemma 3.7 applied to \(\mathbb{E }\mu _A\) and \(\mu _\alpha \), it remains to prove statement \((i)\) of Theorem 3.5. We thus assume in the sequel that \(1 < \alpha < 2\). The proof is divided into five steps. Throughout the proof, we assume that \(n \ge 3\) (without loss of generality) and we denote by \(\mathbb{E }_1 [ \cdot ] \) and \(\mathbb{P }_1 ( \cdot ) \) the conditional expectation and probability given \({\mathcal{F }}_1\), the \(\sigma \)-algebra generated by the random variables \((X_{ij})_{ i \ge j \ge 2}\).

Step one: Lower bound on the Stieltjes transform. Let \(K = [ a , b]\) be an interval which does not intersect the finite set \({\mathcal{E }}_\alpha \) defined in Proposition 3.6. The limit spectral measure \(\mu _\alpha \) has a positive density on \(\mathbb{R }\). In particular, there exists a constant \( c_0 = c_0(K,\alpha ) > 0\) such that for all \(0 \le \eta \le 1\) and \(x \in K\),

$$\begin{aligned} \mathrm{Im}g_{\mu _{\alpha }}( x + i \eta ) \ge c_0. \end{aligned}$$

Consequently, if there exists \(0 \le \eta \le 1\) such that for all \(x \in K\),

$$\begin{aligned} | \mathbb{E }g_{\mu _A} (x + i \eta ) - g_{\mu _\alpha } (x + i \eta ) | \le \frac{c_0}{2} \end{aligned}$$
(45)

then

$$\begin{aligned} \mathbb{E }\mathrm{Im}g_{\mu _A}( x + i \eta ) \ge \frac{c_0}{2}. \end{aligned}$$
(46)

Note that Proposition 3.6 already proves that (45) holds if \(n \ge n_0\) is large enough and

$$\begin{aligned} \eta _0 = n^{-\varepsilon }, \end{aligned}$$

for some \(\varepsilon > 0\). By an inductive argument, we aim at proving that (45) holds for the same constant \(n_0\) but for some \(\eta \ll \eta _0\).

For some constant \(\delta > 0 \) to be defined later on, we set for \(1 < \alpha < 2\),

$$\begin{aligned} \eta _\infty = \sqrt{ \frac{ \log n}{ 2 \delta n }} \vee \left( n^{ - \frac{\alpha }{ 8 - 3 \alpha } } ( 1 + {\mathbf{1 }}_{ 1 < \alpha < 4/3} ( \log n ) ^{ \frac{2 \alpha }{ 8 - 3 \alpha } })\right)\!. \end{aligned}$$

Note that \(\eta _\infty \ge n ^{- \frac{ \alpha +2 }{ 4}}\) for all \(n\) large enough (say, again, \(n \ge n_0\)).

Step two: Start of the induction. We assume that (45) holds for some \(\eta _1 \in [ n ^{- \frac{ \alpha +2 }{ 4}} , \eta _0]\) and that

$$\begin{aligned} | Y (x + i \eta _1 ) - y (x + i \eta _1) | \le \frac{\tau }{3}, \end{aligned}$$
(47)

where \(\tau \) was defined in (42). Let \(0 < \tau ^{\prime } < \tau \) to be chosen later on. We are going to prove that (45)–(47) hold also for

$$\begin{aligned} \eta \in \left[ \eta _1 - \tau ^{\prime } \eta _1 ^2 , \eta _1 \right]. \end{aligned}$$
(48)

provided that \(\eta _1 \ge t \eta _\infty \), \(n \ge n_0\) and \(t\) is large enough. As in the proof of Proposition 3.6, cf. (44), if \(\tau ^{\prime } \) is small enough, we note that (47) implies that

$$\begin{aligned} | Y (x + i \eta ) - y (x + i \eta ) | \le 2\frac{ | \eta - \eta _1 | ^2 }{ \eta _1 ^ 2 } + | Y (x + i \eta _1 ) - y (x + i \eta _1) | \le \tau . \end{aligned}$$
(49)

First, by Weyl’s interlacing property, (46) holds for \(A^{(1)}\) with \(c_0/2\) replaced by \(c_0 /4\) (since \(\eta _1\gg n^{-1}\)). Also it follows by Jensen’s inequality that for \( z = x + i \eta \), with \(x \in K\),

$$\begin{aligned} \left(\mathrm{Im}R^{(1)} _{kk} (z) \right)^{\frac{\alpha }{2}}&= \left( \sum _{ i = 1} ^{n-1} \frac{ \eta }{ (\lambda ^{(1)}_i - x)^2 + \eta ^2 } \langle v^{(1)}_ i , e_k \rangle ^2 \right)^{\frac{\alpha }{2}}\nonumber \\&\ge \sum _{ i = 1} ^{n-1} \left( \frac{ \eta }{ (\lambda ^{(1)}_i - x)^2 + \eta ^2 } \right)^{\frac{\alpha }{2}} \langle v^{(1)}_ i , e_k \rangle ^2 =\left( \left(\mathrm{Im}R^{(1)} (z) \right)^{\frac{\alpha }{2}}\right)_{kk}\nonumber \\&\ge \eta ^{1-\frac{\alpha }{2}} \sum _{ i = 1} ^{n-1} \frac{ \eta }{ (\lambda ^{(1)}_i - x)^2 + \eta ^2 } \langle v^{(1)}_ i , e_k \rangle ^2 =\eta ^{1-\frac{\alpha }{2}} \mathrm{Im}R^{(1)}_{kk} (z), \end{aligned}$$
(50)

where we have used the fact \( [\eta / ( (\lambda ^{(1)}_i - x)^2 + \eta ^2 )]^{\alpha /2 -1}\ge \eta ^{1-\alpha /2}\) for \(\alpha \in [0,2]\). Note also that from (44) and (48)

$$\begin{aligned} | \mathrm{Im}R^{(1)} _{kk} (z)- \mathrm{Im}R^{(1)}_{kk} (x + i \eta _1) | \le \frac{ | \eta - \eta _1 | }{\eta ^2} \le \frac{ \tau ^{\prime } }{ 1 - \tau ^{\prime } }. \end{aligned}$$

Hence, if \(\tau ^{\prime }\) is chosen small enough so that the above is less than \(c_0 / 8\), we deduce from (46) and (50) that with \(c_1 =c_0 / 16\),

$$\begin{aligned} \mathbb{E }\frac{1}{n-1} \sum _{k=1}^{n-1} \left(\mathrm{Im}R^{(1)}_{kk} (z) \right)^{\frac{\alpha }{2}}&\ge \eta ^{1-\frac{\alpha }{2}} \mathbb{E }\frac{1}{n-1} \sum _{k=1}^{n-1}\mathrm{Im}R^{(1)} (z)_{kk} \nonumber \\&\ge 2 c_1 \eta ^{1-\frac{\alpha }{2}}. \end{aligned}$$
(51)

Now, to bound from below

$$\begin{aligned} \frac{1}{n-1} \sum _{k=1}^{n-1} \left(\mathrm{Im}R^{(1)} _{kk} (z) \right)^\frac{\alpha }{2} \ge \frac{1}{n-1} {\mathrm{tr }}\left\{ (\mathrm{Im}R^{(1)}(z))^{\alpha /2} \right\} \!, \end{aligned}$$
(52)

we observe by Lemma 8.1, since \( x^{\alpha /2}\) has total variation on \([0,\eta ^{-1}]\) equal to \(\eta ^{-\alpha /2}\), that for \(r \ge 0\),

$$\begin{aligned}&\mathbb{P }\left( \frac{1}{n-1}{\mathrm{tr }}\left\{ ( \mathrm{Im}R^{(1)}(z) )^{\alpha /2} \right\} -\mathbb{E }\frac{1}{n-1}{\mathrm{tr }}\left\{ (\mathrm{Im}R^{(1)}(z))^{\alpha /2}\right\} \le - r \right)\\&\qquad \le \exp \left(-\frac{(n-1) r^2\eta ^{\alpha }}{2} \right). \end{aligned}$$

Applying the above with \(r=c_1 \eta ^{ 1 - \frac{\alpha }{2}}\) (so that \((n-1) r^2 \eta ^{\alpha } / 2 = c_1^2 (n-1) \eta ^{2} / 2\)) shows with (51) that for some \(\delta >0\),

$$\begin{aligned} \mathbb{P }(\Lambda (z)^c)\le e^{-\delta n \eta ^{2}}\!, \end{aligned}$$

where

$$\begin{aligned} \Lambda (z):= \left\{ \frac{1}{n-1} {\mathrm{tr }}\left\{ \left(\mathrm{Im}R ^{(1)} (z) \right)^{\frac{\alpha }{2}} \right\} \ge c_1 \eta ^{ 1 - \frac{\alpha }{2}} \right\} . \end{aligned}$$

Note that this probabilistic bound is nontrivial only if \(\eta _1 \ge n^{-\frac{1}{2}} \ge n ^{- \frac{ \alpha +2 }{ 4}}\) (recall that \(1 < \alpha < 2\)).

Step three: Gaussian concentration for quadratic forms. For any \(z = x + i \eta \in \mathbb{C }_+\), we may bound from below the imaginary part of

$$\begin{aligned} Q(z)=a_n^{-2}\sum _{k=2}^n R^{(1)}_{kk}(z)X_{1k}^2 \end{aligned}$$

on the event \(\Lambda (z) \in {\mathcal{F }}_1\). Indeed, as the \(\mathrm{Im}R^{(1)}_{kk},1\le k\le n-1,\) are nonnegative, we can use Lemma 7.1 to see that, conditionally on \({\mathcal{F }}_1\),

$$\begin{aligned} \mathrm{Im}Q(z) \stackrel{d}{=} \left(\frac{1}{n-1} \sum _{k=2}^{n} \left(\mathrm{Im}R_{kk} ^{(1)} (z) G_k^2 \right)^{\frac{\alpha }{2}}\right)^{\frac{2}{\alpha }} S = L(z) S , \end{aligned}$$
(53)

where the equality holds in law, \(S\) is a positive \(\alpha /2\)-stable random variable, and the \(G_k\) are independent standard Gaussian variables, independent of \(S\). Moreover, if \(\Lambda (z)\) holds then from (52)

$$\begin{aligned} \sum _{k=2}^{n} \left(\mathrm{Im}R_{kk} ^{(1)} (z) \right)^{\frac{\alpha }{2}}\ge c_1 (n-1) \eta ^{1 - \frac{\alpha }{2}}\ge c_1 (n-1)\eta \max _{k} \left(\mathrm{Im}R_{kk} ^{(1)} (z) \right)^{\frac{\alpha }{2}}. \end{aligned}$$

Hence, if \(\Lambda (z)\) holds and if \(\eta \ge c n^{-\frac{\alpha }{2}}\), we may apply Corollary 6.2 with \(p = \alpha \) and \(A\) the diagonal matrix with diagonal entries \(\sqrt{\mathrm{Im}R_{kk} ^{(1)} (z)}, 2 \le k \le n\). We deduce that, for some universal constants \(c > 0\) and \(0 < \delta < 1\), on the event \(\Lambda (z)\in {\mathcal{F }}_1\),

$$\begin{aligned} \mathbb{P }_1 \left( \Big (\sum _{k=2}^n |\sqrt{\mathrm{Im}R_{kk} ^{(1)} (z)} G_k|^\alpha \Big )^{\frac{1}{\alpha }} \le \delta ((n-1) \eta ^{1 - \frac{\alpha }{2}})^{1/\alpha }\right) \le e^{-\delta n\eta ^{\frac{2}{\alpha }}}. \end{aligned}$$

Taking squares, this yields

$$\begin{aligned} \mathbb{P }_1 \left( L (z) \le \delta ^2 \eta ^{\frac{2}{\alpha }-1} \right) \le e^{-\delta n \eta ^{\frac{2}{\alpha }}}, \end{aligned}$$
(54)

and hence, replacing \(\delta \) by the smaller universal constant \(\delta ^2\), we retain \(\mathbb{P }_1 ( L (z) \le \delta \eta ^{\frac{2}{\alpha }-1} ) \le e^{-\delta n \eta ^{\frac{2}{\alpha }}}\).

Finally, we observe similarly that by Lemma 7.1,

$$\begin{aligned} \mathrm{Im}(Q(z)+T(z))=\langle X_1, \mathrm{Im}R^{(1)} X_1\rangle \stackrel{d}{=} \tilde{L} (z) \tilde{S}, \end{aligned}$$

where \(T(z)\) was defined by (28) and, conditionally on \({\mathcal{F }}_1\), \(\tilde{S}\) is a positive \(\alpha /2\)-stable law, independent of \(\tilde{L}(z) \). Also, \(\tilde{L}(z) = \Vert A G \Vert _\alpha ^2\), where \(A = \left( \mathrm{Im}R^{(1)}\right)^{1/2}\) and \(G \in \mathbb{R }^{n-1}\) is a standard Gaussian vector. On the event \(\Lambda (z)\), we may again apply Corollary 6.2 with \(p = \alpha \) and this choice of \(A\): for \(\eta \ge c n^{-\frac{\alpha }{2}}\), the random variable \(\tilde{L}(z)\) satisfies the probabilistic bound

$$\begin{aligned} \mathbb{P }_1 \left( \tilde{L} (z) \le \delta \eta ^{\frac{2}{\alpha }-1} \right)&\le e^{-\delta n \eta ^{\frac{2}{\alpha }}}\!. \end{aligned}$$

We may thus summarize the last two steps by stating that if \(n ^{-1/2} \le \eta _1 \le 1\) holds then

$$\begin{aligned} \mathbb{P }\left( \Pi (z) ^c \right) \le 3 \exp ( - \delta n \eta ^2 ), \end{aligned}$$

where \(z = x + i \eta , x \in K\) and

$$\begin{aligned} \Pi (z) = \Lambda (z) \cap \left\{ L (z) \ge \delta \eta ^{\frac{2}{\alpha }-1} \right\} \cap \left\{ \tilde{L} (z) \ge \delta \eta ^{\frac{2}{\alpha }-1}\right\} . \end{aligned}$$

(recall that \(1 < \alpha < 2\)).
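Before moving on, the stable-law representation behind (53) can be checked by simulation. In the sketch below — illustrative only; the Pareto entries and the weight profiles are assumptions of the simulation — the law of the rescaled quadratic form is essentially insensitive to the weights \((c_k)\), as the representation predicts.

```python
import numpy as np

# After dividing by ((1/n) * sum_k c_k^{alpha/2})^{2/alpha}, the law of the
# heavy-tailed quadratic form a_n^{-2} * sum_k c_k X_k^2 should not depend
# on the weight profile (c_k): both rescaled forms approximate the same
# positive alpha/2-stable law.
rng = np.random.default_rng(1)
alpha, n, trials = 1.5, 2000, 4000
a_n = n ** (1 / alpha)              # normalization for P(|X| > t) ~ t^{-alpha}

def rescaled_qform(c):
    X = (1.0 - rng.uniform(size=(trials, n))) ** (-1 / alpha)  # |X| Pareto(alpha)
    norm = np.mean(c ** (alpha / 2)) ** (2 / alpha)
    return (X**2 @ c) / a_n**2 / norm

q = np.linspace(0.05, 0.95, 19)
z_flat = rescaled_qform(np.ones(n))                 # flat weights
z_rand = rescaled_qform(rng.uniform(0.1, 2.0, n))   # random weights
print(np.quantile(z_flat, q) / np.quantile(z_rand, q))  # ratios close to 1
```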

Step four: Improved convergence estimates. We next improve the results of Proposition 3.1 for our choice of \(z = x+ i \eta , x \in K\). We write instead of (29)

$$\begin{aligned} \left| Y(z) - I (z) \right|&\le \frac{\alpha }{2} ( \delta \eta ^ { \frac{2}{\alpha } -1 } )^{- \frac{\alpha }{2} - 1} \mathbb{E }\left[ ( S \wedge \tilde{S} )^{ - \frac{\alpha }{2} - 1} |T(z)| \right] + \eta ^{-\frac{\alpha }{2}} \mathbb{P }( \Pi (z)^c),\qquad \end{aligned}$$
(55)
$$\begin{aligned} \left| X(z) - J (z) \right|&\le ( \delta \eta ^ { \frac{2}{\alpha } -1 } )^{-2} \mathbb{E }\left[ ( S \wedge \tilde{S} )^{ - 2} |T(z)| \right] + \eta ^{-1} \mathbb{P }( \Pi (z)^c). \end{aligned}$$
(56)

Then from Lemma 3.3 and (27), there exists \(p > 1\) (depending on \(\alpha \)) such that

$$\begin{aligned} ( \mathbb{E }| T (z) |^p )^{\frac{1}{p}} \le c \left( n^{-\frac{1}{\alpha }} + \sqrt{\frac{M_n(z)}{n}}( 1 + {\mathbf{1 }}_{1 < \alpha \le 4/3} \log n ) \right) . \end{aligned}$$

From Hölder’s inequality and Lemma 7.3, we deduce that for some new constant \(c>0\),

$$\begin{aligned} \mathbb{E }\left[ ( S \wedge \tilde{S} )^{ - 2} |T(z)| \right] \le c \left( n^{-\frac{1}{\alpha }} + \sqrt{\frac{M_n(z)}{n}} ( 1 + {\mathbf{1 }}_{1 < \alpha \le 4/3} \log n ) \right). \end{aligned}$$

From (55)–(56), it follows that for \(n ^{-1/2} \le \eta _1 \le 1\) and some new constant \(c>0\),

$$\begin{aligned}&\left| Y(z) - I (z) \right| \vee \left| X(z) - J (z) \right| \\&\quad \le \; c \eta ^{- 2(\frac{2}{\alpha } -1) } \left( n^{-\frac{1}{\alpha }} + \sqrt{\frac{M_n(z)}{n}} ( 1 + {\mathbf{1 }}_{1 < \alpha < 4/3} \log n ) \right) + 3 \eta ^{-1} \exp ( - \delta n \eta ^2 ). \end{aligned}$$

Note that \( \eta ^{- 2(\frac{2}{\alpha } -1) }n^{-\frac{1}{\alpha }} \le 1\) if \(\eta \ge n ^{-1/2} \ge n^{- \frac{1}{2 ( 2 - \alpha )}}\), while \(\eta ^{-1} \exp ( - \delta n \eta ^2 ) \le 1\) if \(( 2 \delta n / \log n )^{-1/2} \le \eta \). Note also that this last expression improves upon Lemma 3.2, and then Proposition 3.1 can be improved into

$$\begin{aligned}&\left| Y(z) - \varphi _{\alpha , z} ( Y (z) ) \right| \vee \left| X(z) - \psi _{\alpha , z} ( Y (z) ) \right|\nonumber \\&\le \; c \eta ^{- 2(\frac{2}{\alpha } -1) } \left( n^{-\frac{1}{\alpha }} + \sqrt{\frac{M_n(z)}{n}}( 1 + {\mathbf{1 }}_{1 < \alpha \le 4/3} \log n ) \right)\nonumber \\&\quad +\, c \eta ^{-\frac{\alpha }{2}} n^{- \frac{\alpha }{4}} + c \eta ^{-1} \exp ( - \delta n \eta ^2 ). \end{aligned}$$
(57)

Then, by (49) we may use the bound (43). From (40), we thus obtain, for \(( 2 \delta n / \log n )^{-1/2} \le \eta _1 \le 1\),

$$\begin{aligned} \eta M_n (z)&\le c + c\eta ^{- 2(\frac{2}{\alpha } -1) } \sqrt{\frac{M_n(z)}{n}} ( 1 + {\mathbf{1 }}_{1 < \alpha \le 4/3} \log n ) \\&= c + c\eta ^{- \frac{8 - 3 \alpha }{2\alpha } } n^{-\frac{1}{2}} \sqrt{\eta M_n(z)} ( 1 + {\mathbf{1 }}_{1 < \alpha \le 4/3} \log n ). \end{aligned}$$

We deduce that, for some constant \(c > 0\), if \( \eta _\infty \le \eta _1 \le 1\),

$$\begin{aligned} \eta M_n(z) \le c. \end{aligned}$$

So finally, from (43)–(57), we find that for \(\eta _\infty \le \eta _1 \le 1\),

$$\begin{aligned}&|\mathbb{E }g_{\mu _A} (z) - g_{\mu _\alpha } ( z) | \nonumber \\&\le \, c_3 \eta ^{-\frac{\alpha }{2}} n^{-\frac{\alpha }{4}} + c_3 \eta ^{- \frac{8 - 3 \alpha }{2\alpha } } n^{-\frac{1}{2}} ( 1 + {\mathbf{1 }}_{1 < \alpha \le 4/3} \log n ) + c_3 \eta ^{-1} \exp ( - \delta n \eta ^2 ).\qquad \end{aligned}$$
(58)

Step five: End of the induction. From (42)–(58), we deduce that if \( t \eta _\infty \le \eta _1 \le 1\) and \(t\) is large enough, then

$$\begin{aligned} |\mathbb{E }g_{\mu _A} (z) - g_{\mu _\alpha } ( z) | \le \frac{c_0}{2} \quad \text{ and} \quad \left| Y(z) - y(z) \right| \le \frac{\tau }{3 }. \end{aligned}$$

We have thus proved that (45)–(47) hold also for our choice of \(\eta \).

The argument is completed as follows. Let \(K = [a , b]\) be a compact interval that does not intersect \({\mathcal{E }}_\alpha \). Starting from \(\eta _0 = n^{-\varepsilon }\) and applying the induction \(m\) times, we deduce that (45) holds for \(\eta _m = \eta _{m-1} - \tau ^{\prime } \eta _{m-1}^2\). Since this sequence decreases to zero as \(m\) goes to infinity, for some \(m\) we have \(t \eta _\infty \le \eta _m < 2 t \eta _\infty \). We deduce that for all \(n\) large enough (say \(n \ge n_0\)), (58) holds for \(z = x + i \eta \), with \(t \eta _\infty \le \eta \le 1\) and \(x \in K = [a , b]\). The statement follows. \(\square \)
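Although no exactly solvable model is available here, the interval estimates of Theorem 3.5 can be probed by simulation. In the sketch below (the symmetric Pareto entries and the interval are illustrative assumptions), the mass that the rescaled empirical spectral measure assigns to a fixed interval stabilizes as \(n\) grows.

```python
import numpy as np

# Mass of a fixed interval under the empirical spectral measure of
# X / n^{1/alpha}, for symmetric matrices with i.i.d. Pareto-type entries.
rng = np.random.default_rng(2)
alpha = 1.5

def esd_mass(n, a, b):
    X = (1.0 - rng.uniform(size=(n, n))) ** (-1 / alpha)
    X *= rng.choice([-1.0, 1.0], (n, n))        # random signs
    X = np.triu(X) + np.triu(X, 1).T            # impose the symmetry X_ij = X_ji
    lam = np.linalg.eigvalsh(X / n ** (1 / alpha))
    return np.mean((lam >= a) & (lam <= b))

print([esd_mass(n, 0.5, 1.0) for n in (400, 800, 1600)])  # values stabilize
```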

4 Weak delocalization of eigenvectors

Following Erdős–Schlein–Yau [20], from local convergence of the empirical spectral distribution (Theorem 3.5), it is possible to deduce the delocalization of eigenvectors. Using the union bound, Theorem 1.2 follows from the next proposition.

Proposition 4.1

(Delocalization of the eigenvectors) For any \(1 < \alpha < 2\), there exist \(\delta , c >0\) and a finite set \({\mathcal{E }}_\alpha \subset \mathbb{R }\) such that if \(I\) is a compact interval with \(I \cap {\mathcal{E }}_\alpha = \emptyset \), then for any unit eigenvector \(v\) with eigenvalue \(\lambda \in I\) and any \(1 \le i \le n\),

$$\begin{aligned} |\langle v , e_i \rangle | \le _{st} Z n ^ {-\rho ( 1 - \frac{1}{\alpha }) } ( \log n )^c, \end{aligned}$$

where \(\rho \) is as in Theorem 1.1 and \(Z\) is a non-negative random variable whose law depends on \((\alpha , I)\) and which satisfies

$$\begin{aligned} \mathbb{E }\exp ( Z^\delta ) < \infty . \end{aligned}$$

Proof

Let \({\mathcal{E }}_\alpha \) be as in Theorem 1.1. The density of \(\mu _\alpha \) is uniformly lower bounded on \(I\) by, say, \(4\varepsilon >0\). We set

$$\begin{aligned} \eta = c_1 \left( \sqrt{ \frac{ \log n}{ n }} \vee \left( n^{ - \frac{\alpha }{ 8 - 3 \alpha } } ( 1 + {\mathbf{1 }}_{ 1 < \alpha < 4/3} ( \log n ) ^{ \frac{2 \alpha }{ 8 - 3 \alpha } })\right) \right) , \end{aligned}$$

where the constant \(c_1\) is large enough to guarantee that for any interval \(J\) of length at least \(\eta \) in \(I\) we have \(|\mathbb{E }\mu _A (J) - \mu _\alpha (J) | \le 2 \varepsilon | J|\). Then, we partition the interval \(I = \cup _\ell I_\ell \) into \(c_2 \eta ^{-1} \) intervals of length \( \eta \). From what precedes we have for any \( 1 \le \ell \le c_2 \eta ^{-1}, \mathbb{E }\mu _A (I_\ell ) > 2 \varepsilon | I_\ell |\).

Now, by Lemma 8.1, the event \(F_n\) that for all \( 1 \le \ell \le c_2 \eta ^{-1}\),

$$\begin{aligned} \mu _A (I_\ell ) > \mathbb{E }\mu _A (I_\ell ) - \varepsilon | I_\ell | > \varepsilon | I_\ell |, \end{aligned}$$
(59)

has probability at least \( 1 - c_2 \eta ^{-1} \exp ( - n \varepsilon ^2 c^2_3 \eta ^2 / 2 ) \ge 1 - c \exp ( - c n^{\delta })\) for some constants \(c , \delta >0\).

Let \(v\) be a unit eigenvector and \(\lambda \in I\) such that \(A v = \lambda v\). Set \(v_i = \langle v , e_i \rangle \). We recall the formula

$$\begin{aligned} v_1 ^2 = ( 1 + a_n^{-2} \langle X_1 , ( A^{(1)} - \lambda ) ^{-2} X_1 \rangle ) ^{-1} , \end{aligned}$$
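Writing \(v = (v_1, w)\), the last \(n-1\) rows of \(A v = \lambda v\) give \(w = - v_1 a_n^{-1} (A^{(1)} - \lambda )^{-1} X_1\) (in the notation defined just below), and the formula follows from \(\Vert v \Vert _2^2 = v_1^2 + \Vert w \Vert _2^2 = 1\). The identity is also easy to verify numerically; in the following minimal sketch the scaling \(a_n\) is absorbed into the matrix entries (an illustrative normalization).

```python
import numpy as np

# Check: v_1^2 = (1 + <X_1, (A^{(1)} - lambda)^{-2} X_1>)^{-1}
# for any symmetric matrix A, with a_n absorbed into the entries.
rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
lam, V = np.linalg.eigh(A)
v, l = V[:, 2], lam[2]                 # any eigenpair of A
A1 = A[1:, 1:]                         # minor: first row and column removed
X1 = A[0, 1:]                          # off-diagonal part of the first row
B = np.linalg.inv(A1 - l * np.eye(n - 1))
print(v[0] ** 2, 1 / (1 + X1 @ B @ B @ X1))   # the two values coincide
```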

Here \(X_1 = ( X_{1 2}, \ldots , X_{1 n}) \in \mathbb{R }^{n-1}\) and \(A^{(1)}\) is the principal minor matrix of \(A\) where the first row and column have been removed. We may now argue as in the proof of Proposition 2.1: for some \(1 \le \ell \le c_2 \eta ^{-1}\) we have \(\lambda \in I_\ell \), and it follows that

$$\begin{aligned} v_1 ^2 \le a_n^{2} c_3 ^2 \eta ^2 \left( \sum _{i : \lambda _i ^{(1)} \in I_\ell } \langle X_1 , u_i ^{(1)} \rangle ^{2} \right)^{-1}, \end{aligned}$$

where \((\lambda _i ^{(1)},u_i ^{(1)}), 1 \le i \le n-1\), denote the eigenvalues and an orthonormal basis of eigenvectors of \(A^{(1)}\). We rewrite the above expression as

$$\begin{aligned} v_1 ^2 \le a_n^{2} c_3 ^2 \eta ^2 {\mathrm{dist }}^{-2}( X_1, W^{(1)} ) = a_n^{2} c_3 ^2 \eta ^2 \langle X_1, P_1 X_1 \rangle ^{-1} , \end{aligned}$$
(60)

where \(W^{(1)} = \mathrm{span} \left\{ u_{i}^{(1)} : 1 \le i \le n-1 , \lambda _i^{(1)} \notin I_\ell \right\} ,\) and \(P_1\) is the orthogonal projection onto the orthogonal complement of \(W^{(1)}\). The rank of \(P_1\) is equal to

$$\begin{aligned} N^{(1)}_{I_\ell } = | \{1 \le i \le n - 1 : \lambda ^{(1)}_i \in I_\ell \} |= n - 1 - \mathrm dim (W^{(1)}). \end{aligned}$$

From the Weyl interlacing theorem, we get

$$\begin{aligned} n \mu _A (I_\ell ) - 1 \le N^{(1)}_{I_\ell } \le n \mu _A (I_\ell ) + 1. \end{aligned}$$
(61)

From Lemma 7.1, there exist a positive \(\alpha /2\)-stable random variable \(S\) and a standard Gaussian vector \(G\) such that

$$\begin{aligned} {\mathrm{dist }}^{2}( X_1, W^{(1)} ) \stackrel{d}{=} \Vert P_1 G \Vert _\alpha ^2 S. \end{aligned}$$

By Corollary 6.2 and (12), if \(n\) is large enough, on the event \(F_n\), see (59), with probability at least

$$\begin{aligned} 1 - 2 \exp \left( - \delta \frac{ ( \varepsilon c_3 n \eta )^{\frac{2}{ \alpha }} }{ n^{\frac{2}{\alpha } - 1}} \right) \ge 1 - 2 \exp ( - c n^{\delta }) \end{aligned}$$

the lower bound

$$\begin{aligned} \Vert P_1 G \Vert _\alpha \ge \delta \left( \varepsilon c_3 n \eta \right)^{\frac{1}{\alpha }} \end{aligned}$$

holds. We denote by \(\bar{F}_n\) the event that both \(F_n\) and this lower bound hold. Hence, for some \(c >0\), on \(\bar{F}_n\), we have from (60)

$$\begin{aligned} v_1 ^2 \le c \eta ^{2( 1 - 1 / \alpha )} S^{-1} . \end{aligned}$$

In summary, we have shown that

$$\begin{aligned} |v_1 | \le c \eta ^{ 1 - 1 / \alpha } S^{- 1 / 2} + {\mathbf{1 }}_{\bar{F}_n^c}, \end{aligned}$$

where \(\bar{F}_n^c\) has probability at most \(c \exp ( - c n^{\delta })\), for some \(c >1\). For \(0 < \delta ^{\prime } < 1\), this yields

$$\begin{aligned} \mathbb{E }\exp \left\{ \left( \frac{ |v_1| }{ c \eta ^{ 1 - 1 / \alpha } } \right)^ {\delta ^{\prime }} \right\}&\le \mathbb{E }\exp \left\{ S^{- \delta ^{\prime } / 2} + \eta ^{ \delta ^{\prime } ( 1 / \alpha - 1 ) } {\mathbf{1 }}_{\bar{F}_n^c} \right\} \\&\le \sqrt{ \mathbb{E }e^ { 2 S^{- \delta ^{\prime } / 2} } \mathbb{E }e^ { 2 \eta ^{ \delta ^{\prime } ( 1 / \alpha - 1 ) } {\mathbf{1 }}_{\bar{F}_n^c} } } \\&\le \sqrt{ \mathbb{E }e^ { 2 S^{- \delta ^{\prime } / 2} } ( 1 + e^ { 2 n ^ {\delta ^{\prime }/2} } c e^{ - c n^{\delta }}) }, \end{aligned}$$

where we have used that \(\eta \ge 1/n\) and \(\alpha < 2\). Using Lemma 7.3, if \(\delta ^{\prime }\) is small enough, the above is uniformly bounded in \(n\). This gives our statement for any \(\delta ^{\prime \prime } < \delta ^{\prime }\). \(\square \)

5 Analysis of the limit recursive equation

We next turn to the analysis of the limiting equation describing the resolvent, in the case \(\alpha <1\). Let \({\mathcal{H }}\) be the set of analytic functions \(h : \mathbb{C }_+ \rightarrow \mathbb{C }_+\) such that for all \(z \in \mathbb{C }_+, |h(z)| \le \mathrm{Im}(z)^{-1}\). We also consider the subset \({\mathcal{H }}_0\) of functions of \({\mathcal{H }}\) such that for all \(z \in \mathbb{C }_+, h( - \bar{z}) = - \bar{h} ( z)\). For every \(n\) and \(1 \le k \le n\), the function \(z \mapsto R (z)_{kk}\) is in \({\mathcal{H }}\). It is proved in [10] that \(R_{kk}\) converges weakly, in the sense of finite-dimensional distributions, to the random variable \(R_0\) in \({\mathcal{H }}_0\), which is the unique solution of the following recursive distributional equation: for all \(z \in \mathbb{C }_+\),

$$\begin{aligned} R_0(z) \stackrel{d}{=} - \left( z + \sum _{k \ge 1} \xi _k R_k (z)\right)^{-1}, \end{aligned}$$
(62)

where \(\{\xi _k\}_{k \ge 1}\) is a Poisson process on \(\mathbb{R }_+\) with intensity measure \(\frac{\alpha }{2} x^{ -\frac{\alpha }{2} - 1} dx\), independent of \((R_k)_{k\ge 1}\), a sequence of independent copies of \(R_0\). In [10], \(R_0(z)\) is shown to be the resolvent at a vector of a random self-adjoint operator associated with Aldous’ Poisson Weighted Infinite Tree. We define \(\bar{\mathbb{C }}_+ = \{z \in \mathbb{C }: \mathrm{Im}(z) \ge 0 \} = \mathbb{C }_+ \cup \mathbb{R }\). In the following statement, we establish a new property of this resolvent.
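Solutions of such recursive distributional equations can be approximated by the standard population-dynamics (pool) method. The sketch below is illustrative only: the spectral parameter, the pool size and the truncation of the Poisson process at \(K\) points are assumptions of the simulation, with the points realized as \(\xi _k = \Gamma _k^{-2/\alpha }\) for Poisson arrival times \(\Gamma _k\).

```python
import numpy as np

# Population dynamics for (62): R <- -1/(z + sum_k xi_k R_k), where the
# {xi_k} are the points Gamma_k^{-2/alpha} of the Poisson process
# (truncated at K points) and the R_k are resampled from the pool.
rng = np.random.default_rng(4)
alpha, pool_size, n_iter, K = 0.5, 10_000, 30, 50
z = 3.0 + 0.1j

pool = np.full(pool_size, -1 / z, dtype=complex)
for _ in range(n_iter):
    gam = np.cumsum(rng.exponential(size=(pool_size, K)), axis=1)
    xi = gam ** (-2 / alpha)           # decreasing points of the Poisson process
    picks = pool[rng.integers(0, pool_size, size=(pool_size, K))]
    pool = -1 / (z + (xi * picks).sum(axis=1))
print(np.mean(pool))                   # Monte Carlo approximation of E R_0(z)
```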

Theorem 5.1

(Unicity for the resolvent recursive equation) Let \(0 < \alpha < 2/3\). There exists \(E_\alpha > 0 \) such that for any \(z \in \bar{\mathbb{C }}_+\) with \(|z| \ge E_\alpha \), there is a unique random variable \(R_0 (z)\) on \(\bar{\mathbb{C }}_+\) which satisfies the distributional equation (62) and \(\mathbb{E }| R_0(z) |^ {\frac{\alpha }{2} } < + \infty \). Moreover, for any \(0 < \kappa < \alpha / 2\), there exist \(E_{\alpha ,\kappa } \ge E_\alpha \) and \(c>0\), such that for any \(z \in \mathbb{C }_+\) with \(|z| \ge E_{\alpha ,\kappa }\),

$$\begin{aligned} \mathbb{E }\mathrm{Im}R_0(z) ^ {\frac{\alpha }{2} } \le c \, \mathrm{Im}( z ) ^ {\kappa }. \end{aligned}$$

In particular, if \(\mathrm{Im}( z) = 0, R_0(z)\) is a real random variable.

The main part of this section is devoted to the proof of Theorem 5.1. We will then analyze its consequences for our random matrix and prove Theorem 1.3 in Sect. 5.4. As usual, the proof is based on a fixed point argument. However, as \(R_0\) is complex-valued, it is not enough to run a fixed point argument for the moments of \(R_0\), as was done previously in [6, 7]. Instead, we prove that moments of linear combinations of \(R_0\) and its conjugate satisfy a fixed point equation. We then show that this new fixed point equation is well defined and, for sufficiently large \(|z|\), has a unique solution.

5.1 Proof of Theorem 5.1

We shall give the proof of Theorem 5.1 in this section, but postpone the proofs of technical lemmas to the next subsection. By construction \( H (z) = - i R_0 ( z) \in {\mathcal{K }}_{1}\) as well as \(\bar{H} (z) = i \bar{R}_0 ( z) \in {\mathcal{K }}_{1}\) (recall that for \(\beta \in [0,2]\) , \({\mathcal{K }}_\beta = \{z \in \mathbb{C }: | \arg (z) | \le \frac{\pi \beta }{2} \}\)). For ease of notation, we define the bilinear form \(h.u\) for \( h \in \mathbb{C }\) and \(u \in {\mathcal{K }}_1^+ = {\mathcal{K }}_1 \cap \bar{\mathbb{C }}_+\) given by

$$\begin{aligned} h.u = \mathrm{Re}( u ) h + \mathrm{Im}(u ) \bar{h}. \end{aligned}$$

For example, \(| i.u | = | \mathrm{Re}(u) - \mathrm{Im}(u)|\). Note also that if \(h \in {\mathcal{K }}_1\) and \(u\in {\mathcal{K }}_1^+\), then \(h.u \in {\mathcal{K }}_1\). We set

$$\begin{aligned} \gamma _z ( u ) = \Gamma (1 - \frac{\alpha }{2}) \mathbb{E }( H (z).u )^{\frac{\alpha }{2}} \; \in {\mathcal{K }}_{\alpha /2}. \end{aligned}$$

We let \({\mathcal{C }}_\alpha \) (resp. \({\mathcal{C }}_\alpha ^{\prime }\)) denote the set of continuous functions \(g\) from \( {\mathcal{K }}_1^+\) to \({\mathcal{K }}_{\alpha /2}\) (resp. \(\mathbb{C }\)) such that \(g( \lambda u ) = \lambda ^{\alpha /2 } g(u)\), for all \(\lambda > 0\). Then, for \(\alpha /2 \le \beta \le 1\), we introduce the norm

$$\begin{aligned} \Vert g \Vert _{\beta } = \max _{ u \in S^1_+} | g(u) | + \max _{ u \ne v \in S^1_+} \frac{ | g(u) - g(v) | }{ |u - v | ^ { \beta } } \left( | i.u | \wedge | i.v | \right)^ {\beta - \frac{\alpha }{2}} \end{aligned}$$

where \(S_+^1=\{u\in {\mathcal{K }}_1^+, |u|=1\}\). We then define \( {\mathcal{H }}_{\beta }\) (resp. \( {\mathcal{H }}^{\prime }_{\beta }\)) as the set of functions \(g\) in \({\mathcal{C }}_\alpha \) (resp. \({\mathcal{C }}_\alpha ^{\prime }\)) such that \(\Vert g \Vert _\beta \) is finite. Note that \(\Vert g \Vert _{\beta } \) contains two parts: the supremum norm and a weighted \(\beta \)-Hölder norm, which gets worse as the argument of \(u\) or \(v\) gets close to \(\pi /4\). Notice also that \( {\mathcal{H }}^{\prime }_{\beta }\) is a real vector space and \( {\mathcal{H }}_{\beta }\) is a cone.
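To illustrate the definition, consider, assuming inequality (64) below, the homogeneous function \(g(u) = u^{\frac{\alpha }{2}}\): it maps \({\mathcal{K }}_1^+\) into \({\mathcal{K }}_{\alpha /2}\), belongs to \({\mathcal{C }}_\alpha \), and for all \(u, v \in S^1_+\),

$$\begin{aligned} |g(u)| = 1 \quad \text{ and} \quad | u^{\frac{\alpha }{2}} - v^ {\frac{\alpha }{2}} | \le c |u - v | ^ { \beta } \left( | u | \wedge | v | \right)^ { \frac{\alpha }{2} - \beta } = c |u - v | ^ { \beta }, \end{aligned}$$

so that \(\Vert g \Vert _\beta \le 1 + c\) for every \(\beta \in [\alpha /2 , 1]\) (on \(S^1_+\) the weight \((|i.u| \wedge |i.v|)^{\beta - \frac{\alpha }{2}} \le 1\) only helps).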

The starting point of our analysis is that \(\gamma _z\) belongs to \({\mathcal{H }}_{\beta }\).

Lemma 5.2

(Regularity of fractional moments) Let \(0 < \alpha < 2\) and \(z \in \bar{\mathbb{C }}_+\),

  • Let \(R_0(z)\) be a solution of (62) such that \(\mathbb{E }| R_0(z) |^ {\frac{\alpha }{2} } < + \infty \). Then for all \(0 < \beta <1\), all \(z\in \bar{\mathbb{C }}_+ \backslash \{ 0\} \), \(\mathbb{E }| R_0(z)| ^\beta \le c | \mathrm{Re}( z) | ^{-\beta } \) for some constant \(c = c(\alpha ,\beta )\).

  • Let \(H\) be a random variable in \({\mathcal{K }}_{1}\) such that \(\mathbb{E }| H|^ { \frac{\alpha }{2}} \) is finite. If we define for \(u \in {\mathcal{K }}_1 ^ +, \gamma ( u ) = \mathbb{E }( H.u )^{\frac{\alpha }{2}} \), then \(\gamma \in {\mathcal{H }}_{\beta }\) for all \(\alpha /2 \le \beta \le 1\) and \(\Vert \gamma \Vert _\beta \le c \mathbb{E }| H|^ { \frac{\alpha }{2}}\) for some universal constant \(c > 0\).

Let \( h \in {\mathcal{K }}_1\) and \(g \in {\mathcal{H }}_{\beta }\). We define formally the function given for all \( u \in S^1_+\) by

$$\begin{aligned}&F_h ( g ) (u) = \int _0 ^ {\frac{\pi }{2}} d\theta (\sin 2 \theta )^{\frac{\alpha }{2} -1}\, \int _{0}^\infty dy\, y^{-\frac{\alpha }{2}-1}\\&\qquad \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } }\left( e^{-r^{\frac{\alpha }{2} } g(e^{i\theta })} - e^{- y r h . u }e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y u )}\right)\!. \end{aligned}$$

We next see that \(F_{-iz}\) is closely related to a fixed point equation satisfied by \(\gamma _z\).

Lemma 5.3

(Fixed point equation for fractional moments) Let \(z \in \bar{\mathbb{C }}_+, 0 < \alpha < 2\) and \(R_0(z)\) solution of (62) such that \(\mathbb{E }| R_0(z) |^ {\frac{\alpha }{2} } < + \infty \). Then for all \(u \in {\mathcal{K }}_1^+\),

$$\begin{aligned} \gamma _z (u ) = c_\alpha F_{-iz} ( \gamma _z ) (\check{u}), \end{aligned}$$

where

$$\begin{aligned} c_\alpha = \frac{ \alpha }{ 2^{ \frac{\alpha }{2} } \Gamma ( \alpha / 2 ) ^2 \Gamma ( 1 - \alpha / 2 ) } \quad \text{ and} \quad \check{u} =i \bar{u}= \mathrm{Im}(u) + i \mathrm{Re}(u). \end{aligned}$$

To prove this lemma, we will properly define and study the function \(F_h\) on \( {\mathcal{H }}_{\beta }\), at least for some values of \((h,\alpha ,\beta )\). We shall first prove the following.

Lemma 5.4

(Domain of definition of \(F_h\)) Let \(h \in {\mathcal{K }}_1\) with \(|h| \ge 1, 0 < \alpha < 1\) and \(\beta \) such that

$$\begin{aligned} \frac{\alpha }{2} < \beta < 1 - \frac{ \alpha }{ 2}. \end{aligned}$$

Then \(F_h\) defines a map from \({\mathcal{H }}_{\beta }\) to \({\mathcal{H }}^{\prime }_{\beta }\), and there exists a constant \(c = c(\alpha )\) such that

$$\begin{aligned} \Vert F_h ( g ) \Vert _\beta \le c |h|^{-\frac{\alpha }{2}} ( \Vert g \Vert _\beta + 1 ) . \end{aligned}$$

Unfortunately, we could not prove that \(F_h\) is a contraction for \(\Vert \cdot \Vert _\beta \), but only for a weaker and less appealing norm on \({\mathcal{H }}^{\prime }_{\beta }\), which is given for \(\varepsilon > 0\) by:

$$\begin{aligned} \Vert g \Vert _{\beta , \varepsilon } = \max _{ u \in S^1_+} | g(u) | | i.u |^ \varepsilon + \max _{ u \ne v \in S^1_+} \frac{ | g(u) - g(v) | }{ |u - v | ^ { \beta } } \left( | i.u | \wedge | i.v | \right)^ {\beta + \varepsilon }. \end{aligned}$$

It turns out that the map \(F_h\) is Lipschitz for this new norm if \(\alpha \) is small enough.

Lemma 5.5

(Contraction property of \(F_h\)) Let \(h \in {\mathcal{K }}_1\) with \(|h| \ge 1, 0 < \alpha < 2/3, \alpha / 2 < \beta < 1 - \alpha /2\) and \(0 < \varepsilon < (1 - 3 \alpha /2)\wedge (\beta -\alpha /2) \). Then there exists a finite constant \(c = c(\alpha ,\beta ,\varepsilon )\) such that, for all \(f, g \in {\mathcal{H }}_{\beta }\),

$$\begin{aligned} \Vert F_h ( f ) - F_h ( g ) \Vert _{\beta ,\varepsilon } \le c |h|^{- \alpha } ( 1 + \Vert f \Vert _\beta + \Vert g \Vert _{\beta } ) \Vert f - g \Vert _{\beta ,\varepsilon }. \end{aligned}$$

We can now turn to the proof of Theorem 5.1. To this end define the map \(G_z\) which maps \(g \in {\mathcal{H }}_{\beta }\) to the function

$$\begin{aligned} G_z ( g ) (u) = c_\alpha F_{-iz} ( g ) (\check{u} ), \quad u \in S^1_+, \end{aligned}$$
(63)

where \(c_\alpha \) and \(\check{u}\) are given by Lemma 5.3. Then by Lemma 5.4, if \(|z|\) is large enough, any fixed point \(g\) of \(G_z\) satisfies \(\Vert g \Vert _\beta \le c_0 /2\) for some constant \(c_0= c_0(\alpha , \beta )\). By Lemma 5.5, for any \(0 < \varepsilon < 1 - 3 \alpha /2 \), if \(|z| \ge E_{\alpha , \varepsilon }\) is large enough, \(G_z\) satisfies

$$\begin{aligned} \Vert G_z ( f) - G_z (g) \Vert _{\beta ,\varepsilon } \le \frac{ 1 + \Vert f \Vert _\beta + \Vert g \Vert _{\beta } }{1 + 2 c_0} \Vert f - g \Vert _{\beta ,\varepsilon }. \end{aligned}$$

Thus, by Lemma 5.3, \(\gamma _z\) is the unique solution in \({\mathcal{H }}_\beta \) of the fixed point equation \( \gamma _z = G_z (\gamma _z). \) However, by Lemma 5.7 below, the law of \(R_0(z)\) which satisfies (62) is uniquely characterized by its fractional moments \(\gamma _z\). Therefore, there is a unique solution to this recursive distributional equation.

To prove the estimate on \(\mathbb{E }[\mathrm{Im}R_0(z)^{\alpha /2}]\), we start by proving that \(\mathrm{Im}R_0(E)\) vanishes almost surely. Indeed, we first note that when \(z = E \ne 0\) is real, there is a real solution of the fixed point equation \(\gamma _z = G_z (\gamma _z)\). Let us look for a probability distribution \(P_E\) on \(\mathbb{R }\) such that (62) holds. We recall that if the \(y_k\) are non-negative i.i.d. random variables, independent of \(\{\xi _k\}_{k\ge 1}\), then \(\sum _{k} y_k \xi _k \) is equal in law to \( ( \mathbb{E }y_1^{\frac{\alpha }{2}} ) ^{\frac{2}{\alpha } } \sum _{k} \xi _k\), and \(S = \sum _{k} \xi _k\) is a non-negative \(\alpha /2\)-stable random variable. Thus, using the Poisson thinning property, \(P_E\) has to be the law of

$$\begin{aligned} -\left(E+ a^{2/\alpha } S - b^{2/\alpha } S^{\prime } \right)^{-1} \end{aligned}$$

if \(S\) and \(S^{\prime }\) are independent \(\alpha /2\)-stable positive laws and \(a = \int \max ( x, 0)^{\alpha /2}dP_E(x), b = \int \max ( -x, 0)^{\alpha /2}dP_E(x)\). We find the system of equations

$$\begin{aligned} a&= \mathbb{E }\left((E + a^{2/\alpha } S - b^{2/\alpha } S^{\prime })^{-1} \right)_-^{\alpha /2},\\ b&= \mathbb{E }\left((E + a^{2/\alpha } S - b^{2/\alpha } S^{\prime })^{-1} \right)_+^{\alpha /2}, \end{aligned}$$

(where we have used the notation \((x)_+ = \max ( x, 0)\), \((x)_- = \max ( -x, 0)\)). Notice that \(a^{2/\alpha } S - b ^{2/\alpha } S^{\prime }\) is an \(\alpha /2\)-stable variable; in particular, it has a bounded density. Hence, for any \(0 < \alpha < 2\), \( | E + a^{ 2/\alpha } S - b^{ 2/\alpha } S^{\prime } |^{-\alpha /2}\) is integrable. Thus, by construction, \(\tilde{\gamma }_E(u)=\Gamma (1-\frac{\alpha }{2}) \int ((-i x).u)^{\frac{\alpha }{2}}dP_E(x)\) belongs to \({\mathcal{H }}_\beta \) and it is a fixed point of \(G_E\). This ensures the existence of \(a,b\ge 0\) and also the fact that \(P_E\) is the law of \(R_0(E)\) as soon as \(E\) is large enough so that \(G_E\) is a contraction.
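For the reader's convenience, the thinning step used above reads explicitly as follows: splitting the Poisson sum according to the sign of \(R_k(E)\) and applying the scaling identity separately with \(y_k = (R_k(E))_+\) and \(y_k = (R_k(E))_-\),

$$\begin{aligned} \sum _{k} \xi _k R_k (E) \stackrel{d}{=} \left( \mathbb{E }(R_0(E))_+^{\frac{\alpha }{2}} \right)^{\frac{2}{\alpha }} S - \left( \mathbb{E }(R_0(E))_-^{\frac{\alpha }{2}} \right)^{\frac{2}{\alpha }} S^{\prime } = a^{2/\alpha } S - b^{2/\alpha } S^{\prime }. \end{aligned}$$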

To consider \(\gamma _z\) with small imaginary part, we need the following additional lemma.

Lemma 5.6

(Continuity of the maps \(F_h\)) Let \( 0 < \alpha < 2/3, \alpha / 2 < \beta < 1 - \alpha /2, 0<\varepsilon \le \beta -\frac{\alpha }{2}\) and \(0 < \kappa < \alpha /2\). There exists a constant \(c = c( \alpha , \beta , \kappa ) >0\), such that for any \(h,k \in {\mathcal{K }}_1, |h|, |k| \ge 1\), and \(g \in {\mathcal{H }}_{\beta }\)

$$\begin{aligned} \Vert F_h ( g) - F_k (g) \Vert _{\beta ,\varepsilon } \le c ( |h| \wedge |k| ) ^{-\frac{\alpha }{2} -\kappa } | h - k |^{\kappa } (1 + \Vert g \Vert _{\beta } ). \end{aligned}$$

Set \(z = E + i \eta \) with \( |E| \ge E_{\alpha , \varepsilon }\) and let \(0 < \kappa < \alpha /2\). Then, by Lemma 5.6, we have

$$\begin{aligned} \Vert \gamma _E - \gamma _z \Vert _{\beta ,\varepsilon } = \Vert G_E ( \gamma _E ) - G_z( \gamma _z ) \Vert _{\beta ,\varepsilon }&\le \Vert G_E ( \gamma _E ) - G_z( \gamma _E )\Vert _{\beta ,\varepsilon } + \Vert G_z ( \gamma _E ) - G_z( \gamma _z )\Vert _{\beta ,\varepsilon }\\&\le c E ^{-\frac{\alpha }{2} -\kappa } (1 + c_0 ) \eta ^\kappa + \frac{1}{2} \Vert \gamma _E - \gamma _z \Vert _{\beta ,\varepsilon }. \end{aligned}$$

Hence, for some \(c^{\prime } = c^{\prime } (\alpha ,\beta , \kappa )\), we deduce from Lemma 5.3 that if \(z=E+i\eta \)

$$\begin{aligned} \Vert \gamma _E - \gamma _z \Vert _{\beta ,\varepsilon } \le c^{\prime } \eta ^\kappa . \end{aligned}$$

Then, since \(\gamma _E ( e^{i \frac{\pi }{4}} )=0\) (as \(R_0(E)\) is real, \(H(E) = -i R_0(E)\) is purely imaginary and \(H(E).e^{i \frac{\pi }{4}} = \sqrt{2}\, \mathrm{Re}(H(E)) = 0\)), for any \(u \in S^1_+\),

$$\begin{aligned} \Gamma \left(1-\frac{\alpha }{2}\right) \mathbb{E }\mathrm{Im}R_0(z) ^{\frac{\alpha }{2}}&= | \gamma _z ( e^{i \frac{\pi }{4}} ) - \gamma _E ( e^{i \frac{\pi }{4}} ) | \\&\le | \gamma _z ( e^{i \frac{\pi }{4}} ) - \gamma _z (u ) | + | \gamma _E ( e^{i \frac{\pi }{4}} ) - \gamma _E ( u ) | + | \gamma _z ( u ) - \gamma _E ( u ) | \\&\le c | u - e^{i \frac{\pi }{4}} |^{\frac{\alpha }{2}} + \Vert \gamma _z - \gamma _E \Vert _{\beta , \varepsilon } | i . u | ^{-\varepsilon } \\&\le c^{\prime \prime } | u - e^{i \frac{\pi }{4}} |^{\frac{\alpha }{2}} + c^{\prime \prime } \eta ^\kappa | u - e^{i \frac{\pi }{4}} |^{-\varepsilon }. \end{aligned}$$

Choosing \(u\) such that \( | u - e^{i \frac{\pi }{4}} |\) is of order \(\eta ^{\frac{2 \kappa }{\alpha + 2 \varepsilon } } \), we deduce that for all \(z = E + i \eta \) with \( |E| \ge E_{\alpha , \varepsilon }, \mathbb{E }\mathrm{Im}R_0(z) ^{\frac{\alpha }{2}} \) is bounded by \(\eta ^{\frac{ \kappa \alpha }{\alpha + 2 \varepsilon } } \) up to a multiplicative constant. Since \(\varepsilon > 0\) can be arbitrarily small, this concludes the proof of Theorem 5.1.

5.2 Proofs of technical lemmas

We collect in this part the proofs of a few technical results used in the proof of Theorem 5.1.

5.2.1 Proof of Lemma 5.2 (Regularity of fractional moments)

If \(\mathrm{Re}(z)=E\),

$$\begin{aligned} |R_0(z)|\le |E-\sum \xi _k \mathrm{Re}(R_{k}(z))|^{-1} \end{aligned}$$

where \(\sum \xi _k \mathrm{Re}(R_{k}(z))\) is equal in law to \(aS-bS^{\prime }\) for two non-negative constants \(a,b\) and two independent stable laws \(S,S^{\prime }\). Assume for example that \(E > 0\). By conditioning on \(S^{\prime }\) and integrating over \(S\), we deduce from Lemma 7.4 that there exists a finite constant \(C = C(\alpha ,\beta )\) so that

$$\begin{aligned} \mathbb{E }|R_0(z)|^ \beta \le \mathbb{E }| E - aS + bS^{\prime } |^{-\beta } \le C \mathbb{E }| E + b S^{\prime } |^{-\beta } \le C^2 E ^{-\beta }. \end{aligned}$$

In particular, as \(\eta \) goes to \(0\), any limit point \(R_0(E)\) of \(R_0(E + i \eta )\), solution of (62), satisfies the above inequality. The conclusion of the first point follows.

To prove the second point, we notice that it is straightforward that \(\gamma \) belongs to \({\mathcal{C }}_\alpha \). Moreover, for any \(\beta \in [\frac{\alpha }{2},1]\), there exists a constant \(c = c(\alpha , \beta )\) such that for any \(x, y\) in \({\mathcal{K }}_1\),

$$\begin{aligned} | x^{\frac{\alpha }{2}} - y^ {\frac{\alpha }{2}} | \le c |x - y | ^ { \beta } \left( | x | \wedge | y | \right)^ { \frac{\alpha }{2} - \beta }. \end{aligned}$$
(64)

Also, we have, for \(u \in {\mathcal{K }}_1 ^ +\) and \(h \in {\mathcal{K }}_1\),

$$\begin{aligned} |i.u | |h| \le | h.u | \le \sqrt{2} |u | |h| . \end{aligned}$$
(65)

Indeed, if \( u = s + i t, s , t \ge 0\), then \(|h.u|^ 2 = \mathrm{Re}(h)^ 2 ( s + t)^2 + \mathrm{Im}(h)^ 2 ( s - t)^2\). This last expression is bounded from below by \( |h|^2 ( (s+ t)^ 2 \wedge (s- t)^ 2 ) = |h|^2 (s- t)^ 2 = |i.u |^2 |h|^2\). While it is bounded from above by \( |h|^ 2 ( ( s + t)^2 + ( s - t)^2 )= 2 |h|^2| u |^2\).

Now, using Jensen's inequality and (65), we find

$$\begin{aligned} \left| \mathbb{E }( H.u )^{\frac{\alpha }{2}} \right| \le \mathbb{E }\left| ( H.u )^{\frac{\alpha }{2}} \right| \le (\sqrt{2} |u |)^{\frac{\alpha }{2}} \mathbb{E }| H | ^{\frac{\alpha }{2}}, \end{aligned}$$

whereas (64) and (65) imply for \(\beta \in [\frac{\alpha }{2}, 1]\),

$$\begin{aligned} \left| \mathbb{E }( H.u )^{\frac{\alpha }{2}} - \mathbb{E }( H.v )^{\frac{\alpha }{2}} \right|&\le \mathbb{E }\left| ( H.u )^{\frac{\alpha }{2}} - ( H.v )^{\frac{\alpha }{2}} \right| \\&\le c\, \mathbb{E }| H.(u - v) | ^ {\beta } \left( | H.u | \wedge | H.v | \right)^ { \frac{\alpha }{2} - \beta } \\&\le c\, 2^ {\frac{\beta }{2}} \, | u - v | ^ {\beta } \left( | i.u | \wedge | i.v | \right)^ { \frac{\alpha }{2} - \beta } \mathbb{E }| H | ^{\frac{\alpha }{2}}. \end{aligned}$$

This completes the proof with \(\Vert \gamma \Vert _\beta \le \left( \sqrt{2}^{\frac{\alpha }{2}} +c2^{\frac{\beta }{2}} \right) \mathbb{E }| H | ^{\frac{\alpha }{2}}\).

5.2.2 Proof of Lemma 5.3 (Fixed point equation for fractional moments)

Write \(u = u_1 + i u_2\) and \(-iz = h \in {\mathcal{K }}_1\). By definition, we have

$$\begin{aligned} \gamma _z ( u)&= \Gamma \left( 1 - \frac{\alpha }{2} \right) \mathbb{E }\left( \frac{u_1}{ h + \sum _k \xi _k H_k } + \frac{u_2}{ \bar{h} + \sum _k \xi _k \bar{H}_k } \right)^{\frac{\alpha }{2} } \\&= \Gamma \left( 1 - \frac{\alpha }{2} \right) \mathbb{E }\left( \frac{h.\check{u} + \sum _k \xi _k \check{H}_k.\check{u} }{ \left| h + \sum _k \xi _k H_k \right|^2 } \right)^{\frac{\alpha }{2} } . \end{aligned}$$

We use the formulas (33): for all \(w \in {\mathcal{K }}_1, \delta > 0\),

$$\begin{aligned} | w |^{-2 \delta } = ( \bar{w} )^{-\delta } ( w )^{-\delta }&= \Gamma ( \delta ) ^{-2} \int _{ [ 0,\infty ) ^2} dx dy \, x^{\delta - 1} y^{\delta - 1} e^{ -x \bar{w} - y w} \\&= \Gamma ( \delta ) ^{-2} 2^{ 1- \delta } \int _0 ^ {\frac{\pi }{2}} d\theta \sin ( 2 \theta ) ^{ \delta - 1} \int _0 ^ \infty dr \, r^{2\delta - 1} e^{ -r e^{ i \theta }.w} . \end{aligned}$$

and for \(0 < \delta < 1\),

$$\begin{aligned} w ^{\delta }&= \delta \Gamma ( 1- \delta )^{-1} \int _{0}^\infty dx \, x^{-\delta - 1} (1 - e^{ -x w} ). \end{aligned}$$

Formally, we find that \(\gamma _z ( u) \) is equal to

$$\begin{aligned}&c_\alpha \int _0 ^ {\frac{\pi }{2}} d\theta \sin ( 2 \theta ) ^{ \frac{\alpha }{2} - 1} \int _{0}^\infty dx \, x^{-\frac{\alpha }{2} - 1} \\&\quad \times \int _0 ^ \infty dr \, r^{\alpha - 1} \mathbb{E }\left( e^ { - r e^{ i \theta } . h - \sum _k \xi _k r e^{ i \theta } . H_k } - e^ { - ( r e^{ i \theta } + x \check{u} ) . h- \sum _k \xi _k ( r e^{ i \theta } + x \check{u} ). H_k } \right). \end{aligned}$$

If we perform the change of variable \(x = r y\) and apply the Lévy–Khintchine formula — for i.i.d. variables \(y_k \in {\mathcal{K }}_1\) independent of \(\{\xi _k\}_{k \ge 1}\) and \(r > 0\), \(\mathbb{E }\exp ( - r \sum _k \xi _k y_k ) = \exp ( - \Gamma (1 - \frac{\alpha }{2}) r^{\frac{\alpha }{2}} \, \mathbb{E }y_1^{\frac{\alpha }{2}} )\) — we obtain the stated formula. The exchange of expectation and integrals is then justified by invoking Lemmas 5.2 and 5.4. We next prove Lemma 5.4.

5.3 Properties of the map \(F\)

Proof of Lemma 5.4

We start by proving that for all \(u \in S^1_+\), for \(h\in {\mathcal{K }}_1, |h|\ge 1\),

$$\begin{aligned} | F_h ( g) (u ) | \le c |h|^{-\frac{\alpha }{2}} ( \Vert g \Vert _\beta + 1). \end{aligned}$$
(66)

By Lemma 9.1, for \(h \in {\mathcal{K }}_1\), the map on \({\mathcal{K }}_{\frac{\alpha }{2}}\) given by

$$\begin{aligned} x \mapsto \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h } e^{-r^{\frac{\alpha }{2} } x } , \end{aligned}$$

is bounded by \(c |h|^{-\alpha /2}\) and Lipschitz with constant \(c |h|^{-\alpha }\). Let \(T > 0 \) to be chosen later on. From (65) and (100), for \(\theta \in [0,\frac{\pi }{2}]\) and \(h\in {\mathcal{K }}_1, u\in S_1^+, g:{\mathcal{K }}_1\rightarrow {\mathcal{K }}_{\frac{\alpha }{2}}\), we have

$$\begin{aligned}&\int _{T}^\infty dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } }\left( e^{-r^{\frac{\alpha }{2} } g(e^{i\theta })} - e^{- y r h . u }e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y u )}\right) \right| \nonumber \\&\quad \le \int _{T}^\infty dy\, y^{-\frac{\alpha }{2}-1} \left( \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } } e^{-r^{\frac{\alpha }{2} } g(e^{i\theta })}\right|\right.\nonumber \\&\qquad \left.+ \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .(e^{ i \theta }+yu) } e^{-r^{\frac{\alpha }{2} } g(e^{i \theta } + y u )} \right|\right) \nonumber \\&\quad \le c \int _{T}^\infty dy\, \frac{ y^{-\frac{\alpha }{2}-1} }{| h | ^{ \frac{\alpha }{2}} | i. e^{i\theta } | ^{ \frac{\alpha }{2}} } + c \int _T^\infty dy\, \frac{ y^{-\frac{\alpha }{2}-1} }{| h | ^{ \frac{\alpha }{2}} | i.(e^{i\theta } + y u ) | ^{ \frac{\alpha }{2}} } \nonumber \\&\quad \le c^{\prime } | h | ^{ - \frac{\alpha }{2}} \left| \theta - \frac{\pi }{4} \right| ^{ - \frac{\alpha }{2}} T^{-\frac{\alpha }{2}}, \end{aligned}$$
(67)

where we have used the fact that

$$\begin{aligned} | i.e^{i\theta } | = | \cos \theta - \sin \theta | = \sqrt{2} | \sin ( \theta - \pi / 4) | \ge c | \theta - \pi / 4 |, \end{aligned}$$

and the control, for any real \(t , \delta , T>0\), any \(\gamma _1<\gamma _2<1, \gamma _1\ne 0\), (here \(\gamma _2=-\gamma _1=\alpha /2)\),

$$\begin{aligned} \int _{T}^\infty \frac{ y^{\gamma _1-1} }{ | y t - \delta | ^{ \gamma _2} } d y&= |\delta |^{\gamma _1-\gamma _2} |t|^{-\gamma _1} \int _{ \frac{ T | t |}{ | \delta | } } ^\infty \frac{ x^{\gamma _1-1} }{ | x \pm 1 | ^{ \gamma _2} } dx\nonumber \\&\le c (T^{\gamma _1}|\delta |^{-\gamma _2} 1_{\gamma _1<0}+ |\delta |^{\gamma _1-\gamma _2} |t|^{-\gamma _1} 1_{\gamma _1>0}), \end{aligned}$$
(68)

where the sign depends on whether \(t\) and \(\delta \) have the same sign.

For the integration over \(y\) in the interval \([0,T]\), we find similarly by (101) that

$$\begin{aligned} A&:= \int _0^{T} \, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } }\left( e^{-r^{\frac{\alpha }{2} } g(e^{i\theta })} - e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y u )}\right) \right|\\&\le c \int _0^{T} dy\, \frac{ y^{-\frac{\alpha }{2}-1} }{| h | ^{ \alpha } | i.e^{i\theta } | ^{ \alpha } } | g(e^{i\theta }) - g(e^{i \theta } + y u )|\,. \end{aligned}$$

Recalling that \(g(z )=|z|^{\frac{\alpha }{2}} g(\frac{z}{|z|})\) (by definition of \({\mathcal{C }}_\alpha \) and thus \({\mathcal{H }}_\beta \)) and using (64)–(65) we find that there exist finite constants \(C,C^{\prime }\) so that for all \(z,z^{\prime }\in \mathcal{K }_1^+\),

$$\begin{aligned}&|g(z)-g(z^{\prime })|\nonumber \\&\quad \le \Vert g\Vert _\beta \left( \left| |z|^{\frac{\alpha }{2}}-|z^{\prime }|^{\frac{\alpha }{2}}\right| +(|z|\wedge |z^{\prime }|)^{\frac{\alpha }{2}} \left| \frac{z}{|z|}-\frac{z^{\prime }}{|z^{\prime }|}\right|^\beta \left(|i.\frac{z}{|z|}|\wedge |i.\frac{z^{\prime }}{|z^{\prime }|}|\right)^{\frac{\alpha }{2}-\beta }\right)\nonumber \\&\quad \le C\Vert g\Vert _\beta \left( (|z|\wedge |z^{\prime }|)^{\frac{\alpha }{2}-\beta }+ (|i.z|\wedge |i.z^{\prime }|)^{\frac{\alpha }{2}-\beta }\right)|z-z^{\prime }|^\beta \nonumber \\&\quad \le C^{\prime } \Vert g\Vert _\beta (|i.z|\wedge |i.z^{\prime }|)^{\frac{\alpha }{2}-\beta } |z-z^{\prime }|^\beta . \end{aligned}$$
(69)

Using the fact that \(|e^{i \theta } + y u|\ge 1\) as \(u, e^{i\theta }\in S_1^+, y\ge 0\), we find with (65) that

$$\begin{aligned} A&\le c | h | ^{ - \alpha } |i. e^{i \theta } | ^{ -\alpha } \Vert g \Vert _\beta \left( \int _0^{T} dy\, \frac{ y^{\beta -\frac{\alpha }{2}-1} }{ | i.( e^{i \theta } + y u )| ^{ \beta - \frac{\alpha }{2}} } + \int _0^{T} dy\, \frac{ y^{\beta -\frac{\alpha }{2}-1} }{| i.e^{i \theta } | ^{ \beta - \frac{\alpha }{2}} } \right) \nonumber \\&\le c^{\prime } | h | ^{ - \alpha } | \theta - \frac{\pi }{4} | ^{- \beta -\frac{\alpha }{2} } \Vert g \Vert _\beta T^{\beta -\frac{\alpha }{2}}, \end{aligned}$$
(70)

where we have used that \(\beta \in ( \alpha /2,1]\) to obtain a convergent integral.

In the integration over \(y\) on the interval \([0,T]\), we have left aside the term

$$\begin{aligned} \int _0^{T} dy\, y^{-\frac{\alpha }{2}-1} \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } } e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y u )} \left( 1 - e^{- y r h. u } \right). \end{aligned}$$

This time we shall use the third statement (102) of Lemma 9.1 with \(\kappa = 1\). We choose \(T = |i.e^{i \theta } | /2\) so that, by (65), for all \(y \in [0,T]\),

$$\begin{aligned} | h . ( e^{i \theta } + y u ) | \ge | h | | i.e^{ i \theta } | - \sqrt{2} | h | T \ge ( 1 - \frac{ 1 }{ \sqrt{2}} ) | h | | i. e^{ i \theta } |. \end{aligned}$$

For this choice of \(T\), we get

$$\begin{aligned}&\int _0^{T} dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } } e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y u )} \left( 1 - e^{- y r h. u } \right)\right| \nonumber \\&\quad \le c \int _0^{T} dy\, \frac{ y^{-\frac{\alpha }{2} } }{ | h|^{ \frac{\alpha }{2}} | i. e^{ i \theta } |^{ \frac{\alpha }{2} + 1} } \le c^{\prime } | h|^{ - \frac{\alpha }{2}} \left| \theta - \frac{\pi }{4} \right|^{ - \frac{\alpha }{2} - 1} T^{1 - \frac{\alpha }{2}}. \end{aligned}$$
(71)

Finally, using our choice of \(T\), we deduce from (67), (70), (71) that

$$\begin{aligned} \int _{0}^\infty dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } }\left( e^{-r^{\frac{\alpha }{2} } g(e^{i\theta })} - e^{- y r h . u }e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y u )}\right) \right| \end{aligned}$$

is bounded by \(c | h | ^{ - \frac{\alpha }{2}} | \theta - \frac{\pi }{4} | ^{ -\alpha } ( 1+ \Vert g \Vert _\beta )\) for \(|h|\ge 1\). We obtain (66) since

$$\begin{aligned} \Big | \theta - \frac{\pi }{4} \Big | ^{ -\alpha }( \sin 2 \theta ) ^{\frac{\alpha }{2} - 1} \end{aligned}$$

is integrable over \([0, \pi /2]\): the singularity at \(\theta = \pi /4\) is integrable because \(\alpha < 1\) in the regime considered in this section, and the endpoint singularities because \(\alpha /2 - 1 > -1\).
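In passing, this integrability is easy to confirm numerically; the snippet below (with an illustrative value of \(\alpha \); the claim itself of course needs no numerics) evaluates the integral directly.

\begin{verbatim}
# Numerical confirmation that |theta - pi/4|^{-alpha} (sin 2 theta)^{alpha/2 - 1}
# is integrable over [0, pi/2]; alpha = 0.5 is an illustrative choice.
import numpy as np
from scipy.integrate import quad

alpha = 0.5
f = lambda th: abs(th - np.pi / 4) ** (-alpha) * np.sin(2 * th) ** (alpha / 2 - 1)
val, err = quad(f, 0, np.pi / 2, points=[np.pi / 4], limit=200)
print(val, err)  # finite value, small error estimate
\end{verbatim}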

The proof of the lemma will be complete if we prove that for all \(u, v \in S_1 ^ + \),

$$\begin{aligned} | F_h ( g) (u ) - F_h ( g) (v ) | \le c | u -v | ^\beta ( | i. u | \wedge | i. v | ) ^{ \frac{\alpha }{2} - \beta } ( 1 + \Vert g \Vert _\beta ) |h | ^{ -\frac{\alpha }{2}}. \end{aligned}$$
(72)

To do so, we fix \(\theta \in [0, \pi /2]\) and assume for example that \(| i. ( e^{i \theta } + y u ) | \le | i. ( e^{i \theta } + y v ) |\). We first use the Lipschitz bound (101), together with (69), and write

$$\begin{aligned}&\int _{0}^\infty dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } }\left( e^{- y r h . u }e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y u )} - e^{- y r h . u }e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y v )}\right) \right| \nonumber \\&\quad \le c \int _{0}^\infty dy\, \frac{ y^{-\frac{\alpha }{2}-1} }{| h | ^{ \alpha } | i . (e^{i\theta } + y u ) | ^ \alpha } | g(e^{i \theta } + y u ) - g(e^{i \theta } + y v )| \nonumber \\&\quad \le c | h | ^{ - \alpha } | u - v | ^\beta \Vert g \Vert _\beta \int _0^{\infty } dy\, \frac{ y^{\beta -\frac{\alpha }{2}-1} }{ | i. ( e^{i \theta } + y u ) | ^{ \beta +\frac{\alpha }{2}} } \nonumber \\&\quad \le c^{\prime } | h | ^{ - \alpha } | u - v | ^\beta \Vert g \Vert _\beta \left| \theta - \frac{\pi }{4} \right| ^{ - \alpha } ( | i. u | \wedge | i. v | ) ^{ \frac{\alpha }{2} - \beta }, \end{aligned}$$
(73)

where we have used (68) with \(\gamma _1=\beta -\alpha /2>0\) and \(\gamma _2=\beta + \alpha /2 >\gamma _1\).

Now, in our control of

$$\begin{aligned} \int _{0}^\infty dy\, y^{-\frac{\alpha }{2}-1} \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h .e^{ i \theta } }\left( e^{- y r h . u }e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y u )} - e^{- y r h . v }e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y v )}\right) \end{aligned}$$

we have so far left aside

$$\begin{aligned} \int _{0}^\infty dy\, y^{-\frac{\alpha }{2}-1} \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h . ( e^{ i \theta } + y u ) } e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y v )} \left( 1- e^{- y r h . (v-u) }\right) \end{aligned}$$

where \(| i. ( e^{i \theta } + y u ) | \le | i. ( e^{i \theta } + y v ) |\). By (102) applied with \(\kappa = \beta \),

$$\begin{aligned} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h . ( e^{ i \theta } + y u ) } e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y v )} \left( 1- e^{- y r h . (v-u) }\right) \right| \end{aligned}$$

is bounded up to multiplicative constant by

$$\begin{aligned} |h |^{ -\frac{\alpha }{2}-\beta } | i .(e^{i \theta } + y u ) | ^{ - \frac{\alpha }{2} - \beta } y^\beta | v - u |^ \beta . \end{aligned}$$

Using again (68), with \(0<\gamma _1=\beta -\alpha /2<\gamma _2=\beta +\alpha /2\) and \(\delta = - i . e^{i\theta }\), yields

$$\begin{aligned}&\int _{0}^\infty dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{- r h . ( e^{ i \theta } + y u ) } e^{-r^{\frac{\alpha }{2}}g(e^{i \theta } + y v )} \left( 1- e^{- y r h . (v-u) }\right) \right| \nonumber \\&\quad \le c | h | ^{ - \frac{\alpha }{2} -\beta } | u - v |^\beta | \delta | ^{ - \alpha } [ | i.u| ^{\frac{\alpha }{2} - \beta }+ | i.v| ^{\frac{\alpha }{2} - \beta } ]\,. \end{aligned}$$
(74)

We may conclude the proof of (72) by noticing that the bounds given by (73)–(74), multiplied by \(( \sin 2 \theta ) ^{\frac{\alpha }{2} - 1}\), are integrable over \([0, \pi /2]\), uniformly in \(u, v\). \(\square \)

We can now build on the proof of Lemma 5.4 to prove Lemmas 5.5 and 5.6.

Proof of Lemma 5.5

We shall now use the norm \(\Vert .\Vert _{\beta ,\varepsilon }\) for which we have the following analogue of (69): if \(0 \le \varepsilon \le \beta -\frac{\alpha }{2}\), for all \(z,z^{\prime }\in \mathcal{K }_1^+\),

$$\begin{aligned} |f(z)|&\le \Vert f\Vert _{\beta ,\varepsilon } \frac{|z|^{\frac{\alpha }{2}+\varepsilon }}{|i.z|^\varepsilon }\end{aligned}$$
(75)
$$\begin{aligned} |f(z)-f(z^{\prime })|&\le C \Vert f\Vert _{\beta ,\varepsilon }\frac{(|z|\vee |z^{\prime }|)^{\frac{\alpha }{2}+\varepsilon }}{(|i.z|\vee |i.z^{\prime }|)^{\beta +\varepsilon }} |z-z^{\prime }|^\beta . \end{aligned}$$
(76)

We start by showing that for any \(u \in S_1^+\),

$$\begin{aligned} |F_h ( g) (u) - F_h (f) (u) | \le c |h|^{-\alpha } ( 1 + \Vert f \Vert _\beta + \Vert g \Vert _{\beta } ) \Vert f - g \Vert _{\beta ,\varepsilon } |i.u|^{ -\varepsilon }. \end{aligned}$$
(77)

The proof is similar to the argument in Lemma 5.4. We notice that \(F_h ( g) (u ) - F_h (f) (u)\) is equal to

$$\begin{aligned} \int _0 ^{\frac{\pi }{2}} d \theta (\sin 2\theta )^{ \frac{\alpha }{2} - 1} \int _0 ^ \infty dy\, y^{-\frac{\alpha }{2}-1} \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} Z(r,y,\theta ), \end{aligned}$$

where, with \(g_u = g ( e^{i \theta } + y u ), f_u = f ( e^{i \theta } + y u ), h_u = h . (e^{i\theta } + y u ) \) (so that \(g_0 = g(e^{i \theta }), f_0 = f(e^{i \theta }), h_0 = h . e^{i \theta }\)),

$$\begin{aligned} Z&= e^{ - r h_0 } \left( e^{ - r^{\frac{\alpha }{2}} g_0 } - e^{ - r^{\frac{\alpha }{2}} f_0 } \right) - e^{ - r h_u } \left( e^{ - r^{\frac{\alpha }{2}} g_u } - e^{ - r^{\frac{\alpha }{2}} f_u } \right) \\&= \left( e^{ - r h_0 } - e^{ - r h_u } \right) \left( e^{ - r^{\frac{\alpha }{2}} g_u } - e^{ - r^{\frac{\alpha }{2}} f_u } \right)\\&+\,e^{ - r h_0 } \left(e^{ - r^{\frac{\alpha }{2}} g_0 } - e^{ - r^{\frac{\alpha }{2}} f_0 } - e^{ - r^{\frac{\alpha }{2}} g_u} + e^{ - r^{\frac{\alpha }{2}} f_u } \right). \end{aligned}$$

We set \(\delta = - i . e^{i\theta } \) and \( t = i . u\). On the integration interval \([T, \infty )\) of \(y\) we use the first form of \(Z\) and treat the two terms separately. As in Lemma 5.4, we use (101) and (75) to find

$$\begin{aligned}&\int _{T}^\infty dy\, y^{-\frac{\alpha }{2}-1} \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} \left|Z(r,y,\theta )\right|\nonumber \\&\quad \le c \int _{T}^{\infty } dy\, \frac{ y^{-\frac{\alpha }{2}-1} \Vert f- g \Vert _{\beta ,\varepsilon } }{| h | ^{\alpha } |\delta |^{\alpha + \varepsilon } } + c \int _{T}^\infty dy\, \frac{ y^{-\frac{\alpha }{2}-1} ( y^{\frac{\alpha }{2}+\varepsilon }\vee 1 ) \Vert f- g \Vert _{\beta ,\varepsilon } }{| h | ^\alpha | t y - \delta | ^{ \alpha + \varepsilon } } \nonumber \\&\quad \le c^{\prime } | h | ^{ - \alpha } \Vert f - g \Vert _{\beta ,\varepsilon } (T ^{ -\frac{\alpha }{2}} |\delta | ^{- \varepsilon -\alpha }+ |\delta | ^{-\alpha }|t|^{-\varepsilon })\,. \end{aligned}$$

The above computation requires the hypothesis \(\varepsilon >0\) to ensure that the integrals can be controlled by (68), which demands \(\gamma _1\ne 0\).

For the integration interval \([0, T)\) of \(y\) we use the second form of \(Z\) and choose \( T = | i . e^{i \theta } | / 2 = |\delta | /2\). Using (104) and (75) with \(\kappa =\alpha /2 + \varepsilon \), we find

$$\begin{aligned}&\int _{0}^T dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} \left( e^{ - r h_0 } - e^{ - r h_u } \right) \left( e^{ - r^{\frac{\alpha }{2}} g_u } - e^{ - r^{\frac{\alpha }{2}} f_u } \right) \right|\\&\quad \le c \int _{0}^{T} dy\, \frac{ y^{\kappa -\frac{\alpha }{2}-1} \Vert f- g \Vert _{\beta ,\varepsilon } }{| h | ^{\alpha + \kappa } |\delta |^{\alpha + \kappa + \varepsilon } } \\&\quad \le c^{\prime } | h | ^{ -\frac{3\alpha }{2} - \varepsilon } |\delta | ^{ - \frac{3\alpha }{2}-\varepsilon } \Vert f - g \Vert _{\beta ,\varepsilon } . \end{aligned}$$

Similarly, by (103), (75), (76) and (69), our choice of \(T\) gives

$$\begin{aligned}&\int _{0}^T dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} e^{ - r h_0 } \left(e^{ - r^{\frac{\alpha }{2}} g_0 } - e^{ - r^{\frac{\alpha }{2}} f_0 } - e^{ - r^{\frac{\alpha }{2}} g_u} + e^{ - r^{\frac{\alpha }{2}} f_u } \right) \right| \nonumber \\&\quad \le c \int _{0}^{T} dy\, \frac{ y^{\beta -\frac{\alpha }{2}-1} \Vert f- g \Vert _{\beta ,\varepsilon } |\delta |^{-\varepsilon - \beta } }{| h | ^{\alpha } |\delta |^{\alpha } }\\&\qquad +\! c \int _{0}^{T} dy\, \frac{ y^{\beta -\frac{\alpha }{2}-1} ( \Vert f \Vert _\beta + \Vert g \Vert _\beta ) |\delta |^{\frac{\alpha }{2} - \beta } \Vert f - g \Vert _{\beta ,\varepsilon } |\delta |^{-\varepsilon }}{| h | ^{\frac{3\alpha }{2} } |\delta |^{\frac{3\alpha }{2} }} \nonumber \\&\quad \le c^{\prime } | h | ^{ - \alpha } |\delta | ^{-\frac{3\alpha }{2} - \varepsilon } \Vert f - g \Vert _{\beta ,\varepsilon } + c^{\prime } | h | ^{ - \frac{3\alpha }{2} } |\delta | ^{-\frac{3\alpha }{2} - \varepsilon } ( \Vert f \Vert _\beta + \Vert g \Vert _\beta ) \Vert f - g \Vert _{\beta ,\varepsilon }. \end{aligned}$$

Since \(3 \alpha / 2 + \varepsilon < 1\), we may integrate our bounds over \(\theta \) and obtain (77).

The proof of the lemma will be complete if we show that for any \(u \ne v \in S_1^+\), with \(|i.u| \le |i.v|\),

$$\begin{aligned}&| F_h ( g) (u) - F_h (f) (u) - F_h ( g) (v) + F_h (f) (v) |\nonumber \\&\quad \le c |h|^{-\alpha } ( 1 + \Vert f \Vert _\beta + \Vert g \Vert _{\beta } ) \Vert f - g \Vert _{\beta ,\varepsilon } | i.u | ^{- \beta - \varepsilon } | u - v | ^\beta . \end{aligned}$$
(78)

The proof is simpler than in the previous case, as we do not need to consider separately the cases where \(y\) is small or large. We set \(\delta = - i . e^{i\theta } \), \( t = i . u\) and \(t^{\prime } = i.v\), so that \(|t| \le |t^{\prime }|\). Using (104) with \(\kappa =\beta \) and (75), we find

$$\begin{aligned}&\int _{0}^\infty dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} \left( e^{ - r h_v } - e^{ - r h_u } \right) \left( e^{ - r^{\frac{\alpha }{2}} g_u } - e^{ - r^{\frac{\alpha }{2}} f_u } \right) \right| \\&\quad \le c^{\prime } | h| ^{-\alpha -\beta } | u - v |^ \beta \Vert f- g \Vert _{\beta ,\varepsilon } (| \delta |^ { - \frac{3 \alpha }{2} - \varepsilon } |t|^ { \frac{\alpha }{2} - \beta } + |\delta |^ { - \alpha } |t| ^ { - \beta - \varepsilon } ). \end{aligned}$$

Moreover, using again (103), (75), (76) and (69) we find

$$\begin{aligned}&\int _{0}^\infty \!dy\, y^{-\frac{\alpha }{2}-1} \left| \!\int _0^\infty \! dr \, r^{\frac{\alpha }{2}-1} e^{ - r h_v } \left(e^{ - r^{\frac{\alpha }{2}} g_v }\!-\!e^{ - r^{\frac{\alpha }{2}} f_v }\!-\!e^{ - r^{\frac{\alpha }{2}} g_u}\!+\!e^{ - r^{\frac{\alpha }{2}} f_u } \right) \right| \nonumber \\&\quad \le c \int _{0}^{\infty } \! dy\, \frac{ y^{\beta -\frac{\alpha }{2}-1} | u-v |^ \beta \Vert f- g \Vert _{\beta ,\varepsilon } ( 1 \vee y^{\frac{\alpha }{2} + \varepsilon } ) |t y-\delta |^{-\beta - \varepsilon } }{| h | ^{\alpha } |t y- \delta |^{\alpha } } \nonumber \\&\quad \qquad +\, c \int _{0}^{\infty } \! dy\, \frac{ y^{\beta -\frac{\alpha }{2}-1} | u-v |^ \beta |t y-\delta |^{\frac{ \alpha }{2} - \beta } ( \Vert f \Vert _\beta +\Vert g \Vert _\beta ) \Vert f- g \Vert _{\beta ,\varepsilon } ( 1 \vee y^{\frac{\alpha }{2} + \varepsilon } ) |t y-\delta |^{- \varepsilon } }{| h | ^{\frac{3\alpha }{2} } |t y- \delta |^{\frac{3 \alpha }{2}} } \\&\quad \le c^{\prime } | h| ^{-\alpha } | u-v |^ \beta (1+\Vert f \Vert _\beta + \Vert g \Vert _\beta ) \Vert f- g \Vert _{\beta ,\varepsilon } ( | \delta |^ { - \frac{3 \alpha }{2} - \varepsilon } |t|^ { \frac{\alpha }{2} - \beta } +| \delta | ^ { - \alpha } | t | ^ { - \beta - \varepsilon } ). \end{aligned}$$

Now, by assumption, \( 3 \alpha /2 + \varepsilon < 1\) and we may integrate our bounds over \(\theta \) and obtain (78). \(\square \)

Proof of Lemma 5.6

The proof is very close to the previous one, and we only outline it. We assume for example that \(| h | \le |k|\). By Lemma 5.4, we can also assume that \( | h - k | \le |h|\), and in particular \(|k| \le 2 |h|\). We first prove that for any \(u \in S_1^+\),

$$\begin{aligned} | F_h ( g) (u) - F_k (g) (u) |\le c |h|^{-\frac{\alpha }{2} - \kappa } | h - k |^{\kappa } (1 + \Vert g \Vert _{\beta } ). \end{aligned}$$
(79)

The expression \(F_h ( g) (u ) - F_k (g) (u)\) is equal to

$$\begin{aligned} \int _0 ^{\frac{\pi }{2}} d \theta (\sin 2\theta )^{ \frac{\alpha }{2} - 1} \int _0 ^ \infty dy\, y^{-\frac{\alpha }{2}-1} \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} Z(r,y,\theta ), \end{aligned}$$

where, with \(g_u = g ( e^{i \theta } + y u ), h_u = h . (e^{i\theta } + y u ) , k_u = k . (e^{i\theta } + y u ) \),

$$\begin{aligned} Z&= e^{ - r ^{\frac{\alpha }{2}} g_0 } \left( e^{ - r h_0 } - e^{ - r k_0 } \right) - e^{ - r^{\frac{\alpha }{2}} g_u } \left( e^{ - r h_u } - e^{ - r k_u } \right) \\&= \left( e^{ - r ^{\frac{\alpha }{2}} g_0 }\!-\!e^{ - r ^{\frac{\alpha }{2}} g_u } \right) \left( e^{ - r h_0 }\!-\!e^{ -r k_0 } \right)\!+\!e^{ - r ^{\frac{\alpha }{2}} g_u } \left(e^{ - r h_0 }\!-\! e^{ - r k_0 }\!-\!e^{ - r h_u}\!+\!e^{ - r k_u } \right)\!. \end{aligned}$$

Let \(T > 0\). We set \(\delta = - i . e^{i\theta } \) and \( t = i . u\). On the integration interval \([T,\infty )\) for \(y\), we use the first form of \(Z\). Then, from (102) in Lemma 9.1, we find

$$\begin{aligned}&\int _{T}^\infty dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} \left( e^{ - r ^{\frac{\alpha }{2}} g_0 } \left( e^{ - r h_0 } - e^{ - r k_0 } \right) - e^{ - r^{\frac{\alpha }{2}} g_u } \left( e^{ - r h_u } - e^{ - r k_u } \right) \right) \right| \nonumber \\&\le c \int _{T}^{\infty } dy\, \frac{ y^{ -\frac{\alpha }{2}-1} | h- k | ^\kappa }{ |h|^{ \frac{\alpha }{2} + \kappa } |\delta |^{ \frac{\alpha }{2} + \kappa } } +c \int _{T}^{\infty } dy\, \frac{ y^{-\frac{\alpha }{2}-1} ( 1 \vee y^\kappa ) | h- k |^\kappa }{ |h|^{ \frac{\alpha }{2} + \kappa } |t y - \delta |^{ \frac{\alpha }{2} + \kappa } } \\&\le c^{\prime } | h | ^{ -\frac{\alpha }{2} - \kappa } | h- k |^\kappa ( |\delta | ^{ - \frac{\alpha }{2}-\kappa } T^{ - \frac{\alpha }{2}} + |\delta | ^{ - \frac{\alpha }{2}-\kappa } T^{\kappa - \frac{\alpha }{2}} ), \end{aligned}$$

where we have used that \(\kappa < \alpha /2\). On the integration interval \([0,T)\) for \(y\), we use the second form of \(Z\). We choose \(T = |i.e^{i\theta }|/2 = | \delta | /2\). For the first term, by (104) in Lemma 9.1,

$$\begin{aligned}&\int _{0}^T dy\, y^{-\frac{\alpha }{2}-1} \left| \int _0^\infty dr \, r^{\frac{\alpha }{2}-1} \left( e^{ - r ^{\frac{\alpha }{2}} g_0 } - e^{ - r ^{\frac{\alpha }{2}} g_u } \right) \left( e^{ - r h_0 } - e^{ -r k_0 } \right) \right| \nonumber \\&\le c \int _{0}^{T} dy\, \frac{ y^{\beta -\frac{\alpha }{2}-1} | h- k | ^\kappa |\delta |^{ \frac{\alpha }{2} - \beta } \Vert g \Vert _\beta }{ |h|^{ \alpha + \kappa } |\delta |^{ \alpha + \kappa } } \\&\le c^{\prime } | h | ^{ - \alpha - \kappa } |\delta | ^{ - \alpha - \kappa } | h- k |^\kappa \Vert g \Vert _\beta . \end{aligned}$$

The second term is easily bounded by (105) with \(\kappa _1=\kappa \) and \(\kappa _2=0\). Integrating our bounds over \(\theta \), we obtain (79). The proof that for all \(u \ne v \in S_1^+\) with \(| i. u | \le | i. v |\),

$$\begin{aligned}&| F_h ( g) (u ) - F_k (g) (u) - F_h (g) (v) + F_k ( g) (v ) | \nonumber \\&\quad \le \; c | u -v | ^\beta | i. u | ^{ \frac{\alpha }{2} - \beta - \kappa } ( 1 + \Vert g \Vert _\beta ) |h| ^{ -\frac{\alpha }{2}} | h - k |^{\kappa } \end{aligned}$$
(80)

is easier, as it does not require considering small and large \(y\) separately; we leave it to the reader. \(\square \)

5.3.1 Computation of the characteristic function

With the notation of the proof of Theorem 5.1, we define for \(z \in \bar{\mathbb{C }}_+, u \in {\mathcal{K }}_1^+\),

$$\begin{aligned} \chi _ z ( u ) = \mathbb{E }\exp ( - u . H(z)). \end{aligned}$$

We note that the distribution of \(R_0(z)\) is characterized by the value of \(\chi _z\) on any open neighborhood in \({\mathcal{K }}_1^+\). The next lemma asserts that the distribution of \(R_0(z)\) is also characterized by the value of \(\gamma _z\) on \({\mathcal{K }}_1^+\) (that is, on \(S_1^+\) by homogeneity).

Lemma 5.7

(From fractional moment to characteristic function) Let \(z \in \bar{\mathbb{C }}_+\), \(0 < \alpha < 2\) and let \(R_0(z)\) be a solution of (62) such that \(\mathbb{E }| R_0(z) |^ {\frac{\alpha }{2} } < + \infty \). For all \(u = u_1+ i u_2 \in {\mathcal{K }}_1^+ \),

$$\begin{aligned} \chi _ z ( u )&= \int _{\mathbb{R }_+^2} J_1 ( s ) J_1 (t) e^{ - \frac{(-iz)s^2 }{4 u_1} - \frac{(i\bar{z})t^2}{4u_2} } e^{ - \gamma _ z \left( \frac{s^2}{4 u_1}+ i \frac{t^2}{4 u_2} \right) } ds\, dt \\&\quad - \int _{0}^\infty \!J_1 ( s ) e^{ - \frac{(-iz)s^2 }{4 u_1} } e^{ - \gamma _z \left( \frac{s^2}{4 u_1}\right) }ds - \int _{0}^\infty \! J_1 ( t ) e^{ - \frac{(i\bar{z})t^2 }{4 u_2} } e^{ - \gamma _z \left( i \frac{t^2}{4 u_2}\right) }dt + 1, \end{aligned}$$

where \(J_1 (x) = \frac{x}{2} \sum _{k \ge 0} \frac{ (-x^2 / 4) ^k }{k ! (k+1)!}\) is the Bessel function of the first kind of order one.

Proof

We use the formulas for \(w \in {\mathcal{K }}_1\),

$$\begin{aligned} 1 - e^{-w^{-1}} = \int _0 ^\infty J_1 ( s ) e^{ - \frac{ w s ^2}{4} } ds \end{aligned}$$

(see [1]) and for \(z,z^{\prime } \in \mathbb{C }\),

$$\begin{aligned} e^{- z - z^{\prime }} = ( 1 - e^{-z}) ( 1 - e^{-z^{\prime }}) - ( 1- e^{-z}) - ( 1- e^{-z^{\prime }}) +1 . \end{aligned}$$

Then, it follows from (62) that

$$\begin{aligned} e^{ - u_1 H - u_2 \bar{H} }&\stackrel{d}{=} \exp \left( - \frac{ u_1 }{ -i z + \sum _{k \ge 1} \xi _k H_k } - \frac{ u_2 }{ i \bar{z} + \sum _{k \ge 1} \xi _k \bar{H}_k } \right) \\&\stackrel{d}{=} \int _{\mathbb{R }_+^2} J_1 ( s ) J_1 (t) e^{ - \frac{(-iz)s^2 }{4 u_1} - \frac{(i\bar{z})t^2}{4u_2} } e^{ - \sum _{k} \xi _k \left( \frac{H_k s^2 }{4 u_1} + \frac{\bar{H}_k t^2}{4u_2} \right) } ds\, dt \\&\quad - \int _{0}^\infty J_1 ( s ) e^{ - \frac{(-iz)s^2 }{4 u_1} } e^{ - \sum _{k} \xi _k \frac{H_k s^2 }{4 u_1} }ds - \int _{0}^\infty J_1 ( t ) e^{ - \frac{(i\bar{z})t^2 }{4 u_2} } e^{ - \sum _{k} \xi _k \frac{\bar{H}_k t^2 }{4 u_2} }dt + 1. \end{aligned}$$

Since \(J_1\) is bounded on \(\mathbb{R }_+\), we may safely take expectations. The conclusion follows from the Lévy–Khintchine formula. \(\square \)
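For the reader's convenience, the first formula above can be recovered by expanding \(J_1\) in its defining series and integrating term by term: since \(\int _0^\infty s^{2k+1} e^{- \frac{w s^2}{4}} ds = 2^{2k+1} k! \, w^{-k-1}\) for \(w \in {\mathcal{K }}_1\),

$$\begin{aligned} \int _0 ^\infty J_1 ( s ) e^{ - \frac{ w s ^2}{4} } ds = \sum _{k \ge 0} \frac{ (-1/4)^k }{2 \, k ! (k+1)!} \int _0^\infty s^{2k+1} e^{- \frac{w s^2}{4} } ds = \sum _{k \ge 0} \frac{ (-1)^k }{ (k+1)! \, w^{k+1} } = 1 - e^{-w^{-1}}. \end{aligned}$$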

5.4 Proof of Theorem 1.3

We start with a simple lemma which relates \(W_I\) to the diagonal of the resolvent.

Lemma 5.8

(From eigenvectors to diagonal of resolvent) Let \( \alpha > 0 \) and \(I = [E - \eta , E+\eta ]\) be an interval. Setting \(z = E + i \eta \in \mathbb{C }_+\), we have

$$\begin{aligned} \frac{1}{n} \sum _{k=1}^n W_I (k)^{\frac{\alpha }{2}} \le \left( \frac{ 2 n \eta }{ | \Lambda _I | } \right)^{\frac{\alpha }{2}} \frac{1}{n} \sum _{k=1} ^n \left( \mathrm{Im}R ( z)_{kk} \right)^{\frac{\alpha }{2}}. \end{aligned}$$

Proof

From the spectral theorem, we have

$$\begin{aligned} \mathrm{Im}R ( z)_{kk} \ge \sum _{ v_i \in \Lambda _I} \frac{ \eta \langle v_i , e_k \rangle ^2 }{( \lambda _i ( A) - E )^2 + \eta ^2 } \ge \frac{1}{2\eta } \sum _{ v \in \Lambda _I} \langle v , e_k \rangle ^2 = \frac{ | \Lambda _I| }{2 n \eta } W_I ( k). \end{aligned}$$

It remains to sum the above inequality over \(k\). \(\square \)
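The inequality of Lemma 5.8 can be tested directly on samples. The sketch below uses an arbitrary toy matrix and parameters of our choosing, with \(W_I(k) = \frac{n}{|\Lambda _I|} \sum _{v \in \Lambda _I} \langle v , e_k \rangle ^2\) as read off from the last display, and checks the inequality entrywise.

\begin{verbatim}
# Entrywise check of Lemma 5.8 on a sampled symmetric matrix (toy parameters).
import numpy as np

rng = np.random.default_rng(0)
n, E, eta = 200, 0.5, 0.05
A = rng.standard_normal((n, n))
A = (A + A.T) / np.sqrt(2 * n)            # any real symmetric matrix will do

lam, V = np.linalg.eigh(A)                # columns of V are the eigenvectors
R = np.linalg.inv(A - (E + 1j * eta) * np.eye(n))
in_I = np.abs(lam - E) <= eta             # eigenvalues in I = [E - eta, E + eta]
card = in_I.sum()                         # |Lambda_I|

if card > 0:
    W_I = (n / card) * (V[:, in_I] ** 2).sum(axis=1)
    print(np.all(np.diag(R).imag >= (card / (2 * n * eta)) * W_I))  # True
\end{verbatim}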

At this stage, it should be clear that the proof of Theorem 1.3 will rely on Theorem 5.1 and on an extension of the previous fixed point argument to the finite \(n\) system. The bottleneck in the proof is the lower bound on \(| \Lambda _I |/(n\eta )\), which in particular requires, according to Lemma 3.7, that \(\mu _A(I)\le L |I|\). This last control is difficult when \(\alpha <1\), since in this case \(n^{-1}\sum _{k=1}^n( \mathrm{Im}R(z)_{kk})^{\alpha /2}\) goes to zero like \(\eta ^{\alpha /2}\), so that arguments such as those used in the proof of Theorem 3.5 do not apply. This is responsible for the restrictive condition \(\eta \ge n ^{-\rho + o(1)}\) in the statement of Theorem 1.3. For completeness, we will also prove in this subsection a vanishing upper bound on

$$\begin{aligned} \frac{1}{n} \sum _{k=1} ^n \left( \mathrm{Im}R ( z)_{kk} \right)^{\frac{\alpha }{2}}, \end{aligned}$$

for \(\eta \) of order \(n^{ - 1/6}\) and all \( \alpha > 0\). More precisely, we have the following.

Theorem 5.9

(Vanishing fractional moment for the resolvent) Let \( 0 < \alpha < 2/3 \), \(0 < \varepsilon < \frac{\alpha ^2}{2 (4 - \alpha ) }\), \(\rho ^{\prime } = \frac{2 + \alpha }{4 ( 3 +\alpha ) }\) and \(c_0 = \frac{ (2 + \alpha )^2}{ 16 ( 3 + \alpha ) }\). There exist \(c_1 = c_1 ( \alpha )\) and \(c = c ( \alpha , \varepsilon ) > 0\) such that if \(n \ge 1\), \(z = E + i \eta \in \mathbb{C }_+\), \(|z| \ge c\) and \(n^{-\rho ^{\prime }} ( \log n )^ {c_0} \le \eta \le 1 \), then

$$\begin{aligned} \mathbb{E }\frac{1}{n} \sum _{k=1} ^n \left( \mathrm{Im}R ( z)_{kk} \right)^{\frac{\alpha }{2}} \le c \eta ^ { -\frac{\alpha ( 3 + \alpha ) }{2 + \alpha } } n^{-\frac{\alpha }{2} + c_1 \varepsilon } + c \eta ^{\frac{\alpha }{2} - \varepsilon }. \end{aligned}$$

Moreover, if \(n^{-\rho } ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1 \),

$$\begin{aligned} \mathbb{E }\frac{1}{n} \sum _{k=1} ^n \left( \mathrm{Im}R ( z)_{kk} \right)^{\frac{\alpha }{2}} \le c \eta ^{\frac{\alpha }{2} - \varepsilon }. \end{aligned}$$

Theorem 1.3 is a consequence of the second statement of Theorem 5.9 together with Theorem 3.5, Lemma 5.8 and Lemma 8.4, which asserts that

$$\begin{aligned} \mathbb{P }\left( \frac{1}{n} \sum _{k=1} ^n \left( \mathrm{Im}R ( z)_{kk} \right)^{\frac{\alpha }{2}} \ge \mathbb{E }\frac{1}{n} \sum _{k=1} ^n \left( \mathrm{Im}R ( z)_{kk} \right)^{\frac{\alpha }{2}} + t \eta ^{\frac{\alpha }{2} - \varepsilon } \right) \le \exp ( - n \eta ^{ 4 - \frac{4\varepsilon }{\alpha } } t^ { \frac{1}{\alpha }} ) . \end{aligned}$$
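Before entering the proofs, here is a minimal Monte Carlo sketch of the quantity controlled by Theorem 5.9; the Pareto-type entries, the normalization \(a_n = n^{1/\alpha }\) and all numerical values are illustrative choices of ours, not part of the statement.

\begin{verbatim}
# Illustrative simulation of (1/n) sum_k (Im R(z)_{kk})^{alpha/2}.
import numpy as np

rng = np.random.default_rng(1)
alpha, n, E = 0.5, 500, 2.0
a_n = n ** (1.0 / alpha)

# symmetric matrix with signed heavy-tailed entries, P(|X_{ij}| > t) ~ t^{-alpha}
X = rng.pareto(alpha, (n, n)) * rng.choice([-1.0, 1.0], (n, n))
X = np.triu(X) + np.triu(X, 1).T

for eta in [1.0, 0.5, 0.1]:
    R = np.linalg.inv(X / a_n - (E + 1j * eta) * np.eye(n))
    print(eta, np.mean(np.diag(R).imag ** (alpha / 2)))
\end{verbatim}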

We consider for \(u\in {\mathcal{K }}_1^+\),

$$\begin{aligned} \gamma ^n_z(u):=\Gamma (1-\frac{\alpha }{2}) \mathbb{E }\left[ \frac{1}{n}\sum _{k=1}^n \left( -iR(z)_{kk}.u\right)^{\frac{\alpha }{2}}\right] =\Gamma (1-\frac{\alpha }{2}) \mathbb{E }\left[ \left( -iR(z)_{11}.u\right)^{\frac{\alpha }{2}}\right] \,. \end{aligned}$$

Lemma 5.10

(Bound on fractional moments of the resolvent) Let \( 0 < \alpha / 2 \le \beta < 2 \alpha / ( 4 - \alpha )\) and \(\rho ^{\prime }, c_0\) as in Theorem 5.9. There exists \(c> 0\) such that if \(n \ge 1, z = E + i \eta \in \mathbb{C }_+, |z| \ge 1, \eta \ge n^{-\rho ^{\prime }} ( \log n )^ {c_0} \), then

$$\begin{aligned} \Vert \gamma ^n_{z}\Vert _{\beta } \le c. \end{aligned}$$
(81)

The proof of Theorem 5.9 also provides the local convergence of the fractional moments \(\gamma _z^n\) for the norm \(\Vert .\Vert _{\frac{\alpha }{2},\varepsilon }\). Indeed, it is again based on an approximate fixed point argument for these quantities.

Lemma 5.11

(Approximate fixed point for fractional moments of the resolvent) Let \( 0 < \alpha < 2/3 \), let \(\rho ^{\prime }, c_0\) be as in Theorem 5.9 and let \(G_z\) be as in (63). For all \(0 < \varepsilon < \frac{\alpha ^2}{2 (4 - \alpha ) }\), there exists \(c = c ( \alpha , \varepsilon ) > 0\) such that if \(n \ge 1\), \(z = E + i \eta \in \mathbb{C }_+\), \(|z| \ge c\) and \(n^{-\rho ^{\prime }} ( \log n )^ {c_0} \le \eta \le 1\), then

$$\begin{aligned} \Vert \gamma _{z}^n- G_z ( \gamma _{z}^n)\Vert _{\frac{\alpha }{2}+\varepsilon , \varepsilon } \le c \eta ^ { -\frac{\alpha ( 3 + \alpha ) }{2 + \alpha } } n^{-\frac{\alpha }{4} }. \end{aligned}$$

Moreover, if \(n^{-\rho } ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1 \),

$$\begin{aligned} \Vert \gamma _{z}^n- G_z (\gamma _{z}^n)\Vert _{\frac{\alpha }{2}+\varepsilon , \varepsilon } \le \eta ^ { -\frac{5 \alpha }{4} } n ^{ - \frac{\alpha }{4}}. \end{aligned}$$

We now check that the above two lemmas imply Theorem 5.9. Note in the proof below that they also imply the convergence of \(\gamma _z^n\) to \(\gamma _z\) for \(\eta \ge n^{-\rho ^{\prime }}(\log n)^{c_0}\).

Proof of Theorem 5.9

We prove the first statement. Let \(0 < \varepsilon < \frac{\alpha ^2}{2 (4 - \alpha ) }\) and \(\delta = \eta ^ { -\frac{\alpha ( 3 + \alpha ) }{2 + \alpha } } n^{-\frac{\alpha }{4} }\). Now since \(\Vert \gamma _{z}^n\Vert _{\frac{\alpha }{2} + \varepsilon } \) and \(\Vert \gamma _{z} \Vert _{\frac{\alpha }{2} + \varepsilon }\) are uniformly bounded, we have by Lemma 5.5 and Lemma 5.11,

$$\begin{aligned} \Vert \gamma _{z}^n -\gamma _{z}\Vert _{\frac{\alpha }{2}+\varepsilon ,\varepsilon }\le c|z|^{-\alpha }\Vert \gamma _{z}^n -\gamma _{z}\Vert _{\frac{\alpha }{2}+\varepsilon ,\varepsilon } +c \delta \end{aligned}$$
(82)

as long as \(|z| \ge c\) with imaginary part \( n^{-\rho ^{\prime }} ( \log n )^ {c_0} \le \eta \le 1 \). Hence, if \(|z|\) is large enough that \(c|z|^{-\alpha }\) is less than \(1/2\), it follows that

$$\begin{aligned} \Vert \gamma _{z}^n -\gamma _{z}\Vert _{\frac{\alpha }{2}+\varepsilon ,\varepsilon }\le 2 c \delta . \end{aligned}$$
(83)

Now, we may argue as in the proof of Theorem 5.1. By Theorem 5.1, for \(|z|\) large enough, \(| \gamma _z ( e^{i \frac{\pi }{4}} ) | \le c^{\prime } \eta ^{\frac{\alpha }{2} - \varepsilon } \) for some constant \(c^{\prime } > 0\). Then, for any \(u \in S_1^+\), using Lemma 5.10, Lemma 5.2 and (83),

$$\begin{aligned}&\Gamma (1-\frac{\alpha }{2}) \mathbb{E }\mathrm{Im}R(z)_{11} ^{\frac{\alpha }{2}} = | \gamma ^n_z ( e^{i \frac{\pi }{4}} ) | \\&\le | \gamma ^n_z ( e^{i \frac{\pi }{4}} ) - \gamma ^n_z (u ) | + | \gamma ^n_z ( u ) - \gamma _z ( u ) | + | \gamma _z ( e^{i \frac{\pi }{4}} ) - \gamma _z (u ) | + |\gamma _z ( e^{i \frac{\pi }{4}} ) | \\&\le c^{\prime \prime } | u - e^{i \frac{\pi }{4}} |^{\frac{\alpha }{2}} + \Vert \gamma _z - \gamma _z^n \Vert _{\alpha /2+\varepsilon , \varepsilon } | i . u | ^{-\varepsilon } + |\gamma _z ( e^{i \frac{\pi }{4}} ) | \\&\le c^{\prime \prime } | u - e^{i \frac{\pi }{4}} |^{\frac{\alpha }{2}} + c^{\prime \prime }\delta | u - e^{i \frac{\pi }{4}} |^{- \varepsilon } + c^{\prime } \eta ^{\frac{\alpha }{2} - \varepsilon }. \end{aligned}$$

Choosing \(u\) such that \( | u - e^{i \frac{\pi }{4}} |\) is of order \(\delta ^{\frac{2 }{\alpha + 2 \varepsilon } } \), we deduce that for all \(z = E + i \eta \) with \( |E| \ge E_{\alpha , \varepsilon }\), \(\mathbb{E }\,( \mathrm{Im}R(z)_{11}) ^{\frac{\alpha }{2}} \) is bounded, up to a multiplicative constant, by

$$\begin{aligned} \eta ^ { -\frac{\alpha ( 3 + \alpha ) }{2 + \alpha } } n^{-\frac{\alpha }{4} + O ( \varepsilon ) } + \eta ^{\frac{\alpha }{2} - \varepsilon }. \end{aligned}$$

Since \(\varepsilon > 0\) can be taken arbitrarily small, this concludes the proof in the case \( n^{-\rho ^{\prime }} ( \log n )^ {c_0} \le \eta \le 1 \). The proof for \(n^{-\rho } ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1 \) is identical: using the second part of Lemma 5.11 in (82), we find

$$\begin{aligned} \mathbb{E }\frac{1}{n} \sum _{k=1} ^n \left( \mathrm{Im}R ( z)_{kk} \right)^{\frac{\alpha }{2}} \le c \eta ^ { -\frac{5 \alpha }{4} } n ^{ - \frac{\alpha }{4} + O ( \varepsilon )} + c \eta ^{\frac{\alpha }{2} - \varepsilon }. \end{aligned}$$

It remains to notice that for \(\varepsilon \) small enough, in our range of \(\eta \), the second term dominates the first term. \(\square \)

Proof of Lemma 5.10

As in the proof of Lemma 5.2, it is sufficient to check that for some constant \(c = c( \alpha , \beta )\),

$$\begin{aligned} \mathbb{E }| R_{11} ( E + i \eta ) | ^ \beta \le c |E|^{-\beta }. \end{aligned}$$
(84)

As usual, from (10), we have

$$\begin{aligned} | R(z)_{11} | = \left| \left( z - a_n^{-1} X_{11} + a_n ^{-2} \langle X_1 , R^{(1)}X_1 \rangle \right) \right|^{-1}. \end{aligned}$$

We first get rid of the off-diagonal terms in the scalar product \(\langle X_1 , R^{(1)}X_1 \rangle \). We do this as in the proof of Lemma 3.2. Using the definition (28) and (64) with \(\alpha / 2 = \beta \), we find

$$\begin{aligned} \left| \mathbb{E }| R ( E + i \eta )_{11} | ^ \beta - \mathbb{E }\left| z + a_n ^{-2} \sum _{k= 2} ^n R^{(1)}_{kk} X_{1k}^2 \right|^{-\beta } \right| \le c \eta ^{-2 \beta }\mathbb{E }| T(z)| ^ \beta \end{aligned}$$

In particular, since \( | z |^{-\beta } \le | \mathrm{Re}(z) |^{-\beta }\), we find

$$\begin{aligned} \mathbb{E }| R( E + i \eta )_{11} | ^ \beta \le \mathbb{E }\left| E + a_n ^{-2} \sum _{k= 2} ^n \mathrm{Re}( R^{(1)}_{kk} ) X_{1k}^2 \right|^{-\beta } + c \eta ^{-2 \beta }\mathbb{E }| T(z)| ^ \beta \end{aligned}$$

Now, we decompose the sum into a positive and a negative part

$$\begin{aligned} \sum _{k= 2} ^n \mathrm{Re}( R^{(1)}_{kk} ) X_{1k}^2 = \sum _{k= 2} ^n \left( \mathrm{Re}( R^{(1)}_{kk} ) \right)_+ X_{1k}^2 - \sum _{k= 2} ^n \left( \mathrm{Re}( R^{(1)}_{kk} ) \right)_- X_{1k}^2. \end{aligned}$$

Note that, conditioned on \(R^{(1)}\), the two sums are independent. We invoke Lemma 7.1:

$$\begin{aligned} a_n ^{-2} \sum _{k= 2} ^n \mathrm{Re}( R^{(1)}_{kk} ) X_{1k}^2 \stackrel{d}{=} a S - b S^{\prime }, \end{aligned}$$

where, conditioned on \(R^{(1)}\), \(a,b,S,S^{\prime }\) are independent non-negative random variables, \(S,S^{\prime }\) being \(\alpha /2\)-stable random variables. Hence from what precedes, we have

$$\begin{aligned} \mathbb{E }| R( E + i \eta ) _{11}| ^ \beta \le \mathbb{E }\left| E + a S - b S^{\prime } \right|^{-\beta } + c \eta ^{-2 \beta }\mathbb{E }| T(z)| ^ \beta . \end{aligned}$$

Assume for example that \(E > 0\). Let \({\mathcal{F }}\) be the \(\sigma \)-algebra generated by \((R^{(1)}, a,b,S)\) and \(\mathbb{E }^{\prime } = \mathbb{E }[ \cdot | {\mathcal{F }}] \). Using Lemma 7.4 conditionally on \({\mathcal{F }}\) yields that, for some constants \(C, c > 0\),

$$\begin{aligned} \mathbb{E }\left| E + a S - b S^{\prime } \right|^{-\beta } = \mathbb{E }\left[ \mathbb{E }^{\prime } \left| E + a S - b S^{\prime } \right|^{-\beta } \right] \le C \, \mathbb{E }\left| E + a S \right|^{-\beta } \le c E^{-\beta } . \end{aligned}$$

If \( E < 0\), we repeat the same argument with the \(\sigma \)-algebra generated by \((R^{(1)}, a,b ,S^{\prime })\).

Now, if \(0 < \beta < 2 \alpha / ( 4 - \alpha )\), using the tail bound (27), we find

$$\begin{aligned} \mathbb{E }| T(z)| ^ \beta \le c \left( n^{-\frac{\beta }{\alpha }} + \left( \frac{M_n}{n} \right)^\frac{\beta }{2} \right). \end{aligned}$$
(85)

We now use the bound (19) on \(M_n\), which is valid for all \(\eta \ge n^{-\frac{\alpha +2 }{4}}\):

$$\begin{aligned} \eta ^{-2 \beta }\mathbb{E }| T(z)| ^ \beta \le c \eta ^{ - 2 \beta -\frac{2\beta }{ 2 + \alpha } } n^{-\frac{\beta }{2} } ( \log n ) ^ {\frac{\beta ( 2 + \alpha ) }{8} }. \end{aligned}$$

This concludes the proof of the lemma, since for \(\eta \ge n^{-\rho ^{\prime }} ( \log n )^ { \frac{ (2 + \alpha )^2}{ 16 ( 3 + \alpha ) } } \), the above expression is uniformly bounded. \(\square \)

Note that in the proof of Lemma 5.10 we have used the bound (19) instead of the bound \(M_n \le c\eta ^{-1}\) given by the proof of Proposition 3.6, because the former is valid for a wider range of \(\eta \).

Proof of Lemma 5.11

Set \(h = -i z \in {\mathcal{K }}_1, H_{k}(h)=-i R^{(1)}(ih)_{kk}\) and define

$$\begin{aligned} I^n_h(u)&= \Gamma \left( 1 - \frac{\alpha }{2} \right) \mathbb{E }\left( \left( h +a_n^{-2} \sum _{k=2}^n X_{1k}^2 H_k \right)^{-1} . u \right)^{\frac{\alpha }{2} } \\&= \Gamma \left( 1 - \frac{\alpha }{2} \right) \mathbb{E }\left( \frac{ h. \check{u} + a_n^{-2} \sum _{k=2}^n X^2_{1k} H_k(h). \check{u} }{ \left| h +a_n^{-2} \sum _{k=2}^n X_{1k}^2 H_k \right|^2 } \right)^{\frac{\alpha }{2} } , \end{aligned}$$

where we recall that \(\check{u} = \mathrm{Im}(u) + i \mathrm{Re}(u)\).

Step one: Diagonal approximation. In this first step, we generalize Lemma 3.2. We will upper bound the expression \( \Vert \gamma _{ih}^n-I^n_{h}\Vert _{\beta ,\varepsilon }. \) Using the definition (28), we find that for any \(u\in S_1^+\), with \(\eta =\mathrm{Re}(h)>0\),

$$\begin{aligned} |\gamma _{ih}^n(u)-I^n_{h}(u)|\le c \eta ^{ - \alpha } \mathbb{E }[|T(z)|^{\frac{\alpha }{2}}], \end{aligned}$$

where we have used (64) with \(\beta =\frac{\alpha }{2}\). Using (85) for \(\beta = \alpha / 2\), we deduce that

$$\begin{aligned} |\gamma _{ih}^n(u)-I^n_{h}(u)|\le c \eta ^{ - \alpha } \left(n^{-1/2} +\left(\frac{M_n}{n}\right)^{\frac{\alpha }{4}}\right). \end{aligned}$$

Using (19) to bound \(M_n\), we then find, for \(n^{ - \frac{\alpha +2 }{4}} \le \eta \le 1\), that

$$\begin{aligned} |\gamma _{ih}^n(u)-I^n_{h}(u)|\le c n^{ - \frac{\alpha }{4} }\eta ^ { - \frac{ \alpha ( 3 + \alpha )}{ 2 + \alpha } } ( \log n )^ { \frac{\alpha ( 2 + \alpha )}{ 8 } } . \end{aligned}$$
(86)

To bound

$$\begin{aligned} \Delta _h^n(u,v):=|\gamma _{ih}^n(u)-\gamma _{ih}^n(v)-I^n_{h}(u)+I^n_h(v)| \end{aligned}$$

we first observe that, by the standard interpolation trick, for \(x_1,x_2,y_1,y_2\in {\mathcal{K }}_1\) and \(\kappa _1,\kappa _2,\beta \in [0,1]\) with \(\kappa _1+\kappa _2\ge \alpha /2\), setting \(N=|x_1|\wedge |x_2|\wedge |y_1|\wedge |y_2|\), we have

$$\begin{aligned}&|x_1^{\frac{\alpha }{2}}-x_2^{\frac{\alpha }{2}}-y_1^{\frac{\alpha }{2}} +y_2^{\frac{\alpha }{2}}|\le N^{\frac{\alpha }{2}-\kappa _1-\kappa _2} |x_1-y_1|^{\kappa _1}(|x_1-x_2|^{\kappa _2}+|y_1-y_2|^{\kappa _2})\\&\qquad +N^{\frac{\alpha }{2}-\beta }|x_1-x_2-y_1+y_2|^\beta . \end{aligned}$$

We use this inequality with

$$\begin{aligned} x_j= \frac{ h . \check{u} + a_n^{-2} \sum _{k=2}^n X^2_{1k} H_k(h).\check{u} -i (j-1) T(z).\check{u} }{ \left| h +a_n^{-2} \sum _{k=2}^n X_{1k}^2 H_k -i (j-1) T(z)\right|^2 }, \end{aligned}$$

and \(y_j\) is defined in the same way with \(v\) in place of \(u\). For \(j \in \{ 1, 2\} \), one can check that, with \(D_j=( h +a_n^{-2} \sum _{k=2}^n X_{1k}^2 H_k -i(j-1) T(z))^{-1}\),

$$\begin{aligned} |x_j-y_j|\le |D_j| |u-v| \; , \quad |x_1-x_2| \vee |y_1-y_2| \le |D_1| |D_2| |T(z)|, \end{aligned}$$

and

$$\begin{aligned} |x_1-x_2-y_1+y_2|\le |D_1| |D_2| |u-v||T(z)|\,. \end{aligned}$$

Moreover, using (65), we find \(N\ge ( |i.u|\wedge |i.v| ) ( |D_1| \wedge |D_2| ) \). Recall finally that \(|D_1|\) and \(|D_2|\) are bounded by \(\eta ^{-1}\). Hence, choosing \(\kappa _1=\beta \) and \(\kappa _2=\frac{\alpha }{2} +\varepsilon \) (with \(\varepsilon \) small enough so that \(\frac{\alpha }{2}+\varepsilon <\frac{2\alpha }{4-\alpha }\)), we deduce that

$$\begin{aligned} \Delta _h^n(u,v)\!\le \!\eta ^{ - \beta -\frac{\alpha }{2} } (|i.u|\wedge |i.v|)^{-\beta -\varepsilon } |u\!-\!v|^\beta \mathbb{E }[|T(z)|^{\frac{\alpha }{2}+\varepsilon }]\!+\!\eta ^{ - \beta - \frac{\alpha }{2}} |u\!-\!v|^\beta \mathbb{E }[|T(z)|^\beta ]. \end{aligned}$$

We naturally choose \(\beta =\frac{\alpha }{2}+\varepsilon < \frac{2\alpha }{4-\alpha }\). From (27), \(T(z) \in L^\beta \) and

$$\begin{aligned} \mathbb{E }[|T(z)|^\beta ]\le c n^{-\frac{\beta }{\alpha }} + c\left(\frac{M_n}{n}\right)^{\frac{\beta }{2}} \le c \left(\eta ^{\frac{4}{2 + \alpha } } n ( \log n )^{ - \frac{2 + \alpha }{ 4} } \right)^{-\frac{\beta }{2}}\!, \end{aligned}$$

where we have assumed that \(n^{-\frac{2 + \alpha }{4} } \le \eta \le 1\) and used (19). This gives, in this range of \(\eta \),

$$\begin{aligned} \Vert \gamma ^n_{ih}-I^n_h\Vert _{\frac{\alpha }{2}+\varepsilon ,\varepsilon } \le c \eta ^{ - \beta - \frac{\alpha }{2} - \frac{2 \beta }{2 + \alpha } } n^{-\frac{\beta }{2}} ( \log n )^{ \frac{\beta (2 + \alpha )}{ 8} } \end{aligned}$$

Now it is easy to check that for \(\eta \ge n^{-\frac{2 + \alpha }{2 (4+\alpha )} } \), we have \(\eta ^{ - \beta - \frac{\alpha }{2} - \frac{2 \beta }{2 + \alpha } } n^{-\frac{\beta }{2}} \le \eta ^ { -\frac{\alpha ( 3 + \alpha ) }{2 + \alpha } } n^{-\frac{\alpha }{4} }\): indeed, with \(\beta = \frac{\alpha }{2} + \varepsilon \), the left hand side equals the right hand side multiplied by \(\eta ^{-\frac{\varepsilon (4+\alpha )}{2+\alpha }} n^{-\frac{\varepsilon }{2}} \le 1\). It follows for \(n^{-\rho ^{\prime } } \le \eta \le 1\) and a new constant \(c > 0\), depending on \(\varepsilon \), that

$$\begin{aligned} \Vert \gamma ^n_{ih}-I^n_h\Vert _{\frac{\alpha }{2}+\varepsilon ,\varepsilon } \le c \eta ^ { -\frac{\alpha ( 3 + \alpha ) }{2 + \alpha } } n^{-\frac{\alpha }{4} } . \end{aligned}$$
(87)

If instead we assume that \(n^{-\rho } ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1 \), then, from the proof of Proposition 3.6, we may use the stronger bound \(M_n \le c \eta ^{-1}\) if \(|z|\) is large enough. We then find

$$\begin{aligned} \Vert \gamma ^n_{ih}-I^n_h\Vert _{\frac{\alpha }{2}+\varepsilon ,\varepsilon } \le c \eta ^{- \frac{ 3 \beta }{2} - \frac{\alpha }{2}} n ^{ - \frac{\beta }{2}} \le c \eta ^ { - \frac{5 \alpha }{4} } n ^{ - \frac{\alpha }{4} } . \end{aligned}$$
(88)

(where, for the last inequality, we have used the fact that \(\eta \ge n ^{-1/3}\) for \(n^{-\rho } ( \log n )^{\frac{4}{2 + 3 \alpha } } \le \eta \le 1 \) and \(n\) large enough).

Step two: Approximate fixed point equation. Next, we extend the proof of Proposition 3.1. We denote by \(\mathbb{E }_1 [ \cdot ] \) and \(\mathbb{P }_1 ( \cdot ) \) the conditional expectation and probability given \({\mathcal{F }}_1\), the \(\sigma \)-algebra generated by the random variables \((X_{ij})_{ i \ge j \ge 2}\). We assume that \(\alpha / 2 < \beta < 1 - \alpha / 2 \) and \(0 < \varepsilon < 1 - 3 \alpha / 2\). We first remark that, by arguments similar to the proof of Lemma 5.3 and by Corollary 7.2, we have

$$\begin{aligned} I^n_h(u)=\mathbb{E }[ G_{z} (Z_n)(u)], \end{aligned}$$

where, conditioned on \({\mathcal{F }}_1\), \(Z_n(u)=\frac{\kappa }{n}\sum _{k=2}^n ( H_k . u ) ^{\frac{\alpha }{2}}|g_k|^\alpha \), the \(g_k\) are i.i.d. standard normal variables and \(\kappa = \Gamma ( 1 - \alpha / 2 ) / \mathbb{E }|g_1 | ^\alpha \) (explicitly, \(\mathbb{E }|g_1|^\alpha = 2^{\frac{\alpha }{2}}\Gamma (\frac{\alpha +1}{2})/\sqrt{\pi }\)). Note that, from Lemma 5.2, \(\Vert Z_n\Vert _\beta \le \frac{c}{n }\sum _{k=2}^n | H_k | ^{\frac{\alpha }{2}} |g_k|^\alpha \), which belongs to \(L^p\) for any \(p >0\). Therefore, we can use Lemma 5.5 and the Hölder inequality to ensure that

$$\begin{aligned} \Vert I^n_h- G_z (\gamma _z^n)\Vert _{\beta ,\varepsilon }&\le c|h|^{-\alpha } \left(1+\Vert \gamma _z^n\Vert _\beta +\mathbb{E }\left[\left(\frac{\kappa }{n}\sum _{k=2}^n | H_k | ^{\frac{\alpha }{2}} |g_k|^\alpha \right)^p\right]^{1/p}\right)\nonumber \\&\quad \times \, \mathbb{E }\left[ \Vert \gamma _z^n-Z_n\Vert _{\beta ,\varepsilon }^q\right]^{1/q}\!, \end{aligned}$$
(89)

where \(1 / p + 1 /q = 1\) and

$$\begin{aligned} \mathbb{E }[ \Vert \gamma _z^n-Z_n\Vert _{\beta ,\varepsilon }^q]^{1/q} \le \mathbb{E }[ \Vert \mathbb{E }[ Z_n ] -Z_n\Vert _{\beta ,\varepsilon }^q]^{1/q} + \Vert \gamma _z^n-\mathbb{E }[ Z_n ] \Vert _{\beta ,\varepsilon }\,. \end{aligned}$$

From the triangle inequality,

$$\begin{aligned} \mathbb{E }[ \Vert \mathbb{E }[ Z_n ] -Z_n\Vert _{\beta ,\varepsilon }^q]^{1/ q } \le \mathbb{E }[ \Vert \mathbb{E }[ Z_n ] - \mathbb{E }_1 [Z_n] \Vert _{\beta ,\varepsilon }^q]^{1/ q } + \mathbb{E }[ \Vert \mathbb{E }_1 [ Z_n ] -Z_n\Vert _{\beta ,\varepsilon }^q]^{1/ q }. \end{aligned}$$

But, using Lemma 8.5 from the Appendix,

$$\begin{aligned}&\mathbb{E }[ \Vert \mathbb{E }_1 [ Z_n ] -Z_n\Vert _{\beta ,\varepsilon }^q] \le \mathbb{E }\left\Vert\frac{c }{n}\sum _{k=2}^n ( H_k . u )^{\frac{\alpha }{2}} (|g_k|^\alpha -\mathbb{E }|g_k|^\alpha )\right\Vert_{\beta , \varepsilon } ^q \\&\quad \le c(q) ( \log n )^ { \frac{q}{2}} ( \eta ^{ \alpha }n ) ^{-\frac{q}{2}}. \end{aligned}$$

Similarly, by Lemma 8.5,

$$\begin{aligned} \mathbb{E }[ \Vert \mathbb{E }[ Z_n ] - \mathbb{E }_1[ Z_n ] \Vert _{\beta ,\varepsilon }^q]&\le c^q \mathbb{E }\left\Vert \mathbb{E }\frac{1 }{n} \sum _{k=2}^n ( H_k . u )^{\frac{\alpha }{2}} - \frac{1 }{n} \sum _{k=2}^n ( H_k . u )^{\frac{\alpha }{2}} \right\Vert^q_{\beta , \varepsilon }\\&\le c^{\prime }(q) ( \log n )^{\frac{q \alpha }{4}} ( \eta ^ 2 n ) ^{ - \frac{q \alpha }{4} } . \end{aligned}$$

Meanwhile, using (97) as we did in the proof of Proposition 3.1, we have, with \(c_0 = 2 \Gamma ( 1 - \frac{\alpha }{2} )\),

$$\begin{aligned} \Vert \gamma _z^n-\mathbb{E }Z_n \Vert _{\beta ,\varepsilon }\le c_0 (n\eta )^{-\frac{\alpha }{2}}. \end{aligned}$$

Hence, there exists a new constant \(c(q)\) such that

$$\begin{aligned} \mathbb{E }\left[ \Vert \gamma _z^n-Z_n\Vert _{\beta ,\varepsilon }^q\right]^{1/q} \le c(q) ( \log n )^{\frac{ \alpha }{4}} ( \eta ^ 2 n ) ^{ - \frac{ \alpha }{4} }. \end{aligned}$$

Similarly, using the triangle inequality at the first line, (97) at the second line and the Jensen inequality at the third,

$$\begin{aligned}&\mathbb{E }\left[\left(\frac{1}{n}\sum _{k=2}^n | H_k | ^{\frac{\alpha }{2}} |g_k|^\alpha \right)^p\right]^{1 / p} \\&\quad \le \mathbb{E }\left[\left(\frac{1}{n}\sum _{k=2}^n | H_k | ^{\frac{\alpha }{2}} \mathbb{E }|g_k|^\alpha \right)^p\right] ^{1 / p} + \mathbb{E }\left[ \left|\frac{1 }{n}\sum _{k=2}^n | H_k | ^{\frac{\alpha }{2}}(|g_k|^\alpha -\mathbb{E }|g_k|^\alpha )\right|^p \right] ^{1 / p} \\&\quad \le \mathbb{E }|g_1 |^\alpha \mathbb{E }\left[\left(\frac{1}{n}\sum _{k=2}^n | R_k | ^{\frac{\alpha }{2}} \right)^p\right] ^{1 / p} + c_0(n\eta )^{-\frac{\alpha }{2}} + c(p)^{ 1 / p} ( \eta ^{ \alpha }n ) ^{-\frac{1}{2}} \\&\quad \le \mathbb{E }|g_1 |^\alpha \left(\frac{1}{n}\sum _{k=2}^n \mathbb{E }| R_k | ^{\frac{p \alpha }{2}} \right) ^{1 / p} + c_0 (n\eta )^{-\frac{\alpha }{2}} + c(p)^{ 1 / p} ( \eta ^{ \alpha }n ) ^{-\frac{1}{2}}. \end{aligned}$$

We choose \(p > 1\) such that \(p \alpha /2 < 2 \alpha / (4-\alpha )\), and we finally use (84) and Lemma 5.10 (note that \(|H_k| = | R^{(1)}(z)_{kk} | = |R_k|\), so that the third line is indeed controlled by (84)). Then, for our range of \(\eta \), the right hand side of the above inequality is of order \(1\). Putting these estimates into (89), we finally find that for any \(\alpha / 2 < \beta < 2 \alpha / ( 4 - \alpha )\), any \(0 < \varepsilon < 1 - 3 \alpha / 2\) and \(n^{-\rho ^{\prime } } (\log n ) ^{c_0} \le \eta \le 1\), there exists a constant \(c(\alpha , \beta , \varepsilon )\) such that

$$\begin{aligned} \Vert I^n_h- G_z(\gamma _z^n)\Vert _{\beta ,\varepsilon } \le c | z | ^{-\alpha } ( \log n )^{\frac{ \alpha }{4}} ( \eta ^ 2 n ) ^{ - \frac{ \alpha }{4} }. \end{aligned}$$

Putting this together with (87) concludes the proof, since the above term is negligible compared to the right hand side of (87) or (88). \(\square \)