Strong renewal theorems and local large deviations for multivariate random walks and renewals

We study a random walk $\mathbf{S}_n$ on $\mathbb{Z}^d$ ($d\geq 1$), in the domain of attraction of an operator-stable distribution with index $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_d) \in (0,2]^d$: in particular, we allow the scalings to be different along the different coordinates. We prove a strong renewal theorem, i.e. a sharp asymptotic of the Green function $G(\mathbf{0},\mathbf{x})$ as $\|\mathbf{x}\|\to +\infty$, along the "favorite direction or scaling": (i) if $\sum_{i=1}^d \alpha_i^{-1}<2$ (reminiscent of Garsia-Lamperti's condition when $d=1$ [Comm. Math. Helv. $\mathbf{37}$, 1963]); (ii) if a certain local condition holds (reminiscent of Doney's condition [Probab. Theory Relat. Fields $\mathbf{107}$, 1997] when $d=1$). We also provide uniform bounds on the Green function $G(\mathbf{0},\mathbf{x})$, sharpening estimates when $\mathbf{x}$ is away from this favorite direction or scaling. These results significantly improve on the existing literature, which was mostly concerned with the case $\alpha_i\equiv \alpha$, in the favorite scaling, and has even left aside the case $\alpha\in[1,2)$ with non-zero mean. Most of our estimates rely on new general (multivariate) local large deviations results, which were missing in the literature and are of interest in their own right.

We assume that $S$ is aperiodic and in the domain of attraction of a non-degenerate multivariate stable distribution with index $\alpha := (\alpha_1, \ldots, \alpha_d) \in (0, 2]^d$: there are a recentering sequence $b_n = (b_n^{(1)}, \ldots, b_n^{(d)})$ and scaling sequences $a_n^{(1)}, \ldots, a_n^{(d)}$ such that, setting $A_n$ the diagonal matrix with $A_n(i,i) = a_n^{(i)}$, we have (1.1) as $n \to +\infty$. Here, $Z$ is a multivariate stable law, whose non-degenerate density is denoted $g_\alpha(x)$. As in [35, 11, 28], we allow the scaling sequences to be different along different coordinates.
The case where $a_n^{(i)} \equiv a_n$ for all $1 \le i \le d$ (that is, $A_n = a_n I_d$) was considered by Lévy [26] and Rvaceva [36], and will be referred to as the balanced case. We refer to Appendix A for further discussion of generalized domains of attraction (here we only consider the case where $A_n$ is diagonal), and for a brief description of multivariate regular variation.
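To fix ideas, here is a minimal numerical sketch (assuming NumPy is available) of a non-balanced walk: each coordinate gets symmetric stable increments with its own index, so the coordinates require different normalizations. The indices $(1.5, 0.8)$ and the Chambers-Mallows-Stuck sampler are illustrative choices, not taken from the paper.

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Symmetric alpha-stable sample via the Chambers-Mallows-Stuck
    formula (valid for alpha != 1)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def walk(n, alphas, seed=0):
    """n steps of a walk whose i-th coordinate has symmetric alpha_i-stable
    increments: a toy stand-in for the operator-stable setting."""
    rng = np.random.default_rng(seed)
    steps = np.column_stack([sym_stable(a, n, rng) for a in alphas])
    return steps.cumsum(axis=0)

# With alpha = (1.5, 0.8) the natural scalings differ: a_n^(i) ~ n^(1/alpha_i),
# so A_n = diag(n^(1/1.5), n^(1/0.8)) and A_n^{-1} S_n stays of order one.
n, alphas = 10_000, (1.5, 0.8)
S = walk(n, alphas)
rescaled = S[-1] / np.array([n ** (1 / a) for a in alphas])
```

Rescaling by $a_n I_d$ alone (the balanced normalization) would send one coordinate to $0$ or $\infty$, which is why the diagonal matrix $A_n$ is needed.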
The contribution of the present paper is threefold: (i) we give the sharp behavior of $G(x)$ in the case $\alpha \in [1, 2)$ with non-zero mean, in a cone around the mean vector (we call it the favorite direction): this was missing in the literature, and we also treat the case $\alpha = 1$ with infinite mean; (ii) we give uniform bounds on $G(x)$, with improved estimates when $x$ is outside the favorite direction; (iii) we extend the results to the case of random walks in the domain of attraction of an operator-stable distribution, allowing for different scalings along the different components (and we weaken Williamson's condition [41, Eq. (3.10)] in the case $\alpha \in (0, 1)$).
As a central tool, we prove some multivariate local large deviations estimates, i.e. we go beyond the local limit theorem in a large deviation regime. This is of interest in its own right, since such estimates were missing in the literature, and it appears central in controlling the small-$n$ contribution to $G(x)$. We prove a local large deviation in the general setting, see Theorem 2.1. Then we propose a new (and natural) multivariate Assumption 2.2, which extends Doney's condition [13, Eq. (1.9)] to the multivariate setting and generalizes Williamson's condition [41, Eq. (3.10)]: we obtain a better local large deviation result under this assumption.
Let us now give a brief overview of how the rest of the paper is organized. First, we present our local large deviations estimates and our Assumption 2.2 (which gives a sharper result) in Section 2. In Section 3, we state our strong renewal theorems (along the favorite direction or scaling), which we divide into three parts: the centered case, i.e. when $b_n \equiv 0$; the non-zero mean case with $\alpha_i > 1$; and the case $\alpha_i = 1$, which we set aside because it needs additional care. In Section 4, we present the uniform bounds on $G(x)$ (in dimension $d = 2$ for simplicity). The rest of the paper, Sections 5 to 9, is devoted to the proofs: Section 5 for the local large deviations, Sections 6-7-8 for the strong renewal theorems, and Section 9 for the estimates when $x$ is away from the favorite direction or scaling. Finally, we collect in the appendix some useful comments: in Appendix A, we recall some definitions and results about multivariate regular variation and generalized domains of attraction; in Appendix B, we discuss further our Assumption 2.2.

A general working assumption
We assume in the rest of the paper, mostly for simplicity of notation, that the left and right tail distributions of $X_1$, $F_i(-x)$ and $\bar F_i(x)$, are dominated by subexponential distributions.

Assumption 1.1. There exist slowly varying functions $(\varphi_i)_{i \le d}$ and exponents $\gamma_i \ge \alpha_i$ such that (1.6) holds for all $x \in \mathbb{N}$ and $i \in \{1, \ldots, d\}$. When $E[(X_1^{(i)})^2] = +\infty$, we may take $\gamma_i = \alpha_i$ and $\varphi_i(\cdot)$ a constant multiple of $L_i(\cdot)$. When $E[(X_1^{(i)})^2] < +\infty$, we may take $\varphi_i(\cdot)$ and $\gamma_i$ such that $\sum_{n \ge 1} \varphi_i(n) n^{1-\gamma_i} < +\infty$.

This assumption is essentially used to generalize (1.2) to the case $\alpha_i = 2$: the exponent $\gamma_i$ gives further information on the left and right tail distributions. It does not appear to be a real restriction (components are allowed to have a much stronger tail, having formally $\gamma_i = \infty$), but it makes the results easier to present. Also, we used the same exponent for the left and right tail distributions for simplicity, but all results can be adapted to the case of different tail behaviors. A typical example we have in mind is when the distribution of $X_1$ is regularly varying in $\mathbb{R}^d$ with exponent $-(\gamma_1, \ldots, \gamma_d)$. We refer to Appendix A for a definition of multivariate regular variation, see in particular (A.2); we also present two examples (Examples A.1-A.2) of distributions of $X_1$ that we keep in mind.

EJP 24 (2019), paper 46.

Local large deviations
Let us start by stating the local limit theorem obtained by Griffin in [19] in our setting, and disentangled by Doney [11] (it is proven in dimension 2, but as stressed by Doney its proof is valid in any dimension): uniformly for $x = (x_1, \ldots, x_d) \in \mathbb{Z}^d$,
$$a_n^{(1)} \cdots a_n^{(d)}\, P(S_n = x) - g_\alpha\big(A_n^{-1}(x - b_n)\big) \to 0 \quad \text{as } n \to +\infty. \tag{2.1}$$
Our first set of results concerns local large deviation estimates, which improve (2.1) in the regime where some $x_i/a_n^{(i)} \to +\infty$. But let us start by reviewing some of the existing literature. A great part of it focuses on the balanced case ($A_n = a_n I_d$): in [25], large deviations are proven, and in [42, 33], some sufficient conditions (which we do not detail here) are given to obtain a local limit theorem of the type $P(S_n \in A) \sim n P(X_1 \in A)$; the case $\alpha = 1$ is left aside. As far as the "non-balanced" case is concerned, we refer to [29, Ch. 9] for large deviations estimates, see for example Theorem 9.1.3, where it is shown that $P(\langle S_n, \theta\rangle > x_n)$ is of the order of $n P(\langle X_1, \theta\rangle > x_n)$ when in the domain of attraction of an operator-stable distribution with no normal component.
To summarize, there exists no general result that treats "mixed" normal and stable cases and gives a good (and general) local large deviation under a weak assumption. Our aim is therefore to provide simple local large deviation estimates, which will be a crucial tool for our renewal results of Sections 3-4. We also give an improved result below, under a more local assumption on the distribution of $X_1$. The proofs of the local large deviation results are presented in Section 5.
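The relation $P(S_n \in A) \sim n P(X_1 \in A)$ mentioned above can be illustrated numerically in the univariate heavy-tailed case; the following sketch (a hypothetical Pareto example with infinite mean, not a setting from the paper) compares $P(S_n > x)$ with $n\,P(X_1 > x)$ far in the tail, where the "single big jump" heuristic makes the two comparable.

```python
import numpy as np

def tail_vs_single_jump(n=50, x=10_000.0, alpha=0.7, samples=100_000, seed=1):
    """Compare P(S_n > x) with n * P(X_1 > x) for Pareto-type increments
    with P(X_1 > t) = t**(-alpha) for t >= 1 (alpha < 1: infinite mean)."""
    rng = np.random.default_rng(seed)
    # rng.pareto samples Lomax; adding 1 gives P(X > t) = t**(-alpha), t >= 1
    X = rng.pareto(alpha, size=(samples, n)) + 1.0
    p_sum = (X.sum(axis=1) > x).mean()   # Monte Carlo estimate of P(S_n > x)
    p_jump = n * x ** (-alpha)           # exact value of n * P(X_1 > x)
    return p_sum, p_jump
```

For $x$ far beyond the typical scale of $S_n$, the ratio of the two returned values is close to $1$; near the typical scale, the approximation breaks down, which is why the paper's local large deviations require care in the intermediate regimes.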

A first local limit theorem
Let us denote $\hat S_n := S_n - \lfloor b_n \rfloor$ the recentered walk (we take the integer part of $b_n$ simply so that $\hat S_n$ is still $\mathbb{Z}^d$-valued). As far as a large deviation estimate is concerned, univariate large deviation estimates already give (we recall these results in Section 5.1 below) that there is a constant $C > 0$ such that for any $x \ge 0$, ..., where the inequality $\hat S_n \ge x$ is meant componentwise. We now give a local version of it.

A local multivariate assumption for an improved local limit theorem
In dimension $d = 1$, better local large deviations can be obtained under a local assumption on the distribution of $X_1$, see [13] in the case $\alpha \in (0,1)$ and [3, Thm 2.7] in the case $\alpha \in (0,2)$. We present here an assumption which can be thought of as the analogue of Doney's condition [13, Eq. (1.9)] in the multivariate setting, and which generalizes Williamson's condition [41, Eq. (3.10)]. We comment on that assumption below.

Assumption 2.2. There exist a constant $C_d$, slowly varying functions $(\varphi_i)_{1 \le i \le d}$ and exponents $(\gamma_i)_{1 \le i \le d}$ (the same as in Assumption 1.1) such that (2.4) holds for any fixed $i \in \{1, \ldots, d\}$, where the functions $h^{(i)}_{x_i}(\cdot)$ verify the conditions (2.5).

We mention that a two-dimensional, balanced, version of Example 2.4 is used in [18] (it comes from the biophysics literature, see [16]): the dimension is $d = 2$, $\beta_i \equiv 1$, and $\beta = 2 + \alpha$, $\alpha > 0$.
Let us now give a general idea behind the choice of Assumption 2.2; assume for simplicity that all $x_i$'s are positive. We start with writing ... A natural assumption is that the probability of being at one particular site is bounded uniformly on the rectangle: this gives the first denominator of (2.4).
Then, $P(X_1 \ge \cdot)$ is controlled by Assumption 1.1: it gives the first numerator in (2.4). The last term is, by Hölder's inequality, bounded by a quantity which accounts for the product of the $h^{(i)}_{x_i}(x_j)$. We keep in mind two cases: (i) when the coordinates are independent (see Example 2.3), we recover $h^{(i)}_{x_i}(x_j) \le C x_j^{-a}$ for some $a > 0$; (ii) when the coordinates are dependent (see Example 2.4), there is some threshold $t(x_i)$ ... $x_j^{-a}$ for some $a > 0$, and this satisfies the conditions (2.5) (we refer to Appendix B for more details, see (B.1)-(B.2) and below). We stress that the term $h^{(i)}_{|x_i|}(|x_j|)$ in (2.4) is central: in particular, item (ii) in (2.5) ensures that there is a constant $C$ such that (2.6) holds for any $i$, which is Doney's condition [13, Eq. (1.9)] for each component (generalized to the case $\alpha_i \ge 1$). Also, we point out that Assumption 2.2 is similar in spirit to, but weaker than, Williamson's condition [41, Eq. (3.10)], which considers the balanced case $\alpha_i \equiv \alpha < \min(d, 2)$, and says that there is a constant $K_0 < +\infty$ such that (2.7) holds for any $x \in \mathbb{Z}^d$. The case of dimension $d = 1$ with $\alpha_1 \in (0, 2)$ is proven in [3, Theorem 2.7]: Theorem 2.5 therefore generalizes it to the case $\alpha_1 = 2$, and to the multivariate, non-balanced case. It is a significant improvement of Theorem 2.1, in particular when (several) $x_i$'s are much larger than $a_n^{(i)}$.

About the balanced case, and Williamson's condition
We may obtain another bound if we consider the balanced case and assume that there are a positive exponent $\gamma$ and some slowly varying $\varphi(\cdot)$ such that (2.8) holds. This is a natural extension of Williamson's condition (2.7) to the case $\alpha = 2$, and as seen in Appendix B (when treating Example 2.4), it implies Assumption 2.2.

Theorem 2.6. Suppose that $a_n^{(i)} \equiv a_n$ (balanced case) and that (2.8) holds. Then there are constants $c_3, C_3$ such that for $\|x\| \ge a_n$ we have ...

In practice, we will not use assumption (2.8) in the rest of the paper: it requires working in the balanced case, and it would not improve our renewal results. We however include Theorem 2.6 since it is an important improvement of Theorem 2.5, and it may prove useful (in particular in the settings of [18] and [5], where (2.8) is verified).

Some conventions for the rest of the paper
First of all, all regularly varying quantities ($a_n^{(i)}$, $\varphi_i$, etc.) will be interpreted as functions of positive real numbers, which may be taken infinitely differentiable (see [6, Th. 1.8.2]).
As we may work along subsequences and exchange the roles of the $X^{(i)}$'s, we assume that $a_n^{(1)} \le \cdots \le a_n^{(d)}$ (ensuring in particular that $\alpha_1 \ge \cdots \ge \alpha_d$); the first coordinate is the one with the least fluctuations. Finally, if $a_n^{(i)}/a_n^{(1)} \to a_{i,1} \in (0, 1)$, then we rescale the limiting law by $a_{i,1}$. Having $a_{i,1} = 1$ for all $i$ corresponds to the balanced case. We will also assume that either $b_n^{(i)} \equiv 0$ (as is the case when $\alpha_i < 1$; when $\alpha_i > 1$ with $\mu_i = 0$; or in the symmetric case for $\alpha_i = 1$), or $b_n^{(i)}/a_n^{(i)} \to \pm\infty$ (as is the case when $\alpha_i > 1$ with $\mu_i \in \mathbb{R}^*$, or $\alpha_i = 1$ with $p_i \ne q_i$); the only case where subtleties may arise is when $\alpha_i = 1$ with $|\mu_i| = 0$ or $+\infty$. In the rest of the paper, we denote $u \vee v = \max(u, v)$ and $u \wedge v = \min(u, v)$. For two sequences $(u_n)_{n \ge 0}$, $(v_n)_{n \ge 0}$, we write $u_n \sim v_n$ if $u_n/v_n \to 1$ as $n \to +\infty$, $u_n = O(v_n)$ if $u_n/v_n$ stays bounded, and $u_n \asymp v_n$ if $u_n = O(v_n)$ and $v_n = O(u_n)$.
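Potter's bound [6, Thm. 1.5.6], used repeatedly in the proofs below, states that a slowly varying $\varphi$ satisfies $\varphi(y)/\varphi(x) \le C \max((y/x)^\eta, (x/y)^\eta)$ for any $\eta > 0$ and $x, y$ large. A quick numerical sanity check, with the illustrative choice $\varphi = \log$ (the helper `potter_ratio` is ours, not the paper's):

```python
import math

def potter_ratio(phi, x, y, eta):
    """Ratio phi(y)/phi(x) divided by the Potter majorant
    max((y/x)**eta, (x/y)**eta); bounded for slowly varying phi."""
    return (phi(y) / phi(x)) / max((y / x) ** eta, (x / y) ** eta)

phi = math.log  # log is slowly varying
# The ratio stays bounded even for widely separated arguments and small eta.
vals = [potter_ratio(phi, x, y, 0.1) for x in (1e3, 1e6) for y in (1e4, 1e9)]
```

This is the mechanism by which slowly varying factors get "absorbed" into small polynomial exponents $\eta$ (or $\delta$) throughout the estimates.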

Strong renewal theorems
We now consider the Green function $G(x) := \sum_{n=1}^{\infty} P(S_n = x)$, and we study its behavior as $\|x\| \to +\infty$. If $(S_n)_{n \ge 0}$ is a (multivariate) renewal process, we interpret $G(x)$ as the renewal mass function, $P(x \in S)$.
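As a concrete illustration of the renewal mass function, the following sketch estimates $G(x) = P(x \in S)$ by Monte Carlo for a toy bivariate renewal process; the increment law (uniform on $\{1,2,3\}^2$) is an assumption made purely for illustration, not the paper's general setting.

```python
import numpy as np

def green_mc(x, paths=20_000, seed=2):
    """Monte Carlo estimate of G(x) = P(x in S) for a toy bivariate renewal
    process with i.i.d. increments uniform on {1,2,3}^2. The mean increment
    is (2,2), so the favorite direction is the diagonal."""
    rng = np.random.default_rng(seed)
    max_steps = max(x)   # each coordinate grows by at least 1 per step
    hits = 0
    for _ in range(paths):
        pts = rng.integers(1, 4, size=(max_steps, 2)).cumsum(axis=0)
        if ((pts[:, 0] == x[0]) & (pts[:, 1] == x[1])).any():
            hits += 1
    return hits / paths

g = green_mc((20, 20))   # a point on the favorite (diagonal) direction
```

Points off the diagonal (e.g. $(20, 40)$) give a much smaller estimate, which is the phenomenon quantified by the uniform bounds of Section 4.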

About the favorite direction or scaling
In the sum $\sum_{n=1}^{\infty} P(S_n = x)$, the main contribution comes from some typical number of jumps: identifying that number allows us to determine a favorite direction or scaling along which we will get sharp asymptotics of $G(x)$. Let us define $n_i$ by (3.1): then $n_i$ is the typical number of steps for the $i$-th coordinate to reach $x_i$. This definition might not give a unique $n_i$, but any choice will work, and $n_i$ is unique up to asymptotic equivalence. If $\alpha_i > 1$ with $\mu_i \ne 0$, then we have $n_i = |x_i|/|\mu_i|$; if $\alpha_i = 1$ and $\mu_i \in \mathbb{R}^*$, or $\alpha_i = 1$ and $p_i \ne q_i$, then we have $n_i \sim |x_i|/|\mu_i(|x_i|)|$ (see details below, in Section 8.1); and if $b_n^{(i)} \equiv 0$, then $n_i$ is such that $a^{(i)}_{n_i} \sim |x_i|$. There are mainly three regimes that we consider.

I. Centered case: $b_n \equiv 0$. The typical number of steps to reach $x$ is $n_{i_0} = \min_i n_i$; the favorite scaling corresponds to the points $x$ with $x_i \asymp a^{(i)}_{n_{i_0}}$ for all $i$, see (3.2) below.

II. Non-zero mean case: $\mu_i \in \mathbb{R}^*$ for some $i$, with $\alpha_i > 1$. Let $i_0 = \min\{i : \mu_i \ne 0\}$: the typical number of steps to reach $x$ is $n_{i_0}$, see (3.9) below. Some more subtleties arise in that case.

III. The case $\alpha_{i_0} = 1$, which needs additional care.
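The regimes above can be sketched in code; the helper `typical_steps` is purely illustrative, and it assumes the simplest possible forms (exact power scaling $a_n^{(i)} = n^{1/\alpha_i}$ in the centered case, finite non-zero means otherwise), not the general regularly varying setting of the paper.

```python
def typical_steps(x, mu=None, alpha=None):
    """Typical number of steps n_i for the i-th coordinate to reach x_i,
    in two simplified situations:
    - non-zero mean, alpha_i > 1: n_i = |x_i| / |mu_i|;
    - centered case with a_n^(i) = n**(1/alpha_i): solving
      a_{n_i}^(i) ~ |x_i| gives n_i = |x_i| ** alpha_i."""
    if mu is not None:
        return [abs(xi) / abs(mi) for xi, mi in zip(x, mu)]
    return [abs(xi) ** ai for xi, ai in zip(x, alpha)]

# Along the favorite direction/scaling, the n_i are of the same order;
# the dominant number of jumps in G(x) is then n_{i0} = min_i n_i.
```

For instance, with means $\mu = (2, 1)$, the point $x = (100, 50)$ lies on the favorite direction since $n_1 = n_2 = 50$.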
We now present strong renewal theorems, i.e. sharp asymptotics of G(x), in cases I-II-III, along the favorite direction or scaling (the proofs are presented in Sections 6-7-8).
Recall that g α (·) is the density of the limiting multivariate stable law.

Case I (centered): $b_n \equiv 0$
We assume here that $b_n \equiv 0$ and that $\sum_{n \ge 1} (a_n^{(1)} \cdots a_n^{(d)})^{-1} < +\infty$, so that $S_n$ is transient. We leave aside for the moment the case $d = 1$, $\alpha_1 = 1$ (considered in [3]), and the case $d = 2$, $\alpha = (2, 2)$, which are marginal cases: the transience of the random walk then depends on the slowly varying functions $L_i(\cdot)$.
We refer to (3.2) as $x$ going to infinity along the favorite scaling. Note that under (3.2) we have $n_i \sim |t_i|^{\alpha_i} n_1$, so we can exchange the roles of the coordinates if needed.

Comments on the balanced case
In the balanced case, $a^{(i)}_{n_1} \equiv a_{n_1} \sim |x_1|$ and $\alpha_i \equiv \alpha$: we obtain that if either $\alpha > d/2$ or Assumption 2.2 holds, and if $x_i/x_1 \to t_i \in \mathbb{R}^*$ for all $i \in \{1, \ldots, d\}$, then ... This recovers Williamson's result [41] (we used a change of variables for the integral), under weaker conditions if $\alpha \le d/2$. In the balanced case ($a^{(i)}_n \equiv a_n$), $S_n$ is transient if and only if $\int_1^{+\infty} \frac{du}{u\,\sigma(u)} < +\infty$ (recall the definition (1.3) of $\sigma(\cdot)$), and we can rewrite the above as: if $x \to +\infty$ such that $|x_1|/|x_2|$ stays bounded away from $0$ and $+\infty$, then ..., as $|x_1| \to +\infty$.

Case II (non-zero mean)

Let $i_0$ be the first $i$ such that $\mu_i \ne 0$, and assume that $x_{i_0}$ and $\mu_{i_0}$ have the same sign.

Theorem 3.3. Assume that $\alpha_{i_0} > 1$, $\mu_{i_0} \ne 0$, and that $\mu_i = 0$ for $i < i_0$. Assume that one among the following three conditions holds: ...
As for Theorem 3.1, we refer to (3.5) as x going to infinity along the favorite direction.

Comments on the balanced case
If $a^{(i)}_n \equiv a_n$ and $\alpha_i \equiv \alpha$, case II corresponds to having $\alpha > 1$ and at least one $\mu_i \ne 0$. In the symmetric case where $\mu_i \equiv \mu \ne 0$, the result simplifies: let us state it along the diagonal $\mathbf{1} = (1, \ldots, 1)$ for simplicity. Indeed, we used that $a_{r/|\mu|} \sim |\mu|^{-1/\alpha} a_r$, and a change of variables for the integral.

Case III
Let $i_0 = \min\{i : b^{(i)}_n \not\equiv 0\}$, and assume that $\alpha_{i_0} = 1$ with either $\mu_{i_0} \in \mathbb{R}^*$ or $p_{i_0} \ne q_{i_0}$. For an overview of results and estimates on (univariate) random walks of Cauchy type, we refer to [3]; many of the estimates we use below come from there.

Renewal estimates away from the favorite direction or scaling
In this section, we provide bounds on $G(x)$ that hold uniformly in $x$: in particular, this sharpens our estimates when $x$ goes away from the favorite direction or scaling (one would have $C_\alpha$, $C_\alpha'$ or $C_\alpha'' \to 0$ in Theorems 3.1, 3.3 or 3.4). We do not obtain sharp asymptotics for $G(x)$, mostly because the local large deviation estimates of Section 2 are not sharp; first of all because our Assumption 2.2 does not give the precise asymptotics of $P(X_1 = x)$. Let us stress that in [4], the authors manage to obtain the sharp asymptotics of $G(x)$ in a specific setting (with application to a DNA model): $X_1 \in \mathbb{N}^2$, and the local probabilities $P(X_1 = x)$ are known asymptotically, one coordinate having a heavy tail, the second one having an exponential tail. One should also be able to obtain the sharp asymptotics of $G(x)$, for instance in Example 2.4, but we do not pursue this here, to avoid additional lengthy and technical calculations.
We also stress that having uniform bounds on $G(x)$ turns out to be useful, for instance when studying the intersection of two independent (multivariate) renewal processes $S = \{S_n\}_{n \ge 0}$, $S' = \{S'_n\}_{n \ge 0}$ with the same distribution. Indeed, $E[|S \cap S'|] = \sum_{x \in \mathbb{Z}^d} P(x \in S)^2$, and to know whether $S \cap S'$ is finite, good bounds on $G(x) = P(x \in S)$ are essential. The main contribution to $E[|S \cap S'|]$ will come from points along the favorite direction, and one needs to know how fast $G(x)$ decreases when $x$ moves away from it. We refer to [5, App. A.2] for some results on the intersection of two independent renewal processes.
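The quantity $E[|S \cap S'|]$ can also be approximated by direct simulation; the following sketch uses a toy bivariate renewal process with increments uniform on $\{1,2,3\}^2$, truncated at finitely many jumps (both the increment law and the truncation are illustrative assumptions, not the paper's setting).

```python
import numpy as np

def mean_intersection(pairs=2_000, steps=200, seed=3):
    """Monte Carlo estimate of E[|S ∩ S'|] for two independent copies of a
    toy bivariate renewal process (increments uniform on {1,2,3}^2),
    truncated at `steps` jumps."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(pairs):
        a = rng.integers(1, 4, size=(steps, 2)).cumsum(axis=0)
        b = rng.integers(1, 4, size=(steps, 2)).cumsum(axis=0)
        # count common renewal points of the two trajectories
        total += len({tuple(p) for p in a} & {tuple(p) for p in b})
    return total / pairs

m = mean_intersection()
```

In this finite-variance 2D example the truncated mean grows with the number of jumps, mirroring the fact that finiteness of $S \cap S'$ is decided by the summability of $G(x)^2$ along the favorite direction.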
For simplicity of exposition, we only present the case of dimension $d = 2$. Also, we will work under Assumption 2.2. Often, results will be sharper in the case of renewal processes, as will be outlined in our theorems. We divide our statements into three cases: $b_n \equiv 0$ (centered case); $b^{(i)}_n \not\equiv 0$ for $i = 1, 2$ (non-zero mean for both coordinates); $b^{(i_0)}_n \not\equiv 0$ and $b^{(i_1)}_n \equiv 0$ (mixed case). The proofs are presented in Section 9.
The favorite direction (see (3.5)-(3.9)) corresponds to having $n_1 - n_2 = O(m_i)$ for $i = 1, 2$. We will state only the case $n_1 \le n_2$, the other case being symmetric.

Theorem 4.2. Suppose that Assumption 2.2 holds, and that for $i = 1, 2$: either $\alpha_i > 1$ and $\mu_i \in \mathbb{R}^*$, or $\alpha_i = 1$ and $p_i \ne q_i$. Then for every $\delta > 0$ there is a constant $C_\delta$ such that, for all $x \in \mathbb{Z}^2$ (recalling the definition (3.1) of $n_i$, and of $m_i := a^{(i)}_{n_i}$), ... If $S_n$ is a renewal process, then $G(x) = 0$ as soon as $n_2 > |x_1|$, in particular if $n_2 \ge n_1^{1+\delta}$.
We stress that in the case $\alpha_1, \alpha_2 > 1$, we can replace $n_i$ by $x_i/\mu_i$ ($x_i$ and $\mu_i$ having the same sign) and $m_i$ by $a^{(i)}_{|x_i|}$.

About the balanced case
In the balanced case ($a^{(i)}_n \equiv a_n$), Theorem 4.3 gives the following: for the case $|s| \le a_r$, ... gives, for any $|s| \le r$, ... We therefore get the same conclusion as in (4.5). The case $|s| \ge r$ is similar to the case $\alpha > 1$ above.
We mention that assumption (2.8) would not improve (4.5)-(4.6) much: the improvement would only be at the level of the slowly varying functions, which are absorbed by the exponent $\delta$. We refer to the end of Section 9.2 for a discussion.

Case II-III (non-zero mean, mixed)
Here, we consider again the setting of cases II and III of Section 3, in the case where the second coordinate is "centered".
There is a constant $C$, and for any $\delta > 0$ there is a constant $C_\delta$, such that for any $x \in \mathbb{Z}^2$: ... If $S_n$ is a renewal process (necessarily $\alpha_{i_1} \in (0, 1)$), then there is an exponent ...
Notational warning: In the rest of the paper, we use c, C, c , C ,... as generic constants, and we will keep the dependence on parameters when necessary, writing for example c ε , C ε for constants depending on a parameter ε.

Proof of the local large deviations
In this section, we prove the local limit theorems of Section 2: Theorem 2.1 in Section 5.2, Theorem 2.5 in Section 5.3, and Theorem 2.6 in Section 5.4. But first of all, let us recall some univariate large deviation results.

Univariate large deviations: a reminder of Fuk-Nagaev inequalities
We start by giving a brief reminder of useful large deviation results for univariate random walks (i.e. we focus on $S^{(1)}$) in the domain of attraction of an $\alpha_1$-stable distribution; this will be useful throughout the section. Most of these estimates can be found in [31], but the case $\alpha_1 = 1$ was improved recently, cf. [3]. This will enable us to obtain local limit theorems for multivariate random walks in the next section.
In the rest of the section, we denote $M^{(i)}_n := \max_{1 \le k \le n} X^{(i)}_k$. We refer to Section 5 in [3] for an overview of how to derive the following statement from [31].
As a consequence of Theorem 5.1, there is a constant $c > 0$ such that, whenever $x \ge a^{(1)}_n$, ... Indeed, the left-hand side is bounded by $P(M^{(1)}_n \ge \ldots)$ ... Using a union bound, and because of Assumption 1.1, the first term is bounded by a constant times $n\varphi_1(x)x^{-\gamma_1}$. For the second term, we use Theorem 5.1, which gives that ... Another useful consequence of Theorem 5.1 is the following: let $C, C'$ be two (large) constants, with $C' < C/10$; then there is a constant $c$ such that for any $x \ge C a^{(1)}_n$, ... We used $(n\varphi_1(x)x^{-\gamma_1})^2$ for technical purposes (it is needed in what follows), but the bound is also valid without the square (or even without this term). Indeed, Theorem 5.1 gives that the left-hand side is bounded by ... To obtain (5.2) from this, we use the following.
n ) −1 so the first and second term are smaller than exp n , and n . We also used that nσ 1 (a n when α = 2.

Proof of Theorem 2.1
We fix $i \in \{1, \ldots, d\}$ and consider some $x \in \mathbb{Z}^d$ with $x_i \ge a^{(i)}_n$. Recall that $\hat S_n = S_n - \lfloor b_n \rfloor$. We denote $d^{(i)}_n := \ldots$ The two terms are treated similarly, so we only focus on the first one. We have ..., where we used the local limit theorem (2.1) to get that there is a constant $C > 0$ such that for any $k \ge 1$ and $y \in \mathbb{Z}^d$, we have $P(S_k = y) \le C (a^{(1)}_k \cdots a^{(d)}_k)^{-1}$. Then, in order to use (5.1) for the last probability, we need to control ...

Claim 5.2.
There exists a constant $c > 0$ such that for all $n$, ... For the second inequality we used [3, Claim 5.3], which we reproduce below (separating the positive and negative parts of $X^{(i)}_1$), using also that $a^{(i)}_n/a^{(i)}_{n/2}$ is bounded by a constant.
Note that with the same method, using Theorem 5.1 instead of (5.1), one is able to obtain a local version of Theorem 5.1.

Proposition 5.4.
There are some $C_4, C_5 > 0$ such that, for any $x$ with $x_i \ge C_4 a^{(i)}_n$, ... The proof of this proposition is a straightforward transposition of the proof of Theorem 2.1; we leave the details to the reader (for the univariate setting, we refer to Proposition 6 in [3] and its proof). We also state two other bounds (in dimension $d = 2$ for simplicity) that will be useful in the proof of Theorem 2.5.

Claim 5.5.
There are constants $C_6, C_7$ such that, for any ... For any ... Then we can use Theorem 5.1 to control the probabilities in the right-hand sides.
Proof of Claim 5.5. We prove only (5.8), the proof of (5.7) being identical to that of (5.6).
We decompose the probability into four parts, according to whether $\hat S^{(i)}_{n/2} \ge x_i/2$ or not, for $i = 1, 2$: there are two terms we need to control (the other two being symmetric).
(1) The first term we need to control is ... For the last inequality, we used the local limit theorem (2.1) to bound the last probability by $C(a^{(1)}_n a^{(2)}_n)^{-1}$ uniformly in $x_1, x_2, z_1, z_2$, and then summed over $z_1, z_2$. Then, we use Claim 5.2 to get that, provided $x_i \ge C_6 a^{(i)}_n$ with $C_6$ large enough, the last probability is bounded by ..., where we used the Cauchy-Schwarz inequality in the last step.
(2) The second term we need to control is ... Then, we can use Proposition 5.4, say for the second probability: indeed, we have that uniformly for the range of $z_2$ considered, ... (thanks to Claim 5.2). Using this in (5.9) and summing over $z_1$ and $z_2$ (and using again Claim 5.2), we finally get that (5.9) is bounded by ... Let us stress that, to obtain the statement of Claim 5.5, we additionally use that $P(\hat S^{(1)}_{n/4} \ge \ldots)$ ... This comes from splitting the left-hand side according to whether $S^{(1)}_{n/4} - \frac{1}{2} b^{(1)}_{n/2} \ge x_1/8$ or not, and using again Claim 5.2 to get that $|b^{(1)}_{n/4} - \frac{1}{2} b^{(1)}_{n/2}| \le x_1/16$.

Proof of Theorem 2.5
Let us write the details only in dimension $d = 2$, to avoid lengthy notation; the proof works identically when $d \ge 3$. Also, we only deal with $x \ge 0$. We fix a constant $C_8$: if $x_i \le C_8 a^{(i)}_n$ for $i = 1, 2$, then $x$ falls in the range of the local limit theorem (2.1), so we need to consider only two cases: $x_1 \ge C_8 a^{(1)}_n$ with $x_2 \le C_8 a^{(2)}_n$ (the case with the coordinates exchanged being symmetric), and $x_1 \ge C_8 a^{(1)}_n$ with $x_2 \ge C_8 a^{(2)}_n$.

Case $x_1 \ge C_8 a^{(1)}_n$, $x_2 \le C_8 a^{(2)}_n$
We will treat three different contributions, by writing, for some $C_9 > 0$, ... For the last term, we use Proposition 5.4, together with Theorem 5.1 (more precisely (5.2)), to get that it is bounded by a constant times ... is large enough (and similarly for the last term), to get that ... In order to treat the first two terms in (5.10), we control the probability, for $k \in \mathbb{Z}$, ..., where we set $\delta_n := \lfloor b_n \rfloor - \lfloor b_{n-1} \rfloor$, which is uniformly bounded by a constant. By Assumption 2.2, we get that, for any $u \in [2^k x_1, 2^{k+1} x_1)$ and $v \in \mathbb{Z}$, ... We used Potter's bound [6, Thm. 1.5.6] to get that, for $x_1$ sufficiently large and every $\eta > 0$, ... Since this bound is uniform over $u \in [2^k x_1, 2^{k+1} x_1)$, we may sum over $u$ the last probability in (5.12); note that (5.14) ...
• When $k \ge -3$, we therefore get from (5.12) that (taking $\eta < \gamma_1$ in (5.13)) ... We used the local limit theorem to get that there is a constant $C$ such that for any $z \in \mathbb{Z}$, $P(\hat S_n = \ldots) \le \ldots$, and then that $\sum_{v \in \mathbb{Z}} h_{2^k x_1}(|v|)/(1 + |v|) \le C$ for some constant $C$ not depending on $k$ or $x_1$, thanks to item (ii) in (2.5). From this, we obtain that ...
• When $k \le -4$, we use Claim 5.5 in (5.14), so that plugged into (5.12) we obtain ..., where we used again item (ii) in (2.5) to bound $\sum_{v \in \mathbb{Z}} h_{2^k x_1}(|v|)/(1 + |v|)$ by a (uniform) constant, and took $\eta = 1$. Then, we can use Theorem 5.1 to get that there are constants $c, c'$ such that uniformly in $k$, ... Indeed, we used that since $2^k x_1 \ge C_9 a^{(1)}_n$, the quantity $n\varphi_1(2^k x_1)(2^k x_1)^{-\gamma_1}$ is bounded by a constant. Also, in the case $\alpha_1 = 2$, we used that $\sigma_1(2^k x_1) \le \sigma_1(x_1)$, and the definition of $a^{(1)}_n$ (the last inequality comes from Potter's bound).
Therefore, summing over $k$ between $-\log_2(x_1/C_9 a^{(1)}_n)$ and $-4$, we finally obtain that $P(\hat S_n = x, M^{(1)}_n \ldots)$ ... Note that the second term is bounded by a constant, uniformly for $x_1/a^{(1)}_n \ge C$. Therefore, we conclude that ... Notice that, in the case $\alpha_1 < 2$, we have $\gamma_1 = \alpha_1$, and ..., so that the second term is negligible, since the first term is bounded below by a power of $x_1/a^{(1)}_n$.
We also used that (x 1 /a

Case $x_1 \ge C_8 a^{(1)}_n$, $x_2 \ge C_8 a^{(2)}_n$
n . As a first step, we write Term 1. Let us bound the first term in (5.21). We use Claim 5.5 (more precisely (5.8)), together with Theorem 5.1 (more precisely (5.2)) to get that Note that we also used that (5.2) is also bounded by exp(−cx 2 /a (2) n ) for the first inequality. Then, we used that (a + b) 1/2 a 1/2 + b 1/2 for any a, b 0, and then that Term 3. We now bound the third term in (5.21) by a constant times (x 1 x 2 ) −1 nϕ 1 (x 1 )x −γ1 1 . We proceed as for the previous section (5.12)- (5.19). The analogous of (5.12) is, for Then, one bounds P(X 1 = (u, v)) by using Assumption 2.2 (as in (5.13)), and by summing over u ∈ [2 k x 1 , 2 k+1 x 1 ) one needs to control (analogously to (5.14)) uniformly over v C 9 a n .
The first probability in (5.23) is treated by using Proposition 5.4, together with (5.2) (and the remark below): since x 2 − v + δ (2) n is bounded below by x 2 /2 uniformly in the range of v considered (and assuming that C 9 < C 8 /2), we get that We used the fact that x 2 C 8 a n for the last inequality. Hence, we get that for k − 3 and the last sum is bounded by a constant uniform in k, x 1 , thanks to item (ii) in (2.5). Summing over k − 3, we get that, analogously to (5.16), For the second probability in (5.23) (with k − 4), we invoke Claim 5.5: one can easily adapt the proof of (5.8), using that x 2 − v + δ (2) n x 2 /2 uniformly for the range of v considered, to get that P Ŝ (1) For the second inequality, we used Theorem 5.1, more precisely (5.17). Therefore, we obtain that for k − 4 with 2 k x 1 C 9 a (1) n , with the last sum bounded by a constant uniform in k, x 1 . Summing over k between −4 and − log 2 (x 1 /C 9 a n ) (as done in (5.18)), we get that To conclude, we have that Term 4. We now bound the fourth term in (5.21). We stress that the treatment is not completely symmetric to that of Term 3, since we wish to obtain a bound that depends on the tail of the first coordinate (i.e. on ϕ 1 (·) and γ 1 ), whereas (5.28) above yields the . We however proceed analogously: we control We end up with, analogously to (5.25), Also, for k − 4, instead of (5.26), we get and, analogously to (5.27), we obtain All together, and since nϕ( Term 2. It remains to deal with the second term in (5.21), which is the most technical.
We will estimate the probabilities, for $k, j \in \mathbb{Z}$, ... Here, we split the probability into two contributions: either the two maxima $M^{(1)}_n, M^{(2)}_n$ are attained in one increment (with both coordinates large), see (5.32), or the two maxima are attained by separate increments, see (5.39). Part 1. The first contribution is, using a union bound and the exchangeability of the increments, ... Then we use Assumption 2.2 (item (i) in (2.5)) to get that there is a constant $C$ such that for any $j, k$ and any $(u, v)$, ... Therefore, in (5.32), we can sum over $u, v$ the last probability, and we treat it differently according to whether $k \ge -3$ or not and $j \ge -3$ or not (similarly to (5.14)): after summation over $u, v$, we obtain the following upper bound ... Then, we can use Theorem 5.1 to get that for $k \le -4$ with $2^k x_1 \ge C_9 a^{(1)}_n$ we have, with the same argument as for (5.17), ... Hence, we obtain (the calculation is analogous to that in (5.18)) ... In the case $k \le -4$, $j \le -4$, we get that ..., and a similar calculation as above gives ... All together, we obtain that ... Again, we use Assumption 2.2 to bound the first two probabilities: for the ranges of $u, v$ and $s, t$ considered, using item (iii) in (2.5), we have $h^{(i)}_{2^k x_2}(|s|)/(1 + |s|)$ ... Then, we may sum the last probability in (5.39) over $u$ and $t$ in the range considered, and get after summation (using also that, for the range of $v$ and $s$ considered, ...), and to treat these terms, we can again use Theorem 5.1, in the same way as for (5.34).
Then we can sum over v and s and use item (ii) in (2.5) to get that v h (i) 2 k x1 (|v|)/(1 + |v|) Going back to (5.39), and starting with the case k, j − 3, we get +∞ k,j=−3 Similarly, and using (5.34), we get that if k − 4, j − 3 (the case k − 3, j − 4 is symmetric) As above (with the same argument as in (5.36)), we therefore get that An identical argument holds in the case k − 4, j − 4, and we end up with k −log 2 (x1/C9a For the last inequality, we used that x 2 C 8 a (2) n , so that nϕ 2 (x 2 )x −γ2 2 is bounded by a constant, thanks to the definition (1.4) of a (2) n . Therefore, going back to (5.31), and using (5.38)-(5.41), we obtain that This concludes the proof of Theorem 2.5, since the same bound applies to any coordinate.

Proof of Theorem 2.6
Again, we prove only the case of the dimension d = 2 for simplicity. Recall that we work in the balanced case, so we write a n ≡ a (i) n and α ≡ α i . Let us assume that |x 1 | |x 2 |, so that c|x 1 | x |x 1 | (the other case is symmetric). Suppose also for simplicity that x 1 is positive (so we can drop the absolute value), and x 1 > C 8 a n . We n ∈ (C 9 a n , x 1 /8) The last term in (5.44) can be bounded using Proposition 5.4, together with (5.2) where we used that e −cx1/an (a n /x 1 ) 2 provided that x 1 /a n C 8 with C 8 large enough. For the first term in (5.44), because of the exchangeability of the X i and thanks to a union bound, we get P Ŝ n = x, M (1) n x 1 /8 Here, we used (2.8): P(X 1 = y) is bounded by a constant times ϕ( y ) y −(2+γ) for the range of y under summation (and it is bounded by a constant times ϕ( ). It remains to control the middle term in (5.44). We write P Ŝ n = x, M (1) n ∈ (C 9 a n , x 1 /8) =  For the last inequality, we used (2.8) to bound P(X 1 = y) cϕ( y ) y −(2+γ) : this is bounded, for y with y 1 2 −(j+1) x 1 , by a constant times ϕ(2 −(j+1) x 1 )x −(d+γ) 1 2 (j+1)(2+γ) , with ϕ(2 −(j+1) x 1 ) 2 j ϕ(x 1 ) thanks to Potter's bound. Then, for every j, the sum over y with y 1 2 −(j+1) x 1 gives rise to P Ŝ (1) n is bounded by a constant).
Then, it remains to use Theorem 5.1, more precisely (5.17), to get that the last sum in (5.47) is bounded by a quantity which is itself bounded by a constant. Therefore, we obtain the desired bound on $\mathbf P(\hat S_n = x,\, M^{(1)}_n \in (C_9 a_n, x_1/8))$.

6 Proof for case I

Recall that $n_i$ is defined by $a^{(i)}_{n_i} \sim |x_i|$. We work along the favorite direction, that is we assume that $x_i/a^{(i)}_{n_1} \to t_i \in \mathbb R^*$ for every $i \in \{1, \ldots, d\}$, which is equivalent to having $n_i \sim |t_i|^{\alpha_i} n_1$ (the reference coordinate is the first one, but this is only for convenience). We fix $\varepsilon > 0$, and decompose $G(x)$ into three parts: the sums over $n < \varepsilon n_1$, over $\varepsilon n_1 \le n \le \varepsilon^{-1} n_1$, and over $n > \varepsilon^{-1} n_1$. The middle part gives the main contribution: we treat it first, before showing that the other two parts are negligible. In this section and in the rest of the paper, we often omit integer parts: for instance, we treat $\varepsilon n_1$ and $\varepsilon^{-1} n_1$ as if they were integers.

Main contribution
Because $n_i \sim |t_i|^{\alpha_i} n_1$, for $\varepsilon \le n/n_1 \le \varepsilon^{-1}$ the probability $\mathbf P(S_n = x)$ falls into the range of application of the local limit theorem (2.1). The second term in the resulting estimate is negligible compared to the first one, so we focus on the first term.
6.2 Third part in (6.1)

Using the local limit theorem (2.1), we get that there is a constant $C$ bounding each term of the sum.
For the last inequality we used the regular variation of the $a^{(i)}_n$. Using it again, we get that there is a constant $c$ such that the second term in (6.1) is bounded as in (6.3).

First part in (6.1)
Here, for the first term, we used a bound on $n/(a^{(1)}_n)^2$, valid provided that $n_1/n$ is large enough (i.e. $\varepsilon$ small enough). For the second inequality, we used that $|x_i| \ge c\, a^{(i)}_{n_1}$ and that $a^{(1)}_n \le c\, \varepsilon^{1/2\alpha_1}\, a^{(1)}_{n_1}$ for all $n \le \varepsilon n_1$ (thanks to Potter's bound). Then, since $n_1\, \varphi_1(|x_1|)\, |x_1|^{-\gamma_1}$ is bounded by a constant, we get the desired bound.

Proof in the marginal case
Again, we work along the favorite direction (see (3.2)), so that in particular $n_1 \sim \lambda n_2$ for some constant $\lambda > 0$ (recall the definition (3.1) of $n_i$). For $\varepsilon > 0$ fixed, we split the Green function as in (6.6); the main contribution comes from the second sum. Thanks to the local limit theorem (2.1), we get the corresponding estimate for $n$ sufficiently large. For $n > \varepsilon^{-1} n_1$ we have $|x_1|/a^{(1)}_n \le 2\varepsilon^{1/2}$ for $|x_1|$ large enough (thanks to the definition of $n_1$ and the fact that $a^{(1)}_n$ is regularly varying with exponent $1/2$), and also $n \ge \frac12 \lambda \varepsilon^{-1} n_2$, so that $|x_2|/a^{(2)}_n \le 2\lambda^{-1/2}\varepsilon^{1/2}$. All together, and since $g_{\alpha}$ is continuous at $0$, for every $\eta > 0$ we can choose $\varepsilon$ small enough so that, for $|x_1|, |x_2|$ large enough, the density term is within $2\eta$ of its limit. Summing over $n$, we obtain the upper bound, and a similar lower bound holds, with $2\eta$ replaced by $-2\eta$. We now treat the first sum in (6.6). First, for $n \le \varepsilon n_1$, Theorem 2.1 gives the corresponding bound; exactly as what is done above to obtain (6.5), we conclude.
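The structure of this computation can be illustrated numerically in a simplified setting: assuming a standard two-dimensional Gaussian local limit $\mathbf P(S_n = x) \approx (2\pi n)^{-1} e^{-\|x\|^2/2n}$ (an idealization, not the paper's exact hypotheses), the middle sum over $\varepsilon n_1 \le n \le \varepsilon^{-1} n_1$, with $n_1 = \|x\|^2$, is a Riemann sum for an explicit integral:

```python
import math

# Simplified sketch: under the Gaussian local limit heuristic
#   P(S_n = x) ~ (2*pi*n)^{-1} * exp(-|x|^2 / (2n)),
# the middle sum over eps*n1 <= n <= n1/eps, with n1 = |x|^2, is a Riemann
# sum (change of variable n = v*n1) for
#   (2*pi)^{-1} * int_eps^{1/eps} v^{-1} * exp(-1/(2v)) dv.

def middle_sum(x_norm, eps):
    n1 = int(x_norm ** 2)
    return sum(math.exp(-x_norm ** 2 / (2.0 * n)) / (2.0 * math.pi * n)
               for n in range(int(eps * n1), int(n1 / eps) + 1))

def middle_integral(eps, steps=100000):
    # midpoint rule for (2*pi)^{-1} int_eps^{1/eps} v^{-1} e^{-1/(2v)} dv
    a, b = eps, 1.0 / eps
    h = (b - a) / steps
    mids = (a + (k + 0.5) * h for k in range(steps))
    return sum(math.exp(-1.0 / (2.0 * v)) / v for v in mids) * h / (2.0 * math.pi)
```

For $\|x\| = 200$ and $\varepsilon = 1/4$ the two quantities agree to within $10^{-3}$, consistent with the Riemann-sum step of the proof.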
For $\varepsilon n_1 \le n \le \varepsilon^{-1} n_1$, the local limit theorem (2.1) gives that there is a constant $C > 0$ bounding each term of the sum. As a conclusion, we get that there is some constant $C_\varepsilon$ such that the announced bound holds.

About the balanced case
Let us write $a_n \equiv a^{(i)}_n$, and $L(\cdot), \sigma(\cdot)$ in place of $L_i(\cdot), \sigma_i(\cdot)$. The walk $S_n$ is transient if and only if $\sum_{n=1}^{+\infty} (a_n)^{-2} < +\infty$. We may compare the sum to the integral $\int_1^{+\infty} (a_t)^{-2}\,\mathrm dt$: by the change of variable $u = a_t$ (by definition of $a_t$ we have $t \sim (a_t)^2/\sigma(a_t)$, hence $\mathrm dt \sim 2u\,\mathrm du/\sigma(u)$), we get that $\int_1^{+\infty} (a_t)^{-2}\,\mathrm dt < +\infty$ if and only if $\int_1^{+\infty} \frac{2\,\mathrm du}{u\,\sigma(u)} < +\infty$. With the same change of variable, and using that $a_{n_1} \sim |x_1|$, we estimate the tail sum $\sum_{n \ge n_1} (a_n)^{-2}$, which gives the announced result.
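A numerical sketch of this transience criterion, in the hypothetical pure power-law case $a_n = n^{1/\alpha}$ (no slowly varying correction): the series $\sum_n (a_n)^{-2} = \sum_n n^{-2/\alpha}$ converges iff $\alpha < 2$.

```python
# Transience criterion in the balanced case: the walk is transient iff
# sum_n (a_n)^{-2} < infinity.  With the hypothetical scaling a_n = n^{1/alpha},
# the series sum_n n^{-2/alpha} converges iff alpha < 2.

def block_sum(alpha, start, stop):
    """Partial sum of (a_n)^{-2} = n^{-2/alpha} over start <= n < stop."""
    return sum(n ** (-2.0 / alpha) for n in range(start, stop))

# alpha = 1.5 (transient): successive decades contribute geometrically less.
transient_blocks = [block_sum(1.5, 10 ** k, 10 ** (k + 1)) for k in (1, 2, 3)]
# alpha = 2 (critical, recurrent): each decade contributes about log(10).
critical_blocks = [block_sum(2.0, 10 ** k, 10 ** (k + 1)) for k in (1, 2, 3)]
```

The decade sums shrink geometrically in the first case and stay near $\log 10 \approx 2.30$ in the second, matching the convergence/divergence dichotomy.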
7 Proof for case II (non-zero mean): $\mu_i \neq 0$ for some $i$ with $\alpha_i > 1$

In this section, we prove Theorem 3.3. Recall our notations: we have $\mu_i = 0$ ($b^{(i)}_n \equiv 0$) for $i > i_0$, and $\mu_{i_0} \neq 0$ with $\alpha_{i_0} > 1$. We set $n_{i_0} := x_{i_0}/\mu_{i_0}$ ($x_{i_0}$ and $\mu_{i_0}$ need to have the same sign), and we also denote $m_{i_0} := a^{(i_0)}_{n_{i_0}}$, so that the typical number of steps for the $i_0$-th coordinate to visit $x_{i_0}$ is $n_{i_0} + O(m_{i_0})$. For simplicity, we work with $\mu_{i_0}, x_{i_0} > 0$. We consider the case where $\|x\| \to +\infty$ along the favorite direction, recall (3.5).
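The heuristic that the $i_0$-th coordinate visits $x_{i_0}$ after $n_{i_0} + O(m_{i_0})$ steps can be checked by simulation in a simple one-dimensional finite-variance case (illustrative, not part of the proof; all numerical choices below are ours):

```python
import random

# Simulation sanity check: for a 1-d walk with i.i.d. mean-mu increments and
# finite variance (alpha = 2), the first passage time N(x) = inf{n : S_n > x}
# concentrates around n0 = x / mu at scale m0 = a_{n0} ~ sqrt(n0).

random.seed(0)
mu, x = 2.0, 10 ** 5
n0 = x / mu  # typical number of steps to reach level x

def first_passage(level):
    s, n = 0.0, 0
    while s <= level:
        s += random.expovariate(1.0 / mu)  # exponential increments, mean mu
        n += 1
    return n

N = first_passage(x)
m0 = n0 ** 0.5  # fluctuation scale a_{n0}, up to constants
```

With these parameters $N$ falls within a few multiples of $m_0$ of $n_0$, in line with the $n_{i_0} + O(m_{i_0})$ window used throughout this section.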
We fix $\varepsilon > 0$, and decompose $G(x)$ into three parts, see (7.1). The main contribution is the second part, which we treat first, before showing that the two other parts are negligible.

Main contribution
Since the summation index ranges from $n_{i_0} - \varepsilon^{-1} m_{i_0}$ to $n_{i_0} + \varepsilon^{-1} m_{i_0}$ and because we work in the favorite direction, $\mathbf P(S_n = x)$ falls into the range of application of the local limit theorem (2.1). The second term in the resulting estimate is negligible compared to the first one, so we focus on the first term. Let us consider the different terms appearing in it, for the range of $n$ considered. First of all, notice that $n = (1 + o(1))\, n_{i_0}$, since $m_{i_0} = o(n_{i_0})$: it gives in particular that $a^{(i)}_n \sim a^{(i)}_{n_{i_0}}$ uniformly for $|n - n_{i_0}| \le \varepsilon^{-1} m_{i_0}$. If $b^{(i)}_n \equiv 0$ (in particular if $i > i_0$), then we conclude more directly, using (3.5). The last case we need to consider is when $\alpha_i = 1$. The first term then goes to $0$ as $n_{i_0} \to +\infty$, uniformly for $|n - n_{i_0}| \le \varepsilon^{-1} m_{i_0}$: indeed, $m_{i_0}$ is regularly varying in $n_{i_0}$ with exponent $1/\alpha_{i_0} < 1$, in contrast with $a^{(i)}_{n_{i_0}}$, which is regularly varying with exponent $1$ (and $|\mu_i(a^{(i)}_{n_{i_0}})|$ is a slowly varying function). For the second term, we have $n \sim n_{i_0}$, so the corresponding ratio goes to $1$. We therefore obtain, uniformly for $|n - n_{i_0}| \le \varepsilon^{-1} m_{i_0}$ and using again (3.5), the limit of each term. Combining all the possible cases in (7.2)–(7.3)–(7.4), and recalling Section 2.4, we get the convergence of the rescaled argument for all $i$, uniformly for $|n - n_{i_0}| \le \varepsilon^{-1} m_{i_0}$ (note that if $\alpha_i < \alpha_{i_0}$ we have $a_{i,i_0} = 0$). Because $g_{\alpha}$ is continuous, we obtain the claimed convergence. The last identity holds thanks to a Riemann-sum approximation, as $m_{i_0} = a^{(i_0)}_{n_{i_0}} \to +\infty$.
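The Riemann-sum step above can be checked numerically: with the standard normal density as a stand-in for $g_{\alpha}$ (a simplifying assumption), the sum $\sum_{|n - n_{i_0}| \le \varepsilon^{-1} m}\, m^{-1} g\big((n - n_{i_0})/m\big)$ approaches the corresponding integral as $m \to +\infty$:

```python
import math

# Riemann-sum approximation with the standard normal density g as a
# stand-in for g_alpha: as m -> infinity,
#   sum_{|n - n0| <= M} (1/m) * g((n - n0)/m),  with M = m/eps,
# converges to the integral of g over [-1/eps, 1/eps].

def gauss(s):
    return math.exp(-s * s / 2.0) / math.sqrt(2.0 * math.pi)

def density_riemann_sum(n0, m, eps):
    M = int(m / eps)
    return sum(gauss((n - n0) / m) / m for n in range(n0 - M, n0 + M + 1))

eps = 0.5
exact = math.erf((1.0 / eps) / math.sqrt(2.0))  # integral of gauss over [-2, 2]
approx = density_riemann_sum(10 ** 6, 1000, eps)
```

Already for $m = 1000$ the sum matches the integral to three decimal places.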

Last part in (7.1)
We prove that there are constants $C_{10}, C_{11}$ such that the bound (7.7) holds for every $r \ge C_{10} m_{i_0}$: applying it with $r = \varepsilon^{-1} m_{i_0}$, we therefore get the desired bound on $\sum_{n = n_{i_0} + \varepsilon^{-1} m_{i_0}}^{+\infty} \mathbf P(S_n = x)$. Let us now prove (7.7). For any $n \ge n_{i_0} + r$ with $r \ge C_{10} m_{i_0}$ and $C_{10}$ large enough, we can use Theorem 2.1 to bound $\mathbf P(S_n = x)$ (we used that $a^{(i_0)}_n \ge c\, m_{i_0}$ for $n \ge n_{i_0}$). We therefore get a bound on $\sum_{n = n_{i_0} + r}^{+\infty} \mathbf P(S_n = x)$, consisting of two sums. Let us deal with the first sum. If $r \le n_{i_0}$, it is bounded above by a constant times the announced quantity: here we used that $\gamma_{i_0} > 1$, and that the sequence under summation is regularly varying. In the case $r \ge n_{i_0}$, the first term in (7.9) is bounded by a constant times the quantity in (7.10). For the second sum in (7.9) (present when $\alpha_{i_0} = 2$), we get in the case $r \le n_{i_0}$ that it is bounded by a constant times the corresponding quantity with $a^{(1)}_{n_{i_0}}$; for $r \ge n_{i_0}$ the same bound holds with $a^{(i_0)}$. This term is therefore negligible compared to (7.10) (in the case $r \ge n_{i_0}$). This concludes the proof of (7.7).
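The tail-sum estimates above repeatedly compare $\sum_{n \ge r} c_n$, for a regularly varying sequence $c_n$ of index $-\rho < -1$, with $r\, c_r/(\rho - 1)$ via an integral comparison. A quick numerical check in the pure power-law case (an illustrative simplification, omitting slowly varying factors):

```python
# Integral comparison used in the tail-sum bounds: for c_n = n^{-rho} with
# rho > 1 (pure power law), the tail sum_{n >= r} c_n is asymptotic to
# r * c_r / (rho - 1) = r^{1 - rho} / (rho - 1).

def tail_sum(rho, r, horizon):
    return sum(n ** (-rho) for n in range(r, horizon))

rho, r = 2.0, 1000
estimate = r ** (1.0 - rho) / (rho - 1.0)  # integral comparison
actual = tail_sum(rho, r, 10 ** 6)         # truncated far beyond r
```

For $\rho = 2$ and $r = 1000$ the two quantities agree to better than one percent.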

First part in (7.1)
We again split the sum into two parts, see (7.11). The second part in (7.11) can be treated in the same manner as for (7.7): we have the corresponding bound for any $C_{10} m_{i_0} \le r \le n_{i_0}/2$.

EJP 24 (2019), paper 46.
Indeed, this comes from the same argument as for (7.9)–(7.10), which we are able to use for $n \le n_{i_0} - C_{10} m_{i_0}$. Then, using (7.12) with $r_\varepsilon = \varepsilon^{-1} m_{i_0}$, we get a bound as in (7.8). It remains to control the first term in (7.11), and this is where we use one of our assumptions in Theorem 3.3.
We invoke Theorem 2.1: there is a constant $C$ such that the estimate holds uniformly for $n \le n_{i_0}/2$. First of all, bounding $a^{(i)}_n$ below by a constant, we obtain a first bound, the last identity being valid since $e^{-c (n_{i_0}/m_{i_0})^2}$ decays faster than any power of $n_{i_0}$.
The sequence under summation is regularly varying with exponent larger than $-1$, so that the sum grows like a slowly varying function $\bar\varphi$ (or remains bounded); the $o(1)$ then comes from the fact that the prefactor involving $\varphi_{i_0}(n_{i_0})$, being regularly varying with exponent larger than $-\gamma_{i_0}$, vanishes as $n_{i_0} \to +\infty$.
It remains to bound $|x_i - b^{(i)}_n|$ in order to apply Theorem 2.5. If $b^{(i)}_n \equiv 0$, then, since we assumed that $t_i \neq 0$, we have that $|x_i| \ge c\, a^{(i)}_n$ for all $n \le n_{i_0}/2$, provided $n_{i_0}$ is large. Otherwise, we bound $|x_i - b^{(i)}_n|$ from below, uniformly for $n \le n_{i_0}/2$, using that $|b^{(i)}_{n_{i_0}}|$ is large enough provided that $n_{i_0}$ is large enough. All together, recalling also that $|x_{i_0} - b^{(i_0)}_n| \ge c\, n_{i_0}$ on this range, we get the desired bound. Summing the second term over $n \le n_{i_0}/2$, (7.15) already gives that it is negligible compared to (7.6). For the other term, the last identity holds since $\gamma_{i_0} > 1$.
8 Proof for case III: $\alpha_{i_0} = 1$

In this section, we prove Theorem 3.4.
We also define $n_{i_0}$ and $m_{i_0}$ as in (8.1). We used the definition (1.4) of $a^{(i_0)}_n$ for the asymptotic equivalence, and then that $L_{i_0}(x) = o(\mu_{i_0}(x))$ (both if $\mu_{i_0} \in \mathbb R^*$ or if $p_{i_0} \neq q_{i_0}$, thanks to [6, Prop. 1.5.9.a]). We stress that the typical number of steps for the $i_0$-th coordinate to reach $x_{i_0}$ is $n_{i_0} + O(m_{i_0})$. The intuition is the following: when looking for which $n$ we have $b^{(i_0)}_n$ close to $x_{i_0}$, and using that $|b^{(i_0)}_n - b^{(i_0)}_{n_{i_0}}|$ is roughly of the order of $(n - n_{i_0})\,|\mu_{i_0}(a^{(i_0)}_{n_{i_0}})|$ (see (8.4) below for details), we find that $n - n_{i_0}$ has to be of the order of $m_{i_0}$. This intuition is confirmed in [2, 21], where it is shown that $(N(x_{i_0}) - n_{i_0})/m_{i_0}$ converges in distribution as $|x_{i_0}| \to +\infty$, where $N(x_{i_0}) = \inf\{n,\ S^{(i_0)}_n > x_{i_0}\}$ is the first passage time to $x_{i_0}$ (if $x_{i_0} > 0$). Then we fix $\varepsilon > 0$, and again split $G(x)$ into three parts, see (8.2). As suggested above, the main contribution is the middle sum. In the following, we work with $x_{i_0}$ and $\mu_{i_0}(x_{i_0})$ positive, simply to avoid the use of absolute values.

Main contribution
For the middle sum in (8.2), the fact that we work along the favorite scaling tells us that we can use the local limit theorem (2.1). Note that, for the range of $n$ considered, we have $n = (1 + o(1))\, n_{i_0}$ (recall (8.1)), so that $a^{(i)}_n \sim a^{(i)}_{n_{i_0}}$. If $b^{(i)}_n \equiv 0$, then thanks to our assumption (3.9) we conclude directly. Otherwise, we decompose the relevant ratio into three parts: the first part goes to $t_i \in \mathbb R$, thanks to (3.9); the last part goes to $0$ thanks to Claim 5.3; and for the middle part, since $n \sim n_{i_0}$, the corresponding ratio goes to $1$, and we use assumption (3.8) to compare $\mu_i(a^{(i)}_n)$ with $\mu_i(a^{(i)}_{n_{i_0}})$. In the end, we obtain the announced convergence, uniformly for $|n - n_{i_0}| \le \varepsilon^{-1} m_{i_0}$. Using this in the sum (8.3), and with the definition $\tilde\kappa_i = \tilde a_{i,i_0} \mathbf 1_{\{i \le i_0\}}$ (with $\tilde a_{i,i_0} = 0$ if $\alpha_i < 1$), we get, thanks to the continuity of $g_{\alpha}$, the limit of the right-hand side of (8.3), where we used a Riemann-sum approximation to obtain the last integral.

Third term in (8.2)
First of all, let us stress that there is a constant $C_{12}$ such that (8.5) holds for $n_{i_0} + C_{12} m_{i_0} \le n \le 2 n_{i_0}$. We used that $\mu_{i_0}(a^{(i_0)}_n) \sim \mu_{i_0}(a^{(i_0)}_{n_{i_0}})$ for $n_{i_0} \le n \le 2 n_{i_0}$ (see Claim 5.3), and a bound on $n_{i_0} L_{i_0}(a^{(i_0)}_{n_{i_0}})$: the first inequality comes from the fact that $b^{(i_0)}_n$ is regularly varying with exponent $1$, the second from the fact that $b^{(i_0)}_n/a^{(i_0)}_n \to +\infty$. Therefore, we can use Theorem 2.5 (recall $\alpha_{i_0} = 1$) to get a bound valid for all $n \ge n_{i_0} + C_{12} m_{i_0}$. Summing this bound for $n$ between $n_{i_0} + r$ and $2 n_{i_0}$, then setting $r = \varepsilon^{-1} m_{i_0}$ and using the definition (1.4) of $a^{(i_0)}_n$, we get the announced estimate. For the sum with $n \ge 2 n_{i_0}$, we use (8.8) together with the fact that the sequence under summation is regularly varying with exponent smaller than $-1$. For the last $o(1)$, we used that $b^{(i_0)}_n/a^{(i_0)}_n \to +\infty$. Similarly, for the sum over $n_{i_0} - r \le n \le n_{i_0} - \varepsilon^{-1} m_{i_0}$ we obtain a bound as in (8.9); with $r = \varepsilon^{-1} m_{i_0}$, we obtain the same upper bound as in (8.10). For the term $n \le n_{i_0}/2$, we use the corresponding lower bound, valid provided that $n_{i_0}$ is large enough. Since we work along the favorite direction (3.9), we can bound $|x_i - b^{(i)}_n|$ from below; hence, similarly to (8.11), we get the claimed estimate.

9 Proofs when x is away from the favorite direction or scaling
In this section, we prove the renewal estimates when $x$ is away from the favorite direction or scaling, i.e. we prove Theorems 4.1 (in Section 9.1), 4.2 (in Section 9.2) and 4.3 (in Section 9.3). Again, let us work with all the $x_i$'s positive in this section, to avoid the use of absolute values. Recall also that we work in dimension $d = 2$, with $\alpha = (2, 2)$, and under Assumption 2.2.

Case I, proof of Theorem 4.1
Recall that $n_i$ is defined, up to asymptotic equivalence, by $a^{(i)}_{n_i} \sim x_i$, and that $i_0, i_1$ are such that $n_{i_0} = \min\{n_1, n_2\}$, $n_{i_1} = \max\{n_1, n_2\}$. In such a way, we have $x_i/a^{(i)}_{n_{i_0}} \ge c$ for $i = 1, 2$. Let us work in the case where $x_{i_1}/a^{(i_1)}_{n_{i_0}} \ge C_{13}$ for some large constant $C_{13}$ (otherwise one falls in the favorite scaling (3.2)): it is equivalent to having $n_{i_1}/n_{i_0}$ larger than some large constant. We let $n_{i_0} \le m \le n_{i_1}$ (we optimize its value below), and decompose $G(x)$ into two parts, see (9.1). For the first part, since $x_{i_1} \ge c\, a^{(i_1)}_n$ for $n \le m \le n_{i_1}$, Theorem 2.5 gives us the corresponding bound. For the second inequality, we used a comparison of the scaling sequences valid for all $n \ge n_{i_0}$, and also that $x_{i_1} \sim a^{(i_1)}_{n_{i_1}}$. For the second part in (9.1), we fix some $\delta > 0$ (small), and we use the local limit theorem (2.1) to get the analogous bound; we used in the last inequality a bound on $n/(a^{(i)}_n)^2$, together with Potter's bound. Then, it remains to optimize our choice of $m$: combining (9.3) with (9.2), and using that there is a constant $c_\delta$ controlling the ratios of the scaling sequences, we obtain the result.
This gives the first part of the statement, i.e. (4.1).
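The optimization over $m$ just performed balances a bound decreasing in $m$ against one increasing in $m$. A generic numerical sketch of this step (the constants and exponents below are hypothetical placeholders, not those of the paper):

```python
# Generic sketch of the "optimize over the cutoff m" step: the first part of
# the decomposition is bounded by A * m**(-a) (decreasing in m), the second
# by B * m**b (increasing in m); the best cutoff balances the two.

def best_cutoff(A, a, B, b, candidates):
    """Candidate m minimizing the total bound A * m**(-a) + B * m**b."""
    return min(candidates, key=lambda m: A * m ** (-a) + B * m ** b)

# Setting the derivative to zero gives m* = (a*A / (b*B))**(1/(a+b)); with
# A = 10**4 and a = b = B = 1 this is m* = 100.
m_star = best_cutoff(10 ** 4, 1.0, 1.0, 1.0, range(1, 10 ** 4))
```

The discrete minimizer agrees with the calculus prediction, which is how the exponent in (4.1) arises from equating the two bounds.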
In the case where $(S_n)_{n \ge 0}$ is a renewal process, i.e. $X_1 \ge 0$ (in particular $\alpha_i \in (0, 1)$), one has a much sharper estimate than (9.3). Indeed, we have the following large deviation result: for all $n \ge C_{13} n_1$ (so that $x_{i_0} \le a^{(i_0)}_n/4$), a bound holds with some exponent $\zeta_{i_0}$ (whose value depends on $\alpha_{i_0}$). The first inequality follows from the same argument as above.

Case II-III (a), proof of Theorem 4.2
Here, we assume $\alpha_1, \alpha_2 \ge 1$ and, for $i = 1, 2$: either $\mu_i \in \mathbb R^*$, or $\alpha_i = 1$ with $p_i \neq q_i$. Recall that $n_i$ is such that $b^{(i)}_{n_i} = x_i$ (we work in the case where they are both positive), and $m_i = a^{(i)}_{n_i}/\mu_i(a^{(i)}_{n_i})$. Note that in any case we have $m_i = o(n_i)$ as $n_i \to +\infty$.

A preliminary estimate
We first prove the following result, which will be useful in this section and the next one: the estimate (9.6), featuring a factor $\log m$. To show this, recall the lower bound on $|x_i - b^{(i)}_n|$ for all $n$ between $n_i - m_i$ and $n_i + m_i$ (see (7.5) and (8.5)). Therefore, we obtain the first bound.
(see (8.7) in the case $\alpha_i = 1$). A similar argument holds for the symmetric part of the sum, between $n_i - m$ and $n_i$, and this concludes the proof of the first line of (9.6).
The last inequality comes from the fact that either $\mu_i$ is a positive constant (so the bound is trivial), or $\mu_i(\cdot)$ is slowly varying. Let us mention that the case $n_2 \le n_1 \le 2 n_2$ would be treated symmetrically. We let $m = (n_2 - n_1)/2$, so that $n_1 + m = n_2 - m$, and we assume that $m \ge m_1 \vee m_2$ (otherwise we are in the favorite direction); we split the sum accordingly. For the first term, we split it according to whether $n < n_1 - m$ or $n \ge n_1 - m$. For $n \in (n_1 - m, n_1 + m)$, and since $n \le n_2 - m_2$, we have a lower bound on $|x_2 - b^{(2)}_n|$ (see (8.7)): Theorem 2.5 then applies. Then, by (9.6), and using the regular variation of $n_2\, \varphi(a_{n_2})\, (a_{n_2})^{-\alpha}$ (together with Potter's bound), we obtain for any $\delta > 0$ a bound on $\sum_{n = n_1 - m}^{n_1 + m} \mathbf P(S_n = x)$. Notice that $\alpha_2 = \gamma_2$ if $\alpha_2 \in (0, 2)$: we can rewrite the above bound accordingly. For the terms with $n \le n_1 - m$, we use again Theorem 2.5: since $|x_1 - b^{(1)}_n|$ verifies (8.7), and setting $j = n_2 - n$ so that $|x_2 - b^{(2)}_n| \ge c\, j\, \mu_2(a^{(2)}_{n_2})$, we get the corresponding estimate. Therefore, summing over $j$ as done in (8.12)–(8.13) (in the case $\alpha_1 = 1$; the case $\alpha_1 > 1$ is identical), we obtain a bound as in (9.8). Similarly, we can treat the cases $n_2 - m \le n \le n_2 + m$ and $n \ge n_2 + m$, and we get a bound on $\sum_{n = n_2 - m}^{n_2 + m} \mathbf P(S_n = x)$. All together, combining (9.8)–(9.9) and (9.10)–(9.11), and bounding $\log(m/m_i)$ by a constant times $(m/m_i)^\delta$, we can conclude the proof of (4.3), recalling that we chose $m = (n_2 - n_1)/2$.

We now turn to the remaining estimate, for which we split the sum as in (9.12). For the first term, since $n \le m \le 3 n_2/4$, for any $\delta$ there is a constant $c_\delta$ such that $|x_2 - b^{(2)}_n| \ge c_\delta\, n_2^{1-\delta}$, so that Theorem 2.5 gives that for any $\delta > 0$ there is a constant $C_\delta$ such that, for $n \le m$, the corresponding bound holds. Here, the second term decays faster than any power of $n_2$ (when the term is present, $a_{n_2}$ is regularly varying with exponent $1/2$), so it is negligible compared to the first term. Then, summing over $n \le m$ and using (9.6), we obtain that
$$\sum_{n=1}^{m} \mathbf P(S_n = x) \le C_\delta\, m^{1+\delta}\, (n_2)^{-(1+\gamma_2)+\delta}. \qquad (9.13)$$
For the second term in (9.12), we divide it into three parts: $m < n < 3 n_2/4$; $3 n_2/4 \le n \le 2 n_2$; and $n > 2 n_2$.
For $m < n \le 3 n_2/4$, we use again that $|x_2 - b^{(2)}_n| \ge c_\delta\, n_2^{1-\delta}$ and that $|x_1 - b^{(1)}_n| \ge c\, b^{(1)}_n$ (recall $m \ge 3 n_1/2$), so that Theorem 2.5 again gives a bound for any $\delta > 0$. For the last inequality, we discarded the exponential term since it decays faster than any power of $n$, and used that $n \ge m \ge n_2^{1 \wedge (\gamma_2/\gamma_1)}$ to bound $n_2^{\delta}\, n^{2\delta}$ by $n_2^{\bar\delta}\, n^{-\delta}$ (by picking $\bar\delta = \delta + 3\delta(1 \wedge (\gamma_2/\gamma_1))$). Summing over $n \ge m$, and since $\gamma_1 + \delta > 1$, we get the corresponding bound. For $3 n_2/4 \le n \le 2 n_2$, Theorem 2.5 (discarding the exponential term as above) gives a bound involving $|x_1 - b^{(1)}_n|$ times $n_2\, (n_2)^{-(1+\gamma_1)+\delta}$, so that, summing over $n$, (9.6) gives that $\sum_{n = 3 n_2/4}^{2 n_2} \mathbf P(S_n = x) \le C_\delta\, n_2^{-\gamma_1 + 2\delta}$, which is smaller than a constant times $n_2^{-1+2\delta}\, m^{1-\gamma_1}$, since $m \le n_2$ (and $\gamma_1 \ge 1$).
Hence, summing over $j = n_2 - n$ between $m$ and $n_2$, and using that $n_2\, \varphi(a_{n_2})\, (a_{n_2})^{-\alpha}$ is bounded (and that $m_2 = a_{n_2}/\mu_2(a_{n_2})$), we recover the upper bound in (9.8) up to the factor $\log(m/m_1)$. Similar bounds hold for the other terms, showing that the use of Theorem 2.6 does not bring any real improvement.

The case of a renewal process
If $S_n$ is a renewal process (necessarily $\alpha_{i_1} < 1$), we can improve the bound (4.7). For a fixed $\delta > 0$, we set $m := n_{i_1} \times (n_{i_0}/n_{i_1})^{\delta}$, which is larger than $n_{i_1}$ but smaller than $n_{i_0}/2$.
Alternatively, we also obtain from Theorem 2.5 a bound valid for any $n$ (using $x_{i_1} \vee a^{(i_1)}_n \ge a^{(i_1)}_{n_{i_1}}$). Summing over $n$, and treating the different parts of the sum according to whether $n \le n_{i_0} - m_{i_0}$, $n \in (n_{i_0} - m_{i_0}, n_{i_0} + m_{i_0})$ (in which range $|x_{i_0} - b^{(i_0)}_n|$ is controlled), or $n \ge n_{i_0} + m_{i_0}$, we conclude.

Acknowledgment
The author wishes to thank Ron Doney for useful comments and for pointing out some references.

A.1 A few words on generalized domains of attraction
We stress that the convergence (1.1) is a special case of generalized domains of attraction (giving rise to so-called operator-stable distributions): in general, the renormalization matrix $A_n$ is invertible, and does not need to be diagonal as in our case. A few relevant historical references are Sharpe [37], Hudson [24], and Hahn and Klass [22, 23]; a comprehensive overview of the subject can be found in [29]. We stress that a local limit theorem exists in general, see [19]. We also mention that when $A_n$ is diagonal, all marginals $X^{(i)}$ are in the domain of attraction of an $\alpha_i$-stable distribution, which is not necessarily the case for operator-stable distributions, cf. [30].
Sharpe [37] found that one can decompose a multivariate (operator-)stable distribution into the product of two marginals, one normal and one strictly non-normal. In our setting, it means that if we set $d_0 = \max\{i;\ \alpha_i = 2\}$, then the stable law $Z$ in (1.1) has two independent components: $(Z_1, \ldots, Z_{d_0})$ normal, and $(Z_{d_0+1}, \ldots, Z_d)$ strictly non-normal, the convergence of these two marginals being enough for the joint convergence, see [35, 28]. Then, we refer to [29] for a characterization of the convergence to an operator-stable distribution (either normal or strictly non-normal), in terms of regular variation in $\mathbb R^d$ of the distribution of $X_1$ (this is a generalization of Feller's conditions [15, §IX.8], i.e. (1.2), to the multivariate case).
In the simpler case we are interested in, that is when the matrix $A_n$ is diagonal, Resnick and Greenwood [35] (resp. de Haan, Omey and Resnick [20]) first gave a characterization of the domains of attraction in dimension $2$ (resp. $d$), with the help of a (simpler) theory of regular variation in $\mathbb R^d$. We summarize it below, but we first recall the definition of a regularly varying function in $\mathbb R^d$, as introduced in [35, 20].

A.2 About regular variation in R d , and convergence to stable distributions
The theory of regular variation in $\mathbb R$ is well established, and an exhaustive and seminal reference is [6]. The study of regular variation in $\mathbb R^d$ turns out to be very rich, and has also been extensively studied, starting with [35, 20, 27]: we refer to [29, Part II] and references therein for more details. Here we give a brief (simplified) definition in the special case we are interested in.
We are now ready to give a necessary and sufficient condition for $S$ to be in the domain of attraction of an $\alpha$-stable distribution (in the case of a diagonal $A_n$), as stated in [35, 20]. Since the convergence of the two marginals $(Z_1, \ldots, Z_{d_0})$ (to a normal law) and $(Z_{d_0+1}, \ldots, Z_d)$ (to a strictly non-normal law) is enough, we state the results in the case where $d_0 = d$ or $d_0 = 0$. First, $S$ is in the domain of attraction of a non-degenerate normal law if and only if the truncated second moment function $S(x) := \mathbf E\big[\langle X_1, x\rangle^2\, \mathbf 1_{\{|\langle X_1, x\rangle| < 1\}}\big]$ is regularly varying at $+\infty$ with index $(2, \ldots, 2)$. On the other hand, $S$ is in the domain of attraction of a strictly non-normal law if and only if $\mathbf P_{X_1}(\cdot)$ is regularly varying at infinity with index $(\rho_1, \ldots, \rho_d) \in (-2, 0)^d$ (with $\rho_i = -\alpha_i$, the scaling sequences being $a^{(i)}_n = r_i(n)$, with $r_i(\cdot)$ defined by (A.2)). We stress that having $\mathbf P(X_1 > x)$ regularly varying at $+\infty$ with index $-(\gamma_1, \ldots, \gamma_d)$ is a sufficient condition for being in the domain of attraction of an $\alpha$-stable distribution, with $\alpha = (\alpha_1, \ldots, \alpha_d)$, $\alpha_i = \gamma_i \wedge 2$.
Finally, let us give two examples of regularly varying distributions of $X_1$ ($\mathbb N^d$-valued) that we have in mind, which can be thought of as the "fully independent" and "fully dependent" cases; one can easily think of other, intermediate, cases.
Example A.2. There exist positive exponents $\beta, (\beta_i)_{1 \le i \le d}$, and slowly varying functions $\psi, (\psi_i)_{1 \le i \le d}$ such that the joint tail factorizes accordingly. Moreover, using a Riemann sum approximation, we get that $\sum_{v \ge 1} h^{(1)}_u(v)/v$ converges to $\int_0^{+\infty} t\,(1 + t^{\beta_2})^{-\beta}\,\mathrm dt$ as $u \to +\infty$ (recall (B.2)). Going back to (B.1) and summing over $x_2$, we therefore get the tail asymptotics of $\mathbf P(X^{(1)}_1 > x)$. This can be generalized to the setting of Example 2.4: we get that Assumption 2.2 is verified, and we find that there is a constant $c_2$ (depending only on $\beta, \beta_i$) such that $\mathbf P(X^{(i)}_1 > x)$ has the corresponding asymptotics. Details are left to the reader.
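The Riemann-sum convergence claimed above can be checked numerically in the special case $\beta = \beta_2 = 2$, where $\int_0^{+\infty} t\,(1 + t^2)^{-2}\,\mathrm dt = 1/2$ (substituting $s = t^2$); the summand below is a stand-in of the assumed form, consistent with (B.2):

```python
# Numerical check of the Riemann-sum convergence: with the stand-in summand
# (v/u) * (1 + (v/u)**beta2)**(-beta), the quantity (1/u) * sum_v ... is a
# left Riemann sum with mesh 1/u for int_0^inf t * (1 + t**beta2)**(-beta) dt,
# so it converges to that integral as u -> infinity.

def riemann_approx(u, beta, beta2, cutoff=100.0):
    # left Riemann sum, truncated at t = cutoff (the tail beyond is tiny)
    return sum((v / u) * (1.0 + (v / u) ** beta2) ** (-beta)
               for v in range(1, int(cutoff * u))) / u

approx = riemann_approx(2000.0, 2.0, 2.0)
exact = 0.5  # int_0^infty t * (1 + t^2)^{-2} dt = 1/2
```

For $u = 2000$ the truncated sum already matches $1/2$ to about three decimal places.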