A weak Cramér condition and application to Edgeworth expansions

We introduce a new, weak Cramér condition on the characteristic function of a random vector which not only holds for all continuous distributions but also, in a generic sense, for discrete (non-lattice) ones. We then prove that the normalized sum of independent random vectors satisfying this new condition automatically verifies some small ball estimates and admits a valid Edgeworth expansion for the Kolmogorov metric. These results therefore extend the well-known theory of Edgeworth expansions under the standard Cramér condition to distributions that are purely discrete.


Introduction
If (X_i)_{i≥1} are i.i.d. real random variables, centered with unit variance, the Central Limit Theorem ensures that, if S_n := X_1 + . . . + X_n, then for all x ∈ R, P(S_n/√n ≤ x) → Φ(x) as n → ∞, where Φ denotes the cumulative distribution function of the standard Gaussian variable.
Under a third moment hypothesis, Berry-Esseen bounds [Ber41] then allow one to control the speed of convergence: there exists a positive constant C such that, for all x ∈ R and n ≥ 1, |P(S_n/√n ≤ x) − Φ(x)| ≤ C E[|X_1|³] n^{−1/2}. It is then very natural to ask if a higher order asymptotic expansion of the distribution function can be made explicit. The first rigorous treatment of asymptotic expansions of distribution functions of normalized sums of i.i.d. real valued random variables was given by Cramér [Cra28], after some formal expansions were proposed by Chebyshev [Tch90] and Edgeworth [Edg06]. Namely, if the entries X_i admit a finite moment of order q ≥ 3 and satisfy the so-called Cramér condition, there exist explicit polynomials P_1, . . . , P_{q−1}, whose coefficients depend on the cumulants of the X_i, such that, if D denotes the derivation operator, we have sup_{x∈R} | P(S_n/√n ≤ x) − Φ(x) − Σ_{r=1}^{q−1} n^{−r/2} P_r(−D)Φ(x) | = O(n^{−q/2}). (1.1) Equivalently, if Q_n denotes the law of S_n/√n, there exists an explicit measure Q_{n,q} admitting a density with respect to the Lebesgue measure such that ||Q_n 1_{(−∞,x]} − Q_{n,q} 1_{(−∞,x]}||_∞ = O(n^{−q/2}). (1.2) The above mentioned Cramér condition under which such expansions are valid involves the common characteristic function of the entries: if φ(t) := E[e^{itX_1}], it requires that lim sup_{t→+∞} |φ(t)| < 1. Thanks to the Riemann-Lebesgue lemma, this condition is automatically satisfied if the entries X_i admit a density with respect to the Lebesgue measure. On the contrary, the Cramér condition always fails to hold when the entries are purely discrete, see Section 2 below. Besides, expansion (1.1) is not valid for general entries, in particular in the simple case of a sum of independent symmetric Bernoulli variables. Precisely, for purely discrete lattice-valued random variables, extra terms of all orders must be added in the Edgeworth expansion to account for the error made when approximating discrete distributions by smooth ones.
A comprehensive account of Edgeworth expansions for sums of independent lattice-valued random variables can be found for instance in Chapter 5 of [BR86].
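As a quick numerical illustration of the dichotomy above (the distributions and values below are chosen for illustration only, they are not taken from the text), one can compare the modulus of the characteristic function of a lattice variable, which returns to one infinitely often, with that of a continuous variable, which decays at infinity by the Riemann-Lebesgue lemma. A minimal sketch:

```python
import numpy as np

def phi_bernoulli(t):
    # characteristic function of a symmetric +/-1 Bernoulli: E[e^{itX}] = cos(t)
    return np.cos(t)

def phi_gaussian(t):
    # characteristic function of a standard Gaussian: e^{-t^2/2}
    return np.exp(-t ** 2 / 2)

# lattice case: |phi| returns to 1 at every multiple of pi, so lim sup = 1
ts = np.pi * np.arange(1, 6)
print(np.abs(phi_bernoulli(ts)))           # all equal to 1: Cramér condition fails

# continuous case: |phi| decays at infinity, so the Cramér condition holds
print(np.abs(phi_gaussian(10.0)) < 1e-20)  # True
```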
After the pioneering work of Cramér, great efforts have been made to extend the above Edgeworth type expansions to various settings, and the literature on the subject is overwhelming, see, e.g., the bibliographical notes of [Pet75,BR86,Hal92] and the references therein. Without any claim to exhaustiveness, let us mention a few. Extensions to the case of multidimensional, or Banach-valued, random variables can be found in [Göt81,Göt89]. Asymptotic expansions for independent, non identically distributed random variables are treated in [BR86, p. 216], and other kinds of dependence are considered for instance in [GH83,Lah93]. Edgeworth expansions have also been performed for metrics other than the Kolmogorov one, namely for the Fisher or relative entropy metric [BCG13], or for metrics of the type sup_{f∈A} |Q_n(f) − Q_{n,q}(f)|, (1.3) where f belongs to a suitable class of functions, for example a class of regular functions, see [Hip77,GH78,Sun11] or [Rot05,CD14] via Stein's method, the class of indicators of convex sets [Ben04], and more recently the class of continuous functions associated to the total variation distance [BC16]. Let us emphasize the fact that if the functions of the class A are smooth enough, roughly speaking as smooth as the order of the expansion, then the term (1.3) is O(n^{−q/2}), provided the entries have enough moments and regardless of the Cramér condition. However, in the case of more singular metrics such as the Kolmogorov one, and as noticed in [Hal92, p. 81], "Cramér's continuity condition is central to much of the theory, even in the relatively simple case of sums of independent random variables". In particular, to our knowledge, in the non-lattice case, expansions of type (1.1) are only known under the Cramér condition, and to quote [GvZ06], "little seems to be known about Edgeworth expansions for sums of variables with discrete but non-lattice distribution".
In this paper, we introduce a new, weak Cramér condition on the characteristic function of a random vector which encompasses the classical Cramér condition but is also satisfied by discrete, non-lattice distributions, in a generic sense, see Definition 2.1 and Proposition 2.4 below. Under this weakened assumption, we then establish in Theorem 4.3 and Corollary 4.4 the validity of the Edgeworth expansion in the Kolmogorov metric. In other words, we establish that expansion (1.1) holds under our weak Cramér condition. As a result, and contrary to the lattice case where additional terms are needed, we obtain that for generic non-lattice discrete distributions, the Edgeworth expansion has the same form as in the case of continuous entries.
The plan of the paper is the following. In the next section, we introduce the weak Cramér condition. We exhibit simple examples of variables satisfying this condition and show that, in some sense, it is generic for discrete, non-lattice random variables. In Section 3, as a first example of application, we derive an explicit small ball estimate for the normalized sum of random vectors satisfying the weak Cramér condition. Finally, in Section 4, we extend the validity of the Edgeworth expansion to such normalized sums. In order to facilitate the reading of the paper, the proofs of the results stated in Sections 3 and 4 are postponed to the final Section 5.

Weakening the Cramér condition
All the random variables appearing in the sequel are supposed to be defined on an abstract probability space (Ω, F, P) and E denotes the associated expectation. A generic element of the set Ω will be denoted by ω. The notation || · || refers to the standard Euclidean norm. Let us first recall that a random vector with values in R^d is said to satisfy the (classical) Cramér condition if its characteristic function φ_X(t) := E[e^{it·X}] is such that lim sup_{||t||→∞} |φ_X(t)| < 1. (2.1)
For instance, any distribution having a continuous component satisfies the Cramér condition in virtue of the Riemann-Lebesgue lemma. There also exist purely singular distributions that satisfy the classical Cramér condition (2.1). For example, if 0 < θ < 1/2 is not the inverse of a Pisot number, then Salem proved in Theorem 2, p. 40 of [Sal63] that if (ε_k)_{k≥0} is a sequence of i.i.d. random variables such that P(ε_k = 0) = P(ε_k = 1) = 1/2 and if X := Σ_{k=0}^{+∞} θ^k ε_k, then the law of X is singular with respect to the Lebesgue measure on [0, 1] while its Fourier transform satisfies lim_{|t|→∞} |φ_X(t)| = 0. On the other hand, it can be shown that for any purely discrete real random variable X, one has lim sup_{|t|→∞} |φ_X(t)| = 1, see, e.g., [BR86, p. 207].
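To see concretely how a purely discrete, non-lattice law saturates the lim sup, consider the (hypothetical, illustration-only) law that is uniform on the three atoms 0, 1 and √2. Along the denominators of the continued-fraction convergents of √2 (the Pell numbers), the frequencies t = 2πq are near-resonant for all atoms at once, and |φ_X(t)| climbs back arbitrarily close to one without ever reaching it:

```python
import numpy as np

atoms = np.array([0.0, 1.0, np.sqrt(2.0)])   # purely discrete, non-lattice law

def phi_mod(t):
    # modulus of the characteristic function of the uniform law on the atoms
    return float(np.abs(np.mean(np.exp(1j * t * atoms))))

# Pell denominators q make |q*sqrt(2) - nearest integer| small,
# hence near-resonance of all three atoms at t = 2*pi*q
vals = [phi_mod(2 * np.pi * q) for q in (2, 12, 70, 408, 2378)]
print(vals)   # increasing towards 1, without ever reaching it
```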
In this section, we introduce a new, weak Cramér type condition that quantifies the fact that the characteristic function of a random vector is bounded away from one at infinity. As we shall see below, contrary to the classical condition above, this weaker condition is satisfied by both continuous and discrete (but non-lattice) distributions. In fact, we shall even prove in Proposition 2.4 below that this weak Cramér condition is "generically" satisfied among discrete distributions.
Definition 2.1. A random vector X with values in R^d and with characteristic function φ_X is said to satisfy the weak Cramér condition with exponent b > 0 if there exist constants C > 0 and R > 0 such that, for all ||t|| > R, |φ_X(t)| ≤ 1 − C/||t||^b. (2.2) For later convenience, the class of probability measures on R^d satisfying this property will be denoted by C(d, b).
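Numerically, for a discrete non-lattice law the gap 1 − |φ_X(t)| stays positive but becomes arbitrarily small along near-resonances, which is why only a polynomial lower bound of the form C/||t||^b can be hoped for. A rough probe on a grid (the four-atom law below is a hypothetical example, not taken from the text):

```python
import numpy as np

# hypothetical 4-atom law: uniform on rationally independent atoms
atoms = np.array([0.0, 1.0, np.sqrt(2.0), np.sqrt(3.0)])

def phi(t):
    # characteristic function evaluated on a vector of frequencies t
    return np.mean(np.exp(1j * np.outer(t, atoms)), axis=1)

t = np.linspace(10.0, 5000.0, 500_001)
gap = 1.0 - np.abs(phi(t))

print(gap.min() > 0.0)   # the gap never vanishes: the law is non-lattice
print(gap.min())         # ... but it gets small: no uniform lower bound holds
```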
The above definition naturally extends to sequences of random vectors.
Definition 2.2. A sequence of random vectors (X_{i,n})_{1≤i≤n} with values in R^d and with characteristic functions φ_{X_{i,n}} is said to satisfy the mean weak Cramér condition with exponent b > 0 if there exist constants C > 0 and R > 0 such that, for all ||t|| > R and for n large enough, (1/n) Σ_{i=1}^n |φ_{X_{i,n}}(t)| ≤ 1 − C/||t||^b. (2.3) Again, for later convenience, the class of sequences of random vectors with values in R^d satisfying this property will be denoted by C(d, b).
Obviously, the classical Cramér condition implies the weak one for any positive value of the parameter b. Roughly speaking, the classical Cramér condition might be thought of as the limiting case b → 0 of the conditions C(d, b). However, the major difference between the classical condition (2.1) and the weak one (2.2), or its averaged version (2.3), is that the class C(d, b) contains discrete distributions whereas, as already noticed just above, probability measures satisfying the classical Cramér condition cannot be discrete.
Remark 2.3. The weak Cramér condition can be tensorized in the following way. If X_1 and X_2 are two independent random vectors in the classes C(d_1, b_1) and C(d_2, b_2) respectively, then the random vector (X_1, X_2) belongs to the class C(d_1 + d_2, max(b_1, b_2)). Indeed, by independence, |φ_{(X_1,X_2)}(t_1, t_2)| = |φ_{X_1}(t_1)| × |φ_{X_2}(t_2)|, so that if the weak Cramér bounds hold for ||t_1|| and ||t_2|| large enough, then for some positive constants C and C′ and for ||(t_1, t_2)|| large enough, we have |φ_{(X_1,X_2)}(t_1, t_2)| ≤ 1 − C′/||(t_1, t_2)||^{max(b_1,b_2)}.

The next proposition illustrates the fact that the condition defining the class C(1, b) is actually generically satisfied by discrete real random variables. It also emphasizes the relation between the exponent b and the number of atoms of the considered distribution. Roughly speaking, the more atoms the distribution has, the smaller the exponent b can be chosen; equivalently, the more atoms the distribution has, the better the quantitative bound on the distance between its characteristic function and one.
Proposition 2.4. Let us fix an integer p ≥ 3. To any vector u = (u_i)_{1≤i≤p} ∈ R^p, we associate the set M_p(u) of measures on the configuration space having the u_i's as atoms. Then, if λ_p denotes the Lebesgue measure on R^p, for all b > 1, for λ_p-almost every u, the measures of M_p(u) belong to the class C(1, b).

Remark 2.5. The above Proposition 2.4 naturally extends to the multi-dimensional case, i.e. to the case where u_i ∈ R^d for 1 ≤ i ≤ p. For the sake of simplicity, we restrict ourselves to the scalar case.
Remark 2.6. Let us insist on the genericity of the weak Cramér condition for non-lattice, discrete random variables. Consider a cloud of data D = {x_i, 1 ≤ i ≤ n} ⊂ R, composed of n ≥ 3 independent realizations of a continuous real random variable X. Suppose now that, as in a bootstrap procedure, one generates a new random variable X* by resampling among the initial data (x_i)_{1≤i≤n}. The new variable X* takes values in the finite set D, and thus does not satisfy the classical Cramér condition. However, Proposition 2.4 ensures that, almost surely with respect to the sampled data, X* satisfies the weak Cramér condition.
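The bootstrap remark can be mimicked in a few lines (illustration only; the array `data` plays the role of the cloud D, with arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(20)        # n i.i.d. realizations of a continuous law

# bootstrap variable X*: uniform resampling among the observed data points
star = rng.choice(data, size=1000, replace=True)
print(np.isin(star, data).all())      # X* is purely discrete, supported on the cloud
```

Since the atoms are themselves draws from a continuous law, they fall almost surely in the generic set of Proposition 2.4.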
The aim of the next proposition is to give more concrete examples of random variables satisfying the weak Cramér condition (2.2).
Proposition 2.7. Let p ≥ 3 be an integer and u_1, . . . , u_p be algebraic numbers which are rationally independent. Let c_1, . . . , c_p be p positive numbers such that Σ_{i=1}^p c_i = 1.

Indeed, among the atoms of √N are the numbers √p for any prime number p. It is well known that these numbers are linearly independent over the field Q, and the conclusion follows from the infinitude of prime numbers and Proposition 2.7.
Example 2.9. Let p ≥ 5 be a prime number and set θ = 2π/p. Indeed, the real numbers (cos(iθ))_{1≤i≤p−1} are irrational, algebraic and linearly independent over the field Q.
In the next Sections 3 and 4, we shall give examples of application of the weak Cramér condition, by establishing that the normalized sum of independent variables satisfying this condition automatically satisfies a sharp small ball estimate and also admits a natural Edgeworth expansion. Before that, we conclude this section by noticing that if a sequence of random vectors satisfies the mean weak Cramér condition (2.3), then it automatically satisfies a local (classical) Cramér condition. To state this result properly, we need to introduce a notation: to any sequence (X_{i,n})_{1≤i≤n} and to any ℓ > 0, we associate the average ℓ-th moment ρ_ℓ(n) = ρ_ℓ(n)((X_{i,n})) defined as ρ_ℓ(n) := (1/n) Σ_{i=1}^n E[||X_{i,n}||^ℓ]. The local Cramér bound announced above is the following.
Proposition 2.10. Let (X_{i,n})_{1≤i≤n} be a sequence of independent random vectors with values in R^d, belonging to the class C(d, b) and such that sup_{n≥1} ρ_1(n) < +∞. Then, for all 0 < r < R, the following local Cramér bound holds: lim sup_{n→∞} sup_{r≤||u||≤R} (1/n) Σ_{i=1}^n |φ_{X_{i,n}}(u)| < 1. Proof. Let us argue by contradiction and fix 0 < r < R. If the above lim sup equals one, there exists an increasing subsequence (n(k))_{k≥0} of integers along which the supremum over r ≤ ||u|| ≤ R tends to one. Since the characteristic functions are continuous, for a fixed integer k, the above supremum is achieved at a point u_k in the compact set C[r, R] := {u ∈ R^d, r ≤ ||u|| ≤ R}.
Up to the extraction of another subsequence, we can thus suppose that the sequence (u_k)_{k≥1} converges to a point u* ∈ C[r, R], and we have then the convergence (2.5). Let us now observe that u → |φ_{X_{i,n(k)}}(u)|² can be interpreted as the Fourier transform of the symmetrized version of X_{i,n(k)}. Namely, if X′_{i,n(k)} is an independent copy of X_{i,n(k)} and if we set Z_{i,n(k)} := X_{i,n(k)} − X′_{i,n(k)}, then φ_{Z_{i,n(k)}}(u) = |φ_{X_{i,n(k)}}(u)|². (2.6) By assumption sup_{n≥1} ρ_1(n) < +∞, hence the sequence (Z_{N(k),n(k)})_{k≥1} is bounded in L¹(Ω, F, P), so, up to another extraction of a subsequence, we can suppose that it converges in distribution to a random variable with values in R^d, say W. In other words, if φ_W denotes the characteristic function of W, for all u ∈ R^d, φ_{Z_{N(k),n(k)}}(u) converges to φ_W(u). (2.7) Now, from Equations (2.5), (2.6) and (2.7), we get that there exists u* ∈ C[r, R] such that |φ_W(u*)| = 1. This implies that P(u*·W ∈ 2πZ) = 1 and thus φ_W(nu*) = 1 for all n ∈ N. But this is in contradiction with the fact that the initial sequence X_{i,n} satisfies the mean weak Cramér condition. Indeed, for a fixed n, taking u = nu* in Equation (2.6), we always obtain the identity (2.9). If the sequence X_{i,n} satisfies the mean weak Cramér condition, then for n sufficiently large but finite, and for k large enough, the right-hand side of Equation (2.9) is bounded by a constant strictly smaller than one. By Equation (2.7), the left-hand side of Equation (2.9) converges to φ_W(nu*) = 1 as k goes to infinity, hence the contradiction.

Small ball estimates
Despite the richness of the class of random variables or vectors satisfying the weak Cramér condition, the latter is flexible enough to prove some fairly general results that are classical for continuous random variables but difficult to obtain as soon as the underlying variables have a discrete component. To illustrate this, we will establish in this section a small ball estimate for the normalized sum of independent random vectors belonging to the class C(d, b).
Theorem 3.1. Let us consider a sequence (X_{i,n})_{1≤i≤n} of independent, centered random vectors with values in R^d such that:
1. the average third moments are uniformly bounded, i.e. sup_{n≥1} ρ_3(n) < +∞;
2. the mean covariance matrices are uniformly non-degenerate for n large enough;
3. the sequence (X_{i,n})_{1≤i≤n} belongs to the class C(d, b).
Then there exists a constant Γ > 0 such that, for all 0 < γ < 1/b + 1/2 and for n large enough, the small ball estimate (3.1) holds. The proof of Theorem 3.1 is given in Section 5.2 below. It is inspired by Halász's method, which allows one to relate the small ball probability to the local and asymptotic behavior of the Fourier transform of the normalized sum of the X_{i,n}. On the one hand, the local behavior in the neighborhood of zero of this Fourier transform is controlled thanks to the first two hypotheses, on the mean third moment and the mean covariance. On the other hand, the mean weak Cramér condition allows one to control the behavior at infinity of the Fourier transform. The behavior of the Fourier transform away from both zero and infinity is finally controlled thanks to the local Cramér bound established in Proposition 2.10.
Remark 3.2. Naturally, the small ball estimate of Theorem 3.1 is easy to obtain if the X_i are continuous random variables with uniformly bounded densities. But this estimate is not trivial for discrete variables, or even for continuous random variables with unbounded densities. For example, in dimension d = 1, for general random variables, as soon as γ > 1/2, it is sharper than Berry-Esseen bounds, which are of order n^{−1/2}.

Remark 3.3. Let us also note that the estimate of Theorem 3.1 is hopeless in the case where the random variables X_i are lattice-valued, a case in which the mean weak Cramér condition clearly does not hold. For example, if the law of X_i is uniform on {−1, +1}, then for γ > 1/2 and for n even, P(|S_n/√n| ≤ n^{−γ}) ≥ P(S_n = 0) ∼ √(2/(πn)), which is not a O(n^{−γ}).

To conclude this section, let us illustrate Theorem 3.1 by making explicit a small ball estimate for a random sum of cosines, the random coefficients being discrete, Bernoulli type, random variables.
Example 3.4. Let us consider a sequence (ε_k)_{k≥1} of independent and identically distributed random variables such that P(ε_k = 1) = P(ε_k = −1) = 1/2. Fix a prime number p ≥ 5 and consider the sum S_n := Σ_{k=1}^n cos(2kπ/p) ε_k. The variables X_k := cos(2kπ/p) ε_k are independent, centered, and they satisfy conditions 1 and 2 of Theorem 3.1. For all k ≥ 1, the Fourier transform of X_k is given by φ_{X_k}(t) = cos(t cos(2kπ/p)). It is periodic, taking the value one at zero, and thus satisfies neither the weak Cramér condition (2.2) nor its averaged version (2.3). Therefore, we cannot apply Theorem 3.1 directly. Nevertheless, we can always group the summands period by period and write S_n = S_{p⌊n/p⌋} + R_n, where R_n := S_n − S_{p⌊n/p⌋} is such that |R_n| ≤ p uniformly in n. The new variables Y_k, obtained by summing the X_j's over one period, are still independent and centered, and they satisfy conditions 1 and 2 of Theorem 3.1. But, as already noticed in Example 2.9 above, along a period the atoms cos(2kπ/p) are linearly independent over Q, so that the variables Y_k now do satisfy the weak Cramér condition as well as its averaged version. In conclusion, despite the fact that the entries are discrete Bernoulli type random variables, Theorem 3.1 applies and the random sum of cosines S_n satisfies the small ball estimate (3.1).
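A Monte Carlo sanity check of Example 3.4 (the parameters p, n, γ and the sample size below are chosen arbitrarily for illustration): even though each coefficient is a ±1 Bernoulli variable, the empirical probability that S_n/√n falls in a ball of radius n^{−γ} is itself of that same small order.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, m = 5, 400, 20_000
coeffs = np.cos(2 * np.pi * np.arange(1, n + 1) / p)   # deterministic cosines

eps = rng.choice([-1.0, 1.0], size=(m, n))             # i.i.d. symmetric signs
S = eps @ coeffs                                       # m independent copies of S_n

gamma = 0.6
delta = n ** (-gamma)
estimate = np.mean(np.abs(S / np.sqrt(n)) <= delta)
print(estimate, delta)   # the empirical small ball probability is of order delta
```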

Edgeworth expansion
Let us now consider another type of result which is usually stated under the classical Cramér condition, and which we will show to hold true under the mean weak Cramér condition introduced in Section 2, namely the Edgeworth expansion for a sum of independent random vectors. The Edgeworth expansion is well known as a means for obtaining approximate tail probabilities of a random variable starting from information on its moments or cumulants.
In order to state the expansion result for the sum of independent vectors, we need to introduce a number of notations, which we adopt from the standard reference [BR86]. The cumulative distribution function of the standard Gaussian variable will be denoted by Φ. We consider a sequence (X_i)_{i≥1} of independent and centered random vectors with values in R^d, with positive definite covariance matrices and finite absolute s-th moments for some integer s ≥ 3. We denote by V_n the mean covariance matrix and by B_n a root of its inverse, namely V_n := (1/n) Σ_{k=1}^n cov(X_k), B_n² := V_n^{−1}, and we denote by Q_n the law of the normalized sum (1/√n) B_n (X_1 + · · · + X_n).
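The normalization B_n can be computed in practice via a spectral decomposition; a minimal sketch with a hypothetical 2×2 mean covariance matrix V (any symmetric positive definite matrix would do):

```python
import numpy as np

# hypothetical mean covariance matrix V_n
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])

w, U = np.linalg.eigh(V)              # V = U diag(w) U^T with w > 0
B = U @ np.diag(w ** -0.5) @ U.T      # symmetric root of the inverse: B^2 = V^{-1}

print(np.allclose(B @ B, np.linalg.inv(V)))   # True
```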
The average ν-th cumulant of the sequence (B_n X_j)_j will be denoted by χ̄_{ν,n}. Following Equation (7.2), p. 51 of [BR86], we consider the formal polynomials P_r(z, {χ̄_{ν,n}}) associated to these average cumulants, as well as the signed measures P_r(−Φ, {χ̄_{ν,n}}) defined by Equation (7.11), p. 54. We then denote by Q̃_n the approximated law of Q_n associated to the Edgeworth expansion. Note that the measure Q̃_n is no longer a probability measure; still, it admits a density with respect to the standard Gaussian measure ρ(x) on R^d. Namely, there exist explicit polynomials P_{l,n}, whose coefficients depend on the average cumulants, such that dQ̃_n(x) = ( 1 + Σ_{l=1}^{s−2} n^{−l/2} P_{l,n}(x) ) ρ(x) dx. (4.1) For a measurable function f and for s > 0, we define M_s(f) := sup_{x∈R^d} |f(x)|/(1 + ||x||^s). Finally, for ε > 0, we consider the modulus of continuity of f and its Gaussian average. Having introduced the above notations, we can now formulate the expansion result for independent random vectors, under the classical Cramér condition, as it is stated in Theorem 20.6 of [BR86].

Remark 4.6. As noticed in the introduction, we emphasize the fact that, under the weak Cramér condition, the Edgeworth expansion (4.5) has the same form as the one obtained under the stronger classical Cramér condition, i.e. no additional terms are needed, contrary to the case of lattice distributions.
Remark 4.7. In the statement of Theorem 4.3, the hypothesis concerning the decrease of the sequence ε_n can be replaced by requiring that ε_n = n^{−δ} with (s − 2)/2 < δ < 1/b + 1/2.
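To make the one-dimensional, first-order case concrete (a sketch under the standard formulas, with an illustrative continuous example rather than one from the text): with a single correction polynomial, the expansion reads F_n(x) ≈ Φ(x) − φ(x) κ₃ (x² − 1)/(6√n), where κ₃ is the third cumulant of the standardized summands. For centered standardized exponential summands (κ₃ = 2), the corrected approximation beats the plain Gaussian one:

```python
import numpy as np
from math import erf, exp, sqrt, pi

def Phi(x):
    # standard Gaussian cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def edgeworth_1(x, kappa3, n):
    # one-term Edgeworth approximation of P(S_n / sqrt(n) <= x)
    dens = exp(-x * x / 2.0) / sqrt(2.0 * pi)
    return Phi(x) - dens * kappa3 * (x * x - 1.0) / (6.0 * sqrt(n))

# X_i = E_i - 1 with E_i ~ Exp(1): centered, unit variance, third cumulant 2
rng = np.random.default_rng(2)
n, m = 50, 400_000
S = (rng.gamma(n, size=m) - n) / sqrt(n)   # exact samples of S_n / sqrt(n)

x = 2.0
empirical = float(np.mean(S <= x))
print(abs(empirical - edgeworth_1(x, 2.0, n)))   # smaller error ...
print(abs(empirical - Phi(x)))                   # ... than the plain CLT approximation
```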

Proofs
This last section is dedicated to the proofs of the results stated above. Namely, in the next Section 5.1, we give the proofs of Propositions 2.4 and 2.7 concerning random variables that satisfy the weak Cramér condition. The proof of the small ball estimate stated in Theorem 3.1 is given in Section 5.2, whereas the proof of the Edgeworth expansion stated in Theorem 4.3 is given in Section 5.3.

Proofs of Propositions 2.4 and 2.7
Before giving the proof of Proposition 2.4, let us state and prove two auxiliary lemmas.
With the help of Lemmas 5.1 and 5.2, we can now give the proof of Proposition 2.4, by choosing the vector u at random, with a density with respect to the Lebesgue measure. Then, the variables V_j being independent, if we denote by P_{V_2} the conditional probability given V_2, we obtain the announced bound. Relying on the Borel-Cantelli lemma, almost surely, there are only finitely many integers r such that the event A_r is realized. Thus, there exists a set Ω′ ⊂ Ω of full measure such that, for all ω ∈ Ω′, there exists r_0 = r_0(ω) > 0 such that the corresponding estimate holds for any r > r_0 and any |q| ≤ 1 + 4|r|M. In particular, since b > z, the right-hand side of the resulting inequality is unbounded as |r| goes to infinity. This shows that, almost surely, condition (5.3) fails to hold, hence the result.
We now give the proof of Proposition 2.7. The general idea behind the proof is similar to that of Proposition 2.4 but, instead of using the Borel-Cantelli lemma, i.e. a probabilistic argument as above, the next result is based on Diophantine approximation, and more precisely on the Subspace Theorem on simultaneous rational approximation.
Proof. The proof is again based on Lemma 5.2. Let us fix b > z > 1/(p−2). If we denote as above v_j := u_j − u_1 for 2 ≤ j ≤ p, the numbers 1, v_3/v_2, . . . , v_p/v_2 are rationally independent and, with the same notation as in Remark 7.3.4 of [BG06], the Subspace Theorem ensures that there is some constant κ > 0 such that the announced lower bound holds for any integers (q_2, . . . , q_p) ∈ Z^{p−1}. In particular, the right-hand side of the resulting inequality is unbounded, and condition (5.3) of Lemma 5.2 is not fulfilled, hence the result.

Small ball estimates
Let us first give the proof of Theorem 3.1 on the small ball estimate for sum of independent random vectors.
Note that, from the arithmetic-geometric mean inequality, we always have ( Π_{i=1}^n |φ_{X_{i,n}}(u)| )^{1/n} ≤ (1/n) Σ_{i=1}^n |φ_{X_{i,n}}(u)|. Thus, if we introduce the notation Φ_n(u) := ( Π_{i=1}^n |φ_{X_{i,n}}(u)| )^{1/n} to simplify the expressions, we are led to estimate the integral ∫_{R^d} e^{n log(Φ_n(u))} e^{−n||u||²/(2t²)} du.
We write this integral as I_1 + I_2 + I_3, where the sum corresponds to the decomposition of the integral over the whole of R^d into three parts: the integral over ||u|| ≤ r for a small r > 0 to be fixed later, the integral over ||u|| ≥ R for another constant R > 0 to be specified, and finally the intermediate integral over r < ||u|| < R, namely I_3 = ∫_{r<||u||<R} e^{n log(Φ_n(u))} e^{−n||u||²/(2t²)} du.
Let us first consider the integral I 1 in a neighborhood of zero and prove the following auxiliary lemma.
Proof of Lemma 5.3. Let X′ be an independent copy of X and consider the new random vector X̃ := X − X′. We then have φ_{X̃}(u) = |φ_X(u)|² for all u ∈ R^d, cov(X̃) = 2 cov(X) and, by the triangle inequality, E[||X̃||³] ≤ 8 E[||X||³]. Since X̃ admits a third moment, its characteristic function φ_{X̃} is of class C³. Performing a Taylor-Lagrange expansion at zero, for any u ∈ R^d, there exists θ_u ∈ ]0, 1[ such that the expansion of φ_{X̃}(u) holds up to third order. Using the equivalence of norms in finite dimension, we deduce that there exists a finite constant η_d controlling the third order term; in other words, doubling the constant η_d if necessary, we obtain the announced bound on |φ_X(u)|². We can now give an upper bound for the integral term I_1. Indeed, applying Lemma 5.3 to each random vector X_{i,n}, using the Cauchy-Schwarz inequality, and invoking hypotheses 1 and 2 on the mean third moment of the X_{i,n} and on the covariance matrix, we deduce that for n large enough and for all u ∈ R^d, |Φ_n(u)|² ≤ 1 − c_1 ||u||² + η_d ρ ||u||³.
Injecting this estimate in the integral over ||u|| ≤ r gives, for n large enough, a control of I_1 = ∫_{||u||≤r} e^{n log(Φ_n(u))} e^{−n||u||²/(2t²)} du; in particular, we obtain for n large enough the bound (5.5). We now focus on the integral I_2 in the neighborhood of infinity. Since the variables X_{i,n} satisfy the mean weak Cramér condition, there exist constants A > 0 and R > 0 such that, for n large enough and for ||u|| > R, Φ_n(u) is bounded accordingly. Taking the logarithm and using again the fact that log(1 − x) ≤ −x for 0 < x < 1, one deduces that the same control holds at the exponential scale for n and R large enough and for all ||u|| > R. Injecting this new estimate in the integral over ||u|| ≥ R gives a bound on I_2 = ∫_{||u||≥R} e^{n log(Φ_n(u))} e^{−n||u||²/(2t²)} du, in which V_d, the volume of the unit ball in dimension d, appears. Now, if we fix 0 < a < 1/b, a simple change of variables yields the estimate (5.9). We are left with the intermediate integral I_3, where the integration bounds r and R are now fixed. Since the sequence X_{i,n} satisfies the mean weak Cramér condition, by Proposition 2.10 there exists ε > 0 such that, for n large enough and uniformly in r < ||u|| < R, we have Φ_n(u) ≤ 1 − ε. We can moreover choose ε small enough so that we also have log(Φ_n(u)) ≤ −ε/2. Injecting this last estimate in the integral between r and R yields I_3 ≤ e^{t²δ²/2} e^{−εn/2}. (5.10) Eventually, combining Equations (5.4), (5.5), (5.9) and (5.10), we get that for all δ > 0 and t > 0, and for n large enough, the total contribution is bounded by the sum of the previous estimates, the last of which is e^{t²δ²/2} e^{−εn/2}.
Letting t = 1/δ, we conclude that there exists a positive constant Γ, which does not depend on n or δ, such that the announced bound holds for n large enough. In particular, if δ is of the form δ = n^{−γ} for some 0 < γ < a + 1/2 < 1/b + 1/2, then, making the constant Γ a bit larger, we get the small ball estimate for n large enough, hence the result.

Edgeworth expansion
We now give the proof of Theorem 4.3, stated in Section 4, which asserts that the normalized sum of independent random vectors satisfying the mean weak Cramér condition (2.3) admits a valid Edgeworth expansion.
Proof of Theorem 4.3. As already noticed in Remark 4.2, the classical Cramér condition is used only once in the original proof of Theorems 20.1 and 20.6 of [BR86], from which we adopt the notations, namely for the control of the integral I_1 in Equation (20.36).

As our hypotheses in Theorem 4.3 only differ from those of Bhattacharya and Rao by the fact that the classical Cramér condition is replaced by the mean weak Cramér condition, we are left to check in detail that an analogous control of I_1 can actually be achieved under the weakened condition. Roughly speaking, the global strategy of the original proof is to truncate and center the original variables X_{i,n} appearing in the statement and to show that if the normalized sum of the truncated and centered variables satisfies a valid Edgeworth expansion, then so does the normalized sum of the original variables. Thus, starting from the variables (X_{i,n})_{1≤i≤n}, we introduce the new truncated and centered variables with values in R^d, with components Z_{i,n} = (Z_{i,n}(1), . . . , Z_{i,n}(d)).
Note that we have ||Z_{i,n}|| ≤ 2√n for all 1 ≤ i ≤ n. We denote by Q_n the law of the normalized sum (1/√n)(Z_{1,n} + . . . + Z_{n,n}), and by Q̂_n its characteristic function. The integral term I_1 which is the object of our attention involves multi-index derivatives of Q̂_n, so let us specify our notations. For a given multi-index γ = (γ_1, . . . , γ_d) ∈ N^d, we denote by |γ| its length, namely |γ| := Σ_{i=1}^d γ_i. We will also consider families of n such multi-indices. The proof of Theorems 20.1 and 20.6 in [BR86] also involves a smoothing kernel K_ε, with Fourier transform K̂_ε, whose derivatives satisfy the a priori estimate (20.18), p. 210, valid for all ε > 0, all t ∈ R^d, and all multi-indices α of length |α| ≤ s + d + 1, for some absolute constant c_3(s, d) > 0. We can now make explicit the integral term I_1 that we want to control under the weakened Cramér condition, where c_n := √n/(16 ρ_3(n)). By independence of the variables X_{i,n}, and thus by independence of the new variables Z_{i,n}, the characteristic function Q̂_n factorizes. Let us observe that, for all multi-indices α = (α_1, . . . , α_d) ∈ N^d, the corresponding derivative bound holds and thus, since ||Z_{i,n}|| ≤ 2√n, we obtain (5.13). Using the multi-index, multidimensional Leibniz rule, if α ∈ N^d and β ∈ N^d are multi-indices such that α_i ≤ β_i for all 1 ≤ i ≤ d, we then obtain the corresponding product bound or, expanding the logarithm, J_γ(n, ε) ≤ n^{d/2} exp( (n − |γ|) log(n/(n − |γ|)) ) K_γ(n, ε). We are now left to check that, if the characteristic functions of the X_{i,n} satisfy the mean weak Cramér condition of the statement, then so do the characteristic functions φ_{i,n} of the truncated and centered variables Z_{i,n}. The next lemma shows that this is the case, at least for ||t|| less than a fractional power of n. Indeed, before centering, the characteristic function of the truncated variable X_{i,n} 1_{||X_{i,n}||≤√n} equals E[e^{i t·X_{i,n}}] − E[(e^{i t·X_{i,n}} − 1) 1_{||X_{i,n}||>√n}].
In particular, recalling that, by definition, φ_{X_{i,n}}(t) := E[e^{i t·X_{i,n}}], we can compare the two characteristic functions. Let us go back to the proof of Theorem 4.3 and to the estimate of K_γ(n, ε), from which we will deduce immediate estimates for J_γ(n, ε) and finally for I_1(n, ε). As in the proof of Theorem 3.1, let us decompose K_γ(n, ε) as the sum K_γ(n, ε) := K¹_γ(n, ε) + K²_γ(n, ε) + K³_γ(n, ε), where B_d(R) denotes the volume of the ball of radius R in R^d. The right-hand side of (5.20) goes to zero exponentially fast as n goes to infinity, uniformly in ε > 0. We now consider the integral term K³_γ(n, ε) and we define R(n, ε) := (εA/(4ρ))^{1/b} n^{s/(2b) + 1/2}.
At this point, we thus have a control of I_1 = I_1(n, ε_n) similar to that of Equation (20.34) of [BR86].