Exponential bounds for multivariate self-normalized sums

In a non-parametric framework, we establish non-asymptotic bounds for self-normalized sums and quadratic forms in the multivariate case, for symmetric as well as general random variables. These bounds are entirely explicit and, in the general case, depend essentially on the kurtosis of the Euclidean norm of the standardized random variables.

We denote by $E$ the expectation under $\Pr$. In the following we put $\bar Z_n = n^{-1}\sum_{i=1}^n Z_i$. Define $S$ as a square root of the matrix $S^2 = E(ZZ')$ and, similarly, $S_n$ as a square root of $S_n^2 = n^{-1}\sum_{i=1}^n Z_i Z_i'$. We assume in the following that $S^2$ is full rank; therefore $S_n^2$ is also full rank with probability 1 as soon as $n > q$. For further use, we define $\gamma_r = E(\|S^{-1}Z\|_2^r)$, $r > 0$, where $\|\cdot\|_2$ is the Euclidean norm on $\mathbb{R}^q$. Now consider the self-normalized sum and its squared Euclidean norm
$$n \bar Z_n' S_n^{-2} \bar Z_n. \qquad (2)$$
Self-normalized sums have recently given rise to an important literature: see for instance [13,6], or [4] for self-normalized processes. It has been proved that non-asymptotic exponential bounds can be obtained for these quantities under very weak conditions on the moments of the variables $Z_i$. Unfortunately, except in the symmetric case, the bounds established in the real case ($q = 1$) are not universal and depend on the skewness $\gamma_3 = E|S^{-1}Z|^3$ or even on higher moments, for instance $\gamma_{10/3} = E|S^{-1}Z|^{10/3}$, see [13]. Actually, bounds that are uniform over all underlying distributions are impossible to obtain, since they would contradict Bahadur and Savage's theorem, see [2,18]. Recall that the behaviour of self-normalized sums is closely linked to that of Student's statistic, which is the basic asymptotic root for constructing confidence intervals (see Remark 2 below). Moreover, the available bounds are not explicit and are only valid for $n \ge n_0$, with $n_0$ large and unknown. To our knowledge, non-asymptotic exponential bounds with explicit constants are only available for symmetric distributions [12,9,17], in the unidimensional case ($q = 1$). In this paper, we obtain generalizations of these bounds for (2) in the multivariate case, by using a multivariate extension of the symmetrization method developed in [16] as well as arguments taken from the literature on self-normalized processes, see [4].
Our bounds are explicit but depend on the kurtosis $\gamma_4$ of the Euclidean norm of $S^{-1}Z$ rather than on the skewness. They hold for any value of the parameter size $q$. One technical difficulty in the multidimensional case is to obtain an explicit exponential bound for the smallest eigenvalue of the empirical variance, which allows one to control the deviation of $S_n^2$ from $S^2$; this result is of interest in its own right.

Exponential bounds for self-normalized sums
Some bounds for self-normalized sums may be obtained quite easily in the symmetric case (that is, for random variables having a symmetric distribution) and are well known in the unidimensional case. In the non-symmetric and/or multidimensional case these bounds are new and not trivial to prove. One of the main tools for obtaining exponential inequalities in various settings is the famous Hoeffding inequality (see [12]), yielding that for independent real random variables (r.v.'s) $Y_i$, $i = 1, \dots, n$, with finite support, say $[0,1]$, we have
$$\Pr\Big(\sum_{i=1}^n (Y_i - E Y_i) \ge t\Big) \le \exp(-2t^2/n).$$
A direct application of this inequality to self-normalized sums (via a randomization step introducing Rademacher r.v.'s) yields (see [9,8]) that, for $n$ independent random variables $Z_i$ symmetric about 0, and not necessarily bounded (nor identically distributed), we have
$$\Pr\Bigg(\frac{\sum_{i=1}^n Z_i}{\sqrt{\sum_{i=1}^n Z_i^2}} \ge t\Bigg) \le \exp(-t^2/2).$$
In the general non-symmetric case, the master result of [13] for $q = 1$ states that if $\gamma_{10/3} < \infty$, then inequality (4) holds for some $A \in \mathbb{R}$ and some $a \in\, ]0,1[$, where $\bar F_q$ is the survival function of a $\chi^2(q)$ distribution, defined by $\bar F_q(t) = \int_t^{+\infty} f_q(y)\,dy$ with $f_q(y) = \frac{1}{2^{q/2}\Gamma(q/2)}\, y^{q/2-1} e^{-y/2}$ and $\Gamma(p) = \int_0^{+\infty} y^{p-1} e^{-y}\,dy$.
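The survival function $\bar F_q$ can be evaluated numerically straight from its integral definition. The sketch below (function names `f_q` and `survival_q` and the truncation/discretization parameters are our choices, not from the paper) approximates $\int_t^{+\infty} f_q(y)\,dy$ by the trapezoidal rule on a truncated range.

```python
import math

def f_q(y, q):
    """Chi-square(q) density as defined in the text:
    f_q(y) = y^{q/2-1} e^{-y/2} / (2^{q/2} Gamma(q/2))."""
    return y ** (q / 2 - 1) * math.exp(-y / 2) / (2 ** (q / 2) * math.gamma(q / 2))

def survival_q(t, q, upper=200.0, steps=20000):
    """Fbar_q(t) = int_t^infty f_q(y) dy, by the trapezoidal rule on [t, upper];
    the integrand is negligible beyond `upper` for moderate t and q."""
    if t >= upper:
        return 0.0
    h = (upper - t) / steps
    total = 0.5 * (f_q(t, q) + f_q(upper, q))
    for i in range(1, steps):
        total += f_q(t + i * h, q)
    return total * h
```

For even $q$ the tail has a closed form against which the quadrature can be checked, e.g. $\bar F_2(t) = e^{-t/2}$ and $\bar F_4(t) = e^{-t/2}(1 + t/2)$.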
However, the constants $A$ and $a$ are not explicit and, despite its great interest for understanding the large deviation behaviour of self-normalized sums, the bound is of no direct practical use. In the non-symmetric case our bounds are worse than (4) as far as the control of the approximation by a $\chi^2(q)$ distribution is concerned, but they are entirely explicit.

Moreover, for $t \ge nq$, we have
$$\Pr\left(n \bar Z_n' S_n^{-2} \bar Z_n \ge t\right) = 0.$$
The proof is postponed to Appendix (1). Part a) in the symmetric multidimensional case follows from an easy but crude extension of [12] or [9,8]. It is also given in a different form in [10]. The exponential inequality (5) is classical in the unidimensional case. Other types of inequalities, with a suboptimal rate in the exponential term, have also been obtained by [14].
In the general multidimensional framework, the main difficulty is actually to keep the self-normalized structure when symmetrizing the original sum. We first establish the inequality in the symmetric case by an appropriate diagonalization of the estimated covariance matrix, which reduces the problem to $q$ unidimensional inequalities. The next step is to use a multidimensional version of Panchenko's symmetrization lemma (see [16]). However, this symmetrization lemma partly destroys the self-normalized structure (the normalization becomes $S_n^2 + S^2$ instead of the expected $S_n^2$), which can be retrieved by obtaining a lower tail control of the distance between $S_n^2$ and $S^2$. This is done by studying the behavior of the smallest eigenvalue of the normalizing empirical variance. The second term on the right-hand side of inequality (6) is essentially due to this control. However, for $q > 1$, the bound of part a) is clearly not optimal. A better bound, which does not have an exactly exponential form, has been obtained by [17], following previous work by [7]. Pinelis' result gives a much more precise evaluation of the tail for moderate $q$. It essentially says that, in the symmetric case, the tail of the self-normalized sum can be bounded by the tail of a $\chi^2(q)$ distribution. Notice that this tail $\bar F_q$ satisfies the approximation given in [1], p. 941, result 26.4.12.
This bound gives the right behavior of the tail (in $q$) as $t$ grows, which is not the case for a). However, in the unidimensional case, a) still gives a better approximation than [17]. Part a) can still be used in the multidimensional case to get crude but exponential bounds. We expect, however, Pinelis' inequality to give much better bounds for moderate $q$ and moderate sample size $n$ in the symmetric case. For this reason, we extend the results of Theorem 1 by using a $\chi^2(q)$ type of control. This essentially consists in extending Lemma 1 of [16] to a non-exponential bound.
Theorem 2. The following inequalities hold for finite $n > q$ and for $t < nq$: a) (Pinelis, 1994) if $Z$ has a symmetric distribution, without any moment assumption, then we have; b) for a general distribution of $Z$ with kurtosis $\gamma_4 < \infty$, for any $a > 1$ and for $t \ge 2q(1 + a)$, we have. For $t \ge nq$, we have $\Pr\left(n \bar Z_n' S_n^{-2} \bar Z_n \ge t\right) = 0$.

Remark 1.
Notice that the constant $K(q)$ does not increase for large $q$, as can be seen in Figure 1. A close examination of the bounds shows that, essentially, $\gamma_4(q+1)$ has to be small compared to $n$.

Remark 2.
It is tempting to compare our bounds with some more classical results in statistics. Recall that, in a unidimensional framework, the studentized ratio is given by $\widetilde T_n = \sqrt{n}\, \widetilde S_n^{-1} \bar Z_n$, where $\widetilde S_n^2 = \frac{1}{n-1}\sum_{i=1}^n (Z_i - \bar Z_n)^2$ is the unbiased estimator of the variance. In a Gaussian framework, $\widetilde T_n$ has a Student distribution with $n - 1$ degrees of freedom. In contrast, our self-normalized sum is defined by $T_n = \sqrt{n}\, S_n^{-1} \bar Z_n$. It is related to $\widetilde T_n$ by the relation $\widetilde T_n = f_n(T_n)$, with $f_n(x) = x\sqrt{(n-1)/(n - x^2)}$. As a consequence, one gets, in the unidimensional symmetric case, for $t > 0$, a corresponding bound on the tail of $\widetilde T_n$. For large $n$ we recover a sub-Gaussian type of inequality. At fixed $n$, this inequality is non-informative as $t \to \infty$, since the right-hand side tends to 1. Recall that, in a Gaussian framework, the tail $\Pr(\widetilde T_n > t)$ is of order $O(1/t^{n-1})$ as $t \to \infty$.

Remark 3.
In the best case, past studies give some bounds for n sufficiently large, without an exact value for "sufficiently large". Here, the bounds are valid and explicit for any n > q.
These bounds are motivated by statistical applications to the construction of non-asymptotic confidence intervals with conservative coverage probability in a semi-parametric setting. Self-normalized sums appear naturally in the context of empirical likelihood and its generalization to Cressie-Read divergences, see [11,15]. In particular, [5] shows how the bounds obtained here may be used to construct explicit non-asymptotic confidence regions, even when $q$ depends on $n$.

A.1 Some lemmas
The first lemma is a direct extension of Corollary 1 of Panchenko (2003) to the multidimensional case; it will be used in both Theorems 1 and 2.

Lemma 1.
Let $\mathcal{S}_q$ be the unit sphere of $\mathbb{R}^q$, $\mathcal{S}_q = \{\lambda \in \mathbb{R}^q : \|\lambda\|_2 = 1\}$. Let $Z^{(n)} = (Z_i)_{1 \le i \le n}$ and $Y^{(n)} = (Y_i)_{1 \le i \le n}$ be i.i.d. centered random vectors in $\mathbb{R}^q$, with $Z^{(n)}$ independent of $Y^{(n)}$. For any random vector $W = (W_i)_{1 \le i \le n}$ with coordinates in $\mathbb{R}^q$, we denote $S^2_{n,W} = \frac{1}{n} \sum_{i=1}^n W_i W_i'$; then, for all $t \ge 0$, inequality (9) holds.
Proof. This proof follows Lemma 1 of [16], with some adaptations to the multidimensional case. By Jensen's inequality, we have, Pr-almost surely, and, for any convex function $\Phi$, again by Jensen's inequality, we obtain $E(\Phi(A_n(Z^{(n)}))) \le E(\Phi(C_n(Z^{(n)}, Y^{(n)})))$.
Now remark that $\sup_{\lambda \in \mathcal{S}_q} \frac{\lambda' \bar Z_n}{\sqrt{\lambda' S_n^2 \lambda}} > 0$, and apply the arguments of the proof of Corollary 1 of [16] to inequality (11) to obtain the result.
The next lemma allows us to establish a non-exponential version of the preceding lemmas. Indeed, a consequence of this lemma is that, if the tail of the symmetrized version in inequality (9) is controlled by a chi-square tail, then the non-symmetrized version may be controlled by an exponential multiplied by a polynomial. The rate in the exponential is asymptotically correct.

Lemma 2. For any $t > q$, define $\Phi_t(x) = \max(x - t + q,\ 0)$. Let $\nu$ and $\xi$ be two r.v.'s such that, for any $t \ge 0$,
Then, for $t \ge 2q$, we have, and for $t > q$, we have.
Proof. We follow the lines of the proof of Panchenko's lemma, with the function $\Phi_t$. Remark that $\Phi_t(0) = 0$ and $\Phi_t(t) = q$; then, by integration by parts and straightforward calculations, the result follows for $t > q$. For $t \ge 2q$, we conclude using the recurrence relation 26.4.8 of [1], page 941.
We now extend a result of [3], which controls the behavior of the smallest eigenvalue of the empirical variance. In the following, for a given symmetric matrix $A$, we denote by $\mu_1(A)$ its smallest eigenvalue.
Lemma 3. Assume $E(\|Z\|_2^4) < +\infty$. Then, for any $n > q$ and $0 < t \le \mu_1(S^2)$, the bound below holds.
Proof. The proof of this result is adapted from [3] and makes use of some ideas of [4]. We first have, by a truncation argument and applying Markov's inequality to the last term in the inequality (see the proof of Lemma 4 of [3]), for every $M > 0$, an inequality whose first term on the right-hand side we call I.
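For intuition about the quantity $\mu_1(S_n^2)$ that Lemma 3 controls, here is a small numerical sketch for $q = 2$, where the smallest eigenvalue of the symmetric $2\times 2$ matrix has the closed form $(\operatorname{tr} - \sqrt{\operatorname{tr}^2 - 4\det})/2$ (function names `mu1_2x2` and `mu1_empirical` are ours).

```python
def mu1_2x2(A):
    """Smallest eigenvalue mu_1 of a symmetric 2x2 matrix [[a, b], [b, c]],
    via the closed form (tr - sqrt(tr^2 - 4 det)) / 2."""
    a, b, c = A[0][0], A[0][1], A[1][1]
    tr, det = a + c, a * c - b * b
    disc = max(tr * tr - 4.0 * det, 0.0)  # guard against round-off
    return (tr - disc ** 0.5) / 2.0

def mu1_empirical(Z):
    """mu_1(S_n^2) for a sample of n vectors in R^2, with
    S_n^2 = (1/n) sum_i Z_i Z_i'."""
    n = len(Z)
    S2 = [[sum(z[j] * z[k] for z in Z) / n for k in range(2)]
          for j in range(2)]
    return mu1_2x2(S2)
```

A sample concentrated on a single direction makes $S_n^2$ rank deficient and $\mu_1(S_n^2) = 0$, which is exactly the degeneracy that the lower tail bound of Lemma 3 rules out with high probability for $n > q$.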
Notice that, by symmetry of the sphere, we can always work with the northern hemisphere rather than the whole sphere. In the following, we denote by $\mathcal{S}_q^+$ the northern hemisphere. Notice that, if the supremum of the $\|Z_i\|_2$ is smaller than $M$, then for $u, v$ in $\mathcal{S}_q^+$ we have a Lipschitz-type control; thus $u$ and $v$ may be taken apart by at most $t\eta/(2M^2)$. Let $N(\mathcal{S}_q^+, \epsilon)$ be the smallest number of caps of radius $\epsilon$ centered at points of $\mathcal{S}_q^+$ (for the $\|\cdot\|_2$ norm) needed to cover $\mathcal{S}_q^+$. We now follow the same arguments as [3] to control I: I is bounded by the sum of the probabilities that the infimum of $\sum_{i=1}^n (v'Z_i)^2$ over each cap is smaller than $nt$ and that $\sup_{i=1,\dots,n}\|Z_i\|_2 \le M$. We bound this sum by the number of caps times the largest probability, for any $\eta > 0$. The proof is now divided into three steps: i) control of $N(\mathcal{S}_q^+, \frac{t\eta}{2M^2})$; ii) control of the maximum over $\mathcal{S}_q^+$ of the last expression in I; iii) optimization over all the free parameters.
i) On the one hand, we have, for some constant $b(q) > 0$, $N(\mathcal{S}_q^+, \epsilon) \le b(q)/\epsilon^{q-1}$. For instance, we may choose $b(q) = \pi^{q-1}$. Indeed, following [3], the northern hemisphere can be parameterized in polar coordinates, realizing a diffeomorphism with $\mathcal{S}_{q-1}^+ \times [0, \pi]$. Now proceed by induction: for $q = 2$, the half circle can be covered by $([\pi/2\epsilon] \vee 1) + 1 \le 2([\pi/2\epsilon] \vee 1) \le (\pi/\epsilon) \vee 1$ caps of diameter $2\epsilon$; that is, we can choose the caps with their centers on an $\epsilon$-grid on the circle. By induction, we can then cover the cylinder $\mathcal{S}_{q-1}^+ \times [0, \pi]$ with at most $\pi^{q-1}/\epsilon^{q-1}$ intersecting cylinders, which in turn can be mapped to regions contained in caps of radius $\epsilon$ covering the whole hemisphere (this is still a covering because the mapping from the cylinder to the sphere is contractive).
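The $q = 2$ base case of this covering argument is easy to make concrete: placing cap centers on an $\epsilon$-grid along the half circle (arc length $\pi$) uses $\lceil \pi/2\epsilon \rceil \vee 1$ caps, consistent with the bound $(\pi/\epsilon) \vee 1$. The helper name `caps_half_circle` below is ours.

```python
import math

def caps_half_circle(eps):
    """Number of caps of radius eps needed to cover the half circle when
    centers sit on an eps-grid along the arc: each cap covers an arc of
    length 2*eps, and chordal (Euclidean) distance is at most arc-length
    distance, so this grid also covers with Euclidean caps."""
    return max(1, math.ceil(math.pi / (2.0 * eps)))
```

The count grows like $\epsilon^{-1}$, matching $b(2)/\epsilon^{2-1} = \pi/\epsilon$; in higher dimension the induction multiplies one such factor per coordinate, giving the $\epsilon^{-(q-1)}$ rate.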
Therefore, for such $t$ and all $M > 0$, we get a bound for $\Pr(\mu_1(S_n^2) \le \mu_1(S^2) - t)$. We now optimize in $M^2 > 0$; the optimum yields the bound. Using the constant $b(q) = \pi^{q-1}$, we get the expression of $C(q)$, which is bounded by the simpler expression $4\pi^2(q+1)e^2(q-1)$ (for large $q$ this bound will be sufficient). For all $a > 0$ and all $t > 0$, we have. We now use Lemma 3 applied to the r.v.'s $(S^{-1}Z_i)_{1 \le i \le n}$, whose covariance matrix is the identity $I_q$. It is easy to check that the corresponding kurtosis coincides with $\gamma_4$. For all $a > 1$ we have, and since $\inf_{a > -1} \le \inf_{a > 1}$, we conclude that the stated bound holds for any $t > 0$. We achieve the proof by noticing that $\gamma_4 \ge q^2$, which follows from Jensen's inequality and $E(\|S^{-1}Z\|_2^2) = q$.