On a concentration inequality for sums of independent isotropic vectors

We consider a version of a classical concentration inequality for sums of independent, isotropic random vectors with a mild restriction on the distribution of the radial part of these vectors. The proof uses a little Fourier analysis, the Laplace asymptotic method and a conditioning idea that traces its roots to some of the original works on concentration inequalities.


Introduction
The purpose of this paper is to give a multi-dimensional version of a concentration inequality that traces its origins to P. Lévy [14]. These Lévy-type concentration results assert a rate of decay on the maximal amount of mass that the distribution of a sum of independent, nondegenerate random variables can place in an arbitrary interval. We remark that this is in contrast to another version of the concentration phenomenon, see [13] and the vast literature listed there, in which measures concentrate most of their mass on particular subsets of a measure space. However, the proof of our result relies on an early instance of the latter type of concentration phenomenon due to Bernstein and Hoeffding. Motivation to study the concentration inequality in the context of isotropic random vectors arises from a variety of sources. Sums of independent isotropic vectors are the so-called isotropic random flights, which arise in problems in astronomy, [2], [4]. Such sums appear in random searches in the context of biological encounters, [3]. They are also used as polymer models in chemistry, [6], [8]. Yet another example of isotropic random vectors arises in a discrete model for the theory of magnetic fields generated by a turbulent medium, [15], where random vectors of the form M ξ appear, with M a random element of the orthogonal group O(3) and the random vector ξ independent of M.
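As a small illustration of the objects just described (not part of the paper's argument), an isotropic vector in R^d can be sampled by scaling a uniformly distributed direction by a radius that is independent of it; a sum of such steps is an isotropic random flight. The function names below are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def isotropic_vector(radius, d, rng):
    """Sample an isotropic vector in R^d: a uniform direction scaled by a radius.

    Normalizing a standard Gaussian gives a uniform point on the unit sphere,
    so radius * (g / |g|) has a rotation-invariant law whenever the radius
    is independent of g.
    """
    g = rng.standard_normal(d)
    return radius * g / np.linalg.norm(g)

def random_flight(radii, d, rng):
    """A 'random flight': the sum S_n of n independent isotropic steps."""
    return sum(isotropic_vector(r, d, rng) for r in radii)
```

By rotation invariance of the Gaussian, applying any fixed orthogonal matrix U to the output leaves its distribution unchanged, which is exactly the isotropy property used throughout the paper.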
Early works on concentration inequalities in the real-valued case are due to Döblin [5], Kolmogorov [11], Lévy [14] and Rogozin [17]. The higher dimensional case has been treated by Kanter [10]. In [10], the author considered independent, R^N-valued, symmetric random vectors X_1, ..., X_n, and bounded the mass that their sum can place on a translate of a convex set C ⊂ R^N in terms of the function Φ(s) = e^{-s}(I_0(s) + I_1(s)), where I_0 and I_1 are the modified Bessel functions given by I_ν(s) = Σ_{m≥0} (s/2)^{2m+ν} / (m! (m+ν)!), ν = 0, 1. Our work considers the convex set C = Q(l), the cube Q(l) = [0, l]^d. As remarked in [10], Φ(n/2) ≥ c n^{-1/2}. Thus the Kanter result does not contain the dimensional dependence which does appear in our bound on the concentration function in Theorem 2.1.
The authors wish to thank the referee for pointing out the Lévy decoupage technique [1], used in our conditioning argument at (3.6) below, which greatly improved an earlier version of the paper.

Statement of Result
Let us give a precise statement of the result proven by Kolmogorov [11]. Suppose {ξ_k}_{k≥1} are independent random variables defined on some probability space (Ω, F, P). Set S_n = Σ_{k=1}^n ξ_k and define, for l > 0, the concentration function

Q_n(l) = sup_{x ∈ R} P(x ≤ S_n ≤ x + l).

Kolmogorov's version of the concentration inequality states that there exists a universal constant for which Q_n(l) admits a bound of order n^{-1/2} under suitable nondegeneracy assumptions on the summands. We prove a higher dimensional version of this result. Due to possible degeneracies, the higher dimensional result will not hold in general, but a natural condition to impose on random vectors under which the result turns out to be true is isotropy. We say an R^d-valued random vector X on a probability space (Ω, F, P) is isotropic if P X^{-1} = P(UX)^{-1} for every U ∈ O(d), the group of orthogonal matrices. With this definition we have,

Theorem 2.1. Let X_1, X_2, ... be independent, isotropic random vectors with values in R^d, d ≥ 2, and put S_n = X_1 + X_2 + ... + X_n. Let l > 0 and L > 0 be given. Define, for a > 0, the cube Q(a) = [0, a]^d. (2.1)

ECP 17 (2012), paper 27.
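To make the quantity being bounded concrete, one can estimate a lower-bound proxy for the multidimensional concentration function sup_x P(S_n ∈ x + [0, l]^d) by Monte Carlo: bin samples of S_n into a grid of cells of side l and take the heaviest cell's empirical mass. This is an illustrative sketch of our own (grid anchors only approximate the supremum over all x), not a computation from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def concentration_proxy(n_steps, l, d, n_samples, rng):
    """Monte Carlo lower-bound proxy for Q_n(l) = sup_x P(S_n in x + [0, l]^d).

    Steps are isotropic with unit radius; samples of S_n are binned into
    cubes of side l and the maximal empirical cell mass is returned.
    """
    g = rng.standard_normal((n_samples, n_steps, d))
    steps = g / np.linalg.norm(g, axis=2, keepdims=True)  # unit isotropic steps
    s = steps.sum(axis=1)                                  # samples of S_n
    cells = np.floor(s / l).astype(np.int64)               # grid cell labels
    _, counts = np.unique(cells, axis=0, return_counts=True)
    return counts.max() / n_samples
```

Running this for growing n shows the spreading-out effect that Theorem 2.1 quantifies: the maximal mass any small cube can capture decays as more independent isotropic steps are added.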
There are various immediate corollaries that may be derived from this. We state two such.

Corollary 2.2. Assume in addition to the conditions above that
Then there is a positive constant c, independent of n, l and L, such that,

Proof. The conditions of the corollary ensure that the solution ε of the equation in question is less than 1/4. This follows since the left-hand side minus the right-hand side is increasing in ε, and this forces the solution to be less than 1/4. Since ε is then bounded away from 1, we can take this in (2.1) and find the c as claimed by using Theorem 2.1.
Another possibility is the following.
Corollary 2.3. There is a constant c(d) such that for every x we have

Proof. In this case, Σ_{i=1}^n p_i ≥ n/2 and L = l = 1. Taking ε = 1/2 in Theorem 2.1, the corollary holds with a suitable choice of the constant c.

Proof of the Concentration Inequality
We commence with two lemmas; the first is a local central limit theorem. This is likely a known result, and as the proof is short we include it for completeness.
Given k, one selects coordinates on the sphere φ_1, φ_2, ..., φ_{d−1}, with φ_i ∈ [0, π) for i = 1, ..., d − 2 and φ_{d−1} ∈ [0, 2π), so that k · y = |k| cos φ_1 for y on the unit sphere. The volume form on the sphere, normalized to have total mass one, is proportional in these coordinates to sin^{d−2}(φ_1) sin^{d−3}(φ_2) ··· sin(φ_{d−2}) dφ_1 ··· dφ_{d−1}. Then, since the characteristic function of a random vector Y which is uniformly distributed on the sphere of radius 1 in R^d is real,

ψ_d(|k|) ≡ E[e^{i k·Y}] = E[cos(k · Y)].

For d = 2, by [12] page 115, ψ_2(r) = J_0(r), where J_0(z) is the 0th order Bessel function of the first kind given by

J_0(z) = Σ_{m≥0} (−1)^m (z/2)^{2m} / (m!)^2.

For d ≥ 3, we have by [12] page 114,

ψ_d(r) = Γ(d/2) (2/r)^{d/2−1} J_{d/2−1}(r),

where J_{d/2−1} is the order d/2 − 1 Bessel function of the first kind. Since ψ_d is expressed through a Bessel function for any d ≥ 2, we need to check the asymptotics of the right hand side.
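The closed forms above are easy to check numerically. The sketch below (our own, not from the paper) compares a Monte Carlo estimate of E[cos(k · Y)], for Y uniform on the unit sphere, with the partial sum of the J_0 series for d = 2 and with sin(r)/r for d = 3; the latter is what Γ(3/2)(2/r)^{1/2} J_{1/2}(r) simplifies to.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)

def psi_mc(r, d, n_samples, rng):
    """Monte Carlo estimate of psi_d(r) = E[cos(k . Y)] for Y uniform on the
    unit sphere in R^d and |k| = r; by rotation invariance take k = (r, 0, ..., 0)."""
    g = rng.standard_normal((n_samples, d))
    y = g / np.linalg.norm(g, axis=1, keepdims=True)
    return np.cos(r * y[:, 0]).mean()

def j0_series(z, terms=40):
    """Partial sum of J_0(z) = sum_m (-1)^m (z/2)^{2m} / (m!)^2."""
    return sum((-1) ** m * (z / 2) ** (2 * m) / factorial(m) ** 2
               for m in range(terms))
```

For moderate r the Monte Carlo estimates agree with J_0(r) (d = 2) and sin(r)/r (d = 3) to within sampling error.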
In order to determine the asymptotics at (3.1), we need the r → ∞ decay of ψ_d(r). For d ≥ 2, from [12] page 134, for some positive constant c,

(3.2) |ψ_d(r)| ≤ c r^{−(d−1)/2}, r ≥ 1.

We use the fact that in any dimension d ≥ 2, c_d(ε) ≡ sup_{r≥ε} |ψ_d(r)| < 1, to show that the second integral dies off exponentially fast in m. In fact, by (3.2), the second integral is bounded in m, and proceeding with the Laplace asymptotic method gives the claimed asymptotics.

Lemma 3.2. Let Y_1, Y_2, ..., Y_m be independent random vectors with Y_i uniformly distributed on the sphere of radius R_i, i = 1, ..., m. Assume there is a number l > 0 such that each R_i ≥ l and that the R_i are non-random. Let m be an even integer. Then for L > 0 and any x ∈ R^d,

Proof. It suffices to provide an appropriate bound on the L^∞ norm of the density of Y_1 + ··· + Y_m. In fact, for m even, by the arithmetic-geometric mean and Jensen's inequalities, where the last line follows from Lemma 3.1. This completes the proof.
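The two analytic facts used here, that sup_{r≥ε} |ψ_d(r)| stays strictly below 1 and that |ψ_d(r)| obeys a power-law envelope of order r^{−(d−1)/2}, can be checked directly in the explicit case d = 3, where ψ_3(r) = sin(r)/r and (d−1)/2 = 1. This is a numerical sanity check of our own, not part of the proof.

```python
import numpy as np

# psi_3(r) = sin(r)/r, the explicit d = 3 characteristic function.
r = np.linspace(0.25, 200.0, 400000)   # grid with eps = 0.25
psi3 = np.sin(r) / r

# c_3(eps) = sup_{r >= eps} |psi_3(r)| should be strictly below 1 ...
c3 = np.abs(psi3).max()

# ... and the envelope |psi_3(r)| <= r^{-(d-1)/2} = 1/r holds (here with c = 1).
envelope_ok = np.all(np.abs(psi3) <= r ** (-(3 - 1) / 2))
```

The supremum is attained near the left endpoint of the grid, consistent with ψ_3(r) → 1 only as r → 0, which is why the bound c_d(ε) < 1 requires bounding r away from zero.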
We can now prove the main result, Theorem 2.1.
Proof. We use the Lévy decoupage technique [1] to decompose the random variables X_i based on whether |X_i| ≥ l or |X_i| < l. Set p_i = P(|X_i| ≥ l), and let {η_i : 1 ≤ i ≤ n} be independent indicator variables recording which of the two alternatives occurs. Recall that we are assuming that p_i ≥ 1/2, i = 1, 2, ..., n. For i = 1, 2, ..., n, let the random variables {U_i : 1 ≤ i ≤ n} and {V_i : 1 ≤ i ≤ n} be independent, with the conditional radial laws on {|X_i| ≥ l} and {|X_i| < l} respectively. Then, as above, take {Y_i : 1 ≤ i ≤ n} to be uniformly distributed on the unit sphere in R^d and independent of the previously defined random variables. In an obvious notation we may take the probability measure P as P = P_{(U,V)} × P_η × P_Y, and our original random variables may be represented accordingly. Notice that {Y_i^1 : 1 ≤ i ≤ n, η_i = 0} satisfies the conditions of Lemma 3.2 with respect to the probability measure P_η × P_Y a.s. Denote the vector of outcomes of the sequence {η_i : 1 ≤ i ≤ n} by ζ_n = (η_1, η_2, ..., η_n).
Now integrate with respect to P_{(U,V)} to complete the proof.
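The decoupage idea can be sketched in code: each isotropic step factors into a radius, an indicator of the event {|X_i| ≥ l}, and an independent uniform direction, mirroring the product measure P_{(U,V)} × P_η × P_Y. Conditionally on the indicator vector ζ_n, the steps flagged as large all have radius at least l, which is the hypothesis of Lemma 3.2. This is an illustrative sketch under our own conventions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)

def decoupage(radii, l, d, rng):
    """Split isotropic steps X_i = R_i Y_i by whether |X_i| >= l.

    Returns the steps and the indicator vector eta (the role of zeta_n):
    eta_i = 1 on {|X_i| >= l}. The direction Y_i is drawn independently of
    the radii, reflecting the factorization used in the proof. The 0/1
    convention for eta here is an assumption of this sketch.
    """
    radii = np.asarray(radii, dtype=float)
    eta = (radii >= l).astype(int)
    g = rng.standard_normal((radii.size, d))
    y = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform directions
    x = radii[:, None] * y                             # the steps X_i
    return x, eta
```

Conditioning on eta and applying the density bound of Lemma 3.2 to the large-radius block, then averaging over the remaining randomness, is the shape of the argument that the final integration over P_{(U,V)} completes.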