Improved Rademacher symmetrization through a Wasserstein based measure of asymmetry

We propose an improved version of the ubiquitous symmetrization inequality, making use of the Wasserstein distance between a measure and its reflection in order to quantify the symmetry of that measure. An empirical bound on the asymmetry correction term is derived through a bootstrap procedure and shown to give tighter results in practical settings than the original uncorrected inequality. Lastly, a wide range of applications are detailed, including testing for data symmetry, constructing nonasymptotic high dimensional confidence sets, bounding the variance of an empirical process, and improving constants in Nemirovski style inequalities for Banach space valued random variables.


Introduction
The symmetrization inequality is a ubiquitous result in the probability in Banach spaces literature and in the concentration of measure literature. Dating back at least to Paul Lévy, it is found in the classic text of Ledoux and Talagrand (1991), Section 6.1, and the more recent Boucheron et al. (2013), Section 11.3. Giné and Zinn (1984) use symmetrization in the context of empirical process theory, followed by a collection of more recent appearances such as Panchenko (2003); Koltchinskii (2006); Giné and Nickl (2010); Arlot et al. (2010); Lounici and Nickl (2011); Kerkyacharian et al. (2012); Fan (2011).
The symmetrization inequality is as follows. Let (B, ‖·‖) be a Banach space, and let X_1, ..., X_n ∈ B be independent and identically distributed random variables with measure µ. Let ε_1, ..., ε_n be independent and identically distributed Rademacher random variables, which are such that P(ε_i = 1) = P(ε_i = −1) = 1/2. These are sometimes referred to as symmetric Bernoulli random variables or random signs. The symmetrization inequality is
$$
  \mathbb{E}\Big\| \sum_{i=1}^n (X_i - \mathbb{E}X_i) \Big\| \le 2\,\mathbb{E}\Big\| \sum_{i=1}^n \varepsilon_i X_i \Big\|.
$$
This can be readily proved via Jensen's inequality and the insight that if Z is a symmetric random variable, that is Z =_d −Z, then Z =_d εZ. The most notable oversight of this result is that it does not incorporate any measure of the symmetry of the data. Specifically, in the extreme case that the X_i are symmetric about their mean, the coefficient of 2 can be dropped and the inequality becomes an equality. Taking note of this fact, Arlot et al. (2010) state that "it can be shown that this factor of 2 is unavoidable in general for a fixed n when the symmetry assumption is not satisfied, although it is unnecessary when n goes to infinity." They furthermore "conjecture that an inequality holds under an assumption less restrictive than symmetry (e.g., concerning an appropriate measure of skewness of the distribution)." Hence, in response to this conjecture, we propose an improved symmetrization inequality making use of the Wasserstein distance and Hilbert space geometry in order to account for the symmetry, or lack thereof, of the distribution of the X_i under analysis. The main contribution of this paper is that for a Hilbert space H and X_1, ..., X_n ∈ H iid random variables with measure µ, there is a fixed constant C(µ), depending only on and quantifying the symmetry of the underlying measure µ of the X_i, such that
$$
  \frac{1}{\sqrt{n}}\,\mathbb{E}\Big\| \sum_{i=1}^n (X_i - \mathbb{E}X_i) \Big\| \le \frac{1}{\sqrt{n}}\,\mathbb{E}\Big\| \sum_{i=1}^n \varepsilon_i X_i \Big\| + C(\mu).
$$
This result is detailed and proved in Section 2.2. Furthermore, an empirical bound, C_n(µ), on the constant C can be calculated as is done in Section 3.
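The classical argument behind the symmetrization inequality can be sketched in a few lines; the rendering below is our own summary of the standard proof, writing X_i′ for an independent copy of X_i:

```latex
% Jensen's inequality (conditioning on the X_i), symmetry of X_i - X_i',
% and the triangle inequality:
\begin{aligned}
\mathbb{E}\Big\|\sum_{i=1}^n (X_i - \mathbb{E}X_i)\Big\|
  &\le \mathbb{E}\Big\|\sum_{i=1}^n (X_i - X_i')\Big\|
   = \mathbb{E}\Big\|\sum_{i=1}^n \varepsilon_i (X_i - X_i')\Big\| \\
  &\le 2\,\mathbb{E}\Big\|\sum_{i=1}^n \varepsilon_i X_i\Big\|.
\end{aligned}
```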
In the case that the distribution of the X_i is symmetric, our data driven estimate C_n(X) converges to the desired value of zero at a rate of n^{-(1/2+δ)} for some δ ∈ (0, 1/2] depending on the dimension, that is, up to an n^{-1} rate of convergence for the additive term above. Applications of this result to testing the symmetry of a data set, constructing nonasymptotic high dimensional confidence sets, bounding the variance of an empirical process, and improving coefficients in probabilistic inequalities in the Banach space setting are given in Section 4.

Definitions
We first require the standard notions of Wasserstein distance and Wasserstein space as stated below. For a thorough introduction to such topics, see Villani (2008).
Definition 2.1 (Wasserstein Distance). Let (X, d) be a Polish space and p ∈ [1, ∞). For two probability measures µ and ν on X, the Wasserstein p distance is
$$
  W_p(\mu, \nu) = \Big( \inf_{\gamma} \int_{\mathcal{X}\times\mathcal{X}} d(x, y)^p \, d\gamma(x, y) \Big)^{1/p},
$$
where the infimum is taken over all measures γ on X × X with marginals µ and ν.
An equivalent and useful formulation of the Wasserstein distance is
$$
  W_p(\mu, \nu) = \inf \big( \mathbb{E}\, d(X, Y)^p \big)^{1/p},
$$
where the infimum is taken over all possible joint distributions of X and Y with marginals µ and ν, respectively.
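For intuition, the coupling formulation is directly computable in one dimension, where the optimal coupling of two equal-size empirical measures simply pairs order statistics. A minimal stdlib sketch (the function names are ours, not from the paper):

```python
def empirical_wp(xs, ys, p=2):
    """W_p between the empirical measures of two equal-size 1-D samples.

    In one dimension the optimal coupling is monotone: it pairs the
    i-th order statistic of xs with the i-th order statistic of ys.
    """
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    cost = sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)
    return cost ** (1.0 / p)

def asymmetry_w2(xs):
    # W_2 between the empirical measure and its reflection: compare
    # the sample with its negation.
    return empirical_wp(xs, [-x for x in xs], p=2)
```

An exactly symmetric sample has asymmetry zero; for instance `asymmetry_w2([-1.0, 0.0, 1.0])` returns 0.0.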
Definition 2.2 (Wasserstein Space). Let P(X) be the space of probability measures on X. The Wasserstein space is
$$
  P_p(\mathcal{X}) = \Big\{ \mu \in P(\mathcal{X}) : \int_{\mathcal{X}} d(x_0, x)^p \, d\mu(x) < \infty \Big\}
$$
for any arbitrary choice of x_0 ∈ X. This is the space of measures with finite pth moment.
Convergence in Wasserstein space is characterized by weak convergence of measure and convergence in pth moment. From Theorem 6.8 of Villani (2008), convergence in Wasserstein distance is equivalent to weak convergence in P_p(X). Hence, for a sequence of measures µ_n, W_p(µ_n, µ) → 0 if and only if µ_n converges weakly to µ and the pth moments of µ_n converge to those of µ. Secondly, we will make use of empirical measures and the already mentioned Rademacher random variables.
Definition 2.3 (Empirical Measure). For independent and identically distributed random variables X_1, ..., X_n, the empirical measure is a random measure defined as
$$
  \mu_n(A) = \frac{1}{n} \sum_{i=1}^n \mathbb{1}_{X_i \in A}
$$
for any measurable set A. We will denote the empirical measure of the reflected variables −X_1, ..., −X_n by µ_n^−.

Symmetrization Result
In the following lemma, we bound the expectation of a Lipschitz function of the centred sum by the sum of a "symmetric" term and an "asymmetric" term.
Lemma 2.5. Let H be a Hilbert space, and let X_1, ..., X_n ∈ H be independent and identically distributed random variables with common law µ. Define µ^− to be the law of −X_1. Furthermore, let ε_1, ..., ε_n be independent and identically distributed Rademacher random variables, also independent of the X_i. Then, for any 1-Lipschitz function ψ,
$$
  \mathbb{E}\,\psi\Big( \Big\| \sum_{i=1}^n (X_i - \mathbb{E}X_i) \Big\| \Big) \le \mathbb{E}\,\psi\Big( \Big\| \sum_{i=1}^n \varepsilon_i X_i \Big\| \Big) + \sqrt{\frac{n}{2}}\, W_2(\mu, \mu^-),
$$
where W_2 is the Wasserstein 2 distance.
This lemma leads immediately to the following theorem. The intuition behind this theorem is that averaging a collection of random variables has an inherent smoothing and symmetrizing effect. Thus, as the sample size n increases, the difference between the expectations of the true average and the Rademacher average becomes negligible.
Theorem 2.6. Using the setting of Lemma 2.5 with either of the following two conditions, that 1. ψ is additionally positive homogeneous (e.g. a norm), or 2. the metric d is positive homogeneous in the sense that for a ∈ R, d(ax, ay) = |a| d(x, y), we have
$$
  \mathbb{E}\,\psi\Big( \frac{1}{\sqrt{n}} \Big\| \sum_{i=1}^n (X_i - \mathbb{E}X_i) \Big\| \Big) \le \mathbb{E}\,\psi\Big( \frac{1}{\sqrt{n}} \Big\| \sum_{i=1}^n \varepsilon_i X_i \Big\| \Big) + \frac{1}{\sqrt{2}}\, W_2(\mu, \mu^-).
$$
Proof. Run the proof of Lemma 2.5 after swapping the sums for their n^{-1/2}-scaled counterparts. Under condition 1, the result is immediate. Under condition 2, let µ be the law of (X_i − EX_i) as before. Then, redefining µ*_n to be the law of n^{-1/2} Σ_{i=1}^n (X_i − EX_i), we have
$$
  W_2\Big( \mu^*_n, \frac{\mu^*_n + \mu^{*-}_n}{2} \Big)^2 \le \inf \mathbb{E}\, d(X, Y)^2,
$$
where the infimum is taken over all joint distributions of X and Y with marginals µ and (µ + µ^−)/2, respectively. The desired result follows.
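The 1/√2 appearing here can be traced to the mixture (µ + µ^−)/2; the following derivation is our own gloss on where the constant arises, not a quotation from the proof. Couple µ to the mixture by leaving each point in place with probability 1/2 and otherwise transporting it along an optimal coupling (X, Y) of µ and µ^−:

```latex
W_2\Big(\mu, \tfrac{\mu + \mu^-}{2}\Big)^2
  \le \tfrac{1}{2}\,\mathbb{E}\,d(X, X)^2 + \tfrac{1}{2}\,\mathbb{E}\,d(X, Y)^2
  = \tfrac{1}{2}\,W_2(\mu, \mu^-)^2,
\qquad\text{so}\qquad
W_2\Big(\mu, \tfrac{\mu + \mu^-}{2}\Big) \le \tfrac{1}{\sqrt{2}}\, W_2(\mu, \mu^-).
```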
Empirical estimate of W_2(µ, µ^−)
In order to explicitly make use of the above results, an empirical estimate of W_2(µ, µ^−) is required. We first establish the following bound.
Proposition 3.1. Let X_1, ..., X_n be iid with law µ and let Y_1, ..., Y_n be iid with law ν. Furthermore, let µ_n and ν_n be the empirical distributions arising from µ and ν, respectively. Then,
$$
  W_2(\mu, \nu) \le \big( \mathbb{E}\, W_2(\mu_n, \nu_n)^2 \big)^{1/2}.
$$
Proof. The following infima are taken over the joint distributions of the random variables in question. Let X and Y be random variables with laws µ and ν, respectively. Also, let S_n be the group of permutations on n elements. Then
$$
  W_2(\mu, \nu)^2 = \inf \mathbb{E}\, d(X, Y)^2 \le \mathbb{E}\Big[ \min_{\sigma \in S_n} \frac{1}{n} \sum_{i=1}^n d(X_i, Y_{\sigma(i)})^2 \Big] = \mathbb{E}\, W_2(\mu_n, \nu_n)^2,
$$
where the inequality arises by replacing the infimum over all possible joint distributions of X and Y with the specific joint distribution induced by an optimal matching of the samples.
The following subsections establish that it is reasonable to replace W 2 (µ, µ − ) with a data driven estimate of EW 2 (µ n , µ − n ) in Lemma 2.5 and Theorem 2.6. Rates of convergence of W 2 (µ n , µ − n ) are presented, and a bootstrap estimator for EW 2 (µ n , µ − n ) is proposed and tested numerically.

Rate of convergence of empirical estimate
As W_p(·,·) is a metric, the triangle inequality implies that
$$
  W_2(\mu, \mu^-) \le W_2(\mu, \mu_n) + W_2(\mu_n, \mu_n^-) + W_2(\mu_n^-, \mu^-),
$$
and therefore, since W_2(µ_n^−, µ^−) = W_2(µ_n, µ),
$$
  W_2(\mu, \mu^-) - W_2(\mu_n, \mu_n^-) \le 2\, W_2(\mu_n, \mu).
$$
By Lemma A.4, W_p(µ, µ_n) → 0 with probability one, making the discrepancy negligible for large data sets. However, it is also possible to get a hard upper bound on this term; specifically, the recent work of Fournier and Guillin (2015) proposes explicit moment bounds on W_p(µ, µ_n). Their result can be used to demonstrate the speed with which our empirical measure of asymmetry, W_2(µ_n, µ_n^−), converges to zero when µ is symmetric. In the case that µ is symmetric, W_2(µ, µ^−) = 0, and the ideal correction term is equal to zero. This implies that our empirical bound satisfies W_2(µ_n, µ_n^−) ≤ 2 W_2(µ_n, µ). Therefore, the moment bound from Theorem 1 of Fournier and Guillin (2015) implies that
$$
  \mathbb{E}\, W_2(\mu_n, \mu_n^-) = O\big( n^{-(1/2 + \delta)} \big),
$$
where δ ∈ (0, 0.5] depends on the specific moment used and the dimensionality of the measure. Thus, the empirical Wasserstein distance achieves a faster convergence rate in the symmetric case than the general rate of n^{-1/2}. The tightness of the bounds proposed in Fournier and Guillin (2015) was tested experimentally. While the moment bounds are certainly of theoretical interest, implementing these bounds resulted in an inequality less sharp than the original symmetrization inequality. However, the bootstrap procedure detailed in the following section does produce a practically useful estimate of the expected empirical Wasserstein distance.

Bootstrap Estimator
We propose a bootstrap procedure to estimate the expected Wasserstein distance between the empirical measure and its reflection, EW_2(µ_n, µ_n^−). Given observations x_1, ..., x_n, let µ̂_n be the empirical measure of the data. Then, for some specified m, two sets Y_1, ..., Y_m and Z_1, ..., Z_m can be sampled as independent draws from µ̂_n. The goal is to move a mass of 1/m from each of the Y_i to each of the negated −Z_j in an optimal fashion. Hence, the m × m matrix of pairwise distances is constructed with entries A_{i,j} = d(Y_i, −Z_j), which can be accomplished in O(m²) time. From here, the problem reduces to a linear assignment problem, a specific instantiation of a minimum-cost flow problem from linear programming (Ahuja et al., 1993). That is, given a complete bipartite graph with vertices L ∪ R such that |L| = |R| = m and with weighted edges, we wish to construct a perfect matching minimizing the total sum of the edge weights. Here, the weights are the pairwise distances A_{i,j}. This linear program can be efficiently solved in O(m³) time via the Hungarian algorithm (Kuhn, 1955). For more on linear programs in the probabilistic setting, see Steele (1997).
This estimated distance can be averaged over multiple bootstrapped samples, though, in general, only a few replications are necessary to achieve a stable estimate, as the bootstrap estimator has a very small variance. Indeed, to see this, consider the bounded difference inequality detailed in Section 3.2 of Boucheron et al. (2013), which is a direct corollary of the Efron-Stein-Steele inequality (Efron and Stein, 1981; Steele, 1986; Rhee and Talagrand, 1986).
Definition 3.2 (A function of bounded differences). For X some measurable space and a real valued function f : X^n → R, f is said to have the bounded differences property if for all i = 1, ..., n,
$$
  \sup_{x_1, \dots, x_n,\, x_i'} \big| f(x_1, \dots, x_i, \dots, x_n) - f(x_1, \dots, x_i', \dots, x_n) \big| \le c_i .
$$
Theorem 3.3 (Bounded differences, Boucheron et al. (2013)). If f has the bounded differences property with constants c_1, ..., c_n, then
$$
  \mathrm{Var}\big( f(X_1, \dots, X_n) \big) \le \frac{1}{4} \sum_{i=1}^n c_i^2 .
$$
In our setting, Y_i and Z_i for i = 1, ..., m are independent random variables with law µ̂_n. The function f(Y_1, ..., Y_m, Z_1, ..., Z_m) is the value of the optimal matching from the {Y_i} to the {−Z_i}. This f is, in fact, a function of bounded differences, because modifying a single argument will at most change the optimal value by c = m^{-1}( max_{i,j=1,...,n} {d(x_i, −x_j)} − min_{i,j=1,...,n} {d(x_i, −x_j)} ) = C/m. Thus, from the bounded differences theorem,
$$
  \mathrm{Var}(f) \le \frac{1}{4} \sum_{i=1}^{2m} \Big( \frac{C}{m} \Big)^2 = \frac{C^2}{2m} .
$$
Therefore, if m is chosen to be of order n, as in the numerical experiments below, then the variance of the bootstrap estimate decays at a rate of O(n^{-1}). The proposed bootstrap procedure was experimentally tested on both high dimensional Rademacher and Gaussian data as will be seen in Sections 3.3.1 and 3.3.2. For each replication, the observed data was randomly split in half. That is, given a random permutation ρ ∈ S_n, the symmetric group on n elements, the Hungarian algorithm was run to calculate the cost of an optimal perfect matching between {X_ρ(1), ..., X_ρ(n/2)} and {−X_ρ(n/2+1), ..., −X_ρ(n)}.

Numerical Experiments
From Proposition 3.1, there is an obvious positive bias in our new symmetrization inequality when using the Wasserstein distance between the empirical measures, W_2(µ_n, µ_n^−), in lieu of the Wasserstein distance between the unknown underlying measures, W_2(µ, µ^−). This is specifically troublesome when µ is symmetric or nearly symmetric. That is, if W_2(µ, µ^−) = 0, then barring trivial cases, the distance between the empirical measures will be positive with positive probability. However, as stated in Lemma A.4, W_2(µ_n, µ_n^−) → 0 with probability one, which still makes this approach superior to the standard symmetrization inequality. In the following subsections, we compare the magnitude of the expected symmetrized sum and the asymmetric correction term, which are, respectively,
$$
  R_n = \frac{1}{\sqrt{n}}\, \mathbb{E}\Big\| \sum_{i=1}^n \varepsilon_i X_i \Big\|
  \qquad\text{and}\qquad
  C_n = \frac{1}{\sqrt{2}}\, W_2(\mu_n, \mu_n^-).
$$
The goal is to demonstrate through numerical simulations that the latter is smaller than the former, and thus that the newly proposed R_n + C_n is a sharper upper bound than the original 2R_n for n^{-1/2} E‖Σ_{i=1}^n (X_i − EX_i)‖.

Rademacher Data
For a dimension k and sample sizes n ∈ {2, 4, 8, ..., 256}, the data for this first numerical test was generated from a multivariate symmetric Rademacher distribution. That is, for a size n iid sample from this distribution, X_1, ..., X_n, let X_{i,j} be the jth entry of the ith random variable with X_{i,1}, ..., X_{i,k} iid Rademacher(1/2) random variables. Across 10,000 replications, random samples were drawn and used to estimate the expected Rademacher average, R_n, and the expected empirical Wasserstein distance, C_n, under the ℓ1-norm. The dimensions considered were k ∈ {2, 20, 200}. The results are displayed in the left column of Figure 1. As the sample size n increases with respect to k, we approach an asymptotic state and the bound based on the empirical Wasserstein distance becomes more attractive.

Gaussian Data
For a dimension k and sample sizes n ∈ {2, 4, 8, ..., 256}, the data for this second numerical test was generated from a multivariate Gaussian mixture distribution, specifically (1/2) N(−1, I_k) + (1/2) N(1, I_k), which is a symmetric distribution. Over 10,000 replications, random samples were drawn and used to estimate the expected Rademacher average, R_n, and the expected empirical Wasserstein distance, C_n, under the ℓ2-norm. The dimensions considered were k ∈ {2, 20, 200}. The results are displayed in the right column of Figure 1. Similarly to the multivariate Rademacher setting, as the sample size n increases, the bound based on the empirical Wasserstein distance becomes sharper than the original symmetrization bound.

Applications
In the following subsections, a collection of applications of the improved symmetrization inequality are detailed. These include a test for data symmetry, the construction of nonasymptotic high dimensional confidence sets, bounding the variance of an empirical process, and Nemirovski's inequality for Banach space valued random variables.

Permutation test for data symmetry
In the previous sections, we proposed the Wasserstein distance W_2(µ, µ^−) to quantify the symmetry of a measure µ. Now, given n iid observations X_1, ..., X_n with common measure µ, we propose a procedure to test whether or not µ is symmetric. The bootstrap approach from Section 3 for estimating the empirical Wasserstein distance is combined with a permutation test applied to the bootstrapped sample. Note that while the Wasserstein-2 metric is specifically used in our improved symmetrization inequality, for this test, any Wasserstein-p metric can be utilized, as is done in the numerical simulations below. The bootstrap-permutation test proceeds as follows: 0. Choose a number r of bootstrap replications to perform.
1. For each bootstrap replication, permute the data by some uniformly randomly drawn ρ ∈ S n , the symmetric group on n elements.
2. Negate the second half of the permuted data.
3. Denote this new half-negated data set Y, where Y_i = X_ρ(i) for i ≤ n/2 and Y_i = −X_ρ(i) for i > n/2.
4. Compute the cost of an optimal perfect matching between the two halves of Y, as in Section 3.
5. Compare this cost to its permutation distribution to obtain a p-value p_j.

Figure 1: The original symmetrization bound, 2R_n (dotted lines), and the bound using the scaled empirical Wasserstein distance, R_n + W_2(µ_n, µ_n^−)/√2 (solid lines), estimated over 10,000 replications. The dimension of the data is k ∈ {2, 20, 200}. For the Rademacher setting, the ℓ1-norm was used; for the Gaussian setting, the ℓ2-norm. As the sample size increases, the Wasserstein term converges to zero, thus sharpening the upper bound.

Figure 2: For data in R^5, the ℓ1 and ℓ2 metrics, and the Wasserstein distances W_1 and W_2, the experimentally computed power of the permutation test is plotted for Rademacher(p) data as p, the probability of 1, increases, thus skewing the distribution. The sample size is n = 100 in the left plot and n = 10 in the right plot. The n = 100 case includes an asymptotic test for skewness; this test fails in the nonasymptotic n = 10 case and is thus not included.
6. Average the r p-values to get an overall p-value, p̄ = r^{-1} Σ_{j=1}^r p_j.
Note that for very large data sets, it may be computationally impractical to find a perfect matching between two sets of n/2 nodes, as performing this test as stated has a computational complexity of order O(rn³). In that case, randomly draw n′ < n elements from the data set in step 1, draw a ρ ∈ S_{n′}, and proceed as before.
This permutation test was applied to simulated multivariate Rademacher data in R^5. For sample sizes n = 10 and n = 100, let X_1, ..., X_n be iid multivariate Rademacher(p) random variables where each X_i is comprised of a vector of independent univariate Rademacher(p) random variables. For values of p ∈ [0.5, 0.8], the power of this test was experimentally computed over 1000 simulations. The results are displayed in Figure 2. For the ℓ1 and ℓ2 metrics and Wasserstein distances W_1 and W_2, the performances of the permutation test were comparable except in the (ℓ2, W_2) case, which performed worse in both the large and small sample size settings. For the large sample size, n = 100, Mardia's test for multivariate skewness (Mardia, 1970, 1974) was included, which uses the asymptotic χ² distribution of the sample skewness computed from the empirical covariance matrix Σ̂ of the data. However, this test is shown to be less powerful than the proposed permutation test. Furthermore, as this test is asymptotic in design, it gave erroneous results in the n = 10 case and was thus excluded from the figure.

High dimensional confidence sets
A method for constructing nonasymptotic confidence regions for high dimensional data using a generalized bootstrap procedure was proposed in the article of Arlot et al. (2010). Beginning with a sample of independent and identically distributed Y_1, ..., Y_n ∈ R^K and the assumptions that the Y_i are symmetric about their mean, i.e. Y_i − µ =_d µ − Y_i, and are bounded in L^p-norm, i.e. ‖Y_i − µ‖_p ≤ M, they prove, among many other results, a bound on φ of the deviation of the sample mean from µ that holds with probability 1 − α for some fixed α ∈ (0, 1), where φ : R^K → R is a function that is subadditive, positive homogeneous, and bounded by the L^p-norm. Substituting our Theorem 2.6 for their Proposition 2.4 allows us to drop the symmetry condition and achieve a more general (1 − α) confidence region.

Bounds on empirical processes
Symmetrization arises when bounding the variance of an empirical process. In Boucheron et al. (2013), the following result is stated as Theorem 11.8 and is subsequently proved using the original symmetrization inequality, resulting in suboptimal coefficients.
Theorem 4.2 (Boucheron et al. (2013), Theorem 11.8). For i ∈ {1, ..., n} and s ∈ T, a countable index set, let X_i = (X_{i,s})_{s∈T} be a collection of real valued random variables. Furthermore, let X_1, ..., X_n be independent. Assume EX_{i,s} = 0 and |X_{i,s}| ≤ 1 for all i = 1, ..., n and for all s ∈ T. Defining Z = sup_{s∈T} Σ_{i=1}^n X_{i,s}, the theorem bounds the variance of Z in terms of EZ and the weak variance of the process. The given proof uses the symmetrization inequality twice, as well as the contraction inequality (see Ledoux and Talagrand (1991), Theorem 4.4, and Boucheron et al. (2013), Theorem 11.6), to establish the intermediate bounds. Making use of the improved symmetrization inequality cuts the coefficient of EZ in the resulting variance bound by a factor of 4. Beyond this textbook example of bounding the variance of an empirical process, symmetrization arguments are used to construct confidence sets for empirical processes in Giné and Nickl (2010); Lounici and Nickl (2011); Kerkyacharian et al. (2012); Fan (2011). The coefficients in all of their results can be similarly improved using the improved symmetrization inequality.

Type, Cotype, and Nemirovski's Inequality
In the probability in Banach spaces setting, let X_i ∈ (B, ‖·‖) for i = 1, ..., n be a collection of independent mean zero Banach space valued random variables. A collection of results referred to as Nemirovski inequalities (Nemirovski, 2000; Dümbgen et al., 2010) are concerned with whether or not there exists a constant K depending only on the norm such that
$$
  \mathbb{E}\Big\| \sum_{i=1}^n X_i \Big\|^2 \le K \sum_{i=1}^n \mathbb{E}\| X_i \|^2 .
$$
For example, in the Hilbert space setting, orthogonality allows for K = 1 and the inequality can be replaced by an equality. One such result requires the notion of type and cotype. A Banach space (B, ‖·‖) is said to be of Rademacher type p for 1 ≤ p < ∞ (respectively, of Rademacher cotype q for 1 ≤ q < ∞) if there exists a constant T_p (respectively, C_q) such that for all finite non-random sequences (x_i) ∈ B and (ε_i), a sequence of independent Rademacher random variables,
$$
  \mathbb{E}\Big\| \sum_i \varepsilon_i x_i \Big\|^p \le T_p^p \sum_i \| x_i \|^p
  \qquad \Big( \text{respectively,}\ \ \sum_i \| x_i \|^q \le C_q^q\, \mathbb{E}\Big\| \sum_i \varepsilon_i x_i \Big\|^q \Big).
$$
These definitions and the original symmetrization inequality lead to the following proposition.
Proposition 4.3. If (B, ‖·‖) is of Rademacher type p with constant T_p, then
$$
  \mathbb{E}\Big\| \sum_{i=1}^n X_i \Big\|^p \le (2 T_p)^p \sum_{i=1}^n \mathbb{E}\| X_i \|^p .
$$
The proposition can be refined by applying our improved symmetrization inequality along with the Rademacher type p condition if the X_i are additionally norm bounded. If the X_i have a common law µ, let W_2 = W_2(µ, µ^−) be the Wasserstein distance between µ and its reflection.
Proposition 4.4. Under the setting of Proposition 4.3, additionally assume that ‖X_i‖ ≤ 1 for i = 1, ..., n. Then the factor of 2^p can be traded for an additive correction term proportional to p W_2. Proof. In the context of Theorem 2.6, set ψ(·) = ‖·‖^p. Given the bound ‖X_i‖ ≤ 1, we have that ‖ψ‖_Lip = p. Scale by p, and the first result follows.
Note that for identically distributed X_i ∈ B, the order of the original bound for a type p Banach space is O(n^{1−p}) while the Wasserstein correction term is O(n^{−1/2}). This correction gives an obvious benefit for spaces of type p < 3/2. However, even for spaces of type 2, the new bound can be tighter, specifically in the high dimensional setting when d ≫ n. Indeed, consider ℓ∞(R^d), which is discussed in particular in Section 3.2 of Dümbgen et al. (2010), where it is shown to be of type 2 with constant T_p = 2 log(2d). For iid X_i ∈ ℓ∞(R^d), the two bounds above can be compared directly. Figure 3 displays such a comparison for n = 10, d ∈ {5, 25, 50}, and iid X_{i,j} + α/(1+α) ∼ Beta(α, 1) for i = 1, ..., n and j = 1, ..., d. Hence, the X_{i,j} are Beta random variables shifted to have zero mean. W_2(µ, µ^−) is approximated by EW_2(µ_5, µ_5^−), which is computed via the bootstrap procedure outlined in Section 3. The new bound can be seen to have better performance than the old one, specifically in the cases of d = 25 and d = 50 when α is not too large.
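As a sanity check on the Hilbert space case K = 1 mentioned above, orthogonality gives E|Σ ε_i|² = n exactly for iid Rademacher signs, which can be verified by exhaustive enumeration (our illustration, not from the paper):

```python
import itertools

def expected_sq_norm_of_sum(n):
    # Exact E | eps_1 + ... + eps_n |^2 for iid Rademacher signs,
    # computed by enumerating all 2^n equally likely sign vectors.
    total = 0
    for eps in itertools.product((-1, 1), repeat=n):
        total += sum(eps) ** 2
    return total / 2 ** n

# Orthogonality (cross terms cancel) predicts the value n exactly.
```

For example, `expected_sq_norm_of_sum(3)` returns 3.0, matching the Nemirovski inequality with K = 1 holding as an equality.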

A Nemirovski variant with weak variance
As one further example of improved symmetrization, a variation of Nemirovski's inequality found in Section 13.5 of Boucheron et al. (2013) is proved via a similar symmetrization argument for the ℓp norm with p ≥ 1. Let X_1, ..., X_n ∈ R^d be independent mean zero random variables, and let S_n = Σ_{i=1}^n X_i. Let B_q = {x ∈ R^d : ‖x‖_q ≤ 1}, and define the weak variance Σ_p² = n^{-2} E sup_{t∈B_q} Σ_{i=1}^n ⟨t, X_i⟩². The resulting inequality is E‖S_n‖_p² ≤ 578 d Σ_p². Replacing the old symmetrization inequality with the improved version reduces the coefficient of 578 roughly by a factor of 4, resulting in E‖S_n‖_p² ≤ 146 d Σ_p² + O(n^{-1/2}).

Discussion
The symmetrization inequality is a fundamental result for probability in Banach spaces, concentration inequalities, and many other related areas. However, its failure to account for the amount of asymmetry in the given random variables has led to pervasive powers of two throughout derivative results. Our improved symmetrization inequality incorporates such a quantification of asymmetry through use of the Wasserstein distance. Besides being theoretically sound, it is shown in simulations to provide tighter bounds than the original result. Going beyond the inequality itself, this Wasserstein distance offers a novel and powerful way to analyze the symmetry, or lack thereof, of random variables. It can and should be applied to countless other results that were not considered in the current work.

A Past results used
Lemma A.1 (Kantorovich-Rubinstein Duality, see Villani (2008)). For probability measures µ and ν on a Polish space (X, d),
$$
  W_1(\mu, \nu) = \sup_{\|f\|_{\mathrm{Lip}} \le 1} \Big\{ \int_{\mathcal{X}} f \, d\mu - \int_{\mathcal{X}} f \, d\nu \Big\}.
$$
Lemma A.3 (Sums of Independent Variables, see Bickel and Freedman (1981)). For Hilbert space valued random variables X_i with law µ_i and Y_i with law ν_i for i = 1, ..., n, define µ*_n to be the law of Σ_{i=1}^n X_i and similarly for ν*_n. Then,
$$
  W_2(\mu^*_n, \nu^*_n)^2 \le \sum_{i=1}^n W_2(\mu_i, \nu_i)^2 .
$$
Lemma A.4 (Convergence of Empirical Measure, see Bickel and Freedman (1981)). Let X_1, ..., X_n be independent and identically distributed Banach space valued random variables with common law µ. Let µ_n be the empirical distribution of the X_i. Then, with probability one, W_p(µ_n, µ) → 0 as n → ∞.