Hypercontractivity and comparison of moments of iterated maxima and minima of independent random variables

Abstract: We provide necessary and sufficient conditions for hypercontractivity of the minima of nonnegative, i.i.d. random variables and of both the maxima of minima and the minima of maxima for such r.v.'s. It turns out that the idea of hypercontractivity for minima is closely related to small ball probabilities and Gaussian correlation inequalities.

We give necessary and sufficient conditions for a positive random variable $X$ to be $\{p,q\}$-$h$-hypercontractive for each of these functions. One surprising consequence (Theorem 5.5) is that in order that $X$ be $\{p,q\}$-$M_{n_1}m_{n_2}M_{n_3}\cdots m_{n_k}$-hypercontractive it is sufficient (and, of course, necessary) that it is separately $\{p,q\}$-min and max hypercontractive. Note that the name hypercontractivity is usually attached to inequalities of the form $(Eh^q(X_1,\sigma X_2))^{1/q} \le (Eh^p(X_1,X_2))^{1/p}$, which in turn are used to prove inequalities of the form discussed here. The reason we permit ourselves to use this notion with a somewhat different interpretation is that we prove below that, in our context, the two notions are equivalent (see, e.g., Theorem 5.1).
The main technical tool of the paper of de la Peña, Montgomery-Smith and Szulga is a comparison result for tail distributions of two positive random variables $X$ and $Y$ of the type: there exists a constant $c$ such that $P(X > ct) \le cP(Y > t)$ for all $t > c$. There are two conditions under which they prove that such a comparison holds. The first is hypercontractivity of the max for one of the two variables (and some $p < q$). The second is an inequality of the type $\|\max_{i\le n} X_i\|_p \le C\|\max_{i\le n} Y_i\|_p$, for some $C$ and every $n$. In such a case we'll say that $X$ and $Y$ are $p$-max-comparable, and if one replaces max with a general function $h$ we'll call them $p$-$h$-comparable. We consider this notion here also for the function min and for iterated min's and max's. Among other things we prove a theorem analogous to that of de la Peña, Montgomery-Smith and Szulga (Theorem 3.3), giving a sufficient condition for $P(X \le ct) \le \delta P(Y \le t)$ to hold for all $t \le c\|X\|_p$ for some $0 < c, \delta < 1$. We also combine our theorem with a version of that of de la Peña, Montgomery-Smith and Szulga (Theorem 5.4) to give a sufficient condition for the comparison of the tail distributions of $X$ and $Y$ to hold for all $t \in \mathbb{R}_+$.
Another application of the technique we develop here is contained in Corollary 5.2, where we give a sufficient condition for a random variable $X$ to be hypercontractive with respect to another interesting (family of) function(s): the $k$-th order statistic(s).
Our initial motivation for attacking the problems addressed in this paper was an approach we had to solving (a somewhat weaker version of) the so-called Gaussian Correlation Problem. Although we still cannot solve this problem, we indicate (in and around Theorem 6.8) the motivation and the partial result we have in this direction. As a byproduct we also obtain (in Theorem 6.4) an inequality, involving the Gaussian measure of symmetric convex sets, stated by Szarek (1991) (who proved a somewhat weaker result), as well as a similar inequality for symmetric stable measures.
The rest of the paper is organized as follows. Section 2 provides some basic lemmas and notation. Hypercontractivity for minima and some equivalent conditions are given in Section 3. Section 4 presents hypercontractivity for maxima in a way suitable for our applications. In Section 5, we combine the results of Sections 3 and 4 to obtain hypercontractivity for iterated min's and max's, and comparison results for the small ball probabilities of possibly different random vectors. We also give there the sufficient condition for the comparison of moments of order statistics. In Section 6, we apply our results to show that the symmetric $\alpha$-stable random variables with $0 < \alpha \le 2$ are minmax and maxmin hypercontractive, which is strongly connected to the regularity of the $\alpha$-stable measure of small balls. In this section we also indicate our initial motivation, related to the modified correlation inequality, as well as a partial result in this direction. Finally, in the last section, we mention some open problems and final remarks.

Section 2. Notations and Some Basic Lemmas.
For nonnegative i.i.d. r.v.'s $\{Z_j\}$, let $m_n = m_n(Z) = \min_{j\le n} Z_j$ and $M_n = M_n(Z) = \max_{j\le n} Z_j$. The $r$-norm of the random variable $W$ is $\|W\|_r = (E|W|^r)^{1/r}$ for $r > 0$ and $\|W\|_0 = \lim_{r\to 0^+} \|W\|_r = \exp(E\ln|W|)$.
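As a quick sanity check (a standard computation, not one of the paper's numbered results), the limiting case $\|W\|_0$ can be recovered from a Taylor expansion, valid for instance when $|W|$ is bounded away from $0$ and $\infty$:

```latex
\log \|W\|_r
  = \frac{1}{r}\,\log E\,e^{r\ln|W|}
  = \frac{1}{r}\,\log\bigl(1 + r\,E\ln|W| + o(r)\bigr)
  \;\xrightarrow[r\to 0^+]{}\; E\ln|W|,
```

so that $\|W\|_r \to \exp(E\ln|W|)$; the general case follows by monotone approximation.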
Here and in the rest of the paper we always assume, unless otherwise specified, that 0 < p < q.
Proof. (a). Note that (b). The result follows from the Paley-Zygmund inequality.

Proof. (a). We have where the equality follows from the mean value theorem with $x \le \eta \le 1$.
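The moment form of the Paley-Zygmund inequality invoked here is presumably the standard one (recorded here for orientation, since it is used repeatedly below): for a nonnegative r.v. $Z$ with $\|Z\|_q < \infty$, $0 < p < q$ and $0 < \lambda < 1$,

```latex
P\bigl(Z > \lambda\|Z\|_p\bigr)
  \;\ge\; \left((1-\lambda^{p})\,\frac{\|Z\|_p^{\,p}}{\|Z\|_q^{\,p}}\right)^{\!q/(q-p)},
```

which follows by splitting $EZ^p$ at the level $\lambda\|Z\|_p$ and applying Hölder's inequality to the upper piece.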
(c) is proved similarly.
Proof. This follows easily by induction and Minkowski's inequality.

Section 3. Hypercontractivity for minima.
Definition 3.1. We say that a nonnegative random variable $W$ is $\{p,q\}$-min-hypercontractive (with constant $C$) if there exists $C$ such that for all $n$, $\|m_n(W)\|_q \le C\|m_n(W)\|_p$. In this case we write $W \in \min H_{p,q}(C)$.
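As an illustration not taken from the paper, let $W$ be uniform on $[0,1]$. Then $P(m_n > t) = (1-t)^n$ and

```latex
E\,m_n^{\,p} \;=\; \int_0^1 p\,t^{p-1}(1-t)^n\,dt
  \;=\; \frac{\Gamma(p+1)\,\Gamma(n+1)}{\Gamma(n+p+1)}
  \;\sim\; \Gamma(p+1)\,n^{-p} \qquad (n\to\infty),
```

so $\|m_n\|_q/\|m_n\|_p \to \Gamma^{1/q}(q+1)/\Gamma^{1/p}(p+1)$; in particular the ratio is bounded uniformly in $n$ and $W \in \min H_{p,q}(C)$ for a suitable $C$.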
Proof. Let $H(t) = H_{p,n}(t) = E(m_n(W)\wedge t)^p$ and note that $H(t)/t^p$ is non-increasing. Taking $\alpha$ as in Lemma 2.1, $\|m_{2n}\|_p^p \ge E\,m_{2n}^p I_{\{m_n \le \alpha\|m_n\|_p\}} = E\,H(m_n) I_{\{m_n \le \alpha\|m_n\|_p\}}$.
Since $H(t)/t^p$ is non-increasing, Furthermore, $H(\alpha\|m_n\|_p) \ge E\,m_n^p I_{\{m_n \le \alpha\|m_n\|_p\}} \ge 2^{-1}\|m_n\|_p^p$, which gives the conclusion.
The following theorem is a min-analog of a result of de la Peña, Montgomery-Smith and Szulga (1994), proved there for maxima (cf. Theorem 4.4 below).
Theorem 3.3. Fix $\rho > 0$. Let $0 \le p < q$, and let $X, Y$ be r.v.'s such that $X \in \min H_{p,q}(C)$ and there exists a constant $B$ such that $\|m_n(Y)\|_q \le B\|m_n(X)\|_q$ for all $n$. Then $P(X \le \tau t) \le \delta P(Y \le t)$ for all $t \le t_0 = \rho\|X\|_q$, for some constants $0 < \delta < 1$ and $\tau > 0$ depending on $p, q, \rho, B$ and $C$ only.
Proof. We first prove the assertion of the theorem for $p > 0$ and some $\rho > 0$ depending on $p, q, B$ and $C$ only. Then we'll show how to use this to obtain the result for general $\rho$ and for $p = 0$. By Markov's inequality and by Lemma 2.1 (b), for each $\lambda$, $0 < \lambda < 1$, taking $t = t_n = BD^{-1}\|m_n(X)\|_p$ we obtain $P^{1/q}(m_n(Y) > t_n) \le P^{1/p}(m_n(X) > \lambda B^{-1}Dt_n)$, which gives $P^{1/q}(Y > t_n) \le P^{1/p}(X > \lambda B^{-1}Dt_n)$ for all $n$. By Lemmas 2.1 and 3.2, for each $t_{n+1} \le u \le t_n$ this yields, where $K$ is as in Lemma 3.2. Hence, denoting $\lambda D(BK)^{-1}$ by $\tau$, we have that $P^{1/q}(Y > u) \le P^{1/p}(X > \tau u)$ is satisfied for all $u$ such that $\lim_{n\to\infty} t_n < u \le t_1 = BD^{-1}\|X\|_p$. If $u \le \lim_{n\to\infty} t_n$, then $P(X > \tau u) = 1$ and the above inequality holds for obvious reasons. We thus get it for all $u \le BD^{-1}\|X\|_p$. Let us observe that by (3.1), for each $n$, $P(X > \lambda\|m_{2^n}(X)\|_p) \ge D^{p/2^n}$ and hence, by Lemma 3.2, $P(X > \lambda K^{-n}\|X\|_p) \ge D^{p/2^n}$.
By adjusting $\tau$ we can get the inequality for every preassigned $\rho$. Indeed, given any $\rho$, the inequality holds as long as $\rho\rho_0^{-1}t \le \rho\|X\|_q$ or, equivalently, as long as $t \le \rho_0\|X\|_q$.
To prove the case $p = 0$, choose $0 < r < q$, say $r = q/2$. Since we are assuming that $X \in \min H_{0,q}(C)$, also $X \in \min H_{r,q}(C)$. Now apply the theorem to the pair $(p,q) = (q/2, q)$.
To avoid awkward statements in the following theorem, we do not state the exact dependence of each of the constants on the others. It is important to note that the constants appearing in conditions (i)-(iv) of that theorem (chosen from $\{\varepsilon, \tau, \rho, C, \sigma\}$) depend only on $p$, $q$ and the constants from the other equivalent conditions. In particular, each of these constants depends on the distribution of $X$ only through the constants in the other conditions. This will be useful when considering hypercontractivity of maxima of minima in Section 5.
Theorem 3.4. Fix $\rho > 0$. Let $X$ be a nonnegative r.v. such that $\|X\|_q < \infty$ and let $0 \le p < q$. The following conditions are equivalent: (i) $X \in \min H_{p,q}(C)$ for some $C$; (ii) there exist $\varepsilon < 1$, $\tau > 0$ such that $P(X \le \tau t) \le \varepsilon P(X \le t)$ for all $t \le t_0 = \rho\|X\|_q$.
Proof. (i) $\Rightarrow$ (ii). This implication follows immediately by Theorem 3.3 applied to $Y = X$.
(ii) $\Rightarrow$ (iii). If $\tau, \varepsilon, t_0$ are as in (ii), then by induction we obtain for each $n$, On the other hand Thus if $\varepsilon = pq^{-1}(1 - r^q)$ and $\tau, t_0$ are as in (iii), then the above inequality, and hence (3.2), is fulfilled for $t \le \tau t_0$ and $\sigma \le r\tau$.
Therefore it is enough to choose $\sigma = \min\{\tau\rho(1-\varepsilon)^{1/p}, \tau r\}$ to have the inequality (3.2) satisfied for all $t \ge 0$.
For $p = 0$ we proceed along similar lines: the case $p = 0$ follows by a simple limit argument.
Remark 3.5. It follows from Theorem 3.4 that $\{p,q\}$-min-hypercontractivity depends only on the existence of the $q$-th moment and a regularity property of the distribution function at 0, i.e. the following property, which we will call sub-regularity of $X$ (or, more precisely, of the distribution of $X$) at 0.

Theorem 3.6. Fix $q > 1$. Let $\{X_i\}_{i\le n}$ be an i.i.d. sequence of nonnegative r.v.'s satisfying condition (ii) of Theorem 3.4 and such that $EX_1^q < \infty$. Then there exists a constant $\sigma$ such that for each $n$ and each function $h: \mathbb{R}_+^n \to \mathbb{R}_+$ which is concave in each variable separately, we have $(Eh^q(\sigma X_1, \sigma X_2, \ldots, \sigma X_n))^{1/q} \le Eh(X_1, X_2, \ldots, X_n)$.
Moreover, σ depends only on the constants appearing in the statement of Theorem 3.4 (ii).
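For instance (an immediate special case, included here for orientation), $h(x_1, \ldots, x_n) = \min_{i\le n} x_i$ is concave in each variable separately, and Theorem 3.6 applied to this $h$ reads

```latex
\sigma\,\|m_n\|_q
  \;=\; \Bigl(E\bigl(\min_{i\le n}\sigma X_i\bigr)^{q}\Bigr)^{1/q}
  \;\le\; E\,\min_{i\le n} X_i \;=\; \|m_n\|_1,
```

i.e. the $\{q,1\}$-min-hypercontractivity (in the paper's notation) with constant $\sigma^{-1}$, which is exactly the form used in the proof below.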
Proof. We first simplify by noting that condition (ii) of Theorem 3.4 is satisfied (uniformly in $M$) by $X \wedge M$ for every $M$. By Lemma 2.3 it is enough to prove that there exists $\sigma > 0$ such that for each concave $g: \mathbb{R}_+ \to \mathbb{R}_+$, $(Eg^q(\sigma X))^{1/q} \le Eg(X)$, where we may assume that $g$ is constant on $(M, \infty)$. To prove this inequality for such a $g$ we first note that by Theorem 3.4, $X$ is $\{q,1\}$-min-hypercontractive, therefore there exists $\sigma > 0$ such that Since for each bounded concave $g: \mathbb{R}_+ \to \mathbb{R}_+$ there exists a measure $\mu$ on $\mathbb{R}_+$ (given by the condition $\mu((x,y]) = g_+(x) - g_+(y)$, where $g_+(x)$ is the right derivative of $g$ at $x$) such that $g = \int_{\mathbb{R}_+} h_t\,\mu(dt) + g(0)$, the theorem follows by Minkowski's inequality.
Proof. Theorem 3.6 implies that W is {q, 1}-min-hypercontractive and the result follows by Theorem 3.4.

Section 4. Hypercontractivity of maxima
In this section we treat the case of maxima in a way similar to that of minima in Section 3. However there are some essential differences which do not allow us to treat these two cases together.
We will write, W ∈ max H p,q (C) in this case.
Proof. The right-hand side is obvious and the left follows by taking complements and using the inequality. Then for $a > 0$ and $n$ a positive integer, To see the right-hand inequality, just note that, for every $a > 0$, For the left-hand inequality, we again break up the integral as above, using the defining properties of $b_n$ as well as the monotonicity of $x/(1+x)$ in Lemma 4.2.

The next theorem is an extension of Theorem 3.5 of de la Peña, Montgomery-Smith and Szulga (cf. Asmar and Montgomery-Smith (1993)).
For $p > 0$, the constants $A, B$ can be chosen in the following way: Proof. First we note that it is enough to prove the theorem for $p > 0$; the case $p = 0$ follows easily from this case for the couple $(q/2, q)$ with $\rho$ replaced by $C\rho$. By the Paley-Zygmund inequality (Lemma 2.1 (b)), We next note that by Markov's inequality and the assumptions, for $\tau = 2^{1/q}CD$, Now, by Proposition 4.3 (a), the assumptions above and (4.1), $\|Y\|_\infty \le tD/\lambda$ and, since $2^{1/p}\tau > D$, $EY^q I_{\{Y > 2^{1/p}\tau t/\lambda\}} = 0$. The conclusion follows trivially.
Remark 4.5. Since $A > 1$, Theorem 4.4 yields immediately that

In the next theorem we make the same convention concerning the constants as we made before the statement of Theorem 3.4.
Theorem 4.6. Let $X$ be a nonnegative r.v., $0 \le p < q$, $\rho > 0$. The following conditions are equivalent: (iv) there exists a constant $\sigma > 0$ such that
Proof. (i) $\Rightarrow$ (ii). By Theorem 4.4 applied to $Y = X$ we derive the existence of constants $A, B$ such that Hence for any $t \ge t_0$, (ii) $\Rightarrow$ (iii). If $t_0, B$ are as in (ii), then for $t \ge t_0$ Hence, for any $D > 1$, we have and it is enough to choose $D > 1$ such that $B^{2q}/(q \ln D) < \varepsilon$.
By (ii) we obtain for $t \ge t_0\sigma$, where for the moment $\sigma$ is any number $< 1$, On the other hand, for any $R > 1$ Hence, by Lemma 2.2 (c), the inequality in (iv) holds if Therefore, if we choose $R$ so that and further choose $\sigma$ so that $\sigma < (RD)^{-1}$, then the inequality in (iv) is satisfied for all $t \ge \|X\|_p/2^{1+1/q}$. If $t < \|X\|_p/2^{1+1/q}$, then $(E(t \vee \sigma X)^q)^{1/q} \le 2^{1/q}(t + \sigma\|X\|_q)$ and $(E(t \vee X)^p)^{1/p} \ge \|X\|_p$, and therefore, using (ii) with $t = t_0$, if additionally $\sigma < (2^{1+1/q}\rho(1 + B^q)^{1/q})^{-1}\ (< \|X\|_p(2^{1+1/q}\|X\|_q)^{-1})$, then the inequality in (iv) is satisfied for all $t \ge 0$. (iv) $\Rightarrow$ (i). This implication is proved in the same way as the corresponding one in Theorem 3.4; it is enough to replace $\wedge$ by $\vee$ everywhere.
(i) Theorems 4.4 and 4.5 have appeared in a similar form in unpublished notes from a seminar held by the second author, prepared by Rychlik (1992). (ii) The equivalence of (ii) and (iii) in Theorem 4.6 can be deduced from more general results (cf. Bingham, Goldie and Teugels, page 100). (iii) It follows from Theorem 4.4 that if $X$ is $\{p,q\}$-max-hypercontractive, then for some $\varepsilon > 0$ and all $r < q + \varepsilon$, $X$ is also $\{r, q+\varepsilon\}$-max-hypercontractive. (iv) The property of $\{p,q\}$-max-hypercontractivity is equivalent to the following condition, which we will call $q$-sub-regularity at $+\infty$.
Theorem 4.8. Fix $q > 1$. If $\{X_i\}_{i\le n}$ is an i.i.d. sequence of nonnegative r.v.'s satisfying $EX_1 < \infty$ and condition (ii) of Theorem 4.6, then there exists a constant $\sigma$ such that for each $n$ and each $X_i$, $i = 1, \ldots, n$, independent copies of $X$, $(Eh^q(\sigma X_1, \sigma X_2, \ldots, \sigma X_n))^{1/q} \le Eh(X_1, X_2, \ldots, X_n)$ for each function $h: \mathbb{R}_+^n \to \mathbb{R}_+$ which is in each variable separately nondecreasing and convex, and lim Moreover, $\sigma$ depends only on the constants appearing in the statement of Theorem 4.6 (ii).
Proof. The proof is the same as in the case of Theorem 3.6, except that we have to replace $\wedge$ by $\vee$ everywhere and that the measure $\mu$ is given by $\mu((x,y]) = g_+(y) - g_+(x)$, and then

In analogy to Corollary 3.7 we obtain

Corollary 4.9. If $\{X_i\}$, $h$ are as in Theorem 4.8, $q > 1$ and, in addition, $h$ is $\alpha$-homogeneous for some $\alpha > 0$, then the random variable $W = h(X_1, X_2, \ldots, X_n)$ is $q$-sub-regular at $+\infty$.
Section 5. Hypercontractivity for iterated minima and maxima.
In this section we will impose on $X$ both the condition of sub-regularity at 0 and that of $q$-sub-regularity at $+\infty$.
Theorem 5.1. If $0 \le p < q$ and $X$ is a nonnegative random variable in $\min H_{p,q}(C_1) \cap \max H_{p,q}(C_2)$, then there exists a constant $\sigma > 0$ such that for each $0 < s < t < \infty$ Furthermore, $\sigma$ depends only on $C_1$, $C_2$, $p$ and $q$.
Proof. Let $R > 1$ be any fixed number, and let $r = R^{-1}$. Let $\rho$ be any positive number, and let $\tau$ be such that the inequality in Theorem 3.4 (iii) holds for $\varepsilon = pq^{-1}(1 - r^q)$ for all $t \le t_0 = \rho\|X\|_q$. Then let $\alpha = 2^{-1/q-1}(1-\varepsilon)^{1/p}$. The constant $B$ is such that the inequality in Theorem 4.6 (ii) is true for all $t \ge t_0$, and let $D$ be such that the inequality in Theorem 4.6 (iii) is satisfied. We will show that for $\sigma = \min\{\alpha\rho, \alpha/D, r/D, r\tau\}$ the inequality (5.1) holds true for each $0 < s < t < \infty$. Consider the following five cases.
Case 1. Since $\sigma < \alpha\rho$, the inequality holds by the choice of $\alpha$.
Case 2. $t \le \tau t_0$, $rt > s$. We have Therefore by Lemma 2.2 (b) the inequality (5.1) holds if which is true by the choice of $\tau$, since $\sigma < r\tau$ and $t \le \tau t_0$.
Therefore by Lemma 2.2 (c), to have (5.1) it is enough to show Since the function $(1 - x^q)/(1 - x^p)$ is increasing on $\mathbb{R}_+$ and $s/t \ge r$, it is enough to prove that $pq^{-1}(1 - r^q)(1 - r^p)^{-1}\,P(X \le rt\sigma^{-1}) \ge P(X \le t)$, which was proved in the preceding case, because $1 - r^p < 1$.

Case 4. By Lemma 2.2 (c) it is enough that
But then, by the choice of $D$, we have

Case 5. $s > \alpha t_0$, $t \le Rs$. We have
Since $t/s \le R$, it suffices to show that which is shown in the same way as in the preceding case. Taking into account the remarks before Theorems 3.4 and 4.6, we check easily that, given $p, q$, the constant $\sigma$ depends only on the min and max hypercontractivity constants of $X$.
Corollary 5.2. If $X, p, q$ are as in Theorem 5.1, then there exists a constant $C$ such that if $(X_i)$, $i = 1, \ldots, n$, is a sequence of independent copies of $X$ and $X_{k,n}$ denotes the $k$-th order statistic of the sequence $(X_i)$, $i = 1, \ldots, n$, then $\|X_{k,n}\|_q \le C\|X_{k,n}\|_p$, and $X_{k,n}$ is $q$-sub-regular at $+\infty$ and sub-regular at 0.
Proof. The statistic $X_{k,n}$ can be written as $h(X_1, X_2, \ldots, X_n)$ where, for each $i$ and each fixed $x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n$, the function $f(x_i) = h(x_1, x_2, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_n) = s \vee x_i \wedge t$ for some $0 < s < t$ and all $x_i \in \mathbb{R}_+$. Therefore the first part of the corollary follows from this observation. The second part is obtained easily because $X_{k,n}$ is $\{p,q\}$-max and min hypercontractive.
The preceding corollary can be considerably generalized. First let us define a class $F$ of functions $g: \mathbb{R}_+ \to \mathbb{R}_+$ which can be written as $g(x) = \int_\Delta h_{s,t}(x)\,\mu(ds, dt)$ for some positive measure $\mu$ on $\Delta = \{(s,t) \in \mathbb{R}_+ \times \mathbb{R}_+ : s \le t\}$, where $h_{s,t}$ are the functions defined by $h_{s,t}(x) = s \vee x \wedge t$. It is possible to give an intrinsic description of the functions in $F$. Instead, let us observe that if $f$ is twice continuously differentiable on $\mathbb{R}_+$, then $f \in F$ if and only if for each $x \in \mathbb{R}_+$, $0 \le xf'(x) \le f(x)$ and $f(0) \ge \int_{\mathbb{R}_+} x(f''(x) \vee 0)\,dx$. In this case the measure $\mu$ is given by the following condition: for measurable $h$, where $I(y)$ is the countable family of open, disjoint intervals with union equal to $\{s \in \mathbb{R}_+ : f'(s) > y\}$. It is not difficult to prove that we have the representation

Theorem 5.3. Let $X$ be in $\min H_{p,q}(C_1) \cap \max H_{p,q}(C_2)$. Then there exists a constant $\sigma > 0$ such that for each $n$ and each $h: \mathbb{R}_+^n \to \mathbb{R}_+$ which, in each variable separately, is in the class $F$, it follows that $(Eh^q(\sigma X_1, \sigma X_2, \ldots, \sigma X_n))^{1/q} \le Eh(X_1, X_2, \ldots, X_n)$.
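As a simple example of membership in $F$ (granting the differential characterization above, in which the conditions involve $f'$ and $f''$): for $0 < \beta \le 1$, the function $f(x) = x^\beta$ belongs to $F$, since

```latex
x f'(x) \;=\; \beta\,x^{\beta} \;\le\; x^{\beta} \;=\; f(x),
\qquad
f''(x) \;=\; \beta(\beta-1)\,x^{\beta-2} \;\le\; 0,
```

so that $x(f''(x) \vee 0) \equiv 0$ and the condition $f(0) \ge \int_{\mathbb{R}_+} x(f''(x) \vee 0)\,dx$ holds trivially. Here $\mu$ is carried by pairs $(0,t)$; i.e. $f$ is a mixture of the functions $x \wedge t$.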
Proof. The proof follows the same pattern as the proofs of Theorems 3.6, 4.8 and Corollaries 3.7, 4.9, and is based on Theorem 5.1.

Applying the comparison results of Theorems 3.3 and 4.4 we easily obtain

Theorem 5.4. Let $X, Y$ be nonnegative r.v.'s such that $X \in \min H_{p,q}(C_1) \cap \max H_{p,q}(C_2)$ and there exist constants $B_1$ and $B_2$ such that $\|m_n(Y)\|_q \le B_1\|m_n(X)\|_q$ and $\|M_n(Y)\|_q \le B_2\|M_n(X)\|_q$ for all $n$. Then there exists a constant $D$, depending only on $p, q, C_1, C_2, B_1$ and $B_2$, such that $P(Y \le t) \ge P(DX \le t)$ for all $t \in \mathbb{R}_+$.

Finally we have
Theorem 5.5. If $X \in \min H_{p,q}(C_1) \cap \max H_{p,q}(C_2)$, then there exists a constant $D$, depending only on $p, q, C_1$ and $C_2$, such that for all $l$ and all $n_1, k_1, n_2, k_2, \ldots, n_l, k_l$,
Proof. If $X$ is both min-hypercontractive and max-hypercontractive, by Theorem 5.1 we have a $0 < \sigma < \infty$ for which inequality (5.1) is satisfied. One now applies Lemma 2.3 to the functional $h: \mathbb{R}_+^{n_1 k_1 \cdots n_l k_l} \to \mathbb{R}_+$ which is the composition of the min's and max's in the statement of this theorem.
Section 6. Minmax hypercontractivity of norms of stable random vectors.
In this section we apply the results of the earlier sections to certain questions concerning Gaussian and symmetric stable measures. In particular, in the second half of this section, we give the motivation that led us to initiate this research, as well as a partial result concerning a version of the Gaussian Correlation Conjecture.
The following lemma is a consequence of Kanter's inequality (cf. Ledoux and Talagrand (1991), p. 153), which can be viewed as a concentration result similar to Lévy's inequalities. The formulation of the lemma below for Gaussian measures was suggested by X. Fernique.
Proof. For any $0 \le t \le 1$, define the probability measure $\nu_t$ by $\nu_t(C) = \nu(tC) = P(X/t \in C)$, where $X$ is the symmetric $\alpha$-stable random variable with law $\nu$. Then where $t^{-\alpha} = 1 + s^{-\alpha}$ and $X'$ is an independent copy of $X$. Hence, by Lemma 6.2

Theorem 6.4. Under the set-up of Lemma 6.2, for each $b < 1$ there exists $R(b)$ such that for all $0 \le t \le 1$,

Proof. Fix $B$ with $\nu(B) \le b$. Choose $s \ge 1$ so that $\nu(sB) = b$. Now apply Proposition 6.3 with $\kappa = t$ to get
The key difference is that we need the right-hand side of (6.1) to involve $\mu(B)$ for all $B$ such that $\mu(B) \le b$, with the constant $R$ depending only on the number $b$.
If $X$ satisfies the conclusion of Theorem 5.5, we write $X \in \min\max H_{p,q}(D)$.

Corollary 6.6. Let $0 < \alpha \le 2$, $0 \le p < q$. If $\alpha \ne 2$ we assume that $q < \alpha$. If $W$ is an $\alpha$-stable, symmetric vector in a separable Banach space, then $W \in \min\max H_{p,q}(C)$ for some constant $C$ which depends only on $\alpha$, $p$ and $q$.
This implies that for every $n$ and $\{y_i\}_{i\le n}$, Now, if $B$ is a separable Banach space and $W$ is a symmetric stable random variable of index $\alpha$ with values in $B$, there exists a probability measure $\Gamma$ on the sphere $S$ of the dual space $B^*$ and a constant $c$ such that So $W_n$ converges in distribution to $W$. Hence, for any countable weak$^*$-dense set $\{v_j^*\}_j$ in the unit ball of $B^*$, we have (since $p < \alpha$ and $m$ is finite): But then we have Hence $\|W\|_q \le \sigma^{-1}\|W\|_p$. Note that we can interpolate (by Hölder's inequality) to obtain, for every $0 < p < q < 2$, a $\sigma$ for which the last inequality holds; again, this $\sigma$ depends only on $p$, $q$ and $\alpha$. If $W$ is Gaussian ($\alpha = 2$), then the comparison of the $p$ and $q$ norms is well known and not restricted to $q < 2$ (see, e.g., Ledoux and Talagrand (1991), p. 60). Now, for any $q < \alpha$ (in the Gaussian case any $q$) and any $q < r < \alpha$, we have $P(\|W\| > \frac{1}{2}\|W\|_q)$ bounded below by a positive constant depending only on the $\sigma = \sigma(q,r)$ obtained above. This means that, putting $K = \{x : \|x\| \le \frac{1}{2}\|W\|_q\}$, we have for any $0 \le u \le 1$, $P(W \in uK) \le b$ for some $b < 1$. Hence, by Theorem 6.4 we have Hence, with $\rho = 1/2$, $\tau = (\varepsilon/R(b))^{2/\alpha}$, condition (ii) of Theorem 3.4 holds. Hence $W \in \min H_{0,q}(C)$ for some $C$ depending only on $p, q$ and $\alpha$. In particular (using $n = 1$), we now have that $W \in \max H_{0,q}(C)$. So, by Theorem 5.5, $W \in \min\max H_{p,q}(D)$ for some $D$ depending only on $p, q$ and $\alpha$.
Proof. By Corollary 6.6 a constant $\sigma$ can be found which depends only on $\alpha, p, q$ and is such that the conclusion of Theorem 5.1 holds true for $X = X_i$, $i = 1, 2, \ldots, n$. Now we can proceed as in the proof of Theorem 5.3.
Before proceeding with the next result we would like to explain its connection with the Gaussian Correlation Conjecture.
The conjecture we refer to says that for any symmetric, convex sets $A$ and $B$ in $\mathbb{R}^n$,
(6.2) $\mu(A \cap B) \ge \mu(A)\mu(B)$,
where $\mu$ is a mean zero Gaussian measure on $\mathbb{R}^n$. L. Pitt (1977) proved that the conjecture holds in $\mathbb{R}^2$. Khatri (1967) and Šidák (1967, 1968) proved (6.2) when one of the sets is a symmetric slab (a set of the form $\{x \in \mathbb{R}^n : |(x,u)| \le 1\}$ for some $u \in \mathbb{R}^n$). For more recent work and references on the correlation conjecture, see Schechtman, Schlumprecht and Zinn (1995), and Szarek and Werner (1995). The Khatri-Šidák result, as a partial solution to the general correlation conjecture, has many applications in probability and statistics; see Tong (1980). In particular, it is one of the most important tools discovered recently for lower bound estimates of small ball probabilities; see, for example, Kuelbs, Li and Shao (1995), and Talagrand (1994). On the other hand, the Khatri-Šidák result only provides the correct lower bound rate up to a constant at the log level of the small ball probability. If the correlation conjecture (6.2) holds, then the existence of the constant of the small ball probability at the log level for fractional Brownian motion (cf. Li and Shao (1995)) can be shown. Hence, hypercontractivity for minima, small ball probabilities and correlation inequalities for slabs are all related in the setting of Gaussian vectors.
Let $C_n$ denote the set of symmetric, convex sets in $\mathbb{R}^n$. Since the correlation conjecture iterates, for each $\alpha \ge 1$ the following is a weaker conjecture.
Conjecture $C_\alpha$. For any $l, n \ge 1$, and any $A_1, \ldots, A_l \in C_n$, if $\mu$ is a mean zero Gaussian measure on $\mathbb{R}^n$, then $\mu(\bigcap_{i\le l} A_i) \ge (\prod_{i\le l}\mu(A_i))^\alpha$. One can restate this (as well as the original conjecture) using Gaussian vectors in $\mathbb{R}^n$ as follows: for $l, n \ge 1$, and any $A = A_1 \times \cdots \times A_l \subseteq \mathbb{R}^{nl}$, let $\|\cdot\|_A$ = the norm on $\mathbb{R}^{nl}$ with unit ball $A$, and $\|\cdot\|_l$ = the norm on $\mathbb{R}^n$ with unit ball $A_l$.
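In this notation, with $G$ a $\mu$-distributed Gaussian vector and $G_1, \ldots, G_l$ independent copies of it, the restatement presumably takes the form

```latex
P\Bigl(\max_{i\le l}\|G\|_i \le t\Bigr)
  \;\ge\; P\Bigl(\max_{i\le l}\|G_i\|_i \le t\Bigr)^{\alpha},
\qquad t > 0,
```

since the left-hand side equals $\mu\bigl(\bigcap_{i\le l} tA_i\bigr)$ while, by independence, the right-hand side equals $\bigl(\prod_{i\le l}\mu(tA_i)\bigr)^{\alpha}$.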
Then $C_\alpha$ can be rewritten as: Restatement of Conjecture $C_\alpha$. For all $l, n \ge 1$ and any $t > 0$, By taking complements, reversing the inequalities and raising both sides of the inequality to a power, say $N$, we get: Again, reversing the inequalities and raising both sides to the power $K$, Using the usual formula for $p$-th moments in terms of tail probabilities we would get: Note that if the conjecture (6.2) were true then (6.3) would hold with $\alpha = 1$. Even in the case $K = N = 1$, the best that is known is the above inequality with constant $\sqrt{2}$. (Of course, if $N = 1$, the case $K = 1$ is the same as the case of arbitrary $K$.) To see this, first let $T = \bigcup_{l=1}^L T_l$, where $T_l$ is the polar of $A_l$. Now define the Gaussian processes $Y_t$ and $X_t$ for $t \in T_l$ by $Y_{f,l} = f(G)$ and $X_{f,l} = f(G_l)$. Then $\sup_{t\in T} Y_t = \max_{l\le L}\|G\|_l$ and $\sup_{t\in T} X_t = \max_{l\le L}\|G_l\|_l$. We now check the conditions of the Chevet-Fernique-Sudakov/Tsirelson version of Slepian's inequality (see also Marcus-Shepp (1972)). Let $s = (f,p)$ and $t = (g,q)$. If $p = q$, $(Y_s, Y_t)$ has the same distribution as $(X_s, X_t)$, and hence Therefore, in either case one can use $\sqrt{2}$. Hence, by the version of the Slepian result mentioned above, E sup On the other hand, the results of de la Peña, Montgomery-Smith and Szulga (mentioned in the introduction) allow one to go from an $L_p$ inequality to a probability inequality if one has one more ingredient: hypercontractivity. By their results, if one can prove that there exists a constant $\gamma < \infty$ such that for all $K$, $N$ and symmetric, convex sets, then one would obtain, for some $\alpha$, By using independence to write each side as a power, then taking $N$-th roots and letting $N \to \infty$, we obtain Since the constant outside the probability is now 1, we can take complements and reverse the inequality. Now, unraveling the norm and rewriting in terms of $\mu$, we return to the inequality $C_\alpha$. By Theorems 5.4 and 5.5 the two conditions above translate into four conditions, two for max and two for min.
The proof of the next theorem consists of checking three of these conditions. Unfortunately we do not know how to check the fourth one and must leave it as an assumption.
Theorem 6.8. Let $Y = \max_{l\le L}\|G\|_l$ and $X = \max_{l\le L}\|G_l\|_l$, where the norms $\|\cdot\|_l$ were defined above. If for some $0 < p < q$ then for all $t \ge 0$ where the constant $c$ depends on $p$ and $q$ only.
Proof. In order to apply Theorem 5.4, we need to show that there exist constants $C_1, C_2$ and $C_3$, depending only on $p$ and $q$, such that
(6.5) (max hypercontractivity) $\|M_n(X)\|_q \le C_1\|M_n(X)\|_p$,
(6.6) (min hypercontractivity) $\|m_n(X)\|_q \le C_2\|m_n(X)\|_p$, and
(6.7) $\|M_n(Y)\|_q \le C_3\|M_n(X)\|_q$.
To prove (6.5), note that $M_n(X)$ is a norm of a Gaussian vector, and (6.5) follows from the hypercontractivity of norms of Gaussian vectors (cf., for example, Ledoux and Talagrand (1991), p. 60).
As a consequence of Theorem 6.8, we have the following modified correlation inequality for centered Gaussian measures: for any centered Gaussian measure $\mu$ and any convex, symmetric sets $A_l$, $1 \le l \le L$.
Remark 6.10. From Theorem 4.4 alone one gets the following: There exists an α < ∞ such that if, e.g., one has This gives some indication of the necessity of handling the case of "small" sets.
Section 7. Final remarks and some open problems.
In this section we mention a few results and open problems that are closely related to the main results in this paper. At first, we give a very simple proof of the following result.
Proposition 7.1. For $0 < p < q < \infty$, if there exists a constant $C$ such that for all $n$,
(7.1) $\|M_n(X)\|_q \le C\|M_n(X)\|_p$,
then the following are equivalent:
(i) there exists a constant $C$ such that for all $n$, (7.2) $\|M_n(Y)\|_p \le C\|M_n(X)\|_p$;
(ii) there exists a constant $C$ such that (7.3) $P(Y > t) \le C \cdot P(X > t)$.
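The implication (ii) $\Rightarrow$ (i) can be sketched as follows (a standard argument, included since the paper's own short proof is only alluded to). Writing $\varphi_n(u) = 1 - (1-u)^n$, which is concave with $\varphi_n(0) = 0$, so that $\varphi_n(Cu \wedge 1) \le C\varphi_n(u)$ for $C \ge 1$, one gets

```latex
P\bigl(M_n(Y) > t\bigr) \;=\; \varphi_n\bigl(P(Y>t)\bigr)
  \;\le\; \varphi_n\bigl(C\,P(X>t)\wedge 1\bigr)
  \;\le\; C\,\varphi_n\bigl(P(X>t)\bigr)
  \;=\; C\,P\bigl(M_n(X) > t\bigr),
```

and integrating against $p\,t^{p-1}\,dt$ yields $\|M_n(Y)\|_p \le C^{1/p}\|M_n(X)\|_p$. The converse direction is where the hypercontractivity assumption (7.1) enters.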
There are many questions related to this work. Let us only mention a few here.
Question 7.2. Is the best min-hypercontractive constant in (6.4) with $Y = X$, for symmetric Gaussian vectors $X$ in any separable Banach space, $C = \Gamma^{1/q}(q)/\Gamma^{1/p}(p)$?
This constant comes from the small ball estimate $P(|X| < s) \sim K \cdot s$ as $s \to 0$ for a one-dimensional Gaussian random variable $X$. Note that if $\beta > 1$ and $P(|X| < s) \sim K \cdot s^\beta$ as $s \to 0$, then the resulting constant is smaller. Thus the conjecture looks reasonable in view of Proposition 6.3.
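A heuristic consistency check, under the stated one-dimensional asymptotics (this computation is ours, not the paper's): if $P(|X| < s) \sim Ks$ as $s \to 0$, then for fixed $t > 0$,

```latex
P\bigl(nK\,m_n(|X|) > t\bigr)
  \;=\; \Bigl(1 - P\bigl(|X| < t/(nK)\bigr)\Bigr)^{n}
  \;\longrightarrow\; e^{-t},
```

so $nK\,m_n(|X|)$ converges in law to a standard exponential variable $E$, and (granting uniform integrability) $\|m_n\|_q/\|m_n\|_p \to \|E\|_q/\|E\|_p = \Gamma^{1/q}(1+q)/\Gamma^{1/p}(1+p)$, a Gamma-function constant of the type appearing in Question 7.2.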
A related question is: under a max-hypercontractivity condition, what can one say about a nontrivial lower bound for $\|M_{k+1}\|_p/\|M_k\|_p$, particularly in the Gaussian case? This may be useful in answering the above question.
A result of Gordon (1987) compares the expected minima of maxima for, in particular, Gaussian processes. We mention this here because a version of Gordon's results could perhaps be used to prove the next conjecture. Note that if the conjecture holds, then the modified correlation inequality $C_\alpha$ holds.
Conjecture 7.3. Let $G$, $G_l$ and the norms $\|\cdot\|_l$ be as in Section 6. If $Y = \max_{l\le L}\|G\|_l$ and $X = \max_{l\le L}\|G_l\|_l$, then $\|m_n(Y)\|_q \le C\|m_n(X)\|_q$.
Our final conjecture is related to stable measures. It is a stronger statement than our Theorem 6.4 and holds for symmetric Gaussian measures.