Concentration inequalities for $s$-concave measures of dilations of Borel sets and applications

We prove a sharp inequality, conjectured by Bobkov, on the measure of dilations of Borel sets in $\mathbb{R}^n$ by an $s$-concave probability. Our result is a common generalization of an inequality of Nazarov, Sodin and Volberg and a concentration inequality of Gu\'edon. Applying our inequality to the level sets of functions satisfying a Remez type inequality, we deduce, as is classical, that these functions enjoy dimension-free distribution inequalities and Kahane-Khintchine type inequalities with positive and negative exponents, with respect to an arbitrary $s$-concave probability.


Introduction
The main purpose of this paper is to establish a sharp inequality, conjectured by Bobkov in [B3], comparing the measure of a Borel set in R^n under an s-concave probability with the measure of its dilation. The s-concave probabilities include the log-concave ones (s = 0), and hence the Gaussian ones, so one expects them to satisfy good concentration inequalities as well as large and small deviation inequalities. This is indeed the case, and these inequalities, together with Kahane-Khintchine type inequalities with positive and negative exponents, are deduced here. Using a localization theorem in the form given by Fradelizi and Guédon in [FG], we determine exactly, among s-concave probabilities µ on R^n and among Borel sets F in R^n with fixed measure µ(F), the smallest measure of the t-dilation of F (with t > 1). This infimum is attained for a one-dimensional measure which is s-affine (see the definition below) and F = [−1, 1]. In other words, it gives a uniform upper bound for the measure of the complement of the dilation of F in terms of t, s and µ(F).
The resulting inequality applies perfectly to sublevel sets of functions satisfying a Remez inequality, i.e. functions such that the t-dilation of any of their sublevel sets is contained in another of their sublevel sets in a uniform way (see section 2.3 below). The main examples of such functions f are the seminorms (f(x) = ‖x‖_K, where K is a centrally symmetric convex set in R^n), the real polynomials in n variables (f(x) = P(x) = P(x_1, ..., x_n), with P ∈ R[X_1, ..., X_n]) and, more generally, the seminorms of vector valued polynomials in n variables (f(x) = ‖Σ_{j=1}^N P_j(x) e_j‖_K, with P_1, ..., P_N ∈ R[X_1, ..., X_n] and e_1, ..., e_N ∈ R^n). Other examples are given in section 3. For these functions we get an upper bound for the measures of their sublevel sets in terms of the measure of other sublevel sets. This enables us to deduce that they satisfy large deviation inequalities and Kahane-Khintchine type inequalities with positive exponent. But the main feature of the inequality obtained is that it may also be read backward; thus it also implies small deviation inequalities and Kahane-Khintchine type inequalities with negative exponent.
Before going into more detailed results and historical remarks, let us fix the notation. Given subsets A, B of the Euclidean space R^n and λ ∈ R, we set A + B = {x + y ; x ∈ A, y ∈ B}, λA = {λx ; x ∈ A} and A^c = {x ∈ R^n ; x ∉ A}. For all s ∈ (−∞, 1], we say that a measure µ in R^n is s-concave if the inequality

µ(λA + (1 − λ)B) ≥ [λµ(A)^s + (1 − λ)µ(B)^s]^{1/s}

holds for all compact subsets A, B ⊂ R^n such that µ(A)µ(B) > 0 and all λ ∈ [0, 1]. The limit case s = 0 is interpreted by continuity: the right hand side of this inequality is then µ(A)^λ µ(B)^{1−λ}. Notice that an s-concave measure is t-concave for all t ≤ s. For a probability µ, supp(µ) denotes its support. For γ ∈ (−1, +∞], a function f : R^n → R_+ is γ-concave if the inequality

f(λx + (1 − λ)y) ≥ [λf(x)^γ + (1 − λ)f(y)^γ]^{1/γ}

holds for all x and y such that f(x)f(y) > 0 and all λ ∈ [0, 1], where the limit cases γ = 0 and γ = +∞ are also interpreted by continuity; for example, the +∞-concave functions are constant. The link between the s-concave probabilities and the γ-concave functions is described in the work of Borell [Bor2].
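As an aside, the s-means appearing in this definition are easy to experiment with numerically. The helper below is our own sketch, not part of the paper: it computes the two-point s-mean, with the s = 0 case taken as the limiting weighted geometric mean, and checks that it is non-decreasing in s, which is exactly why an s-concave measure is t-concave for all t ≤ s.

```python
def s_mean(a, b, lam, s):
    """Two-point s-mean [lam*a**s + (1-lam)*b**s]**(1/s) for a, b > 0,
    with the limiting value a**lam * b**(1-lam) at s = 0."""
    if s == 0:
        return a ** lam * b ** (1 - lam)
    return (lam * a ** s + (1 - lam) * b ** s) ** (1 / s)

# the s-mean is non-decreasing in s (power mean inequality):
a, b, lam = 0.2, 0.7, 0.3
values = [s_mean(a, b, lam, s) for s in (-2.0, -0.5, 0.0, 0.5, 1.0)]
assert all(x <= y + 1e-12 for x, y in zip(values, values[1:]))
# s = 1 is the arithmetic mean
assert abs(s_mean(a, b, lam, 1.0) - (lam * a + (1 - lam) * b)) < 1e-12
```

This also makes the continuity convention at s = 0 concrete: the s-mean of 1 and 4 with equal weights tends to the geometric mean 2 as s → 0.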
Theorem [Bor2] Let µ be a measure in R^n, let G be the affine hull of the support of µ, set d = dim G and let m be the Lebesgue measure on G. Then µ is s-concave if and only if s ≤ 1/d, µ is absolutely continuous with respect to m and its density is a γ-concave function, with γ = s/(1 − sd).
According to this theorem, we say that a measure µ is s-affine when its density ψ satisfies that ψ^γ (or log ψ if s = γ = 0) is affine on its convex support, with γ = s/(1 − sd). In [Bor1], Borell started the study of concentration properties of s-concave probabilities. He noticed that for any centrally symmetric convex set K and any t > 1, the inclusion K^c ⊃ (2/(t+1))(tK)^c + ((t−1)/(t+1))K holds true. From the definition of s-concavity he deduced that for every s-concave measure µ,

µ(K^c) ≥ [ (2/(t+1)) µ((tK)^c)^s + ((t−1)/(t+1)) µ(K)^s ]^{1/s}.   (1)

From this very easy but non-optimal concentration inequality, Borell showed that seminorms satisfy large deviation inequalities and Kahane-Khintchine type inequalities with positive exponent. The same method was pushed forward in 1999 by Latała [L] to deduce a small ball probability for symmetric convex sets, which allowed him to obtain a Kahane-Khintchine inequality down to the geometric mean. In 1991, Bourgain [Bou] used the Knothe map [K] to transport sublevel sets of polynomials. He deduced that, with respect to 1/n-concave measures on R^n (i.e. uniform measures on convex bodies), the real polynomials in n variables satisfy some non-optimal distribution and Kahane-Khintchine type inequalities with positive exponent. The same method was used by Bobkov in [B2] and recently in [B3] to generalize the result of Bourgain to s-concave measures and arbitrary functions, by means of a "modulus of regularity" associated with the function. But the concentration inequalities obtained in all these results using the Knothe transport map are not optimal.
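Borell's inclusion admits a two-line verification; the following rendering of the standard argument is our own (it uses only the central symmetry and convexity of K):

```latex
% Verification of K^c \supset \tfrac{2}{t+1}(tK)^c + \tfrac{t-1}{t+1}K:
\begin{aligned}
&\text{take } y \notin tK,\ k \in K,\ \text{and set } z = \tfrac{2}{t+1}\,y + \tfrac{t-1}{t+1}\,k.\\
&\text{If } z \in K \text{ then } y = \tfrac{t+1}{2}\,z - \tfrac{t-1}{2}\,k
 \in \tfrac{t+1}{2}\,K + \tfrac{t-1}{2}\,(-K) = tK,\\
&\text{using } -K = K \text{ and } \lambda K + \mu K = (\lambda+\mu)K
 \text{ for } \lambda,\mu \ge 0 \text{ and } K \text{ convex, contradicting } y \notin tK.
\end{aligned}
```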
In 1993, Lovász and Simonovits [LS] applied the localization method (using bisection arguments) to get the sharp inequality between the measure of a symmetric convex set K and that of its dilation, for a log-concave probability µ:

µ((tK)^c) ≤ µ(K^c)^{(t+1)/2}.   (2)

This improves inequality (1) of Borell in the case s = 0. The method itself was further developed in 1995 by Kannan, Lovász and Simonovits [KLS] in a form more easily applicable. In 1999, Guédon [G] applied the localization method of [LS] to generalize inequality (2) to the case of s-concave probabilities, getting thus a full extension of inequality (1). Guédon proved that if µ(tK) < 1 then

µ((tK)^c) ≤ [ ((t+1)/2) µ(K^c)^s − ((t−1)/2) ]^{1/s}   (3)

and deduced from it the whole range of sharp inequalities (large and small deviations and Kahane-Khintchine) for symmetric convex sets. In 2000, Bobkov [B1] used the localization in the form given in [KLS] and the result of Latała [L] to sharpen the result of Bourgain on polynomials with log-concave measures, and proved that polynomials satisfy a Kahane-Khintchine inequality down to the geometric mean. In 2000 (published in 2002 [NSV1]), Nazarov, Sodin and Volberg used the same bisection method to prove a "geometric Kannan-Lovász-Simonovits lemma" for log-concave measures. They generalized inequality (2) to an arbitrary Borel set F:

µ(F_t^c) ≤ µ(F^c)^{(t+1)/2},   (4)

where F_t^c is the complement of F_t, the t-dilation of F, which is defined by

F_t = { x ∈ R^n ; there exists a segment I containing x such that |I| < ((t+1)/2) |F ∩ I| },

where | · | denotes the (one-dimensional) Lebesgue measure. Notice that this definition of t-dilation is not the original definition of Nazarov, Sodin and Volberg [NSV1]. In the latter, they introduced an auxiliary compact convex set K and used t instead of (t+1)/2. The definition given above is the complement of their original one inside K. The interest of our definition is that this auxiliary set becomes useless. If F is open then its t-dilation is open; if F is a Borel set then its t-dilation is analytic, hence universally measurable. The t-dilation is an affine invariant, i.e. for any affine transform A : R^n → R^n, we have (AF)_t = A(F_t).
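As an aside, the one-dimensional character of the t-dilation makes it easy to test numerically. The sketch below is our own helper (not from the paper), for F a finite union of closed intervals on the real line: membership in F_t is checked by testing candidate segments whose endpoints are either x or endpoints of F, which suffices for such F since extending an endpoint inside F, or shrinking it inside a gap, never decreases the ratio |F ∩ I|/|I|.

```python
def dilation_contains(x, intervals, t):
    """Check whether x belongs to the t-dilation F_t of F = union of the
    given closed intervals, i.e. whether some segment I containing x
    satisfies |I| < (t + 1)/2 * |F ∩ I|.  For a finite union of intervals
    it suffices to test segments whose endpoints are x or endpoints of F."""
    pts = sorted({x} | {p for interval in intervals for p in interval})

    def meas(a, b):  # Lebesgue measure of F ∩ [a, b]
        return sum(max(0.0, min(b, v) - max(a, u)) for u, v in intervals)

    return any(a <= x <= b and b - a < (t + 1) / 2 * meas(a, b)
               for a in pts for b in pts if a < b)

# F = [-1, 1]: its t-dilation is the open interval (-t, t)
F = [(-1.0, 1.0)]
assert dilation_contains(2.9, F, 3.0)        # inside (-3, 3)
assert not dilation_contains(3.0, F, 3.0)    # the dilation is open
```

This is consistent with the symmetric convex case treated in section 2, where the t-dilation of [−1, 1] is (−t, t).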
Notice that the definition of the t-dilation is one-dimensional in the sense that, if we denote by D the set of affine lines in R^n, then F_t = ∪_{D∈D} (F ∩ D)_t, where the dilation of F ∩ D is computed inside the line D. In [NSV1], Nazarov, Sodin and Volberg also noticed that the t-dilation is well suited for sublevel sets of functions satisfying a Remez type inequality, and deduced from the concentration inequality (4) that these functions satisfy the whole range of sharp inequalities (large and small deviations and Kahane-Khintchine). The preprint [NSV1] circulated widely and interested many people. For example, Carbery and Wright [CW] and Alexander Brudnyi [Br3] directly applied the localization as presented in [KLS] to deduce distributional inequalities and Kahane-Khintchine type inequalities for the norm of vector valued polynomials in n variables and for functions with bounded Chebyshev degree, respectively.
Our main result is the following theorem, which extends inequality (3) of Guédon to arbitrary Borel sets (since, as we shall see in section 2, if F is a centrally symmetric convex set K then F_t = tK) and inequality (4) of Nazarov, Sodin and Volberg to the whole range of s-concave probabilities. It establishes a conjecture of Bobkov [B3] (who also proved in [B3] a weaker inequality).
Theorem 1 Let F be a Borel set in R^n and t > 1. Let s ∈ (−∞, 1] and let µ be an s-concave probability such that µ(F_t) < 1. Then

µ(F^c) ≥ [ (2/(t+1)) µ(F_t^c)^s + (t−1)/(t+1) ]^{1/s}.   (5)

Notice that inequality (5) is sharp. For example, when s > 0, there is equality in (5) if n = 1, F = [−1, 1] and µ is of density x ↦ c(b − x)_+^{(1−s)/s} (for suitable constants b > t and c > 0) with respect to the Lebesgue measure on R, where a_+ = max(a, 0), for every a ∈ R. Notice that this measure µ is s-affine on its support (which is [−1, b]). As noticed by Bobkov in [B3], in the case s ≤ 0, the right hand side term in inequality (5) vanishes if µ(F_t^c) = 0, so the condition µ(F_t) < 1 may be cancelled. But in the case s > 0, the situation changes drastically. This condition is due to the fact that an s-concave probability measure, with s > 0, necessarily has a bounded support. From this condition we directly deduce the following corollary, which was noticed by Guédon [G] in the case where F is a centrally symmetric convex set.
Corollary 1 Let F be a Borel set in R^n. Let s ∈ (0, 1] and µ be an s-concave probability. Denote by V the relative interior of the (convex compact) support of µ. Then, if µ(F) > 0,

V ⊂ F_t for every t ≥ (1 + µ(F^c)^s) / (1 − µ(F^c)^s).

In section 2, we determine the effect of dilation on examples. The case of convex sets is treated in section 2.1, the case of sublevel sets of the seminorm of a vector valued polynomial in section 2.2 and the case of sublevel sets of a Borel measurable function in section 2.3. In section 2.3, we also give a functional version of Theorem 1 and we investigate the relationship between the Remez inequality and inclusions of sublevel sets. In section 3, we deduce distribution and Kahane-Khintchine inequalities for functions of bounded Chebyshev degree. Section 4 is devoted to the proof of Theorem 1. The main tool for the proof is the localization theorem in the form given by Fradelizi and Guédon in [FG].
After we had proven these results, we learned from Bobkov that, using a different method, Bobkov and Nazarov [BN] simultaneously and independently proved Theorem 1.

Dilation of a set on examples

Convex sets
Fact 1 Let K be an open convex set. Then, for every t > 1,

K_t = ((t+1)/2)K − ((t−1)/2)K = K + ((t−1)/2)(K − K),   (6)

and if moreover K is centrally symmetric then K_t = tK.
Proof: The second equality in (6) follows from the convexity of K. To prove the first equality, we prove both inclusions. If x ∈ K_t, there is a segment I containing x with |I| < ((t+1)/2)|K ∩ I|; the set K ∩ I is a subsegment (u, v) of I and, setting λ = |K ∩ I|/|I|, we thus have 1/λ < (t+1)/2. Therefore x lies in the segment ((t+1)/2)(u, v) − ((t−1)/2)(u, v) ⊂ ((t+1)/2)K − ((t−1)/2)K. Conversely, if x = ((t+1)/2)y − ((t−1)/2)z with y, z ∈ K, then the segment I joining z and x satisfies |I| = ((t+1)/2)|[z, y]| and, since K is open, K ∩ I contains a segment slightly larger than (z, y), so that |I| < ((t+1)/2)|K ∩ I| and x ∈ K_t. If moreover K is centrally symmetric, it is obvious that K_t = tK.

Remarks:
1) It is not difficult to see that if we only assume that K is convex (and not necessarily open) then the same proof actually shows that K_t = ((t+1)/2) relint(K) − ((t−1)/2) relint(K), where relint(A) is the relative interior of A, i.e. the interior of A relative to its affine hull.
2) The family of convex sets described by (6) were introduced by Hammer [H]; they may be equivalently defined in the following way. Let us recall that the support function of a convex set K in the direction u ∈ S^{n−1} is defined by h_K(u) = sup_{x∈K} ⟨x, u⟩, and that an open convex set K is equal to the intersection of the open slabs containing it. Then for every t > 1, K_t is the intersection of the t-dilations of these slabs, the t-dilation of a slab being the slab with the same median hyperplane and t times the width. Moreover, since this definition can be extended to the values t ∈ (0, 1], it enables us to define the t-dilation of a convex set for 0 < t ≤ 1, and in the symmetric case the equality K_t = tK is still valid for t ∈ (0, 1]. Using that the family of convex sets (K_t)_{t>0} is absorbing, Minkowski defined what is now called the "generalized Minkowski functional" of K:

α_K(x) = inf{ t > 0 ; x ∈ K_t }.

Notice that α_K is convex and positively homogeneous. If moreover K is centrally symmetric then K_t = tK, which gives α_K(x) = ‖x‖_K. We shall see in the next section how this notion was successfully used in polynomial approximation theory (see for example [RS]).
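In the centrally symmetric case, α_K coincides with the gauge ‖·‖_K, so membership in K_t reduces to a norm computation. A minimal illustration, with our own toy choice of K as the ℓ1 unit ball of R²:

```python
def gauge_l1(x):
    """Gauge (Minkowski functional) of K = {y in R^2 : |y_1| + |y_2| <= 1}:
    alpha_K(x) = ||x||_K = inf{t > 0 : x in tK} = |x_1| + |x_2|."""
    return sum(abs(c) for c in x)

def in_dilation(x, t):
    """For centrally symmetric open K one has K_t = tK, so x lies in K_t
    exactly when alpha_K(x) < t."""
    return gauge_l1(x) < t

assert gauge_l1((3.0, 4.0)) == 7.0
assert in_dilation((0.3, 0.4), 1.0) and not in_dilation((1.0, 1.0), 2.0)
```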
From Fact 1, Theorem 1 and Corollary 1, we deduce the following corollary.
Corollary 2 Let K be a convex set in R n and t > 1. Let s ∈ (−∞, 1] and µ be a s-concave probability. Let V = relint supp (µ) .
iii) If s > 0 and K is centrally symmetric then Applying iii) to the uniform probability on V we deduce that for every convex

Sublevel set of the seminorm of a vector valued polynomial
Let P be a polynomial of degree d, in n variables and with values in a Banach space E, that is

P(x) = Σ_{j=1}^N P_j(x) e_j,

where e_1, ..., e_N ∈ E and P_1, ..., P_N are real polynomials in n variables of degree at most d. Let K be a centrally symmetric convex set in E, denote by ‖·‖_K the seminorm defined by K in E, and let c > 0 be any constant. The following fact was noticed and used by Nazarov, Sodin and Volberg in [NSV1] in the case of real polynomials.
Fact 2 Let P be a polynomial of degree d, in n variables and with values in a Banach space E, and let t > 1. Let K be a centrally symmetric convex set in E and c > 0. Then

{x ∈ R^n ; ‖P(x)‖_K ≤ c}_t ⊂ {x ∈ R^n ; ‖P(x)‖_K ≤ c T_d(t)},

where T_d is the Chebyshev polynomial of degree d, which satisfies T_d(t) = ½[(t + √(t² − 1))^d + (t − √(t² − 1))^d] for every t ∈ R such that |t| ≥ 1.
This fact is actually a reformulation, in terms of dilation, of the Remez inequality [R], which asserts that for every real polynomial Q of degree d in one variable, for every interval I in R and every Borel subset J of I with |J| > 0,

sup_{x∈I} |Q(x)| ≤ T_d( 2|I|/|J| − 1 ) sup_{x∈J} |Q(x)|.

Let us prove the inclusion. Let F = {x ; ‖P(x)‖_K ≤ c} and let x_0 ∈ F_t. There exists an interval I = [a, b] containing x_0 such that |I| < ((t+1)/2)|F ∩ I|. The key point is that, for every linear functional ξ in the polar of K, the function x ↦ ⟨ξ, P(x)⟩ is a real polynomial of degree at most d on the line carrying I, so that the Remez inequality applied with J = F ∩ I gives

|⟨ξ, P(x_0)⟩| ≤ sup_I |⟨ξ, P⟩| ≤ T_d( 2|I|/|F ∩ I| − 1 ) sup_{F∩I} |⟨ξ, P⟩|.

Taking the supremum over ξ, using that T_d is increasing on [1, +∞) and the definition of F, we get ‖P(x_0)‖_K ≤ c T_d(t), that is x_0 ∈ {x ; ‖P(x)‖_K ≤ c T_d(t)}.

Remark: Notice that the Chebyshev polynomial of degree one is T_1(t) = t. Hence if we take the polynomial P(x) = x = Σ_i x_i e_i, where (e_1, ..., e_n) is the canonical orthonormal basis of R^n, we see that the case of vector valued polynomials generalizes the case of symmetric convex sets.
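As an aside, the Remez inequality is easy to probe numerically. The sketch below is our own helper (the grid resolution and the choice J = [a, c] are toy assumptions): it evaluates both sides of the inequality for a given real polynomial on I = [a, b].

```python
import math

def cheb(d, u):
    """Chebyshev polynomial T_d(u), valid for all real u."""
    if abs(u) <= 1:
        return math.cos(d * math.acos(u))
    return math.copysign(1.0, u) ** d * math.cosh(d * math.acosh(abs(u)))

def remez_holds(coeffs, a, b, c, n=2001):
    """Check sup_I |Q| <= T_d(2|I|/|J| - 1) * sup_J |Q| on a grid, where
    Q has the given coefficients (lowest degree first), I = [a, b] and
    J = [a, c] is a subinterval (a simple choice of Borel subset J)."""
    d = len(coeffs) - 1
    q = lambda x: sum(cf * x ** k for k, cf in enumerate(coeffs))
    grid = [a + (b - a) * i / (n - 1) for i in range(n)]
    sup_I = max(abs(q(x)) for x in grid)
    sup_J = max(abs(q(x)) for x in grid if x <= c)
    return sup_I <= cheb(d, 2 * (b - a) / (c - a) - 1) * sup_J + 1e-9

# Q(x) = x^2 - 1/2 on I = [-1, 1] with J = [-1, 0]
assert remez_holds([-0.5, 0.0, 1.0], -1.0, 1.0, 0.0)
```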
Fact 2 has an interesting reformulation in terms of polynomial inequalities in real approximation theory. It may be written in the following way. Denote by P^n_d(E) the set of polynomials of degree d, in n variables and with values in a Banach space E. Let P ∈ P^n_d(E) and K be a symmetric convex set in E. Let us assume that the Borel set F in R^n has the property that, for each x in R^n, there is an affine line D containing x such that |F ∩ D| > 0, which is the case if F has non-empty interior. Then ∪_{t>1} F_t = R^n. In this case, we may define for every x ∈ R^n the "generalized Minkowski functional" of F at x as

α_F(x) = inf{ t > 1 ; x ∈ F_t }.

Using this quantity, we get the following reformulation of Fact 2.
Corollary 3 Let F be a Borel set in R^n. Let P ∈ P^n_d(E) and K be a centrally symmetric convex set in E. For every x in R^n,

‖P(x)‖_K ≤ T_d(α_F(x)) sup_{y∈F} ‖P(y)‖_K.

Let us introduce the notations coming from approximation theory. With the notations of the corollary, we define ‖P‖_{C(F)} = sup_{y∈F} ‖P(y)‖_K. Then the inequality may be written in the following form:

‖P(x)‖_K ≤ T_d(α_F(x)) ‖P‖_{C(F)}.
For F convex and the polynomial P real valued, this is a theorem of Rivlin and Shapiro [RS] (see also the extensions in [RSa1] and [RSa2]). We thus get an extension of their theorem to non-convex sets F, in the same way as the Remez inequality generalizes to Borel sets the classical Chebyshev inequality valid for segments in R.
Applying Theorem 1 to the level set of a polynomial we get the following corollary, which was proved in the case s = 0 by Nazarov, Sodin and Volberg in [NSV1] and in the case d = 1 and P (x) = x by Guédon in [G].
Corollary 4 Let P be a polynomial of degree d, in n variables and with values in a Banach space E, and let t > 1. Let K be a centrally symmetric convex set in E and c > 0. Let s ≤ 1 and µ be an s-concave probability. If µ({x ; ‖P(x)‖_K > c T_d(t)}) > 0, then

µ({x ; ‖P(x)‖_K > c}) ≥ [ (2/(t+1)) µ({x ; ‖P(x)‖_K > c T_d(t)})^s + (t−1)/(t+1) ]^{1/s}.

Applying Corollary 1, we get the following extension of a theorem of Brudnyi and Ganzburg [BG] (which treats the case of probabilities µ that are uniform on a convex body). It is a multi-dimensional version of the Remez inequality.
Corollary 5 Let P be a polynomial of degree d, in n variables and with values in a Banach space E. Let K be a centrally symmetric convex set in E. Let s ∈ (0, 1], µ be an s-concave probability and let V be the support of µ. Then, for every Borel set ω ⊂ R^n with µ(ω) > 0,

sup_{x∈V} ‖P(x)‖_K ≤ T_d( (1 + µ(ω^c)^s)/(1 − µ(ω^c)^s) ) sup_{x∈ω} ‖P(x)‖_K ≤ ( 4/(1 − µ(ω^c)^s) )^d sup_{x∈ω} ‖P(x)‖_K.

Proof: We apply Corollary 1 to F = {x ; ‖P(x)‖_K ≤ sup_{y∈ω} ‖P(y)‖_K} and Fact 2 to deduce that V ⊂ F_t ⊂ {x ; ‖P(x)‖_K ≤ T_d(t) sup_{y∈ω} ‖P(y)‖_K} for every t ≥ (1 + µ(F^c)^s)/(1 − µ(F^c)^s). Since ω ⊂ F, we may apply the preceding inclusion to t = (1 + µ(ω^c)^s)/(1 − µ(ω^c)^s), and this gives the first inequality. The second one follows using that T_d(t) ≤ (2t)^d for every t ≥ 1 and easy computations.

Sublevel set of a Borel measurable function
In Fact 1 and Fact 2 we saw the effect of dilation on convex sets and on level sets of vector valued polynomials. We now describe the most general case: level sets of Borel measurable functions. As in Fact 2, we shall see in the following proposition that, for any Borel measurable function, an inclusion between dilations of its level sets is equivalent to a Remez type inequality.
Proposition 1 Let f : R^n → R be a Borel measurable function, t > 1 and u_f(t) ∈ [1, +∞). The following are equivalent:
i) for every interval I in R^n and every Borel subset J of I such that |I| < t|J|,

sup_I |f| ≤ u_f(t) sup_J |f|;

ii) for every λ > 0,

{x ; |f(x)| ≤ λ}_{2t−1} ⊂ {x ; |f(x)| ≤ λ u_f(t)}.

We shall say that a non-decreasing function u_f : (1, +∞) → [1, +∞) is a Remez function of f if it satisfies i) or ii) of the previous proposition for every t > 1, and that it is the Remez function of f if it is the smallest Remez function of f.
For example, using i), the Remez inequality asserts that if we take f(x) = ‖P(x)‖_K, where P is a polynomial of degree d in n variables with values in a Banach space E and K is a symmetric convex set, then t → T_d(2t − 1) is a Remez function of f. Using ii) and Fact 1, we get that u_f(t) = 2t − 1 is the Remez function of f(x) = ‖x‖_K.
Proof of Proposition 1: i) ⇒ ii): Let F = {x ∈ R^n ; |f(x)| ≤ λ} and let x ∈ F_{2t−1}. There exists an interval I containing x such that |I| < t|F ∩ I|. Hence, applying i) with J = F ∩ I,

|f(x)| ≤ sup_I |f| ≤ u_f(t) sup_{F∩I} |f| ≤ λ u_f(t),

so that x ∈ {y ; |f(y)| ≤ λ u_f(t)}.

ii) ⇒ i): Let I be an interval in R^n and J be a Borel subset of I such that |I| < t|J|. Let λ = sup_J |f| and let x ∈ I; then J ⊂ {|f| ≤ λ} ∩ I, hence |I| < t|{|f| ≤ λ} ∩ I|, so that x ∈ {|f| ≤ λ}_{2t−1} and, by ii), |f(x)| ≤ λ u_f(t). This gives i).
Applying Theorem 1 to the level set of a Borel measurable function, we get the following.
Theorem 2 Let f : R^n → R be a Borel measurable function and u_f : (1, +∞) → [1, +∞) be a Remez function of f. Let s ∈ (−∞, 1] and µ be an s-concave probability. Let t > 1 and λ > 0. If µ({x ; |f(x)| ≥ λu_f(t)}) > 0, then

µ({x ; |f(x)| ≥ λ}) ≥ [ (1/t) µ({x ; |f(x)| ≥ λu_f(t)})^s + (t−1)/t ]^{1/s}.   (7)

For s = 0, inequality (7) reads µ({x ; |f(x)| ≥ λu_f(t)}) ≤ µ({x ; |f(x)| ≥ λ})^t.

Remark: Theorem 2 improves a theorem given by Bobkov in [B3]. As in [B3], notice that Theorem 2 is a functional version of Theorem 1. As a matter of fact, we may follow the proof given by Bobkov. If a Borel subset F of R^n and u > 1 are given, we apply Theorem 2 to t = (u+1)/2, λ = 1 and f = 1 on F, f = 2 on F_u \ F and f = 4 on F_u^c.
Using ii) of Proposition 1, it is not difficult to see that u_f(t) = 2 for this function f. Then inequality (5) follows from inequality (7).
Applying Corollary 1 in a similar way as in Corollary 5, and using Proposition 1 instead of Fact 2, we get the following.
Corollary 6 Let f : R^n → R be a Borel measurable function. Let u_f : (1, +∞) → [1, +∞) be a Remez function of f. Let s ∈ (0, 1] and µ be an s-concave probability. Let ω ⊂ R^n be a Borel set with µ(ω) > 0; then

sup_{x∈supp(µ)} |f(x)| ≤ u_f( 1/(1 − µ(ω^c)^s) ) sup_{x∈ω} |f(x)|.

Instead of using u_f, Bobkov in [B2] and [B3] introduced a related quantity, the "modulus of regularity" of f,

δ_f(ε) = sup_I |{x ∈ I ; |f(x)| ≤ ε sup_{y∈I} |f(y)|}| / |I|,

the supremum being taken over all segments I such that sup_I |f| > 0. Hence δ_f is the smallest function satisfying that, for every interval I and every 0 < ε ≤ 1,

|{x ∈ I ; |f(x)| ≤ ε sup_I |f|}| ≤ δ_f(ε) |I|,

which is a Remez-type inequality. For smooth enough functions, the relationship between u_f, the Remez function of f, and δ_f is given by

δ_f(ε) = 1 / u_f^{−1}(1/ε),

where u_f^{−1} is the reciprocal function of u_f. Hence if f(x) = ‖P(x)‖_K, where P is a polynomial of degree d in n variables with values in a Banach space E and K is a symmetric convex set, then, using that u_f(t) ≤ T_d(2t − 1) and T_d(t) ≤ 2^{d−1} t^d for every |t| ≥ 1, we get

δ_f(ε) ≤ 2^{2−1/d} ε^{1/d} ≤ 4 ε^{1/d}   (8)

for every 0 < ε ≤ 1. For f(x) = ‖x‖_K, we get δ_f(ε) = 2ε/(ε + 1), as noticed by Bobkov in [B2]. Notice that inequalities (8) improve the previous bound given by Bobkov in [B2] and [B3].
The interest of the quantity δ_f comes from the next corollary, which was conjectured by Bobkov in [B3] (for s = 0, it follows from [NSV1], as noticed in [B2]).
Corollary 7 Let f : R^n → R be a Borel measurable function and 0 < ε ≤ 1. Let s ≤ 1 and µ be an s-concave probability. Let λ < ‖f‖_{L∞(µ)}; then

µ({x ; |f(x)| ≥ λε}) ≥ [ δ_f(ε) µ({x ; |f(x)| ≥ λ})^s + 1 − δ_f(ε) ]^{1/s},   (9)

and if µ is log-concave (i.e. for s = 0) then

µ({x ; |f(x)| ≥ λε}) ≥ µ({x ; |f(x)| ≥ λ})^{δ_f(ε)}.

Proof: We apply Theorem 1 to the set F = {|f| < λε} and t = 2/δ_f(ε) − 1, so that 2/(t+1) = δ_f(ε). Let x ∈ F_t; there exists an interval I containing x such that |I| < ((t+1)/2)|F ∩ I|, i.e. |F ∩ I| > δ_f(ε)|I|. From the definition of δ_f, there exists y ∈ F ∩ I with |f(y)| > ε sup_I |f|; since |f(y)| < λε, this gives sup_I |f| < λ. Hence F_t ⊂ {|f| < λ}. This gives the result.

Distribution and Kahane-Khintchine type inequalities
It is classical that from an inequality like (7) (or its equivalent form (9)) one may deduce distribution and Kahane-Khintchine type inequalities. Due to its particular form, this type of concentration inequality may be read forward or backward, and thus yields both small and large deviation inequalities.

Functions with bounded Chebyshev degree
Before stating these inequalities, let us define an interesting set of functions: the functions f whose Remez function u_f is bounded from above by a power function, i.e. there exist A > 0 and d > 0 satisfying u_f(t) ≤ (At)^d for every t > 1, which means that for every interval I in R^n and every Borel subset J of I,

sup_I |f| ≤ (A|I|/|J|)^d sup_J |f|.

In this case, the smallest power satisfying this inequality is called the Chebyshev degree of f, denoted by d_f, and the best constant corresponding to this degree is denoted by A_f. This is also equivalent to assuming that δ_f(ε) ≤ A_f ε^{1/d_f} for every 0 < ε < 1. Notice that if f has bounded Chebyshev degree (i.e. d_f < +∞) then |f|^{1/d_f} has Chebyshev degree one and A_{|f|^{1/d_f}} = A_f. For such functions inequality (7) becomes, for every t > 1,

µ({|f| ≥ λ}) ≥ [ (1/t) µ({|f| ≥ λ(A_f t)^{d_f}})^s + (t−1)/t ]^{1/s},   (10)

and for s = 0,

µ({|f| ≥ λ(A_f t)^{d_f}}) ≤ µ({|f| ≥ λ})^t.   (11)

If f(x) = ‖P(x)‖_K, where P is a polynomial of degree d in n variables with values in a Banach space E and K is a symmetric convex set, then d_f = d and A_f = 4. More generally, following [NSV1] and [CW], if f = e^u, where u : R^n → R is the restriction to R^n of a plurisubharmonic function ũ : C^n → R such that lim sup_{|z|→+∞} ũ(z)/log|z| ≤ 1, then d_f = 1 and A_f = 4. Another type of example was given by Nazarov, Sodin and Volberg in [NSV1]: exponential sums f(x) = |Σ_{k=0}^d c_k e^{i⟨x_k, x⟩}|, with c_k ∈ C and x_k ∈ R^n, satisfy d_f = d. Finally, Alexander Brudnyi in [Br1], [Br2], [Br3] (see also Nazarov, Sodin and Volberg [NSV2]) also proved that for any r > 1 and any holomorphic function f on B_C(0, r) ⊂ C^n, the open complex Euclidean ball of radius r centered at 0, the Chebyshev degree of f is bounded.
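The chain of bounds u_f(t) ≤ T_d(2t − 1) ≤ 2^{d−1}(2t − 1)^d ≤ (4t)^d behind the values d_f = d and A_f = 4 for polynomial sublevel sets can be sanity-checked numerically; a small sketch of ours, using T_d(u) = cosh(d·arccosh u) for u ≥ 1:

```python
import math

def cheb(d, u):
    """Chebyshev polynomial T_d at u >= 1, via T_d(u) = cosh(d*arccosh(u))."""
    return math.cosh(d * math.acosh(u))

# T_d(2t - 1) <= 2**(d-1) * (2t - 1)**d <= (4t)**d on a grid of t > 1
for d in (1, 2, 3, 5):
    for i in range(1, 200):
        t = 1.0 + 0.05 * i
        assert cheb(d, 2 * t - 1) <= 2 ** (d - 1) * (2 * t - 1) ** d * (1 + 1e-12)
        assert 2 ** (d - 1) * (2 * t - 1) ** d <= (4 * t) ** d
```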

Small deviations and Kahane-Khintchine type inequalities for negative exponent
Let us start with the following small deviation inequality, which was proved by Guédon [G] in the case where f = ‖·‖_K and by Nazarov, Sodin and Volberg [NSV1] in the case where s = 0. It was proved in a weaker form, and conjectured in this form, by Bobkov in [B3]. This type of inequality is connected to small ball probabilities.
Corollary 8 Let f : R^n → R be a Borel measurable function and 0 < ε ≤ 1. Let s ≤ 1 and µ be an s-concave probability. Let λ < ‖f‖_{L∞(µ)}; then, for s ≠ 0,

µ({x ; |f(x)| < λε}) ≤ δ_f(ε) (1 − µ({x ; |f(x)| ≥ λ})^s)/s.   (12)

In particular, if µ is log-concave (i.e. for s = 0) then

µ({x ; |f(x)| < λε}) ≤ δ_f(ε) log( 1/µ({x ; |f(x)| ≥ λ}) ).

Proof: Let s ≠ 0. The proof given by Guédon in [G] also works here; we reproduce it for completeness. Since s ≤ 1, the function x → (1 − x)^{1/s} is convex on (−∞, 1], hence (1 − x)^{1/s} ≥ 1 − x/s for every x ≤ 1. The result follows from inequality (9) and the inequality above applied to x = δ_f(ε)(1 − µ({|f| ≥ λ})^s). For s = 0 the result follows by taking limits, or can be proved along the same lines.
In the case of functions with bounded Chebyshev degree, inequality (12) takes a simpler form and, by integration over level sets, it immediately gives an inverse Hölder (Kahane-Khintchine type) inequality for negative exponents. Thus we get the following corollary, generalizing a theorem of Guédon [G] (for f = ‖·‖_K) and of Nazarov, Sodin and Volberg [NSV1] (for s = 0).
Corollary 9 Let f : R^n → R be a Borel measurable function with bounded Chebyshev degree. Let s ≤ 1 and µ be an s-concave probability. Denote by M_f the µ-median of |f|^{1/d_f}, and denote c_s := (1 − 2^{−s})/s for s ≠ 0, and c_0 = ln 2. Then for every 0 < ε < 1,

µ({x ; |f(x)|^{1/d_f} ≤ ε M_f}) ≤ c_s A_f ε,   (13)

and for every −1 < q < 0,

( ∫ |f|^{q/d_f} dµ )^{1/q} ≥ M_f ( 1 + c_s A_f |q|/(1 + q) )^{1/q}.   (14)

Proof: Inequality (13) follows from inequality (12), applied at the level of the median M_f. The proof of inequality (14) is then standard: we apply inequality (13) to bound the distribution function of |f|^{1/d_f} near zero, integrate, and then take the q-th root (recall that q < 0) to get inequality (14).

Large deviations and Kahane-Khintchine type inequalities for positive exponent
In contrast with the small deviations case, the behaviour of the large deviations of a function with bounded Chebyshev degree with respect to an s-concave probability depends heavily on the range of s, mainly on the sign of s. But all behaviours follow from inequality (10) applied to λ = M_f, the µ-median of |f|^{1/d_f}, which gives, for every s ≤ 1, s ≠ 0,

µ({|f|^{1/d_f} ≥ A_f M_f t}) ≤ [ 1 − (1 − 2^{−s})t ]_+^{1/s},   (15)

and for s = 0,

µ({|f|^{1/d_f} ≥ A_f M_f t}) ≤ 2^{−t}.

For s ≥ 0, it follows from inequality (15) that |f|^{1/d_f} has exponentially decreasing tails, and a standard argument implies an inverse Hölder inequality.
Corollary 10 Let f : R^n → R be a Borel measurable function with bounded Chebyshev degree. Let 0 ≤ s ≤ 1 and µ be an s-concave probability. Denote by M_f the µ-median of |f|^{1/d_f}, and denote c_s := (1 − 2^{−s})/s for s > 0, and c_0 = ln 2. Then for every t > 1,

µ({x ; |f(x)|^{1/d_f} ≥ A_f M_f t}) ≤ e^{−c_s t},   (16)

and for every p > 0, the corresponding inverse Hölder inequality (17) holds. Proof: Inequality (16) follows from inequality (15). The proof of inequality (17) is then standard: we write the p-th moment of |f|^{1/d_f} as an integral of its distribution function and apply inequality (16), as in the proof of Corollary 9.
For s < 0 the situation changes drastically: inequality (15) only implies that the tail of |f|^{1/d_f} decreases as t^{1/s}, which is the sharp behaviour, as the example of the measure µ on R given after Theorem 1, with f(x) = |x|, shows.
Corollary 11 Let f : R^n → R be a Borel measurable function with bounded Chebyshev degree. Let s < 0 and µ be an s-concave probability. Denote by M_f the µ-median of |f|^{1/d_f} and denote d_s := (2^{−s} − 1)^{1/s}. Then for every t > 1,

µ({x ; |f(x)|^{1/d_f} ≥ A_f M_f t}) ≤ d_s t^{1/s}.   (18)

Proof: Inequality (18) follows from inequality (15). The proof of the corresponding moment inequality (19) is then standard.

Proof of Theorem 1
While in [B2] and [B3] Bobkov used a transportation argument going back to Knothe [K] and Bourgain [Bou], our proof follows the same line of argument as Lovász and Simonovits in [LS], Guédon in [G], Nazarov, Sodin and Volberg in [NSV1], Brudnyi in [Br3] and Carbery and Wright in [CW]: the geometric localization theorem, which reduces the problem to dimension one. The main difference with these proofs is that the geometric localization is used here in the presentation given by Fradelizi and Guédon in [FG], which does not use an infinite bisection method but instead sees it as an optimization problem on the set of s-concave measures satisfying a linear constraint, solved by an application of the Krein-Milman theorem. Let us recall the main theorem of [FG].
Theorem [FG] Let n be a positive integer, let K be a compact convex set in R^n and denote by P(K) the set of probabilities in R^n supported in K. Let f : K → R be an upper semi-continuous function, let s ∈ [−∞, 1/2] and denote by P_f the set of s-concave probabilities λ supported in K satisfying ∫ f dλ ≥ 0. Let Φ : P(K) → R be a convex upper semi-continuous function. Then

sup{ Φ(λ) ; λ ∈ P_f }

is achieved either at a Dirac measure at a point x such that f(x) ≥ 0, or at a probability ν which is s-affine on a segment [a, b], such that ∫ f dν = 0 and either ∫_{[a,x]} f dν > 0 on (a, b) or ∫_{[x,b]} f dν > 0 on (a, b).

Remarks:
1) In Theorem [FG] and in the following, we say that a measure ν is s-affine on a segment [a, b] if its density ψ satisfies that ψ^γ is affine on [a, b], where γ = s/(1 − s).
2) Notice that in Theorem [FG] it is assumed that s ≤ 1/2. If 1/2 < s ≤ 1, as follows from Theorem [Bor2], the set of s-concave measures contains only measures whose support is one-dimensional, together with the Dirac measures. Moreover, a quick look at the proof of Theorem [FG] shows that the conclusions of the theorem remain valid, except the fact that the measure ν is s-affine. It would be interesting to know whether Theorem [FG] may be fully extended to 1/2 < s ≤ 1.
The proof of Theorem 1 splits into two steps. The first step consists in the application of Theorem [FG] to reduce to the one-dimensional case, and the second step is the proof of the one-dimensional case. This ends the proof in this case, since ν(F^c) = ν([c, b]), ν(F_t^c) = ν([d, b]) and ν([a, b]) = 1.
The general case is more complicated. The proof of Nazarov, Sodin and Volberg [NSV1] for the log-concave (s = 0) one-dimensional case extends directly to the case s ≤ 1, with some suitable adaptations in the calculations, so we do not reproduce it here. But for s ≤ 1/2, using that ν may be assumed s-affine, we can shorten the proof (in fact, we only use the monotonicity of the density of ν).
Since F_t is open in R, it is a countable union of disjoint open intervals. By approximation, we may assume that there are only a finite number of them. Since a ∈ F ⊂ F_t and b ∉ F_t, we can write F_t = ∪_{i=0}^N (a_i, b_i), where a_0 = a. Let F_i = F ∩ (a_i, b_i). Denote by ψ the density of ν with respect to the Lebesgue measure. There are two cases.
- If ψ is non-decreasing: this is the easiest case. Let 0 ≤ i ≤ N. Since b_i ∉ F_t, using the definition of F_t, it follows that for every interval I containing b_i, we have |I| ≥ ((t+1)/2)|F ∩ I|. Hence the function ρ := 1 − ((t+1)/2) 1_F satisfies ∫_x^{b_i} ρ(u) du ≥ 0 for every x ≤ b_i. Integrating by parts, this gives

∫_{a_i}^{b_i} ρ(u)ψ(u) du = ψ(a_i) ∫_{a_i}^{b_i} ρ(x) dx + ∫_{a_i}^{b_i} ( ∫_x^{b_i} ρ(u) du ) ψ′(x) dx ≥ 0.
Hence ν((a_i, b_i)) ≥ ((t+1)/2) ν(F_i), and since F_t = ∪(a_i, b_i) and F = ∪F_i, it follows that ν(F_t) ≥ ((t+1)/2) ν(F). Therefore, using the comparison between the s-mean (with s ≤ 1) and the arithmetic mean, we conclude that ν satisfies inequality (5).
- If ψ is non-increasing: define, for x ∈ [0, 1],

ϕ(x) = 1 − [ (2/(t+1))(1 − x)^s + (t−1)/(t+1) ]^{1/s}.

We first prove that, for each 0 ≤ i ≤ N,

ν(F_i) ≤ ϕ( ν((a_i, b_i)) ).   (22)

For i ≥ 1, we have a_i ∉ F_t and it is similar to the previous case: indeed, for every x ∈ (a_i, b_i), |[a_i, x]| ≥ ((t+1)/2)|[a_i, x] ∩ F|, and an integration by parts gives that ν((a_i, b_i)) ≥ ((t+1)/2) ν(F_i). From the comparison of the means, inequality (22) follows. For i = 0, we have a_0 = a ∈ F. We define F′_0 = [a_0, c_0], where c_0 is chosen such that |F′_0| = |F_0|. Since ψ is non-increasing, we have ν(F′_0) ≥ ν(F_0), and since b_0 ∉ F_t, |[a_0, b_0]| ≥ ((t+1)/2)|[a_0, b_0] ∩ F| = ((t+1)/2)|F_0|. Hence b_0 − a_0 ≥ ((t+1)/2)(c_0 − a_0). As in the joint remark with Guédon given before, we get that ν([a_0, c_0]) ≤ ϕ(ν((a_0, b_0))). Therefore we get inequality (22) for i = 0: ν(F_0) ≤ ν(F′_0) = ν([a_0, c_0]) ≤ ϕ(ν((a_0, b_0))).
From the Minkowski inequality for the s-mean, with s ≤ 1, the function ϕ is convex on [0, 1]. Denote λ_i = ν((a_i, b_i))/ν(F_t). Using that ϕ(0) = 0 and the convexity of ϕ, we get

ν(F_i) ≤ ϕ(ν((a_i, b_i))) = ϕ(λ_i ν(F_t)) ≤ λ_i ϕ(ν(F_t)).

Summing over i and using that Σ_i λ_i = 1, we conclude that ν(F) ≤ ϕ(ν(F_t)), which is inequality (5) for ν. This is the result.