Transport-entropy inequalities on the line

We give a necessary and sufficient condition for transport-entropy inequalities in dimension one. As an application, we construct a new example of a probability distribution satisfying Talagrand's T_2 inequality but not the logarithmic Sobolev inequality.


Introduction
Transport-entropy inequalities were introduced by Marton and Talagrand in the nineties [29,38]. As their name indicates, inequalities of this type compare optimal transport costs in the sense of Monge-Kantorovich to the relative entropy functional (also called the Kullback-Leibler divergence). Transport-entropy inequalities have deep connections to the concentration of measure phenomenon [27,19], to log-Sobolev type inequalities [33,5,23], and to large deviation theory [21,19]. They also appear directly in the definition, proposed by Lott, Villani and Sturm, of a metric measure space with positive Ricci curvature [28,37]. The interested reader can consult the books [27,42] or the recent survey [22] for an overview of their applications.
The purpose of this note is to give a necessary and sufficient condition for a large class of transport-entropy inequalities involving probability measures on the real line.
Before presenting our main result, we first need to define transport costs and transport-entropy inequalities. Let α : R_+ → R_+ be a cost function; the optimal transport cost between two probability measures µ, ν is defined by

(1.1) T_α(ν, µ) = inf_π ∬ α(|x − y|) π(dx dy),

where the infimum runs over the set of couplings π between µ and ν, i.e. probability measures on R^2 such that π(dx × R) = µ(dx) and π(R × dy) = ν(dy).
A Borel probability measure µ on R is said to satisfy the transport-entropy inequality T_α(a) for some a > 0 if

T_{α(a·)}(ν, µ) ≤ H(ν | µ), for all ν ∈ P(R)

(the set of Borel probability measures on R), where α(a·) denotes the cost function t ↦ α(at) and where H(ν | µ) stands for the relative entropy of ν with respect to µ. The latter is defined by

H(ν | µ) = ∫ log(dν/dµ) dν

when ν is absolutely continuous with respect to µ, and +∞ otherwise. For instance, the celebrated Talagrand inequality T_2 enters this family of inequalities. We recall that µ satisfies T_2(C) if

T_2(ν, µ) ≤ C H(ν | µ), for all ν ∈ P(R),

where T_2 is an abbreviated notation for T_{x^2}. With the definition introduced above, T_2(C) holds if and only if T_{x^2}(1/√C) holds.
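As a concrete illustration (not part of the original argument), one can check Talagrand's inequality T_2(2) for the standard Gaussian measure on the family of Gaussian test measures ν = N(m, s^2), where both sides admit closed forms; the helper names below are ours.

```python
import math

def w2_sq_gauss(m1, s1, m2, s2):
    # Squared Wasserstein-2 distance between N(m1, s1^2) and N(m2, s2^2)
    # on the line: (m1 - m2)^2 + (s1 - s2)^2.
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

def rel_entropy_gauss(m, s):
    # H(N(m, s^2) | N(0, 1)) = (s^2 + m^2 - 1 - log s^2) / 2.
    return 0.5 * (s * s + m * m - 1.0 - math.log(s * s))

# Talagrand's inequality T_2(2) for the standard Gaussian gamma = N(0, 1),
# tested on Gaussian measures nu = N(m, s^2):
#   W_2^2(nu, gamma) <= 2 H(nu | gamma).
for m, s in [(0.0, 1.0), (1.5, 0.3), (-2.0, 4.0), (0.1, 0.9)]:
    assert w2_sq_gauss(m, s, 0.0, 1.0) <= 2.0 * rel_entropy_gauss(m, s) + 1e-12
```

The last pair (m, s) = (0.1, 0.9) shows the inequality can be nearly saturated for measures close to the Gaussian itself.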
In all the paper, we will use the following notation. The cumulative distribution function F_ν of a probability measure ν on R is the right-continuous and non-decreasing function defined by

F_ν(x) = ν((−∞, x]), x ∈ R.

The generalized inverse of F_ν is defined by

F_ν^{−1}(t) = inf{x ∈ R : F_ν(x) ≥ t}, t ∈ (0, 1).

If µ is a probability measure with no atom and ν is another probability measure, we will denote by T_{µ,ν} the map defined by

(1.3) T_{µ,ν} = F_ν^{−1} ∘ F_µ.

It is well known that T_{µ,ν} is the unique non-decreasing and left-continuous map that pushes µ forward onto ν, that is to say,

∫ f(T_{µ,ν}(x)) µ(dx) = ∫ f(y) ν(dy), for all bounded measurable f : R → R.
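The monotone rearrangement T_{µ,ν} = F_ν^{−1} ∘ F_µ is easy to illustrate numerically. In the minimal sketch below (function names are ours), µ is the uniform distribution on [0, 1] and ν the exponential distribution, and the push-forward identity F_ν(T(x)) = F_µ(x) is checked at a few interior points.

```python
import math

def F_mu(x):
    # CDF of the uniform distribution mu on [0, 1]
    return min(max(x, 0.0), 1.0)

def F_nu_inv(t):
    # generalized inverse of F_nu(x) = 1 - e^{-x} (exponential distribution nu)
    return -math.log(1.0 - t)

def T(x):
    # monotone rearrangement T_{mu,nu} = F_nu^{-1} o F_mu
    return F_nu_inv(F_mu(x))

# Push-forward check: F_nu(T(x)) = F_mu(x) at interior points,
# so T sends mu onto nu (this is inverse-CDF sampling).
for x in [0.1, 0.25, 0.5, 0.9, 0.99]:
    assert math.isclose(1.0 - math.exp(-T(x)), F_mu(x), rel_tol=1e-12)
```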
In this paper, we will say that a Borel probability measure µ on R satisfies the Poincaré inequality with constant λ > 0 if

(1.5) λ Var_µ(f) ≤ ∫ |∇f|^2 dµ, for all Lipschitz f : R → R,

where

(1.6) |∇f|(x) = lim sup_{y→x} |f(y) − f(x)| / |y − x|.

Note that when f is differentiable at x, then |∇f|(x) = |f′(x)|. Proposition 4.12 clarifies this definition of the Poincaré inequality.
The following theorem is our main result. It characterizes the transport-entropy inequalities T_α for convex functions α which are quadratic near 0.
Theorem 1.7. Let µ be a Borel probability measure on R and let α : R_+ → R_+ be a convex function such that α(t) = t^2 for all t ≤ h. The following propositions are equivalent:
(1) There is some a > 0 such that µ verifies T_α(a).
(2) There are λ > 0 and d > 0 such that (i) µ verifies the Poincaré inequality with constant λ, and (ii) the map T := T_{µ_1,µ} sending µ_1 onto µ verifies the contraction property (1.8).
Moreover, there exist two positive constants κ_1, κ_2, depending only on h, relating the optimal constants a_opt, λ_opt and d_opt.
In other words, the transport-entropy inequality T_α carries two different pieces of information: the existence of a spectral gap and a quantitative control of the way the exponential distribution µ_1 is deformed in order to produce µ. Theorem 1.7 improves the results obtained by the author in a preceding work [18], where different necessary or sufficient conditions were investigated (see Section 4.1 for a discussion). Here, a true equivalence is obtained.
It is well known that an absolutely continuous probability measure µ on R verifies the Poincaré inequality if and only if

(1.9) B := max( sup_{x>m} µ([x, +∞)) ∫_m^x dt/p(t) ; sup_{x<m} µ((−∞, x]) ∫_x^m dt/p(t) ) < ∞,

where p denotes the density of µ with respect to the Lebesgue measure and m is a median of µ. This result follows from a similar necessary and sufficient condition for weighted Hardy inequalities due to Muckenhoupt [32] (extending previous works by Artola, Talenti [39] and Tomaselli [40]). Moreover, it can be shown (see e.g. [1]) that the optimal constant λ_opt in the Poincaré inequality (1.5) verifies

1/(4B) ≤ λ_opt ≤ 1/B,

with possible cases of equality; see [31].
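For the two-sided exponential distribution µ_1, the Muckenhoupt product is explicit, which makes the criterion easy to sanity-check numerically. The sketch below (helper names ours, and assuming the standard form of Muckenhoupt's condition recalled above) evaluates it on a grid and recovers bounds consistent with the classical value λ_opt = 1/4 for µ_1.

```python
import math

# Double exponential mu_1(dt) = e^{-|t|}/2 dt has median m = 0 and density
# p(t) = e^{-|t|}/2.  For x > 0 the Muckenhoupt product is
#   mu_1([x, oo)) * int_0^x dt/p(t) = (e^{-x}/2) * 2(e^x - 1) = 1 - e^{-x}.
def muckenhoupt_product(x):
    tail = 0.5 * math.exp(-x)
    integral = 2.0 * (math.exp(x) - 1.0)
    return tail * integral

B = max(muckenhoupt_product(k / 10.0) for k in range(1, 201))
# B tends to 1 as the grid grows, so 1/(4B) <= lambda_opt <= 1/B brackets
# the classical value lambda_opt = 1/4 for the double exponential.
assert 0.999 < B < 1.0
assert abs(1.0 / (4.0 * B) - 0.25) < 1e-6
```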
To complete Theorem 1.7, we shall give in Section 4 an easy-to-check sufficient condition for the contraction property (1.8) for absolutely continuous µ with smooth density. This condition deals with the asymptotic behavior of the logarithm of the density of µ. Theorem 1.7 is satisfactory from a theoretical point of view. Its conclusion is reminiscent of the characterizations of various functional inequalities on the line by Bobkov and Houdré [9,8] and Bobkov and Götze [6]. Theorem 1.7 is also a useful tool for constructing examples illustrating borderline situations. We will use it in the last section to give a new example of a probability measure which verifies Talagrand's T_2 inequality but not the logarithmic Sobolev inequality. Contrary to the previous example given by Cattiaux and Guillin in [13], the tail behavior of the probability measure exhibited in the present paper is exactly Gaussian. In the same section, we will answer a question raised by Cattiaux and Guillin in [13] about the equivalence of Talagrand's inequality to Gaussian concentration together with the Poincaré inequality. We will use Theorem 1.7 again to give an appropriate counterexample.
One of the main ingredients in the proof of Theorem 1.7 is the fact that optimal transport has a very simple structure in dimension one. The following theorem is very classical and goes back to the works of Hoeffding, Fréchet and Dall'Aglio [26,16,15]. A proof can be found in the books by Villani [41] or Rachev and Rüschendorf [34].

Theorem 1.10. Let α : R_+ → R_+ be a convex function such that α(0) = 0 and suppose that µ ∈ P(R) has no atom. Then for every probability measure ν ∈ P(R) such that ∬ α(|x − y|) µ(dx)ν(dy) < ∞, the map T_{µ,ν} defined by (1.3) realizes the optimal transport of µ onto ν. In other words, the coupling π(dx dy) = µ(dx) δ_{T_{µ,ν}(x)}(dy) achieves the infimum in (1.1), and so

T_α(ν, µ) = ∫ α(|T_{µ,ν}(x) − x|) µ(dx).

An immediate consequence of Theorem 1.10 is that the optimal transport cost T_α(ν, µ) is linear with respect to α on the convex cone of non-negative convex cost functions α vanishing at 0: in particular, if α = α_1 + α_2 with each α_i : R_+ → R_+ a convex function vanishing at 0, then

T_α(ν, µ) = T_{α_1}(ν, µ) + T_{α_2}(ν, µ).

This property is really specific to dimension one. In general, one only has the trivial inequality

T_{α_1+α_2}(ν, µ) ≥ T_{α_1}(ν, µ) + T_{α_2}(ν, µ).

To prove Theorem 1.7, we shall use this observation with a decomposition of α into a function α_1 which is quadratic near 0 and then linear, and a function α_2 which vanishes in a neighborhood of 0 and has the same growth as α. The transport-entropy inequality T_α is thus equivalent to the conjunction of T_{α_1} and T_{α_2}. The transport-entropy inequality T_{α_1} is equivalent to the Poincaré inequality, as proved by Bobkov, Gentil and Ledoux [5] (see also Theorem 3.1 below). We shall establish that T_{α_2} is equivalent to the contraction condition (1.8), which will complete the proof of Theorem 1.7.
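The linearity of one-dimensional transport costs in α is transparent for empirical measures, where the monotone (sorted) coupling is optimal for any convex cost; a minimal numerical sketch (helper names ours):

```python
import math

def ot_cost(xs, ys, alpha):
    # Optimal transport cost between empirical measures (1/n) sum delta_{x_i}
    # and (1/n) sum delta_{y_i}: for a convex cost alpha, the monotone
    # (sorted) matching is optimal in dimension one.
    xs, ys = sorted(xs), sorted(ys)
    return sum(alpha(abs(x - y)) for x, y in zip(xs, ys)) / len(xs)

xs = [0.0, 1.0, 3.5, 4.0]
ys = [0.5, 2.0, 2.5, 6.0]
a1 = lambda t: t * t               # quadratic, plays the role of alpha_1
a2 = lambda t: max(t - 1.0, 0.0)   # vanishes on [0, 1], plays the role of alpha_2

# Linearity of the cost in alpha, specific to dimension one:
c = ot_cost(xs, ys, lambda t: a1(t) + a2(t))
assert math.isclose(c, ot_cost(xs, ys, a1) + ot_cost(xs, ys, a2))
```

Here the same sorted matching is optimal for a_1, a_2 and their sum, which is exactly why the costs add up; in higher dimension the three optimal couplings may differ, leaving only the trivial inequality.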
The paper is organized as follows. Section 2 is devoted to transport-entropy inequalities associated to cost functions α vanishing in a neighborhood of 0. This class of transport-entropy inequalities has its own interest, since these inequalities can even be satisfied by discrete probability measures. We show the equivalence between these inequalities and contraction properties like (1.8). In Section 3, we complete the proof of Theorem 1.7 following the strategy explained above. Section 4 is devoted to examples. The article ends with an appendix relating the definition (1.5) we adopted of the Poincaré inequality to other, more classical formulations.
2. Transport-entropy inequalities for costs vanishing in a neighborhood of 0

To begin with, let us observe that Talagrand's inequality T_2 cannot be satisfied by a discrete probability measure of the form µ = Σ_k µ_k δ_{x_k}, where the µ_k's are non-negative numbers summing to 1. Indeed, if a probability measure verifies T_2 then it verifies the Poincaré inequality (1.5) (see for instance the proof of Theorem 1.7), which excludes probabilities µ as above (unless µ is a Dirac mass).
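The obstruction is easy to see concretely: around each atom of a discrete measure one can choose a Lipschitz function that is locally constant, so the right-hand side of (1.5) vanishes while the variance does not. A minimal sketch (names ours), for µ = (δ_0 + δ_1)/2:

```python
# mu = (delta_0 + delta_1)/2.  The 1-Lipschitz "plateau" function f below is
# locally constant around each atom, so |grad f| = 0 mu-almost surely,
# while Var_mu(f) = 1/4 > 0: no lambda > 0 can satisfy (1.5).
def f(x):
    return min(max(2.0 * x - 0.5, 0.0), 1.0)  # equals 0 near x=0, 1 near x=1

def metric_gradient(g, x, eps=1e-6):
    # numerical stand-in for |grad g|(x) from (1.6)
    return abs(g(x + eps) - g(x - eps)) / (2 * eps)

atoms, weights = [0.0, 1.0], [0.5, 0.5]
mean = sum(w * f(a) for a, w in zip(atoms, weights))
var = sum(w * (f(a) - mean) ** 2 for a, w in zip(atoms, weights))
energy = sum(w * metric_gradient(f, a) ** 2 for a, w in zip(atoms, weights))
assert var == 0.25 and energy == 0.0
```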
In this section, we study transport-entropy inequalities associated to cost functions which are identically 0 in a neighborhood of 0. As we shall see, the interest of this type of cost functions is that the associated transportentropy inequality can also be satisfied by discrete probability measures.
Let us mention that inequalities of this type also appeared in a paper by Bonciocat and Sturm [11] in their study of the curvature of discrete metric spaces.
In all what follows, β : R_+ → R_+ will be a convex function such that β(t) = 0 for all t ≤ h, for some h > 0, and β is increasing on [h, ∞). The main result of this section is the following.

Theorem 2.1. A Borel probability measure µ on R verifies the transport-entropy inequality T_β(a) for some constant a > 0 if and only if the transport map T = T_{µ_1,µ} sending the exponential distribution µ_1 onto µ verifies a contraction property of the type (1.8). Moreover, the optimal constants a_opt and d_opt are related by two-sided bounds depending only on h.

It is very easy to construct discrete probability measures enjoying a transport-entropy inequality T_β. For example, consider a non-decreasing, left-continuous step map T and define µ as the image of µ_1 under T. Since T is left-continuous, we have T = T_{µ_1,µ}, and so µ verifies the transport-entropy inequality T_{β_2}(a) for some constant a, with a cost function β_2 vanishing on [0, 1]. In this example, h = d_opt = 1, and the optimal constant a_opt verifies the corresponding bound.
To prove Theorem 2.4 below, we need to introduce some additional notation. Let µ be a probability measure on R which is not a Dirac mass, and define s_µ = inf Supp(µ) and t_µ = sup Supp(µ).
Let us define two families of probability measures {µ_x^+} and {µ_x^−} on R_+ as follows: µ_x^+ is the law of X − x conditioned on the event {X > x}, and µ_x^− is the law of x − X conditioned on the event {X < x}, where X is a random variable with law µ.
In other words, for all bounded measurable functions f : R → R,

∫ f dµ_x^+ = E[f(X − x) | X > x] and ∫ f dµ_x^− = E[f(x − X) | X < x].

Define, for all b ≥ 0, the quantities K^+(b) and K^−(b), where m is the median of µ defined by m = F_µ^{−1}(1/2), with the convention sup ∅ = 0. Theorem 2.1 follows immediately from the following improved version.
Theorem 2.4. Let µ be a probability measure on R which is not a Dirac mass, and let µ_1 be the two-sided exponential distribution µ_1(dx) = e^{−|x|} dx/2 defined by (1.4).
The following propositions are equivalent:
(1) There is a > 0 such that µ verifies the transport-entropy inequality T_β(a).
(2) There are b > 0 and K > 0 such that max(K^−(b); K^+(b)) ≤ K.
The constants are related in the following way: (1) ⇒ (2) holds with b = a/2 and K = 3.
Let us give an interpretation of the map S appearing in condition (3). More generally, if µ and ν are arbitrary Borel probability measures on R, we define the map S_{µ,ν} : R × [0, 1] → R ∪ {±∞} as follows:

S_{µ,ν}(x, u) = F_ν^{−1}((1 − u) F_µ(x−) + u F_µ(x)),

where F_µ(x−) denotes the left limit of F_µ at x. Remark that in case µ has no atom, S_{µ,ν} coincides with T_{µ,ν} defined by (1.3). As the following theorem explains, this map realizes the optimal transport of µ onto ν.
Theorem 2.6. Let α : R_+ → R_+ be a convex cost function such that α(0) = 0 and let µ, ν be two probability measures on R such that ∬ α(|x−y|) µ(dx)ν(dy) < ∞. Then the coupling π_o ∈ P(R^2) whose distribution function is given by

F_{π_o}(x, y) = min(F_µ(x), F_ν(y))

achieves the infimum in the definition of T_α(ν, µ). Moreover, if X is a random variable with law µ and U is a random variable uniformly distributed on [0, 1] and independent of X, then the pair (X, S_{µ,ν}(X, U)) has law π_o, and so

T_α(ν, µ) = E[α(|S_{µ,ν}(X, U) − X|)].

Theorem 2.6 generalizes Theorem 1.10; we state it for completeness, but it will not be used in the sequel. Note that the coupling π_o remains optimal for a more general class of transport costs [12].
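Assuming the randomized-quantile form of S_{µ,ν} given above, the optimal coupling can be simulated exactly for finitely supported measures. The sketch below (helper names ours) pushes µ = (δ_0 + δ_1)/2 onto the uniform measure ν on {0, 1, 2, 3} and checks the push-forward property over a discretized uniform variable U, using exact rational arithmetic.

```python
from collections import Counter
from fractions import Fraction

def F_nu_inv(t):
    # quantile function of the uniform measure nu on {0, 1, 2, 3}:
    # the smallest k with F_nu(k) = (k+1)/4 >= t
    for k in range(4):
        if Fraction(k + 1, 4) >= t:
            return k

def S(x, u):
    # randomized quantile map for mu = (delta_0 + delta_1)/2:
    # S(x, u) = F_nu^{-1}((1-u) F_mu(x-) + u F_mu(x))
    F_left = {0: Fraction(0), 1: Fraction(1, 2)}[x]   # F_mu(x-)
    F_right = {0: Fraction(1, 2), 1: Fraction(1)}[x]  # F_mu(x)
    return F_nu_inv((1 - u) * F_left + u * F_right)

# Push (X, U) forward with U uniform on (0, 1] (discretized on a grid):
# each atom of nu must receive mass 1/4.
counts = Counter()
grid = [Fraction(j, 100) for j in range(1, 101)]
for x in (0, 1):
    for u in grid:
        counts[S(x, u)] += Fraction(1, 2) * Fraction(1, 100)
assert all(counts[k] == Fraction(1, 4) for k in range(4))
```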
During the proof of Theorem 2.4, we will use the following simple technical lemma twice.
Lemma 2.7. Let β : R_+ → R_+ be a convex function such that β = 0 on [0, h] and β is increasing on [h, ∞). Then, for all b > 0 and k > 0, β^{−1}(b + k) ≤ β^{−1}(b) + β^{−1}(k). Indeed, being the inverse of a convex increasing function, β^{−1} is concave and non-negative at 0, hence subadditive, which proves the claim.
Proof of Theorem 2.4.
(2) ⇒ (3). To prove (3), we can first restrict to the case y > x and then, using the monotonicity of S, we can further assume that v = 0 and u = 1. So Property (3) is equivalent to an inequality that we will refer to as (2.8). To establish (2.8), it is enough to consider the cases y > x ≥ m and m ≥ y > x. Namely, suppose that (2.8) is true with a constant c̃ for these two particular cases, and consider y > m > x. Then, for all y > x ≥ m, one gets a bound with an extra term 2β^{−1}(k), where k = log(K) and where the second inequality follows from Lemma 2.7. Reasoning exactly as above, we show that the same inequality holds when x < y ≤ m. So, according to what precedes, (3) holds with c = c̃/2.
(3) ⇒ (4). By assumption, the inequality of (3) holds for all x, y ∈ R and u, v ∈ [0, 1]. Let us apply it to x = F_µ^{−1}(s) and y = F_µ^{−1}(t) with s, t ∈ (0, 1). It is easy to check that F_µ(F_µ^{−1}(s)−) ≤ s ≤ F_µ(F_µ^{−1}(s)). So choosing u and v properly yields the inequality at the points F_µ^{−1}(s) and F_µ^{−1}(t). Finally, applying this inequality to s = F_{µ_1}(z) and t = F_{µ_1}(w) gives the desired inequality.

So, applying (2.9) to g = f ∘ T, we get the corresponding inequality for all bounded measurable f. According to the dual characterization of Bobkov and Götze [7] (see also [22]), we conclude that µ verifies T_β(a).

3. Proof of Theorem 1.7
According to Bobkov, Gentil and Ledoux [5], the Poincaré inequality is equivalent to a family of transport-entropy inequalities involving the cost functions α_1^h defined by

α_1^h(t) = t^2 if t ≤ h, and α_1^h(t) = 2ht − h^2 if t > h.

Theorem 3.1. Let µ be a Borel probability measure on R. The following are equivalent: (1) µ verifies the Poincaré inequality (1.5) with some constant λ > 0; (2) µ verifies T_{α_1^h}(a) for some a > 0 and h > 0. Moreover, the constants λ, a and h are related by two-sided bounds.

The preceding theorem is stated in dimension one only, but it is true in any dimension.
Proof. The implication (2) ⇒ (1) is true on any metric space (see [25]). We refer to [5] or [42] for the proof of (1) ⇒ (2) in the case when µ is absolutely continuous with respect to the Lebesgue measure. In what follows, we show that this implication still holds when µ is not.
Let µ be a Borel probability measure on R verifying the Poincaré inequality (1.5) with constant λ. For all σ > 0, let γ_σ = N(0, σ^2) be the centered Gaussian distribution with variance σ^2 and define µ_σ = µ ∗ γ_σ. The probability γ_σ verifies the Poincaré inequality with constant 1/σ^2. According to the well-known tensorization property of the Poincaré inequality [27], it is not difficult to check that the product measure µ ⊗ γ_σ verifies the following inequality:

Var_{µ⊗γ_σ}(g) ≤ ∬ [ (1/λ) |∂_x g|^2 + σ^2 |∂_y g|^2 ] dµ(x) dγ_σ(y),

for all Lipschitz functions g : R^2 → R. Considering functions g of the form g(x, y) = f(x + y), we obtain

Var_{µ_σ}(f) ≤ (1/λ + σ^2) ∫ |f′|^2 dµ_σ.

So µ_σ verifies the Poincaré inequality with constant λ_σ = (1/λ + σ^2)^{−1}. Since µ_σ is absolutely continuous, we can conclude, applying [5], that µ_σ verifies the family of transport-entropy inequalities T_{α_1^h}(a) with a, h satisfying the constraints given in Theorem 3.1. Since µ_σ → µ for the weak topology and λ_σ → λ when σ goes to 0, it is not difficult to see that µ verifies the transport-entropy inequalities T_{α_1^h}(a) for a and h in the appropriate range. (This last step is easier to check on the dual form of Bobkov and Götze.)

We are now ready to prove Theorem 1.7, using the decomposition trick explained in the introduction.
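The behavior of the Poincaré constant under Gaussian smoothing can be sanity-checked numerically in the Gaussian case, where all integrals are explicit: for µ = N(0, 1) (so λ = 1), µ_σ = N(0, v) with v = 1 + σ^2, and the bound λ_σ Var_{µ_σ}(f) ≤ ∫ |f′|^2 dµ_σ can be tested on f = sin. The sketch below (a numerical illustration, not part of the proof) uses the closed forms E sin^2(X) = (1 − e^{−2v})/2 and E cos^2(X) = (1 + e^{−2v})/2 for X ∼ N(0, v).

```python
import math

# Check lambda_sigma = (1/lambda + sigma^2)^{-1} on the Gaussian example:
# mu = N(0,1) has Poincare constant lambda = 1, and mu_sigma = N(0, v)
# with v = 1 + sigma^2.  For f = sin and X ~ N(0, v):
#   Var(sin X) = (1 - e^{-2v})/2   (since E sin X = 0),
#   E |f'(X)|^2 = E cos^2 X = (1 + e^{-2v})/2.
for sigma in [0.1, 0.5, 1.0, 2.0]:
    v = 1.0 + sigma ** 2
    lam_sigma = 1.0 / v                      # predicted Poincare constant
    var_f = 0.5 * (1.0 - math.exp(-2.0 * v))
    energy = 0.5 * (1.0 + math.exp(-2.0 * v))
    assert lam_sigma * var_f <= energy       # lambda_sigma Var(f) <= E|f'|^2
```

For Gaussians the predicted constant is in fact exact, since the Poincaré constant of N(0, v) equals 1/v.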

4. Examples
This section is devoted to examples. First we recall the result obtained in [18] and make the link with the present paper. After that, we give a general sufficient condition for transport-entropy inequalities which holds for absolutely continuous distributions with smooth densities. We end the section by showing how Theorem 1.7 can be used to construct borderline examples, typically a probability measure enjoying T_2 but not the logarithmic Sobolev inequality.

4.1. Connection with [18]. Let us make the connection between [18] and the present paper. Let us recall that a probability measure µ on R verifies Cheeger's inequality if there is λ_c > 0 such that

λ_c ∫ |f − f(m)| dµ ≤ ∫ |∇f| dµ, for all Lipschitz f : R → R,

where m is a median of µ and |∇f| is defined by (1.6). Cheeger's inequality is known to be strictly stronger than the Poincaré inequality. For probability distributions on R, it was proved by Bobkov and Houdré [8] that Cheeger's inequality holds if and only if the transport map T_{µ_1,µ} is Lipschitz.
In [18], we obtained an incomplete characterization of this type (Theorem 4.1). It is not difficult to construct a probability measure verifying, for example, T_2 but not Cheeger's inequality (and thus not covered by Theorem 4.1). For example, consider the probability ν(dx) = (1/Z) |x|^r e^{−|x|} dx for some r ∈ (0, 1). One can check that ν verifies Muckenhoupt's condition (1.9), and so the Poincaré inequality. Let T_1 be the transport map T_{µ_1,ν}. Writing F_ν(x) = F_{µ_1}(T_1^{−1}(x)) and taking the derivative at x = 0, we see that T_1′(x) → ∞ when x → 0, and so T_1 is not Lipschitz. According to Bobkov and Houdré [8], it follows that ν does not verify Cheeger's inequality (this example is taken from [8]). Now, consider T_2(x) = sign(x) min(|x|; |x|) and define µ as the image of ν under T_2. We claim that µ verifies Talagrand's inequality T_2 and not Cheeger's inequality. Indeed, since ν verifies the Poincaré inequality, one concludes from Theorems 3.1 and 1.7 that T_1 verifies the corresponding contraction estimate; combining this with the growth of T_2, the composed map T = T_2 ∘ T_1 = T_{µ_1,µ} verifies the contraction property (1.8) for the quadratic cost. Moreover, since T_2 is 1-Lipschitz, µ verifies the Poincaré inequality, and so, according to Theorem 1.7, µ verifies T_2. Finally, T′(x) = T_2′(T_1(x)) T_1′(x) → ∞ when x → 0, and so µ does not verify Cheeger's inequality.

4.2. A general criterion on the density. We recall below a sufficient condition, obtained by the author in [18], ensuring that a probability measure on R with a smooth density verifies a transport-entropy inequality.
Theorem 4.2. Suppose that α : R_+ → R_+ is a convex function of class C^2 such that α(t) = t^2 for small values of t, verifying the following regularity assumption: α″(t)/(α′(t))^2 → 0 when t → ∞. Let µ be an absolutely continuous probability measure on R with a density of the form dµ(x) = e^{−V(x)} dx. If V verifies the asymptotic condition (4.3), where m is the median of µ, then µ verifies the transport-entropy inequality T_α(a) for some a > 0.
Note that in the quadratic case, condition (4.3) was first obtained by Cattiaux and Guillin in [13]. The proof in [18] goes as follows: using a classical asymptotic analysis, one shows that condition (4.3) ensures that max(K^−(b); K^+(b)) is finite for b small enough. On the other hand, the condition lim inf_{x→±∞} |V′(x)| > 0 (which is implied by (4.3)) is enough to have Cheeger's inequality. The conclusion follows from Theorem 4.1.
Let us mention that multidimensional generalizations of condition (4.3) were proposed in [20] and in [14]. In the one-dimensional case, we do not know whether Theorem 1.7 can be used to recover them.

Let us recall that a Borel probability measure µ on R is said to verify the logarithmic Sobolev inequality if

(4.4) Ent_µ(f^2) ≤ C ∫ |∇f|^2 dµ,

for all Lipschitz f, with |∇f| defined by (1.6). The known hierarchy between the above-mentioned inequalities is the following:

logarithmic Sobolev ⇒ T_2 ⇒ Poincaré.

This chain of implications was first established by Otto and Villani in [33] on Riemannian manifolds (see also [5]); it is true in a general framework [24].

4.3.1. A probability measure verifying T_2 and not the logarithmic Sobolev inequality. In [13], Cattiaux and Guillin were the first to show that Talagrand's inequality is not equivalent to the logarithmic Sobolev inequality. They proved that the probability measure µ_CG defined on R by

µ_CG(dx) = (1/Z) exp(−|x|^3 − |x|^β − 3x^2 sin^2(x)) dx,

with 2 < β < 5/2, verifies T_2 but not the logarithmic Sobolev inequality. Our purpose is to produce another example whose tail distribution is exactly Gaussian.
Let us define a probability measure µ on R as the image of the exponential distribution µ_1(dx) = e^{−|x|} dx/2 under the map T : R → R defined as follows: T is odd, continuous, and for all k ∈ N, T(x) = k on the interval [k^2, (k + 1)^2 − 1] and affine on [(k + 1)^2 − 1, (k + 1)^2]. We claim that this probability µ does the job. First, observe that µ verifies the Poincaré inequality; this follows immediately from the fact that T is 1-Lipschitz. Moreover, it easily follows from the definition of T that T verifies the contraction property (1.8) for the quadratic cost. According to Theorem 1.7, we conclude that µ verifies T_2. (Note that Theorem 1.7 actually applies because T, being non-decreasing and continuous, coincides with the transport map T_{µ_1,µ}.)

To show that µ does not verify the logarithmic Sobolev inequality, we shall use the following criterion due to Bobkov and Götze [7] (see also [4]).

Theorem 4.5. Let µ be a Borel probability measure on R and let p : R → R_+ be the density of the absolutely continuous part of µ. The probability µ verifies the logarithmic Sobolev inequality if and only if

(4.6) D^+ := sup_{x>m} µ([x, +∞)) log(1/µ([x, +∞))) ∫_m^x dt/p(t) < ∞ and D^− := sup_{x<m} µ((−∞, x]) log(1/µ((−∞, x])) ∫_x^m dt/p(t) < ∞,

where m is any median of µ. Moreover, the optimal constant C_LS in (4.4) is such that

c_1 max(D^+, D^−) ≤ C_LS ≤ c_2 max(D^+, D^−),

where c_1, c_2 are universal constants.
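The staircase map T defined at the beginning of this subsection is elementary to implement, which makes its stated properties (1-Lipschitz, plateaus at integer heights, {T ≥ n} = [n^2, ∞), hence µ([n, ∞)) = e^{−n^2}/2) easy to check numerically; the sketch below (function names ours) follows that definition.

```python
import math

def T(x):
    # Odd, continuous staircase: T = k on [k^2, (k+1)^2 - 1], affine with
    # slope 1 on [(k+1)^2 - 1, (k+1)^2], as in the text.
    if x < 0:
        return -T(-x)
    k = math.isqrt(int(math.floor(x)))      # k^2 <= x < (k+1)^2
    ramp_start = (k + 1) ** 2 - 1
    return float(k) if x <= ramp_start else k + (x - ramp_start)

# T reaches level n exactly at n^2, so {T >= n} = [n^2, infinity) and
# mu([n, infinity)) = mu_1([n^2, infinity)) = e^{-n^2}/2: a Gaussian tail.
assert [T(n ** 2) for n in range(5)] == [0.0, 1.0, 2.0, 3.0, 4.0]
assert T(8) == 2.0 and T(8.5) == 2.5        # plateau, then ramp before 9

# 1-Lipschitz on a grid:
pts = [j * 0.25 for j in range(0, 200)]
assert all(abs(T(b) - T(a)) <= (b - a) + 1e-12
           for a, b in zip(pts, pts[1:]))
```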
Remark 4.7. We refer to Proposition 4.12 for the relation between (4.4) and (4.6). In particular, the probability µ defined above enters the class of probability measures for which (4.4) and (4.6) are equivalent.
Let us come back to our example and show that the probability measure µ constructed above does not verify the logarithmic Sobolev inequality. We will show that D^+ = ∞. Let f : R_+ → R be a bounded measurable function; then ∫ f dµ can be computed by a change of variables from the definition of T, which identifies the density p of the absolutely continuous part of µ on R_+. Observe also that the median of µ is 0 and that, for all n ∈ N,

µ([n, ∞)) = µ_1([n^2, ∞)) = (1/2) e^{−n^2}.
After some calculations, we get D_n^+ → ∞ when n → ∞, and so D^+ = ∞. This completes the proof that µ does not verify the logarithmic Sobolev inequality.
Remark 4.8. If one wants to construct a counterexample µ̃ absolutely continuous with respect to the Lebesgue measure, it suffices to replace, in the definition of T, the constant steps by affine steps with small slope.

4.3.2. A probability measure with a Gaussian tail verifying the Poincaré inequality and not T_2. To motivate the construction of this probability measure, let us say a word on the tightening of functional inequalities. Recall that an absolutely continuous probability measure µ on R^n verifies the defective logarithmic Sobolev inequality if there are constants C, D ≥ 0 such that

Ent_µ(f^2) ≤ C ∫ |∇f|^2 dµ + D ∫ f^2 dµ,

for all Lipschitz f : R^n → R. A very classical result states that if µ verifies a defective logarithmic Sobolev inequality with constants C, D and a Poincaré inequality with constant λ, then it verifies the (tight) logarithmic Sobolev inequality with a constant that can be expressed in terms of C, D and λ. Up to a subtle centering argument due to Rothaus [35], this tightening result is intuitively clear. The tightening recipe "defective functional inequality + Poincaré inequality = tight functional inequality" appears to be very general and holds for a large class of functional inequalities (see e.g. [2,3]). A natural question is whether this tightening principle holds for transport-entropy inequalities.
Let us say that a probability measure µ on R^n, equipped with its standard Euclidean norm ‖·‖_2, verifies the defective transport-entropy inequality T_2 if there are C, D ≥ 0 such that

T_2(ν, µ) ≤ C H(ν | µ) + D,

for all probability measures ν. (The transport cost T_2(ν, µ) is defined as the infimum of E‖X − Y‖_2^2 over all possible random variables X, Y with respective laws µ and ν.) This defective T_2 inequality has been characterized in various places [13,10,17]. It has been shown that it is equivalent to Gaussian concentration, or equivalently to the finiteness of ∫ e^{ε‖x‖_2^2} µ(dx) for some ε > 0. Therefore, if the tightening principle were true for transport-entropy inequalities, then we would have the following equation:

(4.9) Gaussian concentration + Poincaré inequality = T_2.

The question of validating or refuting (4.9) was communicated to us by Cattiaux and Guillin.
Our next goal is to disprove (4.9) by exhibiting a counterexample µ̃ on R. The construction is as follows: µ̃ will be the image of the exponential distribution µ_1 under an odd, continuous, non-decreasing and Lipschitz map T : R → R which verifies |T(x)| ≤ |x| for all x ∈ R but does not satisfy the growth condition (1.8) for α(x) = x^2, which means that

(4.10) sup_{x∈R, u>0} (T(x + u) − T(x)) / √u = +∞.

Let us take for granted the existence of such a map T. The fact that it is Lipschitz then implies that µ̃ verifies the Poincaré inequality, and the inequality |T(x)| ≤ |x| easily implies that ∫ e^{εx^2} µ̃(dx) < ∞ for all ε < 1. Finally, we conclude from Theorem 1.7 and condition (4.10) that µ̃ does not verify T_2 (here we use the fact that T is actually the transport map between µ_1 and µ̃).

Now let us construct such a map T. The strategy is to wait until there is enough room under the graph of x ↦ √x to put a linear step with slope 1 and range of length n, for each n ∈ N*. A possible construction is as follows: let x_n = n(n+1)/2 for all n ∈ N, and define T(x) = x_{n−1} for x ∈ [x_{n−1}^2, x_n^2 − n] and T(x) = x − x_n^2 + x_n for x ∈ [x_n^2 − n, x_n^2], for all n ∈ N*. This defines T on R_+, and so everywhere, since T is assumed to be odd. This map T is clearly non-decreasing and 1-Lipschitz, and it is not difficult to check that |T(x)| ≤ |x| for all x ∈ R. Finally, T(x_n^2) − T(x_n^2 − n) = x_n − x_{n−1} = n, which proves (4.10).

Appendix. In our definition of the Poincaré (and logarithmic Sobolev) inequality, we took the class of Lipschitz functions as the domain of the inequality, with

(4.11) |∇f|(x) = lim sup_{y→x} |f(y) − f(x)| / |y − x|

in the right-hand side. The following proposition establishes the equivalence between this definition and others appearing in the literature.
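The map T of Section 4.3.2 can likewise be checked numerically. The sketch below (helper names ours, implementing a construction consistent with the constraints stated there: x_n = n(n+1)/2, plateaus at height x_{n−1}, slope-1 ramps of length n reaching level x_n at the point x_n^2) verifies the ramp increments that defeat the quadratic growth condition, as well as the bound |T(t)| ≤ |t| (in fact T(t)^2 ≤ t for t ≥ 0).

```python
def x_(n):
    # triangular numbers x_n = n(n+1)/2
    return n * (n + 1) // 2

def T_tilde(t):
    # Plateau at height x_{n-1} on [x_{n-1}^2, x_n^2 - n], then a slope-1
    # ramp of length n reaching x_n at x_n^2 (odd extension for t < 0).
    if t < 0:
        return -T_tilde(-t)
    n = 1
    while x_(n) ** 2 < t:
        n += 1
    ramp_start = x_(n) ** 2 - n
    return float(x_(n - 1)) if t <= ramp_start else x_(n) + (t - x_(n) ** 2)

# Increments over ramps of length n equal n, while an increment of order
# sqrt(n) would be needed for the quadratic growth condition (4.10) to hold.
for n in range(1, 8):
    assert T_tilde(x_(n) ** 2) - T_tilde(x_(n) ** 2 - n) == n

# |T(t)| <= |t| (indeed T(t)^2 <= t) on a grid of points t >= 0:
assert all(T_tilde(j * 0.5) ** 2 <= j * 0.5 + 1e-9 for j in range(0, 200))
```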
Proposition 4.12. Let µ be a Borel probability measure on R with the following decomposition: µ = µ_ac + µ_s, where µ_ac and µ_s are non-negative Borel measures such that µ_ac is absolutely continuous with respect to the Lebesgue measure and µ_s is such that there is a closed set C with µ_s(C^c) = 0 = Leb(C). Let λ > 0; the following are equivalent:
(1) The probability measure µ verifies λ Var_µ(f) ≤ ∫ |∇f|^2 dµ, for all Lipschitz f.
(2) λ Var_µ(f) ≤ ∫ (f′)^2 dµ, for all continuously differentiable f : R → R.
(3) λ Var_µ(f) ≤ ∫ (f′)^2 dµ_ac, for all Lipschitz f : R → R.
The same conclusion holds for the logarithmic Sobolev inequality instead of Poincaré inequality.
We recall that, according to Rademacher's theorem, Lipschitz functions are Lebesgue-almost everywhere differentiable, so that the right-hand side of (3) is well defined.
Proof. We do the proof in the case of the Poincaré inequality. We remark that when f is differentiable at x, then |∇f|(x) = |f′(x)|. So (1) ⇒ (2) and (3) ⇒ (1). Let us show that (2) implies (3). First notice that (2) is equivalent to

(4.13) λ Var_µ(F_f) ≤ ∫ f^2 dµ,

for all bounded continuous f, with F_f(x) = ∫_0^x f(t) dt. Take f a measurable bounded function. Define φ_n(x) = √(n/(2π)) e^{−nx^2/2}, f̃_n = φ_n ∗ f, and h_n(x) = min(1; n d(x, C)), where d(x, C) = inf_{y∈C} |x − y|, and finally f_n = f̃_n h_n. The functions f_n and f̃_n are continuous on R, and it is not difficult to check that |f_n| ≤ |f̃_n| ≤ M, where M = sup |f|. Define F_n = F_{f_n} and F = F_f; it holds, for all x > 0,

|F_n(x) − F(x)| ≤ ∫_0^x |f̃_n − f|(t) dt + M ∫_0^x (1 − h_n(t)) dt.

Since f̃_n → f in L^1([a, b], Leb) for every bounded interval [a, b], and 1 − h_n → 1_C pointwise (this property requires that C is closed), we easily conclude, from the fact that Leb(C) = 0, that F_n → F pointwise. Moreover, the inequality |F_n(x)| ≤ M|x| enables us to use Lebesgue's dominated convergence theorem (µ has a finite moment of order 2). So Var_µ(F_n) → Var_µ(F) when n goes to ∞. On the other hand, since f_n is bounded and continuous, one can apply (4.13) and conclude that

λ Var_µ(F_n) ≤ ∫ f_n^2 dµ = ∫ f_n^2 dµ_ac,

where the equality follows from the fact that f_n vanishes on C. It is not difficult to see that one can extract from f̃_n a subsequence converging Lebesgue-almost everywhere on R. Since |f_n| ≤ M for all n, one can apply Fatou's lemma along this subsequence and conclude that

(4.14) λ Var_µ(F) ≤ ∫ f^2 dµ_ac, for all bounded f.

Now, let g be a Lipschitz function on R. Being Lipschitz, this function is absolutely continuous, so its derivative g′(t) exists Lebesgue-almost everywhere, belongs to L^1([a, b], Leb) for every bounded interval [a, b], and satisfies g(x) = g(0) + ∫_0^x g′(t) dt for all x ∈ R (see e.g. [36]). Applying (4.14) to the bounded function f defined by f(t) = g′(t) if g is differentiable at t and f(t) = 0 otherwise, we finally obtain (3).