Viscous Aubry-Mather theory and the Vlasov equation

The Vlasov equation models a group of particles moving under a potential $V$; moreover, each particle exerts on the others a force deriving from a potential $W$. We shall suppose that these particles move on the $p$-dimensional torus ${\bf T}^p$ and that the interaction potential $W$ is smooth. We are going to perturb this equation by a Brownian motion on ${\bf T}^p$; adapting to the viscous case methods of Gangbo, Nguyen, Tudorascu and Gomes, we study the existence of periodic solutions and the asymptotics of the Hopf-Lax semigroup.


Introduction
The Vlasov equation models the motion of a group of particles under the action of a time-dependent potential V and a mutual interaction W. For definiteness, we shall suppose that the particles move on the torus T^p := R^p/Z^p; we put on the position and velocity space T^p × R^p the coordinates (x, v) and we suppose that, at time t, the particles are distributed on T^p × R^p according to a probability measure f_t. Then, the Vlasov equation has the form

∂_t f_t + ⟨v, ∂_x f_t⟩ − ⟨∂_x V(t, x) + (∂_x W ∗ f_t)(x), ∂_v f_t⟩ = 0,    (VL)_∞

where (∂_x W ∗ f_t)(x) = ∫_{T^p×R^p} ∂_x W(x − y) df_t(y, v′) and ⟨·,·⟩ denotes the scalar product in R^p. Since [7], one looks for weak solutions of (VL)_∞; in other words, given an initial distribution f_0, one looks for a continuous curve of probability measures f_t satisfying

∫_{T^p×R^p} φ(0, x, v) df_0(x, v) + ∫_0^{+∞} ∫_{T^p×R^p} [∂_t φ + ⟨v, ∂_x φ⟩ − ⟨∂_x V(t, x) + (∂_x W ∗ f_t)(x), ∂_v φ⟩] df_t(x, v) dt = 0

for all φ ∈ C_0^∞([0, +∞) × T^p × R^p). Our hypotheses on V and W are the following:

1) V ∈ C(T, C^3(T^p)), and

2) W ∈ C^3(T^p). Thus, W lifts to a C^3 function on R^p, Z^p-periodic; we shall also suppose that W(x) = W(−x) and that W(0) = 0.

* Dipartimento di Matematica, Università Roma Tre, Largo S. Leonardo Murialdo, 00146 Roma, Italy. email: bessi@matrm3.mat.uniroma3.it. Work partially supported by the PRIN2009 grant "Critical Point Theory and Perturbative Methods for Nonlinear Differential Equations".
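The transport structure of the Vlasov equation can be made explicit through its characteristics: each particle follows the Newtonian flow of the mean-field potential. The following is a standard sketch, not a quotation from the text; the sign conventions are our assumption.

```latex
% Characteristics of the Vlasov equation (model sketch):
% a particle at (x, v) feels the force -\partial_x\big(V + W * f_t\big).
\dot x(t) = v(t), \qquad
\dot v(t) = -\partial_x V\big(t, x(t)\big)
  - \int_{T^p \times R^p} \partial_x W\big(x(t) - y\big)\, df_t(y, v'),
```

and f_t is the push-forward of f_0 along this flow; the viscous perturbation studied in the paper adds a Brownian term to this system.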
A recent idea (see [11], [12]) is to view (VL)_∞ as a Lagrangian system in the space of measures; indeed, it is possible to define what it means for a curve μ_t of probability measures on T^p to minimize a suitable Lagrangian action. The advantage is that one can use the tools of Lagrangian dynamics (Aubry-Mather theory, Hamilton-Jacobi equations, minimal characteristics, etc.), albeit on the difficult "differential manifold" of probability measures.
In this paper, we are going to adapt to the viscous case an older approach: following [7], we graft a fixed point argument onto the viscous Mather theory of [13]. Let us briefly outline what we are doing in the case of periodic orbits.
We have the following.
Thus, our "characteristics" are the solutions of a Fokker-Planck equation bringing mass forward in time; the drift of this equation, or the optimal trajectory, is determined by a Hamilton-Jacobi equation, backward in time. This is quite typical for this kind of problem: see for instance equation (5.40) of [17] or theorem 3.9 of [12].
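In a model normalization (noise of size 1/√β, Lagrangian ½|v−c|² − P_ψ; these constants are our assumption, not a quotation from the text), the forward-backward structure just described reads:

```latex
% Backward Hamilton-Jacobi equation for the value function u
% (with eigenvalue \bar H_\psi(c) added so that periodic solutions exist):
-\partial_t u = \frac{1}{2\beta}\Delta u + \langle c, \partial_x u\rangle
  - \tfrac12 |\partial_x u|^2 - P_\psi(t, x) + \bar H_\psi(c),
% Forward Fokker-Planck equation, whose drift is the optimal feedback c - \partial_x u:
\partial_t \rho = \frac{1}{2\beta}\Delta \rho
  - \mathrm{div}\,\big(\rho\,(c - \partial_x u)\big).
```

The Hamilton-Jacobi equation is solved backward from a final condition, and its optimal drift is then fed forward into the Fokker-Planck equation.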
We briefly sketch the proof of theorem 1; the complete details are in section 1 below. First of all, we fix ψ satisfying points d1)-d3) above; then we find, as in [13], a couple (u_ψ, H̄_ψ(c)) which solves (HJ)_{ψ,per}. By [13], the number H̄_ψ(c) is unique and u_ψ is unique up to an additive constant. To c − ∂_x u_ψ is associated a stochastic flow, whose stationary Fokker-Planck equation is (FP)_{c−∂_x u_ψ,per}; again by [13], (FP)_{c−∂_x u_ψ,per} has a unique periodic solution ρ_ψ satisfying d1)-d3). In other words, we have a map ψ → ρ_ψ bringing densities to densities; we shall find a fixed point ρ_β of this map by the Schauder fixed point theorem. We shall see that (u_{ρ_β}, ρ_β, H̄_{ρ_β}(c)) solves (HJ)_{ρ_β,per} − (FP)_{c−∂_x u_{ρ_β},per} practically by definition; the existence of a minimum in (2) will follow from the fact that the fixed points ρ_β form a compact set.
In section 2, we study the Hopf-Lax semigroup. We denote by M_1(T^p) the space of Borel probability measures on T^p with the 1-Wasserstein distance (see section 1 for a definition); we shall prove the following: R_β, together with its density ρ_β, solves the corresponding system. Among the solutions (u_β, ρ_β) of (HJ)_{ρ_β,f} − (FP)_{−m,c−∂_x u_β,μ}, there is one which minimizes the action; we call such a minimum (Λ^m_c U)(μ).
Since minimizing over fixed points is uncomfortable, one could ask whether this restriction can be removed, getting a problem more similar to (1), exactly as in the zero-viscosity situation (we refer again to theorem 3.9 of [12]). We also note a quirk of the notation: in the Hamilton-Jacobi equation we have H_{ρ_β}, while in (3) we have L_{c,½ρ_β}; again, we share this factor two with the zero-viscosity situation and we shall see the reason for it in the proof of lemma 2.5 below.
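The factor two has a one-line explanation at the level of the interaction energy; the following computation is our own illustration, using only the hypothesis W(x) = W(−x).

```latex
% Collective interaction energy, with the factor 1/2:
F(\mu) := \tfrac12 \iint_{T^p \times T^p} W(x - y)\, d\mu(x)\, d\mu(y).
% First variation in the direction of a signed measure \nu of zero mass:
\frac{d}{d\varepsilon}\Big|_{\varepsilon = 0} F(\mu + \varepsilon\nu)
  = \tfrac12 \iint W(x - y)\,\big[d\mu(x)\, d\nu(y) + d\nu(x)\, d\mu(y)\big]
  = \int_{T^p} (W * \mu)(x)\, d\nu(x),
```

where the last equality uses the symmetry of W: a single particle feels the full mean-field potential W ∗ μ, while the collective functional carries the ½.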
Theorems like theorem 3 are common in the theory of mean field games (see for instance [4]). In the language of mean field games, we are saying that each particle tries to minimize unilaterally the cost below, where X solves (SDE)_{−m,Y,δ_{x_0}} and ρ is the distribution of the other particles; this is the reason for equation (HJ)_{ρ_β,f} in theorem 2. The result of the independent efforts of all the particles (or the Nash equilibrium, as it is called) is that the whole community minimizes (3).
Let U: M_1(T^p) → R be bounded; theorem 3 prompts us to define (4), where the infimum is taken over all Lipschitz vector fields Y and the density ρ satisfies (FP)_{−m,Y,μ}. Naturally, if U is linear as in theorem 3, then Ψ^m_c U = Λ^m_c U. We shall see in proposition 2.10 below that Ψ^m_c has the semigroup property: Ψ^{m+n}_c = Ψ^m_c ∘ Ψ^n_c. Theorem 3 tells us that the infimum in (4) is a minimum when U is a linear function on measures as in theorem 2; we don't know whether this is true when U is in some more reasonable class, for instance continuous or Lipschitz. We don't even know whether, for U continuous, Ψ^1_c U is continuous; however, when U is linear as in theorem 2, we can prove that Ψ^m_c U is Lipschitz, uniformly in m. This allows us to find, for a suitable λ ∈ R, Lipschitz fixed points of the operator Ψ_{c,λ} defined by (5). There is a unique λ ∈ R for which Ψ_{c,λ} has a fixed point Û in C(M_1(T^p), R). In other words, for any μ ∈ M_1(T^p), there is a Lipschitz vector field Ȳ, optimal in the definition of (Ψ^1_{c,λ}Û)(μ), where X solves (SDE)_{−1,Ȳ,μ} and ρ̄ solves (FP)_{−1,Ȳ,μ}.
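For concreteness, here is a plausible explicit form of the definition (4) and of the operator Ψ_{c,λ}; the exact integrand and the sign of λ are our reconstruction, chosen to be consistent with (3):

```latex
% Hopf-Lax type operator on bounded functions U of measures:
(\Psi^m_c U)(\mu) = \inf_Y \Big\{
  \int_{-m}^0 \!\! \int_{T^p} L_{c, \frac12 \rho(s)}\big(s, x, Y(s, x)\big)\,
  \rho(s, x)\, dx\, ds + U\big(\rho(0, \cdot)\, L^p\big) \Big\},
% where \rho solves (FP)_{-m, Y, \mu} and Y runs over Lipschitz vector fields;
% the shifted operator used for the fixed point problem:
\Psi_{c,\lambda} U := \Psi^1_c U + \lambda .
```

With this form, the semigroup property Ψ^{m+n}_c = Ψ^m_c ∘ Ψ^n_c is the usual dynamic programming principle: optimizing over [−m−n, 0] splits into optimizing over [−m−n, −n] with terminal cost Ψ^n_c U.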
The function Û is Lipschitz for the 1-Wasserstein distance; by (5), the infimum in the definition (4) of Ψ^1_{c,λ}Û is a minimum.
The proof of this theorem is similar to the corresponding statement in Aubry-Mather theory. Indeed, using an approximation with finitely many particles, we shall prove that, for a suitable λ ∈ R, the sequence (Ψ_{c,λ})^n(0) of continuous functions on the compact space M_1(T^p) is equibounded and equilipschitz; by Ascoli-Arzelà, it has a subsequence converging to a limit Û; we shall prove that Û is a fixed point of Ψ_{c,λ}.

§1 Periodic orbits
In this section, we are going to prove theorem 1. We begin with a study of (HJ)_{ψ,per}; we follow the approach of [13] but, for completeness' sake, we reprove several results of that paper using, as in [2], the Feynman-Kac formula.

Definitions.
• We group in a set Den the functions on T × T p which satisfy points d1)-d3) in the introduction. Clearly, the set Den is closed in C(T × T p ).
• We define M_1(T^p) as the space of all Borel probability measures on T^p; if μ_1, μ_2 ∈ M_1(T^p), we define the 1-Wasserstein distance between them as

d_1(μ_1, μ_2) = min_γ ∫_{T^p×T^p} |x − x′|_{T^p} dγ(x, x′),

where |x − x′|_{T^p} is the distance on the flat torus T^p. The minimum is taken over all the Borel probability measures γ on T^p × T^p whose first and second marginals are, respectively, μ_1 and μ_2. It is standard (see for instance section 7.1 of [17]) that d_1 turns M_1(T^p) into a complete metric space, and induces the weak∗ topology.
We note that, if ψ ∈ Den and L p denotes the Lebesgue measure on T p , then the function : t → ψ(t, ·)L p belongs to C(T, M 1 (T p )).
• We extend the definition of P_ψ we gave in the introduction: for ψ ∈ C(R, M_1(T^p)) we set P_ψ as in (1.1).

Lemma 1.1. There is C_1 > 0, independent of ψ ∈ C(R, M_1(T^p)), such that the function P_ψ(t, x) defined in (1.1) satisfies (1.2).

Proof. We recall the definition (1.1) of P_ψ. By our hypotheses on V and W, we have the bound (1.3). For 0 ≤ j ≤ 3, differentiation under the integral sign applies to (1.1). Since t → ψ(t, ·) is continuous from R to the weak∗ topology, the formula above implies that P_ψ is in C(R, C^3(T^p)). Since ψ(t, ·) is a probability measure and the C^3 norm is convex, (1.2) follows from the last formula and (1.3).

\\\
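A natural guess for the extension (1.1), consistent with the differentiation under the integral sign carried out in the proof above, is the following; the formula itself is our reconstruction, not a quotation from the text.

```latex
% Mean-field potential induced by the curve of measures \psi:
P_\psi(t, x) = V(t, x) + \int_{T^p} W(x - y)\, d\psi(t)(y),
% and, for 0 \le j \le 3, differentiating under the integral sign:
\partial_x^j P_\psi(t, x) = \partial_x^j V(t, x)
  + \int_{T^p} \partial_x^j W(x - y)\, d\psi(t)(y).
```

Since ψ(t) is a probability measure, this yields ||P_ψ(t, ·)||_{C^3(T^p)} ≤ ||V||_{C(T,C^3)} + ||W||_{C^3}, uniformly in ψ, which is the content of (1.2).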
From now on, we shall fix ψ ∈ Den; the functions P ψ , L c,ψ and H ψ are defined as in the introduction.
Following [2], we note that, if (u, A) solves the Hamilton-Jacobi equation and is periodic in space (i.e. it quotients to a continuous function on T^p), then the couple (v, A) = (e^{−βu}, A) is a solution, periodic in space, of the "twisted" Schrödinger equation. Vice versa, the logarithm of a positive solution of (TS)_{ψ,per} solves (HJ)_{ψ,per}. Thus, solving (HJ)_{ψ,per} reduces to solving (TS)_{ψ,per}; that's what we are going to do next.
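The computation behind this reduction is the classical Hopf-Cole (logarithmic) transform. Here is a model version with viscosity 1/(2β); the potential term Q stands for the sign-convention dependent combination of P_ψ and the eigenvalue, and the drift c is absorbed by the further conjugation v ↦ e^{β⟨c,x⟩}v, which is where the "twist" comes from. The constants are our assumption.

```latex
% Hopf-Cole transform: set v = e^{-\beta u}, so that
\partial_x v = -\beta v\, \partial_x u, \qquad
\Delta v = \beta v\, \big(\beta |\partial_x u|^2 - \Delta u\big).
% If u solves the viscous Hamilton-Jacobi equation
\partial_t u = -\frac{1}{2\beta}\Delta u + \tfrac12 |\partial_x u|^2 + Q(t, x),
% then, substituting, the quadratic terms cancel and v solves the LINEAR equation
\partial_t v = -\beta v\, \partial_t u = -\frac{1}{2\beta}\Delta v - \beta\, Q(t, x)\, v .
```

Conversely, u = −(1/β) log v recovers a solution of the Hamilton-Jacobi equation from any positive solution of the linear one.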
For φ ∈ C(T^p) and A ∈ R we consider the evolution (or involution, since it goes backward in time) with final condition φ. If t ≤ 0, we can use the Feynman-Kac formula (see for instance [6]) and write the unique solution of (TS)_{ψ,φ} explicitly. In the formula above, w is a Brownian motion on [t, +∞) with w(t) = 0, and E_w is the expectation with respect to the Wiener measure.
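In the same model normalization as above (generator (1/(2β))Δ, potential term −βQ; the 1/√β scaling of the noise is our assumption, suggested by the change of variables z/√β used repeatedly below), the Feynman-Kac representation for the backward linear equation with final condition φ at time 0 reads, for t ≤ 0:

```latex
% Feynman-Kac formula for \partial_t v = -\frac{1}{2\beta}\Delta v - \beta Q v,
% with final condition v(0, \cdot) = \varphi:
v(t, x) = E_w\Big[\exp\Big(\beta \int_t^0
    Q\big(s,\, x + \tfrac{1}{\sqrt\beta}\, w(s)\big)\, ds\Big)\,
  \varphi\big(x + \tfrac{1}{\sqrt\beta}\, w(0)\big)\Big],
```

where w is a Brownian motion with w(t) = 0 and E_w is the expectation with respect to the Wiener measure.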
We shall see in lemma 1.4 below that there is a bijection between the positive eigenfunctions of L_{(ψ,0,−1)} and the positive solutions of (TS)_{ψ,per}; now, we prove that such eigenfunctions exist.
Proof. We recall from [16] (see also chapter XVI of [3] for G. Birkhoff's original exposition) a few facts about the Perron-Frobenius theorem. Let us denote by C + ⊂ C(T p ) the cone of strictly positive, continuous functions. We forego the easy proof that L (ψ,0,−1) brings C + into itself.
Let v_1, v_2 ∈ C_+; we say that v_1 and v_2 are equivalent, v_1 ≃ v_2, if v_2 = λv_1 for some λ > 0. It turns out ([16]) that (C_+/≃, θ) is a complete metric space. We refer again to [16] or [3] for the proof that the map induced by L_{(ψ,0,−1)} is a contraction of (C_+/≃, θ) with Lipschitz constant tanh(D/4). As a consequence, points 1) and 2) follow from the contraction mapping theorem if we prove that D < +∞.
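For the reader's convenience, here is one standard normalization of the projective metric and of Birkhoff's contraction theorem; the text's exact definitions of α and θ may differ slightly, so this is a hedged reconstruction.

```latex
% Projective (Hilbert) metric on the cone C_+ of strictly positive continuous functions:
\alpha(v_1, v_2) = \sup_{x \in T^p} \frac{v_2(x)}{v_1(x)}, \qquad
\theta(v_1, v_2) = \log\big(\alpha(v_1, v_2)\,\alpha(v_2, v_1)\big).
% Birkhoff's theorem: if L is linear, preserves C_+, and its image has projective diameter
D = \sup\{\theta(L v_1, L v_2) : v_1, v_2 \in C_+\} < +\infty,
% then L contracts the projective metric:
\theta(L v_1, L v_2) \le \tanh(D/4)\,\theta(v_1, v_2).
```

Note that θ(v_1, v_2) = 0 exactly when v_2 = λv_1, so θ is a genuine distance on the quotient C_+/≃.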
Actually, we are going to show that D is bounded from above independently of ψ ∈ Den; equivalently, the Lipschitz constant of L (ψ,0,−1) does not depend on ψ. We shall need this fact in the next lemma.
Let v_1, v_2 ∈ C_+. Recalling the definition of θ, we see that (1.8) holds. Thus, D < +∞ follows if we prove that there is C_3 > 0 such that the bound below holds for all v ∈ C_+; since the term on the left is homogeneous of degree zero in v, we can suppose that v satisfies (1.7).
We prove (1.9); in the following, C_i always denotes a constant independent of v and ψ. By (1.5) and the fact that v > 0, we get a first bound. Setting (1/√β)z = y and cancelling the factor e^{−β⟨c,x⟩} outside the integral against e^{β⟨c,x⟩} inside, we get the first inequality below. The second inequality above comes from lemma 1.1 and the fact that v, which belongs to C_+, is positive.
By lemma 1.1, the constant C 1 does not depend on ψ ∈ Den.
We assert that the bound below holds for a constant C_5 > 0 independent of ψ and v. Indeed, the inequality follows since v is positive; since v is periodic and satisfies (1.7), the integral above is 1, and the equality follows.
For the estimate from above, we get from (1.5) the formula below. We cancel e^{−β⟨c,x⟩} outside the integral against e^{β⟨c,x⟩} inside; now lemma 1.1 gives us the first inequality below; the equality follows from the change of variables (1/√β)z = y.
Since v is positive periodic, and by (1.7) integrates to 1 on the unit cube, we get the first inequality below.
Since the sum in the last formula is finite, we get the bound below for a constant C_7 > 0 independent of ψ and v. Now (1.9) follows from the last formula and (1.10); we have seen that (1.9), by the contraction mapping theorem, implies points 1) and 2) of the statement.

Lemma 1.3.
Let ψ ∈ Den and let (v ψ , B ψ ) be as in the last lemma. Then, v ψ ∈ C 3 (T p ) and the following two points hold.
Proof. We prove point 1). Since v_ψ satisfies (1.7), by (1.10) and (1.11) there is C_8 > 1 such that (1.12) holds. Since we also have that L_{(ψ,0,−1)} v_ψ = B_ψ v_ψ, we get the formula below; integrating on T^p and using (1.7), we get an estimate from which iii) of point 1) follows.
From point iii) and (1.12), possibly increasing C 8 , we get point ii). We show i).
We would like to differentiate under the integral sign in (1.5); we cannot do this immediately, because we only know that the final condition φ (which in our case is v_ψ) is in C^0. Let E_{(0,z)} denote the expectation of the Brownian bridge with w(−1) = 0 and w(0) = z; by (1.5) we get that, for v ∈ C_+, the representation below holds. We recall from [14] that, if w̃ is a Brownian bridge with w̃(−1) = w̃(0) = 0, then w(t) := w̃(t) + (1 + t)√β(y − x) is a Brownian bridge with w(−1) = 0, w(0) = √β(y − x). This and the last formula imply the formula below, which allows us to differentiate under the integral sign, even if v is only C^0; using lemma 1.1, we easily get a bound with a constant C_9 independent of ψ. By ii), we get a further estimate; since L_{(ψ,0,−1)} v_ψ = B_ψ v_ψ, formula i) now follows from iii).
Step 1. We begin by observing that θ and the sup norm induce equivalent topologies on the subset A of the functions of C_+ which satisfy (1.7). Indeed, (1.8) proves that the C^0 topology is stronger; for the opposite inclusion, let θ(v_n, v) → 0 and let v_n, v satisfy (1.7). Since θ(v_n, v) → 0, we have that, for any ε > 0 and n large enough, θ(v_n, v) ≤ ε. The definition of α implies the first two inequalities below; the last one follows by the first inequality above.
Since v and v n satisfy (1.7), if we integrate the formula above on T p , we get that α(v n , v) → 1 and that α(v, v n ) → 1; since min v > 0, again from the formula above we get that v n → v uniformly.
Step 2. Let v ∈ C_+ be fixed; we assert that the map ψ → Ξ(ψ, v) is continuous from the ||·||_sup to the θ topology. Indeed, we saw in step 1 that, on C_+, the C^0 topology is stronger than the θ topology; thus, it suffices to prove that Ξ(·, v): (Den, ||·||_sup) → (C_+, ||·||_sup) is continuous. The proof of this, which ends the proof of the assertion, follows by applying the theorem of continuity under the integral sign to (1.5), and we forego it.
Step 3. We assert that the map : ψ → v ψ is continuous from (Den, || · || sup ) to (A, || · || sup ); by step 1, it suffices to prove that it is continuous from (Den, || · || sup ) to (A, θ). We have seen in the proof of lemma 1.2 that : v → Ξ(ψ, v) is a contraction for the θ-topology, whose Lipschitz constant does not depend on ψ. Since : ψ → Ξ(ψ, v) is continuous by step 2, we can apply the theorem of contractions depending on a parameter, and get that the map : ψ → v ψ is continuous from (Den, || · || sup ) to (C + , θ), as we wanted.
Step 4. We assert that the map : ψ → B ψ is continuous from Den to R.
Since B_ψ v_ψ = L_{(ψ,0,−1)} v_ψ and v_ψ is bounded away from zero, it suffices to prove that both maps ψ → v_ψ and ψ → L_{(ψ,0,−1)} v_ψ are continuous from Den to C^0(T^p). The first fact has been proven in step 3; we prove the second one. Now the assertion follows from the fact that (with the sup norm in all spaces) ψ → v_ψ is continuous, that ψ → L_{(ψ,0,−1)} v is continuous, and that v → L_{(ψ,0,−1)} v is uniformly Lipschitz by (1.13).
End of the proof of point 2). For φ ∈ C_+, we get from (1.5) formula (1.14). Setting A_ψ = (1/β) log B_ψ, the formula above implies (1.15). The same proof which yielded (1.13) also yields that there is C_10 > 0 such that, if A and A′ satisfy the estimate of point 1), iii) of this lemma, then (1.16) holds; in the chain below, the equality comes from (1.15) and the last inequality comes from (1.13) and (1.16). Since the map ψ → v_ψ is continuous from Den to the C^0 topology by step 3, and ψ → A_ψ is continuous too (because ψ → B_ψ is continuous by step 4 and point 1), iii) of this lemma holds), point 2) follows.

\\\
In the next lemma, we show how the fixed points of L (ψ,0,−1) induce solutions of (T S) ψ,per .
Then, there is a constant C_10 > 0, independent of ψ ∈ Den, such that (1.17) and (1.18) hold; let us consider the map I: ψ → (v_ψ, A_ψ).
Proof. As in the proof of lemma 1.3, we set ṽ_ψ as in (1.19). By (1.15), we get that ṽ_ψ(−1, x) = ṽ_ψ(0, x); in other words, ṽ_ψ quotients to a function on T × T^p; equivalently, it satisfies the second formula of (TS)_{ψ,per}.
Let us prove (1.18) for the function ṽ_ψ defined by (1.19); we prove the inequality on the left, since the one on the right is analogous.
The first equality below is (1.19). Since ṽ_ψ is periodic, we can suppose that t ∈ [−1, 0]; now (1.5) implies the first inequality below; the second inequality follows from lemma 1.1 and the fact that t ∈ [−1, 0]; the third one comes from point 1), ii) and iii) of lemma 1.3. This yields the inequality on the left of (1.18).
We prove (1.17). We begin by noting that the estimate on A_ψ follows by point 1), iii) of lemma 1.3, and by the fact that A_ψ = (1/β) log B_ψ. We end the proof of (1.17) with the estimates on the derivatives. Let w̃ be the Brownian bridge with w̃(−1) = 0 = w̃(0) and let Ẽ_{(0,0)} denote its expectation; for t < 0, let w be the Brownian bridge with w(t) = 0 = w(0) and let E_{(0,0)} denote its expectation; we recall the scaling relation between w and w̃. This yields the second inequality below, while (1.19) yields the first one; the third one comes from the change of variables (1.20). By point 1), i) of lemma 1.3, we can differentiate under the integral sign and get the estimate below. Since ṽ_ψ is periodic in time, (1.17) follows.
By theorem 9.1 and proposition 6.6 of [6], the Feynman-Kac formula holds for the unbounded final condition e^{β⟨c,x⟩} v_ψ; this, (1.19) and (1.5) imply that ṽ_ψ satisfies the first formula of (TS)_{ψ,per} for t < 0; since it is periodic in t, it satisfies it for all times. Moreover, ṽ_ψ > 0 because, by (1.19) and (1.5), it is an integral, with a positive weight, of the positive v_ψ. This ends the proof of point 1).
We have just seen that (1.19) gives a bijection between the periodic, positive solutions of (T S) ψ,per and the positive eigenfunctions of L (ψ,0,−1) ; since the latter are unique up to a multiplicative constant by point 2) of lemma 1.2, we get that the former too are unique up to a multiplicative constant; this proves point 2).
We prove point 4). To prove that the map ψ → A_ψ is continuous, it suffices to note that A_ψ = (1/β) log B_ψ, that the map ψ → B_ψ is continuous by point 2) of lemma 1.3, and that B_ψ is bounded away from zero and infinity by point 1), iii) of the same lemma.
1) (Existence and uniqueness) There is a unique couple H̄_ψ(c) ∈ R and u_ψ ∈ C(T, C^3(T^p)) ∩ C^1(T, C^1(T^p)) which solves (HJ)_{ψ,per} and satisfies (1.21). Proof. By lemma 1.4, there is a unique couple which solves (TS)_{ψ,per} and such that ṽ_ψ(0, ·) is positive and satisfies (1.7). We have seen at the beginning of this section that, for any λ > 0, the couple (−(1/β) log(λṽ_ψ), A_ψ) solves (HJ)_{ψ,per}; vice versa, if u_ψ solves (HJ)_{ψ,per}, then its exponential solves (TS)_{ψ,per}. Thus, if we define u_ψ as above, for the unique λ for which (1.21) holds, we have existence. Now point 2) of lemma 1.4 implies that all positive solutions of (TS)_{ψ,per} are of the form (λṽ_ψ, A_ψ); since we have just seen that there is a bijection between the solutions of (HJ)_{ψ,per} and the positive solutions of (TS)_{ψ,per}, we get uniqueness.

\\\
Let the Lagrangian L_{c,ψ} be as in the introduction, and let u_ψ be as in lemma 1.5. It is well known ([9]) that u_ψ satisfies, for t ≤ 0, the control representation below, where z solves the stochastic differential equation with drift Y, and Y(t, z) varies among the vector fields continuous in t and Lipschitz in z. We have denoted by E_w the expectation with respect to the Wiener measure. From [9], we get that the minimal Y_ψ is given by the feedback c − ∂_x u_ψ. By (1.22), there is C_12 > 0 such that, for any ψ ∈ Den, (1.24) holds. Definition. We group in a set Vect all the vector fields Y: T × T^p → R^p which satisfy (1.24). The distance on Vect is given by the norm of (1.24).
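Written out, the stochastic control representation from [9] recalled above takes the following form; the 1/√β normalization of the noise is our assumption.

```latex
% Value function as an optimal control problem, for t \le 0:
u_\psi(t, x) = \min_Y E_w\Big[\int_t^0
    L_{c,\psi}\big(s, z(s), Y(s, z(s))\big)\, ds
  + u_\psi\big(0, z(0)\big)\Big],
% where z solves the controlled stochastic differential equation
dz(s) = Y\big(s, z(s)\big)\, ds + \tfrac{1}{\sqrt\beta}\, dw(s), \qquad z(t) = x,
% and the minimum is attained at the feedback drift
Y_\psi(s, z) = c - \partial_x u_\psi(s, z).
```
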
We would like to consider the law of the stochastic differential equation above when the initial condition is distributed according to a measure μ. One way to do this is to call ρ_{x_0} the solution of (FP)_{t,Y,δ_{x_0}} and to set ρ(s, x) = ∫_{T^p} ρ_{x_0}(s, x) dμ(x_0). Another one, which yields the same law, is to suppose that the Brownian motion is on a probability space Ω on which there is a random variable M independent of w(s) for s ≥ t and with law μ; we consider the solution z of the stochastic differential equation above with initial condition M and we say that z solves (SDE)_{t,Y,μ}. Let Y ∈ Vect; by [13], there is μ ∈ C(T, M_1(T^p)) which is invariant by the stochastic differential equation; in other words, there is a measure μ_0 such that, if μ_t is the measure induced by a solution z of (SDE)_{0,Y,μ_0} for t ≥ 0, then μ_0 = μ_1. Equivalently, we are saying that there is a weak solution μ of (FP)_{Y,per}. We sketch a proof of this fact: the map which brings the measure μ_0 into μ_1, the solution at time 1 of (SDE)_{0,Y,μ_0}, is continuous and has a fixed point. Proof. We begin by noting the formula below. We have to prove that μ_t = 0 for all t ≥ 0.
We define the operator A_Y as below; the associated final value problem has a unique solution φ, and we set ψ accordingly. Indeed, ψ is C^1 in t and C^2 in x on s < t by theorem 9 of chapter 1 of [10]; it is obviously C^2 on s > t; it is C^2 also in a neighbourhood of s = t, because of the uniqueness of the Cauchy problem for the equation. We use ψ as a test function in (1.26), getting the second equality below.
Since the formula above holds for all γ ∈ C^1_c([0, +∞) × T^p), we get the assertion.

\\\

Let Y ∈ Vect. By [13], there is μ ∈ C(T, M_1(T^p)) which solves (FP)_{Y,per} in the weak sense. Then, the following holds.
2) The measure µ is unique.
3) There is C_13 > 0, independent of Y ∈ Vect, such that the estimates below hold. Proof. Classical results about PDEs (see lemma 2.3 below for more details) imply that the solution with initial condition δ_{x_0} has a density ρ_{x_0} with respect to the Lebesgue measure L^p on T^p. It is standard that, for t > 0, ρ_{x_0}(t, ·) satisfies properties d2) and d3) of the introduction, and that it satisfies the bound below for a constant C_13 > 0 which depends only on the C^1 norm of Y; as a consequence, C_13 is the same for all Y ∈ Vect and x_0 ∈ T^p (again, we refer the reader to lemma 2.3 below).
For t > 0, we define ρ_Y(t, ·) by averaging ρ_{x_0}(t, ·) over the initial condition x_0; point 3) follows from this, (1.27), (1.28) and the fact that norms are convex. One consequence of point 3) is that ρ_Y also satisfies hypothesis d1) of the introduction; since we saw above that it satisfies d2) and d3), we get that ρ_Y ∈ Den. Again from point 3), we get that ρ_Y is a classical solution of (FP)_{Y,per}; since by [13] there is only one of them, we get point 2).
We prove point 4). Let Y_n → Y in C(T × T^p), and let ρ_{Y_n} and ρ_Y solve (FP)_{Y_n,per} and (FP)_{Y,per} respectively. We have just proved that ρ_{Y_n} satisfies point 3) of the statement; thus, we can apply Ascoli-Arzelà and get that, up to subsequences, ρ_{Y_n} → ρ in C(T × T^p). Taking limits in (1.25) we see that ρ is a weak, periodic solution of (FP)_{Y,per}; by the uniqueness of point 2), we get that ρ = ρ_Y. Thus, any subsequence of ρ_{Y_n} has a sub-subsequence converging to ρ_Y in C(T × T^p); by a well-known principle, this implies that ρ_{Y_n} → ρ_Y in C(T × T^p).

\\\
Definition. Let C_13 be as in lemma 1.7. We group in a set Den_reg the elements ρ of Den which belong to Lip(T × T^p) and are such that ||ρ||_{Lip(T×T^p)} ≤ C_13. By point 3) of lemma 1.7, if Y ∈ Vect, then ρ_Y ∈ Den_reg.
There is a continuous map Φ: Den → Den whose fixed points ρ_β induce solutions of (HJ)_{ρ_β,per} − (FP)_{c−∂_x u_{ρ_β},per}. Proof. We define the map Φ by composition. By lemma 1.5 and formula (1.24), we know that there is a map ψ → c − ∂_x u_ψ from Den to Vect. This map is continuous by point 3) of lemma 1.5.
Let ρ_Y be as in point 1) of lemma 1.7; by point 4) of this lemma, the map Y → ρ_Y is continuous from Vect to Den, and has image in Den_reg, as we wanted.

\\\
Proof of theorem 1. We begin by showing that there are couples (u_β, ρ_β) which satisfy (HJ)_{ρ_β,per} − (FP)_{c−∂_x u_β,per}. By lemma 1.8, this follows if we show that Φ has fixed points. But this is true by Schauder's fixed point theorem: indeed, by lemma 1.8, Φ is a continuous map from Den to itself which preserves the compact, convex set Den_reg.
Let us now call S the set of the triples (u, ρ, H) such that ρ ∈ Den is a weak solution of (FP)_{c−∂_x u,per} and (u, H) is a classical solution of (HJ)_{ρ,per}. Let (u_n, ρ_n, H_n) ∈ S be such that (1.29) holds. By lemma 1.1, L_{c,½ρ} is bounded from below independently of ρ; as a consequence, the inf on the right-hand side of (1.29) is finite. Note that, if ρ_n ∈ Den, lemma 1.5 implies that c − ∂_x u_n ∈ Vect; since ρ_n is a fixed point, we get by lemma 1.7 that ρ_n ∈ Den_reg; since Den_reg is compact in Den, we can suppose that, up to subsequences, ρ_n → ρ̄ in Den.
By point 3) of lemma 1.5, this implies the convergence of (u_n, H_n), yielding the assertion.

\\\

§2 The evolution equation

In this section, we shall prove theorems 2 and 3. We begin with some notation.
We recall that the map (μ_1, μ_2) → d_1(μ_1, μ_2) is convex, i.e.

d_1(λμ_1 + (1−λ)μ̃_1, λμ_2 + (1−λ)μ̃_2) ≤ λ d_1(μ_1, μ_2) + (1−λ) d_1(μ̃_1, μ̃_2) for all λ ∈ [0, 1].
Indeed, the dual formulation implies that d_1 is the supremum of a family of linear functions:

d_1(μ_1, μ_2) = sup { ∫_{T^p} f d(μ_1 − μ_2) : f ∈ Lip_1(T^p) }.

Since the functions f in the dual formulation belong to Lip_1(T^p) and T^p has diameter √p, we can as well suppose that ||f||_∞ ≤ (1/2)√p; as a consequence,

d_1(μ_1, μ_2) ≤ (√p/2) ||μ_1 − μ_2||_tot,

where ||·||_tot denotes total variation.
Definition. We are going to denote by the norm symbol the distance on C([−m, 0], M_1(T^p)), which is no norm at all: if R_1, R_2 ∈ C([−m, 0], M_1(T^p)), then we set

||R_1 − R_2|| := max_{t∈[−m,0]} d_1(R_1(t), R_2(t)).

Though this is no norm, it is convex thanks to the convexity of d_1:

||λR_1 + (1−λ)R̃_1 − λR_2 − (1−λ)R̃_2|| ≤ λ ||R_1 − R_2|| + (1−λ) ||R̃_1 − R̃_2||.

Definition. We denote by Den_m(μ) the set of the curves R ∈ C([−m, 0], M_1(T^p)) such that R(−m) = μ.

Lemma 2.1.
Let f ∈ C^3(T^p) and let H_Z(t, q, p) = ½|p|^2 + Z(t, q), with Z ∈ C([−m, 0], C^3(T^p)). 1) Then, there is a unique solution u_Z of (HJ)_Z with final condition f. 2) There is a constant, independent of Z and m, such that the estimates below hold. 3) The map Z → u_Z is continuous. Proof. We know that the twisted Schrödinger equation with potential Z and final condition e^{−βf} ∈ C^3(T^p) has a unique solution v_Z, which can be represented by the Feynman-Kac formula (1.20) with e^{−βf} instead of v_ψ and Z instead of P_ψ. Since e^{−βf} > 0, we get that v_Z > 0 too. We saw in section 1 that u_Z = −(1/β) log v_Z solves (HJ)_Z, and point 1) follows. Points 2) and 3) follow as in section 1.

Corollary 2.2. 1) Let f ∈ C^3(T^p), let μ ∈ M_1(T^p) and let R ∈ Den_m(μ). Then, there is a unique solution u_R of (HJ)_{R,f}. 2) The solution u_R satisfies estimates independent of R ∈ Den_m(μ). 3) The map R → u_R is continuous.

Definition. By point 2) of corollary 2.2, there is C_15 > 0 such that, setting Y = c − ∂_x u_R, we have the estimate below, where d_1 denotes the 1-Wasserstein distance.
Proof. The uniqueness of point 1) comes from lemma 1.6; for the existence, we begin by recalling from PDE theory (see for instance chapter 1 of [10]) that, for x_0 ∈ T^p, (FP)_{−m,Y,δ_{x_0}} has a solution R_{x_0} with density ρ_{x_0}. Again from [10], the function ρ_{x_0} satisfies the first formula of (2.3) for a constant C_16(T) which depends neither on x_0 ∈ T^p nor on the particular element Y ∈ Vect_m. Moreover, as T → −m, we get from [10] that, if g ∈ C(T^p), then ∫_{T^p} g(x) ρ_{x_0}(T, x) dx → g(x_0) uniformly in x_0 ∈ T^p; since d_1 induces the weak∗ topology and T^p is compact, we have that d_1(R_{x_0}(T), δ_{x_0}) ≤ C_17(T), with C_17(T) → 0 as T ց −m. We define ρ_Y by the average ρ_Y(t, x) = ∫_{T^p} ρ_{x_0}(t, x) dμ(x_0); this is formula (2.4). Clearly, ρ_Y is a solution of (FP)_{−m,Y,μ}, and this ends the proof of point 1).
We have seen that ρ x0 satisfies the first formula of (2.3); since norms are convex, (2.4) implies that ρ Y too satisfies this formula. Now ρ x0 satisfies d 1 (R x0 (T ), δ x0 ) ≤ C 17 (T ), and the map is convex; it follows again by (2.4) that ρ Y too satisfies the second formula of (2.3).

\\\
Definition. We define Den^reg_m(μ) as the subset of the elements R ∈ Den_m(μ) which, for t ∈ (−m, 0], have a density ρ with respect to the Lebesgue measure. Moreover, we ask that R and ρ satisfy (2.5).

Lemma 2.4. The set Den^reg_m(μ) is compact in Den_m(μ).

Proof. Let R_n ∈ Den^reg_m(μ) have density ρ_n for n ∈ N. We must show that it has a subsequence converging in Den_m(μ).
Since ρ n satisfies the first formula of (2.5), Ascoli-Arzelà implies that, up to subsequences, ρ n → ρ in C 0 loc ((−m, 0] × T p ); clearly, ρ satisfies the first formula of (2.5). Denoting by L p the Lebesgue measure on T p , we set R(t) = ρ(t)L p and we see that, for any fixed T ∈ (−m, 0], where the first inequality comes from (2.1) and the limit from the fact that ρ n → ρ in C 0 loc ((−m, 0] × T p ). Since R n satisfies the second formula of (2.5), we have that The last two formulas imply that R satisfies the second formula of (2.5).
It remains to prove that R_n → R in C([−m, 0], M_1(T^p)); it suffices to note that, for δ ∈ (0, m), the estimate below holds, where the last inequality comes from the second formula of (2.5) and from (2.1). Since C_17(T) → 0 as T ց −m, we can fix δ > 0 so that the first term on the right is smaller than ε; having thus fixed δ, we take n so large that, by convergence in C^0_loc((−m, 0] × T^p), the second term on the right is smaller than ε, and we are done.

\\\
We only sketch the proof of the next lemma, since it is identical to point 4) of lemma 1.7.

Lemma 2.5.
Given ε > 0, we can find δ > 0 with the following property: if Y, Ȳ ∈ Vect_m and ||Y − Ȳ||_{C([−m,0]×T^p)} ≤ δ, then ||R_Y − R_Ȳ|| ≤ ε. Proof. Otherwise, there are ε > 0, measures μ_n, and fields Ȳ_n, Y_n ∈ Vect_m with ||Ȳ_n − Y_n||_{C([−m,0]×T^p)} → 0 but ||R_{Ȳ_n} − R_{Y_n}|| ≥ ε. Since Ȳ_n, Y_n ∈ Vect_m and ||Ȳ_n − Y_n||_{C([−m,0]×T^p)} → 0, by Ascoli-Arzelà, up to taking subsequences, we can suppose that Ȳ_n, Y_n → Y in C([−m, 0] × T^p); we can also suppose that μ_n → μ. To reach a contradiction with the formula above, it suffices to show that R_{Y_n} and R_{Ȳ_n} both converge to R_Y; since the proof for R_{Ȳ_n} is analogous, we prove convergence for R_{Y_n}.
We note that {R_{Y_n}} is contained in Den^reg_m(μ_n) by lemma 2.3; thus, by lemma 2.4, it has a subsequence converging to a limit R. Since R_{Y_n} is a weak solution of (FP)_{−m,Y_n,μ_n}, we easily get that R is a weak solution of (FP)_{−m,Y,μ}; by lemma 1.6, R = R_Y. In other words, every subsequence of R_{Y_n} has a sub-subsequence converging to R_Y; this implies that R_{Y_n} converges to R_Y, and we are done.

\\\

We apply the Schauder fixed point theorem and get that Φ has a fixed point in Den^reg_m(μ). With the same argument as in the proof of theorem 1, we see that, if R is a fixed point of Φ, then (u_R, R) solves (HJ)_{R,f} − (FP)_{−m,c−∂_x u_R,μ}. This yields existence.
We continue as in the proof of theorem 1. Let us call S the set of the couples (u, R) where u is a classical solution of (HJ) R,f and R ∈ Den m (µ) is a weak solution of (F P ) −m,c−∂xu,µ .
Let us consider a sequence (u_n, R_n) ∈ S such that, denoting by ρ_n the density of R_n, the corresponding actions converge to the infimum. Whatever R_n ∈ Den_m(μ) is, u_n satisfies the estimates of point 2) of corollary 2.2; in particular, c − ∂_x u_n ∈ Vect_m. Since R_n satisfies (FP)_{−m,c−∂_x u_n,μ}, lemma 2.3 implies that R_n ∈ Den^reg_m(μ); by lemma 2.4, up to subsequences we can suppose that R_n → R̄, with R̄ ∈ Den^reg_m(μ). By point 3) of corollary 2.2, we get that u_n → ū in C^1([−m, 0], C^1(T^p)) ∩ C([−m, 0], C^3(T^p)), and that ū solves (HJ)_{R̄,f}. Thus, (ū, R̄) ∈ S; now, the formula above easily implies that (ū, R̄) is minimal in S.

\\\
We turn to the proof of theorem 3; our route will pass through an approximation with a finite number of particles.
Definitions. Let us define the Lagrangian for one particle as L_c. The Lagrangian for n particles, each of mass 1/n, is L^n_c. Let U be as in the statement of theorem 2. For any given z = (z_1, …, z_n) ∈ (T^p)^n, we define U_n(−m, z) by formula (2.6). We note that we are not considering the most general vector field Y on (T^p)^n. On the contrary, we assign to each particle x_i ∈ T^p a control Y_i which depends only on x_i, and not on the positions of the other particles; these, however, interact with x_i via the potential W. We have chosen this particular problem because we want U_n(−m, z) to converge, as n → +∞, to Λ^m_c U; we recall that, in the definition of Λ^m_c, there is a control Y which depends on the single particle in T^p.
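A plausible explicit form of these Lagrangians and of (2.6) is the following; the normalizations, in particular the 1/(2n²) in front of the interaction and the terminal cost, are our reconstruction and not a quotation from the text.

```latex
% One-particle Lagrangian (up to an additive constant in c):
L_c(t, x, v) = \tfrac12 |v - c|^2 - V(t, x),
% Lagrangian for n particles of mass 1/n, with the symmetrized interaction:
L^n_c(t, x, v) = \frac1n \sum_{i=1}^n
  \Big[\tfrac12 |v_i - c|^2 - V(t, x_i)\Big]
  - \frac{1}{2n^2} \sum_{i \ne j} W(x_i - x_j),
% Value of the n-particle control problem started at z = (z_1, \dots, z_n):
U_n(-m, z) = \inf_{Y_1, \dots, Y_n}
  E\Big[\int_{-m}^0 L^n_c\big(s, X(-m, s, z), Y(s, X(-m, s, z))\big)\, ds
    + U\big(R(-m, 0, z)\big)\Big],
```

where each X_i solves (SDE)_{−m,Y_i,δ_{z_i}} and R(−m, 0, z) denotes the averaged law of the particles at time 0, as in (2.7).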

Lemma 2.6.
Let us suppose that U is as in the statement of theorem 2 and let U_n(−m, z) be defined as in (2.6). Then, for any fixed n ∈ N, the infimum in (2.6) is a minimum.
Proof. Let {Y n,k } k≥1 be a minimizing sequence. We are going to show that we can build another minimizing sequence, say {Ỹ n,k } k≥1 , which is Lipschitz in (t, x) uniformly in k. Once we know this, the lemma follows by Ascoli-Arzelà.
For the vector field Y^{n,k}, let us define ρ^{n,k}_i and ρ^{n,k} as in (2.7); we set the one-particle Lagrangian L^n_{c,Y^{n,k},i} accordingly. Note that, in contrast with L^n_c, a factor 1/2 is missing in the interaction sum. We know from lemma 1.1 that the potential in L^n_{c,Y^{n,k},i} satisfies a uniform C^3 estimate. By [9], for (t, x) ∈ [−m, 0] × T^p, there is Ỹ^{n,k}_i at which the minimum below is attained, with X(t, s, x) which solves (SDE)_{t,Y,δ_x}; the minimum is taken over all the Lipschitz vector fields Y. Again by [9], Ỹ^{n,k}_i = c − ∂_x u^{n,k}_i(t, x), where u^{n,k}_i solves the Hamilton-Jacobi equation for the Lagrangian L^n_{c,Y^{n,k},i} and final condition f. By lemma 2.1, the norm of u^{n,k}_i is bounded in terms of the C^3 norm of the potential of L^n_{c,Y^{n,k},i}. By lemma 1.1, the latter depends neither on n nor on k; thus, Ỹ^{n,k}_i belongs to Vect_m; in particular, it is Lipschitz uniformly in n and k.
In the following, whenever we have a drift, say Y B i , we shall denote by X B i (t, s, x i ) the solution of (SDE) t,Y B i ,δx i ; we shall set X B = (X B 1 , . . . , X B n ) and z = (z 1 , . . . , z n ). We are going to isolate the first particle and show that the mean action decreases if we replace Y n,k 1 with the smoother Ỹ n,k 1 defined above. Since the interaction potential is even and satisfies W (0) = 0, we get the first equality below; since the Brownian motions (w 1 , . . . , w n ) are independent, we get the second one. L c (s, X n,k j (−m, s, z j ), Y n,k j (s, X n,k j (−m, s, z j )))ds+ s, z 1 )))ds + f (X 1 (−m, 0, z 1 )) .
If we consider (Ỹ n,k 1 , Y n,k 2 , . . . , Y n,k n ) instead of (Y n,k 1 , Y n,k 2 , . . . , Y n,k n ), we see that the terms a1) and a2) in the formula above remain the same, while, by our choice of Ỹ n,k 1 , a3) gets smaller. After applying this procedure to each coordinate, we get a sequence Ỹ n,k = (Ỹ n,k 1 , . . . , Ỹ n,k n ) which satisfies the following two properties.
In particular, Ỹ n,k i ∈ V ect m for all i, n and k; as a consequence, we can apply point 2) of lemma 2.3, getting that {ρ n,k i } n,k ⊂ Den reg m (δ zi ) for all i. By lemma 2.4, we find a subsequence, which we denote by the same index, such that (ρ n,k 1 L p , . . . , ρ n,k n L p ) → (ρ n 1 L p , . . . , ρ n n L p ) in Den m (δ z1 ) × . . . × Den m (δ zn ).
Thus, for each i, ρ n,k i L p → ρ n i L p . By point 3) of lemma 2.1, this implies that each Ỹ n i is minimal for L n c,Ỹ ,i . By the last formula and lemma 2.5, we get that ρ n i solves (F P ) −m,Ỹ n i ,δz i . The last three formulas imply the convergence of L n c (s, X n,k (−m, s, z), Ỹ n,k (s, X n,k (−m, s, z)))ds + U (R n,k (−m, 0, z)) .
Since {(Ỹ n,k 1 , . . . , Ỹ n,k n )} k≥1 is a minimizing sequence, we get that (Ỹ n 1 , . . . , Ỹ n n ) is minimal. \\\ From the proof of the last lemma, we extract the following corollary: it says that the minimum in (2.6) is a Nash equilibrium ([4]). Note one fact about the value function u n i in the corollary below: for simplicity, we let i = 1. Then the function u n 1 depends not only on (x 2 , . . . , x n ), but on x 1 too: namely, if x 1 moves, the drifts (Y 2 , . . . , Y n ) adjust, and the Lagrangian L c,Y,1 changes. Were the notation not too clumsy, we would have written u n,(x1,...,xn) 1 and said that c − ∂ x u n,(x1,...,xn) 1 (x) is the best drift for particle x 1 .
Corollary 2.7.
Let Ȳ n = (Ȳ n 1 , . . . , Ȳ n n ) be minimal in (2.6). Then, for each i we have that Ȳ n i (t, x) = c − ∂ x u n i (t, x).
Proof. If for some i we had Ȳ n i ≠ c − ∂ x u n i , then, isolating particle i as in the last lemma, we could see that the vector field obtained replacing Ȳ n i with c − ∂ x u n i has a lower Lagrangian action, contradicting the minimality of Ȳ n .

Lemma 2.8.
Let µ ∈ M 1 (T p ) and let us suppose that (z 1 , z 2 , . . .) satisfies (2.10). Let Y n = (Y n 1 , . . . , Y n n ) be a drift minimal in (2.6); by corollary 2.7, Y n i = c − ∂ x u n i for the value function u n i defined in (2.9). Let ρ n be defined as in (2.7). Then, there is (u, ρ) which satisfies (HJ) ρ,f − (F P ) −m,c−∂xu,µ , and a subsequence {n k } such that (2.11) holds. Moreover, the function U n k (−m, z 1 , . . . , z n k ) defined in (2.6) converges to the function U (−m, µ) defined by (2.12).
Proof. Since Y n i = c − ∂ x u n i , the third formula of (2.11) follows from the second one; we prove the first two.
Step 1. We prove the convergence of the densities.
For i, j ∈ {1, . . . , n}, we consider the densities ρ̂ n i , defined in terms of the densities ρ n l of formula (2.7). Let R n i = ρ n i L p , R̂ n i = ρ̂ n i L p and R n = ρ n L p . Formula (2.1) implies the first inequality in (2.14), while the second one follows from the fact that ρ n j and ρ n i are probability densities. By lemma 1.1, we get the second inequality below.
As a result, the value function u n i satisfies point 2) of corollary 2.2; thus, Y n i ∈ V ect m and we can apply lemma 2.3, getting that R n i belongs to Den reg m . Since this set is convex, (2.13) implies that R̂ n i ∈ Den reg m ; by lemma 2.4, Den reg m is a compact set; thus, fixing i = 1, there is n k → +∞ such that R̂ n k 1 converges to R ∈ Den reg m ; in particular, R and its density ρ satisfy (2.5). This gives convergence only for R̂ n k 1 ; however, from (2.14) we get that all the R̂ n k i converge to the same limit R. By the same argument as in (2.14), we get (2.15); thus, (2.15) implies the first formula of (2.11).
Step 2. We prove the convergence of the solutions of Hamilton-Jacobi. We define the potential W n k i (t, x). Now, u n k i is the value function of L c,Y n k ,i , whose potential is V (t, x) + W n k i (t, x); by the last formula, we can apply point 3) of lemma 2.1 and get that u n k i satisfies the limit in the second formula of (2.11), with u a solution of (HJ) V +W or, which is the same, of (HJ) ρ,f .
Step 3. From now on, for ease of notation, we drop the n k of the subsequence. We recall that each ρ n i solves (F P ) −m,Y n i ,δz i ; by the third formula of (2.11) and lemma 2.5, we get that, if ρ̃ i is a solution of (F P ) −m,c−∂xu,δz i , then 2) implies the inequality below, and the last formula implies the limit.
This means that ρ n L p and (1/n) Σ n i=1 ρ̃ i L p have the same limit; we saw in step 1 that ρ n L p converges to ρL p ; thus, to prove that ρL p solves (F P ) −m,c−∂xu,µ , it suffices to prove that the limit of (1/n) Σ n i=1 ρ̃ i L p solves the same equation. This follows easily, since by definition (1/n) Σ n i=1 ρ̃ i L p solves the Fokker-Planck equation with drift c − ∂ x u and initial condition (1/n)(δ z1 + . . . + δ zn ), and (2.10) holds.
Step 4. We prove the last assertion of the lemma; the equality below comes from (2.6) and the fact that Y n is minimal. We recall that, by corollary 2.7, Y n i = c − ∂ x u n i . Now ρ n i L p is the push-forward of the Wiener measure by X n i , and the Brownian motions w i are independent. This implies the equality below. Using (2.11), we get immediately that U n (−m, (z 1 , . . . , z n )) → U (−m, µ).

\\\
Proof of theorem 3. Let the measure µ, the couple (u, ρ) and the function U (−m, µ) be as in the last lemma; let the operator Λ m c be as in the introduction, and let Y = c − ∂ x u. We are going to prove that (2.18) holds. Note that, in principle, U (−m, µ) could depend on the subsequence {n k } k≥1 chosen in lemma 2.8; the formula above says that this is not the case. Moreover, it says that any (u, ρ) arising in lemma 2.8 as the limit of a subsequence minimizes the last expression of (2.18).
The second equality of (2.18) follows from lemma 2.8: it is just another way of writing (2.12). Again by lemma 2.8, (u, ρ) ∈ S, and thus, by the definition of (Λ m c U )(µ), we get (2.19). To prove (2.20), we consider the n-particle value function U n (−m, z 1 , . . . , z n ). Let Ỹ be a Lipschitz vector field. We set z = (z 1 , . . . , z n ) and by (2.6) we get the inequality below; the first equality is the definition of L n c , the second one comes from the fact that the Brownian motions w 1 , . . . , w n are independent.
We take limits in the formula above, using the last assertion of lemma 2.8 for the left hand side and (2.21) for the right hand side. Since Ỹ is an arbitrary Lipschitz vector field, we get that (2.20) holds.
This yields the inequality opposite to (2.19). In other words, we have proven the first equality of (2.18); the second one, as we have seen, is lemma 2.8. As for the third one, it suffices to prove the inequality opposite to (2.20), which we do presently.
Let (z 1 , . . . , z n ) satisfy (2.10), let Y n = (Y n 1 , . . . , Y n n ) be minimal in (2.6), and let us set Ỹ n = Y n 1 . Let ρ n satisfy (F P ) −m,Ỹ n ,µ . By (2.11) and (2.17), we obtain that there is γ n → 0 such that the inequality below holds. Since the limit of the term on the right is U (−m, µ) by lemma 2.8, we get the inequality opposite to (2.20).
Since Y and Y 2 are bounded uniformly in δ̄, we get that |a| ≤ M if τ ∈ [−n, 0], for a constant M independent of δ̄; since Y coincides with the Lipschitz Y 2 on [−n + δ̄, 0], we get that, for τ ≥ −n + δ̄, the estimate below holds for a constant K independent of δ̄. From the last two formulas we get the inequality below; using the Gronwall lemma and (2.27), we get that there is a function γ(δ̄), tending to zero as δ̄ tends to zero, such that |X(−n, s, ρ Y1 (−n))(w) − X̃(−n, s, ρ Y1 (−n))(w)| ≤ γ(δ̄) for all realizations w of the Brownian motion. From this, (2.24) follows easily.
On the other hand, it is easy to see that the formula above implies that, as δ̄ → 0, ρ Y (0)L p converges weak∗ to ρ Y2 (0)L p . Since U is Lipschitz for the 1-Wasserstein distance, (2.23) follows.
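The stability argument above rests on the Gronwall lemma: two flows whose K-Lipschitz drifts differ by at most δ̄ stay within δ̄ (e^{Kt} − 1)/K of each other. A toy deterministic check of this bound; the drift sin x and all constants are illustrative, not taken from the paper.

```python
import numpy as np

# Toy check of the Gronwall estimate: flows of x' = f(x) and x' = f(x) + delta
# started at the same point stay within delta*(e^{K t} - 1)/K for K-Lipschitz f.
# The drift sin(x) and the constants are illustrative.

K = 1.0        # Lipschitz constant of the drift
delta = 1e-3   # uniform size of the perturbation
T_end = 1.0

def f(x):
    return np.sin(x)            # K-Lipschitz

def f_pert(x):
    return np.sin(x) + delta    # |f_pert - f| <= delta everywhere

def flow(g, x0, steps=20000):
    """Euler scheme for x' = g(x) on [0, T_end]."""
    dt = T_end / steps
    x = x0
    for _ in range(steps):
        x = x + dt * g(x)
    return x

x0 = 0.3
gap = abs(flow(f_pert, x0) - flow(f, x0))
bound = delta * (np.exp(K * T_end) - 1.0) / K   # Gronwall bound
```

Here the observed gap stays below the Gronwall bound, the quantitative mechanism behind the function γ(δ̄) above.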
1) The map Ψ m c defined in the introduction has the semigroup property, i.e., for n, m ≥ 0 and U ∈ C(M 1 (T p ), R), Ψ n+m c U = Ψ n c (Ψ m c U ).
Proof. Properties 2) and 3) follow in a standard way from the definition of Ψ m c ; we prove 1). Let µ ∈ M 1 ; by the definition of Ψ n+m c U as an infimum, for any ǫ > 0 we can find a Lipschitz vector field Y such that 1 2 ρY (s, X(−(n + m), s, µ), Y (s, X(−(n + m), s, µ)))ds + U (ρ Y (0)L p ) − ǫ where X solves (SDE) −(n+m),Y,µ and ρ Y is, as usual, the solution of the Fokker-Planck equation with initial condition µ. By the Chapman-Kolmogorov formula, the formula above implies the first inequality below.
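The semigroup property in 1) is, at bottom, dynamic programming. As a minimal finite-state sketch (not the measure-valued setting of the paper), take a one-step min-plus operator T on functions over a finite grid: composition trivially gives T^{n+m} U = T^n (T^m U), the discrete analogue of Ψ n+m c U = Ψ n c (Ψ m c U ). The grid size and cost below are illustrative.

```python
import numpy as np

# Discrete sketch of the semigroup property: a one-step min-plus operator T
# (a finite-state Hopf-Lax step) satisfies T^{n+m} U = T^n (T^m U) exactly,
# by dynamic programming. Grid size and cost are illustrative.

N = 16
grid = np.arange(N)
d = np.minimum((grid[:, None] - grid[None, :]) % N,
               (grid[None, :] - grid[:, None]) % N)        # torus distance
c = 0.5 * d.astype(float) ** 2                             # one-step cost

def T(U):
    """(T U)(x) = min_y [c(x, y) + U(y)]: one step of value iteration."""
    return (c + U[None, :]).min(axis=1)

def T_pow(U, k):
    for _ in range(k):
        U = T(U)
    return U

U0 = np.cos(2 * np.pi * grid / N)
lhs = T_pow(U0, 5)             # five steps at once
rhs = T_pow(T_pow(U0, 3), 2)   # three steps, then two
```

The two iterates agree exactly, mirroring the Chapman-Kolmogorov splitting used in the proof.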

Fixed points
As in [8] and in [15], the following proposition is essential in proving theorem 4.
Proposition 3.1.
Let U be linear as in theorem 2. Then, there is L > 0, independent of n, such that Λ n c U = Ψ n c U is L-Lipschitz for the Wasserstein distance d 1 .
To prove this proposition, we shall need two lemmas.
Proof. We have seen in section 1 that, if u is a solution of (HJ) R,f , v = e −βu and a ∈ R, then e −βa v = e −β(u+a) is a solution of (T S) R,e −β(f +a) with A = 0. Let a k ∈ R be such that e −βa k v(−k, ·) = e −β(u+a k ) satisfies (1.7) for k = 0, 1, 2, . . .. By the Feynman-Kac formula, (3.1) holds for k ≥ 0. Since e −βa k v(−k, ·) satisfies (1.7), formulas (1.10) and (1.11) hold and we get the estimate below for k ≥ 0. We consider (3.1) with e −βa k v(−k − 1, x) on the right hand side and differentiate under the integral sign; proceeding as in lemma 1.3, and using the last formula, we get the next estimate for k ≥ 0. As in lemma 1.4, this implies that there is C 3 > 0, independent of k ≥ 0 (it depends only on C 1 and C 2 ), such that, for k ≥ 0, if t ∈ [−(k + 3), −(k + 2)], then 1/C 3 ≤ e −βa k v(t, ·) ≤ C 3 and the gradient bound below holds. By our definition of v, and from the two formulas above, we get the assertion. It remains to bound ∂ x u(t, x) when t ∈ [−2, 0]; since f ∈ C 3 (T p ), this follows by differentiation under the integral sign in (1.20), and we are done.
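The substitution v = e −βu used above is the classical Hopf-Cole transform, which linearizes the viscous Hamilton-Jacobi equation; a schematic derivation follows, with illustrative sign conventions that may differ from those of section 1.

```latex
% Hopf-Cole transform: a schematic computation.
% Suppose u solves the viscous Hamilton-Jacobi equation
\[
\partial_t u + \tfrac12\,|\partial_x u|^2 \;=\; \tfrac{1}{2\beta}\,\Delta u \;-\; V(t,x),
\qquad v := e^{-\beta u}.
\]
% Then
\[
\partial_x v = -\beta v\,\partial_x u,
\qquad
\Delta v = \beta v\bigl(\beta\,|\partial_x u|^2 - \Delta u\bigr),
\]
% and substituting,
\[
\partial_t v \;=\; -\beta v\,\partial_t u
\;=\; \tfrac{1}{2\beta}\,\Delta v \;+\; \beta V\,v ,
\]
% a linear, heat-type equation. Replacing u by u + a multiplies v by the
% constant e^{-\beta a} and leaves the equation unchanged, which is why the
% normalizing constants a_k in the proof are harmless.
```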

Lemma 3.3.
Let U be as in theorem 2 and let U n (−m, z) be defined as in (2.6). Then, there is a constant C > 0 such that, for all positive integers n and m, the inequality below holds.
Proof. It suffices to prove that, for i = 1, . . . , n, the function z i → U n (−m, z 1 , . . . , z i , . . . , z n ) is C/n-Lipschitz and that the constant C depends neither on m ∈ N nor on (z 1 , . . . , z i−1 , z i+1 , . . . , z n ) ∈ (T p ) n−1 . We shall prove this for i = 1; from this the general case follows, since U n is a symmetric function of (z 1 , . . . , z n ).
It is standard ([5]) that d 1 ( (1/n) Σ n i=1 δ z i , (1/n) Σ n i=1 δ w i ) = min σ (1/n) Σ n i=1 |z i − w σ(i) |, where the minimum is taken over all the permutations σ of {1, . . . , n}. In terms of transport, when we are connecting two n-tuples of deltas, there is not just a minimal transfer plan, but a minimal transfer map.
But this is an immediate consequence of lemma 3.3.

\\\
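The permutation formula for d 1 between empirical measures quoted above can be checked directly for small n. A brute-force sketch on the one-dimensional torus; the points are illustrative, and for large n one would solve an assignment problem instead of enumerating permutations.

```python
import itertools

# The 1-Wasserstein distance between two empirical measures (1/n) sum_i d_{z_i}
# and (1/n) sum_i d_{w_i} is attained on a permutation, i.e. on a transfer
# *map*. Brute-force sketch on the one-dimensional torus (small n only);
# the points below are illustrative.

def torus_dist(a, b):
    """Distance on the torus R/Z."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def d1_empirical(z, w):
    """min over permutations sigma of (1/n) sum_i dist(z_i, w_{sigma(i)})."""
    n = len(z)
    return min(
        sum(torus_dist(z[i], w[s[i]]) for i in range(n)) / n
        for s in itertools.permutations(range(n))
    )

z = [0.1, 0.5, 0.9]
w = [0.12, 0.48, 0.95]
dist = d1_empirical(z, w)  # here the identity permutation is optimal
```

This is exactly the quantity bounded by lemma 3.3: moving one z i by a small amount moves the empirical measure by 1/n times that amount in d 1 .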
When U is linear, we define Λ c,λ U = Λ c U + λ; thus, in the case of a linear U , we have that Λ c,λ U = Ψ c,λ U for the operator Ψ c,λ defined in the introduction. In the next lemma, we stick to the Λ c,λ notation.

Lemma 3.4.
Let the operator Λ c,λ be defined as in the introduction and let U = 0. Then, there is a unique λ ∈ R such that Û (µ) := lim inf n→+∞ (Λ n c,λ 0)(µ) is finite for all µ ∈ M 1 . Moreover, Û is L-Lipschitz for the constant L of proposition 3.1.
Proof. Clearly, there is at most one λ ∈ R for which the lim inf above is finite; let us prove that it exists.
For n large enough, the last formula contradicts the fact that Û 2 is a fixed point.
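Lemma 3.4 has a transparent finite-state analogue, sketched here under illustrative choices: for a min-plus value-iteration operator on a finite grid, there is exactly one additive constant λ (the negative of the minimal average cost over cycles, here −min V) for which the shifted iterates stay bounded, and they stabilize to a fixed point of U → T(U) + λ. The grid, cost and potential below are not the paper's objects.

```python
import numpy as np

# Finite-state analogue of the lemma: a min-plus value-iteration operator T on
# a grid; exactly one additive constant lam keeps the iterates T^n 0 + n*lam
# bounded, and the shifted iterates stabilize to a fixed point of
# U -> T(U) + lam. Grid, cost and potential are illustrative.

N = 16
grid = np.arange(N)
d = np.minimum((grid[:, None] - grid[None, :]) % N,
               (grid[None, :] - grid[:, None]) % N) / N   # torus distance
V = np.cos(2 * np.pi * grid / N)                          # illustrative potential
c = 0.5 * d**2 + V[None, :]                               # one-step cost c(x, y)

def T(U):
    """(T U)(x) = min_y [c(x, y) + U(y)]."""
    return (c + U[None, :]).min(axis=1)

# the critical constant: minus the minimal average cost over cycles; here the
# zero-length self-loop at the minimizer of V is the best cycle, so lam = -min V
lam = -V.min()

U = np.zeros(N)
for _ in range(200):
    U = T(U) + lam      # the shifted iteration stays bounded and stabilizes
```

With any other constant λ' ≠ λ the iterates drift linearly by (λ' − λ) per step, which is the finite-state version of the uniqueness statement in the lemma.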