Stationary Solutions of Damped Stochastic 2-dimensional Euler's Equation

Existence of stationary point vortex solutions to the damped and stochastically driven Euler equation on the two-dimensional torus is proved by taking limits of solutions with finitely many vortices. A central limit scaling is used to show, in a similar manner, the existence of stationary solutions with white noise marginals.


Introduction
The present work concerns a particular class of solutions to the 2-dimensional incompressible Euler equation with frictional damping on the torus T² = R²/Z²,
∂_t u_t + u_t · ∇u_t + ∇p_t = −θu_t + F_t, ∇ · u_t = 0, (1)
where u_t is the velocity vector field, p_t is the (scalar) pressure, θ > 0 and F_t is a stochastic forcing term. Our motivation stems from works on 2-dimensional turbulence: our model can be regarded as an inviscid version of the one considered in [6], which aimed to describe energy cascade phenomena in stationary, energy-dissipated, 2-dimensional turbulence. Inspired by the renewed theoretical interest in point vortex methods for the 2-dimensional Euler equation stemming from [9], we will study solutions to (1) obtained as systems of interacting point vortices, and Gaussian limits thereof. Even if our models are not able to capture turbulence phenomena such as the celebrated energy spectrum decay law of the inverse cascade predicted by Kolmogorov, we believe that the mechanism of creation and damping of point vortices we describe might contribute to a description of the experimental behaviour of models such as the ones in [6]. Moreover, the mathematical treatment of measure- or distribution-valued solutions to Euler's equation is not a trivial task, due to the need for quite weak notions of solution in the presence of a singular nonlinearity. From the mathematical viewpoint, equation (1) has been widely investigated, especially as the inviscid limit of the driven and damped Navier-Stokes equation; see for instance [5], [8] and references therein. Aside from the fact that we deal directly with the inviscid case, a substantial difference between this work and those is the space regularity of solutions.
Indeed, existence and uniqueness for the 2-dimensional Euler equations are well-established facts in spaces of suitably regular functions, while the interesting case of solutions taking values in signed measures or distributions remains quite open, especially regarding uniqueness: we refer to [15] for a general overview of the theory. The results of [9], which we review in subsection 2.2, established an important link between the theory of point vortex models and Gaussian invariant measures of Euler's equation. We refer to [15,16] and to [3] for reviews on, respectively, the former and the latter. We also mention that limits of Gibbsian point vortex ensembles (originally proposed by Onsager, [19]) converging to Gaussian invariant measures were already considered, for instance, in [4] (the similarities between the two having already been pointed out by Kraichnan [14]). However, Flandoli [9] was the first, as far as we know, to prove convergence of the system evolving in time, as opposed to mere convergence of the invariant measures. His approach was based on the weak vorticity formulation of [23] (see also its references), which had already been considered for the point vortex model in [24], and which turned out to be suitable for treating solutions with white noise marginals. Our results generalise those of [9] by combining a stochastic forcing term (already considered in the vortex setting in [10], and in function spaces in [7]) and damping. Stationary solutions are regarded with particular interest in the theory, and the invariant distributions we consider are also invariant for Euler's equation with no damping or forcing (see Remark 7): to our knowledge, the Poissonian invariant distributions with infinitely many vortices we introduce below are new, while their Gaussian counterpart (the enstrophy measure, more generally known as white noise) has been an object of interest since the works of Hopf, [12].
For a more general discussion of invariant measures of Lévy type we refer to [1], in which most of the basic ideas we rely upon are finely presented, although the arguments there proceed from the point of view of Dirichlet form theory.
We will treat our model equation in vorticity form: setting ω = ∇⊥ · u, where ∇⊥ = (∂₂, −∂₁), equation (1) becomes
∂_t ω_t + u_t · ∇ω_t = −θω_t + Π̇_t, (2)
with Π_t a noise to be specified. The idea is to exhibit solutions by adapting the point vortices model for Euler's equation which, in the absence of forcing and damping, we recall to consist of the measure-valued solutions
ω_t = Σ_{i=1}^N ξ_i δ_{x_{i,t}},
where x_i ∈ T² is the position and ξ_i ∈ R the intensity of the i-th vortex (see section 2 for the appropriate notion of solution). The inclusion of the damping term in our model will amount to an exponential quenching of the vortex intensities, with rate θ. Because of the dissipation due to friction (which physically results from the 3-dimensional environment in which the 2-dimensional flow is embedded), a forcing term is necessary in order for the model to exhibit stationary behaviour. We will choose Π_t to be a Poisson point process, so as to add new vortices and rekindle the system. The linear part of (2), which is a Poissonian Ornstein-Uhlenbeck equation, suggests that stationary distributions are made of countably many vortices with exponentially decreasing intensities, but in fact dealing with solutions of (2) having such marginals seems to be as hard as the white noise marginals case. The latter will also be addressed, taking as in [9] a "central limit" scaling of the vortices model, resulting in solutions of (2) with space white noise marginals and space-time white noise as forcing term.
Our main result is the existence of solutions to (2) in these two cases: infinite vortices marginals with Poisson point process forcing, and white noise marginals with space-time white noise forcing. The latter draws us closer to the models in [6], where the forcing term was Gaussian with delta time-correlations. We apply a compactness method: our approximating processes are not approximated solutions (as in Faedo-Galerkin methods), but true point vortices solutions with finitely many vortices, for which we are able to prove well-posedness thanks to the techniques of [16].
We regard the following results as a first step in the analysis of equation (2) by point vortex methods, the natural continuation being the study of driving noises with more complicated space correlations, such as the ones used in the numerical simulations reviewed in [6].

Preliminaries and Main Result
Consider the 2-dimensional torus T² = R²/Z²; for α ∈ R we denote by H^α = H^α(T²) = W^{α,2}(T²) the L² = L²(T²)-based Sobolev spaces, which enjoy the compact embeddings H^α ֒→ H^β whenever β < α, the injections moreover being Hilbert-Schmidt if α > β + 1. Sobolev spaces are conveniently represented in terms of Fourier series: let e_k(x) = e^{2πik·x}, x ∈ T², k ∈ Z², be the usual Fourier orthonormal basis; then f = Σ_{k∈Z²} f̂_k e_k with f̂_k = conj(f̂_{−k}) ∈ C (we only consider real spaces), and ‖f‖²_{H^α} = Σ_{k∈Z²} (1+|k|²)^α |f̂_k|². We denote by M = M(T²) the space of finite signed measures on T²: recall that measures have Sobolev regularity M(T²) ⊂ H^{−1−δ} for all δ > 0 (for instance, because their Fourier coefficients are uniformly bounded by their total mass). The brackets ⟨·,·⟩ will stand for L² duality couplings, such as the one between measures and continuous functions, or between Sobolev spaces of opposite orders, unless we specify otherwise. The capital letter C will denote (possibly different) constants, and subscripts will point out occasional dependences of C on other parameters. Lastly, we write X ∼ Y when the random variables X, Y have the same law.

Random Variables
In order to lighten notation, in this paragraph we denote random variables (or stochastic processes) and their laws by the same symbols. Let us also fix H := H^{−1−δ}, with δ > 0, the Sobolev space in which we embed our random measures and distributions. We will deal with stochastic objects of Gaussian and Poissonian nature: the former are likely the more familiar, so we begin our review with them. We refer to [20,22] for a complete discussion of the underlying classical theory. Let W_t be the cylindrical Wiener process on L²(T²), that is, ⟨W_t, f⟩ is a real-valued centred Gaussian process indexed by t ∈ [0,∞) and f ∈ L²(T²) with covariance
E[⟨W_t, f⟩⟨W_s, g⟩] = (t ∧ s)⟨f, g⟩_{L²}
for any t, s ∈ [0,∞) and f, g ∈ L²(T²). Since the embedding L²(T²) ֒→ H^{−1−δ}(T²) is Hilbert-Schmidt, W_t defines an H^{−1−δ}-valued Wiener process. The law η of W_1 is called the white noise on T², and it can thus be regarded as a Gaussian probability measure on H^{−1−δ}. Analogously, the law ζ of the (distributional) time derivative of W can be identified both with a centred Gaussian process indexed by L²([0,∞)×T²) with identity covariance operator, and with a centred Gaussian probability measure on H^{−3/2−δ}([0,∞)×T²); ζ is called the space-time white noise on T². The couplings of η against L² functions are called Ito-Wiener integrals: we will see that double Ito-Wiener integrals play a crucial role in this context, so let us recall their definition (for which we refer to [13]). The double stochastic integral with respect to η is the isometry I₂ : L²_sym(T^{2×2}) → L²(η) (which is not onto, the image being the second Wiener chaos) defined by extending by density the following expression on symmetric products:
I₂(f ⊗ f) = ⟨f, η⟩² − ‖f‖²_{L²}, f ∈ L²(T²). (7)
Equivalently, it is the extension by density of the map
Σ_{i₁,i₂=1,...,n; i₁≠i₂} a_{i₁,i₂} 1_{A_{i₁}×A_{i₂}} ↦ Σ_{i₁,i₂=1,...,n; i₁≠i₂} a_{i₁,i₂} η(A_{i₁})η(A_{i₂}),
where n ≥ 0, A₁, ..., A_n ⊂ T² are disjoint Borel sets and a_{i,j} ∈ R.
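The regularity statement implicit here, namely that η belongs to H^{−1−δ} but not to L², can be observed numerically: truncating the Fourier series of white noise, the H^{−1−δ} norm stabilises as the cutoff grows, while the L² norm diverges with the number of modes. A minimal sketch in Python (the cutoffs, the value of δ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.5  # we look at the H^{-1-delta} norm

def mean_norms(cutoff, n_samples=200):
    """Average squared H^{-1-delta} and L^2 norms of white noise truncated
    to Fourier modes |k_1|, |k_2| <= cutoff (iid N(0,1) coefficients)."""
    k1, k2 = np.meshgrid(np.arange(-cutoff, cutoff + 1),
                         np.arange(-cutoff, cutoff + 1))
    weights = (1.0 + k1**2 + k2**2) ** (-1.0 - delta)
    coeffs = rng.standard_normal((n_samples,) + k1.shape)
    sq = coeffs**2
    return (float(np.mean(np.sum(weights * sq, axis=(1, 2)))),
            float(np.mean(np.sum(sq, axis=(1, 2)))))

h_small, l2_small = mean_norms(30)
h_large, l2_large = mean_norms(60)
```

Doubling the cutoff leaves the averaged H^{−1−δ} norm essentially unchanged (the weights (1+|k|²)^{−1−δ} are summable in Z² for δ > 0), while the averaged L² norm roughly quadruples with the number of retained modes.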
Let us compare this with another notion of double integral: considering η as a random distribution in H^{−1−δ}, the tensor product η ⊗ η is defined as a distribution in H^{−2−2δ}(T^{2×2}), so for h ∈ H^{2+δ}(T^{2×2}) we can consider the coupling ⟨h, η ⊗ η⟩. For any h ∈ H^{2+δ}_sym(T^{2×2}) it holds (as an equality between L²(η) variables)
⟨h, η ⊗ η⟩ = I₂(h) + ∫_{T²} h(x,x) dx
(since this is true on the dense subset of symmetric products), where we remark that ∫_{T²} h(x,x) dx makes sense since h has a continuous version. We thus see that Ito-Wiener integration corresponds to "subtracting the diagonal contribution" from the tensor product: in order to make explicit the dependence of double Ito-Wiener integrals on η, and motivated by the above discussion, in the following we will use the notation :⟨h, η ⊗ η⟩: := I₂(h).
Besides these Gaussian distributions, we will be interested in a number of Poissonian variables, which we now define in the framework of [20]. For λ > 0, let π_λ be the Poisson random measure on [0,∞) × H^{−1−δ} with intensity measure ν given by the product of the measure λdt on [0,∞) and the law of σδ_x, where the sign σ = ±1 and the position x ∈ T² are chosen uniformly at random. In other terms, one can define the compound Poisson process on H^{−1−δ} (in fact on M)
Σ^λ_t = Σ_{i: t_i ≤ t} σ_i δ_{x_i},
starting from the jump times t_i of a Poisson process of parameter λ, a sequence σ_i of i.i.d. ±1-valued Bernoulli variables of parameter 1/2 and a sequence x_i of i.i.d. uniform variables on T². Notice that, since its intensity measure has zero mean, π_λ is a compensated Poisson measure, or equivalently Σ^λ_t is an H^{−1−δ}-valued martingale. Moreover, Σ^λ_t has the same covariance as the cylindrical Wiener process W_t (up to the factor λ), and also the same quadratic variation. We will need a symbol for another Poissonian integral, namely
Ξ^{λ,θ}_M = Σ_{i: t_i ≤ M} σ_i e^{−θt_i} δ_{x_i}, (12)
where M, θ > 0. Thanks to the negative exponential, the above sums converge also when M = ∞, defining a random measure: we will call it Ξ^{λ,θ} = Ξ^{λ,θ}_∞. Remark 1. By (12), a sample of the random measure Ξ^{λ,θ}_M is a finite sum of point vortices ξ_i δ_{x_i} with ξ_i ∈ R, x_i ∈ T². We will say that the random vector (ξ_i, x_i)_i is sampled under Ξ^{λ,θ}_M if Σ_i ξ_i δ_{x_i} has the law of Ξ^{λ,θ}_M. Analogously (and in a sense more generally), the sequence (t_i, σ_i, x_i)_{i∈N} is sampled under π_λ if Σ_i σ_i δ_{t_i} ⊗ δ_{x_i} has the law of the Poisson point process π_λ.
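The random measure Ξ^{λ,θ}_M is straightforward to sample: jump times of a rate-λ Poisson process on [0, M], uniform positions, ±1 signs, and exponentially damped intensities. The sketch below (all parameters are arbitrary choices) also checks by Monte Carlo the second moment E[⟨f, Ξ^{λ,θ}_M⟩²] = λ(1−e^{−2θM})/(2θ) ‖f‖²_{L²}, which follows from the isometry property of Poissonian integrals:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, theta, M = 5.0, 1.0, 2.0

def sample_pairing(n_samples):
    """Sample <f, Xi^{lam,theta}_M> for f(x) = cos(2*pi*x_1): Poisson jump
    times on [0, M], +-1 signs, uniform positions, damped intensities."""
    counts = rng.poisson(lam * M, size=n_samples)
    out = np.empty(n_samples)
    for i, n in enumerate(counts):
        t = rng.uniform(0.0, M, n)          # jump times
        sigma = rng.choice([-1.0, 1.0], n)  # vortex signs
        x = rng.uniform(0.0, 1.0, n)        # first coordinate of the position
        out[i] = np.sum(sigma * np.exp(-theta * t) * np.cos(2 * np.pi * x))
    return out

samples = sample_pairing(20000)
empirical_var = float(np.var(samples))
# ||f||_{L^2}^2 = 1/2 for f = cos(2*pi*x_1)
theoretical_var = lam * (1 - np.exp(-2 * theta * M)) / (2 * theta) * 0.5
```

Only one coordinate of the position is sampled because the chosen test function depends on x₁ alone; the cross terms between distinct vortices vanish in mean thanks to the independent ±1 signs, which is exactly why the second moment reduces to a single time integral.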
Our Poissonian measures are characterised by their Laplace transforms: for any measurable and bounded f,
E[exp(⟨f, Ξ^{λ,θ}_M⟩)] = exp( λ ∫₀^M ∫_{T²} ∫ (e^{σe^{−θt}f(x)} − 1) dσ dx dt ), (14)
where dσ denotes the uniform measure on {±1}. By the isometry property of Poissonian integrals, the second moments of Σ^λ_t and Ξ^{λ,θ}_M are given by
E[⟨f, Σ^λ_t⟩²] = λt ‖f‖²_{L²}, E[⟨f, Ξ^{λ,θ}_M⟩²] = λ(1 − e^{−2θM})/(2θ) ‖f‖²_{L²}, (16)
and a double Poissonian integral :⟨h, Ξ^{λ,θ}_M ⊗ Ξ^{λ,θ}_M⟩: is defined in analogy with (7); as in that case, it in fact extends to an isometry on L². Let us note here that (16) also extends to functions h that are smooth outside the diagonal set {(x,x) : x ∈ T²} ⊂ T^{2×2}, but possibly discontinuous or singular on it: this is going to be important in the sequel. An important link between the objects we just defined is the following: for the linear Ornstein-Uhlenbeck equation
du_t = −θu_t dt + dΠ_t, (18)
with Π_t = Σ^λ_t, there exists a unique stationary solution, with invariant measure Ξ^{λ,θ}_∞, and if u_0 ∼ Ξ^{λ,θ}_M, then u_t has law Ξ^{λ,θ}_{M+t} for any later time t > 0. The linear equation (18), in both the outlined cases, has a unique H^{−1−δ}-valued strong solution, with continuous trajectories in the Gaussian case and cadlag trajectories in the Poissonian one. Well-posedness of the linear equation and uniqueness of the invariant measure are part of the classical theory (see [20]), and they descend from the explicit solution by stochastic convolution
u_t = e^{−θt}u_0 + ∫₀^t e^{−θ(t−s)} dΠ_s,
from which it is not difficult to derive also the last statement of the Proposition.

Weak solutions of 2D Euler equation
We now review some definitions of measure-valued and distribution-valued solutions to the 2D Euler equation: the point is how to make sense of the multiplication appearing in the nonlinearity. In terms of the vorticity ω the equation reads
∂_t ω_t + u_t · ∇ω_t = 0, ω_t = ∇⊥ · u_t, (4)
and it has to be complemented with boundary conditions: on the torus T² one should impose that ω_t have zero average. However, since we are dealing with a conservation law, the space average is not involved in the dynamics (it is constant). Remark 2. We will henceforth deliberately ignore the zero average condition: it will always be possible to subtract a constant (constant in time and space, but possibly a random variable) to take care of it, but we refrain from doing so to avoid a superfluous notational burden. Let G be the Green function of ∆ on T² with zero average, and let K = ∇⊥G be the Biot-Savart kernel; the former has the explicit Fourier representation
G(x,y) = −Σ_{k≠0} (4π²|k|²)^{−1} e_k(x−y).
We will use the fact that |∇G(x,y)|, |K(x,y)| ≤ C/d(x,y) for all x, y, with C a universal constant and d the distance on the torus. The second equation of (4) can be inverted by means of the Biot-Savart kernel: we can write u_t = K ∗ ω_t, and thus obtain an equation in which only ω appears. Its integral form against a smooth test function f is (keeping in mind that ∇ · ∇⊥ω ≡ 0 to perform the integration by parts)
⟨f, ω_t⟩ = ⟨f, ω_0⟩ + ∫₀^t ∫_{T²×T²} K(x,y) · ∇f(x) ω_s(x)ω_s(y) dx dy ds, (19)
which can be symmetrised (swapping x and y) into
⟨f, ω_t⟩ = ⟨f, ω_0⟩ + ∫₀^t ⟨H_f, ω_s ⊗ ω_s⟩ ds, (20)
where H_f(x,y) = ½ K(x,y) · (∇f(x) − ∇f(y)) is a bounded symmetric function, smooth outside the diagonal set △₂ = {(x,x) : x ∈ T²}. These three formulations are equivalent for smooth ω_t, but the integral forms, especially the symmetrised one, have been used to define more general solutions of Euler's equation; see [23].
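The key gain of the symmetrised formulation, namely that H_f stays bounded although K blows up like 1/d(x,y) on the diagonal (the Lipschitz difference ∇f(x) − ∇f(y) compensates the singularity), can be checked numerically with a truncated Fourier representation of G. A sketch (the cutoff, the test function and the evaluation points are arbitrary choices, and the truncated kernel is only accurate above length scales of order 1/cutoff):

```python
import numpy as np

cutoff = 96
ks = np.array([(k1, k2) for k1 in range(-cutoff, cutoff + 1)
                        for k2 in range(-cutoff, cutoff + 1)
                        if (k1, k2) != (0, 0)], dtype=float)
K1, K2 = ks[:, 0], ks[:, 1]
n2 = K1**2 + K2**2

def biot_savart(x):
    """Truncated Fourier sum for K = grad^perp G, grad^perp = (d_2, -d_1)."""
    phase = np.exp(2j * np.pi * (K1 * x[0] + K2 * x[1]))
    coeff = -1j / (2 * np.pi * n2)  # from G_hat_k = -1/(4 pi^2 |k|^2)
    return np.real(np.array([np.sum(coeff * K2 * phase),
                             np.sum(coeff * (-K1) * phase)]))

def grad_f(x):  # gradient of the test function f(x) = cos(2 pi x_1)
    return np.array([-2 * np.pi * np.sin(2 * np.pi * x[0]), 0.0])

def H_f(x, y):  # symmetrised kernel H_f(x, y) = K(x - y) . (grad f(x) - grad f(y)) / 2
    return 0.5 * np.dot(biot_savart(x - y), grad_f(x) - grad_f(y))

x0 = np.array([0.3, 0.4])
e = np.array([1.0, 1.0]) / np.sqrt(2.0)   # separation direction
seps = [0.2, 0.1, 0.05, 0.02]
K_vals = [float(np.linalg.norm(biot_savart(s * e))) for s in seps]
H_vals = [abs(H_f(x0, x0 + s * e)) for s in seps]
```

As the separation shrinks, |K| grows like 1/(2π·separation), while H_f remains of order one: the Hessian bound on f gives |H_f| ≤ ½ |K(x,y)| · ‖Hess f‖ · d(x,y), which is uniformly bounded.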
One such solution is the system of (finitely many) Euler point vortices: the evolution of the vorticity is given by
ω_t = Σ_{i=1}^N ξ_i δ_{x_{i,t}}, ẋ_{i,t} = Σ_{j≠i} ξ_j K(x_{i,t}, x_{j,t}). (3)
This model is thoroughly discussed for instance in [16], where it is remarked that it satisfies (20) if the double space integral is taken outside the diagonal △₂, where K is singular:
⟨f, ω_t⟩ = ⟨f, ω_0⟩ + ∫₀^t ∫_{△₂^c} H_f(x,y) ω_s(x)ω_s(y) dx dy ds. (21)
It is thus possible, in view of the notation introduced in (16), to formulate the Euler equation in the point vortices case as follows: if ω_t = Σ_{i=1}^N ξ_i δ_{x_{i,t}} and we denote
:⟨H_f, ω_t ⊗ ω_t⟩: = Σ_{i≠j} ξ_i ξ_j H_f(x_{i,t}, x_{j,t}),
then it holds
⟨f, ω_t⟩ = ⟨f, ω_0⟩ + ∫₀^t :⟨H_f, ω_s ⊗ ω_s⟩: ds. (22)
The need to avoid the diagonal set △₂ in order to give meaning to singular solutions will be crucial in what follows, as it is in the proof of the forthcoming important well-posedness result.
(In fact the latter is a slight generalisation of the results in [16], and it will be a consequence of the further generalisation we prove in section 3.) In [9], Flandoli performed a scaling limit of the point vortices system to exhibit (stationary) solutions with space white noise marginals: the meaning of the equation for such irregular vorticity processes was understood by carrying the formulation (22) to the limit since, as we have seen in the last paragraph, the Ito-Wiener interpretation of the nonlinear term makes perfect sense in the case of white noise. To proceed rigorously, let us give the following definition. Definition 1. Let ω = (ω_t)_{t∈[0,T]} be an H^{−1−δ}-valued continuous stochastic process defined on a probability space (Ω, F, P), with fixed-time marginals ω_t having the law of white noise η for all t ∈ [0,T]. We say that ω is a weak solution to Euler's equation if for any f ∈ C^∞(T²), P-almost surely, for any t ∈ [0,T],
⟨f, ω_t⟩ = ⟨f, ω_0⟩ + ∫₀^t :⟨H_f, ω_s ⊗ ω_s⟩: ds. (23)
Remark 3. Notice that the Ito-Wiener integrals (in space) appearing in the definition are almost surely integrable in time, since their L²(P) norms are uniformly bounded in t. The latter definition coincides with the one of [9]; only, in that article, it was not observed that the approximation procedure used to define the nonlinear term coincides with the classical Ito-Wiener integral. The formulation (23) is in fact quite general: interpreting the colons as "subtraction of the diagonal contribution", it includes all deterministic solutions, both in the classical and in the weak formulation (21) (cf. [23]), the point vortices solutions of Proposition 2, and it is the sense in which the limit process with white noise marginals of [9] solves Euler's equation.
Proposition 3 (Flandoli). There exists a stationary stochastic process ω t with fixed-time marginals ω t ∼ η and trajectories of class C([0, T ], H −1−δ ) for any δ > 0 which is a solution of Euler's equation in the sense of Definition 1.
Remark 4. In fact, [9] proves the same result also for processes with more general fixed-time marginals. For the sake of completeness, we recall that solutions to Euler's equation with white noise marginals were first built in [2], by means of Galerkin approximation on T².

Main Results
Fix λ, θ > 0. Our model is the stochastic differential equation
dω_t + u_t · ∇ω_t dt = −θω_t dt + dΠ_t, u_t = K ∗ ω_t, (24)
where dΠ_t is either the Poisson process dΣ^λ_t or the space-time white noise dW_t. We have seen in Proposition 1 how the linear part of the equation behaves; the intuition provided by the point vortices system suggests that, thanks to the Hamiltonian form of the nonlinearity, the latter only "shuffles" the vorticity without changing the fixed-time statistics. This intuition can be motivated as follows. Since the point vortices system preserves the product Lebesgue measure, the system must preserve the Poissonian random measures Ξ^{λ,θ}_M we introduced in subsection 2.1, because the positions of vortices under those measures are uniformly, independently scattered (this fact will be rigorously proved in section 3 for M < ∞). Building Gaussian solutions by approximation with Poissonian ones must then produce the same phenomenon. In other words, with an eye towards stationary solutions, we expect to be able to build a Poissonian stationary solution with ω_t ∼ Ξ^{λ,θ}_∞ in the case Π_t = Σ^λ_t, and a stationary Gaussian solution with ω_t ∼ √(λ/(2θ)) η in the case Π_t = √λ W_t.
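The central limit mechanism behind the Gaussian case can be observed directly by Monte Carlo: scaling intensities by 1/√N and the generation rate by N leaves the variance of the pairings ⟨f, ·⟩ unchanged, while the higher cumulants are flattened, so the law approaches a Gaussian. A sketch using excess kurtosis as a crude proxy for Gaussianity (all parameters are arbitrary choices; M is taken large enough that Ξ_M is essentially stationary):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, theta, M = 2.0, 1.0, 6.0

def pairing_samples(N, n_samples):
    """Sample <f, N^{-1/2} Xi^{N lam, theta}_M> for f(x) = cos(2 pi x_1):
    rate N*lam jump times, +-1 signs scaled by 1/sqrt(N), damped intensities."""
    counts = rng.poisson(N * lam * M, size=n_samples)
    out = np.empty(n_samples)
    for i, n in enumerate(counts):
        t = rng.uniform(0.0, M, n)
        sigma = rng.choice([-1.0, 1.0], n)
        x = rng.uniform(0.0, 1.0, n)
        out[i] = np.sum(sigma * np.exp(-theta * t) * np.cos(2 * np.pi * x)) / np.sqrt(N)
    return out

def excess_kurtosis(s):
    s = s - s.mean()
    return float(np.mean(s**4) / np.mean(s**2) ** 2 - 3.0)

s1 = pairing_samples(1, 5000)
s100 = pairing_samples(100, 5000)
var_limit = lam / (2 * theta) * 0.5  # lam/(2 theta) * ||f||_{L^2}^2
```

The variance is (up to the e^{−2θM} correction) the same for every N, while the excess kurtosis, of order λ^{−1}N^{−1}, is visibly smaller at N = 100 than at N = 1.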
Remark 5. These claims are deeply related to the fact that the 2D Euler equation preserves the enstrophy ∫_{T²} ω(x)² dx when smooth solutions are considered. The quadratic form associated to enstrophy, that is the L²(T²) product, is (up to multiplicative constants) the covariance of the random fields Ξ^{λ,θ}_M and η: as already remarked in [1], one should expect all random fields with such covariance to be invariant for Euler's equation, even if the very meaning of the latter sentence has to be clarified.
First and foremost, we need to specify a suitable concept of solution: inspired by the discussion of the last paragraph, we give the following one.
Definition 2. Fix T, δ > 0, and let (Ω, F, P, F_t) be a probability space with a filtration F_t satisfying the usual hypotheses, with respect to which Π_t is adapted, and let ω be an adapted process with trajectories in D([0,T], H^{−1−δ}) ∩ L^q([0,T], H^{−1−δ}) for some q ≥ 1 (D([0,T], S) denotes the space of S-valued cadlag functions into a metric space S). We consider the cases:
(P) Π_t = Σ^λ_t, with ω_0 ∼ Ξ^{λ,θ}_M, M < ∞;
(Ps) Π_t = Σ^λ_t, with ω_0 ∼ Ξ^{λ,θ}_∞;
(G), (Gs) Π_t = √λ W_t, with ω_0 ∼ √(λ/(2θ)) η in the case (Gs).
We say that (Ω, F, P, F_t, Π_t, ω_0, (ω_t)_{t∈[0,T]}) is a weak solution of (24) if for any f ∈ C^∞(T²) it holds P-almost surely for any t ∈ [0,T]:
⟨f, ω_t⟩ = e^{−θt}⟨f, ω_0⟩ + ∫₀^t e^{−θ(t−s)} :⟨H_f, ω_s ⊗ ω_s⟩: ds + ∫₀^t e^{−θ(t−s)} ⟨f, dΠ_s⟩. (26)
If instead, given (Ω, F, P, F_t, Π_t), there exists a process ω_t as above, we call it a strong solution.
Remark 6. Equation (26) is motivated in view of (19) and (23). The "variation of constants" expression in the above definition is equivalent to the "integral" one, as one can verify by integrating by parts in time. Both versions will be useful in what follows, but we deem (26) the most suggestive.
Remark 7. The nonlinear term of (26) is well-defined thanks to the isometry properties of the Gaussian and Poissonian double integrals (see section 2): indeed, the integrand is bounded in L²(P) uniformly in time, so that, in particular, ∫₀^t :⟨H_f, ω_s ⊗ ω_s⟩: ds is a continuous function of time.
We are now able to state our main result. Theorem 1. There exist weak solutions of (24) in all the outlined cases, stationary (as H −1−δ -valued stochastic processes) in the cases (Ps) and (Gs).
As already remarked, equation (24) is difficult to deal with directly in the Gaussian (or even the stationary Poissonian) case: for instance, it does not seem possible to treat it with fixed point or semigroup techniques. We prove existence of stationary solutions by taking limits of point vortices solutions, corresponding to the case (P). We begin with a solution ω_M of equation (24) with noise Σ^λ_t, starting from finitely many vortices distributed as Ξ^{λ,θ}_M. Well-posedness in this case is ensured by a generalisation of Proposition 2, whose proof is the content of section 3. The first limit we consider is M → ∞, so as to build a stationary solution with invariant measure Ξ^{λ,θ}_∞ and thus obtain existence in the case (Ps). Scaling intensities σ → σ/√N and generation rate λ → Nλ, we prove that as N → ∞ the limit points are stationary solutions of (24) driven by space-time white noise and with the space white noise as invariant measure. The nonstationary Gaussian case (G) will be derived analogously, via this sort of central limit theorem.
We apply a compactness method: first, we prove probabilistic bounds on the distributions involved in order to, as a second step, apply a compactness criterion ensuring tightness of the approximating processes; finally, we pass to the limit in the equation satisfied by the approximants.
Remark 8. Consider the case when no damping or forcing is present: we noted above that the classical finite vortices system (3) preserves the product Lebesgue measure, so in particular the distributions Ξ^{λ,θ}_M with M < ∞ and θ, λ > 0 are also invariant. The very same limiting procedure we are going to use, as M → ∞, proves existence of stationary solutions to Euler's equation in its weak formulation (23) with invariant measure Ξ^{λ,θ}_∞ (or η, the case of [9]), where the definition of solution is to be given in the fashion of Definition 1. More generally, as suggested in [1], Poissonian and Gaussian stationary solutions should be particular cases of stationary solutions given by independently scattered random distributions.

Solutions with finitely many vortices
Even in the case of initial data distributed as Ξ^{λ,θ}_M, that is, with almost surely finitely many initial vortices, solving the nonlinear equation is not a trivial task. We will build a solution by describing explicitly how the initial vortices and the ones added by the noise term evolve, as a system of increasingly many differential equations for the positions of vortices x_i. Intuitively, the process ω_{M,t} is defined as follows: from the initial datum ω_M(0), which is sampled under Ξ^{λ,θ}_M, we let the system evolve according to the deterministic dynamics
ẋ_{i,t} = Σ_{j≠i} ξ_{j,t} K(x_{i,t}, x_{j,t})
until the first jump time t₁ of the driving noise Σ^λ_t, when we add the vortex corresponding to the jump, and so on. To treat the model rigorously, let us introduce the following notation: let x_{1,0}, ..., x_{n,0} and ξ_{1,0}, ..., ξ_{n,0} be the (random) positions and signs of the vortices of the initial datum, and set for notational convenience t₁ = ··· = t_n = 0 as their birth times; at time t_i a vortex with intensity ξ_{i,t_i} = ±1 is added at the position x_{i,t_i}, but we can pretend it has actually existed since time 0, and just comes into play at time t_i. Thus, our equations are
ẋ_{i,t} = Σ_{j≠i: t_j≤t} ξ_{j,t} K(x_{i,t}, x_{j,t}), (29)
ξ_{i,t} = ξ_{i,t_i} e^{−θ(t−t_i)}, t ≥ t_i. (30)
In this formulation of the problem, part of the randomness consists in the positions and intensities of the initial vortices and of the ones to come: the random jump times t_i then determine when the latter become part of the system. Let us thus fix the t_i's (that is, condition the process on the jump times) so as to reduce ourselves to a deterministic problem with random initial data. The existence of a solution for almost every initial condition is ensured by the following generalisation of Proposition 2. We use the hypothesis that the jump times t_i are locally finite (there are only finitely many of them in every compact [0,T]) so as to reduce ourselves to a system of finitely many vortices. In fact, we repeat the proof of [16], adapting it to our context.
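The construction just described, deterministic vortex motion between jump times, exponentially quenched intensities, and a new ±1 vortex at each jump of the driving noise, can be sketched as an event-driven simulation. The mollified kernel below is a crude stand-in for the smoothed Biot-Savart kernel used later in the proof; the mollification width, the step size and all parameters are arbitrary choices, and initial intensities are taken ±1 for simplicity:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, theta, T, dt, eps = 3.0, 1.0, 2.0, 1e-3, 0.05

def K_eps(dx):
    """Crude mollified Biot-Savart stand-in on the torus (nearest image)."""
    dx = (dx + 0.5) % 1.0 - 0.5
    d2 = dx[..., 0]**2 + dx[..., 1]**2 + eps**2
    return np.stack([-dx[..., 1], dx[..., 0]], axis=-1) / (2 * np.pi * d2[..., None])

n0 = 4                                    # initial vortices
pos = list(rng.uniform(0, 1, (n0, 2)))
sign = list(rng.choice([-1.0, 1.0], n0))
birth = [0.0] * n0
n_jumps = rng.poisson(lam * T)            # Poisson jump times on [0, T]
jumps = sorted(rng.uniform(0, T, n_jumps))

t, j = 0.0, 0
while t < T:
    while j < len(jumps) and jumps[j] <= t:   # rekindle: add a new vortex
        pos.append(rng.uniform(0, 1, 2))
        sign.append(rng.choice([-1.0, 1.0]))
        birth.append(jumps[j])
        j += 1
    X = np.array(pos)
    xi = np.array([s * np.exp(-theta * (t - b)) for s, b in zip(sign, birth)])
    V = np.zeros_like(X)                      # velocity of each vortex
    for i in range(len(pos)):
        dx = X[i] - np.delete(X, i, axis=0)
        V[i] = np.sum(np.delete(xi, i)[:, None] * K_eps(dx), axis=0)
    pos = list((X + dt * V) % 1.0)            # explicit Euler step on the torus
    t += dt

final_intensities = np.abs([s * np.exp(-theta * (T - b)) for s, b in zip(sign, birth)])
```

The smoke test below only checks structural facts of the construction: the vortex count grows exactly with the jumps processed, positions stay on the torus, and the intensities of the initial vortices have decayed by the factor e^{−θT}.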
The issue is the possibility of collapsing vortices, which is ruled out as follows. We define an approximating system with interaction kernel smoothed in a ball around 0: the smooth interaction readily gives well-posedness of the approximants, on which we evaluate a Lyapunov functional measuring how close the vortices can get. Bounding the Lyapunov functional then ensures that, as the regularisation parameter goes to 0, the approximating vortices in fact perform the same motion prescribed by the non-smoothed equation.
Proof. Let δ > 0, and consider smooth functions G_δ coinciding with G outside the fattened diagonal {(x,y) ∈ T^{2×2} : d(x,y) < δ} (d being the distance on the torus T²), and such that
|G_δ(x,y)| ≤ C(1 + |log d(x,y)|), |∇G_δ(x,y)| ≤ C/d(x,y). (31)
Note in particular that the latter inequality was already true for G. Let us first restrict ourselves to a time interval [0,T]: in particular, we can consider only the finitely many vortices with t_i ≤ T; let them be x₁, ..., x_n. The system with smoothed interaction kernel K_δ = ∇⊥G_δ has a unique, global-in-time, smooth solution thanks to the Cauchy-Lipschitz theorem: let x^δ_{i,t} denote the solution (note that smoothing K does not affect the evolution of the intensities ξ_{i,t}).
Because of the Hamiltonian structure of the equations, that is, since K_δ = ∇⊥G_δ, it holds div ẋ^δ_{i,t} = 0. This implies the invariance of the product Lebesgue measure under the flow of the smoothed system. Let us now introduce a Lyapunov function measuring how close the existing vortices are by means of G_δ:
L_δ(t) = Σ_{i≠j: t_i, t_j ≤ t} G_δ(x^δ_{i,t}, x^δ_{j,t}).
By replacing G_δ with G_δ − k, for a large enough k > 0, in the definition of L_δ, we can assume that L_δ is nonnegative. Observe that, because of (31), ∫_{T^{2n}} L_δ(0) dx₁ ··· dx_n ≤ C for a constant C independent of δ. Upon differentiating, and keeping in mind that ξ̇_{i,t} = −θ ξ_{i,t}, one obtains a sum of terms of the form ã_{ijk}(t) ∇G_δ(x^δ_{i,t}, x^δ_{j,t}) · ∇⊥G_δ(x^δ_{i,t}, x^δ_{k,t}), where the coefficients ã_{ijk}(t) depend on time t as functions of the intensities ξ_{i,t}; ã_{ijk} = 0 whenever two indices are equal, since ∇G_δ(x^δ_{i,t} − x^δ_{j,t}) · ∇⊥G_δ(x^δ_{i,t} − x^δ_{j,t}) = 0, and it always holds |ã_{ijk}(t)| ≤ 1. We can use this to prove the following integral bound on L_δ: denoting by dx^n the n-fold Lebesgue measure of the distribution of the initial positions,
∫ sup_{t∈[0,T]} L_δ(t) dx^n ≤ C_T,
with C_T a constant depending only on T (n depends on T). Note that in the second inequality we have used the invariance of Lebesgue measure. The last passage follows from the aforementioned integrability of L_δ(0) and the fact that, because of (31), the integrands in the second term are bounded uniformly in δ. With these estimates at hand, we can now pass to the limit as δ → 0: let
Ω_{δ,T} = { min_{i≠j} inf_{t∈[0,T]} d(x^δ_{i,t}, x^δ_{j,t}) ≤ δ };
since when two points x, y are closer than δ, G_δ(x,y) ≥ C|log δ| for some universal constant C, by Čebyšëv's inequality P(Ω_{δ,T}) ≤ C_T/|log δ|. By construction, on the event Ω^c_{δ,T} the solution x^δ_{i,t} is in fact a solution of the original system on [0,T]. Hence, the thesis holds provided the event ∪_{T>0} ∩_{δ>0} Ω_{δ,T} is negligible. But this is true: Ω_{δ,T} is monotone in its arguments, so the intersection over δ is negligible because of the above estimate, hence the increasing union over T must be negligible too.
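The structural fact used at the start of the argument, div ẋ^δ_i = 0 because K_δ = ∇⊥G_δ, can be verified numerically: the perpendicular gradient of a mollified Green function is divergence-free, while the plain gradient is not (its divergence is ΔG_δ ≈ −1 away from the singularity, by the zero-average normalisation on the torus). A sketch with a Gaussian-mollified spectral Green function (the mollification width and cutoff are arbitrary choices):

```python
import numpy as np

cutoff, delta = 64, 0.06
modes = [(k1, k2) for k1 in range(-cutoff, cutoff + 1)
                  for k2 in range(-cutoff, cutoff + 1) if (k1, k2) != (0, 0)]
K1 = np.array([m[0] for m in modes], dtype=float)
K2 = np.array([m[1] for m in modes], dtype=float)
n2 = K1**2 + K2**2
G_hat = -np.exp(-delta**2 * n2) / (4 * np.pi**2 * n2)  # mollified Green function

def grad_G(x):
    phase = np.exp(2j * np.pi * (K1 * x[0] + K2 * x[1]))
    g = 2j * np.pi * G_hat * phase
    return np.real(np.array([np.sum(K1 * g), np.sum(K2 * g)]))

def perp(v):
    return np.array([v[1], -v[0]])  # grad^perp convention (d_2, -d_1)

def divergence(field, x, h=1e-3):
    """Central finite-difference divergence of a 2D vector field."""
    return ((field(x + [h, 0])[0] - field(x - [h, 0])[0]) / (2 * h)
            + (field(x + [0, h])[1] - field(x - [0, h])[1]) / (2 * h))

x = np.array([0.37, 0.21])  # a point away from the vortex at the origin
div_K = divergence(lambda y: perp(grad_G(y)), x)      # should vanish
div_grad = divergence(grad_G, x)                      # should be about -1
```

The Hamiltonian field ∇⊥G_δ has numerically vanishing divergence, whereas the gradient field picks up the constant −1 from ΔG_δ = (mollified δ₀) − 1: it is precisely the first property that makes the smoothed vortex flow preserve the product Lebesgue measure.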
The forthcoming Corollary is a direct consequence of Proposition 4: indeed, to complete our construction we only need to randomise the jump times and intensities so that the initial conditions and driving noise have the correct distribution. Proof. Fix s < t: by construction, given the positions x_{i,0}, the initial intensities ξ_{i,0} and the jump times t_i (in a P-full measure event), ω_{M,t} is given by a deterministic function of (x_{i,s}, ξ_{i,s})_{i: t_i<s} and (t_i, x_{i,0}, ξ_{i,0})_{i: s≤t_i<t}. As a consequence, ω_{M,t} is a function of ω_{M,s} and of the driving noise (Σ^λ_r)_{s≤r<t}, which is independent of ω_{M,s}: this implies the Markov property. Since the trajectories of the positions x_{i,t} and the evolution of the intensities ξ_{i,t} are smooth in time, ω_{M,t} is also smooth in time, save for the jump times t_i, when a new Dirac delta is added.
As for the marginal distributions, let us first evaluate the characteristic functional of ω_{M,t}. Using the definition of ξ_{i,t}, and distinguishing the cases i ≤ n and i > n (which correspond to two independent groups of random variables), we can write it in terms of a Poisson point process N of parameter λ on R, whose points are denoted by s_i; the second passage follows from the fact that the points of N in disjoint intervals are independent and that their distribution does not change if we reverse the parametrisation of the interval. Comparing with the characteristic functional of Ξ^{λ,θ}_{M+t} given in (14), we conclude that ω_{M,t} ∼ Ξ^{λ,θ}_{M+t}. Observe moreover that, for any f ∈ C^∞(T²), P-almost surely for all t ≥ 0, the coupling of the driving noise against f is given by the sum over the added vortices (cf. subsection 2.1). Given this, it is straightforward to show that we have indeed built solutions of (26): for f ∈ C^∞(T²), by (29) and (30), the process ⟨f, ω_{M,t}⟩ satisfies the required integral equation between jump times. The latter holds regardless of the choice of initial positions, intensities and jump times (as soon as the dynamics is defined), so in particular it holds P-almost surely uniformly in t, and this concludes the proof.
The method of [16] thus provides, quite remarkably, existence and pathwise uniqueness of measure-valued strong solutions. Unfortunately, it only seems to apply to systems of finitely many vortices, since it relies on the very particular, discrete nature of the measures involved to control the "diagonal collapse" issue. Let us conclude this section by noting that we have obtained the first piece of Theorem 1, namely we have built solutions in the case (P) for all M < ∞.

Proof of the Main Result
In section 3 we built the point vortices processes ω_{M,t} = Σ_{i: t_i≤t} ξ_{i,t} δ_{x_{i,t}}. Let us introduce the scaling in N ≥ 1: we will denote
ω_{M,N,t} = (1/√N) Σ_{i: t_i≤t} ξ_{i,t} δ_{x_{i,t}}, (32)
which is a solution of (24) driven by the noise (1/√N) Σ^{Nλ}_t (in the sense of Definition 2), with fixed-time marginals ω_{M,N,t} ∼ (1/√N) Ξ^{Nλ,θ}_{M+t}. It is worth noting here that, by construction of ω_{M,N,t}, its natural filtration F_t coincides with the one generated by the driving noise Σ^{Nλ}_t and the initial datum. The forthcoming paragraphs deal with, respectively: a recollection of some compactness criteria; the bounds proving that the laws of ω_{M,N} are tight; and the proof that limit points of our family of processes are indeed solutions in the sense of Definition 2, that is, the main result.

Compactness Results
Let us first review a deterministic compactness criterion due to Simon (we refer to [25] for the result and the required generalities on Banach-space-valued Sobolev spaces).
Proposition 5 (Simon). Assume that
• X ֒→ B ֒→ Y are Banach spaces such that the embedding X ֒→ Y is compact and there exists 0 < θ < 1 such that for all v ∈ X ∩ Y,
‖v‖_B ≤ C ‖v‖_X^{1−θ} ‖v‖_Y^θ;
• F is a bounded family in W^{s₀,r₀}([0,T], X) ∩ W^{s₁,r₁}([0,T], Y), with r₀, r₁ ∈ [0,∞].
Define s_θ = (1−θ)s₀ + θs₁, 1/r_θ = (1−θ)/r₀ + θ/r₁ and s* = s_θ − 1/r_θ. Then, if s* ≤ 0, F is relatively compact in L^p([0,T], B) for all p < −1/s*. In the case s* > 0, F is moreover relatively compact in C([0,T], B).
Let us specialise this result to our framework. Take B = H^{−1−δ} with δ > 0, together with suitable Sobolev spaces X ֒→ B ֒→ Y: by Gagliardo-Nirenberg estimates the interpolation inequality is satisfied with θ = δ/2. Let us moreover take s₀ = 0, s₁ = 1/2 − γ with γ > 0, r₁ = 2 and r₀ = q ≥ 1, so that the discriminating parameter is
s* = (δ/2)(1/2 − γ) − (1 − δ/2)/q − δ/4.
Note that, taking δ smaller and smaller and q bigger and bigger, we can get s* < 0 arbitrarily close to 0, but not 0. We have thus derived: Corollary 2. If a family is bounded in the spaces above for any choice of δ > 0 and p ≥ 1, and for some γ > 0, then it is relatively compact in L^q([0,T], H^{−1−δ}) for any 1 ≤ q < ∞. As a consequence, if a sequence of stochastic processes u_n : [0,T] → H^{−1−δ} defined on a probability space (Ω, F, P) is such that, for any δ > 0, p ≥ 1 and some γ > 0, there exists a constant C_{δ,γ,q} bounding the corresponding moments, then the laws of u_n on L^q([0,T], H^{−1−δ}) are tight for any 1 ≤ q < ∞.
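With the exponents chosen above (s₀ = 0, s₁ = 1/2 − γ, r₀ = q, r₁ = 2, θ = δ/2), Simon's parameter s* = s_θ − 1/r_θ can be tabulated to confirm that it is negative, and approaches 0 only as δ → 0 and q → ∞. A two-line check (the sampled values of δ, γ, q are arbitrary):

```python
def s_star(delta, gamma, q):
    """s* = s_theta - 1/r_theta with s0=0, s1=1/2-gamma, r0=q, r1=2, theta=delta/2."""
    s_theta = (delta / 2.0) * (0.5 - gamma)
    inv_r_theta = (1.0 - delta / 2.0) / q + (delta / 2.0) / 2.0
    return s_theta - inv_r_theta

vals = [s_star(d, 0.01, q) for d in (0.5, 0.1, 0.01) for q in (2, 10, 100)]
```

Every sampled value is strictly negative, consistent with the observation that the criterion yields compactness in L^p for every finite p but never in C([0,T], B).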
The processes we will consider are discontinuous in time: this is why we consider only fractional Sobolev regularity in time. However, as we have just observed, this prevents us from using Simon's criterion to prove any time regularity beyond L^q. This is why we will combine the latter result with a compactness criterion for càdlàg functions, for which we refer to [17].

Theorem 2 (Aldous' criterion). Let (u^n)_{n∈N} be a sequence of càdlàg processes with values in a complete separable metric space (E, d), each defined on a probability space (Ω_n, F_n, P_n) and adapted to a filtration (F^n_t)_{t∈[0,T]}. Their laws are tight on the Skorokhod space D([0,T], E) provided:
1. for every t ∈ [0,T], the laws of u^n_t are tight on E;
2. for all ε, ε′ > 0 there exists R > 0 such that for any sequence of F^n-stopping times τ_n ≤ T it holds sup_n sup_{0≤r≤R} P_n(d(u^n_{τ_n}, u^n_{τ_n+r}) ≥ ε′) ≤ ε.

Tightness of Point Vortices Processes
The following estimate on our Poissonian random measures is the crux of all the forthcoming bounds; it is essentially a Poissonian analogue of the ones in Section 3 of [9].
For any 1 ≤ p < ∞ there exists a constant C_p > 0 such that, for any measurable bounded functions h : T² → R and f : T² × T² → R,
E[⟨h, ω_{M,N}⟩^{2p}] ≤ C_p ‖h‖_∞^{2p}, E[|⟨f, ω_{M,N} ⊗ ω_{M,N}⟩|^p] ≤ C_p ‖f‖_∞^p, (34)
uniformly in N ≥ 0 and M ∈ [0, ∞]. As a consequence, since for δ > 0 the Green function of (1 − ∆)^{1+δ} is smooth, E[‖ω_{M,N}‖_{H^{−1−δ}}^{2p}] ≤ C_{p,δ} uniformly in M, N.
Proof. Since ⟨f, ω_{M,N} ⊗ ω_{M,N}⟩ only depends on the symmetric part of f, we reduce ourselves to symmetric functions. Moreover, without loss of generality we can check (34) for functions with separated variables f(x, y) = h(x)h(y), h : T² → R measurable and bounded, for which it holds ⟨f, ω_{M,N} ⊗ ω_{M,N}⟩ = ⟨h, ω_{M,N}⟩². Moments of the random variable ⟨h, ω_{M,N}⟩ can be evaluated by differentiating the moment generating function (14): using Faà di Bruno's formula to take 2p derivatives we obtain a sum, over multi-indices (m_k)_k with Σ_k k m_k = 2p, of products of derivatives of the Lévy exponent (see [20, 21] for similar classical computations). Let us stress that when an integral in the latter formula is null, its 0-th power is to be interpreted as 0⁰ = 1. The contribution of the factor 1_{2|k} = ∫ σ^k dσ is crucial: when k is odd, 1_{2|k} is null, so only terms with m_k = 0 survive in the sum (again, 0⁰ = 1). Thus, the highest power of N appearing is N^{Σ_k m_k} ≤ N^{2p/2} = N^p, which is compensated by the N^{−p} we factored out, and this concludes the proof.
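To illustrate the power counting in N, here is a sketch assuming that (14) has the compound Poisson exponential form E[e^{t⟨h, ω_{M,N}⟩}] = e^{g_N(t)}, with g_N(t) = c_N ∫∫ (e^{tσh(x)/√N} − 1) dσ dx and c_N proportional to N (our reading of the construction; the precise constants are immaterial):

```latex
% Fa\`a di Bruno's formula for an exponential composite, with g_N(0) = 0:
\[
  \mathbb{E}\big[\langle h, \omega_{M,N}\rangle^{2p}\big]
  = \frac{d^{2p}}{dt^{2p}}\Big|_{t=0} e^{g_N(t)}
  = \sum_{\sum_k k\, m_k = 2p} \frac{(2p)!}{\prod_k m_k!\,(k!)^{m_k}}
    \prod_k \big( g_N^{(k)}(0) \big)^{m_k}.
\]
% Each cumulant carries g_N^{(k)}(0) \propto c_N N^{-k/2}\,\mathbf{1}_{2\mid k}
%   \propto N^{1-k/2}\,\mathbf{1}_{2\mid k},
% so a term of the sum has total power
%   N^{\sum_k (1-k/2)\, m_k} = N^{\sum_k m_k - p}.
% Odd k vanish, hence k \ge 2 in every surviving factor and \sum_k m_k \le p:
% the total power of N is at most N^0, i.e. the moments are bounded uniformly in N.
```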
We can now discuss convergence at fixed times.

Proposition 7. At any fixed t, the variables ω_{M,N,t} converge in law on H^{−1−δ}, δ > 0, as M → ∞ and then N → ∞, to a multiple of the space white noise η; and if the variables converge almost surely, they do so also in L^p(Ω, H^{−1−δ}) for any 1 ≤ p < ∞, δ > 0.
Proof. The embedding H^α ↪ H^β is compact as soon as α > β, and we know that the variables are uniformly bounded elements of L^p(Ω, H^{−1−δ}) for any p ≥ 1 by (34), so by Čebyšëv's inequality their laws are tight. Identification of limit laws is yet another consequence of (14): by Theorem 2 of [11] (an infinite-dimensional Lévy theorem) we only need to check that the characteristic functions E[e^{i⟨ω_{M,N}, h⟩}] converge to the ones of the announced limits for any h ∈ H^{1+δ}. Since (14) is valid for all M ∈ [0, ∞], the limit M → ∞ poses no problem. As for the limit N → ∞, one passes to the limit in the explicit expression of the characteristic function of ⟨ω_{M,N}, h⟩ for any test function h ∈ H^{1+δ}, using the following elementary expansion: for φ ∈ C(T²), N(cos(φ(x)/√N) − 1) → −φ(x)²/2 uniformly in x as N → ∞. Since E[exp(i⟨h, η⟩)] = exp(−‖h‖²/2), this concludes the proof.

The latter result provides compactness "in space" ("equi-boundedness"): in order to apply Corollary 2 and Theorem 2, we also need to obtain a control on the regularity "in time" ("equi-continuity"). We will obtain it by exploiting the equation satisfied by ω_{M,N}, which we derived in Corollary 1, and which allows us to prove the forthcoming estimate on increments.

Proposition. There exists a constant C_{θ,λ,T,δ} > 0 such that, for any stopping time τ ≤ T for ω_{M,N} and any r > 0,
E[‖ω_{M,N,τ+r} − ω_{M,N,τ}‖²_{H^{−3−δ}}] ≤ C_{θ,λ,T,δ} r, (37)
uniformly in M, N.

Proof. In order to lighten notation, and since the final result must not depend on M, N, let us drop them when writing ω_{M,N,t} = ω_t. By its definition in (32) and Remark 6 we know that, for any smooth f ∈ C^∞(T²), the process satisfies a weak integral equation (38) expressing ⟨f, ω_t⟩ − ⟨f, ω_s⟩ in terms of the nonlinear transport term, the linear damping term −θ ∫_s^t ⟨f, ω_u⟩ du and the martingale term ⟨f, N^{−1/2}(Σ_{Nλ,t} − Σ_{Nλ,s})⟩. Since this equation holds P-almost surely uniformly in s, t ∈ [0, T], it is also true when we replace t with the stopping time τ. It is convenient to recall that
‖u‖²_{H^{−3−δ}} = Σ_k (1 + |k|²)^{−3−δ} ⟨u, e_k⟩², (39)
so we can use the weak integral equation against the orthonormal functions e_k to control the full norm. We estimate increments by bounding separately the terms in the equation; let us start from the linear one:
E[ |θ ∫_τ^{τ+r} ⟨f, ω_s⟩ ds|² ] ≤ θ² r ∫_τ^{τ+r} E[⟨f, ω_s⟩²] ds ≤ C θ² r² ‖f‖_∞², (40)
where the last passage makes use of the uniform estimate (34).
The nonlinearity is the hardest term, and its singularity is the reason why we cannot obtain space regularity beyond H^{−3−δ}. Writing it by means of the symmetrised kernel H_f, we obtain
E[ |∫_τ^{τ+r} ⟨H_f, ω_s ⊗ ω_s⟩ ds|² ] ≤ r ∫_τ^{τ+r} E[⟨H_f, ω_s ⊗ ω_s⟩²] ds ≤ C r² ‖H_f‖²_∞ ≤ C r² ‖f‖²_{C²}, (41)
where the second passage uses (34), and the third is due to the fact that by Taylor expansion ‖H_f‖_∞ ≤ C ‖f‖_{C²}. By (11), the martingale (⟨f, N^{−1/2}(Σ_{Nλ,t+r} − Σ_{Nλ,t})⟩)_{t∈[0,T]} has constant quadratic variation λ r ‖f‖²_{L²}, so the Burkholder–Davis–Gundy inequality gives
E[⟨f, N^{−1/2}(Σ_{Nλ,τ+r} − Σ_{Nλ,τ})⟩²] ≤ C λ r ‖f‖²_{L²}. (43)
Applying estimates (40), (41) and (43) to the functions e_k, from (38) and the Cauchy–Schwarz inequality we get E[|⟨ω_{τ+r} − ω_τ, e_k⟩|²] ≤ C_{θ,λ,T} r (1 + |k|⁴), so that (39) gives us
E[‖ω_{τ+r} − ω_τ‖²_{H^{−3−δ}}] ≤ C r Σ_k (1 + |k|²)^{−3−δ} (1 + |k|⁴) ≤ C_{θ,λ,T,δ} r,
where the series converges since in dimension 2 we have |k|⁴ (1 + |k|²)^{−3−δ} ≲ |k|^{−2−2δ}; this concludes the proof.

Proposition. The laws of the processes ω_{M,N} are tight on L^q([0,T], H^{−1−δ}) and on D([0,T], H^{−3−δ}), for any δ > 0, 1 ≤ q < ∞.
Proof. Since ω_{M,N,t} ∼ (1/√N) Ξ_{θ,Nλ,M+t}, they are bounded in L^p(Ω, H^{−1−δ}) for any δ > 0, 1 ≤ p < ∞, uniformly in M, N, t, as shown in Proposition 7, and as a consequence the processes ω_{M,N} are uniformly bounded in L^p(Ω × [0,T], H^{−1−δ}), for any δ > 0, 1 ≤ p < ∞. Moreover, we have already proved fixed-time tightness. We are thus left to prove Aldous' condition in H^{−3−δ} and to control a fractional Sobolev norm in time, in order to apply Corollary 2 and Theorem 2, concluding the proof. As in the previous proof, we denote ω_{M,N,t} = ω_t.
We only need to apply the uniform bound on increments (37). Starting from the fractional Sobolev norm, we evaluate
E[‖ω‖²_{W^{α,2}([0,T],H^{−3−δ})}] ≤ C ∫_0^T ∫_0^T |t − s|^{−2α} ds dt,
which converges as soon as α < 1/2. Aldous' condition follows from Čebyšëv's inequality: if τ is a stopping time for ω_t and 0 ≤ r ≤ R, then
P(‖ω_{τ+r} − ω_τ‖_{H^{−3−δ}} ≥ ε′) ≤ C R / (ε′)²,
where the right-hand side is smaller than ε > 0 as soon as R, which we can choose, is small enough.
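Both conclusions follow from elementary computations; the following is a sketch assuming only the increment bound E‖ω_t − ω_s‖²_{H^{−3−δ}} ≤ C|t − s| implied by (37):

```latex
% Fractional Sobolev seminorm in time (the L^2([0,T]) part of the norm is bounded
% by the uniform fixed-time estimates):
\[
  \int_0^T\!\!\int_0^T
    \frac{\mathbb{E}\,\|\omega_t-\omega_s\|_{H^{-3-\delta}}^2}{|t-s|^{1+2\alpha}}\,ds\,dt
  \le C \int_0^T\!\!\int_0^T |t-s|^{-2\alpha}\,ds\,dt
  = \frac{2\,C\,T^{2-2\alpha}}{(1-2\alpha)(2-2\alpha)},
\]
% which is finite if and only if \alpha < 1/2.
% Aldous' condition via \v{C}eby\v{s}\"ev's inequality, for a stopping time \tau
% and 0 \le r \le R:
\[
  \mathbb{P}\big( \|\omega_{\tau+r}-\omega_\tau\|_{H^{-3-\delta}} \ge \varepsilon' \big)
  \le \frac{C r}{(\varepsilon')^2} \le \frac{C R}{(\varepsilon')^2} \le \varepsilon
  \qquad \text{as soon as } R \le \varepsilon (\varepsilon')^2 / C.
\]
```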
Let us conclude this paragraph with a martingale central limit theorem concerning the driving noise of our approximant processes.
Proposition. The laws of the martingales Π^N_t = N^{−1/2} Σ_{Nλ,t} are tight on D([0,T], H^{−1−δ}) and on L^q([0,T], H^{−1−δ}) for any δ > 0, 1 ≤ q < ∞, and limit points have the law of the Wiener process √λ W_t on H^{−1−δ}, with covariance E[⟨W_t, f⟩⟨W_s, g⟩] = (t ∧ s) ⟨f, g⟩_{L²(T²)}.
Proof. By (43) we readily get E[‖Π^N_{τ+r} − Π^N_τ‖²_{H^{−1−δ}}] ≤ C_{δ,λ} r for any N ∈ N, δ, r > 0 and any stopping time τ for Π^N, uniformly in N. The very same argument of the last proposition (here with better space regularity) then proves the claimed tightness. The martingale property (with respect to the processes' own filtrations) carries over to limit points, since it can be expressed by means of the following integral formulation: for any s, t ∈ [0, T], E[(Π^N_t − Π^N_s) Φ(Π^N|_{[0,s]})] = 0 for all real bounded measurable functions Φ on (H^{−1−δ})^{[0,s]}. Limit points are Gaussian processes, since at any fixed time Π^N_t converges in law to a Gaussian random distribution, as one can show by repeating the computations on characteristic functions in Proposition 7 with θ = 0, M = t. It now suffices to recall the covariance formulas (6) and (10) to conclude that any limit point has the law of √λ W.
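The fixed-time Gaussianity can be sketched by the same characteristic-function computation as in the proof of Proposition 7; the following assumes the driving compound Poisson noise Σ_{Nλ,t} has rate Nλ and symmetric jump intensities σ with ∫ σ² dσ = 1 (our reading of (11)), for h ∈ L²(T²):

```latex
% Symmetric jumps: the imaginary part of the L\'evy exponent cancels, leaving
\[
  \mathbb{E}\,\exp\!\big( i\,\langle h, \Pi^N_t \rangle \big)
  = \exp\!\Big( N\lambda t \int_{\mathbb{T}^2}\!\!\int
      \big( \cos\big( \sigma h(x)/\sqrt{N} \big) - 1 \big)\, d\sigma\, dx \Big).
\]
% Taylor expansion: N(\cos(\sigma h(x)/\sqrt{N}) - 1) = -\sigma^2 h(x)^2/2 + O(1/N)
% uniformly in x, so, using \int \sigma^2\, d\sigma = 1,
\[
  \mathbb{E}\,\exp\!\big( i\,\langle h, \Pi^N_t \rangle \big)
  \;\xrightarrow[N\to\infty]{}\;
  \exp\!\Big( -\frac{\lambda t}{2}\, \|h\|_{L^2(\mathbb{T}^2)}^2 \Big),
\]
% the characteristic function of the centred Gaussian \sqrt{\lambda}\,\langle h, W_t \rangle,
% consistently with the quadratic variation \lambda t \|h\|_{L^2}^2 computed in (43).
```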