The role of disorder in the dynamics of critical fluctuations of mean field models

The purpose of this paper is to analyze how disorder affects the dynamics of critical fluctuations for two different types of interacting particle systems: the Curie-Weiss and the Kuramoto model. The models under consideration are collections of spins and rotators, respectively. Both are subject to a mean field interaction and embedded in a site-dependent, i.i.d. random environment. As the number of particles goes to infinity, their limiting dynamics become deterministic and exhibit a phase transition. The main result concerns the fluctuations around this deterministic limit at the critical point, in the thermodynamic limit. From a qualitative point of view, it indicates that when disorder is added, spin and rotator systems belong to two different universality classes, which is not the case for the homogeneous models (i.e., without disorder).


Introduction
Interacting particle systems with mean field interaction are characterized by the complete absence of geometry in the space of configurations, in the sense that the strength of the interaction between particles is independent of their mutual position. The advantage of dealing with this kind of model is that it is usually analytically tractable, and it is rather simple to derive its macroscopic equations. Even if the mean field hypothesis may seem too simplistic to describe physical systems, where geometry and short-range interactions are involved, mean field models have recently been applied to social sciences and finance, as in [2,7,8,10,13,15]. We briefly introduce the general framework and some of its peculiar features. By mean field stochastic process we mean a family x^(N) = (x^(N)(t))_{t≥0} whose components take values in a measurable space (E, E), and whose empirical measure ρ_N(t) := (1/N) Σ_{j=1}^N δ_{x_j^(N)(t)} is a random probability on (E, E). Then (ρ_N(t))_{t≥0} is a measure-valued Markov process.
Although this is by no means a standard definition of mean field model, it captures the basic features of the specific models we will consider. Let (F, F) be a topological vector space, and h : E → F a measurable function. Objects of the form ∫ h dρ_N(t) are called empirical averages. If the flow (∫ h dρ_N(t))_{t≥0} is a Markov process, we say that ∫ h dρ_N(t) is an order parameter. Note that the empirical measure itself is an order parameter (taking F = the set of signed measures on E, and h(x) = δ_x). Whenever possible, it is interesting to find finite dimensional order parameters, i.e. order parameters for which F is finite dimensional. One of the nice aspects of mean field models is that, in many interesting cases, one can prove a Law of Large Numbers (as N → +∞) for the order parameters, and characterize the deterministic limit as the solution of an ordinary differential equation. This limit is often called the McKean-Vlasov limit. In particular, the differential equation describing the limit evolution of the empirical measure will be referred to as the McKean-Vlasov equation. This equation has the form ∂_t q_t = L(q_t), where L is a nonlinear operator acting on signed measures on E (even though other spaces may be more convenient for the analysis of L).
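To make these objects concrete, here is a minimal numerical sketch; the two-point state space E = {−1, +1} and the choice h(x) = x (giving the magnetization) are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# A configuration of N particles with values in E = {-1, +1}.
N = 10_000
x = rng.choice([-1, +1], size=N)

# The empirical measure rho_N assigns mass 1/N to each particle value;
# for a finite E it is just the vector of frequencies.
values, counts = np.unique(x, return_counts=True)
rho_N = dict(zip(values, counts / N))

# An empirical average: the integral of h against rho_N, here with h(x) = x,
# which gives the magnetization -- a one-dimensional order parameter.
m_N = x.mean()
assert abs(m_N - sum(v * p for v, p in rho_N.items())) < 1e-12
```

By the Law of Large Numbers, m_N concentrates around its deterministic limit as N grows, which is the starting point of the fluctuation analysis described above.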
Our main interest is the study of the fluctuations of the order parameter around its limiting dynamics. We can capture different features of these fluctuations depending on whether or not time is rescaled with N. If time is not rescaled and we consider the evolution in a time interval [0, T], with T fixed, a Central Limit Theorem holds for the order parameter in all regimes; in other words, the fluctuations of the order parameter converge to a Gaussian process, which is the unique solution of a linear diffusion equation. Whenever time is rescaled in such a way that T goes to infinity as N does, we may observe different behaviors. To avoid further complications, we assume the Markov process x^(N)(t) has a "nice" chaotic initial condition, i.e. with i.i.d. components.
• Subcritical regime. Suppose q_0 is the unique stationary solution of the McKean-Vlasov equation, and it is linearly stable (i.e. stable for the linearized equation). Then we expect the Central Limit Theorem to hold uniformly in time; in particular, this provides a Central Limit Theorem for the stationary distribution of x^(N). Some results in this direction are shown in [12].
• Supercritical regime. Suppose the set of stationary, linearly stable solutions of the McKean-Vlasov equation has cardinality greater than 1. In this case metastability phenomena occur at a time scale exponentially growing in N .
• Critical regime. This is the case at the boundary of the subcritical regime: denoting by L the linearization of L around q_0, the spectrum Spec(L) of L is contained in {z ∈ C : Re(z) ≤ 0}, but there are elements of Spec(L) with zero real part. Under a suitable time speed-up, the elements of the corresponding eigenspaces may exhibit large and, possibly, non-normal fluctuations (see [9,5]).
Of course the three regimes described above do not cover all possibilities in general, since stable periodic orbits or even strange attractors may arise. Moreover, the same model could be in different regimes depending on the values of some parameters (phase transition).
The main subject of this paper is the analysis of the dynamics of the critical fluctuations in disordered mean field models. We consider a mean field model and we add a site-dependent, i.i.d. random environment, acting as an inhomogeneity in the structure of the system; we aim at analyzing the effect of the disorder in the dynamics of critical fluctuations, as compared with the homogeneous case. We deal with the Curie-Weiss and the Kuramoto models. We are not aware of similar results concerning non-equilibrium critical fluctuations in presence of disorder. Static fluctuations for the random Curie-Weiss model have been studied in [1].
We now give the basic ideas of how the dynamics of critical fluctuations are determined. As mentioned above, the deterministic limiting dynamics of the order parameter are described by a nonlinear evolution operator L. The linearization of this equation around a stationary solution gives rise to the so-called linearized operator L. This operator is also related to the normal fluctuations of the process. At the critical point this operator has an eigenvalue with zero real part, while all other elements of the spectrum have negative real part. The eigenspace of the eigenvalue with zero real part will be called the critical direction, and usually happens to have low dimension: critical phenomena involve the empirical averages corresponding to this subspace. Thus, our analysis proceeds along the following steps.
• Locating the critical direction.
• Determining the correct space-time scaling for the critical fluctuations. This requires an approximation of the time evolution of the order parameter that goes beyond the normal approximation.
• Proving that the rescaled fluctuations vanish along non-critical directions. This will be done using the method of "collapsing processes": it was developed by Comets and Eisele in [5] for a geometric long-range interacting spin system, and was also applied to a homogeneous mean field spin-flip system in [18].
• Determining the limiting dynamics in the critical direction. This will be done using arguments of perturbation theory for Markov processes, as treated in [17], and of tightness, applied to a suitable martingale problem.
From a qualitative point of view, our results indicate that when disorder is added, spin systems and rotators belong to two different universality classes, which is not the case for homogeneous systems. Roughly speaking, in spin systems the fluctuations produced by the disorder always prevail in the critical regime: these fluctuations emerge on a shorter time scale, of order N^{1/4}. However, as the "strength" of the disorder increases, the Kuramoto model undergoes a further phase transition: for sufficiently small disorder, the dynamics of critical fluctuations converge to a nonlinear, ergodic diffusion, as in the homogeneous case; for larger disorder, the limiting diffusion loses ergodicity, and actually explodes in finite time. We finally remark that in [4] we analyzed the critical fluctuations for a spin system close in spirit to the Curie-Weiss model, although with a less general disorder distribution.

Description of the Model
Let S = {−1, +1} be the spin space, and µ an even probability on R. Let also η = (η_j)_{j=1}^N ∈ R^N be a sequence of independent, identically distributed random variables, defined on some probability space (Ω, F, P), and distributed according to µ. They represent a random, inhomogeneous magnetic field. Given a configuration σ = (σ_j)_{j=1}^N ∈ S^N and a realization of the magnetic field η, where σ_j is the spin value at site j and η_j is the local magnetic field associated with the same site, define the Hamiltonian

H_N(σ, η) = −(1/2N) Σ_{i,j=1}^N σ_i σ_j − Σ_{j=1}^N η_j σ_j.   (1)

Let β > 0 be the inverse temperature. For given η, σ(t) = (σ_j(t))_{j=1}^N, with t ≥ 0, is an N-spin system evolving as a continuous time Markov chain on S^N, with infinitesimal generator L_N acting on functions f : S^N → R as follows:

L_N f(σ) = Σ_{j=1}^N e^{−βσ_j(m_N^σ + η_j)} ∇_{σ^j} f(σ),   m_N^σ := (1/N) Σ_{i=1}^N σ_i,   (2)

where ∇_{σ^j} f(σ) = f(σ^j) − f(σ), and the k-th component of σ^j, the configuration obtained from σ by flipping the spin at site j, is σ^j_k = σ_k for k ≠ j and σ^j_j = −σ_j. The quantity e^{−βσ_j(m_N^σ + η_j)} represents the jump rate of the spins, i.e. the rate at which the transition σ_j → −σ_j occurs. The expressions (1) and (2) describe a system of mean field ferromagnetically coupled spins, each with its own random magnetic field and subject to Glauber dynamics. The two terms in the Hamiltonian have different effects: the first one tends to align the spins, while the second one tends to point each of them in the direction of its local field. For simplicity, the initial condition σ(0) is such that (σ_j(0), η_j)_{j=1}^N are independent and identically distributed with law λ. Note that, since the marginal law of the η_j's is µ, λ must be of the form λ(σ, dη) = q_0(σ, η) µ(dη), with q_0(1, η) + q_0(−1, η) = 1, µ-almost surely. The quantity (σ_j(t))_{t∈[0,T]} represents the time evolution on [0, T] of the j-th spin value; it is the trajectory of the single j-th spin in time. The space of all these paths is D[0, T], the space of right-continuous, piecewise-constant functions from [0, T] to S. We endow D[0, T] with the Skorohod topology, which provides a metric and a Borel σ-field (see [11] for details).
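The Glauber dynamics just described can be simulated directly. The following sketch uses jump rates of the form e^{−βσ_j(m_N^σ + η_j)}; the bimodal field distribution and all parameter values are illustrative assumptions for demonstration only.

```python
import numpy as np

def simulate_rfcw(N=500, beta=1.0, T=5.0, seed=0):
    """Glauber dynamics for the random-field Curie-Weiss model:
    spin j flips at rate exp(-beta * sigma_j * (m_N + eta_j))."""
    rng = np.random.default_rng(seed)
    eta = rng.choice([-1.0, 1.0], size=N)   # field: mu = (delta_1 + delta_-1)/2
    sigma = rng.choice([-1, 1], size=N)     # i.i.d. initial condition
    t = 0.0
    while t < T:
        m = sigma.mean()
        rates = np.exp(-beta * sigma * (m + eta))
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # waiting time until the next flip
        j = rng.choice(N, p=rates / total)  # site chosen proportionally to its rate
        sigma[j] = -sigma[j]
    return sigma.mean()
```

Running `simulate_rfcw()` returns the magnetization m_N^σ at time T; averaging over realizations of (σ(0), η) illustrates the Law of Large Numbers behavior described in the next section.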

Limiting Dynamics
We now describe the dynamics of the process (2), in the limit as N → +∞, in a fixed time interval [0, T]. Later, the equilibria of the limiting dynamics will be studied. These results are special cases of what is shown in [6], so proofs are omitted. More details can also be found in [3].
Let (σ(t))_{t∈[0,T]} ∈ (D[0,T])^N denote a path of the system in the time interval [0, T], with T positive and fixed. If f : S × R → R, we are interested in the asymptotic (as N → +∞) behavior of empirical averages of the form

∫ f dρ_N(t) := (1/N) Σ_{j=1}^N f(σ_j(t), η_j),   where ρ_N(t) := (1/N) Σ_{j=1}^N δ_{(σ_j(t), η_j)}.

We may think of ρ_N := (ρ_N(t))_{t∈[0,T]} as a càdlàg function taking values in M_1(S × R), the space of probability measures on S × R endowed with the weak convergence topology, and the related Prokhorov metric, which we denote by d_P(·,·). The first result we state concerns the dynamics of the flow of empirical measures. We need some more notation. For a given q : S × R → R, we introduce the linear operator L_q, acting on f : S × R → R as follows:

L_q f(σ, η) = e^{βσ(m_q + η)} f(−σ, η) − e^{−βσ(m_q + η)} f(σ, η),   m_q := Σ_{σ∈S} ∫ σ q(σ, η) µ(dη).

Given η ∈ R^N, we denote by P^η_N the distribution on (D[0,T])^N of the Markov process with generator (2) and initial distribution λ. We also denote by P_N := ∫ P^η_N(·) P(dη) the joint law of the process and the field.
The McKean-Vlasov equation (4), which reads ∂_t q_t = L_{q_t} q_t with initial condition q_0, admits a unique solution in C¹([0,T], L¹(µ)^S), and q_t(·, η) is a probability on S, for µ-almost every η and every t > 0. Moreover, for every ε > 0 there exists C(ε) > 0 such that, for N sufficiently large,

P_N( sup_{t∈[0,T]} d_P(ρ_N(t), q_t) > ε ) ≤ e^{−C(ε)N},

where, by abuse of notation, we identify q_t with the probability q_t(σ, η)µ(dη) on S × R.
Thus, equation (4) describes the infinite-volume dynamics of the system. The next result gives a characterization of the stationary solutions of (4).
Lemma 2.1. Let q* : S × R → R be such that q*(σ, ·) is measurable and q*(·, η) is a probability on S. Then q* is a stationary solution of (4), i.e. L_{q*} q* ≡ 0, if and only if it is of the form

q*(σ, η) = e^{βσ(m* + η)} / (2 cosh(β(m* + η))),   (5)

where m* satisfies the self-consistency relation

m* = ∫ tanh(β(m* + η)) µ(dη).   (6)

Moreover, m* = 0 is always a solution of (6), and it is linearly (resp. neutrally) stable if and only if

β ∫ µ(dη)/cosh²(βη) < (resp. =) 1.   (7)
Remark 2.2. The transition between uniqueness and non-uniqueness of the solution of (6) is in general not related to the change of stability of m* = 0. If the distribution µ is unimodal on R, the two thresholds coincide: the paramagnetic solution is linearly stable when it is unique, and unstable when it is not. If we choose µ = ½(δ_η + δ_{−η}), with η > 0, the phase diagram is more complex: when (7) fails, the paramagnetic solution of (6) is either unstable, coexisting with a pair of opposite stable ferromagnetic solutions, or may recover linear stability, coexisting with a pair of unstable ferromagnetic solutions and a pair of stable ones (see [6] for details). A more general µ may give rise to arbitrarily many solutions of (6).
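Lemma 2.1 can be explored numerically. A small sketch, assuming the self-consistency relation (6) takes the standard random-field Curie-Weiss form m* = ∫ tanh(β(m* + η)) µ(dη), in the bimodal case µ = ½(δ_{η₀} + δ_{−η₀}) discussed in the Remark; solutions are located by a sign-change scan and the stability criterion (7) is checked directly.

```python
import numpy as np

def self_consistency_solutions(beta, eta0, grid=20000):
    """Solve m = E[tanh(beta*(m + eta))] for mu = (delta_eta0 + delta_-eta0)/2
    by locating sign changes of F(m) = RHS - m on a grid over [-1, 1]."""
    m = np.linspace(-1.0, 1.0, grid)
    rhs = 0.5 * (np.tanh(beta * (m + eta0)) + np.tanh(beta * (m - eta0)))
    F = rhs - m
    return m[:-1][np.sign(F[:-1]) != np.sign(F[1:])]

def paramagnetic_stable(beta, eta0):
    """Linear stability of m* = 0, criterion (7): beta * E[1/cosh^2(beta*eta)] < 1."""
    return beta / np.cosh(beta * eta0) ** 2 < 1
```

For instance, with η₀ = 0 (no disorder) and β = 2 the scan finds three solutions (the paramagnetic one and a ferromagnetic pair), while for β = 0.5 only m* = 0 survives and it is linearly stable.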

Dynamics of Critical Fluctuations
The results of this section concern the fluctuation flow

ρ̂_N(t) := √N (ρ_N(t) − q_t),

which takes values in the space of signed measures on S × R. It is very convenient to assume that the process starts in local equilibrium, i.e. q_0(σ, η) = q*(σ, η), where q*(σ, η) is a stationary solution of (4); it should not be hard to extend all the following results to a general initial condition. The proofs of all results stated here will be given in Section 5. We first state results valid for all temperatures; later, Lemma 2.3, Proposition 2.3 and Theorems 2.2 and 2.3 are restricted to the critical case. Functions on S × R are all of the form F(σ, η) = γ(η) + σφ(η). However, ∫ γ(η) dρ̂_N(t) does not change in time, and has a Gaussian limit for every γ ∈ L²(µ). Thus, we are only interested in the evolution of integrals of the type ∫ σφ(η) dρ̂_N(t). It is therefore natural to control the action of the generator L_N on functions of σ and η of the form ψ(∫ σφ(η) dρ̂_N), with ψ : R^n → R and φ = (φ_1, …, φ_n).
Proposition 2.1. Let ψ : R^n → R be of class C¹, and φ ∈ L²(ν)^n, where ν is the measure on R defined below. Then the expansion (10) holds, with a remainder o(1) involving a vector-valued function H(σ, η) and a term R_N satisfying

lim_{N→+∞} sup_{|x|,|y|,|z|,|w|≤M} R_N(x, y, z, w) = 0   (13)

for every M > 0.
Proposition 2.1, whose proof consists of a rather standard computation sketched in Section 5, is the essential ingredient for proving a Central Limit Theorem for the empirical flow, i.e. for showing that the fluctuation flow converges in law to a Gaussian process. The proof of this result requires identifying an appropriate Hilbert space for the fluctuations ρ̂_N (see e.g. [5] for related results). Our main aim is, however, to describe large-time fluctuations at the critical points; the additional technical difficulties arising have not allowed us to obtain the desired results under the present assumptions, in particular with no requirement on the field distribution µ (except evenness). We therefore find it preferable to make the following assumption at this point.
(F) µ has finite support D.
Under assumption (F), the space L 2 (ν) is finite-dimensional. Together with the following simple result, this greatly simplifies the analysis of fluctuations.
Proposition 2.2 states that the rescaled empirical averages converge in law to the Gaussian process (X_i)_{i=0}^{m−1} solving a system of linear stochastic differential equations, driven by independent standard Brownian motions that are independent of the vector (X_0(0), …, X_{m−1}(0)). Note that the randomness of the field persists in the limiting dynamics of the fluctuations, through the correlated, constant random drifts H_i. Observe that H_i ≡ 0 if µ = δ_0, i.e. when the random field is absent. We now look more closely at fluctuations around the paramagnetic solution m* = 0 in the critical regime, i.e. for those values of β for which β ∫_D µ(dη)/cosh²(βη) = 1. In this regime we have λ_0 = 0 and λ_i > 0 for i > 0 (it is actually easily shown that λ_i ≥ 1 for i > 0). It follows that the process X_0(t) in Proposition 2.2 has a variance that diverges as t → +∞. A sharper description of the large-time fluctuations is obtained by considering more "moderate" fluctuations. The following result, Proposition 2.3, improves the expansion given in Proposition 2.1. Note that in Proposition 2.3, functions depending only on η are still integrated with respect to ρ̂_N; indeed, by the standard Central Limit Theorem, those integrals have a Gaussian limit under P_N. Proposition 2.3 allows us to deal easily with the homogeneous case µ = δ_0. Using the notation of Proposition 2.2 we have m = 1, ϕ_0 ≡ 1. Thus, using Proposition 2.3 with n = 1, φ ≡ 1 and β = β_c = 1, we easily observe that L^(1)ψ = L^(2)ψ ≡ 0. Using convergence of generators as in Proposition 2.2, we readily obtain the dynamics of large-time critical fluctuations for the homogeneous model. This result is a simple special case of what is obtained in [5].
Theorem 2.2. Assume µ = δ_0 and β = 1. The rescaled critical fluctuation process converges weakly, under P_N, to the unique solution of a stochastic differential equation driven by a standard Brownian motion W.
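The limiting equation of Theorem 2.2 (not displayed above) is, in this homogeneous critical setting, an ergodic diffusion with cubic drift. As an illustration, the following Euler-Maruyama sketch simulates a generic diffusion of that type, dX = −cX³ dt + σ dW; the constants c and σ here are illustrative placeholders, not the ones appearing in the theorem.

```python
import numpy as np

def euler_maruyama_cubic(x0=0.0, c=1.0, sigma=np.sqrt(2.0), T=50.0, dt=1e-3, seed=0):
    """Simulate dX = -c*X^3 dt + sigma dW; the cubic drift confines the
    process, so it is ergodic despite the vanishing linear restoring force."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] - c * x[k] ** 3 * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```

A histogram of a long trajectory approximates the heavy-shouldered, non-Gaussian stationary density proportional to exp(−c x⁴ / (2σ²)), in contrast with the Gaussian fluctuations of the subcritical regime.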
As we will see (proofs are in Section 5), the inhomogeneous case requires more sophisticated arguments.
Definition 2.1. We say that a sequence of stochastic processes (ξ_n(t))_n, t ∈ [0, T], collapses to zero if, for every ε > 0,

lim_{n→+∞} P( sup_{t∈[0,T]} |ξ_n(t)| > ε ) = 0.

Theorem 2.3. Let ϕ_0, …, ϕ_{m−1} be the basis introduced in Proposition 2.2. Under P_N, the fluctuations along the non-critical directions collapse to zero, while the critical fluctuation process converges weakly to the process (tH)_{t∈[0,T]}, where H is a Gaussian random variable with zero mean and variance ∫_D tanh²(βη)µ(dη).
Thus, the disorder has a dramatic impact on fluctuations at the critical point: fluctuations arise on a much shorter time scale (N^{1/4} rather than N^{1/2}), and have the simple form of a linear function with random slope.

Description of the Model
Let I = [0, 2π) be the one-dimensional torus, and µ an even probability on R. Let also η = (η_j)_{j=1}^N ∈ R^N be a sequence of independent, identically distributed random variables, defined on some probability space (Ω, F, P), and distributed according to µ. Given a configuration x = (x_j)_{j=1}^N ∈ I^N and a realization of the random environment η, we can define the Hamiltonian H_N(x, η), where x_j is the position of the rotator at site j and ωη_j, with ω > 0, can be interpreted as its own frequency. Let θ > 0 be the coupling strength. For given η, the stochastic process x(t) = (x_j(t))_{j=1}^N, with t ≥ 0, is an N-rotator system evolving as a Markov diffusion process on I^N, with infinitesimal generator L_N acting on C² functions f : I^N → R. Consider the complex quantity

r_N e^{iΨ_N} := (1/N) Σ_{j=1}^N e^{i x_j},   (18)

where 0 ≤ r_N ≤ 1 measures the phase coherence of the rotators and Ψ_N measures the average phase. We can reformulate the expression of the infinitesimal generator (17) in terms of (18). The expressions (16) and (19) describe a system of mean field coupled rotators, each with its own frequency and subject to diffusive dynamics. The two terms in the Hamiltonian have different effects: the first one tends to synchronize the rotators, while the second one tends to make each of them rotate at its own frequency.
For simplicity, the initial condition x(0) is such that (x_j(0), η_j)_{j=1}^N are independent and identically distributed with law λ. We assume λ is of the form λ(dx, dη) = q_0(x, η) µ(dη) dx, with ∫_I q_0(x, η) dx = 1, µ-almost surely. The quantity x_j(t) represents the time evolution on [0, T] of the j-th rotator; it is the trajectory of the single j-th rotator in time. The space of all these paths is C[0, T], the space of continuous functions from [0, T] to I, endowed with the uniform topology.
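The rotator dynamics can likewise be simulated. The sketch below uses an Euler scheme and assumes the standard mean field form of the drift, θ r_N sin(Ψ_N − x_j) + ω η_j, with unit diffusion coefficient; all parameter values are illustrative.

```python
import numpy as np

def simulate_kuramoto(N=1000, theta=3.0, omega=0.5, T=20.0, dt=1e-2, seed=0):
    """Euler scheme for the noisy mean-field Kuramoto diffusion:
    dx_j = (theta * r_N * sin(Psi_N - x_j) + omega * eta_j) dt + dB_j."""
    rng = np.random.default_rng(seed)
    eta = rng.choice([-1.0, 1.0], size=N)   # bimodal environment, as in (H1) below
    x = rng.uniform(0.0, 2.0 * np.pi, size=N)  # incoherent initial condition
    for _ in range(int(T / dt)):
        z = np.exp(1j * x).mean()           # r_N * exp(i * Psi_N)
        r, psi = np.abs(z), np.angle(z)
        drift = theta * r * np.sin(psi - x) + omega * eta
        x = (x + drift * dt + np.sqrt(dt) * rng.standard_normal(N)) % (2.0 * np.pi)
    return np.abs(np.exp(1j * x).mean())    # final phase coherence r_N
```

For coupling θ above the critical threshold the returned coherence r_N stabilizes at a macroscopic value, while below threshold it stays of order 1/√N.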

Limiting Dynamics
We now describe the dynamics of the process (17), in the limit as N → +∞, in a fixed time interval [0, T]. Later, the equilibria of the limiting dynamics will be studied. These results are special cases of what is shown in [6], so proofs are omitted.
We may think of ρ_N := (ρ_N(t))_{t∈[0,T]} as a continuous function taking values in M_1(I × R), the space of probability measures on I × R endowed with the weak convergence topology, and the related Prokhorov metric, which we denote by d_P(·,·). The first result we state concerns the dynamics of the flow of empirical measures. We need some more notation. For a given q : I × R → R, we introduce the linear operator L_q, acting on f : I × R → R as follows:

L_q f(x, η) = (1/2) ∂²_x f(x, η) − ∂_x [ (θ r_q sin(Ψ_q − x) + ωη) f(x, η) ],

where r_q e^{iΨ_q} := ∫_I ∫ e^{ix} q(x, η) µ(dη) dx.
Given η ∈ R^N, we denote by P^η_N the distribution on (C[0,T])^N of the Markov process with generator (17) and initial distribution λ. We also denote by P_N := ∫ P^η_N(·) P(dη) the joint law of the process and the environment.
The McKean-Vlasov equation (22) admits a unique solution, and q_t(·, η) is a probability on I, for µ-almost every η and every t > 0. Moreover, for every ε > 0 there exists C(ε) > 0 such that, for N sufficiently large,

P_N( sup_{t∈[0,T]} d_P(ρ_N(t), q_t) > ε ) ≤ e^{−C(ε)N},

where, by abuse of notation, we identify q_t with the probability q_t(x, η)µ(dη)dx on I × R. Thus, equation (22) describes the infinite-volume dynamics of the system. Since µ is symmetric and the operator L preserves evenness, we can suppose the average phase Ψ_{q_t} ≡ 0 without loss of generality. The next result gives a characterization of the stationary solutions of (22).
Lemma 3.1. Let q* : I × R → R be such that q*(x, ·) is measurable and q*(·, η) is a probability on I. Then q* is a stationary solution of (22), i.e. L_{q*} q* ≡ 0, if and only if it is of the form (23), where Z* is a normalizing factor and r* satisfies the self-consistency relation (24). Moreover, r* = 0 is always a solution of (24) and, letting θ_c be as in (25), we have that:
1. if µ is unimodal on R, then the solution r* = 0 is linearly (resp. neutrally) stable if and only if θ < (resp. =) θ_c;
2. if µ is of bimodal type, as in assumption (H1) below, then the solution r* = 0 is linearly (resp. neutrally) stable if and only if θ < (resp. =) θ_c ∧ 2.
Remark 3.1. The transitions uniqueness/non-uniqueness of the solution of (24) and stability/instability of r* = 0 in general do not occur at the same threshold. They do, however, coincide in case 1 of the previous Lemma. The phase diagram in case 2 is more complicated; we refer to [6] for further details.

Dynamics of Critical Fluctuations
The results of this section concern the fluctuation flow

ρ̂_N(t) := √N (ρ_N(t) − q_t),

which takes values in the space of signed measures on I × R. It is very convenient to assume that the process starts in the particular local equilibrium q_0(x, η) = q*(x, η) = 1/2π, which is the stationary solution of (22) corresponding to r* = 0. The proof of the Central Limit Theorem (Proposition 3.2) should not be hard to adapt to the case where q*(x, η) is a synchronized stationary solution of (22), i.e. with r* ≠ 0. The proofs of all results stated here will be given in Section 6. If φ is a function on I × R, we are interested in the evolution of integrals of the type ∫ φ(x, η) dρ̂_N(t). It is therefore natural to control the action of the generator L_N on functions of x and η of the form ψ(∫ φ(x, η) dρ̂_N), with ψ : R^n → R and φ = (φ_1, …, φ_n).
Proposition 3.1. Let ψ : R^n → R be of class C², and φ ∈ (C²([0, 2π) × {−1, 1}))^n be 2π-periodic in the first argument. Then the corresponding expansion holds, where the operator appearing in it is the linearization of L, given by (21), around the equilibrium distribution q*.
Unlike the proof of Proposition 2.1, which requires an expansion of the generator, Proposition 3.1 follows by the direct application of the generator; its proof is omitted. It provides the key computation for the proof of the Central Limit Theorem (Proposition 3.2 below). In order to simplify the analysis, we make the following assumption on the distribution of the random environment.
(H1) µ = ½(δ_1 + δ_{−1}).
Because of the structure of the system, it is reasonable to focus on functions on I × R of the form φ(x, η) = cos(hx), sin(hx), η cos(hx) or η sin(hx), for h ≥ 1 integer, and thus on the behavior of the corresponding empirical averages. Proposition 3.2 states that these fluctuations converge to a Gaussian process driven by independent standard Brownian motions, where δ_{1h} denotes the Kronecker delta. Note that the randomness of the field appears only through the parameter ω in the dynamics of the fluctuations: the only source of stochasticity is the Brownian motions.
We now proceed to the analysis of the critical regime, i.e. for θ = θ c ∧ 2, where θ c is given in (25) and, under (H1), θ c = 1 + 4ω 2 . We make the following further assumption.
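The critical parameters under (H1) can be checked with a few lines. A sketch (the function name is ours; the cap at 2 reflects the threshold θ_c ∧ 2 of Lemma 3.1, and 1/(2√2) is the disorder threshold appearing in the results below):

```python
import numpy as np

def critical_coupling(omega):
    """Critical coupling under (H1): theta_c = 1 + 4*omega**2,
    capped at the threshold 2 of Lemma 3.1 (i.e. min(theta_c, 2))."""
    return min(1.0 + 4.0 * omega ** 2, 2.0)

# (H2) requires omega < 1/2, which is exactly the range where theta_c < 2.
assert critical_coupling(0.25) == 1.25
assert critical_coupling(0.75) == 2.0

# Disorder threshold separating ergodic critical fluctuations from the
# exploding regime: omega* = 1/(2*sqrt(2)) lies strictly inside (0, 1/2).
omega_star = 1.0 / (2.0 * np.sqrt(2.0))
assert 0.0 < omega_star < 0.5
```

This makes the phase diagram of the critical regime concrete: both the cap θ_c = 2 and the explosion threshold ω* are reached strictly inside the parameter range allowed by (H2).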
(H2) ω < 1/2.
Under assumptions (H1)-(H2), we have sufficient control of the spectrum of L as an operator on L²([0, 2π) × {−1, 1}). In particular, L can be diagonalized in the critical regime, as stated in the next Lemma, which also gives the spectrum of L and the corresponding eigenspaces. In the critical regime θ = θ_c = 1 + 4ω², the variances of the processes corresponding to the directions generating the kernel of the operator L, i.e. the fluctuations of the associated empirical averages, diverge as t → +∞. A sharper description of the large-time fluctuations is obtained by considering more "moderate" fluctuations: we will obtain asymptotics, as N → +∞, for the rescaled signed measures evaluated at time √N t. Note that these measures are completely characterized by their integrals against the functions above, with h ≥ 1 and i = 1, 2, 3, 4.
where W^(1) and W^(2) are two independent standard Brownian motions. In the case 1/(2√2) < ω < 1/2, the process (V^(1)(t), V^(2)(t)) explodes in finite time; the convergence above holds for the localized processes, i.e., for every r > 0, for the process stopped when its norm exceeds r. By Theorem 3.2 we can derive the limiting dynamics of the critical fluctuations for the homogeneous model µ = δ_0: they are obtained as the particular case ω = 0.
Under P_N the process Y(t) converges weakly to the unique solution of a stochastic differential equation driven by two independent standard Brownian motions W^(1) and W^(2).

Collapsing processes
Before giving the details of the proofs of the results stated previously, we briefly present one of the key technical tools: a Lyapunov-type condition that guarantees a rather strong form of convergence to zero for a sequence of stochastic processes. The first result we state (Proposition 4.1) concerns semimartingales driven by Poisson processes; its proof can be found in the Appendix of [5]. In the case where the driving noises are Brownian motions, the result takes a slightly simpler form (Proposition 4.2); its proof is a simple adaptation of the one in [5], and it is omitted.
Here, Λ_n is a point process of intensity A_n(t, dy)dt on R_+ × Y, where Y is a measurable space, and S_n(t) and f_n(t) are A_t-adapted processes, (A_t)_{t≥0} being the filtration on (Ω, A, P) generated by Λ_n. Let d > 1 and let C_i be constants independent of n and t. Suppose {κ_n}_{n≥1}, {α_n}_{n≥1} and {β_n}_{n≥1} are increasing sequences satisfying the stated growth conditions, and let {τ_n}_{n≥1} be stopping times such that the stated bounds hold for t ∈ [0, τ_n] and n ≥ 1, uniformly over ω ∈ Ω, y ∈ Y, t ≤ τ_n. Then, for any ε > 0, there exist C_6 > 0 and n_0 such that the collapse estimate holds. In the Brownian case, (W_i)_{i=1}^{m_n} are independent standard Brownian motions generating a filtration (A_t)_{t≥0}, and S_n(t) and f_n(t, i) are A_t-adapted processes; under the analogous assumptions on d > 1, the constants C_i, the increasing sequences {κ_n}, {α_n}, {β_n} and the stopping times {τ_n}, the same conclusion holds: for any ε > 0, there exist C_6 > 0 and n_0 such that the collapse estimate holds.

Proofs for the Random Curie-Weiss Model
Proof of Lemma 2.2. Obviously L is a linear and continuous operator. It remains to verify the self-adjointness identity; once this is checked, the proof of self-adjointness is completed.
Proof of Proposition 2.3. It is obtained by a simple rescaling of the last expansion of L_N ψ(∫ σφ(η) dρ̂_N) seen in the proof of Proposition 2.1. The details are omitted.
In the rest of this section, we often consider the time-rescaled infinitesimal generator J_N = N^{1/4} L_N, where L_N is given by (14).

Collapsing Terms
We later consider, for j ∈ S and k ∈ D, the counting process Λ^σ_N(j, k, t), which counts the number of spin flips of spins σ_i such that σ_i = j and η_i = k, up to time N^{1/4} t. We consider the corresponding semimartingale decomposition, with M^{N,Y}_t the local martingale driven by the compensated point process: the quantity Λ̃^σ_N(j, k, dt) is the difference between the point process Λ^σ_N(j, k, dt), defined on S × D × R_+, and its intensity λ^σ(j, k, t) dt.
Lemma 5.1. For every ε > 0 there exists N_0 such that, for every M > 0, there is a constant C_6 > 0 for which the collapse estimate holds.
Proof. The main tool is Proposition 4.1. However, some assumptions in Proposition 4.1 are not satisfied uniformly in the environment. We will therefore condition on a suitable event A_K. The random field η is i.i.d., so it satisfies a standard Central Limit Theorem; therefore, we can choose K > 0 such that P(A_K) is close to 1 for every N ≥ 1. Constants below are allowed to depend on K; this dependence is omitted. We are left to show that, for every M > 0, there is C_6 > 0 such that the estimate (39) holds, where P_K(·) := P(· | A_K). To prove (39) we check the conditions in Proposition 4.1.
We start by noticing that a Central Limit Theorem applies to the processes ∫ σϕ_i(η) dρ_N(0), since the random variables (σ_j(0), ϕ_i(η_j))_{j=1}^N are independent; so, in the limit as N → +∞, N^{1/4} Y^{(N)}_i(0) converges to a Gaussian random variable and, since the (σ_j(0)ϕ_i(η_j))_{j=1}^N are bounded random variables, all moments converge. In particular (40) holds.
for suitable constants δ, C_2 > 0, which are allowed to depend on M, and all t ∈ [0, τ^M_N] (recall that β_N ≡ 1). Using this expansion we can perform the computation as in Proposition 2.3, but keeping track of the remainders. The first term of the resulting expression is handled directly; we are left to show that all remaining terms are bounded, for t ∈ [0, τ^M_N] and η ∈ A_K, assuming that in (42) ρ̂_N is evaluated at time N^{1/4} t. All remaining terms in (42) are of the form of an integral of some real-valued f. Since (ϕ_h)_{h=0}^{m−1} form a basis for the vector space of these functions, we can write f as a linear combination with coefficients α_h, and the resulting constant C depends on m and on the coefficients α_h, but not on N. With all this, (42) implies the desired bound for some M-dependent constant C(M).
Step 4. We check (a4) of Proposition 4.1 (see equation (34)). For t ≤ τ^M_N, this follows easily.
Step 5. We check (a5) of Proposition 4.1 (see equation (36)). Recalling the definitions of ∇ and λ(j, k, t), which can be found in (35) and (36), boundedness of the relevant expression for t ∈ [0, τ^M_N], η ∈ A_K follows readily from the boundedness of the Y^{(N)}_i.
Step 6. Conclusion. It is now enough to use (31). The next step is to prove, for every ε > 0 and N ≥ 1, the existence of a constant M > 0 such that P(τ^M_N ≤ T) ≤ ε. This fact, together with Lemma 5.1, implies that the processes Y^{(N)}_i(t), i = 1, …, m − 1, converge to zero in probability, as N grows to infinity, for t in the whole time interval [0, T]. As in (39), we can replace P by P_K for sufficiently large K.
The idea is to consider a martingale decomposition as in (33) for ψ(Y^{(N)}_0), where ψ ∈ C¹ has bounded first derivative, with Λ^σ_N as in (36). The point now is to get bounds on J_N ψ(Y^{(N)}_0). We proceed as in (42); the only difference is in the "gradient term". Proceeding as in (42), it is easily seen that the expansion holds up to a remainder O^M_N. Due to (39), we can choose a constant C(M) for which the relevant term is bounded for t ≤ τ^M_N, with probability greater than 1 − ε/4. Denote by B_ε the event that this bound holds true. Putting everything together, we have therefore proved the desired bound.
This means that, by (46), the stated inequality implies, for N and M large enough, that one of a few alternative events occurs, and we obtain the corresponding inequality for the probability of the set of interest. We estimate the three terms on the right-hand side of this inequality.
• Since at time t = 0 the spins are distributed according to a product measure, Y^{(N)}_0(0) is N^{1/4} times the sample average of independent, bounded random variables with mean zero. Therefore, for some constant C > 0, in the limit as N → +∞ we have convergence to zero in L¹, and hence in probability; the desired bound then holds for N sufficiently large.
• It is therefore enough to show that E[(M^t_{N,ψ})²] is bounded uniformly in N and M. This follows from (47), since ψ is Lipschitz (see also (36)) and, by (36), λ^σ(j, k, t) ≤ C N.

Identification of the Limiting Generator and Convergence
We are going to show that, in the limit of infinite volume and for $t \in [0,T]$, the process $Y_0^{(N)}(t)$ admits a limit in distribution, which we will be able to identify.
First, we need to prove tightness of the sequence $Y$. This property implies the existence of convergent subsequences. Secondly, we will verify that all convergent subsequences have the same limit, and hence the sequence $Y$ itself must converge to that limit.
is tight.
Proof. Following [5], we use the following tightness criterion:

2. for every $\varepsilon > 0$ and $\alpha > 0$ there exists $\delta > 0$ such that

where the second sup is over stopping times $\tau_1$ and $\tau_2$, adapted to the filtration generated by the process $\xi_N$.
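Since the displayed conditions (50)-(51) are referenced but not reproduced here, we record one standard Aldous-type form of such a criterion; this is a hedged restatement with our notation, not a quotation of [5]:

```latex
% Aldous-type tightness criterion (hedged restatement; notation ours):
\text{(i)}\;\; \lim_{M\to+\infty}\,\sup_{N}\,
  P\Bigl(\sup_{t\in[0,T]}\bigl|Y^{(N)}(t)\bigr| > M\Bigr) = 0 ;
\qquad
\text{(ii)}\;\; \sup_{N}\;\sup_{\tau_1\le\tau_2\le(\tau_1+\delta)\wedge T}
  P\Bigl(\bigl|Y^{(N)}(\tau_2)-Y^{(N)}(\tau_1)\bigr| > \alpha\Bigr)
  \le \varepsilon ,
```

where the inner supremum in (ii) runs over stopping times adapted to the filtration generated by $\xi_N$.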
We must verify that conditions (50) and (51) hold. Since we have already proved that for every $\varepsilon > 0$ the inequality $P\{\tau^M_N \le T\} \le \varepsilon$ holds for $M$ sufficiently large, uniformly in $N$, it is enough to show tightness for the stopped processes.
We have already shown that, for $M$ large enough,

which yields (50). To obtain (51), we notice that

where we have denoted

and $\Lambda^\sigma_N$ is as in (36). As in the proof of Lemma 5.1, one shows that both $J_N Y$

denote one such subsequence and let

where

The remainder $o_M(1)$ goes to zero as $n \to +\infty$, for $t \le \tau^M_n$. If we compute the limit as $n \to +\infty$, using the facts that a Central Limit Theorem applies to the term $\int \tanh(\beta\eta)\,d\rho_n(t)$, that the integral against $d\rho_n(t)$ is zero since $\rho_n$ is a centered measure, and that the process $\int \sigma\tanh(\beta\eta)\,d\rho_n(t)$ collapses since $\tanh(\beta\eta)$ and $\varphi_0(\eta) = \frac{1}{\cosh(\beta\eta)}$ are orthogonal in $L^2(\nu)$, we have, in the sense of weak convergence of processes:

and

where $H$ is a Gaussian random variable. Then, because of (53) and (45), we obtain

for $t \in [0,T]$. We must prove the following lemma.

Lemma 5.3. $M^t_\psi$ is a martingale (with respect to $t$); in other words, for all $s,t \in [0,T]$, $s \le t$, and for all measurable and bounded functions $g(Y_0([0,s]))$ the following identity holds:

Proof. We begin by showing that (54) follows from the fact, to be proved later, that for every fixed $t$, $\{M^t_{n,\psi}\}_{n\ge1}$ is a uniformly integrable sequence of random variables. Since $M^t_{n,\psi}$ is a martingale (with respect to $t$) for every $n$, we have that for all $s,t \in [0,T]$, $s \le t$, and for all measurable and bounded functions $g(Y_0([0,s]))$,

Now, as we have seen, $M^t_{n,\psi}$ and $M^s_{n,\psi}$ have a weak limit; this, together with uniform integrability, implies convergence in $L^1$. Thus (54) follows by taking the limit in (55). It remains to check that $\{M^t_{n,\psi}\}_{n\ge1}$ is a uniformly integrable family. A sufficient condition for uniform integrability is that $\sup_n E[|M^t_{n,\psi}|^2] < +\infty$ (see again [19]). This, however, is exactly what we already proved in (49).
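The limit argument in the proof of Lemma 5.3 can be displayed schematically; the following is a sketch of the passage from (55) to (54), valid under the uniform integrability established above:

```latex
% For all 0 <= s <= t <= T and bounded measurable g (sketch):
\mathbb{E}\Bigl[\bigl(M^{t}_{n,\psi}-M^{s}_{n,\psi}\bigr)\,
    g\bigl(Y_0([0,s])\bigr)\Bigr] = 0
\quad\xrightarrow[\;n\to+\infty\;]{}\quad
\mathbb{E}\Bigl[\bigl(M^{t}_{\psi}-M^{s}_{\psi}\bigr)\,
    g\bigl(Y_0([0,s])\bigr)\Bigr] = 0 ,
```

where passing to the limit inside the expectation is justified by weak convergence together with the uniform integrability guaranteed by $\sup_n \mathbb{E}[|M^t_{n,\psi}|^2] < +\infty$.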
Proof of Theorem 2.3. We have shown that any weak limit of $Y_0^{(n)}(\,\cdot\,)$ solves the martingale problem with infinitesimal generator $J$, which admits a unique solution. It follows that all convergent subsequences have the same limit, and so the sequence itself converges to that limit.

Proofs for the Random Kuramoto Model
Throughout this section we assume $\omega \le \frac{1}{2\sqrt{2}}$, even though this assumption will be relevant only starting from Section 6.3. Whenever needed, we will comment on the changes necessary to cover the case $\frac{1}{2\sqrt{2}} < \omega < \frac{1}{2}$.
Remark 6.1. In the case $\theta = 1 + 4\omega^2$, the unique value for which the self-consistency relations in (57) are satisfied is $A = B = 0$, meaning that at the critical point the kernel of the operator $L$ is two-dimensional, while it is trivial for all other values of the parameter $\theta$.
The part of the statement of Lemma 3.2 concerning spectrum and eigenspaces is easily proved by direct computation, and the fact that the set {v

Perturbation Theory
In the rest of the section, we often consider the time-rescaled infinitesimal generator

where $L_N$ is given by (27). To determine the limiting generator $J$, we need to apply first-order perturbation theory. The methodology for treating a perturbation problem of this kind was developed in [17], extending earlier work in [14,16]. It will be useful to keep in mind the following simple fact, which is just a restatement of Proposition 3.1.
As a first step (Section 6.3), we show that for every $\varphi \in \operatorname{span}(v$

collapses to zero in the sense of Definition 2.1. We are therefore left to understand the behavior, as $N \to +\infty$, of the two-dimensional process

For this reason, for $\psi \in C^2(\mathbb{R}^2,\mathbb{R})$, we need to control $\int v_1\,d\rho_N$.
The first term on the r.h.s. of (58) vanishes, since $v_1 \in \ker(L)$. In order to compensate for the second, diverging term $N^{1/4} L^{(2)}\psi$, one introduces a "small" perturbation of $\psi$ of the form

for some $\psi_1$ to be chosen. We obtain

In order to avoid the divergence, $\psi_1$ should be chosen in such a way that $L^{(2)}\psi + L^{(1)}\psi_1 = 0$. At a purely formal level we are led to set

which gives

The operator $J$ is therefore the candidate for the generator of the limiting process $(V^{(1)}, V^{(2)})$. In order to turn this formal argument into a rigorous proof, the following two steps are needed:

1. The operator $(L^{(1)})^{-1} L^{(2)}$ has to be properly defined.
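The formal cancellation just described can be summarized in one line. The following is a sketch (the displayed equations (59)-(62) are not reproduced in this text; the exponent $1/4$ matches the $N^{1/4}$ divergence noted above):

```latex
% First-order perturbation of the test function (sketch):
\psi_N \;=\; \psi + N^{-1/4}\,\psi_1 ,
\qquad
L^{(2)}\psi + L^{(1)}\psi_1 = 0
\;\Longrightarrow\;
\psi_1 \;=\; -\bigl(L^{(1)}\bigr)^{-1} L^{(2)}\psi ,
```

so that the diverging term $N^{1/4} L^{(2)}\psi$ is cancelled, and the finite part that survives in the limit defines the candidate generator $J$.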
2. From the above convergence of operators one must derive weak convergence of processes.
Step 2 will be dealt with in Section 6.4, through standard martingale techniques.
We consider now step 1. The computations needed are rather long, but they follow a few basic ideas, which we now illustrate. First observe that

We give the details for the term $\iint \frac{\partial v^{(1)}_1}{\partial x}(x,\eta)\sin(y-x)\,d\rho_N\,d\rho_N$, the others being similar. Letting

and applying standard trigonometric formulas, we obtain

This means that $L^{(2)}\psi\bigl(\int v_1\,d\rho_N\bigr)$ is a linear combination of terms of the form

with $i = 1,2$ and $j,h = 1,2,3,4$. If we denote by $\lambda^j_k$ the eigenvalue of $L$ corresponding to the eigenfunction $v^{(j)}_k$, then in the critical case $\theta = 1 + 4\omega^2$ we easily obtain

and this defines $(L^{(1)})^{-1}$ for the whole expression in (64). Thus, the perturbation (61) is now well defined. A further comment is in order. In the expression for the limiting generator in (62), the quantity

appears. Moreover, we have seen that $(L^{(1)})^{-1} L^{(2)}\psi$ is a linear combination of terms as in (65). We will prove later that, when evaluated at time $\sqrt{N}t$,

• the sequences of processes $\int v$

• the sequences of processes $\int v^{(j)}_1\,d\rho_N$, $j = 1,2$, are tight.
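The definition of $(L^{(1)})^{-1}$ on the terms in (65) rests on the diagonal action of $L$ on its eigenfunctions; in the notation above, and provided $\lambda^j_k \neq 0$, the following sketch captures the mechanism:

```latex
% Diagonal inversion on eigenfunctions (sketch):
L^{(1)} v^{(j)}_k = \lambda^j_k\, v^{(j)}_k ,
\qquad \lambda^j_k \neq 0
\;\Longrightarrow\;
\bigl(L^{(1)}\bigr)^{-1} v^{(j)}_k
  = \frac{1}{\lambda^j_k}\, v^{(j)}_k .
```

At the critical point $\theta = 1 + 4\omega^2$ the kernel of $L$ is two-dimensional (Remark 6.1), so the inverse is applied only to components outside the kernel, which is exactly why the expression in (64) is well defined.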
In particular, the processes $(L^{(1)})^{-1} L^{(2)}\psi$ collapse to zero. We then have to apply $L^{(2)}$ again. It is easy to show the following.
• When the term $\int v^{(j)}_2\,d\rho_N$ has $j = 3,4$, i.e. it has "two collapsing factors", then it still collapses to zero.
• When $j = 1,2$, non-collapsing terms in the expression above arise from

since, when the prosthaphaeresis (product-to-sum) formulas are applied, terms of the form $\int v^{(j)}_1\,d\rho_N \int v^{(l)}_1\,d\rho_N$, $j,l = 1,2$, appear. Carefully performing a long but straightforward calculation, one obtains the following statement.

Proposition 6.2. Up to collapsing terms (which is what the symbol $\simeq$ is intended to mean), we have

Collapsing Processes
From now on we always assume $\theta = 1 + 4\omega^2$, with $\omega < \frac12$. In what follows, it is more convenient to work with the following real-valued basis of

where

$y^{(1)}_h(x,\eta) := \cos hx$, $\quad y^{(2)}_h(x,\eta) := \sin hx$, $\quad y^{(3)}_h(x,\eta) := \eta\cos hx$, $\quad y^{(4)}_h(x,\eta) := \eta\sin hx$.

We also set $Y$

Clearly, showing that the sequences of processes $\int v^{(h)}_2\,d\rho_N$, $h = 1,2,3,4$, collapse to zero

which, in turn, is implied by the fact that the sequence $\|\rho_N\|^2_r$ collapses to zero. All processes here are meant to be evaluated at time $\sqrt N t$. Our first result concerns the collapsing of the stopped process $\rho_N($

Lemma 6.1. Fix $d > 2$ and $r > \frac32$. Then, for every $\varepsilon > 0$ and $M > 0$, there exist $N_0 > 0$ and $C_5 > 0$ for which

Proof. We apply Proposition 4.2. We set $\kappa_N = \sqrt N$, $\alpha_N = N$

where $\{W_j(t) : t > 0,\ j = 1,\dots,N\}$ is a system of independent standard Brownian motions on $[0,2\pi]$. We show the following inequalities for every $t \in [0,\tau^M_N]$, which imply (b3) and (b4):

for some constant $C$ that is allowed to depend on $M$.
Step 1: proof of (68). We use (58): (70)

We begin by dealing with $L^{(1)}\|\rho_N\|^2_r$. Due to the uniform convergence of the series defining $\|\rho_N\|^2_r$, we can apply $L^{(1)}$ term by term. For $i = 3,4$,

Also, by direct computation,

Letting $\lambda := 1 - 4\omega^2 > 0$, by (71) and (72) we obtain

We now compute $L^{(2)}\|\rho_N\|^2_r$. A "typical" summand with $h \ge 2$ of the infinite sum giving $L^{(2)}$

By using the prosthaphaeresis formulas, one realizes that $\iint y \,\frac{\partial}{\partial x}\sin(y-x)\,d\rho_N\,d\rho_N$ is a linear combination, with uniformly bounded coefficients, of terms of the form

Summing over $h \ge 2$ and observing that, for $t \in [0,\tau^M_N]$, $Y^{(j,N)}_1(t) \le cM$ for some constant $c$, we obtain (omitting the evaluation at $\sqrt N t$)

Putting together (73), (75) and (76),

where the last inequality holds for $N$ sufficiently large, so that

Consider now the term $L^{(3)}\|\rho_N\|^2_r$. We have

By the simple bound

and using the fact that, for $r > \frac32$, the sum over $h \ge 2$ is bounded by some constant $C$. The treatment of the term $L^{(4)}\|\rho_N\|^2_r$ is quite similar, since it is obtained from (78) by replacing $q^*$ with $\bar\rho_N$. Since $\bar\rho_N$ has total variation of order $N^{1/4}$, we get

By (77), (79) and (80), (68) follows.
Step 2: proof of (69). Consider the summand

The summands containing $v^{(i)}_1$ are dealt with similarly. We have

from which (69) follows.

we obtain the following estimate, which does not require any stopping argument:

We can now prove the main result of this section, corresponding to the first part of Theorem 3.2.
collapses to zero.
Proof. Given the result of Lemma 6.1, all we have to show is that for every $\varepsilon > 0$ there exist $M, N_0 > 0$ such that

Consider the function $\psi(x,y) := \sqrt{1 + x^2 + y^2}$.
Note that $\psi$ has uniformly bounded partial derivatives, and $\psi(x,y) \ge \min(|x|,|y|)$. We begin by observing that

For the proof of (84) we consider the perturbation, as illustrated in Section 6.2, with

As seen in Section 6.2, $\psi_1$ is a linear combination of terms of the form

and therefore, up to time $\tau^M_N$, it can be bounded in absolute value by some $M$-dependent constant $C(M)$. This implies that, for every given $M$ and for $N$ large enough,

(85)

By abuse of notation, we write $\psi_N(t)$ in place of

Consider the semimartingale representation

where

and $\{W_j(t) : t > 0,\ j = 1,\dots,N\}$ is a system of independent standard Brownian motions on $[0,2\pi]$. We have

The term $P\bigl(\psi_N(0) \ge \frac{M}{9}\bigr)$ is easy to control, since the random variables $V^{(i,N)}_h(0)$ converge to zero in probability. We are therefore left to show that the probabilities

and

are small for $N$ large enough. We begin with (88). By (60) and the choice of $\psi_1$, we have

where the term $o(1)$ is bounded by $\frac{C(M)}{N^\alpha}$ for some $\alpha > 0$. Moreover, it is easily shown that $L^{(3)}\psi$ is bounded uniformly in $N$ and $M$. To deal with $L^{(2)}\psi_1$, we use Proposition 6.2, which gives

where, again, the "collapsing terms" are bounded by $\frac{C(M)}{N^\alpha}$. Observing that $\partial_i \psi(\cdot,\cdot) = \frac{V^{(i,N)}}{\psi(\cdot,\cdot)}$, and since, by assumption, $1 - 8\omega^2 \ge 0$, the non-collapsing part of (90) is nonnegative. We therefore conclude that, for $t \le \tau^M_N$,

$J_N \psi_N \le C + \frac{C(M)}{N^\alpha}$

with $C$ independent of $M$ and $N$. This implies that the probability in (88) is arbitrarily small for $M$ (first) and $N$ (then) sufficiently large. We now deal with (89). By Doob's Maximal Inequality,

which, again, is small for $M$ (first) and $N$ (then) sufficiently large. This completes the proof.
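The final estimate invokes Doob's inequality; in its standard $L^2$ maximal form (recorded here for convenience) it gives, for the martingale part $M_{N,\psi}$:

```latex
% Doob's L^2 maximal inequality for a martingale (standard statement):
P\Bigl(\sup_{0\le t\le T}\bigl|M^{t}_{N,\psi}\bigr| \ge \lambda\Bigr)
\;\le\;
\frac{\mathbb{E}\bigl[(M^{T}_{N,\psi})^{2}\bigr]}{\lambda^{2}},
\qquad \lambda > 0 ,
```

and the right-hand side is controlled by the second-moment bounds established earlier, uniformly in $N$.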
Remark 6.3. The assumption $\omega \le \frac{1}{2\sqrt{2}}$ has been used in (90) to obtain bounds for $J_N \psi_N$. When the processes are stopped, as in the part of Theorem 3.2 concerning the case $\frac{1}{2\sqrt{2}} < \omega < \frac12$, those estimates are essentially trivial, because of the uniform boundedness of the stopped processes.

Identification of the Limiting Generator and Convergence
In this section we complete the proof of Theorem 3.2. The argument follows that of Section 5.4, so most details are omitted. The candidate for the limiting generator in (62) has been obtained in Proposition 6.2 for the drift part, while the diffusion part comes from the term $L^{(3)}\psi$ which, by direct computation, is shown to be equal to

In what follows we denote by $J$ the generator of the diffusion process in Theorem 3.2. The proof of convergence develops along the following steps.
Step 1: tightness of the processes $V$, together with the perturbation $\psi_N$, as in (59) and (61). Up to $o(1)$ terms, for stopping times $\tau_1 \le \tau_2$,

As in the proof of Proposition 6.3, we find a (possibly $M$-dependent) constant $C$ such that the uniform bound $|J_N \psi_N| + N$

Step 2: convergence to the solution of a martingale problem. Denote by $V$

It should be recalled that $\psi_n$ is a function of $V^{(i)}$, $i = 1,2,3,4$, so when we write $\psi_n(t)$ we mean that $t$ is the time at which the processes in the argument of $\psi_n$ are evaluated. Considering that:

• $\psi_n \to \psi$ as $n \to +\infty$ uniformly on compact sets;

• the processes $V$

If we show that, for each $\psi$ with the properties specified above, $M(t)$ is a martingale, then we have that the limiting processes $V$

Thus, it is enough to show that, for some constant $C > 0$, the inequality

is satisfied. It should be noticed that in (92) we gave a pointwise estimate (i.e. not in mean) of this sort; that estimate, however, holds for the unperturbed function $\psi$. In that case the difference between $\psi$ and its perturbation $\psi_n$ was estimated by a bound of the form $\frac{C(M)}{N^\alpha}$. But now we are no longer stopping the process, so a little more care is needed. We recall that $\psi_n = \psi + n^{-1/4}\psi_1$.
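For completeness, the martingale problem alluded to in Step 2 can be written explicitly. This is a hedged sketch in which $V = (V^{(1)}, V^{(2)})$ denotes a weak limit point and the test functions $\psi$ are as above:

```latex
% Martingale problem for the limiting generator J (sketch):
M(t) \;:=\; \psi\bigl(V(t)\bigr) - \psi\bigl(V(0)\bigr)
  - \int_0^{t} (J\psi)\bigl(V(s)\bigr)\,ds .
```

Requiring $M(t)$ to be a martingale for every such $\psi$ characterizes the limit law, by the uniqueness of the solution of the martingale problem for $J$.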
Given the bound in (92), in order to obtain (94) it is enough to show that

As seen in Section 6.2, $\psi_1$ is a linear combination of terms of the form

Consider the first of the summands above; the others can be dealt with similarly. The factor $\int v_1\,d\rho_n\,\int \frac{\partial}{\partial x} v_2\,d\rho_n$ is clearly bounded in absolute value by $\|\rho_n\|_r$, defined in (66). Estimating all the terms similarly, one sees that

for some constants $C, C'$, where we have used (81). This establishes (95), and thus completes the proof of Theorem 3.2.