Periodic long-time behaviour for an approximate model of nematic polymers

We study the long-time behaviour of a nonlinear Fokker-Planck equation which models, after a closure approximation, the evolution of rigid polymers in a given flow. The aim of this work is twofold: first, we propose a microscopic derivation of the classical Doi closure, at the level of the kinetic equation; second, we prove the convergence of the solution to the Fokker-Planck equation to a periodic solution in the long-time limit.


Introduction
In a previous work [11], two of us studied the long-time behaviour of some flows of infinitely dilute flexible polymers. It was proved there that, for appropriate boundary data, and provided the solution is assumed regular, the solution returns to equilibrium in the long time, whatever the initial condition. The main mathematical ingredient of the proof is the appropriate use of Log-Sobolev inequalities and entropy methods à la Desvillettes-Villani, applied to the Fokker-Planck equation modelling the evolution of the polymer chains, which is derived from statistical mechanics and kinetic theory considerations. It turns out that flows of rigid polymers exhibit equally interesting and, actually, much more varied long-time properties, in particular stationary periodic-in-time motions. Several such behaviours have been experimentally observed, and numerical simulations confirm the ability of the models employed to reproduce them. We refer for example to [6,5,18] for previous mathematical studies. These behaviours are traditionally classified into different categories with an appropriate terminology: one speaks of flows exhibiting kayaking, tumbling, etc. Mathematically, one underlying question is whether the solution to the Fokker-Planck equation ruling the evolution of the microstructure (here, typically rigid rods) converges in the long time to a periodic-in-time solution. This is in sharp contrast to the case of flexible polymers considered in [11], where the long-time limit of the solution to the Fokker-Planck equation is a steady, that is, time-independent, density. The mathematical ingredients mentioned above (Log-Sobolev inequalities and entropy methods) are again useful, but their use is more delicate, as will be seen below.
The purpose of this article is to consider a simple setting where the long-time behaviour of the evolution of the microstructures can be proven to indeed be periodic.
To start with, we consider a commonly used model, namely the rigid rod model with a Maier-Saupe potential. It has been observed numerically [18] that, for this specific model, the flow is, in the long time, periodic-in-time. It is therefore an adequate setting in which to prove that the solution to the Fokker-Planck equation converges, in the long time, to a periodic-in-time solution. This Fokker-Planck equation formally writes ∂_t Ψ = L(Ψ) (1), where L(Ψ) is a nonlinear, nonlocal, essentially parabolic partial differential operator (see the weak formulation (6) below) and Ψ is the probability density function describing the state of the microstructure. We are unfortunately unable to prove mathematically the expected long-time behaviour of the rigid rod model with a Maier-Saupe potential. Note that it is indeed a particularly challenging issue to make a period appear in an equation of the form (1) which does not explicitly contain any periodic function to start with! In some sense, we would need a Poincaré-Bendixson type theorem for an infinite-dimensional system. This is beyond our reach, so we proceed somewhat differently. In Sections 2.2 and 3, we first derive an approximation of the model that gives rise to a closed evolution equation for the so-called conformation tensor. This equation agrees with the equation obtained when a classical Doi-type closure is performed on the original model. In passing, we motivate in Section 3.2 our particular choice of approximate model. We finally prove, in Section 4, that the solution to this equation does behave as expected in the long time: it becomes periodic-in-time. Our proof falls into essentially two steps. We first show, in Section 4.1, that the conformation tensor ∫ x ⊗ x Ψ, calculated from the solution Ψ to our approximate model, and which satisfies the closed evolution equation (8) (this is the whole point of the closure approximation), becomes periodic-in-time in the long term.
We next use this result in our final Section 4.3 to conclude our study with the convergence of the density Ψ itself. Because our fundamental tool (in the course of Section 4.1) is the Poincaré-Bendixson Theorem (following the work [14]), our main result is unfortunately restricted to the two-dimensional setting. The other, intermediate results we prove however hold in any dimension (and they have been proved and stated so). Another technical limitation lies in the fact that our proof exploits the specific explicit expression of the time-periodic solution whose existence we establish, and which attracts all solutions in the long-time limit. More generality in the technique of proof would be highly desirable but is out of our reach to date.
In summary, the main contributions of this work are of two types: from a modelling viewpoint, we propose a microscopic derivation of the quadratic Doi closure, using a stochastic dynamics with constraints on an average quantity (see Proposition 3.1); from a mathematical viewpoint, we analyze the long-time behaviour of the solution to a nonlinear Fokker-Planck equation, which converges to a periodic-in-time function (see Proposition 4.6).
Our study can be considered in the vein of several previous studies such as [8,3]. It is the authors' wish that the quite limited study performed here will be yet another incentive for mathematicians to consider the question of long-time convergence to non-steady stationary states for solutions to kinetic equations of the type (1). We reiterate that, in our opinion, the issue of proving, in some particular settings and under appropriate conditions, that solutions to nonlinear Fokker-Planck equations of the type (1) converge in the long time to periodic-in-time solutions is an interesting, unsolved mathematical issue.

2 The original model and the Doi closure

2.1 The Maier-Saupe model for rigid polymers

Using the notation of [14], we consider, in Stratonovich form, the following stochastic adimensionalized model for a rigid polymer with the Maier-Saupe potential: where B_t denotes a d-dimensional Brownian motion, N is a dimensionless concentration parameter and the matrix κ ∈ R^{d×d} is related to the velocity field of the ambient flow, which is assumed to be homogeneous (so that transport terms in (2) are omitted). The purpose of the projection operator is to ensure the preservation of the norm ‖X_t‖ of the rigid polymer (nematic liquid crystalline polymer) in time. Here and in the following, we use the tensor product notation: for two vectors u and v in R^d, u ⊗ v is the R^{d×d} matrix whose (i, j)-entry is u_i v_j. Notice that P(X) is a symmetric matrix such that P(X)P(X) = P(X).
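As a quick numerical sanity check (illustrative only, and assuming the standard form P(X) = Id − X ⊗ X/‖X‖² for the projection operator in (3)), the algebraic properties used throughout can be verified directly: P(X) is symmetric, idempotent, and annihilates X, which is what preserves ‖X_t‖.

```python
import numpy as np

def projector(x):
    """P(X) = Id - (X (x) X)/|X|^2: orthogonal projector onto the hyperplane
    orthogonal to x (the assumed form of the operator P in (3))."""
    x = np.asarray(x, dtype=float)
    return np.eye(x.size) - np.outer(x, x) / np.dot(x, x)

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
P = projector(x)

sym_err  = np.linalg.norm(P - P.T)    # symmetry: P = P^T
idem_err = np.linalg.norm(P @ P - P)  # idempotence: P P = P
ker_err  = np.linalg.norm(P @ x)      # P(X) X = 0
```

All three residuals vanish to machine precision for any nonzero x.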
The right-hand side of Equation (2) contains three terms which model different phenomena. The first term models the reorientation due to the velocity gradient κ of the fluid. The second, non-linear term contains the force associated with the Maier-Saupe potential, which describes the effective interaction of a rigid rod with the other rods. The rightmost term is a rotational diffusion term.
In the following, we will in particular consider the two-dimensional case (d = 2) and a simple shear flow, for which where Pe (the Péclet number) and a (a molecular shape parameter) are two adimensional parameters, see [14]. But for the time being, we consider (2) in full generality. Equation (2) equivalently writes, using Itô's integration rule: where d is the dimension of the ambient space. Obtaining (5) from (2) is straightforward using (with implied summation over repeated indices) where δ_{i,j} is the Kronecker symbol. The Fokker-Planck equation associated to (5) (or equivalently to (2)) writes, in weak form, for any smooth test function ϕ: with again implied summation over repeated indices, and where, for any time t ≥ 0, µ_t(dx) is the law of X_t (supported on a sphere).

2.2 The Doi closure to derive a closed equation for E(X_t ⊗ X_t)

As announced in the introduction, we now recall how to derive from (5) a closed evolution equation on the conformation tensor, using the standard quadratic Doi closure approximation [7]. It is the aim of Section 3.2 below to provide a microscopic justification of this closure. Using elementary Itô differential calculus (and the properties P = P^T and P² = P), we compute: where loc. mart. denotes a local martingale that we do not need to make precise for the rest of our argument. Taking the trace of the previous equation and using that tr(AB) = tr(BA) and (X ⊗ X) P(X) = P(X) (X ⊗ X) = 0, we check the preservation of the norm, as announced earlier. We henceforth set ‖X_t‖ = L.
Taking now the expectation, we obtain: where we here introduced the Frobenius inner product: for two R^{d×d} matrices A and B, A : B = tr(AB^T) = Σ_{i,j=1}^d A_{i,j} B_{i,j}. At this stage, we use the so-called quadratic Doi closure [7], which consists in performing the following approximation: for any deterministic matrix K, The following closed nonlinear first-order differential equation ruling the evolution of M in time is thereby obtained (using the fact that tr(M) = L²): Note that, at this stage, the above equation is formal, since we do not know that tr(M) does not vanish. It is a consequence of the following proposition that this is not the case.

Proposition 2.1 For any symmetric initial condition M(0) with non-zero trace, the solution M(t) to (8) is defined for all times, is symmetric, and satisfies tr(M(t)) = tr(M(0)).

Proof. We consider a time interval on which equation (8) is well posed. Such a time interval exists by a standard application of the Cauchy-Lipschitz Theorem. Indeed, the right-hand side of (8) is a rational function in the coefficients of M. Momentarily, this time interval may be bounded, but we will soon see that it is in fact infinite. The transpose matrix M^T(t) of M(t) then satisfies the following equation: It follows that, when M(t) solves (8), both M(t) and M^T(t) are solutions to the first-order evolution equation and that this holds for the same initial condition M(0), since the latter is symmetric. Now, the right-hand side of this differential equation is a second-order polynomial in the unknown B, with coefficients that are obviously continuous in time (in turn because M(t) solves (8)). It follows that the Cauchy-Lipschitz theorem holds for this equation, and thus that the solution is unique for a given initial condition. This proves that M(t) = M^T(t) for all times in the considered time interval. We now take the trace of (8) and easily check that using the fact that M is now known to be symmetric. The trace of the solution is thus preserved in time, and this in particular shows that the solution to (8) is defined for all times (for any initial condition with non-zero trace).
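A small numerical sketch of the closure itself (assuming its standard form, namely replacing E[((X ⊗ X) : K)(X ⊗ X)] by (M : K) M with M = E(X ⊗ X)): the approximation is exact when the law of X is a point mass, which is the regime of strongly aligned rods.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
K = rng.standard_normal((d, d))   # an arbitrary deterministic matrix

def closure_error(samples):
    """Compare E[((X(x)X):K)(X(x)X)] with the Doi closure (M:K) M."""
    outer = np.einsum('ni,nj->nij', samples, samples)   # X (x) X, per sample
    weight = np.einsum('nij,ij->n', outer, K)           # (X (x) X) : K
    exact = np.mean(weight[:, None, None] * outer, axis=0)
    M = outer.mean(axis=0)
    closed = np.sum(M * K) * M                          # (M : K) M
    return np.linalg.norm(exact - closed)

# Point mass: the quadratic closure commits no error.
x0 = rng.standard_normal(d)
err_point = closure_error(np.tile(x0, (500, 1)))
```

The same function applied to a spread-out sample quantifies the closure error for a given empirical distribution.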
We finally notice, as will be useful below, that, since tr(M)|_{t=0} = L², (8) also writes ♦

Remark 1 A natural question is whether Equation (8) also preserves positivity (in the sense of symmetric matrices). This property will be a consequence of a rewriting of the solution to (8) as M(t) = E(X_t ⊗ X_t) for X_t solution to a modified stochastic differential equation, see Section 3 below. We are unable to prove this preservation otherwise.

2.3 A rewriting of the equations
It is enlightening to compare our equation (7) with equation (10) in the article [14] by G. Forest and collaborators. For this purpose, we recall the three dimensionless numbers used by these authors: the molecular shape parameter a, the Péclet number Pe and the dimensionless concentration number N. In this context, the dimension is d = 2, the length is L = 1 and κ is given by (4) and thus writes: where The equation (7) then reads (using the fact that Ω is skew-symmetric): To agree with the notation of [14], we introduce Q = M − Id/2, the traceless part of M. Equation (10) then rewrites: We observe that the above equation agrees with Equation (10) in [14] up to multiplicative constants that do not affect our conclusions. Using now the Doi closure approximation, we finally get

3 Derivation of an evolution equation for X_t that yields our closed equation on M

We have derived in the previous section a closed equation (8) on M, using a classical closure technique (à la Doi) on the original dynamics (5) on X_t. The question now naturally arises whether it is possible to modify the original stochastic dynamics (5) itself so that M(t) = E(X_t ⊗ X_t), calculated from X_t solution to this modified dynamics, is a solution to (8).

3.1 Two possible closures on the stochastic differential equation
To begin with, using the fact that for any vector We next modify this equation as follows: where the pair (R_t, λ) is yet to be determined, so that the conformation tensor computed from the solution to (13) is indeed a solution to (8). Here, R_t ∈ R^{d×d} is an adapted stochastic process, and λ ∈ R is a deterministic constant. As above, an elementary Itô calculation yields Taking the expectation, we obtain This equation is then equivalent to equation (8) if and only if Obviously, the simplest possible choice for (R_t, λ) is to set It follows that the diffusion term in the associated Fokker-Planck equation is simply a Laplacian. Let us write the non-linear Fokker-Planck equation we thus obtain: where We will see in the next section that this particular choice (14) of the pair (R_t, λ) may be motivated by modelling considerations. This turns out to be the choice we advocate.
An alternate convenient pair (among many possible choices) would be to set Using the fact that M ≤ tr(M) Id (in the sense of symmetric matrices), the existence of such a matrix R_t follows, for example, from a Cholesky factorization. The associated Fokker-Planck equation would then write: where M[ψ] is defined by (16). We have not been able to motivate the alternate choice (17) as convincingly as the choice (14), and we will show that our preferred choice (14) enjoys several agreeable properties. We therefore henceforth adopt (14).

3.2 A possible justification of our approximation (13)
To derive an appropriate approximation of Equation (5), we now follow a different path.
Since (5) is the projection, onto the manifold defined by the constraint "‖X_t‖² constant", of an original dynamics visiting the whole space R^d, we may consider an approximation of (5) given by the projection of the same original dynamics onto a "manifold" defined by the constraint "E‖X_t‖² constant". The difficulty is that giving a mathematical meaning to the latter constraint is not straightforward. The aim of this section is to give a proper meaning to this projection, and to identify the projected dynamics with the dynamics (13)-(14) that leads to the Doi closure. To keep our exposition simple, we omit the nonlinear term in the drift of (19) (namely, we take N = 0). The reasoning below generalizes to the full drift.
The approach we propose is to consider I ≥ 1 replicas (for 1 ≤ i ≤ I) of the dynamics (19) and to project the system thus obtained onto the manifold We thus impose that the empirical average is constant, and we are interested in the limit I → ∞. Of course, the d-dimensional Brownian motions B^i_t are assumed to be independent. The projection is performed using the d'Alembert Principle: the constraining force neither adds nor subtracts energy from the system, being directed orthogonally to the submanifold on which the constrained system evolves. More precisely, denoting by X_t = (X^1_t, . . . , X^I_t) ∈ R^{dI}, the projected dynamics writes (see for example [4,13]): where K is the dI × dI block-diagonal matrix composed of the blocks κ of size d × d, and P(X) is still defined by (3), now with X ∈ R^{dI}. We fix (21) at initial time, and this quantity is by construction preserved in time. We also assume that the random variables X^i_0 are identically distributed, so that, from (21), As mentioned above, Equation (5) is recovered using this projection procedure with only one replica: I = 1. Here, we consider the limit I → ∞.
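The constrained replica dynamics can be sketched in discrete time (illustrative only: a generic shear-like κ, an Euler step, and an explicit re-projection standing in for the exact Stratonovich projection): the empirical second moment Σ_i ‖X^i‖² = I L² is preserved along the simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
d, I, L = 2, 50, 1.0
dt, n_steps = 1e-3, 200
kappa = np.array([[0.0, 1.0], [0.0, 0.0]])   # illustrative velocity gradient

X = rng.standard_normal((I, d))
X *= L * np.sqrt(I) / np.linalg.norm(X)      # enforce sum_i |X^i|^2 = I L^2

for _ in range(n_steps):
    flat = X.ravel()
    # orthogonal projector onto the tangent space of the sphere in R^{dI}
    P = np.eye(d * I) - np.outer(flat, flat) / np.dot(flat, flat)
    drift = (X @ kappa.T).ravel()            # block-diagonal K acting replica-wise
    noise = np.sqrt(2.0 * dt) * rng.standard_normal(d * I)
    X = (flat + P @ (drift * dt + noise)).reshape(I, d)
    # the continuous-time projected dynamics stays on the sphere exactly;
    # the Euler step only does so to first order, hence a re-projection
    X *= L * np.sqrt(I) / np.linalg.norm(X)

constraint = float(np.sum(X ** 2))           # should equal I L^2
```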
We now pick the first component X^1_t ∈ R^d of our vector X_t and consider its evolution equation Since ‖X_t‖² = I L², this also writes In the limit I → ∞, we formally obtain that X^1_t converges to Y_t solution to This limit may be rigorously justified as follows.
Proposition 3.1 Let X_t = (X^{1,I}_t, . . . , X^{I,I}_t) ∈ R^{dI} be a solution to (20) (we here explicitly indicate in the superscript the dependence on the number of replicas I) and let Y_t be the solution to (23). The initial condition Y_0 is assumed to satisfy so that (X^{i,I}_0)_{1≤i≤I} are identically distributed random variables satisfying (21). Then, for any positive time T > 0, there exists C > 0 such that, for all positive I ∈ N, The proof is provided in the appendix. Of course, the convergence result holds for any component X^i_t of the vector X_t, and by standard results on propagation of chaos [17], we actually have that any subset of components (X^{i_1}_t, . . . , X^{i_k}_t) converges (in the limit I → ∞) to (Y^{i_1}_t, . . . , Y^{i_k}_t), where the processes (Y^i_t) are independent copies of Y_t solution to (23). Notice that, since E(‖Y_t‖²) = L², we may equally well write the dynamics on Y_t in the following form:
This agrees with (13) for (R = Id, λ = d), thereby providing a justification of our particular choice (14) in the previous section.
To summarize, the original model for rigid rods (5) may be seen as the projection of the dynamics (19) onto the submanifold ‖X_t‖² = ‖X_0‖², while the approximate model (13) with (R = Id, λ = d), which is consistent with the Doi closure (8), may be seen as the original dynamics (19) constrained to have a fixed average length: E‖X_t‖² = E‖X_0‖². This yields a microscopic interpretation of the Doi closure.

4 Long-time behaviour of our approximate model
We henceforth consider the model (13)-(14) (namely (R = Id, λ = d)) which we have built from the original model (5) by approximation. Throughout this section, we work in dimension d = 2. This is a crucial assumption, specifically needed for our technique of proof, which makes use of the Poincaré-Bendixson Theorem. Additionally, we assume that the matrix κ is defined by (4), and that the initial condition X_0 satisfies E(‖X_0‖²) = L² = 1 (so that, for all positive times, E(‖X_t‖²) = 1). Given these assumptions, we now recall, for the convenience of the reader and the consistency of the present section, the model under study: We also recall that the conformation tensor M(t) = E(X_t ⊗ X_t) then satisfies the ordinary differential equation: where tr(M(t)) = tr(M(0)) = 1. The (non-linear) Fokker-Planck formulation associated to (24) and established in (15) writes: where M[ψ(t, ·)] = ∫_{R²} x ⊗ x ψ(t, x) dx and tr(M[ψ(0, ·)]) = 1. Notice that t → M[ψ(t, ·)] then satisfies (25). The aim of this section is to study the long-time behaviour of the solution ψ to the Fokker-Planck equation (26).
As is standard for such an analysis, we study the long-time behaviour of a solution to the Fokker-Planck equation (26), assumed sufficiently regular for our manipulations to be valid. We refer for example to [2] for an appropriate functional setting justifying such calculations.

4.1 Long-time convergence of the solution to (25) to a periodic solution
We first consider the closed ordinary differential equation (25) on M, momentarily leaving (26) aside. We will show that, under some assumptions on the parameters and on the initial condition M(0), M converges, in the long-time limit, to a periodic solution. This is an extension of the result [14, Theorem 5.1]. We also refer to that contribution for a more thorough study of the long-time behaviour of the dynamical system, in other regimes of the parameters. • For any initial condition M(0) ∈ Ω, the solution M(t) to (25) converges to M_per(t) exponentially fast in the long time, that is: there exist C, λ > 0 such that, for all t ≥ 0, The remainder of this section is devoted to the proof of Proposition 4.1.

Proof.
Step 1: Existence of a periodic-in-time solution. Using Proposition 2.1, we know that the solution M(t) to (7) is symmetric and satisfies tr(M(t)) = 1, since this holds true at initial time. Introducing, as in Section 2.3, the traceless part Q = M − Id/2 of M, we may always write Q, in the two-dimensional setting we consider, in the form

Now the evolution equation (12) equivalently reads
Introducing the polar coordinates x = r cos ϕ, y = r sin ϕ, we rewrite (28) in the form We now consider two positive constants ε₁ and ε₂ satisfying 0 < ε₁ < (N − 1)/(4N) and 0 < ε₂ < 1/(4N) respectively. Set r₁ = (N − 1)/(4N) − ε₁ and r₂ = (N − 1)/(4N) + ε₂. Note that, by construction, 0 < r₁ ≤ r₂ < 1/2. Then, if r₁ ≤ r(0) ≤ r₂, one has This shows that dr/dt|_{r=r₁} > 0 and dr/dt|_{r=r₂} < 0, thus the annulus Ω = {(x, y), r₁ < √(x² + y²) < r₂} is stable under the flow (for positive times). The domain Ω mentioned in Proposition 4.1 above is now made precise and defined by: We next show that there is no stationary point in the annulus Ω. Since, by assumption, N > 1/(1 − a²), we may assume that ε₁ > 0 is chosen sufficiently small so that the vector field does not vanish on Ω; since the solution remains in Ω, there is thus no stationary point in Ω.
From the Poincaré-Bendixson Theorem (see for example [15, Theorem 6.12]), we then obtain that, for any trajectory with initial condition (x₀, y₀) in Ω, its ω-limit set is a periodic orbit, that is, the trajectory of a periodic solution. We recall, for consistency, that a point (x∞, y∞) is in the ω-limit set of the (forward) trajectory starting from (x₀, y₀) if there is an increasing sequence of times t_n going to infinity such that (x(t_n), y(t_n)) converges to (x∞, y∞) as n goes to infinity. A corollary of the previous statement is that there exists a periodic solution (x_per, y_per)(t) to equation (28) in Ω, and thus an associated periodic solution M_per(t) to (25).

Step 2: Properties of the periodic-in-time solution. In order to prove the long-time convergence to the periodic solution and to make precise the rate of that convergence, we now compute the divergence of the vector field in the right-hand side of (28): for (x, y) ∈ Ω, We thus see that if ε₁ and Pe are chosen sufficiently small, then D(x, y) < 0 in Ω. Following [12,15], we deduce that equation (28) has a unique stable periodic orbit in Ω. The uniqueness follows from a generalization of the Dulac criterion. Let us recall the main arguments for the stability statement. We introduce the Poincaré map associated with the first return to a section S of Ω. It is a standard result [12, Equation (1.17)] or [15, Equation (4.51)] that ρ = exp(∫₀^T D(x_per(s), y_per(s)) ds), where T is the period of the periodic solution (x_per, y_per)(t), gives the derivative of the Poincaré map at its stationary point (namely the point where the periodic orbit intersects S). Thus, D < 0 implies that this derivative is strictly smaller than one (ρ < 1), which yields the exponential convergence to the stationary point of the Poincaré map, and thus the (exponential) asymptotic stability of the periodic orbit. For further use (see the proof of Proposition 4.5), we establish uniform bounds on the eigenvalues of M_per.
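The eigenvalue computation involved here is a two-by-two exercise, easy to confirm numerically (assuming the parameterization Q = [[x, y], [y, −x]], consistent with the eigenvalue formula 1/2 ± √(x² + y²) stated below):

```python
import numpy as np

def eigs_of_M(x, y):
    """Eigenvalues of M = Id/2 + Q, with Q = [[x, y], [y, -x]] traceless symmetric."""
    M = np.array([[0.5 + x, y], [y, 0.5 - x]])
    return np.sort(np.linalg.eigvalsh(M))

x, y = 0.1, -0.2                         # a point with sqrt(x^2+y^2) < 1/2
lam = eigs_of_M(x, y)
r = np.hypot(x, y)
predicted = np.array([0.5 - r, 0.5 + r])  # the claimed eigenvalues 1/2 -+ r
err = np.linalg.norm(lam - predicted)
```

In particular, both eigenvalues lie strictly between 0 and 1 whenever √(x² + y²) < 1/2, i.e. on the annulus.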
We first note that M_per is a symmetric positive definite matrix. This is a simple consequence of the fact that (x_per(t), y_per(t)) ∈ Ω. Indeed, one can check that tr(M_per(t)) = 1 and This implies that the eigenvalues of M_per(t) are 1/2 ± √(x_per(t)² + y_per(t)²), and thus bounded both from below and from above. In the sense of symmetric matrices,

Step 3: Convergence in the long time. To conclude our proof, we now show the convergence (27). Consider a solution (x, y)(t) to (28) and the sequence (x, y)(t_k) = (x_k, 0) of its successive return points to the section S. Otherwise stated, x_{k+1} is the image of x_k by the Poincaré map. Notice that (x_k) is a monotonic sequence (since two trajectories cannot cross). As explained in Step 2, we also know that the sequence (x_k) converges exponentially fast to the fixed point x* of the Poincaré map: there exist C > 0 and ρ̄ ∈ (0, 1) (which can be chosen arbitrarily close to ρ) such that, for all k ≥ 0, Since (x*, 0) is on the periodic orbit, there exists a time t* ∈ [0, T) such that (x_per, y_per)(t*) = (x*, 0).
Without loss of generality, we may assume t* = 0. We now remark, using (31), that ϕ : [t_k, t_{k+1}) → [0, 2π) is a one-to-one (actually strictly decreasing) function. We may therefore use ϕ itself to reparameterize the trajectory between two successive return points. It follows from (30) that Likewise, along the periodic trajectory (r_per, ϕ_per)(t) (namely, (x_per, y_per)(t) in polar coordinates), we have: We now use the fact that (r, ϕ)(t_k) = (x_k, 0) is close to (r_per, ϕ_per)(0) = (x*, 0) (by virtue of (34)) and the Lipschitz continuity of the flow associated to (30) with respect to the initial conditions over a finite time interval, to get that where, here and below, C > 0 denotes irrelevant constants.
This implies that there exists a time T₀ such that, for all k ≥ 0, Then we have, for k ≥ 0, Using the Lipschitz property of the flow, we thus get, for all k ≥ 1 and for all t ∈ [kT, (k + 1)T): ‖(x_per, y_per)(t) − (x, y)(T₀ + t)‖ ≤ C ρ̄^k.
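The chain of arguments (trapping annulus, negative divergence, contracting return map) can be illustrated on a toy planar system, namely the normal form dr/dt = r(µ − r²), dϕ/dt = −1; this is not the actual field (28), but its return map on the section {ϕ = 0} is explicit, so the contraction can be observed directly.

```python
import numpy as np

mu, T = 0.25, 2 * np.pi      # limit cycle at r = sqrt(mu); return time T for phi

def return_map(r0):
    """Exact first-return map on the section {phi = 0} for dr/dt = r(mu - r^2),
    dphi/dt = -1 (closed-form logistic solution in u = r^2)."""
    e = np.exp(2.0 * mu * T)
    return np.sqrt(mu * r0 ** 2 * e / (mu + r0 ** 2 * (e - 1.0)))

xs = [0.3]                   # start inside a trapping annulus around sqrt(mu) = 0.5
for _ in range(6):
    xs.append(return_map(xs[-1]))

errs = np.abs(np.array(xs) - np.sqrt(mu))
ratio = errs[-1] / errs[-2]  # numerical derivative of the map at its fixed point
```

For this toy field, the divergence on the periodic orbit equals −2µ, so the measured ratio matches ρ = exp(∫₀^T D ds) = e^(−2µT) < 1, in line with the formula used in Step 2.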

4.2 Analysis of the Fokker-Planck equation (26) for M(t) given
In this section, we consider (26) for a given M (t): This equation can be rewritten in the form We are thus considering here a linear Fokker-Planck equation.
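As background for the Gaussian solutions considered next: for a linear drift b(t, x) = K(t)x and, as in (15), a plain Laplacian diffusion, a centered Gaussian solution has covariance C(t) solving the Lyapunov-type equation dC/dt = KC + CK^T + 2 Id. A Monte Carlo sketch with a generic constant drift matrix (illustrative only; not the specific K of (37)) compares this ODE with a direct simulation of the SDE dX = KX dt + √2 dB:

```python
import numpy as np

rng = np.random.default_rng(3)
K = np.array([[-1.0, 0.5], [0.0, -1.5]])   # generic stable drift matrix (illustrative)
dt, n_steps, n_paths = 1e-3, 1000, 20000

X = np.zeros((n_paths, 2))                 # Euler-Maruyama for dX = K X dt + sqrt(2) dB
M = np.zeros((2, 2))                       # Lyapunov ODE dM/dt = K M + M K^T + 2 Id
for _ in range(n_steps):
    X = X + (X @ K.T) * dt + np.sqrt(2.0 * dt) * rng.standard_normal((n_paths, 2))
    M = M + (K @ M + M @ K.T + 2.0 * np.eye(2)) * dt

emp = X.T @ X / n_paths                    # empirical covariance at time t = 1
err = np.linalg.norm(emp - M)              # small up to Monte Carlo noise
```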
Proof. Set b(t, x) = K(t)x. We argue formally. Our manipulations are standard and can be made rigorous using appropriate functional spaces and cut-off functions. We write Notice that this proof does not require M to satisfy (25). ♦ We now build an explicit Gaussian solution to (35).

Proposition 4.3
Let M(t) be a given time-dependent symmetric positive definite matrix with tr(M(0)) = 1. Introduce the associated two-dimensional Gaussian probability density function Then, ψ_M satisfies (35) (for the given function M(t)) if and only if M(t) satisfies (25).
Proof. Let us denote P(t) = M⁻¹(t), so that It is easy to calculate that where, here and for the rest of this proof, we use the short-hand notation ∂_t instead of ∂/∂t. Note that (using the symmetry of P), It follows that Plugging ψ_M into the right-hand side of the Fokker-Planck equation (26) yields for all x ∈ R². This is equivalent to the couple of conditions, the first of which reads (1/2) tr(P⁻¹ ∂_t P) = tr(K − P), and where, for the second, we have equated the symmetric parts of the two second-order coefficients. We immediately remark that the second line of (38) implies the first, by elementary properties of the trace. We now write (25) in the form where, again, K is defined by (37). Thus M satisfies (25) if and only if P = M⁻¹ satisfies where we use the fact that ∂_t P = −P ∂_t M P. Comparing with (38) concludes the proof. ♦

We now proceed with a uniqueness result for the periodic solution to (26). This result, for which we provide here a self-contained proof, is also a consequence of the convergence stated in Proposition 4.6 and proved in the next section. Since the function I is nonnegative, this immediately implies that ψ_per = ψ_{M_per} on [0, T] and thus for any time.
If the period T̄ of ψ_per is different from the period T of ψ_{M_per}, we slightly adapt the above argument. From standard results on continued fractions (see for example [9,16]), there exist sequences of integers p_n, q_n such that Thus, if we set τ_n = p_n T, we have τ_n = q_n T̄ + ε_n where lim_{n→∞} ε_n = 0 and lim_{n→∞} τ_n = ∞. Then we have, for n sufficiently large, We conclude this section with an inequality that will be useful below to show exponential convergence to periodic solutions for (26).
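The continued-fraction step above rests on the existence of integers p_n, q_n with p_n T − q_n T̄ → 0; the convergents of the continued-fraction expansion of T̄/T provide exactly such sequences. A short illustration with hypothetical periods T = 1, T̄ = √2 (any irrational ratio works):

```python
import math

T, Tbar = 1.0, math.sqrt(2)     # illustrative periods with irrational ratio

def convergents(x, n):
    """First n convergents p/q of the continued-fraction expansion of x."""
    cs = []
    h0, h1, k0, k1 = 0, 1, 1, 0  # standard recurrence initialisation
    a = x
    for _ in range(n):
        ai = math.floor(a)
        h0, h1 = h1, ai * h1 + h0
        k0, k1 = k1, ai * k1 + k0
        cs.append((h1, k1))
        frac = a - ai
        if frac == 0:
            break
        a = 1.0 / frac
    return cs

# p_n / q_n ~ Tbar / T, hence eps_n = |p_n T - q_n Tbar| decreases to zero
eps = [abs(p * T - q * Tbar) for p, q in convergents(Tbar / T, 10)]
```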
Proof. It is well known that a centered Gaussian distribution with covariance matrix M satisfies a logarithmic Sobolev inequality with parameter the inverse of the largest eigenvalue of M, see for example [1]. Now, (33) precisely shows that the eigenvalues of M are uniformly bounded from above by a time-independent constant. This concludes the proof. ♦

Consider now a solution ψ of (26) and assume that the initial condition ψ(0) satisfies (42), where Ω is defined above. We have:

Proposition 4.6 Under the assumptions of Proposition 4.1 (and in particular (42)), the solution ψ to (26) (which we assume sufficiently regular) converges exponentially fast to ψ_per in the following sense: there exist C > 0 and ν > 0 such that, for all times t > 0,

Proof. To the function ψ is associated M(t) = M[ψ(t, ·)] satisfying (7), which (by Proposition 4.1) converges exponentially fast to M_per: for all t ≥ 0, The function ψ satisfies the linear Fokker-Planck equation: where K is defined by (37). Likewise, the function ψ_per satisfies the linear Fokker-Planck equation: where K_per is the periodic function defined by (37) with M = M_per. Notice that from (43), we get Now, adapting the proof of Proposition 4.2 and rewriting (44) as: we have, for 0 < ε < 1, Now, using (43), the fact that ∫ ‖x‖² ψ = tr(M) = 1, and Proposition 4.5, we get from which we deduce the exponential convergence of H(ψ|ψ_per) to zero, using the Gronwall Lemma. ♦

Remark 3 It is easy, by making precise all the constants used in the bounds above, to give an expression for the rate of convergence ν in terms of ρ defined in (32) and µ defined in (41).
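In the Gaussian setting of this section, the relative entropy H(ψ|ψ_per) driving the convergence estimate has a closed form. The sketch below evaluates it for illustrative covariance matrices (not the actual M(t), M_per(t)) and confirms that it is positive away from equality and vanishes at M = M_per:

```python
import numpy as np

def gaussian_relative_entropy(M, Mper):
    """H(N(0,M) | N(0,Mper)) = (tr(Mper^-1 M) - d + log det Mper - log det M)/2."""
    d = M.shape[0]
    Pinv = np.linalg.inv(Mper)
    return 0.5 * (np.trace(Pinv @ M) - d
                  + np.log(np.linalg.det(Mper)) - np.log(np.linalg.det(M)))

Mper = np.array([[0.6, 0.1], [0.1, 0.4]])    # illustrative covariance matrices
M    = np.array([[0.55, 0.05], [0.05, 0.45]])
h  = gaussian_relative_entropy(M, Mper)      # strictly positive since M != Mper
h0 = gaussian_relative_entropy(Mper, Mper)   # vanishes at M = Mper
```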

A Proof of Proposition 3.1
We provide in this appendix a proof of Proposition 3.1. Adapting a standard coupling approach, see for example [10], we introduce I independent copies of the nonlinear stochastic differential equation (23): so that E‖X^{i,I}_t‖² = L², since the law of the stochastic process X_t is invariant under permutation of the indices of its components (X^{1,I}_t, . . . , X^{I,I}_t). Moreover, since E‖Y^i_0‖² = L², we also have, for all positive times t, This originates from the fact that E(‖Y_t‖²) solves the ordinary differential equation: We now introduce the difference We have: Thus, using the Doob inequality to get (where, here and throughout this proof, C denotes irrelevant constants that depend on T, κ, d, L but not on I), we obtain, for any time t ∈ [0, T] (T is fixed), We now estimate from above the first and third terms of the right-hand side.
We begin with the first term, which involves the initial condition Z^{i,I}_0. Using that, for all x, y > 0, (x − y)² ≤ |x² − y²|, we have: We now consider the term (48), which we split as follows: For the third term (52), we write: By independence of the stochastic processes (Y^i_t)_{i≥1}, the terms in the sum vanish if j ≠ k. Thus, using that sup_{s∈[0,T]} E‖Y_s‖⁸ < ∞, which is easy to check from (23) provided the initial condition Y_0 is assumed to have finite moments up to order 8. We finally estimate the second term (51). We have The last line is a consequence of the following fact, applied to the i.i.d. centered random variables ξ_j = ‖Y^j_s‖² − L² with finite fourth moment, α = L² > 0, and m = 0, 2, 3: ds + C/I, and we conclude using the Gronwall lemma.
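The key estimate at the end of the proof, namely that the empirical average of the i.i.d. centered variables ξ_j contributes an error of order 1/I, is the usual Monte Carlo rate E[(1/I Σ_j ξ_j)²] = Var(ξ)/I. A quick experiment recovers it (with a standard normal standing in for ξ_j = ‖Y^j_s‖² − L², purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_square_of_average(I, n_trials=20000):
    """Estimate E[(1/I sum_j xi_j)^2] for i.i.d. centered xi with unit variance."""
    xi = rng.standard_normal((n_trials, I))  # stand-in for |Y^j_s|^2 - L^2
    return float(np.mean(xi.mean(axis=1) ** 2))

v10  = mean_square_of_average(10)    # ~ Var(xi)/10  = 0.1
v100 = mean_square_of_average(100)   # ~ Var(xi)/100 = 0.01
```

Multiplying I by 10 divides the mean-square error by 10, which is exactly the C/I term in the Gronwall argument.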
Acknowledgements: The last two authors would like to thank Greg Forest for enlightening discussions, in particular during a workshop on polymer flows organized at Ecole des Ponts ParisTech in January 2009. The variety of observed and simulated long term behaviours of nematic polymer flows along with the importance of mathematically understanding such phenomena were then pointed out. The help of Victor Kleptsyn regarding the theory of planar dynamical systems is also acknowledged.