ERGODICITY FOR A CLASS OF MARKOV PROCESSES AND APPLICATIONS TO RANDOMLY FORCED PDE’S. II

The paper is devoted to studying the problem of ergodicity for the complex Ginzburg–Landau (CGL) equation perturbed by an external random force. We show that the conditions of a simple general result established in [22] are fulfilled for the equation in question. As a consequence, we prove that the corresponding family of Markov processes has a unique stationary distribution, which possesses a mixing property. The result of this paper was announced in joint work with Sergei Kuksin [14].


1. Introduction. The objective of this paper is to prove the uniqueness of stationary measure for the complex Ginzburg–Landau (CGL) equation perturbed by a random force. More precisely, we study the equation

    ∂_t u − (ν + iα)∆u + iβ|u|^{2σ}u = h(x) + η(t, x),  x ∈ D,   (1)

where D ⊂ R^n is a bounded domain, h(x) is a deterministic function, and η(t, x) is a random process white in time and smooth in the space variables. (See Section 2.1 for the precise assumptions imposed on the right-hand side.) Equation (1) is supplemented with the Dirichlet boundary condition and an initial condition at the time t = 0. We show that if the distribution of η is sufficiently non-degenerate, then the random dynamical system associated with (1) has a unique stationary measure µ, and any other solution converges to µ in distribution as t → ∞. The problem of ergodicity was studied by many authors for various classes of randomly forced PDE's. We refer the reader to the reviews [1, 10, 21] and to the Introduction of [22] for a concise summary of the results obtained and the techniques developed. Here we mention only three papers that are directly related to the equation considered in the present article. Namely, Hairer [6] studied a real Ginzburg–Landau equation on a multidimensional torus and proved the uniqueness of stationary measure and an exponential mixing property for it, Odasso [19] established similar results for a class of CGL equations with strong nonlinear dissipation, and Debussche and Odasso [2] proved uniqueness and polynomial mixing for a damped one-dimensional Schrödinger equation.
The method used in this paper is based on studying a pair of independent copies of the Markov process generated by (1). This approach was applied in [22] to give a simple proof of the uniqueness and a mixing property of stationary measure for the 2D Navier-Stokes (NS) system in a bounded domain. The case of the CGL equation is technically more complicated. However, the main ideas remain the same, and we refer the reader to the Introduction of [22] for their informal explanation. Here we only clarify the difference between the cases of NS and CGL equations.
We wish to show that the distributions of solutions of (1) converge to a unique stationary measure in the Kantorovich–Wasserstein (KW) metric over the Sobolev space H^1_0 (see (2)). A crucial point of the proof is to estimate the distance between the distributions of solutions to (1) issued from different initial data. This is done in two steps. We first show that if the space of probability measures is endowed with the KW metric over the space L^2 (see (3)), then the arguments of [22], combined with a new a priori estimate established in Proposition 2, yield a uniform (in time) bound for the distance between solutions. This argument does not apply to the KW metric over H^1_0, because the a priori estimates available for higher Sobolev norms of solutions of the CGL equation are not strong enough. To overcome this difficulty, we prove that the Markov semigroup defined by (1) on the space of measures possesses a regularising property (see Proposition 4). Combining it with the bound for the KW distance over L^2, we obtain the desired result.
Here is the plan of this paper. In Section 2, we state a well-known result on the well-posedness of the initial-boundary value problem for Eq. (1) and establish some a priori estimates for solutions. The main result is presented in Section 3. To prove it, we show that the conditions of Theorem 1.2 in [22] are satisfied for the model in question. Finally, in the Appendix, we have compiled some auxiliary results used in the main text.
Notation. Let X be a separable Banach space with a norm ‖·‖_X. We denote by B_X(r) the closed ball in X of radius r centred at the origin. We always assume that X is endowed with the Borel σ-algebra B(X) and denote by P(X) the set of probability measures on (X, B(X)). We write C_b(X) and L(X) for the spaces of bounded continuous and bounded Lipschitz-continuous functions f : X → R and endow them with the natural norms

    ‖f‖_∞ = sup_{u∈X} |f(u)|,   ‖f‖_L = ‖f‖_∞ + sup_{u≠v} |f(u) − f(v)| / ‖u − v‖_X.

If ξ is a random variable on a probability space (Ω, F, P), then we denote by D(ξ) or P{ξ ∈ ·} the distribution of ξ. If a, b ∈ R, then a ∨ b (respectively, a ∧ b) stands for the maximum (respectively, minimum) of a and b.
We deal with the spaces H, H^1, and H^2 introduced in Subsection 2.1, together with the corresponding norms ‖·‖, ‖·‖_1, and ‖·‖_2. If f ∈ C_b(H^1) and µ ∈ P(H^1), then (f, µ) denotes the integral of f over H^1 with respect to µ. If µ_1, µ_2 ∈ P(H^1), then we write

    ‖µ_1 − µ_2‖*_L = sup_{‖f‖_L ≤ 1} |(f, µ_1) − (f, µ_2)|,   |µ_1 − µ_2|*_L = sup_{|g|_L ≤ 1} |(g, µ_1) − (g, µ_2)|,

where f ∈ L(H^1), g ∈ L(H), and we denote by ‖·‖_L and |·|_L the norms in the spaces L(H^1) and L(H), respectively.
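Although the paper contains no numerics, the size of the dual-Lipschitz distance is easy to get a feeling for. The sketch below is ours (the sampling setup is purely illustrative): it computes the Kantorovich–Wasserstein distance W1 between two equal-size empirical measures on R, which dominates the dual-Lipschitz distance because ‖f‖_L ≤ 1 forces Lip(f) ≤ 1.

```python
import numpy as np

# W1 between empirical measures of two equal-size samples on the real line
# equals the mean distance between the sorted samples (optimal monotone
# matching).  The dual-Lipschitz distance of the paper can only be smaller,
# since its test functions are 1-Lipschitz (and bounded by 1 as well).
def w1(x, y):
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    return float(np.abs(x - y).mean())
```

For instance, w1([0.0, 1.0], [0.5, 1.5]) evaluates to 0.5, while the corresponding dual-Lipschitz distance between the two empirical measures is no larger.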

2.1. Well-posedness and a priori estimates. Let D ⊂ R^n, n ≥ 1, be a bounded domain with smooth boundary ∂D. Consider the problem

    ∂_t u − (ν + iα)∆u + iβ|u|^{2σ}u = h(x) + η(t, x),  x ∈ D,   (4)
    u|_{∂D} = 0,   (5)
    u(0, x) = u_0(x).   (6)

Here ν > 0, α > 0, β ≥ 0, and σ ≥ 0 are some constants, h(x) is a deterministic function belonging to the Sobolev space of order 1, and η(t, x) is a random process white in time and smooth in x. More precisely, we assume that

    η(t, x) = ∂ζ/∂t (t, x),   ζ(t, x) = Σ_{j=1}^∞ b_j β_j(t) e_j(x),   (7)

where {β_j = β^1_j + iβ^2_j} is a sequence of complex-valued independent Brownian motions defined on a probability space (Ω, F, P) with right-continuous filtration F_t, {e_j} is a complete set of eigenvectors for the Dirichlet Laplacian in D with eigenvalues α_1 < α_2 ≤ α_3 ≤ ···, and b_j ≥ 0 are real constants such that

    B_0 := Σ_{j=1}^∞ b_j² < ∞.   (8)

Let H be the space of complex-valued square-integrable functions on D. We shall regard it as a real Hilbert space with the scalar product

    (u, v) = Re ∫_D u(x) v̄(x) dx.
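For readers who wish to experiment with the model, the following minimal Galerkin scheme for (4)–(7) in one space dimension is entirely our sketch: the domain D = (0, π), the coefficients b_j = j^{−2} (so that Σ α_j b_j² < ∞), the choice h ≡ 0, and the semi-implicit Euler discretisation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative Galerkin discretisation of the stochastic CGL equation
#   du - (nu + i*alpha)*Lap(u) dt + i*beta*|u|^{2 sigma} u dt = h dt + d(zeta)
# on D = (0, pi) with Dirichlet conditions, e_j = sqrt(2/pi) sin(jx), alpha_j = j^2.
rng = np.random.default_rng(0)
N, M = 16, 400                                    # number of modes / grid points
nu, alpha, beta, sigma = 0.1, 1.0, 1.0, 1.0
dt, steps = 1e-3, 2000
x = np.linspace(0.0, np.pi, M)
j = np.arange(1, N + 1)
E = np.sqrt(2.0/np.pi) * np.sin(np.outer(x, j))   # eigenfunctions e_j(x)
lam = j.astype(float)**2                          # eigenvalues alpha_j = j^2
b = 1.0 / j.astype(float)**2                      # b_j: sum alpha_j b_j^2 < infinity
w = np.pi / (M - 1)                               # crude quadrature weight
h = np.zeros(M)                                   # deterministic force (zero here)

c = np.zeros(N, dtype=complex)                    # Galerkin coefficients of u
for _ in range(steps):
    u = E @ c                                     # u(t, x) on the grid
    f = -1j * beta * np.abs(u)**(2*sigma) * u + h # nonlinearity + force
    fc = (E * w).T @ f                            # L^2-projection onto the e_j
    dW = b * (rng.standard_normal(N) + 1j*rng.standard_normal(N)) * np.sqrt(dt)
    c = (c + dt*fc + dW) / (1.0 + dt*(nu + 1j*alpha)*lam)  # implicit linear step
H0 = float(np.sum(np.abs(c)**2))                  # H_0(u) = ||u||^2
```

The quantity H0 tracks the L²-energy of the Galerkin approximation; treating the stiff linear part implicitly keeps the scheme stable for moderate time steps.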
Let us define the following continuous functionals on H^1:

    H_0(u) = ‖u‖²,   H_1(u) = ‖∇u‖² + (β/(σ+1)) ∫_D |u(x)|^{2σ+2} dx.

If X is a Banach space and J ⊂ R is a closed interval, then we denote by C(J, X) the space of continuous functions f : J → X and by L²_loc(J, X) the space of measurable functions f : J → X such that ∫_I ‖f(t)‖²_X dt < ∞ for any finite subinterval I ⊂ J. The following theorem establishes the well-posedness of problem (4)–(7). Its proof is carried out by standard methods and can be found in [14] for the case h ≡ 0, σ = 1, and 1 ≤ n ≤ 4; we refer the reader to [9, 18] for more general existence and uniqueness results for SPDE's.

Theorem 1. Let u_0 be an H^1-valued F_0-measurable random variable such that E H_1(u_0) < ∞. Suppose that h ∈ H^1(D, C) and that inequalities (9) and (10) are satisfied. Then the following statements hold.
(i) There is an F_t-adapted random process u(t) = u(t, x), t ≥ 0, almost every trajectory of which belongs to the space X := C(R_+; H^1) ∩ L²_loc(R_+; H^2) and satisfies Eqs. (4) and (6) in the sense that

    u(t) = u_0 + ∫_0^t ((ν + iα)∆u(s) − iβ|u(s)|^{2σ}u(s) + h) ds + ζ(t),  t ≥ 0,

where the left- and right-hand sides are regarded as elements of H.
(ii) The process u(t) constructed in (i) is unique in the sense that if ũ(t) is another random process satisfying (i), then, with probability 1, we have u(t) = ũ(t) for all t ≥ 0.

(iii) The random processes H_0(u(t)) and H_1(u(t)) possess stochastic differentials, which have the form (11) and (12), where the constant B_0 is defined in (8). Moreover, for any t ≥ 0, we have the estimates (13) and (14), where C > 0 is a constant depending only on α and β.

2.2. Exponential martingale inequalities.
For any function u(t) belonging to the space X (see Theorem 1), we set

    I_u(t) = H_0(u(t)) + ν ∫_0^t ‖∇u(s)‖² ds.

Proposition 1. Suppose that (9) and (10) hold. Then there exist positive constants K and γ such that the solution u(t) of (4)–(6) constructed in Theorem 1 satisfies the inequality

    P{ sup_{t≥0} (I_u(t) − Kt) ≥ H_0(u_0) + ρ } ≤ e^{−γρ},  ρ > 0.   (15)

Proof. We only outline the proof, which repeats the arguments used in [16, 13]. In view of (11), we have

    I_u(t) ≤ H_0(u_0) + Kt + M_t,  t ≥ 0,   (16)

where M_t denotes the martingale part of the stochastic differential of H_0(u(t)). Its quadratic variation satisfies the inequality

    ⟨M⟩_t ≤ C ∫_0^t H_0(u(s)) ds ≤ C (ν α_1)^{−1} I_u(t).

Combining this inequality with (16) and choosing γ > 0 sufficiently small, we obtain

    P{ sup_{t≥0} (I_u(t) − Kt) ≥ H_0(u_0) + ρ } ≤ P{ sup_{t≥0} exp(γM_t − γ²⟨M⟩_t/2) ≥ e^{γρ} }.   (17)

Since exp(γM_t − γ²⟨M⟩_t/2) is a supermartingale with mean value not exceeding 1, the classical supermartingale inequality (see Theorem III.6.11 in [17]) implies that the probability on the right-hand side of (17) can be estimated from above by e^{−γρ}.
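The supermartingale bound invoked at the end of the proof can be sanity-checked in the simplest situation M_t = Brownian motion, ⟨M⟩_t = t, γ = 1, where it reads P{sup_t (M_t − t/2) ≥ ρ} ≤ e^{−ρ}. The Monte Carlo sketch below is ours; the finite horizon and the discrete grid are illustrative truncations.

```python
import numpy as np

# Empirical check of  P( sup_t (B_t - t/2) >= rho ) <= exp(-rho)  for a
# standard Brownian motion B (the exponential supermartingale bound with
# gamma = 1), truncated to the horizon T = 10 and a discrete time grid.
rng = np.random.default_rng(1)
paths, steps, dt, rho = 5000, 1000, 0.01, 1.0
dB = rng.standard_normal((paths, steps)) * np.sqrt(dt)
B = np.cumsum(dB, axis=1)
t = dt * np.arange(1, steps + 1)
p_emp = float(((B - t/2.0).max(axis=1) >= rho).mean())
bound = float(np.exp(-rho))
se = (p_emp * (1.0 - p_emp) / paths) ** 0.5      # Monte Carlo standard error
assert p_emp <= bound + 3.0 * se                 # bound holds up to MC noise
```

For ρ = 1 the bound e^{−ρ} ≈ 0.368 is nearly attained as T → ∞, so the empirical frequency lands just below it.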
We now consider the functional

    Ĩ_u(t) = H_1(u(t)) + ν ∫_0^t ‖u(s)‖²_2 ds,

which plays for the H¹-norm the role played by I_u(t) for the L²-norm.

Proposition 2. Suppose that (10) holds and that σ ≤ (2/(n − 2)) ∧ 1. Then there is a constant p ≥ 2 such that, for any T > 0 and ρ > 0, the solution u(t) of (4)–(6) satisfies inequality (18), where C, K, and γ are positive constants not depending on u_0 and ρ.
Proof. We repeat the scheme used in the proof of Proposition 1. The difference is that the quadratic variation of the corresponding martingale cannot be estimated in terms of I_u(t).
Step 1. In view of (12) and an auxiliary interpolation inequality for the nonlinear term, we obtain relations (19) and (20). Suppose we have found p ≥ 2 such that inequality (21) holds; here and henceforth, we denote by C_i unessential positive constants. Combining (19)–(21), we see that inequality (22) holds with a sufficiently small c > 0. In view of (15), there are positive constants K_1 and γ_1 for which estimate (23) is satisfied. Furthermore, as in the proof of Proposition 1, the supermartingale inequality implies the bound (24). Comparing (22)–(24), we arrive at the desired inequality (18).
Step 2. To prove (21), we write the quantity on its left-hand side in the form (25). Thus, the required inequality (21) will be established if we show that (26) holds. To simplify formulas, we confine ourselves to the case n = 4 and σ = 1. In view of the Gagliardo–Nirenberg inequality (see Theorem 6.4.1 in [7]), the norm of the nonlinear term entering (26) is controlled by the H¹- and H²-norms of the solution, and this implies inequality (26) with p = 10.
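For orientation, here is one way the Gagliardo–Nirenberg step can be carried out for n = 4, σ = 1; this is our reconstruction of the numerology, not the paper's display, and the constants are generic.

```latex
% In dimension n = 4 one has H^{4/3}(D) \hookrightarrow L^{6}(D), and the
% H^{4/3}-norm interpolates between the H^1- and H^2-norms:
\|u\|_{L^6} \le C\,\|u\|_{H^{4/3}}
           \le C\,\|u\|_{H^1}^{2/3}\,\|u\|_{H^2}^{1/3},
\qquad\text{whence}\qquad
\|u\|_{L^6}^{6} \le C^{6}\,\|u\|_{H^1}^{4}\,\|u\|_{H^2}^{2}.
```

After integration in time, the factor involving the H²-norm can be absorbed by the dissipative term via Young's inequality, at the price of a higher power of the H¹-norm; an exponent such as p = 10 arises from this bookkeeping.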
3. Uniqueness of stationary distribution and mixing.
3.1. Main result. Throughout this section, we assume that the parameter σ ≥ 0 satisfies the inequalities (27) (cf. (9)). Let us denote by (u_t, P_u) the family of Markov processes associated with the problem (4)–(6) and parametrised by the deterministic initial condition u(0) = u ∈ H^1. Let P(t, u, Γ), t ≥ 0, u ∈ H^1, Γ ∈ B(H^1), be the transition function for the family (u_t, P_u), and let P_t : C_b(H^1) → C_b(H^1) and P*_t : P(H^1) → P(H^1) be the corresponding Markov semigroups. The following theorem is the main result of this paper.

Theorem 2. Suppose that conditions (10) and (27) are satisfied and that

    b_j ≠ 0 for all j ≥ 1.   (28)
Then for any ν > 0 the Markov family (u_t, P_u) has a unique stationary measure µ ∈ P(H^1). Moreover, the measure µ is mixing in the sense that, for any λ ∈ P(H^1), we have

    P*_t λ → µ as t → ∞

in the dual-Lipschitz metric ‖·‖*_L. A proof of Theorem 2 is given in the next subsection. Here we outline the main ideas. To simplify the presentation, in what follows we confine ourselves to the case n = 3 or 4, which is the most difficult.
Let (u_t, IP_u) be a pair of independent copies of the family (u_t, P_u). In other words, (u_t, IP_u) is a family of Markov processes in the product space H^1 × H^1, with points u = (u, u′), whose transition function is given by the formula

    IP(t, u, Γ_1 × Γ_2) = P(t, u, Γ_1) P(t, u′, Γ_2),  Γ_1, Γ_2 ∈ B(H^1).

Let G_m = B_{H^1}(1/m) × B_{H^1}(1/m), where B_{H^1}(r) stands for the closed ball in H^1 of radius r centred at the origin, and denote by τ_m the first hitting time of G_m:

    τ_m = inf{ t ≥ 0 : u_t ∈ G_m }.

In view of Theorem 1.2 in [22], the required result will be established if we show that the following two properties hold.

(P1) For any u = (u, u′) ∈ H^1 × H^1 and m ≥ 1, we have IP_u{τ_m < ∞} = 1.
(P2) There is a constant T > 0 and a sequence δ_m > 0 going to zero as m → ∞ such that

    sup_{t≥T} ‖P(t, u, ·) − P(t, u′, ·)‖*_L ≤ δ_m for any u = (u, u′) ∈ G_m.   (29)
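Properties (P1) and (P2) have an elementary finite-dimensional caricature: for an irreducible aperiodic Markov chain, the laws issued from different states attract each other, which forces uniqueness of the stationary measure and mixing. The 3-state example below is ours (the matrix entries are arbitrary):

```python
import numpy as np

# For a finite ergodic chain, ||P(t,x,.) - P(t,y,.)||_var -> 0 as t grows,
# and every row of P^t converges to the unique stationary measure mu = mu P.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
Pt = np.linalg.matrix_power(P, 50)
tv = 0.5 * float(np.abs(Pt[0] - Pt[1]).sum())   # TV distance between two rows
mu = Pt[0]                                      # approximate stationary law
assert tv < 1e-10
assert np.allclose(mu @ P, mu, atol=1e-10)
```

The contraction of the rows of P^t is the discrete analogue of (P2); irreducibility, which guarantees that any two copies of the chain meet every part of the space, plays the role of (P1).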

ARMEN SHIRIKYAN
The proof of (P1) literally repeats the arguments used in [22] for the case of the 2D Navier–Stokes system, and we shall not dwell on it. Let us sketch the proof of (P2) (cf. Step 4 in [22, Section 3.1]). Denote by (Ω, F, P) the probability space associated with the problem (4)–(7), and let u_t(ω, u) be the solution of (4), (5) issued from the initial point u ∈ H^1. The proof of (P2) is based on the two propositions below. The first of them enables one to estimate the distance between the distributions of solutions of (4), (5) with different initial points. Results of this type were obtained in [11, 12] for discrete-time forces and in [4] for white noise, and were later developed in a number of works. Our presentation is close to that of the papers [15, 6], in which the closeness of distributions is described with the help of a transformation of the underlying probability space.
Proposition 3. For any δ > 0 there is ε > 0 such that, for any u ∈ B_{H^1}(ε), one can find a measurable transformation Ψ_u : Ω → Ω satisfying the inequalities

    P{ sup_{t≥0} ‖u_t(ω, u) − u_t(Ψ_u(ω), 0)‖ > δ } ≤ δ,   (30)
    ‖Ψ_u*(P) − P‖_var ≤ δ,   (31)

where Ψ_u*(P) stands for the image of P under the transformation Ψ_u and ‖·‖_var is the total variation norm.
In other words, if the initial function u ∈ H^1 is sufficiently small, then, with high probability, the solution is close (in the L²-norm) to the trajectory starting from zero and corresponding to a different value of the random parameter, which is denoted by Ψ_u(ω). Moreover, the transformation ω ↦ Ψ_u(ω) almost preserves the probability measure P.
Using (30) and (31), one easily shows that

    sup_{t≥0} |D(u_t) − D(ũ_t)|*_L ≤ Cδ,   (32)

where u_t = u_t(ω, u), ũ_t = u_t(ω, 0), and |·|*_L stands for the dual-Lipschitz metric for the norm in H (see Notation). It follows from (32) that, for any initial functions u, u′ ∈ B_{H^1}(ε), the distributions of the corresponding solutions u_t and u′_t satisfy the inequality

    sup_{t≥0} |D(u_t) − D(u′_t)|*_L ≤ δ,   (33)

where δ = δ(ε) > 0 goes to zero with ε. Inequality (33) and a priori estimates for solutions imply that (29) will be established if we prove a continuity property of the Markov semigroup P*_t with respect to appropriate norms. For any measure µ ∈ P(H^1), we set

    m_p(µ) = ∫_{H^1} ‖u‖_1^p µ(du).

Proposition 4. For any positive constants C and γ there is δ > 0 such that if measures µ_1, µ_2 ∈ P(H^1) satisfy the inequalities

    m_p(µ_1) ∨ m_p(µ_2) ≤ C,   |µ_1 − µ_2|*_L ≤ δ,   (34)

then ‖P*_1 µ_1 − P*_1 µ_2‖*_L ≤ γ. Combining Proposition 4 with inequality (33), we conclude that Property (P2) holds with T = 1.
The rest of this section is organised as follows. In Section 3.2, we give a detailed proof of the fact that Propositions 3 and 4 imply Property (P 2 ). Propositions 3 and 4 are established in Sections 3.3 and 3.4, respectively.
3.2. Verification of Property (P2). Step 1. Let u = (u, u′) ∈ G_m, and let u_t and u′_t be the solutions of (4), (5) issued from u and u′, respectively. To prove (29) with T = 1, it suffices to show that

    sup_{t≥0} |D(u_t) − D(u′_t)|*_L ≤ ε_m,  where ε_m → 0 as m → ∞,   (36)

and the convergence is uniform with respect to u ∈ G_m. Indeed, it follows from (14) and the Gronwall inequality that

    m_p(D(v_t)) = E ‖v_t‖_1^p ≤ C_* for all t ≥ 0,

where v_t stands for the solution of (4), (5) with the initial condition v, and the constant C_* is uniform over v in bounded subsets of H^1. Therefore, applying Proposition 4 to the measures µ_1 = D(u_t) and µ_2 = D(u′_t) with an arbitrary t ≥ 0 and using (36), we conclude that

    ‖D(u_{t+1}) − D(u′_{t+1})‖*_L = ‖P*_1 D(u_t) − P*_1 D(u′_t)‖*_L ≤ δ_m for all t ≥ 0,

where δ_m → 0 as m → ∞. This is equivalent to (29).
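The Gronwall step invoked above has, in our reading, the standard differential form; the identification of φ with the expected energy controlled by (14) is our gloss.

```latex
% Gronwall's lemma: if \varphi is absolutely continuous and
% \varphi'(t) \le a\,\varphi(t) + b for t \ge 0 with constants a, b > 0, then
\varphi(t) \;\le\; e^{at}\,\varphi(0) \;+\; \frac{b}{a}\,\bigl(e^{at}-1\bigr),
\qquad t \ge 0 .
```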
Step 2. Convergence (36) will be established if we show that

    sup_{t≥0} |D(u_t) − D(ũ_t)|*_L → 0 as ‖u‖_1 → 0,   (37)

where ũ_t stands for the solution of (4), (5) with zero initial condition. To prove this, we use Proposition 3. Applying Lemma 3.4 in [22] to the random variables u_t(ω) and ũ_t(ω) and to the transformation Ψ_u defined in Proposition 3, we see that

    |D(u_t) − D(ũ_t)|*_L ≤ Cδ + ‖Ψ_u*(P) − P‖_var,  t ≥ 0, u ∈ B_{H^1}(ε),

where ε = ε(δ) > 0 is sufficiently small. Combining this inequality with (31), we conclude that (32) holds for u ∈ B_{H^1}(ε). Since δ > 0 is arbitrary, we arrive at (37).

3.3. Proof of Proposition 3.
We first note that the underlying probability space (Ω, F, P) plays no role in the statement of Theorem 2. Therefore, we can assume from the very beginning that it possesses the following properties:
• Ω coincides with the space of functions u ∈ C(R_+, H) that vanish at t = 0;
• Ω is endowed with the topology of uniform convergence on compact intervals J ⊂ R_+, and B(Ω) stands for the Borel σ-algebra on Ω;
• P is the distribution of the random process ζ defined in (7), and F is the completion of B(Ω) with respect to P.
In this case, we can assume, without loss of generality, that ζ(t) = ω_t for all ω ∈ Ω and t ≥ 0.
The proof of Proposition 3 is divided into several steps.
Step 1: Construction of Ψ_u. We shall need an auxiliary result on the solvability of the projection of (4) onto subspaces of finite codimension. For any N ≥ 1, denote by H_N the 2N-dimensional vector space spanned by {e_j, ie_j, 1 ≤ j ≤ N} and by H⊥_N its orthogonal complement in H. Consider the problem

    ∂_t w − (ν + iα)∆w + F⊥_N(v + w) = Q_N h + Q_N η(t),   (38)
    w(0) = w_0.   (39)

Here w_0 ∈ H⊥_N and v ∈ C(R_+, H_N) are given functions, F⊥_N : H^1 → H⊥_N is a continuously differentiable function defined as F⊥_N(u) = iβ Q_N(|u|^{2σ}u), and we denote by P_N and Q_N the orthogonal projections in H onto the subspaces H_N and H⊥_N, respectively.

Lemma 1. Under the conditions of Theorem 1, there is a set Ω_0 ∈ F of full measure such that the following assertions hold for any ω ∈ Ω_0.
(i) For any v ∈ C(R_+, H_N) and w_0 ∈ H⊥_N, problem (38), (39) has a unique solution w_t belonging to the space

    X_N := C(R_+; H^1 ∩ H⊥_N) ∩ L²_loc(R_+; H^2 ∩ H⊥_N).

(ii) Let W : (v, w_0, ω) ↦ w, where w ∈ X_N is the solution of (38), (39). Then W is uniformly Lipschitz continuous in (v, w_0) on bounded subsets for any fixed ω ∈ Ω and is measurable with respect to (v, w_0, ω).
The proof of Lemma 1 is similar to that of Theorem 1 and is omitted. In what follows, we denote by W_t(v, w_0, ω) the value of the function W(v, w_0, ω) at time t.
To construct Ψ_u : Ω → Ω, let us choose a smooth function θ on R such that θ_t = 1 for t ≤ 0 and θ_t = 0 for t ≥ 1. For any u ∈ H^1, we set

    ũ_t(ω) = u_t(ω, u) − θ_t u,

where u_t(ω, u) denotes the solution of (4), (5) issued from u. We now define Ψ_u by the relations (40)–(42), where ω ∈ Ω and Θ_t = ∫_0^t θ_s ds.

Step 2: Proof of (30). Lemma 1 implies that problem (4)–(6) is equivalent to a system for the components (v_t, w_t) = (P_N u_t, Q_N u_t) of the solution u_t. Repeating literally the arguments used in [22] (see Step 7 in Section 3.1), we easily show that

    p_t(ω) = P_N u_t(ω, u) − θ_t P_N u for a.e. ω ∈ Ω,

where p_t denotes the H_N-component of the trajectory corresponding to Ψ_u(ω).
Step 3: Proof of (31). We follow the scheme used in [4, 13]. Let us write Ψ_u in the form

    Ψ_u(ω)_t = ω_t + ∫_0^t ϕ_s(ω, u) ds,

where ϕ_t is an H_N-valued function defined as

    ϕ_t(ω, u) = −θ̇_t P_N u + (ν + iα) θ_t ∆P_N u + D_t(ω, u).

Introduce the functions entering the truncation rule below. Let us fix a parameter ρ > 0 and define a truncating function χ^ρ by the following rule: χ^ρ_t(ω, u) = 1 if the energy functionals of the trajectory up to time t stay below the thresholds determined by the constants K and C in inequality (15) and Proposition 5, respectively, and χ^ρ_t(ω, u) = 0 otherwise. Along with Ψ_u, let us consider the truncated transformation

    Ψ̂_u(ω)_t = ω_t + ∫_0^t χ^ρ_s(ω, u) ϕ_s(ω, u) ds.

In view of the triangle inequality and an elementary property of the total variation distance, we have

    ‖Ψ_u*(P) − P‖_var ≤ ‖Ψ̂_u*(P) − P‖_var + 2 P{Ψ_u ≠ Ψ̂_u}.   (46)

Propositions 1 and 2 and the definition of χ^ρ imply that

    P{Ψ_u ≠ Ψ̂_u} → 0 as ρ → ∞, uniformly in u ∈ B_{H^1}(1).   (47)

To estimate the first term on the right-hand side of (46), we use Proposition 6 (see Section 4.1). We claim that if N is such that inequality (44) holds with a sufficiently large C > 0, then inequality (48) holds, where C_{N,ρ}, c, and r are some positive constants not depending on u. Once this inequality is established, condition (63) is satisfied, and (64) implies that, for any fixed ρ > 0, the first term on the right-hand side of (46) goes to zero as ‖u‖_1 → 0. Combining this with (46) and (47), we arrive at inequality (31), in which δ → 0 as ‖u‖_1 → 0. Thus, it remains to prove (48).

Substitution of this inequality into (53) results in a differential inequality for the quantity under study; applying the Gronwall inequality, we obtain (54).

Step 2. We now take any function f ∈ L(H^1) with norm ‖f‖_L ≤ 1 and note that the difference (f, P*_1 µ_1) − (f, P*_1 µ_2) can be written in the form (55). Introduce the event

    Γ_R = { sup_{0≤t≤1} ‖u_t‖_1 ≥ R }.

It follows from the first inequality in (34) and Proposition 2 that P(Γ_R) → 0 as R → ∞. Choosing a sufficiently large R, we conclude from (54) and (55) that the required estimate holds on the complement of Γ_R. Combining this with (51) and recalling that f was arbitrary, we arrive at the required result.

4. Appendix.
4.1. Foiaş–Prodi type estimate. In this subsection, we study problem (4)–(6) in which η is a deterministic function of the form (cf. (7))

    η(t, x) = ∂ζ/∂t (t, x),

where ζ is a continuous function from R_+ to H^1. Recall that {e_j} ⊂ H is the complete set of eigenfunctions of the Dirichlet Laplacian in the domain D, H_N is the 2N-dimensional subspace generated by {e_j, ie_j, 1 ≤ j ≤ N}, and H⊥_N is the orthogonal complement of H_N in H. Denote by P_N : H → H_N and Q_N : H → H⊥_N the corresponding orthogonal projections.
The following result provides a Foiaş–Prodi type estimate for the difference between two solutions whose projections onto H_N coincide (cf. [5]).
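To indicate the mechanism behind such estimates, here is a heuristic computation of ours (not the paper's statement). Let u and u′ be two solutions with P_N u_t = P_N u′_t for all t, and set w = u − u′ ∈ H⊥_N, so that ‖∇w‖² ≥ α_{N+1}‖w‖². Since the forcing terms cancel in the difference,

```latex
\frac{d}{dt}\,\|w\|^{2}
  = -2\nu\,\|\nabla w\|^{2}
    \;-\; 2\bigl(w,\; i\beta\,(|u|^{2\sigma}u - |u'|^{2\sigma}u')\bigr)
  \;\le\; \bigl(-2\nu\,\alpha_{N+1} + C(\|u\|_{1},\|u'\|_{1})\bigr)\,\|w\|^{2},
```

where the bound on the nonlinear term uses Sobolev embeddings and is valid for subcritical σ. Hence ‖w_t‖ decays exponentially as soon as N is so large that 2να_{N+1} dominates the nonlinear contribution along the two trajectories.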

4.2. Girsanov theorem. Let (Ω, F, P) be the probability space described at the beginning of Section 3.3, and let {F_t, t ≥ 0} be the natural filtration on Ω augmented with respect to (F, P) (see [8]). Suppose that N ≥ 1 is an integer and d_t(ω) is an H_N-valued measurable function on R_+ × Ω that is adapted to F_t and satisfies condition (63). Let K be the bounded linear operator in H defined by the relations Ke_j = b_j e_j and K(ie_j) = ib_j e_j for all j ≥ 1. In view of condition (28), the operator K is injective, and its inverse K^{−1} is well defined on H_N. The following result is a straightforward consequence of the Girsanov theorem (see [20]).
Then, for the transformation Φ : Ω → Ω given by Φ(ω)_t = ω_t + ∫_0^t d_s(ω) ds, the total variation distance between P and Φ*(P) admits the estimate (64).

Proof. We define a Brownian motion in H_N by the formula B_t(ω) = K^{−1} P_N ω_t and introduce the measurable function

    ρ(ω) = exp( −∫_0^∞ (K^{−1} d_t, dB_t) − ½ ∫_0^∞ ‖K^{−1} d_t‖² dt ).

In view of the Girsanov theorem (see Theorem 8.6.4 in [20]), the function ρ is the density (with respect to P) of a probability measure on Ω, and the random process

    B̃_t = B_t + ∫_0^t K^{−1} d_s ds

is a Brownian motion with respect to the measure P̃(dω) = ρ(ω) P(dω). It follows that the distribution of Φ(ω) under the law P̃ coincides with P. Therefore,

    ‖Φ*(P) − P‖_var = ‖Φ*(P) − Φ*(P̃)‖_var ≤ ‖P − P̃‖_var = E |1 − ρ|,

which results in (64).
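A scalar illustration of ours (not from the paper) of how a Girsanov shift is paid for in total variation: adding a drift a(t) with D = ∫_0^T a² dt to a Brownian motion changes its path law by at most √D / 2 in total variation (Pinsker's inequality with KL = D/2, and TV normalised to take values in [0, 1]). The time-T marginal gives an exact lower comparison point.

```python
import math

# Constant drift a on [0, T]: the path laws have KL divergence D/2 with
# D = a^2 * T, so Pinsker gives  TV <= sqrt((D/2)/2) = sqrt(D)/2.
# The time-T marginals are N(0, T) and N(a*T, T), whose exact TV distance
# is 2*Phi(a*sqrt(T)/2) - 1, a lower bound for the TV of the path laws.
a, T = 0.4, 1.0
D = a * a * T
pinsker = math.sqrt(D) / 2.0
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
tv_endpoint = 2.0 * Phi(a * math.sqrt(T) / 2.0) - 1.0
assert tv_endpoint <= pinsker
```

With a = 0.4 and T = 1, the marginal TV distance is about 0.159, comfortably below the Girsanov–Pinsker bound 0.2; both quantities vanish as the drift (here, the function d_t of the proposition) tends to zero, which is exactly how (64) is used in Section 3.3.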