A conservative evolution of the Brownian excursion

We consider the problem of conditioning the Brownian excursion to have a fixed time average over the interval [0,1] and we study an associated stochastic partial differential equation with reflection at 0 and with the constraint of conservation of the space average. The equation is driven by the derivative in space of a space-time white noise and contains a double Laplacian in the drift. Due to the lack of the maximum principle for the double Laplacian, the standard techniques based on the penalization method do not yield existence of a solution.


1. Introduction
The aim of this paper is to construct a stochastic evolution whose invariant measure is the law of the Brownian excursion $(e_\theta,\ \theta\in[0,1])$ conditioned to have a fixed average $\int_0^1 e_\theta\,d\theta = c > 0$ over the interval $[0,1]$.
Since the distribution of the random variable $\int_0^1 e_\theta\,d\theta$ is non-atomic, and the Brownian excursion is not a Gaussian process, it is already not obvious that such a conditioning is well defined. The first part of the paper will be dedicated to this problem: we shall write down the density of the random variable $\int_0^1 e_\theta\,d\theta$ and a regular conditional distribution of the law of $(e_\theta,\ \theta\in[0,1])$, given $\int_0^1 e_\theta\,d\theta$. The same will be done for the Brownian meander $(m_\theta,\ \theta\in[0,1])$.
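Before doing so analytically, the law of $\int_0^1 e_\theta\,d\theta$ can be explored by simulation. The following sketch (illustrative only, not from the paper; grid size and sample count are arbitrary choices) uses the classical identity in law between the normalized excursion and the $3$-dimensional Bessel bridge, i.e. the Euclidean norm of three independent Brownian bridges; the empirical mean should be close to $E\int_0^1 e=\sqrt{\pi/8}\approx 0.627$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 400, 4000                      # time steps, Monte Carlo samples
t = np.linspace(0.0, 1.0, n + 1)

# Brownian excursion = 3-d Bessel bridge = norm of three independent
# Brownian bridges from 0 to 0 (a classical identity in law).
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(N, 3, n))
W = np.concatenate([np.zeros((N, 3, 1)), np.cumsum(dW, axis=2)], axis=2)
bridge = W - t * W[:, :, -1:]         # pin each component at t = 1
e = np.linalg.norm(bridge, axis=1)    # excursion paths, shape (N, n+1)

# Discretized time averages: samples of the random variable int_0^1 e.
areas = e.mean(axis=1)
print(areas.mean())                   # close to sqrt(pi/8) ~ 0.627
print(areas.std())                    # visible spread: the law is non-atomic
```

The positive empirical standard deviation is consistent with the non-atomicity claimed above.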
After this is done, we shall turn to the problem of constructing a natural stochastic dynamics associated with the constructed conditioned laws. We recall that a stochastic dynamics whose invariant measure is the law of the Brownian excursion has been studied in [8] and [10], where a stochastic partial differential equation with reflection and driven by space-time white noise is proven to be well posed and associated with a Dirichlet form with reference measure given by the law of the Brownian excursion.
In the present case, we shall see that a natural dynamics with the desired properties solves a stochastic partial differential equation of the fourth order, with reflection, driven by the derivative in space of a space-time white noise:
$$
\begin{cases}
\dfrac{\partial u}{\partial t}=-\dfrac{\partial^2}{\partial\theta^2}\Big(\dfrac{\partial^2u}{\partial\theta^2}+\eta\Big)+\sqrt2\,\dfrac{\partial^2W}{\partial t\,\partial\theta},\\[4pt]
u(t,0)=u(t,1)=\dfrac{\partial^3u}{\partial\theta^3}(t,0)=\dfrac{\partial^3u}{\partial\theta^3}(t,1)=0,\\[4pt]
u\ge0,\qquad d\eta\ge0,\qquad \displaystyle\int u\,d\eta=0.
\end{cases}\tag{1.1}
$$
This kind of equation arises as the scaling limit of fluctuations of conservative interface models on a wall, as shown in [12], where however different boundary conditions are considered. Indeed, notice that the boundary conditions in (1.1) are mixed, i.e. Dirichlet for $u$ and Neumann for $\frac{\partial^2u}{\partial\theta^2}$. In [6] and [12] a similar equation, with Neumann boundary conditions for both $u$ and $\frac{\partial^2u}{\partial\theta^2}$, has been studied, together with the scaling limit of the interface models mentioned above. In that case it was possible to prove pathwise uniqueness and existence of strong solutions for the SPDE. In the case of (1.1) we can only prove existence of weak solutions, since we have not been able to obtain a uniqueness result: see subsection 2.6 below.
The dynamics is in any case uniquely determined by a natural infinite-dimensional Dirichlet form on the path space of the Brownian excursion, with which it is associated: see Theorem 2.3 below.
We also notice that the Brownian meander $(m_\theta,\ \theta\in[0,1])$ conditioned to have a fixed average, and the density of $\int_0^1 m_\theta\,d\theta$, appear in an infinite-dimensional integration by parts formula in [6, Corollary 6.2].

2. The main results
In this section we want to present the setting and the main results of this paper.
2.1. Conditioning the Brownian excursion to have a fixed time average. Let $(e_t,\ t\in[0,1])$ be the normalized Brownian excursion, see [9], and $(\beta_t,\ t\in[0,1])$ a Brownian bridge between 0 and 0. Let $\{m,\hat m,b\}$ be a triple of processes such that: (1) $m$ and $\hat m$ are independent copies of a Brownian meander. From this triple we introduce continuous processes and we set, for all $\omega\in C([0,1])$, the functional $\Gamma^\omega$ defined in (3.4) below. The first result of this paper is Theorem 2.1, which yields in particular
$$P\big(\langle e,1\rangle\in dc\big)=p_{e,1}(c)\,1_{\{c\ge0\}}\,dc,\qquad \langle e,1\rangle:=\int_0^1 e_\theta\,d\theta.$$
In section 9 below we state and prove analogous results for the Brownian meander.

2.2. Two Hilbert spaces.
For the study of the stochastic partial differential equation (1.1) we need to introduce some notation. We denote by $\langle\cdot,\cdot\rangle_L$ the canonical scalar product in $L^2(0,1)$:
$$\langle h,k\rangle_L:=\int_0^1h_\theta\,k_\theta\,d\theta.$$
We denote by $A$ the realization in $L^2(0,1)$ of $\partial^2_\theta$ with Neumann boundary condition at $0$ and $1$, i.e.:
$$D(A):=\{h\in H^2(0,1):\ h'(0)=h'(1)=0\},\qquad Ah:=h''.$$
Notice that $A$ is self-adjoint in $L^2(0,1)$. We introduce a notation for the average of $h\in L^2(0,1)$, $\bar h:=\langle h,\mathbf1\rangle_L$, and we set for all $c\in\mathbb R$:
$$L^2_c:=\{h\in L^2(0,1):\ \bar h=c\}.$$
We also introduce an operator $Q$ on $L^2(0,1)$; a direct computation shows that for all $h\in L^2(0,1)$:
$$-AQh=h-\bar h,\qquad \overline{Qh}=\bar h,$$
i.e. $Q$ is the inverse of $-A$ on $L^2_0$ and conserves the average. Then we define $H$ as the completion of $L^2(0,1)$ with respect to the scalar product
$$(h,k)_H:=\langle Qh,k\rangle_L.$$
For all $c\in\mathbb R$ we also set $H_c:=$ the closure of $L^2_c$ in $H$. We remark that $H$ is naturally interpreted as a space of distributions, in particular as the dual space of $H^1(0,1)$. We also need a notation for the realization $A_D$ in $L^2(0,1)$ of $\partial^2_\theta$ with Dirichlet boundary condition at $0$ and $1$, i.e.:
$$D(A_D):=H^2(0,1)\cap H^1_0(0,1),\qquad A_Dh:=h''.$$
Notice that $A_D$ is self-adjoint and invertible in $L^2(0,1)$; we set $Q_D:=(-A_D)^{-1}$, given by the symmetric kernel $\theta\wedge\sigma-\theta\sigma$, $\sigma,\theta\in[0,1]$.
2.3. Weak solutions of (1.1). We state now the precise meaning of a solution to (1.1).

2.4. Function spaces.
Notice that for all $c\in\mathbb R$, $H_c=c\mathbf1+H_0$ is a closed affine subspace of $H$, isomorphic to the Hilbert space $H_0$. If $J$ is a closed affine subspace of $H$, we denote by $C_b(J)$, respectively $C^1_b(J)$, the space of all bounded continuous functions on $J$, resp. bounded and continuous together with the first Fréchet derivative. We also denote by $\mathrm{Lip}(J)$ the set of all $\varphi\in C_b(J)$ such that
$$[\varphi]_{\mathrm{Lip}(J)}:=\sup_{x,y\in J,\ x\ne y}\frac{|\varphi(x)-\varphi(y)|}{\|x-y\|_H}<+\infty.$$
The important point in the definition of the space $\mathrm{Exp}_A(H)$ of smooth cylindrical functions is that we only allow derivatives along vectors in $H_0$, and the gradient $\nabla_{H_0}\varphi$ is correspondingly an element of $H_0$. In particular, by the definition of the scalar product in $H$, each $\varphi\in\mathrm{Exp}_A(H)$ is also Fréchet differentiable in the norm of $L^2(0,1)$; then, denoting by $\nabla\varphi$ the gradient in the Hilbert structure of $L^2(0,1)$, we have $\nabla_{H_0}\varphi=-A\nabla\varphi$. Then the second result of this paper is:
Theorem 2.3. (a) The bilinear form $\mathcal E=\mathcal E_{\nu_c,\|\cdot\|_{H_0}}$, given by
$$\mathcal E(\varphi,\psi):=\int\langle\nabla_{H_0}\varphi,\nabla_{H_0}\psi\rangle_{H_0}\,d\nu_c,\qquad \varphi,\psi\in\mathrm{Exp}_A(H),$$
is closable in $L^2(\nu_c)$, and its closure $(\mathcal E,D(\mathcal E))$ is a symmetric Dirichlet form; furthermore, the associated semigroup is symmetric in $L^2(\nu_c)$.
By Theorem 2.3, we have a Markov process which solves (1.1) weakly and whose invariant measure is the law of $e$ conditioned to have average equal to $c$.

2.6. Remarks on uniqueness of solutions to (1.1).
We expect equation (1.1) to have pathwise-unique solutions, since this is typically the case for monotone gradient systems: this is always true in finite dimensions, see [3], and has been proven in several interesting infinite-dimensional situations, see [8] and [6]. In the present situation, the difficulty we encountered in the proof of uniqueness for (1.1) is the following: because of the boundary conditions $u(t,0)=u(t,1)=0$ and of the reflection at $0$, it is expected that the reflecting measure $\eta$ has infinite mass on $]0,T]\times[0,1]$; this is indeed true for second-order SPDEs with reflection: see [11]. If this is the case, then it becomes necessary to localize in $]0,1[$ in order to prove a priori estimates; however, in doing so one loses the crucial property that the average is constant. In short, we were not able to overcome these two problems.

3. Conditioning e on its average

3.1. An absolute continuity formula.
Let $(X_t)_{t\in[0,1]}$ be a continuous centered Gaussian process with covariance function $q_{t,s}:=E[X_tX_s]$. We have in mind the case of $X$ being a Brownian motion or a Brownian bridge. In this section we consider two processes $Y$ and $Z$, both defined by linear transformations of $X$, and we write an absolute continuity formula between the laws of $Y$ and $Z$.

3.2. Proof of Theorem 2.1.
If $(X,Y)$ is a centered Gaussian vector, with $X$ a process and $Y$ a real random variable which is not a.s. constant, then it is well known that a regular conditional distribution of $X$ given $Y=y\in\mathbb R$ is given by the law of
$$X+\frac{\mathrm{Cov}(X,Y)}{\mathrm{Var}(Y)}\,(y-Y).$$
We apply this property to $X=(\beta_t,\ t\in[0,1])$ and to $Y=\int_0^1\beta$. Notice that for all $t\in[0,1]$:
$$\mathrm{Cov}\Big(\beta_t,\int_0^1\beta\Big)=\int_0^1(t\wedge s-ts)\,ds=\frac{t(1-t)}2,\qquad \mathrm{Var}\Big(\int_0^1\beta\Big)=\frac1{12}.$$
Therefore, for all $c\in\mathbb R$, a regular conditional distribution of the law of $\beta$ conditioned to $\int_0^1\beta=c$ is given by the law of the process:
$$\beta^c_t:=\beta_t+6\,t(1-t)\Big(c-\int_0^1\beta\Big),\qquad t\in[0,1].$$
Proof. We shall show that we are in the situation of Lemma 3.1 with $X=\beta$, $Y=\beta^c$ and $Z=\Gamma^\beta$. In the notation of Lemma 3.1, we take $\kappa:=\sqrt{12}\,c$. The desired result then follows by tedious direct computations and from Lemma 3.1.
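The construction of $\beta^c$ can be checked by a quick simulation (a sketch under the Gaussian-conditioning computations above: $\mathrm{Cov}(\beta_t,\int_0^1\beta)=t(1-t)/2$ and $\mathrm{Var}(\int_0^1\beta)=\frac1{12}$ give the correction profile $6t(1-t)$, whose integral is $1$, so every path of $\beta^c$ has time average exactly $c$):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, c = 5000, 200, 0.7
t = np.linspace(0.0, 1.0, n + 1)

# N Brownian bridges from 0 to 0 on [0, 1].
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(N, n))
W = np.concatenate([np.zeros((N, 1)), np.cumsum(dW, axis=1)], axis=1)
beta = W - t * W[:, -1:]

def avg(paths):
    # trapezoidal approximation of the time average over the uniform grid
    return (paths[:, :-1] + paths[:, 1:]).sum(axis=1) / (2.0 * n)

I = avg(beta)
# Empirical check of Cov(beta_{1/2}, int beta) = (1/2)(1/2)/2 = 1/8.
print(np.cov(beta[:, n // 2], I)[0, 1])

# beta^c: add the profile a(t) = 6 t (1 - t), scaled by the missing average
# (normalizing by the discrete average of a makes the identity exact on the grid).
a = 6.0 * t * (1.0 - t)
bc = beta + np.outer((c - I) / avg(a[None, :]), a)
print(np.abs(avg(bc) - c).max())      # every path of beta^c has average c
```

The second printed number is at floating-point level, reflecting the exact cancellation in the conditioning formula.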
We introduce some auxiliary quantities and, denoting the density of $N(0,t)(dy)$ by $p_t(y)$, we use the Markov property of $\beta$; then, recalling the definition of $\rho_c$ above, Lemma 3.1 and Lemma 3.2 yield, together with (9.1) and (9.2), a first expression. On the other hand, $\beta$ conditioned on $K_\varepsilon$ tends in law to the normalized Brownian excursion $(e_t,\ t\in[0,1])$, as proven in [7]. Then we have, for all bounded $f\in C(\mathbb R)$, a second expression. Comparing the two formulae for all $f\in C(\mathbb R)$ with compact support concludes the proof.

4. The linear equation
We start with the linear Cahn–Hilliard equation, written in abstract form as
$$dZ=-AA_DZ\,dt+B\,dW(t),$$
where $W$ is a cylindrical white noise in $L^2(0,1)$ and $B$ is a first-order operator for which $BB^*=-2A$. We define the strongly continuous contraction semigroups in $L^2(0,1)$:
$$S_t:=e^{-tAA_D},\qquad S^*_t:=e^{-tA_DA},\qquad t\ge0.$$
We stress that $S$ and $S^*$ are dual to each other with respect to $\langle\cdot,\cdot\rangle_L$, but not necessarily with respect to $(\cdot,\cdot)_H$. Then it is well known that $Z$ is equal to the stochastic convolution
$$Z_t(x)=S_tx+\int_0^tS_{t-s}B\,dW_s,$$
and that this process belongs to $C([0,\infty);L^2(0,1))$. Notice that $\overline{Z_t(x)}=\bar x$ for all $t$, since $S^*_t\mathbf1=\mathbf1$ and $B^*S^*_t\mathbf1=B^*\mathbf1=0$. In particular, the average of $Z$ is constant. Now, the $L^2(0,1)$-valued random variable $Z_t(x)$ has law
$$N\big(S_tx,\,Q_t\big),\qquad Q_t:=\int_0^tS_sBB^*S^*_s\,ds=-2\int_0^tS_sAS^*_s\,ds.$$
If we let $t\to\infty$, then $\langle S_th,k\rangle_L\to\langle1,h\rangle_L\,\langle a,k\rangle_L$ for all $h,k\in L^2(0,1)$, where $a(\theta):=6\theta(1-\theta)$, so that $\bar a=1$. Therefore the law of $Z_t(x)$ converges to the Gaussian measure on $L^2(0,1)$
$$\mu_c:=N(c\,a,\,Q_\infty),\qquad c:=\bar x,$$
with covariance operator $Q_\infty:=\lim_{t\to\infty}Q_t$ and mean $c\cdot a\in L^2(0,1)$. Notice that the kernel of $Q_\infty$ is $\{t\mathbf1:\ t\in\mathbb R\}$, and therefore $\mu_c$ is concentrated on the affine space $L^2_c$. Finally, we introduce the Gaussian measure on $L^2(0,1)$
$$\mu:=N(0,Q_D),$$
recall (2.5). In this case $Q_D$ is injective on $L^2(0,1)$, so the support of $\mu$ is the full space $L^2(0,1)$. The next result gives a description of $\mu$ and $\mu_c$ as laws of stochastic processes related to the Brownian bridge $(\beta_\theta,\ \theta\in[0,1])$.
Proof. By (2.5), $Q_D$ is given by the symmetric kernel $(\theta\wedge\sigma-\theta\sigma,\ \sigma,\theta\in[0,1])$. Since $E(\beta_t\beta_s)=t\wedge s-ts$ for all $t,s\in[0,1]$, it is well known that $\mu=N(0,Q_D)$ coincides with the law of $\beta$. Analogously, the covariance of $\beta^0$ is computed from (3.3); by the definition of $Q_\infty$ in (4.4), this is easily seen to be the kernel of $Q_\infty$, so that $\mu_0=N(0,Q_\infty)$ is the law of $\beta^0$. By the definitions of $\mu_c=N(ca,Q_\infty)$, $\beta^c$ and $a$, we find that $\mu_c$ is the law of $\beta^c=\beta^0+ca$.
In particular, $\mu_c$ is a regular conditional distribution of $\mu(dx)$ given $\{\bar x=c\}$.
Proof. The proof is standard, since the process $Z$ is Gaussian: see [5, §10.2]. However, we include some details, since the interplay between the Hilbert structures of $H$ and $L^2(0,1)$ and the different roles of the operators $A$ and $A_D$ can produce some confusion. The starting point is the integration by parts formula (4.6) for $\mu$, valid for all $\varphi\in C^1_b(H)$ and $h\in D(A_D)$. By conditioning on $\{\bar x=c\}$, (4.6) implies an analogous formula for $\mu_c$, and computing the time derivative at $t=0$ we obtain the generator of $Z$. Now we compute the scalar product in $L^2(\mu_c;\mathbb C)$ between $L\varphi$ and $\psi$, where $\bar\psi$ is the complex conjugate of $\psi$; in the second equality of this computation we use (4.7). It follows that $(L,\mathrm{Exp}_A(H))$ is symmetric in $L^2(\mu_c)$, and the rest of the proof is standard.
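As an aside (an illustration, not part of the paper's argument): assuming the abstract form $dZ=-AA_DZ\,dt+B\,dW$ with $BB^*=-2A$, the invariance of $N(0,Q_D)$ reduces to the stationary Lyapunov identity $-AA_DQ_D-Q_DA_DA+BB^*=0$, which holds because $AA_DQ_D=-A$ and $Q_DA_DA=-A$. The identity is purely algebraic, so it can be sanity-checked with random symmetric negative-definite matrices standing in for the (genuinely unbounded) operators $A$ and $A_D$:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 6

def random_neg_def(k):
    # random symmetric negative-definite matrix (stand-in for a Laplacian)
    M = rng.normal(size=(k, k))
    return -(M @ M.T + np.eye(k))

A = random_neg_def(k)        # plays the role of the Neumann Laplacian
AD = random_neg_def(k)       # plays the role of the Dirichlet Laplacian
QD = np.linalg.inv(-AD)      # Q_D = (-A_D)^{-1}

D = -A @ AD                  # drift of dZ = -A A_D Z dt + B dW
BBt = -2.0 * A               # noise covariance B B^*

# Stationary Lyapunov equation D Q + Q D^T + B B^* = 0 with Q = Q_D:
residual = D @ QD + QD @ D.T + BBt
print(np.abs(residual).max())        # ~0: N(0, Q_D) is invariant
```

Running the same computation in reverse is how one guesses $Q_D$ as the stationary covariance.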

5. The approximating equation
We consider now the approximating equation (5.1), where $\varepsilon>0$. Notice that this is a monotone gradient system in $H$: see [5, Chapter 12], i.e. (5.1) can be written in gradient form. We define the probability measure $\nu^{\varepsilon,\alpha}_c$ on $L^2(0,1)$, where $Z^{\varepsilon,\alpha}_c$ is a normalization constant. Now, recalling (2.8), we introduce the associated symmetric bilinear form; notice that this symmetric form is naturally associated with a perturbation of the operator $L\varphi$ defined in (4.8) above. The following proposition states that equation (5.1) has a unique martingale solution, associated with the Dirichlet form arising from the closure of $(\mathcal E^{\varepsilon,c},\mathrm{Exp}_A(H))$. Moreover, it states that the associated semigroup is strong Feller.
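For orientation (an illustration only, not the approximating SPDE (5.1), which is fourth-order and stochastic), recall how penalization works in the simplest deterministic second-order case: the obstacle problem $-u''=f$, $u\ge0$, $u(0)=u(1)=0$ is approximated by $-u_\varepsilon''=f+\frac1\varepsilon u_\varepsilon^-$. For $f\equiv-1$ the maximum principle gives $u_\varepsilon\le0$, so the penalized problem becomes linear and $\min u_\varepsilon=O(\varepsilon)$; it is precisely this maximum-principle step that is unavailable for the double Laplacian, as recalled in the abstract:

```python
import numpy as np

n = 200
h = 1.0 / n

# Interior finite-difference Dirichlet Laplacian, as a dense matrix.
AD = (np.diag(-2.0 * np.ones(n - 1))
      + np.diag(np.ones(n - 2), 1)
      + np.diag(np.ones(n - 2), -1)) / h**2

f = -np.ones(n - 1)          # constant downward load: the obstacle u >= 0 is active

def penalized(eps):
    # With f <= 0 the solution stays <= 0, so u^- = -u and the penalized
    # problem -u'' + u/eps = f is linear: solve (-A_D + I/eps) u = f.
    u = np.linalg.solve(-AD + np.eye(n - 1) / eps, f)
    assert u.max() <= 1e-12  # consistency of the linearization (u <= 0)
    return u

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, penalized(eps).min())  # min u_eps ~ -eps, converging to 0
```

As $\varepsilon\to0^+$ the penalized solutions rise to the obstacle, and the penalization term concentrates into the reflecting measure.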

6. Convergence of the stationary measures
The first technical result is the convergence of $\nu^{\varepsilon,\alpha}_c$ as $\varepsilon\to0^+$ and then $\alpha\to0^+$, and in particular the tightness in a suitable Hölder space. By Lemma 4.1, $\mu_c$ is the law of $\beta^c$ defined in (3.3). We set $K_\alpha:=\{\omega\in C([0,1]):\ \omega\ge-\alpha\}$ and, for $\alpha>0$,
$$\nu^{0,\alpha}_c:=\mu_c(\,\cdot\,|\,K_\alpha)=\text{law of }\beta^c\text{ conditioned to be greater than or equal to }-\alpha.$$
This is well defined, since $\mu_c(K_\alpha)>0$, and it is easy to see that $\lim_{\varepsilon\to0^+}\nu^{\varepsilon,\alpha}_c=\nu^{0,\alpha}_c$. Moreover, since $\beta^c$ has the same path regularity as $\beta$, it is easy to see that for all $\alpha>0$, $\gamma\in(0,1/2)$ and $r\ge1$ the corresponding moments in $W^{\gamma,r}(0,1)$ are finite. We also need a similar tightness and convergence result for $(\nu^{0,\alpha}_c)_{\alpha>0}$. We recall the definition $\nu_c:=P[e\in\cdot\ |\ \langle e,1\rangle=c]$, as defined in Theorem 2.1.
Notice that $\varphi_1\,\Gamma^\beta=\varphi_1\,\beta$. We set $I=[0,1/3]$ and we denote by $(\beta^{0,a}_\theta,\ \theta\in I)$, resp. $(m^{b,a}_\theta,\ \theta\in I)$, the Brownian bridge from $0$ to $a$ over the interval $I$, respectively the $3$-dimensional Bessel bridge from $b$ to $a$ over the interval $I$. Then, denoting by $p_t$ the density of $N(0,t)$, we obtain a chain of equalities, where in the former equality we use the Markov property of $\beta$ and in the latter the equality in law between Brownian bridges conditioned to be positive and $3$-dimensional Bessel bridges. It is then easy to conclude that
$$\sup_{\alpha>0}\ \frac1{\alpha^2}\,\mathbb E\Big[\big\|\varphi_1\cdot\Gamma^\beta\big\|^p_{W^{\gamma,r}(0,1)}\,1_{(\Gamma^\beta\in K_\alpha)}\Big]<+\infty.$$
By symmetry, the same estimate holds for $\varphi_3\cdot\Gamma^\beta$. As for $\varphi_2\cdot\Gamma^\beta$, conditioning on the values of $\beta_{1/3}$ and $\beta_{2/3}$ and using an analogous argument, we find similarly that
$$\sup_{\alpha>0}\ \frac1{\alpha^2}\,\mathbb E\Big[\big\|\varphi_2\cdot\Gamma^\beta\big\|^p_{W^{\gamma,r}(0,1)}\,1_{(\Gamma^\beta\in K_\alpha)}\Big]<+\infty.$$
We estimate now the denominator of the right-hand side of (6.4). Recall the definition (3.4) of $\Gamma^\omega$ for $\omega\in C([0,1])$. Notice that, on the set under consideration, $\int_0^1 e\ge c>0$. Then (6.4) is proven. In order to show that $\nu^{0,\alpha}_c$ indeed converges to $\nu_c$, it is enough to recall formula (3.6) above and the second result of Theorem 2.1.

7. A general convergence result
In this section we recall two results of [2], which we shall apply in section 8 to the convergence in law of the solutions of (5.1) to the solution of (1.1). These processes are reversible and associated with gradient-type Dirichlet forms. Moreover their invariant measures, respectively $\nu^{\varepsilon,\alpha}_c$ and $\nu_c$, are log-concave; a probability measure $\gamma$ on $H$ is log-concave if for all pairs of open sets $B,C\subset H$:
$$\log\gamma\big(tB+(1-t)C\big)\ \ge\ t\log\gamma(B)+(1-t)\log\gamma(C),\qquad \forall\,t\in(0,1).\tag{7.1}$$
If $H=\mathbb R^k$, then the class of log-concave probability measures contains all measures of the form (here $\mathcal L^k$ stands for the Lebesgue measure)
$$\gamma(dx)=\frac1Z\,e^{-V(x)}\,\mathcal L^k(dx),$$
where $V:H=\mathbb R^k\to\mathbb R$ is convex and $Z:=\int_{\mathbb R^k}e^{-V}\,dx<+\infty$, see Theorem 9.4.11 in [1]; in particular, all Gaussian measures are log-concave. Notice that the class of log-concave measures is closed under weak convergence. Therefore, it is easy to see by an approximation argument that $\nu^{\varepsilon,\alpha}_c$ and $\nu_c$ are log-concave. We denote by $X_t:H^{[0,+\infty[}\to H$ the coordinate process $X_t(\omega):=\omega_t$, $t\ge0$. Then we recall one of the main results of [2]. We notice that the support of $\nu_c$ in $H$ is $K_c$, the closure in $H$ of $\{h\in L^2(0,1):\ h\ge0,\ \langle h,1\rangle_L=c\}$, and that the closed affine hull in $H$ of $K_c$ is $H_c$.
Proposition 7.1 (Markov process and Dirichlet form associated with $\nu_c$ and $\|\cdot\|_{H_0}$).
(a) The bilinear form $\mathcal E=\mathcal E_{\nu_c,\|\cdot\|_{H_0}}$, given by
$$\mathcal E(\varphi,\psi):=\int\langle\nabla_{H_0}\varphi,\nabla_{H_0}\psi\rangle_{H_0}\,d\nu_c,\qquad \varphi,\psi\in\mathrm{Exp}_A(H),$$
is closable in $L^2(\nu_c)$, and its closure $(\mathcal E,D(\mathcal E))$ is a symmetric Dirichlet form. Furthermore, the associated semigroup $(P_t)_{t\ge0}$ is symmetric in $L^2(\nu_c)$; moreover $\nu_c$ is invariant for $(P_t)$, i.e. $\nu_c(P_tf)=\nu_c(f)$ for all $f\in C_b(K_c)$ and $t\ge0$.
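In dimension one, the log-concavity inequality (7.1) can be checked directly for the standard Gaussian measure: for open intervals $B,C\subset\mathbb R$, the Minkowski combination $tB+(1-t)C$ is again an interval, and both sides can be evaluated with the error function. A toy numerical illustration (not part of the proofs):

```python
import math

def Phi(x):
    # standard normal c.d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gauss(a, b):
    # standard Gaussian measure of the interval (a, b)
    return Phi(b) - Phi(a)

def minkowski(B, C, t):
    # t*B + (1-t)*C for intervals B = (b0, b1), C = (c0, c1)
    return (t * B[0] + (1 - t) * C[0], t * B[1] + (1 - t) * C[1])

B, C, t = (0.0, 1.0), (2.0, 4.0), 0.5
lhs = gauss(*minkowski(B, C, t))
rhs = gauss(*B) ** t * gauss(*C) ** (1 - t)
print(lhs, rhs)   # log-concavity: lhs >= rhs
```

The inequality is strict here because the two intervals are far apart; for $B=C$ it becomes an equality.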
Let $(\mathbb P^{\varepsilon,\alpha,c}_x:\ x\in H_c)$, respectively $(\mathbb P_x:\ x\in K_c)$, be the Markov process in $H_c^{[0,+\infty[}$ associated with $\nu^{\varepsilon,\alpha}_c$ (resp. in $K_c^{[0,+\infty[}$ associated with $\nu_c$) given by Proposition 7.1. We denote by $\mathbb P^{\varepsilon,\alpha,c}_{\nu^{\varepsilon,\alpha}_c}:=\int\mathbb P^{\varepsilon,\alpha,c}_x\,d\nu^{\varepsilon,\alpha}_c(x)$ (resp. $\mathbb P_{\nu_c}:=\int\mathbb P_x\,d\nu_c(x)$) the associated stationary measures. With an abuse of notation, we say that a sequence of measures $(P_n)$ on $C([a,b];H)$ converges weakly in $C([a,b];H_w)$ if, for all $m\in\mathbb N$ and $h_1,\dots,h_m\in H$, the process $(\langle X_\cdot,h_i\rangle_H,\ i=1,\dots,m)$ under $(P_n)$ converges weakly in $C([a,b];\mathbb R^m)$ as $n\to\infty$.
In this setting we have the following stability and tightness result.
Theorem 7.2 (Stability and tightness). Then, for any $x\in K_c$ and $0<\varepsilon\le T<+\infty$, the associated processes converge weakly in $C([\varepsilon,T];H_w)$, and the family of their laws is tight.
Proof. This result follows from Theorem 1.5 in [2], where it is stated that the weak convergence of the invariant measures of a sequence of processes as in Proposition 7.1 implies the weak convergence of the associated processes. Since $\lim_{\alpha\to0^+}\lim_{\varepsilon\to0^+}\nu^{\varepsilon,\alpha}_c=\nu_c$, we obtain the result.
Then $(u^{\varepsilon,\alpha},\eta^{\varepsilon,\alpha},W)$ converges in law to $(u,\eta,W)$, a stationary weak solution of (1.1), in $E_T$, for any $T\ge0$. The law of $u$ is $\mathbb P_x$, and therefore $(u,\ u_0=x\in K_c)$ is the Markov process associated with the Dirichlet form (7.3).
We shall use the following easy result, whose point is that $\bar v_s=c>0$ forces $\gamma(t)-\gamma(s)=0$, since $c>0$. Proof of Proposition 8.1. Recall that $\mathbb P^{\varepsilon,\alpha,c}_x$ is the law of $u^{\varepsilon,\alpha}$ if $u^{\varepsilon,\alpha}_0=x$. By Theorem 7.2 and Skorohod's theorem we can find a probability space and a sequence of processes $(v^\varepsilon,w^\varepsilon)$ such that $(v^\varepsilon,w^\varepsilon)\to(v,w)$ in $C(O_T)$ almost surely, and such that $(v^\varepsilon,w^\varepsilon)$ has the same distribution as $(u^\varepsilon,W)$ for all $\varepsilon>0$, where $O_T:=\,]0,T]\times[0,1]$. Notice that $v\ge0$ almost surely, since for all $t$ the law of $v_t(\cdot)$ is $\gamma$, which is concentrated on $K$, and moreover $v$ is continuous on $O_T$. We now define $\eta^\varepsilon$ as the penalization term of (5.1), viewed as a measure on $O_T$. From (5.1) we obtain that a.s., for all $T\ge0$ and $h\in D(A^2)$ with $\bar h=0$, an integral identity holds; the limit is a random distribution on $O_T$. We want to prove that in fact $\eta^\varepsilon$ converges as a measure in the dual of $C(O_T)$ for all $T\ge0$. For this, it is enough to prove that the mass $\eta^\varepsilon(O_T)$ converges as $\varepsilon\to0$. Suppose that $\{\eta^{\varepsilon_n}(O_T)\}_n$ is unbounded along some sequence $\varepsilon_n\to0$. We define $\zeta^\varepsilon:=\eta^\varepsilon/\eta^\varepsilon(O_T)$; then $\zeta^\varepsilon$ is a probability measure on the compact set $\overline{O_T}$. By tightness we can extract from any sequence $\varepsilon_n\to0$ a subsequence along which $\zeta^{\varepsilon_n}$ converges to a probability measure $\zeta$. By the uniform convergence of $v^\varepsilon$ we can see that the contact condition $\int_{O_T}v\,d\zeta=0$ holds. Moreover, dividing (5.1) by $\eta^\varepsilon(O_T)$ for $t\in[0,T]$, we obtain that $\int_{O_T}h(\theta)\,\zeta(ds,d\theta)=0$ for all $h\in D(A^2)$ with $\bar h=0$ and, by density, for all $h\in C([0,1])$ with $\bar h=0$.
Proof of Theorem 9.1. The results follow from Lemma 9.3, along the lines of the proof of Theorem 2.1.
It would now be possible to repeat the results of the previous sections and prove existence of weak solutions to the analogous SPDE with mixed boundary conditions $u(t,0)=\frac{\partial u}{\partial\theta}(t,1)=\cdots=0$. The result follows if we show that the Laplace transforms of the two probability measures in (3.2) are equal. Notice that $Y$ is a Gaussian process with mean $\kappa(\Lambda+M)$ and the covariance function computed above. Therefore, recalling the definition $\hat h:=h-\frac1{1-I}\,\langle M,h\rangle\,(\lambda+\mu)$, we obtain after some straightforward computation: