Stochastic Order Methods Applied to Stochastic Travelling Waves

This paper considers some one dimensional reaction diffusion equations driven by a one dimensional multiplicative white noise. The existence of a stochastic travelling wave solution is established, as well as a sufficient condition to be in its domain of attraction. The arguments use stochastic ordering techniques.


Introduction
We consider the following one dimensional reaction diffusion equation, driven by a one dimensional Brownian motion W:

du = (u_xx + f(u)) dt + g(u) ∘ dW(t). (1)

We shall assume throughout that f, g ∈ C^3([0, 1]) and

f(0) = f(1) = g(0) = g(1) = 0, (2)

and consider solutions whose values u(t, x) lie in [0, 1] for all x ∈ R and t ≥ 0. The noise term g(u) ∘ dW models the fluctuations due to an associated quantity that affects the entire solution simultaneously (for example temperature effects). In this setting we consider modelling with a Stratonovich integration to be more natural, as we can consider it as the limit of smoother noisy drivers. The use of a non-spatial noise allows us the considerable simplification of considering solutions that are monotone functions on R.
We consider three types of reaction f. We call f: (i) of KPP type if f > 0 on (0, 1) and f(0) = f(1) = 0; (ii) of Nagumo type if there exists a ∈ (0, 1) with f < 0 on (0, a), f > 0 on (a, 1) and f(0) = f(a) = f(1) = 0; (iii) of unstable type if there exists a ∈ (0, 1) with f > 0 on (0, a), f < 0 on (a, 1) and f(0) = f(a) = f(1) = 0. The deterministic behaviour (that is when g = 0) is well understood (see Murray [7] chapter 13 for an overview). Briefly, for f of Nagumo type there is a unique travelling wave; for f of KPP type a family of travelling waves; and for f of unstable type one expects solutions that split into two parts, one travelling right and one left, with a large flattish region in between around the level a. For f of KPP or Nagumo type the solution starting at the initial condition H(x) = I(x < 0) converges towards the slowest travelling wave. Various sufficient conditions (and in Bramson [1] necessary and sufficient conditions) are known on other initial conditions that guarantee the solutions converge to a travelling wave.
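The three sign patterns can be kept straight with the standard cubic and logistic examples; the particular formulas and the value of a below are our illustrations, not taken from the paper:

```python
import numpy as np

a = 0.3   # illustrative interior zero for the Nagumo and unstable cases

def f_kpp(u):      return u * (1 - u)             # > 0 on (0, 1)
def f_nagumo(u):   return u * (u - a) * (1 - u)   # < 0 on (0,a), > 0 on (a,1)
def f_unstable(u): return -u * (u - a) * (1 - u)  # > 0 on (0,a), < 0 on (a,1)

def classify(f, a, n=10_000):
    """Classify a reaction on [0, 1] by its sign pattern on (0, a) and (a, 1)."""
    lo = f(np.linspace(1e-6, a - 1e-6, n))
    hi = f(np.linspace(a + 1e-6, 1 - 1e-6, n))
    if (lo > 0).all() and (hi > 0).all():
        return "KPP"
    if (lo < 0).all() and (hi > 0).all():
        return "Nagumo"
    if (lo > 0).all() and (hi < 0).all():
        return "unstable"
    return "other"
```

All three examples also satisfy the standing hypothesis (2), vanishing at 0 and 1 (and at a where required).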
The aim of this paper is to start to investigate a few of these results for the stochastic equation (1). There are many tools used in the deterministic literature. In this paper, we develop only the key observation that the deterministic solutions started from the Heaviside initial condition H(x) = I(x < 0) become more stretched over time. The most transparent way to view this, as explained in Fife and McLeod [2], is in phase space, where it corresponds to a comparison result. More precisely, the corresponding phase curves p(t, u), defined via u_x(t, x) = p(t, u(t, x)), are increasing in time. This idea is exploited extensively in [2] and subsequent papers. For the stochastic equation (1), the solution paths are not almost surely increasing. However we will use similar arguments to show that the solutions are stochastically ordered, and that this is an effective substitute.
We write D for the set of right continuous decreasing functions φ : R → [0, 1] with lim_{x→−∞} φ(x) = 1 and lim_{x→+∞} φ(x) = 0. We will use a wavefront marker defined, for a fixed a ∈ (0, 1), by

Γ(φ) = inf{x ∈ R : φ(x) ≤ a}. (3)

To center the wave at its wavefront we define φ̃(x) = φ(x + Γ(φ)). We call φ̃ the wave φ centered at height a. We have suppressed the dependence on a in the notation for the wavefront marker and the centered wave. We give D the L^1_loc(R) topology. We write P(D) for the space of (Borel) probability measures on D with the topology of weak convergence of measures.
A stochastic travelling wave is a solution u = (u(t) : t ≥ 0) to (1) with values in D and for which the centered process (ũ(t) : t ≥ 0) is a stationary process with respect to time. The law of a stochastic travelling wave is the law of ũ(0) on D. We will show (see section 3.3) that the centered solutions themselves form a Markov process. Then an equivalent definition is that the law of a stochastic travelling wave is an invariant measure for the centered process.
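On a spatial grid the wavefront marker and the centering map are straightforward to realize. The sketch below takes Γ(φ) to be the first grid point where the decreasing profile falls to the level a, so that the centered wave passes through height a at the origin; the logistic profile is purely illustrative:

```python
import numpy as np

def wavefront(x, phi, a):
    """Discrete stand-in for Gamma(phi) = inf{x : phi(x) <= a},
    for a decreasing profile phi sampled on the grid x."""
    return x[np.argmax(phi <= a)]        # first index where phi <= a

def center(x, phi, a):
    """phi_tilde(x) = phi(x + Gamma(phi)), evaluated by interpolation."""
    return np.interp(x + wavefront(x, phi, a), x, phi)

# Illustration: a logistic front whose wavefront at height 1/2 sits at x = 2.
x = np.linspace(-20.0, 20.0, 4001)
phi = 1.0 / (1.0 + np.exp(x - 2.0))
a = 0.5
```

After centering, the profile takes the value a at x = 0, which is the normalization used throughout the paper.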
The hypotheses for our results below are stated in terms of the drift f_0 in the equivalent Itô integral formulation, namely

du = (u_xx + f_0(u)) dt + g(u) dW(t), where f_0 = f + ½ g′g.

While we suspect that the existence, uniqueness and domains of stochastic travelling waves are determined by the Stratonovich drift f, our methods use the finiteness of certain moments and require assumptions about f_0. It is easy to find examples where the types of f and f_0 are different, for example f of KPP type and f_0 of Nagumo type, or f of Nagumo type and f_0 of unstable type.
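For completeness, the standard Stratonovich-to-Itô conversion that produces the drift f_0 is the following routine computation, written in the notation of (1):

```latex
\begin{align*}
g(u)\circ dW &= g(u)\,dW + \tfrac12\, d[\,g(u),W\,]_t
              = g(u)\,dW + \tfrac12\, g'(u)g(u)\,dt ,\\
du &= \bigl(u_{xx} + f(u)\bigr)\,dt + g(u)\circ dW
    = \bigl(u_{xx} + f_0(u)\bigr)\,dt + g(u)\,dW ,
\qquad f_0 = f + \tfrac12\, g'g .
\end{align*}
```

In particular f_0 also vanishes at 0 and 1 under hypothesis (2), though its sign pattern on (0, 1) may differ from that of f.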
We now state our main results. The framework for describing stretching on D is explained in section 2.3, where we define a pre-order on D that reflects when one element is more stretched than another, and where we also recall the ideas of stochastic ordering for laws on a metric space equipped with a pre-order. These ideas are exploited to deduce the convergence in law in the following theorem, which is proved in section 3.

Theorem 1.
Suppose that f_0 is of KPP, Nagumo or unstable type. In the latter two cases suppose that f_0(a) = 0 and g(a) ≠ 0. Let u be the solution to (1) started from H(x) = I(x < 0). Then the laws L(ũ(t)) are stochastically increasing (for the stretching pre-order on D; see section 2.3), and converge to a law ν ∈ P(D). Furthermore ν is the law of a stochastic travelling wave.
Note that the unstable type reactions are therefore stabilized by the noise. This becomes intuitive when one realizes that a large flattish patch near the level a will be destroyed by the noise since g(a) ≠ 0.
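The convergence in Theorem 1 can be observed numerically. The sketch below is our discretization, not the paper's: an explicit finite-difference scheme for the Itô form of (1) with drift f_0, in which a single Brownian increment per time step drives every spatial point (the noise is non-spatial), with Heaviside initial data, copy boundary conditions and clipping to [0, 1]:

```python
import numpy as np

def simulate(f0, g, T=1.0, L=30.0, nx=301, dt=1e-4, seed=0):
    """Explicit Euler-Maruyama scheme for du = (u_xx + f0(u)) dt + g(u) dW,
    driven by ONE Brownian motion shared by the whole profile."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-L / 2, L / 2, nx)
    dx = x[1] - x[0]
    u = (x < 0).astype(float)              # H(x) = I(x < 0)
    for _ in range(int(T / dt)):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]  # copy (zero-flux) boundaries
        dW = rng.normal(0.0, np.sqrt(dt))  # single noise increment per step
        u = u + (lap + f0(u)) * dt + g(u) * dW
        u = np.clip(u, 0.0, 1.0)           # keep values in [0, 1]
    return x, u

# KPP-type Ito drift and a noise vanishing at 0 and 1, as in hypothesis (2).
x, u = simulate(lambda z: z * (1 - z), lambda z: 0.2 * z * (1 - z))
```

Plotting u at successive times shows a noisy front whose centered shape settles down, which is the content of Theorem 1.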
It is an immediate consequence of the stochastic ordering that for any solution whose initial condition u(0) is stochastically less stretched than ν, the laws (ũ(t)) will also converge to ν (that is they are in the domain of attraction of ν -see Proposition 16). It is not clear how to check whether an initial condition has this property. However, our stochastic ordering techniques do yield a simple sufficient condition, albeit for not quite the result one would want and also not in the unstable case, as described in the following theorem which is proved in section 4.

Theorem 2.
Suppose that f_0 is of KPP or Nagumo type, and in the latter case suppose that f_0(a) = 0 and g(a) ≠ 0. Let u be the solution to (1) with initial condition u(0) = φ ∈ D which equals 0 for all sufficiently positive x and equals 1 for all sufficiently negative x. Then the centered laws converge, in a time averaged sense, to ν, where ν is the law of the stochastic travelling wave from Theorem 1.

2 Preliminaries, including stretching and stochastic ordering

2.1 Regularity and moments for solutions
We state a theorem that summarizes the properties of solutions to (1) that we require. Recall we are assuming the hypothesis (2). A mild solution is one satisfying the semigroup formulation.
Theorem 3. Let W be an (F_t) Brownian motion defined on a filtered space (Ω, (F_t), F, P) where F_0 contains the P null sets. Given any F_0 measurable initial condition u_0 with values in D, there is a mild solution to (1), driven by W and with initial condition u_0. The paths of (u(t) : t ≥ 0) lie almost surely in C([0, ∞), L^1_loc(R)) and solutions are pathwise unique in this space. If P_φ is the law on C([0, ∞), L^1_loc(R)) of the solution started at φ, then the family (P_φ : φ ∈ L^1_loc(R)) form a strong Markov family. The associated Markov semigroup is Feller (that is it maps bounded continuous functions on L^1_loc into bounded continuous functions). There is a regular version of any solution, where the paths of (u(t, x) : t > 0, x ∈ R) lie almost surely in C^{0,3}((0, ∞) × R). The following additional properties hold for such regular versions.
(i) 0 ≤ u(t, x) ≤ 1 for all t ≥ 0 and x ∈ R, almost surely.
Remark. Henceforth, all results refer to the regular versions of solutions, that is with paths in C([0, ∞), L^1_loc(R)) ∩ C^{0,3}((0, ∞) × R) almost surely. The results in Theorem 3 are mostly standard, and we omit the proofs but give a few comments for some of the arguments required. The moments in parts (iii) and (iv) at fixed (t, x) can be established via standard Green's function estimates, though a little care is needed since we allow arbitrary initial conditions. Indeed the constants for the pth moment of the kth derivative blow up like t_0^{−pk/2} (as for the deterministic equation), though we shall not use this fact. One can then derive all the bounds on the supremum of derivatives by bounding them in terms of integrals of a higher derivative and using the pointwise estimates; this is how the bound in part (iv) is obtained. The supremum over [−L, L] in part (iii) can be bounded by a sum of suprema over intervals [k, k+1] of length one, and each of these bounded using higher derivatives. This leads to the dependency L + 1 in the estimate, which we do not believe is best possible but is sufficient for our needs.
One route to reach the strict positivity and strict negativity in part (ii) is to follow the argument in Shiga [11]. In [11] Theorem 1.3, there is a method to show that u(t, x) > 0 for all t > 0, x ∈ R for an equation as in (1) but where the noise is space-time white noise. However the proof applies word for word for an equation driven by a single noise once the basic underlying deviation estimate in [11] Lemma 4.2 is established. This method applies to the equation for the derivative v = u_x over any time interval [t_0, ∞). This yields the strict negativity u_x(t, x) < 0 for all t > 0, x ∈ R, almost surely (which of course implies the strict positivity of u). The underlying large deviation estimate is for N(t, x) = ∫_0^t ∫_R G(t − s, x − y) g(u(s, y)) dy dW(s), the stochastic part of the Green's function representation for u(t, x). This estimate can also be derived using the method suggested in Shiga, where he appeals to an earlier estimate in Lemma 2.1 of Mueller [6]. The method in [6], based on dyadic increments as in the Lévy modulus for Brownian motion, can also be applied without significant changes to our case, since it reduces to estimates on the quadratic variation of increments of N(t, x) and these are all bounded (up to a constant) for our case by the analogous expressions in the space-time white noise case.

2.2 Wavefront markers, and pinned solutions
We remark on the L^1_loc topology on D. First, the space is Polish. Indeed, for φ_n, φ ∈ D, the convergence φ_n → φ is equivalent to the convergence of the associated measures −dφ_n → −dφ in the weak topology on the space of finite measures on R. Note that using the Prohorov metric for this weak convergence gives a compatible metric d on D that is translation invariant, in that d(φ, ψ) = d(φ(· − a), ψ(· − a)) for any a. Second, the convergence φ_n → φ is equivalent to the convergence φ_n(x) → φ(x) at every continuity point x of φ. The wave marker Γ, defined by (3), is upper semicontinuous on D. The wavemarker Γ(u(t)) and the centered solution ũ(t, x) are semi-martingales for t ≥ t_0 > 0 and x ∈ R. Here is the calculus behind this.

Lemma 4. Let u be a solution to (1) with u(0) ∈ D almost surely. For t > 0 let m(t, ·) denote the inverse function for the map x → u(t, x). Then the process (m(t, x) : t > 0, x ∈ (0, 1)) lies in C^{0,3}((0, ∞) × (0, 1)), almost surely, and for each x ∈ (0, 1) and t_0 > 0 the process (m(t, x) : t ≥ t_0) is a semi-martingale. Also Γ(u(t)) = m(t, a), and the centered process ũ solves, for t ≥ t_0, the decomposition (6).

Proof The (almost sure) existence and regularity of m follow from Theorem 3 (noting that x → u(t, x) is strictly decreasing for t > 0 by Theorem 3 (ii)). The equation for m(t, x) would follow by chain rule calculations if W were a smooth function. To derive it using stochastic calculus we choose φ : (0, 1) → R smooth and compactly supported and develop ∫ m(t, x)φ(x) dx. To shorten the upcoming expressions, we use, for real functions h_1, h_2 defined on an interval of R, the notation 〈h_1, h_2〉 for the integral ∫ h_1(x)h_2(x) dx over this interval, whenever it is well defined. Using the substitution x → u(t, x) we obtain, for t > 0, an expression for 〈m(t, ·), φ〉. To assist in our notation we let û, û_x, û_xx, . . . denote the composition of the maps x → u, u_x, u_xx with the map x → m(t, x) (e.g. û_x(t, x) = u_x(t, m(t, x))). Using this notation we obtain, for x ∈ (0, 1), a semi-martingale decomposition, and we continue by using the reverse substitution x → m(t, x). In the second equality we have integrated by parts. In the final equality we have used the identities in (7). This yields the equation for m. The decomposition for ũ follows by applying the Itô-Ventzel formula (see Kunita [3] section 3.3) using the decompositions for du(t, x) and dΓ(u(t)) = dm(t, a).

2.3 Stretching and stochastic stretching
Definitions. For w : R → R we set

θ_0(w) = inf{x : w(x) > 0}, where we set inf{∅} = ∞.

We say that φ crosses ψ if φ(x) ≥ ψ(x) for all x ≥ θ_0(φ − ψ). We write τ_a φ for the translated function τ_a φ(·) = φ(· − a). For φ, ψ : R → R we say that φ is more stretched than ψ if τ_a φ crosses ψ for every a ∈ R. We write φ ≻ ψ to denote that φ is more stretched than ψ, and as usual we write φ ≺ ψ when ψ ≻ φ. In the diagram below, we plot a wave φ and two of its translates, all three curves crossing another wave ψ.
2. The upcoming lemma shows that the relation φ ≻ ψ is quite natural. For φ ∈ C^1 ∩ D with φ_x < 0, one can associate a phase curve p_φ : (0, 1) → R defined by φ_x(x) = p_φ(φ(x)). The relation of stretching between two such functions becomes simple comparison between the associated phase curves. Another way to define the relation on D is to define it on such nice paths via comparison of the associated phase curves, and then take the smallest closed extension.

3. It is useful for us to have a direct definition of stretching without involving the associated phase curves. For example the key Lemma 7 below uses this direct definition. Moreover, in a future work, we will treat the case of spatial noise, where solutions do not remain decreasing and working in phase space is difficult. Note that Lemma 7 applies when functions are not necessarily decreasing.

4. We will show that the stretching relation is a pre-order on D, which means that it is reflexive (φ ≻ φ) and transitive (φ ≻ ψ and ψ ≻ ρ imply that φ ≻ ρ). We recall that a partial order would in addition satisfy the anti-symmetric property: φ ≻ ψ and ψ ≻ φ would imply that φ = ψ.
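The phase-curve characterization in remark 2 above can be checked numerically: for smooth strictly decreasing profiles the phase curve is p_φ(u) = φ′(φ^{-1}(u)), and the flatter (more stretched) profile has the pointwise larger (closer to zero) phase curve. The logistic fronts below are our illustration; their exact phase curves are −k u(1 − u):

```python
import numpy as np

def phase_curve(phi, dphi, u_levels, x_grid):
    """p_phi(u) = phi'(phi^{-1}(u)) for a strictly decreasing C^1 profile."""
    vals = phi(x_grid)                      # decreasing in x
    # np.interp needs increasing ordinates, so reverse both arrays
    inv = np.interp(u_levels, vals[::-1], x_grid[::-1])
    return dphi(inv)

def make_front(k):
    """Logistic front 1/(1 + e^{kx}); its exact phase curve is -k u(1-u)."""
    phi = lambda x: 1.0 / (1.0 + np.exp(k * x))
    dphi = lambda x: -k * np.exp(k * x) / (1.0 + np.exp(k * x)) ** 2
    return phi, dphi

x = np.linspace(-40.0, 40.0, 8001)
u = np.linspace(0.01, 0.99, 99)
p_steep = phase_curve(*make_front(1.0), u, x)   # less stretched
p_flat = phase_curve(*make_front(0.5), u, x)    # more stretched
```

Here p_flat ≥ p_steep at every height, which is the phase-curve comparison expressing that the k = 0.5 front is more stretched than the k = 1 front.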

Lemma 5. (i) The relation φ ≻ ψ is a pre-order on D.
(ii) If φ ≻ ψ then τ_a φ ≻ ψ and φ ≻ τ_a ψ for all a. Moreover, if φ ≻ ψ and ψ ≻ φ then φ = τ_a ψ for some a.
Proof We start with parts (iv) and (v), which use only straightforward calculus (and are exploited in what follows).

This proves part (v). We claim that φ crosses ψ; this follows from (8). Since p_{τ_a φ} = p_φ we may apply the same argument to τ_a φ to conclude that τ_a φ crosses ψ for every a, that is φ ≻ ψ, completing the proof of part (iv).
For part (iii) suppose that φ_n ≻ ψ_n for n ≥ 1 and φ_n → φ, ψ_n → ψ in D. By the right continuity of φ, ψ we have that φ crosses ψ. We may repeat this argument for τ_a φ, ψ to deduce that φ ≻ ψ.
The first statement in part (ii) is immediate from the definition. Suppose φ ≻ ψ and ψ ≻ φ; comparing the associated phase curves as in part (iv) then shows that φ = τ_a ψ for some a.

Notation. For two probability measures µ, ν ∈ P(D) we write µ ≻_s ν if µ is stochastically larger than ν, where we take the stretching pre-order on D.
Notation. For a measure µ ∈ P(D) we define the centered measure µ̃ as the image of µ under the map φ → φ̃.

Remark 5.
We recall here the definition of stochastic ordering. A function F : D → R is called increasing if F(φ) ≥ F(ψ) whenever φ ≻ ψ; then µ ≻_s ν means that ∫ F dµ ≥ ∫ F dν for all bounded measurable increasing F. An equivalent definition is that there exists a pair of random variables X, Y (with values in D), defined on a single probability space, with laws L(X) = µ and L(Y) = ν and satisfying X ≻ Y almost surely. The equivalence is sometimes called Strassen's theorem, and is often stated for partial orders, but holds when the relation is only a pre-order on a Polish space. Indeed, there is an extension to countably many laws: if µ_1 ≺_s µ_2 ≺_s . . . then there exist variables (U_n : n ≥ 1) with L(U_n) = µ_n and U_1 ≺ U_2 ≺ . . . almost surely. (9)
See Lindvall [4] for these results (where a MathSciNet review helps by clarifying one point in the proof).
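Strassen's theorem is most easily illustrated one dimension down: for laws on R the quantile (monotone) coupling realizes stochastic domination on a single probability space. The toy laws below are our choices:

```python
import numpy as np

def monotone_coupling(n, quantile_mu, quantile_nu, seed=0):
    """Couple two laws on R through one uniform variable V:
       X = F_mu^{-1}(V),  Y = F_nu^{-1}(V).
    If the quantile functions are ordered pointwise (mu stochastically
    larger than nu) then X >= Y holds almost surely, not just in law."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=n)
    return quantile_mu(v), quantile_nu(v)

# Toy example: Exp(rate 1/2) is stochastically larger than Exp(rate 1).
q_mu = lambda v: -2.0 * np.log(1 - v)
q_nu = lambda v: -1.0 * np.log(1 - v)
X, Y = monotone_coupling(10_000, q_mu, q_nu)
```

In the proof of Proposition 10 below the same idea is applied with (D, ≻) in place of (R, ≥).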
For part (iii) we suppose that µ_n ≻_s ν_n and µ_n → µ, ν_n → ν. Choose pairs (X_n, Y_n) with L(X_n) = µ_n, L(Y_n) = ν_n and X_n ≻ Y_n almost surely. The laws of (X_n, Y_n) are tight so that we may find a subsequence (n_k) and versions (X̂_{n_k}, Ŷ_{n_k}) that converge almost surely to a limit (X, Y). Now pass to the limit as k → ∞ to deduce that µ ≻_s ν.
3 Existence of the stochastic travelling wave

3.1 The solution from H(x) = I(x < 0) stretches stochastically
It is straightforward to extend the basic stretching lemma from McKean [5] to deterministic equations with time dependent reactions, as follows. Since it plays a key role in this paper, we present the proof with the small changes that are needed.

Lemma 7. Suppose u and v are mild solutions, taking values in [0, 1], to deterministic reaction diffusion equations with the same bounded time dependent reaction. If u(0) crosses v(0) then u(t) crosses v(t) for all t ≥ 0.

Proof Write w = u − v. Then there is a bounded R so that w is a mild solution to w_t = w_xx + wR. We now wish to exploit a Feynman-Kac representation for w. Let (B(t) : t ≥ 0) be a Brownian motion, time scaled so that its generator is the Laplacian, and defined on a filtered probability space (Ω, (F_s : s ≥ 0), F, (P_x)). The associated Feynman-Kac functional is a continuous bounded (F_s) martingale and hence has an almost sure limit M(t) as s ↑ t. Suppose w(t, x_1) > 0 for some x_1, in particular x_1 ≥ θ_0(w(t)). Consider the stopping time τ = inf_{0≤s≤t}{s : w(t − s, B(s)) = 0}, and a further stopping time τ* defined from τ and an auxiliary process ξ. We claim M(τ*) ≥ 0 almost surely under P_{x_2}. Indeed, on {τ* < t} this is immediate from the construction of ξ, while on {τ* = t} the construction of ξ (at time 0) and the assumption that u(0) crosses v(0) ensures that w(0, B(t)) ≥ 0 and hence M(τ*) ≥ 0. Applying (10), with x = x_2 and τ replaced by τ*, we find that w(t, x_2) ≥ 0 for x_2 ≥ x_1, and the proof is finished. In the case θ_0(u(t) − v(t)) = −∞ we may pick x_1 arbitrarily negative, and in the case θ_0(u(t) − v(t)) = +∞ there is nothing to prove.
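The Feynman-Kac representation underlying the proof can be checked by Monte Carlo. The sketch below is our discretization of the standard formula w(t, x) = E_x[ w(0, B_t) exp(∫_0^t R(t − s, B_s) ds) ] for the equation w_t = w_xx + wR, where B has generator d²/dx² (so Var B_t = 2t); it is only a sanity check, not the stopping-time argument of the proof:

```python
import numpy as np

def feynman_kac(w0, R, t, x, n_paths=20_000, n_steps=200, seed=0):
    """Monte Carlo for w_t = w_xx + w R via
       w(t, x) = E_x[ w0(B_t) * exp( int_0^t R(t - s, B_s) ds ) ],
    where B is a Brownian motion with generator d^2/dx^2 (Var B_t = 2t)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    B = np.full(n_paths, float(x))
    integral = np.zeros(n_paths)
    for k in range(n_steps):
        integral += R(t - k * dt, B) * dt
        B += rng.normal(0.0, np.sqrt(2 * dt), n_paths)
    return np.mean(w0(B) * np.exp(integral))
```

For constant R = c and w0 ≡ 1 the PDE solution is w = e^{ct}, which the scheme reproduces.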
By using a Wong-Zakai result for approximating the stochastic equation (1) by piecewise linear noises, we shall now deduce the following stretching lemma for our stochastic equations with white noise driver.
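The piecewise linear interpolant W^ε can be written down directly; the sampling grid and interpolation details below are ours:

```python
import numpy as np

def brownian_path(T, n, seed=0):
    """A Brownian path sampled on n+1 equally spaced points of [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    return np.linspace(0.0, T, n + 1), W

def piecewise_linear(t_grid, W, eps, t):
    """W^eps(t) = W(k eps) + ((t - k eps)/eps)(W((k+1) eps) - W(k eps))
    for t in [k eps, (k+1) eps): linear interpolation on the eps-mesh."""
    k = int(t // eps)
    t0, t1 = k * eps, (k + 1) * eps
    W0, W1 = np.interp(t0, t_grid, W), np.interp(t1, t_grid, W)
    return W0 + (t - t0) / eps * (W1 - W0)
```

Solving the equation driven by the smooth path W^ε interval by interval, as in the proof below, then requires only deterministic theory on each interval.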

Proposition 8. Suppose that u, v are two solutions to (1) with respect to the same Brownian motion. Then, for all t > 0: (i) on the event that u(0) crosses v(0), u(t) crosses v(t), almost surely; (ii) on the event that u(0) ≻ v(0), u(t) ≻ v(t), almost surely.

Proof Define a piecewise linear approximation to a Brownian motion W by, for ε > 0,

W^ε(t) = W(kε) + ((t − kε)/ε)(W((k + 1)ε) − W(kε)) for t ∈ [kε, (k + 1)ε] and k = 0, 1, . . .
The approximating equation, with W^ε in place of W, can be solved successively over each interval [kε, (k + 1)ε], path by path. If u solves (1) with respect to W then we have the convergence u^ε(t) → u(t), in probability, for each t > 0. We were surprised not to be able to find such a result in the literature that covered our assumptions. The closest papers that we found were [8], whose assumptions did not cover Nemitski operators for the reaction and noise, and [12], which proves convergence in distribution for our model on a finite interval. Nevertheless this Wong-Zakai type result is true and can be established by closely mimicking the original Wong-Zakai proof for stochastic ordinary differential equations. The details are included in section 2.6 of the thesis [13]. (We note that the proof there, which covers exactly equation (1), would extend easily to equations with higher dimensional noises. Also it is in this proof that the hypothesis that f, g have continuous third derivatives is used.) In a similar way we construct v^ε with v^ε(0) = v(0). For all k, all paths of u^ε and v^ε lie in C^{1,2}((kε, (k + 1)ε] × R). By applying Lemma 7 repeatedly over the intervals [kε, (k + 1)ε] we see that u^ε(t) crosses v^ε(t) for all t ≥ 0 along any path where u(0) crosses v(0). We must check that this is preserved in the limit. Fix t > 0. There exists ε_n → 0 so that, for almost all paths, u^{ε_n}(t) → u(t) and v^{ε_n}(t) → v(t). Fix such a path where in addition u(0) crosses v(0). Suppose that θ_0(u(t) − v(t)) < ∞. Arguing as in part (iii) of Lemma 5 we find that lim sup_{n→∞} θ_0(u^{ε_n}(t) − v^{ε_n}(t)) ≤ θ_0(u(t) − v(t)). Now choose y ∈ Q with y > θ_0(u(t) − v(t)). Taking n large enough that y > θ_0(u^{ε_n}(t) − v^{ε_n}(t)) we find, since u^{ε_n}(t) crosses v^{ε_n}(t), that u^{ε_n}(t, y) ≥ v^{ε_n}(t, y). Letting n → ∞ we find u(t, y) ≥ v(t, y). Now the continuity of the paths ensures that u(t) crosses v(t). For part (ii) it remains to check that τ_a u(t) crosses v(t).
But this follows from part (i) after one remarks that if u solves (1) then so too does the translate τ_a u.

Corollary 9. When µ ∈ P(D), write Q^µ_t for the law of u(t) for a solution u to (1) whose initial condition u(0) has law µ, and write Q̃^µ_t for the centered law of ũ(t); write Q^H_t and Q̃^H_t in the special case that µ = δ_H. (i) If µ ≻_s ν then Q^µ_t ≻_s Q^ν_t for all t ≥ 0. (ii) The families t → Q^H_t and t → Q̃^H_t are stochastically increasing.

Proof Part (i) follows from Proposition 8 (ii) by choosing, via Strassen's theorem, initial conditions u(0) ≻ v(0) with laws µ and ν. For part (ii), since H is less stretched than any φ ∈ D we know that Q^H_s ≻_s Q^H_0 = δ_H for any s ≥ 0. Now set µ = Q^H_s and apply part (i) to see that

Q^H_{t+s} = Q^µ_t ≻_s Q^{δ_H}_t = Q^H_t,

where the first equality is the Markov property of solutions. This shows that t → Q^H_t is stochastically increasing. By Lemma 6 (ii) the family t → Q̃^H_t is also increasing.

The stochastic monotonicity will imply the convergence in law of ũ(t) on a larger space, as explained in the proposition below. Define D_c to be the set of right continuous decreasing functions φ : R → [0, 1]. Then D_c is a compact space under the L^1_loc topology: given a sequence φ_n ∈ D_c, then along a suitable subsequence (n_k) the limit lim_{k→∞} φ_{n_k}(x) exists for all x ∈ Q; then φ_{n_k} → φ where φ(x) = lim_{y↓x} ψ(y) is the right continuous regularization of ψ(x) = lim sup_{k→∞} φ_{n_k}(x).

Proposition 10. Let u be the solution to (1) started from H(x) = I(x < 0). Then ũ(t), considered as random variables with values in D_c, converge in distribution as t → ∞ to a limit law ν_c ∈ P(D_c).
Proof Choose t_n ↑ ∞. Then by Strassen's Theorem (9) we can find D valued random variables U_n with law L(U_n) = L(ũ(t_n)) and satisfying U_1 ≺ U_2 ≺ . . . almost surely. Note that U_n(0) = a and that U_n has continuous strictly negative derivatives (by Theorem 3 (ii)). The stretching pre-order, together with Lemma 5 (v), implies that, almost surely, the values U_n(x) are monotone in n for each x. Thus the limit lim_{n→∞} U_n(x) exists, almost surely, and we set U to be the right continuous modification of lim sup U_n. This modification satisfies U_n(x) → U(x) for almost all x, almost surely. Hence U_n → U in D_c, almost surely, and the laws L(ũ(t_n)) converge to L(U) in distribution. We set ν_c to be the law L(U) on D_c. To show that L(ũ(t)) → ν_c it suffices to show that the limit does not depend on the choice of sequence (t_n). Suppose (s_n) is another sequence increasing to infinity. If (r_n) is a third increasing sequence containing all the elements of (s_n) and (t_n) then the above argument shows that L(ũ(r_n)) is convergent and hence the limits of L(ũ(s_n)) and L(ũ(t_n)) must coincide.

Remark
We do not yet know that the limit ν_c is supported on D. We must rule out the possibility that the wavefronts get wider and wider and the limit ν_c is concentrated on flat profiles. We do this by a moment estimate in the next section. Once this is known, standard Markovian arguments in section 3.3 will imply that ν = ν_c|_D, the restriction to D, is the law of a stochastic travelling wave.

3.2 A moment bound
We will require the following simple first moment bounds. Under hypothesis (2) we may choose finite constants K_1, K_2 so that |f_0(z)| ≤ K_1 z(1 − z) and |g(z)| ≤ K_2 z(1 − z) for z ∈ [0, 1].

Lemma 11. Let u be a solution to (1) with initial condition u(0) = φ ∈ D. Then the first moment bounds of parts (i) and (ii) below hold, with a constant C(K_1, K_2, a) < ∞.

Proof For part (i) we may, by translating the solution if necessary, assume that φ crosses 1/2 at the origin, that is φ(x) ≤ 1/2 for x ≥ 0 and φ(x) > 1/2 for x < 0. Taking expectations in (1) leads to a bound which, combined with the estimates above and with the fact that φ crosses 1/2 at the origin, completes the proof of part (i).
For part (ii) we have more explicit bounds, using a Gaussian tail estimate to bound the relevant expectations for x > 0.

We briefly sketch a simple idea from [5] for the deterministic equation u_t = u_xx + u(1 − u) started at H, which we will adapt for our stochastic equation. The associated centered wave satisfies an equation involving the drift of the associated wavefront marker γ_t. Integrating over (−∞, 0] × [t_0, t], for some 0 < t_0 < t, yields an estimate on E[(Γ(u(t)))_+] which allows one, for example, to control the size of the back tail. Integrating over [0, ∞) gives information on the front tail. The following lemma gives the analogous tricks for the stochastic equation.

Lemma 12. Let u be the solution to (1) started from H(x) = I(x < 0). Let ũ be the solution centered at height a ∈ (0, 1). Then the two identities below hold for 0 < t_0 < t, almost surely.

Proof Integrating (6) first over [t_0, t] and then over [0, U] we find the decomposition (14). The interchange of integrals uses Fubini's theorem, path by path, for the first and third terms on the right hand side, and a stochastic Fubini theorem for the second and fourth terms (for example the result on p176 of [9] applies directly for the fourth term, and also for the second term after localizing at the stopping times σ_n = inf{s ≥ t_0 : sup_{y∈[0,U]} |ũ_x(s, y)| ≥ n}).
To prove the lemma we shall let U → ∞ in each of the terms. Bound | f (z)| ≤ Cz(1 − z) for some C.
Using the first moment bounds from Lemma 11 (ii) we obtain the domination that justifies passing to the limit in the first, third and fourth terms. This leaves the second term in (14), and the lemma will follow once we have shown that it vanishes as U → ∞; here we have converted from a Stratonovich to an Itô integral and we are writing [·, ·]_t for the cross quadratic variation. We claim that each of these terms converges to zero almost surely. Note that the strict negativity of the derivative u_x(t, x) and the relations (7) imply that the relevant paths are bounded. So the first term on the right hand side of (15) converges (almost surely) to zero by dominated convergence using u(s, U) → 0 as U → ∞. The second term in (15) also converges to zero by applying the same argument to the quadratic variation g^2(a) ∫_{t_0}^t ũ^2(s, U) m_x^2(s, a) ds. A short calculation leads to an explicit formula for the cross variation. Again, since also g(ũ(t, U)) → 0 as U → ∞, a dominated convergence argument shows that the final term in (15) converges to zero as U → ∞.
This completes the proof of the first equation in the lemma. The second is similar by integrating over [−L, 0] and letting L → ∞.

Proposition 13.
Suppose that f_0 is of KPP, Nagumo or unstable type. In the latter two cases suppose that f_0(a) = 0 and g(a) ≠ 0. Let u be the solution to (1) started from H(x) = I(x < 0) and let ν_c be the limit law constructed from u in Proposition 10. Then ν_c(D) = 1.

In the KPP and Nagumo cases we have the increasing limits as t ↑ ∞
In the unstable case the corresponding limits also hold.

Proof We start with the case where f_0 is of KPP type. In this case there is a constant C so that f_0(z) ≤ C z(1 − z) on [0, 1]. In a similar (but easier) way to Lemma 12, one may integrate (1) over s ∈ [t_0, t] and then x ∈ R. Taking expectations and rearranging, and using the first moments from Lemma 11 (ii) on each of the four terms of the right hand side, we obtain the desired bound. The other terms are similar.
Writing z → m(t, z) for the inverse function to x → u(t, x), the stochastic ordering of L(u(t)) and Lemma 5 imply that t → E[∫ u(1 − u)(s, x)] dx is increasing, and the functionals involved are bounded and continuous on D_c. So by the convergence of L(ũ(t)) to ν_c in P(D_c) we may pass to the limit. The last two displayed equations then imply that ν_c only charges D.
For 0 ≤ N ≤ M the relevant truncated functional is increasing in M and also in t (since L(ũ(t)) are stochastically increasing). We may therefore interchange the t and M limits. This control on the tails allows us to improve on (18) to the desired result (16).
Now we consider the case where f_0 is of Nagumo type, and this is the only place we exploit the bi-stability of f_0 (that is f_0′(0), f_0′(1) < 0). We may fix a smooth strictly concave h : [0, 1] → [0, ∞) with h(0) = h(1) = 0 and h′(a) = 0. The properties of h and the fact that f_0 is of Nagumo type together imply that h′f_0 ≤ 0 on [0, 1] and h′f_0 only vanishes at 0, a, 1. Since g^2(a) > 0 we have (h′f_0 + ½h″g^2) < 0 on (0, 1). The derivatives at z = 0, 1 are non-zero and this implies that there is an ε > 0 so that (h′f_0 + ½h″g^2) ≤ −εh. The aim is to obtain a differential inequality of the form m′(t) ≤ −εm(t) + C, where in the underlying computation we have integrated by parts. Letting N → ∞ is justified (and is similar to, but simpler than, Lemma 12). The stochastic monotonicity of s → L(ũ(s)) and Lemma 5 (iv) imply that the supremum sup_z |u_x(s, z)| is stochastically decreasing. Since E[sup_z |u_x(t_0, z)|] is finite by Theorem 3 (iv) we have the desired differential inequality for m(t) = ∫_R E[h(u)(t, x)] dx. This implies that m stays bounded, and since Ch(z) ≥ z(1 − z) for some C we obtain the corresponding bound on ∫_R E[u(1 − u)(t, x)] dx. As in the previous KPP case this implies (16) and that ν_c only charges D. Now we consider the case where f_0 is of unstable type. Rearranging the conclusion of Lemma 12 we see, after taking expectations, that the front tail is controlled by three terms. We claim that the limsup as t → ∞ is finite for all three terms on the right hand side. The first term can be bounded using the moment bounds above and then controlled by first moments as in the KPP case. For the second term the claim follows from the fact that s → E[|ũ_x(s, 0)|] is decreasing and finite from Theorem 3 (iv). For the third term the claim follows from Lemma 11. We conclude that the limsup of the left hand side of (21) is finite. Applying a similar argument to the second equation of Lemma 12 we obtain the corresponding control on the back tail. Note that f_0 is of a single sign on each of the intervals [0, a] and [a, 1]. Indeed there exists C so that z|z − a|(1 − z) ≤ C|f_0(z)| on [0, 1]. This and (17) imply that ν_c charges only D or the single point φ ≡ a.
The argument that there is no mass on the point φ ≡ a is a little fiddly, and we start with a brief sketch. We argue that if φ ≡ a has positive ν_c mass then there are arbitrarily wide patches in ũ(t), for large t, that are flattish, that is lie close to the value a. But the height of this large flattish patch will evolve roughly like the one dimensional diffusion in Lemma 14 below. Since g(a) ≠ 0 the SDE will move away from the value a with non-zero probability, and this would lead to an arbitrarily large value of E[∫_R |1 − ũ||ũ − a|ũ dx] for all large times, which contradicts (17). To implement this argument we will use the following estimate.

Lemma 14. Let u be a solution to (1) driven by a Brownian motion W. Let Y be the solution to the one dimensional equation dY = f_0(Y) dt + g(Y) dW, driven by the same Brownian motion, started at Y(0) = a. Then there exists a constant c_0(T) so that, for all η ∈ (0, 1), the weighted L^2 distance between u and Y over [0, T] is controlled as stated.

Note that the constant c_0 does not depend on η ∈ (0, 1). Considered as a constant function in x, the process Y_t is a solution to (1). This lemma therefore follows by a standard Gronwall argument used to estimate the L^2 difference between two solutions of an equation with Lipschitz coefficients. The use of weighted norms for equations on the whole space is also standard; see, for example, the analogous estimate in the proof of Shiga [11] Theorem 2.2 for the (harder) case of an equation driven by space-time white noise. Suppose (aiming for a contradiction) that ν_c(φ ≡ a) = δ_1 > 0. By the convergence L(ũ(t)) → ν_c we may choose, for any η > 0, a time T(η) after which the profile lies within η of a over a wide interval with probability at least δ_1/2. Suppose the solution u is defined on a filtered space (Ω, F, (F_t), P) and with respect to an (F_t) Brownian motion W. Then for t ≥ T(η) we may choose sets Ω_t ∈ F_t satisfying P[Ω_t] = δ_1/2 on which this occurs; here L_0 denotes the Lipschitz constant of z|1 − z||a − z| on [0, 1]. We now estimate the terms I and II: the first using the flatness of the patch, and the second using Cauchy-Schwarz. Thus, substituting these estimates into (22), we find for t ≥ T(η) a lower bound which, by taking η small, can be made arbitrarily large. This contradicts (17).
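The heuristic that g(a) ≠ 0 destroys flat patches at level a can be seen in the one dimensional SDE of Lemma 14. A minimal Euler-Maruyama sketch, where the particular drift (of unstable type, with f_0(a) = 0) and noise (with g(a) ≠ 0) are our illustrative choices:

```python
import numpy as np

def euler_maruyama(f0, g, y0, T=1.0, n_steps=1000, n_paths=20_000, seed=0):
    """Euler-Maruyama for the one dimensional SDE dY = f0(Y) dt + g(Y) dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Y = np.full(n_paths, float(y0))
    for _ in range(n_steps):
        Y += f0(Y) * dt + g(Y) * rng.normal(0.0, np.sqrt(dt), n_paths)
        Y = np.clip(Y, 0.0, 1.0)
    return Y

a = 0.5
f0 = lambda y: 4.0 * y * (a - y) * (1 - y)   # unstable type: f0(a) = 0
g = lambda y: 0.3 * y * (1 - y)              # g(a) = 0.075 != 0
Y = euler_maruyama(f0, g, a)
```

Started exactly at the rest point a the noiseless flow never moves, while the noisy paths spread to a nondegenerate distribution around a; in the proof it is this spread that produces an arbitrarily large value of E[∫_R |1 − ũ||ũ − a|ũ dx].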

3.3 Proof of Theorem 1
Let ν_c be the limit law constructed from u in Proposition 10. We let ν be the restriction of ν_c to D. Proposition 13 shows that in all cases ν is a probability measure. Moreover the fact that L(ũ(t)) → ν_c in P(D_c) implies that L(ũ(t)) → ν in P(D).
We first check that the centered solutions still form a Markov process. This can be done, as follows, by using the Dynkin criterion (see [10] Theorem 13.5) which gives a simple transition kernel condition for when a function of a Markov process remains Markov. Let D_0 = {φ ∈ D : Γ(φ) = 0} with the induced topology. Define the centering map J : D → D_0 by J(φ) = φ̃. Let (P_t(φ, dψ) : t ≥ 0) be the Markov transition kernels for solutions to (1). Then the Dynkin criterion is that for all measurable A ⊆ D_0 and all ψ ∈ D_0 the values P_t(φ, J^{−1}A) are equal for all φ ∈ J^{−1}(ψ). By Lemma 5 (ii), elements of J^{−1}(ψ) are translates of each other and the Dynkin criterion follows from translation invariance of solutions. As a consequence, there are transition kernels P̃_t(φ, dψ) for the centered process on D_0. We write (P_t) (respectively (P̃_t)) for the associated semigroups generated by these kernels and acting on measurable F : D → R (respectively F : D_0 → R), and we write (P*_t) and (P̃*_t) for the dual semigroups acting on P(D) (respectively P(D_0)).
We aim to show that ν is the law of a stochastic travelling wave by applying Markov semigroup arguments to the centered solutions (ũ(t) : t ≥ 0). Some difficulties arise since the wavefront marker Γ is only semicontinuous on D, and hence D_0 is a measurable but not a closed subset of D.
For example, we do not yet know that Γ(φ) = 0 for ν almost all φ (though we will see that this is true).
The centered law ν̃ charges only D_0 and we will therefore consider it (with a slight abuse of notation) as an element of M(D_0), where it is the image of ν under the centering map J. Take F : D → R that is bounded, continuous and translation invariant (that is, F(φ) = F(τ_a φ) for all a). Then the Feller property and translation invariance of solutions imply that P_t F remains bounded, continuous and translation invariant. Let F_0 be the restriction of F to D_0. The translation invariance of F implies that P̃_t F_0(φ̃) = P_t F(φ). Write Q^H_t and Q̃^H_t for the laws of u(t) and ũ(t) on D, when u is started at H. Then This equality may now be extended, by a monotone class argument, to hold for all bounded functions that are measurable with respect to the sigma field generated by continuous translation invariant F. Lemma 15 below shows that this includes all bounded measurable translation invariant F : D → R.
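The elided identity presumably has the following shape; this is a hedged reconstruction from the surrounding definitions, not a quotation of the paper:

```latex
% Sketch: the semigroup identity behind the monotone class extension
\[
  \tilde{Q}^{H}_{t+s}(F_0)
  = Q^{H}_{t+s}(F)
  = Q^{H}_{t}(P_s F)
  = \tilde{Q}^{H}_{t}\big(\tilde{P}_s F_0\big) ,
\]
```

using the translation invariance of F for the outer equalities and the Markov property for the middle one; letting t → ∞ then identifies the action of P̃_s on the limit law.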
This yields P̃*_s ν̃ = ν̃, showing that ν̃ is the law of a stationary travelling wave. Finally we check that ν was already centered. By the regularity of solutions at any time t > 0 we know that φ ∈ C^1 and φ_x < 0 for P̃*_t ν̃ almost all φ, and hence for ν̃ almost all φ, or indeed for ν almost all φ. But the construction of ν showed that φ(x) ≤ a for x > 0 and φ(x) ≥ a for x < 0 for ν almost all φ. Combining these shows that Γ(φ) = 0 for ν almost all φ and thus ν charges only D_0. Thus ν̃ = ν and this completes the proof.

Remark. We have fixed the centering height a throughout and suppressed its dependence in the notation. However, we wish to show that the choice of height is unimportant and, in this remark only, we shall now indicate this dependence. The construction in Proposition 10 of the stretched limit ν_c held for any centering height. We write ν^a_c for this law when centered at height a, and also Γ^a for the wavefront marker at height a and ũ^a for the solution started at H centered using Γ^a. The moments in Proposition 13 rely on the specific properties of f_0 and g and the distinguished point a in the definition of the three types of reaction f_0. But these moments imply, in any of the three cases, that the law ν^a_c charges only D and that the restriction ν^a to D is the law of a stationary travelling wave for any centering height a. We claim, for a_1, a_2 ∈ (0, 1), that the image of ν^{a_1} under the map φ → φ(· + Γ^{a_2}(φ)) is ν^{a_2}. Indeed ν^{a_1} ⪯_s δ_H and so by Corollary 9 (i) and the stationarity of ν^{a_1} we have ν^{a_1} ⪯_s L(ũ^{a_2}(t)). Now letting t → ∞ we have ν^{a_1} ⪯_s ν^{a_2}. But reversing the roles of a_1 and a_2 we find ν^{a_2} ⪯_s ν^{a_1} and Lemma 6 (ii) implies that the centered copies (at any height) of ν^{a_1} and ν^{a_2} must coincide.

Lemma 15. Translation invariant measurable F : D → R are measurable with respect to the sigma field generated by the continuous translation invariant functions on D.
Proof. We make use of a smoother wave marker than the marker Γ^a for the height a. Define Γ̂ = ∫_0^1 h(a) Γ^a da, where h : (0, 1) → R is continuous and compactly supported in (0, 1). Then Γ̂(φ) is finite and Γ̂(τ_a φ) = Γ̂(φ) + a if we assume in addition that ∫ h(a) da = 1. Then the map φ → Γ̂(φ) is continuous (since Γ^a(φ) is discontinuous at φ only for the countably many a where {x : φ(x) = a} has non-empty interior). We let D̂_0 = {φ ∈ D : Γ̂(φ) = 0}, so that D̂_0 is a closed subset of D, and give it the induced subspace topology and Borel sigma field. For this proof only we let φ̂ be the wave centered at the new wave marker Γ̂. One may now check that the map φ → Ĵ(φ) = (Γ̂(φ), φ̂) ∈ R × D̂_0 is a homeomorphism. Also, every continuous (respectively measurable) translation invariant F : D → R is of the form F(φ) = F̂(φ̂) for some continuous (respectively measurable) F̂ : D̂_0 → R. Using this one finds that
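The smoothing effect of Γ̂ can be illustrated numerically. The sketch below computes Γ̂ for a strictly decreasing profile and for its translate, recovering the shift property Γ̂(τ_b φ) = Γ̂(φ) + b. The sigmoid profile, the triangular weight h and all function names are illustrative assumptions for this sketch.

```python
import numpy as np

# Illustrative computation of the smoothed wave marker
#   Gamma_hat(phi) = \int_0^1 h(a) Gamma^a(phi) da,
# where Gamma^a(phi) is the location of the front at height a.
# The profile and the weight h below are assumptions for this sketch.

def gamma_a(x, phi, a):
    # Front marker at height a for a strictly decreasing profile:
    # the unique x with phi(x) = a, found by linear interpolation
    # (phi is decreasing, so reverse both arrays for np.interp).
    return np.interp(a, phi[::-1], x[::-1])

def gamma_hat(x, phi, h, grid):
    # Weighted average of Gamma^a over heights a (trapezoid rule).
    vals = h(grid) * np.array([gamma_a(x, phi, a) for a in grid])
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))

# h: continuous, compactly supported in (1/4, 3/4), with integral 1.
h = lambda a: 4.0 * np.maximum(0.0, 1.0 - np.abs(a - 0.5) / 0.25)
grid = np.linspace(0.25, 0.75, 501)

x = np.linspace(-20.0, 20.0, 4001)
phi = 1.0 / (1.0 + np.exp(x))            # decreasing front with phi(0) = 1/2
shifted = 1.0 / (1.0 + np.exp(x - 3.0))  # tau_3 phi

print(round(gamma_hat(x, phi, h, grid), 3))      # approximately 0.0
print(round(gamma_hat(x, shifted, h, grid), 3))  # approximately 3.0
```

Since h is symmetric about 1/2 and the sigmoid's markers satisfy Γ^{1−a} = −Γ^a, the first marker is 0; translating the profile by 3 shifts the marker by exactly 3, as the lemma's shift property predicts.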

The domain of attraction
Throughout this section ν is the law of the stationary travelling wave constructed in Theorem 1. The stochastic monotonicity results imply that solutions starting from a certain set of initial conditions are attracted to ν, as follows.
Lemma 17. Let u be a solution to (1) started from an initial condition satisfying (24). Let u_l, u_r be the solutions to (1) driven by the same white noise and with initial conditions u_l(0) = I(x < l) and u_r(0) = I(x < r). Then, almost surely, u_l(t, x) ≤ u(t, x) ≤ u_r(t, x) for all t and x, and Γ(u_l(t)) ≤ Γ(u(t)) ≤ Γ(u_r(t)) = Γ(u_l(t)) + (r − l) for all t.
Suppose that f_0 is of KPP or Nagumo type, with f_0(a) = 0 and g(a) = 0 in the latter case. Then for any ε > 0 there exists N(ε) so that In particular

Proof Theorem 3 (i) shows that coupled solutions u_l, u, u_r exist as desired. Note that for all x, t, almost surely, u_r(t, x) = u_l(t, x − (r − l)) (by translation invariance and uniqueness of solutions). This yields Γ(u_r(t)) = Γ(u_l(t)) + (r − l). Furthermore So (28) follows from Proposition 13.
The uniform control on the tails was obtained for the solution started from H in (19). Combining (25) and (26), A similar estimate holds for the left-hand tail, which completes the proof.
A key step in the proof of Theorem 2 is an implicit formula for the expected wave-speed.
Proposition 18. Suppose that f_0 is of KPP or Nagumo type, and, in the latter case, that f_0(a) = 0 and g(a) = 0. Suppose u is a solution to (1) with a trapped initial condition u_0 = φ as in (24). Then where ν is the law of the stationary travelling wave constructed in Theorem 1.
Remark. The proof breaks down in the unstable case and we expect these formulae to be incorrect there (for example the last integral in (29) would always be positive in the unstable case). Indeed an examination of the proof suggests that we must have ∫∫ φ(1 − φ) dx ν(dφ) = ∞ in the unstable case, else we could establish (29).
Proof By Lemma 17 it is enough to establish the formulae for the solution u started at u(0) = H. Combining (12) and (13) we have The aim is to take expectations, divide by t and then take the limit t → ∞. We may bound the final term by and the tail estimates (27) allow one to obtain the same limit when N = ∞. Using this in (30) we may, as planned, deduce the first of the formulae in the proposition. For the second and third formulae we argue similarly with each of (12) and (13) separately. We essentially need only one new fact, (31), which requires us to show that L(ũ(t)) converges in a stronger topology. Choose t_n ↑ ∞. The upcoming Lemma 19 implies the tightness of (ũ(t, x) : |x| ≤ L)_{t ≥ t_0} on C^1([−L, L], R). So we may find a subsequence (t_n) along which (ũ(t_n, x) : |x| ≤ L) converges in distribution on C^1([−L, L], R).
The limit law must agree with that of (φ(x) : |x| ≤ L) under ν. Moreover the moments from Lemma 19 show that the variables ũ_x(t_n, 0) are uniformly integrable. Therefore E[ũ_x(t_n, 0)] → ∫ φ_x(0) ν(dφ). Since this is true for any choice of subsequence (t_n) we may deduce (31) and complete the proof.
Lemma 19. Suppose that f_0 is of KPP or Nagumo type, and, in the latter case, that f_0(a) = 0 and g(a) = 0. Suppose u is a solution to (1) with a trapped initial condition u_0 = φ as in (24). Then for any t_0, L, p > 0
E[ sup_{|x| ≤ L} |ũ_x(t, x)|^p + sup_{|x| ≤ L} |ũ_xx(t, x)|^p ] ≤ C(L, p, t_0) < ∞ for all t ≥ t_0.
Proof We need to check that centering the solutions does not spoil the control of these derivatives from Theorem 3 (iii). First note that by interchanging the order of integration. Hence where C(a) = a^{−1} + (1 − a)^{−1}. The first two terms in (32) have first moments bounded uniformly in t by (28). By conditioning on time t − t_0 and using Lemma 11 (i) the third term also has a bounded first moment. This shows that E|Γ(u(t)) − Γ(u(t − t_0))| is bounded independently of t ≥ t_0. Then we use Chebyshev's inequality to estimate
P[ sup_{|x| ≤ L} |ũ_x(t, x)| > K ] ≤ P[ sup_{|x| ≤ L + K^p} |u_x(t, x + Γ(u(t − t_0)))| > K ] + P[ |Γ(u(t)) − Γ(u(t − t_0))| ≥ K^p ].
In the final inequality we have used the moments from Theorem 3 (iii). The desired moments for sup_{|x| ≤ L} |ũ_x(t, x)| follow from these tail estimates. The second derivatives are entirely similar.
Remark Such estimates could be used to improve the topology of convergence in Theorem 1; indeed they imply that the convergence of ũ(t) holds in C^1_loc(R). Convergence of higher derivatives should follow in a similar way (requiring more smoothness on f, g as necessary).
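For intuition on the expected wave-speed appearing in Proposition 18, the following crude explicit finite-difference simulation of the stochastic KPP equation du = (u_xx + u(1 − u)) dt + ε u(1 − u) ∘ dW, with a single Brownian motion acting on all of space as in (1), tracks the front marker over time. Every numerical choice here (the grid, ε, the Itô form with an explicit Stratonovich correction, the front at level 1/2) is an illustrative assumption, not the paper's construction; for small ε the measured speed is close to the deterministic KPP minimal speed 2.

```python
import numpy as np

# Crude explicit scheme for du = (u_xx + u(1-u)) dt + g(u) o dW with
# g(u) = eps * u(1-u) and a single Brownian motion W for all of space.
# The Stratonovich integral is approximated by the Ito integral plus
# the correction (1/2) g'(u) g(u) dt. All parameters are illustrative.

rng = np.random.default_rng(0)
dx, dt, eps = 0.2, 0.01, 0.1      # dt <= dx^2 / 2 keeps the scheme stable
x = np.arange(0.0, 400.0, dx)
steps = 5000                      # total time T = 50
u = (x < 20.0).astype(float)      # Heaviside-type initial condition

def front(u, x, level=0.5):
    # Wavefront marker: rightmost x with u >= level.
    idx = np.where(u >= level)[0]
    return x[idx[-1]] if idx.size else x[0]

mid = steps // 2
for n in range(steps):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0        # crude flat ends; the front stays far away
    g = eps * u * (1.0 - u)
    strat = 0.5 * eps * (1.0 - 2.0 * u) * g   # (1/2) g'(u) g(u)
    dW = np.sqrt(dt) * rng.standard_normal()
    u = np.clip(u + (lap + u * (1.0 - u) + strat) * dt + g * dW, 0.0, 1.0)
    if n + 1 == mid:
        pos_mid = front(u, x)

speed = (front(u, x) - pos_mid) / (mid * dt)
print(f"estimated wave speed: {speed:.2f}")
```

The speed is measured over the second half of the run to reduce the transient; the Bramson logarithmic correction means the estimate sits slightly below 2 at these times.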

Proof of Theorem 2
Consider first the case where f_0 is of KPP type. Let u be the solution to (1) with initial condition u(0) = φ ∈ D which satisfies (24). Write Q̃^φ_t for the law of ũ(t). For t > 0 write Q̃^φ_{[0,t]} for the law t^{−1} ∫_0^t Q̃^φ_r dr. Choose t_n ↑ ∞. We may find a subsequence t_n along which the laws Q̃^φ_{[0,t_n]} converge as elements of M(D) to a limit which we denote by μ (use compactness of M(D_c) and the bound (28) to show that limit points charge only D). We shall show that μ = ν. The subsequence principle then implies that Q̃^φ_{[0,t]} → ν as t → ∞ and finishes the proof of the theorem. We will need later to know that μ charges only C^1 strictly decreasing paths. To see this we will check that μ is the law of a stationary travelling wave; many of the arguments are as in the proof of Theorem 1. Let μ̃ be the centered measure. As before, take F : D → R that is bounded, continuous and translation invariant and let F_0 be the restriction of F to D_0. Then As before, this implies that P̃*_s μ̃ = μ̃ and so μ̃ is the law of a stationary travelling wave. Also as before this implies that μ̃ = μ.
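The invariance of limit points of the time averages is the usual Krylov-Bogoliubov computation; a sketch in the notation above, for bounded continuous translation invariant F with restriction F_0:

```latex
% Sketch: time averages become invariant in the limit
\[
  \tilde{Q}^{\varphi}_{[0,t]}\big(\tilde{P}_s F_0\big)
  - \tilde{Q}^{\varphi}_{[0,t]}(F_0)
  = \frac{1}{t}\Big( \int_{t}^{t+s} - \int_{0}^{s} \Big)
    \tilde{Q}^{\varphi}_{r}(F_0)\, dr
  \longrightarrow 0
  \quad \text{as } t \to \infty ,
\]
```

since F is bounded, so any limit point μ̃ of the averages satisfies μ̃(P̃_s F_0) = μ̃(F_0).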
Next we derive an implicit formula for the expected wave-speed in terms of μ. Our solution is started from φ and not the Heaviside function, but the formula (30) still holds. Indeed in its derivation (in Lemma 12) we used the moment control (28), which holds for our solutions, the decay of the derivatives ũ_x(t, x) as x → ±∞ from Theorem 3 (iv), and the first moment bounds on E[u(t, x)] and E[1 − u(t, x)], which can again be obtained by comparing with the coupled solutions u_r and u_l. The plan, once more, is to take expectations, divide by t_n and then let n → ∞. The third term on the right hand side of (30) does not contribute to this limit, again by using the estimates