Lyapunov exponents in a slow environment

Motivated by the evolution of a population in a slowly varying random environment, we consider the 1D Anderson model on finite volume, with viscosity $ \kappa>0 $: $$ \partial_{t} u(t,x) = \kappa \Delta u(t,x) + \xi(t, x) u(t,x), \quad u(0, x) = u_{0}(x), \qquad t>0, x \in \mathbb{T}. $$ The noise $ \xi $ is chosen constant on time intervals of length $ \tau>0 $ and sampled independently after a time $ \tau $. We prove that the Lyapunov exponent $ \lambda (\tau) $ is positive and near $ \tau= 0 $ follows a power law that depends on the regularity of the driving noise. As $ \tau \to \infty $ the Lyapunov exponent converges to the average top eigenvalue of the associated time-independent Anderson model. The proofs rely on sharp control of the projective component of the solution and build on the Furstenberg--Khasminskii and Bou\'e--Dupuis formulas, as well as on Doob's H-transform and on tools from singular stochastic PDEs.


Introduction
In this work, we study a parabolic Anderson model with viscosity $\kappa > 0$ and periodic boundary conditions. As $\tau \to \infty$ the Lyapunov exponent $\lambda(\tau)$ converges to $\mathbb{E}[\lambda_{\mathrm{stat}}]$, $\lambda_{\mathrm{stat}}$ being the largest eigenvalue of the associated time-independent dynamic, whereas $\lambda(\tau) \to 0$ for $\tau \to 0$.
In addition, simulations show that $\lambda(\tau)$ stabilizes relatively quickly around $\mathbb{E}[\lambda_{\mathrm{stat}}]$, so the behaviour near $\tau = 0$ is particularly interesting and gives a measure of the speed of adaptation of the population. Here we prove that the Lyapunov exponent follows a power law that depends on the regularity of the noise. We consider two archetypal settings: in the first one the noise is function valued, in the second one it is space white noise. In the first setting we show that $\lambda(\tau)$ grows linearly, while in the second setting $\lambda(\tau) \simeq \sqrt{\kappa^{-1}\pi\tau}$. This behaviour can be explained by considering a different scaling. If we multiply $\xi$ by a factor $\tau^{-\frac12}$ we expect to see a Stratonovich equation in the limit $\tau \to 0$, and by scaling we also expect that $\lambda(\tau)$ gets roughly multiplied by $\tau^{-1}$ (since in the limiting equation the Lyapunov exponent depends linearly on the second moment of the noise); after this rescaling we expect its convergence to the Lyapunov exponent of the Stratonovich equation. This is the case if the noise is regular: for irregular noise, instead, the Stratonovich equation makes sense only after renormalization (i.e. after subtracting an Itô correction), which accounts for the blow-up of order $\tau^{-\frac12}$ in the new scaling. However, we will not follow precisely the approach we just outlined. On the one hand, our scaling slightly simplifies the setting, in the sense that we can work entirely without making use of Wong-Zakai results: it will be sufficient to establish averaging to the heat equation as $\tau \to 0$. On the other hand, since in our scaling we are essentially performing a Taylor expansion, we will need to focus on certain additional moment estimates.
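In formulas, the heuristic just described reads as follows (a formal computation in our notation, not a statement from the proofs): since the rescaled noise $\tau^{-\frac12}\xi$ is piecewise constant on intervals of length $\tau$, its time integral satisfies an invariance principle, and one expects

```latex
\partial_t u^{\tau} = \kappa \Delta u^{\tau} + \tau^{-\frac12}\, \xi(t, x)\, u^{\tau}
\quad \xrightarrow[\ \tau \to 0\ ]{} \quad
\partial_t u = \kappa \Delta u + u \circ \partial_t W(t, x),
```

because $\int_0^t \tau^{-\frac12} \xi(s, x)\,\mathrm{d}s$ converges to a Brownian motion $W(\cdot, x)$ in time. Accordingly, $\tau^{-1}\lambda(\tau)$ should converge to the Lyapunov exponent of the limiting Stratonovich equation whenever the latter is well defined without renormalization.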
As a notable side effect of our scaling, we can recover the renormalization constant for the multiplicative stochastic heat equation purely from the large-scale dynamical properties of the equation, without considering the small-scale behaviour that usually motivates the use of tools from singular SPDEs [Hai14, GIP15]. These tools will still be required, though in a different setting and in combination with different (indeed fewer and simpler) stochastic estimates than in the case of the multiplicative stochastic heat equation.
It is also crucial to observe that Wong-Zakai results for the associated SPDEs with Stratonovich or white noise, such as [HP15], are not per se sufficient to derive convergence of the Lyapunov exponents, since they consider convergence on compact time intervals and do not cover the large-scale behaviour.
The key additional ingredient in the control of the longtime behaviour is the projective invariant measure associated to the equation. The latter is the limit of the "projective" process $u_t/\|u_t\|$, for some suitable norm $\|\cdot\|$. If one establishes convergence of the solution map as well as of the named invariant measure in a suitable space and with sufficient moment estimates, then the convergence of the Lyapunov exponent will follow. This is, in a nutshell, the approach discussed for SDEs in the monograph [Kha12]. In full generality there is no infinite-dimensional extension of this theory, due to a lack of understanding of the projective component. To the best of our knowledge the present order-preserving case is the only one in which a spectral gap for the projective component is available; it has been studied in a series of papers [MS13a, MS13b, MS16b] and [MS16a], see also the survey [Mie14] and the book [Hes91] for the time-periodic case.
At the heart of our arguments lies the strict positivity of the solution map to (1.1), together with classical approaches for products of random matrices. We will decompose $u_t = \|u_t\|_{L^1} z_t$, where $z_t > 0$ integrates to one and is the projective component of $u_t$. It is then useful to endow the projective space in which $z_t$ takes values with a particular topology under which positive linear maps satisfy a contraction principle. This guarantees the existence of a spectral gap for the process $z_t$, as was first observed for random matrices [AGD94, Hen97], but the result extends immediately to SPDEs [MS16b, Ros19]. We observe that in the existing literature unique ergodicity of the projective component has been studied in many different forms over the past years. A seminal work by Sinai based on the representation of $z_t$ via directed polymers [Sin91] has been extended to cases without viscosity and on infinite volume [WKMS00, BCK14, DGR21] in the context of ergodicity for the Burgers equation: we highlight here the work [Bak16], where the noise has a structure comparable to our case, and a recent article [GK21] that uses a somewhat similar approach to establish Gaussian fluctuations.
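In formulas, with the $L^1$ normalization used throughout, the decomposition and the resulting additive structure of the Lyapunov exponent read (a standard rewriting, included here for orientation):

```latex
z_t := \frac{u_t}{\|u_t\|_{L^1}}, \qquad
\log \|u_{n\tau}\|_{L^1} = \log \|u_0\|_{L^1} + \sum_{i=0}^{n-1} \log \big\| e^{\tau H(\omega_i)} z_{i\tau} \big\|_{L^1},
```

so that, once the projective process $z_{i\tau}$ is stationary and ergodic, $\lambda(\tau)$ is the expectation of a single summand divided by $\tau$.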
In view of the mentioned results it would appear particularly interesting to extend the present study to infinite volume. However, in such a setting the results for the projective component we mentioned are weaker and the picture we presented changes drastically, as the growth rate can be super-exponential in the case of so-called strongly catalytic environments. In particular, the regularity of the environment can determine the exact super-exponential order of growth of the population (see e.g. [KPvZ20, Che14] for the time-independent case or [GdH06, CMS05], as well as the monograph [Kön16, Chapter 8], for time-dependent problems).
Next let us discuss our proof methods. We will make use of the spectral gap of $z_t$ to derive the Furstenberg-Khasminskii formula for $\lambda(\tau)$. We then study the Lyapunov exponent near zero via a Taylor expansion of the latter formula, in the spirit of similar results for products of random matrices close to the identity. These Taylor expansions build on the convergence of certain stochastic quantities, which give rise to the leading-order terms. Much unlike the finite-dimensional case, here, if the noise is rough, we use paracontrolled distributions [GP17] to identify such terms. The positivity of $\lambda(\tau)$ in the bulk is proven instead with an application of the Boué-Dupuis formula, by constructing a suitable control. We observe that there is a vast literature on lower bounds for Lyapunov exponents for parabolic Anderson models. Although to the best of our knowledge there is no result that covers our minimal assumption (the Lyapunov exponent is always positive, unless the noise is constant in space), we believe that our proof, which relies instead on the contraction property in the projective space and on a perturbation expansion, is of independent interest. Eventually, the convergence for $\tau \to \infty$ is studied via Doob's H-transform. In all cases, the backbone of our analysis consists of moment estimates for the invariant projective component: an essential tool in this respect is the use of certain quantitative lower bounds on the fundamental solution of SPDEs, as were developed recently in the context of SDEs with singular drift [PvZ20], as well as some precise estimates on the moments of the solution map to (1.1).
In conclusion, this work shows in a novel way how the tools presented in the cited works can be extended to obtain strong quantitative control of the Lyapunov exponent, with particular attention to the interplay between the regularity of the noise and theories from singular SPDEs.

Structure of the article
In Section 2 we will collect our main results, Theorems 2.7 and 2.8, after having introduced the setting of the article. In the next three sections we then prove the results concerning, respectively, the behaviour near zero, the bulk behaviour and the averaging as $\tau \to \infty$. The focus will be on the crucial points of the proofs: we sometimes provide an intuitive proof in the simpler case of regular noise and then concentrate on the added difficulties of white noise (or sometimes we just treat the latter, more complicated case). We leave the most technical calculations to later sections. In particular, in Section 6 we recall the required properties of the projective space and prove some moment estimates on the invariant measures. These in turn build on a quantitative analytic bound that we present in Section 7. In Section 8 we prove the stochastic estimates required for the Taylor expansion near $\tau = 0$, and in Appendix A we define some function spaces and recall basic constructions involving paraproducts.

Acknowledgments
The author gratefully acknowledges support through the Royal Society grant RP \R1\191065. Many thanks to Florian Bechtold, Ilya Goldsheid, Martin Hairer and Nicolas Perkowski for fruitful discussions, ideas and support.

Notations
Let $\mathbb{N} = \{0, 1, 2, \dots\}$. We will work on the torus $\mathbb{T} = \mathbb{R}/\mathbb{Z}$. We denote with $M_b(\mathbb{T})$ the space of measurable and bounded functions $\varphi \colon \mathbb{T} \to \mathbb{R}$, endowed with the uniform norm $\|\varphi\|_\infty = \sup_{x \in \mathbb{T}} |\varphi(x)|$. Then let $S(\mathbb{T})$, $S'(\mathbb{T})$ be respectively the space of Schwartz test functions (i.e. smooth functions) and its topological dual, the space of Schwartz distributions. For $\alpha \in \mathbb{R}$, $p \in [1, \infty]$, we refer to Appendix A for the construction of the Besov-type spaces we work with. For $\alpha \in (0, \infty] \setminus \mathbb{N}$, $C^\alpha$ coincides with the usual space of $\alpha$-Hölder continuous functions. For time-dependent functions $\varphi \colon [0, T] \to X$ (for some $T > 0$) and a Banach space $X$, we introduce, for $p \in [1, \infty]$, the norms $\|\varphi\|_{L^p([0,T]; X)} = \big( \int_0^T \|\varphi_t\|_X^p \,\mathrm{d}t \big)^{1/p}$, with the usual modification if $p = \infty$. For a set $X$ and two functions $f, g \colon X \to \mathbb{R}$ we write $f \lesssim g$ if there exists a constant $c > 0$ such that $f(x) \leq c\, g(x)$ for all $x \in X$.

Main results
As mentioned in the introduction, we will work in two distinct settings: in the first one every realization of the noise, for fixed time, is assumed to be a function (Assumption 2.1), either piecewise constant or with some continuity requirement; in the second one we consider space white noise. We start by describing precisely the first setting: we add the index stat to indicate that the noise we describe is time-independent.
Assumption 2.1 (Regular noise) Consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ supporting a random function $\xi_{\mathrm{stat}}$ such that the following requirements are satisfied.

(Regularity) One of the following two holds true:
(a) There exists an $\alpha \in (0, 1)$ such that

Remark 2.2 Most of the calculations we will present work for a general potential in $L^\infty$ with exponential moments. The additional regularity assumption will be used for certain stochastic estimates that are based on the Feynman-Kac formula. The quadratic exponential moments are instead required to study the behaviour in the bulk.
In the second setting, below, we consider space white noise, as an archetype of more irregular noises for which every realization is a distribution rather than a function.
Assumption 2.3 (White noise) Consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ supporting a random distribution $\xi_{\mathrm{stat}} \colon \Omega \to S'(\mathbb{T})$ such that for any $\varphi \in C^\infty(\mathbb{T})$ the random variable $\langle \xi_{\mathrm{stat}}, \varphi \rangle$ is a centered Gaussian with covariance $\mathbb{E}\big[ \langle \xi_{\mathrm{stat}}, \varphi \rangle \langle \xi_{\mathrm{stat}}, \psi \rangle \big] = \langle \varphi, \psi \rangle_{L^2(\mathbb{T})}$.

Now we define our time-dependent potential.
Definition 2.4 Consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ such that either Assumption 2.1 or Assumption 2.3 is satisfied, supporting a sequence $\{\xi^i_{\mathrm{stat}}\}_{i \in \mathbb{N}}$ of i.i.d. random fields $\xi^i_{\mathrm{stat}} \colon \Omega \to S'(\mathbb{T})$ such that $\xi^i_{\mathrm{stat}} = \xi_{\mathrm{stat}}$ in distribution. Then for any $\tau > 0$ define $\xi^\tau(t, x) = \xi^i_{\mathrm{stat}}(x)$ for $t \in [i\tau, (i+1)\tau)$. We will consider the following random Hamiltonians, naturally associated to $\xi^\tau$.

Definition 2.5 In the same setting as the previous definition, define for every $\omega \in \Omega$ the Hamiltonian $H(\omega) \varphi = \kappa \Delta \varphi + \xi_{\mathrm{stat}}(\omega) \varphi$. In the case of space white noise (Assumption 2.3) the Hamiltonian is defined in the sense of Fukushima and Nakao [FN77].
To study the longtime behaviour of (1.1) we recall the Furstenberg formula for the Lyapunov exponent $\lambda(\tau)$. Here we will write $u_0 > 0$ if $u_0(x) \geq 0$ for almost all $x \in \mathbb{T}$ and $\int_{\mathbb{T}} u_0(x) \,\mathrm{d}x > 0$.

Lemma 2.6 (Furstenberg formula) Consider $\tau > 0$ and let $u$ be the solution to (1.1) with initial condition $u_0 \in L^1(\mathbb{T})$, $u_0 > 0$ and with $\xi = \xi^\tau$ as in Definition 2.4. Then there exists a $\lambda(\tau) \in \mathbb{R}$ such that $\mathbb{P}$-almost surely the following limit holds (and is independent of the choice of $u_0$): $\lambda(\tau) = \lim_{t \to \infty} \frac{1}{t} \log \|u(t)\|_{L^1}$. In addition, consider $H(\omega)$ as in Definition 2.5. Then $\lambda(\tau) = \frac{1}{\tau}\, \mathbb{E} \log \big\| e^{\tau H(\omega)} z_\infty(\tau, \omega') \big\|_{L^1}$, where $z_\infty$ is the projective invariant measure constructed in Proposition 6.3.
We provide a proof of the lemma in Section 6. Now instead we pass to the main results of this work. The next result describes the behaviour of the Lyapunov exponent under the assumption that the noise is regular. Here we denote with $\sigma(H)$ the spectrum of a closed operator $H$. We note that all operators we consider have compact resolvents, so $\sigma(H)$ consists of the pure point spectrum of the operator.

Theorem 2.7 For large values of $\tau$:
Proof The continuity of $\lambda$ follows from Lemma 6.4 and the positivity from Lemma 4.1. Then the first statement is proven in Lemma 3.1, while the second statement follows from Proposition 6.6 as well as Lemma 5.5. ✷

Instead, in the case in which $\xi$ has the law of space white noise the behaviour near zero follows a different power law.

Theorem 2.8 For large values of $\tau$: lim
Proof The continuity of $\lambda$ follows from Lemma 6.4. Similarly to above, the first statement follows from Lemma 3.2, while the second statement is a consequence of Proposition 6.6 as well as Lemma 5.5. ✷

In the next sections we will collect all the results needed to prove the previous two claims.

Behavior near zero
This section is devoted to the proof of the small $\tau$ behaviour stated in Theorems 2.7 and 2.8. We start with the simpler setting of Assumption 2.1.
Lemma 3.1 For $\tau > 0$ and under Assumption 2.1 consider $\lambda(\tau)$ as in Lemma 2.6. Then

Proof By Lemma 2.6 we have for any $\tau > 0$: Using the definition of the semigroup $e^{\tau H(\omega)}$, we can rewrite the quantity inside the logarithm as: where in the last step we used that $\int_{\mathbb{T}} z_\infty(\tau, \omega', x) \,\mathrm{d}x = 1$ by construction (cf. Proposition 6.3). Now let us define where we used integration by parts to remove the Laplacian. With this definition we observe that and now our result will follow by a Taylor expansion of the logarithm. The key observation is that although $\zeta(\tau, \omega, \omega') \simeq \tau$ for $\tau \to 0$, since the potential is centered we have $\int_\Omega \zeta(\tau, \omega, \omega') \,\mathrm{d}\mathbb{P}(\omega) \simeq \tau^2$, so that we obtain a term of the correct order (here we will use Lemma 8.1).
To rigorously justify the Taylor expansion we start with some moment bounds for $\zeta$. By a maximum principle we can bound: In particular, by Lemma 6.5 on the projective invariant measure and the finite exponential moments of Assumption 2.1, we obtain that for any $p \geq 1$ there exists a $C(p) > 0$ such that $\sup_{\tau \in (0,1)}$ We can conclude that the set $A_\tau = \{ |\zeta(\tau)| \leq \frac12 \} \subseteq \Omega \times \Omega$ satisfies, for any $p \geq 1$: by choosing $p = 4$. Now we expand the logarithm to obtain: where the rest $R$ is the Lagrange remainder of the Taylor expansion, for some $\kappa$ which, using the definition of $A_\tau$, satisfies the bound $\kappa(\tau, \omega, \omega') \in [1/2, 3/2]$. Hence the remainder term is controlled by $|R(\tau, \omega, \omega')| 1_{A_\tau}(\omega, \omega') \lesssim |\zeta(\tau, \omega, \omega')|^3$, so that from the bound $\mathbb{E}_{\mathbb{P} \otimes \mathbb{P}} |\zeta(\tau)|^3 \lesssim \tau^3$ we obtain: In the last step we followed arguments similar to those already used to control the integral on $A_\tau^c$. At this point we have reduced the problem to an estimate for the first two terms in the Taylor expansion of the logarithm. Now we apply Lemma 8.1 to obtain, for some $\gamma > 0$, with $\lim_{\tau \to 0^+} r(\tau) = \frac{1}{4} \int_{\mathbb{T}} \int_{\mathbb{T}} \mathbb{E} |\xi_{\mathrm{stat}}(x) - \xi_{\mathrm{stat}}(y)|^2 \,\mathrm{d}x \,\mathrm{d}y$. This concludes the proof of the lemma. ✷

Next we treat the behaviour near zero in the white noise case. Here we will skip those parts of the proof that are identical to the arguments we just used.
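The expansion invoked above is the second-order Taylor expansion of the logarithm with Lagrange remainder; in generic form (our notation, matching the constants of the proof):

```latex
\log(1 + \zeta) \;=\; \zeta - \frac{\zeta^2}{2} + R, \qquad
R = \frac{\zeta^3}{3 \kappa^3}, \quad \kappa \in [1/2, 3/2] \ \text{ on } \{ |\zeta| \leq 1/2 \},
```

so that $|R| \leq \frac{8}{3} |\zeta|^3$ on $A_\tau$, while the centering of the potential makes the first moment of $\zeta$ of order $\tau^2$, matching the order of the quadratic term.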
Lemma 3.2 For $\tau > 0$ and under Assumption 2.3 consider $\lambda(\tau)$ as in Lemma 2.6. Then

Proof Once again we use Lemma 2.6 to write, for any $\tau > 0$: Following the calculations for the regular case, we rewrite the quantity in the logarithm by integration by parts. Hence let us define To define the product inside the integral, since $\xi_{\mathrm{stat}}$ is a distribution, we can use the product estimates in Lemma A.1 for any $\varepsilon > 0$: Now, for any $p \geq 1$ and $\gamma \in (0, 1)$ we can estimate, via Lemma 6.9: and by Lemma 6.10 $\sup_{\tau \in (0,1)}$ so that overall, for any $\varepsilon \in (0, 1/2)$, $\sup_{\tau \in (0,1)}$ As a consequence of this bound, following the same arguments as in the proof of Lemma 3.1 to justify the Taylor expansion, we obtain that To establish the limiting behaviour for $\tau \to 0$ we have to further decompose $\eta$ into a leading term of order $\simeq \tau^{\frac32}$ and a better-behaved remainder. For fixed $\varphi \in C^{\frac12 + \gamma}$ (for some $\gamma \in (1/2, 1)$) and $s > 0$ we write the solution $e^{s H(\omega)} \varphi$ as where the modified paraproduct $\prec\!\!\prec$ is defined as in (A.3) (it is in many ways equivalent to the paraproduct $\prec$, with the crucial difference that the commutator $[\psi \prec\!\!\prec (\cdot), (\partial_t - \Delta)]$, for $\psi$ fixed, satisfies some nice regularity estimates) and $I(\xi_{\mathrm{stat}})(t) = \int_0^t P_{t-s} \xi_{\mathrm{stat}} \,\mathrm{d}s$ as in Lemma 8.2. Then by Lemma 3.3 we have that for any $\gamma \in (1/2, 1)$ there exists a $\delta > 0$ such that In this way we find via Lemma 6.10: The second term in the decomposition above, $\xi_{\mathrm{stat}} P_s z_\infty(\tau)$, vanishes on average by the independence of $\xi_{\mathrm{stat}}$ and $z_\infty(\tau)$. We are left with studying, for $s \in [0, t]$: where we used the resonant product $\circ$ as in Lemma A.1, observing that $\int_{\mathbb{T}} f(x) g(x) \,\mathrm{d}x = \mathcal{F}(fg)(0)$ (with $\mathcal{F}$ the Fourier transform). Now we can further decompose with the commutators For the first resonant product we can use [GIP15, Lemma 2.4] to bound, for $\delta, \gamma \in (0, 1)$: where the parameters must satisfy $\frac12 + \gamma - 4\delta > 0$ (which is true for $\delta$ sufficiently small: in particular, we can assume that $\delta$ is the same as chosen in the calculations above). Now we can
estimate so that overall for some $\delta > 0$ As for the second commutator term, we use [GP17, Lemma 2.8] and (3.1) to find: so that (assuming $\delta$ is sufficiently small) we obtain: We have thus deduced the following estimate on the Lyapunov exponent The proof of the lemma is concluded if we show that Indeed, for $\delta > 0$ sufficiently small (and uniformly over $\tau \in (0, \tau_*)$ and $s \in (0, \tau)$): by Lemma 8.2 for the last term, and by Lemmata 6.9 and 6.10 for the first term. The last step is then to show that Since $\int_{\mathbb{T}} P_s z_\infty(\tau)(x) \,\mathrm{d}x = 1$ it is enough to prove, once more for $\delta > 0$ sufficiently small, that Here we can bound so that the claimed result follows along the same arguments explained above, by applying Lemma 3.3 and (3.1). ✷

To conclude this section we establish an estimate, Lemma 3.3, on the paracontrolled decomposition of the solution used in the previous lemma: where

Proof We have that Hence we obtain, for any choice of $\delta > 0$: On the other hand, for the commutator term we can use [GP17, Lemma 2.8], which guarantees that: Then define $\delta_2 = 2\varepsilon$ and choose $\delta = \min\{\delta_1, \delta_2\}$. We find via Lemma A.2, collecting all the previous estimates: If $\varepsilon$ is chosen sufficiently small we have $1 - 2\varepsilon \geq \frac12 + \delta$ (of course, this choice is far from optimal), so that the result follows now by Lemma 6.9, since $\mathbb{E}[\|\xi_{\mathrm{stat}}\|^p_{\mathcal{C}^{-\frac12 - \varepsilon}}] < \infty$ for any $p \geq 1$ (for example because $\xi_{\mathrm{stat}}$ is the distributional derivative of a Brownian motion). ✷

Bulk behavior
In this section we study the behaviour of $\lambda(\tau)$ in the bulk $(0, \infty)$ and establish the positivity of the Lyapunov exponent. Our argument is based on the Boué-Dupuis formula and on a perturbation argument to construct a suitable control.
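For orientation, we recall the shape of the variational formula (stated informally here; see [BD98] for the precise statement and the conditions on the drifts):

```latex
\log \mathbb{E}\big[ e^{F(B)} \big] \;=\; \sup_{u} \, \mathbb{E}\Big[ F\Big( B + \int_0^{\cdot} u_s \,\mathrm{d}s \Big) - \frac12 \int_0^T |u_s|^2 \,\mathrm{d}s \Big],
```

where the supremum runs over drifts $u$ adapted to the filtration of the Brownian motion $B$. Any fixed choice of $u$ gives a lower bound, which is how the positivity of $\lambda(\tau)$ is extracted below.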
Proof Since the exact values of $\tau, \kappa > 0$ are irrelevant for this discussion, let us assume that $\tau = 1$ and $\kappa = \frac12$ to simplify the notation. Moreover, since the Lyapunov exponent does not depend on the initial condition, we will fix $u_0 = 1$. Finally, we will work only in the case of $\xi_{\mathrm{stat}}$ being space white noise, since the arguments we use work identically also under Assumption 2.1. We will use the Feynman-Kac representation of the solution: where under $\mathbb{E}^{Q_x}$ the process $B$ is a Brownian motion started in $B_0 = x$, while under $\mathbb{E}^{Q}$ the process $B$ is a Brownian motion started in the uniform measure on $\mathbb{T}$. Then the Boué-Dupuis formula [BD98] implies that, for any fixed realization of the noise $\xi$: with $X_t = B_t + \int_0^t u_s \,\mathrm{d}s$ and $\mathbb{H}^n_s$ the space of controls $u$ adapted to the filtration of $B$ such that $\int_0^n |u_s|^2 \,\mathrm{d}s < \infty$ and the law of $X_t$ is smooth for any $t \geq 0$. We observe that as long as the law of $X_t$ is smooth the above formula makes sense also for $\xi_{\mathrm{stat}}$ space white noise, so that the lower bound can be directly derived from the Boué-Dupuis formula for smooth potentials $\xi$ by approximation. Now, if we take $u = 0$ we immediately obtain non-negativity of $\lambda$, since where we used that the law of $B_t$ converges to the uniform measure on $\mathbb{T}$ for $t \to \infty$. To prove strict positivity we have to choose a slightly better control. Define, for $\varepsilon \in (0, 1)$ that will be chosen small later on and any $i \in \mathbb{N}$ Our aim will be to prove that the cost of the control and several error terms are of order $O(\varepsilon^2)$ (this is why the parameter $\varepsilon$ multiplies the drift): then a zeroth-order term will vanish in the limit by averaging and we will be left with a positive leading term of order $O(\varepsilon)$. With this aim in mind we rewrite the quantity in question as where $p_n$ is the solution to for all $i \in \mathbb{N} \cap [0, n]$, with initial condition $p_n(0, y) = 1$. Now we would like to use that $p_n$ converges to an invariant $p_\infty$ as $n \to \infty$. But for clarity let us first fix some notation. We may assume that the probability space
is as in Assumption 6.1: then we observe that $\overleftarrow{p}_n(\omega, i, \cdot)$ Under this time change we can view $\overleftarrow{p}_n(\omega, t)$ as the solution to on the interval $(i, i+1]$, and we observe that this definition makes sense for all $i \in \mathbb{Z} \cap (-\infty, n-1]$, with terminal condition $\overleftarrow{p}_n(n, y) = 1$. Now the one-force-one-solution principle in [Ros19, Theorem 3.4] (applied in the present time-reversed setting, observing that the solution map to (4.2) is strictly positive) guarantees the existence of a $\overleftarrow{p}_\infty((\omega_j)_{j \leq 0}, x)$ such that $\mathbb{P}$-almost surely lim sup for a deterministic constant $c > 0$ (here $d_H$ is Hilbert's projective distance as in Section 6). In particular we define $\overleftarrow{p}_\infty(\omega, i, x) = \overleftarrow{p}_\infty((\omega_{j+i})_{j \leq 0}, x)$. We observe once more, to avoid confusion, that the time arrow runs backwards when dealing with $\overleftarrow{p}_\infty$: namely $\overleftarrow{p}_\infty(\omega, 0)$ is the evolution under (4.2) of $\overleftarrow{p}_\infty(\omega, 1)$. Now, the synchronization principle in [Ros19, Theorem 3.4] (which is the same as the one-force-one-solution principle, only seen for fixed initial time instead of fixed terminal time) implies that for any fixed $\varepsilon \in (0, 1)$ and almost all $\omega \in \Omega$ there exists a random constant We remark that the constant $d(\vartheta^n \omega, \varepsilon)$ depends on $n$, but its law is invariant. Finally, let us denote with $S^\varepsilon_t(\omega_i) p$ the solution at time $t \in [-1, 0]$ to (4.2) with terminal condition $S^\varepsilon_0(\omega_i) p = p$, so that $\overleftarrow{p}_n(\omega, i) = S^\varepsilon_{-1}(\omega_i) \overleftarrow{p}_n(\omega, i+1)$. We can then rewrite (4.1) with the notation we have introduced so far to find We would like to replace $\overleftarrow{p}_n$ with $\overleftarrow{p}_\infty$, so let us further decompose the sum into We now have to treat all the error terms, as well as the cost of the control.
Step 1: Cost of the control. For the cost of the control we have to prove that lim sup Indeed we have

Step 2: Remainder term. Now let us consider the term appearing in (4.4). We further divide this sum into As for the first term, from the definition of $d(\vartheta^n \omega, \varepsilon)$ we have that for $i \leq n - d(\vartheta^n \omega, \varepsilon) - 1$: In addition, define, for some parameters $\alpha \in (1/2, 1)$, $\delta > 0$ with $\frac{\alpha + 1}{2} + \delta < 1$, where the previous inequality follows by Besov embedding. Then we find: and by the ergodic theorem the last quantity converges, as $n \to \infty$, to $\varepsilon^2 \,\mathbb{E}[\|\xi_{\mathrm{stat}}\|_\infty \cdot \eta]$: the latter is finite by Lemma 4.3, since $\mathbb{E} |\eta|^\sigma < \infty$ for any $\sigma \geq 1$. Instead, for the rest of the sum we bound, similarly to above, Then as an application of the ergodic theorem (see e.g. [Ros19, Lemma A.4]) we have lim In fact, by independence we can bound the above through where the first term is finite by Lemma 4.3. As for the last quantity inside the sum, denoting with $\mu$ the contraction constant of Theorem 6.2, we have, via Lemma 4.4: Here we have used independence as well as a Taylor expansion. To conclude, since $\overleftarrow{p}_0(0) = 1$ we observe that by Lemma 4.2, provided $\varepsilon$ is sufficiently small. We therefore deduce which is the desired bound. Hence we have obtained that lim sup

Step 3: First-order term. Finally, we consider (4.3). This term converges by the ergodic theorem, since, assuming for the moment that all the products are well defined. Here we used that $\mathbb{E} \langle \xi_{\mathrm{stat}}, 1 \rangle = 0$ and $\int_{\mathbb{T}} S^\varepsilon_s(\omega)[\overleftarrow{q}](x) \,\mathrm{d}x = 1$, since (4.2) is mass preserving. Now, define for $s \in [-1, 0]$: Then we claim that and uniformly over all $q \in \mathrm{Pr}$ Here $\frac34$ is an arbitrary number larger than $\frac12$, chosen so that the product with $\xi_{\mathrm{stat}}$ is well posed. To prove the above estimate we can use the Duhamel representation of the solution $S^\varepsilon_s$, $s \in [-1, 0]$, so that for sufficiently small $\delta > 0$ and by Lemma A.2, where we used that $\|S^\varepsilon_s(\omega) q\|_{L^1} = 1$ for $q \in \mathrm{Pr}$. So the claimed bound is proven.
Step 4: Conclusion. Overall, we have obtained that Since $\mathbb{E} \xi_{\mathrm{stat}}(x) = 0$, we can further reduce this, with the definition of $T^\varepsilon_s(\omega)$, to obtain In addition, by Lemma 4.2 and Lemma 4.4 we have that $\|\overleftarrow{q} - 1\|_\infty \lesssim \varepsilon$, so that following calculations similar to the ones above we finally conclude where in the last line we used the definition of $Z$ together with the fact that the heat semigroup is self-adjoint. We observe that the average appearing in the first-order term is strictly positive both under Assumption 2.3 and under Assumption 2.1 (in the latter case, because the field is chosen to be non-trivial). Hence sending $\varepsilon \to 0$ proves the desired result. ✷

In the previous result we required an approximation of the invariant measure $\overleftarrow{p}_\infty$ for small $\varepsilon$. This is the content of the next result.
Lemma 4.2 Under Assumption 6.1, and in the setting of either Assumption 2.1 or Assumption 2.3, consider $\overleftarrow{p}_\infty$ the invariant probability measure, in the sense of Proposition 6.3, of the equation for $\varepsilon \in (0, 1)$. Then for every $\sigma \geq 1$ there exists an $\varepsilon_0(\sigma) \in (0, 1)$ such that for all $\varepsilon \in (0, \varepsilon_0)$, with $d_H$ the Hilbert distance as in Section 6:

Proof Note that we should expect that $\overleftarrow{p}_\infty \to 1$ as $\varepsilon \to 0$, since the latter is the eigenfunction associated to the top eigenvalue of the heat equation. Therefore, let us denote with $S^\varepsilon_s(\omega)$ and $S_s = P_{-s}$ respectively the solution map to (4.5) and to the heat equation, started at time $0$ and computed at time $s < 0$. We will prove that Following the same arguments, one can prove that $1 - \mathbb{E}[\exp\{-\sigma d_H(\overleftarrow{p}_\infty, 1)\}] \lesssim_\sigma \varepsilon$, which together with the previous bound implies the desired result. To obtain our bound we compute: Now let $\mu = \mu(S_{-1}) \in (0, 1)$ be the contraction constant of the heat semigroup in the Hilbert distance, as in Theorem 6.2. Then Now, from the definition of $d_H$, it suffices to prove that We can then decompose, for $-1 \leq s \leq 0$, For the first term we thus have where we used that $\int_{\mathbb{T}} \overleftarrow{p}_\infty(x) \,\mathrm{d}x = 1$ and $C_1 > 0$ is a constant such that $p_1(x) \geq C_1$ for all $x \in \mathbb{T}$, with $p_t(x)$ the heat kernel at time $t > 0$. For the second term a Taylor expansion guarantees, for any fixed $\alpha \in (0, 1)$: with $C_1$ as above and $C_2, C_3 > 0$ deterministic, so that the fundamental solution $\Gamma_s(x, y)$ to (4.5) (i.e.
with initial condition $\Gamma_0(x, y) = \delta_x(y)$). That such constants exist follows as in the proof of Lemma 6.10, via the results of [PvZ20]. Then we can estimate (4.6) as follows, for some deterministic $C > 0$ for some $M(\mu, \sigma) > 0$, where in the last step we used a Taylor expansion. Now we observe that via Lemma A.2 and Lemma 4.3, for any $\delta > 0$ sufficiently small where in the third line we used Besov embeddings. Hence an application of Fernique's theorem as well as Lemma 4.3 guarantees the required estimate. ✷

Next we show a moment estimate for the solution map to (4.5).
Lemma 4.3 Let $S^\varepsilon_s p_0$ be the solution to (4.5) with initial condition $p_0 \in \mathrm{Pr}$ (defined as in Section 6) at time $s < 0$. Then for any $\sigma \geq 1$ and $\alpha \in (0, 2)$, $\delta > 0$ such that $\frac{\alpha}{2} + \delta < 1$

Proof First, we observe that by mass conservation $S^\varepsilon_s p_0 \in \mathrm{Pr}$ for all $s < 0$. The parameter $\delta > 0$ is arbitrarily small and chosen only because we will use the embedding $L^1 \subseteq \mathcal{C}^{-\delta}_1$. Now, let us first assume that $\alpha + \delta < 1$, so that using Duhamel's formula and Lemma A.2 we obtain where the regularity $-1$ is far from an optimal choice, but the associated norm is finite under both Assumption 2.1 and Assumption 2.3. Now one can iterate the bound to obtain the result for any $\alpha \in (0, 1)$: which implies the desired result. ✷

To conclude this section, let us note the following elementary result.
Lemma 4.4 Consider the distance $d_H$ as in Section 6 and let $\sinh(x) = \frac12 (e^x - e^{-x})$. Then, for any $f, g \in \mathrm{Pr}$:

Proof We have the upper bound $\frac{f(x)}{g(x)} - 1 \leq e^{\log(\max f/g)} - 1 \leq e^{d_H(f, g)} - 1$, and the lower bound $1 - \frac{f(x)}{g(x)} \leq 1 - e^{\log(\min f/g)} \leq 1 - e^{-d_H(f, g)}$. Combining the two bounds proves the claim. ✷

Behavior near ∞
In this section we provide a short proof of the convergence for τ → ∞ described in the main results.
A key tool will be Doob's H-transform, which has its roots in the Krein-Rutman theorem.
In particular, $\zeta(\omega) = \max \sigma(H(\omega))$. Finally, there exists an $\alpha > 1$ so that $\psi(\omega) \in C^\alpha(\mathbb{T})$.

Proof This is a simple consequence of the Krein-Rutman theorem and the strong maximum principle for parabolic equations. In fact, under both possible assumptions, for fixed $\omega$, $s \mapsto e^{s H(\omega)}$ is a compact semigroup on $C(\mathbb{T})$ (see e.g. Lemma 6.9 for the white noise case, which implies also the required regularity estimate). ✷

We can use the eigenvalue-eigenfunction pair $(\zeta, \psi)$ as just constructed to introduce Doob's H-transform, where $e^{t \widetilde{H}}$ is the semigroup associated to the Hamiltonian

Proof First we observe that by Lemma 5.1, since $\|\psi\|_{C^\alpha} < \infty$ for some $\alpha > 1$ and $\psi > 0$, the definition of $\widetilde{H}$ makes sense. The derivation of the H-transform is classical, but we provide the salient points for the reader. For any smooth $\varphi$ we have, from the definition of $\psi$: meaning that where $M_\psi$ is the operator defined by pointwise multiplication with $\psi$. In particular, we have the decomposition $e^{tH} u_0 = e^{t\zeta} e^{t M_\psi \widetilde{H} M_\psi^{-1}} u_0$. Now, since $M_{\psi^{-1}} = M_\psi^{-1}$, we eventually find $e^{tH} u_0 = e^{t\zeta} e^{t M_\psi \widetilde{H} M_\psi^{-1}} u_0 = e^{t\zeta} M_\psi e^{t \widetilde{H}} M_\psi^{-1} u_0 = e^{t\zeta} \psi e^{t \widetilde{H}} (u_0 / \psi)$, which concludes the proof. ✷

The next result is a moment estimate on the Hilbert distance (cf. Section 6) between $\psi$ and $1$, the latter intended as the constant function.
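In compact form, the identities of Lemma 5.2 read (with $M_\psi$ multiplication by $\psi$ and $\widetilde{H}$ the transformed Hamiltonian, in our notation):

```latex
H = \zeta + M_\psi \widetilde{H} M_\psi^{-1}, \qquad
e^{tH} u_0 = e^{t\zeta}\, M_\psi\, e^{t \widetilde{H}}\, M_\psi^{-1} u_0 = e^{t\zeta}\, \psi\, e^{t \widetilde{H}}(u_0/\psi),
```

so that the exponential growth rate of $e^{tH}$ is $\zeta$, up to the multiplicatively bounded factors $\psi$ and $e^{t\widetilde{H}}(u_0/\psi)$.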
Lemma 5.3 In the same setting as above, define

Proof Consider $v = \log(\psi)$. Since $\psi$ solves $\partial_x^2 \psi + \xi \psi - \zeta \psi = 0$, we obtain that $v$ solves Now we view $v$ as a periodic function on $\mathbb{R}$. We can choose $x_0 \in \mathbb{R}$ so that $\partial_x v(x_0) = 0$, and for every $x \in \mathbb{T}$ there exists a $z_+(x)$, and we can bound $|z_+(x)| \leq c$ for all $x$, for some $c > 0$. We find: with $\Xi$ a primitive of $\xi$ such that $\Xi(x_0) = 0$. Similarly we also find a $z_-(x) < x_0$ such that

Now we can bound max
To conclude we have to guarantee that $\mathbb{E}[|\zeta| + \|\Xi\|_\infty] < \infty$. Clearly the second term is bounded (under Assumption 2.3, $\Xi$ is a Brownian motion). That $\mathbb{E}|\zeta| < \infty$ follows, under Assumption 2.3, from Corollary 6.11 (under Assumption 2.1 one can use a simpler argument, through a maximum principle). ✷

The following result establishes the behaviour of $\lambda(\tau)$ for large $\tau$.
Proposition 5.4 In the setting of Lemma 2.6, with $\mu$ as in Lemma 5.3 and $\zeta$ as in Lemma 5.1 we have: (5.1) In particular lim

Proof The proof follows by representing the solution via Doob's H-transform and an application of maximum principles and the ergodic theorem. We indicate with $(\zeta_i, \psi_i)$ the eigenvalue-eigenfunction pair as in Lemma 5.1, associated to the Hamiltonian $H_i$ as in Definition 2.5. Furthermore, we restrict to considering the limit $\lim_{n \to \infty} \frac{1}{n\tau} \log \min_{x \in \mathbb{T}} u(n\tau, x)$; the extension to arbitrary $t$ is straightforward. For $n \in \mathbb{N}$, using Doob's transformation as in Lemma 5.2 we can represent $u(n\tau, x)$ by with $\widetilde{H}_i(\varphi) = \Delta \varphi + 2 \frac{\nabla \psi_i}{\psi_i} \nabla \varphi$.
Now, for any i ∈ N the semigroup $e^{t\widetilde H_i}$ is strictly positive: In addition, for any c ∈ R (we identify the constant c with the constant function c(x) = c), $e^{t\widetilde H_i} c = c$.
In particular, we observe that for any continuous ϕ: The last maximum principles provide the estimate: where $\mu_i = \log(\max_{x\in\mathbb{T}} \psi_i(x)) - \log(\min_{x\in\mathbb{T}} \psi_i(x))$. In particular, by the ergodic theorem, P-almost surely As for the upper bound in (5.1), it is a simple consequence of the analogous inequality for u(nτ, ·). ✷
We conclude by proving that the average top eigenvalue is positive.
Lemma 5.5 Consider $\xi_{stat}$ as in Assumption 2.1 or 2.3. Then
Proof Consider a smooth random function ψ : Ω → C^∞(T) such that We observe that it is always possible to construct such a ψ under both Assumption 2.1 and Assumption 2.3. We want to use the representation which follows from the fact that C^∞ is dense in the domain of the Hamiltonians in Definition 2.5. Then for α ∈ (0, 1) define so that, by the previous bound, since $\lim_{\alpha\to 0} c_\alpha = 1$, we obtain which proves the desired result. ✷
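The sandwich behind Proposition 5.4, namely that the finite-time Lyapunov estimate differs from the running average of the top eigenvalues $\zeta_i$ by at most the average oscillation $\mu_i$ of $\log\psi_i$ divided by τ, can be observed directly in a finite-dimensional toy model. The sketch below is our own discretization with arbitrary parameters (grid size, viscosity, block length, smooth noise); it iterates i.i.d. blocks $e^{\tau H_i}$ and checks the resulting deterministic inequality for the realized sample.

```python
import numpy as np
from scipy.linalg import expm

# Toy discretization of the tau-block dynamics: u(n*tau) = prod_i e^{tau H_i} u0.
N, kappa, tau, n = 32, 1.0, 1.0, 40
dx = 1.0 / N
x = np.arange(N) * dx
I_N = np.eye(N)
L = (np.roll(I_N, 1, 1) + np.roll(I_N, -1, 1) - 2 * I_N) / dx**2
rng = np.random.default_rng(1)

u = np.ones(N)          # u_0 = 1
acc = 0.0               # accumulated log-normalization
zetas, mus = [], []     # per-block top eigenvalue and oscillation of log(psi)
for _ in range(n):
    a, b = rng.normal(size=2)
    xi = a * np.cos(2 * np.pi * x) + b * np.sin(2 * np.pi * x)
    H = kappa * L + np.diag(xi)
    ev, V = np.linalg.eigh(H)
    psi = V[:, -1] * (1.0 if V[:, -1].sum() >= 0 else -1.0)
    zetas.append(ev[-1])
    mus.append(np.log(psi.max() / psi.min()))
    u = expm(tau * H) @ u
    c = u.max(); acc += np.log(c); u /= c   # renormalize to avoid overflow

lam_hi = (acc + np.log(u.max())) / (n * tau)
lam_lo = (acc + np.log(u.min())) / (n * tau)
zbar, mubar = np.mean(zetas), np.mean(mus)

# Finite-n analogue of (5.1): both Lyapunov estimates lie within mubar/tau of zbar,
# because each block multiplies min/max by at least/at most exp(tau*zeta -/+ mu).
assert abs(lam_hi - zbar) <= mubar / tau + 1e-8
assert abs(lam_lo - zbar) <= mubar / tau + 1e-8
```

The per-block inequality follows from the H-transform exactly as in the proof of Proposition 5.4: $e^{\tau \widetilde H_i}$ preserves constants and positivity, so it cannot shrink the minimum nor increase the maximum.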

Projective invariant measures
In this section we study the projective space and some of its fundamental properties. This space, endowed with Hilbert's projective distance, is a complete metric space (see e.g. [Ros19, Lemma 2.2]). Our purpose is to understand properties of the invariant measure associated to (1.1), when seen as a cocycle on Pr. For the needs of this section it will be convenient to work with product probability spaces (a stronger requirement than in Assumptions 2.1 and 2.3, but we can always modify the probability space Ω to meet this requirement).
Assumption 6.1 Under either Assumption 2.1 or Assumption 2.3, consider a probability space $(\Omega_{stat}, \mathcal{F}_{stat}, \mathbb{P}_{stat})$ supporting a random variable $\xi_{stat}$ with law as required by Assumption 2.1 or 2.3. Then assume that the probability space (Ω, F, P) is the following product space, endowed with the product sigma-algebra and the product measure: In this way every ω ∈ Ω is of the form $\omega = (\omega_i)_{i\in\mathbb{Z}}$, with $\omega_i \in \Omega_{stat}$, and we can assume that the maps $\xi^i_{stat}$ of Definition 2.4 are given by: Finally define the map ϑ : In this setting we associate to any strictly positive (meaning Aϕ(x) > 0 for all x ∈ T and all ϕ ∈ Pr) and bounded operator A : C(T) → C(T) a projective map The reason why we consider the distance $d_H$ is the following contraction property.
Theorem 6.2 (Birkhoff's contraction) If A is a strictly positive operator on C(T), then there exists a constant µ(A) ∈ [0, 1) such that In particular, a consequence of this theorem is the following result.
Proposition 6.3 Under Assumption 2.1 or Assumption 2.3, and in the setting of Assumption 6.1, for any τ > 0 there exists a unique map $z_\infty(\tau) : \Omega \to \mathrm{Pr}$ that satisfies either of the following for all ω ∈ Ω outside a ϑ-invariant null set: 1. (Synchronization) For any ϕ ∈ Pr: In addition, for every ω We refer the reader to [Ros19, Theorem 3.4] for a proof of the proposition above, and in general for a more detailed discussion also of Theorem 6.2. The "convergence in direction" that the previous proposition provides is useful to derive some classical statements regarding Lyapunov exponents. We start with a proof of Lemma 2.6.
Proof (of Lemma 2.6) From the subadditive ergodic theorem we have that the lim sup exists, since $\mathbb{E}\big[\sup_{0\le t\le 1} \log \int_{\mathbb{T}} u(t,x)\,dx \vee 0\big] < \infty$ by calculations simpler than those in Lemma 6.5 for regular noise and in Lemma 6.9 for white noise. Next, let us prove Equation (2.3), which shows that the lim sup is really a limit. Let $\widetilde\lambda \in [-\infty, \infty)$ be defined by the following limit (up to taking a subsequence of n, to ensure that the limit exists): where we have just rewritten the first term via a telescopic sum with: Then for almost all ω ∈ Ω there exist some b(ω), c(ω) > 0 such that We can thus rewrite the terms in the telescopic sum, for almost all ω ∈ Ω, as: so that, passing to the limit and using independence and the ergodic theorem: which is the required upper bound for (2.3). The lower bound follows analogously, so that $\widetilde\lambda(\tau) = \lambda(\tau)$ and (2.3) is proven. We are left with proving that λ(τ) > −∞. In the case of regular noise this follows by Furstenberg's formula and calculations similar to those in Lemma 6.9, while for white noise it is implied, for example, by Corollary 6.11. ✷
The following result instead establishes the continuity of the Lyapunov exponent.
Lemma 6.4 Under Assumption 2.1 or Assumption 2.3, and in the setting of Assumption 6.1, the map λ : (0, ∞) → R of Lemma 2.6 is continuous.
Proof It suffices to establish the continuous dependence on τ of Equation (2.3). First observe that for any σ ∈ (0, ∞), $\lim_{\tau\to\sigma} z_\infty(\tau) = z_\infty(\sigma)$ in distribution in Pr. Indeed, by Lemmata 6.5 and 6.10 the sequence $\{z_\infty(\tau)\}_{|\tau-\sigma|\le 1}$ is tight in C(T), and one can easily check that any limit point for τ → σ must satisfy the invariance property of Proposition 6.3 (for τ = σ). Since only $z_\infty(\sigma)$ satisfies this property, we deduce the required convergence. Using the independence of H(ω) and $z_\infty(\tau, \omega')$, together with Lemma 6.9 (and a similar but simpler result if the noise is regular), we also observe that lim in distribution. Then the claimed convergence holds by uniform integrability, observing that for any p ≥ 1 This follows from Corollary 6.11 under Assumption 2.3, and by analogous but simpler calculations under Assumption 2.1. ✷
Next we study some properties of the invariant measure that we will need for our results. In particular, we prove certain moment bounds and study the convergence of the behaviour of the measures for τ → 0. The results, as well as their proofs, will be slightly different under Assumption 2.1 and under Assumption 2.3. Therefore we distinguish the two settings, starting with the regular noise.
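Birkhoff contraction (Theorem 6.2) and the resulting synchronization (Proposition 6.3) can be illustrated with positive matrices acting on the positive cone. The following sketch is our own toy example (matrix size, entry range and iteration count are arbitrary choices): entries in [0.1, 1] give a projective diameter of at most log 100, hence a contraction factor of at most tanh(log(100)/4) < 0.82, and iterating i.i.d. positive operators merges any two initial directions.

```python
import numpy as np

def d_H(f, g):
    """Hilbert projective distance between strictly positive vectors."""
    r = f / g
    return float(np.log(r.max() / r.min()))

rng = np.random.default_rng(2)
n = 50
f, g = rng.uniform(0.5, 2.0, size=(2, n))

# Entries in [0.1, 1]: cross-ratios are at most 100, so the projective diameter
# of the image is at most log(100), giving a Birkhoff factor tanh(log(100)/4).
contraction = np.tanh(np.log(100.0) / 4)
dists = [d_H(f, g)]
for _ in range(10):
    A = rng.uniform(0.1, 1.0, size=(n, n))   # i.i.d. strictly positive operators
    f, g = A @ f, A @ g
    f, g = f / f.sum(), g / g.sum()          # project back to Pr (d_H unchanged)
    dists.append(d_H(f, g))
    assert dists[-1] <= contraction * dists[-2] + 1e-12  # Theorem 6.2

# Synchronization as in Proposition 6.3: the two directions have merged
# at least geometrically fast.
assert dists[-1] <= contraction**10 * dists[0] + 1e-12
```

In practice the observed contraction per step is much stronger than the worst-case Birkhoff factor, which is the mechanism behind the exponential synchronization used throughout this section.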

Moment estimates for regular noise
Now we concentrate on moment estimates and on the convergence, as τ → 0, of the invariant measure associated to (1.1). We start with the regular setting of Assumption 2.1.
Proof Let us fix τ > 0 and n ∈ N; then by Proposition 6.3, for almost every ω ∈ Ω To lighten the notation we avoid writing explicitly the dependence on ω, as long as no confusion can arise. Using a maximum principle we can bound the denominator by: while for the numerator we observe that where the latter is the solution to Let us write n(t) for the smallest integer such that $\tau n(t) \ge t$. Via Duhamel's formula Then by the Schauder estimates in Lemma A.2 we have, for any α ∈ (0, 2) and ε > 0 such that α + ε < 2: By a maximum principle, and since $z_\infty(\tau)$ Hence overall, for some C > 0, The first term is bounded by the exponential moment bound of Assumption 2.1. Similarly for the second term, since: This is not quite enough, since our aim is to bound $z_\infty(\tau)$ in $C^\alpha$ and not in $B^\alpha_{1,\infty}$. But we can iterate the argument by using the bound we just established, together with the Besov-Sobolev embedding From the previous tightness we can deduce the convergence for τ → 0.
Proposition 6.6 The following convergence holds in distribution, as a sequence of random variables with values in $C^\alpha(\mathbb{T})$, for any α ∈ (0, 2):
Proof We have already proven in Lemma 6.5 that the sequence $\{z_\infty(\tau)\}_{\tau\in(0,1)}$ is tight in $C^\alpha(\mathbb{T})$. To establish the limit as τ → 0, we observe that for any t > 0, with n(t) the smallest integer such that $\tau n(t) \ge t$: $z_\infty(\tau)$ Here $S_\tau(t)u_0$ is the solution at time t ≥ 0 to Equation (1.1) with $\xi = \xi_\tau$ and initial condition $u_0$, and is chosen to be independent of $z_\infty(\tau)$ on the right-hand side. Now let $z_\infty(0)$ be any limit point of the sequence $z_\infty(\tau)$ for τ → 0. Then by Lemma 6.7, and by the independence of $S_\tau$ and $z_\infty(\tau)$, for every t ≥ 0 there exists a subsequence Hence we conclude that $z_\infty(0)$ Since the Dirac measure in the function that is constantly 1 is the only invariant measure for $P_t$, we have proven our result. ✷
We conclude the subsection with two lemmata used in the previous proof. We start with an averaging result for the solution map to (1.1).
Using a Gronwall argument we conclude that sup In particular, we can now bound, by (6.2):
Since M(τ, p) → 0 in probability, our result follows. ✷
Finally, we establish some bounds for the solution to the linear equation.
Lemma 6.8 The process $I(\xi_\tau)$, defined for any τ > 0 by satisfies for any T > 0, any α ∈ (0, 2) and any p ≥ 2: Moreover, for any bounded sequence $(t_\tau)_{\tau\in(0,1)}$ of positive real numbers, we have
Proof We can rewrite the $B^\alpha_{p,p}$ norm as: therefore, by Fubini, our objective will be to bound $\mathbb{E}|\langle I(\xi_\tau)(t), K^x_j\rangle|^p$ uniformly over x ∈ T. As before, let n(t) be the smallest integer such that $\tau n(t) \ge t$. We can use Rosenthal's inequality [Pet95, Theorem 2.9], which applies because the sequence $\{\xi^i_{stat}\}_{i\in\mathbb{N}}$ is independent and $\mathbb{E} P_t \xi_{stat} = 0$, to bound for p ≥ 2 We observe that in addition the following estimate holds for any ε > 0 and α ∈ (0, 2) with $\frac{\alpha+\varepsilon}{2} < 1$: In particular, if we now define we obtain:
At this point, using the inequality $\sum_i |a_i|^p \le (\sum_i |a_i|)^p$ and (6.2), On the other hand, we can also bound which leads, following the previous steps, to the bound Now, interpolating between (6.3) and (6.4) delivers all the required results. ✷
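Rosenthal's inequality, used in the proof above, can be checked by hand at p = 4, where the fourth moment of a sum of independent centered variables expands exactly. The sketch below is our own toy example with uniform variables (the constant 3 is specific to p = 4 and to centered independent summands); it compares the exact expansion, a Monte Carlo estimate, and the Rosenthal-type bound.

```python
import numpy as np

# Rosenthal's inequality at p = 4 for independent centered X_i:
#   E S^4 <= C * ( sum_i E X_i^4 + (sum_i E X_i^2)^2 ),   S = sum_i X_i,
# and at p = 4 the left-hand side expands exactly:
#   E S^4 = sum_i E X_i^4 + 3 * sum_{i != j} E X_i^2 E X_j^2.
rng = np.random.default_rng(3)
n = 10
a = rng.uniform(0.5, 2.0, size=n)        # X_i ~ Uniform(-a_i, a_i)
m2 = a**2 / 3                            # E X_i^2
m4 = a**4 / 5                            # E X_i^4
exact = m4.sum() + 3 * (m2.sum()**2 - (m2**2).sum())  # E S^4 by expansion

# Monte Carlo check of the expansion.
X = rng.uniform(-1, 1, size=(200_000, n)) * a
mc = np.mean(X.sum(axis=1)**4)
assert abs(mc - exact) / exact < 0.05

# Rosenthal-type bound with C = 3 holds here.
assert exact <= 3 * (m4.sum() + m2.sum()**2)
```

The two terms on the right are exactly the two contributions that appear in the proofs above: the sum of individual p-th moments and the p/2-th power of the summed variances.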

Moment estimates for white noise
We now establish bounds similar to those of the previous subsection, in the case of space white noise. This will require somewhat more involved estimates. We start with a bound on the solution map.
Lemma 6.10 Under Assumption 2.3, for any τ > 0 let $z_\infty(\tau) : \Omega \to \mathrm{Pr}$ be defined as in Proposition 6.3. Then for any p ≥ 2, γ ∈ (0, 1)
Proof From the invariance of $z_\infty$ we know that for any t > 0, $z_\infty(\tau) = u(n(t)\tau)/\|u(n(t)\tau)\|_{L^1}$ in distribution, where $n(t) = \min\{n \in \mathbb{N} : \tau n \ge t\}$ and, for every ω ∈ Ω, u(ω) is the solution to the equation Hence our result follows from an upper bound on the moments of $\|u(n(t)\tau)\|_{C^{1/2+\gamma}}$ and a lower bound on the moments of $\|u(n(t)\tau)\|_{L^1}$, for some appropriately chosen t > 0, uniformly over τ. Here the point is that at any positive time the heat semigroup has smoothened the initial condition, from $L^1$ to $C^{1/2+\gamma}$, while at the same time our bounds show that the total mass may decrease at most by a factor $\exp(-C n(t)\tau |x|^2)$, where x is roughly some linear functional of the Gaussian noise and C > 0 is deterministic; so t needs to be small enough to ensure the integrability of negative moments of this quantity.
Step 1: Lower bound on $\|u\|_{L^1}$. Recall that by the strong maximum principle u(ω, τn(t), x) > 0 for all x ∈ T. We will show that for every ω ∈ Ω $$\int_{\mathbb{T}} u(\omega, \tau n(t), x)\,dx \ge C(\omega, t, \tau) \int_{\mathbb{T}} u(0, x)\,dx = C(\omega, t, \tau),$$ where we used that $\int_{\mathbb{T}} z_\infty(\tau, x)\,dx = 1$, and the crux of the argument will be that C(ω, t, τ) satisfies $\sup_{\tau\in(0,1)} \mathbb{E}\,|C(t,\tau)|^{-p} < \infty$ for certain combinations of p and t. In the following calculations we consider ω ∈ Ω fixed, so we omit writing the dependence on it. Let $I(\xi_\tau)(t) = \int_0^t P_{t-s}(\xi_\tau(s))\,ds$. Then we can decompose $u_t = e^{I(\xi_\tau)(t)} w_t$, with w the solution to By Lemma 6.12, $I(\xi_\tau)$ takes values in $C^{1+\gamma}$, for any γ ∈ (0, 1/2). Hence let us define By comparison we find that $w_t(x) \ge \underline{w}_t(x)$, with $\underline{w}$ the solution to We can write $\underline{w}(t, x) = \int_{\mathbb{T}} \Gamma_t(x, y) z_\infty(\tau, y)\,dy$, where $\Gamma_t$ is the fundamental solution to the previous PDE: with $\delta_y$ the Dirac delta function centered at y. Now one can find quantitative lower bounds on Γ in terms of the heat kernel, see e.g. [PvZ20, Theorem 1.1]. The quoted article considers the more complicated setting of a distribution-valued drift on infinite volume, but the same arguments show that where $C_1, C_2, \kappa > 0$ are deterministic constants and $p_t(x)$ is the periodic heat kernel. In particular, we obtain Overall, we have obtained that Now, by Lemma 6.12 there exists a σ(γ) > 0 such that for any p ≥ 1 and t (6.5)
Step 2: Upper bound on $\|u\|_{C^{1/2+\gamma}}$. Let us start by observing that for any ε > 0, q ≥ 1 $\sup_{\tau\in(0,\infty)}$ Hence we see that $\xi_\tau$ takes values in $L^q([0,1]; C^{-1/2-\varepsilon})$ for all ε > 0, q ≥ 1. In particular, for any t ∈ (0, 1) we can apply Lemma 7.1 to obtain, for all ε > 0 sufficiently small, where $C_3, C_4 > 0$ are deterministic constants and we allow $C_3$ to depend on t to incorporate the explosion at time t = 0. Now, by Besov embedding we have that B , so that we can follow the same argument on the interval [t/2, t] to obtain (up to increasing the value of $C_3, C_4$):
Since $\|u_0\|_{L^1} = 1$, and since for ε small both $\frac{4}{3-2\varepsilon} < 2$ and $\frac12 + \gamma$
Step 3: Conclusion. Now there exists a $\tau_*(p)$ such that for all $\tau \in (0, \tau_*(p))$ we have $\tau\, n(t_*(2p)/2) \le t_*(2p)$.
Then the results of the previous steps imply: To conclude the proof of the lemma we have to consider the case $\tau \in (\tau_*(p), \infty)$. Here we observe that in all the bounds of Steps 1 and 2 we did not use any other information on the initial condition, and $S_\tau$ is the solution map to Equation (1.1) with $\xi = \xi_\tau$, chosen independent of $z_\infty(\tau)$. Then we can follow verbatim the calculations above, using that $\|u_0\|_{L^1} = 1$, to obtain the required result. ✷
A consequence of this result is the following bound on the largest eigenvalue of the operator $\Delta + \xi_{stat}$.
Proof We can bound $\mathbb{E} e^{\sigma\gamma} < \infty$ for all σ > 0 by calculations similar to the upper bound presented in Step 2 of the proof of Lemma 6.10. In addition, there exists a $\sigma_*$ such that for all $\sigma \in (0, \sigma_*)$ we have $\mathbb{E} e^{-\sigma\gamma} < \infty$, by following the same arguments that lead to (6.5). ✷
The previous bound builds on the following estimate on the linear equation with additive noise.
Lemma 6.12 Consider $I(\xi_\tau)$ defined by Then for any γ ∈ (0, 1) and T > 0 there exists a σ(γ, T) > 0 such that:
Proof Our aim is to apply the Kolmogorov continuity criterion to control the time continuity of $I(\xi_\tau)$. We can decompose an increment of the process as: For any δ ∈ (0, 1) define $\zeta = \frac12 + \gamma + 2\delta$. Then by Lemma A.2: Now we can assume δ > 0 sufficiently small and p ≥ 2 sufficiently large, so that for some ε ∈ (0, 1) and $\zeta' = \frac32 - 3\varepsilon$ we have the continuous embedding B where $n(t) = \min\{n \in \mathbb{N} : \tau n \ge t\}$ and $K^x_j$ is as in (A.1). Next, since over i we have a sum of independent random variables, we can use Rosenthal's inequality [Pet95, Theorem 2.9] to bound, uniformly in x, j, similarly to the proof of Lemma 6.8: where we used the bound $\sum_i a_i^p \le (\sum_i a_i)^p$ for $a_i \ge 0$, together with the bound Putting all bounds together, we have proven that for p ≥ 1 sufficiently large there exists a δ′ > 0 such that which, together with the uniform bound (6.7), completes the proof of the result. ✷
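The mode-by-mode mechanism behind Lemma 6.12 can be made concrete: for noise that is white in space and piecewise constant in time, each Fourier mode of $I(\xi_\tau)(t)$ is a Gaussian whose variance is an explicit sum of squared block weights, bounded by $(\kappa k^2)^{-2}$ uniformly in τ. The sketch below is our own illustration with a toy normalization (unit torus, integer wavenumbers k in place of 2πk, standard Gaussian coefficient per block), verified against a Monte Carlo estimate.

```python
import numpy as np

# For the k-th Fourier mode of I(xi_tau)(t) = int_0^t P_{t-s} xi_tau(s) ds, each
# time block [i*tau, (i+1)*tau) contributes a deterministic weight
#   w_i = int_{block_i} exp(-kappa * k^2 * (t - s)) ds
# times an independent standard Gaussian, so Var = sum_i w_i^2.
kappa, tau, t = 1.0, 0.05, 1.0

def block_weights(k):
    n = int(np.ceil(t / tau))
    lo = np.arange(n) * tau
    hi = np.minimum(lo + tau, t)
    lam = kappa * k**2
    return (np.exp(-lam * (t - hi)) - np.exp(-lam * (t - lo))) / lam

results = []
for k in [1, 4, 16, 64]:
    w = block_weights(k)
    var_exact = float((w**2).sum())
    rng = np.random.default_rng(k)
    g = rng.normal(size=(50_000, len(w)))     # i.i.d. block coefficients
    var_mc = float((g @ w).var())             # Monte Carlo over noise samples
    results.append((k, var_exact, var_mc))
    assert abs(var_mc - var_exact) / var_exact < 0.05
    # sum w_i^2 <= (max_i w_i)(sum_i w_i) <= (kappa k^2)^{-2}, uniformly in tau:
    assert var_exact <= (1 / (kappa * k**2))**2 * (1 + 1e-12)
```

The $(\kappa k^2)^{-2}$ variance decay corresponds to amplitudes of order $k^{-2}$ per mode, which is the heuristic behind $I(\xi_\tau)$ taking values in $C^{1+\gamma}$ for γ < 1/2.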

An analytic estimate
In this section we prove an analytic bound that is useful to control the invariant projective measure associated to Assumption 2.3 uniformly over small τ .
Lemma 7.1 Consider α ∈ (0, 1) and T > 0. Let u be the unique solution to on [0, T] × T, with $\xi \in L^q([0, T]; C^{-\alpha})$ for all q ≥ 1, and with $u_0 \in C^{\beta_0}_p$, for some for a constant C = C(T, p, q, α, β, δ) > 0 independent of ξ and $u_0$. This estimate guarantees that sup , which tells us that by time $t_*/A^{1/\alpha_1}$ the heat semigroup has smoothened the initial condition and the regularity of the solution is now governed by the forcing. Following this idea, we bound the solution for times larger than $t_*/A^{1/\alpha_1}$ differently (assuming We can follow the previous steps with $\beta_0 = \beta$ and ζ = 0 to obtain, for q sufficiently large: with $\alpha_2 = \frac{1}{q'} - \frac{\alpha}{2}$. Now we would like to use Gronwall to obtain a bound that depends only on $v_0$ and A. But this would lead to an estimate of the kind: (see the discussion below). Since $\alpha_1 \simeq 1 - \frac{\beta+\alpha}{2}$ (for large q), this is not of the correct order for our result. Hence we have to take better care of the $\|v\|_{L^p_t}$ norm, to obtain roughly that the leading-order term above (for small t) is of order $t^{\alpha_2}$, which would lead to the required exponential bound. We find for any ε > 0: thus leading, for ε > 0 sufficiently small and q ≥ 1 sufficiently large, to the bound:
(7.4) Now, let us first work under the assumption From the definition of $\alpha_1$ we then find, provided q is sufficiently large: We can then obtain from (7.4), for t ∈ (0, 1): for some C(T, n) > 0. In particular, we can find a $t'_* > 0$ such that for $0 < t \le (t'_*/A^{1/\alpha_3}) \wedge 1$, and uniformly over A, one has (up to increasing the value of C(T, n)) Now, using the linearity of the equation and iterating this bound on small intervals of length $(t'_*/A^{1/\alpha_3}) \wedge 1$, one finds (once again up to increasing the value of C(T, n)) We can now use the definition of v, together with the small-times bound (7.1) on u, to deduce that (up to taking a larger n): where in the last step we chose a possibly larger C(T, n) and used that This concludes the proof of the result in the case $\beta \le \alpha + \delta/2$. We can build on this result to complete the proof of the bound on the spatial regularity. Fix any β ∈ (α, 2 − α); then, by the bound we just proved and (7.2), we have Hence, once more by a Gronwall-type argument, we obtain: Since $\frac{1}{\alpha_2} \le \frac{2}{2-\alpha-\delta}$ for q ≥ 1 sufficiently large, our claim now follows along the same arguments that led to (7.5), which completes the proof of the bound for the spatial regularity.
Step 2. Finally, the bound on the temporal regularity can be deduced from the spatial bound we just proved, by applying for instance [GP17, Lemma 6.6].✷
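The smoothing mechanism used throughout this section (and in the Schauder estimate of Lemma A.2) reduces, on the Fourier side, to the elementary bound $\sup_k e^{-\kappa k^2 t}|k|^\beta \le C(\beta)\, t^{-\beta/2}$: a gain of β derivatives costs $t^{-\beta/2}$. The sketch below is our own numerical check, with the sharp constant from calculus (β and κ are arbitrary choices).

```python
import numpy as np

# Heat-multiplier bound: the k-th Fourier mode of P_t u is damped by
# exp(-kappa k^2 t), so gaining beta derivatives costs t^{-beta/2}:
#   sup_k exp(-kappa k^2 t) k^beta <= C(beta) t^{-beta/2},
# with the sharp constant C = (beta / (2 e kappa))^{beta/2} obtained by
# maximizing over real k (the maximum sits at k^2 = beta / (2 kappa t)).
kappa, beta = 1.0, 1.5
k = np.arange(1, 100_000, dtype=float)
C = (beta / (2 * np.e * kappa))**(beta / 2)

sups = {}
for t in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    sups[t] = float((np.exp(-kappa * k**2 * t) * k**beta).max())
    # The discrete supremum is below the real-variable maximum:
    assert sups[t] <= C * t**(-beta / 2) * (1 + 1e-9)
```

Iterating this bound over dyadic blocks is exactly how the blow-up rates $t^{-(\beta-\beta_0)/2}$ in Lemma 7.1 arise.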

Stochastic estimates
In this section we establish the stochastic estimates necessary for the Taylor expansion of the Furstenberg formula near zero. The next result provides the key stochastic estimate for the proof of Lemma 3.1.
Lemma 8.1 In the setting of Assumption 2.1 and Definition 2.5, define for any τ ∈ (0, 1) and (ω, ω′) ∈ Ω × Ω the random variable ζ(τ, ω, ω′) as: where $z_\infty(\tau, \omega')$ is defined as in Proposition 6.3. Then there exists a γ > 0 such that:
Proof Step 1: Estimate for the first moment. First we take the expectation over ω′. Hence define and observe that $\overline{z}_\infty \in L^1(\mathbb{T})$, with $\overline{z}_\infty(\tau, x) \ge 0$ for all x ∈ T and, by Fubini, $\int_{\mathbb{T}} \overline{z}_\infty(\tau, x)\,dx = 1$. Then by integration by parts we obtain Now we use the Feynman–Kac formula to represent the semigroup $e^{sH(\omega)}(z)$. Here $\mathbb{E}^Q_x$ indicates the average with respect to a periodic Brownian motion $B_t$ started in x ∈ T, so that For the rest term we can use Taylor's formula to estimate And the average is finite, uniformly over s in a bounded set, by the moment bound in Assumption 2.1. Now we observe that the Lebesgue measure on T is invariant for $B_s$, so that, from the definition of $\overline{z}_\infty$, Next we would like to replace $\int_0^s \xi_{stat}(B_r)\,dr$ by $s\,\xi_{stat}(x)$. We follow two different approaches, depending on the regularity of $\xi_{stat}$. First assume that (2.1) holds. Then for any ε ∈ (0, 1/2) and s ∈ [0, 1] Hence, using that this quantity is finite, we obtain Instead, if we assume that (2.2) holds, then $\mathbb{T} = \bigcup_{i=1}^n A_i$, with $A_i$ disjoint intervals. In this case, for every x ∈ T there exists an i such that $x \in A_i$, and we can define $p(x) \in \partial A_i$ as the boundary point of $A_i$ nearest to x. Then At this point we can conclude the estimate on the first moment of ζ. Via (8.2), defining $\gamma = \alpha(\frac12 - \varepsilon)$, or via (8.3) with $\gamma = \frac12$ (depending on the assumption on the noise), together with the moment assumption on $\xi_{stat}$ and Lemma 6.5 for the moments of $z_\infty$, we obtain: where we have defined $\kappa(x, y) = \mathbb{E}[\xi_{stat}(x)\xi_{stat}(y)]$. This completes the estimate for the first moment of ζ.
Step 2: Estimate for the second moment. Let us fix any sequence z(τ) of functions such that for every τ > 0, z(τ) ≥ 0 with $\int_{\mathbb{T}} z(\tau, x)\,dx = 1$, and concentrate on estimating the following (later we will replace z(τ) by the random $z_\infty(\omega', \tau)$): Here F is defined by: $$F(x, y, s, r, \omega) = \xi_{stat}(\omega, x)\,[e^{sH(\omega)}(z(\tau))](x)\;\xi_{stat}(\omega, y)\,[e^{rH(\omega)}(z(\tau))](y),$$ and we can expand F similarly to the previous step: Hence if we define we obtain Altogether, we have found that Finally, replacing z(τ) with $z_\infty(\tau)$ and using Lemma 6.5, we obtain: With this the proof is essentially complete. The last step is proving the convergence lim which is a consequence of Proposition 6.6 and Lemma 6.5. ✷
The following result is instead essential for the proof of Lemma 3.2.
Then there exists a $\delta_* > 0$ such that for every $\delta \in (0, \delta_*)$ and p ≥ 1:
Proof Our aim will be to prove, for any p ≥ 1, that $\mathbb{E}_\xi$ $I(\xi)(s)$ $\sqrt{s} - \pi$ $e^{-(t-s)\kappa|k_2|^2}\hat\xi(k_2)\,ds$, with $\psi_0(k_1, k_2) = \sum_{|i-j|\le 1} \varrho_i(k_1)\varrho_j(k_2)$. Note that $\{\hat\xi(k)\}_{k\in\mathbb{Z}}$ is a set of complex Gaussians with covariance $\mathbb{E}\,\hat\xi(k_1)\hat\xi(k_2) = 1_{\{k_1+k_2=0\}}$. So we can write the Itô chaos decomposition for f: the first term on the right-hand side being a multiple stochastic integral in the sense of [Nua06, Section 1.1.2]. For our purposes, we can decompose Using this bound with $\varepsilon = \frac12 + \frac{\delta}{2}$, for some small δ > 0, we can bound (8.5) uniformly over t > 0 by where in the first step we have changed variables and in the second we have used that $|k_1 - k_2| \simeq |k_2| \ge |k_1|$ on the support of $\psi_0$. Now, since for δ > 0 the integral over $k_2$ is convergent, we are left with