Hydrodynamic limit fluctuations of super-Brownian motion with a stable catalyst

We consider the behaviour of a continuous super-Brownian motion catalysed by a random medium with infinite overall density under the hydrodynamic scaling of mass, time, and space. We show that, in supercritical dimensions, the scaled process converges to a macroscopic heat flow, and that the appropriately rescaled random fluctuations around this macroscopic flow are asymptotically bounded, in the sense of log-Laplace transforms, by generalised stable Ornstein-Uhlenbeck processes. The most interesting new effect we observe is an index jump from a 'Gaussian' situation to stable fluctuations of index 1 + γ, where γ ∈ (0, 1) is an index associated with the medium.


Introduction and main results
1.1. Motivation and background. Limit theorems under mass-time-space rescaling are an established tool for describing the long-term behaviour of infinite interacting spatial particle systems with mass preservation on average. A typical feature that can be captured by this means is the clumping behaviour of spatial branching processes in low dimensions: in some models, under a critical scaling, one observes convergence to a nontrivial field of isolated mass clumps. The spatial contraction allows one to get hold of large mass clumps in remote locations, and the index of mass rescaling serves as a measure of the strength of the clumping effect, quantifying the degree of intermittency. In some of these results a macroscopic time dependence can be retained, giving insight into the long-time development of the clumps. For a recent result in this direction, see Dawson et al. [DFM02].
In higher dimensions one does not expect clumping under mass-time-space rescaling, but rather convergence to a non-random mass flow, the hydrodynamic limit. In this case one can hope to gain a deeper understanding from the investigation of fluctuations around this limit. Such fluctuations were studied by Holley and Stroock [HS78] and Dawson [Daw78], and their results were later refined and extended, e.g. by Dittrich [Dit87]. There is also a large body of literature on hydrodynamic limits of interacting particle systems; see e.g. [DMP91, KL99, Spo91]. Our main motivation in this paper is to investigate the possible effects on fluctuations around the hydrodynamic limit when the original process is influenced by a random medium, which in our model acts as a catalyst for the local branching rates.
In Dawson et al. [DFG89], fluctuations under mass-time-space rescaling were derived for a class of infinite spatial branching particle systems in R^d (with symmetric α-stable motion and (1 + β)-branching) in supercritical dimensions in a random medium with finite overall density. This leads to generalized Ornstein-Uhlenbeck processes which are the same as for the model in the constant (averaged) medium. In other words, for the log-Laplace equation the governing effect is homogenization: after rescaling, the equation approximates an equation with homogeneous branching rate; the medium is simply averaged out. The nature of the fluctuations in the case of a medium with infinite overall density remained unresolved over the years.
The purpose of the present paper is to make progress in this direction. Our main result shows that a medium with infinite overall density can have a drastic effect on the fluctuation behaviour of the model under critical rescaling in supercritical dimensions: homogenization is no longer the effect governing the macroscopic behaviour. In fact, despite the infinite overall density of the medium, we still have a law of large numbers under a certain mass-time-space rescaling. But under this scaling, the variances (given the medium) blow up, and the related fluctuations do not obey a central limit theorem. They can, however, be described to some degree by a stable process.
To be more precise, we start with a branching system with finite variance given the medium, considered as a branching process with a random law, where the randomness of the laws comes from the randomness of the medium (quenched approach). Under a mass-time-space rescaling, the random laws of the fluctuations are asymptotically bounded from above and below by the laws of constant multiples of a generalized Ornstein-Uhlenbeck process with infinite variance. Here the ordering of random laws is defined in terms of the random Laplace transforms. The generalized Ornstein-Uhlenbeck process is the same as the fluctuation limit of a super-Brownian motion with infinite variance branching in the case of a constant medium. In fact, the branching mechanism is (1 + γ)-branching, where γ ∈ (0, 1) is the index of the medium. Altogether, the present result is a big step towards an affirmative answer to the old open problem of understanding fluctuations in the case of a random medium with infinite overall density. It also exhibits random medium effects which are in line with what is known about the clumping behaviour in subcritical dimensions, as in [DFM02].
For f : R^d → R, set (2) |f|_λ := ‖f/φ_λ‖_∞, where ‖·‖_∞ refers to the supremum norm. Denote by C_λ the separable Banach space of all continuous functions f : R^d → R such that |f|_λ is finite and f(x)/φ_λ(x) has a finite limit as |x| → ∞. Introduce the space C_exp = C_exp(R^d) of (at least) exponentially decreasing continuous test functions on R^d. An index +, as in R_+ or C^+_exp, refers to the corresponding non-negative members. Let M = M(R^d) denote the set of all (non-negative) Radon measures µ on R^d, and d_0 a complete metric on M which induces the vague topology. Introduce the space M_tem = M_tem(R^d) of all measures µ in M such that ⟨µ, φ_λ⟩ := ∫ dµ φ_λ < ∞ for all λ > 0. We topologize this set M_tem of tempered measures by the metric (4) d_tem(µ, ν) := d_0(µ, ν) + Σ_{n≥1} 2^{−n} (|µ − ν|_{1/n} ∧ 1), for µ, ν ∈ M_tem.
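In the form standard for this literature (and assumed throughout our sketches; in particular, the choice of reference function φ_λ is our assumption), these definitions can be written as:

```latex
\phi_\lambda(x) := e^{-\lambda |x|}, \qquad
|f|_\lambda := \bigl\| f/\phi_\lambda \bigr\|_\infty, \qquad
\langle \mu, \phi_\lambda \rangle := \int \phi_\lambda \,\mathrm d\mu,
\qquad
d_{\mathrm{tem}}(\mu,\nu) := d_0(\mu,\nu)
  + \sum_{n=1}^{\infty} 2^{-n} \bigl( |\mu-\nu|_{1/n} \wedge 1 \bigr).
```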
Here |µ − ν|_λ is an abbreviation for ⟨µ, φ_λ⟩ − ⟨ν, φ_λ⟩. Note that M_tem is a Polish space (that is, (M_tem, d_tem) is a complete separable metric space), and that µ_n → µ in M_tem if and only if µ_n → µ vaguely and ⟨µ_n, φ_λ⟩ → ⟨µ, φ_λ⟩ for all λ > 0. Probability measures will be denoted by variants of the letter P, whereas the corresponding variants of E and Var refer to the related expectation and variance symbols.
Write W = (W, (F_t)_{t≥0}, P_x, x ∈ R^d) for the (standard) Brownian motion in R^d with its natural filtration, and S = {S_t : t ≥ 0} for the related semigroup. Quantities depending on time t, such as p_t, S_t, or solutions u(t, ·), are formally set to 0 if t < 0. Let ℓ denote the Lebesgue measure on R^d. Write B(x, r) for the closed ball around x ∈ R^d with radius r > 0. In this paper, G denotes the Gamma function.
With c = c(q) we always denote a positive constant which (in the present case) might depend on a quantity q and might change from place to place. Moreover, an index on c, as in c_(#) or c_#, indicates that this constant first occurred in formula line (#) or (for instance) in Lemma #, respectively. We apply the same labelling rules to parameters like λ and k.
1.3. Modelling of catalyst and reactant. Of course, there is some freedom in choosing the model we want to work with. To avoid unnecessary limit procedures, we work on R d and with continuous-state branching as the branching system, namely with continuous super-Brownian motion, which is a spatial version of Feller's branching diffusion. The branching rate of an intrinsic 'particle' varies in space and in fact is selected from a random field to be specified. In this context, it is convenient to speak also of the random field as the catalyst, and of the branching system given the random medium as the reactant.
First we want to specify the catalyst. In our context, a very natural choice is to start from a stable random measure Γ on R^d with index γ ∈ (0, 1), determined by its log-Laplace functional (7). (The letter P always stands for the law of the catalyst, whereas P is reserved for the law of the reactant given the catalyst.) See, for instance, [DF92, Lemma 4.8] for background concerning Γ. Clearly, Γ is a spatially homogeneous random measure with independent increments and infinite expectation. Γ also has a simple scaling property, where L= refers to equality in law. However, Γ is a purely atomic measure; hence its atoms cannot be hit by a Brownian path or a super-Brownian motion in dimensions d ≥ 2. Thus, Γ cannot serve directly as a catalyst for a non-degenerate reaction model based on Brownian particles in higher dimensions. Therefore we pass to the density function (9) obtained by smearing out Γ by the (non-normalized) function ϑ_1, where ϑ_r := 1_{B(0,r)}, r > 0. In the sequel, the unbounded function Γ_1 with infinite overall density will play the rôle of the random medium: it will act as a catalyst that determines the spatially varying branching rate of the reactant. Once again, smoothing is needed, since otherwise the medium would not be hit by an intrinsic Brownian reactant particle. In our proofs, the independence and scaling properties of Γ will be advantageous, though one would expect analogous results to hold for quite general random media with infinite overall density. Consider now the continuous super-Brownian motion X = X[Γ_1] in R^d, d ≥ 1, with random catalyst Γ_1. More precisely, for almost all samples Γ_1, this is a continuous time-homogeneous Markov process X = X[Γ_1] = (X, P_µ, µ ∈ M_tem) with log-Laplace transition functional (10), where u solves the log-Laplace equation (11) with initial condition u(0, ·) = ϕ. Here ̺ > 0 is an additional parameter (for scaling purposes).
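For orientation, a γ-stable random measure with independent increments is standardly characterised as follows; this sketch is our reconstruction of (7) and (9), up to normalising constants, and also records the scaling property referred to above:

```latex
-\log \mathbf{E}\, e^{-\langle \Gamma, \varphi \rangle}
   = \int_{\mathbb{R}^d} \varphi^{\gamma}(x)\,\mathrm dx,
   \qquad \varphi \ge 0,
\qquad
\Gamma_1(x) := \bigl\langle \Gamma, \vartheta_1(x - \cdot) \bigr\rangle
   = \Gamma\bigl(B(x,1)\bigr),
\qquad
\Gamma(kB) \overset{\mathcal{L}}{=} k^{d/\gamma}\,\Gamma(B)
   \quad (B \subseteq \mathbb{R}^d \text{ Borel},\ k>0).
```

The scaling relation is immediate from the log-Laplace functional: both sides have log-Laplace value θ^γ k^d ℓ(B) at θ ≥ 0.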
For background on super-Brownian motion we recommend [Daw93], [Eth00], or [Per02]; for a survey on catalytic super-Brownian motion, see e.g. [DF02] or [Kle00]. From Dawson and Fleischmann [DF83, DF85] the following dichotomy concerning the long-term behaviour of X is basically known (although there the phase space is Z^d and the processes run in discrete time): starting from the Lebesgue measure X_0 = ℓ, the process X dies locally in law as t ↑ ∞ if d ≤ 2/γ (recall that 0 < γ < 1 is the index of the random medium Γ_1), whereas in all higher dimensions one has persistent convergence in law to a non-trivial limit state denoted by X_∞. From now on, we restrict our attention to (supercritical) dimensions d > 2/γ.
We are interested in the large scale behaviour of X.
1.4. Main results of the paper. Introduce the scaled processes X_k, k > 0, defined in (13). This hydrodynamic rescaling leaves the underlying Brownian motions invariant (in law), and the expectation of the scaled process is the heat flow: in particular, if X is started with the Lebesgue measure ℓ, the expectation is preserved in time. We also define the critical scaling index κ_c as in (14). The refined law of large numbers (15) is actually a by-product of the proofs of our main result, as will be explained immediately after Proposition 14.
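Our reading of the rescaling (13) and of the critical index (14), consistent with the exponent computations in Sections 1.5 and 3 below (a reconstruction, not the original displays), is:

```latex
X_k(t)(B) := k^{-d}\, X(k^2 t)(kB), \qquad k>0,\ B \subseteq \mathbb{R}^d \text{ Borel},
\qquad\qquad
\kappa_{\mathrm c} := \frac{d\gamma - 2}{1+\gamma}.
```

Note that κ_c > 0 exactly when d > 2/γ, matching the supercritical regime fixed at the end of Section 1.3.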
In contrast to [DFG89], in the present paper we use Laplace transforms instead of Fourier transforms, although the fluctuations we are interested in are signed objects. This is possible since these fluctuations are themselves deviations from the non-negative X_k, and the related stable limiting quantities have skewness parameter β = −1, for which Laplace transforms are meaningful.
Explicit values of both constants c (from the upper and the lower bound) are given in (62). Z is the process with independent increments with values in the Schwartz space such that, for 0 ≤ s ≤ t and ϕ ∈ C^+_exp, the increments have the stated log-Laplace transforms. It appeared earlier as the hydrodynamic fluctuation limit process corresponding to super-Brownian motion with finite mean branching rate, but with infinite variance (1 + γ)-branching. Recall that the Markov process Y has log-Laplace transition functional with initial condition v(0, ·) = 0.
In particular, in our limit procedure the finite variance property of the original process given the medium is lost and, by a subtle averaging effect, an index jump of size 1 − γ > 0 occurs. ✸ Remark 4 (Ordering). The stochastic ordering of the random laws in our asymptotic bounds (18) and (19) is well known in queueing and risk theory; see [MS02] for background. ✸ Remark 5 (Existence of a fluctuation limit). Theorem 2 leaves open whether a fluctuation limit exists in P-probability and whether it is a generalised Ornstein-Uhlenbeck process as described above. ✸ Remark 6 (Variance considerations). In the case µ_k ≡ ℓ, for ϕ ∈ C_exp, the P-random variance (24) carries the prefactor 2̺ k^{2κ−2d}. Hence, for κ satisfying (26), implying γ ∈ (1/2, 1) and d > 2γ/(2γ − 1), the random variances (24) converge to zero as k ↑ ∞, yielding the refined law of large numbers (15), whereas for κ > κ_var these variances explode. Note that κ_var < κ_c, since (γ − 1)(d − 2γ) < 0. Therefore a quenched variance consideration as in (24) can only imply statement (15) in the restricted case (26). Of course, annealed variances are infinite already for fixed k, as follows from (24). ✸

1.5. Heuristics, concept of proof, and outline. For this discussion we first focus on the case n = 1 in Theorem 2. From (10), (11), and scaling, the rescaled log-Laplace function u_k solves (28) with initial condition u_k(0, ·) = k^κ ϕ.
Consider now the critical scaling κ = κ_c. By our claims in Theorem 2, f_k should be asymptotically bounded in P-law by solutions v of (31) for different values of c. Consequently, in a sense, we have to justify the transition from equation (30) to the log-Laplace equation (31) corresponding to the limiting fluctuations; recall (23). Here the fields x ↦ Γ_1(kx) entering into equation (30) are random homogeneous fields with infinite overall density, and the solutions f_k depend on Γ_1. But the most fascinating fact here seems to be the index jump from 2 to 1 + γ which occurs when passing from (30) to (31). Unfortunately, we are unable to explain this via an individual ergodic theorem acting on the (ergodic) underlying random measure Γ.
We take another route. For the heuristic exposition, we simplify as follows. First of all, we restrict our attention to the case ϕ(x) ≡ θ, corresponding to total mass process fluctuations. Clearly, we have the following domination. If we replace one of the factors u_k(t, x) in the non-linear term of (28) by k^κ S_tϕ(x) ≡ k^κ θ and denote the solution of the new equation with the same initial condition by w_k, then u_k ≥ w_k, and we can calculate w_k explicitly by the Feynman-Kac formula (33). For the upper bound (18), we may work with w_k instead of u_k. It suffices to show that ⟨µ, k^κ S_tϕ − w_k(t, ·)⟩ converges to ⟨µ, v(t)⟩ in L²(P), where v solves (31) with the constant c from (18). We therefore show that the P-expectations converge and that the P-variances go to 0. In these heuristics we concentrate on the convergence of E-expectations only, and we simplify by assuming µ = δ_x (although this is formally excluded in the theorem by (17) if d > 3). We then have to show (34). By definition (9) of Γ_1 and (7) of Γ, the left hand side of (34) can be rewritten; we may additionally introduce the indicator 1_{τ≤t}, where τ = τ^z_{1/k}[W] denotes the first hitting time of the ball B(z, 1/k) by the path W starting from x, and continue accordingly. Now we look at the E_x-expectation of the exponent term. As the probability of hitting the small ball B(z, 1/k) is of order k^{2−d}, and the time spent afterwards in the ball is of order k^{−2}, the expectation of the exponent term is of order k^{(−d+κ)γ+2} = k^{−κ}, converging to zero as k ↑ ∞. Heuristically, this justifies the use of the approximation 1 − e^{−x} ≈ x. Note that the leading factor k^κ is then cancelled, and we arrive at a constant multiple of θ^{1+γ}. According to this simplified calculation, the index jump has its origin in an averaging of exponential functionals of Γ [as in (7)], generating a transition from θ to θ^γ. Note that the smallness of the exponent is largely due to the presence of the indicator of {τ ≤ t}.
This fact is also behind our estimates of variances in Section 3.3.
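The exponent bookkeeping in this heuristic can be checked mechanically. The following sketch assumes our reading κ_c = (dγ − 2)/(1 + γ) of the critical index (14); it verifies the identity (κ − d)γ + 2 = −κ at κ = κ_c, and that κ_c > 0 exactly in the supercritical regime d > 2/γ:

```python
from fractions import Fraction as F

def kappa_c(d, gamma):
    # Critical scaling index; our reading of definition (14).
    return (d * gamma - 2) / (1 + gamma)

# Illustrative supercritical example: gamma = 1/2, so d > 2/gamma = 4.
gamma, d = F(1, 2), F(5)
kappa = kappa_c(d, gamma)

# The exponent of the heuristic term k^{(kappa - d)*gamma + 2} equals -kappa,
# so the leading factor k^kappa is cancelled exactly at the critical index.
assert (kappa - d) * gamma + 2 == -kappa

# kappa_c is positive precisely in the supercritical dimensions d > 2/gamma.
assert kappa > 0
assert kappa_c(F(4), gamma) == 0  # boundary case d = 2/gamma
```

Exact rational arithmetic avoids floating-point artefacts; the cancellation checked here is what leaves the constant multiple of θ^{1+γ} in the heuristic above.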
We recall that the simplification of replacing u_k by w_k, which we used in the upper bound, is basically a linearization of the problem; that is, we pass from the non-linear log-Laplace equation (28) to the linear equation with the same initial condition k^κ θ.
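For ϕ ≡ θ, the linearized equation can be solved explicitly. Assuming that the scaled log-Laplace equation (28) carries the quadratic term ̺ k^{2−d} Γ_1(k·) u_k² (our reading of the scaling in Section 1.5), the Feynman-Kac formula (33) should take the form:

```latex
w_k(t,x) \;=\; k^{\kappa}\theta\;
E_x \exp\Bigl( -\varrho\,\theta\, k^{\kappa+2-d}
   \int_0^t \Gamma_1(kW_s)\,\mathrm ds \Bigr).
```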
In the case of a catalyst with finite expectation, as in [DFG89], this linearization was a key step in deriving the limiting fluctuations: the difference between u_k and w_k was asymptotically negligible. But in the present model of a catalyst with infinite overall density this is no longer the case. In fact, u_k(t, x) − w_k(t, x) does not converge to 0 in P-probability. Therefore, our upper bound is not sharp. For the lower bound, we replace u_k² in (28) by w_k² and denote the solution of the new equation with the same initial condition by m_k. Inserting for w_k the Feynman-Kac representation (33), we arrive at an explicit expression. Similarly as above, we then show the corresponding convergence statement for m_k. The structure of the remaining paper is as follows: after some basic preparations, in Section 3 we concentrate on the upper bound, whereas the lower bound follows in Section 4.

Preparation: Some basic estimates
In this section we provide some simple but useful tools for the main body of the proof. For basic facts on Brownian motion, see, for instance, [RY91] or [KS91].
2.1. Simple estimates for the Brownian semigroup. We frequently use the argument (based on the triangle inequality) that, for η > 0 and s > 0, there exists c_(37) = c_(37)(η, s) such that (37) holds for all x. For a while, let t > 0 and ϕ ∈ C^+_exp. Recall that (s, x) ↦ S_sϕ(x) is uniformly continuous; hence for any ε > 0 one may choose δ > 0 such that (38) holds for r, s ∈ [0, t] and x, y ∈ R^d. For convenience we expose the following simple fact.
Lemma 8. Let d ≥ 5. Then, for some constant c_8 and all x, y ∈ R^d, the stated bound holds. Proof. Clearly, one can use the definition of the Green function as an integral of the transition densities. Interchanging integrations, using Chapman-Kolmogorov, substituting, and interchanging again gives the result, with ι any point on the unit sphere. The latter integral is finite since d > 4, finishing the proof.
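The chain of manipulations in this proof can be sketched as follows (our reconstruction; here G(x, y) := ∫_0^∞ dt p_t(x, y) denotes the Green function):

```latex
\int_{\mathbb{R}^d} \mathrm dz\, G(x,z)\,G(z,y)
 \;=\; \int_0^\infty\!\!\int_0^\infty \mathrm ds\,\mathrm dt\; p_{s+t}(x,y)
 \;=\; \int_0^\infty \mathrm dr\; r\, p_r(x,y)
 \;=\; |x-y|^{4-d} \int_0^\infty \mathrm du\; u\, p_u(\iota),
```

with ι any point on the unit sphere (substitute r = |x − y|² u). Since u p_u(ι) is of order u^{1−d/2} as u → ∞, the last integral is finite precisely for d > 4.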
In dimension four, the situation is slightly more involved.
Lemma 9. Let d = 4 and λ > 0. Then, for some constant c_9 = c_9(λ) and all x, y ∈ R^4, the bound (44) holds. Proof. If |x − y| > 2, then the left hand side of (44) is bounded in x, y. In fact, for z in a unit sphere around a singularity, say x, we use |z − y| ≥ 1 and (40); outside both unit spheres, the integrand is bounded by φ_λ. Now suppose |x − y| ≤ 2. We may also assume that x ≠ y. As in the proof of Lemma 8, the left hand side of (44) leads to the corresponding integral. First we restrict the integrals additionally to s, t ≤ |x − y|^{−1}. In this case, we drop φ_λ(z), use Chapman-Kolmogorov, substitute, and interchange the order of integration to get the stated bound; to see the last step, split the integral at t = 1. To finish the proof, by symmetry in x and y, it suffices to consider (48). Plugging (48) into (47) and using the Green function again gives a bound which is finite by (40).

2.2. Brownian hitting and occupation time estimates.
Further key tools are the asymptotics of hitting times of small balls. Recall that τ = τ^z_{1/k}[W] denotes the first hitting time of the closed ball B(z, 1/k) by the Brownian motion W started in x. The following results are taken from [LG86]; see formula (0a) and Lemma 2.1 there.
Lemma 10 (Hitting time asymptotics and bounds). Suppose d ≥ 3. Then the following results hold.
(a) There is a constant c_(50), depending only on the dimension d, such that (50) holds. (b) There are constants c_(51) and λ_(51) > 0, depending on d and t > 0, such that (51) holds for x, z ∈ R^d. (c) The convergence (52) holds uniformly whenever |x − z| is bounded away from zero. (d) There are constants c_(53) and λ_(53) > 0, depending on d and t, such that (53) holds for x, z ∈ R^d. The following lemmas are all consequences of Lemma 10.
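Behind part (a) lies the classical formula for the probability that transient Brownian motion ever hits a small ball, which we record for orientation (a standard fact; the precise form of (50) in the original may differ by a time restriction):

```latex
P_x\bigl( \tau^z_{1/k}[W] < \infty \bigr)
 \;=\; \Bigl( \frac{1/k}{|x-z|} \Bigr)^{d-2}
 \;=\; k^{2-d}\,|x-z|^{2-d},
 \qquad |x-z| \ge 1/k,\ d \ge 3,
```

so hitting probabilities of B(z, 1/k) are of order k^{2−d}, as used in the heuristics of Section 1.5.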
Lemma 11. Let d ≥ 3. Fix ϕ ∈ C^+_exp, η ≥ 0, and t > 0. Then there are constants c_11 and λ_11 such that the stated bound holds for x, z ∈ R^d. Proof. Initially, let ϕ be any non-negative function. Using the strong Markov property at time τ, we obtain a first bound. Note that the right hand side is independent of k, z, y (in the considered range of y), and finite, since in d ≥ 3 all such moments are finite. Consequently, there is a constant c such that g(r, y) ≤ c. If now ϕ ∈ C^+_exp, then by the strong Markov property at time τ, using (39) in the second step, we obtain (58). By (51), we get (59). The result follows by combining (58) and (59).
for all x, z ∈ R^d and k ≥ 1.
for all x ∈ R^d and k ≥ 1.
3. Upper bound: Proof of (18)

3.1. Anderson model with stable random potential. As motivated in Section 1.5, we look at the mild solution of the linear equation (66). This is an Anderson model with a time-dependent scaled stable random potential. We study its fluctuation behaviour around the heat flow: Proposition 13 (Limiting fluctuations of w_k). Under the assumptions of Theorem 2, if κ = κ_c, then for any ϕ ∈ C^+_exp and t ≥ 0, in P-probability, the stated convergence holds, where the constant c = c(γ, ̺) is given in (62), with ı any point on the unit sphere of R^d.
To see how the case n = 1 of (18) follows from Proposition 13, we fix a sample Γ. For ϕ ∈ C^+_exp, we use a convenient abbreviation. Formulas (10) and (12) give the required identity. Recall that this can be rewritten in Feynman-Kac form as (67); using the Feynman-Kac representation, we arrive at the desired comparison. Hence, the case n = 1 of (18) follows from Proposition 13. Proposition 13 is proved in two steps: in Section 3.2 we show that the expectations converge, and in Section 3.3 that the variances vanish asymptotically.

3.2. Convergence of expectations.
Proposition 14 (Convergence of expectations). Let κ = κ_c. There exists λ_14 > 0 such that for every ε > 0 there is a k_14 = k_14(ε) > 0 with the stated property. Theorem 1 immediately follows from this proposition. Indeed, turning back to the situation κ < κ_c, note from (64), which holds for general κ, that it suffices to show that the right hand side converges to zero in L¹(P). Here we use (69), where w_k from (68) is defined using the critical index κ_c. By Proposition 14, which does not require the finiteness of the energy, the expectation on the right remains bounded, implying the statement.
The rest of this section is devoted to the proof of this proposition. Recall that κ equals κ_c, which is defined in (14). The proof is prepared by six lemmas. In all these lemmas, τ = τ^y_{1/k}[W] denotes the first hitting time of the ball B(y, 1/k) by the Brownian motion W, and π_x the law of τ under P_x. Lemma 15. There exists a constant c_15 > 0 such that the stated bound holds. Proof. Note that, for any ι ∈ ∂B(0, 1), by Brownian scaling, (74) holds. We now use ϕ ≤ c, Jensen's inequality, (74), (51), and (37) to get the claimed estimate. This is the required statement.
Proof. Using Brownian scaling in the second step, and substitution and (39) in the last step, we estimate the expression. To study the double integral, denote by τ_1, τ_2 the first hitting times of the balls B(y_1, 1) and B(y_2, 1), respectively, by the Brownian path W. Pick p > 1 such that 2d + 2(2 − d)/p < 4 + δ, and q such that 1/p + 1/q = 1. We then apply Hölder's inequality. For the second factor on the right hand side we use Cauchy-Schwarz and the maximum principle to pass from y_i to 0. Recall from Lemma 12(a) that in d ≥ 3 the total occupation times of Brownian motion in the unit ball have moments of all orders. Hence the latter expectation is finite.
Lemma 17. For all ε > 0 there exist δ = δ(ε) > 0 and k_17 = k_17(ε) > 0 such that the stated bound holds. Proof. For any δ, M > 0 we have the decomposition into (82a) and (82b). We look at (82b) and choose M such that this term is small. Indeed, the inner expectation in (82b) can be made arbitrarily small (simultaneously for all k and y) by choice of M. Hence we can use (51) to see that this term can be bounded by εφ_{γλ_7}(x) for all sufficiently large k, by choice of M (and independently of δ). We look at (82a) and choose δ > 0 such that (83) holds. The term (82a) can then be bounded from above by (84). By (52) there exist A ⊂ R^d and k_17 ≥ 0 such that the required bound holds for all x − y ∈ A and k ≥ k_17, and (86) ∫_{A^c} dz (|z|^{2−d} + 1) exp(λ_7|z| − |z|²/16) < ε.
We can thus bound (84), for all k ≥ k_17 and x ∈ R^d, by the displayed expression. By (83) the first term is bounded by εφ_{γλ_7}(x), as is the second term. For the last term we use the upper bound (51) for π_x[0, t] and then (86) to obtain the upper bound εφ_{γλ_7}(x).
Proof. In a first step we note the identity obtained by Brownian scaling. The main contribution to this expectation comes from those paths W̄ with W̄_M ≤ √k. Indeed, the remaining part of the integral can be estimated by a constant multiple of M^γ P_0(W̄_M > √k), and (with c_(89) depending on M) we can estimate this for sufficiently large values of k, recalling (51) and (37).
In the next step we use (38) and choose k large enough such that the required bound holds. Using this, the last line is ≤ εφ_{γλ_7}(x) by (51) and (37). Now it remains to observe the identity obtained by Brownian scaling. For y ∈ B(x, 1/k) the inner expectation is constant and equals c_19. The contribution coming from y ∉ B(x, 1/k) is easily seen to be bounded by a constant multiple of k^{−2}φ_{γλ_7}(x). This completes the proof.
The following lemma is at the heart of our proof of Proposition 14. Recall that π_x denotes the law of τ = τ^y_{1/k}[W] for W starting in x.
Now we show that (95) holds for all x ∈ R^d and k ≥ k_20. Indeed, using (92) and (93), we can estimate the expression in question. We give estimates for the two final summands, the error terms. The term (96b) can be estimated, using (51), by ε times the corresponding y-integral. The error term (96c) can be estimated similarly, and the integral is smaller than ε by (94). For the first summand, the main term (96a), the last summand arising there is again bounded by a constant multiple of εφ_{λ_7}(x). Hence we have verified (95), and by the analogous argument one sees the corresponding bound for all k ≥ k_20 and x ∈ R^d. This completes the proof.
Proof of Proposition 14. Recall from (68) that We use (7) to evaluate the expectation with respect to the medium.
We now compare (101) to (102). Clearly, two elementary inequalities hold. By the second inequality, the term (102) is always an upper bound for (101). On the other hand, by the first inequality and Lemma 16, the difference is bounded from above by a constant multiple of the displayed power of k. Note that the exponent is negative iff dγ > 2 + δ(1 + γ); hence choosing δ > 0 sufficiently small justifies the approximation of (101) by (102).
Recall that τ = τ^y_{1/k}[W] denotes the first hitting time of the ball B(y, 1/k) by our Brownian motion W started in x. Now note that (102) equals the expression in (105), where the strong Markov property was used and the value of κ was plugged in. By Lemma 15 we may choose (and henceforth fix) a value M > 1 such that the contributions to the innermost integral coming from s > M/k² can be bounded by εφ_{γλ_7}(x), and additionally such that (106) holds. Moreover, by Lemma 17, if k ≥ k_17, the contribution to (105) coming from t − δ ≤ τ ≤ t can be made smaller than εφ_{γλ_7}(x) by choice of δ > 0, which we also regard as fixed from now on.
We let (107) k_(107) := M/δ and note that t − τ ≥ M/k² whenever t − δ ≥ τ and k ≥ k_(107). Now let k_14 := k_17 ∨ k_18 ∨ k_19 ∨ k_20 ∨ k_(107). It remains to show (108). This will be done in three steps by the triangle inequality; the steps are prepared in Lemmas 18 to 20.
In the first step note that, by Lemma 18, we have a first bound for all k ≥ k_14 and x ∈ R^d. We may therefore continue, using the Markov property. As a second step, by Lemma 19 we have, for all k ≥ k_14 and x ∈ R^d, a bound ≤ εφ_{γλ_7}(x).
By (106) we have, using c_(106), the corresponding estimate. In the third step we recall that, by Lemma 20, for all k ≥ k_18 and x ∈ R^d, the remaining bound holds, and this completes the proof of Proposition 14.
3.3. Convergence of variances. In this section we establish that the variances with respect to the medium for the solutions of the linearized integral equation vanish asymptotically.
Proposition 21 (Convergence of variances). For every µ ∈ M_tem satisfying the assumption in Theorem 2, for ϕ ∈ C^+_exp and t > 0, the variances vanish asymptotically. The remainder of this section is devoted to the proof of this proposition. Recalling the definition (9) of Γ_1, the variance expression in Proposition 21 equals (110), where (W¹, W²) is distributed according to P_x ⊗ P_y. Exploiting the Laplace functional (7) of Γ, (110) can be rewritten as (111). Note that, by the elementary inequality (112), the argument in the first exponential expression is not smaller than the argument in the second one. Therefore we may apply the elementary inequality (113) and a substitution in z to get, for the non-negative total expression in (111), the upper bound (114) (from now on we may drop the factor ̺^γ). It remains to show that (114) converges to zero as k ↑ ∞.
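The two elementary inequalities invoked here, in our reading of (112) and (113), are the subadditivity of x ↦ x^γ and a Lipschitz bound for the exponential:

```latex
(a+b)^{\gamma} \le a^{\gamma} + b^{\gamma}
 \qquad (a, b \ge 0,\ 0 < \gamma < 1),
\qquad\qquad
0 \le e^{-b} - e^{-a} \le a - b
 \qquad (a \ge b \ge 0).
```

Together they bound the difference of the two exponential expressions in (111) by the difference of their arguments, which is how the upper bound (114) arises.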

3.4. Upper bound for finite-dimensional distributions. We use an induction argument to extend the result from the convergence of one-dimensional distributions to all finite-dimensional distributions. Recall that we have to show that, for any ϕ_1, …, ϕ_n and 0 = t_0 < t_1 < ⋯ < t_n, the stated bound holds in P-probability. The case n = 1 was shown in the previous paragraphs, so we may assume that the claim holds for n − 1 and show that it also holds for n. By conditioning on {X_k(t) : t ≤ t_{n−1}} and applying the transition functional, we get an expression in which u_k is the solution of (66) with ϕ replaced by ϕ_n. Separating the non-random terms yields the factor exp⟨S_{t_{n−1}}µ, k^κ S_{t_n−t_{n−1}}ϕ_n − u_k(t_n − t_{n−1})⟩. By Theorem 2 for n = 1 with starting measure S_{t_{n−1}}µ, in P-probability, (126) lim sup_{k↑∞} exp⟨S_{t_{n−1}}µ, k^κ S_{t_n−t_{n−1}}ϕ_n − u_k(t_n − t_{n−1})⟩ ≤ exp c⟨S_{t_{n−1}}µ, ∫_0^{t_n−t_{n−1}} dr S_r (S_{t_n−t_{n−1}−r}ϕ_n)^{1+γ}⟩. The remaining expectation can be written as (127). To dominate this term, observe that, by the induction assumption, (128) holds in P-probability. We show below that, in P-probability, the convergence in law (129) holds. Observe that lim sup_{m↑∞} ξ_m ≤ a in probability for some a, together with ζ_m ⇒ 1 in law, implies lim sup_{m↑∞} ξ_m ζ_m ≤ a in probability. Hence (128) and (129) together imply that (127) is asymptotically bounded from above by (130).
Putting together (126) and (130) yields the claimed statement subject to the proof of (129).
To prove (129) it suffices to show that, for any a ≥ 1, (131) E^k_µ exp a k^κ⟨X_k(t_{n−1}) − S_{t_{n−1}}µ, −S_{t_n−t_{n−1}}ϕ_n + k^{−κ}u_k(t_n − t_{n−1})⟩ converges in P-probability to 1. Using the Feynman-Kac representation (67), the expectation in (131) equals an expression in which U_k is the solution of (66) with ϕ replaced by a(S_{t_n−t_{n−1}}ϕ_n − k^{−κ}u_k(t_n − t_{n−1})). It therefore suffices to show that the corresponding term converges to zero in L¹(P). As this term is non-negative, and as (134) U_k(t_{n−1} − r) ≤ a k^κ S_{t_{n−1}−r}(S_{t_n−t_{n−1}}ϕ_n − k^{−κ}u_k(t_n − t_{n−1})) ≤ a k^κ S_{t_n−r}ϕ_n, it finally suffices to show that (135) converges to zero. The first factor in the expectation can be expressed using the Feynman-Kac representation (67) of u_k, which gives an expression that is in turn dominated by a ∫µ(dx) E E_x k^κ E_{W_{t_{n−1}}} ϕ_n(W_{t_n−t_{n−1}}).
We can now multiply out the factors and obtain the three summands (138a) to (138c).
We can now determine the limit of each of the three summands (138a) to (138c) separately. For the first one we obtain the limit (139) from Proposition 14 as k ↑ ∞. Similarly, the second one, (138b), converges by Proposition 14, as k ↑ ∞, to (140). Finally, the last expression, (138c), equals (141), using Proposition 14 to take the limit as k ↑ ∞. Comparing the right hand sides of (139) to (141) shows that they cancel completely, which proves (135) and completes the argument.

4. Lower bound: Proof of (19)
4.1. A heat equation with random inhomogeneity. As motivated in Section 1.5, we look at the mild solution m_k of the linear equation (142) with initial condition m_k(0, ·) = k^κ ϕ.
This is a heat equation with the time-dependent scaled random inhomogeneity −k^{2−d} ̺ Γ_1(kx) w_k²(t, x). We study its asymptotic fluctuation behaviour around the heat flow: Proposition 22. Under the assumptions of Theorem 2, if κ = κ_c, then for any ϕ ∈ C^+_exp and t ≥ 0, in P-probability, the stated convergence holds, where the constant c = c(γ, ̺) is given in (144). To see how the case n = 1 of (19) follows from Proposition 22, we fix a sample Γ. Recall the representation in which u_k solves the non-linear equation. As u_k² ≥ w_k², we obtain the corresponding domination from (142). Hence, the case n = 1 of (19) follows from Proposition 22. Proposition 22 is proved in two steps: in Section 4.2 we show that the right hand side of (143) is an asymptotic lower bound for the expectations of the left hand side, and in Section 4.3 that the variances vanish asymptotically.
4.2. Convergence of expectations. Proposition 23 (Convergence of expectations). For c as in (144), the stated convergence holds. The remainder of this section is devoted to the proof of this proposition. We first fix some notation. Lemma 24 (Dropping the exponential). For each δ > 0 and for c_16 from Lemma 16, the stated bound holds for all x ∈ R^d and k ≥ 1.
Proof. By (142) and the Feynman-Kac representation (68), we obtain an expression in which W¹ and W² are independent Brownian motions starting from W_s. By the definition (9) of Γ_1 this equals the corresponding integral. Recall that (153) holds for measurable ϕ, ψ ≥ 0, and that k^{2−d+2κ} k^{(2−d+κ)(γ−1)} = k^{2γ−2} for κ = κ_c. Applying this to (153) yields the next bound. By the inequality 1 − e^{−a} ≤ a we may continue. Applying (112) to the last integrand and using the symmetry in W¹, W², we see that the right hand side of the former display does not exceed the stated expression. We now drop I_s(z, W²) and evaluate the expectation with respect to W², obtaining an upper bound to which we apply the Markov property at time s and time-homogeneity. The last factor can be bounded by (I_0(y, W))^γ. Then we integrate with respect to s, and using Lemma 16 we arrive at the claimed estimate, finishing the proof.
It remains to find the limit of the expression above. Substituting z ↦ kz gives the rescaled form. Fix x, z ∈ R^d and 0 < s < t for a while and consider the inner expectation. The following lemma immediately implies Proposition 23; indeed, it suffices to apply Fatou's lemma. Proof of Lemma 25. Shifting the Brownian motions, we rewrite the expectation. By the uniform continuity of ϕ, and by (38), we obtain a first approximation; by the triangle inequality, we can compare the relevant terms. Calculating the expectation with respect to W gives the next expression, and using (158) once more we obtain a further simplification. Define the events (160) A^i_k(z) := {|W^i_r − z| > 1/k for all r > 1/k}. We calculate the expressions in the last two lines separately. For the first line we get, by the Markov property at time 1/k,

By (38), this equals asymptotically
where in the last step Brownian scaling was used. Therefore the first line is asymptotically equivalent to (161). Turning now to the second line, the expectation with respect to W² was evaluated and (38) was used. Recalling that ϕ is bounded and applying Cauchy-Schwarz, we obtain an upper bound. Since the expectation is bounded and the probability goes to zero, (162) is o(k^{2−2γ}). Together with (161) this proves the lemma.
The following Lemmas 27 and 28 together directly imply Proposition 26.
Dropping the exponential in M_21(x, x̄) and in I_s(z, W) + I_s̄(z, W̄) gives the first bound. By the Markov property, we may rewrite it. Carrying out the integration over s and s̄ gives ∫dz [I_0(z, W)^γ + I_0(z, W̄)^γ − (I_0(z, W) + I_0(z, W̄))^γ].
Changing the integration variable via z ↦ kz, we obtain the stated inequality. Its right hand side coincides with (114), since 2κ + (κ − d)γ + d = κ − 2 + d, and hence converges to zero.

4.4. Lower bound for finite-dimensional distributions. The proof is analogous to that of the upper bound in Section 3.4. Again we use induction to show that, for any ϕ_1, …, ϕ_n and 0 = t_0 < t_1 < ⋯ < t_n, the stated bound holds in P-probability. For the case n = 1 this was shown in the previous paragraphs, so we may assume that the claim holds for n − 1 and show that it also holds for n. By conditioning on {X_k(t) : t ≤ t_{n−1}} and applying the transition functional, we get E^k_µ exp Σ_i ⟨k^κ(X_k(t_i) − S_{t_i}µ), −ϕ_i⟩ = exp⟨S_{t_{n−1}}µ, k^κ S_{t_n−t_{n−1}}ϕ_n − u_k(t_n − t_{n−1})⟩ (169) times the expectation of exp[Σ_{i≤n−2} ⟨k^κ(X_k(t_i) − S_{t_i}µ), −ϕ_i⟩ + ⟨k^κ(X_k(t_{n−1}) − S_{t_{n−1}}µ), −ϕ_{n−1} − k^{−κ}u_k(t_n − t_{n−1})⟩], where u_k is the solution of (66) with ϕ replaced by ϕ_n. By Theorem 2 for n = 1, in P-probability, lim inf_{k↑∞} exp⟨S_{t_{n−1}}µ, k^κ S_{t_n−t_{n−1}}ϕ_n − u_k(t_n − t_{n−1})⟩ ≥ exp c⟨µ, ∫_{t_{n−1}}^{t_n} dr S_r (S_{t_n−r}ϕ_n)^{1+γ}⟩.
To estimate this term from below, observe that, by the induction assumption, the analogue of (128) holds in P-probability. In (129) it was shown that, in P-probability, (173) exp k^κ⟨X_k(t_{n−1}) − S_{t_{n−1}}µ, S_{t_n−t_{n−1}}ϕ_n − k^{−κ}u_k(t_n − t_{n−1})⟩ ⟹ 1 as k ↑ ∞.
Since lim inf_{m↑∞} ξ_m ≥ a in probability for some a ≥ 0, together with ζ_m ⇒ 1 in law, implies lim inf_{m↑∞} ξ_m ζ_m ≥ a in probability, this completes the proof.