Weak and Strong disorder for the stochastic heat equation and the continuous directed polymer in $d\geq 3$

We consider the smoothed multiplicative-noise stochastic heat equation $$d u_{\varepsilon,t}= \tfrac 12 \Delta u_{\varepsilon,t}\, d t+ \beta \varepsilon^{\frac{d-2}{2}}\, u_{\varepsilon, t} \, d B_{\varepsilon,t}, \qquad u_{\varepsilon,0}=1,$$ in dimension $d\geq 3$, where $B_{\varepsilon,t}$ is a spatially smoothed (at scale $\varepsilon$) space-time white noise and $\beta>0$ is a parameter. We show the existence of a $\bar\beta\in (0,\infty)$ such that the solution exhibits weak disorder when $\beta<\bar\beta$ and strong disorder when $\beta>\bar\beta$. The proof techniques use elements of the theory of Gaussian multiplicative chaos.


Motivation and introduction
We consider the stochastic heat equation (SHE) with multiplicative noise, written formally as

∂_t u(t, x) = ½ Δu(t, x) + u(t, x) η(t, x). (1.1)

Here η is the "space-time white noise", which formally is the centered Gaussian process with covariance function E[η(s, x) η(t, y)] = δ_0(t − s) δ_0(x − y) for s, t > 0 and x, y ∈ R^d. We emphasize that (1.1) is a formal expression, and in attempting to give it a precise meaning one is immediately faced with the problem of multiplication of distributions.
Besides the intrinsic interest in the SHE, we recall that the Cole-Hopf transformation h := −log u formally transforms the SHE to the non-linear Kardar-Parisi-Zhang (KPZ) equation, which can be written as

∂_t h(t, x) = ½ Δh(t, x) − ½ |∇h(t, x)|² − η(t, x), (1.2)

and appears in dimension d = 1 as the limit of front propagation in certain exclusion processes ([BG97], [ACQ11]). While a priori the equation (1.2) is not well posed due to the presence of products of distributions, much recent progress has been achieved in giving an intrinsic precise interpretation to it in dimension d = 1 ([H13]). As discussed in [AKQ14] and [CSZ15], the equations (1.1) and (1.2) share close analogies with the well-studied discrete directed polymer, which can be defined through the transformed path measure

dP_{n,β}(ω) = (1/Z_n) exp{β Σ_{i=1}^n η(i, ω_i)} dP_0(ω), Z_n = E_0[exp{β Σ_{i=1}^n η(i, ω_i)}]. (1.3)

Here the white noise (the disorder) is replaced by i.i.d. random variables η = {η(n, x) : n ∈ N, x ∈ Z^d}, P_0 denotes the law of a simple random walk starting at the origin corresponding to a d-dimensional path ω = (ω_i)_{i≤n}, while β > 0 stands for the strength of the disorder. It is well known that, when d ≥ 3, the normalized partition function Z_n/EZ_n converges almost surely to a random variable Z_∞ which, when β is small enough, is positive almost surely (i.e., weak disorder persists [IS88, B89]), while for β large enough, Z_∞ = 0 (i.e., strong disorder holds [CSY04]). Related results for a continuous directed polymer in a field of random traps appear in [CY13].
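As a purely illustrative aside (ours, not from the sources cited above), the discrete polymer partition function admits an exact transfer-matrix computation; the sketch below implements it in d = 1 with standard Gaussian disorder. The function name `normalized_partition_function` and all parameter choices are ours.

```python
import numpy as np

# Illustrative sketch (not from the paper): the discrete directed polymer
# partition function Z_n = E_0[exp(beta * sum_i eta(i, omega_i))], computed
# exactly over all simple-random-walk paths by the transfer-matrix recursion
#   Z_n(x) = exp(beta * eta(n, x)) * (Z_{n-1}(x - 1) + Z_{n-1}(x + 1)) / 2,
# here in d = 1 with eta(i, x) i.i.d. N(0, 1), so that E[Z_n] = exp(n beta^2 / 2)
# and W_n := Z_n / E[Z_n] is a positive mean-one martingale in n.

def normalized_partition_function(n, beta, rng):
    size = 2 * n + 3                     # enough room: the walk reaches at most |x| = n
    Z = np.zeros(size)
    Z[size // 2] = 1.0                   # walk starts at the origin
    for step in range(n):
        eta = rng.standard_normal(size)  # fresh disorder row eta(step, .)
        Z = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1))  # average over the two parents
        Z = Z * np.exp(beta * eta)       # multiply in the disorder weights
    return Z.sum() * np.exp(-n * beta ** 2 / 2.0)   # W_n = Z_n / E[Z_n]
```

Averaging the returned W_n over independent disorder samples recovers E[W_n] = 1 up to Monte Carlo error; in d = 1, however, W_n → 0 almost surely as n → ∞ for every β > 0, in contrast with the d ≥ 3 weak-disorder regime discussed above.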
We return to the study of the stochastic heat equation in the continuum R^d, written as a stochastic differential equation

du_t = ½ Δu_t dt + u_t dB_t, (1.4)

where B_t is a cylindrical Wiener process in L²(R^d). Since the solution to (1.4) is not well defined, a standard approach to treat this equation is to introduce a regularization of the process B_t, followed by a suitable rescaling of the coupling coefficients and subsequently passing to a limit as the regularization is turned off. In one space dimension d = 1, this task was carried out by expressing the regularized process by a Feynman-Kac formula; after a simple renormalization (the Wick exponential), a meaningful expression was obtained when the mollification was removed. The renormalized Feynman-Kac formula defines a process with continuous (in space and time) trajectories, and it solves the equation (1.4) (when the stochastic differential is interpreted in the Itô sense).
Extending this procedure to d = 2 (where small-scale singularities coming from the noise are stronger), the authors of [BC98] introduced a rescaling of the coupling constant which vanishes as ε → 0. It turned out that the covariance E[Z_ε(t, x) Z_ε(t, y)] of the regularized field Z_ε converges to a non-trivial limit as the mollification is removed, but the limiting law of Z_ε was not identified in [BC98]. The latter identification was recently carried out by Caravenna, Sun and Zygouras ([CSZ15]) and by Feng [F15], who proved that, in d = 2, if β_ε is chosen to be β (2π [log(1/ε)]^{−1})^{1/2}, then for β < 1, Z_ε converges in law to a random variable with an explicit distribution, while for β ≥ 1, Z_ε converges in law to 0.
The results of this article concern related statements for d ≥ 3, pertaining to the smoothed and rescaled equation

du_{ε,t} = ½ Δu_{ε,t} dt + β ε^{(d−2)/2} u_{ε,t} dB_{ε,t}, u_{ε,0} = 1.

Our main result shows that for every x ∈ R^d, for any β small enough, u_ε(x) converges in distribution to a non-degenerate random variable Z_∞ = Z_∞(β), i.e., weak disorder prevails, while for β large enough, u_ε(x) converges in probability to 0, i.e., strong disorder takes place. We also show that for β small enough and any suitable test function f, u_ε(f) = ∫ f(x) u_ε(x) dx converges in probability to ∫ f(x) dx. We remark that our results, unlike [CSZ15], do not characterize the limiting non-degenerate random variable Z_∞(β), nor do they identify the exact critical threshold value of β (which happens to be 1 in d = 2) at which the departure from weak disorder to strong disorder takes place.
2.1 Preliminaries. We consider a complete probability space (Ω, F, P) and a cylindrical Wiener process B = (B_t)_{t≥0} on L²(R^d). The latter is defined as the centered Gaussian process with covariance

E[B_t(f) B_s(g)] = (s ∧ t) ⟨f, g⟩_{L²(R^d)}, f, g ∈ S.

Here S = S(R^d) is the Schwartz space of rapidly decreasing functions in R^d. To define B pointwise in R^d, we need the regularization with respect to some mollifier:

B_{ε,t}(x) := B_t(φ_ε(x − ·)), where φ_ε(x) := ε^{−d} φ(x/ε).

Here φ is some smooth, non-negative, compactly supported and even function such that ∫_{R^d} φ(x) dx = 1. Then ∫_{R^d} φ_ε(x) dx = 1, and φ_ε ⇒ δ_0 weakly as probability measures. Furthermore, for any ε > 0, B_ε = (B_{ε,t})_{t≥0} is also a centered Gaussian process, with covariance

E[B_{ε,t}(x) B_{ε,s}(y)] = (s ∧ t) V_ε(x − y),

where we introduced

V_ε := φ_ε ∗ φ_ε, so that V_ε(x) = ε^{−d} V(x/ε) with V := φ ∗ φ. (2.1)

For any β > 0 and ε > 0, we consider the stochastic differential equation

du_{ε,t}(x) = ½ Δu_{ε,t}(x) dt + β ε^{(d−2)/2} u_{ε,t}(x) dB_{ε,t}(x), u_{ε,0}(x) = 1, (2.2)

where the stochastic differential is interpreted in the classical Itô sense (since our smoothing of B was done in space only, the well-defined solution u_{ε,t} is adapted to the natural filtration of B). Our goal is to study the behavior of u_{ε,1}(x) as the mollification parameter ε is turned off. For this, we will use a convenient Feynman-Kac representation of u_{ε,t}(x), which we introduce in Section 2.3 after stating our main results.
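As a quick numerical sanity check of the mollifier identities just used (ours; in d = 1, with one hypothetical admissible choice of φ), the normalization of φ_ε and the scaling V_ε(0) = ε^{−d} V(0) can be verified directly:

```python
import numpy as np

# Numerical check (d = 1, a hypothetical bump phi) of the mollifier identities:
# phi_eps(x) = eps^{-d} phi(x/eps) still integrates to 1, and
# V_eps := phi_eps * phi_eps satisfies V_eps(0) = eps^{-d} V(0), V(0) = int phi^2.

def bump(x):
    # smooth, even bump supported in [-1, 1] (unnormalized)
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

grid = np.linspace(-2.0, 2.0, 40001)
dx = grid[1] - grid[0]
mass = bump(grid).sum() * dx               # normalization so that int phi = 1

phi = bump(grid) / mass
V0 = (phi ** 2).sum() * dx                 # V(0) = int phi(y)^2 dy

eps = 0.1
phi_eps = bump(grid / eps) / (eps * mass)  # eps^{-1} phi(x / eps) in d = 1
mass_eps = phi_eps.sum() * dx              # should again be 1
V_eps_0 = (phi_eps ** 2).sum() * dx        # should equal eps^{-1} V(0)
```

The same computation, with `eps**d` in place of `eps`, checks the identities in any dimension; the bump chosen here is arbitrary among the admissible mollifiers.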
2.2 Main results: Weak and strong disorder. Henceforth we fix d ≥ 3 and set u_ε(x) := u_{ε,1}(x) and, for any test function f, u_ε(f) := ∫_{R^d} f(x) u_ε(x) dx. Here is the statement of our first main result.
Theorem 2.1 (Convergence to the heat equation in the weak disorder phase). There exists β_0 > 0 such that, for all β < β_0, the following hold as ε → 0: (i) for every smooth, compactly supported f, u_ε(f) converges in probability to ∫_{R^d} f(x) dx; (ii) for every x ∈ R^d, u_ε(x) converges in distribution to a non-degenerate, strictly positive random variable Z_∞ = Z_∞(β).

Remark 1 The first statement in Theorem 2.1 implies that u_ε converges in the sense of distributions to the solution of the heat equation. Although for simplicity we content ourselves with the initial condition u_ε(0, x) = 1 in (2.2), the same statement continues to hold for reasonably nice initial conditions u_ε(0, x) = u_0(x).
Remark 2 While we do not discuss it in detail, the Feynman-Kac representation of u_ε(x) that we introduce in the next subsection shows that u_ε(x) and u_ε(y) become asymptotically independent as ε → 0; this explains the fact that smoothing with f makes u_ε(f) deterministic.
The proof of Theorem 2.1 is based on an L² computation and is presented in Section 3.
Theorem 2.2 (Strong disorder). There exists β* ∈ (0, ∞) such that, for all β > β*, u_ε(0) converges to 0 in probability as ε → 0.

The proof of Theorem 2.2 is presented in Section 4. This proof avoids the use of the well-known fractional moment method, which pervades the proofs of strong disorder assertions in the aforementioned literature on discrete directed polymer models, and instead uses the theory of Gaussian multiplicative chaos (GMC).
As a by-product of our arguments, we have the following corollary.
Corollary 2.3. There is a β̄ ∈ (0, ∞) such that, as ε → 0, u_ε(0) converges to 0 in probability for all β > β̄, while for all β < β̄, u_ε(0) converges in distribution to a non-degenerate, strictly positive random variable Z_∞. The constant β̄ is given as the threshold for the uniform integrability of a certain family of martingales Z_{ε,β}; we refer to the proof of Corollary 2.3, at the end of Section 4, for details. We leave unresolved the question of what happens at β = β̄.
Remark 3 Clearly β̄ depends on the dimension d ≥ 3 and the mollifier φ. As mentioned in Section 1, it remains an open problem to determine the exact value of β̄ ∈ (0, ∞) and to identify the exact distribution of the positive random variable Z_∞ appearing in Corollary 2.3.

2.3 A Feynman-Kac representation.
For any x ∈ R^d, let P_x denote the Wiener measure corresponding to a d-dimensional Brownian motion (W_t)_{t≥0} starting at x and independent of the cylindrical Wiener process B; E_x will denote the corresponding expectation. For fixed W, set

H_{ε,t}(W) := ∫_0^t ∫_{R^d} φ_ε(W_s − y) B(ds, dy), (2.3)

defined as a Wiener integral. For two fixed W and W′, the covariance is given by

E[H_{ε,t}(W) H_{ε,t}(W′)] = ∫_0^t V_ε(W_s − W′_s) ds. (2.4)

(Here and later, we write E for integration over B only, keeping W fixed.) We also note that, for any fixed W, E[H_{ε,t}(W)²] = t V_ε(0). We now turn to (2.2) and write its renormalized Feynman-Kac solution as

u_{ε,t}(x) = E_x[exp{β ε^{(d−2)/2} ∫_0^t ∫_{R^d} φ_ε(W_s − y) Ḃ(t − s, y) dy ds − β² ε^{d−2} t V_ε(0)/2}]. (2.5)

Note that E[u_{ε,t}(x)] = 1.
For our purposes, it is convenient to introduce another representation of u_{ε,t}. Note that by rescaling of time and space, ε^{−1} W_{ε²s} has the same distribution (as a process in s) as W_s, while Ḃ(ε²s, εx) has the same distribution as ε^{−(d+2)/2} Ḃ(s, x). Then, by (2.3), for fixed ε,

ε^{(d−2)/2} H_{ε,1}(W) has the same distribution as H_{1,ε^{−2}}(W), where H_{1,t}(W) := ∫_0^t ∫_{R^d} φ(W_s − y) B(ds, dy).

Using the invariance of the distribution of Ḃ under time reversal, we obtain that the spatially-indexed process {u_ε(x)} possesses the same distribution as the process {Z_ε(x/ε)}, (2.6) where

Z_ε(x) := E_x[exp{β H_{1,ε^{−2}}(W) − β² ε^{−2} V(0)/2}]. (2.7)

3. Proof of Theorem 2.1: The second moment method

We start with an elementary computation.
Lemma 3.1. For β small enough, sup_{ε>0} E[u_ε(x)²] = sup_{ε>0} E[Z_ε²] < ∞.

Proof. Let W and W′ be two independent standard Brownian motions, with P_0 ⊗ P_0 denoting their joint law and E_0^{⊗2} the corresponding expectation. Then, writing the square of the Feynman-Kac representation of u_ε(x) as a double path integral and evaluating the Gaussian expectation over B,

E[u_ε(x)²] = E_0^{⊗2}[exp{β² ε^{d−2} ∫_0^1 V_ε(W_s − W′_s) ds}],

where the identity follows by (2.4). Hence, by (2.1), Brownian scaling and change of variables, we infer that

E[u_ε(x)²] = E_0^{⊗2}[exp{β² ∫_0^{ε^{−2}} V(W_s − W′_s) ds}] ≤ E_0^{⊗2}[exp{β² ∫_0^∞ V(W_s − W′_s) ds}].

Since V is a bounded function of compact support and d ≥ 3, it is easy to check that for β small enough, sup_{x∈R^d} β² E_x[∫_0^∞ V(√2 W_s) ds] < 1. Hence, by Portenko's lemma ([P76]), sup_ε E[u_ε(x)²] ≤ sup_{x∈R^d} E_x[exp{β² ∫_0^∞ V(√2 W_s) ds}] < ∞. This proves the lemma.
Remark 4 Let us remark that u_ε is not a Cauchy sequence in L²(P) (which is the reason why the convergence in distribution in Theorem 2.1 cannot be upgraded to convergence in probability). A simple computation using (2.4) expresses E[(u_ε − u_δ)²] as a sum of two differences of exponential moments, and neither difference goes to zero. For instance, if φ_ε is a centered Gaussian mollifier with variance ε², then, again by Brownian scaling (recall (2.1)), the terms in question can be computed explicitly. From these expressions one can see that E[(u_ε − u_δ)²] does not vanish, e.g., in the iterated limit lim_{ε→0} lim_{δ→0}.
We turn to the proof of Theorem 2.1.
Proof of Theorem 2.1. Let us denote ū_ε(f) := u_ε(f) − ∫_{R^d} f(x) dx. Note that, for the proof of the first part of Theorem 2.1, it suffices to show that

E[ū_ε(f)²] → 0 (3.3)

as ε → 0. Let us prove this fact. Computations exactly similar to those in the proof of Lemma 3.1 lead to a representation, (3.5), of E[ū_ε(f)²] in terms of two independent Brownian motions. By applying Portenko's lemma again ([P76]), we see that for β small enough the exponential moments appearing in (3.5) are bounded, uniformly in ε. Together with (3.5), by Lebesgue's convergence theorem, for an even smaller β the integrand in (3.5) vanishes, see (3.6)-(3.7), as |z| → ∞. Combining (3.4), (3.6) and (3.7), we use the bounded convergence theorem to conclude (3.3). This proves the first part of Theorem 2.1.
For the second part, note that (2.6) implies that for fixed ε, u_{ε,1}(0) is equal in distribution to Z_ε. Since the process {Z_ε}_ε is a positive martingale (with respect to a filtration indexed by 1/ε²), it converges almost surely to a limit Z_∞. By Lemma 3.1, Z_ε is (uniformly in ε) L²(P)-bounded for β small enough, and therefore Z_∞ does not vanish identically. By the 0-1 law as in the proof of Theorem 2.2 (see (4.1)), we conclude that P(Z_∞ = 0) = 0. We conclude that u_ε(0) converges in distribution to Z_∞. Further, since u_ε(x) is equal in distribution to u_ε(0) by translation invariance, the same applies to u_ε(x).
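The role played by uniform integrability here can be illustrated by a toy example (ours, not the paper's object): a product of i.i.d. mean-one lognormal weights is a positive mean-one martingale whose almost-sure limit is 0, a caricature of the strong disorder phenomenon.

```python
import numpy as np

# Caricature (ours) of how a positive mean-one martingale can have a
# degenerate limit: M_n = prod_{i<=n} exp(beta*g_i - beta^2/2), with g_i
# i.i.d. N(0,1), satisfies E[M_n] = 1 for all n, yet M_n -> 0 almost surely
# for every beta > 0, since log M_n = beta*S_n - n*beta^2/2 -> -infinity.
# In the polymer setting, averaging over paths can rescue the limit when
# beta is small and d >= 3 (weak disorder); this toy has no averaging, so
# its limit always degenerates, as in strong disorder.

def martingale_path(n, beta, rng):
    g = rng.standard_normal(n)
    return np.exp(np.cumsum(beta * g - beta ** 2 / 2.0))
```

The sequence (M_n) is uniformly integrable on no interval of β values here; in the polymer case, uniform integrability for small β is exactly what separates the two phases.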

4. Proof of Theorem 2.2 and Corollary 2.3: Gaussian multiplicative chaos
The starting point is the representation (2.7) for Z_ε = Z_ε(0). For d ≥ 3, which we assume throughout, we will show that there is a β* > 0 such that for all β > β*, Z_ε → 0 in probability.
In order to prove this result, we represent Z_ε as a Gaussian multiplicative chaos (GMC); see [K85, S14] for background. Let E = C_0([0, ∞); R^d) and recall that P_0 denotes the standard Wiener measure on E corresponding to the d-dimensional Brownian motion W = (W_t)_{t≥0}. Set

Λ_ε = Λ_ε(W) := exp{β ∫_0^{ε^{−2}} ∫_{R^d} φ(W_s − y) B(ds, dy) − β² ε^{−2} V(0)/2},

and recall from (2.7) that Z_ε = E_0[Λ_ε]. Introduce the event V := {lim_{ε→0} Z_ε = 0}. Since V is a tail event for the process t ↦ B(t, ·), one has

P(V) ∈ {0, 1}. (4.1)

Note that ε^{−1} ↦ Z_ε is a strictly positive martingale of mean 1. Introduce on Ω × E the measure dQ_ε := Λ_ε d(P ⊗ P_0).
With a slight abuse of notation, we also write Q_ε for its marginal on Ω, i.e., dQ_ε = Z_ε dP.
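The measure change dQ_ε = Z_ε dP is a size-biasing. As an illustration (our toy example, with one lognormal weight rather than the GMC mass Z_ε): if Z = e^{βG − β²/2} with G standard Gaussian, then under dQ = Z dP the variable G becomes N(β, 1), so tail probabilities of Z under Q can be computed either through the tilted law or by direct integration, and the two must agree.

```python
import math

# Toy size-biasing check (ours): Z = exp(beta*G - beta^2/2), G ~ N(0,1),
# E[Z] = 1. Under dQ = Z dP, G ~ N(beta, 1). Hence
#   Q(Z > K) = E[Z 1_{Z > K}] = P(G > a - beta),  a = (log K + beta^2/2)/beta,
# and this closed form must agree with direct numerical integration.

def Phi(x):
    # standard normal CDF
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def q_tail_tilted(beta, K):
    # Q(Z > K) via the tilted law G ~ N(beta, 1)
    a = (math.log(K) + beta ** 2 / 2.0) / beta
    return 1.0 - Phi(a - beta)

def q_tail_direct(beta, K, n=200000, cutoff=12.0):
    # E[Z 1_{Z > K}] by midpoint-rule integration against the N(0,1) density
    a = (math.log(K) + beta ** 2 / 2.0) / beta
    h = cutoff / n
    total = 0.0
    for i in range(n):
        g = a + (i + 0.5) * h
        z = math.exp(beta * g - beta ** 2 / 2.0)
        total += z * math.exp(-g * g / 2.0) / math.sqrt(2.0 * math.pi) * h
    return total
```

All function names are ours; the truncation at `a + cutoff` is harmless because the integrand has Gaussian decay.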
Lemma 4.1. If the sequence (Z_ε)_ε is uniformly integrable under P, then under Q_ε, (Z_ε)_ε is uniformly bounded in probability. In other words,

lim_{K→∞} sup_ε Q_ε(Z_ε > K) = 0.

Proof. Assume that Z_ε is uniformly integrable. Then, by the de la Vallée-Poussin theorem, there exists a convex increasing function h : R_+ → R_+ with lim_{x→∞} h(x)/x = ∞ and sup_ε E[h(Z_ε)] < ∞. Hence

Q_ε(Z_ε > K) = E[Z_ε 1_{{Z_ε > K}}] ≤ (sup_{y>K} y/h(y)) E[h(Z_ε)] → 0 as K → ∞, uniformly in ε.

The conclusion follows.
Remark 5 The implication in Lemma 4.1 is an "if and only if" statement; we only stated the direction that we need.
Another preparatory step that we need is the following proposition, whose statement and proof closely follow [CY06, Prop.3.1].
Proposition 4.2. The sequence {Z_ε} is uniformly integrable under P if and only if P(V) = 0.

Proof. If {Z_ε} is uniformly integrable then, since Z_ε → Z_∞ almost surely, E[Z_∞] = lim_{ε→0} E[Z_ε] = 1; hence P(V) < 1 and, by (4.1), P(V) = 0. To prove the reverse implication, recall the random variables Z_ε(x) (with x ∈ R^d), see (2.7). With t = 1/ε², we write Z̃_t(x) = Z_ε(x). It is enough to prove the uniform integrability of the sequence Z̃_n(0). Following [CY06], let Z̃_∞(B) denote the limit of Z̃_n(0) (which exists a.s.) and, for z ∈ R^d, let X_{n,z} = Z̃_∞(θ_{n,z}B)/E[Z̃_∞], where θ_{n,z} denotes the temporal (by n) and spatial (by z) shift of B. We have that E[X_{n,z}] = 1 and X_{n,x} ≥ E_x(e_{n+1,x,W_1} X_{n+1,W_1}) by Fatou. Denote by G_t the natural filtration induced by t ↦ B(t, ·). By construction, X_{n,·} is independent of G_n, and E(X_{n,z} | G_n) = E[X_{n,z}] = 1. Now, iterating, we get by the Markov property that E(X_{0,0} | G_n) ≥ Z̃_n(0). Thus, Z̃_n(0) is dominated by the conditional expectations of the fixed integrable random variable X_{0,0}, and it follows that the sequence Z̃_n is uniformly integrable under P.
Remark 6 An alternative proof of Proposition 4.2 can be obtained by using [K87, Thm.2] and an appropriate 0-1 law with respect to the Brownian path W .
The following proposition is the heart of the proof of Theorem 2.2.

Proposition 4.3. There exists β* such that for β > β* and any m > 0,

lim_{ε→0} Q_ε(Z_ε > m) = 1.

We first complete the proof of Theorem 2.2, and then provide the proof of Proposition 4.3.
Proof of Theorem 2.2 (assuming Proposition 4.3): Assume that Z_ε does not converge to 0 almost surely. Then, by (4.1), P(V) = 0, so by Proposition 4.2, Z_ε is uniformly integrable and, by Lemma 4.1, it is uniformly bounded in probability under Q_ε. In particular, there exists K > 0 such that sup_ε Q_ε(Z_ε > K) < 1/2. This contradicts Proposition 4.3.
Before providing the proof of Proposition 4.3, we need to introduce some notation and prove some preparatory lemmas. Introduce the stopping times

τ_δ = τ_δ(W, W′) := inf{s ≥ 0 : |W_s − W′_s| > δ}, δ > 0.

We need an estimate on the tail of τ := τ_δ conditionally on W, presented in the next lemma; in its statement and in its proof, P_0^{⊗2} denotes the measure P_0 ⊗ P_0 on (W, W′).

Lemma 4.4. There exists a random variable χ = χ(W) > 0 and a constant κ > 0, such that for t large enough,

P_0^{⊗2}(τ ≥ t | W) ≥ χ(W) e^{−κt}.

Proof. Define

κ_1 := liminf_{t→∞} t^{−1} log P_0^{⊗2}(τ ≥ t | W). (4.2)

Note that since κ_1 is measurable with respect to the tail σ-field of W, it is deterministic, possibly equal to −∞. We will show that κ_1 > −∞. Taking κ = −2κ_1 then proves the lemma.
Let W^{1,2}_t denote the space of absolutely continuous functions ϕ : [0, t] → R^d with ϕ(0) = 0 and ∫_0^t |ϕ̇(s)|² ds < ∞, where ϕ̇ denotes the time derivative of ϕ. We also use the notation ‖ϕ‖_{∞,t} = sup_{s∈[0,t]} |ϕ(s)|. Fix a (possibly random, but independent of W′) function ϕ ∈ W^{1,2}_t. Then, by an application of the Cameron-Martin theorem in classical Wiener space,

P_0(‖W′ − ϕ‖_{∞,t} ≤ δ/2) ≥ e^{−(1/2)∫_0^t |ϕ̇(s)|² ds} P_0(‖W′‖_{∞,t} ≤ δ/2), (4.3)

where the last inequality used Jensen's inequality and the invariance of the set {‖W′‖_{∞,t} ≤ δ/2} with respect to the map W′ → −W′.
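For the reader's convenience, the Cameron-Martin step behind (4.3) can be spelled out as follows (the display is ours; it is the standard argument):

```latex
% Shifting W' by \varphi produces the Girsanov density; symmetrizing under
% W' -> -W' replaces the exponential by a cosh, which is bounded below by 1.
\begin{align*}
P_0\big(\|W'-\varphi\|_{\infty,t}\le \delta/2\big)
 &= E_0\Big[\mathbf 1_{\{\|W'\|_{\infty,t}\le \delta/2\}}
      \exp\Big(\int_0^t \dot\varphi(s)\,dW'_s-\tfrac12\int_0^t|\dot\varphi(s)|^2\,ds\Big)\Big]\\
 &= e^{-\frac12\int_0^t|\dot\varphi(s)|^2\,ds}\,
    E_0\Big[\mathbf 1_{\{\|W'\|_{\infty,t}\le \delta/2\}}
      \cosh\Big(\int_0^t \dot\varphi(s)\,dW'_s\Big)\Big]\\
 &\ge e^{-\frac12\int_0^t|\dot\varphi(s)|^2\,ds}\, P_0\big(\|W'\|_{\infty,t}\le \delta/2\big).
\end{align*}
```

The second equality uses the invariance of both the event and the law of W′ under W′ → −W′, and the inequality uses cosh ≥ 1 (a pointwise bound, slightly stronger than the Jensen step quoted above).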
Introduce the random field

Y_{s,t} := inf{∫_s^t |ϕ̇(u)|² du : ϕ(s) = W_s, ϕ(t) = W_t, sup_{u∈[s,t]} |ϕ(u) − W_u| ≤ δ/2}, 0 ≤ s < t.

Since Y is subadditive in the sense that Y_{s,t} ≤ Y_{s,u} + Y_{u,t} for u ∈ (s, t), Kingman's subadditive ergodic theorem implies that

t^{−1} Y_{0,t} → κ_2 as t → ∞, a.s., (4.4)

for a deterministic κ_2. We claim that κ_2 is finite. This follows from the fact that κ_2 is smaller than E[Y_{0,1}]; since Y_{0,1} is finite almost surely and X := Y_{0,1} is Lipschitz as a map on E, denoting by med(X) the median of X, we have by the Borell-Tsirelson-Ibragimov-Sudakov inequality [AT07] that X − med(X) possesses Gaussian tails, and therefore E[X] = E[Y_{0,1}] < ∞.
We can now conclude. Let ϕ^{(t)} = ϕ^{(t)}(W) be an admissible function with ϕ^{(t)}(0) = 0, ϕ^{(t)}(t) = W(t) and Y_{0,t} = ∫_0^t |ϕ̇^{(t)}(s)|² ds. (Such a ϕ^{(t)} exists by lower semicontinuity of the L² norm, although this is not essential to our argument, and we could just assume that the last integral is smaller than 2Y_{0,t}.) We have, by (4.3),

P_0^{⊗2}(τ ≥ t | W) ≥ P_0(‖W′ − ϕ^{(t)}‖_{∞,t} ≤ δ/2) ≥ e^{−(1/2)Y_{0,t}} P_0(‖W′‖_{∞,t} ≤ δ/2).

Thus, by (4.2) and (4.4),

κ_1 ≥ −κ_2/2 + lim_{t→∞} t^{−1} log P_0(‖W′‖_{∞,t} ≤ δ/2).

The last probability on the right-hand side is P_0(σ > t), where σ denotes the first exit time of the standard Brownian motion W′ from the ball of radius δ/2 around the origin. It is well known (for example, by the spectral theorem for −½Δ) that lim_{t→∞} (1/t) log P_0(σ > t) = −λ_1, where λ_1 > 0 is the principal eigenvalue of −½Δ with Dirichlet boundary conditions on that ball. It follows that κ_1 > −∞, and Lemma 4.4 is proved.
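The exponential confinement rate invoked here can be checked in a discrete caricature (ours, not the continuum statement): for a simple random walk on {−m, ..., m} killed upon leaving, the survival probability decays at the rate of the principal Dirichlet eigenvalue of the discrete generator, matching π²/(8a²) for Brownian motion in (−a, a) under diffusive scaling.

```python
import numpy as np

# Discrete caricature (ours) of the eigenvalue fact above: a simple random
# walk on {-m, ..., m}, killed upon leaving, survives to time t with
# probability decaying like exp(-lambda_1 * t), where
#   lambda_1 = -log cos(pi / (2m + 2))
# is the principal Dirichlet eigenvalue of the discrete generator; for large
# m this is approximately pi^2 / (8 (m + 1)^2), the Brownian value.

m = 20
size = 2 * m + 1
p = np.zeros(size)
p[m] = 1.0                            # walk starts at the center
masses = [1.0]                        # survival probabilities P(sigma > t)
for t in range(400):
    left = np.pad(p[1:], (0, 1))      # mass arriving from the right neighbour
    right = np.pad(p[:-1], (1, 0))    # mass arriving from the left neighbour
    p = 0.5 * (left + right)          # killed step: mass leaving [-m, m] is lost
    masses.append(p.sum())

# two-step decay rate (two steps average out the walk's parity oscillation)
rate = 0.5 * np.log(masses[-3] / masses[-1])
lam1 = -np.log(np.cos(np.pi / (2 * m + 2)))
```

The two-step ratio is used because the walk is bipartite, so one-step survival ratios oscillate with the parity of t while the two-step ratio converges cleanly to the principal eigenvalue.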
Lemma 4.5. There exists δ > 0 such that inf_{|θ|≤δ} V(θ) ≥ 2V(0)/3.

Proof. Note that V(0) = ∫_{R^d} φ²(y) dy > 0. On the other hand, V is continuous, and hence for δ small enough, inf_{|θ|≤δ} V(θ) ≥ 2V(0)/3. This completes the proof.
Finally we turn to the proof of Proposition 4.3.
Proof of Proposition 4.3: Since we will use two independent copies W, W′ of the Brownian motion, we write throughout Λ_ε = Λ_ε(W) or Λ_ε(W′) to emphasize which Brownian motion participates in the definition of Λ_ε.
The starting point of the proof is the remark that, by the Cameron-Martin change of measure [Bo98], the law of Ḃ(s, x) under Q_ε (viewed as a measure on Ω × E) is the same as the law of Ḃ(s, x) + βφ(x − W_s) under P ⊗ P_0. In particular, for any measurable A ⊂ E, the law under Q_ε of ∫_A Λ_ε(W′) dP_0(W′) is the same as the law, under P ⊗ P_0, of

∫_A Λ_ε(W′) e^{β² K_ε(W,W′)} dP_0(W′), where K_ε(W, W′) := ∫_0^t V(W_s − W′_s) ds, t = ε^{−2}.

Let f : R_+ → R_+ be an increasing, bounded, concave function. Then, by the above remark with A = {τ ≥ t}, the fact that f is increasing, and Lemma 4.5,

∫ f(Z_ε) dQ_ε ≥ E_{P⊗P_0}[f(e^{2β² V(0)t/3} ∫_E 1_{{τ≥t}} Λ_ε(W′) dP_0(W′))].

On the other hand, f is concave, and on the set {τ ≥ t} the covariance kernel K_ε is bounded from above by the constant kernel K̄_ε(W, W′) := V(0)t. Using Kahane's comparison inequality with kernels K_ε and K̄_ε (see [K85]; it is stated there for convex functions, with the opposite sign; see also [S14, Theorem 28]), we get:

∫ f(Z_ε) dQ_ε ≥ E_{G,W}[f(e^{2β² V(0)t/3} P_0 ⊗ P_0(τ(W, W′) ≥ t | W) e^{β(V(0)t)^{1/2} G − β² V(0)t/2})], (4.6)

where G is a standard centered Gaussian random variable which is independent of W, and the expectation E_{G,W} is taken over both G and W. In particular,

∫ f(Z_ε) dQ_ε ≥ E_{G,W}[f(e^{β² V(0)t/6} P_0 ⊗ P_0(τ(W, W′) ≥ t | W) e^{β(V(0)t)^{1/2} G})] ≥ E_{G,W}[f(χ(W) e^{−κt} e^{β² V(0)t/6} e^{β(V(0)t)^{1/2} G})], (4.7)

by Lemma 4.4. Note that the argument of f goes to infinity as t → ∞ for almost every (G, W) if β² V(0) > 6κ. Taking f(x) = x ∧ m and using bounded convergence, we conclude that lim_{ε→0} ∫ (Z_ε ∧ m) dQ_ε = m, and hence lim_{ε→0} Q_ε(Z_ε > m′) = 1 for every m′ < m. Since m > 0 is arbitrary, this proves Proposition 4.3 with β* := (6κ/V(0))^{1/2}. This completes the proof.
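To make explicit why the argument of f in (4.7) diverges (the display is ours, a one-line expansion of (4.7)):

```latex
% Logarithm of the argument of f in (4.7): the linear-in-t term beats the
% sqrt(t) Gaussian fluctuation as soon as beta^2 V(0) > 6 kappa.
\[
\log\Big(\chi(W)\,e^{-\kappa t}\,e^{\beta^2V(0)t/6}\,e^{\beta\sqrt{V(0)t}\,G}\Big)
=\Big(\frac{\beta^2V(0)}{6}-\kappa\Big)t+\beta\sqrt{V(0)t}\,G+\log\chi(W)
\xrightarrow[t\to\infty]{}\infty
\]
```

almost surely in (G, W), since for fixed G the O(√t) Gaussian term is negligible against the linear term.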
In view of Theorem 2.1 and Theorem 2.2, we have β̄ ∈ (0, ∞). Thus, the corollary will follow from the following fact.