Dense blowup for parabolic SPDEs

The main result of this paper is that there are examples of stochastic partial differential equations [henceforth, SPDEs] of the type $$ \partial_t u=\frac12\Delta u +\sigma(u)\eta \qquad\text{on $(0\,,\infty)\times\mathbb{R}^3$}$$ such that the solution exists and is unique as a random field in the sense of Dalang and Walsh, yet the solution has unbounded oscillations in every open neighborhood of every space-time point. We are not aware of any such construction in spatial dimensions below $3$. En route, it will be proved that there exists a large family of parabolic SPDEs whose moment Lyapunov exponents grow at least sub-exponentially in the order parameter, in the sense that there exist $A_1,\beta\in(0\,,1)$ such that \[ \underline{\gamma}(k) := \liminf_{t\to\infty}t^{-1}\inf_{x\in\mathbb{R}^3} \log\mathbb{E}\left(|u(t\,,x)|^k\right) \ge A_1\exp(A_1 k^\beta) \qquad\text{for all $k\ge 2$}. \] This sort of "super intermittency" is then combined with a local linearization of the solution, and with techniques from Gaussian analysis, in order to establish the unbounded oscillations of the sample functions of the solution to our SPDE.


Introduction
Throughout, let us choose and fix a non-random, globally Lipschitz-continuous function σ : R → R, and consider the stochastic heat equation,

∂u(t,x)/∂t = ½(Δu)(t,x) + σ(u(t,x)) η(t,x) for (t,x) ∈ (0,∞) × R³, (1.1)

subject to the initial value u(0) ≡ 1. The forcing term η is a white noise with homogeneous correlations in its spatial variable; that is, η is a centered, generalized Gaussian random field with

Cov[η(t,x), η(s,y)] = δ₀(t − s) f(x − y) for all (t,x), (s,y) ∈ R₊ × R³,

where the spatial correlation function f : R³ → R₊ is a non-random, non-negative, tempered, positive semi-definite function. In principle, such equations can be, and have been, studied on R₊ × Rⁿ for any integer n ≥ 1. We will soon explain why we study them for n = 3 here. Let ĝ denote the Fourier transform of a distribution g on R³, normalized so that ĝ(z) = ∫_{R³} e^{ix·z} g(x) dx for all z ∈ R³ and g ∈ L¹(R³).
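Equation (1.1) can be explored numerically. The following is a minimal sketch, not the paper's setting: a one-dimensional explicit finite-difference scheme driven by discretized space-time white noise (the paper works on R³ with a spatially correlated noise η). The helper name `simulate_heat_1d` and every discretization choice below are illustrative assumptions.

```python
import math
import random

def simulate_heat_1d(sigma, T=0.1, L=1.0, nx=50, nt=400, seed=1):
    """Explicit Euler scheme for du = (1/2) u_xx dt + sigma(u) dW on [0, L],
    periodic boundary, u(0) == 1.  A 1-d toy discretization only; the paper's
    equation lives on R^3 with spatially correlated noise."""
    rng = random.Random(seed)
    dx, dt = L / nx, T / nt
    assert dt <= dx * dx, "stability condition for the explicit scheme"
    u = [1.0] * nx
    for _ in range(nt):
        # periodic discrete Laplacian
        lap = [(u[(i + 1) % nx] - 2 * u[i] + u[(i - 1) % nx]) / dx ** 2
               for i in range(nx)]
        # discretized space-time white noise has variance dt/dx per cell
        noise = [rng.gauss(0.0, math.sqrt(dt / dx)) for _ in range(nx)]
        u = [u[i] + 0.5 * lap[i] * dt + sigma(u[i]) * noise[i]
             for i in range(nx)]
    return u
```

With σ ≡ 0 the noise term vanishes and the constant initial profile u(0) ≡ 1 is preserved exactly, which gives a quick sanity check of the scheme.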
The starting point of this article is the following existence and uniqueness theorem of Dalang [6]. Recall that f̂ ≥ 0 almost everywhere because f is positive semi-definite.

Theorem 1 (Dalang [6]). If

∫_{R³} f̂(z)/(1 + ‖z‖²) dz < ∞, (1.2)

then (1.1) has a random field solution u. Moreover, u is unique subject to the condition that sup_{(t,x)∈[0,T]×R³} E(|u(t,x)|^k) < ∞ for all T ∈ (0,∞) and k ∈ [2,∞). According to a general form of Doob's separability theorem [19, Theorem 2.2.1, Chapter 5], we may, and will, tacitly assume without loss of generality that the 4-parameter process u is separable.
Dalang [6] has observed that Condition (1.2) is also necessary in the case that σ is identically a constant.
Recall that the oscillation function of a function ψ : R³ → R is defined as

Osc_ψ(x) := inf_{ε>0} sup_{y,y′∈B(x,ε)} |ψ(y) − ψ(y′)|, where B(x,ε) := {y ∈ R³ : ‖y − x‖ < ε} for all x ∈ R³ and ε > 0. (1.3)

The main results of this paper are the following two theorems. In one form or another, they show the existence of models of (1.1) that can have unbounded oscillations everywhere. This holds despite the fact that u(t,x) is a finite random variable at every non-random space-time point (t,x) ∈ (0,∞) × R³.
This sort of extremely bad behavior of SPDEs has been observed earlier only for simpler, constant-coefficient SPDEs [8,9,12] and/or exactly-solvable ones [25,Theorem 1.2] that are forced by "very wild," non-Gaussian noise terms. We believe that the methods of the present paper are novel, in addition to being general enough to include a variety of nonlinear SPDEs that are driven by Gaussian white-noise forcing terms. For a non-trivial variation of Theorem 2, see Theorem 3 below.
Before we describe that variation, we first would like to explain why we consider equations on R + × R n only when n = 3: Spatial dimension three is the smallest dimension in which we know how to establish the blowup results of Theorem 2 and the next theorem.
Theorems 1, 2, and 3 together imply that there are models of f that satisfy (1.2) such that, for every fixed t > 0, the random function u(t) : R³ → R has discontinuities of the second kind. These theorems, particularly Theorem 3, fall short of establishing the following conjectures. The methods of this paper are efficient enough to prove Conjecture 1 provided that the answer to the following is "yes":

Open Problem. Under the hypotheses of Theorem 2, is it true that P{u(t,x) > 0 for all rational t ≥ 0 and all x ∈ Q³} = 1?
The only strict positivity type of theorem for SPDEs that we are aware of is the celebrated result of Mueller [24]; see also [23, pp. 134-135]. But that result, and its proof, rely crucially on the a priori Hölder continuity of the solution. This is a luxury that we do not have in the present setting, as is corroborated by Theorems 2 and 3. The best-known result, along these lines, is the following consequence of Corollary 1.2 of Chen and Huang [2].
The remainder of this paper is devoted to proving Theorem 2. At the end of the paper, we have also included a paragraph which outlines how one can prove Theorem 3 from Theorem 2. In anticipation of those arguments let us conclude the Introduction by introducing more notation that will be used throughout the paper.
Throughout, let p_t(x) = p(t,x) denote the heat kernel in R³; that is,

p_t(x) := (2πt)^{−3/2} exp(−‖x‖²/(2t)) for all x ∈ R³ and t > 0. (1.5)

In particular, p_t does not refer to the time derivative of the heat kernel, but to the heat kernel itself. We will use the following shorthand notation. For any two functions A, B : R → R, where R is a topological space:
• A(r) ∼ B(r) as r → r₀ means lim_{r→r₀}(A(r)/B(r)) = 1;
• A(r) ≲ B(r) for all r ∈ R means that there exists a finite constant c > 1 such that A(r) ≤ cB(r) for all r ∈ R;
• A(r) ≍ B(r) for all r ∈ R means that B(r) ≲ A(r) ≲ B(r) for all r ∈ R.
Finally, let us recall that by a "solution" u to (1.1) we mean a "mild solution." That is: (i) u is a predictable random field with respect to the Brownian filtration generated by the cylindrical Brownian motion defined by B_t(φ) := ∫_{[0,t]×R³} φ(y) η(ds dy), for all t ≥ 0 and measurable φ : R³ → R for which the integral is defined, where the stochastic integral is understood in the sense of Dalang [6] and Walsh [30]; and (ii) u satisfies the integral equation

u(t,x) = 1 + ∫_{(0,t)×R³} p_{t−s}(x − y) σ(u(s,y)) η(ds dy). (1.6)

It might help to recall also that

E[B_s(φ₁) B_t(φ₂)] = (s ∧ t) ∫∫_{R³×R³} φ₁(y) φ₂(y′) f(y − y′) dy dy′

for all s, t ≥ 0 and measurable φ₁, φ₂ : R³ → R for which the right-hand side is finite.
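The explicit Gaussian form of p_t in (1.5) can be checked numerically. The sketch below (our own helper names, not the paper's) integrates p_t over R³ in spherical coordinates and recovers total mass one.

```python
import math

def heat_kernel_3d(t, r):
    """p_t(x) for ||x|| = r in R^3, as in (1.5)."""
    return (2 * math.pi * t) ** -1.5 * math.exp(-r * r / (2 * t))

def total_mass(t, rmax=20.0, n=200000):
    """Integrate p_t over R^3 in spherical coordinates:
    int_0^infinity 4 pi r^2 p_t(r) dr, via a Riemann sum."""
    h = rmax / n
    return sum(4 * math.pi * (i * h) ** 2 * heat_kernel_3d(t, i * h) * h
               for i in range(1, n + 1))
```

The factor 4πr² is the surface area of the sphere of radius r; truncating at rmax = 20 is harmless for t of order one because the Gaussian tail is negligible there.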

Some classical function theory
Recall that a function f : R³ → (0,∞) is said to be a correlation function if f is locally integrable with a nonnegative Fourier transform f̂. The main goal of this section is to establish the following quantitative variation on a certain form of Wiener's tauberian theorem; it will be used to show that there are many "bad" correlation functions on R³. Throughout, B(r) denotes the centered ball of radius r about the origin; that is, B(r) := {x ∈ R³ : ‖x‖ < r}.

Theorem 5. For every α > 1 there are correlation functions f : R³ → (0,∞) such that: 1. 0 < f̂ ≤ 1; 2. f is uniformly continuous on R³ \ B(r) for every r > 0; 3. there exists a nonincreasing function ϕ : R₊ → R₊ that controls f in the manner described below.

We will fix the notation of Theorem 5 for both f and ϕ from now on. Interestingly enough, we are aware of only one proof. Though most of that proof can be translated into the language of classical function theory (specifically, the theory of Bernstein functions, as described in Schilling, Song, and Vondraček [28]), our proof is decidedly probabilistic at a key point. The reason is that, thus far, the unimodality result (2.6) below only has a probabilistic derivation, as it depends crucially on the strong Markov property; see Khoshnevisan and Xiao [21, Lemma 4.1] for details.
From now on, α > 1 is held fixed. We follow an idea of Foondun and Khoshnevisan [12], and first define an absolutely continuous Borel measure ν on (0,∞) whose Radon–Nikodym density at r blows up a little more slowly than r^{−3/2} as r ↓ 0. The structure theory of Lévy processes (see Sato [27, Chapter 4]) tells us that ν is the Lévy measure of a subordinator T; let Φ denote the Laplace exponent of T, so that E exp(−λT_t) = exp(−tΦ(λ)) with Φ(λ) = ∫₀^∞ (1 − e^{−λs}) ν(ds). The following lemma describes the asymptotic behavior of Φ near infinity.
Proof. Thanks to scaling and a simple application of the dominated convergence theorem, the asserted asymptotic behavior emerges as λ → ∞. Now write 1 − e^{−s} = ∫₀^s e^{−r} dr, plug this into the preceding integral, and apply Tonelli's theorem in order to deduce the lemma.
Next let U denote the 1-potential measure of the subordinator T; that is,

U(A) := ∫₀^∞ e^{−t} P{T_t ∈ A} dt for all Borel sets A ⊆ R₊. (2.3)

Evidently, U is a Radon probability measure on R₊. We refer to this property of U many times in the sequel, and will sometimes even do so tacitly. One can write (2.3) equivalently as follows:

∫ g dU = E[g(T_S)] for all bounded Borel functions g : R₊ → R,

where S denotes an independent random variable with an exponential, mean-one distribution.
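The representation ∫g dU = E g(T_S) pins U down through Laplace transforms: taking g(s) = e^{−λs} gives ∫e^{−λs} U(ds) = 1/(1 + Φ(λ)). The sketch below checks this identity by Monte Carlo, using a Gamma subordinator as a stand-in (for which Φ(λ) = log(1 + λ)); the paper's subordinator T is a different process, and all helper names here are illustrative.

```python
import math
import random

def sample_T_S(rng):
    """One sample of T_S for a Gamma subordinator: T_t ~ Gamma(shape=t, scale=1),
    evaluated at an independent Exp(1) time S.  A stand-in only; the paper's T
    has a different Levy measure."""
    S = rng.expovariate(1.0)
    return rng.gammavariate(S, 1.0)

def laplace_of_U(lam, n=200000, seed=7):
    """Monte Carlo estimate of int exp(-lam * s) U(ds) = E exp(-lam * T_S)."""
    rng = random.Random(seed)
    return sum(math.exp(-lam * sample_T_S(rng)) for _ in range(n)) / n
```

For the Gamma stand-in, the exact answer at λ = 1 is 1/(1 + log 2), and the Monte Carlo estimate should agree to within a few standard errors.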
The following estimates the U-measure of a small interval about the origin.
Proof. The Laplace transform of U can be computed easily, thanks to several applications of the Fubini–Tonelli theorem: for all λ > 0,

∫_{R₊} e^{−λs} U(ds) = ∫₀^∞ e^{−t} E e^{−λT_t} dt = 1/(1 + Φ(λ)).

Therefore, Lemma 2.1 ensures that

1/(1 + Φ(λ)) ≲ λ^{−1/2}(log λ)^{−α} as λ → ∞. (2.5)

Now we apply a standard abelian argument: because 1_{[0,1/λ)}(s) ≤ e·e^{−λs} for all s ≥ 0, we can deduce from (2.5) that U[0,1/λ) ≲ λ^{−1/2}(log λ)^{−α} for all λ > e. In order to obtain the remaining converse bound, let us first recall that U is "4-weakly unimodal" in the sense that

U(B(x,r)) ≤ 4 U(B(0,r)) for every x ∈ R and r > 0; (2.6)

see Khoshnevisan and Xiao [21, Lemma 4.1]. Consequently, a second appeal to (2.5) completes the proof; to finish we simply set λ := 1/ε.
Proof of Theorem 5. Let T denote the subordinator that we just constructed via Lemma 2.2, and let W := {W(t)}_{t≥0} be an independent standard Brownian motion in R³. Then X := {W(T_t)}_{t≥0} is an isotropic Lévy process in R³. We can see, by first conditioning on T_t, that the characteristic function of X is given by

E exp(iz·X_t) = E exp(−T_t‖z‖²/2) = exp(−tΦ(‖z‖²/2)) for all t ≥ 0 and z ∈ R³. (2.4)
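The subordination formula (2.4) can be verified by simulation. Again we use a Gamma subordinator as a stand-in for T (so Φ(λ) = log(1 + λ)); by isotropy only ‖z‖ matters, so it suffices to take z along one coordinate axis. The helper names below are ours.

```python
import math
import random

def char_fn_mc(t, z_norm, n=200000, seed=11):
    """Monte Carlo estimate of E cos(z . W(T_t)) for a Gamma subordinator
    stand-in (Phi(lam) = log(1 + lam); the paper's T differs).  Given T_t,
    the projection z . W(T_t) is N(0, T_t * ||z||^2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        Tt = rng.gammavariate(t, 1.0)        # T_t ~ Gamma(t, 1)
        w1 = rng.gauss(0.0, math.sqrt(Tt))   # first coordinate of W(T_t)
        acc += math.cos(z_norm * w1)
    return acc / n

def char_fn_exact(t, z_norm):
    """exp(-t * Phi(||z||^2 / 2)) with Phi(lam) = log(1 + lam)."""
    return math.exp(-t * math.log(1.0 + z_norm ** 2 / 2.0))
```

For t = 1 and ‖z‖ = 1 the exact value is (1 + 1/2)^{−1} = 2/3, which the simulation reproduces to within Monte Carlo error.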
Recall that the heat kernel p_s(x), defined in (1.5), is the probability density of W(s) at x ∈ R³ for every s > 0. Therefore, for every measurable function ψ : R³ → R₊,

E ψ(W(T_S)) = ∫_{R³} ψ(x) f(x) dx, where f(x) := ∫_{(0,∞)} p_s(x) U(ds). (2.7)

This is the function f that was announced in Theorem 5.
Clearly, f > 0 on R³. Also, Fubini's theorem and (2.4) together imply that the Fourier transform of f is

f̂(z) = ∫_{(0,∞)} e^{−s‖z‖²/2} U(ds) = 1/(1 + Φ(‖z‖²/2)) for all z ∈ R³.

Among other things, this calculation shows that: (a) 0 < f̂ ≤ 1; and in particular, (b) f is positive semi-definite. It follows that f is a correlation function, and Part 1 of the theorem is proved.
Since U(R₊) = 1, and because (s,x) ↦ p_s(x) is bounded uniformly on (0,∞) × [R³ \ B(r)] for every r > 0, the continuity of x ↦ p_s(x) and the dominated convergence theorem together prove that f is uniformly continuous on R³ \ B(r) for every r > 0, whence follows part 2 of the theorem.
Part 3 follows immediately from (2.7) and the isotropy and monotonicity properties of the heat kernel.
In order to verify part 4 of the theorem, we decompose f into a near-origin part f₁ and a remainder. We have used the weak unimodality [see (2.6)] of U in order to deduce the second line from the first. Therefore, it remains to prove that f₁(x) ≍ ‖x‖^{−2}[log(1/‖x‖)]^{−α} as long as ‖x‖ ≤ e^{−1}.
Lemma 2.2 implies the existence of two finite and positive constants a and b such that the corresponding two-sided bounds hold uniformly for every ε ∈ (0, e^{−1}). Consequently, as long as we choose a large enough constant K > 0, the desired estimate holds uniformly when ‖x‖² < K^{−1}e^{−1}. Since ϕ is monotone, the preceding also holds when K^{−1}e^{−1} ≤ ‖x‖² ≤ e^{−1}. Similarly, we can decompose the remaining contribution as a sum over {n ∈ Z : e^{−n} ≤ ‖x‖² ≤ e^{−1}} of terms of the order exp(−e^n/2)·[log(e^n/‖x‖²)]^{−α}.
This readily yields the complementary bound to (2.8) and completes the proof of part 4. Part 5 was proved in Foondun and Khoshnevisan [12]; see the argument that led to Theorem 3.14 therein.
In order to complete the proof we verify part 6 of the theorem. Let {R_λ}_{λ>0} denote the resolvent of the heat semigroup on R³, run at twice the standard speed; that is,

(R_λ h)(x) := ∫₀^∞ e^{−λt}(p_{2t} * h)(x) dt. (2.9)

Condition (1.2) is equivalent to the condition that (R₁f)(0) < ∞. Therefore, it remains to prove that (R₁f)(0) is finite. This is a well-known calculation about the Newtonian potential in dimension three; the computations will be carried out here for the sake of completeness. One can integrate in spherical coordinates to see that the claim follows once one proves that ∫_{R³} ‖x‖^{−1} f(x) dx < ∞; that is, once we prove that f has finite Newtonian potential. Note that

∫_{R³} ‖x‖^{−1} p_s(x) dx ≍ s^{−1/2}, uniformly for all s > 0.

It follows from the definition (2.7) of f that the Newtonian potential of f satisfies ∫_{R³} ‖x‖^{−1} f(x) dx ≍ ∫₀^∞ s^{−1/2} U(ds). It remains to prove that ∫₀¹ s^{−1/2} U(ds) is finite; this endeavor will complete the proof since U is a probability measure. To see this, apply Lemma 2.2:

∫₀¹ s^{−1/2} U(ds) ≲ Σ_{n≥0} 2^{(n+1)/2} U[0, 2^{−n}) ≲ 1 + Σ_{n≥2} n^{−α} < ∞,

because α > 1.
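The scaling fact used above, ∫_{R³}‖x‖^{−1} p_s(x) dx ≍ s^{−1/2}, is easy to confirm numerically; for the Gaussian kernel the exact constant is √(2/π). The quadrature sketch below, with our own helper names, does the computation in spherical coordinates.

```python
import math

def newtonian_mass(s, rmax=40.0, n=200000):
    """int_{R^3} ||x||^{-1} p_s(x) dx in spherical coordinates:
    int_0^infinity 4 pi r^2 * (1/r) * p_s(r) dr.  Exact value: sqrt(2/(pi*s)),
    i.e. a fixed constant times s^{-1/2}."""
    h = rmax / n
    c = (2 * math.pi * s) ** -1.5
    return sum(4 * math.pi * (i * h) * c * math.exp(-(i * h) ** 2 / (2 * s)) * h
               for i in range(1, n + 1))
```

Doubling s four-fold halves the integral, which is exactly the s^{−1/2} scaling the proof of part 6 relies on.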
From now on, we restrict attention to a noise model for η that corresponds to a spatial correlation function f : R 3 → R + that satisfies properties 1-5 of Theorem 5; the choice of f is otherwise arbitrary.

Existence, uniqueness, and moments
The following result follows from Theorems 1 and 5 (part 6).
Lemma 3.1. The stochastic partial differential equation (1.1), subject to u(0) ≡ 1, admits a predictable random field solution u. Moreover, u is unique, up to a modification, subject to the condition that sup_{(t,x)∈[0,T]×R³} E(|u(t,x)|^k) < ∞ for all T ∈ (0,∞) and k ∈ [2,∞).

Lemma 3.1 follows from Theorems 1 and 5 together with the fact that ∫₀^∞ rϕ(r) dr < ∞. This sort of integrability condition for r ↦ rϕ(r) arose earlier in the context of hyperbolic SPDEs; see Dalang and Frangos [7].
Next we produce moment estimates for the solution to (1.1), remembering throughout that the spatial correlation function f of η satisfies the properties listed in Theorem 5, and that α > 1 is the underlying parameter that was used in the construction of f.

Theorem 6. Let u denote the solution to (1.1), and recall that σ is Lipschitz continuous and non-random. Then there exists a finite constant A > 0 such that the upper bound (3.1) holds uniformly in x ∈ R³, k ∈ [2,∞), and t > 0. For a complementary bound, suppose that σ(z) = z for all z ∈ R. Then, in that case, there exists a finite constant A₁ > 0 such that the lower bound (3.2) holds uniformly for all x ∈ R³ and all integers k ≥ 2 and t > 0.
The lower bound (3.2) is included here mainly because it shows that, for spatial correlation functions f of the type studied here, the solution to (1.1) is "extremely intermittent." One way to say this is as follows: consider the [lower] moment Lyapunov exponents,

γ(k) := lim inf_{t→∞} t^{−1} inf_{x∈R³} log E(|u(t,x)|^k) for k ≥ 2.

Then (3.2) proves that lim inf_{k→∞} k^{−1/α} log γ(k) > 0. In other words, the moment Lyapunov exponents grow extremely rapidly with the moment number. For usual choices of the spatial correlation function f, log γ(k) grows as log k, whereas it grows as k^{1/α} here. This sort of extreme intermittency provides a certain amount of evidence toward the truth of Conjecture 2, though it certainly does not prove Conjecture 2.
In order to prove the upper bound (3.1) we will use a general result of Foondun and Khoshnevisan [12,Theorem 1.3]. For the lower bound (3.2) we first use a Feynman-Kac type moment formula to represent the solution, and then reduce the problem to a small-ball estimate for three-dimensional Brownian motion.
The proof of the upper bound requires two technical lemmas which we develop next.
Proof. We find it more convenient to work with p_t rather than p_{2t}; a change of variables [2t → t] will adjust the constants for correct later use. We integrate in spherical coordinates, splitting the resulting expression into two quantities T₁ and T₂, both functions of the time variable t, which we suppress. The second quantity T₂ is bounded uniformly in t; in fact, this follows from the monotonicity of ϕ (see Theorem 5). Therefore, it remains to prove that T₁ ≍ t^{−1}|log t|^{−α} for all t ∈ (0, t₀), where t₀ > 0 is a sufficiently small constant. Choose and fix a constant K > 2e. Thanks to part 4 of Theorem 5, we can write T₁ as the sum of two terms, T₁,₁ and T₁,₂, and estimate each separately. In this way we prove the required upper bound on T₁ for small values of t > 0. The other bound is even simpler to establish, using the corresponding lower bounds for all sufficiently small values of t.
Let R := {R_λ}_{λ>0} denote the resolvent of the Laplace operator ½Δ; compare with (2.9). We can write R in terms of the heat kernel of Brownian motion as

(R_λ h)(x) := ∫₀^∞ e^{−λs}(p_s * h)(x) ds for all x ∈ R³, λ > 0, and Borel functions h : R³ → R₊.

Proof. First, let us observe that (R_λ f)(0) < ∞ for all λ > 0, for the same sort of reason that showed that (R₁f)(0) < ∞; see the proof of Lemma 3.1. Next, two quantities T₁ and T₂ are defined, and we estimate them separately. A change of variables yields the estimate for T₁, and therefore Lemma 3.4 ensures the corresponding bound. By the semigroup property of the heat kernel, p_{t+s} * f = p_s * (p_t * f). Since p_t * f is a continuous, positive semi-definite function, it is maximized at the origin. Therefore, we can deduce from the preceding display that t ↦ (p_t * f)(0) is nonincreasing. In particular, the required estimate for T₂ holds, thanks to Lemma 3.4. This and the estimate for T₁ together imply that T₁ = o(T₂) as λ → ∞, which completes the proof.
Proof of Theorem 6. First we prove the claimed upper bound on the moments of u(t,x). According to Lemma 3.5, the relevant resolvent bound holds uniformly for all λ ≥ ek/2 and k ≥ 2. If, in addition, we relate λ to k via a sufficiently large constant C > 0, then the preceding simplifies to the following inequality:

In particular, (3.3) tells us that there exists a positive and finite constant such that the corresponding moment bound holds for all k ≥ 2. Theorem 1.3 of Foondun and Khoshnevisan [12] now shows that the asserted lim sup bound holds for all k ≥ 2. This proves an asymptotic, large-t version of the stated upper bound (3.1) of the theorem. The asserted fixed-t result holds because of the proof of Theorem 1.3 of Foondun and Khoshnevisan (ibid.); consult Lemmas 5.4 and 5.5 of that reference for details.
To prove the lower bound (3.2), let us recall the following Feynman–Kac formula for the moments of the parabolic Anderson model; see Hu and Nualart [17] and Conus [4]. Thanks to (2.2), if 0 < η ≤ exp(−e), then we can find a positive constant L such that, uniformly for all s ∈ [0,t] and 1 ≤ i ≤ k, the required lower bound on ϕ holds. It is well known, and easy to see directly, that there exists a universal positive constant c such that the relevant small-ball estimate for Brownian motion holds.

The proofs of Theorems 2 and 3 will rely on the following variation on the theme of Theorem 6.

Proposition 3.6. Suppose, in addition, that σ is bounded. Then u satisfies a Gaussian-type moment bound, uniformly in (t,x,k).

Proof. In accord with (1.6) and a suitable form of the Burkholder–Davis–Gundy inequality [18], we obtain a moment bound, which the boundedness of the function σ simplifies as follows. Since p_{2s} and f are both positive semi-definite, so is their convolution. Moreover, p_{2s} * f is manifestly continuous. Therefore, by elementary properties of continuous, positive semi-definite functions, p_{2s} * f is maximized at the origin. Therefore, Lemma 3.4 implies the required estimate. Moreover, the semigroup property of the heat kernel implies that p_{2s} = p_{2/e} * p_{2(s−(1/e))} for all s > e^{−1}, whence a uniform bound holds for all s ≥ e^{−1}. The proposition follows.

Remark 3.7. In order to highlight the efficacy of Proposition 3.6, let us consider the case that σ is a constant function; say, σ ≡ 1. In that case, u(t,x) is a mean-one Gaussian random variable whose variance can be computed by the same sort of argument as the one used in the course of the proof of Proposition 3.6. Special properties of mean-zero Gaussian distributions then imply that, when σ ≡ 1, a matching bound holds uniformly for all (t,x,k) ∈ R₊ × R³ × [2,∞). One can readily conclude from this inequality that the statement of Proposition 3.6 is, in its essence, unimprovable.

Moment bounds for the spatial and temporal increments
In this subsection we give estimates for the quantity ‖u(t,x) − u(t′,x′)‖_k when t ≈ t′ and x ≈ x′. These estimates will be used in an ensuing "local linearization argument" that will be highlighted in Proposition 5.1. Throughout this paper, we set log₊θ := log(θ ∨ e) for all θ ∈ R.
uniformly for all distinct x, x′ ∈ R³ and all real numbers k ≥ 2.
Proof. Choose and fix t > 0 and x, x′ ∈ R³. According to (1.6) and a suitable application of the BDG inequality (see [18] for details), for every real number k ≥ 2 we obtain the bound (3.5), where P_r(a) := p_r(x − a) − p_r(x′ − a) for all r > 0 and a ∈ R³, and E(s,y) := [E(|σ(u(s,y))|^k)]^{1/k}.
Since σ is bounded, there exists a finite constant B > 1 such that E(s,y) ≤ B uniformly for all t > 0, k ≥ 2, y ∈ R³, and s ∈ [0,t]; see (2.2) for the definition of ϕ. At first one might expect that the absolute values in the integral introduce additional logarithmic factors that could damage our estimates, since the left-hand side is already quite large [recall that we are trying to prove that the left-hand side is at most a negative power of the iterated logarithm of ‖x − x′‖]. Remarkably, the introduction of the absolute values turns out to be harmless. In order to prove this, we will use the elementary inequality (3.6), valid uniformly for all z ∈ R³ and s > 0; for a detailed proof see Lemma 6.4 in [5]. Now we analyze (3.5).
Throughout the remainder of this calculation, let us define z := x − x′. Theorem 5 and two back-to-back applications of (3.6) together show that the part of the integral in (3.5) taken over {y, y′ ∈ R³ : ‖y − y′‖ ≥ ‖z‖} satisfies the bound (3.7). Next we estimate the same integral as above, but with its region of integration replaced by {y, y′ ∈ R³ : ‖y − y′‖ ≤ ‖z‖}. [It might help to consult (3.5) to see why.] With this aim in mind, define, for all y ∈ R³ and every integer n ≥ 0, A_n(y) := {y′ ∈ R³ : ‖y − y′‖ ≤ 2^{−n}‖z‖}. By the monotonicity properties of ϕ [Theorem 5],

∫₀^t ds ∫∫_{y,y′∈R³: y′∈A_n(y)\A_{n+1}(y)} dy dy′ |P_s(y)P_s(y′)| ϕ(‖y − y′‖) ≤ ϕ(2^{−n−1}‖z‖) ∫₀^t ds ∫∫_{y,y′∈R³: y′∈A_n(y)} dy dy′ |P_s(y)P_s(y′)|.

The elementary properties of the heat kernel p and the inequality (3.6) together control the latter quantity, with implied constants that do not depend on (n, s, z). Summing over n ≥ 0 and applying Theorem 5 once more yields the bound (3.9), where, as before, the implied constants do not depend on (t, z). In light of (3.5), (3.7), and (3.9), we can deduce the existence of a finite constant B such that the asserted estimate holds uniformly for all 0 < t < T and k ≥ 2, where B is also independent of (x, x′). We can replace A by a possibly larger constant in order to complete the proof of the result.
Next, let us consider bounds on the temporal increments of the solution to (1.1). The main result of this subsection is the following proposition.

Proposition 3.9. Assume that σ is bounded. Then for all T ∈ (0,∞) there exists a finite constant A, depending on T, such that the asserted bound holds uniformly for all distinct t, t′ ∈ [0,T] and all real numbers k ≥ 2.

Proof. A suitable form of the BDG inequality for martingales implies the first bound; see [18]. By the boundedness of σ, and thanks to Parseval's identity, we obtain the second. Since 1 − exp(−a) ≤ min(1, a) for all a > 0, it then follows that the integral can be considered separately in two parts: where ‖w‖ ≤ 1/√h and where ‖w‖ > 1/√h. The first part is estimated directly; the second part is handled similarly. The lemma is a ready consequence of the preceding two estimates and (3.11).
In order to estimate the quantity T₂(t,h,x) [see (3.10)], we need the following lemma.

Proof. Write ṗ_r for the time derivative of the heat kernel. By the fundamental theorem of calculus, p_{s+h} − p_s = ∫_s^{s+h} ṗ_r dr. Integrate the preceding [dx] in order to obtain the claimed estimate.

Hence, a change of variables proves the lemma.

Lemma 3.12. Recall T₂(t,h,x) from (3.10). If σ is bounded, then there exists a finite constant A such that the asserted bound holds.

Proof. We begin as in the proof of the preceding lemma; namely, by observing that a suitable form of the BDG inequality for martingales implies the first estimate. By the boundedness of σ and the Cauchy–Schwarz inequality, we obtain a bound in terms of the quantity D_r^{(h)}(x), which was defined earlier in (3.12); see the derivation of (3.11). Denote the triple integral in (3.13) by I; it can be decomposed as I = I₁ + I₂. Next, I₁ and I₂ are estimated separately, and in reverse order. Theorem 5 and Lemma 3.11 together imply the bound (3.14) on I₂. In order to estimate I₁, one can use the trivial inequality D_s^{(h)}(y′) ≤ p_{s+h}(y′) + p_s(y′). Because p_r(z) ≲ r^{−3/2} for all z ∈ R³ and r > 0, and since p_r is a probability density, one can estimate the dy′-integral in the preceding display accordingly. Now apply Theorem 5 to conclude. The lemma follows from this and (3.14).

The constant-coefficient case
So far, everything that we considered held for any α > 1. From now on, we restrict the choice of the spatial correlation function f further by assuming that f comes from Theorem 5 in the special case that

1 < α < 2. (4.1)

This assumption will be in place throughout the remainder of this paper, and will sometimes be used without mention. In this section we study the [constant-coefficient] linearization of (1.1). That is, we consider the stochastic partial differential equation

∂_t Z = ½ΔZ + η on (0,∞) × R³,

subject to Z(0) ≡ 1. As is well known, the solution is given by

Z(t,x) = 1 + ∫_{(0,t)×R³} p_{t−s}(x − y) η(ds dy),

and the preceding Wiener integral has a finite variance. This can be seen from an application of Lemma 3.1 with σ ≡ 1.
Recall the function ϕ from (2.2). The elementary properties of Wiener integrals show that Z − 1 is a centered Gaussian random field whose covariance can be expressed in terms of ϕ, for all t > 0 and x, x′ ∈ R³. In particular, it follows readily that Z(t) is a stationary Gaussian random field, indexed by R³, for every fixed t > 0.
The following is the main result of this section.
Proposition 4.1. For every real number t > 0 there exists a finite K > 1 such that the stated estimate holds.

Let us make a few remarks before we prove Proposition 4.1. First, we record the following ready corollary of Proposition 4.1, the stationarity of x ↦ Z(t,x), and the restriction (4.1) on α. Therefore, Fubini's theorem yields the following.
Proposition 4.4 implies that x → Z(t , x) is continuous in L 2 (P), and hence in L p (P) since the L p (P) norm of a Gaussian random variable is controlled by its L 2 (P) norm. In particular, Doob's regularity theory implies that x → Z(t , x) has a separable, in fact, Lebesgue measurable, version; see Chapter 5 of Khoshnevisan [19]. After we establish Proposition 4.4 we always tacitly refer to that separable version.
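The fact, used above, that every L^p(P) norm of a Gaussian random variable is controlled by its L²(P) norm can be made explicit: if X ~ N(0,σ²), then ‖X‖_p = σ·(E|Z|^p)^{1/p}, where Z is standard normal and E|Z|^p = 2^{p/2} Γ((p+1)/2)/√π. A small sketch of that absolute-moment formula (our own helper name):

```python
import math

def gaussian_abs_moment(p):
    """E|Z|^p for Z ~ N(0,1): 2^{p/2} * Gamma((p+1)/2) / sqrt(pi).
    Consequently ||X||_p = sigma * gaussian_abs_moment(p)**(1/p) for
    X ~ N(0, sigma^2): every L^p norm is a fixed multiple of the L^2 norm."""
    return 2.0 ** (p / 2.0) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)
```

The closed form recovers the familiar values E Z² = 1, E|Z| = √(2/π), and E Z⁴ = 3.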

Proof of Proposition 4.4. By Parseval's identity,
Therefore, an appeal to Lemma 4.1 of Foondun and Khoshnevisan [12] yields Thanks to Theorem 5, Integrate in spherical coordinates to find that (4.5) Similar computations yield the following: (4.6) and uniformly for all x, x ′ ∈ R 3 .
For every d-compact set A ⊂ R³, let N_A(·) denote the metric entropy of A in the metric d; that is, for every ε > 0, the quantity N_A(ε) denotes the minimum number of d-balls of radius ε that are required to cover A. We have noted already that x ↦ Z(t,x) is a stationary Gaussian process. Therefore, the theories of Dudley [10] and Fernique [11] (for a pedagogic account see Marcus and Rosen [22]) together imply the entropy bound (4.10). It is a noteworthy observation that the topology induced by the metric d is Euclidean, thanks to (4.9). Therefore, (4.10) holds for every compact set A ⊂ R³.
We wish to apply the preceding to a finite mesh of points of the form i/N with i ∈ {0,…,N}³. According to (4.9), diam(A) ≍ 1 uniformly for all integers N ≥ 1. Moreover, the distances d(i/N, j/N) can be controlled for all i, j ∈ {0,…,N}³ and N ≥ 1. In particular, there exists a finite constant K > 1 such that the entropy estimate holds for all ε > 0. Let δ(N) denote the smallest positive value of d(i/N, j/N) over distinct points of the mesh. The preceding remarks, together with (4.10), imply a bound that is uniform for all N ≥ 1. Finally, we apply Borell's inequality in order to obtain a Gaussian concentration estimate, valid for all z > 0; see Borell [1] and Sudakov and Tsirel'son [29]. This, (4.11), and Remark 3.7 together yield the proposition, after we make a judicious choice of z.
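The Dudley–Fernique and Borell machinery invoked above controls maxima of Gaussian processes at the scale √(log N). A hedged Monte Carlo sketch (our own helper name; independent maxima rather than the correlated field Z) illustrates the classical bound E max_{i≤N} Z_i ≤ √(2 log N) for iid standard Gaussians:

```python
import math
import random

def mean_max_gaussian(N, trials=2000, seed=3):
    """Monte Carlo estimate of E max_{i<=N} Z_i for iid standard Gaussians.
    Entropy bounds control such expectations by sqrt(2 log N); Borell's
    inequality then gives concentration of the maximum around its mean."""
    rng = random.Random(seed)
    return sum(max(rng.gauss(0.0, 1.0) for _ in range(N))
               for _ in range(trials)) / trials
```

For N = 1000 the estimate lands near 3.2, safely below the bound √(2 log 1000) ≈ 3.72, and well above half of it, which is the kind of two-sided √(log N) behavior exploited in the proof.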

Local linearization
For every space-time function φ : R₊ × R³ → R, and for every ε ∈ (0,∞)³, define

(∇_ε φ)(t,x) := φ(t, x + ε) − φ(t,x).

In other words, ∇_ε is a sort of discrete spatial gradient operator on a mesh of size ‖ε‖. In particular, note that, for all ε ∈ (0,∞)³ and x ∈ R³,

(∇_ε p)(t,x) = p_t(x + ε) − p_t(x),

where p_t(·) is the heat kernel, as was defined in (1.5).
We may also observe that ∇_ε φ makes sense equally well when φ depends only on a spatial variable: whenever x ↦ φ(x) is a function on R³, (∇_ε φ)(x) := φ(x + ε) − φ(x). In the next section we show that, under some additional assumptions on σ, the solution to (1.1) can be discontinuous at any given space-time point. The idea is that, in a strong sense,

(∇_ε u)(t,x) ≈ σ(u(t,x))(∇_ε Z)(t,x) whenever ‖ε‖ ≈ 0, (5.1)

for all t > 0 and x ∈ R³. Hence, the global discontinuity of Z(t,x) [see Corollaries 4.2 and 4.3] will force the local discontinuity of u(t,x), as long as σ(u(t,x)) is not too small. As part of this work, it will be shown that the error in the approximation (5.1) to ∇_ε u does not affect the discontinuity of the term σ(u) × ∇_ε Z. The following result makes this assertion more precise.
The proof is somewhat long, and will be presented shortly. But first, let us make a few remarks on the content of Proposition 5.1.
According to Proposition 3.8, for every k ≥ 2 and T > 0, a bound on ‖(∇_ε u)(t,x)‖_k holds uniformly for all (t,x) ∈ [0,T] × R³ and all sufficiently small ε > 0. This very inequality can be applied with k replaced by 2k and σ by the constant function 1 in order to yield the analogous bound on ‖(∇_ε Z)(t,x)‖_{2k} for all k ≥ 2 and T > 0, uniformly for all (t,x) ∈ [0,T] × R³ and ε ∈ R³ \ {0} such that ‖ε‖ is sufficiently small. Theorem 6 and the Lipschitz continuity of σ together imply that ‖σ(u(t,x))‖_{2k} is bounded, for every k ≥ 2 and T > 0, uniformly over all (t,x) ∈ [0,T] × R³. One can conclude from this discussion, and the Cauchy–Schwarz inequality, that a corresponding bound on ‖σ(u(t,x))(∇_ε Z)(t,x)‖_k holds for all k ≥ 2 and T > 0, uniformly for all (t,x) ∈ [0,T] × R³ and all sufficiently small ε > 0. If this bound were proved to be sharp [it can be, in some cases], then Proposition 5.1 would be telling us that, although A₁ := (∇_ε u)(t,x) and A₂ := σ(u(t,x))(∇_ε Z)(t,x) are both quite small in L^k(P)-norm, their difference A₁ − A₂ is smaller still. This is a quantitative way of saying that the locally-linearized form A₂ is a very good approximation to the discrete gradient A₁ of the solution to (1.1). This general idea has recently played various roles in SPDEs; see, for example, Hairer [14,15] and Hairer and Pardoux [16], where this sort of local linearization is sometimes referred to as a "jet expansion," and Foondun, Khoshnevisan, and Mahboubi [13] and Khoshnevisan, Swanson, Xiao, and Zhang [20], where it is used to analyse the local structure of the solution to parabolic SPDEs that are much nicer than those that appear here. Let us conclude this section with the following.
Proof of Proposition 5.1. Let us first introduce some notation. For every ε > 0, let β_ε and γ_ε be defined as in (5.2). As notational advice, let us point out that here and throughout, ε > 0 denotes a typically small scalar and should not be confused with ε ∈ (0,∞)³, which is a 3-vector that typically has small norm. For all t, ε > 0 and x ∈ R³, define B(x,t,ε) to be a suitably-chosen, 4-dimensional space-time box with "center" (t,x).
Let us consider the following decomposition of ∇_ε u, valid thanks to (1.6): (∇_ε u)(t,x) = I₁₁ + I₁₂ + I₂₁ + I₂₂, where each I_{ij} = I_{ij}(t,x,ε), i, j = 1, 2, is a stochastic integral against η(ds dy). The L^k(P)-norms of the I_{ij} are estimated next. The computations are somewhat long and tedious; therefore, they are presented in five separate steps.
Step 1. A comparison estimate for I₁₁. In this step of the proof we establish an inequality that compares the moments of the random variable I₁₁ to moments of a certain mean-zero Gaussian random variable; see (5.5) below. A suitable formulation of the Burkholder–Davis–Gundy inequality [18] implies a bound in which D(r,a,a′) := |(∇_ε p)(r,a)(∇_ε p)(r,a′)|, and in which the triple integrals are computed over the region B(x,t,ε). If X is a random variable with the standard normal distribution, then ‖X‖_k ≍ √k uniformly for all k ≥ 2. This fact and the previous inequality for ‖I₁₁‖²_k together yield (5.5), valid uniformly for all (t,x,k) ∈ [0,T] × R³ × [2,∞) and ε > 0, where B(x,t,ε) was defined in (5.3).
According to (5.2), β_ε > ε² for all sufficiently small ε > 0. Therefore, (3.6) and the monotonicity of ϕ [Theorem 5] together imply the bound (5.7). One can estimate Q₁₂ using the same technique that was used in the proof of Proposition 3.8: for all sufficiently small ε > 0, an appeal to Theorem 5 and (5.2), together with an integral test, yields the desired estimate. This inequality and (5.7) together yield a bound that is valid uniformly for all (t,x) ∈ R₊ × R³ and all sufficiently small ε > 0. In order to bound Q₂, we change variables and then use a simple pointwise inequality to obtain the relevant estimate for all sufficiently small ε. Because p_s and f are both positive semi-definite, so is p_s * f. Moreover, p_s * f is continuous and bounded. Therefore, elementary facts about positive semi-definite functions tell us that p_s * f is maximized at the origin. In this way we find a bound that reflects the definition (5.2) of γ_ε and β_ε. It is easy to deduce from the preceding that lim sup_{ε↓0} √β_ε · log Q₂ ≤ −½, and hence, for every κ > 0, Q₂ ≲ [log(1/ε)]^{−κ} uniformly for all sufficiently small ε > 0 [with room to spare]. This inequality, (5.6), and (5.8) together accomplish the main objective of Step 3; namely, they imply that there exists ε₀ ∈ (0,1) such that the claimed bound holds, for some constant A_T > 0, uniformly for all (t,x,k) ∈ [0,T] × R³ × [2,∞) and ε ∈ (0,ε₀).
Step 4. Estimates for I 21 and I 22 . In this step we derive bounds for the moments of I 21 and I 22 ; the end results are (5.14) and (5.15) below.
Let us recall the sets B 2 (x , t , ε) from (5.4). A suitable application of the Burkholder-Davis-Gundy inequality produces a bound in terms of two functions P and U of (s , w) ∈ B(x , t , ε). Of course, the functions P and U depend also on the variables (t , x , ε), but this dependency is not immediately relevant to the discussion. Because of the Lipschitz condition on σ, Propositions 3.8 and 3.9 together imply that, when ε is sufficiently small, the stated estimates hold uniformly for all (s , w) ∈ B(x , t , ε), where 1/∞ := 0 to account for the possibilities s = t − β ε and w = x. Consequently, (5.13) yields a chain of inequalities for the triple integral of P(s , y)P(s , y ′ )f (y − y ′ ) dy dy ′ ds; the last line follows from (3.7) and (3.9) [simply apply the latter two inequalities with z = ε, for instance]. This readily yields the following, with room to spare: there exist finite and positive constants A and ε 1 < 1 such that (5.14) holds uniformly for all (t , x , k) ∈ R + × R 3 × [2 , ∞) and ε ∈ (0 , ε 1 ). Finally, we obtain a bound for ∥I 22 ∥ k from (5.14), using the Cauchy-Schwarz inequality in the same way that I 12 was derived from I 11 earlier, in order to obtain (5.15), valid uniformly for all (t , x , k) ∈ [0, T ] × R 3 × [2 , ∞) and ε ∈ (0 , ε 1 ).
Step 5. Conclusion of the proof. The proposition follows from an application of Minkowski's inequality, using the results (5.10) through (5.15) of the preceding steps.

Proof of Theorems 2 and 3
With the groundwork now laid, we are ready to prove the main results of the paper, which we do in order.
Next we observe that one can reduce the scope of the problem to the case that σ(z) ≥ 0 for all z ∈ R without incurring any loss in generality. This is because σ is continuous and vanishes at, and only at, the origin. A second appeal to the assumption σ −1 {0} = {0} reduces the problem to proving the following: P { sup |y−x|<r u(t , y) = ∞ for all r > 0 | σ(u(t , x)) > 0 } = 1. (6.2) Owing to (6.1), we can find two finite numbers 0 < A < B with the stated property. We plan to prove that (6.4) below holds for every such pair (A , B) of real numbers that satisfy (6.3). This will do the job since we may let A ↓ 0 and B ↑ ∞, using Doob's martingale convergence theorem, to finish the proof. Thus, it remains to prove (6.4). For the remainder of the proof let us choose and fix an arbitrary space-time point (t , x) ∈ (0 , ∞) × R 3 for which we plan to verify (6.4). Also, let us choose an arbitrary real number δ > 0. For every real number M > 0, we bound the probability in question by P 1 (δ) + P 2 (δ), where the definition of (P 1 , P 2 ) is clear from context, and D t (x , y) := u(t , y) − u(t , x) − σ(u(t , x)) [Z(t , y) − Z(t , x)] , (6.5) for every y ∈ R 3 , where the mean-zero Gaussian random field Z is, as before, the solution to (1.1) with σ ≡ 1. We are going to prove that P 1 (δ) and P 2 (δ) both tend to zero as δ ↓ 0.
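To clarify the role of the decomposition (6.5), here is a brief expansion of the local-linearization heuristic mentioned in the abstract (our own gloss, not verbatim from the paper):

```latex
% Rearranging (6.5),
\[
  u(t,y) \;=\; u(t,x) \;+\; \sigma(u(t,x))\,\bigl[Z(t,y)-Z(t,x)\bigr]
          \;+\; D_t(x,y),
\]
% so on the event $\{\sigma(u(t,x))>0\}$, wherever the remainder
% $D_t(x,\cdot)$ is negligible near $x$, the oscillations of
% $y \mapsto u(t,y)$ are governed by those of the Gaussian field
% $y \mapsto Z(t,y)$, scaled by $\sigma(u(t,x))$. Proving that
% $P_1(\delta)$ and $P_2(\delta)$ vanish as $\delta \downarrow 0$
% is what makes this local linearization precise.
```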
Since M > 0 is arbitrary, this will complete the proof. Consider the Gaussian process in question. As was demonstrated in Section 4, the canonical distance d imposed on R 3 by Z satisfies the requisite estimate for every i, j ∈ Π(δ). We plan to apply a metric entropy argument in order to estimate the quantity on the left-hand side of (6.7) below.
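The entropy argument ultimately controls the maximum of the finitely many Gaussian increments indexed by the net Π(δ), and rests on the elementary fact that the maximum of n standard Gaussians grows like √(2 log n). A numerical illustration (ours, not from the paper; the sample size n = 100 000 is an arbitrary choice):

```python
import math
import random

random.seed(0)

n = 100_000
# Maximum of n i.i.d. standard Gaussians. The union bound plus a
# Gaussian tail estimate shows that the maximum concentrates near
# level = sqrt(2 log n) for large n.
m = max(random.gauss(0.0, 1.0) for _ in range(n))
level = math.sqrt(2 * math.log(n))  # ≈ 4.80 for n = 100_000
print(m, level)  # m typically lies within about half a unit of level
```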
The derivation of Theorem 2 admittedly required some effort. But we can now adjust that derivation, without a great deal of additional effort, in order to verify Theorem 3.
Proof of Theorem 3 (sketch). The conditioning in the statement (6.2), which is equivalent to Theorem 2, arose because, during the course of the proof of Theorem 2, we needed to prove that lim δ→0 P { max i∈Π(δ) |σ(u(t , x))| · |Z(t , y i ) − Z(t , x)| ≥ λ̺(t , δ) } = 0, (6.15) for a suitably small choice of λ > 0, and (t , x) → σ(u(t , x)) could, in principle, be frequently close to, or possibly even equal to, zero. In the present setting, however, σ is bounded uniformly from below, away from zero. Therefore, (6.15) follows immediately from Proposition 4.1, as long as λ is a small enough [but otherwise fixed] positive constant. The rest of the proof of Theorem 2 carries over essentially intact.