Spatial asymptotics for the parabolic Anderson model driven by a Gaussian rough noise

The aim of this paper is to establish the almost sure asymptotic behavior, as the space variable becomes large, of the solution to the stochastic heat equation in one space dimension driven by a Gaussian noise which is white in time and which has the covariance structure of a fractional Brownian motion, with Hurst parameter greater than 1/4 and less than 1/2, in the space variable.


Introduction
This article is concerned with a linear stochastic heat equation on R_+ × R, formally written as

∂u/∂t = (1/2) ∂²u/∂x² + u Ẇ, (1.1)

where Ẇ is a Gaussian noise which is white in time and colored in space, and we are interested in regimes where the spatial behavior of Ẇ is rougher than white noise. More specifically, our noise can be seen as the formal space-time derivative of a centered Gaussian process whose covariance is given by

E[W(s, x) W(t, y)] = (1/2) (|x|^{2H} + |y|^{2H} − |x − y|^{2H}) (s ∧ t), (1.2)

where 1/4 < H < 1/2. That is, W is a standard Brownian motion in time and a fractional Brownian motion with Hurst parameter H in the space variable. Notice that the spatial covariance of Ẇ, which is formally given by γ(x − y) = H(2H − 1)|x − y|^{2H−2}, is not locally integrable when H < 1/2. It is in fact a genuine distribution which fails to be nonnegative, and therefore the stochastic integration with respect to W cannot be handled by classical theories (see e.g. [11,10,16]). However, one has recently been able (cf. [14]) to give a proper definition of equation (1.1) and to solve it in a space of Hölder continuous processes (see also the recent work [2], covering the linear case (1.1)). We shall take those results for granted.
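As a quick numerical illustration (ours, not part of the paper), one can check that the spatial part of the covariance (1.2), i.e. the fBm kernel with Hurst parameter H, remains positive semidefinite even in the rough regime H < 1/2:

```python
import numpy as np

# Gram matrix of the fBm kernel in (1.2) on a spatial grid; for any
# H in (1/4, 1/2) its eigenvalues should be (numerically) nonnegative.
H = 0.3  # any value in (1/4, 1/2)
x = np.linspace(0.1, 2.0, 15)
C = 0.5 * (np.abs(x)[:, None]**(2*H) + np.abs(x)[None, :]**(2*H)
           - np.abs(x[:, None] - x[None, :])**(2*H))
eigs = np.linalg.eigvalsh(C)
print(eigs.min())  # nonnegative up to rounding
```

The roughness of the noise thus does not affect positive definiteness of W itself; it is the distributional derivative γ that fails to be nonnegative.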
Let us now highlight the fact that space-time asymptotics for stochastic heat equations like (1.1) have attracted a lot of attention in the recent past. This line of research stems from different motivations, among which let us quote the following. For a fixed t > 0, the large scale behavior of the function x ↦ u(t, x) is dramatically influenced by the presence of the noise Ẇ in (1.1) (as opposed to a deterministic equation with no noise).
One way to quantify this assertion is to analyze the asymptotic behavior of x ↦ u(t, x) as |x| → ∞. Results in this sense include intermittency results, upper and lower bounds for M_R ≡ sup_{|x|≤R} u(t, x) contained in [8], and culminate in the sharp results obtained in [6].
Roughly speaking, in case of a noise which is white in time like in (1.2), those articles establish that log(M_R) behaves like [log(R)]^ψ, for an exponent ψ which depends on the spatial covariance structure of Ẇ. In particular, if the spatial covariance of Ẇ is described by the Riesz kernel |x|^{−α} for α ∈ (0, 1), one gets ψ = 2/(4 − α). This interpolates between a regular situation in space (α = 0 and ψ = 1/2) and the white noise setting (α = 1 and ψ = 2/3). In any case those results are in sharp contrast with the deterministic case, for which x ↦ u(t, x) stays bounded.
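The interpolation above can be made concrete (an illustrative computation of ours; identifying the rough noise with a Riesz kernel of exponent α = 2 − 2H is an assumption consistent with the formal expression γ(x) = H(2H−1)|x|^{2H−2}):

```python
# Exponent psi = 2/(4 - alpha) from [6,8]: psi(0) = 1/2 (smooth case),
# psi(1) = 2/3 (spatial white noise).
def psi(alpha: float) -> float:
    return 2.0 / (4.0 - alpha)

print(psi(0.0), psi(1.0))

# Hypothetical extrapolation to our rough regime: alpha = 2 - 2H gives
# psi = 1/(1 + H), the scale appearing in the tail estimate (1.5).
H = 0.3
assert abs(psi(2 - 2*H) - 1/(1 + H)) < 1e-12
```

For H slightly below 1/2 this recovers ψ close to 2/3, matching the white noise setting continuously.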
With these preliminaries in mind, the current contribution completes the space-time asymptotic picture for the stochastic heat equation, covering very rough situations like the ones described by (1.2). Namely, we shall get the spatial asymptotics stated in Theorem 1.1 below. Let us say a few words about our strategy in order to prove Theorem 1.1. It can be roughly divided into two main steps:

(i) Tail estimate for u(t, x). Let us fix t ∈ R_+ and x ∈ R. We will see (cf. Corollary 4.5) that for large a we have

P(log(u(t, x)) ≥ a) ≲ exp(−ĉ_{H,t} a^{1+H}), (1.5)

where ĉ_{H,t} is determined by a variational problem. This stems, via some large deviation arguments, from a sharp analysis of the high moments of u(t, x). Namely, our main effort in order to get the tail behavior will be to prove that for large m ∈ N we have (see (1.6))

log E[(u(t, x))^m] ≈ c_H m^{1+1/H} t, (1.6)

with a variational expression for c_H. Towards this aim, we resort to a Feynman-Kac representation for the moments of u(t, x), which involves a kind of intersection local time for an m-dimensional Brownian motion weighted by a singular potential. We are thus able to relate the quantity E[(u(t, x))^m] to a semi-group on L²(R^m), and this semi-group admits a generator A_m which can be expressed as the Laplace operator on R^m perturbed by a singular distributional potential. Then we shall get our asymptotic result (1.6) thanks to a careful spectral analysis of A_m. The technicalities related to this step are detailed in Sections 3 and 4.
(ii) Spatial behavior. Once the tail of log(u(t, x)) has been sharply estimated, we can complete the study of the asymptotic behavior in the following way: on the interval [−M, M] for large M, we are able to produce some random variables X_1, . . . , X_N such that:
• N is of order 2M.
• X_1, . . . , X_N are i.i.d. and satisfy approximately (1.5). More precisely, for any δ > 0, we shall prove that one can choose λ conveniently so that the inequality P(max_{i≤N} log(|X_i|) ≤ λ (log R)^{1/(1+H)}) ≤ exp(−R^ν) holds with a positive ν. Otherwise stated, we obtain an exponentially small probability of having all the log(|X_j|) of order less than [log R]^{1/(1+H)}.
As already mentioned, the spatial covariance γ of the noise Ẇ driving equation (1.1) fails to be a nonnegative distribution. With respect to smoother cases such as the ones treated in [6], this induces some serious additional difficulties which can be summarized as follows.
First, the variational asymptotic results involving the generator A_m cannot be reduced to a one-dimensional situation, due to the singularities of γ. We thus have to handle a family of optimization problems in L²(R^m) for arbitrarily large m. Then, still in the part concerning the asymptotic behavior of m ↦ E[(u(t, x))^m], the upper bound obtained in [6] relied heavily on a compactification by folding argument for which the positivity of γ was essential. This approach is no longer applicable here, and we have to replace it by a coarse graining procedure. Finally, the localization procedure and the study of fluctuations in the spatial behavior step of our proof, though similar in spirit to the ones in Conus et al. [9], are more involved in their implementation. More specifically, in our case the moment estimates cannot be obtained by using sharp Burkholder inequalities, because of the roughness of the noise. For this reason we use Wiener chaos expansions and hypercontractivity, which are more suitable methods in our context. The fluctuation estimates alluded to above are also obtained through chaos expansions.
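Coming back to the localization step (ii), its counting heuristic can be sketched as follows (our gloss on the argument, not the paper's proof):

```python
import math

# If X_1, ..., X_N are i.i.d. with tail P(log X_i >= a) ~ exp(-c a^{1+H})
# and N ~ 2M ~ R, the largest log X_i sits near the level a where
# N * exp(-c a^{1+H}) ~ 1, i.e. a ~ (log(N) / c) ** (1 / (1 + H)).
# This is the [log R]^{1/(1+H)} scale of the localization step.
def max_scale(N: float, c: float, H: float) -> float:
    return (math.log(N) / c) ** (1.0 / (1.0 + H))

# Example: H = 1/2 (spatial white noise regime), c = 1, N = e^8,
# giving the threshold 8^{2/3} = 4.
print(max_scale(math.e**8, 1.0, 0.5))
```

The constants c and the identification N ~ R are of course schematic; the paper's argument replaces this heuristic by precise one-sided bounds on both tails.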
The paper is organized as follows. Section 2 contains some preliminaries on stochastic integration with respect to the rough noise Ẇ and the mild formulation of equation (1.1). We introduce the variational quantities and their asymptotic behavior when time is large in Section 3. Section 3.3 deals with Feynman-Kac semigroups, and in Section 4 we derive the precise moment asymptotics which are required to show Theorem 1.1. The proof of Theorem 1.1 is given in Section 5. A technical lemma is proved in the appendix.

Multiplicative stochastic heat equation
This section is devoted to recalling the basic existence and uniqueness results for the stochastic heat equation with rough space-time noise.

Structure of the noise
Recall that we are considering a Gaussian field W whose covariance structure is given by (1.2). As mentioned above, the stochastic integration with respect to W has only been introduced recently in [2,14], and we proceed now to a brief review of the results therein.
Let us start by introducing our basic notation on Fourier transforms of functions.
The space of Schwartz functions is denoted by S. Its dual, the space of tempered distributions, is S'. The Fourier transform of a function g ∈ S is defined by Fg(ξ) = ∫_R e^{−iξx} g(x) dx, so that the inverse Fourier transform is given by F^{−1}g(ξ) = (2π)^{−1} Fg(−ξ). Let D((0, ∞) × R) denote the space of real-valued infinitely differentiable functions with compact support on (0, ∞) × R. Taking into account the spectral representation of the covariance function of the fractional Brownian motion in the case H < 1/2 proved in [15, Theorem 3.1], we represent our noise W by a zero-mean Gaussian family {W(ϕ), ϕ ∈ D((0, ∞) × R)} defined on a complete probability space (Ω, F, P), whose covariance structure is given by

E[W(ϕ) W(ψ)] = c_H ∫_{R_+×R} Fϕ(s, ξ) Fψ(s, ξ) µ(dξ) ds, (2.1)

where the Fourier transforms Fϕ, Fψ are understood as Fourier transforms in the space variable only,

c_H = (1/2π) Γ(2H + 1) sin(πH), and µ(dξ) = |ξ|^{1−2H} dξ. (2.2)

We denote by γ the Fourier transform of the measure µ(dξ). Formally, c_H γ(x) δ_0(s) is the covariance function of the generalized noise Ẇ. However, notice that γ is a generalized function and the integral γ(x) = ∫_R e^{−iξx} µ(dξ) is not defined pointwise. Rather, γ is defined as a linear functional, given by ⟨γ, ϕ⟩ = ∫_R Fϕ(ξ) µ(dξ) for any ϕ ∈ S(R). As a generalized function, γ is non-negative definite in the sense that ⟨γ, ϕ ∗ ϕ̃⟩ = ∫_R |Fϕ(ξ)|² µ(dξ) ≥ 0, where ϕ̃(x) := ϕ(−x). On the other hand, γ(x) (or more precisely, its truncated form) takes negative values somewhere. As mentioned in the introduction, this fact makes the problem of spatial asymptotics for equation (1.1) much harder than in [6].
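As a numerical sanity check (illustrative only, not part of the paper), the constant c_H and the measure µ of (2.2) indeed reproduce the fBm covariance of (1.2) through the spectral representation:

```python
import math
import numpy as np

# Check: c_H * int_R (1 - cos(xi x) - cos(xi y) + cos(xi (x-y))) |xi|^{-1-2H} dxi
#        = (1/2)(|x|^{2H} + |y|^{2H} - |x-y|^{2H}),
# i.e. the spectral density c_H |xi|^{1-2H} / xi^2 of the fBm increments.
H, x, y = 0.3, 1.0, 0.6
c_H = math.gamma(2*H + 1) * math.sin(math.pi * H) / (2 * math.pi)

xi = np.linspace(1e-8, 2.0e4, 1_000_001)       # truncated frequency range
num = 1 - np.cos(xi*x) - np.cos(xi*y) + np.cos(xi*(x - y))
f = num * xi**(-1 - 2*H)
integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xi)))  # trapezoid rule
spectral = 2 * c_H * integral                   # factor 2: integrand is even

closed_form = 0.5 * (abs(x)**(2*H) + abs(y)**(2*H) - abs(x - y)**(2*H))
print(spectral, closed_form)
```

The two values agree up to the frequency truncation error, which is small here since the integrand decays like |ξ|^{−1−2H}.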
Let H be the closure of D((0, ∞) × R) under the semi-norm induced by the right-hand side of (2.1). The Gaussian family W can be extended as an isonormal Gaussian process W = {W(ϕ), ϕ ∈ H} indexed by the Hilbert space H. The space H can be identified with a homogeneous Sobolev space of order 1/2 − H (see [1] for the definition of Ḣ^{1/2−H}). Let us close this subsection with the definition of an Itô type integral in our context, which will play a crucial role in the sequel. We will make use of the notation W(t, ϕ) = W(1_{[0,t]} ⊗ ϕ) for any t ≥ 0 and ϕ ∈ S(R). For each t ≥ 0, we denote by F_t the σ-field generated by the random variables {W(s, ϕ) : s ∈ [0, t], ϕ ∈ S(R)}. The following proposition is borrowed from [14].
Proposition 2.1. Let L²_a be the space of predictable processes g defined on R_+ × R such that almost surely g ∈ H and E[||g||²_H] < ∞. Then the stochastic integral ∫_{R_+} ∫_R g(s, x) W(ds, dx) is well defined for g ∈ L²_a. Furthermore, the following isometry property holds true:

E[ ( ∫_{R_+} ∫_R g(s, x) W(ds, dx) )² ] = E[ ||g||²_H ].

Stochastic heat equation with rough multiplicative noise
Recall that we are considering equation (1.1) driven by the noise described in Section 2.1. For the sake of simplicity, we shall moreover choose u(0, ·) = 1 as the initial condition.
Definition 2.2. Let u = {u(t, x), t ≥ 0, x ∈ R} be a real-valued predictable stochastic process. Assume that for all t ≥ 0 and x ∈ R the process {p_{t−s}(x − y) u(s, y) 1_{[0,t]}(s), 0 ≤ s ≤ t, y ∈ R} is an element of L²_a, where p_t(x) is the heat kernel on the real line related to (1/2)∆ and L²_a is defined in Proposition 2.1. We say that u is a mild solution of (1.1) if for all t ≥ 0 and x ∈ R we have

u(t, x) = 1 + ∫_0^t ∫_R p_{t−s}(x − y) u(s, y) W(ds, dy) a.s., (2.5)

where the stochastic integral is understood in the Itô sense of Proposition 2.1.
Let {B_j, j ≥ 1} be a collection of independent standard Brownian motions, all independent of W. For all t ≥ 0 and j < k, we can define (see [14] again) the functional

∫_0^t γ(B_j(s) − B_k(s)) ds, (2.6)

which is interpreted as a limit in L²(Ω) of approximations in which γ is smoothed. With these notations in mind, let us quote an existence and uniqueness result for our equation of interest.

Proposition 2.3.
There is a unique nonnegative mild solution u to equation (2.5), understood as in Definition 2.2. Moreover, recalling our notation (2.6) above, we have: (i) The following Feynman-Kac formula for the moments of u holds true for m ≥ 2 where {B j , j = 1, . . . , m} is a family of independent standard Brownian motions starting from x ∈ R, and E x denotes the expected value with respect to the Wiener measure shifted by x.
(ii) For any m ≥ 1 and any α > 0 there exist some constants c_1 and c_2 such that E_x[exp(α Q_m(t))] ≤ c_1 exp(c_2 m^{1+1/H} t). In particular, for t ≥ 0 and x ∈ R we have the existence of two constants c_3 and c_4 with

E[(u(t, x))^m] ≤ c_3 exp(c_4 m^{1+1/H} t). (2.8)

One of the main steps in our estimates will be to obtain a sharp refinement of inequality (2.8).
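In the notation of (2.6)-(2.7), the Feynman-Kac formula of item (i) can be sketched as follows (our reconstruction of the display, following the form given in [14]):

```latex
\mathbb{E}\big[(u(t,x))^m\big]
  = \mathbb{E}_x\Big[\exp\Big(c_H \sum_{1\le j<k\le m}\int_0^t
      \gamma\big(B_j(s)-B_k(s)\big)\,ds\Big)\Big]
  = \mathbb{E}_x\big[\exp\big(c_H\,Q_m(t)\big)\big],
```

where Q_m(t) collects the pairwise interaction functionals of (2.6), and the expectation E_x is over the m independent Brownian motions started at x.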

Preliminaries on Dirichlet forms and semigroups
As mentioned in the Introduction, the semigroup related to a certain operator A_m will play a prominent role in our analysis of the spatial behavior of u. The current section defines and analyzes these objects.

Variational quantities
We will see in Section 4 that our sharp asymptotic estimates rely on an optimization problem for some variational quantities related to equation (1.1). We now derive some analytic properties of those quantities.

A variational form on R
Let us consider the following general problem: let K_1 be the space defined by

K_1 = { g ∈ L²(R) : ||g||_{L²} = 1 and g' ∈ L²(R) }.

Next, for a given parameter θ > 0 and g ∈ K_1 set

H_θ(g) = θ ∫_R Fg²(ξ) µ(dξ) − (1/2) ||g'||²_{L²}. (3.2)

We are interested in optimizing this kind of variational quantity, and here is a first result in this direction.
Then the following holds true:
Proof. Let us first focus on item (i). For any g ∈ K_1 and ξ ∈ R we have a first bound on Fg²(ξ). In addition, an elementary integration by parts argument shows a second bound, so that for any ξ ∈ R and g ∈ K_1 we get an estimate on Fg²(ξ), where the last relation is due to the Cauchy-Schwarz inequality plus the fact that ||g||_{L²} = 1 for g ∈ K_1. Consider now an additional parameter R > 0. Gathering the two bounds we have obtained for Fg²(ξ), we end up with an inequality involving R. We thus take R = R_θ large enough, such that 4θ_H/R^{2H} ≤ 1/2. Since the quantity R_θ does not depend on g, the resulting inequality is valid for all g ∈ K_1.
The proof of item (i) is thus finished.
In order to check item (ii), consider a parameter a > 0, and for g ∈ K_1 set g_a(x) = a^{1/2} g(ax). It is readily checked that g_a ∈ K_1 whenever g ∈ K_1. In addition, we have Fg_a²(ξ) = Fg²(ξ/a) and g_a'(x) = a^{3/2} g'(ax).
Plugging this information into the definition (3.2), we obtain the scaling relation for H_θ(g_a). We now choose a = θ^{1/(2H)}, which yields H_θ(g_a) = θ^{1/H} H_1(g). However, this quantity blows up for a broad class of functions in K_1. Indeed, for g ∈ K_1 such that g ≥ α > 0 on a neighborhood of 0, we have a diverging contribution.
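For the record, the scaling computation behind item (ii) can be written out as follows (our reconstruction, with H_θ as in (3.2)):

```latex
\mathcal H_\theta(g_a)
  = \theta\int_{\mathbb R}\mathcal F g^2\!\Big(\tfrac{\xi}{a}\Big)\,\mu(d\xi)
    - \tfrac{a^2}{2}\,\|g'\|_{L^2}^2
  = \theta\,a^{2-2H}\int_{\mathbb R}\mathcal F g^2(\lambda)\,\mu(d\lambda)
    - \tfrac{a^2}{2}\,\|g'\|_{L^2}^2 ,
```

using µ(dξ) = |ξ|^{1−2H} dξ and the change of variable λ = ξ/a. Choosing a = θ^{1/(2H)} equalizes θ a^{2−2H} = a² = θ^{1/H}, so that H_θ(g_a) = θ^{1/H} H_1(g) and, at least formally, the supremum over K_1 scales like θ^{1/H}.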

A variational form on R m
Fix m ≥ 2. Our future computations also rely on the following variational quantity on R^m:

K_{θ,m}(g) = (θ/m) Σ_{1≤j<k≤m} ⟨γ(x_j − x_k) g, g⟩ − (1/2) ||∇g||²_{L²(R^m)}. (3.5)

Notice that K_{θ,m} can also be interpreted as a Dirichlet form related to a Schrödinger type generator A_{θ,m}, that is, K_{θ,m}(g) = ⟨A_{θ,m} g, g⟩ with

A_{θ,m} = (1/2) ∆ + (θ/m) Σ_{1≤j<k≤m} γ(x_j − x_k). (3.6)

Observe, however, that K_{θ,m} and A_{θ,m} are only defined for smooth test functions, due to the fact that γ is a distribution.
Remark 3.4. The quantity K_{θ,m}(g) can also be expressed in Fourier modes. Indeed, the inverse Fourier transform of x ∈ R^m ↦ γ(x_j − x_k) can be computed explicitly. Hence for j < k we end up with an expression in terms of the functions ĝ_{jk} introduced in (3.7). Summarizing, we have obtained (3.8).

Asymptotic results for principal eigenvalues
With those preliminary notions in hand, we now relate the principal eigenvalue of A_{θ,m} to the quantity E_θ, following the methodology introduced in [7].
Proposition 3.5. Consider θ > 0 and the quantity K_{θ,m}(g) given by (3.5). Define the set K_m (which is the equivalent of K_1 for functions defined on R^m) as follows:

K_m = { g ∈ L²(R^m) : ||g||_{L²(R^m)} = 1 and ∇g ∈ L²(R^m) }. (3.9)

We define the principal eigenvalue of the operator A_{θ,m} by

λ_{θ,m} = sup { K_{θ,m}(g); g ∈ K_m }. (3.10)

Proof. We mostly focus on the upper bound, the lower bound being easier to obtain. To this aim we divide the proof into several steps.
Step 1: Cutoff procedure. Observe that the results in [7] only hold for a pointwise defined function γ(x). For this reason we introduce the decomposition

γ = γ¹_M + γ²_M, (3.11)

where this identity (and in particular the second term γ²_M(x)) is understood in the distribution sense. Also notice that for j = 1, 2, the function γ^j_M can be seen as the Fourier transform of the measure µ^j_M, where µ¹_M and µ²_M are defined as follows:

µ¹_M(dξ) = 1_{{|ξ|≤M}}(ξ) µ(dξ), and µ²_M(dξ) = 1_{{|ξ|>M}}(ξ) µ(dξ). (3.12)

Then, for any δ ∈ (0, 1), we can decompose K_{θ,m}(g) into two terms B¹_{m,M} and B²_{m,M}. The term B¹_{m,M} is handled by [7] and we obtain (3.14), where the limiting behavior for E_{M,δ,θ} is a direct consequence of Proposition 3.1.
Step 2: Upper bound for B²_{m,M}. We claim that the inequality (3.15) holds. To show this inequality, we write the quantity B²_{m,M} explicitly. Replacing the supremum of the sum by a sum of supremums in the resulting expression, we are left with terms indexed by k = 2, . . . , m, for which we have set corresponding functions g_k. Let us further analyze the term ||∇_k g||²_2 = ||∇_k g||²_{L²(R^m)} above and relate it to the one-dimensional situation. To this aim, it is readily checked that g_k ∈ K_1 whenever g ∈ K_m. Furthermore, by the Cauchy-Schwarz inequality we get a comparison of the relevant quantities. Plugging this inequality into (3.18) and recalling relation (3.16), this yields (3.15).
Step 3: Limiting behavior for B²_{m,M}. We now show that this term goes to zero as M tends to ∞. This can be seen by a simple integration by parts argument. For any g ∈ K_1, Plancherel's identity yields the relevant bounds, and it should be stressed that we are using our assumption H > 1/4 in order to get nondivergent integrals. In addition, another use of Plancherel's identity enables us to rewrite the term in a convenient form. We now get a uniform bound on ||g'||_2. Since g ∈ K_1 we have ||g||_2 = 1, and plugging this information into (3.22) we obtain the desired estimate. Finally, recalling (3.21), we can conclude (3.23).

Step 4: Conclusion for the upper bound. Let us report our estimates (3.14) and (3.23) into the upper bound (3.13). This trivially yields the desired lim sup bound.

Step 5: Lower bound. In order to get the lower bound, we proceed by a direct verification, replacing the class K_m by the smaller class K_0 of tensor products g = g_0^{⊗m} with g_0 ∈ K_1. Towards this aim, we resort to the expression (3.8) of K_{θ,m}(g) in Fourier modes. Then, for g ∈ K_0, it is readily checked that Fg(ξ) = Π_{k=1}^m Fg_0(ξ_k) for all ξ = (ξ_1, . . . , ξ_m). Invoking this relation, plus the fact that g_0 ∈ K_1, and recalling that the functions ĝ_{jk} are introduced in (3.7), we can plug those identities into the expression (3.8). The quantity so obtained is easily related to the variational expression H_θ(g_0) defined by (3.2), from which the desired identity is readily checked. This shows our lower bound and finishes the proof.

Feynman-Kac semi-groups
For m ≥ 1 and t ≥ 0, consider the quantity Q_m(t) defined by (2.7). Our moment study for the solution to (1.1) will rely on the spectral behavior of the following semi-group:

T_{m,t} g(x) = E_x[ exp((c_H/m) Q_m(t)) g(B(t)) ], g ∈ L²(R^m), (3.24)

where c_H is the constant defined in (2.2) and x ∈ R^m represents the initial condition for the m-dimensional Brownian motion B appearing in (2.7). We now establish some basic properties of those operators.

Proposition 3.6. Let {T_{m,t}, t ≥ 0} be the family of operators introduced in (3.24). Then the following properties hold true: there exists an integer m_0 ≥ 1 such that for m ≥ m_0, A_m admits a self-adjoint extension.

Proof. Let us first prove the boundedness of T_{m,t}. To this aim, notice that for g ∈ L²(R^m) we can estimate T_{m,t} g(x) by applying the Cauchy-Schwarz inequality together with relation (2.8). This proves boundedness in L²(R^m). The self-adjointness of T_{m,t} is then easily derived. It is also readily checked that the infinitesimal generator of T_{m,t}, acting on test functions in S(R^m), is given by A_m. In addition, this operator is obviously symmetric.
In order to show that it admits a self-adjoint extension, it is sufficient (thanks to the classical Friedrichs extension theorem) to show that ⟨A_m g, g⟩ ≤ c ||g||²_{L²(R^m)} (3.25) for all g ∈ Dom(A_m) and for a constant c > 0. We now prove this inequality for m large enough: indeed, for all test functions we have ⟨A_m g, g⟩ = K_m(g), where we have set K_m(g) = K_{c_H,m}(g) and K_{θ,m}(g) is defined by (3.5). The fact that there exists an m_0 ≥ 1 such that relation (3.25) holds for m ≥ m_0 is then an immediate consequence of Proposition 3.5. This finishes our proof.
As in the proof of Proposition 3.5, our future considerations will also rely on a truncated version of the operators T_{m,t}. Let us label their definition for further use. For m ≥ 1 we introduce the quantities Q¹_{m,M}(t) and Q²_{m,M}(t) (see (3.26)), defined similarly to (2.7) but replacing γ by the functions γ¹_M and γ²_M given by (3.11). Related to these Feynman-Kac functionals, we consider a family of operators {T_{m,M,t}, t ≥ 0} acting on L²(R^m) and indexed by a parameter θ > 0, defined as in (3.24) with c_H replaced by θ and Q_m(t) replaced by Q¹_{m,M}(t) (3.28). The family of operators we have just introduced enjoys the following property.

Proposition 3.8. The conclusions of Proposition 3.6 remain true for this semi-group, with a generator Â_{m,M} defined like A_m but with γ replaced by γ¹_M. Furthermore, the following limiting behavior holds true for all x ∈ R^m:

lim_{t→∞} (1/t) log T_{m,M,t} 1(x) = λ_{m,M}, (3.29)

where λ_{m,M} is the principal eigenvalue of Â_{m,M}, defined similarly to (3.10), and where we recall that K_m is introduced in (3.9).
Proof. The self-adjointness of T_{m,M,t} is completely classical and left to the reader. In order to get the self-adjointness of Â_{m,M}, we just realize that γ¹_M : R → R is the inverse Fourier transform of a function which is in L²(R) with compact support. Therefore γ¹_M admits bounded derivatives of all orders. The desired self-adjointness property is thus a consequence of classical results, which are summarized e.g. in [3]. Finally, relation (3.29) is a classical Feynman-Kac limit, for which we refer to [3, Theorem 4.1.6].
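As an illustration of why the truncation helps (our numerical sketch; the values of H and M are arbitrary), the truncated kernel γ¹_M, i.e. the inverse Fourier transform of 1_{[−M,M]}(ξ)|ξ|^{1−2H}, is a bounded smooth function, and it indeed takes negative values, as pointed out in Section 2.1:

```python
import numpy as np

# gamma^1_M(x) = (1/pi) * int_0^M cos(xi x) xi^{1-2H} dxi
# (inverse Fourier transform of the truncated spectral density).
H, M = 0.3, 10.0

def gamma1M(x_pt: float, n: int = 200_001) -> float:
    xi = np.linspace(1e-9, M, n)
    f = np.cos(xi * x_pt) * xi**(1 - 2*H)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xi))) / np.pi

vals = np.array([gamma1M(x) for x in np.linspace(0.0, 6.0, 61)])
print(vals[0], vals.min())  # positive at the origin, negative somewhere
```

This matches the heuristics: as M → ∞ the kernel approaches H(2H−1)|x|^{2H−2}, which is negative away from the origin when H < 1/2.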

Asymptotic properties of the moments
As in [4,6,8], the spatial asymptotics for equation (2.5) will be established thanks to a sharp estimate of the tail of u(t, 0) for fixed t ≥ 0. As we will see later, this can be related to some estimates on the moments of u(t, 0), and our next step will be to obtain the exact asymptotic behavior (as m → ∞) of these moments. Before we begin with this task, we will reduce our problem thanks to a series of lemmas.
First let us observe that the generator we have considered for a proper normalization procedure in Section 3.1 involves a sum of the form (1/m) Σ_{1≤j<k≤m} γ(x_j − x_k), where we emphasize the normalization by m. However, the quantity we manipulate in our Feynman-Kac representation (2.7) is Q_m(t), which does not exhibit this normalizing term.
We will introduce the missing normalization by a simple scaling argument.
We now set m^{−1/(2H)} ξ = λ in the space integral above. This easily yields an equality in law between Q_m(t) and m^{−1} Q_m(t_m), where t_m = m^{1/H} t, and thus our claim follows.
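In more detail, the combination of Brownian scaling with the formal (2H−2)-homogeneity of γ can be sketched as follows (our reconstruction of the computation):

```latex
\int_0^t \gamma\big(B_j(s)-B_k(s)\big)\,ds
  \overset{\text{(law)}}{=}
  \int_0^t \gamma\big(m^{-\frac{1}{2H}}(B_j-B_k)(m^{\frac1H}s)\big)\,ds
  = m^{\frac1H-1}\,m^{-\frac1H}\!\int_0^{t_m}\!\gamma\big((B_j-B_k)(r)\big)\,dr,
```

where we used γ(cx) = c^{2H−2} γ(x) with c = m^{−1/(2H)} and the change of variable r = m^{1/H} s. Summing over j < k gives Q_m(t) equal in law to m^{−1} Q_m(t_m) with t_m = m^{1/H} t, which is the missing 1/m normalization.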
We now establish a couple of simple monotonicity properties for the quantity Q m which will feature in the sequel. The first one is related to our regularization procedure.
We now show that, for any fixed n ≥ 1, the map M ↦ E_0[(Q¹_{m,M}(t))^n] is increasing. Indeed, recalling our notation (3.12) for µ¹_M, it is readily checked that the n-th moment can be written as a multiple integral against µ¹_M(dξ_1) · · · µ¹_M(dξ_n) ds, where we use the simple convention dξ = dξ_1 · · · dξ_n and ds = ds_1 · · · ds_n. Now notice that for all j, k, l the corresponding integrand is nonnegative, which yields the desired monotonicity in M. The second monotonicity property we need concerns the dependence with respect to the initial condition for our underlying Brownian motion.
For any x ∈ R^m, recall that we use E_x to denote the mathematical expectation with respect to B with B(0) = x. Then the following relation is verified for all n ≥ 1:

E_x[(Q_m(t))^n] ≤ E_0[(Q_m(t))^n]. (4.5)

For any fixed M > 0, the same property holds true for Q¹_{m,M}(t) and Q²_{m,M}(t).
Proof. We focus on the property for Q_m, the equivalent for Q^i_{m,M} (i = 1, 2) being shown exactly in the same way. Furthermore, resorting to the expansion (4.3) as in the previous proof, our claim reduces to relation (4.5), on which we now focus.
In order to show (4.5), we first decompose the moments of Q_m(t) similarly to (4.4), with an inner factor D_{m,n}. Furthermore, one can reorder terms in the quantity D_{m,n} and obtain an expression with constants C(j_1, . . . , j_n, k_1, . . . , k_n) satisfying |C(j_1, . . . , j_n, k_1, . . . , k_n)| = 1. Taking the mathematical expectation then proves (4.5), and thus our claim.
We are now ready to prove our main asymptotic theorem for moments of u.

Proof. Recall that the law of u(t, x) does not depend on x, and we thus start from expression (4.1) for E[(u(t, 0))^m]. With a lower bound in mind, our first task will be to relate this expression to the semi-group T_{m,t} introduced in (3.24).
Step 1: Spectral representation. Let us consider a function g : R → R with support in a given interval [−a, a]. For m ≥ 1 we define g_m = g^{⊗m}, that is, g_m(x) = Π_{j=1}^m g(x_j) for x ∈ R^m. Owing to Lemma 4.3 plus a trivial comparison of expectations, and invoking our definition (3.24), we obtain a lower bound on E[(u(t, 0))^m] in terms of ⟨T_{m,t_m} g_m, g_m⟩. We now consider a small parameter ε > 0. We also recall our definitions (3.5) and (3.6) for K_{θ,m} and A_{θ,m}, and set K_m := K_{c_H,m} (resp. A_m := A_{c_H,m}) to alleviate notations. Owing to our computations in the proof of Proposition 3.5 (Step 5), we can choose g satisfying ||g||_{L²(R)} = 1 and such that the corresponding variational quantity is ε-close to its supremum (4.10). This quantity can be related to the semi-group T_{m,t} in the following way. Due to the fact that A_m is self-adjoint (see Proposition 3.6), T_{m,t} admits a spectral representation related to its generator. Hence, since ||g||_{L²(R)} = 1, there exists a probability measure ν_g on R such that ⟨T_{m,t} g_m, g_m⟩ = ∫_R e^{tλ} ν_g(dλ). Plugging this information into (4.9) and recalling that g satisfies (4.10), we end up with the inequality (4.12), valid for m large enough.

Step 2: Lower bound. Starting from (4.12), the desired lower bound is now easily derived: taking into account the fact that m t_m = m^{1+1/H} t, we get (for m large enough) the required exponential lower bound. Taking limits in m and recalling that we have chosen an arbitrarily small ε, we have proved the lim inf inequality, which corresponds to our claim.
Step 3: Cut-off in the Feynman-Kac representation. Let us go back to the Feynman-Kac representation of moments for u(t, x) given by (2.7), and write the quantity Q_m(t) therein in Fourier modes. In order to get the upper bound part of our theorem, we shall replace the quantity Q_m(t), with its diverging high frequency modes, by the quantity Q¹_{m,M}(t) defined in (3.26). To this aim, recall that γ¹_M, γ²_M, µ¹_M, µ²_M are defined in (3.11)-(3.12), and notice that we have the decomposition (4.13). Our cut-off procedure is now expressed in the following form: for two conjugate exponents p, q > 1, Hölder's inequality yields (4.14). We will prove that our study can be reduced to the analysis of the term involving Q¹_{m,M}(t).

Step 4: Proof of (4.15). Let us generalize our problem somewhat and prove the auxiliary bound (4.18). To this end, observe that

E_0[e^{iξ Σ_{l=1}^n (B_{j_l}(s) − B_1(s))}] = E_0[e^{iξ Σ_{l=1}^n B_{j_l}(s)}] E_0[e^{−iξ Σ_{l=1}^n B_1(s)}].
Owing to the fact that E_0[e^{iξ Σ_{l=1}^n B_{j_l}(s)}] ≥ 0 and E_0[e^{−iξ Σ_{l=1}^n B_1(s)}] ∈ (0, 1), we end up with a comparison which, invoking the series expansion of the exponential function again, easily yields our claim (4.18). Starting from (4.18), we can prove (4.15). To this aim, a direct consequence of (4.18) is an inequality whose last step can be seen by means of some elementary computations, very similar to the ones displayed in [14, p. 50]. This finishes the proof of (4.15).
Step 5: An expression with diagonal terms. Let us now focus on the behavior of the quantity Q¹_{m,M}(t) defined by (4.13). Specifically, having (4.13) and (4.14) in mind, we wish to find the asymptotic behavior of the corresponding exponential moments for a given parameter θ > 0. We wish to reduce this asymptotic study to an evaluation involving Feynman-Kac semigroups. A first step in this direction is to realize that, since the cut-off measure µ¹_M is now finite, the asymptotic behavior of m ↦ E[(u(t, x))^m] is not perturbed by adding the diagonal terms corresponding to j = k in the sum defining Q¹_{m,M}(t). That is, one can replace Q¹_{m,M}(t) by a quantity Q̂¹_{m,M}(t) including the diagonal. Recalling once again our formula (2.7), we now focus on the evaluation of Â¹_{m,M}.
Step 6: A coarse graining procedure. Thanks to relation (4.22), our problem is reduced to a Feynman-Kac asymptotics for the semi-group related to Â¹_{m,M}. However, the semi-group T_{m,M,t} is considered in Proposition 3.8 with m, M fixed and t → ∞. In contrast, we consider here a situation where both m and t_m go to ∞. We solve this problem by a coarse graining type procedure which is described below.
To this aim, consider a fixed ρ ∈ N and let us decompose m as m = nρ + r with n, r ∈ N and 0 ≤ r ≤ ρ − 1. We can then write the decomposition (4.23), in which the remainder term R̂_{m,M}(t_m) collects the boundary effects. This yields our upper bound, taking into account (4.15), the fact that we consider A¹_{m,M}(θ) with θ = pc_H/2 and p arbitrarily close to 1, plus relation (4.22).
Using Corollary 1.2.5 in [3] we deduce the following corollary. Let us thus consider a sequence of real numbers (a_n)_{n≥1} converging to ∞. Related to this sequence, we also introduce the sequence of random variables Y_n = a_n^{−1/H} log(u(t, x)), as well as an auxiliary sequence (ρ_n)_{n≥1}. Notice that the above asymptotic result does not enable a direct application of the Ellis-Gärtner theorem, since the limit in (4.28) is only obtained for β ≥ 0 (while Ellis-Gärtner would require limits for β < 0 too). We will thus apply a large deviation theorem for positive random variables (Theorem 1.2.3 in [3]), which can be summarized as follows.
We thus decompose the exponential moments of Y_n according to the sign of Y_n. Owing to this decomposition, plus our limiting result (4.28), it is readily checked that (4.30) holds. Furthermore, conditionally on the event {Y_1 > 0}, the random variable Y_n can be considered as positive.
Taking into account the last considerations, we have thus obtained a conditioned version of (4.30). One can then transform this relation into an unconditioned one, owing to the fact that P(Y_1 > 0) is a fixed strictly positive quantity. Recalling the notation for Y_n and ρ_n, some elementary changes of variable now yield our claim (4.27).

Remark 4.7.
Notice that the constant ĉ_{H,t} appearing in (4.27) is precisely

Proof of Theorem 1.1
In this section we start from the tail behavior of the random variable log(u(t, x)) provided by (4.27). We carry out a localization and discretization procedure which will allow us to evaluate the growth of x ↦ u(t, x).

Proof of the lower bound by localization
Our approach is based on a method (introduced in [8] and [9]) involving localizations of the driving noise Ẇ in the space-time domain. Let us start with some elementary preliminaries (whose proofs are left to the reader) concerning the localizing function.
For any β > 0 define ℓ_β(x) = β ℓ(βx). Then the Fourier transform of ℓ_β is given by Fℓ_β(ξ) = Fℓ(ξ/β). We now give a representation of the noise Ẇ as a convolution of a certain kernel with respect to a space-time white noise.
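The scaling identity above can be checked numerically (illustrative; the choice of a Gaussian bump for the localizing function ℓ is an assumption of ours, and we use the convention Fg(ξ) = ∫ e^{−iξx} g(x) dx from Section 2.1):

```python
import numpy as np

def trapz(f, x):
    # composite trapezoid rule (avoids version-dependent numpy helpers)
    return complex(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def fourier(g, xi, x):
    # numerical version of Fg(xi) = int e^{-i xi x} g(x) dx
    return trapz(np.exp(-1j * xi * x) * g(x), x)

ell = lambda y: np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)  # assumed bump
beta, xi0 = 3.0, 1.7
x = np.linspace(-40.0, 40.0, 400_001)
lhs = fourier(lambda y: beta * ell(beta * y), xi0, x)   # F l_beta(xi0)
rhs = fourier(ell, xi0 / beta, x)                        # F l(xi0 / beta)
print(abs(lhs - rhs))  # ~ 0
```

The identity holds for any integrable ℓ, by the change of variable y ↦ y/β in the defining integral.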

Lemma 5.2. Let γ̃ be the distribution defined by
where Ŵ is a standard space-time white noise on R².
Proof. It is easily checked that γ̃ ∗ γ̃ = c_H γ is the spatial covariance of the fractional Brownian sheet W. Let now W be the Gaussian field given by (5.2). Then for any s, t ≥ 0 and φ, ψ ∈ S(R), we can compute the covariance E[W(s, φ) W(t, ψ)], which corresponds to expression (2.1).
We now turn to a description of the localized approximation of u which will be used in the sequel. For this step, we fix a parameter β ≥ 1 and consider the approximation {W_β(t, φ), φ ∈ S(R)} of the fractional Brownian field W defined by (5.4). From (5.4) we obtain an explicit expression for the covariance function of the random field W_β. Notice that ℓ_β is an approximation of the identity as β tends to infinity, so that W_β has to be seen as an approximation of W. On the other hand, the spatial covariance of the noise W_β, given by ((Fℓ_β) γ̃) ∗ ((Fℓ_β) γ̃), has compact support. In this sense we call it localized.
Remark 5.3. We have followed the notation of [8] for our localization step. However, let us stress that, though the localization is made through the Fourier transform of ℓ_β, it is a localization in direct spatial coordinates. Having the approximation (5.4) in hand, we can now define the following Picard approximation of the solution u to equation (1.1). Namely, we set U_{β,0}(t, x) = 1 and for n ≥ 1 we define U_{β,n} through (5.6).

Proof. The proof is exactly the same as the proof of Lemma 5.4 of [9], and is omitted for the sake of conciseness.

(5.8)
Here σ denotes the permutation of {1, 2, . . . , n} such that 0 < s_{σ(1)} < · · · < s_{σ(n)} < t, and I_n is the multiple Itô-Wiener integral with respect to the fractional Brownian field W. The same kind of formula holds for U_{β,n}(t, x) defined by (5.6). Namely, denote by p^{(β)} the kernel associated with the localized noise. Then for n ≥ 1, one can recast formula (5.6); by iteration, similarly to [14] and (5.7), we have

U_{β,n}(t, x) = Σ_{k=0}^n I_{β,k}(f_{β,k}(·, t, x)), (5.9)

where f_{β,0}(t, x) = 1 and for k ≥ 1 the kernels f_{β,k} are defined by (5.10). In the above expression, I_{β,k} is the multiple Itô-Wiener integral of order k with respect to the Gaussian process W_β(t, x). In the next proposition we are going to show that the sequence U_{β,n}(t, x) converges in L², and defines a random field U_β(t, x) given by

U_β(t, x) = Σ_{n≥0} I_{β,n}(f_{β,n}(·, t, x)). (5.11)
On the other hand, we will also see that U β (t, x) converges in L 2 to u(t, x) as β tends to infinity.
Proposition 5.6. For any $(t,x)\in\mathbb R_+\times\mathbb R$ and for any $p\ge1$, the sequence $U_{\beta,n}(t,x)$ defined by (5.6) converges in $L^p$ to the random variable $U_\beta(t,x)$ defined by (5.11), as $n$ tends to infinity. Furthermore, we have the following estimates for the differences of the solutions: there is a finite constant $C$ (depending on $t$ but independent of $\beta$ and $p$) such that the bounds (5.12) and (5.13) hold for any $\beta\ge1$ and $p\ge2$.

Proof. In order to simplify the notation, we will omit the dependence on $(t,x)$ in some of the terms of our computations. The proof will be done in several steps.
In the case $i\le j-1$, we also claim the following bound. In fact, recalling our definition (5.15) and thanks to an elementary change of variables, the integral under consideration is bounded by a constant times $(s_{\sigma(i+1)}-s_{\sigma(i)})^{-\frac{\alpha_i+1}{2}}$, by Lemma 6.1 in the Appendix below. It remains to consider the integral over the variable $\eta_j$ in (5.22). We decompose the integral $\int_{\mathbb R}|\eta_j|^{-2\theta+\alpha_j}\,d\eta_j$ into two parts: on the region $|\eta_j|\le1$ we take $\theta=\frac{\alpha_j+1}{2}-\frac{\delta}{2}$, and on the region $|\eta_j|>1$ we take $\theta=\frac{\alpha_j+1}{2}+\frac{\delta}{2}$, for some $\delta>0$ to be fixed later. In this way, we obtain a bound of the form $C\,(s_{\sigma(j+1)}-s_{\sigma(j)})^{-\frac{1}{2}(\alpha_j+1+\delta)}$.
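The two choices of $\theta$ above can be checked by a one-line computation; the following sketch (elementary calculus, not taken from the text) spells out why each region contributes a constant of order $\delta^{-1}$:

```latex
% With \theta=\frac{\alpha_j+1}{2}-\frac{\delta}{2} on \{|\eta_j|\le 1\},
% the exponent is -2\theta+\alpha_j=-1+\delta, hence
\int_{|\eta_j|\le 1}|\eta_j|^{-1+\delta}\,d\eta_j
  \;=\;2\int_0^1 \eta^{-1+\delta}\,d\eta\;=\;\frac{2}{\delta}\;<\;\infty .
% With \theta=\frac{\alpha_j+1}{2}+\frac{\delta}{2} on \{|\eta_j|>1\},
% the exponent is -1-\delta, hence
\int_{|\eta_j|>1}|\eta_j|^{-1-\delta}\,d\eta_j
  \;=\;2\int_1^\infty \eta^{-1-\delta}\,d\eta\;=\;\frac{2}{\delta}\;<\;\infty .
```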
We now wish to apply Lemma 5.5 in order to bound the right-hand side of (5.27), and we first discuss the nature of the exponents involved:
(i) First, we have to ensure that each exponent in the integral showing up in (5.27) is bounded below by $-1$. These exponents are at least $-\max_{i\le n}\frac{\alpha_i+1}{2}-\frac{\delta}{2}$, and recall that $\max_{i\le n}\alpha_i\le 2(1-2H)$ according to (5.23). We can thus ensure that each exponent is greater than $-1$, provided that $0<\delta<4H-1$.
(ii) Invoking relation (5.23) again, it is readily checked that the sum of the exponents in (5.27) is $-n+nH-\frac{\delta}{2}$.
(iii) We wish to choose $\theta$ as large as possible in order to ensure the maximal exponential decay for $A_{j,n}$. According to our previous considerations, we have taken $\theta=\frac{\alpha_j+1}{2}-\frac{\delta}{2}$ with $\delta$ of the form $4H-1-\varepsilon$ for an arbitrarily small $\varepsilon$. Referring once more to (5.27), we can only ensure $\alpha_j\ge0$, which yields $\theta\ge\frac{1-\delta}{2}=1-2H+\frac{\varepsilon}{2}\ge 1-2H$.
With those considerations in mind, we can now apply Lemma 5.5 to relation (5.27) in order to conclude that (5.28) holds, where $C$ is a constant depending on $H$ and $t$. Substituting (5.28) into (5.18), and using a standard lower bound on $\Gamma(nH+\frac12)$ in terms of $\Gamma(nH+1)$, yields the announced estimate (5.29) (recall that the constant $C$ might change from line to line).

Going back to our decomposition (5.14), let us now deal with the term $A_2$. The spectral measure of the noise $W_\beta$ has a density equal to $c_H$ times the square of the convolution of the mollifier from (5.4) with $|\cdot|^{\frac12-H}$, evaluated at $\xi$. Therefore, thanks to another telescoping sum argument, we get a first bound on $A_2$. In addition, this approximated spectral density can be compared with that of $W$ up to the constant $c_{1,H}$, defined as the integral of $|\eta|^{\frac12-H}$ against the mollifier. Taking into account that $\beta\ge1$, this leads to a workable estimate. We now start from the expression (5.16) for $\mathcal F f_{\beta,n}(s_1,\xi_1,\dots,s_n,\xi_n)$, make the change of variables $\eta_i=\xi_{\sigma(i)}+\cdots+\xi_{\sigma(1)}$ for $i=1,2,\dots,n$, and bound $|\eta_i-\eta_{i-1}|^{1-2H}$ by $|\eta_i|^{1-2H}+|\eta_{i-1}|^{1-2H}$, as in the case of our term $A_1$. This yields the bound (5.30), where we recall that $S_n(t)$ denotes the $n$-dimensional simplex of $[0,t]^n$.
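The bound $|\eta_i-\eta_{i-1}|^{1-2H}\le|\eta_i|^{1-2H}+|\eta_{i-1}|^{1-2H}$ used for both $A_1$ and $A_2$ is just subadditivity of $x\mapsto x^a$ for $0<a\le1$. A quick numerical sanity check (a sketch of ours, not part of the proof; the sampling scheme is arbitrary):

```python
import random

def subadditivity_gap(a: float, x: float, y: float) -> float:
    """Return (|x|^a + |y|^a) - |x + y|^a; nonnegative whenever 0 < a <= 1."""
    return abs(x) ** a + abs(y) ** a - abs(x + y) ** a

random.seed(0)
# For H in (1/4, 1/2), the exponent a = 1 - 2H lies in (0, 1/2).
for _ in range(10_000):
    H = random.uniform(0.25, 0.5)
    a = 1.0 - 2.0 * H
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    assert subadditivity_gap(a, x, y) >= -1e-12
```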
We now conclude, putting together (5.29) and (5.30), the bound (5.31). In a similar way, we can also obtain the following estimate (whose proof is left to the patient reader), where the constant $C$ is independent of $\beta$:
$$
\mathbf E\big[|I_{\beta,n}(f_{\beta,n})|^2\big]\le \frac{C^n}{\Gamma(nH+1)}.
\qquad(5.32)
$$

Step 2: $L^p$-estimates. Recall that $U_{\beta,n}(t,x)$ is defined by the finite sum (5.9). Let us first get the convergence of this finite sum to a random variable $U_\beta(t,x)$ formally defined by the series (5.11). To this aim, recall that for a functional $F_n$ belonging to the $n$-th chaos of a Wiener space and $p\ge2$, we have the hypercontractivity inequality $\|F_n\|_{L^p(\Omega)}\le p^{\frac n2}\|F_n\|_{L^2(\Omega)}$. We thus get a chain of bounds whose last step is due to (5.32). Furthermore, the following inequality, valid for $z\ge0$ and $a>0$, is an easy consequence of estimates on Mittag–Leffler functions which can be found in [12]:
$$
\sum_{k=n+1}^{\infty}\frac{z^k}{\Gamma(ak+1)}\le c_1\,\frac{z^n}{\Gamma(an+1)}\,e^{c_2 z},
$$
which shows our claim (5.13).
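The hypercontractivity step controls the $L^p$ norm of the chaos series by $\sum_n p^{n/2} C^n\,\Gamma(nH+1)^{-1/2}$, which converges for every $p$ because the Gamma factor eventually dominates any geometric growth. A numerical sketch using `math.lgamma` (the values of $C$, $H$, $p$ below are illustrative, not taken from the text):

```python
from math import lgamma, log, exp

def chaos_series_partial(C: float, H: float, p: float, n_terms: int) -> float:
    """Partial sum of sum_{n>=0} p^(n/2) * C^n / Gamma(n*H + 1)^(1/2),
    computed in log-space for numerical stability (the n = 0 term is 1)."""
    total = 0.0
    for n in range(n_terms):
        log_term = 0.5 * n * log(p) + n * log(C) - 0.5 * lgamma(n * H + 1)
        total += exp(log_term)
    return total

# Terms grow geometrically at first, but the Gamma factor eventually wins,
# so the partial sums stabilize: the series converges for every p.
s600 = chaos_series_partial(C=1.2, H=0.3, p=2.0, n_terms=600)
s900 = chaos_series_partial(C=1.2, H=0.3, p=2.0, n_terms=900)
assert abs(s900 - s600) / s900 < 1e-9
```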
The same kind of consideration also allows us to derive inequality (5.12). Namely, we write a chain of bounds in which we resort to hypercontractivity and (5.31) for the last step. This easily yields (5.12) by the same kind of argument as before.
Proof. From (5.12) and (5.13) we obtain a moment bound with some constant $C$ depending on $H$ and $t$. Using the asymptotic properties of the Gamma function, this quantity is bounded by an exponential in a power of $p$.

We are now ready to give the proof of our lower bound.

Proof. We divide this proof in two steps: first we determine a main contribution to the maximum, given by our approximations $U_{\beta,n}$ suitably discretized. Then we will evaluate this main contribution.
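The passage from such a moment bound to a tail estimate follows the usual high-moment Chebyshev argument; schematically, with a generic exponent $a>1$ standing in for the precise power of $p$ (which we do not reconstruct here):

```latex
\mathbf P\big(\log |X| > \lambda\big)
  = \mathbf P\big(|X|^p > e^{p\lambda}\big)
  \le e^{-p\lambda}\,\mathbf E\big[|X|^p\big]
  \le \exp\!\big(Cp^{a}-p\lambda\big),
% and optimizing the exponent over p (take p of order \lambda^{1/(a-1)}) yields
\mathbf P\big(\log |X|>\lambda\big)
  \le \exp\!\big(-c\,\lambda^{\frac{a}{a-1}}\big).
```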
Step 1: Fluctuation results. Fix $R>0$ and consider a given $\nu>0$. Referring to the notation of Corollary 5.7, we wish to choose $p$ in inequality (5.33) such that we obtain the bound (5.36) below. For a fixed $t>0$, we now wish to produce some independent random variables $U_{\beta,n}(t,x_j)$, with $x_j\in[-R,R]$. For this we need $x_{j+1}-x_j>2n\beta(1+t^{1/2})$ for all $j$. Set $N=2n\beta(1+t^{1/2})$, and choose the set of points $\mathcal N_R$ accordingly, with consecutive points at distance $2N$. If $|\mathcal N_R|=2\lfloor\frac{R}{2N}\rfloor+1$ denotes the cardinality of $\mathcal N_R$, one can check, using the expressions of $\beta$ and $n$ given in (5.37), that for any $\varepsilon>0$ the following inequalities hold for $R$ large enough:
$$
c_{H,M,\varepsilon}\,R^{1-\varepsilon}\le |\mathcal N_R|\le R,
\qquad(5.38)
$$
where $c_{H,M,\varepsilon}$ is a positive constant.
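The discretization step can be illustrated with a tiny script. We stress that the growth choice $\beta(R)=n(R)=\lceil\log R\rceil$ below is a placeholder of our own (the actual expressions (5.37) are not reproduced in this excerpt); the point is only that the grid has spacing $2N$ with $N=2n\beta(1+\sqrt t)$ and cardinality $2\lfloor R/(2N)\rfloor+1\le R$, of order $R$ up to logarithmic factors.

```python
from math import sqrt, ceil, log

def grid(R: float, t: float, beta: int, n: int) -> list[float]:
    """Points of [-R, R] spaced by 2N, where N = 2*n*beta*(1 + sqrt(t)),
    so that consecutive points are more than N apart and the
    U_{beta,n}(t, x_j) are independent."""
    N = 2 * n * beta * (1 + sqrt(t))
    half = int(R // (2 * N))          # number of points on each side of 0
    return [2 * N * j for j in range(-half, half + 1)]

R, t = 1e6, 1.0
beta = n = ceil(log(R))               # hypothetical stand-in for (5.37)
pts = grid(R, t, beta, n)
N = 2 * n * beta * (1 + sqrt(t))
assert len(pts) == 2 * int(R // (2 * N)) + 1
assert all(abs(x) <= R for x in pts)
assert all(pts[j + 1] - pts[j] > N for j in range(len(pts) - 1))
```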
We can now study $\max_{z\in\mathcal N_R}|u(t,z)-U_{\beta,n}(t,z)|$. For any $\eta>0$ we have a decomposition of the corresponding probability. Furthermore, a simple application of Markov's inequality yields a moment bound for an arbitrary $p\ge1$, so that, choosing $p=p(R)$ and invoking relation (5.36), we can recast this relation accordingly. Going back to inequality (5.39) and choosing $\nu=3$, we have obtained, for $R$ large enough, an estimate which is enough to assert (5.43), where we recall that $\beta=\beta(R)$ and $n=n(R)$ in the right-hand side of (5.43) are given by (5.37). We will now evaluate the right-hand side of (5.43), identified with our main contribution.
Hence, if we assume that $R$ is large enough, so that $\log 2\le\delta(\log R)^{\frac{1}{1+H}}$, we obtain the corresponding lower bound. Owing to simple additivity properties of $\mathbf P$, we thus get a lower bound on $\mathbf P\big(\log|U_{\beta,n}|>\lambda(\log R)^{\frac{1}{1+H}}\big)$, where the last inequality is a direct consequence of (5.40). Now recall that we have chosen $\lambda$ fulfilling condition (5.44). Applying Corollary 4.5 in this context, and taking $R$ large enough, we obtain a complementary estimate. Plugging this relation into (5.47) gives, for $R$ large enough, the desired lower bound on $\mathbf P\big(\log|U_{\beta,n}|>\lambda(\log R)^{\frac{1}{1+H}}\big)$. Our proof is thus easily concluded.
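The "simple additivity properties of $\mathbf P$" invoked in the proof above amount to the following elementary computation for the independent variables $U_{\beta,n}(t,z)$, $z\in\mathcal N_R$ (a generic sketch; $a$ denotes the relevant threshold):

```latex
\mathbf P\Big(\max_{z\in\mathcal N_R}|U_{\beta,n}(t,z)|> a\Big)
 \;=\; 1-\prod_{z\in\mathcal N_R}\mathbf P\big(|U_{\beta,n}(t,z)|\le a\big)
 \;\ge\; 1-\Big(1-\min_{z\in\mathcal N_R}
      \mathbf P\big(|U_{\beta,n}(t,z)|> a\big)\Big)^{|\mathcal N_R|}.
```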

Proof of the upper bound
The proof of the upper bound is based on a quantification of the fluctuations of $u$ in boxes around the points $x_j\in\mathcal N_R$ defined in the proof of Proposition 5.8. We will first need an evaluation of the modulus of continuity of $u$ in the space variable.

Proposition 5.9. For any $\beta\in(0,2H-\frac12)$, there exists a constant $C$ depending on $\beta$, $H$ and $t$ such that (5.50) holds for any $x,y\in\mathbb R$ and any $p\ge2$.

Proof. First we estimate the $L^2$ norm, using the Wiener chaos expansion of the solution and the notation from the proof of Proposition 5.6. In this way we can write
$$
\mathbf E\big[|I_n(f_n(\cdot,t,x))-I_n(f_n(\cdot,t,y))|^2\big]
= n!\,\big\|f_n(\cdot,t,x)-f_n(\cdot,t,y)\big\|^2_{\mathcal H^{\otimes n}}
= n!\,c_H^n\,L_n(x,y),
\qquad(5.51)
$$
where we have set $L_n(x,y)=\big\|\mathcal F f_n(\cdot,t,x)-\mathcal F f_n(\cdot,t,y)\big\|^2_{L^2([0,t]^n\times\mathbb R^n,\,\lambda^n\times\mu^n)}$.
Plugging this relation into (5.51) and applying the hypercontractivity property on a fixed chaos, we have thus obtained
$$
\big\|I_n(f_n(\cdot,t,x))-I_n(f_n(\cdot,t,y))\big\|_{L^p(\Omega)}
\le \frac{C^n p^{\frac n2}\,|x-y|^{\beta}}{\Gamma\big(\frac{nH}{2}+1\big)},
$$
from which (5.50) is obtained exactly as in Proposition 5.6.
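Summing the estimate above over $n$ gives the Hölder bound (5.50); schematically, with $a=\frac H2$:

```latex
\|u(t,x)-u(t,y)\|_{L^p(\Omega)}
 \;\le\; \sum_{n\ge0}\big\|I_n(f_n(\cdot,t,x))-I_n(f_n(\cdot,t,y))\big\|_{L^p(\Omega)}
 \;\le\; |x-y|^{\beta}\,\sum_{n\ge0}\frac{C^n p^{n/2}}{\Gamma(an+1)},
% and the last series is a Mittag--Leffler function evaluated at C p^{1/2},
% hence finite for every p.
```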
We are now ready to prove the upper bound part of Theorem 1.

Proof. We shall use the same kind of notation as in the proof of Proposition 5.8, sometimes with a slightly different meaning (which should be clear from the context). Fix $R>0$ and divide the interval $[-R,R]$ into subintervals $I_j$, $j=1,\dots,N_R$, all of the same length (notice that $N_R$ is now a cardinality instead of a set as in Proposition 5.8), with length less than or equal to some threshold to be chosen later. Pick one point $x_j$ in each interval $I_j$. By convention, we assume that $I_1$ contains $0$, and we choose $x_1=0$.
We get the bound $N_R\,\mathbf P\big(\log u(t,0)\ge\lambda(\log R)^{\frac{1}{1+H}}\big)$