Remarks on non-linear noise excitability of some stochastic heat equations

We consider nonlinear parabolic SPDEs of the form $\partial_t u=\Delta u + \lambda \sigma(u)\dot w$ on the interval $(0, L)$, where $\dot w$ denotes space-time white noise and $\sigma$ is Lipschitz continuous. Under Dirichlet boundary conditions and a linear growth condition on $\sigma$, we show that the expected $L^2$-energy is of order $\exp[\text{const}\times\lambda^4]$ as $\lambda\rightarrow \infty$. This significantly improves a recent result of Khoshnevisan and Kim. Our method is very different from theirs, and it allows us to arrive at the same conclusion for the same equation but with Neumann boundary condition. This improves another result of Khoshnevisan and Kim.


Introduction
The main objective of this paper is to study the effect of noise on the solutions to various stochastic heat equations. Fix $L > 0$ and consider the following equation,
$$\partial_t u_t(x) = \Delta u_t(x) + \lambda\,\sigma(u_t(x))\,\dot w(t,x), \quad x\in(0,L),\ t>0, \tag{1.1}$$
subject to the Dirichlet boundary condition $u_t(0)=u_t(L)=0$, where $\dot w$ denotes space-time white noise and $\sigma$ is Lipschitz continuous and satisfies $l_\sigma|x|\le|\sigma(x)|\le L_\sigma|x|$, where $0 < l_\sigma \le L_\sigma < \infty$. Our study is motivated by a recent paper of Khoshnevisan and Kim [6], where the authors initiated the study of the effect of $\lambda$ on the energy of the solution. We will shortly describe their results in a bit more detail, but let us mention that existence and uniqueness is not an issue for us. It is well known that the above equation has a unique mild solution satisfying
$$\sup_{x\in(0,L)}\sup_{t\in[0,T]}\mathrm{E}|u_t(x)|^2<\infty \quad\text{for all } T>0, \tag{1.2}$$
with the integral representation
$$u_t(x) = (G_D u)_t(x) + \lambda\int_0^t\int_0^L p_D(t-s,x,y)\,\sigma(u_s(y))\,w(dy\,ds), \tag{1.3}$$
where $(G_D u)_t(x) := \int_0^L p_D(t,x,y)u_0(y)\,dy$ and $p_D(t,x,y)$ denotes the Dirichlet heat kernel. As usual, (1.3) will be the starting point of most of our analysis. For more information about existence and uniqueness, see [3] or [9].
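Although the analysis below is purely analytic, the growth phenomenon under study is easy to observe numerically. The following is a minimal sketch (purely illustrative and not part of the paper's method; the explicit scheme, the choice $\sigma(u)=u$ and all discretisation parameters are our own assumptions) of a finite-difference approximation to (1.1):

```python
import math
import random

def l2_energy(lam, L=1.0, nx=20, T=0.1, nt=2000, seed=0):
    """Explicit finite-difference / Euler scheme for (1.1) with sigma(u) = u:
    du = u_xx dt + lam * u dW, Dirichlet conditions u(0) = u(L) = 0.
    Space-time white noise is discretised cell-wise as N(0,1) * sqrt(dt/dx)."""
    rng = random.Random(seed)
    dx = L / nx
    dt = T / nt  # dt < dx^2 / 2, so the explicit scheme is stable
    # interior points; illustrative initial profile sin(pi x / L)
    u = [math.sin(math.pi * (i + 1) * dx / L) for i in range(nx - 1)]
    for _ in range(nt):
        nxt = []
        for i in range(nx - 1):
            left = u[i - 1] if i > 0 else 0.0   # Dirichlet: u(0) = 0
            right = u[i + 1] if i < nx - 2 else 0.0  # Dirichlet: u(L) = 0
            lap = (left - 2.0 * u[i] + right) / dx ** 2
            noise = lam * u[i] * rng.gauss(0.0, 1.0) * math.sqrt(dt / dx)
            nxt.append(u[i] + lap * dt + noise)
        u = nxt
    return math.sqrt(sum(v * v for v in u) * dx)  # discrete L^2(0, L) norm
```

For $\lambda=0$ the scheme reproduces the deterministic heat decay of the initial profile; increasing $\lambda$ inflates the expected $L^2$-energy, which is exactly the effect quantified in this paper.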
To describe our results in a precise manner, we adopt some notation and definitions from [6] and [7]. We begin by defining the energy of the solution at time $t$ by
$$\mathcal{E}_t(\lambda) := \sqrt{\mathrm{E}\int_0^L|u_t(x)|^2\,dx}.$$
One of the main results in [6] states that as $\lambda$ gets large, $\mathcal{E}_t(\lambda)$ grows at most like $\exp(\mathrm{const}\times\lambda^4)$ but at least like $\exp(\mathrm{const}\times\lambda^2)$. The current project grew out of trying to understand this discrepancy. The following indices were introduced in [7] to capture the super-exponential growth just mentioned:
$$\bar e(t) := \limsup_{\lambda\to\infty}\frac{\log\log\mathcal{E}_t(\lambda)}{\log\lambda}, \qquad e(t) := \liminf_{\lambda\to\infty}\frac{\log\log\mathcal{E}_t(\lambda)}{\log\lambda}.$$
When $\bar e(t)$ and $e(t)$ are equal, we simply refer to the common value as the nonlinear noise excitation index of the solution at time $t$; it is required to be strictly positive. We are now ready to state the first main result of the paper.
Theorem 1.3. Fix $t>0$ and let $u_t$ denote the unique mild solution to (1.1). Then the nonlinear noise excitation index of the solution at time $t$ is $4$.
Estimating the lower excitation index is the main contribution of this paper, and our approach requires two new ideas, which we now describe.
1. We use a couple of renewal inequalities which give the desired upper and lower bounds on the energy. The use of renewal-theoretic ideas was introduced in [4], but here we use them in a different manner; see Remark 2.3.

2. To arrive at these renewal inequalities, we make use of the idea that, for small times and away from the boundary, the Dirichlet heat kernel behaves much like the Gaussian heat kernel. This idea has been the subject of intense investigation for decades; see [1] and [8]. Since we are working in spatial dimension one, we provide complete analytic proofs of the main estimates we need.
It is also interesting to note that in [6], the bound on the upper index was the harder part of the proof. Here the complete opposite is true: the lower bound is much harder and requires the second idea mentioned above, which is entirely novel. As far as we know, Gaussian estimates for the Dirichlet Laplacian have never been used in the study of stochastic partial differential equations. Using these two ideas, we are able to improve the bound on the lower index. It turns out that our method can be adapted to study the same stochastic PDE but with Neumann boundary condition. We now describe our main findings in this context. Consider the following equation,
$$\partial_t u_t(x) = \Delta u_t(x) + \lambda\,\sigma(u_t(x))\,\dot w(t,x), \quad x\in(0,L),\ t>0, \tag{1.5}$$
subject to the Neumann boundary conditions $\partial_x u_t(0)=\partial_x u_t(L)=0$. Here we stress the fact that, as opposed to [6], we do not require our initial function to be bounded below; any bounded, nonrandom, compactly supported, nonnegative initial function will be enough. It is well known that there exists a unique mild solution satisfying (1.2) with the integral representation
$$u_t(x) = (G_N u)_t(x) + \lambda\int_0^t\int_0^L p_N(t-s,x,y)\,\sigma(u_s(y))\,w(dy\,ds),$$
where $(G_N u)_t(x) := \int_0^L p_N(t,x,y)u_0(y)\,dy$ and $p_N(t,x,y)$ is the Neumann heat kernel. We again refer to [3] and [9] for more information about various technicalities. To state our main result for (1.5), we set the following notation,
$$I_t(\lambda) := \inf_{x\in[0,L]}\left(\mathrm{E}|u_t(x)|^2\right)^{1/2}, \qquad S_t(\lambda) := \sup_{x\in[0,L]}\left(\mathrm{E}|u_t(x)|^2\right)^{1/2},$$
where $u_t$ is the solution to (1.5).
An immediate consequence of the above is the following. Our technique seems to be suited to the study of a wider class of stochastic equations. If the Laplacian in, say, (1.1) were replaced by the fractional Dirichlet Laplacian of order $\alpha$ and the white noise were replaced by a colored noise with Riesz kernel of order $\beta$, we conjecture that the non-linear excitation index is $2\alpha/(\alpha-\beta)$. This is currently under investigation and will be the subject of [5].
We end this introduction with the plan of the article. Section 2 contains the renewal-type inequalities. Section 3 contains the relevant Dirichlet heat kernel estimates and the proof of Theorem 1.3. Section 4 contains the corresponding estimates for the Neumann heat kernel as well as the proof of Theorem 1.4 and its corollary.

Some estimates
This section will be devoted to the renewal-type inequalities mentioned in the introduction. The perceptive reader will recognise that the presence of the square root inside the integrals is motivated by the Gaussian heat kernel.
Proposition 2.1. Suppose $f$ is a nonnegative, locally integrable function satisfying
$$f(t)\le a + b\int_0^t\frac{f(s)}{\sqrt{t-s}}\,ds \quad\text{for all } 0\le t\le T, \tag{2.1}$$
where $a$ and $b$ are positive constants and $T < \infty$. Then for each $t\in[0,T]$,
$$f(t)\le C_1 e^{C_2 b^2 t},$$
where $C_1$ and $C_2$ are positive constants, with $C_1$ depending on $a$, $b$ and $T$.
Proof. We start off by iterating inequality (2.1) once to obtain
$$f(t)\le a + ab\int_0^t\frac{ds}{\sqrt{t-s}} + b^2\int_0^t\frac{1}{\sqrt{t-s}}\int_0^s\frac{f(r)}{\sqrt{s-r}}\,dr\,ds.$$
We change the order of integration in the above double integral to find that
$$\int_0^t\frac{1}{\sqrt{t-s}}\int_0^s\frac{f(r)}{\sqrt{s-r}}\,dr\,ds = \int_0^t f(r)\int_r^t\frac{ds}{\sqrt{(t-s)(s-r)}}\,dr = c_1\int_0^t f(r)\,dr,$$
where $c_1$ is some positive constant. This, together with the above inequality, gives
$$f(t)\le c_2\left(1+\int_0^t f(r)\,dr\right)$$
for some positive constant $c_2$. A suitable version of Gronwall's inequality now finishes the proof of the proposition.
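For completeness, the inner integral produced by the interchange of the order of integration can be evaluated explicitly. Substituting $s = r + (t-r)v$,
$$\int_r^t\frac{ds}{\sqrt{(t-s)(s-r)}} = \int_0^1\frac{dv}{\sqrt{v(1-v)}} = B\!\left(\tfrac12,\tfrac12\right) = \pi,$$
so the constant $c_1$ in the proof may be taken to be $\pi$, independently of $t$ and $r$; Gronwall's inequality then gives the explicit bound $f(t)\le\left(a+2ab\sqrt{T}\right)e^{\pi b^2 t}$.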
We now reverse the inequality in the statement of the proposition to obtain the following result. The proof is pretty much the same as the above, so we omit it.
Proposition 2.2. Suppose $f$ is a nonnegative, locally integrable function satisfying
$$f(t)\ge a + b\int_0^t\frac{f(s)}{\sqrt{t-s}}\,ds \quad\text{for all } 0\le t\le T,$$
where $a$ and $b$ are positive constants and $T < \infty$. Then for each $t\in(0,T]$,
$$f(t)\ge C_1 e^{C_2 b^2 t}$$
for some positive constants $C_1$ and $C_2$.
Remark 2.3. Obviously, the constants $a$ and $b$ appearing in Proposition 2.1 might be different from those appearing in Proposition 2.2. In a sense, the above results are not new. What is original about them here is that they are used to obtain information about the growth of the function with respect to the parameter $b$ (and hence $\lambda$) rather than with respect to $t$.
The Dirichlet equation
We start off with a result which gives a lower bound on the Dirichlet heat kernel in terms of the Gaussian heat kernel. This is borrowed from [2], but we give a proof here for the sake of completeness. Recall that, from the method of images, we have the following representation,
$$p_D(t,x,y) = \frac{1}{\sqrt{4\pi t}}\sum_{n=-\infty}^{\infty}\left[e^{-(x-y+2nL)^2/4t} - e^{-(x+y+2nL)^2/4t}\right]. \tag{3.1}$$
Lemma 3.1. Suppose that $x, y\in(0,L)$ and set $\epsilon := \min\{x,\,y,\,L-x,\,L-y\}$; then we have
$$p_D(t,x,y)\ge\left(1-2e^{-\epsilon^2/t}\right)p(t,x,y).$$
Proof. The proof involves rewriting (3.1) in a suitable way and making use of the following observation: for $n\ge1$ and $x,y\in(0,L)$,
$$|x+y+2nL|\ge|x-y+2nL| \tag{3.2}$$
and
$$|-(x+y)+2(n+1)L|\ge|2nL-(x-y)|. \tag{3.3}$$
We can now group the terms of (3.1) in pairs so that, apart from the leading terms, each pair makes a nonnegative contribution.
We now use (3.2) and (3.3) together with (3.1) to conclude that
$$p_D(t,x,y)\ge p(t,x,y) - \frac{1}{\sqrt{4\pi t}}\left[e^{-(x+y)^2/4t} + e^{-(x+y-2L)^2/4t}\right].$$
This and the definition of $\epsilon$ essentially finish the proof, since $e^{-(x+y)^2/4t}\le e^{-\epsilon^2/t}\,e^{-(x-y)^2/4t}$, and similarly for the other term.
A consequence of the above lemma is that away from the boundary, the Dirichlet heat kernel behaves pretty much like the Gaussian one provided that time is small enough. This is intuitively obvious from the probabilistic point of view.
Corollary 3.2. Fix $\epsilon>0$; then there exists $t_0>0$ depending on $\epsilon$ such that for all $t\le t_0$ and all $x,y\in[\epsilon, L-\epsilon]$, we have
$$p_D(t,x,y)\ge\frac12\,p(t,x,y).$$
Proof. The result follows from the above lemma upon choosing $t_0$ so small that $2e^{-\epsilon^2/t_0}\le\frac12$.
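The two kernel bounds above are easy to check numerically. The following sketch (an illustration only, not part of the proofs; all numerical parameters are arbitrary choices) compares a truncated method-of-images series for $p_D$ with the Gaussian kernel $p$:

```python
import math

def gauss(t, z):
    # free Gaussian heat kernel p(t, x, y), written in terms of z = x - y
    return math.exp(-z * z / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def dirichlet_kernel(t, x, y, L=1.0, n_images=50):
    # truncated method-of-images series (3.1) for the Dirichlet kernel on (0, L)
    return sum(
        gauss(t, x - y + 2.0 * n * L) - gauss(t, x + y + 2.0 * n * L)
        for n in range(-n_images, n_images + 1)
    )
```

Away from the boundary and for small $t$ the ratio $p_D/p$ is essentially $1$, while near the boundary it degrades, exactly as Lemma 3.1 and Corollary 3.2 predict.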
Another starting point for the proof of the above result could be the following. Recall that the Dirichlet heat kernel is the transition density of a killed Brownian motion. Let $\tau$ denote the first exit time of Brownian motion from $(0, L)$; then, since $p(t,x,y)$ is the transition density of Brownian motion, we have
$$p_D(t,x,y) = P_x(\tau>t \mid B_t=y)\,p(t,x,y).$$
We also have a similar representation involving $X_D$, the position of the Brownian motion when it hits the boundary, making the following trivial.

Lemma 3.3.
For all $x, y\in(0,L)$ and $t>0$, the following holds:
$$p_D(t,x,y)\le p(t,x,y). \tag{3.4}$$
The next result says that, provided we stay away from the boundary, we can find a suitable lower bound on the growth of the second moment of the solution of (1.1). To state our result, we introduce the following notation: for $\epsilon>0$,
$$I_{\epsilon,t}(\lambda) := \inf_{x\in[\epsilon,L-\epsilon]}\mathrm{E}|u_t(x)|^2,$$
where $u_t$ is the solution to (1.1).
Proposition 3.4. Fix $\epsilon\in(0,L/2)$ and let $t_0$ be as in Corollary 3.2. Then there exist positive constants $c_5$ and $c_6$ such that for all $t\le t_0$,
$$I_{\epsilon,t}(\lambda)\ge c_5\,e^{c_6\lambda^4 t} \tag{3.7}$$
for all $\lambda$ large enough.
Proof. Using the mild formulation and Itô's isometry, we have
$$\mathrm{E}|u_t(x)|^2 = |(G_D u)_t(x)|^2 + \lambda^2\int_0^t\int_0^L p_D(t-s,x,y)^2\,\mathrm{E}|\sigma(u_s(y))|^2\,dy\,ds. \tag{3.8}$$
We now fix an $\epsilon>0$ and let $t_0$ be defined as in the proof of Corollary 3.2. We bound the first term on the right-hand side of the above display first. Recall that $(G_D u)_t(x)$ solves the deterministic heat equation, that is, (1.1) with $\lambda=0$. Provided we stay away from the boundary, it is bounded below by a constant which depends on $t$; in other words, for $x\in[\epsilon, L-\epsilon]$, $|(G_D u)_t(x)|^2\ge c_1$ for some positive constant $c_1$ depending on $t$. We now look at the second term. Using Corollary 3.2 together with the lower bound on $\sigma$, we obtain, for $t\le t_0$,
$$\lambda^2\int_0^t\int_0^L p_D(t-s,x,y)^2\,\mathrm{E}|\sigma(u_s(y))|^2\,dy\,ds \ge \frac{l_\sigma^2\lambda^2}{4}\int_0^t\int_\epsilon^{L-\epsilon} p(t-s,x,y)^2\,\mathrm{E}|u_s(y)|^2\,dy\,ds.$$
We now estimate the innermost integral appearing in the above line. For fixed $t$ and $s$, and for $x\in[\epsilon,L-\epsilon]$,
$$\int_\epsilon^{L-\epsilon} p(t-s,x,y)^2\,dy \ge \frac{c_3}{\sqrt{t-s}}$$
provided $t-s\le c_4$, for some constants $c_3$ and $c_4$. Combining the above estimates, after possibly shrinking $t_0$, yields the following inequality,
$$I_{\epsilon,t}(\lambda)\ge c_1 + c_5\,l_\sigma^2\lambda^2\int_0^t\frac{I_{\epsilon,s}(\lambda)}{\sqrt{t-s}}\,ds \quad\text{for } t\le t_0.$$
The proof now follows from an application of Proposition 2.2.
We are now ready to prove Theorem 1.3.

Proof of Theorem 1.3.
We will first show that $\bar e(t)\le4$; this will be done in one step. We will then show that $e(t)\ge4$, which we do in two steps: we first prove the bound for small times and then extend it to all times by using a suitable trick.

Proof of the upper bound.
We start off with the mild formulation, take the second moment and then integrate over $[0,L]$ to obtain
$$\mathcal{E}_t(\lambda)^2 = \int_0^L|(G_D u)_t(x)|^2\,dx + \lambda^2\int_0^L\int_0^t\int_0^L p_D(t-s,x,y)^2\,\mathrm{E}|\sigma(u_s(y))|^2\,dy\,ds\,dx.$$
For fixed $t$, the first term is bounded. We now turn our attention to the second term. Using (3.4), the linear growth of $\sigma$ and the semigroup property of the heat kernel, we end up with
$$\lambda^2\int_0^L\int_0^t\int_0^L p_D(t-s,x,y)^2\,\mathrm{E}|\sigma(u_s(y))|^2\,dy\,ds\,dx \le \frac{L_\sigma^2\lambda^2}{\sqrt{8\pi}}\int_0^t\frac{\mathcal{E}_s(\lambda)^2}{\sqrt{t-s}}\,ds.$$
We now combine the above estimates to obtain
$$\mathcal{E}_t(\lambda)^2\le c_1 + \frac{L_\sigma^2\lambda^2}{\sqrt{8\pi}}\int_0^t\frac{\mathcal{E}_s(\lambda)^2}{\sqrt{t-s}}\,ds.$$
The upper bound is thus proved after an application of Proposition 2.1.
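The semigroup step used above can be made explicit. By (3.4) and the Chapman–Kolmogorov identity for the Gaussian kernel $p(t,x,y) = (4\pi t)^{-1/2}e^{-(x-y)^2/4t}$,
$$\int_0^L p_D(t-s,x,y)^2\,dy \le \int_{\mathbb{R}} p(t-s,x,y)^2\,dy = p(2(t-s),x,x) = \frac{1}{\sqrt{8\pi(t-s)}},$$
which is precisely what produces the $1/\sqrt{t-s}$ kernel required by Proposition 2.1.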

Proof of the lower bound.
Step 1: We first prove the lower bound for $t\le t_0$, where $t_0$ is some positive number. Being the solution to the deterministic heat equation, $(G_D u)_t(x)$ is bounded below by a constant depending on $t$; the first term of (3.8) is thus bounded below.
To find a lower bound on (3.8), we use the lower bound on $\sigma$ as well as the Markov property of the killed Brownian motion to bound the second term of (3.8) from below in terms of $I_{\epsilon,s}(\lambda)$, valid if $t\le t_0$, where $t_0$ depends on $\epsilon$. We combine the above estimates to obtain a lower bound on the energy. We now note that (3.7) actually means that $I_{\epsilon,s}(\lambda)$ grows at least like $\exp(c_8\lambda^4)$. Some calculus then finishes the proof for $t\le t_0$.
Step 2: We now show that the lower bound holds for any $t>0$. We may assume that $t>t_0$; otherwise there is nothing left to prove. Let $t_1$ be a small constant to be chosen later. We write $t = T + t_1$ and set $T := t - t_1$ for notational convenience. As we have seen before, the mild formulation of the solution yields a decomposition of the second moment of $u_{T+t_1}$; substituting $t = T + t_1$ in that display, a few lines of computation together with a change of variables give an inequality of the same form as the one used in Step 1, but with $(G_D u)_{T+t_1}$ in place of $(G_D u)_t$. A close inspection of the proof given in Step 1 shows that this inequality is all we need, once we choose $t_1 = t_0/2$ and show that $|(G_D u)_{T+t_1}(x)|^2$ is strictly positive. The latter quantity can be bounded below by a positive constant which depends on $T$ and $t_1$. We leave it to the reader to fill in the details.

The Neumann equation
We begin this section with a couple of estimates on the Neumann heat kernel. First, recall that by the method of images, we have
$$p_N(t,x,y) = \frac{1}{\sqrt{4\pi t}}\sum_{n=-\infty}^{\infty}\left[e^{-(x-y+2nL)^2/4t} + e^{-(x+y+2nL)^2/4t}\right]. \tag{4.1}$$
From the above series, we trivially have
$$p_N(t,x,y)\ge p(t,x,y). \tag{4.2}$$
And a little more work shows the following: for all $t\le T$,
$$p_N(t,x,y)\le\frac{c_T}{\sqrt{t}},$$
where $c_T$ is some constant depending on $T > 0$.
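As with the Dirichlet kernel, these estimates can be sanity-checked numerically from a truncated version of the series (4.1); the sketch below (an illustration only, with arbitrary numerical parameters) also confirms that the Neumann kernel conserves mass:

```python
import math

def gauss(t, z):
    # free Gaussian heat kernel p(t, x, y), written in terms of z = x - y
    return math.exp(-z * z / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def neumann_kernel(t, x, y, L=1.0, n_images=50):
    # truncated method-of-images series (4.1); reflections enter with a plus sign
    return sum(
        gauss(t, x - y + 2.0 * n * L) + gauss(t, x + y + 2.0 * n * L)
        for n in range(-n_images, n_images + 1)
    )
```

Since every summand is positive, the series dominates its $n=0$ Gaussian term, which is exactly (4.2); integrating the kernel in $y$ over $[0, L]$ returns $1$, reflecting conservation of mass under the Neumann condition.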

Proof of Theorem 1.4
Proof. We begin with the lower bound. We start off with the mild formulation and take second moments to end up with
$$\mathrm{E}|u_t(x)|^2 = |(G_N u)_t(x)|^2 + \lambda^2\int_0^t\int_0^L p_N(t-s,x,y)^2\,\mathrm{E}|\sigma(u_s(y))|^2\,dy\,ds. \tag{4.4}$$
We bound the first term on the right-hand side of the above display first. Since $(G_N u)_t(x)$ solves the corresponding deterministic problem, for fixed $t>0$ it is bounded below by a positive constant depending on $t$.
We now deal with the second term. We will again use (4.2) as well as the definition of $I_t(\lambda)$. Combining the above inequalities, we obtain
$$I_t(\lambda)^2\ge c_1 + c_2\lambda^2\int_0^t\frac{I_s(\lambda)^2}{\sqrt{t-s}}\,ds.$$
An application of Proposition 2.2 yields the lower bound stated in the theorem. We now prove the upper bound. Our starting point is (4.4). Finding an upper bound on the first term is straightforward since the initial condition is a bounded function. For the second term, we need a bit more work.
Indeed, the estimates on the Neumann heat kernel from the beginning of this section give
$$\int_0^L p_N(t-s,x,y)^2\,dy\le\frac{c_T}{\sqrt{t-s}}.$$
With this inequality, (4.4) reduces to
$$S_t(\lambda)^2\le c_1 + c_2\lambda^2\int_0^t\frac{S_s(\lambda)^2}{\sqrt{t-s}}\,ds.$$
An application of Proposition 2.1 now yields the desired result.

Proof of Corollary 1.5
The proof of Corollary 1.5 is straightforward.
Proof. Note that $\mathrm{E}\|u_t\|^2_{L^2[0,L]}\le S_t(\lambda)^2 L$, from which the upper bound follows. As for the lower bound, we have
$$\mathrm{E}\|u_t\|^2_{L^2[0,L]}\ge I_t(\lambda)^2 L,$$
and the result follows from Theorem 1.4.