The Schrödinger equation with spatial white noise potential

We consider the linear and nonlinear Schrödinger equation with a spatial white noise as a potential in dimension 2. We prove existence and uniqueness of solutions thanks to a change of unknown originally used in [8] and to conserved quantities.


Introduction
In this work we study a linear or nonlinear Schrödinger equation on a periodic domain with a random potential given by a spatial white noise in dimension 2. This equation is important for various purposes. In the linear case, it is used to study Anderson localisation; it is a complex version of the famous PAM model. In the nonlinear case, it describes the evolution of nonlinear dispersive waves in a totally disordered medium (see for instance [9], [12] and the references therein).
If u denotes the unknown, the equation is given by:

i du/dt = ∆u + uξ + λ|u|²u,  x ∈ T², t > 0,

where T² denotes the two-dimensional torus, identified with [0, 2π]², and ξ is a real-valued spatial white noise. Of course, λ = 0 for the linear equation. A positive λ corresponds to the focusing case and λ < 0 to the defocusing case. For simplicity, we take λ = ±1 in the nonlinear case. The qualitative properties of the solutions are completely different in these two cases.
We are here interested in the question of existence and uniqueness of solutions. This is a preliminary but important step before studying other phenomena, such as solitary waves, blow-up phenomena or Anderson localisation. The main difficulty is due to the presence of the rough potential. Recall that in dimension 2 a white noise has negative regularity, strictly less than −1. In the parabolic case (i.e. with i du/dt replaced by du/dt), renormalized solutions were recently constructed in [13]. However, their argument relies crucially on the strong regularising properties of the heat semigroup, which are not available for the Schrödinger equation. Some smoothing properties for the Schrödinger equation are known when it is set on the plane. For instance, in [6], Section 2.5, it is shown that localized initial data yield smooth solutions; further results are mentioned in Remark 2.7.8 of the same book. See also [8,14,16,18] for further results, as well as [4,17] for deterministic nonlinear smoothing in the euclidean setting. In the periodic case considered here, however, similar linear smoothing results do not hold. Some deterministic nonlinear smoothing holds for the cubic NLS on the one-dimensional torus (see [10,11,15]), but there is no such deterministic smoothing for quintic or higher nonlinearities, and it is believed not to hold in higher dimensions regardless of the nonlinearity.
Instead of using regularising estimates for the Schrödinger semigroup, we rely on the Hamiltonian structure of the equation and its conserved quantities. We use the same transformation as introduced in [13]. At the level of the transformed equation, the conservation of mass and energy gives enough control to construct solutions taking values in almost H²(T²), the standard Sobolev space of functions with derivatives up to order 2 in L²(T²). Inverting the transformation, this yields solutions in almost H¹(T²). This is rather surprising since the regularity is comparable to what is obtained in the parabolic case, where strong smoothing properties are available. However, we have to assume more structure on the initial datum than is needed in the parabolic case: at the level of the transformed equation the initial datum has to have H²(T²) regularity, which translates into the assumption that the initial datum for the original equation is the product of an H²(T²) function and an explicit function of regularity almost H¹(T²) which depends on the specific realisation of the noise ξ. This may be surprising at first glance but, remembering that the domain of the linear operator appearing in the equation is random ([1]), it is in fact natural.
As in the parabolic case, a renormalization is necessary; at the level of the original equation, the renormalized equation formally reads:

i du/dt = ∆u + uξ − ∞·u + λ|u|²u.

The transformation u → e^{i∞t}u maps the original equation into the renormalized one. Therefore, the renormalization amounts to renormalizing only the phase. A similar remark was made in [3] in a related but different context.
We first consider the linear equation, λ = 0. In this way the ideas can be explained in a simpler setting. We use a transformation introduced in [13] in the parabolic case; it transforms the equation into a less singular one. Conservation of the L²(T²) and H¹(T²) norms implies bounds in these spaces, but these are not yet sufficient to construct a solution. We then use conservation of the L²(T²) norm of the time derivative to get bounds in H² for the transformed equation with a smoothed noise. The bound explodes as the smoothing is removed, but thanks to an idea reminiscent of interpolation theory we show that it implies uniform H^γ bounds, γ < 2, on the smoothed solutions. This is sufficient to prove that the smoothed solutions converge to a solution of the transformed equation.
Going back to the original equation yields a solution in H 1 . For λ < 0, we obtain global solutions for any initial data satisfying some smoothness assumptions. For λ > 0, as in the deterministic case, we need a smallness assumption on the initial data. The ideas are similar but the estimates are more delicate.
We could of course consider the equation with a more general nonlinearity |u|^{2σ}u with σ ≤ 1. For σ < 1, no restriction on the size of the initial data is required for λ > 0.
Another easy generalization is to consider a general bounded domain with Dirichlet boundary conditions, as long as the boundary is sufficiently smooth and the Green function of the Laplace operator has sufficiently good properties that Lemma 2.1 below holds.
The study of the linear equation is closely related to the understanding of the Schrödinger operator with white noise potential. This is the subject of a recent very interesting article by Allez and Chouk ( [1]) where the paracontrolled calculus is used to study the domain and spectrum of this operator. It is not clear how this can be used for the nonlinear equation.

Notation
We use the classical L^p = L^p(T²) spaces for p ∈ [1, ∞], as well as the L²-based Sobolev spaces H^s = H^s(T²) for s ∈ R and the Besov spaces B^s_{p,q} = B^s_{p,q}(T²), for s ∈ R, p, q ∈ [1, ∞]. These are defined in terms of Fourier series and Littlewood–Paley theory (see [2]). Recall that for s > 2/p the space B^s_{p,q}(T²) embeds into L^∞(T²). Throughout the article, c denotes a constant which may change from one line to the next. Also, we use a small parameter 0 < ε < e^{−1}, and K_ε denotes a random constant which may also change from line to line but is such that E[K_ε^p] is uniformly bounded in ε for all p. Similarly, for 0 < ε₁, ε₂ < e^{−1}, the random constant K_{ε1,ε2} may depend on ε₁, ε₂ but E[K_{ε1,ε2}^p] is uniformly bounded in ε₁, ε₂ for all p.

Preliminaries
We consider the following nonlinear Schrödinger equation in dimension 2 on the torus (that is, periodic boundary conditions are assumed) for the complex-valued unknown u = u(x, t):

i du/dt = ∆u + uξ + λ|u|²u,  x ∈ T², t > 0.   (2.1)

It is supplemented with the initial data

u(0) = u₀.

We need to choose initial data of a special form which depends on the realisation of the noise ξ. This will be made precise below. In the focusing case λ > 0 we need an extra assumption on the size of ‖u₀‖_{L²}, which has to be small enough (see (4.3) below). The potential ξ is random and is a real-valued spatial white noise on T². For simplicity, we assume that it has zero spatial average. The general case could be recovered by adding an additional Gaussian random potential which is constant in space. This would not change the analysis below.
Formally, equation (2.1) has two invariant quantities. Given a solution u of (2.1), the mass:

N(u) = ∫_{T²} |u|² dx

is constant in time, as well as the energy:

H(u) = ∫_{T²} |∇u|² dx − ∫_{T²} ξ|u|² dx − (λ/2) ∫_{T²} |u|⁴ dx.

This is formal because the noise ξ is very rough. In dimension 1, the noise has regularity (−1/2)⁻, i.e. it belongs to B^α_{∞,∞} for any α < −1/2; therefore the product ξ|u|² can be defined rigorously for u ∈ H¹ and this provides a bound in H¹. Existence and uniqueness can then be obtained through regularization of the noise and a compactness argument.
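At the level of a smooth (e.g. mollified) potential, the conservation of the mass N(u) reflects the unitarity of the flow, and it can be observed numerically. The following sketch (a hypothetical smooth stand-in potential and arbitrary numerical parameters, not the singular equation itself) integrates i du/dt = ∆u + Vu by a split-step Fourier method whose substeps are all unitary:

```python
import numpy as np

# Minimal split-step Fourier integrator for i du/dt = Delta u + V u
# on [0, 2*pi]^2 with a *smooth* potential V (a stand-in for xi_eps).
n, dt, steps = 64, 1e-3, 200
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
V = np.cos(X) + np.sin(2 * Y)                       # smooth surrogate potential
u = np.exp(-2 * ((X - np.pi) ** 2 + (Y - np.pi) ** 2)).astype(complex)

k = np.fft.fftfreq(n, d=1.0 / n)                    # integer frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
lap_symbol = -(KX**2 + KY**2)                       # Fourier symbol of Delta

mass0 = np.sum(np.abs(u) ** 2)
for _ in range(steps):
    u *= np.exp(-1j * dt * V / 2)                   # half-step: potential
    u = np.fft.ifft2(np.exp(-1j * dt * lap_symbol) * np.fft.fft2(u))
    u *= np.exp(-1j * dt * V / 2)                   # half-step: potential
mass1 = np.sum(np.abs(u) ** 2)
```

Both substeps multiply by unimodular factors (in physical and in Fourier space, respectively), so the discrete mass Σ|u|² is conserved to machine precision, mirroring the conservation of N(u).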
In dimension 2, the noise lives in any space with regularity −1 − , that is any regularity strictly less than −1, and the solution is not expected to be sufficiently smooth to compensate this. In fact, the product is almost well defined for u ∈ H 1 and we are in a situation similar to the two dimensional nonlinear heat equation with spatial white noise (i.e. the two dimensional continuous parabolic Anderson model). We expect that a renormalization is necessary.
Inspired by [13], we introduce:

Y = ∆^{−1}ξ

(note that this is well defined since we consider a zero-average noise; we choose Y also with zero average) and v = u e^{Y}. Then the equation for v reads:

i dv/dt = ∆v − 2∇Y · ∇v + v|∇Y|² + λ e^{−2Y}|v|²v.

The random field Y has regularity 1⁻ and ∇Y has regularity 0⁻. Thus this transformation has lowered the roughness of the most irregular term on the right-hand side. At this point it is easier to see why we need a renormalization: the term |∇Y|² is not well defined since ∇Y is not a function. However, the roughness is mild here, and it has long been known that, up to renormalization by a logarithmically divergent constant, this square can be defined as an element of the second Wiener chaos based on ξ.
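For illustration, the objects ξ, Y = ∆^{−1}ξ and the change of unknown can be realised spectrally on a grid; a minimal sketch (grid size, seed and the normalisation of the discrete noise are only indicative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
dx = 2 * np.pi / n

# Real-valued discrete white noise on the torus, zero spatial average;
# grid values are independent Gaussians whose variance scales like 1/dx^2.
xi = rng.standard_normal((n, n)) / dx
xi -= xi.mean()

# Y = Delta^{-1} xi via Fourier multipliers: Y_hat(k) = -xi_hat(k)/|k|^2,
# with the k = 0 mode set to zero (zero-average convention for Y).
k = np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
xi_hat = np.fft.fft2(xi)
Y_hat = np.where(k2 > 0, -xi_hat / np.maximum(k2, 1), 0.0)
Y = np.fft.ifft2(Y_hat).real

# change of unknown: v = u * exp(Y); u is recovered as v * exp(-Y)
```

Applying ∆ on the Fourier side recovers ξ exactly (up to floating-point error), which is a quick consistency check of the construction.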
Let us be more precise. Let ρ_ε = ε^{−2}ρ(·/ε) be a compactly supported smooth mollifier and consider the smooth noise ξ_ε = ρ_ε * ξ. We denote Y_ε = ∆^{−1}ξ_ε. Then it is proved in [13] that, for every κ > 0, ξ belongs almost surely to B^{−1−κ}_{∞,∞} and, as ε → 0, ξ_ε converges in probability to ξ in B^{−1−κ}_{∞,∞}. Also, denoting C_ε = E(|∇Y_ε|²) (a constant, by stationarity), |∇Y_ε|² − C_ε converges in L^p(Ω; B^{−κ}_{∞,∞}), for any p ≥ 1, κ > 0, to a random variable :|∇Y|²: in the second Wiener chaos associated to ξ. It is easy to see that C_ε goes to ∞ like |ln ε| as ε → 0. This discussion is summarised in the following lemma, whose proof can be found in [13, Lemma 1.1 and Proposition 1.3] in the more difficult case of the space variable in R².

Lemma 2.1. For 1 ≥ κ > κ′ > 0 and any p ≥ 1, there exists a constant c independent of ε such that:

E( ‖Y − Y_ε‖^p_{B^{1−κ}_{∞,∞}} )^{1/p} ≤ c √p ε^{κ−κ′−2/p},

E( ‖:|∇Y|²: − (|∇Y_ε|² − C_ε)‖^p_{B^{−κ}_{∞,∞}} )^{1/p} ≤ c √p ε^{κ−κ′−2/p}.

Remark 2.2. Using the monotonicity of stochastic L^p norms in p, one can drop the exponent −2/p in the right-hand side. We state the result in this way because this is the bound that one actually proves. Below, we use this bound with κ − 2/p = κ/2.
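The logarithmic divergence of C_ε can be seen on the Fourier side: with a sharp cutoff at frequencies |k| ≤ 1/ε playing the role of ρ_ε, and an indicative normalisation, E|∇Y_ε|² is a constant multiple of Σ_{0<|k|≤1/ε} |k|^{−2} ≈ 2π ln(1/ε). A short numerical check:

```python
import numpy as np

def shell_sum(N):
    """Sum of 1/|k|^2 over nonzero lattice points k in Z^2 with |k| <= N."""
    ks = np.arange(-N, N + 1)
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    k2 = kx**2 + ky**2
    mask = (k2 > 0) & (k2 <= N**2)
    return (1.0 / k2[mask]).sum()

# shell_sum(1/eps) plays the role of C_eps: it grows like 2*pi*ln(1/eps).
```

The difference shell_sum(1/ε₂) − shell_sum(1/ε₁) approaches 2π ln(ε₁/ε₂), consistent with C_ε ~ |ln ε|.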
Note that for s < s̃ and p, r ≥ 1, we have B^{s̃}_{∞,∞} ⊂ B^{s}_{p,r}. Thus, bounds in the latter Besov spaces follow.
We then consider the smoothed and renormalized equation:

i dv_ε/dt = ∆v_ε − 2∇Y_ε · ∇v_ε + v_ε :|∇Y_ε|²: + λ e^{−2Y_ε}|v_ε|²v_ε,   (2.3)

where :|∇Y_ε|²: = |∇Y_ε|² − C_ε, and, setting u_ε = v_ε e^{−Y_ε}:

i du_ε/dt = ∆u_ε + u_ε ξ_ε − C_ε u_ε + λ|u_ε|²u_ε.   (2.4)

Since Y_ε is smooth, it is classical to prove that these equations have a unique global solution in C([0, T]; H^k) for any T > 0 for initial data in H^k, k = 1, 2, provided the L² norm is small enough when λ > 0 (see for instance [6], Section 3.6). More details are given below. The transformed energy associated with (2.3) is:

H̃_ε(v) = ∫_{T²} |∇v|² e^{−2Y_ε} dx − ∫_{T²} :|∇Y_ε|²: |v|² e^{−2Y_ε} dx − (λ/2) ∫_{T²} |v|⁴ e^{−4Y_ε} dx.

Since the most irregular term :|∇Y_ε|²: here is not as rough as ξ, this transformed energy is a much better quantity than the original one. It is possible to give a meaning to it for ε = 0 and use it to get bounds in H¹.
Below, we use the following simple results.

Lemma 2.3.
For any κ ∈ (0, 1) and any p ≥ 1, there exists a constant c independent of ε such that:

E( ‖e^{−2Y_ε} − e^{−2Y}‖^p_{L^∞} )^{1/p} ≤ c ε^{κ/2}.

Proof. We write:

e^{−2Y_ε} − e^{−2Y} = (e^{−2(Y_ε − Y)} − 1) e^{−2Y}.

The result follows by the Hölder inequality, Lemma 2.1 (in the form of Remark 2.2) and Gaussianity to bound exponential moments of Y and Y_ε.

Lemma 2.4.
There exists a constant c independent of ε such that:

‖∇Y_ε‖_{L^∞} ≤ c |ln ε| ‖∇Y‖_{B^0_{∞,∞}}  and  ‖:|∇Y_ε|²:‖_{L^∞} ≤ c |ln ε|² (1 + ‖∇Y‖_{B^0_{∞,∞}})².

Proof. It suffices to write, with ∆_j the Littlewood–Paley blocks:

‖∇Y_ε‖_{L^∞} ≤ Σ_{j: 2^j ≤ ε^{−1}} ‖∆_j(ρ_ε * ∇Y)‖_{L^∞} + Σ_{j: 2^j > ε^{−1}} ‖∆_j(ρ_ε * ∇Y)‖_{L^∞} ≤ c |ln ε| ‖∇Y‖_{B^0_{∞,∞}},

since the mollification suppresses the frequencies larger than ε^{−1}. Similarly:

‖:|∇Y_ε|²:‖_{L^∞} ≤ ‖∇Y_ε‖²_{L^∞} + C_ε ≤ c |ln ε|² (1 + ‖∇Y‖_{B^0_{∞,∞}})²,

using C_ε ≤ c |ln ε|.

The linear case
In this section, we start with the linear case: λ = 0. Then the equation for v_ε reads:

i dv_ε/dt = ∆v_ε − 2∇Y_ε · ∇v_ε + v_ε :|∇Y_ε|²:,   (3.1)

and, setting u_ε = v_ε e^{−Y_ε}:

i du_ε/dt = ∆u_ε + u_ε ξ_ε − C_ε u_ε.   (3.2)

Global existence and uniqueness of a solution of (3.1) in C([0, T]; H²) for initial data in H² is easy to prove, for instance by a fixed point argument on a mild form of the equation; recall that (e^{it∆})_t is a strongly continuous group of isometries on any H^s. This mild solution satisfies (3.1) as an equality in L², see for instance [7], Chapter 4.
We take the initial data v_ε(0) = v₀ = u₀e^{Y} and assume below that it belongs to H². Note that this gives initial data for u_ε in (3.2) which depend on ε, but we recover the correct initial data in the limit ε → 0.
The mass and energy of a solution are now:

Ñ_ε(v) = ∫_{T²} |v|² e^{−2Y_ε} dx  and  H̃_ε(v) = ∫_{T²} |∇v|² e^{−2Y_ε} dx − ∫_{T²} :|∇Y_ε|²: |v|² e^{−2Y_ε} dx,

and both are constant in time for a solution of (3.1). For instance, thanks to this property we have:

‖v_ε(t)‖²_{L²} ≤ e^{2‖Y_ε‖_{L^∞}} Ñ_ε(v_ε(t)) = e^{2‖Y_ε‖_{L^∞}} Ñ_ε(v₀) ≤ e^{4‖Y_ε‖_{L^∞}} ‖v₀‖²_{L²}.

Since Y_ε converges in B^{1−κ}_{∞,∞} for any κ > 0 as ε tends to zero, we see that the mass gives a uniform bound in L² on v_ε. More precisely:

sup_{t≥0} ‖v_ε(t)‖_{L²} ≤ K_ε ‖v₀‖_{L²}.   (3.3)

The energy enables us to get a bound on the gradient.

Proposition 3.1. Let κ ∈ (0, 1/2); there exists a random constant K_ε bounded in L^p(Ω) with respect to ε for any p ≥ 1 such that if v₀ ∈ H¹:

sup_{t≥0} ‖v_ε(t)‖_{H¹} ≤ K_ε ‖v₀‖_{H¹}.

Proof. Since B^{−κ}_{∞,2} is in duality with B^{κ}_{1,2}, we deduce by the standard multiplication rule in Besov spaces (see e.g. [2], Section 2.8.1):

|∫_{T²} :|∇Y_ε|²: |v_ε|² e^{−2Y_ε} dx| ≤ c ‖:|∇Y_ε|²:‖_{B^{−κ}_{∞,2}} ‖|v_ε|² e^{−2Y_ε}‖_{B^{κ}_{1,2}} ≤ K_ε ‖v_ε‖_{H^{2κ}} ‖v_ε‖_{L²} ≤ (1/2) e^{−2‖Y_ε‖_{L^∞}} ‖∇v_ε‖²_{L²} + K_ε ‖v_ε‖²_{L²},

where we used interpolation and the Young inequality in the last step, and hence, by absorbing the first term of the right-hand side into the left-hand side of the energy identity, we obtain a (random) bound on v_ε in H¹ using similar arguments as above.

Corollary 3.2.
There exists a random constant K_ε bounded in L^p(Ω) with respect to ε for any p ≥ 1 such that, for v₀ ∈ H¹:

sup_{t≥0} ‖v_ε(t)‖_{H¹} ≤ K_ε ‖v₀‖_{H¹}.

Unfortunately, this regularity is not sufficient to control the product ∇v_ε · ∇Y_ε on the right-hand side of (3.1).
The next observation is that w_ε = dv_ε/dt is formally a solution of:

i dw_ε/dt = ∆w_ε − 2∇Y_ε · ∇w_ε + w_ε :|∇Y_ε|²:,

and since w_ε satisfies the same equation as v_ε, it has the same invariant quantities. We use in particular the mass: Ñ_ε(w_ε(t)) = Ñ_ε(w_ε(0)). Hence:

sup_{t≥0} ‖w_ε(t)‖_{L²} ≤ K_ε ‖w_ε(0)‖_{L²}.   (3.6)

This formal argument can be justified as follows. We take a sequence (v_{ε,η}(0))_{η>0} in H⁴ which converges to v_ε(0) in H². Then, as noted above, the corresponding solution v_{ε,η} of (3.1) is regular enough to perform the computation above. Letting η → 0, we obtain that (3.6) is true under the assumption that v₀ ∈ H².

Proposition 3.3.
There exists a random constant K_ε bounded in L^p(Ω) with respect to ε for any p ≥ 1 such that, if v₀ ∈ H²:

sup_{t≥0} ‖v_ε(t)‖_{H²} ≤ K_ε |ln ε|² ‖v₀‖_{H²}.

Proof. From (3.1), we have:

‖w_ε(0)‖_{L²} ≤ ‖∆v₀‖_{L²} + 2 ‖∇Y_ε · ∇v₀‖_{L²} + ‖:|∇Y_ε|²:‖_{L^∞} ‖v₀‖_{L²},

so that, thanks to the embedding H^{1/2} ⊂ L⁴,

‖∇Y_ε · ∇v₀‖_{L²} ≤ ‖∇Y_ε‖_{L^∞} ‖∇v₀‖_{L²}.

By interpolation we deduce:

‖w_ε(0)‖_{L²} ≤ c (1 + ‖∇Y_ε‖_{L^∞} + ‖:|∇Y_ε|²:‖_{L^∞}) ‖v₀‖_{H²} ≤ K_ε |ln ε|² ‖v₀‖_{H²}.

By Lemma 2.4 and Gaussianity, we know that the moments of this random variable are bounded with respect to ε.

It follows:

sup_{t≥0} ‖w_ε(t)‖_{L²} ≤ K_ε |ln ε|² ‖v₀‖_{H²}.

This in turn allows us to control ‖v_ε‖_{H²}. Indeed, from (3.1),

‖∆v_ε‖_{L²} ≤ ‖w_ε‖_{L²} + 2 ‖∇Y_ε · ∇v_ε‖_{L²} + ‖:|∇Y_ε|²:‖_{L^∞} ‖v_ε‖_{L²},

and by similar arguments as above:

‖∆v_ε‖_{L²} ≤ (1/2) ‖∆v_ε‖_{L²} + K_ε |ln ε|² (‖v₀‖_{H²} + ‖v_ε‖_{L²}).

The result follows thanks to (3.3).
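The interpolation inequalities used repeatedly here, such as ‖v‖_{H^γ} ≤ ‖v‖_{L²}^{1−γ/2} ‖v‖_{H²}^{γ/2}, are simply Hölder's inequality applied to the Fourier weights (1 + |k|²)^s. A quick numerical sanity check on a random trigonometric polynomial (`h_norm` is a hypothetical helper; the discrete norms are only defined up to a constant):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
k = np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
w = 1.0 + KX**2 + KY**2                      # squared Japanese bracket <k>^2

def h_norm(v_hat, s):
    """Discrete Sobolev H^s norm from Fourier coefficients (Parseval)."""
    return np.sqrt(np.sum(w**s * np.abs(v_hat) ** 2))

# random trigonometric polynomial, given by its Fourier coefficients
v_hat = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Cauchy-Schwarz in frequency: ||v||_{H^{3/2}}^2 <= ||v||_{H^1} ||v||_{H^2}
lhs = h_norm(v_hat, 1.5) ** 2
rhs = h_norm(v_hat, 1.0) * h_norm(v_hat, 2.0)
```

The same Hölder argument gives the general form ‖v‖_{H^γ} ≤ ‖v‖_{L²}^{1−γ/2}‖v‖_{H²}^{γ/2} for 0 ≤ γ ≤ 2.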
This bound does not seem to be very useful since it explodes as ε → 0. To use it, we consider the difference of two solutions.
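Schematically, the mechanism by which the exploding H² bound combines with an L² estimate on the difference of two smoothed solutions can be written as follows (a sketch only; a denotes a generic power of the logarithm, and the precise exponents are those produced by Propositions 3.3 and 3.4):

```latex
\|v_{\varepsilon_1}-v_{\varepsilon_2}\|_{H^\gamma}
 \le \|v_{\varepsilon_1}-v_{\varepsilon_2}\|_{L^2}^{1-\gamma/2}\,
     \|v_{\varepsilon_1}-v_{\varepsilon_2}\|_{H^2}^{\gamma/2}
 \le \Big(K_{\varepsilon_1,\varepsilon_2}\,\varepsilon_2^{\kappa/2}
     |\ln\varepsilon_2|^{a}\Big)^{1-\gamma/2}
     \Big(K\,|\ln\varepsilon_1|^{a}+K\,|\ln\varepsilon_2|^{a}\Big)^{\gamma/2}.
```

For γ < 2 the power ε₂^{κ(2−γ)/4} dominates the logarithms, so the sequence (v_{2^{−k}}) is Cauchy in C([0, T]; H^γ).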
Proposition 3.4. Let ε₂ > ε₁ > 0. Then for κ ∈ (0, 1], p ≥ 1 there exists a random constant K_{ε1,ε2} bounded in L^p(Ω) with respect to ε₁, ε₂ for any p ≥ 1 such that:

sup_{t∈[0,T]} ‖v_{ε1}(t) − v_{ε2}(t)‖_{L²} ≤ K_{ε1,ε2} ε₂^{κ/2} |ln ε₂|² ‖v₀‖_{H²}.

Proof. We set r = v_{ε1} − v_{ε2} and write:

i dr/dt = ∆r − 2∇Y_{ε1} · ∇r + r :|∇Y_{ε1}|²: − 2(∇Y_{ε1} − ∇Y_{ε2}) · ∇v_{ε2} + v_{ε2} (:|∇Y_{ε1}|²: − :|∇Y_{ε2}|²:).

By standard computations, we deduce:

d/dt ‖r‖²_{L²} ≤ c ‖r‖_{H^{κ/2}} ( ‖(∇Y_{ε1} − ∇Y_{ε2}) · ∇v_{ε2}‖_{H^{−κ/2}} + ‖v_{ε2}(:|∇Y_{ε1}|²: − :|∇Y_{ε2}|²:)‖_{H^{−κ/2}} );

the first term of the right-hand side is bounded thanks to interpolation and paraproduct inequalities (see [2]) and we have:

‖(∇Y_{ε1} − ∇Y_{ε2}) · ∇v_{ε2}‖_{H^{−κ/2}} ≤ c ‖Y_{ε1} − Y_{ε2}‖_{B^{1−κ/2}_{∞,∞}} ‖v_{ε2}‖_{H^{1+κ}}.

Then, by Proposition 3.3 and interpolation:

‖v_{ε2}‖_{H^{1+κ}} ≤ c ‖v_{ε2}‖_{H¹}^{1−κ} ‖v_{ε2}‖_{H²}^{κ} ≤ K_{ε2} |ln ε₂|² ‖v₀‖_{H²},

and, by Corollary 3.2, ‖v_{ε2}‖_{H¹} ≤ K_{ε2} ‖v₀‖_{H¹}. The second term is bounded by the same quantity and we deduce:

d/dt ‖r‖²_{L²} ≤ K_{ε1,ε2} ( ‖Y_{ε1} − Y_{ε2}‖_{B^{1−κ/2}_{∞,∞}} + ‖:|∇Y_{ε1}|²: − :|∇Y_{ε2}|²:‖_{B^{−κ/2}_{∞,∞}} ) |ln ε₂|² ‖v₀‖_{H²} ‖r‖_{H^{κ/2}}.

The result follows thanks to integration in time and Lemma 2.1; for instance, we have:

E( ‖Y_{ε1} − Y_{ε2}‖^p_{B^{1−κ/2}_{∞,∞}} )^{1/p} ≤ c √p ε₂^{κ/2}.

By interpolation, we deduce from Propositions 3.3 and 3.4 the following result.
Here, by solution, we mean that v ∈ C([0, T]; H^γ) for every γ < 2 and that the equation:

i dv/dt = ∆v − 2∇Y · ∇v + v :|∇Y|²:, v(0) = v₀,   (3.8)

holds in C([0, T]; H^{γ−2}); the products appearing in (3.8) are indeed well defined for such v, as can be seen from Lemma 2.1.

Proof. Let us first prove pathwise uniqueness. Since the equation is linear, this amounts to proving that a solution with v(0) = 0 is 0. Let v₁, v₂ be two solutions and consider v = v₁ − v₂, so that v(0) = 0. Then, v ∈ C([0, T]; H^γ) implies that dv/dt ∈ C([0, T]; H^{γ−2}) and we may differentiate, for ε > 0:

d/dt ∫_{T²} |v|² e^{−2Y_ε} dx = 2 Re ( dv/dt, v e^{−2Y_ε} ),

where (·, ·) has to be understood as the duality between H^{γ−2} and H^{2−γ}; note that 2 − γ ≤ γ. Recall that:

(i∆v − 2i∇v · ∇Y_ε + iv :|∇Y_ε|²:, v e^{−2Y_ε}) = 0,

and deduce that only the differences Y − Y_ε and :|∇Y|²: − :|∇Y_ε|²: contribute. We then repeat the estimate of the proof of Proposition 3.4, but estimate the H^{1+κ} norm by the H^γ norm, and obtain:

d/dt ∫_{T²} |v|² e^{−2Y_ε} dx ≤ K_ε ε^{κ/2} ‖v‖²_{C([0,T];H^γ)}.

The random constant K_ε depends on the norm of v₁, v₂ in C([0, T]; H^γ) and on Y_ε. We take ε = 2^{−k}. By Lemma 2.1 and Borel–Cantelli, we know that sup_k K_{2^{−k}} < ∞ a.s. Integrating in time and letting k → ∞, we get v = 0. It is then not difficult to prove that the limit v of (v_{2^{−k}}) is a solution of (3.8).

The nonlinear case

We now consider the nonlinear equation, λ = ±1. We take the initial data v_ε(0) = v₀ = u₀e^{Y} and assume below that it belongs to H². Using the same argument as in [5], we can prove existence of a solution to (2.4) in C([0, T*); H²) ∩ C¹([0, T*); L²), where T* is the maximal time of existence. It is either infinite or finite, and in the latter case the H² norm blows up. This implies local existence and uniqueness for (2.3). Proposition 4.2 below shows that, under condition (4.2), the H² norm is bounded on finite time intervals, and thus that T* = ∞.
Again the solution is sufficiently regular to justify the computations yielding conservation of the mass and energy.
The mass gives a uniform bound in L², as in (3.3). The estimate on the H¹ norm using the energy is similar to the linear case. Recall that the energy is given by:

H̃_ε(v) = ∫_{T²} |∇v|² e^{−2Y_ε} dx − ∫_{T²} :|∇Y_ε|²: |v|² e^{−2Y_ε} dx − (λ/2) ∫_{T²} |v|⁴ e^{−4Y_ε} dx,

and it can be checked that for all t ≥ 0 we have H̃_ε(v_ε(t)) = H̃_ε(v₀).
Proposition 4.1. There exists a random constant K_ε bounded in L^p(Ω) with respect to ε for any p ≥ 1 such that, if v₀ ∈ H¹ and, for λ = 1, (4.2) holds:

sup_{t≥0} ‖v_ε(t)‖_{H¹} ≤ K_ε.

Proof. We proceed as in the proof of Proposition 3.1. We first have:

∫_{T²} |∇v_ε|² e^{−2Y_ε} dx = H̃_ε(v₀) + ∫_{T²} :|∇Y_ε|²: |v_ε|² e^{−2Y_ε} dx + (λ/2) ∫_{T²} |v_ε|⁴ e^{−4Y_ε} dx.

For λ = −1, the result follows after dropping the last term and using:

‖v_ε‖²_{L⁴} ≤ c ‖v_ε‖_{L²} ‖v_ε‖_{H¹},

thanks to the Sobolev embedding H^{1/2} ⊂ L⁴ and interpolation. For λ = 1, the Gagliardo–Nirenberg inequality (see for instance [5] for a simple proof with the constant 1/2 used below):

‖v_ε‖⁴_{L⁴} ≤ (1/2) ‖v_ε‖²_{L²} (‖∇v_ε‖²_{L²} + ‖v_ε‖²_{L²})

allows one to absorb the quartic term into the gradient term, using the control of the mass provided by (3.3). The result follows easily under assumption (4.2).
We now proceed with the H 2 bound.

Proposition 4.2.
There exists a random constant K_ε bounded in L^p(Ω) with respect to ε for any p ≥ 1 such that, if v₀ = u₀e^{Y} ∈ H² and (4.2) holds:

sup_{t∈[0,T]} ‖v_ε(t)‖_{H²} ≤ K_ε |ln ε|⁴.

Moreover, K_k = K_{2^{−k}} is bounded almost surely.
Proof. As in Section 3, we set w_ε = dv_ε/dt, which now satisfies:

i dw_ε/dt = ∆w_ε − 2∇Y_ε · ∇w_ε + w_ε :|∇Y_ε|²: + λ e^{−2Y_ε} (2|v_ε|² w_ε + v_ε² w̄_ε).

From (2.3), we have:

‖w_ε(0)‖_{L²} ≤ ‖∆v₀‖_{L²} + 2 ‖∇Y_ε · ∇v₀‖_{L²} + ‖:|∇Y_ε|²:‖_{L^∞} ‖v₀‖_{L²} + ‖e^{−2Y_ε}‖_{L^∞} ‖v₀‖³_{L⁶},

and, as in Proposition 3.3 and using the embedding H¹ ⊂ L⁶:

‖w_ε(0)‖_{L²} ≤ K_ε |ln ε|⁴ (‖v₀‖_{H²} + ‖v₀‖³_{H¹}),

with K_ε having all moments finite and such that K_{2^{−k}} is almost surely finite by Borel–Cantelli. We have taken |ln ε|⁴ instead of |ln ε|² in the estimate above in order to have this latter property. Recall that Y_{2^{−k}} converges a.s. in L^∞. We do not have preservation of the L² norm of w_ε, but:

d/dt ‖w_ε‖²_{L²} ≤ K_ε (1 + ln(1 + ‖v_ε‖_{H²})) ‖w_ε‖²_{L²},

thanks to the Brezis–Gallouet inequality ([5]) and to Proposition 4.1. This computation is not rigorous; we proceed as in the linear case to justify it. We take a sequence of smooth initial data, say in H⁴. In this case, using Theorem 5.4.1 in [6], we know that the solution is sufficiently regular to do the computations above. This theorem is proved in the R² framework but the proof in fact works on any domain. The estimate above is obtained in the limit for initial data in H².
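The Brezis–Gallouet inequality invoked above reads, in one standard two-dimensional form (stated here for convenience):

```latex
\|u\|_{L^\infty(\mathbb{T}^2)} \;\le\; c\,\|u\|_{H^1}
\Big(1+\log\Big(1+\tfrac{\|u\|_{H^2}}{\|u\|_{H^1}}\Big)\Big)^{1/2}.
```

Combined with the uniform H¹ bound of Proposition 4.1, it controls ‖v_ε‖²_{L^∞} by a constant times 1 + ln(1 + ‖v_ε‖_{H²}), which is the source of the logarithmic factor in the differential inequality for w_ε.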
Then, as above, we have:

‖∆v_ε‖_{L²} ≤ ‖w_ε‖_{L²} + 2 ‖∇Y_ε · ∇v_ε‖_{L²} + ‖:|∇Y_ε|²:‖_{L^∞} ‖v_ε‖_{L²} + ‖e^{−2Y_ε}‖_{L^∞} ‖v_ε‖³_{L⁶}.

By the embedding H¹ ⊂ L⁶ and Proposition 4.1, the last term is bounded uniformly in ε, and we conclude as in the linear case.
Again, the almost sure boundedness of the various constants K_k is obtained thanks to Lemma 2.3, Lemma 2.4 and Borel–Cantelli.