1 Introduction

1.1 Statement of the main result

We are interested in the following family of NLS with multiplicative spatial white noise:

$$\begin{aligned} i\partial _t u = \Delta u + \xi u +\lambda u|u|^p, \quad u(0,x)=u_0(x), \quad (t,x)\in \mathbb {R}\times \mathbb {T}^2 \end{aligned}$$
(1.1)

where \(\xi (x,\omega )\) is spatial white noise, \(2\le p\le 3\), \(\lambda \le 0\), and we identify \(\mathbb {T}^2\) with \((-\pi ,\pi )\times (-\pi , \pi )\). We work for simplicity with a defocusing nonlinearity, but the results of this paper can be extended to the focusing case under a smallness assumption on the initial datum. Our main aim is an improvement on the range of the nonlinearity p, from the case \(p=2\) achieved by A. Debussche and H. Weber in [4] to the larger range \(2\le p \le 3\). We essentially follow the approach of [4], the main novelty being the introduction of modified energies in the context of (1.1). These energies allow us to cover a larger set of p in (1.1). They also have the potential to be useful in the future for the study of the growth of high Sobolev norms in the context of (1.1).

We assume that \(\xi (x,\omega )\) is real valued and has a vanishing zero Fourier mode (or equivalently of mean zero with respect to x). This assumption is however not essential because one may remove the zero mode of \(\xi \) from the equation by the transform \(u\mapsto e^{it\hat{\xi }(0)}u\). Therefore in the sequel, we will assume that \(\xi (x,\omega )\) is given by the following random Fourier series:

$$\begin{aligned} \xi (x,\omega )=\sum _{n\in \mathbb {Z}^2,n\ne 0}\, g_{n}(\omega )\,e^{in\cdot x}\,, \end{aligned}$$

where \(x\in \mathbb {T}^2\) and \((g_n(\omega ))\) are identically distributed standard complex gaussians on the probability space \((\Omega , {\mathcal F},p)\). We suppose that \((g_n(\omega ))_{n\ne 0}\) are independent, modulo the relation \(\overline{g_n}(\omega )=g_{-n}(\omega )\) (so that \(\xi \) is a.s. a real valued distribution).

Since the white noise \(\xi (x,\omega )\) is not a classical function, it is important to properly define what we mean by a solution of (1.1). The nature of the initial datum \(u_0(x)\) is also of importance in this discussion but even for \(u_0(x)\in C^\infty (\mathbb {T}^2)\) it is not clear what we mean by a solution of (1.1). Let us therefore suppose first that \(u_0(x)\in C^\infty (\mathbb {T}^2)\). Since it is well known how to solve (1.1) with \(\xi (x,\omega )\in C^\infty (\mathbb {T}^2)\) and \(u_0(x)\in C^\infty (\mathbb {T}^2)\), it is natural to consider the following regularized problems:

$$\begin{aligned} i\partial _t u_\varepsilon = \Delta u_\varepsilon + \xi _\varepsilon u_\varepsilon +\lambda u_\varepsilon |u_\varepsilon |^p\,\, , \quad u_{\varepsilon }(0,x)=u_0(x), \end{aligned}$$
(1.2)

where \(\xi _\varepsilon (x,\omega )=\chi _{\varepsilon }(x)*\xi (x,\omega )\), \(\varepsilon \in (0,1)\), is a regularization of \(\xi \) by convolution with \(\chi _{\varepsilon }(x)=\varepsilon ^{-2}\chi (x/\varepsilon )\), where \(\chi (x)\) is smooth, with support in \(\{|x|<1/2\}\) and \(\int _{\mathbb {T}^2}\chi \, dx=1\). Then we have

$$\begin{aligned} \xi _{\varepsilon }(x,\omega )=\sum _{n\in \mathbb {Z}^2,n\ne 0}\, \rho \big (\varepsilon n\big ) g_{n}(\omega )\,e^{in\cdot x}\,, \end{aligned}$$
(1.3)

where \(\rho =\hat{\chi }\) is the Fourier transform on \(\mathbb {R}^2\) of \(\chi \).
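The following short numerical sketch is purely illustrative and is not used anywhere in the arguments of the paper: it shows how one may sample truncations of the series (1.3) and (1.5) while respecting the constraint \(\overline{g_n}=g_{-n}\). The truncation parameter, the Gaussian profile standing in for \(\rho =\hat{\chi }\) and all variable names are our own choices.

```python
import numpy as np

# Illustrative sampling of xi_eps (1.3) and Y_eps = Delta^{-1} xi_eps (1.5).
# Hypothetical choices (not from the paper): a box truncation |n_i| <= N_max
# and a Gaussian profile standing in for rho = hat(chi).
rng = np.random.default_rng(0)
N_max, eps = 32, 0.1
rho = lambda k: np.exp(-0.5 * float(k @ k))

# complex Gaussians g_n with E|g_n|^2 = 1 and the symmetry bar(g_n) = g_{-n}
g = {}
for m1 in range(-N_max, N_max + 1):
    for m2 in range(-N_max, N_max + 1):
        n = (m1, m2)
        if n == (0, 0) or n in g:
            continue
        z = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
        g[n] = z
        g[(-m1, -m2)] = np.conj(z)

# evaluate the truncated random Fourier series on a coarse grid of (-pi,pi)^2
xs = np.linspace(-np.pi, np.pi, 65)
X1, X2 = np.meshgrid(xs, xs)
xi_eps = np.zeros_like(X1, dtype=complex)
Y_eps = np.zeros_like(X1, dtype=complex)
for n, gn in g.items():
    k = np.array(n, dtype=float)
    mode = rho(eps * k) * gn * np.exp(1j * (k[0] * X1 + k[1] * X2))
    xi_eps += mode
    Y_eps -= mode / (k @ k)

# both fields are real up to round-off, as required by the symmetry of (g_n)
print(np.abs(xi_eps.imag).max(), np.abs(Y_eps.imag).max())

# the renormalization constant (1.14) grows logarithmically in 1/eps
C_eps = sum(rho(eps * np.array(n)) ** 2 / (np.array(n) @ np.array(n)) for n in g)
print(C_eps, 2 * np.pi * np.log(1 / eps))
```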

Unfortunately, we do not know how to pass to the limit \(\varepsilon \rightarrow 0\) in (1.2) (even for \(u_0(x)\in C^\infty (\mathbb {T}^2)\)) and it may be that this limit is quite singular in general. Our analysis will show that we can only pass to the limit almost surely w.r.t. \(\omega \) if we take a well chosen random approximation of the datum \(u_0(x)\) in (1.2) and if we properly renormalize the phase of the solution \(u_\varepsilon (t,x,\omega )\). Following [4] and [8] we introduce the smoothed potential \(Y=\Delta ^{-1} \xi \) and its \(C^\infty \) regularization \(Y_\varepsilon =\Delta ^{-1} \xi _\varepsilon \), namely:

$$\begin{aligned} Y(x,\omega )=-\sum _{n\in \mathbb {Z}^2,n\ne 0}\, \frac{g_{n}(\omega )}{|n|^2}\,e^{in\cdot x} \end{aligned}$$
(1.4)

and

$$\begin{aligned} Y_{\varepsilon }(x,\omega )=-\sum _{n\in \mathbb {Z}^2,n\ne 0}\, \rho \big (\varepsilon n\big ) \frac{g_{n}(\omega )}{|n|^2}\,e^{in\cdot x}. \end{aligned}$$
(1.5)

We now consider the regularized problems:

$$\begin{aligned} i\partial _t u_\varepsilon = \Delta u_\varepsilon + \xi _\varepsilon u_\varepsilon +\lambda u_\varepsilon |u_\varepsilon |^p\,\, , \quad u_{\varepsilon }(0,x)= u_0(x) e^{Y(x,\omega )-Y_\varepsilon (x,\omega )} \end{aligned}$$
(1.6)

where we assume that almost surely w.r.t. \(\omega \) we have \(e^{Y(x,\omega )} u_0(x)\in H^2(\mathbb {T}^2)\). Notice that under this assumption the problem (1.6) has, almost surely w.r.t. \(\omega \), a unique classical global solution \(u_\varepsilon (t,x,\omega )\in {\mathcal C}(\mathbb {R}; H^2(\mathbb {T}^2))\) (see [2, 13]). Here is our main result.

Theorem 1.1

Assume \(p\in [2,3]\), \(\lambda \le 0\), and let \(u_0(x)\) be such that \(e^{Y(x,\omega )} u_0(x) \in H^2(\mathbb {T}^2)\) a.s. Then there exists an event \(\Sigma \subset \Omega \) such that \(p(\Sigma )=1\) and for every \(\omega \in \Sigma \) there exists

$$\begin{aligned} v (t,x, \omega )\in \bigcap _{\gamma \in [0, 2)}{\mathcal C}(\mathbb {R}; H^\gamma (\mathbb {T}^2)) \end{aligned}$$

such that for every \(T>0\) and \(\gamma \in [0,2)\) we have:

$$\begin{aligned} \sup _{t\in [-T,T]} \Vert e^{-iC_\varepsilon t} e^{Y_\varepsilon (x, \omega )} u_\varepsilon (t,x,\omega )- v (t,x,\omega )\Vert _{H^\gamma (\mathbb {T}^2)} \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \end{aligned}$$
(1.7)

where \(C_\varepsilon =\sum _{n\in \mathbb {Z}^2,n\ne 0}\, \frac{\rho ^2(\varepsilon n)}{|n|^2}\) and \(u_\varepsilon (t,x,\omega )\) are solutions to (1.6). Moreover for \(\gamma \in [0,1)\) and \(\omega \in \Sigma \) we have

$$\begin{aligned} \sup _{t\in [-T,T]} \big \Vert |u_\varepsilon (t,x,\omega )| - e^{-Y(x,\omega )}| v(t,x, \omega )| \big \Vert _ {{H^\gamma (\mathbb {T}^2)}\cap L^\infty (\mathbb {T}^2)} \overset{\varepsilon \rightarrow 0}{\longrightarrow }0. \end{aligned}$$
(1.8)

The limits obtained in Theorem 1.1 are by definition what we may wish to call solutions of (1.1) with datum \(u_0(x)\). Observe that \(|u_\varepsilon (t,x,\omega )|\) has a well defined limit, while the phase of \(u_\varepsilon (t,x,\omega )\) should be suitably renormalized by the diverging constants \(C_\varepsilon \) in order to get a limit. We also point out that the meaning of the constants \(C_\varepsilon \), introduced in the statement of Theorem 1.1, is explained in Sect. 2, where the renormalization procedure is presented.

It is worth mentioning that, while (1.7) holds for every \(\gamma \in [0,2)\), in (1.8) we assume \(\gamma \in [0,1)\). This is due to a technical reason: in order to estimate the Sobolev norm of the absolute value of a Sobolev function, we use the diamagnetic inequality which, to the best of our knowledge, works only up to \(H^1\) regularity.

In a future work we plan to extend the result of Theorem 1.1 to any \(p<\infty \) by exploiting the dispersive properties of the Schrödinger equation on a compact spatial domain established in [2]. At the present moment we are only able to do so for potentials slightly more regular than the white noise. In fact, we shall not need to exploit the construction of [2] in its full strength, because we will only need an \(\varepsilon \)-improvement of the Sobolev embedding. This means that we will need to make the WKB construction of [2] for solutions oscillating at frequency \(h^{-1}\) only up to times \(h^{2-\delta }\), \(\delta >0\), which are much shorter than the times h achieved in [2]. As already mentioned, even if we succeed in incorporating the dispersive effect in the analysis of (1.1), the modified energy method, used indirectly in the proof of Theorem 1.1 (and more directly in the proof of Theorem 1.2 below), would still be essential in order to get polynomial bounds on higher Sobolev norms of the obtained solutions, similar to the ones obtained in [11] in the case without a white noise potential.

In Theorem 1.1 the initial datum \(u_0(x)\) is well-prepared, in the sense that it is supposed to satisfy \(e^{Y(x,\omega )} u_0(x) \in H^2(\mathbb {T}^2)\) a.s. It would be interesting to decide whether a suitable application of the I-method introduced in [3] may allow one to remove this assumption of well-prepared data. For this purpose, one should succeed in establishing the limiting property by using energies at level \(H^s\), for a suitable \(s<1\).

1.2 The gauge transform

In the sequel we perform some formal computations that allow us to introduce heuristically a rather useful transformation. Following [4] and [8] we introduce the new unknown:

$$\begin{aligned} v=e^{Y} u \end{aligned}$$
(1.9)

where u is assumed to be, at least formally, a solution to (1.1) and \(Y=\Delta ^{-1} \xi \). In order to clarify the relevance of this transformation, first notice that, by direct computation, the equation solved (at least formally) by v is the following one:

$$\begin{aligned} i\partial _t v = \Delta v - 2\nabla v \cdot \nabla Y+ v |\nabla Y|^2+\lambda e^{-pY}v|v|^p, \quad v(0,x)= e^{Y(x,\omega )} u_0(x). \end{aligned}$$
(1.10)
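For the reader's convenience, let us record the direct computation behind (1.10). Since Y is real and independent of time we have \(i\partial _t v=e^{Y}\, i\partial _t u\), while, writing \(u=e^{-Y}v\),

$$\begin{aligned} e^{Y}\Delta u=\Delta v-2\nabla Y\cdot \nabla v+\big (|\nabla Y|^2-\Delta Y\big ) v,\qquad e^{Y}\,\xi u=\Delta Y\, v,\qquad e^{Y}\,\lambda u|u|^p=\lambda e^{-pY} v|v|^p. \end{aligned}$$

Summing the three contributions, the terms involving \(\Delta Y=\xi \) cancel and we obtain exactly (1.10).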

Notice that the quantity \(|\nabla Y|^2\) is not well defined since \(\nabla Y\), even if it is one derivative more regular than \(\xi \), still has negative Sobolev regularity. However this issue can be settled by a renormalization (see below and Sect. 2 for more details). On the other hand, (1.10) looks more complicated than (1.1), since a perturbation of order one has been added to the linear part of the equation. Nevertheless, we have the advantage that the coefficients involved in the new equation are more regular than the spatial white noise \(\xi \) that appears in (1.1).

Another relevant advantage that comes from the new variable v is related to the conservation of the Hamiltonian. Recall that the conservation laws play a key role in the analysis of nonlinear Schrödinger equations. In particular in the context of (1.1) the quadratic part of the conserved energy is given by

$$\begin{aligned} \int _{\mathbb {T}^2} (|\nabla u |^2 - |u|^2 \xi ) dx\,. \end{aligned}$$
(1.11)

The key feature of the transformation (1.9) is that there is a cancellation between the two terms in (1.11), and this cancellation is the main point in the definition of a suitable self-adjoint realisation of \(\Delta +\xi \) (see [7] and the references therein). Indeed, let us compute (1.11) in the new variable v: since \(u=e^{-Y} v\), (1.11) becomes

$$\begin{aligned} \int _{\mathbb {T}^2} (-e^{-Y} v \Delta (e^{-Y} \bar{v}) - e^{-2Y} |v|^2\xi )dx, \end{aligned}$$

which after some elementary manipulations can be written as:

$$\begin{aligned} \int _{\mathbb {T}^2} ( |\nabla v|^2 +|v|^2 \Delta Y - |v|^2 |\nabla Y|^2- |v|^2\xi ) e^{-2Y}dx\,. \end{aligned}$$
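Let us make these manipulations explicit. Expanding \(\Delta (e^{-Y}\bar{v})=e^{-Y}\big (\Delta \bar{v}-2\nabla Y\cdot \nabla \bar{v}+(|\nabla Y|^2-\Delta Y)\bar{v}\big )\) and integrating by parts once,

$$\begin{aligned} -\int _{\mathbb {T}^2} e^{-2Y}\, v \Delta \bar{v} = \int _{\mathbb {T}^2} \nabla (e^{-2Y} v)\cdot \nabla \bar{v} = \int _{\mathbb {T}^2} e^{-2Y}\big (|\nabla v|^2 - 2 v\, \nabla Y\cdot \nabla \bar{v}\big ), \end{aligned}$$

the two terms \(\pm 2 e^{-2Y} v\,\nabla Y\cdot \nabla \bar{v}\) cancel and one is left precisely with the expression above.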

Thanks to the choice \(\Delta Y=\xi \) we get a cancellation of the white noise potential leading to

$$\begin{aligned} \int _{\mathbb {T}^2} ( |\nabla v|^2 - |v|^2 |\nabla Y|^2) e^{-2Y}dx\,. \end{aligned}$$

Notice that now the potential energy w.r.t. the new variable v involves the potential \(|\nabla Y(x)|^2\), which is (morally) one derivative more regular than the white noise (after renormalization it belongs a.s. to \(W^{-s,q}\) for every \(s\in (0,1)\) and \(q\in (1,\infty )\), see Proposition 2.1).

Motivated by the previous discussion, we observe that if \(u_\varepsilon (t,x,\omega )\) is a solution to

$$\begin{aligned} i\partial _t u_\varepsilon = \Delta u_\varepsilon + u_\varepsilon \xi _\varepsilon +\lambda u_\varepsilon |u_\varepsilon |^p, \end{aligned}$$

where \(\xi _\varepsilon (x,\omega )\) is defined by (1.3), then the transformed function

$$\begin{aligned} v_\varepsilon (t,x,\omega )= e^{-iC_\varepsilon t} e^{Y_\varepsilon (x,\omega )} u_\varepsilon (t,x,\omega ) \end{aligned}$$
(1.12)

satisfies

$$\begin{aligned} i\partial _t v_\varepsilon = \Delta v_\varepsilon - 2\nabla v_\varepsilon \cdot \nabla Y_\varepsilon + v_\varepsilon :|\nabla Y_\varepsilon |^2: +\lambda e^{-pY_\varepsilon } v_\varepsilon |v_\varepsilon |^p. \end{aligned}$$

Here we have \(Y_\varepsilon (x,\omega )\) given by (1.5) and \(:|\nabla Y_\varepsilon |^2:(x,\omega )\) is defined as follows:

$$\begin{aligned} :|\nabla Y_\varepsilon |^2:(x,\omega )=|\nabla Y_\varepsilon |^2(x,\omega )-C_\varepsilon \end{aligned}$$
(1.13)

where

$$\begin{aligned} C_\varepsilon =\sum _{n\in \mathbb {Z}^2,n\ne 0}\, \frac{\rho ^2(\varepsilon n)}{|n|^2} \end{aligned}$$
(1.14)

is the same constant as the one appearing in Theorem 1.1. One can show that almost surely w.r.t. \(\omega \) we have the following convergence, in spaces with negative regularity:

$$\begin{aligned} :|\nabla Y_\varepsilon |^2:(x,\omega )\overset{\varepsilon \rightarrow 0}{\longrightarrow }:|\nabla Y|^2:(x,\omega ), \end{aligned}$$

where

$$\begin{aligned} :|\nabla Y|^2:(x,\omega )&= \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0\\ n_1\ne n_2 \end{array}}\, \frac{n_1\cdot n_2}{|n_1|^2 |n_2|^2}\, g_{n_1}(\omega )\overline{g_{n_2}(\omega )}\,e^{i(n_1-n_2)\cdot x} \nonumber \\&\quad +\sum _{n\in \mathbb {Z}^2,n\ne 0}\,\frac{|g_n(\omega )|^2 -1}{|n|^2}, \end{aligned}$$
(1.15)

(see Sect.  2 for details).

The main idea to establish Theorem 1.1 is to look for the convergence of \(v_\varepsilon \) as \(\varepsilon \rightarrow 0\), and hence to get information on \(u_\varepsilon \) by going back via the transformation (1.12).

Theorem 1.2

Assume \(p\in [2,3]\), \(\lambda \le 0\), and let \(u_0(x)\) be such that \(e^{Y(x,\omega )} u_0(x) \in H^2(\mathbb {T}^2)\) a.s. Then there exists an event \(\Sigma \subset \Omega \) such that \(p(\Sigma )=1\) and for every \(\omega \in \Sigma \) there exists

$$ v(t,x,\omega )\in \bigcap _{\gamma \in [0, 2)}{\mathcal C}(\mathbb {R}; H^\gamma (\mathbb {T}^2))$$

such that for every fixed \(T>0\) and \(\gamma \in [0,2)\) we have:

$$\begin{aligned} \sup _{t\in [-T, T]} \Vert v_\varepsilon (t,x, \omega ) - v(t,x, \omega )\Vert _{H^\gamma (\mathbb {T}^2)} \overset{\varepsilon \rightarrow 0}{\longrightarrow }0. \end{aligned}$$

Here, for \(\omega \in \Sigma \), we have denoted by \(v_\varepsilon (t,x,\omega )\) the unique global solution in the space \({\mathcal C} (\mathbb {R};H^2(\mathbb {T}^2))\) of the following problem:

$$\begin{aligned} i\partial _t v_\varepsilon&= \Delta v_\varepsilon - 2\nabla v_\varepsilon \cdot \nabla Y_\varepsilon + v_\varepsilon :|\nabla Y_\varepsilon |^2: +\lambda e^{-pY_\varepsilon }v_\varepsilon |v_\varepsilon |^p, \nonumber \\ v_\varepsilon (0,x)&=v_0(x)\in H^2(\mathbb {T}^2) \end{aligned}$$
(1.16)

and \( v(t,x, \omega )\) denotes for \(\omega \in \Sigma \) the unique global solution in the space \({\mathcal C} (\mathbb {R}; H^\gamma (\mathbb {T}^2))\), for \(\gamma \in (1, 2)\), of the following limit problem:

$$\begin{aligned} i\partial _t v&= \Delta v - 2\nabla v \cdot \nabla Y + v :|\nabla Y |^2: +\lambda e^{-pY} v| v|^p,\, \nonumber \\ v(0,x)&=v_0(x)\in H^2(\mathbb {T}^2) \end{aligned}$$
(1.17)

where in both Cauchy problems (1.16) and (1.17) \(v_0(x)=e^{Y(x,\omega )} u_0(x)\), \(\omega \in \Sigma \).

The result of Theorem 1.2 for \(p=2\), with a weaker convergence, was established in [4]. Here we still follow the strategy developed in [4] which can be summarized as follows:

  1. (1)

    A priori bounds for the \(H^2\)-norm of \(v_\varepsilon \);

  2. (2)

    Convergence of the special sequence \((v_{2^{-k}})\) a.s. w.r.t. \(\omega \) in \({\mathcal C}([-T,T];H^\gamma (\mathbb {T}^2))\) for every \(T>0\);

  3. (3)

    Convergence of the whole family \((v_{\varepsilon })\) a.s. w.r.t. \(\omega \) in \({\mathcal C}([-T,T];H^\gamma (\mathbb {T}^2))\) for every \(T>0\);

  4. (4)

    Pathwise uniqueness of solutions to (1.17).

In contrast with [4], we do not use pathwise uniqueness in the convergence procedure of steps (2) and (3). The main novelty of this paper is that we can extend the \(H^2\) bounds in step (1) to the range of nonlinearities \(2\le p\le 3\). The key tool compared with [4] is the use of suitable energies in conjunction with the Brezis–Gallouët inequality. This technique is inspired by [10, 11, 13]. As already mentioned, another difference compared with [4] is that we establish the convergence of the solutions of the regularized problems to the solution of the limit problem almost surely, rather than in the weaker sense of convergence in probability.

It would be interesting to decide whether the modified energy argument developed in this paper can be useful in order to improve the range of the nonlinearity in [5], where the NLS with multiplicative spatial white noise on the whole space is considered. Another question concerns the possibility of implementing our approach of modified energies in the context of the formalism used in [7].

1.3 Notations

Next we fix some notation. We denote by \(L^q\), \(W^{s,q}\), \(H^\gamma \) the spaces \(L^q(\mathbb {T}^2)\), \(W^{s,q}(\mathbb {T}^2)\), \(H^\gamma (\mathbb {T}^2)\). Let us give the precise definition of \(W^{s,q}\) that we use. The linear operator \(D^s\) is defined by

$$\begin{aligned} D^s(e^{in\cdot x})=\langle n\rangle ^{s} e^{in\cdot x}, \end{aligned}$$

where \(\langle n\rangle =(1+|n|^2)^{\frac{1}{2}}\). Then we define \(W^{s,q}\) via the norm

$$\begin{aligned} \Vert f\Vert _{W^{s,q}(\mathbb {T}^2)}:=\Vert D^s(f)\Vert _{L^q(\mathbb {T}^2)}\,. \end{aligned}$$
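By linearity, \(D^s\) acts on Fourier series as \(D^s\big (\sum _n c_n e^{in\cdot x}\big )=\sum _n \langle n\rangle ^{s} c_n e^{in\cdot x}\); for instance \(\Vert e^{in\cdot x}\Vert _{W^{s,q}}=\langle n\rangle ^{s}\Vert e^{in\cdot x}\Vert _{L^q}=\langle n\rangle ^{s} (2\pi )^{2/q}\).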

We also use the following notation for weighted Lebesgue spaces: \(\Vert f\Vert _{L^q(w)}^q=\int _{\mathbb {T}^2} |f|^q w \, dx\), where \(w\ge 0\) is a weight. We shall denote by \(x=(x_1, x_2)\) the generic point of \(\mathbb {T}^2\), by \(\nabla \) the full gradient operator w.r.t. the space variables, and by \(\partial _i\) the partial derivative w.r.t. \(x_i\). To simplify the presentation we denote by \(\int _{\mathbb {T}^2} h\) the integral with respect to the Lebesgue measure \(\int _{\mathbb {T}^2} h \, dx\).

Starting from Sect. 3, we will denote by \(C(\omega )\) a generic random variable which is finite on the event of full probability defined in Proposition 3.1. The random constant \(C(\omega )\) will be allowed to change from line to line in our computations. For every \(q\in [1,\infty ]\) we denote by \(q'\) the conjugate Hölder exponent. We shall use the notation \(\lesssim \) to denote an inequality \(\le \) up to a positive multiplicative constant C, which in turn may depend harmlessly on contextual parameters. In some cases we shall drop the dependence of the functions on the variables \((t, x , \omega )\) when it is clear from the context.

1.4 Plan of the remaining part of the paper

In the next section, we present some stochastic analysis considerations. Section 3 is devoted to the basic bounds resulting from the Hamiltonian structure and to some variants of the Gronwall lemma. Section 4 contains the key bounds at the \(H^2\) level; the proof of the algebraic Proposition 4.1 is postponed to the last section. In Sect. 5, we present the proof of Theorem 1.2, while Sect. 6 is devoted to the proof of Theorem 1.1. In the final Sect. 7, we present the proof of Proposition 4.1.

2 Probabilistic results

In this section we collect a series of results concerning the probabilistic object Y and its regularized version \(Y_\varepsilon \) (see (1.5)). The main point is that all the needed probabilistic properties are established a.s., which is crucial in order to obtain the almost sure convergence in Theorems 1.1 and 1.2. We shall need in the rest of the paper some special random constants that will be combinations of the ones involved in Proposition 2.1.

First we justify the introduction of the constant \(C_\varepsilon \) in (1.14) as follows. By definition of \(Y_{\varepsilon }(x,\omega )\) (see (1.5)) we have

$$\begin{aligned} |\nabla Y_\varepsilon |^2(x,\omega )=\sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \end{array}}\, \rho \big (\varepsilon n_1\big ) \rho (\varepsilon n_2) \frac{n_1\cdot n_2}{|n_1|^2 |n_2|^2}\, g_{n_1}(\omega )\overline{g_{n_2}(\omega )}\,e^{i(n_1-n_2)\cdot x} \end{aligned}$$

whose zero Fourier coefficient is the random constant

$$\begin{aligned} \sum _{n\in \mathbb {Z}^2,n\ne 0}\, \rho ^2(\varepsilon n) \frac{|g_{n}(\omega )|^2}{|n|^2}. \end{aligned}$$

Hence the constant \(C_\varepsilon \) defined in (1.14) is the average over \(\Omega \) of the zero Fourier mode computed above. We shall prove that a.s. w.r.t. \(\omega \) the functions \(:|\nabla Y_\varepsilon |^2:(x,\omega )\) defined in (1.13) converge as \(\varepsilon \rightarrow 0\), in the topology of \(W^{-s,q}\) for \(s\in (0,1)\) and \(q\in (1, \infty )\), to the limit object \(:|\nabla Y|^2:(x,\omega )\) defined by (1.15).
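For later orientation let us also record, heuristically, the rate at which \(C_\varepsilon \) diverges: since \(\rho (0)=1\) and \(\rho \) decays rapidly at infinity, the sum in (1.14) is essentially a sum over the modes \(0<|n|\lesssim \varepsilon ^{-1}\), and hence

$$\begin{aligned} C_\varepsilon =\sum _{n\in \mathbb {Z}^2,n\ne 0}\, \frac{\rho ^2(\varepsilon n)}{|n|^2} \approx \sum _{0<|n|\lesssim \varepsilon ^{-1}} \frac{1}{|n|^2} \approx 2\pi |\ln \varepsilon |, \quad \varepsilon \rightarrow 0, \end{aligned}$$

up to an error which stays bounded as \(\varepsilon \rightarrow 0\). In particular \(C_\varepsilon \lesssim |\ln \varepsilon |\), a fact that will be used in the proof of Proposition 2.4 below.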

Next we gather the key probabilistic properties that we need in the rest of the paper.

Proposition 2.1

Let \(s\in (0,1)\) and \(q\in (1,\infty )\) be given. There exists an event \(\Sigma _0\subset \Omega \) such that \(p(\Sigma _0)=1\) and for every \(\omega \in \Sigma _0\) there exists a finite constant \(C(\omega )>0\) such that:

  • we have the following uniform bound:

    $$\begin{aligned} \sup _{\varepsilon \in (0,1)} \big \{\Vert e^{\pm Y_\varepsilon }(x,\omega )\Vert _{L^\infty }, \Vert e^{\pm Y_\varepsilon }(x,\omega )\Vert _{W^{s,q}},\Vert \nabla Y_\varepsilon (x,\omega )\Vert _{L^q} |\ln \varepsilon |^{-1},\\ \Vert :|\nabla Y_\varepsilon |^2:(x,\omega ) \Vert _{L^q} |\ln \varepsilon |^{-2}, \Vert :|\nabla Y_\varepsilon |^2: (x,\omega )\Vert _{W^{-s,q}} \big \}<C(\omega ); \end{aligned}$$
  • for a suitable \(\kappa >0\) we have:

    $$\begin{aligned} \Vert Y_\varepsilon (x,\omega )-Y(x,\omega )\Vert _{W^{s,q}}<C(\omega ) \varepsilon ^\kappa , \end{aligned}$$
    (2.1)

    in particular by choosing \(sq>2\) we get by Sobolev embedding

    $$\begin{aligned} \Vert Y_\varepsilon (x,\omega )-Y(x,\omega )\Vert _{L^\infty }<C(\omega ) \varepsilon ^\kappa \end{aligned}$$

    and also

    $$\begin{aligned} \Vert e^{-pY_\varepsilon (x,\omega )}-e^{-pY(x,\omega )}\Vert _{L^\infty }<C(\omega ) \varepsilon ^\kappa , \quad \forall p\in \mathbb {R}; \end{aligned}$$
  • for a suitable \(\kappa >0\) we have

    $$\begin{aligned} \Vert \nabla Y_\varepsilon (x,\omega ) - \nabla Y(x,\omega ) \Vert _{W^{-s,q}}< C(\omega ) \varepsilon ^\kappa , \end{aligned}$$
    (2.2)

    and

    $$\begin{aligned} \Vert :|\nabla Y_\varepsilon |^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega ) \Vert _{W^{-s,q}}< C(\omega ) \varepsilon ^\kappa . \end{aligned}$$
    (2.3)

Let us observe that since the condition on s is open, by using the Sobolev embedding we can include the case \(q=\infty \) in (2.1), (2.2), (2.3).

We shall split the proof of Proposition 2.1 into several propositions. The following result will be important in order to pass information from a suitable discrete sequence \(\varepsilon _N\) to the continuous parameter \(\varepsilon \). Notice that the independence property of \((g_n)\) is not used in its proof.

Lemma 2.2

Let \(\gamma >0\) be fixed. Then there exists an event \(\Sigma _1\) with full measure such that for every \(\omega \in \Sigma _1\) there exists \(K>0\) such that

$$\begin{aligned} |g_n(\omega )|< K \langle n\rangle ^{\gamma },\quad \forall \, n\in \mathbb {Z}^2\setminus \{0\}. \end{aligned}$$
(2.4)

Proof

We first prove

$$\begin{aligned} p\Big ( \{\omega \in \Omega : \sup _{n\in \mathbb {Z}^2, n\ne 0} \big ( \langle n\rangle ^{-\gamma } |g_n(\omega )|\big ) \ge K \}\Big ) \overset{K\rightarrow \infty }{\longrightarrow }0. \end{aligned}$$
(2.5)

Notice that

$$\begin{aligned} p\big ( \{\omega \in \Omega : \sup _{n\in \mathbb {Z}^2, n\ne 0} \big ( \langle n\rangle ^{-\gamma } |g_n(\omega )|\big ) \ge K\} \big ) \\ \le \sum _{ n\in \mathbb {Z}^2, n\ne 0}\, p\big (\{\omega \in \Omega : |g_n(\omega )| \ge K \langle n\rangle ^{\gamma }\}\big ). \end{aligned}$$

It remains to observe that by gaussianity

$$\begin{aligned} p\big (\{\omega \in \Omega : |g_n(\omega )| \ge K \langle n\rangle ^{\gamma }\}\big ) < C e^{-c \langle n\rangle ^{2\gamma } K^2} \end{aligned}$$

and we conclude (2.5) by elementary considerations, since for every \(K\ge 1\) the series \(\sum _{n\ne 0} e^{-c \langle n\rangle ^{2\gamma } K^2}\) converges and its sum tends to zero as \(K\rightarrow \infty \).

Next we introduce

$$\begin{aligned} \Omega _K=\{\omega \in \Omega : \sup _{n\in \mathbb {Z}^2, n\ne 0} \big ( \langle n\rangle ^{-\gamma } |g_n(\omega )|\big ) < K\} \end{aligned}$$

and

$$\begin{aligned} \Sigma _1=\bigcup _{K=1}^\infty \Omega _K\,. \end{aligned}$$

Notice that (2.4) holds for \(\omega \in \Sigma _1\) for a suitable K, and moreover \(\Sigma _1\) has full measure by (2.5). \(\square \)

Proposition 2.3

Let \(s\in (0,1)\) and \(q\in (1,\infty )\) be fixed. There exists an event \(\tilde{\Sigma }\subset \Omega \) such that \(p(\tilde{\Sigma })=1\) and for every \(\omega \in \tilde{\Sigma }\) there exists \(C(\omega )<\infty \) such that:

$$\begin{aligned} \Vert Y_\varepsilon (x,\omega ) -Y(x,\omega )\Vert _{W^{s,q}}<C(\omega ) \varepsilon ^\kappa \end{aligned}$$
(2.6)

for a suitable \(\kappa >0\). Moreover we have

$$\begin{aligned} \Vert e^{-pY_\varepsilon (x,\omega )}-e^{-pY(x,\omega )}\Vert _{L^\infty }<C(\omega ) \varepsilon ^\kappa . \end{aligned}$$
(2.7)

Proof

First we notice that, by (2.6) applied with exponents satisfying \(sq>2\) and by the Sobolev embedding, we have a.s.

$$\begin{aligned} \Vert Y_\varepsilon (x,\omega ) -Y(x,\omega )\Vert _{L^\infty }<C(\omega ) \varepsilon ^\kappa . \end{aligned}$$
(2.8)

Hence (2.7) follows from the following computation

$$\begin{aligned} |e^{-pY_\varepsilon (x,\omega )}-e^{-pY(x,\omega )}|\lesssim |Y_\varepsilon (x,\omega ) -Y(x,\omega )| e^{|p| \sup _{\varepsilon } \Vert Y_\varepsilon (x,\omega )\Vert _{L^\infty }} \end{aligned}$$

and by noticing that, by (2.8) and the fact that a.s. \(Y(x,\omega )\in L^\infty (\mathbb {T}^2)\) (again by the Sobolev embedding), we have a.s. \(\sup _{\varepsilon \in (0,1)} \Vert Y_\varepsilon (x,\omega )\Vert _{L^\infty }<\infty \).

Next we split the proof of (2.6) in two steps.

First step: proof of (2.6) for \(\varepsilon =\varepsilon _N=N^{-1}\)

For every \(p\ge q\) we combine the Minkowski inequality and a standard bound between the \(L^p\) and \(L^2\) norms of gaussians in order to get:

$$\begin{aligned}&\Vert Y(x,\omega )-Y_{\varepsilon _N}(x,\omega )\Vert _{L^p(\Omega ;W^{s,q})} = \Vert D^s(Y(x,\omega )-Y_{\varepsilon _N}(x,\omega ))\Vert _{L^p(\Omega ;L^{q})} \nonumber \\&\quad \le \Vert D^s(Y(x,\omega )-Y_{\varepsilon _N}(x,\omega ))\Vert _{L^{q}(\mathbb {T}^2;L^p(\Omega ))} \nonumber \\&\quad \lesssim \sqrt{p}\Vert D^s(Y(x,\omega )-Y_{\varepsilon _N}(x,\omega ))\Vert _{L^{q}(\mathbb {T}^2;L^2(\Omega ))} \lesssim \sqrt{p}N^{-1+s}. \end{aligned}$$
(2.9)

To justify the last inequality notice that by independence of \(g_n(\omega )\) we have for every fixed \(x\in \mathbb {T}^2\) the following estimates:

$$\begin{aligned} \Vert D^s(Y(x,\omega )-Y_{\varepsilon _N}(x,\omega ))\Vert _{L^2(\Omega )}^2\lesssim \sum _{|n|\le N} \frac{|1- \rho (\varepsilon _N n)|^2}{\langle n\rangle ^{4-2s}}+ \sum _{|n|>N} \frac{|1- \rho (\varepsilon _N n) |^2}{\langle n\rangle ^{4-2s}}\\ \lesssim N^{-2} \sum _{|n|\le N} \frac{|n|^2}{\langle n\rangle ^{4-2s}}+ \sum _{|n|>N} \frac{1}{\langle n\rangle ^{4-2s}} \lesssim N^{-2+2s} \end{aligned}$$

where we have used the following consequence of the mean value theorem

$$\begin{aligned} |1- \rho (\varepsilon _N n)|=|\rho (0)- \rho (\varepsilon _N n)|\lesssim \min \{(\sup |\nabla \rho |)\, N^{-1} |n|, \,\sup |\rho |\}. \end{aligned}$$

Next by combining (2.9) and Lemma 4.5 of [14], we can write

$$\begin{aligned} p\big (\{\omega \in \Omega : N^{\frac{1-s}{2}} \Vert Y(x,\omega )- Y_{\varepsilon _N}(x,\omega )\Vert _{W^{s, q}}> 1\}\big ) \le C\, e^{-c N^{1-s }} \end{aligned}$$
(2.10)

where the positive constants \(c,C>0\) are independent of N. The right-hand side of (2.10) is summable in N. Therefore, we can use the Borel-Cantelli lemma to conclude that there exists a full measure set \(\Sigma _2\subset \Omega \) such that for every \(\omega \in \Sigma _2\) there exists \(N_0(\omega )\in \mathbb {N}\) such that

$$\begin{aligned} \Vert Y(x,\omega )-Y_{\varepsilon _N}(x,\omega )\Vert _{W^{s,q}}\le N^{\frac{s-1}{2}}= \varepsilon _N^{\frac{1-s}{2}},\quad \forall N>N_0(\omega ) \end{aligned}$$

and hence

$$\begin{aligned} \Vert Y(x,\omega )-Y_{\varepsilon _N}(x,\omega )\Vert _{W^{s,q}}<C(\omega ) \varepsilon _N^{\frac{1-s}{2}}, \quad \forall N\in \mathbb {N}. \end{aligned}$$
(2.11)
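For the reader's convenience, let us also recall the elementary mechanism which converts the moment bound (2.9) into the Gaussian-type tail bound (2.10), and which will be used again several times below: if a non-negative random variable F satisfies \(\Vert F\Vert _{L^p(\Omega )}\le C_0\sqrt{p}\,\sigma \) for every \(p\ge q\), then by the Chebyshev inequality

$$\begin{aligned} p\big (\{\omega \in \Omega : F(\omega )> R\}\big )\le \Big (\frac{C_0\sqrt{p}\,\sigma }{R}\Big )^p, \end{aligned}$$

and the choice \(p\sim (R/\sigma )^2\) (admissible for \(R/\sigma \) large) gives a bound of the form \(Ce^{-c R^2/\sigma ^2}\). In the case of (2.10) one takes \(\sigma =N^{-1+s}\) and \(R=N^{-\frac{1-s}{2}}\), so that \(R^2/\sigma ^2=N^{1-s}\).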

Second step: proof of (2.6) for \(\varepsilon \in (0,1)\)

Let us set \( \tilde{\Sigma }=\Sigma _1\cap \Sigma _2\ \) (where \(\Sigma _2\) is given in the first step and \(\Sigma _1\) in Lemma 2.2), then \(p(\tilde{\Sigma })=1\) and we will show that for every \(\omega \in \tilde{\Sigma }\) we have the desired property.

For every \(\varepsilon \in (0,1)\) there exists \(N\in \mathbb {N}\) such that

$$\begin{aligned} \varepsilon _{N+1}< \varepsilon \le \varepsilon _N\ \end{aligned}$$
(2.12)

where \(\varepsilon _N=N^{-1}\) is as in the previous step. We claim that

$$\begin{aligned} \Vert Y_\varepsilon (x,\omega )-Y_{\varepsilon _N}(x,\omega )\Vert _{W^{s,q}} < C(\omega )\varepsilon ^{1-s-\gamma } , \quad \forall \omega \in \Sigma _1 \end{aligned}$$
(2.13)

for every \(\gamma >0\). Once the estimate above is established then the proof of (2.6) for \(\omega \in \tilde{\Sigma }\) follows by recalling (2.11) and the Minkowski inequality:

$$\begin{aligned}&\Vert Y_\varepsilon (x,\omega )-Y(x,\omega )\Vert _{W^{s,q}} \le \Vert Y_\varepsilon (x,\omega )-Y_{\varepsilon _N}(x,\omega )\Vert _{W^{s,q}}\\&\qquad +\Vert Y_{\varepsilon _N} (x,\omega )-Y(x,\omega )\Vert _{W^{s,q}}< C(\omega )\varepsilon ^{1-s-\gamma } + C(\omega ) \varepsilon _N^{\frac{1-s}{2}}\\&\quad <C(\omega )(\varepsilon ^{1-s-\gamma } + \varepsilon ^{\frac{1-s}{2}}). \end{aligned}$$

Hence we get (2.6) for \(\kappa =\frac{1-s}{2}\) provided that we choose \(\gamma >0\) small enough.

Next we prove (2.13). Due to (2.4) and the Minkowski inequality for every \(\omega \in \Sigma _1\) we have some \(K>0\) such that:

$$\begin{aligned} \Vert Y_\varepsilon (x,\omega )-Y_{\varepsilon _N}(x,\omega )\Vert _{W^{s,q}} \lesssim K \sum _{n\in \mathbb {Z}^2} \frac{\big | \rho \big (\varepsilon n\big ) - \rho \big (\varepsilon _N n\big ) \big |}{\langle n\rangle ^{2-s-\gamma }}\,. \end{aligned}$$
(2.14)

On the other hand by the mean value theorem and (2.12) we get

$$\begin{aligned} \big | \rho \big (\varepsilon n\big ) - \rho \big (\varepsilon _N n\big ) \big | \lesssim |n|N^{-2} \sup _{t\in [0,1]} |\nabla \rho \big ((t\varepsilon _N +(1-t) \varepsilon ) n\big )|, \end{aligned}$$

therefore by using the rapid decay of \(\nabla \rho \) we get for every \(L>0\) the following bound

$$\begin{aligned} \big | \rho \big (\varepsilon n\big ) - \rho \big (\varepsilon _N n\big ) \big | \lesssim |n|N^{-2} \min \{(\sup |\nabla \rho |), |n|^{-L}N^L\}. \end{aligned}$$
(2.15)

We can now estimate the r.h.s. in (2.14) as follows:

$$\begin{aligned}&\sum _{|n|\le N} \frac{\big | \rho \big (\varepsilon n\big ) - \rho \big (\varepsilon _N n\big ) \big |}{\langle n\rangle ^{2-s-\gamma }} + \sum _{|n|>N} \frac{| \rho (\varepsilon n) - \rho (\varepsilon _N n)|}{\langle n\rangle ^{2-s-\gamma }} \\&\quad \lesssim N^{-2} \sum _{|n|\le N} \frac{1 }{\langle n\rangle ^{1-s-\gamma }}+ N^{L-2}\sum _{|n|>N} \frac{1}{|n|^{L-1}\langle n\rangle ^{2-s-\gamma }} \lesssim N^{-1+s+\gamma } \end{aligned}$$

Hence going back to (2.14) we get

$$\begin{aligned} \Vert Y_\varepsilon (x,\omega )-Y_{\varepsilon _N}(x,\omega )\Vert _{W^{s,q}} < C(\omega )N^{-1+s+\gamma }=C(\omega )\varepsilon _N^{1-s-\gamma } , \quad \forall \omega \in \Sigma _1 \end{aligned}$$

and we conclude (2.13) since we are assuming (2.12). \(\square \)
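Before turning to the gradient bounds, here is a short numerical illustration (again not part of the proof) of the rate obtained in the first step: for a single sample of \((|g_n|^2)\) we evaluate \(\Vert Y-Y_\varepsilon \Vert _{H^s}\) through the Fourier coefficients and compare it with \(\varepsilon ^{1-s}\), in the spirit of the computation following (2.9). The cutoff, the Gaussian stand-in for \(\rho \) and the chosen values of s, \(\varepsilon \) are our own; the constraint \(\overline{g_n}=g_{-n}\) is ignored, since it only pairs n with \(-n\) and does not affect the size of the norm.

```python
import numpy as np

# Single-sample check of the decay rate of ||Y - Y_eps||_{H^s}, i.e. the case q = 2 of (2.6).
rng = np.random.default_rng(1)
N_max, s = 512, 0.5
n1, n2 = np.meshgrid(np.arange(-N_max, N_max + 1), np.arange(-N_max, N_max + 1))
n_sq = (n1 ** 2 + n2 ** 2).astype(float)
safe = np.maximum(n_sq, 1.0)                     # avoid dividing by zero at n = 0
g_sq = rng.exponential(1.0, size=n_sq.shape)     # |g_n|^2 for standard complex Gaussians

for eps in [0.2, 0.1, 0.05, 0.025]:
    rho = np.exp(-0.5 * eps ** 2 * n_sq)         # stand-in for rho(eps n)
    # squared H^s norm of Y - Y_eps: sum_n <n>^{2s} (1 - rho(eps n))^2 |g_n|^2 / |n|^4
    terms = (1.0 + n_sq) ** s * (1.0 - rho) ** 2 * g_sq / safe ** 2
    terms[n_sq == 0] = 0.0
    print(eps, np.sqrt(terms.sum()), eps ** (1 - s))
```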

Proposition 2.4

Let \(q\in (1,\infty )\) be fixed. There exists an event \(\tilde{\Sigma }\subset \Omega \) such that \(p(\tilde{\Sigma })=1\) and for every \(\omega \in \tilde{\Sigma }\) there exists \(C(\omega )<\infty \) such that:

$$\begin{aligned} \Vert \nabla Y_\varepsilon (x,\omega )\Vert _{L^{q}}<C(\omega ) |\ln \varepsilon | \end{aligned}$$
(2.16)

and

$$\begin{aligned} \Vert :|\nabla Y_\varepsilon (x,\omega )|^2:\Vert _{L^{q}}<C(\omega ) |\ln \varepsilon |^2. \end{aligned}$$
(2.17)

Proof

It is easy to deduce (2.17) from (2.16). In fact we have the following trivial estimate

$$\begin{aligned} \Vert :|\nabla Y_\varepsilon (x,\omega )|^2:\Vert _{L^{q}} \le \Vert |\nabla Y_\varepsilon (x,\omega )|\Vert _{L^{2q}}^2 +C_\varepsilon < C(\omega ) |\ln \varepsilon |^2 +C_\varepsilon \end{aligned}$$

and we conclude by noticing that \(C_\varepsilon \lesssim |\ln \varepsilon |\). Next, we focus on the proof of (2.16) that we split in two steps.

First step: proof of (2.16) for \(\varepsilon =\varepsilon _N=N^{-\delta }\), \(\delta \in (0,1)\)

Once again, by combining the Minkowski inequality and a standard bound between the \(L^p\) and \(L^2\) norms of gaussians we get for \(p\ge q\):

$$\begin{aligned}&\Vert \nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^p(\Omega ;L^{q})} \le \Vert \nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}(\mathbb {T}^2;L^p(\Omega ))} \nonumber \\&\quad \lesssim \sqrt{p}\Vert \nabla Y_{\varepsilon _N}(x, \omega )\Vert _{L^{q}(\mathbb {T}^2;L^2(\Omega ))} \lesssim \sqrt{p |\ln \varepsilon _N|} \end{aligned}$$
(2.18)

The last step follows since, by orthogonality of \(g_n(\omega )\), for every \(x\in \mathbb {T}^2\) we have:

$$\begin{aligned} \Vert \nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{2}(\Omega )}^2&\lesssim \sum _{n\in \mathbb {Z}^2, n\ne 0} \frac{|\rho \big (\varepsilon _N n)|^2}{|n|^2} \\&\lesssim \sum _{n\in \mathbb {Z}^2, 0<|n|<N} \frac{1}{|n|^2} + N^{2\delta L} \sum _{n\in \mathbb {Z}^2, |n|\ge N} \frac{1}{|n|^{2+2L}} \lesssim |\ln N| \\&\quad +N^{2L(\delta -1)} \lesssim |\ln \varepsilon _N| \end{aligned}$$

where \(L>0\) is any number and we used the fast decay of \(\rho \). Using (2.18), we obtain that

$$\begin{aligned} F(\omega )=|\ln \varepsilon _N|^{-1/2}\Vert \nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}} \end{aligned}$$

satisfies \( \Vert F\Vert _{L^p(\Omega )}\le C\sqrt{p} \) and using Lemma 4.5 of [14] (with \(N=1\), according to the notations of [14]), we can write

$$\begin{aligned} p\big (\{\omega \in \Omega : |\ln \varepsilon _N|^{-1}\Vert \nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}}> \lambda \}\big ) \le Ce^{-c|\ln \varepsilon _N| \lambda ^2} \end{aligned}$$
(2.19)

where the positive constants \(c,C>0\) are independent of N and \(\lambda >0\) is chosen large enough in such a way that the right-hand side of (2.19) is summable in N. Therefore, we can use the Borel-Cantelli lemma to conclude that there exists a full measure set \(\Sigma _2\subset \Omega \) such that for every \(\omega \in \Sigma _2\) there exists \(N_0(\omega )\in \mathbb {N}\) such that

$$\begin{aligned} \Vert \nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}}\le \lambda |\ln \varepsilon _N|, \quad \forall N>N_0(\omega ) \end{aligned}$$

and hence for every \(\omega \in \Sigma _2\) we have

$$\begin{aligned} \Vert \nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}}\le C(\omega ) |\ln \varepsilon _N|, \quad \forall N\in \mathbb {N}. \end{aligned}$$

Second step: proof of (2.16) for \(\varepsilon \in (0,1)\)

Exactly as in the proof of Proposition 2.3 it is sufficient to estimate

$$\begin{aligned} \Vert \nabla Y_{\varepsilon }(x,\omega )-\nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}}<C(\omega ) |\ln \varepsilon |, \end{aligned}$$
(2.20)

where

$$\begin{aligned} \varepsilon _{N+1}< \varepsilon \le \varepsilon _N \end{aligned}$$
(2.21)

with \(\varepsilon _N=N^{-\delta }\), provided that \(\omega \) belongs to the event \(\Sigma _1\) given in Lemma 2.2 and \(\delta >0\) is small enough. By Lemma 2.2 we have that for every \(\omega \in \Sigma _1\) there exists a constant \(K>0\) such that

$$\begin{aligned} \Vert \nabla Y_{\varepsilon }(x,\omega )-\nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}}\lesssim K \sum _{n\in \mathbb {Z}^2} \frac{\big | \rho \big (\varepsilon n\big ) - \rho \big (\varepsilon _N n\big ) \big |}{\langle n\rangle ^{1-\gamma }}\,. \end{aligned}$$
(2.22)

Next notice that by combining the mean value theorem with the strong decay of \(\rho \) we get for every fixed \(L>0\):

$$\begin{aligned} \big | \rho \big (\varepsilon n\big ) - \rho \big (\varepsilon _N n\big ) \big | \lesssim |n|N^{-1-\delta } \sup _{t\in [0,1]} |\nabla \rho \big ((t\varepsilon _N +(1-t) \varepsilon ) n\big )| \\ \lesssim |n|N^{-1-\delta } \min \{(\sup |\nabla \rho |), |n|^{-L}N^{\delta L}\}. \end{aligned}$$

Then we can estimate

$$\begin{aligned} \sum _{n\in \mathbb {Z}^2} \frac{\big | \rho \big (\varepsilon n\big ) - \rho \big (\varepsilon _N n\big ) \big |}{\langle n\rangle ^{1-\gamma }}&\lesssim N^{-1-\delta } \sum _{n\in \mathbb {Z}^2, 0<|n|<N^\delta } \langle n\rangle ^{\gamma } + N^{-1-\delta +\delta L} \sum _{n\in \mathbb {Z}^2, |n|\ge N^\delta } \langle n\rangle ^{\gamma -L} \\&\lesssim N^{-1-\delta } N^{\delta (2+\gamma )} + N^{-1-\delta +\delta L} N^{\delta (\gamma - L+2)} =2N^{-1-\delta } N^{\delta (2+\gamma )} \end{aligned}$$

and hence by (2.22) we get

$$\begin{aligned} \Vert \nabla Y_{\varepsilon }(x,\omega )-\nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}}<C(\omega ) \varepsilon _N^\alpha \end{aligned}$$

for a suitable \(\alpha >0\), provided that we choose \(\delta , \gamma >0\) small enough. By (2.21) we get

$$\begin{aligned} \Vert \nabla Y_{\varepsilon }(x,\omega )- \nabla Y_{\varepsilon _N}(x,\omega )\Vert _{L^{q}}<C(\omega ) \varepsilon ^\alpha \end{aligned}$$

and in particular we get (2.20). \(\square \)

Proposition 2.5

Let \(s\in (0,1)\) and \(q\in (1,\infty )\) be fixed. There exists an event \(\tilde{\Sigma }\subset \Omega \) such that \(p(\tilde{\Sigma })=1\) and for every \(\omega \in \tilde{\Sigma }\) there exists \(C(\omega )<\infty \) such that

$$\begin{aligned} \Vert :|\nabla Y_{\varepsilon }|^2:(x,\omega ) -:|\nabla Y|^2:(x,\omega )\Vert _{W^{-s,q}}<C(\omega ) \varepsilon ^\kappa \end{aligned}$$
(2.23)

for a suitable \(\kappa >0\).

Proof

We split again the proof of (2.23) into two steps.

First step: proof of (2.23) for \(\varepsilon =\varepsilon _N=N^{-\delta }\), \(\delta \in (0,1)\)

Notice that

$$\begin{aligned}&:|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega ) \\&\quad = \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0\\ n_1\ne n_2 \end{array}}\, \big [\rho \big (\frac{n_1}{N^\delta }\big ) \rho \big (\frac{n_2}{N^\delta }\big ) -1\big ] \frac{n_1\cdot n_2}{|n_1|^2 |n_2|^2}\, g_{n_1}(\omega )\overline{g_{n_2}(\omega )}\,e^{i(n_1-n_2)\cdot x} \\&\qquad +\sum _{n\in \mathbb {Z}^2,n\ne 0}\, \big [\rho ^2(\frac{n}{N^\delta })-1]\frac{|g_n(\omega )|^2 -1}{|n|^2}. \end{aligned}$$

For every \(p\ge q\), by the Minkowski inequality and using hypercontractivity (see [12]) to estimate the \(L^p\) norm of a bilinear form of the gaussian vector \((g_n)\), we get:

$$\begin{aligned}&\Vert :|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega )\Vert _{L^p(\Omega ;W^{-s,q})} \nonumber \\&\quad = \Vert D^{-s}(:|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega ))\Vert _{L^p(\Omega ;L^{q})} \nonumber \\&\quad \le \Vert D^{-s}(:|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega ))\Vert _{L^{q}(\mathbb {T}^2;L^p(\Omega ))} \nonumber \\&\quad \lesssim p\Vert D^{-s}(:|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega ))\Vert _{L^{q}(\mathbb {T}^2;L^2(\Omega ))}. \end{aligned}$$
(2.24)

Now, we observe that for a complex gaussian g and two nonnegative integers \(k_1\), \(k_2\), we have that \(\mathbb {E}(g^{k_1}\bar{g}^{k_2})=0\), unless \(k_1=k_2\). Therefore, by the independence of \((g_n)\), modulo \(g_n=\overline{g_{-n}}\), for every fixed \(x\in \mathbb {T}^2\), we can write:

$$\begin{aligned} \Vert D^{-s}(:|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega ))\Vert _{L^2(\Omega )}^2\\ \lesssim \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0\\ n_1\ne n_2 \end{array}}\, \big [\rho \big (\frac{n_1}{N^\delta }\big ) \rho \big (\frac{n_2}{N^\delta }\big ) -1\big ]^2 \frac{|n_1\cdot n_2|^2}{\langle n_1-n_2 \rangle ^{2s}|n_1|^4 |n_2|^4}\, \\ + \sum _{n\in \mathbb {Z}^2,n\ne 0}\, \big [\rho ^2\big (\frac{n}{N^\delta }\big )-1\big ]^2\frac{1}{|n|^4} =I+II, \end{aligned}$$

where the implicit constant is independent of x. Concerning II we notice first that by the mean value theorem

$$\begin{aligned} \big |\rho ^2\big (\frac{n}{N^\delta }\big )-1\big |=\big |\rho ^2\big (\frac{n}{N^\delta }\big )-\rho ^2(0))\big | \lesssim |n| N^{-\delta } \end{aligned}$$

and by interpolation with the trivial bound

$$\begin{aligned} \big |\rho ^2\big (\frac{n}{N^\delta }\big )-1\big | \lesssim 1 \end{aligned}$$

we get

$$\begin{aligned} \big |\rho ^2\big (\frac{n}{N^\delta }\big )-1\big | \lesssim |n|^\theta N^{-\delta \theta }, \quad \theta \in (0,1). \end{aligned}$$

Then we can estimate

$$\begin{aligned} II\lesssim N^{-2\delta \theta } \sum _{n\in \mathbb {Z}^2,n\ne 0}\, \frac{1}{|n|^{4-2\theta }}\lesssim N^{-2\delta \theta } . \end{aligned}$$

Next we estimate I. First notice that

$$\begin{aligned} \rho \big (\frac{n_1}{N^\delta }\big ) \rho \big ( \frac{n_2}{N^\delta }\big ) -1 = \rho \big (\frac{n_1}{N^\delta }\big ) \big [\rho \big ( \frac{n_2}{N^\delta }\big )-\rho (0)\big ] +\big [\rho \big (\frac{n_1}{N^\delta }\big ) -\rho (0)\big ] \rho \big (0) \end{aligned}$$

and hence by the mean value theorem we get

$$\begin{aligned} \big | \rho \big (\frac{n_1}{N^\delta }\big ) \rho \big ( \frac{n_2}{N^\delta }\big ) -1\big |\lesssim (|n_1|+|n_2|) N^{-\delta }. \end{aligned}$$
(2.25)

which by interpolation with the trivial bound

$$\begin{aligned} \big | \rho \big (\frac{n_1}{N^\delta }\big ) \rho \big ( \frac{n_2}{N^\delta }\big ) -1\big |\lesssim 1 \end{aligned}$$

implies

$$\begin{aligned} \big | \rho \big (\frac{n_1}{N^\delta }\big ) \rho \big ( \frac{n_2}{N^\delta }\big ) -1\big |\lesssim (|n_1|+|n_2|)^\theta N^{-\delta \theta }, \quad \theta \in (0,1). \end{aligned}$$

Hence we can evaluate I as follows:

$$\begin{aligned} I&\lesssim N^{-2\delta \theta } \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \end{array}} \frac{(|n_1|+|n_2|)^{2\theta }}{\langle n_1-n_2 \rangle ^{2s}|n_1|^2 |n_2|^2} \\&\lesssim N^{-2\delta \theta } \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \end{array}} \frac{1}{\langle n_1-n_2 \rangle ^{2s}|n_1|^{2-2\theta } |n_2|^{2-2\theta }}\lesssim N^{-2\delta \theta }, \end{aligned}$$

where we have used

$$\begin{aligned} \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \end{array}} \frac{1}{\langle n_1-n_2 \rangle ^{2s}|n_1|^{2-2\theta } |n_2|^{2-2\theta }}<\infty \end{aligned}$$

which in turn follows from the discrete Young inequality provided that \(2\theta <s\). Going back to (2.24) we get that for \(\delta \in (0,1)\) and \(\theta \in (0,\frac{s}{2})\) one has the bound

$$\begin{aligned} \Vert :|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega )\Vert _{L^p(\Omega ;W^{-s,q})} \lesssim p N^{-\delta \theta }\,. \end{aligned}$$

This estimate, in conjunction with Lemma 4.5 of [14] implies

$$\begin{aligned} p\big (\{\omega \in \Omega : N^{\frac{\delta \theta }{2}} \Vert :|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega )\Vert _{W^{-s,q}}> 1\}\big ) \le Ce^{-c N^{\frac{\delta \theta }{2}}} \end{aligned}$$

where the positive constants \(c,C>0\) are independent of N. Since the r.h.s. is summable in N we can apply the Borel-Cantelli lemma and deduce the existence of an event \(\Sigma _2\) with full measure and such that for every \(\omega \in \Sigma _2\) there exists \(N_0(\omega )>0\) with the property

$$\begin{aligned} \Vert :|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega )\Vert _{W^{-s,q}} \le N^{-\frac{\delta \theta }{2}}=\varepsilon _N^{\frac{\theta }{2}}, \quad \forall \, N>N_0(\omega ) \end{aligned}$$

and hence

$$\begin{aligned} \Vert :|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega )\Vert _{W^{-s,q}} < C(\omega ) \varepsilon _N^{\frac{\theta }{2}}, \quad \forall \, N\in \mathbb {N}. \end{aligned}$$

Second step: proof of (2.23) for \(\varepsilon \in (0,1)\)

We consider a generic \(\varepsilon >0\) and we select N in such a way that \(\varepsilon _{N+1}<\varepsilon \le \varepsilon _N\) where \(\varepsilon _N=N^{-\delta }\) as in the first step. By the Minkowski inequality and the previous step it is sufficient to prove

$$\begin{aligned} \Vert :|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y_{\varepsilon }|^2:(x,\omega )\Vert _{W^{-s,q}} <C(\omega )N^{-\alpha } \end{aligned}$$
(2.26)

for a suitable \(\alpha >0\), for \(\omega \in \Sigma _1\) where \(\Sigma _1\) is given in Lemma 2.2. By Lemma 2.2 we deduce that almost surely there exists a finite constant \(K>0\) such that (2.4) occurs. Then we have

$$\begin{aligned}&\Vert :|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y_{\varepsilon }|^2:(x,\omega )\Vert _{W^{-s,q}} \\&\quad \lesssim K^2 \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \end{array}} \frac{ \big |\rho (\varepsilon _N n_1) \rho (\varepsilon _N n_2) -\rho (\varepsilon n_1) \rho (\varepsilon n_2)\big |}{\langle n_1-n_2\rangle ^s|n_1|^{1-\gamma } |n_2|^{1-\gamma }}. \end{aligned}$$

By looking at the argument to prove (2.25) we get:

$$\begin{aligned} \big | \rho (\varepsilon _N n_1) \rho ( \varepsilon _N n_2) -\rho (\varepsilon n_1) \rho (\varepsilon n_2) \big |\lesssim (|n_1|+|n_2|) (\varepsilon _N - \varepsilon ) \end{aligned}$$
(2.27)

and hence we can continue the estimate above as follows

$$\begin{aligned}&\Vert :|\nabla Y_{\varepsilon _N}|^2:(x,\omega ) - :|\nabla Y_{\varepsilon }|^2:(x,\omega )\Vert _{W^{-s,q}} \nonumber \\&\quad \lesssim K^2 (\varepsilon _N - \varepsilon ) \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ \max \{|n_1|, |n_2|\}\le N^{2\delta } \end{array}} \frac{|n_1|+|n_2|}{{\langle n_1-n_2\rangle ^s|n_1|^{1-\gamma } |n_2|^{1-\gamma }}} \nonumber \\&\qquad + K^2 \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ \min \{|n_1|, |n_2|\}\ge N^{2\delta } \end{array}} \frac{ \big |\rho (\varepsilon _N n_1) \rho (\varepsilon _N n_2) -\rho (\varepsilon n_1) \rho (\varepsilon n_2)\big |}{\langle n_1-n_2\rangle ^s|n_1|^{1-\gamma } |n_2|^{1-\gamma }} \nonumber \\&\qquad + K^2 \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ \min \{|n_1|, |n_2|\}\le N^{2\delta } \\ \max \{|n_1|, |n_2|\}\ge N^{2\delta } \end{array}} \frac{ \big |\rho (\varepsilon _N n_1) \rho (\varepsilon _N n_2) -\rho (\varepsilon n_1) \rho (\varepsilon n_2)\big |}{\langle n_1-n_2\rangle ^s|n_1|^{1-\gamma } |n_2|^{1-\gamma }}. \end{aligned}$$
(2.28)

Concerning the first sum on the r.h.s. in (2.28) (notice that by symmetry we can assume \(|n_1|<|n_2|\)) we can estimate as follows:

$$\begin{aligned}&(\varepsilon _N - \varepsilon ) \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ |n_1|\le |n_2|\le N^{2\delta } \end{array}} \frac{|n_2|}{\langle n_1-n_2\rangle ^s|n_1|^{1-\gamma } |n_2|^{1-\gamma }} \\&\quad \lesssim N^{-1-\delta } \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ |n_1|\le |n_2|\le N^{2\delta } \end{array}} |n_2|^\gamma \lesssim N^{-1-\delta } N^{4\delta } N^{2\delta \gamma }. \end{aligned}$$

For the second sum on the r.h.s. in (2.28) we use the fast decay of \(\rho \) and hence for every \(L>0\) we have

$$\begin{aligned}&\sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ \min \{|n_1|, |n_2|\}\ge N^{2\delta } \end{array}} \frac{ \big |\rho (\varepsilon _N n_1) \rho (\varepsilon _N n_2) -\rho (\varepsilon n_1) \rho (\varepsilon n_2)\big |}{\langle n_1-n_2\rangle ^s|n_1|^{1-\gamma } |n_2|^{1-\gamma }} \\&\quad \lesssim N^{2\delta L}\sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ \min \{|n_1|, |n_2|\}\ge N^{2\delta } \end{array}} \frac{1}{|n_1|^{1-\gamma +L} |n_2|^{1-\gamma +L}} \\&\quad \lesssim N^{2\delta L} N^{{2\delta }(\gamma -L+1)}N^{{2\delta }(\gamma -L+1)} \lesssim N^{4\delta \gamma -2\delta L+4\delta }. \end{aligned}$$

By using again the fast decay of \(\rho \) we can estimate the third sum on the r.h.s. in (2.28) as follows (we can assume by symmetry \(|n_1|<N^{2\delta }\le |n_2|\)):

$$\begin{aligned}&\sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ |n_1|<N^{2\delta }\le |n_2| \end{array}} \frac{ \big |\rho (\varepsilon _N n_1) \rho (\varepsilon _N n_2) -\rho (\varepsilon n_1) \rho (\varepsilon n_2)\big |}{\langle n_1-n_2\rangle ^s|n_1|^{1-\gamma } |n_2|^{1-\gamma }} \\&\quad \lesssim {N^{\delta L}} \sum _{\begin{array}{c} (n_1,n_2)\in \mathbb {Z}^4\\ n_1\ne 0, n_2\ne 0 \\ |n_1|<N^{2\delta }\le |n_2| \end{array}} \frac{1}{|n_1|^{1-\gamma } |n_2|^{1-\gamma +L}} \lesssim {N^{\delta L}} N^{2\delta (\gamma +1)} N^{2\delta (\gamma -L+1)} \lesssim N^{-\delta L+4\delta \gamma +4\delta }. \end{aligned}$$

The proof of (2.26) is now complete provided we choose L large and \(\delta ,\gamma \) small enough. \(\square \)

We complete this section by noticing that the analysis we performed here may allow the extension of the results of [9] to a continuous family of approximation problems.

3 Some useful facts

Next we state a result that will be useful in the sequel. The proof is inspired by [4]; nevertheless, we provide it for the sake of completeness. From a technical viewpoint, the minor difference is that our proof involves Sobolev spaces, while the one in [4] uses Hölder spaces.

Proposition 3.1

Let \(v_\varepsilon (t,x, \omega )\) be as in Theorem 1.2. Then there exists an event \(\Sigma \subset \Omega \) such that \(p(\Sigma )=1\), \(\Sigma \subset \Sigma _0\) where \(\Sigma _0\) is the event in Proposition 2.1 and for every \(\omega \in \Sigma \) there exists a finite constant \(C(\omega )>0\) such that:

$$\begin{aligned} \sup _{\varepsilon \in (0,1), t\in \mathbb {R}} \Vert v_\varepsilon (t,x,\omega )\Vert _{H^1}<C(\omega ) \end{aligned}$$
(3.1)

and

$$\begin{aligned} \Vert e^{Y(x,\omega )}u_0(x)\Vert _{H^2}<C(\omega ). \end{aligned}$$
(3.2)

Proof

We introduce the event \(\Sigma \) as follows

$$\begin{aligned} \Sigma =\{\omega \in \Omega : e^{Y(x,\omega )}u_0(x)\in H^2\} \cap \Sigma _0 \end{aligned}$$
(3.3)

where \(\Sigma _0\) is the event provided by Proposition 2.1.

We now state the fundamental conservation laws satisfied by \(v_{\varepsilon }\). We have the mass conservation

$$\begin{aligned} \frac{d}{dt} \int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |v_\varepsilon (t)|^2 =0 \end{aligned}$$
(3.4)

and the energy conservation

$$\begin{aligned} \frac{d}{dt} \Big ( \int _{\mathbb {T}^2} e^{-2Y_\varepsilon } \big (|\nabla v_\varepsilon (t)|^2 -:|\nabla Y_\varepsilon |^2: |v_\varepsilon (t)|^2 -\frac{2\lambda }{p+2} e^{-pY_\varepsilon } |v_\varepsilon (t)|^{p+2} \big ) \Big ) =0. \end{aligned}$$
(3.5)

Of course, the conservation laws (3.4) and (3.5) give the key global information in our analysis. By using (3.4) and Proposition 2.1, we get

$$\begin{aligned} \int _{\mathbb {T}^2} |v_\varepsilon (t)|^2&\le \Vert e^{2Y_\varepsilon }\Vert _{L^\infty }\int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |v_\varepsilon (t)|^2< C(\omega ) \int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |v(0)|^2 \nonumber \\&< C(\omega ) \Vert e^{-2Y_\varepsilon }\Vert _{L^\infty } \int _{\mathbb {T}^2} |v(0)|^2 <C(\omega ) \int _{\mathbb {T}^2} |v(0)|^2 . \end{aligned}$$
(3.6)

In order to control \(\Vert \nabla v_\varepsilon (t) \Vert _{L^2}\) we first notice that by duality and by Lemma 2.2 in [9] (see also [6] and the references therein), and by using Proposition 2.1 we get for \(s\in (0,1)\):

$$\begin{aligned} \big |\int _{\mathbb {T}^2} e^{-2Y_\varepsilon }&:|\nabla Y_\varepsilon |^2: |v_\varepsilon (t)|^2 \big | \le \Vert :|\nabla Y_\varepsilon |^2: \Vert _{W^{-s, q}} \Vert e^{-2Y_\varepsilon } |v_\varepsilon (t)|^2 \Vert _{W^{s,q'}} \\&< C(\omega ) (\Vert e^{-2Y_\varepsilon }\Vert _{W^{s,2}} \Vert |v_\varepsilon (t)|^2\Vert _{L^r} + \Vert e^{-2Y_\varepsilon }\Vert _{L^{l}} \Vert |v_\varepsilon (t)|^2\Vert _{W^{s,\frac{3}{2}}}) \\&<C(\omega ) (\Vert v_\varepsilon (t) \Vert _{L^{2r}}^2 + \Vert v_\varepsilon (t)\Vert _{L^m}\Vert v_\varepsilon (t)\Vert _{H^s}), \end{aligned}$$

where \(\frac{1}{q'}=\frac{1}{2} + \frac{1}{r}=\frac{2}{3} + \frac{1}{l}\), \(\frac{2}{3}=\frac{1}{m}+ \frac{1}{2}\) and \(q<\infty \) is large enough. We now fix \(s=\frac{1}{2}\). By using now interpolation and the Sobolev embedding we get

$$\begin{aligned} \big |\int _{\mathbb {T}^2} e^{-2Y_\varepsilon } :|\nabla Y_\varepsilon |^2: |v_\varepsilon (t)|^2\big |< & {} C(\omega ) \Vert v_\varepsilon (t) \Vert _{H^{\frac{1}{2}}} \Vert v_\varepsilon (t) \Vert _{H^{1}} \\< & {} C(\omega ) \Vert v_\varepsilon (t) \Vert _{L^{2}}^{\frac{1}{2}} \Vert v_\varepsilon (t) \Vert _{H^{1}}^{\frac{3}{2}}\,. \end{aligned}$$

By combining this estimate with (3.5) (and by using that we are assuming \(\lambda \le 0\)) we get

$$\begin{aligned} \int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |\nabla v_\varepsilon (t)|^2&\lesssim \Vert e^{-Y_\varepsilon } \nabla v_\varepsilon (0)\Vert _{L^2}^2 \\&\quad + C(\omega ) (\Vert v_\varepsilon (t) \Vert _{L^{2}}^{\frac{1}{2}} \Vert v_\varepsilon (t) \Vert _{H^{1}}^{\frac{3}{2}} +\Vert v_\varepsilon (0) \Vert _{L^{2}}^{\frac{1}{2}} \Vert v_\varepsilon (0) \Vert _{H^{1}}^{\frac{3}{2}})\\&\quad + \Vert e^{-Y_\varepsilon } v_\varepsilon (0)\Vert _{L^{p+2}}^{p+2} \end{aligned}$$

which in turn by interpolation, Sobolev embedding and (3.6) implies

$$\begin{aligned}&\int _{\mathbb {T}^2} ( |\nabla v_\varepsilon (t)|^2 + |v_\varepsilon (t)|^2) \lesssim C(\omega ) [{\mathcal P}(\Vert v_\varepsilon (0) \Vert _{L^{2}}) \Vert v_\varepsilon (t) \Vert _{H^{1}}^{\frac{3}{2}} +{\mathcal P}(\Vert v_\varepsilon (0) \Vert _{H^{1}})] \end{aligned}$$

where \({\mathcal P}\) denotes a polynomial function and we have used Proposition 2.1 to estimate a.s. \(\sup _{\varepsilon \in (0,1)} \Vert e^{-Y_\varepsilon }\Vert _{L^\infty }<C(\omega )\). We therefore have the bound

$$\begin{aligned} \int _{\mathbb {T}^2} ( |\nabla v_\varepsilon (t)|^2 + |v_\varepsilon (t)|^2) \lesssim C(\omega ) \Vert v_\varepsilon (t) \Vert _{H^{1}}^{\frac{3}{2}}+C(\omega ), \end{aligned}$$

where the random constant \(C(\omega )\) is finite for every \(\omega \in \Sigma \). We conclude the proof by the classical Young inequality, which gives \(C(\omega ) \Vert v_\varepsilon (t) \Vert _{H^{1}}^{\frac{3}{2}}\le \frac{1}{2} \Vert v_\varepsilon (t) \Vert _{H^{1}}^{2}+C(\omega )\) and allows us to absorb this term into the left-hand side. \(\square \)

In the sequel we shall need suitable versions of the Gronwall lemma. Although they are very classical, we prefer to state them; in particular, we emphasize how the estimates depend on the constants involved. We also mention that the estimates below are implicitly used in [4]; however, for the sake of clarity we prefer to give below the precise statements that we need.

Proposition 3.2

Let f(t) be a non-negative real valued function such that for \(t\in [0, \infty )\):

$$\begin{aligned} f(t) \le A + B \int _0^t f(s) \ln (C+ f(s))ds, \end{aligned}$$

where \(A, B, C\in (1, \infty )\). Then we have the following upper bound

$$\begin{aligned} f(t)\le (A+C)^{e^{Bt}}\,. \end{aligned}$$

Proof

Notice that by assumption

$$\begin{aligned} C+f(t)\le A +C+ B \int _0^t (C+f(s)) \ln (C+ f(s))ds:=F(t)\,. \end{aligned}$$
(3.7)

Therefore

$$\begin{aligned} F'(t)= B (C+f(t))\ln (C+f(t))\le B F(t)\ln F(t)\,. \end{aligned}$$

Hence

$$\begin{aligned} \frac{d}{dt}\big ( \ln \ln (F(t)) \big )\le B \end{aligned}$$

which implies, after integration between 0 and t:

$$\begin{aligned} \ln \ln (F(t))\le \ln \ln (A+C)+Bt\,. \end{aligned}$$

By taking twice the exponential we obtain

$$\begin{aligned} F(t)\le (A+C)^{e^{Bt}}. \end{aligned}$$

Coming back to (3.7), we get the needed bound. \(\square \)
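Let us observe that the double exponential bound of Proposition 3.2 has the correct order of growth: for instance, the solution of \(f'(t)=B f(t)\ln f(t)\), \(f(0)=A>1\), is explicitly \(f(t)=A^{e^{Bt}}\).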

Proposition 3.3

Let f(t) be a non-negative real valued function for \(t\in [0, \infty )\), such that:

$$\begin{aligned} f'(t)\le A +B f(t), \quad f(0)=0 \end{aligned}$$

where \(A, B\in (0, \infty )\). Then we have the following upper bound:

$$\begin{aligned} f(t)\le AB^{-1} e^{Bt}. \end{aligned}$$

Proof

We notice that \(\frac{d}{dt}(e^{-Bt} f(t))\le Ae^{-Bt}\) and hence

$$\begin{aligned} f(t)\le A e^{Bt} \int _0^t e^{-Bs} ds=\frac{A}{B} e^{Bt} (1-e^{-Bt})\le \frac{A}{B} e^{Bt}. \\ \end{aligned}$$

\(\square \)

4 Modified energy for the gauged NLS on \(\mathbb {T}^2\) and \(H^2\) a-priori bounds

In the sequel \(v_\varepsilon (t,x,\omega )\) will denote the unique solution to:

$$\begin{aligned} i\partial _t v_\varepsilon&= \Delta v_\varepsilon - 2\nabla v_\varepsilon \cdot \nabla Y_\varepsilon + v_\varepsilon :|\nabla Y_\varepsilon |^2: +\lambda e^{-pY_\varepsilon } v_\varepsilon |v_\varepsilon |^p,\,\, \nonumber \\&\quad v_\varepsilon (0,x)=v_0(x)\in H^2, \end{aligned}$$
(4.1)

where \(\lambda \in \mathbb {R}\), \(p\ge 2\).

Proposition 4.1

We have the identity

$$\begin{aligned} \frac{d}{dt} ({\mathcal E}_{\varepsilon }(v_\varepsilon )[t])= \lambda \mathcal H_{\varepsilon }(v_\varepsilon )[t], \end{aligned}$$

where

$$\begin{aligned} {\mathcal E}_{\varepsilon }(v_\varepsilon )[t]={\mathcal F}_{\varepsilon }(v_\varepsilon )[t] + \lambda {\mathcal G}_{\varepsilon }(v_\varepsilon )[t] \end{aligned}$$

and the energies \({\mathcal F}_{\varepsilon }, {\mathcal G}_{\varepsilon }, \mathcal H_{\varepsilon }\) are defined as follows on a generic time dependent function \(w(t,x)\). The kinetic energy is defined by

$$\begin{aligned} \begin{aligned} {\mathcal F}_{\varepsilon }(w)[t]&= \int _{\mathbb {T}^2} |\Delta w(t)|^2 e^{-2Y_\varepsilon }\\&\quad -4 \text {Re}\int _{\mathbb {T}^2} \Delta w(t) \nabla Y_\varepsilon \cdot \nabla \bar{w}(t) e^{-2Y_\varepsilon } \\&\quad +4 \sum _{i=1}^2 \int _{\mathbb {T}^2} (|\partial _i w(t)|^2) (\partial _i Y_\varepsilon )^2 e^{-2Y_\varepsilon }\\&\quad +8 \text {Re}\int _{\mathbb {T}^2} \partial _1 Y_\varepsilon \partial _2 Y_\varepsilon \partial _1 w(t) \partial _2 \bar{w}(t) e^{-2Y_\varepsilon } \\&\quad +2 \text {Re}\int _{\mathbb {T}^2} w(t) :|\nabla Y_\varepsilon |^2: \nabla \bar{w}(t) \cdot \nabla (e^{-2Y_\varepsilon }) \\&\quad +2 \text {Re}\int _{\mathbb {T}^2} \Delta w(t) \bar{w}(t) :|\nabla Y_\varepsilon |^2: e^{-2Y_\varepsilon }\\&\quad + \int _{\mathbb {T}^2} |w(t)|^2 (:|\nabla Y_\varepsilon |^2:)^2 e^{-2Y_\varepsilon }\,\,. \end{aligned} \end{aligned}$$
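
As a consistency check (not needed in the argument), note that if one formally sets \(Y_\varepsilon \equiv 0\) and \(:|\nabla Y_\varepsilon |^2:\,\equiv 0\), then all the terms but the first one vanish and the kinetic energy reduces to the usual one,

$$\begin{aligned} {\mathcal F}_{\varepsilon }(w)[t]=\int _{\mathbb {T}^2} |\Delta w(t)|^2\,; \end{aligned}$$

all the remaining terms carry at least one factor \(\nabla Y_\varepsilon \) or \(:|\nabla Y_\varepsilon |^2:\), and they are indeed shown to be of lower order in Proposition 4.4 below.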

The potential energy is defined by

$$\begin{aligned} \begin{aligned} {\mathcal G}_{\varepsilon }(w)[t]&= - \int _{\mathbb {T}^2} |\nabla w(t)|^2|w(t)|^p e^{-(p+2)Y_\varepsilon }\\&\quad -2 \text {Re}\int _{\mathbb {T}^2} w(t) \nabla (|w(t)|^p) \cdot \nabla \bar{w}(t) e^{-(p+2)Y_\varepsilon } \\&\quad +\frac{p}{4} \int _{\mathbb {T}^2} |\nabla (|w(t)|^2)|^2|w(t)|^{p-2} e^{-(p+2)Y_\varepsilon } \\&\quad +\frac{2 }{p+2} \int _{\mathbb {T}^2} |w(t)|^{p+2} :|\nabla Y_\varepsilon |^2: e^{-(p+2)Y_\varepsilon }\\&\quad +2p\text {Re}\int _{\mathbb {T}^2} w(t) |w(t)|^p \nabla Y_{\varepsilon } \cdot \nabla \bar{w}(t) e^{-(p+2)Y_{\varepsilon }}\,. \end{aligned} \end{aligned}$$

Finally, the lack of exact conservation is measured by the functional

$$\begin{aligned} \mathcal H_{\varepsilon }(w)[t]&= -\int _{\mathbb {T}^2} |\nabla w(t)|^2 \partial _t (|w(t)|^p) e^{-(p+2)Y_\varepsilon } \\&\quad -2\text {Re}\int _{\mathbb {T}^2} \partial _t w(t)\nabla (|w(t)|^p) \cdot \nabla \bar{w} (t) e^{-(p+2)Y_\varepsilon } \\&\quad -\frac{p}{4} \int _{\mathbb {T}^2} |\nabla (|w(t)|^2)|^2\partial _t (|w(t)|^{p-2}) e^{-(p+2)Y_\varepsilon } \\&\quad +2p\text {Re}\int _{\mathbb {T}^2}\partial _t (w(t) |w(t)|^p) \nabla Y_\varepsilon \cdot \nabla \bar{w}(t) e^{-(p+2)Y_\varepsilon }\, . \end{aligned}$$

Remark 4.2

Notice that in the linear case (namely (4.1) with \(\lambda =0\)) we get the following exact conservation law:

$$\begin{aligned} \frac{d}{dt} {\mathcal F}_{\varepsilon }(v_\varepsilon )=0. \end{aligned}$$

The proof of Proposition 4.1 will be presented in the last section of the paper. Next, we estimate \({\mathcal H}_\varepsilon (v_\varepsilon )\) and the lower order terms in the energy \({\mathcal E}_\varepsilon (v_\varepsilon )\). They will play a crucial role in order to get the key \(H^2\) a-priori bound for \(v_\varepsilon \). In the sequel we shall assume that \(v_\varepsilon \) solves (4.1) with \(\lambda \le 0\) and \(p\in [2,3]\). In particular we are allowed to use Proposition 3.1 in order to control a.s. \(\Vert v_\varepsilon (t,x)\Vert _{H^1}\) uniformly w.r.t. \(\varepsilon \) and t.

Proposition 4.3

Let \(\Sigma \subset \Omega \) be the event of full probability, obtained in Proposition 3.1. Then there exists a random variable \(C(\omega )\) finite on \(\Sigma \) such that for every \(\varepsilon \in (0,\frac{1}{2})\):

$$\begin{aligned} |{\mathcal H}_\varepsilon (v_\varepsilon )|< C(\omega ) \Big ( \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2 \ln ^\frac{p-1}{2} \big (2+ \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}\big ) + |\ln \varepsilon |^8 \Big ). \end{aligned}$$

Proof

By using the Hölder inequality, the Leibniz rule and the diamagnetic inequality \(|\partial _t |u||\le |\partial _t u|\) we get that the first three terms in \({\mathcal H}_\varepsilon (v_\varepsilon )\) can be estimated by:

$$\begin{aligned} \int _{\mathbb {T}^2} |\partial _t v_\varepsilon ||\nabla v_\varepsilon |^2 |v_\varepsilon |^{p-1} e^{-(p+2)Y_\varepsilon } \le \Vert \partial _t v_\varepsilon \Vert _{L^2(e^{-(p+2)Y_\varepsilon })} \Vert \nabla v_\varepsilon \Vert _{L^4({e^{-(p+2)Y_\varepsilon })}}^2 \Vert v_\varepsilon \Vert _{L^\infty }^{p-1} \end{aligned}$$

which by the Brezis-Gallouët inequality (see [1]) can be estimated as follows:

$$\begin{aligned} \dots&\lesssim \Vert \partial _t v_\varepsilon \Vert _{L^2(e^{-(p+2)Y_\varepsilon })} \Vert \nabla v_\varepsilon \Vert _{L^4(e^{-(p+2)Y_\varepsilon })}^2 \Vert v_\varepsilon \Vert _{H^1}^{p-1}\\&\quad \times \ln ^\frac{p-1}{2} (2+ \Vert v_\varepsilon \Vert _{L^2} + \Vert \Delta v_\varepsilon \Vert _{L^2}) \end{aligned}$$

and by using the equation solved by \(v_\varepsilon (t,x)\):

$$\begin{aligned} \dots&\lesssim \Vert \Delta v_\varepsilon \Vert _{L^2(e^{-(p+2)Y_\varepsilon })} \Vert \nabla v_\varepsilon \Vert _{L^4(e^{-(p+2)Y_\varepsilon })}^2 \Vert v_\varepsilon \Vert _{H^1}^{p-1} \ln ^\frac{p-1}{2} (2+ \Vert v_\varepsilon \Vert _{L^2} + \Vert \Delta v_\varepsilon \Vert _{L^2}) \\&\quad +\Vert \nabla v_\varepsilon \cdot \nabla Y_\varepsilon \Vert _{L^2({e^{-(p+2)Y_\varepsilon })}} \Vert \nabla v_\varepsilon \Vert _{L^4(e^{-(p+2)Y_\varepsilon }) }^2 \Vert v_\varepsilon \Vert _{H^1}^{p-1} \ln ^\frac{p-1}{2} (2+ \Vert v_\varepsilon \Vert _{L^2} + \Vert \Delta v_\varepsilon \Vert _{L^2}) \\&\quad +\Vert v_\varepsilon :|\nabla Y_\varepsilon |^2: \Vert _{L^2(e^{-(p+2)Y_\varepsilon })} \Vert \nabla v_\varepsilon \Vert _{L^4(e^{-(p+2)Y_\varepsilon })}^2 \Vert v_\varepsilon \Vert _{H^1}^{p-1} \ln ^\frac{p-1}{2} (2+ \Vert v_\varepsilon \Vert _{L^2} + \Vert \Delta v_\varepsilon \Vert _{L^2})\\&\quad +\Vert e^{-pY_\varepsilon }v_\varepsilon |v_\varepsilon |^p\Vert _{L^2(e^{-(p+2)Y_\varepsilon })} \Vert \nabla v_\varepsilon \Vert _{L^4(e^{-(p+2)Y_\varepsilon })}^2 \Vert v_\varepsilon \Vert _{H^1}^{p-1} \ln ^\frac{p-1}{2} (2+ \Vert v_\varepsilon \Vert _{L^2} + \Vert \Delta v_\varepsilon \Vert _{L^2}) \\&=I+II+III+IV. \end{aligned}$$

Next we recall a family of estimates that will be useful to control I, II, III and IV. We shall also use without any further comment Propositions 2.1 and 3.1. We have the Gagliardo-Nirenberg type inequality

$$\begin{aligned} \Vert \nabla u\Vert _{L^4}^2\le C \Vert \Delta u \Vert _{L^2}\Vert \nabla u\Vert _{L^2} \,. \end{aligned}$$
(4.2)

Indeed, using the Sobolev embedding \(H^{\frac{1}{2}}\subset L^4\), we can write

$$\begin{aligned} \Vert \nabla u\Vert _{L^4}^2 \le C \Vert \nabla u \Vert ^2_{H^{\frac{1}{2}}}\le C \Vert \nabla u\Vert _{H^1}\Vert \nabla u\Vert _{L^2}. \end{aligned}$$

It remains to observe that

$$\begin{aligned} \Vert \nabla u\Vert _{H^1}\le C\Vert \Delta u\Vert _{L^2}. \end{aligned}$$
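
This last inequality can be checked for instance on the Fourier side: with the convention \(\Vert f\Vert _{H^1}^2=\sum _{n}(1+|n|^2)|\hat{f}(n)|^2\), and since the zero Fourier mode of \(\nabla u\) vanishes,

$$\begin{aligned} \Vert \nabla u\Vert _{H^1}^2=\sum _{n\in \mathbb {Z}^2,\, n\ne 0}(1+|n|^2)\,|n|^2\,|\hat{u}(n)|^2\le 2\sum _{n\in \mathbb {Z}^2,\, n\ne 0}|n|^4\,|\hat{u}(n)|^2=2\,\Vert \Delta u\Vert _{L^2}^2, \end{aligned}$$

where we used \(1+|n|^2\le 2|n|^2\) for \(n\ne 0\).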

Therefore we have (4.2). Now, using (4.2) we get:

$$\begin{aligned} \Vert \nabla v_\varepsilon \Vert _{L^4(e^{-(p+2)Y_\varepsilon })}^2< C(\omega ) \Vert \nabla v_\varepsilon \Vert _{L^4}^2 < C(\omega )\Vert \Delta v_\varepsilon \Vert _{L^2} \Vert \nabla v_\varepsilon \Vert _{L^2} \end{aligned}$$

and also

$$\begin{aligned}&\Vert \nabla v_\varepsilon \cdot \nabla Y_\varepsilon \Vert _{L^2(e^{-(p+2)Y_\varepsilon })}< C(\omega ) \Vert \nabla v_\varepsilon \Vert _{L^4}\Vert \nabla Y_\varepsilon \Vert _{L^4}\\&\quad < C(\omega )\Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \Vert \nabla v_\varepsilon \Vert _{L^2}^\frac{1}{2} \Vert \nabla Y_\varepsilon \Vert _{L^4}. \end{aligned}$$

Next notice that

$$\begin{aligned}&\Vert v_\varepsilon :|\nabla Y_\varepsilon |^2: \Vert _{L^2(e^{-(p+2)Y_\varepsilon })}< C(\omega ) \Vert v_\varepsilon \Vert _{L^4}\Vert :|\nabla Y_\varepsilon |^2: \Vert _{L^4} \\&\quad <C(\omega ) \Vert v_\varepsilon \Vert _{H^1} \Vert :|\nabla Y_\varepsilon |^2: \Vert _{L^4}\,, \end{aligned}$$

where we have used the Sobolev embedding. Again by the Sobolev embedding we get:

$$\begin{aligned} \Vert e^{-pY_\varepsilon } v_\varepsilon |v_\varepsilon |^p\Vert _{L^2(e^{-(p+2)Y_\varepsilon })}<C(\omega ) \Vert v_\varepsilon \Vert _{L^{2(p+1)}}^{p+1}<C(\omega ) \Vert v_\varepsilon \Vert _{H^1}^{p+1}. \end{aligned}$$

Finally notice that

$$\begin{aligned} \Vert \Delta v_\varepsilon \Vert _{L^2}\le \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2} \Vert e^{Y_\varepsilon }\Vert _{L^\infty } <C(\omega ) \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2} . \end{aligned}$$

Based on the estimates above we get:

$$\begin{aligned} I< C(\omega ) \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2 \ln ^\frac{p-1}{2} \big (2 + C(\omega )+C(\omega ) \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}\big )\,. \end{aligned}$$

and also

$$\begin{aligned} II&< C(\omega ) \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^\frac{3}{2} |\ln \varepsilon | \ln ^\frac{p-1}{2} (2 + C(\omega )+ C(\omega )\Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}) \\&< C(\omega )\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2 +|\ln \varepsilon |^4 \ln ^{2(p-1)} (2+ C(\omega )+C(\omega )\Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}) \\&< C(\omega ) (\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2+1) +|\ln \varepsilon |^8, \end{aligned}$$
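
Here the Young inequality enters as follows. In the second inequality it is used with exponents \((\frac{4}{3},4)\): schematically, with \(a=\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^{\frac{3}{2}}\) and \(b\) the remaining factor,

$$\begin{aligned} ab\le \frac{3}{4}\, a^{\frac{4}{3}}+\frac{1}{4}\, b^{4}. \end{aligned}$$

In the last inequality one uses \(|\ln \varepsilon |^4 \ln ^{2(p-1)}(2+x)\le \frac{1}{2}|\ln \varepsilon |^{8}+\frac{1}{2}\ln ^{4(p-1)}(2+x)\) together with \(\ln ^{4(p-1)}(2+x)\le C(1+x^{2})\), which holds since \(4(p-1)\le 8\) for \(p\le 3\).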

We conclude with the following estimates:

$$\begin{aligned} III&< C(\omega ) \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2} |\ln \varepsilon |^{2} \ln ^\frac{p-1}{2} (2+ C(\omega )+ C(\omega ) \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}) \\&< C(\omega ) (\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2 + 1) +|\ln \varepsilon |^8 \, \end{aligned}$$

and

$$\begin{aligned} IV&< C(\omega ) \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2} \ln ^\frac{p-1}{2} (2+ C(\omega )+C(\omega )\Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}) \\&< C(\omega ) (\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2 +1). \end{aligned}$$

Summarizing we can control the first three terms in \({\mathcal H}_\varepsilon (v_\varepsilon )\). Concerning the last term in the expression of \({\mathcal H}_\varepsilon (v_\varepsilon )\) we can estimate it as follows:

$$\begin{aligned}&\int _{\mathbb {T}^2} |\partial _t v_\varepsilon | |v_\varepsilon |^p |\nabla Y_\varepsilon | |\nabla v_\varepsilon | e^{-(p+2)Y_\varepsilon }\\&\quad \le \Vert e^{-(p+2)Y_\varepsilon }\Vert _{L^\infty } \Vert v_\varepsilon \Vert _{L^{8p}}^p \Vert \partial _t v_\varepsilon \Vert _{L^2} \Vert \nabla Y_\varepsilon \Vert _{L^8} \Vert \nabla v_\varepsilon \Vert _{L^4} \\&\quad <C(\omega ) |\ln \varepsilon | \Vert \partial _t v_\varepsilon \Vert _{L^2} \Vert \nabla v_\varepsilon \Vert _{L^4} \end{aligned}$$

where we have used Propositions 2.1 and 3.1 in conjunction with the Sobolev embedding to control \(\Vert v_\varepsilon \Vert _{L^{8p}}\). Hence by the Gagliardo-Nirenberg inequality (4.2) and by using the equation solved by \(v_\varepsilon \) we can continue as follows

$$\begin{aligned} \dots&< C(\omega ) |\ln \varepsilon | \Vert \Delta v_\varepsilon \Vert _{L^2} \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \Vert \nabla v_\varepsilon \Vert _{L^2}^\frac{1}{2}\\&\quad +C(\omega ) |\ln \varepsilon | \Vert \nabla v_\varepsilon \cdot \nabla Y_\varepsilon \Vert _{L^2} \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \Vert \nabla v_\varepsilon \Vert _{L^2}^\frac{1}{2}\\&\quad +C(\omega ) |\ln \varepsilon | \Vert v_\varepsilon :|\nabla Y_\varepsilon |^2: \Vert _{L^2} \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \Vert \nabla v_\varepsilon \Vert _{L^2}^\frac{1}{2} \\&\quad +C(\omega ) |\ln \varepsilon | \Vert e^{-pY_\varepsilon }v_\varepsilon |v_\varepsilon |^p\Vert _{L^2} \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \Vert \nabla v_\varepsilon \Vert _{L^2}^\frac{1}{2}. \end{aligned}$$

and by the Sobolev embedding

$$\begin{aligned} \dots&< C(\omega ) |\ln \varepsilon | \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{3}{2} +C(\omega ) |\ln \varepsilon | \Vert \nabla v_\varepsilon \Vert _{L^4} \Vert \nabla Y_\varepsilon \Vert _{L^4} \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \\&\quad +C(\omega ) |\ln \varepsilon | \Vert v_\varepsilon \Vert _{L^4} \Vert :|\nabla Y_\varepsilon |^2: \Vert _{L^4} \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} +C(\omega ) |\ln \varepsilon | \Vert v_\varepsilon |v_\varepsilon |^p\Vert _{L^2} \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \\&< C(\omega ) |\ln \varepsilon | \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{3}{2} +C(\omega ) |\ln \varepsilon |^2 \Vert \Delta v_\varepsilon \Vert _{L^2} +C(\omega ) |\ln \varepsilon |^3 \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \\&\quad +C(\omega ) |\ln \varepsilon | \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2}. \end{aligned}$$

The conclusion is now straightforward: each of the four terms above is of the form \(C(\omega )|\ln \varepsilon |^{j}\Vert \Delta v_\varepsilon \Vert _{L^2}^{\theta }\) with \(j\le 3\) and \(\theta \le \frac{3}{2}<2\), so it can be absorbed by the Young inequality exactly as in the estimates of II and III; see the display below. \(\square \)
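
For instance, for the first term (the other three are handled in the same way, with random constants allowed to change from line to line):

$$\begin{aligned} C(\omega ) |\ln \varepsilon |\, \Vert \Delta v_\varepsilon \Vert _{L^2}^{\frac{3}{2}}\le \Vert \Delta v_\varepsilon \Vert _{L^2}^{2}+C(\omega )^{4}|\ln \varepsilon |^{4}\le C(\omega )\big (\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^{2}+1\big )+|\ln \varepsilon |^{8}, \end{aligned}$$

where in the last step we used \(\Vert \Delta v_\varepsilon \Vert _{L^2}\le C(\omega )\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}\) and \(C(\omega )^4|\ln \varepsilon |^4\le C(\omega )+|\ln \varepsilon |^8\), the latter by a further application of the Young inequality.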

Proposition 4.4

Let \(\Sigma \subset \Omega \) be the event of full probability, obtained in Proposition 3.1. For every \(\delta >0\) and \(\omega \in \Sigma \) there exists a finite constant \(C(\omega , \delta )>0\) such that for every \(\varepsilon \in (0,\frac{1}{2})\):

$$\begin{aligned} \Big |{\mathcal F}_{\varepsilon }(v_\varepsilon )-\int _{\mathbb {T}^2} |\Delta v_\varepsilon |^2 e^{-2Y_\varepsilon }\Big |< \delta \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}^2 + C(\omega , \delta ) |\ln \varepsilon |^4 \end{aligned}$$
(4.3)

and

$$\begin{aligned} |{\mathcal G}_{\varepsilon }(v_\varepsilon )|< \delta \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}^2 +C(\omega , \delta ) |\ln \varepsilon |^4. \end{aligned}$$
(4.4)

Proof

We estimate the terms involved in the expression \({\mathcal F}_{\varepsilon }(v_\varepsilon )-\int _{\mathbb {T}^2} |\Delta v_\varepsilon |^2 e^{-2Y_\varepsilon }\). Since the arguments are quite similar to the ones used in the proof of Proposition 4.3, we skip some details. Using Propositions 2.1 and 3.1, we can write

$$\begin{aligned} \Big | \int _{\mathbb {T}^2} \Delta v_\varepsilon \nabla Y_\varepsilon \cdot \nabla \bar{v}_\varepsilon e^{-2Y_\varepsilon }\Big |&\le \Vert \Delta v_\varepsilon \Vert _{L^2} \Vert \nabla Y_\varepsilon \Vert _{L^4} \Vert \nabla v_\varepsilon \Vert _{L^4} \Vert e^{-2Y_\varepsilon }\Vert _{L^\infty } \\&< C(\omega ) \Vert \Delta v_\varepsilon \Vert _{L^2} |\ln \varepsilon |\, \Vert \Delta v_\varepsilon \Vert _{L^2}^\frac{1}{2} \Vert \nabla v_\varepsilon \Vert _{L^2}^\frac{1}{2}< C(\omega ) \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}^\frac{3}{2} |\ln \varepsilon | \\&< \delta \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}^2 + C(\omega , \delta ) |\ln \varepsilon |^4. \end{aligned}$$

Next notice that the third and fourth terms in the energy \({\mathcal F}_{\varepsilon }\) can be estimated by

$$\begin{aligned}&\Vert \nabla v_\varepsilon \Vert _{L^4}^2 \Vert \nabla Y_\varepsilon \Vert _{L^4}^2 \Vert e^{-2Y_\varepsilon }\Vert _{L^\infty } \\&\quad< C(\omega )|\ln \varepsilon |^2 \Vert \Delta v_\varepsilon \Vert _{L^2} \Vert \nabla v_\varepsilon \Vert _{L^2}< C(\omega ) |\ln \varepsilon |^2 \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2} \\&\quad < \delta \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}^2 + C(\omega , \delta ) |\ln \varepsilon |^4. \end{aligned}$$

By similar arguments and Sobolev embedding we get:

$$\begin{aligned} \Big | \int _{\mathbb {T}^2} v_\varepsilon :|\nabla Y_\varepsilon |^2: \nabla \bar{v}_\varepsilon \cdot \nabla (e^{-2Y_\varepsilon })\Big |&\le 2 \Vert v_\varepsilon \Vert _{L^8} \Vert :|\nabla Y_\varepsilon |^2:\Vert _{L^8} \Vert \nabla v_\varepsilon \Vert _{L^2} \Vert \nabla Y_\varepsilon \Vert _{L^4} \Vert e^{-2Y_\varepsilon }\Vert _{L^\infty } \\&< C(\omega ) |\ln \varepsilon |^{3}. \end{aligned}$$

Next we estimate the other term to be controlled:

$$\begin{aligned} \Big | \int _{\mathbb {T}^2} \Delta v_\varepsilon \bar{v}_\varepsilon :|\nabla Y_\varepsilon |^2: e^{-2Y_\varepsilon }\Big |&\le \Vert \Delta v_\varepsilon \Vert _{L^2} \Vert :|\nabla Y_\varepsilon |^2:\Vert _{L^4} \Vert v_\varepsilon \Vert _{L^4} \Vert e^{-2Y_\varepsilon }\Vert _{L^\infty } \\&< C(\omega ) \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2} |\ln \varepsilon |^2 < \delta \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}^2 + C(\omega , \delta ) |\ln \varepsilon |^4\,, \end{aligned}$$

where we have used the Sobolev embedding. We conclude the proof of (4.3) by the following estimates:

$$\begin{aligned} \Big | \int _{\mathbb {T}^2} |v_\varepsilon |^2 (:|\nabla Y_\varepsilon |^2:)^2 e^{-2Y_\varepsilon }\Big | \le \Vert v_\varepsilon \Vert _{L^4}^2 \Vert :|\nabla Y_\varepsilon |^2:\Vert _{L^4}^2 \Vert e^{-2Y_\varepsilon }\Vert _{L^\infty } < C(\omega ) |\ln \varepsilon |^4 \,, \end{aligned}$$

where we have used again the Sobolev embedding. Next, we prove (4.4). The first, second and third terms in the definition of \({\mathcal G}_\varepsilon (v_\varepsilon )\) can be estimated essentially by the same argument. Let us focus on the first one:

$$\begin{aligned}&\int _{\mathbb {T}^2} |\nabla v_\varepsilon |^2|v_\varepsilon |^p {e^{-(p+2)Y_\varepsilon }} \le \Vert \nabla v_\varepsilon \Vert _{L^4} ^2 \Vert v_\varepsilon \Vert _{L^{2p}} ^p \Vert e^{-(p+2)Y_\varepsilon }\Vert _{L^\infty } \\&\quad<C(\omega )\Vert \Delta v_\varepsilon \Vert _{L^2} \Vert \nabla v_\varepsilon \Vert _{L^2} \Vert v_\varepsilon \Vert _{H^1} ^p < C(\omega ) \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}\,, \end{aligned}$$

where we have used again the Sobolev embedding, the Gagliardo-Nirenberg inequality and Propositions 2.1 and 3.1; the contribution of this term to (4.4) then follows from the Young inequality, since \(C(\omega ) \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}\le \delta \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2+C(\omega ,\delta )\). Concerning the fourth term in the definition of \({\mathcal G}_\varepsilon (v_\varepsilon )\) we get by the Hölder inequality and the Sobolev embedding

$$\begin{aligned}&\int _{\mathbb {T}^2} |v_\varepsilon |^{p+2} :|\nabla Y_\varepsilon |^2: e^{-(p+2)Y_\varepsilon } \\&\quad \le \Vert :|\nabla Y_\varepsilon |^2:\Vert _{L^4} \Vert v_\varepsilon \Vert ^{p+2}_{L^{\frac{4}{3}(p+2)}} \Vert e^{-(p+2)Y_\varepsilon }\Vert _{L^\infty } <C(\omega ) |\ln \varepsilon |^2 \end{aligned}$$

where we have used again Propositions 2.1 and 3.1. Finally we focus on the last term in the definition of \({\mathcal G}_\varepsilon (v_\varepsilon )\) that can be estimated as follows

$$\begin{aligned}&\int _{\mathbb {T}^2} |v_\varepsilon |^{p+1} |\nabla Y_{\varepsilon }| |\nabla v_\varepsilon | e^{-(p+2)Y_{\varepsilon }} \\&\quad \le \Vert v_\varepsilon \Vert _{L^{4(p+1)}}^{p+1}\Vert \nabla Y_{\varepsilon }\Vert _{L^4} \Vert \nabla v_\varepsilon \Vert _{L^2} \Vert e^{-(p+2)Y_{\varepsilon }}\Vert _{L^\infty } <C(\omega ) |\ln \varepsilon | \end{aligned}$$

where we have used Propositions 2.1 and 3.1. \(\square \)

As already mentioned in the introduction, we closely follow the approach of [4] in the proof of Theorem 1.2; the main novelty is the following \(H^2\) a-priori bound, which we extend to the regime of nonlinearities \(2\le p\le 3\). We shall now focus on the proof of the following proposition (to be compared with Proposition 4.2 in [4]), which is the most important result of this section.

Proposition 4.5

Let \(\Sigma \subset \Omega \) be the event of full probability obtained in Proposition 3.1, and let \(T>0\) be fixed. Then there exists a random variable \(C(\omega )>0\), finite for every \(\omega \in \Sigma \), such that for every \(\varepsilon \in (0,\frac{1}{2})\),

$$\begin{aligned} \sup _{t\in [-T,T]}\Vert v_{\varepsilon }(t,x)\Vert _{H^2}< |\ln \varepsilon |^{C(\omega )}. \end{aligned}$$

Proof of Proposition 4.5

We only consider positive times t. The case \(t<0\) can be treated similarly. We shall prove the following estimate

$$\begin{aligned} \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon (t)\Vert _{L^2}^2\le \Big (C(\omega ) (1+ |\ln \varepsilon |^4 + T|\ln \varepsilon |^8)\Big )^{e^{C(\omega ) t}}, \quad \forall t\in [0,T] \end{aligned}$$

for a suitable random constant which is finite a.s.; the conclusion then follows from

$$\begin{aligned} \Vert \Delta v_\varepsilon (t)\Vert _{L^2}^2\le \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon (t)\Vert _{L^2}^2 \Vert e^{2 Y_\varepsilon }\Vert _{L^\infty }<C(\omega ) \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon (t)\Vert _{L^2}^2. \end{aligned}$$

By Proposition 4.1 after integration in time and by using Propositions 4.3 and 4.4 (where we choose \(\delta >0\) small in such a way that we can absorb on the l.h.s. the factor \(\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon (t)\Vert _{L^2}^2\)) we can write:

$$\begin{aligned}&\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon (t)\Vert _{L^2}^2 \nonumber \\&\quad \le C(\omega ) \int _0^t \big [ \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2 \ln ^\frac{p-1}{2} (2+ \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}) + |\ln \varepsilon |^8 \big ]\, ds \nonumber \\&\qquad + C(\omega ) |\ln \varepsilon |^4+{\mathcal E}_\varepsilon (v_0)\,. \end{aligned}$$
(4.5)

Notice also that by Proposition 4.4 one can show the following bound for every \(\omega \) belonging to the event given in Proposition 3.1:

$$\begin{aligned} |{\mathcal E}_\varepsilon (v_0)|< C(\omega )(1 + |\ln \varepsilon |^4)\,. \end{aligned}$$

Hence, by recalling that \(\frac{p-1}{2}\le 1\), we deduce from (4.5) the following bound

$$\begin{aligned} \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon (t)\Vert _{L^2}^2&\le C(\omega ) \int _0^t \big [ \Vert e^{-Y_\varepsilon }\Delta v_\varepsilon \Vert _{L^2}^2 \ln (2+ \Vert e^{-Y_\varepsilon } \Delta v_\varepsilon \Vert _{L^2}) \big ] ds\\&\quad +C(\omega )(1 + |\ln \varepsilon |^4 +T |\ln \varepsilon |^8), \quad \forall t\in (0,T). \end{aligned}$$

We can apply Proposition 3.2 and the conclusion follows. \(\square \)
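
Explicitly, one may apply Proposition 3.2 with the choices

$$\begin{aligned} f(t)=\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon (t)\Vert _{L^2}^2,\quad A=C(\omega )(1+|\ln \varepsilon |^4+T|\ln \varepsilon |^8),\quad B=C(\omega ),\quad C=2, \end{aligned}$$

with \(A, B\) taken larger than 1, after noticing that \(\ln (2+\sqrt{f})\le \frac{\ln 3}{\ln 2}\ln (2+f)\) for every \(f\ge 0\), so that the logarithmic factor in the integrand can be replaced by \(\ln (2+f)\) at the price of enlarging \(B\).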

5 Proof of Theorem 1.2

5.1 Convergence of the approximate solutions

Proposition 5.1

Let \(T>0\) be fixed and \(v_\varepsilon (t,x, \omega )\) be as in Theorem 1.2. Then there exists \( v(t,x, \omega )\in {\mathcal C}(\mathbb {R};H^\gamma )\) such that

$$\begin{aligned} \sup _{t\in [-T, T]} \Vert v_\varepsilon (t,x, \omega ) - v(t,x, \omega )\Vert _{H^\gamma } \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad a.s. \quad w.r.t. \quad \omega . \end{aligned}$$

Proof

We shall only consider positive times, the analysis for negative times being similar. Let us fix \(T>0\). Set

$$\begin{aligned} r(t,x)=v_{\varepsilon _1}(t,x)-v_{\varepsilon _2}(t,x), \quad \varepsilon _2\ge \varepsilon _1\,. \end{aligned}$$

Then the equation solved by r is the following one:

$$\begin{aligned} i \partial _t r&= \Delta r - 2\nabla r \cdot \nabla Y_{\varepsilon _1} + r :|\nabla Y_{\varepsilon _1} |^2: \\&\quad - 2\nabla v_{\varepsilon _2} \cdot \nabla (Y_{\varepsilon _1} - Y_{\varepsilon _2}) + v_{\varepsilon _2} (:|\nabla Y_{\varepsilon _1} |^2:-:|\nabla Y_{\varepsilon _2} |^2:) \\&\quad + \lambda e^{-pY_{\varepsilon _1}} r |v_{\varepsilon _1}|^p - \lambda e^{-pY_{\varepsilon _1}} v_{\varepsilon _2} (|v_{\varepsilon _2}|^p -|v_{\varepsilon _1}|^p)+\lambda v_{\varepsilon _2}|v_{\varepsilon _2}|^p ( e^{-pY_{\varepsilon _1}} -e^{-pY_{\varepsilon _2}} ). \end{aligned}$$

We multiply the equation by \(e^{-2Y_{\varepsilon _1}(x)}\bar{r}(t,x)\), integrate by parts and take the imaginary part; then we get

$$\begin{aligned} \frac{1}{2} \frac{d}{dt} \int _{\mathbb {T}^2} e^{-2Y_{\varepsilon _1}} |r(t)|^2&= - 2 \text {Im}\int _{\mathbb {T}^2} e^{-2Y_{\varepsilon _1}} \bar{r} (t) \nabla v_{\varepsilon _2} \cdot \nabla (Y_{\varepsilon _1} - Y_{\varepsilon _2}) \nonumber \\&\quad + \text {Im}\int _{\mathbb {T}^2} e^{-2Y_{\varepsilon _1}} \bar{r}(t) v_{\varepsilon _2}(t) (:|\nabla Y_{\varepsilon _1} |^2:-:|\nabla Y_{\varepsilon _2} |^2:) \nonumber \\&\quad +\lambda \text {Im}\int _{\mathbb {T}^2} e^{-(p+2)Y_{\varepsilon _1}} |r(t)|^2 |v_{\varepsilon _1}(t)|^p \nonumber \\&\quad -\lambda \text {Im}\int _{\mathbb {T}^2} e^{-(p+2)Y_{\varepsilon _1}} \bar{r}(t)v_{\varepsilon _2}(t) (|v_{\varepsilon _2}(t)|^p -|v_{\varepsilon _1}(t)|^p) \nonumber \\&\quad +\lambda \text {Im}\int _{\mathbb {T}^2} e^{-2Y_{\varepsilon _1}}\bar{r}(t)v_{\varepsilon _2}(t)|v_{\varepsilon _2}(t)|^p ( e^{-pY_{\varepsilon _1}} -e^{-pY_{\varepsilon _2}}) \nonumber \\&=I+II+III+IV+V. \end{aligned}$$
(5.1)

From now on we choose \(\omega \in \Sigma \) where the event \(\Sigma \) is defined as in Proposition 3.1. We estimate I by using duality and Lemma 2.2 in [9] (see also the proof of Proposition 3.1):

$$\begin{aligned} |I|&\lesssim \Vert \nabla (Y_{\varepsilon _1} - Y_{\varepsilon _2})\Vert _{W^{-s,q}} \Vert e^{-2Y_{\varepsilon _1}} \bar{r} (t) \nabla v_{\varepsilon _2} (t)\Vert _{W^{s,q'}} \\&\lesssim \Vert \nabla (Y_{\varepsilon _1} - Y_{\varepsilon _2})\Vert _{W^{-s,q}} \times \big (\Vert e^{-2Y_{\varepsilon _1}}\Vert _{W^{s,q_1}} \Vert r(t)\Vert _{L^{q_2}} \Vert \nabla v_{\varepsilon _2}(t)\Vert _{L^{q_3}}\\&\quad +\Vert e^{-2Y_{\varepsilon _1}}\Vert _{L^{q_1}} \Vert r(t)\Vert _{W^{s,q_2}} \Vert \nabla v_{\varepsilon _2}(t)\Vert _{L^{q_3}}+\Vert e^{-2Y_{\varepsilon _1}}\Vert _{L^{q_1}} \Vert r(t)\Vert _{L^{q_2}} \Vert \nabla v_{\varepsilon _2}(t)\Vert _{W^{s,q_3}}\big ) \end{aligned}$$

where \(\frac{1}{q'}=\frac{1}{q_1}+\frac{1}{q_2}+\frac{1}{q_3}\) and \(s\in (0,1), q\in (1, \infty )\). Next notice that by choosing \(s\in (0,1)\) small enough, by using Sobolev embedding and by recalling Propositions 2.1, 3.1 and 4.5 we get

$$\begin{aligned} |I|< C(\omega ) \varepsilon _2^\kappa \Vert v_{\varepsilon _2}(t)\Vert _{H^2} \Vert r(t)\Vert _{H^1} < C(\omega ) \varepsilon _2^\kappa |\ln \varepsilon _2|^{C(\omega )}. \end{aligned}$$

By a similar argument we can estimate II as follows:

$$\begin{aligned} |II|&\lesssim \Vert :|\nabla Y_{\varepsilon _1}|^2: - :|\nabla Y_{\varepsilon _2}|^2:\Vert _{W^{-s,q}} \Vert e^{-2Y_{\varepsilon _1}} \bar{r} (t) v_{\varepsilon _2}(t) \Vert _{W^{s,q'}} \\&\lesssim \Vert :|\nabla Y_{\varepsilon _1}|^2: - :|\nabla Y_{\varepsilon _2}|^2:\Vert _{W^{-s,q}} \times \big (\Vert e^{-2Y_{\varepsilon _1}}\Vert _{W^{s,q_1}} \Vert r(t)\Vert _{L^{q_2}} \Vert v_{\varepsilon _2}(t)\Vert _{L^{q_3}}\\&\quad +\Vert e^{-2Y_{\varepsilon _1}}\Vert _{L^{q_1}} \Vert r(t)\Vert _{W^{s,q_2}} \Vert v_{\varepsilon _2}(t)\Vert _{L^{q_3}}+\Vert e^{-2Y_{\varepsilon _1}}\Vert _{L^{q_1}} \Vert r(t)\Vert _{L^{q_2}} \Vert v_{\varepsilon _2}(t)\Vert _{W^{s,q_3}}\big ) \end{aligned}$$

and hence by using Sobolev embedding and by recalling Propositions 2.1, 3.1 and 4.5 we get for s small enough

$$\begin{aligned} |II|< C(\omega ) \varepsilon _2^\kappa \Vert v_{\varepsilon _2}(t)\Vert _{H^2}< C(\omega ) \varepsilon _2^\kappa |\ln \varepsilon _2|^{C(\omega )}. \end{aligned}$$

The estimate of the term III is rather classical and can be done by using the Brezis-Gallouët inequality (see [1]). More precisely we get:

$$\begin{aligned} |III|&\lesssim \Vert e^{-\frac{p+2}{2}Y_{\varepsilon _1}} r(t)\Vert _{L^2}^2 \Vert v_{\varepsilon _1}(t)\Vert _{L^\infty }^p \\&\lesssim \Vert e^{-\frac{p+2}{2} Y_{\varepsilon _1}} r(t)\Vert _{L^2}^2 \,\Vert v_{\varepsilon _1}(t)\Vert _{H^1}^p \ln ^\frac{p}{2} (2+ \Vert v_{\varepsilon _1}(t)\Vert _{H^2}) \\&< C(\omega ) \Vert e^{-\frac{p+2}{2} Y_{\varepsilon _1}} r(t)\Vert _{L^2}^2 \ln ^\frac{p}{2} (2+ \Vert v_{\varepsilon _1}(t)\Vert _{H^2}) \end{aligned}$$

where we have used at the last step Proposition 3.1. In order to control \(\Vert v_{\varepsilon _1}(t)\Vert _{H^2}\) we use Proposition 4.5 and we get

$$\begin{aligned} |III|&< C(\omega ) \Vert e^{-\frac{p+2}{2} Y_{\varepsilon _1}} r(t)\Vert _{L^2}^2 \ln ^\frac{p}{2} (|\ln \varepsilon _1|^{C(\omega )})\\&< C(\omega ) \Vert e^{-Y_{\varepsilon _1}} r(t)\Vert _{L^2}^2 \ln ^\frac{p}{2} (|\ln \varepsilon _1|^{C(\omega )}) \end{aligned}$$

Next, arguing as in the estimate of III, we get by combining Propositions 3.1 and 4.5

$$\begin{aligned} |IV|&\lesssim \Vert e^{-\frac{p+2}{2}Y_{\varepsilon _1}} r(t)\Vert _{L^2}^2 (\Vert v_{\varepsilon _1}(t)\Vert _{L^\infty }^{p-1} +\Vert v_{\varepsilon _2}(t)\Vert _{L^\infty }^{p-1})\Vert v_{\varepsilon _2}(t)\Vert _{L^\infty } \\&\lesssim \Vert e^{-\frac{p+2}{2}Y_{\varepsilon _1}} r(t)\Vert _{L^2}^2 \ln ^\frac{p}{2} (|\ln \varepsilon _1|^{C(\omega )}) <C(\omega ) \Vert e^{-Y_{\varepsilon _1}} r(t)\Vert _{L^2}^2 \ln ^\frac{p}{2} (|\ln \varepsilon _1|^{C(\omega )}). \end{aligned}$$

Finally, by the Hölder inequality and Propositions 2.1 and 3.1, we estimate

$$\begin{aligned} |V|&\lesssim \Vert e^{-2Y_{\varepsilon _1}}\Vert _{L^\infty } \Vert \bar{r}(t)\Vert _{L^2} \Vert v_{\varepsilon _2}(t)\Vert _{L^{2(p+1)}}^{p+1} \Vert e^{-pY_{\varepsilon _1}} -e^{-pY_{\varepsilon _2}}\Vert _{L^\infty } \\&< C(\omega )\varepsilon _2^\kappa . \end{aligned}$$

Summarizing we obtain

$$\begin{aligned} \frac{1}{2} \frac{d}{dt} \int _{\mathbb {T}^2} e^{-2Y_{\varepsilon _1}} |r(t)|^2&< C(\omega ) \varepsilon _2^\kappa |\ln \varepsilon _2|^{C(\omega )} \nonumber \\&\quad + C(\omega ) \ln ^\frac{p}{2} (|\ln \varepsilon _1|^{C(\omega )}) \int _{\mathbb {T}^2} e^{-2Y_{\varepsilon _1}} |r(t)|^2 \,. \end{aligned}$$
(5.2)
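
Explicitly, in view of (5.2), Proposition 3.3 may be applied with (note that \(r(0)=0\), since \(v_{\varepsilon _1}\) and \(v_{\varepsilon _2}\) share the initial datum \(v_0\))

$$\begin{aligned} f(t)=\int _{\mathbb {T}^2} e^{-2Y_{\varepsilon _1}} |r(t)|^2,\quad A=2\, C(\omega ) \varepsilon _2^\kappa |\ln \varepsilon _2|^{C(\omega )},\quad B=2\, C(\omega ) \ln ^\frac{p}{2} \big (|\ln \varepsilon _1|^{C(\omega )}\big ), \end{aligned}$$

which gives \(\sup _{t\in (0,T)} f(t)\le \frac{A}{B}\, e^{BT}\); this is the bound used in the two steps below.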

Next we split the proof in two steps.

First step: \(v_{2^{-k}}(t,x,\omega )\overset{k\rightarrow \infty }{\longrightarrow }v(t,x,\omega )\) for every \(\omega \in \Sigma \).

We consider \(r=v_{2^{-(k+1)}} - v_{2^{-k}}\). Then by combining Proposition 3.3 and (5.2) (where we choose \(\varepsilon _1 =2^{-(k+1)}\) and \( \varepsilon _2=2^{-k}\)) we get:

$$\begin{aligned}&\sup _{t\in (0,T)}\int _{\mathbb {T}^2} e^{-2Y_{2^{-(k+1)}}} |v_{2^{-(k+1)}}(t)- v_{2^{-k}}(t)|^2 \\&\quad < \frac{ C(\omega ) 2^{-k\kappa } |\ln 2^{-k}|^{C(\omega )} e^{C(\omega ) \ln ^\frac{p}{2} (|\ln 2^{-(k+1)}|^{C(\omega )}) T}}{\ln ^\frac{p}{2} (|\ln 2^{-(k+1)}|^{C(\omega )})} \end{aligned}$$

By recalling that for every \(\omega \in \Sigma \) we have \(\sup _k \Vert e^{2Y_{2^{-(k+1)}}}\Vert _{L^\infty }<C(\omega )<\infty \) we deduce that the bound above implies

$$\begin{aligned}&\sup _{t\in (0,T)}\int _{\mathbb {T}^2} |v_{2^{-(k+1)}}(t)- v_{2^{-k}}(t)|^2 < \frac{ C(\omega ) 2^{-k\kappa } |\ln 2^{-k}|^{C(\omega )} e^{C(\omega ) \ln ^\frac{p}{2} (|\ln 2^{-(k+1)}|^{C(\omega )}) T}}{\ln ^\frac{p}{2} (|\ln 2^{-(k+1)}|^{C(\omega )})}. \end{aligned}$$

By combining this estimate with interpolation and with Proposition 4.5 we deduce for every \(\gamma \in [0,2)\) the following bound

$$\begin{aligned}&\sup _{t\in (0,T)}\Vert v_{2^{-(k+1)}}(t)- v_{2^{-k}}(t)\Vert _{H^\gamma } \\&\quad < \frac{ C(\omega ) 2^{-k \tilde{\kappa }} |\ln 2^{-k}|^{C(\omega )} e^{C(\omega ) \ln ^\frac{p}{2} (|\ln 2^{-(k+1)}|^{C(\omega )}) T}}{\ln ^\frac{\tilde{p}}{2} (|\ln 2^{-(k+1)}|^{C(\omega )})}. \end{aligned}$$

where \(\tilde{\kappa }, \tilde{p}>0\) are constants that depend on the interpolation inequality. It is easy to check that

$$\begin{aligned} \sum _k \frac{ C(\omega ) 2^{-k \tilde{\kappa }} |\ln 2^{-k}|^{C(\omega )} e^{C(\omega ) \ln ^\frac{p}{2} (|\ln 2^{-(k+1)}|^{C(\omega )}) T}}{\ln ^\frac{\tilde{p}}{2} (|\ln 2^{-(k+1)}|^{C(\omega )})} <\infty \end{aligned}$$

and therefore \((v_{2^{-k}})\) is a Cauchy sequence in \({\mathcal C}([0,T];H^\gamma )\) and we conclude.
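
The summability can be checked directly: since \(|\ln 2^{-k}|=k\ln 2\), and since the denominator is larger than 1 for k large, the general term of the series is eventually bounded by

$$\begin{aligned} C(\omega ,T)\, 2^{-k \tilde{\kappa }}\, k^{C(\omega )}\, e^{C(\omega ,T)\, (\ln k)^{\frac{p}{2}}}\lesssim 2^{-\frac{k \tilde{\kappa }}{2}}, \end{aligned}$$

because \((\ln k)^{p/2}=o(k)\) as \(k\rightarrow \infty \) (recall \(p\le 3\)).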

Second step: \(v_{\varepsilon }(t,x,\omega )\overset{\varepsilon \rightarrow 0}{\longrightarrow }v(t,x,\omega )\) for every \(\omega \in \Sigma \).

For every \(\varepsilon \in (2^{-(k+1)}, 2^{-k})\) we introduce \(r=v_{\varepsilon }- v_{2^{-k}}\). Then by combining (5.2) (where we choose \(\varepsilon _1=\varepsilon \) and \(\varepsilon _2=2^{-k}\)) with Proposition 3.3 and arguing as above we get

$$\begin{aligned} \sup _{t\in (0,T)}\Vert v_{\varepsilon }(t)- v_{2^{-k}}(t)\Vert _{H^\gamma } < \frac{ C(\omega ) 2^{-k \tilde{\kappa }} |\ln 2^{-k}|^{C(\omega )} e^{C(\omega ) \ln ^\frac{p}{2} (|\ln \varepsilon |^{C(\omega )}) T}}{\ln ^\frac{\tilde{p}}{2} (|\ln \varepsilon |^{C(\omega )})} \end{aligned}$$

and hence (recall that \(\varepsilon \in (2^{-(k+1)}, 2^{-k})\))

$$\begin{aligned} \sup _{\varepsilon \in (2^{-(k+1)}, 2^{-k})}\Vert v_{\varepsilon }(t)- v_{2^{-k}}(t)\Vert _{H^\gamma }\overset{k\rightarrow \infty }{\longrightarrow }0. \end{aligned}$$

We conclude by recalling the first step. \(\square \)

5.2 Uniqueness for (1.17)

It follows from the analysis of the previous subsection that \(v_{\varepsilon }\) converges almost surely to a solution of (1.17). We next prove the uniqueness of this solution.

Proposition 5.2

Let \(\Sigma \subset \Omega \) be the full measure event defined in Proposition 3.1 and \(T>0\). For every \(\omega \in \Sigma \) there exists at most one solution \( v(t,x)\in {\mathcal C}([0,T];H^\gamma )\) to (1.17) for \(\gamma >1\).

Proof

Assume that \( v_1(t,x)\) and \( v_2(t,x)\) are two solutions, and consider the difference \(r(t,x)= v_1(t,x)- v_2(t,x)\), which solves

$$\begin{aligned} {\left\{ \begin{array}{ll} i\partial _t r = \Delta r - 2\nabla r \cdot \nabla Y + r :|\nabla Y |^2: +\lambda e^{-pY}( v_1| v_1|^p- v_2| v_2|^p),\\ r(0,x)=0. \end{array}\right. } \end{aligned}$$

Next we multiply the equation by \(e^{-2Y_\varepsilon (x)} \bar{r}(t,x)\), where \(\varepsilon \in (0,1)\), we integrate by parts and take the imaginary part; finally we get:

$$\begin{aligned} \frac{1}{2} \frac{d}{dt} \int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |r(t)|^2&= 2 \text {Im}\int _{\mathbb {T}^2} e^{-2Y_\varepsilon } \bar{r}(t)\nabla r(t) \cdot \nabla (Y_\varepsilon - Y) \\&\quad +\lambda \text {Im}\int _{\mathbb {T}^2} e^{-2Y_\varepsilon -pY} \bar{r}(t) ( v_1(t)| v_1(t)|^p- v_2(t)| v_2(t)|^p)\\&=I+II. \end{aligned}$$

By the Sobolev embedding \(H^\gamma \subset L^\infty \) we get

$$\begin{aligned} II&\le \Vert e^{-pY}\Vert _{L^\infty } \Vert e^{-Y_\varepsilon }r(t)\Vert _{L^2}^2 \sup _{t\in [0,T]} (\Vert v_1(t)\Vert _{H^\gamma }^p+\Vert v_2(t)\Vert _{H^\gamma }^p)\\&< C(\omega ) \Vert e^{-Y_\varepsilon }r(t)\Vert _{L^2}^2. \end{aligned}$$

For the term I we get, by duality and Lemma 2.2 in [9] (see the proof of Proposition 3.1 for more details), the following estimate

$$\begin{aligned} |I|&\le \Vert \nabla (Y_\varepsilon - Y)\Vert _{W^{-s,q}} \Vert e^{-2Y_\varepsilon } \bar{r}(t)\nabla r(t)\Vert _{W^{s,q'}} \\&< C(\omega ) \varepsilon ^\kappa (\Vert e^{-2Y_\varepsilon }\Vert _{L^{q_1}} \Vert \bar{r}(t)\Vert _{L^{q_2}}\Vert \nabla r(t)\Vert _{W^{s,2}} +\Vert e^{-2Y_\varepsilon }\Vert _{W^{s,q_1}} \Vert \bar{r}(t)\Vert _{L^{q_2}}\Vert \nabla r(t)\Vert _{L^{2}} \\&\quad +\Vert e^{-2Y_\varepsilon }\Vert _{L^{q_1}} \Vert \bar{r}(t)\Vert _{W^{s,q_2}} \Vert \nabla r(t)\Vert _{L^{2}} ) \end{aligned}$$

where \(s\in (0,1), q\in (1, \infty )\), \(\frac{1}{q'}=\frac{1}{q_1}+ \frac{1}{q_2}+\frac{1}{2}\) and we have used Proposition 2.1 at the second step. By Sobolev embedding, provided that we choose s small enough, and Proposition 2.1 one can show that

$$\begin{aligned} |I|< C(\omega ) \varepsilon ^\kappa \sup _{t\in [0,T]} (\Vert v_1(t)\Vert _{H^\gamma }^{2}+\Vert v_2(t)\Vert _{H^\gamma }^{2}) < C(\omega ) \varepsilon ^\kappa . \end{aligned}$$

Summarizing we get

$$\begin{aligned} \frac{d}{dt} \int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |r(t)|^2<C(\omega ) (\int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |r(t)|^2 +\varepsilon ^\kappa ), \quad r(0)=0. \end{aligned}$$

We deduce by Proposition 3.3 that

$$\begin{aligned} \int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |r(t)|^2<C(\omega ) \varepsilon ^\kappa e^{C(\omega )t} \end{aligned}$$

and hence by passing to the limit \(\varepsilon \rightarrow 0\) we deduce \(\int _{\mathbb {T}^2} e^{-2Y} |r(t)|^2=0\). \(\square \)
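
More precisely, the last passage to the limit uses Proposition 2.1: for every \(t\in [0,T]\),

$$\begin{aligned} \int _{\mathbb {T}^2} e^{-2Y} |r(t)|^2\le \int _{\mathbb {T}^2} e^{-2Y_\varepsilon } |r(t)|^2+\Vert e^{-2Y}-e^{-2Y_\varepsilon }\Vert _{L^\infty } \Vert r(t)\Vert _{L^2}^2< C(\omega ) \varepsilon ^\kappa e^{C(\omega )t}+o(1), \end{aligned}$$

where \(\Vert r(t)\Vert _{L^2}\) is finite since \(v_1, v_2\in {\mathcal C}([0,T];H^\gamma )\), and \(Y_\varepsilon \rightarrow Y\) in \(L^\infty \) as \(\varepsilon \rightarrow 0\); letting \(\varepsilon \rightarrow 0\) we obtain \(\int _{\mathbb {T}^2} e^{-2Y} |r(t)|^2=0\), hence \(r(t)=0\) since \(e^{-2Y}>0\).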

6 Proof of Theorem 1.1

The proof of (1.7) follows by combining the transformation (1.12) with Theorem 1.2. From now on we shall denote by \(\Sigma \subset \Omega \) the event of full probability given by the intersection of the ones defined in Theorem 1.2 and in Proposition 3.1. In order to prove (1.8) we first show

$$\begin{aligned} \sup _{t\in [-T,T]} \big \Vert e^{Y_\varepsilon (x,\omega )} |u_\varepsilon (t,x,\omega )| - | v (t,x, \omega )| \big \Vert _ {{H^\gamma (\mathbb {T}^2)}\cap L^\infty (\mathbb {T}^2)} \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad \forall \, \omega \in \Sigma . \end{aligned}$$
(6.1)

Notice that from (1.7) and the Sobolev embedding, we get

$$\begin{aligned} \sup _{t\in [-T,T]} \Vert e^{-iC_\varepsilon t} e^{Y_\varepsilon } u_\varepsilon (t)- v(t)\Vert _{L^2\cap L^\infty } \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad \forall \,\omega \in \Sigma \end{aligned}$$

and hence by the triangle inequality in \(\mathbb {C}\),

$$\begin{aligned} \sup _{t\in [-T,T]} \Vert e^{Y_\varepsilon } |u_\varepsilon (t)|-| v(t)|\Vert _{L^2\cap L^\infty } \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad \forall \,\omega \in \Sigma . \end{aligned}$$
(6.2)

Next we prove

$$\begin{aligned} \sup _{t\in [-T,T]} \Vert e^{Y_{\varepsilon }} |u_{\varepsilon }(t)|-| v(t)|\Vert _{H^\gamma } \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad \forall \, \omega \in \Sigma , \quad \gamma \in (0,1). \end{aligned}$$
(6.3)

Since

$$\begin{aligned} \sup _{t\in [-T,T]} \Vert v_{\varepsilon }(t) - v(t)\Vert _{H^\gamma }\overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad \forall \,\omega \in \Sigma , \quad \gamma \in [0,2) \end{aligned}$$
(6.4)

we get in particular

$$\begin{aligned} \sup _{t\in [-T,T]} \Vert v(t)\Vert _{H^1} <\infty , \quad \forall \, \omega \in \Sigma \end{aligned}$$
(6.5)

and hence by the diamagnetic inequality

$$\begin{aligned} \sup _{t\in [-T,T]} \Vert | v(t)|\Vert _{H^1} <\infty , \quad \forall \, \omega \in \Sigma . \end{aligned}$$
(6.6)

On the other hand by (6.4) we have

$$\begin{aligned} \sup _{\begin{array}{c} \varepsilon \in (0,1), t\in [-T,T] \end{array}} \Vert v_{\varepsilon }(t)\Vert _{H^\gamma } <\infty , \quad \forall \, \omega \in \Sigma , \quad \gamma \in [0, 2). \end{aligned}$$
(6.7)

Next notice that \(e^{Y_{\varepsilon }} |u_{\varepsilon }(t)|=|v_{\varepsilon }(t)|\), hence by the diamagnetic inequality \(\Vert e^{Y_{\varepsilon }} |u_{\varepsilon }(t)|\Vert _{H^1}\le \Vert v_{\varepsilon }(t)\Vert _{H^1}\). Summarizing,

$$\begin{aligned} \sup _{\begin{array}{c} \varepsilon \in (0,1), t\in [-T,T] \end{array}} \big (\max \big \{ \Vert e^{Y_{\varepsilon }} |u_{\varepsilon }(t)|\Vert _{H^1}, \Vert | v(t)|\Vert _{H^1}\big \}\big )<\infty , \quad \quad \forall \, \omega \in \Sigma . \end{aligned}$$
(6.8)

By interpolation between the uniform bound (6.8) and (6.2) we get (6.3).
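
The interpolation used here is the standard one on the Sobolev scale: for \(\gamma \in (0,1)\),

$$\begin{aligned} \Vert f\Vert _{H^\gamma }\le \Vert f\Vert _{L^2}^{1-\gamma }\, \Vert f\Vert _{H^1}^{\gamma }, \end{aligned}$$

applied to \(f=e^{Y_\varepsilon } |u_\varepsilon (t)| - | v (t)|\): the \(L^2\) factor tends to zero by (6.2), while the \(H^1\) factor stays bounded uniformly in \(\varepsilon \) and t by (6.8).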

Finally we prove (1.8). We show first the following fact

$$\begin{aligned} \sup _{t\in [-T,T]} \Vert e^{Y_\varepsilon } |u_\varepsilon (t)| - e^{Y} |u_\varepsilon (t)|\Vert _{H^\gamma \cap L^\infty } \overset{\varepsilon \rightarrow 0 }{\longrightarrow }0, \quad \forall \, \omega \in \Sigma , \quad \gamma \in (0,1)\ \end{aligned}$$
(6.9)

which in turn implies by (6.1) the following convergence

$$\begin{aligned} \sup _{t\in [-T,T]} \big \Vert e^{Y} |u_\varepsilon (t)| - | v(t)| \big \Vert _ {{H^\gamma }\cap L^\infty } \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad \forall \,\omega \in \Sigma , \quad \gamma \in (0,1). \end{aligned}$$
(6.10)

We shall establish the following equivalent form of (6.9):

$$\begin{aligned} \Vert e^{Y}(1-e^{Y_{\varepsilon }-Y}) |u_{\varepsilon }(t)|\big \Vert _{{H^\gamma }\cap L^\infty } \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad \forall \,\omega \in \Sigma , \quad \gamma \in (0,1). \end{aligned}$$
(6.11)

We first focus on the case \(\gamma =0\). In this case we get (6.11) by combining the following facts: we have the convergence \(Y_\varepsilon (x) \overset{\varepsilon \rightarrow 0}{\longrightarrow }Y(x)\) for every \(\omega \in \Sigma \) in the \(L^\infty \) topology (see Proposition 2.1); we have the following bound

$$\begin{aligned} \Vert u_{\varepsilon }(t)\Vert _{L^2}=\Vert e^{-Y_{\varepsilon }} v_{\varepsilon }(t)\Vert _{L^2} \le \Vert e^{-Y_{\varepsilon }}\Vert _{L^\infty } \Vert v_{\varepsilon }(t)\Vert _{L^2} \end{aligned}$$

and hence \(\Vert u_{\varepsilon }(t)\Vert _{L^2}\) is bounded for every \(\omega \in \Sigma \) by (6.7) and Proposition 2.1.

In order to establish (6.11) for \(\gamma \in (0,1)\) it is sufficient to interpolate between the convergence for \(\gamma =0\) (already established above) and the uniform bound

$$\begin{aligned} \sup _{\begin{array}{c} \varepsilon \in (0,1), t\in [-T,T] \end{array}}\Vert e^{Y}(1-e^{Y_{\varepsilon }-Y}) |u_{\varepsilon }(t)|\big \Vert _{{H^\gamma }}<\infty , \quad \forall \, \omega \in \Sigma . \end{aligned}$$

In order to establish this bound it is sufficient to notice that for every \(\omega \in \Sigma \)

$$\begin{aligned} \sup _{\varepsilon \in (0,1)} \{\Vert e^{Y}\Vert _{H^\gamma \cap L^\infty }, \Vert 1-e^{Y_{\varepsilon }-Y} \Vert _{H^\gamma \cap L^\infty }, \Vert |u_{\varepsilon }(t)|\Vert _{H^\gamma \cap L^\infty }\}<\infty \end{aligned}$$
(6.12)

and to recall that \(H^\gamma \cap L^\infty \) is an algebra. We recall that the boundedness of \(\Vert |u_{\varepsilon }(t)|\Vert _{H^\gamma \cap L^\infty }\) follows on one hand by combining (6.7) with

$$\begin{aligned} \Vert |u_{\varepsilon }(t)|\Vert _{L^\infty }=\Vert e^{-Y_{\varepsilon }} |v_{\varepsilon }(t)|\Vert _{L^\infty }\le \Vert e^{-Y_{\varepsilon }}\Vert _{L^\infty } \Vert v_{\varepsilon }(t)\Vert _{L^\infty } \lesssim \Vert v_{\varepsilon }(t)\Vert _{H^s}, \end{aligned}$$

where \(s>1\). On the other hand we have the following computation:

$$\begin{aligned} \Vert |u_{\varepsilon }(t)|\Vert _{H^\gamma }&= \Vert e^{-Y_{\varepsilon }(x)} |v_{\varepsilon }(t)| \Vert _{H^\gamma } \\&\le \Vert e^{-Y_{\varepsilon }}\Vert _{L^\infty \cap H^\gamma } \Vert |v_{\varepsilon }(t)| \Vert _{L^\infty \cap H^\gamma } \lesssim \Vert |v_{\varepsilon }(t)| \Vert _{L^\infty \cap H^\gamma }, \quad \gamma \in [0, 1) \end{aligned}$$

where we have used Proposition 2.1. Hence we get the desired uniform bound, since by the diamagnetic inequality

$$\begin{aligned} \Vert |v_{\varepsilon }(t)| \Vert _{L^\infty \cap H^\gamma }\le \Vert |v_{\varepsilon }(t)| \Vert _{L^\infty \cap H^1} \lesssim \Vert v_{\varepsilon }(t)\Vert _{L^\infty \cap H^1} \end{aligned}$$

and we conclude by (6.7).

Let us now establish (1.8). Notice that by combining (6.2), (6.3) and (6.11) we have:

$$\begin{aligned} \sup _{t\in [-T,T]} \big \Vert e^{Y} |u_{\varepsilon }(t)| - | v(t)| \big \Vert _ {{H^\gamma }\cap L^\infty } \overset{\varepsilon \rightarrow 0}{\longrightarrow }0, \quad \forall \, \omega \in \Sigma ,\quad \gamma \in (0,1). \end{aligned}$$
(6.13)

Hence (1.8) in the case \(\gamma =0\), as well as the \(L^\infty \) convergence, follows from (6.13), since \(e^{-Y}\in L^\infty \) for every \(\omega \in \Sigma \) (see Proposition 2.1). To prove (1.8) in the general case \(\gamma \in (0,1)\) it is sufficient to interpolate between the case \(\gamma =0\) and the bound

$$\begin{aligned} \sup _\varepsilon \{\Vert e^{-Y}\Vert _{H^\gamma \cap L^\infty }, \Vert e^{Y} |u_{\varepsilon }(t)| - | v(t)| \Vert _{H^\gamma \cap L^\infty }\}<\infty , \quad \forall \, \omega \in \Sigma \end{aligned}$$
(6.14)

which in turn implies, thanks to the fact that \({H^\gamma \cap L^\infty }\) is an algebra, that the quantity \(\Vert |u_{\varepsilon }(t)| - e^{-Y}| v(t)| \Vert _{H^\gamma }\) is uniformly bounded for every \(\omega \in \Sigma \). The proof of (6.14) follows by combining the estimate (6.12), the bound \(\Vert v(t)\Vert _{L^\infty }\lesssim \Vert v(t)\Vert _{H^s}<C\) for \(s\in (1,2)\) (where we used (6.7) in the last inequality), the bound (6.6), and the properties of Y (see Proposition 2.1). This completes the proof of Theorem 1.1.

7 Proof of Proposition 4.1

In the sequel, we use the following simplified notation: \(v=v_\varepsilon (t,x)\), \(Y=Y_\varepsilon (x)\) and \({\mathcal E}={\mathcal E}_{\varepsilon }\). Moreover we denote by \((\cdot , \cdot )\) the \(L^2\) scalar product. We also drop the explicit dependence of the functions involved on the variables \((t,x)\), in order to make the computations more compact.

We aim to construct a suitable energy with the following structure; the guiding principle is that every term produced by differentiating the leading part in time should be rewritten, via the equation and integrations by parts, either as a total time derivative (incorporated into the remainder) or as a term carrying a factor \(\lambda \) (collected in \(\mathcal H_{\varepsilon }\)):

$$\begin{aligned} \mathcal {E}(v)=(\Delta v, \Delta v e^{-2Y})+ \mathrm{remainder}. \end{aligned}$$

By using the equation solved by v we have the following identity:

$$\begin{aligned} \frac{d}{dt} (\Delta v, \Delta v e^{-2Y})&= 2\text {Re}(\partial _t \Delta v, \Delta v e^{-2Y}) = 2\text {Re}(\partial _t \Delta v, i\partial _t v e^{-2Y} ) \nonumber \\&\quad + 2\text {Re}(\partial _t \Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y})\nonumber \\&\quad -2\text {Re}(\partial _t \Delta v, v :|\nabla Y|^2: e^{-2Y}) \nonumber \\&\quad -2\lambda \text {Re}(\partial _t \Delta v, v|v|^p e^{-(p+2)Y})\nonumber \\&=I+II+III+IV. \end{aligned}$$
(7.1)

Notice that

$$\begin{aligned} I&=-2\text {Re}(\partial _t \nabla v, i\partial _t \nabla v e^{-2Y} )-2\text {Re}(\partial _t \nabla v, i\partial _t v \nabla (e^{-2Y}) ) \nonumber \\&=-2 \text {Im}(\partial _t \nabla v, \partial _t v \nabla (e^{-2Y}) ). \end{aligned}$$
(7.2)
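
In the first equality of (7.2), the term without \(\nabla (e^{-2Y})\) drops out because the corresponding pairing is purely imaginary (\(e^{-2Y}\) being real valued):

$$\begin{aligned} \text {Re}\,(\partial _t \nabla v, i\partial _t \nabla v\, e^{-2Y})=\text {Re}\Big (\mp i\int _{\mathbb {T}^2} |\partial _t \nabla v|^2 e^{-2Y}\Big )=0, \end{aligned}$$

the sign depending on which argument of the scalar product is conjugated; the remaining term is then rewritten by means of \(\text {Re}(\mp iz)=\pm \text {Im}\, z\).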

Moreover we have

$$\begin{aligned} II&= 2\text {Re}(\partial _t \Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y}) \nonumber \\&=2\text {Re}(\Delta v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) + 2 \frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y}) \end{aligned}$$
(7.3)

and using again the equation

$$\begin{aligned} II&=2 \frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y})+2\text {Re}(i\partial _t v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad +4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad - 2\text {Re}(v:|\nabla Y|^2: , \partial _t \nabla v \cdot \nabla (e^{-2Y})) -2\lambda \text {Re}(e^{-pY}v|v|^p , \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&= 2\frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y})-2\text {Im}(\partial _t v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad +4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y}))- 2\text {Re}(v :|\nabla Y|^2: , \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad -2\lambda \text {Re}(e^{-pY} v|v|^p , \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&=2 \frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y})+2 \text {Im}(\partial _t \nabla v, \partial _t v \nabla (e^{-2Y})) \nonumber \\&\quad +4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y}))- 2\text {Re}(v :|\nabla Y|^2: , \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad -2\lambda \text {Re}(e^{-pY} v|v|^p , \partial _t \nabla v \cdot \nabla (e^{-2Y})) \end{aligned}$$
(7.4)

and hence by (7.2) we get

$$\begin{aligned} II&= 2 \frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y}) -I +4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \\&\quad - 2\text {Re}(v :|\nabla Y|^2: , \partial _t \nabla v \cdot \nabla (e^{-2Y})) -2\lambda \text {Re}(e^{-pY} v|v|^p , \partial _t \nabla v \cdot \nabla (e^{-2Y})) \\&= 2\frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y}) -I +4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \\&\quad -2 \frac{d}{dt} \text {Re}(v :|\nabla Y|^2: , \nabla v \cdot \nabla (e^{-2Y})) +2\text {Re}(\partial _tv:|\nabla Y|^2:, \nabla v \cdot \nabla (e^{-2Y})) \\&\quad -2\lambda \text {Re}(e^{-pY}v|v|^p , \partial _t \nabla v \cdot \nabla (e^{-2Y})). \end{aligned}$$

Summarizing we get from the previous chain of identities

$$\begin{aligned} I+II&= 2\text {Re}(\partial _tv:|\nabla Y|^2:, \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad + 2 \frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y}) +4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad - 2\frac{d}{dt} \text {Re}(v :|\nabla Y|^2: , \nabla v \cdot \nabla (e^{-2Y})) -2\lambda \text {Re}(e^{-pY} v|v|^p , \partial _t \nabla v \cdot \nabla (e^{-2Y})). \end{aligned}$$
(7.5)

On the other hand we can compute

$$\begin{aligned}&-2\lambda \text {Re}(e^{-pY} v|v|^p , \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad = 2\lambda \text {Re}(\nabla ( e^{-pY} v|v|^p) , \partial _t \nabla v e^{-2Y}) +2\lambda \text {Re}(e^{-pY} v|v|^p , \partial _t \Delta v e^{-2Y}) \nonumber \\&\quad = 2\lambda \text {Re}(\nabla ( e^{-pY}v|v|^p) , \partial _t \nabla v e^{-2Y}) -IV \nonumber \\&\quad = 2\lambda \text {Re}(e^{-pY} \nabla v|v|^p, \partial _t \nabla v e^{-2Y}) +2\lambda \text {Re}(e^{-pY}v\nabla (|v|^p), \partial _t \nabla v e^{-2Y}) \nonumber \\&+2\lambda \text {Re}(\nabla (e^{-pY}) v |v|^p, \partial _t \nabla v e^{-2Y}) -IV \end{aligned}$$
(7.6)

and hence

$$\begin{aligned} \dots&=\lambda \text {Re}(\partial _t (|\nabla v|^2)|v|^p, e^{-(p+2)Y}) \nonumber \\&\quad +2\lambda \text {Re}(v\nabla (|v|^p), \partial _t \nabla v e^{-(p+2)Y})+2\lambda \text {Re}(\nabla (e^{-pY})v |v|^p, \partial _t \nabla v e^{-2Y}) -IV \nonumber \\&=\lambda \text {Re}(\partial _t (|\nabla v|^2)|v|^p, e^{-(p+2)Y}) +2\lambda \frac{d}{dt} \text {Re}(v\nabla (|v|^p), \nabla v e^{-(p+2)Y})\nonumber \\&\quad -2\lambda \text {Re}(\partial _t v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) \nonumber \\&\quad -2\lambda \text {Re}(v\nabla \partial _t (|v|^p), \nabla v e^{-(p+2)Y}) +2\lambda \text {Re}(\nabla (e^{-pY})v |v|^p, \partial _t \nabla v e^{-2Y}) -IV \nonumber \\&=\lambda \frac{d}{dt} \text {Re}(|\nabla v|^2|v|^p, e^{-(p+2)Y}) -\lambda \text {Re}(|\nabla v|^2 \partial _t (|v|^p), e^{-(p+2)Y})\nonumber \\&\quad +2\lambda \frac{d}{dt} \text {Re}(v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) \nonumber \\&\quad -2\lambda \text {Re}(\partial _t v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) -\frac{\lambda p}{2} (\partial _t (\nabla (|v|^2)|v|^{p-2}), \nabla (|v|^2) e^{-(p+2)Y}) \nonumber \\&\quad +2\lambda \text {Re}(\nabla (e^{-pY})v |v|^p, \partial _t \nabla v e^{-2Y}) -IV\nonumber \\&=\lambda \frac{d}{dt} \text {Re}(|\nabla v|^2 |v|^p, e^{-(p+2)Y}) -\lambda \text {Re}(|\nabla v|^2 \partial _t (|v|^p), e^{-(p+2)Y})\nonumber \\&\quad +2\lambda \frac{d}{dt} \text {Re}(v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) \nonumber \\&\quad -2\lambda \text {Re}(\partial _t v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) -\frac{\lambda p}{2}(\partial _t \nabla (|v|^{2})|v|^{p-2}, \nabla (|v|^2) e^{-(p+2)Y}) \nonumber \\&\quad -\frac{\lambda p}{2} (\nabla (|v|^2) \partial _t (|v|^{p-2}), \nabla (|v|^2) e^{-(p+2)Y}){+}2\lambda \text {Re}(\nabla (e^{-pY})v |v|^p, \partial _t \nabla v e^{-2Y}){-}IV \nonumber \\&=\lambda \frac{d}{dt} \text {Re}(|\nabla v|^2|v|^p, e^{-(p+2)Y}) -\lambda \text {Re}(|\nabla v|^2 \partial _t (|v|^p), e^{-(p+2)Y})\nonumber \\&\quad +2\lambda \frac{d}{dt} \text {Re}(v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) \nonumber \\&\quad -2\lambda \text {Re}(\partial _t v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) -\frac{\lambda p}{4} (\partial _t (|\nabla (|v|^2)|^2)|v|^{p-2}, e^{-(p+2)Y}) \nonumber \\&\quad - \frac{\lambda p}{2} (\nabla (|v|^2) \partial _t (|v|^{p-2}), \nabla (|v|^2) e^{-(p+2)Y}){+}2\lambda \text {Re}(\nabla (e^{-pY})v |v|^p, \partial _t \nabla v e^{-2Y}){-}IV \nonumber \\&=\lambda \frac{d}{dt} \text {Re}(|\nabla v|^2|v|^p, e^{-(p+2)Y}) -\lambda \text {Re}(|\nabla v|^2 \partial _t (|v|^p), e^{-(p+2)Y})\nonumber \\&\quad +2\lambda \frac{d}{dt} \text {Re}(v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) \nonumber \\&\quad -2\lambda \text {Re}(\partial _t v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) -\frac{\lambda p}{4} \frac{d}{dt}(|\nabla (|v|^2)|^2|v|^{p-2}, e^{-(p+2)Y}) \nonumber \\&\quad +\frac{\lambda p}{4} (|\nabla (|v|^2)|^2\partial _t (|v|^{p-2}), e^{-(p+2)Y}) -\frac{\lambda p}{2}(\nabla (|v|^2) \partial _t (|v|^{p-2}), \nabla (|v|^2) e^{-(p+2)Y}) \nonumber \\&\quad +2\lambda \text {Re}(\nabla (e^{-pY})v |v|^p, \partial _t \nabla v e^{-2Y})-IV\,. \end{aligned}$$
(7.7)

By combining (7.5), (7.6) and (7.7) we get

$$\begin{aligned} I+II+IV&=2\text {Re}(\partial _tv:|\nabla Y|^2:, \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad + 2 \frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y}) +4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad -2\frac{d}{dt} \text {Re}(v :|\nabla Y|^2: , \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad +\lambda \frac{d}{dt} \text {Re}(|\nabla v|^2|v|^p, e^{-(p+2)Y}) -\lambda \text {Re}(|\nabla v|^2 \partial _t (|v|^p), e^{-(p+2)Y})\nonumber \\ {}&\quad +2\lambda \frac{d}{dt} \text {Re}(v\nabla (|v|^p), \nabla v e^{-(p+2)Y}){-}2\lambda \text {Re}(\partial _t v\nabla (|v|^p), \nabla v e^{-(p+2)Y})\nonumber \\&\quad -\frac{\lambda p}{4} \frac{d}{dt}(|\nabla (|v|^2)|^2|v|^{p-2}, e^{-(p+2)Y}) \nonumber \\&\quad +\frac{\lambda p}{4} (|\nabla (|v|^2)|^2\partial _t (|v|^{p-2}), e^{-(p+2)Y}) \nonumber \\&\quad -\frac{\lambda p}{2}(\nabla (|v|^2) \partial _t (|v|^{p-2}), \nabla (|v|^2) e^{-(p+2)Y}) \nonumber \\&\quad +2\lambda \text {Re}(\nabla (e^{-pY})v |v|^p, \partial _t \nabla v e^{-2Y}). \end{aligned}$$
(7.8)

Next notice that

$$\begin{aligned} III= -2\frac{d}{dt}\text {Re}(\Delta v, v :|\nabla Y|^2: e^{-2Y})+ 2 \text {Re}(\Delta v, \partial _t v :|\nabla Y|^2: e^{-2Y}). \end{aligned}$$
(7.9)

Summarizing we get

$$\begin{aligned}&I+II+III+IV \nonumber \\&\quad =2\text {Re}(\partial _tv:|\nabla Y|^2:, \nabla v \cdot \nabla (e^{-2Y})) + 2 \frac{d}{dt}\text {Re}(\Delta v, 2 \nabla Y \cdot \nabla ve^{-2Y}) \nonumber \\&\qquad + 4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\qquad -2\frac{d}{dt} \text {Re}(v :|\nabla Y|^2: , \nabla v \cdot \nabla (e^{-2Y})) +\lambda \frac{d}{dt} \text {Re}(|\nabla v|^2|v|^p, e^{-(p+2)Y}) \nonumber \\&\qquad -\lambda \text {Re}(|\nabla v|^2 \partial _t (|v|^p), e^{-(p+2)Y}) +2\lambda \frac{d}{dt} \text {Re}(v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) \nonumber \\&\qquad -2\lambda \text {Re}(\partial _t v\nabla (|v|^p), \nabla v e^{-(p+2)Y}) -\frac{\lambda p}{4} \frac{d}{dt}(|\nabla (|v|^2)|^2|v|^{p-2}, e^{-(p+2)Y}) \nonumber \\&\qquad +\frac{\lambda p}{4} (|\nabla (|v|^2)|^2\partial _t (|v|^{p-2}), e^{-(p+2)Y}) -\frac{\lambda p}{2}(\nabla (|v|^2) \partial _t (|v|^{p-2}), \nabla (|v|^2) e^{-(p+2)Y}) \nonumber \\&\qquad +2\lambda \frac{d}{dt} \text {Re}(\nabla (e^{-pY})v |v|^p, \nabla v e^{-2Y}) -2\lambda \text {Re}(\nabla (e^{-pY})\partial _t (v |v|^p), \nabla v e^{-2Y}) \nonumber \\&\qquad -2\frac{d}{dt}\text {Re}(\Delta v, v :|\nabla Y|^2: e^{-2Y})+ 2 \text {Re}(\Delta v, \partial _t v :|\nabla Y|^2: e^{-2Y}). \end{aligned}$$
(7.10)

Next, by using the equation, we compute the first and last terms on the r.h.s. of (7.10) as follows:

$$\begin{aligned}&2\text {Re}(\partial _tv:|\nabla Y|^2:, \nabla v \cdot \nabla (e^{-2Y})) + 2 \text {Re}(\Delta v, \partial _t v :|\nabla Y|^2: e^{-2Y}) \nonumber \\&\quad = 2\text {Re}(\partial _tv:|\nabla Y|^2:, \nabla v \cdot \nabla (e^{-2Y}))+ 2 \text {Re}(2\nabla v \cdot \nabla Y, \partial _t v :|\nabla Y|^2: e^{-2Y}) \nonumber \\&\qquad - 2 \text {Re}(v:|\nabla Y|^2:, \partial _t v :|\nabla Y|^2: e^{-2Y})- 2 \lambda \text {Re}( e^{-pY}v|v|^p, \partial _t v :|\nabla Y|^2: e^{-2Y}) \nonumber \\&\quad = -4\text {Re}(\partial _t v :|\nabla Y|^2:, \nabla v \cdot \nabla Y e^{-2Y})+ 4 \text {Re}(\nabla v\cdot \nabla Y, \partial _t v :|\nabla Y|^2: e^{-2Y}) \nonumber \\&\qquad - 2 \text {Re}(v:|\nabla Y|^2:, \partial _t v :|\nabla Y|^2: e^{-2Y})- 2 \lambda \text {Re}(v|v|^p, \partial _t v :|\nabla Y|^2: e^{-(p+2)Y}) \nonumber \\&\quad =-\frac{d}{dt} (|v|^2:|\nabla Y|^2:, :|\nabla Y|^2: e^{-2Y}) - \frac{2 \lambda }{p+2} \frac{d}{dt}\text {Re}(|v|^{p+2}, :|\nabla Y|^2: e^{-(p+2)Y}). \end{aligned}$$
(7.11)

Finally we show that the third term on the r.h.s. of (7.10) can be written as a total derivative with respect to the time variable:

$$\begin{aligned}&4\text {Re}(\nabla Y \cdot \nabla v, \partial _t \nabla v \cdot \nabla (e^{-2Y})) \nonumber \\&\quad =-8 \text {Re}\sum _{i,j=1}^2 \int _{\mathbb {T}^2} e^{-2Y} \partial _i Y \partial _i v \partial _t \partial _j \bar{v} \partial _j Y =-4 \text {Re}\sum _{i=1}^2 \int _{\mathbb {T}^2} e^{-2Y} \partial _t (|\partial _i v|^2) (\partial _i Y)^2 \nonumber \\&\qquad -8 \text {Re}\int _{\mathbb {T}^2} e^{-2Y} \partial _1 Y \partial _1 v \partial _t \partial _2 \bar{v} \partial _2 Y -8 \text {Re}\int _{\mathbb {T}^2} e^{-2Y} \partial _2 Y \partial _2 v \partial _t \partial _1 \bar{v} \partial _1 Y \nonumber \\&\quad =-4 \frac{d}{dt} \sum _{i=1}^2 \int _{\mathbb {T}^2} e^{-2Y} (|\partial _i v|^2) (\partial _i Y)^2 -8 \text {Re}\int _{\mathbb {T}^2} e^{-2Y} \partial _1 Y \partial _2 Y (\partial _1 v \partial _t \partial _2 \bar{v} +\partial _2 v \partial _t \partial _1 \bar{v}) \nonumber \\&\quad =-4 \frac{d}{dt} \sum _{i=1}^2 \int _{\mathbb {T}^2} e^{-2Y} (|\partial _i v|^2) (\partial _i Y)^2 -8 \text {Re}\int _{\mathbb {T}^2} e^{-2Y} \partial _1 Y \partial _2 Y \partial _t (\partial _1 v \partial _2 \bar{v}) \nonumber \\&\quad =-4 \frac{d}{dt} \sum _{i=1}^2 \int _{\mathbb {T}^2} e^{-2Y} (|\partial _i v|^2) (\partial _i Y)^2 -8 \frac{d}{dt} \text {Re}\int _{\mathbb {T}^2} e^{-2Y} \partial _1 Y \partial _2 Y \partial _1 v \partial _2 \bar{v}. \end{aligned}$$
(7.12)

We conclude the proof of Proposition 4.1 by combining (7.1), (7.10), (7.11) and (7.12).