1 Introduction

1.1 Model and main result

We study global probabilistic properties of the \({\mathcal {P}}(\phi )_2\) Euclidean quantum field theories on the two-dimensional unit torus \(\Omega =\mathbb {T}^2\). These objects are measures \(\nu ^{\mathcal {P}}\) on the space of distributions \(S'(\Omega )\) that are formally given by

$$\begin{aligned} \nu ^{\mathcal {P}}(d\phi ) \propto \exp \Big (- \int _\Omega {\mathcal {P}}(\phi (x)) dx\Big ) \nu ^\text {GFF}(d\phi ), \end{aligned}$$
(1.1)

where \({\mathcal {P}}\) is a polynomial of even degree with positive leading coefficient and \(\nu ^\text {GFF}\) is the law of the massive Gaussian free field, i.e. the mean zero Gaussian measure with covariance \((-\Delta + m^2)^{-1}\) for some arbitrary mass \(m>0\) that is fixed throughout this article. The most famous example is when \({\mathcal {P}}(\phi )\) is a quartic polynomial, in which case \(\nu ^{\mathcal {P}}\) is known as the \(\phi _2^4\) measure in finite volume with periodic boundary condition.

The origin of these measures lies in constructive quantum field theory, where they arise as Wick rotations of interacting bosonic quantum field theories in \(1+1\) Minkowski space-time, see for instance [38] and [33]. Their construction was first achieved in 2D by Nelson in [32], see also the books by Simon [37] and Glimm and Jaffe [24, Section 8]. Indeed, fields sampled from \(\nu ^\text {GFF}\) are almost surely distributions in a Sobolev space of negative regularity, and hence cannot be evaluated pointwise. Therefore, since there is no canonical definition of nonlinear functions of distributions, the density in (1.1) is ill-defined. This can be seen concretely if one imposes a small-scale cut-off. Then the cut-off measures are well-defined, but as the cut-off is removed, one encounters so-called ultraviolet divergences. It is well-known that a suitable renormalisation procedure is needed to remove these divergences.

In order to give meaning to (1.1), we view it as a limit of renormalised lattice approximations. For \(\epsilon >0\) let \(\Omega _\epsilon = \Omega \cap (\epsilon \mathbb {Z}^2)\) be the discretised unit torus and denote \(X_\epsilon = \mathbb {R}^{\Omega _\epsilon }= \{\varphi :\Omega _\epsilon \rightarrow \mathbb {R}\}\). We assume throughout that \(1/\epsilon \in \mathbb {N}\). Moreover, write \(\Delta ^\epsilon \) for the discrete Laplacian acting on functions \(f \in X_\epsilon \) by \(\Delta ^\epsilon f (x) = \epsilon ^{-2} \sum _{y\sim x} \big ( f(y) - f(x) \big )\), where \(y \sim x\) denotes that \(x,y \in \Omega _\epsilon \) are nearest neighbours. Let \(\nu ^{\text {GFF}_\epsilon }\) be the centred Gaussian measure on \(X_\epsilon \) with covariance \((-\Delta ^\epsilon + m^2)^{-1}\), i.e. the law of the massive discrete Gaussian free field on \(\Omega _\epsilon \). Note that as \(\epsilon \rightarrow 0\), we have for all \(x\in \Omega _\epsilon \)

$$\begin{aligned} c_\epsilon \equiv {{\,\textrm{Var}\,}}\big ( \Phi _x^{\text {GFF}_\epsilon } \big ) = \frac{1}{2\pi } \log \frac{1}{\epsilon } + O(1), \qquad \Phi ^{\text {GFF}_\epsilon } \sim \nu ^{\text {GFF}_\epsilon }. \end{aligned}$$
(1.2)

To compensate for this small-scale divergence we renormalise the polynomial \({\mathcal {P}}\) in (1.1) by interpreting it as Wick-ordered. For \(n\in \mathbb {N}\), we define the Wick powers

$$\begin{aligned} {:\,} \phi ^n(x) {:\,}_\epsilon = c_\epsilon ^{n/2} H_n(\phi (x)/\sqrt{c_\epsilon }), \end{aligned}$$

where \(H_n\) denotes the n-th Hermite polynomial and \(c_\epsilon \) is as in (1.2). For instance, when \(n=4\), we have

$$\begin{aligned} {:\,} \phi ^4(x) {:\,}_\epsilon = \phi ^4(x) - 6 c_\epsilon \phi ^2(x) + 3 c_\epsilon ^2. \end{aligned}$$

Note that the multiplication by \(c_\epsilon ^{n/2}\) ensures that the leading coefficient of the Wick power equals 1.
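The expansions above are easy to verify symbolically. The following is a minimal sketch (our own illustration, not part of the construction; the helper names are ours), using the probabilists' Hermite recurrence \(H_{n+1}(x) = x H_n(x) - n H_{n-1}(x)\):

```python
# Symbolic sketch: check that c^(n/2) H_n(phi/sqrt(c)) reproduces the Wick
# expansions above, with H_n the probabilists' Hermite polynomials.
import sympy as sp

phi, c = sp.symbols("phi c", positive=True)

def hermite_prob(n, x):
    # He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)
    h_prev, h = sp.Integer(1), x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, sp.expand(x*h - k*h_prev)
    return h

def wick_power(n):
    # :phi^n:_eps = c_eps^(n/2) H_n(phi / sqrt(c_eps))
    return sp.expand(c**sp.Rational(n, 2) * hermite_prob(n, phi/sp.sqrt(c)))

assert wick_power(2) == sp.expand(phi**2 - c)
assert wick_power(4) == sp.expand(phi**4 - 6*c*phi**2 + 3*c**2)
```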

It can be shown that as \(\epsilon \rightarrow 0\) the Wick ordered monomials for the Gaussian free field converge to well-defined random variables \({:\,} \Phi ^n {:\,}\) with values in the space of distributions \(S'(\Omega )\). Moreover, the collection \(({:\,} \Phi ^n {:\,})_{n\in \mathbb {N}}\) is orthogonal in \(L^2(\nu ^\text {GFF})\) and satisfies

$$\begin{aligned} \mathbb {E}[{:\,} \Phi ^n(x) {:\,} {:\,} \Phi ^n(y) {:\,}]= n! \, G^n(x-y), \qquad \Phi \sim \nu ^{\text {GFF}}, \end{aligned}$$

where \(G = (-\Delta + m^2)^{-1}\) is the Green function of the Laplacian on \(\Omega \). This gives the Wick powers the interpretation of powers of the Gaussian free field.

For general polynomials of the form \({\mathcal {P}}:\mathbb {R}\rightarrow \mathbb {R}, r\mapsto \sum _{k=1}^{N} a_k r^k\) of even degree \(N \in 2\mathbb {N}\), and coefficients satisfying \(a_k \in \mathbb {R}\), \(1 \leqslant k < N\), and \(a_N>0\), we define the Wick ordering

$$\begin{aligned} {:\,} {\mathcal {P}}(\phi (x)) {:\,}_\epsilon = \sum _{k=1}^{N} a_k {:\,} \phi ^k(x) {:\,}_\epsilon . \end{aligned}$$
(1.3)

When it is clear from the context or when there is no conceptual difference between the discrete Wick power and the continuum limit, we drop \(\epsilon \) from the notation. Note that, by a simple scaling argument, for the construction of the measures (1.1) we may assume without loss of generality that \({\mathcal {P}}\) has no constant term.

As for the monomials, the Wick ordered polynomials converge to well-defined random variables with values in a space of distributions as \(\epsilon \rightarrow 0\). However, as \(\epsilon \rightarrow 0\), one loses any uniform lower bound on \({:\,} {\mathcal {P}} {:\,}_\epsilon \). Indeed, due to the logarithmic divergence of \(c_\epsilon \) in (1.2), we have for \({:\,} {\mathcal {P}} {:\,}_\epsilon \) as in (1.3)

$$\begin{aligned} \inf _{x\in \Omega _\epsilon } {:\,} {\mathcal {P}} {:\,}_\epsilon \, \geqslant - O(|\log \epsilon |^{n}), \qquad \epsilon \rightarrow 0 \end{aligned}$$

for some \(n\in \mathbb {N}\). Therefore, the exponential integrability (and hence the construction of (1.1) as a limit) is not immediate. Nelson leveraged the polylogarithmic divergence in \(d=2\) to give a rigorous construction of the continuum object \(\nu ^{\mathcal {P}}\), which can be obtained as a weak limit of the regularised measures \(\nu ^{{\mathcal {P}}_\epsilon }\) on \(X_\epsilon \) defined by

$$\begin{aligned} \nu ^{{\mathcal {P}}_\epsilon }(d \phi ) \propto \exp \Big ( - \epsilon ^2 \sum _{x\in \Omega _\epsilon } {:\,} {\mathcal {P}}(\phi (x)) {:\,}_\epsilon \Big ) \nu ^{\text {GFF}_\epsilon }(d\phi ). \end{aligned}$$
(1.4)

Other constructions also exist, for example via stochastic quantisation techniques [40], the Boué–Dupuis stochastic control representation [5], and martingale methods [28].
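To make (1.4) concrete, here is a minimal numerical sketch (our own illustration, in the spirit of the stochastic quantisation approach just mentioned, not a construction used in this paper) that approximately samples \(\nu ^{{\mathcal {P}}_\epsilon }\) for \({\mathcal {P}}(\phi ) = g\phi ^4\) by unadjusted Langevin dynamics on a coarse lattice; the step size, run length and coupling \(g\) are illustrative choices:

```python
# Numerical sketch (illustrative parameters): unadjusted Langevin dynamics for
# the lattice measure (1.4) with P(phi) = g*phi^4, Wick-ordered with the exact
# lattice variance c_eps.  The per-site noise has quadratic variation t/eps^2.
import numpy as np

rng = np.random.default_rng(0)
N, eps, m2, g = 16, 1/16, 1.0, 1.0

k = 2*np.pi*np.fft.fftfreq(N, d=eps)
kx, ky = np.meshgrid(k, k, indexing="ij")
lam = (2 - 2*np.cos(eps*kx))/eps**2 + (2 - 2*np.cos(eps*ky))/eps**2 + m2
c_eps = np.sum(1.0/lam)          # Var(Phi^{GFF_eps}_x), cf. (1.2)

def lap(phi):                    # discrete Laplacian Delta^eps on the torus
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
            + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4*phi)/eps**2

phi, h = np.zeros((N, N)), 2e-4
for _ in range(20000):
    # drift = -(gradient of the action); d/dphi :g phi^4:_eps = 4g :phi^3:_eps
    drift = lap(phi) - m2*phi - 4*g*(phi**3 - 3*c_eps*phi)
    phi += h*drift + np.sqrt(2*h)/eps * rng.standard_normal((N, N))

print("sample variance:", phi.var(), " free-field variance c_eps:", c_eps)
```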

Fields sampled from \(\nu ^{\mathcal {P}}\) have similar small scale behaviour to the Gaussian free field on \(\Omega \). Indeed, the two measures are mutually absolutely continuous and, as such, \(\nu ^{\mathcal {P}}\) is supported on a space of distributions, not functions. One of the central themes of this article is to quantify the difference in short-distance behaviour between these two measures, and to see whether the similarities on small scales lead to similarities on a global scale, e.g. universal behaviour of the maxima.

Finally, let us remark that similar Wick renormalisation procedures have been successfully used to construct other continuum non-Gaussian Euclidean field theories in 2D, see for instance [7] for the sine-Gordon field and [4] for the sinh-Gordon field.

Our main result is a probabilistic coupling between the \({\mathcal {P}}(\phi )_2\) field and the Gaussian free field at all scales, which allows us to write the field of interest as a sum of the Gaussian free field and a (non-independent) regular difference field. The methods we use rely on a stochastic control formulation developed for the \(\phi _2^4\) field in [5] and the essentially equivalent non-perturbative Polchinski renormalisation group approach that was used in [7] for the sine-Gordon field. Our point of view in this work resembles that in the latter reference, for which we use a similar notation. Specifically, we construct the \(\epsilon \)-regularised \({\mathcal {P}}(\phi )_2\) field as the solution to the high-dimensional SDE

$$\begin{aligned} d \Phi _t^{{\mathcal {P}}_\epsilon ,E} = - \dot{c}_t^\epsilon \nabla v_t^{\epsilon ,E} (\Phi _t^{{\mathcal {P}}_\epsilon ,E}) dt+ (\dot{c}_t^\epsilon )^{1/2} d W_t, \qquad \Phi _\infty ^{{\mathcal {P}}_\epsilon ,E} = 0, \end{aligned}$$
(1.5)

where \((\dot{c}_t^\epsilon )_{t\in [0,\infty ]}\) is associated to the Pauli-Villars decomposition of the Gaussian free field covariance defined for \(t \in (0,\infty )\) by

$$\begin{aligned} c_t^\epsilon = ( -\Delta ^\epsilon + m^2 + 1/t)^{-1}, \qquad \dot{c}_t^\epsilon = \frac{d}{dt}c_t^\epsilon , \end{aligned}$$
(1.6)

and with \(c_0^\epsilon =0\) and \(c_\infty ^\epsilon = (-\Delta ^\epsilon + m^2)^{-1}\), W is a Brownian motion in \(\Omega _\epsilon \), and \(v_t^{\epsilon ,E}\) is the renormalised potential to be defined later. The Pauli-Villars decomposition could be replaced by, for example, a heat-kernel decomposition, but it is technically convenient to work with the former rather than the latter; see Remark 3.1 for a more precise discussion of how rigid the choice of scale decomposition is.
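In Fourier variables the decomposition (1.6) is diagonal and can be inspected mode by mode. A small numerical sketch (our own; it assumes the lattice Fourier conventions spelled out in Sect. 2.1):

```python
# Sketch: the Pauli-Villars multipliers c_t(k) = 1/(lambda_k + 1/t) increase
# from 0 at t = 0 to the GFF covariance multipliers 1/lambda_k as t -> infty.
import numpy as np

N, eps, m2 = 32, 1/32, 1.0
k = 2*np.pi*np.fft.fftfreq(N, d=eps)
kx, ky = np.meshgrid(k, k, indexing="ij")
lam = (2 - 2*np.cos(eps*kx))/eps**2 + (2 - 2*np.cos(eps*ky))/eps**2 + m2

def c_t(t):      # Fourier multipliers of c_t^eps
    return 1.0/(lam + 1.0/t)

def c_dot(t):    # d/dt c_t^eps = 1/(t*lambda_k + 1)^2
    return 1.0/(t*lam + 1.0)**2

for t in (1e-4, 1e-2, 1.0, 1e2, 1e6):
    print(t, np.max(np.abs(c_t(t) - 1.0/lam)))   # -> 0 as t grows
```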

The energy cut-off \(E>0\) ensures that the SDE (1.5) is well-posed and admits a strong solution; the cut-off is removed by taking \(E\rightarrow \infty \). The stochastic integral on the right-hand side of (1.5) corresponds to a scale decomposition of the Gaussian free field, i.e. setting

$$\begin{aligned} \Phi _t^{\text {GFF}_\epsilon } = \int _t^\infty (\dot{c}_s^\epsilon )^{1/2} d W_s \end{aligned}$$
(1.7)

we obtain a Gaussian process \((\Phi _t^{\text {GFF}_\epsilon })_t\) started at \(t=\infty \) with \(\Phi _\infty ^{\text {GFF}_\epsilon } = 0\) and such that \(\Phi _0^{\text {GFF}_\epsilon } \sim \nu ^{\text {GFF}_\epsilon }\). Thus, going back to (1.5), the difference field corresponds to the finite variation term on the right-hand side of the SDE.

We state the main result in the following theorem on the level of regularisations. For \(\alpha \in \mathbb {R}\) let \(H^\alpha (\Omega _\epsilon )\equiv H^\alpha \) be the (discrete) Sobolev space of regularity \(\alpha \), see Sect. 2.2 for a precise definition. Moreover, we denote by \(C_0([0, \infty ), {\mathcal {S}})\) the space of continuous sample paths with values in a metric space \(({\mathcal {S}}, \Vert \cdot \Vert _{\mathcal {S}})\) that vanish at infinity, equipped with the topology of compact convergence.

Theorem 1.1

There exists a process \(\Phi ^{{\mathcal {P}}_\epsilon }\in C_0([0,\infty ), H^{-\kappa })\) for any \(\kappa >0\) such that

$$\begin{aligned} \Phi _t^{{\mathcal {P}}_\epsilon } = \Phi _t^{\Delta _\epsilon } + \Phi _t^{\text {GFF}_\epsilon }, \qquad \Phi _0^{{\mathcal {P}}_\epsilon } \sim \nu ^{{\mathcal {P}}_\epsilon }, \end{aligned}$$
(1.8)

where the difference field \(\Phi ^{\Delta _\epsilon }\) satisfies

$$\begin{aligned}&\sup _{\epsilon >0 } \sup _{t\geqslant 0} \mathbb {E}[\Vert \Phi _t^{\Delta _\epsilon }\Vert _{H^{1}}^2] < \infty , \end{aligned}$$
(1.9)
$$\begin{aligned}&\sup _{\epsilon >0} \sup _{t\geqslant t_0} \mathbb {E}[\Vert \Phi _t^{\Delta _\epsilon }\Vert _{H^{2}}^2] < \infty \end{aligned}$$
(1.10)

for any \(t_0>0\), and for any \(\alpha \in [0,2)\) and some large enough \(L\geqslant 4\)

$$\begin{aligned}&\sup _{\epsilon >0 } \sup _{t\geqslant 0} \mathbb {E}[\Vert \Phi _t^{\Delta _\epsilon }\Vert _{H^{\alpha }}^{2/L}] < \infty , \end{aligned}$$
(1.11)
$$\begin{aligned}&\sup _{\epsilon >0} \mathbb {E}[ \Vert \Phi _t^{\Delta _\epsilon }-\Phi _0^{\Delta _\epsilon }\Vert _{H^{\alpha }}^{2/L} ]\rightarrow 0 \text {~as~} t\rightarrow 0. \end{aligned}$$
(1.12)

Moreover, for any \(t>0\), \(\Phi ^{\text {GFF}_\epsilon }_0-\Phi _t^{\text {GFF}_\epsilon }\) is independent of \(\Phi _t^{{\mathcal {P}}_\epsilon }\).

Let us make two important remarks concerning the optimality of the regularity estimates in Theorem 1.1. First, the restriction to \(\alpha \in [0,2)\) in (1.11) and (1.12) seems to be optimal, at least with respect to our method. Other reasonable choices of \((\dot{c}_t^\epsilon )_{t\in [0,\infty ]}\), see again Remark 3.1, should also lead to the same estimates (1.9), (1.11) and (1.12). Second, we do not expect that the probabilistic regularity estimates on the difference field \(\Phi _t^{\Delta _\epsilon }\), \(t\geqslant 0\) in Theorem 1.1 can be replaced by deterministic ones, in contradistinction to the case of the sine-Gordon field considered in [7]. This is because \(\Phi _t^{\Delta _\epsilon }\) is related to \(\nabla v_t^{\epsilon ,E}(\phi )\), \(\phi \in X_\epsilon \) in (3.6), which is, unlike for the sine-Gordon field, unbounded in \(\phi \) as \(E\rightarrow \infty \). In fact, since the constant L depends linearly on the degree of the polynomial \({\mathcal {P}}\), the bounds in Theorem 1.1 become weaker as the degree of \({\mathcal {P}}\) increases. This suggests that the larger the growth of the nonlinearity in (1.1), the weaker the bounds for the difference field. Note that, in the case of the \(\phi ^4_2\) field, we can take \(L=4\).

As a corollary of Theorem 1.1, we also obtain the following statement for the continuum, i.e. \(\epsilon =0\). We stress that, in the statement below, \(H^\alpha = H^\alpha (\Omega )\) refers to the usual, i.e. continuum, Sobolev space of regularity \(\alpha \) over \(\Omega \).

Corollary 1.2

There exists a process \(\Phi ^{{\mathcal {P}}} \in C_0([0,\infty ), H^{-\kappa })\) for every \(\kappa >0\) such that

$$\begin{aligned} \Phi _t^{{\mathcal {P}}} = \Phi _t^{\Delta } + \Phi _t^{\text {GFF}}, \end{aligned}$$
(1.13)

where \(\Phi _0^{\mathcal {P}}\) is distributed as the continuum \({\mathcal {P}}(\phi )_2\) field and \(\Phi _0^\text {GFF}\) is distributed as the continuum Gaussian free field. For the difference field \(\Phi ^\Delta \), the analogous estimates as for \(\Phi _t^{\Delta _\epsilon }\) in Theorem 1.1 hold in the continuum Sobolev spaces. Finally, for any \(t>0\), \(\Phi _0^\text {GFF}- \Phi _t^\text {GFF}\) is independent of \(\Phi _t^{\mathcal {P}}\).

As we shall see in the proof of Corollary 1.2, we construct the continuum process \(\Phi ^\Delta \) in (1.13) as a weak limit of \((\Phi ^{\Delta _\epsilon })_\epsilon \) as \(\epsilon \rightarrow 0\) along a subsequence, thereby establishing the existence of \(\Phi ^{\mathcal {P}}\). While the convergence as processes only holds along a subsequence, we prove that every subsequential limit of \((\Phi ^{\Delta _\epsilon })_\epsilon \), and hence of \((\Phi ^{{\mathcal {P}}_\epsilon })_\epsilon \), has the same law.

The key ingredient in the proof of our main result, Theorem 1.1, is an exact correspondence between two different stochastic representations of \({\mathcal {P}}(\phi )_2\): a stochastic control representation, called the Boué–Dupuis representation, and the Polchinski renormalisation group approach, see Sect. 3.4. More precisely, the difference field is directly related to a special minimiser of the stochastic control problem. We use this correspondence to transfer fractional moment estimates on the minimiser to the difference field.

In the case of the sine-Gordon field considered in [7], the proof of the analogous estimates on \(\Phi ^{\Delta _\epsilon }\) relies heavily on deterministic estimates on the gradient of the renormalised potential \(v_t^\epsilon \), which is enabled by a Yukawa gas representation from [16]. Such a representation is not available in the present case, as the nonlinearity \({\mathcal {P}}(\phi )\) is not periodic in the field variables \((\phi _x)_{x\in \Omega _\epsilon }\). Therefore, our approach is more robust in the sense that we do not need to know the gradient of the renormalised potential, for which a uniform bound may not be available, and we do not need the periodicity requirement on the potential.

It would be of interest to extend our approach to other continuum Euclidean field theories. On the one hand, we believe that our approach can be extended to treat fields with more general nonlinearities in the log-density, such as the sinh-Gordon model. On the other hand, by combining our techniques with paracontrolled ansatz techniques developed in [5], it may be possible to analyse the sine-Gordon field up to the same regime as treated in [7]. Note that our techniques would a priori yield probabilistic estimates on the difference field, while the precise analysis of the gradient of the renormalised potential in [7] yields deterministic estimates. It would be interesting to see whether one can recover similar deterministic estimates with our approach. For \(\beta < 4\pi \), this seems possible by using methods developed in [3]. A treatment of the full subcritical regime \(\beta < 8\pi \) using either method would be of great interest. Let us mention that in the setting of fermionic Euclidean field theories, such couplings have been constructed for a \(\phi ^{4}\)-type model in the full subcritical regime in [19] using a Forward-Backward SDE approach. Although the objects treated are not the same (recall that we are interested in bosonic Euclidean field theories), on a high level their general framework is closely related to our point of view here and may be of help in analysing sine-Gordon for \(\beta < 8\pi \).

Finally, we remark that similar couplings can be obtained from stochastic quantisation, for instance with the methods of [30] or [25], which allow one to write the \({\mathcal {P}}(\phi )_2\) field as a sum of a Gaussian free field and a non-Gaussian regular field with values in \(H^1(\Omega )\). However, the couplings obtained in this way do not have the independence property stated in Corollary 1.2. This property, together with the fact that our regularity estimates imply \(L^\infty \) bounds for \(\Phi _t^{\Delta }\), \(t\geqslant 0\), enables us to study the distribution of the centred maximum of the field as described below.

1.2 Application to the maximum of the \({\mathcal {P}}(\phi )_2\) field

The strong coupling to the Gaussian free field in Theorem 1.1 is a useful tool to study novel probabilistic aspects of \({\mathcal {P}}(\phi )_2\) theories that go beyond the scope of the current literature. As an illustration, we investigate the global centred maximum of the regularised fields \(\Phi ^{{\mathcal {P}}_\epsilon }\) defined as

$$\begin{aligned} M^\epsilon = \max _{\Omega _\epsilon }\Phi ^{{\mathcal {P}}_\epsilon }, \qquad \Phi ^{{\mathcal {P}}_\epsilon }\sim \nu ^{{\mathcal {P}}_\epsilon }. \end{aligned}$$

It is clear that as \(\epsilon \rightarrow 0\) this random variable diverges, since the limiting field takes values in a space of distributions. For the (massive) Gaussian free field, i.e. the case \({\mathcal {P}}(\phi ) = 0\), this divergence has been quantified in [15] by

$$\begin{aligned} \mathbb {E}[M^\epsilon ] = {\mathfrak {m}}_\epsilon + O(1), \qquad {\mathfrak {m}}_\epsilon = \frac{1}{\sqrt{2\pi } } (2\log \frac{1}{\epsilon }-\frac{3}{4}\log \log \frac{1}{\epsilon }). \end{aligned}$$
(1.14)

Moreover, in this reference it is also shown that the sequence of centred maxima \((M^\epsilon - {\mathfrak {m}}_\epsilon )_\epsilon \) is tight. We remark that these results were initially proved for the massless Gaussian free field on a box with Dirichlet boundary condition rather than on the torus, but the arguments are not sensitive to the choice of boundary conditions. The minor difference between the prefactor \(1/\sqrt{2\pi }\) in (1.14) and that in [15] and various other references comes from our different scaling of the fields, see (1.2).

It is clear that the coupling in Theorem 1.1 for \(t=0\) together with the properties of \(\Phi ^\Delta \) implies (1.14) and tightness of the centred global maxima for the \({\mathcal {P}}(\phi )_2\) field. Indeed, by the standard Sobolev embedding in \(d=2\), the Sobolev norm in (1.11) can be replaced by the \(L^\infty \)-norm, and hence the maximum of the \({\mathcal {P}}(\phi )_2\) field and that of the GFF differ by a random variable with a finite fractional moment. In particular, this implies that (1.14) also holds for general even polynomials \({\mathcal {P}}\) with positive leading coefficient.

Exploiting also the larger scales \(t>0\) of the coupling (1.8) allows us to understand the O(1) terms in \(M^\epsilon - {\mathfrak {m}}_\epsilon \) and to establish the following convergence in distribution as \(\epsilon \rightarrow 0\).

Theorem 1.3

The centred maximum of the \(\epsilon \)-regularised \({\mathcal {P}}(\phi )_2\) field \(\Phi ^{{\mathcal {P}}_\epsilon } \sim \nu ^{{\mathcal {P}}_\epsilon }\) converges in distribution as \(\epsilon \rightarrow 0\) to a randomly shifted Gumbel distribution, i.e.

$$\begin{aligned} \max _{\Omega _\epsilon } \Phi ^{{\mathcal {P}}_\epsilon } - {\mathfrak {m}}_\epsilon \rightarrow \frac{1}{\sqrt{8\pi }} X + \frac{1}{\sqrt{8\pi }}\log \textrm{Z}^{{\mathcal {P}}} + b, \end{aligned}$$

where \(\textrm{Z}^{{\mathcal {P}}} \) is a nontrivial positive random variable, X is an independent standard Gumbel random variable, and b is a deterministic constant.

The analogous result for the Gaussian free field was proved in [14] and later generalised to log-correlated Gaussian fields in [20], and to the (non-Gaussian) continuum sine-Gordon field in [7]. The proof in the latter reference relies on a coupling between the Gaussian free field and the sine-Gordon field, in essence similar to Theorem 1.1, and a generalisation of all key results in [14] to a non-Gaussian regime. Here we follow a similar strategy to establish Theorem 1.3, verifying that the technical results in [7, Section 4] also hold under the weaker assumptions on the term \(\Phi ^\Delta \) in Theorem 1.1.

It is believed that the limiting law of the centred maximum is universal for \(\Phi ^\epsilon \) belonging to a large class of Gaussian or non-Gaussian log-correlated fields, in the sense that the fluctuations of the centred maximum are of order 1 and, moreover, there is a sequence \(a_\epsilon \) such that

$$\begin{aligned} \mathbb {P}(\max _{\Omega _\epsilon }\Phi ^\epsilon - a_\epsilon \leqslant x) \rightarrow \mathbb {E}[ \exp (-C\textrm{Z}e^{-cx})], \qquad x\in \mathbb {R}\end{aligned}$$
(1.15)

for some positive constants \(c,C>0\) and a positive random variable \(\textrm{Z}\); see for instance [17]. The expectation value in (1.15) is the distribution function of a randomly shifted Gumbel distribution, which is obtained from the deterministic Gumbel distribution function by averaging over the random shift \(\log \textrm{Z}\). In particular, the weak convergence for \(\max _{\Omega _\epsilon }\Phi ^{{\mathcal {P}}_\epsilon }\) in Theorem 1.3 can be equivalently stated as in (1.15) by setting \(a_\epsilon = {\mathfrak {m}}_\epsilon \), \(c = \sqrt{8\pi }\), \(C= e^{\sqrt{8\pi }b}\) and \(\textrm{Z}= \textrm{Z}^{{\mathcal {P}}}\). In recent years there has been substantial progress on the extremal behaviour of log-correlated fields and related models, thus confirming the conjectured behaviour of the maximum. For Gaussian fields, in particular the discrete Gaussian free field, the vast majority of questions centred around the maximum have been answered thanks to the works [14, 20, 35] as well as [9, 10]. As various key methods in the proof of these results only apply to Gaussian fields, the picture in the non-Gaussian regime is less complete. Important recent works on non-Gaussian models include [7, 8, 21, 36] and [41]. There is also a surprising relation between log-correlated processes and the extreme values of characteristic polynomials of certain random matrix ensembles as well as the maximum of the Riemann zeta function in a typical short interval on the critical line, which was first described and investigated in [22] and [23]. Subsequent works in this direction include [1, 18] and [34]. For a survey on recent developments we refer to [2].
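The equivalence between the two formulations is elementary and easy to check by Monte Carlo. A quick sketch (our own; the law of \(\textrm{Z}\) below is an arbitrary placeholder, not the actual \(\textrm{Z}^{{\mathcal {P}}}\)):

```python
# Monte Carlo sketch: if X is standard Gumbel, independent of Z > 0, then
# M = X/c + log(Z)/c + b satisfies P(M <= x) = E[exp(-C Z e^{-c x})], C = e^{c b}.
import numpy as np

rng = np.random.default_rng(1)
c, b, n = np.sqrt(8*np.pi), 0.3, 10**6
Z = rng.lognormal(0.0, 0.5, n)        # placeholder law for Z
M = rng.gumbel(size=n)/c + np.log(Z)/c + b

for x in (0.0, 0.5, 1.0):
    lhs = np.mean(M <= x)
    rhs = np.mean(np.exp(-np.exp(c*b)*Z*np.exp(-c*x)))
    print(x, lhs, rhs)                # agree up to Monte Carlo error
```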

The random variable \(\textrm{Z}^{{\mathcal {P}}}\) is believed to be a multiple of the critical multiplicative chaos of the field, also known as the derivative martingale, which can be obtained as the weak limit of

$$\begin{aligned} \textrm{Z}^{{\mathcal {P}}_\epsilon } = \epsilon ^2 \sum _{\Omega _\epsilon } (\frac{2}{\sqrt{2\pi }}\log \frac{1}{\epsilon } - \Phi ^{{\mathcal {P}}_\epsilon }) e^{-2\log \frac{1}{\epsilon } + \sqrt{8\pi }\Phi ^{{\mathcal {P}}_\epsilon }}. \end{aligned}$$
(1.16)

However, this has been established rigorously only for the massless Gaussian free field thanks to its conformal invariance, see [11]. Even though the exact characterisation of \(\textrm{Z}^{\mathcal {P}}\) does not come out of the proof of Theorem 1.3, it does show that \(\textrm{Z}^{\mathcal {P}}\) is obtained as the limit of prototypical derivative martingales defined similarly to (1.16).

Nonetheless, the coupling in Theorem 1.1 and Corollary 1.2 immediately gives a construction of the limit of the measure associated to (1.16) as \(\epsilon \rightarrow 0\) through

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \textrm{Z}^{{\mathcal {P}}_\epsilon }(dx) = e^{\sqrt{8\pi } \Phi _0^{\Delta } (x)}\textrm{Z}^\text {GFF}(dx) \end{aligned}$$

where \(\textrm{Z}^\text {GFF}(dx)\) is the Gaussian multiplicative chaos associated with \(\Phi _0^\text {GFF}\). We expect that this relation allows one to establish further properties of the multiplicative chaos of the \({\mathcal {P}}(\phi )_2\) field.
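For illustration, the approximate derivative martingale (1.16) is a one-line computation given a sample of the regularised field. A sketch (assuming phi is such a sample stored as an \((N,N)\) array, e.g. produced by the Langevin sketch above):

```python
# Sketch: the approximate derivative martingale (1.16) evaluated on a sample
# phi of the eps-regularised field.
import numpy as np

def derivative_martingale(phi, eps):
    log_inv_eps = np.log(1.0/eps)
    weight = (2/np.sqrt(2*np.pi))*log_inv_eps - phi
    return eps**2*np.sum(weight*np.exp(-2*log_inv_eps + np.sqrt(8*np.pi)*phi))
```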

Finally, we believe that the coupling in Theorem 1.1 can also be used to prove finer results on the extreme values of the \({\mathcal {P}}(\phi )_2\) field, in particular on the locations and heights of local maxima. The main reference for the local and full extremal process for the Gaussian free field is the work by Biskup and Louidor, see [9, 10] and [11]. Using the coupling in Theorem 1.1 and the generalisation of the key results in these references to the non-Gaussian setting analogously to [27], it seems plausible that the local extremal process of the \({\mathcal {P}}(\phi )_2\) field converges to a Poisson point process on \(\Omega \times \mathbb {R}\) with random intensity measure \(\textrm{Z}(dx)\otimes e^{-\sqrt{8\pi } h}dh\). With additional work, it should also be possible to prove convergence of the full extremal process thanks to the continuity of the difference field.

1.3 Notation

For a covariance \(c^\epsilon \) on \(X_\epsilon \), we denote by \({\varvec{E}}_{c^\epsilon }\) the expectation with respect to the centred Gaussian measure with covariance \(c^\epsilon \). Moreover, we use the standard Landau big-O and little-o notation and write \(f \lesssim g\) to denote that \(f\leqslant O(g)\), and \(f \simeq g\) if \(f \lesssim g\) and \(g\lesssim f\).

When an estimate holds true up to a constant factor depending on a parameter \(\alpha \) say, i.e. if \(f \leqslant C g\) where the constant \(C=C(\alpha )\) depends only on \(\alpha \), then we write \(f\lesssim _{\alpha } g\).

For fields \(f \in X_\epsilon \), we either write f(x) or \(f_x\) for the evaluation of f at \(x\in \Omega _\epsilon \). When the field already comes with an index, say t, then we write \(f_t(x)\) instead. We also use the word function and field interchangeably for elements in \(X_\epsilon \).

To simplify notation in what follows we will write

$$\begin{aligned} \int _{\Omega _\epsilon } f(x) dx \equiv \epsilon ^2 \sum _{x \in \Omega _\epsilon } f(x) \end{aligned}$$

for the discrete integral over \(\Omega _\epsilon \). For two functions \(f,g \in X_\epsilon ^\mathbb {C}\equiv \{\varphi :\Omega _\epsilon \rightarrow \mathbb {C}\}\) we define the inner product

$$\begin{aligned} \langle f,g \rangle \equiv \langle f,g \rangle _{\Omega _\epsilon } \equiv \epsilon ^2 \sum _{x \in \Omega _\epsilon } f(x) \overline{ g(x)}. \end{aligned}$$

For \(p\in [1,\infty )\) we also define the \(L^p\)-norm for \(f\in X_\epsilon ^\mathbb {C}\) by

$$\begin{aligned} \Vert f\Vert _{L^p(\Omega _\epsilon )}^p \equiv \Vert f \Vert _{L^p}^p = \int _{\Omega _\epsilon } |f(x)|^p dx \end{aligned}$$

and for \(p=\infty \) we define

$$\begin{aligned} \Vert f\Vert _{L^\infty (\Omega _\epsilon )} \equiv \Vert f\Vert _{L^\infty } \equiv \Vert f\Vert _{\infty } = \max _{x\in \Omega _\epsilon }|f(x)|. \end{aligned}$$

We write \(L^p(\Omega _\epsilon ) = (X_\epsilon , \Vert \cdot \Vert _{L^p})\), \(1\leqslant p\leqslant \infty \) for the discrete \(L^p\) space. Finally, for \(F:X_\epsilon \rightarrow \mathbb {R}\), \(\varphi \mapsto F(\varphi )\) define \(\nabla F\) as the unique function \(X_\epsilon \rightarrow X_\epsilon \) that satisfies

$$\begin{aligned} \langle \nabla F(\varphi ), g \rangle = D_\varphi F(g) \end{aligned}$$

for all \(\varphi ,g \in X_\epsilon \), where \(D_\varphi F :X_\epsilon \rightarrow \mathbb {R}\) is the Fréchet derivative of F at \(\varphi \), i.e. the unique bounded linear map satisfying

$$\begin{aligned} F(\varphi + g) = F(\varphi ) + D_\varphi F(g) + o(g) \end{aligned}$$

for all \(g \in X_\epsilon \) and where \(o(g) = o(\Vert g\Vert )\) for some norm on \(X_\epsilon \). Note that our convention for \(\nabla F\) differs from the usual gradient by a factor \(\epsilon ^{-2}\) due to the normalised inner product.
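The discrete conventions of this subsection translate directly into code. A minimal sketch (our own helper names) for fields stored as \((N,N)\) arrays with \(\epsilon = 1/N\):

```python
# Sketch of the discrete conventions: integral, inner product, L^p norms, and
# the eps^{-2}-normalised gradient of a functional F : X_eps -> R.
import numpy as np

def integral(f, eps):                  # int_{Omega_eps} f(x) dx
    return eps**2*np.sum(f)

def inner(f, g, eps):                  # <f, g>_{Omega_eps}
    return eps**2*np.sum(f*np.conj(g))

def lp_norm(f, p, eps):                # ||f||_{L^p(Omega_eps)}, p < infinity
    return integral(np.abs(f)**p, eps)**(1.0/p)

def grad_F(F, phi, eps, h=1e-5):
    """<grad F(phi), g> = D_phi F(g); the division by eps^2 converts the
    coordinate derivative into the gradient for the normalised inner product."""
    g = np.zeros_like(phi)
    for idx in np.ndindex(phi.shape):
        e = np.zeros_like(phi)
        e[idx] = h
        g[idx] = (F(phi + e) - F(phi - e))/(2*h)/eps**2
    return g

eps = 0.25
phi = np.arange(16.0).reshape(4, 4)
F = lambda u: 0.5*inner(u, u, eps)
assert np.allclose(grad_F(F, phi, eps), phi, atol=1e-6)  # grad of (1/2)<u,u> = u
```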

2 Discrete Besov Spaces and the Regularity of Wick Powers

In our analysis we require regularity information on various types of random fields on \(X_\epsilon \) that are uniform in the regularisation parameter. Particularly important examples of such random fields are given by discrete analogues of Wick powers for the GFF. Besov spaces allow for a convenient description of the regularity of the latter objects. As such, in this section, we first recall basic notions from discrete Fourier analysis and define discrete analogues of Besov spaces. All inequalities in this section will be uniform in \(\epsilon \) unless otherwise stated.

2.1 Discrete Fourier series and trigonometric embedding

Any function in \(X_\epsilon \) admits a Fourier representation thanks to the periodic boundary conditions. Let \(\Omega _\epsilon ^*=\{ (k_1,k_2) \in 2\pi \mathbb {Z}^2 :-\pi /\epsilon < k_i \leqslant \pi /\epsilon \}\) and \(\Omega ^*= 2\pi \mathbb {Z}^2\) be the Fourier dual spaces of \(\Omega _\epsilon \) and \(\Omega \). Then, for \(f\in X_\epsilon \) and \(x\in \Omega _\epsilon \),

$$\begin{aligned} f(x) = \sum _{k \in \Omega _\epsilon ^*} {\hat{f}}(k) e^{ik\cdot x}, \end{aligned}$$
(2.1)

where \({\hat{f}}(k)\in \mathbb {C}\) is the k-th Fourier coefficient of f given by

$$\begin{aligned} {\hat{f}} (k) = \langle f,e^{ik\cdot } \rangle _{\Omega _\epsilon } = \epsilon ^2 \sum _{x\in \Omega _\epsilon } f(x) e^{-ik\cdot x}. \end{aligned}$$
(2.2)

Since f is real-valued, we have that \({\hat{f}}(k) = \overline{{\hat{f}}(-k)}\). For a given \(f\in X_\epsilon \), the map \({\mathcal {F}}_\epsilon (f):\Omega _\epsilon ^* \rightarrow \mathbb {C}\), \(k\mapsto {\hat{f}}(k)\) is called the (discrete) Fourier transform of f. Similarly, for any function \({\hat{f}} \in X_\epsilon ^* := \{ g :\Omega _\epsilon ^* \rightarrow \mathbb {C}\}\), we define the inverse Fourier transform \({\mathcal {F}}_\epsilon ^{-1}\) by

$$\begin{aligned} {\mathcal {F}}_\epsilon ^{-1} ({\hat{f}}) (x) = \sum _{k \in \Omega _\epsilon ^*} {\hat{f}}(k) e^{ik\cdot x}. \end{aligned}$$

As for the usual Fourier transform on \(\Omega \) and its inverse, which we define analogously to (2.1) and (2.2) by replacing \(\Omega _\epsilon \) by \(\Omega \), it can be easily seen that \({\mathcal {F}}_\epsilon ^{-1} \circ {\mathcal {F}}_\epsilon = \textrm{id}_{X_\epsilon ^\mathbb {C}}\).
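With \(x = n\epsilon \) and \(k = 2\pi m\), the phases in (2.1) and (2.2) match those of the standard FFT, so the discrete Fourier transform and its inverse are two rescaled FFT calls. A sketch (the normalisation choices below follow (2.1)–(2.2)):

```python
# Sketch of (2.1)-(2.2) via the FFT: with x = n*eps and k = 2*pi*m the phase
# e^{-ik.x} equals numpy's e^{-2*pi*i*m*n/N}, so hat f = eps^2 * fft2(f).
import numpy as np

N, eps = 32, 1/32
rng = np.random.default_rng(2)
f = rng.standard_normal((N, N))

fhat = eps**2*np.fft.fft2(f)                    # hat f(k), k in Omega_eps^*
f_back = np.real(np.fft.ifft2(fhat))/eps**2     # sum_k hat f(k) e^{ik.x}
assert np.allclose(f, f_back)                   # F_eps^{-1} o F_eps = id

k_axis = 2*np.pi*np.fft.fftfreq(N, d=eps)       # one axis of Omega_eps^*
# (fft ordering; numpy places the Nyquist mode at -pi/eps rather than +pi/eps,
# which is immaterial on the lattice, where both phases coincide)
```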

For a translation invariant operator \(q :X_\epsilon \rightarrow X_\epsilon \), we denote by \({\hat{q}}(k)\), \(k\in \Omega _\epsilon ^*\) its Fourier multipliers, defined by

$$\begin{aligned} \widehat{q (f)}(k) = {\hat{q}}(k) {\hat{f}}(k). \end{aligned}$$

For instance, it can be shown that the negative lattice Laplacian \(-\Delta ^\epsilon \) on \(\Omega _\epsilon \) has Fourier multipliers

$$\begin{aligned} -\hat{\Delta }^\epsilon (k) = \epsilon ^{-2} \sum _{i=1}^2 (2-2\cos (\epsilon k_i)), \quad k \in \Omega _\epsilon ^*. \end{aligned}$$
(2.3)

As \(\epsilon \rightarrow 0\) these converge to the Fourier multipliers of the continuum Laplacian given by

$$\begin{aligned} -{\hat{\Delta }} (k) = |k|^2, \qquad k \in \Omega ^* \end{aligned}$$

and this convergence is quantified by

$$\begin{aligned} 0\leqslant -\hat{\Delta }(k) - (-{\hat{\Delta }}^\epsilon (k)) \leqslant |k|^2 h(\epsilon k), \qquad k \in \Omega _\epsilon ^*, \end{aligned}$$
(2.4)

where \(h(x) = \max _{i=1,2} (1 -x_i^{-2}(2-2\cos (x_i)))\) satisfies \(h(x) \in [0,1-c]\) with \(c = 4/\pi ^2\) for \(|x|\leqslant \pi \) and \(h(x) =O(|x|^2)\) as \(x\rightarrow 0\).
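The multiplier comparison (2.4) can be checked numerically on the full grid \(\Omega _\epsilon ^*\); a short sketch (our own):

```python
# Sketch: lattice/continuum multiplier comparison (2.4), with h the per-axis
# defect as defined above (maximum over the two axes).
import numpy as np

N, eps = 64, 1/64
k = 2*np.pi*np.fft.fftfreq(N, d=eps)
kx, ky = np.meshgrid(k, k, indexing="ij")

lat = ((2 - 2*np.cos(eps*kx)) + (2 - 2*np.cos(eps*ky)))/eps**2  # -hat Delta^eps
k2 = kx**2 + ky**2                                              # -hat Delta

def hfun(x):
    xs = np.where(x == 0, 1.0, x)          # guard the removable singularity
    return np.where(x == 0, 0.0, 1 - (2 - 2*np.cos(x))/xs**2)

h = np.maximum(hfun(eps*kx), hfun(eps*ky))
gap = k2 - lat
assert np.all(gap >= -1e-9) and np.all(gap <= k2*h + 1e-9)
print(h.max(), "<=", 1 - 4/np.pi**2)       # h stays in [0, 1 - 4/pi^2]
```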

We extend functions on \(\Omega _\epsilon \) onto \(\Omega \) by using the standard trigonometric extension, which is also an isometric embedding \(I_\epsilon :L^2(\Omega _\epsilon ) \rightarrow L^2(\Omega )\), i.e. if \(f\in X_\epsilon \) has Fourier series (2.1), then the extension \(I_\epsilon f\) of f is the unique function \(\Omega \rightarrow \mathbb {R}\) whose Fourier coefficients agree with those of f for \(k\in \Omega _\epsilon ^*\) and vanish for \(k\in \Omega ^*\setminus \Omega _\epsilon ^*\). Note that \(I_\epsilon f\) coincides with f on \(\Omega _\epsilon \).

Conversely, we can restrict a smooth function \(f:\Omega \rightarrow \mathbb {R}\) to \(\Omega _\epsilon \) by restricting its Fourier series

$$\begin{aligned} f(x) = \sum _{k\in \Omega ^*} {\hat{f}}(k)e^{ik\cdot x}, \qquad {\hat{f}}(k) = \int _{\Omega } f(x)e^{-ik\cdot x} dx \end{aligned}$$

to Fourier coefficients \(k\in \Omega _\epsilon ^*\), i.e. we write

$$\begin{aligned} \Pi _\epsilon f(x) = \sum _{k\in \Omega _\epsilon ^*}{\hat{f}}(k)e^{ik\cdot x}. \end{aligned}$$

2.2 Discrete Sobolev spaces

The notion of Fourier series can be used to define discrete Sobolev norms in complete analogy to the continuum case. For \(\Phi \in X_\epsilon \) and \(\alpha \in \mathbb {R}\) we define

$$\begin{aligned} \Vert \Phi \Vert _{H^\alpha (\Omega _\epsilon )}^2 \equiv \Vert \Phi \Vert _{H^\alpha }^2 = \sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^{\alpha } |{\hat{\Phi }}(k)|^2, \end{aligned}$$
(2.5)

where \({\hat{\Phi }}(k)\), \(k\in \Omega _\epsilon ^*\) denote the Fourier coefficients of \(\Phi \) as defined in (2.1). Moreover, we denote by \(H^\alpha (\Omega _\epsilon ) = (X_\epsilon , \Vert \cdot \Vert _{H^\alpha (\Omega _\epsilon )})\) the discrete Sobolev space of regularity \(\alpha \). Thus, when using the isometric embedding, the discrete Sobolev norm and the continuum Sobolev norm \(\Vert \cdot \Vert _{H^\alpha (\Omega )}\), defined as in (2.5) except with the sum now over \(k\in \Omega ^*\), coincide, i.e. for \(\Phi \in X_\epsilon \)

$$\begin{aligned} \Vert \Phi \Vert _{H^\alpha (\Omega _\epsilon )} = \Vert I_\epsilon \Phi \Vert _{H^\alpha (\Omega )}, \end{aligned}$$

where now \(H^\alpha (\Omega )\) denotes the Sobolev space of regularity \(\alpha \in \mathbb {R}\). Recall that we have the following standard embedding theorem. Here, \(\Vert \cdot \Vert _{L^p(\Omega )}\) and \(\Vert \cdot \Vert _{H^s(\Omega )}\) denote the continuum norms for functions \(\Omega \rightarrow \mathbb {C}\).

Proposition 2.1

(Sobolev embeddings). Let \(s \geqslant 1\). Then

$$\begin{aligned} \Vert f\Vert _{L^p(\Omega )} \lesssim _s \Vert f\Vert _{H^s(\Omega )}, \end{aligned}$$

where

$$\begin{aligned} {\left\{ \begin{array}{ll} 2\leqslant p < \infty , &{}s = 1, \\ 2\leqslant p \leqslant \infty , &{} s>1. \end{array}\right. } \end{aligned}$$

In particular, we have for every \( \alpha > 0\) and \(f:\Omega _\epsilon \rightarrow \mathbb {C}\),

$$\begin{aligned} \Vert f\Vert _{L^\infty (\Omega _\epsilon )} \leqslant \Vert I_\epsilon f\Vert _{L^\infty (\Omega )} \lesssim _\alpha \Vert I_\epsilon f \Vert _{H^{1+\alpha }(\Omega )} = \Vert f \Vert _{H^{1+\alpha }(\Omega _\epsilon )}. \end{aligned}$$
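Both the norm (2.5) and the embedding above are straightforward to test numerically. A sketch (the smoothing used to produce test fields is an arbitrary choice):

```python
# Sketch: the discrete Sobolev norm (2.5) via the FFT, a Parseval sanity
# check, and the L^infty bound of Proposition 2.1 on smoothed random fields.
import numpy as np

N, eps = 64, 1/64
k = 2*np.pi*np.fft.fftfreq(N, d=eps)
kx, ky = np.meshgrid(k, k, indexing="ij")
weight = 1 + kx**2 + ky**2                    # (1 + |k|^2)

def h_norm(f, alpha):
    fhat = eps**2*np.fft.fft2(f)
    return np.sqrt(np.sum(weight**alpha*np.abs(fhat)**2))

rng = np.random.default_rng(3)
for _ in range(3):
    f = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((N, N)))/weight))
    assert np.isclose(h_norm(f, 0.0), np.sqrt(eps**2*np.sum(f**2)))  # L^2 norm
    print(np.max(np.abs(f))/h_norm(f, 1.2))   # bounded ratio (alpha = 0.2)
```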

Finally, we also have the following Hölder embedding. Let \(\alpha \in (0,1)\). For a function \(f\in C^\infty (\Omega ,\mathbb {R})\), we define its \(\alpha \)-Hölder norm by

$$\begin{aligned} \Vert f \Vert _{C^\alpha (\Omega )} \equiv \Vert f \Vert _{C^\alpha } = |f|_{C^\alpha } + \Vert f\Vert _{L^\infty }, \quad |f|_{C^\alpha (\Omega )}= \sup _{x,y \in \Omega , x\ne y} \frac{|f(x)-f(y)|}{|x-y|^\alpha }, \end{aligned}$$

where \(|f|_{C^\alpha }\) denotes the Hölder seminorm. The \(\alpha \)-Hölder space, denoted \(C^\alpha (\Omega )\), is then defined as the completion of \(C^\infty (\Omega ,\mathbb {R})\) with respect to \(\Vert \cdot \Vert _{C^\alpha (\Omega )}\). For discrete functions \(f\in X_\epsilon \), we define

$$\begin{aligned} \Vert f \Vert _{C^\alpha (\Omega _\epsilon ) } = \Vert I_\epsilon f \Vert _{C^\alpha (\Omega )} \end{aligned}$$

and \(C^\alpha (\Omega _\epsilon ) = (X_\epsilon , \Vert \cdot \Vert _{C^\alpha (\Omega _\epsilon )})\). Then, the following standard embedding from Sobolev spaces into Hölder spaces can be transferred to functions in \(X_\epsilon \).

Proposition 2.2

Let \(\alpha \in (0,1)\) and \(s-1 \geqslant \alpha \). Then

$$\begin{aligned} \Vert f\Vert _{C^\alpha (\Omega )} \lesssim _{\alpha , s} \Vert f\Vert _{H^s(\Omega )}. \end{aligned}$$

2.3 Discrete Besov spaces

In this section we introduce the class of Besov spaces and recall their most important properties. We state the results below for dimension \(d=2\) though analogous statements hold for general d. Let \(A :=B_{4/3} \setminus B_{3/8}\) be the annulus of inner radius \(r_1 = 3/8\) and outer radius \(r_2=4/3\). Here, we denote by \(B_r = \{ |x| \leqslant r \} \subset \mathbb {R}^2\) the centred ball of radius \(r\geqslant 0\) in \(\mathbb {R}^2\). Let \(\chi , {\tilde{\chi }} \in C^\infty _c(\mathbb {R}^2,[0,1])\), such that

$$\begin{aligned} {{\,\textrm{supp}\,}}{\tilde{\chi }} \subseteq B_{4/3}, \qquad {{\,\textrm{supp}\,}}\chi \subseteq A \end{aligned}$$

and

$$\begin{aligned} {\tilde{\chi }}(x) + \sum _{j=0}^\infty \chi (x/2^j) = 1, \qquad x\in \mathbb {R}^2 \end{aligned}$$

and write

$$\begin{aligned} \chi _{-1} = {\tilde{\chi }}, \quad \chi _{j} = \chi (\cdot /2^j), \,\,\, j \geqslant 0. \end{aligned}$$
(2.6)

For \(\epsilon >0\) define \(j_\epsilon = \max \{ j \geqslant -1 :{{\,\textrm{supp}\,}}\chi _j \subset (-\pi /\epsilon , \pi /\epsilon ]^2 \}\). Note that for \(j \geqslant j_\epsilon \), \({{\,\textrm{supp}\,}}\chi _j\) may intersect \(\partial \{ [-\pi /\epsilon , \pi /\epsilon ]^2 \}\). To avoid ambiguities with the periodisation of \(\chi _j\) onto \(\Omega _\epsilon ^*\), we modify our dyadic partition of unity in (2.6) as follows: for \(j \in \{ -1, \ldots , j_\epsilon \}\) let \(\chi _j^\epsilon \in C^\infty _c(\mathbb {R}^2, [0,1])\) be such that for \(k \in \Omega _\epsilon ^*\) we have

$$\begin{aligned} \chi _j^\epsilon (k) = {\left\{ \begin{array}{ll} \chi _j(k), \,\qquad &{} j< j_\epsilon , \\ 1- \sum _{j < j_{\epsilon }} \chi _j^\epsilon (k), \,\qquad &{} j = j_\epsilon , \\ 0, \,\qquad &{} j >j_\epsilon . \end{array}\right. } \end{aligned}$$

Then we define for \(j\geqslant -1\) the j-th Fourier projector \(\Delta _j\) by

$$\begin{aligned} \Delta _j f = {\mathcal {F}}^{-1} (\chi _j^\epsilon {\hat{f}}), \end{aligned}$$

where \({\hat{f}} :\Omega _\epsilon ^* \rightarrow \mathbb {C}, \, k\mapsto {\hat{f}}(k)\) is the Fourier transform of f. We can use this partition of unity to decompose a given function \(f\in X_\epsilon \) into a sum of functions with almost disjoint support in Fourier space and define for any \(\alpha \in \mathbb {R}\) and \(p,q \in [1,\infty ]\)

$$\begin{aligned} \Vert f \Vert _{{B_{p,q}^{\alpha }}} :=\Big [ \sum _{j=-1}^\infty \big ( 2^{\alpha j} \Vert \Delta _j f \Vert _{L^p} \big )^q \Big ]^{1/q}. \end{aligned}$$

One can prove that this is indeed a norm on \(X_\epsilon \) and that different choices of \({\tilde{\chi }}, \chi \) yield equivalent norms uniformly in \(\epsilon >0\).

We denote by \({B_{p,q}^{\alpha }}(\Omega _\epsilon ) = (X_\epsilon , \Vert \cdot \Vert _{{B_{p,q}^{\alpha }}})\) the discrete Besov space with parameters p,q and \(\alpha \). Note that \(H^\alpha (\Omega _\epsilon ) = {B_{2,2}^{\alpha }}(\Omega _\epsilon )\) which holds in the sense that the norms are equivalent uniformly in the lattice spacing. Moreover, for any \(\alpha \in \mathbb {R}\), we write \({\mathcal {C}}^\alpha (\Omega _\epsilon ):={B_{\infty ,\infty }^{\alpha }}(\Omega _\epsilon )\).
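A simplified numerical sketch of this norm (our own illustration): here the smooth dyadic partition is replaced by sharp annular indicators in Fourier space, which yields a comparable but not identical quantity and is only meant to illustrate the definition; \(q<\infty \) is assumed.

```python
# Simplified sketch of the Besov norm with sharp annular Fourier blocks in
# place of the smooth partition chi_j (illustration only), q < infinity.
import numpy as np

N, eps = 64, 1/64
k = 2*np.pi*np.fft.fftfreq(N, d=eps)
kx, ky = np.meshgrid(k, k, indexing="ij")
kabs = np.sqrt(kx**2 + ky**2)

def besov_norm(f, alpha, p, q, jmax=12):
    fhat = np.fft.fft2(f)
    total = 0.0
    for j in range(-1, jmax + 1):
        mask = kabs <= 1.0 if j == -1 else (kabs > 2.0**j) & (kabs <= 2.0**(j+1))
        block = np.fft.ifft2(fhat*mask)                    # Delta_j f
        lp = (eps**2*np.sum(np.abs(block)**p))**(1.0/p)    # ||Delta_j f||_{L^p}
        total += (2.0**(alpha*j)*lp)**q
    return total**(1.0/q)

rng = np.random.default_rng(4)
f = rng.standard_normal((N, N))
print(besov_norm(f, -0.1, 2, 2))   # comparable to the H^{-0.1} norm
```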

We now state useful properties of these spaces. In all estimates below we use \(\lesssim \) to denote less than or equal to up to a constant that may depend on the parameters of the Besov space but does not depend on \(\epsilon \). For the proofs of these results, we refer to [31, Remarks 8, 10 and 11], [26, Lemma A.2], [4, Theorem 2.13], [26, Lemma A.3], and [4, Theorem 2.14], respectively.

Proposition 2.3

(Immediate embeddings). Let \(\alpha _1, \alpha _2 \in \mathbb {R}\) and \(p_1,p_2,q_1, q_2 \in [1,\infty ]\). Then we have

$$\begin{aligned} \Vert u \Vert _{{B_{p_1,q_1}^{\alpha _1}}}&\lesssim \Vert u \Vert _{{B_{p_2,q_2}^{\alpha _2}}}{} & {} \text {~for~} \alpha _1 \leqslant \alpha _2 \text {~and~} p_1 \leqslant p_2 \text {~and~} q_1 \geqslant q_2, \\ \Vert u \Vert _{{B_{p_1,q_1}^{\alpha _1}}}&\lesssim \Vert u \Vert _{B_{p_1,\infty }^{\alpha _2}}{} & {} \text {~for~} \alpha _1< \alpha _2, \\ \Vert u\Vert _{{B_{p_1,\infty }^{0}}}&\lesssim \Vert u \Vert _{L^{p_1}} \lesssim \Vert u \Vert _{{B_{p_1,1}^{0}}}. \end{aligned}$$

Proposition 2.4

(Duality). Let \(\alpha _1,\alpha _2 \in \mathbb {R}\) such that \(\alpha _{1}+\alpha _{2}=0\) and let \(p_1,p_2,q_1,q_2 \in [1,\infty ]\) such that \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{q_1} + \frac{1}{q_2} = 1\). Then

$$\begin{aligned} \Big | \int _{\Omega _{\epsilon }} f g dx \Big | \leqslant \Vert f\Vert _{{B_{p_1,q_1}^{\alpha _1}} }\Vert g\Vert _{{B_{p_2,q_2}^{\alpha _2}}}. \end{aligned}$$
(2.7)

Proposition 2.5

(Multiplication inequality). Let \(p,p_{1},p_{2},q,q_1,q_2\in [1,\infty ]\) be such that \(\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\), \(\frac{1}{q} = \frac{1}{q_1} + \frac{1}{q_2}\), and let \(\alpha _{1},\alpha _{2}\in \mathbb {R}\) be such that \(\alpha _{1}+\alpha _{2}>0\). Denote \(\alpha =\min (\alpha _{1},\alpha _{2})\). Then, for any \(\epsilon > 0\) and for all \(f,g \in X_\epsilon \),

$$\begin{aligned} \Vert fg \Vert _{{B_{p,q}^{\alpha }}} \lesssim \Vert f\Vert _{{B_{p_1,q_1}^{\alpha _1}}} \Vert g\Vert _{{B_{p_2,q_2}^{\alpha _2}}}. \end{aligned}$$
(2.8)

In particular, for \(\alpha > 0\) we obtain the following iterated multiplication inequality: for every \(k\in \mathbb {N}\) such that \(k \geqslant 1\),

$$\begin{aligned} \Vert f^k\Vert _{{B_{p,q}^{\alpha }}} \lesssim \Vert f \Vert _{{B_{kp,kq}^{\alpha }}}^k. \end{aligned}$$

Proposition 2.6

(Interpolation). Let \(\theta \in [0,1]\), \(p,p_{1},p_{2},q,q_1,q_2\in [1,\infty ]\) satisfying \(\frac{1}{p}=\frac{\theta }{p_{1}}+\frac{1-\theta }{p_{2}}\), \(\frac{1}{q} = \frac{\theta }{q_1} + \frac{1-\theta }{q_2}\), and \(\alpha ,\alpha _{1},\alpha _{2}\in \mathbb {R}\) satisfying \(\alpha =\theta \alpha _{1}+(1-\theta )\alpha _{2}\). Then, for any \(\epsilon > 0\) and \(f \in X_\epsilon \),

$$\begin{aligned} \Vert f\Vert _{{B_{p,q}^{\alpha }}} \lesssim \Vert f\Vert ^{\theta }_{{B_{p_1,q_1}^{\alpha _1}}} \Vert f\Vert _{{B_{p_2,q_2}^{\alpha _2}}}^{1-\theta }. \end{aligned}$$
(2.9)

Proposition 2.7

(Besov embedding). Let \(\alpha _{1},\alpha _{2}\in \mathbb {R}\) and \(p_{1},p_{2},q\in [1,\infty ]\). Assume that \(\alpha _{2} \leqslant \alpha _{1}-2\big ( \frac{1}{p_{1}}-\frac{1}{p_{2}}\big )\). Then, for any \(\epsilon > 0\) and \(f \in X_\epsilon \),

$$\begin{aligned} \Vert f\Vert _{{B_{p_2,q}^{\alpha _2}}} \lesssim \Vert f\Vert _{{B_{p_1,q}^{\alpha _1}}}. \end{aligned}$$
(2.10)

2.4 Regularity estimates on discrete Wick powers

The relevance of Besov norms in our context comes from the fact that in dimension \(d=2\) the Wick powers of log-correlated fields are distributions in \({B_{p,p}^{-\kappa }}\) for any \(\kappa >0\).

Lemma 2.8

Let \(Y_\infty ^\epsilon \) be the discrete Gaussian free field on \(\Omega _\epsilon \). Then, for any \(n\in \mathbb {N}\), \(\kappa >0\) and \( p \in [1,\infty )\),

$$\begin{aligned} \sup _{\epsilon > 0} \mathbb {E}\big [\Vert {:\,} (Y_\infty ^\epsilon )^n {:\,} \Vert _{{B_{p,p}^{-\kappa }}}^p\big ] <\infty . \end{aligned}$$

Consequently, by Besov embedding (2.10), for any \(n \in \mathbb {N}\), \(\kappa >0\), and \(r > 0\),

$$\begin{aligned} \sup _{\epsilon >0} \mathbb {E}\big [\Vert {:\,} (Y_\infty ^\epsilon )^n {:\,} \Vert ^{r}_{{\mathcal {C}}^{-\kappa }}\big ] < \infty . \end{aligned}$$

Proof

Throughout this proof we drop \(\epsilon >0\) from the notation. By the definition of the Besov norms, we have

$$\begin{aligned} \mathbb {E}\big [\Vert {:\,} Y_\infty ^{n} {:\,} \Vert _{{B_{p,p}^{-\kappa }}}^{p} \big ] &= \sum _{j=-1}^{\infty } 2^{-j \kappa p} \mathbb {E}\big [\Vert \Delta _{j}{:\,} Y_\infty ^{n} {:\,} \Vert ^{p}_{L^{p}}\big ] \\ &\lesssim \sum _{j=-1}^{\infty }2^{-j \kappa p} \Big (\mathbb {E}\big [\big |\big (\Delta _{j}{:\,} Y_\infty ^{n} {:\,}\big )(0)\big |^{2}\big ]\Big )^{p/2} \\ &= \sum _{j=-1}^{\infty }2^{-j \kappa p}\Big (\mathbb {E}\Big [\int _{\Omega _\epsilon \times \Omega _\epsilon } K_j(x)K_j(y)\, {:\,} Y_\infty ^{n}(x) {:\,} \, {:\,} Y_\infty ^{n}(y) {:\,} \, dx \, dy \Big ] \Big )^{p/2} \\ &= \sum _{j=-1}^{\infty }2^{-j \kappa p} \Big (\int _{\Omega _\epsilon \times \Omega _\epsilon } K_j(x)K_j(y) \big ( c(x,y)\big )^{n} \, dx \, dy \Big )^{p/2}, \end{aligned}$$
(2.11)

where in the second line, we use stationarity together with a standard Wiener chaos estimate (which is a consequence of hypercontractivity), in the third line, we denote the (real-space) kernel of \(\Delta _j\) by \(K_j=K_j^\epsilon \), and in the final line we use Wick’s theorem.

Recall that we have \(\Vert K_j\Vert _{L^{1}(\Omega _\epsilon )}\lesssim 1\) and \(\Vert K_j\Vert _{L^{\infty }(\Omega _\epsilon )}\lesssim 2^{2j}\). This implies, by interpolation, that \(\Vert K_j\Vert _{L^{q_{1}}(\Omega _\epsilon )}\lesssim 2^{j \kappa /2}\) if \(q_{1}>1\) is chosen sufficiently close to 1. Observe that if \(q_{2}<\infty \) is the Hölder conjugate of \(q_{1}\), then \(\sup _{\epsilon >0}\Vert c^n\Vert _{L^{q_{2}}(dxdy)}<\infty \). Thus,

$$\begin{aligned} \Big | \int K_{j}(x)K_{j}(y)c^{n}(x,y) dxdy \Big |&\leqslant \Vert K_{j}(x)K_{j}(y)\Vert _{L^{q_{1}}(dxdy)}\Vert c^{n}\Vert _{L^{q_{2}}(dxdy)} \nonumber \\&\leqslant \Vert K_{j}\Vert ^{2}_{L^{q_{1}}}\Vert c^{n}\Vert _{L^{q_{2}}(dxdy)}\lesssim 2^{j\kappa }. \end{aligned}$$
(2.12)

Plugging (2.12) into the sum (2.11), we see that it converges and is bounded uniformly in \(\epsilon \). This completes the proof. \(\quad \square \)
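The uniformity in \(\epsilon \) asserted by Lemma 2.8 can also be observed numerically. The following Monte Carlo sketch (our own; it reuses the sharp-block analogue of the \({\mathcal {C}}^{-\kappa }\) norm from the sketch above, and individual samples fluctuate) evaluates the Wick square \({:\,} Y^2 {:\,} = Y^2 - c_\epsilon \) for several lattice spacings:

```python
# Monte Carlo sketch for Lemma 2.8: sample the discrete GFF spectrally, form
# :Y^2: = Y^2 - c_eps, and evaluate a sharp-block analogue of the C^{-kappa}
# norm; the values stay of comparable size as eps shrinks.
import numpy as np

def multipliers(N, eps, m2):
    k = 2*np.pi*np.fft.fftfreq(N, d=eps)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lam = (2 - 2*np.cos(eps*kx))/eps**2 + (2 - 2*np.cos(eps*ky))/eps**2 + m2
    return np.sqrt(kx**2 + ky**2), lam

def gff_sample(N, eps, lam, rng):
    w = rng.standard_normal((N, N))/eps             # lattice white noise
    return np.real(np.fft.ifft2(np.sqrt(1.0/lam)*np.fft.fft2(w)))

def c_minus_kappa(f, kabs, kappa, jmax=14):
    fhat, out = np.fft.fft2(f), 0.0
    for j in range(-1, jmax + 1):
        mask = kabs <= 1.0 if j == -1 else (kabs > 2.0**j) & (kabs <= 2.0**(j+1))
        out = max(out, 2.0**(-kappa*j)*np.max(np.abs(np.fft.ifft2(fhat*mask))))
    return out

rng = np.random.default_rng(5)
for N in (16, 32, 64, 128):
    eps = 1.0/N
    kabs, lam = multipliers(N, eps, 1.0)
    Y = gff_sample(N, eps, lam, rng)
    c_eps = np.sum(1.0/lam)                         # Var(Y_x)
    print(N, c_minus_kappa(Y**2 - c_eps, kabs, kappa=0.3))
```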

In the sequel we consider the Wick powers of the sum of the Gaussian free field and a regular field. Note that for \(n \in \mathbb {N}\) and \(\varphi \in X_\epsilon \) we have

$$\begin{aligned} {:\,} (Y_\infty ^\epsilon +\varphi )^n {:\,} = \sum _{k=0}^{n} \binom{n}{k} {:\,} (Y_\infty ^\epsilon )^{n-k} {:\,} \varphi ^{k}. \end{aligned}$$
(2.13)

In practice, we apply this formula in the case where \(\varphi \) admits estimates in a positive regularity Sobolev norm (rather than a negative regularity distribution norm) that are uniform in \(\epsilon >0\), typically a bound on the second moment of such a norm.

Lemma 2.9

Let \(Y_\infty ^\epsilon \) be the discrete Gaussian free field on \(\Omega _\epsilon \). Then, for any \(n \in \mathbb {N}\), \(\kappa > 0\), and \({\bar{\kappa }} >0\) small enough,

$$\begin{aligned} \Vert {:\,} (Y_\infty ^\epsilon +\varphi )^{n} {:\,}\Vert _{{\mathcal {C}}^{-\kappa }} \lesssim 1 + \sum _{k=0}^{n-1}\Vert {:\,} (Y_\infty ^\epsilon )^{n-k} {:\,}\Vert _{{\mathcal {C}}^{-{\bar{\kappa }}}}^{n/(n-k)} + \Vert \varphi \Vert _{H^{1}}^n \end{aligned}$$
(2.14)

uniformly in \(\epsilon > 0\).

Proof

We estimate the terms on the right-hand side of (2.13) separately. For \(k\in \{1,\ldots , n-1\}\) and \(p >2/\kappa \), we have for any \(\delta >0\) by Besov embedding (2.10) and the multiplication inequality (2.8)

$$\begin{aligned} \Vert {:\,} Y_\infty ^{n-k} {:\,} \varphi ^{k} \Vert _{{\mathcal {C}}^{-\kappa }} \lesssim _{\kappa } \Vert {:\,} Y_\infty ^{n-k} {:\,} \varphi ^{k} \Vert _{{B_{p,\infty }^{-\kappa + 2/p}}} \lesssim _{\kappa , \delta } \Vert {:\,} Y_\infty ^{n-k} {:\,} \Vert _{{B_{\infty ,\infty }^{-\kappa + 2/p}}} \Vert \varphi ^{k} \Vert _{{B_{p,\infty }^{\kappa -2/p + \delta }}}. \end{aligned}$$
(2.15)

Now, by iterating the multiplication inequality (2.8) and Besov embedding (2.10) we have for \(2/\kappa < p \leqslant 2(1+ 1/k) / (\kappa + \delta )\)

$$\begin{aligned} \Vert \varphi ^k \Vert _{{B_{p,\infty }^{\kappa -2/p + \delta }}} \lesssim _{k,p} \Vert \varphi \Vert _{{B_{kp,\infty }^{\kappa -2/p + \delta }}}^k \lesssim _{\delta , \kappa , k} \Vert \varphi \Vert _{H^1}^k. \end{aligned}$$

Thus, we can further estimate (2.15) by

$$\begin{aligned} \Vert {:\,} Y_\infty ^{n-k} {:\,} \varphi ^{k} \Vert _{{\mathcal {C}}^{-\kappa }} \lesssim _{\kappa ,\delta ,k} \Vert {:\,} Y_\infty ^{n-k} {:\,} \Vert _{{\mathcal {C}}^{-\kappa + 2/p}} \Vert \varphi \Vert _{H^1}^k \lesssim \Vert {:\,} Y_\infty ^{n-k} {:\,} \Vert _{{\mathcal {C}}^{-{\bar{\kappa }}}}^{n/(n-k)} +\Vert \varphi \Vert _{H^1}^n, \end{aligned}$$

where we used Young’s inequality in the last step and set \(\bar{\kappa }= \kappa -2/p\). Summing over \(k\in \{1,\ldots ,n-1\}\) and observing that

$$\begin{aligned} \Vert \varphi ^n\Vert _{{\mathcal {C}}^{-\kappa }} \leqslant \Vert \varphi \Vert _{H^1}^n, \end{aligned}$$

the estimate (2.14) follows. \(\quad \square \)

3 Stochastic Representations of \({\mathcal {P}}(\phi )_2\)

In this section, we present two stochastic representations of the measures \(\nu ^{{\mathcal {P}}_\epsilon }\): the Polchinski renormalisation group approach and a stochastic control representation via the Boué–Dupuis variational formula. The former underlies the SDE (1.5) that we use to construct the process \(\Phi ^{{\mathcal {P}}_\epsilon }\). We show that there is an exact correspondence between these two approaches. In particular, the minimiser of the variational problem is explicitly related to the difference field \(\Phi ^{\Delta _\epsilon }\) of the Polchinski dynamics. For technical reasons, we introduce a potential cut-off to guarantee the well-posedness of the SDE and to ensure existence of minimisers for the variational problem.

3.1 Pauli–Villars decomposition of the covariance

Let \((c_t^\epsilon )_{t\in [0,\infty ]}\) be a continuously differentiable decomposition of the covariance of the GFF, i.e.

$$\begin{aligned} c_t^\epsilon = \int _0^t \dot{c}_s^\epsilon ds, \qquad c_\infty ^\epsilon = (-\Delta ^\epsilon + m^2)^{-1}, \end{aligned}$$

where, for every \(s > 0\), \(\dot{c}_s^\epsilon \) is a positive-semidefinite operator acting on \(X_\epsilon \). The choice of \(\dot{c}_t^\epsilon \) is restricted by the condition on \(c_\infty ^\epsilon \). In this work we use the Pauli-Villars regularisation

$$\begin{aligned} c_t^\epsilon = ( -\Delta ^\epsilon + m^2 + 1/t)^{-1}, \qquad \dot{c}_t^\epsilon = \frac{d}{dt}c_t^\epsilon . \end{aligned}$$
(3.1)

Remark 3.1

The motivation for choosing the Pauli-Villars regularisation is twofold: first, it satisfies the required regularity estimates in Sect. 4.1. Second, it is technically convenient with regards to the convergence of the maxima in Sect. 6. In particular, it allows us to avoid the introduction of an additional field in the approximation of the small scale field in Sect. 6.2. This, however, is not necessary and we expect that some other choices which are more natural from an analytic perspective (i.e. satisfying the estimates of Sect. 4.1), such as a heat kernel decomposition, should also work.

We view \(c_t^\epsilon \) and \(\dot{c}_t^\epsilon \) as operators acting on \(X_\epsilon \) via Fourier multipliers. For instance, we have for \(f\in X_\epsilon \)

$$\begin{aligned} (\dot{c}_t^\epsilon f )(x) = \sum _{k\in \Omega _\epsilon ^*} \widehat{\dot{c}_t^\epsilon } (k){\hat{f}}(k) e^{ik\cdot x}, \end{aligned}$$

where the Fourier multipliers of \(\dot{c}_t^\epsilon \) are given by

$$\begin{aligned} \widehat{ \dot{c}_t^\epsilon } (k) = \frac{1}{\big (t(-\hat{\Delta }^\epsilon (k)+m^2) + 1\big )^2}, \end{aligned}$$

where \(-{\hat{\Delta }}^\epsilon (k)\) is as in (2.3). We also record the Fourier multipliers for \(q_t^\epsilon \) which is the unique positive semi-definite operator on \(X_\epsilon \) such that \(\dot{c}_t^\epsilon = q_t^\epsilon * q_t^\epsilon \), where \(*\) is the discrete convolution on \(\Omega _\epsilon \). From this defining relation, we immediately deduce that

$$\begin{aligned} {\hat{q}}_t^\epsilon (k) = \frac{1}{t(-{\hat{\Delta }}^\epsilon (k)+m^2) + 1}. \end{aligned}$$
(3.2)
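The relation \((\hat q_t^\epsilon (k))^2 = \widehat{\dot{c}_t^\epsilon }(k)\) behind (3.2) is a one-line symbolic check, for a single Fourier mode with \(\lambda = -{\hat{\Delta }}^\epsilon (k)+m^2\):

```python
# One-mode symbolic sketch of q_t^2 = d/dt c_t behind (3.2).
import sympy as sp

t, lam = sp.symbols("t lambda", positive=True)
c_t = 1/(lam + 1/t)          # hat c_t^eps(k), lambda = -hat Delta^eps(k) + m^2
q_t = 1/(t*lam + 1)          # hat q_t^eps(k)
assert sp.simplify(sp.diff(c_t, t) - q_t**2) == 0
```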

Remark 3.2

For \(x,y \in \Omega _\epsilon \), set \(c_t^\epsilon (x,y) = (-\Delta ^\epsilon + m^2 + 1/t)^{-1}(x,y)\), where \((x,y)\mapsto (-\Delta ^\epsilon + m^2 + 1/t)^{-1}(x,y)\) is the Green’s function of \((-\Delta ^\epsilon + m^2 +1/t)\). Note that \(c_t^\epsilon (x,y) = C^\epsilon _{t/\epsilon ^2}(x/\epsilon , y/\epsilon )\), where

$$\begin{aligned} C_t^\epsilon = (-\Delta + \epsilon ^2 m^2 + 1/t )^{-1} \end{aligned}$$

is the Green’s function for the massive unit lattice Laplacian. It is easy to see that \(c_t^\epsilon (x,y) = c_t^\epsilon (0,x-y)\), i.e. \(c_t^\epsilon \) is stationary. We simply write \(c_t^\epsilon (x-y)= c_t^\epsilon (x,y)\).

We use the Pauli-Villars decomposition of \(c_\infty ^\epsilon \) to construct a process associated to the discrete Gaussian free field on \(\Omega _\epsilon \) as follows. Let W be a cylindrical Brownian motion in \(L^2(\Omega )\) defined on a probability space \((\mathcal {O}, {\mathcal {F}}, \mathbb {P})\) and denote by \(({\mathcal {F}}_t)_{t \geqslant 0}\) and \(({\mathcal {F}}^t)_{t \geqslant 0}\) the forward and backward filtrations generated by the past \(\{W_s-W_0:s \leqslant t\}\) and the future \(\{W_s-W_t:s\geqslant t\}\). We assume that \({\mathcal {F}}\) is \(\mathbb {P}\)-complete and the filtrations are augmented by \(\mathbb {P}\)-null sets. Moreover, we write expectation with respect to \(\mathbb {P}\) as \(\mathbb {E}\). The cylindrical Brownian motion W can be almost surely represented as a random Fourier series via the so-called Karhunen-Loève expansion. More precisely, for \(k \in 2\pi \mathbb {Z}^2\) and \(t \geqslant 0\), let \({\hat{W}}_t (k) = \int _\Omega W_t(x) e^{-i k \cdot x}dx\). Then, almost surely, \(\{ {\hat{W}}(k) :k \in 2\pi \mathbb {Z}^2 \}\) is a set of complex standard Brownian motions, independent up to the constraint \({\hat{W}}(k)=\overline{{\hat{W}}(-k)}\), with \({\hat{W}}(0)\) a real standard Brownian motion, and we can write

$$\begin{aligned} W_t(x) = \sum _{k \in \Omega ^*} e^{i k \cdot x} {\hat{W}}_t(k), \qquad x \in \Omega , \end{aligned}$$

where the sum converges uniformly on compact sets in \( C([0,\infty ),H^{-1-\kappa })\) for any \(\kappa >0\), i.e. for any \(T\geqslant 0\), we have as \(r\rightarrow \infty \)

$$\begin{aligned} \mathbb {E}\Big [ \sup _{t\leqslant T} \Big \Vert \sum _{|k|>r} e^{ik\cdot } {\hat{W}}_t(k) \Big \Vert _{H^{-1-\kappa }}^2 \Big ] \rightarrow 0. \end{aligned}$$

To obtain a Brownian motion in \(\Omega _\epsilon \) we restrict the formal Fourier series of W to \(k\in \Omega _\epsilon ^*\), i.e. for \(x\in \Omega _\epsilon \), we set

$$\begin{aligned} W^\epsilon _t(x) \equiv \Pi _\epsilon W_t(x)= \sum _{k \in \Omega _\epsilon ^*} e^{ik\cdot x} {\hat{W}}_t(k), \qquad x\in \Omega _\epsilon . \end{aligned}$$

Then \((W^\epsilon (x))_{x\in \Omega _\epsilon }\) are independent Brownian motions indexed by \(\Omega _\epsilon \) with quadratic variation \(t/\epsilon ^{2}\), see for instance [7, Section 3.1] for more details. As in (1.7) we define the decomposed Gaussian free field \(\Phi _t^{\text {GFF}_\epsilon }\) by

$$\begin{aligned} \Phi _t^{\text {GFF}_\epsilon } = \int _t^\infty q_s^\epsilon dW_s^\epsilon = \sum _{k\in \Omega _\epsilon ^*} e^{ik\cdot (\cdot )}\int _t^\infty {\hat{q}}_u^\epsilon (k) d{\hat{W}}_u(k), \end{aligned}$$
(3.3)

where \(\dot{c}_t^\epsilon = q_t^\epsilon * q_t^\epsilon \) and \({\hat{q}}_t (k)\) are as in (3.2). Note that \(\Phi ^{\text {GFF}_\epsilon }\) has independent increments and that \(\Phi _0^{\text {GFF}_\epsilon }\sim \nu ^{\text {GFF}_\epsilon }\), i.e. at \(t=0\) we obtain the discrete Gaussian free field on \(\Omega _\epsilon \). Moreover, we emphasise that the process \(\Phi ^{\text {GFF}_\epsilon } = (\Phi ^{\text {GFF}_\epsilon }_t)_{t\geqslant 0}\) is adapted to the backward filtration \(({\mathcal {F}}^t)\).
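A numerical sketch of the decomposition (3.3), mode by mode (our own illustration: only second moments are compared, so the Hermitian constraint on \({\hat{W}}\) is ignored and the modes are taken as independent complex Brownian motions; the time grid and truncation are arbitrary):

```python
# Sketch of (3.3): Euler approximation of int_0^infty q_s dW_s mode by mode,
# so E|hat Phi_0(k)|^2 should match hat c_infty(k) = 1/lambda_k.
import numpy as np

rng = np.random.default_rng(6)
N, eps, m2 = 16, 1/16, 1.0
k = 2*np.pi*np.fft.fftfreq(N, d=eps)
kx, ky = np.meshgrid(k, k, indexing="ij")
lam = (2 - 2*np.cos(eps*kx))/eps**2 + (2 - 2*np.cos(eps*ky))/eps**2 + m2

s_grid = np.geomspace(1e-6, 200.0, 3000)   # truncation of (0, infty)

def sample_phi0_hat():
    acc = np.zeros((N, N), dtype=complex)
    for s0, s1 in zip(s_grid[:-1], s_grid[1:]):
        q = 1.0/(0.5*(s0 + s1)*lam + 1.0)  # midpoint value of hat q_s(k)
        dW = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
        acc += q*np.sqrt((s1 - s0)/2)*dW   # E|dW|^2 = s1 - s0
    return acc

est = np.mean([np.sum(np.abs(sample_phi0_hat())**2) for _ in range(20)])
print(est, np.sum(1.0/lam))   # E sum_k |hat Phi_0(k)|^2 = sum_k hat c_infty(k)
```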

3.2 Polchinski renormalisation group dynamics for \({\mathcal {P}}(\phi )_2\)

Let \(v_0^\epsilon (\phi ) = \epsilon ^2 \sum _{x\in \Omega _\epsilon } {:\,} {\mathcal {P}}(\phi (x)) {:\,}_\epsilon \) be the interaction for the measure \(\nu ^{{\mathcal {P}}_\epsilon }\) in (1.4). We also refer to this object as the Hamiltonian of the \({\mathcal {P}}(\phi )_2\) field. For the reasons described at the beginning of this section, we consider the following energy cut-off for \(v_0^\epsilon \). For \(E>0\) let \(\chi _E \in C^2(\mathbb {R},\mathbb {R})\) be concave and increasing such that

$$\begin{aligned} \chi _E(x) = {\left\{ \begin{array}{ll} x, \,\qquad &{} x \in (-\infty ,E/2] \\ E, \,\qquad &{} x > E \end{array}\right. } \end{aligned}$$

and moreover \(\chi _E \leqslant \chi _{E'}\) on \([0,\infty )\) for \(E \leqslant E'\) and \(\sup _{E > 0} \Vert \chi _E' \Vert _{\infty } < 2\). Then we define the cut-off Hamiltonian \(v_0^{\epsilon ,E} =\chi _E \circ v_0^{\epsilon }\) and the associated measure \(\nu ^{{\mathcal {P}}_\epsilon , E}\) on \(\Omega _\epsilon \) by

$$\begin{aligned} \nu ^{{\mathcal {P}}_\epsilon ,E}(d\phi ) \propto e^{-v_0^{\epsilon ,E}(\phi )} \nu ^{\text {GFF}_\epsilon }(d\phi ). \end{aligned}$$
(3.4)

Next, we define the renormalised potential \(v_t^{\epsilon ,E}\) at scale \(t\geqslant 0\) and for \(E\in (0,\infty ]\) by

$$\begin{aligned} e^{-v_t^{\epsilon ,E}(\phi )} = {\varvec{E}}_{c_t^\epsilon } [e^{-v_0^{\epsilon ,E}(\phi + \zeta )}], \end{aligned}$$
(3.5)

where we recall that \({\varvec{E}}_{c_t^\epsilon }\) denotes the expectation with respect to the centred Gaussian measure with covariance \(c_t^\epsilon \).

We think of \(v_t^{\epsilon ,E}\) as the effective potential that sees fluctuations on scales larger than the characteristic length scale \(L_t=\sqrt{t} \wedge 1/m\). Indeed, (3.5) can be interpreted as integrating out all small-scale parts of the discrete GFF associated to the covariance \(c_t^\epsilon \).

The scale evolution of the renormalised potential is encoded in the Polchinski equation, as stated in the following lemma. For a proof of this result, see for instance [16].

Lemma 3.3

Let \(\epsilon > 0\) and \(E >0 \). Then \(v^{\epsilon ,E}\) is a classical solution of the Polchinski equation

$$\begin{aligned} \partial _t v_t^{\epsilon ,E} = \frac{1}{2} \Delta _{\dot{c}_t^\epsilon } v_t^{\epsilon ,E} - \frac{1}{2} \big (\nabla v_t^{\epsilon ,E}\big )_{\dot{c}_t^\epsilon }^2 = \frac{1}{2} \epsilon ^4\sum _{x,y \in \Omega _\epsilon } \dot{c}_t^\epsilon (x,y) \left[ {\frac{\partial ^2v_t^{\epsilon ,E}}{\partial \varphi _x\partial \varphi _y} - \frac{\partial v_t^{\epsilon ,E}}{\partial \varphi _x} \frac{\partial v_t^{\epsilon ,E}}{\partial \varphi _y} }\right] \end{aligned}$$
(3.6)

with initial condition \(v_0^{\epsilon ,E}\).
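In the simplest toy setting, the defining convolution (3.5) and the Polchinski equation (3.6) can be checked numerically. The sketch below takes a single lattice site, so that \(c_t = (m^2 + 1/t)^{-1}\) is scalar, a toy quartic potential in place of the Wick-ordered interaction, Gauss–Hermite quadrature for the Gaussian expectation, and finite differences for the derivatives; all concrete choices are illustrative assumptions.

```python
import numpy as np

# Single-site toy check of d_t v = (1/2) cdot_t (v'' - (v')^2), where
# v_t(phi) = -log E_{c_t}[exp(-v0(phi + zeta))] as in (3.5).
m2 = 1.0
v0 = lambda phi: phi**4 + phi**2            # toy potential (assumption)
c = lambda t: 1.0 / (m2 + 1.0 / t)          # scalar Pauli-Villars covariance
nodes, weights = np.polynomial.hermite.hermgauss(80)

def v(t, phi):
    """Renormalised potential via Gauss-Hermite quadrature for E_{c_t}."""
    zeta = np.sqrt(2.0 * c(t)) * nodes      # zeta ~ N(0, c_t) after rescaling
    return -np.log(np.sum(weights * np.exp(-v0(phi + zeta))) / np.sqrt(np.pi))

t, phi, h = 0.7, 0.3, 1e-3
dt_v = (v(t + h, phi) - v(t - h, phi)) / (2 * h)
dv   = (v(t, phi + h) - v(t, phi - h)) / (2 * h)
d2v  = (v(t, phi + h) - 2 * v(t, phi) + v(t, phi - h)) / h**2
cdot = (c(t + h) - c(t - h)) / (2 * h)
print(dt_v, 0.5 * cdot * (d2v - dv**2))     # agree up to discretisation error
```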

We record a priori estimates on the gradient and Hessian of \(v_t^{\epsilon ,E}\), where we make heavy use of the potential cut-off. We emphasise that the bounds are uniform in \(\phi \in X_\epsilon \), but not in \(\epsilon >0\) and \(E>0\).

Lemma 3.4

Let \(\epsilon >0\) and \(E \in (0,\infty )\). There exists \(C=C(\epsilon ,E)>0\) such that

$$\begin{aligned} \sup _{t \geqslant 0} \Vert \nabla v_t^{\epsilon ,E} \Vert _{L^\infty (\Omega _\epsilon )}&\leqslant C \end{aligned}$$
(3.7)
$$\begin{aligned} \sup _{t \geqslant 0} \Vert {{\,\textrm{Hess}\,}}v_t^{\epsilon ,E} \Vert _{L^\infty (\Omega _\epsilon )}&\leqslant C, \end{aligned}$$
(3.8)

where there is an implicit summation over the coordinates of \(\nabla v_t^{\epsilon ,E}\) and \({{\,\textrm{Hess}\,}}v_t^{\epsilon ,E}\) in the norms above.

Proof

We first observe that we have by the chain rule

$$\begin{aligned} \nabla v_0^{\epsilon ,E}(\phi ) = \chi _E'( v_0^\epsilon (\phi )) \, \nabla v_0^{\epsilon }(\phi ). \end{aligned}$$

Since \(v_0^\epsilon \) is continuous in \(\phi \in X_\epsilon \) and since \(|v_0^\epsilon (\phi )| \rightarrow \infty \) as \(\Vert \phi \Vert _{L^\infty (\Omega _\epsilon )} \rightarrow \infty \), we have that \(\nabla v_0^{\epsilon ,E}\) has bounded support. Since the components of \(\nabla v_0^\epsilon \) are polynomials in \(\phi \), (3.7) follows for \(t=0\). For \(t>0\), we obtain by differentiating (3.5)

$$\begin{aligned} \nabla v_t^{\epsilon , E} (\phi ) = e^{v_t^{\epsilon , E}(\phi )} {\varvec{E}}_{c_t^\epsilon } \big [\nabla v_0^{\epsilon , E}(\phi +\zeta ) e^{-v_0^{\epsilon , E}(\phi +\zeta )} \big ]. \end{aligned}$$
(3.9)

Now the cut-off \(\chi _E\) allows us to bound \(|v_t^{\epsilon , E}(\phi )| \lesssim _\epsilon E\) and hence we can estimate

$$\begin{aligned} \Vert \nabla v_t^{\epsilon , E} (\phi ) \Vert _{L^\infty (\Omega _\epsilon )} \lesssim _{\epsilon , E} {\varvec{E}}_{c_t^\epsilon } \big [ \Vert \nabla v_0^{\epsilon , E} (\phi +\zeta ) e^{-v_0^{\epsilon , E}(\phi +\zeta )} \Vert _{L^\infty (\Omega _\epsilon )} \big ] \lesssim _{\epsilon , E} {\varvec{E}}_{c_t^\epsilon } \big [ \Vert \nabla v_0^{\epsilon ,E}(\phi +\zeta ) \Vert _{L^2} \big ], \end{aligned}$$

from which (3.7) follows. Similarly, we have

$$\begin{aligned} {{\,\textrm{Hess}\,}}v_0^{\epsilon ,E} = \chi _E'' (v_0^\epsilon ) \nabla v_0^\epsilon \cdot (\nabla v_0^\epsilon )^T + \chi _E'(v_0^\epsilon ) {{\,\textrm{Hess}\,}}v_0^ \epsilon , \end{aligned}$$

and thus, for \(t=0\), (3.8) follows by the same arguments. For \(t>0\) the statement is obtained by differentiating (3.9), arguing similarly, and using the bounds on \(\nabla v_t^{\epsilon , E}\). \(\square \)

We now introduce the Polchinski dynamics: a high-dimensional SDE driven by \(\Phi _t^{\text {GFF}_\epsilon }\) with drift given by the gradient of the renormalised potential. We then use the Polchinski equation to show that this provides a coupling between the \({\mathcal {P}}(\phi )_2\) field with cut-off E and the discrete Gaussian free field on \(\Omega _\epsilon \). One of the key properties of this dynamics that makes it useful for studying global probabilistic properties of the \({\mathcal {P}}(\phi )_2\) measure is an independence property between small and large spatial scales. Recall that we denote by \(C_0([0, \infty ), {\mathcal {S}})\) the space of continuous sample paths with values in a metric space \(({\mathcal {S}}, \Vert \cdot \Vert _{\mathcal {S}})\) that vanish at infinity.

Proposition 3.5

For \(\epsilon >0\) and \(E>0\) there is a unique \({\mathcal {F}}^t\)-adapted process \(\Phi ^{{\mathcal {P}}_\epsilon ,E} \in C_0([0,\infty ), X_\epsilon )\) such that

$$\begin{aligned} \Phi _{t}^{{\mathcal {P}}_\epsilon ,E} = - \int _t^\infty \dot{c}^\epsilon _u \nabla v_{u}^{\epsilon ,E}(\Phi _u^{{\mathcal {P}}_\epsilon ,E}) \, du + \Phi _t^{\text {GFF}_\epsilon }. \end{aligned}$$
(3.10)

In particular, for any \(t>0\), \(\Phi ^{\text {GFF}_\epsilon }_0-\Phi _t^{\text {GFF}_\epsilon }\) is independent of \(\Phi _t^{{\mathcal {P}}_\epsilon ,E}\). Moreover, \(\Phi _0^{{\mathcal {P}}_\epsilon ,E}\) is distributed as the measure \(\nu ^{{\mathcal {P}}_\epsilon ,E}\) defined in (3.4).

In removing the cut-off, the left-hand side of (3.10) loses meaning as an adapted solution to a backwards SDE. However, we show that the right-hand side can be made sense of in the limit and use this to define the left-hand side. Moreover, the independence property in Proposition 3.5 is preserved. As such, the finite variation integral in (3.10) is one of the main quantities of interest in the remainder of this paper, and we record this notion in the following definition.

Definition 3.6

Let \(\epsilon > 0\) and \(E \in (0,\infty ]\). The approximate difference field \(\Phi ^{\Delta _\epsilon ,E} \in C([0,\infty ), X_\epsilon )\) is the \(({\mathcal {F}}^t)_{t \geqslant 0}\)-adapted process defined by

$$\begin{aligned} \Phi ^{\Delta _\epsilon , E}_t = -\int _t^\infty \dot{c}_s^\epsilon \nabla v_s^{\epsilon ,E}(\Phi _s^{{\mathcal {P}}_\epsilon , E}) ds, \qquad t \geqslant 0, \end{aligned}$$

where \(\Phi ^{{\mathcal {P}}_\epsilon , E}\) is the solution to (3.10). In particular,

$$\begin{aligned} \Phi _t^{{\mathcal {P}}_\epsilon ,E} = \Phi _t^{\Delta _\epsilon ,E} + \Phi _t^{\text {GFF}_\epsilon }, \qquad t \geqslant 0. \end{aligned}$$
(3.11)

Proof of Proposition 3.5

We first prove that the SDE is well-defined and has a (pathwise) unique, strong solution on \([0,\infty )\). Then the independence property follows by independence of increments of the underlying cylindrical Brownian motion. Since \(\epsilon >0\) is fixed throughout the proof, we suppress it from the notation when clear.

We begin by showing that the coefficients of the SDE (3.10) are uniformly Lipschitz, thanks to the global Hessian bound (3.8); the proof is then completed by arguing as in [7, Theorem 3.1]. For \({\mathcal {D}},{\mathcal {E}}\in C([0,\infty ), X_\epsilon )\), let

$$\begin{aligned} F_t({\mathcal {D}},{\mathcal {E}}) = -\int _t^\infty \dot{c}_s\nabla v_s^E({\mathcal {E}}_s+{\mathcal {D}}_s) \, ds. \end{aligned}$$
(3.12)

By the mean value theorem and the bound (3.8), we have, for \(s\geqslant 0\) and \(\Phi , {\tilde{\Phi }} \in X_\epsilon \),

$$\begin{aligned} \Vert \dot{c}_s \nabla v_s^E (\Phi ) - \dot{c}_s \nabla v_s^E ({\tilde{\Phi }})\Vert _{L^2} \leqslant \Vert {{\,\textrm{Hess}\,}}v_s^E \Vert _{L^\infty } \Vert \dot{c}_s\Vert \Vert \Phi - {\tilde{\Phi }} \Vert _{L^2} \lesssim _{\epsilon ,E } \Vert \dot{c}_s \Vert \Vert \Phi - {\tilde{\Phi }} \Vert _{L^2}, \end{aligned}$$
(3.13)

where \(\Vert \dot{c}_s\Vert \) denotes the spectral norm of the operator \(\dot{c}_s :X_\epsilon \rightarrow X_\epsilon \). Thus, for \({\mathcal {E}}\in C([0,\infty ),X_\epsilon )\), we have that

$$\begin{aligned} \Vert F_t({\mathcal {D}},{\mathcal {E}})-F_t({\tilde{{\mathcal {D}}}},{\mathcal {E}}) \Vert _{L^2} \leqslant \int _t^\infty O_{\epsilon ,E} (\Vert \dot{c}_s \Vert ) \Vert {\mathcal {D}}_s-{\tilde{{\mathcal {D}}}}_s \Vert _{L^2} \, ds. \end{aligned}$$
(3.14)

Suppose first that there are two solutions \(\Phi ^{{\mathcal {P}}}\) and \({\tilde{\Phi }}^{{\mathcal {P}}}\) to (3.10) satisfying \({\mathcal {D}}:=\Phi ^{{\mathcal {P}}}-\Phi ^{\text {GFF}} \in C_0([0,\infty ),X_\epsilon )\) and \({\tilde{{\mathcal {D}}}}:={\tilde{\Phi }}^{{\mathcal {P}}}-\Phi ^{\text {GFF}} \in C_0([0,\infty ),X_\epsilon )\). Then, by (3.14),

$$\begin{aligned} \Vert {\mathcal {D}}_t-{\tilde{{\mathcal {D}}}}_t \Vert _{L^2}&= \Vert F_t({\mathcal {D}},{\Phi ^{\text {GFF}}})-F_t({\tilde{{\mathcal {D}}}},{\Phi ^{\text {GFF}}}) \Vert _{L^2} \\&\leqslant \int _t^\infty O_{\epsilon ,E}(\Vert \dot{c}_s\Vert ) \Vert {\mathcal {D}}_s-{\tilde{{\mathcal {D}}}}_s \Vert _{L^2} \, ds. \end{aligned}$$

Thus, \(f(t) = \Vert {\mathcal {D}}_t-{\tilde{{\mathcal {D}}}}_t \Vert _{L^2}\) is bounded with \(f(t)\rightarrow 0\) as \(t\rightarrow \infty \) and additionally satisfies

$$\begin{aligned} f(t) \leqslant a + \int _{t}^\infty O_{\epsilon , E} (\Vert \dot{c}_s\Vert ) f(s) \, ds, \qquad a=0. \end{aligned}$$

Since \(\Vert \dot{c}_s \Vert \lesssim _m \frac{1}{1+ s^2}\), we have \(\int _{0}^\infty O(\Vert \dot{c}_s \Vert ) \, ds < \infty \) and thus, a version of Gronwall’s inequality implies that for \(t\geqslant 0\)

$$\begin{aligned} f(t) \leqslant a \exp \left( {\int _t^\infty O_{\epsilon ,E}( \Vert \dot{c}_s\Vert ) \, ds}\right) = 0, \end{aligned}$$

and hence, \({\mathcal {D}}={\tilde{{\mathcal {D}}}}\) on \([0,\infty )\).

That a solution to (3.10) on \([0,\infty )\) exists follows from Picard iteration. For \({\mathcal {D}}\in C([0,\infty ), X_\epsilon )\) and \(t\geqslant 0\) let \(\Vert {\mathcal {D}}\Vert _t = \sup _{s\geqslant t} \Vert {\mathcal {D}}_s \Vert _{L^2}\). Fix \({\mathcal {E}}\in C_0([0,\infty ), X_\epsilon )\) and set \({\mathcal {D}}^0=0\) and \({\mathcal {D}}^{n+1}=F({\mathcal {D}}^n,{\mathcal {E}})\). Then,

$$\begin{aligned} \Vert {\mathcal {D}}^1\Vert _t = \Vert F(0,{\mathcal {E}})\Vert _t&\leqslant \Vert {\mathcal {E}}\Vert _t \int _t^\infty O_{\epsilon ,E}(\Vert \dot{c}_s\Vert )\,ds\\ \Vert {\mathcal {D}}^{n+1}-{\mathcal {D}}^n \Vert _t&\leqslant \int _t^\infty O_{\epsilon ,E}(\Vert \dot{c}_s\Vert ) \Vert {\mathcal {D}}^{n}-{\mathcal {D}}^{n-1}\Vert _{s}\,ds , \end{aligned}$$

and from the elementary identity

$$\begin{aligned} \int _{t}^\infty ds \, g(s) \left( {\int _s^\infty ds' \, g(s')}\right) ^{k-1} = \frac{1}{k} \left( {\int _{t}^\infty ds\, g(s)}\right) ^{k} \end{aligned}$$

applied with \(g(s) = O( \Vert \dot{c}_s \Vert )\), we conclude that

$$\begin{aligned} \Vert {\mathcal {D}}^{n+1}-{\mathcal {D}}^n\Vert _t \lesssim _{\epsilon ,E} \Vert {\mathcal {E}}\Vert _{t} \frac{1}{n!} \left( {\int _t^\infty O_{\epsilon ,E}(\Vert \dot{c}_s \Vert )\, ds}\right) ^{n}. \end{aligned}$$
(3.15)

Since the right-hand side of (3.15) is summable, we have that \({\mathcal {D}}^n \rightarrow {\mathcal {D}}^*\) for some \({\mathcal {D}}^* = {\mathcal {D}}^*({\mathcal {E}}) \in C_0([0,\infty ),X_\epsilon )\) in \(\Vert \cdot \Vert _{0}\), and the limit satisfies \(F_s({\mathcal {D}}^*({\mathcal {E}}),{\mathcal {E}}) = {\mathcal {D}}^*({\mathcal {E}})_s\) for \(s\geqslant {0}\). Now, we apply this result for \({\mathcal {E}}=\Phi ^\text {GFF}\) noting that, from the representation (3.3),

$$\begin{aligned} \mathbb {E}\big [ \Vert \Phi _t^\text {GFF}\Vert _{L^2}^2 \big ] = \sum _{k\in \Omega _\epsilon ^*} \int _t^\infty \frac{1}{\big (s(-\hat{\Delta }^\epsilon (k) + m^2) +1 \big )^2} ds \lesssim 1/t, \end{aligned}$$

which implies that \(\Phi ^\text {GFF}\in C_0([0,\infty ), X_\epsilon )\) a.s. In summary, \(\Phi ^{{\mathcal {P}}, E}_t = {\mathcal {D}}^*(\Phi ^\text {GFF})_t + \Phi ^{\text {GFF}}_t\) is the desired solution on \([0,\infty )\).
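The structure of this Picard scheme is easy to reproduce numerically. The toy sketch below iterates \({\mathcal {D}}^{n+1} = F({\mathcal {D}}^n, {\mathcal {E}})\) for a scalar caricature of (3.12): an integrable \(\dot{c}\), a bounded Lipschitz stand-in for \(\nabla v^{\epsilon ,E}\) and a fixed decaying path \({\mathcal {E}}\), all of which are assumptions made for illustration. The factorial contraction behind (3.15) is visible as a rapidly vanishing sup-norm increment.

```python
import numpy as np

T, n = 50.0, 5000
s = np.linspace(0.0, T, n)
ds = s[1] - s[0]
cdot = 1.0 / (1.0 + s) ** 2        # integrable, plays the role of ||cdot_s||
grad_v = np.tanh                   # bounded Lipschitz stand-in for grad v_s
E_path = np.exp(-s)                # fixed decaying path, stand-in for Phi^GFF

def F(D):
    """F_t(D, E) = -int_t^T cdot_s grad_v(E_s + D_s) ds, cf. (3.12)."""
    integrand = -cdot * grad_v(E_path + D)
    return np.cumsum((integrand * ds)[::-1])[::-1]   # tail (suffix) sums

D = np.zeros(n)
for _ in range(25):
    D_new = F(D)
    change = np.max(np.abs(D_new - D))
    D = D_new
print("sup-norm increment at the last Picard step:", change)  # ~ 0
```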

We now prove that \(\Phi _0^{{\mathcal {P}},E}\) is distributed as \(\nu ^{{\mathcal {P}},E}\) as defined in (3.4), for which we proceed similarly as in the proof of [7, Theorem 3.2]. Let \(\nu _t^{{\mathcal {P}},E}\) be the renormalised measure defined by

$$\begin{aligned} \mathbb {E}_{\nu _t^{{\mathcal {P}},E}} [F] = e^{v^E_\infty (0)} {\varvec{E}}_{c_\infty -c_t}[e^{-v^E_t(\zeta )}F(\zeta )], \end{aligned}$$
(3.16)

where \({\varvec{E}}_{c_t}\) denotes the expectation of the Gaussian measure with covariance \(c_t\). Then, let \({\tilde{\Phi }}^T\) be the unique strong solution to the (forward) SDE

$$\begin{aligned} d{\tilde{\Phi }}_t^T = -\dot{c}_{T-t} \nabla v^E_{T-t}({\tilde{\Phi }}_t^T) \, dt + q_{T-t} \, d{\tilde{W}}_t^T, \qquad 0 \leqslant t \leqslant T \end{aligned}$$

with initial condition \({\tilde{\Phi }}_0^T\sim \nu _T^{{\mathcal {P}},E}\) and \(\tilde{W}^{T}_t = W_T-W_{T-t}\). Existence and uniqueness can be seen by the exact same arguments as above for the case \(T=\infty \). Using the Polchinski semigroup \({\varvec{P}}_{s,t}\), \(s\leqslant t\), as defined in [6, (1.3)–(1.4)], to evolve the measure \(\nu _T^{{\mathcal {P}},E}\), it follows that \({\tilde{\Phi }}_T^T\) is distributed as the measure (3.4). Reversing the direction of t and setting \(\Phi _t^T = {\tilde{\Phi }}_{T-t}^T\) we obtain

$$\begin{aligned} \Phi _t^T&= \Phi _T^T - \int _0^{T-t} \dot{c}_{T-s} \nabla v_{T-s}^E({\tilde{\Phi }}_s^T) \, ds + \int _0^{T-t} q_{T-s} \, d{\tilde{W}}_s^T \\&= \Phi _T^T - \int _t^T \dot{c}_{s} \nabla v_{s}^E(\Phi _s^T) \, ds + \int _t^T q_{s} \, dW_s . \end{aligned}$$

Note that this yields a coupling of all solutions \(\Phi ^{{\mathcal {P}},E}\), \(\Phi ^T\), \(T>0\). Therefore, we have with \(\Phi ^{{\mathcal {P}},E}_\infty =0\) that

$$\begin{aligned} \Phi ^{{\mathcal {P}},E}_t - \Phi _t^T&= (\Phi ^{{\mathcal {P}},E}_\infty -\Phi _T^T) - \int _t^T \left[ {\dot{c}_{s} \nabla v^E_{s}(\Phi ^{{\mathcal {P}},E}_s)-\dot{c}_{s} \nabla v^E_{s}(\Phi _s^T)}\right] \, ds \\&\quad - \int _T^\infty \dot{c}_{s} \nabla v^E_{s}(\Phi ^{{\mathcal {P}},E}_s) \, ds + \int _T^\infty q_{s} \, dW_s. \end{aligned}$$

We will show that as \(T\rightarrow \infty \), we have \(\Vert \Phi ^{{\mathcal {P}},E}_0 - \Phi _0^T\Vert _{L^2}\rightarrow 0\) in probability, from which we deduce that \(\Phi ^{{\mathcal {P}},E}_0\sim \nu ^{{\mathcal {P}},E}\). In what follows, we denote by \(\Vert \dot{c}_t\Vert \) the operator norm of \(\dot{c}_t\) when seen as an operator \(L^2(\Omega _\epsilon ) \rightarrow L^2(\Omega _\epsilon )\).

The first, third, and fourth terms on the right-hand side above are independent of t and they converge to 0 in probability as \(T \rightarrow \infty \). Indeed, for the first term this follows from the weak convergence of the measure \(\nu _T^{{\mathcal {P}},E}\) to \(\delta _0\), e.g., in the sense of [6, (1.6)]. The third term is bounded by

$$\begin{aligned} \int _T^\infty \Vert \dot{c}_s \nabla v_s^E(\Phi ^{{\mathcal {P}},E}_s) \Vert _{L^2} \, ds \end{aligned}$$

which converges to 0 in \(L^1\) as \(T\rightarrow \infty \) by (3.7) and the fact that \(\Vert \dot{c}_t \Vert = O(1/t^2)\). The fourth term is a Gaussian field on \(\Omega _\epsilon \) with covariance matrix \(c_\infty - c_T \rightarrow 0\) as \(T\rightarrow \infty \). Since \(\Omega _\epsilon \) is finite, this Gaussian field therefore converges to 0 in probability.

In summary, we have shown that there is \({\mathcal {R}}_T\) such that \(\Vert {\mathcal {R}}_T \Vert _{L^2} \rightarrow 0\) in probability, and

$$\begin{aligned} \Phi ^{{\mathcal {P}},E}_t - \Phi _t^T = - \int _t^T \left[ {\dot{c}_{s} \nabla v^E_{s}(\Phi ^{{\mathcal {P}},E}_s)-\dot{c}_{s} \nabla v^E_{s}(\Phi _s^T)}\right] \, ds + {\mathcal {R}}_T. \end{aligned}$$

For any \(t\geqslant 0\), we have by (3.13) that

$$\begin{aligned} \int _t^T \Vert \dot{c}_s \nabla v_s^E (\Phi ^{{\mathcal {P}},E}_s) - \dot{c}_s \nabla v_s^E ( \Phi _s^T)\Vert _{L^2} ds \lesssim _{\epsilon ,E} \int _t^T \Vert \dot{c}_s \Vert \Vert \Phi ^{{\mathcal {P}},E}_s - \Phi _s^T \Vert _{L^2} ds \end{aligned}$$

so that we have with \(M_t= \Vert \dot{c}_t \Vert \)

$$\begin{aligned} \int _t^T \Vert \dot{c}_{s} \nabla v_{s}^E(\Phi _s)-\dot{c}_{s} \nabla v_{s}^E(\Phi _s^T) \Vert _{L^2} \, ds \lesssim _{\epsilon ,E} \int _t^T M_s \Vert \Phi ^{{\mathcal {P}},E}_s-\Phi _s^T \Vert _{L^2} \, ds. \end{aligned}$$

Thus, we have shown that \({\mathcal {D}}_t = \Phi ^{{\mathcal {P}},E}_{t}-\Phi _t^T\) satisfies

$$\begin{aligned} \Vert {\mathcal {D}}_t \Vert _{L^2} \lesssim _{\epsilon ,E} \Vert {\mathcal {R}}_T \Vert _{L^2} + \int _{t}^T M_s \Vert {\mathcal {D}}_s \Vert _{L^2} \, ds. \end{aligned}$$

Since \(\int _{0}^\infty M_s \, ds < \infty \), the same version of Gronwall’s inequality as above implies that

$$\begin{aligned} \Vert {\mathcal {D}}_t \Vert _{L^2} \lesssim _{\epsilon ,E} \Vert {\mathcal {R}}_T \Vert _{L^2} \exp \left( {\int _t^T M_s \, ds}\right) \leqslant \Vert {\mathcal {R}}_T \Vert _{L^2} \exp \left( {\int _{0}^\infty M_s \, ds}\right) \lesssim \Vert {\mathcal {R}}_T \Vert _{L^2}. \end{aligned}$$

Since the right-hand side is uniform in \(t\geqslant 0\), we conclude that \(\sup _{t\geqslant 0} \Vert {\mathcal {D}}_t\Vert _{L^2} \rightarrow 0\) in probability as \(T\rightarrow \infty \). \(\quad \square \)

3.3 A stochastic control representation via the Boué–Dupuis formula

We now turn to a stochastic control representation of the renormalised potential \(v_t^{\epsilon ,E}\) defined in (3.5) based on the Boué–Dupuis variational formula, which allows one to express the moment generating function of functionals of Brownian motion as an expectation of the shifted functional plus a drift term. For this, we interpret the drift part of our SDE as a minimiser of the control problem. This correspondence is known as the verification principle in stochastic control theory, see [39, Section 4]. To connect to the Polchinski renormalisation group approach of Sect. 3.2, and also to the notation in [5], we write

$$\begin{aligned} Y_t^\epsilon = \int _0^t q_s^\epsilon d W_s^\epsilon = \Phi _0^{\text {GFF}_\epsilon } - \Phi _t^{\text {GFF}_\epsilon } \end{aligned}$$
(3.17)

for the small-scale part of the Gaussian free field process \(\Phi ^{\text {GFF}_\epsilon }\). Note that \(Y_t^\epsilon \) is a Gaussian field with covariance \(c_t^\epsilon \), which follows from the Itô isometry. Thus, for \(\phi \in X_\epsilon \), the renormalised potential \(v_t^{\epsilon ,E}\) can likewise be expressed as

$$\begin{aligned} e^{-v_t^{\epsilon ,E}(\phi )} = \mathbb {E}[e^{-v_0^{\epsilon ,E}(Y_t^\epsilon + \phi )}]. \end{aligned}$$

The right-hand side is now expressed as a measurable functional of Brownian motions. One can then exploit continuous-time martingale techniques, in particular Girsanov’s theorem, to analyse this expectation. This underlies the stochastic control representation that we now present.

Let \(\mathbb {H}_a\) be the space of progressively measurable (with respect to the backward filtration \({\mathcal {F}}^t\)) processes that are a.s. in \(L^2(\mathbb {R}_0^+ \times \Omega _\epsilon )\), i.e. \(u \in \mathbb {H}_a\) if and only if \(u|_{[t,\infty )}\) is \({\mathcal {B}}([t,\infty )) \otimes {\mathcal {F}}^t\)-measurable for every \(t\geqslant 0\) and

$$\begin{aligned} \int _{0}^\infty \Vert u_\tau \Vert _{L^2}^2 d\tau < \infty \quad \text {a.s.} \end{aligned}$$

Here \({\mathcal {B}}([t,\infty ))\) denotes the Borel \(\sigma \)-algebra on \([t,\infty )\). The restriction of \(\mathbb {H}_a\) to a finite interval [0, t] is denoted \(\mathbb {H}_a[0,t]\) with the convention \(\mathbb {H}_a[0,\infty ]= \mathbb {H}_a\). We will refer to elements \(u\in \mathbb {H}_a\) as drifts. For \(u\in \mathbb {H}_a\) and \(0\leqslant s \leqslant t \leqslant \infty \) define the integrated drift

$$\begin{aligned} I_{s,t}^\epsilon (u)&= \int _s^t q_\tau ^\epsilon u_\tau d\tau \end{aligned}$$

with the convention \(I_{0,t}^\epsilon (u) \equiv I_{t}^\epsilon (u)\). The following proposition is the Boué–Dupuis formula for the renormalised potential. We state it in the conditional form in order to be able to draw the correct comparison to the Polchinski renormalisation group approach afterwards.

Proposition 3.7

(Boué–Dupuis formula). Let \(\epsilon >0\). Then the conditional Boué–Dupuis formula holds for the renormalised potential \(v_t^{\epsilon ,E}\), i.e. for \(t\in [0, \infty ]\)

$$\begin{aligned} -\log \mathbb {E}\big [e^{-v_0^{\epsilon ,E}(Y_t^\epsilon + \Phi _t^{{\mathcal {P}}_\epsilon ,E})} \bigm | {\mathcal {F}}^t\big ] = \inf _{u \in \mathbb {H}_a[0,t]} \mathbb {E}\Big [v_0^{\epsilon ,E} \big (Y_t^\epsilon + \Phi _t^{{\mathcal {P}}_\epsilon ,E} + I_t^\epsilon (u)\big ) + \frac{1}{2} \int _0^t \Vert u_s\Vert _{L^2}^2 ds \bigm | {\mathcal {F}}^t\Big ]. \end{aligned}$$
(3.18)

Proof

Since \(Y_t^\epsilon \) is independent of \(\Phi _t^{{\mathcal {P}}_\epsilon ,E}\) we have by standard properties of conditional expectation

$$\begin{aligned} -\log \mathbb {E}\big [ e^{-v_0^{\epsilon ,E}(Y_t^{\epsilon }+\Phi _t^{{\mathcal {P}}_\epsilon ,E})} \bigm | {\mathcal {F}}^t \big ] = -\log \mathbb {E}\big [e^{-v_0^{\epsilon ,E}(Y_t^{\epsilon }+\phi )}\big ]_{\phi =\Phi _t^{{\mathcal {P}}_\epsilon ,E}}. \end{aligned}$$

From [13, Theorem 8.3], we have that for deterministic \(\phi \in X_\epsilon \) the unconditional expectation on the right-hand side of this display is equal to

$$\begin{aligned} -\log \mathbb {E}\big [ e^{-v_0^{\epsilon ,E}(Y_t^\epsilon +\phi )} \big ]&= \inf _{u \in \mathbb {H}_{a}[0,t]} \mathbb {E}\Big [ v_0^{\epsilon ,E}(Y_t^\epsilon +\phi +I_t^\epsilon (u))+\frac{1}{2}\int _{0}^{t}\Vert u_s \Vert ^{2}_{L^{2}(\Omega _{\epsilon })} ds\Big ], \end{aligned}$$

from which the claim follows. \(\quad \square \)

3.4 Correspondence between the Polchinski and the Boué–Dupuis representations

The following proposition describes the equivalence between the Polchinski renormalisation group dynamics and the stochastic control representation via the Boué–Dupuis formula in the presence of a potential cut-off \(E > 0\).

Proposition 3.8

Let \(\epsilon > 0\), \(E\in (0,\infty )\) and let \(\Phi ^{{\mathcal {P}}_\epsilon ,E} \in C_0([0,\infty ), X_\epsilon )\) be the unique strong solution to (3.10). Let \(u^E:[0,t]\times \Omega _\epsilon \rightarrow \mathbb {R}\) denote the process defined by

$$\begin{aligned} u^E_s = -q_s^\epsilon \nabla v_s^{\epsilon ,E} (\Phi _s^{{\mathcal {P}}_\epsilon ,E}), \qquad s\in [0,t]. \end{aligned}$$

Then \(u^E\) is a minimiser of the conditional Boué–Dupuis variational formula (3.18). In particular, the relation between \(u^E\) and the difference field \(\Phi ^{\Delta _\epsilon ,E}\) is given by

$$\begin{aligned} \Phi _t^{\Delta _\epsilon ,E} = \int _t^\infty q_s^\epsilon u^E_s ds. \end{aligned}$$

Proof

Since \((\Phi _t^{{\mathcal {P}}_\epsilon ,E})_{t\in [0,\infty ]}\) is \({\mathcal {F}}^t\)-measurable and continuous, it follows that \(u^{E} \in \mathbb {H}_a[0,t]\) for any \(t\in [0,\infty ]\). To ease the notation, we drop \(\epsilon \) throughout this proof. Applying Itô’s formula to \(v^E_{T-\tau }(\Phi ^{{\mathcal {P}},E}_{T-\tau })\) for \( \tau \leqslant T<\infty \) and substituting \(t=T-\tau \) thereafter, we obtain

$$\begin{aligned} d v^E_t(\Phi ^{{\mathcal {P}},E}_t)&= - \nabla v^E_t(\Phi ^{{\mathcal {P}},E}_t) \dot{c}_t \nabla v^E_t(\Phi ^{{\mathcal {P}},E}_t) dt - \partial _t v^E_t(\Phi ^{{\mathcal {P}},E}_t) dt +\frac{1}{2} {{\,\textrm{Tr}\,}}\big ({{\,\textrm{Hess}\,}}v^E_t \dot{c}_t\big ) dt +\nabla v^E_t(\Phi ^{{\mathcal {P}},E}_t) \dot{c}_t^{1/2} dW_t \\&= -\big (\nabla v^E_t (\Phi ^{{\mathcal {P}},E}_t)\big )_{\dot{c}_t}^2 dt - \partial _t v^E_t(\Phi ^{{\mathcal {P}},E}_t) dt + \frac{1}{2} \Delta _{\dot{c}_t} v^E_t(\Phi ^{{\mathcal {P}},E}_t) dt + \nabla v^E_t(\Phi ^{{\mathcal {P}},E}_t) \dot{c}_t^{1/2} dW_t \\&= -\frac{1}{2} \big (\nabla v^E_t (\Phi ^{{\mathcal {P}},E}_t)\big )_{\dot{c}_t}^2 dt + \nabla v^E_t(\Phi ^{{\mathcal {P}},E}_t) \dot{c}_t^{1/2} dW_t, \end{aligned}$$

where we used the Polchinski equation (3.6) in the last step. Integrating from 0 to t and taking conditional expectation yields

$$\begin{aligned} \mathbb {E}\Big [ v^E_0(\Phi ^{{\mathcal {P}},E}_0) - v^E_t (\Phi ^{{\mathcal {P}},E}_t) \bigm | {\mathcal {F}}^t \Big ] = - \frac{1}{2}\mathbb {E}\Big [ \int _0^t \big (\nabla v^E_s (\Phi ^{{\mathcal {P}},E}_s)\big )_{\dot{c}_s}^2 ds \bigm | {\mathcal {F}}^t \Big ], \end{aligned}$$
(3.19)

where we used that for \(t\in [0,\infty ]\)

$$\begin{aligned} \mathbb {E}\Big [ \int _0^t \nabla v^E_s(\Phi ^{{\mathcal {P}},E}_s) \dot{c}_s^{1/2} d W_s \bigm | {\mathcal {F}}^t \Big ] = 0 \end{aligned}$$

which holds by (3.7) and the fact that \(\Vert \dot{c}_t \Vert = O(1/t^2)\). Since \(Y_t\) is independent of \({\mathcal {F}}^t\) and \(\Phi ^{{\mathcal {P}},E}_t\) is \({\mathcal {F}}^t\)-measurable, we have by standard properties of conditional expectation

$$\begin{aligned} e^{-v^E_t(\Phi ^{{\mathcal {P}},E}_t)} = \mathbb {E}\Big [e^{-v^E_0(Y_t + \Phi ^{{\mathcal {P}},E}_t)} \bigm | {\mathcal {F}}^t \Big ]. \end{aligned}$$

Therefore, we obtain from (3.19)

$$\begin{aligned} \mathbb {E}&\Big [ \frac{1}{2}\int _0^t \big (\nabla v^E_s (\Phi ^{{\mathcal {P}},E}_s)\big )_{\dot{c}_s}^2 ds \bigm | {\mathcal {F}}^t \Big ] = v^E_t(\Phi ^{{\mathcal {P}},E}_t) - \mathbb {E}\Big [ v^E_0(\Phi ^{{\mathcal {P}},E}_0) \bigm | {\mathcal {F}}^t\Big ] \\&= - \log \mathbb {E}\Big [e^{-v^E_0(Y_t^\epsilon + \Phi ^{{\mathcal {P}},E}_t)} \bigm | {\mathcal {F}}^t \Big ] - \mathbb {E}\Big [ v^E_0\Big (\int _0^\infty q_s dW_s - \int _0^\infty \dot{c}_s \nabla v^E_s(\Phi ^{{\mathcal {P}},E}_s)ds \Big ) \bigm |{\mathcal {F}}^t \Big ]. \end{aligned}$$

Rearranging this and using (3.17) show that

$$\begin{aligned}&- \log \mathbb {E}\Big [e^{-v^E_0(Y_t^\epsilon + \Phi ^{{\mathcal {P}},E}_t)} \bigm | {\mathcal {F}}^t \Big ] \\&\quad = \mathbb {E}\Big [ v^E_0\Big (\int _0^\infty q_s dW_s - \int _0^\infty \dot{c}_s \nabla v^E_s(\Phi ^{{\mathcal {P}},E}_s)ds \Big ) + \frac{1}{2}\int _0^t \big (\nabla v^E_s (\Phi ^{{\mathcal {P}},E}_s)\big )_{\dot{c}_s}^2 ds \bigm | {\mathcal {F}}^t \Big ] \\&\quad = \mathbb {E}\Big [ v^E_0\Big ( Y_t^\epsilon + \Phi ^{{\mathcal {P}},E}_t - \int _0^t \dot{c}_s \nabla v^E_s(\Phi ^{{\mathcal {P}},E}_s)ds \Big )+ \frac{1}{2}\int _0^t \big (\nabla v^E_s (\Phi ^{{\mathcal {P}},E}_s)\big )_{\dot{c}_s}^2 ds \bigm | {\mathcal {F}}^t \Big ], \end{aligned}$$

which proves that \(u^E_s = -q_s \nabla v^E_s(\Phi ^{{\mathcal {P}},E}_s)\), \(s\in [0,t]\) is a minimiser for (3.18). \(\quad \square \)

4 Fractional Moment Estimate on the Renormalised Potential

In this section we prove a fractional moment estimate on the small-scale behaviour of the renormalised potential. We exploit the connection between the Polchinski dynamics and a minimiser of the Boué–Dupuis variational problem (3.18) as in Proposition 3.8 to transfer estimates on minimisers onto the renormalised potential. First, we prove some a priori bounds on Sobolev norms of the integrated drift.

4.1 Sobolev norms of integrated drifts

Recall that, for \(0\leqslant s\leqslant t \leqslant \infty \) and \(u\in \mathbb {H}_a\), the integrated drift is defined by

$$\begin{aligned} I_{s,t}^\epsilon (u) = \int _s^t q_\tau ^\epsilon u_\tau d\tau , \qquad q_\tau ^\epsilon =\big (\tau (-\Delta ^\epsilon + m^2) + 1\big )^{-1} \end{aligned}$$

and that u is a.s. \(L^2\) integrable on \([0,\infty ]\), i.e. \(\int _0^\infty \Vert u_\tau \Vert _{L^2}^2 d\tau <\infty \) a.s. In Fourier space this condition reads as

$$\begin{aligned} \int _0^\infty \Vert u_\tau \Vert _{L^2}^2 d\tau = \sum _{k\in \Omega _\epsilon ^*}\int _0^\infty |{\hat{u}}_\tau (k)|^2 d\tau <\infty \quad \text {a.s.,} \end{aligned}$$

where \({\hat{u}}_\tau (k)\), \(k\in \Omega _\epsilon ^*\) denote the Fourier coefficients of \(u_\tau \).

In what follows we will discuss the regularity of \(I_{s,t}^\epsilon (u)\) for different choices of s and t, for which we use the Sobolev norm defined in (2.5). Note that we have

$$\begin{aligned} \Vert I_{s,t}^\epsilon (u) \Vert _{H^\alpha }^2= \sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^\alpha | \widehat{ I_{s,t}^\epsilon (u) }(k) |^2 =\sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^\alpha \Big | \int _s^t {\hat{q}}_\tau ^\epsilon (k) {\hat{u}}_\tau (k) d\tau \Big |^2. \end{aligned}$$
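In computations this norm is a single weighted sum over the dual lattice. A minimal sketch, under an assumed convention for the discrete Fourier coefficients (\({\hat{f}}(k) = \epsilon ^2\sum _x f(x) e^{-ik\cdot x}\), a discretised integral over the unit torus):

```python
import numpy as np

def sobolev_norm_sq(f, eps, alpha):
    """||f||_{H^alpha}^2 = sum_{k in Omega_eps^*} (1 + |k|^2)^alpha |hat f(k)|^2
    for a real field f given as an (n, n) array on the discretised torus."""
    n = f.shape[0]
    fhat = np.fft.fft2(f) * eps**2                # assumed coefficient convention
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=eps)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return float(np.sum((1.0 + kx**2 + ky**2) ** alpha * np.abs(fhat) ** 2))
```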

In addition, recall that the Fourier multipliers of \(q_\tau ^\epsilon \) are given by

$$\begin{aligned} {\hat{q}}_\tau ^\epsilon (k) = \frac{1}{\tau (- {\hat{\Delta }}^\epsilon (k) + m^2) + 1}, \qquad k\in \Omega _\epsilon ^*, \end{aligned}$$

where \(-{\hat{\Delta }}^\epsilon (k)\) are as in (2.3). Using (2.4), we have that

$$\begin{aligned} -{\hat{\Delta }}^\epsilon (k) \geqslant c |k|^2 \end{aligned}$$

for \(k\in \Omega _\epsilon ^*\) with \(c=\frac{4}{\pi ^2}\). Hence, we have for \(k\in \Omega _\epsilon ^*\)

$$\begin{aligned} {\hat{q}}_\tau ^\epsilon (k) \leqslant \frac{1}{\tau (c|k|^2 + m^2) + 1}. \end{aligned}$$
(4.1)
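The symbol bound (4.1) rests on the elementary estimate above, which can be spot-checked numerically; the explicit symbol formula used below is the standard one for the nearest-neighbour Laplacian and is assumed to agree with (2.3).

```python
import numpy as np

eps = 1.0 / 64
m = np.arange(-32, 32)                   # k = 2*pi*m, so |eps * k_i| <= pi
k1, k2 = np.meshgrid(2.0 * np.pi * m, 2.0 * np.pi * m, indexing="ij")
# symbol of -Delta^eps: (2/eps^2)(2 - cos(eps k1) - cos(eps k2))
sym = (2.0 / eps**2) * (2.0 - np.cos(eps * k1) - np.cos(eps * k2))
mask = (k1 != 0) | (k2 != 0)             # drop the zero mode
ratio = sym[mask] / (k1**2 + k2**2)[mask]
print((np.pi**2 / 4.0) * ratio.min())    # >= 1, equality at |eps k_i| = pi
```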

The following results establish bounds for Sobolev norms of the large and small scales of \(I^\epsilon (u)\) for a given drift \(u\in \mathbb {H}_a\). For the rest of this subsection we drop \(\epsilon >0\) from the notation.

Lemma 4.1

Let \(\alpha \in [1,2]\). Then, for \(0<t<1\),

$$\begin{aligned} \Vert I_{t,\infty } (u) \Vert _{H^\alpha }^2 \lesssim _{\alpha } \frac{1}{t^{\alpha -1}} \int _t^\infty \Vert u_\tau \Vert _{L^2}^2 d\tau . \end{aligned}$$

Proof

By the Cauchy-Schwarz inequality and (4.1) we have

$$\begin{aligned} \Vert I_{t,\infty } (u) \Vert _{H^\alpha }^2&\leqslant \sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^\alpha \big ( \int _t^\infty |{\hat{q}}_\tau (k) {\hat{u}}_\tau (k)| d\tau \big )^2 \\&\leqslant \sum _{k\in \Omega _\epsilon ^*}(1+|k|^2)^\alpha \big ( \int _t^\infty |{\hat{q}}_\tau (k) |^2 d\tau \big ) \big ( \int _t^\infty |{\hat{u}}_\tau (k)|^2 d\tau \big ) \\ {}&\leqslant \sum _{k\in \Omega _\epsilon ^*}(1+|k|^2)^\alpha \frac{1}{(c|k|^2+ m^2)}\frac{1}{t(c|k|^2+ m^2) +1} \int _t^\infty |{\hat{u}}_\tau (k)|^2 d\tau \\ {}&\leqslant \sum _{k\in \Omega _\epsilon ^*} \frac{(1+|k|^2)}{(c|k|^2+ m^2)}\frac{(1+|k|^2)^{\alpha -1}}{t(c|k|^2+ m^2) +1} \int _t^\infty |{\hat{u}}_\tau (k)|^2 d\tau \\ {}&\lesssim _{\alpha } \sum _{k\in \Omega _\epsilon ^*} \frac{1}{t^{ \alpha -1}} \int _t^\infty |{\hat{u}}_\tau (k)|^2 d \tau = \frac{1}{t^{\alpha -1}} \int _t^\infty \Vert u_\tau \Vert _{L^2}^2 d\tau , \end{aligned}$$

where we used that \(\sup _{x\geqslant 0}\frac{(1+x)^\beta }{1+tx} {\lesssim _\beta } \frac{1}{t^\beta }\) for \(t<1\) and \(\beta <1\). \(\quad \square \)

Lemma 4.2

Let \(\alpha \in [0,1]\). Then,

$$\begin{aligned} \sup _{t\geqslant 0} \Vert I_{t,\infty } (u) \Vert _{H^\alpha }^2 \lesssim \int _0^\infty \Vert u_\tau \Vert _{L^2}^2 d\tau . \end{aligned}$$

Proof

Using the same calculations as above, we see that

$$\begin{aligned} \Vert I_{t,\infty } (u) \Vert _{H^\alpha }^2&\leqslant \sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^\alpha \big ( \int _t^\infty |{\hat{q}}_\tau (k) {\hat{u}}_\tau (k)| d\tau \big )^2 \\ {}&\leqslant \sum _{k\in \Omega _\epsilon ^*}(1+|k|^2)^\alpha \big ( \int _t^\infty |{\hat{q}}_\tau (k) |^2 d\tau \big ) \big ( \int _t^\infty |{\hat{u}}_\tau (k)|^2 d\tau \big ) \\ {}&\leqslant \sum _{k\in \Omega _\epsilon ^*}(1+|k|^2)^\alpha \frac{1}{(c|k|^2+ m^2)}\frac{1}{t(c|k|^2+ m^2) +1} \int _t^\infty |{\hat{u}}_\tau (k)|^2 d\tau \\ {}&\lesssim \sum _{k\in \Omega _\epsilon ^*} \frac{(1+|k|^2)^\alpha }{(c|k|^2+ m^2)}\int _t^\infty |{\hat{u}}_\tau (k)|^2 d\tau \leqslant \int _t^\infty \Vert u_\tau \Vert _{L^2}^2 d\tau . \end{aligned}$$

Taking the supremum over \(t\geqslant 0\) establishes the estimate. \(\quad \square \)

Lemma 4.3

For \(0\leqslant \alpha \leqslant 1\) and \(0\leqslant s\leqslant t\), we have

$$\begin{aligned} \Vert I_{s,t} (u) \Vert _{H^\alpha }^2 \lesssim _{\alpha } (t-s)^{1-\alpha } \int _s^t \Vert u_\tau \Vert _{L^2}^2 d\tau . \end{aligned}$$

Proof

By the same arguments that were used in the proof of the previous statements, we obtain

$$\begin{aligned}&\Vert I_{s,t} (u) \Vert _{H^\alpha }^2 \leqslant \sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^{\alpha } \Big ( \int _s^t |{\hat{q}}_\tau (k) {\hat{u}}_\tau (k)| d\tau \Big )^2 \\&\leqslant \sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^{\alpha } \int _s^t \frac{1}{\big (\tau (c|k|^2 + m^2) + 1 \big )^2} d\tau \int _s^t | {\hat{u}}_\tau (k)|^2 d\tau \\&= \sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^{\alpha } \Big [ \frac{1}{(c|k|^2 +m^2)} \Big ( \frac{1}{s(c|k|^2 +m^2)+1} - \frac{1}{t(c|k|^2 +m^2)+1}\Big )\Big ] \int _s^t |{\hat{u}}_\tau (k)|^2 d\tau \\&= \sum _{k\in \Omega _\epsilon ^*} (1+|k|^2)^{\alpha } \Big [ \frac{t-s}{\big (s(c|k|^2 +m^2) + 1\big )\big ( t(c|k|^2 +m^2) + 1 \big ) } \Big ] \int _s^t |{\hat{u}}_\tau (k)|^2 d\tau \\&\leqslant ( t -s) \sum _{k \in \Omega _\epsilon ^*} \frac{(1 + |k|^2)^\alpha }{t(c|k|^2 +m^2)+1} \int _s^t | {\hat{u}}_\tau (k)|^2 d\tau \\&\lesssim _{\alpha } (t-s) \frac{1}{t^\alpha } \int _s^t \Vert u_\tau \Vert _{L^2}^2 d\tau \leqslant (t-s)^{1-\alpha } \int _s^t \Vert u_\tau \Vert _{L^2}^2 d\tau . \end{aligned}$$

\(\square \)

Lemma 4.4

For \(\alpha \in (1, 2]\) and \(0<s\leqslant t\), we have

$$\begin{aligned} \Vert I_{s,t} (u) \Vert _{H^\alpha }^2 \lesssim _{\alpha } \frac{t-s}{s^\alpha } \int _s^t \Vert u_\tau \Vert _{L^2}^2 d\tau . \end{aligned}$$

Proof

The following calculations are similar to before. We include them here for completeness.

$$\begin{aligned} \Vert I_{s,t}(u) \Vert _{H^\alpha }^2&\leqslant \sum _{k\in \Omega _\epsilon ^*} (1+ |k|^2)^\alpha \int _s^t \frac{1}{(\tau (c|k|^2 +m^2) + 1)^2} d\tau \int _s^t |{\hat{u}}_\tau (k)|^2 d \tau \\&\lesssim _{\alpha } \sum _{k\in \Omega _\epsilon ^*} \int _s^t \frac{1}{\tau ^\alpha } d\tau \int _s^t |{\hat{u}}_\tau (k)|^2 d \tau = \int _s^t \frac{1}{\tau ^\alpha } d\tau \sum _{k\in \Omega _\epsilon ^*} \int _s^t |{\hat{u}}_\tau (k)|^2 d \tau \\&\leqslant \frac{t-s}{s^\alpha } \int _s^t \Vert u_\tau \Vert _{L^2}^2 d\tau . \end{aligned}$$

\(\square \)

4.2 Uniform \(L^2\) estimate for minimisers

As an application of the estimates established in the previous section we now show that if a drift minimises the functional in (3.18) for \(t=\infty \), then the expectation of its \(L^2\) norm is finite as stated in the following proposition.

Proposition 4.5

Assume that \({\bar{u}}\) minimises the functional

$$\begin{aligned} F^{\infty ,E} (u) = \mathbb {E}\big [ v_0^{\epsilon ,E}(Y_\infty ^\epsilon + I_\infty ^\epsilon (u) ) + \frac{1}{2} \int _0^\infty \Vert u_t\Vert _{L^2}^2 dt \big ] \end{aligned}$$

in \(\mathbb {H}_a\). Then

$$\begin{aligned} \sup _{\epsilon>0}\sup _{E>0} \mathbb {E}\Big [\frac{1}{2} \int _0^\infty \Vert {\bar{u}}_t\Vert _{L^2}^2 dt\Big ] < \infty . \end{aligned}$$
(4.2)

Before we prove Proposition 4.5, we establish the following auxiliary result to fix the numerology used in the interpolation inequalities below, which will allow us to treat all polynomials \({\mathcal {P}}\) as in (1.3) simultaneously.

Lemma 4.6

Let \(l,N \in \mathbb {N}\) such that \(1\leqslant l \leqslant N-1\). There exist \(\theta _{l,N}\in (0,1)\) and \(\alpha _{l,N},\beta _{l,N},\gamma _{l,N} \in (0,\infty )\) satisfying

$$\begin{aligned} \alpha _{l,N} = \frac{2}{\theta _{l,N} l}, \qquad \beta _{l,N}=\frac{N}{(1-\theta _{l,N})l}, \qquad \frac{1}{\gamma _{l,N}}=1-\frac{1}{\alpha _{l,N}}-\frac{1}{\beta _{l,N}} \end{aligned}$$
(4.3)

such that the following inequalities hold:

$$\begin{aligned} \frac{1}{\alpha _{l,N}}+\frac{1}{\beta _{l,N}}<1 \qquad \text {and} \qquad \gamma _{l,N} \geqslant \alpha _{l,N}. \end{aligned}$$
(4.4)

Proof

The first inequality can be rewritten as

$$\begin{aligned} \frac{\theta _{l,N} l}{2}+\frac{(1-\theta _{l,N})l}{N}< 1 \quad \iff \quad \theta _{l,N} < \frac{2(N-l)}{l(N-2)}. \end{aligned}$$

To reformulate the second condition, we compute

$$\begin{aligned} \frac{1}{\gamma _{l,N}} = 1 - \frac{1}{\alpha _{l,N}} - \frac{1}{\beta _{l,N}} = 1 - \frac{\theta _{l,N}l}{2} - \frac{(1-\theta _{l,N} )l}{N} = \frac{2N - \theta _{l,N} Nl - 2(1-\theta _{l,N})l}{2N}. \end{aligned}$$

Thus, to satisfy the second inequality in (4.4), we need to choose \(\theta _{l,N}\) such that

$$\begin{aligned} \frac{N}{2N - \theta _{l,N} Nl - 2(1-\theta _{l,N})l} \geqslant \frac{1}{\theta _{l,N}l}, \end{aligned}$$

which is equivalent to

$$\begin{aligned} 2 N \theta _{l,N}l \geqslant 2N -2l + 2l \theta _{l,N} \quad \iff \quad \theta _{l,N} \geqslant \frac{N-l}{(N+1)l}. \end{aligned}$$

Hence, both conditions in (4.4) are satisfied, if we choose \(\theta _{l,N} \in (0,1)\) satisfying

$$\begin{aligned} \frac{2(N-l)}{ l(N-2)} > \theta _{l,N} \geqslant \frac{N-l}{ l(N+1)}, \end{aligned}$$

which is possible for all specified values of lN. \(\quad \square \)

Lemma 4.7

For every \(l,N \in \mathbb {N}\) such that \(1 \leqslant l \leqslant N-1\) and for every \(\kappa >0\), there exists \({\tilde{\kappa }}_{l,N} >0\) such that the following estimate holds. For any \(\delta > 0\) sufficiently small there exists \(C>0\) such that, for \(f\in {\mathcal {C}}^{- {\tilde{\kappa }}_{l,N}} \) and \(u\in L^{2}(\mathbb {R}_0^+ \times \Omega _{\epsilon })\), we have, for \(t \in (0,\infty ]\),

$$\begin{aligned} \Big |\int _{\Omega _\epsilon } f I_t^l(u)dx \Big | \leqslant C (t\wedge 1)^{1-\kappa }\Vert f\Vert ^{\gamma _{l,N}}_{{\mathcal {C}}^{-{\tilde{\kappa }}_{l,N}} } + \delta a_N \Vert I_{t}(u)\Vert ^{N}_{L^N}+ \delta \int _{0}^{t} \Vert u_s \Vert ^{2}_{L^{2}} ds. \end{aligned}$$
(4.5)

Proof

We only prove the statement for \(t \in (0,1]\); the case \(t=\infty \) follows similarly. Let \(\theta _{l,N}\) and \(\alpha _{l,N},\beta _{l,N}, \gamma _{l,N}\) be as in Lemma 4.6. Multiplying the first inequality in (4.4) by 1/l we have

$$\begin{aligned} \frac{1}{\tilde{l}} \equiv \frac{\theta _{l,N}}{2}+\frac{1-\theta _{l,N}}{N}<\frac{1}{l}. \end{aligned}$$

Now, the standard Besov embedding \({B_{p',q'}^{s'}} \hookrightarrow {B_{p,q}^{s}}\) for \(s' > s\), \(p' \geqslant p\) and \(q' \geqslant q\), together with interpolation (2.9) and Lemma 4.3, implies that

$$\begin{aligned} \Vert I_t(u)\Vert ^{l}_{{B_{l,l}^{{\tilde{\kappa }}}}}&\lesssim \Vert I_{t}(u)\Vert ^{l}_{{B_{{\tilde{l}} ,{\tilde{l}} }^{{\tilde{\kappa }}}}} \lesssim \Vert I_{t}(u)\Vert ^{l}_{{B_{{\tilde{l}},\infty }^{2{\tilde{\kappa }}}}} \lesssim \Vert I_t(u)\Vert _{{B_{N,\infty }^{0}}}^{(1-\theta _{l,N}) l }\Vert I_t(u)\Vert _{{B_{2,\infty }^{2\tilde{\kappa }/\theta _{l,N}}}}^{\theta _{l,N}l} \\&\lesssim \Vert I_{t}(u)\Vert ^{(1-\theta _{l,N})l}_{L^{N}} \Vert I_{t}(u)\Vert ^{\theta _{l,N} l}_{H^{2{\tilde{\kappa }}/\theta _{l,N}}} \\&\lesssim t^{(1-\kappa )\theta _{l,N}l/2} \Vert I_{t}(u) \Vert _{L^{N}}^{(1-\theta _{l,N})l}\Big ( \int _0^t \Vert u_s \Vert _{L^2}^2 ds\Big )^{\theta _{l,N}l/2}, \end{aligned}$$

where, for a given \(\kappa \), we chose \({\tilde{\kappa }}_{l,N} \equiv {\tilde{\kappa }} = 2\theta _{l,N} \kappa \).

Then, by duality (2.7), the iterated multiplication inequality (2.8) and the definitions of \(\alpha _{l,N}, \beta _{l,N}\) in (4.3), there exists \(C>0\) such that

$$\begin{aligned} \Big | \int _{\Omega _\epsilon } f I_t^l(u) dx \Big |&\leqslant C \Vert f \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}} \Vert I_t^l(u) \Vert _{{B_{1,1}^{\tilde{\kappa }}}} \leqslant C \Vert f \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}} \Vert I_t(u) \Vert _{{B_{l,l}^{{\tilde{\kappa }}}}}^l \\&\leqslant {\bar{C}} t^{(1-\kappa )/\alpha _{l,N}}\Vert f\Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}} \Vert I_{t}(u)\Vert _{L^N}^{N/{\beta _{l,N}}} \Big ( \int _0^t \Vert u_s\Vert _{L^2}^2 ds\Big )^{1/\alpha _{l,N}}. \end{aligned}$$

Now, applying Young’s inequality with \(\alpha _{l,N},\beta _{l,N},\gamma _{l,N}\) and using the fact that \(\gamma _{l,N}/ \alpha _{l,N}>1\), the estimate in (4.5) follows. \(\quad \square \)

Proof of Proposition 4.5

Throughout this proof \(\epsilon > 0\) is fixed and therefore dropped from the notation. The goal is to prove that for any \(\delta > 0\) sufficiently small, there exists \({\bar{C}} > 0\) such that, for all \(u \in \mathbb {H}_a\),

$$\begin{aligned} F^{\infty ,E}(u) \geqslant - {\bar{C}} + \frac{1-2\delta }{2} \mathbb {E}\Big [ \int _0^\infty \Vert u_s \Vert _{L^2}^2 ds \Big ]. \end{aligned}$$
(4.6)

This allows us to compare the cost function evaluated at a minimiser \({\bar{u}}\) against the cost function evaluated at a competitor drift. For our purposes it suffices to choose this competitor as the zero drift. Indeed, since \({\bar{u}}\) is a minimiser and since \(\chi _E\) is concave and satisfies \(\chi _E(0)=0\), we have

$$\begin{aligned} F^{\infty ,E}({\bar{u}}) \leqslant F^{\infty ,E}(0) = \mathbb {E}[v_0^{\epsilon ,E}(Y_\infty )] \leqslant \chi _E \big ( \mathbb {E}[ v_0(Y_\infty ^\epsilon ) ] \big ) \leqslant 0. \end{aligned}$$

Hence, by (4.6),

$$\begin{aligned} \mathbb {E}\Big [ \int _0^\infty \Vert {\bar{u}}_s \Vert _{L^2}^2 ds \Big ] \leqslant \frac{2 {\bar{C}}}{1-2\delta }. \end{aligned}$$

Taking the supremum over \(E >0\) and \(\epsilon > 0\) establishes (4.2).

It remains to establish (4.6). Fix \(u \in \mathbb {H}_a\). For \({\mathcal {P}}\) as in (1.3), by repeated use of the triangle inequality, we have that

$$\begin{aligned} \begin{aligned}&\Big |\int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty + I_{\infty }(u)) {:\,} dx\Big |\\&\geqslant -\Big |\int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty + I_{\infty }(u)) {:\,} dx - \int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty ) {:\,} dx - \int _{\Omega _\epsilon } {\mathcal {P}}(I_{\infty }(u)) dx \Big | \\&\qquad - \Big | \int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty ) {:\,} dx \Big | - \Big | \int _{\Omega _\epsilon } [{\mathcal {P}}(I_{\infty }(u)) - a_N I_\infty ^N(u)] dx \Big | \\&\qquad + a_N \Vert I_\infty (u)\Vert _{L^N}^N. \end{aligned} \end{aligned}$$
(4.7)

First, observe that there exists \(c_1 > 0\), depending on the coefficients of \({\mathcal {P}}\), such that

$$\begin{aligned} \Big | \int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty ) {:\,} dx \Big | \leqslant c_1 \sum _{k=1}^N \Vert {:\,} Y_\infty ^k {:\,} \Vert _{{\mathcal {C}}^{-\kappa }}. \end{aligned}$$
(4.8)

Second, observe that, for any \(\delta > 0\), there exists \(c_2=c_2(\delta ) > 0\), also depending on the coefficients of \({\mathcal {P}}\), such that

$$\begin{aligned} \Big |\int _{\Omega _\epsilon } \big [ {\mathcal {P}}\big (I_{\infty }(u)\big ) - a_N I_{\infty }^N(u)\big ] dx \Big | \leqslant c_2 + \delta \int _{\Omega _\epsilon } a_N I_{\infty }^N(u) dx. \end{aligned}$$
(4.9)

Let us estimate the first term on the right-hand side of (4.7). By the triangle inequality and binomial theorem for Wick powers (2.13),

$$\begin{aligned} \begin{aligned}&\Big |\int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty + I_{\infty }(u)) {:\,} dx - \int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty ) {:\,} dx - \int _{\Omega _\epsilon } {\mathcal {P}}(I_{\infty }(u)) dx \Big | \\&\leqslant \sum _{k = 1}^{N} |a_{k}| \Big | \int _{\Omega _\epsilon }\big [ {:\,} \big (Y_\infty + I_{\infty }(u) \big )^k {:\,} - {:\,} Y_\infty ^k {:\,} - I_{\infty }^k(u) \big ] dx \Big | \\&\leqslant \sum _{k = 1}^{N} \sum _{l = 1}^{k-1} |a_{k}| {k \atopwithdelims ()l} \Big | \int _{\Omega _\epsilon } {:\,} Y_\infty ^{k-l} {:\,} I_{\infty }^{l} (u) dx \Big |. \end{aligned} \end{aligned}$$
(4.10)

An application of Lemma 4.7, with \({\tilde{\kappa }}= \min _{l\leqslant N} {\tilde{\kappa }}_{l,N}\) and \({\tilde{\kappa }}_{l,N}\) as in (4.5), now yields

$$\begin{aligned} \Big | \int _{\Omega _{\epsilon }}{:\,} Y^{k-l}_{\infty } {:\,} I_{\infty }^{l}(u) dx \Big | \leqslant C_{3} \Vert {:\,} Y^{k-l}_{\infty } {:\,}\Vert _{{\mathcal {C}}^{-\tilde{\kappa }}}^{\gamma _{l,N}}+\delta a_N \Vert I_{\infty }(u)\Vert ^{N}_{L^{N}} +\delta \int _{0}^{\infty }\Vert u_{s}\Vert ^{2}_{L^{2}}ds. \end{aligned}$$

Set \(\Gamma =\max _{l \leqslant N}\gamma _{l,N}\) and define \(Q = \sum _{k=1}^N \sum _{l=1}^{k} \big (\Vert {:\,} Y_\infty ^l {:\,} \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}} + 1 \big )^\Gamma + \sum _{k=1}^N \Vert {:\,} Y_\infty ^k {:\,} \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}}\). Inserting this estimate into (4.10) and combining it with (4.8) and (4.9), we obtain the following: there exist \(c> 0\), depending only on the mass m, and \(c_4,c_5 > 0\) such that

$$\begin{aligned} \begin{aligned} \int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty + I_{\infty }(u)) {:\,} dx&\geqslant -c_4 \Big ({ \sum _{k=1}^N }\sum _{l=1}^{k-1} \big (\Vert {:\,} Y_\infty ^l {:\,} \Vert ^{\gamma _{l,N}}_{{\mathcal {C}}^{-{\tilde{\kappa }}}} + 1\big ) + \sum _{k=1}^N \Vert {:\,} Y_\infty ^k {:\,} \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}} \Big ) \\&\qquad + a_N(1 - \delta ) \Vert I_{\infty }(u)\Vert ^{N}_{L^{N}}-\tilde{\delta }\Vert I_\infty (u)\Vert _{H^1}^2 \\&\geqslant -c_5 Q -\delta c \int _{0}^\infty \Vert u_s\Vert ^{2}_{L^{2}} ds, \end{aligned} \end{aligned}$$
(4.11)

where the last inequality follows by Lemma 4.2.

Thus, by the monotonicity of the cut-off, and the fact that \( \chi _E(x) = x\) for \(x \leqslant 0\), (4.11) yields

$$\begin{aligned} \begin{aligned} \chi _{E}\Big (&\int _{\Omega _{\epsilon }}{:\,} {\mathcal {P}}\big (Y_{\infty }+I_{\infty }(u)\big ) {:\,} dx \Big ) \geqslant - c_5 Q -\delta c \int _{0}^\infty \Vert u_s\Vert ^{2}_{L^{2}} ds. \end{aligned} \end{aligned}$$
(4.12)

Hence, by Lemma 2.9 and the a priori drift estimate of Lemma 4.2, there exists \({\bar{C}} > 0\) such that

$$\begin{aligned} \begin{aligned} F^{\infty ,E}(u)&\geqslant \mathbb {E}\Big [ -C Q + \frac{1-2\delta c}{2} \int _0^\infty \Vert u_s \Vert _{L^2}^2 ds \Big ] \geqslant - {\bar{C}} + \frac{1-2\delta c }{2} \mathbb {E}\Big [ \int _0^\infty \Vert u_s \Vert _{L^2}^2 ds \Big ], \end{aligned} \end{aligned}$$
(4.13)

which, after a redefinition of \(\delta \), establishes (4.6). \(\quad \square \)

Remark 4.8

It may seem surprising that the naive choice of comparing the minimiser \({\overline{u}}\) with the 0 drift yields useful information, as opposed to a more carefully chosen competitor. This is a manifestation of the mild ultraviolet divergences of these field theories in dimension 2. Indeed, a trivial lower bound on the partition functions, uniform in the cutoff, can be obtained via Jensen's inequality and the fact that Wick polynomials are martingales; note that Jensen's inequality coincides exactly with choosing the 0 drift. To extend these techniques to more singular field theories, such as the sine-Gordon model beyond a certain threshold, a more informed choice of competitor would need to be made.

Remark 4.9

The above argument also establishes uniform \(L^{p}\) bounds for the density of the \({\mathcal {P}}(\phi )_{2}\) measure with respect to the Gaussian. Indeed, observe that we have proved that \(F^{\infty ,E}\) is bounded from below uniformly in \(E,\epsilon \). Recall that by (3.18) we have

$$\begin{aligned} \inf _{u\in \mathbb {H}_{a}} F^{\infty ,E} (u)= -\log \mathbb {E}[\exp (-v_{0}^{\epsilon ,E}(Y^{\epsilon }_{\infty }))]. \end{aligned}$$

So, a uniform lower bound on \(F^{\infty ,E}\) establishes a uniform bound on the expectation of the density of \(\nu ^{{\mathcal {P}}_\epsilon ,E}\). The argument in the proof of Proposition 4.5 is also valid when \({\mathcal {P}}\) is replaced by \(p{\mathcal {P}}\), since Wick renormalisations are linear in the coefficients, which implies a uniform \(L^{p}\) bound.

4.3 A fractional moment estimate for small scales

We now turn to a more refined estimate on the conditional second moment of minimisers restricted to finite time-horizons. Here and henceforth, we denote

$$\begin{aligned} F^{t,E}(u,\varphi ) = \mathbb {E}\Big [ v_0^{\epsilon ,E}(Y_\infty ^\epsilon + I_t^\epsilon (u)+\varphi ) + \frac{1}{2} \int _0^t \Vert u_s\Vert _{L^2}^2 ds \bigm | {\mathcal {F}}^t \Big ], \qquad {\mathcal {F}}^t = \sigma ( W_s:s\geqslant t ) \end{aligned}$$

for \(t\in [0,\infty ]\) and \(E\in (0,\infty ]\) with the convention \(F^{t,E}(u,0) = F^{t,E}(u)\).

Proposition 4.10

Assume that \({\bar{u}}\) minimises the functional \(F^{t,E}(u,\varphi )\) in \(\mathbb {H}_a[0,t]\) and that \(\varphi \in X_\epsilon \) is \({\mathcal {F}}^t\)-measurable. Then, as \(t\rightarrow 0\) and for \(\kappa \) sufficiently small,

$$\begin{aligned} \sup _{\epsilon>0}\sup _{E>0}\mathbb {E}\Big [ \int _0^t \Vert {\bar{u}}_s \Vert _{L^2}^2 ds \bigm | {\mathcal {F}}^t \Big ] \lesssim t^{1 -\kappa } ( 1 + {\mathcal {W}}_t^\epsilon + \Vert \varphi \Vert _{H^1}^2 )^{L}, \end{aligned}$$

where \({\mathcal {W}}_t^\epsilon \geqslant 0\) a.s. for all \(t\geqslant 0\) and \(\sup _{\epsilon >0}\sup _{t \geqslant 0} \mathbb {E}[{\mathcal {W}}_t^\epsilon ] <\infty \), and \(L>0\) only depends on the maximal degree of \({\mathcal {P}}\) and the interpolation parameters chosen in Lemma 4.6.

Proof

From the definition of \(F^{t,E}(u,\varphi )\) we see that we need to bound the conditional expectation of \(v_0^{\epsilon ,E} (Y_\infty ^\epsilon + I_t^\epsilon (u) + \varphi ) \). Similarly to the proof of Proposition 4.5, we compare the cost function evaluated at the minimiser against the cost function evaluated at a competitor, which is again the zero drift, for reasons similar to those outlined in Remark 4.8. To simplify the notation, we will drop \(\epsilon \) from here on. Since \({\bar{u}}\) is a minimiser for \(F^{t,E}(u,\varphi )\), we have \( F^{t,E}({\bar{u}}, \varphi ) - F^{t,E}(0,\varphi ) \leqslant 0\), and therefore

$$\begin{aligned} \begin{aligned} 0&\geqslant F^{t,E}({\bar{u}}, \varphi )-F^{t,E}(0, \varphi ) \\&= \mathbb {E}\Big [\chi _{E}\big (v_0(Y_{\infty }+I_{t}({\bar{u}}) +\varphi )\big ) +\frac{1}{2}\int _0^t \Vert {\bar{u}}_{s}\Vert _{L^2}^{2}ds -\chi _{E}\big (v_0(Y_{\infty }+\varphi )\big ) \Bigm | {\mathcal {F}}^t\Big ] \\&= \mathbb {E}\Big [\chi _{E}\Big (v_0\big (Y_{\infty }+\varphi \big ) +{\mathcal {A}}(Y_\infty ,{\bar{u}},\varphi ) + a_N \int _{\Omega _\epsilon } I_{t}({\bar{u}})^N dx \Big ) + \frac{1}{2}\int _{0}^{t}\Vert {\bar{u}}_{s}\Vert ^{2}_{L^2}ds -\chi _{E}\Big (v_0\big (Y_{\infty }+\varphi \big )\Big ) \Bigm | {\mathcal {F}}^t\Big ] \end{aligned} \end{aligned}$$
(4.14)

where, for a generic drift \(u \in \mathbb {H}_a[0,t]\), we set

$$\begin{aligned} \begin{aligned} {\mathcal {A}}(Y_\infty ,u,\varphi )&= \int _{\Omega _\epsilon }{:\,} {\mathcal {P}}\big (Y_\infty + \varphi + I_{t}(u)\big ) {:\,} dx - \int _{\Omega _\epsilon } {:\,} {\mathcal {P}}(Y_\infty + \varphi ) {:\,} \, dx - a_N \int _{\Omega _\epsilon } I_t(u)^N dx \\&= \sum _{k=1}^N \sum _{l=1}^{k-1} a_k {k \atopwithdelims ()l} \int _{\Omega _\epsilon } {:\,} (Y_\infty + \varphi )^{k-l} {:\,} I_{t}^{l}(u) dx + \sum _{l=1}^{N-1} a_l \int _{\Omega _{\epsilon }} I_t(u)^l dx. \end{aligned} \end{aligned}$$
(4.15)

Note that, above, we tacitly use that the polynomial \({\mathcal {P}}\) has no constant term.

The heart of the proof is to show the following estimate on \({\mathcal {A}}\): there exists \(\kappa > 0\) sufficiently small and \(L>0\) such that, for any \(\delta > 0\), there exists \(C>0\) such that, for any \(u \in \mathbb {H}_a[0,t]\),

$$\begin{aligned} \begin{aligned} {\mathcal {A}}(Y_\infty ,u,\varphi )&+ \delta a_N \int _{\Omega _\epsilon } I_t(u)^{N} dx \\&\geqslant -C t^{1-\kappa } \big ( 1+ \sum _{j=1}^N\Vert {:\,} Y_{\infty }^j {:\,}\Vert _{{\mathcal {C}}^{-\kappa }}^{2L} + \Vert \varphi \Vert _{H^{1}}^{2L}\big ) -\delta \int _{0}^{t}\Vert u_{s}\Vert _{L^{2}}^{2}ds. \end{aligned} \end{aligned}$$
(4.16)

Then, inserting (4.16) into (4.14) and using that \(\chi _E(a-b) \geqslant \chi _E(a) -b\) for \(a \in \mathbb {R}\) and \(b\geqslant 0\), we obtain

$$\begin{aligned} 0&\geqslant \mathbb {E}\Big [\chi _{E}\Big (v_0\big (Y_{\infty }+\varphi \big ) +{\mathcal {A}}(Y_\infty ,{\bar{u}},\varphi ) + a_N \int _{\Omega _\epsilon } I_{t}({\bar{u}})^N dx \Big ) + \frac{1}{2}\int _{0}^{t}\Vert {\bar{u}}_{s}\Vert ^{2}_{L^2}ds -\chi _{E}\Big (v_0\big (Y_{\infty }+\varphi \big )\Big ) \Bigm | {\mathcal {F}}^t\Big ]\\&\geqslant \mathbb {E}\Big [-Ct^{1-\kappa }\Big (1 + \sum _{k=1}^N \Vert {:\,} Y_{\infty }^k {:\,}\Vert _{{\mathcal {C}}^{- \kappa }}^{2L} +\Vert \varphi \Vert _{H^{1}}^{2L} \Big ) + \big (\tfrac{1}{2}-\delta \big )\int _{0}^{t}\Vert {\bar{u}}_{s}\Vert ^{2}_{L^2} ds \Bigm | {\mathcal {F}}^t \Big ]\\&\geqslant -Ct^{1- \kappa } \Big ( 1 + {\mathcal {W}}_t + \Vert \varphi \Vert ^2_{H^1} \Big )^{L} + \mathbb {E}\Big [ \big (\tfrac{1}{2}-\delta \big ) \int _{0}^{t} \Vert {\bar{u}}_{s}\Vert ^{2}_{L^2} ds \Bigm | {\mathcal {F}}^t\Big ], \end{aligned}$$
(4.17)

where

$$\begin{aligned} {\mathcal {W}}_t = \mathbb {E}\Big [ \sum _{k=1}^N \Vert {:\,} Y_{\infty }^k {:\,} \Vert _{{\mathcal {C}}^{-\bar{\kappa }}}^{2L} \bigm | {\mathcal {F}}^t \Big ]^\frac{1}{L}. \end{aligned}$$

Finally, observe that by Jensen’s inequality, the tower property of conditional expectation, and Lemma 2.9, we have

$$\begin{aligned} \begin{aligned} \sup _{\epsilon> 0} \sup _{t \geqslant 0} \mathbb {E}[{\mathcal {W}}_t]&= \sup _{\epsilon> 0} \sup _{t \geqslant 0} \mathbb {E}\Big [ \Big (\mathbb {E}\Big [ \sum _{k=1}^N \Vert {:\,} Y_{\infty }^k {:\,} \Vert _{{\mathcal {C}}^{-{\bar{\kappa }}}}^{2L} \bigm | {\mathcal {F}}^t \big ] \Big )^\frac{1}{L} \Big ] \\&\leqslant \sup _{\epsilon > 0} \sup _{t \geqslant 0} \Big ( \sum _{k=1}^N \mathbb {E}\big [ \Vert {:\,} Y_{\infty }^k {:\,}\Vert _{{\mathcal {C}}^{-{\bar{\kappa }}}}^{2L} \big ] \Big )^\frac{1}{L} < \infty . \end{aligned} \end{aligned}$$

Thus, rearranging (4.17) completes the proof conditional on (4.16).

We now focus on (4.16). Applying Lemma 4.7 with \(f={:\,} (Y_{\infty }+\varphi )^{k-l} {:\,}\) and \(f=1\), we know that there exist \(\kappa , {\tilde{\kappa }} > 0\) sufficiently small and \(L>0\) such that, for any \(\delta > 0\), there exists \(C>0\) such that, for any \(u \in \mathbb {H}_a[0,t]\),

$$\begin{aligned} \Big | \int _{\Omega _\epsilon } {:\,} (Y_\infty&+ \varphi )^{k-l} {:\,} I_{t}^{l} (u)dx \Big |\nonumber \\&\leqslant C t^{1-\kappa } \Vert {:\,} (Y_\infty + \varphi )^{k-l} {:\,} \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}}^{\gamma _{l,N}} + \delta \Big (a_N \int _{\Omega _\epsilon } I_t(u)^{N} dx + \frac{1}{2} \int _0^t \Vert u_s \Vert _{L^2}^2 ds \Big ) \end{aligned}$$
(4.18)

and

$$\begin{aligned} \int _{\Omega _\epsilon } I_t(u)^l dx \leqslant C t^{1-\kappa }+ \delta a_N\Vert I_t(u)\Vert ^N_{L^N} + \delta \int _0^t \Vert u_s\Vert ^2_{L^2} ds. \end{aligned}$$
(4.19)

In addition, by Lemma 2.9, there exists \({\bar{\kappa }}>0\) sufficiently small, such that

$$\begin{aligned} \Vert {:\,} (Y_{\infty }+\varphi )^{k-l} {:\,} \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}} \lesssim 1 + \sum ^{k-l-1}_{j=0} \Vert {:\,} Y_{\infty }^{k-l-j} {:\,}\Vert _{{\mathcal {C}}^{-{\bar{\kappa }}}}^{ (k-l)/(k-l-j)}+ \Vert \varphi \Vert _{H^{1}}^{k-l}. \end{aligned}$$
(4.20)

Therefore, inserting (4.20) into (4.18), and summing this with (4.19), there exists \(L>0\) such that

$$\begin{aligned} \begin{aligned}&{\mathcal {A}}(Y_\infty ,u,\varphi ) + \delta a_N \Vert I_{t} (u)\Vert _{L^{N}}^{N} \\&\geqslant -C t^{1-\kappa } \Big ( 1+ \sum _{k=1}^N \sum _{l=1}^{k-1} \sum _{j=0}^{k-l-1} \big ( 1 + \Vert {:\,} Y_{\infty }^{k-l-j} {:\,}\Vert _{{\mathcal {C}}^{-{\bar{\kappa }}}}^{ (k-l)/(k-l-j)}+ \Vert \varphi \Vert _{H^{1}}^{k-l} \big )^{\gamma _{l,N}} \Big ) -\delta \int _{0}^{t}\Vert u_{s}\Vert _{L^{2}}^{2}ds \\&\geqslant -C t^{1-\kappa } \big ( 1+ \sum _{j=1}^N\Vert {:\,} Y_{\infty }^j {:\,}\Vert _{{\mathcal {C}}^{-{\bar{\kappa }}}}^{2L} + \Vert \varphi \Vert _{H^{1}}^{2L}\big ) -\delta \int _{0}^{t}\Vert u_{s}\Vert _{L^{2}}^{2}ds \end{aligned} \end{aligned}$$

thereby establishing (4.16) up to a redefinition of \(\kappa \). \(\quad \square \)

We now prove the main estimate of this section.

Proposition 4.11

Let \(u^E_t=-q_t^\epsilon \nabla v_t^{\epsilon ,E}(\Phi _t^{{\mathcal {P}}_\epsilon ,E})\) and let L be as in Proposition 4.10. Then for any \( \kappa >0\) sufficiently small, we have

$$\begin{aligned} \sup _{E>0} \sup _{\epsilon >0} \sup _{t\leqslant 1} \mathbb {E}\Big [ \big ( t^{ -1 + \kappa } \int _0^t \Vert u^E_s \Vert _{L^2}^2 ds\big )^{1/L} \Big ] < \infty . \end{aligned}$$

Proof

From Proposition 3.8 we know that \(u^E\) is a minimiser of the Boué–Dupuis variational formula (3.18). Since \(\Phi _t^{{\mathcal {P}}_\epsilon ,E} = \Phi _t^{\Delta _\epsilon ,E} + \Phi _t^{\text {GFF}_\epsilon }\) and \(Y_t^\epsilon + \Phi _t^{\text {GFF}_\epsilon } = Y_\infty ^\epsilon \), we also have that \(u^E\) minimises

$$\begin{aligned} F^{t,E}(u, I_{t,\infty }^\epsilon (u^E)) = \mathbb {E}\Big [ v_0^{\epsilon ,E}\big (Y_\infty ^\epsilon + I_t^\epsilon (u) + I_{t,\infty }^\epsilon (u^E) \big ) + \frac{1}{2}\int _0^t \Vert u_s\Vert _{L^2}^2 ds \bigm | {\mathcal {F}}^t \Big ] \end{aligned}$$

in \(\mathbb {H}_a[0,t]\), where \(I_{t,\infty }^\epsilon (u^E) = \Phi _t^{\Delta _\epsilon ,E}\) is \({\mathcal {F}}^t\)-measurable. Hence, by Proposition 4.10 we have for any \(\kappa \) sufficiently small

$$\begin{aligned} \mathbb {E}\Big [ \int _0^t \Vert u^E_s \Vert _{L^2}^2 ds \bigm | {\mathcal {F}}^t \Big ] \lesssim t^{1-\kappa } \big ( 1+ {\mathcal {W}}_t^\epsilon + \Vert I_{t,\infty }^\epsilon (u^E)\Vert _{H^1}^2 \big )^L. \end{aligned}$$

Thus, for every such \(\kappa \)

$$\begin{aligned}&\mathbb {E}\big [ \big ( t^{-1+\kappa } \int _0^t \Vert u^E_s\Vert _{L^2}^2 ds \big )^{1/L} \big ] = \mathbb {E}\Big [ \mathbb {E}\big [ \big ( t^{-1+\kappa } \int _0^t \Vert u^E_s\Vert _{L^2}^2 ds \big )^{1/L} \bigm | {\mathcal {F}}^t \big ] \Big ] \\&\quad \leqslant \mathbb {E}\Big [ \mathbb {E}\big [ t^{-1+\kappa } \int _0^t \Vert u^E_s \Vert _{L^2}^2 ds \bigm |{\mathcal {F}}^t\big ]^{1/L}\Big ] \leqslant \mathbb {E}\Big [ 1+ {\mathcal {W}}_t^\epsilon + \Vert I_{t,\infty }^\epsilon (u^E) \Vert _{H^1}^2 \Big ] \leqslant C \end{aligned}$$

for some universal constant \(C>0\). Taking the supremum over \(t\leqslant 1\), \(\epsilon >0\) and \(E>0\), the claim follows. \(\quad \square \)

5 Proof of the Coupling to the GFF

This section is devoted to the proof of the main results Theorem 1.1 and Corollary 1.2. Recall that we introduced the notation \(u^E_t = -q_t^\epsilon \nabla v_t^{\epsilon ,E} (\Phi _t^{{\mathcal {P}}_\epsilon ,E})\). By Proposition 3.8 we know that this is a minimiser of the variational problem (3.18). Hence, it satisfies the estimate in Proposition 4.5 and also the estimates in Proposition 4.11.

5.1 Estimates on the difference field \(\Phi ^{\Delta _\epsilon ,E}\)

First, we show how the bound for the minimiser \(u^E\) in Proposition 4.5 implies bounds on the fractional moments of Sobolev norms of the difference field \(\Phi _t^{\Delta _\epsilon ,E} = \int _t^\infty q_\tau ^\epsilon u^E_\tau d\tau \) in (3.10). The main result is the following proposition.

Proposition 5.1

Let \(\alpha \in (0,2)\). Then

$$\begin{aligned} \sup _{E>0} \sup _{\epsilon >0} \sup _{t\geqslant 0}\mathbb {E}\big [ \Vert \Phi _t^{\Delta _\epsilon ,E} \Vert ^{2/L}_{H^{\alpha }} \big ] < \infty . \end{aligned}$$
(5.1)

The first step towards (5.1) is the following estimate for the large-scale part of \(\Phi ^{\Delta _\epsilon ,E}\).

Lemma 5.2

Let \(t_0>0\) and \(\alpha \in [1,2)\). Then, for \(t\geqslant t_0\),

$$\begin{aligned} \sup _{E>0} \sup _{\epsilon >0} \mathbb {E}[\Vert \Phi _t^{\Delta _\epsilon ,E} \Vert _{H^{\alpha }} ] \lesssim _{t_0, \alpha } \frac{1}{t^{(\alpha -1)/2}}. \end{aligned}$$

Proof

Since \(\Phi _t^{\Delta _\epsilon ,E} = I_{t,\infty }^\epsilon (u^E)\) and since \(\mathbb {E}[\int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau ] <\infty \) uniformly in \(\epsilon >0\) and \(E>0\), the claim follows from Lemma 4.1. \(\quad \square \)

Lemma 5.3

Set \(\Phi _{s,t}^{\Delta _\epsilon ,E} = \int _s^t q_\tau ^\epsilon u^E_\tau d\tau = -\int _s^t \dot{c}_\tau ^\epsilon \nabla v_\tau ^{\epsilon ,E}(\Phi _\tau ^{{\mathcal {P}}_\epsilon ,E}) d\tau \). Then for \(\alpha \in (1,2)\)

$$\begin{aligned} \sup _{E>0}\sup _{\epsilon >0} \mathbb {E}\Big [ \Vert \Phi _{s,t}^{\Delta _\epsilon ,E} \Vert _{H^{\alpha }}^{2/L} \Big ] \lesssim \mathbb {E}\Big [ \big ( \frac{t-s}{s^\alpha } \int _s^t \Vert u^E_\tau \Vert _{L^2}^2 d\tau \big )^{1/L}\Big ]. \end{aligned}$$

Proof

Again, the proof is an application of the regularity estimates at the end of Sect. 3.3. Noting that \(\Phi _{s,t}^{\Delta _\epsilon ,E} = I_{s,t}^\epsilon (u^E)\), we have by Lemma 4.4 that

$$\begin{aligned} \Vert \Phi _{s,t}^{\Delta _\epsilon ,E} \Vert _{H^\alpha }^{2/L} \lesssim \Big (\frac{t-s}{s^\alpha } \int _s^t \Vert u^E_\tau \Vert _{L^2}^2 d\tau \Big )^{1/L}, \end{aligned}$$

from which the claim follows by taking expectation. \(\quad \square \)

Combining Lemma 5.2 and Lemma 5.3 we can now give the proof of Proposition 5.1.

Proof of Proposition 5.1

We give the proof for \(t=0\); the case \(t>0\) follows in the same way. Let \(\alpha \in (0,2)\) and let \(\kappa = (2-\alpha )/r\) for r large enough. Further, let \((t_n)_n\) be a decreasing sequence of numbers with \(t_n\rightarrow 0\) as \(n\rightarrow \infty \) that will be determined later. Then, using Lemma 5.3 and Proposition 4.11,

$$\begin{aligned} \mathbb {E}[\Vert \Phi _0^{\Delta _\epsilon ,E} \Vert _{H^{\alpha }}^{2/L} ]&\leqslant \sum _{n \geqslant 0} \mathbb {E}\Vert \Phi _{t_{n+1},t_n}^{\Delta _\epsilon ,E} \Vert _{H^{\alpha }}^{2/L} + \mathbb {E}\Vert \Phi _{t_0}^{\Delta _\epsilon ,E} \Vert _{H^{\alpha }}^{2/L} \\&\leqslant \sum _{n\geqslant 0} \mathbb {E}\Big [\Big ( \frac{t_n-t_{n+1}}{t_{n+1}^\alpha }\int _{t_{n+1}}^{t_{n}} \Vert u^E_\tau \Vert _{L^2}^2 d\tau \Big )^{1/L}\Big ] + C \\&\leqslant \sum _{n\geqslant 0} \Big ( \frac{t_n-t_{n+1}}{t_{n+1}^\alpha } t_n^{1-\kappa }\Big )^{1/L} \mathbb {E}\Big [\Big ( t_n^{-1+\kappa } \int _0^{t_n} \Vert u^E_\tau \Vert _{L^2}^2 d\tau \Big )^{1/L}\Big ] + C \\&\lesssim \sum _{n \geqslant 0} \Big ( \frac{t_n-t_{n+1}}{t_{n+1}^\alpha } t_n^{1-\kappa }\Big )^{1/L} + C. \end{aligned}$$

Now, choosing \(t_n=2^{-n}\) we see that

$$\begin{aligned} \frac{t_n-t_{n+1}}{t_{n+1}^\alpha } t_n^{1-\kappa } = \frac{2^{-(n+1)}}{2^{-\alpha (n+1)}} 2^{-n(1-\kappa )} = 2^{-(2-\alpha -\kappa )n + \alpha - 1} \end{aligned}$$

and thus, since \(2-\alpha -\kappa >0\) for the specified values of \(\alpha \) and \(\kappa \), the sum in the last display is finite. \(\quad \square \)
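
As a quick sanity check of the dyadic bookkeeping above (not part of the proof), the following minimal Python sketch evaluates the weights from the last display for illustrative placeholder values of \(\alpha \), \(\kappa \) and L, confirming both the closed form with constant exponent \(\alpha - 1\) and the summability of the \(1/L\)-th powers.

```python
import numpy as np

# Sanity check of the dyadic bookkeeping (not part of the proof). With
# t_n = 2^{-n}, the weight (t_n - t_{n+1}) / t_{n+1}^alpha * t_n^{1-kappa}
# equals 2^{-(2-alpha-kappa) n + alpha - 1}, and its 1/L-th powers are
# summable whenever 2 - alpha - kappa > 0.
alpha, L = 1.5, 4.0            # illustrative placeholders with alpha in (0,2)
kappa = (2 - alpha) / 10       # kappa = (2-alpha)/r with r = 10

n = np.arange(0, 200)
t_n, t_np1 = 2.0 ** (-n), 2.0 ** (-(n + 1))
weights = (t_n - t_np1) / t_np1**alpha * t_n ** (1 - kappa)

closed_form = 2.0 ** (-(2 - alpha - kappa) * n + alpha - 1)
assert np.allclose(weights, closed_form)

print("sum of weights^(1/L):", np.sum(weights ** (1 / L)))   # finite
```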

5.2 Removal of the cut-off: proof of Theorem 1.1

We have now collected all the results needed to prove Theorem 1.1 and Corollary 1.2. The main task in this section is to remove the cut-off by taking the limit \(E\rightarrow \infty \).

Proof of Theorem 1.1

We first prove that the sequence of processes \((\Phi ^{\Delta _\epsilon ,E})_E\) is tight. For \(R>0\) and \(\alpha <1\), let

$$\begin{aligned} {\mathcal {X}}_R = \Big \{ \Phi \in C_0([0,\infty ),X_\epsilon ) :\sup _{t \in [0,\infty )} \Vert \Phi _t \Vert _{H^\alpha }^2 \leqslant R \text { and } \sup _{s< t} \frac{\Vert \Phi _t - \Phi _s\Vert _{H^\alpha }^2}{(t-s)^{1-\alpha }} \leqslant R \Big \}. \end{aligned}$$

Note that the elements of \({\mathcal {X}}_R\) are uniformly bounded and equicontinuous. Therefore, by the Arzelà–Ascoli theorem, \({\mathcal {X}}_R \subset C_0([0,\infty ),X_\epsilon )\) is precompact, and the closure \(\overline{{\mathcal {X}}_R}\) is compact. Moreover, we have by Lemma 4.2 and Lemma 4.3 that

$$\begin{aligned} \sup _{t\geqslant 0}\Vert \Phi _t^{\Delta _\epsilon ,E} \Vert _{H^\alpha }^2&\lesssim \int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau \\ \Vert \Phi _t^{\Delta _\epsilon ,E} - \Phi _s^{\Delta _\epsilon ,E}\Vert _{H^\alpha }^2&\lesssim (t-s)^{1-\alpha } \int _{0}^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau . \end{aligned}$$

Therefore, we have for some constant \(C>0\), which is independent of \(\epsilon \) and E,

$$\begin{aligned}&\mathbb {P}( \Phi ^{\Delta _\epsilon ,E} \in \overline{{\mathcal {X}}_R}^c ) \\&\quad \leqslant \mathbb {P}( \Phi ^{\Delta _\epsilon ,E} \in {\mathcal {X}}_R^c ) \leqslant \mathbb {P}\Big ( \sup _{t \in [0,\infty )} \Vert \Phi _t^{\Delta _\epsilon ,E}\Vert _{H^\alpha }^2 + \sup _{s< t} \frac{\Vert \Phi _t^{\Delta _\epsilon ,E} - \Phi _s^{\Delta _\epsilon ,E} \Vert _{H^\alpha }^2}{(t-s)^{1-\alpha }}> R \Big ) \\&\quad \leqslant \mathbb {P}\Big ( \int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau > R/C \Big ) \leqslant \frac{C}{R} \mathbb {E}\Big [ \int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau \Big ]. \end{aligned}$$

So, for a given \(\kappa >0\), we can choose R large enough such that

$$\begin{aligned} \sup _{E>0} \mathbb {P}\big ( \Phi ^{\Delta _\epsilon ,E} \in \overline{{\mathcal {X}}_R}^c \big ) \leqslant \frac{C}{R} \sup _{E>0} \mathbb {E}\Big [ \int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau \Big ] <\kappa , \end{aligned}$$

which establishes tightness for the sequence \((\Phi ^{\Delta _\epsilon ,E})_E \subseteq C_0([0,\infty ), X_\epsilon )\).

By Prohorov’s theorem there is a process \( \Phi ^{\Delta _\epsilon }\) and a subsequence \((E_k)_k\) such that \(\Phi ^{\Delta _\epsilon ,E_k} \rightarrow \Phi ^{\Delta _\epsilon }\) in distribution as \(k\rightarrow \infty \). By (3.11) there exists a process \(\Phi ^{{\mathcal {P}}_\epsilon } \equiv \Phi ^{\Delta _\epsilon } + \Phi ^{\text {GFF}_\epsilon }\), such that \(\Phi ^{{\mathcal {P}}_\epsilon ,E_k}\rightarrow \Phi ^{{\mathcal {P}}_\epsilon }\) in distribution as \(k\rightarrow \infty \).

Since \(e^{-v_0^{\epsilon ,E}(\phi )} \rightarrow e^{-v_0^\epsilon (\phi )}\) as \(E\rightarrow \infty \) for every \(\phi \in X_\epsilon \) and since \(e^{-v_0^{\epsilon ,E}} \leqslant e^{C_\epsilon }\) for some constant \(C_\epsilon >0\), we have by dominated convergence for every bounded and continuous \(f:X_\epsilon \rightarrow \mathbb {R}\)

$$\begin{aligned} \mathbb {E}_{\nu ^{{\mathcal {P}}_\epsilon ,E}}[f] \rightarrow \mathbb {E}_{\nu ^{{\mathcal {P}}_\epsilon }}[f], \end{aligned}$$
(5.2)

so that \(\nu ^{{\mathcal {P}}_\epsilon ,E}\rightarrow \nu ^{{\mathcal {P}}_\epsilon }\) weakly. Since \(\Phi _0^{{\mathcal {P}}_\epsilon ,E} \sim \nu ^{{\mathcal {P}}_\epsilon ,E}\) we conclude that \(\Phi _0^{{\mathcal {P}}_\epsilon } \sim \nu ^{{\mathcal {P}}_\epsilon }\) by uniqueness of weak limits. Moreover, since independence is preserved under weak limits, we have that for every \(t>0\), \(\Phi _t^{{\mathcal {P}}_\epsilon }\) is independent of \(\Phi _0^{\text {GFF}_\epsilon } - \Phi _t^{\text {GFF}_\epsilon }\).

Finally, the bounds (1.9)–(1.12) follow from the respective estimates on \(\Phi ^{\Delta _\epsilon ,E}\) and the fact that the norms are continuous maps from \(C_0([0,\infty ), X_\epsilon )\) to \(\mathbb {R}\). For instance, (1.11) is proved by

$$\begin{aligned}&\sup _{\epsilon>0} \sup _{t\geqslant 0} \mathbb {E}\Big [ \Vert \Phi _t^{\Delta _\epsilon } \Vert _{H^\alpha }^{2/L} \Big ]\\&\quad = \sup _{\epsilon>0} \sup _{t\geqslant 0} \lim _{C\rightarrow \infty } \mathbb {E}\Big [ \Vert \Phi _t^{\Delta _\epsilon } \Vert _{H^\alpha }^{2/L} \wedge C \Big ] = \sup _{\epsilon>0} \sup _{t\geqslant 0} \lim _{C\rightarrow \infty } \lim _{E\rightarrow \infty } \mathbb {E}\Big [ \Vert \Phi _t^{\Delta _\epsilon ,E} \Vert _{H^\alpha }^{2/L} \wedge C \Big ] \\ {}&\quad \leqslant \sup _{\epsilon>0} \sup _{t \geqslant 0} \sup _{E >0} \mathbb {E}\Big [ \Vert \Phi _t^{\Delta _\epsilon ,E} \Vert _{H^\alpha }^{2/L} \Big ] <\infty , \end{aligned}$$

where the last display is finite by Proposition 5.1. \(\quad \square \)

5.3 Lattice convergence: proof of Corollary 1.2

In this section we prove that as \(\epsilon \rightarrow 0\) the processes \((\Phi ^{\Delta _\epsilon })_\epsilon \) converge along a subsequence \((\epsilon _k)_k\) to a continuum process \(\Phi ^{\Delta _0}\). In order to obtain a continuum process \(\Phi ^{{\mathcal {P}}_0}\) from the sequence \((\Phi ^{{\mathcal {P}}_\epsilon })_\epsilon \), we also need the convergence of the decomposed Gaussian free field \(\Phi ^{\text {GFF}_\epsilon }\), which is the content of Lemma 5.4 below. Define for \(t\geqslant 0\)

$$\begin{aligned} \Phi _t^{\text {GFF}_0} = \int _t^\infty q_s^0 dW_s= \sum _{k\in \Omega ^*} e^{ik\cdot (\cdot )}\int _t^\infty {\hat{q}}_u^0(k) d{\hat{W}}_u(k), \quad {\hat{q}}_u^0(k) = \frac{1}{u(|k|^2 +m^2) + 1}. \end{aligned}$$

Note that for \(t>0\) the convergence of the sum is understood in \(H^\alpha \), \(\alpha <1\), while for \(t=0\) it is in \(H^\alpha \) for \(\alpha <0\).
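
To make the role of the Pauli–Villars multiplier concrete, the following minimal Python sketch (an illustration under the conventions above; the function names are ours) evaluates \({\hat{q}}_u^0(k)\) and the per-mode variance of \(\Phi _t^{\text {GFF}_0}\), reproducing the regularity dichotomy just described.

```python
import numpy as np

# Minimal illustration (names are ours, not from the paper) of the multiplier
# q_hat_u(k) = 1/(u(|k|^2+m^2)+1) and the per-mode variance of Phi_t^{GFF_0},
# namely int_t^infty q_hat_u(k)^2 du.
m = 1.0

def q_hat(u, k2):
    """Pauli-Villars multiplier at scale u for a mode with |k|^2 = k2."""
    return 1.0 / (u * (k2 + m**2) + 1.0)

def mode_variance(t, k2):
    """Closed form: int_t^infty q_hat(u,k2)^2 du = 1/((k2+m^2)(t(k2+m^2)+1))."""
    lam = k2 + m**2
    return 1.0 / (lam * (t * lam + 1.0))

# Check the closed form against a Riemann sum for one mode:
k2, t = (2 * np.pi * 3) ** 2, 0.1
u = np.linspace(t, t + 50.0, 500_000)
print((q_hat(u, k2) ** 2).sum() * (u[1] - u[0]), mode_variance(t, k2))
# For t > 0 the variance decays like |k|^{-4}, consistent with H^alpha, alpha < 1;
# for t = 0 it equals 1/(|k|^2+m^2), i.e. only |k|^{-2} decay (alpha < 0).
```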

Lemma 5.4

Let \(I_\epsilon :L^2(\Omega _\epsilon ) \rightarrow L^2(\Omega )\) be the isometric embedding defined in Sect. 2.1. Then, for any \(t_0 > 0\) and \(\alpha < 1\), we have

$$\begin{aligned} \mathbb {E}\left[ { \sup _{t\geqslant t_0}\Vert I_\epsilon \Phi ^{\text {GFF}_\epsilon }_t -\Phi ^{\text {GFF}_{0}}_t \Vert _{H^\alpha }^2}\right]&\rightarrow 0. \end{aligned}$$
(5.3)

Moreover, if \(t_0=0\) we have for \(\alpha <0\)

$$\begin{aligned} \mathbb {E}\left[ { \sup _{t\geqslant 0}\Vert I_\epsilon \Phi _t^{\text {GFF}_\epsilon }-\Phi _t^{\text {GFF}_{0}}\Vert _{H^{\alpha }}^2 }\right] \rightarrow 0. \end{aligned}$$
(5.4)

Proof

Note that the process \(I_\epsilon \Phi ^{\text {GFF}_\epsilon } - \Phi ^{\text {GFF}_0}\) is a backward martingale adapted to the filtration \({\mathcal {F}}^t\) with values in \(H^\alpha \) with \(\alpha <1\) for \(t_0>0\) and \(\alpha <0\) for \(t_0=0\). Thus, the \(H^\alpha \) norm of this process is a real valued submartingale. Therefore, we have by Doob’s \(L^2\) inequality,

$$\begin{aligned} \mathbb {E}\left[ { \sup _{t\geqslant t_0}\Vert I_\epsilon \Phi ^{\text {GFF}_\epsilon }_t -\Phi ^{\text {GFF}_{0}}_t \Vert _{H^\alpha }^2}\right] \lesssim \mathbb {E}\left[ { \Vert I_\epsilon \Phi ^{\text {GFF}_\epsilon }_{t_0} -\Phi ^{\text {GFF}_{0}}_{t_0} \Vert _{H^\alpha }^2}\right] , \end{aligned}$$

so it suffices to prove that the right-hand side converges to 0 for the specified values of \(t_0\) and \(\alpha \).

To this end, we consider the difference of the Fourier coefficients \({\hat{q}}^\epsilon _t(k)\) and \({\hat{q}}_t^0(k)\). By (2.4), we have that

$$\begin{aligned} \begin{aligned} 0\leqslant {\hat{q}}_t^\epsilon (k) - {\hat{q}}_t^0(k)&= \frac{1}{t(-\hat{\Delta }^\epsilon (k) + m^2) + 1} - \frac{1}{t(-{\hat{\Delta }}^0(k) + m^2) + 1}\\&\leqslant \frac{t |k|^2 h(\epsilon k)}{ \big [t(c |k|^2+ m^2) + 1\big ] \big [ t(|k|^2 + m^2) + 1\big ] } \leqslant \frac{h(\epsilon k)}{t(c |k|^2 + m^2) + 1}, \end{aligned} \end{aligned}$$
(5.5)

where we recall that \(c =4/\pi ^2\) as below (2.4). By the definition of the Sobolev norm, we have

$$\begin{aligned} \Vert I_\epsilon \Phi _{t_0}^{\text {GFF}_\epsilon } -\Phi _{t_0}^{\text {GFF}_{0}}\Vert _{H^{\alpha }(\Omega )}^2 = \sum _{k\in \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \Big |\int _{t_0}^\infty ({\hat{q}}_u^\epsilon (k) - {\hat{q}}_u^0(k)) d{\hat{W}}_u(k) \Big |^2\\ + \sum _{k \in \Omega ^*\setminus \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \Big |\int _{t_0}^\infty {\hat{q}}_u^0(k) d{\hat{W}}_u(k) \Big |^2. \end{aligned}$$

Taking expectation and using the estimate (5.5) for the first sum yields

$$\begin{aligned} \begin{aligned} \mathbb {E}\Big [ \Vert I_\epsilon \Phi _{t_0}^{\text {GFF}_\epsilon } -\Phi _{t_0}^{\text {GFF}_{0}}\Vert _{H^{\alpha }(\Omega )}^2 \Big ]&\leqslant \sum _{k\in \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \int _{t_0}^\infty \frac{h^2(\epsilon k)}{\big (u(c |k|^2 + m^2) + 1\big )^2} du \\ {}&+ \sum _{k \in \Omega ^*\setminus \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \int _{t_0}^\infty \frac{1}{\big (u(|k|^2+ m^2) + 1\big )^2} du \\ {}&\leqslant \sum _{k\in \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \frac{h^2(\epsilon k)}{c |k|^2 + m^2} \frac{1}{t_0(c |k|^2 + m^2) + 1} \\ {}&+ \sum _{k \in \Omega ^*\setminus \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \frac{1}{|k|^2 + m^2} \frac{1}{t_0( |k|^2 + m^2) + 1}. \end{aligned} \end{aligned}$$
(5.6)

Further note that \(h^2(\epsilon k) = O\big ( (\epsilon |k|)^\delta \big )\) for any \(\delta <4\). We can now discuss the convergence to 0 as \(\epsilon \rightarrow 0\) for \(\alpha \) and \(t_0\) as above. For \(t_0>0\) the first sum on the right-hand side can be bounded by

$$\begin{aligned} \begin{aligned}&\sum _{k\in \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \frac{h^2(\epsilon k)}{c |k|^2 + m^2} \frac{1}{t_0(c |k|^2 + m^2) + 1} \\ {}&\lesssim \epsilon ^\delta \sum _{k\in \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \frac{|k|^\delta }{c |k|^2 + m^2} \frac{1}{t_0(c |k|^2 + m^2) + 1} \lesssim _{t_0} \epsilon ^\delta \sum _{k\in \Omega ^*} \frac{1}{\big (1+|k|^2\big )^{2-\alpha -\delta }}. \end{aligned} \end{aligned}$$
(5.7)

The last sum is finite for \(\alpha <1\) and \(\delta \) small enough (depending on \(\alpha \)), and hence the right-hand side vanishes as \(\epsilon \rightarrow 0\). For the second sum on the right-hand side of (5.6) we similarly have

$$\begin{aligned} \sum _{k \in \Omega ^*\setminus \Omega _\epsilon ^*} (1+ |k|^2)^{\alpha } \frac{1}{|k|^2+m^2} \frac{1}{t_0 (|k|^2+m^2)+1} \lesssim _{t_0} \sum _{k \in \Omega ^*\setminus \Omega _\epsilon ^*} \frac{1}{\big ( 1+|k|^2\big )^{2-\alpha } }, \end{aligned}$$

which is finite uniformly in \(\epsilon \) for \(\alpha <1\), and hence converges to 0 as \(\epsilon \rightarrow 0\). Together with (5.7) this shows the convergence in (5.3).

For the proof of (5.4), we note that both sums on the right-hand side lose a factor of order \(O(|k|^{-2})\) when \(t_0=0\). Using the same arguments as for the case \(t_0>0\), we find that the corresponding sums are finite for \(\alpha <0\) in this case. \(\quad \square \)
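
As an aside, the multiplier comparison (5.5) can be checked numerically. The following Python sketch is an illustration only: it assumes the standard lattice symbol \(-{\hat{\Delta }}^\epsilon (k) = (4/\epsilon ^2)\sum _{i} \sin ^2(\epsilon k_i/2)\) and uses the crude bound \(h \leqslant 1\) on the right-hand side.

```python
import numpy as np

# Numerical illustration of the multiplier comparison (5.5); not part of the
# proof. We assume the standard 2D lattice symbol
# -Delta_hat_eps(k) = (4/eps^2) * sum_i sin^2(eps k_i / 2)
# and bound h(eps k) <= 1 on the right-hand side of (5.5).
m, c = 1.0, 4 / np.pi**2      # c as below (2.4)

def symbol_eps(k, eps):
    """Fourier symbol of -Delta^eps at the 2D frequency k."""
    return (4 / eps**2) * np.sum(np.sin(eps * k / 2.0) ** 2)

def q_hat(t, lam):
    return 1.0 / (t * (lam + m**2) + 1.0)

eps, t = 1 / 64, 0.05
for k in [np.array([2 * np.pi, 0.0]), np.array([20 * np.pi, 12 * np.pi])]:
    k2 = float(k @ k)
    diff = q_hat(t, symbol_eps(k, eps)) - q_hat(t, k2)   # left-hand side of (5.5)
    bound = 1.0 / (t * (c * k2 + m**2) + 1.0)            # right-hand side, h <= 1
    assert 0.0 <= diff <= bound
    print(f"|k|^2 = {k2:9.1f}   difference = {diff:.3e}   bound = {bound:.3e}")
```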

The next result is the convergence of the difference field \(\Phi ^{\Delta _\epsilon }\) along a subsequence \((\epsilon _k)_k\), and its proof is almost identical to the removal of the cut-off in the proof of Theorem 1.1. Recall that we denote by \(C_0([0,\infty ), {\mathcal {S}})\) the set of all continuous processes with values in a normed space \(({\mathcal {S}}, \Vert \cdot \Vert _{{\mathcal {S}}})\) that vanish at \(\infty \) and that \(I_\epsilon :L^2(\Omega _\epsilon ) \rightarrow L^2(\Omega )\) is the isometric embedding.

Proposition 5.5

Let \(\alpha <1\). Then \((I_\epsilon \Phi ^{\Delta _\epsilon })_\epsilon \) is a tight sequence of processes in \(C_0([0,\infty ), H^{\alpha })\). In particular, there is a process \(\Phi ^{\Delta _0} \in C_0([0,\infty ), H^{\alpha })\) and a subsequence \((\epsilon _k)_k\), \(\epsilon _k \rightarrow 0\) as \(k\rightarrow \infty \), such that the laws of \(I_{\epsilon _k}\Phi ^{\Delta _{\epsilon _k}}\) on \(C_0([0,\infty ), H^{\alpha })\) converge weakly to the law of \(\Phi ^{\Delta _0}\).

Proof

For \(R>0\) and \(\alpha <1\), let

$$\begin{aligned} {\mathcal {X}}_R = \Big \{ \Phi \in C_0([0,\infty ),H^\alpha ) :\sup _{t \in [0,\infty )} \Vert \Phi _t \Vert _{H^\alpha }^2 \leqslant R \text { and } \sup _{s< t} \frac{\Vert \Phi _t - \Phi _s\Vert _{H^\alpha }^2}{(t-s)^{1-\alpha }} \leqslant R \Big \}. \end{aligned}$$

As in the proof of Theorem 1.1, the closure \(\overline{{\mathcal {X}}_R}\) is compact by the Arzelà–Ascoli theorem. Moreover, we have

$$\begin{aligned} \sup _{t\geqslant 0}\Vert I_\epsilon \Phi _t^{\Delta _\epsilon ,E} \Vert _{H^\alpha }^2&\lesssim \int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau , \\ \Vert I_\epsilon \Phi _t^{\Delta _\epsilon ,E} - I_\epsilon \Phi _s^{\Delta _\epsilon ,E} \Vert _{H^\alpha }^2&\lesssim (t-s)^{1-\alpha } \int _{0}^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau , \end{aligned}$$

and thus, by the weak convergence of \((\Phi ^{\Delta _\epsilon ,E})_E\) as \(E\rightarrow \infty \) (along a subsequence \((E_k)_k\)), we have for some constant \(C>0\), which is independent of \(\epsilon \) and E,

$$\begin{aligned} \mathbb {P}(I_\epsilon \Phi ^{\Delta _\epsilon } \in \overline{{\mathcal {X}}_R}^c )&\leqslant \liminf _{k\rightarrow \infty } \mathbb {P}(I_\epsilon \Phi ^{\Delta _\epsilon ,E_k} \in \overline{{\mathcal {X}}_R}^c ) \leqslant \liminf _{k\rightarrow \infty } \mathbb {P}( I_\epsilon \Phi ^{\Delta _\epsilon ,E_k} \in {\mathcal {X}}_R^c ) \\ {}&\leqslant \liminf _{k\rightarrow \infty } \mathbb {P}\Big ( \int _0^\infty \Vert u^{E_k}_\tau \Vert _{L^2}^2 d\tau> R/C \Big ) \leqslant \sup _{E>0}\mathbb {P}\Big ( \int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau> R/C \Big ) \\ {}&\leqslant \sup _{E>0}\frac{C}{R} \mathbb {E}\Big [ \int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau \Big ]. \end{aligned}$$

So, for a given \(\kappa >0\), we can choose R large enough such that

$$\begin{aligned} \sup _{\epsilon>0} \mathbb {P}( I_\epsilon \Phi ^{\Delta _\epsilon } \in \overline{{\mathcal {X}}_R}^c ) \leqslant \frac{C}{R} \sup _{\epsilon>0} \sup _{E>0} \mathbb {E}\Big [ \int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau \Big ] <\kappa , \end{aligned}$$

which establishes tightness for the sequence \((I_\epsilon \Phi ^{\Delta _\epsilon })_\epsilon \subseteq C_0([0,\infty ), H^{\alpha })\). The existence of a weak limit \(\Phi ^{\Delta _0}\) then follows by Prohorov’s theorem. \(\quad \square \)

Proof of Corollary 1.2

Since \(\Phi ^{\Delta _{\epsilon _k}} \rightarrow \Phi ^{\Delta _{0}}\) in distribution as \(k\rightarrow \infty \), we also have that there exists a process \(\Phi ^{{\mathcal {P}}_0} \equiv \Phi ^{\Delta _0} + \Phi ^{\text {GFF}_0}\), such that \(\Phi ^{{\mathcal {P}}_{\epsilon _k}} \rightarrow \Phi ^{{\mathcal {P}}_{0}}\) in distribution as \(k\rightarrow \infty \). Moreover, as \(\epsilon \rightarrow 0\), we have that \(\nu ^{{\mathcal {P}}_\epsilon } \rightarrow \nu ^{{\mathcal {P}}}\) weakly, where \(\nu ^{\mathcal {P}}\) is the continuum \({\mathcal {P}}(\phi )_2\) measure, see also Proposition 5.6 below.

Finally, the estimates on the norms of \( \Phi ^{\Delta _0}\) and the independence of \(\Phi _t^{{\mathcal {P}}_0}\) and \(\Phi _0^{\text {GFF}_0} - \Phi _t^{\text {GFF}_0}\) follow from the convergence in distribution similarly as in the proof of Theorem 1.1. \(\quad \square \)

5.4 Uniqueness of the limiting law for fixed \(t > 0\)

For the discussion of the maximum, we need a refined statement on the convergence of \((\Phi _t^{{\mathcal {P}}_\epsilon })_\epsilon \) for \(t>0\) as \(\epsilon \rightarrow 0\). More precisely, we prove that the law of the limiting field \(\Phi _t^{{\mathcal {P}}_{0}}\) for a fixed \(t>0\) does not depend on the subsequence. By the same arguments that led to (5.2), we have that \(\Phi _t^{{\mathcal {P}}_\epsilon }\) is distributed as the renormalised measure \(\nu _t^{{\mathcal {P}}_\epsilon }\) defined by

$$\begin{aligned} \mathbb {E}_{\nu _t^{{\mathcal {P}}_\epsilon }} [F] = e^{v_\infty ^\epsilon (0)} {\varvec{E}}_{c_\infty ^\epsilon -c_t^\epsilon } \big [ F(\zeta ) e^{-v_t^\epsilon (\zeta )} \big ], \end{aligned}$$
(5.8)

where \(F:X_\epsilon \rightarrow \mathbb {R}\), and \(v_t^\epsilon \) is the renormalised potential defined in (3.5) for \(E=\infty \). Let \((I_\epsilon )_*\nu _t^{{\mathcal {P}}_\epsilon }\) denote the pushforward measure of \(\nu _t^{{\mathcal {P}}_\epsilon }\) under the isometric embedding \(I_\epsilon \). Then we have the following convergence to a unique limit as \(\epsilon \rightarrow 0\).

Proposition 5.6

As \(\epsilon \rightarrow 0\), we have for \(t>0\) that, as measures on \(H^\alpha (\Omega )\) for every \(\alpha <1\), \((I_{\epsilon })_*\nu _t^{{\mathcal {P}}_\epsilon }\) converges weakly to \(\nu _t^{\mathcal {P}}\) given by

$$\begin{aligned} \mathbb {E}_{\nu _{t}^{\mathcal {P}}}[F] = e^{v_\infty ^0(0)} \mathbb {E}\big [ F(Y_\infty - Y_t) e^{-v_t^0(Y_\infty -Y_t)} \big ] \end{aligned}$$
(5.9)

for \(F:H^\alpha (\Omega ) \rightarrow \mathbb {R}\) bounded and measurable, and where \(e^{-v_\infty ^0(0)} = \mathbb {E}\big [ e^{-\int _{\Omega } {:\,} {\mathcal {P}}(Y_{\infty }) {:\,}dx}\big ]\).

Moreover, for \(t=0\), the weak convergence \((I_{\epsilon })_*\nu _0^{{\mathcal {P}}_\epsilon } \rightarrow \nu _0^{\mathcal {P}}\) holds as measures on \(H^\alpha (\Omega )\) for any \(\alpha <0\) and with \(\nu _0^{\mathcal {P}}\) defined by (5.9) with \(t=0\) and \(F:H^\alpha (\Omega ) \rightarrow \mathbb {R}\) for \(\alpha <0\).

Remark 5.7

Note that, although \({:\,} {\mathcal {P}}(Y_\infty ) {:\,}\) is almost surely a distribution of negative regularity and not a function, the expression \(\int _\Omega {:\,} {\mathcal {P}}(Y_\infty ) {:\,}dx\) makes sense, by an abuse of notation, as the duality pairing between the distribution and the constant function 1 (which is smooth on the torus). Furthermore, we sometimes include the spatial argument of the distribution as a further abuse of notation, e.g. \(\int _\Omega Y_\infty (x) dx\), not to be confused with a pointwise evaluation (which may not exist).

We first prove that the discrete potential, suitably extended, converges to the continuum potential in \(L^2\).

Lemma 5.8

As \(\epsilon \rightarrow 0\), we have

$$\begin{aligned} \mathbb {E}\Big [ \Big (\int _{\Omega _{\epsilon }} {:\,} {\mathcal {P}}(Y^\epsilon _{\infty }) {:\,} dx - \int _{\Omega } {:\,} {\mathcal {P}}(Y_{\infty }) {:\,} dx \Big )^{2} \Big ] \rightarrow 0. \end{aligned}$$
(5.10)

Proof

It is convenient to introduce an extension operator, alternative to the trigonometric extension, that behaves well with respect to products. We choose the piecewise constant extension, which maps a function defined on \(\Omega _\epsilon \) to the function on \(\Omega \) that is constant on each box \(x + \epsilon (-1/2, 1/2]^2\), \(x \in \Omega _\epsilon \subseteq \Omega \). Indeed, for any \(f \in X_\epsilon \), define \(E_\epsilon f :\Omega \rightarrow \mathbb {R}\) for \(x \in \Omega \) by

$$\begin{aligned} E_\epsilon f (x) = \sum _{z \in \Omega _\epsilon } f(z) \textbf{1}_{(-\epsilon /2,\epsilon /2]^2}(x-z). \end{aligned}$$
(5.11)
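
For concreteness, here is a minimal Python sketch of the extension operator \(E_\epsilon \) (an illustration under the assumption that lattice fields are stored as square arrays indexed by \(\Omega _\epsilon \); the function name is ours).

```python
import numpy as np

# Sketch of the piecewise constant extension E_eps in (5.11), assuming lattice
# fields are stored as n-by-n arrays indexed by Omega_eps = {0, eps, ..., 1-eps}^2
# with n = 1/eps; the function name is ours.
def extend_piecewise_constant(f_eps: np.ndarray, x: np.ndarray) -> float:
    """Evaluate (E_eps f)(x): the value of f at the lattice point z with
    x - z in (-eps/2, eps/2]^2, with torus-periodic wrap-around."""
    n = f_eps.shape[0]
    eps = 1.0 / n
    # ceil(x/eps - 1/2) picks the integer index of z with x - z in (-eps/2, eps/2]
    idx = np.ceil(x / eps - 0.5).astype(int) % n
    return float(f_eps[idx[0], idx[1]])

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))                      # a field on Omega_{1/8}
print(extend_piecewise_constant(f, np.array([0.13, 0.99])))
```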

We identify \(Y^{\epsilon }_{\infty }\) with \(E_\epsilon Y^{\epsilon }_{\infty }\) and regard its covariance operator \(c^\epsilon =c_\infty ^\epsilon \) as an operator acting on such piecewise constant functions. In the following computation, we abuse notation and evaluate the distribution \(Y_{\infty }\) pointwise. Although such a pointwise evaluation alone is a priori ill-defined, the covariance \(\mathbb {E}[Y^{\epsilon }_{\infty }(x) Y_{\infty }(y)]\) for \(x,y \in \Omega _\epsilon \) yields a well defined expression, as our computation below shows. This can be made rigorous by first approximating \(Y_{\infty }\) with smooth fields and then removing the approximation. We omit this to lighten the notation. Thus, for \(x,y \in \Omega \),

$$\begin{aligned}&\mathbb {E}[Y^{\epsilon }_{\infty }(x)Y_{\infty }(y)] = \mathbb {E}\Big [ \sum _{z \in \Omega _\epsilon } Y^{\epsilon }_{\infty }(z) \textbf{1}_{(-\epsilon /2,\epsilon /2]^2}(x-z)Y_{\infty }(y) \Big ] \\&\quad = \sum _{z \in \Omega _\epsilon } \textbf{1}_{(-\epsilon /2,\epsilon /2]^2}(x-z) \sum _{k_1 \in \Omega _\epsilon ^*, k_2 \in \Omega ^*} \mathbb {E}\Big [ e^{ik_1 \cdot z+ik_2 \cdot y} \int _0^\infty \hat{q}_t^\epsilon (k_1) d{\hat{W}}_t(k_1)\int _0^\infty \hat{q}_t^0 (k_2) d {\hat{W}}_t(k_2) \Big ] \\&\quad = \sum _{z \in \Omega _\epsilon } \textbf{1}_{(-\epsilon /2,\epsilon /2]^2}(x-z) \sum _{k\in \Omega _\epsilon ^*} e^{ik\cdot (z-y) } \int _0^\infty {\hat{q}}_t^\epsilon (k) {\hat{q}}_t^0(k) dt \\&\quad \rightarrow \sum _{k\in \Omega ^*} e^{ik\cdot (x-y) } \int _0^\infty | {\hat{q}}_t^0(k)|^2 dt = c(x-y), \end{aligned}$$

as \(\epsilon \rightarrow 0\) and where we recall that \(c=c_\infty ^0\) is the kernel of \((-\Delta + m^2 )^{-1}\).

To prove (5.10), it is sufficient to show that, for each fixed \(n \in \mathbb {N}\), as \(\epsilon \rightarrow 0\)

$$\begin{aligned} \mathbb {E}\Big [\Big (\int _{\Omega _{\epsilon }}{:\,} (Y^{\epsilon }_{\infty })^{n} {:\,}dx -\int _{\Omega }{:\,} Y_{\infty }^{n} {:\,} dx \Big )^{2}\Big ] \rightarrow 0. \end{aligned}$$

Note that, by Wick’s theorem and an abuse of notation regarding evaluating distributions pointwise as above, we have

$$\begin{aligned} \mathbb {E}[{:\,} (Y_\infty ^\epsilon )^n(x) {:\,} {:\,} Y_\infty ^n(y) {:\,}]&= \sum _{z_1,\dots ,z_n} \prod _{i=1}^n \textbf{1}_{(-\epsilon /2,\epsilon /2]^2}(x-z_i) \, \mathbb {E}[ {:\,} \prod _{i=1}^n Y_\infty ^\epsilon (z_i) {:\,} {:\,} Y_\infty ^n (y) {:\,} ] \\&= n!\sum _{z_1,\dots ,z_n} \prod _{i=1}^n \textbf{1}_{(-\epsilon /2,\epsilon /2]^2}(x-z_i) \, \prod _{i=1}^n \mathbb {E}[ Y_\infty ^\epsilon (z_i) Y_\infty (y)] \\&= n!\mathbb {E}[ Y_\infty ^\epsilon (x) Y_\infty (y) ]^n. \end{aligned}$$

Hence, by expanding the square, we have that the left-hand side of the previous display is equal to

$$\begin{aligned}&\int _{\Omega _{\epsilon }}\int _{\Omega _{\epsilon }}\mathbb {E}[ {:\,} (Y^{\epsilon }_{\infty })^{n} (x) {:\,} {:\,} (Y^{\epsilon }_{\infty })^{n}(y) {:\,}] dx dy + \int _{\Omega } \int _{\Omega } \mathbb {E}[ {:\,} (Y_{\infty })^{n}(x) {:\,} {:\,} (Y_{\infty })^{n}(y) {:\,}] dxdy \\&-2\int _{\Omega }\int _{\Omega _{\epsilon }} \mathbb {E}[{:\,} (Y_{\infty })^{n}(x) {:\,} {:\,} (Y^{\epsilon }_{\infty })^{n}(y) {:\,}] dx dy \\ = \,&\, n!\int _{\Omega _{\epsilon }} \int _{\Omega _{\epsilon }} (c^{\epsilon }(x-y))^{n} dx dy + n!\int _{\Omega } \int _{\Omega } c^n(x-y) dx dy - 2n! \int _{\Omega }\int _{\Omega } \mathbb {E}[Y^{\epsilon }_{\infty }(x) Y_{\infty }(y)]^n dxdy, \end{aligned}$$

which converges to 0 by dominated convergence. \(\quad \square \)
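
The identity \(\mathbb {E}[{:\,} X^n {:\,} {:\,} Y^n {:\,}] = n! \, \mathbb {E}[XY]^n\) used in the proof can be tested by Monte Carlo for scalar Gaussians. The following Python sketch (illustration only, with unit variances so that the Wick power is the probabilists' Hermite polynomial) does exactly this.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Monte Carlo check (illustration only, for scalar Gaussians) of the identity
# used above: for jointly standard Gaussians with correlation rho,
#   E[ :X^n: :Y^n: ] = n! * rho^n,
# where for unit variance the Wick power :X^n: is the probabilists' Hermite
# polynomial He_n(X).
rng = np.random.default_rng(1)
n, rho, N = 3, 0.6, 1_000_000

X = rng.standard_normal(N)
Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(N)

coeffs = np.zeros(n + 1)
coeffs[n] = 1.0                                   # selects He_n
wick_X, wick_Y = hermeval(X, coeffs), hermeval(Y, coeffs)

print("Monte Carlo:    ", np.mean(wick_X * wick_Y))
print("Wick's theorem: ", math.factorial(n) * rho**n)
```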

We need the following exponential integrability lemma, which follows from a by-now standard argument due to Nelson, see [24, Chapter 9.6]. Note that one may also prove this using the Boué–Dupuis formula and estimates similar to those in Sect. 4.2, see Remark 4.9. The convergence statement below then follows by Vitali’s theorem as stated in [12, Theorem 4.5.4] and the convergence (5.10).

Lemma 5.9

For any \(0\leqslant p<\infty \), we have

$$\begin{aligned} \sup _{\epsilon >0}\mathbb {E}\Big [\exp \Big (-p\int _{\Omega _{\epsilon }}{:\,} {\mathcal {P}}(Y_\infty ^\epsilon ) {:\,} dx\Big )\Big ] < \infty . \end{aligned}$$
(5.12)

In particular, as \(\epsilon \rightarrow 0\)

$$\begin{aligned} \mathbb {E}\Big [\exp \Big (-\int _{\Omega _{\epsilon }}{:\,} {\mathcal {P}}(Y_{\infty }^\epsilon ) {:\,} dx\Big )\Big ] \rightarrow \mathbb {E}\Big [\exp \Big (-\int _{\Omega }{:\,} {\mathcal {P}}(Y_{\infty }) {:\,} dx\Big )\Big ]. \end{aligned}$$
(5.13)

Combining Lemma 5.8 and Lemma 5.9, we can now give a proof of Proposition 5.6.

Proof of Proposition 5.6

Recall from (5.8) that the renormalised measure \(\nu _t^{{\mathcal {P}}_\epsilon }\) is defined by

$$\begin{aligned} \mathbb {E}_{\nu _t^{{\mathcal {P}}_\epsilon }}[F] = { e^{v_\infty ^\epsilon (0)} {\varvec{E}}_{c_\infty ^\epsilon - c_t^\epsilon } [F(\zeta ) e^{-v_t^\epsilon (\zeta )} ] } = { e^{v_\infty ^\epsilon (0)} } \mathbb {E}[F(Y^{\epsilon }_{\infty }-Y^{\epsilon }_{t}) e^{-v^{\epsilon }_{t}(Y^{\epsilon }_{\infty }-Y^{\epsilon }_{t})}], \end{aligned}$$

where \(F:X_\epsilon \rightarrow \mathbb {R}\) is bounded and continuous, and \(v^\epsilon _t\) is the renormalised potential. By the definition of the pushforward measure and the renormalised potential we obtain that, for \(F:H^\alpha (\Omega ) \rightarrow \mathbb {R}\), bounded and continuous,

$$\begin{aligned} \mathbb {E}_{(I_{\epsilon })_*\nu ^{{\mathcal {P}}_\epsilon }_{t}}[F]&= \mathbb {E}_{\nu _t^{{\mathcal {P}}_\epsilon }} [F \circ I_\epsilon ] = e^{v_\infty ^\epsilon (0)} \mathbb {E}[F \big ( I_\epsilon (Y_\infty ^\epsilon - Y_t^\epsilon ) \big ) e^{-v_t^\epsilon (Y_\infty ^\epsilon - Y_t^\epsilon )} ] \\&= { e^{v_\infty ^\epsilon (0)} } \mathbb {E}\Big [ F\big (I_{\epsilon }(Y_\infty ^\epsilon -Y_t^\epsilon )\big ) \mathbb {E}\big [e^{- v_0^\epsilon (Y_\infty ^\epsilon - Y_t^\epsilon + Y_t^\epsilon ) } \bigm | {\mathcal {F}}^t \big ] \Big ] \\&= e^{v_\infty ^\epsilon (0)} \mathbb {E}\Big [F\big (I_\epsilon (Y_\infty ^\epsilon - Y_t^\epsilon )\big ) e^{-v_0^\epsilon (Y_\infty ^\epsilon )} \Big ]. \end{aligned}$$

Now, we have by (5.3) that \(I_{\epsilon }(Y^{\epsilon }_{\infty }-Y^{\epsilon }_{t})\) converges to \(Y_{\infty }-Y_{t}\) in \(L^2\) with respect to the norm of \(H^\alpha (\Omega )\) for any \(\alpha <1\). Moreover, we have by (5.10) that \(v_0^\epsilon (Y_\infty ^\epsilon ) \rightarrow v_0^0(Y_\infty )\) in \(L^2\). Take any subsequence, which we continue to denote as \(\epsilon \). Then, there is a further subsequence \((\epsilon _k)_k\), along which we have

$$\begin{aligned} F\big (I_{\epsilon _k}(Y_\infty ^{\epsilon _k} - Y_t^{\epsilon _k})\big ) e^{-v_0^{\epsilon _k} (Y_\infty ^{\epsilon _k})} \rightarrow F( Y_\infty - Y_t) e^{-v_0^0 (Y_\infty )} \end{aligned}$$
(5.14)

almost surely, where we also used that F is continuous with respect to the norm on \(H^\alpha \). Since F is bounded, we have by (5.12) that

$$\begin{aligned} \Big (F\big (I_\epsilon (Y_\infty ^{\epsilon } - Y_t^{\epsilon })\big ) e^{-v_0^{\epsilon } (Y_\infty ^{\epsilon })} \Big )_\epsilon \end{aligned}$$

is uniformly integrable. Hence, by Vitali’s theorem, the convergence in (5.14) holds in \(L^1\), i.e.

$$\begin{aligned} \mathbb {E}\Big [F\big (I_{\epsilon _k}(Y_\infty ^{\epsilon _k} - Y_t^{\epsilon _k})\big ) e^{-v_0^{\epsilon _k} (Y_\infty ^{\epsilon _k})} \Big ] \rightarrow \mathbb {E}\Big [ F( Y_\infty - Y_t) e^{-v_0^0 (Y_\infty )} \Big ]. \end{aligned}$$
(5.15)

In summary, we have shown that every subsequence of \(\epsilon \) has a further subsequence \((\epsilon _k)_k\) along which (5.15) holds, thus showing that (5.15) holds along the full sequence.

For the case \(t=0\) we follow the same arguments as for \(t>0\), but now we take \(F:H^\alpha (\Omega ) \rightarrow \mathbb {R}\) for \(\alpha <0\) and use (5.4). \(\quad \square \)

6 Convergence in Law for the Maximum of \({\mathcal {P}}(\phi )_2\)

In this section we use the results on the difference field \(\Phi ^\Delta \) and prove that the maximum of the \({\mathcal {P}}(\phi )_2\) field converges in distribution to a randomly shifted Gumbel distribution. The analogous result was recently established for the sine-Gordon field in [7, Section 4]. The main difficulty in this reference is to deal with the non-Gaussian and non-independent term \(\Phi _0^\Delta \), which requires generalising and extending several key results of [14]. In the present case, the combination of the Polchinski approach and the Boué–Dupuis variational approach produces a similar situation, with the essential difference from the sine-Gordon case being different regularity estimates for the difference field. Thus, the main goal in this section is to argue that all results in [7, Section 4] also hold under the modified assumptions on \(\Phi ^\Delta \).

From now on, when no confusion can arise, we will drop \(\epsilon \) from the notation. Moreover, to ease notation, we will use the notation \(\Vert \varphi \Vert _{L^\infty (\Omega _\epsilon )} \equiv \Vert \varphi \Vert _{\infty }\) for fields \(\varphi :\Omega _\epsilon \rightarrow \mathbb {R}\). Recall that by Theorem 1.1 we have that

$$\begin{aligned} \Phi _0^{{\mathcal {P}}} = \Phi _0^\text {GFF}+ \Phi _0^\Delta = \Phi _0^\text {GFF}- \Phi _s^\text {GFF}+ \Phi _s^{\mathcal {P}}+ R_s, \end{aligned}$$
(6.1)

where \(R_s = \Phi _0^\Delta -\Phi _s^\Delta \) satisfies \(\sup _{\epsilon >0} \mathbb {E}[\Vert R_s\Vert _\infty ^{2/L} ] \rightarrow 0\) as \(s\rightarrow 0\) by the Sobolev embedding and (1.12). In the analysis of the maximum of \(\Phi _0^{\mathcal {P}}\) we also need the following continuity result for the field \(\Phi ^\Delta \).

Lemma 6.1

Let \(\alpha \in (0,1)\). Then

$$\begin{aligned} \sup _{\epsilon >0} \sup _{t\geqslant 0}\mathbb {E}\big [\Vert \Phi _t^{\Delta } \Vert ^{2/L}_{C^{\alpha }(\Omega )} \big ] < \infty . \end{aligned}$$

Proof

The statement follows from the Sobolev-Hölder embedding in Proposition 2.2 and (1.11). \(\quad \square \)

To express convergence in distribution we will use the Lévy distance d on the set of probability measures on \(\mathbb {R}\), which is a metric for the topology of weak convergence. It is defined for any two probability measures \(\nu _1,\nu _2\) on \(\mathbb {R}\) by

$$\begin{aligned} d(\nu _1,\nu _2)= \min \{\kappa >0 :\nu _1 (B) \leqslant \nu _2(B^\kappa )+\kappa \text {~for all open sets~} B \}, \end{aligned}$$

where \(B^\kappa =\{y\in \mathbb {R}:{{\,\textrm{dist}\,}}(y,B) < \kappa \}\). We will use the convention that when a random variable appears in the argument of d, we refer to its distribution on \(\mathbb {R}\). Note that if two random variables X and Y can be coupled such that \(|X-Y|\leqslant \kappa \) with probability at least \(1-\kappa \), then \(d(X,Y) \leqslant \kappa \).
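
This coupling remark can be illustrated numerically. The following Python sketch (not from the paper) builds such a coupling and evaluates the classical one-dimensional Lévy metric between the distribution functions, which lower-bounds the distance d defined above.

```python
import numpy as np

# Illustration (not from the paper) of the coupling remark above. We build a
# coupling with |X - Y| <= kappa off an event of probability kappa, and check
# the classical one-dimensional Levy metric between distribution functions,
#   d_L(F, G) = inf{ h : F(x-h) - h <= G(x) <= F(x+h) + h for all x },
# which lower-bounds the Levy-Prokhorov-type distance d defined above.
rng = np.random.default_rng(2)
N, kappa = 100_000, 0.05

X = rng.standard_normal(N)
noise = kappa * (2 * rng.random(N) - 1)    # |noise| <= kappa
bad = rng.random(N) < kappa                # bad event of probability kappa
Y = X + noise + bad * 10.0 * rng.standard_normal(N)

def levy_metric(a, b, grid):
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    lo, hi = 0.0, 1.0 + grid[-1] - grid[0]
    for _ in range(50):                    # bisection on h
        h = (lo + hi) / 2
        Fa_m = np.interp(grid - h, grid, Fa, left=0.0, right=1.0)
        Fa_p = np.interp(grid + h, grid, Fa, left=0.0, right=1.0)
        if np.all(Fa_m - h <= Fb) and np.all(Fb <= Fa_p + h):
            hi = h
        else:
            lo = h
    return hi

grid = np.linspace(-15.0, 15.0, 4001)
print("empirical Levy metric:", levy_metric(X, Y, grid), " vs kappa =", kappa)
```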

6.1 Reduction to an independent decomposition

The first important step is the introduction of a scale cut-off \(s>0\) to obtain an independent decomposition from (6.1). More precisely, we write

$$\begin{aligned} \Phi _0^{{\mathcal {P}}} = {\tilde{\Phi }}_s+ R_s, \qquad {\tilde{\Phi }}_s= (\Phi _0^\text {GFF}- \Phi _s^\text {GFF}) + \Phi _s^{\mathcal {P}}\end{aligned}$$
(6.2)

and argue that we may from now on focus on the auxiliary field \({\tilde{\Phi }}_s\). The following statement plays the same role as Lemma 4.1 in [7].

Lemma 6.2

[Analog to [7, Lemma 4.1]]. Assume that the limiting law \({\tilde{\mu }}_s\) of \(\max _{\Omega _\epsilon } {\tilde{\Phi }}_s-{\mathfrak {m}}_\epsilon \) as \(\epsilon \rightarrow 0\) exists for every \(s>0\), and that there are positive random variables \(\textrm{Z}_s\) (on the above common probability space) such that

$$\begin{aligned} {\tilde{\mu }}_s((-\infty ,x])=\mathbb {E}[e^{-\alpha ^* \textrm{Z}_s e^{-\sqrt{8\pi }x}}] \end{aligned}$$
(6.3)

for some constant \(\alpha ^*>0\). Then the law of \(\max _{\Omega _\epsilon } \Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon \) converges weakly to some probability measure \(\mu _0\) as \(\epsilon \rightarrow 0\) and \({\tilde{\mu }}_s \rightharpoonup \mu _0\) weakly as \(s\rightarrow 0\). Moreover, there is a positive random variable \(\textrm{Z}^{\mathcal {P}}\) such that

$$\begin{aligned} \mu _0((-\infty ,x])=\mathbb {E}[e^{-\alpha ^* \textrm{Z}^{\mathcal {P}}e^{-\sqrt{8\pi }x}}]. \end{aligned}$$

Proof

We follow the same steps as in the proof of Lemma 4.1 of [7]. We first argue that the sequence \((\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon )_\epsilon \) is tight. Indeed, since

$$\begin{aligned} \max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon = \max _{\Omega _\epsilon }\Phi _0^\text {GFF}-{\mathfrak {m}}_\epsilon + O( \Vert \Phi _0^\Delta \Vert _\infty ) \end{aligned}$$

and since \((\max _{\Omega _\epsilon }\Phi _0^\text {GFF}- {\mathfrak {m}}_\epsilon )_\epsilon \) is tight by [15], we see that \((\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon )_\epsilon \) differs from a tight sequence by a sequence \(Y_\epsilon \) with \(\sup _{\epsilon >0}\mathbb {E}[|Y_\epsilon |^{2/L}] <\infty \). Elementary arguments such as Markov’s inequality then imply that the sequence \((\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon )_\epsilon \) is also tight.

Thus, there is a probability distribution \(\mu _0\) such that the law of \(\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon \) converges to \(\mu _0\) weakly along a subsequence \((\epsilon _k)_k\). Considering the Lévy distance to \({\tilde{\mu }}_s\), we have

$$\begin{aligned} d(\mu _0, {\tilde{\mu }}_s)&\leqslant \limsup _{\epsilon = \epsilon _k \rightarrow 0} [d(\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon , \max _{\Omega _\epsilon }{\tilde{\Phi }}_s- {\mathfrak {m}}_\epsilon ) + d(\max _{\Omega _\epsilon }{\tilde{\Phi }}_s- {\mathfrak {m}}_\epsilon , {\tilde{\mu }}_s) ] \\ {}&=\limsup _{\epsilon =\epsilon _k \rightarrow 0} d(\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon , \max _{\Omega _\epsilon }{\tilde{\Phi }}_s- {\mathfrak {m}}_\epsilon ). \end{aligned}$$

The last display can be estimated as follows: for any open set \(B \subseteq \mathbb {R}\) we have

$$\begin{aligned} \mathbb {P}(\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon \in B)&\leqslant \mathbb {P}(\max _{\Omega _\epsilon }{\tilde{\Phi }}_s - {\mathfrak {m}}_\epsilon \in B \pm \Vert R_s\Vert _\infty ) \\&\leqslant \mathbb {P}(\max _{\Omega _\epsilon }{\tilde{\Phi }}_s- {\mathfrak {m}}_\epsilon \in B^\kappa ) + \mathbb {P}(\Vert R_s\Vert _\infty \geqslant \kappa ). \end{aligned}$$

Markov’s inequality implies that

$$\begin{aligned} \mathbb {P}(\Vert R_s\Vert _\infty \geqslant \kappa ) \leqslant \frac{\mathbb {E}[\Vert R_s\Vert _\infty ^{2/L} ] }{\kappa ^{2/L}}, \end{aligned}$$

and thus, choosing

$$\begin{aligned} \kappa = \frac{\mathbb {E}[\Vert R_s\Vert _\infty ^{2/L} ] }{\kappa ^{2/L}} \iff \kappa = \big (\mathbb {E}[ \Vert R_s\Vert _\infty ^{2/L}]\big )^{L/(L+2)}, \end{aligned}$$

we get by the definition of the Lévy distance

$$\begin{aligned} d(\mu _0, {\tilde{\mu }}_s) \lesssim \big (\sup _{\epsilon >0} \mathbb {E}[\Vert R_s\Vert _{\infty }^{2/L} ] \big )^{L/(L+2)}. \end{aligned}$$
(6.4)
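
As a small numeric aside (illustration only, with placeholder values for L and the moment \(\mathbb {E}[\Vert R_s\Vert _\infty ^{2/L}]\)), the following Python sketch confirms that this choice of \(\kappa \) minimises the maximum of the two competing terms.

```python
import numpy as np

# Numeric aside (illustration only): the choice kappa = e^{L/(L+2)}, with
# e = E[||R_s||_inf^{2/L}], balances the two terms kappa and e/kappa^{2/L}
# in the bound above, and minimises their maximum over kappa.
L, e = 4.0, 1e-3                                   # illustrative placeholders

kappa_star = e ** (L / (L + 2))
kappas = np.linspace(1e-4, 0.5, 100_000)
objective = np.maximum(kappas, e / kappas ** (2 / L))
print(kappa_star, kappas[np.argmin(objective)])    # agree up to grid error
```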

Taking \(s\rightarrow 0\) shows that the subsequential limit \(\mu _0\) is unique, as it is the unique weak limit of \({\tilde{\mu }}_s\). Thus, it follows that \(\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon \rightarrow \mu _0\) in distribution as \(\epsilon \rightarrow 0\). Moreover, by (6.4) we have \({\tilde{\mu }}_s \rightharpoonup \mu _0\) weakly, and thus

$$\begin{aligned} {\tilde{\mu }}_s((-\infty , x]) \rightarrow \mu _0((-\infty , x]) \end{aligned}$$

for the distribution functions.

It remains to show that \((\textrm{Z}_s)_s\) is tight. Using (6.2) we get for any \(C>0\)

$$\begin{aligned} \begin{aligned}&\mathbb {P}(\max _{\Omega _\epsilon }{\tilde{\Phi }}_s-{\mathfrak {m}}_\epsilon \leqslant x) \geqslant \mathbb {P}(\max _{\Omega _\epsilon }\Phi _0^\text {GFF}- {\mathfrak {m}}_\epsilon \leqslant x - \Vert \Phi _s^\Delta \Vert _\infty ) \\ {}&\geqslant \mathbb {P}(\max _{\Omega _\epsilon }\Phi _0^\text {GFF}- {\mathfrak {m}}_\epsilon \leqslant x - C, \Vert \Phi _s^\Delta \Vert _\infty \leqslant C) \\&= \mathbb {P}( \max _{\Omega _\epsilon }\Phi _0^\text {GFF}- {\mathfrak {m}}_\epsilon \leqslant x- C) - \mathbb {P}(\max _{\Omega _\epsilon }\Phi _0^\text {GFF}- {\mathfrak {m}}_\epsilon \leqslant x-C, \Vert \Phi _s^\Delta \Vert _\infty > C). \end{aligned} \end{aligned}$$
(6.5)

Now, for a given \(\kappa >0\) we use Markov’s inequality and choose C such that

$$\begin{aligned} \sup _{s\geqslant 0} \mathbb {P}(\Vert \Phi _s^\Delta \Vert _\infty > C) \leqslant \kappa /2. \end{aligned}$$

Then we obtain from (6.5)

$$\begin{aligned} -\kappa /2 + \mathbb {P}(\max _{\Omega _\epsilon }\Phi _0^\text {GFF}- {\mathfrak {m}}_\epsilon \leqslant x-C) \leqslant \mathbb {P}(\max _{\Omega _\epsilon }{\tilde{\Phi }}_s-{\mathfrak {m}}_\epsilon \leqslant x). \end{aligned}$$
(6.6)

Let \(\mu _0^\text {GFF}\) be the limiting law of the centred maximum of the discrete Gaussian free field which exists by [14]. Then taking \(\epsilon \rightarrow 0\) in (6.6) together with the assumption (6.3) yields

$$\begin{aligned} -\kappa /2 + \mu _0^\text {GFF}((-\infty , x - C] ) \leqslant \mathbb {E}[e^{-\alpha ^* \textrm{Z}_s e^{-\sqrt{8\pi }x}}]. \end{aligned}$$
(6.7)

From here the argument is analogous to the proof of [7, Lemma 4.1]: assume that the sequence \((\textrm{Z}_s)_s\) is not tight. Then we have

$$\begin{aligned} \exists \kappa>0 :\forall M>0 :\exists s_M :\mathbb {P}(\textrm{Z}_{s_M}>M) >\kappa . \end{aligned}$$

It follows that

$$\begin{aligned} \mathbb {E}[e^{-\alpha ^* \textrm{Z}_{s_M} e^{-\sqrt{8\pi }x}}] \leqslant e^{-\alpha ^* M e^{-\sqrt{8\pi }x}} + \mathbb {P}(\textrm{Z}_{s_M} \leqslant M) \leqslant e^{-\alpha ^* M e^{-\sqrt{8\pi }x}} + (1-\kappa ). \end{aligned}$$

Sending \(M\rightarrow \infty \), (6.7) implies that

$$\begin{aligned} - \kappa /2 + \mu _0^\text {GFF}((-\infty , x-C]) \leqslant 1-\kappa , \end{aligned}$$

which is a contradiction when sending \(x\rightarrow \infty \). \(\quad \square \)

6.2 Approximation of small scale field

Thanks to Lemma 6.2 we may from now on focus on the centred maximum of \({\tilde{\Phi }}_s\), which has a Gaussian small scale field \(\Phi _0^\text {GFF}- \Phi _s^\text {GFF}\) and a non-Gaussian but independent large scale field \(\Phi _s^{\mathcal {P}}\). Similarly to [7, Section 4.2], we replace the small scale field by a collection of massless discrete Gaussian free fields, so that the results in [14] apply. The only difference is the regularisation of the Gaussian free field covariance: in [7] the heat-kernel regularisation is used, i.e.

$$\begin{aligned} \frac{d}{dt} {\tilde{c}}_t^\epsilon = e^{-t(-\Delta ^\epsilon + m^2)/2}, \qquad {\tilde{c}}_t^\epsilon = \int _0^t \frac{d}{ds}\tilde{c}_s^\epsilon \, ds, \end{aligned}$$

while here, we use the Pauli-Villars regularisation (3.1), which implies

$$\begin{aligned} {{\,\textrm{Cov}\,}}(\Phi _0^\text {GFF}- \Phi _s^\text {GFF}) = (-\Delta + m^2 + 1/s)^{-1}. \end{aligned}$$
(6.8)
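
For intuition, the following Python sketch (illustration only; we drop the factor-1/2 convention and work mode by mode) compares the two regularised covariances.

```python
import numpy as np

# Mode-by-mode comparison (illustration only; factor-1/2 convention dropped)
# of the two regularisations of (-Delta + m^2)^{-1}: heat kernel,
# int_0^t e^{-s*lam} ds, versus Pauli-Villars (6.8), 1/(lam + 1/t),
# with lam ranging over eigenvalues of -Delta + m^2.
m = 1.0
lam = (2 * np.pi * np.arange(64)) ** 2 + m**2

def heat_kernel_cov(t):
    return (1.0 - np.exp(-t * lam)) / lam          # int_0^t exp(-s*lam) ds

def pauli_villars_cov(t):
    return 1.0 / (lam + 1.0 / t)                   # Fourier multiplier of (6.8)

for t in [0.01, 0.1, 1.0]:
    gap = np.max(np.abs(heat_kernel_cov(t) - pauli_villars_cov(t)) / heat_kernel_cov(t))
    print(f"t = {t}: max relative gap = {gap:.2f}")
# Both tend to 1/lam as t -> infty; they differ in how the small scales
# (large lam) are damped: exponentially versus polynomially in t*lam.
```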

Therefore, we do not need the additional decomposition (4.17) in [7] involving the function \(g_s\). The following paragraph is completely analogous to [7, Section 4.2], but we include it here to set up the notation and improve readability.

We introduce a macroscopic subdivision of the torus \(\Omega \) as follows: let \(\Gamma \) be the union of horizontal and vertical lines intersecting at the vertices \(\frac{1}{K}\mathbb {Z}^2 \cap \Omega \) which subdivides \(\Omega \) into boxes \(V_i \subset \Omega \), \(i=1,\dots , K^2\) of side length 1/K. We use the notation \(V_i\) for both the subset of \(\Omega \) and the corresponding lattice version as subset of \(\Omega _\epsilon \).

Let \(\Delta _\Gamma \) be the Laplacian on \(\Omega \) with Dirichlet boundary conditions on \(\Gamma \), and let \(\Delta \) be the Laplacian with periodic boundary conditions on \(\Omega \). The domain of \(\Delta \) is the space of 1-periodic functions, and that of \(\Delta _\Gamma \) is the smaller space of 1-periodic functions vanishing on \(\Gamma \). This implies that \(-\Delta _\Gamma \geqslant -\Delta \) and thus,

$$\begin{aligned} (-\Delta +m^2 + 1/s)^{-1} \geqslant (-\Delta _\Gamma +m^2 + 1/s)^{-1} \end{aligned}$$
(6.9)

in the sense of quadratic forms.

Hence, using (6.8), we can decompose the small scale Gaussian field \(\Phi _0^\text {GFF}- \Phi _s^\text {GFF}\) as

$$\begin{aligned} \Phi ^{\text {GFF}}_0-\Phi ^{\text {GFF}}_s {\mathop {=}\limits ^{d}} {\tilde{X}}_{s,K}^f + {\tilde{X}}_{s,K}^c, \end{aligned}$$

where the two fields on the right-hand side are independent Gaussian fields with covariances

$$\begin{aligned} {{\,\textrm{Cov}\,}}({\tilde{X}}_{s,K}^f)&= (-\Delta _\Gamma + m^2 + 1/s)^{-1} \\ {{\,\textrm{Cov}\,}}({\tilde{X}}_{s,K}^c)&= (-\Delta +m^2 + 1/s)^{-1} - (-\Delta _\Gamma +m^2 + 1/s)^{-1}. \end{aligned}$$

Note that for this decomposition, which is analogous to the Gibbs–Markov decomposition of the massless GFF with Dirichlet boundary condition, the Pauli–Villars regularisation is particularly convenient due to (6.9). Using [7, Lemma 4.2] in the exact same form, we see that the maximum of \(\Phi _0^\text {GFF}- \Phi _s^\text {GFF}\) can be replaced by the maximum of \(X_K^f + {\tilde{X}}_{s,K}^c\), where

$$\begin{aligned} {{\,\textrm{Cov}\,}}(X_K^f) = (-\Delta _\Gamma )^{-1}. \end{aligned}$$

This yields a new auxiliary field denoted \(\Phi _s\) with independent decomposition

$$\begin{aligned} \Phi _s = X_K^f + {\tilde{X}}_{s,K}^c+ \Phi _s^{\mathcal {P}}, \end{aligned}$$
(6.10)

which is completely analogous to (4.43) in [7], except that there is no field \(X_s^h\) due to the different choice of the covariance regularisation. In fact, the two fields \(X_K^f\) and \({\tilde{X}}_{s,K}^c\) are exactly the same and thus the covariance estimates for \({\tilde{X}}_{s,K}^c\) in [7, Lemma 4.4] can be used verbatim. The only essential difference is the field \(\Phi _s^{\mathcal {P}}\), but the following statement establishes the same regularity estimates as in [7, Lemma 4.5].

Lemma 6.3

For any \(s>0\) and \(\epsilon \geqslant 0\) the fields \(\Phi _s^\text {GFF}\) and \(\Phi _s^{\mathcal {P}}\) are a.s. Hölder continuous. Moreover, there is \(\alpha \in {(0,1)}\) such that for \(\#\in \{\text {GFF}, {\mathcal {P}}\}\)

$$\begin{aligned} \sup _{\epsilon >0} \mathbb {E}\Big [ \max _{\Omega _\epsilon }|\Phi _s^\#| + \max _{x,y \in \Omega _\epsilon } \frac{|\Phi _s^\#(x) - \Phi _s^\# (y)|}{|x-y|^\alpha } \Big ] < \infty . \end{aligned}$$
(6.11)

Proof

Note that the Fourier coefficients of \(\Phi _s^\text {GFF}\) satisfy

$$\begin{aligned} \mathbb {E}\big [ |{\hat{\Phi }}_s^\text {GFF}(k)|^2 \big ] = O_{s} \Big (\frac{1}{1+|k|^4}\Big ). \end{aligned}$$
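
Indeed, in the continuum case (the discrete case is analogous, with \(|k|^2\) replaced by \(-{\hat{\Delta }}^\epsilon (k)\)), the Pauli–Villars multiplier yields the explicit computation

$$\begin{aligned} \mathbb {E}\big [ |{\hat{\Phi }}_s^\text {GFF}(k)|^2 \big ] = \int _s^\infty \frac{du}{\big (u(|k|^2+m^2)+1\big )^2} = \frac{1}{(|k|^2+m^2)\big (s(|k|^2+m^2)+1\big )} = O_s\Big (\frac{1}{1+|k|^4}\Big ). \end{aligned}$$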

Hence, (6.11) for \(\Phi _s^\text {GFF}\) follows from standard results as stated in [29, Proposition B.2 (i)] (for Hölder continuity) and [14, Lemma 3.5] (for the maximum). For \(\Phi _s^{\mathcal {P}}\) the results follow from the properties of the difference term \(\Phi _s^\Delta \). \(\quad \square \)

The only remaining results where the properties of \(\Phi ^\Delta \) enter are Proposition 4.8 and Proposition 4.9 in [7]. To state these results we remove, for technical reasons, a small neighbourhood of the grid \(\Gamma \) as follows. For \(\delta \in (0,1)\), define

$$\begin{aligned} V_i^{\delta }= \{x \in V_i :{{\,\textrm{dist}\,}}(x, \Gamma ) \geqslant \delta /K \}, \qquad \Omega ^\delta = \bigcup _{i=1}^{K^2} V_i^{\delta }, \qquad \Omega _\epsilon ^{\delta }= \Omega ^\delta \cap (\epsilon \mathbb {Z}^2). \end{aligned}$$
(6.12)

Proposition 6.4

[Version of [14, Proposition 5.1]]. Let \(\Omega _\epsilon ^{\delta }\) be as in (6.12). Then,

$$\begin{aligned} \lim _{\delta \rightarrow 0}\limsup _{K\rightarrow \infty } \limsup _{\epsilon \rightarrow 0} \mathbb {P}(\max _{\Omega _\epsilon ^{\delta }}\Phi _s\ne \max _{\Omega _\epsilon }\Phi _s)=0. \end{aligned}$$

Proposition 6.5

[Version of [14, Proposition 5.2]]. Let \(\Phi _s\) be as in (6.10). Let \(z_i \in V_i^{\delta }\) be such that

$$\begin{aligned} \max _{V_i^{\delta }} X_K^f= X_K^f(z_i) \end{aligned}$$

and let \({\bar{z}}\) be such that

$$\begin{aligned} \max _i \Phi _s(z_i) = \Phi _s({\bar{z}}). \end{aligned}$$

Then for any fixed \(\kappa >0\) and small enough \(\delta >0\),

$$\begin{aligned} \lim _{K\rightarrow \infty } \limsup _{\epsilon \rightarrow 0} \mathbb {P}(\max _{\Omega _\epsilon ^{\delta }} \Phi _s\geqslant \Phi _s({\bar{z}})+\kappa )=0. \end{aligned}$$

Moreover, there is a function \(g:\mathbb {N}\rightarrow \mathbb {R}_0^+\) with \(g(K) \rightarrow \infty \) as \(K\rightarrow \infty \), such that

$$\begin{aligned} \lim _{K\rightarrow \infty } \limsup _{\epsilon \rightarrow 0} \mathbb {P}( X_K^f({\bar{z}})\leqslant {\mathfrak {m}}_{\epsilon K}+g(K))=0. \end{aligned}$$

Proof of Proposition 6.4 and Proposition 6.5

In the case of the sine-Gordon field it was used that \(\Vert \Phi _s^\Delta \Vert _\infty \) is bounded by a deterministic constant and that \(\Phi _s^\Delta \) is Hölder continuous. Thus, the generalisations of these results are immediate when restricting to the event

$$\begin{aligned} E = \{ \Vert \Phi _s^\Delta \Vert _{C^\alpha (\Omega )} < C \}, \end{aligned}$$

whose probability is arbitrarily close to 1 if C is large enough. \(\quad \square \)

6.3 Approximation by \(\epsilon \)-independent random variables

Following [14, Section 2.3] we approximate \(\max _{\Omega _\epsilon }\Phi _s-{\mathfrak {m}}_\epsilon \) by \(G^*_{s,K}= \max _{i}G_{s,K}^i\) where

$$\begin{aligned} G_{s,K}^i= \rho _{K}^i (Y_{K}^i+g(K)) + Z_{s,K}^{c,0}(\textbf{u}_\delta ^i) - \frac{2}{\sqrt{2\pi }}\log K, \end{aligned}$$
(6.13)

and also define

$$\begin{aligned} \textrm{Z}_{s,K}=m_\delta \frac{1}{K^2} \sum _{i=1}^{K^2} (\frac{2}{\sqrt{2\pi }}\log K - Z_{s,K}^{c,0}(\textbf{u}_\delta ^i)) e^{ -2\log K +\sqrt{8\pi }Z_{s,K}^{c,0}(\textbf{u}_\delta ^i) }. \end{aligned}$$
(6.14)

Here, the sequence g(K) is as in Proposition 6.5 and the random variables in (6.13) and (6.14) are all independent and defined as follows; a simulation sketch is given after the list:

  • The random variables \(\rho _{K}^i \in \{0,1\}\), \(i=1,\dots , K^2\), are independent Bernoulli random variables with \({{\mathbb {P}}}(\rho _{K}^i=1)=\alpha ^* m_\delta g(K) e^{-\sqrt{8\pi }g(K)}\) with \(\alpha ^*\) and \(m_\delta \) as in [7, Proposition 4.6].

  • The random variables \(Y_{K}^i \geqslant 0\), \(i=1,\dots , K^2\), are independent and characterised by \({{\mathbb {P}}}(Y_{K}^i\geqslant x)= \frac{g(K)+x}{g(K)}e^{- \sqrt{8\pi }x}\) for \(x\geqslant 0\).

  • The random field \(Z_{s,K}^{c,0}(x)\), \(x\in \Omega \), is a weak limit of the overall coarse field \(Z_{s,K}^c\equiv {\tilde{X}}_{s,K}^c+\Phi _s^{\mathcal {P}}\) as \(\epsilon \rightarrow 0\). The existence of this limit is guaranteed by Corollary 1.2 and [7, Lemma 4.4].

  • The random variables \(\textbf{u}_\delta ^i \in V_i^\delta \), \(i=1,\dots ,K^2\), have the limiting distribution of the maximisers \(z_i\) of \(X_K^f\) in \(V_i^{\delta }\) as \(\epsilon \rightarrow 0\). Thus, \(\textbf{u}_\delta ^i\) takes values in the i-th subbox of \(\Omega =\mathbb {T}^2\) and, scaled to the unit square, its density is \(\psi ^\delta \) as in [7, Proposition 4.6].
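
The following Python sketch (purely illustrative) simulates (6.13) and (6.14): the coarse field \(Z_{s,K}^{c,0}(\textbf{u}_\delta ^i)\) is replaced by i.i.d. centred Gaussians as a stand-in for the weak limit described in the list, and \(\alpha ^*\), \(m_\delta \), the variance and g(K) are placeholder values.

```python
import numpy as np

# Hypothetical simulation sketch of (6.13)-(6.14); for illustration only. The
# coarse field Z_{s,K}^{c,0}(u_delta^i) is replaced by i.i.d. centred Gaussians
# as a stand-in (its actual law is the weak limit described in the list), and
# alpha_star, m_delta, sigma2 and g_K are illustrative placeholders.
rng = np.random.default_rng(3)
K, alpha_star, m_delta, sigma2 = 16, 1.0, 1.0, 0.5
g_K = np.log(K)                                   # some g(K) -> infty
sqrt8pi = np.sqrt(8 * np.pi)

def sample_Y(size):
    """Sample Y with P(Y >= x) = (g_K + x)/g_K * exp(-sqrt(8 pi) x), x >= 0,
    by bisecting the (decreasing) tail function."""
    u = rng.random(size)
    lo, hi = np.zeros(size), np.full(size, 50.0)
    for _ in range(60):
        mid = (lo + hi) / 2
        tail = (g_K + mid) / g_K * np.exp(-sqrt8pi * mid)
        hi = np.where(tail <= u, mid, hi)
        lo = np.where(tail > u, mid, lo)
    return (lo + hi) / 2

n = K * K
rho = rng.random(n) < alpha_star * m_delta * g_K * np.exp(-sqrt8pi * g_K)
Y = sample_Y(n)
Zc = np.sqrt(sigma2) * rng.standard_normal(n)     # stand-in for Z_{s,K}^{c,0}

G = rho * (Y + g_K) + Zc - 2 / np.sqrt(2 * np.pi) * np.log(K)         # (6.13)
Z_sK = m_delta / n * np.sum((2 / np.sqrt(2 * np.pi) * np.log(K) - Zc)
                            * np.exp(-2 * np.log(K) + sqrt8pi * Zc))  # (6.14)
print("G* =", G.max(), "  Z_{s,K} =", Z_sK)
```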

Note that the correction in (6.13) can be understood from

$$\begin{aligned} {\mathfrak {m}}_{\epsilon K} - {\mathfrak {m}}_{\epsilon }= - \frac{2}{\sqrt{2\pi }}\log K + O_K(\epsilon ). \end{aligned}$$

From here the rest of [7, Section 4] can be used without adjustment, as no other properties of the difference field are used. We omit further details.