Abstract
We establish a coupling between the \({\mathcal {P}}(\phi )_2\) measure and the Gaussian free field on the two-dimensional unit torus at all spatial scales, quantified by probabilistic regularity estimates on the difference field. Our result includes the well-studied \(\phi ^4_2\) measure. The proof uses an exact correspondence between the Polchinski renormalisation group approach, which is used to define the coupling, and the Boué–Dupuis stochastic control representation for \({\mathcal {P}}(\phi )_2\). More precisely, we show that the difference field is obtained from a specific minimiser of the variational problem. This allows us to transfer regularity estimates for the small scales of minimisers, obtained using discrete harmonic analysis tools, to the difference field. As an application of the coupling, we prove that the maximum of the \({\mathcal {P}}(\phi )_2\) field on the discretised torus with mesh-size \(\epsilon > 0\) converges in distribution to a randomly shifted Gumbel distribution as \(\epsilon \rightarrow 0\).
1 Introduction
1.1 Model and main result
We study global probabilistic properties of the \({\mathcal {P}}(\phi )_2\) Euclidean quantum field theories on the two-dimensional unit torus \(\Omega =\mathbb {T}^2\). These objects are measures \(\nu ^{\mathcal {P}}\) on the space of distributions \(S'(\Omega )\) that are formally given by
\[ \nu ^{\mathcal {P}}(d\phi ) \propto \exp \Big ( - \int _\Omega {\mathcal {P}}(\phi (x)) \, dx \Big ) \, \nu ^\text {GFF}(d\phi ), \qquad \text{(1.1)} \]
where \({\mathcal {P}}\) is a polynomial of even degree with positive leading coefficient and \(\nu ^\text {GFF}\) is the law of the massive Gaussian free field, i.e. the mean zero Gaussian measure with covariance \((-\Delta + m^2)^{-1}\) for an arbitrary mass \(m>0\) that is fixed throughout this article. The most famous example is when \({\mathcal {P}}(\phi )\) is a quartic polynomial, in which case \(\nu ^{\mathcal {P}}\) is known as the \(\phi _2^4\) measure in finite volume with periodic boundary conditions.
The origin of these measures lies in constructive quantum field theory, where they arise as Wick rotations of interacting bosonic quantum field theories in \(1+1\) Minkowski space-time, see for instance [38] and [33]. Their construction was first achieved in 2D by Nelson in [32], see also the books by Simon [37] and Glimm and Jaffe [24, Section 8]. Indeed, fields sampled from \(\nu ^\text {GFF}\) are almost surely distributions in a Sobolev space of negative regularity, and hence cannot be evaluated pointwise. Therefore, since there is no canonical definition of nonlinear functions of distributions, the density in (1.1) is ill-defined. This can be seen concretely if one imposes a small-scale cut-off. Then the cut-off measures are well-defined, but as the cut-off is removed, one encounters so-called ultraviolet divergences. It is well-known that a suitable renormalisation procedure is needed to remove these divergences.
In order to give meaning to (1.1), we view it as a limit of renormalised lattice approximations. For \(\epsilon >0\) let \(\Omega _\epsilon = \Omega \cap (\epsilon \mathbb {Z}^2)\) be the discretised unit torus and denote \(X_\epsilon = \mathbb {R}^{\Omega _\epsilon }= \{\varphi :\Omega _\epsilon \rightarrow \mathbb {R}\}\). We assume throughout that \(1/\epsilon \in \mathbb {N}\). Moreover, write \(\Delta ^\epsilon \) for the discrete Laplacian acting on functions \(f \in X_\epsilon \) by \(\Delta ^\epsilon f (x) = \epsilon ^{-2} \sum _{y\sim x} \big ( f(y) - f(x) \big )\), where \(y \sim x\) denotes that \(x,y \in \Omega _\epsilon \) are nearest neighbours. Let \(\nu ^{\text {GFF}_\epsilon }\) be the centred Gaussian measure on \(X_\epsilon \) with covariance \((-\Delta ^\epsilon + m^2)^{-1}\), i.e. the law of the massive discrete Gaussian free field on \(\Omega _\epsilon \). Note that as \(\epsilon \rightarrow 0\), we have for all \(x\in \Omega _\epsilon \)
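The logarithmic divergence in (1.2) can be checked numerically. The sketch below computes the pointwise variance \(c_\epsilon \) of the discrete massive GFF by summing the reciprocals of the Fourier multipliers of \(-\Delta ^\epsilon + m^2\); the normalisation (Fourier modes \(e^{ik\cdot x}\) orthonormal for the inner product \(\epsilon ^2\sum _x\)) is an assumption chosen to be consistent with the conventions of Sects. 1.3 and 2.1, since the display (1.2) is not reproduced here.

```python
import numpy as np

def gff_pointwise_variance(n, m=1.0):
    """Pointwise variance c_eps of the discrete massive GFF on the unit
    torus with mesh eps = 1/n, computed from the Fourier multipliers
    lambda(k) = eps^{-2}(4 - 2cos(eps k_1) - 2cos(eps k_2)).
    The normalisation is an assumption, see the lead-in above."""
    j = np.arange(n)                          # eps * k_i = 2*pi*j/n
    theta1, theta2 = np.meshgrid(2 * np.pi * j / n, 2 * np.pi * j / n)
    lam = n**2 * (4 - 2 * np.cos(theta1) - 2 * np.cos(theta2))
    return np.sum(1.0 / (lam + m**2))

# The variance diverges logarithmically as eps -> 0: doubling the
# resolution adds roughly a constant to c_eps.
for n in (64, 128, 256):
    print(n, gff_pointwise_variance(n))
```

In this normalisation the increments between successive doublings are approximately \(\log 2/(2\pi )\), consistent with a divergence of order \(\log \epsilon ^{-1}\).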
To compensate this small scale divergence we renormalise the polynomial \({\mathcal {P}}\) in (1.1) by interpreting it as Wick-ordered. For \(n\in \mathbb {N}\), we define the Wick powers
where \(H_n\) denotes the n-th Hermite polynomial and \(c_\epsilon \) is as in (1.2). For instance, when \(n=4\), we have
Note that the multiplication by \(c_\epsilon ^{n/2}\) ensures that the leading coefficient of the Wick power equals 1.
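As a concrete sketch, assuming the standard convention \({:\,}\phi ^n{:\,}_\epsilon = c_\epsilon ^{n/2}\, H_n(\phi /\sqrt{c_\epsilon })\) with probabilists' Hermite polynomials (the displayed definition is not reproduced here), the Wick powers can be evaluated with `numpy`; for \(n=4\) one recovers \(\phi ^4 - 6c_\epsilon \phi ^2 + 3c_\epsilon ^2\), which is centred under the Gaussian with variance \(c_\epsilon \).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def wick_power(phi, n, c):
    """Wick power :phi^n: = c^{n/2} He_n(phi / sqrt(c)) -- an assumed
    convention under which the leading coefficient equals 1."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                       # select He_n in the He basis
    return c ** (n / 2) * hermeval(phi / np.sqrt(c), coeffs)

# For n = 4: :phi^4: = phi^4 - 6 c phi^2 + 3 c^2.  Its Gaussian mean is
# 3c^2 - 6c*c + 3c^2 = 0, since E[phi^2] = c and E[phi^4] = 3c^2.
phi, c = 1.7, 0.9
print(wick_power(phi, 4, c), phi**4 - 6 * c * phi**2 + 3 * c**2)
```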
It can be shown that as \(\epsilon \rightarrow 0\) the Wick ordered monomials for the Gaussian free field converge to well-defined random variables \({:\,} \Phi ^n {:\,}\) with values in the space of distributions \(S'(\Omega )\). Moreover, the collection \(({:\,} \Phi ^n {:\,})_{n\in \mathbb {N}}\) is orthogonal in \(L^2(\nu ^\text {GFF})\) and satisfies
where \(G = (-\Delta + m^2)^{-1}\) is the Green function of the Laplacian on \(\Omega \). This gives the Wick powers the interpretation of powers of the Gaussian free field.
For general polynomials of the form \({\mathcal {P}}:\mathbb {R}\rightarrow \mathbb {R}, r\mapsto \sum _{k=1}^{N} a_k r^k\) of even degree \(N \in 2\mathbb {N}\), and coefficients satisfying \(a_k \in \mathbb {R}\), \(1 \leqslant k < N\), and \(a_N>0\), we define the Wick ordering
When it is clear from the context or when there is no conceptual difference between the discrete Wick power and the continuum limit, we drop \(\epsilon \) from the notation. Note that, by a simple scaling argument, for the construction of the measures (1.1) we may assume without loss of generality that \({\mathcal {P}}\) has no constant term.
As for the monomials, the Wick ordered polynomials converge to well-defined random variables with values in a space of distributions as \(\epsilon \rightarrow 0\). However, as \(\epsilon \rightarrow 0\) one loses the uniform lower bound of \({:\,} {\mathcal {P}} {:\,}_\epsilon \). Indeed, due to the logarithmic divergence of \(c_\epsilon \) in (1.2), we have for \({:\,} {\mathcal {P}} {:\,}_\epsilon \) as in (1.3)
for some \(n\in \mathbb {N}\). Therefore, the exponential integrability (and hence the construction of (1.1) as a limit) is not immediate. Nelson leveraged the polylogarithmic divergence in \(d=2\) to give a rigorous construction of the continuum object \(\nu ^{\mathcal {P}}\), which can be obtained as a weak limit of the regularised measures \(\nu ^{{\mathcal {P}}_\epsilon }\) on \(X_\epsilon \) defined by
Other constructions also exist, for example via stochastic quantisation techniques [40], the Boué–Dupuis stochastic control representation [5], and martingale methods [28].
Fields sampled from \(\nu ^{\mathcal {P}}\) have similar small scale behaviour to the Gaussian free field on \(\Omega \). Indeed, the two measures are mutually absolutely continuous and, as such, \(\nu ^{\mathcal {P}}\) is supported on a space of distributions, not functions. One of the central themes of this article is to quantify the difference in short-distance behaviour between these two measures, and to see whether the similarities on small scales lead to similarities on a global scale, e.g. universal behaviour of the maxima.
Finally, let us remark that similar Wick renormalisation procedures have been successfully used to construct other continuum non-Gaussian Euclidean field theories in 2D, see for instance [7] for the sine-Gordon field and [4] for the sinh-Gordon field.
Our main result is a probabilistic coupling between the \({\mathcal {P}}(\phi )_2\) field and the Gaussian free field at all scales, which allows us to write the field of interest as a sum of the Gaussian free field and a (non-independent) regular difference field. The methods we use rely on a stochastic control formulation developed for the \(\phi _2^4\) field in [5] and the essentially equivalent non-perturbative Polchinski renormalisation group approach that was used in [7] for the sine-Gordon field. Our point of view in this work resembles that in the latter reference, for which we use a similar notation. Specifically, we construct the \(\epsilon \)-regularised \({\mathcal {P}}(\phi )_2\) field as the solution to the high-dimensional SDE
where \((\dot{c}_t^\epsilon )_{t\in [0,\infty ]}\) is associated to the Pauli-Villars decomposition of the Gaussian free field covariance defined for \(t \in (0,\infty )\) by
and with \(c_0^\epsilon =0\) and \(c_\infty ^\epsilon = (-\Delta ^\epsilon + m^2)^{-1}\), W is a Brownian motion in \(X_\epsilon \), and \(v_t^{\epsilon ,E}\) is the renormalised potential to be defined later. The Pauli-Villars decomposition could be replaced by, for example, a heat-kernel decomposition, but it is technically convenient to work with the former rather than the latter, see Remark 3.1 for a more precise discussion of the rigidity in the choice of scale decomposition.
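The displayed formula for \(\dot{c}_t^\epsilon \) is not reproduced above. A choice consistent with \(c_0^\epsilon = 0\) and \(c_\infty ^\epsilon = (-\Delta ^\epsilon +m^2)^{-1}\), and the one assumed in this sketch, is the Pauli-Villars covariance \(c_t^\epsilon = (-\Delta ^\epsilon + m^2 + 1/t)^{-1}\), so that \(\dot{c}_t^\epsilon = t^{-2}(-\Delta ^\epsilon +m^2+1/t)^{-2}\). On a single Fourier mode with multiplier \(\lambda \) this is scalar, and one can check numerically that the scales integrate back to the full covariance.

```python
import math
from scipy.integrate import quad

def c_t(t, lam, m=1.0):
    """Assumed Pauli-Villars covariance on a Fourier mode with Laplacian
    multiplier lam: c_t = (lam + m^2 + 1/t)^{-1}, so that c_0 = 0."""
    return 1.0 / (lam + m**2 + 1.0 / t)

def cdot_t(t, lam, m=1.0):
    # scale derivative of c_t: t^{-2} (lam + m^2 + 1/t)^{-2}
    return t**-2 / (lam + m**2 + 1.0 / t) ** 2

lam, m = 39.5, 1.0                 # illustrative multiplier lambda^eps(k)
total, err = quad(cdot_t, 0.0, math.inf, args=(lam, m))

# Integrating cdot_t over t in (0, infinity) recovers the full covariance
# (lam + m^2)^{-1} mode by mode, i.e. c_infty = (-Delta + m^2)^{-1}.
print(total, 1.0 / (lam + m**2))
```

The substitution \(s = 1/t\) turns the integral into \(\int _0^\infty (\lambda + m^2 + s)^{-2}\,ds = (\lambda +m^2)^{-1}\), which is what the numerical quadrature confirms.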
The energy cut-off \(E>0\) ensures that the SDE (1.5) is well-posed and admits a strong solution; the cut-off is removed by taking \(E\rightarrow \infty \). The stochastic integral on the right-hand side of (1.5) corresponds to a scale decomposition of the Gaussian free field, i.e. setting
we obtain a Gaussian process \((\Phi _t^{\text {GFF}_\epsilon })_t\) started at \(t=\infty \) with \(\Phi _\infty ^{\text {GFF}_\epsilon } = 0\) and such that \(\Phi _0^{\text {GFF}_\epsilon } \sim \nu ^{\text {GFF}_\epsilon }\). Thus, going back to (1.5), the difference field corresponds to the finite variation term on the right-hand side of the SDE.
We state the main result in the following theorem on the level of regularisations. For \(\alpha \in \mathbb {R}\) let \(H^\alpha (\Omega _\epsilon )\equiv H^\alpha \) be the (discrete) Sobolev space of regularity \(\alpha \), see Sect. 2.2 for a precise definition. Moreover, we denote by \(C_0([0, \infty ), {\mathcal {S}})\) the space of continuous sample paths with values in a metric space \(({\mathcal {S}}, \Vert \cdot \Vert _{\mathcal {S}})\) that vanish at infinity equipped with the topology of compact convergence.
Theorem 1.1
There exists a process \(\Phi ^{{\mathcal {P}}_\epsilon }\in C_0([0,\infty ), H^{-\kappa })\) for any \(\kappa >0\) such that
where the difference field \(\Phi ^{\Delta _\epsilon }\) satisfies
for any \(t_0>0\), and for any \(\alpha \in [0,2)\) and some large enough \(L\geqslant 4\)
Moreover, for any \(t>0\), \(\Phi ^{\text {GFF}_\epsilon }_0-\Phi _t^{\text {GFF}_\epsilon }\) is independent of \(\Phi _t^{{\mathcal {P}}_\epsilon }\).
Let us make two important remarks concerning the optimality of the regularity estimates in Theorem 1.1. First, the restriction to \(\alpha \in [0,2)\) in (1.11) and (1.12) seems to be optimal, at least with respect to our method. Other reasonable choices of \((\dot{c}_t^\epsilon )_{t\in [0,\infty ]}\), see again Remark 3.1, should also lead to the same estimates (1.9), (1.11) and (1.12). Second, we do not expect that the probabilistic regularity estimates on the difference field \(\Phi _t^{\Delta _\epsilon }\), \(t\geqslant 0\) in Theorem 1.1 can be replaced by deterministic ones, in contradistinction to the case of the sine-Gordon field considered in [7]. This is because \(\Phi _t^{\Delta _\epsilon }\) is related to \(\nabla v_t^{\epsilon ,E}(\phi )\), \(\phi \in X_\epsilon \) in (3.6), which is, unlike for the sine-Gordon field, unbounded in \(\phi \) as \(E\rightarrow \infty \). In fact, since the constant L depends linearly on the degree of the polynomial \({\mathcal {P}}\), the bounds in Theorem 1.1 become weaker as the degree of \({\mathcal {P}}\) increases. This suggests that the larger the growth of the nonlinearity in (1.1), the weaker the bounds for the difference field. Note that, in the case of the \(\phi ^4_2\) field, we can take \(L=4\).
As a corollary of Theorem 1.1, we also obtain the following statement for the continuum, i.e. \(\epsilon =0\). We stress that, in the statement below, \(H^\alpha = H^\alpha (\Omega )\) refers to the usual, i.e. continuum, Sobolev space of regularity \(\alpha \) over \(\Omega \).
Corollary 1.2
There exists a process \(\Phi ^{{\mathcal {P}}} \in C_0([0,\infty ), H^{-\kappa })\) for every \(\kappa >0\) such that
where \(\Phi _0^{\mathcal {P}}\) is distributed as the continuum \({\mathcal {P}}(\phi )_2\) field and \(\Phi _0^\text {GFF}\) is distributed as the continuum Gaussian free field. For the difference field \(\Phi ^\Delta \), estimates analogous to those for \(\Phi _t^{\Delta _\epsilon }\) in Theorem 1.1 hold in the continuum Sobolev spaces. Finally, for any \(t>0\), \(\Phi _0^\text {GFF}- \Phi _t^\text {GFF}\) is independent of \(\Phi _t^{\mathcal {P}}\).
As we shall see in the proof of Corollary 1.2, we construct the continuum process \(\Phi ^\Delta \) in (1.13) as a weak limit of \((\Phi ^{\Delta _\epsilon })_\epsilon \) as \(\epsilon \rightarrow 0\) along a subsequence, thereby establishing the existence of \(\Phi ^{\mathcal {P}}\). While the convergence as processes only holds along a subsequence, we prove that every subsequential limit of \((\Phi ^{\Delta _\epsilon })_\epsilon \), and hence of \((\Phi ^{{\mathcal {P}}_\epsilon })_\epsilon \), has the same law.
The key ingredient in the proof of our main result, Theorem 1.1, is an exact correspondence between two different stochastic representations of \({\mathcal {P}}(\phi )_2\): a stochastic control representation, called the Boué–Dupuis representation, and the Polchinski renormalisation group approach, see Sect. 3.4. More precisely, the difference field is directly related to a special minimiser of the stochastic control problem. We use this correspondence to transfer fractional moment estimates on the minimiser to the difference field.
In the case of the sine-Gordon field considered in [7], the proof of the analogous estimates on \(\Phi ^{\Delta _\epsilon }\) relies heavily on deterministic estimates on the gradient of the renormalised potential \(v_t^\epsilon \), which is enabled by a Yukawa gas representation from [16]. Such a representation is not available in the present case, as the nonlinearity \({\mathcal {P}}(\phi )\) is not periodic in the field variables \((\phi _x)_{x\in \Omega _\epsilon }\). Therefore, our approach is more robust in the sense that we do not need to know the gradient of the renormalised potential, for which a uniform bound may not be available, and we do not need the periodicity requirement on the potential.
It would be of interest to extend our approach to other continuum Euclidean field theories. On the one hand, we believe that our approach can be extended to treat fields with more general nonlinearities in the log-density, such as the sinh-Gordon model. On the other hand, by combining our techniques with paracontrolled ansatz techniques developed in [5], it may be possible to analyse the sine-Gordon field up to the same regime as treated in [7]. Note that our techniques would a priori yield probabilistic estimates on the difference field, while the precise analysis of the gradient of the renormalised potential in [7] yields deterministic estimates. It would be interesting to see whether one can recover similar deterministic estimates with our approach. For \(\beta < 4\pi \), this seems possible by using methods developed in [3]. A treatment of the full subcritical regime \(\beta < 8\pi \) using either method would be of great interest. Let us mention that in the setting of fermionic Euclidean field theories, such couplings have been constructed for a \(\phi ^{4}\)-type model in the full subcritical regime in [19] using a Forward-Backward SDE approach. Although the objects treated are not the same (recall that we are interested in bosonic Euclidean field theories), on a high level their general framework is closely related to our point of view here and may be of help in analysing sine-Gordon for \(\beta < 8\pi \).
Finally, we remark that similar couplings can be obtained from stochastic quantisation, for instance with the methods of [30] or [25], which allow one to write the \({\mathcal {P}}(\phi )_2\) field as a sum of a Gaussian free field and a non-Gaussian regular field with values in \(H^1(\Omega )\). However, the couplings obtained in this way do not have the independence property stated in Corollary 1.2. This property together with the fact that our regularity estimates imply \(L^\infty \) bounds for \(\Phi _t^{\Delta }\), \(t\geqslant 0\) enables us to study the distribution of the centred maximum of the field as described below.
1.2 Application to the maximum of the \({\mathcal {P}}(\phi )_2\) field
The strong coupling to the Gaussian free field in Theorem 1.1 is a useful tool to study novel probabilistic aspects of \({\mathcal {P}}(\phi )_2\) theories that go beyond the scope of the current literature. As an illustration, we investigate the global centred maximum of the regularised fields \(\Phi ^{{\mathcal {P}}_\epsilon }\) defined as
It is clear that as \(\epsilon \rightarrow 0\) this random variable diverges, since the limiting field takes values in a space of distributions. For the (massive) Gaussian free field, i.e. the case \({\mathcal {P}}(\phi ) = 0\), this divergence has been quantified in [15] by
Moreover, in this reference it is also shown that the sequence of centred maxima \((M^\epsilon - {\mathfrak {m}}_\epsilon )_\epsilon \) is tight. We remark that these results were initially proved for the massless Gaussian free field on a box with Dirichlet boundary conditions rather than on the torus, but the arguments are not sensitive to the choice of boundary conditions. The minor difference between the prefactor \(1/\sqrt{2\pi }\) in (1.14) and that in [15] and various other references comes from our different scaling of the fields, see (1.2).
It is clear that the coupling in Theorem 1.1 for \(t=0\) together with the properties of \(\Phi ^\Delta \) imply (1.14) and tightness of the centred global maxima for the \({\mathcal {P}}(\phi )_2\) field. Indeed, by the standard Sobolev embedding in \(d=2\), the Sobolev norm in (1.11) can be replaced by the \(L^\infty \)-norm, and hence, the maximum of the \({\mathcal {P}}(\phi )_2\) field and that of the GFF differ by a random variable with finite fractional moment. In particular, this implies that (1.14) also holds for general even polynomials \({\mathcal {P}}\) with positive leading coefficient.
Exploiting also the larger scales \(t>0\) of (1.8) allows us to understand the O(1) terms in \(M^\epsilon - {\mathfrak {m}}_\epsilon \) and to establish the following convergence in distribution as \(\epsilon \rightarrow 0\).
Theorem 1.3
The centred maximum of the \(\epsilon \)-regularised \({\mathcal {P}}(\phi )_2\) field \(\Phi ^{{\mathcal {P}}_\epsilon } \sim \nu ^{{\mathcal {P}}_\epsilon }\) converges in distribution as \(\epsilon \rightarrow 0\) to a randomly shifted Gumbel distribution, i.e.
where \(\textrm{Z}^{{\mathcal {P}}} \) is a nontrivial positive random variable, X is an independent standard Gumbel random variable, and b is a deterministic constant.
The analogous result for the Gaussian free field was proved in [14] and later generalised to log-correlated Gaussian fields in [20], and to the (non-Gaussian) continuum sine-Gordon field in [7]. The proof in the latter reference relies on a coupling between the Gaussian free field and the sine-Gordon field, in essence similar to Theorem 1.1, and a generalisation of all key results in [14] to a non-Gaussian regime. Here we follow a similar strategy to establish Theorem 1.3, verifying that the technical results in [7, Section 4] also hold under the weaker assumptions on the term \(\Phi ^\Delta \) in Theorem 1.1.
It is believed that the limiting law of the centred maximum is universal for \(\Phi ^\epsilon \) belonging to a large class of Gaussian or non-Gaussian log-correlated fields, in the sense that the fluctuations of the centred maximum are of order 1 and, moreover, there is a sequence \(a_\epsilon \) such that
for some positive constants \(c,C>0\) and a positive random variable \(\textrm{Z}\); see for instance [17]. The expectation value in (1.15) is the distribution function of a randomly shifted Gumbel distribution, which is obtained from the deterministic Gumbel distribution function by averaging over the random shift \(\log \textrm{Z}\). In particular, the weak convergence for \(\max _{\Omega _\epsilon }\Phi ^{{\mathcal {P}}_\epsilon }\) in Theorem 1.3 can be equivalently stated as in (1.15) by setting \(a_\epsilon = {\mathfrak {m}}_\epsilon \), \(c = \sqrt{8\pi }\), \(C= e^{\sqrt{8\pi }b}\) and \(\textrm{Z}= \textrm{Z}^{{\mathcal {P}}}\). In recent years there has been substantial progress on the extremal behaviour of log-correlated fields and related models, thus confirming the conjectured behaviour of the maximum. For Gaussian fields, in particular the discrete Gaussian free field, the vast majority of questions centred around the maximum have been answered thanks to the works [14, 20, 35] as well as [9, 10]. As various key methods in the proof of these results only apply to Gaussian fields, the picture in the non-Gaussian regime is less complete. Important recent works on non-Gaussian models include [7, 8, 21, 36] and [41]. There is also a surprising relation between log-correlated processes and the extreme values of characteristic polynomials of certain random matrix ensembles as well as the maximum of the Riemann zeta function in a typical short interval on the critical line, which was first described and investigated in [22] and [23]. Subsequent works in this direction include [1, 18] and [34]. For a survey on recent developments we refer to [2].
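The functional form in (1.15) is exactly the distribution function of a Gumbel random variable shifted by \(c^{-1}\log (C\textrm{Z})\), which can be checked by a short Monte Carlo simulation. In the sketch below the law of \(\textrm{Z}\) (lognormal) and the constant \(C\) are hypothetical placeholders; only \(c=\sqrt{8\pi }\) and the form of the limit law come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
c, C = np.sqrt(8 * np.pi), 1.3     # c = sqrt(8*pi) as in Theorem 1.3; C illustrative
N = 200_000
Z = rng.lognormal(mean=0.0, sigma=0.5, size=N)   # hypothetical law for the shift Z
X = rng.gumbel(size=N)                           # standard Gumbel

# If the centred maximum converges to (log(C*Z) + X)/c, its distribution
# function is E[exp(-C Z e^{-c x})]: the Gumbel CDF averaged over log Z.
M = (np.log(C * Z) + X) / c
for x in (-0.3, 0.0, 0.5):
    empirical = np.mean(M <= x)
    formula = np.mean(np.exp(-C * Z * np.exp(-c * x)))
    print(x, empirical, formula)
```

Conditionally on \(\textrm{Z}\), \(\mathbb {P}(M\leqslant x\mid \textrm{Z}) = \exp (-C\textrm{Z}e^{-cx})\), so the empirical and averaged values agree up to Monte Carlo error.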
The random variable \(\textrm{Z}^{{\mathcal {P}}}\) is believed to be a multiple of the critical multiplicative chaos of the field, also known as the derivative martingale, which can be obtained as the weak limit of
However, this has been established rigorously only for the massless Gaussian free field thanks to its conformal invariance, see [11]. Even though the exact characterisation of \(\textrm{Z}^{\mathcal {P}}\) does not come out of the proof of Theorem 1.3, the proof does show that \(\textrm{Z}^{\mathcal {P}}\) is obtained as the limit of prototypical derivative martingales defined similarly to (1.16).
Nonetheless, the coupling in Theorem 1.1 and Corollary 1.2 immediately gives a construction of the limit of the measure associated to (1.16) as \(\epsilon \rightarrow 0\) through
where \(\textrm{Z}^\text {GFF}(dx)\) is the Gaussian multiplicative chaos associated with \(\Phi _0^\text {GFF}\). We expect that this relation allows one to establish further properties of the multiplicative chaos of the \({\mathcal {P}}(\phi )_2\) field.
Finally, we believe that the coupling in Theorem 1.1 can also be used to prove finer results on the extreme values of the \({\mathcal {P}}(\phi )_2\) field, in particular on the locations and heights of local maxima. The main reference for the local and full extremal process for the Gaussian free field is the work by Biskup and Louidor, see [9, 10] and [11]. Using the coupling in Theorem 1.1 and the generalisation of the key results in these references to the non-Gaussian setting analogously to [27], it seems plausible that the local extremal process of the \({\mathcal {P}}(\phi )_2\) field converges to a Poisson point process on \(\Omega \times \mathbb {R}\) with random intensity measure \(\textrm{Z}(dx)\otimes e^{-\sqrt{8\pi } h}dh\). With additional work, it should also be possible to prove convergence of the full extremal process thanks to the continuity of the difference field.
1.3 Notation
For a covariance \(c^\epsilon \) on \(X_\epsilon \), we denote by \({\varvec{E}}_{c^\epsilon }\) the expectation with respect to the centred Gaussian measure with covariance \(c^\epsilon \). Moreover, we use the standard Landau big-O and little-o notation and write \(f \lesssim g\) to denote that \(f\leqslant O(g)\), and \(f \simeq g\) if \(f \lesssim g\) and \(g\lesssim f\).
When an estimate holds true up to a constant factor depending on a parameter \(\alpha \) say, i.e. if \(f \leqslant C g\) where the constant \(C=C(\alpha )\) depends only on \(\alpha \), then we write \(f\lesssim _{\alpha } g\).
For fields \(f \in X_\epsilon \), we either write f(x) or \(f_x\) for the evaluation of f at \(x\in \Omega _\epsilon \). When the field already carries an index, say t, we write \(f_t(x)\) instead. We also use the words function and field interchangeably for elements of \(X_\epsilon \).
To simplify notation in what follows we will write
for the discrete integral over \(\Omega _\epsilon \). For two functions \(f,g \in X_\epsilon ^\mathbb {C}\equiv \{\varphi :\Omega _\epsilon \rightarrow \mathbb {C}\}\) we define the inner product
For \(p\in [1,\infty )\) we also define the \(L^p\)-norm for \(f\in X_\epsilon ^\mathbb {C}\) by
and for \(p=\infty \) we define
We write \(L^p(\Omega _\epsilon ) = (X_\epsilon , \Vert \cdot \Vert _{L^p})\), \(1\leqslant p\leqslant \infty \) for the discrete \(L^p\) space. Finally, for \(F:X_\epsilon \rightarrow \mathbb {R}\), \(\varphi \mapsto F(\varphi )\), we define \(\nabla F\) as the unique function \(X_\epsilon \rightarrow X_\epsilon \) that satisfies
for all \(\varphi ,g \in X_\epsilon \), where \(D_\varphi F :X_\epsilon \rightarrow \mathbb {R}\) is the Fréchet derivative of F at \(\varphi \), i.e. the unique bounded linear map satisfying
for all \(g \in X_\epsilon \) and where \(o(g) = o(\Vert g\Vert )\) for some norm on \(X_\epsilon \). Note that our convention for \(\nabla F\) differs from the usual gradient by a factor \(\epsilon ^{-2}\) due to the normalised inner product.
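This gradient convention can be illustrated numerically: for a functional built from the normalised sum, the gradient with respect to the pairing \(\langle f,g\rangle = \epsilon ^2\sum _x f_x g_x\) carries no lattice factors. The functional \(F(\varphi ) = \epsilon ^2\sum _x \varphi _x^4\) below is an illustrative choice, not one from the text.

```python
import numpy as np

n = 8
eps = 1.0 / n
rng = np.random.default_rng(3)
phi = rng.standard_normal((n, n))

def inner(f, g, eps):
    # normalised inner product <f, g> = eps^2 sum_x f(x) g(x)
    return eps**2 * np.sum(f * g)

def F(phi, eps):
    # illustrative functional F(phi) = eps^2 sum_x phi_x^4
    return eps**2 * np.sum(phi**4)

# With the normalised pairing, nabla F(phi)_x = 4 phi_x^3: the eps^{-2}
# factor relative to the coordinate gradient is absorbed by the pairing.
gradF = 4 * phi**3

# Finite-difference check of the defining identity D_phi F(g) = <nabla F, g>.
g = rng.standard_normal((n, n))
h = 1e-6
directional = (F(phi + h * g, eps) - F(phi - h * g, eps)) / (2 * h)
print(directional, inner(gradF, g, eps))
```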
2 Discrete Besov Spaces and the Regularity of Wick Powers
In our analysis we require regularity information on various types of random fields on \(X_\epsilon \) that are uniform in the regularisation parameter. Particularly important examples of such random fields are given by discrete analogues of Wick powers for the GFF. Besov spaces allow for a convenient description of the regularity for the latter objects. As such, in this section, we first recall basic notions from discrete Fourier analysis and define discrete analogues of Besov spaces. All inequalities in this section will be uniform in \(\epsilon \) unless otherwise stated.
2.1 Discrete Fourier series and trigonometric embedding
Any function in \(X_\epsilon \) admits a Fourier representation thanks to the periodic boundary conditions. Let \(\Omega _\epsilon ^*=\{ (k_1,k_2) \in 2\pi \mathbb {Z}^2 :-\pi /\epsilon < k_i \leqslant \pi /\epsilon \}\) and \(\Omega ^*= 2\pi \mathbb {Z}^2\) be the Fourier dual spaces of \(\Omega _\epsilon \) and \(\Omega \). Then, for \(f\in X_\epsilon \) and \(x\in \Omega _\epsilon \),
where \({\hat{f}}(k)\in \mathbb {C}\) is the k-th Fourier coefficient of f given by
Since f is real-valued, we have that \({\hat{f}}(k) = \overline{{\hat{f}}(-k)}\). For a given \(f\in X_\epsilon \), the map \({\mathcal {F}}_\epsilon (f):\Omega _\epsilon ^* \rightarrow \mathbb {C}\), \(k\mapsto {\hat{f}}(k)\) is called the (discrete) Fourier transform of f. Similarly, for any function \({\hat{f}} \in X_\epsilon ^* {:}{=} \{ g :\Omega _\epsilon ^* \rightarrow \mathbb {C}\}\), we define the inverse Fourier transform \({\mathcal {F}}_\epsilon ^{-1}\) by
As for the usual Fourier transform on \(\Omega \) and its inverse, which we define analogously to (2.1) and (2.2) by replacing \(\Omega _\epsilon \) by \(\Omega \), it can be easily seen that \({\mathcal {F}}_\epsilon ^{-1} \circ {\mathcal {F}}_\epsilon = \textrm{id}_{X_\epsilon ^\mathbb {C}}\).
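The conventions (2.1) and (2.2) can be realised with the fast Fourier transform. The sketch below assumes the \(\epsilon ^2\)-normalised forward sum and checks the inversion \({\mathcal {F}}_\epsilon ^{-1}\circ {\mathcal {F}}_\epsilon = \textrm{id}\) together with the Plancherel identity for the normalised pairing.

```python
import numpy as np

n = 16
eps = 1.0 / n
rng = np.random.default_rng(1)
f = rng.standard_normal((n, n))          # a field on Omega_eps

# Forward transform with the eps^2-normalised sum (2.2):
# fhat(k) = eps^2 sum_x f(x) e^{-i k.x}, k in Omega_eps^*.
fhat = eps**2 * np.fft.fft2(f)

# Inversion (2.1): f(x) = sum_k fhat(k) e^{i k.x}, i.e. F_eps^{-1} o F_eps = id.
f_back = np.fft.ifft2(fhat) / eps**2
print(np.max(np.abs(f_back.real - f)))   # numerically zero

# Plancherel in the normalised pairing: eps^2 sum_x |f|^2 = sum_k |fhat|^2.
print(eps**2 * np.sum(f**2), np.sum(np.abs(fhat) ** 2))
```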
For a translation invariant operator \(q :X_\epsilon \rightarrow X_\epsilon \), we denote by \({\hat{q}}(k)\), \(k\in \Omega _\epsilon ^*\), its Fourier multipliers, defined by
For instance, it can be shown that the negative lattice Laplacian \(-\Delta ^\epsilon \) on \(\Omega _\epsilon \) has Fourier multipliers
As \(\epsilon \rightarrow 0\) these converge to the Fourier multipliers of the continuum Laplacian given by
and this convergence is quantified by
where \(h(x) = \max _{i=1,2} (1 -x_i^{-2}(2-2\cos (x_i)))\) satisfies \(h(x) \in [0,1-c]\) with \(c = 4/\pi ^2\) for \(|x|\leqslant \pi \) and \(h(x) =O(|x|^2)\) as \(x\rightarrow 0\).
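The convergence of the multipliers can be observed directly: for a fixed frequency \(k\in \Omega ^*\), the discrete multiplier \(\epsilon ^{-2}\big (4-2\cos (\epsilon k_1)-2\cos (\epsilon k_2)\big )\) approaches \(|k|^2\) at rate \(O(\epsilon ^2)\), a sketch of the convergence quantified above.

```python
import numpy as np

def lam_eps(k, eps):
    # Fourier multiplier of -Delta^eps: eps^{-2}(4 - 2cos(eps k1) - 2cos(eps k2))
    return (4 - 2 * np.cos(eps * k[0]) - 2 * np.cos(eps * k[1])) / eps**2

k = (2 * np.pi * 3, 2 * np.pi * 5)       # a fixed frequency in Omega^*
for eps in (1 / 16, 1 / 32, 1 / 64):
    # the gap to the continuum multiplier |k|^2 shrinks like eps^2
    print(eps, lam_eps(k, eps), k[0] ** 2 + k[1] ** 2)
```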
We extend functions on \(\Omega _\epsilon \) onto \(\Omega \) by using the standard trigonometric extension, which is also an isometric embedding \(I_\epsilon :L^2(\Omega _\epsilon ) \rightarrow L^2(\Omega )\), i.e. if \(f\in X_\epsilon \) has Fourier series (2.1), then the extension \(I_\epsilon f\) of f is the unique function \(\Omega \rightarrow \mathbb {R}\) whose Fourier coefficients agree with those of f for \(k\in \Omega _\epsilon ^*\) and vanish for \(k\in \Omega ^*\setminus \Omega _\epsilon ^*\). Note that \(I_\epsilon f\) coincides with f on \(\Omega _\epsilon \).
Conversely, we can restrict a smooth function \(f:\Omega \rightarrow \mathbb {R}\) to \(\Omega _\epsilon \) by restricting its Fourier series
to Fourier coefficients \(k\in \Omega _\epsilon ^*\), i.e. we write
2.2 Discrete Sobolev spaces
The notion of Fourier series can be used to define discrete Sobolev norms in complete analogy to the continuum case. For \(\Phi \in X_\epsilon \) and \(\alpha \in \mathbb {R}\) we define
where \({\hat{\Phi }}(k)\), \(k\in \Omega _\epsilon ^*\) denote the Fourier coefficients of \(\Phi \) as defined in (2.1). Moreover, we denote by \(H^\alpha (\Omega _\epsilon ) = (X_\epsilon , \Vert \cdot \Vert _{H^\alpha (\Omega _\epsilon )})\) the discrete Sobolev space of regularity \(\alpha \). Thus, when using the isometric embedding, the discrete Sobolev norm and the continuum Sobolev norm \(\Vert \cdot \Vert _{H^\alpha (\Omega )}\), defined as in (2.5) except that the sum now runs over \(k\in \Omega ^*\), coincide, i.e. for \(\Phi \in X_\epsilon \)
where now \(H^\alpha (\Omega )\) denotes the Sobolev space of regularity \(\alpha \in \mathbb {R}\). Recall that we have the following standard embedding theorem. Here, \(\Vert \cdot \Vert _{L^p(\Omega )}\) and \(\Vert \cdot \Vert _{H^s(\Omega )}\) denote the continuum norms for functions \(\Omega \rightarrow \mathbb {C}\).
Proposition 2.1
(Sobolev embeddings). Let \(s \geqslant 1\). Then
where
In particular, we have for every \( \alpha > 0\) and \(f:\Omega _\epsilon \rightarrow \mathbb {C}\),
Finally, we also have the following Hölder embedding. Let \(\alpha \in (0,1)\). For a function \(f\in C^\infty (\Omega ,\mathbb {R})\), we define its \(\alpha \)-Hölder norm by
where \(|f|_{C^\alpha }\) denotes the Hölder seminorm. The \(\alpha \)-Hölder space, denoted \(C^\alpha (\Omega )\), is then defined as the completion of \(C^\infty (\Omega ,\mathbb {R})\) with respect to \(\Vert \cdot \Vert _{C^\alpha (\Omega )}\). For discrete functions \(f\in X_\epsilon \), we define
and \(C^\alpha (\Omega _\epsilon ) = (X_\epsilon , \Vert \cdot \Vert _{C^\alpha (\Omega _\epsilon )})\). Then, the following standard embedding from Sobolev spaces into Hölder spaces can be transferred to functions in \(X_\epsilon \).
Proposition 2.2
Let \(\alpha \in (0,1)\) and \(s-1 \geqslant \alpha \). Then
2.3 Discrete Besov spaces
In this section we introduce the important class of Besov spaces and recall their key properties. We state the results below for dimension \(d=2\) though analogous statements hold for general d. Let \(A :=B_{4/3} \setminus B_{3/8}\) be the annulus of inner radius \(r_1 = 3/8\) and outer radius \(r_2=4/3\). Here, we denote by \(B_r = \{ |x| \leqslant r \} \subset \mathbb {R}^2\) the centred ball of radius \(r\geqslant 0\) in \(\mathbb {R}^2\). Let \(\chi , {\tilde{\chi }} \in C^\infty _c(\mathbb {R}^2,[0,1])\), such that
and
and write
For \(\epsilon >0\) define \(j_\epsilon = \max \{ j \geqslant -1 :{{\,\textrm{supp}\,}}\chi _j \subset (-\pi /\epsilon , \pi /\epsilon ]^2 \}\). Note that for \(j \geqslant j_\epsilon \), \({{\,\textrm{supp}\,}}\chi _j\) may intersect \(\partial [-\pi /\epsilon , \pi /\epsilon ]^2\). To avoid ambiguities with the periodisation of \(\chi _j\) onto \(\Omega _\epsilon ^*\), we modify our dyadic partition of unity in (2.6) as follows: for \(j \in \{ -1, \ldots , j_\epsilon \}\) let \(\chi _j^\epsilon \in C^\infty _c(\mathbb {R}^2, [0,1])\) be such that for \(k \in \Omega _\epsilon ^*\) we have
Then we define for \(j\geqslant -1\) the j-th Fourier projector \(\Delta _j\) by
where \({\hat{f}} :\Omega _\epsilon ^* \rightarrow \mathbb {C}, \, k\mapsto {\hat{f}}(k)\) is the Fourier transform of f. We can use this partition of unity to decompose a given function \(f\in X_\epsilon \) into a sum of functions with almost disjoint support in Fourier space and define for any \(\alpha \in \mathbb {R}\) and \(p,q \in [1,\infty ]\)
One can prove that this is indeed a norm on \(X_\epsilon \) and that different choices of \({\tilde{\chi }}, \chi \) yield equivalent norms uniformly in \(\epsilon >0\).
We denote by \({B_{p,q}^{\alpha }}(\Omega _\epsilon ) = (X_\epsilon , \Vert \cdot \Vert _{{B_{p,q}^{\alpha }}})\) the discrete Besov space with parameters p,q and \(\alpha \). Note that \(H^\alpha (\Omega _\epsilon ) = {B_{2,2}^{\alpha }}(\Omega _\epsilon )\) which holds in the sense that the norms are equivalent uniformly in the lattice spacing. Moreover, for any \(\alpha \in \mathbb {R}\), we write \({\mathcal {C}}^\alpha (\Omega _\epsilon ):={B_{\infty ,\infty }^{\alpha }}(\Omega _\epsilon )\).
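For concreteness, the discrete Besov norm can be computed numerically. The following Python sketch is our own minimal illustration: the smooth dyadic partition \(\chi _j\) is replaced by sharp Fourier annuli, which yields an equivalent norm for \(p=2\) (for general \(p\) the smooth partition matters), and all names are ours, not the paper's.

```python
import numpy as np

def besov_norm(f, alpha, p, q):
    """Discrete Besov norm B^alpha_{p,q} of f on a periodic n x n grid.

    Littlewood-Paley blocks with sharp Fourier cutoffs: Delta_j keeps
    the frequencies k with 2^(j-1) <= |k| < 2^j (and j = -1 keeps the
    zero mode).  Finite p and q only.
    """
    n = f.shape[0]
    fhat = np.fft.fft2(f)
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies
    KX, KY = np.meshgrid(k, k, indexing="ij")
    absk = np.hypot(KX, KY)
    jmax = int(np.ceil(np.log2(absk.max() + 1.0)))
    total = 0.0
    for j in range(-1, jmax + 1):
        if j == -1:
            mask = absk < 0.5                  # zero mode only
        else:
            mask = (absk >= 2.0 ** (j - 1)) & (absk < 2.0**j)
        block = np.real(np.fft.ifft2(fhat * mask))
        lp = np.mean(np.abs(block) ** p) ** (1.0 / p)   # normalised L^p norm
        total += (2.0 ** (j * alpha) * lp) ** q
    return total ** (1.0 / q)
```

For a single Fourier mode only one block is nonzero, so raising \(\alpha \) by 1 multiplies the norm by \(2^j\), as the definition predicts.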
We now state useful properties of these spaces. In all estimates below we use \(\lesssim \) to denote an inequality that holds up to a constant which may depend on the parameters of the Besov space but not on \(\epsilon \). For the proofs of these results, we refer to [31, Remarks 8, 10 and 11], [26, Lemma A.2], [4, Theorem 2.13], [26, Lemma A.3], and [4, Theorem 2.14] respectively.
Proposition 2.3
(Immediate embeddings). Let \(\alpha _1, \alpha _2 \in \mathbb {R}\) and \(p_1,p_2,q_1, q_2 \in [1,\infty ]\). Then we have
Proposition 2.4
(Duality). Let \(\alpha _1,\alpha _2 \in \mathbb {R}\) such that \(\alpha _{1}+\alpha _{2}=0\) and let \(p_1,p_2,q_1,q_2 \in [1,\infty ]\) such that \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{q_1} + \frac{1}{q_2} = 1\). Then
Proposition 2.5
(Multiplication inequality). Let \(p,p_{1},p_{2},q,q_1,q_2\in [1,\infty ]\) be such that \(\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\), \(\frac{1}{q} = \frac{1}{q_1} + \frac{1}{q_2}\), and let \(\alpha _{1},\alpha _{2}\in \mathbb {R}\) be such that \(\alpha _{1}+\alpha _{2}>0\). Denote \(\alpha =\min (\alpha _{1},\alpha _{2})\). Then, for any \(\epsilon > 0\) and for all \(f,g \in X_\epsilon \),
In particular, for \(\alpha > 0\) we obtain the following iterated multiplication inequality: for every \(k\in \mathbb {N}\) such that \(k \geqslant 1\),
Proposition 2.6
(Interpolation). Let \(\theta \in [0,1]\), \(p,p_{1},p_{2},q,q_1,q_2\in [1,\infty ]\) satisfying \(\frac{1}{p}=\frac{\theta }{p_{1}}+\frac{1-\theta }{p_{2}}\), \(\frac{1}{q} = \frac{\theta }{q_1} + \frac{1-\theta }{q_2}\), and \(\alpha ,\alpha _{1},\alpha _{2}\in \mathbb {R}\) satisfying \(\alpha =\theta \alpha _{1}+(1-\theta )\alpha _{2}\). Then, for any \(\epsilon > 0\) and \(f \in X_\epsilon \),
Proposition 2.7
(Besov embedding). Let \(\alpha _{1},\alpha _{2}\in \mathbb {R}\) and \(p_{1},p_{2},q\in [1,\infty ]\). Assume that \(\alpha _{2} \leqslant \alpha _{1}-2\big ( \frac{1}{p_{1}}-\frac{1}{p_{2}}\big )\). Then, for any \(\epsilon > 0\) and \(f \in X_\epsilon \),
2.4 Regularity estimates on discrete Wick powers
The relevance of Besov norms in our context comes from the fact that in dimension \(d=2\) the Wick powers of log-correlated fields are distributions in \({B_{p,p}^{-\kappa }}\) for any \(\kappa >0\).
Lemma 2.8
Let \(Y_\infty ^\epsilon \) be the discrete Gaussian free field on \(\Omega _\epsilon \). Then, for any \(\kappa >0\) and \( p \in [1,\infty )\),
Consequently, by Besov embedding (2.10), for any \(n \in \mathbb {N}\), \(\kappa >0\), and \(r > 0\),
Proof
Throughout this proof we drop \(\epsilon >0\) from the notation. By the definition of the Besov norms, we have
where in the second line, we use stationarity together with a standard Wiener chaos estimate (which is a consequence of hypercontractivity), in the third line, we denote the (real-space) kernel of \(\Delta _j\) by \(K_j=K_j^\epsilon \), and in the final line we use Wick’s theorem.
Recall that we have \(\Vert K_j\Vert _{L^{1}(\Omega _\epsilon )}\lesssim 1\) and \(\Vert K_j\Vert _{L^{\infty }(\Omega _\epsilon )}\lesssim 2^{2j}\). This implies, by interpolation, that \(\Vert K_j\Vert _{L^{q_{1}}(\Omega _\epsilon )}\lesssim 2^{j \kappa /2}\) if \(q_{1}>1\) is chosen sufficiently close to 1. Observe that if \(q_{2}<\infty \) is the Hölder conjugate of \(q_{1}\), then \(\sup _{\epsilon >0}\Vert c^n\Vert _{L^{q_{2}}(dxdy)}<\infty \). Thus,
Plugging (2.12) into the sum (2.11), we see that it converges and is bounded uniformly in \(\epsilon \). This completes the proof. \(\quad \square \)
In the sequel we consider the Wick powers of the sum of the Gaussian free field and a regular field. Note that for \(n \in \mathbb {N}\) and \(\varphi \in X_\epsilon \) we have
In practice, we apply this formula in the case where the second moment of some positive regularity Sobolev norm of \(\varphi \) is bounded uniformly in \(\epsilon >0\). It is thus mainly useful when \(\varphi \) admits estimates in a positive regularity norm, rather than a negative regularity distribution norm, that are uniform in \(\epsilon >0\).
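For orientation, the identity in question is the binomial theorem for Wick powers: since \(\varphi \) is deterministic, only the Gaussian factor carries Wick ordering. In our notation it takes the standard form (consistent with how (2.13) is invoked in the proof of Lemma 2.9):

```latex
{:\,}(Y_\infty^\epsilon + \varphi)^n{:\,}_\epsilon
  \;=\; \sum_{k=0}^{n} \binom{n}{k}\, {:\,}(Y_\infty^\epsilon)^k{:\,}_\epsilon\, \varphi^{\,n-k} .
```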
Lemma 2.9
Let \(Y_\infty ^\epsilon \) be the discrete Gaussian free field on \(\Omega _\epsilon \). Then for any \(n \in \mathbb {N}\) and \(\kappa > 0\), and \({\bar{\kappa }} >0\) small enough
uniformly in \(\epsilon > 0\).
Proof
We estimate the terms on the right-hand side of (2.13) separately. For \(k\in \{1,\ldots , n-1\}\) and \(p >2/\kappa \), we have for any \(\delta >0\) by Besov embedding (2.10) and the multiplication inequality (2.8)
Now, by iterating the multiplication inequality (2.8) and Besov embedding (2.10) we have for \(2/\kappa < p \leqslant 2(1+ 1/k) / (\kappa + \delta )\)
Thus, we can further estimate (2.15) by
where we used Young’s inequality in the last step and set \(\bar{\kappa }= \kappa -2/p\). Summing over \(k\in \{1,\ldots ,n-1\}\) and observing that
the estimate (2.14) follows. \(\quad \square \)
3 Stochastic Representations of \({\mathcal {P}}(\phi )_2\)
In this section, we present two stochastic representations of the measures \(\nu ^{{\mathcal {P}}_\epsilon }\): the Polchinski renormalisation group approach and a stochastic control representation via the Boué–Dupuis variational formula. The former underlies the SDE (1.5) that we use to construct the process \(\Phi ^{{\mathcal {P}}_\epsilon }\). We show that there is an exact correspondence between these two approaches. In particular, the minimiser of the variational problem is explicitly related to the difference field \(\Phi ^{\Delta _\epsilon }\) of the Polchinski dynamics. For technical reasons, we introduce a potential cut-off to guarantee the well-posedness of the SDE and to ensure existence of minimisers for the variational problem.
3.1 Pauli–Villars decomposition of the covariance
Let \((c_t^\epsilon )_{t\in [0,\infty ]}\) be a continuously differentiable decomposition of the covariance of the GFF, i.e.
where, for every \(s > 0\), \(\dot{c}_s^\epsilon \) is a positive-semidefinite operator acting on \(X_\epsilon \). The choice of \(\dot{c}_t^\epsilon \) is restricted by the condition on \(c_\infty ^\epsilon \). In this work we use the Pauli-Villars regularisation
Remark 3.1
The reasons for choosing the Pauli-Villars regularisation are twofold: first, it satisfies the required regularity estimates in Sect. 4.1. Second, it is technically convenient with regards to convergence of the maxima in Sect. 6. In particular, it allows us to avoid the introduction of an additional field in the approximation of the small scale field in Sect. 6.2. This choice is, however, not essential, and we expect that other choices which are more natural from an analytic perspective (i.e. satisfying the estimates of Sect. 4.1), such as a heat kernel decomposition, should also work.
We view \(c_t^\epsilon \) and \(\dot{c}_t^\epsilon \) acting on \(X_\epsilon \) as Fourier multipliers. For instance, we have for \(f\in X_\epsilon \)
where the Fourier multipliers of \(\dot{c}_t^\epsilon \) are given by
where \(-{\hat{\Delta }}^\epsilon (k)\) is as in (2.3). We also record the Fourier multipliers for \(q_t^\epsilon \) which is the unique positive semi-definite operator on \(X_\epsilon \) such that \(\dot{c}_t^\epsilon = q_t^\epsilon * q_t^\epsilon \), where \(*\) is the discrete convolution on \(\Omega _\epsilon \). From this defining relation, we immediately deduce that
Remark 3.2
For \(x,y \in \Omega _\epsilon \), set \(c_t^\epsilon (x,y) = (-\Delta ^\epsilon + m^2 + 1/t)^{-1}(x,y)\), where \((x,y)\mapsto (-\Delta ^\epsilon + m^2 + 1/t)^{-1}(x,y)\) is the Green’s function of \((-\Delta ^\epsilon + m^2 +1/t)\). Note that \(c_t^\epsilon (x,y) = C_{t/\epsilon ^2}(x/\epsilon , y/\epsilon )\), where
is the Green’s function for the massive unit lattice Laplacian. It is easy to see that \(c_t^\epsilon (x,y) = c_t^\epsilon (0,x-y)\), i.e. \(c_t^\epsilon \) is stationary. We simply write \(c_t^\epsilon (x-y)= c_t^\epsilon (x,y)\).
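Concretely, by Remark 3.2 the Fourier multipliers are \({\hat{c}}_t^\epsilon (k) = (\lambda _k + m^2 + 1/t)^{-1}\) with \(\lambda _k = -{\hat{\Delta }}^\epsilon (k)\), so that \(\hat{\dot{c}}_t^\epsilon (k) = (1+t(\lambda _k+m^2))^{-2}\) and \({\hat{q}}_t^\epsilon (k) = (1+t(\lambda _k+m^2))^{-1}\). The following Python sketch (a sanity check of our own, for one illustrative eigenvalue) verifies numerically that \(\dot{c}_t^\epsilon \) integrates to the full covariance and that \({\hat{q}}_t^2 = \hat{\dot{c}}_t\):

```python
import numpy as np

def c_hat(t, lam, m2):
    # Pauli-Villars multiplier: c_t = (-Delta + m^2 + 1/t)^{-1} in Fourier
    return 1.0 / (lam + m2 + 1.0 / t)

def c_dot_hat(t, lam, m2):
    # time derivative of c_hat: 1 / (1 + t (lam + m^2))^2
    return 1.0 / (1.0 + t * (lam + m2)) ** 2

def q_hat(t, lam, m2):
    # positive square root, so that dot(c)_t = q_t * q_t in Fourier space
    return 1.0 / (1.0 + t * (lam + m2))

# numerical check that int_0^infty dot(c)_t dt = c_infty = 1/(lam + m^2)
lam, m2 = 4.0, 1.0            # illustrative eigenvalue and mass^2
t = np.logspace(-8, 8, 400001)
y = c_dot_hat(t, lam, m2)
integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
```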
We use the Pauli-Villars decomposition of \(c_\infty ^\epsilon \) to construct a process associated to the discrete Gaussian free field on \(\Omega _\epsilon \) as follows. Let W be a cylindrical Brownian motion in \(L^2(\Omega )\) defined on a probability space \((\mathcal {O}, {\mathcal {F}}, \mathbb {P})\) and denote by \(({\mathcal {F}}_t)_{t \geqslant 0}\) and \(({\mathcal {F}}^t)_{t \geqslant 0}\) the forward and backward filtrations generated by the past \(\{W_s-W_0:s \leqslant t\}\) and the future \(\{W_s-W_t:s\geqslant t\}\), respectively. We assume that \({\mathcal {F}}\) is \(\mathbb {P}\)-complete and that the filtrations are augmented by \(\mathbb {P}\)-null sets. Moreover, we write expectation with respect to \(\mathbb {P}\) as \(\mathbb {E}\). The cylindrical Brownian motion W can be almost surely represented as a random Fourier series via the so-called Karhunen-Loève expansion. More precisely, for \(k \in 2\pi \mathbb {Z}^2\) and \(t \geqslant 0\), let \({\hat{W}}_t (k) = \int _\Omega W_t(x) e^{-i k \cdot x}dx\). Then, almost surely, \(\{ {\hat{W}}(k) :k \in 2\pi \mathbb {Z}^2 \}\) is a set of complex standard Brownian motions, independent up to the constraint \({\hat{W}}(k)=\overline{{\hat{W}}(-k)}\), with \({\hat{W}}(0)\) a real standard Brownian motion, and we can write
where the sum converges uniformly on compact sets in \( C([0,\infty ),H^{-1-\kappa })\) for any \(\kappa >0\), i.e. for any \(T\geqslant 0\), we have as \(r\rightarrow \infty \)
To obtain a Brownian motion in \(\Omega _\epsilon \) we restrict the formal Fourier series of W to \(k\in \Omega _\epsilon ^*\), i.e. for \(x\in \Omega _\epsilon \), we set
Then \((W^\epsilon (x))_{x\in \Omega _\epsilon }\) are independent Brownian motions indexed by \(\Omega _\epsilon \) with quadratic variation \(t/\epsilon ^{2}\), see for instance [7, Section 3.1] for more details. As in (1.7) we define the decomposed Gaussian free field \(\Phi _t^{\text {GFF}_\epsilon }\) by
where \(\dot{c}_t^\epsilon = q_t^\epsilon * q_t^\epsilon \) and \({\hat{q}}_t (k)\) are as in (3.2). Note that \(\Phi ^{\text {GFF}_\epsilon }\) has independent increments and that \(\Phi _0^{\text {GFF}_\epsilon }\sim \nu ^{\text {GFF}_\epsilon }\), i.e. at \(t=0\) we obtain the discrete Gaussian free field on \(\Omega _\epsilon \). Moreover, we emphasise that the process \(\Phi ^{\text {GFF}_\epsilon } = (\Phi ^{\text {GFF}_\epsilon }_t)_{t\geqslant 0}\) is adapted to the backward filtration \(({\mathcal {F}}^t)\).
3.2 Polchinski renormalisation group dynamics for \({\mathcal {P}}(\phi )_2\)
Let \(v_0^\epsilon (\phi ) = \epsilon ^2 \sum _{x\in \Omega _\epsilon } {:\,} {\mathcal {P}}(\phi (x)) {:\,}_\epsilon \) be the interaction for the measure \(\nu ^{{\mathcal {P}}_\epsilon }\) in (1.4). We also refer to this object as the Hamiltonian of the \({\mathcal {P}}(\phi )_2\) field. For the reasons described at the beginning of this section, we consider the following energy cut-off for \(v_0^\epsilon \). For \(E>0\) let \(\chi _E \in C^2(\mathbb {R},\mathbb {R})\) be concave and increasing such that
and moreover \(\chi _E \leqslant \chi _{E'}\) on \([0,\infty )\) for \(E \leqslant E'\) and \(\sup _{E > 0} \Vert \chi _E' \Vert _{\infty } < 2\). Then we define the cut-off Hamiltonian \(v_0^{\epsilon ,E} =\chi _E \circ v_0^{\epsilon }\) and the associated measure \(\nu ^{{\mathcal {P}}_\epsilon , E}\) on \(\Omega _\epsilon \) by
Next, we define the renormalised potential \(v_t^{\epsilon ,E}\) at scale \(t\geqslant 0\) and for \(E\in (0,\infty ]\) by
where we recall that \({\varvec{E}}_{c_t^\epsilon }\) denotes the expectation with respect to the centred Gaussian measure with covariance \(c_t^\epsilon \).
We think of \(v_t^{\epsilon ,E}\) as the effective potential that sees fluctuations on scales larger than the characteristic length scale \(L_t=\sqrt{t} \wedge 1/m\). Indeed, (3.5) can be interpreted as integrating out all small scale parts of the discrete GFF which are associated to the covariance \(c_t^\epsilon \).
The scale evolution of the renormalised potential is further encoded in the Polchinski equation, as stated in the following proposition. For a proof of this result, see for instance [16].
Lemma 3.3
Let \(\epsilon > 0\) and \(E >0 \). Then \(v^{\epsilon ,E}\) is a classical solution of the Polchinski equation
with initial condition \(v_0^{\epsilon ,E}\).
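As a sanity check (a one-mode toy computation of our own, not needed for the sequel), consider a single scalar degree of freedom with covariance \(c_t \in \mathbb {R}\) and quadratic potential \(v_0(\phi ) = \frac{a}{2}\phi ^2\), \(a>0\). The Gaussian expectation defining the renormalised potential can then be evaluated in closed form,

```latex
v_t(\phi) \;=\; -\log \mathbf{E}_{c_t}\!\left[ e^{-\frac{a}{2}(\phi+\zeta)^2} \right]
\;=\; \frac{a\,\phi^2}{2\,(1+a c_t)} \;+\; \frac{1}{2}\log\!\big(1+a c_t\big),
```

and one verifies directly that \(\partial _t v_t = \tfrac{1}{2}\dot{c}_t\, \partial _\phi ^2 v_t - \tfrac{1}{2}\dot{c}_t\, (\partial _\phi v_t)^2\), the scalar form of the Polchinski equation (3.6).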
We record a priori estimates on the gradient and Hessian of \(v_t^{\epsilon ,E}\), where we make heavy use of the potential cut-off. We emphasise that the bounds are uniform in \(\phi \in X_\epsilon \), but not in \(\epsilon >0\) and \(E>0\).
Lemma 3.4
Let \(\epsilon >0\) and \(E \in (0,\infty )\). There exists \(C=C(\epsilon ,E)>0\) such that
where there is an implicit summation over the coordinates of \(\nabla v_t^{\epsilon ,E}\) and \({{\,\textrm{Hess}\,}}v_t^{\epsilon ,E}\) in the norms above.
Proof
We first observe that we have by the chain rule
Since \(v_0^\epsilon \) is continuous in \(\phi \in X_\epsilon \) and since \(|v_0^\epsilon (\phi )| \rightarrow \infty \) as \(\Vert \phi \Vert _{L^\infty (\Omega _\epsilon )} \rightarrow \infty \), we have that \(\nabla v_0^{\epsilon ,E}\) has bounded support. Since the components of \(\nabla v_0^\epsilon \) are polynomials in \(\phi \), (3.7) follows for \(t=0\). For \(t>0\), we obtain by differentiating (3.5)
Now the cut-off \(\chi _E\) allows us to bound \(|v_t^{\epsilon , E}(\phi )| \lesssim _\epsilon E\) and hence, we can estimate
from which (3.7) follows. Similarly, we have
and thus, (3.8) follows for \(t=0\) from the same arguments. For \(t>0\), the statement is obtained by differentiating (3.9), arguing similarly, and using the bounds on \(\nabla v_t^{\epsilon , E}\). \(\square \)
We now introduce the Polchinski dynamics: a high-dimensional SDE driven by \(\Phi _t^{\text {GFF}_\epsilon }\) and with drift given by the gradient of the renormalised potential. We then use the Polchinski equation to show that this provides a coupling between the \({\mathcal {P}}(\phi )_2\) field with cut-off E and the discrete Gaussian free field on \(\Omega _\epsilon \). One of the key properties of this dynamics that makes it useful for studying global probabilistic properties of the \({\mathcal {P}}(\phi )_2\) measure is an independence property between small and large spatial scales. Recall that we denote by \(C_0([0, \infty ), {\mathcal {S}})\) the space of continuous sample paths with values in a metric space \(({\mathcal {S}}, \Vert \cdot \Vert _{\mathcal {S}})\) that vanish at infinity.
Proposition 3.5
For \(\epsilon >0\) and \(E>0\) there is a unique \({\mathcal {F}}^t\)-adapted process \(\Phi ^{{\mathcal {P}}_\epsilon ,E} \in C_0([0,\infty ), X_\epsilon )\) such that
In particular, for any \(t>0\), \(\Phi ^{\text {GFF}_\epsilon }_0-\Phi _t^{\text {GFF}_\epsilon }\) is independent of \(\Phi _t^{{\mathcal {P}}_\epsilon ,E}\). Moreover, \(\Phi _0^{{\mathcal {P}}_\epsilon ,E}\) is distributed as the measure \(\nu ^{{\mathcal {P}}_\epsilon ,E}\) defined in (3.4).
When removing the cut-off, the left-hand side of (3.10) loses meaning as an adapted solution to a backwards SDE. However, we show that the right-hand side can be made sense of in the limit and use this to define the left-hand side. Moreover, the independence property in Proposition 3.5 is preserved. As such, the finite variation integral in (3.10) is one of the main quantities of interest in the remainder of this paper, and we record this notion in the following definition.
Definition 3.6
Let \(\epsilon > 0\) and \(E \in (0,\infty ]\). The approximate difference field \(\Phi ^{\Delta _\epsilon ,E} \in C([0,\infty ), X_\epsilon )\) is the \(({\mathcal {F}}^t)_{t \geqslant 0}\) adapted process defined by
where \(\Phi ^{{\mathcal {P}}_\epsilon , E}\) is the solution to (3.10). In particular,
Proof of Proposition 3.5
We first prove that the SDE is well-defined and has a (pathwise) unique, strong solution on \([0,\infty )\). Then the independence property follows by independence of increments of the underlying cylindrical Brownian motion. Since \(\epsilon >0\) is fixed throughout the proof, we suppress it from the notation when clear.
We first show that the coefficients of the SDE (3.10) are uniformly Lipschitz thanks to the global Hessian bound (3.8). The proof is then completed by arguing as in [7, Theorem 3.1]. For \({\mathcal {D}},{\mathcal {E}}\in C([0,\infty ), X_\epsilon )\), let
By the mean value theorem and the bound (3.8), we have, for \(s\geqslant 0\) and \(\Phi , {\tilde{\Phi }} \in X_\epsilon \),
where \(\Vert \dot{c}_s\Vert \) denotes the spectral norm of the operator \(\dot{c}_s :X_\epsilon \rightarrow X_\epsilon \). Thus, for \({\mathcal {E}}\in C([0,\infty ),X_\epsilon )\), we have that
Suppose first that there are two solutions \(\Phi ^{{\mathcal {P}}}\) and \({\tilde{\Phi }}^{{\mathcal {P}}}\) to (3.10) satisfying \({\mathcal {D}}:=\Phi ^{{\mathcal {P}}}-\Phi ^{\text {GFF}} \in C_0([0,\infty ),X_\epsilon )\) and \({\tilde{{\mathcal {D}}}}:={\tilde{\Phi }}^{{\mathcal {P}}}-\Phi ^{\text {GFF}} \in C_0([0,\infty ),X_\epsilon )\). Then, by (3.14),
Thus, \(f(t) = \Vert {\mathcal {D}}_t-{\tilde{{\mathcal {D}}}}_t \Vert _{L^2}\) is bounded with \(f(t)\rightarrow 0\) as \(t\rightarrow \infty \) and additionally satisfies
Since \(\Vert \dot{c}_s \Vert \lesssim _m \frac{1}{1+ s^2}\), we have \(\int _{0}^\infty O(\Vert \dot{c}_s \Vert ) \, ds < \infty \) and thus, a version of Gronwall’s inequality implies that for \(t\geqslant 0\)
and hence, \({\mathcal {D}}={\tilde{{\mathcal {D}}}}\) on \([0,\infty )\).
That a solution to (3.10) on \([0,\infty )\) exists follows from Picard iteration. For \({\mathcal {D}}\in C([0,\infty ), X_\epsilon )\) and \(t\geqslant 0\) let \(\Vert {\mathcal {D}}\Vert _t = \sup _{s\geqslant t} \Vert {\mathcal {D}}_s \Vert _{L^2}\). Fix \({\mathcal {E}}\in C_0([0,\infty ), X_\epsilon )\) and set \({\mathcal {D}}^0=0\) and \({\mathcal {D}}^{n+1}=F({\mathcal {D}}^n,{\mathcal {E}})\). Then,
and from the elementary identity
applied with \(g(s) = O( \Vert \dot{c}_s \Vert )\), we conclude that
Since the right-hand side of (3.15) is summable, we have that \({\mathcal {D}}^n \rightarrow {\mathcal {D}}^*\) for some \({\mathcal {D}}^* = {\mathcal {D}}^*({\mathcal {E}}) \in C_0([0,\infty ),X_\epsilon )\) in \(\Vert \cdot \Vert _{0}\), and the limit satisfies \(F_s({\mathcal {D}}^*({\mathcal {E}}),{\mathcal {E}}) = {\mathcal {D}}^*({\mathcal {E}})_s\) for \(s\geqslant {0}\). Now, we apply this result for \({\mathcal {E}}=\Phi ^\text {GFF}\) noting that, from the representation (3.3),
which implies that \(\Phi ^\text {GFF}\in C_0([0,\infty ), X_\epsilon )\) a.s. In summary, \(\Phi ^{{\mathcal {P}}, E}_t = {\mathcal {D}}^*(\Phi ^\text {GFF})_t + \Phi ^{\text {GFF}}_t\) is the desired solution on \([0,\infty )\).
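The Picard scheme above can be illustrated by a scalar caricature (entirely our own: \(\dot{c}_s \nabla v_s^{E}\) is replaced by a bounded Lipschitz drift whose time-integrated Lipschitz constant is strictly less than 1, the role of \(\Phi ^{\text {GFF}}\) is played by a decaying forcing, and time is discretised):

```python
import numpy as np

# time grid on [0, T]; the true problem lives on [0, infinity), but the
# drift weight 0.3/(1+s^2) makes the tail beyond T = 50 negligible
T, n = 50.0, 5001
s = np.linspace(0.0, T, n)
ds = s[1] - s[0]
forcing = np.exp(-s)        # stand-in for the driving noise path

def drift(d):
    # bounded Lipschitz drift; the weight mimics ||c_dot_s|| = O(1/s^2)
    # and integrates to 0.3 * arctan(50) < 1, so F below is a contraction
    return 0.3 / (1.0 + s**2) * np.tanh(d + forcing)

def F(d):
    # backward integral F(d)_t = -int_t^T drift(d)_s ds (right Riemann sum)
    g = drift(d)
    tail = np.cumsum(g[::-1])[::-1]           # tail[i] = sum_{j >= i} g[j]
    return -np.concatenate([tail[1:], [0.0]]) * ds

# Picard iteration D^{n+1} = F(D^n) starting from D^0 = 0
d = np.zeros(n)
diffs = []
for _ in range(30):
    d_new = F(d)
    diffs.append(float(np.max(np.abs(d_new - d))))
    d = d_new
```

The iterates converge geometrically in the sup norm and the limit vanishes at the terminal time, mirroring the summable bound (3.15) and membership in \(C_0\).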
We now prove that \(\Phi _0^{{\mathcal {P}},E}\) is distributed as \(\nu ^{{\mathcal {P}},E}\) as defined in (3.4), for which we proceed similarly as in the proof of [7, Theorem 3.2]. Let \(\nu _t^{{\mathcal {P}},E}\) be the renormalised measure defined by
where \({\varvec{E}}_{c_t}\) denotes the expectation of the Gaussian measure with covariance \(c_t\). Then, let \({\tilde{\Phi }}^T\) be the unique strong solution to the (forward) SDE
with initial condition \({\tilde{\Phi }}_0\sim \nu _T^{{\mathcal {P}},E}\) and \(\tilde{W}^{T}_t = W_T-W_{T-t}\). Existence and uniqueness can be seen by the exact same arguments as above for the case \(T=\infty \). Using the Polchinski semigroup \({\varvec{P}}_{s,t}\), \(s\leqslant t\), as defined in [6, (1.3)-(1.4)], to evolve the measure \(\nu _T^{{\mathcal {P}},E}\), it follows that \({\tilde{\Phi }}_T^T\) is distributed as the measure (3.4). Reversing the direction of t and setting \(\Phi _t^T = {\tilde{\Phi }}_{T-t}^T\) we obtain
Note that this yields a coupling of all solutions \(\Phi ^{{\mathcal {P}},E}\), \(\Phi ^T\), \(T>0\). Therefore, we have with \(\Phi ^{{\mathcal {P}},E}_\infty =0\) that
We will show that as \(T\rightarrow \infty \), we have \(\Vert \Phi ^{{\mathcal {P}},E}_0 - \Phi _0^T\Vert _{L^2}\rightarrow 0\) in probability, from which we deduce that \(\Phi ^{{\mathcal {P}},E}_0\sim \nu ^{{\mathcal {P}},E}\). In what follows, we denote by \(\Vert \dot{c}_t\Vert \) the operator norm of \(\dot{c}_t\) when seen as an operator \(L^2(\Omega _\epsilon ) \rightarrow L^2(\Omega _\epsilon )\).
The first, third, and fourth terms on the right-hand side above are independent of t and they converge to 0 in probability as \(T \rightarrow \infty \). Indeed, for the first term this follows from the weak convergence of the measure \(\nu _T^{{\mathcal {P}},E}\) to \(\delta _0\), e.g., in the sense of [6, (1.6)]. The third term is bounded by
which converges to 0 in \(L^1\) as \(T\rightarrow \infty \) by (3.7) and the fact that \(\Vert \dot{c}_t \Vert = O(1/t^2)\). The fourth term is a Gaussian field on \(\Omega _\epsilon \) with covariance matrix \(c_\infty - c_T \rightarrow 0\) as \(T\rightarrow \infty \). Since \(\Omega _\epsilon \) is finite, this Gaussian field trivially converges to 0.
In summary, we have shown that there is \({\mathcal {R}}_T\) such that \(\Vert {\mathcal {R}}_T \Vert \rightarrow 0\) in probability, and
For any \(t\geqslant 0\), we have by (3.13) that
so that we have with \(M_t= \Vert \dot{c}_t \Vert \)
Thus, we have shown that \({\mathcal {D}}_t = \Phi ^{{\mathcal {P}},E}_{t}-\Phi _t^T\) satisfies
Since \(\int _{0}^\infty M_s \, ds < \infty \), the same version of Gronwall’s inequality as above implies that
Since the right-hand side is uniform in \(t\geqslant 0\), we conclude that \(\sup _{t\geqslant 0} \Vert {\mathcal {D}}_t\Vert _{L^2} \rightarrow 0\) in probability as \(T\rightarrow \infty \). \(\quad \square \)
3.3 A stochastic control representation via the Boué–Dupuis formula
We now turn to a stochastic control representation of the renormalised potential \(v_t^{\epsilon ,E}\) defined in (3.5) based on the Boué–Dupuis variational formula, which allows one to express the moment generating function of functionals of Brownian motion as an expectation of the shifted functional plus a drift term. For this, we interpret the drift part of our SDE as a minimiser of the control problem. This correspondence is known as the verification principle in stochastic control theory, see [39, Section 4]. To connect to the Polchinski renormalisation group approach of Sect. 3.2 and also to the notation in [5] we write
for the small scale of the Gaussian free field process \(\Phi ^{\text {GFF}_\epsilon }\). Note that \(Y_t^\epsilon \) is a Gaussian field with covariance \(c_t^\epsilon \), which follows from standard stochastic analysis results. Thus, for \(\phi \in X_\epsilon \), the renormalised potential \(v_t^{\epsilon ,E}\) can be likewise expressed as
The right-hand side is now expressed as a measurable functional of Brownian motions. One can then exploit continuous-time martingale techniques, in particular Girsanov’s theorem, to analyse this expectation. This underlies the stochastic control representation that we now present.
Let \(\mathbb {H}_a\) be the space of progressively measurable (with respect to the backward filtration \({\mathcal {F}}^t\)) processes that are a.s. in \(L^2(\mathbb {R}_0^+ \times \Omega _\epsilon )\), i.e. \(u \in \mathbb {H}_a\) if and only if \(u|_{[t,\infty )}\) is \({\mathcal {B}}([t,\infty )) \otimes {\mathcal {F}}^t\)-measurable for every \(t\geqslant 0\) and
Here \({\mathcal {B}}([t,\infty ))\) denotes the Borel \(\sigma \)-algebra on \([t,\infty )\). The restriction of \(\mathbb {H}_a\) to a finite interval [0, t] is denoted \(\mathbb {H}_a[0,t]\) with the convention \(\mathbb {H}_a[0,\infty ]= \mathbb {H}_a\). We will refer to elements \(u\in \mathbb {H}_a\) as drifts. For \(u\in \mathbb {H}_a\) and \(0\leqslant s \leqslant t \leqslant \infty \) define the integrated drift
with the convention \(I_{0,t}^\epsilon (u) \equiv I_{t}^\epsilon (u)\). The following proposition is the Boué–Dupuis formula for the renormalised potential. We state it in the conditional form in order to be able to draw the correct comparison to the Polchinski renormalisation group approach afterwards.
Proposition 3.7
(Boué–Dupuis formula). Let \(\epsilon >0\). Then the conditional Boué–Dupuis formula holds for the renormalised potential \(v_t^{\epsilon ,E}\), i.e. for \(t\in [0, \infty ]\)
Proof
Since \(Y_t^\epsilon \) is independent of \(\Phi _t^{{\mathcal {P}}_\epsilon ,E}\) we have by standard properties of conditional expectation
From [13, Theorem 8.3], we have that for a deterministic \(\phi \in X_\epsilon \) the unconditional expectation on the right-hand side of this display is equal to
from which the claim follows. \(\quad \square \)
3.4 Correspondence between the Polchinski and the Boué–Dupuis representations
The following proposition describes the equivalence between the Polchinski renormalisation group dynamics and the stochastic control representation via the Boué–Dupuis formula in the presence of a potential cut-off \(E > 0\).
Proposition 3.8
Let \(\epsilon > 0\), \(E\in (0,\infty )\) and let \(\Phi ^{{\mathcal {P}}_\epsilon ,E} \in C_0([0,\infty ), X_\epsilon )\) be the unique strong solution to (3.10). Let \(u^E:[0,t]\times \Omega _\epsilon \rightarrow \mathbb {R}\) denote the process defined by
Then \(u^E\) is a minimiser of the conditional Boué–Dupuis variational formula (3.18). In particular, the relation between \(u^E\) and the difference field \(\Phi ^{\Delta _\epsilon ,E}\) is given by
Proof
Since \((\Phi _t^{{\mathcal {P}}_\epsilon ,E})_{t\in [0,\infty ]}\) is \({\mathcal {F}}^t\)-measurable and continuous, it follows that \(u^{\epsilon ,E} \in \mathbb {H}_a^\epsilon [0,t]\) for any \(t\in [0,\infty ]\). To ease the notation, we drop \(\epsilon \) throughout this proof. Applying Itô’s formula to \(v^E_{T-\tau }(\Phi ^{{\mathcal {P}},E}_{T-\tau })\) for \( \tau \leqslant T<\infty \) and substituting \(t=T-\tau \) thereafter, we obtain
where we used the Polchinski equation (3.6) in the last step. Integrating from 0 to t and taking conditional expectation yields
where we used that for \(t\in [0,\infty ]\)
which holds by (3.7) and the fact that \(\Vert \dot{c}_t \Vert = O(1/t^2)\). Since \(Y_t\) is independent of \({\mathcal {F}}^t\) and \(\Phi ^{{\mathcal {P}},E}_t\) is \({\mathcal {F}}^t\)-measurable, we have by standard properties of conditional expectation
Therefore, we obtain from (3.19)
Rearranging this and using (3.17) show that
which proves that \(u^E_s = -q_s \nabla v^E_s(\Phi ^{{\mathcal {P}},E}_s)\), \(s\in [0,t]\) is a minimiser for (3.18). \(\quad \square \)
4 Fractional Moment Estimate on the Renormalised Potential
In this section we prove a fractional moment estimate on the small-scale behaviour of the renormalised potential. We exploit the connection between the Polchinski dynamics and a minimiser of the Boué–Dupuis variational problem (3.18) as in Proposition 3.8 to transfer estimates on minimisers onto the renormalised potential. First, we prove some a priori bounds on Sobolev norms of the integrated drift.
4.1 Sobolev norms of integrated drifts
Recall that, for \(0\leqslant s\leqslant t \leqslant \infty \) and \(u\in \mathbb {H}_a\), the integrated drift is defined by
and that u is a.s. \(L^2\) integrable on \([0,\infty ]\), i.e. \(\int _0^\infty \Vert u_\tau \Vert _{L^2}^2 d\tau <\infty \) a.s. In Fourier space this condition reads as
where \({\hat{u}}_\tau (k)\), \(k\in \Omega _\epsilon ^*\) denote the Fourier coefficients of \(u_\tau \).
In what follows we will discuss the regularity of \(I_{s,t}^\epsilon (u)\) for different choices of s, t, for which we use the Sobolev norm defined in (2.5). Note that we have
In addition, recall that the Fourier multipliers of \(q_\tau ^\epsilon \) are given by
where \(-{\hat{\Delta }}^\epsilon (k)\) are as in (2.3). Using (2.4), we have that
for \(k\in \Omega _\epsilon ^*\) with \(c=\frac{4}{\pi ^2}\). Hence, we have for \(k\in \Omega _\epsilon ^*\)
The following results establish bounds for Sobolev norms of the large and small scales of \(I^\epsilon (u)\) for a given drift \(u\in \mathbb {H}_a\). For the rest of this subsection we drop \(\epsilon >0\) from the notation.
Lemma 4.1
Let \(\alpha \in [1,2]\). Then, for \(0<t<1\),
Proof
By the Cauchy-Schwarz inequality and (4.1) we have
where we used that \(\sup _{x\geqslant 0}\frac{(1+x)^\beta }{1+tx} {\lesssim _\beta } \frac{1}{t^\beta }\) for \(t<1\) and \(\beta <1\). \(\quad \square \)
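The elementary bound used in the last step can be checked numerically. The sketch below is our own: it takes the supremum over a large finite grid, which suffices since for \(\beta <1\) the ratio tends to 0 as \(x\rightarrow \infty \), and the grid maximum is a lower bound for the true supremum.

```python
import numpy as np

def sup_ratio(t, beta, x_max=1e7, n=200001):
    """Approximate sup_{x >= 0} (1+x)^beta / (1 + t x) on a grid.

    A log-spaced grid plus the point x = 0 resolves the interior
    maximiser x* = (beta - t) / (t (1 - beta)) for t < beta.
    """
    x = np.concatenate([[0.0], np.logspace(-6, np.log10(x_max), n)])
    return float(np.max((1.0 + x) ** beta / (1.0 + t * x)))
```

Multiplying the grid supremum by \(t^\beta \) gives values bounded by a constant uniformly over small \(t\), in line with the estimate \(\lesssim _\beta t^{-\beta }\).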
Lemma 4.2
Let \(\alpha \in [0,1]\). Then,
Proof
Using the same calculations as above, we see that
Taking the supremum over \(t\geqslant 0\) establishes the estimate. \(\quad \square \)
Lemma 4.3
For \(0\leqslant \alpha \leqslant 1\) and \(0\leqslant s\leqslant t\), we have
Proof
By the same arguments that were used in the proof of the previous statements, we obtain
\(\square \)
Lemma 4.4
For \(\alpha \in (1, 2]\) and \(0<s\leqslant t\), we have
Proof
The following calculations are similar to those above; we include them for completeness.
\(\square \)
4.2 Uniform \(L^2\) estimate for minimisers
As an application of the estimates established in the previous section we now show that if a drift minimises the functional in (3.18) for \(t=\infty \), then the expectation of its \(L^2\) norm is finite as stated in the following proposition.
Proposition 4.5
Assume that \({\bar{u}}\) minimises the functional
in \(\mathbb {H}_a\). Then
Before we prove Proposition 4.5, we establish the following auxiliary result, which fixes the numerology arising from the interpolation inequalities and allows us to treat all polynomials \({\mathcal {P}}\) as in (1.3) simultaneously.
Lemma 4.6
Let \(l,N \in \mathbb {N}\) such that \(1\leqslant l \leqslant N-1\). There exist \(\theta _{l,N}\in (0,1)\) and \(\alpha _{l,N},\beta _{l,N},\gamma _{l,N} \in (0,\infty )\) satisfying
such that the following inequalities hold:
Proof
The first inequality can be rewritten as
To reformulate the second condition, we compute
Thus, to satisfy the second inequality in (4.4), we need to choose \(\theta _{l,N}\) such that
which is equivalent to
Hence, both conditions in (4.4) are satisfied, if we choose \(\theta _{l,N} \in (0,1)\) satisfying
which is possible for all specified values of l, N. \(\quad \square \)
Lemma 4.7
For every \(l,N \in \mathbb {N}\) such that \(1 \leqslant l \leqslant N-1\) and for every \(\kappa >0\), there exists \({\tilde{\kappa }}_{l,N} >0\) such that the following estimate holds. For any \(\delta > 0\) sufficiently small there exists \(C>0\) such that, for \(f\in {\mathcal {C}}^{- {\tilde{\kappa }}_{l,N}} \) and \(u\in L^{2}(\mathbb {R}_0^+ \times \Omega _{\epsilon })\), we have, for \(t \in (0,\infty ]\),
Proof
We only prove the statement for \(t \in (0,1]\); the case \(t=\infty \) follows similarly. Let \(\theta _{l,N}\) and \(\alpha _{l,N},\beta _{l,N}, \gamma _{l,N}\) be as in Lemma 4.6. Multiplying the first inequality in (4.4) by \(1/l\) we have
Now, the standard Besov embedding \({B_{p',q'}^{s'}} \hookrightarrow {B_{p,q}^{s}}\) if \(s' > s\), \(p' \geqslant p\), and \(q' \geqslant q\), together with interpolation (2.9) and Lemma 4.4, implies that
where, for a given \(\kappa \), we chose \({\tilde{\kappa }}_{l,N} \equiv {\tilde{\kappa }} = 2\theta _{l,N} \kappa \).
Then, by duality (2.7), the iterated multiplication inequality (2.8) and the definitions of \(\alpha _{l,N}, \beta _{l,N}\) in (4.3), there exists \(C>0\) such that
Now, applying Young’s inequality with \(\alpha _{l,N},\beta _{l,N},\gamma _{l,N}\) and using the fact that \(\gamma _{l,N}/ \alpha _{l,N}>1\), the estimate in (4.5) follows. \(\quad \square \)
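For concreteness, we recall the version of Young's inequality used here, with the three exponents from Lemma 4.6; this assumes, as is standard for this argument, that the conclusion of Lemma 4.6 includes the conjugacy relation for \(\alpha _{l,N},\beta _{l,N},\gamma _{l,N}\).

```latex
% Young's inequality with three exponents: if \alpha , \beta , \gamma \in (1,\infty )
% satisfy 1/\alpha + 1/\beta + 1/\gamma = 1, then for all a, b, c \geqslant 0,
a\,b\,c \;\leqslant \; \frac{a^{\alpha }}{\alpha } + \frac{b^{\beta }}{\beta } + \frac{c^{\gamma }}{\gamma }.
% The condition \gamma _{l,N}/\alpha _{l,N} > 1 from Lemma 4.6 is what makes the
% resulting exponents compatible with the bound (4.5).
```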
Proof of Proposition 4.5
Throughout this proof \(\epsilon > 0\) is fixed and therefore dropped from the notation. The goal is to prove that for any \(\delta > 0\) sufficiently small, there exists \({\bar{C}} > 0\) such that, for all \(u \in \mathbb {H}\),
This allows us to compare the cost function evaluated at a minimiser \({\bar{u}}\) against the cost function evaluated at a competitor drift. For our purposes it suffices to choose this competitor as the zero drift. Indeed, since \({\bar{u}}\) is a minimiser and since \(\chi _E\) is concave and satisfies \(\chi _E(0)=0\), we have
Hence, by (4.6),
Taking the supremum over \(E >0\) and \(\epsilon > 0\) establishes (4.2).
It remains to establish (4.6). Fix \(u \in \mathbb {H}_a\). For \({\mathcal {P}}\) as in (1.3), by repeated use of the triangle inequality, we have that
First, observe that there exists \(c_1 > 0\), depending on the coefficients of \({\mathcal {P}}\), such that
Second, observe that, for any \(\delta > 0\), there exists \(c_2=c_2(\delta ) > 0\), also depending on the coefficients of \({\mathcal {P}}\), such that
Let us estimate the first term on the right-hand side of (4.7). By the triangle inequality and binomial theorem for Wick powers (2.13),
An application of Lemma 4.7 and setting \({\tilde{\kappa }}= \min _{l\leqslant N} {\tilde{\kappa }}_{l,N}\) and \({\tilde{\kappa }}_{l,N}\) as in (4.5) now yields
Set \(\Gamma =\max _{l \leqslant N}\gamma _{l,N}\) and define \(Q = \sum _{k=1}^N \sum _{l=1}^{k} \big (\Vert {:\,} Y_\infty ^l {:\,} \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}} + 1 \big )^\Gamma + \sum _{k=1}^N \Vert {:\,} Y_\infty ^k {:\,} \Vert _{{\mathcal {C}}^{-{\tilde{\kappa }}}}\). Inserting this estimate into (4.10) and combining it with (4.8) and (4.9), we thus obtain the following estimate: there exist \(c> 0\), depending only on the mass m, and \(c_4,c_5 > 0\) such that
where the last inequality follows by Lemma 4.2.
Thus, by the monotonicity of the cut-off, and the fact that \( \chi _E(x) = x\) for \(x \leqslant 0\), (4.11) yields
Hence, by Lemma 2.9, and the a priori estimate on the drift Lemma 4.2, we have that there exists \({\bar{C}} > 0\) such that,
which, after a redefinition of \(\delta \), establishes (4.6). \(\quad \square \)
Remark 4.8
It may seem surprising that the naive choice of comparing the minimiser \({\overline{u}}\) with the 0 drift yields useful information, as opposed to a more carefully chosen competitor. This is a manifestation of the mild ultraviolet divergences in dimension 2 for these field theories. Indeed, a trivial lower bound on the partition functions, uniform in the cutoff, can be obtained via Jensen's inequality and the fact that Wick polynomials are martingales; note that Jensen's inequality coincides exactly with choosing the 0 drift. To extend these techniques to more singular field theories, such as the sine-Gordon model beyond a certain threshold, a more informed choice of competitor would need to be made.
Remark 4.9
The above argument also establishes uniform \(L^{p}\) bounds for the density of the \({\mathcal {P}}(\phi )_{2}\) measure with respect to the Gaussian. Indeed, observe that we have proved that \(F^{\infty ,E}\) is bounded from below uniformly in \(E,\epsilon \). Recall that by (3.18) we have
So, a uniform lower bound on \(F^{\infty ,E}\) establishes a uniform bound on the expectation of the density of \(\nu ^{{\mathcal {P}}_\epsilon ,E}\). The argument in the proof of Proposition 4.5 is also valid when \({\mathcal {P}}\) is replaced by \(p{\mathcal {P}}\), since Wick renormalisations are linear in the coefficients, which implies a uniform \(L^{p}\) bound.
4.3 A fractional moment estimate for small scales
We now turn to a more refined estimate on the conditional second moment of minimisers restricted to finite time-horizons. Here and henceforth, we denote
for \(t\in [0,\infty ]\) and \(E\in (0,\infty ]\) with the convention \(F^{t,E}(u,0) = F^{t,E}(u)\).
Proposition 4.10
Assume that \({\bar{u}}\) minimises the functional \(F^{t,E}(u,\varphi )\) in \(\mathbb {H}_a[0,t]\) and that \(\varphi \in X_\epsilon \) is \({\mathcal {F}}^t\)-measurable. Then, as \(t\rightarrow 0\) and for \(\kappa \) sufficiently small,
where \({\mathcal {W}}_t^\epsilon \geqslant 0\) a.s. for all \(t\geqslant 0\) and \(\sup _{\epsilon >0}\sup _{t \geqslant 0} \mathbb {E}[{\mathcal {W}}_t^\epsilon ] <\infty \), and \(L>0\) only depends on the maximal degree of \({\mathcal {P}}\) and the interpolation parameters chosen in Lemma 4.6.
Proof
From the definition of \(F^{t,E}(u,\varphi )\) we see that we need to bound the conditional expectation of \(v_0^{\epsilon ,E} (Y_\infty ^\epsilon + I_t^\epsilon (u) + \varphi ) \). Similarly to the proof of Proposition 4.5, we compare the cost function evaluated at the minimiser against the cost function evaluated at a competitor, which is again the zero drift, for reasons similar to those outlined in Remark 4.8. To simplify the notation, we will drop \(\epsilon \) from here on. Since \({\bar{u}}\) is a minimiser for \(F^{t,E}(u,\varphi )\), we have \( F^{t,E}({\bar{u}}, \varphi ) - F^{t,E}(0,\varphi ) \leqslant 0\), and therefore
where, for a generic drift \(u \in \mathbb {H}_a[0,t]\), we set
Note that, above, we tacitly use that the polynomial \({\mathcal {P}}\) has no constant term.
The heart of the proof is to show the following estimate on \({\mathcal {A}}\): there exists \(\kappa > 0\) sufficiently small and \(L>0\) such that, for any \(\delta > 0\), there exists \(C>0\) such that, for any \(u \in \mathbb {H}_a[0,t]\),
Then, inserting (4.16) into (4.14) and using that \(\chi _E(a-b) \geqslant \chi _E(a) -b\) for \(a \in \mathbb {R}\) and \(b\geqslant 0\), we obtain
where
Finally, observe that by Jensen’s inequality, the tower property of conditional expectation, and Lemma 2.9, we have
Thus, rearranging (4.17) completes the proof conditional on (4.16).
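The elementary cut-off inequality \(\chi _E(a-b) \geqslant \chi _E(a) - b\) used above follows from the properties of \(\chi _E\) recorded in the proof of Proposition 4.5 (concavity and \(\chi _E(x) = x\) for \(x \leqslant 0\)); a one-line derivation:

```latex
% A concave function that agrees with the identity on (-\infty , 0] has
% difference quotients bounded above by 1. Hence, for a \in \mathbb {R} and b \geqslant 0,
\chi _E(a) - \chi _E(a-b) \;\leqslant \; b,
\qquad \text {equivalently} \qquad
\chi _E(a-b) \;\geqslant \; \chi _E(a) - b.
```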
We now focus on (4.16). Applying Lemma 4.7 with \(f={:\,} (Y_{\infty }+\varphi )^{k-l} {:\,}\) and \(f=1\), we know that there exist \(\kappa , {\tilde{\kappa }} > 0\) sufficiently small and \(L>0\) such that, for any \(\delta > 0\), there exists \(C>0\) such that, for any \(u \in \mathbb {H}_a[0,t]\),
and
In addition, by Lemma 2.9, there exists \({\bar{\kappa }}>0\) sufficiently small, such that
Therefore, inserting (4.20) into (4.18) and summing the result with (4.19), we obtain that there exists \(L>0\) such that
thereby establishing (4.16) up to a redefinition of \(\kappa \). \(\quad \square \)
We now prove the main estimate of this section.
Proposition 4.11
Let \(u^E_t=-q_t^\epsilon \nabla v_t^{\epsilon ,E}(\Phi _t^{\Delta _\epsilon ,E})\) and let L be as in Proposition 4.10. Then for any \( \kappa >0\) sufficiently small, we have
Proof
From Proposition 3.8 we know that \(u^E\) is a minimiser of the Boué–Dupuis variational formula (3.18). Since \(\Phi _t^{{\mathcal {P}}_\epsilon ,E} = \Phi _t^{\Delta _\epsilon ,E} + \Phi _t^{\text {GFF}_\epsilon }\) and \(Y_t^\epsilon + \Phi _t^{\text {GFF}_\epsilon } = Y_\infty ^\epsilon \), we also have that \(u^E\) minimises
in \(\mathbb {H}_a[0,t]\), where \(I_{t,\infty }^\epsilon (u^E) = \Phi _t^{\Delta _\epsilon ,E}\) is \({\mathcal {F}}^t\)-measurable. Hence, by Proposition 4.10 we have for any \(\kappa \) sufficiently small
Thus, for every such \(\kappa \)
for some universal constant \(C>0\). Taking the supremum over \(t\leqslant 1\) and \(\epsilon >0\), the claim follows. \(\quad \square \)
5 Proof of the Coupling to the GFF
This section is devoted to the proof of the main results Theorem 1.1 and Corollary 1.2. Recall that we introduced the notation \(u^E_t = -q_t^\epsilon \nabla v_t^{\epsilon ,E} (\Phi _t^{{\mathcal {P}}_\epsilon ,E})\). By Proposition 3.8 we know that this is a minimiser of the variational problem (3.18). Hence, it satisfies the estimate in Proposition 4.5 and also the estimates in Proposition 4.11.
5.1 Estimates on the difference field \(\Phi ^{\Delta _\epsilon ,E}\)
First, we show how the bound for the minimiser \(u^E\) in Proposition 4.5 implies bounds on the fractional moments of Sobolev norms of the difference field \(\Phi _t^{\Delta _\epsilon ,E} = \int _t^\infty q_\tau ^\epsilon u^E_\tau d\tau \) in (3.10). The main result is the following proposition.
Proposition 5.1
Let \(\alpha \in (0,2)\). Then
The first step towards (5.1) is the following estimate for the large scale of \(\Phi ^{\Delta _\epsilon ,E}\).
Lemma 5.2
Let \(t_0>0\) and \(\alpha \in [1,2)\). Then, for \(t\geqslant t_0\),
Proof
Since \(\Phi _t^{\Delta _\epsilon ,E} = I_{t,\infty }^\epsilon (u^E)\) and since \(\mathbb {E}[\int _0^\infty \Vert u^E_\tau \Vert _{L^2}^2 d\tau ] <\infty \) uniformly in \(\epsilon >0\) and \(E>0\), the claim follows from Lemma 4.1. \(\quad \square \)
Lemma 5.3
Set \(\Phi _{s,t}^{\Delta _\epsilon ,E} = \int _s^t \dot{c}_\tau ^\epsilon \nabla v_\tau ^{\epsilon ,E}(\Phi _\tau ^{{\mathcal {P}}_\epsilon ,E}) d\tau \). Then for \(\alpha \in (1,2)\)
Proof
Again, the proof is an application of the regularity estimates at the end of Sect. 3.3. Noting that \(\Phi _{s,t}^{\Delta _\epsilon ,E} = I_{s,t}^\epsilon (u^E)\), we have by Lemma 4.4 that
from which the claim follows by taking expectation. \(\quad \square \)
Combining Lemma 5.2 and Lemma 5.3 we can now give the proof of Proposition 5.1.
Proof of Proposition 5.1
Let \(\alpha \in (0,2)\) and let \(\kappa = (2-\alpha )/r\) for r large enough. Further, let \((t_n)_n\) be a decreasing sequence of numbers with \(t_n\rightarrow 0\) as \(n\rightarrow \infty \) that will be determined later. Then, using Lemma 5.3 and Proposition 4.11,
Now, choosing \(t_n=2^{-n}\) we see that
and thus, the sum in the last display is finite for the specified values of \(\alpha \) and \(\kappa \). \(\quad \square \)
5.2 Removal of the cut-off: proof of Theorem 1.1
We have collected all results to give the proof of Theorem 1.1 and Corollary 1.2. The main task in this section is to remove the cut-off by taking the limit \(E\rightarrow \infty \).
Proof of Theorem 1.1
We first prove that the sequence of processes \((\Phi ^{\Delta _\epsilon ,E})_E\) is tight. For \(R>0\) and \(\alpha <1\), let
Note that \({\mathcal {X}}_R\) is totally bounded and equicontinuous. Therefore, by the Arzelà-Ascoli theorem, \({\mathcal {X}}_R \subset C_0([0,\infty ),X_\epsilon )\) is precompact, and the closure \(\overline{{\mathcal {X}}_R}\) is compact. Moreover, we have by Lemma 4.2 and Lemma 4.3 that
Therefore, we have for some constant \(C>0\), which is independent of \(\epsilon \) and E,
So, for a given \(\kappa >0\), we can choose R large enough such that
which establishes tightness for the sequence \((\Phi ^{\Delta _\epsilon ,E})_E \subseteq C_0([0,\infty ), X_\epsilon )\).
By Prohorov’s theorem there is a process \( \Phi ^{\Delta _\epsilon }\) and a subsequence \((E_k)_k\) such that \(\Phi ^{\Delta _\epsilon ,E_k} \rightarrow \Phi ^{\Delta _\epsilon }\) in distribution as \(k\rightarrow \infty \). By (3.11) there exists a process \(\Phi ^{{\mathcal {P}}_\epsilon } \equiv \Phi ^{\Delta _\epsilon } + \Phi ^{\text {GFF}_\epsilon }\), such that \(\Phi ^{{\mathcal {P}}_\epsilon ,E_k}\rightarrow \Phi ^{{\mathcal {P}}_\epsilon }\) in distribution as \(k\rightarrow \infty \).
Since \(e^{-v_0^{\epsilon ,E}(\phi )} \rightarrow e^{-v_0^\epsilon (\phi )}\) as \(E\rightarrow \infty \) for every \(\phi \in X_\epsilon \) and since \(e^{-v_0^{\epsilon ,E}} \leqslant e^{C_\epsilon }\) for some constant \(C_\epsilon >0\), we have by dominated convergence for every bounded and continuous \(f:X_\epsilon \rightarrow \mathbb {R}\)
so that \(\nu ^{{\mathcal {P}}_\epsilon ,E}\rightarrow \nu ^{{\mathcal {P}}_\epsilon }\) in distribution. Since \(\Phi _0^{{\mathcal {P}}_\epsilon ,E} \sim \nu ^{{\mathcal {P}}_\epsilon ,E}\) we conclude that \(\Phi _0^{{\mathcal {P}}_\epsilon } \sim \nu ^{{\mathcal {P}}_\epsilon }\) by uniqueness of weak limits. Moreover, since independence is preserved under weak limits, we have that for every \(t>0\), \(\Phi _t^{{\mathcal {P}}_\epsilon }\) is independent of \(\Phi _0^{\text {GFF}_\epsilon } - \Phi _t^{\text {GFF}_\epsilon }\).
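The fact that independence is preserved under weak limits can be justified by a short test-function argument; a sketch, assuming (as here) that the pair converges jointly in distribution:

```latex
% If (X_k, Z_k) \rightarrow (X, Z) in distribution and X_k is independent of
% Z_k for every k, then for all bounded continuous f, g,
\mathbb {E}[f(X) g(Z)]
  = \lim _{k \rightarrow \infty } \mathbb {E}[f(X_k) g(Z_k)]
  = \lim _{k \rightarrow \infty } \mathbb {E}[f(X_k)] \, \mathbb {E}[g(Z_k)]
  = \mathbb {E}[f(X)] \, \mathbb {E}[g(Z)],
% and this factorisation characterises independence of X and Z.
```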
Finally, the bounds (1.9–1.12) follow from the respective estimates on \(\Phi ^{\Delta _\epsilon ,E}\) and the fact that the norms are continuous maps from \(C_0([0,\infty ), X_\epsilon )\) to \(\mathbb {R}\). For instance, (1.11) is proved by
where the last display is finite by Proposition 5.1. \(\quad \square \)
5.3 Lattice convergence: proof of Corollary 1.2
In this section we prove that as \(\epsilon \rightarrow 0\) the processes \((\Phi ^{\Delta _\epsilon })_\epsilon \) converge along a subsequence \((\epsilon _k)_k\) to a continuum process \(\Phi ^{\Delta _0}\). In order to obtain a continuum process \(\Phi ^{{\mathcal {P}}_0}\) from the sequence \((\Phi ^{{\mathcal {P}}_\epsilon })_\epsilon \), we also need the convergence of the decomposed Gaussian free field \(\Phi ^{\text {GFF}_\epsilon }\), which is the content of Lemma 5.4 below. Define for \(t\geqslant 0\)
Note that for \(t>0\) the convergence of the sum is understood in \(H^\alpha \), \(\alpha <1\), while for \(t=0\) it is in \(H^\alpha \) for \(\alpha <0\).
Lemma 5.4
Let \(I_\epsilon :L^2(\Omega _\epsilon ) \rightarrow L^2(\Omega )\) be the isometric embedding defined in Sect. 2.1. Then, for any \(t_0 > 0\) and \(\alpha < 1\), we have
Moreover, if \(t_0=0\) we have for \(\alpha <0\)
Proof
Note that the process \(I_\epsilon \Phi ^{\text {GFF}_\epsilon } - \Phi ^{\text {GFF}_0}\) is a backward martingale adapted to the filtration \({\mathcal {F}}^t\) with values in \(H^\alpha \) with \(\alpha <1\) for \(t_0>0\) and \(\alpha <0\) for \(t_0=0\). Thus, the \(H^\alpha \) norm of this process is a real valued submartingale. Therefore, we have by Doob’s \(L^2\) inequality,
so it suffices to prove that the right-hand side converges to 0 for the specified values of \(t_0\) and \(\alpha \).
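For reference, the standard form of the \(L^2\) maximal inequality being invoked is the following (stated for a martingale \(M\) whose \(H^\alpha \) norm is a submartingale; here it is applied to the backward martingale \(I_\epsilon \Phi ^{\text {GFF}_\epsilon } - \Phi ^{\text {GFF}_0}\) after reversing time):

```latex
% Doob's L^2 maximal inequality: if (\Vert M_t \Vert _{H^\alpha })_t is a
% nonnegative submartingale bounded in L^2, then
\mathbb {E}\Big [ \sup _{t \geqslant t_0} \Vert M_t \Vert _{H^\alpha }^2 \Big ]
  \;\leqslant \; 4 \, \sup _{t \geqslant t_0} \mathbb {E}\big [ \Vert M_t \Vert _{H^\alpha }^2 \big ],
% so the supremum over all scales t \geqslant t_0 is controlled by fixed-time
% second moments, which are then computed in Fourier space.
```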
To this end, we consider the difference of the Fourier coefficients \({\hat{q}}^\epsilon _t(k)\) and \({\hat{q}}_t^0(k)\). By (2.4), we have that
where we recall that \(c =4/\pi ^2\) as below (2.4). By the definition of the Sobolev norm, we have
Taking expectation and using the estimate (5.5) for the first sum yields
Further note that \(h^2(\epsilon k) = O\big ((\epsilon |k|)^\delta \big )\) for any \(\delta <4\). We can now discuss the convergence to 0 as \(\epsilon \rightarrow 0\) for \(\alpha \) and \(t_0\) as above. For \(t_0>0\) the first sum on the right-hand side can be bounded by
The last sum is finite for \(\alpha <1\) and \(\delta \) small enough (depending on \(\alpha \)), and hence vanishes as \(\epsilon \rightarrow 0\). For the second sum on the right-hand side of (5.6) we similarly have
which is finite uniformly in \(\epsilon \) for \(\alpha <1\), and hence converges to 0 as \(\epsilon \rightarrow 0\). Together with (5.7) this shows the convergence in (5.3).
For the proof of (5.4), we note that both sums on the right-hand side lose a term of order \(O(|k|^{-2})\) when \(t_0=0\). Using the same arguments as for the case \(t_0>0\), we find that the corresponding sums are finite for \(\alpha <0\) in this case. \(\quad \square \)
The next result is the convergence of the difference field \(\Phi ^{\Delta _\epsilon }\) along a subsequence \((\epsilon _k)_k\), and its proof is almost identical to the removal of the cut-off in the proof of Theorem 1.1. Recall that we denote by \(C_0([0,\infty ), {\mathcal {S}})\) the set of all continuous processes with values in a metric space \(({\mathcal {S}}, \Vert \cdot \Vert _{{\mathcal {S}}})\) that vanish at \(\infty \) and that \(I_\epsilon :L^2(\Omega _\epsilon ) \rightarrow L^2(\Omega )\) is the isometric embedding.
Proposition 5.5
Let \(\alpha <1\). Then \((I_\epsilon \Phi ^{\Delta _\epsilon })_\epsilon \) is a tight sequence of processes in \(C_0([0,\infty ), H^{\alpha })\). In particular, there is a process \(\Phi ^{\Delta _0} \in C_0([0,\infty ), H^{\alpha })\) and a subsequence \((\epsilon _k)_k\), \(\epsilon _k \rightarrow 0\) as \(k\rightarrow \infty \) such that the laws of \(\Phi ^{\Delta _{\epsilon _k}}\) on \(C_0([0,\infty ), H^{\alpha })\) converge weakly to the law of \(\Phi ^{\Delta _0}\).
Proof
For \(R>0\) and \(\alpha <1\), let
As in the proof of Theorem 1.1 the closure \(\overline{{\mathcal {X}}_R}\) is compact by the Arzelà-Ascoli theorem. Moreover, we have
and thus, by the weak convergence of \((\Phi ^{\Delta _\epsilon ,E})_E\) as \(E\rightarrow \infty \) (along a subsequence \((E_k)_k\)), we have for some constant \(C>0\), which is independent of \(\epsilon \) and E,
So, for a given \(\kappa >0\), we can choose R large enough such that
which establishes tightness for the sequence \((I_\epsilon \Phi ^{\Delta _\epsilon })_\epsilon \subseteq C_0([0,\infty ), H^{\alpha })\). The existence of a weak limit \(\Phi ^{\Delta _0}\) then follows by Prohorov’s theorem. \(\quad \square \)
Proof of Corollary 1.2
Since \(\Phi ^{\Delta _{\epsilon _k}} \rightarrow \Phi ^{\Delta _{0}}\) in distribution as \(k\rightarrow \infty \), we also have that there exists a process \(\Phi ^{{\mathcal {P}}_0} \equiv \Phi ^{\Delta _0} + \Phi ^{\text {GFF}_0}\), such that \(\Phi ^{{\mathcal {P}}_{\epsilon _k}} \rightarrow \Phi ^{{\mathcal {P}}_{0}}\) in distribution as \(k\rightarrow \infty \). Moreover, as \(\epsilon \rightarrow 0\), we have that \(\nu ^{{\mathcal {P}}_\epsilon } \rightarrow \nu ^{{\mathcal {P}}}\), where \(\nu ^{\mathcal {P}}\) is the continuum \({\mathcal {P}}(\phi )_2\) measure, see also Proposition 5.6 below.
Finally, the estimates on the norms of \( \Phi ^{\Delta _0}\) and the independence of \(\Phi _t^{{\mathcal {P}}_0}\) and \(\Phi _0^{\text {GFF}_0} - \Phi _t^{\text {GFF}_0}\) follow from the convergence in distribution similarly as in the proof of Theorem 1.1. \(\quad \square \)
5.4 Uniqueness of the limiting law for fixed \(t > 0\)
For the discussion of the maximum, we need a refined statement on the convergence of \((\Phi _t^{{\mathcal {P}}_\epsilon })_\epsilon \) for \(t>0\) as \(\epsilon \rightarrow 0\). More precisely, we prove that the law of the limiting field \(\Phi _t^{{\mathcal {P}}_{0}}\) for a fixed \(t>0\) does not depend on the subsequence. By the same arguments that led to (5.2), we have that \(\Phi _t^{{\mathcal {P}}_\epsilon }\) is distributed as the renormalised measure \(\nu _t^{{\mathcal {P}}_\epsilon }\) defined by
where \(F:X_\epsilon \rightarrow \mathbb {R}\), and \(v_t^\epsilon \) is the renormalised potential defined in (3.5) for \(E=\infty \). Let \((I_\epsilon )_*\nu _t^{{\mathcal {P}}_\epsilon }\) denote the pushforward measure of \(\nu _t^{{\mathcal {P}}_\epsilon }\) under the isometric embedding \(I_\epsilon \). Then we have the following convergence to a unique limit as \(\epsilon \rightarrow 0\).
Proposition 5.6
As \(\epsilon \rightarrow 0\), we have for \(t>0\) that, as measures on \(H^\alpha (\Omega )\) for every \(\alpha <1\), \((I_{\epsilon })_*\nu _t^{{\mathcal {P}}_\epsilon }\) converges weakly to \(\nu _t^{\mathcal {P}}\) given by
for \(F:H^\alpha (\Omega ) \rightarrow \mathbb {R}\) bounded and measurable, and where \(e^{-v_\infty ^0(0)} = \mathbb {E}\big [ e^{-\int _{\Omega } {:\,} {\mathcal {P}}(Y_{\infty }) {:\,}dx}\big ]\).
Moreover, for \(t=0\), the weak convergence \((I_{\epsilon })_*\nu _0^{{\mathcal {P}}_\epsilon } \rightarrow \nu _0^{\mathcal {P}}\) holds as measures on \(H^\alpha (\Omega )\) for any \(\alpha <0\) and with \(\nu _0^{\mathcal {P}}\) defined by (5.9) with \(t=0\) and \(F:H^\alpha (\Omega ) \rightarrow \mathbb {R}\) for \(\alpha <0\).
Remark 5.7
Note that, although \({:\,} {\mathcal {P}}(Y_\infty ) {:\,}\) is almost surely a distribution of negative regularity and not a function, the expression \(\int _\Omega {:\,} {\mathcal {P}}(Y_\infty ) {:\,}dx\) makes sense as an abuse of notation to denote the duality pairing between a distribution and a constant function (which is smooth on the torus). Furthermore, we sometimes include the spatial argument of the distribution as a further abuse of notation, e.g. \(\int _\Omega Y_\infty (x) dx\), not to be confused with a pointwise evaluation (which may not exist).
We first prove that the discrete potential, suitably extended, converges to the continuum potential in \(L^2\).
Lemma 5.8
As \(\epsilon \rightarrow 0\), we have
Proof
It is convenient to introduce an alternative extension operator to the trigonometric extension that behaves well under commutation with products. We choose the piecewise constant extension of a function defined on \(\Omega _\epsilon \) to a function defined on \(\Omega \) that is piecewise constant on \(x + \epsilon (-1/2, 1/2]^2\) for all \(x \in \Omega _\epsilon \subseteq \Omega \). Indeed, for any \(f \in X_\epsilon \), define \(E_\epsilon f :\Omega \rightarrow \mathbb {R}\) for \(x \in \Omega \) by
We identify \(Y^{\epsilon }_{\infty }\) with \(E_\epsilon Y^{\epsilon }_{\infty }\) and regard its covariance operator \(c^\epsilon =c_\infty ^\epsilon \) as an operator acting on such piecewise constant functions. In the following computation, we abuse notation and evaluate the distribution \(Y_{\infty }\) pointwise. Although such a pointwise evaluation alone is a priori ill-defined, the covariance \(\mathbb {E}[Y^{\epsilon }_{\infty }(x) Y_{\infty }(y)]\) for \(x,y \in \Omega _\epsilon \) yields a well-defined expression, as our computation below shows. This can be made rigorous by first approximating \(Y_{\infty }\) with smooth fields and then removing the approximation. We omit this to lighten the notation. Thus, for \(x,y \in \Omega \),
as \(\epsilon \rightarrow 0\) and where we recall that \(c=c_\infty ^0\) is the kernel of \((-\Delta + m^2 )^{-1}\).
To prove (5.10), it is sufficient to show that as \(\epsilon \rightarrow 0\)
Note that, by Wick’s theorem and an abuse of notation regarding evaluating distributions pointwise as above, we have
Hence, by expanding, we have that the expression on the left-hand side of (5.11) is equal to
which converges to 0 by dominated convergence. \(\quad \square \)
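The computation above rests on the orthogonality relation for Wick powers of jointly Gaussian random variables, a standard fact from Gaussian analysis:

```latex
% Orthogonality of Wick powers: for jointly Gaussian, mean-zero X and Y,
\mathbb {E}\big [ \, {:\,} X^{l} {:\,} \; {:\,} Y^{l'} {:\,} \big ]
  \;=\; \delta _{l,l'} \, l! \, \big ( \mathbb {E}[XY] \big )^{l},
% so expanding the square in (5.11) produces only terms diagonal in the Wick
% degree, each a power of the covariances computed above.
```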
We need the following exponential integrability lemma, which follows from a by-now standard argument due to Nelson, see [24, Chapter 9.6]. Note that one may also prove this using the Boué–Dupuis formula and estimates similar to those in Sect. 4.2, see Remark 4.9. The convergence statement below then follows by Vitali’s theorem as stated in [12, Theorem 4.5.4] and the convergence (5.10).
Lemma 5.9
For any \(0\leqslant p<\infty \), we have
In particular, as \(\epsilon \rightarrow 0\)
Combining Lemma 5.8 and Lemma 5.9, we can now give a proof of Proposition 5.6.
Proof of Proposition 5.6
Recall from (5.8) that the renormalised measure \(\nu _t^{{\mathcal {P}}_\epsilon }\) is defined by
where \(F:X_\epsilon \rightarrow \mathbb {R}\) is bounded and continuous, and \(v^\epsilon _t\) is the renormalised potential. By the definition of the pushforward measure and the renormalised potential we obtain that, for \(F:H^\alpha (\Omega ) \rightarrow \mathbb {R}\), bounded and continuous,
Now, we have by (5.3) that \(I_{\epsilon }(Y^{\epsilon }_{\infty }-Y^{\epsilon }_{t})\) converges to \(Y_{\infty }-Y_{t}\) in \(L^2\) with respect to the norm of \(H^\alpha (\Omega )\) for any \(\alpha <1\). Moreover, we have by (5.10) that \(v_0^\epsilon (Y_\infty ^\epsilon ) \rightarrow v_0^0(Y_\infty )\) in \(L^2\). Take any subsequence, which we continue to denote as \(\epsilon \). Then, there is a further subsequence \((\epsilon _k)_k\), along which we have
almost surely, where we also used that F is continuous with respect to the norm on \(H^\alpha \). Since F is bounded, we have by (5.12) that
is uniformly integrable. Hence, by Vitali’s theorem, the convergence in (5.14) holds in \(L^1\), i.e.
In summary, we have shown that for every subsequence of \(\epsilon \) there is a further subsequence \((\epsilon _k)_k\) such that (5.15) holds, which shows that (5.15) holds along the full sequence \(\epsilon \rightarrow 0\).
For the case \(t=0\) we follow the same arguments as for \(t>0\), but now we take \(F:H^\alpha (\Omega ) \rightarrow \mathbb {R}\) for \(\alpha <0\) and use (5.4). \(\quad \square \)
6 Convergence in Law for the Maximum of \({\mathcal {P}}(\phi )_2\)
In this section we use the results on the difference field \(\Phi ^\Delta \) and prove that the maximum of the \({\mathcal {P}}(\phi )_2\) field converges in distribution to a randomly shifted Gumbel distribution. The analogous result was recently established for the sine-Gordon field in [7, Section 4]. The main difficulty in this reference is to deal with the non-Gaussian and non-independent term \(\Phi _0^\Delta \), which requires generalising and extending several key results of [14]. In the present case the combination of the Polchinski approach and the Boué–Dupuis variational approach produces a similar situation, the essential difference from the sine-Gordon case being the different regularity estimates for the difference field. Thus, the main goal in this section is to argue that all results in [7, Section 4] also hold under the modified assumptions on \(\Phi ^\Delta \).
From now on, when no confusion can arise, we will drop \(\epsilon \) from the notation. Moreover, to ease notation, we will use the notation \(\Vert \varphi \Vert _{L^\infty (\Omega _\epsilon )} \equiv \Vert \varphi \Vert _{\infty }\) for fields \(\varphi :\Omega _\epsilon \rightarrow \mathbb {R}\). Recall that by Theorem 1.1 we have that
where \(R_s = \Phi _0^\Delta -\Phi _s^\Delta \) satisfies \(\sup _{\epsilon >0} \mathbb {E}[\Vert R_s\Vert _\infty ^{2/L} ] \rightarrow 0\) as \(s\rightarrow 0\) by the Sobolev embedding and (1.12). In the analysis of the maximum of \(\Phi _0^{\mathcal {P}}\) we also need the following continuity result for the field \(\Phi ^\Delta \).
Lemma 6.1
Let \(\alpha \in (0,1)\). Then
Proof
The statement follows from the Sobolev-Hölder embedding in Proposition 2.2 and (1.11). \(\quad \square \)
To express convergence in distribution we will use the Lévy distance d on the set of probability measures on \(\mathbb {R}\), which is a metric for the topology of weak convergence. It is defined for any two probability measures \(\nu _1,\nu _2\) on \(\mathbb {R}\) by
\[ d(\nu _1,\nu _2) = \inf \big \{ \kappa > 0 \,:\, \nu _1(B) \leqslant \nu _2(B^\kappa ) + \kappa \text { and } \nu _2(B) \leqslant \nu _1(B^\kappa ) + \kappa \text { for all Borel } B \subseteq \mathbb {R} \big \}, \]
where \(B^\kappa =\{y\in \mathbb {R}:{{\,\textrm{dist}\,}}(y,B) < \kappa \}\). We will use the convention that when a random variable appears in the argument of d, we refer to its distribution on \(\mathbb {R}\). Note that if two random variables X and Y can be coupled with \(|X-Y|\leqslant \kappa \) with probability \(1-\kappa \) then \(d(X,Y) \leqslant \kappa \).
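The coupling criterion just stated can be verified directly from the definition of \(d\); a sketch, using the enlargement \(B^\kappa \) defined above (the boundary case \(|X-Y| = \kappa \) can be absorbed by enlarging \(\kappa \) slightly):

```latex
% Suppose X and Y are coupled with \mathbb {P}(|X - Y| > \kappa ) \leqslant \kappa .
% Then, for every Borel set B \subseteq \mathbb {R},
\mathbb {P}(X \in B)
  \;\leqslant \; \mathbb {P}(Y \in B^{\kappa }) + \mathbb {P}(|X - Y| > \kappa )
  \;\leqslant \; \mathbb {P}(Y \in B^{\kappa }) + \kappa ,
% and the same bound holds with X and Y exchanged, whence d(X, Y) \leqslant \kappa .
```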
6.1 Reduction to an independent decomposition
The first important step is the introduction of a scale cut-off \(s>0\) to obtain an independent decomposition from (6.1). More precisely, we write
and argue that we may from now on focus on the auxiliary field \({\tilde{\Phi }}_s\). The following statement plays the same role as Lemma 4.1 in [7].
Lemma 6.2
(Analogue of [7, Lemma 4.1]). Assume that the limiting law \({\tilde{\mu }}_s\) of \(\max _{\Omega _\epsilon } {\tilde{\Phi }}_s-{\mathfrak {m}}_\epsilon \) as \(\epsilon \rightarrow 0\) exists for every \(s>0\), and that there are positive random variables \(\textrm{Z}_s\) (on the above common probability space) such that
for some constant \(\alpha ^*>0\). Then the law of \(\max _{\Omega _\epsilon } \Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon \) converges weakly to some probability measure \(\mu _0\) as \(\epsilon \rightarrow 0\) and \({\tilde{\mu }}_s \rightharpoonup \mu _0\) weakly as \(s\rightarrow 0\). Moreover, there is a positive random variable \(\textrm{Z}^{\mathcal {P}}\) such that
Proof
We follow the same steps as in the proof of Lemma 4.1 of [7]. We first argue that the sequence \((\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon )_\epsilon \) is tight. Indeed, since
and since \((\max _{\Omega _\epsilon }\Phi _0^\text {GFF}- {\mathfrak {m}}_\epsilon )_\epsilon \) is tight by [15], we see that \((\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon )_\epsilon \) differs from a tight sequence by a sequence \(Y_\epsilon \) with \(\mathbb {E}[|Y_\epsilon |^{2/L}] <\infty \). Elementary arguments such as Markov’s inequality then imply that the sequence \((\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon )_\epsilon \) is also tight.
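The Markov step can be spelled out; a sketch, assuming the moment bound on \(Y_\epsilon \) holds uniformly in \(\epsilon \), as the estimates above provide:

```latex
% For M > 0, Markov's inequality applied to |Y_\epsilon |^{2/L} gives
\mathbb {P}\big ( |Y_\epsilon | > M \big )
  \;\leqslant \; M^{-2/L} \, \sup _{\epsilon } \mathbb {E}\big [ |Y_\epsilon |^{2/L} \big ]
  \;\longrightarrow \; 0 \quad \text {as } M \rightarrow \infty , \text { uniformly in } \epsilon ,
% so a tight sequence perturbed by such a sequence Y_\epsilon remains tight.
```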
Thus, there is a probability distribution \(\mu _0\) such that the law of \(\max _{\Omega _\epsilon }\Phi ^{\mathcal {P}}- {\mathfrak {m}}_\epsilon \) converges to \(\mu _0\) weakly along a subsequence \((\epsilon _k)_k\). Considering the Lévy distance to \({\tilde{\mu }}_s\), we have
The last display can be estimated as follows: for any open set \(B \subseteq \mathbb {R}\) we have
Markov’s inequality implies that
and thus, choosing
we get by the definition of the Lévy distance
Taking \(s\rightarrow 0\) shows that the subsequential limit \(\mu _0\) is unique, as it is the unique weak limit of \({\tilde{\mu }}_s\). Thus, it follows that \(\max _{\Omega _\epsilon }\Phi _0^{\mathcal {P}}- {\mathfrak {m}}_\epsilon \rightarrow \mu _0\) in distribution as \(\epsilon \rightarrow 0\). Moreover, by (6.4) we have \({\tilde{\mu }}_s \rightharpoonup \mu _0\) weakly, and thus
for the distribution functions.
It remains to show that \((\textrm{Z}_s)_s\) is tight. Using (6.2) we get for any \(C>0\)
Now, for a given \(\kappa >0\) we use Markov’s inequality and choose C such that
Then we obtain from (6.5)
Let \(\mu _0^\text {GFF}\) be the limiting law of the centred maximum of the discrete Gaussian free field which exists by [14]. Then taking \(\epsilon \rightarrow 0\) in (6.6) together with the assumption (6.3) yields
From here the argument is analogous to the proof of [7, Lemma 4.1]: assume that the sequence \((\textrm{Z}_s)_s\) is not tight. Then we have
It follows that
Sending \(M\rightarrow \infty \), (6.7) implies that
which is a contradiction when sending \(x\rightarrow \infty \). \(\quad \square \)
6.2 Approximation of small scale field
Thanks to Lemma 6.2 we may from now on focus on the centred maximum of \({\tilde{\Phi }}_s\), which has a Gaussian small scale field \(\Phi _0^\text {GFF}- \Phi _s^\text {GFF}\) and a non-Gaussian but independent large scale field \(\Phi _s^{\mathcal {P}}\). Similarly to [7, Section 4.2], we replace the small scale field by a collection of massless discrete Gaussian free fields, so that the results in [14] apply. The only difference is the regularisation of the Gaussian free field covariance: in [7] the heat-kernel regularisation is used, i.e.
while here, we use the Pauli-Villars regularisation (3.1), which implies
Therefore, we do not need the additional decomposition (4.17) in [7] involving the function \(g_s\). The following paragraph is completely analogous to [7, Section 4.2], but we include it here to set up the notation and improve readability.
We introduce a macroscopic subdivision of the torus \(\Omega \) as follows: let \(\Gamma \) be the union of horizontal and vertical lines intersecting at the vertices \(\frac{1}{K}\mathbb {Z}^2 \cap \Omega \) which subdivides \(\Omega \) into boxes \(V_i \subset \Omega \), \(i=1,\dots , K^2\) of side length 1/K. We use the notation \(V_i\) for both the subset of \(\Omega \) and the corresponding lattice version as subset of \(\Omega _\epsilon \).
Let \(\Delta _\Gamma \) be the Laplacian on \(\Omega \) with Dirichlet boundary conditions on \(\Gamma \), and let \(\Delta \) be the Laplacian with periodic boundary conditions on \(\Omega \). The domain of \(\Delta \) is the space of 1-periodic functions, and that of \(\Delta _\Gamma \) is the smaller space of 1-periodic functions vanishing on \(\Gamma \). This implies that \(-\Delta _\Gamma \geqslant -\Delta \) and thus,
as quadratic form inequalities.
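The way such a quadratic form inequality produces a Gaussian decomposition can be sketched abstractly as follows (a standard argument, included here only for orientation; the operators \(A\) and \(B\) stand in for \(-\Delta _\Gamma + m^2\) and \(-\Delta + m^2\)):
\[
A \geqslant B > 0
\quad \Longrightarrow \quad
A^{-1} \leqslant B^{-1}
\quad \Longrightarrow \quad
B^{-1} = A^{-1} + \big( B^{-1} - A^{-1} \big),
\]
where \(B^{-1} - A^{-1} \geqslant 0\) is again a covariance. Hence a Gaussian field with covariance \(B^{-1}\) can be realised as the independent sum of Gaussian fields with covariances \(A^{-1}\) and \(B^{-1} - A^{-1}\), which is the mechanism behind the decomposition below.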
Hence, using (6.8), we can decompose the small scale Gaussian field \(\Phi _0^\text {GFF}- \Phi _s^\text {GFF}\) as
where the two fields on the right-hand side are independent Gaussian fields with covariances
Note that for this decomposition, which is analogous to the Gibbs-Markov decomposition of the massless GFF with Dirichlet boundary condition, the Pauli-Villars decomposition is particularly convenient due to (6.9). Using [7, Lemma 4.2] in the exact same form, we see that the maximum of \(\Phi _0^\text {GFF}- \Phi _s^\text {GFF}\) can be replaced by the maximum of \(X_K^f + {\tilde{X}}_{s,K}^c\), where
This yields a new auxiliary field denoted \(\Phi _s\) with independent decomposition
which is completely analogous to (4.43) in [7], except that, due to the different choice of covariance regularisation, there is no field \(X_s^h\). In fact the two fields \(X_K^f\) and \({\tilde{X}}_{s,K}^c\) are exactly the same, and thus the covariance estimates for \({\tilde{X}}_{s,K}^c\) in [7, Lemma 4.4] can be used verbatim. The only essential difference is that the field \(\Phi _s^{\mathcal {P}}\) is different, but the following statement establishes the same regularity estimates as in [7, Lemma 4.5].
Lemma 6.3
For any \(s>0\) and \(\epsilon \geqslant 0\) the fields \(\Phi _s^\text {GFF}\) and \(\Phi _s^{\mathcal {P}}\) are a.s. Hölder continuous. Moreover, there is \(\alpha \in {(0,1)}\) such that for \(\#\in \{\text {GFF}, {\mathcal {P}}\}\)
Proof
Note that the Fourier coefficients of \(\Phi _t^\text {GFF}\) satisfy
Hence, (6.11) for \(\Phi _s^\text {GFF}\) follows from standard results as stated in [29, Proposition B.2 (i)] (for Hölder continuity) and [14, Lemma 3.5] (for the maximum). For \(\Phi _s^{\mathcal {P}}\) the results follow from the properties of the difference term \(\Phi _s^\Delta \). \(\quad \square \)
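For the reader's convenience, the standard route from Fourier decay to Hölder continuity used here can be summarised schematically as follows; the exponent \(\alpha '\) is a placeholder for whatever decay the regularised covariance provides, and this is only a sketch of the general criterion, not of the specific bound:
\[
\mathbb {E}\,|{\hat{\Phi }}(k)|^2 \lesssim (1+|k|)^{-2-2\alpha '}
\;\Longrightarrow \;
\mathbb {E}\,|\Phi (x)-\Phi (y)|^{2p} \lesssim |x-y|^{2p\alpha '}
\;\Longrightarrow \;
\Phi \in C^\alpha \ \text {a.s. for } \alpha < \alpha ',
\]
where the moment bound uses Gaussian hypercontractivity and the last implication is Kolmogorov's continuity criterion (letting \(p\) be large).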
The only remaining results in which the properties of \(\Phi ^\Delta \) enter are Propositions 4.8 and 4.9 in [7]. To state these results we introduce for technical reasons a small neighbourhood of the grid \(\Gamma \) as follows. For \(\delta \in (0,1)\), define
Proposition 6.4
[Version of [14, Proposition 5.1]]. Let \(\Omega _\epsilon ^{\delta }\) be as in (6.12). Then,
Proposition 6.5
[Version of [14, Proposition 5.2]]. Let \(\Phi _s\) be as in (6.10). Let \(z_i \in V_i^{\delta }\) be such that
and let \({\bar{z}}\) be such that
Then for any fixed \(\kappa >0\) and small enough \(\delta >0\),
Moreover, there is a function \(g:\mathbb {N}\rightarrow \mathbb {R}_0^+\) with \(g(K) \rightarrow \infty \) as \(K\rightarrow \infty \), such that
Proof of Proposition 6.4 and Proposition 6.5
In the case of the sine-Gordon field it was used that \(\Vert \Phi _s^\Delta \Vert _\infty \) is bounded by a deterministic constant and that \(\Phi _s^\Delta \) is Hölder continuous. Thus, the generalisation of these results is immediate when restricting to the event
whose probability is arbitrarily close to 1 if C is large enough.
6.3 Approximation by \(\epsilon \)-independent random variables
Following [14, Section 2.3] we approximate \(\max _{\Omega _\epsilon }\Phi _s-{\mathfrak {m}}_\epsilon \) by \(G^*_{s,K}= \max _{i}G_{s,K}^i\) where
and also define
Here, the sequence g(K) is as in Proposition 6.5 and the random variables in (6.13) and (6.14) are all independent and defined as follows:
-
The random variables \(\rho _{K}^i \in \{0,1\}\), \(i=1,\dots , K^2\), are independent Bernoulli random variables with \({{\mathbb {P}}}(\rho _{K}^i=1)=\alpha ^* m_\delta g(K) e^{-\sqrt{8\pi }g(K)}\) with \(\alpha ^*\) and \(m_\delta \) as in [7, Proposition 4.6].
-
The random variables \(Y_{K}^i \geqslant 0\), \(i=1,\dots , K^2\), are independent and characterised by \({{\mathbb {P}}}(Y_{K}^i\geqslant x)= \frac{g(K)+x}{g(K)}e^{- \sqrt{8\pi }x}\) for \(x\geqslant 0\).
-
The random field \(Z_{s,K}^{c,0}(x)\), \(x\in \Omega \), is a weak limit of the overall coarse field \(Z_{s,K}^c\equiv {\tilde{X}}_{s,K}^c+\Phi _s^{\mathcal {P}}\) as \(\epsilon \rightarrow 0\). The existence of this limit is guaranteed by Corollary 1.2 and [7, Lemma 4.4].
-
The random variables \(\textbf{u}_\delta ^i \in V_i^\delta \), \(i=1,\dots ,K^2\), have the limiting distribution of the maximisers \(z_i\) of \(X_K^f\) in \(V_i^{\delta }\) as \(\epsilon \rightarrow 0\). Thus, \(\textbf{u}_\delta ^i\) takes values in the i-th subbox of \(\Omega =\mathbb {T}^2\) and, scaled to the unit square, its density is \(\psi ^\delta \) as in [7, Proposition 4.6].
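As a consistency check on the definition of \(Y_{K}^i\) (not part of the original argument), one can verify that the prescribed tail indeed defines a probability distribution for large K: writing \(c = \sqrt{8\pi }\) and \(g = g(K)\),
\[
F(x) := \frac{g+x}{g}\, e^{-cx}, \qquad F(0) = 1, \qquad
F'(x) = \frac{e^{-cx}}{g}\big( 1 - c(g+x) \big) < 0 \ \text {for all } x \geqslant 0,
\]
provided \(c\, g(K) > 1\), which holds for K large since \(g(K) \rightarrow \infty \). Moreover \(F(x) \rightarrow 0\) as \(x \rightarrow \infty \), so F is a genuine tail function, and \(Y_K^i\) has density \(-F'\) on \((0,\infty )\).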
Note that the correction in (6.13) can be understood from
From here the rest of [7, Section 4] can be used without adjustment, as no other properties of the difference field are used. We omit further details.
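Schematically, and suppressing the role of the small scale field, the appearance of a randomly shifted Gumbel distribution from such a maximum of independent box contributions follows the classical pattern: if the box events occur with small probabilities \(p_i(x)\) of order \(e^{-cx}\) (here \(c = \sqrt{8\pi }\)), then
\[
{{\mathbb {P}}}\Big( \max _{i \leqslant K^2} G_{s,K}^i \leqslant x \Big)
= \prod _{i=1}^{K^2} \big( 1 - p_i(x) \big)
\approx \exp \Big( - \sum _{i=1}^{K^2} p_i(x) \Big)
\approx \exp \big( - {\mathcal {Z}}\, e^{-cx} \big),
\]
where the random variable \({\mathcal {Z}} > 0\) collects the contribution of the coarse field \(Z_{s,K}^{c,0}\); conditionally on \({\mathcal {Z}}\), the right-hand side is a Gumbel distribution shifted by \(c^{-1} \log {\mathcal {Z}}\). This is only a heuristic for orientation; the actual limit is carried out in [7, Section 4] and [14, Section 2.3].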
Data availability statement
Not applicable to this article, as no datasets were generated or analysed.
Notes
Note that this embedding does not hold if \(s' = s\), since the condition \(q' \geqslant q\) must then be replaced by \(q' \leqslant q\). However, by allowing a strict inequality in s, we may trade it for any q.
References
Arguin, L.-P., Belius, D., Bourgade, P.: Maximum of the characteristic polynomial of random unitary matrices. Commun. Math. Phys. 349(2), 703–751 (2017)
Bailey, E.C., Keating, J.P.: Maxima of log-correlated fields: some recent developments. J. Phys. A Math. Theor. 55(5), Paper No. 053001, 76 (2022)
Barashkov, N.: A stochastic control approach to Sine Gordon EQFT (2022). Preprint. arXiv:2203.06626
Barashkov, N., De Vecchi, F.: Elliptic stochastic quantization of sinh-Gordon QFT. (2021). Preprint, arXiv:2108.12664
Barashkov, N., Gubinelli, M.: A variational method for \(\Phi ^4_3\). Duke Math. J. 169(17), 3339–3415 (2020)
Bauerschmidt, R., Bodineau, T.: Log-Sobolev inequality for the continuum sine-Gordon model. Commun. Pure Appl. Math. 74(10), 2064–2113 (2021)
Bauerschmidt, R., Hofstetter, M.: Maximum and coupling of the sine-Gordon field. Ann. Probab. 50(2), 455–508 (2022)
Belius, D., Wu, W.: Maximum of the Ginzburg–Landau fields. Ann. Probab. 48(6), 2647–2679 (2020)
Biskup, M., Louidor, O.: Extreme local extrema of two-dimensional discrete Gaussian free field. Commun. Math. Phys. 345(1), 271–304 (2016)
Biskup, M., Louidor, O.: Full extremal process, cluster law and freezing for the two-dimensional discrete Gaussian free field. Adv. Math. 330, 589–687 (2018)
Biskup, M., Louidor, O.: Conformal symmetries in the extremal process of two-dimensional discrete Gaussian free field. Commun. Math. Phys. 375(1), 175–235 (2020)
Bogachev, V.I.: Measure Theory, vol. 1. Springer, Berlin (2007)
Boué, M., Dupuis, P.: A variational representation for certain functionals of Brownian motion. Ann. Probab. 26(4), 1641–1659 (1998)
Bramson, M., Ding, J., Zeitouni, O.: Convergence in law of the maximum of the two-dimensional discrete Gaussian free field. Commun. Pure Appl. Math. 69(1), 62–123 (2016)
Bramson, M., Zeitouni, O.: Tightness of the recentered maximum of the two-dimensional discrete Gaussian free field. Commun. Pure Appl. Math. 65(1), 1–20 (2012)
Brydges, D., Kennedy, T.: Mayer expansions and the Hamilton–Jacobi equation. J. Stat. Phys. 48(1–2), 19–49 (1987)
Carpentier, D., Le Doussal, P.: Glass transition of a particle in a random potential, front selection in nonlinear renormalization group, and entropic phenomena in Liouville and sinh-Gordon models. Phys. Rev. E 63, 026110 (2001)
Chhaibi, R., Madaule, T., Najnudel, J.: On the maximum of the \({\rm C}\beta {\rm E}\) field. Duke Math. J. 167(12), 2243–2345 (2018)
De Vecchi, F.C., Fresta, L., Gubinelli, M.: A stochastic analysis of subcritical Euclidean fermionic field theories. (2022). Preprint, arXiv:2210.15047
Ding, J., Roy, R., Zeitouni, O.: Convergence of the centered maximum of log-correlated Gaussian fields. Ann. Probab. 45(6A), 3886–3928 (2017)
Fels, M., Hartung, L.: Extremes of the 2d scale-inhomogeneous discrete Gaussian free field: convergence of the maximum in the regime of weak correlations. ALEA Latin Am. J. Probab. Math. Stat. 18(2), 1891–1930 (2021)
Fyodorov, Y.V., Hiary, G.A., Keating, J.P.: Freezing transition, characteristic polynomials of random matrices, and the Riemann zeta function. Phys. Rev. Lett. 108(17), 170601 (2012)
Fyodorov, Y.V., Keating, J.P.: Freezing transitions and extreme values: random matrix theory, and disordered landscapes. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 372, 20120503 (2014)
Glimm, J., Jaffe, A.: Quantum Physics, 2nd edn. Springer-Verlag, New York (1987)
Gubinelli, M., Hofmanová, M.: Global solutions to elliptic and parabolic \(\Phi ^4\) models in Euclidean space. Commun. Math. Phys. 368(3), 1201–1266 (2019)
Gubinelli, M., Hofmanová, M.: A PDE construction of the Euclidean \(\phi _3^4\) quantum field theory. Commun. Math. Phys. 384(1), 1–75 (2021)
Hofstetter, M.: Extremal process of the sine-Gordon field. (2021). Preprint, arXiv:2111.04842
Huang, Y.: Another probabilistic construction of \(\phi ^{2n}\) in dimension 2. Electron. Commun. Probab. 26, 1–13 (2021)
Lacoin, H., Rhodes, R., Vargas, V.: Complex Gaussian multiplicative chaos. Commun. Math. Phys. 337(2), 569–632 (2015)
Moinat, A., Weber, H.: Space-time localisation for the dynamic \(\Phi ^4_3\) model. Commun. Pure Appl. Math. 73(12), 2519–2555 (2020)
Mourrat, J.-C., Weber, H.: Global well-posedness of the dynamic \(\Phi ^4\) model in the plane. Ann. Probab. 45(4), 2398–2476 (2017)
Nelson, E.: A quartic interaction in two dimensions. In Mathematical Theory of Elementary Particles (Proc. Conf., Dedham, Mass., 1965), pages 69–73. M.I.T. Press, Cambridge, Mass., (1966)
Osterwalder, K., Schrader, R.: Axioms for Euclidean Green’s functions. II. Commun. Math. Phys. 42, 281–305 (1975). With an appendix by Stephen Summers
Paquette, E., Zeitouni, O.: The maximum of the CUE field. Int. Math. Res. Not. IMRN 16, 5028–5119 (2018)
Schweiger, F.: The maximum of the four-dimensional membrane model. Ann. Probab. 48(2), 714–741 (2020)
Schweiger, F., Zeitouni, O.: The maximum of log-correlated Gaussian fields in random environments. (2022). Preprint, arXiv:2205.07210
Simon, B.: The \(P(\phi )_{2}\) Euclidean (quantum) field theory. Princeton Series in Physics. Princeton University Press, Princeton, N.J (1974)
Symanzik, K.: Euclidean quantum field theory. In Local Quantum Field Theory. Academic Press, New York (1969)
Touzi, N.: Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE. Fields Institute Monographs, vol. 29. Springer, New York (2013). With Chapter 13 by Agnès Tourin
Tsatsoulis, P., Weber, H.: Spectral gap for the stochastic quantization equation on the 2-dimensional torus. Ann. Inst. Henri Poincaré Probab. Stat. 54(3), 1204–1249 (2018)
Wu, W., Zeitouni, O.: Subsequential tightness of the maximum of two dimensional Ginzburg–Landau fields. Electron. Commun. Probab. 24, Paper No. 19, 12 pp. (2019)
Acknowledgements
TSG would like to thank Ajay Chandra for interesting discussions on the Polchinski equation and Romain Panis for useful comments. MH thanks Roland Bauerschmidt for discussing the project at early stages, as well as Benoît Dagallier for the helpful comments. NB, TSG, and MH would like to thank Hugo Duminil-Copin for hosting us at the Université de Genève in April 2022 to work on this project.
TSG was supported by the Simons Foundation, Grant 898948, HDC. MH was partially supported by the UK EPSRC grant EP/L016516/1 for the Cambridge Centre for Analysis. NB is supported by the ERC Advanced Grant 741487 (Quantum Fields and Probability).
Funding
Open access funding provided by University of Geneva.
Author information
Contributions
All authors wrote the manuscript and contributed equally.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Communicated by J. Ding.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Barashkov, N., Gunaratnam, T.S. & Hofstetter, M. Multiscale Coupling and the Maximum of \({\mathcal {P}}(\phi )_2\) Models on the Torus. Commun. Math. Phys. 404, 833–882 (2023). https://doi.org/10.1007/s00220-023-04850-2