1 Introduction

The last decade has seen a renewed interest in the study of rigorous stochastic quantization (SQ) of Euclidean quantum field theories (EQFTs). SQ is a technique, first proposed by Nelson  [43] and Parisi–Wu  [48], to realize EQFTs, or more generally Gibbsian measures on \({\mathbb {R}}^d\) obtained as limits of perturbations of Gaussian measures, as solutions to certain stochastic partial differential equations (SPDEs) driven by Gaussian noise. After the pioneering work of Jona–Lasinio and Mitter  [36, 37] and Da Prato–Debussche  [16], only very recently have substantial advances made it possible to attack the challenging problem of the SQ of classical EQFTs, including the \(\Phi ^4_3\) model, see e.g.  [5, 15, 27, 28, 30, 31, 39, 41, 42].

While the original Parisi–Wu approach to the SQ method, based on a Langevin equilibrium diffusion, gives rise to parabolic SPDEs, this is not the only possibility. Nowadays at least two other methods of stochastic quantization are at our disposal:

  • the elliptic SQ approach  [1, 2, 14, 27], based on the dimensional reduction phenomenon described by Parisi and Sourlas  [46, 47] and involving the solutions of an elliptic singular SPDE in \(d + 2\) dimensions;

  • the variational method  [8, 12, 13] which involves forward–backward SDEs and can be also applied to fermionic EQFTs  [17].

The aim of this work is to discuss the decay of correlations of Euclidean quantum fields from the point of view of the SQ methods. In particular we consider the elliptic SQ framework and restrict our attention to the following elliptic SQ equation with respect to the real valued random field \(\varphi (z), z \in {\mathbb {R}}^4\),

$$\begin{aligned} (- \Delta _{} + m^2) \varphi + \alpha \exp (\alpha \varphi - \infty ) = \xi , \end{aligned}$$
(1)

where \(\alpha \in {\mathbb {R}}\) and \(m > 0\). Here, \(\xi \) is a Gaussian white noise on \({\mathbb {R}}^4\) and \(- \infty \) means that the equation should be properly renormalized. The existence of a unique solution to Eq. (1) and the link with the corresponding EQF measure in two dimensions, called the Høegh–Krohn model  [33] (also known as Liouville model in the literature) has been established in  [2] for

$$\begin{aligned} | \alpha | < \alpha _{\max } :=4 \pi \sqrt{8 - 4 \sqrt{3}}. \end{aligned}$$

More precisely, well-posedness holds in the weighted Besov space \(B_{p, p, \ell }^s ({\mathbb {R}}^4)\), for suitable \((p, s)\) given in (5) and \(\ell > 0\) large enough (see Sect. 1.1 for precise notations).
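For concreteness, the threshold \(\alpha _{\max }\) above, together with the smaller threshold \(4 \sqrt{2} \pi \) relevant for the convergence of the Wick exponential recalled in Sect. 2.1, can be evaluated numerically (a plain evaluation of the stated constants, added here only for orientation):

```python
import math

# Well-posedness threshold alpha_max = 4*pi*sqrt(8 - 4*sqrt(3)) from [2].
alpha_max = 4 * math.pi * math.sqrt(8 - 4 * math.sqrt(3))

# Threshold 4*sqrt(2)*pi below which the Wick exponential eta_eps
# converges in the Besov spaces recalled in Sect. 2.1.
wick_threshold = 4 * math.sqrt(2) * math.pi

print(f"alpha_max    = {alpha_max:.4f}")       # ~ 13.01
print(f"4*sqrt(2)*pi = {wick_threshold:.4f}")  # ~ 17.77
print(alpha_max < wick_threshold)              # True
```

In particular \(\alpha _{\max } < 4 \sqrt{2} \pi \), consistently with the convergence result for \(\eta _{\varepsilon }\) covering the whole well-posedness range.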

The estimation of connected (or truncated) correlation functions, for example, the connected two-point function,

$$\begin{aligned} \begin{array}{l} {\mathbb {E}} [\varphi (x_1) \varphi (x_2)] -{\mathbb {E}} [\varphi (x_1)] {\mathbb {E}} [\varphi (x_2)], \quad x_1, x_2 \in {\mathbb {R}}^4, \end{array} \end{aligned}$$

is a basic goal of any constructive EQFT approach. General truncated correlation functions allow one to infer information about the masses of the particles in the QFT and to estimate scattering amplitudes (see e.g.  [32]). In the constructive literature, estimates of the connected correlation functions are obtained via cluster expansion methods or correlation inequalities. See for example the early work of Glimm–Jaffe–Spencer  [25, 26]. The literature on expansion methods abounds. We suggest the interested reader refer to  [4, 6, 18, 24] and the references therein for details and to  [34] for a nice review of related results. Expansion methods for Euclidean fields involve two primary steps. The first step is to expand the interaction into parts localized in different bounded volumes of Euclidean space. This gives control over the infinite volume limit and allows one to establish the exponential decay of correlations. The second step is to expand the interaction into components which are localized on different momentum scales. This helps in dealing with the local regularity properties of correlation functions. The technical difficulty is to mix these two expansions in a manageable way and to systematically extract the contributions which require renormalization. Correlation inequality methods instead employ discrete approximations, such as lattice approximations, whose specific algebraic properties allow for establishing bounds on a sufficiently broad class of observables.

While expansion methods can be applied to stochastic quantization, as evidenced in works such as [19, 38], we look here for a stochastic analytic approach leveraging the intrinsic features of SQ. Parisi [45] presented an early non-rigorous discussion of correlations within the SQ approach and studied how to estimate them directly via computer simulations. In this paper we introduce two simple, general and direct methods to study correlations in SQ, applying them to the elliptic SQ of the exponential model (1):

  1. Coupling approach

    It is possible to infer the decay of truncated correlations by proving that the solutions to the SQ equation exhibit almost independent behaviour in different regions of space. This can be achieved by coupling the solution to two independent copies by suitably choosing the driving noises. To the best of our knowledge, Funaki [23] was the first to introduce this idea, in the context of equilibrium dynamics of Ginzburg–Landau continuum models.

  2. Malliavin calculus approach

    Parisi  [45] suggests studying variations of the SQ equations in order to infer truncated two-point correlations. His observation can actually be made precise and more general using the stochastic calculus of variations, i.e. the Malliavin calculus [44], and computing derivatives of the solutions to the SQ equation w.r.t. the driving noise \(\xi \).
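A classical covariance representation from Gaussian analysis makes this heuristic concrete (recorded here only for orientation, with \(H = L^2 ({\mathbb {R}}^4)\) the Cameron–Martin space of \(\xi \), \(D\) the Malliavin derivative and \((P_t)_{t \geqslant 0}\) the Ornstein–Uhlenbeck semigroup; the precise estimates used in Sect. 3 may take a different form):

```latex
\operatorname{Cov} (F, G)
  = \int_0^{\infty} e^{- t}\,
    \mathbb{E} \big[ \langle D F, \, P_t D G \rangle_{H} \big] \, \mathrm{d} t,
\qquad \text{whence} \qquad
| \operatorname{Cov} (F, G) |
  \leqslant \big( \mathbb{E}\, \Vert D F \Vert_{H}^2 \big)^{1 / 2}
            \big( \mathbb{E}\, \Vert D G \Vert_{H}^2 \big)^{1 / 2}.
```

If \(D F\) and \(D G\) are essentially localized near \(x_1\) and \(x_2\), the pairing \(\langle D F, P_t D G \rangle _H\) is controlled by their overlap, so spatial decay of the Malliavin derivative translates into decay of the covariance.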

These two approaches will be used to prove the following statement about a general class of truncated covariances:

Theorem 1

Let \(F_1, F_2\) be Lipschitz and bounded functionals on \(B_{p, p, \ell }^s ({\mathbb {R}}^4)\) and let f be a given smooth function supported in an open ball of unit radius around the origin. Then we have the following exponential decay

$$\begin{aligned} | {\text {Cov}} (F_1 (f \cdot \varphi (\cdot + x_1)), F_2 (f \cdot \varphi (\cdot + x_2))) | \leqslant M e^{- c | x_1 - x_2 |}, \end{aligned}$$
(2)

for all \(x_1, x_2 \in {\mathbb {R}}^4\), where the constant M depends on \(m, f, F_1, F_2\) and the constant c depends on m, but both are independent of \(x_1, x_2\).

Remark 1

Here \({\text {Cov}} (F, G) :={\mathbb {E}} [F G] -{\mathbb {E}} [F] {\mathbb {E}}[G]\) as usual and \((f \cdot \varphi (\cdot + x)) (\phi ):= \varphi (f (\cdot - x) \phi (\cdot ))\) for every test function \(\phi \).

In particular we prove that the solution of SQE (1) satisfies (formally),

$$\begin{aligned} | {\text {Cov}} (\varphi (x_1), \varphi (x_2)) | \lesssim e^{- c | x_1 - x_2 |}, \forall x_1, x_2 \in {\mathbb {R}}^4. \end{aligned}$$

It follows from Theorem 1 that the exponential EQFT in two dimensions has a mass gap, a fact first proven in [3] via correlation inequalities for the lattice approximation.

These approaches are general enough to be applicable to other EQFT models like the \(P(\varphi )_2\) or \(\Phi ^4_3\) models. However a fundamental difficulty presents itself in establishing the required a priori estimates for the coupling method or in controlling the decay of the Malliavin derivative in the Malliavin method. Both these difficulties originate in the lack of convexity of the renormalized interaction for a general EQFT. A similar problem is present in the analysis of logarithmic Sobolev inequalities for EQFTs in bounded volumes [9, 11], especially for polynomial models. It also manifests in controlling the infinite volume limit of EQFTs via stochastic quantization [13, 27, 28, 30], leading to a major obstacle in establishing uniqueness of the infinite volume solutions to the SQ equation.

Fortunately, these difficulties do not show up in the exponential model because its renormalization is multiplicative and does not spoil the convex character of the interaction. For this reason our methods could be readily applied to obtain decay of correlations for the Sinh–Gordon model studied in [14]. Another model where uniqueness and correlations can be controlled via stochastic quantization is the Sine–Gordon model (for large mass and up to the first renormalization threshold), studied by Barashkov via the variational method in [8]. Let us also mention that, inspired by the present paper, the coupling method has already been used to show decay of correlations for Euclidean fermionic QFTs and for Sine–Gordon Euclidean QFTs via the FBSDE SQ method, in [17] and [29] respectively.

Let us stress that proving uniqueness of (any kind of) stochastic quantization and establishing decay of correlations for models like \(\Phi ^4_{2, 3}\) at high temperature is still largely an open problem which should be considered, in our opinion, as a crucial test of the effectiveness of stochastic quantization as a constructive tool in quantum field theory. The present work is a preliminary step towards a better understanding of this problem, and more generally towards devising appropriate tools to study stochastically quantized EQFTs.

Plan of the paper After introducing notations and definitions of function spaces in Sect. 1.1, the paper is structured into two main parts. In Sect. 2, we present a proof of Theorem 1 utilizing the coupling method, commencing with a review of essential results from  [2]. Following this, in Sect. 3, we provide the Malliavin calculus proof of Theorem 1, beginning with a summary of relevant tools. The paper concludes with “Appendix A”, where we revisit a few necessary results from the literature and establish the existence and uniqueness of solutions to the approximate Eq. (35).

1.1 Notations

In this section we describe some notations and definitions of function spaces used across the whole paper. Some approach-dependent notations, which are also used in the paper, are discussed in the corresponding sections.

  • Throughout the paper, we use the notation \(a \lesssim b\) if there exists a constant \(c > 0\), independent of the variables under consideration, such that \(a \leqslant c b\). If we want to emphasize the dependence of c on the variable x, then we write \(a (x) \lesssim _x b (x)\). The symbol \(:=\) means that the right hand side of the equality defines the left hand side.

  • We set \({\mathcal {L}} :=- \Delta + m^2\).

  • For a distribution \(\varphi \), a smooth function f and \(x \in {\mathbb {R}}^d\), we define the translated distribution \((\varphi (\cdot + x))(\phi ) = \varphi (\phi (\cdot - x))\) for all test functions \(\phi \) and by \(f\cdot \varphi (\cdot + x)\) we denote the multiplication of a smooth function f and distribution \(\varphi (\cdot + x)\).

  • By \({\mathbb {N}}\) we understand the set of natural numbers \(\{ 1, 2, \ldots \}.\) For \(k \in {\mathbb {N}} \cup \{ 0 \}\), we write \(C^k ({\mathbb {R}}^d)\) to denote the set of real valued functions which are k-times differentiable with continuous k-th derivative. We write \(C ({\mathbb {R}}^d)\) for \(k = 0\) and the topology we consider on this space is the uniform norm topology. By \(C_c^k ({\mathbb {R}}^d)\) we mean the collection of functions in \(C^k ({\mathbb {R}}^d)\) having compact support. We denote the space of smooth functions having compact support by \(C_c^{\infty } ({\mathbb {R}}^d)\).

  • For any \(\ell > 0\) and weight \(r_{\ell } (x) :=(1 + | x |^2)^{- \ell / 2}\), by \(C_{\ell }^0 ({\mathbb {R}}^d)\) we denote the space of continuous functions on \({\mathbb {R}}^d\) such that

    $$\begin{aligned} \Vert f \Vert _{C_{\ell }^0} {:=\sup _{x \in {\mathbb {R}}^d}} | f (x) r_{\ell } (x) | < \infty . \end{aligned}$$
  • By the symbol \(L^p_{\ell } ({\mathbb {R}}^d), p \in [1, \infty ],\) we mean the Banach space of all (equivalence classes of) \({\mathbb {R}}\)-valued weighted p-integrable functions on \({\mathbb {R}}^d\). The norm on \(L^p_{\ell } ({\mathbb {R}}^d), 1 \leqslant p < \infty ,\) is given by

    $$\begin{aligned} \Vert f \Vert _{L^p_{\ell }} :=\left[ \int _{{\mathbb {R}}^d} | f (y) r_{\ell } (y) |^p \textrm{d}y \right] ^{1 / p}, \qquad f \in L^p_{\ell } ({\mathbb {R}}^d). \end{aligned}$$

    For \(p = \infty \) we understand it with the usual modification. If \(\ell = 0\) we simply write \(L^p ({\mathbb {R}}^d)\) instead of \(L^p_0 ({\mathbb {R}}^d)\). Sometimes we also use the weight function \(r_{\lambda , \ell } (x) :=(1 + \lambda | x |^2)^{- \ell / 2}\), for \(\lambda , \ell >0\), and in this case we define \(L^p_{\lambda , \ell } ({\mathbb {R}}^d)\) by writing \(r_{\lambda , \ell }\) in place of \(r_{\ell }\) in the definition of \(L^p_{\ell } ({\mathbb {R}}^d)\). Similarly we define \(L^p_{\ell }(E)\) and \(L^p_{\lambda , \ell }(E)\) for an open subset \(E \subset {\mathbb {R}}^d\).

  • Let s be a real number and \((p, q) \in [1, \infty ]^2\). The weighted Besov space \(B_{p, q, \ell }^s ({\mathbb {R}}^d)\) consists of all tempered distributions \(f \in {\mathcal {S}}' ({\mathbb {R}}^d)\) such that the norm

    $$\begin{aligned} \Vert f \Vert _{B_{p, q, \ell }^s} :=\left[ \sum _{j \geqslant - 1} 2^{s j q} \Vert \Delta _j (f) \Vert _{L^p_{\ell } ({\mathbb {R}}^d)}^q \right] ^{1 / q} \end{aligned}$$

    is finite, where \(\Delta _j\) are the non-homogeneous dyadic blocks. See Appendix A of  [2] for details and properties of \(B_{p, q, \ell }^s ({\mathbb {R}}^d)\). We set \(C_{\ell }^2({\mathbb {R}}^d):= B_{\infty , \infty , \ell }^2 ({\mathbb {R}}^d)\).

  • For \(r > 0\) and \(x \in {\mathbb {R}}^d,\) we denote the open ball of radius r around x by \(B(x, r)\). We also write \(d(x, S)\) for the distance between the point \(x \in {\mathbb {R}}^d \) and the set \(S \subset {\mathbb {R}}^d.\)

  • Let \({\mathfrak {a}}\) be an auxiliary (radial) smooth, compactly supported function such that \({\text {supp}} {\mathfrak {a}} \subset B (0, 1)\), \(\int {\mathfrak {a}} (x) d x = 1,\) and \({\mathfrak {a}}_{\varepsilon } (x) :=\varepsilon ^{- 4} {\mathfrak {a}} (x / \varepsilon ), x \in {\mathbb {R}}^4\). Note that \({\text {supp}} {\mathfrak {a}}_{\varepsilon } \subset B (0, \varepsilon ).\)

Note that to save space we do not write the integration limit and the measure in the case when it is easily understood from the context.
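As a quick numerical sanity check of the weight \(r_{\ell }\) and of the mollifier \({\mathfrak {a}}_{\varepsilon }\) (a one-dimensional sketch with hypothetical grid sizes; the paper uses the same construction with \(\varepsilon ^{-4}\) in \(d = 4\)):

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal rule (kept explicit for NumPy-version safety)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2)

# Weight r_l(y) = (1 + |y|^2)^(-l/2): in d = 1 the constant function f = 1
# lies in L^1_l as soon as l > 1; for l = 2 its weighted norm is
#   int (1 + y^2)^(-1) dy = pi.
y = np.linspace(-1e3, 1e3, 2_000_001)
weighted_l1_norm = trapz((1 + y**2) ** -1, y)
print(weighted_l1_norm)          # ~ pi, up to truncating the domain

# Mollifier a_eps(x) = eps^{-1} a(x/eps), built from a standard bump
# supported in B(0, 1) and normalized to unit mass.
def a(u):
    out = np.zeros_like(u)
    m = np.abs(u) < 1
    out[m] = np.exp(-1.0 / (1.0 - u[m] ** 2))
    return out

x = np.linspace(-2, 2, 400_001)
Z = trapz(a(x), x)               # normalization so that int a = 1

def a_eps(xx, eps):
    return a(xx / eps) / (Z * eps)

mass = trapz(a_eps(x, 0.1), x)
print(mass)                      # ~ 1, with supp a_eps contained in B(0, 0.1)
```

The two printed values illustrate, respectively, that constants belong to the weighted spaces for \(\ell \) large enough and that \({\mathfrak {a}}_{\varepsilon }\) keeps unit mass and support \(B (0, \varepsilon )\).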

2 The coupling approach

In this approach to the proof of Theorem 1 we first prove (2) for a random field \(\varphi _{\varepsilon }\) which solves an approximation (7) of the SPDE (1). Then, thanks to Fatou’s lemma, we pass to the limit \(\varepsilon \rightarrow 0\) and obtain (2) for \(\varphi \). We only need to consider the case of large \(l :=| x_1 - x_2 |\) in detail, as for small l the estimate (2) holds trivially.

Let us now briefly sketch the idea of the coupling approach. We consider two open balls \(D_1\) and \(D_2\) in \({\mathbb {R}}^4\) of radius l/2 with centers \(x_1\) and \(x_2\). Further, we take two independent copies \(\zeta _1\) and \(\zeta _2\) of Gaussian space white noise and define, for \(i = 1, 2\),

$$\begin{aligned} \xi _i :=\mathbb {1}_{D_i} \xi +\mathbb {1}_{D_i^c} \zeta _i. \end{aligned}$$

In this way, in \(D_i\) we have \(\xi = \xi _i\) for \(i = 1, 2\), while \(\xi _1\) and \(\xi _2\) are everywhere independent. We let \(X_{\varepsilon }, X_{1, \varepsilon }\) and \(X_{2, \varepsilon }\) be the solutions to the linear part (cf. (8)) of the approximations of Eq. (1) with noises replaced by \(\xi _{\varepsilon }\), \(\xi _{1, \varepsilon }\) and \(\xi _{2, \varepsilon }\), respectively. Therefore \(X_{1, \varepsilon }\) and \(X_{2, \varepsilon }\) are independent, while we will have \(X_{i, \varepsilon } \approx X_{\varepsilon }\) in \(D_i\). By stability estimates for Eq. (1) we can derive estimates of the form (cf. (23))

$$\begin{aligned} {\mathbb {E}}&[\Vert f \cdot \varphi _{\varepsilon } (\cdot + x_i) - f \cdot \varphi _{i, \varepsilon } (\cdot + x_i) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}}] \\&\qquad \lesssim e^{-{\mathfrak {c}} \left( \frac{l}{8} - 1 \right) } ({\mathbb {E}} [\Vert X_{\varepsilon } - X_{i, \varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}}] +{\mathbb {E}} [\Vert {\bar{\varphi }}_{\varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}}] +{\mathbb {E}} [\Vert {\bar{\varphi }}_{i, \varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}}]) \end{aligned}$$

for some \({\mathfrak {c}} > 0\) which depends on m and \({\mathfrak {p}}\), where \({\mathfrak {p}} \in [2, \infty )\) is fixed. In the above we have \(\varphi _{\varepsilon } = {\bar{\varphi }}_{\varepsilon } + X_{\varepsilon }\) and \(\varphi _{i, \varepsilon } = {\bar{\varphi }}_{i, \varepsilon } + X_{i, \varepsilon }\), where \({\varphi }_{\varepsilon }\) and \({\varphi }_{i, \varepsilon }\), respectively, are the unique solutions to the regularized SPDE (7) with noises \(\xi _{\varepsilon }\) and \(\xi _{i, \varepsilon }\), as detailed in Sect. 2.1. This estimate allows us to replace \(\varphi _{\varepsilon }\) by \(\varphi _{i, \varepsilon }\) in \(D_i\) at the price of a small error of order \(e^{- c l}\) for some \(c>0\) (independent of \(x_i\)). Since \(\varphi _{1, \varepsilon }\) and \(\varphi _{2, \varepsilon }\) are independent, from the last estimate we can easily conclude the exponential decay for Lipschitz observables, see Sect. 2.2 for details.
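The mechanism can be illustrated on the linear part alone in a one-dimensional toy discretization (all sizes here are hypothetical and serve only as a sketch): replacing the noise outside \(D_1\) by an independent copy changes \(X = {\mathcal {L}}^{-1} \xi \) near \(x_1\) only by a term governed by the exponential decay of the Green's function of \(- \Delta + m^2\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1d discretization of the linear part: (-d^2/dx^2 + m^2) X = xi on
# (0, L) with Dirichlet boundary conditions and lattice white noise xi.
n, L, m = 1000, 40.0, 1.0
h = L / (n + 1)
x = h * np.arange(1, n + 1)

# A = -Laplacian_h + m^2 (tridiagonal, assembled densely for simplicity)
main = np.full(n, 2.0 / h**2 + m**2)
off = np.full(n - 1, -1.0 / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

xi = rng.standard_normal(n) / np.sqrt(h)     # lattice white noise
zeta = rng.standard_normal(n) / np.sqrt(h)   # independent copy

x1 = 10.0
D1 = np.abs(x - x1) < 5.0          # the ball D_1 = B(x1, l/2) with l = 10
xi1 = np.where(D1, xi, zeta)       # glued noise: equals xi on D_1

X = np.linalg.solve(A, xi)         # X   = L^{-1} xi
X1 = np.linalg.solve(A, xi1)       # X_1 = L^{-1} xi_1

i = int(np.argmin(np.abs(x - x1)))
# X - X_1 = L^{-1}(xi - xi1) is driven only by noise outside D_1, so at x1
# it is suppressed by the Green's function decay ~ e^{-m d(x1, D_1^c)}.
print(abs(X[i] - X1[i]), np.std(X))
```

The discrepancy at \(x_1\) is orders of magnitude smaller than the typical size of the field, which is the discrete shadow of the estimate above.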

2.1 Preliminaries

In this subsection we summarize the steps of the proof from  [2], with another suitably modified approximation, which also sets the further required notation. The main result of  [2], namely the existence of a unique solution to the singular SPDE (1), is based on the Da Prato–Debussche trick  [16] and the fact that the Wick exponential is a positive measure.

  • Let us consider a complete probability space \((\Omega , {\mathfrak {F}}, {\mathbb {P}})\), satisfying the usual hypotheses, and let \(\xi \) be a Gaussian white noise on \({\mathbb {R}}^4\) defined on \((\Omega , {\mathfrak {F}}, {\mathbb {P}})\).

  • Let X be the solution to \({\mathcal {L}}X = \xi \). The existence and uniqueness of such \(X \in B_{q, q, \ell }^{- \delta } ({\mathbb {R}}^4)\) for every \(q \in [1, \infty ], \delta > 0\) and \(\ell > 0\) is proved in  [27].

  • To avoid clumsy notation we write \(\eta :=\exp ^{\diamond } (\alpha {\mathcal {L}}^{- 1} \xi )\) for the renormalized version of the distribution \(\exp (\alpha {\mathcal {L}}^{- 1} \xi - \infty )\), where \(\exp ^{\diamond }\) denotes the Wick exponential of the Gaussian distribution \(X ={\mathcal {L}}^{- 1} \xi \).

  • The first step in giving a meaning to Eq. (1) is to take the decomposition \(\varphi = {\bar{\varphi }} + X\). Then observe that formally \({\bar{\varphi }} \) satisfies

    $$\begin{aligned} \begin{array}{l} {\mathcal {L}} {\bar{\varphi }} + \alpha \exp (\alpha {\bar{\varphi }}) \eta = 0. \end{array} \end{aligned}$$
    (3)
  • For any \(\varepsilon > 0\) let us set \(\xi _{\varepsilon } :={\mathfrak {a}}_{\varepsilon } *\xi \) where \(*\) denotes convolution. Note that

    $$\begin{aligned} \begin{array}{l} \eta = \sum _{k = 0}^{\infty } \frac{\alpha ^k}{k!} ({\mathcal {L}}^{- 1} \xi )^{\diamond k}, \end{array} \end{aligned}$$

    where \(\diamond \) denotes the Wick product and \(({\mathcal {L}}^{- 1} \xi )^{\diamond k}\)= \(\underbrace{{\mathcal {L}}^{- 1} \xi \diamond {\mathcal {L}}^{- 1} \xi \diamond \cdots \diamond {\mathcal {L}}^{- 1} \xi }_{k - {\text {times}}} = X^{\diamond k}\). Denoting by \(X_{\varepsilon } ={\mathcal {L}}^{- 1} \xi _{\varepsilon }\) the unique smooth solution to \({\mathcal {L}}X_{\varepsilon } = \xi _{\varepsilon }\), we define \(\eta _{\varepsilon }\) as the following positive measure

    $$\begin{aligned} \eta _{\varepsilon } (\textrm{d}z) = \exp ^{\diamond } (\alpha {\mathcal {L}}^{- 1} \xi _{\varepsilon }) \textrm{d}z = \exp (\alpha {\mathcal {L}}^{- 1} \xi _{\varepsilon } - C_{\varepsilon }) \textrm{d}z, \end{aligned}$$
    (4)

    where \(C_{\varepsilon } :=\frac{\alpha ^2}{2} {\mathbb {E}} [| X_{\varepsilon } |^2]\). Moreover, from Section 3.1 of  [2], we know that

    $$\begin{aligned} \begin{array}{l} \eta _{\varepsilon } = \sum _{k = 0}^{\infty } \frac{\alpha ^k}{k!} ({\mathcal {L}}^{- 1} \xi _{\varepsilon })^{\diamond k}, \end{array} \end{aligned}$$

    and, for \(| \alpha | < 4 \sqrt{2} \pi , p \in (1, 2], s \leqslant - \frac{\alpha ^2 (p - 1)}{(4 \pi )^2}\) and \(\ell > 0\) large enough, \(\eta _{\varepsilon } \rightarrow \eta ,\) as \(\varepsilon \rightarrow 0\), in probability in \(B_{p, p, \ell }^s ({\mathbb {R}}^4).\) Note that the convergence \(\eta _{\varepsilon } \rightarrow \eta \) in probability implies that there exists a sequence \(\varepsilon _n\), which converges to 0, such that \(\eta _{\varepsilon _n} \rightarrow \eta ,\) as \(\varepsilon _n \rightarrow 0,\) in \(B_{p, p, \ell }^s ({\mathbb {R}}^4)\) \({\mathbb {P}}\)-almost surely. We will fix this sequence \(\{ \varepsilon _n \}_{n \geqslant 1}\) in the whole paper.

  • By Theorems 21 and 25 from  [2] we have that for any \(| \alpha | < \alpha _{\max }\), there exist \(p, s, \delta \) satisfying

    $$\begin{aligned} 1< p \leqslant 2, \quad p< \frac{2 (4 \pi )^2}{\alpha ^2}, \quad - 1< s \leqslant - \frac{\alpha ^2 (p - 1)}{(4 \pi )^2} \quad \text { and }\quad 0< \delta < s + 1, \end{aligned}$$
    (5)

    such that Eq. (3) has a unique solution \({\bar{\varphi }}\) in \(B_{p, p, \ell + \delta '}^{s + 2 - \delta } ({\mathbb {R}}^4)\), \({\mathbb {P}}\)-almost surely, for large enough \(\ell > 0\) and small enough \(\delta ' > 0\). Moreover,

    $$\begin{aligned} \begin{array}{l} \alpha {\bar{\varphi }} \leqslant 0 \end{array} \end{aligned}$$

    holds true. Furthermore, for \(\{ \varepsilon _n \}_{n \geqslant 1}\) as fixed above, \({\bar{\varphi }}_{\varepsilon _n} \rightarrow {\bar{\varphi }} \) in \(B_{p, p, \ell + \delta '}^{s + 2 - \delta } ({\mathbb {R}}^4)\) as \(n \rightarrow \infty \), \({\mathbb {P}}\)-almost surely, where \({\bar{\varphi }}_{\varepsilon _n}\) solves the approximate equation

    $$\begin{aligned} {\mathcal {L}} {\bar{\varphi }}_{\varepsilon _n} + \alpha \exp (\alpha {\bar{\varphi }}_{\varepsilon _n}) \eta _{\varepsilon _n} = 0 \end{aligned}$$
    (6)

    uniquely in \(C_{\ell }^0 ({\mathbb {R}}^4)\) such that \(\alpha {\bar{\varphi }}_{\varepsilon _n} \leqslant 0\).

  • Thus, for (ps) such that (5) holds and \(\ell > 0\) large enough, \(\varphi = X + {\bar{\varphi }} \in B_{p, p, \ell }^s ({\mathbb {R}}^4),\) \({\mathbb {P}}\)-almost surely, solves SPDE (1) uniquely. If we consider the following approximation of SPDE (1)

    $$\begin{aligned} \begin{array}{l} {\mathcal {L}} \varphi _{\varepsilon _n} + \alpha \exp (\alpha \varphi _{\varepsilon _n} - C_{\varepsilon _n}) ={\mathfrak {a}}_{\varepsilon _n} *\xi , \end{array} \end{aligned}$$
    (7)

    then, from the proof of Theorem 35 of  [2], we know that \(\varphi _{\varepsilon _n} = {\bar{\varphi }}_{\varepsilon _n} + X_{\varepsilon _n}\) is the unique solution to (7) and \(\varphi _{\varepsilon _n} \rightarrow \varphi \) in \(B_{p, p, \ell }^s ({\mathbb {R}}^4)\), \({\mathbb {P}}\)-almost surely as \(n \rightarrow \infty \).
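For orientation, the role of the renormalization constant \(C_{\varepsilon }\) in (4) can be checked by a direct Gaussian computation: since \(X_{\varepsilon } (z)\) is a centered Gaussian random variable,

```latex
\mathbb{E} \big[ e^{\alpha X_{\varepsilon} (z)} \big]
  = \exp \Big( \tfrac{\alpha^2}{2}\, \mathbb{E} \big[ | X_{\varepsilon} (z) |^2 \big] \Big)
  = e^{C_{\varepsilon}},
\qquad \text{so} \qquad
\mathbb{E} \big[ \exp^{\diamond} (\alpha \mathcal{L}^{- 1} \xi_{\varepsilon}) (z) \big]
  = \mathbb{E} \big[ e^{\alpha X_{\varepsilon} (z) - C_{\varepsilon}} \big] = 1 .
```

In other words, the regularized measure \(\eta _{\varepsilon }\) has expectation \(\textrm{d}z\), uniformly in \(\varepsilon \), which is the normalization behind the convergence \(\eta _{\varepsilon } \rightarrow \eta \) recalled above.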

Let us recall that we have fixed the sequence of \(\{ \varepsilon _n \}_{n \in {\mathbb {N}}}\) which converges to 0 as \(n \rightarrow \infty \). To shorten the notation, we will write \(\varepsilon \rightarrow 0\) equivalently to \(n \rightarrow \infty \).

2.2 Proof of Theorem 1

Assume first that \(| x_1 - x_2 | \leqslant 8.\) In this case (2) is trivial because its l.h.s. is bounded. Consider now the complementary case and let \(l :=| x_1 - x_2 | > 8\). Take two open balls \(D_1\) and \(D_2\) in \({\mathbb {R}}^4\) of radius l/2 with centers \(x_1\) and \(x_2,\) respectively. Further, we take two independent copies \(\zeta _1\) and \(\zeta _2\) of Gaussian space white noise defined on \((\Omega , {\mathfrak {F}}, {\mathbb {P}})\). Define the processes \(X_1\) and \(X_2\) as follows:

$$\begin{aligned} \xi _i :=\mathbb {1}_{D_i} \xi +\mathbb {1}_{D_i^c} \zeta _i, \qquad X_i :={\mathcal {L}}^{- 1} \xi _i, \qquad i = 1, 2. \end{aligned}$$
(8)

Note that \(\xi - \xi _i = 0\) on \(D_i, i = 1, 2,\) in the sense of distributions \({\mathbb {P}}\)-a.s. Moreover, since \(D_1 \cap D_2 = \emptyset \), the processes \(X_1\) and \(X_2\) are independent. Indeed, by setting \((\mathbb {1}_{D_i} \xi ) (f) :=\xi (\mathbb {1}_{D_i} f)\), we observe that for \(f, g \in L^2 ({\mathbb {R}}^4),\)

$$\begin{aligned} {\mathbb {E}} [\xi _1 (f) \xi _2 (g)] = \langle \mathbb {1}_{D_1} f, \mathbb {1}_{D_2} g \rangle _{L^2} = 0. \end{aligned}$$
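This computation admits a simple discrete sanity check (a toy lattice version with hypothetical sizes, not the continuum construction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete analogue of the splicing xi_i = 1_{D_i} xi + 1_{D_i^c} zeta_i:
# lattice white noise on 50 sites, D_1 = {0,...,14}, D_2 = {35,...,49}.
n_sites, n_samples = 50, 40_000
xi = rng.standard_normal((n_samples, n_sites))
zeta1 = rng.standard_normal((n_samples, n_sites))
zeta2 = rng.standard_normal((n_samples, n_sites))

D1 = np.arange(n_sites) < 15
D2 = np.arange(n_sites) >= 35
xi1 = np.where(D1, xi, zeta1)
xi2 = np.where(D2, xi, zeta2)

f = D1.astype(float)     # test function supported in D_1
g = D2.astype(float)     # test function supported in D_2

a = xi1 @ f              # samples of xi_1(f)
b = xi2 @ g              # samples of xi_2(g)

# E[xi_1(f) xi_2(g)] = <1_{D_1} f, 1_{D_2} g> = 0, and each xi_i is again
# a white noise (unit variance per site).
prod_mean = float(np.mean(a * b))
var_ratio = float(np.var(a) / (f @ f))
print(prod_mean, var_ratio)
```

Empirically `prod_mean` is consistent with 0 and `var_ratio` with 1, mirroring the two facts used in the text: \(\xi _1\) and \(\xi _2\) are uncorrelated across \(D_1, D_2\), and each \(\xi _i\) has the law of the original white noise.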

Let us set \(\xi _{\varepsilon } :={\mathfrak {a}}_{\varepsilon } *\xi \) and \(\xi _{i, \varepsilon } :={\mathfrak {a}}_{\varepsilon } *\xi _i, i = 1, 2,\) for the whole subsection. Let \(\varphi _{\varepsilon }\) and \(\varphi _{i, \varepsilon }\), respectively, be the unique solutions to the following regularized versions of Eq. (1)

$$\begin{aligned} {\mathcal {L}} \varphi _{\varepsilon } + \alpha \exp (\alpha \varphi _{\varepsilon } - C_{\varepsilon }) = \xi _{\varepsilon }, \end{aligned}$$

and

$$\begin{aligned} {\mathcal {L}} \varphi _{i, \varepsilon } + \alpha \exp (\alpha \varphi _{i, \varepsilon } - C_{\varepsilon }) = \xi _{i, \varepsilon }, \end{aligned}$$

where \(C_{\varepsilon } :=\frac{\alpha ^2}{2} {\mathbb {E}} [| X_{\varepsilon } |^2] = \frac{\alpha ^2}{2} {\mathbb {E}} [| X_{i, \varepsilon } |^2] \) for \(X_{\varepsilon } ={\mathcal {L}}^{- 1} \xi _{\varepsilon }\) and \(X_{i, \varepsilon } ={\mathcal {L}}^{- 1} \xi _{i, \varepsilon }, i = 1, 2\). Note that due to stationarity in space of the white noise \(\xi \), the constant \(C_{\varepsilon }\) does not depend on \(x \in {\mathbb {R}}^4\).

Next, let us fix \({\mathfrak {p}} \in [2, \infty )\) and consider \(\varphi \), \(\varphi _1\) and \(\varphi _2\) as the unique solutions to the SPDE (1) with noises \(\xi \), \(\xi _1\) and \(\xi _2\), respectively. Their existence has been summarized in Sect. 2.1. Then observe that, since \(F_1\) and \(F_2\) are Lipschitz and bounded functionals, using the Hölder inequality we get the following

$$\begin{aligned}&| {\text {Cov}} (F_1 (f \cdot \varphi (\cdot + x_1)), F_2 (f \cdot \varphi (\cdot + x_2))) | \nonumber \\&\quad \le | {\mathbb {E}} [(F_1 (f \cdot \varphi (\cdot + x_1)) - F_1 (f \cdot \varphi _1 (\cdot + x_1))) F_2 (f \cdot \varphi (\cdot + x_2))] | \nonumber \\&\quad \quad + | {\mathbb {E}} [F_1 (f \cdot \varphi _1 (\cdot + x_1)) (F_2 (f \cdot \varphi (\cdot + x_2)) - F_2 (f \cdot \varphi _2 (\cdot + x_2)))] | \nonumber \\&\quad \quad + | {\mathbb {E}} [F_1 (f \cdot \varphi _1 (\cdot + x_1)) F_2 (f \cdot \varphi _2 (\cdot + x_2))] \nonumber \\&\quad \quad -{\mathbb {E}} [F_1 (f \cdot \varphi (\cdot + x_1))] {\mathbb {E}} [F_2 (f \cdot \varphi (\cdot + x_2))] | \nonumber \\&\quad \lesssim _{F_1, F_2} [{\mathbb {E}} [\Vert f \cdot \varphi (\cdot + x_1) - f \cdot \varphi _1 (\cdot + x_1) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}}]]^{1 /{\mathfrak {p}}} \nonumber \\&\quad \quad + [{\mathbb {E}} [\Vert f \cdot \varphi (\cdot + x_2) - f \cdot \varphi _2 (\cdot + x_2) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}}]]^{1 /{\mathfrak {p}}}. \end{aligned}$$
(9)

Here we used that, since the processes \(\xi _1\) and \(\xi _2\) are independent and the processes \(\xi ,\xi _1\) and \(\xi _2\) have the same law,

$$\begin{aligned}{} & {} {\mathbb {E}} [F_1 (f \cdot \varphi _1 (\cdot + x_1)) F_2 (f \cdot \varphi _2 (\cdot + x_2))] \\ {}{} & {} \quad -{\mathbb {E}} [F_1 (f \cdot \varphi (\cdot + x_1))] {\mathbb {E}} [F_2 (f \cdot \varphi (\cdot + x_2))]= 0. \end{aligned}$$

Thanks to Fatou’s lemma (see Theorem 2.72 of  [10]), to get (2) from (9) it is thus enough to prove that, for \(i = 1, 2\),

$$\begin{aligned} \begin{array}{lll} {\mathbb {E}} [\Vert f \cdot \varphi _{\varepsilon } (\cdot + x_i) - f \cdot \varphi _{i, \varepsilon } (\cdot + x_i) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}}]\lesssim & {} e^{- c l}, \end{array} \end{aligned}$$
(10)

uniformly in \(\varepsilon \), for some \(c > 0\) which does not depend on \(x_1, x_2\).

Due to symmetry, it is sufficient to estimate \({\mathbb {E}} [\Vert f \cdot \varphi _{\varepsilon } (\cdot + x_1) - f \cdot \varphi _{1, \varepsilon } (\cdot + x_1) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}}]\). For that let \({\tilde{D}}_1 :=B \left( x_1, \frac{l}{4} \right) \) and take

$$\begin{aligned}\rho (x) :=e^{- \beta m | x - x_1 |}, \quad x \in {\mathbb {R}}^4, \end{aligned}$$

a weight function, where the value of \(\beta > 0\) will be fixed later. Further, let us take \(\theta \) as a non-negative smooth function supported in \({\tilde{D}}_1\) such that \(\theta = 1\) on \({\bar{D}}_1 :=B \left( x_1, \frac{l}{8} \right) \). To shorten the notation we also set \({\bar{\rho }} (x):= \theta (x) \rho (x)\).

Since f has support in B(0, 1), by the Besov embedding (Theorem 5), followed by the continuous embedding of \(L_{\ell }^{{\mathfrak {p}}}({\mathbb {R}}^4)\) into \(B^0_{{\mathfrak {p}}, \infty , \ell }({\mathbb {R}}^4)\), we get, with \(\chi _{\varepsilon } :=\varphi _{\varepsilon } - \varphi _{1, \varepsilon }\),

$$\begin{aligned}&\Vert f \cdot \varphi _{\varepsilon } (\cdot + x_1) - f \cdot \varphi _{1, \varepsilon } (\cdot + x_1) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}} \lesssim \Vert f (\cdot - x_1) \chi _{\varepsilon } \Vert _{B_{{\mathfrak {p}}, p, \ell }^s}^{{\mathfrak {p}}} \nonumber \\&\qquad \lesssim \int _{B (x_1, 1)} | f (x - x_1) \chi _{\varepsilon } (x) |^{{\mathfrak {p}}} \textrm{d}x ~\le ~ e^{m{\mathfrak {p}} \beta } \Vert f \Vert _{L^{\infty }}^{{\mathfrak {p}}} \Vert {\bar{\rho }} \chi _{\varepsilon } \Vert ^{{\mathfrak {p}}}_{L^{{\mathfrak {p}}} (B (x_1, 1))}. \end{aligned}$$
(11)

Towards estimating \(\Vert {\bar{\rho }} \chi _{\varepsilon } \Vert ^{{\mathfrak {p}}}_{L^{{\mathfrak {p}}} (B (x_1, 1))} \), first we claim that

$$\begin{aligned} \begin{array}{l} \theta (x) (\xi _{\varepsilon } - \xi _{1, \varepsilon }) (x) = 0 \quad \text {for all} \quad x \in {\mathbb {R}}^4. \end{array} \end{aligned}$$
(12)

This is obvious for \(x \in {\mathbb {R}}^4 \setminus {\tilde{D}}_1\). So let us take \(x \in {\tilde{D}}_1\). Since

$$\begin{aligned} (\theta (\xi _{\varepsilon } - \xi _{1, \varepsilon })) (x) = \theta (x) ({\mathfrak {a}}_{\varepsilon } *(\xi - \xi _1)) (x), \end{aligned}$$

it is sufficient to show that \((\xi - \xi _1, {\mathfrak {a}}_{\varepsilon } *g)_{{\mathcal {S}}', {\mathcal {S}}} = 0\) for all \(g \in C_c^{\infty } ({\tilde{D}}_1)\), where \((\cdot , \cdot )_{{\mathcal {S}}', {\mathcal {S}}}\) denotes the duality pairing between the space of tempered distributions \({\mathcal {S}}'\) and the Schwartz space \({\mathcal {S}}\). But, since \(\xi - \xi _1 = 0\) on \(D_1\), it is enough to show that \({\text {supp}} ({\mathfrak {a}}_{\varepsilon } *g) \subset D_1\). This follows because

$$\begin{aligned} \begin{array}{l} ({\mathfrak {a}}_{\varepsilon } *g) (z) = \int _{{\tilde{D}}_1} {\mathfrak {a}}_{\varepsilon } (z - y) g (y) \, \textrm{d}y, \end{array} \end{aligned}$$

and for \(z \in D_1^c\) and \(y \in {\tilde{D}}_1\) we have \(| z - y | \geqslant \frac{l}{4} > \varepsilon \) (recall that \(l > 8\) and \(\varepsilon \) is small), so that \({\mathfrak {a}}_{\varepsilon } (z - y) = 0\). Hence the claim (12) follows.

Next, observe that \(\chi _{\varepsilon }\) satisfies

$$\begin{aligned} \begin{array}{ll}&{\mathcal {L}} \chi _{\varepsilon } + Q_{\varepsilon } \chi _{\varepsilon } = \xi _{\varepsilon } - \xi _{1, \varepsilon }, \end{array} \end{aligned}$$
(13)

where \(Q_{\varepsilon } :=\alpha ^2 \int _0^1 \exp \{ \alpha \varphi _{1, \varepsilon } - C_{\varepsilon } + \Theta \alpha (\varphi _{\varepsilon } - \varphi _{1, \varepsilon }) \} \textrm{d}\Theta > 0\). Then, testing (13) with \({\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon }\) and integrating over \({\mathbb {R}}^4\) gives

$$\begin{aligned} \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } {\mathcal {L}} \chi _{\varepsilon } + \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}} Q_{\varepsilon } = 0, \end{aligned}$$
(14)

where the noise term vanishes because of (12). The first term on the l.h.s. above can be expanded as

$$\begin{aligned} \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } {\mathcal {L}} \chi _{\varepsilon }&= m^2 \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}} + \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } (- \Delta ({\bar{\rho }} \chi _{\varepsilon })) \\&\quad + \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \Delta {\bar{\rho }} + 2 \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } \nabla {\bar{\rho }} \cdot \nabla \chi _{\varepsilon } . \end{aligned}$$

Since integration by parts and the definition of the divergence give

$$\begin{aligned}&2 \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } \nabla {\bar{\rho }} \cdot \nabla \chi _{\varepsilon } = \frac{2}{{\mathfrak {p}}} \int {\bar{\rho }}^{{\mathfrak {p}}- 1} \nabla {\bar{\rho }} \cdot \nabla | \chi _{\varepsilon } |^{{\mathfrak {p}}}\\&\quad = - \frac{2}{{\mathfrak {p}}} \int | \chi _{\varepsilon } |^{{\mathfrak {p}}} {\text {div}} ({\bar{\rho }}^{{\mathfrak {p}}- 1} \nabla {\bar{\rho }}) = - \frac{2}{{\mathfrak {p}}} \int | \chi _{\varepsilon } |^{{\mathfrak {p}}} [({\mathfrak {p}}- 1) {\bar{\rho }}^{{\mathfrak {p}}- 2} | \nabla {\bar{\rho }} |^2 + {\bar{\rho }}^{{\mathfrak {p}}- 1} \Delta {\bar{\rho }}], \end{aligned}$$

we have

$$\begin{aligned} \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } {\mathcal {L}} \chi _{\varepsilon }&= m^2 \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}} + \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } (- \Delta ({\bar{\rho }} \chi _{\varepsilon })) \nonumber \\&\quad + \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \Delta {\bar{\rho }} - \frac{2 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int | \chi _{\varepsilon } |^{{\mathfrak {p}}} {\bar{\rho }}^{{\mathfrak {p}}- 2} | \nabla {\bar{\rho }} |^2 . \end{aligned}$$
(15)
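The pointwise identity behind the integration by parts used above, \({\text {div}} (r^{{\mathfrak {p}}- 1} \nabla r) = ({\mathfrak {p}}- 1) r^{{\mathfrak {p}}- 2} | \nabla r |^2 + r^{{\mathfrak {p}}- 1} \Delta r\), can be confirmed symbolically; the following two-variable check with a generic smooth function is an illustration only.

```python
import sympy as sp

# Symbolic check, in two variables, of the identity used above:
# div(r^{p-1} grad r) = (p-1) r^{p-2} |grad r|^2 + r^{p-1} Lap r
# for a generic smooth function r.
x, y, p = sp.symbols("x y p", positive=True)
r = sp.Function("r")(x, y)

rx, ry = sp.diff(r, x), sp.diff(r, y)
lhs = sp.diff(r**(p - 1) * rx, x) + sp.diff(r**(p - 1) * ry, y)
rhs = (p - 1) * r**(p - 2) * (rx**2 + ry**2) \
      + r**(p - 1) * (sp.diff(r, x, 2) + sp.diff(r, y, 2))
assert sp.simplify(lhs - rhs) == 0
```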

Thus, substitution of (15) into (14) together with \(Q_{\varepsilon } > 0\) yields

$$\begin{aligned} \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } (- \Delta ({\bar{\rho }} \chi _{\varepsilon }))&+ \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \Delta {\bar{\rho }} + m^2 \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \nonumber \\&\le \frac{2 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int | \chi _{\varepsilon } |^{{\mathfrak {p}}} {\bar{\rho }}^{{\mathfrak {p}}- 2} | \nabla {\bar{\rho }} |^2. \end{aligned}$$
(16)

Furthermore, since integration by parts and the product rule give

$$\begin{aligned} \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } (- \Delta ({\bar{\rho }} \chi _{\varepsilon }))&= \int ({\mathfrak {p}}- 1) | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \chi _{\varepsilon } {\bar{\rho }}^{{\mathfrak {p}}- 2} \nabla {\bar{\rho }} \cdot \nabla ({\bar{\rho }} \chi _{\varepsilon }) \nonumber \\&\quad + \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \nabla \chi _{\varepsilon } \cdot \nabla ({\bar{\rho }} \chi _{\varepsilon }) \nonumber \\&\quad + \int ({\mathfrak {p}}- 2) {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} \nabla \chi _{\varepsilon } \cdot \nabla ({\bar{\rho }} \chi _{\varepsilon }) \nonumber \\&= ({\mathfrak {p}}- 1) \int | \chi _{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\rho }}^{{\mathfrak {p}}- 2} | \nabla ({\bar{\rho }} \chi _{\varepsilon }) |^2 \nonumber \\&\geqslant 0, \end{aligned}$$
(17)

from inequality (16) we obtain

$$\begin{aligned} \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \Delta {\bar{\rho }} + m^2 \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \leqslant \frac{2 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int | \chi _{\varepsilon } |^{{\mathfrak {p}}} {\bar{\rho }}^{{\mathfrak {p}}- 2} | \nabla {\bar{\rho }} |^2.\qquad \end{aligned}$$
(18)

Since \(\nabla \rho (x) = - m \beta \frac{x - x_1}{| x - x_1 |} \rho (x)\) for \(x \in {\mathbb {R}}^4 \setminus \{ x_1 \}\),

$$\begin{aligned} \begin{array}{l} \Delta {\bar{\rho }} = \rho \Delta \theta + 2 \nabla \rho \cdot \nabla \theta + m^2 \beta ^2 \theta \rho , \quad {\text {and}} \quad | \nabla {\bar{\rho }} | \leqslant | \nabla \theta | \rho + m \beta \rho \theta , \end{array} \end{aligned}$$

inequality (18) yields

$$\begin{aligned}&\left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \rho \Delta \theta -2m \beta \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \left[ \rho \frac{x - x_1}{| x - x_1 |} \cdot \nabla \theta \right] \\&\qquad \qquad + m^2 \beta ^2 \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \theta \rho + m^2 \int {\bar{\rho }}^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \\&\le \frac{4 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int | \chi _{\varepsilon } |^{{\mathfrak {p}}} {\bar{\rho }}^{{\mathfrak {p}}- 2} \left[ | \nabla \theta |^2 \rho ^2 + m^2 \beta ^2 \rho ^2 \theta ^2 \right] , \end{aligned}$$

where to get the r.h.s. terms we also used \((a+b)^2 \le 2(a^2 + b^2)\), \(\forall a,b \in {\mathbb {R}}\). Consequently, by regrouping the terms together with \(\bigg \vert \frac{x - x_1}{| x - x_1 |} \cdot \nabla \theta \bigg \vert \le |\nabla \theta |\) we get

$$\begin{aligned}&m^2 \left( 1 + \beta ^2 \left( \frac{2 - 3{\mathfrak {p}}}{{\mathfrak {p}}} \right) \right) \Vert {\bar{\rho }} \chi _{\varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}} + \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \rho \Delta \theta \nonumber \\&\quad \le ~ \frac{4 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int | \nabla \theta |^2 \rho ^{{\mathfrak {p}}} \theta ^{{\mathfrak {p}}- 2} | \chi _{\varepsilon } |^{{\mathfrak {p}}} + 2 m \beta \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \rho | \nabla \theta | . \end{aligned}$$
(19)

Moreover, since \(\theta \) is supported in \({\tilde{D}}_1\) and \(\theta = 1\) on \({\bar{D}}_1\), (19) gives

$$\begin{aligned} m^2 \left( 1 + \beta ^2 \left( \frac{2 - 3{\mathfrak {p}}}{{\mathfrak {p}}} \right) \right) \Vert {\bar{\rho }} \chi _{\varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1)}^{{\mathfrak {p}}}&\le \frac{4 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int _{{\tilde{D}}_1 \setminus {\bar{D}}_1} | \nabla \theta |^2 \rho ^{{\mathfrak {p}}} \theta ^{{\mathfrak {p}}- 2} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \nonumber \\&\quad + 2 m \beta \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int _{{\tilde{D}}_1 \setminus {\bar{D}}_1} {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \rho | \nabla \theta | \nonumber \\&\quad + \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int _{{\tilde{D}}_1 \setminus {\bar{D}}_1} {\bar{\rho }}^{{\mathfrak {p}}- 1} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \rho | \Delta \theta | . \end{aligned}$$
(20)

To keep the coefficient of \(\Vert {\bar{\rho }} \chi _{\varepsilon } \Vert _{L^{{\mathfrak {p}}}}\) on the l.h.s. above positive, we choose \(\beta = \beta ({\mathfrak {p}}) > 0\) small enough that

$$\begin{aligned} \begin{array}{l} 1 + \beta ^2 \left( \frac{2 - 3{\mathfrak {p}}}{{\mathfrak {p}}} \right) > 0. \end{array} \end{aligned}$$
(21)

To keep the notation simpler we set

$$\begin{aligned} K (m, \beta , {\mathfrak {p}}) :=m^2 \left( 1 + \beta ^2 \left( \frac{2 - 3{\mathfrak {p}}}{{\mathfrak {p}}} \right) \right) . \end{aligned}$$
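As a sanity check on the algebra producing this constant: collecting the \(\theta \rho = {\bar{\rho }}\) terms when passing to (19) gives the \(L^{{\mathfrak {p}}}\) coefficient \(m^2 [1 + \beta ^2 (1 - 2 /{\mathfrak {p}}) - 4 \beta ^2 ({\mathfrak {p}}- 1) /{\mathfrak {p}}]\), which indeed equals \(K (m, \beta , {\mathfrak {p}})\).

```python
import sympy as sp

# Check that the collected L^p coefficient from (19) matches K(m, beta, p)
# (up to the common factor m^2).
p, beta = sp.symbols("p beta", positive=True)
collected = 1 + beta**2 * (1 - 2/p) - 4 * beta**2 * (p - 1)/p
stated = 1 + beta**2 * (2 - 3*p)/p
assert sp.simplify(collected - stated) == 0
```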

Thus, from (20) we deduce that

$$\begin{aligned}&K (m, \beta , {\mathfrak {p}}) \Vert {\bar{\rho }} \chi _{\varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1)}^{{\mathfrak {p}}}\nonumber \\&\quad \le M_{\theta }^{{\mathfrak {p}}} \left( \frac{4 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} + (2 m \beta + 1) \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \right) \int _{{\tilde{D}}_1 \setminus {\bar{D}}_1} \rho ^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}}, \end{aligned}$$
(22)

where \(M_{\theta } > 0\) is a bound on \(\theta \) and its derivatives up to order 2.

Further, since

$$\begin{aligned} | \rho (x) | \leqslant e^{- m \beta \frac{l}{8}} \qquad \text {for } x \in {\tilde{D}}_1 \setminus {\bar{D}}_1, \end{aligned}$$

by substituting (22) in (11) we infer that

$$\begin{aligned}&\Vert f \cdot \varphi _{\varepsilon } (\cdot + x_1) - f \cdot \varphi _{1, \varepsilon } (\cdot + x_1) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}} \\&\quad \lesssim ~ \frac{e^{m{\mathfrak {p}} \beta } \Vert f \Vert _{L^{\infty }}}{K (m, \beta , {\mathfrak {p}})} M_{\theta }^{{\mathfrak {p}}} \left( \frac{4 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} + (2 m \beta + 1) \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \right) \int _{{\tilde{D}}_1 \setminus {\bar{D}}_1} \rho ^{{\mathfrak {p}}} | \chi _{\varepsilon } |^{{\mathfrak {p}}} \\&\quad \lesssim _{m, {\mathfrak {p}}, M_{\theta }, \Vert f \Vert _{L^{\infty }}} ~~ e^{m{\mathfrak {p}} \beta \left( 1 - \frac{l}{8} \right) } (\Vert X_{\varepsilon } - X_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 \setminus {\bar{D}}_1)}^{{\mathfrak {p}}} + \Vert {\bar{\varphi }}_{\varepsilon } - {\bar{\varphi }}_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 \setminus {\bar{D}}_1)}^{{\mathfrak {p}}}). \end{aligned}$$

Thus, taking the expectation \({\mathbb {E}}\) on both sides we get

$$\begin{aligned}&{\mathbb {E}} \left[ \Vert f \cdot \varphi _{\varepsilon } (\cdot + x_1) - f \cdot \varphi _{1, \varepsilon } (\cdot + x_1) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}} \right] \nonumber \\&\quad \lesssim _{m, {\mathfrak {p}}, M_{\theta }, \Vert f \Vert _{L^{\infty }}} ~~ e^{m{\mathfrak {p}} \beta \left( 1 - \frac{l}{8} \right) } \bigg ( {\mathbb {E}} \left[ \Vert X_{\varepsilon } - X_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 \setminus {\bar{D}}_1)}^{{\mathfrak {p}}} \right] \nonumber \\&\quad \quad + {\mathbb {E}} \left[ \Vert {\bar{\varphi }}_{\varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 \setminus {\bar{D}}_1)}^{{\mathfrak {p}}} \right] +{\mathbb {E}} \left[ \Vert {\bar{\varphi }}_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 \setminus {\bar{D}}_1)}^{{\mathfrak {p}}} \right] \bigg ). \end{aligned}$$
(23)

To estimate the term \({\mathbb {E}} [\Vert X_{\varepsilon } - X_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 {\setminus } {\bar{D}}_1)}^{{\mathfrak {p}}}]\), since \({\text {supp}} {\mathfrak {a}}_{\varepsilon } \subset B (0, \varepsilon ),\) we first infer that \(\xi _{\varepsilon } = \xi _{1, \varepsilon }\) on \(D_{1, \varepsilon } :=B \left( x_1, \frac{l}{2} - \varepsilon \right) \). By using the representation from Lemma 6 we have that

$$\begin{aligned} (X_{\varepsilon } - X_{1, \varepsilon }) (x) = \int _{{\mathbb {R}}^4} K (x - z) \mathbb {1}_{D_{1, \varepsilon }^c} (z) (\xi _{\varepsilon } (\textrm{d}z) - \xi _{1, \varepsilon } (\textrm{d}z) ). \end{aligned}$$
(24)

Since, for \(x \in {\tilde{D}}_1 \setminus {\bar{D}}_1\) and \(z \in D_{1, \varepsilon }^c\), we have \(| x - z | > \frac{l}{4} - \varepsilon \gg 1\), Lemma 6 (1) yields

$$\begin{aligned} {\mathbb {E}} \left[ ((X_{\varepsilon } - X_{1, \varepsilon }) (x))^2 \right]&= {\mathbb {E}} \left[ \int _{{\mathbb {R}}^4} K (x - z) \mathbb {1}_{D_{1, \varepsilon }^c} (z) \xi _{\varepsilon } (\textrm{d}z) \int _{{\mathbb {R}}^4} K (x - z_1) \mathbb {1}_{D_{1, \varepsilon }^c} (z_1) \xi _{\varepsilon } (\textrm{d}z_1) \right] \\&\quad -{\mathbb {E}} \left[ \int _{{\mathbb {R}}^4} K (x - z) \mathbb {1}_{D_{1, \varepsilon }^c} (z) \xi _{1, \varepsilon } (\textrm{d}z) \int _{{\mathbb {R}}^4} K (x - z_1) \mathbb {1}_{D_{1, \varepsilon }^c} (z_1) \xi _{\varepsilon } (\textrm{d}z_1) \right] \\&\quad -{\mathbb {E}} \left[ \int _{{\mathbb {R}}^4} K (x - z) \mathbb {1}_{D_{1, \varepsilon }^c} (z) \xi _{\varepsilon } (\textrm{d}z) \int _{{\mathbb {R}}^4} K (x - z_1) \mathbb {1}_{D_{1, \varepsilon }^c} (z_1) \xi _{1, \varepsilon } (\textrm{d}z_1) \right] \\&\quad +{\mathbb {E}} \left[ \int _{{\mathbb {R}}^4} K (x - z) \mathbb {1}_{D_{1, \varepsilon }^c} (z) \xi _{1, \varepsilon } (\textrm{d}z) \int _{{\mathbb {R}}^4} K (x - z_1) \mathbb {1}_{D_{1, \varepsilon }^c} (z_1) \xi _{1, \varepsilon } (\textrm{d}z_1) \right] \\&\lesssim \int _{{\mathbb {R}}^8} C_1^2 e^{- C_2 | x - z |} e^{- C_2 | x - z_1 |} \mathbb {1}_{D_{1, \varepsilon }^c} (z) \mathbb {1}_{D_{1, \varepsilon }^c} (z_1) \\&\quad \int _{{\mathbb {R}}^4} {\mathfrak {a}}_{\varepsilon } (z - z_2) {\mathfrak {a}}_{\varepsilon } (z_1 - z_2) \textrm{d}z_2 \textrm{d}z \textrm{d}z_1 \\&\le C_1^2 e^{- C_2 d (x, D_{1, \varepsilon }^c)} \int _{{\mathbb {R}}^4} e^{- C_2 | x - z_1 |} \textrm{d}z_1 \lesssim e^{- C_2 d (x, D_{1, \varepsilon }^c)}, \end{aligned}$$

which is finite and independent of \(\varepsilon \). Here we have also employed the fact that \(\int _{{\mathbb {R}}^4} ({\mathfrak {a}}_{\varepsilon } *{\mathfrak {a}}_{\varepsilon }) (z - z_1) \textrm{d}z = 1\), which holds true because convolution preserves total mass: \(\int {\mathfrak {a}}_{\varepsilon } *{\mathfrak {a}}_{\varepsilon } = \left( \int {\mathfrak {a}}_{\varepsilon } \right) ^2 = 1\).
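The mass-conservation fact used here, \(\int f *g = \int f \int g\), is elementary and easy to confirm numerically; the one-dimensional mollifier below is an illustrative stand-in for \({\mathfrak {a}}_{\varepsilon }\).

```python
import numpy as np

# 1D stand-in for a_eps: convolution preserves total mass, so
# ∫ (a_eps * a_eps) = (∫ a_eps)^2 = 1 once a_eps is normalized.
eps, dx = 0.25, 1e-4
x = np.arange(-1, 1, dx)
a = np.zeros_like(x)
mask = np.abs(x) < eps
a[mask] = np.exp(-1.0 / (eps**2 - x[mask]**2))
a /= a.sum() * dx                                    # ∫ a_eps = 1

mass = np.convolve(a, a, mode="full").sum() * dx**2  # ≈ ∫ a_eps * a_eps
assert abs(mass - 1.0) < 1e-10
```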

Consequently, since \((X_{\varepsilon } - X_{1, \varepsilon }) (x)\) is Gaussian from (24), by hypercontractivity (see Theorem 3.50 in [35]) there exists a constant \(C_{{\mathfrak {p}}} > 0\) such that, for every \(x \in {\tilde{D}}_1 \setminus {\bar{D}}_1\),

$$\begin{aligned} {\mathbb {E}} \left[ | (X_{\varepsilon } - X_{1, \varepsilon }) (x) |^{{\mathfrak {p}}} \right]&\le C_{{\mathfrak {p}}} \left( {\mathbb {E}} \left[ | (X_{\varepsilon } - X_{1, \varepsilon }) (x) |^2 \right] \right) ^{\frac{{\mathfrak {p}}}{2}} \lesssim C_{{\mathfrak {p}}} e^{- \frac{{\mathfrak {p}}}{2} C_2 d (x, D_{1, \varepsilon }^c)} \nonumber \\&\leqslant C_{{\mathfrak {p}}}. \end{aligned}$$
(25)
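For Gaussian random variables the hypercontractive moment bound is in fact an identity with \(C_{{\mathfrak {p}}} = {\mathbb {E}} | g |^{{\mathfrak {p}}}\), \(g \sim N (0, 1)\); a quick Monte Carlo check (the sample size and parameters below are arbitrary choices for the sketch):

```python
import numpy as np

# For a centered Gaussian Z: E|Z|^p = E|g|^p * (E Z^2)^{p/2}, g ~ N(0,1),
# which gives the bound used in (25) with C_p = E|g|^p.
rng = np.random.default_rng(0)
p, sigma, n = 3.0, 1.7, 2_000_000
z = sigma * rng.standard_normal(n)
g = rng.standard_normal(n)
lhs = np.mean(np.abs(z)**p)
rhs = np.mean(np.abs(g)**p) * np.mean(z**2)**(p / 2)
assert abs(lhs - rhs) / rhs < 0.02   # equality up to Monte Carlo error
```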

Furthermore, since

$$\begin{aligned}{\mathbb {E}} \left[ \Vert X_{\varepsilon } - X_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 \setminus {\bar{D}}_1)}^{{\mathfrak {p}}} \right] ~=~ \int _{\Omega } \int _{{\tilde{D}}_1 \setminus {\bar{D}}_1} | X_{\varepsilon } (x, \omega ) - X_{1, \varepsilon } (x, \omega ) |^{{\mathfrak {p}}} \, \textrm{d}x\, {\mathbb {P}} (\textrm{d}\omega ),\end{aligned}$$

the Fubini theorem combined with (25) yields

$$\begin{aligned} {\mathbb {E}} \left[ \Vert X_{\varepsilon } - X_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 \setminus {\bar{D}}_1)}^{{\mathfrak {p}}} \right] = \int _{{\tilde{D}}_1 \setminus {\bar{D}}_1} {\mathbb {E}} [| (X_{\varepsilon } - X_{1, \varepsilon }) (x) |^{{\mathfrak {p}}}] \, \textrm{d}x \lesssim C_{{\mathfrak {p}}}. \end{aligned}$$
(26)

Finally, we assert that \({\mathbb {E}} [\Vert \rho {\bar{\varphi }}_{\varepsilon } \Vert _{L^{{\mathfrak {p}}}({\mathbb {R}}^4)}^{{\mathfrak {p}}}] < \infty \). Since \(\rho \) is bounded below on the bounded set \({\tilde{D}}_1\), this assertion implies \({\mathbb {E}} [\Vert {\bar{\varphi }}_{\varepsilon } \Vert _{L^{{\mathfrak {p}}} ({\tilde{D}}_1 {\setminus } {\bar{D}}_1)}^{{\mathfrak {p}}}] < \infty \) in (23). We start the proof of this claim by recalling from Sect. 2.1 that \(\alpha {\bar{\varphi }}_{\varepsilon } \leqslant 0\) and that \({\bar{\varphi }}_{\varepsilon }\) is the unique solution to

$$\begin{aligned} \begin{array}{l} {\mathcal {L}} {\bar{\varphi }}_{\varepsilon } + \alpha \exp (\alpha {\bar{\varphi }}_{\varepsilon }) \eta _{\varepsilon } = 0. \end{array} \end{aligned}$$
(27)

By testing (27) with \(\rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\varphi }}_{\varepsilon }\) and integrating it on \({\mathbb {R}}^4\) we obtain

$$\begin{aligned} \int \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\varphi }}_{\varepsilon } {\mathcal {L}} {\bar{\varphi }}_{\varepsilon } + \int \alpha \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\varphi }}_{\varepsilon } \exp (\alpha {\bar{\varphi }}_{\varepsilon }) \eta _{\varepsilon } = 0. \end{aligned}$$
(28)

Since from (15) and (17)

$$\begin{aligned} \int \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\varphi }}_{\varepsilon } {\mathcal {L}} {\bar{\varphi }}_{\varepsilon }&= m^2 \int \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}} + \int \rho ^{{\mathfrak {p}}- 1} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\varphi }}_{\varepsilon } (- \Delta (\rho {\bar{\varphi }}_{\varepsilon })) \\&\quad + \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int \rho ^{{\mathfrak {p}}- 1} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}} \Delta \rho - \frac{2 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}} \rho ^{{\mathfrak {p}}- 2} | \nabla \rho |^2, \end{aligned}$$

where \(\int \rho ^{{\mathfrak {p}}- 1} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\varphi }}_{\varepsilon } (- \Delta (\rho {\bar{\varphi }}_{\varepsilon })) \ge 0\), (28) gives

$$\begin{aligned}&m^2 \int \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}} + \int \alpha \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\varphi }}_{\varepsilon } \exp (\alpha {\bar{\varphi }}_{\varepsilon }) \eta _{\varepsilon } \nonumber \\&\qquad \quad = ~ \frac{2 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}} \rho ^{{\mathfrak {p}}- 2} | \nabla \rho |^2 - \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int \rho ^{{\mathfrak {p}}- 1} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}} \Delta \rho . \end{aligned}$$
(29)

Since \(\eta _{\varepsilon } \rho ^{{\mathfrak {p}}}\) is a positive distribution, the second term on the l.h.s. of (29) can be estimated as

$$\begin{aligned} \left| \int \alpha \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} {\bar{\varphi }}_{\varepsilon } \exp (\alpha {\bar{\varphi }}_{\varepsilon }) \eta _{\varepsilon } \right| ~\le ~ \Vert ~ | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} \alpha {\bar{\varphi }}_{\varepsilon } \exp (\alpha {\bar{\varphi }}_{\varepsilon }) {\mathbb {I}} (\alpha {\bar{\varphi }}_{\varepsilon }) \Vert _{C ({\mathbb {R}}^4)} \left[ \int \eta _{\varepsilon } \rho ^{{\mathfrak {p}}} \right] , \end{aligned}$$

where \({\mathbb {I}}: {\mathbb {R}} \rightarrow {\mathbb {R}}_+\) is a smooth function supported on \((- \infty , 1)\) and equal to 1 on \((- \infty , 0]\). In particular, \({\mathbb {I}} (\alpha {\bar{\varphi }}_{\varepsilon }) = 1\), since \(\alpha {\bar{\varphi }}_{\varepsilon } \leqslant 0\). But, for each \(x \in {\mathbb {R}}^4\),

$$\begin{aligned} | ~ | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 2} \alpha {\bar{\varphi }}_{\varepsilon } \exp (\alpha {\bar{\varphi }}_{\varepsilon }) {\mathbb {I}} (\alpha {\bar{\varphi }}_{\varepsilon }) |&=~ | \alpha |^{2 -{\mathfrak {p}}} \, | \alpha {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}- 1} \exp (\alpha {\bar{\varphi }}_{\varepsilon }) {\mathbb {I}} (\alpha {\bar{\varphi }}_{\varepsilon }) \\&\le ~ | \alpha |^{2 -{\mathfrak {p}}} \sup _{z \in {\mathbb {R}}} [| z |^{{\mathfrak {p}}- 1} \exp (z) {\mathbb {I}} (z)]\\&\le ~ C | \alpha |^{2 -{\mathfrak {p}}}, \end{aligned}$$

for some \(C > 0\), where the r.h.s. is independent of \(\varepsilon \) and x. By substituting the above estimate into (29) we obtain

$$\begin{aligned} m^2 \int \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}}&~ \le ~ \frac{2 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \int | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}} \rho ^{{\mathfrak {p}}} \frac{| \nabla \rho |^2}{\rho ^2} - \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \int \rho ^{{\mathfrak {p}}} | {\bar{\varphi }}_{\varepsilon } |^{{\mathfrak {p}}} \frac{\Delta \rho }{\rho } \nonumber \\&\qquad + C | \alpha |^{2 -{\mathfrak {p}}} \int \eta _{\varepsilon } \rho ^{{\mathfrak {p}}} . \end{aligned}$$
(30)

Now since \(\nabla \rho (x) = - m \beta \frac{x - x_1}{| x - x_1 |} \rho (x)\) for \(x \in {\mathbb {R}}^4 {\setminus } \{ x_1 \}\) and \(\Delta \rho = \rho m^2 \beta ^2\), we can choose \(\beta > 0\) such that

$$\begin{aligned} \frac{2 ({\mathfrak {p}}- 1)}{{\mathfrak {p}}} \frac{| \nabla \rho |^2}{\rho ^2} - \left( 1 - \frac{2}{{\mathfrak {p}}} \right) \frac{\Delta \rho }{\rho } ~=~ m^2 \beta ^2 ~\leqslant ~ \frac{m^2}{2}. \end{aligned}$$
(31)
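The reduction in (31) is pure arithmetic once \(| \nabla \rho |^2 / \rho ^2 = m^2 \beta ^2\) and \(\Delta \rho / \rho = m^2 \beta ^2\) (as used in the text) are inserted, since the rational coefficients sum to 1:

```python
import sympy as sp

# With |grad rho|^2/rho^2 = Delta rho/rho = m^2 beta^2 as in the text,
# the l.h.s. of (31) is (2(p-1)/p - (1 - 2/p)) m^2 beta^2 = m^2 beta^2.
p, m, beta = sp.symbols("p m beta", positive=True)
assert sp.simplify(2*(p - 1)/p - (1 - 2/p) - 1) == 0
lhs = (2*(p - 1)/p) * m**2 * beta**2 - (1 - 2/p) * m**2 * beta**2
assert sp.simplify(lhs - m**2 * beta**2) == 0
```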

Consequently, with \(\beta \) such that (21) and (31) hold true, from (30) we deduce that

$$\begin{aligned} \frac{m^2}{2} \Vert \rho {\bar{\varphi }}_{\varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}} ~\leqslant ~ C | \alpha |^{2 -{\mathfrak {p}}} \int \eta _{\varepsilon } \rho ^{{\mathfrak {p}}}. \end{aligned}$$
(32)

Thus, \({\mathbb {E}} \left[ \Vert \rho {\bar{\varphi }}_{\varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}} \right] < \infty \) and the bound is uniform in \(\varepsilon \) because

$$\begin{aligned} {\mathbb {E}} \left[ \int \eta _{\varepsilon } \rho ^{{\mathfrak {p}}} \right] = \int e^{- m{\mathfrak {p}} \beta | x - x_1 |} \textrm{d}x < \infty . \end{aligned}$$
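The finiteness of this integral is a standard radial computation: writing \(c = m{\mathfrak {p}} \beta \) and using that the unit sphere in \({\mathbb {R}}^4\) has area \(2 \pi ^2\), one gets \(\int _{{\mathbb {R}}^4} e^{- c | x |} \textrm{d}x = 2 \pi ^2 \int _0^{\infty } r^3 e^{- c r} \textrm{d}r = 12 \pi ^2 / c^4\). A numerical confirmation, with an arbitrary value of c:

```python
import numpy as np

# Radial computation behind the display: in R^4 (unit-sphere area 2*pi^2),
# ∫_{R^4} e^{-c|x|} dx = 2*pi^2 * ∫_0^∞ r^3 e^{-c r} dr = 12*pi^2 / c^4.
c = 1.3                                          # stands for m * p * beta
r, dr = np.linspace(0, 60, 1_000_001, retstep=True)
f = r**3 * np.exp(-c * r)
radial = np.sum((f[1:] + f[:-1]) / 2) * dr       # trapezoid rule
value = 2 * np.pi**2 * radial
assert abs(value - 12 * np.pi**2 / c**4) < 1e-5
```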

Similarly we can show that \({\mathbb {E}} [\Vert \rho {\bar{\varphi }}_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}}] < \infty \) uniformly in \(\varepsilon \).

Hence, substituting (26), (32) and the uniform boundedness of \({\mathbb {E}} [\Vert \rho {\bar{\varphi }}_{\varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}}]\) and \({\mathbb {E}} [\Vert \rho {\bar{\varphi }}_{1, \varepsilon } \Vert _{L^{{\mathfrak {p}}}}^{{\mathfrak {p}}}]\) into (23), for \(\beta \) satisfying (21) and (31), we have

$$\begin{aligned} \begin{array}{lll} {\mathbb {E}} \left[ \Vert f \cdot \varphi _{\varepsilon } (\cdot + x_1) - f \cdot \varphi _{1, \varepsilon } (\cdot + x_1) \Vert _{B_{p, p, \ell }^s}^{{\mathfrak {p}}} \right]&\lesssim _{m, {\mathfrak {p}}, M_{\theta }, | \alpha |, \Vert f \Vert _{L^{\infty }}}&e^{- m \beta \frac{l}{8}}, \end{array} \end{aligned}$$

which is independent of \(\varepsilon \) and \(x_1\). Here we have also used \(e^{m \beta \left( 1 - \frac{l}{8} \right) } \simeq _{m, {\mathfrak {p}}} e^{- m \beta \frac{l}{8}}\). Hence we obtain (10), and in view of inequality (9) the proof of Theorem 1 is complete.

3 The Malliavin calculus approach

      In this section our aim is to present the proof of Theorem 1 via the approach based on Malliavin calculus. The proof starts from an approximation \(\varphi _{\varepsilon , R}\) which makes it easier to apply the Malliavin calculus, see Eqs. (35) and (36). The solution theory for (35) is closely related to Lemmata 30 and 31 of  [2] and is proved in Proposition 1 and Lemma 5 below. The Malliavin calculus enters in estimating \({\text {Cov}} (\varphi _{\varepsilon , R} (x_1), \varphi _{\varepsilon , R} (x_2))\) in terms of the Malliavin derivative of \(\varphi _{\varepsilon , R}\), which we denote by \(D_z \varphi _{\varepsilon , R}\), see Eqs. (60), (62) and (63). The existence of \(D_z \varphi _{\varepsilon , R}\) and the linear elliptic SPDE it satisfies are established in Theorem 3, thanks to a preliminary abstract result from  [49] which we state as Theorem 2. Finally, the Feynman–Kac formula and some estimates from Malliavin calculus, for example (61), allow us to finish the proof.

3.1 Preliminaries

      Before moving on, let us first recall the tools from Malliavin calculus that we will need. Most of the definitions and preliminary results here are taken from Chapter 1 of Nualart’s book  [44]. Let H be a separable Hilbert space and \(W = \{ W (h), h \in H \}\) an isonormal Gaussian process defined on a complete probability space \((\Omega , {\mathfrak {F}}, {\mathbb {P}})\). Let \({\mathcal {E}}\) be the \(\sigma \)-field generated by the random variables \(\{ W (h), h \in H \}\). Since \({\mathcal {E}} \subseteq {\mathfrak {F}}\), note that when we write \((\Omega , {\mathcal {E}}, {\mathbb {P}})\) we mean that \({\mathbb {P}}\) is the restriction of the probability measure defined on \({\mathfrak {F}}\) to \({\mathcal {E}}\).

For each \(n \geqslant 0\), we denote by \(H_n (x)\) the well-known nth Hermite polynomial and by \({\mathcal {H}}_n\) the Wiener chaos of order n, that is, the closed linear subspace of \(L^2 (\Omega , {\mathcal {E}}, {\mathbb {P}})\) generated by the random variables \(\{ H_n (W (h)), h \in H, \Vert h \Vert _{H} = 1 \}\) for \(n \geqslant 1\), and the set of constants for \(n = 0\). One of the important results in Malliavin calculus is the Wiener chaos decomposition of \(L^2 (\Omega , {\mathcal {E}}, {\mathbb {P}})\) into the orthogonal sum of the spaces \({\mathcal {H}}_n\), i.e.,

$$\begin{aligned} L^2 (\Omega , {\mathcal {E}}, {\mathbb {P}}) = \oplus _{n \geqslant 0} {\mathcal {H}}_n. \end{aligned}$$

In particular, for any \(F \in L^2 (\Omega , {\mathcal {E}}, {\mathbb {P}})\) we have \(F = \sum _{n = 0}^{\infty } J_n F\), where \(J_n F\) denotes the projection of F onto \({\mathcal {H}}_n\). We will restrict the discussion in this section to \(L^2 (\Omega , {\mathcal {E}}, {\mathbb {P}})\) and, to shorten the notation, we will denote it by \(L^2 (\Omega )\).
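As a concrete illustration with a single Gaussian variable (using NumPy's probabilists' Hermite polynomials): for \(F = W (h)^3\) with \(\Vert h \Vert _H = 1\), the identity \(z^3 = H_3 (z) + 3 H_1 (z)\) gives \(J_3 F = H_3 (W (h))\), \(J_1 F = 3 W (h)\), and all other projections vanish. Monte Carlo recovers the chaos coefficients \({\mathbb {E}} [F H_n (W (h))] / n!\):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite He_n

# Chaos decomposition of F = W(h)^3, ||h|| = 1: since z^3 = He_3(z) + 3 He_1(z),
# the coefficients E[F He_n(W(h))] / n! should be [0, 3, 0, 1, 0].
rng = np.random.default_rng(1)
w = rng.standard_normal(4_000_000)             # samples of W(h) ~ N(0, 1)
F = w**3
coeffs = [np.mean(F * He.hermeval(w, [0]*n + [1])) / math.factorial(n)
          for n in range(5)]
assert abs(coeffs[1] - 3) < 0.05 and abs(coeffs[3] - 1) < 0.05
assert all(abs(coeffs[n]) < 0.05 for n in (0, 2, 4))
```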

The Malliavin derivative operator D maps the domain \({\mathbb {D}}^{1, 2} \subseteq L^2 (\Omega ) \) to the space of H-valued random variables \(L^2 (\Omega ; H).\) Note that \(F \in {\mathbb {D}}^{1, 2}\) if and only if \(\sum _{n = 1}^{\infty } n \Vert J_n F \Vert _{L^2 (\Omega )}^2 < \infty \). Moreover, in this setting for all \(n \geqslant 1\), we have

$$\begin{aligned} D (J_n F) = J_{n - 1} (D F). \end{aligned}$$
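At the level of a single Gaussian variable this identity mirrors the Hermite relation \(H_n' (x) = n H_{n - 1} (x)\): for \(F = H_n (W (h))\) with \(\Vert h \Vert _H = 1\), \(D F = H_n' (W (h)) \, h = n H_{n - 1} (W (h)) \, h\). The polynomial relation can be checked directly:

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite He_n

# He_n'(x) = n He_{n-1}(x): differentiate the basis vector for He_n and
# compare coefficients in the Hermite basis.
for n in range(1, 9):
    deriv = He.hermeder([0]*n + [1])           # coefficients of He_n'
    assert np.allclose(deriv, [0]*(n - 1) + [n])
```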

The divergence operator \(\delta : {\text {Dom}} \delta \subseteq L^2 (\Omega ; H) \rightarrow L^2 (\Omega )\) is defined as the adjoint of the derivative operator D. We will work in the special case of \(H = L^2 (T, {\mathcal {B}}, \tau )\), where \((T, {\mathcal {B}})\) is a measurable space and \(\tau \) is a \(\sigma \)-finite atom-less measure on \((T, {\mathcal {B}})\). Also, we will identify \(L^2 (\Omega ; L^2 (T))\) with \(L^2 (T \times \Omega )\) which is the set of square integrable stochastic processes. Thus, for \(F \in {\mathbb {D}}^{1, 2}\), \(D F \in L^2 (T \times \Omega )\) and we write \(D_t F = D F (t), ~ \forall t \in T\). By \({\mathbb {D}}^{1, 2} (L^2 (T))\) we denote the set of stochastic processes \(u \in L^2 (T \times \Omega )\) such that \(u (t) \in {\mathbb {D}}^{1, 2}\) for almost all \(t \in T\) and there exists a measurable version of the two parameter process \(\{ D_s u (t) \}_{s, t \in T} \subset L^2 (\Omega )\) satisfying

$$\begin{aligned} {\mathbb {E}} \left[ \int _T \int _T (D_s u (t))^2 \, \tau (\textrm{d}s)\, \tau (\textrm{d}t) \right] < \infty . \end{aligned}$$

In the Malliavin calculus literature, the space \({\mathbb {D}}^{1, 2} (L^2 (T))\) is generally denoted by \({\mathbb {L}}^{1, 2}\). Note that \({\mathbb {L}}^{1, 2}\) is a subset of \({\text {Dom}} \delta \) and isomorphic to \(L^2 (T; {\mathbb {D}}^{1, 2}).\) Then, see (1.54) of [44], for \(u, v \in {\mathbb {L}}^{1, 2}\) we have

$$\begin{aligned} {\mathbb {E}} [\delta (u) \delta (v)] = \int _T {\mathbb {E}} [u (t) v (t)] \, \tau (\textrm{d}t) + \int _T \int _T {\mathbb {E}} [D_s u (t) D_s v (t)] \, \tau (\textrm{d}s)\, \tau (\textrm{d}t). \nonumber \\ \end{aligned}$$
(33)
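For deterministic integrands the second term in (33) vanishes and the formula reduces to the Itô-type isometry \({\mathbb {E}} [W (h) W (g)] = \int _T h g \, \tau (\textrm{d}t)\), which can be checked by Monte Carlo with discretized white noise on \([0, 1]\) (the grid, test functions and sample size below are arbitrary choices for the sketch):

```python
import numpy as np

# Deterministic-integrand case of (33): delta(h) = W(h), the D-term vanishes,
# and E[W(h) W(g)] = ∫ h g dtau. White noise discretized on a grid of [0, 1].
rng = np.random.default_rng(2)
N, M = 128, 40_000
t = (np.arange(N) + 0.5) / N
dt = 1.0 / N
h, g = np.sin(2 * np.pi * t), t**2

xi = rng.standard_normal((M, N)) / np.sqrt(dt)   # E[xi_i xi_j] = delta_ij / dt
Wh = (xi * h).sum(axis=1) * dt                   # W(h) = ∫ h dxi
Wg = (xi * g).sum(axis=1) * dt
assert abs(np.mean(Wh * Wg) - np.sum(h * g) * dt) < 0.01
```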

Let \(\{ P_t, t \geqslant 0 \}\) be the one-parameter Ornstein–Uhlenbeck semigroup of contraction operators on \(L^2 (\Omega )\) and denote by \(L: L^2 (\Omega ) \ni F \rightarrow \sum _{n = 0}^{\infty } - n J_n F \in L^2 (\Omega )\) its infinitesimal generator, with domain

$$\begin{aligned} {\text {Dom}} L = \left\{ F \in L^2 (\Omega ): \sum _{n = 0}^{\infty } n^2 \Vert J_n F \Vert _{L^2 (\Omega )}^2 < \infty \right\} . \end{aligned}$$

From Proposition 1.4.3 of [44] we know that, for \(F \in L^2 (\Omega ),\) \(F \in {\text {Dom}} L\) if and only if \(F \in {\mathbb {D}}^{1, 2}\) and \(D F \in {\text {Dom}} \delta \). In this case we have \(\delta D F = - L F.\)

With the above notation, equality (90) in [21] gives the following commutation property

$$\begin{aligned} D (I - L)^{- 1} F = (2 I - L)^{- 1} D F, \quad \forall F \in L^2 (\Omega ), \end{aligned}$$

and the proof of Lemma B.1 in [21] gives the following first-order expansion

$$\begin{aligned} F -{\mathbb {E}} [F] = \delta (I - L)^{- 1} D F, \qquad \forall F \in {\mathbb {D}}^{1, 2}. \end{aligned}$$
(34)
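A sketch of why (34) holds, arguing chaos-by-chaos with the identities recalled above (namely \(D (J_n F) = J_{n - 1} (D F)\) and \(\delta D J_n F = - L J_n F = n J_n F\)); since \(D J_n F\) lives in the chaos of order \(n - 1\), on which \(I - L\) acts as multiplication by n:

```latex
\delta (I - L)^{-1} D F
  = \sum_{n \geqslant 1} \delta (I - L)^{-1} D (J_n F)
  = \sum_{n \geqslant 1} \tfrac{1}{n}\, \delta D (J_n F)
  = \sum_{n \geqslant 1} J_n F
  = F - \mathbb{E}[F].
```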

To proceed with our analysis, let us fix the \(\sigma \)-finite measure space \((T, {\mathcal {B}}, \tau )\) as \(({\mathbb {R}}^4, {\mathcal {B}} ({\mathbb {R}}^4), \textrm{d}x)\) where \({\mathcal {B}} ({\mathbb {R}}^4)\) denotes the Borel \(\sigma \)-field on \({\mathbb {R}}^4\) and \(\textrm{d}x\) stands for the Lebesgue measure.

3.2 Proof of Theorem 1

We recall that \(\xi \) is a given space white noise on \({\mathbb {R}}^4\). Thus, the isonormal Gaussian process we consider here is \(W (h) = \langle \xi , h \rangle , h \in L^2_{\ell } ({\mathbb {R}}^4)\), indexed by the Hilbert space \(L^2_{\ell } ({\mathbb {R}}^4).\) We will work within the framework of Malliavin calculus associated to the white noise \(\xi \) on \({\mathbb {R}}^4\). To set up, let \(\Omega = B^{- 2 - \kappa }_{\infty , \infty , \ell } ({\mathbb {R}}^4)\) and let \({\mathbb {P}}\) be the law of \(\xi \) on \(\Omega .\)

It turns out that the following approximation of Eq. (3), instead of (6), is more suitable for the above-mentioned tools from Malliavin calculus:

$$\begin{aligned} {\mathcal {L}} {\bar{\varphi }}_{\varepsilon , R} + \alpha K_R (\exp (\alpha {\bar{\varphi }}_{\varepsilon , R}) \exp (\alpha X_{\varepsilon } - C_{\varepsilon })) = 0, \end{aligned}$$
(35)

where \(K_R: (0, \infty ) \rightarrow (0, \infty )\) is a smooth function which is equal to x if \(x \in (0, R - 1]\), equal to R if \(x \geqslant R\), and increasing for \(x \in (R - 1, R)\). Since the proof of the solution theory for Eq. (35) presented here is closely related to Lemmata 30 and 31 of  [2], the results about the existence of a unique solution \({\bar{\varphi }}_{\varepsilon , R}\) to (35) are postponed to Proposition 1 and Lemma 5 in Appendix A. Moreover, it is straightforward to see that \({\bar{\varphi }}_{\varepsilon , R} \rightarrow {\bar{\varphi }}_{\varepsilon }\) as \(R \rightarrow \infty \), where \({\bar{\varphi }}_{\varepsilon }\) is the unique solution to Eq. (6).
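For intuition, here is a hypothetical \(C^2\) cutoff with the stated properties of \(K_R\) (identity up to \(R - 1\), constant R from R on, monotone glue in between); this is only an illustrative construction, not the function used in the paper.

```python
import numpy as np

# Hypothetical cutoff with the stated properties of K_R: K_R(x) = x on (0, R-1],
# K_R(x) = R for x >= R, non-decreasing on (R-1, R); glued with a C^2 smoothstep.
def K_R(x, R):
    x = np.asarray(x, dtype=float)
    s = np.clip(x - (R - 1), 0.0, 1.0)           # transition variable
    sigma = s**3 * (10 - 15*s + 6*s**2)          # C^2 smoothstep: 0 -> 1
    mid = (1 - sigma) * x + sigma * R            # interpolates x -> R on (R-1, R)
    return np.where(x <= R - 1, x, np.where(x >= R, R, mid))

xs = np.linspace(0.01, 8, 2000)
assert np.allclose(K_R(np.array([0.5, 3.0]), 5), [0.5, 3.0])  # identity region
assert np.allclose(K_R(np.array([5.0, 7.5]), 5), [5.0, 5.0])  # capped region
assert np.all(np.diff(K_R(xs, 5)) >= -1e-12)                  # monotone
```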

Further recall, from (4), that we denote the expression \(\exp (\alpha X_{\varepsilon } - C_{\varepsilon })\) by \(\eta _{\varepsilon }\). Let us define the following random field

$$\begin{aligned} ({\mathcal {G}}_{\varepsilon } *\xi ) (x) :=\int _{{\mathbb {R}}^4} ({\mathfrak {a}}_{\varepsilon } *{\mathcal {G}}) (x - y) \, \xi (\textrm{d}y), \qquad x \in {\mathbb {R}}^4, \end{aligned}$$

where \({\mathcal {G}}\) is the Green function of the operator \(- \Delta + m^2\), i.e., the integral kernel of \((- \Delta + m^2)^{- 1}\), and \({\mathcal {G}}_{\varepsilon } :={\mathfrak {a}}_{\varepsilon } *{\mathcal {G}}\). It can be shown that \({\mathcal {G}}_{\varepsilon } *\xi \) is a smooth Gaussian process, see Theorem 5.1 of [41].

By setting \(\varphi _{\varepsilon , R} = {\bar{\varphi }}_{\varepsilon , R} + X_{\varepsilon }\), we obtain from (35) that \(\varphi _{\varepsilon , R}\) uniquely solves the following equation

$$\begin{aligned} {\mathcal {L}} \varphi _{\varepsilon , R} + \alpha K_R (\exp (\alpha \varphi _{\varepsilon , R} - C_{\varepsilon })) = \xi _{\varepsilon }, \end{aligned}$$
(36)

which is equivalent to saying that, for \(x \in {\mathbb {R}}^4\) and \(\omega \in \Omega \),

$$\begin{aligned} \varphi _{\varepsilon , R} (x, \omega ) + \alpha \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) K_R (\exp (\alpha \varphi _{\varepsilon , R} (y, \omega ) - C_{\varepsilon })) \, \textrm{d}y = ({\mathcal {G}}_{\varepsilon } *\xi ) (x). \end{aligned}$$
(37)

To shorten the notation we will write

$$\begin{aligned} \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) K_R (\exp (\alpha \varphi _{\varepsilon , R} (y, \omega ) - C_{\varepsilon })) \, \textrm{d}y = ({\mathcal {G}} *K_R (\exp (\alpha \varphi _{\varepsilon , R} (\cdot , \omega ) - C_{\varepsilon }))) (x). \end{aligned}$$

Since the covariance \({\text {Cov}} (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1)), F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2)))\) that we want to estimate can be written in terms of \(D_z \varphi _{\varepsilon , R}\), see (60) for the precise expression, we aim next to find the equation satisfied by \(D_z \varphi _{\varepsilon , R}\). This is achieved in Theorem 3, whose proof is based on the following abstract result, stated as Theorem 2.5 in  [49].

Theorem 2

Let \((\Omega , {\mathbb {P}})\) be a complete probability space on which \(\xi \) is the canonical process. Further, assume that H is continuously embedded in \(\Omega \) and denote this embedding by i. Let \(F \in L^2 (\Omega )\). Then \(F \in {\mathbb {D}}^{1, 2}\) if and only if the following conditions are satisfied.

  1.

    For all \(h \in H\), there exists a version \({\tilde{F}}_h\) of F such that, for every \(\omega \in \Omega \), the mapping \({\mathbb {R}} \ni t \mapsto {\tilde{F}}_h [\omega + t i (h)]\) is absolutely continuous.

  2.

    There exists \(\varsigma \in L^2 (\Omega ; H)\) such that, for all \(h \in H,\)

    $$\begin{aligned} \lim _{t \rightarrow 0} \frac{1}{t} \{ F [\omega + t i (h)] - F (\omega ) \} = \langle \varsigma (\omega ), h \rangle , \qquad {\mathbb {P}} \text {-a.s.} \end{aligned}$$

As can be seen from the proof of Theorem 3, we apply Theorem 2, for each \(x \in {\mathbb {R}}^4,\) \(\varepsilon \) and R, to F with \(H :=L_{\ell }^2 ({\mathbb {R}}^4)\), where

$$\begin{aligned} F (\omega ) :=\varphi _{\varepsilon , R} (x, \omega ), \quad \omega \in \Omega . \end{aligned}$$
(38)

Since most of the results of this section either hold uniformly in \(\varepsilon , R\) and x or are stated for fixed \(\varepsilon , R\) and x, unless otherwise stated we will not write the explicit dependence of the functions defined here on \(\varepsilon , R\) and x.

To study the properties of F required to apply Theorem 2, we write (37) in functional form as, for \(\omega \in \Omega \),

$$\begin{aligned} \begin{array}{l} {\mathcal {T}} (\varphi _{\varepsilon , R} (\cdot , \omega )) = ({\mathcal {G}}_{\varepsilon } *\xi ) (\cdot ). \end{array} \end{aligned}$$

Here \({\mathcal {T}}\) is defined as

$$\begin{aligned} \begin{array}{l} {\mathcal {T}}: {\mathcal {B}} \ni w \rightarrow w + \alpha {\mathcal {G}} *K_R (\exp (\alpha w - C_{\varepsilon })) \in {\mathcal {B}} :=C_{\ell }^0 ({\mathbb {R}}^4). \end{array} \end{aligned}$$
(39)

Note that, since \(K_R\) is bounded and \({\mathcal {G}}\) is integrable, the convolution in (39) is finite and the map \({\mathcal {T}}\) is well-defined. Moreover, by definition of the map \({\mathcal {T}}\), (38) can be understood as, for each \(\omega \in \Omega \),

$$\begin{aligned} \begin{array}{l} F (\omega ) = ({\mathcal {T}}^{- 1} ({\mathcal {G}}_{\varepsilon } *\xi )) (x). \end{array} \end{aligned}$$
(40)

Thus, because of (40), in order to study F we first show in Lemma 2 that \({\mathcal {T}}^{- 1}\) exists, i.e., we prove the bijectivity of the map \({\mathcal {T}}\). Before doing so, we prove an auxiliary result.

Lemma 1

Let \({\mathfrak {v}} \in L_{\ell }^2 ({\mathbb {R}}^4) \) and let \({\mathfrak {u}} \in H_{\ell }^2 ({\mathbb {R}}^4)\) be the unique weak solution to \((- \Delta + m^2) {\mathfrak {u}}={\mathfrak {v}}\). Then

$$\begin{aligned} \langle \nabla ({\mathcal {G}} *{\mathfrak {v}}), ({\mathcal {G}} *{\mathfrak {v}}) \nabla r_{\ell }^2 \rangle + m^2 \Vert {\mathcal {G}} *{\mathfrak {v}} \Vert _{L^2_{\ell } ({\mathbb {R}}^4)}^2 \leqslant \langle {\mathcal {G}} *{\mathfrak {v}}, {\mathfrak {v}} \rangle _{\ell }, \end{aligned}$$
(41)

where \(\langle \cdot , \cdot \rangle \) and \(\langle \cdot , \cdot \rangle _{\ell }\), respectively, denote the standard inner product in \(L^2 ({\mathbb {R}}^4)\) and \(L^2_{\ell } ({\mathbb {R}}^4)\).

Proof

Let \({\mathfrak {u}}={\mathcal {G}} *{\mathfrak {v}} \in H_{\ell }^2 ({\mathbb {R}}^4)\) be the unique weak solution to \((- \Delta + m^2) {\mathfrak {u}}={\mathfrak {v}}\) for the given \({\mathfrak {v}} \in L_{\ell }^2 ({\mathbb {R}}^4)\).

Multiplying both sides of \((- \Delta + m^2) {\mathfrak {u}}={\mathfrak {v}}\) by \(r_{\ell }^2 {\mathfrak {u}}\) and integrating gives

$$\begin{aligned} \langle (- \Delta + m^2) {\mathfrak {u}}, {\mathfrak {u}} \rangle _{\ell } = \langle {\mathfrak {u}}, {\mathfrak {v}} \rangle _{\ell }. \end{aligned}$$

Integration by parts yields

$$\begin{aligned} \langle \nabla {\mathfrak {u}}, {\mathfrak {u}} \nabla r_{\ell }^2 \rangle + m^2 \Vert {\mathfrak {u}} \Vert _{L^2_{\ell } ({\mathbb {R}}^4)}^2 \leqslant \langle {\mathfrak {u}}, {\mathfrak {v}} \rangle _{\ell }. \end{aligned}$$
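For completeness, the inequality above arises from discarding a non-negative term: integrating by parts (the boundary terms vanish thanks to the decay of the weight) and expanding \(\nabla (r_{\ell }^2 {\mathfrak {u}})\) gives

$$\begin{aligned} \langle - \Delta {\mathfrak {u}}, {\mathfrak {u}} \rangle _{\ell } = \int _{{\mathbb {R}}^4} \nabla {\mathfrak {u}} \cdot \nabla (r_{\ell }^2 {\mathfrak {u}}) \, \textrm{d}x = \int _{{\mathbb {R}}^4} r_{\ell }^2 | \nabla {\mathfrak {u}} |^2 \, \textrm{d}x + \langle \nabla {\mathfrak {u}}, {\mathfrak {u}} \nabla r_{\ell }^2 \rangle ~\geqslant ~ \langle \nabla {\mathfrak {u}}, {\mathfrak {u}} \nabla r_{\ell }^2 \rangle , \end{aligned}$$

so the equality \(\langle (- \Delta + m^2) {\mathfrak {u}}, {\mathfrak {u}} \rangle _{\ell } = \langle {\mathfrak {u}}, {\mathfrak {v}} \rangle _{\ell }\) yields the stated inequality after dropping \(\int r_{\ell }^2 | \nabla {\mathfrak {u}} |^2 \, \textrm{d}x \geqslant 0\).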

By substituting \({\mathfrak {u}}={\mathcal {G}} *{\mathfrak {v}}\), above gives the conclusion. \(\square \)

To lighten the notation we set \(G (w) :=\alpha K_R (\exp (\alpha w - C_{\varepsilon }))\), \(w \in {\mathcal {B}}\). Then G is non-negative, bounded, smooth and non-decreasing.
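By the chain rule, the derivative of G, used repeatedly below (in particular in (55) and (58)), is

$$\begin{aligned} G' (w) = \alpha ^2 K_R' (\exp (\alpha w - C_{\varepsilon })) \exp (\alpha w - C_{\varepsilon }) ~\geqslant ~ 0, \end{aligned}$$

which is bounded: \(K_R'\) vanishes on \([R, \infty )\), so the factor \(\exp (\alpha w - C_{\varepsilon })\) is at most R wherever \(K_R' \neq 0\), giving \(\Vert G' \Vert _{\infty } \leqslant \alpha ^2 R \, \Vert K_R' \Vert _{\infty }\).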

Lemma 2

The map \({\mathcal {T}}\) is bijective from \({\mathcal {B}}\) onto \({\mathcal {B}}\).

Proof

Let us first show that \({\mathcal {T}}\) is injective. More precisely, we show that, for small enough \(\lambda > 0\) and \(\ell \leqslant \ell '\), if \(u, v \in {\mathcal {B}} \subset L_{\lambda , \ell '}^2 ({\mathbb {R}}^4)\) are such that \({\mathcal {T}}u ={\mathcal {T}}v\), then \(u = v\).

Since \({\mathcal {T}}u ={\mathcal {T}}v,\) we have

$$\begin{aligned} u - v + [{\mathcal {G}} *G (u) -{\mathcal {G}} *G (v)] = 0. \end{aligned}$$
(42)

Multiplying this by \(r_{\lambda , \ell '}^2 (G (u) - G (v))\) and integrating over \({\mathbb {R}}^4\) gives

$$\begin{aligned} \langle u - v, G (u) - G (v) \rangle _{\lambda , \ell '} + \langle {\mathcal {G}} *(G (u) - G (v)), G (u) - G (v) \rangle _{\lambda , \ell '} = 0, \end{aligned}$$
(43)

where \(\langle a, b \rangle _{\lambda , \ell '} :=\int a (x) b (x) (1 + \lambda | x |^2)^{- \ell '} d x.\)

Since G is non-decreasing, \(\langle u - v, G (u) - G (v) \rangle _{\lambda , \ell '} \geqslant 0\), and hence (43) gives

$$\begin{aligned} \langle {\mathcal {G}} *(G (u) - G (v)), G (u) - G (v) \rangle _{\lambda , \ell '} \leqslant 0. \end{aligned}$$
(44)

But substituting \(G (u) - G (v)\) in place of \({\mathfrak {v}}\) in (41) (in its version with the weight \(r_{\lambda , \ell '}\)) we obtain

$$\begin{aligned}&\langle \nabla ({\mathcal {G}} *(G (u) - G (v))), ({\mathcal {G}} *(G (u) - G (v))) \nabla r_{\lambda , \ell '}^2 \rangle + m^2 \Vert {\mathcal {G}} *(G (u) - G (v)) \Vert _{L^2_{\lambda , \ell '} }^2 \nonumber \\&\quad \le \langle {\mathcal {G}} *(G (u) - G (v)), (G (u) - G (v)) \rangle _{\lambda , \ell '} . \end{aligned}$$

Using (44) in the above yields

$$\begin{aligned}{} & {} \langle \nabla ({\mathcal {G}} *(G (u) - G (v))), ({\mathcal {G}} *(G (u) - G (v))) \nabla r_{\lambda , \ell '}^2 \rangle + m^2 \Vert {\mathcal {G}} *(G (u) - G (v)) \Vert _{L^2_{\lambda , \ell '}}^2 \nonumber \\{} & {} \quad \leqslant 0. \end{aligned}$$
(45)

On the other hand, integration by parts gives

$$\begin{aligned}&\langle \nabla ({\mathcal {G}} *(G (u) - G (v))), ({\mathcal {G}} *(G (u) - G (v))) \nabla r_{\lambda , \ell '}^2 \rangle \\&= - \langle \nabla ({\mathcal {G}} *(G (u) - G (v))), ({\mathcal {G}} *(G (u) - G (v))) \nabla r_{\lambda , \ell '}^2 \rangle \\&\quad - \int ({\mathcal {G}} *(G (u) - G (v)))^2 \Delta r_{\lambda , \ell '}^2 \textrm{d}x. \end{aligned}$$

This gives

$$\begin{aligned}&2 \langle \nabla ({\mathcal {G}} *(G (u) - G (v))), ({\mathcal {G}} *(G (u) - G (v))) \nabla r_{\lambda , \ell '}^2 \rangle \nonumber \\&=- \int ({\mathcal {G}} *(G (u) - G (v)))^2 \Delta r_{\lambda , \ell '}^2 \textrm{d}x, \end{aligned}$$
(46)

where \(r_{\lambda , \ell '}^2 (x) = (1 + \lambda | x |^2)^{- \ell '}\), so that \(\nabla r_{\lambda , \ell '}^2 (x) = - 2 \lambda \ell ' (1 + \lambda | x |^2)^{- (\ell ' + 1)} x\) and, since we are in \({\mathbb {R}}^4\),

$$\begin{aligned} \Delta r_{\lambda , \ell '}^2 (x) = - 8 \lambda \ell ' (1 + \lambda | x |^2)^{- (\ell ' + 1)} + 4 \lambda ^2 \ell ' (\ell ' + 1) | x |^2 (1 + \lambda | x |^2)^{- (\ell ' + 2)}. \end{aligned}$$
(47)

Hence, substituting (47) into (46) gives

$$\begin{aligned}&2 \langle \nabla ({\mathcal {G}} *(G (u) - G (v))), ({\mathcal {G}} *(G (u) - G (v))) \nabla r_{\lambda , \ell '}^2 \rangle \nonumber \\&\qquad = - 4 \lambda ^2 \ell ' (\ell ' + 1) \int ({\mathcal {G}} *(G (u) - G (v)))^2 | x |^2 (1 + \lambda | x |^2)^{- (\ell ' + 2)} \textrm{d}x \nonumber \\&\qquad \qquad + 8 \lambda \ell ' \int ({\mathcal {G}} *(G (u) - G (v)))^2 (1 + \lambda | x |^2)^{- (\ell ' + 1)} \textrm{d}x. \end{aligned}$$
(48)

Consequently, using (48) in (45) yields

$$\begin{aligned} ( m^2 - 2 \lambda \ell '^2 ) \Vert {\mathcal {G}} *(G (u) - G (v)) \Vert _{L^2_{\lambda , \ell '}}^2 \leqslant 0. \end{aligned}$$
(49)
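For the reader's convenience, here is the elementary estimate behind the passage from (45) and (48) to (49). Using \(| x |^2 (1 + \lambda | x |^2)^{- (\ell ' + 2)} \leqslant \lambda ^{- 1} (1 + \lambda | x |^2)^{- (\ell ' + 1)}\) and \((1 + \lambda | x |^2)^{- (\ell ' + 1)} \leqslant (1 + \lambda | x |^2)^{- \ell '}\), the right-hand side of (48) is bounded from below as

$$\begin{aligned} 2 \langle \nabla ({\mathcal {G}} *(G (u) - G (v))), ({\mathcal {G}} *(G (u) - G (v))) \nabla r_{\lambda , \ell '}^2 \rangle ~\geqslant ~ - 4 \lambda \ell '^2 \, \Vert {\mathcal {G}} *(G (u) - G (v)) \Vert _{L^2_{\lambda , \ell '}}^2, \end{aligned}$$

which, combined with (45), yields (49).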

For \(\lambda < m^2 / (2 \ell '^2)\) the prefactor in (49) is positive, so \({\mathcal {G}} *(G (u) - G (v)) = 0\) in \(L^2_{\lambda , \ell '} ({\mathbb {R}}^4)\); then (42) gives \(u = v\) in \(L^2_{\lambda , \ell '} ({\mathbb {R}}^4)\), and hence the map \({\mathcal {T}}\) is injective.

To prove surjectivity, let \(v \in {\mathcal {B}}\) and choose \(\{ v_n \}_n \subset C_c^2 ({\mathbb {R}}^4)\) such that

$$\begin{aligned} \Vert v_n - v \Vert _{L^2_{\ell '}} \rightarrow 0 \quad {\text {as}} \quad n \rightarrow \infty . \end{aligned}$$

Let \(h_n :=(- \Delta + m^2) v_n\). Then it follows, from the first part of Proposition 1, that the elliptic PDE

$$\begin{aligned} (- \Delta + m^2) u_n + G (u_n) = h_n \end{aligned}$$

admits a unique solution in \(C_{\ell }^2 ({\mathbb {R}}^4)\). Then we get

$$\begin{aligned} u_n +{\mathcal {G}} *G (u_n) ={\mathcal {G}} *h_n = v_n \quad \Rightarrow {\mathcal {T}} (u_n) = v_n. \end{aligned}$$
(50)

Next, we prove that \(\{ u_n \}_n\) is a Cauchy sequence in \(L_{\lambda , \ell '}^2 ({\mathbb {R}}^4)\) for sufficiently small \(\lambda > 0\). Multiplying

$$\begin{aligned} u_n - u_m +{\mathcal {G}} *G (u_n) -{\mathcal {G}} *G (u_m) = v_n - v_m \end{aligned}$$
(51)

by \(r_{\lambda , \ell '}^2 (G (u_n) - G (u_m))\) and integrating over \({\mathbb {R}}^4\), we get

$$\begin{aligned}&\langle u_n - u_m, G (u_n) - G (u_m) \rangle _{\lambda , \ell '} + \langle {\mathcal {G}} *G (u_n) -{\mathcal {G}} *G (u_m), G (u_n) - G (u_m) \rangle _{\lambda , \ell '}\\&\qquad =\langle v_n - v_m, G (u_n) - G (u_m) \rangle _{\lambda , \ell '} . \end{aligned}$$

Since G is non-decreasing, \(\langle u_n - u_m, G (u_n) - G (u_m) \rangle _{\lambda , \ell '} \geqslant 0\). Thus, the above implies

$$\begin{aligned} \langle {\mathcal {G}} *G (u_n) -{\mathcal {G}} *G (u_m), G (u_n) - G (u_m) \rangle _{\lambda , \ell '} \leqslant \langle v_n - v_m, G (u_n) - G (u_m) \rangle _{\lambda , \ell '}. \end{aligned}$$

Thus, taking \({\mathfrak {v}}= G (u_n) - G (u_m)\) in (41) (in its version with the weight \(r_{\lambda , \ell '}\)) yields

$$\begin{aligned}&\langle \nabla ({\mathcal {G}} *(G (u_n) - G (u_m))), ({\mathcal {G}} *(G (u_n) - G (u_m))) \nabla r_{\lambda , \ell '}^2 \rangle \\&+ m^2 \Vert {\mathcal {G}} *(G (u_n) - G (u_m)) \Vert _{L^2_{\lambda , \ell '}}^2 \leqslant \langle {\mathcal {G}} *(G (u_n) - G (u_m)), G (u_n) - G (u_m) \rangle _{\lambda , \ell '} . \end{aligned}$$

So the last two estimates together with the Cauchy–Schwarz inequality give

$$\begin{aligned}&\langle \nabla ({\mathcal {G}} *(G (u_n) - G (u_m))), ({\mathcal {G}} *(G (u_n) - G (u_m))) \nabla r_{\lambda , \ell '}^2 \rangle \\&\qquad + m^2 \Vert {\mathcal {G}} *(G (u_n) - G (u_m)) \Vert _{L^2_{\lambda , \ell '}}^2 \leqslant \Vert v_n - v_m \Vert _{L^2_{\ell '} } \Vert G (u_n) - G (u_m) \Vert _{L^2_{\lambda , \ell '}} . \end{aligned}$$

Consequently, the computation as in (49) gives

$$\begin{aligned} ( m^2 - 2 \lambda \ell '^2 ) \Vert {\mathcal {G}} *(G (u_n) - G (u_m)) \Vert _{L^2_{\lambda , \ell '}}^2 \leqslant \Vert v_n - v_m \Vert _{L^2_{\ell '}} \Vert G (u_n) - G (u_m) \Vert _{L^2_{\lambda , \ell '}}. \end{aligned}$$
(52)

Since, by (51), \({\mathcal {G}} *(G (u_n) - G (u_m)) = v_n - v_m - (u_n - u_m)\), the triangle inequality together with (52) yields

$$\begin{aligned} \Vert u_n - u_m \Vert _{L^2_{\lambda , \ell '}} \leqslant \Vert v_n - v_m \Vert _{L^2_{\lambda , \ell '}} + ( m^2 - 2 \lambda \ell '^2 )^{- 1 / 2} \left( \Vert v_n - v_m \Vert _{L^2_{\ell '}} \Vert G (u_n) - G (u_m) \Vert _{L^2_{\lambda , \ell '}} \right) ^{1 / 2} . \end{aligned}$$

Since \(\Vert v_n - v \Vert _{L^2_{\ell '} ({\mathbb {R}}^4)} \rightarrow 0\) as \(n \rightarrow \infty \), so that \(\Vert v_n - v_m \Vert _{L^2_{\ell '}} \rightarrow 0\) as \(n, m \rightarrow \infty \), and G is bounded, for sufficiently small \(\lambda > 0\) we get that \(\{ u_n \}_n\) is a Cauchy sequence in \(L_{\lambda , \ell '}^2 ({\mathbb {R}}^4)\). Since \(L_{\lambda , \ell '}^2 ({\mathbb {R}}^4)\) is complete, the limit \(u :=\lim _{n \rightarrow \infty } u_n\) exists in \(L_{\lambda , \ell '}^2 ({\mathbb {R}}^4)\). Since G is bounded and continuous, \(G (u) = \lim _{n \rightarrow \infty } G (u_n)\) in \(L_{\lambda , \ell '}^2 ({\mathbb {R}}^4)\). Thus, taking the limit \(n \rightarrow \infty \) in (50), we obtain the existence of \(u \in L_{\lambda , \ell '}^2 ({\mathbb {R}}^4)\) such that

$$\begin{aligned} u +{\mathcal {G}} *G (u) = v \quad \Rightarrow {\mathcal {T}} (u) = v. \end{aligned}$$

It remains to show that \(u \in {\mathcal {B}}\). This is true because, from \(u = v -{\mathcal {G}} *G (u)\) and \(| G | \leqslant | \alpha | R\),

$$\begin{aligned} \Vert u \Vert _{C_{\ell }^0} \leqslant | \alpha | R \sup _{x \in {\mathbb {R}}^4} r_{\ell } (x) \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) \, \textrm{d}y + \Vert v \Vert _{C_{\ell }^0}, \end{aligned}$$

which is finite because \({\mathcal {G}}\) is integrable and the weight \(r_{\ell }\) is bounded. Hence \(u \in {\mathcal {B}}\), and the proof of the bijectivity of \({\mathcal {T}}\) is complete. \(\square \)

Hence \({\mathcal {T}}^{- 1}\) exists. Let \({{\mathcal {T}}^{- 1}} (V) = v\) for some \(v, V \in {\mathcal {B}}\). Then \(V ={\mathcal {T}} (v)\) and, from (39), we have

$$\begin{aligned} {{\mathcal {T}}^{- 1}} (V) = V - \alpha {\mathcal {G}} *K_R \left( \exp \left( {\alpha {\mathcal {T}}^{- 1}} (V) - C_{\varepsilon } \right) \right) . \end{aligned}$$

From here it is clear that, for \(V \in {\mathcal {B}}\),

$$\begin{aligned} \left| {{\mathcal {T}}^{- 1}} (V) (x) \right|&\leqslant | V (x) | + | \alpha | \left| \left( {\mathcal {G}} *K_R \left( \exp \left( \alpha {\mathcal {T}}^{- 1} (V) - C_{\varepsilon } \right) \right) \right) (x) \right| \\&\leqslant | V (x) | + | \alpha | R \int _{{\mathbb {R}}^4} | {\mathcal {G}} (x - y) | \, \textrm{d}y. \end{aligned}$$

Consequently, since \((a + b)^2 \leqslant 2 a^2 + 2 b^2\), we get

$$\begin{aligned} \left\| {{\mathcal {T}}^{- 1}} (V) (x) \right\| _{L^2 (\Omega )}^2&\le 2 \int _{\Omega } | V (x, \omega ) |^2 \, {\mathbb {P}} (\textrm{d}\omega ) + 2 (\alpha R)^2 \left( \int _{{\mathbb {R}}^4} | {\mathcal {G}} (x - y) | \, \textrm{d}y \right) ^2 =:C_R . \end{aligned}$$
(53)

In our next result we show that \({\mathcal {T}}^{- 1}\) is continuous as well.

Lemma 3

The map \({\mathcal {T}}^{- 1}\) is continuous on \({\mathcal {B}}.\)

Proof

Let \(\{ w_n \}_n \subset {\mathcal {B}}\) be a sequence converging to some \(w \in {\mathcal {B}}\). Let us set \({\mathcal {T}}^{- 1} w_n =:{\bar{w}}_n\) and \({\mathcal {T}}^{- 1} w =:{\bar{w}}.\) We will show that \({\bar{w}}_n \rightarrow {\bar{w}} \) as \(n \rightarrow \infty \) in \({\mathcal {B}}.\) Note that, we have

$$\begin{aligned} {\bar{w}}_n (x) + \alpha \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) K_R (\exp (\alpha {\bar{w}}_n (y) - C_{\varepsilon })) \, \textrm{d}y = w_n (x). \end{aligned}$$
(54)

We first claim that the sequence \(\{ {\bar{w}}_n \}_n\) is relatively compact in \({\mathcal {B}}\). To prove this, we first show that \(\{ {\bar{w}}_n \}_n\) is uniformly bounded. Since \(\{ w_n \}_n\) is convergent in \({\mathcal {B}}\) and \(\alpha \) is a constant, due to (54) it is sufficient to show the uniform boundedness of \(\left\{ \int _{{\mathbb {R}}^4} {\mathcal {G}} (\cdot - y) K_R (\exp (\alpha {\bar{w}}_n (y) - C_{\varepsilon })) \, \textrm{d}y \right\} _n \subset {\mathcal {B}}.\) For this observe that, by (47) and (48) of  [2], we have

$$\begin{aligned} \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) K_R (\exp (\alpha {\bar{w}}_n (y) - C_{\varepsilon })) \, \textrm{d}y&~\le ~R {\int _{| z | < 1}} \left\{ \frac{- 2}{(4 \pi )^2} \log _+ (| z |) + C_1 \right\} \, \textrm{d}z\\&\qquad + R \int _{| z | \geqslant 1} C_2 \exp (- C_3 | z |) \, \textrm{d}z, \end{aligned}$$

where the right-hand side is bounded uniformly in x and n. Moving on, let us set

$$\begin{aligned} \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) K_R (\exp (\alpha {\bar{w}}_n (y) - C_{\varepsilon })) d y =:{\bar{g}}_n \text { in } {\mathcal {B}}. \end{aligned}$$

By its structure, \({\bar{g}}_n\) is the unique solution of the equation

$$\begin{aligned} (- \Delta + m^2) {\bar{g}}_n = K_R (\exp (\alpha {\bar{w}}_n - C_{\varepsilon })). \end{aligned}$$

Thus,

$$\begin{aligned} \Vert {\bar{g}}_n \Vert _{C_{\ell }^2} ~\lesssim ~ \Vert K_R (\exp (\alpha {\bar{w}}_n - C_{\varepsilon })) \Vert _{C_{\ell }^0 } ~\le ~ R. \end{aligned}$$

Due to the embedding \(B^2_{\infty , \infty , \ell } ({\mathbb {R}}^4) \hookrightarrow B^{1 / 2}_{\infty , \infty , \ell } ({\mathbb {R}}^4)\), see (3.10) in [41], and the equivalence of \(B^{1 / 2}_{\infty , \infty , \ell } ({\mathbb {R}}^4)\) with the space of weighted \(\frac{1}{2}\)-Hölder continuous functions, this further implies the equicontinuity of \(\{ {\bar{g}}_n \}_n\). By (54), \({\bar{w}}_n = w_n - \alpha {\bar{g}}_n\), where \(\{ w_n \}_n\) converges in \({\mathcal {B}}\); thus, since the uniform topology of \({\mathcal {B}}\) is finer than the topology of compact convergence, the Ascoli–Arzelà theorem (e.g. see Theorem 47.1 on page 290 in  [40]) implies the relative compactness of \(\{ {\bar{w}}_n \}_n \subset {\mathcal {B}}\). Let \(\{ {\bar{w}}_{n_k} \}_k\) be a convergent subsequence of \(\{ {\bar{w}}_n \}_n\) and set \({\mathcal {B}} \ni {\hat{w}} :=\lim _{k \rightarrow \infty } {\bar{w}}_{n_k}\). Since \(G (\cdot ) = \alpha K_R (\exp (\alpha \cdot - C_{\varepsilon }))\) is smooth and bounded, we have

$$\begin{aligned} \alpha \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) K_R (\exp (\alpha {\bar{w}}_{n_k} (y) - C_{\varepsilon }))\, \textrm{d}y ~\rightarrow ~ \alpha \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) K_R (\exp (\alpha {\hat{w}} (y) - C_{\varepsilon })) \, \textrm{d}y, \end{aligned}$$

as \(k \rightarrow \infty \); thus, passing to the limit \(k \rightarrow \infty \) in (54) yields

$$\begin{aligned} {\hat{w}} (x) + \alpha \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) K_R (\exp (\alpha {\hat{w}} (y) - C_{\varepsilon })) \, \textrm{d}y = w (x) \quad \Rightarrow ~ {\mathcal {T}} ({\hat{w}}) = w.\end{aligned}$$

Since \({\mathcal {T}}\) is bijective and \({\mathcal {T}}^{- 1} w = {\bar{w}}\), we have \({\hat{w}} = {\bar{w}}.\) Consequently, every convergent subsequence of \(\{ {\bar{w}}_n \}_n\) converges to \({\bar{w}}\), which, together with the relative compactness, implies \({\bar{w}}_n \rightarrow {\bar{w}}\) in \({\mathcal {B}}\). Hence the proof of the continuity of \({\mathcal {T}}^{- 1}\) on \({\mathcal {B}}\) is complete. \(\square \)

Recall that we aim to prove that, for fixed \(\varepsilon \) and R, the solution \(\varphi _{\varepsilon , R}\) to (36), which has representation (38), is Malliavin differentiable. Due to (40), in order to prove the differentiability of \(\varphi _{\varepsilon , R}\), i.e. of F, we show as the next step that the map \({\mathcal {T}}^{- 1}\), whose existence and continuity are proved in Lemmata 2 and 3 respectively, is differentiable.

Lemma 4

The map \({\mathcal {T}}^{- 1}\) is differentiable and there exists a constant \(M > 0\) (depending on m) such that

$$\begin{aligned} \Vert ({\mathcal {T}}_v')^{- 1} \Vert _{{\mathfrak {L}} ({\mathcal {B}}, {\mathcal {B}})} \leqslant M, \end{aligned}$$

for every \(v \in {\mathcal {B}}\), where \({\mathfrak {L}} ({\mathcal {B}}, {\mathcal {B}})\) is the set of all bounded linear operators from \({\mathcal {B}}\) to \({\mathcal {B}}\).

Proof

It is straightforward to see that the Gateaux derivative of \({\mathcal {T}}\) at \(v \in {\mathcal {B}}\) in the direction of an arbitrary \(w \in {\mathcal {B}}\), is

$$\begin{aligned} \lim _{t \rightarrow 0} \frac{{\mathcal {T}} (v + t w) -{\mathcal {T}} (v)}{t} = w + \int _{{\mathbb {R}}^4} {\mathcal {G}} (\cdot - y) G' (v (y)) w (y)\, \textrm{d}y, \end{aligned}$$

where recall that \(G (v (\cdot )) = \alpha K_R (\exp (\alpha v (\cdot ) - C_{\varepsilon })).\) Thus, \({\mathcal {T}}\) is Gateaux differentiable. Let us denote by \({\mathcal {T}}_v' (w)\) the derivative of \({\mathcal {T}}\) at \(v \in {\mathcal {B}}\) in the direction of \(w \in {\mathcal {B}}\) defined above, i.e.,

$$\begin{aligned} {\mathcal {T}}_v' (w) :=w + \int _{{\mathbb {R}}^4} {\mathcal {G}} (\cdot - y) G' (v (y)) w (y) \, \textrm{d}y. \end{aligned}$$
(55)

Note that since \(G'\) is bounded and \(w \in {\mathcal {B}}\), \({\mathcal {T}}_v' (w)\) is a well-defined element of \({\mathcal {B}}\). Next, let us fix \(v \in {\mathcal {B}}\) in the remaining part of the proof.

We claim that \({\mathcal {T}}_v'\) is injective. Indeed, let \(w \in {\mathcal {B}}\) be such that \({\mathcal {T}}_v' (w) = 0\) in \({\mathcal {B}}\); then, by (55), w solves the equation

$$\begin{aligned} \begin{array}{l} (- \Delta + m^2) w + G' (v) w = 0. \end{array} \end{aligned}$$
(56)

Clearly \(w = 0\) is a solution to (56). From the computations in the proofs of Proposition 1 and Lemma 5, we know that (56) has a unique solution in \(C_{\ell }^2 ({\mathbb {R}}^4)\); in particular, \(w = 0\) is the only solution to (56). Thus, \({\mathcal {T}}_v'\) is non-degenerate and \(({\mathcal {T}}_v')^{- 1} \in {\mathfrak {L}} ({\mathcal {B}}, {\mathcal {B}})\) is well-defined.

We aim to show that \(({\mathcal {T}}_v')^{- 1} \in {\mathfrak {L}} ({\mathcal {B}}, {\mathcal {B}})\) is bounded uniformly in \(v \in {\mathcal {B}}\). To this end, take any \(U \in {\mathcal {B}}\). Note that \(({\mathcal {T}}_v')^{- 1} (U)\) satisfies

$$\begin{aligned} ({\mathcal {T}}_v')^{- 1} (U) = U - \int _{{\mathbb {R}}^4} {\mathcal {G}} (\cdot - y) \, G' (v (y)) \, (({\mathcal {T}}_v')^{- 1} (U)) (y) \, \textrm{d}y. \end{aligned}$$
(57)

Let us consider \(C_{c, \ell }^2 ({\mathbb {R}}^4)\), the space of functions in \(C_{\ell }^2 ({\mathbb {R}}^4)\) having compact support, as a subset of \({\mathcal {B}}\). Let \(U \in C_{c, \ell }^2 ({\mathbb {R}}^4)\) and let \(V = (- \Delta + m^2) U \in {\mathcal {B}}\). Then from (57) we get that \(({\mathcal {T}}_v')^{- 1} (U)\) satisfies the equation

$$\begin{aligned} (- \Delta + m^2 + G' (v)) ({\mathcal {T}}_v')^{- 1} (U) = V. \end{aligned}$$

Here \(G' (v) ({\mathcal {T}}_v')^{- 1} (U)\) is simply the product of the two functions \(G' (v)\) and \(({\mathcal {T}}_v')^{- 1} (U)\). Thus, since \(G' \geqslant 0\), Theorem 5.1 on page 145 in  [22] implies that

$$\begin{aligned} ({\mathcal {T}}_v')^{- 1} (U) (x) = \tilde{{\mathbb {E}}}_x \left[ \int _0^{\infty } e^{- m^2 t} V (B_t) \exp \left( - \int _0^t G' (v (B_s)) \, \textrm{d}s \right) \, \textrm{d}t \right] , \end{aligned}$$

by the Feynman–Kac formula, where B is an \({\mathbb {R}}^4\)-valued Brownian motion starting at x, defined on a complete probability space \(({\tilde{\Omega }}, \tilde{{\mathfrak {F}}}, \tilde{{\mathbb {P}}})\), and \(\tilde{{\mathbb {E}}}_x\) denotes the expectation w.r.t. \(\tilde{{\mathbb {P}}}\). Since \(\exp \left( - \int _0^t G' (v (B_s)) \, \textrm{d}s \right) \leqslant 1\), the right-hand side above can be estimated as

$$\begin{aligned} \left| ({\mathcal {T}}_v')^{- 1} (U) (x) \right|&~\le ~ \Vert V \Vert _{{\mathcal {B}}} \int _0^{\infty } e^{- m^2 t}\, \tilde{{\mathbb {E}}}_x \left[ (1 + | B_t |^2)^{\ell / 2} \right] \, \textrm{d}t \\&~\lesssim ~ \Vert U \Vert _{C_{\ell }^2 } \int _0^{\infty } e^{- m^2 t} (1 + | x |^2 + t)^{\ell / 2} \, \textrm{d}t. \end{aligned}$$

This gives

$$\begin{aligned} {\Vert ({\mathcal {T}}_v')^{- 1} (U) \Vert _{{\mathcal {B}}}}&~\lesssim ~ \Vert U \Vert _{C_{\ell }^2} \sup _{x \in {\mathbb {R}}^4} \int _0^{\infty } e^{- m^2 t} (1 + | x |^2)^{- \ell / 2} (1 + | x |^2 + t)^{\ell / 2} \, \textrm{d}t\\&~\le ~ \Vert U \Vert _{C_{\ell }^2} \int _0^{\infty } e^{- m^2 t} (1 + t)^{\ell / 2} \, \textrm{d}t, \end{aligned}$$

using \((1 + | x |^2 + t)^{\ell / 2} \leqslant (1 + | x |^2)^{\ell / 2} (1 + t)^{\ell / 2}\); this is finite. Consequently, extending by density to all of \({\mathcal {B}}\), we conclude that there exists a constant \(M > 0\) (depending on m) such that

$$\begin{aligned} \Vert ({\mathcal {T}}_v')^{- 1} \Vert _{{\mathfrak {L}} ({\mathcal {B}}, {\mathcal {B}})} \leqslant M \text { for every } v \in {\mathcal {B}}. \end{aligned}$$

\(\square \)

Now we come to an important result of our paper that justifies the Malliavin differentiability of \(\varphi _{\varepsilon , R}\), for fixed \(\varepsilon \) and R, which solves (36). In other words, the next result gives the differentiability of F defined in (38).

Theorem 3

Let us fix \(\varepsilon > 0\), \(R > 1\) and \(x \in {\mathbb {R}}^4\). The solution \(\varphi :=\varphi _{\varepsilon , R}\) to (36) is such that \(\varphi (y) \in {\mathbb {D}}^{1, 2}\), for every \(y \in {\mathbb {R}}^4\). Moreover, the process \(\{ D_z \varphi (x), z \in {\mathbb {R}}^4 \}\) satisfies

$$\begin{aligned} D_z \varphi (x) + \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - y) G' (\varphi (y)) D_z \varphi (y) \, \textrm{d}y ~=~ ({\mathfrak {a}}_{\varepsilon } *{\mathcal {G}}) (x - z), \end{aligned}$$

which is equivalent to

$$\begin{aligned} ({\mathcal {L}}D_z \varphi ) (x) + G' (\varphi (x)) D_z \varphi (x) ~=~ ({\mathfrak {a}}_{\varepsilon } *\delta _z) (x) ~=~{\mathfrak {a}}_{\varepsilon } (x - z). \end{aligned}$$

Proof

Since \(\varepsilon > 0\) and \(R > 1\) are fixed, we suppress their explicit dependence. The idea of the proof is to show that the conditions of Theorem 2 with \(F (\omega ) :=\varphi _{\varepsilon , R} (x, \omega )\) are satisfied, which implies the conclusions of the current result. By (53) we have that \(\varphi _{\varepsilon , R} (x) \in L^2 (\Omega )\). Indeed,

$$\begin{aligned} \Vert \varphi _{\varepsilon , R} (x) \Vert _{L^2 (\Omega )}^2&= \Vert {\mathcal {T}}^{- 1} ({\mathcal {G}}_{\varepsilon } *\xi ) (x) \Vert _{L^2 (\Omega )}^2 \\&\le 2 \int _{\Omega } | ({\mathcal {G}}_{\varepsilon } *\xi ) (x) |^2 \, {\mathbb {P}} (\textrm{d}\omega ) + 2 (\alpha R)^2 \left( \int _{{\mathbb {R}}^4} | {\mathcal {G}} (x - y) | \, \textrm{d}y \right) ^2, \end{aligned}$$

which is finite, with a bound depending on R. Denote by \({\mathfrak {G}}h\) the function

$$\begin{aligned} {\mathfrak {G}}h (x) = ({\mathcal {G}}_{\varepsilon } *h) (x) :=\int _{{\mathbb {R}}^4} ({\mathfrak {a}}_{\varepsilon } *{\mathcal {G}}) (x - y) h (y)\, \textrm{d}y. \end{aligned}$$

Note that \(\varphi _{0, \varepsilon } :={\mathfrak {G}} \xi \). Observe that, for \(h \in L^2_{\ell } ({\mathbb {R}}^4)\), due to (40),

$$\begin{aligned} F [\omega + t i (h)] = \{ {\mathcal {T}}^{- 1} ({\mathfrak {G}} \xi + t{\mathfrak {G}}h) \} (x). \end{aligned}$$

Note that the argument \({\mathfrak {G}} \xi + t {\mathfrak {G}} h\) is affine in t. From Lemma 4, we know that \({\mathbb {P}}\)-a.s. \({\mathcal {T}}_{\varphi _{0, \varepsilon }}'\) exists, is non-degenerate and satisfies \(\Vert ({\mathcal {T}}_{\varphi _{0, \varepsilon }}')^{- 1} \Vert _{{\mathfrak {L}} ({\mathcal {B}}, {\mathcal {B}})} \leqslant M\), \({\mathbb {P}}\)-a.s.; hence \({\mathcal {T}}^{- 1}\) is differentiable with uniformly bounded derivative, so \(F [\omega + t i (h)]\) is a Lipschitz, and in particular absolutely continuous, function of t. Thus, \({\mathbb {P}}\)-a.s. we have

$$\begin{aligned} \lim _{t \rightarrow 0} \frac{F [\omega + t i (h)] - F (\omega )}{t} ~=~ \{ ({\mathcal {T}}^{- 1} )_{\varphi _{0, \varepsilon }}' ({\mathfrak {G}}h) \} (x). \end{aligned}$$

Finally, due to the decay of \({\mathfrak {a}}_{\varepsilon } *{\mathcal {G}}\) and the Hölder inequality, \({\mathbb {P}}\)-a.s. we have

$$\begin{aligned} \Vert ({\mathcal {T}}^{- 1} )_{\varphi _{0, \varepsilon }}' ({\mathfrak {G}}h) \Vert _{L_{\ell }^{\infty }} ~\le ~ \Vert ({\mathcal {T}}^{- 1} )_{\varphi _{0, \varepsilon }}' \Vert _{{\mathfrak {L}} ({\mathcal {B}}, {\mathcal {B}})} ~ \Vert h \Vert _{L^2_{\ell }} \left( \int _{{\mathbb {R}}^4} | ({\mathfrak {a}}_{\varepsilon } *{\mathcal {G}}) (y) r_{\ell }(y) |^2 \, \textrm{d}y \right) ^{1 / 2}, \end{aligned}$$

but this is finite. Now we define an H-valued random variable by

$$\begin{aligned} \varsigma : \Omega \ni \omega \mapsto \{ ({\mathcal {T}}^{- 1} )_{\varphi _{0, \varepsilon }}' {\mathfrak {G}} \} (x) \in H \text { such that } \langle \varsigma (\omega ), h \rangle = \{ ({\mathcal {T}}^{- 1} )_{\varphi _{0, \varepsilon }}' ({\mathfrak {G}}h) \} (x). \end{aligned}$$

This is well-defined by the Riesz representation theorem and satisfies condition 2 of Theorem 2. This completes the proof of Theorem 3. \(\square \)

Recall that \(\varphi _{\varepsilon , R}\) is the unique solution to (36). Proceeding further, we set \(\theta _{\varepsilon , R}^z :=D_z \varphi _{\varepsilon , R}\). By applying Theorem 3 to \(\varphi _{\varepsilon , R}\), we obtain that \(\varphi _{\varepsilon , R} (x) \in {\mathbb {D}}^{1, 2}\) and that, for every \(z \in {\mathbb {R}}^4,\)

$$\begin{aligned} ({\mathcal {L}} \theta _{\varepsilon , R}^z) (x) + \alpha ^2 K_R' (\exp (\alpha \varphi _{\varepsilon , R} (x) - C_{\varepsilon })) \exp (\alpha \varphi _{\varepsilon , R} (x) - C_{\varepsilon }) \theta _{\varepsilon , R}^z (x) ={\mathfrak {a}}_{\varepsilon } (x - z). \nonumber \\ \end{aligned}$$
(58)

Here we have also used the chain rule (Proposition 1.2.3 of  [44]). Since (58) is linear in \(\theta _{\varepsilon , R}^z\), the Feynman–Kac formula yields

$$\begin{aligned} \theta _{\varepsilon , R}^z (x)&= \int _0^{\infty } e^{- m^2 t} ~ {\mathbb {E}}_x \left[ {\mathfrak {a}}_{\varepsilon } (x + B_t - z) e^{- \int _0^t \alpha ^2 {\mathfrak {K}} (s) \, \textrm{d}s} \right] \, \textrm{d}t, \end{aligned}$$
(59)

where

$$\begin{aligned} {\mathfrak {K}} (s) :=K_R' (\exp (\alpha \varphi _{\varepsilon , R} (x + B_s) - C_{\varepsilon })) \exp (\alpha \varphi _{\varepsilon , R} (x + B_s) - C_{\varepsilon }). \end{aligned}$$
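Since \(K_R' \geqslant 0\), and hence \({\mathfrak {K}} \geqslant 0\), and assuming (as is customary) that the mollifier \({\mathfrak {a}}_{\varepsilon }\) is non-negative, an immediate consequence of (59), obtained by bounding the non-positive exponential by 1, is the bound

$$\begin{aligned} 0 ~\leqslant ~ \theta _{\varepsilon , R}^z (x) ~\leqslant ~ \int _0^{\infty } e^{- m^2 t} ~ {\mathbb {E}}_x \left[ {\mathfrak {a}}_{\varepsilon } (x + B_t - z) \right] \, \textrm{d}t, \end{aligned}$$

uniformly in R; up to the normalization of the Brownian motion, the right-hand side is the smoothed Green function \(({\mathfrak {a}}_{\varepsilon } *{\mathcal {G}}) (x - z)\), i.e. the bound one obtains from the first equation of Theorem 3 by dropping the non-negative potential term.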

Here \({\mathbb {E}}_x\) is the expectation operator w.r.t. the probability measure \({\mathbb {P}}^x\), under which {\(B_t, t \geqslant 0\)} is an \({\mathbb {R}}^4\)-valued Brownian motion started at the origin, so that \(x + B_t\) is a Brownian motion started at x. Observe that, for \(x_1, x_2 \in {\mathbb {R}}^4,\) expressions (34) followed by (33) yield

$$\begin{aligned}&{\text {Cov}} (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1)), F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) \nonumber \\&= {\mathbb {E}} \left[ \{ \delta (I - L)^{- 1} D (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1))) \} \{ \delta (I - L)^{- 1} D (F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) \} \right] \nonumber \\&= \int _{{\mathbb {R}}^4} {\mathbb {E}} \left[ \{ (I - L)^{- 1} D_z (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1))) \} \{ (I - L)^{- 1} D_z (F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) \} \right] \, \textrm{d}z \nonumber \\&\quad + \int _{{\mathbb {R}}^4} \int _{{\mathbb {R}}^4} {\mathbb {E}} \big [D_{z'} \{ (I - L)^{- 1} D_z (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1))) \} \times \nonumber \\&\qquad \hspace{2 cm} D_z \{ (I - L)^{- 1} D_{z'} (F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) \} \big ] \, \textrm{d}z \, \textrm{d}z' . \end{aligned}$$
(60)

Since, for every \(F \in L^2 (\Omega )\), we have (see Corollary B.6 in [21] for the proof)

$$\begin{aligned} \Vert D (I - L)^{- 1} F \Vert _{L^2 (\Omega ; L_{\ell }^2 ({\mathbb {R}}^4))} \lesssim \Vert F \Vert _{L^2 (\Omega )}, \end{aligned}$$
(61)

we can estimate the second term on the r.h.s. of (60), by the Hölder inequality and Proposition 1.2.3 of  [44] together with the Lipschitz continuity of \(F_1\) and \(F_2\), as

$$\begin{aligned}&\int _{{\mathbb {R}}^4} \int _{{\mathbb {R}}^4} {\mathbb {E}} \big [D_{z'} \{ (I - L)^{- 1} D_z (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1))) \} \times \nonumber \\&\qquad \hspace{2cm} D_z \{ (I - L)^{- 1} D_{z'} (F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) \} \big ] \, \textrm{d}z \, \textrm{d}z' \nonumber \\&\lesssim ~ \int _{{\mathbb {R}}^4} \Vert D_z (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1))) \Vert _{L^2 (\Omega )} \Vert D_z (F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) \Vert _{L^2 (\Omega )} \, \textrm{d}z \nonumber \\&= ~ \int _{{\mathbb {R}}^4} \Vert F_1' (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1)) D_z (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1)) \Vert _{L^2 (\Omega )} \times \nonumber \\&\qquad \hspace{2cm}\Vert F_2' (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2)) D_z (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2)) \Vert _{L^2 (\Omega )} \, \textrm{d}z \nonumber \\&\lesssim _{F_1, F_2} ~ \int _{{\mathbb {R}}^4} \Vert f \cdot D_z (\varphi _{\varepsilon , R} (\cdot + x_1)) \Vert _{L^2 (\Omega )} \Vert f \cdot D_z (\varphi _{\varepsilon , R} (\cdot + x_2)) \Vert _{L^2 (\Omega )} \, \textrm{d}z \nonumber \\&\le _f ~ \sup _{\begin{array}{c} x \in B (x_1, 1) \\ y \in B (x_2, 1) \end{array}} \int _{{\mathbb {R}}^4} \left( {\mathbb {E}} \left[ | \theta _{\varepsilon , R}^z (x) |^2 \right] \right) ^{1 / 2} \left( {\mathbb {E}} \left[ | \theta _{\varepsilon , R}^z (y) |^2 \right] \right) ^{1 / 2} \, \textrm{d}z. \end{aligned}$$
(62)

Further, since \((I - L)^{- 1}\) is a bounded operator on \(L^2 (\Omega )\), arguing as in (62), the first integral on the r.h.s. of (60) satisfies

$$\begin{aligned}&\int _{{\mathbb {R}}^4} {\mathbb {E}} \left[ \{ (I {-} L)^{{-} 1} D_z (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot {+} x_1))) \} \{ (I {-} L)^{{-} 1} D_z (F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot {+} x_2))) \} \right] \, \textrm{d}z \nonumber \\&\lesssim _{F_1, F_2, f} ~ \sup _{\begin{array}{c} x \in B (x_1, 1) \\ y \in B (x_2, 1) \end{array}} \int _{{\mathbb {R}}^4} \left( {\mathbb {E}} \left[ | \theta _{\varepsilon , R}^z (x) |^2 \right] \right) ^{1 / 2} \left( {\mathbb {E}} \left[ | \theta _{\varepsilon , R}^z (y) |^2 \right] \right) ^{1 / 2} \, \textrm{d}z. \end{aligned}$$
(63)

Thus, substituting (62)–(63) into (60) yields

$$\begin{aligned}&| {\text {Cov}} (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1)), F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) | \nonumber \\&\lesssim _{F_1, F_2, f} \sup _{\begin{array}{c} x \in B (x_1, 1) \\ y \in B (x_2, 1) \end{array}} \int _{{\mathbb {R}}^4} \left( {\mathbb {E}} \left[ | \theta _{\varepsilon , R}^z (x) |^2 \right] \right) ^{1 / 2} \left( {\mathbb {E}} \left[ | \theta _{\varepsilon , R}^z (y) |^2 \right] \right) ^{1 / 2} \, \textrm{d}z. \end{aligned}$$
(64)

Applying the Minkowski inequality for integrals and the Fubini theorem to (59), together with the bound \({\mathbb {E}} [e^{- 2 \alpha ^2 \int _0^t {\mathfrak {K}} (s) \, \textrm{d}s}] \leqslant 1\), which holds since \({\mathfrak {K}} \geqslant 0\), we further have

$$\begin{aligned}&\left( {\mathbb {E}} \left[ | \theta _{\varepsilon , R}^z (x) |^2 \right] \right) ^{1 / 2} = \left( {\mathbb {E}} \left[ \left| \int _0^{\infty } e^{- m^2 t} {\mathbb {E}}_x \left[ {\mathfrak {a}}_{\varepsilon } (x + B_t - z) e^{- \alpha ^2 \int _0^t {\mathfrak {K}} (s) \, \textrm{d}s} \right] \textrm{d}t \right| ^2 \right] \right) ^{1 / 2} \nonumber \\&\le {\mathbb {E}}_x \left[ \int _0^{\infty } e^{- m^2 t} {\mathfrak {a}}_{\varepsilon } (x + B_t - z) \left( {\mathbb {E}} \left[ e^{- 2 \alpha ^2 \int _0^t {\mathfrak {K}} (s) \, \textrm{d}s} \right] \right) ^{1 / 2} \textrm{d}t \right] \nonumber \\&\lesssim {\mathbb {E}}_x \left[ \int _0^{\infty } e^{- m^2 t} {\mathfrak {a}}_{\varepsilon } (x + B_t - z) \, \textrm{d}t \right] . \end{aligned}$$
(65)
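For the reader's convenience, the form of the Minkowski inequality for integrals used in the first step of (65) reads, for a jointly measurable \(X : [0, \infty ) \times \Omega \rightarrow {\mathbb {R}}\),

$$\begin{aligned} \left( {\mathbb {E}} \left[ \left| \int _0^{\infty } X_t \, \textrm{d}t \right| ^2 \right] \right) ^{1 / 2} \le \int _0^{\infty } \left( {\mathbb {E}} \left[ | X_t |^2 \right] \right) ^{1 / 2} \, \textrm{d}t, \end{aligned}$$

applied here with respect to the product measure \(\textrm{d}t \otimes \textrm{d}{\mathbb {P}}^x\), noting that \({\mathfrak {a}}_{\varepsilon } (x + B_t - z)\) is deterministic with respect to \(\Omega \); the Fubini theorem then allows to pull \({\mathbb {E}}_x\) outside the time integral.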

Note that the r.h.s. of (65) is independent of R. Hence, substituting the above into (64) and using the Feynman-Kac formula (59) (with zero nonlinearity), we obtain

$$\begin{aligned}&| {\text {Cov}} (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1)), F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) | \nonumber \\&\lesssim \sup _{\begin{array}{c} x \in B (x_1, 1) \\ y \in B (x_2, 1) \end{array}} \int _{{\mathbb {R}}^4} {\mathbb {E}}_x \left[ \int _0^{\infty } e^{- m^2 t} {\mathfrak {a}}_{\varepsilon } (x + B_t - z) \, \textrm{d}t \right] \nonumber \\&\quad {\mathbb {E}}_y \left[ \int _0^{\infty } e^{- m^2 t} {\mathfrak {a}}_{\varepsilon } (y + B_t - z) \, \textrm{d}t \right] \, \textrm{d}z \nonumber \\&= \sup _{\begin{array}{c} x \in B (x_1, 1)\\ y \in B (x_2, 1) \end{array}} \int _{{\mathbb {R}}^4} \left( \int _{{\mathbb {R}}^4} {\mathcal {G}} (x - u) {\mathfrak {a}}_{\varepsilon } (u - z)\, \textrm{d}u \right) \left( \int _{{\mathbb {R}}^4} {\mathcal {G}} (y - u) {\mathfrak {a}}_{\varepsilon } (u - z) \, \textrm{d}u \right) \, \textrm{d}z \nonumber \\&=:\sup _{\begin{array}{c} x \in B (x_1, 1)\\ y \in B (x_2, 1) \end{array}} I (x, y) . \end{aligned}$$
(66)
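The last equality in (66) uses the zero-potential Feynman-Kac identity \({\mathbb {E}}_x [ \int _0^{\infty } e^{- m^2 t} {\mathfrak {a}}_{\varepsilon } (x + B_t - z) \, \textrm{d}t ] = ({\mathcal {G}} *{\mathfrak {a}}_{\varepsilon }) (x - z)\). As a purely illustrative numerical sketch (not part of the proof), one can test the one-dimensional analogue, where the Green function of \(m^2 - \textrm{d}^2 / \textrm{d}x^2\) is \(e^{- m | r |} / (2 m)\), a Gaussian bump stands in for \({\mathfrak {a}}_{\varepsilon }\), and all parameter values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, shift = 1.0, 0.3, 0.5        # "shift" plays the role of x - z
dt, T, n_paths = 0.01, 10.0, 5000
n_steps = int(T / dt)

def bump(u):
    # smooth Gaussian bump standing in for the mollifier a_eps
    return np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Brownian paths started at the origin, sampled on a grid of step dt
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
t = dt * np.arange(1, n_steps + 1)

# Monte Carlo estimate of E_x[ int_0^infty e^{-m^2 t} a(x + B_t - z) dt ]
mc = np.mean(np.sum(np.exp(-m**2 * t) * bump(B + shift) * dt, axis=1))

# quadrature for (G * a)(shift) with the 1d Green function G(r) = e^{-m|r|}/(2m)
u = np.linspace(-5.0, 5.0, 4001)
exact = np.sum(np.exp(-m * np.abs(shift - u)) / (2 * m) * bump(u)) * (u[1] - u[0])
```

With the seed fixed as above, the Monte Carlo average agrees with the quadrature value up to the expected sampling error.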

To estimate \(I (x, y)\), we utilize the following representation of the kernel \({\mathcal {G}}\), based on the Fourier transform, see p. 273 of [7],

$$\begin{aligned} ({\mathcal {G}} *{\mathfrak {a}}_{\varepsilon } (\cdot - z)) (x) = [{\mathcal {F}}^{- 1} ({\mathcal {F}} ((m^2 - \Delta )^{- 1} ({\mathfrak {a}}_{\varepsilon } (\cdot - z))))] (x), \qquad \forall x, z \in {\mathbb {R}}^4. \end{aligned}$$

Thus, we deduce that, for all \(x, y \in {\mathbb {R}}^4\),

$$\begin{aligned} I (x, y)&= \int _{{\mathbb {R}}^4} ({\mathcal {G}} *{\mathfrak {a}}_{\varepsilon }) (x - z) ({\mathcal {G}} *{\mathfrak {a}}_{\varepsilon }) (y - z) \, \textrm{d}z = ((m^2 - \Delta )^{- 2} ({\mathfrak {a}}_{\varepsilon } *{\mathfrak {a}}_{\varepsilon })) (x - y) \nonumber \\&= [{\mathcal {F}}^{- 1} ({\mathcal {F}} ((m^2 - \Delta )^{- 2} ({\mathfrak {a}}_{\varepsilon } *{\mathfrak {a}}_{\varepsilon })))] (x - y) = \int _{{\mathbb {R}}^4} e^{i z \cdot (x - y)} \frac{(({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (z))^2}{(m^2 + | z |^2)^2} \, \textrm{d}z. \end{aligned}$$
(67)
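In one dimension the computation in (67) can be carried out in closed form: with \(m = 1\) and \({\mathfrak {a}}_{\varepsilon }\) replaced by a point mass, the Green function is \({\mathcal {G}} (u) = e^{- | u |} / 2\) and \(\int _{{\mathbb {R}}} {\mathcal {G}} (x - z) {\mathcal {G}} (y - z) \, \textrm{d}z = (1 + | x - y |) e^{- | x - y |} / 4\), already exhibiting the exponential decay in \(| x - y |\) we are after. A quick numerical sanity check of this identity (illustrative only):

```python
import numpy as np

# 1d analogue of (67): the Green function of (1 - d^2/dx^2) is G(u) = e^{-|u|}/2,
# and int G(x - z) G(y - z) dz equals the kernel of (1 - d^2/dx^2)^{-2} at x - y,
# i.e. the inverse Fourier transform of (1 + w^2)^{-2}: (1 + |r|) e^{-|r|} / 4.
G = lambda u: np.exp(-np.abs(u)) / 2.0

r = 1.3                                   # r plays the role of x - y
z = np.linspace(-30.0, 30.0, 600001)
dz = z[1] - z[0]
lhs = np.sum(G(r - z) * G(-z)) * dz       # take x = r, y = 0
rhs = (1.0 + r) * np.exp(-r) / 4.0
```

The Riemann sum reproduces the closed-form kernel to high accuracy on this grid.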

Next, we apply the change of variables \(z \mapsto A z\) in (67), where A is a rotation matrix on \({\mathbb {R}}^4\) chosen so that the vector \(x - y \in {\mathbb {R}}^4\) is mapped to a multiple of the first coordinate vector. Then, with \(z = A w\), we have

$$\begin{aligned} I (x, y)&= \int _{{\mathbb {R}}^4} \frac{e^{i w \cdot \frac{({\bar{x}}_1 - {\bar{y}}_1, 0, 0, 0)}{| x - y |}} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A w))^2}{(m^2 | x - y |^2 + | A w |^2)^2} \, \textrm{d}w \\&= \int _{{\mathbb {R}}^3} \int _{{\mathbb {R}}} \frac{e^{\pm i w_1} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A w))^2}{(m^2 | x - y |^2 + w_1^2 + | w_{1, \perp } |^2)^2} \, \textrm{d}w_1 \, \textrm{d}w_{1, \perp }, \end{aligned}$$

where

$$\begin{aligned} A x = ({\bar{x}}_1, 0, 0, 0), A y = ({\bar{y}}_1, 0, 0, 0) \text { and }w = (w_1, w_{1, \perp }) \in {\mathbb {R}} \times {\mathbb {R}}^3. \end{aligned}$$

Let us only consider the positive sign in \(e^{\pm i w_1}\). A similar approach handles the negative sign case, with the contour C closed in the lower half-plane so that it encloses the singularity at \(- i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2}\) instead.

First, we compute the integral \(\int _{{\mathbb {R}}} \frac{e^{i w_1} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (w_1, w_{1, \perp })))^2}{(m^2 | x - y |^2 + w_1^2 + | w_{1, \perp } |^2)^2} \textrm{d}w_1\) for fixed \(w_{1, \perp } \in {\mathbb {R}}^3\) using the residue theorem. We define the contour C that traverses the real line from \(- a\) to a and then returns counterclockwise along the upper semicircle of radius a centered at 0, from a back to \(- a\). Choosing \(a > (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2}\) ensures that the point \(i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2}\) lies within the contour. Now, consider the contour integral

$$\begin{aligned} \int _C \frac{e^{i w_1} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (w_1, w_{1, \perp })))^2}{(m^2 | x - y |^2 + w_1^2 + | w_{1, \perp } |^2)^2} \textrm{d}w_1. \end{aligned}$$
(68)

The integrand in (68) has poles of order 2 at \(\pm i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2}\), of which only the one in the upper half-plane is enclosed by C. Hence, by the residue theorem, for fixed \(w_{1, \perp } \in {\mathbb {R}}^3\), we have

$$\begin{aligned} \int _C \frac{e^{i w_1} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (w_1, w_{1, \perp })))^2}{(m^2 | x - y |^2 + w_1^2 + | w_{1, \perp } |^2)^2} \, \textrm{d}w_1&= 2 \pi i ({\text {Res}} (f (w_1), w_1 = i (m^2 | x - y |^2 \nonumber \\&\quad + | w_{1, \perp } |^2)^{1 / 2})), \end{aligned}$$
(69)

where

$$\begin{aligned} f (w_1) = \frac{e^{i w_1} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (w_1, w_{1, \perp })))^2}{(m^2 | x - y |^2 + w_1^2 + | w_{1, \perp } |^2)^2}\end{aligned}$$

and

$$\begin{aligned}&{\text {Res}} (f (w_1), w_1 = i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2}) \nonumber \\&= ~ \lim _{w_1 \rightarrow i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2}} \frac{\textrm{d}}{\textrm{d}w_1} \left[ \frac{e^{i w_1} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (w_1, w_{1, \perp })))^2}{(w_1 + i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2})^2} \right] . \end{aligned}$$
(70)

Here, with the abbreviated notation \({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon }) = ({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (w_1, w_{1, \perp }))\), we find that

$$\begin{aligned}&\frac{\textrm{d}}{\textrm{d}w_1} \left[ \frac{e^{i w_1} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (w_1, w_{1, \perp })))^2}{(w_1 + i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2})^2} \right] \\&= ~ \frac{e^{i w_1} {\mathcal {F}} ({\mathfrak {a}}_{\varepsilon }) \{ [i{\mathcal {F}} ({\mathfrak {a}}_{\varepsilon }) + 2 ({\mathcal {F}} (- i x_1 {\mathfrak {a}}_{\varepsilon } (x)))] (w_1 + i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2}) - 2{\mathcal {F}} ({\mathfrak {a}}_{\varepsilon }) \}}{(w_1 + i (m^2 | x - y |^2 + | w_{1, \perp } |^2)^{1 / 2})^3} . \end{aligned}$$

Hence, substituting the above expression in (70) and then letting \(a \rightarrow \infty \) in (69), where the contribution of the semicircular arc vanishes in the limit thanks to the decay of the integrand in the upper half-plane, we obtain, for fixed \(w_{1, \perp } \in {\mathbb {R}}^3\),

$$\begin{aligned}&\int _{{\mathbb {R}}} \frac{e^{i w_1} (({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (w_1, w_{1, \perp })))^2}{(m^2 | x - y |^2 + w_1^2 + | w_{1, \perp } |^2)^2} \, \textrm{d}w_1 \\&= ~ - \frac{\pi }{4} \frac{e^{- ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}} {\mathcal {F}} ({\mathfrak {a}}_{\varepsilon }) \{ [i{\mathcal {F}} ({\mathfrak {a}}_{\varepsilon }) + 2 ({\mathcal {F}} (- i x_1 {\mathfrak {a}}_{\varepsilon } (x)))] (2 i ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}) - 2{\mathcal {F}} ({\mathfrak {a}}_{\varepsilon }) \}}{{(m^2 | x - y |^2 + | w_{1, \perp } |^2)^{3 / 2}} } , \end{aligned}$$

where we set \(m^2 | x - y |^2 + | w_{1, \perp } |^2 =:{\mathfrak {w}}_{1, \perp } (x, y).\) Consequently,

$$\begin{aligned} I (x, y)&\lesssim \int _{{\mathbb {R}}^3} \frac{e^{- ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}} | ({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (i ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}, w_{1, \perp })) |^2}{{\mathfrak {w}}_{1, \perp } (x, y)} \textrm{d}w_{1, \perp } \nonumber \\&\quad + \int _{{\mathbb {R}}^3} \frac{e^{- ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}} | ({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (i ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}, w_{1, \perp })) | | ({\mathcal {F}} (- i x_1 {\mathfrak {a}}_{\varepsilon } (x))) (A (i ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}, w_{1, \perp })) |}{{\mathfrak {w}}_{1, \perp } (x, y)} \textrm{d}w_{1, \perp } \nonumber \\&\quad + \int _{{\mathbb {R}}^3} \frac{e^{- ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}} | ({\mathcal {F}} ({\mathfrak {a}}_{\varepsilon })) (A (i ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}, w_{1, \perp })) |^2}{{({\mathfrak {w}}_{1, \perp } (x, y))^{3 / 2}} } \textrm{d}w_{1, \perp } \nonumber \\&=:I_1 (x, y) + I_2 (x, y) + I_3 (x, y) . \end{aligned}$$
(71)
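The mechanism in (68)-(70), namely a double pole at height \(({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}\) in the upper half-plane producing the exponential factor, can be isolated in the model integral \(\int _{{\mathbb {R}}} e^{i w} / (b^2 + w^2)^2 \, \textrm{d}w = \pi (1 + b) e^{- b} / (2 b^3)\), whose decay in b mirrors the decay in \(m | x - y |\) obtained below. A symbolic sketch of the double-pole residue computation (illustrative only, not part of the proof):

```python
import sympy as sp

w, b = sp.symbols('w b', positive=True)

# model integrand e^{iw}/(b^2 + w^2)^2: double pole at w = i*b in the upper
# half-plane; factor out (w - i*b)^2 by hand, leaving a part h analytic there
h = sp.exp(sp.I * w) / (w + sp.I * b)**2

# residue at a double pole = d/dw h(w) evaluated at w = i*b (as in (70))
res = sp.diff(h, w).subs(w, sp.I * b)

# closing the contour counterclockwise in the upper half-plane
integral = sp.simplify(2 * sp.pi * sp.I * res)
expected = sp.pi * (1 + b) * sp.exp(-b) / (2 * b**3)
```

The exponential factor \(e^{- b}\) enters solely through evaluating \(e^{i w}\) at the pole \(w = i b\), exactly as \(e^{- ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}}\) arises above.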

Since \(- i x_1 {\mathfrak {a}}_{\varepsilon } (x)\), where \(x = (x_1, x_2, x_3, x_4)\), is also a smooth and compactly supported function, the term \(I_2\) can be treated exactly as \(I_1\), so it suffices to estimate \(I_1\) and \(I_3\) in (71). For \(I_1\), using estimate (81) with \(N = 1\) and the change of variables \(w_{1, \perp } = m u | x - y |\), we have

$$\begin{aligned} I_1 (x, y)&\lesssim C_N \int _{{\mathbb {R}}^3} \frac{e^{- ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}} (1 + | A (i ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}, w_{1, \perp }) |)^{- 2 N}}{{\mathfrak {w}}_{1, \perp } (x, y)} \, \textrm{d}w_{1, \perp } \nonumber \\&\lesssim _N \int _{{\mathbb {R}}^3} \frac{e^{- ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}} (1 + | (i ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}, w_{1, \perp }) |^2)^{- N}}{{\mathfrak {w}}_{1, \perp } (x, y)} \textrm{d}w_{1, \perp } \nonumber \\&\lesssim e^{- m | x - y |} \int _{{\mathbb {R}}^3} \frac{m | x - y | e^{- | u |}}{\left( 1 + |u|^2 \right) (m^2 | x - y |^2 + 2 m^2 | x - y |^2 | u |^2)^N} \, \textrm{d}u \nonumber \\&\lesssim e^{- m | x - y |} \int _{{\mathbb {R}}^3} \frac{e^{- |u |}}{\left( 1 + | u|^2 \right) ^2} \, \textrm{d}u. \end{aligned}$$
(72)

For \(I_3\) in (71), a similar computation, again with estimate (81) for \(N = 1\) and the change of variables \(w_{1, \perp } = m u | x - y |\), gives

$$\begin{aligned} I_3 (x, y)&\lesssim C_N \int _{{\mathbb {R}}^3} \frac{e^{- ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}} (1 + | A (i ({\mathfrak {w}}_{1, \perp } (x, y))^{1 / 2}, w_{1, \perp }) |)^{- 2 N}}{({\mathfrak {w}}_{1, \perp } (x, y))^{3 / 2}} \, \textrm{d}w_{1, \perp } \nonumber \\&\lesssim \int _{{\mathbb {R}}^3} \frac{e^{- \left( m^2 | x - y |^2 + m^2 | x - y |^2 | u |^2 \right) ^{1 / 2}} (1 + m^2 | x - y |^2 + 2 m^2 | x - y |^2 | u |^2)^{- N}}{\left( m^2 | x - y |^2 + m^2 | x - y |^2 | u|^2 \right) ^{3 / 2}} m^3 | x - y |^3 \, \textrm{d}u \nonumber \\&\lesssim e^{- m | x - y |} \int _{{\mathbb {R}}^3} \frac{e^{- | u |}}{(m | x - y |)^2 \left( 1 + | u |^2 \right) ^{3 / 2} (1 + | u |^2)} \, \textrm{d}u \nonumber \\&\lesssim e^{- m | x - y |}\int _{{\mathbb {R}}^3} (1 + | u | )^{- 5} \textrm{d}u . \end{aligned}$$
(73)

Thus, substituting (72)–(73) into (71) yields

$$\begin{aligned} I (x, y) \lesssim e^{- m | x - y |}. \end{aligned}$$

Further, substituting this into (66) and using that, for \(l = | x_1 - x_2 | > 2\), every \(x \in B (x_1, 1)\) and \(y \in B (x_2, 1)\) satisfy \(| x - y | \geqslant l - 2\), so that \(e^{- m | x - y |} \leqslant e^{2 m} e^{- m l}\), we obtain

$$\begin{aligned} | {\text {Cov}} (F_1 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_1)), F_2 (f \cdot \varphi _{\varepsilon , R} (\cdot + x_2))) | \lesssim _{F_1, F_2, f} e^{- m l}. \end{aligned}$$
(74)

The complementary case \(| x_1 - x_2 | \leqslant 2\) is straightforward, arguing as in the coupling approach. Therefore, since (74) holds uniformly in \(\varepsilon \) and R, we conclude the proof of Theorem 1 by first taking the limit \(R \rightarrow \infty \) and then letting \(\varepsilon \rightarrow 0\).