Probability to be positive for the membrane model in dimensions 2 and 3

We consider the membrane model on a box $V_N\subset \mathbb{Z}^n$ of size $(2N+1)^n$ with zero boundary condition in the subcritical dimensions $n=2$ and $n=3$. We show optimal estimates for the probability that the field is positive in a subset $D_N$ of $V_N$. In particular we obtain for $D_N=V_N$ that the probability to be positive on the entire domain is exponentially small and the rate is of the order of the surface area $N^{n-1}$.


Introduction
The membrane model is the centred Gaussian field indexed by (a subset of) $\mathbb{Z}^n$, $n \ge 1$, whose covariance matrix is given by the Green's function of the discrete Bilaplacian. It is closely related to the well-known discrete Gaussian free field, or gradient model, whose covariance is the Green's function of the discrete Laplacian. Both models are used to describe interfaces in the context of statistical physics. The particular motivation for studying the membrane model stems from physical surfaces that tend to have constant curvature, [Lip95,HL97,RLCMS05]. The two models have many features in common.
One example is that there is a critical dimension (n = 2 for the gradient model, n = 4 for the membrane model), such that the variances are unbounded in the subcritical dimensions, logarithmically divergent in the critical dimension and bounded in the supercritical dimensions. See e.g. [Vel06] for a more general overview.
A particular feature of the gradient model is the existence of a random walk representation, which allows relatively easy estimates on the covariances, and provides proofs for correlation inequalities such as the FKG inequality. In the membrane model, such a random walk representation is present only in certain special cases, see [Kur09]. This makes the derivation of bounds on the covariances much harder, and moreover, some widely used correlation inequalities do not hold for the membrane model. In [MS17], Müller and Schweiger obtained very precise estimates on the Green's function of the discrete Bilaplacian in the subcritical dimensions 2 and 3. These results in particular imply that the membrane model is Hölder continuous, [MS17,CDH18]. Here we use the estimates to provide bounds for the probability of the interface to be positive on certain subsets of its domain.
Such results are related to the phenomenon of entropic repulsion, which refers to the observation that some interfaces are repelled by a hard wall to a height which is determined by the fluctuations of the field. Mathematically speaking, this amounts to considering the field conditional on the event of being positive on a specified part of the domain. The field then needs to accommodate its fluctuations, so its local averages will increase. We speak of entropic repulsion if the order by which the field increases is strictly larger than the order of the square root of the variances of the original field, [Gia01,LM87].
For the Gaussian gradient model entropic repulsion was proved in [BDZ95,BDG01,Deu96,DG99]. For the membrane model, entropic repulsion was shown for n ≥ 4 by Sakagawa and by Kurt [Sak03,Kur07,Kur09]. In dimension n = 1 the model corresponds to an integrated Gaussian random walk, see [CD08]. Dembo, Ding and Gao [DDG13] proved that for such processes with zero mean and finite variance the probability to be positive on an interval of length N is of order $N^{-1/4}$, extending a result by Sinai [Sin92] for integrated simple random walk. We consider here the membrane model defined on a box of side-length 2N + 1, N ∈ N, and focus on dimensions n = 2, 3. In this case only a first result by Sakagawa [Sak16] is available.

Main results
Let $V = [-1, 1]^n$ and $V_N = NV \cap \mathbb{Z}^n$ with $n, N \in \mathbb{N}^+$. We consider the Hamiltonian
$$H_N(\psi) = \frac{1}{2} \sum_{x \in \mathbb{Z}^n} |\Delta \psi_x|^2,$$
where $\Delta$ is the discrete Laplacian and $\psi \in \mathbb{R}^{V_N}$ is a function on $V_N$, extended by $0$ to all of $\mathbb{Z}^n$. The associated Gibbs measure $P_N$ is then the distribution of a Gaussian random field on $\mathbb{Z}^n$ with $0$ boundary data, the so-called membrane model. We focus on the subcritical case $n \in \{2, 3\}$, and we are interested in the event
$$\Omega_{D_N,+} = \{\psi_x \ge 0 \text{ for all } x \in D_N\}$$
for subsets $D_N \subset V_N$. A first result in this direction was established by Sakagawa [Sak16], who proved that for every $x \in V$ there is a small neighborhood $B_x$ such that $P_N(\Omega_{NB_x,+}) > c$ for some (universal) constant $c$.
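As a concrete illustration of the definition, the following sketch (our own, not part of the paper) assembles the precision matrix $Q$ of the model, so that $H_N(\psi) = \frac12 \psi^T Q \psi$ with the sum of $|\Delta\psi_x|^2$ taken over all of $\mathbb{Z}^n$ and $\psi$ extended by zero, and then samples the field. All function names are ours, and the dense linear algebra is only meant for very small boxes.

```python
import numpy as np

def membrane_precision(N, n=2):
    """Precision matrix Q of the membrane model on V_N = ([-N, N] ∩ Z)^n with
    zero boundary condition: H_N(psi) = 1/2 psi^T Q psi, where Q = D^T D and
    D evaluates the discrete Laplacian of the zero-extended field on the
    slightly larger box (the only sites where it can be nonzero)."""
    inner = [tuple(p) for p in np.ndindex(*(2 * N + 1,) * n)]   # sites of V_N, shifted
    idx = {p: k for k, p in enumerate(inner)}
    outer = [tuple(q) for q in np.ndindex(*(2 * N + 3,) * n)]   # V_N enlarged by 1
    D = np.zeros((len(outer), len(inner)))
    for r, q in enumerate(outer):
        x = tuple(c - 1 for c in q)          # position in inner coordinates
        if x in idx:
            D[r, idx[x]] -= 2 * n            # -2n * psi_x term of Delta psi
        for i in range(n):
            for s in (-1, 1):
                y = tuple(c + (s if j == i else 0) for j, c in enumerate(x))
                if y in idx:
                    D[r, idx[y]] += 1.0      # neighbour terms of Delta psi
    return D.T @ D

def sample_membrane(N, n=2, rng=None):
    """Draw one configuration psi ~ N(0, Q^{-1}); the covariance is the
    Green's function of the discrete Bilaplacian with these boundary data."""
    rng = np.random.default_rng(rng)
    cov = np.linalg.inv(membrane_precision(N, n))
    return np.linalg.cholesky(cov) @ rng.standard_normal(len(cov))
```

For $n = 1$ the interior rows of $Q$ reproduce the familiar one-dimensional Bilaplacian stencil $(1, -4, 6, -4, 1)$, which is a quick consistency check on the construction.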
Let us emphasize two important special cases of our theorem, which will help motivate its statement. We first consider the case $D_N = V_{\delta N}$ for $\delta \in (0, 1)$, where the hard wall stays away from the boundary. In that case the fact that the membrane model is Hölder continuous suggests that the field has a decent chance to be positive if it is uniformly positive at a sufficiently dense set of lattice points of bounded cardinality. Thus the probability that $\psi$ is positive on $D_N = V_{\delta N}$ should be comparable to the probability of uniform positivity at that dense set, and thus bounded away from zero. Indeed, Theorem 1.1 implies the following corollary.
Corollary 1.2. Let $n = 2$ or $n = 3$. For $\delta \in (0, 1)$ there is a constant $c_\delta > 0$ such that
$$P_N\big(\Omega_{V_{\delta N},+}\big) \ge c_\delta \quad \text{for all } N \in \mathbb{N}^+.$$
When $D_N = V_N$, the situation is somewhat different. While the Hölder continuity holds up to the boundary, the $\psi_x$ for $x$ near the boundary are only weakly correlated and behave almost like independent random variables. This suggests that the probability to be positive on all of $V_N$ can at best scale like $e^{-cN^{n-1}}$ (note that the number of points at distance $1$ from the boundary is of order $N^{n-1}$). On the other hand, if the field is positive at all near-boundary points it gets pushed up in the interior quite a bit, so the probability to be positive everywhere should be of lower order.
Indeed, another particular case of Theorem 1.1 is an estimate for $P_N(\Omega_{V_N,+})$. We expect this result to be true for the membrane model and the gradient model in any dimension $n \ge 2$. For the gradient model a stronger result has been shown for $n \ge 3$ in Theorem 4.1 of [Deu96]. Note that the behaviour for general $L \ge 1$ in Theorem 1.1 is different for the gradient model and also for the membrane model in dimension $n \ge 4$.
We give the proofs of the lower and upper bound in Theorem 1.1 in Sections 3 and 4, respectively.

Implications for entropic repulsion
Corollary 1.2 easily implies that conditioning on $\Omega_{V_{\delta N},+}$ does not change the order of the maximum of the field. Indeed, the Hölder continuity results from [CDH18] (see Corollary 2.2 there) imply that $N^{-\frac{4-n}{2}} \max_{x \in V_N} \psi_x$ converges in distribution to a non-concentrated random variable $M$. By the Borell-TIS inequality the random variables $N^{-\frac{4-n}{2}} \max_{x \in V_N} \psi_x$ have sub-Gaussian tails uniformly in $N$, and therefore their moments are bounded uniformly in $N$ (1.5). Then Corollary 1.2 combined with the trivial estimate $E_N(X \mid \Omega_{V_{\delta N},+}) \le \frac{E_N(X)}{P_N(\Omega_{V_{\delta N},+})}$ for $X \ge 0$ implies the following corollary.
Corollary 1.4. Let $n = 2$ or $n = 3$, and $\delta \in (0, 1)$. We have that
$$E_N\Big( \max_{x \in V_N} \psi_x \,\Big|\, \Omega_{V_{\delta N},+} \Big) \le C_\delta\, N^{\frac{4-n}{2}}.$$
In other words, conditioning on $\Omega_{V_{\delta N},+}$ changes the maximum of the field only by a bounded factor, and so there is no entropic repulsion.
We conjecture that the same holds true if we condition on Ω VN ,+ , but a proof is more difficult since the probability of Ω VN ,+ is exponentially small.
In fact, we expect that conditioned on $\Omega_{V_N,+}$ a typical field looks like $\psi_x = c\, d_N(x)^{\frac{4-n}{2}}$, where $d_N(x)$ denotes the distance of $x$ to the boundary. That is, the field increases steeply near the boundary, but stays of order $N^{\frac{4-n}{2}}$ in the interior. We thus conjecture the following.

Notation
Let $e_1, \ldots, e_n$ be the standard basis of $\mathbb{R}^n$. We use the discrete forward derivative $(\nabla_i \psi)_x = \psi_{x + e_i} - \psi_x$, $i = 1, \ldots, n$. For a set $A$ we denote by $|A|$ its cardinality. In the following $c$, $C$ and $C'$ denote constants that may change from line to line, but are always independent of $N$ and $L$.

Preliminaries
Let us recall the relevant results that will be used in the proofs of the main theorems. Let $G_N$ be the Green's function of $\Delta^2$ on $V_N$ with $0$ boundary data outside $V_N$, i.e. $G_N(\cdot, y) = 0$ if $y \notin V_N$ and $\Delta^2 G_N(\cdot, y) = \delta_y(\cdot)$ on $V_N$. The Green's function $G_N$ agrees with the covariance matrix of $\psi$, i.e. we have $\mathrm{Cov}_N(\psi_x, \psi_y) = G_N(x, y)$, see also [Kur09]. Our proofs are based on the estimates for the Green's function $G_N$ recently obtained in [MS17].
Theorem 2.1 ([MS17]). Let $n = 2$ or $n = 3$. Then we have for any $x, y \in V_N$
$$c\,(1 + d_N(x))^{4-n} \le G_N(x, x) \le C\,(1 + d_N(x))^{4-n}, \qquad (2.2)$$
$$|\nabla_{y,i}\, G_N(x, y)| \le C\,(1 + d_N(x))^{3-n}, \quad i = 1, \ldots, n, \qquad (2.3)$$
and consequently
$$|G_N(x, x) - G_N(x, y)| \le C\,|x - y|_1\,(1 + d_N(x))^{3-n}, \qquad (2.4)$$
where (2.4) follows from (2.3) by discrete integration along a path from $x$ to $y$.
The lower bound relies on Dudley's inequality proved in [Dud67]. To state the inequality we introduce the following two notions. For a Gaussian process $(X_t)_{t \in T}$ we define the pseudometric $d_X$ by
$$d_X(s, t) = \big( E (X_s - X_t)^2 \big)^{1/2}. \qquad (2.6)$$
The entropy number $N(T, d_X, r)$ is the minimal number of open balls of radius $r$ in the $d_X$ metric that are needed to cover $T$.
Theorem 2.2. Let $(X_t)_{t \in T}$ be a centred Gaussian process. Then
$$E\Big[ \sup_{t \in T} X_t \Big] \le K \int_0^\infty \sqrt{\log N(T, d_X, r)}\, dr \qquad (2.7)$$
for a universal constant $K$. Remark 2.3. The theorem is true for arbitrary sets $T$ if one defines the supremum appropriately, see e.g. [Tal96]. Since we only apply it to finite index sets we do not discuss this issue here.
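To make the two notions concrete, here is a small numerical sketch (our own illustration, not part of the proof): for the partial-sum process $X_t = \sum_{s \le t} \xi_s$ of i.i.d. standard Gaussians one has $d_X(s, t) = \sqrt{|s - t|}$, and a greedy cover gives an upper bound on the entropy numbers, from which the entropy integral can be evaluated by a Riemann sum.

```python
import numpy as np

def entropy_number_upper(T, d, r):
    """Greedy cover of T by open d-balls of radius r centred at points of T;
    its size is an upper bound for the entropy number N(T, d, r)."""
    centers = []
    for t in T:
        if not any(d(t, c) < r for c in centers):
            centers.append(t)
    return len(centers)

# Partial sums of i.i.d. N(0,1): E (X_s - X_t)^2 = |s - t|, so d_X(s,t) = sqrt(|s-t|)
T = list(range(33))
d_X = lambda s, t: abs(s - t) ** 0.5

# Riemann-sum version of Dudley's entropy integral over (0, diam]
diam = d_X(T[0], T[-1])
rs = np.linspace(diam / 400, diam, 400)
dr = rs[1] - rs[0]
dudley_integral = sum(np.sqrt(np.log(entropy_number_upper(T, d_X, r))) for r in rs) * dr
```

For $r$ below the minimal spacing every point is its own ball, for $r$ above the diameter one ball suffices, and in between the covering number decays like $r^{-2}$ for this process.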
We also use the Gaussian correlation inequality due to Royen [Roy14] (see also [LM17]). Actually for our results the case where K and L are rectangles would be sufficient (see Remark 3.2 below). In that case the theorem is due to Khatri [Kha67] and Šidák [Šid67].
Theorem 2.4. Let $\nu$ be a centred Gaussian measure on $\mathbb{R}^m$ and $K, L \subset \mathbb{R}^m$ be closed, symmetric and convex. Then
$$\nu(K \cap L) \ge \nu(K)\, \nu(L). \qquad (2.8)$$
Finally, we recall a Gaussian correlation inequality due to Li and Shao [LS04, Lemma 5.1] that will be used in the proof of the upper bound.
Lemma 2.5. Let $m \in \mathbb{N}$, and let $X = (X_1, \ldots, X_m)$, $Y = (Y_1, \ldots, Y_m)$ be Gaussian random vectors with mean $0$ and positive definite covariance matrices $\Sigma_X, \Sigma_Y$, and let $P$ denote their joint measure. If $\Sigma_Y \ge \Sigma_X$ (in the sense of symmetric matrices, i.e., $\Sigma_Y - \Sigma_X$ is positive semidefinite), then for every Borel set $F \subset \mathbb{R}^m$
$$P(X \in F) \le \Big( \frac{\det \Sigma_Y}{\det \Sigma_X} \Big)^{1/2} P(Y \in F). \qquad (2.9)$$
For the convenience of the reader we repeat the short proof.
Proof. Let $f_X, f_Y$ be the densities of $X$ and $Y$. The assumption $\Sigma_Y \ge \Sigma_X$ implies $\Sigma_X^{-1} \ge \Sigma_Y^{-1}$, and hence for every $x \in \mathbb{R}^m$
$$f_X(x) = \frac{e^{-\frac12 \langle x, \Sigma_X^{-1} x \rangle}}{\sqrt{(2\pi)^m \det \Sigma_X}} \le \Big( \frac{\det \Sigma_Y}{\det \Sigma_X} \Big)^{1/2} f_Y(x). \qquad (2.10)$$
Integrating this inequality over $F$ yields
$$P(X \in F) = \int_F f_X(x)\, dx \le \Big( \frac{\det \Sigma_Y}{\det \Sigma_X} \Big)^{1/2} P(Y \in F). \qquad (2.11)$$
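For intuition, a one-dimensional sanity check of the density comparison behind Lemma 2.5 (our own illustration, assuming the comparison reads $P(X \in F) \le (\det \Sigma_Y / \det \Sigma_X)^{1/2} P(Y \in F)$): with $m = 1$, $\Sigma_X = \sigma_X^2 \le \sigma_Y^2 = \Sigma_Y$ and $F = [-1, 1]$, both the pointwise density bound and the integrated bound can be checked via the error function.

```python
import math

# Scalar instance: X ~ N(0, sx2), Y ~ N(0, sy2) with sy2 >= sx2
sx2, sy2 = 1.0, 1.5
ratio = math.sqrt(sy2 / sx2)          # sqrt(det Sigma_Y / det Sigma_X)

def density(v, s2):
    return math.exp(-v * v / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)

# Pointwise density comparison f_X <= ratio * f_Y, checked on a grid
pointwise_ok = all(density(v, sx2) <= ratio * density(v, sy2) + 1e-12
                   for v in (x / 10.0 for x in range(-50, 51)))

# Integrated comparison for F = [-1, 1]: P(|N(0, s2)| <= 1) = erf(1 / sqrt(2 s2))
pX = math.erf(1.0 / math.sqrt(2.0 * sx2))
pY = math.erf(1.0 / math.sqrt(2.0 * sy2))
integrated_ok = pX <= ratio * pY
```

Note that $pY < pX$ here: the larger variance puts less mass on the fixed interval, and the determinant factor is exactly what compensates for this in the lemma.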

Lower bounds
Let
$$\Omega_{V_{N-L},\infty} = \Big\{ |\psi_x| \le 2\, d_N(x)^{\frac{4-n}{2}} \text{ for all } x \in V_{N-L} \Big\}$$
be the event that $\psi$ is uniformly small on $V_{N-L}$.
If $\psi$ were $C^{0, \frac{4-n}{2}}$-Hölder continuous (with Hölder constant $1$), this event would have a positive probability uniformly in $N$ and $L$. However, $\psi$ is only $C^{0, \frac{4-n}{2} - \varepsilon}$-Hölder continuous (see [MS17], [CDH18]), so we cannot expect a lower bound independent of $N$. Instead, we prove in Subsection 3.2 that the probability of $\Omega_{V_{N-L},\infty}$ is bounded below by $e^{-c \frac{N^{n-1}}{(L+1)^{n-1}}}$. Then, using a change of measure argument, we show in Subsection 3.3 that for a suitable $f \colon V_N \to \mathbb{R}$ with $\|\Delta f\|_{L^2}^2 \le C \frac{N^{n-1}}{(L+1)^{n-1}}$ the shifted event $f + \Omega_{V_{N-L},\infty}$ has probability at least $e^{-\frac12 \|\Delta f\|_{L^2}^2}\, P_N(\Omega_{V_{N-L},\infty})$, which suffices to prove the lower bound in Theorem 1.1.

Local smallness of the field
We first prove that locally the field is small with positive probability. For $x_0 \in V_N$ and $\gamma > 0$ we define the set
$$A_{x_0,\gamma} = \big\{ x \in V_N : |x - x_0|_\infty \le \gamma\, d_N(x_0) \big\}. \qquad (3.3)$$
Lemma 3.1. Let $n = 2$ or $n = 3$. There is a pair of constants $\gamma, \delta > 0$ with the following property: for all $x_0 \in V_N$,
$$P_N\Big( \sup_{x \in A_{x_0,\gamma}} |\psi_x| \le d_N(x_0)^{\frac{4-n}{2}} \Big) \ge \delta. \qquad (3.4)$$
Proof. We apply Theorem 2.2 to the Gaussian process $\psi$ distributed according to $P_N$. We assume $\gamma < \frac12$ so that
$$\tfrac12\, d_N(x_0) \le d_N(x) \le \tfrac32\, d_N(x_0) \quad \text{for all } x \in A_{x_0,\gamma}. \qquad (3.5)$$
Therefore we will always estimate distances to the boundary for $x \in A_{x_0,\gamma}$ by $d_N(x_0)$ in the following. The bound (2.4) implies for $x, y \in A_{x_0,\gamma}$
$$E\big[ (\psi_x - \psi_y)^2 \big] \le \Theta\, |x - y|_\infty\, d_N(x_0)^{3-n} \qquad (3.6)$$
for some $\Theta > 0$. Therefore we can estimate the Gaussian pseudometric defined in (2.6) by
$$d_\psi(x, y) \le \big( \Theta\, |x - y|_\infty\, d_N(x_0)^{3-n} \big)^{1/2}. \qquad (3.7)$$
This implies that for $x, y \in A_{x_0,\gamma}$ such that $|x - y|_\infty \le \frac{r^2}{\Theta\, d_N(x_0)^{3-n}}$ we have $d_\psi(x, y) \le r$ (3.8). In particular $B_{d_\psi}(x, r) \supset B_\infty\big(x, \frac{r^2}{\Theta\, d_N(x_0)^{3-n}}\big)$ and therefore
$$N\big( A_{x_0,\gamma}, d_\psi, r \big) \le C \Big( \frac{\gamma\, \Theta\, d_N(x_0)^{4-n}}{r^2} + 1 \Big)^n. \qquad (3.9)$$
Then Theorem 2.2 implies
$$E\Big[ \sup_{x \in A_{x_0,\gamma}} \psi_x \Big] \le K \sqrt{\gamma}\, d_N(x_0)^{\frac{4-n}{2}}, \qquad (3.10)$$
where $K$ only depends on $n$.
If we take $\gamma = (16K)^{-2}$ we obtain
$$E\Big[ \sup_{x \in A_{x_0,\gamma}} \psi_x \Big] \le \tfrac{1}{16}\, d_N(x_0)^{\frac{4-n}{2}}. \qquad (3.11)$$
Define the oscillation of a function $f$ on a set $T$ as usual by $\operatorname{osc}_T f = \sup_T f - \inf_T f$. Since $\psi$ is a centred process, (3.11) implies
$$E\Big[ \operatorname{osc}_{A_{x_0,\gamma}} \psi \Big] \le \tfrac18\, d_N(x_0)^{\frac{4-n}{2}}. \qquad (3.13)$$
By Markov's inequality this implies that
$$P_N\Big( \operatorname{osc}_{A_{x_0,\gamma}} \psi \le \tfrac12\, d_N(x_0)^{\frac{4-n}{2}} \Big) \ge \tfrac12. \qquad (3.14)$$
Note that we have the inclusion
$$\Big\{ \operatorname{osc}_{A_{x_0,\gamma}} \psi \le \tfrac12\, d_N(x_0)^{\frac{4-n}{2}} \Big\} \cap \Big\{ |\psi_{x_0}| \le \tfrac12\, d_N(x_0)^{\frac{4-n}{2}} \Big\} \subset \Big\{ \sup_{x \in A_{x_0,\gamma}} |\psi_x| \le d_N(x_0)^{\frac{4-n}{2}} \Big\}. \qquad (3.15)$$
Now the Gaussian correlation inequality (2.8) together with the variance bound (2.2) implies (3.4) for some fixed $\delta > 0$.
Remark 3.2. The use of the Gaussian correlation inequality could be avoided here: from (3.11) and (2.2) one easily obtains
$$E\Big[ \sup_{x \in A_{x_0,\gamma}} |\psi_x| \Big] \le \Xi\, d_N(x_0)^{\frac{4-n}{2}}$$
for some $\Xi > 0$, and therefore
$$P_N\Big( \sup_{x \in A_{x_0,\gamma}} |\psi_x| \le 2\,\Xi\, d_N(x_0)^{\frac{4-n}{2}} \Big) \ge \tfrac12. \qquad (3.18)$$
We could work with this estimate instead of (3.4) by using the event $\big\{ |\psi_x| \le 2\,\Xi\, d_N(x)^{\frac{4-n}{2}} \text{ for all } x \in V_{N-L} \big\}$ instead of $\Omega_{V_{N-L},\infty}$ in the following.

Global smallness of the field
Using the Gaussian correlation inequality we can now conclude global estimates from Lemma 3.1.
Lemma 3.3. Let $n = 2$ or $n = 3$, and let $\Omega_{V_{N-L},\infty}$ be as before. Then we have
$$P_N\big( \Omega_{V_{N-L},\infty} \big) \ge e^{-C \frac{N^{n-1}}{(L+1)^{n-1}}}. \qquad (3.20)$$
Proof. Recall the definition of $A_{x,\gamma}$ in (3.3). Fix $\gamma$ such that the conclusion of Lemma 3.1 holds and use the shorter notation $A_x := A_{x,\gamma}$. We want to construct a subset $B_N$ of $V_N$ such that $|B_N| \le C \frac{N^{n-1}}{(L+1)^{n-1}}$ and such that
$$\bigcap_{x \in B_N} \Big\{ \sup_{y \in A_x} |\psi_y| \le d_N(x)^{\frac{4-n}{2}} \Big\} \subset \Omega_{V_{N-L},\infty}. \qquad (3.21)$$
If we have found such a set, then the Gaussian correlation inequality (Theorem 2.4) and Lemma 3.1 imply that
$$P_N\big( \Omega_{V_{N-L},\infty} \big) \ge \delta^{|B_N|} \ge e^{-C \frac{N^{n-1}}{(L+1)^{n-1}}}. \qquad (3.22)$$
It remains to prove the existence of $B_N$. We split $V_N$ into the dyadic annuli
$$W_k = \big\{ x \in V_N : 2^k \le d_N(x) < 2^{k+1} \big\}.$$
Because $W_k$ has outer sidelength $2(N - 2^k) \le 2N$ and thickness $2^k$, we can cover it by at most $C \frac{N^{n-1}}{2^{k(n-1)}}$ cubes $A_x$, i.e. we find a set $B_{N,k}$ of at most $C \frac{N^{n-1}}{2^{k(n-1)}}$ points in $V_N$ such that $W_k \subset \bigcup_{x \in B_{N,k}} A_x$. Summing the resulting geometric series over the annuli that intersect $V_{N-L}$, i.e. those with $2^{k+1} \ge L$, yields the claimed bound on $|B_N|$.
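The counting in the proof is just a geometric series; the toy computation below (our own sketch, with all constants set to $1$) adds up the $N^{n-1}/2^{k(n-1)}$ covering points over the dyadic scales $2^k \gtrsim L + 1$ and confirms numerically that the total is dominated by the first term, of order $N^{n-1}/(L+1)^{n-1}$.

```python
def covering_count(N, L, n):
    """Sum of (N / 2^k)^(n-1) over dyadic scales 2^k between L+1 and N,
    mimicking the sizes |B_{N,k}| of the covering sets (constants set to 1)."""
    total = 0.0
    k = 0
    while 2 ** k <= N:
        if 2 ** k >= L + 1:
            total += (N / 2 ** k) ** (n - 1)
        k += 1
    return total
```

Since the summands decay by a factor $2^{-(n-1)} \le \frac12$ per scale, the total never exceeds twice the first term, i.e. $2 \big( N / (L+1) \big)^{n-1}$.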

Change of measure
We can now prove the lower bound in Theorem 1.1. The idea is simple: we use an explicit calculation with densities to prove that the probability $P_N(f + \Omega_{V_{N-L},\infty})$ is bounded below by $e^{-\frac12 \|\Delta f\|_{L^2}^2}\, P_N(\Omega_{V_{N-L},\infty})$. Then it remains to make a good choice of $f$.
Proof of Theorem 1.1, lower bound. Let $f \colon V_N \to \mathbb{R}$ be a function to be specified later, and extend it by $0$ to all of $\mathbb{Z}^n$. We want to estimate the probability of the event $f + \Omega_{V_{N-L},\infty} = \{ f + \psi : \psi \in \Omega_{V_{N-L},\infty} \}$. To do so, we calculate
$$P_N\big( f + \Omega_{V_{N-L},\infty} \big) = e^{-\frac12 \|\Delta f\|_{L^2}^2}\, E_N\Big[ e^{-\langle \Delta f, \Delta \psi \rangle}\, \mathbf{1}_{\Omega_{V_{N-L},\infty}} \Big]. \qquad (3.26)$$
Because $\Omega_{V_{N-L},\infty}$ is symmetric around the origin, we can replace $\psi$ by $-\psi$ and obtain that
$$P_N\big( f + \Omega_{V_{N-L},\infty} \big) = e^{-\frac12 \|\Delta f\|_{L^2}^2}\, E_N\Big[ e^{\langle \Delta f, \Delta \psi \rangle}\, \mathbf{1}_{\Omega_{V_{N-L},\infty}} \Big]. \qquad (3.27)$$
If we add (3.26) and (3.27) and use the estimate $e^t + e^{-t} \ge 2$, we conclude
$$P_N\big( f + \Omega_{V_{N-L},\infty} \big) \ge e^{-\frac12 \|\Delta f\|_{L^2}^2}\, P_N\big( \Omega_{V_{N-L},\infty} \big). \qquad (3.28)$$
Note that the conclusion in (3.28) also follows from (3.26) by Jensen's inequality.
We now choose $f$ as in Lemma 3.4 below. Then
$$\|\Delta f\|_{L^2}^2 \le C\, \frac{N^{n-1}}{(L+1)^{n-1}}. \qquad (3.30)$$
Lemma 3.4. There is a constant $C > 0$ such that for every $N$ and $0 \le L \le N$ there is a function $\varphi \colon \mathbb{Z}^n \to [0, \infty)$ with $\varphi(x) \ge 2\, d_N(x)^{\frac{4-n}{2}}$ for all $x \in V_{N-L}$ and
$$\sum_{x \in \mathbb{Z}^n} |\Delta \varphi(x)|^2 \le C\, \frac{N^{n-1}}{(L+1)^{n-1}}.$$

Upper bounds
In order to prove the upper bound in Theorem 1.1, we will find a suitably sparse set $E_{N,L}$ of points at the boundary such that the $\{\psi_x : x \in E_{N,L}\}$ are almost independent in the sense that their covariance matrix is diagonally dominant. We can then use Lemma 2.5 to compare them to actually independent random variables. The following argument is taken from [Sch16, Section 6]. We now choose $\alpha$ large enough that the right hand side of (4.4) becomes less than $\frac14$. We define the Gaussian random vector $(X_x)_{x \in E_{N,L}}$ by $X_x = G_N(x, x)^{-1/2}\, \psi_x$, and let $\Sigma_X$ be its covariance matrix. Let $(Y_x)_{x \in E_{N,L}}$ be a vector of independent centred Gaussians with variance $\frac32$, so that $\Sigma_Y = \frac32\, \mathrm{Id}$. This means that $\Sigma_Y - \Sigma_X$ is strictly diagonally dominant and hence positive definite. Hence we can apply Lemma 2.5 and obtain
$$P_N\big( \psi_x > 0 \text{ for all } x \in E_{N,L} \big) \le \Big( \frac{\det \Sigma_Y}{\det \Sigma_X} \Big)^{1/2} \Big( \frac12 \Big)^{|E_{N,L}|}.$$
It remains to estimate $\frac{\det \Sigma_X}{\det \Sigma_Y}$. Since $\Sigma_Y$ is diagonal, $\det \Sigma_Y = \big( \frac32 \big)^{|E_{N,L}|}$.
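The linear-algebra step can be illustrated numerically (our own toy example; the geometric decay profile of the correlations is invented and is not the one coming from (4.4)): take $\Sigma_X$ with unit diagonal and off-diagonal row sums below $\frac14$, and $\Sigma_Y = \frac32\, \mathrm{Id}$; then $\Sigma_Y - \Sigma_X$ is strictly diagonally dominant, hence positive definite, and a Lemma 2.5-type comparison gives the bound $(\det \Sigma_Y / \det \Sigma_X)^{1/2}\, 2^{-m}$ for the positive orthant.

```python
import numpy as np

m = 8
# Toy "almost independent" covariance: unit variances, geometrically decaying
# correlations; each off-diagonal row sum is below 1/4
Sigma_X = np.fromfunction(lambda i, j: 0.1 ** np.abs(i - j), (m, m))
Sigma_Y = 1.5 * np.eye(m)                    # independent N(0, 3/2) comparison

M = Sigma_Y - Sigma_X
row_offdiag = np.sum(np.abs(M), axis=1) - np.abs(np.diag(M))
diag_dominant = bool(np.all(np.diag(M) > row_offdiag))   # strict dominance
pos_definite = bool(np.min(np.linalg.eigvalsh(M)) > 0)   # hence M > 0

# Comparison bound with F the positive orthant, where P(Y in F) = 2^{-m}
bound = np.sqrt(np.linalg.det(Sigma_Y) / np.linalg.det(Sigma_X)) * 0.5 ** m
```

Strict diagonal dominance of a symmetric matrix with positive diagonal implies positive definiteness (Gershgorin), which is exactly the structure the sparseness of $E_{N,L}$ is designed to produce.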