High points for the membrane model in the critical dimension

In this note we study the fractal structure of the set of high points for the membrane model in the critical dimension d = 4. We compute the Hausdorff dimension of the set of points that are atypically high, as well as that of clusters of such points, showing that high points tend not to be spread evenly over the lattice. These results closely parallel those obtained by Olivier Daviaud for the two-dimensional discrete Gaussian Free Field.


The model
The field of random interfaces has been widely studied in statistical mechanics. Such interfaces are described by a family of real-valued random variables indexed by the d-dimensional integer lattice, viewed as a height configuration: they indicate the height of the interface above a reference hyperplane. The probability of a configuration depends on its energy (the Hamiltonian), which defines a measure on the space of such configurations. The best-known models are the so-called gradient models, in particular the Discrete Gaussian Free Field (DGFF), or harmonic crystal, whose Hamiltonian is a function of the discrete gradient of the heights, and the membrane model. The study of the latter interface was first undertaken by Sakagawa in [Sak03]; we also mention the contributions of Kurt ([Kur09], [Kur07]), concerning among other things the phenomenon of entropic repulsion in dimension 4. The membrane model is a Gaussian random field whose Hamiltonian depends on the mean curvature of the interface; in particular, it favors configurations whose curvature is approximately constant. It is a lattice-based scalar field {φ_x}_{x∈Z^d}, where φ_x ∈ R is viewed as a height variable at the site x of the lattice. There are three convenient and equivalent ways in which one can see such a field. Denote by V_N := [−N, N]^d ∩ Z^d the centered box of side length 2N + 1. Then:

1. The membrane model is the random interface model whose distribution is given by

   P_N(dφ) = (1/Z_N) exp( −(1/2) Σ_{x∈Z^d} (Δφ_x)² ) Π_{x∈V_N} dφ_x Π_{x∈∂²V_N} δ_0(dφ_x),

   where Δ is the discrete Laplacian, ∂²V_N := {y ∈ V_N^c : d(y, V_N) ≤ 2} and Z_N is the normalizing constant.
2. By re-summation, the law P_N of the field is the law of the centered Gaussian field on V_N with covariance matrix G_N(x, y) := Cov_N(φ_x, φ_y) = (Δ²_N)^{−1}(x, y). Here Δ²_N(x, y) := Δ²(x, y) 1_{{x,y∈V_N}} is the Bilaplacian with 0-boundary conditions outside V_N.

3. The model is a centered Gaussian field on V_N whose covariance matrix G_N satisfies, for x ∈ V_N,

   Δ² G_N(x, y) = δ_{xy},   y ∈ V_N,
   G_N(x, y) = 0,           y ∈ ∂²V_N.
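As an illustration of definitions 2 and 3 above, the following sketch builds the Bilaplacian with 0-boundary conditions and samples the corresponding Gaussian field. It is a one-dimensional toy version, not the d = 4 case studied here; the grid size is an arbitrary choice of ours, the point being only the structure covariance = (Δ²_N)^{−1}:

```python
import numpy as np

def bilaplacian(n):
    """Discrete Bilaplacian on n sites with 0-boundary conditions outside
    the box, i.e. the matrix Delta^2(x, y) 1{x, y in V_N}.
    In one dimension Delta^2 has stencil (1, -4, 6, -4, 1)."""
    stencil = {-2: 1.0, -1: -4.0, 0: 6.0, 1: -4.0, 2: 1.0}
    A = np.zeros((n, n))
    for i in range(n):
        for k, v in stencil.items():
            if 0 <= i + k < n:
                A[i, i + k] = v
    return A

n = 41                        # toy box with N = 20, side length 2N + 1
A = bilaplacian(n)
G = np.linalg.inv(A)          # covariance matrix G_N = (Delta^2_N)^{-1}

rng = np.random.default_rng(0)
phi = np.linalg.cholesky(G) @ rng.standard_normal(n)  # one sample of the field
```

The restriction of Δ² to the box is positive definite (it is the quadratic form ‖Δφ‖² over fields vanishing outside the box), so the inverse is a legitimate covariance matrix and the Cholesky factorization succeeds.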
For d ≥ 5 the infinite volume Gibbs measure P exists [Kur08, Prop. 1.2.3] and is the law of the centered Gaussian field with covariance matrix G(x, y) = Δ^{−2}(x, y).
The membrane model shares several features with the better-known DGFF, but also presents challenging differences. In particular, the former lacks two key tools available for the latter:

1. The random walk representation of the Green's function. In the harmonic crystal it is possible to establish the well-known relation for the covariance matrix Γ_N (up to a normalization depending on the convention for the Laplacian):

   Γ_N(x, y) = E_x[ Σ_{n=0}^{τ_{∂V_N} − 1} 1_{{S_n = y}} ],

   where E_x is the law of a simple random walk (S_n)_{n≥0} started at x ∈ Z² and τ_{∂V_N} is the first exit time from V_N.
2. Monotonicity properties, for example the FKG inequality.
It is thus not possible to rely on harmonic analysis to control the field, and this renders many problems solved for the harmonic crystal quite intractable. Despite the lack of such tools, two crucial properties suffice for the study of high points: one is the logarithmic bound on the covariances, which is established in Lemma 2.1, and the other one is the 2-Markov property, which can be stated as follows: let A, B ⊆ Z^4 be such that d(A, B) > 2. Then {φ_x}_{x∈A} and {φ_x}_{x∈B} are independent under the conditional law given {φ_z : z ∉ A ∪ B}. This suggests that the behavior of certain Gaussian fields with respect to exceedances is universal, in the sense that as soon as a model displays a Gibbs-Markov property and its covariances decay at the same rate, the behavior of the high points is the same (with some small adjustments according to the dimension). This also opens up the question of whether there are other points in common between log-correlated Gaussian fields, and we believe a more precise answer will be given soon. The starting point is understanding how many "high" points, viz. points that grow more than the average, there typically are. The first step is to find the average height of the maximum of the field, in other words to show that there exists a constant c > 0 such that max_{x∈V_N} φ_x ≈ c log N.

Theorem 1.2 ([Kur09, Theorem 1.2]). Let d = 4, ℓ ∈ (0, 1), and let g := 8/π². Then

   lim_{N→∞} max_{x∈V_{ℓN}} φ_x / log N = 2√(2g)   in probability.

Roughly speaking, the first-order approximation of the maximum is of order log N. This matches the maximum of N^4 i.i.d. centered Gaussians of variance g log N, which grows like √(2 · g log N · log N^4) = 2√(2g) log N, so to first order the field behaves approximately like a collection of independent variables. For us, an α-high point will then be a point whose height is greater than 2√(2g) α log N. The behavior of α-high points for the two-dimensional DGFF, as shown in [Dav06], tells us that such points exhibit a fractal structure. Very similar results were obtained by Dembo, Peres, Rosen and Zeitouni in [DPRZ06] for the set of late points of the two-dimensional simple random walk.
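The first-order comparison with independent variables can be made concrete numerically: with g = 8/π², the maximum of N^4 i.i.d. N(0, g log N) variables grows like 2√(2g) log N, the constant of Theorem 1.2. A small sketch (the value of N is ours, for illustration):

```python
import math

g = 8 / math.pi**2           # variance growth constant in d = 4

def high_threshold(alpha, N):
    """Height above which a site counts as an alpha-high point."""
    return 2 * math.sqrt(2 * g) * alpha * math.log(N)

def iid_max_heuristic(N):
    """First-order maximum of N^4 i.i.d. N(0, g log N) variables:
    sqrt(2 * variance * log(number of variables))."""
    return math.sqrt(2 * g * math.log(N) * math.log(N**4))

N = 10**6
# the i.i.d. heuristic reproduces the constant 2 sqrt(2g) of Theorem 1.2
ratio = iid_max_heuristic(N) / (2 * math.sqrt(2 * g) * math.log(N))
```

Here `ratio` equals 1 up to floating-point error, since √(2 · g log N · 4 log N) = 2√(2g) log N exactly.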
To begin with, we recall the definition of the discrete fractal dimension. The fractal dimension of the high points is then given as follows. Let H_N(η) := |{x ∈ V_N : φ_x ≥ 2√(2g) η log N}| denote the number of η-high points.

(a) For 0 < η < 1 we obtain the following limit in probability:

   lim_{N→∞} log H_N(η) / log N = 4(1 − η²).

(b) For all δ > 0 there exists a constant C > 0 such that for N large:

We can push further the comparison between the DGFF and the membrane model at their respective critical dimensions, and one finds an interesting similarity in the behavior of the high points. [Dav06], for example, also showed that high points appear in clusters; the same occurs in the membrane model, as the following two theorems show.

Theorem 1.5 (Cluster of high points 1). For 0 < α < β < 1 and δ > 0:

Theorem 1.6 (Cluster of high points 2). For 0 < α < 1, 0 < β < 1 and δ > 0 we have:

It is also possible to evaluate the average number of pairs of high points, as in the following theorem.

Theorem 1.7 (Pairs of high points). Let 0 < α < 1, 0 < β < 1, and let:

Note that Γ_{α,β} = [0, 1/α] is independent of β. Then the following limit in probability holds:

Finally, we can also determine the maximal width of a spike of given height.

Theorem 1.8 (The biggest high square). Let −1 < η < 1 and let D_N(η) be the side length of the biggest sub-box on which all height variables are uniformly greater than 2√(2g) η log N, i.e.:
Then the following limit holds in probability:

The paper is organized as follows: in Section 2 we prove some preliminary results that will be used in the proofs of the main theorems, to which Section 3 is devoted.

Preliminary Lemmas and results
Notation. Open (resp. closed) Euclidean balls of center x and radius a are denoted in the standard way, while B(x, a) is the box centered at x with side length a.
For the rest of this note, recall the definition (1.3), and fix once and for all ℓ ∈ (0, 1/2). Let x_0 ∈ V_N and define:

We denote by x_B the center of a (sub-)box B, and by Π_α the union of the sub-boxes of side length N^α (ignoring discretization issues) with midpoint in M_α. F_α will be the sigma-algebra generated by {φ_x} for x ∈ ⋃_{B∈Π_α} ∂²B. In practice, Π_α is a set of disjoint boxes separated by layers of thickness 2, which, thanks to the 2-Markov property, will enable us to perform a decomposition procedure on these sets.
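The decomposition into well-separated sub-boxes can be illustrated as follows. This is a one-dimensional toy sketch of ours (the grid and spacing are arbitrary choices); the point is only that distinct boxes are separated by a layer of thickness at least 2, so that by the 2-Markov property the restrictions of the field to distinct boxes are conditionally independent given the values on the double boundaries:

```python
def subboxes(N, side):
    """Disjoint sub-intervals of length `side` inside [-N, N], pairwise
    separated by gaps of exactly 2 sites (toy 1-d analogue of Pi_alpha)."""
    boxes = []
    lo = -N
    while lo + side - 1 <= N:
        boxes.append((lo, lo + side - 1))
        lo += side + 2          # leave a separating layer of thickness 2
    return boxes

boxes = subboxes(20, 5)
# number of empty sites between consecutive boxes
gaps = [b2[0] - b1[1] - 1 for b1, b2 in zip(boxes, boxes[1:])]
```

For instance, `subboxes(20, 5)` tiles [-20, 20] with intervals of length 5 whose pairwise gaps all have thickness 2.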

The function G_N(·, ·)
In order to prove some of the next results we will introduce the convolution of the harmonic Green's function, which will prove to be a key tool to obtain the crucial estimates on the covariances of our model. Let A be an arbitrary subset of Z^4, and for x ∈ A let Γ_A(x, ·) be the solution of the discrete boundary value problem

   ΔΓ_A(x, y) = −δ_{xy},   y ∈ A,
   Γ_A(x, y) = 0,          y ∉ A.

Note that Γ_N as in (1.2) is the unique solution to the above problem for A = V_N. [Kur09] contains several bounds and properties of this function, and we recall here those that we are going to use in the sequel. With this in mind it is now easier for us to show how to bound the variances and covariances of our field.
Lemma 2.1 (Bounds on the variances). Let d = 4 and 0 < δ < 1. Then:

It is therefore sufficient to show that (2.4) and (2.5) hold for G(·, ·). But we have, from [Kur09, Lemma 2.10], that there exists a constant K such that in d = 4, for x ≠ y and all α ∈ (0, 2):

Hence:

The other bound follows similarly by considering (2.5).
Next we give a decomposition of the field which is similar to the one existing for the DGFF (see for example [Szn12, Section 2.1]). With this in mind, we can prove that, conditionally on the values the field assumes on the double boundary of a subset of V_N ⊆ Z^4 (in fact, of any subset of Z^d), the resulting field is again the membrane model restricted to the interior of the smaller domain.
We have to show that the above results hold.
(a) It is clear from the definition.
In other words, P_{A,η} is a Gaussian distribution with covariance matrix
For (x, y) ∈ S define T(x, y) as the set of sub-boxes of side length 2N^β such that the centered sub-box of side length N^β contains x and y. Then we can find C, ε_0 > 0 such that for ε ≤ ε_0 and all N:

   max_{x,y∈S} Σ_{B∈T(x,y)}

Moreover, ε_0 can be chosen uniformly in (α, β) on compact sets of (0, 1)².

Proof. Define
We distinguish two cases:

Furthermore we have the usual decomposition:

Define the auxiliary function f(a, b, β). We use these bounds in (2.10) to obtain:

By the equality 2a + b = γβ:

Hence:

Finally notice that:

This allows us to conclude the proof.
Finally we would like to recall

Lemma 2.7 ([Kur09, Lemma 2.11])
Let 0 < n < N, let A_N ⊆ Z^4 be a box of side length N and A_n ⊆ A_N a box of side length n. Let 0 < ε < 1/2. There exists C > 0 such that for all x ∈ A_n with |x − x_B| < εn:

Five theorems
Proof of Theorem 1.4. The core of the proof is the lower bound (b), which was already proved in [Kur09, Theorem 1.3] and is based on the hierarchical decomposition of the membrane model, similar to that of the DGFF (for the main idea supporting the proof we also refer to [BDG01]). We show here, for the reader's convenience, the upper bound, in order to obtain the desired limit in probability.

Proof of Theorem 1.4 (a).
For any δ > 0 one can apply Chebyshev's inequality to get:

where we have also used Lemma 2.1.
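The first-moment computation behind this bound can be sketched as follows: for a centered Gaussian with variance of order g log N, the tail probability at level 2√(2g) η log N is about exp(−4η² log N) = N^{−4η²}, so the expected number of η-high points among the N^4-order sites is about N^{4(1−η²)}, consistent with the exponent of Theorem 1.4 (a). A toy sketch, assuming the standard Gaussian tail asymptotics exp(−t²/2σ²) and ignoring constants:

```python
import math

g = 8 / math.pi**2

def log_tail_exponent(eta, N):
    """log P(phi_x >= 2 sqrt(2g) eta log N) / log N for phi_x ~ N(0, g log N),
    using the Gaussian tail asymptotics -t^2 / (2 sigma^2)."""
    t = 2 * math.sqrt(2 * g) * eta * math.log(N)
    sigma2 = g * math.log(N)
    return -(t**2) / (2 * sigma2) / math.log(N)

def mean_exponent(eta, N):
    """Exponent of E H_N(eta): 4 from the volume plus the tail exponent."""
    return 4 + log_tail_exponent(eta, N)
```

The algebra collapses to −4η² for the tail, hence 4(1 − η²) for the expected count; Chebyshev (Markov) then turns the first moment into the upper bound in probability.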
Proof of Theorem 1.5. We choose η, δ > 0 and, for an ε > 0 to be fixed later, define:

and so P(A) = O(N^{4β} exp(−c log² N)) tends to 0. Furthermore, P(D_+^c) also tends to 0 by virtue of the bounds on the covariances and (2.8). We then have:

By tuning the parameters, N large enough and η, ε small enough, we can obtain the claim (roughly speaking, we have ε′ ≈ α(1 − β)); by Theorem 1.4 the claim then follows. We now turn to the lower bound, whose proof is similar in spirit to that of the upper bound. Setting:

we also define:

We observe that:

where ε′ satisfies:

and we conclude as before.
Proof of Theorem 1.6. We will use the notation b_±(α, β, η, N) as in the proof of Lemma 2.5. We will also introduce the following quantities: let B := B(x, 4N^β), and for η, δ > 0:

We know, by the bounds (2.2) and (2.9):

Therefore we can find d > 0 such that P(F)/P(G) ≤ exp(d log N), and to show the result it suffices to prove that P(E | F) ≤ exp(−c log² N) for a positive c. For this purpose define:

From Lemma 2.7 it follows that P(A) ≤ exp(−c log² N) for some c > 0, and from (2.9) that P(F) ≥ exp(−d log N) for some d > 0; all in all, P(A | F) ≤ exp(−O(log² N)). So we can write:

If we are on A^c ∩ F, then:

where ε′ is such that:

From Theorem 1.4 we know that (3.1) is bounded from above by exp(−c log² N) for a constant c > 0, provided that ε′ is small (which can be achieved by taking η and ε small and N large).
Upper bound. Let K ∈ N and β_j := (j/K)β, 1 ≤ j ≤ K. Then let:

as soon as N is large. It is then sufficient to prove that for all i:

We can consider β_j's for which the above holds, and take b_+(α, β_j, η, N) as above. By Lemma 2.5 we obtain:

By the bounds on the covariance and on the normal distribution we have, for N large:

By Lemma 2.6, defining γ* = 2/(2 − β_j) > 1, for η small and K large we obtain (3.5). Inserting (3.4) and (3.5) in (3.3) we obtain:

Proof of Theorem 1.7. As a preliminary, we would like to make some observations. It holds that ρ(α, β) is positive; in particular, (3.6) holds. Indeed, (3.6) derives from the fact that F_{2,β}(γ) has a unique global minimum at 1 in the range γ ∈ Γ_{α,β}. Moreover, notice that ρ(α, β) is increasing in β. If we let γ_m be the minimizer of F_{2,β}(γ) over γ ∈ Γ_{α,β}, γ* the minimizer over γ ≥ 0, and γ_+ := sup Γ_{α,β}, we have γ* ≤ γ_m ≤ γ_+; moreover, since F_{h,β}(·) does not depend on α and Γ_{α,β} does not depend on β, we have γ_m = min{γ*, γ_+}. We are now ready to prove the lower and upper bounds.
Lower bound. We set:

On D we have at least N^{m_γ − δ/2} boxes B_j. Set:

We observe:

Let us now put, for some arbitrary η > 0:

The idea is to rescale the box: we now take the box of mesh N/N^β, with grid given by {x_B : B ∈ Π_β}. In this way Theorem 1.4 tells us that H_{N^{1−β}}(γα) ≈ N^{4(1−β)(1−γ²α²)} = N^{m_γ}.
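The rescaling step is just exponent bookkeeping: applying the exponent 4(1 − η²) of Theorem 1.4 in a box of side M = N^{1−β} at level η = γα gives M^{4(1−γ²α²)} = N^{4(1−β)(1−γ²α²)}, i.e. N^{m_γ}. A minimal check of this algebra (the sample values of α, β, γ are ours):

```python
def high_point_exponent(eta):
    """Exponent from Theorem 1.4: H_M(eta) is about M^{4(1 - eta^2)}."""
    return 4 * (1 - eta**2)

def m_gamma(alpha, beta, gamma):
    """Exponent of H_{N^{1-beta}}(gamma * alpha) in terms of N:
    (1 - beta) * 4 (1 - gamma^2 alpha^2)."""
    return (1 - beta) * high_point_exponent(gamma * alpha)

m = m_gamma(0.5, 0.3, 0.8)      # sample values of alpha, beta, gamma
```

For these values, m = 0.7 · 4 · (1 − 0.16) = 2.352, the exponent of the expected number of (γα)-high grid points in the rescaled box.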
As before, P(A) = o(1) as N → +∞. Plugging this in, exactly as in the proof of Theorem 1.5:

Finally we observe that:

which is exp(−O(log² N)) by Theorem 1.4 for η small enough, as we have already seen. Hence P(C ∩ D) = o(1), and we conclude the proof.
Upper bound. By Theorem 1.4 we see that for λ > 0 the number of α-high points within distance N^{λβ} is at most N^{4(1−α²)+4λβ}. Together with (3.6) we have:

Therefore, when this condition is not satisfied, it is enough to show that there exists h = h(δ) < 1 such that for all β′ ∈ [β(1 − α²), β]:

We treat separately the two cases γ* ≤ γ_m and γ* > γ_m. In the first case, by Chebyshev's inequality:

where we have used the assumption that h is close to 1 and Lemma 2.4. In the case γ* > γ_m, we construct for each B ∈ Π_{β′} a bigger box of side 4N^{β′} by juxtaposing to it the 12 adjacent sub-boxes of the same side length. We call B the set of such bigger boxes, and for each B′ ∈ B we center at x_{B′} a box of twice the volume of B′. The latter boxes belong to a new set named C. We remark that all pairs of points within distance N^{β′} must belong to at least one B′ ∈ B. For ε > 0 set:

By Lemma 2.1 and the fact that {φ_y : y ∈ B} with boundary conditions on ∂²B is a Gaussian field:

Proof of Theorem 1.8. Lower bound. We recall the notation used in the proof of Theorem 1.4 by N. Kurt. For α ∈ (1/2, 1) we choose 1 ≤ k ≤ K + 1 such that the following holds (δ should be thought of as small):

Let us now define recursively Γ_{α_1} := Π_{α_1}. Then for i ≥ 2 we set Γ_{α_i} as follows:

We re-use the notation B^{(k)} for a sequence of boxes, and denote the biggest box of B^{(k)} by B_{1,k}. Let B be a box whose side length is given in terms of the constant κ appearing in [Kur07, Lemma 3.2]. Define moreover, for ε > 0:

By Lemma 2.7, P(A^c) → 1 and P(C_k) → 1 as in Theorem 1.4 (C_k is the same event). So:

where in the latter inequality we used the fact that V^{1/4}_{N^{α_k}} ⊇ B. If

   2√(2g) log N (−η + (α − α_k)(1 − γ_K)(1 − ε)) > 2√(2g) α_k log N,   (3.8)

then, thanks to Theorem 1.4, this probability tends to 0 for N large. But (3.7) and (3.8) give rise to a system of equations which has a solution for K and N large, α close to 1 and ε small, when 1/2 + (η/2)(k/K) < η/2 + δ + 1/2.

   ≤ o(1) + E(P(C | F_β) 1_F).
If B ∈ Π_β, we denote by B^{(1/4)} the sub-box B(x_B, N^β/2). Choose ε > 0 and define:

With Lemma 2.7 we obtain that P(A) tends to 0, as in Theorem 1.5. We can further bound:

To proceed we notice that