A note on the extremal process of the supercritical Gaussian Free Field

We consider both the infinite-volume discrete Gaussian Free Field (DGFF) and the DGFF with zero boundary conditions outside a finite box in dimension greater than or equal to 3. We show that the associated extremal process converges to a Poisson point process. The result follows from an application of the Stein-Chen method of Arratia et al. (1989).


Introduction
In this article we study the behavior of the extremal process of the DGFF in dimension $d \ge 3$. This extends the result presented in [9], in which the convergence of the rescaled maximum of the infinite-volume DGFF and of the field with zero boundary conditions was shown. It was proved there that the field belongs to the maximal domain of attraction of the Gumbel distribution; hence, a natural question that arises is that of describing its extremal points more precisely. In dimension 2, this was carried out by [6, 7], complementing a result of [8] on the convergence of the maximum; namely, the characterization of the limiting point process with a random mean measure yields as a by-product an integral representation of the maximum. The extremes of the DGFF in dimension 2 have deep connections with those of Branching Brownian Motion ([1, 2, 3, 4]). These works showed that the limiting point process is a randomly shifted decorated Poisson point process, and we refer to [15] for structural details. In $d \ge 3$, one does not get a non-trivial decoration but instead a Poisson point process, analogous to the extremal process of independent Gaussian random variables.

To be more precise, we let $E := [0,1]^d \times (-\infty, +\infty]$ and $V_N := [0, n-1]^d \cap \mathbb{Z}^d$ the hypercube of volume $N = n^d$. Let $(\varphi_\alpha)_{\alpha \in \mathbb{Z}^d}$ be the infinite-volume DGFF, that is, a centered Gaussian field on $\mathbb{Z}^d$ with covariance $g(\cdot,\cdot)$, where $g$ is the Green's function of the simple random walk. We define the following sequence of point processes on $E$:
$$ \eta_n := \sum_{\alpha \in V_N} \varepsilon_{\left(\alpha/n,\; (\varphi_\alpha - b_N)/a_N\right)}, $$
where $\varepsilon_x(\cdot)$, $x \in E$, is the point measure that gives mass one to a set containing $x$ and zero otherwise, and $a_N$, $b_N$ are the Gumbel normalizing constants for a centered Gaussian with variance $g(0)$; here $g(0)$ denotes the variance of the DGFF. Our main result, Theorem 1.1, states that $\eta_n$ converges weakly to a Poisson point process $\eta$ on $E$ with intensity measure $\mathrm{d}t \otimes e^{-x}\,\mathrm{d}x$. The proof is based on the application of the two-moment method of [5], which allows us to compare the extremal process of the DGFF with a Poisson point process with the same mean measure. To prove that the two processes converge, we will exploit a classical theorem by Kallenberg. It is then natural to consider also convergence for the DGFF $(\psi_\alpha)_{\alpha \in \mathbb{Z}^d}$ with zero boundary conditions outside $V_N$; for the corresponding sequence of point measures we establish the analogous convergence in Theorem 1.2.

The outline of the paper is as follows. In Section 2 we recall the definition of the DGFF and the Stein-Chen method, while Sections 3 and 4 are devoted to the proofs of Theorems 1.1 and 1.2 respectively.
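The precise normalizing sequences are not reproduced here; as a point of reference, a standard choice for a centered Gaussian with variance $g(0)$ (which the constants used in this paper presumably match up to lower-order corrections) is
$$ b_N := \sqrt{g(0)}\left(\sqrt{2\log N} - \frac{\log\log N + \log(4\pi)}{2\sqrt{2\log N}}\right), \qquad a_N := \frac{g(0)}{b_N}, $$
for which $N\,P(\varphi_0 > a_N x + b_N) \to e^{-x}$ for every fixed $x \in \mathbb{R}$.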

The DGFF
Let $d \ge 3$ and denote by $\|\cdot\|$ the $\ell^\infty$-norm on $\mathbb{Z}^d$. Let $\psi = (\psi_\alpha)_{\alpha \in \mathbb{Z}^d}$ be a discrete Gaussian Free Field with zero boundary conditions outside $\Lambda \subset \mathbb{Z}^d$: on the space $\Omega := \mathbb{R}^{\mathbb{Z}^d}$ endowed with its product topology, its law $P_\Lambda$ can be written explicitly as a Gaussian Gibbs measure with nearest-neighbor gradient interaction. In other words, $\psi_\alpha = 0$ $P_\Lambda$-a.s. if $\alpha \in \mathbb{Z}^d \setminus \Lambda$, and $(\psi_\alpha)_{\alpha \in \Lambda}$ is a multivariate Gaussian random variable with mean zero and covariance $(g_\Lambda(\alpha, \beta))_{\alpha, \beta \in \mathbb{Z}^d}$, where $g_\Lambda$ is the Green's function of the simple random walk killed upon exiting $\Lambda$. For a thorough review of the model the reader can refer, for example, to [16].

It is known [10, Chapter 13] that the finite-volume field $\psi$ admits an infinite-volume limit as $\Lambda \uparrow \mathbb{Z}^d$ in the weak topology of probability measures. This field will be denoted by $\varphi = (\varphi_\alpha)_{\alpha \in \mathbb{Z}^d}$. It is a centered Gaussian field with covariance $g(\alpha, \beta)$ for $\alpha, \beta \in \mathbb{Z}^d$. With a slight abuse of notation, we write $g(\alpha - \beta)$ for $g(0, \alpha - \beta)$ and also $g_\Lambda(\alpha) = g_\Lambda(\alpha, \alpha)$. The function $g$ admits a so-called random walk representation: if $P_\alpha$ denotes the law of a simple random walk $S$ started at $\alpha \in \mathbb{Z}^d$, then
$$ g(\alpha, \beta) = \sum_{n \ge 0} P_\alpha(S_n = \beta). $$
In particular this gives $g(0) < +\infty$ for $d \ge 3$. A comparison of the covariances in the infinite and finite volume is possible in the bulk of $V_N$: for $\delta > 0$ this is defined as the set $V_N^\delta$ of points of $V_N$ at distance at least $\delta n$ from the boundary of $V_N$. In order to compare covariances of the finite- and infinite-volume fields, we recall the following lemma (Lemma 2.1), whose proof is presented in [9, Lemma 7].
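As a side remark (a standard fact about the simple random walk, recorded only to explain the finiteness of $g(0)$ and to anticipate the constant $\kappa$ of Section 3): the random walk representation gives
$$ g(0) = E_0\big[\#\{n \ge 0 : S_n = 0\}\big] = \frac{1}{P_0(H_0 = +\infty)}, \qquad H_0 := \inf\{n \ge 1 : S_n = 0\}, $$
which is finite precisely because the simple random walk is transient in $d \ge 3$; in the notation of Section 3 this reads $g(0) = 1/\kappa$.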

The Stein-Chen method
As the main tool of this article we will use (and restate here) a theorem from [5]. Consider a sequence of Bernoulli random variables $(X_\alpha)_{\alpha \in I}$, where $X_\alpha \sim \mathrm{Be}(p_\alpha)$ and $I$ is some index set. For each $\alpha$ we define a subset $B_\alpha \subseteq I$, which we regard as a "neighborhood" of dependence for the variable $X_\alpha$, such that $X_\alpha$ is nearly independent of $X_\beta$ if $\beta \in I \setminus B_\alpha$.

Theorem 2.2 ([5, Theorem 2]). Let $I$ be an index set, partitioned into disjoint non-empty sets $I_1, \dots, I_k$. Let $(X_\alpha)_{\alpha \in I}$ be a dependent Bernoulli process with parameters $p_\alpha$, and let $(Y_\alpha)_{\alpha \in I}$ be independent Poisson random variables with intensities $p_\alpha$. Set $W_j := \sum_{\alpha \in I_j} X_\alpha$ and $Z_j := \sum_{\alpha \in I_j} Y_\alpha$ for $1 \le j \le k$, and define the error terms $b_1$, $b_2$, $b_3$ as in [5]. Then the total variation bound (2.3) on $\|\mathcal{L}(W_1, \dots, W_k) - \mathcal{L}(Z_1, \dots, Z_k)\|_{TV}$ holds, where $\|\cdot\|_{TV}$ denotes the total variation distance and $\mathcal{L}(W_1, \dots, W_k)$ denotes the joint law of these random variables.
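For the reader's convenience we recall how the error terms are commonly written in the notation of [5] (the exact constants in the bound (2.3) are as in [5, Theorem 2]; the sketch below may differ from the original display in inessential ways):
$$ b_1 := \sum_{\alpha \in I} \sum_{\beta \in B_\alpha} p_\alpha\, p_\beta, \qquad b_2 := \sum_{\alpha \in I} \sum_{\beta \in B_\alpha \setminus \{\alpha\}} E[X_\alpha X_\beta], \qquad b_3 := \sum_{\alpha \in I} E\Big|\,E\big[X_\alpha - p_\alpha \mid \sigma(X_\beta : \beta \in I \setminus B_\alpha)\big]\Big|, $$
and (2.3) bounds $\|\mathcal{L}(W_1, \dots, W_k) - \mathcal{L}(Z_1, \dots, Z_k)\|_{TV}$ by a universal constant times $b_1 + b_2 + b_3$.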

Proof of Theorem 1.1: the infinite-volume case
To show the convergence of $\eta_n$ to $\eta$, we will exploit Kallenberg's theorem [11, Theorem 4.7].
According to it, we need to verify the following two conditions (we adopt the convention $e^{-\infty} = 0$ and write $|A|$ for the Lebesgue measure of $A$):

i) for every rectangle $A \subseteq [0,1]^d$ and every $-\infty < x < y \le +\infty$,
$$ E\big[\eta_n(A \times (x, y])\big] \longrightarrow |A|\,\big(e^{-x} - e^{-y}\big), \qquad n \to +\infty; $$

ii) for all $k \ge 1$, and $A_1, A_2, \dots, A_k$ disjoint rectangles in $[0,1]^d$ and $R_1, R_2, \dots, R_k$, each of which is a finite union of disjoint intervals of the type $(x, y] \subset (-\infty, +\infty]$, the avoidance probabilities converge as stated in (3.1).

Let us denote $u_N(z) := a_N z + b_N$. The first condition follows from the Mills ratio bounds (3.2). More precisely, plugging the upper bounds of (3.2) into the expression (3.3) for the mean yields the upper bound (3.4); similarly, one can plug into (3.3) the reverse bounds of (3.2) to prove the lower bound, and thus condition i).
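Since (3.2) is not displayed above, we record the standard Gaussian tail (Mills ratio) estimate it presumably refers to: for a centered Gaussian $\varphi_0$ with variance $g(0)$ and $t > 0$,
$$ \left(\frac{1}{t} - \frac{g(0)}{t^{3}}\right) \frac{\sqrt{g(0)}}{\sqrt{2\pi}}\; e^{-t^{2}/(2g(0))} \;\le\; P(\varphi_0 > t) \;\le\; \frac{1}{t}\, \frac{\sqrt{g(0)}}{\sqrt{2\pi}}\; e^{-t^{2}/(2g(0))}. $$
With $u_N(x) = a_N x + b_N$ this yields $N\,P(\varphi_0 > u_N(x)) \to e^{-x}$, and summing over the $|nA \cap V_N| \sim |A|\,N$ points of $nA \cap V_N$ gives the limit in condition i).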
To show ii), we need a few more details. Let $k \ge 1$, $A_1, \dots, A_k$ and $R_1, \dots, R_k$ be as in the assumptions. Let us denote $I_j := nA_j \cap V_N$ and $I := I_1 \cup \dots \cup I_k$, and for $\alpha \in I_j$ set $X_\alpha := \mathbf{1}\{(\varphi_\alpha - b_N)/a_N \in R_j\}$, a Bernoulli variable with parameter $p_\alpha := P\big((\varphi_\alpha - b_N)/a_N \in R_j\big)$. Choose now a small $\epsilon > 0$ and fix the neighborhood of dependence $B_\alpha := B\big(\alpha, (\log N)^{2+2\epsilon}\big) \cap I$ for $\alpha \in I$, where $B(x, r)$ denotes the ball of radius $r$ centered at $x$. Let $W_j := \sum_{\alpha \in I_j} X_\alpha$ and $Z_j$ be as in Theorem 2.2.
By the simple observation that the probabilities in (3.1) can be written in terms of the events $\{W_j = 0\}$, to prove the convergence (3.1) we can use Theorem 2.2 and show that the error bound on the right-hand side of (2.3) goes to 0.

First we bound $b_1$ as follows. By definition of $R_1, R_2, \dots, R_k$, there exists $z \in \mathbb{R}$ such that $R_j \subset (z, +\infty]$ for $1 \le j \le k$. Hence for any $1 \le j \le k$ and any $\alpha \in I_j$ we have that $p_\alpha \le P(\varphi_\alpha > u_N(z))$. The bound is independent of $\alpha$ and $j$; therefore, for some $C > 0$, the estimate (3.5) on $b_1$ holds, and it vanishes as $N \to +\infty$.

For $b_2$ note that it was shown in [9] that for $z \in \mathbb{R}$ and $\alpha \neq \beta \in V_N$ the joint tail bound (3.6) holds. Here we have introduced $\kappa := P_0(H_0 = +\infty) \in (0, 1)$, where $H_0 := \inf\{n \ge 1 : S_n = 0\}$.
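Returning for a moment to $b_1$, a minimal sketch of the estimate behind (3.5) (the exponents in the original display may be stated slightly differently): using $p_\alpha \le P(\varphi_\alpha > u_N(z)) \le C e^{-z}/N$, $|I| \le N$ and $|B_\alpha| \le C (\log N)^{d(2+2\epsilon)}$,
$$ b_1 = \sum_{\alpha \in I} \sum_{\beta \in B_\alpha} p_\alpha\, p_\beta \;\le\; |I|\, \max_{\alpha \in I} |B_\alpha| \left(\frac{C e^{-z}}{N}\right)^{2} \;\le\; C'\, \frac{(\log N)^{d(2+2\epsilon)}}{N} \;\longrightarrow\; 0. $$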
Observe that for any $1 \le j \le k$, $\alpha \in I$ and $\beta \in B_\alpha$ one has $E[X_\alpha X_\beta] \le P\big(\varphi_\alpha > u_N(z),\, \varphi_\beta > u_N(z)\big)$, so that by (3.6) we can find some constant $C > 0$ for which the bound (3.7) on $b_2$ holds. The error is similar to the estimate obtained in [9, Equation (8)].

Finally we need to handle $b_3$. Following Section 2.2, we set for $\alpha \in I$, $\mathcal{H}_1 := \sigma(X_\beta : \beta \in I \setminus B_\alpha)$, and we define $\mathcal{H}_2 := \sigma(\varphi_\beta : \beta \in I \setminus B_\alpha)$. We observe that, by the Markov property of the DGFF,
$$ E[X_\alpha \mid \mathcal{H}_2] = P_{U_\alpha}\big(\psi_\alpha + \mu_\alpha \in u_N(R_j)\big) \qquad \text{a.s.,} $$
where $(\psi_\alpha)_{\alpha \in \mathbb{Z}^d}$ is a Gaussian Free Field with zero boundary conditions outside $U_\alpha := \mathbb{Z}^d \setminus (I \setminus B_\alpha)$ and $\mu_\alpha := E[\varphi_\alpha \mid \mathcal{H}_2]$, which can be expressed via the walk $S$ at its entrance time in $I \setminus B_\alpha$. Here $H_\Lambda := \inf\{n \ge 0 : S_n \in \Lambda\}$, $\Lambda \subset \mathbb{Z}^d$. Now, as in [9, Equation (10)], one can show, using the Markov property, that the probability of the event $\{|\mu_\alpha| > (u_N(z))^{-1-\epsilon}\}$ is controlled by an estimate with a constant $c > 0$ independent of $\alpha$ and $j$. Recalling that $R_j \subset (z, +\infty]$ for all $1 \le j \le k$, this immediately shows that for $d \ge 3$ the contribution of this event to $b_3$ vanishes, so that it suffices to bound (3.8), namely the double sum over $j$ and $\alpha \in I_j$ of the difference $P\big((\varphi_\alpha - b_N)/a_N \in R_j\big) - P_{U_\alpha}\big((\psi_\alpha + \mu_\alpha - b_N)/a_N \in R_j\big)$, taken in absolute value on the event $\{|\mu_\alpha| \le (u_N(z))^{-1-\epsilon}\}$.

We now focus on the term inside the summation. For this, first we write $R_j = \bigcup_{l=1}^{m} (w_l, r_l]$ with $-\infty < w_1 < r_1 < w_2 < \dots < r_m \le +\infty$ for some $m \ge 1$. Hence we can expand the difference in the absolute value of (3.8) as in (3.9), a telescoping sum over the endpoints $w_l$, $r_l$ (if $r_l = +\infty$ for some $l$, we conventionally set $P(\varphi_\alpha > u_N(r_l)) = 0$, and similarly for the other summand); a sketch of this expansion is given at the end of the section. Using the triangle inequality in (3.8), it turns out that to finish it is enough to show that for an arbitrary $w \in \mathbb{R}$,
$$ \sum_{\alpha \in I} E\Big[\big|P\big(\varphi_\alpha > u_N(w)\big) - P_{U_\alpha}\big(\psi_\alpha + \mu_\alpha > u_N(w)\big)\big|\;\mathbf{1}_{\{|\mu_\alpha| \le (u_N(z))^{-1-\epsilon}\}}\Big] \longrightarrow 0. \qquad (3.10) $$
For this, first we show the required estimate on the event
$$ Q := \big\{ P(\varphi_\alpha > u_N(w)) > P_{U_\alpha}(\psi_\alpha + \mu_\alpha > u_N(w)) \big\}. \qquad (3.11) $$
This follows from the same estimates as those for $T_{1,2}$ and Claim 6 of [9], applied on $Q \cap \{|\mu_\alpha| \le (u_N(z))^{-1-\epsilon}\}$. Similarly one can show the corresponding bound on the complementary event $Q^c$ (recall (3.11) for the definition of $Q$). This gives (3.10), and therefore $b_3 \to 0$.

Consequently, by Theorem 2.2 the avoidance probabilities of $\eta_n$ are, up to a vanishing error, those of the vector $(Z_1, \dots, Z_k)$, and they factorize, having used the independence of the $Z_j$'s. Notice that by definition $Z_j$ is a Poisson random variable with intensity $\sum_{\alpha \in I_j} P\big((\varphi_\alpha - b_N)/a_N \in R_j\big)$. Decomposing $R_j$ as a union of finite intervals and using the Mills ratio, similarly to the argument leading to (3.4), one obtains (3.12), which completes the proof of ii) and therefore of Theorem 1.1.
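For concreteness, the expansion referred to as (3.9) can be sketched as follows (assuming it takes the natural form obtained by writing each interval probability as a difference of two tail probabilities): with $R_j = \bigcup_{l=1}^{m} (w_l, r_l]$,
$$ P\Big(\tfrac{\varphi_\alpha - b_N}{a_N} \in R_j\Big) - P_{U_\alpha}\Big(\tfrac{\psi_\alpha + \mu_\alpha - b_N}{a_N} \in R_j\Big) = \sum_{l=1}^{m}\Big[\big(P(\varphi_\alpha > u_N(w_l)) - P_{U_\alpha}(\psi_\alpha + \mu_\alpha > u_N(w_l))\big) - \big(P(\varphi_\alpha > u_N(r_l)) - P_{U_\alpha}(\psi_\alpha + \mu_\alpha > u_N(r_l))\big)\Big], $$
so that by the triangle inequality the difference in (3.8) is controlled by $2m$ terms of the form appearing in (3.10).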

Proof of Theorem 1.2: the finite-volume case
We will now show the theorem for the field with zero boundary conditions. As remarked in the Introduction, since on the bulk defined in (2.1) we have good control on the conditioned field, we will first prove convergence there, and then use a converging-together theorem to achieve the final limit. We first need some notation used throughout the section: we consider $(\psi_\alpha)_{\alpha \in V_N}$ with law $P_N := P_{V_N}$, and we use the shortcut $g_N(\cdot, \cdot) = g_{V_N}(\cdot, \cdot)$. We will also need the notation $C_K^+(E)$ for the set of positive, continuous and compactly supported functions on $E = [0,1]^d \times (-\infty, +\infty]$. We begin with a lemma on the point process convergence in the bulk. Define the point process
$$ \rho_n^\delta := \sum_{\alpha \in V_N^\delta} \varepsilon_{\left(\alpha/n,\; (\psi_\alpha - b_N)/a_N\right)}; $$
the lemma states that $\rho_n^\delta$ converges weakly to a Poisson point process $\rho^\delta$ whose mean measure is identified in i) below.

Proof. We will show conditions i) and ii) of Section 3 (whose notation we borrow from now on).

i) We begin with an upper bound on $E_N\big[\rho_n^\delta(A \times (x, y])\big]$; see (4.1)–(4.2).

ii) To show the second condition we again use Theorem 2.2. Let $A_1, \dots, A_k$ and $R_1, \dots, R_k$ be as in the proof of Theorem 1.1. Let $I_j := nA_j \cap V_N^\delta$ and $I := I_1 \cup \dots \cup I_k$. For $\epsilon > 0$ we set $B_\alpha := B\big(\alpha, (\log N)^{2(1+\epsilon)}\big) \cap I$. Note that, albeit slightly different, we are using the same notation for the neighborhoods of dependence and the index sets as in Section 3; in the estimates we also use the fact that $g_N(\alpha) \le g(0)$. The bound on $b_1$ (cf. Theorem 2.2) follows exactly as in (3.5) and yields that, for some $C > 0$, the estimate (4.3) holds. The calculation of $b_2$ can be performed similarly, using the covariance matrix of the vector $(\psi_\alpha, \psi_\beta)$, $\alpha \neq \beta \in V_N^\delta$, and Lemma 2.1. This gives, for some $C$, the bound (4.4) (cf. [9, Equation (8)]). Note that the estimate for $b_2$ is exactly the same as in the infinite-volume case.
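Before passing to $b_3$, here is a minimal sketch of how the variance comparison $g_N(\alpha) \le g(0)$ enters (with $Z$ a standard Gaussian, a notation not used elsewhere in the paper): for $u_N(x) > 0$,
$$ P_N\big(\psi_\alpha > u_N(x)\big) = P\Big(Z > \frac{u_N(x)}{\sqrt{g_N(\alpha)}}\Big) \;\le\; P\Big(Z > \frac{u_N(x)}{\sqrt{g(0)}}\Big) = P\big(\varphi_0 > u_N(x)\big), $$
so that the finite-volume tails are dominated by the infinite-volume ones, which is what makes the first-moment bound and the bound on $p_\alpha$ comparable with those of Section 3.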
We now pass to $b_3$. We repeat our choice $\mathcal{H}_1 = \sigma(X_\beta : \beta \in I \setminus B_\alpha)$ and $\mathcal{H}_2 := \sigma(\psi_\beta : \beta \in I \setminus B_\alpha)$, and we define $U_\alpha := V_N \setminus (I \setminus B_\alpha)$. By the Markov property of the DGFF,
$$ E_N[X_\alpha \mid \mathcal{H}_2] = P_{U_\alpha}\big(\xi_\alpha + h_\alpha \in u_N(R_j)\big) \qquad P_N\text{-a.s.,} $$
for $(\xi_\alpha)_{\alpha \in \mathbb{Z}^d}$ a DGFF with law $P_{U_\alpha}$ and $(h_\alpha)_{\alpha \in \mathbb{Z}^d}$ independent of $\xi$. From [9, Equation (28)] we can see that, for any $\alpha \in V_N^\delta$ and $N$ large enough such that $B\big(\alpha, (\log N)^{2(1+\epsilon)}\big) \subset V_N$, one has a control on $h_\alpha$ analogous to the one obtained for $\mu_\alpha$ in Section 3; in particular the event $\{|h_\alpha| > (u_N(z))^{-1-\epsilon}\}$ gives a negligible contribution. One sees that the decomposition (3.9) can be performed here as well, replacing $\varphi_\alpha$ and $\psi_\alpha$ (with their laws) by $\psi_\alpha$ and $\xi_\alpha$ (with their laws) respectively, and $\mu_\alpha$ by $h_\alpha$. Accordingly, it is enough to show that
$$ \sum_{\alpha \in I} E_N\Big[\big|P_{U_\alpha}\big(\xi_\alpha + h_\alpha > u_N(w)\big) - P_N\big(\psi_\alpha > u_N(w)\big)\big|\;\mathbf{1}_{\{|h_\alpha| \le (u_N(z))^{-1-\epsilon}\}}\Big] \longrightarrow 0 \qquad (4.6) $$
for all $w \in \mathbb{R}$. To this aim, we choose for any $w \in \mathbb{R}$ the event $Q := \{P_N(\psi_\alpha > u_N(w)) > P_{U_\alpha}(\xi_\alpha + h_\alpha > u_N(w))\}$ and proceed as in (3.11), with the help of Lemma 2.1, to show (4.6). Given this, the convergence $b_3 \to 0$ is finally ensured. Hence we can conclude that the joint law of $(W_1, \dots, W_k)$ is well approximated, in total variation, by that of independent $(Z_1, \dots, Z_k)$, where $Z_j = \sum_{\alpha \in I_j} Y_\alpha$ and each $Y_\alpha$ is Poisson of mean $p_\alpha$. By the Mills ratio, as in (4.2), we see that the intensities of the $Z_j$'s converge to the appropriate limits. From this it follows that the two conditions i) and ii) of Kallenberg's theorem are satisfied, and thus we obtain the convergence to a Poisson point process with mean measure given in i).
Proof of Theorem 1.2. $M_p(E)$ is a Polish space with a metric $d_p$ defined through a sequence of functions $f_i \in C_K^+(E)$ (cf. [12, Section 3.3]). Therefore we are in a position to use a converging-together theorem [13, Theorem 3.5]: namely, to prove that $\rho_n \xrightarrow{d} \eta$ it is enough to show the following.

(a) $\rho_n^\delta \xrightarrow{d} \rho^\delta$ as $n \to +\infty$.
(b) $\rho^\delta \xrightarrow{d} \eta$ as $\delta \to 0$.

(c) For all $\varepsilon > 0$,
$$ \limsup_{\delta \to 0}\ \lim_{n \to +\infty}\ P_N\big(d_p(\rho_n, \rho_n^\delta) > \varepsilon\big) = 0. \qquad (4.7) $$

To show (b), one computes the Laplace functional of $\rho^\delta$ at $f \in C_K^+(E)$; by the dominated convergence theorem we can exchange limit and expectation as $\delta \to 0$ to obtain convergence to
$$ \exp\Big(-\int_{[0,1]^d}\int_{-\infty}^{+\infty} \big(1 - e^{-f(t, x)}\big)\, e^{-x}\,\mathrm{d}x\,\mathrm{d}t\Big), $$
and the right-hand side is the Laplace functional of $\eta$ at $f$. This shows (b).
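For completeness, we recall the standard formula behind the last identification: for a Poisson point process $\eta$ on $E$ with intensity measure $\lambda$ and $f \in C_K^+(E)$,
$$ E\big[e^{-\eta(f)}\big] = \exp\Big(-\int_E \big(1 - e^{-f}\big)\, \mathrm{d}\lambda\Big), $$
which for $\lambda = \mathrm{d}t \otimes e^{-x}\,\mathrm{d}x$ gives exactly the right-hand side above.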
Hence, to complete the proof it is enough to show (4.7). Thanks to the definition of the metric $d_p$, it suffices to prove that for $f \in C_K^+(E)$ and for $\varepsilon > 0$
$$ \limsup_{\delta \to 0}\ \lim_{n \to +\infty}\ P_N\big(\rho_n(f) - \rho_n^\delta(f) > \varepsilon\big) = 0. $$
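This reduction uses only the structure of the metric; presumably $d_p$ is the standard metric of [12, Section 3.3] metrizing vague convergence, of a form such as
$$ d_p(\mu, \nu) := \sum_{i=1}^{\infty} 2^{-i}\, \big(1 \wedge |\mu(f_i) - \nu(f_i)|\big), \qquad \mu, \nu \in M_p(E), $$
for a fixed suitable sequence $(f_i)_{i \ge 1} \subset C_K^+(E)$, so that a bound for each fixed $f = f_i$ suffices.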
Without loss of generality assume that the support of $f$ is contained in $[0,1]^d \times [z_0, +\infty)$ for some $z_0 \in \mathbb{R}$. Choosing $n$ large enough so that $u_N(z_0) > 0$, and using $g_N(\alpha) \le g(0)$, we obtain, as $n \to +\infty$ and for some positive constants $C$, $C'$, an upper bound on the probability above that vanishes as $\delta \to 0$. Now letting $\delta \to 0$ the result follows, and this completes the proof.
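A minimal sketch of the omitted bound, reconstructing the likely argument under the assumptions just made (the constants $C$, $C'$ below are illustrative and not those of the original): since $f \ge 0$ vanishes below level $z_0$ and $\rho_n^\delta$ is the restriction of $\rho_n$ to the bulk, Markov's inequality gives, for $N$ large,
$$ P_N\big(\rho_n(f) - \rho_n^\delta(f) > \varepsilon\big) \le \frac{1}{\varepsilon} \sum_{\alpha \in V_N \setminus V_N^\delta} E_N\Big[f\Big(\tfrac{\alpha}{n}, \tfrac{\psi_\alpha - b_N}{a_N}\Big)\Big] \le \frac{\|f\|_\infty}{\varepsilon} \sum_{\alpha \in V_N \setminus V_N^\delta} P_N\big(\psi_\alpha \ge u_N(z_0)\big) \le \frac{\|f\|_\infty}{\varepsilon}\,\big|V_N \setminus V_N^\delta\big|\,\frac{C e^{-z_0}}{N} \le C'\,\delta, $$
using $g_N(\alpha) \le g(0)$ together with the Mills ratio and $|V_N \setminus V_N^\delta| \le 2d\,\delta\,N$; the right-hand side indeed vanishes as $\delta \to 0$, uniformly in $n$ large.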