Tightness of the recentered maximum of log-correlated Gaussian fields

We consider a family of centered Gaussian fields on the d-dimensional unit box, whose covariance decreases logarithmically in the distance between points. We prove tightness of the recentered maximum of the Gaussian fields and provide exponentially decaying bounds on the right and left tails. We then apply this result to a version of the two-dimensional continuous Gaussian free field.


Main result
Let {Y_ǫ^x : x ∈ [0, 1]^d}_{ǫ>0} be a family of centered Gaussian fields indexed by the d-dimensional unit box [0, 1]^d, where d is any positive integer. Suppose that the family satisfies conditions (1.1) and (1.2), for some constant 0 < C_Y < ∞ and all x, y ∈ [0, 1]^d, ǫ > 0, where ‖·‖ is Euclidean distance. Display (1.1) implies that the covariance is logarithmic for distant points and that the variance is nearly constant. The second condition, (1.2), is imposed so that the field does not vary too much for close points. Display (1.2), basic relations between the moments of Gaussian random variables and Kolmogorov's continuity criterion (see [1, Theorem 1.4.17]) imply that the fields have continuous modifications. When d = 2, an example of a field satisfying (1.1) and (1.2) is the bulk of the mollified continuous Gaussian free field (MGFF), which will be defined in Section 3.1 and will be the object of our attention in Section 3.
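To make the log-correlation of (1.1) concrete, the following toy construction (all names are ours, not from the paper) builds a one-dimensional field of this flavor as a dyadic branching random walk: each point of [0, 1) accumulates independent Gaussian weights of variance log 2 along its dyadic ancestry, so the covariance of two points is log 2 times the number of scales at which they share a box, i.e. roughly log(1/‖x − y‖).

```python
import numpy as np

def brw_covariance(x, y, n):
    """Covariance of the toy dyadic BRW with n levels: log(2) times the
    number of consecutive dyadic scales at which x and y share a box,
    which is roughly log(1/|x - y|) for nearby points."""
    shared = 0
    for k in range(1, n + 1):
        if int(x * 2**k) != int(y * 2**k):
            break
        shared += 1
    return shared * np.log(2)

def sample_brw(n, rng):
    """Sample the field on the 2**n dyadic grid points of [0, 1):
    each level k contributes an independent N(0, log 2) weight per box."""
    pts = 2**n
    field = np.zeros(pts)
    for k in range(1, n + 1):
        weights = rng.normal(0.0, np.sqrt(np.log(2)), size=2**k)
        boxes = (np.arange(pts) * 2**k) // pts  # level-k box of each point
        field += weights[boxes]
    return field

n = 10
eps = 2.0**-n                     # resolution of the field
rng = np.random.default_rng(0)
Y = sample_brw(n, rng)
# By construction Var = n*log(2) = log(1/eps): the "variance nearly
# constant in x, of order log(1/eps)" part of condition (1.1).
```

Points sharing dyadic boxes down to scale 2^{-k} have covariance k log 2 ≈ log(1/‖x − y‖), the log-correlation of (1.1) up to an additive constant. The toy model is not exactly of the paper's class: dyadic neighbours can decorrelate early, which is precisely the defect that the modification in the MBRW below repairs.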
Theorem 1.1, our main result, asserts the tail bound (1.3): the maximum of Y_ǫ, recentered by m_ǫ, has exponentially decaying right and left tails, for some constant C depending on C_Y and d.
The main idea of the proof of Theorem 1.1 is to use Slepian's Lemma (see [2, Theorem 2.2.1]) to compare the maximum of the field Y_ǫ with the maximum of the modified branching random walk (MBRW), a field introduced by Bramson and Zeitouni in [3]. Since Slepian's Lemma only allows comparison of fields with the same index set, we will add an appropriately chosen independent continuous field to the MBRW. Adding an independent continuous field to the MBRW does not change the maximum much, provided the continuous field is small and smooth enough. These fields are defined in detail in Section 2.1. After defining the fields, we compare the right and left tails in Sections 2.2 and 2.3, respectively. We then show, in Section 3, that Theorem 1.1 implies tightness of the recentered maximum of the MGFF.
A comment on constants: c will always denote a small positive constant and C will always denote a large positive constant. Both constants are allowed to change from line to line. The dependence of the constants will be explicit or will be clear from the context. The phrase "absolute constant" will refer to fixed numbers that are independent of everything.

Related work
Our approach is motivated by recent advances in the study of the two-dimensional discrete Gaussian free field (DGFF). In [3], Bramson and Zeitouni computed the expected maximum of the DGFF up to an order 1 error and concluded tightness of the recentered maximum. In [4], Ding obtained bounds on the right and left tails of the recentered maximum of the DGFF. Later on, in [5], Bramson, Ding and Zeitouni proved convergence in distribution of the recentered maximum. The approach of this line of research is to use first and second moment methods, together with decomposition properties of the DGFF, to obtain good estimates on tail events. Previous work on the DGFF includes [6], where Bolthausen, Deuschel and Giacomin obtained asymptotics for the maximum of the DGFF, and [7], where Daviaud studied the extreme points of the DGFF. On the other hand, previous work on the continuous Gaussian free field (CGFF) includes [8], where Hu, Miller, and Peres studied the Hausdorff dimension of the "thick points" of the MGFF, which are closely related to the work of Daviaud. We also mention [9] for a nice discussion of Gaussian fields induced by Markov processes, and [10] for a survey on the CGFF.
Our main result implies, in particular, an analog of [3, Theorem 1.1] for the MGFF. Our approach consists in extending the MBRW by a Brownian sheet, so that it is possible to compare the extended field with scaled log-correlated continuous fields. Log-correlated Gaussian fields are a subject of current interest (see [11], [12], [13]). In particular, in [12], Madaule proved convergence for stationary centered Gaussian fields {Z_ǫ(x) : x ∈ [0, 1]^d} whose covariance satisfies an exact relation involving a fixed kernel k : R^d → R of class C^1 that vanishes outside [−1, 1]^d and satisfies k(0) = 1. Theorem 1.1 has weaker conditions on the covariance structure, and consequently, only tightness is achieved.
In [13], the authors proved the so-called "Freezing Theorem for GFF in planar domains" for a sequence of Gaussian fields approximating the continuous GFF by cutting off white noise, so that the covariance kernel is proportional to the function G_t : [0, 1]^2 × [0, 1]^2 → R defined in terms of p_{∂[0,1]^2}(r, x, y), the transition probability density of a Brownian motion killed at ∂[0, 1]^2. In the present paper, we consider a sequence of fields approximating the GFF by mollifying the Green function (see (3.5)), and we prove tightness. Convergence for the MGFF is expected to follow by adapting the arguments given in [5].

Auxiliary fields
In this subsection, we rigorously introduce the fields we mentioned in Section 1. A few properties of these fields will be stated; the proofs of these properties will be given in the Appendix.
In order to define these fields, it will be notationally more convenient to use [0, 1) d instead of [0, 1] d as the index set. This will not affect the main result because the supremum of Y ǫ over [0, 1) d is the same, due to continuity, as the maximum over [0, 1] d .

Modified branching random walk
We now define the modified branching random walk (MBRW) as the centered Gaussian field ξ_ǫ^v(t) whose covariance is given by display (2.1), for all 0 ≤ t, s ≤ log(1/ǫ) and v, u ∈ V_ǫ, where v_i is the i-th coordinate of v, and (·)_+ = max{·, 0}.
For simplicity, write ξ_ǫ^v = ξ_ǫ^v(log(1/ǫ)). Note that, for each point v ∈ V_ǫ, the process (ξ_ǫ^v(t))_t is a standard Brownian motion. Moreover, for each pair v, u ∈ V_ǫ, the Brownian motions are correlated until t = −log ‖v − u‖_∞, at which time their increments become independent. The end time is t = log(1/ǫ) because, for the "usual" d-ary branching random walk, it takes log(1/ǫ) units of time to generate |V_ǫ| particles (see the proof of Proposition 4.3 for a definition of the usual d-ary branching random walk).
It will be proved in the Appendix (see Proposition 4.1) that the MBRW exists and that it satisfies an upper bound on the expected maximum, for some constant C depending on d. The MBRW also satisfies a matching lower bound (see Proposition 4.2), where c is a constant depending only on d. It will also be proved in the Appendix (see Proposition 4.3) that there exist constants 0 < c, C < ∞ (depending on d) such that the tail estimate stated there holds for all A ⊂ V_ǫ, z ∈ R and ǫ > 0 small enough, where |A| is the cardinality of A.
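The existence claim of Proposition 4.1 rests on writing the candidate covariance as an inner product of indicator functions of boxes, which makes positive semidefiniteness automatic: any matrix of pairwise overlap volumes is a Gram matrix. A small numerical sketch of this mechanism, with our own function names, for unit boxes in d = 2:

```python
import numpy as np

def overlap_volume(c1, c2):
    """Volume of the intersection of two axis-aligned unit boxes centered
    at c1 and c2: the product over coordinates of (1 - |c1_i - c2_i|)_+."""
    d = np.abs(np.asarray(c1, float) - np.asarray(c2, float))
    return float(np.prod(np.maximum(1.0 - d, 0.0)))

# Pairwise-overlap matrix of a few unit boxes in d = 2. It is a Gram
# matrix (M_ij = <1_{A_i}, 1_{A_j}> in L^2), hence positive semidefinite,
# so it is a legitimate Gaussian covariance.
centers = [(0.0, 0.0), (0.3, 0.1), (0.7, 0.4), (2.0, 2.0)]
M = np.array([[overlap_volume(a, b) for b in centers] for a in centers])
eigs = np.linalg.eigvalsh(M)
```

This is only an illustration of why the integral representation in the Appendix yields a valid covariance; the actual MBRW covariance also averages over time and scales.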

Brownian sheet
As mentioned before, we will need an additional continuous Gaussian field. For x = (x_i)_{i≤d} ∈ R_+^d, let ψ^x denote the centered standard Brownian sheet. Recall that it satisfies Cov(ψ^x, ψ^y) = ∏_{i≤d} min{x_i, y_i}. Define a new field {ψ_ǫ^x : x ∈ [0, 1)^d}, depending on a parameter p ≥ 1, as follows: for v ∈ V_ǫ, let l be the linear map from v + [0, ǫ)^d onto [p, 2p)^d sending v to (p)_{i≤d} = (p, p, ..., p), and set ψ_ǫ^x as in (2.6) for all x ∈ [0, 1)^d; the resulting field satisfies (2.7) and (2.8). Note that p can be chosen as large as desired.
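The defining covariance of the sheet, Cov(ψ^x, ψ^y) = ∏ min{x_i, y_i}, is reproduced exactly by the elementary discretization below, where the sheet is a double cumulative sum of independent cell noises and the covariance of two grid values is the area of the common rectangle by construction. A sketch with our own names, not the paper's construction:

```python
import numpy as np

# Discretized Brownian sheet on [0, L]^2: each cell of a grid with side h
# carries an independent N(0, h^2) increment, and the sheet at a node is
# the sum over the rectangle "south-west" of it. Two nodes then have
# covariance (number of shared cells) * h^2 = min(x1, y1) * min(x2, y2),
# the defining covariance of the sheet, exactly at grid points.
L, m = 2.0, 64
h = L / m
rng = np.random.default_rng(1)
noise = rng.normal(0.0, h, size=(m, m))
sheet = np.cumsum(np.cumsum(noise, axis=0), axis=1)

def exact_cov(i1, j1, i2, j2):
    """Covariance of sheet[i1, j1] and sheet[i2, j2] by construction:
    the shared cells form a (min(i1,i2)+1) x (min(j1,j2)+1) block."""
    return (min(i1, i2) + 1) * (min(j1, j2) + 1) * h**2

# physical coordinates of grid nodes (10, 20) and (30, 5)
x = ((10 + 1) * h, (20 + 1) * h)
y = ((30 + 1) * h, (5 + 1) * h)
```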
To understand the motivation behind the previous definitions, we invite the reader to compare the bounds (1.1) and (1.2) with (2.3) and (2.8), respectively. These bounds will be used in the next sections.
We now proceed to the comparison of the right and left tails of the maximum of the field Y_ǫ (which was defined in Section 1 and satisfies (1.1) and (1.2)) and the maximum of an appropriate combination of the fields ξ_ǫ and ψ_ǫ (which will be specified in the next section). Note that we will only use the Brownian sheet when comparing the right tail; for the left tail, we will compare the MBRW directly with the field Y_ǫ on a discrete index set.

The right tail
Recall from Section 1 that the field Y ǫ satisfies (1.1) and (1.2), by definition.
and {ψ_ǫ^x : x ∈ [0, 1)^d} be independent fields, defined as in (2.1) and (2.6), respectively. Then, there exist δ > 0 small enough and p large enough (depending on C_Y and d) such that the suprema compare as in the display; in particular, by choosing p large enough (depending on C_Y, d and δ), we obtain a(x) ≤ 1 for all x.
We now compare the covariances for points x ≠ y, for which we distinguish two cases. In the first case, (1.2) and (2.8) imply the required bound for p large enough (depending on C_Y); the last inequality is due to the independence of ξ_ǫ and ψ_ǫ. In the second case, we use (1.1) to obtain the bound for some δ > 0 small enough (depending on C_Y and d). Proposition 2.1 now follows from Slepian's Lemma.
Proposition 2.1 provides an upper bound for the right tail of the supremum of Y_{δǫ} taken over the δ-box δ[0, 1)^d. The same proof works for any δ-box. Therefore, a union bound yields the corresponding estimate for the supremum over the whole box, for all λ ∈ R. We now provide an upper bound for the probability on the right-hand side of the previous display. We first prove an upper bound on the supremum of the Brownian sheet.
Lemma 2.2. There exist constants 0 < c, C < ∞ (depending on p and d) such that the stated tail bound holds. Indeed, µ(B(x, r)) ≥ c r^{2d} for some constant c > 0 depending on p and d. Applying the previous display and Fernique's Majorizing Criterion, we obtain a bound on the expected supremum, where C depends on p and d. Borell's Inequality (see [2, Theorem 2.1.1]) and (2.7) then imply the tail estimate, where C is the constant obtained in the previous display. Lemma 2.2 now follows from a change of variables.
The previous display implies the corresponding estimate. We now compute an upper bound for the right-hand side of the previous display. Define the random sets Γ_y as in the display. The definition of a(x) easily implies that 1/2 ≤ a(x) ≤ 1, for ǫ > 0 small enough. Therefore, the last display yields a bound on the supremum. Since ψ_ǫ and ξ_ǫ are independent, from (2.5) we obtain the stated estimate. Then, for y = 0, we simply use |Γ_0| ≤ ǫ^{−d}. Therefore, from displays (2.10) and (2.11), we obtain the bound for some constants 0 < c, C < ∞ (depending on p and d).
Proof of Theorem 1.1, (1.3), the right tail. Display (2.9) and Proposition 2.3 imply the bound on the maximum. It is easy to see from the definition that m_{δǫ} ≤ m_ǫ + C′ for some C′ depending on δ and d. The upper bound (1.3) for the right tail follows by adjusting the constants.

The left tail
In this subsection we prove the upper bound (1.3) for the left tail. As previously mentioned, we can reduce the set under maximization to a discrete set, taken from a suitably chosen family {D_ǫ : ǫ > 0} of discrete subsets of [0, 1)^d. If we select D_ǫ appropriately, we can perform a comparison with the MBRW using Slepian's Lemma.
Proposition 2.4. There exist δ, ρ > 0 small enough (depending on C_Y and d) such that the maxima compare as in the display. Proof. The relevant variance ratio is greater than 1 in the stated range, where the last bound follows from (2.3). All the hypotheses of Slepian's Lemma are satisfied, so the comparison of maxima holds for all λ ∈ R. Proposition 2.4 follows by observing that the resulting bound holds for all λ ≥ 0 and all ǫ > 0 small enough.
Proof. It follows from the definition of b(u) that, for small enough ǫ > 0, the stated bound holds. Our task is now to find an upper bound for the probability on the right-hand side of Proposition 2.5.
Proposition 2.6. There exist constants 0 < c, C < ∞ (depending on ρ and d) such that the stated bound holds, where k is a large number that will be chosen later. The variable ξ_ǫ^ν(k) is a Gaussian random variable with mean zero and variance k. Therefore, by choosing k = log λ, the last two displays imply the claim, proving Proposition 2.6 in the case k = log λ ≤ log(1/ǫ)/2. On the other hand, for λ ≥ 1/ǫ, the corresponding bound (where v is any point) implies Proposition 2.6 in this case.
Using Propositions 2.4, 2.5 and 2.6, we are now ready to finish the proof of Theorem 1.1.

Proof of Theorem 1.1, (1.3), the left tail. Propositions 2.4, 2.5 and 2.6 imply the existence of constants 0 < δ, ρ, c, C < ∞, depending on C_Y and d, such that the bound in the corresponding display holds, where C′ depends on δ and d. Therefore, the bound (1.3) for the left tail follows by adjusting the constants.
Example: a mollified Gaussian free field in d = 2

The Gaussian free field in two dimensions provides an important example of a log-correlated field. Intuitively speaking, the reason for the log-correlation is simply that, in d = 2, the Green function for the Laplacian is logarithmic.
We begin by recalling in Section 3.1 the definitions of the Dirichlet product and the Hilbert space induced by it. We then use this Hilbert space to define the continuous Gaussian free field and the mollified Gaussian free field. After that, we prove some useful properties of these fields, which will be used to check the hypotheses of Theorem 1.1. Finally, in Section 3.2, we use Theorem 1.1 to prove tightness of the recentered maximum of the family of mollified Gaussian free fields.

Dirichlet product
We begin by recalling the definition of the Dirichlet product. Let C_c^∞((0, 1)^2) denote the set of real-valued C^∞ functions with compact support in (0, 1)^2, and let (f, g)_∇ = ∫_{(0,1)^2} ∇f(x) · ∇g(x) dx denote the Dirichlet product, where ∇ is the gradient and dx is two-dimensional Lebesgue measure. Note that the Dirichlet product satisfies (f, g)_∇ = −∫_{(0,1)^2} f(x) ∆g(x) dx, where ∆ is the standard Laplacian. The Dirichlet product induces a norm on C_c^∞((0, 1)^2) by ‖f‖_∇ = (f, f)_∇^{1/2}, called the Dirichlet norm. Denote by W = W((0, 1)^2) the completion of C_c^∞((0, 1)^2) with respect to the Dirichlet norm. The set W, together with the Dirichlet product on W, defines a Hilbert space.
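The integration-by-parts identity (f, g)_∇ = −∫ f ∆g behind the Dirichlet product can be checked on a grid: with forward differences and zero boundary values, summation by parts makes the discrete identity exact up to floating point. A sketch with our own function names:

```python
import numpy as np

# Grid check of (f, g)_grad = -int f * (Laplacian g) on the unit square
# for functions vanishing on the boundary.
m = 50
h = 1.0 / m
xs = np.linspace(0.0, 1.0, m + 1)
X, Yg = np.meshgrid(xs, xs, indexing="ij")
f = np.sin(np.pi * X) * np.sin(np.pi * Yg)      # zero on the boundary
g = X * (1 - X) * Yg * (1 - Yg)                 # zero on the boundary

def dirichlet_product(f, g, h):
    """Sum of grad f . grad g over grid cells (forward differences)."""
    fx, fy = np.diff(f, axis=0) / h, np.diff(f, axis=1) / h
    gx, gy = np.diff(g, axis=0) / h, np.diff(g, axis=1) / h
    return (fx[:, :-1] * gx[:, :-1] + fy[:-1, :] * gy[:-1, :]).sum() * h**2

def minus_f_lap_g(f, g, h):
    """-sum of f * (5-point discrete Laplacian of g) over interior nodes."""
    lap = (g[2:, 1:-1] + g[:-2, 1:-1] + g[1:-1, 2:] + g[1:-1, :-2]
           - 4.0 * g[1:-1, 1:-1]) / h**2
    return -(f[1:-1, 1:-1] * lap).sum() * h**2

a = dirichlet_product(f, g, h)
b = minus_f_lap_g(f, g, h)
```

The two sums agree to machine precision, the discrete form of the identity above.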
The Dirichlet norm satisfies Poincaré's Inequality: there exists a constant C (which depends only on the domain (0, 1)^2) such that ‖f‖_{L^2} ≤ C ‖f‖_∇. Poincaré's Inequality implies that the Dirichlet norm is equivalent to the norm (‖f‖_{L^2}^2 + ‖f‖_∇^2)^{1/2}. Recall that the completion of C_c^∞((0, 1)^2) with respect to the latter norm is called a (1, 2)-Sobolev space (i.e., measurable functions whose weak derivatives up to order 1 exist and belong to L^2((0, 1)^2)). Since the norms are equivalent, the space (W, ‖·‖_∇) is also a Sobolev space. Therefore, for any g ∈ W and any measurable set E ⊂ [0, 1]^2, the integral ∫_E g(x) dx is well-defined.
For a given open set U ⊂ (0, 1)^2, Poincaré's Inequality implies that the linear mapping W → R given by g ↦ ∫_U g(x) dx is ‖·‖_∇-continuous. Since W is a Hilbert space, the Riesz representation theorem implies the existence of a function f = f_U ∈ W such that (f_U, g)_∇ = ∫_U g(x) dx for all g ∈ W.

Gaussian free fields
The continuous Gaussian free field is defined as follows: since (·, ·)_∇ is positive definite, there exists a family {X^f : f ∈ W} of centered Gaussian variables, defined on some probability space (Ω, P), such that Cov(X^f, X^g) = (f, g)_∇.
The family {X^f : f ∈ W} is called the continuous Gaussian free field.
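A finite-dimensional analogue may clarify the definition: on the interior nodes of a grid in (0, 1)^2, the lattice free field is the centered Gaussian vector whose covariance is the Green function of the discrete Dirichlet Laplacian, i.e. the inverse of −∆_h, and it can be sampled through a Cholesky square root. This is only an illustrative discrete cousin of the abstract definition above; all names are ours.

```python
import numpy as np

m = 10          # interior nodes per side
N = m * m

def dirichlet_laplacian(m):
    """Matrix of h^2 * (-Delta_h): 5-point stencil, zero boundary values."""
    A = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            k = i * m + j
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < m and 0 <= nj < m:
                    A[k, ni * m + nj] = -1.0
    return A

A = dirichlet_laplacian(m)
G = np.linalg.inv(A)                 # discrete Green function (up to h^2)
Lchol = np.linalg.cholesky(G)        # G is symmetric positive definite
rng = np.random.default_rng(2)
dgff = Lchol @ rng.normal(size=N)    # one sample of the lattice field
```

Sampling as G^{1/2} times i.i.d. normals mirrors the abstract prescription Cov(X^f, X^g) = (f, g)_∇, with the Green function playing the role of the covariance kernel.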

Orthogonal decomposition
The next proposition shows that the MGFF satisfies a tree-like decomposition property.
Proposition 3.1. Let Q = (0, 1/2)^2 ⊂ (0, 1)^2 be a sub-square of side length 1/2. Then, X_ǫ^x can be decomposed as in the display. Proof. Denote by C_c^∞(Q) the set of real-valued C^∞ functions with compact support in Q, and let W(Q) be the corresponding Hilbert space induced by the Dirichlet product on C_c^∞(Q). Note that (3.6) holds for all f, g ∈ C_c^∞(Q). By taking the completion of C_c^∞(Q) with respect to the Dirichlet product, we see that W(Q) is a Hilbert subspace of W((0, 1)^2) and that (3.6) holds for all f, g ∈ W(Q).
Let f_{x,ǫ} be as in (3.3) and decompose it as f_{x,ǫ} = g_{x,ǫ} + h_{x,ǫ}, where g_{x,ǫ} ∈ W(Q) and h_{x,ǫ} ∈ W(Q)^⊥ (the orthogonal space). Set X̃_ǫ^x = X^{g_{x,ǫ}} and φ^x = X^{h_{x,ǫ}}.
Since k ∈ W (Q), the function k vanishes outside of Q. Therefore, as desired.
Claim 3.2 implies, in analogy with (3.5), that the following is true for all x, y ∈ Q: Cov(X̃_ǫ^x, X̃_ǫ^y) = (g_{x,ǫ}, g_{y,ǫ})_∇ = (g_{x,ǫ}, g_{y,ǫ})_{∇,Q}, which can be expressed through G_Q, the Green function of Q for the operator −∆, with Dirichlet boundary conditions on ∂Q.

The change of variables and Claim 3.3 imply that the previous display equals the covariance of the rescaled field. For Gaussian fields, equality of the covariance structures implies that the fields have the same distribution. Therefore, the right-hand side is clearly equal to {X_{2ǫ}^x : x ∈ [0, 1]^2}, which finishes the proof of Proposition 3.1. Proposition 3.1 is true for any sub-square Q ⊂ (0, 1)^2 of side length 1/2, because Green functions are translation invariant (i.e., G_{Q+z}(u + z, v + z) = G_Q(u, v) for any z ∈ R^2, u, v ∈ Q, where G_{Q+z} is the Green function of Q + z for the operator −∆, with Dirichlet boundary conditions on ∂(Q + z)).
Suppose that ‖x − y‖ < ǫ. Then, the change of variables u′ = (u − y)/ǫ implies that the previous display is bounded as stated, by continuity in z and compactness of D(0, 1), where C is an absolute constant. Suppose now that ‖x − y‖ ≥ ǫ. Then, the change of variables u′ = (u − y)/‖x − y‖ implies that the previous line is bounded as stated, by continuity in r, z and compactness of {0 ≤ r ≤ 1} × {‖z‖ = 1}, where C is an absolute constant.
Note that the fact that we are integrating over disks is not essential. We could define a similar MGFF for other mollifiers.
A trivial corollary of the previous proposition (which follows from elementary properties of the logarithm) is that the disk average of G(u, y) differs from (2/π) log(1/ǫ) by at most C whenever ‖x − y‖ < c_0 ǫ, with a corresponding estimate in the complementary case. Now we prove an important corollary of Proposition 3.4.
Corollary 3.6. Let K, k be as in Proposition 3.4. Then, there exists a constant C (depending only on k) such that, for all x, y ∈ K, ǫ > 0, the bounds (3.7) and (3.8) hold. Proof. Let us prove (3.7). If ‖x − y‖ ≤ 2ǫ, by Corollary 3.5, the pointwise bound holds for every u ∈ D(x, ǫ). Integrating the last inequality over u ∈ D(x, ǫ) and using (3.5), we obtain the claim. The same (with a different constant) holds for ‖x − y‖ ≥ ǫ, because Γ is logarithmic. Integrating over u ∈ D(x, ǫ) finishes the proof of (3.7). We now prove (3.8). Display (3.5) yields a two-term expression; we use Corollary 3.5 to obtain an upper bound for the first term and a lower bound for the second term. The previous display is then controlled by |D(x, ǫ)\D(y, ǫ)|, the Lebesgue measure of the set D(x, ǫ)\D(y, ǫ). Elementary geometry implies |D(x, ǫ)\D(y, ǫ)| ≤ Cǫ‖x − y‖. Repeating the previous argument for Cov(X_ǫ^y, X_ǫ^y − X_ǫ^x) finishes the proof.
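The reason mollification over ǫ-disks only shifts the logarithm by an O(1) constant, as in the estimates above, is visible in the model computation (1/|D(0, ǫ)|) ∫_{D(0,ǫ)} log(1/|u|) du = log(1/ǫ) + 1/2, which the sketch below checks numerically (our own names; the rigorous statements are Proposition 3.4 and Corollary 3.5):

```python
import numpy as np

def disk_average_of_log(eps, n=200000):
    """Numerical value of (1/|D(0,eps)|) * int_{D(0,eps)} log(1/|u|) du,
    computed in polar coordinates as (2/eps^2) * int_0^eps r*log(1/r) dr
    via the trapezoid rule."""
    r = np.linspace(eps / n, eps, n)        # avoid r = 0
    integrand = r * np.log(1.0 / r)
    integral = ((integrand[1:] + integrand[:-1]) * 0.5 * np.diff(r)).sum()
    return 2.0 * integral / eps**2

eps = 0.01
avg = disk_average_of_log(eps)
predicted = np.log(1.0 / eps) + 0.5         # exact value of the average
```

Averaging the log-singularity over a small disk keeps the log(1/ǫ) leading term and adds only the constant 1/2, which is the mechanism behind the O(1) errors in the corollaries above.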

Tightness for the MGFF
In the next theorem we provide upper bounds on the right and left tails of the maximum of the MGFF, and we compute the expected maximum up to an order 1 term.
In order to prove the bound (3.9) for the right tail, we use Proposition 3.1 and the comment that follows it to obtain the decomposition, where {X̃_{ǫ/2}^x : x ∈ Q} has the same distribution as {X_ǫ^x : x ∈ [0, 1]^2} and the fields {φ^x : x ∈ Q}, {X̃_{ǫ/2}^x : x ∈ Q} are independent. If χ = arg max{X̃_{ǫ/2}^x : x ∈ Q}, then the displayed comparison holds. But independence of φ and χ implies E[φ^χ] = 0, because φ is a centered field. By using the last display and (3.10), we obtain the tail bound for some absolute constants 0 < c, C < ∞. The bound (3.9) and m_{ǫ/2} = m_ǫ + O(1) imply tightness of the family, and the same bound also yields the expected-maximum estimate, finishing the proof.

Appendix
We prove here the claims made in Section 2.1.
Proposition 4.1. The MBRW, defined by display (2.1), exists and satisfies Var(ξ_ǫ^v(t)) = t for all 0 ≤ t ≤ log(1/ǫ) and all v ∈ V_ǫ. Proof. We show that the mapping (V_ǫ × [0, log(1/ǫ)])^2 → R given by (2.1) is positive semidefinite by writing it as an integral of products of indicator functions of boxes, where dz is d-dimensional Lebesgue measure and A(v, r) is the d-dimensional box of side length 1, centered at e^r v. Let {(v_α, t_α)}_α be any finite subset of V_ǫ × [0, log(1/ǫ)], and let {c_α}_α be arbitrary real numbers. Then, applying the previous display, we obtain a nonnegative quadratic form, as desired. This shows that the MBRW exists. For any v ∈ V_ǫ and t ≤ log(1/ǫ), the variance computation follows.

Proof (of Proposition 4.2). We use a second moment method. Let T = T_ǫ = log(1/ǫ) and let Z be the counting variable defined in the display, where the second inequality follows by Cauchy–Schwarz. We first compute a lower bound for E[Z].
Note that Girsanov's Theorem (see [16, Theorem 5.1]) implies that ξ̃_ǫ^v(t) is a Brownian motion under Q. Note that the probability in question is bounded below as displayed, for some absolute constant c > 0. It follows easily from the Reflection Principle (see [16, Proposition 6.19]) that the corresponding estimate holds. Combining the three previous displays, we obtain the lower bound for some constant c > 0, depending on the dimension d.
We now compute an upper bound for E[Z^2]. Note that both ξ_ǫ^v(·) and ξ_ǫ^w(·) are Brownian motions, which have independent increments starting at time s = s_{v,w} = −log(max{ǫ, ‖v − w‖_∞}). Assume 0 < s < T. Applying Girsanov's Theorem and the Reflection Principle, we obtain the stated bound for some constant C. Therefore, from (4.4) and the last display, the estimate follows. Applying Girsanov's Theorem and the Reflection Principle again yields the corresponding bound for some constant C.
Consider now the case s = 0. Then, the independence of ξ_ǫ^v(·) and ξ_ǫ^w(·) implies the stated bound, where the last inequality follows from Girsanov's Theorem and the Reflection Principle. In the case s = T, we obtain (4.7). In consequence, for any pair v, w ∈ V_ǫ, displays (4.5), (4.6) and (4.7) imply the combined bound. Therefore, from (4.3) and the last display, we obtain the conclusion, because the last expression is (eventually) decreasing in T. Proposition 4.2 follows from the last display, (4.1) and (4.2).
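The Reflection Principle invoked repeatedly above has an exact discrete counterpart for simple random walk, P(max_{k≤n} S_k ≥ a) = 2 P(S_n > a) + P(S_n = a), which the sketch below (names ours, illustration only) verifies by brute-force enumeration of all paths:

```python
import itertools

# Exact discrete Reflection Principle for simple random walk:
#   #(paths with running max >= a)
#     = 2 * #(paths ending above a) + #(paths ending at a),
# checked by enumerating all 2**n paths of length n.
n, a = 10, 4
count_max = count_end_gt = count_end_eq = 0
for steps in itertools.product((-1, 1), repeat=n):
    pos = running_max = 0
    for s in steps:
        pos += s
        running_max = max(running_max, pos)
    if running_max >= a:
        count_max += 1
    if pos > a:
        count_end_gt += 1
    elif pos == a:
        count_end_eq += 1
```

The bijection behind the identity (reflect the path after its first visit to level a) is the same one used for Brownian motion in the proofs above.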
for all A ⊂ V_ǫ, z ∈ R and ǫ > 0 small enough.
Proof. We introduce the d-ary branching random walk (BRW) as follows: let ǫ = 2^{−n} for some n ∈ N. At each time T_k = k log 2, k = 0, 1, ..., n, we partition [0, 1)^d into 2^{kd} disjoint boxes of side length 2^{−k}. For a pair v, w ∈ V_ǫ, denote by l(v, w) the first time that v, w lie in different boxes of the partition. With this notation, define the BRW as the centered Gaussian field (η_ǫ^v(t)) with covariance Cov(η_ǫ^v(t), η_ǫ^w(s)) = min{t, s, l(v, w)}. For simplicity, let T = T_n and η_ǫ^v = η_ǫ^v(T). It is not hard to show that such a field exists. Note that our BRW can be interpreted as a branching Brownian motion that splits every log 2 units of time into 2^d independent Brownian motions. Following the argument given in [15, Lemma 3.7], one can show that there exists C (depending on the dimension) such that the comparison holds for all A ⊂ V_ǫ ⊂ V_{ǫ/C} and all λ ∈ R. Therefore, it is enough to prove Proposition 4.3 for the BRW. We do so by following very closely the proof in [5, Lemma 3.8].
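The branching time l(v, w) of the BRW can be computed directly: it is the first T_k = k log 2 at which v and w fall into different boxes of the level-k dyadic partition. A sketch, with our own function name:

```python
import math

def branching_time(v, w, n):
    """First time T_k = k*log(2), k = 1..n, at which the points v, w of
    [0,1)^d lie in different boxes of the level-k dyadic partition;
    returns n*log(2) if they never separate down to level n."""
    for k in range(1, n + 1):
        if any(math.floor(vi * 2**k) != math.floor(wi * 2**k)
               for vi, wi in zip(v, w)):
            return k * math.log(2)
    return n * math.log(2)
```

For instance, 0.1 and 0.2 share the level-1 and level-2 boxes but separate at level 3, so their branching time is 3 log 2, matching the covariance min{t, s, l(v, w)} evaluated at the endpoint.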
We will use the following estimate, which is proved in [5, Lemma 3.6]: let W_s be standard Brownian motion under P and fix a large constant C_1. Then, if µ*_{q,r}(x) = P(W_q ∈ dx, W_s ≤ r + C_1 (min{s, q − s})^{1/20} for all 0 ≤ s ≤ q)/dx, we have µ*_{q,r}(x) ≤ C_2 r(r − x)/q^{3/2} (4.8) for all x ≤ r, where C_2 depends on C_1. We next define the event G(λ), for all λ ≥ 1.
Proof. Following the proof of [5, Lemma 3.7], we define ψ_t = λ + 10 (log(min{t, T − t}))_+ and χ_{T_k}(x) = P(η_ǫ^v(t) − (m_ǫ/T)t ≤ ψ_t for all t ≤ T_k, η_ǫ^v(T_k) − (m_ǫ/T)T_k ∈ dx)/dx. Then, by decomposing based on the first time such that η_ǫ^v(t) − (m_ǫ/T)t ≥ ψ_t, we obtain the bound, where C is an absolute constant. Display (4.8) and Girsanov's Theorem imply the estimate, where C depends on d. On the other hand, a complementary bound holds for some absolute constant C. Therefore, by the three previous displays, we obtain the bound on P(G(λ)), where a ∨ b = max{a, b}, and the convergence of the last sum is due to the exponent 10 in the denominator (with room to spare).
Summing the last display over v ∈ A and using Claim 4.4, we obtain the claim.