Subsequential tightness of the maximum of two-dimensional Ginzburg-Landau fields

We prove subsequential tightness of centered maxima of two-dimensional Ginzburg-Landau fields with bounded elliptic contrast.


Introduction
Let V ∈ C²(R) satisfy, for all x ∈ R,

c_− ≤ V″(x),  (1)

V″(x) ≤ c_+,  (2)

where c_−, c_+ are positive constants. The ratio κ = c_+/c_− is called the elliptic contrast of V. We assume (1) and (2) throughout this note without further mention. We treat V as a nearest-neighbor potential for a two-dimensional Ginzburg-Landau gradient field. Explicitly, let D_N := [−N, N]² ∩ Z² and let the boundary ∂D_N consist of the vertices in D_N that are connected to Z² \ D_N by some edge. The Ginzburg-Landau field on D_N with zero boundary condition is a random field denoted by φ_{D_N,0}, whose distribution is given by the Gibbs measure

µ_N(dφ) = Z_N^{−1} exp( −∑_{i=1,2} ∑_{x: x, x+e_i ∈ D_N} V(φ(x+e_i) − φ(x)) ) ∏_{x ∈ D_N \ ∂D_N} dφ(x) ∏_{x ∈ ∂D_N} δ_0(dφ(x)),  (3)

where e_1 = (1, 0), e_2 = (0, 1), and Z_N is the normalizing constant ensuring that µ_N is a probability measure, i.e. µ_N(R^{|D_N|}) = 1. We denote expectation with respect to µ_N by E_N, or simply by E when no confusion can occur.
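As a purely numerical illustration of the model (not part of the paper's argument): the sketch below approximately samples the Gibbs measure by single-site Metropolis dynamics, with the illustrative choice V(x) = x²/2 + 0.2 cos x, for which V″(x) = 1 − 0.2 cos x ∈ [0.8, 1.2], so c_− = 0.8, c_+ = 1.2 and κ = 1.5. The grid size, proposal scale, and sweep count are arbitrary assumptions.

```python
import numpy as np

def V(x):
    # Illustrative potential: V''(x) = 1 - 0.2*cos(x) lies in [0.8, 1.2],
    # so c_- = 0.8, c_+ = 1.2 and the elliptic contrast is kappa = 1.5.
    return x ** 2 / 2 + 0.2 * np.cos(x)

def metropolis_sample(n=8, sweeps=200, seed=0):
    """Approximate sample of the Ginzburg-Landau Gibbs measure on an
    n x n grid with zero boundary condition, via Metropolis updates."""
    rng = np.random.default_rng(seed)
    phi = np.zeros((n, n))
    for _ in range(sweeps):
        for i in range(1, n - 1):        # interior sites only:
            for j in range(1, n - 1):    # the boundary stays pinned at 0
                old, new = phi[i, j], phi[i, j] + rng.normal()
                # The energy change involves only the four incident edges.
                dE = sum(
                    V(new - phi[i + di, j + dj]) - V(old - phi[i + di, j + dj])
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                )
                if np.log(rng.random()) < -dE:
                    phi[i, j] = new
    return phi

phi = metropolis_sample()
print(round(float(phi.max()), 3))
```

The update is exact for the target density exp(−H)/Z with H the sum of V over nearest-neighbor gradients; only mixing time is approximate here.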
Ginzburg-Landau fields with convex potential, which are natural generalizations of the standard lattice Gaussian free field corresponding to quadratic V (DGFF), have been extensively studied since the seminal works [FS97,HS94,NS97]. Of particular relevance to this paper is Miller's coupling, described in Section 2.2 below, which shows that certain multi-scale decompositions that hold for the Gaussian case continue to hold, approximately, for the Ginzburg-Landau model.
In this paper, we study the maximum of Ginzburg-Landau fields. Given a finite D ⊂ Z², set

M_D := max_{x ∈ D} φ_{D,0}(x),

and set M_N = M_{D_N}. For the Gaussian case, we write M^G_N for M_N. Much is known about M^G_N, following a long succession of papers starting with [Bra83]. In particular, by [BDZ16] and [BL16], M^G_N − m^G_N converges in distribution to a randomly shifted Gumbel, where m^G_N = c_1 log N − c_2 log log N with explicit constants c_1, c_2.
Much less is known concerning the extrema in the Ginzburg-Landau setup, even though linear statistics of such fields converge to their Gaussian counterparts [NS97]. A first step toward the study of the maximum was undertaken in [BW16], where the following law of large numbers is proved:

M_{D_N} / log N → 2√g in probability as N → ∞,  (4)

where g = g(V) > 0 is a constant. In this note we prove that the fluctuations of M_{D_N} around its mean are tight, at least along some (deterministic) subsequence.
Theorem 1 There is a deterministic sequence {n_k} with n_k → ∞ as k → ∞ such that the sequence of random variables M_{D_{n_k}} − EM_{D_{n_k}} is tight.
As will be clear from the proof, the sequence {n_k} can be chosen with density arbitrarily close to 1. Theorem 1 is the counterpart of an analogous result for the Gaussian case proved in [BDZ11], building on a technique introduced by Dekking and Host [DH91]. The Dekking-Host technique is also instrumental in the proof of Theorem 1. However, because the Ginzburg-Landau field does not possess good decoupling properties near the boundary, significant changes are needed. Additional crucial ingredients in the proof are Miller's coupling and a decomposition in differences of harmonic functions introduced in [BW16].

The Brascamp-Lieb inequality
One can bound the variances and exponential moments with respect to the Ginzburg-Landau measure by those with respect to the Gaussian measure, using the following Brascamp-Lieb inequality. Let φ be sampled from the Gibbs measure (3). Given η ∈ R^{D_N}, set ⟨η, φ⟩ := ∑_{x ∈ D_N} η_x φ_x.

Lemma 2 (Brascamp-Lieb inequalities [BL76]) Assume that V ∈ C²(R) satisfies inf_{x∈R} V″(x) ≥ c_− > 0. Let E_GFF and Var_GFF denote the expectation and variance with respect to the DGFF measure (that is, (3) with V(x) = x²/2). Then for any η ∈ R^{D_N},

Var(⟨η, φ⟩) ≤ c_−^{−1} Var_GFF(⟨η, φ⟩),  (5)

E exp(⟨η, φ − Eφ⟩) ≤ exp( (2c_−)^{−1} Var_GFF(⟨η, φ⟩) ).  (6)

Approximate harmonic coupling
By their definition, the Ginzburg-Landau measures satisfy the domain Markov property: conditioned on the values on the boundary of a domain, the field inside the domain is again a gradient field with boundary condition given by the conditioned values. For the discrete GFF, there is in addition a nice orthogonal decomposition. More precisely, the conditioned field inside the domain is the discrete harmonic extension of the boundary value to the whole domain plus an independent copy of a zero boundary discrete GFF. While this exact decomposition does not carry over to general Ginzburg-Landau measures, the next result due to Jason Miller, see [Mil11], provides an approximate version.
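In symbols, the Gaussian orthogonal decomposition just described reads (a standard fact, recalled here in our notation):

```latex
\phi^{f}_{D} \;\stackrel{d}{=}\; Hf + \phi^{0}_{D},
\qquad (Hf)(x) \;:=\; \sum_{y \in \partial D} H_{\partial D}(x,y)\, f(y),
```

where Hf is the discrete harmonic extension of the boundary value f (written via the harmonic measure H_{∂D}) and φ⁰_D is an independent zero-boundary DGFF on D. It is this exact identity whose approximate analogue is provided by Miller's coupling.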
Theorem 3 (Miller [Mil11]) Let D ⊂ D_R. Let φ be sampled from the Ginzburg-Landau measure (3) on D with zero boundary condition, and let φ^f be sampled from the Ginzburg-Landau measure on D with boundary condition f. Then there exist constants c, γ, δ′ ∈ (0, 1), depending only on V, so that if r > cR^γ then the following holds. There exists a coupling of φ and φ^f such that, if φ̂ := φ^f − φ, then with probability at least 1 − cR^{−δ′} the difference φ̂ is discrete harmonic in {x ∈ D : dist(x, ∂D) > r}. Here and in the sequel of the paper, for a set A ⊂ Z² and a point x ∈ Z², we use dist(x, A) to denote the (lattice) distance from x to A.

Pointwise tail bound
We also recall the pointwise tail bound for the Ginzburg-Landau field (3), proved in [BW16].

Theorem 4 Let g be the constant as in (4). There exists C < ∞ such that for all u > 0 large enough and all v ∈ D_N,

P( φ_{D_N,0}(v) ≥ u ) ≤ C exp( −u² / (2g log dist(v, ∂D_N)) ).
This allows us to conclude that the maximum of φ DN ,0 does not occur within a thin layer near the boundary.
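The mechanism behind this conclusion is a union bound; the following sketch uses illustrative exponents (the layer width N^γ and the deficiency ε are placeholders, not the paper's choices), under the assumption that the pointwise tail at v is Gaussian with variance of order g log dist(v, ∂D_N):

```latex
\mathbb{P}\Big(\max_{v:\ \mathrm{dist}(v,\partial D_N)\le N^{\gamma}}\phi_{D_N,0}(v)
 \ \ge\ 2\sqrt{g}\,(1-\epsilon)\log N\Big)
\ \le\ C\,N^{1+\gamma}\exp\Big(-\frac{4g(1-\epsilon)^2\log^2 N}{2g\,\gamma\log N}\Big)
\ =\ C\,N^{\,1+\gamma-\frac{2(1-\epsilon)^2}{\gamma}}\ \longrightarrow\ 0,
```

since the layer contains at most CN^{1+γ} vertices and, for γ small, 2(1−ε)²/γ > 1+γ. Thus the maximum over the layer is, with high probability, strictly below the global level dictated by (4).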

The recursion and proof of Theorem 1
We prove Theorem 1 by establishing a recursion for the maximum of the field over a subdomain Y_{N,δ} of D_N that stays away from the boundary; see Figure 1.
The claim (8) follows from [BW16]: the upper control on M_{D_N}/log N follows from (4), while the lower control on M_{D^ε_N}/log N follows from the display below (5.19) in [BW16].
We now switch to dyadic scales. For n ∈ N, set N = 2^n and m_n := M_{Y^{(1)}_{2^n,δ}}. We set up a recursion for m_n. Clearly, D_{4N} contains four disjoint translates of D_N (for instance, the translate D_N + (2N, 2N) is contained in [N, 3N]²); see Figure 2. The next two lemmas will allow us to control the difference between φ_{D_{4N},0} and φ_{D_N,0} (and, as a consequence, between m_{n+2} and m_n).

Lemma 8 With notation as in Lemma 7, there exists a constant
The proofs of Lemmas 7 and 8 are postponed to Section 4. In the rest of this section, we give the proof of Theorem 1.

Proof of Theorem 1.
Denote by m*_n an independent copy of m_n. We combine Lemmas 7 and 8 to conclude a bound of the form E max{m_n, m*_n} ≤ Em_{n+2} + 3C_1; here we apply (4) to verify that, for all large n, the hypotheses of Lemma 8 hold. Using max{a, b} = ½(a + b + |a − b|) and Jensen's inequality, we obtain

E|m_n − m*_n| ≤ 2(Em_{n+2} − Em_n) + 6C_1.  (9)

We need the following lemma.
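For the reader's convenience, here is the Dekking-Host computation in the abstract form used above (C denotes a generic additive error, playing the role of the constants coming from Lemmas 7 and 8):

```latex
\mathbb{E}\,|m_n - m_n^{*}|
 \;=\; 2\,\mathbb{E}\max\{m_n, m_n^{*}\} - 2\,\mathbb{E}\,m_n
 \;\le\; 2\big(\mathbb{E}\,m_{n+2} - \mathbb{E}\,m_n\big) + 2C,
```

using E max{m_n, m*_n} ≤ E m_{n+2} + C and the identity max{a, b} = ½(a + b + |a − b|). By Jensen's inequality (conditioning on m_n), E|m_n − Em_n| ≤ E|m_n − m*_n|, so controlling E m_{n+2} − E m_n along a subsequence, as in Lemma 9, yields tightness.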

Lemma 9 There exists a sequence {n k } and a constant K < ∞ such that
Em n k +2 ≤ Em n k + K.
We continue with the proof of Theorem 1. Using the subsequence {n_k} from Lemma 9, we have from (9) that E|m_{n_k} − m*_{n_k}| ≤ 2K + 6C_1, which implies, using Jensen's inequality, that m_{n_k} − Em_{n_k} is tight. This implies that the sequence of random variables M_{Y_{2^{n_k},δ}} − EM_{Y_{2^{n_k},δ}} is tight, since M_{Y_{2^{n_k},δ}} is the maximum of 4 rotated copies of m_{n_k}.
Finally, combining (4) and Lemma 5, we conclude that the sequence M_{D_{N_k}} − EM_{D_{N_k}} is tight.

Proofs of Lemmas 7 and 8
Proof of Lemma 7. The existence of the harmonic decomposition is implied by the Markov property and Theorem 3 (with δ′, γ taken as the constants in Theorem 3). It thus suffices to obtain an upper bound for Var ĥ. The orthogonal decomposition for the GFF implies a corresponding decomposition of this variance, and we estimate the resulting expressions for different regions of v ∈ Y^{(1)}_{N,δ}.

We first show that (10) holds. Indeed, standard asymptotics for the lattice Green's function (following e.g. from [Law96, Proposition 1.6.3]) give the required estimates, for some constant g_0. To conclude the proof, we also claim that (11) holds for δ ∈ (γ, 1). Indeed, denoting by T_γ the top boundary of D^γ_N, we apply the asymptotics for the lattice Green's function to obtain the claim.

We will prove that there exist C_0 < ∞ and α > 0 such that (12)-(14) hold for all C_1 > C_0. Indeed, (12) follows from (11) and the exponential Brascamp-Lieb inequality (6), with C_2, C_3 some fixed constants. The same argument using (10) gives (14).

We now prove (13) using chaining. Omitting the superscripts (1) in ĥ^{(1)} and D^{γ,(1)}_N, we claim that there exists K < ∞ such that for u, v ∈ R, (15) holds. Applying the orthogonal decomposition of the DGFF, and using the independence of the two parts of the decomposition, the claim reduces to an increment estimate for the Green's function. We now apply the representation of the lattice Green's function (see, e.g., [Law96, Proposition 1.6.3]):

G_{D_N}(u, v) = ∑_{y ∈ ∂D_N} H_{∂D_N}(u, y) a(y − v) − a(u − v),  (16)

where H_{∂D_N}(u, ·) is the harmonic measure of D_N seen from u and a is the potential kernel on Z², which satisfies the asymptotics

a(x) = (2/π) log |x| + D_0 + O(|x|^{−2}),

where D_0 is an explicit constant (see e.g. [Law96, Page 39] for a slightly weaker result, which nevertheless suffices for our needs). Substituting into (16) yields the desired increment bound; the same argument gives A_{D^γ_N} ≤ K|u − v|/(εN), and thus (15) is proved.
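The substitution at the end of the proof rests on the smoothness of the potential kernel; the mechanism can be sketched as follows (a sketch, assuming dist(v, ∂D_N), dist(v′, ∂D_N) ≥ εN and |v − v′| ≤ εN/2; K changes from line to line):

```latex
\Big|\sum_{y\in\partial D_N} H_{\partial D_N}(u,y)\,a(y-v)
    \;-\;\sum_{y\in\partial D_N} H_{\partial D_N}(u,y)\,a(y-v')\Big|
 \;\le\; \max_{y\in\partial D_N}\big|a(y-v)-a(y-v')\big|
 \;\le\; K\,\frac{|v-v'|}{\varepsilon N},
```

where the last inequality follows from the mean value theorem applied to (2/π) log |·| together with the O(|x|^{−2}) error in the asymptotics of a, since every y ∈ ∂D_N is at distance at least εN from both v and v′.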
Now fix a large k_0. For k ≥ k_0, let P_k be subsets of R that play the role of dyadic approximations: P_k contains O(2^k) vertices that are equally spaced, and the graph distance between adjacent points is εN 2^{−k}. For v ∈ R, denote by P_k(v) the k-th dyadic approximation of v, namely the vertex in P_k that is closest to v. We now apply the exponential Brascamp-Lieb inequality (6), (15), and a union bound to obtain, for some constant C_4, a tail bound on the increments along the chain. Since both K_3 2^{−k} and the tail probability are summable in k, we conclude that (13) holds.
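The chaining step above can be summarized as follows (a sketch):

```latex
\hat h(v) \;=\; \hat h\big(P_{k_0}(v)\big)
 \;+\; \sum_{k\ge k_0}\Big(\hat h\big(P_{k+1}(v)\big) - \hat h\big(P_k(v)\big)\Big).
```

Since |P_{k+1}(v) − P_k(v)| ≤ C εN 2^{−k}, the increment bound (15) combined with the exponential Brascamp-Lieb inequality (6) gives each increment a Gaussian tail with standard deviation of order 2^{−k/2}; a union bound over the O(2^k) pairs of adjacent points at level k then produces a probability bound summable in k, which is what drives (13).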