The Gaussian free field in interlacing particle systems

We show that if an interlacing particle system in a two-dimensional lattice is a determinantal point process, and the correlation kernel can be expressed as a double integral satisfying certain technical assumptions, then the moments of the fluctuations of the height function converge to those of the Gaussian free field. In particular, this shows that a previously studied random surface growth model with a reflecting wall has Gaussian free field fluctuations.


Introduction
The Gaussian free field is widely considered to be a universal object describing the fluctuations of heights of random surfaces. Previous work has rigorously shown this to be the case in specific models ([2], [10]). In this paper, we show that if an interlacing particle system in two dimensions can be described as a determinantal point process, and mild assumptions are made about the correlation kernel, then the covariances of the fluctuations of the height function are governed by a particular Green's function. A general formula for the Green's function is given.
In particular, we will use this general theorem to show that in an interlacing particle system that arises from the representation theory of the orthogonal groups, the relevant Green's function G is the Green's function for the Laplace operator with Dirichlet boundary conditions on the set H − D := {z ∈ C : Im z > 0 and |z| > 1}. It turns out that there is a map Ω from the surface to H − D. We will show that the fluctuations of the height function converge to a Gaussian process whose covariance is given by the pullback of G under Ω.

Particle System. Now let us describe this particle system, which was the initial motivation for this paper. Introduce coordinates on the plane as shown in Figure 1. Denote the horizontal coordinates of all particles with vertical coordinate m by y^m_1 > y^m_2 > · · · > y^m_k, where k = ⌊(m + 1)/2⌋. There is a wall on the left side, which forces y^m_k ≥ 0 for m odd and y^m_k ≥ 1 for m even. The particles must also satisfy the interlacing conditions y^{m+1}_{k+1} < y^m_k < y^{m+1}_k for all meaningful values of k and m. By visually observing Figure 1, one can see that the particle system can be interpreted as a stepped surface. We thus define the height function at a point to be the number of particles to the right of that point.
Define a continuous time Markov chain as follows. The initial condition is the particle configuration in which all the particles are as far to the left as possible, i.e. y^m_k = m − 2k + 1 for all k, m. This is illustrated in the left-most image in Figure 2. Now let us describe the evolution. We say that a particle y^m_k is blocked on the right if y^m_k + 1 = y^{m−1}_{k−1}, and it is blocked on the left if y^m_k − 1 = y^{m−1}_k (if the corresponding particle y^{m−1}_{k−1} or y^{m−1}_k does not exist, then y^m_k is not blocked). Each particle has two exponential clocks of rate 1/2; all clocks are independent. One clock is responsible for the right jumps, while the other is responsible for the left jumps. When a clock rings, the particle tries to jump by 1 in the corresponding direction. If the particle is blocked, then it stays still. If the particle is against the wall (i.e. y^m_{⌊(m+1)/2⌋} = 0) and the left jump clock rings, the particle is reflected, and it tries to jump to the right instead.
When y^m_k tries to jump to the right (and is not blocked on the right), we find the largest r ∈ Z_{≥0} ∪ {+∞} such that y^{m+i}_k = y^m_k + i for 0 ≤ i ≤ r, and the jump consists of all particles {y^{m+i}_k}_{i=0}^r moving to the right by 1. Similarly, when y^m_k tries to jump to the left (and is not blocked on the left), we find the largest l ∈ Z_{≥0} ∪ {+∞} such that y^{m+j}_{k+j} = y^m_k − j for 0 ≤ j ≤ l, and the jump consists of all particles {y^{m+j}_{k+j}}_{j=0}^l moving to the left by 1. In other words, the particles with smaller upper indices can be thought of as heavier than those with larger upper indices, and the heavier particles block and push the lighter ones so that the interlacing conditions are preserved.
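The block, push, and reflection rules above can be sketched in code. The following is a minimal simulation of the embedded jump chain on finitely many levels. It is an illustrative sketch rather than an exact implementation of the continuous-time dynamics: for simplicity, it re-checks the interlacing and wall constraints after each attempted move and rejects any violating move, instead of encoding every boundary case of the push rules.

```python
import random

def packed(M):
    """Densely packed initial condition y^m_k = m - 2k + 1 on levels 1..M."""
    return {(m, k): m - 2 * k + 1
            for m in range(1, M + 1) for k in range(1, (m + 1) // 2 + 1)}

def interlaced(y, M):
    """Check y^{m+1}_{k+1} < y^m_k < y^{m+1}_k and the wall constraints."""
    for (m, k), pos in y.items():
        if pos < (0 if m % 2 == 1 else 1):      # wall: >= 0 odd, >= 1 even
            return False
        if (m + 1, k) in y and not pos < y[(m + 1, k)]:
            return False
        if (m + 1, k + 1) in y and not y[(m + 1, k + 1)] < pos:
            return False
    return True

def step(y, M, rng):
    """One ring of a random particle's right/left clock (embedded chain)."""
    (m, k) = rng.choice(sorted(y))
    direction = rng.choice([+1, -1])
    if direction == -1 and y[(m, k)] == 0:
        direction = +1                          # reflection at the wall
    moved = dict(y)
    if direction == +1:
        if (m - 1, k - 1) in y and y[(m, k)] + 1 == y[(m - 1, k - 1)]:
            return y                            # blocked on the right
        i = 0                                   # push the column y^{m+i}_k
        while (m + i, k) in y and moved[(m + i, k)] == y[(m, k)] + i:
            moved[(m + i, k)] += 1
            i += 1
    else:
        if (m - 1, k) in y and y[(m, k)] - 1 == y[(m - 1, k)]:
            return y                            # blocked on the left
        j = 0                                   # push the diagonal y^{m+j}_{k+j}
        while (m + j, k + j) in y and moved[(m + j, k + j)] == y[(m, k)] - j:
            moved[(m + j, k + j)] -= 1
            j += 1
    # Simplification: reject any move that would break the constraints.
    return moved if interlaced(moved, M) else y

rng = random.Random(0)
y = packed(6)
for _ in range(2000):
    y = step(y, 6, rng)
assert interlaced(y, 6)
```

Running the chain from the packed configuration, the interlacing and wall constraints hold at all times, as the theory requires.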
In terms of the underlying stepped surface, the evolution can be described by saying that we add possible "sticks" with base 1 × 1 and arbitrary length of a fixed orientation with rate 1/2, remove possible "sticks" with base 1 × 1 and a different orientation with rate 1/2, and the rate of removing sticks that touch the left border is doubled. A computer simulation of this dynamics can be found at http://www.math.caltech.edu/papers/Orth_Planch.html.
This particle system falls in the universality class of the Anisotropic Kardar-Parisi-Zhang (AKPZ) equation with a wall. The KPZ equation was first introduced in [8] and is of interest to physicists, see [5]. Similar Markov chains have been previously studied in [2] without the wall, and in [13] with a different ("symplectic") interaction with the wall.

More general particle systems. By following the proof, the author realized that a more general statement could be proved. If a point process in a two-dimensional lattice is determinantal, and the correlation kernel can be expressed as a double integral with certain technical assumptions (see Definition 2.1 below), then the moments of the fluctuations of the height function are governed by a Green's function. The exact statement is Theorem 2.3. We can then use this theorem to determine the Green's function for the specific point process described above.
Motivations. There are three reasons for proving the results in the more general case. The first reason is that the proofs are not much more difficult. The second reason is that it is easier to check the conditions for the general case than it is to repeat the full calculations. The third reason is that the general result tells us that the formula for the Green's function only depends on G_ν, and therefore should only depend on the horizontal movement of the particles. For instance, in the model of [2], the expression for G was explicitly computed. In the model of [6], the particles had two different jump rates, and the expression for G was the same. Our general results should imply, for example, that G will be the same for any number of different jump rates. In order for a model to have a different expression for G, there should be some change along the horizontal direction. For example, in the model considered in this paper, a reflecting wall is added to the model of [2], and we see that G is changed.
The correlation kernels in previous problems ( [4], [6]) should satisfy the conditions of the more general case, although the author has not rigorously checked the details.
Conjectures. There are several statements which should be true, but are not pursued in this paper. One of them concerns the one-point fluctuations. Namely, after a scaling of √(log N), the fluctuations of the height function at a single point should converge in moments to a standard Gaussian. The proof would consist mainly in proving that the variance is proportional to log N, which by [12] immediately implies the convergence to a Gaussian.
The appropriate scaling was first demonstrated in [14] using a renormalization group argument. Using this convergence, one can then show that the convergence to the Gaussian free field occurs in distribution, not just in moments. Analogous (rigorous) statements are found as Theorem 1.2 and Proposition 5.6 in [2].
It should also be possible to prove similar results for unequal times τ_1 ≠ τ_2, but this was also not pursued.

Organization of Paper. In section 2.1, the theorem for more general particle systems (Theorem 2.3) is stated. In sections 2.2 and 2.3, the algebraic and analytic steps of the proof are given. In section 3, it is shown that the particle system described above satisfies the conditions of Theorem 2.3. Section 4.2 deals with the necessary asymptotic analysis.

Statement of the Main Theorem
Suppose we have a family of point processes on the space 𝔛 = Z × Z_{≥1} which runs over time t ∈ [0, ∞). (Note that these are different co-ordinates from the introduction.) In other words, at any time t, the system selects a random subset X ⊂ 𝔛. If (x, n) ∈ X, then we say that there is a particle at (x, n). For any k ≥ 1 and t ≥ 0, let ρ^t_k : 𝔛^k → [0, 1] be defined by ρ^t_k(x_1, n_1, . . . , x_k, n_k) = P(there is a particle at (x_j, n_j) at time t for each j = 1, . . . , k). Assume there exists a function K such that ρ^t_k(x_1, n_1, . . . , x_k, n_k) = det[K(x_i, n_i; x_j, n_j, t)]_{i,j=1}^k.
The maps ρ^t_k and K are called the kth correlation function and the correlation kernel, respectively. A function c on 𝔛 × 𝔛 is called a conjugating factor if there exists another function C on 𝔛 such that c(x_1, n_1; x_2, n_2) = C(x_1, n_1)/C(x_2, n_2). Note that if c is a conjugating factor, then conjugation leaves every determinant det[K(x_i, n_i; x_j, n_j, t)] unchanged, since the factors of C cancel along each permutation. Two kernels K and K̃ are called conjugate if K̃ = cK for some conjugating factor c.
If a correlation kernel exists, the point process is called determinantal. On a discrete space, a point process is determined uniquely by its correlation functions (see e.g. [9]). Therefore, if we are given two determinantal point processes on a discrete space with conjugate kernels, they must have the same law.
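The definitions above can be illustrated numerically. The sketch below uses the discrete sine kernel, a standard example chosen purely for illustration (it is not the kernel of this paper), and checks that conjugating a kernel by c(x, y) = C(x)/C(y) leaves the determinantal correlation functions unchanged.

```python
import math

def sine_kernel(x, y):
    # Discrete sine kernel with density 1/2; a standard example of a
    # correlation kernel (not the kernel studied in this paper).
    if x == y:
        return 0.5
    return math.sin(math.pi * (x - y) / 2) / (math.pi * (x - y))

def rho(points, K):
    """k-point correlation function: det [K(x_i, x_j)]_{i,j}."""
    mat = [[K(a, b) for b in points] for a in points]
    def det(m):  # cofactor expansion, fine for small k
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j] *
                   det([row[:j] + row[j + 1:] for row in m[1:]])
                   for j in range(len(m)))
    return det(mat)

# Conjugating the kernel by c(x, y) = C(x)/C(y) leaves every correlation
# function unchanged, so conjugate kernels define the same point process.
C = {0: 2.0, 1: -3.0, 5: 0.25}
def conjugated(x, y):
    return C[x] / C[y] * sine_kernel(x, y)

pts = [0, 1, 5]
assert math.isclose(rho(pts, sine_kernel), rho(pts, conjugated))
```

The one-point function here is ρ_1(x) = K(x, x) = 1/2, the density of the sine process.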
The set Z × {n} is called the nth level. Given a subset X ⊂ 𝔛, let m_n be the cardinality of the set X ∩ (Z × {n}). Assume that the numbers m_n take constant finite values which are independent of the time parameter t. In words, this means that the number of particles on the nth level is always m_n. Further assume that m_n ≤ m_{n+1} ≤ m_n + 1 for all n. Writing x^n_1 > · · · > x^n_{m_n} for the particles on the nth level, we say that X is interlacing if consecutive levels interlace, with the extra particle on level n + 1 appearing on the left when m_{n+1} = m_n + 1. Assume that at any time t, the system almost surely selects an interlacing subset. Let δ_n equal m_{n+1} − m_n.
Define the height function by h(x, n, t) = #{y > x : (y, n) ∈ X at time t}. In words, h counts the number of particles to the right of (x, n) at time t.
We wish to study the large-time asymptotics of this particle system. Fix a scaling parameter N, and let H_N denote the centered height function H_N = h − E[h] under this scaling. In words, H_N is the fluctuation of the height function around its expectation. Before stating the theorem, we need to state some more assumptions on the kernel.
Definition 2.1. With the notation above, a determinantal point process on Z × Z_{≥1} is normal if all of the following hold:
• For all η, τ > 0, the limit Ω(q_2(η, τ) − 0, η, τ) exists and is a positive real number.
• K is conjugate to some K̃ such that (3)-(8) hold for some integer L.
• For any l ≥ 3, condition (11) holds for the indefinite integral below, where the sum is taken over all l-cycles in S_l and the indices are taken cyclically.
• For any finite interval [a, b], G ∈ C^2[a, b], and the Lebesgue measure of the set {x ∈ [a, b] : I′(x) ∈ 2πZ + (−δ, δ)} is suitably controlled (this is the assumption used in Lemma 4.1).

The following remarks will help explain the definition.
(2) Since G(z̄) is the complex conjugate of G(z), G has a critical point at both Ω and Ω̄. As Ω approaches the real line, the two critical points coalesce into a triple zero, so G′′(t, η, τ) converges to 0 as t approaches q_2(η, τ). We need control over how quickly this convergence to 0 occurs, in order to control the behavior near the boundary of D. More specifically, it controls the bound in Proposition 4.11.
There is a heuristic explanation for why (2) should hold. The function G has two critical points which coalesce into a triple zero. The simplest example of such a function is G(t, x) = x^3/3 − tx as t approaches 0. In this case, the solution to G′(t, x) = 0 is Ω(t, x) = t^{1/2}. Then G′′(t, Ω(t, x)) = 2t^{1/2}.
(4) It is common for the kernel to be expressed in this form; previous examples are [4] and [2]. If the kernel has a different expression with the same asymptotics as in Propositions 4.5 and 4.11, the results still hold.
(5) In particular, (11) holds if there always exist u-substitutions and an expression Y such that the integrand factors appropriately, where z_{l+1} = z_1 and u_{l+1} = u_1. This is because of Lemma 7.3 in [7], which refers back to [1].

(6) This is a technical condition which allows Lemma 4.1 to be applied.

We can now state the main theorem.
Theorem 2.3. Suppose we are given a normal determinantal point process. For 1 ≤ j ≤ k, let κ_j = (ν_j, η_j, τ) be distinct points in D, and let Ω_j = Ω(ν_j, η_j, τ). Define the function G on the upper half-plane to be the Green's function described above. Then

lim_{N→∞} E[H_N(κ_1) · · · H_N(κ_k)] = Σ_{σ ∈ F_k} Π_{j=1}^{k/2} G(Ω_{σ(2j−1)}, Ω_{σ(2j)}) when k is even, and 0 when k is odd,

where F_k is the set of all involutions in S_k without fixed points (equivalently, perfect pairings of {1, . . . , k}).
Remark 2.4. We note that these are the moments of a linear family of Gaussian random variables: see Appendix A. Using the results of [12], it should be possible to show that H_N(κ)/√(Var H_N(κ)) converges to a Gaussian, but this was not pursued.
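The combinatorial structure of the right-hand side of Theorem 2.3 can be made concrete by enumerating the fixed-point-free involutions (equivalently, perfect pairings). A small sketch, with a placeholder function G supplied by the caller:

```python
import math

def pairings(elems):
    """All partitions of elems into unordered pairs; these correspond to
    the fixed-point-free involutions of the symmetric group on elems."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for p in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + p

def moment(omegas, G):
    """Right-hand side of Theorem 2.3: a sum over pairings of products of
    G; zero when the number of points is odd."""
    if len(omegas) % 2:
        return 0.0
    return sum(math.prod(G(omegas[a], omegas[b]) for a, b in p)
               for p in pairings(list(range(len(omegas)))))

# There are (k-1)!! pairings: 3 for k = 4, 15 for k = 6.
assert len(list(pairings([0, 1, 2, 3]))) == 3
# With all points equal and G constant equal to 1, this reproduces the
# Gaussian moment E[X^4] = 3 for a standard Gaussian X.
assert moment([0, 0, 0, 0], lambda a, b: 1.0) == 3.0
```

This is exactly the Wick/Isserlis structure of Theorem A.1 in the appendix.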

Algebraic steps in proof of Theorem 2.3
The most natural way to view 𝔛 is as a square lattice. However, it turns out that a hexagonal lattice is more useful. To obtain the hexagonal lattice, take the nth level and shift it to the right by (n + 1)/2 − m_n. See Figure 3. Figure 3 also shows that the particle system can be interpreted as lozenges. Each lozenge is a pair of adjacent equilateral triangles. See Figure 4.
By setting the location of each triangle to be the midpoint of its horizontal side, each lozenge can be viewed as a quadruple (x, n, x′, n′), where the black triangle is located at (x, n) and the white triangle is located at (x′, n′). For example, in Figure 3 there are lozenges (1, 3, 1, 3), (2, 3, 2, 4), and (0, 3, 1, 4). The three types of lozenges can be described as follows. For lozenges of type I, (x′, n′) = (x, n). For lozenges of type II, (x′, n′) = (x − 1 + δ_n, n + 1). For lozenges of type III, (x′, n′) = (x + δ_n, n + 1). Note that a lozenge of type I is just a particle.
We say that (x, n, x′, n′) ∈ 𝔛 × 𝔛 is viable if (x′, n′) = (x, n), (x − 1 + δ_n, n + 1), or (x + δ_n, n + 1). A sequence (x_1, n_1, x′_1, n′_1), . . . , (x_k, n_k, x′_k, n′_k) of viable elements is non-overlapping if (x_1, n_1), . . . , (x_k, n_k) are all distinct from each other and (x′_1, n′_1), . . . , (x′_k, n′_k) are also all distinct from each other. We do, however, allow the possibility of (x_i, n_i) = (x′_j, n′_j).

Figure 4: Lozenges of types I, II, and III, respectively. Note that lozenges of type I occur exactly at the same places as particles.
The statement and proof of the next proposition are similar to Theorem 5.1 of [2].
Remark 2.6. The equations (3)-(8) can now be intuitively understood. Equation (3) says that each black triangle is located in exactly one of the three lozenges around it, and equation (4) makes an identical statement for white triangles. Equations (5) and (7) say that lozenges of type II almost surely do not occur far to the right of the particles, with (5) controlling the off-diagonal entries in the determinant and (7) controlling the diagonal entries. Similarly, equations (6) and (8) say that lozenges of type III almost surely do occur far to the right of the particles. This intuition will be exploited in the proof of Theorem 2.5.
Proof. We proceed by induction on the number of lozenges that are not of type I. When this number is zero, the statement reduces to (1) and (2). For any set S = {(x_1, n_1, x′_1, n′_1), . . . , (x_k, n_k, x′_k, n′_k)} of non-overlapping, viable elements, let P(S) and D(S) denote the left and right hand sides of (12), respectively. First, as a preliminary statement, we prove that if (x_{k+1}, n_{k+1}) ≠ (x_r, n_r) for 1 ≤ r ≤ k, then (13) holds. Note that if D is replaced by P in (13), the statement is immediate, since the black triangle at (x_{k+1}, n_{k+1}) must be contained in exactly one lozenge.
In a similar manner, if (x′_{k+1}, n′_{k+1}) ≠ (x′_r, n′_r) for 1 ≤ r ≤ k, then (4) implies (14). Again, the statement holds if D is replaced by P. Now let us prove that D and P still agree if we add a lozenge of type II to S. Suppose that (x, n, x − 1 + δ_n, n + 1) is viable and that S ∪ {(x, n, x − 1 + δ_n, n + 1)} is non-overlapping. Then equation (13) is equivalent to (15), and the same holds for P instead of D. By the induction hypothesis, (15) implies (16). Assume for now that (x_r, n_r) ≠ (x + δ_n, n + 1) for 1 ≤ r ≤ k. Then we can apply equation (14), and the same statement holds for P. If S ∪ {(x + 1, n, x + δ_n, n + 1)} is non-overlapping, then (15) is again applicable. We repeatedly apply (15) and (16) as often as possible. This cannot be done indefinitely: since lozenges of type II almost surely do not appear when we look far to the right of the particles, (15) and (16) can only be applied finitely many times. The procedure therefore terminates, in one of two ways. In the first case, S ∪ {(x + M + 1, n, x + M + δ_n, n + 1)} is not non-overlapping. This implies D(S ∪ {(x + M + 1, n, x + M + δ_n, n + 1)}) = 0 (because two of the rows are identical) and P(S ∪ {(x + M + 1, n, x + M + δ_n, n + 1)}) = 0 (because a triangle cannot be in two different lozenges at the same time). Thus, D and P agree. A similar argument holds in the second case. Thus, D and P agree whenever a lozenge of type II is added to S.
An identical argument holds for type III lozenges, except that we use (6) and (8) instead of (5) and (7).
We have been describing a lozenge as a quadruple (x, n, x′, n′). It can also be described as (x′, n′, λ), where (x′, n′) is the location of the white triangle and λ ∈ {I, II, III} is the type of the lozenge. Thus the proposition can be restated as the following statement.
Proof. This is a result of the correspondences above.

There are two different formulas for the height function. One formula is h(x, n) = Σ_{s>x} 1(lozenge of type I at (s, n)).
It is possible to complete the proof using only (17). However, when there are multiple points on one level, i.e. not all η_1, . . . , η_k are distinct, the computation becomes much more complicated. This is because lozenges of type I will appear in multiple sums of the form (17). We can avoid this difficulty by introducing another formula, (19), for the height function, where, for n < n′, the quantity (20) can be expressed as a sum of terms of the form (21).

Lemma 2.8. Assume that the following sets are disjoint. Then (22) holds, where the matrix blocks are as displayed.

Proof. By applying Corollary 2.7 to (17) and (19), we see that the expectation of the product of the H_{n_l, n′_l}(x_l) equals the right hand side of (22) with the (1 − δ_{ij}) terms removed. It is well-known that subtracting the expectation corresponds to putting zeroes on the diagonal. For example, this is noticed in the proof of Theorem 7.2 of [7].
Write the determinant in (22) as a sum over permutations σ in S_k. If the cycle decomposition of σ contains the cycle (c_1 c_2 . . . c_r) of length r and M denotes the matrix in the right hand side of (22), then the contribution from σ contains the factor M_{c_1 c_2} M_{c_2 c_3} · · · M_{c_r c_1}, times factors corresponding to the other cycles of σ. Let ψ_{c_ι} denote s_{c_ι} if 1 ≤ c_ι ≤ k′, and p_{c_ι} if k′ < c_ι ≤ k. Since the sum over ψ_{c_ι} only affects the matrix terms M_{c_{ι−1} c_ι} and M_{c_ι c_{ι+1}}, the contribution from σ can be expressed as a product over the cycles in the cycle decomposition of σ.
Note that if σ fixes any points, then the corresponding contribution is zero because all the diagonal entries are zero.
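The observation that fixed points contribute nothing can be checked directly from the Leibniz expansion: for a matrix with zero diagonal, any permutation with a fixed point picks up a zero factor, so only fixed-point-free permutations survive. A small sketch:

```python
from itertools import permutations

def det_leibniz(M):
    """Determinant via the Leibniz sum over permutations."""
    n = len(M)
    total = 0.0
    for sigma in permutations(range(n)):
        visited, cycles = [False] * n, 0
        for i in range(n):          # sign(sigma) = (-1)^(n - #cycles)
            if not visited[i]:
                cycles += 1
                j = i
                while not visited[j]:
                    visited[j] = True
                    j = sigma[j]
        prod = (-1) ** (n - cycles)
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += prod
    return total

# For a 3 x 3 matrix with zero diagonal, the only surviving permutations
# are the two 3-cycles, both of sign +1:
M = [[0.0, 2.0, 3.0],
     [4.0, 0.0, 5.0],
     [6.0, 7.0, 0.0]]
assert det_leibniz(M) == 2 * 5 * 6 + 3 * 4 * 7  # = 144
```

This is the mechanism by which subtracting the expectation (zeroing the diagonal) removes all permutations with fixed points from the moment computation.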

Analysis steps in proof of Theorem 2.3
In (22), set x_j = [Nν_j], n_l = [Nη_l], and t = Nτ. Our goal is to find the limit of (22) as N → ∞. Expanding the determinant into a sum over σ ∈ S_k, we just saw that the contribution from a fixed σ is of the form (23). First note that if any of the ψ_{c_i} denotes p_{c_i}, then the contribution vanishes in the limit. This is because each M_{c_j c_{j+1}} is proportional to 1/N (by Proposition 4.5), so M_{c_1 c_2} M_{c_2 c_3} . . . M_{c_r c_1} is proportional to N^{−r}, but the sum is only taken over O(N^{r−1}) terms. Therefore, (20) can be expressed as a single term of the form in (21), and in this term k′ = k. We will now prove (stated as Theorem 2.10 below) the limiting identity for the cyclic products. Once this is proven, (11) implies that the total contribution from S_k − F_k equals zero. When l = 2, the right hand side is just G(Ω_1, Ω_2), completing the proof of Theorem 2.3.
Proposition 2.9 concerns sums over y of products K(x_1, n_1, y, n_2, t)K(y, n_2, x_3, n_3, t).

Proof. Let G_2(z) denote G([y/N], η_2, τ, z), let θ_2 denote θ([y/N], η_2, τ) and Ω_2 denote Ω(y/N, η_2, τ). Fix some β ∈ (−1/2, 0) and split the sum into two parts: the first part runs from Nν_2 to N(q_2 − N^β), while the second runs from N(q_2 − N^β) to Nq_2. Since there are no particles to the right of Nq_2 in the limit N → ∞, the sum from Nq_2 to ∞ can be ignored. It is common to refer to the first sum as the bulk and the second sum as the edge. First examine the bulk. By Proposition 4.5, the product K(x_1, n_1, y, n_2, t)K(y, n_2, x_3, n_3, t) expands into sixteen terms, where (· · ·) denotes the fifteen terms other than the one displayed. First let us examine the error term in the bulk.
For the four terms with Ω̄_2, make the substitution z = Ω̄(ν, η_2, τ); the path of integration is then Γ_−. Now we sum over the edge. By Proposition 4.11 and (2) of Definition 2.1, the edge sum admits an upper bound which is o(1/N) as long as β < 0.
The indices are taken cyclically.
Proof. By Proposition 4.5, the product has 4^l terms. Each application of Proposition 2.9 decreases the number of terms by a factor of 4, so repeated applications of Proposition 2.9 yield the result.

Particle system with a wall
We now return to the particle system with a reflecting wall described in the Introduction. For notational reasons, it is more convenient to use different coordinates. Instead of labeling the levels as 1, 2, 3, . . ., it is more convenient to label them as (1, −1/2), (1, 1/2), (2, −1/2), (2, 1/2), . . .. If the (n_1, a_1) level is at least as high as the (n_2, a_2) level, then this will be denoted (n_1, a_1) ⪰ (n_2, a_2). This happens if and only if 2n_1 + a_1 ≥ 2n_2 + a_2. Using the notation of Section 2.2, m_{(n,a)} = n and δ_{(n,a)} = a + 1/2. Along the horizontal direction, we will use a square lattice, so that the particles live on N instead of 2N or 2N + 1.
Let m_{a_1}(dz) denote the relevant orthogonality measure, and let the (normalized) Jacobi polynomials with parameters (±1/2, −1/2) be as displayed. The normalization is set so that the stated identity holds for any nonzero complex number. Let W^{(a,−1/2)}(s) be defined for nonnegative integers s, and note that for a = ±1/2, the orthogonality relation (28) holds. By Theorem 4.1 of [3], the correlation functions are determinantal with kernel K(n_1, a_1, s_1, n_2, a_2, s_2, t) given by a double contour integral, where the z-contour is the unit circle and the v-contour is a circle centered at the origin with radius bigger than 1.
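The kind of orthogonality relation used here can be verified by hand in the simplest Jacobi case: with parameters (−1/2, −1/2) the Jacobi polynomials are proportional to the Chebyshev polynomials T_n, and the substitution x = cos θ flattens the weight. (The paper's polynomials have parameters (±1/2, −1/2); the nearby (−1/2, −1/2) case is chosen below only because it is the easiest to check, so this is an illustration of relation (28), not a reproduction of it.)

```python
import math

def jacobi_weight_integral(f, n_grid=20000):
    """Integral of f(x) (1-x)^(-1/2) (1+x)^(-1/2) dx over [-1, 1] via the
    substitution x = cos(theta), which turns the weight into the flat
    measure d(theta) on [0, pi]; midpoint rule."""
    total = 0.0
    for i in range(n_grid):
        theta = (i + 0.5) * math.pi / n_grid
        total += f(math.cos(theta))
    return total * math.pi / n_grid

def cheb_T(n, x):
    """Chebyshev polynomial T_n, proportional to the Jacobi polynomial
    with parameters (-1/2, -1/2)."""
    a, b = 1.0, x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * x * b - a
    return b

# Orthogonality: <T_m, T_n> = 0 for m != n, and pi/2 for m = n >= 1.
inner = lambda m, n: jacobi_weight_integral(lambda x: cheb_T(m, x) * cheb_T(n, x))
assert abs(inner(2, 3)) < 1e-8
assert math.isclose(inner(3, 3), math.pi / 2, rel_tol=1e-6)
```

Under x = cos θ the inner product becomes ∫_0^π cos(mθ) cos(nθ) dθ, which vanishes for m ≠ n, exactly the mechanism behind the relation (28).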
Theorem 3.1. The determinantal point process is normal, and the Green's function is as displayed above.

Once we prove the point process is normal, the expression for the Green's function follows from Theorem 2.3. In section 3.2, we show that the third condition in Definition 2.1 is satisfied. In section 3.3, we show that the fourth and second conditions are satisfied. Since these conditions are the hardest to prove, we will focus mainly on their proofs. The fifth condition follows from the substitution u_j = z_j + z_j^{−1} and (5) of Remark 2.2. Define c_0(n_1, a_1, s_1, n_2, a_2, s_2) = C_0(n_1, a_1, s_1)/C_0(n_2, a_2, s_2) for a suitable function C_0. Then K̃ = c_0 K satisfies (3)-(8) for L = 1.

Algebraic steps in proof of Theorem 3.1
Proof. Using (26)-(27) and the orthogonality relation (28), it is straightforward to check that (3) and (4) hold. What happens is that in the left hand side of (3) or (4), one obtains six terms, three of which come from (29) and three of which come from (30). The three terms from (29) always sum to 0, while the three terms from (30) sum to 0 or 1. Now we will prove (7)-(8) when a_1 = −1/2. The term (30) equals zero, so we only need to look at (29). Explicitly, we want the asymptotics when s, s′ → ∞ in such a way that s − s′ is 0 or 1. Expand the parenthetical expression v^{s′+1/2} − v^{−s′−1/2} to get two terms, each of which is a double integral. Since 1 = |z| < |v|, the term with v^{−s′−1/2} goes to zero. For the remaining term, expand z^s + z^{−s} to get two terms. For the term with z^s, make the substitution z → z^{−1}. Now deform the z-contour to the circle |z| = 1 + 2ε and the v-contour to the circle |v| = 1 + ε, where ε > 0. With these deformations, |v| < |z|, so the double integral goes to zero. However, residues are picked up when the contours pass through each other. There is a residue at z = 1 which equals −2, and a residue at z = 0 which equals 0 for s ≥ s′ and 2 for s′ > s. Since c_0(n, −1/2, s, n, 1/2, s) = −1/2, this proves (7) and (8) when a_1 = −1/2. The case when a_1 = 1/2 is similar. It remains to show (5) and (6). When considering the product of two kernels, we obtain a quadruple integral. After the substitutions z_1 → z_1^{−1} and v_2 → v_2^{−1}, the part of the integrand that depends on s is just (z_1/v_2)^s. Therefore, deforming contours so that |v_2| > |z_1| gives (5) and (6).

Analysis steps in proof of Theorem 3.1
For this section, we need a slightly different expression for the kernel. By (40)-(42) of [3], the kernel K(n_1, a_1, s_1; n_2, a_2, s_2, t) can be written as a double contour integral, where θ is any real number, and the arc from e^{−iθ} to e^{iθ} is outside the unit circle and does not cross (−∞, 0]. After the appropriate change of variables, we can take D to be the set displayed. Let Ω_± denote Ω(±ν, η, τ). Then Ω_+ Ω_− ≡ 1.
For each integral, deform the contour to a circular arc of constant radius. Then the absolute value of the integrand is maximized at the endpoints. To see this, note that |z| is constant on the arc, so it suffices to analyze the quantity R introduced above. R increases as we move counterclockwise along the arc in the upper half-plane, and R decreases as we move counterclockwise along the arc in the lower half-plane. If n_1 ≥ n_2, then R^{(n_1−n_2)/2} is maximized when R is maximized, which occurs at the endpoints since the contour crosses (0, ∞). Likewise, if n_1 < n_2, then R is minimized at the endpoints. Thus, in both cases, R^{(n_1−n_2)/2} is maximized at the endpoints. Using a standard asymptotic analysis (see e.g. chapter 3 of [10]), we get that the asymptotic expansion of (40) holds for some constants c_1, c_2. To complete the proof, notice that if the degeneracy held for some selection of ±, then the asymptotic expansion of the kernel would depend on ξ. But ξ was arbitrarily selected, so this is impossible. Now that the fourth condition has been proved, it remains to show that the second condition in Definition 2.1 holds. Recall that Ω(ν, η, τ) is the root of p(ν, η, τ, z) that lies in the upper half-plane, where p is the polynomial from Lemma 3.3. We thus need to solve p(q_2(η, τ) − ε_1, η, τ, Ω(q_2(η, τ), η, τ) + ε_2) = 0.
We bound this sum in two cases.

Case 1. Assume I′(t_s) ∉ 2πZ + [−δ, δ]. For s − N^{1/3} ≤ k ≤ s + N^{1/3}, Taylor's theorem gives an expansion with remainder evaluated at some c_k between t_s and t_k. Using the stated inequality, together with the estimates (42) and (43) valid for |k − s| ≤ N^{1/3}, and using (41) and the definitions of I_1 and I_2, the claimed bound follows.

Case 2. Assume that I′(t_s) ∈ 2πZ + (−δ, δ). In this case, only a simple estimate is needed.

Since the estimate in case 1 is much better than the estimate in case 2, we need an upper bound on how frequently case 2 can occur. In other words, we need an upper bound on the measure of the set {x ∈ [a, b] : I′(x) ∈ 2πZ + (−δ, δ)}; this is exactly what was assumed, where g and I satisfy the same assumptions as in Lemma 4.1. Further suppose that the error term o(N^d) is uniform.
Then as N → ∞, the sum Σ_{x=⌊aN⌋+1}^{⌊bN⌋} converges as claimed.

Proof. Let ε > 0. By assumption, there exists N_0 such that the uniform error bound holds for N > N_0. By summing over x, we obtain the claimed estimate, and the Proposition now follows from Lemma 4.1. Furthermore, the bound holds for N > 3BV(g)/ε.

Proposition 4.4. Suppose f : Z_{≥0} → R is a function admitting, for each t > 0, an expansion of the stated form, where g is a function on [a, b] of bounded variation, and suppose further that the error term is uniform.

Proof. Let ε > 0. By assumption, there exists N_0 such that the error bound holds for N > N_0. We can see this by noting that when t = (⌊aN⌋ + 1)/N, ⌊tN⌋ = ⌊aN⌋ + 1. Furthermore, increasing x by 1 causes t to increase by 1/N, which causes ⌊tN⌋ to increase by 1. By summing over x, we obtain the claimed estimate. The Proposition now follows from Lemma 4.3.
Make the substitutions s = G_1(Ω_1) − G_1(u) and t = G_2(Ω_2) − G_2(w). In the neighborhoods of u = Ω_1 and w = Ω_2, the stated expansions hold, and we obtain the claimed expression, where the last equality follows from G(z̄) being the complex conjugate of G(z). The 4 appears because the maps u → s and w → t are both two-to-one. It still remains to show that the error term is correct; the remainder of this section is devoted to proving this. The idea is to reduce the double integral to progressively simpler forms. First, by a reparametrization, the integral over two arcs in C can be written as an integral in R^2. Second, by using a Taylor approximation, the integral in R^2 can be written as a product of two integrals in R, each of the form ∫ e^{−N R(t)} φ(t) dt, where R(t) has a maximum t_max in the interval of integration. Third, by using the implicit function theorem, this integral reduces to the form ∫ e^{−N t^2} g(t) dt, where the interval of integration is a small neighbourhood of t_max. Fourth, this last integral is a slight generalization of ∫ e^{−N t} g(t) dt, which is dealt with by the well-known Watson's lemma (Lemma 4.6 below). Since the first two steps have been done before (see Chapters 3 and 4 of [10]), we will focus mostly on the third and fourth steps. Suppose that φ(t) = t^σ g(t), where σ > −1 and g is continuously differentiable in some neighbourhood of t = 0. Then the conclusion of Watson's lemma holds for any N > 1 and s ∈ [0, T]. Alternatively, suppose that φ is twice continuously differentiable in some neighbourhood of t = 0. Then the corresponding bound holds for any N > 1, where m = min(α, β).

Proof. See pages 58-60 of [10].
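The leading-order behavior invoked in the third and fourth steps can be checked numerically: for a smooth g and a quadratic critical point at t = 0, ∫ e^{−N t^2} g(t) dt ≈ g(0) √(π/N). A quick sketch (the choice g(t) = cos t is an arbitrary test function):

```python
import math

def integral(f, a, b, n=200000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def laplace_leading(g0, N):
    """Leading-order Laplace asymptotics for the integral of
    e^(-N t^2) g(t): g(0) * sqrt(pi / N)."""
    return g0 * math.sqrt(math.pi / N)

N = 400
exact = integral(lambda t: math.exp(-N * t * t) * math.cos(t), -1.0, 1.0)
approx = laplace_leading(1.0, N)
# The relative error is of order 1/N, consistent with the expansion.
assert abs(exact - approx) / approx < 1e-3
```

The next-order correction is of size 1/(4N) relative to the leading term, which is exactly the kind of error term controlled below.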
Lemma 4.8. Suppose that R and φ are infinitely differentiable in some neighbourhood of t_max. Also suppose that t_max is a local maximum of R and R′′(t_max) < 0. Then the stated expansion holds for any N > 1 and s ∈ [0, T], where v(s) is an infinitely differentiable function solving (44). Before continuing, a few estimates on v(s) are needed.
Let v(s) be as in (44).
To estimate v′(s), let us first estimate R′. A Taylor expansion, together with the triangle inequality and part (b), yields a bound which, by (45) and (46), implies the desired estimate. To estimate the inverse of R′, use the displayed expansion, which, by setting a_1 = 2v(0)^{−1} and a_2 = 4v(0)^{−2} v′(0), implies the claimed bound. Multiplying by 2|s| finishes the proof of (c).
Corollary 4.10. Suppose that R and φ are infinitely differentiable in some neighbourhood of t_max. Also suppose that t_max is a local maximum of R and R′′(t_max) < 0. Let δ_1 and δ_2 be positive numbers such that m_2 := −R(t_max − δ_1) = −R(t_max + δ_2), and assume m_2 equals the right-hand side of (47). Then for any N > 1, the integral is bounded by a term of order 1/N plus a term e^{−Ns^2/2} Λ.
In Proposition 4.5, the error term blows up at the edge. Therefore a better bound is needed. To get this bound, we simply use the first term in Watson's lemma, as opposed to using two terms. Since the method of the proof is identical to before and the details are simpler, the proof will be omitted. The exact statement is the following.

Proposition 4.11. For j = 1, 2, let (ν_j, η_j, τ) ∈ D, let Ω_j denote Ω(ν_j, η_j, τ), let G_j(z) denote G(ν_j, η_j, τ, z), and let θ_j denote θ(ν_j, η_j, τ). With the assumptions in section 2.1, the double integral

(1/2πi)^2 ∫_{Γ_1} ∫_{Γ_2} exp(N G(ν_1, η_1, τ, u)) exp(N G(ν_2, η_2, τ, w)) f(u, w) dw du

satisfies the improved bound.

A Gaussian Free Field
This section is adapted from [11]. At the most fundamental level, a Gaussian free field is a Gaussian Hilbert space. It turns out to be equivalent to consider other Hilbert spaces, which we will consider below.
Let {e_j} be the eigenfunctions of the Dirichlet Laplacian on D which form an orthonormal basis for L^2(D), and let {λ_j} denote the corresponding eigenvalues. By the spectral theorem for compact self-adjoint operators, all the λ_j are negative. Suppose the ordering satisfies λ_1 ≥ λ_2 ≥ . . .. Define the operator on L^2(D) by (−∆)^a Σ β_j e_j = Σ (−λ_j)^a β_j e_j.
Define L^a(D) := (−∆)^a L^2(D) to be the set of all Σ β_j e_j such that Σ (−λ_j)^{−a} β_j e_j ∈ L^2(D). We may view L^a(D) as a Hilbert space with inner product (f, g)_a = ((−∆)^{−a} f, (−∆)^{−a} g). Note that when a < 0, the sum Σ β_j e_j may not converge in L^2(D), but it converges in the space of distributions. The higher moments can be computed from the following theorem.

Theorem A.1. If X_1, . . . , X_k are mean zero random variables such that a_1 X_1 + . . . + a_k X_k is Gaussian for all a_1, . . . , a_k ∈ R, then E[X_1 · · · X_k] = Σ_σ Π_{j=1}^{k/2} E[X_{σ(2j−1)} X_{σ(2j)}] for k even (and 0 for k odd), where the sum is over all involutions σ on {1, . . . , k} with no fixed point.
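The spectral definition of (−∆)^a can be made concrete on the interval D = (0, π), where e_j(x) = √(2/π) sin(jx) and λ_j = −j^2 (a standard example chosen for illustration). For a = 1 the operator is just −d^2/dx^2, which provides an independent check:

```python
import math

# Eigenfunctions of the Dirichlet Laplacian on D = (0, pi):
# e_j(x) = sqrt(2/pi) sin(j x), with eigenvalue lambda_j = -j^2.
def frac_laplacian(coeffs, a, x):
    """Apply (-Delta)^a spectrally to sum_j beta_j e_j:
    beta_j -> (-lambda_j)^a beta_j = j^(2a) beta_j."""
    return sum(b * (j ** (2 * a)) * math.sqrt(2 / math.pi) * math.sin(j * x)
               for j, b in coeffs.items())

coeffs = {1: 1.0, 3: 0.5}          # f = e_1 + e_3 / 2
# For a = 1, (-Delta)^1 f = -f'', and -d^2/dx^2 [sin(jx)] = j^2 sin(jx):
x = 0.7
lhs = frac_laplacian(coeffs, 1.0, x)
rhs = math.sqrt(2 / math.pi) * (1.0 * math.sin(x) + 0.5 * 9 * math.sin(3 * x))
assert math.isclose(lhs, rhs)
```

For non-integer a (e.g. a = 1/2) the same formula defines the fractional power directly on the coefficients, which is exactly how the spaces L^a(D) are built.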