Topics in loop measures and the loop-erased walk

These are notes based on a course that I gave at the University of Chicago in Fall 2016 on "Loop measures and the loop-erased random walk." This is not intended to be a comprehensive view but rather a personal selection of some key ideas, including some recent results.


Introduction
This is a collection of notes based on a course that I gave at the University of Chicago in Fall 2016 on "Loop measures and the loop-erased random walk". This was not intended to be a comprehensive view of the topics but instead a personal selection of some key ideas, including some recent results. This course was to be followed by a course on the Schramm-Loewner evolution (SLE), so there was some emphasis on the ideas that have become important in the study of conformally invariant processes. I will first give some history of the main results I discuss here; this can be considered a personal perspective on the development of (some of) the ideas. I will follow with a summary of the topics in this paper.

Some history
I started looking at the loop-erased random walk in my thesis [10], spurred by a suggestion by my advisor, Ed Nelson. My original motivation had been to try to understand the self-avoiding walk. Early in the investigation, I found out two things: the bad news was that this process was different from the self-avoiding walk, but the good news was that it was a very interesting process with many of the attributes of other models in critical phenomena. In particular, there was an upper critical dimension (in this case d = 4) and (conjecturally) conformal invariance in two dimensions. My thesis handled the easiest case d > 4. The four-dimensional case was significantly harder; I did not have the chance to discuss this in this course even though there is some recent work on the subject [19].
The interest in loop-erased random walk increased when the relationship between it and uniform spanning trees was discovered [23,25]. I believe there were several independent discoveries of this; one thing I know is that I was not one of the people involved! I found out about it from Robin Pemantle, who was trying to construct the infinite spanning tree and forest. He was able to use my results combined with the Aldous-Broder algorithm to show that the limit of the uniform spanning tree was a tree for d ≤ 4 and a forest for d > 4. I discuss one version of this construction (the limit of wired spanning trees) in Section 4.5. The argument here uses an algorithm found by David Wilson. Although the loop-erased random walk was first considered for simple random walk on the integer lattice, it immediately extends to loop-erased Markov chains. The study of Markov chains often boils down to questions in linear algebra, and this turns out to be true for the loop-erased walk. As we will see in these notes, analysis of the loop-erased walk naturally leads one to consider the following quantity for a subMarkov chain on a state space A = {x_1, ..., x_n}:

F(A) = \prod_{j=1}^{n} G_{A_j}(x_j, x_j),

where A_j = A \ {x_1, ..., x_{j-1}} and G denotes the Green's function. At first sight, this quantity looks like it depends on the ordering of the vertices. I first noticed that the quantity was the same for the reverse ordering {x_n, x_{n-1}, ..., x_1} when I needed to show the reversibility of the distribution of the LERW in [11]. The proof in that paper, which is not difficult and is Exercise 3 in these notes, shows the quantity is the same for any permutation of the indices. I did not notice this until conversations with Lon Rosen when he was working on a paper [27]. His calculations with matrices led to a measure on self-avoiding paths that looked like it could be the loop-erased measure; however, if that were the case we would need invariance under permutation, so this caused me to check it.
This fact arose again at a conference in Cortona in 1997 [26]. There are three things I remember about that conference: first, Cortona is a very pretty town; second, this is the first time I heard Oded Schramm discuss his ideas for what would become the Schramm-Loewner evolution; and finally, I was told about David Wilson's beautiful algorithm that uses loop-erased random walk to construct a uniform spanning tree. This algorithm was certainly a surprise for me, but I remember saying then that if it were true I could probably verify it quickly. In this case, I was right: the key fact is the invariance of F(A) under permutations of the indices. I published the short proof as part of a survey paper [12]; see [29] for Wilson's proof using "cycle popping". I believe that Marchal [24] was the first to identify F(A) as a determinant.
There were two major papers published in 2000 on loop-erased walk. In [7], Rick Kenyon used a relationship between dimers and spanning trees to prove the conjecture that the growth exponent for two-dimensional LERW is 5/4. He also showed how to compute a chordal exponent exactly. Oded Schramm [28] constructed what would later be proved to be the scaling limit for loop-erased random walk; it is now called the Schramm-Loewner evolution. We will not discuss the latter in these notes. We will not do Kenyon's proof exactly, but as mentioned below, the proof we discuss in these notes uses an important idea from that paper. Kenyon's result computed both a "chordal" and a "radial" exponent. A nice identity by Fomin [5] can be used to derive the chordal exponent; this derivation is in these notes. The radial exponent, from which the growth exponent is deduced, is more difficult.
The current theory of loop measures started with the paper [16] where the Brownian loop measure was constructed and used to interpret some of the computations done about SLE in [14]. (It was later realized that work of Symanzik had much of the construction; indeed, in the whole subject of loop measures one continually finds parts of the theory that are older than one realizes!) The random walk version was first considered in a paper with José Trujillo Ferreras [15] where a strong coupling was given between the random walk and Brownian loop measures. That paper did not actually give the correct definition of the random walk loop measure (it was not important for that paper), but we (with contribution by John Thacker as well) soon realized that a slightly different version was the one that corresponded to the loop-erased walk. The general theory of discrete time loop measures for general (positive) weights was given in Chapter 9 of [17].
At about the same time, Yves Le Jan was extending [16] by developing a theory of continuous time loop measures on discrete sample spaces (the case of continuous time random walk appears in John Thacker's thesis, which was never published). Le Jan used the continuous time loop soup to construct the square of the Gaussian free field. See [20] and references therein. This is similar to (but not exactly the same as) the Dynkin isomorphism, which itself is a generalization of ideas of Brydges, Fröhlich, and Spencer. Continuous times are needed to get a continuous random variable. When I was trying to learn Le Jan's work, I realized that one could also construct the field using the discrete time loop measure and then adding exponential random variables. This idea also appears later in Le Jan's work. This construction only gives the square of the field. A method to find the sign of the field was found by Lupu, and we discuss a version of this here, although not his complete construction. There is also a relationship with currents; here we give a somewhat novel treatment, but it was motivated by the paper of Lupu and Werner [22].
Another recent improvement to the loop measure theory is the consideration of nonpositive weights. Loop measures are very closely tied to problems of linear algebra and much (but certainly not all) of the theory of "Markov chains" can be extended to negative weights. There have been two recent applications of such weights: one is an extension of Le Jan's result to some Gaussian fields with some negative correlations [18] and the other is a loop measure proof of the Green's function for two-dimensional LERW [4,13]. The latter is an improvement of Kenyon's result, but it uses a key idea from his paper.

Summary of notes
Most of Section 2 sets up the notation for the notes. Much of the framework is similar to that in [17, Chapter 9], but it is done from scratch in order to show that nonpositive weights are allowable. When dealing with nonpositive weights some care is needed; if the weight is integrable (that is, if the weight as a function on paths is L^1 with respect to counting measure on paths), then most of the operations for positive weights are valid. For nonintegrable weights, some results will still hold, but some will not because many of the arguments involve interchange of sums over paths.
Loop erasure is the topic of Section 3. Here we only consider the deterministic transformation of loop erasure and see the measure it induces on paths. The expression involves the quantity F (A). The invariance of this quantity under permutations of the indices is discussed as well as the fact that it is a determinant of the Laplacian for the weight.
Section 4 discusses the loop-erased walk obtained from Markov chains. There are three main cases: transient chains, where loop-erasure is done on the entire path; chains on finite state spaces, where loop-erasure is done on the path stopped when it hits the boundary; and (some) recurrent chains, for which the LERW on infinite paths can be defined as a limit over finite state spaces. One main example is simple random walk in two dimensions. The relationship between the loop-erased walk and the Laplacian random walk is discussed. Wilson's algorithm to generate spanning trees is discussed in Section 4.4. The fact that the algorithm generates uniform spanning trees on graphs is surprising; however, once one is told this, verifying it takes little time (this is often true for algorithms). Combining this with the interpretation of F(A) as a determinant gives Kirchhoff's matrix-tree theorem as an almost immediate corollary. The next subsection shows a nice application of Wilson's algorithm to understand the uniform spanning tree or forest in Z^d; the algorithm is easily defined for infinite graphs, and it is not too difficult to show that this gives the same tree or forest as that obtained by a limit of "wired" spanning trees. We only touch on this subject: see [3] for a deeper description of such trees and forests.
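To make Wilson's algorithm concrete, here is a minimal Python sketch; the function names (`loop_erase`, `wilson_ust`) and the adjacency-list representation are our own choices for illustration, not from the text. Starting from a root, the algorithm repeatedly runs a random walk from a vertex not yet in the tree until the walk hits the tree, loop-erases the walk chronologically, and adds the resulting branch.

```python
import random

def loop_erase(path):
    """Chronological loop erasure: when a vertex repeats, erase the loop."""
    erased = []
    index = {}                      # vertex -> position in erased
    for v in path:
        if v in index:              # v closes a loop; erase it
            del erased[index[v] + 1:]
            index = {u: i for i, u in enumerate(erased)}
        else:
            index[v] = len(erased)
            erased.append(v)
    return erased

def wilson_ust(neighbors, root, rng=random):
    """Sample a spanning tree; neighbors maps vertex -> list of neighbors."""
    parent = {root: None}
    for start in neighbors:
        if start in parent:
            continue
        path = [start]              # random walk until it hits the tree
        while path[-1] not in parent:
            path.append(rng.choice(neighbors[path[-1]]))
        branch = loop_erase(path)
        for v, w in zip(branch, branch[1:]):
            if v not in parent:     # attach the loop-erased branch
                parent[v] = w
    return parent                   # parent pointers toward the root

# Example: spanning tree of the 3x3 grid graph
verts = [(i, j) for i in range(3) for j in range(3)]
nbrs = {v: [(v[0]+dx, v[1]+dy) for dx, dy in [(1,0),(-1,0),(0,1),(0,-1)]
            if (v[0]+dx, v[1]+dy) in verts] for v in verts}
tree = wilson_ust(nbrs, (0, 0))
assert sum(1 for v in tree if tree[v] is not None) == len(verts) - 1
```

By Wilson's theorem, the resulting tree is uniform over all spanning trees of the graph, although the sketch itself only exhibits that the output is a spanning tree.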
The (discrete time, discrete space) loop measures are introduced in Section 5. It is easiest to define the measure for rooted loops first, where it is just the usual measure with an extra factor. The utility of the measure comes from the number of different ways that one can get the same measure on unrooted loops. We also give more emphasis to another measure on rooted loops that uses an ordering of the vertices. It is the discrete analog of a Brownian bubble measure decomposition of the Brownian loop measure. This measure is often the most useful for calculations and estimations. In Section 5.1, we find the other expression for F(A) in terms of the exponential of the loop measure. In the next subsection, we define a soup (the terminology comes from [16]), which for a positive weight is a Poissonian realization from the loop measure. We extend the definition of the soup to nonpositive weights by considering the distribution of the soup. Some of the material in Sections 5.3 and 5.4 may be new. Here the "bubble soup" (which is a union of "growing loops") version of the loop soup is studied, and the soup is shown to be given by a negative binomial process (for the number of "elementary" loops). A particular case of this, which was known, is that the loop soup at intensity one corresponds to the loops erased in the loop-erasing procedure. This is made more explicit in Section 5.5. Section 6 discusses the results of Le Jan and Lupu about the Gaussian field. Some of the treatment here is new and, as in [18], applies to some negative weight fields. It also uses the relation with currents [1,22]. After defining the field and giving some basic properties, we study the measure on currents generated by the loop soup at intensity 1/2. We compute the distribution exactly in Theorem 2; this is a measure on discrete currents. The main work is a combinatorial lemma proved in Section 6.4.
We then get a continuous local time by adding independent exponential random variables for each visit to a vertex by the discrete soup. Given Theorem 2, we get the joint distribution of the current and the continuous local times, and by integrating out the currents we get a new proof of Le Jan's theorem. We then discuss Lupu's way of producing the signs for the field.
The final section deals with several questions concerning multiple loop-erased walks. The first is understanding the natural measure in terms of the loop measure. We then discuss Fomin's identity and two nontrivial applications. The first is the derivation of the chordal crossing exponent first calculated rigorously by Kenyon. The second is a start of the asymptotics of the SLE Green's function by Beneš, Viklund, and myself [4,13]. We do not give a complete proof of the latter result, but we do discuss how a loop measure with negative weight reduces the problem to several estimates about random walks.
I thank the members of the class for their comments and in particular Jeffrey Shen for pointing out a number of misprints.

Definitions and notations
Loop measures and loop-erased walks were first considered for random walks and, more generally, Markov chains. One of the first things that one learns about Markov chains on finite state spaces is that much of the initial theory quickly boils down to questions of linear algebra. However, the probabilistic interpretation gives new insights and in some cases new techniques, e.g., coupling.
The theory of (sub)Markov chains is therefore a study of (sub)stochastic matrices. There are times when one does not want to restrict one's study to matrices with nonnegative entries; indeed, many models in mathematical physics lend themselves very naturally to complex weights on objects. Much of the theory of loop measures also extends to complex weights, so we will allow them in our setup. A disadvantage of this is that we will need to start with a lot of notation and definitions. First-time readers may wish to consider the case of nonnegative entries first when trying to learn the material.
To be a little careful, we will adopt the following terminology. If Λ is a countable set, we will call φ : Λ → C a function or a weight. We will also call φ a measure on Λ if either φ ≥ 0 or \sum_{x ∈ Λ} |φ(x)| < ∞.

• A, ∂A are disjoint finite sets, and \overline{A} = A ∪ ∂A. We call the elements of \overline{A} vertices or sites. (There will be times that we allow infinite sets, but we assume finite unless stated otherwise.)

• We let E_A = A × A denote the set of directed edges in A. If (x, y) is a directed edge, we call x the initial and y the terminal point of the directed edge, respectively. Let E_{\overline{A}} be the set of directed edges in \overline{A} with at least one vertex in A. We will write bold-face e for directed edges. We say that edge e_2 follows edge e_1 if the terminal vertex of e_1 is the initial vertex of e_2. Note that we do allow self-edges, i.e., edges with the same initial and terminal point.

• Let \mathcal{E}_A denote the set of (undirected) edges in A, which can be viewed as equivalence classes of E_A under the equivalence (x, y) ∼ (y, x). Note that \mathcal{E}_A includes self-edges from x to x. We define \mathcal{E}_{\overline{A}} similarly. The word "edge" will mean undirected edge unless otherwise specified. We write e for undirected edges.

• A function q : E_{\overline{A}} → C is called a weight (on edges). Weights restricted to E_A are the same as square matrices indexed by A. A weight is symmetric if q(x, y) = q(y, x), in which case it is a function on \mathcal{E}_{\overline{A}}. It is Hermitian if q(x, y) = \overline{q(y, x)}.

• We say that p is a positive weight if p(e) ≥ 0 for all e. When we use p, P for a weight, the assumption will be that it is a positive weight. If we wish to consider complex weights, we will use q, Q. Of course, positive weights are complex, so results about complex weights apply to positive weights.

• If q is a weight, we will write |Q| for the matrix [|q(x, y)|]. Note that |q| is a positive weight.

• We will call p a Markov chain (weight) if [p(x, y)] are the transition probabilities of an irreducible Markov chain X_j on \overline{A}. Let τ = τ_A = min{j : X_j ∈ ∂A}; the assumptions imply that for x ∈ A, P^x{τ < ∞} = 1. We write P for the transition matrix restricted to A. It is standard that there is a unique positive eigenvalue λ < 1 of P such that all other eigenvalues have absolute value at most λ.
• More generally, we say that q is an integrable weight on A if the largest positive eigenvalue of |Q| is strictly less than one.

• We say that q is a green weight if the eigenvalues of Q are all strictly less than one in absolute value. This is a weaker condition than integrability.

• Simple random walk. Let \overline{A} be a connected graph and A a strict subset of the vertices. There are two forms of simple random walk on the graph that we will consider. Let d_x denote the degree of x and write x ∼ y if x, y are adjacent in the graph.
  * Type I. p(x, y) = 1/d_x if x ∼ y. In this case, the invariant probability π is proportional to d_x. The chain is reversible, that is, π(x) p(x, y) = π(y) p(y, x), but it is not symmetric unless all the degrees are the same.
  * Type II. Let n be a number greater than or equal to the largest degree of the vertices in \overline{A}, and let p(x, y) = 1/n if x ∼ y. This is symmetric, and hence the invariant probability is the uniform distribution.
  - Simple random walk in Z^d is a particular example. On all of Z^d, it is both a Type I and a Type II walk. If A is a finite subset of Z^d and ∂A = {x ∈ Z^d : dist(x, A) = 1}, then it is often convenient to view the simple random walk on \overline{A} = A ∪ ∂A as a Type II walk as above with n = 2d.
• A path or walk in \overline{A} of length n is a sequence of n + 1 vertices, with repetitions allowed,

ω = [ω_0, ω_1, ..., ω_n].

We allow trivial paths of length 0, ω = [ω_0]. Any path of length n > 0 can also be represented as a concatenation of directed edges,

ω = e_1 ⊕ e_2 ⊕ ··· ⊕ e_n,  where e_j = (ω_{j-1}, ω_j).

We call ω_0, ω_n the initial and terminal vertices of ω, respectively. A path of length one is the same as a directed edge. We write |ω| = n for the length of the path.

• If ω_1 = e_1 ⊕ ··· ⊕ e_n, ω_2 = e_{n+1} ⊕ ··· ⊕ e_{n+m}, and e_{n+1} follows e_n, we define the concatenation

ω_1 ⊕ ω_2 = e_1 ⊕ ··· ⊕ e_{n+m}.

Conversely, any concatenation of n edges such that e_j follows e_{j-1} gives a path.

• If ω = [ω_0, ω_1, ..., ω_n] = e_1 ⊕ ··· ⊕ e_n is a path, we write ω^R for the reversed path

ω^R = [ω_n, ω_{n-1}, ..., ω_0] = e_n^R ⊕ e_{n-1}^R ⊕ ··· ⊕ e_1^R.
• If x, y ∈ \overline{A}, we let K_A(x, y) denote the set of paths in A starting at x and ending at y. If x = y, we include the trivial path [x]. We let K_A(x, V) = \bigcup_{y ∈ V} K_A(x, y) and K_A = \bigcup_{x, y} K_A(x, y).

• We also write K_A(x, y) when one or both of x, y are in ∂A. In this case it represents paths ω = [ω_0, ..., ω_n] with ω_0 = x, ω_n = y, and ω_j ∈ A for 0 < j < n. If x, y ∈ ∂A, we also require that n ≥ 2, that is, that there is at least one vertex of ω in A.

• We call a walk in K_A(x, x) a (rooted) loop rooted at x. This includes the trivial loop [x] of length zero. We sometimes write l instead of ω for loops; l will always refer to a (rooted) loop.

• A weight q gives a weight on paths by q(e_1 ⊕ ··· ⊕ e_n) = q(e_1) q(e_2) ··· q(e_n), where paths of length zero get weight one. Note that q(ω_1 ⊕ ω_2) = q(ω_1) q(ω_2).

• If q is positive or integrable, then q is a measure on K_A. It is easy to see that the q-measure of the set of walks of length n in K_A(x, y) is the same as the (x, y) entry of the matrix Q^n.

• If q is a weight and λ ∈ C, then λq is also a weight. We sometimes write q_λ for the weight on paths induced by λq, that is, q_λ(ω) = λ^{|ω|} q(ω), where we recall that |ω| is the length of ω. For any weight q there exists δ > 0 such that q_λ is integrable for |λ| < δ.
• For a Markov chain, the Green's function is given by

G(x, y) = G_A(x, y) = E^x\Big[\sum_{j=0}^{τ_A - 1} 1\{X_j = y\}\Big] = \sum_{ω ∈ K_A(x, y)} p(ω).

The second expression extends immediately to green complex weights.

• In matrix form, G = \sum_{n=0}^{∞} Q^n, from which we get (I − Q) G = I, that is, G = (I − Q)^{−1}. We will write G^p_A or G^q_A if we wish to emphasize the weight that we are using. This last expression only requires the eigenvalues of Q to all have absolute value less than one.
For integrable q, we can view sampling from q as a two-step process: first sampling from |q| and then specifying a rotation q/|q|.
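As a quick sanity check of the matrix identity G = (I − Q)^{−1}, the following Python sketch (numpy assumed; the example is ours) computes the Green's function for simple random walk on A = {1, 2, 3} ⊂ Z with ∂A = {0, 4}, and compares it with the classical gambler's-ruin formula G_A(x, y) = 2x(4 − y)/4 for x ≤ y.

```python
import numpy as np

# Q: transition matrix of simple random walk on A = {1,2,3},
# killed upon hitting the boundary {0,4}
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
G = np.linalg.inv(np.eye(3) - Q)      # G = (I - Q)^{-1}

# Classical formula on the interval {0,...,4}: G(x,y) = 2 x (4 - y) / 4, x <= y
for i, x in enumerate([1, 2, 3]):
    for j, y in enumerate([1, 2, 3]):
        a, b = min(x, y), max(x, y)
        assert abs(G[i, j] - 2 * a * (4 - b) / 4) < 1e-12
```

In particular G(2, 2) = 2: starting from the middle of the interval, the expected number of visits to the starting point before exiting is two.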
• More generally, the Green's generating function is defined as a function of λ,

G_A(x, y; λ) = \sum_{ω ∈ K_A(x, y)} q(ω) λ^{|ω|}.

Note that G^q_A(x, y; λ) = G^{λq}_A(x, y).

• We will say that l ∈ K_A(x, x) is an elementary loop if it is nontrivial (|l| > 0) and the only visits to x occur at the initial and terminal vertices of l. We write \tilde{L}^1_x = \tilde{L}^1_x(A) for the set of elementary loops in A with initial vertex x.
• For the Markov chain case, let T_x = min{j ≥ 1 : X_j = x}. Then it is standard that

G_A(x, x) = \frac{1}{P^x\{T_x > τ_A\}} = \frac{1}{1 − p(\tilde{L}^1_x)},

since p(\tilde{L}^1_x) = P^x\{T_x < τ_A\}. This formula extends to complex weights q if

\sum_{l ∈ \tilde{L}^1_x} |q(l)| < 1,    (2)

which is true, say, for integrable weights. Any l ∈ K_A(x, x) of length at least one can be written uniquely as a concatenation of elementary loops,

l = l^1 ⊕ l^2 ⊕ ··· ⊕ l^k,  l^i ∈ \tilde{L}^1_x.

A little care is needed when q is not integrable. Let V_j denote the set of loops in \tilde{L}^1_x of length j. If q is green and satisfies (2), then q(\tilde{L}^1_x) = \sum_j q(V_j) converges absolutely. Therefore,

G_A(x, x) = \sum_{k=0}^{∞} q(\tilde{L}^1_x)^k = \frac{1}{1 − q(\tilde{L}^1_x)}.

A number of standard results for which probabilists use stopping times can be written in terms of products of generating functions by suitable path splitting. Such arguments are standard in combinatorics and in much of the mathematical physics literature. While the probabilistic form is more intuitive, it is often useful to pass to the generating functions, especially when using nonpositive weights.
• The (discrete) Laplacian is defined by ∆ = Q − I, that is, ∆f(x) = \sum_y q(x, y) f(y) − f(x).

• Analysts often use −∆ = I − Q, which is a positive operator for positive P. Indeed, we will phrase many of our results in terms of I − Q.

• In the case of random walks on graphs, ∆ is sometimes called the random walk Laplacian. Combinatorialists often use the combinatorial or graph Laplacian, which is −n∆, where ∆ is the Laplacian for the Type II random walk on the graph. Note that this is the diagonal matrix of degrees minus the adjacency matrix.
• The Poisson kernel for a Markov chain is defined by

H_A(x, z) = P^x\{X_τ = z\},  x ∈ A, z ∈ ∂A.

In this case,

H_A(x, z) = \sum_{ω ∈ K_A(x, z)} p(ω),   \sum_{z ∈ ∂A} H_A(x, z) = 1.    (3)

We extend the definition to complex weights q by

H_A(x, z) = q[K_A(x, z)] = \sum_{ω ∈ K_A(x, z)} q(ω).

The analogue of the first equality in (3) holds, but it is not necessarily true that q[K_A(x, ∂A)] = 1.
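Splitting a path from x to ∂A at its last vertex in A gives the identity H_A(x, z) = \sum_{y ∈ A} G_A(x, y) q(y, z), which is easy to check numerically. Here is a Python sketch (numpy assumed; the interval example is ours) for simple random walk on A = {1, 2, 3} with ∂A = {0, 4}.

```python
import numpy as np

Q = np.array([[0.0, 0.5, 0.0],       # SRW on A = {1,2,3}
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
G = np.linalg.inv(np.eye(3) - Q)     # Green's function G = (I - Q)^{-1}

# q(y, z) for y in A, z in dA = {0, 4}: only 1->0 and 3->4 have weight 1/2
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])
H = G @ R                            # H_A(x, z) = sum_y G(x, y) q(y, z)

assert np.allclose(H.sum(axis=1), 1.0)   # rows sum to 1 for a Markov chain
assert abs(H[1, 1] - 0.5) < 1e-12        # from x = 2, exit at 4 with prob 1/2
```

The row sums equal one, as they must for a Markov chain (the second equality in (3)); for a general complex weight the same matrix computation defines H_A, but the row sums need not be one.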
• The boundary Poisson kernel is defined by

H_{∂A}(z, w) = q[K_A(z, w)] = \sum_{ω ∈ K_A(z, w)} q(ω),  z, w ∈ ∂A.

Exercise 1.
1. Suppose x ∈ A and z, w ∈ ∂A. Show that

2. Suppose that q is symmetric and K_A(z, w; x) denotes the set of paths in K_A(z, w) that include the vertex x at least once. Show that

The boundary Poisson kernel goes under a number of other names and is related to the Dirichlet-to-Neumann map.

Loop-erasure
A path ω = [ω 0 , . . . , ω n ] is called a self-avoiding walk (SAW) if all of the vertices are distinct. We will write η = [η 0 , . . . , η m ] for SAWs. We write W A (x, y) for the set of ω ∈ K A (x, y) that are self-avoiding walks. We write W A (x, V ), etc., as well.
We will reserve the notation η for self-avoiding walks and use ω for general walks that can have self-intersections.
There is a deterministic procedure called (chronological) loop-erasing that takes each ω ∈ K_A(x, y) and outputs a subpath η = LE(ω) ∈ W_A(x, y). One erases the loops in the order that they appear. The following definition makes this precise. Given ω = [ω_0, ..., ω_n], let j_0 = max{j : ω_j = ω_0} and η_0 = ω_0. Recursively, if j_k < n, let j_{k+1} = max{j : ω_j = ω_{j_k + 1}} and η_{k+1} = ω_{j_{k+1}}. The procedure terminates at the step m at which j_m = n, and we set LE(ω) = [η_0, η_1, ..., η_m].
Note that LE(ω) is a self-avoiding subpath of ω with the same initial and terminal vertices.
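The chronological loop-erasing procedure is easy to implement. Here is a short Python sketch (the function name is ours): rather than computing the indices j_k directly, it erases each loop as soon as a vertex is revisited, which produces the same self-avoiding path.

```python
def loop_erase(omega):
    """Chronological loop erasure LE(omega) of a path given as a vertex list."""
    eta = []
    pos = {}                       # vertex -> index in eta
    for v in omega:
        if v in pos:               # revisit: erase the loop just completed
            del eta[pos[v] + 1:]
            pos = {u: i for i, u in enumerate(eta)}
        else:
            pos[v] = len(eta)
            eta.append(v)
    return eta

assert loop_erase([0, 1, 2, 1, 3]) == [0, 1, 3]

# Note: loop erasure does not commute with path reversal.
omega = [0, 1, 2, 0, 2, 3]
assert loop_erase(omega) == [0, 2, 3]
assert loop_erase(omega[::-1])[::-1] == [0, 1, 2, 3]
```

The last example shows a path ω with LE(ω^R)^R ≠ LE(ω), so the operations of erasing loops and reversing a path do not commute pathwise, even though (as shown later) the induced measures behave well under reversal.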
In general, there are many self-avoiding subpaths of a path ω with the same initial and terminal vertices; the loop-erasing procedure specifies a particular choice. Topologists use the word "simple" to mean with no self-intersections. Since this conflicts with our terminology of simple random walk (which does not produce a path with no self-intersections), we will use the term "self-avoiding" to refer to such paths. There is some possibility of confusion because "self-avoiding walk" is also used to refer to a particular measure on SAWs that is different from the ones we will consider.

Exercise 2. Give an example of a path ω such that LE(ω^R) ≠ [LE(ω)]^R.

Definition. Given an integrable weight q on A, which gives a measure q on K_A, the loop-erased measure \hat{q} is the measure on W_A defined by

\hat{q}(η) = q\{ω ∈ K_A : LE(ω) = η\}.

We can also consider the restriction of \hat{q} to W_A(x, y) and note that \hat{q}[W_A(x, y)] = q[K_A(x, y)].

The next proposition gives an expression for \hat{q}(η) in terms of q(η) and the Green's function.
Proposition 3.1. If η = [η_0, ..., η_m] ∈ W_A, then

\hat{q}(η) = q(η) \prod_{j=0}^{m} G_{A_j}(η_j, η_j),  where A_j = A \ {η_0, ..., η_{j−1}}.

Proof. Suppose ω = [ω_0, ..., ω_n] is such that LE(ω) = η. Define the indices j_0, j_1, ..., j_m as in the definition of LE(ω). This gives a unique decomposition

ω = l_0 ⊕ [η_0, η_1] ⊕ l_1 ⊕ [η_1, η_2] ⊕ ··· ⊕ [η_{m−1}, η_m] ⊕ l_m,

where l_j ∈ K_{A_j}(η_j, η_j). Conversely, any choice of l_j ∈ K_{A_j}(η_j, η_j), j = 0, 1, ..., m, produces an ω as above with LE(ω) = η. Since

q(ω) = q(η) q(l_0) q(l_1) ··· q(l_m),

we get

\hat{q}(η) = q(η) \prod_{j=0}^{m} q[K_{A_j}(η_j, η_j)] = q(η) \prod_{j=0}^{m} G_{A_j}(η_j, η_j).

The case when one or both of the endpoints of η is in ∂A is almost the same, except that there is no loop to be erased at the boundary point. We only state the proposition.
The quantity \prod_{j=0}^{m} G_{A_j}(η_j, η_j) appears to depend on the ordering of the vertices {η_0, ..., η_m}. Actually, as the next proposition shows, it is independent of the order. The proof is easy (once one decides that this is true!), and we leave it as an exercise. Given the proposition, we can make the following definition: for B = {x_1, ..., x_k} ⊂ A, let

F_B(A) = \prod_{j=1}^{k} G_{A_j}(x_j, x_j),  A_j = A \ {x_1, ..., x_{j−1}},

which is independent of the ordering of B, and write F(A) = F_A(A).
By convention, if B ⊂ \overline{A}, we define F_B(A) = F_{B∩A}(A). Also, F_∅(A) = 1. The proposition implies the rule

F_{B∪B'}(A) = F_B(A) F_{B'}(A \ B),  B ∩ B' = ∅.

It also allows us to rewrite Propositions 3.1 and 3.2 as follows.
In the statement of the proposition we have used η for both the path and for the set of vertices in the path. We will do this often; hopefully, it will not cause confusion.
In Proposition 5.2, we will give an expression for F_B(A) in terms of the loop measure, and the invariance under reordering can be seen from that expression. The next proposition gives a formula for F(A) that is clearly invariant under permutation.

Proposition 3.5. F(A) = \frac{1}{\det(I − Q)} = \det G_A.

Proof. If A has a single element x and q = q(x, x), then

G_A(x, x) = \sum_{n=0}^{∞} q^n = \frac{1}{1 − q},

so the result is immediate. We now proceed by induction on the number of elements of A. Assume it is true for sets of n − 1 elements and let A = {x_1, ..., x_n}, A' = {x_2, ..., x_n}. From the formula (I − Q) G = I, we see that the vector v = [G(x_1, x_1), G(x_2, x_1), ..., G(x_n, x_1)]^T satisfies

(I − Q) v = δ_{x_1},

where δ_{x_1} is the vector with 1 in the first component and 0 elsewhere. Using Cramer's rule to solve this equation, we see that

G(x_1, x_1) = \frac{\det M}{\det(I − Q)},

where M is the matrix obtained from I − Q by changing the first column to δ_{x_1}. By expanding along the first column, we see that \det M = \det(I − Q'), where Q' is Q restricted to the entries indexed by A'. Therefore, using the inductive hypothesis and the ordering x_1, ..., x_n,

F(A) = G(x_1, x_1) F(A') = \frac{\det(I − Q')}{\det(I − Q)} \cdot \frac{1}{\det(I − Q')} = \frac{1}{\det(I − Q)}.

Exercise 5. Let \overline{A} be the complete graph on n vertices, and A ⊂ \overline{A} a set with m < n vertices. Assuming that we are doing simple random walk on \overline{A}, compute F(A).
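The permutation invariance and the determinant formula are easy to check numerically. The following Python sketch (numpy assumed; the interval example is ours) computes \prod_j G_{A_j}(x_j, x_j) for every ordering of A = {1, 2, 3} for simple random walk killed on {0, 4}, and checks that each product equals 1/\det(I − Q).

```python
import itertools
import numpy as np

idx = {1: 0, 2: 1, 3: 2}
Q = np.array([[0.0, 0.5, 0.0],     # SRW on A = {1,2,3}, absorbed at {0,4}
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

def green(subset):
    """Green's function G_B = (I - Q_B)^{-1} for B a subset of A."""
    ii = [idx[x] for x in subset]
    QB = Q[np.ix_(ii, ii)]
    return np.linalg.inv(np.eye(len(ii)) - QB)

target = 1.0 / np.linalg.det(np.eye(3) - Q)    # F(A) = 1 / det(I - Q)

for perm in itertools.permutations([1, 2, 3]):
    prod = 1.0
    for j in range(len(perm)):
        B = list(perm[j:])                     # A_j = A \ {x_1,...,x_{j-1}}
        GB = green(B)
        prod *= GB[0, 0]                       # G_{A_j}(x_j, x_j)
    assert abs(prod - target) < 1e-10
```

For this example F(A) = 2, and all six orderings give the same product, illustrating Proposition 3.5.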

Loop-erased walk on Markov chains
For this section, we will only consider loop-erased walks arising from Markov chain transition probabilities p. We assume that the reader knows basic facts about Markov chains. As before, we write ∆ = P − I for the Laplacian of the chain.

Loop-erased walk from transient chains
Let S_n be an irreducible transient Markov chain on a countable set X. We can define loop-erased random walk as a stochastic process by erasing loops from the infinite path. Indeed, if ω = [ω_0, ω_1, ω_2, ...] is an infinite sequence of points such that no vertex appears an infinite number of times, the loop-erasing algorithm in Section 3 outputs an infinite subpath η = LE(ω). This probability measure on infinite self-avoiding paths can also be viewed as a non-Markovian process \hat{S}_n starting at the same point as the chain S_n. We will give another description of the process by specifying, for each SAW η = [η_0, η_1, ..., η_n], the probability that the LERW starts with η. If A ⊂ X is a bounded set, let

φ_A(z) = P^z\{S_n ∉ A for all n ≥ 0\}.

It is well known that φ_A vanishes on A, satisfies ∆φ_A = 0 on X \ A, and is the unique nonnegative such function with the appropriate normalization at infinity. We define the escape probability Es_A(x) to be

Es_A(x) = P^x\{S_n ∉ A for all n ≥ 1\} = \sum_{z} p(x, z) φ_A(z).

Proposition 4.1. If η = [x_0, ..., x_n] is a self-avoiding walk in X starting at x_0, then

P\{[\hat{S}_0, ..., \hat{S}_n] = η\} = p(η) F_η(X) Es_η(x_n),

and, given this, the conditional probability that \hat{S}_{n+1} = z is

\frac{p(x_n, z) φ_η(z)}{Es_η(x_n)}.    (7)

The right-hand side of (7) is easily seen to be the conditional probability that the Markov chain starting at x_n takes its first step to z given that it never returns to η.
Proof. As in the proof of Proposition 3.1, if ω = [ω_0, ω_1, ...] is an infinite path such that the loop-erasure LE(ω) starts with η, then we can write ω uniquely as

ω = l_0 ⊕ [x_0, x_1] ⊕ l_1 ⊕ ··· ⊕ [x_{n−1}, x_n] ⊕ l_n ⊕ ω^+,

where l_j is a loop rooted at x_j contained in X \ {x_0, ..., x_{j−1}} and ω^+ is an infinite path starting at x_n that never returns to η. In this case, LE(ω) = η ⊕ LE(ω^+). The measure of the possibilities for l_0, ..., l_n is given by F_η(X), and the measure of the possibilities for ω^+ is Es_η(x_n). Given that ω^+ does not return to η, the first step of LE(ω^+) is the same as the first step of ω^+, and the conditional probabilities for this step are given by (7).
The process \hat{S}_n could have been defined using the transition probability (7). Since φ_η satisfies Laplace's equation ∆φ_η = 0 off η, the process is sometimes called the Laplacian random walk. More generally, we can define a process called the b-Laplacian random walk by using transitions proportional to p(x, z) φ_η(z)^b, where we set 0^b = 0 even if b ≤ 0. For b ≠ 1, this process is much harder to study, and little is known about it rigorously. The case b = 0 is sometimes called the infinitely growing self-avoiding walk (IGSAW). The IGSAW chooses randomly among all possible vertices that will not trap the chain.
The decomposition of ω into LE(ω) and the loops l_0, l_1, l_2, ... in Proposition 4.1 extends to the infinite path. If x_n = \hat{S}_n, then the path of the Markov chain is decomposed as

ω = l_0 ⊕ [x_0, x_1] ⊕ l_1 ⊕ [x_1, x_2] ⊕ l_2 ⊕ ···.

As a corollary of the proof, we get the conditional distribution of l_0, l_1, ... given the loop-erasure. We state the result. Recall that G_{X_j}(x_j, x_j) = p[K_{X_j}(x_j, x_j)].

Proposition 4.2. Given \hat{S} = [x_0, x_1, x_2, ...], the distribution of l_0, l_1, ... is that of independent random variables taking values respectively in K_{X_j}(x_j, x_j). The random variable l_j has the distribution

P\{l_j = l\} = \frac{p(l)}{G_{X_j}(x_j, x_j)},  l ∈ K_{X_j}(x_j, x_j).    (9)

Here X_j = X \ {x_0, ..., x_{j−1}}.
There is another way to view the distribution on loops in the last proposition. For fixed j, let S_k = S^j_k denote the Markov chain starting at x_j, let τ_j = inf{k : S_k ∉ X_j} (this can equal infinity), and let σ_j = max{k < τ_j : S_k = x_j}. Then it is easy to check that the distribution of the loop [S_0, S_1, ..., S_{σ_j}] is given by (9). This gives us a method to obtain a path of the Markov chain by starting with a loop-erased path (or, equivalently, a realization of the Laplacian walk with transitions as in (7)) and adding loops with the appropriate distribution. We omit the proof (we have done all the work already).

Proposition 4.3. Suppose \hat{S} = [\hat{S}_0, \hat{S}_1, ...] has the distribution of the LERW starting at x_0 and that, independently, we have independent Markov chains {S^x : x ∈ X}, each with transition matrix P, with S^x_0 = x. Create a new path as follows: choose a loop l_j at \hat{S}_j with distribution (9) by using the method in the previous paragraph, so that the loops {l_j} are conditionally independent given \hat{S}. Then the path

l_0 ⊕ [\hat{S}_0, \hat{S}_1] ⊕ l_1 ⊕ [\hat{S}_1, \hat{S}_2] ⊕ l_2 ⊕ ···

has the distribution of the Markov chain starting at x_0.
One thing to emphasize about the last proposition is that the construction has the following form.
• We first choose, independently, \hat{S} and the loop-making Markov chains {S^x}.
• The Markov chain S is then constructed as a deterministic function of the realizations of these processes.

Loop-erased walk in a finite set A
Definition Suppose A = A ∪ ∂A with ∂A nonempty and P is an irreducible Markov chain on A. If x ∈ A, then loop-erased random walk (LERW) from x to ∂A is the probability measure on W A (x, ∂A) obtained by starting the chain at x, ending at the first time that the chain leaves A, and erasing loops chronologically.
Equivalently, using Proposition 3.2, we see that LERW is the probability measure \hat{p} on W_A(x, ∂A) given by

\hat{p}(η) = p(η) F_η(A).

We can also describe this process by giving the transition probabilities for the random process \hat{S}_n. Let φ_η(y) = φ_{η,A}(y) denote the function on \overline{A} satisfying

∆φ_η = 0 on A \ η,  φ_η ≡ 0 on η,  φ_η ≡ 1 on ∂A.

As before, we let Es_η(x) = \sum_z p(x, z) φ_η(z). Then the probability that the LERW from x to ∂A starts with η = [η_0, ..., η_n] is

P\{[\hat{S}_0, ..., \hat{S}_n] = η\} = p(η) F_η(A) Es_η(η_n).

Proof. Essentially the same as the proof of Proposition 4.1.
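The formula \hat{p}(η) = p(η) F_η(A) can be checked by simulation. In the interval example A = {1, 2, 3}, ∂A = {0, 4} with the walk started at x = 2, the SAW η = [2, 3, 4] has p(η) = 1/4 and F_η(A) = G_A(2, 2) · G_{{1,3}}(3, 3) = 2 · 1, so \hat{p}(η) = 1/2. The following Python sketch (our own setup) loop-erases simulated walks and compares the empirical frequency with this value.

```python
import random

def loop_erase(path):
    """Chronological loop erasure of a path given as a vertex list."""
    eta, pos = [], {}
    for v in path:
        if v in pos:
            del eta[pos[v] + 1:]
            pos = {u: i for i, u in enumerate(eta)}
        else:
            pos[v] = len(eta)
            eta.append(v)
    return eta

def srw_until_exit(x, rng):
    """Simple random walk on Z from x, run until it leaves A = {1,2,3}."""
    path = [x]
    while 0 < path[-1] < 4:
        path.append(path[-1] + rng.choice([-1, 1]))
    return path

rng = random.Random(0)
n = 20000
hits = sum(loop_erase(srw_until_exit(2, rng)) == [2, 3, 4] for _ in range(n))
freq = hits / n
# p-hat([2,3,4]) = p(eta) * F_eta(A) = (1/4) * 2 = 1/2
assert abs(freq - 0.5) < 0.03
```

In this small example the check is transparent: the only SAW from 2 exiting at 4 is [2, 3, 4], so the value 1/2 also follows from the symmetry of the exit point.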
One can define the Laplacian b-walk on A just as in the transient case.
Proposition 4.5. Suppose S_n is an irreducible, transient Markov chain on a countable state space X starting at x ∈ X, and A_j is an increasing sequence of finite subsets of X containing x whose union is X. Let η be a (finite) SAW in X starting at x, and let \hat{p}(η) and \hat{p}_j(η) denote the probabilities that LERW from x to infinity and to ∂A_j, respectively, start with η. Then

\hat{p}(η) = \lim_{j→∞} \hat{p}_j(η).

Proof. For fixed η, we need only show that F_η(A_j) → F_η(X) and that the corresponding escape probabilities converge, both of which are easily verified (Exercise 7).
We state the analogue of Proposition 4.2 which is proved in the same way.
Proposition 4.6. Given η, the distribution of l 0 , l 1 , . . . , l k−1 is that of independent random variables taking values respectively in K Aj (x j , x j ); the random variable l j has the distribution given in (10).

It is often useful to consider LERW from a boundary point to a subset of the boundary. Suppose A, ∂A are given and z ∈ ∂A, V ⊂ ∂A \ {z}. Then loop-erased random walk from z to V in A is the measure on paths obtained by restricting the measure p̂ to W A (z, V ). We also get a probability measure on paths when we normalize to have total mass one. Let us consider this probability measure. Note that if z ∈ A, then LERW from z to V is the same as if we make z a boundary point. An important property that the probability measure satisfies is the following.

• Domain Markov Property. Suppose z ∈ ∂A, V ⊂ ∂A \ {z}. Then the probability measure of loop-erased random walk from z to V satisfies the following: conditioned on the event that the path starts as η = [η 0 = z, . . . , η k ], the remainder of the walk has the distribution of LERW from η k to V in A \ {η 0 , . . . , η k−1 }.

There is a slight confusion in terminology that the reader must live with. When referring to loop-erased random walk, say from z to w in A where z, w ∈ ∂A, one sometimes means the measure on paths of total mass H ∂A (z, w) and sometimes the probability measure obtained by normalizing to total mass one. Both concepts are very important, and the ability to go back and forth between the two ideas is fruitful in analysis.
1. Verify the domain Markov property. 2. Extend it to the following "two-sided" domain Markov property: take LERW from z to V in A and condition both on the beginning of the path being η = [η 0 , . . . , η k ] and on the terminal part of the path. Show that the conditional distribution of the remainder of the path is the same as LERW in the appropriate smaller domain.

Exercise 7. The following is used in the proof of Proposition 4.5. Suppose X n is an irreducible, transient Markov chain on a countable state space X , and A n is an increasing sequence of finite sets containing x whose union is X . Verify the limits required there.

Exercise 8. Using the notation of Proposition 4.6, suppose S is a random walk starting at x j , stopped at the first time τ that it leaves A j . Show that the distribution of the corresponding loop is the same as (10).

Infinite LERW for recurrent Markov chains
If X n is an irreducible, recurrent Markov chain on a countably infinite state space X , then one cannot define LERW on X by erasing loops from the infinite path. However, if one can prove a certain property of the chain, then one can give a good definition. This property will hold for two-dimensional simple random walk. Let x 0 ∈ X . Suppose A n is an increasing sequence of finite subsets of X with x 0 ∈ A 0 whose union is X . Let η = [η 0 = x 0 , . . . , η k ] be a SAW in X starting at x 0 . In order to specify the distribution of the infinite LERW, it suffices to give the probability of producing η for each η. Using the previous section, we see that we would like to define this to be p̂(η) = lim n→∞ p̂ n (η), where p̂ n (η) is the probability that LERW from x 0 to ∂A n starts with η, assuming that the limit on the right-hand side exists.
• Property A. For every finite V ⊂ X and y ∈ V , there exists a nonnegative function φ V,y that vanishes on V and is harmonic (that is, ∆φ V,y = 0) on X \ V satisfying the following. Suppose A n is an increasing sequence of subsets of X whose union is X . Let φ n be the function that is harmonic on A n \ V , vanishes on V , and takes the value 1 on X \ (A n ∪ V ). Then, after suitable normalization, the functions φ n converge to φ V,y .

Definition If a recurrent irreducible Markov chain satisfies Property A, then we define the infinite LERW starting at x 0 using the limit above. We will show later the nontrivial fact that two-dimensional random walk satisfies Property A. However, this property is not satisfied by all recurrent chains, as can be seen from the next example.
One could also define infinite LERW with respect to a particular sequence {A n } provided that the appropriate limit exists for this sequence.
Exercise 10. Suppose that X n is an irreducible, recurrent Markov chain on a countably infinite state space X , and A n is an increasing sequence of finite subsets of X whose union is X . Show that the stated limit holds for all x, y ∈ X . Is the recurrence assumption needed?
Exercise 11. Assume that X n is an irreducible, recurrent Markov chain on a countably infinite state space X that satisfies Property A. Let A n be an increasing sequence of finite subsets of X whose union is X and V a finite subset of X .
1. Show that there exists a single function φ V and a positive function on V expressing each φ V,y as a multiple of φ V . 2. Show that the process is a Laplacian random walk in the sense of (7). 3. Assume as given that two-dimensional simple random walk satisfies Property A. Show that φ V,x = φ V,y for all x, y ∈ Z 2 .

Random walk in Z 2
Here we will show that two-dimensional simple random walk satisfies Property A using known facts about the random walk. We will establish this property with y = 0 (the other cases are done similarly) and write φ V,y = φ V . Let C m = {z ∈ Z 2 : |z| < m}, where S j denotes a simple random walk. The potential kernel (see [17, Section 4.4]) is defined by a(x) = lim n→∞ Σ j=0..n [ P{S j = 0} − P{S j = x} ]. This limit exists, is nonnegative, and satisfies a(x) = (2/π) log |x| + c 0 + O(|x|^{−2}), where c 0 = (2γ + log 8)/π and γ is Euler's constant. Let A n be an increasing sequence of finite subsets of Z 2 containing V whose union is Z 2 , and let τ n = min{j : S j ∉ A n }; we need to show the convergence required in Property A. Let us first consider the case V = {0}. Let T = inf{j > 0 : S j = 0}. As usual for Markov chains, consider the probability that a random walk starting at the origin hits ∂C m before returning to the origin. Using a last exit decomposition and [17, Proposition 6.4.5], we see that for |x| < m/2 these hitting probabilities can be expressed in terms of the potential kernel, from which we conclude the result in this case. For more general V , let ζ = ζ V and let φ n denote the function that is harmonic on A n \ V with boundary value 0 on V and 1 on Z 2 \ A n . Let ψ n be the corresponding function with V = {0}, and compare φ n with ψ n .

Wilson's algorithm
Suppose P is the transition matrix of an irreducible Markov chain on a finite state space A = {x 0 , x 1 , . . . , x n } and let A = {x 1 , . . . , x n }. A spanning tree T of (the complete graph on) A is a collection of n (undirected) edges such that A with those edges is a connected graph. This implies that every point is connected to every other point (this is what makes it spanning) and, since there are only n edges, that there are no "loops" (this is what makes it a tree). Given a spanning tree T , for each x ∈ A, there is a unique SAW η ∈ W A (x, x 0 ) whose edges lie in T . This gives us a directed graph (that we also label T , although it depends on the choice of "root" x 0 ) by orienting each edge towards the root.
The weight of T (with respect to x 0 ) is given by p(T ; x 0 ) = ∏ e p(e), where the product is over the directed edges in the tree. We will now describe an algorithm to choose a spanning tree with a fixed root x 0 .
Definition Given A, P, x 0 , Wilson's algorithm to select a spanning tree is as follows.
• Take a LERW in A starting at x 1 to ∂A = {x 0 }. Include all the edges traversed by the walk in the tree and let A 2 be the set of vertices that have not been connected to the tree yet. Then proceed recursively as follows. • If A k = ∅, then we have a spanning tree and we stop.
• Otherwise, let j be the smallest index such that x j ∈ A k , and run a LERW from x j stopped when it reaches the tree constructed so far. Add those edges to the tree and let A k+1 be the set of vertices that have not been connected to the tree.
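The algorithm above can be sketched in code; this is a minimal illustration, where `step(v, rng)` is a hypothetical one-step sampler for the chain and the tree is stored as a parent map with edges oriented toward the root.

```python
import random

def wilsons_algorithm(vertices, root, step, rng=random):
    """Sample a spanning tree rooted at `root` by Wilson's algorithm:
    repeatedly run loop-erased walks from unvisited vertices until they
    hit the current tree."""
    in_tree = {root}
    parent = {}
    for v in vertices:               # any fixed order of the vertices works
        if v in in_tree:
            continue
        # run the chain from v until it hits the tree, erasing loops as we go
        path = [v]
        while path[-1] not in in_tree:
            w = step(path[-1], rng)
            if w in path:            # chronological loop erasure
                path = path[:path.index(w) + 1]
            else:
                path.append(w)
        # add the loop-erased branch to the tree
        for a, b in zip(path, path[1:]):
            parent[a] = b
            in_tree.add(a)
    return parent                    # edges (a, parent[a]) point to the root
```

Note that, as the text emphasizes, the resulting distribution on trees does not depend on the order in which the unvisited vertices are processed.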
Proposition 4.7. Given x 0 and P, the probability that Wilson's algorithm produces a particular spanning tree T is p(T ; x 0 ) F A (A) = p(T ; x 0 ) / det(I − Q). (12)

Proof. Given any T we can decompose it in a unique way as follows.
• Let η 1 be the path in T from x 1 to x 0 .
• Given η 1 , . . . , η k , let x j be the vertex of smallest index (if any) that is not included in η 1 ∪ · · · ∪ η k . Let η k+1 be the unique path from x j to η 1 ∪ · · · ∪ η k . (If there were more than one path, then there would be a loop in the tree.) Given this decomposition of T into η 1 , . . . , η k , we can see from repeated application of Proposition 3.4 that the probability of choosing T is the product of the corresponding LERW probabilities; but p(η 1 ) · · · p(η k ) = p(T ; x 0 ), and (5) shows that this probability equals p(T ; x 0 ) F A (A). The second equality in (12) follows from (6).
A particularly interesting case of this result is random walk on a graph. Suppose (G, E) is a simple, connected graph with vertices {x 0 , x 1 , . . . , x n }. Let us do the "Type II" version of random walk on the graph. Then for each spanning tree T of G we have p(T ; x 0 ) = n −n .
In particular, each tree is chosen with equal probability, and this probability is the reciprocal of the number of spanning trees. Recall that n(I − P) is the graph Laplacian. We have proved an old result due to Kirchhoff, sometimes called the matrix-tree theorem.

Exercise 12. Explain why doing Wilson's algorithm with a "Type I" simple random walk on a graph generates the same distribution (that is, the uniform distribution) on the set of spanning trees.
Exercise 13. Use Proposition 4.7 and Exercise 5 to compute the number of spanning trees in a complete graph.
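The matrix-tree theorem can be checked directly on small graphs. The sketch below counts spanning trees two ways: as a cofactor of the graph Laplacian, and by brute-force enumeration. The function names are illustrative only.

```python
from itertools import combinations

def spanning_tree_count_laplacian(n, edges):
    """Matrix-tree theorem: for a connected graph on vertices 0..n-1, the
    number of spanning trees equals any cofactor of the Laplacian L = D - A.
    We delete the row and column of vertex 0 and take the determinant,
    using Bareiss fraction-free elimination (exact over the integers)."""
    L = [[0] * n for _ in range(n)]
    for x, y in edges:
        L[x][x] += 1
        L[y][y] += 1
        L[x][y] -= 1
        L[y][x] -= 1
    M = [row[1:] for row in L[1:]]   # reduced Laplacian
    m = len(M)
    prev = 1
    for k in range(m - 1):           # pivots are nonzero: the reduced
        for i in range(k + 1, m):    # Laplacian is positive definite
            for j in range(k + 1, m):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return M[m - 1][m - 1]

def spanning_tree_count_brute(n, edges):
    """Check by brute force: count the (n-1)-edge subsets that connect all
    n vertices (connected with n-1 edges implies a tree)."""
    count = 0
    for subset in combinations(edges, n - 1):
        parent = list(range(n))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        merges = 0
        for x, y in subset:
            rx, ry = find(x), find(y)
            if rx != ry:
                parent[rx] = ry
                merges += 1
        count += (merges == n - 1)
    return count
```

For the complete graph on 4 vertices both counts give 16, consistent with Cayley's formula n^{n−2}.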
We will generalize this a little bit. Given A = A ∪ ∂A, we define the graph obtained by wiring the boundary ∂A to be the graph with vertex set A ∪ {∂A} (that is, considering ∂A as a single vertex) and retaining all the edges with at least one vertex in A. This may give multiple edges between vertices in A and ∂A, but we retain the multiple edges. A wired spanning tree for A is a spanning tree for the wired graph. Wilson's algorithm gives a method for sampling from the uniform distribution on wired spanning trees. We can describe the algorithm recursively as follows.
• Choose any vertex x ∈ A and let η be LERW from x to ∂A.
What we emphasize here is that we can choose any vertex at which to start the algorithm, and after adding a SAW η to the tree, we can choose any remaining vertex at which to continue. If we take the uniform wired spanning tree and restrict to the edges in A, then we call the resulting object the uniform spanning forest on A. The terminology is perhaps not the best, because this is not the same thing as looking at all spanning forests of A and choosing one at random. Since we will use this terminology, we define it here.
Definition Suppose (A, E) is a connected graph and A is a strict subset of A.
• The uniform wired spanning tree on A is a spanning tree of A ∪ {∂A} chosen uniformly over all spanning trees of the wired graph. • The uniform spanning forest of A is the uniform wired spanning tree of A restricted to the edges for which both endpoints are in A. Wilson's algorithm applied to the simple random walk on the graph generates a uniform wired spanning tree and hence a uniform spanning forest.

Uniform spanning tree/forest in Z d
The uniform spanning tree in Z d is the limit as n → ∞ of the uniform spanning forest on the discrete ball C n = {x ∈ Z d : |x| < n}. If d = 1, the uniform spanning forest of C n is all of C n , so we will consider only d ≥ 2. We will use Wilson's algorithm to give a different definition of the forest, and then we will prove it is also the limit of the uniform spanning forests on C n . The construction will yield a spanning tree of Z d if d = 2, 3, 4, but only a forest for d ≥ 5.
The difference between d ≤ 4 and d ≥ 5 comes from a property of loop-erased random walk that we now discuss. If S j is a simple random walk, we write S j [0, ∞) for the set of points visited by the walk.

Proposition 4.9. If S 1 , S 2 are independent simple random walks in Z d starting at the origin, then with probability one the paths S 1 [0, ∞) and S 2 [0, ∞) intersect infinitely often if d ≤ 4, and only finitely often if d ≥ 5.

Exercise 14. Prove Proposition 4.9. You may want to consider first the expected number of intersections of the two paths.

A little harder to prove is the following.
Proposition 4.10. If S 1 , S 2 are independent simple random walks starting at the origin and d ≤ 4, then with probability one S 2 [0, ∞) intersects the loop-erasure of S 1 . We will not prove this. The critical dimension is d = 4. The probability that two simple random walks in Z 4 starting at neighboring points go distance R without intersecting is comparable to (log R) −1/2 . The probability that one of the walks does not intersect the loop-erasure of the other is comparable to (log R) −1/3 .
Using this proposition, we will now define the spanning tree/forest in the three cases. In each case we will use what we will call the infinite Wilson's algorithm. We assume that we start with an enumeration Z d = {x 1 , x 2 , . . .} and that we have independent simple random walks S j n starting at x j . The algorithm as we state it will depend on the particular enumeration of the lattice, but it will follow from Theorem 1 below that the distribution of the resulting object is independent of the ordering.

Uniform spanning tree for d = 3, 4
• Start by taking S 1 and erasing loops to get Ŝ 1 [0, ∞). Include all the edges and vertices of Ŝ 1 [0, ∞) in the tree. We call this tree (which is not spanning) T̂ 1 . • Suppose T̂ j−1 has been constructed. If x j is already a vertex of T̂ j−1 , set T̂ j = T̂ j−1 . Otherwise, consider the random walk S j and stop it at the first time T j it reaches a vertex in T̂ j−1 . By Proposition 4.10, this happens with probability one. Take the loop-erasure LE(S j [0, T j ]) and add those edges and vertices to the tree to form T̂ j .
This algorithm does not stop in finite time, but it gives a spanning tree of the infinite lattice Z d . To be more precise, suppose that C m ⊂ {x 1 , . . . , x k }. Then every vertex in C m is included in T̂ k , and it is impossible to add any more edges adjacent to a vertex in C m . Hence T̂ n ∩ C m = T̂ k ∩ C m for all n ≥ k, and we can set T ∩ C m = T̂ k ∩ C m . Here we are writing T ∩ C m for the set of edges in T that have both vertices in C m .
This distribution on spanning trees is called the uniform spanning tree on Z d , d = 3, 4.

Uniform spanning tree for d = 2
The uniform spanning tree for d = 2 is defined similarly. The only difference is that in the first step, one takes the infinite LERW starting at x 1 as discussed in Section 4.3 and uses those edges to formT 1 . The remaining construction is the same.

Uniform spanning forest for d ≥ 5
The construction is similar to the case d = 3, 4 except that the T̂ k will only be forests; that is, they will not necessarily be connected.
• Start by taking S 1 and erasing loops to get Ŝ 1 [0, ∞). Include all the edges and vertices of Ŝ 1 [0, ∞) in the forest T̂ 1 . • If x j is already a vertex of T̂ j−1 , set T̂ j = T̂ j−1 . Otherwise, consider the random walk S j and stop it at the first time T it reaches a vertex in T̂ j−1 . It is possible that T = ∞. Erase loops from S j [0, T ] and add those edges and vertices to the forest. If T < ∞, this adds edges to one of the components of T̂ j−1 . If T = ∞, this adds the complete loop-erasure Ŝ j [0, ∞) and hence gives a new connected component to the forest.
The output of this algorithm is an infinite spanning forest T f with an infinite number of components.
Exercise 15. Show that the uniform spanning forest for d ≥ 5 has an infinite number of components.
This was not the original definition of the uniform spanning tree/forest. Rather, it was described as a limit of trees on finite subsets of Z d . Let C n = {x ∈ Z d : |x| < n} and consider the uniform spanning forest on C n . To be precise, we construct the uniform spanning tree T n on the wired graph C n ∪ {∂C n } and let T f n be the forest in C n obtained by taking only the edges in C n . For every finite set A, we write A ∩ T f n for the set of edges in T f n with both vertices in A. This gives a probability measure ν A,n on forests in A. We can also consider the probability measure ν A obtained by intersecting the infinite spanning tree with A.

Theorem 1. If T f denotes the uniform spanning forest (or tree) in Z d , then we can couple T f and {T f n : n ≥ 1} on the same probability space such that, with probability one, for each finite set A we have A ∩ T f n = A ∩ T f for all n sufficiently large.

Proof. It suffices to prove this for A = C m , and we write T n,m , T ∞,m for T f n ∩ C m , T f ∩ C m , respectively. We will do the d ≥ 3 case, leaving the d = 2 case as an exercise. Assume d ≥ 3 and choose any ordering Z d = {x 1 , x 2 , . . .}. We assume we have a probability space on which are defined independent simple random walks S j starting at x j . Given these random walks, the spanning forest T is output using Wilson's algorithm above (it is a forest for d ≥ 5 and a tree for d = 3, 4, but we can use a single notation). For each n, we construct the uniform spanning forest on C n on the same probability space, using Wilson's algorithm with the same random walks and the same ordering of the points. The only difference is that the random walks are stopped upon reaching ∂C n . We recall that the distribution of this forest is independent of the ordering. If m < n, we will write T f ∞,m , T f n,m for the forests restricted to C m . We fix m. Given the realization of the random walks S j , we find N as follows.
We writeT k for the (non-spanning) forest obtained from the infinite Wilson's algorithm stopped once all the vertices {x 1 , . . . , x k } have been added to T k . We writeT k,n for the analogous forest for the walks stopped at ∂C n .
• Choose k sufficiently large so that C m+1 ⊂ {x 1 , . . . , x k }. In particular, every vertex in C m+1 has been added to T̂ k . We partition {x 1 , . . . , x k } into the set V 1 of those x j for which the path S j hits T̂ j−1 and the set V 2 of the rest. • Choose n 1 sufficiently large so that for each x j ∈ V 1 , the path S j hits T̂ j−1 before reaching ∂C n1 . • Choose n 1 < n 2 < N such that for each x j ∈ V 2 , the random walk S j never returns to C n1 after reaching ∂C n2 and never returns to C n2 after reaching ∂C N . Note that this implies that for every n ≥ N , the intersection of C n1 with the loop-erasure of S j stopped when it reaches ∂C n is the same as the intersection of C n1 with the loop-erasure of the infinite path.
Then one readily checks that for n ≥ N , T n,m = T ∞,m .
While this proof was not very difficult, it should be pointed out that no estimates were given for the rate of convergence. Indeed, the numbers n 1 , n 2 , N in the proof can be very large.
Given the infinite spanning tree or forest, we can also consider its intersection with a discrete ball C n . This gives a forest in C n . For d = 2, 3, the largest components of this forest have on the order of n d points. However, for d = 4, there are of order log n components with of order n 4 / log n points each. In other words, the uniform spanning tree in Z 4 does not look like a tree locally.

Loop measure
Recall that a loop rooted at x in A is an element of K A (x, x). We will say that l ∈ K A (x, x) is an elementary loop if it is nontrivial (|l| > 0) and the only visits to x occur at the initial and terminal vertices of l. We write L̃ 1 x = L̃ 1 x (A) for the set of elementary loops in A rooted at x. Recall that if q is an integrable weight, then f x := q(L̃ 1 x ) < 1. Any nontrivial loop l ∈ K A (x, x) can be written uniquely as l = l 1 ⊕ · · · ⊕ l k , (14) where k is a positive integer and l 1 , . . . , l k ∈ L̃ 1 x . We write L̃ k x for the set of loops of the form (14) for a given k, and we write L̃ 0 x for the set containing only the trivial loop at x. Let L̃ x = L̃ x (A) be the set of nontrivial loops rooted at x, so that we have the partition L̃ x = L̃ 1 x ∪ L̃ 2 x ∪ · · · . Note that q(L̃ k x ) = f k x . We will define a measure on nontrivial loops, that is, on L̃(A).

Definition If q is a weight on A, then the (rooted) loop measure m̃ = m̃ q A is defined on L̃(A) by m̃(l) = q(l)/|l|.

The loop measure is a measure on L̃(A) and hence gives zero measure to trivial loops. It may not be immediately clear why one would make this definition. Its usefulness comes when we consider the corresponding measure on unrooted loops. An unrooted loop is an oriented loop that has forgotten where the loop starts. We will write ℓ for unrooted loops and l for rooted loops. We write l ∈ ℓ if l is a representative of the unrooted loop ℓ. Note that |l| and q(l) are the same for all representatives of an unrooted loop ℓ, so we can write |ℓ| and q(ℓ).
For each unrooted loop ℓ, let s ℓ denote the number of distinct representatives l of ℓ. If s ℓ = |ℓ| we call ℓ irreducible; we also call a rooted loop l irreducible if its corresponding ℓ is irreducible. More generally, if |ℓ| = n and s ℓ = s, then each representative of ℓ can be written as l = l ′ ⊕ · · · ⊕ l ′ (n/s copies), where l ′ is an irreducible loop of length s. For example, if ℓ is the unrooted loop with representative [x, y, x, y, x], we have s ℓ = 2 and the two irreducible loops are [x, y, x] and [y, x, y]. Note that s ℓ is always an integer dividing n. For a rooted loop l, we write n(l; x) for the number of elementary loops at x in its decomposition; in particular, n(l; x) = k if l ∈ L̃ k x (A). If ℓ is an unrooted loop, we similarly write n(ℓ; x).
The next proposition is important. It relates the unrooted loop measure restricted to loops that visit x to a measure on loops rooted at x.
Proposition 5.1. Fix x ∈ A and consider the measure m ′ = m ′ A,x on L̃ x defined by m ′ (l) = q(l)/k for l ∈ L̃ k x . Then the induced measure on unrooted loops is m restricted to L(A; x).
Proof. Let l be a representative of ℓ in L̃ x and let s = s ℓ , n = |ℓ|. Then we can write l = l ′ ⊕ · · · ⊕ l ′ (n/s copies), where l ′ is an irreducible loop in L̃ x . The loop l ′ is the concatenation of n(l ′ ; x) elementary loops. Note that n(l; x) = (n/s) n(l ′ ; x), and there are n(l ′ ; x) distinct representatives of ℓ in L̃ x . Hence the total m ′ -measure of these representatives is n(l ′ ; x) q(l)/n(l; x) = s q(ℓ)/n = m(ℓ).
Recall that if B = {x 1 , . . . , x n }, then F B (A) = ∏ j=1..n G Aj (x j , x j ), where A j = A \ {x 1 , . . . , x j−1 }. In Proposition 3.3 (actually in Exercise 3), it was shown that this is independent of the ordering of the points of B. In the next proposition we give another expression for F B (A) in terms of the unrooted loop measure that is clearly independent of the ordering.
Proposition 5.2. Suppose that q is an integrable weight on A.
Proof. 1. By the previous lemma, the measure m restricted to L(A; x) can be obtained from m ′ = m ′ A,x by "forgetting the root"; the result then follows using (13). 2. This follows by using part 1 j times. 3. This is part 2 with B = A combined with (6).
The last proposition might appear surprising at first. The first equality can be rewritten so that on the right-hand side we have a measure of a set of paths and on the left-hand side we have the exponential of the measure of a set of paths. However, as the proof shows, this relation follows from the Taylor series for the logarithm, − log(1 − u) = Σ k≥1 u k /k. As a corollary of this result, we see that a way to sample from the unrooted loop measure m on A is first to choose an ordering A = {x 1 , . . . , x n } and then to sample independently from the measures on rooted loops m ′ xj ,Aj , where A j = A \ {x 1 , . . . , x j−1 }. Viewed as a measure on unrooted loops, this is independent of the ordering of A.
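The ordering-invariance of F B (A) noted above can be checked numerically on a small example. In this sketch the weight matrix and the function names are arbitrary illustrations; the Green's function values are obtained by solving the linear system (I − Q A ) g = e x.

```python
def green_diag(Q, idx, x):
    """G_A(x, x) for the subMarkov weight matrix Q restricted to the index
    set idx: solve (I - Q_A) g = e_x by Gaussian elimination with partial
    pivoting, then read off the entry at x."""
    idx = list(idx)
    n = len(idx)
    A = [[(1.0 if i == j else 0.0) - Q[idx[i]][idx[j]] for j in range(n)]
         for i in range(n)]
    b = [1.0 if idx[i] == x else 0.0 for i in range(n)]
    for k in range(n):                       # forward elimination
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    g = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        g[i] = (b[i] - sum(A[i][j] * g[j] for j in range(i + 1, n))) / A[i][i]
    return g[idx.index(x)]

def F(Q, B, A):
    """F_B(A) = product over j of G_{A_j}(x_j, x_j), A_j = A minus the
    previously removed points x_1, ..., x_{j-1}."""
    val = 1.0
    remaining = list(A)
    for x in B:
        val *= green_diag(Q, remaining, x)
        remaining.remove(x)
    return val
```

Evaluating F for all orderings of a small subMarkov matrix gives the same value up to floating-point error, as the permutation-invariance result asserts.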

Soups
We use the word soup for the more technical term "Poissonian realization" of a measure. If X is a set, then a multiset of X is a subset in which elements can appear multiple times. A more precise formulation is that a multiset of X is an element of N X , that is, a function N : X → N, where N (x) = j is interpreted as saying that the element x appears j times in the multiset. Here N = {0, 1, 2, . . .}. Let N X fin denote the set of finite multisets, that is, those N such that #{x : N (x) > 0} is finite.
Definition If µ is a positive measure on a countable state space X , then a soup is a collection of independent Poisson processes {N x t : x ∈ X }, where N x has rate µ x = µ(x).
If µ is a finite measure, then N t ∈ N X fin with probability one and the distribution of the soup at time t is ν t (N ) = ∏ x∈X e −tµx (tµ x ) N (x) / N (x)! . (15) Although the product is formally an infinite product, since N ∈ N X fin all but a finite number of terms equal one. We can give an alternative definition of a loop soup in terms of the distributions. This definition will not require the measure to be positive, but it will need to be a complex measure; in other words, if µ x denotes the measure of x, then we need Σ x |µ x | < ∞.

Definition If µ is a complex measure on a countable set X , then the soup is the collection of complex measures {ν t } on N X fin given by (15).

The generalization to complex measures is straightforward, but it is not clear if there is a probabilistic intuition. Let us consider the simple case of a "Poisson random variable with parameter λ ∈ C". This does not make literal sense, but one can talk about its "distribution", which is the complex measure ν on N given by ν(k) = e −λ λ k / k! , k = 0, 1, 2, . . . .
As in the positive case, ν(N) = 1; however, the total variation can be larger: ‖ν‖ = e |λ|−Re λ . Using this calculation, we can see that ν t as defined in (15) is a complex measure on N X fin of total variation exp{t Σ x (|µ x | − Re µ x )}.
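The complex "Poisson distribution" and its total variation can be checked numerically; this sketch truncates the measure at a finite index (the function name is illustrative).

```python
import cmath
import math

def complex_poisson(lam, kmax):
    """The "distribution" of a Poisson variable with complex parameter lam:
    nu(k) = exp(-lam) * lam**k / k!, a complex measure on {0, 1, 2, ...},
    truncated at kmax."""
    return [cmath.exp(-lam) * lam ** k / math.factorial(k)
            for k in range(kmax + 1)]
```

Summing the entries gives total mass 1, while summing their absolute values gives the total variation e^{|λ| − Re λ}, which exceeds 1 as soon as λ is not a nonnegative real.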

The growing loop at a point
In this subsection, we fix x ∈ A and an integrable weight q and consider loops coming from the measure m ′ = m ′ A,x as in Proposition 5.1. We first consider the case of positive weights p ≥ 0. Recall that the measure m ′ is given by m ′ (l) = p(l)/k for l ∈ L̃ k x . Sampling from m ′ can be done by a two-step method: • Choose k ∈ N from the measure that gives mass f k /k to each k ≥ 1. • Given k, choose l 1 , . . . , l k independently from the probability measure p/f on L̃ 1 .
At time t, the soup outputs a (possibly empty) multiset of loops in L̃ x . If we concatenate them in the order they appear, we get a single loop in K A (x, x), which we denote by l(t). We can also write l(t) as a concatenation of elementary loops. If no loops have appeared in the soup, then the concatenated loop is defined to be the trivial loop [x].
Definition The process l(t) is the growing loop (in A at x induced by p) at time t.
The growing loop at time t is a concatenation of loops in L̃ x (A); if we only view the loop l(t), we cannot determine how it was formed in the soup.
The growing loop can also be defined as the continuous time Markov chain with state space K A (x, x) which starts with the trivial loop and whose transition rate of going from l̃ to l̃ ⊕ l is m ′ (l). The next proposition computes the distribution of the loop l(t).

Proposition 5.3.
• The distribution of the growing loop at x at time t is µ t [l 1 ⊕ · · · ⊕ l k ] = (1 − f ) t [Γ(t + k)/(Γ(t) k!)] p(l 1 ) · · · p(l k ). • In particular, the distribution at time t = 1 is given by µ 1 (l) = p(l)/G A (x, x). The expression involving the Gamma function is also written as the general binomial coefficient ( t+k−1 choose k ) = Γ(t + k)/(Γ(t) k!). We choose to use the Gamma function form because we will occasionally use properties of the Gamma function.
Proof. We can decompose the growing loop at time t into a number of elementary loops l j ∈L 1 . Let K t be the number of elementary loops in l(t). Given K t , the elementary loops l 1 , . . . , l k are chosen independently from the measure p/f . To compute µ t we first consider the distribution on N for the number of elementary loops at t. Given the number of such loops, the actual loops are chosen independently from p/f . In other words, the distribution µ t at time t can be written as µ t [l 1 ⊕ · · · ⊕ l k ] = P{K t = k} p(l 1 ) · · · p(l k ) f k , l 1 , . . . , l k ∈L 1 .
The process K t is sometimes called the negative binomial process with parameter f . It can also be viewed as the Lévy process with Lévy measure f k /k, which can be written as a compound Poisson process K t = Y 1 + · · · + Y Nt , where N t is a Poisson process with parameter m ′ (L̃) = − log(1 − f ), and Y 1 , Y 2 , . . . are independent random variables with the logarithmic distribution P{Y = k} = f k / [k (− log(1 − f ))]. The distribution of K t is given by (see the remark below) P{K t = k} = (1 − f ) t [Γ(t + k)/(Γ(t) k!)] f k . (16) Therefore, if l = l 1 ⊕ · · · ⊕ l k with l j ∈ L̃ 1 , then µ t (l) = (1 − f ) t [Γ(t + k)/(Γ(t) k!)] p(l 1 ) · · · p(l k ). Recalling that 1 − f = 1/G A (x, x), we get the result.
Here we discuss some facts about the negative binomial process K t with parameter p ∈ (0, 1). At time 1, K 1 has a geometric distribution with parameter p, and hence E[e isK1 ] = (1 − p)/(1 − p e is ). Since K t is a Lévy process, we see that the characteristic function of K t must be [(1 − p)/(1 − p e is )] t .
To check that (16) gives the distribution of K t , we compute the characteristic function. Using the binomial expansion (for positive, real t), we see that Σ k≥0 [Γ(t + k)/(Γ(t) k!)] f k = (1 − f ) −t , which shows that (16) defines a probability distribution on N. Moreover, if K t has distribution ν t , then its characteristic function is [(1 − f )/(1 − f e is )] t , as required.

Exercise 17. Let f ∈ (0, 1) and let µ t denote the probability distribution on N given by (16). Show that the stated identity holds for each k.

We can extend the last result to show a general principle.

• The distribution of the loops erased in a LERW is the same as that of the appropriate soup at time t = 1.

Proof. This follows immediately by comparison with Proposition 4.6.
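The negative binomial distribution (16) can be checked numerically; the sketch below uses log-Gamma for stability (the helper name is illustrative, not from the text).

```python
import math

def neg_binom_pmf(t, f, k):
    """P{K_t = k} = (1-f)^t * Gamma(t+k) / (Gamma(t) k!) * f^k, computed
    via log-Gamma to avoid overflow for large k (formula (16))."""
    if k == 0:
        return (1 - f) ** t
    log_p = (t * math.log(1 - f) + math.lgamma(t + k)
             - math.lgamma(t) - math.lgamma(k + 1) + k * math.log(f))
    return math.exp(log_p)
```

Two sanity checks: the values sum to one over N, and at t = 1 the distribution reduces to the geometric distribution (1 − f) f^k, consistent with the growing-loop distribution at time 1.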
The growing loop distribution is also defined for complex integrable weights q, although some of the probabilistic intuition disappears. Let f = f x be as before, although now f can be complex. Integrability implies that |f | < 1, so we can define the negative binomial distributions ν t as in (16). Since |f | < 1, this gives a complex measure on N with total mass one. We can then define µ t by the same formula, (17), and we can check that µ t is a complex measure on L̃ x (A) with µ t [L̃ x (A)] = 1. If q is green but not integrable, the formula (17) defines a function on [0, ∞) × L̃ k x (A), although µ t is not necessarily a measure.
As in the case of positive weights, we can view the measure µ t for integrable q in two steps: first, choose k according to (the complex measure) ν t and then, given k, choose independent l 1 , . . . , l k from the measure q/f . This latter measure gives measure one to L̃ 1 x , although it is not a probability measure since it is not a positive measure.
Verify directly that q(t, r) is the solution of the system with initial condition q(0, r) = 1 for r = 0 and q(0, r) = 0 for r ≥ 1.
You may wish to derive or look up properties of the logarithmic derivative of the Gamma function, ψ(t) = Γ ′ (t)/Γ(t).

Random walk bubble soup
We continue the discussion of the previous subsection to define what we call the (random walk) bubble soup. We start with a finite set A and an ordering of the points A = {x 1 , . . . , x n }. As before, we write A j = A \ {x 1 , . . . , x j−1 }.
Definition The (random walk) bubble soup (for the ordering x 1 , . . . , x n ) is an increasing collection of multisets from L̃(A) obtained by taking the union of the independent soups from m ′ xj ,Aj .
The colorful terminology "bubble soup" comes from the relation between this discrete construction and a construction of the Brownian loop soup in terms of (boundary) "bubbles".
By concatenation we can also view the bubble soup as an n-tuple of growing loops l(t) = (l 1 (t), . . . , l n (t)), where l j (t) is the loop growing at x j in A j . These loops grow independently (although, of course, their distribution depends on the ordering of A). More generally, if B = {x 1 , . . . , x k } ⊂ A is an ordered subset of A, we can define the bubble soup restricted to L̃(A; B) as a collection of growing loops l(t) = (l 1 (t), . . . , l k (t)). The following is an immediate consequence of Proposition 5.3.

Proposition 5.5. The distribution of the bubble soup at time t is given by the product of the distributions in Proposition 5.3, where l = (l 1 , . . . , l n ); q(l) = q(l 1 ) · · · q(l n ); and j i is the number of elementary loops in l i , that is, l i ∈ L̃ ji xi (A i ). In particular, we can write µ t (l) = c(t, l) q(l), where c(t, l) is a combinatorial term independent of q.

Proposition 5.6. Suppose p comes from a Markov chain, x ∈ A, and the loop-erased walk from x to ∂A is [x 0 , . . . , x n ]. Then, given that the loop-erasure is this path, the distribution of the loops erased is the same as the growing loops restricted to it using the ordering {x 0 , . . . , x n }.
We can state this a different way.
Proposition 5.7. Suppose p is coming from a Markov chain and x 0 ∈ A.
Then the path has the distribution of the Markov chain started at x 0 and stopped at ∂A. In other words, for each ω ∈ K A (x 0 , ∂A), the probability that this algorithm outputs ω is p(ω).

Random walk loop soup
Definition The (random walk) loop soup is a soup from the measure m on unrooted loops ℓ ∈ L A .
If q is a positive measure, then the soup can be viewed as an independent collection of Poisson processes {X ℓ t : ℓ ∈ L A }, where X ℓ has parameter m(ℓ). If q is complex, the soup is defined only as the collection of complex measures ν t on N LA . The definition of the unrooted loop soup does not require an ordering of the points on A.
However, if a realization of the loop soup is given along with an ordering of the vertices A = {x 1 , . . . , x n }, we can get a soup on rooted loops with a little more randomness. If ℓ ∈ L A , we choose a rooted representative of ℓ as follows: • Find the smallest j such that the vertex x j is in ℓ.
• Consider all l ∈ ℓ that are rooted at x j and select one (uniformly).
This gives a collection of rooted loops. At any time t we can construct a loop in K A (x, x) by concatenating all the loops in K A (x, x) that have been output by time t, doing the concatenation in the order that the loops arrive.
Proposition 5.8. The random walk loop soup considered as a collection of growing loops as above has the same distribution as the bubble soup.
Proof. This is not difficult to show given the fact that the measure m ′ A,x , considered as a measure on unrooted loops L x (A), is the same as m restricted to L x (A).

Proposition 5.9. Suppose p is coming from a Markov chain and x 0 ∈ A.
• Let η = [η 0 , η 1 , . . . , η n ] be LERW from x 0 to ∂A; that is, η is chosen from the LERW probability distribution on W A (x 0 , ∂A). • Let {X ℓ t : ℓ ∈ L(A)} denote an independent realization of the random walk loop soup. Let us view the realization at time t = 1 as a finite sequence of loops [ℓ 1 , ℓ 2 , . . . , ℓ M ], where the loops are ordered according to the time they were added to the soup.
• Take a subsequence of these loops, which we also denote by [ℓ 1 , ℓ 2 , . . . , ℓ M ], by considering only those loops that intersect η. • For each ℓ k let j be the smallest index such that η j ∈ ℓ k . Choose a rooted representativel k of ℓ k rooted at η j . If there are several representatives choose uniformly among all possibilities. Define loops l j , j = 0, . . . , n − 1 to be the loop rooted at η j obtained by concatenating (in the order they appear in the soup) all the loopsl k that are rooted at η j .
Then the path has the distribution of the Markov chain started at x_0 and stopped at ∂A. In other words, for each ω ∈ K_A(x_0, ∂A), the probability that this algorithm outputs ω is p(ω).

Relation to Gaussian field
There is a strong relationship between the loop soup at time t = 1/2 and a Gaussian field that we discuss here. We will consider only integrable, Hermitian weights q, that is, weights for which q(x, y) is the complex conjugate of q(y, x). If q is real, this implies that q is symmetric, and in general it implies that for every path ω, q(ω^R) is the complex conjugate of q(ω). Every Hermitian weight can be written as

q(x, y) = p(x, y) e^{iΘ(x,y)},

where p is positive and symmetric and Θ is anti-symmetric, Θ(x, y) = −Θ(y, x). If q is an integrable Hermitian weight on A, then q(l) + q(l^R) ∈ R for every loop l, and hence the loop measure takes real values on unrooted loops.

Proposition 6.1. If q is an integrable, Hermitian weight on A, then the Green's matrix G is a positive definite Hermitian matrix.
Proof. Since I − Q is Hermitian, it is clear that G = (I − Q)^{−1} is Hermitian, so it suffices to prove that I − Q is positive definite. Since I − Q is Hermitian, Sylvester's criterion states that it suffices to show for each V ⊂ A that det(I − Q_V) > 0, where Q_V denotes Q restricted to the rows and columns associated to V. If V = {x_1, . . . , x_k}, then (6) gives

det(I − Q_V) = ∏_{j=1}^{k} G_{V_j}(x_j, x_j)^{−1},

where V_j = V \ {x_1, . . . , x_{j−1}}. This is positive by (19).

Weights on undirected edges
Definition • A (real, signed) weight on undirected edges E A is a function θ : E A → R.
• If θ is a weight on E_A, then there is a symmetric weight q = q_θ on directed edges given by

q(x, y) = q(y, x) = θ_e/2 if e = {x, y} with x ≠ y; q(x, x) = θ_e if e is a self-edge at x. (20)

• Conversely, if q is a symmetric weight on E_A, we define θ by

θ_e = 2 q(x, y) if e = {x, y} with x ≠ y; θ_e = q(x, x) if e is a self-edge at x. (21)

• We say that θ is integrable or green if the corresponding q is integrable or green, respectively.
• If f : A → C, we also write f for the function on edges f(e) = f(x) f(y), where e connects x and y.
Clearly it suffices to give either θ or q, and we will specify symmetric weights either way. Whenever we use θ it will be a function on undirected edges and q a function on directed edges; they will always be related by (20) and (21). If we give θ, we will write just q for q_θ. In particular, if θ is integrable, we can discuss the Laplacian ∆ = Q − I and the Green's function G = (I − Q)^{−1}, where Q = [q(x, y)]_{x,y∈A}. We will write D = det(I − Q) = 1/det G.

Gaussian free field
Definition Given a (strictly) positive definite symmetric real matrix Γ indexed by a finite set A, the centered Gaussian field (with Dirichlet boundary conditions) is a centered multivariate normal random vector {Z x : x ∈ A} indexed by A with covariance matrix Γ.
The density of Z̄ = {Z_x} is given by

f(z̄) = (2π)^{−n/2} (det Γ)^{−1/2} exp{ −z̄ · Γ^{−1} z̄ / 2 },

where · denotes the dot product. We will consider the case Γ = G, Γ^{−1} = I − Q = −∆, where q is a green weight.
Then we have

z̄ · (I − Q) z̄ = |z̄|^2 − ∑_{e∈E_A} θ_e z_e.

Here θ_e is as defined in (21), and z_e = z_x z_y by the convention above for functions on edges. From this we see that the Gaussian distribution with covariance G has Radon–Nikodym derivative with respect to independent standard Gaussians of

√D exp{ (1/2) ∑_{e∈E_A} θ_e z_e }.

Definition The Gaussian field generated by a green weight θ on E_A is a random vector {Z_x : x ∈ A} indexed by A whose density is

f(z̄) = √D φ(z̄) exp{ (1/2) ∑_{e∈E_A} θ_e z_e }, (22)

where φ = φ_A is the density of a standard normal random vector indexed by A. This is the same as the centered Gaussian field with covariance matrix Γ = (I − Q)^{−1}.
We will consider a two step process for sampling from the Gaussian field. We first sample from the square of the field, and then we try to assign the signs. Recall that if N is a standard normal random variable, then N 2 has a χ 2 -distribution with one degree of freedom. In particular, if T = N 2 /2, then T has density (πt) −1/2 e −t . The next proposition gives the analogous computation for the Gaussian field weighted by θ.
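As a quick sanity check of this fact (our own illustration, not part of the text): T = N^2/2 should have the Gamma(1/2, 1) density (πt)^{−1/2} e^{−t}, so E[T] = 1/2 and E[T^2] = 1/2 + 1/4 = 3/4.

```python
import random

# If N is a standard normal, T = N^2 / 2 has density (pi*t)^(-1/2) e^(-t),
# i.e. T ~ Gamma(shape 1/2, rate 1); hence E[T] = 1/2 and E[T^2] = 3/4.
random.seed(1)
samples = [random.gauss(0.0, 1.0) ** 2 / 2 for _ in range(200_000)]
mean = sum(samples) / len(samples)                        # should be near 0.5
second_moment = sum(t * t for t in samples) / len(samples)  # should be near 0.75
```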
Proposition 6.2. Suppose {Z_x : x ∈ A} is the Gaussian field generated by an integrable weight θ on E_A and let T_x = Z_x^2/2. Then {T_x : x ∈ A} has density

g(t̄) = Φ(t̄) ∏_{x∈A} (π t_x)^{−1/2} e^{−t_x}, where Φ(t̄) = √D 2^{−n} ∑_{J̄∈{±1}^A} exp{ ∑_{e∈E_A} J_e ρ_e }, ρ_e = θ_e √(t_e). (23)
In other words, Φ is the Radon–Nikodym derivative of {T_x} with respect to the density obtained for standard normals (θ ≡ 0).
Proof. This is obtained by a change of variables, being a little careful because the relation z̄ → t̄ is not one-to-one. Let n = #(A), and let J_x = sgn(z_x), y_x = |z_x|, so that z_x = J_x y_x. Then we can write the density (22) in terms of (J̄, ȳ). We now do the change of variables t_x = y_x^2/2, so that dt_x = y_x dy_x, to see that, for a fixed value of J̄, the density of T̄ restricted to the z̄ carrying the signs J̄ is

√D 2^{−n} exp{ ∑_{e∈E_A} J_e ρ_e } ∏_{x∈A} (π t_x)^{−1/2} e^{−t_x}.

If we now sum over the 2^n possible values for J̄, we get the result.
Note that (24) gives the conditional distribution of the signs of the field given the square of the field.

Corollary 6.3. Suppose {Z_x : x ∈ A} is the Gaussian field generated by a green weight θ on E_A and let T_x = Z_x^2/2.

The measure on undirected currents
We will write E = E_A for the set of undirected edges. For each e ∈ E, x ∈ A, we let n_e(x) be the number of times the edge touches x. More precisely, n_e(x) = 2 if e is a self-loop at x; n_e(x) = 1 if e connects x to a different vertex; and n_e(x) = 0 otherwise. If k̄ = (k_e : e ∈ E) ∈ N^E, then k̄ generates a local time on vertices by

n_x = n_x(k̄) = (1/2) ∑_{e∈E} k_e n_e(x). (25)

Note that ∑_{x∈A} n_x = ∑_{e∈E} k_e. We say that k̄ is an (undirected) current if n_x is an integer for each x. Equivalently, k̄ is a current if for each x ∈ A the number of edges in k̄ that go from x to a different vertex is even. Let C = C_A denote the set of undirected currents in A.

Given a weight θ on E, there is a corresponding symmetric weight q on E_A given by (20). The loop soup for an integrable weight q viewed at time t induces a measure on N^E that is supported on C. (This process is not reversible without adding randomness: one cannot determine the realization of the loop soup solely from the realization of the current.) At time t = 1/2, this measure has a particularly nice form. If k̄ ∈ C we define θ(k̄) = ∏_{e∈E} θ_e^{k_e}.

Theorem 2. If θ is an integrable weight on E and µ = µ_{1/2} denotes the distribution at time t = 1/2 of the corresponding loop soup considered as a measure on C, then for each k̄ ∈ C, µ(k̄) is given by (26). Here n_x = n_x(k̄) is the vertex local time as in (25).
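The definitions of vertex local time and current are easy to make concrete. The sketch below uses our own dictionary-based encoding of a multigraph (edge labels mapping to endpoint pairs); it is an illustration, not notation from the text.

```python
def vertex_local_times(k, edges):
    """Vertex local times n_x = (1/2) * sum_e k_e * n_e(x) for an
    undirected multiplicity vector k.  `edges` maps an edge label to its
    endpoint pair (x, y), with x == y for a self-loop."""
    n = {}
    for e, mult in k.items():
        x, y = edges[e]
        if x == y:                        # self-loop: n_e(x) = 2
            n[x] = n.get(x, 0) + mult
        else:                             # n_e(x) = n_e(y) = 1
            n[x] = n.get(x, 0) + mult / 2
            n[y] = n.get(y, 0) + mult / 2
    return n

def is_current(k, edges):
    """k is a current iff every vertex local time is an integer, i.e. each
    vertex meets an even number of non-self-loop edge visits."""
    return all(float(v).is_integer() for v in vertex_local_times(k, edges).values())
```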
Proof. The proof will use a combinatorial identity that is proved in Section 6.4; here we show how to reduce the theorem to this identity. We write q for the corresponding weight on directed edges as in (20). We choose an ordering of A = {x_1, . . . , x_n} and consider the growing loop (bubble) representation of the soup as in Section 5.3. Let l̄ = (l^1, . . . , l^n) be the output of the growing loops at time t = 1/2, and let π denote the function that sends each l̄ to the corresponding current k̄; note that π is not one-to-one. The measure of the soup on l̄ is given by (18). If π(l̄) = k̄, then q(l̄) = 2^{−S(k̄)} θ(k̄), where S(k̄) = ∑_{e∈E_0} k_e and E_0 denotes the set of edges in E that are not self-edges. Therefore, we need to show that the induced distribution on C gives the stated measure to k̄; this is done in Theorem 3. We note that since (26) also represents the measure of the current k̄ obtained from the (unordered) loop measure, the sum on the left-hand side is independent of the ordering of the vertices.
In the last proof, π is used both for a function on loops and for the number 3.14 · · · . This will also be true in Section 6.4. We hope that this is not confusing.

A graph identity
Here we prove a combinatorial fact that is a little more general than we need for Theorem 2. We change the notation slightly, although there is overlap with our previous notation. Let G = (A, E) be a finite, but not necessarily simple, graph. The edges E are undirected, but we allow self-edges and multiple edges; let E_0 be the set of edges that are not self-edges. We write A = {x_1, . . . , x_n}, A_j = A \ {x_1, . . . , x_{j−1}}, and we let ω̄ = (ω^1, . . . , ω^n) be an ordered n-tuple of loops where ω^j is a loop in A_j rooted at x_j. To be more precise, a loop rooted at x_j in A_j is a sequence of points as well as a sequence of undirected edges such that the endpoints of e_i are ω^j_{i−1} and ω^j_i. As before, we write n_e(x) = 2, 1, 0 according as e is a self-edge at x; is in E_0 and has x as a vertex; or does not touch x. A current k̄ = {k_e : e ∈ E} is an element of N^E with the property that the number of edges going out of each vertex is even; to be precise, if n_x is defined by (25), then n_x is an integer for each x. For each ω̄ there is a corresponding current, which we denote by π(ω̄), obtained by counting the total number of traversals of each edge. We write N_j = N_j(ω^j) for the number of elementary loops in ω^j.

Theorem 3. If (A, E) is a graph, then (27) holds for every k̄ ∈ C.

We will prove this by induction, treating various cases. We will use the fact from the last section that

∑_{ω̄ : π(ω̄)=k̄} ∏_{j=1}^{n} Γ(N_j + 1/2)/N_j!

is independent of the ordering of the vertices.

Adding a self-edge
Suppose that (27) holds for a graph G = (A, E) with A = {x_1, . . . , x_n}, and consider a new graph G̃ = (A, Ẽ) obtained by adding one self-edge ẽ at x_1. We write C, C̃ for the currents of G and G̃, respectively, and we write k̃ ∈ C̃ as k̃ = (k̄, k) where k̄ ∈ C and k = k_ẽ. Let us write n_x, ñ_x for the corresponding quantities in G, G̃, respectively. We also write L and L̃ for the corresponding collections of ordered tuples ω̄. Let U, V denote the left- and right-hand sides of (27), respectively, for G and k̄, and let Ũ, Ṽ be the corresponding quantities for G̃ and k̃ = (k̄, k). We will show that U = V implies that Ũ = Ṽ.
Let r = n_{x_1}, and hence ñ_{x_1} = r + k_ẽ = r + k. Note that ñ_{x_j} = n_{x_j} for j ≥ 2 and S(k̃) = S(k̄). In particular, (28) holds. If ω̄ = (ω^1, . . . , ω^n) ∈ L with π(ω̄) = k̄, we have N_1 = n_{x_1} = r (this uses the fact that x_1 is the first vertex in the ordering). To obtain an ω̄′ ∈ L̃ with π(ω̄′) = k̃, we replace ω^1 with ω̃^1, which is constructed by inserting the edge ẽ into ω^1 a total of k times; the number of ways to do this can be counted explicitly. Note that Ñ_1 = N_1 + k. Using S(k̃) = S(k̄), we obtain (29). Comparing (28) and (29), we see that Ũ = Ṽ.

Edge duplicating
Suppose that (27) holds for a given graph G = (A, E), and take an edge e ∈ E and add another edge e_1 with the same endpoints. If e is a self-edge, this is the same as adding a self-edge and we can use the previous argument; hence we assume that e connects distinct vertices. Let G̃ = (A, Ẽ) with Ẽ = E ∪ {e_1}. We write C, C̃, L, L̃ for the corresponding quantities as before. If k̃ ∈ C̃, then we can obtain a current k̄ ∈ C by letting k̄_e = k̃_e + k̃_{e_1}
and letting k̄ agree with k̃ on E \ {e}. Suppose that k̄_e = k. Then there are k + 1 possible k̃ that give k̄; we write k̃^j ∈ C̃ for the current that agrees with k̄ on E \ {e} and has k̃^j_e = j, k̃^j_{e_1} = k − j. Let us fix k̄ as above and let U, V be the left- and right-hand sides of (27) for G and k̄. We choose an ordering A = {x_1, . . . , x_n} for which x_1 is an endpoint of e. For j = 0, 1, . . . , k, let Ũ_j, Ṽ_j be the left- and right-hand sides of (27) for G̃ and k̃^j. Note that ñ_{x_i} = n_{x_i} and N_i = Ñ_i for i = 1, . . . , n. We will show that if U = V, then Ũ_j = Ṽ_j for each j.
First we compute Ṽ_j directly from the definition. If ω̄ ∈ L is a walk with π(ω̄) = k̄, then N_1 = k and we traverse e k times. In L̃, for each of these traversals we can either keep e or replace e with e_1. There are \binom{k}{j} ways in which we can retain e exactly j times and change to e_1 the other k − j times. Therefore Ũ_j = \binom{k}{j} U.

Converting a self-edge
Suppose that G = (A, E) is a graph with A = {x 1 , . . . , x n } for which (27) holds.
Suppose that e ∈ E is a self-edge at x_1. Let G̃ = (Ã, Ẽ) be the new graph obtained by converting the self-edge to an edge to a new vertex; that is, Ã = A ∪ {y} and Ẽ = (E \ {e}) ∪ {e′}, where e′ connects x_1 and y. We write C, C̃, L, L̃ for the corresponding quantities as before. We will show that (27) holds for G̃. Let k̃ ∈ C̃ and let 2k = k̃_{e′}; note that k̃_{e′} must be even since y has no other edges adjacent to it. Let k̄ be the current in C that agrees with k̃ on E \ {e} and has k̄_e = k. We will show that if (27) holds for G and k̄, then it also holds for G̃ and k̃. As before let U, V be the left- and right-hand sides of (27) for G, k̄ and Ũ, Ṽ the corresponding quantities for G̃, k̃. Note that ñ_x agrees with n_x on A, with ñ_y = k. Also, S(k̃) = S(k̄) + 2k. This and a standard identity for the Gamma function give (30). Comparing U and Ũ is not difficult: there is a one-to-one correspondence between walks ω^1 that visit e k times and walks ω̃^1 that visit e′ 2k times; we just replace each occurrence of e with e′ ⊕ e′. Therefore (31) holds, where we set N_{n+1} = 0 as the value corresponding to the new vertex y. Comparing (30) and (31) gives Ũ = Ṽ.

Merging vertices
Suppose that G = (A, E) is a graph with A = {x_1, . . . , x_n, y_1, y_2, . . . , y_s}, with s ≤ n, such that there are edges e_j, 1 ≤ j ≤ s, connecting x_j, y_j; and for 1 ≤ i < j ≤ s, edges e_{ij} connecting x_i, x_j. We also assume that there are no other edges adjacent to y_1, . . . , y_s, but there may be more edges connecting x_1, . . . , x_n.
Our new graph G̃ will combine y_1, y_2, . . . , y_s into a single vertex that we call y. We keep the edges e_1, e_2, . . . , e_s (which now connect x_j and y) and we remove the edges e_{ij}. The remaining edges of E (all of which connect points in x_1, . . . , x_n) are also in Ẽ. We write C, C̃, L, L̃ as before.
We choose the orderings of A = {x_1, . . . , x_n, y_1, . . . , y_s} and Ã = {x_1, . . . , x_n, y}. There is a one-to-one relationship between the ω̄ ∈ L and ω̃ ∈ L̃ obtained by replacing each traversal of the edge e_{ij} starting at x_i with e_i ⊕ e_j and each traversal of e_{ij} starting at x_j with e_j ⊕ e_i.
If there are any edges e that are not of the form e_j or e_{ij}, then k̃_e = k̄_e. The correspondence between k̄ and k̃ is not one-to-one. Let us write U(k̄), V(k̄) for the left- and right-hand sides of (27) for (G, k̄) and Ũ, Ṽ for the corresponding quantities for (G̃, k̃). We will show that if U(k̄) = V(k̄) for each k̄, then Ũ = Ṽ. Note that Ũ and Ṽ can be written as sums, where in each case the sum is over all a_j, b_{ij} satisfying (32). The result Ũ = Ṽ then follows from the following combinatorial lemma.
Lemma 6.4. Suppose K is a positive integer and k_1, . . . , k_n are positive integers with k_1 + · · · + k_n = 2K. Then

∑ 2^B K! / ( a_1! · · · a_n! ∏_{i<j} b_{ij}! ) = (2K)! / (k_1! · · · k_n!), where B = ∑_{i<j} b_{ij}, (33)

and the sum is over all nonnegative integers a_1, . . . , a_n and {b_{ij} : 1 ≤ i < j ≤ n} satisfying

2a_j + ∑_{i≠j} b_{ij} = k_j, j = 1, . . . , n. (34)

In the last formula, we write b_{ji} = b_{ij} if j > i.
Proof. The right-hand side of (33) is the number of sequences (m_1, . . . , m_{2K}) with m_i ∈ {1, . . . , n} such that the integer j appears exactly k_j times. We can write each such sequence as a sequence of K ordered pairs. Let a_j denote the number of these pairs that equal (j, j) and, for i < j, let b_{ij} denote the number that equal (i, j) or (j, i). Then the condition that the integer j appears exactly k_j times in the original sequence translates to (34) for the sequence of ordered pairs. The factor 2^B takes into consideration the fact that both (i, j) and (j, i) are counted by b_{ij}.
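The lemma is easy to verify numerically for small data. The sketch below reconstructs both sides from the proof: the left side sums 2^B K!/(∏_j a_j! ∏_{i<j} b_{ij}!) over solutions of the constraint system, and the right side is the multinomial coefficient (2K)!/(∏_j k_j!). The helper names are our own.

```python
from itertools import product
from math import factorial, prod

def lemma_lhs(ks):
    """Brute-force evaluation of the left-hand side of the lemma:
    sum over nonnegative a_j, b_ij with 2*a_j + sum_i b_ij = k_j of
    2^B * K! / (prod a_j! * prod_{i<j} b_ij!), where B = sum b_ij."""
    n, K = len(ks), sum(ks) // 2
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    total = 0
    for b in product(*(range(min(ks[i], ks[j]) + 1) for i, j in pairs)):
        deg = [0] * n                       # sum_i b_ij for each vertex j
        for (i, j), bij in zip(pairs, b):
            deg[i] += bij
            deg[j] += bij
        if any(ks[j] - deg[j] < 0 or (ks[j] - deg[j]) % 2 for j in range(n)):
            continue                        # constraint (34) has no solution here
        a = [(ks[j] - deg[j]) // 2 for j in range(n)]
        total += (2 ** sum(b) * factorial(K)
                  // (prod(factorial(x) for x in a) * prod(factorial(x) for x in b)))
    return total

def lemma_rhs(ks):
    """Right-hand side: the multinomial coefficient (2K)! / prod k_j!."""
    return factorial(sum(ks)) // prod(factorial(k) for k in ks)
```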

General case
We proceed by induction on the number of vertices. Suppose the result is true for all graphs with n vertices. If G = (A, E) is a simple graph (no self-edges or multiple edges) with n + 1 vertices, write A = A′ ∪ {y} where A′ = {x_1, . . . , x_n} and E = E′ ∪ E_y where E_y is the set of edges that include y. By induction the result holds for (A′, E′), and we can obtain G from (A′, E′) by first adding an edge to a new vertex y_j for each j for which there is an edge in E connecting x_j and y, and then merging these vertices to get the graph G. This handles simple graphs with n + 1 vertices, and we can then add multiple edges and self-edges as above.

Square of the Gaussian free field
If θ is an integrable weight on E = E_A with corresponding directed weight q, we can consider the loop soup associated to q. Here we consider it only as a measure on currents {k_e : e ∈ E} ∈ C, and hence also on vertex local times {n_x : x ∈ A}.
We will use this measure plus some extra randomness to construct the square of the Gaussian field with weight θ. Let us first consider θ ≡ 0, for which the measure on currents is supported on the trivial current. In this case, the field {Z_x : x ∈ A} should be the standard Gaussian, and hence {Z_x^2 : x ∈ A} are independent χ^2 random variables with one degree of freedom. Equivalently, if R_x = Z_x^2/2, then R̄ = {R_x : x ∈ A} are independent Gamma random variables with parameters 1/2 and 1, that is, each with density (πt)^{−1/2} e^{−t}.
More generally, given a realization of the current k̄, and hence of the vertex local times {n_x}, at each vertex x we add the sum of n_x independent exponential random variables of rate 1. In other words, we consider a random vector Ȳ = {Y_x : x ∈ A} such that, given k̄, the Y_x are independent with a Gamma density with parameters n_x and 1. If T̄ = R̄ + Ȳ, then, given k̄, the {T_x} are independent Gamma random variables with parameters n_x + 1/2 and 1; that is, the joint density for T̄ is

∏_{x∈A} t_x^{n_x − 1/2} e^{−t_x} / Γ(n_x + 1/2).

If we combine this with Theorem 2, we get the following.
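Given a vertex local time n_x, the two-step sampling of T_x described above can be sketched as follows (an illustration with our own names; the Gamma(n_x + 1/2, 1) law of the output is the point being checked).

```python
import random

def sample_T(n_x, rng=random):
    """Sample T_x = R_x + Y_x given the vertex local time n_x:
    R_x ~ Gamma(1/2, 1), realized as N^2/2 for a standard normal N,
    plus the sum of n_x independent rate-1 exponentials.  The result
    should be distributed as Gamma(n_x + 1/2, 1)."""
    R = rng.gauss(0.0, 1.0) ** 2 / 2
    Y = sum(rng.expovariate(1.0) for _ in range(n_x))
    return R + Y
```

For instance, with n_x = 3 the empirical mean of many samples should approach 3.5.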
Proposition 6.5. Suppose θ is an integrable weight on a set A with n elements.
Let k̄ denote an undirected current and let µ = µ_{1/2} denote the measure on k̄ induced by the loop soup at time 1/2, and let T̄ = R̄ + Ȳ as above. Then the joint density of (k̄, t̄) is given by (35).

If ρ̄ = {ρ_e : e ∈ E} ∈ C^E and k̄ ∈ N^E, we set ρ̄^{k̄} = ∏_{e∈E} ρ_e^{k_e}. With this notation we can write the density (35) as (36). Let ĝ denote the marginal density of t̄, which can be obtained by summing over all possibilities for k̄.

We will now show the relationship between the distribution of t̄ and the square of the Gaussian free field, which was first found by Le Jan (see [20] and references therein). We will use the following lemma (see, e.g., [1, Section 2.1]).
Proof. If k̄ ∈ N^E, we let n_x = n_x(k̄) be as before. Then we can expand the product and take expectations term by term. If n_x(k̄) is odd for some x ∈ A, we get E(J_x^{n_x(k̄)}) = 0; otherwise, E(J_x^{n_x(k̄)}) = 1 for all x. This gives the lemma.
The formula (35) is valid in the non-integrable case, although it is no longer the density of a measure. However, the calculations of this section remain valid. In particular, for fixed t̄, the conditional measure on k̄ is a measure, and (37) shows that the total mass is positive.

Theorem 4. Under the assumptions above, the marginal density for t̄ = {t_x} is the same as that of {Z_x^2/2}, where Z̄ is the centered multivariate normal distribution indexed by A with covariance matrix G.
Proof. In (23) we showed that the density of {Z_x^2/2} can be written in terms of expectations over independent ±1 coin flips {J_x : x ∈ A}, where J_e = J_x J_y if e connects x and y, and ρ_e = θ_e √(t_e). By using (37), we get (36).
The form of the joint density also gives us the conditional density for the current k̄ given t̄.

Proposition 6.7. Given t̄, the conditional distribution of the current k̄ is proportional to (38).

The form (38) may look like the distribution of independent Poisson random variables {k_e : e ∈ E}, where k_e has intensity ρ_e. However, this distribution is restricted to k̄ ∈ C: it is that of independent Poisson random variables conditioned on the event that k̄ is a current.

Finding the signs
Given the square of the field {T_x = Z_x^2/2 : x ∈ A}, the values of the field are given by Z_x = J_x √(2T_x), where J_x = ±1 is the sign of Z_x. A way to specify the signs J̄ is to give the "positive set" V = {x ∈ A : J_x = 1}. For a positive weight, we can give an algorithm due to Lupu [21] to obtain the Gaussian field with signs from a realization of the loop soup combined with some extra randomness.
• Obtain a sample (k̄, t̄) of currents and times as above. This gives the edge weights ρ_e.
• Open any edge e with k_e ≥ 1.
• Independently, open each remaining edge e with probability 1 − e^{−2ρ_e}. An edge is open if it has been opened for either of the two reasons. We say two vertices are connected if there is a path between them using open edges.
• For each connected cluster U, take an independent random variable J_U = ±1 with equal probability, and set J_x = J_U for x ∈ U.
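The last step, one fair coin per open cluster, can be sketched with a small union-find; the opening of edges from (k̄, t̄) is assumed to have been done already, and all names here are our own.

```python
import random

def cluster_signs(vertices, open_edges, rng=random):
    """Assign signs J_x = +1 or -1 by flipping one fair coin per connected
    cluster of open edges, as in the sign-assignment step above.
    `open_edges` is a list of (x, y) pairs of open edges."""
    parent = {v: v for v in vertices}

    def find(v):                          # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for x, y in open_edges:               # merge clusters along open edges
        parent[find(x)] = find(y)
    sign = {}
    for v in vertices:                    # one coin flip per cluster root
        r = find(v)
        if r not in sign:
            sign[r] = rng.choice([-1, 1])
    return {v: sign[find(v)] for v in vertices}
```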
Theorem 5. The distribution of Z̄ = (Z_x) is that of a centered multivariate normal random vector indexed by A with covariance matrix G.
Proof. We proceed by induction on the cardinality of A. The result is clearly true for a one-point set. Assume that it is true for all sets of cardinality at most n − 1 and suppose that #(A) = n.
We will write elements of R^V as z̄ = (z̄_+, −z̄_−) where z̄_+ ∈ (0, ∞)^V, z̄_− ∈ (0, ∞)^{A\V}, and we will consider the density of Z̄ restricted to R^V. Let G_V, G_{A\V} denote the Green's function restricted to the edges in V, A \ V, respectively. We write E* = E*(V, A) for the set of edges in E_A that have one vertex in V and one vertex in A \ V.
• In order for the algorithm to output z̄ ∈ R^V, it is necessary that every edge in E* be closed. This requires it to be closed under both criteria.
- To be closed under the first criterion, the loop soup must not contain any loop in L, the set of unrooted loops that intersect both V and A \ V. The measure m(L) is finite, where m denotes the unrooted loop measure; therefore, the probability that no such loop is chosen in the soup of intensity 1/2 is e^{−m(L)/2}.
- Given the realization of t̄, the probability that no edge in E* is open under the second criterion is ∏_{e∈E*} e^{−2ρ_e}.
• Given that the loop soup contains no loop in L, the algorithm acts independently on V and A \ V. By the induction hypothesis, the density of the output of the algorithm, restricted to R^V, is the product of the corresponding densities for V and A \ V. Using (22) and (39), we see that this is the same as f_A((z̄_+, −z̄_−)).
• This argument computes the density f_A(z̄) for any z̄ ∈ R^V such that V is not ∅ or A. However, symmetry shows that f_A(z̄) = f_A(−z̄) for z̄ ∈ R^A, and since the total integral of the density must equal one, we get the result for V = ∅ and V = A as well.
G(x, x) = G(y, y) = 1/(1 − a). The distribution of n_x at time t = 1/2 for the growing loop at x, and hence the joint density of (N_x, T_x), can be written down explicitly. Summing over k, we get that the density of T_x is the density of Z^2/2, where Z is a centered normal with variance 1/(1 − a).
To get normal random variables with covariance matrix G, we can let U, V be independent N(0, 1) and take appropriate linear combinations. Note that the joint distribution of (Z_x^2, Z_y^2) in this case is independent of the sign of q; however, the distribution of (Z_x, Z_y) does depend on the sign.

An example: nearest neighbor, symmetric walks on Z 2
Before doing the general theory, we will consider simple random walk in Z 2 . We will make use of some planarity properties as well as conformal invariance in the scaling limit. For now we set up some notation. We will use complex notation, Z 2 = Z + iZ; in particular, we write just k for the point k + 0 · i. We will write p for the usual random walk weight, p e = 1/4 for every nearest neighbor directed edge. Equivalently, in the notation of Section 6, we can set θ e = 1/2 for each undirected, nearest neighbor edge. We will consider random walk restricted to finite, connected open sets A often stopped at the boundary ∂A.
• Associated to every finite A there is a domain D A ⊂ C obtained by replacing each lattice point with a square of side length one centered at the point.
To be more precise, we let S = {r + is ∈ C : |r|, |s| ≤ 1/2}, and we define D_A to be the interior of ⋃_{z∈A} (S + z). The boundary of D_A is a union of unit segments that are edges in the dual lattice.
• The (downward) zipper at w_0 is the vertical line in C starting at w_0 and going downward until it first reaches ∂D_A. If p is a positive, symmetric, nearest neighbor integrable weight on A, we define the corresponding zipper measure q by saying that q_e = J_e p_e, where J_e = −1 if e crosses the zipper and J_e = 1 otherwise. Equivalently, let k be the smallest positive integer such that either −ki or 1 − ki is not in A; then J_e = −1 if e connects −ji and 1 − ji with 0 < j < k, and J_e = 1 otherwise.
• We say that a connected A is simply connected if Z^2 \ A is also a connected subgraph of Z^2; this is equivalent to saying that D_A is a simply connected domain.
• We let A be the set of finite subsets of Z + iZ containing 0 and 1, and we let A_sc be the collection of such sets that are simply connected.
• Here is a topological fact. Suppose A ∈ A_sc and {a, b} are distinct boundary edges. Then we can order (a, b) such that the following is true.
-Any η ∈ W A (a, b) using the directed edge − → 01 crosses the zipper an even number of times. In particular, q(η) = p(η).
-Any η ∈ W A (a, b) using the directed edge − → 10 crosses the zipper an odd number of times. In particular, q(η) = −p(η).
We will call this the positive ordering of {a, b} (with respect to this zipper).
• We let O_A be the set of unrooted loops in A that intersect the zipper an odd number of times.
• If A ∈ A_sc, let f : D_A → D := {z ∈ C : |z| < 1} be the unique conformal transformation with f(0) = 0, f(a) = 1, and define θ by f(b) = e^{2iθ}. The existence and uniqueness of the map follow from the Riemann mapping theorem. Using the Koebe 1/4-theorem from complex analysis, we can obtain estimates for this map.

We have made an arbitrary choice of the zipper: we could take any curve such that, in the scaling limit, it gives a simple curve from the origin to the boundary. Making a specific choice simplifies the discussion; it is important that the zipper comes up all the way to the edge 01. We have also defined our collection of sets to contain the ordered edge 01, which is again an arbitrary choice. If we are interested in sets containing the nearest neighbor edge from z to w = z + e^{iθ}, we can take A ∈ A, translate by z, and then rotate by θ.
If D ⊂ C is a bounded domain containing the origin, we can make a lattice approximation to D. To be specific, for each positive integer n, we let A D,n be the connected component containing the origin of all z ∈ Z + iZ such that S + z ⊂ nD. We then set D n = n −1 D AD,n . We get the following properties.
• For every z ∈ ∂D n , dist(z, ∂D) ≤ √ 2/n. • If D is simply connected, then D n is simply connected.
• If ζ ∈ D, then for all n sufficiently large, ζ ∈ D n .
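A breadth-first search produces this lattice approximation. In the sketch below we test the four corners of S + z as a stand-in for the condition S + z ⊂ nD, which is exact for convex domains; the function names and the membership-test interface are our own.

```python
from collections import deque

def lattice_approximation(in_nD, bound):
    """Connected component containing 0 of the lattice points z for which
    the unit square S + z lies inside n*D.  `in_nD(x, y)` tests whether a
    point of the plane belongs to n*D; `bound` limits the search box."""
    def square_inside(x, y):
        # corner test: a sketch standing in for S + z subset of nD
        return all(in_nD(x + dx, y + dy)
                   for dx in (-0.5, 0.5) for dy in (-0.5, 0.5))

    A, queue = set(), deque()
    if square_inside(0, 0):
        A.add((0, 0))
        queue.append((0, 0))
    while queue:                            # BFS over nearest neighbors
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if ((nx, ny) not in A and abs(nx) <= bound and abs(ny) <= bound
                    and square_inside(nx, ny)):
                A.add((nx, ny))
                queue.append((nx, ny))
    return A
```

For the disk of radius 4 centered at the origin, for instance, the component contains (3, 0) but not (4, 0).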

General theory
If q is a weight on A, and hence a measure on K A , we extend this to a measure on ordered pairs of paths using product measure. The next definition makes this precise.
• We write H_A(x, y) for the total mass, which is given by the product ∏_{j=1}^{k} H_A(x_j, y_j).

There are several reasonable choices for the extension Ĥ_A(x, y) of the loop-erased measure, but the following has proved to be the most useful.

Definition
• If x = (x_1, . . . , x_k), y = (y_1, . . . , y_k) are 2k distinct points in A, let W_A(x, y) denote the set of ordered k-tuples of SAWs that are mutually avoiding.
• The loop-erased measure q̂ is defined on W_A(x, y) by

q̂(η̄) = q(η̄) F_{η̄}(A),

where log F_{η̄}(A) = log F_{η_1∪···∪η_k}(A) is the loop measure of loops in A that intersect at least one of the SAWs η_1, . . . , η_k.
• More generally, the measure is defined on W_A(x, y), and we write Ĥ_A(x, y) for its total mass.
• If σ is a permutation of {1, . . . , k} and x = (x_1, . . . , x_k), we write x^σ = (x_{σ(1)}, . . . , x_{σ(k)}). Note that Ĥ_A(x^σ, y^σ) = Ĥ_A(x, y). (It is important that we use the same permutation σ to permute the coordinates of x and y.) In particular, q̂ is defined on W_A(x, y) neither as the product measure nor as the product measure restricted to mutually avoiding paths.

The definition allows the points to be interior or boundary points. It will be easier to consider only the case of boundary points, and the next proposition, which is really no more than an immediate observation that we do not prove, shows that one can change an interior point to a boundary point at the cost of multiplying the entire measure by a loop term.

Proposition 7.1. If x = (x_1, . . . , x_k), y = (y_1, . . . , y_k) are 2k distinct points in A, then for all η̄ ∈ W_A(x, y), (40) holds with B = {x_1, . . . , x_k, y_1, . . . , y_k}.

Example If A ∈ A_sc and x = (x_1, . . . , x_k), y = (y_1, . . . , y_k) are 2k distinct boundary points that appear in order on ∂A, there is only one permutation σ of {1, . . . , k} such that Ĥ_A(x, y^σ) ≠ 0.

From the definition, we immediately get the Radon–Nikodym derivative with respect to product measure.
We could have used (41) as a definition of Ĥ_A(x, y), but it is not obvious from this definition that Ĥ_A(x^σ, y^σ) = Ĥ_A(x, y) for permutations σ.
The next proposition shows that we can give the probability that a looperased walk uses a particular edge in terms of the total mass of pairs of walks (the past and the future as viewed by the edge).
We now sum over all possible η.
It can readily be checked that this gives the necessary bijection. The bijection shows that the terms on the right-hand side of (42) corresponding to V and V σ cancel. The equality of the remaining terms is seen by (41).
The two-path result is a special case of a more general theorem; for a proof of the following, see [17, Proposition 9.6.2].

Proposition 7.5 (Fomin's identity). Suppose x_1, x_2, . . . , x_k, y_1, y_2, . . . , y_k are distinct points in ∂A. Then

det [H_A(x_i, y_j)]_{1≤i,j≤k} = ∑_σ sgn(σ) Ĥ_A(x, y^σ),

where the sum is over all permutations σ of {1, . . . , k}.
The right-hand side of the above equation can also be written in other ways. In the case of simple random walk in a simply connected domain, topological constraints imply that Ĥ_A(x, y^σ) is non-zero for at most one permutation σ.
If x_1, x_2, . . . , x_n, y_1, y_2, . . . , y_n are distinct points on ∂A, then there may be several permutations σ for which Ĥ_A(x, y^σ) ≠ 0. However, one can still determine the loop-erased quantities in terms of random walk determinants. We give the idea here; see [8] and [6] for more details.

We will consider pairings of [2n] = {1, 2, . . . , 2n}. A planar pairing P is a partition into n sets of cardinality 2 such that nonintersecting curves can be drawn in the upper half plane H connecting the paired points. We write x↔y if x and y are paired. There is a one-to-one correspondence between planar pairings and "Dyck paths"; this is combinatorial terminology for one-dimensional random walk bridges that stay nonnegative. The correspondence sends P to the path f_P that takes a +1 step at each left endpoint of a pair and a −1 step at each right endpoint. This defines a partial order on planar pairings: P ⪯ P′ if f_P ≤ f_{P′}. Given a planar pairing, let x = x_P denote the vector of left endpoints in increasing order and y = y_P = (y_1, . . . , y_n) where x_j↔y_j. If σ is a permutation of [n], we write P^σ for the (not necessarily planar) pairing given by x_j↔y_{σ(j)}. Fomin's identity then gives an expansion (43) of the determinant in terms of the quantities Ĥ(x, y^σ). Note that we can restrict the sum on the right-hand side of (43) to permutations σ such that P^σ is a planar pairing, since Ĥ(x, y^σ) = 0 for the others.
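The noncrossing condition and the lattice-path encoding can both be checked mechanically. In this sketch (our own pair-list encoding), a pairing is a list of 2-tuples of points of {1, . . . , 2n}.

```python
def is_planar_pairing(pairs):
    """A pairing of {1, ..., 2n} is planar (noncrossing) iff no two pairs
    (a, b), (c, d) interleave as a < c < b < d."""
    norm = [tuple(sorted(p)) for p in pairs]
    for a, b in norm:
        for c, d in norm:
            if a < c < b < d:
                return False
    return True

def dyck_path(pairs):
    """Lattice-path encoding: step +1 at each left endpoint of a pair and
    -1 at each right endpoint.  For a planar pairing this is a
    nonnegative bridge (a Dyck path), and the path determines the pairing."""
    left = {min(p) for p in pairs}
    f, path = 0, []
    for i in range(1, 2 * len(pairs) + 1):
        f += 1 if i in left else -1
        path.append(f)
    return path
```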
The key observation, which we leave as an exercise, is that if P^σ is a planar pairing, then P ⪯ P^σ. We can then write (43) as a system indexed by planar pairings with coefficients M_{P,P′} ∈ {0, ±1}, where M_{P,P} = 1 and M_{P,P′} = 0 unless P ⪯ P′. Therefore, if we order the pairings consistently with ⪯, then M is an upper triangular matrix with nonzero diagonal entries, and we can invert to express the loop-erased quantities in terms of random walk determinants.

• If X is any function on K_A(x, y), we write ⟨X⟩_q for the integral or "expectation value" ⟨X⟩_q = ⟨X; A, x, y⟩_q = ∑_{ω∈K_A(x,y)} X(ω) q(ω).
• If ω is a path and e is a directed edge, we let Y_e(ω) be the number of times that ω traverses the directed edge e. We also set Y_e^−(ω) = Y_{e^R}(ω), the number of traversals of the reversed edge, and note that Y_e − Y_e^− represents the number of "signed" traversals of e.
• Note that if q is symmetric and z ∈ A, then ⟨Y_e − Y_e^−; A, z, z⟩_q = 0, since the terms with l and l^R cancel.
• We write I_z(ω), I_{ē}(ω), I_e(ω) for the indicator functions that the loop erasure LE(ω) contains the vertex z, the undirected edge ē, and the directed edge e, respectively.
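The signed traversal count Y_e − Y_e^− is straightforward to compute from a path given as a vertex sequence (an illustrative sketch with our own names):

```python
def signed_traversals(omega, e):
    """Y_e - Y_{e^R}: the signed number of traversals of the directed edge
    e = (z, w) by the path omega, given as a list of vertices."""
    z, w = e
    steps = list(zip(omega, omega[1:]))          # consecutive directed steps
    fwd = sum(1 for s in steps if s == (z, w))   # traversals of e
    bwd = sum(1 for s in steps if s == (w, z))   # traversals of e reversed
    return fwd - bwd
```

For a loop, forward and backward traversal counts of a fixed edge need not agree, but the cancellation in the bullet above happens after summing over l and l^R.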
Proposition 7.7. Suppose q is a symmetric integrable weight on A, x, y ∈ ∂A, and e = zw ∈ E_A.

Proof. If ω does not visit both z and w, then Y_e(ω) = Y_e^−(ω) = 0; hence we need only consider ω that visit both z and w. Let ξ, τ be the first and last indices j with ω_j = z, and ξ′, τ′ the corresponding quantities for w.
We can also consider ω̃ = ω^− ⊕ l^R ⊕ ω^+. Since ω_{τ+1} = w, X(ω̃) = X(l^R) = −X(l) = −X(ω). Since q is symmetric, q(ω̃) = q(ω); hence these terms cancel. A similar argument applies in the symmetric case. Suppose ξ < ξ′, τ < τ′, and decompose ω as in (44). Note that X(ω) = X(l) + 1{ω_{τ+1} = w}. By comparing l and l^R as in the previous paragraph, we see that only the indicator term contributes. If ω_{τ+1} = w, we can write ω = ω^− ⊕ l ⊕ l′ ⊕ ω^+, where by construction ω^− ∈ K_{A′}(x, z), l ∈ K_A(z, z), l′ ∈ K_{A\{z}}(w, w), and ω^+ ∈ K_{A′}(w, y). This gives the result; a similar argument gives the corresponding identity for the reversed edge.

Proposition 7.8. Suppose q is a symmetric integrable weight on A, x, y ∈ ∂A, and e = zw ∈ E_A. Let ω ∈ K_A(x, y) and let η = LE(ω) = [x_0 = x, x_1, . . . , x_n = y]. Then we can write ω as a concatenation of η with loops l^j ∈ K_{A\{x_0,...,x_{j−1}}}(x_j, x_j). Using the fact that the terms with l^j and (l^j)^R cancel, as in the previous proof, we get the result.

The next proposition gives the probability that a two-dimensional loop-erased random walk uses an undirected edge in terms of two quantities: the measure of the set of loops with odd winding number and boundary Poisson kernels with respect to the signed zipper measure.

Proposition 7.9. Suppose A ∈ A_sc; x, y ∈ ∂A are positively ordered with respect to the zipper; p is simple random walk with corresponding zipper weight q; and η ∈ W_A(x, y) is a nearest neighbor SAW that contains the directed edge 01. Here O_A is the set of unrooted loops ℓ ∈ L_A that cross the zipper an odd number of times, and ē denotes the undirected edge associated to e.

Proof. We know that p̂(η) = p(η) F_η(A). Since x, y are positively ordered, we have p(η) = q(η). Also m_q(ℓ) = −m_p(ℓ) if ℓ ∈ O_A, and otherwise m_q(ℓ) = m_p(ℓ). For topological reasons, we see that if ℓ ∈ O_A and η contains {0, 1}, then ℓ ∩ η ≠ ∅. This gives the first equality, and by summing over η we get the second. For the next, we give a similar argument for SAWs η that contain the directed edge 10.
The argument is the same except that q(η) = −p(η), and hence

Adding (45) and (46) and using I ē = I e + I e R gives the penultimate equality, and the last equality follows from Propositions 7.7 and 7.8.

A crossing exponent in Z^2
We will calculate a boundary exponent for simple random walk. We will first consider the k = 2 case. If N, r are nonnegative integers, we set A N,r = {x + iy ∈ Z + iZ : 0 < x < rN, 0 < y < πN } .
We will be considering the case with r fixed and N → ∞, in which case N −1 A N,r is an approximation of the rectangle D r = {x + iy ∈ C : 0 < x < r, 0 < y < π}.
In the limit, random walk approaches Brownian motion. For simple random walk and domains whose boundaries are parallel to the coordinate axes, the convergence is very sharp. Indeed, it can be shown that lim N →∞ N 2 H A N,r (z j,N , w k,N ) = h ∂Dr (iy j , r + iy k ), where z j,N and w k,N are lattice points on ∂A N,r approximating N iy j and N (r + iy k ), respectively.
Here we use h to denote the boundary Poisson kernel for Brownian motion. More precisely, the Poisson kernel h(ζ) := h Dr (ζ, r + iy k ) is the harmonic function on D r with boundary value the delta function at r + iy k , and h ∂Dr (iy j , r + iy k ) = ∂ x h(iy j ). We will now take the asymptotics of the right-hand side as r → ∞. The boundary Poisson kernel for Brownian motion can be computed exactly using separation of variables (see, e.g., [2, Section 11.3]):

h ∂Dr (iy, r + iỹ) = (2/π) Σ j≥1 j sin(jy) sin(jỹ) / sinh(jr).
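As a quick numerical illustration (my own, not part of the text), one can sum this series directly; since sinh(jr) ≈ e^{jr}/2 for large r, the j = 1 term dominates and the kernel decays like e^{−r}.

```python
import math

def boundary_poisson_rect(y, ytilde, r, nterms=200):
    """Boundary Poisson kernel h_{dD_r}(iy, r + i*ytilde) for the rectangle
    D_r = {x + iy : 0 < x < r, 0 < y < pi}, summed by separation of
    variables: (2/pi) * sum_j j*sin(j*y)*sin(j*ytilde)/sinh(j*r)."""
    total = 0.0
    for j in range(1, nterms + 1):
        if j * r > 700:   # sinh would overflow in double precision; tail is negligible
            break
        total += j * math.sin(j * y) * math.sin(j * ytilde) / math.sinh(j * r)
    return (2.0 / math.pi) * total
```

Increasing r by 1 multiplies the value by roughly e^{−1}, consistent with the e^{−r} asymptotics below.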
In particular, the ratio is asymptotic to c(y 1 , y 2 ) e −r . A similar argument works for k paths, and we leave the calculation as an exercise.
Exercise 19. Suppose that 0 < y 1 < y 2 < . . . < y n < π. Show that there exists c = c(y 1 , . . . , y n ) > 0 such that as r → ∞,

The exponent n(n + 1)/2 is a (chordal) crossing exponent for loop-erased random walk. It can also be computed directly as a crossing exponent for its continuous counterpart, the chordal Schramm-Loewner evolution with parameter κ = 2. There are corresponding crossing exponents for all κ.
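One way to see the exponent numerically for n = 2 (my own sketch; I am assuming that the relevant quantity is the determinant of boundary Poisson kernels, as Fomin-type identities suggest): the rank-one leading terms of the 2 × 2 determinant cancel, leaving decay of order e^{−3r} = e^{−r·n(n+1)/2}.

```python
import math

def h_rect(y, ytilde, r, nterms=200):
    """Boundary Poisson kernel of the rectangle {0 < x < r, 0 < y < pi},
    summed by separation of variables."""
    total = 0.0
    for j in range(1, nterms + 1):
        if j * r > 700:   # avoid sinh overflow; remaining terms are negligible
            break
        total += j * math.sin(j * y) * math.sin(j * ytilde) / math.sinh(j * r)
    return (2.0 / math.pi) * total

def det2(y, ytilde, r):
    """Determinant of the 2x2 matrix of boundary Poisson kernels; the
    rank-one leading (e^{-2r}) terms cancel, so it decays like e^{-3r}."""
    a = h_rect(y[0], ytilde[0], r)
    b = h_rect(y[0], ytilde[1], r)
    c = h_rect(y[1], ytilde[0], r)
    d = h_rect(y[1], ytilde[1], r)
    return a * d - b * c
```

Comparing r and r + 1 shows the ratio approaching e^{−3}, in line with n(n + 1)/2 = 3 for n = 2.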
We let N → ∞ and then r → ∞ to make the calculation easier. In fact, one can use a finite Fourier series, which is really just a diagonalization of a matrix, to find the discrete Poisson kernel exactly in terms of a finite sum that is dominated by the initial terms. See, e.g., [17, Chapter 8]. This allows us to take N and r to infinity at the same time, as long as r does not grow too much faster than N .

Green's function for loop-erased random walk in Z^2
We will now give a very sharp estimate for the probability that loop-erased random walk goes through a particular edge. Recall the definitions of A sc , r A , S A,a,b from Section 7.1.
Theorem 6. There exist c ′ , u > 0 such that if A ∈ A sc and a, b ∈ ∂ e A, then the probability that a loop-erased random walk from a to b uses the directed edge −→01 equals

The error term O(·) is bounded uniformly over all A, a, b. Let us be more precise. The probability that a loop-erased random walk uses the edge e = −→01 is

where we have left implicit the simple random walk weight p. Then we can restate the theorem as saying that there exists C < ∞ such that log P (e, A, a, b) − log(c ′ r

In particular, if S A,a,b ≥ r .
The result will follow from three estimates: there exist c 1 , c 3 , u > 0 and c 2 ∈ R such that F q e (A) = c 1 + O(r

The relation (49) follows from F q e (A) = G A (0, 0; q) G A\{0} (1, 1; q) and the following proposition.

Although we can estimate the absolute value of the sum by the sum of the absolute values, the latter sum does not decay fast enough for us; we will have to take advantage of some cancellations in the sum. Let K = {x + iy ∈ Z + iZ : |x|, |y| < r A /10}. Using (40), we see that K ⊂ A. In particular, any loop l in L \ L A can be decomposed as

where ω − is l stopped at the first visit to ∂K. We can further decompose the walk as l = ω 1 ⊕ ω 2 ⊕ ω + , where ω 1 is ω − stopped at the last visit to {0, 1} before reaching ∂K. We now make a third decomposition. Let L 0 denote the set of loops in L \ L A as in the previous paragraph such that the last vertex of ω 1 is 0. Let L ′ 0 be the set of loops l ∈ L 0 such that ω 2 ∩ {2, . . . , k} ≠ ∅, where k = ⌊r A /10⌋. If l ∈ L ′ 0 , we write

where ω 3 is ω 2 stopped at the first visit to {2, . . . , k}. Let l̃ = ω 1 ⊕ ω̃ 3 ⊕ ω 4 ⊕ ω + , where ω̃ 3 is the reflection of ω 3 about the real axis; that is, the real jumps of ω̃ 3 are the real jumps of ω 3 , but the imaginary jumps of ω̃ 3 are the negatives of the imaginary jumps of ω 3 . Since ω 3 does not use the edge {0, 1}, we can see that q(ω̃ 3 ) = −q(ω 3 ) and hence q(l̃) = −q(l).

The estimate (50)
We will study the loop measure of O A , the set of loops that cross the zipper an odd number of times. We will first consider the case A = C n := C e n = {z ∈ Z 2 : |z| < e n }, and let O n = O C n . To establish the estimate (50) restricted to the sets C n , we consider the scaling limit of the random walk loop measure, the Brownian loop measure. The definition is similar to that for the random walk. We will start with a measure on rooted loops by giving the analog of m from Section 5.1. A (rooted) loop γ : [0, t γ ] → C is a continuous function with γ(0) = γ(t γ ). One important probability measure on loops is the Brownian bridge measure ν b , defined as the measure on Brownian paths B t , 0 ≤ t ≤ 1, conditioned so that B 0 = B 1 = 0. (This is conditioning on an event of probability zero, so some care needs to be taken, but it is well known how to make sense of this; indeed, there are numerous equivalent constructions.) If we want to specify a loop γ, we can write a triple (z, t γ , γ̃), where z is the root, t γ is the time duration, and γ̃ is a loop rooted at 0 of time duration one (obtained from γ by translation and Brownian scaling). The rooted Brownian loop measure can be defined as the measure on triples given by

area(dz) × (2πt 2 ) −1 dt × ν b (dγ̃).

The factor 1/(2πt 2 ) should be read as (1/(2πt)) · (1/t). The factor 1/(2πt) is the "probability that the Brownian motion is at the origin at time t"; more precisely, it is the transition density at time t evaluated at z = 0. The factor 1/t is the analog of the 1/|l| factor in the definition of m.
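To make the triple (z, t γ , γ̃) concrete, here is a minimal sampler (my own illustration, with an arbitrary discretization): the unit-duration bridge is built from a discrete Brownian path W via W t − tW 1 , and a rooted loop is recovered from a triple by translation and Brownian scaling.

```python
import math
import random

def sample_unit_bridge(n=1000, rng=None):
    """Sample a discretized 2-d Brownian bridge on [0, 1] with n steps,
    pinned to 0 at both ends, via the map W_t - t*W_1."""
    rng = rng or random.Random(0)
    dt = 1.0 / n
    w = [0j]
    for _ in range(n):
        w.append(w[-1] + complex(rng.gauss(0, math.sqrt(dt)),
                                 rng.gauss(0, math.sqrt(dt))))
    w1 = w[-1]
    return [w[k] - (k / n) * w1 for k in range(n + 1)]

def loop_from_triple(z, t, bridge):
    """Reconstruct a rooted loop gamma from the triple (root z, duration t,
    unit-duration bridge), using translation and Brownian scaling sqrt(t)."""
    return [z + math.sqrt(t) * b for b in bridge]
```

The reconstructed loop starts and ends at its root z, as a loop should.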
This gives the Brownian loop measure in all of C. For loops in a domain D we restrict the measure to such loops. This is an infinite, σ-finite measure, because the measure of small loops blows up. However, if D is a bounded domain and ε > 0, the measure of loops in D of diameter at least ε is finite (it tends to infinity as ε ↓ 0).
The amazing and useful fact about the Brownian loop measure is that it is conformally invariant, at least when viewed as a measure on unrooted loops (these are defined in the obvious way). In other words, if D is a domain and f : D → f (D) is a conformal transformation, then the image of the loop measure on D under f is the same as the loop measure on f (D). (One does need to worry about the parametrization: there is a time change, which is the same as that for the usual conformal invariance of Brownian motion.) In particular, if we consider the measure of loops in the disk of radius e n that are not contained in the disk of radius e n−1 but that have odd winding number about 0, then this value is independent of n. Indeed, it can be computed (we will not do it here), and the answer is 1/8.
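For discrete loops, the winding number about 0 can be computed by accumulating the turning angles of the edges. A small sketch (my own), together with the parity statement that a loop avoiding 0 crosses a ray from 0 an odd number of times exactly when its winding number is odd:

```python
import cmath
import math

def winding_number(loop):
    """Winding number about 0 of a closed polygonal loop, given as a list of
    complex vertices with loop[0] == loop[-1], none equal to 0.  Assumes no
    single edge sweeps an angle of pi or more about the origin."""
    total = 0.0
    for a, b in zip(loop, loop[1:]):
        total += cmath.phase(b / a)   # signed angle swept by this edge
    return round(total / (2 * math.pi))

def crosses_ray_odd(loop):
    """A loop avoiding 0 crosses a generic ray from 0 an odd number of
    times exactly when its winding number about 0 is odd."""
    return winding_number(loop) % 2 == 1
```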
Although we will not give the details, we will describe how to show that the random walk loop measure converges to the Brownian loop measure. We will couple the rooted random walk loop measure, which gives measure (2n) −1 4 −2n to each loop of length 2n, with the corresponding Brownian loop measure. We will use the fact that for one-dimensional walks, the probability of being at the origin at time 2n is 4 −n (2n)!/(n!) 2 ∼ (πn) −1/2 , and a well-known trick (the coordinates of the two-dimensional walk along the two diagonals are independent one-dimensional walks) gives for two-dimensional simple random walk P{S 2n = 0} = [4 −n (2n)!/(n!) 2 ] 2 ∼ (πn) −1 . Note that if q n = P{S 2n = 0} and q ′ n denotes the corresponding quantity for the Brownian loop measure, then |q n − q ′ n | ≤ c 1 n −4 . For each (z, n) we let (K z,n , K ′ z,n ) be coupled Poisson random variables with parameters q n , q ′ n , respectively, coupled so that P{K z,n ≠ K ′ z,n } ≤ |q n − q ′ n |. We let (K z,n , K ′ z,n ), z ∈ Z 2 , n ≥ 1, be independent. We also have a coupling of the Brownian bridge B t , 0 ≤ t ≤ n, and the random walk bridge S t , 0 ≤ t ≤ 2n, such that P{ max 0≤t≤n |B t − S 2t | ≥ c 2 log n} ≤ c 2 n −4 .
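The diagonal trick predicts that the number of two-dimensional walks of length 2n ending at the origin is binom(2n, n)², since each diagonal coordinate is an independent one-dimensional walk. This can be checked by brute force (my own illustration):

```python
from itertools import product
from math import comb

STEPS = (1, -1, 1j, -1j)   # E, W, N, S steps of simple random walk on Z^2

def returning_walks(length):
    """Count two-dimensional simple random walks of the given length that
    end back at the origin, by brute-force enumeration of step sequences."""
    return sum(1 for walk in product(STEPS, repeat=length) if sum(walk) == 0)
```

Dividing by 4^{2n} recovers P{S 2n = 0} = [binom(2n, n) 2^{−2n}]².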
We then use this to construct the coupling. Whenever a random walk and Brownian pair (z, n) occurs, we do the following:
• Choose from the (B, S) distribution for n.
• Let the random walk loop be S + z.
• Choose t ∈ [n − 3/8, n + 5/8] from the density c n t −2 .
• Scale B from time n to time t (this is not much of a change): W s = (t/n) 1/2 B sn/t , 0 ≤ s ≤ t.
• Let the Brownian loop be W + z + Y , where Y is a uniform random variable on the square of side length one centered at the origin.
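The third step can be done by inverse transform sampling: the density proportional to t −2 on [a, b] = [n − 3/8, n + 5/8] has CDF F (t) = (1/a − 1/t)/(1/a − 1/b). A minimal sketch (my own; the function name is hypothetical):

```python
def sample_duration(u, n):
    """Inverse-transform sample from the density proportional to t^(-2) on
    [a, b] = [n - 3/8, n + 5/8], where u is uniform on [0, 1].
    The CDF is F(t) = (1/a - 1/t)/(1/a - 1/b); we return F^{-1}(u)."""
    a, b = n - 0.375, n + 0.625
    return 1.0 / (1.0 / a - u * (1.0 / a - 1.0 / b))
```

Feeding in u = 0 and u = 1 returns the endpoints a and b, as it should.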
After the estimate is proved for O n , it is done for more general domains again using conformal invariance of the Brownian loop measure.

The estimate (51)
We first consider the denominator H A (a, b) = H p A (a, b). In this setup, this was considered in [9], where it was shown that there is an absolute constant c such that H A (a, b) = c (sin θ) −2 H A (0, a) H A (0, b) [1 + O(r −u A )], at least if sin θ ≥ r −u A (explicit values of c and u were given, but we will not use them). The way to think of this result is that the term H A (a, b) has three factors: one local factor at a measuring the probability of escaping the boundary there; a similar local factor at b; and one global factor that is a conformal invariant. Similarly, H A (0, a) has the local factor at a and a conformal invariant. Given this, one needs to estimate ratios of the form H q A /H p A .
For these ratios the "local factor" at a cancels, and the limit should be a conformally invariant quantity involving Brownian motion.
To see what the limit should be in our case, let us consider the continuous case. Let D be a bounded domain containing the origin and let λ(t) : 0 ≤ t ≤ 1 be a "zipper", that is, a simple curve with λ(0) = 0, b := λ(1) ∈ ∂D, λ[0, 1) ⊂ D.
Let a ∈ ∂D \ {b}, and let D̃ = D \ λ[0, 1]. An example would be λ the vertical line segment starting at 0, going downward, and stopping at the first visit to ∂D. Let f : D → 𝔻 be the conformal transformation onto the unit disk 𝔻 with f (0) = 0, f (a) = −1. Let λ̃ = f ◦ λ, which is a zipper in 𝔻 from 0 to ∂𝔻.
If z ∈ D̃ \ {0}, consider a curve B t , 0 ≤ t ≤ τ , with B 0 = z, B τ = a, and assume that 0 ∉ B[0, τ ]. The example we have in mind is B t an h-process, that is, Brownian motion "conditioned" to leave D at a. We want to consider (−1) J , where J is the number of times that B crosses the zipper λ. This does not quite make sense for curves such as Brownian motion, since there are infinitely many crossings, but we will make sense of it in terms of arguments. Define θ t by f (B t ) = |f (B t )| e 2iθ t . Note that θ 0 is only defined up to an additive multiple of π; we make an arbitrary choice of θ 0 but then require θ t to be continuous in t. This is well defined assuming the curve does not go through the origin. In this case θ τ is well defined, and θ τ − θ 0 is independent of the arbitrary choice of θ 0 . We then define (−1) J to be +1 if θ τ = 2kπ for some integer k and to be −1 if θ τ = (2k + 1)π for some k. We then set g(z) = E z a [(−1) J ], where we write E a to denote expectation with respect to the h-process corresponding to Brownian motion conditioned to leave D at a.
To compute g, we first note that g is conformally invariant, and so it suffices to compute it when D = 𝔻 and a = −1. Let l = [0, 1) denote the radial line antipodal to −1. By symmetry we can see that g(z) = 0 for z ∈ l, and hence, by the strong Markov property, g(z) is the probability that an h-process in 𝔻 toward −1 starting at z reaches −1 without hitting the antipodal line l. This can be written as

Here h denotes the Poisson kernel, which we will normalize so that h 𝔻 (0, −1) = 1. As z → 0, h 𝔻 (z, −1) = 1 + O(|z|). To compute h 𝔻\l (z, −1), it is somewhat easier to consider the upper half disk D + = 𝔻 ∩ H. Note that F (z) = z 2 takes D + conformally onto the slit disk 𝔻 \ l. A computation shows that h D + (z, i) = 4 Im(z) [1 + O(|z|)].
(One can see that it must be asymptotic to c Im(z) as z → 0 by considering the "gambler's ruin" estimate for the imaginary part of the Brownian motion.) The reader may note that the sin 3 term in our results consists of two factors: we have a sin 2 coming from H p A (a, b) and one extra sin coming from the ratio H q A /H p A .
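The half-disk computation can be sanity-checked numerically (my own check; the maps φ(z) = −(z + 1/z)/2 from D + to the upper half plane and M(w) = (w − i)/(w + i) from the half plane to the disk are my choice, not from the text):

```python
def h_half_disk(z):
    """Poisson kernel h_{D+}(z, i) for the upper half disk, normalized so
    that h_D(0, -1) = 1.  Computed via phi(z) = -(z + 1/z)/2 (which maps
    D+ onto the upper half plane with phi(i) = 0) composed with the
    Moebius map M(w) = (w - i)/(w + i) (half plane onto disk, M(0) = -1),
    using h_D(w, -1) = (1 - |w|^2)/|1 + w|^2 and the boundary-derivative
    factor |(M o phi)'(i)| = 2."""
    phi = -(z + 1 / z) / 2            # D+ -> upper half plane, phi(i) = 0
    g = (phi - 1j) / (phi + 1j)       # composed map D+ -> disk, sends i to -1
    return 2 * (1 - abs(g) ** 2) / abs(1 + g) ** 2
```

Near 0 the values track 4 Im(z), matching h D + (z, i) = 4 Im(z) [1 + O(|z|)].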