Functional integral representations for self-avoiding walk

We give a survey and unified treatment of functional integral representations for both simple random walk and some self-avoiding walk models, including models with strict self-avoidance, with weak self-avoidance, and a model of walks and loops. Our representation for the strictly self-avoiding walk is new. The representations have recently been used as the point of departure for rigorous renormalization group analyses of self-avoiding walk models in dimension 4. For the models without loops, the integral representations involve fermions, and we also provide an introduction to fermionic integrals. The fermionic integrals are in terms of anti-commuting Grassmann variables, which can be conveniently interpreted as differential forms.


Introduction
The use of random walk representations for functional integrals in mathematical physics has a long history going back to Symanzik [25], who showed how such representations can be used to study quantum field theories. Representations of this type were exploited systematically in [1,4,5,11,12]. It is also possible to use such representations in reverse, namely to rewrite a random walk problem in terms of an equivalent problem for a functional integral.
Our goal in this paper is to provide an introductory survey of functional integral representations for some problems connected with self-avoiding walks, with both strict and weak self-avoidance. In particular, we derive a new representation for the strictly self-avoiding walk. These representations have proved useful recently in the analysis of various problems concerning 4-dimensional self-avoiding walks, by providing a setting in which renormalization group methods can be applied. This has allowed for a proof of |x|^{-2} decay of the critical Green function and existence of a logarithmic correction to the end-to-end distance for weakly self-avoiding walk on a 4-dimensional hierarchical lattice [3,6,7]. It is also the basis for work in progress on the critical Green function for weakly self-avoiding walk on Z^4 and a particular (spread-out) model of strictly self-avoiding walk on Z^4 [10]. In addition, the renormalization group trajectory for a specific model of weakly self-avoiding walk on Z^3 (one with upper critical dimension 3 + ε) has been constructed in this context in [20]. In this paper, we explain and derive the representations, but we make no attempt to analyze them here, leaving those details to [3,6,7,10,20].
The representations we will discuss can be divided into two classes: purely bosonic, and mixed bosonic-fermionic. The bosonic representations will be the most familiar to probabilists, as they are in terms of ordinary Gaussian integrals. They represent simple random walks, and also systems of self-avoiding and mutually-avoiding walks and loops.
The mixed bosonic-fermionic representations eliminate the loops, leaving only the self-avoiding walk. They involve Gaussian integrals with anticommuting Grassmann variables. A classic reference for Grassmann integrals is the text by Berezin [2], and there is a short introduction in [23,Appendix B]. Such integrals, although familiar in physics, are less so in probability theory. It turns out, however, that these more exotic integrals share many features in common with ordinary Gaussian integrals. One of our goals is to provide a minimal introduction to these integrals, for probabilists.
Representations for self-avoiding walks go back to an observation of de Gennes [13]. The N-vector model has a random walk representation given by a self-avoiding walk in a background of mutually-avoiding self-avoiding loops, with every loop contributing a factor N. This led de Gennes to consider the limit N → 0, in which closed loops no longer contribute, leading to a representation for the self-avoiding walk model as the N = 0 limit of the N-vector model (see also [18, Section 2.3]). Although this idea has been very useful in physics, it has been less productive within mathematics, because N is a natural number and so it is unclear how to understand a limit N → 0 in a rigorous manner.
On the other hand, the notion was developed in [19,21] that while an N-component boson field φ contributes a factor N to each closed loop, an N-component fermion field ψ contributes a complementary factor −N. The net effect is to associate zero to each closed loop. We give a concrete demonstration of this effect in Section 5.2.1 below. This provides a way to realize de Gennes' idea, without any nonrigorous limit.
Moreover, it was pointed out by Le Jan [16,17] that the anticommuting variables can be represented by differential forms: the fermion field can be regarded as nothing more than the differential of the boson field. This observation was further developed in [8,6], and we will follow the approach based on differential forms in this paper. In this approach, the anticommuting nature of fermions is represented by the anticommuting wedge product for differential forms. Thus the world of Grassmann variables, initially mysterious, can be replaced by differential forms, objects which are fundamental in differential geometry in the way that random variables are fundamental in probability.
We have attempted to keep this paper self-contained. In particular, our discussion of differential forms for the representations involving fermions is intended to be introductory.
The rest of the paper is organized as follows. In Section 2, we derive integral representations for simple random walk, and for a model of a self-avoiding walk and self-avoiding loops all of which are mutually avoiding. These are purely bosonic representations, without anticommuting fermionic variables. In Section 3, we define the self-avoiding walk models (without loops). Their representations are derived in Section 5, using the fermionic integration introduced in Section 4. The mixed bosonic-fermionic integrals are examples of supersymmetric field theories. Although an appreciation of this fact is not necessary to understand the representations, in Section 6 we briefly discuss this important connection.

Gaussian integrals
By "bosonic representations" we mean representations for random walk models in terms of ordinary Gaussian integrals. For our purposes, these integrals are in terms of a two-component field (u_x, v_x)_{x∈{1,...,M}}, which is most conveniently represented by the complex pair (φ_x, φ̄_x), where

  φ_x = u_x + iv_x,   φ̄_x = u_x − iv_x.   (2.1)

The differentials dφ_x, dφ̄_x are given by

  dφ_x = du_x + i dv_x,   dφ̄_x = du_x − i dv_x,

and their product dφ̄_x dφ_x is given by

  dφ̄_x dφ_x = (du_x − i dv_x)(du_x + i dv_x) = 2i du_x dv_x,   (2.3)

where we adopt the convention that differentials are multiplied together with the anticommutative wedge product; in particular du_x du_x and dv_x dv_x vanish and do not appear in the above product. This anticommutative product will play a central role when we come to fermions in Section 4, but until then plays no role beyond the formula (2.3). We are using the letter "x" as index for the field in anticipation of the fact that in our representations the field will be indexed by the space in which our random walks take steps.

We now briefly review some elementary properties of Gaussian measures. Let C be an M × M complex matrix. We assume that C has positive Hermitian part, i.e., Σ_{x,y=1}^{M} φ̄_x (C_{x,y} + C̄_{y,x}) φ_y > 0 for all nonzero φ ∈ C^M. Let A = C^{−1}. We write dμ_C for the Gaussian measure on R^{2M} with covariance C, namely

  dμ_C(φ, φ̄) = (1/Z_C) e^{−φAφ̄} Π_{x=1}^{M} du_x dv_x,

where φAφ̄ = Σ_{x,y=1}^{M} φ_x A_{x,y} φ̄_y, and where Z_C is the normalization constant

  Z_C = ∫_{R^{2M}} e^{−φAφ̄} Π_{x=1}^{M} du_x dv_x.

We will need the value of Z_C given in the following lemma.

Lemma 2.1. If A has positive Hermitian part, then

  ∫_{R^{2M}} e^{−φAφ̄} Π_{x=1}^{M} du_x dv_x = π^M (det A)^{−1},   (2.6)

i.e., Z_C = π^M det C.
Proof. Consider first the case where C, and hence A, is Hermitian. In this case, there is a unitary matrix U and a diagonal matrix D such that A = U^{−1}DU.
For the general case, we write A(z) = G + izH with G = ½(A + A†), H = (1/2i)(A − A†), so that A(1) = A. Since φ(iH)φ̄ is imaginary, when G is positive definite the integral in (2.6) converges and defines an analytic function of z in a neighborhood of the real axis. Furthermore, for z small and purely imaginary, A(z) is Hermitian and positive definite, and hence (2.6) holds in this case. Since (det A(z))^{−1} is a meromorphic function of z, (2.6) follows from the uniqueness of analytic extension.
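As a concrete check of the normalization, for M = 1 with covariance C = c > 0 the value Z_C = π^M det C reduces to the classical Gaussian integral over R². The following minimal numerical sketch (parameter values are ours) verifies this:

```python
import math

# Numerical sketch of the normalization for M = 1: with C = c > 0 the value
# Z_C = pi^M det C reduces to the classical Gaussian integral over R^2.
c = 0.7                      # covariance; A = C^{-1} = 1/c
h = 0.001                    # grid spacing for the Riemann sum
# One-dimensional sum for int e^{-t^2/c} dt; the integral over R^2 factorizes.
I = sum(math.exp(-((k * h) ** 2) / c) for k in range(-8000, 8001)) * h
Z = I * I                    # int_{R^2} e^{-(u^2+v^2)/c} du dv
assert abs(Z - math.pi * c) < 1e-6
```

The Riemann sum converges extremely fast here because the integrand is smooth and rapidly decaying.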
A basic tool is the integration by parts formula given in the following lemma. The derivative appearing in its statement is defined by

  ∂/∂φ_x = ½ (∂/∂u_x − i ∂/∂v_x).

With ∂/∂φ̄_x defined to be its conjugate, integration by parts leads to the equations

  ∫ φ̄_a F dμ_C = Σ_{x∈Λ} C_{a,x} ∫ (∂F/∂φ_x) dμ_C,   (2.10)

and its complex conjugate, where F is any C^1 function such that both sides are integrable.
Proof. Let A = C^{−1}. We begin with the integral on the right-hand side, and make the abbreviation dφ̄dφ = dφ̄_1 dφ_1 ⋯ dφ̄_M dφ_M. By (2.8), we can use standard integration by parts to move the derivative from one factor to the other, and with (2.9) this gives (2.11). Now we multiply by C_{a,x}, sum over x, and use C = A^{−1}, to complete the proof.

The equations

  ∫ φ_b dμ_C = 0,   ∫ φ̄_a dμ_C = 0,   ∫ φ̄_a φ_b dμ_C = C_{a,b}   (2.12)

are simple consequences of Lemma 2.2. The last equality is a special case of Wick's theorem, which provides a formula for the calculation of arbitrary moments of the Gaussian measure. We will only need the following special case of Wick's theorem, in which the Gaussian expectation of a product Π_{r=1}^{p} φ̄_{i_r} Π_{s=1}^{p} φ_{j_s} is evaluated as the permanent of the submatrix of C with rows i_1, . . . , i_p and columns j_1, . . . , j_p.
Proof. This follows by repeated use of integration by parts.
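For M = 1 with covariance c, the permanent formula reduces to ∫ |φ|^{2p} dμ_C = p! c^p (the permanent of the p × p matrix with all entries c). A numerical sketch via the radial integral (parameter choices are ours):

```python
import math

# For M = 1 with covariance c, the Wick/permanent formula reduces to
# int |phi|^{2p} dmu_C = p! c^p (the permanent of the p x p all-c matrix).
# We check this by computing the radial integral numerically.
c, p = 0.5, 3
h = 0.0005
# int |phi|^{2p} dmu_C = (2/c) * int_0^infty r^{2p+1} e^{-r^2/c} dr
integral = sum((k * h) ** (2 * p + 1) * math.exp(-((k * h) ** 2) / c)
               for k in range(1, 20001)) * h
moment = integral * 2.0 / c
assert abs(moment - math.factorial(p) * c ** p) < 1e-6
```

Here p! c^p = 6 · (0.5)^3 = 0.75, matching the Riemann sum to high accuracy.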

Simple random walk
Our setting throughout the paper is a fixed finite set Λ = {1, 2, . . . , M} of cardinality M ≥ 1. Given points a, b ∈ Λ, a walk ω from a to b is a sequence of points x_0 = a, x_1, x_2, . . . , x_n = b, for some n ≥ 0. We write |ω| for the length n of ω. Sometimes it is useful to regard ω as consisting of the directed edges (x_{i−1}, x_i), 1 ≤ i ≤ n, rather than vertices. Let W_{a,b} denote the set of all walks from a to b, of any length. Let J be a Λ × Λ complex matrix with zero diagonal part (i.e., J_{x,x} = 0 for all x ∈ Λ). Let D be a diagonal matrix with nonzero entries D_{x,x} = d_x ∈ C. We assume that D − J is diagonally dominant; this means that

  max_{x∈Λ} Σ_{y∈Λ} |J_{x,y}| / |d_x| < 1.   (2.14)

Given ω ∈ W_{a,b}, let

  w(ω) = Π_{e∈ω} J_e Π_{i=0}^{|ω|} d_{ω(i)}^{−1}.   (2.15)

Here we regard ω as a set of labeled edges e = (ω(i−1), ω(i)) (the empty product is 1 if |ω| = 0). The simple random walk two-point function is defined by

  G^{srw}_{a,b} = Σ_{ω∈W_{a,b}} w(ω).   (2.16)

The assumption that D − J is diagonally dominant ensures that the sum in (2.16) converges absolutely. The following theorem, which identifies G^{srw}_{a,b} as the matrix element ((D − J)^{−1})_{a,b}, and, when D − J has positive Hermitian part, as the Gaussian integral ∫ φ̄_a φ_b dμ_C with C = (D − J)^{−1}, was proved in [5].
Proof. The sum in (2.16) can be evaluated explicitly as the Neumann series

  G^{srw}_{a,b} = Σ_{n=0}^{∞} (D^{−1}(JD^{−1})^{n})_{a,b}.

It is easily verified that D − J applied to the right-hand side gives the identity, and hence G^{srw}_{a,b} = ((D − J)^{−1})_{a,b}. When D − J has positive Hermitian part, we may use (2.12) to complete the proof.
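The identity G^{srw}_{a,b} = ((D − J)^{−1})_{a,b} is easy to test numerically by organizing the sum over walks by length. A sketch with a hand-chosen diagonally dominant example (all values are ours):

```python
# Sketch: compare the walk sum (2.16), organized by walk length, with the
# matrix inverse (D - J)^{-1}, for a hand-chosen diagonally dominant example.
d = [3.0, 4.0, 5.0]
J = [[0.0, 1.0, 0.5],
     [1.0, 0.0, 1.5],
     [0.5, 1.5, 0.0]]
M, a, b = 3, 0, 1

# s[y] = sum of weights of walks from a to y of the current length; each
# walk weight is (prod of J over its steps) / (prod of d over its vertices).
s = [1.0 / d[x] if x == a else 0.0 for x in range(M)]
G_sum = s[b]
for _ in range(200):
    s = [sum(s[x] * J[x][y] for x in range(M)) / d[y] for y in range(M)]
    G_sum += s[b]

# Reference value: solve (D - J) g = e_b; then ((D - J)^{-1})_{a,b} = g[a].
A = [[(d[x] if x == y else 0.0) - J[x][y] for y in range(M)] for x in range(M)]
rhs = [1.0 if x == b else 0.0 for x in range(M)]
for i in range(M):                     # forward elimination (no pivoting needed)
    for j in range(i + 1, M):
        f = A[j][i] / A[i][i]
        for k in range(i, M):
            A[j][k] -= f * A[i][k]
        rhs[j] -= f * rhs[i]
for i in reversed(range(M)):           # back substitution
    rhs[i] = (rhs[i] - sum(A[i][k] * rhs[k] for k in range(i + 1, M))) / A[i][i]

assert abs(G_sum - rhs[a]) < 1e-10
```

Diagonal dominance makes the series converge geometrically, so 200 terms are far more than enough.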
Next, we suppose that d_x > 0 and J_{x,y} ≥ 0, and give two alternate representations for G^{srw}_{a,b} in terms of continuous-time Markov chains. For the first, which appeared in [11], we consider the continuous-time Markov chain X defined as follows. The state space of X is Λ ∪ {∂}, where ∂ is an absorbing state called the cemetery. When X arrives at state x it waits for an Exp(d_x) holding time and then jumps to y with probability π_{x,y} = d_x^{−1} J_{x,y}, and jumps to the cemetery with probability π_{x,∂} = 1 − Σ_{y∈Λ} d_x^{−1} J_{x,y}. The holding times are independent of each other and of the jumps. Let ζ denote the time at which the process arrives in the cemetery. Note that if D − J is diagonally dominant then ζ < ∞ with probability 1, and by right-continuity of the sample paths the last state visited by X before arriving in the cemetery is X(ζ^−). For x ∈ Λ, let L_x denote the total (continuous) time spent by X at x. We denote the expectation for X, started from a ∈ Λ, by E_a.
Proof. The Markov chain X is equivalent to a discrete-time Markov chain Y which jumps with the above transition probabilities, together with a sequence σ_0, σ_1, . . . of exponential holding times. Let η denote the discrete random time after which the process Y jumps to ∂. By partitioning on the events {η = n}, noting that η is almost surely finite, we see that the right-hand side of (2.20) is equal to (2.21). Given the sequence Y_0, Y_1, . . . , Y_n, the σ_i are independent Exp(d_{Y_i}) random variables, and hence (2.22) holds. If we then take the expectation with respect to the Markov chain Y, we find that (2.21) is equal to the sum (2.23) over n ≥ 0 and over walks ω ∈ W_{a,b} with |ω| = n of the corresponding walk weights, which is the desired result.
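The exact constants in the statement of Theorem 2.5 are garbled in this extraction; one natural reading, consistent with the proof above, is G^{srw}_{a,b} = E_a[1{X(ζ^−) = b}/(d_b π_{b,∂})]. A Monte Carlo sketch under that assumption, on a two-state example where ((D − J)^{−1})_{0,1} = 1/3 exactly:

```python
import random

# Monte Carlo sketch of the Markov chain representation, under the assumed
# reading  G^srw_{a,b} = E_a[ 1{X(zeta-) = b} / (d_b pi_{b,cemetery}) ],
# checked on a two-state example where ((D - J)^{-1})_{0,1} = 1/3.  Only the
# embedded jump chain is simulated, since the estimator ignores holding times.
random.seed(7)
d = [2.0, 2.0]
J = [[0.0, 1.0], [1.0, 0.0]]
a, b = 0, 1
pi_cem = [1.0 - sum(J[x]) / d[x] for x in range(2)]   # killing prob = 0.5 each

hits, n_samples = 0, 200000
for _ in range(n_samples):
    x = a
    while random.random() >= pi_cem[x]:   # with prob 1/2 jump, else die at x
        x = 1 - x                         # the only possible jump target here
    hits += (x == b)
estimate = hits / n_samples / (d[b] * pi_cem[b])
assert abs(estimate - 1.0 / 3.0) < 0.01
```

With 200000 samples the standard error is about 0.001, so the tolerance above is very generous.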
Next, we derive a third representation for G^{srw}_{a,b}(v), which is more general than Theorem 2.5 as it does not require diagonal dominance of D − J. This representation was obtained in [3] using the Feynman–Kac formula, but we give a different proof based on Theorem 2.5. The representation involves a second continuous-time Markov process, with generator D − J, where we set d_x = Σ_{y∈Λ} J_{x,y} and assume d_x > 0 for each x ∈ Λ. This process is like the one described above, but has no cemetery site and continues for all time. Let E_a denote the expectation for this process started at a ∈ Λ, and let E^{ε}_a denote the expectation for the approximating Markov process defined in terms of a small killing rate ε > 0. We partition on the values of ζ, the time of transition to ∂, and for δ > 0 compare the corresponding events in the two processes; the probability of their symmetric difference is small. By the Markov property and the fact that E^{ε}_a converges to E_a on bounded functions of {X(t) : 0 ≤ t ≤ T}, since the transition probabilities and the densities of the holding times σ_i converge to their analogues in E_a, we obtain (2.25) by dominated convergence.
The two representations for G^{srw}_{a,b} in Theorems 2.5–2.6 show that the right-hand sides of (2.20) and (2.25) are equal. The following proposition, whose conclusion is (2.36), generalizes this equality.

Proof. Let S be a Borel subset of [0, ∞)^M, and let χ_S denote the characteristic function of S. We define μ(S) and ν(S) by evaluating the left- and right-hand sides of (2.36) on F = χ_S, respectively. With these definitions, μ and ν are finite Borel measures. Together, Theorems 2.5–2.6 establish (2.36) for the special case F(t) = e^{−Σ_{x∈Λ} v_x t_x}. This proves (2.36) in the general case, since finite measures are characterized by their Laplace transforms. The hypothesis on the growth of F assures its integrability.

Self-avoiding walk with loops
Next, we derive a representation for a model of a self-avoiding walk in a background of loops. This requires the introduction of some terminology and notation.
Given not necessarily distinct points a, b ∈ Λ, a self-avoiding walk ω from a to b is a sequence x_0 = a, x_1, . . . , x_n = b in which no vertex is repeated, except that x_0 = x_n when a = b. In other words, for a ≠ b, ω is a non-intersecting path from a to b on the complete graph on M vertices, and for a = b it is non-intersecting except at a = b. We again write |ω| for the length n of ω, and sometimes regard ω as consisting of directed edges rather than vertices. Let S_{a,b} denote the set of all self-avoiding walks from a to b. For X ⊂ Λ, we write S_{a,b}(X) for the subset of S_{a,b} consisting of walks with x_0 = a, x_n = b and x_1, x_2, . . . , x_{n−1} ∈ X. A loop γ is an unrooted directed cycle (consisting of distinct vertices) in the complete graph, regarded sometimes as a cyclic list of vertices and sometimes as directed edges. We include the self-loop which joins a vertex to itself by a single edge, as a possible loop (see Remark 2.9 below). We write L for the set of all loops. We write Γ for a subgraph of Λ consisting of mutually-avoiding loops, i.e., Γ = {γ_1, . . . , γ_m} with each γ_i ∈ L and γ_i ∩ γ_j = ∅ (as sets of vertices) for i ≠ j. We write G for the set of all such Γ (including Γ = ∅), and G(X) for the subset of G which uses only vertices in X ⊂ Λ. We write |γ| for the length of γ, and |Γ| = Σ_{i=1}^{m} |γ_i| for the total length of loops in Γ. Given a Λ × Λ real matrix C, ω ∈ W_{a,b} and Γ ∈ G, we assign the weight Π_{e∈ω} C_e Π_{γ∈Γ} Π_{e∈γ} C_e, where here we regard self-avoiding walks and loops as collections of directed edges and write, e.g., e = (ω(i − 1), ω(i)). An empty product is equal to 1. The two-point function G^{loop}_{a,b} in (2.39) is defined by summing these weights over self-avoiding walks ω ∈ S_{a,b} and loop configurations Γ ∈ G whose loops avoid ω and each other. The representation for G^{loop}_{a,b} is elementary and we derive it now.
Proof. To prove (2.40), we write F = φ_b Π_{x∈X} (1 + φ_x φ̄_x) and apply the integration by parts formula (2.10), which replaces φ̄_a F by Σ_{v∈Λ} C_{a,v} ∂F/∂φ_v. The first step in the walk ω is (a, v). If the derivative acts on a factor in the product over x, then it replaces that factor by φ̄_v, and the procedure can be iterated until the derivative acts on φ_b, in which case ω terminates. The result is (2.40). For (2.41), we expand the product and then evaluate the integral on the right-hand side using Lemma 2.3; this gives (2.41). The representation (2.42) follows from the combination of (2.40)–(2.41).
Remark 2.9. Self-loops can be eliminated in the representation by a suitable replacement of the right-hand side of (2.42), using a modification of the above proof.

Self-avoiding walk
We define the two-point function

  G^{saw}_{a,b} = Σ_{ω∈S_{a,b}} Π_{e∈ω} C_e.   (3.1)

When a = b, the walks are self-avoiding except for the fact that the walk begins and ends at the same site. In this case, there is, in particular, a contribution due to the one-step walk that steps from a to a, which has weight C_{a,a} ≠ 0. The only new result in this paper is the integral representation for G^{saw}_{a,b}. The representation for the loop model (2.39) is easier than for (3.1), as (2.39) is in terms of a bosonic (ordinary) Gaussian integral. To eliminate the loops and obtain a representation for the walk model (3.1), we will need fermionic (Grassmann) integrals involving anticommuting variables. The necessary mathematical background for this is developed in Section 4, and the representation is stated and derived in Section 5.2. This representation is the point of departure for the analysis of the 4-dimensional self-avoiding walk in [10], for a convenient particular choice of C.
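With (3.1) read as the sum over self-avoiding walks of the product of C along their steps, the two-point function can be enumerated directly on small complete graphs. A sketch (the all-ones choice of C, which makes G^{saw} count walks, is ours):

```python
import itertools

# Sketch: with (3.1) read as G^saw_{a,b} = sum over omega in S_{a,b} of
# prod_{e in omega} C_e, enumerate all self-avoiding walks on the complete
# graph with M = 4 vertices.  Taking C to be the all-ones matrix makes
# G^saw count the walks (an illustrative choice).
M, a, b = 4, 0, 3
C = [[1.0] * M for _ in range(M)]

G_saw, count = 0.0, 0
others = [x for x in range(M) if x not in (a, b)]
for k in range(len(others) + 1):
    for mid in itertools.permutations(others, k):
        path = (a,) + mid + (b,)
        w = 1.0
        for i in range(len(path) - 1):
            w *= C[path[i]][path[i + 1]]
        G_saw += w
        count += 1

# On K_4 there are 1 + 2 + 2 = 5 self-avoiding walks from a to b (a != b).
assert count == 5
assert abs(G_saw - 5.0) < 1e-12
```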

Weakly self-avoiding walk
The two-point functions (2.39) and (3.1) are for strictly self-avoiding walks and loops. We also consider the continuous-time weakly self-avoiding walk, which is defined as follows.
Let D have diagonal entries d_x > 0, let J have zero diagonal entries and J_{x,y} ≥ 0, and suppose that D − J is diagonally dominant. Let X and E_a be the continuous-time Markov process and corresponding expectation, as in Theorem 2.5. In particular, the process dies at the random time ζ at which it makes a transition to the cemetery state. The local time at x is given by L_x = ∫_0^∞ I[X(s) = x] ds (note that the integral effectively terminates at ζ < ∞). By definition,

  Σ_{x∈Λ} L_x^2 = ∫_0^ζ ∫_0^ζ I[X(s) = X(t)] ds dt,

so Σ_{x∈Λ} L_x^2 is a measure of the amount of self-intersection of X up to time ζ. The continuous-time weakly self-avoiding walk two-point function (3.4) is defined in terms of the expectation of e^{−g Σ_{x∈Λ} L_x^2}, where g > 0, and λ is a parameter (possibly negative) which is chosen in such a way that the integral converges. In (3.4), self-intersections are suppressed by the factor exp[−g Σ_{x∈Λ} L_x^2]. We will derive a representation for (3.4) in Section 5.1. It follows from Proposition 2.7 that there is also an alternate representation in terms of the process of Theorem 2.6. In the homogeneous case, in which d_x − Σ_{y∈Λ} J_{x,y} is equal to a constant a independent of x, the second exponential in that representation can be written as e^{−λ′T}, where λ′ = λ + a. This representation is the starting point for the analysis of the weakly self-avoiding walk on a 4-dimensional hierarchical lattice in [3,6,7], on Z^4 in [10], and for a model on Z^3 in [20].
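The identity Σ_x L_x² = ∫∫ 1{X(s) = X(t)} ds dt can be checked directly on any piecewise-constant trajectory. A small sketch with a hand-chosen path (states and durations are arbitrary):

```python
from collections import Counter

# Sketch: for a piecewise-constant trajectory, sum_x L_x^2 coincides with the
# double integral int int 1{X(s)=X(t)} ds dt, here approximated by a midpoint
# Riemann sum.  The path (state, holding time) is hand-chosen for illustration.
path = [(0, 0.5), (1, 1.2), (0, 0.3), (2, 0.7)]

L = {}
for x, dt in path:
    L[x] = L.get(x, 0.0) + dt
sum_L2 = sum(t * t for t in L.values())

def state_at(s):
    acc = 0.0
    for x, dt in path:
        acc += dt
        if s < acc:
            return x
    return path[-1][0]

h = 1e-3
T = sum(dt for _, dt in path)
counts = Counter(state_at((i + 0.5) * h) for i in range(round(T / h)))
I = h * h * sum(c * c for c in counts.values())   # discretized double integral
assert abs(I - sum_L2) < 0.02
```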

Gaussian integrals with fermions
In this section, we review some standard material about Gaussian integrals which incorporate anticommuting Grassmann variables. We realize these Grassmann variables as differential forms.

Differential forms
We recall and extend the formalism introduced in Section 2. Let Λ = {1, . . . , M} be a finite set of cardinality M. Let u_1, v_1, . . . , u_M, v_M be standard coordinates on R^{2M}, so that du_1 ∧ dv_1 ∧ · · · ∧ du_M ∧ dv_M is the standard volume form on R^{2M}, where ∧ denotes the usual anticommuting wedge product (see [22, Chapter 10] for an introduction). We will drop the wedge from the notation and write simply du_i dv_j in place of du_i ∧ dv_j. The one-forms du_i, dv_j generate the Grassmann algebra of differential forms on R^{2M}. A form which is a function of u, v times a product of p differentials is said to have degree p, for p ≥ 0. The integral of a differential form over R^{2M} is defined to be zero unless the form has degree 2M. A form K of degree 2M can be written as K = f(u, v) du_1 dv_1 · · · du_M dv_M, and we define

  ∫ K = ∫_{R^{2M}} f(u, v) du_1 dv_1 · · · du_M dv_M,

where the right-hand side is the usual Lebesgue integral of f over R^{2M}. We again complexify by setting φ_x = u_x + iv_x, φ̄_x = u_x − iv_x and dφ_x = du_x + i dv_x, dφ̄_x = du_x − i dv_x, for x ∈ Λ. Since the wedge product is anticommutative, the following pairs all anticommute for every x, y ∈ Λ: dφ_x and dφ_y, dφ_x and dφ̄_y, dφ̄_x and dφ̄_y. Given an M × M matrix A, we write φAφ̄ = Σ_{x,y∈Λ} φ_x A_{x,y} φ̄_y. As in (2.3),

  dφ̄_x dφ_x = 2i du_x dv_x.

The integral of a function f(φ, φ̄) (a zero form) with respect to Π_{x∈Λ} dφ̄_x dφ_x is thus given by (2i)^M times the integral of f(u + iv, u − iv) over R^{2M}. Note that the product over x can be taken in any order, since each factor dφ̄_x dφ_x has even degree (namely degree two). To simplify notation, it is convenient to introduce

  ψ_x = (2πi)^{−1/2} dφ_x,   ψ̄_x = (2πi)^{−1/2} dφ̄_x,

where we fix a choice of the square root and use this choice henceforth. Then

  ψ̄_x ψ_x = (1/2πi) dφ̄_x dφ_x = (1/π) du_x dv_x.

Given any matrix A, the action is the even form defined by

  S_A = φAφ̄ + ψAψ̄,   where ψAψ̄ = Σ_{x,y∈Λ} ψ_x A_{x,y} ψ̄_y.

In the special case A_{u,v} = δ_{u,x} δ_{x,v}, S_A becomes the form τ_x defined by

  τ_x = φ_x φ̄_x + ψ_x ψ̄_x.   (4.6)

Let K = (K_j)_{j∈J} be a collection of forms. When each K_j is a sum of forms of even degree, we say that K is even. Let K_j^{(0)} denote the degree-zero part of K_j.
Given a C^∞ function F : R^J → C, we define F(K) by its power series about the degree-zero part of K, i.e.,

  F(K) = Σ_α (1/α!) F^{(α)}(K^{(0)}) (K − K^{(0)})^α.   (4.7)

Here α is a multi-index, with α! = Π_{j∈J} α_j!, and (K − K^{(0)})^α = Π_{j∈J} (K_j − K_j^{(0)})^{α_j}. Note that the summation terminates as soon as Σ_{j∈J} α_j exceeds M, since higher-degree forms vanish, and that the order of the product on the right-hand side is irrelevant when K is even. For example, e^{−S_A} is defined by this power series. Because the formal power series of a composition of two functions is the same as the composition of the two formal power series, we may regard e^{−S_A} either as a function of the single form S_A or of the M^2 forms φ_x φ̄_y + (1/2πi) dφ_x dφ̄_y. The same result is obtained for e^{−S_A} in either case.

Gaussian integrals
We refer to the integral ∫ e^{−S_A} K as the mixed bosonic-fermionic Gaussian expectation of K, or, more briefly, as a mixed expectation. The following proposition shows that if K is a product of a zero form and factors of ψ and ψ̄ then the mixed expectation factorizes. Moreover, if K is a zero form then the mixed expectation is just the usual Gaussian expectation of K, and if K is a product of factors of ψ and ψ̄ then its expectation is a determinant. It also shows that e^{−S_A} is self-normalizing in the sense that it is equal to 1 without any normalization required. The determinant in (4.9) appears also, e.g., in [23, Lemma B.7], in a related purely fermionic context and with a different proof. In (4.9), I_f = ∫ f dμ_C(φ, φ̄), and C_{i_1,...,i_p; j_1,...,j_p} is the p × p matrix whose r, s element is C_{i_r,j_s} when p ≥ 1; the determinant is replaced by 1 when p = 0. In particular,

  ∫ e^{−S_A} = 1.   (4.10)

Proof. We first note that if p ≠ q then no form of degree 2M can be obtained by expanding e^{−ψAψ̄} F and the integral vanishes. Thus we assume p = q. Let i = (i_1, . . . , i_p) and j = (j_1, . . . , j_p). The tensor product A^{⊗p} is a linear operator on V^{⊗p} defined by the matrix elements (4.12). By definition, (4.8), and the anticommutation relation ψ_{k_l} ψ̄_{k_l} = −ψ̄_{k_l} ψ_{k_l}, we obtain (4.14). By antisymmetry, for a nonzero contribution, k_1, . . . , k_{M−p}, i_1, . . . , i_p must be a permutation of Λ, as must be k_1, . . . , k_{M−p}, j_1, . . . , j_p. In particular, j_1, . . . , j_p must be a permutation of i_1, . . . , i_p; let ε_{i,j} be the sign of this permutation (and equal to zero if it is not a permutation). Then we can rearrange the above to obtain (4.15). We insert (4.12) on the right-hand side and again use antisymmetry and then Lemma 2.1 to obtain (4.16). When p = 0 the above calculations give B = I_f, as required. For p ≥ 1, we use the fact that C^{⊗p} is the inverse of A^{⊗p} to obtain (4.17). The sum on the right-hand side is the determinant det C_{k_1,...,k_p; j_1,...,j_p}, as required.
In the Gaussian integral in the above proposition, the fermionic part ψAψ̄ of the action gives rise to a factor det A, while the bosonic part φAφ̄ gives rise to the reciprocal of this determinant, providing the cancellation that produces the self-normalization property (4.10).
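The purely fermionic factor det A can be seen concretely in a finite Grassmann algebra. The following minimal sketch uses an encoding of our own (elements are dicts mapping sorted tuples of generator indices to coefficients; the (2πi)^{−1/2} normalization of ψ is absorbed into the generators), and verifies that the top-degree coefficient of e^{−ψ̄Aψ} is det A:

```python
import itertools
import math

# Minimal Grassmann-algebra sketch: verify that the coefficient of the
# top monomial psi_1 psibar_1 ... psi_M psibar_M in e^{-psibar A psi}
# equals det A.  Conventions and names here are our own illustration.
M = 3
A = [[2.0, 0.3, -0.5],
     [0.1, 1.5, 0.7],
     [-0.2, 0.4, 1.0]]

# Generators: psi_k -> index 2k, psibar_k -> index 2k+1, for k = 0..M-1.

def canon(seq):
    """Sort generator indices; return (sign, tuple), or (0, None) if repeated."""
    seq, sign, swapped = list(seq), 1, True
    while swapped:
        swapped = False
        for i in range(len(seq) - 1):
            if seq[i] > seq[i + 1]:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                sign, swapped = -sign, True
    return (0, None) if len(set(seq)) < len(seq) else (sign, tuple(seq))

def mul(f, g):
    """Wedge product of two algebra elements."""
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            s, m = canon(m1 + m2)
            if s:
                out[m] = out.get(m, 0.0) + s * c1 * c2
    return out

# -S = -sum_{i,j} psibar_i A_ij psi_j, stored monomial by monomial.
negS = {}
for i in range(M):
    for j in range(M):
        s, m = canon((2 * i + 1, 2 * j))       # (psibar_i, psi_j)
        negS[m] = negS.get(m, 0.0) - s * A[i][j]

# e^{-S} = sum_k (-S)^k / k!, terminating at k = M (higher powers vanish).
expS, power = {(): 1.0}, {(): 1.0}
for k in range(1, M + 1):
    power = mul(power, negS)
    for m, c in power.items():
        expS[m] = expS.get(m, 0.0) + c / math.factorial(k)

top = expS.get(tuple(range(2 * M)), 0.0)       # coefficient of the volume monomial

def det(B):
    total = 0.0
    for perm in itertools.permutations(range(len(B))):
        s, _ = canon(perm)                     # sign of the permutation
        for r in range(len(B)):
            s *= B[r][perm[r]]
        total += s
    return total

assert abs(top - det(A)) < 1e-9
```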
We will use the following corollary in Section 5.2.1.

Corollary 4.2. For distinct vertices x_1, . . . , x_p ∈ Λ,

  ∫ e^{−S_A} Π_{l=1}^{p} ψ_{x_l} ψ̄_{x_l} = Σ_{σ} (−1)^{N(σ)} Π_{l=1}^{p} C_{x_l, σ(x_l)},   (4.18)

where the sum is over permutations σ of {x_1, . . . , x_p}, and N(σ) is the number of cycles in the permutation σ.
Proof. It follows from (4.9) and anticommutativity that the left-hand side of (4.18) is equal to (−1)^p Σ_σ ε_σ Π_{l=1}^{p} C_{x_l,σ(x_l)}, where ε_σ is the sign of the permutation σ. Then (4.18) follows from the identity ε_σ = (−1)^{p+N(σ)}, which itself follows from the fact that, for a permutation σ ∈ S_k consisting of cycles c of length |c|,

  ε_σ = Π_c (−1)^{|c|−1} = (−1)^{k−N(σ)}.

The case p = 1 of (4.23) states that the matrix element C_{i,j} = (A^{−1})_{i,j} equals the corresponding cofactor of A divided by det A, which is Cramer's rule. Thus (4.23) is a generalization of Cramer's rule.
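The p = 1 case (Cramer's rule) is easy to verify numerically: each entry of C = A^{−1} equals a signed cofactor of A divided by det A. A sketch with a hand-chosen 3 × 3 matrix, computing det and the inverse independently:

```python
import itertools

# Numerical sketch of the p = 1 case (Cramer's rule): each entry of
# C = A^{-1} equals a signed cofactor of A divided by det A.
A = [[2.0, 0.3, -0.5],
     [0.1, 1.5, 0.7],
     [-0.2, 0.4, 1.0]]
n = len(A)

def det(B):
    m, total = len(B), 0.0
    for perm in itertools.permutations(range(m)):
        inv = sum(1 for r in range(m) for s in range(r + 1, m) if perm[r] > perm[s])
        term = float((-1) ** inv)
        for r in range(m):
            term *= B[r][perm[r]]
        total += term
    return total

def minor(B, row, col):
    return [[B[r][s] for s in range(len(B)) if s != col]
            for r in range(len(B)) if r != row]

def inverse(B):
    m = len(B)
    aug = [row[:] + [1.0 if r == c else 0.0 for c in range(m)]
           for r, row in enumerate(B)]
    for i in range(m):                       # Gauss-Jordan elimination
        piv = aug[i][i]
        aug[i] = [v / piv for v in aug[i]]
        for r in range(m):
            if r != i:
                f = aug[r][i]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[i])]
    return [row[m:] for row in aug]

C = inverse(A)
dA = det(A)
for i in range(n):
    for j in range(n):
        cof = (-1) ** (i + j) * det(minor(A, j, i))   # delete row j, column i
        assert abs(C[i][j] - cof / dA) < 1e-9
```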

Integrals of functions of τ
The identity (4.25) below provides an extension of (4.10), and will be used in Section 5.2. The identity (4.26) is sometimes called the τ-isomorphism; it will lead to a representation for the weakly self-avoiding walk two-point function (3.4). Our method of proof follows the method of [3,15]. Alternate approaches to (4.25) are given in Sections 5.2.1 and 6.
Recall the definitions of τ_x in (4.6) and L_x above Theorem 2.5. We write τ for the entire collection (τ_x)_{x∈Λ}, and similarly for L.
Proof. It is straightforward to adapt the result of [24] to extend F to a C^∞ function on R^M, which we also call F. By multiplying F by a suitable C^∞ function, we can further assume that F is equal to zero outside a bounded set. F may then be written in terms of a function H by Fourier inversion, as in (4.29). Since H is of Schwartz class, the above integral is absolutely convergent. We may replace t by τ in (4.29); this amounts to a statement about differentiating under the integral, since functions of τ are defined by their power series as in (4.7). Let V be the real diagonal matrix with V_{x,x} = v_x. Since A − εI + iV has positive Hermitian part, (4.10) gives

  ∫ e^{−S_A} e^{Σ_{x∈Λ}(−iv_x+ε)τ_x} = ∫ e^{−S_{A−εI+iV}} = 1.   (4.30)

Assuming that it is possible to interchange the integrals, we obtain (4.25).
To complete the proof of (4.25), it remains only to justify the interchange of integrals; this can be done as follows. By definition, the iterated integral is equal to According to our definition of integration, the outer integral is evaluated as a usual Lebesgue integral by keeping the (finitely many) terms that produce the standard volume form on R 2M . Since H is Schwartz class and A−ǫI has positive Hermitian part, the resulting iterated Lebesgue integral is absolutely convergent and its order can be interchanged by Fubini's theorem. Once the integrals have been interchanged, the sums over n and N can be resummed to see that (4.32) has the same value when its two integrals are interchanged, and the proof of (4.25) is complete.
To prove (4.26), we fix ε > 0 such that A − εI is diagonally dominant. Then the left-hand side can be expanded in a series, where we have used (4.9) and Theorem 2.4 in the second equality, and Theorem 2.5 in the third. With further application of Fubini's theorem, we obtain (4.26).

Self-avoiding walk representations

By the τ-isomorphism (4.26), the weakly self-avoiding walk two-point function (3.4) has the representation

  G^{wsaw}_{a,b} = ∫ e^{−S_A} e^{−g Σ_{x∈Λ} τ_x^2 − λ Σ_{x∈Λ} τ_x} φ̄_a φ_b.   (5.1)
Proof. This is immediate when we take F(τ) = e^{−g Σ_{x∈Λ} τ_x^2 − λ Σ_{x∈Λ} τ_x} in (4.26).

The N → 0 limit
If we omit the fermions from the right-hand side of (5.1) and normalize the integral, then we obtain instead the two-point function of the |φ|^4 field theory. This is known to have a representation as the two-point function of a system of a weakly self-avoiding walk and weakly self-avoiding loops, all weakly mutually-avoiding [5,25], as we now briefly sketch.
Let n_x(ω) denote the number of visits to x by a walk ω, and let dν_n(s) denote a measure on [0, ∞) with dν_0(s) = δ(s) ds. It follows from [5, Theorem 2.1] (see also [4, p. 137] and [12, p. 197]) that for a real N-component field φ, for any component i, the two-point function has an expansion over a walk ω and loops ω_1, . . . , ω_n, where ‖ω‖ = |ω| + 1 denotes the number of vertices in ω, a normalization constant appears, and

  dν_{ω∪ω_1∪···∪ω_n}(t) = Π_{x∈Λ} dν_{n_x(ω)+n_x(ω_1)+···+n_x(ω_n)+N/2}(t_x).   (5.6)

Note the factor N/2 associated to each loop. If we simply set N = 0 in these formulas, then only the n = 0 term survives, and we obtain the formal limit (5.7) (formal, because the left-hand side is defined only for N = 1, 2, 3, . . .). As we argue next, the right-hand side of (5.7) is equal to the weakly self-avoiding walk two-point function G^{wsaw}_{a,b} (with modified parameters g, λ). This recovers de Gennes' idea, in the context of the weakly self-avoiding walk [1].
We now show that the right-hand side of (5.7) is equal to the right-hand side in the representation (3.4) of G^{wsaw}_{a,b}, with constant d_x ≡ d. As in the proof of Theorem 2.5, we condition on the events {η = n} and also on Y = (Y_0, Y_1, . . . , Y_n) ∈ W_{a,b}. Given both of these, the random variable L_x has a Gamma(n_x(Y), d) distribution, since it is the sum of independent Exp(d) random variables. Thus we obtain the right-hand side of (5.7) with a modified choice of constants in the exponent. Theorem 5.1 provides an alternative to the above formal N → 0 limit. The inclusion of fermions in Theorem 5.1 has eliminated all the loops, leaving only the weakly self-avoiding walk. In Section 5.2.1, we will make explicit the mechanism by which this occurs in the strictly self-avoiding walk representation: fermionic loops cancel the bosonic ones.

Strictly self-avoiding walk
Here we obtain the representation for (3.1). We give two proofs, based on two different ideas.

Theorem 5.2. Let C = A^{−1} have positive Hermitian part. Then

  G^{saw}_{a,b} = ∫ e^{−S_A} φ̄_a φ_b Π_{x∈Λ∖{a,b}} (1 + τ_x).   (5.12)

Proof. We write X = Λ ∖ {a, b}. By expanding the product of 1 + τ_x over x ∈ X, we arrive, after several intermediate identities ((5.13)–(5.16)), at an expression involving factors (1 + φ_z φ̄_z). We now interchange the sums over Y and ω, and then resum to obtain

Proof by expansion and resummation
By (4.25), the integral in the last line is 1, and we obtain (5.12).
The above proof ultimately relies on the identity

  ∫ e^{−S_A} Π_{x∈X} (1 + τ_x) = 1   (5.18)

for a subset X ⊂ Λ. This identity follows immediately from (4.25). We now give an alternate, more direct proof of (5.18), which demonstrates that (5.18) results from the explicit cancellation of bosonic loops carrying a factor +1 with fermionic loops carrying a factor −1. The net effect of a loop is (+1) + (−1) = 0, which provides a realization of the self-avoiding walk as corresponding to an N = 0 model, without the need of a mysterious N → 0 limit.
Alternate proof of (5.18). We expand the last product in (5.13) and apply Proposition 4.1 to obtain

  ∫ e^{−S_A} Π_{x∈X} (1 + τ_x) = Σ_{X_1,X_2⊆X: X_1∩X_2=∅} ∫ Π_{u∈X_1} φ_u φ̄_u dμ_C ∫ e^{−S_A} Π_{v∈X_2} ψ_v ψ̄_v.   (5.19)

The term X_1 = X_2 = ∅ is special, and contributes 1 to the above right-hand side. We write S(X_i) for the set of permutations of X_i, c_i for a cycle of σ_i ∈ S(X_i), and W_{c_i} = Π_{e∈c_i} C_e for the weight of the loop corresponding to the cycle c_i. With this notation, we can evaluate the integrals using Lemma 2.3 and (4.18) to find that the contribution to the right-hand side of (5.19) due to all terms other than X_1 = X_2 = ∅ is equal to

  Σ_{X_1,X_2⊆X: X_1∩X_2=∅, X_1∪X_2≠∅} ( Σ_{σ_1∈S(X_1)} Π_{c_1∈σ_1} W_{c_1} ) ( Σ_{σ_2∈S(X_2)} (−1)^{N(σ_2)} Π_{c_2∈σ_2} W_{c_2} ).

We claim that this equals zero. This is a consequence of the fact that, for fixed nonempty Y ⊆ X,

  Σ_{X_1,X_2: X_1∩X_2=∅, X_1∪X_2=Y} ( Σ_{σ_1∈S(X_1)} Π_{c_1∈σ_1} W_{c_1} ) ( Σ_{σ_2∈S(X_2)} (−1)^{N(σ_2)} Π_{c_2∈σ_2} W_{c_2} ) = 0,

which follows by expanding the product on the left-hand side.
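The loop cancellation used in this proof can be verified numerically: for every nonempty vertex set Y, summing over the ways to split Y between bosonic loop configurations (weight +1 per loop) and fermionic ones (weight −1 per loop) gives zero. A sketch on four vertices with an arbitrary matrix C of our choosing:

```python
import itertools

# Sketch of the boson/fermion loop cancellation: for every nonempty vertex
# set Y, splitting Y between unsigned loop configurations and loop
# configurations weighted by (-1)^{number of cycles} sums to zero.
C = [[0.9, 0.2, -0.1, 0.4],
     [0.3, 1.1, 0.5, -0.2],
     [0.7, -0.3, 0.8, 0.1],
     [0.2, 0.6, -0.4, 1.2]]

def cycle_sum(X, signed):
    """sum over permutations sigma of X of (-1)^{N(sigma)} (if signed)
    times prod_{x in X} C[x][sigma(x)]."""
    X, total = list(X), 0.0
    for perm in itertools.permutations(X):
        sigma = dict(zip(X, perm))
        seen, ncyc = set(), 0              # count the cycles of sigma
        for x in X:
            if x not in seen:
                ncyc += 1
                y = x
                while y not in seen:
                    seen.add(y)
                    y = sigma[y]
        w = 1.0
        for x in X:
            w *= C[x][sigma[x]]
        total += ((-1) ** ncyc if signed else 1.0) * w
    return total

max_dev = 0.0
for r in range(1, 5):
    for Y in itertools.combinations(range(4), r):
        s = 0.0
        for k in range(len(Y) + 1):
            for X1 in itertools.combinations(Y, k):
                X2 = tuple(x for x in Y if x not in X1)
                s += cycle_sum(X1, False) * cycle_sum(X2, True)
        max_dev = max(max_dev, abs(s))
assert max_dev < 1e-9
```

Grouping terms by the combined permutation of Y, each loop can be assigned to either side, producing the factor (+1) + (−1) = 0 per loop; the code confirms this for every Y.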

Proof by integration by parts
The integration by parts formula (2.10) extends easily to the mixed bosonic-fermionic case, to give

  ∫ e^{−S_A} φ̄_a F = Σ_{x∈Λ} C_{a,x} ∫ e^{−S_A} ∂F/∂φ_x,   (5.23)

where A has positive Hermitian part, C = A^{−1}, and F is any C^∞ form such that both sides are integrable. To see this, we first note that by linearity it suffices to consider the case F = fK, where f is a zero form and K is a product of factors of ψ and ψ̄. By Proposition 4.1 and (2.10), we obtain (5.24), and this proves (5.23). The special case F = φ_y in (5.23) gives ∫ e^{−S_A} φ̄_a φ_y = C_{a,y}. In the Gaussian integral, the effect of φ̄_a is to start a walk step at a, whereas φ_b has the effect of terminating a walk step at b. Each step receives the appropriate matrix element of the covariance C as its weight. This leads to the following alternate proof of Theorem 5.2.
Second proof of Theorem 5.2. The right-hand side of (5.12) is rewritten as in (5.27). Substitution of (5.27) into (5.23), using (4.25), then produces one step of the walk. After iteration, the right-hand side gives G^{saw}_{a,b}.

Comparison of two self-avoiding walk representations
The representations (5.1) and (5.12) give the weakly and strictly self-avoiding walk two-point functions as the integrals (5.29) and (5.30), respectively. These are heuristically related as follows. We insert the missing factors for x = a, b in the product in (5.30), and make the (uncontrolled) approximation

  1 + τ_x ≈ e^{τ_x − τ_x^2/2}.

The approximation amounts to matching terms up to order τ_x^2 in a Taylor expansion. With this approximation, (5.30) corresponds to (5.29) with g = 1/2 and λ = −1. A careful comparison of the two models is given in [10].
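The order-τ² matching can be checked numerically: with g = 1/2 and λ = −1, the difference e^{τ − τ²/2} − (1 + τ) is O(τ³), with third-order coefficient −1/3. A sketch:

```python
import math

# Check that 1 + t matches e^{-g t^2 - lambda t} with g = 1/2, lambda = -1
# through order t^2: the difference is -t^3/3 + O(t^4).
for t in (1e-2, 1e-3):
    diff = math.exp(t - 0.5 * t * t) - (1.0 + t)
    assert abs(diff + t ** 3 / 3.0) < t ** 4
```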

Supersymmetry
Integrals such as $\int e^{-S_A} F(\tau)$ are unchanged if we formally interchange the pairs $\phi, \bar\phi$ and $\psi, \bar\psi$. By (4.25), it is also true that $\int e^{-S_A} F(\tau) \bar\phi_a \phi_b = \int e^{-S_A} F(\tau) \bar\psi_a \psi_b$ (the difference is $\int e^{-S_A} (\bar\phi_a \phi_b - \bar\psi_a \psi_b) F(\tau) = 0$). This suggests the existence of a symmetry between bosons and fermions. Such a symmetry is called a supersymmetry.
In this section, as a brief illustration, we use methods of supersymmetry to provide an alternate proof of (4.25), following [7]. The supersymmetry generator Q is a map on the space of forms which maps bosons to fermions and vice versa. It can be defined in terms of standard operations in differential geometry, namely the exterior derivative and interior product, as follows.
An antiderivation $F$ is a linear map on forms which obeys $F(\omega_1 \wedge \omega_2) = F\omega_1 \wedge \omega_2 + (-1)^{p_1} \omega_1 \wedge F\omega_2$ when $\omega_1$ is a form of degree $p_1$. The exterior derivative $d$ is the linear antiderivation that maps a form of degree $p$ to a form of degree $p + 1$, defined by $d^2 = 0$ and, for a zero form $f$, $df = \sum_x \big( \frac{\partial f}{\partial \phi_x} d\phi_x + \frac{\partial f}{\partial \bar\phi_x} d\bar\phi_x \big)$. Consider the flow acting on $\mathbb{C}^M$ defined by $\phi_x \mapsto e^{-2\pi i\theta} \phi_x$. This flow is generated by the vector field $X$ defined by $X(\phi_x) = -2\pi i \phi_x$ and $X(\bar\phi_x) = 2\pi i \bar\phi_x$. The action by pullback of the flow on forms multiplies $\phi_x$ and $d\phi_x$ by $e^{-2\pi i\theta}$, and $\bar\phi_x$ and $d\bar\phi_x$ by $e^{2\pi i\theta}$. The interior product $i = i_X$ with the vector field $X$ is the linear antiderivation that maps forms of degree $p$ to forms of degree $p - 1$ (and maps forms of degree zero to zero), given by $i(d\phi_x) = X(\phi_x) = -2\pi i \phi_x$ and $i(d\bar\phi_x) = X(\bar\phi_x) = 2\pi i \bar\phi_x$. The interior product obeys $i^2 = 0$.
The supersymmetry generator $Q$ is defined by $Q = d + i$. A form $\omega$ that satisfies $Q\omega = 0$ is called supersymmetric or Q-closed. A form $\omega$ that is in the image of $Q$ is called Q-exact. Note that the integral of any Q-exact form $Q\omega$ is zero (assuming that the form decays appropriately at infinity), since integration acts only on forms of top degree $2N$ and the degree of $i\omega$ is at most $2N - 1$, while $\int d\omega = 0$ by Stokes' theorem. We will use the fact that $Q$ obeys the chain rule for even forms, in the sense that if $K = (K_1, \ldots, K_t)$ with each $K_i$ an even form, and if $F : \mathbb{C}^t \to \mathbb{C}$ is $C^\infty$, then $QF(K) = \sum_{i=1}^t F_i(K) \, QK_i$, where $F_i$ denotes the $i$-th partial derivative. A proof is given below.
The Lie derivative $L = L_X$ is obtained by differentiating the action of the flow at $\theta = 0$. Thus, for example, $L\phi_x = -2\pi i \phi_x$ and $L \, d\phi_x = -2\pi i \, d\phi_x$. A form $\omega$ is defined to be invariant if $L\omega = 0$. For example, the form $u_{x,y} = \phi_x \, d\bar\phi_y$ (6.7) is invariant, since it is constant under the flow of $X$. Cartan's formula asserts that $L = d\,i + i\,d$ (see, e.g., [14, p. 146]). Since $d^2 = 0$ and $i^2 = 0$, we have that $L = Q^2$, so $Q$ is the square root of $L$.
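The square root property is a one-line computation, expanding $Q^2$ with $Q = d + i$ and applying Cartan's formula:

```latex
Q^2 = (d + i)^2
    = d^2 + d\,i + i\,d + i^2
    = d\,i + i\,d
    = L ,
% using d^2 = 0, i^2 = 0, and Cartan's formula L = d i + i d.
```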
Alternate proof of (4.25). We will show that $\int e^{-S_A} F(\lambda\tau)$ is independent of $\lambda \in \mathbb{R}$. Comparing the value of this integral for $\lambda = 0$ and $\lambda = 1$, the identity (4.25) then follows from (4.10). Computation of the derivative gives
$$\frac{d}{d\lambda} \int e^{-S_A} F(\lambda\tau) = \int e^{-S_A} \sum_{x \in \Lambda} F_x(\lambda\tau) \tau_x, \qquad (6.8)$$
where $F_x$ denotes the partial derivative of $F$ with respect to the coordinate $x$. To show that the integral on the right-hand side vanishes, it suffices to show that the integrand is Q-exact.
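The overall argument can be summarized in one chain of equalities. This sketch assumes, as the comparison at $\lambda = 0$ suggests, that (4.10) is the normalization $\int e^{-S_A} = 1$, so that (4.25) reads $\int e^{-S_A} F(\tau) = F(0)$:

```latex
% If the derivative in (6.8) vanishes for all \lambda, then:
\int e^{-S_A} F(\tau)
  = \int e^{-S_A} F(\lambda\tau)\Big|_{\lambda = 1}
  = \int e^{-S_A} F(\lambda\tau)\Big|_{\lambda = 0}
  = F(0) \int e^{-S_A}
  = F(0) .
```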
Let $v_{x,y} = \frac{1}{2\pi i} u_{x,y}$, where $u_{x,y}$ is given by (6.7). Then $v_{x,y}$ is invariant, and since $Q v_{x,x} = \tau_x$, the form $\tau_x$ is both Q-exact and Q-closed. Since $Q\big(\sum_{x,y} A_{x,y} v_{x,y}\big) = S_A$ and $\sum_{x,y} A_{x,y} v_{x,y}$ is invariant, the form $S_A$ is also Q-exact and Q-closed. By (6.5), $e^{-S_A}$ and $F_x(\lambda\tau)$ are both Q-closed. Therefore, since $Q$ is an antiderivation,
$$e^{-S_A} F_x(\lambda\tau) \tau_x = Q\big( e^{-S_A} F_x(\lambda\tau) v_{x,x} \big), \qquad (6.9)$$
as required.
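The identity $Q v_{x,x} = \tau_x$ can be verified directly. This sketch assumes, consistently with the conventions of Section 4 though not repeated here, that $u_{x,y} = \phi_x \, d\bar\phi_y$ and that $\tau_x = \phi_x \bar\phi_x + \frac{1}{2\pi i} d\phi_x \wedge d\bar\phi_x$:

```latex
Q v_{x,x}
  = \frac{1}{2\pi i}\,(d + i)\big(\phi_x \, d\bar\phi_x\big)
  = \frac{1}{2\pi i}\, d\phi_x \wedge d\bar\phi_x
    + \frac{1}{2\pi i}\, \phi_x \, i(d\bar\phi_x)
  = \frac{1}{2\pi i}\, d\phi_x \wedge d\bar\phi_x + \phi_x \bar\phi_x
  = \tau_x ,
% using i(\phi_x) = 0 for the zero form \phi_x,
% and i(d\bar\phi_x) = 2\pi i \, \bar\phi_x.
```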
Proof of the chain rule (6.5) for $Q$. Suppose first that $K$ is a zero form. Then $QF(K) = dF(K)$, since the interior product annihilates zero forms. By the chain rule for $d$, this is $\sum_i F_i(K) \, dK_i = \sum_i F_i(K) \, QK_i$. This proves (6.5) for zero forms, so we may assume now that $K$ is of higher degree. Let $\epsilon_i$ be the multi-index that has $i$-th component 1 and all other components 0. Let $K^{(0)}$ denote the degree-zero part of $K$. By (4.7), the fact that $Q$ is an antiderivation, and the chain rule applied to zero forms, we obtain the expansion (6.11). Since $Q$ is an antiderivation, we also obtain (6.12). The first term on the right-hand side of (6.11) is canceled by the contribution to the second term of (6.11) due to the second term of (6.12). The contribution to the second term of (6.11) due to the first term of (6.12) is $\sum_i F_i(K) \, QK_i$, as required.

Conclusion
We have given a unified treatment of three representations for simple random walk in Theorems 2.4, 2.5 and 2.6. These representations had appeared previously in [5,11,3]. In Theorem 2.8, we have represented a model of a self-avoiding walk in a background of self-avoiding loops, all mutually avoiding, in terms of a (bosonic) Gaussian integral. Mixed bosonic-fermionic Gaussian integrals were introduced in Section 4, and some elements of the theory of these integrals were derived. Using these integrals, and particularly using Proposition 4.4, representations for the weakly self-avoiding walk and strictly self-avoiding walk were obtained in Theorems 5.1 and 5.2, respectively. Our representation in Theorem 5.2 is new. These representations provide the point of departure for rigorous renormalization group analyses of various self-avoiding walk problems [3,6,7,10,20]. For the strictly self-avoiding walk, two different proofs of the representation were given, in Sections 5.2.1 and 5.2.2. The role of the fermionic part of the representation in eliminating loops was detailed in Section 5.2.1. This contrasts with the formal N → 0 limit discussed in Section 5.1.2.
The mixed bosonic-fermionic representations are examples of supersymmetric field theories. A brief discussion of some elements of supersymmetry was given in Section 6.