Inverting the coupling of the signed Gaussian free field with a loop soup

Lupu introduced a coupling between a random walk loop soup and a Gaussian free field, in which the sign of the field is constant on each cluster of loops. This coupling is a signed version of isomorphism theorems relating the square of the GFF to the occupation field of Markovian trajectories. His construction starts from a loop soup and, by adding extra randomness, samples a GFF out of it. In this article we provide the inverse construction: starting from a signed free field and using a self-interacting random walk related to this field, we construct a random walk loop soup. Our construction relies on previous work by Sabot and Tarrès, which inverts the coupling with the square of the GFF rather than with the signed GFF itself. As a consequence, we also deduce an inversion of the coupling between the random current and the FK-Ising random cluster models introduced by Lupu and Werner.


Introduction
Let G = (V, E) be a connected undirected graph, with V at most countable and every vertex x ∈ V of finite degree. We do not allow self-loops; multiple edges are, however, allowed. Given an edge e ∈ E, we denote by e+ and e− its end-vertices, even though e is non-oriented and one may interchange e+ and e−. Each edge e ∈ E is endowed with a conductance W_e > 0. There may also be a killing measure κ = (κ_x)_{x∈V} on the vertices.
We consider (X_t)_{t≥0} the Markov jump process on V which, from x ∈ V, jumps along an adjacent edge e with rate W_e. Moreover, if κ_x > 0, the process is killed at x with rate κ_x (the process is not defined after that time). ζ will denote the time up to which X_t is defined. If ζ < +∞, then either the process has been killed by the killing measure κ (and κ ≢ 0) or it has gone off to infinity in finite time (and V is infinite). We will assume that the process X is transient, which means, if V is finite, that κ ≢ 0. P_x will denote the law of X started from x. Let (G(x, y))_{x,y∈V} be the Green function of X_t:
$$G(x, y) = G(y, x) = E_x\Big[\int_0^{\zeta} 1_{\{X_t = y\}}\, dt\Big].$$
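For intuition, on a finite graph the Green function of such a killed jump process can be computed as a matrix inverse and checked by simulation. The following sketch is not from the paper (the two-vertex example and function names are ours); it uses the standard fact that G = (D − A)^{−1}, where A is the conductance matrix and D the diagonal matrix of total jump-plus-killing rates:

```python
import numpy as np

rng = np.random.default_rng(0)

W = np.array([[0.0, 1.0], [1.0, 0.0]])   # conductances W_e
kappa = np.array([1.0, 1.0])             # killing measure (nonzero => transient)

D = np.diag(W.sum(axis=1) + kappa)
G = np.linalg.inv(D - W)                 # exact Green function

def occupation_time(x0, y, rng):
    """Total time spent at y by the jump process started at x0, up to killing."""
    x, t_at_y = x0, 0.0
    while True:
        rate = W[x].sum() + kappa[x]
        hold = rng.exponential(1.0 / rate)   # exponential holding time
        if x == y:
            t_at_y += hold
        # killed with probability kappa_x / rate, otherwise jump to a neighbour
        if rng.random() < kappa[x] / rate:
            return t_at_y
        x = rng.choice(len(W), p=W[x] / W[x].sum())

# Monte Carlo estimate of G(0, 0) = E_0[ time spent at 0 ]
mc = np.mean([occupation_time(0, 0, rng) for _ in range(20000)])
print(G[0, 0], mc)
```

Here G(0, 0) = 2/3 exactly, and the Monte Carlo average of the occupation time reproduces it, illustrating the identity defining G.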
Let \mathcal{E} be the Dirichlet form defined on functions f on V with finite support:
$$\mathcal{E}(f, f) = \sum_{e \in E} W_e\, (f(e+) - f(e-))^2 + \sum_{x \in V} \kappa_x f(x)^2.$$
P_ϕ will denote the law of the Gaussian free field (GFF) ϕ = (ϕ_x)_{x∈V} with covariance given by the Green function G. Given U a finite subset of V, and f a function on U, P^{U,f}_ϕ will denote the law of the GFF ϕ conditioned to equal f on U. (ℓ_x(t))_{x∈V, t∈[0,ζ]} will denote the family of local times of X:
$$\ell_x(t) = \int_0^t 1_{\{X_s = x\}}\, ds.$$
For all x ∈ V, u > 0, let τ^x_u = inf{t ≥ 0 : ℓ_x(t) > u}. Recall the generalized second Ray-Knight theorem on discrete graphs by Eisenbaum, Kaspi, Marcus, Rosen and Shi [2] (see also [8, 10]):

Generalized second Ray-Knight theorem. For any u > 0 and x_0 ∈ V,
$$\Big(\ell_x(\tau^{x_0}_u) + \tfrac{1}{2}\varphi_x^2\Big)_{x \in V} \quad \text{under } P_{x_0}\big(\cdot\,\big|\,\tau^{x_0}_u < \zeta\big) \otimes P^{\{x_0\},0}_\varphi$$
has the same law as
$$\Big(\tfrac{1}{2}\varphi_x^2\Big)_{x \in V} \quad \text{under } P^{\{x_0\},\sqrt{2u}}_\varphi.$$
Sabot and Tarrès showed in [9] that the so-called "magnetized" reverse Vertex-Reinforced Jump Process provides an inversion of the generalized second Ray-Knight theorem, in the sense that it enables one to retrieve the law of (ℓ_x(τ^{x_0}_u), ϕ_x²)_{x∈V} conditioned on (ℓ_x(τ^{x_0}_u) + ½ϕ_x²)_{x∈V}. The jump rates of that latter process can be interpreted as the two-point functions of the Ising model associated to the time-evolving weights.
However, in [9] the link with the Ising model is only implicit, and a natural question is whether the Ray-Knight inversion can be described in a simpler form if we enlarge the state space of the dynamics, and in particular include the "hidden" spin variables.
The answer is positive, and goes through an extension of the Ray-Knight isomorphism introduced by Lupu [6], which couples the sign of the GFF to the path of the Markov chain. The Ray-Knight inversion turns out to take a rather simple form in Theorem 3 of the present paper, where it is defined not only through the spin variables but also through random currents associated to the field via an extra Poisson point process.
The paper is organised as follows.
In Section 2 we recall some background on loop soup isomorphisms and on related couplings, and we state and prove a signed version of the generalized second Ray-Knight theorem. We begin in Section 2.1 with a statement of Le Jan's isomorphism, which couples the square of the Gaussian free field to loop soups, and recall how the generalized second Ray-Knight theorem can be seen as its corollary; for more details see [4]. In Section 2.2 we state Lupu's isomorphism, which extends Le Jan's isomorphism and couples the sign of the GFF to the loop soups, using a cable graph extension of the GFF and of the Markov chain. Lupu's isomorphism yields an interesting realisation of the well-known FK-Ising coupling, and provides as well a "Current+Bernoulli=FK" coupling lemma [7], both of which occur in the relationship between the discrete and cable graph versions. We briefly recall those couplings in Sections 2.3 and 2.4, as they are implicit in this paper. In Section 2.5 we state and prove the generalized second Ray-Knight "version" of Lupu's isomorphism, which we aim to invert. Section 3 is devoted to the statements of the inversions of those isomorphisms. We state in Section 3.1 a signed version of the inversion of the generalized second Ray-Knight theorem through an extra Poisson point process, namely Theorem 3. In Section 3.2 we provide a discrete-time description of the process, whereas in Section 3.3 we give an alternative description of that process through jump rates, which can be seen as an annealed version of the first one.
We deduce a signed inversion of Le Jan's isomorphism for loop soups in Section 3.4, and an inversion of the coupling of random current with FK-Ising in Section 3.5.
Finally Section 4 is devoted to the proof of Theorem 3: Section 4.1 deals with the case of a finite graph without killing measure, and Section 4.2 deduces the proof in the general case.
2. Le Jan's and Lupu's isomorphisms

2.1. Loop soups and Le Jan's isomorphism. The loop measure associated to the Markov jump process (X_t)_{0≤t<ζ} is defined as follows. Let P^t_{x,y} be the bridge probability measure from x to y in time t (conditioned on t < ζ). Let p_t(x, y) be the transition probabilities of (X_t)_{0≤t<ζ}.
Let μ^loop be the measure on time-parametrised nearest-neighbour based loops (i.e. loops with a starting site). The loops will be considered here up to rotation of the parametrisation (with the corresponding pushforward measure induced by μ^loop), that is to say a loop (γ(t))_{0≤t≤t_γ} will be identified with (γ(T + t))_{0≤t≤t_γ−T} • (γ(T + t − t_γ))_{t_γ−T≤t≤t_γ}, where • denotes the concatenation of paths. A loop soup of intensity α > 0, denoted L_α, is a Poisson random measure of intensity αμ^loop. We see it as a random collection of loops in G. Observe that a.s. above each vertex x ∈ V, L_α contains infinitely many trivial "loops" reduced to the vertex x. There are also, with positive probability, non-trivial loops that visit several vertices.
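A standard form of the based loop measure, in the convention of [3, 4] (normalisations vary slightly across references, so this is a sketch of the usual choice rather than the paper's exact display), is

```latex
\mu^{\mathrm{loop}} \;=\; \sum_{x \in V} \int_0^{+\infty}
  \mathbb{P}^{t}_{x,x}\, p_t(x,x)\, \frac{\mathrm{d}t}{t}.
```

The factor 1/t compensates for the arbitrary choice of the starting point in time on the loop, which is what makes the pushforward to rotation classes of parametrisations natural.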
In [3] Le Jan shows that for transient Markov jump processes, L_x(L_α) < +∞ for all x ∈ V a.s., where L_x(L_α) denotes the occupation field of the loop soup at x. For α = 1/2 he identifies the law of L_.(L_α):

Le Jan's isomorphism. L_.(L_{1/2}) = (L_x(L_{1/2}))_{x∈V} has the same law as (½ϕ_x²)_{x∈V} under P_ϕ.

Let us briefly recall how Le Jan's isomorphism enables one to retrieve the generalized second Ray-Knight theorem stated in Section 1; for more details, see for instance [4]. We assume that κ is supported by x_0: the general case can be dealt with by an argument similar to the proof of Proposition 4.6. Let D = V \ {x_0}, and note that the isomorphism in particular implies that L_.(L_{1/2}) conditionally on L_{x_0}(L_{1/2}) = u has the same law as ϕ²/2 conditionally on ϕ_{x_0}²/2 = u. On the one hand, given the classical energy decomposition, we have ϕ = ϕ^D + ϕ_{x_0}, with ϕ^D the GFF associated to the restriction of \mathcal{E} to D, where ϕ^D and ϕ_{x_0} are independent. Now ϕ²/2 conditionally on ϕ_{x_0}²/2 = u has the law of (ϕ^D + η√(2u))²/2, where η is the sign of ϕ_{x_0}, which is independent of ϕ^D. But ϕ^D is symmetric, so that the latter also has the law of (ϕ^D + √(2u))²/2. On the other hand, the loop soup L_{1/2} can be decomposed into two independent loop soups: L^D_{1/2}, made of the loops contained in D, and L^{x_0}_{1/2}, made of the loops visiting x_0. Then L_.(L^D_{1/2}) has the law of (ϕ^D)²/2, and L_.(L^{x_0}_{1/2}) conditionally on L_{x_0}(L_{1/2}) = u has the law of the occupation field of the Markov chain ℓ(τ^{x_0}_u) under P_{x_0}(·|τ^{x_0}_u < ζ), which enables us to conclude.

2.2. Lupu's isomorphism. As in [6], we consider the metric graph G̃ associated to G. Each edge e is replaced by a continuous line of length ½W_e^{−1}. The GFF ϕ on G with law P_ϕ can be extended to a GFF φ on G̃ as follows. Given e ∈ E, one considers inside e a conditionally independent Brownian bridge, actually a bridge of a √2 × standard Brownian motion, of length ½W_e^{−1}, with end-values ϕ_{e−} and ϕ_{e+}. This provides a continuous field on the metric graph which satisfies the spatial Markov property.
Similarly one can define a standard Brownian motion (B^{G̃}_t)_{0≤t≤ζ̃} on G̃, whose trace on G, indexed by the local times at V, has the same law as the Markov process (X_t)_{t≥0} on V with jump rate W_e along an adjacent edge e up to time ζ, as explained in Section 2 of [6]. One can associate to it a measure μ̃ on time-parametrized continuous loops, and let L̃_α denote the corresponding loop soup of intensity α.

Theorem 1 (Lupu's Isomorphism, [6]). There is a coupling between the Poisson ensemble of loops L̃_{1/2} and (φ_y)_{y∈G̃} defined above, such that the two following constraints hold:
• for all y ∈ G̃, L_y(L̃_{1/2}) = ½φ_y²;
• the clusters of loops of L̃_{1/2} are exactly the sign clusters of (φ_y)_{y∈G̃}.
Conditionally on (|φ_y|)_{y∈G̃}, the sign of φ on each connected component of {y ∈ G̃ : φ_y ≠ 0} is distributed independently and uniformly in {−1, +1}.
Lupu's isomorphism and the idea of using metric graphs were applied in [5] to show that on the discrete half-plane Z × N, the scaling limits of outermost boundaries of clusters of loops in loop soups are the Conformal Loop Ensembles CLE.
Let O(φ) (resp. O(L̃_{1/2})) be the set of edges e ∈ E such that φ does not vanish on e (resp. such that e is entirely covered by the loops of L̃_{1/2}).

Lemma 2.1. Conditionally on (ϕ_x)_{x∈V}, (1_{e∈O(φ)})_{e∈E} is a family of independent random variables, and
$$P\big(e \in O(\varphi)\,\big|\,(\varphi_x)_{x\in V}\big) = \begin{cases} 1 - e^{-2W_e \varphi_{e-}\varphi_{e+}} & \text{if } \varphi_{e-}\varphi_{e+} > 0,\\[2pt] 0 & \text{otherwise.}\end{cases} \quad (2.1)$$

Proof. Conditionally on (ϕ_x)_{x∈V}, the restrictions of φ to the edges are constructed as independent Brownian bridges, so that (1_{e∈O(φ)})_{e∈E} are independent random variables, and it follows from the reflection principle that, if ϕ_{e−}ϕ_{e+} > 0, the probability that the bridge keeps a constant sign is 1 − e^{−2W_e ϕ_{e−}ϕ_{e+}}.

Let us now recall how the conditional probability in Lemma 2.1 yields a realisation of the FK-Ising coupling.
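Numerically, the conditional opening probability of Lemma 2.1 is easy to tabulate. The sketch below (function name is ours) evaluates it: a bridge of a √2 × Brownian motion of length ½W_e^{−1} between two values of the same sign avoids 0 with probability 1 − e^{−2W_e ϕ_{e−}ϕ_{e+}}, and an edge with endpoint values of opposite signs is never open.

```python
import math

def edge_open_prob(W_e, phi_minus, phi_plus):
    """Conditional probability that the interpolating bridge on edge e
    does not vanish, given the discrete field values at the endpoints."""
    if phi_minus * phi_plus <= 0:
        return 0.0
    return 1.0 - math.exp(-2.0 * W_e * phi_minus * phi_plus)

print(edge_open_prob(1.0, 1.0, 1.0))   # 1 - e^{-2}
print(edge_open_prob(1.0, 1.0, -1.0))  # 0.0
```

The probability increases with the conductance and with the field values at the endpoints, which is what drives the FK-Ising comparison below.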
Assume V is finite. Let (J_e)_{e∈E} be a family of positive weights. An Ising model on V with interaction constants (J_e)_{e∈E} is a probability measure on configurations of spins (σ_x)_{x∈V} ∈ {+1, −1}^V such that
$$P\big((\sigma_x)_{x\in V}\big) \propto \exp\Big(\sum_{e \in E} J_e\, \sigma_{e-}\sigma_{e+}\Big).$$
Consider the GFF ϕ on G distributed according to P_ϕ. Let J_e(|ϕ|) be the random interaction constants J_e(|ϕ|) = W_e|ϕ_{e−}ϕ_{e+}|.
Note that, given that O(φ) has the FK-Ising distribution, the fact that the sign of ϕ on the connected components induced by O(φ) is distributed independently and uniformly in {−1, 1} can be seen either as a consequence of Proposition 2.2, or as following from Theorem 1.
Given ϕ = (ϕ_x)_{x∈V} on the discrete graph G, we introduce in Definition 2.1 a random set of edges which has the distribution of O(φ) conditionally on ϕ = (ϕ_x)_{x∈V}. Its conditional law can be retrieved from Corollary 3.6 in [6].
This result gives rise, together with Theorem 1, to the following discrete version of Lupu's isomorphism, which is stated without any recourse to the cable graph induced by G.
Let (ω_e)_{e∈E} ∈ {0, 1}^E be a percolation defined as follows: conditionally on L_{1/2}, the random variables (ω_e)_{e∈E} are independent, and ω_e equals 0 with conditional probability given by (2.1).
Denote by O(L_{1/2}, ω) the set of edges e ∈ E such that N_e(L_{1/2}) > 0 or ω_e = 1, and let (σ_x)_{x∈V} be random signs sampled uniformly independently on each cluster induced by O(L_{1/2}, ω). Then (σ_x √(2L_x(L_{1/2})))_{x∈V} is a Gaussian free field distributed according to P_ϕ.

Proposition 2.4 induces the following coupling between FK-Ising and random currents.
If V is finite, a random current model on G with weights (J_e)_{e∈E} is a random assignment to each edge e of a non-negative integer n̂_e such that for all x ∈ V, Σ_{e adjacent to x} n̂_e is even, which is called the parity condition. The probability of a configuration (n_e)_{e∈E} satisfying the parity condition is
$$P\big((\hat{n}_e)_{e\in E} = (n_e)_{e\in E}\big) \propto \prod_{e \in E} \frac{J_e^{n_e}}{n_e!}.$$
Let O(n̂) denote the set of edges e such that n̂_e > 0. The open edges in O(n̂) induce clusters on the graph G. Given a loop soup L_α, we denote by N_e(L_α) the number of times the loops in L_α cross the non-oriented edge e ∈ E. The transience of the Markov jump process X implies that N_e(L_α) is a.s. finite for all e ∈ E. If α = 1/2, we have the following identity (see for instance [11]):

Loop soup and random current. Assume V is finite and consider the loop soup L_{1/2}. Conditionally on its occupation field L_.(L_{1/2}), the family (N_e(L_{1/2}))_{e∈E} is distributed as a random current with weights (J_e(|ϕ|))_{e∈E}, where ½ϕ² = L_.(L_{1/2}).

Proposition 2.5 (Random current and FK-Ising coupling, [7]). Assume V is finite. Let n̂ be a random current on G with weights (J_e)_{e∈E}. Let (ω_e)_{e∈E} ∈ {0, 1}^E be an independent percolation, each edge being opened (value 1) independently with probability 1 − e^{−J_e}. Then O(n̂) ∪ {e ∈ E : ω_e = 1} is distributed like the set of open edges in an FK-Ising model with weights (1 − e^{−2J_e})_{e∈E}.
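On a single edge, Proposition 2.5 can be checked in closed form: the parity condition forces n̂ to be even, so P(n̂ = 0) = 1/cosh J, and sprinkling an independent Bernoulli(1 − e^{−J}) edge gives an open edge with probability 1 − e^{−J}/cosh J = tanh J, which is exactly the single-edge FK-Ising (q = 2) probability for weight p = 1 − e^{−2J}. A quick numerical check of this single-edge case (written for this note, not taken from [7]):

```python
import math

def current_plus_bernoulli_open(J):
    """P(edge open) for random current + Bernoulli sprinkling, single edge."""
    p_n_zero = 1.0 / math.cosh(J)          # parity forces n even; P(n = 0)
    return 1.0 - p_n_zero * math.exp(-J)   # open iff n > 0 or Bernoulli edge open

def fk_ising_open(J):
    """P(edge open) for single-edge FK-Ising, q = 2, p = 1 - e^{-2J}."""
    p = 1.0 - math.exp(-2.0 * J)
    # FK weights: open -> p * q (one cluster), closed -> (1 - p) * q^2 (two clusters)
    return 2.0 * p / (2.0 * p + 4.0 * (1.0 - p))

for J in (0.1, 0.5, 1.0, 3.0):
    print(J, current_plus_bernoulli_open(J), fk_ising_open(J))
```

Both columns agree (both equal tanh J), which is the single-edge instance of the "Current+Bernoulli=FK" coupling.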
2.5. Generalized second Ray-Knight "version" of Lupu's isomorphism. We are now in a position to state the coupled version of the second Ray-Knight theorem.
Theorem 2. Let u > 0 and let ϕ ∼ P^{{x_0},0}_ϕ be independent of the Markov jump process X started at x_0. Let O_u be the random set of edges consisting of the edges used by the path (X_t)_{0≤t≤τ^{x_0}_u}, and of additional edges e opened conditionally independently with a probability analogous to that of Lemma 2.1. We let σ ∈ {−1, +1}^V be random spins sampled uniformly independently on each cluster induced by O_u, pinned at x_0, i.e. σ_{x_0} = 1, and define
$$\varphi^{(u)}_x = \sigma_x \sqrt{\varphi_x^2 + 2\ell_x(\tau^{x_0}_u)}.$$
Then, conditionally on τ^{x_0}_u < ζ, ϕ^{(u)} ∼ P^{{x_0},√(2u)}_ϕ.

Remark 2.6. One consequence of that coupling is that the path (X_s)_{s≤τ^{x_0}_u} stays in the sign cluster of x_0 induced by ϕ^{(u)}.

Proof of Theorem 2: The proof is based on [6]. Let D = V \ {x_0}, and let L̃_{1/2} be the loop soup of intensity 1/2 on the cable graph G̃, which we decompose into L̃^{x_0}_{1/2} (resp. L̃^D_{1/2}), the loop soup hitting (resp. not hitting) x_0; the two are independent. We let L_{1/2} and L^D_{1/2} denote their prints on the discrete graph. Theorem 1 implies (recall also Definition 2.1) that we can couple L̃_{1/2} with a field φ such that φ_x = σ_x|φ_x|, where σ ∈ {−1, +1}^V are random spins sampled uniformly independently on each cluster induced by O(L̃_{1/2}), pinned at x_0, i.e. σ_{x_0} = 1. Then, by Theorem 1, the clusters of L̃_{1/2} are the sign clusters of φ.
On the other hand, conditionally on L_.(L̃_{1/2}), we use in the third equality that the event e ∈ O(L̃^D_{1/2}) is measurable with respect to the σ-field generated by L̃^D_{1/2}, which is independent of L̃^{x_0}_{1/2}.
We conclude the proof by observing that L

3. Inversion of the signed isomorphism
In [9], Sabot and Tarrès give a new proof of the generalized second Ray-Knight theorem, together with a construction that inverts the coupling between the square of a GFF conditioned by its value at a vertex x_0 and the excursions of the jump process X from and to x_0. In this paper we are interested in inverting the coupling of Theorem 2 with the signed GFF: more precisely, we want to describe the law of (X_t)_{0≤t≤τ^{x_0}_u} conditionally on ϕ^{(u)}. We present in Section 3.1 an inversion involving an extra Poisson process. We provide in Section 3.2 a discrete-time description of the process and in Section 3.3 an alternative description via jump rates. Sections 3.4 and 3.5 are respectively dedicated to a signed inversion of Le Jan's isomorphism for loop soups, and to an inversion of the coupling of random current with FK-Ising.

3.1. A description via an extra Poisson point process
We define a self-interacting process (X̌_t, (ň_e(t))_{e∈E}) living on V × N^E as follows. Let ((N_e(u))_{u≥0})_{e∈E} be independent Poisson point processes on R_+ with intensity 1. Given an initial field Φ̌ = (Φ̌_x)_{x∈V}, set Φ̌_x(t) = (Φ̌_x² − 2ℓ̌_x(t))^{1/2}, where ℓ̌_x(t) is the local time of X̌ at x up to time t, and ň_e(t) = N_e(2J_e(Φ̌(t))). We also denote by Č(t) ⊂ E the configuration of edges such that ň_e(t) > 0. As time increases, the interaction parameters J_e(Φ̌(t)) decrease for the edges neighbouring X̌_t, and at some random times ň_e(t) may drop by 1. The process (X̌_t)_{t≥0} is defined as the process that jumps only at the times when one of the ň_e(t) drops by 1, as follows:
• if ň_e(t) decreases by 1 at time t, but does not create a new cluster in Č(t), then X̌_t crosses the edge e with probability 1/2 or does not move with probability 1/2;
• if ň_e(t) decreases by 1 at time t, and does create a new cluster in Č(t), then X̌_t moves (or stays) with probability 1 to the unique extremity of e which is in the cluster of the origin x_0 in the new configuration.
We set Ť = inf{t ≥ 0 : ∃x ∈ V, Φ̌_x(t) = 0}; clearly, the process is well-defined up to time Ť.

Theorem 3. Assume that V is finite. With the notation of Theorem 2, conditionally on ϕ^{(u)}, (X_t)_{0≤t≤τ^{x_0}_u} has the same law as (X̌_{Ť−t})_{0≤t≤Ť} with the initial condition Φ̌ = ϕ^{(u)}. If V is infinite, a.s. X̌_t (with the initial condition Φ̌ = ϕ^{(u)}) ends at x_0, i.e. Ť < +∞ and X̌_Ť = x_0, and all previous conclusions for the finite case still hold.
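The case distinction in the two rules above turns on connectivity in the configuration after the depletion. A minimal sketch of that decision (helper names are ours; the Poisson clocks driving the depletions are not modelled here), with connectivity decided by breadth-first search:

```python
from collections import deque
import random

def connected(config, a, b):
    """BFS: are a and b connected by the edges of the configuration?"""
    seen, queue = {a}, deque([a])
    while queue:
        v = queue.popleft()
        if v == b:
            return True
        for (x, y) in config:
            for u, w in ((x, y), (y, x)):
                if u == v and w not in seen:
                    seen.add(w)
                    queue.append(w)
    return False

def move_after_depletion(X, e, config, x0, rng=random):
    """Position of the walker after the stack of edge e = (a, b) drops to 0.

    config is the set of edges with positive stack AFTER removing e.
    """
    a, b = e
    assert X in (a, b)
    if connected(config, a, b):
        # no new cluster: cross e with probability 1/2, stay otherwise
        return (b if X == a else a) if rng.random() < 0.5 else X
    # a new cluster is created: go to the endpoint still connected to x0
    return a if connected(config, a, x0) else b

# path 0-1-2, edge (1, 2) depleted: the walker at 2 is forced back to 1
print(move_after_depletion(2, (1, 2), {(0, 1)}, 0))
```

In the splitting case the move is deterministic, which is what makes the reversed walk end at x_0 when the field hits zero.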

3.2. Discrete time description of the process. We give a discrete time description of the process (X̌_t, (ň_e(t))_{e∈E}) that appears in the previous section. Let t_0 = 0 and 0 < t_1 < · · · < t_j be the stopping times at which one of the stacks ň_e(t) decreases by 1, where t_j is the time at which one of the stacks is completely depleted. It is elementary to check the following:

3.3. An alternative description via jump rates. We provide an alternative description of the process (X̌_t, Č(t)) that appears in Section 3.1.
Proposition 3.3. The process (X̌_t, Č(t)) defined in Section 3.1 can alternatively be described through its jump rates: conditionally on its past at time t, if X̌_t = x, y ∼ x and {x, y} ∈ Č(t), then
(1) X̌ jumps to y without modification of Č(t) at an explicit rate;
(2) the edge {x, y} is removed from Č(t) at an explicit rate and, conditionally on that last event:
– if y is connected to x in the configuration Č(t) \ {x, y}, then X̌ simultaneously jumps to y with probability 1/2 and stays at x with probability 1/2;
– otherwise X̌ moves (or stays) with probability 1 to the unique extremity of {x, y} which is in the cluster of the origin x_0 in the new configuration.
Theorem 4. With the notation of Theorem 2, conditionally on (ϕ^{(u)}, O_u), (X_t)_{t≤τ^{x_0}_u} has the law of the self-interacting process (X̌_{Ť−t})_{0≤t≤Ť} defined by the jump rates of Proposition 3.3, starting with Φ̌(0) = |ϕ^{(u)}| and Č(0) = O_u. Moreover (ϕ^{(0)}, O(ϕ^{(0)})) has the same law as (σ′Φ̌(Ť), Č(Ť)), where (σ′_x)_{x∈V} is a configuration of signs obtained by picking a sign at random independently on each connected component of Č(Ť), with the condition that the component of x_0 has a + sign.

3.4. A signed version of Le Jan's isomorphism for loop soups.
Let us first recall how the loops in L_α are connected to the excursions of the jump process X.

Proposition 3.6 (From excursions to loops). Let α > 0 and x_0 ∈ V. L_{x_0}(L_α) is distributed according to a Gamma(α, G(x_0, x_0)) law, where G is the Green function. Let u > 0, and consider the path

Consider the family of paths
It is a countable family of loops rooted in x 0 . It has the same law as the family of all the loops in L α that visit x 0 , conditioned on L x 0 (L α ) = u.
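The first claim of Proposition 3.6 is easy to exercise numerically. The sketch below (the two-vertex example is ours, not from the paper) computes G(x_0, x_0) by matrix inversion and samples the stated Gamma law in the shape-scale convention, under which the mean is α·G(x_0, x_0); this convention is consistent with the α = 1/2 case, where L_{x_0}(L_{1/2}) has the law of ½ϕ_{x_0}² with ϕ_{x_0} centred Gaussian of variance G(x_0, x_0).

```python
import numpy as np

rng = np.random.default_rng(1)

W = np.array([[0.0, 1.0], [1.0, 0.0]])   # conductances
kappa = np.array([1.0, 1.0])             # killing measure
G = np.linalg.inv(np.diag(W.sum(axis=1) + kappa) - W)

alpha, x0 = 0.5, 0
g = G[x0, x0]                            # = 2/3 on this example
samples = rng.gamma(shape=alpha, scale=g, size=200000)
print(samples.mean())                    # close to alpha * g = 1/3
```

Only the stated one-dimensional law is exercised here, not the loop soup itself.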
Next we describe how to invert the discrete version of Lupu's isomorphism (Proposition 2.4) for the loop soup, in the same way as in Theorem 3.
Let (φ̂_x)_{x∈V} be a real function on V such that φ̂_{x_0} = +√(2u) for some u > 0. Let (x_i)_{1≤i≤|V|} be an enumeration of V (which may be infinite). We define by induction the self-interacting processes ((X̌_{i,t})_{1≤i≤|V|}, (ň_e(t))_{e∈E}). Ť_i will denote the end-time for X̌_{i,t}, and Ť^+_i = Σ_{1≤j≤i} Ť_j. By definition, Ť^+_0 = 0. The end-times Ť_i are defined by induction. Let (N_e(u))_{u≥0} be independent Poisson point processes on R_+ with intensity 1, for each edge e ∈ E. We also denote by Č(t) ⊂ E the configuration of edges such that ň_e(t) > 0. X̌_{i,t} starts at x_i and evolves as follows:
• if ň_e(t) decreases by 1 at time t, but does not create a new cluster in Č(t), then X̌_{i,t−Ť^+_{i−1}} crosses the edge e with probability 1/2 or does not move with probability 1/2;
• if ň_e(t) decreases by 1 at time t, and does create a new cluster in Č(t), then X̌_{i,t−Ť^+_{i−1}} moves (or stays) with probability 1 to the unique extremity of e which is in the cluster of the origin x_i in the new configuration.
By induction, using Theorem 3, we deduce the following:

Theorem 5. Let ϕ be a GFF on G with the law P_ϕ. If one sets φ̂ = ϕ in the preceding construction, then for all i ∈ {1, . . . , |V|}, Ť_i < +∞, X̌_{i,Ť_i} = x_i, and the path (X̌_{i,t})_{t≤Ť_i} has the same law as a concatenation at x_i of all the loops in a loop soup L_{1/2} that visit x_i, but none of x_1, . . . , x_{i−1}. To retrieve the loops out of each path (X̌_{i,t})_{t≤Ť_i}, one has to partition it according to a Poisson-Dirichlet partition as in Proposition 3.6. The coupling between the GFF ϕ and the loop soup obtained from ((X̌_{i,t})_{1≤i≤|V|}, (ň_e(t))_{e∈E}) is the same as in Proposition 2.4.

3.5. Inverting the coupling of random current with FK-Ising. By combining Theorem 5 and the discrete-time description of Section 3.2, and by conditioning on the occupation field of the loop soup, one deduces an inversion of the coupling of Proposition 2.5 between the random current and FK-Ising.
We consider the graph G = (V, E), whose edges are endowed with weights (J_e)_{e∈E}. We will consider a family of discrete-time self-interacting processes ((X̌_{i,j})_{1≤i≤|V|}, (ň_e(j))_{e∈E}). X̌_{i,j} starts at j = 0 at x_i and is defined up to an integer time Ť_i. Let Ť^+_i = Σ_{1≤k≤i} Ť_k, with Ť^+_0 = 0. The end-times Ť_i are defined by induction, consistently with the notation Č(0). The evolution is the following. For j ∈ {Ť^+_{i−1} + 1, . . . , Ť^+_i}, the transition from time j − 1 to time j is as follows:
• first choose an edge e adjacent to the vertex X̌_{i,j−1−Ť^+_{i−1}} with probability proportional to ň_e(j − 1);
• decrease the stack ň_e(j − 1) by 1;
• if decreasing ň_e(j − 1) by 1 does not create a new cluster in Č(j − 1), then X̌_{i,·} crosses e with probability 1/2 and does not move with probability 1/2;
• if decreasing ň_e(j − 1) by 1 does create a new cluster in Č(j − 1), then X̌_{i,·} moves (or stays) with probability 1 to the unique extremity of e which is in the cluster of the origin x_i in the new configuration.
Denote by n̂_e the number of times the edge e has been crossed, in both directions, by all the walks ((X̌_{i,j})_{0≤j≤Ť_i})_{1≤i≤|V|}.

If the initial configuration of open edges Č(0) is random and follows an FK-Ising distribution with weights (1 − e^{−2J_e})_{e∈E}, then the family of integers (n̂_e)_{e∈E} is distributed like a random current with weights (J_e)_{e∈E}. Moreover, the coupling between the random current and the FK-Ising obtained this way is the same as the one given by Proposition 2.5.

4. Proof of Theorem 3
4.1. Case of finite graph without killing measure. Here we will assume that V is finite and that the killing measure κ ≡ 0.
In order to prove Theorem 3, we first enlarge the state space of the process (X_t)_{t≥0}. We define a process (X_t, (n_e(t)))_{t≥0} living on the space V × N^E as follows. Let ϕ^{(0)} ∼ P^{{x_0},0}_ϕ be a GFF pinned at x_0. Let σ_x = sign(ϕ^{(0)}_x) be the signs of the GFF, with the convention that σ_{x_0} = +1. The process (X_t)_{t≥0} is as usual the Markov jump process starting at x_0 with jump rates (W_e). We set Φ = |ϕ^{(0)}| and Φ_x(t) = (Φ_x² + 2ℓ_x(t))^{1/2}. The initial values (n_e(0)) are chosen independently on each edge with a distribution expressed in terms of P(2J_e(Φ)), a Poisson random variable with parameter 2J_e(Φ). Let ((N_e(u))_{u≥0})_{e∈E} be independent Poisson point processes on R_+ with intensity 1. We define the process (n_e(t)) by n_e(t) = n_e(0) + N_e(J_e(Φ(t))) − N_e(J_e(Φ)) + K_e(t), where K_e(t) is the number of crossings of the edge e by the Markov jump process X before time t.
Remark 4.1. Note that compared to the process defined in Section 3.1, the speed of the Poisson process is related to J e (Φ(t)) and not 2J e (Φ(t)).
We will prove the following theorem which, together with Lemma 4.2, contains the statements of both Theorems 2 and 3.

Theorem 6. Conditionally on (Φ_x(τ^{x_0}_u))_{x∈V} = φ̂, the process (X_t, (n_e(t))_{e∈E})_{t≤τ^{x_0}_u} has the law of the process (X̌_{Ť−t}, (ň_e(Ť − t))_{e∈E})_{t≤Ť} described in Section 3.1.

Proof.
Step 1 : We start by a simple lemma.
Lemma 4.3. The distribution of (Φ := |ϕ (0) |, n e (0)) is given by the following formula for any bounded measurable test function h where the integral is on the set and #C(n) is the number of clusters induced by the edges such that n e > 0.
Proof. Indeed, by construction, summing on the possible signs of ϕ^{(0)}, we have the stated identity, where the first sum is over the set {σ ∈ {+1, −1}^V : σ_{x_0} = +1} and the second sum is over the set {(n_e) ∈ N^E : n_e = 0 if σ_{e−}σ_{e+} = −1} (we write n ≪ σ to mean that n_e vanishes on the edges such that σ_{e−}σ_{e+} = −1). We deduce that the integrand in (4.2) equals the announced expression, where we used in the first equality that n_e = 0 on the edges such that σ_{e+}σ_{e−} = −1. Interchanging the sums over σ and n, and summing over the number of possible signs which are constant on the clusters induced by the configuration of edges {e ∈ E : n_e > 0}, we deduce Lemma 4.3.
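The combinatorial factor 2^{#C(n)} in Lemma 4.3 counts the sign choices that are constant on each cluster of {e ∈ E : n_e > 0} (how the pinning at x_0 is accounted for depends on the convention in the lost display). A small union-find sketch of #C(n) (names are ours), with isolated vertices counting as their own clusters:

```python
def num_clusters(vertices, n):
    """#C(n): number of clusters of the graph whose edges are {e : n_e > 0}."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for (x, y), n_e in n.items():
        if n_e > 0:
            parent[find(x)] = find(y)       # union the two components
    return len({find(v) for v in vertices})

# edges with positive n: (0,1) and (2,3), giving the clusters {0,1} and {2,3}
print(num_clusters([0, 1, 2, 3], {(0, 1): 2, (1, 2): 0, (2, 3): 1}))
```

With k clusters there are 2^k sign patterns that are constant on each of them, which is the factor appearing in the change of summation above.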
Step 2 : We denote by Z t = (X t , Φ(t), n e (t)) the process defined previously and by E x 0 ,Φ,n 0 its law with initial condition (x 0 , Φ, n 0 ).
We now introduce a processZ t , which is a "time reversal" of the process Z t . This process will be related to the process defined in section 3.1 in Step 4, Lemma 4.5.
For (ñ_e) ∈ N^E and (Φ_x)_{x∈V} as above, we define the process Z̃_t = (X̃_t, Φ̃(t), (ñ_e(t))) with values in V × R_+^V × Z^E as follows. The process (X̃_t) is a Markov jump process with jump rates (W_e) (so that X̃ has the same law as X), and Φ̃(t), ñ_e(t) are defined by
$$\tilde\Phi_x(t) = \big(\Phi_x^2 - 2\tilde{l}_x(t)\big)^{1/2},$$
where (l̃_x(t)) is the local time of the process X̃ up to time t, and
$$\tilde{n}_e(t) = \tilde{n}_e - \big(N_e(J_e(\Phi)) - N_e(J_e(\tilde\Phi(t)))\big) - \tilde{K}_e(t), \qquad (4.4)$$
where ((N_e(u))_{u≥0})_{e∈E} are independent Poisson point processes on R_+ with intensity 1 for each edge e, and K̃_e(t) is the number of crossings of the edge e by the process X̃ before time t, that is to say between time 0 and t. With these notations we clearly have
$$\tilde{n}_e(t) = \tilde{n}_e(0) + \big(N_e(J_e(\tilde\Phi(t))) - N_e(J_e(\tilde\Phi(0)))\big) - \tilde{K}_e(t),$$
where l̃_x(t) = ∫_0^t 1_{{X̃_u = x}} du is the local time of X̃ at time t.
By time reversal, the law of (X_t)_{0≤t≤τ_u} is the same as the law of the Markov jump process (X̃_t)_{0≤t≤τ̃_u}, where τ̃_u = inf{t ≥ 0 : l̃_{x_0}(t) = u}. Hence, we see that up to the time T̃ = inf{t ≥ 0 : ∃x, Φ̃_x(t) = 0}, the process (X̃_t, (Φ̃_x(t))_{x∈V}, (ñ_e(t)))_{t≤T̃} has the same law as the process defined at the beginning of Step 2.
Then, following [9], we make the following change of variables, conditionally on the processes (X̃_t, (N_e(t))), which is bijective onto the set described below. The last conditions on Φ̃ and ñ_e are equivalent to the conditions X̃_T̃ = x_0 and ñ_e(T̃) ≥ 0. The Jacobian of the change of variables is explicit.

Step 3: With the notations of Theorem 6, we consider the following expectation, for g and h bounded measurable test functions:
$$E\Big[g\big((X_{\tau_u - t}, n_e(\tau_u - t))_{0\le t\le \tau_u}\big)\, h(\varphi^{(u)})\Big]. \qquad (4.6)$$
By definition, we have ϕ^{(u)} = σΦ(τ_u), where (σ_x)_{x∈V} ∈ {±1}^V are random signs sampled uniformly independently on the clusters induced by {e ∈ E : n_e(τ_u) > 0} and conditioned on the fact that σ_{x_0} = +1. Here σ ≪ n means that the signs (σ_x) are constant on the clusters of {e ∈ E : n_e > 0} and such that σ_{x_0} = +1. Hence, setting G((Z_{τ_u−t})_{t≤τ_u}) = g((X_{τ_u−t}, n_e(τ_u − t))_{t≤τ_u}), using Lemma 4.3 in the first equality and Lemma 4.4 in the second equality, we deduce an expression for (4.6), with the notations of Lemma 4.4. Let F̃_t = σ{X̃_s, s ≤ t} be the filtration generated by X̃. We define the F̃-adapted process M_t, defined up to time T̃, where C(x_0, ñ(t)) denotes the cluster of the origin x_0 induced by the configuration C(ñ(t)). Note that at time t = T̃ we also have the corresponding identity. Hence, using identities (4.8) and (4.10), we deduce a further expression for (4.6).
Step 4: We denote by Ž_t = (X̌_t, Φ̌_t, ň(t)) the process defined in Section 3.1, which is well defined up to the stopping time Ť, and set Ž^T_t = Ž_{t∧Ť}. We denote by Ě_{x_0,Φ,ň} the law of the process Ž conditionally on the initial value ň(0), i.e. conditionally on (N_e(2J(Φ))) = (ň_e). The last step of the proof goes through the following lemma.

ii) Let P̃^{≤t}_{x_0,Φ,ñ} and P̌^{≤t}_{x_0,Φ,ň} be the laws of the processes (Z̃^T_s)_{s≤t} and (Ž^T_s)_{s≤t}; then the Radon-Nikodym derivative dP̌^{≤t}/dP̃^{≤t} has the stated explicit form.

Using this lemma we compute the right-hand side of (4.11). Hence, we deduce, using formula (4.7) and proceeding as in Lemma 4.3, that (4.6) equals an integral over the set on which (ñ_e) ≪ (φ̂_x), meaning that (ñ_e) ∈ N^E and ñ_e = 0 if φ̂_{e−}φ̂_{e+} ≤ 0. Finally, we conclude that the announced identity holds, where in the right-hand side φ̂ ∼ P^{{x_0},√(2u)}_ϕ is a GFF and (X̌_t, ň(t)) is the process defined in Section 3.1 from the GFF φ̂. This exactly means that ϕ^{(u)} ∼ P^{{x_0},√(2u)}_ϕ and concludes the proof of Theorem 6.
Proof of Lemma 4.5. The generator of the process Z̃_t defined in (4.5) is given, for any test function f which is bounded and C¹ in the second component, by (4.12), where n − δ_{{x,y}} is the value obtained by removing 1 from n at the edge {x, y}. Indeed, the form of Φ̃ explains the first term in the expression. The second term is obvious from the definition of Z̃_t, and corresponds to the term induced by the jumps of the Markov process X̃_t. The last term corresponds to the decrease of ñ due to the increase of the process Ñ_e(Φ) − Ñ_e(Φ̃(t)): indeed, on the interval [t, t + dt], the probability that Ñ_e(Φ̃(t)) − Ñ_e(Φ̃(t + dt)) is equal to 1 is of the order given by identity (4.13).
Let Ľ be the generator of the Markov jump process Ž_t = (X̌_t, (Φ̌_x(t)), (ň_e(t))). For any smooth test function f, the generator is equal to the stated expression, where the A_i(x, y) correspond to the following disjoint events. Indeed, conditionally on the value of ň_e(t) = N_e(2J_e(Φ̌(t))) at time t, the point process N_e on the interval [0, 2J_e(Φ̌(t))] has the law of ň_e(t) independent points with uniform distribution on [0, 2J_e(Φ̌(t))]. Hence, the probability that a point lies in the interval [2J_e(Φ̌(t + dt)), 2J_e(Φ̌(t))] is of the corresponding order in dt. We define the function Θ. Let us first consider the first term in (4.12): a direct computation gives the corresponding identity. For the second part, remark that the indicators 1_{{x∈C(x_0,n)}} and 1_{{n_e≥0 ∀e∈E}} imply that Θ(y, Φ, n − δ_{x,y}) vanishes if n_{x,y} = 0 or if y ∉ C(x_0, n − δ_{x,y}). By inspection of the expression of Θ, we obtain, for x ∼ y, a first identity; similarly, for x ∼ y, a second one. Combining these three identities with the expression (4.12), we deduce that it exactly coincides with the expression for Ľ, since 1 = 1_{A_1} + 1_{A_2} + 1_{A_3}.

4.2. General case.
Proposition 4.6. The conclusion of Theorem 3 still holds if the graph G = (V, E) is finite and the killing measure is non-zero (κ ≢ 0).
Proof. Let h be the function on V defined as h(x) = P x (X hits x 0 before ζ).
Define the conductances W^h_{x,y} := W_{x,y} h(x)h(y), the corresponding jump process X^h, and the associated GFF ϕ_h. This means in particular that the occupation times correspond through the time change (4.15). Indeed, at the level of energy functions we have the stated identity, where Cste(f(x_0)) means that this term does not depend on f once the value of the function at x_0 is fixed. Let X̌^h_t be the inverse process for the conductances (W^h_e)_{e∈E} and the initial condition for the field ϕ^{(u)}_h, given by Theorem 3. By applying the time change (4.15) to the process X̌^h_t, we obtain an inverse process for the conductances W_e and the field ϕ^{(u)}.

Proof. Consider an increasing sequence of connected subgraphs G_i = (V_i, E_i) of G which converges to the whole graph. We assume that V_0 contains x_0. Let G*_i = (V*_i, E*_i) be the graph obtained by adding to G_i an abstract vertex x*, and, for every edge {x, y} with x ∈ V_i and y ∈ V \ V_i, adding an edge {x, x*} with the equality of conductances W_{x,x*} = W_{x,y}. (X_{i,t})_{t≥0} will denote the Markov jump process on G*_i, started from x_0. Let ζ_i be the first hitting time of x* or the first killing time by the measure κ1_{V_i}. Let ϕ^{(u)}_i be the corresponding field on G*_i. We consider the process (X̌_{i,t}, (ň_{i,e}(t))_{e∈E*_i})_{0≤t≤Ť_i}, the inverse process on G*_i with initial field ϕ^{(u)}_i. Then (X_{i,t})_{t≤τ^{x_0}_{i,u}}, conditionally on τ^{x_0}_{i,u} < ζ_i, has the same law as (X̌_{i,Ť_i−t})_{t≤Ť_i}. Taking the limit in law as i tends to infinity, we conclude that (X_t)_{t≤τ^{x_0}_u}, conditionally on τ^{x_0}_u < +∞, has the same law as (X̌_{Ť−t})_{t≤Ť} on the infinite graph G. The same holds for the clusters. In particular, where in the first two probabilities we also average over the values of the free fields,
$$P\big(\check{T} = +\infty \text{ or } \check{X}_{\check{T}} \neq x_0\big) = 1 - \lim_{t,j\to+\infty} P\big(\tau^{x_0}_u \le t,\ X_{[0,\tau^{x_0}_u]} \text{ stays in } V_j \,\big|\, \tau^{x_0}_u < \zeta\big) = 0.$$
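The h-transform used above is concrete: h solves a discrete Dirichlet problem, h(x_0) = 1 and (Σ_y W_{xy} + κ_x) h(x) = Σ_y W_{xy} h(y) for x ≠ x_0, and the transformed conductances are W^h_{xy} = W_{xy} h(x) h(y). A sketch on a hypothetical three-vertex chain (example ours, not from the paper):

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])      # conductances of the chain 0 - 1 - 2
kappa = np.array([0.0, 0.5, 1.0])    # killing measure
x0 = 0
n = len(W)

# h(x) = P_x(X hits x0 before zeta): harmonic off x0 for the killed walk
A = np.diag(W.sum(axis=1) + kappa) - W
A[x0] = 0.0
A[x0, x0] = 1.0                      # boundary condition h(x0) = 1
b = np.zeros(n)
b[x0] = 1.0
h = np.linalg.solve(A, b)

W_h = W * np.outer(h, h)             # transformed conductances W^h
print(h)
```

On this example h = (1, 1/2, 1/4), so the transformed conductances shrink with the distance from x_0, reflecting the conditioning of the walk to hit x_0 before being killed.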