Recurrence and transience of symmetric random walks with long-range jumps

Let $X_1, X_2, \ldots$ be i.i.d. random variables with values in $\mathbb{Z}^d$ satisfying $\mathbb{P} \left(X_1=x\right) = \mathbb{P} \left(X_1=-x\right) = \Theta \left(\|x\|^{-s}\right)$ for some $s>d$. We show that the random walk defined by $S_n = \sum_{k=1}^{n} X_k$ is recurrent for $d\in \{1,2\}$ and $s \geq 2d$, and transient otherwise. This also shows that for an electric network in dimension $d\in \{1,2\}$ the condition $c_{\{x,y\}} \leq C \|x-y\|^{-2d}$ implies recurrence, whereas $c_{\{x,y\}} \geq c \|x-y\|^{-s}$ for some $c>0$ and $s<2d$ implies transience. This fact was already known, but we give a new proof of it that uses only electric networks. We also use these results to show the recurrence of random walks on certain long-range percolation clusters. In particular, we show recurrence for several cases of the two-dimensional weight-dependent random connection model, which was previously studied by Gracar et al. [Electron. J. Probab. 27, 1-31 (2022)].


Introduction and main results
Consider independent $\mathbb{Z}^d$-valued random variables $X_1, X_2, \ldots$ that are symmetric, i.e., they satisfy $\mathbb{P}(X_1 = x) = \mathbb{P}(X_1 = -x)$ for all $x \in \mathbb{Z}^d$. We want to know for which regimes of decay of $\mathbb{P}(X_i = x)$ the associated random walk defined by $S_n = \sum_{k=1}^n X_k$ is recurrent or transient. For this, we first construct an electric network that is equivalent to this random walk. We do this by assigning conductances to all edges $\{a,b\}$ with $a, b \in \mathbb{Z}^d$, allowing self-loops here. For two points $a, b \in \mathbb{Z}^d$ we give the conductance $c_{\{a,b\}} = \mathbb{P}(X_i = a - b)$ to the edge between them. The symmetry condition $\mathbb{P}(X_i = x) = \mathbb{P}(X_i = -x)$ guarantees that the conductances defined like this are well-defined. Then consider the reversible Markov chain on this network, i.e., the Markov chain defined by $\mathbb{P}(M_{n+1} = y \mid M_n = x) = \frac{c_{\{x,y\}}}{\sum_{z \in \mathbb{Z}^d} c_{\{x,z\}}} = c_{\{x,y\}}$, where the last equality holds because the conductances at each vertex sum to $1$. The resulting Markov chain has exactly the same distribution as $S_n$, and thus we will analyze this Markov chain from here on. We can assume without loss of generality that $\mathbb{P}(X_1 = 0) = 0$, as the steps $X_i$ with $X_i = 0$ have no influence on whether a random walk is recurrent or transient. It is a classical result of Pólya that the simple random walk on the integer lattice $\mathbb{Z}^d$ is recurrent for $d \in \{1,2\}$ and transient for $d \geq 3$ [32]. Furthermore, it is a well-known result about electric networks that transience of the random walk is equivalent to the existence of a unit flow with finite energy from $o$ to infinity, where $o$ is an arbitrary vertex in the graph, or the origin for the integer lattice; see for example [27, Theorem 2.10]. With this characterization of transience, one directly gets that the random walk $S_n$ defined as above is always transient for $d \geq 3$, and recurrent when the $X_i$ are bounded symmetric random variables and $d \in \{1,2\}$.
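The correspondence between the step law and the conductances can be illustrated with a short sketch (a minimal illustration, not part of the argument: the truncation radius $R$ and the use of the $\infty$-norm are assumptions made only to keep the example finite):

```python
import itertools

def step_distribution(s, d=1, R=60):
    """Symmetric step law P(X = x) proportional to ||x||^{-s} on Z^d,
    truncated to ||x||_inf <= R (R and the norm are illustrative choices)."""
    pts = [x for x in itertools.product(range(-R, R + 1), repeat=d) if any(x)]
    w = {x: max(abs(c) for c in x) ** (-s) for x in pts}
    Z = sum(w.values())
    return {x: wx / Z for x, wx in w.items()}

# Conductances of the equivalent electric network: c_{a,b} = P(X_1 = a - b).
# Since the step law sums to 1, the total conductance at every vertex is 1,
# so the transition probability of the network's Markov chain is c_{a,b} itself.
p = step_distribution(s=2.0)
assert abs(sum(p.values()) - 1.0) < 1e-12                  # probability measure
assert all(p[x] == p[tuple(-c for c in x)] for x in p)     # P(X = x) = P(X = -x)
```

The two assertions check exactly the two properties used above: the conductances at a vertex sum to $1$, and the symmetry condition makes the edge conductances well-defined.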
In this paper, we answer the question whether the random walk is recurrent or transient when $\mathbb{P}(X = x)$ has a power-law decay, i.e., when $\mathbb{P}(X = x) = \mathbb{P}(X = -x) = \Theta(\|x\|^{-s})$, where $s > d$ is a parameter. Note that this question makes no sense for $s \leq d$, as the probabilities $\mathbb{P}(X_i = x)$ need to sum up to $1$. This problem has been studied before in several other places, for example in [7] using the recurrence criterion of [35, Section 8]. However, previous proofs used the characteristic function of the random walk, whereas our proof does not use characteristic functions, but uses the theory of electric networks. Pólya's recurrence/transience result is often humorously paraphrased as "A drunk man will find his way home, but a drunk bird may get lost forever.", a phrasing that goes back to Shizuo Kakutani. So in this note, we study the question which kinds of drunk grasshoppers, which tend to make huge jumps, eventually find their way home and which kinds may get lost forever. The answer is that the random walk is recurrent for $d \in \{1,2\}$ and $s \geq 2d$, and transient otherwise.
Theorem 1.1. Let $X_1, X_2, \ldots$ be i.i.d. symmetric $\mathbb{Z}^d$-valued random variables satisfying $\mathbb{P}(X_1 = x) = \mathbb{P}(X_1 = -x) \geq c\|x\|^{-s}$ for some $c > 0$, $s < 2d$, and all $x$ large enough. Then the random walk $S_n$ defined by $S_n = \sum_{k=1}^n X_k$ is transient.

This result is not surprising, as for $s < 2d$ the total conductance between the two boxes $A = \{0, \ldots, n\}^d$ and $B = 2n \cdot e_1 + \{0, \ldots, n\}^d$ satisfies $\sum_{x \in A} \sum_{y \in B} c_{\{x,y\}} \approx n^{2d-s} \gg 1$, and this suggests that it is possible to construct a finite-energy flow from the root to infinity. Here $e_1$ denotes the standard unit vector pointing in the direction of the first coordinate axis. This also suggests that the transition from transience to recurrence in dimension $d \in \{1,2\}$ happens at $s = 2d$. Note that for dimension $d \geq 3$ there is no such transition in $s$, as the symmetric random walk is transient for all values of $s > d$. Many other properties of the long-range percolation graph also change at the value $s = 2d$; see [2,3] for more examples of such phenomena. What happens at the critical value $s = 2d$ is treated in the following theorem.
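The heuristic $\sum_{x \in A} \sum_{y \in B} c_{\{x,y\}} \approx n^{2d-s}$ can be checked numerically. The sketch below does this in $d = 1$ with the constants in the $\Theta$-bound suppressed (an illustrative simplification): doubling $n$ should multiply the total box-to-box conductance by roughly $2^{2d-s}$.

```python
def box_conductance(n, s):
    # d = 1: total conductance between A = {0,...,n-1} and B = {2n,...,3n-1},
    # with c_{x,y} = |x - y|^{-s} (constants of the Theta-bound suppressed)
    return sum((y - x) ** (-s) for x in range(n) for y in range(2 * n, 3 * n))

# For s < 2d = 2 the sum grows like n^{2-s}, so doubling n scales it by ~2^{2-s}.
s = 1.5
ratio = box_conductance(400, s) / box_conductance(200, s)
assert abs(ratio - 2 ** (2 - s)) < 0.05
```

For $s < 2$ the total conductance diverges as $n \to \infty$, in line with the transience statement; for $s > 2$ the same ratio would drop below $1$.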
Theorem 1.2. Let $d \in \{1,2\}$, and let $X_1, X_2, \ldots$ be i.i.d. symmetric $\mathbb{Z}^d$-valued random variables satisfying $\mathbb{P}(X_1 = x) = \mathbb{P}(X_1 = -x) \leq C\|x\|^{-2d}$ for some constant $C < \infty$ and all $x \neq 0$. Then the random walk $S_n$ defined by $S_n = \sum_{k=1}^n X_k$ is recurrent.
So in particular Theorem 1.2 shows that for dimension $d \in \{1,2\}$ and for $\mathbb{P}(X_1 = x) = c\|x\|^{-2d}$ the associated random walk is recurrent, even though the steps have no mean in dimension $1$, respectively no finite variance in dimension $2$. Both cases lie on the exact borderline that separates the transient regime from the recurrent regime. The transience or recurrence of a Markov chain, or of a sum of i.i.d. random variables, is an elementary question that has been studied extensively in many different regimes [8,33,34], including results in random environments [36] and on percolation clusters [1,5,23,31]. We also use parts of the techniques developed by Berger in [5], in particular Lemma 2.2.
The random walk $(X_n)_{n \in \mathbb{N}}$ can also be seen as an annealed random walk on a sequence of long-range percolation graphs, where the underlying graph of the percolation gets resampled at every time step. If one does not do this resampling, then one has a simple random walk on a percolation cluster. It is a natural question to ask how the random walk on a graph with long jumps compares to the simple random walk on the associated graph obtained by percolation. Formally, let $G = (V, E)$ be a connected graph with weighted edges $(c_e)_{e \in E} \in \mathbb{R}_{\geq 0}^E$. Assume that for each vertex $v \in V$ one has $0 < \sum_{e : v \in e} c_e < \infty$, and let $(X_n)_{n \in \mathbb{N}}$ be the random walk defined by the transition probabilities $\mathbb{P}(X_{n+1} = y \mid X_n = x) = c_{\{x,y\}} / \sum_{e : x \in e} c_e$ (1) for all edges $\{x,y\} \in E$. If the random walk $(X_n)_{n \in \mathbb{N}}$ is recurrent almost surely for all possible starting points, we also say that the graph $G = (V, E)$ is recurrent. Let $\tilde{G} = (V, E, \omega)$ be a random graph with vertex set $V$, where each edge $e \in E$ has a random non-negative weight $\omega(e)$ that satisfies $\mathbb{E}[\omega(e)] \leq c_e$. Note that we do not require these random weights to be independent for different edges. In the case where $\omega(e) \in \{0,1\}$ almost surely for all edges $e \in E$, one can also think of bond percolation on the graph $(V, E)$. Let $(Y_n)_{n \in \mathbb{N}}$ be the random walk on this weighted graph, i.e., the random walk with transition probabilities $\mathbb{P}(Y_{n+1} = y \mid Y_n = x) = \omega(\{x,y\}) / \sum_{e : x \in e} \omega(e)$ (2) for all vertices $y \in V$ and all vertices $x \in V$ for which $\omega(\{x,y\}) > 0$. In the case where $\sum_{e : y \in e} \omega(e) = 0$, i.e., when all edges with $y$ as one of their endpoints have a weight of $0$, we simply define $Y_n$ as the random walk that stays constant at $y$. For two vertices $x, y \in V$ we say that they are connected if there exists a path of edges between them such that $\omega(e) > 0$ for all edges $e$ in this path. The graph $\tilde{G}$ will not be connected for many examples of percolation, but we say that it is recurrent if all its connected components are recurrent graphs.
We prove that if the random walk with the long-range steps $(X_n)_{n \in \mathbb{N}}$ is recurrent, then almost every realization of the corresponding random weighted graph is also recurrent.

Theorem 1.3. Let $G = (V, E)$ be a graph with weighted edges $(c_e)_{e \in E} \in \mathbb{R}_{\geq 0}^E$ as above. Assume that the random walk $(X_n)_{n \in \mathbb{N}}$ defined by (1) is recurrent. Let $\tilde{G} = (V, E, \omega)$ be a graph where the edges $e \in E$ carry a random weight $\omega(e)$ with $\mathbb{E}[\omega(e)] \leq c_e$ for all $e \in E$. Then the random walk on these weights defined by (2) is recurrent almost surely.
The proof of this theorem will be a direct consequence of Lemma 3.2. In section 3 below we will use Theorem 1.2 and Theorem 1.3 in order to extend the results of Berger [5] on recurrence of random walks on percolation clusters to percolation clusters on the one- or two-dimensional integer lattice with dependencies, i.e., when the occupation statuses of different edges are not independent. We will also apply this extension to the weight-dependent random connection model and obtain several new results regarding the recurrence of random walks on such models. Readers interested mostly in the new results regarding recurrence of the random connection model may skip section 2 and go directly to section 3; it is completely self-contained, up to the use of Theorem 1.2.
Random walks on long-range models are a well-studied object, including results on mixing times [4] and scaling limits [6,9,10]. However, many results so far focused on independent long-range percolation or needed assumptions on ergodicity. One model of dependent percolation for which recurrence and transience have been studied recently is the weight-dependent random connection model [16]. We consider the weight-dependent random connection model in dimension $d = 2$. The vertex set of this graph is a Poisson process of unit intensity on $\mathbb{R}^2 \times (0,1)$. For a vertex $(x,s)$ in the Poisson process, the value $x \in \mathbb{R}^2$ is called the spatial parameter and the value $s \in (0,1)$ is called the weight parameter. Two vertices $(x,s)$ and $(y,t)$ are connected with probability $\varphi((x,s),(y,t))$, where $\varphi : (\mathbb{R}^2 \times (0,1))^2 \to [0,1]$ is a function. We will always assume that $\varphi$ is of the form $\varphi((x,s),(y,t)) = \rho(g(s,t)\|x-y\|^d)$, where $\rho$ is a function (also called profile function) from $\mathbb{R}_{\geq 0}$ to $[0,1]$ that is non-increasing and satisfies $\lim_{r \to \infty} r^{\delta}\rho(r) = 1$ (3) for some $\delta > 1$. The function $g : (0,1) \times (0,1) \to \mathbb{R}_{\geq 0}$ is a kernel that is symmetric and non-decreasing in both arguments. We define different kernels depending on two parameters $\gamma \in [0,1)$ and $\beta > 0$. The parameter $\gamma$ determines the strength of the influence of the weight parameter. The parameter $\beta$ corresponds to the density of edges. Different examples of kernels are the sum kernel $g(s,t) = g_{\mathrm{sum}}(s,t) = \frac{1}{\beta}\left(s^{-\gamma/d} + t^{-\gamma/d}\right)^{-d}$, the min kernel $g(s,t) = g_{\mathrm{min}}(s,t) = \frac{1}{\beta}\min(s,t)^{\gamma}$, the product kernel $g(s,t) = g_{\mathrm{prod}}(s,t) = \frac{1}{\beta} s^{\gamma} t^{\gamma}$, and the preferential attachment kernel $g(s,t) = g_{\mathrm{pa}}(s,t) = \frac{1}{\beta}\min(s,t)^{\gamma}\max(s,t)^{1-\gamma}$.
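The kernels and the connection probability are straightforward to code up. In the sketch below, the forms of $g_{\mathrm{prod}}$ and $g_{\mathrm{pa}}$ are the ones stated in the text, while the precise forms of $g_{\mathrm{sum}}$ and $g_{\mathrm{min}}$ are assumptions taken from the random connection model literature, chosen to be consistent with the comparison $g_{\mathrm{sum}} \leq g_{\mathrm{min}} \leq 2^d g_{\mathrm{sum}}$ quoted below; the profile $\rho(r) = \min(1, r^{-\delta})$ is one admissible example, not the only choice.

```python
beta, gamma, d = 2.0, 0.3, 2  # illustrative parameter values

def g_sum(s, t):  return ((s ** (-gamma / d) + t ** (-gamma / d)) ** (-d)) / beta
def g_min(s, t):  return (min(s, t) ** gamma) / beta
def g_prod(s, t): return (s ** gamma) * (t ** gamma) / beta
def g_pa(s, t):   return (min(s, t) ** gamma) * (max(s, t) ** (1 - gamma)) / beta

def rho(r, delta=2.5):
    # an admissible profile: non-increasing, with r^delta * rho(r) -> 1
    return min(1.0, r ** (-delta)) if r > 0 else 1.0

def connect_prob(x, y, s, t, g):
    # phi((x,s),(y,t)) = rho(g(s,t) * |x - y|^d); the norm choice only
    # affects constants, here the sup-norm is used for simplicity
    dist = max(abs(x[0] - y[0]), abs(x[1] - y[1]))
    return rho(g(s, t) * dist ** d) if dist > 0 else 1.0

# sanity check of the sandwich g_sum <= g_min <= 2^d * g_sum
for s, t in [(0.1, 0.9), (0.5, 0.5), (0.01, 0.02)]:
    assert g_sum(s, t) <= g_min(s, t) <= 2 ** d * g_sum(s, t) + 1e-12
assert 0.0 <= connect_prob((0, 0), (5, 3), 0.5, 0.5, g_pa) <= 1.0
```

Smaller weights give smaller kernel values and hence larger connection probabilities, so vertices with weight close to $0$ play the role of hubs.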
We call the resulting graph $G_\beta$. As $g_{\mathrm{sum}} \leq g_{\mathrm{min}} \leq 2^d g_{\mathrm{sum}}$, the min kernel and the sum kernel typically show the same qualitative behavior. Depending on the value of $\beta$, there might be an infinite connected cluster [17,18]. Using the almost sure local finiteness of the graph and Kolmogorov's 0-1 law, one sees that the existence of an infinite open cluster is a tail event. Thus we can define the critical value $\beta_c$ as the infimum over all values $\beta \geq 0$ for which an infinite open cluster exists in the graph, i.e., $\beta_c := \inf\{\beta \geq 0 : G_\beta \text{ contains an infinite open cluster almost surely}\}$. The weight-dependent random connection model and other models with scale-free degree distribution have been studied intensively in recent years, including new results on the convergence of such graphs [14,19,24], the chemical distances [11,15,22,25], random walks and the contact process evolving on random graphs [13,16,21], and the percolation phase transitions [11,17,18,20]. In section 3.1 below we study for which combinations of $\gamma$ and $\delta$ all connected components of the resulting graph are almost surely recurrent. Our main (and only) tool for this is a consequence of Theorem 1.3, which allows us to make statements about random walks on dependent percolation clusters. Whenever there is no infinite cluster, the random walk is clearly recurrent on all finite clusters. The question of recurrence and transience has been studied before by Gracar, Heydenreich, Mönch, and Mörters in [16]. We will generally adapt to their notation. After this paper was first submitted, Mönch made further progress on the transient regimes [29, Theorem 2.7]: among other things, Mönch proved that for $\delta_{\mathrm{eff}} < 2$ the random walk on the infinite open subgraph is transient, provided such an infinite open subgraph exists. The parameter $\delta_{\mathrm{eff}}$ was first introduced by Gracar, Lüchtrath, and Mönch in [17] and is conjectured to determine many qualitative properties of the long-range percolation graph.
They also determined for which kernels $g$ and for which values of $\delta$ and $\gamma$ the condition $\delta_{\mathrm{eff}} < 2$ is satisfied [17, Lemma 1.3]. Whenever $\delta < 2$, then also $\delta_{\mathrm{eff}} < 2$. For the min kernel, the sum kernel, and the preferential attachment kernel one has $\delta_{\mathrm{eff}} < 2$ if the conditions $\delta \geq 2$ and $\gamma > \frac{\delta-1}{\delta}$ are satisfied. For the product kernel one has $\delta_{\mathrm{eff}} < 2$ if $\delta \geq 2$ and $\gamma > \frac{1}{2}$. Combining the results of [16] and [29], the following results are known so far.

Theorem 1.4 (Gracar, Heydenreich, Mönch, Mörters [16] and Mönch [29]). Consider the weight-dependent random connection model with profile function $\rho$ satisfying (3) in dimension $d = 2$, and assume $\beta > \beta_c$.
(a) For the preferential attachment kernel, the infinite component is almost surely
(b) For the min kernel and the sum kernel, the infinite component is almost surely
(c) For the product kernel, the infinite component is almost surely

An overview of their results and our newly obtained results can be found in Figure 1. Our results for the weight-dependent random connection model are as follows.

(b) For the min kernel and the sum kernel, every component is almost surely recurrent if $\delta = 2$, $\gamma < \frac{1}{2}$ or $\delta > 2$, $\gamma = \frac{1}{2}$.

Figure 1: Overview of the results of [16,29]. The red lines/area mark the phase where Theorem 1.5 shows the recurrence of the random walk, and where recurrence had not been shown by Gracar, Heydenreich, Mönch, and Mörters in [16]. The return properties of the random walk in the striped area are still unknown.
Acknowledgements. I thank Yuki Tokushige for making me aware of this problem and for many helpful comments on an earlier version of this paper. I thank Markus Heydenreich for making me aware of the applications of Theorem 1.3 to the random connection model. I thank Noam Berger and Christian Mönch for useful discussions. I thank an anonymous referee for very many helpful remarks and comments. This work is supported by TopMath, the graduate program of the Elite Network of Bavaria and the graduate center of TUM Graduate School.

Random walks with large steps
As already discussed in the introduction, we will always study the random walk on an electric network, and this random walk has the same distribution as the sum of random variables $\sum_{k=1}^n X_k$. The electric network $(c_{\{x,y\}})_{x,y \in \mathbb{Z}^d, x \neq y}$ is given through the conductances $c_{\{x,y\}} = \mathbb{P}(X_1 = x - y)$. The Markov chain on these conductances then has the same distribution as $S_n = \sum_{k=1}^n X_k$. For such a Markov chain, there are well-known criteria for transience/recurrence. A random walk on this network is transient if and only if there exists a unit flow with finite energy from the origin $0$ to infinity; see for example [27, Theorem 2.10] or [12,26,28]. We use this connection between transience and flows in the proof of Theorem 1.1 and in the proof of Theorem 1.2 for $d = 2$; in the latter the use is more implicit, as it is hidden in the proof of Lemma 2.2. In particular, the proof of Lemma 2.2 uses cutsets [30] and the Nash-Williams criterion in order to show that there cannot exist a flow with finite energy from $0$ to infinity. Note that the network $(c_{\{x,y\}})_{x,y \in \mathbb{Z}^d, x \neq y}$ defined as above is still translation invariant. The same statements about transience/recurrence of this network can be made without translation invariance, as the following lemma shows.
Thus, using Rayleigh's monotonicity principle [27, Chapter 2.4], it suffices to show that the network defined through the conductances $(\bar c_{\{x,y\}})_{x,y \in \mathbb{Z}^d, x \neq y}$ is recurrent. Define $\lambda$ as the corresponding normalizing constant, so that the conductances $\bar c_{\{x,y\}}$ at every vertex sum to $1$ by the definition of $\lambda$. Then the random walk $S_n = \sum_{k=1}^n X_k$ has exactly the same distribution as a random walk started at $0$ on the network defined by the conductances $(\bar c_{\{x,y\}})_{x,y}$. Together with Theorem 1.2 this shows that the random walk on the network defined by $(\bar c_{\{x,y\}})_{x,y}$ is recurrent and, as argued before, this also shows that the random walk on the network defined by $(c_{\{x,y\}})_{x,y}$ is recurrent. The proof of the transience for the case where $c_{\{x,y\}} \geq c\|x-y\|^{-s}$ for some $c > 0$ and $s < 2d$ works analogously, and we omit it.
After seeing the connection between the electric networks and the random walk $S_n = \sum_{k=1}^n X_k$, we are ready to proceed to the proof of Theorem 1.1.

The proof of Theorem 1.1
Proof of Theorem 1.1. We iteratively define disjoint boxes $A_0, A_1, \ldots$ as follows. Let $a_0 = b_0 = 0$ and define $a_k$ and $b_k$ iteratively by
The resulting sets $A_k$ are disjoint for different $k$, and they are boxes of side length $2^k$, thus containing $2^{kd}$ elements. We now construct a flow between the different boxes as follows. For $k$ large enough, say for
where $c'$ is a constant that does not depend on $k$. So we consider the flow that starts uniformly distributed over $A_k$, and each node $x \in A_k$ distributes its incoming flow uniformly to $A_{k+1}$, i.e., it sends a flow of strength $\frac{1}{|A_k||A_{k+1}|}$ along each edge to $A_{k+1}$. The incoming flow in $A_{k+1}$ is again uniformly distributed over the box. As we only get good upper bounds on the energy of the flow for $k \geq K$, we send a different initial flow to $A_K$. For this, we simply consider a unit flow from $0$ to $A_K$ that distributes uniformly over $A_K$, i.e., each vertex in $A_K$ receives a flow of $\frac{1}{|A_K|}$, and all edges used by this unit flow are in a finite box. Concatenating the described flows clearly gives a unit flow $\theta$ from $0$ to infinity, for which we now want to estimate the energy. We are only interested in whether its energy is finite or infinite, and thus it suffices to consider the energy that is generated by the flows between $A_k$ and $A_{k+1}$ for large enough $k$. For one pair of boxes
Using that $s < 2d$ we can now see that
which shows that $\theta$ is a flow of finite energy and thus shows the transience of the random walk.
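The energy estimate behind this proof can be illustrated numerically in $d = 1$. The boxes $A_k = \{2^k, \ldots, 2^{k+1}-1\}$ used below are an illustrative choice (the paper's exact definition of $a_k$ and $b_k$ is not reproduced here): with the uniform flow described above and conductances $c_{\{x,y\}} = |x-y|^{-s}$, the per-level energies decay geometrically for $s < 2d$, so the total energy is finite.

```python
def level_energy(k, s):
    """Energy of the uniform flow between illustrative boxes A_k = [2^k, 2^{k+1})
    and A_{k+1} in d = 1, with conductances c_{x,y} = |x - y|^{-s}."""
    A = range(2 ** k, 2 ** (k + 1))
    B = range(2 ** (k + 1), 2 ** (k + 2))
    theta = 1.0 / (len(A) * len(B))            # flow on each edge x -> y
    # energy = sum over edges of theta^2 / c_e = theta^2 * |x - y|^s
    return sum(theta ** 2 * (y - x) ** s for x in A for y in B)

s = 1.5                                        # s < 2d = 2, transient regime
ratio = level_energy(7, s) / level_energy(6, s)
assert abs(ratio - 2 ** (s - 2)) < 0.05        # geometric decay: total energy finite
```

Since the ratio is approximately $2^{s-2} < 1$ for $s < 2$, the energies of the concatenated flows form a convergent geometric series, which is the quantitative heart of the transience argument.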

The proof of Theorem 1.2 for d = 1
Proof of Theorem 1.2 for $d = 1$. The main strategy of this proof is to compare the discrete random walk to a sum of independent Cauchy random variables. We assumed that $c_{\{x,y\}} \leq C|x-y|^{-2}$ for all $x \neq y$. Let $\tilde c_{\{x,y\}} := \mathbb{P}\left(\operatorname{sgn}(Y)\lceil |Y|\rceil = x - y\right)$ for a standard Cauchy random variable $Y$; then $\tilde c_{\{x,y\}} = \Theta(|x-y|^{-2})$, and so we also have $c_{\{x,y\}} \leq \lambda \tilde c_{\{x,y\}}$ for a constant $\lambda$ large enough and all $x \neq y$. Thus, by Rayleigh's monotonicity principle [27, Chapter 2.4], it suffices to show that the network defined by the conductances $(\lambda \tilde c_{\{x,y\}})_{x,y \in \mathbb{Z}, x \neq y}$ is recurrent. Multiplying every conductance by a constant factor does not change whether the network is recurrent or transient, and thus it suffices to show that the network defined by the conductances $(\tilde c_{\{x,y\}})_{x,y \in \mathbb{Z}, x \neq y}$ is recurrent. For this, let $Y_1, Y_2, \ldots$ be i.i.d. Cauchy random variables and define $X'_k = \operatorname{sgn}(Y_k)\lceil |Y_k|\rceil$. Then $X'_k$ has the distribution of one step of the random walk on the network defined by $(\tilde c_{\{x,y\}})_{x,y \in \mathbb{Z}, x \neq y}$, and by independence $S'_n = \sum_{k=1}^n X'_k$ has exactly the same distribution as the random walk on the network defined by $(\tilde c_{\{x,y\}})_{x,y \in \mathbb{Z}}$. Furthermore, we define $R_k = Y_k - X'_k$. Clearly, $R_1, R_2, \ldots$ are i.i.d. random variables that are bounded by $1$, and thus we have that
By the stability of the Cauchy distribution we furthermore have that
Combining (4) and (5) gives
Thus, there needs to exist a point $x \in \{-3n, \ldots, 3n\}$ with
However, for $n$ even, the $x \in \mathbb{Z}$ that maximizes $\mathbb{P}\left(\sum_{k=1}^n X'_k = x\right)$ is $0$. To see this, let $\rho$ be the probability mass function of $\sum_{k=1}^{n/2} X'_k$. Using the symmetry of $\rho$ (which is inherited from the symmetry of the $X'_i$) and a convolution, we see that
where we used the Cauchy-Schwarz inequality for the inequality. So in particular, for $n$ even, we have that
Summing this over all even $n$, we get that $\sum_{n=1}^{\infty} \mathbb{P}\left(\sum_{k=1}^n X'_k = 0\right) = \infty$, which implies the recurrence of the random walk $S'_n = \sum_{k=1}^n X'_k$. As discussed above, this already implies the recurrence of the random walk $S_n$.

The proof of Theorem 1.2 for $d = 2$ is a direct consequence of Lemma 2.10 and Lemma 2.11 below.
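The discretization step can be sketched numerically (an illustration, not part of the proof): for a standard Cauchy $Y$, the step $X' = \operatorname{sgn}(Y)\lceil |Y| \rceil$ has $\mathbb{P}(X' = x) = (\arctan x - \arctan(x-1))/\pi$ for $x \geq 1$, which decays like $\frac{1}{\pi} x^{-2}$, and the rounding errors $R_k = Y_k - X'_k$ are bounded by $1$.

```python
import math, random

def cauchy_step(rng):
    """One discretized step X' = sgn(Y)*ceil(|Y|) from a standard Cauchy Y."""
    y = math.tan(math.pi * (rng.random() - 0.5))   # inverse-CDF Cauchy sample
    x = (1 if y > 0 else -1) * math.ceil(abs(y))
    return x, y

# P(X' = x) = (arctan(x) - arctan(x-1)) / pi for x >= 1, so x^2 * P(X' = x) -> 1/pi:
p = lambda x: (math.atan(x) - math.atan(x - 1)) / math.pi
assert abs(1000 ** 2 * p(1000) - 1 / math.pi) < 1e-3

rng = random.Random(1)
samples = [cauchy_step(rng) for _ in range(10_000)]
assert all(abs(y - x) <= 1 for x, y in samples)    # |R_k| = |Y_k - X'_k| <= 1
```

So the discrete walk $S'_n$ stays within distance $n$ of the exactly-Cauchy sum $\sum_k Y_k$, which is what makes the comparison with the stable law possible.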
But before going to these, we need to introduce several intermediary statements. The first one, Lemma 2.2, is taken from [5, Theorem 3.9]. It has the slight modification that we require the distribution to be the same only for all edges with a fixed orientation, whereas [5, Theorem 3.9] does not take different orientations into account (the precise definition of orientation is given in Notation 2.4 below). However, the exact same proof as in [5] also works in our situation, and we omit it. We say that a distribution $\mu$ has a Cauchy tail if there exists a constant $C$ such that $\mu([t, \infty)) \leq \frac{C}{t}$ for all $t > 0$. (6) Note that in order to determine whether a distribution $\mu$ has a Cauchy tail, it suffices to check that condition (6) holds for all numbers $t$ of the form $C' \cdot 3^j$ with a constant $C' \in \mathbb{R}_{>0}$, instead of all $t > 0$. Our arguments will mostly use the symmetry of the nearest-neighbor bonds with respect to the $\infty$-norm. Therefore, we will always mean edges $\{x,y\}$ with $\|x-y\|_\infty = 1$ when speaking of nearest-neighbor or short-range edges in the following.
Lemma 2.2. Let $G$ be a random electric network on the nearest-neighbor edges of the lattice $\mathbb{Z}^2$, i.e., the edges $\{\{x,y\} : \|x-y\|_\infty = 1\}$. Suppose that all the edges with the same orientation have the same conductance distribution, and that this distribution has a Cauchy tail. Then almost all realizations of this random graph $G$ are recurrent graphs.
Before going to the formal details of the proof of Theorem 1.2, we want to explain the main ideas behind it. Assume that $c_{\{x,y\}}$ are conductances on $\mathbb{Z}^2$ with $c_{\{x,y\}} = \|x-y\|^{-4}$. If one has two disjoint boxes $A, B$ of side length $3^k$ and with distance approximately $3^k$, then one has $c_{\{x,y\}} \approx 3^{-4k}$ for all $x \in A$ and $y \in B$. An edge of conductance $3^{-4k}$ is equivalent to $N$ edges in series with conductance $N \cdot 3^{-4k}$ each, where $N$ is an arbitrary positive integer. In our construction, $N$ will be of order $3^k$. So the rough idea is to replace each edge $\{x,y\}$ with $\Theta(3^k)$ many edges of conductance $\Theta(3^{-3k})$. By the parallel law, the conductivity of the network further increases if we erase these $\Theta(3^k)$ many edges in series of conductance $\Theta(3^{-3k})$, and instead increase the conductances along a path $\gamma^k_{x,y}$ of length $\Theta(3^k)$ in the nearest-neighbor lattice by $\Theta(3^{-3k})$. However, we will not do this independently for all $x \in A$, $y \in B$; rather, we want that for different points $x, x' \in A$ and $y, y' \in B$ the paths $\gamma^k_{x,y}$ and $\gamma^k_{x',y'}$ have a relatively big overlap. So far, we only looked at a fixed $k \in \mathbb{N}$. We will do such a construction for all $k \in \mathbb{N}$. But at each $k$, we will also look at random, $3^k$-periodic shifts of the plane. We use these uniform random shifts so that the distribution of the final conductance is the same for all edges of the same orientation. This construction will then lead to Cauchy tails for the individual conductances of the edges in the nearest-neighbor lattice, and thus, using Lemma 2.2, to the recurrence of the random walk on this network. The environment we started with is completely deterministic, and the edge weights arising through our construction are random only because of the random shifts of the plane. This also underlines that it is important for our construction to use random shifts, so that we can apply Lemma 2.2.
Next, we introduce some notation. We do this in order to partition the plane $\mathbb{Z}^2$ into boxes with side length $3^k$. The same notation was already used in [2,3]. For $x \in \mathbb{Z}^2$ and $N \in \mathbb{N}$, we write $V_N^x = Nx + \{0, \ldots, N-1\}^2$ for the box with side length $N$ that is translated by $Nx$. So in particular $\mathbb{Z}^2 = \biguplus_{x \in \mathbb{Z}^2} V_N^x$, where the symbol $\biguplus$ stands for a disjoint union. For $l \in \{0, \ldots, k\}$, each box of side length $3^k$ can be written as the disjoint union of $3^{2(k-l)}$ boxes of side length $3^l$. This union is simply given by $V_{3^k}^a = \biguplus_{b \in 3^{k-l}a + \{0, \ldots, 3^{k-l}-1\}^2} V_{3^l}^b$. For each point $x \in \mathbb{Z}^2$, there exists for all $l \geq 0$ a unique $y = y(l,x) \in \mathbb{Z}^2$ with $x \in V_{3^l}^{y(l,x)}$. For a point $x \in \mathbb{Z}^2$, let $m_l(x)$ be the midpoint of $V_{3^l}^{y(l,x)}$, i.e., $m_l(x) = 3^l y(l,x) + \left(\frac{3^l-1}{2}, \frac{3^l-1}{2}\right)$. So in particular we have $m_0(x) = x$ for all $x \in \mathbb{Z}^2$. Also note that $m_l(x)$ and $m_{l+1}(x)$ can be the same point. A point $u \in \mathbb{Z}^2$ for which there exists a point $x \in \mathbb{Z}^2$ with $m_l(x) = u$ is also called a midpoint of the $l$-th level. Note that a block $V_{3^k}^a$ contains exactly $3^{2(k-l)}$ midpoints of the $l$-th level, for all $l \in \{0, \ldots, k\}$.
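The base-3 box indices $y(l,x)$ and midpoints $m_l(x)$ are easy to compute explicitly; the following sketch does so and checks the property, used repeatedly below, that consecutive midpoints $m_l(x)$ and $m_{l+1}(x)$ are at $\infty$-distance $0$ or $3^l$ (the function names are ours, chosen for illustration):

```python
def y_box(l, x):
    # the unique index a with x in V_{3^l}^a, where V_N^a = N*a + {0,...,N-1}^2;
    # floor division handles negative coordinates correctly
    return tuple(c // 3 ** l for c in x)

def midpoint(l, x):
    # m_l(x): midpoint of the level-l box containing x (3^l is odd, so integral)
    h = (3 ** l - 1) // 2
    return tuple(3 ** l * a + h for a in y_box(l, x))

x = (17, -5)
assert midpoint(0, x) == x                       # m_0(x) = x
for l in range(5):
    dist = max(abs(u - v) for u, v in zip(midpoint(l, x), midpoint(l + 1, x)))
    assert dist in (0, 3 ** l)                   # ||m_l(x) - m_{l+1}(x)||_inf
```

The distance is $0$ exactly when the level-$l$ box of $x$ is the central one of the nine sub-boxes of its level-$(l{+}1)$ box, and $3^l$ otherwise.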
Edges of the form $\{x,y\}$ with $x, y \in \mathbb{Z}^2$, $\|x-y\|_\infty = 1$ can have four different orientations: $/$, $\backslash$, $|$, and $-$. For an orientation $\vec{\nu} \in \{/, \backslash, |, -\}$, we write $E_{\vec{\nu}}(\mathbb{Z}^2)$ for all the short-range edges pointing in this direction in the integer lattice. We also want to make a tiling of $E_{\vec{\nu}}(\mathbb{Z}^2)$ with a given periodicity. We will simply decide on one tiling now. There are, of course, several other natural options, which come from a different inclusion of the boundary of the blocks $V_N^a = Na + \{0, \ldots, N-1\}^2$.
Notation 2.4. For any $a \in \mathbb{Z}^2$, $N \in \mathbb{N}$, we define
Note that for $x \in \mathbb{Z}^2$ and $l \in \mathbb{N}$, the midpoints $m_l(x)$ and $m_{l+1}(x)$ have either $0$ or $3^l$ as distance in the $\infty$-metric, i.e., $\|m_l(x) - m_{l+1}(x)\|_\infty \in \{0, 3^l\}$. In the case where $\|m_l(x) - m_{l+1}(x)\|_\infty = 3^l$, there exists a path of length $3^l$ connecting $m_l(x)$ and $m_{l+1}(x)$ which uses only edges $\{u,v\}$ with $\|u-v\|_\infty = 1$. Such a path is in general not unique, but it is unique if we make the further restriction that the path uses $3^l$ edges of the same orientation. So the resulting path, which we refer to as the canonical shortest path, is the path that connects $m_l(x)$ and $m_{l+1}(x)$ using the straight line between these two points. Examples of canonical shortest paths are given in Figure 2. Next, we define a set of paths. We want to define a path $\gamma^k_{x,y}$ for all $x, y \in \mathbb{Z}^2$ for which $x \in V_{3^k}^a$ and $y \in V_{3^k}^b$ for some $a, b \in \mathbb{Z}^2$ with $\|a-b\|_\infty \in \{2, \ldots, 7\}$. The path $\gamma^k_{x,y}$ defined below is adapted to the renormalization with scale $3$, as it uses this iterative structure. Whenever $x, y$ are not of the form described above, we simply say that the path $\gamma^k_{x,y}$ does not exist. A picture of our construction is given in Figure 4.
We define the path $\gamma^k_{x,y}$ as the path that goes from $x = m_0(x)$ to $m_1(x)$ following the canonical shortest path, from there to $m_2(x)$ following the canonical shortest path, and from there, iteratively following the canonical shortest paths, to $m_k(x)$. From there, the path goes in a deterministic way to $m_k(y)$, and from there, iteratively following the canonical shortest paths, to $m_0(y) = y$. For the path between $m_k(x)$ and $m_k(y)$ we follow the line sketched in Figure 3.
The paths of the form $\gamma^k_{x,y}$ are not simple paths or shortest paths. In particular, they can traverse the same edge several times. Also note that in general we do not have $\gamma^k_{x,y} = \gamma^k_{y,x}$, because the path chosen between $m_k(x)$ and $m_k(y)$ is not necessarily the same in both directions; see Figure 3. However, the paths $\gamma^k_{x,y}$ cannot be too long. The $\infty$-distance between the points $m_k(x)$ and $m_k(y)$ is at most $7 \cdot 3^k$, and for $l + 1 \leq k$ one has $\|m_l(x) - m_{l+1}(x)\|_\infty \in \{0, 3^l\}$, and the same statement also holds for $y$ instead of $x$. Writing $|\gamma^k_{x,y}|$ for the length of the path $\gamma^k_{x,y}$, we thus get that $|\gamma^k_{x,y}| \leq 10 \cdot 3^k$. (7) Consider the set of paths $\gamma^k_{x,y}$ over all suitable points $x, y \in \mathbb{Z}^2$. We want to bound the number of edges that lie in $N$ or more paths $\gamma^k_{x,y}$. We say that an edge $e = \{x,y\}$ is in the path $\gamma = (x_0, \ldots, x_n)$, abbreviated by $e \in \gamma$, if $(x,y) = (x_i, x_{i+1})$ or $(y,x) = (x_i, x_{i+1})$ for some $i \in \{0, \ldots, n-1\}$. We first focus on the structure of the paths inside one box.

Figure 3: The path between the midpoint $m_k(x)$ (the blue dot) and a different midpoint $m_k(y)$ in a different box (a black dot) is obtained by following the black line.
For each $l \in \{0, \ldots, k\}$, there are $3^{2(k-l)}$ midpoints of the $l$-th level inside $A$, i.e., points $y \in A$ such that $y = m_l(x)$ for a point $x \in A$. Thus there are $3^{2(k-l-1)}$ midpoints of the form $m_{l+1}(x)$ in $A$. Each box of side length $3^{l+1}$ contains $9$ boxes of side length $3^l$. Thus, there are $8 \cdot 3^l \cdot 3^{2(k-l-1)} \leq 3^{2k-l}$ edges in $A$ that are on the canonical shortest path between two midpoints of the form $m_l(x)$ and $m_{l+1}(x)$. The factor $8$ arises, as for one box of side length $3^{l+1}$ with midpoint $z$ we only need to consider the $8 = 3^2 - 1$ boxes of side length $3^l$ that lie inside this box but do not have $z$ as a midpoint. Edges that do not lie on the canonical shortest path between two midpoints of any level are not used in the segments that connect an $x \in A$ to $m(A)$, where $m(A)$ is the midpoint of $A$. Furthermore, for two boxes $V_{3^k}^a$ and $V_{3^k}^b$ with $\|a-b\|_\infty \leq 7$, there are at most $7 \cdot 3^k$ edges that are on the path between the midpoints of $V_{3^k}^a$ and $V_{3^k}^b$. Many of the edges in this path actually lie outside of both boxes $V_{3^k}^a$ and $V_{3^k}^b$.
Definition 2.6. For each short-range edge $e$ we define the number $N^k_e$ by $N^k_e := \sum_{x,y} \mathbb{1}_{\{e \in \gamma^k_{x,y}\}}$, which is just the number of paths of the form $\gamma^k_{x,y}$ that use the edge $e$. For a number $r \geq 0$ and an orientation $\vec{\nu}$, we further define the number of edges in $E_{\vec{\nu}}(V_{3^k}^0)$ that lie in at least $r$ different paths of the form $\gamma^k_{x,y}$.
Remember that we defined the path $\gamma^k_{x,y}$ only for points $x, y$ satisfying $x \in V_{3^k}^a$, $y \in V_{3^k}^b$ for some $a, b \in \mathbb{Z}^2$ with $\|a-b\|_\infty \in \{2, \ldots, 7\}$. So in particular, for all edges $e$ we have that $e \notin \gamma^k_{x,y}$ for all points $x, y$ that are not of this special form. The next lemma gives upper bounds on the number of edges that lie in at least a given number of paths.

Figure 4: The dashed line is the path $\gamma^2_{x,y}$ between the points $x$ (blue) and $y$ (red). The dots are points in $\mathbb{Z}^2$, the gray lines give the partition of $\mathbb{Z}^2$ into sets of the form $V_3^a$, and the thick black lines give the partition of $\mathbb{Z}^2$ into sets $V_9^a$. The encircled points are the points $m_1(x)$, $m_2(x)$, and $m_2(y)$. Note that we have $y = m_0(y) = m_1(y)$ here.
and furthermore, one has

Proof. Suppose that an edge $e$ is not on the straight line between two midpoints of the $l$-th level and the $(l+1)$-th level in the set $V_{3^k}^0$, and also not on the path between two midpoints $m(V_{3^k}^a)$ and $m(V_{3^k}^b)$ for $a, b \in \mathbb{Z}^2$ with $\|a-b\|_\infty \in \{2, \ldots, 7\}$. So the edge $e$ can only be on the straight line between midpoints of the $j$-th level and the $(j+1)$-th level, for $j \leq l-1$. Thus, there exists a set $V_{3^{l-1}}^{f(e)} \subset V_{3^k}^0$ such that $e$ can only be part of paths of the form $\gamma^k_{x,y}$ where $x \in V_{3^{l-1}}^{f(e)}$ or $y \in V_{3^{l-1}}^{f(e)}$. There are $(2 \cdot 7 + 1)^2 - 9 = 216$ many $a \in \mathbb{Z}^2$ with $2 \leq \|a\|_\infty \leq 7$. Thus, there are at most $216 \cdot 3^{2(l-1)} \cdot 3^{2k} < 25 \cdot 3^{2k+2l}$ pairs $(x,y)$ with $x \in V_{3^{l-1}}^{f(e)}$ and $y \in \bigcup_{a \in \mathbb{Z}^2 : 2 \leq \|a\|_\infty \leq 7} V_{3^k}^a$. Using the symmetry between $x$ and $y$ we get that $N^k_e < 50 \cdot 3^{2k+2l}$. This shows that edges $e$ with $N^k_e \geq 50 \cdot 3^{2k+2l}$ are either on the canonical path between two midpoints of the $l$-th level and the $(l+1)$-th level in the set $V_{3^k}^0$, or on the path between two midpoints $m(V_{3^k}^a)$ and $m(V_{3^k}^b)$ for $a, b \in \mathbb{Z}^2$ with $\|a-b\|_\infty \in \{2, \ldots, 7\}$.
As discussed before, in the set $V_{3^k}^0$ there are at most $3^{2k-l}$ edges that join a midpoint of the $l$-th level to a midpoint of the $(l+1)$-th level. For each orientation, there are $3^k$ edges that are used by paths between different midpoints. For the orientation $/$, for example, these are simply the corresponding diagonal edges on the straight lines between the midpoints, which shows (8). Note that the last inequality in (10) holds because $l \leq k$. Furthermore, for each edge $e$ there are at most $\left((2 \cdot 7 + 1)^2 \cdot 3^{2k}\right)^2 < 2^{17} \cdot 3^{4k}$ pairs $(x,y)$ such that $\gamma^k_{x,y}$ is defined and for which $e \in \gamma^k_{x,y}$ is possible. This holds, as for every path $\gamma^k_{x,y}$ that uses one of the edges in $E_{\vec{\nu}}(V_{3^k}^0)$, say for $x \in V_{3^k}^a$ and $y \in V_{3^k}^b$, we must already have $\|a\|_\infty, \|b\|_\infty \leq 7$. This gives us that
which finishes the proof.
We are now ready to prove the recurrence of the network. Remember that we started with conductances $c_{\{x,y\}}$ satisfying $c_{\{x,y\}} \leq C\|x-y\|_\infty^{-4}$ for a uniform constant $0 < C < \infty$. For two networks $(c_{\{x,y\}})_{x,y \in \mathbb{Z}^d}$ and $(\tilde c_{\{x,y\}})_{x,y \in \mathbb{Z}^d}$ we say that the first network has a higher conductivity than the second network if the effective conductances satisfy $C_{\mathrm{eff}}(a \leftrightarrow b) \geq \tilde C_{\mathrm{eff}}(a \leftrightarrow b)$ for all $a, b$, where $C_{\mathrm{eff}}(a \leftrightarrow b) = \left(\sum_z c_{\{a,z\}}\right) P_a(a \to b)$ and $P_a(a \to b)$ is the probability that a random walk started at $a$ hits $b$ before returning to $a$. So the effective conductance between $a$ and $b$ is related to how likely it is to go from $a$ to $b$. The effective conductance between two sets $A, B$ is the conductance between the points $a, b$ if the set $A$ is contracted to a point $a$ and the set $B$ is contracted to a point $b$. Taking $A = \{0\}$ and $B = \mathbb{Z}^d \setminus \{-n, \ldots, n\}^d$, and letting $n \to \infty$, this shows that if the network defined by $(c_{\{x,y\}})$ is recurrent, then the network defined by $(\tilde c_{\{x,y\}})$ is also recurrent. By Rayleigh's monotonicity principle [27, Chapter 2.4], the conductivity of the network increases if we increase the conductance of edges. Thus, it suffices to show that the network defined by the conductances $c_{\{x,y\}} = C\|x-y\|_\infty^{-4}$ is recurrent. However, multiplying every conductance by a constant factor does not change whether the network is recurrent or transient. Thus, we will, from now on, focus on the case where $c_{\{x,y\}} = \|x-y\|_\infty^{-4}$. Following an idea of Berger [5], our strategy is to erase the long edges and give a higher conductance to the short edges instead, in such a way that the total conductivity increases. The way in which this is done in [5] does not work in the situation we are dealing with. The precise way in which we do this is described in Definition 2.8 for edges of length $2, 3, \ldots, 8$, and in Definition 2.9 for edges of length $9$ and higher (where the length of an edge is measured in the $\infty$-distance of its endpoints).
Some edges might appear several times, but if we increase the conductance of one edge twice, this only increases the total conductivity of the network further. Before going to these definitions, we need to introduce a bit more notation.
For a path $\gamma = (x_0, x_1, \ldots, x_n)$ and a point $r \in \mathbb{Z}^2$, we define the path $r + \gamma = (r + x_0, r + x_1, \ldots, r + x_n)$, which is a path between $r + x_0$ and $r + x_n$. Note that for three points $x, y, r \in \mathbb{Z}^2$ and $k \in \mathbb{N}$ for which the path $\gamma^k_{x+r,y+r}$ exists, the path $-r + \gamma^k_{x+r,y+r}$ is a path between $x$ and $y$. Also remember that we write $E(\mathbb{Z}^2) = \left\{\{x,y\} \subset \mathbb{Z}^2 : \|x-y\|_\infty = 1\right\}$ for the edge set consisting of short edges on $\mathbb{Z}^2$.

Definition 2.8. For two vertices $x = (x_1, x_2)$ and $y = (y_1, y_2)$ in $\mathbb{Z}^2$, we define the path $\gamma'_{x,y}$ as the path that goes from $x$ to $(x_1, y_2)$ using $|x_2 - y_2|$ edges of the orientation $|$, and from there to $(y_1, y_2)$ using $|x_1 - y_1|$ edges of the orientation $-$. This path is uniquely defined and has length $\|x-y\|_1 \le 2\|x-y\|_\infty$. We now define a weight $W$ on the short edges as follows: going through an exhaustion of all pairs $(x,y) \in \mathbb{Z}^2 \times \mathbb{Z}^2$ with $2 \le \|x-y\|_\infty \le 8$, increase the weight of all edges $e \in \gamma'_{x,y}$ by $16$. Define $W$ as the limiting object.
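To make the construction of $\gamma'_{x,y}$ in Definition 2.8 concrete, here is a small illustrative sketch in Python (the helper name `gamma_prime` is ours, not from the paper); it builds the vertical-then-horizontal path and checks the length identity $\|x-y\|_1 \le 2\|x-y\|_\infty$ on an example.

```python
def gamma_prime(x, y):
    # L-shaped nearest-neighbor path of Definition 2.8: first from x to
    # (x1, y2) using vertical edges (orientation |), then to (y1, y2)
    # using horizontal edges (orientation -)
    (x1, x2), (y1, y2) = x, y
    path = [(x1, x2)]
    step = 1 if y2 >= x2 else -1
    for _ in range(abs(y2 - x2)):
        path.append((path[-1][0], path[-1][1] + step))
    step = 1 if y1 >= x1 else -1
    for _ in range(abs(y1 - x1)):
        path.append((path[-1][0] + step, path[-1][1]))
    return path

x, y = (0, 0), (3, -2)
p = gamma_prime(x, y)
num_edges = len(p) - 1
l1 = abs(x[0] - y[0]) + abs(x[1] - y[1])
linf = max(abs(x[0] - y[0]), abs(x[1] - y[1]))
assert p[0] == x and p[-1] == y
assert num_edges == l1 <= 2 * linf   # length equals the l1-distance
```

The path is uniquely determined by the vertical-first convention, which is what makes the weight $W$ below well-defined.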
Definition 2.9. We now define weights $U_k$ for $k \in \mathbb{N}$ as follows: going through an exhaustion of all pairs $(x,y) \in \mathbb{Z}^2 \times \mathbb{Z}^2$ for which the path $\gamma^k_{x+r_k,y+r_k}$ exists, increase the weight of all edges $e \in -r_k + \gamma^k_{x+r_k,y+r_k}$ by $10 \cdot 3^{-3k}$. Define $U_k$ as the limiting object.
Note that $U_k$ and $W$ are well-defined and do not depend on the order of the exhaustion of $\mathbb{Z}^2 \times \mathbb{Z}^2$, as we only add a non-negative amount at every step and never subtract anything. Next, we want to show that the nearest-neighbor network $\left(\mathbb{Z}^2, E(\mathbb{Z}^2), U\right)$ defined by $U = W + \sum_{k=1}^\infty U_k$ has a higher conductivity than the original network. Note that we can also define $U = W + \sum_{k=1}^\infty U_k$ directly by increasing the conductances along all suitable paths $\gamma'_{x,y}$ or $\gamma^k_{x,y}$ by the corresponding value and then looking at the limiting object.
Lemma 2.10. The network defined by the weights $U(e) = W(e) + \sum_{k=1}^\infty U_k(e)$ has a higher conductivity than the network defined by the weights $c_{\{x,y\}} = \|x-y\|_\infty^{-4}$.

Proof. A non-nearest-neighbor edge $e = \{u,v\}$ is not included in the network defined by $U$. However, we have increased the conductances along some path connecting $u$ and $v$ when we consider the sum $W + \sum_{k=1}^\infty U_k$. In the following, we will show that for each edge $e = \{u,v\}$ the conductances were indeed increased at least once along a nearest-neighbor path connecting $u$ and $v$, and that this increase of the conductances of the short edges actually increased the total conductivity of the network. A similar argument for the latter claim was also used in [5]. Assume that $e = \{u,v\}$ is an edge of length at least $9$, and let $k \in \{2, 3, \ldots\}$ be such that $3^{k-1} < \|u-v\|_\infty \le 3^k$. We removed this edge, but increased the conductance of nearest-neighbor edges along the path $-r_{k-1} + \gamma^{k-1}_{u+r_{k-1},v+r_{k-1}}$ by $10 \cdot 3^{-3(k-1)}$. The path $-r_{k-1} + \gamma^{k-1}_{u+r_{k-1},v+r_{k-1}}$ has a length of at most $10 \cdot 3^{k-1}$ by (7), and thus we increased the total conductivity of the network. To see this, assume we have a nearest-neighbor path of length $N = 10 \cdot 3^{k-1}$ connecting $u$ and $v$. The edge $\{u,v\}$ is equivalent to a string of $N$ edges in series, each with conductance $N c_{\{u,v\}}$. Identifying the vertices in this string with the vertices of the path in the nearest-neighbor lattice can only increase the conductivity of the network. Applying the parallel law to the edges in the original lattice and the newly formed edges is then equivalent to adding a conductance of $N c_{\{u,v\}}$ to each edge of the path connecting $u$ and $v$. As $N c_{\{u,v\}} \le 10 \cdot 3^{k-1} \cdot 3^{-4(k-1)} = 10 \cdot 3^{-3(k-1)}$, this increased the total conductivity of the network.
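The arithmetic in the last estimate can be sanity-checked with exact rational numbers. The sketch below is our own check, assuming the convention $3^{k-1} < \|u-v\|_\infty \le 3^k$, under which $c_{\{u,v\}} = \|u-v\|_\infty^{-4} \le 3^{-4(k-1)}$:

```python
from fractions import Fraction

# For an edge {u, v} with 3^(k-1) < ||u-v||_inf <= 3^k, the replacement path
# has length N <= 10 * 3^(k-1) and c_{u,v} <= 3^{-4(k-1)}; check that the
# added per-edge conductance N * c_{u,v} never exceeds 10 * 3^{-3(k-1)}.
for k in range(2, 12):
    N = 10 * 3 ** (k - 1)
    c_upper = Fraction(1, (3 ** (k - 1)) ** 4)   # upper bound on c_{u,v}
    assert N * c_upper == Fraction(10, 3 ** (3 * (k - 1)))
```

So the worst case saturates the bound exactly, which is why the increase of $10 \cdot 3^{-3(k-1)}$ per short edge suffices.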
The same argument as before shows that we also increased the total conductivity in this case.
For edges $e = \{u,v\}$ with $\|u-v\|_\infty \le 8$ we increased the conductances of the short edges along the path $\gamma'_{u,v}$ by $16$. As $\gamma'_{u,v}$ has a length of at most $\|u-v\|_1 \le 16$, we also increased the conductivity of the network in this case.
Lemma 2.11. Fix one of the possible orientations of the edges in $E(\mathbb{Z}^2)$, i.e., the vertical orientation $|$, the horizontal orientation $-$, or one of the two diagonal orientations. Then the random variables $U(e)$ for all edges $e$ of this orientation are identically distributed and have a Cauchy tail. Thus, by Lemma 2.2, the random walk on the network $\left(\mathbb{Z}^2, E(\mathbb{Z}^2), U\right)$ is almost surely recurrent.
Proof. As $W, U_1, U_2, \ldots$ are independent, it suffices to show that the distribution of $W(e)$, respectively $U_k(e)$, depends only on the orientation of the edge $e$. This is clear for $W$, as $W$ is deterministic and the value $W(e)$ depends only on the orientation of the edge $e$. Remember the condition under which we say that the path $\gamma^k_{x+r_k,y+r_k}$ exists. For $U_k$, note that $U_k(e)$ depends only on the number of pairs $(x,y)$ for which $\gamma^k_{x+r_k,y+r_k}$ exists and $e \in -r_k + \gamma^k_{x+r_k,y+r_k}$; more precisely, $U_k(e)$ is simply $10 \cdot 3^{-3k}$ times the number of such pairs. Writing $N^k_{e'}$ for the number of pairs $(x,y)$ such that $\gamma^k_{x,y}$ exists and $e' \in \gamma^k_{x,y}$, we thus have
$$U_k(e) = 10 \cdot 3^{-3k}\, N^k_{e+r_k} ,$$
where we write $\{u,v\} + r_k = \{u + r_k, v + r_k\}$ for an edge $e = \{u,v\}$. The quantity $N^k_e$ is clearly $3^k$-periodic in both coordinate directions. As $r_k$ is uniformly chosen on $\{0, \ldots, 3^k - 1\}^2$, we see that the distribution of $N^k_{e+r_k}$, and thus also of $U_k(e)$, depends only on the orientation of the edge $e$.

Now let us turn to the tail properties of the random variable $U(e)$. As $W(e)$ is uniformly bounded over all $e$, we can ignore it from here on. From (9) and (13) we get that there exists a uniform constant $C < \infty$ such that $U_k(e) = N^k_{e+r_k} \cdot 10 \cdot 3^{-3k} \le C 3^k$, and for $l \in \{0, \ldots, k-1\}$ we get a corresponding tail bound with (8), where we used the uniform distribution of $r_k$ and (8) for the last inequality. Substituting $j = 2l - k$, i.e., $l = \frac{k+j}{2}$, we get that there exists a constant $C < \infty$ such that (14) holds for all $j \in \{-k, -k+2, \ldots, k-2\}$. We want to extend this inequality from $j \in \{-k, -k+2, \ldots, k-2\}$ to all $j \in \mathbb{R}$. The extension from $j \in \{-k, -k+2, \ldots, k-2\}$ to $j \in [-k, k]$ is easily done by increasing the constant $C$ and looking at the nearest integers in the set $\{-k, -k+2, \ldots, k-2\}$. For $j < -k$ and $C \ge 1$ there is nothing to show, so (14) holds trivially in this regime.
Furthermore, one has
$$\mathbb{P}\left(U_k(e) > 2^{17} \cdot 10 \cdot 3^k\right) = \mathbb{P}\left(N^k_{e+r_k} > 2^{17}\, 3^{4k}\right) \stackrel{(9)}{=} 0 ,$$
which shows that (14) also holds for $j \ge k$ and a large enough constant $C$. Finally, as inequality (14) holds for all $j \in \mathbb{R}$ with a high enough constant $C$, by further increasing the constant we can make sure that (15) holds for all $j \in \mathbb{R}$. Also note that for $j \ll k$, inequality (15) gives a very small bound on $\mathbb{P}\left(U_k(e) \ge 3^j\right)$. We want to use this observation in order to show that $\sum_{k=1}^\infty U_k(e)$ has a Cauchy tail. Note that if we have $U_k(e) \le 3^{j + \frac{j-k}{2}}$ for all $k \ge j$, $k \in \mathbb{N}$, then we also have
$$\sum_{k=j}^\infty U_k(e) \le \sum_{k=j}^\infty 3^{j + \frac{j-k}{2}} = 3^j \sum_{m=0}^\infty 3^{-m/2} \le C 3^j .$$
As we furthermore have $U_k(e) \le C_1 3^k$ for a large enough constant $C_1$ and all $k \in \mathbb{N}$, we get that $\sum_{k=1}^{j-1} U_k(e) \le C_1 \sum_{k=1}^{j-1} 3^k \le C_1' 3^j$, and thus $\sum_{k=1}^\infty U_k(e) \le C_2 3^j$ on this event. Arguing in the reverse direction, we see that the event $\left\{\sum_{k=1}^\infty U_k(e) > C_2 3^j\right\}$ implies that there exists a $k \ge j$ with $U_k(e) > 3^{j + \frac{j-k}{2}}$. Combining this observation with a union bound and (15), we get that
$$\mathbb{P}\left(\sum_{k=1}^\infty U_k(e) > C_2 3^j\right) \le \sum_{k=j}^\infty \mathbb{P}\left(U_k(e) > 3^{j + \frac{j-k}{2}}\right) \le C 3^{-j} ,$$
which shows that $\sum_{k=1}^\infty U_k(e)$ has a Cauchy tail and thus finishes the proof.
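The geometric series hidden in the bound $\sum_{k \ge j} 3^{j + (j-k)/2} \le C 3^j$ can be checked numerically; the sketch below (our own illustration) uses the explicit value $C = (1 - 3^{-1/2})^{-1} \approx 2.37$ of $\sum_{m \ge 0} 3^{-m/2}$.

```python
# Factoring 3^j out of sum_{k >= j} 3^(j + (j-k)/2) leaves the geometric
# series sum_{m >= 0} 3^(-m/2) = 1 / (1 - 3^(-1/2)).
C = 1.0 / (1.0 - 3.0 ** -0.5)
for j in range(1, 8):
    partial = sum(3.0 ** (j + (j - k) / 2.0) for k in range(j, j + 500))
    # the (truncated) tail sum stays below C * 3^j
    assert partial <= C * 3.0 ** j * (1.0 + 1e-9)
```

This is the only place where the specific exponent $\frac{j-k}{2}$ matters: any geometrically decaying exponent would give a constant $C$ of the same kind.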
Remark 2.12. Using the definition of $U_k$, one can easily show that $\mathbb{P}\left(U_k(e) \ge 3^k\right) \approx 3^{-k}$, so (15) is approximately an equality for $k = j$. As $U(e) \ge U_k(e)$, this already implies a matching lower bound of order $3^{-k}$ on $\mathbb{P}\left(U(e) \ge 3^k\right)$, which together with Lemma 2.11 shows that the tail of $U$ is approximately that of a Cauchy distribution, i.e., $\mathbb{P}(U(e) > M) \approx M^{-1}$ for $M$ large.

Random walks on percolation clusters
In this section, we prove Theorem 1.3, i.e., that random walks on certain percolation clusters are recurrent. In Section 3.1 below we apply this result to the weight-dependent random connection model. From Theorem 1.3 we can deduce the following corollary.
Theorem 1.3 will be a direct consequence of Lemma 3.2 below. For two disjoint sets $\emptyset \ne A, B \subset V$ we write $C_{\mathrm{eff}}(A \leftrightarrow B; \omega)$ for the effective conductance between these two sets in the environment $\omega$, which is the environment in which each edge $e$ has the conductance $\omega(e)$. Note that $C_{\mathrm{eff}}(A \leftrightarrow B; \omega)$ is a random variable that is measurable with respect to $\omega$. We also write $C_{\mathrm{eff}}(A \leftrightarrow B)$ for the effective conductance between $A$ and $B$ in the environment where each edge $e$ has conductance $c_e$. For a vertex $a \in V$ we simply write $a$ for the set $\{a\}$. Furthermore, we write $C_{\mathrm{eff}}(a \leftrightarrow \infty)$ for the limit $\lim_{n \to \infty} C_{\mathrm{eff}}\left(a \leftrightarrow A_n^C\right)$, where $(A_n)_n$ is a sequence with $a \in A_n$ for all $n$ and $A_n \nearrow V$.
Let us first see how this implies Theorem 1.3.
Proof of Theorem 1.3 given Lemma 3.2. Let $a \in V$ be a vertex. Our goal is to show that the random walk started at $a \in V$ is recurrent. Let $\varepsilon > 0$ be arbitrary. As the random walk on the conductances $\left(c_{\{x,y\}}\right)_{x,y \in V}$ is recurrent, there exists a finite set $\Lambda_\varepsilon \subset V$ such that $a \in \Lambda_\varepsilon$ and $C_{\mathrm{eff}}\left(a \leftrightarrow \Lambda_\varepsilon^C\right) < \varepsilon$. Then $V \setminus \left(\{a\} \cup \Lambda_\varepsilon^C\right) = \Lambda_\varepsilon \setminus \{a\}$ is finite and we can apply Lemma 3.2; this lemma already implies that
$$\mathbb{E}\left[C_{\mathrm{eff}}\left(a \leftrightarrow \Lambda_\varepsilon^C; \omega\right)\right] \le C_{\mathrm{eff}}\left(a \leftrightarrow \Lambda_\varepsilon^C\right) < \varepsilon ,$$
and as $C_{\mathrm{eff}}(a \leftrightarrow \infty; \omega) \le C_{\mathrm{eff}}\left(a \leftrightarrow \Lambda_\varepsilon^C; \omega\right)$ this already gives that $\mathbb{E}\left[C_{\mathrm{eff}}(a \leftrightarrow \infty; \omega)\right] < \varepsilon$. As $\varepsilon > 0$ was arbitrary and $C_{\mathrm{eff}}(a \leftrightarrow \infty; \omega)$ is a non-negative random variable, this already implies that $C_{\mathrm{eff}}(a \leftrightarrow \infty; \omega) = 0$ almost surely, which is equivalent to saying that the random walk on the weights $(\omega(e))_{e \in E}$ started at $a \in V$ is recurrent almost surely. As $a \in V$ was arbitrary, this finishes the proof.
Lemma 3.2 shows that the expected effective conductance can only decrease if we replace an edge of conductance $c_e > 0$ by a random conductance $\omega(e)$ with $\mathbb{E}[\omega(e)] \le c_e$. This inequality can be strict in many natural examples, even when the expected conductance of each individual edge stays the same. The reason why this inequality holds is ultimately linked to the fact that the effective conductance is a concave function of the individual conductances. In the proof of Lemma 3.2 below, the concavity is used implicitly, as the infimum over a set of linear functions is a concave function.
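The concavity phenomenon can be seen already in the smallest nontrivial network: two edges in series with conductances $t$ and $1$ have effective conductance $t/(t+1)$, which is concave in $t$. The following toy example (our own, not from the paper) checks concavity via second differences and illustrates the resulting Jensen-type inequality $\mathbb{E}[C_{\mathrm{eff}}(\omega)] \le C_{\mathrm{eff}}(\mathbb{E}[\omega])$.

```python
def c_eff_series(t):
    # effective conductance of two edges in series with conductances t and 1:
    # 1 / (1/t + 1/1) = t / (t + 1)
    return t / (t + 1.0)

# concavity: second differences on an equally spaced grid are non-positive
ts = [0.1 * i for i in range(1, 50)]
for a, b, c in zip(ts, ts[1:], ts[2:]):
    assert c_eff_series(a) + c_eff_series(c) - 2 * c_eff_series(b) <= 1e-12

# Jensen: let omega be 0 or 2 with probability 1/2 each, so E[omega] = 1;
# then E[C_eff(omega)] = 1/3 is strictly smaller than C_eff(1) = 1/2
lhs = 0.5 * c_eff_series(0.0) + 0.5 * c_eff_series(2.0)
assert lhs <= c_eff_series(1.0)
```

The strict inequality in this example is exactly the "might also be strict" phenomenon mentioned above: randomizing an edge's conductance while preserving its mean lowers the expected effective conductance.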
Proof of Lemma 3.2. We use Dirichlet's principle for the effective conductance, see for example [27, Exercise 2.13]. It says that for two non-empty disjoint sets $A, B \subset V$ for which $|V \setminus (A \cup B)| < \infty$, the effective conductance between these two sets can be expressed as
$$C_{\mathrm{eff}}(A \leftrightarrow B) = \inf_{f \in F} \sum_{e \in E} c_e \left(df(e)\right)^2 ,$$
where $F$ is the set of functions $f$ from $V$ to $\mathbb{R}$ that are $+1$ on $A$ and $0$ on $B$. For an edge $e = \{x,y\}$ we write $(df(e))^2 = (f(x) - f(y))^2$ for the squared difference of the values of $f$ at the endpoints of the edge. This is well-defined, even without fixing an orientation of the edge. Dirichlet's principle also holds for $C_{\mathrm{eff}}(A \leftrightarrow B; \omega)$. Thus we get that
$$\mathbb{E}\left[C_{\mathrm{eff}}(A \leftrightarrow B; \omega)\right] = \mathbb{E}\left[\inf_{f \in F} \sum_{e \in E} \omega(e) \left(df(e)\right)^2\right] \le \inf_{f \in F} \sum_{e \in E} \mathbb{E}[\omega(e)] \left(df(e)\right)^2 \le \inf_{f \in F} \sum_{e \in E} c_e \left(df(e)\right)^2 = C_{\mathrm{eff}}(A \leftrightarrow B) ,$$
where we can interchange the sum and the expectation as all summands are non-negative. Interchanging the infimum and the expectation in this direction is always allowed, as it yields an inequality: the expectation of an infimum is at most the infimum of the expectations. Using this inequality for $A = \{a\}$ and $B = \Lambda^C$ finishes the proof.
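As a toy illustration of Dirichlet's principle (our own example, not from the paper): for a two-edge path $a - m - b$ with conductances $c_1, c_2$, the minimizing potential takes the value $f(m) = c_1/(c_1 + c_2)$, and the minimal energy equals the series-law conductance $c_1 c_2/(c_1 + c_2)$.

```python
def dirichlet_energy(fm, c1, c2):
    # Dirichlet energy of a potential f with f(a) = 1, f(b) = 0 and
    # free value fm at the middle vertex m of the path a - m - b
    return c1 * (1.0 - fm) ** 2 + c2 * fm ** 2

c1, c2 = 2.0, 3.0
fm_opt = c1 / (c1 + c2)                 # minimizer: c1 * (1 - fm) = c2 * fm
energy = dirichlet_energy(fm_opt, c1, c2)
series = c1 * c2 / (c1 + c2)            # series law: 1 / (1/c1 + 1/c2)
assert abs(energy - series) < 1e-12     # infimum equals C_eff(a <-> b)
assert dirichlet_energy(0.5, c1, c2) >= energy   # any other f costs more
```

Here $C_{\mathrm{eff}} = 6/5$, achieved at $f(m) = 2/5$; every other admissible potential has strictly larger energy, as the principle asserts.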

Recurrence for the weight-dependent random connection model
In this section, we prove Theorem 1.5, i.e., different phases of recurrence for the two-dimensional weight-dependent random connection model. Our main tool for proving this is a comparison to dependent percolation on the two-dimensional integer lattice in Lemma 3.3 below. A slightly weaker statement was already proven in [16, Lemma 4.1], where condition (17) needed to hold with $|x-y|^4$ replaced by $|x-y|^\alpha$ for some $\alpha > 4$. This improvement allows us to prove the results of Theorem 1.5. Lemma 3.3 is a direct consequence of Corollary 3.1.
Lemma 3.3. Let $X_\infty$ be a unit intensity Poisson process on $\mathbb{R}^2$. Consider a random graph $H$ on this point process, where points $x, y \in X_\infty = V(H)$ are joined by an edge with conditional probability $P_{x,y}$, given $X_\infty$. If there exists a constant $C < \infty$ such that
$$P_{x,y} \le C\, \|x-y\|^{-4} \qquad (17)$$
for all $x \ne y$, then any infinite component of $H$ is recurrent.
Note that Lemma 3.3 does not make any independence assumptions on different edges. In particular, for the proof of Theorem 1.5, we will also need the statement to hold for dependent percolation models.
Proof of Lemma 3.3. We prove this via a discretization. We construct a weighted graph $G = \left(\mathbb{Z}^2, E, \omega\right)$ as follows. For each $v \in \mathbb{Z}^2$, identify all vertices in $X_\infty \cap \left(v + [0,1)^2\right)$ to one vertex $v$, which we also imagine to be at the position $v \in \mathbb{Z}^2$ in space. If, for some $u, v \in \mathbb{Z}^2$, there are $m \ge 1$ edges of $H$ between the vertices identified to $u$ and those identified to $v$, replace them by one edge of conductance $m$, i.e., $\omega(\{u,v\}) = m$. If there is no edge between two vertices $u, v \in \mathbb{Z}^2$, we set $\omega(\{u,v\}) = 0$. Call this new graph $G$. It is not hard to see that if every connected component of $G$ is recurrent, then every connected component of $H$ is also recurrent. Indeed, identifying vertices is equivalent to giving each edge between them a conductance of $+\infty$, and thus we increase the total conductivity of the network by Rayleigh's monotonicity principle. In a second step, we then applied the parallel law to possible parallel edges. So we are left with showing that every connected component of $G$ is recurrent. Assumption (17) implies that there exists a constant $C < \infty$ such that for all $u \ne v$ and for all $x \in u + [0,1)^2$, $y \in v + [0,1)^2$ one has $P_{x,y} \le C \|u-v\|^{-4}$. Therefore, for each edge $e = \{u,v\} \in E$ one now has
$$\mathbb{E}\left[\omega(\{u,v\})\right] \le C \|u-v\|^{-4}\, \mathbb{E}\left[\left|X_\infty \cap \left(u + [0,1)^2\right)\right| \cdot \left|X_\infty \cap \left(v + [0,1)^2\right)\right|\right] = C \|u-v\|^{-4} ,$$
where we used that the Poisson process has a unit intensity in the last equality. This already implies that the random walk on every connected component of $G$ is recurrent, by Corollary 3.1.
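The last equality uses only that the point counts of a unit intensity Poisson process in two disjoint unit boxes are independent Poisson(1) variables, so the expected number of (point, point) pairs between the boxes is $1 \cdot 1 = 1$. A small numerical check of this fact (our own illustration):

```python
import math

def poisson_pmf(n, lam=1.0):
    # P(N = n) for N ~ Poisson(lam)
    return math.exp(-lam) * lam ** n / math.factorial(n)

# E[N] for N ~ Poisson(1), computed from a (numerically exhaustive) series
mean = sum(n * poisson_pmf(n) for n in range(60))
assert abs(mean - 1.0) < 1e-9

# counts in disjoint boxes are independent, so E[N_u * N_v] = E[N_u] * E[N_v]
expected_pairs = mean * mean
assert abs(expected_pairs - 1.0) < 1e-9
```

Independence over disjoint regions is exactly the defining property of the Poisson process that makes the expectation factorize.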
Before going to the proof of Theorem 1.5, we still need to prove a small technical lemma that we will use later.
Lemma 3.4. Suppose that $X$ is a non-negative random variable satisfying $\mathbb{P}(X \le \varepsilon) \le C\varepsilon$ for some constant $C < \infty$ and all $\varepsilon > 0$. Then for $\eta < 1$ one has
$$\mathbb{E}\left[X^{-\eta}\right] < \infty \qquad (18)$$
and for $\eta > 1$ one has
$$\mathbb{E}\left[X^{-\eta} \mathbb{1}_{\{X > \varepsilon\}}\right] = O\left(\varepsilon^{1-\eta}\right) \qquad (19)$$
as $\varepsilon$ goes to $0$.
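The first claim (18) follows from the layer-cake formula together with the assumption $\mathbb{P}(X \le \varepsilon) \le C\varepsilon$; a short derivation (in our notation):

```latex
\mathbb{E}\left[X^{-\eta}\right]
  = \int_0^\infty \mathbb{P}\left(X^{-\eta} > t\right)\mathrm{d}t
  \le 1 + \int_1^\infty \mathbb{P}\left(X < t^{-1/\eta}\right)\mathrm{d}t
  \le 1 + C \int_1^\infty t^{-1/\eta}\,\mathrm{d}t
  < \infty ,
```

where the last integral is finite precisely because $1/\eta > 1$ for $\eta < 1$.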
With this, we are now ready to go to the proof of Theorem 1.5. Remember that the vertex set of the two-dimensional weight-dependent random connection model is a Poisson process of unit intensity on $\mathbb{R}^2 \times (0,1)$. So in particular, if we condition on there being a point in this process with spatial parameter $x \in \mathbb{R}^2$, the weight-parameter of this vertex is still uniformly distributed on the interval $(0,1)$. If we condition on there being two points in the Poisson process with spatial parameters $x$ and $y$, then the weight-parameters of these points are independent random variables that are uniformly distributed on $(0,1)$.
Proof of Theorem 1.5. Throughout the proof we will always assume that $S$ and $T$ are independent random variables that are uniformly distributed on $(0,1)$. For all cases of random connection models considered in Theorem 1.5 we will verify that (17) holds. For this we need to show that
$$\mathbb{E}\left[\rho\left(\tfrac{1}{\beta}\, g(S,T)\, \|x-y\|^2\right)\right] = O\left(\|x-y\|^{-4}\right) \qquad (20)$$
as $\|x-y\| \to \infty$. This already implies that all connected components are recurrent by Lemma 3.3. We will only do the case $\gamma > 0$; the case $\gamma = 0$ works analogously or is degenerate. The factor of $\tfrac{1}{\beta}$ in the kernel $g(S,T)$ does not change whether (20) holds or not, so we will just ignore it from here on and think of $\beta = 1$. We will show (20) for all cases appearing in Theorem 1.5. Assuming that (3) holds, we directly get that $\rho(r) \le C r^{-\delta}$ for a large enough constant $C < \infty$ and all $r \ge 0$. To strengthen this bound, note that we also have
$$\rho(r) \le C \left(\mathbb{1}_{[0,1)}(r) + \mathbb{1}_{[1,\infty)}(r)\, r^{-\delta}\right) \qquad (21)$$
for a large enough constant $C < \infty$ and all $r \ge 0$, as $\rho(r) \in [0,1]$ for all $r \in \mathbb{R}_{\ge 0}$. Now let us turn to the individual cases.