Random Walk with Long-Range Constraints

We consider a model of a random height function with long-range constraints on a discrete segment. This model was suggested by Benjamini, Yadin and Yehudayoff and is a generalization of simple random walk. The random function is uniformly sampled from all graph homomorphisms from the graph P_{n,d} to the integers Z, where the graph P_{n,d} is the discrete segment {0,1,..., n} with edges between vertices of different parity whose distance is at most 2d+1. Such a graph homomorphism can be viewed as a height function whose values change by exactly one along edges of the graph P_{n,d}. We also consider a similarly defined model on the discrete torus. Benjamini, Yadin and Yehudayoff conjectured that this model undergoes a phase transition from a delocalized to a localized phase when d grows beyond a threshold c*log(n). We establish this conjecture with the precise threshold log_2(n). Our results provide information on the typical range and variance of the height function for every given pair of n and d, including the critical case when d-log_2(n) tends to a constant. In addition, we identify the local limit of the model, when d is constant and n tends to infinity, as an explicitly defined Markov chain.


Introduction
Given two graphs G and H, a graph homomorphism from G to H is a function f : V(G) → V(H) such that if x and y are neighbors in G, then f(x) and f(y) are neighbors in H. A graph homomorphism from a graph G to Z is then a map from the vertex set of G to the integers that maps adjacent vertices to adjacent integers. For a given vertex v_0 ∈ G, we denote by Hom(G, v_0) the set of all homomorphisms from G to Z which map v_0 to 0. Precisely, Hom(G, v_0) := {f : V(G) → Z | f(v_0) = 0, |f(x) − f(y)| = 1 when (x, y) ∈ E(G)}.
The set Hom(G, v_0) is non-empty and finite when G is finite, bipartite and connected. Benjamini, Häggström and Mossel [1] initiated the study of random Z-homomorphisms, that is, uniformly chosen elements of Hom(G, v_0). Special cases of this model include the simple random walk, when G = {0, 1, . . . , n} with nearest-neighbor connections, the random walk bridge, when G is a cycle, and the branching random walk, when G is a tree. The model is sometimes referred to as a G-indexed random walk. The behavior of typical Z-homomorphisms is poorly understood for general graphs G. Beyond simple and branching random walks, results are available mainly for the hypercube [8,6], high-dimensional cubic lattices [10] and expander and tree graphs [1,11]. In particular, the case when G = Z_{2n}^2, a two-dimensional discrete torus, appears completely open. This case is related to the 6-vertex, square-ice and antiferromagnetic 3-state Potts models of statistical physics (see [10]).
Benjamini, Yadin and Yehudayoff [2] suggested the study of this model when G = T_{n,d} is a certain one-dimensional graph with long-range edges, defined below. In this work we study the properties of the model on this graph, as well as its close relative, the graph P_{n,d}. Specifically, let P_{n,d}, for n, d ≥ 1, be the graph whose vertex set is {0, 1, . . . , n} and whose edges are (k, m) for |k − m| = 1, 3, . . . , 2d + 1. (1)
[Figure 1: a typical sample from Hom(P_{n,d}, 0). The case d = 0 is just a simple random walk. The simulation uses a Metropolis algorithm (see, e.g., [9, Chapter 3]) and coupling from the past [12].]
Thus, a uniformly chosen random function f from Hom(P_{n,d}, 0) is a simple random walk conditioned on satisfying |f(i) − f(j)| = 1 whenever i, j have different parity and are at distance at most 2d + 1. Figure 1 shows a typical sample from Hom(P_{n,d}, 0). Similarly, let T_{n,d}, for n ≥ 1 even and d ≥ 1, be the graph whose vertex set is the discrete cycle {0, 1, . . . , n − 1} and whose edges are (k, m) for pairs of vertices at cycle distance 1, 3, . . . , 2d + 1. (2) Thus, a uniformly chosen random function f in Hom(T_{n,d}, 0) is a simple random walk bridge conditioned on satisfying |f(i) − f(j)| = 1 whenever i, j have different parity and are at distance at most 2d + 1 on the cycle. In the rest of the paper we abbreviate Z-homomorphisms to homomorphisms. We shall loosely refer to homomorphisms on P_{n,d} as being on the line, and to homomorphisms on T_{n,d} as being on the torus.
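For small parameters the uniform measure on Hom(P_{n,d}, 0) can be examined by direct enumeration rather than by the Metropolis/coupling-from-the-past scheme mentioned above. The following sketch (function names are ours, not from the paper) enumerates all homomorphisms by filtering ±1 increment sequences against the long-range constraints.

```python
import itertools

def enum_homs(n, d):
    """All f in Hom(P_{n,d}, 0): f(0) = 0 and |f(k) - f(m)| = 1
    whenever |k - m| is odd and at most 2d + 1."""
    odd_dists = range(1, 2 * d + 2, 2)
    homs = []
    for steps in itertools.product((-1, 1), repeat=n):
        f = [0]
        for s in steps:
            f.append(f[-1] + s)  # consecutive values differ by exactly one
        # check the remaining long-range constraints at odd distances 3, ..., 2d+1
        if all(abs(f[k] - f[k + t]) == 1
               for t in odd_dists for k in range(n + 1 - t)):
            homs.append(tuple(f))
    return homs
```

For instance, when 2d + 1 ≥ n the constraint graph is complete bipartite and every homomorphism takes at most 3 values, in line with the supercritical picture described later.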
Our main objects of study are the size of the range of a typical homomorphism on P_{n,d} or T_{n,d} and the variance of the homomorphism at given vertices. For a graph G, the range of a function f : V(G) → Z is defined as Rng(f) := {f(x) | x ∈ V(G)}. Benjamini, Yadin and Yehudayoff made the following conjecture.

Conjecture ([2]). There exist constants b, c > 0 such that if f_{n,d} is uniformly sampled from Hom(T_{n,d}, 0), then |Rng(f_{n,d})| ≤ b with high probability when d ≥ c log n, whereas |Rng(f_{n,d})| tends to infinity in probability when d/log n → 0.

[Figure 2. A homomorphism jumps from some value t at vertex i to t + 3 at vertex k. The minimal length of such a segment is k − i = 2d + 3. In order for this jump to occur, the values at the d + 1 vertices k − 1, k − 3, . . . , k − 2d − 1 are forced to be t + 2. Here d = 3.]
Our work establishes this conjecture with the precise constants b = 3 and c = 1/log 2, both on T_{n,d} and P_{n,d}. In addition, we discover that in the subcritical regime, when d(n) − log_2 n → −∞, the size of the range of a typical homomorphism is of order √(n2^{−d}) and the variance of the homomorphism at vertex k is of order k2^{−d}. Moreover, we explore the behavior in the critical regime, when d(n) − log_2 n → µ ∈ R, and find that in this case, the size of the range is a tight random variable whose distribution is closely related to the Poisson distribution.
Our results may be intuitively understood as follows. Let f ∈ Hom(P_{n,d}). It is not difficult to verify that if f(i + m) − f(i) ≥ 3 then m ≥ 2d + 3. Figure 2 shows such an event. Moreover, if m = 2d + 3 and this event occurs, then necessarily f(i + m − 1) = f(i + m − 3) = · · · = f(i + m − 2d − 1) = f(i) + 2. Thus, intuitively, the probability that the homomorphism changes its height by 3 on any given small segment is about 2^{−d}. Therefore, when n2^{−d} → 0, we will not have any such segment, so that the size of the range of the homomorphism will be bounded by 3. Conversely, when n2^{−d} → ∞, the expected number of segments with an upward or downward movement of size 3 will be roughly n2^{−d}. Since the directions of these movements should be only mildly correlated, we expect the size of the resulting range to be of order √(n2^{−d}). Our work makes these ideas precise.
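The claim that a height change of 3 requires a segment of length at least 2d + 3 can be checked exhaustively for small parameters. The sketch below (our own naive enumeration, not taken from the paper) computes, over all of Hom(P_{9,1}, 0), the shortest segment across which the height changes by 3; by the claim and Figure 2 it should be exactly 2d + 3 = 5.

```python
import itertools

def enum_homs(n, d):
    # All f in Hom(P_{n,d}, 0): f(0) = 0, |f(k) - f(m)| = 1 for odd |k - m| <= 2d + 1.
    odd_dists = range(1, 2 * d + 2, 2)
    for steps in itertools.product((-1, 1), repeat=n):
        f = [0]
        for s in steps:
            f.append(f[-1] + s)
        if all(abs(f[k] - f[k + t]) == 1
               for t in odd_dists for k in range(n + 1 - t)):
            yield f

def min_jump_length(n, d):
    # Shortest m such that |f(i + m) - f(i)| >= 3 for some homomorphism f and vertex i.
    best = None
    for f in enum_homs(n, d):
        for i in range(n + 1):
            for j in range(i + 1, n + 1):
                if abs(f[j] - f[i]) >= 3 and (best is None or j - i < best):
                    best = j - i
    return best
```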

2.1. Homomorphisms on the line.
In this section we present results on homomorphisms on the graph P_{n,d}, which was defined in (1). Throughout this section, f_{n,d} denotes a uniformly chosen homomorphism in Hom(P_{n,d}, 0). We state results regarding the size of the range of a typical homomorphism. As a homomorphism must change its value by exactly one along edges, the range is always of size at least 2. In fact, the range is of size exactly 2 only for two particular homomorphisms, and of size at least 3 otherwise. We shall show that the size of the range is 3 plus a term of order √(n2^{−d}). Hence, we distinguish between three regimes, n2^{−d} → ∞, n2^{−d} → 0 and n2^{−d} → λ ∈ (0, ∞), termed the subcritical regime, the supercritical regime and the critical regime, respectively.
The supercritical regime. The supercritical regime is when d(n) − log_2 n → ∞ (i.e. n2^{−d(n)} → 0) as n → ∞. In this case, the large number of constraints prevents a typical homomorphism from growing. In fact, we show that, with high probability, it will take on only 3 values.

Theorem 2.1. For any positive integers n, d and r, we have P(|Rng(f_{n,d})| ≥ 3 + r) ≤ n^r 2^{−dr} and P(|Rng(f_{n,d})| < 3) ≤ 2^{1−n/2}.
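Both bounds of Theorem 2.1 can be checked against exhaustive enumeration for small parameters. This is a verification sketch with names of our choosing; for n = 6, d = 3 the first bound is 6 · 2^{−3} = 0.75 and the second is 2^{−2}.

```python
import itertools

def enum_homs(n, d):
    # All f in Hom(P_{n,d}, 0), obtained by filtering +/-1 increment sequences.
    odd_dists = range(1, 2 * d + 2, 2)
    homs = []
    for steps in itertools.product((-1, 1), repeat=n):
        f = [0]
        for s in steps:
            f.append(f[-1] + s)
        if all(abs(f[k] - f[k + t]) == 1
               for t in odd_dists for k in range(n + 1 - t)):
            homs.append(f)
    return homs

def range_tail_prob(homs, r):
    # Empirical P(|Rng(f)| >= 3 + r) under the uniform measure on Hom(P_{n,d}, 0).
    return sum(len(set(f)) >= 3 + r for f in homs) / len(homs)
```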
The following corollary gives more precise information about the structure of a typical homomorphism in the supercritical regime. Denote by V_i := {2k + i | 0 ≤ 2k + i ≤ n}, i = 0, 1, the even and odd vertices, respectively, and denote by Ω_0 and Ω_1 the set of homomorphisms which are constant on V_0 and V_1, respectively. Note that for each i ∈ {0, 1}, conditioned on f ∈ Ω_i, the random vector (f(x) − f(i) | x ∈ V_{1−i}) consists of independent uniform signs. Moreover, if n tends to infinity through odd numbers then P(Ω_0) → 1/2, and if n tends to infinity through even numbers then P(Ω_0) → 1/3.

The subcritical regime. The subcritical regime is when d(n) − log_2 n → −∞ (i.e. n2^{−d(n)} → ∞) as n → ∞. Here, the relatively small number of constraints allows a typical homomorphism to grow.

Theorem 2.3. There exist absolute constants C, c > 0 such that for any positive integers n and d, we have 3 + c√(n2^{−d}) − 2^{1−n/2} ≤ E|Rng(f_{n,d})| ≤ 3 + C√(n2^{−d}). Moreover, for any ǫ > 0 there exists a δ > 0 such that for any positive integers n and d, we have In particular, if d(n) − log_2 n → −∞ as n → ∞ then for any positive integer r, we have

The next theorem quantifies the rate of growth of the variance of the homomorphism.
The critical regime. The critical regime is when d(n) − log_2 n → −log_2 λ (i.e. n2^{−d(n)} → λ) as n → ∞, for some λ ∈ (0, ∞). In this case, the balance between the number of constraints at each vertex and the amount of time available leads to an interesting limiting behavior. Perhaps surprisingly, it turns out that the parity of n induces an effect which does not disappear in the limit.
Denote by µ_even(λ) the distribution of a Poisson(λ) variable conditioned to be even, and denote by µ_odd(λ) the distribution of a Poisson(λ) variable conditioned to be odd. Define the parity-biased Poisson distribution with parameters λ and α, denoted µ(λ, α), to be a certain convex combination of µ_even(λ) and µ_odd(λ). One may check that µ(λ, α)({r}) = α(r)e^{−λ}λ^r / (r! Z(λ, α)), where α(r) = α if r is even and α(r) = 1 if r is odd and where Z(λ, α) is a normalizing constant.
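Assuming the pointwise form µ(λ, α)({r}) ∝ α(r) e^{−λ} λ^r / r! described above, the parity-biased Poisson distribution can be tabulated directly. The following sketch (function name ours) computes the distribution on {0, 1, . . . , K} and recovers the ordinary Poisson(λ) distribution at α = 1.

```python
import math

def parity_biased_poisson(lam, alpha, K=100):
    """Parity-biased Poisson: weight alpha(r) * exp(-lam) * lam^r / r!,
    with alpha(r) = alpha for even r and alpha(r) = 1 for odd r, normalized."""
    w = [(alpha if r % 2 == 0 else 1.0)
         * math.exp(-lam) * lam ** r / math.factorial(r)
         for r in range(K + 1)]
    Z = sum(w)  # normalizing constant Z(lam, alpha), up to truncation at K
    return [p / Z for p in w]
```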
In particular, we see that the Poisson(λ) distribution is obtained as µ(λ, 1). Let (S_i | i = 0, 1, . . . ) denote a simple random walk, and let N^+(λ) and N^−(λ) be random variables with parity-biased Poisson distributions, independent of (S_i | i ≥ 0). Then S_{N^+(λ)} and S_{N^−(λ)} are simple random walks stopped at independent random times. For a positive integer k, denote Rng(S_k) := {S_0, S_1, . . . , S_k}. In fact, as the proof shows, one may couple a critical homomorphism to a simple random walk run for N^+ or N^− steps, according to the parity of n.

2.2. Homomorphisms on the torus.
In this section we present results for homomorphisms on the graph T_{n,d}, which was defined in (2). Throughout this section, n is even and f_{n,d} denotes a uniformly chosen homomorphism in Hom(T_{n,d}, 0).
The supercritical regime. The supercritical regime is when d(n) − log_2 n → ∞ (i.e. n2^{−d(n)} → 0) as n → ∞. Similarly to the case on the line, the large number of constraints causes a typical homomorphism to take on only 3 values.

Theorem 2.6. For any positive even integer n and any positive integers d and r, we have

Similarly to the case of the line, the following corollary gives more precise information about the structure of a typical homomorphism in the supercritical regime. Denote by V_i := {2k + i | 0 ≤ k < n/2}, i = 0, 1, the even and odd vertices of T_{n,d}, respectively, and denote by Ω_0 and Ω_1 the set of homomorphisms which are constant on V_0 and V_1, respectively. Note that for each i ∈ {0, 1}, conditioned on f ∈ Ω_i, the random vector (f(x) − f(i) | x ∈ V_{1−i}) consists of independent uniform signs.
Thus, a typical homomorphism in the supercritical regime is constant on either the even or odd vertices of T_{n,d}, with the two cases being equally likely. The effect induced by the parity of n in Corollary 2.2 does not appear here, as n is always assumed to be even in the case of the torus.

The subcritical regime. The subcritical regime is when d(n) − log_2 n → −∞ (i.e. n2^{−d(n)} → ∞) as n → ∞. As before, the relatively small number of constraints allows a typical homomorphism to grow.

Theorem 2.8. There exist absolute constants C, c > 0 such that for any positive even integer n and any positive integer d, we have Moreover, for any ǫ > 0 there exists a δ > 0 such that for any positive even integer n and any positive integer d, we have In particular, if d(n) − log_2 n → −∞ as n → ∞ then for any positive integer r, we have

The critical regime. The critical regime is when d(n) − log_2 n → −log_2 λ (i.e. n2^{−d(n)} → λ) as n → ∞, for some λ ∈ (0, ∞). As for the line, this choice of parameters leads to an interesting limiting behavior. In this case, the random homomorphism behaves similarly to a simple random walk bridge of length 2N, where N is an independent random variable whose distribution is a type of biased Poisson distribution. The distribution of N here is biased differently from the case of the line. Specifically, N has the distribution of a Poisson random variable with parameter λ′ conditioned to be equal to another such independent Poisson random variable. Denote by ν(λ′) the distribution of X conditioned on X = Y, where X and Y are independent Poisson(λ′) random variables. One may check that ν(λ′)({r}) = λ′^{2r} / ((r!)^2 Z(λ′)), where Z(λ′) is a normalizing constant. For a positive even integer k, let (B^k_i | 0 ≤ i ≤ k) denote a simple random walk bridge of length k (that is, a simple random walk conditioned on B^k_k = 0), and let N(λ′) ∼ ν(λ′) be an independent random variable.
Thus, B^{2N(λ′)} is obtained by first sampling N(λ′) and then sampling a simple random walk bridge of length 2N(λ′). For a positive even integer k, denote where λ′ is defined by (6).
This theorem is closely related to Theorem 2.5. On the line, the range of a homomorphism in the critical regime is determined by a simple random walk whose length is a parity-biased Poisson random variable. Note that if we condition a simple random walk with a Poisson(µ) number of steps to end at its initial value, then the number of steps it takes has distribution ν(µ/2). To see this, observe that the numbers of positive and negative steps of the random walk are independent Poisson(µ/2) random variables and we are conditioning on these variables being equal. The same phenomenon continues to hold if we start with a simple random walk taking a parity-biased Poisson(µ, α) number of steps. Indeed, the number of steps must be even in order for the walk to end at its initial value, and a parity-biased Poisson(µ, α) variable conditioned to be even has the same distribution as a Poisson(µ) variable conditioned to be even.
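The thinning observation in this paragraph can be checked exactly: if the step count K is Poisson(µ), then P(K = 2r and the walk returns to 0) = e^{−µ} µ^{2r}/(2r)! · C(2r, r) 2^{−2r}, which simplifies to e^{−µ}(µ/2)^{2r}/(r!)^2, proportional to the ν(µ/2) weight P(X = r)P(Y = r) for X, Y ∼ Poisson(µ/2). A short verification sketch (names ours):

```python
import math

def endpoint_weight(mu, r):
    # P(Poisson(mu) step count equals 2r AND the +/-1 walk returns to 0).
    return (math.exp(-mu) * mu ** (2 * r) / math.factorial(2 * r)
            * math.comb(2 * r, r) * 2.0 ** (-2 * r))

def nu_weight(mu, r):
    # Unnormalized nu(mu/2) weight: P(X = r) * P(Y = r) for X, Y ~ Poisson(mu/2).
    return (math.exp(-mu / 2) * (mu / 2) ** r / math.factorial(r)) ** 2
```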

2.3. Local limits on the line.
In this section we present results for homomorphisms on the graph P_{n,d}, which was defined in (1), when d is constant and n tends to infinity.
Our first result gives an approximate count of the number of homomorphisms.
Theorem 2.10. For any positive integer d there exists a constant C(d) > 0 such that where λ(d) is the unique positive solution to the equation

Remark. The constant λ(d) above satisfies

Our next result concerns the local limit of the homomorphism. This local limit lives on P_{∞,d}, the limiting graph of P_{n,d}. Precisely, P_{∞,d}, for d ≥ 1, is the graph whose vertex set is {0, 1, 2, . . . } and whose edges are (k, m) for |k − m| = 1, 3, . . . , 2d + 1. For a function g defined on a domain Ω and a set A ⊆ Ω, we write g|_A for the restriction of g to A.
Remark. The random homomorphism f_{∞,d} is described by an explicit Markov chain on 2d + 2 states, as shown in Figure 12, through a process which decodes infinite words on the alphabet {a, b, A, B} into homomorphisms on P_{∞,d}. See Section 6 for details.

Policy on constants:
In the rest of the paper we employ the following policy on constants. We write C, c, C ′ , c ′ for positive absolute constants, whose values may change from line to line. Specifically, the values of C, C ′ may increase and the values of c, c ′ may decrease from line to line.

Preliminaries
We gather here a number of general tools which we require for our proof.
Lemma 3.1. Let E and F be events in a discrete probability space and let T : E → P(F) be a mapping, where P(A) denotes the collection of all subsets of A. For f ∈ F, define T^{−1}(f) := {e ∈ E | f ∈ T(e)}. If for some p, q > 0, we have P(T(e)) ≥ P(e) · p for all e ∈ E, and |T^{−1}(f)| ≤ q for all f ∈ F, then P(E) ≤ (q/p) · P(F).

We will use a theorem by Benjamini, Häggström and Mossel [1] to transfer results from the line to the torus. This is an FKG inequality for the measure induced on non-negative homomorphisms by taking pointwise absolute value.
Given a set V, we equip Z^V with the usual pointwise partial order ≼. A function φ : Z^V → R is increasing if φ(f) ≤ φ(g) whenever f ≼ g.

Theorem 3.3 ([1]). Let G be a finite, bipartite and connected graph, let v_0 ∈ V(G) and let f be a uniformly chosen homomorphism in Hom(G, v_0). Then, for any two increasing functions φ, ψ : Z^{V(G)} → R, E[φ(|f|)ψ(|f|)] ≥ E[φ(|f|)] E[ψ(|f|)], where |f| is the non-negative homomorphism obtained from f by taking pointwise absolute value.
Consider the event Q that a homomorphism on P n,d is in fact a valid homomorphism on T n,d (by identifying the vertex n with the vertex 0). If we could write 1 Q (f ) = ψ(|f |) for some function ψ then we may be able to use the above theorem to transfer results from the line to the torus by conditioning on Q. However, it is not the case that 1 Q is a function of the absolute value of the homomorphism, and so we cannot apply Theorem 3.3 directly. Instead, we make use of Theorem 3.3 in order to prove a similar proposition specialized for our purposes. See Proposition 5.8 in Section 5 for more details.
Then, for any integer r > 0 and any a ∈ R, we have

The next proposition, which is a consequence of the previous result, is useful for analyzing homomorphisms on the torus.

Proposition 3.5. Let n be a positive integer, let a_1, . . . , a_n ∈ R satisfy |a_i| ≥ 1 for 1 ≤ i ≤ n and let π be a uniformly chosen permutation of {1, 2, . . . , n}. Denote Then, for any integer r > 0, we have

Proof. Let ǫ_1, . . . , ǫ_n ∼ U({−1, 1}) be uniform independent signs. Denote a := a_1 + · · · + a_n. Let T ∼ Bin(n, 1/2) be independent of π and observe that an observation which was pointed out to us by Gady Kozma. Therefore, by Theorem 3.4,

The next lemma presents a simple result on limits of distributions.

Homomorphisms on the Line
In this section we will prove the theorems regarding homomorphisms on the line which were stated in Section 2.1. As was pointed out in the introduction, it seems unlikely that a homomorphism jumps from some value t to t ± 3 on any given small segment. Figure 2 illustrates a section of a homomorphism for which such a jump occurs. The main idea in our proofs is to identify the vertices at which these jumps occur, as they determine the large-scale behavior of the homomorphism. That is, the values of the homomorphism at the jumps contain the global information necessary to determine the range and the variance. To this end, we first define the notion of the (local) average height of a homomorphism at a vertex (this is illustrated by the horizontal dashed line in Figure 3). The average height at a vertex is determined by finding the closest past time at which 3 different values appeared consecutively and taking the midpoint to be the average height. For vertices for which no such time exists (as is the case for the 0 vertex), we set the average height to 0. One can think of the average height as a process beginning at 0 that "lazily follows" the homomorphism, moving only to ensure that it is never at distance greater than 1 from it. With this notion in hand, we define a jump as a change in the average height. Of course, any such jump has an associated sign or direction, determined by whether the average height increased or decreased.
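The informal description above can be implemented literally. The sketch below follows our reading of it (the paper's formal inductive definition is given later): the average height at k is the midpoint of the most recent run of three consecutive distinct values ending at or before k, and 0 if there is none. On any ±1 path, the resulting process indeed stays within distance 1 of the path.

```python
def average_height(f):
    """f is a list of heights with |f[j] - f[j-1]| = 1.  Returns h, where h[k]
    is the middle value of the latest triple f[j-2], f[j-1], f[j] (j <= k)
    taking 3 distinct values, and 0 if no such triple exists yet."""
    h, cur = [], 0
    for k in range(len(f)):
        if k >= 2 and len({f[k - 2], f[k - 1], f[k]}) == 3:
            # three distinct consecutive values form a monotone run;
            # its midpoint is the middle value f[k - 1]
            cur = f[k - 1]
        h.append(cur)
    return h
```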
We later show that the probability of a jump occurring at a given vertex (with index greater than 2d) is no more than 2^{−d}. This will show that in the supercritical regime, with high probability, there will be no jumps after vertex 2d. That is, the average height does not change after time 2d. A moment's reflection reveals that this means that the homomorphism takes on at most 3 different values (not 4, as it may initially seem).
We do not give a lower bound for the probability of a jump occurring at a given vertex. Instead, we only show that the typical number of jumps is of order n2^{−d}, that the jumps are approximately equidistributed on the line and that, moreover, the directions of these jumps are weakly correlated. Of course, if the directions of these jumps were truly independent, then the values of the homomorphism at the jumps would constitute a simple random walk. We will show that, at least in terms of the maximum/range of the homomorphism, the behavior is very similar to that of a simple random walk. This will show that the range is of order √(n2^{−d}) and that the variance at a vertex k is of order k2^{−d}.
In the analysis of these so-called jumps, we encounter a minor complication due to the fact that jumps in the same direction can "clump" together. Of course jumps cannot occur consecutively in the sense of two consecutive vertices on the line. So then what is the minimal distance between two jumps? The answer is twofold. The minimal distance between two jumps with different directions is 2d + 3, while two jumps in the same direction can already occur at distance 2d + 1. This phenomenon will pop up again and again in our analysis. For example, its manifestation is evident in the Markov chain describing the local limit in Section 6 (see Figure 12).
One consequence of this phenomenon is that if we condition on the event that a jump occurs at two given vertices, say k_1 and k_2 with k_1 < k_2, the directions of these jumps are non-negatively correlated. However, conditioning also on the event that a jump does not occur just after the first of these jumps (i.e. at k_1 + 2d + 1), their directions become independent. This leads us to consider "chains" of jumps. A chain is just a sequence of minimal-distance same-direction jumps. Now, if we condition on the event that there are chains of given lengths (and not longer) at any number of given vertices, the directions of all these chains will be independent. This will allow us to reduce some of the analysis to a case of independent variables.

4.1. Definitions.
We consider the graph P_{n,d} whose vertex set is {0, 1, . . . , n} and whose edges are (k, m) for |k − m| = 1, 3, . . . , 2d + 1. Throughout this section, Hom(P_{n,d}) := Hom(P_{n,d}, 0), f is a uniformly sampled homomorphism from Hom(P_{n,d}), the probability space is the uniform distribution on the set Hom(P_{n,d}), and events are subsets of Hom(P_{n,d}). We define h(k), the (local) average height at vertex k, inductively as follows. Set h(0) := 0. For

Define the event
When A_k occurs, we say that a jump occurred at vertex k (see Figure 3). Let S = S(f) := {k | A_k occurs} (11) be the positions of the jumps, and denote by R := |S ∩ {2d + 2, . . . , n}| (12) the number of jumps after vertex 2d + 1. Recall that if a jump occurs at vertex k, then the minimal possible value of k′ > k at which another jump can occur is k + 2d + 1. Let C_{k,t} be the event that there is a chain of t minimal-distance jumps ending at vertex k. That is, for t ≥ 1 and (t − 1)(2d + 1) < k ≤ n, we define

Let I = {s_1, . . . , s_t} ⊂ {1, 2, . . . , n}. We say that I is a feasible jump structure if {S = I} ≠ ∅. Observe that {S = I} ≠ ∅ if and only if P(S = I) > 0, if and only if, when we reorder the s_i to satisfy s_1 < s_2 < · · · < s_t, s_1 is even and, for 2 ≤ j ≤ t, s_j − s_{j−1} is odd and satisfies s_j − s_{j−1} ≥ 2d + 1.
In addition, we say that a subset I ⊂ {1, . . . , n} is a feasible jump sub-structure if it is a subset of a feasible jump structure, or equivalently, if {I ⊂ S} ≠ ∅. For a feasible jump sub-structure I, the event {I ⊂ S} can be uniquely written as C_{k_1,t_1} ∩ · · · ∩ C_{k_m,t_m}, where These conditions ensure that there is no overlap between the different chains, and moreover, that there is some gap between them (since otherwise they would merge into a larger chain). For such I, we define C(I) := {(k_1, t_1), . . . , (k_m, t_m)} and refer to this as the chain structure of I.

4.2. Main lemmas.
As the above definitions suggest, the notion of a jump at a given vertex plays an important role in our analysis. It turns out that the behavior of jumps at the first 2d + 1 vertices differs significantly from that of the other vertices. Hence, it will be a recurring theme throughout Section 4 that these cases are handled separately. The first two lemmas concern the probability of jumps at given vertices. The first shows that jumps at the first 2d + 1 vertices are not unlikely, while the second shows that elsewhere jumps are unlikely.
Proof. Denote J := A_1 ∪ · · · ∪ A_{2d+1}. We shall show that from which the result easily follows. Note that, by (13), A_k = ∅ for k = 1, 3, . . . , 2d + 1, so that We note the following useful observation. For a homomorphism f ∈ Hom(P_{n,d}), we have We begin by showing that One may check that if f ∈ J \ A_2 then f_1 ∈ A_2. Recalling (15), it is clear that this mapping is invertible, and so we have Again one may check that this mapping is well-defined (in fact, this mapping can be defined on the entire space). Since it is injective (recall (15)), we obtain P( To see that this mapping is well-defined, recall (15), and note that f ∈ J^c implies that f(0) = f(2) = · · · = f(2d) = 0. This is not an injective mapping; however, it satisfies |T^{−1}(g)| ≤ 2 for g ∈ A_2. Therefore, by Lemma 3.1, we have P(J^c) ≤ 2P(A_2).
The next lemma is concerned with the probability of jumps occurring at given vertices after 2d + 1. It states that this probability is exponentially small in d times the number of jumps. The idea behind the proof is to remove the jumps and replace the freed up areas with segments of constant average height. This allows us to gain entropy by setting the values at every other vertex in each such segment to be the average height ±1. See Figure 4.

Lemma 4.2. For any t ≥ 1 and for any 2d + 1 < s_1 < · · · < s_t ≤ n, we have

Proof. If I := {s_1, . . . , s_t} is not a feasible jump sub-structure then there is nothing to prove. Otherwise, we consider the chain structure of I, C(I) = {(k_1, t_1), . . . , (k_m, t_m)}, where we have ordered the elements so that the k_j are increasing. Due to our assumption that s_1 > 2d + 1, we have k_1 > (2d + 1)t_1. We note that it is enough to prove that for all 1 ≤ j ≤ m, We prove something stronger. Let 1 ≤ k ≤ n and t ≥ 1 be such that In order to show this, we construct a mapping which removes this chain and replaces the freed-up segment with a segment of constant average height (see Figure 4). Formally, we proceed as follows.
follows immediately from (17) and the fact that f ∈ Hom(P n,d ). It remains to check the case when i ≤ k ′ and j > k ′ and the case when i < k − 1 and j ≥ k − 1.
We begin with the first case. Here, we have Therefore, if i has the same parity as k′ then f(i) = f(k′) and |w(j − k′)| = 1, since j has the same parity as k′. Otherwise, i has the opposite parity of k′, and then |f(i) − f(k′)| = 1 and w(j − k′) = 0. In the second case, we have Note that C_{k,t} ⊂ B_{k−1} and that i − k′ has the same parity as j + δ − k. One finds in a similar manner as in the first case that |f(j + δ) − f(k − 1)| = 1 and w(i − k′) = 0 when i has the same parity as k′, and that f(j + δ) = f(k − 1) and |w(i − k′)| = 1 when i has the opposite parity of k′. Hence, ∆_{i,j} = 1.
Observe that for any f ∈ C_{k,t}, necessarily, Thus, it is easy to see that the mapping is injective. Moreover, the event {f|_{{0,...,k′}} = ξ} is clearly invariant under this mapping, so that proving (16).
Remark. The proof in fact shows that the probability of the event A_{s_1} ∩ · · · ∩ A_{s_t} is bounded by 2^{−dt−(⌊t_1/2⌋+···+⌊t_r/2⌋)}, where t_1, . . . , t_r are the lengths of the chains corresponding to s_1, . . . , s_t. With a small modification, the proof can be enhanced to give the bound 2^{−dt−⌊t/2⌋}, but we neither prove nor use this.
Recall the definition of R from (12). We would like to obtain inequalities on the probability that R takes a given value. We could do this in a manner similar to the proof of the previous lemma. However, for variety, we prefer to employ a more direct combinatorial technique. This approach also has the advantage of introducing Lemma 4.4, which gives a useful description of the structure of the homomorphisms in Hom(P_{n,d}).
We decompose a homomorphism into two parts (see Figure 5). The first part constitutes the changes in average height (the underlying walk) of the homomorphism, while the second part constitutes the fluctuations around the average height (the segments of constant average height). For a feasible jump sub-structure I, define the chain points of I by and the fluctuation points of I by That is, a point is a fluctuation point if its distance from the chain to its left is positive and even. In particular, recalling the definition of S from (11), for any homomorphism f and any k ∈ FP(S(f)), f is not at its average height at k. Now, for a homomorphism f, define and and

Claim 4.3. For any feasible jump structure I, we have

Proof. Suppose that C(I) = {(k_1, t_1), . . . , (k_m, t_m)}. Denote t := |I| = t_1 + · · · + t_m and s := min I. Then Therefore, recalling that s is even by (13),

Proof. We shall describe the inverse mapping, which maps a pair (X, be the average height accumulated by chains ending before i. For (k, t) ∈ C(I), denote by k′(k, t) := k − (2d + 1)t − 1 the first vertex of the chain and observe that H( (1) X is uniformly distributed over {−1, 1}^{C(S)}.
(3) The difference in average height between two vertices 0 ≤ k_0 < k_1 ≤ n is a sum of independent variables, namely,

Proof. The first statement is an immediate consequence of Lemma 4.4. The second statement is in turn a consequence of the first statement and of the definition of the chain structure C(S). For the third statement, since we see that

Proof. By considering the distance between two consecutive values in I and recalling (13), we see that the number of feasible jump structures I having |I| = r and min I > 2d + 1 (where we set min ∅ := ∞) is given by the number of non-negative integer solutions to the equation under the additional constraints that x_1 is even and at least 2d + 2 and, for 2 ≤ j ≤ r, x_j is odd and at least 2d + 1. Therefore, after substituting x_1 = 2y_1 + 2d + 2 and x_j = 2y_j + 2d + 1 for 2 ≤ j ≤ r, we see that c_{d+1}(r) is equal to the number of non-negative integer solutions to the equation from which the first result easily follows. Similarly, the number of feasible jump structures I having |I| = r + 1 and min I = 2i is given by the number of non-negative integer solutions to the equation under the additional constraint that, for 1 ≤ j ≤ r, x_j is odd and at least 2d + 1. Therefore, substituting x_j = 2y_j + 2d + 1 as before, we see that, for 1 ≤ i ≤ d, c_i(r) is equal to the number of non-negative integer solutions to the equation from which the second result follows.
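The stars-and-bars substitution in the last proof can be sanity-checked numerically. Under the feasibility conditions stated earlier (s_1 even, all gaps odd and at least 2d + 1), the number of feasible jump structures with |I| = r and min I > 2d + 1 should equal C(M + r, r) with M = ⌊(n − (2d + 2) − (r − 1)(2d + 1))/2⌋. A brute-force sketch (our names, not the paper's):

```python
import itertools
import math

def count_structures(n, d, r):
    # Brute-force count of feasible jump structures with |I| = r and min I > 2d + 1:
    # s_1 even and >= 2d + 2, every gap s_{j} - s_{j-1} odd and >= 2d + 1.
    count = 0
    for I in itertools.combinations(range(1, n + 1), r):
        if I[0] % 2 != 0 or I[0] < 2 * d + 2:
            continue
        gaps = [b - a for a, b in zip(I, I[1:])]
        if all(g % 2 == 1 and g >= 2 * d + 1 for g in gaps):
            count += 1
    return count

def count_by_formula(n, d, r):
    # After substituting x_1 = 2y_1 + 2d + 2, x_j = 2y_j + 2d + 1, the count is
    # the number of solutions of y_1 + ... + y_r <= M, i.e. C(M + r, r).
    M = (n - (2 * d + 2) - (r - 1) * (2 * d + 1)) // 2
    return math.comb(M + r, r) if M >= 0 else 0
```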
Lemma 4.8. For any positive integer r such that (2d + 1)r + 2d ≤ n, we have

Proof. By Lemma 4.4, Claim 4.3 and Claim 4.7, we have where c_i(r) is given by (21) and It is easy to see that and a computation shows that We present this last computation for i = d + 1. We have it suffices to show that the product above is at most 1 and at least 1 − Cdr/(n − Cdr). Indeed, every element in the product is clearly at most 1, and hence so is the product. For the other inequality, note that the last element in the product is the smallest, so that the product is at least The statement now follows directly from (23) and (24).
For any positive integer r such that (2d + 1)r + 2d ≤ n, we have

4.3. Proof of theorems.
We are now ready to prove the theorems stated in Section 2.1.
The supercritical regime. We prove Theorem 2.1 and Corollary 2.2. By Lemma 4.2, we have One may easily check that |Rng(f)| ≤ R + 3, so that Moreover, it is easy to describe all homomorphisms which take on at most 3 values. Denote by V_0 and V_1 the even and odd vertices in {0, 1, . . . , n}, respectively, and denote by Ω_0 and Ω_1 the set of homomorphisms which are constant on V_0 and V_1, respectively. Then it is clear that Also, note that |V_0| = ⌊n/2⌋ + 1 and |V_1| = ⌈n/2⌉, so that |Ω_0| = 2^{|V_1|} = 2^{⌈n/2⌉} and |Ω_1| = 2^{|V_0|} = 2^{⌊n/2⌋+1}. Therefore, completing the proof of Theorem 2.1. To obtain Corollary 2.2, note that

The subcritical regime. Before proving the relevant theorems, we need a better understanding of the typical number of jumps.
Lemma 4.10. For any ǫ > 0, we have

Proof. Let 0 < ǫ < 1 and 1 ≤ i ≤ d + 1. Lemma 4.8 implies that if c is small enough, Lemma 3.2 now yields the result.

We shall also require a similar inequality for the number of jumps up to a given vertex. For

Proof. First note that the statement is trivial when n2^{−d} < C. Thus, we may assume that n2^{−d} ≥ C. Denote Hence, by the assumption that n2^{−d} ≥ C and by (26), Finally, by Lemma 4.1, we have

Proof. If k is odd then |f(k)| ≥ 1 and the result follows by the fact that f(k) is symmetric. Henceforth, we assume that k is even.
One may check that this mapping is indeed well-defined and that it is injective. Since |f 0 (k)| = 2 when f (k) = 0, and since the mapping is injective, we have Therefore,

We are now ready to prove Theorem 2.4 and Theorem 2.3. In both proofs, we consider the following modified average height h ′ . For 1 ≤ k ≤ n, define Recall that Corollary 4.5 implies that, for any 2d

Proof of Theorem 2.4. By the above remark, we have Notice that, conditioned on S, the expectation of h ′ (k) is zero, so that by the law of total variance, To obtain an upper bound on Var(h ′ (k)), we use Lemma 4.2 to obtain for any 1 ≤ j ≤ n and t ≥ 1. Therefore, by (28) and (29), we have For the lower bound, we note that (j,t)∈C(S∩{2d+2,...,k}) Therefore, by (28), (29) and Lemma 4.12, we have In particular, |f (k)| ≥ |h ′ (k)|/3 when |h ′ (k)| ≥ 3. Therefore, Finally, together with Lemma 4.13, we have

Proof of upper bound in Theorem 2.3. Denote C(S ∩ {2d + 2, . . . , n}) = {(k 1 , t 1 ), . . . , (k m , t m )}, ordering the elements so that the k j are increasing. Observe that for any 1 ≤ j < m and any k j ≤ k ≤ k j+1 , we have that h(k) is between h(k j ) and h(k j+1 ). Therefore, In this notation, by (27) we have where, conditioned on S, {∆(k j ) | 1 ≤ j ≤ m} are independent. Therefore, we may apply Kolmogorov's maximal inequality to the process Therefore, by (30), we have From this we obtain Finally, using the fact that we obtain

Proof of lower bound in Theorem 2.3. We begin by showing that the range is large with high probability, when n2 −d is large enough. Fix 0 < ε < 1. Assume that n2 −d ≥ C/ε. By (25), there exists a δ 1 > 0, depending only on ε, such that This tells us that typically there are many jumps. We now show that typically there are many distinct chains as well. For s ≥ 1, let be the number of sub-chains of length s. Then, as we shall now show, Indeed, denoting C(S ∩ {2d + 2, . . . , n}) = {(k 1 , t 1 ), . . .
, (k m , t m )} and considering the contribution of each chain to M s , we see that Noting that |C(S)| ≥ m now yields (32). By Lemma 4.2, we have Taking s = s 0 large enough, we have by Markov's inequality, Therefore, by (31), (32) and (33), we have for δ 2 := δ 1 /s 0 that Recalling from Corollary 4.5 that, conditioned on S, h(n) is the sum of |C(S)| independent random variables, we may apply Theorem 3.4 to obtain Therefore, Finally, by (34) and (35), for any δ > 0, we have Therefore, there exists a δ > 0, depending only on ε, such that if δ √ n2 −d ≥ 1 then proving (3).

It remains to show the lower bound on the expectation. Note that the statement is trivial when n ≤ 2, and so we may assume that n ≥ 3. By taking ε = 1/4 in (3), noting that | Rng(f )| ≥ 2 and by Theorem 2.1, we conclude that

The critical regime. Here we prove Theorem 2.5. Denote λ := lim n2 −d , which exists and is a positive number by assumption. The proof of Theorem 2.5 consists of two parts. First, we show that R converges to N ± (λ) as n tends to infinity through even or odd integers. Next, we show that in this regime the values at the jumps constitute a simple random walk and that this walk determines the range of the homomorphism. By Lemma 4.2, we have Therefore, the expectation of R is uniformly bounded as n → ∞, and hence, Markov's inequality implies that R is tight as n → ∞. Using notation as in the proof of Lemma 4.8, we have A direct computation shows that for any constant r ≥ 0, we have where γ(k) := 1 if k is even and γ(k) := √ 2 if k is odd. Denoting by J := A 1 ∪ · · · ∪ A 2d+1 the event that a jump occurs prior to vertex 2d + 2, and recalling (22), we obtain where we have used the fact that 2 −1 + · · · + 2 −d → 1. Using the tightness of R, we see that where Z(n) is a normalizing constant. Therefore, recalling the parity-biased Poisson distribution defined in (4) and the equation (5), we see that completing the first part of the proof.
We remark that it is also possible to obtain the limiting distribution of R conditioned on whether or not a jump occurred at the first 2d + 1 vertices. We do not make use of this in our paper but we note the final result. A further calculation gives the following formula for the asymptotic probability of J, and the following formula for the asymptotic distribution of R given 1 J ,

We now proceed to analyze the range of a typical homomorphism in the critical regime. We begin by showing that the jumps are sparse enough so that it is unlikely to have chains of length greater than one. Let be the event that there are no two minimal-distance jumps (i.e. jumps at distance 2d + 1). We wish to show that P(B) = 1 − o(1). Indeed, by considering the first 2d + 1 elements in the intersection separately from the rest, Lemma 4. In other words, for any I ∈ I, conditioned on S = I, (h ′ (s) | 2d + 1 < s ∈ S) is a simple random walk of length R (without the leading zero). Since for any I ∈ I, where S i is an independent simple random walk run for i steps. Define the event It is not difficult to check that We now show that P(E) = 1 − o(1). Observe that Lemma 4.4 implies that Hence, by Lemma 4.2, and since J ⊂ E, we have Finally, Theorem 2.5 follows from (37), (38), (39) and the fact that P(E ∩ B) = 1 − o(1).

Homomorphisms on the Torus
In this section we prove the theorems regarding homomorphisms on the torus which were stated in Section 2.2. The ideas and notions previously introduced in Section 4 to handle the case of homomorphisms on the line will still prove to be effective on the torus, although some of them will need to be adapted. For example, the notions of average height, jumps and chains will still be used and they are defined in an analogous manner. One thing which must change, for instance, is how we use these notions and the events that we condition on. Note that, if we condition on the lengths and the positions of the chains, their signs will not be independent, since they must add up correctly. This fact, which is inherently due to the topology of the torus, makes the analysis slightly more complex. Instead, we will show that, conditioned on the lengths and the signs of the chains (but not on their positions), their relative order is uniform. This will allow us to reduce some of the analysis to a case of a uniformly chosen reordering of a sequence of numbers. One aspect which is simpler for homomorphisms on the torus is that there are no boundary effects, i.e., no need to consider the first 2d + 1 vertices separately.

5.1. Definitions. We consider the graph T n,d , n even, whose vertex set is V := {0, . . . , n − 1} and whose edges are defined by i ∼ j if and only if ρ(i, j) ∈ {1, 3, . . . , 2d + 1}, where we define the distance ρ between x and y to be ρ(x, y) := min{|x − y|, n − |x − y|}, x, y ∈ V.
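For concreteness, the torus graph can be built directly from this definition; a small sketch (ours) that also confirms that T n,d is 2(d + 1)-regular and bipartite when 2d + 1 < n/2:

```python
def torus_edges(n, d):
    """Edge set of T_{n,d}: i ~ j iff the cyclic distance rho(i, j) is odd
    and at most 2d + 1 (n even; here 2d + 1 < n/2, so the two directions
    around the cycle give distinct neighbours)."""
    def rho(x, y):
        return min(abs(x - y), n - abs(x - y))
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rho(i, j) % 2 == 1 and rho(i, j) <= 2 * d + 1}

n, d = 12, 2
E = torus_edges(n, d)
# d + 1 neighbours in each direction, so the graph is 2(d + 1)-regular;
# since n is even, odd cyclic distance means opposite parity, so the
# graph is bipartite with classes the even and odd vertices.
deg = {v: sum(1 for e in E if v in e) for v in range(n)}
print(set(deg.values()))                       # {6}
print(all((i + j) % 2 == 1 for (i, j) in E))   # True
```

The parameters n = 12, d = 2 are an arbitrary small example chosen so that 2d + 1 = 5 < n/2 = 6.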
Throughout this section, Hom(T n,d ) := Hom(T n,d , 0), f is a uniformly sampled homomorphism from Hom(T n,d ), the underlying probability measure is the uniform distribution on the set Hom(T n,d ), and events are subsets of Hom(T n,d ). We also note that, in this section, addition and subtraction of elements in V are always modulo n.
We would like to define the notion of the (local) average height of a homomorphism at a vertex x ∈ V . To do so, we "look back" just enough in order to define this in a meaningful way. Precisely, for x ∈ V , define the average height at x as the unique number h(x) satisfying This is well defined for any homomorphism f which takes on at least 3 values. There are two specific homomorphisms for which the size of the range is 2, and hence for which this is not well defined. These are f flat Observe that necessarily ∆(x) ∈ {−1, 0, 1}. When A x occurs, we will say that a jump occurred at vertex x. For x, y ∈ V , denote by be the set of vertices at which we have a jump in either direction. Notice that necessarily |S + | = |S − |, and define R := |S + | = |S − | = |S|/2, the number of jumps in a given direction. Notice that the clockwise distance between jumps is at least 2d + 1, as for homomorphisms on the line. For x ∈ V and t ≥ 1, let be the event that there is a chain of t minimal-distance jumps ending at vertex x.
We say that a subset I ⊂ V is a feasible jump structure if {S = I} = ∅, i.e. if P(S = I) > 0. We would like to describe this condition solely in terms of the structure of I. To this end, write In contrast to the case of the line, these conditions alone are not sufficient for I to be a feasible jump structure. This is due to the fact that the torus imposes a topological constraint. Namely, that at the end of the homomorphism the average height must "return" to its initial value. This additional condition, whose precise description (43) we postpone to the next section, along with condition (41), is necessary and sufficient for I to be a feasible jump structure. In addition, we say that a subset I ⊂ V is a feasible jump sub-structure if it is a subset of a feasible jump structure, or equivalently, if {I ⊂ S} = ∅. Notice that the definition implies that condition (41) is necessary for I to be a feasible jump sub-structure. For any I ⊂ V satisfying (41), by considering the connected components of the subgraph of T n,d induced by I, one may see that the event {I ⊂ S} can be uniquely written as C k 1 ,t 1 ∩ · · · ∩ C km,tm , where 0 ≤ k 1 < · · · < k m < n, and where we let k 0 := k m (see Figure 6). These conditions ensure that there is no overlap between the different chains, and moreover, that there is some gap between them (since otherwise they would merge into a larger chain). For a subset I ⊂ V satisfying (41), we define and refer to this as the chain structure of I.

5.2. The structure of a homomorphism. In this section, our goal is to give a useful description of the structure of a homomorphism on the torus. Namely, that which is stated in Lemma 5.2 and Lemma 5.3 below. To this end, we would like to decompose a homomorphism into two parts (see Figure 5 and Figure 7). The first part, which we shall denote by X, constitutes the changes in average height (the underlying bridge) of the homomorphism, while the second part, which we shall denote by Y , constitutes the fluctuations around the average height (the segments of constant average height).
We proceed first to define X. Given a subset I ⊂ V satisfying (41), denote the set of feasible sign vectors for I by When I = ∅, this set contains one element, the function with the empty domain. Note that in order for a subset I ⊂ V to be a feasible jump structure, it is necessary and sufficient for I to satisfy (41) and B * (I) = ∅.
(43) This last condition is the manifestation of the topological constraint imposed by the torus. It says that the chain structure induced by the position of the jumps is such that it is possible to assign signs to each chain so that the average height "returns" to its initial value when completing an entire loop around the torus.
For a feasible jump structure I and a feasible sign vector ε ∈ B * (I), define the signed chain structure of (I, ε) by Recall the definition of S from (40). Define X ∈ {−1, 1} C(S) by and note that X ∈ B * (S). This defines for us the random signed chain structure C * (S, X). This random variable contains in a fairly simple manner all the necessary information for determining the range of f . Namely, it gives us the positions, lengths and signs of the chains in f . We now proceed to define Y . For a non-empty feasible jump structure I, define the fluctuation points of I by F P (I) := {y ∈ V | ρ + (y, I) ∈ 2d + 1 + 2N}, where ρ + (y, I) := min s∈I ρ + (y, s) and N := {1, 2, 3, . . .}. That is, a point is a fluctuation point if its clockwise distance to the nearest jump in the clockwise direction is odd and at least 2d + 3. In particular, for any homomorphism f and any y ∈ F P (S(f )), f is not at its average height at y. Now, for a homomorphism f having at least one jump, define Y ∈ {−1, 1} F P (S) by It will be useful to have the following formula for the number of fluctuation points. The final lemmas show that X, Y and the jump structure S exactly encode the homomorphism. This is an immediate consequence of the following lemma. Proof. It is not hard to verify that this is indeed a bijection (see Figure 7 for a macroscopic picture and Figure 5 for a microscopic picture). We omit the details, as the argument is very similar to that of Lemma 4.4. For the second statement, we note that {C * (S, X) = ∅} = {S = ∅} = {h ≡ const}, and hence by considering the events {h ≡ 0} and {|h| ≡ 1}, and recalling that we set h ≡ 0 when f takes on only two values, the statement readily follows.

5.3. The range. In this section, our goal is to give a more explicit description of the distribution of the range of a homomorphism. Namely, that which is stated in Proposition 5.4 below.
Recall the definition of the signed chain structure from (44). Let be the set of lengths and signs of the chains taken with multiplicities, i.e. W̃ is a multi-set. For a vector of integers w = (w 1 , . . . , w m ), denote by | Rng(w)| the size of the smallest interval in Z which contains all partial sums of w.
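This quantity is simple to compute; the following sketch (our own, taking the partial sums to include the empty sum 0, which matches its use in Lemma 5.5 below) makes the definition concrete:

```python
def rng_size(w):
    """|Rng(w)|: size of the smallest integer interval containing all
    partial sums of w, the empty sum 0 included."""
    partial = [0]
    for x in w:
        partial.append(partial[-1] + x)
    return max(partial) - min(partial) + 1

print(rng_size((1, 1, -1, -1)))  # partial sums 0,1,2,1,0 -> interval [0,2], size 3
print(rng_size(()))              # only the empty sum 0 -> size 1
```

By the proof of Lemma 5.5 below, the range of the homomorphism itself exceeds |Rng(W)| by an additional 2, since f fluctuates one unit above the maximal and one unit below the minimal average height.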
Proposition 5.4 is a direct consequence of the following two lemmas. The first of these, Lemma 5.5, relates the range of f to a random variable W defined below. The second, Lemma 5.6, describes the distribution of W conditioned on W̃.
Given a set X and a vector x ∈ X m , define the period of x by and note that per(x ∨ y) = lcm(per(x), per(y)).
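Assuming per(x) denotes the least cyclic period of x (the smallest p ≥ 1, necessarily a divisor of m, such that x is invariant under the cyclic shift by p) and x ∨ y denotes the coordinatewise pairing of two vectors of the same length, the identity per(x ∨ y) = lcm(per(x), per(y)) can be checked directly; both readings are our assumptions about the elided definitions:

```python
from math import gcd

def per(x):
    """Least p >= 1 with x invariant under cyclic shift by p.
    The minimal such p always divides len(x)."""
    m = len(x)
    return next(p for p in range(1, m + 1)
                if m % p == 0 and all(x[i] == x[(i + p) % m] for i in range(m)))

def lcm(a, b):
    return a * b // gcd(a, b)

x = (1, 2, 1, 2, 1, 2)    # per(x) = 2
y = ('a', 'b', 'c') * 2   # per(y) = 3
xy = tuple(zip(x, y))     # x v y: coordinatewise pairing
print(per(x), per(y), per(xy))  # 2 3 6
```

On this example the zipped vector repeats only after lcm(2, 3) = 6 steps, as the identity predicts.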
That is, W forgets the absolute position of the chains and remembers only their signed length and relative ordering. Note that W̃ is determined by W . We begin by showing that the random variable W governs the range of the homomorphism. For a vector of integers w whose sum is zero, recalling (46), we define | Rng([w])| := | Rng(w)|, and note that this is indeed well-defined, i.e., independent of the choice of representative of the equivalence class of w. Proof. The partial sums of W correspond to differences in average height between two vertices. Therefore, | Rng(W )| = 1 + max x∈V h(x) − min x∈V h(x). By the definition of the average height, we have |f (x) − h(x)| ≤ 1 and {h(x) − 1, h(x), h(x) + 1} ⊂ Rng(f ) for any vertex x ∈ V . Therefore, by considering vertices at which the average height is maximal or minimal, we obtain the additional term of 2 in the above equation.
Remark. On the event {W = ∅}, the size of the range of f is either 2 or 3. However, Lemma 5.2 implies that, conditioned on W̃ = ∅, the probability that the size of the range is 2 is of order 2 −n/2 . The next lemma is the final ingredient in the proof of Proposition 5.4. The remaining part of this section is devoted to its proof.

Lemma 5.6. Let m ≥ 1, let w̃ = {w 1 , . . . , w m } be a multi-set such that P(W̃ = w̃) > 0 and let π be a uniformly chosen permutation of {1, 2, . . . , m}. Then, , where 0 ≤ k 1 < · · · < k m < n. Let k 0 := k m and define That is, Z forgets the absolute positions of the chains in C * (S, X) and remembers only their distances from one another (precisely, half the distance from the last vertex of one chain to one vertex before the beginning of the next chain). Note that the first coordinate of each element in Z is necessarily a non-negative integer. Also note that W is determined by Z. The next claim calculates the distribution of Z.
(47) Then Proof. Recall conditions (41) and (43), and note that, together with the assumptions, they imply that the event {Z = [x ∨ w]} is non-empty. We partition the event {Z = [x ∨ w]} according to C * (S, X). Let r ≥ 1 be the number of subsets in this partition, so that where the (I i , ε i ) are distinct and feasible. By Lemma 5.3, Claim 5.1 and (47), for any 1 ≤ i ≤ r, and therefore, Recalling the definition of Z, we see that Let w be an ordering of w̃. Define and note that |Z(x, w)| = gcd(per(x), per(w)). We have Let w ′ ∈ [w] and x ′ ∈ [x] be representatives of their equivalence classes. By Claim 5.7, we have Since per(x ′ ∨ w ′ ) = lcm(per(x ′ ), per(w ′ )), per(x ′ ) = per(x) and per(w ′ ) = per(w), we see that |Z(x, w)| · lcm(per(x), per(w)) = per(x) · per(w).
That is, conditioned on W̃ = w̃ and D = [x], the probability that W equals [w] is proportional to per(w). Finally, observe that the same is true for the probability that [w π(1) , . . . , w π(m) ] equals [w]. Indeed, one may check that where C(w) is a multinomial coefficient depending on w̃.

Proof of theorems.
In this section, we are primarily concerned with homomorphisms on the graph T n,d . However, we will occasionally also refer to homomorphisms on the graph P n,d . We note that in either case, such a homomorphism can be seen as an element of Z {0,1,...,n} , where f ∈ Hom(T n,d ) is extended to {0, 1, . . . , n} by f (n) := 0. Therefore, the uniform distributions on Hom(P n,d ) and Hom(T n,d ) can be seen as distributions on Z {0,1,...,n} . We shall denote the probability and expectation with respect to these two distributions by P P , E P and P T , E T , respectively. Throughout this section, we will frequently drop the subscript, in which case P and E will refer to P T and E T . We first state some technical lemmas and propositions whose proofs we defer to the next section. Our first proposition is one which will allow us to transfer some results from the line to the torus. This is an FKG-type inequality for the measure induced on non-negative homomorphisms by taking pointwise absolute value.
The next two lemmas are concerned with the probability of jumps occurring at given vertices. In the case of the line, we were able to obtain in Lemma 4.2 a good upper bound on the probability of having t jumps at any given vertices. In the case of the torus, we are not able to obtain such a general result. The main difficulty is due to the topological constraint imposed by the torus. In particular, if a jump occurs at a given vertex then a jump in the opposite direction must also occur at some other vertex. The next lemma shows that a chain of consecutive jumps is still unlikely.

Lemma 5.9. For any vertex x ∈ V and any positive integer t, we have The following lemma shows that having jumps in opposing directions at given vertices is also unlikely. The last lemma is the analog of Corollary 4.9 on the line. It will allow us to deduce the typical order of magnitude of R.
Lemma 5.11. For any positive even integer n and any positive integers d and r such that Cdr ≤ n, we have As in the case of homomorphisms on the line, it is also possible to prove an inequality in the opposite direction, showing that P(R = r) ≤ (Cn 2 /(r 2 2 2d )) P(R = r − 1), but we neither use nor prove this.

The supercritical regime. We prove Theorem 2.6 and Corollary 2.7. By Lemma 5.10 and by the union bound, we have One may easily check that | Rng(f )| ≤ R + 3, so that Moreover, it is easy to describe all homomorphisms which take on at most 3 values. Let Ω 0 be the set of homomorphisms which are constant on the even vertices (having the value 0 on the even vertices and 1 or −1 on the odd vertices), and let Ω 1 be the set of homomorphisms which are constant on the odd vertices (having the value ±1 on the odd vertices, and 0 or ±2, respectively, on the even vertices). Then {| Rng(f )| ≤ 3} = Ω 0 ∪ Ω 1 , |Ω 0 ∩ Ω 1 | = 2, and |Ω 0 | = |Ω 1 | = 2 n/2 . Therefore, completing the proof of Theorem 2.6. To obtain Corollary 2.7, recall that |Ω 0 | = |Ω 1 | and note that if d − log 2 n → ∞ as n → ∞ then P(Ω 0 ∪ Ω 1 ) = P(| Rng(f )| ≤ 3) = 1 − o(1), by Theorem 2.6. We remark that the bound (48) obtained for the probability that the range is large constitutes something of a compromise between two possibilities. With somewhat less work we could have used the FKG-type inequality, Proposition 5.8, to obtain a weaker bound. With somewhat more work we could make a finer analysis of the possible cases in the proof of Lemma 5.10 and obtain a somewhat better bound, with 2d − 1 replaced by 2d + 1 or even 2d + 2. The bound we chose to prove has the benefit that it is already rather good and has a relatively simple proof.

The subcritical regime. We begin by proving the upper bound in Theorem 2.8.
Since max 0≤k≤n |f (k)| is an increasing function in |f |, we have by Proposition 5.8, By our previous result on the line, Theorem 2.3, we have and then, using symmetry,

We now prove the lower bound in Theorem 2.8. The proof is very similar to the proof of the lower bound in Theorem 2.3 in Section 4, and so we only give an outline of the proof. First, we show that for any ε > 0 there exists a δ > 0 such that Let ε > 0. Note that, by Theorem 2.6, the statement is trivial when δ √ n2 −d < 1. Hence, we may assume that n2 −d ≥ C/ε. Mimicking the proof of Lemma 4.10 and its corollary, using Lemma 5.11 in place of Lemma 4.8, we find that there exists a δ 1 > 0 such that Continuing as in (32)–(34), using Lemma 5.9 in place of Lemma 4.2, we obtain for some δ 2 > 0. Proposition 5.4 and Proposition 3.5 imply that Now, putting (49) and (50) together, we see that there exists a δ > 0 such that Finally, repeating the calculation in (36), where we use Theorem 2.6 in place of Theorem 2.1, we obtain

The critical regime. Here we prove Theorem 2.9. Denote λ := lim n2 −d , which exists and is a positive number by assumption. We begin by showing that in the critical regime the jumps are sparse enough so that it is unlikely to have chains of length greater than one. Let be the event that there are no two minimal-distance jumps (i.e. jumps at distance 2d + 1). We wish to show that P(B) = 1 − o(1). Indeed, by Lemma 5.9, we have We now find the limiting distribution of R as n tends to infinity. By Lemma 5.2 and Claim 5.1, we have that P(R = r | B) is proportional to where c(n, d, r) is the number of feasible jump structures I having |I| = |C(I)| = 2r. It remains to compute the size of I 0 .
By considering the distances between consecutive elements in any I ∈ I 0 , and recalling (41), (42) and (43), we see that |I 0 | is given by the number of non-negative integer solutions to the equation x 1 + x 2 + · · · + x 2r = n, under the additional constraint that, for 1 ≤ j ≤ 2r, x j is odd and at least 2d + 3. Therefore, after substituting x j = 2y j + 2d + 3 for 1 ≤ j ≤ 2r, we see that |I 0 | is equal to the number of non-negative integer solutions to the equation y 1 + · · · + y 2r = n/2 − (2d + 3)r, from which the result now follows.
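The substitution step above (odd x j ≥ 2d + 3 versus non-negative y j ) can be checked by brute force for small parameters; a sketch (ours), where the closed form is the usual stars-and-bars binomial coefficient:

```python
from itertools import product
from math import comb

def count_odd_solutions(n, d, r):
    """Solutions of x_1 + ... + x_{2r} = n with every x_j odd and
    at least 2d + 3, counted by brute force."""
    lo = 2 * d + 3                      # odd lower bound
    vals = range(lo, n + 1, 2)          # odd candidate values
    return sum(1 for xs in product(vals, repeat=2 * r) if sum(xs) == n)

def count_by_substitution(n, d, r):
    """Same count via x_j = 2 y_j + 2d + 3: non-negative solutions of
    y_1 + ... + y_{2r} = n/2 - (2d + 3) r, a stars-and-bars count."""
    m = n // 2 - (2 * d + 3) * r
    return comb(m + 2 * r - 1, 2 * r - 1) if m >= 0 else 0

for (n, d, r) in [(14, 1, 1), (20, 1, 2), (26, 2, 2), (30, 2, 2)]:
    assert count_odd_solutions(n, d, r) == count_by_substitution(n, d, r)
print("substitution count agrees")
```

For instance, with n = 14, d = 1, r = 1 the solutions of x 1 + x 2 = 14 in odd parts ≥ 5 are (5, 9), (7, 7), (9, 5), matching the stars-and-bars value 3.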
By (51) and Claim 5.12, for any fixed r ≥ 1, we have where λ ′ := λ/(4 √ 2). By Lemma 5.9 and since P(B) = 1 − o(1), we have Therefore, conditioned on B, the expectation of R is uniformly bounded as n → ∞. Hence, Markov's inequality implies that, conditioned on B, R is tight as n → ∞. Recall the definition of the distribution ν(λ ′ ) in (7). Let N (λ ′ ) ∼ ν(λ ′ ) and note that Thus, Lemma 3.6 implies that, conditioned on B, R converges in distribution to N (λ ′ ). Finally, since P(B) = 1 − o(1), we conclude that R converges in distribution to ν(λ ′ ).

It remains to understand the range of a homomorphism. Recalling the definition of W̃ given in (45), we observe that the event B is the same as the event {|W̃ | = |C(S)| = 2R}, which is the same as the event that W̃ consists of R 1's and R (−1)'s. Therefore, by Proposition 5.4, conditioned on B and on R, on the event {R > 0}, the range of a homomorphism is equal in distribution to two plus the range of a random walk bridge of length 2R. By Theorem 2.6, conditioned on the event {R = 0}, the range of a homomorphism is 3 with probability tending to one. This, together with our previous result on the convergence of R in distribution, completes the proof of Theorem 2.9.
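The limiting description of the range in Theorem 2.9 involves the range of a simple random walk bridge; for small lengths this distribution can be tabulated exactly. A sketch (our illustration, not part of the proof):

```python
from itertools import product
from collections import Counter

def bridge_range_distribution(r):
    """Exact distribution of the range (number of visited integers) of a
    simple random walk bridge of length 2r, by enumerating all equally
    likely +-1 step sequences that sum to zero."""
    dist = Counter()
    for steps in product((-1, 1), repeat=2 * r):
        if sum(steps) != 0:
            continue
        pos, walk = 0, [0]
        for s in steps:
            pos += s
            walk.append(pos)
        dist[max(walk) - min(walk) + 1] += 1
    total = sum(dist.values())
    return {k: v / total for k, v in sorted(dist.items())}

# For r = 2 the six bridges give range 2 with probability 1/3
# and range 3 with probability 2/3.
print(bridge_range_distribution(2))
```

By Theorem 2.9, conditioned on B and {R = r} with r > 0, the range of the homomorphism is distributed as 2 plus a draw from this tabulated law.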

Proof of main lemmas.
Proof of Proposition 5.8. Recall that a homomorphism on P n,d or T n,d can be seen as an element of Z {0,1,...,n} , where f ∈ Hom(T n,d ) is extended to {0, 1, . . . , n} by f (n) := 0. Therefore, the uniform distributions on Hom(P n,d ) and Hom(T n,d ) are distributions on Z {0,1,...,n} , which we denote by P P and P T respectively. We also denote by Q := {f ∈ Hom(T n,d )} the support of P T , so that the measure P P (· | Q) is just the measure P T .
Let φ : Z {0,1,...,n} → [0, ∞) be an increasing function. Note that the event {f (n) = 0} is a decreasing event in |f |. Therefore, we can apply Theorem 3.3 for the functions φ and ψ(f ) := 1 {f (n)=0} , to obtain Notice that sampling a random homomorphism on P n,d conditioned on {f (n) = 0} is not equivalent to sampling a random homomorphism on T n,d , which is just to say that Q ≠ {f (n) = 0}. However, it is equivalent to sampling a random homomorphism on another graph. Namely, the graph P ′ n,d obtained from P n,d by identifying the vertex 0 with the vertex n. In order to obtain T n,d from P ′ n,d , we must still add some edges which are missing, for example, the edge between n − 1 and 2, and the edge between n − 2 and 1. Nonetheless, this observation shows that the measure P P (· | f (n) = 0) also satisfies the FKG inequality in Theorem 3.3, since it is equivalent to sampling a random homomorphism on P ′ n,d . Define the events J := {|f (k)| ≤ 1, k = 0, 1, . . . , 2d} and Thus, using (52) and the fact that φ is non-negative, we obtain
We now wish to bound P(J ∩ J ′ | f (n) = 0) from below. We first apply Theorem 3.3 to the graph P ′ n,d to get where we have used symmetry in the second step. Next, we apply Theorem 3.3 again to the graph P n,d to get P P (J | f (n) = 0) ≥ P P (J). Finally, since J is just the event that no jump occurs at the first 2d + 1 vertices, we have by Lemma 4.1 that P P (J) ≥ 1/3. Therefore, we have shown that

For the proofs of the remaining lemmas, it is convenient to denote by [x, y] the vertices on the arc going from x to y in the clockwise direction. That is, for x, y ∈ V , we define Also, for a set J ⊂ V and an integer t, we let where, as always, addition for vertices on the torus is taken modulo n.
Proof of Lemma 5.9. First, we partition C x,t into two events C 0 x,t and C 1 x,t . Denote x ′ := x − (2d + 1)t − 1 and define since for any f ∈ Hom(T n,d ), |f (x) − f (x ′ − 1)| ≤ 2 + t necessarily holds and |f (x) − f (x ′ − 1)| = 2 + t holds only if there is a chain of length t at x. We now prove that By rotating the torus if necessary, we note that it suffices to prove this under the assumption that x ′ = 1. This slightly simplifies the discussion that follows, as it avoids issues stemming from the fact that f (0) is normalized to be 0.
Figure 8. A homomorphism f in C 1 x,t . Modifying the value at x ′ − 1 to be f (x ′′ ) injectively maps this homomorphism to C 0 x,t . Here d = 2 and t = 3.
Proof of Lemma 5.10. The idea of the proof is to remove jumps from the jump structure of the given homomorphism and observe that this results in more fluctuation points. We shall do so by removing the jumps two at a time. See Figure 9. We begin with some notation. For a feasible jump structure I and a vertex x ∈ I, denote by C(I, x) the chain in I containing x, i.e., C(I, x) is the unique element (k, t) ∈ C(I) satisfying Now, for a feasible sign vector ε ∈ B * (I), define ε x,y ∈ {−1, 1} C(I x,y ) by ε x,y (k, t) := ε(C(I, k)) if k ∈ I, and ε x,y (k, t) := ε(C(I, k + 1)) if k + 1 ∈ I, for (k, t) ∈ C(I x,y ).
That is, the sign of a chain in C(I x,y ) is inherited from its corresponding chain in C(I). Note that, Hence, if ε(C(I, x)) ≠ ε(C(I, y)) then ε x,y ∈ B * (I x,y ) and, by (43), I x,y is a feasible jump structure. Therefore, the lemma is equivalent to (57) Define the mapping T x,y by T x,y (I, ε) := (I x,y , ε x,y ). Note that the mapping I → I x,y is injective on {I ∈ I | x, y ∈ I}. Thus, recalling that jumps belonging to the same chain must have the same sign, it is not hard to see that, for any (I ′ , ε ′ ) ∈ B(J \ {x}, J ′ \ {y}), we have Thus, (57) follows by induction, proving the lemma.
Proof of Lemma 5.11. The proof uses a technique similar to that of the proof of Lemma 5.10, but this time we aim to add jumps to the jump structure of the homomorphism rather than remove them.
For a feasible jump structure I, denote That is, the sign of a chain in C(I x,y )\{(x, 1), (y, 1)} is inherited from its corresponding chain in C(I), and the sign of the chain at x, which is opposite to that of y, is determined independently. Note that, by (43), I x,y is a feasible jump structure. Moreover, since |I x,y | = |I| + 2 and |C(I x,y )| = |C(I)| + 2, Lemma 5.3 and Claim 5.1 imply that P((S, X) = (I x,y , ε x,y,i )) = P((S, X) = (I, ε)) Denote by I the set of all feasible jump structures. For r ≥ 0, let B r denote the set of feasible signed jump structures having 2r jumps, i.e., Therefore, by (58), P((S, X) ∈ T r (I, ε)) ≥ P((S, X) = (I, ε)) · ((n − Cdr) 2 /2 2d+5 ) · (2 if r ≥ 2, 1 if r = 1).
We have Thus, considering separately the case r = 1, Lemma 3.1 implies that for any r ≥ 1,

Local limits on the line
In this section we prove the theorems which were stated in Section 2.3. Throughout this section, the parameter d ≥ 1 is fixed, and so we drop the d from the notation when convenient. On the other hand, the parameter n ≥ 1 is allowed to vary, and our main goal is to understand Hom(P n,d ) := Hom(P n,d , 0) as n grows larger. At first, in Section 6.2, we investigate the asymptotic size of Hom(P n,d ) as n tends to infinity. Subsequently, in Sections 6.3 and 6.4, we describe the local limit of such homomorphisms as a probability measure on infinite homomorphisms defined through a Markov chain (see Figure 12). 6.1. Definitions. Given a finite set Π, called an alphabet, we denote by Π * the set of all finite words on Π. That is, For u, v ∈ Π * , we denote the length of u by |u| and the concatenation of u and v by u • v, i.e., It is clear that concatenation is associative. For u ∈ Π * with |u| ≥ 1, let u − be the word obtained from u by dropping the last element, i.e., These basic sequences will serve as a means to encode homomorphisms into words (see Figure 10).
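As a minimal illustration of the first step of this encoding (ours; the name D n is taken from Figure 11 below, while the implementation details are an assumption), a homomorphism on the line is identified with its word of ±1 increments, and this identification is a bijection:

```python
def D(f):
    """Increment word of a homomorphism on the line: the word of +-1
    steps of f, an element of {-1, 1}^n."""
    return tuple(f[i + 1] - f[i] for i in range(len(f) - 1))

def D_inverse(u):
    """Recover the homomorphism (normalized by f(0) = 0) from its
    increment word via prefix sums."""
    f, s = [0], 0
    for step in u:
        s += step
        f.append(s)
    return tuple(f)

f = (0, 1, 2, 1, 0, 1)
u = D(f)
print(u)             # (1, 1, -1, -1, 1)
v = u + (1,)         # concatenation u o v for words-as-tuples
u_minus = u[:-1]     # u^- drops the last letter
```

Concatenation and the operation u − are then the usual tuple operations, and concatenation is plainly associative.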
It is not hard to see that T is indeed well-defined (see Figure 11), and that it maps a word u ∈ D to the unique word x ∈ Σ * satisfying T ′ (x) = u or T ′ (x) − = u (in which case w(x) = |u| or w(x) = |u| + 1, respectively). Also, one should note that T −1 (x) = ∅ if x ∈ Σ * contains (a, B) or (b, A) as a sub-word or if x = ∅, and that Another observation which will be useful later on is that the recursive relation in the last line of (61) may be generalized to hold for certain u ∈ D.
Claim 6.1. We have Proof. We prove the claim by induction on |u|. If |u| = 0 then there is nothing to prove. Otherwise, |u| ≥ 1. By the assumption, we have |u| = w(T (u)), which implies that u may be decomposed as u = u ′ • u ′′ , where u ′ ∈ Σ. Note that this now implies that |u ′′ | = w(T (u ′′ )), since T (u) = T (u ′ ) • T (u ′′ ), by (61), and since |u ′ | = w(T (u ′ )) trivially. Therefore, by induction,

We say a word x ∈ Σ * is d-legal if it satisfies the conditions Denote by Ω n,d the set of d-legal words on Σ of weight n or n + 1. That is,

Figure 11. A homomorphism f ∈ Hom(P n,d ) is first viewed as a word u := D n (f ) of length n on the alphabet {−1, 1}. Then, u is encoded into a word x := T (u) on the alphabet Σ by sequentially reading off the letters from left to right, as defined in the recursive formula in (61). If this process exhausts u completely then we end up with a word x of weight exactly n. Otherwise, we remain with a tail of u of length one or two (as is the case in this figure), which is a prefix of at least one element in Σ. In this case, the last letter is chosen in such a way that the weight of the resulting word x is n + 1, as defined by the base cases in (61).
Claim 6.2. The mapping L n is a bijection between Hom(P n,d ) and Ω n,d .
Proof. It is clear from (60) and (61) that L n injectively maps Hom(P n,d ) to words on Σ of weight n or n + 1. It remains to show that the image of L n is precisely Ω n,d . One may easily see that a homomorphism f ∈ Hom(P n,1 ) is a homomorphism in Hom(P n,d ) if and only if D n (f ) does not contain a sequence of the form We have shown that for any f ∈ Hom(P n,1 ), f ∈ Hom(P n,d ) if and only if L n (f ) ∈ Ω n,d . In particular, since Hom(P n,d ) ⊂ Hom(P n,1 ), we have L n (Hom(P n,d )) ⊂ Ω n,d . For the other direction, let x ∈ Ω n,d . Either T ′ (x) or T ′ (x) − is of length n. Let u ∈ D be this sequence and let f := D −1 n (u) ∈ Hom(P n,1 ). Since L n (f ) = T (u) = x is d-legal, we see that f ∈ Hom(P n,d ). Hence, Ω n,d ⊂ L n (Hom(P n,d )), completing the proof.
6.2. Counting the homomorphisms. In this section we prove Theorem 2.10. This is done by deriving a recursion formula and investigating its characteristic polynomial.
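Before turning to the recursion, note that |Hom(P_{n,d})| can also be computed by brute force directly from the definition of P_{n,d} given in the introduction. The sketch below (our own illustration; hom_count is a hypothetical helper name) enumerates increment sequences and checks the long-range constraints; formally allowing d = 0, where P_{n,0} is the nearest-neighbor path, recovers the 2^n simple random walks.

```python
from itertools import product

def hom_count(n, d):
    """Count f in Hom(P_{n,d}, 0): f(0) = 0 and |f(i) - f(j)| = 1 for all
    vertices i, j of different parity with |i - j| <= 2d + 1."""
    total = 0
    for steps in product((-1, 1), repeat=n):
        f = [0]
        for s in steps:
            f.append(f[-1] + s)
        if all(abs(f[i] - f[j]) == 1
               for i in range(n + 1)
               for j in range(i + 1, min(i + 2 * d + 2, n + 1))
               if (j - i) % 2 == 1):
            total += 1
    return total

# d = 0 is simple random walk; increasing d only removes homomorphisms,
# while the zigzag 0, 1, 0, 1, ... always remains admissible.
print(hom_count(6, 0), hom_count(6, 1), hom_count(6, 2))
```

For d = 1 the constraint says exactly that the increment word contains no three equal consecutive letters, so hom_count(n, 1) satisfies the Fibonacci-type recursion a(n) = a(n − 1) + a(n − 2) with a(1) = 2, a(2) = 4.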
For 0 ≤ k, m ≤ d, define

By symmetry we have |Ω_{n,d}(k, m)| = |Ω_{n,d}(m, k)|, so we may define

This definition is motivated by the following two lemmas, which show that the quantities c_n(k) satisfy explicit recursion formulas and bear a simple relation to |Hom(P_{n,d})|.

Lemma 6.3. For any n ≥ 3, we have |Hom(P_{n,d})| = 2c_{n−2}(0) + 2c_{n−3}(d − 1).
Proof. Note that by (63), we have

Therefore, by partitioning according to the first element and using the symmetry between {a, A} and {b, B}, we obtain

The result now follows since |Ω_{n,d}| = |Hom(P_{n,d})|, by Claim 6.2.
Lemma 6.4. For any n ≥ 3, we have

Proof. Note that by (63), similarly to (64), we have

Therefore, by partitioning according to the first element, we obtain

In a similar manner, for 1 ≤ k ≤ d − 1, we have

We now express all the quantities c_n(k) in terms of c_n(d − 1). Substituting k = d − 1 in (66) yields

Continuing in this manner (by induction), we get, for 1 ≤ m < d,

In particular, for m = d − 1 this gives

Substituting this in (65) gives

The characteristic polynomial of this recursion is q(z) = z^{2d+1} − 2z^{2d−1} − 1.

Claim 6.5. The polynomial q has 2d + 1 distinct (complex) roots. Exactly one of these, which we denote by µ_0, is positive. Moreover, µ_0 > √2, while all other roots have modulus less than √2.
Proof. Assume that d ≥ 2 (the case d = 1 can be verified directly). It is easy to verify that the derivative of q does not vanish at any zero of q, so that the roots are simple, and hence there are 2d + 1 distinct roots. Since q(±√2) = −1 < 0 while q(z) → ∞ as z → ∞, the polynomial q has a real root µ_0 > √2. Considering q as a real function, by differentiating, one finds that q has a single minimum and a single maximum, and hence at most 3 real roots. Since q(−1) = 0, we see that µ_0 is indeed the unique positive root. For the last part, it suffices to show that q has 2d − 1 roots of modulus at most 1. This is a consequence of Rouché's theorem applied to q and g(z) := 2z^{2d−1} on the disc D := {z ∈ C : |z| ≤ r} for any sufficiently small r > 1. Indeed, on ∂D we have |g(z)| = 2r^{2d−1} and |q(z) + g(z)| ≤ r^{2d+1} + 1, and since r^{2d+1} + 1 < 2r^{2d−1} (using our assumption that d ≥ 2), Rouché's theorem implies that g and q have the same number of zeros in D. As g clearly has 2d − 1 zeros in D, this completes the proof.
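Claim 6.5 is easy to check numerically for small d. The sketch below is our own illustration, not part of the proof; it takes q(z) = z^{2d+1} − 2z^{2d−1} − 1, the form consistent with the Rouché comparison against g(z) = 2z^{2d−1} and with q(−1) = 0 used above, and inspects the roots with numpy.

```python
import numpy as np

SQRT2 = 2 ** 0.5

def q_roots(d):
    # Coefficients of q(z) = z^(2d+1) - 2 z^(2d-1) - 1, highest degree first.
    return np.roots([1, 0, -2] + [0] * (2 * d - 2) + [-1])

for d in range(1, 7):
    roots = q_roots(d)
    positive = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    assert len(positive) == 1                 # exactly one positive root mu_0
    mu0 = positive[0]
    assert mu0 > SQRT2                        # mu_0 exceeds sqrt(2) ...
    assert all(abs(r) < SQRT2 for r in roots  # ... and dominates all others
               if abs(r - mu0) > 1e-9)
    print(d, mu0, mu0 ** 2)                   # lambda = mu_0^2 > 2
```

For d = 1 one finds q(z) = z³ − 2z − 1 = (z + 1)(z² − z − 1), so µ_0 is the golden ratio and λ = µ_0² ≈ 2.618.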
Let µ_0 be the unique positive root of q. We denote λ := µ_0². That is, λ is the unique solution in (2, ∞) of the equation λ^{2d−1}(λ − 2)² = 1.

Claim 6.6. For any fixed 0 ≤ k ≤ d − 1, there exists a constant r_k > 0 such that c_n(k) ∼ r_k λ^{n/2} as n → ∞.
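Claim 6.6, combined with Lemma 6.3, predicts that |Hom(P_{n,d})| grows like a constant times λ^{n/2}. This can be cross-checked by brute force; the following sketch is our own illustration, and the equation λ^{2d−1}(λ − 2)² = 1 used for λ is a reconstruction obtained by squaring the relation q(µ_0) = 0 (for d = 1 its solution in (2, ∞) is the square of the golden ratio).

```python
from itertools import product

def hom_count(n, d):
    # Brute-force |Hom(P_{n,d}, 0)| directly from the definition of P_{n,d}.
    total = 0
    for steps in product((-1, 1), repeat=n):
        f = [0]
        for s in steps:
            f.append(f[-1] + s)
        if all(abs(f[i] - f[j]) == 1
               for i in range(n + 1)
               for j in range(i + 1, min(i + 2 * d + 2, n + 1))
               if (j - i) % 2 == 1):
            total += 1
    return total

def growth_rate(d):
    # Bisection for the unique solution lam > 2 of lam^(2d-1) (lam-2)^2 = 1.
    lo, hi = 2.0, 3.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid ** (2 * d - 1) * (mid - 2) ** 2 < 1:
            lo = mid
        else:
            hi = mid
    return lo

# Counts two steps apart should have ratio approaching lambda.
for d in (1, 2):
    print(d, growth_rate(d), hom_count(12, d) / hom_count(10, d))
```

Already at n = 12 the empirical ratio agrees with λ to within a few percent, in line with the geometric decay of the subleading roots of q.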
The mapping D_n defined in (59) extends to the case n = ∞ in the obvious way. The mapping T defined in (61) can also be extended, by the same recursion formula, to map the infinite words D_∞(Hom(P_{∞,d})) to Ω_{∞,d}. Then, following the proof of Claim 6.2, we see that L_∞ := T ◦ D_∞ is a bijection between Hom(P_{∞,d}) and Ω_{∞,d}.
6.4. The local limit as a Markov chain. The main goal of this section is to prove Theorem 2.11. To this end, we will describe a Markov chain (see Figure 12) on the state space Σ̃ := {a_1, . . . , a_d, b_1, . . . , b_d, A, B}, which will allow us to generate words in Ω_{∞,d}, and hence also homomorphisms in Hom(P_{∞,d}), through the bijection L_∞. Loosely speaking, the idea of this Markov chain is that the state a_k (b_k) represents the fact that a streak of k consecutive a's (b's) has been accumulated. Likewise, the state A (B) represents the fact that a jump has occurred in the positive (negative) direction.
Consider the above Markov chain (see Figure 12) on the state space Σ̃ with the transition probabilities p and the initial state distribution π as described below.
It is interesting to note that, since λ > 2 and using (68), we have

which expresses the fact that there is a small but growing tendency to continue in the same direction. Running this chain for an infinite amount of time and considering its trajectory W as an infinite word on Σ̃, we may obtain an infinite word W_∞ on Σ by dropping the subscripts of the letters in Σ̃; that is, W_∞(k) := φ(W(k)) for k ≥ 1, where φ : Σ̃ → Σ is the map dropping subscripts. Recalling (63), it is clear that this process generates a d-legal word, i.e., that W_∞ ∈ Ω_{∞,d}. Denote by f_∞ := L_∞^{−1}(W_∞) the infinite homomorphism corresponding to this word. Let f_n be a uniformly chosen homomorphism in Hom(P_{n,d}). Theorem 2.11 will follow once we show that

P(B_r(f_n) = f) → P(B_r(f_∞) = f) as n → ∞,

for any r ≥ 1 and f ∈ Hom(P_{r,d}).
For n ≥ 1, define W n := L n (f n ).
The remaining cases are again handled similarly. This proves the first part of (71).
This follows directly from (71) when x is d-legal, and it follows trivially when x is not d-legal since the probabilities involved are zero.
Finally, putting together (74) and (75), we conclude that for any r ≥ 1 and f ∈ Hom(P_{r,d}), we have

lim_{n→∞} P(B_r(f_n) = f) = lim_{n→∞} ∑_{x∈X(f)} P(P_{|x|}(W_n) = x) = ∑_{x∈X(f)} P(P_{|x|}(W_∞) = x) = P(B_r(f_∞) = f),

proving (70), as required. We remark that it is now simple to derive an exact formula for the probability that B_r(f_∞) = f for certain homomorphisms f ∈ Hom(P_{r,d}). Specifically, let f ∈ Hom(P_{r,d}) satisfy w(L_r(f)) = r.

Thus, elements of Lip(G, v_0) may be regarded as real-valued Lipschitz functions on the graph, normalized to equal 0 at v_0. There is a natural uniform measure on Lip(G, v_0), obtained by regarding a function f ∈ Lip(G, v_0) as a vector in R^{V∖{v_0}} and using normalized Lebesgue measure there. Hence, one may speak of a uniformly sampled function from Lip(G, v_0). In statistical physics terminology, this models a random surface whose energy is defined via the Hammock potential (see, e.g., [3]). Naively, one may expect the behavior of a uniformly chosen function f from Lip(P_{n,d}, 0) to be rather similar, perhaps up to constants, to that of a uniformly chosen function from Hom(P_{n,d}, 0). In particular, one may expect that Var(f(n)) ≈ n·2^{−d} when n·2^{−d} ≥ 1, say. However, a different intuition comes from the following consideration. A standard heuristic in statistical physics is that (continuous) models of random surfaces should behave similarly to the Gaussian free field. The Gaussian free field is again a real-valued function g : V → R, satisfying g(v_0) = 0, and sampled from a distribution whose density is proportional to

with β ∈ (0, ∞) a parameter. Analysis of the variance of the Gaussian free field on a graph is made simple by the observation that its distribution is a multivariate Gaussian. When G = P_{n,d} and v_0 = 0, one obtains that Var(g(n)) ≈ nd^{−3}/β. Thus it is not clear whether one should expect a function f sampled uniformly from Lip(G, v_0) to satisfy Var(f(n)) ≈ n·2^{−d} or Var(f(n)) ≈ nd^{−α}.
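The Gaussian free field heuristic can be probed numerically. Since the field is a multivariate Gaussian, Var(g(n)) is, up to a constant depending on β, the effective resistance between 0 and n in P_{n,d} with unit conductances. The following sketch is our own illustration, not from the text: it computes this resistance from the graph Laplacian and exhibits the linear-in-n, roughly d^{−3} scaling behind Var(g(n)) ≈ nd^{−3}/β.

```python
import numpy as np

def effective_resistance(n, d):
    """Effective resistance between vertices 0 and n of P_{n,d}, whose edges
    join i < j whenever j - i is odd and j - i <= 2d + 1."""
    L = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(i + 1, min(i + 2 * d + 2, n + 1)):
            if (j - i) % 2 == 1:
                L[i, i] += 1
                L[j, j] += 1
                L[i, j] -= 1
                L[j, i] -= 1
    # Ground vertex 0 and inject one unit of current at vertex n; the
    # resulting potential at n is the effective resistance.
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(L[1:, 1:], rhs)[-1]

# Resistance grows linearly in n and decays like d^{-3}; more precisely, it
# is close to n divided by the sum of s^2 over odd edge lengths s <= 2d + 1.
for d in (1, 2, 3):
    r = effective_resistance(600, d)
    print(d, r, r * sum(s * s for s in range(1, 2 * d + 2, 2)) / 600)
```

The last printed column stays close to 1, matching the homogenization heuristic that a long-range one-dimensional network of unit conductances has resistance ≈ n / Σ_s s², which is of order n·d^{−3} here.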
We conjecture the latter to be the truth. Thus, we expect a significant difference in behavior between the homomorphism model considered in this paper and its continuous counterpart. Consideration of the complete graph suggests that, when comparing the Gaussian free field to the continuous Lipschitz model on a regular graph, one should take β to be one over the degree. As P n,d is nearly a (2d + 2)-regular graph, this leads to the following conjecture.
In particular, the threshold function d(n) separating the regime of localization from the regime of delocalization is polynomial in n, rather than logarithmic in n as is the case for the homomorphism model. Figure 14 shows a uniformly sampled function in Lip(P_{n,d}, 0). We remark that when considering this model it is natural to consider the non-bipartite graph P̃_{n,d}, which is the discrete segment {0, 1, . . . , n} with edges between vertices at distance at most d + 1, regardless of their parity.
7.2. The scaling limit. In this paper we explored the properties of a random homomorphism for given n and d, and also the local limit of the homomorphism when d is fixed and n tends to infinity. Another limit of interest is the scaling limit. As in many models of random walk, one may expect that in the subcritical regime, when the range of a homomorphism in Hom(P n,d , 0) tends to infinity as n tends to infinity, the homomorphism has a Brownian motion scaling limit. This is the content of the next conjecture.
Conjecture. There exists a function σ : N → (0, ∞) such that the following holds. Let f n,d be a uniformly chosen homomorphism in Hom(P n,d , 0). Define B n,d : [0, 1] → R to be the continuous function defined by