Equivalence of zero entropy and the Liouville property for stationary random graphs

We prove that any stationary random graph satisfying a growth condition and having positive entropy almost surely admits an infinite dimensional space of bounded harmonic functions. Applications to random infinite planar triangulations and Delaunay graphs are given.


Introduction
A stationary random graph is a random rooted graph whose distribution is invariant under re-rooting by a simple random walk. This notion was made explicit by Benjamini and Curien in [BC12] motivated by several examples, including the Uniform Infinite Planar Triangulation/Quadrangulation (UIPT/Q), and previously defined notions such as unimodular random graphs.
In said work they develop the basic entropy theory for stationary random graphs, analogous to the well known theory for random walks on finitely generated groups, see [KV83]. In particular, they define an entropy and prove that if it is zero then the random graph almost surely satisfies the Liouville property (i.e. bounded harmonic functions are constant). The converse implication, that positive entropy implies the existence of non-constant bounded harmonic functions, was posed as a question, see [BC12,Remark 3.7].
In this work we answer this question in the affirmative under an additional condition on the stationary random graph. The hypothesis is the following (see Lemma 5.1): the expected number of elements of the ball of radius n around the root grows at most exponentially in n. (1) Our main result is the following (see Theorem 6.2).
Main Theorem. An ergodic stationary random graph satisfying condition (1) above, has zero entropy if and only if it satisfies the Liouville property almost surely. Furthermore, if such a graph has positive entropy, then almost surely it admits an infinite dimensional space of bounded harmonic functions.
Question 1.1. Given a stationary random graph such that the degree of the root is well behaved, under what conditions can one deduce that the graph has at most exponential volume growth?
To the best of the author's knowledge there is no widely applicable answer to the above question available in the literature. We discuss two relevant partial results in Section 7.2. First, we give an example, due to Asaf Nachmias, of a stationary random graph with super-exponential growth such that the degree of the root has finite mean, and in fact is comparable to a Poisson variable. The example also has the special property that the degree of the root determines the entire graph up to rooted isomorphism. Second, we prove that for unimodular graphs whose root degree has finite expectation, if the number of elements at distance n from the root is asymptotically independent from the degree of the root, then the graph has at most exponential growth (see Lemma 7.4).
Our proof of the Main Theorem involves Derriennic's zero-two law, a sharp criterion for equivalence of the tail and invariant events of a Markov chain (see Corollary 3.2), and a "looping" argument which allows us to avoid parity problems (see Figure 1). In order to show that our results are valid for graphs with unbounded degree, we improve the inequalities between the linear drift and entropy from [BC12, Proposition 3.6] with essentially the same proof (see Lemma 5.1). To show that positive entropy implies that the space of bounded harmonic functions is infinite dimensional, we relate the dimension of this space to the mutual information between the first m steps of a random walk and its tail behavior (see Lemma 4.1).
This occupies the first few sections of the paper. In Section 7 we discuss applications of the main theorem to several examples, many of which were already known.

Tail and invariant events
In this section we introduce the terminology and notation to be used in the rest of the article. Throughout this article we use the word "graph" as a synonym for connected locally finite (i.e. each vertex has finite degree) undirected graph. If X is a graph, we denote by V (X) the set of vertices of X and by E(X) the set of edges. We allow multiple edges and loops.
Consider, for any graph X, the path space Ω whose elements are sequences ω = (x_0, x_1, . . .) of vertices with the property that x_n is a neighbor of x_{n+1} for all n ≥ 0. The space Ω, endowed with the topology of coordinate-wise convergence, is a Polish space. We define the one-step transition probability p(x, y) between two vertices x, y ∈ V(X) by

p(x, y) = (number of edges connecting x to y) / deg(x),

where edges connecting x to x are only counted once in the denominator. The n-step transition probability p_n(x, y) is defined by

p_n(x, y) = Σ_{x_1, . . . , x_{n−1}} p(x, x_1) p(x_1, x_2) · · · p(x_{n−1}, y).
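The transition probabilities just defined can be computed mechanically. The following sketch is an illustration added here, not part of the paper; the multigraph (a triangle with a doubled edge and a loop) is an arbitrary example, and the code follows the text's convention that a loop is counted once in the denominator.

```python
# Illustration (not from the paper): p(x, y) and p_n(x, y) computed directly
# from the definitions above, on an arbitrary small multigraph.
edges = [(0, 1), (0, 1), (1, 2), (2, 0), (2, 2)]  # (0,1) doubled, loop at 2
vertices = [0, 1, 2]

def degree(x):
    # Each incident edge counts once; a loop at x also counts once,
    # matching the convention in the text.
    return sum(1 for e in edges if x in e)

def p(x, y):
    # One-step transition probability of the simple random walk.
    if x == y:
        m = sum(1 for e in edges if e == (x, x))
    else:
        m = sum(1 for e in edges if set(e) == {x, y})
    return m / degree(x)

def p_n(n, x, y):
    # n-step transition probability via the Chapman-Kolmogorov recursion.
    if n == 0:
        return 1.0 if x == y else 0.0
    return sum(p(x, z) * p_n(n - 1, z, y) for z in vertices)
```

With this convention each row of transition probabilities sums to 1, as it must for a probability kernel.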
For each x ∈ V(X), the distribution of the simple random walk starting at x is the unique Borel probability P_x on Ω which satisfies P_x(x_0 = x, x_1 = a_1, . . . , x_n = a_n) = p(x, a_1) p(a_1, a_2) · · · p(a_{n−1}, a_n) for all sequences a_1, . . . , a_n ∈ V(X). A simple random walk on X is a V(X)-valued random process x_n, indexed on n = 0, 1, . . ., whose distribution is of the form P = Σ_{x∈V(X)} μ(x) P_x, where the initial distribution μ of the walk is a probability measure on V(X).
For each n, let F_n be the σ-algebra on Ω generated by x_n, x_{n+1}, . . .. The tail σ-algebra F_∞ is defined by F_∞ = ∩_{n≥0} F_n, while the invariant σ-algebra F_inv consists of those Borel subsets of Ω which are invariant under the shift (x_0, x_1, . . .) ↦ (x_1, x_2, . . .). Suppose x_n is a simple random walk on X whose distribution we denote by P. We say that the tail and invariant σ-algebras are equivalent with respect to x_n if for each A ∈ F_∞ there exists B ∈ F_inv such that P(A △ B) = 0, where △ denotes symmetric difference.
Remark. Consider a graph consisting of a single edge which joins two distinct vertices x and y. This is a simple example where F ∞ and F inv are not equivalent. All invariant events are trivial. However, the tail event of being at x for all large enough even times is not invariant and has intermediate probability if the initial distribution gives positive mass to both vertices.

The zero-two law
In this section we discuss the criterion for equivalence of tail and invariant events proved by Derriennic. For this purpose, define for each vertex x in a graph X the quantities α_n(X, x), n ≥ 1, and let α_∞(X, x) be their limit. We restate [Der76, Théorème 3] in our context.
Furthermore, the above supremum is 0 if and only if F ∞ and F inv are equivalent for all simple random walks on X.
We will need the following consequence of Derriennic's result.
Corollary 3.2. If X is a graph such that p(x, x) ≥ 1/2 for all x ∈ V (X), then F ∞ and F inv are equivalent for every simple random walk on X.
Proof. For each x ∈ V(X), we estimate α_∞(X, x). One has p_2(z, z) ≥ 1/4 and p(z, z) ≥ 1/2 for every vertex z, so in particular α_∞(X, x) ≤ 2 − 1/4 for all x ∈ V(X). By Theorem 3.1, the tail and invariant σ-algebras are therefore equivalent for all simple random walks on X, as claimed.

Mutual information
The mutual information between two random variables is a non-negative (possibly infinite) number which quantifies the dependence relationship between them. In particular, the mutual information is zero if and only if the variables are independent, and is maximized when both variables coincide. In this section we consider the mutual information between the first m steps of a simple random walk and all steps after time n, as well as the mutual information between the first m steps and the tail behavior of the simple random walk on a graph X. We review the basic properties relating these quantities to the space of bounded harmonic functions on the graph (see in particular [Bla55], [Der85], and [Kai92]). This will be useful later in our study of entropy of stationary random graphs.
Fix a graph X, a root vertex x ∈ V (X), and recall that Ω denotes the space of paths (x 0 , x 1 , . . .) in X. Denote by F n the σ-algebra generated by (x 0 , . . . , x n ) for each n.
Let P_x be the distribution of two identical copies of a simple random walk starting at x in X, while P_x × P_x denotes the distribution of two independent random walks starting at x. Note that both probabilities are defined on Ω × Ω, but the former is supported on the diagonal, while the latter is not (except in trivial examples).
Let ϕ be the convex function given by ϕ(t) = t log(t). For m < n ≤ ∞, the mutual information between F_m and F_n is defined by

I^n_m(X, x) = sup Σ_i (P_x × P_x)(A_i) ϕ( P_x(A_i) / (P_x × P_x)(A_i) ),

where the supremum is over all finite partitions {A_i} of Ω × Ω whose sets A_i belong to σ(F_m × F_n). It follows from the convexity of ϕ that I^n_m(X, x) is always defined and non-negative, and equals zero if and only if P_x and P_x × P_x coincide on σ(F_m × F_n).
Recall that a function f : V(X) → R is harmonic if f(y) = Σ_z p(y, z) f(z) for all y ∈ V(X). A graph is said to satisfy the Liouville property if and only if all its bounded harmonic functions are constant. The following result shows that, under mild hypotheses, the mutual information I^∞_m(X, x) is directly related to the dimension of the space of bounded harmonic functions on the graph X.

Lemma 4.1. Let X be a graph and x ∈ V(X), and suppose that F_∞ and F_inv are equivalent with respect to the simple random walk starting at x. Then X satisfies the Liouville property if and only if I^∞_m(X, x) = 0 for all m. Furthermore, if the space of bounded harmonic functions on X has finite dimension d, then I^∞_m(X, x) ≤ log(d) for all m.

Proof. By [Bla55, Theorem 2], the bounded harmonic functions on X are in bijection with bounded shift-invariant measurable functions on the space of paths Ω, considered modulo modifications on P_x-null sets. Since F_inv and F_∞ are equivalent, this implies that X satisfies the Liouville property if and only if F_∞ is trivial.
If F ∞ is trivial, then F m is independent from F ∞ for each m, so I ∞ m (X, x) = 0 as claimed. In the other direction, if I ∞ m (X, x) = 0 for all m, then F m and F ∞ are independent. Since one can approximate any tail event by events in F m (for m large), we obtain that each tail event is independent from itself. This implies that F ∞ is trivial as claimed.
Suppose now that the space of bounded harmonic functions on X has dimension d. By Blackwell's result above, there is a partition B_1, . . . , B_d of Ω into disjoint tail events which are atoms of F_∞. By Dobrushin's Theorem (see [Gra11, Lemma 7.3]), one may calculate I^∞_m as the supremum over all partitions of σ(F_m × F_∞) of the form A_i × B_j, where A_1, . . . , A_n ∈ F_m. For any such partition, the corresponding sum is bounded by log(d), using Jensen's inequality and the fact that there are only d sets B_j. By taking the supremum, one obtains I^∞_m(X, x) ≤ log(d) as claimed.
The following lemma gives a concrete formula for the mutual information I^n_m(X, x) in terms of the transition probabilities of the random walk. It will be used later on when we consider the asymptotic entropy of random walks on stationary random graphs.

Lemma 4.2. Let X be a graph, x ∈ V(X), and m < n < +∞. Then:
1. The mutual information I^n_m(X, x) is given by

I^n_m(X, x) = Σ_{a_m, a_n} p_m(x, a_m) p_{n−m}(a_m, a_n) log( p_{n−m}(a_m, a_n) / p_n(x, a_n) ).

2. For each m, the function n → I^n_m(X, x) is non-increasing and converges to I^∞_m(X, x) when n → +∞.
Proof. By Dobrushin's Theorem (see [Gra11,Lemma 7.3]), one may take the supremum in the definition of I n m (X, x) over partitions in a generating set of σ(F m × F n ). The subsets of Ω × Ω consisting of pairs of paths ((x i ), (y i )) satisfying x 0 = a 1 , . . . , x m = a m , y n = a n , . . . , y N = a N for fixed a i and N > n, generate the necessary σ-algebra, and hence, we may take the supremum over partitions into sets of this form.
For any fixed N > n > m, consider the partition {A_j} into sets as above, where a_1, . . . , a_m, a_n, . . . , a_N range over all of V(X). Because of the Markov property, the corresponding sum gives the same result for all N, namely

Σ_{a_m, a_n} p_m(x, a_m) p_{n−m}(a_m, a_n) log( p_{n−m}(a_m, a_n) / p_n(x, a_n) ).
This implies the first claim by taking supremum. Since I^n_m(X, x) is calculated as a supremum over a set of partitions which decreases with n, the function n → I^n_m(X, x) is non-increasing, and the limit exists; it is no smaller than I^∞_m(X, x). Notice that the formula for I^n_m(X, x) implies that I^{m+1}_m(X, x) is finite, and hence so is the limit.
To simplify notation, fix m and set G_n = σ(F_m × F_n) for each m < n ≤ ∞. By the Gelfand-Yaglom-Perez Theorem ([Pin64, Theorem 2.4.2] or [Gra11, Lemma 7.4]), one has, for all n (including n = ∞),

I^n_m(X, x) = ∫ ϕ(f_n) d(P_x × P_x),

where f_n is the Radon-Nikodym derivative of P_x restricted to G_n relative to P_x × P_x restricted to the same σ-algebra.
By the reverse martingale convergence theorem (see [Doo01, pg. 483]), one has that f_n → f_∞ pointwise when n → +∞. Hence ϕ(f_n) converges to ϕ(f_∞), and it suffices to show that these functions are uniformly integrable in order to establish that lim I^n_m(X, x) = I^∞_m(X, x). Since f_n = E(f_m | G_n), where the conditional expectation is relative to P_x × P_x, one obtains by Jensen's inequality that ϕ(f_n) ≤ E(ϕ(f_m) | G_n) for all finite n. By the reverse martingale convergence theorem, the right-hand side converges in L^1 to E(ϕ(f_m) | G_∞), and therefore the family ϕ(f_n) is uniformly integrable as claimed. It follows that lim I^n_m(X, x) = I^∞_m(X, x), which concludes the proof.
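The formula of Lemma 4.2 is easy to evaluate exactly on small examples. The following sketch is an illustration added here, not part of the paper; the three-vertex graph (a path with a loop at one end, an arbitrary choice) is finite and aperiodic, so the values I^n_m should decrease towards I^∞_m = 0 as n grows.

```python
# Illustration (not from the paper): evaluating the formula of Lemma 4.2,
# I^n_m = sum over a_m, a_n of
#   p_m(x, a_m) p_{n-m}(a_m, a_n) log( p_{n-m}(a_m, a_n) / p_n(x, a_n) ),
# on an arbitrary small aperiodic graph (path 0-1-2 with a loop at 0).
from math import log

P = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 2: 0.5}, 2: {1: 1.0}}
V = [0, 1, 2]

def p_n(n, x, y):
    # n-step transition probability by the Chapman-Kolmogorov recursion.
    if n == 0:
        return 1.0 if x == y else 0.0
    return sum(P[x].get(z, 0.0) * p_n(n - 1, z, y) for z in V)

def mutual_information(x, m, n):
    # The formula of Lemma 4.2; terms with zero probability contribute 0.
    total = 0.0
    for am in V:
        for an in V:
            q = p_n(m, x, am) * p_n(n - m, am, an)
            if q > 0:
                total += q * log(p_n(n - m, am, an) / p_n(n, x, an))
    return total

i_vals = [mutual_information(0, 1, n) for n in range(2, 9)]
```

The lemma's monotonicity (item 2) shows up numerically: the sequence `i_vals` is non-negative and non-increasing.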

Linear drift and entropy of random graphs
Consider the topological space whose points represent all isomorphism classes of rooted graphs. A sequence of rooted graphs in this space converges if and only if the isomorphism type of the ball of each fixed radius around the root is eventually constant. The resulting space is separable and its topology comes from a complete metric. Furthermore, one can construct a larger space consisting of rooted graphs with a path starting at the root. Given a random graph (X, x), one can find a random element of the space of graphs with paths (X, x, (x 0 , x 1 , . . .)) such that the conditional distribution of (x 0 , x 1 , . . .) given (X, x) is that of a simple random walk on (X, x) starting at x. We call (X, x, (x 0 , x 1 , . . .)) a simple random walk on (X, x). Sometimes we omit (X, x) and just write x n .
A random graph (X, x) is called stationary if it has the same distribution as (X, x_1), where x_n is a simple random walk on (X, x). A stationary random graph is ergodic if the distribution of the simple random walk on it is an ergodic invariant measure for the shift transformation (x_0, x_1, . . .) ↦ (x_1, x_2, . . .).

Let x_n be the simple random walk on an ergodic stationary random graph (X, x). By Kingman's subadditive ergodic theorem, the limit

ℓ(X, x) = lim_{n→+∞} d(x_0, x_n)/n

exists almost surely and in mean. Here d(x_0, x_n) denotes the graph distance on the graph (X, x) between x_0 and x_n. We call ℓ(X, x) the linear drift of the simple random walk on (X, x). One obtains trivially that 0 ≤ ℓ(X, x) ≤ 1.

Another important quantity associated to the random walk x_n is its entropy. It is defined as the limit

h(X, x) = lim_{n→+∞} −log(p_n(x_0, x_n))/n,

which exists almost surely and in L^1 (again by Kingman's theorem) under a suitable integrability condition, which holds under the assumption of Lemma 5.1 below.

Under a mild assumption on the growth of the random graph one can conclude that h(X, x) = 0 if and only if ℓ(X, x) = 0. The following proof is almost the same as that of [BC12, Proposition 3.6], which itself follows closely preceding results; see the references preceding [LP14b, Theorem 13.31]. In the following statement, |B_r(x)| denotes the number of vertices at distance r or less from x.
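As a concrete illustration of the linear drift (added here, not part of the paper): on the 3-regular tree the distance to the root is itself a birth-and-death chain, increasing with probability 2/3 and decreasing with probability 1/3 away from 0, so the drift equals (3 − 2)/3 = 1/3 and a short simulation recovers it.

```python
# Monte Carlo illustration (not from the paper) of the linear drift on the
# 3-regular tree, via the distance-from-root chain described above.
import random

random.seed(1)
n_steps = 200_000
dist = 0
for _ in range(n_steps):
    if dist == 0 or random.random() < 2 / 3:
        dist += 1   # move to one of the two vertices farther from the root
    else:
        dist -= 1   # move back towards the root
drift_estimate = dist / n_steps
```

The estimate lies between 0 and 1, as the drift of any simple random walk must.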
Lemma 5.1. Let (X, x) be an ergodic stationary random graph satisfying the following assumption:

E(|B_n(x)|) grows at most exponentially in n. (1)

Then h(X, x) is finite and satisfies inequalities relating it to the linear drift ℓ(X, x); in particular, h(X, x) = 0 if and only if ℓ(X, x) = 0.

Proof. By the Carne-Varopoulos inequalities, one obtains

p_r(x, y) ≤ 2 (deg(y)/deg(x))^{1/2} exp(−d(x, y)^2/(2r))

for all r.
The lower bound follows by taking −log and a limit. Notice that E(log deg(x)) < +∞ by assumption (1), and therefore log(deg(x_r))/r → 0 almost surely by Birkhoff's theorem.
For the upper bound, we use the observation that the entropy function (p_1, . . . , p_r) ↦ −Σ_i p_i log(p_i) is maximized, over all p_1, . . . , p_r ≥ 0 with p_1 + · · · + p_r = 1, when all the p_i are equal to 1/r. This yields (see also the proof of [BC12, Proposition 3.6]) an upper bound for −log(p_r(x_0, x_r)) in terms of |B_{(ℓ+ε)r}(x)|, for all ε > 0. Here ℓ = ℓ(X, x), and p_r = P(d(x_0, x_r) ≥ (ℓ + ε)r) goes to 0 when r → +∞. Dividing by r, taking expectation and an inferior limit (at this point we use assumption (1)), and then letting ε go to zero, yields the desired upper bound for h(X, x).

Bounded harmonic functions and entropy of random graphs
It was shown in [BC12, Theorem 3.2] that h(X, x) = 0 if and only if almost surely F_∞ is trivial for the simple random walk starting at the root of (X, x). It follows that if h(X, x) = 0, then X almost surely satisfies the Liouville property. The question of whether the converse always holds was posed in the same paper, see [BC12, Remark 3.7]. We will settle this question under a mild additional hypothesis. We will show that if (X, x) is an ergodic stationary random graph with positive entropy, then almost surely the space of bounded harmonic functions on X is infinite dimensional. In particular, the graph obtained by taking the disjoint union of two copies of the Cayley graph of Z^3 and adding an edge joining them cannot occur with positive probability for any stationary random graph, since its space of bounded harmonic functions has dimension 2. Also, any graph with a transitive isomorphism group must either satisfy the Liouville property or have an infinite dimensional space of bounded harmonic functions.
To begin we express the entropy h(X, x) of a stationary random graph as the average mutual information between the first step and the tail of the corresponding simple random walk.
Lemma 6.1. Let (X, x) be an ergodic stationary random graph with finite entropy. Then for each m, one has E(I^∞_m(X, x)) = m h(X, x).

Proof. By Lemma 4.2, I^n_m(X, x) is non-increasing and converges to I^∞_m(X, x) when n → +∞. Hence, by monotone convergence, E(I^∞_m(X, x)) = lim_{n→+∞} E(I^n_m(X, x)). Using the formula from Lemma 4.2 and stationarity, one obtains E(I^n_m(X, x)) = H_n − H_{n−m}, where H_n = −E(log(p_n(x_0, x_n))). Hence H_n − H_{n−m} converges monotonically. Since H_n/n converges to h(X, x), the limit must be m h(X, x) (take the telescoping sum over n = km for k ∈ N). This concludes the proof.
We can now prove our main theorem.
Theorem 6.2. Let (X, x) be an ergodic stationary random graph satisfying the assumption (1) of Lemma 5.1. Then h(X, x) = 0 if and only if almost surely X satisfies the Liouville property. Furthermore, if h(X, x) > 0, then almost surely the space of bounded harmonic functions on X is infinite dimensional.
Proof. By Lemma 6.1, for each m one has E(I^∞_m(X, x)) = m h(X, x). Hence, if h(X, x) = 0, then almost surely one has I^∞_m(X, x) = 0 for all m. By Lemma 4.1, this implies that (X, x) satisfies the Liouville property almost surely, as claimed.
Assume now that h(X, x) > 0. Notice that by Lemma 5.1, the linear drift of the random walk on (X, x) is positive. We consider the stationary random graph (X', x) obtained from (X, x) by adding deg(y) edges connecting each vertex y to itself (see Figure 1). The simple random walk on this new random graph differs from the old one by geometric waiting times with expectation 2. In particular, the linear drift of the simple random walk on (X', x) is also positive. By Lemma 5.1, this implies h(X', x) > 0.
Using Lemma 6.1 as above, one obtains E(I^∞_m(X', x)) = m h(X', x). Notice that (X', x) almost surely satisfies the hypothesis of Corollary 3.2, so that F_inv and F_∞ are equivalent. Therefore, we may apply Lemma 4.1 and obtain (choosing m so that m h(X', x) > log(d)) that for each d, the probability that (X', x) admits more than d linearly independent bounded harmonic functions is positive. Since the space of bounded harmonic functions remains unchanged by adding deg(y) loops at each vertex, one obtains that for each d, the probability that (X, x) admits at least d linearly independent bounded harmonic functions is positive. Because (X, x) is ergodic, almost surely the space of bounded harmonic functions on (X, x) is infinite dimensional. This concludes the proof.

As an application of Theorem 6.2 we will construct a stationary random graph whose random walk has positive linear drift. It follows that almost surely the graph admits an infinite dimensional space of bounded harmonic functions, a property which, to the best of our knowledge, is not easily established by other means.
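As a sanity check on the "looping" construction used in the proof (an illustration added here, not part of the paper): adding deg(y) self-loops at each vertex y makes the holding probability exactly 1/2 at every vertex, which is the hypothesis of Corollary 3.2, and halves every original transition probability.

```python
# Sanity check (not from the paper): after adding deg(y) self-loops at each
# vertex y, the walk stays put with probability exactly 1/2. Degrees follow
# the text's convention that a loop counts once in the denominator of the
# transition probability; the degree values below are arbitrary examples.
degrees = {0: 3, 1: 1, 2: 5}

p_stay = {}
for y, d in degrees.items():
    new_deg = d + d          # d original edges plus d added loops, each counted once
    p_stay[y] = d / new_deg  # total mass on the d new loops at y
```

Each step of the lazy walk performs a step of the original walk with probability 1/2, so original steps occur after geometric waiting times of mean 2; this halves the drift but preserves its positivity.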
In the context of random walks on groups, a similar example is given by the Cayley graph of the lamplighter group over Z^3. It is quite simple to establish that the random walk on this group has positive linear drift but, a priori, non-constant bounded harmonic functions are not so easy to exhibit. However, all bounded harmonic functions have recently been explicitly identified thanks to the works of Erschler, Lyons and Peres (see [Ers11] and [LP14a]).
Our random graph is a simplified variant of the Stochastic Hyperbolic Infinite Quadrangulation, see [Ben11, Section 6.3]. The difference is that we label edges with only +1 and −1 (never 0) and we join the vertex at a corner to the first corner such that the sum along edges is 0 (instead of −1 as in the SHIQ, see Figure 2).
This last modification implies that our random graph is regular, while the SHIQ almost surely has vertices with arbitrarily large degree. However, our graph is non-planar, and the new edges connect points which are arbitrarily far away on the tree. In particular, the graph is not quasi-isometric to the tree. Hence, even though our graph is transient, having the regular tree as a subgraph [Lyo83], the existence of non-constant bounded harmonic functions does not follow from [BS96]. In principle the graph might be almost planar, i.e. admit a quasi-monomorphism onto a planar graph, but we conjecture that this is not the case. We now give the details of the construction.
To begin, take a regular degree three tree T 0 with some fixed root x. We consider this graph embedded in the plane without self crossings so that there is an order (say clockwise) among the three edges sharing each vertex. We define a corner as the angular sector between two consecutive edges. There is a partial order on the set of corners which is given by the clockwise contour of the graph.
A graph (T, x) rooted at x is constructed as follows: A random label +1 or −1 is chosen with probability 1/2 independently for each edge of T 0 . For each vertex y and each corner at y, we add an edge joining y to the vertex z of the first corner in the partial order such that the sum of labels along the shortest path from y to z is equal to 0. It follows that the random graph (T, x) is almost surely regular with all vertices of degree 9. In particular, the assumption (1) of Lemma 5.1 is trivially satisfied.
We first show that (T, x) is stationary. We do so by showing that it is unimodular, which is equivalent to stationarity since (T, x) is regular (see below).
Recall that a random rooted graph (X, x) is said to be unimodular if for every function F, going from the space of isomorphism classes of graphs with two ordered roots to [0, +∞), one has

E( Σ_{y∈V(X)} F(X, x, y) ) = E( Σ_{y∈V(X)} F(X, y, x) ).

If a random rooted graph (X, x) defined on some probability space (Ω, F, P) is unimodular and E(deg(x)) < +∞, then X is stationary with respect to the probability measure Q defined by

dQ = (deg(x)/E(deg(x))) dP. (2)

See for example [BC12, Section 2.2]. Since P and Q are mutually absolutely continuous, the almost sure properties of (X, x) coincide with those of a stationary random graph.
Lemma 7.1. The random graph (T, x) just constructed is stationary.
Proof. Suppose L is a random labeling of the edges of the ternary tree. Given a vertex y in the tree T_0, let T(L, x, y) denote the isomorphism class, in the space of graphs with two ordered roots, of the graph T obtained from the labeling L, with ordered roots at x and y respectively. The claim is that for every function F, going from the space of isomorphism classes of graphs with two ordered roots to [0, +∞), one has

E( Σ_y F(T(L, x, y)) ) = E( Σ_y F(T(L, y, x)) ).

Notice that since the vertices of T are deterministic, it suffices to show that for each fixed y one has

E(F(T(L, x, y))) = E(F(T(L, y, x))). (3)
To see this, we assume that the underlying ternary tree T_0 has been embedded into the hyperbolic plane in such a way that all edges have the same length and meet at each vertex forming 120° angles.
Under this assumption the hyperbolic central symmetry with respect to the midpoint of any edge leaves the graph invariant and hence uniquely determines an isomorphism of the tree. Any such symmetry acts on a labeling L in the obvious way.
Assume now that y is a neighbor of x in the ternary tree and let σ be the hyperbolic central symmetry with respect to the midpoint of the edge joining x to y. Notice that the graph T (L, y, x) is isomorphic to T (σ * L, x, y), where σ * L is the labeling L rotated using σ. Since σ * L has the same distribution as L, this establishes (3) as claimed.
The general case follows by changing the labeling using the composition σ 1 • · · · • σ n , where the σ i are the central symmetries with respect to the midpoints of the edges in the shortest path joining x to y.
We now verify that the simple random walk on (T, x) has positive linear drift.
Lemma 7.2. The simple random walk on (T, x) has positive linear drift.
Proof. Since T is regular of degree 9, has the same vertex set as the tree T_0, and contains T_0 as a subgraph, T satisfies the strong isoperimetric inequality with a deterministic isoperimetric constant. Hence, by [Vir00, Theorem 1.1] (see also [Ger88]), there exists a constant c > 0 such that the linear drift of the simple random walk on T is at least c almost surely.

Notice that the distribution of (T, x) has compact support. Hence it can be written as an average of ergodic distributions by Choquet's theorem. For almost all of these ergodic distributions, the linear drift of the random walk is positive by the above. Hence, by Theorem 6.2, almost every such graph admits an infinite dimensional space of bounded harmonic functions. We conclude that the space of bounded harmonic functions on (T, x) is infinite dimensional almost surely, as claimed.

Volume growth and Canopy Trees
The main limitation of Theorem 6.2 is that there is no general method to verify the growth hypothesis on the graph. In fact, in this section we will show that there exist recurrent, and hence Liouville, stationary random graphs with super-exponential volume growth. The example was communicated to the author by Elliot Paquette, who attributed it to Asaf Nachmias.
Given a sequence of natural numbers a 1 , a 2 , . . . construct a tree as follows: 1. Begin with a single "level-1" vertex joined to a 1 "level-0" vertices and call the resulting tree T 1 .
2. Join a single (new) "level-2" vertex to the "level-1" vertices of a 2 copies of T 1 to obtain the tree T 2 .
3. For each n ≥ 3 join a single "level-n" vertex to the "level-(n − 1)" vertices of a n copies of T n−1 to obtain T n .
We call the unrooted tree obtained as the union of all T n the "Canopy tree" determined by the sequence a 1 , a 2 , . . .. Notice that the isomorphism group of such a tree preserves and acts transitively on the set of level-n vertices for each n.
In the case where the sequence a_n is constant equal to 2, we obtain the Canopy tree as defined by Aizenman and Warzel in [AW06]. One obtains a stationary random graph by rooting the graph at a level-0 vertex with probability 1/4, and at a level-n vertex with probability 3/2^{n+2} for each n ≥ 1. This is a nice example of a recurrent graph, in particular Liouville, with exponential volume growth: the ball of radius 2n contains at least 2^n and no more than 3^{2n+1} vertices.
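The quoted rooting probabilities can be checked directly. The following sketch (an illustration added here, not part of the paper) verifies detailed balance for the level chain of the a_n = 2 Canopy tree: from level 0 the walk must move up, and from level k ≥ 1 it moves up with probability 1/3 and down with probability 2/3.

```python
# Check (not from the paper) that probability 1/4 at level 0 and 3/2^(n+2)
# at level n >= 1 is a reversible stationary distribution for the level
# chain of the a_n = 2 Canopy tree.
from fractions import Fraction

def pi(n):
    return Fraction(1, 4) if n == 0 else Fraction(3, 2 ** (n + 2))

def p_up(k):
    # A level-0 leaf has degree 1; a level-k vertex (k >= 1) has degree 3.
    return Fraction(1) if k == 0 else Fraction(1, 3)

p_down = Fraction(2, 3)

# Detailed balance pi(k) p(k, k+1) = pi(k+1) p(k+1, k) across every edge.
balanced = all(pi(k) * p_up(k) == pi(k + 1) * p_down for k in range(50))
mass = pi(0) + sum(pi(n) for n in range(1, 60))  # total mass up to level 59
```

Exact rational arithmetic makes the balance identity and the geometric tail of the total mass verifiable with no rounding.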
On the other hand, if all a_n are equal to 1, one obtains a single vertex at each level and there is no way of choosing a random root in such a way that the resulting random graph is stationary. The following lemma shows that in general one may construct a stationary graph from a Canopy tree if the sequence a_n does not contain too many ones.
Lemma 7.3. The Canopy tree determined by a sequence a_1, a_2, . . . admits a random root such that the resulting graph is stationary if and only if

Σ_{n≥1} 1/(a_1 a_2 · · · a_n) < +∞.
Proof. Consider the Markov chain on {0, 1, . . .} with probability 1 of going from 0 to 1, probability 1/(a_k + 1) of going from k to k + 1 for each k ≥ 1, and probability a_k/(a_k + 1) of going from k to k − 1 for each k ≥ 1. There is a random root on the given Canopy tree such that the resulting random graph is stationary if and only if there is a stationary probability for this Markov chain. If the above series converges, then defining p_0 = 1/S and p_k = (a_k + 1)/(S a_1 · · · a_k) for each k ≥ 1, where

S = 1 + (a_1 + 1)/a_1 + (a_2 + 1)/(a_1 a_2) + (a_3 + 1)/(a_1 a_2 a_3) + · · · ,

one obtains a stationary probability for the Markov chain.
For the converse direction, assume that there is a stationary probability for the chain. Then the expected value m k of the hitting time at 0 for the chain started at k is finite and non-negative for all k ≥ 1. The m k satisfy the recurrence relation m k = a k m k−1 /(a k + 1) + m k+1 /(a k + 1) + 1, or equivalently m k+1 − m k = a k (m k − m k−1 ) − (a k + 1).
Setting Δ_k = m_{k+1} − m_k and noting that m_0 = 0, one obtains, using the general solution for a first order linear recurrence, that

Δ_k = a_1 · · · a_k ( m_1 − Σ_{j=1}^{k} (a_j + 1)/(a_1 · · · a_j) ).

Since m_k ≥ 0 for all k, the partial sums Σ_{j=1}^{k} (a_j + 1)/(a_1 · · · a_j) must remain bounded by m_1 for all k (otherwise the Δ_k would eventually be negative and large in absolute value, forcing m_k to become negative). In particular, one obtains Σ_n 1/(a_1 · · · a_n) < +∞ as claimed.
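The stationary probability constructed in the proof of Lemma 7.3 can be verified numerically. The following sketch (an illustration added here, not part of the paper) checks detailed balance exactly, using the sequence a_n = n that appears in the example below.

```python
# Numerical check (not from the paper) of the stationary probability from the
# proof of Lemma 7.3: p_0 = 1/S and p_k = (a_k + 1)/(S a_1 ... a_k), for the
# sequence a_n = n, truncated at level N (balance is checked edge by edge,
# so truncation is harmless).
from fractions import Fraction

N = 40

def a(k):
    return k  # the sequence a_n = n

prod = {0: Fraction(1)}  # prod[k] = a_1 * ... * a_k
for k in range(1, N + 1):
    prod[k] = prod[k - 1] * a(k)

S = 1 + sum(Fraction(a(k) + 1) / prod[k] for k in range(1, N + 1))
p = {0: 1 / S}
for k in range(1, N + 1):
    p[k] = Fraction(a(k) + 1) / (S * prod[k])

def p_up(k):
    # Transition k -> k + 1 of the chain from the proof.
    return Fraction(1) if k == 0 else Fraction(1, a(k) + 1)

def p_down(k):
    # Transition k -> k - 1, for k >= 1.
    return Fraction(a(k), a(k) + 1)

# Detailed balance p_k * P(k -> k+1) = p_{k+1} * P(k+1 -> k) across each edge.
balanced = all(p[k] * p_up(k) == p[k + 1] * p_down(k + 1) for k in range(N))
```

Since S is taken as the truncated sum, the truncated probabilities sum to exactly 1 in rational arithmetic.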
The Canopy tree determined by the sequence a_n = n satisfies the hypothesis of the above lemma, and can therefore be turned into a stationary random graph by adding an appropriate random root. One can verify that the ball of radius 2n centered at any vertex of the tree contains at least n! vertices, and therefore the graph has super-exponential volume growth. On the other hand, the graph is recurrent and therefore Liouville.
In the previous example the degree of the root is k + 1 with probability proportional to (k + 1)/k!, so it has finite expectation and is comparable to a Poisson variable. This example is also interesting in that the degree of the root determines the isomorphism class of the graph completely. In particular, the degree of the root and the number of elements at distance r are highly dependent random variables, even for large r. We will now show that for a unimodular graph where the degree of the root is well behaved and the number of elements of the sphere of radius r is "reasonably independent" from the degree of the root, one can prove that there is finite exponential growth.
Lemma 7.4. Let (X, x) be a unimodular random graph such that E(deg(x)) < +∞ and such that there exists a constant C with

E(|S_r(x)| deg(x)) ≤ C E(|S_r(x)|) (4)

for all r, where |S_r(y)| denotes the number of elements at distance r from the vertex y of X. Then (X, x) has at most exponential volume growth.
Proof. Let B_r(y) be the graph ball of radius r centered at y ∈ V(X) and S_r(y) the corresponding sphere. By the triangle inequality, every vertex at distance r + 1 from x neighbors some vertex in S_r(x), so that |S_{r+1}(x)| ≤ Σ_{y∈S_r(x)} deg(y). Consider the function F(X, x, y) = 1_{d(x,y)=r} deg(y). Applying unimodularity to F, one obtains

E( Σ_{y∈S_r(x)} deg(y) ) = E( |S_r(x)| deg(x) ) ≤ C E(|S_r(x)|)

by our assumption. Therefore E(|S_{r+1}(x)|) ≤ C E(|S_r(x)|) for all r. This implies that v(X, x) ≤ 1 + C. Recall that (X, x) is stationary with respect to the probability measure Q defined in (2). Notice that condition (4) then implies that v_Q(X, x) (i.e. the volume growth relative to the probability measure Q) is also finite.

Augmented Galton-Watson Tree
We will now illustrate how Theorem 6.2 implies a known result about bounded harmonic functions on Galton-Watson trees. Consider two independent Galton-Watson trees T_1 and T_2 with the same offspring distribution {p_k : k ≥ 0}; that is, p_k is the probability that a vertex has k children. We assume that p_0 = 0 and that the offspring distribution has finite mean and variance. The Augmented Galton-Watson tree is constructed by joining the roots of T_1 and T_2 with a single edge and rooting the resulting graph at the root of T_1.
It has been shown in [LPP95] that under the above conditions the Augmented Galton-Watson tree is a stationary random graph and that the simple random walk on it escapes with positive speed, given by

ℓ = Σ_k p_k (k − 1)/(k + 1).
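The speed formula is straightforward to evaluate; the following sketch is an illustration added here, not part of the paper. For the deterministic offspring distribution p_2 = 1 the Augmented Galton-Watson tree is the 3-regular tree, and the formula recovers that tree's drift (3 − 2)/3 = 1/3.

```python
# Evaluating the [LPP95] speed formula quoted above (not from the paper):
# l = sum over k of p_k (k - 1)/(k + 1).
from fractions import Fraction

def agw_speed(offspring):
    # offspring maps k to p_k; we assume p_0 = 0 and the p_k sum to 1.
    return sum(Fraction(p) * Fraction(k - 1, k + 1) for k, p in offspring.items())

binary_speed = agw_speed({2: 1})                               # 3-regular tree
mixed_speed = agw_speed({1: Fraction(1, 2), 3: Fraction(1, 2)})
```

Note that an offspring distribution concentrated on k = 1 gives speed 0, consistent with the walk on the bi-infinite line being recurrent.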
Since the offspring distribution has finite mean and variance, the resulting random graph has finite exponential volume growth. See for example [LP14b,Chapter 12]. Hence one may apply Theorem 6.2 to obtain that almost surely the Augmented Galton-Watson tree admits an infinite dimensional space of bounded harmonic functions.

Hyperbolic κ-Markovian triangulations
In a recent work [Cur14], N. Curien has introduced a one-parameter family of random infinite triangulations of the plane which generalize the Uniform Infinite Planar Triangulation (UIPT) [AS03]. These are called κ-Markovian planar triangulations, where the parameter κ ∈ (0, κ_0]. The critical parameter κ_0 = 2/27 corresponds to the UIPT, while for κ < κ_0 the triangulations are hyperbolic in flavor. It is shown in [Cur14] that in the hyperbolic regime (κ < κ_0) these triangulations are almost surely non-Liouville, have anchored expansion and positive linear drift. The proof of the non-Liouville property relies on the planarity of the triangulations and the fact that almost surely they do not possess the intersection property. In this section, we apply Theorem 6.2 to provide an alternative proof of the non-Liouville property, and in fact to show that (when κ < κ_0) the κ-Markovian triangulation almost surely admits an infinite dimensional space of bounded harmonic functions.
We fix κ ∈ (0, κ_0) for the rest of this section. The κ-Markovian infinite planar triangulation T is a random rooted type II triangulation of the plane. We refer the reader to [AS03, Section 1.2] for the precise definitions. It is defined by the following property: there exists a sequence {C_p}_{p≥2} of positive numbers, which depends on κ, such that if τ is a finite rooted triangulation of the p-gon, then P(τ ⊂ T) = C_p κ^{|τ|}, where |τ| is the number of vertices of τ. Here τ ⊂ T means that T is obtained from τ, with coinciding roots, by filling its hole with a (necessarily unique) infinite triangulation of the p-gon. By [Cur14, Section 3.1], T is stationary and ergodic.
For any r ≥ 1, let B_r(o) denote the sub-triangulation of T consisting of all the triangles which are incident to a vertex at distance less than or equal to r − 1 from the root. Notice that since T is one-ended, the complement of B_r(o) has only one infinite connected component. Let B̄_r(o) be the hull of B_r(o), obtained by filling in all the finite components of its complement.
By [Cur14, Theorem 2], the exponential growth rate of B̄_r(o) is known: there exist a constant λ > 1 and a random variable V ∈ (0, +∞), which depend only on κ, such that λ^{−r} |B̄_r(o)| → V almost surely as r → +∞. In order to apply Theorem 1, we need to control the expected number of elements of the balls.
Lemma 7.6. There exists a constant C, which depends only on κ, such that E|B̄_r(o)| ≤ C^r for all r ≥ 1.
The proof is based on an algorithmic device, called the peeling process, that allows one to construct T as a sequence of growing finite triangulations {T_n}_{n≥0}, see [Ang03]. The process starts by declaring T_0 to be one of the triangles that are incident to the root of T. At each step, T_n is a finite triangulation whose boundary ∂T_n consists of a simple closed curve. Suppose T_n is constructed, and enumerate the boundary vertices ∂T_n = {x_1, . . . , x_p}, where p = |∂T_n| is the perimeter of T_n.
There is a triangle in T \ T_n incident to the edge {x_1, x_p}. If we call the third vertex of this triangle y, there are two possibilities for the location of y: either y is a new vertex, or y = x_i for some i ∈ {2, . . . , p − 1}. The probabilities of these events are expressed in terms of the constants C_p and the partition functions Z_i associated to the Boltzmann distribution on triangulations of the i-gon: if we denote by T_i the set of all finite triangulations of the i-gon, then Z_i = ∑_{τ ∈ T_i} κ^{|τ|−i}, and for any τ ∈ T_i, the probability of τ is given by κ^{|τ|−i}/Z_i. An explicit formula for Z_i is known, see for example [Ray14, Section 3.3], but we will not need it here. To complete the construction of T_{n+1}, one fills the hole created in the case when y = x_i with an independent Boltzmann triangulation τ of the i-gon. This process depends on the choice of an edge to peel at each step. One way to choose the edge to peel is given by the "peeling by layers" algorithm: at step n, peel the right-most edge of ∂T_n which belongs to the triangle just revealed. Using this algorithm, every vertex of ∂T_n will eventually be in the interior of T_m for some big enough m ≥ n, and so T = ∪_{n≥0} T_n.
We will use the notation ∆X_n = X_{n+1} − X_n for the increments of a sequence of random variables. The distribution of the increment ∆P_n of the perimeter can be written in terms of C_p and the Z_i, where a factor of 2 appears because there are two ways of attaching a triangle to T_n with vertices x_1, x_p and x_{i+1}. When p is large, these probabilities converge to a limit distribution. In fact, by [Cur14, Lemma 4], if we let α ∈ (2/3, 1) be given by 2κ = α²(1 − α), then the following limit exists: lim_{p→+∞} β^p C_p = c_α ∈ (0, +∞), where β = κ/α. We denote the resulting limit distribution by {q_1, q_{−i} : i ≥ 1}. Consider (X_n)_{n≥0} a random walk on the integers, started at 2, with independent increments following the distribution {q_1, q_{−i} : i ≥ 1}. In [Ray14, Lemma 4.2] it is shown that this random walk has positive drift given by the expected value of the increments. Conditionally on (X_n)_{n≥0}, define (Y_n)_{n≥0} so that the ∆Y_n are independent and ∆Y_n is distributed as the number of internal vertices of a Boltzmann triangulation of the (−∆X_n + 1)-gon (if ∆X_n = 1, let ∆Y_n = 1 by convention). More precisely, the distribution of ∆Y_n is given by P(∆Y_n = k | ∆X_n = −i) = |T^k_{i+1}| κ^k / Z_{i+1}, where T^k_{i+1} denotes the set of all triangulations of the (i + 1)-gon with k internal vertices. The exact value of |T^k_{i+1}| is known, see for example [AS03, Theorem 2.1], but we will not use it here.
In the proof of Lemma 7.6 we will need the following fact, which is a consequence of moderate deviations estimates. From now on, if f and g are non-negative real valued functions defined on a set A, we will write f ≲ g if there exists a constant C such that f(a) ≤ C g(a) for all a ∈ A.
Lemma 7.7. Let (ξ_n)_{n≥1} be a sequence of independent identically distributed random variables such that E(e^{t|ξ_1|}) < ∞ for some t > 0. Denote μ = E(ξ_1), and let τ ∈ N be any random variable. Then, for all ε > 0, there exists a constant C such that E(∑_{n=1}^{τ} ξ_n) ≤ μ E(τ) + C E(τ^{1/2+ε}).
Proof. From [LG05, Lemma 1.12], for any ε > 0 we have P(A_n) ≤ C_1 e^{−n^{1/2+ε}} for all n ≥ 1, where A_n = {|ξ_1 + · · · + ξ_n − nμ| ≥ n^{1/2+ε}}. Here, the multiplicative constant depends on ε. Consider η(ω) = max{n : ω ∈ A_n}. By the Borel-Cantelli lemma, η is finite almost surely, and in fact its tail is controlled by the same bound (5). We first compute the expectation on the event {τ = n}. By the Cauchy-Schwarz inequality, the second term in the right-hand side of the last inequality is bounded from above by n^{1/2} Var(ξ_1)^{1/2} ∑_{k≥n} P(τ = n, η = k), where the last inequality follows from (5). Bounding from above the last sum, we finally obtain the estimate up to an error γ_n P(τ = n), where γ_n is a summable sequence. Summing over all the possible values of τ, we get the desired inequality.
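The role of the moderate deviations bound can be illustrated numerically: for i.i.d. increments with exponential moments, the probability of the event A_n that the centred partial sum deviates by more than n^{1/2+ε} decays as n grows. The following Monte Carlo sketch (with arbitrary illustrative choices: Exponential(1) increments, ε = 0.1) is only meant to visualise this decay:

```python
import random

def deviation_prob(n, eps, trials, rng):
    """Monte Carlo estimate of P(|S_n - n*mu| >= n^(1/2+eps)) where
    S_n is a sum of n i.i.d. Exponential(1) variables (so mu = 1)."""
    mu = 1.0
    threshold = n ** (0.5 + eps)
    hits = 0
    for _ in range(trials):
        s = sum(rng.expovariate(1.0) for _ in range(n))
        if abs(s - n * mu) >= threshold:
            hits += 1
    return hits / trials

rng = random.Random(1)
p_small = deviation_prob(100, 0.1, 1000, rng)   # moderate n
p_large = deviation_prob(1600, 0.1, 1000, rng)  # larger n: rarer event
```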
Proof of Lemma 7.6. For r ≥ 1, let τ_r be the first time n when all the vertices of ∂T_n are at distance at least r from the root. Then B̄_r(o) = T_{τ_r}, and in particular V_{τ_r} = |B̄_r(o)|. Notice that the increments ∆P_n are bounded from above by 1, so for all r we have P_{τ_r} ≤ τ_r. First we need the following estimate, which is proved in [Ang03, Lemma 4.2]: there exist a, b > 0 such that P(∆τ_r ≥ k | P_{τ_r} = p) ≤ e^{−bk} for all p and k > ap.
From this it follows that E(∆τ_r | P_{τ_r} = p) ≲ p for all p. Since P_{τ_r} ≤ τ_r, iterating this bound shows that there exists a constant C so that E(τ_r) ≤ C^r for all r ≥ 1.
The key point in what follows is that (P_n, V_n) is equal in distribution to (X_n, Y_n) conditioned on the event {X_i ≥ 2 for all i ≥ 0}, which has positive probability, see [Cur14, Section 2]. Denote by γ > 0 the probability of this event; then E(V_{τ_r}) ≤ γ^{−1} E(Y_{τ_r}).
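This conditioning can be mimicked by rejection sampling: run the unconditioned walk and keep only the trajectories that stay ≥ 2 (over a finite horizon). The increments below are a toy distribution with positive drift, standing in for the actual law {q_1, q_{−i}}, which depends on κ and is not reproduced here; the sketch only illustrates the construction:

```python
import random

def sample_conditioned_path(n_steps, rng):
    """Rejection sampling: repeatedly draw a walk started at 2 with toy
    increments (+1 w.p. 0.7, -1 w.p. 0.2, -2 w.p. 0.1, so drift +0.3)
    until a trajectory staying >= 2 for n_steps is found.  This mimics
    conditioning (X_n) on {X_i >= 2 for all i} over a finite horizon."""
    while True:
        x, path = 2, [2]
        ok = True
        for _ in range(n_steps):
            u = rng.random()
            x += 1 if u < 0.7 else (-1 if u < 0.9 else -2)
            path.append(x)
            if x < 2:
                ok = False
                break
        if ok:
            return path

rng = random.Random(2)
paths = [sample_conditioned_path(100, rng) for _ in range(50)]
```

Because the unconditioned drift is positive, a constant fraction of trajectories survives and the rejection loop terminates quickly.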
We first show that the distribution of the increments ∆Y_n has an exponential tail. Let k ≥ 2. Notice that β is a monotone function of κ, so if we take κ < κ′ < κ_0, the corresponding β′ satisfies β < β′ < β_0 = 1/9. Therefore P(∆Y_n ≥ k) decays exponentially in k, and in particular E(e^{t∆Y_1}) < ∞ for some t > 0, so we are in the hypotheses of Lemma 7.7. Let μ = E(∆Y_1); then for any ε > 0, there exists a constant C such that E(Y_{τ_r}) ≤ μ E(τ_r) + C E(τ_r^{1/2+ε}). Fix ε ∈ (0, 1/2); then by Jensen's inequality we obtain E(Y_{τ_r}) ≲ E(τ_r). This finishes the proof of the lemma.

Poisson Delaunay random graphs
In this section we denote by M either the d-dimensional Euclidean space R^d, or the d-dimensional hyperbolic space H^d. In the latter case, we use the Poincaré ball model, in which the metric in polar coordinates is given by 4(dr² + r² dθ²)/(1 − r²)², where dθ² is the standard metric on the sphere S^{d−1}. In both cases we write dx for the volume element on M. Let Π be a homogeneous Poisson point process of intensity one on M. That is, Π is a random discrete set of points on M with the following properties: 1. the number of points in any Borel set A is a Poisson random variable whose expected value is the volume of A; 2. for any two disjoint Borel sets A and B, the corresponding Poisson random variables are independent.
We refer to [Kin93] for an introduction to point processes. We choose the intensity to be one only for simplicity; the arguments given here go through for the general case of constant intensity without significant changes. We set o = 0 ∈ M, and consider the Delaunay graph associated to the discrete set Π_o = Π ∪ {o}. That is, we consider the random rooted graph X with root o and vertex set Π_o such that two vertices x, y ∈ Π_o are joined by a single undirected edge if, and only if, there exists a ball with x and y on its boundary and whose interior contains no points of Π_o. The resulting random graph is almost surely the edge graph of a tessellation of M into simplices, dual to the well-known Voronoi tessellation. See Figure 3 for a realization in R². The main goal of this section is to prove the following theorem.
Theorem 7.8.
1. The Euclidean Poisson-Delaunay random graph is almost surely Liouville for all d.
2. The Hyperbolic Poisson-Delaunay random graph almost surely admits an infinite dimensional space of bounded harmonic functions for d = 2.
Figure 3: An approximate Poisson-Delaunay triangulation. This is the Delaunay triangulation associated to a set of independent uniform points in [−1, 1]² ⊂ R², rooted at the origin. As the number of points increases, the resulting random graph approximates the Poisson-Delaunay random graph in distribution.
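The empty-ball characterization of the edges can be implemented directly for small point sets: in the plane and in general position, a triple of points spans a Delaunay triangle exactly when its circumcircle contains no other point of the set, and the graph consists of the edges of such triangles. The brute-force sketch below mirrors the approximation of Figure 3 (independent uniform points in a square, rooted at the origin); it is illustrative only and makes no attempt at efficiency:

```python
import random

def circumcircle(a, b, c):
    """Centre and squared radius of the circle through a, b, c
    (returns None for a collinear triple)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), (ux - ax) ** 2 + (uy - ay) ** 2

def delaunay_edges(pts):
    """Brute-force Delaunay edges via the empty-circumcircle test:
    keep the edges of every triangle whose circumcircle is empty."""
    n, edges = len(pts), set()
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                cc = circumcircle(pts[i], pts[j], pts[k])
                if cc is None:
                    continue
                (ux, uy), r2 = cc
                if all((p[0] - ux) ** 2 + (p[1] - uy) ** 2 > r2
                       for m, p in enumerate(pts) if m not in (i, j, k)):
                    edges |= {(i, j), (i, k), (j, k)}
    return edges

rng = random.Random(3)
pts = [(0.0, 0.0)] + [(rng.uniform(-1, 1), rng.uniform(-1, 1))
                      for _ in range(25)]
edges = delaunay_edges(pts)
root_degree = sum(1 for e in edges if 0 in e)  # degree of the root o
```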
We will first establish that the growth assumption (1) of Lemma 5.1 is satisfied. The main tool we will use is Slivnyak's formula, see [Møl94, Proposition 4.1.1], which we restate here for the reader's convenience. We will then prove that in the Euclidean case the Poisson-Delaunay graph grows subexponentially; recall that Q is the probability measure defined in (2). In particular, X has polynomial volume growth and v_Q(X, o) = 0.
Proof. We will prove 2; the first assertion follows from the proof by applying the Borel-Cantelli lemma. Let Π be the Poisson point process in R^d with intensity 1, and define L_Π(x) to be the Euclidean distance between x and its farthest neighbor in the Delaunay graph of Π ∪ {x}.
The proof relies on the following exponential bound for the tail of L_Π(x): there is a positive constant c such that P(L_Π(x) > s) ≤ e^{−c s^d} for all s > 0. See for example [Møl94, OBS92]. Let us denote by B^M_r(o) the Euclidean ball of radius r centered at o. Let {s_r}_{r≥0} be a monotone sequence of non-negative numbers, with s_0 = 0, to be chosen later, and define S_r = s_1 + · · · + s_{r−1}. Applying (6), we will bound from above the probability that there exists an edge in the Delaunay graph X, of Euclidean length at least s_r, starting at a point of B^M_{S_r}(o). Recall that D denotes the space of discrete subsets of R^d; we consider a suitable function f on D and apply Slivnyak's formula.
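The heuristic behind this tail bound is that a Delaunay edge of length s forces a ball of radius of order s to be empty of points of Π, and for a unit-intensity Poisson process a fixed ball of radius s in R² is empty with probability exp(−π s²). The following Monte Carlo sketch checks that elementary computation (the window size and radius are illustrative choices):

```python
import math
import random

def poisson_sample(lam, rng):
    """Sample a Poisson(lam) variable (Knuth's product-of-uniforms method)."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def empty_ball_prob(s, trials, rng):
    """Monte Carlo estimate of the probability that the ball of radius s
    around the origin contains no point of a unit-intensity Poisson
    process, simulated on the window [-2, 2]^2 (large enough for s < 2)."""
    area, hits = 16.0, 0
    for _ in range(trials):
        n = poisson_sample(area, rng)
        pts = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(n)]
        if all(x * x + y * y >= s * s for (x, y) in pts):
            hits += 1
    return hits / trials

rng = random.Random(4)
estimate = empty_ball_prob(0.5, 4000, rng)
exact = math.exp(-math.pi * 0.25)  # exp(-pi s^2) for s = 1/2, about 0.456
```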

Consider the sequence of events A_r of seeing a long edge at level r,
and define K(Z) = max{r : Z ∈ A_r}. Then {K ≥ k} = ∪_{r≥k} A_r, and therefore the tail of K is controlled by the estimates above. Suppose that Π_o satisfies K(Π_o) = k. This implies that for any r ≥ k and any point x ∈ Π_o \ B_{S_{r−1}}, the distance in the graph X between o and x is at least r − k. In other words, on this event the graph ball of radius r − k is contained in the Euclidean ball of radius S_r. In summary, we obtained E|B_r(o)|² ≲ e^{2Lr} + Cr = O(e^{2Lr}).
The proof concludes as in the previous lemma by applying the Cauchy-Schwarz inequality.
We now establish unimodularity of the Poisson-Delaunay graphs in both the Euclidean and Hyperbolic case using Slivnyak's formula. The proof also applies to the more general case when M is a symmetric space. A different proof of the same result is given in [BPP14].
Proof. Given a discrete subset D of M and two points x, y ∈ D, let π(D, x, y) be the Delaunay graph associated to D with two ordered root vertices corresponding to x and y. The codomain of π is the space of isomorphism classes of graphs with two roots.
Recall that Π is a Poisson point process with intensity 1. The aim is to show that the Delaunay graph associated to Π_o rooted at o is unimodular. From Slivnyak's formula, we obtain, for any measurable function F on the space of graphs with two roots, an expression as an integral over y ∈ M of E(F(π(Π ∪ {o, y}, o, y))). For each y ∈ M, the expected value in the integral on the right-hand side can be written as E(F(π(Π ∪ {o, y}, o, y))) = E(F(π(Π′ ∪ {o, y}, o, y))), where Π′ is obtained from Π by the symmetry with respect to the midpoint of the geodesic segment [o, y]. The equality follows because the distribution of Π is invariant under isometries of M.
The first part of Theorem 7.8 follows directly from Lemma 7.10, Theorem 1 and Lemma 5.1, since in that case v_Q(X, o) = 0. For the second part, by [BPP14, Theorem 1.1 and Theorem 1.4], the random walk on the 2-dimensional Hyperbolic Delaunay graph has positive linear drift, so we can conclude as before using Lemma 7.11. As far as the authors are aware, there are no results in the literature on the speed of the random walk on Hyperbolic Poisson-Delaunay graphs of dimension d > 2.