Mixing time for the random walk on the range of the random walk on tori

Consider the subgraph of the discrete $d$-dimensional torus of side length $N$, $d\ge3$, induced by the range of the simple random walk on the torus run up to time $uN^d$. We prove that for all $d\ge 3$ and $u>0$, the mixing time of the random walk on this subgraph is of order $N^2$ with probability at least $1 - Ce^{-(\log N)^2}$.


Introduction
Let $X_n$ be a simple random walk on a large $d$-dimensional discrete torus $\mathbb{T}^d_N = (\mathbb{Z}/N\mathbb{Z})^d$, $d \ge 3$, started from the uniform distribution on $\mathbb{T}^d_N$. For $u > 0$ and $N \in \mathbb{N}$, let $\mathcal{I}^u_N = \{X_0, \ldots, X_{\lfloor uN^d \rfloor}\}$ be its range on the time interval $[0, uN^d]$. We view $\mathcal{I}^u_N$ as a subgraph of $\mathbb{T}^d_N$ in which edges are drawn between any two vertices within $\ell^1$-distance $1$ of each other.
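For small parameters the setup above is easy to simulate. The following sketch (the helper name `range_subgraph` and all parameter choices are ours, purely for illustration) generates the range $\mathcal{I}^u_N$ together with the edge set of the induced subgraph:

```python
import random

def range_subgraph(N, d, u, seed=0):
    """Run a simple random walk on the torus (Z/NZ)^d for floor(u*N^d) steps,
    starting from a uniform vertex, and return (range, induced edge set)."""
    rng = random.Random(seed)
    x = tuple(rng.randrange(N) for _ in range(d))   # uniform starting point
    visited = {x}
    for _ in range(int(u * N ** d)):
        i = rng.randrange(d)                        # coordinate to move in
        step = rng.choice((-1, 1))
        x = x[:i] + ((x[i] + step) % N,) + x[i + 1:]
        visited.add(x)
    # edges of the induced subgraph: pairs of visited vertices at l1-distance 1
    edges = set()
    for v in visited:
        for i in range(d):
            w = v[:i] + ((v[i] + 1) % N,) + v[i + 1:]
            if w in visited:
                edges.add((v, w))
    return visited, edges
```

Note that the induced subgraph is connected, since it contains every edge traversed by the walk itself, while the induced edges may add further connections between different stretches of the trajectory.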
In this note, we are interested in the behavior of the mixing time of the random walk on this graph as $N$ grows while $u > 0$ remains fixed. We prove that the mixing time is of order $N^2$ and give bounds on the probability of the good event.
To state our main theorem, we recall that a lazy random walk on a finite connected graph $G = (V, E)$ is a Markov chain with the transition probabilities $\{p(x, y)\}_{x,y \in V}$ given by

$p(x, y) = \begin{cases} \frac12, & \text{if } y = x, \\ \frac{1}{2d_x}, & \text{if } \{x, y\} \in E, \\ 0, & \text{otherwise}, \end{cases}$

where $d_x$ is the degree of $x$ in $G$. The $\frac14$-uniform mixing time (or simply mixing time) of the lazy random walk on $G$ is defined by

$t_{\mathrm{mix}}(G) = \min\Big\{ n \ge 0 : \Big|\frac{p_n(x, y) - \pi(y)}{\pi(y)}\Big| \le \frac14 \text{ for all } x, y \in V \Big\},$

where $\pi$ denotes the (unique) stationary distribution of the walk, and $p_n$ its $n$-step transition probability. Our main result is the following theorem.
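The definition can be checked by direct computation on small graphs. Below is a minimal sketch in exact arithmetic (the function names are ours; this only illustrates the definition and plays no role in the proofs):

```python
from fractions import Fraction

def lazy_kernel(adj):
    """Transition matrix of the lazy walk: stay w.p. 1/2, else uniform neighbour."""
    P = {}
    for v, nbrs in adj.items():
        P[v] = {w: Fraction(1, 2 * len(nbrs)) for w in nbrs}
        P[v][v] = P[v].get(v, Fraction(0)) + Fraction(1, 2)
    return P

def uniform_mixing_time(adj):
    """1/4-uniform mixing time of the lazy random walk on a finite connected
    graph, given as an adjacency dict {vertex: list of neighbours}."""
    V = list(adj)
    P = lazy_kernel(adj)
    vol = sum(len(adj[v]) for v in V)
    pi = {v: Fraction(len(adj[v]), vol) for v in V}   # stationary distribution
    Pn = {v: dict(P[v]) for v in V}                   # n-step probabilities, n = 1
    n = 1
    while True:
        # check |p_n(x,y) - pi(y)| <= pi(y)/4 for all pairs x, y
        if all(abs(Pn[x].get(y, Fraction(0)) - pi[y]) <= pi[y] / 4
               for x in V for y in V):
            return n
        new = {}                                      # Pn <- Pn * P
        for x in V:
            row = {}
            for z, p in Pn[x].items():
                for y, q in P[z].items():
                    row[y] = row.get(y, Fraction(0)) + p * q
            new[x] = row
        Pn = new
        n += 1
```

For the triangle on $\{0,1,2\}$ with all edges present, for instance, the lazy walk is $\frac14$-uniformly mixed after two steps.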
Theorem 1.1. Let $d \ge 3$ and $u > 0$. There exist $c = c(d, u) > 0$ and $C = C(d, u) < \infty$ such that for all $N \in \mathbb{N}$,

(1.1) $\mathbb{P}\big[\, cN^2 \le t_{\mathrm{mix}}(\mathcal{I}^u_N) \le CN^2 \,\big] \ge 1 - Ce^{-(\log N)^2}.$

The lower bound on $t_{\mathrm{mix}}(\mathcal{I}^u_N)$ in Theorem 1.1 is not difficult to show. In fact, the probability bound for it can easily be improved to $\ge 1 - Ce^{-N^\delta}$. The substantial contribution of this note is the upper bound on $t_{\mathrm{mix}}(\mathcal{I}^u_N)$ of the correct order. Previously, Procaccia and Shellef [PS14, Theorem 2.2] showed that

$\lim_{N \to \infty} \mathbb{P}\big[\, t_{\mathrm{mix}}(\mathcal{I}^u_N) \le N^2 \log^{(k)} N \,\big] = 1, \quad \text{for every } k \ge 0,$

where $\log^{(k)} N$ is the $k$-th iterated logarithm. Our theorem on the one hand sharpens their result, and on the other hand gives a bound on the probability of the good event.
The decay rate in (1.1) can easily be improved from $e^{-(\log N)^2}$ to any $e^{-(\log N)^p}$, $p > 2$, but our method does not allow us to obtain a stretched exponential rate $e^{-N^\delta}$.
The main ingredient of the proof of Theorem 1.1 is the following isoperimetric inequality for subsets of $\mathcal{I}^u_N$, which may be of independent interest. For $A \subseteq \mathcal{I}^u_N$, let $\partial_{\mathcal{I}^u_N} A$ be the set of edges of $\mathcal{I}^u_N$ with exactly one end-vertex in $A$.

Theorem 1.2. Let $d \ge 3$, $u > 0$, and $\mu \in [\frac12, 1)$. There exists $\gamma_{1.2} = \gamma_{1.2}(d, u, \mu) > 0$ such that for all $N \ge N_0(u)$,

(1.2) $\mathbb{P}\big[\, |\partial_{\mathcal{I}^u_N} A| \ge \gamma_{1.2}\,|A|^{1-\frac1d} \text{ for all } A \subseteq \mathcal{I}^u_N \text{ with } |A| \le \mu\,|\mathcal{I}^u_N| \,\big] \ge 1 - e^{-(\log N)^2}.$

Theorem 1.2 is proved by combining a new isoperimetric inequality for (deterministic) graphs from [Sap14] with the strong coupling of $\mathcal{I}^u_N$ and the random interlacements from [ČT14]. We recall these results in Section 2. In the remaining two sections of this note we then prove Theorem 1.2 and Theorem 1.1, respectively.
In the remainder of this note, we omit the dependence of various constants on $d$. The constants inherit their numbers from the theorems in which they first appear, and their dependence on other parameters is mentioned explicitly.

Preliminaries
We introduce some notation first. For $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$, its $\ell^1$ and $\ell^\infty$ norms are defined by $|x|_1 = \sum_{i=1}^d |x_i|$ and $|x|_\infty = \max_{1 \le i \le d} |x_i|$. We consider the measurable space $\Omega = \{0,1\}^{\mathbb{Z}^d}$, $d \ge 3$, equipped with the $\sigma$-algebra $\mathcal{F}$ generated by the coordinate maps $\{\omega \mapsto \omega(x)\}_{x \in \mathbb{Z}^d}$. For any $\omega \in \{0,1\}^{\mathbb{Z}^d}$, we denote the induced subset of $\mathbb{Z}^d$ by $\mathcal{S} = \mathcal{S}(\omega) = \{x \in \mathbb{Z}^d : \omega(x) = 1\}$. We view $\mathcal{S}$ as a subgraph of $\mathbb{Z}^d$ in which edges are drawn between any two vertices of $\mathcal{S}$ within $\ell^1$-distance $1$ of each other.
2.1. Deterministic isoperimetric inequality. One of the main tools for our proofs is an isoperimetric inequality from [Sap14] for subsets of a (deterministic) graph, satisfied uniformly over a large class of graphs. Each graph in this class is contained in a large box, is well-connected on a mesoscopic scale, and admits a dense, well-structured connected subset identified through a multiscale renormalization scheme. In this section we recall some notation and the necessary results from [Sap14].
Let $\lambda$ and $L_0$ be positive integers. For $n \ge 0$ we consider the following sequences of scales:

(2.1) $l_n = \lambda^2 4^{n^2}, \qquad r_n = \lambda 2^{n^2}, \qquad L_{n+1} = l_n L_n.$
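Unwinding the recursion $L_{n+1} = l_n L_n$ (with the first scale read as $l_n = \lambda^2 4^{n^2}$) gives a closed form for $L_s$ and the upper bound used at the end of Section 3:

```latex
L_s \;=\; L_0 \prod_{n=0}^{s-1} l_n
    \;=\; L_0 \prod_{n=0}^{s-1} \lambda^2\, 4^{n^2}
    \;=\; L_0\, \lambda^{2s}\, 4^{\sum_{n=0}^{s-1} n^2}
    \;\le\; L_0\, \lambda^{2s}\, 4^{s^3},
```

since $\sum_{n=0}^{s-1} n^2 \le s^3$.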
To set up a multiscale renormalization with scales $L_n$, we introduce two families of good vertices. Their precise definitions will not be used in this note; the reader may skip directly to the statement of Theorem 2.4.
If $x \in \mathcal{G}_0$ is not $(0a)$-good, then we call it $(0a)$-bad. For $n \ge 1$, we recursively define $x \in \mathcal{G}_n$ to be $(na)$-bad in $\omega \in \Omega$ if there exist two well-separated $((n-1)a)$-bad vertices in the box of $x$.

Definition 2.3. For $n \ge 0$, we say that $x \in \mathcal{G}_n$ is $n$-good in configuration $\omega \in \Omega$ if it is at the same time $(na)$-good and $(nb)$-good. Otherwise, we call the vertex $x$ $n$-bad.
Let us briefly comment on the above definitions. In classical renormalization techniques on percolation clusters, good boxes are usually defined as the ones containing a unique cluster with large diameter. In our case, it is crucial to define good boxes in terms of only monotone events, since for these one can get good control of correlations with the help of sprinkling, even in models with polynomial decay of correlations (see [Szn10, Szn12, PT15]).
The main motivation behind the choice $\eta_2 < 2\eta_1$ comes from the following observation. If neighbors $x, y \in \mathcal{G}_0$ are $0$-good, then $\mathcal{C}_x$ and $\mathcal{C}_y$ are uniquely defined and locally connected (see [Sap14, Lemma 3.1]). This property is essential for identifying a ubiquitous well-structured connected subset of $\mathcal{S}$ through a multiscale renormalization. (See [RS13, DRS14, PRS13, Sap14].) The following statement is a special case of [Sap14, Lemma 3.3 and Theorem 3.8]. It gives an isoperimetric inequality for subsets of a local enlargement of $\mathcal{S} \cap [0, KL_s)^d$. This enlargement serves as a smoothing of the possibly rough boundary of $\mathcal{S} \cap [0, KL_s)^d$, thus improving the isoperimetric properties of $\mathcal{S} \cap [0, KL_s)^d$ near its boundary.
Corollary 2.6. In the setting of Theorem 2.4, assume in addition that $K \ge L_s^{d^3}$. Then there exists $\gamma_{2.6} = \gamma_{2.6}(d) > 0$ such that for every $\omega \in \Omega$ satisfying (a)-(c) of Theorem 2.4 and for all $A \subset \widetilde{\mathcal{C}}$ with $|A| \le \frac12|\widetilde{\mathcal{C}}|$,

$|\partial_{\widetilde{\mathcal{C}}}\, A| \ge \gamma_{2.6}\, |A|^{1-\frac1d}.$

2.2. Coupling with random interlacements. Another principal ingredient in the proofs of our main results is the coupling of $\mathcal{I}^u_N$ with random interlacements inside macroscopic subsets of the torus, constructed in [ČT14]. We use $\mathcal{I}^u$ to denote the random interlacements on $\mathbb{Z}^d$ at level $u$, as introduced in [Szn10, (1.53)]. In the next theorem, we identify the torus $\mathbb{T}^d_N$ with the set $[0, N)^d \cap \mathbb{Z}^d$.

Theorem 2.7 ([ČT14, Theorem 4.1]). Let $d \ge 3$ and $u > 0$. For any $\varepsilon > 0$ and $\alpha \in (0, 1)$, there exist $\delta_{2.7} > 0$ and $C_{2.7} < \infty$ such that for all $N \ge 1$ and every box $A \subseteq \mathbb{T}^d_N$ of side length at most $\alpha N$, there exists a coupling $\mathbb{Q}$ of $\mathcal{I}^u_N$, $\mathcal{I}^{u(1-\varepsilon)}$, and $\mathcal{I}^{u(1+\varepsilon)}$ satisfying

$\mathbb{Q}\big[\, \mathcal{I}^{u(1-\varepsilon)} \cap A \subseteq \mathcal{I}^u_N \cap A \subseteq \mathcal{I}^{u(1+\varepsilon)} \cap A \,\big] \ge 1 - C_{2.7} e^{-N^{\delta_{2.7}}}.$

Proof of the isoperimetric inequality
We may now proceed to the proof of Theorem 1.2. To this end, we need to check that assumptions (a)-(c) of Theorem 2.4 hold with high probability. We fix $u > 0$ and consider the function $\eta(u) = 1 - e^{-\frac{u}{g(0,0)}}$, where $g(\cdot, \cdot)$ is the Green function of the simple random walk on $\mathbb{Z}^d$. (The function $\eta(u)$ is the density of random interlacements at level $u$.) We further fix $\varepsilon > 0$ small enough so that

(3.1) $\eta_1 := \frac34\,\eta(u(1-\varepsilon)) \quad \text{and} \quad \eta_2 := \frac54\,\eta(u(1+\varepsilon))$

satisfy condition (2.2).
The first lemma provides an estimate of the probability that a vertex is $s$-bad. Its proof relies on the corresponding results for random interlacements [DRS14, Lemmas 4.2 and 4.4] and the coupling from Theorem 2.7. In its statement, we consider $\mathcal{I}^u_N$ as a subset of $\mathbb{Z}^d$ obtained by the canonical periodic embedding of $\mathbb{T}^d_N$ in $\mathbb{Z}^d$.

Lemma 3.1. For any $u > 0$, $\alpha \in (0, 1)$, and $\varepsilon$ as in (3.1), there exist $C_{3.1} = C_{3.1}(u, \varepsilon, \alpha) < \infty$ and $C'_{3.1} = C'_{3.1}(u, \varepsilon, \alpha, \lambda) < \infty$ such that for all $\lambda \ge C_{3.1}$, $L_0 \ge C'_{3.1}$, and $s \ge 0$ with $L_s + 2L_0 \le \alpha N$,

$\mathbb{P}\big[\, 0 \text{ is } s\text{-bad in } \mathcal{I}^u_N \,\big] \le 2 \cdot 2^{-2^s} + C_{2.7} e^{-N^{\delta_{2.7}}}.$

Proof of Lemma 3.1. Observe first that the event $\{0 \text{ is } s\text{-bad in } \mathcal{I}^u_N\}$ depends only on the states of the vertices inside $B := [-L_0, L_s + L_0]^d \cap \mathbb{Z}^d$. By assumption, $L_s + 2L_0 \le \alpha N$. Thus, using Theorem 2.7, we can couple $\mathcal{I}^u_N$ with $\mathcal{I}^{u(1\pm\varepsilon)}$ so that

$\mathbb{Q}\big[\, \mathcal{I}^{u(1-\varepsilon)} \cap B \subseteq \mathcal{I}^u_N \cap B \subseteq \mathcal{I}^{u(1+\varepsilon)} \cap B \,\big] \ge 1 - C_{2.7} e^{-N^{\delta_{2.7}}}.$

Further, by the monotonicity of the $(sa)$- and $(sb)$-bad events, on the event of the above display the following inclusion holds:

$\{0 \text{ is } s\text{-bad for the realization of } \mathcal{I}^u_N\} \subseteq \{0 \text{ is } (sa)\text{-bad for the realization of } \mathcal{I}^{u(1-\varepsilon)}\} \cup \{0 \text{ is } (sb)\text{-bad for the realization of } \mathcal{I}^{u(1+\varepsilon)}\}.$
By [DRS14, Lemmas 4.2 and 4.4], the probabilities of the two events on the right-hand side are bounded from above by $2 \cdot 2^{-2^s}$. This completes the proof.

The next lemma implies that the assumptions of Theorem 2.4 hold with large probability for $\mathcal{S} = \mathcal{I}^u_N$.

Lemma 3.2. For any $u > 0$, there exist $C_{3.2} = C_{3.2}(u) < \infty$ and $\delta_{3.2} = \delta_{3.2}(u) > 0$ such that for all $s \ge 0$ and $K \ge 1$ with $(K+4)L_s \le \frac{6N}{7}$,

(3.2) $\mathbb{P}\big[\, \mathcal{I}^u_N \text{ does not satisfy assumptions (a)-(c) of Theorem 2.4} \,\big] \le C_{3.2} (KL_s)^d\, 2^{-2^s} + e^{-L_s^{\delta_{3.2}}}.$

Proof of Lemma 3.2.
We first claim that there are a large $C = C(u) < \infty$ and a small $\delta = \delta(u) \in (0, 1)$ such that the following connectivity estimate holds:

(3.3) $\mathbb{P}\big[\, \text{every two vertices of } \mathcal{I}^u_N \cap B(0, m) \text{ are connected in } \mathcal{I}^u_N \cap B(0, 2m) \,\big] \ge 1 - Ce^{-m^\delta}, \quad \text{for all } m \le N^{\frac12}.$

Indeed, this can be proved in the same way as [ČP12, Lemma 8.1], replacing the box of size $(\log N)^\gamma$ used there by a box of size $\delta L_s$. The proof in [ČP12] uses ingredients from [TW11], namely Lemmas 3.9, 3.10 and 4.3, which hold for boxes of size up to $N^{\frac12}$. Since $\delta L_s \le N^{1/2}$, by the assumption on $K$, we can use them without modification.
Using the translation invariance, the same estimates hold for all boxes $B(x, \delta L_s)$ with $x \in [0, KL_s)^d$. Moreover, using (3.3) with $m = \delta L_s$, all of these boxes are simultaneously well-connected with large probability. Finally, as in the proof of [ČP12, Theorem 1.6], assuming that the events of the last two estimates hold, for every $x$ and $y$ in $\mathcal{I}^u_N \cap [0, KL_s)^d$ one finds a sequence $x = x_0, x_1, \ldots, x_k = y$ of vertices of $\mathcal{I}^u_N$ with consecutive points within distance $\delta L_s$ of each other. In particular, $x_{i-1}$ and $x_i$ are connected in $\mathcal{I}^u_N \cap B(x_i, L_s)$, and thus $x$ and $y$ are connected in $\mathcal{I}^u_N \cap B(x, 2L_s)$. It follows that the connectivity assumption of Theorem 2.4 is satisfied. By combining the three bounds and using the relation $L_s \le \alpha N$, we obtain the desired bound (3.2).
We will prove Theorem 1.2 by making a suitable choice of $s$ and $K$ in Lemma 3.2 as functions of $N$.
Proof of Theorem 1.2. Take the scales as in (2.1) with $\lambda \ge C_{3.1}$ and $L_0 \ge C'_{3.1}$. Without loss of generality, we assume that $N \ge 7L_0^{d^3+1}$. Let $s$ and $K$ be chosen, as functions of $N$, so that $K \ge L_s^{d^3}$ and $(K+4)L_s \le \frac{6N}{7}$. Thus, the parameters $s$ and $K$ satisfy the conditions of Corollary 2.6 and Lemma 3.2. To apply Corollary 2.6, for each $x \in [0, N)^d$, we define the local enlargement $\widetilde{\mathcal{C}}_x$ of $\mathcal{C}_x$.
Here, as before, we consider $\mathcal{I}^u_N$ as a subset of $\mathbb{Z}^d$ obtained by the canonical periodic embedding of $\mathbb{T}^d_N$ in $\mathbb{Z}^d$. By Corollary 2.6, Lemma 3.2, and translation invariance, for some $\beta = \beta(u) > 0$ and $\gamma = \gamma(u) > 0$,

(3.4) $\mathbb{P}\Big[\, \text{for all } x \in [0, N)^d: |\widetilde{\mathcal{C}}_x| \ge \beta N^d \text{ and } |\partial_{\widetilde{\mathcal{C}}_x} A| \ge \gamma |A|^{1-\frac1d} \text{ for all } A \subset \widetilde{\mathcal{C}}_x \text{ with } |A| \le \tfrac12|\widetilde{\mathcal{C}}_x| \,\Big] \ge 1 - N^d \cdot \mathrm{RHS}_{3.2},$

where $\mathrm{RHS}_{3.2} = C_{3.2} (KL_s)^d\, 2^{-2^s} + e^{-L_s^{\delta_{3.2}}}$ is the right-hand side of (3.2). The proof of Theorem 1.2 will be completed once we prove that for all $N \ge N_0(u)$: (a) the event in (3.4) implies the event in (1.2), with a possibly different $\gamma$, and (b) $N^d \cdot \mathrm{RHS}_{3.2} \le e^{-(\log N)^2}$.

We begin by showing (a). Assume that the event in (3.4) occurs, let $A$ be a subset of $\mathcal{I}^u_N$ with $|A| \le \frac12|\mathcal{I}^u_N|$, and write $A_x = A \cap \widetilde{\mathcal{C}}_x$. If $|A_x| \le \frac12|\widetilde{\mathcal{C}}_x|$ for all $x$, then summing the isoperimetric inequality from (3.4) over a suitable sub-collection of the $x$'s bounds $|\partial_{\mathcal{I}^u_N} A|$ from below, where in the last step one uses the inequality $\sum_i |A_{x_i}|^{1-\frac1d} \ge \big(\sum_i |A_{x_i}|\big)^{1-\frac1d}$. Thus, in this case, the event in (3.4) implies the event in (1.2) with $\gamma_{1.2} = \frac{1}{7^d}\gamma$.
It remains to consider the case when $|A_x| > \frac12|\widetilde{\mathcal{C}}_x|$ for some $x$. We claim that in this case, for $N \ge N_0(u)$, there exists $x$ such that

(3.5) $\frac12\beta N^d \le |A_x| \le \Big(1 - \frac{\beta}{2\cdot7^d}\Big)|\widetilde{\mathcal{C}}_x|.$

Assume that this is not the case. Since there exists $x$ such that $|A_x| > \frac12|\widetilde{\mathcal{C}}_x|$ and $|\widetilde{\mathcal{C}}_x| \ge \beta N^d$, the non-validity of (3.5) implies that $|A_x| > \big(1 - \frac{\beta}{2\cdot7^d}\big)|\widetilde{\mathcal{C}}_x|$ for this $x$. Assume first that the last inequality holds for all $x$. Then $|A|$ would exceed $\frac12|\mathcal{I}^u_N|$, which contradicts the assumption that $|A| \le \frac12|\mathcal{I}^u_N|$. Thus, for each $x$, either $|A_x| < \frac12\beta N^d$ or $|A_x| > \big(1 - \frac{\beta}{2\cdot7^d}\big)|\widetilde{\mathcal{C}}_x|$, and both types of $x$ exist. In particular, there exist $x, y$ with $|x - y|_1 = 1$ such that $|A_x| < \frac12\beta N^d$ and $|A_y| > \big(1 - \frac{\beta}{2\cdot7^d}\big)|\widetilde{\mathcal{C}}_y|$. For these $x$ and $y$, one obtains, on the one hand, a lower bound on $|A_y \setminus A_x|$ and, on the other hand, an upper bound on it, where the latter holds for large enough $K$. Since $KL_s \le N$ and $KL_s \ge \frac{N}{7}$, the two bounds on $|A_y \setminus A_x|$ cannot be fulfilled simultaneously if $N \ge N_0(u)$. This contradiction proves (3.5).

Now let $x$ be as in (3.5) and apply the isoperimetric inequality from the event in (3.4) in $\widetilde{\mathcal{C}}_x$. Thus, if (3.5) holds, then the event in (3.4) implies the event in (1.2) with $\gamma_{1.2} = \frac{\beta}{2\cdot7^d}\gamma$. Putting the two cases together proves (a).

It remains to prove that $N^d \cdot \mathrm{RHS}_{3.2} \le e^{-(\log N)^2}$ for $N \ge N_0(u)$. By (2.1) and the choice of $s$ and $K$, $L_s \ge \frac14 N^{\frac{1}{3(d^3+1)}}$. On the other hand, by (2.1), $L_s \le L_0 \cdot \lambda^{2s} \cdot 4^{s^3}$, which implies that there exists $c = c(L_0, \lambda) > 0$ such that $s \ge c(\log N)^{\frac13} - 1$ and $2^s \ge c(\log N)^4$. Thus, there exists $C = C(u) < \infty$ such that

$(KL_s)^d \cdot 2^{-2^s} + e^{-L_s^{\delta_{3.2}}} \le Ce^{-(\log N)^3}.$

By taking $N$ large enough, $N^d \cdot \mathrm{RHS}_{3.2} \le CN^d e^{-(\log N)^3} \le e^{-(\log N)^2}$. The proof of Theorem 1.2 is complete.
Proof of Theorem 1.1

We begin with the proof of the upper bound. It is very similar to the proof of [PS14, Theorem 3.1], which relies on the bound on the mixing time from [MP05, Theorem 1]. For $r > 0$, let $\phi(r)$ be the conductance profile of the lazy random walk on $\mathcal{I}^u_N$. By [PS14, (16)], there exists $C = C(d) < \infty$ such that $t_{\mathrm{mix}}(\mathcal{I}^u_N)$ is bounded from above in terms of $\phi$. Consider the event from (1.2) with $\mu = 1 - \frac{1}{4d}$. For each realization of $\mathcal{I}^u_N$ from this event and all $r$, the isoperimetric inequality gives a lower bound on $\phi(r)$. By (1.2), there exists $C = C(u) < \infty$ such that

$\mathbb{P}\big[\, t_{\mathrm{mix}}(\mathcal{I}^u_N) \le CN^2 \,\big] \ge 1 - e^{-(\log N)^2}.$

We proceed with the proof of the lower bound. By the volume bound in (3.4) and (3.6), there exist $C = C(u) < \infty$ and $\beta = \beta(u) > 0$ such that for all $N \ge 1$,

$\mathbb{P}\big[\, |\mathcal{I}^u_N| \ge \beta N^d \,\big] \ge 1 - Ce^{-(\log N)^2}.$

Assume the occurrence of the event under the probability. By [BP89, Theorem 2.1], there exists $C = C(u) < \infty$ such that for each $\varepsilon > 0$ and $n < \frac13 N$, one can find $x = x(\varepsilon, n) \in \mathcal{I}^u_N$ so that

$\sum_{y \in B(x,n)} p_{\varepsilon n^2}(x, y) \ge 1 - C\varepsilon.$
We take $\varepsilon$ small enough and $n < \varepsilon N$ so that the first sum, $\sum_{y \in B(x,n)} p_{\varepsilon n^2}(x, y)$, is larger than $\frac12$ and the second, $\pi(B(x,n))$, is smaller than $\frac14$. Then there exists at least one $y \in B(x, n)$ such that $|p_{\varepsilon n^2}(x, y) - \pi(y)| \ge \frac14\pi(y)$: otherwise we would have $\sum_{y \in B(x,n)} p_{\varepsilon n^2}(x, y) < \frac54\,\pi(B(x,n)) < \frac{5}{16} < \frac12$, a contradiction. Thus, $t_{\mathrm{mix}}(\mathcal{I}^u_N) \ge \varepsilon^3 N^2$, and we conclude that for some $c = c(u) > 0$ and $C = C(u) < \infty$,

$\mathbb{P}\big[\, t_{\mathrm{mix}}(\mathcal{I}^u_N) \ge cN^2 \,\big] \ge 1 - Ce^{-(\log N)^2}.$

The proof of Theorem 1.1 is complete.
Remark 4.1. (a) In the proof of the lower bound on $t_{\mathrm{mix}}(\mathcal{I}^u_N)$ we only used that $\mathcal{I}^u_N$ has positive density in large subboxes of $\mathbb{T}^d_N$. This follows from the facts that $\mathcal{I}^u_N$ dominates random interlacements and that random interlacements are dense in large boxes. Both facts hold with probability $\ge 1 - Ce^{-N^\delta}$. Thus, $\mathbb{P}\big[t_{\mathrm{mix}}(\mathcal{I}^u_N) \ge cN^2\big] \ge 1 - Ce^{-N^\delta}$.

(b) The method of this note also applies (with minimal changes) to the largest connected component of the vacant set $\mathcal{V}^u_N = \mathbb{T}^d_N \setminus \mathcal{I}^u_N$, when $u$ is strongly supercritical; see [TW11, Definition 2.4]. For instance, property (b) of Theorem 2.4 for the largest cluster is shown to be very likely for strongly supercritical $u$'s in [DRS14, Section 2.5]. So far, it is only known that strongly supercritical $u$'s exist if $d \ge 5$; see [Tei11].

(c) It is natural to consider $\mathcal{I}^u_N$ as a random subgraph of $\mathbb{T}^d_N$ whose edges are those traversed by the random walk. All our results remain true in this case. The proofs presented in this note are robust to this change, but the external ingredients would need to be adapted to the corresponding bond models. Although the changes needed are only notational, presenting them would take us away from the main goal of this note.
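The bond version from part (c) is equally easy to simulate. A sketch (the helper name `traversed_edges` is ours): instead of the vertices visited, one records the edges actually traversed by the walk.

```python
import random

def traversed_edges(N, d, u, seed=0):
    """Bond version of the range: collect the (undirected) edges of the torus
    (Z/NZ)^d traversed by a simple random walk up to time floor(u*N^d)."""
    rng = random.Random(seed)
    x = tuple(rng.randrange(N) for _ in range(d))   # uniform starting point
    edges = set()
    for _ in range(int(u * N ** d)):
        i = rng.randrange(d)
        step = rng.choice((-1, 1))
        y = x[:i] + ((x[i] + step) % N,) + x[i + 1:]
        edges.add(frozenset((x, y)))                # undirected edge {x, y}
        x = y
    return edges
```

Every traversed edge joins two visited vertices, so this edge set is always contained in the edge set of the induced subgraph considered in the rest of the note.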