Exponential concentration of cover times

We prove an exponential concentration bound for cover times of general graphs in terms of the Gaussian free field, extending the work of Ding-Lee-Peres and Ding. The estimate is asymptotically sharp as the ratio of hitting time to cover time goes to zero. The bounds are obtained by showing a stochastic domination in the generalized second Ray-Knight theorem, which was shown to imply exponential concentration of cover times by Ding. This stochastic domination result appeared earlier in a preprint of Lupu, but the connection to cover times was not mentioned.


Introduction
Let G = (V, E) be an undirected graph, possibly with self-loops and multiple edges. For the continuous time simple random walk on G started at a given vertex v_0 ∈ V, define τ_cov to be the first time that all the vertices in V have been visited at least once. This quantity, known as the cover time, is of fundamental interest in the study of random walks.
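To make the definition concrete, here is a small Monte Carlo sketch (our own illustration, not part of the proofs; the function and graph are hypothetical) that samples τ_cov for the continuous time simple random walk on a 4-cycle, where the expected number of jumps needed to cover is n(n − 1)/2 = 6:

```python
import random

def cover_time_ctrw(adj, v0, rng):
    """One sample of tau_cov for the continuous-time simple random walk
    on an unweighted graph given by adjacency lists."""
    t = 0.0
    current = v0
    unvisited = set(adj) - {v0}
    while unvisited:
        t += rng.expovariate(1.0)           # unit exponential holding time
        current = rng.choice(adj[current])  # uniform jump to a neighbour
        unvisited.discard(current)
    return t

# 4-cycle on vertices 0..3
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(0)
samples = [cover_time_ctrw(adj, 0, rng) for _ in range(2000)]
mean_cov = sum(samples) / len(samples)
```

Since the holding times have mean 1, the empirical mean should be close to 6.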
Another fundamental object in the study of random walks on graphs is the Gaussian free field (GFF). For purposes of stating our main result, let us define the GFF {η_x}_{x∈V} on G with η_{v_0} = 0 to be the centered Gaussian process satisfying E(η_x − η_y)² = R_eff(x, y), where R_eff denotes effective resistance. More background is given in Section 2.
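As an illustration of this definition (a sketch we add for concreteness; the helper names are our own), one can compute R_eff from the pseudoinverse of the graph Laplacian and sample the GFF using the inverse of the Laplacian with the row and column of v_0 removed as covariance:

```python
import numpy as np

def effective_resistance(W):
    """All-pairs effective resistances from the Laplacian pseudoinverse.
    W[x, y] is the conductance c_xy (symmetric, zero diagonal)."""
    L = np.diag(W.sum(axis=1)) - W
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

def sample_gff(W, v0, rng):
    """Sample {eta_x} with eta_{v0} = 0.  The covariance of the remaining
    coordinates is the inverse Dirichlet Laplacian, which gives
    E(eta_x - eta_y)^2 = R_eff(x, y)."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    keep = [i for i in range(n) if i != v0]
    cov = np.linalg.inv(L[np.ix_(keep, keep)])
    eta = np.zeros(n)
    eta[keep] = rng.multivariate_normal(np.zeros(n - 1), cov)
    return eta

# Triangle with unit conductances: R_eff between any two vertices is 2/3.
W = np.array([[0.0, 1, 1], [1, 0, 1], [1, 1, 0]])
R = effective_resistance(W)
eta = sample_gff(W, 0, np.random.default_rng(0))
```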
Our main result is the following concentration bound on the cover time in terms of the Gaussian free field.

Theorem 1.1. Let G = (V, E) be an undirected graph with a specified initial vertex v_0 ∈ V. Let {η_x}_{x∈V} be the Gaussian free field on G with η_{v_0} = 0. Define the quantities

M = E max_{x∈V} η_x   and   R = max_{x,y∈V} R_eff(x, y).

Then, there are universal constants c and C such that for the continuous time random walk started at v_0, we have

P( |τ_cov − |E|M²| ≥ |E|(√(λR) · M + λR) ) ≤ C e^{−cλ}

for any λ ≥ C.
Remark 1.2. Our result is most easily stated for a continuous time random walk, i.e. a random walk having the same jump probabilities as a simple random walk, but whose times between jumps are i.i.d. unit exponentials. However, note that if a continuous time random walk has run for time t, then the number of jumps it has made has Poisson distribution with mean t, which exhibits Gaussian concentration with fluctuations of order √t. Thus, Theorem 1.1 can be easily translated into a similar bound for discrete time random walks.

Remark 1.3. Note that the definition of M is given in terms of a starting vertex v_0, but it does not depend on v_0. Indeed, let v′_0 be another starting vertex. Then, {η_x − η_{v′_0}}_{x∈V} is a Gaussian free field on G vanishing at v′_0 with the same increment variances, and since E η_{v′_0} = 0, we have E max_{x∈V} (η_x − η_{v′_0}) = E max_{x∈V} η_x = M.

Remark 1.4. We actually show Theorem 1.1 in the slightly more general setting of electrical networks, which are introduced in Section 2.
We prove Theorem 1.1 following the approach first appearing in a paper of Ding, Lee, and Peres [8] and later refined by Ding [7]. Indeed, Ding observed that Theorem 1.1 is implied by a certain stochastic domination; in [7], the domination was proved for trees, but the general case was left as a conjecture ([7], Question 5.2). We establish Theorem 1.1 by proving the stochastic domination for general graphs.
In relation to these previous works, Theorem 1.1 extends Theorem 1.2 of [7], which gave the same concentration bound for trees. It also sharpens Theorem 1.1 of [8], where the equivalence of cover times and |E|M² (in the notation of Theorem 1.1) was proven up to a universal multiplicative constant. By "sharpen", we mean that we are able to remove the constant factor under the assumption √R ≪ M. We mention that this was done already for bounded-degree graphs in Theorem 1.1 of [7], albeit without exponential tail bounds.
The condition √R ≪ M is a relatively mild one. Indeed, define τ_hit(x, y) to be the time it takes for a random walk started at x to hit y, and define

t_hit = max_{x,y∈V} E τ_hit(x, y),   t_cov = max_{x∈V} E_x τ_cov,

where in the definition of t_cov, E_x denotes the expectation for the random walk started at x. The well-known commute time identity ([21], Proposition 10.6) states that

E τ_hit(x, y) + E τ_hit(y, x) = 2|E| · R_eff(x, y).
It follows that t_hit ≥ |E| · R.
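The commute time identity is easy to check numerically (a sketch of our own, with unit conductances so that the general weighted form reduces to 2|E| · R_eff). On the path 0 – 1 – 2, the commute time between the endpoints is 2 · 2 · 2 = 8:

```python
import numpy as np

def hitting_times(P, target):
    """Expected hitting times of `target` for the chain with transition
    matrix P, via first-step analysis: h = 1 + P h away from the target."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != target]
    A = np.eye(n - 1) - P[np.ix_(idx, idx)]
    h = np.zeros(n)
    h[idx] = np.linalg.solve(A, np.ones(n - 1))
    return h

# Path 0 - 1 - 2 (two edges, unit conductances): R_eff(0, 2) = 2, |E| = 2.
P = np.array([[0.0, 1, 0], [0.5, 0, 0.5], [0, 1, 0]])
h_to_2 = hitting_times(P, 2)
h_to_0 = hitting_times(P, 0)
commute_02 = h_to_2[0] + h_to_0[2]   # should equal 2 * |E| * R_eff(0, 2) = 8
```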
On the other hand, it was shown in [8] that |E| · M² is within a constant factor of t_cov. It follows that

R / M² ≤ C · t_hit / t_cov

for some constant C, so √R ≪ M holds whenever t_hit ≪ t_cov. We obtain the following corollary.
Corollary 1.5. We have

|t_cov − |E|M²| ≤ C √(t_hit · t_cov)

for a universal constant C.
Remark 1.6. There is a deterministic polynomial-time approximation scheme (PTAS) due to Meka [26] for computing the supremum of a Gaussian process. Applying this to the quantity M gives a PTAS for t cov when t hit ≪ t cov .
Conversely, it was shown by Aldous [1] that if t hit is of the same order as t cov , then the cover time cannot be concentrated about its expectation (see the introduction of [7] for a more detailed discussion).
The main tool in estimating cover times employed in [8] and [7] is the generalized second Ray-Knight theorem, which is an identity in law relating the Gaussian free field to the time spent at each vertex by a continuous time random walk. In fact, the upper bound on t cov in Corollary 1.5 was previously established as Theorem 1.4 of [8] (the same argument also proves the corresponding upper tail estimate in Theorem 1.1).
In [7], the matching lower bound was reduced to proving a certain stochastic domination in the generalized second Ray-Knight theorem. There, the stochastic domination was proven only for trees ([7], Theorem 2.3), but it was asked whether the same holds for general graphs ( [7], Question 5.2).
Indeed, in Section 3 we prove Theorem 3.1, which generalizes Theorem 2.3 of [7] to arbitrary graphs. This is accomplished by viewing the random walk as Brownian motion on a metric graph. After we wrote up an early draft of the proof, it was pointed out to us that this idea appeared previously in a recent preprint of Lupu [22], where it is used to prove essentially the same result ([22], Theorem 3). In that context, the idea was mainly used to study the percolation of loop clusters ([22], Theorems 1 and 2; see also subsequent work by Sznitman [31]). However, the application to cover times was not mentioned.
Even though Theorem 3.1 uses the same ideas as Theorem 3 of [22], we include a proof in order to establish the result in the language of our specific application. Additionally, our exposition is intended to be more accessible to audiences interested in cover times of random walks.

Related work on cover times
Cover times have been studied in many papers over the last few decades. We highlight several of them below; see also §1.1 of [8] for further background.
We first mention some results relating cover times and hitting times. Clearly, t_cov ≥ t_hit. A classical result of Matthews [25] is that on a graph of n vertices, t_cov ≤ t_hit(1 + log n). This was proved by a clever argument analogous to the analysis of the coupon collector's problem. Matthews also gave an expression for a lower bound, which was later shown by Kahn, Kim, Lovász, and Vu [14] to approximate the cover time to within a factor of (log log n)².
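For the complete graph K_n, covering reduces exactly to the coupon collector problem, so both sides of Matthews' inequality have closed forms; the following sketch (our own illustration) checks t_cov ≤ t_hit(1 + log n) numerically:

```python
import math

def harmonic(m):
    """H_m = 1 + 1/2 + ... + 1/m."""
    return sum(1.0 / k for k in range(1, m + 1))

# On K_n the walk jumps to a uniform vertex among the other n - 1, so
# t_hit = n - 1 and covering is coupon collecting: t_cov = (n - 1) H_{n-1}.
n = 50
t_hit = n - 1
t_cov = (n - 1) * harmonic(n - 1)
matthews_bound = t_hit * (1 + math.log(n))
```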
In [1], Aldous analyzed a generalization of the coupon collector's problem. As a consequence, he showed that τ_cov is concentrated around its expectation with high probability as t_hit/t_cov → 0. More precisely, for any ε > 0, there is a small enough δ > 0 so that P(|τ_cov/t_cov − 1| > ε) < ε whenever t_hit/t_cov < δ. This shows qualitatively the concentration of cover times.

On the other hand, cover times have also been estimated for many specific classes of graphs, including regular graphs [15], lattices [32], and bounded degree planar graphs [13], to name a few. Precise asymptotics are known for the two-dimensional discrete torus [6] and regular trees [2].
More recently, a breakthrough was made by Ding, Lee, and Peres [8] whereby the cover time was given (up to a constant factor) in terms of the Gaussian free field. Their result gives in some sense a quantitative estimate of the cover time that works for any graph. As touched upon earlier, Ding [7] later removed the constant factor for trees and bounded degree graphs. We complete the picture by extending this to general graphs.

Outline
The remaining sections are organized as follows. In Section 2, we establish notation and provide a brief review of electrical networks, local times, Gaussian free fields, and the generalized second Ray-Knight theorem. The notation mostly follows [7]. Section 3 is devoted to proving the aforementioned stochastic domination in the form of Theorem 3.1. This is very similar to Theorem 3 of [22]; nevertheless, we include a proof in the notation of our setting. In Section 4, we apply Theorem 3.1 to cover times to obtain Theorem 1.1. The final section contains acknowledgements.

Definitions and preliminaries
An electrical network G is a finite, undirected graph (V, E), allowing self-loops, together with positive weights on the edges called conductances. We use either c_xy or c_yx to denote the conductance of an edge (x, y), and for vertices x, y ∈ V that do not share an edge, we define c_xy = 0. It is convenient to define the quantity c_x = Σ_{y∈V} c_xy, which we refer to as the total conductance at x.
The name "electrical network" comes from the fact that G can be used to model an electric circuit, where each edge (x, y) corresponds to placing a resistor with resistance 1/c_xy between vertices x and y. For any x, y ∈ V, we can define the effective resistance R_eff(x, y) between x and y to be the physical resistance when a voltage is applied between x and y. Mathematically, this quantity can be defined as a certain minimum energy (see Chapter 9 of [21] for more background on effective resistance and electrical networks).
There is a canonical discrete time random walk on an electrical network defined by taking the transition probability from x to y to be c_xy/c_x. In the case where the non-zero conductances are all equal, this reduces to the simple random walk on the underlying graph.
We will also want to consider the continuous time random walk on an electrical network. This is a continuous time process {X t } t∈R + which can be sampled by having the same transition probabilities as the discrete time walk but introducing unit exponential waiting times between transitions. (Contrast this with the discrete time random walk, which we can think of as having waiting times that are deterministically equal to 1.) In what follows, unless otherwise specified, all the electrical networks we consider will have a distinguished vertex v 0 ∈ V , and all random walks will be assumed to start at v 0 .
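A minimal sketch of both walks (our own; the function names are hypothetical) on a weighted triangle, where from vertex 0 the jump probability to vertex 1 is c_01/c_0 = 2/3:

```python
import random
from collections import Counter

def discrete_step(c, x, rng):
    """One step of the discrete time walk: jump x -> y with prob c[x][y]/c_x."""
    nbrs = list(c[x])
    return rng.choices(nbrs, weights=[c[x][y] for y in nbrs])[0]

def ctrw_path(c, v0, horizon, rng):
    """Continuous time walk: same jump chain, unit exponential waiting times."""
    t, x, path = 0.0, v0, [(0.0, v0)]
    while True:
        t += rng.expovariate(1.0)
        if t > horizon:
            return path
        x = discrete_step(c, x, rng)
        path.append((t, x))

# Triangle with one doubled conductance: c_01 = 2, c_02 = c_12 = 1.
c = {0: {1: 2.0, 2: 1.0}, 1: {0: 2.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0}}
rng = random.Random(1)
path = ctrw_path(c, 0, 10.0, rng)
counts = Counter(discrete_step(c, 0, rng) for _ in range(3000))
```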

Local times
Let X = {X_t}_{t∈Z_+} be a discrete time random walk on an electrical network G. For each time t and vertex v, we define the quantity

L^X_t(v) = #{ 0 ≤ s ≤ t : X_s = v },

which counts the number of visits of X to v up to time t.
We also define a continuous analogue of L^X_t(v). Suppose now that X = {X_t}_{t∈R_+} is a continuous time random walk on G. For any time t ≥ 0 and vertex v ∈ V, we define the local time

L^X_t(v) = (1/c_v) ∫_0^t 1{X_s = v} ds.

Note the factor of 1/c_v; this is a convenient normalization for various formulas. When there is no risk of confusion about the random walk X, we will sometimes shorten the notation to L_t(v).
Clearly, the cover time is related to the local time; it is the first time that all local times are positive. For a continuous time random walk X, we have

τ_cov = inf{ t ≥ 0 : L_t(v) > 0 for all v ∈ V }.

We will also frequently consider the first time that v_0 accumulates a certain amount of local time. We give a formal definition for this stopping time. For a continuous time random walk X and any t > 0, define the inverse local time

τ+(t) = inf{ s ≥ 0 : L^X_s(v_0) ≥ t }.

It will always be clear what X is, so it is not included in the notation for sake of brevity.
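The following sketch (our own illustration; the function name is hypothetical) accumulates conductance-normalized local times for a continuous time walk and stops exactly at the inverse local time τ+(t) of v_0:

```python
import random

def local_times_until_tau_plus(c, v0, t_target, rng):
    """Run the continuous-time walk until v0 accumulates local time
    t_target, i.e. until tau^+(t_target); return all local times.
    Local time at v is (time spent at v) / c_v."""
    c_tot = {x: sum(c[x].values()) for x in c}
    L = {x: 0.0 for x in c}
    x = v0
    while True:
        hold = rng.expovariate(1.0)
        if x == v0 and L[v0] + hold / c_tot[v0] >= t_target:
            L[v0] = t_target      # tau^+(t_target) occurs during this hold
            return L
        L[x] += hold / c_tot[x]
        nbrs = list(c[x])
        x = rng.choices(nbrs, weights=[c[x][y] for y in nbrs])[0]

# Path 0 - 1 - 2 with unit conductances.
c = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}}
L = local_times_until_tau_plus(c, 0, 2.0, random.Random(0))
```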

Gaussian free fields
For an electrical network G = (V, E), the Gaussian free field η_S with boundary S ⊂ V is defined to be a random variable taking values in the set R^{V\S} of real-valued functions on V \ S. Its probability density at an element f ∈ R^{V\S} is proportional to

exp( −(1/2) Σ_{(x,y)∈E} c_xy (f(x) − f(y))² ),    (1)

where we define f(x) = 0 for each x ∈ S. For our purposes, Gaussian free fields will always have boundary S = {v_0}. Thus, if we refer to the Gaussian free field on some network, we will mean the one with this boundary, and we will drop the subscript S.
From (1) it is clear that η is a multidimensional Gaussian random variable. It is not too hard to calculate (e.g., Theorem 9.20 of [12]) that for all x, y ∈ V,

E(η_x − η_y)² = R_eff(x, y),

which confirms that our definition of the GFF is consistent with the one given in the introduction. Noting that η_{v_0} = 0, the above formula completely determines the covariances of η in terms of the effective resistances.
The Gaussian free field comes into the picture via a class of identities known as Isomorphism Theorems. The first such theorems were proved independently by Ray [28] and Knight [16] relating the local times of Brownian motion to a 2-dimensional Bessel process. More generally, it turns out that for any strongly symmetric Borel right process, there is an identity relating its local times to an associated Gaussian process.
Inspired by formulas of Symanzik [29] and Brydges, Fröhlich, and Spencer [4], Dynkin [9] gave the first isomorphism of this type to be expressed in terms of Gaussian free fields. Various related identities were subsequently discovered by Marcus and Rosen [23], Eisenbaum [10], Le Jan [18], Sznitman [30], and others. There is a nice version of the isomorphism in the case of continuous time random walks on finite electrical networks, first appearing in [11] (see also Theorem 8.2.2 of the book by Marcus and Rosen [24]).

Theorem 2.1 (Generalized Second Ray-Knight Theorem). Let G = (V, E) be an electrical network, with a given vertex v_0 ∈ V. Let X = {X_t}_{t≥0} be a continuous time random walk on G, and for any t > 0, define τ+(t) = inf{s ≥ 0 : L^X_s(v_0) ≥ t} to be the first time that v_0 accumulates local time t. Let η = {η_x}_{x∈V} be a Gaussian free field on G independent of X. Then, we have

{ L^X_{τ+(t)}(x) + (1/2) η_x² }_{x∈V}  (d)=  { (1/2) (η_x + √(2t))² }_{x∈V}.

For more background on isomorphism theorems, we refer the interested reader to [24]. See also [19] for information relating Gaussian free fields to loop measures.
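As a quick sanity check (our own remark, not needed in what follows), taking expectations of both sides of the identity in Theorem 2.1 and using that η_x is centered with E η_x² = R_eff(v_0, x) shows that the expected local time at every vertex at time τ+(t) is exactly t:

```latex
\mathbb{E}\big[L^X_{\tau^+(t)}(x)\big] + \tfrac{1}{2}\,\mathbb{E}\,\eta_x^2
  \;=\; \tfrac{1}{2}\,\mathbb{E}\big(\eta_x + \sqrt{2t}\big)^2
  \;=\; \tfrac{1}{2}\big(\mathbb{E}\,\eta_x^2 + 2t\big),
\qquad\text{so}\qquad
\mathbb{E}\big[L^X_{\tau^+(t)}(x)\big] \;=\; t .
```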

Random walks on paths and the first Ray-Knight theorem
The content of this subsection may appear somewhat unmotivated before reading Section 3. The reader may wish to first skim this subsection and revisit it when reading Section 3.2 where it is used.
We will need a few basic facts concerning the special case where the underlying graph of G is a path. In this setting, it is a classical theorem proved independently by Ray and Knight that the local times of a continuous time random walk can be related to Brownian motion.

Theorem 2.2 (First Ray-Knight Theorem). For any a > 0, let B_t be a standard one-dimensional Brownian motion started at B_0 = a, and let T = inf{t : B_t = 0}. Let {W_t}_{t≥0} be a standard two-dimensional Brownian motion. Then,

{ L^x_T }_{0≤x≤a}  (d)=  { (1/2) |W_x|² }_{0≤x≤a},

where L^x_T denotes the local time of the Brownian motion B at level x up to time T.
In Section 2.1, we did not define the local time of Brownian motion, which involves some minor technicalities due to the fact that it can only be defined as a density. For background on Brownian local times and Theorem 2.2, we refer the reader to [27]. However, we will only use a discretized version of Theorem 2.2, where we restrict our attention to a finite set of values for x. This is equivalent to replacing the Brownian motion B_t with a continuous time random walk on a path.

Corollary 2.3. Let G = (V, E) be an electrical network whose underlying graph is a path, with vertices labeled 0, 1, 2, . . . , N and conductances c_{k,k+1} between k and k + 1 for 0 ≤ k < N. Let X_t be a continuous time random walk on G started at X_0 = N, and let T = inf{t : X_t = 0}. Define a_k = Σ_{j=0}^{k−1} 1/c_{j,j+1}, and let {W_t}_{t≥0} be a standard two-dimensional Brownian motion. Then,

{ L^X_T(k) }_{0≤k≤N}  (d)=  { (1/2) |W_{a_k}|² }_{0≤k≤N}.

Proof. The equivalence to Theorem 2.2 can be seen as follows. For any x ∈ R, let B_t be a Brownian motion started at x and stopped upon hitting x − r or x + s. Then, the local time accumulated at x is distributed as an exponential random variable with mean rs/(r + s). When x = a_k, r = 1/c_{k,k−1}, and s = 1/c_{k,k+1}, this corresponds to an exponential jump time from the vertex k in G, scaled by a factor of 1/(c_{k,k−1} + c_{k,k+1}), which appears in the definition of L^X_T(k).
In light of Corollary 2.3, it is useful to know something about two-dimensional Brownian motion. For our purposes, we need the following estimate, which is a quantitative version of the standard fact that two-dimensional Brownian motion is not point-recurrent.
Lemma 2.4. Let W_t be a standard two-dimensional Brownian motion. For any ε ∈ (0, 1) and Proof. See Appendix.
Finally, the next lemma shows that certain conditioned random walks on paths are equivalent to random walks on a path with different conductances. Thus, the first Ray-Knight theorem may be applied in a conditional setting as well. This will be important when we study random walk transitions on general electrical networks.

Lemma 2.5. Consider an electrical network G = (V, E) whose underlying graph is a path, with vertices labeled 0, 1, 2, . . . , N + 1. Suppose that the conductances are c_{k,k+1} = 1 for 0 ≤ k < N and c_{N,N+1} = r. Let X = {X_t}_{t≥0} be a discrete time random walk on G started at N, and let τ be the first time that X hits 0 or N + 1. On the other hand, let G′ be a path on vertices 0, 1, 2, . . . , N with conductances c′_{k,k+1} = h(k)h(k + 1), where h(k) = (N − k + 1/r)/(N + 1/r) is the probability that X started at k hits 0 before N + 1, and let Y = {Y_t}_{t≥0} be a discrete time random walk on G′ started at N. Then, the paths of Y stopped upon hitting 0 have the same distribution as the paths of X conditioned on X_τ = 0.
Proof. This can be easily checked by calculating hitting probabilities, which can then be used to calculate transition probabilities for X t conditioned on X τ = 0. See Appendix.
Corollary 2.6. Let N, r, and G be as in Lemma 2.5, and suppose further that r < 1. Let X be a continuous time random walk on G, and let τ = inf{t ≥ 0 : X_t = 0 or N + 1}. Then, for any ε ∈ (0, 1) and β > 0, where α = rN, and C_α > 0 is a number depending only on α.
Remark 2.7. The statement of Corollary 2.6 takes this somewhat awkward form because it will be used for r on the order of 1/N.
Proof. By Lemma 2.5 (using the same notation), the paths of X conditioned on X_τ = 0 are distributed as a random walk on a path of N edges with conductances c′_{k,k+1} = h(k)h(k + 1). We then apply Corollary 2.3 to this walk, where W_t is a two-dimensional Brownian motion, and from the resulting equations, the following bounds are easy to verify for εN ≤ k < N.
It follows that for C α sufficiently large. In the second line, we have used the scale-invariance of Brownian motion, and the third line is an application of Lemma 2.4.

Stochastic domination in the generalized second Ray-Knight theorem
The goal of this section is to prove the following stochastic domination theorem, which is a variant of Theorem 3 in [22].

Theorem 3.1 (cf. [22], Theorem 3). Let τ+(t) and η be as in Theorem 2.1. Then, we have

{ (1/2) (η_x + √(2t))² }_{x∈V}  ⪰  { L^X_{τ+(t)}(x) }_{x∈V},

where ⪰ denotes stochastic domination.
Theorem 3.1 extends Theorem 2.3 from [7], which proves the result for trees. The approach in [7] uses a Markovian property of local times for trees which does not seem to extend to general electrical networks. We take a different approach of embedding the finite-dimensional Gaussian free field inside a larger infinite-dimensional Gaussian free field, which has desirable continuity properties that were not apparent in the finite-dimensional setting. As mentioned in the introduction, we discovered while writing up our results that this idea appeared earlier in [22].
Let us first give a heuristic description of the approach. Recall that the continuous time random walk on an electrical network makes jumps at exponentially distributed random intervals. An equivalent way of sampling the continuous time random walk is to perform a Brownian motion along the edges of the network. By this we mean that our discrete state space V is replaced by a larger state space Ṽ which includes not only the vertices in V but also each point along each edge of E (regarding the edges as line segments, so that Ṽ is topologically a simplicial 1-complex). The object Ṽ is known as a metric graph and arises in physics and chemistry (see e.g. §5 of [5]).

A Brownian motion on Ṽ is, informally, a continuous Markov process X̃ = {X̃(t)}_{t≥0} taking values in Ṽ that behaves like a one-dimensional Brownian motion along edges. The earliest rigorous development of this idea we could find was carried out by Baxter and Chacon [3]. See also [17] for a more recent treatment.

It turns out that the Gaussian free field η̃ on Ṽ (without defining this precisely) is almost surely continuous in the topology of Ṽ. We can also define a notion of local time L^X̃_t(v), and we can define the stopping time τ̃+(t) analogously to the discrete case. For convenience, let us write L̃_t for L^X̃_{τ̃+(t)}. With an appropriate normalization, the restrictions of η̃ and L̃_t to V ⊂ Ṽ have the same laws as the corresponding objects on the original network G = (V, E). The generalized second Ray-Knight theorem translates to

{ L̃_t(v) + (1/2) η̃_v² }_{v∈Ṽ}  (d)=  { (1/2) (η̃′_v + √(2t))² }_{v∈Ṽ},    (2)

where η̃′ is another copy of η̃, and the local times are normalized by a continuous analogue of the total conductance at a vertex.

Now, suppose that η̃ and η̃′ are coupled in a way so that the two sides in equation (2) are actually equal. Consider the function f : Ṽ → R given by f(v) = √(2t) + η̃′_v. We have that f(v_0) = √(2t) > 0, f is continuous, and if f(x) = 0, then L̃_t(x) = 0. It turns out that the set U = {v ∈ Ṽ : L̃_t(v) > 0} is connected, and clearly it includes v_0. It follows that f(x) > 0 for all x ∈ U, which is exactly the desired stochastic domination once we restrict to V ⊂ Ṽ.

The assertion that U is connected deserves some elaboration. It is intuitively clear that the closure of U should be connected, since any point v ∈ Ṽ which accumulates positive local time must have been visited along some connected path from v_0 to v. Thus, every non-trivial segment along this path should have also accumulated positive local time.

On the other hand, it is not immediately obvious why U itself is connected, since there might be local times of 0 at isolated points. However, we can see heuristically that this pathology doesn't occur by the first Ray-Knight theorem. Recall from Section 2.3 that the first Ray-Knight theorem equates the local times of a certain stopped Brownian motion to the distance of a planar Brownian motion from the origin. Because planar Brownian motion is not point-recurrent, the local times are all positive almost surely, and in particular, the set of points with 0 local time does not have isolated points.
To avoid technicalities, we will not actually use Brownian motion in our proof. Instead, we will use a discrete approximation of Brownian motion and pass to the limit. Arguments involving the continuity of Gaussian free fields and positivity of local times will be translated into corresponding quantitative estimates.

A discrete refinement of G
Recall our setting of an electrical network G = (V, E) with conductances {c xy : x, y ∈ V }. For each positive integer N > 1, we define a refinement G N = (V N , E N ) by replacing each edge (x, y) ∈ E with a length N path whose vertices we denote by {x = v xy,0 , v xy,1 , . . . , v xy,N = y}.
We thus have edges between v xy,i and v xy,i+1 for each 0 ≤ i < N . We will use v yx,i to denote the same vertex as v xy,N −i , and we will regard V as a subset of V N , so that a vertex x ∈ V will sometimes be considered as a vertex in V N .
We choose the conductances of G N so that the effective resistance between x, y ∈ V as vertices in G will be the same when they are considered as vertices in G N . In particular, we set the conductance between v xy,i and v xy,i+1 to be N c xy . Since the effective resistances are equivalent, G is in some sense a projection of G N . The following proposition makes this explicit.
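This choice of conductances can be checked numerically: N resistors of resistance 1/(N c_xy) in series have total resistance 1/c_xy. The sketch below (our own; the function names are hypothetical) builds G_N and compares effective resistances on V:

```python
import numpy as np

def effective_resistance(W):
    """All-pairs effective resistances via the Laplacian pseudoinverse."""
    L = np.diag(W.sum(axis=1)) - W
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

def refine(W, N):
    """Replace each edge (x, y) of conductance c by a path of N edges,
    each of conductance N*c; return the refined conductance matrix and
    the indices of the original vertices inside it."""
    n = W.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if W[i, j] > 0]
    m = n + len(edges) * (N - 1)
    WN = np.zeros((m, m))
    nxt = n
    for (i, j) in edges:
        chain = [i] + list(range(nxt, nxt + N - 1)) + [j]
        nxt += N - 1
        for a, b in zip(chain, chain[1:]):
            WN[a, b] = WN[b, a] = N * W[i, j]
    return WN, list(range(n))

# Triangle with unit conductances, refined with N = 4.
W = np.array([[0.0, 1, 1], [1, 0, 1], [1, 1, 0]])
WN, orig_vertices = refine(W, 4)
R = effective_resistance(W)
RN = effective_resistance(WN)
```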

Proposition 3.2.
Let η be the GFF on G, and let X be a continuous time random walk on G. Let η_N and X_N denote the corresponding objects for G_N. Then, for any t > 0 we have the following two identities in law:

{ η_{N,x} }_{x∈V}  (d)=  { η_x }_{x∈V}   and   { L^{X_N}_{τ+(t)}(x) }_{x∈V}  (d)=  { L^X_{τ+(t)}(x) }_{x∈V},

where in each case τ+(t) denotes the inverse local time at v_0 for the corresponding walk.
The identity between η N and η is immediate from the equivalence of effective resistances. The identity between local times then follows from Theorem 2.1. However, there is also a very direct way to see the equivalence of local times which we now describe.
If X_N(t) is a continuous time random walk on G_N started at v_0, then X_N(t) induces a random walk X^G_N(t) on G by only recording the time spent in V. More formally, define t_0 = 0, and for each i ≥ 0, define

t_{i+1} = inf{ t > t_i : X_N(t) ∈ V \ {X_N(t_i)} }.

Define also s_{i+1} to be the amount of time X_N spends at the vertex X_N(t_i) during [t_i, t_{i+1}). Then, consider the V-valued process X^G_N(t) which starts at v_0 and, for each i, jumps to X_N(t_{i+1}) at time Σ_{j=1}^{i+1} s_j. Note that if X_N(t_i) = x ∈ V, at the next jump X_N transitions to v_{xy,1} with probability c_xy/c_x for each y neighboring x in G. After that, X_N behaves like a simple random walk on Z started at 1 and stopped upon hitting either 0 (corresponding to v_{xy,0} = x) or N (corresponding to v_{xy,N} = y). Thus, with probability (N − 1)/N it returns to x, and with probability 1/N it hits y.
Consequently, between times t_i and t_{i+1}, the number of times X_N visits x is geometrically distributed with mean N, and so the accumulated local time s_i is exponentially distributed with mean N. Moreover, we see that X^G_N(t) has the same law as a continuous time random walk on G except that the waiting times between jumps are scaled by N. In particular, we have

{ L^{X_N}_{Nt}(x) }_{x∈V}  =  { (1/(N c_x)) ∫_0^{Nt} 1{X_N(s) = x} ds }_{x∈V}  (d)=  { L^X_t(x) }_{x∈V},

where X is a continuous time random walk on G. Note that the factor of N appearing in the middle expression comes from the normalization by the total conductance at x, which differs for G and G_N (it is N c_x in G_N).
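The 1/N escape probability used above is the classical gambler's ruin computation; the sketch below (our own illustration) recovers it by solving the harmonic equations p(k) = (p(k−1) + p(k+1))/2 with p(0) = 0, p(N) = 1:

```python
import numpy as np

def prob_hit_N_first(N):
    """P(SRW on {0,...,N} started at 1 hits N before 0), by first-step
    analysis; the martingale argument predicts exactly 1/N."""
    A = np.zeros((N - 1, N - 1))
    b = np.zeros(N - 1)
    for k in range(1, N):
        A[k - 1, k - 1] = 1.0
        if k - 1 >= 1:
            A[k - 1, k - 2] = -0.5     # coefficient of p(k-1)
        if k + 1 <= N - 1:
            A[k - 1, k] = -0.5         # coefficient of p(k+1)
        if k + 1 == N:
            b[k - 1] = 0.5             # boundary value p(N) = 1
    return float(np.linalg.solve(A, b)[0])

p10 = prob_hit_N_first(10)
```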

Local times of G N
We will need two estimates concerning local times on G N , stated as Lemmas 3.4 and 3.6 below. These correspond to our assertion that the set U is connected in the heuristic proof outline provided at the beginning of the section.
In the lemmas that follow, we consider a continuous time random walk X N (t) on G N started at a vertex x ∈ V . Let τ x denote the first time the walk hits another vertex y ∈ V distinct from x. The first estimate states, roughly, that it is very likely for vertices near x to accumulate significant local time.
We will need a standard concentration estimate for sums of i.i.d. exponential random variables. Unfortunately, we were unable to find a reference that contained both tail bounds, so a short proof is included in the appendix.

Lemma 3.3. Let X_1, X_2, . . . , X_N be i.i.d. exponential random variables with mean μ. Then, for any α ∈ [0, 1], we have

P( |X_1 + · · · + X_N − Nμ| ≥ αNμ ) ≤ 2e^{−cα²N}

for a universal constant c > 0.

Proof. See Appendix.
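One standard route to such a bound (our own sketch; the constants here need not match those in the appendix) is the Chernoff method applied to the moment generating function E e^{θX_i} = (1 − θμ)^{−1}, valid for θ < 1/μ:

```latex
% Upper tail of S = X_1 + \dots + X_N, optimizing at \theta\mu = \alpha/(1+\alpha):
\mathbb{P}\big(S \ge (1+\alpha)N\mu\big)
  \le e^{-\theta(1+\alpha)N\mu}\,(1-\theta\mu)^{-N}
  = e^{-N(\alpha - \log(1+\alpha))}
  \le e^{-N\alpha^2/6} \quad (0 \le \alpha \le 1).
% Lower tail, taking \theta < 0 and optimizing similarly:
\mathbb{P}\big(S \le (1-\alpha)N\mu\big)
  \le e^{-N(-\alpha - \log(1-\alpha))}
  \le e^{-N\alpha^2/2}.
% Combining the two tails gives a bound of the form 2e^{-cN\alpha^2}.
```

The elementary inequalities α − log(1 + α) ≥ α²/6 and −α − log(1 − α) ≥ α²/2 on [0, 1] supply the quadratic exponents.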
Lemma 3.4. Let y ∈ V be any neighbor of x in G, let ε ∈ (0, 1/2) and λ > 0 be given, and define k = ⌊εN⌋. Then, for some constant C_G depending on G but not on N.
Proof. Recall the notation L τx (x) for the number of visits to x up until time τ x , and recall also from Section 3.1 that L τx (x) is distributed as a geometric random variable with mean N . Conditioning on L τx (x), we may decompose the walk up until time τ x into L τx (x) excursions from x and a path to a neighbor of x in G. Each excursion may be sampled independently.
Let us now consider one excursion. The first step of the excursion goes to some vertex v xz,1 , where z is a neighbor of x in G. As noted earlier, from there the walk behaves like a simple random walk on Z started at 1, stopped upon hitting 0 (corresponding to the return to x), and conditioned on hitting 0 before N (corresponding to avoiding z).
Let E_m denote the event that a simple random walk on Z started at 1 hits m before 0. By a standard martingale argument, we have P(E_m) = 1/m. Thus, conditioned on returning to x before hitting z, an excursion whose first step is to v_{xz,1} reaches v_{xz,k} with probability (1/k) · (N − k)/(N − 1) ≥ 1/(2k), since k ≤ N/2. In particular, this implies that for each excursion, there is a c_xy/c_x probability that the first step is to v_{xy,1}, and with probability at least 1/(2k) the excursion will then hit v_{xy,k}. In other words, letting p be the probability that a single excursion includes v_{xy,k}, we have p ≥ c_xy/(2k c_x). Let L denote the number of excursions which hit v_{xy,k}. By the preceding discussion, it is the sum of L_{τ_x}(x) i.i.d. Bernoulli random variables with expectation p. Since L_{τ_x}(x) is geometrically distributed with mean N, it follows that L is geometrically distributed with mean pN. We thus have Note that for each i ∈ {0, 1, 2, . . . , k}, the vertex v_{xy,i} is visited at least L times, and the total conductance of v_{xy,i} is at most N c_x. Thus, L_{τ_x}(v_{xy,i}) stochastically dominates 1/(N c_x) times the sum of L i.i.d. unit exponentials. By Lemma 3.3 with α = 1/2, we have Combining this with equation (3) gives which takes the desired form for C_G sufficiently large.
Proof. This follows immediately from Lemma 3.4 by taking λ = (log² N)/N and ε = 1/log³ N.
The second estimate states that, conditioned upon X_N(τ_x) = y, it is very likely that the vertices v_{xy,k} are visited a large number of times, as long as k is not too close to N. This essentially follows from Corollary 2.6 from Section 2.3.

Lemma 3.6. Let y be a neighbor of x in G. Then, for any ε, λ ∈ (0, 1), we have for some constant C_G depending on G but not on N.
Proof. Let S = {z ∈ V : (x, z) ∈ E}. Note that the process X_N up to time τ_x induces a continuous time random walk Y = {Y_t}_{t≥0} on the vertices {v_{xy,0}, v_{xy,1}, . . . , v_{xy,N}} ∪ S by ignoring visits to vertices outside of that set (namely, those of the form v_{xz,k} for z ≠ y and 1 ≤ k < N). We can define a stopping time T_x analogous to τ_x as the first time Y hits S.
For convenience, define p_xz = c_xz/c_x for each z ∈ S. Note that

P(X_N hits v_{xy,1} before hitting S or returning to x) = p_xy,
P(X_N hits S before hitting v_{xy,1} or returning to x) = (1 − p_xy)/N.
Thus, we can interpret Y up to time T_x as a continuous time random walk on a path with vertices (w_0, w_1, w_2, . . . , w_{N+1}), where all the conductances are 1 except that the conductance between w_N and w_{N+1} is (1 − p_xy)/(N p_xy). Here, w_k corresponds to v_{yx,k} (so Y is started at w_N), and w_{N+1} corresponds to any vertex in S \ {y} (we may combine all of these states because Y is stopped upon hitting this set anyway).
We are now in the setting of Corollary 2.6, as conditioning on Y_{T_x} = y corresponds to conditioning on hitting w_0 before w_{N+1}. Following the notation of Corollary 2.6, we have r = (1 − p_xy)/(N p_xy), so that α = (1 − p_xy)/p_xy. We apply the corollary with β = λ c_xy. Note that the total conductances at the v_{yx,k} are 2N c_xy, as opposed to 2 in the statement of Corollary 2.6, so the local times will be scaled accordingly. It follows that whenever C_G > max(C_α, C_α + log c_xy). In particular, since there are only finitely many possible values of p_xy and hence of α, we can choose C_G sufficiently large so that this holds independently of N. This proves the lemma.
Proof. We apply Lemma 3.6 with ε = 1/log³ N and λ = (log² N)/N. It suffices to show that both terms on the right hand side tend to zero. Clearly, To bound the other term, note that for sufficiently large N, we have in which case

Proof of Theorem 3.1
We now prove Theorem 3.1, following the plan outlined at the beginning of the section. Let us first prove an approximation of Theorem 3.1.
Lemma 3.8. Let t > 0 be given. Let Ω_N be a probability space with random variables η_N, η′_N, and X_N = {X_N(t)}_{t≥0} such that η_N and η′_N are distributed as Gaussian free fields on G_N, and X_N is distributed as a continuous time random walk on G_N. Furthermore, suppose that η_N and X_N are independent, and almost surely for each v ∈ V_N,

L^{X_N}_{τ+(t)}(v) + (1/2) η²_{N,v} = (1/2) (η′_{N,v} + √(2t))².

(Theorem 2.1 ensures that such a construction is always possible.) Then, for any ε > 0, we have

P( ∃ x ∈ V : L^{X_N}_{τ+(t)}(x) > 0 and √(2t) + η′_{N,x} < 0 ) ≤ ε

for all sufficiently large N.

Remark 3.9. Note that the hypothesis of Lemma 3.8 implies for each x ∈ V that

√(2t) + η′_{N,x} = ± ( 2 L^{X_N}_{τ+(t)}(x) + η²_{N,x} )^{1/2}.

Consequently, the conclusion of the lemma may be expressed equivalently as

P( ∃ x ∈ V : L^{X_N}_{τ+(t)}(x) > 0 and √(2t) + η′_{N,x} = −( 2 L^{X_N}_{τ+(t)}(x) + η²_{N,x} )^{1/2} ) ≤ ε.

Proof. To shorten notation, we use τ+ to denote τ+(t).
Call a vertex x ∈ V well-connected at time s if there exists a sequence of vertices v_0 = w_0, w_1, . . . , w_n = x in V_N such that (w_i, w_{i+1}) ∈ E_N and L^{X_N}_s(w_i) ≥ (log² N)/N for each i. We will show that with high probability, every vertex in V with positive local time at time τ+ is well-connected.
Recall from the discussion in Section 3.1 that X_N induces a random walk on G which, when regarded as a sequence of visited vertices (disregarding holding times), has the same law as a discrete time random walk on G. Thus, one way of sampling X_N is to first sample a path P = (x_0, x_1, x_2, . . .) of the discrete time random walk on G started at x_0 = v_0. Then, we construct X_N as follows. For each i ≥ 0, let Y_i(t) be a continuous time random walk on G_N started at x_i, and let τ_i be the first time that Y_i hits a neighbor of x_i in G.
Let $Z_i$ have the law of a copy of $Y_i$ conditioned on the event $Y_i(\tau_i) = x_{i+1}$. Then, we may form $X_N$ by concatenating the walks $Z_i$ up to time $\tau_i$. More formally, we may define $n(s) = \max\{ n \ge 1 : \cdots \}$.

To lighten notation, let us write $L_i = L^{Y_i}_{\tau_i}$ and $P_i(\cdot) = P(\cdot \mid Y_i(\tau_i) = x_{i+1})$, noting that the $Y_i$ are independent. Let $P(s) = (x_1, x_2, \ldots, x_{n(s)})$ denote the truncation of $P$ up until time $s$. We will say that $P(s)$ is well-connected if each $x_i$ appearing in $P(s)$ is well-connected at time $s$. Then,

Fix a number $T$ sufficiently large so that $P\big( |P(\tau^+)| > T \big) \le \frac{\epsilon}{4}$. Again, by the discussion of Section 3.1, the law of $P(\tau^+)$ does not depend on $N$, so the number $T$ can be chosen independently of $N$. Note that by Corollaries 3.5 and 3.7, each summand in either sum of the last expression of (4) is bounded by $\frac{\epsilon}{8T}$ for sufficiently large $N$. Consequently, for sufficiently large $N$, the whole expression is bounded by $2|P(\tau^+)| \cdot \frac{\epsilon}{8T}$, and we have

Note that almost surely, the vertices $x \in V$ for which $L_{\tau^+}(x) > 0$ are exactly those appearing in $P(\tau^+)$. Thus, we have

We next show that with high probability, the values of $\eta'_N$ at adjacent vertices do not differ by very much. Consider any $(x, y) \in E$ and $0 \le k < N$. For notational convenience, let $u = v_{xy,k}$ and $w = v_{xy,k+1}$. We have

Since $\eta'_{N,u} - \eta'_{N,w}$ has a Gaussian distribution, it follows that

Taking a union bound over all adjacent pairs $(u, w) \in E_N$, we obtain the desired bound for $N$ sufficiently large.
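The objects manipulated in this proof, namely the continuous time walk, its local times, and the inverse local time $\tau^+(t)$ at $v_0$ (our reading of the notation, as in the generalized second Ray-Knight theorem), can be simulated directly. A minimal sketch on a hypothetical triangle graph, using raw occupation time as the local time (normalization conventions vary):

```python
import random

random.seed(0)

# Hypothetical toy graph: a triangle with unit conductances.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
v0, t = 0, 2.0  # run until the local time at v0 reaches t

# Continuous-time simple random walk: unit-rate exponential holding
# times, jumps chosen uniformly among neighbors.  Here the local time
# at a vertex is its total occupation time.
local_time = {v: 0.0 for v in neighbors}
x = v0
while local_time[v0] < t:
    hold = random.expovariate(1.0)
    if x == v0:
        # Do not overshoot: stopping exactly when the local time at v0
        # reaches t realizes the inverse local time tau^+(t).
        hold = min(hold, t - local_time[v0])
    local_time[x] += hold
    if local_time[v0] >= t:
        break
    x = random.choice(neighbors[x])

# At the stopping time, the local time at v0 equals t by construction.
assert abs(local_time[v0] - t) < 1e-9
print(local_time)
```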
Finally, we may combine equations (5) and (6) to deduce the lemma. Indeed, suppose that for some $x \in V$, we have $L^{X_N}_{\tau^+}(x) > 0$ but $\sqrt{2t} + \eta'_{N,x} < 0$. If $x$ is well-connected at time $\tau^+$, which occurs with high probability by (5), then there exists a path $v_0 = w_0, w_1, \ldots, w_n = x$ as in the definition of well-connectedness. However, we also have

Therefore, this can only happen if

But by equation (6), this is unlikely. Thus, we have

proving the lemma.
Theorem 3.1 is now an easy consequence of Lemma 3.8.
Proof of Theorem 3.1. Let $A \subset \mathbb{R}^V$ be any monotone set. Let $\epsilon > 0$ be given, and take $N$ sufficiently large so that the conclusion of Lemma 3.8 holds.
Let $\eta_N$ be the Gaussian free field on $G_N$, and let $X_N$ be a continuous time random walk independent of $\eta_N$. We now define another Gaussian free field $\eta'_N$ on the same probability space so as to satisfy the hypotheses of Lemma 3.8. In fact, by the isomorphism theorem, $\eta'_N$ can be given in terms of $\eta_N$ and the local times, up to a choice of sign in taking the square root.
To determine the signs, we can artificially introduce some additional randomness. Fix an arbitrary ordering on $V_N$. Let $U$ be uniformly distributed on $[0, 1]$ and independent of $\eta_N$ and $X_N$. For any $u \in [0, 1]$ and $Z \in \mathbb{R}^{V_N}$, we may define

We can then define

We are now in the setting of Lemma 3.8, which gives

or equivalently (by Remark 3.9),

Now, let $\eta$ and $X$ be the GFF and a continuous time random walk on $G$, respectively. By the relationship between $G_N$ and $G$ described in Proposition 3.2, we have

This holds for each $\epsilon > 0$, so taking $\epsilon \to 0$, we obtain

which proves the stochastic domination.

Application to cover times

First, we record two auxiliary results used in [7]. Recall the notation $M = E \max_{x \in V} \eta_x$ for the Gaussian free field $\eta$, and $R = \max_{x, y \in V} E(\eta_x - \eta_y)^2$.

Lemma 4.1 (Lemma 2.1 of [7]). Let $X$ be a continuous time random walk on an electrical network $G = (V, E)$. Let $c_{\mathrm{tot}} = \sum_{x, y \in V} c_{xy}$ be the total conductance of $G$. For any $t \ge 0$ and $\lambda \ge 1$,

Proof. See Lemma 2.1 of [7] and the associated remark. We have replaced $2|E|$ by $c_{\mathrm{tot}}$.
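For concreteness, the quantity $R$ and the GFF covariances can be computed exactly on small networks. The following sketch (illustrative Python; the unit-conductance 4-cycle is a hypothetical example, not taken from the paper) verifies the identity $E(\eta_x - \eta_y)^2 = R_{\mathrm{eff}}(x, y)$ by comparing the Green function of the walk killed at $v_0$ with the Laplacian pseudoinverse.

```python
import numpy as np

# Toy network: a 4-cycle with unit conductances (hypothetical example).
# Vertices 0..3, pinned at v0 = 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Graph Laplacian L = D - A.
L = np.zeros((n, n))
for x, y in edges:
    L[x, x] += 1; L[y, y] += 1
    L[x, y] -= 1; L[y, x] -= 1

# Effective resistance via the Moore-Penrose pseudoinverse:
# R_eff(x, y) = (e_x - e_y)^T L^+ (e_x - e_y).
Lp = np.linalg.pinv(L)
def r_eff(x, y):
    e = np.zeros(n)
    e[x] += 1.0
    e[y] -= 1.0
    return float(e @ Lp @ e)

# GFF pinned at v0 = 0: its covariance is the Green function of the
# walk killed at v0, i.e. the inverse of L with row/column 0 removed.
G = np.linalg.inv(L[1:, 1:])
Sigma = np.zeros((n, n))
Sigma[1:, 1:] = G  # eta_{v0} = 0, so its row and column vanish

# Check E(eta_x - eta_y)^2 = Sigma_xx + Sigma_yy - 2 Sigma_xy = R_eff(x, y).
for x in range(n):
    for y in range(n):
        gap = Sigma[x, x] + Sigma[y, y] - 2 * Sigma[x, y] - r_eff(x, y)
        assert abs(gap) < 1e-9

R = max(r_eff(x, y) for x in range(n) for y in range(n))
print(R)  # for the unit 4-cycle, the maximal effective resistance is 1
```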
The next result is a well-known Gaussian concentration bound. See for example Theorem 7.1, Equation (7.4) of [20].
Proposition 4.2. Let $\{\eta_x : x \in S\}$ be a centered Gaussian process on a finite set $S$, and suppose $E\eta_x^2 \le \sigma^2$ for all $x \in S$. Then, for $\alpha > 0$,

Note that by symmetry, max may be replaced by min in Proposition 4.2, which is the version that we will use. We now give a proof of Theorem 1.1, closely following the proof of Theorem 1.2 in [7].
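For the reader's convenience, the bound in question is the Borell-TIS inequality; in the notation of the proposition it reads (a standard formulation, consistent with the reference [20]):

```latex
% Borell-TIS concentration for the maximum of a Gaussian process:
% if \sup_{x \in S} \mathbb{E}\,\eta_x^2 \le \sigma^2, then for all \alpha > 0,
\[
  \mathbb{P}\Big( \Big| \max_{x \in S} \eta_x - \mathbb{E}\max_{x \in S} \eta_x \Big| \ge \alpha \Big)
  \;\le\; 2 \exp\!\Big( -\frac{\alpha^2}{2\sigma^2} \Big).
\]
% Replacing \eta by -\eta gives the same bound for \min_{x \in S} \eta_x,
% which is the version used below.
```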
Proof of Theorem 1.1. We will prove Theorem 1.1 in the slightly more general setting where $G = (V, E)$ is an electrical network. As before, define $c_{\mathrm{tot}} = \sum_{x, y \in V} c_{xy}$.
We first estimate $\tau_{\mathrm{cov}}$ in terms of $\tau^+$. Let $\beta \ge 3$ be a parameter to be specified later. In what follows, we will often use the fact that

To prove an upper bound, let $t_+ = \frac{(M + \beta\sqrt{R})^2}{2}$, and define the event

where $\eta$ is an independent copy of the Gaussian free field as in Theorem 2.1. We also have by Proposition 4.2 that

so that in light of the isomorphism theorem (Theorem 2.1),

Suppose now that $\tau_{\mathrm{cov}} > \tau^+(t_+)$. Then, $L_{\tau^+(t_+)}(x) = 0$ for some $x \in V$. Since

and $\eta$ is independent of the random walk, it follows that

Combining equations (7) and (8), we conclude that

For the lower bound, let $t_- = \frac{(M - \beta\sqrt{R})^2}{2}$. By Theorem 3.1, we have

where the last inequality follows again from Proposition 4.2.
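To connect the choice of $t_\pm$ with the fluctuation window in Theorem 1.1, it may help to expand the square (taking $\beta$ of order $\sqrt{\lambda}$, which is consistent with the statement of the theorem):

```latex
\[
  t_{\pm} \;=\; \frac{\big( M \pm \beta\sqrt{R} \big)^2}{2}
  \;=\; \frac{M^2}{2} \;\pm\; \beta \sqrt{R}\, M \;+\; \frac{\beta^2 R}{2},
\]
% so with \beta \asymp \sqrt{\lambda}, the deviation of t_\pm from M^2/2 is of
% order \sqrt{\lambda R}\, M + \lambda R, matching the window
% |E| ( \sqrt{\lambda R}\, M + \lambda R ) in Theorem 1.1 after multiplying
% by |E| (or by c_{tot} for a general electrical network).
```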

Acknowledgements
We are greatly indebted to Jian Ding for suggesting the problem and for valuable discussions. We also thank Amir Dembo, James Lee, and Yuval Peres for very helpful feedback and advice at various stages.

To break up the proof, we first establish a lemma.
Proof of Lemma 2.4. Define $\lambda' = \lambda / \log \epsilon^{-1}$. Recall that the probability density of the standard two-dimensional Gaussian is bounded above by $\frac{1}{2\pi}$, and so the probability density of $W_\epsilon$ is bounded above by $\frac{1}{2\pi\epsilon}$. Thus,

We now apply Lemma 6.1 with $y = W_\epsilon$, $r = \sqrt{\lambda}$, and taking $\alpha = \sqrt{\lambda'}$ in the infimum. This gives

This, along with the previous inequality, completes the proof.
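The density bounds used at the start of the proof follow from scaling; assuming $W_\epsilon$ denotes a centered two-dimensional Gaussian with covariance $\epsilon I_2$ (our reading of the notation):

```latex
\[
  \sup_{z \in \mathbb{R}^2} \frac{1}{2\pi}\, e^{-|z|^2/2} \;=\; \frac{1}{2\pi},
  \qquad
  \sup_{z \in \mathbb{R}^2} \frac{1}{2\pi\epsilon}\, e^{-|z|^2/(2\epsilon)} \;=\; \frac{1}{2\pi\epsilon},
\]
% the first being the standard two-dimensional Gaussian density and the
% second the density of a centered Gaussian with covariance \epsilon I_2,
% each maximized at z = 0.
```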

Proof of Lemma 2.5
Proof. Define

Note that $f(X)$ is a martingale. Thus, for a walk started at $k$, the probability of hitting $0$ before $N + 1$ is

It follows that for $1 \le k < N$, one can compute $P\big( X_{t+1} = k + 1 \mid X_t = k,\ X_\tau = 0 \big)$ explicitly. Thus, the transition probabilities of $X$ conditioned on $X_\tau = 0$ are exactly the unconditioned transition probabilities of $Y$. Consequently, their paths have the same distribution.
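The computation here is the classical gambler's ruin combined with a Doob $h$-transform. Assuming $X$ is the simple symmetric walk on $\{0, 1, \ldots, N+1\}$ (our reading of the setup), the following sketch solves $h(k) = P_k(\text{hit } 0 \text{ before } N+1)$ as a linear system and checks both the closed form and the conditioned transition probabilities:

```python
import numpy as np

N = 5  # interior states 1..N; absorption at 0 and N+1 (toy size)

# h(k) = P_k(hit 0 before N+1) solves h(k) = (h(k-1) + h(k+1)) / 2
# for 1 <= k <= N, with boundary values h(0) = 1 and h(N+1) = 0.
A = np.zeros((N, N))
b = np.zeros(N)
for k in range(1, N + 1):
    A[k - 1, k - 1] = 1.0
    if k - 1 >= 1:
        A[k - 1, k - 2] = -0.5
    else:
        b[k - 1] += 0.5  # contribution of the boundary value h(0) = 1
    if k + 1 <= N:
        A[k - 1, k] = -0.5
h_interior = np.linalg.solve(A, b)
h = np.concatenate(([1.0], h_interior, [0.0]))

# Closed form (gambler's ruin): h(k) = (N + 1 - k) / (N + 1).
for k in range(N + 2):
    assert abs(h[k] - (N + 1 - k) / (N + 1)) < 1e-9

# Doob h-transform: conditioning on hitting 0 before N+1 replaces the
# transition probability p(k, k+1) = 1/2 by (1/2) * h(k+1) / h(k).
for k in range(1, N + 1):
    p_up = 0.5 * h[k + 1] / h[k]
    assert abs(p_up - (N - k) / (2 * (N + 1 - k))) < 1e-9
```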
and so

By Markov's inequality with $t = \frac{\alpha}{2\mu}$ and $t = -\frac{\alpha}{2\mu}$, we obtain
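The Markov-inequality step is the standard exponential Chernoff computation; assuming a moment bound of the form $E e^{tZ} \le e^{\mu t^2}$ for the relevant centered variable $Z$ (our reading of the preceding display), the stated choices of $t$ give:

```latex
\[
  \mathbb{P}(Z \ge \alpha)
  \;\le\; e^{-t\alpha}\, \mathbb{E}\, e^{tZ}
  \;\le\; \exp\!\big( \mu t^2 - t\alpha \big)
  \;=\; \exp\!\Big( -\frac{\alpha^2}{4\mu} \Big)
  \quad \text{at } t = \frac{\alpha}{2\mu},
\]
% while t = -\alpha/(2\mu) applied to -Z gives the matching bound for
% \mathbb{P}(Z \le -\alpha); together,
% \mathbb{P}(|Z| \ge \alpha) \le 2 \exp(-\alpha^2 / (4\mu)).
```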