Random walks colliding before getting trapped

Let $P$ be the transition matrix of a finite, irreducible and reversible Markov chain. We say the continuous time Markov chain $X$ has transition matrix $P$ and speed $\lambda$ if it jumps at rate $\lambda$ according to the matrix $P$. Fix $\lambda_X,\lambda_Y,\lambda_Z\geq 0$ and let $X,Y$ and $Z$ be independent Markov chains with transition matrix $P$ and speeds $\lambda_X,\lambda_Y$ and $\lambda_Z$ respectively, all started from the stationary distribution. What is the chance that $X$ and $Y$ meet before either of them collides with $Z$? For each choice of $\lambda_X,\lambda_Y$ and $\lambda_Z$ with $\max(\lambda_X,\lambda_Y)>0$, we prove a lower bound for this probability which is uniform over all transitive, irreducible and reversible chains. In the case that $\lambda_X=\lambda_Y=1$ and $\lambda_Z=0$ we prove a strengthening of our main theorem using a martingale argument. We provide an example showing the transitivity assumption cannot be removed for general $\lambda_X,\lambda_Y$ and $\lambda_Z$.


Introduction
Consider three independent random walks X, Y, Z over the same finite connected graph. What is the probability that X, Y meet at the same vertex before either of them meets Z? If the initial distributions of the three walkers are the same, this probability is at least 1/3 by symmetry, at least if we assume that ties (i.e. triple meetings) are broken symmetrically. Now consider a similar problem where the initial states $X_0$, $Y_0$, $Z_0$ are all sampled independently from the same distribution, but Z stays put while X and Y move. What is the probability that X and Y meet before hitting Z?
There are several examples of bounds [1,4,5] relating the meeting time of two random walks to the hitting time of a fixed vertex by a single random walk. These typically provide upper bounds for meeting times in terms of worst-case or average hitting times, sometimes up to constant factors. In light of this, it seems natural to conjecture that the probability in question is at least 1/3. However, the previous argument by symmetry fails. In fact, to the best of our knowledge, no universal lower bound for this probability is known.
It will be convenient to consider the problem in continuous time. For the remainder of the paper let P be the transition matrix of an irreducible and reversible Markov chain on a finite state space with stationary distribution π. Let X and Y be two independent continuous time Markov chains that jump at rate 1 according to the transition matrix P and let Z ∼ π be independent of X and Y .
We define $M_{X,Y}$ to be the first time X and Y meet, i.e. $M_{X,Y} = \inf\{t \geq 0 : X_t = Y_t\}$.
We define $M_{X,Z}$ and $M_{Y,Z}$ analogously, and we write $M_{\mathrm{good}} = M_{X,Y}$ and $M_{\mathrm{bad}} = M_{X,Z} \wedge M_{Y,Z}$.

Main results
Our first result proves a universal lower bound on the probability $\mathbb{P}\left(M_{\mathrm{good}} < M_{\mathrm{bad}}\right)$ for the class of transitive chains. First we recall the definition.
Definition 1.1. Fix a chain with transition matrix P and state space E. An automorphism of P is a bijection ϕ : E → E such that P (z, w) = P (ϕ(z), ϕ(w)) for all z, w. The chain P is transitive if for all x, y ∈ E there exists an automorphism ϕ of P with ϕ(x) = y.
Theorem 1.2. Let P be the transition matrix of a finite irreducible and reversible chain with two or more states. Assume $X_0$ and $Y_0$ are independent with law π. If P is transitive, then
$$\mathbb{P}\left(M_{\mathrm{good}} < M_{\mathrm{bad}}\right) \geq \frac{1}{4}.$$
Next we consider a more general setup. We say that a random walk W has speed $\lambda_W$ and transition matrix P if it jumps at rate $\lambda_W$ according to the matrix P.
Suppose again that P is the transition matrix of an irreducible and reversible Markov chain on a finite state space with stationary distribution π. Let $\lambda_X = 1$, $0 \leq \lambda_Y \leq 1$ and $0 \leq \lambda_Z < \infty$. Let X, Y and Z be three independent continuous time Markov chains with speeds $\lambda_X$, $\lambda_Y$ and $\lambda_Z$ respectively and transition matrix P.
For the remainder of the paper, we write P for the probability measure under which $X_0$, $Y_0$ and $Z_0$ are independent with law π. We also write $\mathbb{P}_{a,b,c}$ in the case when $(X_0, Y_0, Z_0) = (a, b, c)$. For computations that only involve two chains we drop one index, writing only $\mathbb{P}_{a,b}$; which two chains are involved will always be clear from context. Likewise, we write $\mathbb{P}_a$ when only one chain is involved. We define $M_{X,Y}$ as above, and $M_{\mathrm{good}} = M_{X,Y}$ and $M_{\mathrm{bad}} = M_{X,Z} \wedge M_{Y,Z}$ are defined as before. Again we are interested in uniform lower bounds on the probability of the event $\{M_{\mathrm{good}} < M_{\mathrm{bad}}\}$ that have good dependence on the three speeds.

Theorem 1.3. There exists $c > 0$ such that the following holds. Let P be the transition matrix of a transitive and reversible chain with stationary distribution π and at least two states. Suppose that X, Y and Z are three independent continuous time Markov chains with speeds $\lambda_X = 1$, $\lambda_Y \leq 1$ and $0 \leq \lambda_Z < \infty$ and transition matrix P started from π. Then
$$\mathbb{P}\left(M_{\mathrm{good}} < M_{\mathrm{bad}}\right) \geq \frac{c}{\left(1+\sqrt{1+\lambda_Y}\right)^2 \left(1+\lambda_Z\right)}.$$
The proof shows that we may take $c = 1/4752$, which implies a version of Theorem 1.2 with $1/4$ replaced by $1/((\sqrt{2}+1)^2 \cdot 4752)$. The constant c most likely can be improved, but the dependence of the lower bound on $\lambda_Z$ is sharp as $\lambda_Z \to +\infty$. Indeed, if P is simple random walk over a large complete graph with n vertices, then
$$\mathbb{P}\left(M_{\mathrm{good}} < M_{\mathrm{bad}}\right) = \frac{1+\lambda_Y}{2\left(1+\lambda_Y+\lambda_Z\right)} + O(1/n),$$
where the term $O(1/n)$ corresponds to the possibility of meetings at time 0.
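The complete-graph computation can be checked directly. Once the three walkers occupy distinct vertices of $K_n$, a jump by any walker lands on a specified other walker's vertex with probability $1/(n-1)$, so the pairs (X,Y), (X,Z) and (Y,Z) collide at competing rates proportional to $1+\lambda_Y$, $1+\lambda_Z$ and $\lambda_Y+\lambda_Z$, giving $\mathbb{P}(M_{\mathrm{good}} < M_{\mathrm{bad}}) = (1+\lambda_Y)/(2(1+\lambda_Y+\lambda_Z))$ conditionally on distinct starting points. The sketch below is our own sanity check, not part of the paper's argument; the choices $n = 50$, $\lambda_Y = 0.5$, $\lambda_Z = 2$ are arbitrary.

```python
import random

def good_meets_first(n, lam_y, lam_z, rng):
    # stationary (uniform) starts on K_n, conditioned to be distinct
    while True:
        pos = [rng.randrange(n) for _ in range(3)]
        if len(set(pos)) == 3:
            break
    rates = [1.0, lam_y, lam_z]
    total = sum(rates)
    while True:
        # pick the jumping walker proportionally to its speed
        u = rng.random() * total
        which = 0 if u < rates[0] else (1 if u < rates[0] + rates[1] else 2)
        # the jump lands on a uniform vertex different from the current one
        new = rng.randrange(n - 1)
        if new >= pos[which]:
            new += 1
        pos[which] = new
        x, y, z = pos
        if x == y:
            return True   # X and Y met first
        if x == z or y == z:
            return False  # one of them hit Z first

rng = random.Random(1)
n, lam_y, lam_z = 50, 0.5, 2.0
trials = 20000
est = sum(good_meets_first(n, lam_y, lam_z, rng) for _ in range(trials)) / trials
pred = (1 + lam_y) / (2 * (1 + lam_y + lam_z))
print(est, pred)
```

Since the holding times are irrelevant for the order of collisions, simulating the embedded jump chain suffices.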
It is natural to ask if our theorems can be extended beyond the class of transitive chains. The next theorem shows that the answer is no for the more general Theorem 1.3. The theorem essentially asserts that there are graphs where typical meeting times are much smaller than typical hitting times.
Theorem 1.4. For all $\varepsilon > 0$ there exists a finite connected graph G such that, if P corresponds to simple random walk on G and $\lambda_X = 1$, $\lambda_Y = 0$ and $\lambda_Z = 1$, then $\mathbb{P}\left(M_{\mathrm{good}} \leq M_{\mathrm{bad}}\right) < \varepsilon$.
On the other hand, we believe that for certain values of λ X , λ Y and λ Z , universal lower bounds are possible without transitivity. Here is a concrete conjecture, which relates to the setting of Theorem 1.2.
Conjecture 1.5. Suppose $\lambda_X = \lambda_Y = 1$ and $\lambda_Z = 0$. Then
$$\mathbb{P}\left(M_{\mathrm{good}} \leq M_{\mathrm{bad}}\right) \geq \frac{1}{3}$$
holds for all finite irreducible and reversible chains P.
Alexander Holroyd (personal communication) pointed out an example showing that for any δ > 0 there exist transitive chains for which $\mathbb{P}\left(M_{\mathrm{good}} \leq M_{\mathrm{bad}}\right) \leq 1/3 + \delta$. We describe this example in Section 6. This means that, if true, Conjecture 1.5 is best possible even for transitive chains. However, we note that any uniform lower bound for all P, and for $\lambda_X$, $\lambda_Y$ and $\lambda_Z$ as in Conjecture 1.5, would be a new result.
Remark 1.6. Without reversibility, the conjecture fails badly. Consider a clockwise continuous time random walk on a cycle of length 2n. More precisely, with $P = (p_{ij})_{1\leq i,j\leq 2n}$ we have $p_{ij} = 1$ if $j = i+1 \pmod{2n}$ and $p_{ij} = 0$ otherwise. The distance between independent random walkers behaves as a continuous time simple symmetric random walk reflected at 0 and n. So, started from stationarity, it typically takes such walkers time of order $n^2$ to meet. On the other hand, the hitting time of any point is at most of order n.
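The $n^2$ versus $n$ separation in this remark is easy to see in simulation (our own illustration; the cycle length $m = 40$ and the trial count are arbitrary). As noted above, the difference of the two walkers' positions performs a simple symmetric random walk with jumps at rate 2, and meeting corresponds to this difference hitting 0, while a single clockwise rate-1 walker reaches any fixed vertex after at most m jumps on average.

```python
import random

def meeting_steps(m, rng):
    # difference of two independent clockwise walkers on Z_m is a simple
    # symmetric random walk; meeting = difference hits 0
    d = rng.randrange(m)
    steps = 0
    while d != 0:
        d = (d + rng.choice((-1, 1))) % m
        steps += 1
    return steps

rng = random.Random(2)
m, trials = 40, 2000
# the difference walk jumps at rate 2, so each step takes mean time 1/2
mean_meet_time = sum(meeting_steps(m, rng) for _ in range(trials)) / trials / 2
# a clockwise rate-1 walker needs expected time at most m to hit a fixed vertex
hit_time_bound = m
print(mean_meet_time, hit_time_bound)
```

With these parameters the empirical mean meeting time is several times larger than the worst-case hitting time, in line with the quadratic-versus-linear scaling.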
Before we continue, we say a few words about the main proof ideas. The unifying theme of the proofs of Theorems 1.2 and 1.3 is to reduce the comparison of meeting times to statements about hitting times for a single chain. The proof of Theorem 1.4 builds a graph with two parts: the "Up" part concentrates the bulk of the stationary measure, but the "Down" part is where meetings tend to happen, and they happen fast. As a result, only a negligible fraction of the "Up" part is explored before X and Z meet, and the upshot is that $M_{X,Y} > M_{X,Z}$ with high probability.

The 1/4 lower bound
In this section we prove Theorem 1.2. The argument is fairly short, and much simpler than the one for the more general Theorem 1.3. Before presenting the proof, we recall some standard facts about hitting times which are also used later on.
The hitting time of a state $z \in \Omega$ by X is the first time t at which $X_t = z$, i.e.
$$\tau^X_z := \inf\{t \geq 0 : X_t = z\}.$$
We define τ Y z similarly and we also let Whenever there is no confusion, i.e. if there is a single chain in question, we will drop the dependence on X or Y from the notation of the hitting times.
Lemma 2.1. For any reversible chain with two or more states we have
$$t_{\mathrm{hit}} \leq 2\, t^*_{\mathrm{hit}}.$$
Moreover, if X is a reversible and transitive chain, then for all $x, z \in \Omega$ and all $t \geq 0$ we have
$$\mathbb{P}_x\left(\tau_z > t\right) = \mathbb{P}_z\left(\tau_x > t\right).$$
For a proof of the lemma above we refer the reader to [1, Chapter 14/Proposition 5 and Chapter 3].
We will also need the following martingale property, where for $a, b \in \Omega$ we write $f(a, b) := \mathbb{E}_a\left[\tau_b\right]$.
Lemma 2.2. Let P be transitive and reversible, let $\lambda_X = \lambda_Y = 1$ and $\lambda_Z = 0$, and write $z = Z_0$. Then the process
$$G_t := f(X_t, z) + f(Y_t, z) - f(X_t, Y_t)$$
is a martingale up to the time $S := M_{\mathrm{good}} \wedge M_{\mathrm{bad}}$, for any initial states $(x, y) \in \Omega^2$ and any $z \in \Omega$.
Proof of Theorem 1.2. Since X and Y are two independent copies of the same chain, we have
$$\mathbb{E}_a\left[\tau^X_b\right] = \mathbb{E}_a\left[\tau^Y_b\right] = f(a, b)$$
for all a, b. By Lemma 2.2 we now get that $(G_t)_{t\geq0}$ is a martingale up to time S. This martingale is bounded (because the state space is finite). The fact that the chain is finite and irreducible implies $S < \infty$ almost surely for all initial states. We deduce from optional stopping that
$$\mathbb{E}\left[G_0\right] = \mathbb{E}\left[G_S\right].$$
The left hand side above is given by the quantity $t^*_{\mathrm{hit}}$ defined in (2.2). This is because
$$\mathbb{E}\left[f(X_0, Z_0)\right] = \mathbb{E}\left[f(Y_0, Z_0)\right] = t^*_{\mathrm{hit}},$$
where the second equality follows from the fact that for a transitive chain, $\mathbb{E}_\pi\left[\tau^X_y\right]$ is independent of y. Using this a second time yields $\mathbb{E}\left[f(X_0, Y_0)\right] = t^*_{\mathrm{hit}}$, so that $\mathbb{E}[G_0] = t^*_{\mathrm{hit}}$. On the other hand, at time S, on the event $\{M_{\mathrm{bad}} \leq M_{\mathrm{good}}\}$ we have two alternatives: either $X_S = z$ or $Y_S = z$.
In both cases $G_S = 0$: this is obviously true in the second case, since then $G_S = f(X_S, z) + f(z, z) - f(X_S, z) = 0$; and it follows from Lemma 2.1 in the first case, since then $G_S = f(Y_S, z) - f(z, Y_S)$, which vanishes by the symmetry $\mathbb{E}_w[\tau_z] = \mathbb{E}_z[\tau_w]$ valid for transitive chains.
We deduce that
$$t^*_{\mathrm{hit}} = \mathbb{E}\left[G_S\right] = \mathbb{E}\left[G_S\, \mathbf{1}_{\{M_{\mathrm{good}} < M_{\mathrm{bad}}\}}\right] \leq 2\, t_{\mathrm{hit}}\, \mathbb{P}\left(M_{\mathrm{good}} < M_{\mathrm{bad}}\right),$$
where we used that on the event $\{M_{\mathrm{good}} < M_{\mathrm{bad}}\}$ we have $X_S = Y_S$, and hence $G_S = 2 f(X_S, z) \leq 2\, t_{\mathrm{hit}}$. Using that $t_{\mathrm{hit}} \leq 2\, t^*_{\mathrm{hit}}$ from Lemma 2.1 finishes the proof.
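The bound just proved is easy to probe numerically. The following Monte Carlo sketch (our own illustration, not part of the argument) runs the embedded jump chain of X and Y on a cycle of length 20, a transitive chain, with Z frozen at its uniform starting vertex; since X and Y jump at equal rates and Z does not move, the order of the possible meetings depends only on which walker moves at each jump event. Starting points are conditioned to be distinct to avoid ties at time 0.

```python
import random

def good_meets_first(n, rng):
    # uniform (stationary) starting points, conditioned to be distinct
    while True:
        x, y, z = (rng.randrange(n) for _ in range(3))
        if x != y and x != z and y != z:
            break
    while True:
        # X and Y jump at equal rates and Z is frozen, so each jump event
        # moves X or Y with probability 1/2, by +1 or -1 uniformly
        step = rng.choice((-1, 1))
        if rng.random() < 0.5:
            x = (x + step) % n
        else:
            y = (y + step) % n
        if x == y:
            return True        # M_good < M_bad
        if x == z or y == z:
            return False       # M_bad < M_good

rng = random.Random(0)
trials = 5000
est = sum(good_meets_first(20, rng) for _ in range(trials)) / trials
print(est)
```

On this example the empirical probability sits comfortably above the 1/4 bound.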

Towards the general lower bound
In this section we collect the tools that we will use in the proof of Theorem 1.3. We first argue that the obvious "fix" to the proof of Theorem 1.2 does not work in all cases. Indeed, a straightforward extension of Lemma 2.2 establishes that
$$G_t := f(X_t, Z_t) + f(Y_t, Z_t) - f(X_t, Y_t) + 2\lambda_Z\, t$$
is a martingale up to time S. One can see that in this case
$$\mathbb{E}[S] \leq \mathbb{E}\left[M_{X,Y}\right] = \frac{t^*_{\mathrm{hit}}}{1+\lambda_Y},$$
which easily yields
$$\mathbb{P}\left(M_{\mathrm{good}} < M_{\mathrm{bad}}\right) \geq \frac{1}{4}\left(1 - \frac{2\lambda_Z}{1+\lambda_Y}\right).$$
In particular, we obtain the same bound as in Theorem 1.2 provided that $\lambda_Z = 0$. However, this bound becomes useless when $\lambda_Z > (1+\lambda_Y)/2$. Other linear combinations of $f(X_t, Z_t)$, $f(Y_t, Z_t)$ and $f(X_t, Y_t)$ also fail to achieve our goal when $\lambda_Z$ is large. So, in general, a different strategy is needed.

Hitting times for states and trajectories
In this section we collect some results on hitting times for a single Markov chain.
Recall the quantity $t^*_{\mathrm{hit}}$ defined in (2.2). The next lemma shows that, up to a constant factor, $t^*_{\mathrm{hit}}$ also bounds expected hitting times of moving targets, from arbitrary initial states. This lemma is essentially due to Oliveira [4, Lemma 1.1], but in this particular form it appeared in [5].
Lemma 3.1. Let X be a reversible Markov chain taking values in Ω and $h = (h_t)_{t\geq0}$ a deterministic, càdlàg, Ω-valued trajectory. If $\tau_h := \inf\{t \geq 0 : X_t = h_t\}$ denotes the first time X meets the trajectory h, then
$$\max_{x \in \Omega} \mathbb{E}_x\left[\tau_h\right] \leq 2c\, t^*_{\mathrm{hit}} \leq 11\, t^*_{\mathrm{hit}}$$
for a universal constant $c > 0$, where $t_{\mathrm{hit}}$ is as in (2.2). Indeed, [5] proves that $\max_x \mathbb{E}_x[\tau_h] \leq c\, t_{\mathrm{hit}}$, and inspection of the proof in [5] shows that $c \leq 4 + 5/4$, therefore $2c \leq 11$. Lemma 2.1 finishes the proof.
For a reversible transition matrix P we let $1 = \lambda_1 > \lambda_2 \geq \cdots \geq \lambda_{|\Omega|} \geq -1$ be its eigenvalues in decreasing order. We define the relaxation time via
$$t_{\mathrm{rel}} := \frac{1}{1 - \lambda_2}.$$
The next lemma bounds the probability that $\tau_z$ is small for reversible chains. Clearly, if no assumption is made on z and on the starting state x, such an estimate cannot be very good (think of two adjacent points on a path). The next lemma shows that if we choose the "right" starting state, and only consider the majority of possible target states, we can show that $\tau_z$ dominates an exponential random variable with mean $t_{\mathrm{rel}}$. This will be used later to upper bound the probability that $\tau_z/t_{\mathrm{hit}}$ is very small.
Lemma 3.2. Let X be a reversible chain. There exist $x \in \Omega$ and a subset $A \subset \Omega$ with stationary measure $\pi(A) \geq 1/2$ such that, if $\tau_A := \min_{z \in A} \tau_z$, then for any $t > 0$,
$$\mathbb{P}_x\left(\tau_A > t\right) \geq e^{-t/t_{\mathrm{rel}}}.$$
Proof. We use the spectral theory of reversible chains [1, Section 3.4]. The first step is to note that P has a non-zero eigenfunction $\varphi : \Omega \to \mathbb{R}$ such that
$$\mathbb{E}_u\left[\varphi(X_t)\right] = e^{-t/t_{\mathrm{rel}}}\, \varphi(u) \quad \text{for all } u \in \Omega \text{ and } t \geq 0. \tag{3.1}$$
This eigenfunction is orthogonal to the constant eigenfunction in the inner product induced by π, so it must take both positive and negative values. We may assume without loss of generality that the set
$$A := \{z \in \Omega : \varphi(z) \leq 0\}$$
has measure $\pi(A) \geq 1/2$ (if that is not the case, replace ϕ with −ϕ). Choose $x \in \Omega$ with $\varphi(x) > 0$ as large as possible. On one hand, applying the strong Markov property at time $\tau_A$ together with (3.1), and using that $\varphi \leq 0$ on A, we get
$$\mathbb{E}_x\left[\varphi(X_t)\, \mathbf{1}_{\{\tau_A \leq t\}}\right] \leq 0.$$
Plugging this into (3.1) with the choice u = x, and recalling $\varphi(X_t) \leq \varphi(x)$ always, we obtain
$$e^{-t/t_{\mathrm{rel}}}\, \varphi(x) = \mathbb{E}_x\left[\varphi(X_t)\right] \leq \mathbb{E}_x\left[\varphi(X_t)\, \mathbf{1}_{\{\tau_A > t\}}\right] \leq \varphi(x)\, \mathbb{P}_x\left(\tau_A > t\right).$$
Dividing both sides by ϕ(x) (which is > 0) finishes the proof.
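Lemma 3.2 can be checked exactly on a small example. The sketch below is our own verification, using simple random walk on the 8-cycle (where π is uniform and a second eigenfunction is an explicit cosine); it computes $\mathbb{P}_x(\tau_A > t)$ through a truncated power series for the exponential of the killed generator and compares it with $e^{-t/t_{\mathrm{rel}}}$.

```python
import numpy as np

n = 8
P = np.zeros((n, n))                 # simple random walk on the 8-cycle
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.5

lam2 = np.cos(2 * np.pi / n)         # second-largest eigenvalue of P
t_rel = 1.0 / (1.0 - lam2)
phi = np.cos(2 * np.pi * np.arange(n) / n)   # an eigenfunction for lam2

# A = {phi <= 0}; the tolerance absorbs floating-point zeros such as cos(pi/2)
A = set(np.where(phi <= 1e-12)[0])
B = [v for v in range(n) if v not in A]
x_index = B.index(int(np.argmax(phi)))       # x maximizes phi, as in the proof

def survival(t, terms=200):
    # P_x(tau_A > t) for the rate-1 chain: row sum at x of exp(t (P_B - I))
    PB = P[np.ix_(B, B)]
    acc = np.eye(len(B))
    term = np.eye(len(B))
    for k in range(1, terms):
        term = term @ PB * (t / k)
        acc += term
    return float(np.exp(-t) * acc[x_index].sum())

checks = [(t, survival(t), float(np.exp(-t / t_rel))) for t in (0.5, 1.0, 2.0, 5.0, 10.0)]
for t, s, lower in checks:
    print(t, s, lower)
```

Here $\pi(A) = 5/8 \geq 1/2$, and the exponential lower bound holds at every tested time.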

Results for transitive chains
In this section we prove results on hitting times under the assumption that P is transitive and reversible.

A small-time estimate for hitting times
We start by recalling a result which follows from the complete monotonicity of the law of $\tau_z$ when starting from π, together with bounds of Aldous and Brown [2].
Lemma 3.3. The function f is increasing.
Proof. Let $Q = I - P$ and let $Q_z$ be the restriction of Q to $\Omega \setminus \{z\}$. We recall the complete monotonicity of the law of $\tau_z$ starting from π (see for instance [2, eqn. (18)]): there exist non-negative constants $(a_i)$ such that
$$\mathbb{P}_\pi\left(\tau_z > t\right) = \sum_i a_i\, e^{-\gamma_i t},$$
where the $\gamma_i$ are the eigenvalues of $Q_z$ and $\mathbb{E}_\alpha\left[\tau_z\right] = 1/\gamma_1$, where α is any quasistationary distribution on $\Omega \setminus \{z\}$ corresponding to the eigenvalue $\gamma_1$.
Using the above representation we can rewrite f accordingly. Therefore, using this rewriting, the fact that f is increasing, and (3.2), we conclude that the claimed bound holds for all s, which completes the proof.
The next lemma essentially improves upon Lemma 3.2 from Section 3.1.
Lemma 3.4. Suppose that P is reversible and transitive. Then for any $x \in \Omega$ there exists a subset $A_x \subset \Omega$ with $\pi(A_x) \geq 1/2$ such that, for any $\theta > 0$,
$$\mathbb{P}_x\left(\tau_{A_x} \leq \theta\, t_{\mathrm{hit}}\right) \leq C\sqrt{\theta}$$
for a universal constant $C > 0$.
Proof. We first prove the statement for one particular state; since the chain is transitive, this in fact holds for all x. We now fix $x \in \Omega$ and $\theta > 0$.
We will consider two cases separately, according to the size of $t_{\mathrm{rel}}$. Suppose first that $t_{\mathrm{rel}} \geq \sqrt{\theta}\, t_{\mathrm{hit}}$. In this case, taking $A_x$ as provided by Lemma 3.2 (composed with an automorphism carrying the state from that lemma to x), we obtain
$$\mathbb{P}_x\left(\tau_{A_x} \leq \theta\, t_{\mathrm{hit}}\right) \leq 1 - e^{-\theta\, t_{\mathrm{hit}}/t_{\mathrm{rel}}} \leq \frac{\theta\, t_{\mathrm{hit}}}{t_{\mathrm{rel}}} \leq \sqrt{\theta},$$
which concludes the proof in this case.
Suppose next that $t_{\mathrm{rel}} < \sqrt{\theta}\, t_{\mathrm{hit}}$. In this case it suffices to prove (3.3). To see that this suffices, we use the fact that P is transitive and apply Lemma 2.1 to obtain that $\mathbb{P}_x(\tau_z > t)$ is symmetric in x and z. As a result, (3.3) implies the corresponding bound with the roles of the starting and target states exchanged, and since $\pi(A) \geq 1/2$, we conclude the desired estimate. It remains to prove (3.3). Since P is transitive, $\mathbb{E}_\pi[\tau_z] = t^*_{\mathrm{hit}}$ is independent of z. Moreover, we are assuming that $t_{\mathrm{rel}} \leq \sqrt{\theta}\, t_{\mathrm{hit}}$, and hence using Lemma 2.1 we get that $t_{\mathrm{rel}} \leq 2\sqrt{\theta}\, t^*_{\mathrm{hit}}$. We now obtain, for all $s \geq 0$ and for all $z \in \Omega$, the required estimate. This now finishes the proof of (3.3), since for all $z \in \Omega$ the desired inequality follows, where for the first inequality we used again Lemma 2.1.

Distributional identities for meeting and hitting times
Our next result shows that the distributions of hitting and meeting times are intimately related for transitive and reversible P .
Lemma 3.5. Let P be a reversible and transitive transition matrix. Let X, Y and Z be three independent continuous time Markov chains with speeds $\lambda_X = 1$, $\lambda_Y \geq 0$ and $\lambda_Z \geq 0$ and transition matrix P. Then for all $(x, z) \in \Omega^2$, the distribution of $\tau^X_z/(\lambda_Y + \lambda_Z)$ under $\mathbb{P}_x$ is the same as the distribution of $M_{Y,Z}$ under $\mathbb{P}_{(x,z)}$.
A special case of this lemma is when all the speeds are equal to 1: then the meeting time of the two independent walkers Y and Z started from (x, z) has the same law as $\tau_z/2$ for the single walker X started from x. This equality is well known and is usually proven by martingale methods such as the ones used in the proof of Theorem 1.2. Somewhat oddly, it seems that Lemma 3.5 in its general form is new, or at least was not widely known before. We also note the following corollary of Lemma 3.4 and Lemma 3.5.
Corollary 3.6. Let P be a transitive and reversible transition matrix and let X, Y and Z be three independent continuous time Markov chains with speeds $\lambda_X = 1$ and $\lambda_Y, \lambda_Z \geq 0$ respectively and transition matrix P. Then for all $x \in \Omega$ there exists a subset $A_x \subset \Omega$ with $\pi(A_x) \geq 1/2$ such that, for all $z \in A_x$ and all $\theta > 0$,
$$\mathbb{P}_{(x,z)}\left(M_{Y,Z} \leq \frac{\theta\, t_{\mathrm{hit}}}{\lambda_Y + \lambda_Z}\right) \leq C\sqrt{\theta},$$
with C the universal constant from Lemma 3.4.
Proof of Lemma 3.5. Define the functions
$$f_{(x,z)}(t) := \mathbb{P}_{(x,z)}\left(M_{Y,Z} \leq t\right) \quad \text{and} \quad g_{x,z}(t) := \mathbb{P}_x\left(\tau_z \leq (\lambda_Y + \lambda_Z)\, t\right), \qquad (x,z) \in \Omega^2,\ t \geq 0.$$
We will be done once we show that $g_{x,z}(t) = f_{(x,z)}(t)$ for all $(x,z) \in \Omega^2$ and $t \geq 0$. These equalities are true (by inspection) when t = 0. We are going to show that the functions $(f_{(x,z)}(\cdot))_{(x,z)\in\Omega^2}$ and $(g_{x,z}(\cdot))_{(x,z)\in\Omega^2}$ satisfy the same linear system of ordinary differential equations (with the derivatives at t = 0 interpreted as right derivatives). Then the equality for all $t \geq 0$ will follow from the general uniqueness theory of linear ODEs.
To prove that f and g satisfy the same system of ODEs, we will use a standard formula for the cumulative distribution function of a hitting time. If $(V_t)_{t\geq0}$ is an irreducible continuous time Markov chain over a set $\Omega_V$ with transition rates $q(v,w)$, and $A \subset \Omega_V$ is a nonempty subset of the state space, the hitting time $\tau^V_A := \inf\{t \geq 0 : V_t \in A\}$ satisfies, for all $v \notin A$,
$$\frac{d}{dt}\, \mathbb{P}_v\left(\tau^V_A \leq t\right) = \sum_{w \neq v} q(v, w)\left[\mathbb{P}_w\left(\tau^V_A \leq t\right) - \mathbb{P}_v\left(\tau^V_A \leq t\right)\right]. \tag{3.5}$$
The derivative is understood as a right derivative at time t = 0.
Let $\Delta = \{(x, x) : x \in \Omega\} \subset \Omega^2$ be the diagonal set. We first apply (3.5) to the product chain $(V_t)_{t\geq0} = (Y_t, Z_t)_{t\geq0}$, with $\Omega_V = \Omega^2$ and $A = \Delta$. In this case $\tau^V_A = M_{Y,Z}$, and a straightforward computation with the transition rates gives, for $(x, z) \notin \Delta$:
$$\frac{d}{dt} f_{(x,z)}(t) = \lambda_Y \sum_{x'} P(x, x') \left[f_{(x',z)}(t) - f_{(x,z)}(t)\right] + \lambda_Z \sum_{z'} P(z, z') \left[f_{(x,z')}(t) - f_{(x,z)}(t)\right]. \tag{3.6}$$
We now apply the same formula (3.5) with $(V_t)_{t\geq0} = (X_t)_{t\geq0}$. Note that $g_{x,z}(t) = \mathbb{P}_x\left(\tau_z \leq s(t)\right)$ where $s(t) = (\lambda_Y + \lambda_Z)\, t$, so the chain rule implies, for $x \neq z$:
$$\frac{d}{dt} g_{x,z}(t) = (\lambda_Y + \lambda_Z) \sum_{x'} P(x, x') \left[g_{x',z}(t) - g_{x,z}(t)\right]. \tag{3.7}$$
We will now make crucial use of transitivity, which allows us to use Lemma 2.1 to deduce that $\mathbb{P}_x\left(\tau^X_z \leq (\lambda_Y + \lambda_Z)\, t\right)$ is symmetric in x and z, i.e.
that is, $g_{x,z}(\cdot) = g_{z,x}(\cdot)$ for all x, z. This allows us to reverse the roles of x and z in (3.7) to obtain:
$$\frac{d}{dt} g_{x,z}(t) = (\lambda_Y + \lambda_Z) \sum_{z'} P(z, z') \left[g_{x,z'}(t) - g_{x,z}(t)\right]. \tag{3.8}$$
We add the two formulas (3.7) and (3.8) with weights $\lambda_Y/(\lambda_Y+\lambda_Z)$ and $\lambda_Z/(\lambda_Y+\lambda_Z)$ respectively. The upshot is:
$$\frac{d}{dt} g_{x,z}(t) = \lambda_Y \sum_{x'} P(x, x') \left[g_{x',z}(t) - g_{x,z}(t)\right] + \lambda_Z \sum_{z'} P(z, z') \left[g_{x,z'}(t) - g_{x,z}(t)\right].$$
This is precisely the system of ODEs we obtained for the f's in (3.6), and it concludes the proof.
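Since Lemma 3.5 is a distributional identity, it can be verified to machine precision by linear algebra on a small transitive example. The sketch below is our own check: simple random walk on the 6-cycle, with arbitrarily chosen speeds $\lambda_Y = 0.7$, $\lambda_Z = 1.8$ and starting states $x = 0$, $z = 2$. Both survival functions are computed from killed generators via a truncated series for the matrix exponential.

```python
import numpy as np

def expm_series(M, t, terms=150):
    # e^{tM} by plain power series; fine for these small bounded matrices
    acc = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M * (t / k)
        acc += term
    return acc

n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.5   # SRW on the 6-cycle

lam_y, lam_z, x, z = 0.7, 1.8, 0, 2

# survival function of tau^X_z for the speed-1 chain X, killed at z
B = [v for v in range(n) if v != z]
QX = P[np.ix_(B, B)] - np.eye(n - 1)

# survival function of M_{Y,Z}: product chain (Y, Z) killed on the diagonal
I = np.eye(n)
G = lam_y * np.kron(P - I, I) + lam_z * np.kron(I, P - I)
off_diag = [i * n + j for i in range(n) for j in range(n) if i != j]
GB = G[np.ix_(off_diag, off_diag)]

pairs = []
for t in (0.3, 1.0, 3.0):
    lhs = expm_series(QX, (lam_y + lam_z) * t)[B.index(x)].sum()   # P_x(tau_z > (lam_y+lam_z) t)
    rhs = expm_series(GB, t)[off_diag.index(x * n + z)].sum()      # P_{(x,z)}(M_{Y,Z} > t)
    pairs.append((float(lhs), float(rhs)))
print(pairs)
```

The two survival functions agree up to numerical truncation error, as the lemma predicts.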

The general lower bound
In this section we prove Theorem 1.3.
Proof of Theorem 1.3. We let $n = |\Omega|$ denote the number of states. The transitivity assumption implies $\pi(v) = 1/n$ for all $v \in \Omega$. The next lemma will be used in the proof. We defer its proof until Section 4.1.
Lemma 4.1. Let P be a reversible and transitive transition matrix. Let X, Y and Z be three independent continuous time chains with transition matrix P and speeds $\lambda_X = 1$ and $\lambda_Y, \lambda_Z \geq 0$, and let µ be the probability measure given by (4.1).
Our proof is based on the analysis of the time that X and Y spend on the diagonal $\Delta = \{(x, x) : x \in \Omega\}$ prior to time $M_{\mathrm{bad}}$, i.e.
$$T := \int_0^{M_{\mathrm{bad}}} \mathbf{1}_{\{X_t = Y_t\}}\, dt. \tag{4.2}$$
Note that $M_{\mathrm{good}} < M_{\mathrm{bad}}$ if and only if $T > 0$, so Lemma 4.1 upper bounds the integral appearing above. Thus, in order to obtain a lower bound for $\mathbb{P}\left(M_{\mathrm{good}} < M_{\mathrm{bad}}\right)$, it suffices to lower bound $\mathbb{E}[T]$. Using reversibility and the fact that π is uniform for a transitive chain, we obtain an identity which, plugged back into (4.4), gives (4.5). At this point we recall that $M_{\mathrm{bad}} = M_{X,Z} \wedge M_{Y,Z}$, and therefore a union bound applies. By Corollary 3.6, for each $x \in \Omega$ there exists a subset $A_x$ with $\pi(A_x) \geq 1/2$ for which the probability that $M_{Y,Z}$ is small admits the corresponding square-root bound for all $t > 0$. Therefore, fixing $\xi > 0$ and $x \in \Omega$, and maximizing the right hand side by taking the optimal value of ξ, we obtain a bound valid for all $x \in \Omega$. Combining this with (4.5), and then using Lemma 4.1 and (4.3), we conclude the bound of Theorem 1.3, and this finishes the proof.

Occupation time up to a stopping time
The goal of this section is to prove Lemma 4.1. We start with a more general setting. We give the proof of the lemma at the end of the section.
It is well known that a finite irreducible chain $(V_t)_{t\geq0}$ with state space $\Omega_V$, started from a point x and stopped at a stopping time $\tau > 0$ with $V_\tau = x$, satisfies:
$$\mathbb{E}_x\left[\int_0^\tau \mathbf{1}_{\{V_t = v\}}\, dt\right] = \mathbb{E}_x\left[\tau\right] \pi_V(v) \quad \text{for all } v \in \Omega_V,$$
where $\pi_V$ is the unique stationary measure of $(V_t)_{t\geq0}$ (some simple conditions on τ are necessary for this). There are also extensions of this identity to the case where $V_0$ and $V_\tau$ are not necessarily equal, but have the same distribution [1, Proposition 2.4, Chapter 2]. The following lemma extends this idea even further, and shows that τ may be a stopping time for a "larger" Markov chain.
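The classical identity above can be confirmed by exact linear algebra on a toy example (our own illustration: a reversible birth-death chain built from edge conductances, with τ the first return time to x). The expected occupation of each state before the return is read off the Green's matrix of the chain killed at x; for the rate-1 chain each visit lasts an exponential time of mean 1.

```python
import numpy as np

# reversible birth-death chain on 4 states from edge conductances (toy example)
c = np.array([1.0, 2.0, 0.5])          # conductances of edges (0,1), (1,2), (2,3)
n = 4
deg = np.zeros(n)
deg[:-1] += c
deg[1:] += c
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = c[i] / deg[i]
    P[i + 1, i] = c[i] / deg[i + 1]
pi = deg / deg.sum()                   # stationary distribution (prop. to degree)

x = 0
B = list(range(1, n))                  # states other than x
G = np.linalg.inv(np.eye(n - 1) - P[np.ix_(B, B)])   # expected visits before hitting x

# expected occupation of each state before the first return to x,
# for the rate-1 continuous-time chain (each visit lasts mean time 1)
occ = np.zeros(n)
occ[x] = 1.0                           # the initial holding period at x
occ[1:] = P[x, B] @ G
E_tau = occ.sum()                      # expected return time
print(occ, E_tau * pi)
```

The computed occupation vector equals $\mathbb{E}_x[\tau]\,\pi_V$ exactly, and $\mathbb{E}_x[\tau] = 1/\pi_V(x)$, as Kac's formula for the rate-1 chain predicts.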
Lemma 4.2. Suppose $(U_t)_{t\geq0}$ and $(V_t)_{t\geq0}$ are independent, irreducible, continuous time Markov chains with finite state spaces $\Omega_U$ and $\Omega_V$ respectively. Assume µ is a probability measure over $\Omega_U \times \Omega_V$ and that τ is a stopping time for the process $(U_t, V_t)_{t\geq0}$ with the following properties:
(1) $\mathbb{P}_\mu\left(\tau > 0\right) = 1$;
(2) $\mathbb{E}_\mu\left[\tau\right] < \infty$;
(3) the law of $V_\tau$ under $\mathbb{P}_\mu$ coincides with the law of $V_0$.
Then for all $v \in \Omega_V$,
$$\mathbb{E}_\mu\left[\int_0^\tau \mathbf{1}_{\{V_t = v\}}\, dt\right] = \mathbb{E}_\mu\left[\tau\right] \pi_V(v),$$
where $\pi_V$ is the stationary distribution of V.
Proof. In this proof we will interchange integrals, expectations and summations several times. Instead of justifying this at each step, we note right away that all of these interchanges are valid, because the integrands are non-negative.

Consider the row vector h with nonnegative coordinates
$$h(v) := \mathbb{E}_\mu\left[\int_0^\tau \mathbf{1}_{\{V_t = v\}}\, dt\right], \qquad v \in \Omega_V.$$
Note that $\sum_{v \in \Omega_V} h(v) = \mathbb{E}_\mu[\tau] > 0$ because $\tau > 0$ a.s. Letting Q be the generator of $(V_t)_{t\geq0}$, we will show below that
$$hQ = 0. \tag{4.6}$$
By irreducibility, the only row vectors annihilated by Q in this way are the scalar multiples of $\pi_V$; hence $h = \mathbb{E}_\mu[\tau]\, \pi_V$, which is precisely what we need to prove.
We will derive $hQ = 0$ by computing the limit of $h\,(e^{\varepsilon Q} - I)/\varepsilon$ as $\varepsilon \downarrow 0$, recalling the expansion $e^{\varepsilon Q} = I + \varepsilon Q + o(\varepsilon)$. Crucially, the fact that τ is a stopping time for (U, V) implies that the event $\{V_t = w, \tau > t\}$ is measurable with respect to $(U_s, V_s)_{s \leq t}$. Using that V and U evolve independently, together with the Markov property for V, we can rewrite the relevant expectations; plugging this back into the previous display splits the computation into two terms. The first term is handled using that $\tau > 0$ always. Regarding the second term, we have the further decomposition (4.10). For the first term on the right hand side of (4.10) we obtain the desired contribution. As for the second term in the sum in (4.10): on the event $\{\tau \leq t \leq \tau + \varepsilon\}$, in order to have $V_t = v$ and $V_\tau \neq v$, there must exist at least one jump of the Markov chain in the time interval $[\tau, t]$, which on this event has length less than ε. Therefore this term is of order $o(\varepsilon)$, and we deduce the claimed limit. Hence, this together with (4.11), (4.9) and (4.8) shows that the limit equals the difference between the laws of $V_\tau$ and $V_0$ under $\mathbb{P}_\mu$. Our assumption that $\mathbb{P}_\mu(V_0 = \cdot) = \mathbb{P}_\mu(V_\tau = \cdot)$ implies that this right hand side is zero. Plugging this back into (4.7) gives $hQ = 0$ and finishes the proof.
Proof of Lemma 4.1. We want to eventually use Lemma 4.2 to estimate the integral, which we can rewrite as a sum of terms of the form demanded by Lemma 4.2: just set $(U_t)_{t\geq0} = (Z_t)_{t\geq0}$ and $(V_t)_{t\geq0} = (X_t, Y_t)_{t\geq0}$. However, other conditions are needed for this lemma to apply, and one of them clearly fails: the distribution of $(X_0, Y_0)$ is not the same as that of $(X_{M_{\mathrm{bad}}}, Y_{M_{\mathrm{bad}}})$. To see this, simply note that whereas $X_0 = Y_0$ under µ (as we will see below), typically $X_{M_{\mathrm{bad}}} \neq Y_{M_{\mathrm{bad}}}$. It turns out that we can circumvent this problem by defining
$$\tau := \inf\{t \geq M_{\mathrm{bad}} : X_t = Y_t\}.$$
Note that $X_t \neq Y_t$ for all $M_{\mathrm{bad}} \leq t < \tau$. Therefore (4.14) holds. Clearly τ is a stopping time for $(X_t, Y_t, Z_t)_{t\geq0}$. We claim that τ and the initial distribution µ satisfy the conditions (1)–(3) of Lemma 4.2.
First, $\mathbb{P}_\mu(\tau > 0) = 1$, since $\mathbb{P}_\mu(X_0 = Y_0 \neq Z_0) = 1$ and hence $\mathbb{P}_\mu(\tau \geq M_{\mathrm{bad}} > 0) = 1$. Hence condition (1) of Lemma 4.2 is satisfied. Next we show that $\mathbb{E}_\mu[\tau] < \infty$, which will imply that condition (2) is also satisfied. Recalling that $\tau = \inf\{t \geq M_{\mathrm{bad}} : X_t = Y_t\}$, we bound $\mathbb{E}_\mu[\tau]$ by the expected time until X and Z meet, plus the expected additional time until X and Y meet afterwards. The first expectation is at most the meeting time of X and Z, which is independent of Y. Lemma 3.1 implies that, conditionally on Z, this is at most $11 t^*_{\mathrm{hit}}$ almost surely, so $\mathbb{E}_{(x,y,z)}\left[M_{\mathrm{bad}}\right] \leq 11 t^*_{\mathrm{hit}}$. Similarly, the remaining time is the meeting time of X and the independent trajectory Y, and $\mathbb{E}_{(x',y',z')}\left[M_{\mathrm{good}}\right] \leq 11 t^*_{\mathrm{hit}}$. Note now that by definition µ is supported on the set $\{(x, x, z) : x \neq z\}$. One can also check that µ is invariant under the action of any automorphism ϕ of P, i.e. $\mu(x, y, z) = \mu(\varphi(x), \varphi(y), \varphi(z))$. This together with transitivity gives that the marginal of µ on the first two coordinates is uniform on the diagonal set $\Delta = \{(x, x) \in \Omega^2 : x \in \Omega\}$.
Similarly, the law of $(X_\tau, Y_\tau)$ is invariant under the action of any automorphism ϕ. Using transitivity again we obtain that $(X_\tau, Y_\tau)$ is uniform on ∆. Therefore, condition (3) is also satisfied.
We can now apply Lemma 4.2, and the desired inequality then follows from (4.15). This concludes the proof.

Non transitive chains
The goal of this section is to prove Theorem 1.4. Throughout the section we fix ε > 0 and let C = 6/ε. In what follows K r is the complete graph on r ∈ N \ {0} vertices.
For $n \in \mathbb{N}$ construct a graph $G_n$ as follows: begin from a clique $K_{n+1}$ and n disjoint copies of $K_k$ with $k = \sqrt{Cn}$. Fix a vertex $v \in K_{n+1}$ and add exactly one edge from v to each copy of $K_k$. See Figure 1 for a depiction of the graph.
Let Ω be the vertex set of G n . We call Down the set of vertices belonging to K n+1 and Up = Ω\Down the rest.
Let P be the transition matrix of simple random walk over $G_n$ and π its stationary distribution. Let X, Y and Z be independent random walks starting from π with transition matrix P and speeds $\lambda_X = 1$, $\lambda_Y = 0$ and $\lambda_Z = 1$.
The idea is that by choosing ε sufficiently small, the stationary measure of Down becomes arbitrarily small. So if we start X, Y and Z according to π, then it is very likely they will all start from different cliques in Up. Let T be the √ n-th time that X visits the vertex v. We will show that as n → ∞ the probability that X and Z collide after time T is arbitrarily small. Moreover, we will show that the probability that X and Y collide before T is arbitrarily small as n → ∞. Combining these two assertions will complete the proof.
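For simple random walk, π is proportional to degree, so $\pi(\mathrm{Down})$ follows from counting degrees. The sketch below is our own arithmetic (the choices $n = 10^5$ and $\varepsilon = 0.1$ are illustrative, and the integer rounding of k is ignored in the text); it confirms that $\pi(\mathrm{Down}) \approx 1/(1+C)$, which is small for small ε since $C = 6/\varepsilon$.

```python
import math

def pi_down(n, eps):
    # pi(Down) for simple random walk on G_n, computed from degree counts
    C = 6 / eps
    k = math.isqrt(round(C * n))            # clique size k ~ sqrt(C n)
    down_deg = (n + 1) * n + n              # K_{n+1} degrees, plus v's n bridge edges
    up_deg = n * (k * (k - 1) + 1)          # each K_k, plus one bridge endpoint per clique
    return down_deg / (down_deg + up_deg)

eps = 0.1
C = 6 / eps
val = pi_down(10**5, eps)
print(val, 1 / (1 + C))
```

Consequently three independent stationary starting points are very likely to fall in three different cliques of Up.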
Lemma 5.1. There exists α = α(C) > 0 independent of n such that for all x, z ∈ Ω and all r ≥ 1 we have Proof. First note that by the strong Markov property we have for all r ≥ 1 Using the strong Markov property again, for r ≥ 1 we obtain By induction for all r ≥ 1 this yields So we complete the proof by showing that for a positive constant α depending only on C.
Let $\tau = \inf\{t \geq 0 : Z_t \in \mathrm{Down} \setminus \{v\}\}$ and fix $w \in \mathrm{Down} \setminus \{v\}$. By symmetry, for all z we then have the corresponding bound, where the factor 1/2 corresponds to the probability that the first time X jumps it goes to $\mathrm{Down} \setminus \{v\}$.
It remains to show that for a positive constant c 1 we have If z ∈ Down \ {v}, then this probability is 1 and if z = v it is easily seen to be at least 1/4. So we assume that z ∈ Up. Let x be the unique neighbour of v lying in the same clique as z. Then the time τ can be expressed as τ = T z,x + T x,v + T v,Down\{v} , where the time T r,S stands for the first hitting time of S starting from r. Using this, it is then not hard to see that there exists a positive constant c such that uniformly over all z ∈ Up we have E is an exponential random variable with mean n. By Markov's inequality we obtain Note that this bound does not depend on z. Since k = √ Cn and E[τ ] ≤ ck 2 the bound in (5.2) follows.
Proof of Theorem 1.4. We show that for n sufficiently large, the graph G = G n satisfies the claim of the theorem.
It is not hard to verify that for n large enough, in $G_n$ we have $\pi(\mathrm{Down}) \leq 2/C$, while each clique of Up has stationary measure $O(1/n)$. Let A be the set of pairs (x, y) such that $y \in \mathrm{Up}$ and x is not in the same clique as y. Then let $E = \{(X_0, Y_0) \in A\}$. By the preceding bound, $\mathbb{P}(E^c) \leq 3/C = \varepsilon/2$ for n large enough. We then have
$$\mathbb{P}\left(M_{\mathrm{good}} \leq M_{\mathrm{bad}}\right) \leq \mathbb{P}(E^c) + \max_{(x,y)\in A,\ z \in \Omega} \mathbb{P}_{x,y,z}\left(M_{X,Y} \leq M_{X,Z}\right).$$
Therefore, it suffices to upper bound the last probability appearing above. Fix any $z \in \Omega$ and $(x, y) \in A$. For r to be determined later we have
$$\mathbb{P}_{x,y,z}\left(M_{X,Y} \leq M_{X,Z}\right) \leq \mathbb{P}_{x,y,z}\left(M_{X,Y} \leq \tau^{(r)}_v\right) + \mathbb{P}_{x,y,z}\left(\tau^{(r)}_v \leq M_{X,Z}\right),$$
where $\tau^{(r)}_v$ denotes the time of the r-th visit of X to v. Because Y is not moving, we have $M_{X,Y} = \tau_y$, where $y = Y_0$. Since x, y are not in the same clique of Up, if $\tau_y \leq \tau^{(r)}_v$, then there exists $1 \leq i \leq r$ such that X hits y between its (i−1)-th and i-th visits to v. By the strong Markov property and a union bound we obtain
$$\mathbb{P}_{x,y,z}\left(\tau_y \leq \tau^{(r)}_v\right) \leq \frac{r}{2n},$$
since, when $X_0 = v$, in order to hit $y \in \mathrm{Up}$ before returning to v, the first time X moves it must jump into the clique that contains y.
Taking $r = \sqrt{n}$, or any other function of n that goes to infinity slower than n, gives that $\mathbb{P}_{x,y,z}\left(M_{X,Y} \leq M_{X,Z}\right) \to 0$ as $n \to \infty$.
We conclude from (5.3) that $\mathbb{P}\left(M_{\mathrm{good}} \leq M_{\mathrm{bad}}\right) < \varepsilon$, and this finishes the proof.
Sharpness of Conjecture 1.5

In this section we describe the example pointed out by Alexander Holroyd, mentioned in the Introduction, of a family of transitive chains for which $\mathbb{P}\left(M_{\mathrm{good}} \leq M_{\mathrm{bad}}\right) \leq 1/3 + \delta$. In what follows we take $\lambda_X = \lambda_Y = 1$ and $\lambda_Z = 0$.
To construct the example, fix $\varepsilon \in (0,1)$ and consider the chain with state space $\{0,1\}^n$ in which the j'th coordinate changes value (from 0 to 1 or vice-versa) at rate $q_j = \varepsilon^{j-1}(1-\varepsilon)/(1-\varepsilon^n)$; note that $\sum_{i=1}^n q_i = 1$. The idea is that for small ε, earlier coordinates change state much more quickly than later coordinates, so the primary obstacle to both meeting and hitting is simply the largest coordinate in which the values differ. For $u, v \in \{0,1\}^n$, let $k(u, v) = \max\{i : u_i \neq v_i\}$, or $k(u, v) = 0$ if $u = v$.
It thus remains to prove the preceding claim.
Fix $x, y, z \in \{0,1\}^n$ with $k(x, y) > \min(k(x, z), k(y, z))$, and assume by symmetry that $k(x, z) < k(x, y)$. For $1 \leq k \leq n$, let $\tau_k = \min\{t : X^{(i)}_t = z_i \text{ for } 1 \leq i \leq k\}$ be the first time that $X_t$ and z agree in the first k coordinates. It is convenient to set $\tau_0 = 0$. Also, let $\sigma^X_k = \min\{t : \exists i \geq k,\ X^{(i)}_t \neq X^{(i)}_0\}$ be the first time one of the last $n - k + 1$ coordinates of X changes, and define $\sigma^Y_k$ accordingly. We will show that for all $1 \leq k < n$,
$$\mathbb{P}\left(\sigma^X_{k+1} \leq \tau_k\right) \leq k\,\varepsilon. \tag{6.1}$$
Observe that on the event $\{\tau_{k(x,z)} < \sigma^X_{k(x,z)+1}\}$, the coordinates of X above $k(x,z)$ do not change before time $\tau_{k(x,z)}$ and already agree with the corresponding coordinates of z, so $M_{X,Z} = \tau_{k(x,z)}$. Similarly, if $\tau_{k(x,z)} < \sigma^X_{k(x,z)+1}$ and $\tau_{k(x,z)} < \sigma^Y_{k(x,z)+1}$, then $M_{\mathrm{bad}} < M_{\mathrm{good}}$. It then follows, using (6.1) and the subsequent observation, that
$$\mathbb{P}_{x,y,z}\left(M_{\mathrm{good}} < M_{\mathrm{bad}}\right) \leq \mathbb{P}\left(\sigma^X_{k(x,z)+1} \leq \tau_{k(x,z)}\right) + \mathbb{P}\left(\sigma^Y_{k(x,z)+1} \leq \tau_{k(x,z)}\right) \leq 2k(x,z)\,\varepsilon < 2n\varepsilon,$$
as claimed. It thus remains to prove (6.1). In what follows we write $\sigma_k = \sigma^X_k$. Fix $1 \leq k < n$, and note that $\sigma_k$ is exponential with rate $\sum_{j=k}^n q_j = \varepsilon^{k-1}(1-\varepsilon^{n+1-k})/(1-\varepsilon^n)$. Furthermore, $\sigma_k < \sigma_{k+1}$ precisely if the k'th coordinate of X changes before any larger coordinate. It follows that $\mathbb{P}\left(\sigma_k < \sigma_{k+1}\right) = q_k / \sum_{j=k}^n q_j$.