Exponential Approximation for the Nearly Critical Galton-Watson Process and Occupation Times of Markov Chains

In this article we provide new applications of exponential approximation using the framework of Peköz and Röllin (2011), which is based on Stein's method. We give error bounds for the nearly critical Galton-Watson process conditioned on non-extinction, and for the occupation times of Markov chains; for the latter, in particular, we give a new exponential approximation rate for the number of revisits to the origin for a general two-dimensional random walk, a result also known as the Erdős-Taylor theorem.


INTRODUCTION
A new framework for estimating the error of the exponential approximation was recently developed in Peköz and Röllin (2011), where it was applied to geometric sums, Markov chain hitting times, and the critical Galton-Watson process conditioned on non-extinction. In this article we provide some generalizations of the approach of Peköz and Röllin (2011) and apply them to study Markov chain occupation times and a result of Erdős and Taylor (1960) on the number of visits to the origin by a two-dimensional random walk, as well as to obtain a rate of convergence for the result of Fahady, Quine, and Vere-Jones (1971) on the nearly critical Galton-Watson branching process conditioned on non-extinction.
The main result in Peköz and Röllin (2011) that we use is based on Stein's method (see e.g. Ross and Peköz (2007) for an introduction) and can be thought of as formalizing the intuitive notion that a random variable X has approximately an exponential distribution if X and X^e are close in distribution, where X^e has the equilibrium distribution with respect to X, characterized by

P[X^e ≤ x] = (1/EX) ∫_0^x P[X > t] dt.    (1.1)

The equilibrium distribution appears in renewal theory as the time until the next renewal starting from steady state. A renewal process with exponential inter-renewal times has the exponential distribution as its equilibrium distribution, so the above intuition is not surprising. Peköz and Röllin (2011) give bounds on the accuracy of the exponential approximation in terms of how closely X and X^e can be coupled together on the same probability space; one version of the result we will use below can be written as

d_W(L(W), Exp(1)) ≤ 2 E|W − W^e|,

for a nonnegative W with EW = 1. Some heuristics for Stein's method can be understood using size-biased random variables. For a nonnegative continuous random variable X with probability density function f(x), the size-biased random variable X^s has density x f(x)/EX. The size of the renewal interval containing a randomly chosen point, as well as the number of children in the family of a randomly chosen child, are examples of size-biased random variables; see Brown (2006) and Arratia and Goldstein (2010) for surveys and applications of size biasing.
Stein's method for the exponential distribution, as well as for some other nonnegative distributions, can be viewed in terms of size-biasing. For the Poisson approximation to some random variable X, the Stein-Chen method (see Barbour, Holst, and Janson (1992)) gives a bound on the error in terms of how closely X and X^s − 1 can be coupled together on the same probability space; these two have exactly the same distribution when X has a Poisson distribution. For approximation by a binomial distribution (see Peköz, Röllin, Čekanavičius, and Shwartz (2009)), we can obtain a bound in terms of how closely X^s − 1 and n − (n − X)^s can be coupled; both of these have exactly the same distribution if X is binomial with parameters n and p. For the exponential distribution, we can obtain a bound on the error in terms of how closely X and U X^s can be coupled, where U is a uniform (0,1) random variable independent of all else; X^e has the same distribution as U X^s. This last approach is the one we use below for the nearly critical Galton-Watson process conditioned on non-extinction.
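The identity L(X^e) = L(U X^s) is easy to check by simulation. The following sketch (plain Python, not from the paper) uses the fact that for X ~ Exp(1) the size-biased variable X^s has the Gamma(2,1) density x e^{−x}; multiplying by an independent uniform should return an Exp(1) variable, illustrating that the exponential law is the fixed point of the map X ↦ U X^s.

```python
import math
import random

random.seed(1)

N = 200_000
# X ~ Exp(1): its size-biased version X^s has density x * e^(-x),
# i.e. Gamma(2, 1), which we sample as a sum of two Exp(1) variables.
samples = []
for _ in range(N):
    xs = random.expovariate(1.0) + random.expovariate(1.0)  # X^s ~ Gamma(2,1)
    u = random.random()                                     # U ~ Uniform(0,1)
    samples.append(u * xs)                                  # U*X^s =d X^e

samples.sort()
# Approximate Kolmogorov distance between the empirical CDF of U*X^s
# and the Exp(1) CDF 1 - e^(-x).
d_k = max(abs((i + 1) / N - (1.0 - math.exp(-x)))
          for i, x in enumerate(samples))
print(f"empirical d_K to Exp(1): {d_k:.4f}")
```

With the fixed seed the empirical distance is at the Monte Carlo noise level, as the fixed-point property predicts.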
The organization of this article is as follows. In Section 2 we give the notation, background and preliminaries. In Section 3 we consider the setting of a nearly critical Galton-Watson branching process conditioned on non-extinction. In Section 4 we study general dependent sums, occupation times for Markov chains and the number of times the origin is revisited by a general two-dimensional random walk.

PRELIMINARIES
We first define the probability metrics we use below. For two probability distributions F and G, define the Kolmogorov metric

d_K(F, G) = sup_{x ∈ R} |F(x) − G(x)|.

If both distributions have finite expectation, define the Wasserstein metric

d_W(F, G) = ∫ |F(x) − G(x)| dx.

We can relate the two metrics through d_K(P, Exp(1)) ≤ 1.74 √(d_W(P, Exp(1))); see e.g. Gibbs and Su (2002).
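As an illustration (not from the paper), both metrics can be evaluated numerically for a geometric sum scaled to mean 1: if N is geometric(p) on {1, 2, …} then W = pN has distribution function F_W(x) = 1 − (1−p)^⌊x/p⌋, and both distances to Exp(1) shrink as p → 0. The grid-based evaluation below is a rough approximation of the supremum and the integral.

```python
import math

def dk_dw_vs_exp(p, grid_step=1e-3, x_max=20.0):
    """Approximate Kolmogorov and Wasserstein distances between
    W = p*N with N ~ Geometric(p) on {1,2,...} and Exp(1)."""
    dk, dw, x = 0.0, 0.0, 0.0
    while x < x_max:
        # F_W(x) = P(N <= x/p) = 1 - (1-p)^floor(x/p)
        f_w = 1.0 - (1.0 - p) ** math.floor(x / p)
        f_e = 1.0 - math.exp(-x)          # Exp(1) CDF
        diff = abs(f_w - f_e)
        dk = max(dk, diff)                # sup of |F - G| on the grid
        dw += diff * grid_step            # integral of |F - G| on the grid
        x += grid_step
    return dk, dw

for p in (0.2, 0.05, 0.01):
    dk, dw = dk_dw_vs_exp(p)
    print(f"p={p:<5} d_K~{dk:.4f}  d_W~{dw:.4f}")
```

Both distances are of order p here, consistent with the exponential limit of rescaled geometric sums.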
Central to the approach in Peköz and Röllin (2011) is the equilibrium distribution from renewal theory, and we next give the definition we use.
Definition 2.1. Let X be a non-negative random variable with finite mean. We say that a random variable X^e has the equilibrium distribution w.r.t. X if for all Lipschitz-continuous f,

E f(X) − f(0) = EX E f′(X^e).    (2.1)

It is straightforward that this implies (1.1). Indeed, for nonnegative X having finite first moment, define the distribution function

F_e(x) = (1/EX) ∫_0^x P[X > t] dt,

so that F_e is the distribution function of X^e, and our definition via (2.1) is consistent with that from renewal theory.
The size biased distribution will also be used below. We define it as follows.
Definition 2.2. Let X be a non-negative random variable with finite mean. We say that a random variable X^s has the size-biased distribution w.r.t. X if for all bounded f,

E X f(X) = EX E f(X^s).    (2.2)

It may be helpful in what follows to note that this definition with f(x) = x^n immediately gives E(X^s)^n = EX^{n+1}/EX. We next present the key result from Peköz and Röllin (2011) that we will use in the applications that follow.
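The moment identity E(X^s)^n = EX^{n+1}/EX can be checked exactly for a small discrete distribution; the three-point law below is an arbitrary choice for illustration, with the arithmetic done in exact rationals.

```python
from fractions import Fraction

# An arbitrary non-negative three-point distribution for illustration.
dist = {0: Fraction(1, 2), 1: Fraction(1, 4), 3: Fraction(1, 4)}

mean = sum(x * p for x, p in dist.items())          # E[X] = 1
# Size-biased distribution: P(X^s = x) = x * P(X = x) / E[X]
# (the atom at 0 disappears under size-biasing).
sb = {x: x * p / mean for x, p in dist.items() if x > 0}
assert sum(sb.values()) == 1

for n in range(1, 4):
    lhs = sum(x**n * p for x, p in sb.items())               # E[(X^s)^n]
    rhs = sum(x**(n + 1) * p for x, p in dist.items()) / mean  # E[X^(n+1)]/E[X]
    print(f"n={n}: E[(X^s)^n] = {lhs} = E[X^(n+1)]/E[X] = {rhs}")
    assert lhs == rhs
```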
Theorem 2.1 (Peköz and Röllin (2011), Theorem 2.1). Let W be a non-negative random variable with EW = 1 and let W^e have the equilibrium distribution w.r.t. W. Then, for any β > 0,

d_K(L(W), Exp(1)) ≤ 12β + 2 P[|W − W^e| > β],    (2.3)

and, if in addition W has finite second moment,

d_W(L(W), Exp(1)) ≤ 2 E|W − W^e|.    (2.4)

THE NEARLY CRITICAL GALTON-WATSON BRANCHING PROCESS
Consider the Galton-Watson branching process starting from a single particle in generation zero, where each particle has an independent and identically distributed number of children according to some distribution having mean m; let Z_n be the size of the nth generation. For the critical case, where m = 1, and when E Z_1^2 < ∞ and P[Z_1 = 0] > 0, it was shown by Yaglom (1947) that the conditional distribution of Z_n/n given Z_n > 0 converges as n → ∞ to an exponential distribution. A corresponding rate of convergence was first proved, under the additional condition E Z_1^3 < ∞, by Peköz and Röllin (2011). In the super- and sub-critical cases, respectively when m > 1 and m < 1, the limiting distributions are very difficult to calculate and are known explicitly only in very special cases; see e.g. Bingham (1988). Fahady, Quine, and Vere-Jones (1971), however, were able to show that the limiting distribution of a nearly critical branching process conditioned on non-extinction converges to the exponential distribution as m → 1 over general classes of offspring distributions. The following theorem gives explicit error bounds for the exponential approximation for any finite n and any m ≠ 1. To avoid trivial cases, we make a general non-degeneracy assumption on the offspring distribution.

Theorem 3.1. Consider a Galton-Watson branching process starting from a single particle at time zero, and let Z_n be the size of the nth generation.

It seems difficult to directly deduce rates of convergence from (3.3). The following estimates are more useful (a proof is given in the Appendix).
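Yaglom's critical-case limit can be illustrated by simulation. The sketch below (not from the paper) uses geometric(1/2) offspring on {0, 1, 2, …}, a linear-fractional law with mean 1 for which the standard generating-function computation gives P[Z_n > 0] = 1/(n+1) exactly; hence Z_n/(n+1) given survival should be approximately Exp(1).

```python
import math
import random

random.seed(7)

def geom_offspring():
    """Geometric(1/2) offspring on {0,1,2,...}: mean 1 (critical case)."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def generation_size(n):
    """Size Z_n of generation n of a Galton-Watson tree (0 if extinct)."""
    z = 1
    for _ in range(n):
        z = sum(geom_offspring() for _ in range(z))
        if z == 0:
            return 0
    return z

n, reps = 30, 20_000
survivors = [z for z in (generation_size(n) for _ in range(reps)) if z > 0]

# For this offspring law P[Z_n > 0] = 1/(n+1), so E[Z_n | Z_n > 0] = n+1;
# Yaglom: Z_n/(n+1) given survival is approximately Exp(1).
w = sorted(z / (n + 1) for z in survivors)
m = len(w)
d_k = max(abs((i + 1) / m - (1 - math.exp(-x))) for i, x in enumerate(w))
print(f"{m} surviving trees; empirical d_K to Exp(1): {d_k:.3f}")
```

The residual distance reflects both Monte Carlo noise and the O(1/n)-type discreteness of Z_n/(n+1).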
Lemma 3.2. For any n ≥ 1 and m > 1, and, for any n ≥ 2 and 1/2 ≤ m < 1, (3.5)

Fahady, Quine, and Vere-Jones (1971) showed that within each such class the limiting distribution of the conditioned Galton-Watson branching process converges to the exponential as m → 1. While retaining (A), it is not too difficult to see that Condition (B) is equivalent to for some b > 0 (it is easy to see that (B′) implies (B); a proof of the reverse is given in the Appendix). Hence, it is clear that under these assumptions the constant C in (3.1) remains bounded as m → 1, and hence Theorem 3.1 and Lemma 3.2 give explicit bounds under the conditions of Fahady, Quine, and Vere-Jones (1971). Thanks to our explicit bounds, we can furthermore weaken the assumptions on the offspring distributions, in the sense that the third moment of Z_1 may grow and P[Z_1 ≥ 2] → 0, as long as

Proof of Theorem 3.1. With some modifications, we follow the line of argument from Peköz and Röllin (2011), which is based on the size-biased branching tree of Lyons, Pemantle, and Peres (1995).
We assume that the particles in the tree are labeled and ordered. That is, if w and v are two particles in the same generation, then all offspring of w are to the left of the offspring of v, whenever w is to the left of v. We start in generation 0 with one particle v 0 and let it have a size-biased number of offspring. Then we pick one of the offspring of v 0 uniformly at random and label it v 1 . For each of the siblings (the other offspring from the same parent) of v 1 we continue with an independent Galton-Watson branching process with the original offspring distribution. For v 1 we proceed as we did for v 0 , i.e., we give it a size-biased number of offspring, pick one uniformly at random, label it v 2 , and so on.
Denote by S_n the total number of particles in generation n. Denote by L_n and R_n, respectively, the number of particles to the left (excluding v_n) and to the right (including v_n) of v_n. Denote by S_{n,j} the number of particles in generation n that stem from any of the siblings of v_j (but not from v_j itself). Likewise, let L_{n,j} and R_{n,j}, respectively, be the number of particles in generation n that stem from the siblings to the left and to the right, respectively, of v_j. We have the relations L_n = Σ_{j=1}^n L_{n,j} and R_n = 1 + Σ_{j=1}^n R_{n,j}.
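The spine construction can be coded directly. The sketch below (an illustration, not from the paper) again assumes geometric(1/2) offspring; for this law one checks from the pmf that the size-biased offspring variable has the law of G_1 + G_2 + 1 for independent geometric(1/2) variables G_i. Since S_n has the size-biased distribution of Z_n, its mean should be E Z_n^2/E Z_n = 2n + 1 in this critical case (Var Z_n = 2n, E Z_n = 1).

```python
import random

random.seed(11)

def geom():
    """Geometric(1/2) on {0,1,...}: critical offspring law (mean 1, var 2)."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def sb_geom():
    """Size-biased offspring: for this law X^s =d G1 + G2 + 1, since
    P[G1+G2+1 = k] = k*(1/2)^(k+1) = k*P[X=k]/E[X]."""
    return geom() + geom() + 1

def ordinary_descendants(generations):
    """Generation-`generations` descendants of one ordinary particle."""
    z = 1
    for _ in range(generations):
        z = sum(geom() for _ in range(z))
        if z == 0:
            return 0
    return z

def spine_tree_generation(n):
    """S_n: size of generation n in the size-biased tree with a spine."""
    total = 1  # the spine particle v_n itself
    for j in range(n):
        siblings = sb_geom() - 1  # children of v_j other than v_{j+1}
        for _ in range(siblings):
            # each sibling sits in generation j+1 and runs an ordinary
            # Galton-Watson process for the remaining n-j-1 generations
            total += ordinary_descendants(n - j - 1)
    return total

n, reps = 10, 20_000
avg = sum(spine_tree_generation(n) for _ in range(reps)) / reps
print(f"mean of S_n ~ {avg:.2f}; size-biasing predicts E Z_n^2/E Z_n = {2*n+1}")
```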
Next let R̃_{n,j} be independent random variables such that L(R̃_{n,j}) = L(R_{n,j} | L_{n,j} = 0), and, with A_{n,j} = {L_{n,j} = 0}, define

R*_{n,j} = R_{n,j} I_{A_{n,j}} + R̃_{n,j} I_{A^c_{n,j}} = R_{n,j} + (R̃_{n,j} − R_{n,j}) I_{A^c_{n,j}}.
Define also R*_n = 1 + Σ_{j=1}^n R*_{n,j}. Below are a few facts that we will subsequently use in the proof of the theorem. In what follows, let σ² = Var Z_1 and γ = E Z_1^3.
(i) The size-biased distribution of L(X) is the same as that of L(X | X > 0); (ii) S_n has the size-biased distribution of L(Z_n); (iii) v_n is uniformly distributed among the particles of generation n; these facts are from Peköz and Röllin (2011). Using independence,

E{R̃_{n,j} I_{A^c_{n,j}}} = E R̃_{n,j} P[A^c_{n,j}] ≤ E S_{n,j} P[A^c_{n,j}] ≤ m^{n−j} σ² P[A^c_{n,j}],
which proves (v). If X_j denotes the number of siblings of v_j, having the size-biased distribution of L(Z_1) minus 1, we have which proves (vii). Finally, using the Corollary on page 356 of Fujimagari (1980), we have which is (viii) (note that the cited result is for bounded offspring distributions, but it easily extends to the unbounded case).
Set W = λR*_n and note that, due to (iv), L(W) = L(Z_n | Z_n > 0). Due to (i) and (ii), S_n has the size-biased distribution with respect to R*_n. Let U be a uniform random variable on [0, 1], independent of all else. Note that, if Y is a random variable uniformly distributed on the integers {1, …, n}, then Y − U is continuous and uniformly distributed on [0, n]. Observing that, given S_n, R_n has uniform distribution on {1, …, S_n} because of (iii), we therefore deduce that R_n − U has uniform distribution on [0, S_n]. Hence L(R_n − U) = L(U S_n), which implies that we can set W^e = λ(R_n − U). Applying (2.3) and using (v)-(vii), we obtain and, using we obtain

E|W − W^e| ≤ λ/2 + λ E|R*_n − R_n| ≤ Cη(m, n),

which proves (3.2).
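The step "if Y is uniform on {1, …, n} and U is an independent uniform on [0, 1], then Y − U is uniform on [0, n]" used in the proof can be verified by a quick simulation (illustration only):

```python
import random

random.seed(3)

n, reps = 10, 200_000
# Y uniform on {1,...,n}, U uniform on (0,1); Y - U should be
# continuously uniform on [0, n], i.e. P[Y - U <= x] = x/n.
vals = sorted(random.randint(1, n) - random.random() for _ in range(reps))

# Approximate Kolmogorov distance to the Uniform[0, n] CDF x/n.
d_k = max(abs((i + 1) / reps - x / n) for i, x in enumerate(vals))
print(f"empirical d_K to Uniform[0,{n}]: {d_k:.4f}")
```

Intuitively, conditioning on Y = k makes Y − U uniform on (k − 1, k), and these intervals tile [0, n] with equal weight 1/n.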

VISITS TO THE ORIGIN FOR A TWO DIMENSIONAL SIMPLE RANDOM WALK
Exponential approximation results for sums of nonnegative random variables X_1, X_2, …, X_n satisfying the condition Var(E(X_i | X_1, …, X_{i−1})) = 0 for all i were given in Peköz and Röllin (2011, Theorem 3.1), but not for more general dependent sums. Here we give a construction of the equilibrium distribution for sums of arbitrarily dependent nonnegative random variables having finite means, apply it to occupation times for Markov chains, and then illustrate it by obtaining a new exponential approximation rate for the number of times a general irreducible aperiodic two-dimensional integer-valued random walk revisits the origin.
Theorem 4.1. Let W = λ Σ_{i=1}^n X_i, where X_1, X_2, …, X_n are (possibly dependent) nonnegative random variables and λ = 1/E Σ_{i=1}^n X_i. Suppose, for each i and each x, W_i(x) is a random variable such that For each i, let X^s_i be a random variable having the size-biased distribution of X_i. Let I be independent of all else with P[I = i] = λ EX_i, and let U be a uniform random variable on (0, 1), independent of all else. Then W_I(X^s_I) + λ U X^s_I has the equilibrium distribution with respect to W. In particular, if X_i ∈ {0, 1} for all i, we have X^s_i = 1 and hence W_I(1) + λU has the equilibrium distribution with respect to W.
Proof. Let S_m = λ Σ_{i=1}^m X_i. By first conditioning on I and U, and using (2.2) and L(S_i) = L(W_i(X_i)) for the third equality, we obtain

Remark 4.1. The argument goes through in the same way when instead we define

We next apply the above result to Markov chain occupation times. Our next result gives a bound on the error of the exponential approximation for the number of times a Markov chain revisits its starting state. More general asymptotic results of this type, but without explicit bounds on the error, go back to Darling and Kac (1957).

We next consider a general aperiodic irreducible random walk on the two-dimensional integer lattice started at the origin. As a consequence of Lawler and Limic (2010, p. 24) we have the following lemma. We are now able to give a bound on the error of the exponential approximation for the number of times the random walk revisits the origin. This type of result, for simple random walk, goes back to Erdős and Taylor (1960). for all n.
Proof. Let X_n = I{Z_n = 0} be the indicator of the event that the random walk revisits the origin at time n. Lemma 4.3 gives λ ≤ C/log n, and thus the result follows from Corollary 4.2 and, where C may be different (but independent of n) in each instance used,

Remark 4.2. The result for the two-dimensional simple random walk for fixed a and b follows from Erdős and Taylor (1960, Eq. (3.10)), so the above corollary can be viewed as a complement and an extension. Using the method of moments, Gärtner and Sun (2009, Theorem 1.1) give an argument for the analogous exponential limit theorem for general random walks, but without a rate of convergence.
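The Erdős-Taylor approximation can be illustrated numerically (a Monte Carlo sketch, not part of the proof). Consistent with the C/log n rate, the approximation at moderate n is only moderately accurate: the visit count has an atom at 0 of probability of order 1/log n, which dominates the Kolmogorov distance to the exponential.

```python
import math
import random

random.seed(5)

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def visits_to_origin(n):
    """Number of returns to the origin of a 2D simple random walk in n steps."""
    x = y = 0
    count = 0
    for _ in range(n):
        dx, dy = random.choice(STEPS)
        x += dx
        y += dy
        if x == 0 and y == 0:
            count += 1
    return count

n, reps = 4_000, 1_000
counts = [visits_to_origin(n) for _ in range(reps)]
mean = sum(counts) / reps  # of order log(n)/pi for large n

# Normalize to mean 1 and compare with Exp(1), the Erdos-Taylor limit;
# the atom at 0 keeps d_K of order 1/log n.
w = sorted(c / mean for c in counts)
d_k = max(abs((i + 1) / reps - (1 - math.exp(-x))) for i, x in enumerate(w))
print(f"mean visits {mean:.2f} (log(n)/pi ~ {math.log(n)/math.pi:.2f}); "
      f"d_K ~ {d_k:.2f}")
```

The slow logarithmic improvement in n is exactly what the explicit rate in the corollary above quantifies.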
APPENDIX A. PROOF OF LEMMA 3.2

We first need some simple estimates.
Lemma A.1. Let a, b and c be real numbers, strictly greater than 1, such that

Proof. It is clear from the monotonicity of the logarithm function that for x, y > 0 we have

Rewriting this inequality with x = 1/b and y = 1/c, we have

1 + log(bc/(b + c)) ≤ bc/(b + c).

Noting that (1 + log(a))/a is a decreasing function for a ≥ 1 and noting that a

Let f be a non-negative function on [a, b] for two integers a and b. If f is either increasing, decreasing, or has exactly one minimum, a simple geometric argument yields that

(this estimate is not optimal if the function is increasing, but we want to avoid further case distinctions). Furthermore, recalling that m − 1 ≤ m log(m),

Finally,

The last estimate is due to the fact that log(x)/x is clearly bounded by (1 + log(x))/x for x > 1, and the latter is a decreasing function, and then by applying (A.3). Putting the estimates for r_1 through r_4 together proves (3.4).
Proof of Lemma 3.2 for m < 1. Note first that

Putting all the estimates together proves (3.5).
To do this we need to find a vector of probabilities p_0, p_1, …, p_n that maximizes

This is a linear programming problem with n + 1 variables and n + 4 constraints. The constraints define a simplex, and the fundamental theorem of linear programming tells us the maximum is achieved at a corner point of the simplex where there are n + 1 binding constraints; this means at most three of the variables p_k can be non-zero at the maximum. As we assume p_0 > 0, we have therefore reduced the problem to considering only three-point distributions, where one of the three points is at 0.
We consider first the case where neither of the two remaining points is at 1, so we are now trying to find x, y and p, q maximizing x(x − 1)p + y(y − 1)q subject to the constraints x³p + y³q ≤ a, p + q ≤ 1, p, q ≥ 0, and x, y ≥ 2.