Perfect simulation of Vervaat perpetuities

We use coupling into and from the past to sample perfectly in a simple and provably fast fashion from the Vervaat family of perpetuities. The family includes the Dickman distribution, which arises both in number theory and in the analysis of the Quickselect algorithm, which was the motivation for our work.


Introduction, background, and motivation
1.1. Perpetuities in general. Define a perpetuity to be a random variable Y such that

Y = W_1 + W_1 W_2 + W_1 W_2 W_3 + · · · (1.1)

for some sequence W_1, W_2, W_3, . . . of independent and identically distributed random variables distributed as W. Throughout this paper we assume W ≥ 0 (a.s.) and E W < 1, since these simplifying assumptions are met by the Vervaat perpetuities, which are discussed in Section 1.2 and are the focus of our work. [Some authors define a perpetuity as in (1.1) but with 1 added to the right-hand side.] The distribution of such a random variable Y is also referred to as a perpetuity. General background on perpetuities is provided in the first paragraph of Devroye [1, Section 1]. To avoid repetition, we refer the reader to that paper, which also cites literature about perpetuities in general and about approximate sampling algorithms.

Within the general framework of (1.1), the following simple observations (with =_L denoting equality in law, or distribution) can be made:

(i) The random variable Y ≥ 0 is finite almost surely; indeed, its expected value is finite:

E Y = Σ_{n=1}^∞ (E W)^n = (E W)/(1 − E W) < ∞.

(ii) The perpetuity satisfies the distributional fixed-point equation

Y =_L W(1 + Y), (1.2)

where, on the right, W and Y are independent. [In fact, this fixed-point equation characterizes the distribution (1.1).]

(iii) If W has a density f_W, then Y has a density f_Y satisfying the integral equation

f_Y(y) = ∫ w^{−1} f_W(w) f_Y(y/w − 1) dw.

1.2. Quickselect, the Dickman distribution, and the Vervaat family of perpetuities. Our interest in perpetuities originated with study of the running time of the Quickselect algorithm (also known as Find), due to Hoare [4]. Quickselect(n, m) is a recursive algorithm to find the item of rank m ≥ 1 (say, from the bottom) in a list of n ≥ m distinct numbers (called "keys"). First, a "pivot" is chosen uniformly at random from among the n keys and every key is compared to it, thereby determining its rank (say, j) and separating the other keys into two groups. If j = m, then Quickselect(n, m) returns the pivot.
If j > m, then Quickselect(j − 1, m) is applied to the keys smaller than the pivot. If j < m, then Quickselect(n − j, m − j) is applied to the keys larger than the pivot. Let C(n, m) denote the (random) number of key comparisons required by the call Quickselect(n, m), and write 1(A) to mean the indicator that has value 1 if the Boolean expression A is true and 0 otherwise. Let J_n denote the rank of the first pivot chosen. Immediately from the description of the algorithm we find the distributional recurrence relation

C(n, m) =_L (n − 1) + 1(J_n > m) C(J_n − 1, m) + 1(J_n < m) C*(n − J_n, m − J_n), (1.3)

where, on the right, (i) for each fixed r and s the random variable C(r, s) is distributed as the number of key comparisons required by Quickselect(r, s), the joint distribution of such random variables being irrelevant; (ii) similarly for C*(r, s); (iii) the collection of random variables C(r, s) is independent of the collection of C*(r, s); and (iv) J_n is uniformly distributed on {1, . . . , n} and is independent of all the C's and C*'s.
The distribution of C(n, m) is not known in closed form for general finite n and m, so we turn to asymptotics. For any fixed m, formal passage to the limit as n → ∞ in (1.3) suggests that

Y(n, m) := C(n, m)/n − 1

has a limiting distribution L(Y) satisfying the fixed-point equation

Y =_L U(1 + Y), with U uniform(0, 1) independent of Y on the right.

This is indeed the case, as was shown by Mahmoud et al. [12]. Recalling the characterization (1.2), we see that the law of Y is a perpetuity, known as the Dickman distribution. Curiously, if m is chosen uniformly at random from {1, . . . , n} before the algorithm is run, then the limiting distribution of Y(n, m) is the convolution square of the Dickman distribution, as was also shown in [12].

See [6, Section 2] concerning various important settings, including number theory (largest prime factors) and combinatorics (longest cycles in permutations), in which the Dickman distribution arises; also see [6] for some basic facts about this distribution. The support of the distribution is [0, ∞), and simple integral expressions are known for the characteristic and moment-generating functions of this log-concave (that is, strongly unimodal) and infinitely divisible distribution. In addition, the Dickman distribution has a continuous density f that is constant with value e^{−γ} over (0, 1] (here γ is Euler's constant). Over each interval (k, k + 1] with k a positive integer, f is the unique solution to the delay differential equation

f′(y) = −f(y − 1)/y.

Still, no closed form for f is known.

Some years back, Luc Devroye challenged the first author to find a method for simulating perfectly from the Dickman distribution in finite time, where one assumes that only perfect draws from the uniform distribution and basic arithmetic operations such as multiplication (with perfect precision) are possible; in particular, numerical integration is not allowed. The problem was solved even prior to the writing of Devroye's paper [1], but the second author pointed out extensive simplification that could be achieved.
The result is the present collaborative effort, which shows how to sample perfectly in a simple, provably fast fashion from the Dickman distribution, and more generally from any member of the Vervaat [15] family of perpetuities handled by Devroye [1]. To obtain the Vervaat perpetuities, one for each value of 0 < β < ∞, choose W = U^{1/β} in (1.2), where U is uniform(0, 1); the case β = 1 gives the Dickman distribution.
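Before turning to perfect simulation, it may help to see the naive alternative. Truncating the series (1.1) gives an approximate (not perfect) sampler; the following sketch, with function name and truncation level of our own choosing, uses W = U^{1/β} as in the Vervaat family:

```python
import random

def approx_vervaat(beta, n_terms=200, rng=random):
    """Approximate draw from the Vervaat perpetuity by truncating the
    series Y = W1 + W1*W2 + W1*W2*W3 + ..., with each W = U**(1/beta)."""
    total, prod = 0.0, 1.0
    for _ in range(n_terms):
        prod *= rng.random() ** (1.0 / beta)  # multiply in the next W factor
        total += prod                         # add the next term of the series
    return total
```

Since E W = β/(β + 1) < 1, the neglected tail has geometrically small mean, but the output is only approximately distributed as Y; the point of the present paper is to remove this approximation error entirely.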

A quick review of Devroye's method.
To compare with the Markov-chain-based method employed in the present paper, we first review highlights of Devroye's [1] method for perfect simulation of perpetuities. Devroye sketches a general approach and carries it out successfully for the Vervaat family.
The underlying idea of Devroye's approach is simple acceptance-rejection:

1. Find explicit h > 0 and 1 ≤ c < ∞ so that f_Y ≤ h everywhere and ∫ h = c.

2. Generate X with density c^{−1} h.

3. Accept X with probability f_Y(X)/h(X); otherwise, reject X and independently repeat steps 2-3.
In step 2, Devroye lets U_1 and U_2 be independent uniform(0, 1) random variables and sets X to an explicit function of them. The crux is step 3: since f_Y can't be computed exactly, having generated X = x and an independent uniform random variable U_3, one must figure out how to determine whether or not U_3 ≤ f_Y(x)/h(x). Devroye's solution for step 3 is to find explicitly computable approximations f_n(x) to f_Y(x) and explicitly computable bounds R_n(x) so that, for every x, |f_Y(x) − f_n(x)| ≤ R_n(x) with R_n(x) → 0 as n → ∞. Devroye's functions f_n are quite complicated, involving (among other things)

• the sine integral function S(t) := ∫_0^t (sin s)/s ds and approximations to it computable in finite time;

• explicit computation of the characteristic function φ_Y of Y;

• use of quadrature (trapezoidal rule, Simpson's rule) to approximate the density f_Y as the inverse Fourier transform of φ_Y.

Devroye proves that the running time of his algorithm is finite almost surely (for any β > 0), but cannot get finite expected running time for any β > 0 without somewhat sophisticated improvements to his basic algorithm. He ultimately develops an improved algorithm with finite expected running time; more careful analysis would be difficult. Devroye makes no claim "that these methods are inherently practical". We do not mean to criticize; after all, as Devroye points out, his paper [1] is useful in demonstrating that perfect simulation of perpetuities (in finite time) is possible.
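To make the skeleton of steps 1-3 concrete, here is the generic acceptance-rejection loop applied to a toy target whose density is exactly computable; the target f(x) = 2x on [0, 1] and the constant envelope h ≡ 2 are our own illustrative choices, not Devroye's:

```python
import random

def accept_reject(f, h_const, rng, lo=0.0, hi=1.0):
    """Acceptance-rejection with a constant envelope h = h_const on [lo, hi]:
    step 2 proposes X with density h/c (here: uniform on [lo, hi]);
    step 3 accepts X with probability f(X)/h(X)."""
    while True:
        x = lo + (hi - lo) * rng.random()      # step 2: propose
        if rng.random() <= f(x) / h_const:     # step 3: accept or reject
            return x
```

The whole difficulty in Devroye's setting is that the acceptance test of step 3 involves f_Y(x), which cannot be evaluated exactly.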
The approach we will take is very simple conceptually, very easy to code, and (at least for Vervaat perpetuities with β not too large) very fast (provably so). While we have hopes that our methodology can be generalized to apply to any perpetuity, in this paper we develop the details for Vervaat perpetuities for any β > 0.
1.4. Our approach. Briefly, our approach to perfect simulation of a perpetuity from the Vervaat family is to use the Markov-chain-based perfect sampling algorithm known as coupling into and from the past (CIAFTP) [8, 9] that produces draws exactly from the stationary distribution of a Markov chain. CIAFTP requires use of a so-called dominating chain, which for us will be simple random walk with negative drift on a set of the form {x_0 − 1, x_0, x_0 + 1, . . .}, where x_0 is a fixed real number at least 2. In order to handle the continuous state space, multigamma coupling [13] will be employed.
As we shall see, all of the Markov chain steps involved are very easy to simulate, and the expected number of steps can be explicitly bounded. For example, our bound is the modest constant 15 in the case β = 1 of the Dickman distribution, for which the actual expected number of steps appears to be a little larger than 6 (consult the end of Section 5).
The basic idea is simple. From the discussion in Section 1.1 it is clear that the kernel K given by

K(x, ·) := L(W(1 + x))

provides a Markov chain which, for any initial distribution, converges in law to the desired stationary distribution L(Y), the perpetuity. An update function or transition rule for a Markov chain on state space Ω with kernel K is a function φ : Ω × Ω′ → Ω (for some space Ω′) so that for a random variable W with a specified distribution on Ω′ we have L(φ(x, W)) = K(x, ·) for every x ∈ Ω. There are of course many different choices of update function for any particular Markov chain. To employ coupling from the past (CFTP) for a Markov chain, an update function that is monotone suffices. Here, an update function φ is said to be monotone if whenever x ⪯ y with respect to a given partial order ⪯ on Ω we have φ(x, w) ⪯ φ(y, w) for all w ∈ Ω′.
Consider the state space [0, ∞) linearly ordered by ≤, and let W be distributed as in (1.2). Then

φ_natural(x, w) := w(1 + x)

provides a natural monotone update function. If we wish to do perfect simulation, it would appear at first that we are ideally situated to employ coupling from the past (CFTP).
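In code (a sketch of our own, with u a uniform(0, 1) driving variable so that W = u^{1/β}), the natural update and the chain it drives look as follows; iterating forward converges in law, but only approximately after any finite number of steps:

```python
import random

def phi_natural(x, u, beta):
    """Natural update x -> W(1 + x) with W = u**(1/beta); for fixed u this
    is strictly increasing in x, hence monotone."""
    return (u ** (1.0 / beta)) * (1.0 + x)

def forward_chain(beta, steps, x_start, rng):
    """Iterate the kernel K forward; the output is approximately (not
    perfectly) distributed as the perpetuity when `steps` is large."""
    x = x_start
    for _ in range(steps):
        x = phi_natural(x, rng.random(), beta)
    return x
```

Note that two trajectories driven by the same u's but begun at distinct states remain distinct forever, since φ_natural is strictly increasing in x; this is the first of the two problems discussed next.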
However, there are two major problems: (i) The rule φ_natural is strictly monotone, and thus no two trajectories begun at distinct states will ever coalesce. (ii) The state space has no top element. It is well known how to overcome these difficulties: (i) Instead of φ_natural, we use a multigamma coupler [13].
(ii) Instead of CFTP, we use CIAFTP with a dominating chain which provides a sort of "stochastic process top" to the state space [8,9]. Our work presents a multigamma coupler for perpetuities that is monotone. Monotonicity greatly simplifies the use of CFTP [14] and CIAFTP. Useful monotone couplers can be difficult to find for interesting problems on continuous state spaces; two very different examples of the successful use of monotone couplers on such spaces can be found in [16] and [5].
1.5. Outline. In Section 2 we describe our multigamma coupler; in Section 3, our dominating chain. Section 4 puts everything together and gives a complete description of the algorithm, and Section 5 is devoted to bounding the running time. Section 6 briefly discusses approaches similar to ours carried out by two other pairs of authors.

The multigamma coupler
The multigamma coupler of Murdoch and Green [13] is an extension of the γ coupler described in Lindvall [11] that couples a single pair of random variables. An update function can be thought of as coupling an uncountable number of random variables simultaneously, thus the need for "multi"gamma coupling. The goal of multigamma coupling is to create an update function whose range is but a single element with positive probability, in order to use CIAFTP as described in Section 4.
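The following is our own sketch of such an update function for the Vervaat chain, consistent with the coalescence condition used in Section 4 (coalescence occurs when W(1) ≤ 1/(1 + x)); the precise statement is Proposition 2.1. The key observation: with w1 = u1^{1/β}, conditionally on the event w1 ≤ 1/(1 + x) the value w1(1 + x) has density βy^{β−1} on [0, 1] regardless of x, so on that event all chains can be handed one common draw w2 = u2^{1/β} from that density:

```python
import random

def phi_multigamma(x, u1, u2, beta):
    """Monotone multigamma-style update for the Vervaat chain (our sketch).
    On the event w1 <= 1/(1+x), every state at or below x is mapped to the
    common value w2, so the image of the whole interval [0, x] is a single
    point with probability (1 + x)**(-beta)."""
    w1 = u1 ** (1.0 / beta)
    if w1 <= 1.0 / (1.0 + x):
        return u2 ** (1.0 / beta)   # common branch: coalescence
    return w1 * (1.0 + x)           # residual branch
```

For fixed (u1, u2) the map is monotone in x: when both states take the residual branch the output is increasing in x; when the lower state takes the common branch its output lies in [0, 1], while the higher state's residual output w1(1 + x) exceeds 1.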

A dominating chain
Dominating chains were introduced by Kendall [8] (and extended in [9]) to extend the use of CFTP to chains on a partially ordered state space with a bottom element but no top element. A dominating chain for a Markov chain (X t ) is another Markov chain (D t ) that can be coupled with (X t ) so that X t ≤ D t for all t. In this section we give such a dominating chain for the Vervaat perpetuity, and in the next section we describe how the dominated chain can be used with CIAFTP.
A dominating rule for φ is an update function ψ such that

φ(x, w) ≤ ψ(d, w) whenever x ≤ d. (3.1)

We exhibit such a rule ψ in our next result, Proposition 3.1. It is immediate from the definition of ψ that the dominating chain is just a simple random walk on the lattice {x_0 − 1, x_0, . . .} that moves left with probability 2/3 and right with probability 1/3; the walk holds at x_0 − 1 when a move to x_0 − 2 is proposed. In the definition of ψ, note that no use is made of w(2).

Proposition 3.1. Fix 0 < β < ∞. Let φ be the multigamma coupler for Vervaat perpetuities described in Proposition 2.1. Define

x_0 := max{2, (1 + (2/3)^{1/β}) / (1 − (2/3)^{1/β})}

and, for x ∈ S := {x_0 − 1, x_0, . . .},

ψ(x, w) := x + 1 if w(1) > (2/3)^{1/β}, and ψ(x, w) := max{x − 1, x_0 − 1} otherwise.

Then (3.1) holds, and so ψ drives a chain that dominates the φ-chain.
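In code (our reconstruction: the formula for x_0 below is the least value ≥ 2 with (2/3)^{1/β}(x_0 + 1) ≤ x_0 − 1, which gives x_0 = 5 at β = 1, matching the bounds quoted in Section 5), the dominating rule reads:

```python
def x0_of(beta):
    """Smallest level >= 2 with (2/3)**(1/beta) * (x + 1) <= x - 1 (our
    reconstruction), so a down-step of psi stays above a step of phi."""
    r = (2.0 / 3.0) ** (1.0 / beta)
    return max(2.0, (1.0 + r) / (1.0 - r))

def psi(x, w1, beta, x0):
    """Dominating walk update, driven by w1 = W(1) only: up one unit when
    w1 > (2/3)**(1/beta) (probability 1/3), else down one unit, holding
    at x0 - 1."""
    if w1 > (2.0 / 3.0) ** (1.0 / beta):
        return x + 1.0
    return max(x - 1.0, x0 - 1.0)
```

Domination is then a two-case check: on an up-step, φ's output is at most w1(1 + d) ≤ 1 + d = ψ(d, w); on a down-step, it is at most max{1, (2/3)^{1/β}(1 + d)}, which by the choice of x_0 is at most ψ(d, w).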

A perfect sampling algorithm for Vervaat perpetuities
For an update function φ(x, w) and random variables W_{−t}, . . . , W_{−1}, let F^0_{−t}(x) denote the time-0 state of the chain begun in state x at time −t and driven by those variables:

F^0_{−t}(x) := φ(φ(· · · φ(x, W_{−t}) · · · , W_{−2}), W_{−1}).

To use coupling from the past with a dominated chain for perfect simulation, we require:

(i) A bottom element 0 for the partially ordered state space of the underlying chain X, a dominating chain D, and update functions φ(x, w) and ψ(x, w) for simulating the underlying chain and dominating chain forward in time.

(ii) A method for drawing D_0 from the stationary distribution of the dominating chain.

(iii) A method for simulating the dominating chain backwards in time.

(iv) A method for generating, conditionally given the path of the dominating chain, the driving variables W fed to the update functions.

(v) A method for detecting coalescence, that is, for determining that F^0_{−t′}(x) takes a common value for all states x with 0 ≤ x ≤ D_{−t′}. (This detection of coalescence can be conservative, but we must never claim to detect coalescence when none occurs.)

With these pieces in place, dominated coupling from the past [9] is:

1. Draw D_0 as in requirement (ii).

2. Let t ← 0 and t′ ← 1.

3. Simulate the dominating chain backwards in time from time −t to time −t′.

4. Conditionally given the path of the dominating chain, generate the driving variables W_{−t′}, . . . , W_{−t−1}.

5. If coalescence is detected, so that F^0_{−t′} takes a common value x on all states below the dominating chain, then output state x as the random variate and quit.

6. Else let t ← t′, t′ ← t′ + 1, and go to step 3.

The update to t′ in step 6 can be done in several ways. For instance, doubling time rather than incrementing it additively is often done. For our chain, step 5 only requires checking to see whether W_{−t′}(1) ≤ 1/(1 + D_{−t′}), and so requires constant time for failure and time t′ for success. Thus step 6 as written is efficient.
Because the dominating chain starts at time 0 and is simulated into the past, and then the underlying chain is (conditionally) simulated forward, Wilson [16] referred to this algorithm as coupling into and from the past (CIAFTP). Now we develop each of the requirements (i)-(v) for Vervaat perpetuities. Requirement (i) was dealt with in Sections 2 and 3, where it was noted that the dominating chain is a simple random walk with probability 2/3 of moving down one unit towards 0 and probability 1/3 of moving up one unit. The chain has a partially absorbing barrier at x_0 − 1, so that if the chain is at state x_0 − 1 and tries to move down it stays in the same position.
The stationary distribution of the dominating random walk can be explicitly calculated and is a shifted geometric distribution; requirement (ii) can be satisfied by letting D_0 be x_0 − 2 + G, where G ∈ {1, 2, . . .} has the geometric distribution with success probability 1/2. [Recall that we can generate G from a Unif(0, 1) random variable U using G = ⌈−(ln U)/(ln 2)⌉.] Simulating the D-chain backwards in time [requirement (iii)] is also easy since D is a birth-and-death chain and so is reversible.

Now consider requirement (iv). Suppose that a one-step transition D_{−t+1} = D_{−t} + 1 (say) forward in time is observed, and consider the conditional distribution of the driving variable W_{−t} that would produce this transition [via D_{−t+1} = ψ(D_{−t}, W_{−t})]. What we observe is precisely that the random variable W_{−t}(1) = U_{−t}^{1/β} fed to the update function ψ must have satisfied W_{−t}(1) > (2/3)^{1/β}, i.e., U_{−t} > 2/3. Hence we simply generate W_{−t}(1) from the distribution of U_{−t}^{1/β} conditioned on U_{−t} > 2/3. Similarly, if a down-move (or a hold at x_0 − 1) is observed, we generate W_{−t}(1) conditioned on W_{−t}(1) ≤ (2/3)^{1/β}, i.e., on U_{−t} ≤ 2/3. The random variable W_{−t}(2) is always generated independently as the 1/β power of a Unif(0, 1) random variable.
Finally, requirement (v) is achieved by using the multigamma coupler of Section 2; indeed, if ever W_{−t}(1) ≤ 1/(D_{−t} + 1), then the underlying chain coalesces to a single state at time −t + 1, and hence also at time 0.
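Assembling requirements (i)-(v) gives the complete sampler. The code below is our own sketch: the multigamma update and the formula for x_0 are the reconstructions discussed above, and all names are ours. For β = 1 it produces draws from the Dickman distribution:

```python
import random

def vervaat_perfect(beta, rng):
    """One perfect draw from the Vervaat perpetuity via CIAFTP (our sketch).

    D[t] holds D_{-t}; W[t-1] holds the pair (w1, w2) driving the step
    from time -t to time -t+1."""
    r = (2.0 / 3.0) ** (1.0 / beta)
    x0 = max(2.0, (1.0 + r) / (1.0 - r))    # reconstructed choice of x0
    # (ii) Stationary start: D_0 = x0 - 2 + G with G ~ Geometric(1/2).
    g = 1
    while rng.random() < 0.5:
        g += 1
    D = [x0 - 2 + g]
    W = []
    t = 0
    while True:
        t += 1
        # (iii) One step backwards in time: the walk is reversible, so the
        # reversed kernel is again down w.p. 2/3 (holding at x0 - 1), up w.p. 1/3.
        if rng.random() < 2.0 / 3.0:
            prev = max(D[-1] - 1.0, x0 - 1.0)
            forward_up = prev < D[-1]        # unclamped: forward move was up
        else:
            prev = D[-1] + 1.0
            forward_up = False               # forward move was down
        D.append(prev)
        # (iv) Generate W_{-t}(1) conditionally on the observed forward move:
        # an up-move forces U > 2/3; a down-move or hold forces U <= 2/3.
        u1 = (2.0 / 3.0 + rng.random() / 3.0) if forward_up else (rng.random() * 2.0 / 3.0)
        w1 = u1 ** (1.0 / beta)
        w2 = rng.random() ** (1.0 / beta)    # W(2) is unconditioned
        W.append((w1, w2))
        # (v) Conservative coalescence test at time -t.
        if w1 <= 1.0 / (1.0 + D[t]):
            x = w2                           # the single state at time -t+1
            # Run the underlying chain forward to time 0 with the same W's.
            for s in range(t - 1, 0, -1):
                v1, v2 = W[s - 1]
                x = v2 if v1 <= 1.0 / (1.0 + x) else v1 * (1.0 + x)
            return x
```

The number of backward steps per draw is small: about six on average for β = 1, as discussed in Section 5.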

A bound on the running time
Theorem 5.1. Fix 0 < β < ∞, and let T denote the number of steps backwards in time needed by the algorithm of Section 4 to detect coalescence. Then

x_0^β ≤ E T ≤ 3 + 2(x_0 + 1)^β, (5.1)

and hence E T = e^{β ln β+Θ(β)} as β → ∞; and

E T(β) → 1 as β ↓ 0. (5.2)

Proof. The lower bound in (5.1) is easy: since D_t ≥ x_0 − 1, the probability at each step that (U_{−t})^{1/β} ≤ 1/(D_{−t} + 1) is at most x_0^{−β}, so the expectation of the number of steps t needed is at least x_0^β.

We proceed to derive the upper bound. Consider a potential function Φ = (Φ_t)_{t≥0} such that Φ_t depends on D_{−t}, . . . , D_0 and W_{−t}, . . . , W_{−1} in the following way:

Φ_t := 1(T > t) [ (D_{−t} − (x_0 − 1)) + (2/3)(x_0 + 1)^β ].

We will show that the process (Φ_{t∧T} + (1/3)(t ∧ T)) is a supermartingale and then apply the optional sampling theorem to derive the upper bound. Let F_t be the σ-algebra generated by D_{−t}, . . . , D_0 and W_{−t}, . . . , W_{−1}, so that T is a stopping time with respect to the filtration (F_t). Suppose that T > t.
In this case there are two components to Φ_t − Φ_{t−1}. The first component comes from the change from D_{−t} to D_{−t+1}. The second component comes from the possibility of coalescence (which gives T = t) due to the choice of W_{−t}(1). The expected change in Φ is the sum of the expected changes from these two sources.
For the change in the D-chain we observe that, conditionally, D_{−t} = D_{−t+1} − 1 with probability 2/3 and D_{−t} = D_{−t+1} + 1 with probability 1/3, except that at the holding state (D_{−t+1} = x_0 − 1) the downward move is replaced by a hold; the expected change in Φ from this source is thus −1/3, except at the holding state, where it is at most +1/3. The expected change in Φ due to coalescence can be bounded above by considering coalescence only when D_{−t+1} = x_0 − 1, in which case we have D_{−t} ∈ {x_0 − 1, x_0}. But then coalescence occurs with probability at least [1/(1 + x_0)]^β, and if coalescence occurs, Φ drops from (2/3)(x_0 + 1)^β down to 0. Therefore the expected change in Φ from coalescence when D_{−t+1} = x_0 − 1 is at most −[1/(1 + x_0)]^β (2/3)(x_0 + 1)^β = −2/3. So whenever T > t, the potential Φ decreases by at least 1/3 on average at each step, and hence (Φ_{t∧T} + (1/3)(t ∧ T)) is a supermartingale. Since it is also nonnegative, the optional sampling theorem can be applied (see for example [3], p. 271) to yield E T ≤ 3 E Φ_0. Recall that D_0 = x_0 − 2 + G, where G is a geometric random variable with parameter 1/2 and thus has mean 2. Hence E Φ_0 = 1 + (2/3)(x_0 + 1)^β, giving the desired upper bound on E T.
We now prove (5.2), using notation such as T(β) to indicate explicitly the dependence of various quantities on the value of the parameter β. First, notice that x_0(β) = 2 for all 0 < β ≤ β_0 := ln(3/2)/ln 3, and hence that the same dominating chain D may be used for all such values of β.
Conditionally given the entire D-process, the probability that coalescence does not occur at the sth step backwards in time is 1 − (D_{−s} + 1)^{−β}, so

Y_t(β) := ∏_{s=1}^t [1 − (D_{−s} + 1)^{−β}]

equals the conditional probability that coalescence does not occur on any of the first t steps backwards in time. Thus

P(T(β) > t) = E Y_t(β), (5.5)

and hence

E T(β) = Σ_{t=0}^∞ E Y_t(β). (5.6)

We now need only apply the dominated convergence theorem, observing that Y_t(β) ≥ 0 is increasing in 0 < β ≤ β_0, that Σ_{t=0}^∞ E Y_t(β_0) = E T(β_0) < ∞, and that Y_t(β) → 0 as β ↓ 0 for each t ≥ 1; this yields (5.2).

For any fixed value of β, it is possible to find E T to any desired precision. Let us sketch how this is done for the Dickman distribution, where β = 1. First, we note for simplicity that T in Theorem 5.1 has the same distribution as the time it takes for the dominating random walk D = (D_t)_{t≥0}, run forward from the stationary distribution at time 0, to stop, where "stopping" is determined as follows. Having generated the walk D through time t and not stopped yet, let U_t be an independent uniform(0, 1) random variable. If U_t > 2/3, let D move up (and stopping is not possible); if U_t ≤ 2/3, let D move "down" and stop (at time t + 1) if U_t ≤ 1/(D_t + 1). Adjoin to the dominating walk's state space {x_0 − 1, x_0, x_0 + 1, . . .} an absorbing state corresponding to stopping. Then the question becomes: What is the expected time to absorption? We need only compute the expected time to absorption from each possible deterministic initial state and then average with respect to the shifted-geometric stationary distribution for D. For finite-state chains, such calculations are straightforward (see for example [7], pp. 24-25). For infinite-state absorbing chains such as the one we have created here, some truncation is necessary to achieve lower and upper bounds. This can be done using ideas similar to those used in the proof of Theorem 5.1; we omit further details.
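For β = 1 (so that x_0 = 5) this computation is easy to carry out numerically. The sketch below is our own: the truncation level N and the plain fixed-point iteration are our choices, and the truncation error is merely negligible for large N rather than rigorously bounded as in the lower-and-upper-bound scheme just described. Its output can be checked against the exact value reported in the next paragraph.

```python
def dickman_expected_steps(N=200, iters=5000):
    """Expected absorption time for the forward dominating walk at beta = 1:
    from state d, absorb w.p. 1/(d+1); otherwise move down w.p. 2/3 - 1/(d+1)
    (holding at 4 = x0 - 1) or up w.p. 1/3. The walk is held at the
    truncation level N, where the stationary mass is negligible."""
    lo = 4                                 # x0 - 1
    h = [0.0] * (N + 1)                    # h[d] = expected steps from state d
    for _ in range(iters):                 # value iteration: h <- 1 + P_sub h
        new = h[:]
        for d in range(lo, N + 1):
            up = h[d + 1] if d < N else h[d]
            down = h[max(d - 1, lo)]
            new[d] = 1.0 + up / 3.0 + (2.0 / 3.0 - 1.0 / (d + 1)) * down
        h = new
    # Average over the stationary start D_0 = 3 + G, G ~ Geometric(1/2).
    total, weight = 0.0, 0.5
    for k in range(1, N - 3):
        total += weight * h[3 + k]
        weight *= 0.5
    return total
```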
The upshot is that it takes an average of 6.07912690331468130722 . . . steps to reach coalescence. This is much closer to the lower bound of 5 given by Theorem 5.1 than to the upper bound of 15. Our exact calculations confirm results we obtained from simulations. Ten million trials (which took only a few minutes to run and tabulate using Mathematica code that was effortless to write) gave an estimate of 6.07787 with a standard error of 0.00184. The algorithm took only a single Markov chain step about 17.4% of the time; more than four steps about 47.6% of the time; more than eight steps about 23.4% of the time; and more than twenty-seven steps about 1.0% of the time. In the ten million trials, the largest number of steps needed was 112.
Similar simulations for small values of β led us to conjecture, for some constant c near 1, the refinement

E T(β) = 1 + (1 + o(1)) c β as β → 0 (5.7)

of (5.2). The expansion (5.7) (which, while not surprising, demonstrates extremely efficient perfect simulation of Vervaat perpetuities when β is small) does in fact hold with

c = Σ_{i=1}^∞ 2^{−i} ln(i + 1) ≈ 1.016. (5.8)

We will prove next that the right side of (5.7) provides a lower bound on E T(β). Our proof that it also provides an upper bound uses the same absorbing-state approach we used to compute E T(β) numerically in the case β = 1 and is rather technical, so we omit it here. Using (5.6) and (5.5) together with the inequality e^{−x} ≥ 1 − x for x ≥ 0, we find

E T(β) − 1 ≥ E Y_1(β) = E[1 − e^{−β ln(D_{−1} + 1)}], with 0 ≤ β^{−1}[1 − e^{−β ln(D_{−1} + 1)}] ≤ ln(D_{−1} + 1). (5.9)

Application of the dominated convergence theorem to the right side of (5.9) gives the lower bound in (5.7), where c = E ln(D_{−1} + 1) is given by the series (5.8), since in stationarity (with x_0 = 2 for small β) we have D_{−1} = x_0 − 2 + G = G.

Related work
An unpublished early draft of this paper has been in existence for a number of years. Perfect simulation of Vervaat perpetuities has been treated independently by Kendall and Thönnes [10]; their approach is quite similar (but not identical) to ours. One main difference is their use of the multi-shift coupler of Wilson [16] rather than the multigamma coupler. The simulation with (in our notation) β = 10 reported in their Section 3.5 suggests that, unlike ours, their algorithm is reasonably fast even when β is large; it would be very interesting to have a bound on the expected running time of their algorithm to that effect.
The early draft of this paper considered only the Dickman distribution and used an integer-valued dominating chain D with a stationary distribution that is Poisson shifted to the right by one unit. Luc Devroye, who attended a lecture on the topic by the first author of this paper in 2004, and his student Omar Fawzi have very recently improved this approach to such an extent that the expected number of Markov chain steps they require to simulate from the Dickman distribution is proven to be less than 2.32; see [2]. It is not entirely clear that their algorithm is actually faster than ours, despite the larger average number 6.08 of steps for ours (recall the penultimate paragraph of Section 5), since it is possible that one step backwards in time using their equation (1) takes considerably longer than our simple up/down choice (recall step 2 in the algorithm at the end of Section 4).