The quantile transform of a simple walk

We examine a new path transform on 1-dimensional simple random walks and Brownian motion, the quantile transform. This transformation relates to identities in fluctuation theory due to Wendel, Port, Dassios and others, and to discrete and Brownian versions of Tanaka's formula. For an n-step random walk, the quantile transform reorders increments according to the value of the walk at the start of each increment. We describe the distribution of the quantile transform of a simple random walk of n steps, using a bijection to characterize the number of pre-images of each possible transformed path. We deduce, both for simple random walks and for Brownian motion, that the quantile transform has the same distribution as Vervaat's transform. For Brownian motion, the quantile transforms of the embedded simple random walks converge to a time change of the local time profile. We characterize the distribution of the local time profile, giving rise to an identity that generalizes a variant of Jeulin's description of the local time profile of a Brownian bridge or excursion.


Introduction
Date: May 11, 2014.

Given a simple walk with increments of ±1, one observes that the step immediately following the maximum value attained must be a down-step, and the step immediately following the minimum value attained must be an up-step. More generally, at a given value, the subsequent step is more likely to be an up-step the closer the value is to the minimum, and more likely to be a down-step the closer the value is to the maximum. To study this phenomenon more precisely, one can form a two-line array with the steps of the walk and the value of the walk at the start of each step, sort the array with respect to the values line, and consider the walk defined by the correspondingly re-ordered steps. It is this transformation, which we term the quantile transform, that we study here.
In this paper, we characterize the image of the quantile transform on simple (Bernoulli) random walks, which we call quantile walks, and we find the multiplicity with which each quantile walk arises. These results follow from a bijection between walks and quantile pairs (v, k) consisting of a quantile walk v and a nonnegative integer k satisfying certain conditions depending on v.
We also find, by passing to a Brownian limit, that the quantile transforms of certain Bernoulli walks converge to an expression involving Brownian local times. This leads to a novel description of the local times of Brownian motion up to a fixed time.
It is not difficult to describe the image of the set of walks under the quantile transform: the images are non-negative walks and first-passage bridges. Our main work is to determine the multiplicity with which each image walk arises; this is stated in our Quantile bijection theorem, Theorem 2.7, and illustrated in Figure 2.4. We establish the bijection by decomposing the quantile transform into three maps:

    (Q(w), φ_w^{-1}(n)) = γ ∘ β ∘ α(w).    (1.1)

In the middle stages of our sequence of maps we obtain combinatorial objects which we call marked (increment) arrays and partitioned walks. The three maps α, β, and γ are discussed in sections 4, 5, and 6 respectively. In section 2 we prove an image-but-no-multiplicities version of the Quantile bijection theorem for a more general class of discrete-time processes.
In section 3 we show that the total number of quantile pairs (v, k) with v having length n is equal to the number of walks of length n, i.e. 2^n. Section 4 introduces increment arrays and defines the map α. These arrays are a finite version of the stack model of random walk, which is the basis for cycle-popping algorithms used to generate random spanning trees of edge-weighted digraphs; see Propp and Wilson [45]. Theorem 4.7 asserts that α is injective and characterizes its range; i.e. this theorem gives necessary and sufficient conditions for a marked increment array to minimally describe a walk.
In section 5 we introduce partitioned walks and the map β. This map is trivially a bijection, and Theorem 5.8 describes the image of β • α. Equation (5.3) is a discrete version of Tanaka's formula; this formula has previously been studied in several papers, including [38,19,49,51], and it plays a key role both in this section and in the continuous setting.
In section 6 we prove that γ acts injectively on the image of β • α, thereby completing our proof of Theorem 2.7.
Moving on from Theorem 2.7, in section 7 we demonstrate a surprising connection between the quantile transform and a discrete version of the Vervaat transform, which is discussed in Definition 8.17. Theorem 7.3 is the Vervaat analogue of Theorem 2.7. We find that quantile pairs and Vervaat pairs coincide almost perfectly, and that every walk has as many preimages under the one transform as under the other.
In section 8, we pass from simple random walks to a Brownian limit in the manner of Knight [36,35]. Our path-transformed walk converges strongly to an expression involving Brownian local times. The bijection from the discrete setting results in an identity, Theorem 8.19, describing the local times of Brownian motion up to a fixed time, as a function of level. This identity generalizes a theorem of Jeulin [32].
Jeulin's theorem was applied by Biane and Yor [10] in their study of principal values around Brownian local times. Aldous [3], too, made use of this identity to study Brownian motion conditioned on its local time profile; and Aldous, Miermont, and Pitman [1], while working in the continuum random tree setting, discovered a version of Jeulin's result for a more general class of Lévy processes. Leuridan [39] and Pitman [43] have given related descriptions of Brownian local times up to a fixed time, as a function of level.
Related path transformations have been considered by Bertoin, Chaumont, and Yor [8] and later by Chaumont [15] in connection with an identity of fluctuation theory which had previously been studied by Wendel [54], Port [44], and Dassios [21,22,23]. We conclude with a discussion of these and other connections in section 9.

The quantile transform of a non-simple walk
It is relatively easy to describe the image of the quantile transform; the difficulty lies in enumerating the preimages of a given image walk. In this section we do the easy work, offering in Theorem 2.5 a weak version of Theorem 2.7 in the more general setting of non-simple walks. We conclude the section with a statement of our full Quantile bijection theorem, Theorem 2.7.
Throughout this document we use the notation [a, b] to denote an interval of integers. While most results in the discrete setting apply only to walks with increments of ±1, our results for this section apply to walks in general.
Definition 2.1. For n ≥ 0, a walk of length n is a function w : [0, n] → R with w(0) = 0. We may view such a walk w in terms of its increments x_j := w(j) − w(j − 1) for j ∈ [1, n]. In particular, a simple walk is a function w : [0, n] → Z with increments x_j = ±1 for every j. In subsequent sections of the document, for the sake of brevity, we will say "walk" to refer only to simple walks.
Definition 2.2. Given a walk w of length n, the quantile permutation φ_w of [1, n] sorts the increment indices according to the values from which the increments arise: φ_w is the unique permutation such that, for i < j, either w(φ_w(i) − 1) < w(φ_w(j) − 1), or these two values are equal and φ_w(i) < φ_w(j). The quantile path transform sends w to the walk Q(w) given by

    Q(w)(j) = Σ_{i=1}^{j} x_{φ_w(i)}    for j ∈ [1, n].    (2.1)

Note that the quantile permutation does not depend on the final increment x_n of w. A variant that does account for this final increment was previously considered by Wendel [54] and Port [44], among others; this is discussed further in section 9.
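For concreteness, the transform is easy to compute. The following Python sketch (our own helper, not part of the paper) sorts the increment indices by the value of the walk at the start of each increment, breaking ties by time index, and sums the re-ordered increments:

```python
from itertools import product

def quantile_transform(w):
    """Quantile transform Q(w) of a walk given by its values w[0], ..., w[n], with w[0] == 0.

    Increment indices are sorted by the value of the walk at the start of each
    increment (ties broken by time index), and the re-ordered increments are summed.
    """
    n = len(w) - 1
    phi = sorted(range(1, n + 1), key=lambda j: (w[j - 1], j))  # quantile permutation
    q = [0]
    for j in phi:
        q.append(q[-1] + (w[j] - w[j - 1]))
    return q

# Sanity checks over all simple walks of length 6: Q(w)(n) = w(n), and
# Q(w) is everywhere non-negative whenever w(n) >= 0 (cf. Theorem 2.5).
for steps in product((1, -1), repeat=6):
    w = [0]
    for s in steps:
        w.append(w[-1] + s)
    q = quantile_transform(w)
    assert q[-1] == w[-1]
    if w[-1] >= 0:
        assert min(q) >= 0
```

For instance, `quantile_transform([0, -1, 0, 1, 0])` gives `[0, 1, 0, 1, 0]`: the up-step leaving the minimum value −1 is placed first.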
We show an example of a simple walk and its quantile transform in Figure 2.1; for each j, the j-th increment of w is labeled with φ_w^{-1}(j). Observe that for a walk w of length n, we have Q(w)(n) = w(n). As j increases, the process Q(w)(j) incorporates increments which arise at higher values within w. Consider the example in Figure 2.1. The first two increments of Q(w) correspond to the increments in w which originate at the value −2, the first five increments of Q(w) correspond to those which originate at or below the value −1, and so on.

In discussing the proof and consequences of Theorem 2.7 it is helpful to refer to several special classes of walks.

Definition 2.3. We have the following special classes of (simple) walks:
• A bridge to level b is a walk w of length n with w(n) = b; when b = 0, w is simply a bridge.
• A non-negative walk is a walk of finite length which is non-negative at all times.
• A first-passage bridge of length n is a walk w which does not reach w(n) prior to time n.
• A Dyck path is a non-negative bridge (to level 0).
As illustrated in Figure 2.2, Q(w)(j) is the sum of increments of w which come from below a certain level. The graph of w is shown on the left and that of Q(w) on the right. The increments which contribute to Q(w)(6) are shown in both graphs as numbered, solid arrows, and those that do not contribute are shown as dashed arrows. The time j = 6 is marked off with a vertical dotted line on the left. Increments with their left endpoints strictly below this value do contribute to Q(w)(6), increments which originate at exactly this value may or may not contribute, and increments which originate strictly above this value do not contribute. The value Q(w)(6) is the sum of increments of w which originate below A_w(6), as well as some which originate exactly at A_w(6).
Definition 2.4. Given a walk w, for j ∈ [1, n] we define the quantile function of occupation measure

    A_w(j) := w(φ_w(j) − 1),

the value from which the j-th increment of Q(w) originates. The quantile function of occupation measure may also be expressed without reference to the quantile permutation:

    A_w(j) = min{a : #{i ∈ [1, n] : w(i − 1) ≤ a} ≥ j}.

On the left in Figure 2.2, the horizontal dotted line indicates A_w(6).
Theorem 2.5. For any walk w of length n,

    Q(w)(j) ≥ 0 for j ∈ [0, φ_w^{-1}(n)),  and  Q(w)(j) > Q(w)(n) for j ∈ [φ_w^{-1}(n), n).    (2.2)

Consequently, Q(w) is either a non-negative walk in the case where w(n) ≥ 0 or a first-passage bridge to a negative value in the case where w(n) < 0.
Proof. First we prove that for j < φ_w^{-1}(n) we have Q(w)(j) ≥ 0. Afterwards, we prove that for j ∈ [φ_w^{-1}(n), n) we have Q(w)(j) > Q(w)(n).

Fix j < φ_w^{-1}(n) and let I = {φ_w(1), . . . , φ_w(j)}, the set of indices of increments of w which contribute to Q(w)(j), so that Q(w)(j) = Σ_{i∈I} x_i. We partition I into maximal intervals of consecutive integers. For example, in Figure 2.3 with j = 6 we have I = {1, 2, 4, 5, 8, 9}, which comprises three intervals: {1, 2}, {4, 5}, and {8, 9}. We label these intervals I_1, I_2, and so on. These intervals correspond to segments of the path of w, shown in solid lines in the figure. Each such segment begins at or below A_w(j) and each ends at or above A_w(j). Here we rely on our assumption that j < φ_w^{-1}(n), which gives n ∉ I: if one of our path segments included the final increment of w then that segment might end below A_w(j). Thus, for each k we have Σ_{i∈I_k} x_i ≥ 0, and so

    Q(w)(j) = Σ_k Σ_{i∈I_k} x_i ≥ 0.

Now fix j ∈ [φ_w^{-1}(n), n) and again let I = {φ_w(1), . . . , φ_w(j)}, so that Q(w)(j) = w(n) − Σ_{i∈I^c} x_i, where I^c = [1, n] \ I. As with I above, we partition I^c into maximal intervals of consecutive numbers, I^c_1, I^c_2, . . . . These intervals correspond to segments of the path of w. Each such segment begins at or above and ends at or below A_w(j). As in the previous case, here we rely on our assumption that j ≥ φ_w^{-1}(n), which gives n ∉ I^c: if one of the I^c_k included the final increment then the corresponding path segment might end above A_w(j).

Moreover, if one of these segments begins exactly at A_w(j) then it must end strictly below A_w(j). Indeed, in order for the segment corresponding to some block [l, m] of I^c to begin exactly at A_w(j) we would need: (1) w(l − 1) = A_w(j) = w(φ_w(j) − 1), and (2) l ∈ I^c. Since increments leaving a common value are ordered by the quantile permutation according to their time indices, these conditions give l > φ_w(j). And since m + 1 ∈ I while m + 1 > l > φ_w(j), the increment m + 1 must leave a value strictly below A_w(j); that is, w(m) < A_w(j), as claimed. We conclude that for each block I^c_k,

    Σ_{i∈I^c_k} x_i < 0.

Consequently, Q(w)(j) = w(n) − Σ_k Σ_{i∈I^c_k} x_i > w(n) = Q(w)(n), as desired.
Theorem 2.5 motivates the following definition.
Definition 2.6. A quantile walk is a simple walk that is either non-negative or a first-passage bridge to a negative value.
A quantile pair is a pair (v, k) where v is a quantile walk of length n and k is an integer in [1, n] such that: if v(n) ≥ 0 then v(j) > v(n) for every j ∈ [k, n), while if v(n) < 0 then v(j) ≥ 0 for every j ∈ [0, k).

The following is our main result in the discrete setting.
Theorem 2.7 (Quantile bijection). The map w → (Q(w), φ −1 w (n)) is a bijection between the set of simple walks of length n and the set of quantile pairs (v, k) with v having length n.
This theorem is proved at the end of section 6. The next several sections build tools for that proof in the manner described in the introduction.
The index φ_w^{-1}(n) serves as a helper variable in the statement of the theorem, distinguishing between walks that have the same Q-image. This helper variable is the time at which the increment corresponding to the final increment of w arises in Q(w). Figure 2.4 illustrates which indices k may appear as helper variables alongside a particular image walk v, depending on the sign of v(n). If v(n) < 0 then its helper k may be any time from 1 up to the hitting time of −1. If v(n) ≥ 0 and v ends in a down-step then k may be any time in the final excursion above the value v(n), including time n. In the special case where v(n) ≥ 0 and v ends with an up-step, k can only equal n. Throughout the remainder of the document we say "walk" to refer to simple walks.
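Theorem 2.7 can be checked by brute force for small n. The sketch below (with our own hypothetical helper names, not the paper's notation) computes the pair (Q(w), φ_w^{-1}(n)) for every walk of a given length and confirms that no two walks share a pair:

```python
from itertools import product

def quantile_pair(w):
    """The pair (Q(w), phi_w^{-1}(n)) appearing in the Quantile bijection theorem."""
    n = len(w) - 1
    phi = sorted(range(1, n + 1), key=lambda j: (w[j - 1], j))
    q = [0]
    for j in phi:
        q.append(q[-1] + (w[j] - w[j - 1]))
    k = phi.index(n) + 1  # helper variable: position of w's final increment within Q(w)
    return tuple(q), k

def all_walks(n):
    for steps in product((1, -1), repeat=n):
        w = [0]
        for s in steps:
            w.append(w[-1] + s)
        yield w

n = 8
pairs = {quantile_pair(w) for w in all_walks(n)}
assert len(pairs) == 2 ** n  # the 2^n walks yield 2^n distinct quantile pairs
```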

Enumeration of quantile pairs
In this section we show that there are as many quantile pairs (v, k) in which v has u up-steps and d down-steps as there are walks with u up-steps and d down-steps. We begin with notation.
Let q(u, d) denote the number of quantile pairs (v, k) for which v has exactly u up-steps and d down-steps. For u ≥ d let walk^+(u, d) denote the number of everywhere non-negative walks with u up-steps and d down-steps. For u ≠ d let fpb(u, d) denote the number of first-passage bridges with u up-steps and d down-steps.
The following two formulae are well known and can be found in Feller [30, p. 72-77]: for u ≥ d,

    walk^+(u, d) = ((u − d + 1)/(u + 1)) (u + d choose d),    (3.1)

and for u ≠ d,

    fpb(u, d) = (|u − d|/(u + d)) (u + d choose d).    (3.2)
A discussion of these and other formulae in this vein may also be found in [28]. We also call upon a version of the Cycle lemma: a uniformly random first-passage bridge to level −k decomposes, at its successive first passage times of −1, −2, . . . , −k, into k first-passage bridges, each to one level lower. If we condition on the lengths of these first-passage bridges then they are independent and uniformly distributed in the sets of first-passage bridges to −1 of the appropriate lengths.
Versions of this lemma have been rediscovered many times. For more discussion on this topic see [24] and [42, p. 172-3] and references therein.
Finally, we require the following formula:

    q(u, d) = q(d, u).    (3.3)
Proof. The formula is trivial in the case u = d. Moreover, it suffices to prove the formula in the case u > d, since the case u < d follows by swapping variables.
We define a bijective path transformation T which transforms a non-negative walk ending in a down-step into a first-passage bridge down. This transformation offers a near duality between two classes of quantile pairs.

Fix u > d. The transformation T bijectively maps: (1) non-negative walks that end in down-steps and take u up-steps and d + 1 down-steps to (2) first-passage bridges that take d up-steps and u + 1 down-steps. This map has the additional property that each such v belongs to exactly one more quantile pair than T(v) does, which is the content of equation (3.4). This gives an identity for u > d: the second term on the right corresponds to the "+1" from equation (3.4), while the second term on the left accounts for quantile pairs involving non-negative walks that end in up-steps. Substituting in the known counts (3.1) and (3.2) gives the desired result.
We now have all of the elements needed to prove our enumeration of quantile pairs.

Proposition 3.3. For any non-negative integers u and d, q(u, d) = (u + d choose d), the number of walks with u up-steps and d down-steps.

Proof. We first prove the result in the case u < d. In that case every quantile walk with u up-steps and d down-steps is a first-passage bridge to a negative value, and summing the admissible helper variables over these walks gives the stated count, as desired. Now suppose u ≥ d. By equation (3.3) and the previous case, the result follows.
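For small u and d the count can be confirmed directly. The following sketch (our own code) enumerates quantile pairs using the description of the admissible helper variables from section 2 and compares the total with the binomial coefficient:

```python
from itertools import product
from math import comb

def is_quantile_walk(v):
    """Non-negative walk, or first-passage bridge to a negative value."""
    n = len(v) - 1
    if v[n] < 0:
        return all(v[j] > v[n] for j in range(n))  # first-passage bridge
    return min(v) >= 0

def valid_ks(v):
    """Admissible helper variables k for a quantile walk v."""
    n = len(v) - 1
    if v[n] < 0:
        # k runs from 1 up to the hitting time of -1, i.e. v >= 0 before time k
        return [k for k in range(1, n + 1) if all(v[j] >= 0 for j in range(k))]
    # k lies in the final excursion above v(n), including time n
    return [k for k in range(1, n + 1) if all(v[j] > v[n] for j in range(k, n))]

u, d = 4, 3
total = 0
for steps in product((1, -1), repeat=u + d):
    if steps.count(1) != u:
        continue
    v = [0]
    for s in steps:
        v.append(v[-1] + s)
    if is_quantile_walk(v):
        total += len(valid_ks(v))
assert total == comb(u + d, u)  # as many quantile pairs as walks with u ups, d downs
```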

Increment arrays
The increment array corresponding to a walk is a collection of sequences of ±1s, with each sequence listing the increments from a particular level of that walk. This is a finite version of the stack model of a Markov process, discussed in Propp and Wilson [45, p. 205] in connection with the cycle popping algorithm for generating a random spanning tree of an edge-weighted digraph. Whereas the stack model assumes an infinite excess of instructions, we study increment arrays which minimally describe walks of finite length. Theorem 4.7 characterizes these increment arrays.
In terms of the decomposition of Q proposed in equations (1.1) and (1.2), this section defines and studies the map α.
By virtue of their finiteness, increment arrays may be viewed as discrete local time profiles with some additional information. Discrete local times have been studied extensively; see, for example, Knight [35] and Révész [47]. A more complete list of references regarding asymptotics of discrete local times is given in section 8.
The quantile transform rearranges increments on the basis of their left endpoints.
Definition 4.1. Let w be a walk of length n. For 1 ≤ j ≤ n we define the level of (the left end of) the j-th increment of w to be

    w(j − 1) − min_{i<n} w(i).

The j-th increment of w is said to belong to, or to leave, that level. We name four important levels of a walk w, illustrated in Figure 4.1.
• The start level is the level of the first increment, or −min_{i<n} w(i). We typically denote this S, or S_w in case of ambiguity.
• The terminal level is w(n) − min_{i<n} w(i). We typically denote this T or T_w.
• The preterminal level is the level of the final increment, or w(n − 1) − min_{i<n} w(i). We typically denote this P or P_w.
• The maximum level is the largest level of any increment, max_{j∈[0,n−1]} w(j) − min_{i<n} w(i). We typically denote this L or L_w.

Note that if w is a first-passage bridge then no increments leave its terminal level. In this case T equals either −1 or L + 1. Because T attains these exceptional values, first-passage bridges arise as a special case throughout this document.
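In code, the four levels can be read off directly from the walk's values. This small sketch (our own helper, returning them in the order S, P, T, L) follows the definitions above:

```python
def levels(w):
    """Start, preterminal, terminal, and maximum levels of a simple walk w."""
    n = len(w) - 1
    m = min(w[:n])                       # minimum over times 0, ..., n-1
    S = w[0] - m                         # start level: level of the first increment
    P = w[n - 1] - m                     # preterminal level: level of the final increment
    T = w[n] - m                         # terminal level
    L = max(w[j] - m for j in range(n))  # maximum level
    return S, P, T, L

# For a first-passage bridge down, T = -1: no increment leaves the terminal level.
assert levels([0, -1, -2]) == (1, 0, -1, 1)
```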
The start, preterminal, and terminal levels share the following relationship:

    T = P + x_n  and  T − S = w(n).    (4.1)
The quantile transform of a walk w is determined by the levels at which the increments of w occur and the orders in which they occur at each level. We define increment arrays to carry this information.

Definition 4.2. An increment array is an indexed collection x = (x_i)_{i=0}^{L} of non-empty, finite sequences of ±1s. We call the x_i s the rows and L the height of the array. We say that an increment array (x_i)_{i=0}^{L} corresponds to a walk w with maximum level L if, for every i ∈ [0, L], the sequence of increments of w at level i equals x_i.
An example of a walk and its corresponding increment array is given in Figure 4.2.

Given an increment array x, we define u^x_i and d^x_i to be the number of 1s and −1s, respectively, that appear in x_i. Correspondingly, for a walk w we define u^w_i and d^w_i to be the numbers of up- and down-steps of w from level i. We call the u^x_i s and d^x_i s (respectively u^w_i s and d^w_i s) the up- and down-crossing counts of x (resp. of w). We define the sum of x, denoted σ_x, to be the sum of all increments in the array. Clearly, if x corresponds to a walk w of length n then σ_x = w(n), and for each i the crossing counts agree: u^x_i = u^w_i and d^x_i = d^w_i.

We now define the map α, which was referred to in equations (1.1) and (1.2). We need this map to be injective, but we see in Theorem 4.13 that the map from a walk to its corresponding increment array is not injective, so α(w) must pass some additional information.
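Constructing the increment array of a simple walk is a single pass over its increments, bucketing each by the level of its left endpoint (a minimal sketch with our own function name):

```python
from collections import defaultdict

def increment_array(w):
    """Rows x_0, ..., x_L: increments of w leaving each level, in time order."""
    n = len(w) - 1
    m = min(w[:n])
    rows = defaultdict(list)
    for j in range(1, n + 1):
        rows[w[j - 1] - m].append(w[j] - w[j - 1])  # bucket by level of the j-th increment
    return [rows[i] for i in range(max(rows) + 1)]

# The walk 0, 1, 0, -1, 0, 1 makes one up-step from level 0, the steps
# +1, -1, +1 from level 1, and one down-step from level 2:
assert increment_array([0, 1, 0, -1, 0, 1]) == [[1], [1, -1, 1], [-1]]
```

The sum of all entries recovers w(n), matching the identity σ_x = w(n) above.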
Definition 4.4. Given an increment array x = (x_i)_{i=0}^{L}, we may arbitrarily specify one row x_P with P ∈ [0, L] to be the preterminal row. We call the pair (x, P) a marked (increment) array, since one row has been "marked" as the preterminal row. We say that the marked array corresponds to a walk w if w corresponds to x and has preterminal level P.
We define α to be the map which sends a walk w to its corresponding marked array. Equation (4.1) may be restated in this setting. If an array x corresponds to a walk w with preterminal level P then the start and terminal levels of w are specified by

    T = P + x*_P  and  S = T − σ_x,    (4.3)

where x*_P denotes the final increment in the row x_P.

Definition 4.5. For a marked array (x, P) we define the indices S and T via equation (4.3). If S falls within [0, L] then we call x_S the start row of x; otherwise we say that the start row is empty. Likewise, if T ∈ [0, L] then we call x_T the terminal row, and if not then we say that the terminal row is empty.
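A marked array can be read as instructions for rebuilding its walk: starting from the level S given by (4.3), the walk consumes the next entry of the row at its current level at each step. A minimal sketch, with our own helper name:

```python
def rebuild_walk(rows, P):
    """Reconstitute the walk corresponding to a valid marked array (rows, P)."""
    rows = [list(r) for r in rows]       # working copies; rows are consumed
    sigma = sum(sum(r) for r in rows)
    T = P + rows[P][-1]                  # terminal level, via equation (4.3)
    S = T - sigma                        # start level, via equation (4.3)
    n = sum(len(r) for r in rows)
    level, w = S, [0]
    for _ in range(n):
        step = rows[level].pop(0)        # next instruction at the current level
        level += step
        w.append(w[-1] + step)
    assert level == T and all(not r for r in rows)
    return w

# Rebuilding from the array ([1], [1, -1, 1], [-1]) with preterminal row P = 1
# recovers the walk 0, 1, 0, -1, 0, 1.
assert rebuild_walk([[1], [1, -1, 1], [-1]], 1) == [0, 1, 0, -1, 0, 1]
```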
In Figure 4.3 we state an algorithm to reconstitute the walk corresponding to a valid marked array. This is the same algorithm implied by the stack model of random walks, discussed in [45]. In light of this algorithm, a marked increment array may be viewed as a set of instructions for building a walk: the row x_i tells the walk which way to go on successive visits to level i.

We wish to characterize which marked arrays correspond to walks. This is the main result of section 4.

Definition 4.6. An increment array has the Bookends property if for every i ≤ min{P, T} the final entry in x_i is a 1, and for each i ≥ max{P, T} the final entry is a −1. A marked array has the Crossings property if for each i ∈ [0, L + 1],

    u_{i−1} + 1{T < i} = d_i + 1{S < i},

with the conventions u_{−1} = d_{L+1} = 0. A marked array with the Bookends and Crossings properties is called valid. We call an increment array x valid if (x, P) is valid for some P.

Theorem 4.7. A marked array corresponds to a walk if and only if it is valid, and in that case the corresponding walk is unique.

The necessity of the Bookends property is clear. For each i ≠ P the last increment from level i of a walk w must go towards the preterminal level. Likewise, for each i ≠ T the last increment from level i must go towards the terminal level. Note that because there can be no index i strictly between P and T, these two requirements are never in conflict.
Next we consider decomposing a walk around its visits to a level. We use this idea first to prove the necessity of the Crossings property, and then to prove the sufficiency of the conditions in Theorem 4.7. This approach is motivated by excursion theory and by the approach in Diaconis and Freedman [25], which deals with related issues. In particular, whereas our Theorem 4.7 gives conditions for the existence of a path corresponding to a given set of instructions (a marked array), Theorem (7) in [25] gives conditions, based on comparing instructions, for two paths to arise with equal probability in some probability space. Whereas we begin with instructions and seek paths, Diaconis and Freedman begin with paths and consider instructions.
The following proposition asserts the necessity of the Crossings property in Theorem 4.7.
Proposition 4.8. For any walk w with start, terminal, and maximum levels S, T, and L respectively, and for any i ∈ [0, L + 1],

    u^w_{i−1} + 1{T < i} = d^w_i + 1{S < i},    (4.5)

with the conventions u^w_{−1} = d^w_{L+1} = 0.

Proof. Consider the behavior of a walk w around one of its levels i. The walk may be decomposed into: (i) an initial approach to level i (trivial when i is the start level), (ii) several excursions above and below i, and (iii) a final escape from i (trivial when i is the terminal level). Such a decomposition is shown in Figure 4.5.
The down-crossing count d_i must equal the number of excursions below level i, plus 1 if the terminal level is (and the final escape goes) strictly below level i. Similarly, u_{i−1} must equal the number of excursions below i, plus 1 if the start level is (and thus the initial approach comes from) strictly below level i. Equation (4.5) follows. We observe several special cases of this formula.
The up-crossing count u^w_L = 0 unless w is a first-passage bridge to a positive value, in which case u^w_L = 1.

We prove the sufficiency of the Bookends and Crossings properties for Theorem 4.7 by structural induction within certain equivalence classes of marked arrays.

Definition 4.10. We say that two marked arrays (x, P) and (x′, P′) are similar, denoted (x, P) ∼ (x′, P′), if P = P′ and, for each i, the row x′_i is a reordering of the row x_i which fixes its final entry.

This equivalence relation corresponds to a relation between paths observed in Diaconis and Freedman [25]. Note that similarity respects both the Bookends and Crossings properties.

The base case for our induction, Lemma 4.11, asserts that a valid marked array of a particular canonical form corresponds to a walk. We sketch a proof with two observations. Firstly, the proof of this lemma follows along the lines of the proof of Proposition 4.8. Secondly, the corresponding walk w would be of the form: (1) an initial direct descent from the start level to the minimum (except in the case T = −1, for which this descent may not reach the minimum), followed by (2) an up-down sawing pattern between the levels 0 and 1, then between levels 1 and 2, on up to levels L − 1 and L, and finally (3) a direct descent from the maximum level L to the terminal level T (except in the case T = L + 1, for which this descent is replaced by a single, final up-step). A walk of this general form is shown in the accompanying figure.

We follow with the remainder of our induction argument.
Proof of Theorem 4.7. The necessity of the Bookends property is clear, and that of the Crossings property is asserted in Proposition 4.8. If there exists a walk corresponding to a given marked array then its uniqueness is clear from the algorithm stated in Figure 4.3. So it suffices to prove that for every valid marked array, there exists a corresponding walk. We proceed by structural induction within the ∼-equivalence classes.
Base case: Every ∼-equivalence class of valid marked arrays contains one of the form described in Lemma 4.11. Thus, each class contains a marked array that corresponds to some walk.
Inductive step: Suppose that (x, P) is a valid marked array that corresponds to a walk w. Let x ′ denote an array obtained by swapping two consecutive, non-final increments within some row x i of x, and leaving all other increments in place. Operations of this form generate a group action whose orbits are the ∼-equivalence classes; thus, it suffices to prove that (x ′ , P) corresponds to some walk.
As in our proof of Proposition 4.8, we decompose w into an initial approach to level i, excursions away from level i, and a final escape.
Take, for example, the array with P = 0 corresponding to the walk w shown in Figure 4.5. Suppose that x′ is formed by swapping two consecutive increments within x_2. Then we decompose the values of w around level 2, which corresponds to the value w(j) = −1. This is analogous to the decomposition depicted in Figure 4.5; the three middle blocks are excursions.
The non-final increments of x_i are the initial increments of excursions of w away from level i (in the special case i = T, the final increment of x_i also begins an excursion). Each 1 corresponds to an excursion above level i, and each −1 to an excursion below. In the example, the increments (1, −1, 1) that appear before the final increment of x_2 correspond to the three excursions mentioned above. Swapping a consecutive 1 and −1 in x_i while leaving the (x_j)_{j≠i} untouched corresponds to swapping a consecutive upward and downward excursion.
Returning to the example, swapping the second and third increments in x_2 corresponds to swapping the second and third excursions of w away from the value −1. Because the middle three blocks all begin at the value −1 and end adjacent to it, swapping two of these blocks results in the value sequence of a walk w′; that is, a sequence of values starting at 0 with consecutive differences of ±1. Thus, there exists a walk w′ corresponding to (x′, P).
Theorem 4.7 may be generalized to classify instruction sets for walks on directed multigraphs. In that setting the Crossings property is replaced by a condition along the lines of "in-degree equals out-degree," and the Bookends property is replaced by a condition resembling "the last-exit edges from each visited, non-terminal vertex form a directed tree." The latter of these has been observed by Broder [14] and Aldous [2] in their study of an algorithm to generate random spanning trees. See also [13, p. 12].
We now digress from our main thread of proving the bijection between walks and quantile pairs to address the question: given a valid array x, what can we say about the indices P for which (x, P) is valid? We begin by asking: what does the Bookends property look like?
By the definition of T given in (4.3), it must differ from P by exactly 1. Therefore the two classifications i ≤ min{P, T } and i ≥ max{P, T } are exhaustive and non-intersecting. Given x, there exists a P for which the Bookends property is satisfied if and only if, for all i below a certain threshold x i ends in an up-step, and for all i above that threshold x i ends in a down-step; if this is the case then P and T must stand on either side of that threshold.
Consider an array in which the row-ending increments transition from 1s to −1s between rows 2 and 3. The Bookends property then requires that either P = 2 and T = 3 or vice versa, and both of these choices are consistent with equation (4.3).

Proposition 4.12. Given an increment array x, there are at most two choices of P for which (x, P) satisfies the Bookends property together with equation (4.3), and the choice of S determines the choice of P.

Proof. We begin with the special cases corresponding to first-passage bridges. First, suppose that every row of x ends in a 1. Then the Bookends property and the bounds on P are only satisfied if P = L, and then T and S are pinned down by (4.3); in particular T = L + 1. By a similar argument, if every row ends in a −1 then P must equal 0, and again T and S are specified by (4.3), with T = −1. Now suppose that some rows of x = (x_i)_{i=0}^{L} end in 1s and others in −1s. Then there exists a P for which the Bookends property is satisfied if and only if there is some number a ∈ [0, L) such that, for i ≤ a, row x_i ends in a 1, and for i > a, row x_i ends in a −1. In that case the Bookends property and (4.3) force (P, T) to equal either (a, a + 1) or (a + 1, a), and these two choices give the two triples (S, P, T) which satisfy all three properties, with S = T − σ_x in each case.

We can now classify with which P a given x may form a valid marked array.
Theorem 4.13. Let x = (x_i)_{i=0}^{L} be a valid array. If σ_x ≠ 0 then x corresponds to a unique walk, and if σ_x = 0 then x corresponds to exactly two distinct bridges.
Proof. By the uniqueness asserted in Theorem 4.7 it suffices to prove that if σ_x ≠ 0 (or if σ_x = 0) then there is a unique P (respectively exactly two distinct values P) for which (x, P) is valid. We proceed with three cases.
Case 1: σ_x > 0. By Theorem 4.7, for any valid choice of P the resulting S lies within [0, L]: a walk must start at a level from which it has some increments. By the Crossings property,

    u_{i−1} = d_i + 1 for i ∈ (S, T], and u_{i−1} = d_i for all other i.

These two properties uniquely specify S; and by Proposition 4.12 our choice of S uniquely specifies P.

Case 2: σ_x < 0. This is dual to case 1. In this case, S must satisfy

    u_{i−1} = d_i − 1 for i ∈ (T, S], and u_{i−1} = d_i for all other i.

Again S is uniquely specified, and by Proposition 4.12 P is uniquely specified.

Case 3: σ_x = 0. In this case, the Crossings property asserts that u_i = d_{i+1} for every i; this places no constraints on P, T, or S. By our assumption that x is valid, it therefore satisfies the Crossings property regardless of P, so the only constraints on P come from the Bookends property.
The Crossings property tells us that d_0 = u_{−1} = 0 and u_L = d_{L+1} = 0, so x_0 ends in a 1 and x_L ends in a −1. We observed in the proof of Proposition 4.12 that in this case there are either zero or two values P for which (x, P) is valid. And by our assumption that x is valid, there are two such values.
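Theorem 4.13 can be illustrated computationally: trying every mark P and keeping those for which the instruction-following reconstruction succeeds shows one valid mark when σ_x ≠ 0 and two when σ_x = 0 (a brute-force sketch with our own hypothetical helper name):

```python
def valid_marks(rows):
    """Marks P for which (rows, P) reconstitutes a walk by following instructions."""
    good = []
    for P in range(len(rows)):
        r = [list(row) for row in rows]
        T = P + r[P][-1]
        S = T - sum(sum(row) for row in r)
        if not 0 <= S < len(rows):
            continue                     # the walk must start at a level with increments
        level, ok = S, True
        for _ in range(sum(len(row) for row in r)):
            if not (0 <= level < len(r)) or not r[level]:
                ok = False               # ran out of instructions mid-walk
                break
            level += r[level].pop(0)
        if ok and level == T and all(not row for row in r):
            good.append(P)
    return good

# A bridge array (sigma = 0) admits two marks, hence two distinct bridges;
# an array with sigma != 0 admits exactly one.
assert valid_marks([[1], [-1]]) == [0, 1]
assert valid_marks([[1], [1, -1, 1], [-1]]) == [1]
```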

Partitioned walks
In this section we introduce partitioned walks and define the map β suggested in equations (1.1) and (1.2). A partitioned walk is a walk with its increments partitioned into contiguous blocks with one block distinguished. Partitioned walks correspond in a natural manner with marked arrays (not just valid marked arrays). Theorem 5.8, which is the main result of this section, describes the β-image of the valid marked arrays. The elements of this image set are called quantile partitioned walks. In section 6 we demonstrate a bijection between the quantile partitioned walks and the quantile pairs.
Let w be a walk of length n, and let the u w i and d w i be the up-and downcrossing counts of w from level i, as defined in the previous section.
Definition 5.1. For j ∈ [0, L + 1], define t^w_j to be the number of increments of w at levels below j:

    t^w_j := Σ_{i=0}^{j−1} (u^w_i + d^w_i).    (5.1)

So 0 = t^w_0 < · · · < t^w_{L+1} = n. We call t^w_j the j-th saw tooth of w. Whenever it is clear from context we suppress the superscripts in the saw teeth of a walk.
Note that the helper variable employed in the Quantile bijection theorem, Theorem 2.7, appears in this sequence:

    φ_w^{-1}(n) = t^w_{P+1}.

This is because the n-th increment of w is its final increment at the preterminal level. We are interested in the saw teeth in part because fewer considerations go into the value of Q(w) at t^w_j than at a general time t. In particular, Q(w)(t^w_j) ignores the order of increments within each level of w.

Lemma 5.2. For a walk w and j ∈ [0, L + 1],

    Q(w)(t_j) = Σ_{i=0}^{j−1} (u^w_i − d^w_i),    (5.2)

and for j ∈ [0, L],

    Q(w)(t_{j+1}) = u^w_j + (j − S)^+ − (j − T)^+.    (5.3)

Proof. We note that Q(w)(t_j) is the sum of all increments of w that belong to levels less than j. This proves equation (5.2). Regrouping the terms of (5.2) and applying equation (4.5) then gives equation (5.3).
Equation (5.3) is a discrete-time form of Tanaka's formula, the continuous-time version of which we recall in section 8. Briefly, the value Q(w)(t_{j+1}) corresponds to the integral ∫_0^1 1{X(t) ≤ a} dX(t) in that it sums all increments of w which appear below the fixed level j; the term u_j corresponds to (1/2)ℓ(a), since roughly half of the visits of a simple random walk to level j are followed by up-steps; and the latter terms j − S and j − T correspond to a and a − X(1). Further discussion of the discrete Tanaka formula may be found in [38,19,49,51].
Equation (5.3) takes the following form in the bridge case. The saw teeth partition the increments of Q(w) into blocks in the manner illustrated in Figure 5.1: increments from the j-th block, between t_j and t_{j+1}, correspond to increments from the j-th level of w. This partition provides the link between increment arrays and the quantile transform, as illustrated in Figure 5.2. There, the saw teeth are shown as vertical dotted lines partitioning the increments of Q(w); each block of this partition consists of the increments from a row of x^w, strung together in sequence. We will now define the map β alluded to in equations (1.1) and (1.2), such that it satisfies those equations. We define the partitioned walks to serve as a codomain for this map.
A partitioned walk is a triple (v, (t_j)_{j=0}^{L+1}, P), where v is a walk, say of length n, 0 = t_0 < t_1 < · · · < t_{L+1} = n, and P ∈ [0, L]. Here we are taking the t_j, L, and P to be arbitrary numbers, rather than the saw teeth and distinguished levels of v. The name "partitioned walk" refers to the manner in which the times t_i partition the increments of v into blocks. We call the block of increments of v bounded by t_P and t_{P+1} the preterminal block of v. We say that such a partitioned walk v corresponds to a walk w if v = (Q(w), (t_i^w)_{i=0}^{L_w}, P_w). Definition 5.5. Define β to be the map which sends a marked array (x, P) to the partitioned walk obtained by stringing together its rows, and define γ to be the map from partitioned walks to walk-index pairs. We address the map γ in section 6. The map β may be thought of as stringing together increments one row at a time, as illustrated on the right in Figure 5.2, as well as in Figure 5.3. In this latter example neither the array nor the partitioned walk corresponds to any (unpartitioned) walk. While it is clear that β is a bijection, we are particularly interested in the image of the set of valid marked arrays. Before we describe this image, we make a couple more definitions.
Let v = (v, (t_i)_{i=0}^{L+1}, P) be a partitioned walk. Motivated by the later terms in equation (5.3), we define the trough function for v, together with the indices T and S. This is the partitioned-walk analogue of equation (4.3) for marked arrays. If they exist, then we call the block of increments bounded by t_S and t_{S+1} the start block, and the block bounded by t_T and t_{T+1} the terminal block.
A partitioned walk with the Bookends and Saw properties is called a quantile partitioned walk. The equivalence of the Bookends properties for partitioned walks and for marked arrays is clear. We first define the saw path of a partitioned walk, and use this to generate several useful restatements of the Saw property. Then we demonstrate the equivalence of the Saw property for partitioned walks to the Crossings property for arrays, and use this to prove the theorem.
Definition 5.9. For any partitioned walk v = (v, (t_i), P), we define the saw path S_v to be the minimal walk that equals v at each time t_i.
The saw teeth of a walk w have been so named because they typically coincide with the maxima of the saw path S_v, where v = β ∘ α(w).
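A short sketch of the saw path, assuming (as minimality in Definition 5.9 suggests) that the walk takes all of its down-steps before its up-steps within each block; saw_path is our own illustrative name.

```python
def saw_path(v, teeth):
    """Minimal +-1 walk agreeing with v at each time in `teeth`:
    within each block it takes all of its down-steps first, then all
    of its up-steps, so it is pointwise as low as possible."""
    s = [v[0]]
    for a, b in zip(teeth, teeth[1:]):
        length, rise = b - a, v[b] - v[a]
        downs = (length - rise) // 2
        for _ in range(downs):
            s.append(s[-1] - 1)
        for _ in range(length - downs):
            s.append(s[-1] + 1)
    return s


# example: a walk with +-1 steps and some chosen agreement times
v = [0, 1, 2, 1, 2, 1, 2, 1]
teeth = [0, 1, 4, 6, 7]
s = saw_path(v, teeth)
```

The result agrees with v at each listed time and lies pointwise at or below v, so within each block its local maxima occur at the teeth.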
Thus, it suffices to show that the Saw property is equivalent to (5.10). First we express a few quantities in terms of the u_i's and d_i's; from these equations we obtain the desired identity. The Saw property asserts that 2M_v(j) equals the expression on the left-hand side above. The claim follows. Proof. We consider three cases. In fact, it follows from Theorem 5.8 that S ∈ [0, L], but we require the weaker result of Lemma 5.12 to prove the theorem.
Proof of Theorem 5.8. Let (x, P) be a marked array and let β(x, P) = v = (v, (t_j)_{j=0}^{L+1}, P). Clearly (x, P) has the Bookends property for arrays if and only if v has the Bookends property for partitioned walks. For the remainder of the proof, we assume that both have the Bookends property.
It suffices to prove that v has the Saw property if and only if (x, P) has the Crossings property. In fact, the Saw property is equivalent to the Crossings property even outside the context of the Bookends property, but we sidestep that proof for brevity's sake.
Let (u_j) and (d_j) denote the up- and down-crossing counts of x; these also count the up- and down-steps of v between consecutive partitioning times t_j and t_{j+1}. Let S and T denote the start and terminal row indices for (x, P), or equivalently, the start and terminal block indices for v.
The Saw property for v is equivalent to the following three conditions. The Saw property implies (5.18) by way of Lemma 5.12; and given (5.18), equation (5.19) is equivalent to (5.10), which in turn is equivalent to the Saw property by Lemma 5.10. The Crossings property for (x, P) is equivalent to those same three conditions. The validity of (x, P) implies (5.18) via Theorem 4.7: because the array corresponds to a walk, it must have S ∈ [0, L]. Furthermore, given (5.18) the Crossings property may be shown to be equivalent to (5.19) by substituting in the formula (5.7) for M_v.

The quantile bijection theorem
In this section we give a lemma which will help us show that γ is injective on the quantile partitioned walks. We then apply this lemma to prove Theorem 2.7, the Quantile bijection theorem.
A partitioned walk v = (v, (t_i)_{i=0}^{L+1}, P) has the Saw and Bookends properties if and only if the following two conditions hold.
(i) For every j ∈ [0, P], equation (6.1) holds; (ii) for every j ∈ [P + 1, L], equation (6.2) holds. Proof. The Saw property of v is equivalent, by algebraic manipulation, to the conditions that for j ∈ [0, P] the t_j must solve equation (6.3) for t, and for j ∈ [P + 1, L] the t_{j+1} must solve equation (6.4). Now suppose that some s solves equation (6.3) for some j ≤ P. A time r < s offers another solution to (6.3) if and only if v(r) + r = v(s) + s. This is equivalent to the condition that v takes only down-steps between the times r and s. Therefore t_j equaling the least solution to (6.3) is equivalent to the t_j-th increment of v being an up-step, as required by the Bookends property.
Similarly, suppose that s solves equation (6.4) for some j ≥ P + 1. A time r < s provides another solution if and only if v(r) − r = v(s) − s, which is equivalent to the condition that v takes only up-steps between r and s. Therefore t_{j+1} equaling the least solution to (6.4) is equivalent to the t_{j+1}-st increment of v being a down-step, as required by the Bookends property. Equation (5.8) defines T from P in such a way that the t_{P+1}-st increment of v will always satisfy the Bookends property. Thus, if (6.1) holds for every j ∈ [0, P] and (6.2) holds for every j ∈ [P + 1, L], then the Bookends property is met at every t_j.
Finally, we are equipped to prove our main discrete-time result.
Theorem 2.5 asserts that this map sends walks to quantile pairs, and by Proposition 3.3 the set of walks with a given number of up- and down-steps has the same cardinality as the set of quantile pairs with those same numbers of up- and down-steps. Theorems 4.7 and 5.8 assert that β ∘ α is a bijection from the walks to the quantile partitioned walks, so it suffices to prove that γ is injective on the quantile partitioned walks.
Note the identity given by Definition 5.6. We prove by induction that v must equal v′, and therefore that γ is injective on the quantile partitioned walks.
Base case: t_{P+1} = t′_{P′+1} = k. Inductive step: we assume that t_{P+1−i} = t′_{P′+1−i} > 0 for some i ≥ 0. Then Lemma 6.1 applies. By induction, t_{P+I} = t′_{P′+I} wherever both are defined. Thus there is some greatest index I ≤ 0 at which these simultaneously reach 0. This I must equal both −P and −P′. By the same reasoning L = L′. We conclude that v = v′.
We also have the following special case.
where 2k is the duration of the final excursion of d.

The Vervaat transform of a simple walk
The quantile transform has much in common with the (discrete) Vervaat transform V , studied in [53]. For discussions of this and related transformations, see Bertoin [9] and references therein. Like the quantile transform, the Vervaat transform permutes the increments of a walk.
Breaking with the usual convention, we let mod n denote the map from Z to the (mod n) representatives in [1, n] (instead of the standard [0, n − 1]). Definition 7.1. Given a walk w of length n, let τ_V(w) denote the first time at which w attains its minimum value. The Vervaat permutation ψ_w is the cyclic permutation i ↦ (i + τ_V(w)) mod n.
As with the quantile transform, we define the Vervaat transform V via the walk whose i-th increment is x_{ψ_w(i)}. This transformation was studied by Vervaat because of its asymptotic properties: as scaled simple random walk bridges converge in distribution to the Brownian bridge, the Vervaat transforms of these bridges converge in distribution to a continuous-time version of the Vervaat transform, applied to the Brownian bridge.
Surprisingly, the discrete Vervaat transform satisfies a bijection theorem very similar to that for Q.

Definition 7.2. A Vervaat pair is a pair (v, k) where v is a walk of length n and k is a nonnegative integer such that v(j) ≥ 0 for 0 ≤ j ≤ k and v(j) > v(n) for k ≤ j < n.

Theorem 7.3. The map w ↦ (V(w), n − τ_V(w)) is a bijection between the walks of length n and the Vervaat pairs.
Proof. If we know that a pair (v, k) arises in the image of the map w ↦ (V(w), n − τ_V(w)), then it is clear how to invert this map: we read off increments x_i from v, taking these indices mod n, and we define F(v, k) to be the walk with increments x_i. Then F(V(w), n − τ_V(w)) = w. We show that for every w the pair (V(w), n − τ_V(w)) is a Vervaat pair, and that every Vervaat pair satisfies (v, k) = (V(F(v, k)), n − τ_V(F(v, k))).
Let w be a walk of length n. By definition of τ_V, for every j ∈ [0, τ_V(w)) we have w(j) > w(τ_V(w)). It follows that (V(w), n − τ_V(w)) is a Vervaat pair. Now, consider a Vervaat pair (v, k). Then by definition of F and by the properties of the pair, for j ∈ [0, n − k) we have F(v, k)(j) > F(v, k)(n − k), and for j ∈ [n − k, n] we have F(v, k)(j) ≥ F(v, k)(n − k). Thus τ_V(F(v, k)) = n − k, and the result follows.
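The proof above can be exercised directly. Below is a small Python sketch of V and the helper n − τ_V, with τ_V taken (as in the proof) to be the first time the walk attains its minimum; vervaat is our own name for it.

```python
def vervaat(w):
    """Discrete Vervaat transform of a walk w = [w(0), ..., w(n)].

    tau_V is the first time w attains its minimum; the increments
    are cycled so the transformed walk starts from that time.
    Returns the pair (V(w), n - tau_V(w)) of Theorem 7.3.
    """
    n = len(w) - 1
    x = [w[i] - w[i - 1] for i in range(1, n + 1)]
    tau = w.index(min(w))            # first arrival at the minimum
    shifted = x[tau:] + x[:tau]      # cyclic shift of the increments
    v = [0]
    for inc in shifted:
        v.append(v[-1] + inc)
    return v, n - tau


w = [0, 1, 0, -1, 0, 1, 2, 1]
v, k = vervaat(w)
# (v, k) is a Vervaat pair: v is nonnegative up to time k, and stays
# strictly above its final value from time k until the end
```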
To our knowledge, this result has not been given explicitly in the literature. This statement strongly resembles our statement of Theorem 2.7, but we note two differences. The first is the helper variable: the helper variable in this theorem equals ψ_w^{-1}(n) except in the case where w is a first-passage bridge to a negative value, in which case ψ_w^{-1}(n) = n whereas n − τ_V(w) = 0; in our statement of Theorem 2.7, the helper always equals φ_w^{-1}(n) and so never equals 0. The second difference is that the value V(w)(k) must be nonnegative, whereas Q(w)(k) may equal −1 (see Figure 2.4). Again, this only affects the case where w(n) < 0.

The quantile transform of Brownian motion
Our main theorem in the continuous setting compares the quantile transform to a related path transformation.
We begin with some key definitions and classical results. Let (B(t), t ∈ [0, 1]) denote standard real-valued Brownian motion. Let (B^br(t), t ∈ [0, 1]) denote a standard Brownian bridge and (B^ex(t), t ∈ [0, 1]) a standard Brownian excursion; see, for example, Mörters and Peres [40] or Billingsley [12] for the definitions of these processes. When we wish to make statements or definitions which apply to all three of B, B^br, and B^ex, we use (X(t), t ∈ [0, 1]) to denote a general pick from among these. Finally, we use 'd=' to denote equality in distribution. The existence of an a.s. jointly continuous version of local time is well known, and is originally due to Trotter [52]; we often abbreviate the notation accordingly.

Recall that for a walk w, the value Q(w)(j) is the sum of increments from w which appear at the j lowest values in the path of w. Heuristically, at least, there is a continuous-time analogue of this formula. It would define Q(X)(t) as the sum of the bits of the path of X which emerge from below a certain threshold: the exact threshold below which X spends a total of time t. But it is unclear how to make sense of the integral: it cannot be an Itô integral because the integrand is not adapted. A result of Perkins [41, p. 107] allows us to make sense of this and similar integrals. We quote Tanaka's formula, in which J(a) equals the right-hand side of (8.5), a semimartingale with respect to a certain naturally arising filtration. This motivates the following definition.

Definition 8.2. The quantile transform of Brownian motion / bridge / excursion is
In the bridge and excursion cases this expression reduces to Q(X)(t) := (1/2) ℓ(A(t)). (8.9) We call upon classic limit results relating Brownian motion and its local times to their analogues for simple random walk. The work here falls into the broader scheme of limit results and asymptotics relating random walk local times to Brownian local times. We rely heavily on two results of Knight [36,35] in this area. Much else has been done around local time asymptotics; in particular, Csáki, Csörgő, Földes, and Révész have collaborated extensively, as a foursome and as individuals and pairs, in this area. We mention a small segment of their work: [47,46,18,19,20,17]. See also Bass and Khoshnevisan [7,6] and Szabados and Székely [50]. From elementary properties of Brownian motion, (S_n(j), j ≥ 0) is a simple random walk. We call the sequence of walks S_n the simple random walks embedded in B. Since we will be dealing with the quantile transformed walk Q(S_n), we define a rescaled version. Note that τ_n(4^n) is the sum of 4^n independent, Exp(4^n)-distributed variables. By a Borel-Cantelli argument, the τ_n(4^n) converge a.s. to 1. So the walks S_n depend upon the behavior of B on an interval converging a.s. to [0, 1] as n increases.
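As an illustration of the embedding, the following Python sketch extracts a simple walk from a finely sampled path by recording successive moves of a full grid step. Here embedded_walk is our own helper, and we use a deterministic zigzag in place of an actual Brownian path.

```python
def embedded_walk(path, step):
    """Record the successive grid values (spacing `step`) that the
    path reaches, each rescaled to a +-1 move from the last one."""
    vals = [path[0]]
    for p in path[1:]:
        while abs(p - vals[-1]) >= step:
            vals.append(vals[-1] + (step if p > vals[-1] else -step))
    return [round((v - path[0]) / step) for v in vals]


# deterministic zigzag standing in for a sampled Brownian path
path = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6,
        0.5, 0.4, 0.3, 0.2, 0.1, 0.0, -0.1, -0.2, -0.3]
S = embedded_walk(path, step=0.25)
```

Whatever the driving path, the extracted sequence moves by exactly ±1 at each step, which is the property used above.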
The remainder of this section works to prove that, as n increases, the Q(S_n) almost surely converge uniformly to Q(B).
Definition 8.4. We define the (discrete) local time L_n(x) of S_n at level x ∈ R. This is a linearly interpolated version of the standard discrete local time. We also require a rescaled version, L̄_n(x) := 2^{-n} L_n(2^n x).
We note that previous authors have stated convergence results for a discrete version of Tanaka's formula; see Szabados and Székely [51, p. 208-9] and references therein. However, these results are not applicable in our situation due to the random time change A(t) that appears in our continuous-time formulae.
We require several limit theorems relating simple random walk and its local times to Brownian motion, summarized below:
S̄_n(·) → B(·) a.s. uniformly (Knight, 1962 [36]); (8.13)
L̄_n(·) → ℓ(·) a.s. uniformly (Knight, 1963 [35]). (8.15)
Equation (8.13) is an a.s. variant of Donsker's Theorem, which is discussed in standard textbooks such as Durrett [26] and Kallenberg [33]. Equation (8.14) is a corollary to the Knight result: both max and min are continuous with respect to the uniform convergence metric. The map from a process to its local time process, on the other hand, is not continuous with respect to uniform convergence; thus, equation (8.15) stands as its own result. An elementary proof of this latter result, albeit with convergence in probability rather than a.s., can be found in [46], along with a sharp rate of convergence. Knight [37] gives a sharp rate of convergence under the L² norm.
Definition 8.6. The cumulative distribution function (CDF) of occupation measure for S_n is denoted by F_n. Compare this to F, the CDF of occupation measure for B, defined in equation (8.2); we have restated it to highlight the parallel to F_n. Also note that, for integers k, the identity (8.16) holds. Equations (8.15) and (8.14) have the following easy consequence.
Corollary 8.7. As n increases the F̄_n a.s. converge uniformly to F.
Because Brownian motion is continuous and simple random walk cannot skip levels, the CDFs F and F_n are strictly increasing between the times where they leave 0 and the times where they reach their maxima, 1 or 4^n respectively. This admits the following definitions.
Compare these to A defined in equation (8.3) in the introduction. Lemma 8.9. As n increases the Ā_n a.s. converge uniformly to A.
Proof. In passing a convergence result from a function to its inverse it is convenient to appeal to the Skorokhod metric. For continuous functions, uniform convergence on a compact interval I ⊂ R is equivalent to convergence under the Skorokhod metric (see [12]). Let i denote the identity map on I, let || · || denote the uniform convergence metric, and let Λ denote the set of all increasing, continuous bijections on I. The Skorokhod metric may be defined as follows: (8.17) Thus, it suffices to prove a.s. convergence under σ. Fix ε > 0. By the continuity of A, there is a.s. some 0 < δ < ε sufficiently small so that the corresponding modulus bound holds. And by equation (8.14) and Corollary 8.7 there is a.s. some n so that, for all m ≥ n, the analogous bounds hold. We show that σ(Ā_n, A) < 3ε. We seek a time change λ : [0, 1] → [0, 1] which is close to the identity and for which Ā_n ∘ λ is close to A. Ideally, we would like to define λ = F̄_n ∘ A so as to get Ā_n ∘ λ = A exactly. But there is a problem with this choice: because S̄_n and B may not have the exact same max and min, F̄_n ∘ A may not be a bijection on [0, 1]. We turn this map into a bijection by manipulating its values near 0 and 1.
For t near 0 we modify the definition, and likewise for t > 1 − δ.
Next we consider the difference between A and Ā_n ∘ λ. These are equal on [δ, 1 − δ]. For t < δ we obtain the corresponding bound: by our choices of n and δ, the lower bounds on these intervals both lie within 2ε of δ. A similar argument works for t > 1 − δ. Thus A(t) lies within 2ε of Ā_n ∘ λ(t).
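The objects in the preceding argument can be imitated in a few lines of Python: an occupation CDF at integer levels and its right inverse, a discrete stand-in for F̄_n and Ā_n. Both function names are ours, and the linear interpolation used in the text is omitted.

```python
def occupation_cdf(w):
    """Fraction of time steps (over [0, n)) the walk spends at or
    below each integer level between its min and max."""
    n = len(w) - 1
    lo, hi = min(w[:-1]), max(w[:-1])
    return {k: sum(1 for i in range(n) if w[i] <= k) / n
            for k in range(lo, hi + 1)}


def quantile_of_occupation(F, t):
    """Right inverse of the occupation CDF: the lowest level by which
    at least a fraction t of the time has accumulated."""
    return min(k for k, mass in F.items() if mass >= t)


w = [0, 1, 0, -1, 0, 1, 2, 1]
F = occupation_cdf(w)
```

Since the walk cannot skip levels, F is strictly increasing over the visited levels, which is what makes the inverse well behaved.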
For our purpose, the important consequence of the preceding lemma is the following. General results for convergence of randomly time-changed random processes can be found in Billingsley [12], but in the present case the proof of Corollary 8.10 from equation (8.15) and Lemma 8.9 is an elementary exercise in analysis, thanks to the a.s. uniform continuity of ℓ.
We now make use of the up- and down-crossing counts described in Definition 4.3, and of the saw teeth in Definition 5.1. For our present purpose it is convenient to re-index these sequences; we continue to call these quantities up- and down-crossing counts and saw teeth.
Note that the strict inequality in the bound on j in the definition of m_n is necessary.
Comparing the sequence (u_i^{S_n}) in Definition 4.3 with the sequence (u_i^n), we have u_i^n = u_{i+m_n}^{S_n}. Comparing the sequence (t_i^{S_n}) defined in Definition 5.1 with the sequence (t_i^n), we have t_i^n = t_{i+m_n}^{S_n}. Note that L_n(k) = u_k^n + d_k^n = t_{k+1}^n − t_k^n. (8.20) At saw tooth times, the quantile transform Q(S_n) is uniformly well approximated by a formula based on discrete local time.
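The identity (8.20) is easy to check by machine. The sketch below counts up- and down-crossings with each increment indexed by its starting value, our reading of the elided Definition 4.3; crossing_counts is our own name.

```python
def crossing_counts(w):
    """Up- and down-crossing counts, with each increment indexed by
    the value of the walk at its start; u[k] + d[k] is then the
    number of time steps spent at level k, the discrete local time."""
    u, d = {}, {}
    for i in range(1, len(w)):
        k = w[i - 1]
        if w[i] > w[i - 1]:
            u[k] = u.get(k, 0) + 1
        else:
            d[k] = d.get(k, 0) + 1
    return u, d


w = [0, 1, 0, -1, 0, 1, 2, 1]
u, d = crossing_counts(w)
# local times 1, 3, 2, 1 at levels -1, 0, 1, 2 equal the successive
# differences of the saw teeth 0, 1, 4, 6, 7 of this walk
```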
Lemma 8.12. Let A_k^n denote A_n(t_k^n). As n increases the following quantities a.s. vanish uniformly in k: Proof. The convergence of (ii) follows from that of (i) by equation (8.16), which gives the corresponding identity for integers k; (iii) then follows by Corollary 8.7. The convergence of (iv) follows from that of (ii) by Lemma 8.9 and the uniform continuity of A. And finally, (v) follows from the others by the discrete Tanaka formula, equation (5.3). Note that by re-indexing, we have replaced the S and T from that formula, which are the start and terminal levels, with 0 and S_n(4^n) respectively, which are the start and terminal values of S_n. Thus, it suffices to prove the convergence of (i).
If we condition on L_n(k) then u_k^n is distributed as Binomial(L_n(k), 1/2). Our intuition going forward is this: if L_n(k) is large then (L_n(k) − 2u_k^n)/√(L_n(k)) approximates a standard Gaussian distribution. Throughout the remainder of the proof, let binom(n) denote a Binomial(n, 1/2) variable on a separate probability space. Fix ε > 0. Let M be sufficiently large so that the required tail bound holds for all n ≥ M; such an M must exist by the central limit theorem and well-known bounds on the tails of the normal distribution. Let N ≥ M be sufficiently large so that for all n ≥ N, max_t |S_n(t)| < 2^n C_1 and max_x L_n(x) < 2^n C_2.
We now apply the Borel-Cantelli Lemma.
The claimed convergence follows by Borel-Cantelli.
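The binomial tail estimate invoked in the preceding proof can be sanity-checked exactly for small sizes. The sketch below compares the exact tail of a centered, doubled Binomial(m, 1/2) with the standard Hoeffding bound 2·exp(−t²/2m); this bound is a stand-in for the paper's elided inequality, not a quotation of it.

```python
from math import comb, exp


def binom_tail(m, t):
    """Exact P(|2 * Binomial(m, 1/2) - m| >= t)."""
    hits = sum(comb(m, j) for j in range(m + 1) if abs(2 * j - m) >= t)
    return hits / 2 ** m


# Hoeffding's inequality: P(|2*Binomial(m,1/2) - m| >= t) <= 2*exp(-t^2/(2m))
m = 100
ok = all(binom_tail(m, t) <= 2 * exp(-t * t / (2 * m))
         for t in range(0, m + 1, 10))
```

Deviations of order √(m log m) thus have summable probability, which is what feeds the Borel-Cantelli argument.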
Our proof implicitly appeals to the branching process view of Dyck paths. This perspective may be originally attributable to Harris [31] and was implicit in the Knight papers [36,35] cited earlier in this section. See also [43] and the references therein.
In order to prove Theorem 8.16, we must extend the convergence of (v) in the previous lemma to times between the saw teeth. The convergence of (iii) leads to a helpful corollary. Proof. Since min_k t_k^n = 0 and max_k t_k^n = 4^n, it suffices to prove that 4^{-n} sup_k (t_k^n − t_{k-1}^n) a.s. converges to 0. This follows from: the uniform continuity of F, the uniform convergence of the F̄_n to F asserted in Corollary 8.7, and the uniform vanishing of |F_n(k) − 2^{-n} t_k^n| asserted in Lemma 8.12. We now prove a weak version of Theorem 8.16 before demonstrating the full result.
Lemma 8.14. Let Z_n be the process which equals Q(S_n) at the saw teeth and is linearly interpolated in between, and let Z̄_n be the obvious rescaling. As n increases, Z̄_n a.s. converges uniformly to Q(B).

Proof. Let
and let Ȳ_n denote the process which equals X̄_n at the (rescaled) saw teeth 4^{-n} t_k^n and is linearly interpolated between these times. We prove the lemma by showing that the following differences of processes go to 0 uniformly as n increases: (i) X̄_n − Q(B), (ii) Ȳ_n − X̄_n, and (iii) Z̄_n − Ȳ_n.
The uniform vanishing of (i) follows from equations (8.13) and (8.15), Lemma 8.9, and Corollary 8.10. That of (iii) is equivalent to item (v) in Lemma 8.12. Finally, each of the three terms on the right in equation (8.22) converges uniformly to a uniformly continuous process, so by Corollary 8.13, (Ȳ_n − X̄_n) a.s. vanishes uniformly as well.
Before the technical work of extending this lemma to a full proof of Theorem 8.16 we mention a useful bound for a simple random walk bridge (D(j), j ∈ [0, 2n]). This bound may be obtained via the reflection principle and some approximation of binomial coefficients; we leave the details to the reader. The Brownian analogue to this bound appears in Billingsley [12, p. 85]. For our purposes the '2' in the exponent above is unnecessary, so we have sacrificed it to keep our discrete-time inequality (8.23) simple.
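Since inequality (8.23) itself is elided here, the following sketch illustrates the kind of quantity involved: it computes the exact probability that a 2n-step simple walk bridge leaves the strip (−c, c), by dynamic programming over confined paths; bridge_max_tail is our own name.

```python
from math import comb


def bridge_max_tail(n, c):
    """P(max_j |D(j)| >= c) for a simple random walk bridge D of
    length 2n (D(0) = D(2n) = 0), computed by counting the bridges
    that stay strictly inside the strip (-c, c)."""
    inside = {0: 1}                       # path counts by position
    for _ in range(2 * n):
        nxt = {}
        for pos, cnt in inside.items():
            for q in (pos - 1, pos + 1):
                if abs(q) < c:            # discard paths leaving the strip
                    nxt[q] = nxt.get(q, 0) + cnt
        inside = nxt
    return 1 - inside.get(0, 0) / comb(2 * n, n)
```

For instance, a 2-step bridge always reaches modulus 1 and never modulus 2, and for a 4-step bridge exactly the two paths UUDD and DDUU reach modulus 2.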
Lemma 8.15. Fix ε, δ > 0. Let (λ_k^n)_{n,k≥0} be a family of random nonnegative integers and (W_k^n)_{n,k≥0} a family of walks, each having length λ_k^n and exchangeable increments of ±1. Suppose that the W_k^n are mutually independent conditional on {λ_k^n, W_k^n(λ_k^n)}_{n,k≥0}, and suppose further that there is some a.s. finite N such that the stated bounds hold for n ≥ N. Then there is a.s. some largest n for which the event in question occurs. Proof. We prove this with a coupling argument. First, we observe the relevant decomposition. Next, we introduce a family of random walks D_k^n which, conditional on (λ_k^n)_{n,k≥0}, are independent of each other and of the W_k^n. Let D_k^n be a simple random walk bridge to 0 in the case where λ_k^n is even, or to 1 in the case where λ_k^n is odd. Now let W_k^n be coupled to D_k^n accordingly. We now arrive at our main result. Proof. Let Z_n and Z̄_n be as in Lemma 8.14. After that lemma it suffices to prove that (Q(S_n) − Z_n) vanishes uniformly as n increases. By definition, this difference equals 0 at the saw teeth. Moreover, we deduce from Theorems 4.7 and 5.8 that conditional on Z_n, the walk Q(S_n) is a simple random walk conditioned to equal Z_n at the saw teeth t_k^n, with some constraints, coming from the Bookends property, on its t_k^n-th steps.
We must bound the fluctuations of Q(S_n) in between the saw teeth. Heuristic arguments suggest that these ought to have size on the order of 2^{n/2}; we need only show that they grow uniformly slower than 2^n. We prove this via a Borel-Cantelli argument. There are many ways to bound the relevant probabilities of "bad behavior"; we proceed with a coupling argument.
For each (n, k) for which t_k^n is defined, i.e. with k ∈ [min S_n, max S_n], we define several processes and stopping times. These objects appear illustrated together in Figure 8.1. First, for j ∈ [0, L_n(k) − 1] we define the walks Ŵ_k^n and W̌_k^n. Recall from equation (8.20) that L_n(k) is the difference between consecutive saw teeth. We only define these walks up to time L_n(k) − 1 so as to sidestep issues around constrained final increments and the Bookends property. Observe that it suffices to bound the fluctuations of the Ŵ and W̌. We further define ∆_k^n := Q(S_n)(t_{k+1}^n − 1) − Q(S_n)(t_k^n). Observe that Ŵ_k^n(0) = 0 and Ŵ_k^n(L_n(k) − 1) = ∆_k^n, whereas W̌_k^n(0) = −∆_k^n and W̌_k^n(L_n(k) − 1) = 0. (8.25) If L_n(k) is an odd number then we may define a simple random walk bridge D_k^n that has random length L_n(k) − 1 but is otherwise independent of S_n (we enlarge our probability space as necessary to accommodate these processes). In the next paragraph we deal with the case where L_n(k) is even. Let T̂_k^n := min{j : D_k^n(j) + ∆_k^n = Ŵ_k^n(j)} and Ť_k^n := max{j : D_k^n(j) − ∆_k^n = W̌_k^n(j)}. These stopping times must be finite, thanks to the values of Ŵ and W̌ observed in (8.25). Finally we define the coupled walks.
Conditional on L_n(k), the D̂_k^n and Ď_k^n remain simple random walk bridges, albeit vertically translated. These are illustrated in Figure 8.1.
In the case where L_n(k) is even rather than odd, we modify the above definitions by making the bridge end at 1 (respectively −1, when ∆_k^n < 0) instead of 0, and including appropriate '+1's (respectively '−1's) in the definitions of T̂_k^n and D̂_k^n, so that the final value of D_k^n + ∆_k^n + 1 (resp. −1) aligns with that of Ŵ_k^n. Fix ε > 0. We may bound the extrema of Ŵ_k^n and W̌_k^n by bounding the extrema of D̂_k^n and Ď_k^n; in particular, we have the event inclusions (8.28). First we use previous results from this section to prove that a.s. only finitely many of the ∆_k^n are large. Then we make a Borel-Cantelli argument to do the same for the max_j |D_k^n(j)|. By the continuity of Q(B), there is a.s. some δ ∈ (0, ε²) sufficiently small so that the corresponding modulus bound holds. And there is a.s. some N sufficiently large so that three bounds hold for n ≥ N: the first two follow from the continuity of ℓ and equations (8.13) and (8.15); the third follows from Lemma 8.14. The second and third of these imply that for n ≥ N, |∆_k^n| ≤ |Z_n(t_{k+1}^n) − 2^n Q(B)(4^{-n} t_{k+1}^n)| + 2^n |Q(B)(4^{-n} t_{k+1}^n) − Q(B)(4^{-n} t_k^n)| + |2^n Q(B)(4^{-n} t_k^n) − Z_n(t_k^n)| ≤ 3 · 2^n ε.
So, folding constants into ε, there is a.s. some largest n for which any of the |∆_k^n| exceed 2^n ε. We proceed to our Borel-Cantelli argument to bound fluctuations in the D_k^n.
The last line above follows from (8.23). We conclude from the Borel-Cantelli Lemma that a.s. only finitely many of the D_k^n exceed 2^n ε in maximum modulus. So by the event inequality (8.28), a.s. only finitely many of the W_k^n exceed 2^{n+1} ε in maximum modulus.
Our main result in the continuous setting, Theorem 8.19 below, now emerges as a corollary.
Definition 8.17. Let τ_m denote the time of the (first) arrival of (X(t), t ∈ [0, 1]) at its minimum. The Vervaat transform maps X to the process V(X). This transform should be thought of as partitioning the increments of X into two segments, prior and subsequent to τ_m, and swapping the order of these segments.
(Vervaat, 1979 [53]); (Biane, 1986 [11]). (8.31) We demonstrated in section 7 that for simple random walks, the discrete-time analogue of the Vervaat transform of the walk has the same distribution as the quantile transform. Now we have shown that Q(B) arises as an a.s. limit of the quantile transforms of certain simple random walks.

Theorem 8.20 (Jeulin, 1985 [32]). If ℓ and A denote the local time and the quantile function of occupation measure, respectively, of a Brownian bridge or excursion, then the stated identity holds.

Let (x_i)_{i=1}^∞ be an increment sequence and let S be the walk with these increments; so the x_i are the increments of the process S. Fix some level l ≥ 0. We define S^−(n) (and respectively S^+(n)) to be the sum of the first n increments of S which originate at or below (resp. strictly above) the value l. That is, an increment x_i of S is an increment of S^− only if S(i − 1) ≤ l. This is illustrated in Figure 9.1; in that example, the increments x_3, x_7, x_8 and x_9 contribute to S^+(4). For the sake of brevity we omit a more formal definition, which may be found in [8]. We call the map S ↦ S^− the BCY transform (with parameter l).
The BCY transform resembles the quantile transform in that it sums increments below some level. But whereas the quantile transform may only be applied to a walk which has finite length or is upwardly transient, the BCY transform applies equally well to any walk.
There are two big differences between the BCY and quantile transforms. Firstly, in the case of the BCY transform, the process S − comprises all those increments which appear in S below some previously fixed level l; whereas in the case of the quantile transform, Q(S)(j) comprises (roughly) those increments which appear in S below a variable level which increases with j. Secondly, the increments of S − appear in the same order in which they appeared in S, whereas the increments of Q(S) appear in order of the value at which they appear in S.
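A minimal sketch of the BCY transform as described above, splitting increments by whether they originate at or below the level l; bcy is our own name, and S^+ is included for illustration.

```python
def bcy(w, l):
    """Split the increments of the walk w by the level at which they
    originate: s_minus collects increments starting at or below l (in
    their original order), s_plus the rest."""
    s_minus, s_plus = [0], [0]
    for i in range(1, len(w)):
        part = s_minus if w[i - 1] <= l else s_plus
        part.append(part[-1] + w[i] - w[i - 1])
    return s_minus, s_plus


w = [0, 1, 0, -1, 0, 1, 2, 1]
s_minus, s_plus = bcy(w, l=0)
# every increment of w ends up in exactly one of the two processes
```

Note the contrast with the quantile transform: here the threshold l is fixed in advance, and the increments of s_minus keep their original time order.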
If we suppose that the x_i are i.i.d. random variables then by the strong Markov property, S^− has the same distribution as S [8, Lemma 2]. But this is not the case for Q(S); Theorem 2.7 indicates that for S a simple random walk, Q(S) tends to rise at early times and fall later.
As a further example, the path transformation studied by Chaumont [15] resembles the concatenation of S^− followed by S^+, but with some delicate changes. To define it, we require different notation than that introduced earlier. Recall that the final increment of the walk w has no bearing on φ_w. We require a version of the permutation which does account for this increment. We draw from the notation of Port [44] and Chaumont [15]; this notation is used only in this section and nowhere else in the paper.
Definition 9.1. Let the increment sequence (x_i)_{i=1}^∞ and the process S be as above. Let (S_n(j), j ∈ [0, n]) denote the restriction of S to its n initial increments. For k ∈ [0, n] we define M_{nk}^S and L_{nk}^S so that (M_{n0}^S, L_{n0}^S); (M_{n1}^S, L_{n1}^S); · · · ; (M_{nn}^S, L_{nn}^S) is the increasing lexicographic reordering of the sequence (S(0), 0); (S(1), 1); · · · ; (S(n), n). We call the permutation (0, 1, · · · , n) ↦ (L_{n0}^S, · · · , L_{nn}^S) the quantile permutation of vertices of S_n (whereas φ_{S_n} might be thought of as a quantile permutation of increments). We define R_{nk}^S := #{i ≤ L_{nk}^S : S(i) ≤ M_{nk}^S}. We suppress the superscript when it is clear from context which process is being discussed.
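Definition 9.1 can be rendered directly in Python; vertex_quantile is our own name, and the example walk is arbitrary.

```python
def vertex_quantile(S):
    """Increasing lexicographic reordering of the pairs (S(i), i),
    yielding the sequences M_k, L_k of Definition 9.1, together with
    R_k = #{i <= L_k : S(i) <= M_k}."""
    pairs = sorted((S[i], i) for i in range(len(S)))
    M = [m for m, _ in pairs]
    L = [i for _, i in pairs]
    R = [sum(1 for i in range(l + 1) if S[i] <= m) for m, l in pairs]
    return M, L, R


S = [0, 1, -1, 0]
M, L, R = vertex_quantile(S)
# ties in the values of S are broken by time, per lexicographic order
```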
Both the BCY and Chaumont transforms are motivated by the following theorem.
Suppose that x_1, · · · , x_n are exchangeable real-valued random variables, and let S denote the process with these increments. Fix k ∈ [0, n] and let S′ denote the process S′(j) = S(k + j) − S(k) for j ∈ [0, n − k]. The identity in the first two coordinates in equation (9.2) is due to Wendel; Port made the (satisfying) extension of the result to the third coordinate. For more discussion of related results such as Sparre Andersen's Theorem [5,4] and Spitzer's Combinatorial Lemma [48], see Port [44]. Port's paper also gives, on page 140, a combinatorial formula for the probability distribution of φ_S(j) given the distributions of the increments of S.
Chaumont made the suggestive extension of (9.2) to the fourth coordinate and presented the first path-transformation-based proof of Port's result. Let the x_i and S be as in Theorem 9.2 and fix some k ∈ [0, n]. Chaumont's transformation works by partitioning the increments of S into four blocks. The Chaumont transform sends S to the process S̃ whose increments are the x_i with i ∈ I_1, followed by those with i ∈ I_2, then I_3, and finally I_4, with the increments within each block arranged in order of increasing index. Details may be found in [15, p. 3-4]. This transformation is illustrated in Figure 9.2, in which increments belonging to I_1 and I_2 are shown as solid lines, whereas those belonging to I_3 and I_4 are shown as dotted. If S has exchangeable random increments then S and S̃ have the same distribution; as with the BCY transform, this presents a marked difference from the quantile transform. Chaumont demonstrates that if we substitute S̃ for S on the right-hand side of equation (9.2) then we get actual equality, rather than merely identity in law.
Theorem 9.2 admits various continuous-time versions. Before stating some of these, we give a loose continuous-time analogue to the quantile permutation, due to Chaumont [16], in whose definition U is an independent Uniform[0, 1] random variable.
The analogy between m_s and the quantile permutation is flawed because m_s requires additional randomization in its definition. But there can be no bijection from [0, 1] to itself which has all of the properties we would want in a quantile permutation, so we must settle for m_s. Equation (9.4) is due to Dassios (1996) [21,22]; if X is a Brownian bridge plus drift, then the analogous identity (9.5) holds (Chaumont [16]).
Various path transformation-based proofs of (9.4) were obtained by Embrechts, Rogers, and Yor [29] in the Brownian case and by Bertoin et al. [8] in the Lévy case. Chaumont proved (9.5) with a continuous-time analogue to the Chaumont transform described above. These results have applications to finance in the pricing of Asian options. For a discussion of these applications see Dassios [21,22,23] and references therein.
Beyond connections in the literature around fluctuations of random walks and Brownian motion, we also find links between the quantile transform and discrete versions of Tanaka's formula. Such formulae have previously been observed by Kudzma [38], Csörgő and Révész [19], and Szabados [49]. See also [51]. The quantile transformed path may be thought of as interpolating between points specified by Tanaka's formula. This connection is made in section 5.