The argmin process of random walks, Brownian motion and Lévy processes

In this paper we investigate the argmin process of Brownian motion $B$ defined by $\alpha_t:=\sup\left\{s \in [0,1]: B_{t+s}=\min_{u \in [0,1]}B_{t+u} \right\}$ for $t \geq 0$. The argmin process $\alpha$ is stationary, with arcsine distributed invariant measure. We prove that $(\alpha_t; t \geq 0)$ is a Markov process with the Feller property, and provide its transition kernel $Q_t(x,\cdot)$ for $t>0$ and $x \in [0,1]$. Similar results for the argmin process of random walks and Lévy processes are derived. We also consider Brownian extrema of a given length. We prove that these extrema form a delayed renewal process with an explicit path construction. We also give a path decomposition for Brownian motion at these extrema.


Introduction and main results
In this paper we are interested in the argmin process (α t ; t ≥ 0) of standard Brownian motion (B t ; t ≥ 0). That is, α t := sup{s ∈ [0, 1] : B t+s = min u∈[0,1] B t+u } for all t ≥ 0.
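Although the arguments below are purely analytic, the definition of α is easy to probe numerically. The following sketch (the grid mesh dt and the sample count T are our own arbitrary choices) discretizes B, reads off the last argmin of each unit window, and compares the empirical law with the arcsine distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1e-3                          # grid mesh (our choice)
npu = int(round(1 / dt))           # grid points per unit of time
T = 200                            # number of unit windows sampled
B = np.r_[0.0, np.cumsum(rng.normal(0.0, np.sqrt(dt), (T + 1) * npu))]

samples = []
for k in range(T):
    window = B[k * npu : k * npu + npu + 1]   # B over [k, k + 1]
    m = window.min()
    # last time in [0, 1] at which the window attains its minimum
    samples.append(np.flatnonzero(window == m)[-1] * dt)
samples = np.array(samples)

print(samples.mean())              # arcsine mean is 1/2
print((samples <= 0.25).mean())    # arcsine CDF at 1/4 is 1/3
```

Since α t depends only on (B t+u ; 0 ≤ u ≤ 1), the samples at integer times are independent; the arcsine law has mean 1/2 and CDF (2/π) arcsin √x, which equals 1/3 at x = 1/4.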
Our approach relies on Denisov's decomposition and excursion theory, see Section 2.
We also investigate the law of jumps of (α t ; t ≥ 0). In particular, we prove that the argmin process α has local times at levels 0 and 1, and provide a Lévy system of (α t ; t ≥ 0). These results imply that the argmin process α is a time-homogeneous Markov process with an explicit description in the framework of jumping Markov processes of Jacod and Skorokhod [27], following the study of piecewise deterministic Markov processes by Davis [15]. The motivation for considering the argmin process comes from the study of Brownian extrema of a given length. In the second part of this paper we provide further insight into these extrema, following previous works of Neveu and Pitman [38], and Leuridan [32]. For a, b > 0, let (T a,b i ; i ≥ 1) with a < T a,b 1 < T a,b 2 < · · · be the times of the (a, b)-minima set of Brownian motion B.
The study of Brownian extrema dates back to Lévy [33]. See [29, Section 2.9] for further development. Neveu and Pitman [38] proved the renewal property of Brownian local extrema by looking at Brownian extrema of a given depth. They gave the following Palm description of Brownian local extrema. Theorem 1.3. [38] Let (C, B) be the space of continuous paths on R, equipped with Wiener measure W, and (E, E) be the space of excursions with lifetime ζ, equipped with Itô's law n (see Section 2.2 for discussion). For A ∈ B, define the σ-finite Palm measure of Brownian local minima by ν(A) := E#{0 ≤ t ≤ 1 : t is a local minimum and θ t ∈ A}, (1.6) where θ t := (b t+u − b t ; u ∈ R) is the space-time shift of a two-sided Brownian motion b. Then ν(A) is given by (1.7) for A ∈ B, where f : E × E × C ∋ (e, e′, w) ↦ f (e, e′, w) ∈ C is a mapping which concatenates the time-reversed excursion e′, the excursion e and the path w; in particular, f (e, e′, w) t = e t if 0 ≤ t ≤ ζ(e), and w t−ζ(e) if t ≥ ζ(e). (1.8) The quantity ν(A) is interpreted as the mean number of Brownian local minima of type A per unit time. See Kallenberg [28, Chapter 11] for background on Palm measures.
In different directions, Groeneboom [24] considered the global extremum of Brownian motion with a parabolic drift, where he gave a density formula in terms of Airy functions. Tsirelson [50] provided an i.i.d. uniform sampling construction of Brownian local extrema under external randomization. Abramson and Evans [1] considered Lipschitz minorants of Brownian motion, which is a variant of Brownian extrema.
Leuridan [32] studied the (a, b)-minima set M a,b := {t ∈ R : b t = inf s∈[t−a,t+b] b s } of a two-sided Brownian motion (b t ; t ∈ R). He proved that the times of the set M a,b form a renewal process, and provided the density of the inter-arrival times. For the one-sided Brownian motion B, the corresponding set is defined by (1.5). Theorem 1.4. [32] Let a, b > 0. The times of (a, b)-minima of Brownian motion (B t ; t ≥ 0) form a delayed renewal process, denoted by (T a,b i ; i ≥ 1) so that a < T a,b 1 < T a,b 2 < · · · . The inter-arrival times ∆ a,b i := T a,b i+1 − T a,b i are independent and identically distributed, with density ∑ n≥1 (−1) n−1 h * n a,b (t), (1.11) where h * n a,b is the n th convolution of h a,b . In addition, T a,b 1 is independent of (∆ a,b i ; i ≥ 1), and has density (1.12). Given a measurable set A ⊂ R + , let N a,b (A) := #(M a,b ∩ A) be the counting measure of (a, b)-minima in Brownian motion B. Leuridan's proof of Theorem 1.4 is based on the formula, for n ≥ 1 and 0 < t 1 < · · · < t n , E(N a,b (dt 1 ) · · · N a,b (dt n )) = 1/(π √ ab) ∏ n−1 k=1 h a,b (t k+1 − t k ) dt 1 · · · dt n , (1.13) with the convention that an empty product equals 1. The case n = 1 of (1.13) follows readily from Theorem 1.3, since for a generic (a, b)-minimum, the left excursion has length larger than a and the right excursion has length larger than b. This implies that the mean number of (a, b)-minima per unit time is given by (1/2) n(ζ(e′) > a) n(ζ(e) > b) = 1/(π √ ab).
In particular, E(∆ a,b i ) = π √ ab for all i ≥ 1. (1.14) However, to obtain (1.13) for n ≥ 2 requires extra work. Observe that for a + b = 1, the set M a,b can be viewed as the a-level set of the argmin process α; by Brownian scaling, M a,b = (a + b) α −1 (a/(a + b)) for a, b > 0. (1.15) According to Hoffmann-Jørgensen [25], and Krylov and Juškevič [31], the set M a,b enjoys the regenerative property, and is called a strong Markov set. See also Kingman [30] for a survey on regenerative phenomena of level sets of Markov processes. In Section 4.1, we recover Theorem 1.4, in particular (1.13), by using the properties of the argmin process α. Note that the density h a,b defined by (1.10) is induced by a σ-finite measure. By Leuridan's formula (1.11), the Laplace transform of ∆ a,b i is given by (1.16), provided that Ψ(λ) < 1. By analytic continuation, we extend (1.16) to all λ > 0. But it does not seem obvious how to simplify (1.16) analytically.
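As an illustrative check of the rate 1/(π √ ab) in (1.14), one can count grid (a, b)-minima of a discretized path; the parameters a, b, dt and T below are our own hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

a, b, dt, T = 0.25, 1.0, 0.005, 2000.0   # hypothetical choices
n = int(T / dt)
B = np.r_[0.0, np.cumsum(rng.normal(0.0, np.sqrt(dt), n))]

la, lb = int(a / dt), int(b / dt)
w = la + lb + 1                          # grid window covering [t - a, t + b]
mins = np.lib.stride_tricks.sliding_window_view(B, w).min(axis=1)
is_min = B[la : len(B) - lb] <= mins     # t attains the minimum of its window
rate = is_min.sum() / (T - a - b)

print(rate, 1 / (np.pi * np.sqrt(a * b)))
```

For a = 1/4 and b = 1 the predicted rate is 2/π ≈ 0.637; the empirical count per unit time should be close to this value at a fine mesh.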
While the description of M a,b is complicated for general a, b > 0, the case a = b is simpler. For simplicity, we consider a = b = 1. We shall give a construction of the (1, 1)-minima set from which we derive simple formulas for the Laplace transforms of ∆ i := T i+1 − T i and T 1 . Let J be the first descending ladder time of Brownian motion, from which starts an excursion above the minimum of length larger than 1. It is known that the Laplace transform of J is given by (1.17), where erf(x) := (2/√π) ∫ x 0 e −t 2 dt is the error function. See Proposition 2.3 for a derivation of (1.17).
The random variable J plays an important role in our construction of the (1, 1)-minima set. Let ∆ be distributed as the law of the inter-arrival times ∆ i , independent of J. It is a simple consequence of the construction in Section 4.2 of the Brownian path over [0, T 1 ] that the identity in law (1.18) holds. Combined with the fact that T 1 − 1 is the stationary delay for a renewal process with inter-arrival time distributed according to ∆, this leads to the following result: Theorem 1.5. Let (T i ; i ≥ 1) with 1 < T 1 < T 2 < · · · be times of the (1, 1)-minima set M 1,1 of Brownian motion B.
1. Let J ′ be an independent copy of J, whose Laplace transform is given by (1.17).
Then there is the identity in law (1.18). In particular, the Laplace transforms of T 1 and ∆ are given by (1.21). Consequently, ET 1 = 3 and E∆ = π.
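The moment identities ET 1 = 3 and E∆ = π lend themselves to a direct Monte Carlo check. The sketch below (mesh, horizon and path count are our own hypothetical choices) locates grid (1, 1)-minima on independent discretized paths:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, horizon, n_paths = 0.01, 30.0, 300   # hypothetical choices
half = int(1 / dt)
w = 2 * half + 1                         # grid window [t - 1, t + 1]

T1s, gaps = [], []
for _ in range(n_paths):
    n = int(horizon / dt)
    B = np.r_[0.0, np.cumsum(rng.normal(0.0, np.sqrt(dt), n))]
    mins = np.lib.stride_tricks.sliding_window_view(B, w).min(axis=1)
    cand = np.flatnonzero(B[half : len(B) - half] <= mins) + half
    if len(cand):
        t = cand * dt                    # grid (1,1)-minima of this path
        T1s.append(t[0])                 # first (1,1)-minimum
        gaps.extend(np.diff(t))          # inter-arrival times

print(np.mean(T1s))   # compare with E T_1 = 3
print(np.mean(gaps))  # compare with E Delta = pi
```

The first printed value estimates ET 1 and the second estimates E∆; both should be close to the theoretical values 3 and π at this mesh.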
Let ν a be the Palm measure of (a, a)-minima of a two-sided Brownian motion. Theorem 1.5 implies that ν a has total mass 1/(πa), and that (1.25) holds, where f is the mapping defined by (1.8). By taking a ↓ 0, the Palm measures ν a increase to the limit ν defined by (1.7). This recovers Theorem 1.3. The set M 1,1 is directly related to the argmin process α without scaling. In fact, T i is the i th time that the process α reaches 0 by a continuous passage from 1. So the law of Brownian fragments between (1, 1)-minima can be derived from the study of α. Let LE be the set of left ends of forward meanders of length 1, and RE be the set of right ends of backward meanders of length 1. In Lemma 4.5, we show that left ends come before right ends between any two consecutive (1, 1)-minima. So we define for each i ≥ 1 the triple (1.27). By using the Lévy system of the argmin process, we prove the following theorem which identifies the law of this triple.
Theorem 1.6. For each i ≥ 1: • the first component of the triple (1.27) is distributed as J, the Laplace transform of which is given by (1.17); • the density of D i − G i is given by (1.28). Consequently, (1.29) holds for each i ≥ 1. For a random variable X, let Φ X (λ) be the Laplace transform of X, and f X (·) be the density of X. Theorems 1.4-1.6 provide three different descriptions of the inter-arrival time ∆. This leads to some non-trivial identities. We summarize the results in the following table.
The three descriptions of the law of ∆ are given by (2.9), by (1.11), and by (1.29), respectively. Finally, we extend Theorem 1.2 to random walks and Lévy processes. Fix N ≥ 1. We study the argmin process (A N (n); n ≥ 0) of a random walk (S n ; n ≥ 0), defined by A N (n) := max{k ∈ {0, . . . , N } : S n+k = min 0≤j≤N S n+j } for all n ≥ 0, (1.30) where S n := ∑ n i=1 X i is the n th partial sum of (X n ; n ≥ 1) (with convention S 0 := 0), and (X n ; n ≥ 1) is a sequence of independent and identically distributed random variables with cumulative distribution function F . This is the discrete analog of the argmin process of Brownian motion. A similar argument as in the Brownian case shows that (A N (n); n ≥ 0) is a Markov chain. For n ≥ 1, let p n := P(S 1 ≥ 0, · · · , S n ≥ 0) and p̃ n := P(S 1 > 0, · · · , S n > 0). Theorem 5.2 below recalls the classical theory of how the two sequences of probabilities p n and p̃ n are determined by the sequences of probabilities P(S n ≥ 0) and P(S n > 0). We give the transition matrix of the argmin chain A N in terms of (p n ; n ≥ 1) and (p̃ n ; n ≥ 1), which can be made explicit for special choices of F . Theorem 1.7. Whatever the common distribution F of (X n ; n ≥ 1), the argmin chain (A N (n); n ≥ 0) is a stationary and time-homogeneous Markov chain on {0, 1, . . . , N }. Let Π N (k), k ∈ [0, N ], be the stationary distribution, and P N (i, j), i, j ∈ [0, N ], be the transition probabilities of the argmin chain (A N (n); n ≥ 0) on [0, N ]. Then (1.32) holds. Consequently: 1. Assume that (S n ; n ≥ 0) is a random walk with continuous distribution and P(S n > 0) = θ ∈ (0, 1) for all n ≥ 1, and let (θ) n↑ := ∏ n−1 i=0 (θ + i) be the Pochhammer symbol. Then (1.36) and (1.37) hold. 2. Assume that (S n ; n ≥ 0) is a simple symmetric random walk, and let ⌊x⌋ be the integer part of x. Then (1.38) holds if j is even, and (1.40) holds if N is even.
For the argmin chain A N , the transition probability from 0 to N is given by (1.33) in the general case. But this probability simplifies to (1.37) and (1.41) in the two special cases. These identities are proved analytically by Lemmas 5.3 and 5.5. We do not have a simple explanation, and leave combinatorial interpretations to the reader.
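For a concrete instance of the stationary distribution in the continuous symmetric case, the argmin of S 0 , . . . , S N follows the discrete arcsine law u k u N −k with u k = C(2k, k)/4 k , by the Sparre Andersen theory recalled in Section 5. A short Monte Carlo sketch (Gaussian steps are our own choice of F ):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
N, n_walks = 10, 50_000

steps = rng.normal(size=(n_walks, N))
S = np.hstack([np.zeros((n_walks, 1)), np.cumsum(steps, axis=1)])
emp = np.bincount(S.argmin(axis=1), minlength=N + 1) / n_walks

# discrete arcsine law: P(argmin = k) = u_k u_{N-k}, u_k = C(2k, k) / 4^k
u = np.array([comb(2 * k, k) / 4.0 ** k for k in range(N + 1)])
arcsine = u * u[::-1]

print(arcsine.sum())              # the weights sum to 1
print(np.abs(emp - arcsine).max())
```

For continuous steps the argmin is a.s. unique, so `argmin` (first occurrence) is unambiguous; the empirical and theoretical distributions should agree cell by cell.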
The following theorem is a generalization of Theorem 1.2.
Theorem 1.8. 1. Let (X t ; t ≥ 0) be a Lévy process. Then the argmin process (α X t ; t ≥ 0) of X is a stationary and time-homogeneous Markov process.
2. Let (X t ; t ≥ 0) be a stable Lévy process with parameters (α, β), and assume that neither X nor −X is a subordinator. Let ρ be defined by (1.43). Then the argmin process (α X t ; t ≥ 0) of X has a generalized arcsine distributed invariant measure.

Organization of the paper: The layout of the paper is as follows.
• In Section 2, we provide background and necessary tools which will be used later.
• In Section 3, we study the argmin process (α t ; t ≥ 0) of Brownian motion, and prove Theorem 1.2.
• In Section 4, we study the (a, b)-minima of Brownian motion with an emphasis on the case a = b = 1. There we prove Theorems 1.4, 1.5 and 1.6.
• In Section 5, we consider the argmin process of random walks and Lévy processes, and prove Theorems 1.7 and 1.8.

Background and tools
This section recalls some background on Brownian motion. In Section 2.1, we consider Denisov's decomposition for Brownian motion. In Section 2.2, we recall various results from Brownian excursion theory.

Path decomposition of Brownian motion
The Brownian meander can be regarded as the weak limit of Brownian motion conditioned to stay positive; we refer to Durrett et al. [17] for a proof. A Brownian meander of length x is obtained from a Brownian meander of length 1 by Brownian scaling. The following path decomposition is due to Denisov.
Let P x , defined in (2.3), be the law of two independent Brownian meanders of length x and 1 − x joined back-to-back, concatenated by an independent Brownian path running forever. Denisov's decomposition is equivalent to the statement that, given the position x of the minimum, the Brownian path has law P x .

Brownian excursion theory
Let (B t ; t ≥ 0) be standard Brownian motion, and B t := inf 0≤s≤t B s be the past-minimum process of B. For l > 0, let T l := inf{t > 0 : B t < −l} be the first time at which B hits below level −l.
Let D be the set of levels l at which l ↦ T l jumps, so that for l ∈ D, the path (B T l− +t + l; 0 ≤ t ≤ T l − T l− ) is an excursion away from −l. Let E be the space of excursions with lifetime ζ. The following theorem is a special case of Itô's excursion theory.
Here we consider positive excursions of the reflected process B − B. So the measure n(dε) corresponds to 2n + (dε) defined in Revuz and Yor [43, Chapter XII].
Let Λ(dx) be the Lévy measure of a 1 2 -stable subordinator, normalized as in (2.5). By applying the master formula of Poisson point processes, we obtain (2.6) for the age process of excursions of B − B, or equivalently of a 1 2 -stable subordinator.
The following proposition gathers useful results about J, the first descending ladder time of Brownian motion from which starts an excursion above the minimum of length exceeding 1. For completeness, we include a proof.

Proposition 2.3.
[42] Let J be the first descending ladder time of Brownian motion, from which starts an excursion above the minimum of length exceeding 1. 1. The Laplace transform of J is given by (1.17). 2. The density of J is given by an explicit series, where for each n ≥ 1, c n := (n − 1)!/(2 n−1 π n/2 Γ(n/2)), and I n is a function supported on (0, 1/n]. Alternatively, let τ := inf{l ∈ D : T l − T l− > 1} be the first level above which an excursion has length larger than 1, so that J = T τ − . As in [23], we deduce that (σ t ; t ≥ 0) is a 1 2 -stable subordinator with all jumps of size larger than 1 deleted; its Laplace exponent can then be computed explicitly, and the Laplace transform (1.17) of J follows. Part (2) is obtained by specializing Pitman and Yor [42, Proposition 20] to α = 1 2 and θ = 0.
Theorem 2.4. The process (B J+u − B J ; 0 ≤ u ≤ 1) is a Brownian meander of length 1, independent of (B u ; 0 ≤ u ≤ J).

The argmin process of Brownian motion
In this section, we study the argmin process α of Brownian motion defined by (1.1). In Section 3.1, we deal with the sample path properties of α. In Section 3.2, we provide a conceptual proof that the argmin process α is a Markov process with the Feller property. In Section 3.3, we study the jumps of α by means of a Lévy system. In Section 3.4, we compute the transition kernel of α, and prove Theorem 1.2. Finally in Section 3.5, we explain why Dynkin's criterion and the Rogers-Pitman criterion do not apply to the argmin process α.

Sample path properties
We have mentioned in the introduction that the argmin process (α t ; t ≥ 0) takes values in [0, 1], and drifts down at unit speed except for positive jumps. More precisely, we provide the following proposition. Proposition 3.1. Let (α t ; t ≥ 0) be the argmin process of Brownian motion. Then a.s.: (1) t ↦ t + α t is increasing; (2) (α t ; t ≥ 0) is a càdlàg process whose only jumps are (i) jumps from 0 to some x ∈ (0, 1), and (ii) jumps from some x ∈ (0, 1) to 1.
Proof of (2). Observe that (α t ; t ≥ 0) is a càdlàg process with only positive jumps. We first check (i). If α t− = 0 for some t > 0, then B u ≥ B t for all u ∈ [t, t + 1]. We distinguish two cases. In the first case, B u > B t for all u ∈ (t, t + 1], which implies that α t = 0. In the second case, B u = B t for some u ∈ (t, t + 1]; let x := sup{u ∈ (0, 1] : B t+u = B t }. If x = 1, then there exists an excursion of length 1 in Brownian motion by a space-time shift. But this is excluded by Pitman and Tang [41, Theorem 4]. Thus, α t = x ∈ (0, 1). It remains to check (ii). If α t− = x ∈ (0, 1) for some t > 0, then B t+x < B u for all u ∈ (t + x, t + 1). We again distinguish two cases. In the first case, B t+1 > B t+x , which implies that α t = x. In the second case, B t+1 = B t+x , which yields α t = 1.
Next we prove a time reversal property of the argmin process α (Proposition 3.2). Fix T > 0, and let α̂ be the argmin process of the time-reversed path (B T −t ; 0 ≤ t ≤ T ) on [0, T ].

Markov and Feller property
We provide a soft argument to prove that (α t ; t ≥ 0) is a Markov process, and enjoys the Feller property.
For t ≥ 0, let G t be the σ-field generated by the path B killed at time t + α t . By Proposition 3.1 (1), t ↦ t + α t is increasing. It is not hard to see that for any s < t, s + α s is a measurable function of the path B killed at t + α t . So (G t ) t≥0 is a filtration. Now we show that, given t + α t , the Brownian path is decomposed into four independent components, the last of which is Brownian motion running forever. These observations imply that (α t ; t ≥ 0) is Markov relative to (G t ) t≥0 . The time-homogeneity follows from the fact that given α t = x, the law of (B t+x+s − B t+x ; s ≥ 0) does not involve the time parameter t.
We now investigate the Feller property of the argmin process (α t ; t ≥ 0). Recall the definition of P x from (2.3). Let (α x t ; t ≥ 0) be the argmin process of (B t ; t ≥ 0) under P x , which makes α x 0 = x ∈ [0, 1]; see (3.2). By Denisov's decomposition (Theorem 2.1), for all f : C[0, ∞) → R bounded and continuous, the expectation of f under P x can be expressed through E W , the expectation relative to W. The Feller property of (α t ; t ≥ 0) follows from a direct computation of the transition semigroup Q t (x, ·) of (α x t ; t ≥ 0), which will be given in Section 3.4. But here we provide a conceptual proof.
Proposition 3.4. The argmin process (α t ; t ≥ 0) enjoys the Feller property, and is a strong Markov process.
By Lemma 3.5, let (l 1 t ; t ≥ 0) be the local times of α at level 1, normalized to match the local times (l 0 t ; t ≥ 0) at level 0, with El 0 1 = El 1 1 = c. We will prove in Corollary 4.9 that the constant c = 1/√2π. These stationary local times also appeared in the work of Leuridan [32].
Before proceeding further, we need the following terminology. Let (X t ; t ≥ 0) be a Hunt process on a suitably nice state space E, e.g. a locally compact and separable metric space. The pair (Π, C), constituted of a kernel Π on E and a continuous additive functional C, is said to be a Lévy system for X if the associated compensation formula holds for all bounded and measurable functions. The kernel Π is called the Lévy measure of the additive functional C. The notion of a Lévy system was formulated by Watanabe [52], where its existence was proved for a Hunt process under additional assumptions. The proof was simplified by Benveniste and Jacod [4]. See also Meyer [35], Pitman [39] and Sharpe [46, Chapter VIII] for development.
By Proposition 3.4, the argmin process (α t ; t ≥ 0) is a Hunt process. We also define an associated continuous additive functional C from the local times of α. The main result is stated as follows; its proof relies on Lemmas 3.7 and 3.9.
Recall from Proposition 3.1 (2) that (α t ; t ≥ 0) can only have (i) jumps from 0 to some x ∈ (0, 1), and (ii) jumps from some x ∈ (0, 1) to 1. We start by computing the jump rate of α from x ∈ (0, 1) to 1. Let Z x−y be normally distributed with mean 0 and variance x − y, and let R x be Rayleigh distributed with parameter x, independent of Z x−y . We then compute, where the second equality follows from the reflection principle of Brownian motion, and the fact that a Brownian meander of length x evaluated at time x is Rayleigh distributed with parameter x, whose density is given by (2.1).
(2) We obtain the jump rate (3.7) by taking the derivative of (3.8) with respect to x.
Remark 3.8. We provide an alternative approach to Lemma 3.7. Consider the excursions above the past minimum of (B t ; t ≥ 0). Given α 0 = x, the time x must be a ladder time, that is, the starting time of an excursion. Thus, the probability that α jumps to 1 on (0, dt] given α 0 = x is the same as the probability that an excursion terminates in dt given that it has reached length x. Let ζ be the length of such an excursion. By (2.6), the aforementioned probability can be expressed in terms of Λ(dx), the Lévy measure of a 1 2 -stable subordinator as in (2.5). This gives the jump rate (3.7).
To conclude this subsection, we compute the Lévy measure of jumps of α from 0. Lemma 3.9. Let (α t ; t ≥ 0) be the argmin process of Brownian motion. For y ∈ (0, 1), the Lévy measure of jumps of α from 0, per unit local time at 0, is Π 0↑ (dy) defined by (3.6).
Proof. On the one hand, the mean number of jumps per unit time from 0 to dy near y is Π 0↑ (dy)El 0 1 = Π 0↑ (dy)/√2π. On the other hand, the mean number of jumps per unit time from dy near 1 − y to 1 can be computed from the jump rate (3.7). By Proposition 3.2, we identify these two quantities and obtain the Lévy measure (3.6).

Transition kernel
We complete the proof of Theorem 1.2. Recall the definition of (α x t ; t ≥ 0) from (3.2), which is viewed as the argmin process α conditioned on α 0 = x.
For b ∈ [0, 1], define the first time at which (α x t ; t ≥ 0) hits level b. Also recall the definition of µ ↑1 (x) from (3.7). We start with a lemma whose proof is straightforward. Lemma 3.10. For 0 < x < 1, the relevant hitting probabilities are expressed through µ ↑1 (x), given by (3.7), and s(x, y), given by (3.8).
Proof of Theorem 1.2. The first part of Theorem 1.2 has been proved as Proposition 3.4. Now we compute the transition kernel Q t (x, dy), for t > 0 and x ∈ [0, 1], of (α t ; t ≥ 0). Observe that α t+s and α s are independent for all t ≥ 1. By Proposition 1.1, Q t (x, dy) = dy/(π √(y(1 − y))) for t ≥ 1 and x ∈ [0, 1], (3.10) which is the invariant measure of the argmin process α.
Consequently, the law of α 1 t is the arcsine distribution rescaled linearly into [1 − t, 1]; that is, (3.11) holds. For 0 < x ≤ 1, in the case t ≤ x there is an atom of probability s(x, x − t) at x − t, whereas in the case t > x this atom is replaced by probability s(x, 0) redistributed according to Q t−x (0, dy); this gives (3.12) and (3.13). For t = 1, we know that Q 1 (x, dy) is arcsine distributed, whatever x. So this case gives a formula for Q u (0, dy) for any 0 < u < 1 with u := 1 − x; that is, (3.14). It remains to evaluate the r.h.s. of (3.12)-(3.14). By (3.8) and (3.11), we get explicit expressions for s(x, ·) and Q u (0, dy). By injecting these expressions into (3.12), and similarly into (3.13) for x < t ≤ 1, and combining the resulting formulas, we obtain the transition density (3.16) for x < t ≤ 1.

Breakdown of Dynkin's and Rogers-Pitman criterion
In this part, we explain why Dynkin's criterion, and the Rogers-Pitman criterion fail to prove that (α t ; t ≥ 0) is Markov. Before proceeding further, we recall these sufficient conditions for a function of a Markov process to be Markov.
Let (X t ; t ≥ 0) be a continuous-time Markov process on a measurable state space (E, E), with initial distribution λ and transition semigroup (P t ; t ≥ 0). Let (E , E ) be a second measurable space, and φ : (E, E) → (E , E ) be a measurable function.
Dynkin [18] initiated the study of Markov functions, and gave a condition for (φ(X t ); t ≥ 0) to be Markov for all initial distributions λ. Later Rogers and Pitman [44] made a simple observation, based on the existence of a Markov kernel Λ : E ′ × E ∋ (y, A) ↦ Λ(y, A) ∈ R + such that (3.17) holds for all t ≥ 0 and A ∈ E, where Φ is the Markov kernel from E to E ′ induced by φ: Φ(x, B) = δ φ(x) (B) for x ∈ E and B ∈ E ′ . The following theorem provides a sufficient condition for (3.17) to hold.

Theorem 3.11 (Rogers-Pitman criterion).
[44] Let Φ be derived from φ : E → E ′ as above. Assume that there exists a Markov kernel Λ from E ′ to E such that (i) ΛΦ = I, the identity kernel; and (ii) for each t ≥ 0, the Markov kernel Q t := ΛP t Φ satisfies the intertwining relation ΛP t = Q t Λ. Then (3.17) holds.
Note that if, instead of (ii), P t Φ = ΦQ t for a Markov kernel Q t on E ′ , then (φ(X t ); t ≥ 0) is Markov for all initial distributions λ; this is Dynkin's criterion.
Proof. Observe that for all t ≥ 0, the required identity holds with α t (w) given by (3.19). From this the result follows.
Proof. Observe that ΛP t (t, ·) = −→ M 1−t ⊗ W: the law of a Brownian meander of length 1 − t concatenated with an independent Brownian motion. Comparing with (1.45) yields the desired result.

The (a, b)-minima set of Brownian motion
In this section, we study the (a, b)-minima set M a,b of Brownian motion defined by (1.5). In Section 4.1, we consider the renewal property of the set M a,b , and provide an alternative proof of Theorem 1.4. In Section 4.2, we give an explicit construction for times of the set M 1,1 , which implies Theorem 1.5. Finally in Section 4.3, we deal with the sample path of Brownian motion between two (1, 1)-minima. There Theorem 1.6 is proved.

Renewal structure of (a, b)-minima
We provide an alternative proof of Theorem 1.4. Recall that the argmin process α is a stationary Markov process, whose:
• invariant measure has density f (x), 0 < x < 1, given by (1.3);
• transition kernel Q t (x, ·), t > 0 and x ∈ [0, 1], is given by (1.45).
From these we compute the joint probabilities for 0 ≤ s < t, where h a,b is defined by (1.10).
By Brownian scaling, the latter has the same probability as that of (a + b) α (t−a)/(a+b) ∈ [a, a + dt].
A similar argument shows that the law of T a,b 1 − a is obtained first by size-biasing the inter-arrival time distribution (1.11), and then by stick-breaking uniformly at random; see Thorisson [49]. This gives the formula (1.12).

Construction of (1, 1)-minima
We consider the case a = b = 1 by studying the law of Brownian fragments between (1, 1)-minima. Let T 1 , T 2 , · · · with 1 < T 1 < T 2 < · · · be the times of the (1, 1)-minima set of Brownian motion. Now we give a path construction for T 1 , T 2 , · · · , from which the renewal property of M 1,1 is clear. In particular, Theorem 1.5 is a corollary of this construction.
Construction of T 1 Let J be the first descending ladder time of B, from which starts an excursion above the minimum of length exceeding 1. The Laplace transform of J is given by (1.17). By Theorem 2.4, (B − B)[J, J + 1] is a Brownian meander of length 1.
If J ≥ 1, then T 1 = J. If not, we start Brownian motion afresh at the stopping time J + 1; that is, B 1 := (B J+1+t − B J+1 ; t ≥ 0). Let J 1 be constructed as J for B 1 . Thus, J 1 ∈ LE, and (B 1 − B 1 )[J 1 , J 1 + 1] is a Brownian meander of length 1. Now we look backward a unit from J 1 to see whether J 1 ∈ RE or not. If J 1 ∈ RE, then T 1 = J 1 . If not, we start Brownian motion afresh at J 1 + 1 and proceed as before until a (1, 1)-minimum is found.
Construction of T i+1 given T i By induction, (B Ti+t − B Ti ; 0 ≤ t ≤ 1) is a Brownian meander of length 1. Now it suffices to start afresh Brownian motion at T i + 1, and proceed as in the construction of T 1 .
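The building block of the construction above is the ladder time J. A grid sketch (all numerical parameters are our own choices) detects the first descending ladder epoch followed by an excursion above the minimum of length exceeding 1, and checks it against P(J ≥ 1) = 1 − 2/π from (4.5) below, using that J 1 there has the same law as J:

```python
import numpy as np

def first_long_excursion_start(B, dt, length=1.0):
    # grid descending ladder epochs: strict new minima of the discretized path
    runmin = np.minimum.accumulate(B)
    ladder = np.flatnonzero(np.r_[True, B[1:] < runmin[:-1]])
    # the excursion above the minimum started at ladder[k] lasts until the
    # next ladder epoch (or the end of the path)
    gaps = np.diff(np.r_[ladder, len(B) - 1])
    long_idx = np.flatnonzero(gaps * dt > length)
    return ladder[long_idx[0]] * dt if len(long_idx) else np.nan

rng = np.random.default_rng(3)
dt, horizon, n_paths = 0.005, 60.0, 300    # hypothetical choices
n = int(horizon / dt)

Js = np.array([
    first_long_excursion_start(
        np.r_[0.0, np.cumsum(rng.normal(0.0, np.sqrt(dt), n))], dt)
    for _ in range(n_paths)
])
Js = Js[~np.isnan(Js)]

print((Js >= 1.0).mean())   # compare with 1 - 2/pi ~ 0.363, cf. (4.5)
print(Js.mean())
```

The second printed value estimates EJ; combining Wald's identity with (4.4) and (1.14) suggests that EJ = 1.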

Evaluation of the geometric rate
It is easy to see that N is geometrically distributed on {1, 2, · · · } with parameter P(J 1 ∈ RE). Note that J i depends on the event {N = i}, but is independent of the event {N ≥ i}. In fact, N is a stopping time of a sequence of i.i.d. path fragments, each starting with a meander and continuing with an independent Brownian motion until time J i . By Wald's identity and (1.14), we get P(J 1 ∈ RE) = 2/π. (4.4) In view of the dependence of J i and the event {N = i}, the evaluation of the geometric rate in the distribution of N is quite indirect. Here is a more direct approach.
Consider the construction of J 1 as J for a copy of Brownian motion preceded by an independent meander of length 1. It is straightforward that P(J 1 ∈ RE and J 1 ≥ 1) = P(J 1 ≥ 1) = 1 − 2/π, (4.5) where the second equality is obtained by integrating (2.10) over [0, 1]. The evaluation of P(J 1 ∈ RE and J 1 < 1) is more tricky, and relies on the following lemma. Proof. It can be read from Pitman [40, (3)] that for t ∈ [0, 1], x > 0 and y ∈ R, the conditional probability P(L br t > x | B br t ∈ dy) has an explicit exponential form. (4.7) By integrating (4.7) with respect to the normal density of B br t with mean 0 and variance t(1 − t), we obtain (4.6). Let −ξ be the level of the minimum of the free Brownian part of the path at time J 1 , so that J 1 = σ ξ , where σ is a 1 2 -stable subordinator with jumps of size larger than 1 deleted. Recall from Section 2.2 that ξ is exponentially distributed with parameter 2/π. By letting T x := inf{t > 0 : B t = x}, we obtain for 0 < t < 1 the joint density formula (4.8). By time-reversing the Biane-Yor construction [7] of Brownian meander minus its future minimum process (see also Bertoin and Pitman [6, Theorem 3.1]), we evaluate P(J 1 ∈ RE and J 1 < 1), where the last equality is obtained by plugging in (4.6) and (4.8). Now (4.4) follows readily from (4.5) and (4.9).
To conclude this part, we give another identity in law similar to (1.18).

Proposition 4.4.
Let U be uniform on [0, 1], independent of J and ∆. Then we have the identity in law (4.23). Proof. Note that T 1 − 1 is the stationary delay for a renewal process with inter-arrival time distributed according to ∆. If J = u > 1 then the delay equals J, whereas if J = u < 1 then it equals u with probability √u, and u + ∆ with probability 1 − √u. This is because a meander of length 1 to the right of time u creates a (1, 1)-minimum for a two-sided Brownian motion at time u if and only if the meander of length u looking backwards from time u to time 0 becomes a meander of length 1 when running further backwards to time u − 1. By Brownian excursion theory, the probability that a meander of length u followed by an independent Brownian fragment of length 1 − u creates a meander of length 1 is √u, computed from Λ(dx), the Lévy measure of a 1 2 -stable subordinator defined by (2.5). The identity (4.23) follows from the above analysis, where U ∼ Uniform[0, 1] serves as a device to replicate the conditional distribution of T 1 given J.
By conditioning on J, the identity (4.23) yields a Laplace transform relation, which can be used to provide an alternate derivation of the Laplace transforms of T 1 and of ∆. Though not obviously equivalent, each of the two relations of (1.18) and (4.23) can be derived from the other after substituting in the explicit formula (1.17) for Φ J (λ), and using the simple density of J on [0, 1]. However, neither relation seems to offer much insight into their remarkable implication (1.19).
Proof. Suppose by contradiction that there exist s ∈ (T a,b i , T a,b i+1 ) ∩ LE b and t ∈ (T a,b i , T a,b i+1 ) ∩ RE a such that s ≥ t. Let r := argmin u∈[t,s] B u be the time at which B attains its a.s. unique minimum between t and s. It is clear that r is then an (a, b)-minimum, so that r ≥ T a,b i+1 . This is impossible since r ≤ s < T a,b i+1 . Proposition 4.7. Almost surely, for each i ≥ 1, T i < G i < D i < T i+1 . Proof of Theorem 1.6. Observe that T i is the i th time that the argmin process (α t ; t ≥ 0) reaches 0 by a continuous passage from 1. It is obvious that D i is a stopping time relative to (G t ) t≥0 , the filtration of the argmin process α.
By Denisov's decomposition for random walks, it is easy to adapt the argument of Proposition 3.3 to show that (A N (n); n ≥ 0) is a time-homogeneous Markov chain on {0, 1, · · · , N }. Now we compute the invariant distribution Π N and the transition matrix P N of the argmin chain (A N (n); n ≥ 0) on {0, 1, . . . , N }. To proceed further, we need the following result regarding the law of ladder epochs, originally due to Sparre Andersen [47], Spitzer [48] and Baxter [2]. It can be read from Feller [22, Chapter XII.7].
Proof of Theorem 1.7. Observe that the distribution of the argmin of sums on {0, 1, · · · , N } is the stationary distribution of the argmin chain. Following Feller [21, Chapter XII.8], this is the discrete arcsine law. Next we compute the transition probabilities.
• If S N +1 > S i , then the last time at which (S n ) 1≤n≤N +1 attains its minimum is i, meaning that A N (1) = i − 1.
• If S N +1 = S i , then the last time at which (S n ) 1≤n≤N +1 attains its minimum is N + 1, meaning that A N (1) = N .
If we look forward from time i, N + 1 is the first time at which the walk enters (−∞, 0]. Consequently, for 0 < i ≤ N :
• If S N +1 > S j+1 , then the last time at which (S n ) 1≤n≤N +1 attains its minimum is j + 1, meaning that A N (1) = j.
• If S N +1 = S j+1 , then the last time at which (S n ) 1≤n≤N +1 attains its minimum is N + 1, meaning that A N (1) = N .
If we look backward from time j + 1, the origin is the first time at which the reversed walk enters (−∞, 0). So for 0 ≤ j < N , we obtain (1.33). The above formula fails for j = N , but P N (0, N ) = 1 − ∑ N −1 j=0 P N (0, j).
F is continuous and P(S n > 0) = θ ∈ (0, 1)
From Theorem 5.2, we deduce well-known explicit expressions for p n and p̃ n . Proof of Lemma 5.3. Note that P N (0, N ) = 1 − ∑ N −1 j=0 P N (0, j). Thus, it suffices to show the corresponding identity. Furthermore, for |s| < 1, the generating functions can be computed explicitly. By identifying the coefficients on both sides, we get the desired result.
When F is symmetric and continuous, the above results can be simplified. In this case, P(S n ≥ 0) = P(S n > 0) = 1 2 .
Corollary 5.4. Assume that F is symmetric and continuous. Then the stationary distribution of the argmin chain (A N (n); n ≥ 0) is given by the discrete arcsine law. By identifying the coefficients on both sides, we get ∑ N j=0 p j p N −j = ∑ N +1 j=0 p j p N +1−j = 1, which leads to the desired result.
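The convolution identity above can be verified exactly, since for symmetric continuous F the theory recalled in Theorem 5.2 gives p n = p̃ n = C(2n, n)/4 n (with p 0 = 1). A sketch (Gaussian steps are our own choice for the Monte Carlo part):

```python
import numpy as np
from fractions import Fraction
from math import comb

# p_n = P(S_1 >= 0, ..., S_n >= 0) = C(2n, n) / 4^n for symmetric continuous F
def p(n: int) -> Fraction:
    return Fraction(comb(2 * n, n), 4 ** n)

# exact check of the convolution identity sum_{j=0}^N p_j p_{N-j} = 1
identities = [sum(p(j) * p(N - j) for j in range(N + 1)) for N in range(12)]
print(identities[:3])

# Monte Carlo check of p_3 = 5/16 with Gaussian steps
rng = np.random.default_rng(4)
S = np.cumsum(rng.normal(size=(20000, 3)), axis=1)
est = (S >= 0).all(axis=1).mean()
print(est, float(p(3)))
```

Working over `Fraction` makes the identity check exact rather than subject to floating-point rounding.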

The argmin process of Lévy processes
We consider the argmin process (α X t ; t ≥ 0) of a Lévy process (X t ; t ≥ 0). According to the Lévy-Khintchine formula, the characteristic exponent of (X t ; t ≥ 0) is determined by a ∈ R, σ ≥ 0, and the Lévy measure Π(·) satisfying ∫ R min(1, x 2 )Π(dx) < ∞. The Lévy process X is a compound Poisson process if and only if σ = 0 and Π(R) < ∞. In this case, the process X has the representation X t = ct + ∑ N t i=1 Y i for all t > 0, (5.9) where c := −a − ∫ |x|<1 xΠ(dx), (N t ; t ≥ 0) is a Poisson process with rate λ, and (Y i ; i ≥ 1) are independent and identically distributed random variables with cumulative distribution function F , independent of N and satisfying λF (dx) = Π(dx). See Bertoin [5] and Sato [45] for further development on Lévy processes.
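The representation (5.9) is straightforward to simulate; the rate, drift and jump law below are hypothetical choices, with EX t = t(c + λ EY 1 ):

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical parameters: jump rate lam, jump law F = Exp(1), drift c
lam, c, t, n_paths = 2.0, -0.5, 3.0, 20000

N_t = rng.poisson(lam * t, size=n_paths)                      # number of jumps
X_t = c * t + np.array([rng.exponential(1.0, k).sum() for k in N_t])

print(X_t.mean())   # E X_t = t (c + lam E Y) = 4.5 for these parameters
```

For these parameters EX t = 3 · (−0.5 + 2 · 1) = 4.5, which the sample mean should approximate.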
Assume that X is not a compound Poisson process with drift, which is equivalent to the following condition: (CD) for all t > 0, X t has a continuous distribution; that is, P(X t = x) = 0 for all x ∈ R.
The following result is a simple consequence of Millar [36, Proposition 4.2]. Theorem 5.6. [36] Assume that (X t ; 0 ≤ t ≤ 1) is not a compound Poisson process with drift. Let A be the a.s. unique time such that inf t∈[0,1] X t = min(X A− , X A ). Given A, the Lévy path is decomposed into two conditionally independent pieces, the pre-A and post-A processes. In [36], Millar provided the law of the post-A process (X A+t − inf t∈[0,1] X t ; 0 ≤ t ≤ 1 − A), but he did not mention the law of the pre-A process (X (A−t)− − inf t∈[0,1] X t ; 0 ≤ t ≤ A).
This result generalizes Denisov's decomposition to Lévy processes with continuous distribution. Since a compound Poisson process with drift is a random walk in continuous time, Denisov's decomposition for random walks allows Theorem 5.6 to be extended to Corollary 5.7 below. With Corollary 5.7, it is easy to adapt the argument of Proposition 3.3 to prove that (α X t ; t ≥ 0) is a time-homogeneous Markov process. Now we turn to stable Lévy processes. Let (X t ; t ≥ 0) be a stable Lévy process with parameters (α, β), and assume that neither X nor −X is a subordinator. It is well known that 0 is regular for the reflected process X − X. So Itô's excursion theory can be applied to the process X − X; see Sharpe [46] for background on excursion theory of Markov processes.