Last zero time or Maximum time of the winding number of Brownian motions

In this paper we consider the winding number, $\theta(s)$, of planar Brownian motion and study the asymptotic behavior of the maximum-time process, i.e., the time when $\theta(s)$ attains its maximum in the interval $0\le s \le t$. We find the limit law of its logarithm under a suitable normalization and the upper growth rate of the maximum-time process itself. We also show that the process of the last zero time of $\theta(s)$ in $[0,t]$ has the same law as the maximum-time process.

We let θ(t) = B(H(t)) so that θ(t) = arg W(t), which we call the winding number. Without loss of generality we suppose θ(0) = 0. The well-known result of Spitzer [7] states the convergence of θ(t)/log t in law. It is shown in [1] that, for any increasing function f : (0, ∞) → (0, ∞), the corresponding zero–one law holds according as an integral with respect to $dt/(t(\log t)^2)$ converges or diverges; moreover, it is shown there that the square root of the random time H(t) is subject to the same growth law as θ in (2), and the lim inf behavior of H(t) is also given. Another proof of (2) is given in [6].
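Here H(t) denotes the additive-functional clock of the skew-product representation; in the standard formulation (assuming $W(0)\neq 0$),
\[
H(t)=\int_0^t \frac{ds}{|W(s)|^2}, \qquad \theta(t)=B\bigl(H(t)\bigr),
\]
and Spitzer's theorem reads
\[
\frac{2\,\theta(t)}{\log t}\ \xrightarrow{\ d\ }\ C_1 \qquad (t\to\infty),
\]
where $C_1$ denotes the standard Cauchy law.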
Before stating our results we recall the two arcsine laws whose analogues are studied in this paper. Let {B(t) : t ≥ 0} be a standard linear Brownian motion started at zero and denote by $Z_t$ the time when the maximum of B(s) in the interval 0 ≤ s ≤ t is attained. Then the process $Z_t$ and the process sup{s ∈ [0, t] : B(s) = 0}, the last zero of Brownian motion in the time interval [0, t], are subject to the same law, and according to Lévy's arcsine law the scaled variable $Z_t/t$ is subject to the arcsine law (cf., e.g., [4], Theorems 5.26 and 5.28). For stating the results of this paper we set $N(t) = \max_{0\le s\le t}\theta(s)$ and define a random variable $M_t \in [0, t]$ as the time when θ(s) attains its maximum in the interval 0 ≤ s ≤ t, and a random variable $L_t$ as the last zero time of θ in [0, t]. According to Theorem 2.11 of [4], a linear Brownian motion attains its maximum at a single point of each finite interval with probability one. In view of the representation θ(t) = B(H(t)), it therefore follows that the maximiser $M_t$ is uniquely determined for all t with probability one.
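As an illustration (not part of the proofs), the arcsine law for $Z_t/t$ can be checked by a simple random-walk approximation of Brownian motion; the path count, step count, and seed below are arbitrary choices for the sketch.

```python
import numpy as np

def argmax_times(n_paths=2000, n_steps=500, seed=0):
    """Simulate n_paths random walks (approximating Brownian motion on [0, 1])
    and return, for each path, the fraction of time at which it attains its maximum."""
    rng = np.random.default_rng(seed)
    steps = rng.standard_normal((n_paths, n_steps))
    paths = np.cumsum(steps, axis=1)
    # Prepend the starting value B(0) = 0 so the maximiser may be t = 0.
    paths = np.hstack([np.zeros((n_paths, 1)), paths])
    # Z_t / t: index of the maximum divided by the number of steps.
    return np.argmax(paths, axis=1) / n_steps

frac = argmax_times()
# Under Lévy's arcsine law, P(Z_t/t <= a) = (2/pi) * arcsin(sqrt(a)),
# so the sample mean should be close to 1/2 and the mass should pile up
# near the endpoints 0 and 1 rather than near the middle.
print(frac.mean())
```

The characteristic U-shape of the arcsine density is visible already at this resolution: far more paths attain their maximum near the endpoints of the interval than near its midpoint.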
(b) It holds that …

Theorem 1.2. Let α(t) be a positive function that is non-increasing, tends to zero as t → ∞, and satisfies …; put … Then, with probability one, … according as the integral I{α} converges or diverges.
It may be worth noting that the distribution function V(a/(1 − a)) (0 ≤ a ≤ 1) is expressed as … Indeed, …, and we find the density asserted above.
Proof. By the reflection principle ([4], Theorem 2.21) it holds for any t > 0 that …, showing the assertion of the lemma.
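The reflection principle invoked here is, in its standard form ([4], Theorem 2.21),
\[
P\Bigl(\max_{0\le s\le t}B(s)\ge a\Bigr)=2\,P\bigl(B(t)\ge a\bigr)=P\bigl(|B(t)|\ge a\bigr), \qquad a\ge 0.
\]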
Proof. According to Lévy's representation of the reflecting Brownian motion ([4], Theorem 2.34) we have … Hence, arguing as in the preceding proof, …, as desired.
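The representation used here is Lévy's identity ([4], Theorem 2.34): writing $N_B(t)=\max_{0\le s\le t}B(s)$ for the running maximum,
\[
\{N_B(t)-B(t):t\ge 0\}\ \stackrel{d}{=}\ \{|B(t)|:t\ge 0\},
\]
so in particular the last zero of $N_B - B$ in $[0,t]$ has the same law as the last zero of the reflecting Brownian motion $|B|$ in $[0,t]$.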
Proof of Theorem 1.1. Lemma 2.2 shows that the process {M_s : s ≥ 0} has the same law as {L_s : s ≥ 0}, the latter being nothing but the last zero of the process {N(t) − θ(t) : 0 ≤ t ≤ s} for each s. So it remains to prove part (b). Fix a ∈ (0, 1). Set $T_c = \inf\{l \ge 0 : |W(l)| = c\}$, for which we sometimes write T(c) for typographical reasons. We first prove the upper bound. By (1) it holds that …, where $\tilde B$ is a linear Brownian motion started at zero which is independent of W. Corresponding to (1), … Given ε > 0, it holds for all sufficiently large t that … Therefore, we get … + ε.
Also, the strong Markov property tells us … and H(T).
So if we set, for a, b < ∞, … (8). Note that by the skew-product representation B(t) (resp. $\tilde B(t)$) is independent of H(T). Moreover, since θ(T_r) is subject to the Cauchy distribution with parameter |log r| (cf., e.g., [5], Section 5, Exercise 2.16), we get … Therefore, since ε is arbitrary, this gives the desired upper bound. Next we prove the lower bound. For all sufficiently large t, …, and repeating the argument in (7) and (8) yields the lower bound.
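The Cauchy law invoked above can be written explicitly: in the standard formulation with $|W(0)|=1$ and $r>1$ (cf. [5], Section 5, Exercise 2.16),
\[
P\bigl(\theta(T_r)\in dx\bigr)=\frac{1}{\pi}\,\frac{\log r}{(\log r)^2+x^2}\,dx,
\]
which follows from the skew-product representation, since $H(T_r)$ is the hitting time of the level $\log r$ by an independent linear Brownian motion.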
Since q < ∞ is arbitrary, this concludes the proof.
Next we prove (14). We only need to consider $\sum_{j=1,\,j\in D}^{n}\,\sum_{k<j,\,k\in D} P(A_j\cap A_k)$. First we consider … Then, since $q\,t_j^{\alpha(t_j)}$ … So next we consider the case $q\,t_j^{\alpha(t_j)} < t_k$. Note that when $k$ satisfies $q\,t_j^{\alpha(t_j)} < t_k$, we have $A_k \subset A'_{k,j}$, and by (16) $P(A_j\cap A'_{k,j}) = P(A_j)P(A'_{k,j})$. Then, since by the same argument as for (15) $P(A'_{k,j}) = V\!\left(\frac{e^{k\alpha(t_k)}}{e^{j\alpha(t_j)}-e^{k\alpha(t_k)}}\right)$, we get
\[
P(A_j \cap A_k) \le P(A_j \cap A'_{k,j}) = P(A_j)\,P(A'_{k,j}) = P(A_j)\,V\!\left(\frac{e^{k\alpha(t_k)}}{e^{j\alpha(t_j)}-e^{k\alpha(t_k)}}\right).
\]