Transience/Recurrence and Growth Rates for Diffusion Processes in Time-Dependent Domains

Let $\mathcal{K}\subset R^d$, $d\ge2$, be a smooth, bounded domain satisfying $0\in\mathcal{K}$, and let $f(t),\ t\ge0$, be a smooth, nondecreasing function satisfying $f(0)>1$. Define $D_t=f(t)\mathcal{K}\subset R^d$. Consider a diffusion process corresponding to the generator $\frac12\Delta+b(x)\cdot\nabla$ in the time-dependent domain $D_t$ with normal reflection at the time-dependent boundary. Consider also the one-dimensional diffusion process corresponding to the generator $\frac12\frac{d^2}{dx^2}+B(x)\frac d{dx}$ on the time-dependent domain $(1,f(t))$ with reflection at the boundary. We give precise conditions for transience/recurrence of the one-dimensional process in terms of the growth rates of $B(x)$ and $f(t)$. In the recurrent case, we also investigate positive recurrence, and in the transient case, we also consider the asymptotic growth rate of the process. Using the one-dimensional results, we give conditions for transience/recurrence of the multi-dimensional process in terms of the growth rates of $B^+(r)$, $B^-(r)$ and $f(t)$, where $B^+(r)=\max_{|x|=r}b(x)\cdot\frac x{|x|}$ and $B^-(r)=\min_{|x|=r}b(x)\cdot\frac x{|x|}$.


Introduction and Statement of Results
Let $\mathcal K \subset R^d$, $d \ge 2$, be a bounded domain with $C^3$-boundary satisfying $0 \in \mathcal K$, let $f(t)$, $t \ge 0$, be a continuous, nondecreasing $C^3$-function satisfying $f(0) > 1$, and define $D_t = f(t)\mathcal K$. It is known that one can define a Brownian motion $X(t)$ with normal reflection at the boundary in the time-dependent domain $\{(x,t) : x \in D_t,\ t \ge 0\}$. More precisely, one has, for $0 \le s < t$,
$$X(t) = X(s) + W(t) - W(s) + \int_s^t \mathbf 1_{\partial D_u}(X(u))\,n(u, X(u))\,dL_u,$$
where $W(\cdot)$ is a Brownian motion, $n(u,x)$ is the unit inward normal to $D_u$ at $x \in \partial D_u$, and $L_u$ is the local time up to time $u$ of $X(\cdot)$ at the time-dependent boundary. See [1].
The process $X(t)$ is recurrent if, with probability one, $X(t) \in \mathcal K$ at arbitrarily large times $t$, and is transient if, with probability zero, $X(t) \in \mathcal K$ at arbitrarily large times $t$. As with non-degenerate diffusion processes in unrestricted space, transience is equivalent to $\lim_{t\to\infty} |X(t)| = \infty$ with probability one. It is simple to see that these definitions are independent of the starting point and the starting time of the process. In a recent paper [2], it was shown that for $d \ge 3$, the process is transient if $\int^\infty \frac{dt}{f^d(t)} < \infty$, while if $\int^\infty \frac{dt}{f^d(t)} = \infty$, and an additional technical condition is fulfilled, then the process is recurrent. The additional technical condition is that either $\mathcal K$ is a ball, or that $\int_0^\infty (f')^2(t)\,dt < \infty$. In particular, this result indicates that if for sufficiently large $t$, $f(t) = ct^a$, for some $c > 0$, then the process is transient if $a > \frac1d$ and recurrent if $a \le \frac1d$. The paper [2] also studies the analogous problem for simple, symmetric random walk in growing domains.
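For $f(t) = ct^a$ the criterion above reduces to the convergence or divergence of $\int^\infty \frac{dt}{f^d(t)}$, which can be checked directly. The sketch below (hypothetical helper names, not from the paper) evaluates the partial integrals $\int_1^T t^{-ad}\,dt$ in closed form; the constant $c$ does not affect convergence:

```python
import math

def partial_integral(a: float, d: int, T: float) -> float:
    """Closed form of int_1^T t^(-a*d) dt, the integral test for f(t) = t^a
    (the multiplicative constant c does not affect convergence)."""
    p = a * d
    if abs(p - 1.0) < 1e-12:
        return math.log(T)
    return (T ** (1.0 - p) - 1.0) / (1.0 - p)

# d = 3: a > 1/3 gives a convergent integral (transient regime),
# a <= 1/3 gives a divergent one (recurrent regime).
for a in (0.5, 1.0 / 3.0, 0.2):
    print(a, [round(partial_integral(a, 3, T), 3) for T in (1e2, 1e4, 1e6)])
```

For $a = \frac12 > \frac13$ the partial integrals stay bounded (by 2), while for $a = \frac13$ and $a = \frac15$ they grow without bound, matching the transience/recurrence split at $a = \frac1d$.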
In this paper we study the transience/recurrence dichotomy in the case that the Brownian motion is replaced by a diffusion process; namely, Brownian motion with a locally bounded drift $b(x)$. That is, the generator of the process when it is away from the boundary is $\frac12\Delta + b(x)\cdot\nabla$ instead of $\frac12\Delta$. Using the Cameron-Martin-Girsanov change-of-measure formula, or alternatively, in the case of a Lipschitz drift, by a direct construction as in [1], one can show that the diffusion process in the time-dependent domain can be defined. We will show how the strength of the radial component, $b(x)\cdot\frac x{|x|}$, of the drift, and the growth rate of the domain, via $f(t)$, affect the transience/recurrence dichotomy.
In fact, we will prove a transience/recurrence dichotomy for a one-dimensional process. Our result for the multi-dimensional case will follow readily from the one-dimensional result along with results in [2]. Let $f(t)$ be as in the first paragraph. Consider the diffusion process corresponding to the generator $\frac12\frac{d^2}{dx^2} + B(x)\frac d{dx}$, where $B$ is locally bounded, in the time-dependent domain $(1, f(t))$ with reflection at the endpoint $x = 1$ (for all times) and at the endpoint $f(t)$ at time $t$. If $B(x) = \frac kx$, the process is a Bessel process. When this process is considered on the space $(1,\infty)$ with reflection at 1, it is recurrent for $k \le \frac12$ and transient for $k > \frac12$. In particular, it is the radial part of a $d$-dimensional Brownian motion when $k = \frac{d-1}2$. The result of [2] noted above can presumably be slightly modified to show that for $k > \frac12$, the process on the time-dependent domain $(1, f(t))$ with reflection at the endpoints is recurrent or transient according to whether $\int^\infty \frac{dt}{f^{2k+1}(t)}$ is infinite or finite. In this paper we consider drifts that are of larger order than $\frac1x$. We will prove the following theorem concerning transience/recurrence.

Theorem 1. Let $\gamma > -1$ and $b, c > 0$.

i. Assume that $B(x) \le bx^\gamma$, for sufficiently large $x$, and that $f(t) \le c(\log t)^{\frac1{1+\gamma}}$, for sufficiently large $t$. If $\frac{2bc^{1+\gamma}}{1+\gamma} < 1$, or $\frac{2bc^{1+\gamma}}{1+\gamma} = 1$ and $\gamma \ge -\frac12$, then the process is recurrent.
ii. Assume that $B(x) \ge bx^\gamma$, for sufficiently large $x$, and that $f(t) \ge c(\log t)^{\frac1{1+\gamma}}$, for sufficiently large $t$. If $\frac{2bc^{1+\gamma}}{1+\gamma} > 1$, then the process is transient.
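Assuming the setup of Theorem 1, a crude Euler-Maruyama simulation of the reflected process on $(1, f(t))$, with $B(x) = bx^\gamma$ and $f(t) = c(\log t)^{1/(1+\gamma)}$, can illustrate the dichotomy heuristically. The helper below is hypothetical and implements reflection naively by mirroring; it is a sketch, not the construction used in the paper:

```python
import math, random

def simulate_reflected(b, gamma, c, T=2000.0, dt=0.01, seed=0):
    """Crude Euler-Maruyama sketch of dX = b*X^gamma dt + dW on the growing
    interval (1, f(t)), f(t) = c*(log t)^(1/(1+gamma)), with reflection
    implemented naively by mirroring at both endpoints (hypothetical helper,
    not the construction used in the paper)."""
    rng = random.Random(seed)
    t, x = math.e, 1.5                # start at t = e, where f(t) = c
    hits_of_1 = 0
    while t < T:
        x += b * x ** gamma * dt + rng.gauss(0.0, math.sqrt(dt))
        f = c * math.log(t) ** (1.0 / (1.0 + gamma))
        if x < 1.0:                   # mirror at the fixed endpoint 1
            x = 2.0 - x
            hits_of_1 += 1
        if x > f:                     # mirror at the moving endpoint f(t)
            x = 2.0 * f - x
        x = min(max(x, 1.0), f)       # keep the state inside [1, f(t)]
        t += dt
    return x, hits_of_1

# gamma = 0, b = 0.1, c = 2: 2bc^(1+gamma)/(1+gamma) = 0.4 < 1 (recurrent regime).
x_end, hits = simulate_reflected(b=0.1, gamma=0.0, c=2.0)
print(x_end, hits)
```

In the opposite regime (e.g. $b$ large, so that $\frac{2bc^{1+\gamma}}{1+\gamma} > 1$), one expects the simulated path to track the moving boundary and returns to 1 to become rare; this is only a heuristic illustration, not a proof.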
Using Theorem 1, we will prove the following result for the multi-dimensional process.
Theorem 2. Consider the diffusion process corresponding to the generator $\frac12\Delta + b(x)\cdot\nabla$ in the time-dependent domain $D_t = f(t)\mathcal K$ with normal reflection at the boundary. Let $\gamma > -1$ and $b, c > 0$.
i. Assume that $B^+(r)$ and $f(t)$ satisfy the inequalities (1.1). Also assume either that $\mathcal K$ is a ball or that $\int_0^\infty (f')^2(t)\,dt < \infty$. If $\frac{2bc^{1+\gamma}}{1+\gamma} < 1$, or if $\frac{2bc^{1+\gamma}}{1+\gamma} = 1$ with $d = 2$ and $\gamma \ge 0$, then the process is recurrent.
ii. Assume that $B^-(r)$ and $f(t)$ satisfy the inequalities (1.2). If $\frac{2bc^{1+\gamma}}{1+\gamma} > 1$, then the process is transient.
In the recurrent case, it is natural to consider positive recurrence, which we define as follows: the one-dimensional process above is positive recurrent if, starting from $x > 1$, the expected value of the first hitting time of 1 is finite, while the multi-dimensional process defined above is positive recurrent if, starting from a point $x \notin \bar{\mathcal K}$, the expected value of the first hitting time of $\bar{\mathcal K}$ is finite. It is simple to see that this definition is independent of the starting point and the starting time of the process. We have the following theorem regarding positive recurrence of the one-dimensional process.
Remark. The proof of Theorem 3 relies heavily on the estimates in the proof of part (i) of Theorem 1. We suspect that in the borderline cases, when $\frac{2bc^{1+\gamma}}{1+\gamma} = 1$, the process is never positive recurrent. However, the estimates in the proof of part (ii) of Theorem 1 don't go quite far enough to prove this.
In the transient case, it is natural to consider the asymptotic growth rate of the process. It is known that the process $X(t)$ corresponding to the generator $\frac12\frac{d^2}{dx^2} + bx^\gamma\frac d{dx}$ on $[1,\infty)$ with reflection at 1 grows a.s. on the order $t^{\frac1{1-\gamma}}$ if $\gamma \in (-1,1)$. (In fact, the solutions $\bar x(t)$ to the deterministic differential equation $\bar x' = b\bar x^\gamma$ grow on this order.) The process grows a.s. exponentially if $\gamma = 1$, and explodes a.s. if $\gamma > 1$ [5]. From this it is clear that the one-dimensional process $X(t)$ with $B(x) = bx^\gamma$ on the time-dependent domain $(1, f(t))$ satisfies $X(t) = f(t)$ for arbitrarily large $t$ a.s., and consequently,

(1.3) $\limsup_{t\to\infty}\frac{X(t)}{f(t)} = 1$ a.s.

(If $\frac{2bc^{1+\gamma}}{1+\gamma} < 1$, then the process is recurrent and thus $\liminf_{t\to\infty} X(t) = 1$.) We restrict to $\gamma \in (-1,1)$ for technical reasons, but we suspect that the following result also holds for $\gamma \ge 1$.
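The growth order can be seen from the deterministic flow: $\bar x' = b\bar x^\gamma$ with $\gamma < 1$ has the explicit solution $\bar x(t) = ((1-\gamma)bt + \bar x(0)^{1-\gamma})^{\frac1{1-\gamma}}$, which grows like $t^{\frac1{1-\gamma}}$. A quick numerical cross-check (hypothetical helper names):

```python
import math

def closed_form(t, b=1.0, gamma=0.5, x0=1.0):
    """Exact solution of x' = b*x^gamma (gamma < 1):
    x(t) = ((1-gamma)*b*t + x0^(1-gamma))^(1/(1-gamma)) ~ t^(1/(1-gamma))."""
    return ((1.0 - gamma) * b * t + x0 ** (1.0 - gamma)) ** (1.0 / (1.0 - gamma))

def euler(t_end, b=1.0, gamma=0.5, x0=1.0, dt=1e-4):
    """Forward Euler integration of the same ODE, for cross-checking."""
    x, t = x0, 0.0
    while t < t_end:
        x += b * x ** gamma * dt
        t += dt
    return x

exact, approx = closed_form(50.0), euler(50.0)
# Empirical growth exponent log x(t) / log t approaches 1/(1-gamma) = 2 here.
print(exact, approx, math.log(closed_form(1e6)) / math.log(1e6))
```

With $\gamma = \frac12$ the exponent $\frac1{1-\gamma} = 2$, and the log-log slope of the closed-form solution approaches 2 as $t$ grows.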
Theorem 4. Consider the diffusion process corresponding to the generator $\frac12\frac{d^2}{dx^2} + B(x)\frac d{dx}$ in the time-dependent domain $(1, f(t))$, with reflection at both the fixed endpoint and the time-dependent one. Let $\gamma \in (-1,1)$ and $b, c > 0$.
Assume that, for sufficiently large $x$ and $t$,

We now consider the asymptotic growth behavior in the case that $B(x) = x^\gamma$, $\gamma \in (-1,1)$, and that $f(t)$ is of larger order than $(\log t)^{\frac1{1+\gamma}}$: we will assume that $f(t) = (\log t)^l$, with $l > \frac1{1+\gamma}$, or that $f(t) = t^l$, with $l \in (0, \frac1{1-\gamma})$. (We have dispensed with the coefficients $b$ and $c$ because here they no longer play a role at the level of asymptotic behavior we investigate.)

Theorem 5. Consider the diffusion process corresponding to the generator $\frac12\frac{d^2}{dx^2} + x^\gamma\frac d{dx}$ in the time-dependent domain $(1, f(t))$, with reflection at both the fixed endpoint and the time-dependent one. Let $\gamma \in (-1,1)$.

i. Assume that $f(t) = (\log t)^l$, for $t \ge 2$, with $l > \frac1{1+\gamma}$.

ii. Assume that $f(t) = t^l$, with $l \in (0, \frac1{1-\gamma})$.

In particular, in light of (1.3), asymptotic growth behavior in the spirit of Theorems 4 and 5 for the multi-dimensional case can be gleaned just as Theorem 2 was gleaned from Theorem 1.
In section 2 we prove several auxiliary results which will be needed for the proofs. The proofs of Theorems 1-5 are given in sections 3-7, respectively.
Throughout the paper, the following notation will be employed. Let $X(t)$ denote a canonical, continuous real-valued path, and let $T_\alpha = \inf\{t \ge 0 : X(t) = \alpha\}$. Let $L^{bx^\gamma} = \frac12\frac{d^2}{dx^2} + bx^\gamma\frac d{dx}$. Let $P^{bx^\gamma;\mathrm{Ref}\leftarrow:\beta}_x$ and $E^{bx^\gamma;\mathrm{Ref}\leftarrow:\beta}_x$ denote probabilities and expectations for the diffusion process corresponding to $L^{bx^\gamma}$ on $[1,\beta]$, starting from $x \in [1,\beta]$, with reflection at $\beta$ and stopped at 1, and let $P^{bx^\gamma;\mathrm{Ref}\rightarrow:\alpha}_x$ and $E^{bx^\gamma;\mathrm{Ref}\rightarrow:\alpha}_x$ denote probabilities and expectations for the diffusion process corresponding to $L^{bx^\gamma}$ on $[\alpha,\infty)$, starting from $x \in [\alpha,\infty)$, with reflection at $\alpha$. We note that this latter diffusion is explosive if $\gamma > 1$, but we will only be considering it until time $T_\beta$ for some $\beta > \alpha$. We will sometimes work with a constant drift, which we will denote by $D$ (instead of $bx^\gamma$ with $\gamma = 0$), in which case $D$ will replace $bx^\gamma$ in all of the above notation.

Auxiliary Results
In this section we prove four propositions. The first three of them are used explicitly in the proof of Theorem 1, and implicitly in many of the other theorems, since many of the calculations in the proof of Theorem 1 are used in the proofs of the other theorems. Proposition 4 is used only for the proof of (1.5) in Theorem 5.

Proposition 4. There exists $\bar\lambda = \bar\lambda(\alpha,\beta) > 0$ such that

(2.6) $E^{bx^\gamma;\mathrm{Ref}\rightarrow:\alpha}_x \exp(\lambda\tau_\beta) \le 2$, for $x \in [\alpha,\beta]$ and $\lambda \le \bar\lambda$.

Proof. The proof is similar to that of Proposition 1. By the Feynman-Kac formula, when $\lambda$ is less than the principal eigenvalue for the operator $L^{bx^\gamma}$ on $(\alpha,\beta)$ with the Neumann boundary condition at $\alpha$ and the Dirichlet boundary condition at $\beta$, the function $u_\lambda(x) \equiv E^{bx^\gamma;\mathrm{Ref}\rightarrow:\alpha}_x \exp(\lambda\tau_\beta)$ is finite; moreover, if there exists a function $u$ with $(L^{bx^\gamma} + \lambda)u \le 0$ on $(\alpha,\beta)$, $u'(\alpha) \ge 0$ and $u(\beta) \ge 1$, then $\lambda$ is smaller than the principal eigenvalue and $u_\lambda \le u$. We look for such a $u$ satisfying $1 \le u \le 2$. It follows readily that such a $u$ exists when $\lambda$ is smaller than the right hand side of (2.7), and it is clear that $\bar\lambda$ in the statement of the proposition is smaller than the right hand side of (2.7). Thus, $u_\lambda(x) \le u(x) \le 2$, for $\lambda \le \bar\lambda$.
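To make the supersolution argument concrete in the simplest setting, consider the drift-free case $b = 0$ (an illustrative assumption; the proposition concerns the drift $bx^\gamma$). There the exponential moment of the hitting time of $\beta$ solves an explicit boundary value problem, which can be verified numerically by finite differences:

```python
import math

def u(x, lam, alpha=1.0, beta=2.0):
    """Drift-free analogue (b = 0, an illustrative assumption) of u_lambda:
    the solution of (1/2)u'' + lam*u = 0 on (alpha, beta) with u'(alpha) = 0,
    u(beta) = 1 is u(x) = cos(s*(x - alpha)) / cos(s*(beta - alpha)), where
    s = sqrt(2*lam); it stays bounded while s*(beta - alpha) < pi/2."""
    s = math.sqrt(2.0 * lam)
    return math.cos(s * (x - alpha)) / math.cos(s * (beta - alpha))

# Finite-difference check that the ODE and boundary conditions hold.
lam, h = 0.3, 1e-4
for x in (1.2, 1.5, 1.8):
    resid = 0.5 * (u(x + h, lam) - 2.0 * u(x, lam) + u(x - h, lam)) / h ** 2 + lam * u(x, lam)
    print(x, resid)
```

Here $\lambda = 0.3$ keeps $s(\beta - \alpha)$ below $\frac\pi2$, so $u$ is bounded between 1 and $1/\cos(s(\beta-\alpha))$; this mirrors the role of $\bar\lambda$ in the proposition.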

Proof of Theorem 1
We will denote probabilities for the process starting from 1 at time 0 by $P_1$.
Let $\{\mathcal F_t\}_{t\ge0}$ denote the standard filtration on real-valued continuous paths $X(t)$. By standard comparison results and the fact that the transience/recurrence dichotomy is not affected by a bounded change in the drift over a compact set, we may assume that

(3.1) $B(x) = bx^\gamma$, for $x \ge 1$, and $f(t) = c(\log t)^{\frac1{1+\gamma}}$, for sufficiently large $t$.

Proof of (i).
The conditional version of the Borel-Cantelli lemma [3] shows that if

(3.2) $\sum_j P_1(A_{j+1} \mid \mathcal F_{t_j}) = \infty$ a.s.,

then $P_1(A_j \text{ i.o.}) = 1$, and thus the process is recurrent. Thus, to show recurrence, it suffices to show (3.2).
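The mechanism behind (3.2) is the second Borel-Cantelli lemma: a divergent sum of (conditional) probabilities forces infinitely many occurrences. A toy illustration with independent events $A_j$, $P(A_j) = \frac1j$ (hypothetical, unconditional case only; the proof uses the conditional version from [3]):

```python
import random

# Toy (unconditional) illustration of the second Borel-Cantelli lemma behind
# (3.2): independent events A_j with P(A_j) = 1/j have a divergent probability
# sum, so infinitely many occur a.s.; a finite sample shows steady occurrences.
rng = random.Random(0)
N = 10_000
occurrences = [j for j in range(1, N + 1) if rng.random() < 1.0 / j]
print(len(occurrences), occurrences[:10])
```

By contrast, events with summable probabilities (say $P(A_j) = j^{-2}$) occur only finitely often a.s., which is the direction used in the transience proof below.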
Since up to time $t_j$ the largest the process can be is $f(t_j)$, and since up to time $t_{j+1}$ the time-dependent domain is contained in $[1, f(t_{j+1})]$, we obtain (3.3) by comparison. We estimate the right hand side of (3.3). Let $\sigma^{(j)}$ denote the corresponding stopping times. For any $l_j \in \mathbb N$, we have (3.4). Also, it follows by the strong Markov property that the right hand side of (3.4) can be bounded as in (3.5) and (3.6). Since $L\varphi = 0$, it follows by standard probabilistic potential theory [6, chapter 5] that (3.8) holds. Applying L'Hôpital's rule shows that (3.9) holds. Using (3.9) along with the facts that $f(x) = c(\log x)^{\frac1{1+\gamma}}$ and $t_j = e^j$, it follows that, for sufficiently large $j$, (3.10) and (3.11) hold, for constants $K_1, K_2 > 0$. From (3.10) and (3.11), it follows that (3.5) will hold if we define $l_j \in \mathbb N$ by (3.12), since then the general term will be of order at least $\frac1{j\log j}$. With $l_j$ chosen as above, we now analyze the remaining probability and show that (3.6) holds. By the strong Markov property, the time in question decomposes into two IID sequences associated with $\sigma^{(j)}$, and the two IID sequences are independent of one another. By Markov's inequality, we obtain an exponential bound, for any $\lambda > 0$.
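The function $\varphi$ with $L\varphi = 0$ is the scale function of the diffusion, and hitting probabilities are ratios of its increments. A numerical sketch for the drift $bx^\gamma$ (trapezoid-rule quadrature; helper names are hypothetical):

```python
import math

def phi(x, b, gamma, n=4000):
    """Scale function for L = (1/2) d^2/dx^2 + b*x^gamma d/dx, normalized so
    that phi(1) = 0: phi(x) = int_1^x exp(-(2b/(1+gamma))*(s^(1+gamma)-1)) ds
    (trapezoid-rule quadrature; hypothetical helper). L phi = 0."""
    g = lambda s: math.exp(-(2.0 * b / (1.0 + gamma)) * (s ** (1.0 + gamma) - 1.0))
    h = (x - 1.0) / n
    return h * (0.5 * g(1.0) + sum(g(1.0 + k * h) for k in range(1, n)) + 0.5 * g(x))

def hit_prob(x, beta, b, gamma):
    """P_x(T_1 < T_beta) = (phi(beta) - phi(x)) / (phi(beta) - phi(1))."""
    return (phi(beta, b, gamma) - phi(x, b, gamma)) / phi(beta, b, gamma)

# Zero drift: phi(x) = x - 1 and the hitting probability is linear in x;
# an outward drift makes reaching 1 before beta less likely.
print(hit_prob(1.5, 2.0, 0.0, 0.0), hit_prob(1.5, 2.0, 1.0, 0.5))
```

The exponential factor $\exp(-\frac{2b}{1+\gamma}s^{1+\gamma})$ in the integrand is exactly what produces terms like $t_j^{-\frac{2bc^{1+\gamma}}{1+\gamma}}$ when evaluated at $f(t_j) = cj^{\frac1{1+\gamma}}$.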
By Proposition 1, the corresponding exponential-moment bound holds, where $\bar\lambda(\cdot,\cdot)$ is as in (2.2). Using the fact that $f(t_j) = cj^{\frac1{1+\gamma}}$, it is easy to check that there exists a $\bar\lambda_0 > 0$ for which this bound applies uniformly in $j$. By comparison, the same is true for the process at hand. It is easy to check that if one substitutes $\alpha = f(t_j) = cj^{\frac1{1+\gamma}}$ and $\beta = f(t_{j+1}) = c(j+1)^{\frac1{1+\gamma}}$ in the expression on the right hand side of (2.5) in Proposition 2, the resulting expression is bounded in $j$. Letting $M > 1$ be an upper bound, it follows that the relevant exponential moment is bounded by $2M$. Noting that $t_{j+1} - t_j = e^{j+1} - e^j \ge e^j$, and choosing $\lambda$ appropriately, we obtain (3.19). Recalling $l_j$ from (3.12), we conclude from (3.19) that (3.20) holds, for sufficiently large $j$.
Recalling that $D_j$ is equal to a positive constant if $\gamma \ge 0$, and that $D_j$ is of order $j^{\frac{\gamma}{1+\gamma}}$ if $\gamma < 0$, it follows that the right hand side of (3.20) is summable in $j$ if $\frac{2bc^{1+\gamma}}{1+\gamma} < 1$, or if $\frac{2bc^{1+\gamma}}{1+\gamma} = 1$ and $\gamma \ge -\frac12$. Thus (3.6) holds for this range of $b$, $c$ and $\gamma$. This completes the proof of (i).
Proof of (ii). For $j \ge j_1$, let $B_j$ be the event that the process hits 1 sometime between the first time it hits $f(j)$ and the first time it hits $f(j+1)$. If we show that

(3.21) $\sum_j P_1(B_j) < \infty$,

then by the Borel-Cantelli lemma it will follow that $P_1(B_j \text{ i.o.}) = 0$, and consequently the process is transient.
To prove (3.21), we need to use different methods depending on whether $\gamma \le 0$ or $\gamma > 0$. We begin with the case $\gamma \le 0$. To consider whether or not the event $B_j$ occurs, we first wait until time $T_{f(j)}$. Of course, necessarily $T_{f(j)} \ge j$, since the point $f(j)$ is not accessible to the process before time $j$. Since we may have $T_{f(j)} < j+1$, the point $f(j+1)$ may not be accessible to the process at time $T_{f(j)}$; however, if we wait one unit of time, then after that, the point $f(j+1)$ certainly will be accessible, since $T_{f(j)} + 1 \ge j+1$. Now if in that one unit of time, the process never got to the level $f(j) - M_j$, then by comparison, the probability of $B_j$ occurring is controlled by the hitting probability from $f(j) - M_j$ (because after this one unit of time the process will be at a position greater than or equal to $f(j) - M_j$).
By comparison with the process that is reflected at the fixed point $f(j)$, the probability that the process got to the level $f(j) - M_j$ in that one unit of time is bounded from above by $P^{bx^\gamma;\mathrm{Ref}\leftarrow:f(j)}_{f(j)}(T_{f(j)-M_j} \le 1)$. From these considerations, we conclude that (3.23) holds. For $\epsilon \in (0,1)$ to be chosen later sufficiently small, choose $M_j = \epsilon f(j)$. Recall from (3.1) that $f(j) = c(\log j)^{\frac1{1+\gamma}}$, for large $j$. With such a choice of $\epsilon$, it follows from (3.23) and (3.24) that the probability in question is bounded as above. By comparison, we have a bound in terms of a constant-drift process, where $D_j$ is equal to the minimum of the original drift on the interval $[f(j) - M_j, f(j)]$; that is, $D_j = \min_{x \in [f(j)-M_j, f(j)]} bx^\gamma$. By Markov's inequality, we have (3.28), for $\lambda > 0$. If $\gamma < 0$, then $\lim_{j\to\infty} D_j = 0$ and $M_j \to \infty$, and it follows from (3.28) that (3.29) holds, for some $K > 0$. If $\gamma = 0$, then $D_j = b$, for all $j$, and we have (3.30) from (3.28), as $j \to \infty$.
Since $M_j = \epsilon c(\log j)^{\frac1{1+\gamma}}$, it follows from (3.29) and (3.30) that (3.31) holds, for all choices of $\lambda > 0$ in the case $\gamma < 0$, and for sufficiently large $\lambda$ in the case $\gamma = 0$. Thus, we conclude from (3.31) and (3.27) that (3.21) holds in the case $\gamma \le 0$. We now turn to the case that $\gamma > 0$. Let $\zeta_{j+1} = \inf\{t \ge j+1 : X(t) \ge f(j)\}$. Since the process cannot reach $f(j+1)$ before time $j+1$, it follows that $\zeta_{j+1} \le T_{f(j+1)}$. Since the right hand endpoint of the domain is larger than or equal to $f(j+1)$ at all times $t \ge \zeta_{j+1}$, it follows by comparison that the relevant probability is bounded accordingly. Thus, similar to (3.8), we have (3.34). As in (3.24), but with $\epsilon = 0$, we have (3.35). From (3.34), (3.35) and the fact that $\frac{2bc^{1+\gamma}}{1+\gamma} > 1$, it follows that (3.36) holds. For any $s_j$, we have the estimate (3.37). Here is the explanation for the above estimate. To check whether or not the event $C_j$ occurs, one waits until time $T_{f(j)}$, at which time the process has first reached the level $f(j)$; hits of 1 before this time do not count, and $C_j$ does not occur. Otherwise, one watches the process between time $T_{f(j)}$ and time $j+1$. If the process hit 1 in this time interval, whose length is no more than 1, then $C_j$ occurs. (Note that during this interval of time, the right hand boundary for reflection is always at least $f(j)$.) Otherwise, $C_j$ has not yet occurred, but one continues to watch the process after time $j+1$ until the first time the process is again greater than or equal to $f(j)$.
If the process reaches 1 in this interval, then $C_j$ occurs, while if not, then we conclude that $C_j$ did not occur. (Note that if $X(j+1) \ge f(j)$, then this last interval is empty.)

Letting $s_j$ be defined by (3.40), it follows from (3.38) with $\lambda = \frac{b^2}2$, (3.39) and the fact that $\gamma > 0$ that the second term on the right hand side of (3.37) is summable in $j$. We now estimate $P^{bx^\gamma;\mathrm{Ref}\leftarrow:f(j)}_{f(j)}(T_1 \le s_j + 1)$, the first term on the right hand side of (3.37), where $s_j$ has now been defined in (3.40). Note that by the strong Markov property, this probability can be bounded via a sum of independent crossing times. Let $X_i$, $i \ge 2$, be independent random variables with $X_i$ distributed as $T_1$ under $P^{D_i;\mathrm{Ref}\leftarrow:2}_2$, where $D_i$ is defined in (3.42). We will use the generic $P$ and $E$ for calculating probabilities and expectations involving the $X_i$; the quantity to be estimated is then at most $P(\sum_i X_i \le s_j + 1)$.

Proof of Theorem 2
First we prove Theorem 2 in the case that $\mathcal K$ is a ball. The radial part of the drift, $b(x)\cdot\frac x{|x|}$, depends not only on the radial component $r = |x|$ of $x$, but also on the spherical component $\frac x{|x|}$. Let $B^+(r) = \max_{|x|=r} b(x)\cdot\frac x{|x|}$ and $B^-(r) = \min_{|x|=r} b(x)\cdot\frac x{|x|}$. Then by comparison, if the multi-dimensional process with radial drift $B^+(|x|)\frac x{|x|}$ is recurrent, so is the one with drift $b(x)$, and if the multi-dimensional process with radial drift $B^-(|x|)\frac x{|x|}$ is transient, so is the one with drift $b(x)$. In the case of a radial drift $B(|x|)\frac x{|x|}$, with $\mathcal K$ a ball, so that $D_t = f(t)\mathcal K$ is a ball, the question of transience/recurrence is equivalent to the question of transience/recurrence considered in Theorem 1 with drift $B(x) + \frac{d-1}{2x}$ and with $D_t = (1, \mathrm{rad}(\mathcal K) f(t))$, where $\mathrm{rad}(\mathcal K)$ is the radius of $\mathcal K$. Thus, if $B(r) \equiv B^+(r)$ and $f(t)$ satisfy the inequalities (1.1) in part (i) of Theorem 2 with $\frac{2bc^{1+\gamma}}{1+\gamma} < 1$, then the multi-dimensional process is recurrent, while if $B(r) \equiv B^-(r)$ and $f(t)$ satisfy the inequalities (1.2) in part (ii) of Theorem 2 with $\frac{2bc^{1+\gamma}}{1+\gamma} > 1$, then the multi-dimensional process is transient. (Of course, since $\mathcal K$ is a ball, $\mathrm{rad}^\pm(\mathcal K)$ appearing in Theorem 2 are equal to $\mathrm{rad}(\mathcal K)$.) Now consider the case that $B(r) \equiv B^+(r)$ and $f(t)$ satisfy the inequalities (1.1) in part (i) of Theorem 2 with $\frac{2bc^{1+\gamma}}{1+\gamma} = 1$. To show recurrence, we need to show recurrence for the one-dimensional case when $B(x) = bx^\gamma + \frac{d-1}{2x}$, for large $x$, and $f(t) = c(\log t)^{\frac1{1+\gamma}}$, for large $t$, with $\frac{2bc^{1+\gamma}}{1+\gamma} = 1$. Thus, the function $\varphi$ appearing in (3.7) must be replaced by its analogue for the drift $bx^\gamma + \frac{d-1}{2x}$. (Here $C$ is the appropriate constant. In (3.7) we integrated over $s$ starting from 0 for convenience, in order to prevent such a constant from entering; however, in the present case we can't do this because of the term $\frac{d-1}s$.)
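The comparison via $B^\pm(r)$ can be made concrete. For the planar drift $b(x) = \beta|x|^{\gamma-1}x + v$ with a constant vector $v$ (a hypothetical example, not from the paper), the radial component is $\beta r^\gamma + v\cdot\frac x{|x|}$, so $B^\pm(r) = \beta r^\gamma \pm |v|$; a direction scan recovers this:

```python
import math

def radial_bounds(beta, gamma, v, r, n=2000):
    """For the planar drift b(x) = beta*|x|^(gamma-1)*x + v (v a constant
    vector; a hypothetical example), scan directions on |x| = r to find
    B+(r) and B-(r); the closed form is beta*r^gamma +/- |v|."""
    hi, lo = -float("inf"), float("inf")
    for k in range(n):
        th = 2.0 * math.pi * k / n
        radial = beta * r ** gamma + v[0] * math.cos(th) + v[1] * math.sin(th)
        hi, lo = max(hi, radial), min(lo, radial)
    return hi, lo

hi, lo = radial_bounds(beta=1.0, gamma=0.5, v=(0.3, 0.4), r=4.0)
print(hi, lo)  # close to 1*4^0.5 + 0.5 = 2.5 and 1*4^0.5 - 0.5 = 1.5
```

Since the constant perturbation $|v|$ is of lower order than $r^\gamma$ for $\gamma > 0$, both $B^+$ and $B^-$ satisfy the same power-law bounds for large $r$, so Theorem 2 classifies such drifts.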
In place of (3.9), we will now have a corresponding asymptotic estimate. This causes the term $j^{-\frac{\gamma}{1+\gamma}}$ on the right hand side of (3.11) to be replaced by $j^{-\frac{\gamma+d-1}{1+\gamma}}$, which in turn causes $l_j$ in (3.12) to be changed correspondingly, and finally causes the term on the right hand side of (3.20) to be changed as well. Recalling that $D_j$ is equal to a positive constant if $\gamma \ge 0$, and $D_j$ is of order $j^{\frac{\gamma}{1+\gamma}}$ if $\gamma < 0$, we conclude that if $\frac{2bc^{1+\gamma}}{1+\gamma} = 1$, then the resulting expression is summable in $j$ if $d = 2$ and $\gamma \ge 0$. This proves recurrence when $\frac{2bc^{1+\gamma}}{1+\gamma} = 1$, $d = 2$ and $\gamma \ge 0$. We now extend from the radial case to the case of general $\mathcal K$. In [2], the proof of a condition for transience was first given for the radial case. The extension to the case of general $\mathcal K$, which appears as step III in the proof of Theorem 1.15 in that paper, followed from Lemma 2.1 in that paper. This lemma implies that if one considers two such processes, one corresponding to $\mathcal K_1$ and one corresponding to $\mathcal K_2$, where $\mathcal K_1$ is a ball and $\mathcal K_2 \supset \mathcal K_1$, then the process corresponding to $\mathcal K_2$ is transient if the one corresponding to $\mathcal K_1$ is transient. Lemma 2.1 goes through just as well when the Brownian motion is replaced by our Brownian motion with drift. This extends our proof of transience to the case of general $\mathcal K$. In [2], the proof of the condition for recurrence was also first given in the radial case. The extension to the general case, which is more involved than in the case of transience, and which requires the additional condition $\int_0^\infty (f')^2(t)\,dt < \infty$, appears in step V in the proof of Theorem 1.15 in that paper. The analysis in that step also goes through when Brownian motion is replaced by our Brownian motion with drift. This extends the proof of recurrence to the case of general $\mathcal K$.

Proof of Theorem 3
We will prove the theorem for the one-dimensional case. The proof for the multi-dimensional case follows from the proof of the one-dimensional case, similar to the way the proof of Theorem 2 follows from the proof of Theorem 1. Let P 2 and E 2 denote probabilities and expectations for the process starting from x = 2 at time 0.
Let $t_j = e^j$ as in the proof of part (i) of Theorem 1. We have (5.1). Recall the definition of $j_0$ and of $A_{j+1}$ from the beginning of the proof of part (i) of Theorem 1. From (3.3) we have, for $j \ge j_0 + 1$, (5.2). If we show that (5.3) holds, then it will certainly follow from (5.1) and (5.2) that $E_2 T_1 < \infty$, proving positive recurrence. In order to prove (5.3), it suffices from (3.4) to prove that, for some choice of positive integers $\{l_j\}_{j=j_0}^\infty$, (5.4) holds. From (3.8), (3.11) and the fact that $\lim_{y\to\infty}(1-\frac1y)^{yg(y)} = 0$ if $\lim_{y\to\infty} g(y) = \infty$, it follows that (5.4) holds if we choose $l_j$ accordingly. With this choice of $l_j$, we have from (3.19) the required bound.

Proof of Theorem 4

As in the proof of Theorem 1, we can assume that $b$ and $f$ satisfy (3.1).
We will first show that (6.1) holds. The proof of (6.1) is just a small variant of the proof of recurrence in Theorem 1, that is, of part (i) of Theorem 1. As in that proof, let $t_j = e^j$. Recalling the definition of $j_0$ appearing at the very beginning of the proof of part (i) of Theorem 1, it follows from (3.1) that $f(t_j) = cj^{\frac1{1+\gamma}}$, for $j \ge j_0$. In that proof, for $j \ge j_0$, $A_{j+1}$ was defined as the event that the process hits 1 at some time in $[t_j, t_{j+1}]$. For the present proof, we define instead, for each $\rho \in (0,1)$, the event $A^{(\rho)}_{j+1}$ that the process $X(t)$ satisfies $X(t) \le \rho f(t_j)$ for some $t \in [t_j, t_{j+1}]$. We mimic the proof of Theorem 1-i up through (3.9), using $A^{(\rho)}_{j+1}$ in place of $A_{j+1}$, replacing the stopping time $T_1$ by the stopping time $T_{\rho f(t_j)}$, and replacing $\varphi(1)$ by $\varphi(\rho f(t_j))$. Instead of (3.10), we obtain (6.2). Instead of (3.11), we have, for sufficiently large $j$, (6.3), for constants $K_1, K_2 > 0$. From (6.2) and (6.3), it follows that (3.5), with $T_1$ replaced by $T_{\rho f(t_j)}$, will hold if we define $l_j \in \mathbb N$ by (6.4), since then the general term will be of order at least $\frac1{j\log j}$. We now continue to mimic the proof of Theorem 1-i, starting from the paragraph after (3.12) and up through (3.19). We then insert the present $l_j$ from (6.4) in (3.19) to obtain (6.5), for sufficiently large $j$.
Recalling that $D_j$ is equal to a positive constant if $\gamma \ge 0$, and that $D_j$ is of order $j^{\frac{\gamma}{1+\gamma}}$ if $\gamma < 0$, it follows that the right hand side of (6.5) is summable in $j$ for $\rho$ in the appropriate range. Analogous to the proof of Theorem 1, we conclude then that $P_1(A^{(\rho)}_j \text{ i.o.}) = 1$ for $\rho$ as above. From the definition of $A^{(\rho)}_j$ and the fact that $f$ is increasing, we conclude that (6.1) holds.
To complete the proof of Theorem 4, we will prove that (6.6) holds. For this direction, we will need some new ingredients. Recalling again the definition of $j_0$ appearing at the very beginning of the proof of part (i) of Theorem 1, it follows from (3.1) that $f(t) = c(\log t)^{\frac1{1+\gamma}}$ for $t \ge e^{j_0}$. Let $\tau_1 = \inf\{t \ge e^{j_0} : X(t) = f(t)\}$, and for $j \ge 2$, let $\tau_j = \inf\{t \ge \tau_{j-1} + 1 : X(t) = f(t)\}$. By the remarks in the paragraph preceding Theorem 4, it follows that $\tau_j < \infty$ a.s. $[P_1]$, for all $j$. By construction, we have

(6.7) $\tau_j > j$, for all $j \ge 1$.
(We have suppressed the dependence of $B_j$ on $\epsilon$ and $\rho$.) It follows from (6.8) that on the event $B_j$ one has $X(t) \ge (1-2\epsilon)\rho f(t)$, for all $t \in [\tau_j, \tau_{j+1}]$.
Thus, for any $N$, on the event $\cap_{j=N}^\infty B_j$, one has $\liminf_{t\to\infty}\frac{X(t)}{f(t)} \ge (1-2\epsilon)\rho$. We will complete the proof of (6.6) by showing that

(6.10) $\lim_{N\to\infty} P_1(\cap_{j=N}^\infty B_j) = 1$,

for all $\rho$ in the appropriate range and all sufficiently small $\epsilon$ (depending on $\rho$).
We write (6.11), where $\cap_{i=M}^{M-1} B_i$ denotes the entire probability space. Let $C_j$ be defined accordingly. (Note that $C_j$ depends on the random variable $\tau_j$.) Let $P^{bx^\gamma}_{(1-\epsilon)f(\tau_j)}$ denote probabilities for the diffusion process corresponding to $L^{bx^\gamma}$ without reflection, starting from $(1-\epsilon)f(\tau_j)$. Noting that if $\tau_{j+1} \le s(\tau_j)$, then $X(\tau_{j+1}) = f(\tau_{j+1}) \le f(s(\tau_j))$, it follows by the strong Markov property and comparison that (6.12) holds. Also, (6.13) holds. In order to get a lower bound on $P_1(B_j \mid \cap_{i=M}^{j-1} B_i, \tau_j)$, we will bound $P_1(C^c_j \mid \tau_j)$ and $P^{bx^\gamma}_{(1-\epsilon)f(\tau_j)}(T_{\rho(1-\epsilon)f(\tau_j)} < T_{f(s(\tau_j))})$ from above, and we will calculate the asymptotic behavior of $P^{bx^\gamma}_{(1-\epsilon)f(\tau_j)}(T_{\rho(1-\epsilon)f(\tau_j)} > T_{f(s(\tau_j))})$. We start with $P_1(C^c_j \mid \tau_j)$. Let $P^{BM}_0$ denote probabilities for a standard Brownian motion starting from 0, and let $\bar T_x = \min(T_x, T_{-x})$, for $x > 0$. By the strong Markov property and comparison we clearly have (6.14). Thus from (6.14) we obtain (6.15).
We now turn to $P^{bx^\gamma}_{(1-\epsilon)f(\tau_j)}(T_{\rho(1-\epsilon)f(\tau_j)} < T_{f(s(\tau_j))})$. Let $Y$ denote a suitable comparison process; $T_{\rho(1-\epsilon)f(\tau_j)}$ and $T_{f(s(\tau_j))}$ below refer to the hitting times for the $Y$ process. (Note that we have been using the generic $T_a$ for the hitting time of $a$ for any process, the process in question being inferred from the probability measure which appears with it.) Thus, for any $t > 0$, (6.16) holds. For ease of notation, in the analysis below, we let $L_1 = \rho(1-\epsilon)f(\tau_j)$, with the other abbreviations defined similarly. Using Brownian scaling for the first inequality and symmetry for the second one, we have (6.18). As is well known, there exist $\kappa, \lambda > 0$ such that $P^{BM}_0(\bar T_1 \ge t) \le \kappa e^{-\lambda t}$, for all $t \ge 0$. Thus, from (6.16)-(6.18), choosing $t = s(\tau_j) - \tau_j - 1$, we conclude that (6.19) holds. We now calculate the asymptotic behavior of $P^{bx^\gamma}_{(1-\epsilon)f(\tau_j)}(T_{\rho(1-\epsilon)f(\tau_j)} > T_{f(s(\tau_j))})$. Similar to (3.8), we have (6.20). In light of (6.9) and (3.9), it follows from (6.20) that (6.21) holds, as $\tau_j \to \infty$.

Proof of Theorem 5
Proof of (i). The proof is almost exactly the same as the proof of Theorem 4 starting from (6.6), using $(\log t)^l$ instead of $(\log t)^{\frac1{1+\gamma}}$ (and with $b = c = 1$).
Proof of (ii). Let $B_j$ be the event defined in (7.2).
(We have suppressed the dependence of $B_j$ on $\epsilon$ and $q$.) It follows from (7.2) that on the event $B_j$ one has the corresponding lower bound on $X(t)$ for $t \in [\tau_j, \tau_{j+1}]$. Thus, for any $N$, on the event $\cap_{j=N}^\infty B_j$, one has the corresponding asymptotic lower bound. Therefore, the proof of (1.4) will be completed when we show that (7.3) holds, for some $\epsilon \in (0,1)$ and all $q > q_0$.
We write (7.4), where $\cap_{i=M}^{M-1} B_i$ denotes the entire probability space. Let $C_j$ be defined accordingly. (Note that $C_j$ depends on the random variable $\tau_j$.) Let $P^{x^\gamma}_{f(\tau_j)-\epsilon\tau_j^q}$ denote probabilities for the diffusion process corresponding to $L^{x^\gamma}$ without reflection, starting from $f(\tau_j) - \epsilon\tau_j^q$. Noting that if $\tau_{j+1} \le s(\tau_j)$, then $X(\tau_{j+1}) = f(\tau_{j+1}) \le f(s(\tau_j))$, it follows by the strong Markov property and comparison that (7.5) holds. Also, (7.6) holds. In order to get a lower bound on $P_1(B_j \mid \cap_{i=M}^{j-1} B_i, \tau_j)$, we will bound $P_1(C^c_j \mid \tau_j)$ and $P^{x^\gamma}_{f(\tau_j)-\epsilon\tau_j^q}(T_{f(\tau_j)-\tau_j^q} < T_{f(s(\tau_j))})$ from above, and we will calculate the asymptotic behavior of $P^{x^\gamma}_{f(\tau_j)-\epsilon\tau_j^q}(T_{f(\tau_j)-\tau_j^q} > T_{f(s(\tau_j))})$. We start with $P_1(C^c_j \mid \tau_j)$. We mimic the paragraph containing (6.14), the only change being that $\epsilon f(\tau_j)$ is replaced by $\epsilon\tau_j^q$. Thus, similar to (6.15), we obtain (7.7). We now turn to $P^{x^\gamma}_{f(\tau_j)-\epsilon\tau_j^q}(T_{f(\tau_j)-\tau_j^q} < T_{f(s(\tau_j))})$. We mimic the paragraph following (6.15), the only changes being that $\rho(1-\epsilon)f(\tau_j)$ is replaced by $f(\tau_j) - \tau_j^q$ and that $b$ is set to 1. Similar to (6.19), we obtain (7.8), where $L_3 = f(s(\tau_j))$ and $L_1 = f(\tau_j) - \tau_j^q$, as in (7.9). We now calculate the asymptotic behavior of $P^{x^\gamma}_{f(\tau_j)-\epsilon\tau_j^q}(T_{f(\tau_j)-\tau_j^q} > T_{f(s(\tau_j))})$. Similar to (3.8), we have (7.10).
We now make the assumption, as in the statement of the theorem, that $q > q_0$. Thus, $l\gamma + q > 0$. Using this in (7.10), along with (3.9) (with $b = 1$) and (7.9), and recalling that $f(\tau_j) = \tau_j^l$ and that, from (7.1), $f(s(\tau_j)) = \tau_j^l + \epsilon\tau_j^q$, we conclude that (7.11) holds. From (7.5)-(7.8) and (7.11), we have (7.12), for large $\tau_j$. From (7.8) and (7.1), we have $L_3 - L_1 = f(s(\tau_j)) - f(\tau_j) + \tau_j^q = (1+\epsilon)\tau_j^q$. Thus, for large $\tau_j$, the corresponding estimate holds. If $1 - q - l > 0$, then we can complete the proof just as we completed the proof of Theorem 4, and conclude that (7.3) holds, and thus that (1.4) holds. Note that in order to come to this conclusion, we have needed to assume that $q > q_0 = \max(0, -l\gamma)$ and that $1 - q - l > 0$; that is, we need $\max(0, -l\gamma) < 1 - l$ and $q \in (\max(0, -l\gamma), 1 - l)$. A fundamental assumption in the theorem is that $l \in (0, \frac1{1-\gamma})$. For these values of $l$, the above inequality always holds. Thus, (7.3) holds for those $q$ which are larger than $q_0$ and sufficiently close to $q_0$. Consequently, (1.4) holds for all $q$ which are larger than $q_0$ and sufficiently close to $q_0$. However, if (1.4) holds for some $q$, then clearly it also holds for all larger $q$. Thus, (1.4) holds for all $q > q_0$.
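The nonemptiness of the parameter window $(q_0, 1-l)$ can be sanity-checked numerically. Restricting to $\gamma \in (-1, 0]$ (the range used in the proof of (1.5) below, where $q_0 = -l\gamma$ unambiguously), the window is nonempty exactly when $l(1-\gamma) < 1$, i.e. $l \in (0, \frac1{1-\gamma})$ (hypothetical helper):

```python
# Numerical sanity check of the parameter window (q0, 1-l): restricting to
# gamma in (-1, 0] (where q0 = -l*gamma unambiguously), the window is
# nonempty exactly when l*(1-gamma) < 1, i.e. l in (0, 1/(1-gamma)).
def window_nonempty(gamma, l):
    q0 = max(0.0, -l * gamma)
    return q0 < 1.0 - l

checks = []
for i in range(1, 40):
    gamma = -1.0 + i / 40.0                 # gamma in (-1, 0)
    lmax = 1.0 / (1.0 - gamma)              # upper endpoint for l
    for j in range(1, 40):
        checks.append(window_nonempty(gamma, (j / 40.0) * lmax))
print(all(checks), len(checks))
```

Taking $l$ slightly above $\frac1{1-\gamma}$ empties the window, consistent with the role of the assumption $l \in (0, \frac1{1-\gamma})$ in the theorem.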
We now turn to the proof of (1.5). We have $\gamma \in (-1, 0]$ and $q_0 = -\gamma l \in [0, l)$. Let $t_j = j^k$, for $j \ge 1$ and some $k > 1$ to be fixed later. Since up to time $t_j$ the largest the process can be is $f(t_j)$, and since up to time $t_{j+1}$ the time-dependent domain is contained in $[1, f(t_{j+1})]$, it follows by comparison that (7.13) holds. Clearly, (7.14) corresponds to the $L^{x^\gamma}$ diffusion with reflection at both $f(t_j) - Mt_j^{q_0}$ and $f(t_{j+1})$. We estimate the right hand side of (7.14). We have (7.15). Thus, where $P^{x^\gamma}_{f(t_j)}$ corresponds to the $L^{x^\gamma}$ diffusion without reflection, the right hand side can be bounded accordingly. Similar to (3.8), we have (7.16), where $\varphi$ is as in (3.7) with $b = 1$. We now choose $k$ so that $kl(1+\gamma) > 1$.