Principal Eigenvalue for Brownian Motion on a Bounded Interval with Degenerate Instantaneous Jumps

We consider a model of Brownian motion on a bounded open interval with instantaneous jumps. The jumps occur at a spatially dependent rate given by a positive parameter times a continuous function positive on the interval and vanishing on its boundary. At each jump event the process is redistributed uniformly in the interval. We obtain sharp asymptotic bounds on the principal eigenvalue for the generator of the process as the parameter tends to infinity. Our work answers a question posed by Arcusin and Pinsky.


Introduction and Statement of Results
In a sequence of recent papers, Pinsky [Pin09], [Pin] and Arcusin and Pinsky [AP11] considered the following model of a Brownian motion (an elliptic diffusion in [Pin]) with instantaneous jumps. Let D ⊂ R^d be a bounded domain, let µ be a Borel probability measure on D, and let V ∈ C(D̄) be a nonnegative function. Let

C_{µ,V} u := V(x) ( ∫_D u dµ − u ),  u ∈ C_b(D),

denote the generator of the pure-jump process on D with jump intensity V and jump (or, more precisely, redistribution) measure µ. For γ > 0, the diffusion-with-jumps process is generated by the non-local operator −L_{γ,µ,V}, where

L_{γ,µ,V} := −(1/2)∆ − γ C_{µ,V},

with the Dirichlet boundary condition on ∂D. In words, the process considered is Brownian motion killed when exiting D which, while in D, is redistributed at the spatially dependent rate γV according to the measure µ. The main object of study in the papers above was the asymptotic behavior of λ_0(γ), the principal eigenvalue for L_{γ,µ,V}, as γ → ∞. The first paper, [Pin09], studies the model when V ≡ 1. The second paper, [AP11], provides the nontrivial extension to the case where V is strictly positive on D̄.
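The dynamics can be illustrated by a naive Euler discretization on D = (0, 1). This is only a sketch for intuition; the scheme, the step size, and the sample rate function below are our own choices, not from the paper.

```python
import math
import random

def sample_exit_time(x0, gamma, V, dt=1e-3, rng=None):
    """Crude Euler scheme for the diffusion with jumps on D = (0, 1):
    Brownian motion killed upon exiting D which, while in D, jumps at
    spatially dependent rate gamma * V(x) and is then redistributed
    uniformly on D.  Returns the (approximate) exit time tau."""
    rng = rng or random.Random(0)
    x, t = x0, 0.0
    while 0.0 < x < 1.0:
        if rng.random() < min(1.0, gamma * V(x) * dt):
            x = rng.random()                            # jump: uniform redistribution
        else:
            x += math.sqrt(dt) * rng.gauss(0.0, 1.0)    # diffusive increment
        t += dt
    return t
```

For example, V(x) = (x(1 − x))^2 is continuous, positive on (0, 1), and vanishes on the boundary at the rate d(x, ∂D)^2, matching the degenerate regime studied here.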
In what follows, we will refer to this positivity assumption as the "nondegeneracy" condition. When V is constant, redistribution occurs at the jump times of a Poisson process of rate γV, while for spatially dependent V the jumps occur at the events of a time-changed Poisson process of constant rate 1, time being sped up where γV is larger than 1 and slowed down where γV < 1. The most recent paper, [Pin], studies the model under the nondegeneracy condition in the general setting of elliptic diffusions.
Let X := (X(t) : t ≥ 0) denote the process generated by −L_{γ,µ,V}, and let P^γ_x and E^γ_x denote the corresponding probability and expectation conditioned on X(0) = x ∈ D.
When γ = 0, we abbreviate and write P_x and E_x; that is, P_x and E_x correspond to Brownian motion (no jumps). Let τ := inf{t > 0 : X(t) ∉ D} denote the exit time of X from D. Then λ_0(γ) has the following probabilistic interpretation [AP11]: for any x ∈ D,

λ_0(γ) = − lim_{t→∞} (1/t) log P^γ_x(τ > t).   (2)

Observe that (2) implies that, given any x ∈ D, we have

λ_0(γ) = sup{λ ≥ 0 : E^γ_x e^{λτ} < ∞}.   (3)

In fact, the limits and equalities in (2) and (3) remain valid when the probability P^γ_x and the expectation E^γ_x are replaced with sup_{x∈D} P^γ_x and sup_{x∈D} E^γ_x, respectively.
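The passage from the exponential decay rate in (2) to the exponential-moment characterization in (3) rests on a standard tail identity for nonnegative random variables; in sketch (a standard argument, not specific to this paper):

```latex
% Tail identity (Fubini), valid for \lambda > 0:
\mathbb{E}^\gamma_x e^{\lambda\tau}
  \;=\; 1 + \lambda \int_0^\infty e^{\lambda t}\, P^\gamma_x(\tau > t)\, dt .
% By (2), the integrand behaves like e^{(\lambda - \lambda_0(\gamma) + o(1))t},
% so the integral is finite for \lambda < \lambda_0(\gamma) and infinite for
% \lambda > \lambda_0(\gamma), which yields
\lambda_0(\gamma) \;=\; \sup\{\lambda \ge 0 : \mathbb{E}^\gamma_x e^{\lambda\tau} < \infty\}.
```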
The above cited papers provide sharp asymptotic behavior for λ 0 (γ) as γ → ∞, under the nondegeneracy condition and smoothness assumptions on ∂D and µ. In particular, the following result was obtained.
We comment that [AP11, Theorem 1] includes an additional statement generalizing the result to µ with smooth density near ∂D, vanishing up to the ℓ-th order for some ℓ ∈ Z + .
The nondegeneracy condition could be viewed as one extreme, the other extreme being the case where V is compactly supported in D. It was noted in [AP11] that when the support K of V is compact, then for x ∈ D\K and any γ > 0, the distribution of τ under P^γ_x stochastically dominates the exit time of Brownian motion (no jumps) from D\K; hence it follows from (2) that λ_0 is bounded above by the principal eigenvalue for −(1/2)∆ on D\K, a positive constant independent of γ.
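In one dimension this γ-independent bound is explicit. As an illustration (ours, not from the paper): if D = (0, 1) and K = [a, b] with 0 < a < b < 1, then starting from x ∈ (0, a) the process evolves as jump-free Brownian motion until it hits a, so τ dominates the exit time from (0, a), and

```latex
\lambda_0(\gamma)
\;\le\;
\lambda_0\!\left(-\tfrac{1}{2}\Delta \ \text{on}\ (0,a)\right)
\;=\;
\frac{\pi^2}{2a^2},
\qquad \text{uniformly in } \gamma > 0,
```

the eigenvalue being attained by the eigenfunction sin(πx/a).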
In light of the above, it is reasonable to expect some transition in the behavior of λ_0 between the nondegenerate case and the compactly supported case when V is positive on D and vanishes on ∂D. The behavior in this regime was left as an open problem in [AP11]. In this paper we answer it in one dimension. Our method is based on an analysis of the moment generating function in (3), obtained through probabilistic arguments.
In what follows, for real-valued functions f, g with domain D and a ∈ ∂D (or a taken as ∂D itself), we write f = O(g) as x → a if lim sup_{x→a} f(x)/g(x) < ∞, and f ≍ g as x → a if both f = O(g) and g = O(f), whenever the limits make sense. This notation will also be used when f, g are real-valued functions on (0, ∞), with a taken as 0 or ∞.
Before stating our main result, we present some heuristics derived from Theorem A, which give an indication of the behavior when V vanishes on ∂D. Assume that µ is uniform on D and that V(x) ≍ d(x, ∂D)^α as x → ∂D, for some α > 0. Observe that in this setting (4) is not well-defined, in particular because the surface integral in the numerator of its right-hand side blows up. We can approximate it through volume integrals, where D_ε is as in Theorem A (note that the ratio approximates the integral with respect to normalized Lebesgue measure; a positive multiplicative constant is therefore missing, but since this constant has no effect on the argument, we ignore it). When α < 1, the volume integral in the denominator of (4) converges; therefore, letting ε → 0 in the approximation above, the ratio blows up, giving the prediction √γ = o(λ_0(γ)). When α ≥ 1, the denominator blows up as well, suggesting a possible phase transition at α = 1. For α > 1, we can approximate the volume integral in the denominator by integrating over D − D_ε instead of D. Combining both approximations (with the same ε; this is not a rigorous treatment), we obtain an approximation to the ratio proportional to ε^{−α/2}/ε^{1−α} = ε^{α/2−1} as ε → 0. This blows up when α ∈ (1, 2), converges to 1 when α = 2, and converges to 0 when α > 2. Summarizing, the heuristics suggest that √γ = o(λ_0(γ)) for α ∈ (0, 1) ∪ (1, 2), that λ_0(γ) ≍ √γ for α ∈ {0, 2}, and that λ_0(γ) = o(√γ) for α > 2.
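The key step for α > 1, replacing the denominator by an integral over D − D_ε, can be checked numerically in one dimension. The sketch below (ours; it uses d(x, ∂D) ≈ x near the left endpoint) verifies that ∫_ε^{1/2} x^{−α} dx grows like ε^{1−α}/(α − 1) as ε → 0.

```python
def tail_integral(alpha, eps, n=200000):
    """Midpoint-rule approximation of the one-sided volume integral
    int_eps^{1/2} x**(-alpha) dx appearing in the heuristic (alpha > 1)."""
    h = (0.5 - eps) / n
    return sum((eps + (i + 0.5) * h) ** (-alpha) for i in range(n)) * h
```

The exact value is (ε^{1−α} − (1/2)^{1−α})/(α − 1), whose leading term as ε → 0 is ε^{1−α}/(α − 1), matching the ε^{1−α} in the heuristic ratio.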
Here is our main result.
Theorem 1. Let D = (0, 1) and let µ denote Lebesgue measure on D. Assume that V ∈ C(D̄) satisfies V > 0 on D and that, for some 0 < α′ ≤ α, V(x) = O(d(x, ∂D)^{α′}) and d(x, ∂D)^α = O(V(x)) as x → ∂D.

We would like to note the following.
1. Observe that δ(α′) may be larger or smaller than δ(α), yet the asymptotic behavior is determined by the larger parameter α. This reflects the fact that in the formula for the moment generating function of τ, expressed in terms of the Brownian motion, the function V appears as a penalizing potential, discounting paths which spend more time in sets where V is larger.

3. The graph of δ is shown in Figure 1. Note the phase transition at α = 1. The theorem corroborates the heuristic derivation preceding it.
The remainder of the paper is organized as follows. In Section 2 we prove some identities and a lower bound on the moment generating function of τ . In Section 3 we obtain the main estimates on functions of Brownian motion, which when combined with the results of Section 2 yield the proof of Theorem 1. This proof is given in Section 4.

Acknowledgement
The author would like to thank Ross Pinsky for helpful discussions and useful suggestions, and an anonymous referee for carefully reading the manuscript and helping to improve the presentation.

The Moment Generating Function
We define a family of stopping times for X, indexed by y ∈ D. We begin by recalling a well-known classical result about the moment generating function of the exit time of Brownian motion from an interval (see e.g. [RY99, pp. 71–73]): for Brownian motion started at x ∈ (a, b) with exit time τ_{(a,b)}, and λ > 0 satisfying √(2λ)(b − a) < π,

E_x e^{λτ_{(a,b)}} = cos(√(2λ)(x − (a+b)/2)) / cos(√(2λ)(b − a)/2).

If √(2λ)(b − a) ≥ π, then the expectation above is infinite.
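A quick numerical sanity check (ours) of this classical formula: the function u(x) = cos(√(2λ)(x − (a+b)/2))/cos(√(2λ)(b − a)/2) solves (1/2)u″ + λu = 0 on (a, b) with boundary value 1.

```python
import math

def exit_time_mgf(x, lam, a=0.0, b=1.0):
    """Classical E_x[exp(lam * tau_(a,b))] for Brownian motion on (a, b),
    valid when sqrt(2 * lam) * (b - a) < pi."""
    s = math.sqrt(2.0 * lam)
    assert s * (b - a) < math.pi
    return math.cos(s * (x - (a + b) / 2.0)) / math.cos(s * (b - a) / 2.0)
```

Checking the boundary values and the ODE residual via central differences confirms the formula on (0, 1).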
where J is the time of the first jump. Since the jump rate on the interval (0, 2x) is bounded above by ρ := c_1 γx^α, the first jump time J stochastically dominates an exponential random variable of rate ρ. For λ ∈ R and t ≥ 0, let

R_λ(t) := λt − γ ∫_0^t V(X(s)) ds.

We have the following proposition, expressing the moment generating function purely in terms of Brownian expectations.
This result essentially allows one to reduce the problem to estimating the asymptotic behavior of the Brownian expectations appearing on the right-hand side of each of the identities; this is carried out in Section 3 below. Since these expectations are also solutions to related ordinary differential equations, it is natural to ask for an independent analysis not based on probabilistic arguments. Specifically, let A denote the differential operator Au := (1/2)u″ + (λ − γV)u. Then E_x e^{R_λ(τ)} is known as the gauge associated with A on D, that is, the solution to Au = 0 in D with u = 1 on ∂D, while E_x ∫_0^τ e^{R_λ(t)} dt is a potential for A on D, or the total mass of the Green's measure, solving Au = −1 in D with u = 0 on ∂D.

Proof. The first identity follows directly from the strong Markov property, and integrating both sides of it with respect to µ yields its integrated form. In what follows we assume that λ is less than the principal eigenvalue for −(1/2)∆ on D; in particular, sup_x E_x e^{λτ} < ∞. The identities (2)–(4) extend beyond this domain by analyticity. The second and third identities follow from the corresponding decompositions. It remains to prove the last identity. Observe that λ ∫_0^τ e^{R_λ(t)} dt + 1 ≤ e^{λτ}, and that e^{R_λ(τ)} ≤ e^{λτ}. Therefore, since sup_x E_x e^{λτ} < ∞ by assumption, dominated convergence applies to the right-hand side of (9). Consequently, we obtain from (7) an identity whose right-hand side is finite; plugging the second and third identities into it, we obtain the last identity, and the result follows.
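The gauge u(x) = E_x e^{R_λ(τ)} can also be computed numerically from its characterizing boundary value problem (1/2)u″ + (λ − γV)u = 0, u = 1 on ∂D. A small pure-Python finite-difference sketch (ours; the discretization and parameters are illustrative) solves the resulting tridiagonal system with the Thomas algorithm and, for V ≡ 0, reproduces the classical cosine formula.

```python
import math

def gauge_fd(lam, gamma, V, n=999):
    """Finite-difference solve of 0.5*u'' + (lam - gamma*V(x))*u = 0 on (0, 1)
    with u(0) = u(1) = 1, via the Thomas algorithm.  Returns (grid, values).
    Assumes lam is below the principal eigenvalue, so the system is solvable."""
    h = 1.0 / (n + 1)
    xs = [(i + 1) * h for i in range(n)]
    off = 0.5 / h**2                                   # sub/super-diagonal entry
    diag = [-1.0 / h**2 + (lam - gamma * V(x)) for x in xs]
    rhs = [0.0] * n
    rhs[0] -= off                                      # boundary value u(0) = 1
    rhs[-1] -= off                                     # boundary value u(1) = 1
    for i in range(1, n):                              # forward elimination
        w = off / diag[i - 1]
        diag[i] -= w * off
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):                     # back substitution
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return xs, u
```

Since the potential term γV enters with a negative sign, increasing γ can only decrease the gauge, which the sketch also exhibits numerically.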

Brownian Computations
In this section we obtain the main estimates needed to prove Theorem 1. We need some definitions. Below we let r = r(γ) = r(γ, α) := γ^{−1/(α+2)}, and we define a function h = h(γ); the function h is chosen so that γh(γ) is equal to the right-hand side of (5). We also define a function λ = λ(θ, γ, α) by letting

λ := θγ^{2/(α+2)}.   (10)

In what follows, in order to simplify notation, we sometimes omit the dependence of the functions r, h and λ on some of their arguments.
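As an aside (our observation, not in the source), the exponents in r = γ^{−1/(α+2)} and λ = θγ^{2/(α+2)} fit together: over a boundary window of width r the diffusive crossing time is of order r², and with V ≍ d(·, ∂D)^α both the reward λt and the penalty γ∫V dt over such a crossing are then of order one:

```latex
r = \gamma^{-\frac{1}{\alpha+2}}, \qquad
\lambda = \theta\,\gamma^{\frac{2}{\alpha+2}}
\;\Longrightarrow\;
\lambda\, r^{2} = \theta,
\qquad
\gamma\, r^{\alpha}\cdot r^{2}
  = \gamma^{\,1-\frac{\alpha}{\alpha+2}-\frac{2}{\alpha+2}}
  = 1 .
```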
We begin with the following simple lemma, which is needed for our estimates and whose proof is omitted.
Proof. We first need some preparation before getting into the main argument. The preparation consists of several steps. The first is a reduction to symmetric V. Since V is strictly positive and continuous on D, we can find a symmetric V̄ ≤ V satisfying the same asymptotic bounds. Let R̄_λ denote the analog of R_λ with V̄ in place of V; then R_λ ≤ R̄_λ. Therefore, to prove the lemma, there is no loss of generality in assuming that V is symmetric. The next step in the preparation is to obtain the constants θ_1, γ_1 in the lemma. We need to define a family of stopping times for the Brownian motion: for l ∈ (0, 1/2], let σ_l := inf{t ≥ 0 : d(X(t), ∂D) = l}.
We turn to the main proof, beginning with the first bound. Fix K ∈ N and let x ∈ D satisfy d(x, ∂D) ≤ r_2. By the strong Markov property together with (13), it follows that E_{r_3} e^{R_λ(σ_{r_1})} < 1. Therefore, letting x = r_1, we obtain E_{r_1} e^{R_λ(τ)∧K} ≤ 1 + c_1, and plugging the latter inequality back into (15), we obtain E_x e^{R_λ(τ)∧K} ≤ 1 + c_1. Finally, letting K → ∞, monotone convergence gives E_x e^{R_λ(τ)} ≤ 1 + c_1 whenever d(x, ∂D) ≤ r_2.
Letting ρ := c_2γx^α, a := κ^{−1}x and b := 1 − a in Proposition 1-(1), we obtain the corresponding estimate. Summarizing, we have proved the first bound for x ∈ [r_2, 1/2], which completes the proof of the first bound in the lemma.

We turn to the second bound; the argument is similar. If d(x, ∂D) ≥ r_1, the required estimate is immediate. Assume therefore that d(x, ∂D) ≤ r_2, and let K ∈ N. From Proposition 1-(2) and the strong Markov property we obtain two upper bounds; combining them and letting x = r_1 yields the bound at r_1. Finally, letting K → ∞ and using the monotone convergence theorem, we have proved the second bound whenever d(x, ∂D) ≤ r_2.

Proof of Theorem 1
In this section we use the results of the preceding sections to prove Theorem 1.
We turn to the upper bound. In light of (3), in order to show that λ ≥ λ_0(γ), it is sufficient to show that E^γ_x e^{λτ} = ∞ for some x ∈ D. By Proposition 3-(1), this condition holds if E^γ_µ e^{λτ} = ∞, and this is what we will prove. We split the discussion according to the value of α.
Finally, assume that α > 1. Note that the upper bounds of Lemma 2 need not hold for all θ, so the argument of the preceding paragraph may not apply. Recalling from (10) that λ = θγ^{2/(α+2)}, it follows from Proposition 2 that there exists a constant θ_0 ∈ (0, ∞) such that E^γ_µ e^{λτ} = ∞ for all θ > θ_0.