Processes with Inert Drift

We construct a stochastic process whose drift is a function of the process's local time at a reflecting barrier. The process arose in [7] as a model of the interaction of a Brownian particle with an inert particle. We construct the process, give asymptotic results for two different arrangements of inert and Brownian particles, and construct the analogous process in R^d.


Introduction
In this paper, we construct solutions (X, L) to the system

    X(t) = B(t) + L(t) + ∫₀ᵗ μ∘L(s) ds ≥ 0 for all t ≥ 0,

where B is a standard Brownian motion and L(t) is continuous and nondecreasing with L(0) = 0, increasing only at times t with X(t) = 0. The case when μ(l) = Kl, K > 0, was studied by Frank Knight in [7], which was the starting point for this research.
The case when μ(l) = 0 is classical reflected Brownian motion. There are at least two approaches to constructing pathwise solutions in this case. The first is to let X(t) = |B(t)|, define L(t) from X(t) as above, and define a new Brownian motion B̃(t) = X(t) − L(t). For the second approach we define L(t) = max(0, sup_{s≤t}(−B(s))) and take X(t) = B(t) + L(t). The two approaches yield processes (X, L) with the same distribution. The second approach has the advantages of being easier to work with, and of retaining the original B(t).
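The second approach is easy to exercise numerically. The sketch below (all names illustrative; a discrete-time approximation, not part of the paper) applies the map L(t) = max(0, sup_{s≤t}(−B(s))) to a sampled Brownian path and checks the two defining properties of the reflected pair:

```python
import random
import math

def skorohod_map(path):
    """Apply L(t) = max(0, sup_{s<=t} -B(s)) to a sampled path and
    return the reflected path x = b + L together with L itself."""
    ell = 0.0
    xs, ls = [], []
    for b in path:
        ell = max(ell, -b)   # running maximum of -B; starts at 0, so L(0) = 0
        ls.append(ell)
        xs.append(b + ell)
    return xs, ls

random.seed(0)
dt = 1e-3
b, path = 0.0, []
for _ in range(10_000):
    path.append(b)
    b += random.gauss(0.0, math.sqrt(dt))

x, ell = skorohod_map(path)
assert min(x) >= 0.0                                # X(t) >= 0
assert all(v <= w for v, w in zip(ell, ell[1:]))    # L nondecreasing
```

Here L increases only while x sits at zero, which is the discrete analogue of condition 3 of the theorem below.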
For other μ(l), X(t) can be written as the difference of B(t) + L(t) and −∫₀ᵗ μ∘L(s) ds. These two pieces can be interpreted as the path of a Brownian particle reflecting from the path −∫₀ᵗ μ∘L(s) ds of an inert particle. We call the second path inert because its derivative is constant when the two particles are apart. With this model in mind, we consider two other configurations of Brownian particles and inert particles. We also consider a generalization to R^d.
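The particle picture can be simulated directly. The sketch below is a naive Euler scheme with illustrative names (not the construction used in the proofs): the Brownian particle Z = B + L is pushed off the inert particle Y, whose velocity is −K·L(t) for the choice μ(l) = Kl:

```python
import random
import math

random.seed(1)

def simulate(K=1.0, T=5.0, dt=1e-4):
    """Brownian particle Z = B + L reflected off the inert particle Y,
    whose velocity is -K*L(t) (the case mu(l) = K*l)."""
    B = L = Y = 0.0
    gap, L_path = [], []
    for _ in range(int(T / dt)):
        B += random.gauss(0.0, math.sqrt(dt))
        Y += -K * L * dt        # inert particle: velocity constant between pushes
        Z = B + L
        if Z < Y:               # contact: push the Brownian particle back up
            L += Y - Z
            Z = Y
        gap.append(Z - Y)
        L_path.append(L)
    return gap, L_path

gap, L_path = simulate()
assert min(gap) >= 0.0                                  # particles never cross
assert all(a <= b for a, b in zip(L_path, L_path[1:]))  # L nondecreasing
```

The gap Z − Y plays the role of X(t) above, and the inert particle's speed changes only at contact times.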
In Section 2, we consider the one-dimensional case. We also obtain an explicit formula for the distribution of lim_{t→∞} μ∘L(t) and for the excursion measure of the process with drift.
In Section 3, we consider the case when a Brownian particle and an inert particle are confined to a one-dimensional ring. In other words, we have a process with drift confined to a closed interval, with reflection at both endpoints. The velocity of the inert particle turns out to be an interesting process. Indexed by local time, it is a piecewise linear process with a Gaussian stationary distribution, though it is not a Gaussian process. Some results from this section have appeared in [3]. We show that under rescaling, the velocity process converges in distribution to an Ornstein-Uhlenbeck process.

Section 4 discusses the configuration consisting of two independent Brownian particles with an inert particle separating them. We would like to determine whether the process is recurrent or transient; that is, whether all three particles can meet in finite time, or whether the inert particle in the middle forces the distance between the Brownian particles to tend to infinity. In fact, this configuration is a critical case between the two behaviors. We show that under rescaling the distance between the two Brownian particles is a two-dimensional Bessel process. Dimension two is exactly the critical case for the family of Bessel processes.

Section 5 partially extends the results in Section 2 to domains in R^d, d ≥ 2. This section uses results of Lions and Sznitman on the existence of reflected Brownian motion in domains in R^d.
We show existence for the process with drift for the case when the velocity gained is proportional to the local time.
Some results similar to those in Section 2 have been found by S. Ramasubramanian in [9]. He in fact considers a slightly more general class of allowable μ, and extends the process to orthants in R^d.
The author would like to thank the referees for their very helpful suggestions, and also Kavita Ramanan and one of the referees for the reference to [9]. This paper is partly based on a Ph.D. thesis written at the University of Washington under the guidance of Chris Burdzy.

Existence and Uniqueness in One Dimension
In this section, we prove the following version of the well-known Skorohod Lemma.

Theorem 2.1. Let f(t) be a continuous function with f(0) ≥ 0, and let μ(l) be a continuous monotone function satisfying (1) for every l. If μ(l) → −∞, then we further require that

    Σ_{n≥1} 1/|μ(n)| = ∞.   (2)

Then there is a unique continuous L(t) satisfying

1. x(t) = f(t) + L(t) + ∫₀ᵗ μ∘L(s) ds ≥ 0 for all t,
2. L(t) is nondecreasing with L(0) = 0,
3. L(t) increases only at times t with x(t) = 0.

As a reminder to the reader, we quote the Skorohod Lemma; a proof can be found in, for example, [6]. The function L(t) it produces is given by

    L(t) = max(0, sup_{s≤t}(−f(s))).   (3)

We will denote the unique L(t) corresponding to f(t) in equation (3) by L_f(t). Combining these two equations, we see that L(t) = max(0, sup_{s≤t} −(f(s) + ∫₀ˢ μ∘L(u) du)). Next we will construct the L(t) corresponding to f(t) and μ(t). First, we may replace f(t) and μ(t) by modified versions f̃(t) and μ̃(t): it is easily checked that the L(t) satisfying the theorem for f̃(t) and μ̃(t) also works for f(t) and μ(t).
For m ≥ n, we have that I^ε_m(t) ≥ I^ε_n(t), so by Lemma 2.3, L^ε_m(t) ≤ L^ε_n(t), and from equation (3), L^ε_m(t) = L^ε_n(t) for t < T^ε_{m∧n}. Let L^ε(t) = lim_{n→∞} L^ε_n(t), and I^ε(t) = lim_{n→∞} I^ε_n(t). By Lemma 2.4, [10] then applies: for any sequence ε(n) → 0, L^{ε(n)}(t) → L(t), uniformly on compact subsets of [0, ∞). We will check that this L(t) satisfies the conditions of the theorem. That x(t) ≥ 0 follows from the definition of L^ε_n(t) and equation (3), as does the second condition. That I^{ε(n)}(t) → ∫₀ᵗ μ∘L(s) ds follows from uniform convergence. For condition 3, notice that {t : x(t) > δ} ⊂ {t : x^{ε(n)}(t) > δ/2} for large enough n. The uniqueness of L(t) proved above shows that L(t) is independent of the sequence ε(n) chosen.
For the case where m = inf{μ(l) : l ≥ 0} = −∞, repeat the above construction with μ_j(s) = μ(s) ∨ (−j), and denote the results by L_j(t) and x_j(t). By Lemma 2.5, L_j(t) will agree with L(t) for 0 ≤ t ≤ T_j, where T_j = inf{t : μ∘L(t) = −j}. To complete the proof, it is only necessary to show that T_j → ∞. Suppose instead that T_j → T < ∞. There are then T_n ↑ T so that L(T_n) = n and x(T_n) = 0. By the continuity of f(t), the increments f(T_{n+1}) − f(T_n) approach 0, so that T_{n+1} − T_n ≥ 0.9/|μ(n + 1)| for sufficiently large n. Then T = T_1 + Σ_n (T_{n+1} − T_n) = ∞ by (2), a contradiction. Hence T_j → ∞, and the proof is complete.
The following corollary restates Theorem 2.1 so as to resemble Knight's original result [7, Theorem 1.1]. While Theorem 2.1 yields a process with drift reflected at 0, this version yields a process reflected from a second curve v(t).
Corollary 2.6. Let f(t) be a continuous function with f(0) ≥ 0, and let μ(l) be a continuous increasing function satisfying (1) for every l ≥ 0. Then there is a unique continuous L(t) satisfying the analogous conditions.

Proof. This is just a restatement of Theorem 2.1.

Figure 1: Two equivalent versions of the process, corresponding to Theorem 2.1 and Corollary 2.6, resp. Note that τ∞ cannot be determined from the pictured portion of the graph, so the best candidate is what is labeled.
The remarks that follow demonstrate the necessity of the restrictions (1) and (2) on μ(l) in Theorem 2.1, at least for the deterministic version of the theorem.
Remark 2 (Blow-up of L(t)). Again in the context of Theorem 2.1, we let f(t) = −t, μ(l) = −l², and L(t) = tan t. Then

    x(t) = −t + tan t − ∫₀ᵗ tan²(s) ds = −t + tan t − (tan t − t) = 0,

so L(t) satisfies the conditions of the theorem, but blows up at t = π/2.
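The cancellation in Remark 2 rests on the identity ∫₀ᵗ tan²(s) ds = tan t − t, so x(t) vanishes identically while L(t) = tan t explodes at π/2. A quick numerical check (trapezoid rule; names illustrative):

```python
import math

def x(t, n=20_000):
    """x(t) = f(t) + L(t) + int_0^t mu(L(s)) ds with f(t) = -t,
    mu(l) = -l**2, L(t) = tan(t); trapezoid rule for the integral."""
    h = t / n
    integral = 0.5 * h * (math.tan(0.0) ** 2 + math.tan(t) ** 2)
    integral += sum(math.tan(i * h) ** 2 for i in range(1, n)) * h
    return -t + math.tan(t) - integral

for t in (0.5, 1.0, 1.4):
    assert abs(x(t)) < 1e-3          # x(t) is identically zero
assert math.tan(1.5) > 14.0          # but L(t) = tan(t) blows up near pi/2
```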
The results so far do not rely on probability. We will now remedy that by applying them to Brownian paths. For the rest of the section, we will need a standard Brownian motion {B_t, F_t}, with B_0 = 0 a.s., and some fixed μ(l) satisfying (1). In the statements below, L(t) will be a random function, L(ω, t) = L_{B(ω,·)}(t).
Theorem 2.7. If f(t) in Theorem 2.1 is replaced with a Brownian motion B_t, then the corresponding L(t) is the semimartingale local time at zero, Λ_t(0) [6, p. 218], of X_t ≡ B_t + L(t) + ∫₀ᵗ μ∘L(s) ds, a.s.

Proof. To see that L(t) = Λ_t(0), use a version of the Itô–Tanaka formula (e.g. (7.4) on p. 218 of [6]), with the identifications M_t = B_t, V_t = L(t) + ∫₀ᵗ μ∘L(s) ds, and f(x) = |x|. On the other hand, if f(t) is replaced by B_t in Theorem 2.1 then, since X_t ≥ 0, the two expressions agree. Therefore, L(t) = Λ_t(0), a.s.
Having established the previous fact, a standard argument [6] shows the other assertion.

Corollary 2.8. For the case μ ≡ λ, that is, Brownian motion with drift λ reflected at 0.

Proof. This follows directly from the previous theorem and equation (3) on page 1511.

By the notation L_X(t) we will mean the semimartingale local time at 0 of the reflected process X(t) (not just for the specific case of constant drift). Note that in the stochastic case, unlike the deterministic case, L_X(t) can be recovered almost surely from X(t).
We now fix λ, and define X(t) ≡ B(t) + L_X(t) + λt as above. Let φ be the right-continuous inverse of L_X(t); that is, φ(s) = inf{t : L_X(t) > s}. By Corollary 2.8 and [6, p. 196], a computation yields identities (4) and (5) for μ ≡ λ, together with an analogous formula for b > 0.

Theorem 2.9. Let τ∞ = inf{s : φ(s) = ∞}. Then, with μ(l) as in Theorem 2.1,
Proof. An equivalent definition of τ∞ is τ∞ = sup{L_X(t) : t ≥ 0}. Recall from the proof of Theorem 2.1 the definition of L^ε(t), and let τ^ε_∞ = sup{L^ε(t) : t ≥ 0}. It is clear from the uniform convergence of L^ε(t) to L_X(t) that τ∞ = lim_{ε→0} τ^ε_∞. For notational convenience, define ⌊τ⌋_ε = sup{jε : jε ≤ τ, j an integer}, where the value of ε should be apparent from the context. From (4) and (5), a computation leads to a separable differential equation; solving it and taking the limit as ε → 0 gives the result.
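For constant drift μ ≡ λ > 0, the limit L_∞ = lim_t L(t) can be sampled from the Skorohod representation L_∞ = sup_t(−B(t) − λt)⁺, and a classical computation (consistent with Corollary 2.8) gives L_∞ ~ Exp(2λ). A Monte Carlo sketch with illustrative parameters; discrete monitoring biases the estimate slightly downward:

```python
import random
import math

random.seed(2)

def total_local_time(lam, T=20.0, dt=0.01):
    """L_infty = sup_t (-B(t) - lam*t)^+ for reflected BM with drift lam > 0,
    approximated by monitoring a discretized path on [0, T]."""
    b = t = sup = 0.0
    for _ in range(int(T / dt)):
        b += random.gauss(0.0, math.sqrt(dt))
        t += dt
        sup = max(sup, -b - lam * t)
    return sup

lam = 1.0
samples = [total_local_time(lam) for _ in range(2000)]
mean = sum(samples) / len(samples)
# Exp(2*lam) has mean 1/(2*lam) = 0.5; the discrete sup sits a bit below it.
assert 0.35 < mean < 0.60, mean
```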
For calculations, it is useful to consider the process we have constructed as a point process of excursions from the origin, indexed by local time. Here we derive some of the formulas that will be used in later sections.
Theorem 2.10. The measure n(·) has the density function displayed below, with μ(l) as in Theorem 2.1.
Proof. For fixed τ and λ, this decomposes as the product of the probability that an excursion of Brownian motion with drift μ(τ) has time duration λ, given that τ∞ > τ, and the probability that τ∞ > τ (from Theorem 2.9). If τ∞ < τ, no more excursions occur.
Finally, we calculate the probability that an excursion of Brownian motion with constant drift µ reaches height l. We will need this in the sections that follow.
Lemma 2.11. For Brownian motion with constant drift μ, the intensity measure of excursions that reach level l before returning to the origin is given below.

Proof. Apply the Girsanov transform; the last formula comes from [6, 2.8.28].
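The Girsanov/scale-function computation behind such excursion formulas rests on the classical two-sided exit probability for Brownian motion with drift μ: started at x ∈ (0, l), the path reaches l before 0 with probability (1 − e^{−2μx})/(1 − e^{−2μl}). A Monte Carlo sketch of this standard fact (parameters illustrative):

```python
import random
import math

random.seed(3)

def hits_top(x, l, mu, dt=2.5e-4):
    """Run one Brownian path with drift mu from x; True if it reaches l before 0."""
    while 0.0 < x < l:
        x += mu * dt + random.gauss(0.0, math.sqrt(dt))
    return x >= l

x0, l, mu, n = 0.3, 1.0, 0.7, 4000
est = sum(hits_top(x0, l, mu) for _ in range(n)) / n
exact = (1 - math.exp(-2 * mu * x0)) / (1 - math.exp(-2 * mu * l))
assert abs(est - exact) < 0.05, (est, exact)
```

Letting x → 0 and normalizing by x recovers the excursion intensity used in Section 3.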

A Process with Inert Drift in an Interval
The construction method in Section 2 for one inert particle and one Brownian particle can be extended to other configurations of particles. In this section we construct a process X(t), a Brownian motion confined to the interval [0, l] whose drift velocity is V(t) = v + K(L_0(t) − L_l(t)), where L_0(t) and L_l(t) are the local times at 0 and l, resp., accumulated before time t.
Another way to look at this process is to allow the interval to move; that is, construct processes Y_0(t) and Y_l(t) = Y_0(t) + l, with X(t) the path of a Brownian particle reflected from Y_0(t) and Y_l(t). If we look at these processes modulo l, then Y_0(t) and Y_l(t) can be seen as the two sides of a single inert particle on the boundary of a disc. If we let l = 2π, then exp(iX(t)) and exp(iY_0(t)) trace out the paths of one Brownian particle and one inert particle on the boundary of the unit disc.
Theorem 3.1. Given constants l, K > 0, v ∈ R, and a Brownian motion B(t) with B(0) = 0, there exist unique processes L_0(t) and L_l(t) satisfying the conditions displayed below.

Proof. In Section 2 we constructed a similar process for just one Brownian and one inert particle. Because the two inert particles in this theorem are always distance l apart, we can carry out the construction piecewise; that is, do the construction for one inert particle and one Brownian particle as in the previous section until the distance between these two particles reaches l, then continue the construction with the Brownian particle and the other inert particle, until the distance between these two reaches l, and continue switching between them.
Using Corollary 2.6, with the identifications μ(τ) = Kτ and f(t) = B(t), we first construct unique processes Y_0(t), X(t), and L_0(t) having the properties stated in the theorem, for 0 ≤ t < T_0 ≡ inf{t : X(t) − Y_0(t) = l}. Applying Corollary 2.6 again, but changing the order of the Brownian and inert paths, we can extend Y_0(t), X(t), and L_l(t) to the next such time interval. Then we are in the initial situation of the Brownian particle in contact with the upper inert particle Y_l(t), and we can repeat the steps of the construction.
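The switching construction suggests an obvious, if crude, numerical stand-in (illustrative names; not the piecewise construction of the proof): reflect X at the two endpoints and credit each push to the velocity, with sign +K at 0 and −K at l:

```python
import random
import math

random.seed(5)

def simulate(l=1.0, K=1.0, v0=0.0, T=10.0, dt=1e-4):
    """X reflects at 0 and l; each unit of local time at 0 adds +K to the
    velocity V, each unit at l adds -K (an Euler sketch, not the proof)."""
    x, V = l / 2.0, v0
    xs = []
    for _ in range(int(T / dt)):
        x += V * dt + random.gauss(0.0, math.sqrt(dt))
        if x < 0.0:            # push at 0: dL0 = -x, dV = +K*dL0
            V += K * (-x)
            x = 0.0
        elif x > l:            # push at l: dLl = x - l, dV = -K*dLl
            V -= K * (x - l)
            x = l
        xs.append(x)
    return xs

xs = simulate()
assert 0.0 <= min(xs) and max(xs) <= 1.0   # confined to [0, l]
```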

Density of the velocity process
The function L(t) ≡ L_0(t) + L_l(t) is the total semimartingale local time that the Brownian particle X(t) spends at the endpoints of the interval [0, l]. Since the velocity process V(t) changes only at these endpoints, we will reparametrize V in terms of the local time: we define Ṽ_τ = V(L^{-1}(τ)), where L^{-1} is the right-continuous inverse of L. The process Ṽ_τ is a piecewise linear process consisting of segments with slope ±K. The slope of the process at a particular time τ depends on which endpoint X has most recently visited. For this reason, Ṽ_τ is not Markov. We therefore introduce a second process J_τ, taking values 0 or 1, with value 0 indicating that X has most recently visited endpoint 0 and the velocity is increasing, and value 1 indicating that X has most recently visited endpoint l and the velocity is decreasing. This technique of introducing an additional state process is similar to that used in [1].
For convenience, we will define D to be the set of continuous, twice-differentiable functions on our state space that vanish at ∞.
Lemma 3.2. The infinitesimal generator A of the process (Ṽ_τ, J_τ) is given below.

Proof. We will prove the lemma using the definition of the generator (see [4]) and Lemma 2.11. Assume f ∈ D.
We will assume that f ∈ C²_b(R) ∩ {f : lim_{|x|→∞} f(x) = 0}. By Lemma 2.11, excursions from 0 that reach l occur with Poisson rate v e^{vl}/sinh(vl), and by symmetry, excursions from l that reach 0 occur with rate v e^{−vl}/sinh(vl). Let σ be the time of the first crossing. We can rewrite the previous equation for j = 0; because the limit of P^{(v,0)}(σ < τ)/τ is exactly the Poisson rate given above, Af(v, 0) is as stated in the lemma. A similar calculation gives Af(v, 1).
Lemma 3.3. The (formal) adjoint A* of the generator A is given below. Let us assume that A is of the somewhat more general form displayed; integrating by parts and factoring out f(v, j) yields the stated expression for A*.

Proof. The stationary distribution μ of a process is a probability measure satisfying

    ∫ Af dμ = 0    (9)

for all f in the domain of A. If we assume that dμ is of the form g(v, j) dv, then this is equivalent to finding g(v, j) satisfying A*g(v, j) = 0. By Lemma 3.3, g(v, 0) and g(v, 1) differ by a constant. Note that this does not depend on the jump intensities a(v) and b(v). Since ∫ (g(v, 1) + g(v, 0)) dv = 1, both functions are integrable, so the constant is zero and g(v, 0) = g(v, 1). Using this fact and Lemma 3.3, we get a separable differential equation, which has solutions of the form g(v, 1) = C exp(−v²/K).
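The l-independence of the answer comes from the algebraic identity a(v, l) − b(v, l) = v(e^{vl} − e^{−vl})/sinh(vl) = 2v, which forces g′(v)/g(v) = −2v/K in the separable equation, regardless of l. Both facts can be checked numerically (names illustrative):

```python
import math

def a(v, l):  # upcrossing rate, as in the proof of Lemma 3.2
    return v * math.exp(v * l) / math.sinh(v * l)

def b(v, l):  # downcrossing rate
    return v * math.exp(-v * l) / math.sinh(v * l)

# a - b = 2v exactly, for every interval length l:
for v in (-2.0, -0.5, 0.3, 1.7):
    for l in (0.1, 1.0, 5.0):
        assert abs(a(v, l) - b(v, l) - 2.0 * v) < 1e-9

# hence g'(v)/g(v) = -2v/K, solved by g(v) = C*exp(-v**2/K), for any l:
K = 2.0
g = lambda v: math.exp(-v * v / K)
h = 1e-5
for v in (-1.0, 0.4, 2.2):
    assert abs((g(v + h) - g(v - h)) / (2 * h * g(v)) + 2 * v / K) < 1e-6
```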

Behavior as l → 0
In this section we will let Ṽ^l(τ) denote the process Ṽ_τ constructed in the previous section, with the constant K = 1. We will show that the sawtooth process Ṽ^l(τ) converges weakly to an Ornstein-Uhlenbeck process, under appropriate rescaling, as l → 0. The proof uses results from Section 11.2 of [11].
We first consider the sawtooth process Ṽ^l(τ) only at those times when it switches from decreasing to increasing; call them τ^l_0 = 0, τ^l_1, τ^l_2, .... We will also refer to those times when the process switches from increasing to decreasing; call them σ^l_1, σ^l_2, .... Let p^l_n = σ^l_n − σ^l_{n−1}, and q^l_n = τ^l_n − σ^l_n. Next, we construct the piecewise linear processes V̄^l(t) and T^l(t): we define V̄^l(t) so that V̄^l(nl²) = Ṽ^l(τ^l_n), and T^l(nl²) = l τ^l_n. As in the previous section, we will let a(v, l) = v e^{vl}/sinh(vl) and b(v, l) = v e^{−vl}/sinh(vl). We will use the following properties of a and b in our proof: a(v, l) and b(v, l) are monotone functions, and la(v, l) → 1 and lb(v, l) → 1 as l → 0. Let P^l_{v,t} be the probability measure corresponding to (V̄^l(t), T^l(t)) starting from (v, t). Let P_{v,t} be the unique solution of the martingale problem (see [6] or [11]) for the limiting generator. The proof of Lemma 3.5 depends on, and will follow, a number of technical lemmas. We begin with some definitions. The notation P_{(v,t)}(dr, ds) denotes the transition density of (p^l, q^l).
To prove the lemma, we need to show that the quantities above, together with |∆^ε_l(v, t)|, converge to zero as l → 0, uniformly for |v| ≤ R. When this is done, Lemma 11.2.1 of [11] completes the proof.
Because the density P_{(v,t)}(dr, ds) is a bit unwieldy for direct computation, we introduce p̃^l and q̃^l, exponential random variables with rates a(v, l) and b(v, l), respectively, and we write P̃_{(v,t)}(dr, ds) for the transition density of (p̃^l, q̃^l).

Remark 3. We will need to compare P_{(v,t)}(dr, ds) and P̃_{(v,t)}(dr, ds) in the calculations that follow. If the process V̄^l(t) stays between v − ∆ and v + ∆ for 0 ≤ t ≤ l², then the rate of the process p^l is bounded by a(v, l) and a(v, l) + 2∆, and the rate of the process q^l is bounded by b(v, l) − 2∆ and b(v, l) + 2∆. Combining the RHS and LHS with the density P̃_{(v,t)}(dr, ds), we arrive at the inequality (12).

Proof. We first need to show (13). Note that if 0 ≤ r ≤ 1/2 and 0 ≤ s ≤ 1/2, then (r, s) ∈ E_l for l < √3/2. Since la(v, l) → 1 as l → 0, the last term will converge to 0 if we can show that a²(R, l) exp(−a(−R, l)/2) → 0, which follows by a direct computation. The same procedure handles the remaining term, and these calculations (noting that (r − s)² ≤ r² + s²) give (13).
Since p̃^l and q̃^l are independent exponential random variables, we may compute directly. Since a(v, l) and b(v, l) are monotone, la(v, l) and lb(v, l) converge to 1 for all v, and I → 0, the RHS of the last equation converges to 2 uniformly for |v| ≤ R. Combining this with equation (13) gives the desired convergence. Since (r − s)² ≤ 1 on E_l, the second term III can be bounded as above; the factor exp(−1/√l)/l² → 0 and exp(2R√l) → 1.

To bound the term II, we observe that on the set E_l ∩ {r + s < √l}, V̄^l must lie in the interval indicated above. As before, (2/a² − 2/(ab) + 2/b²)/l² → 2, and as shown above, C(v, l, √l) → 0 uniformly on compact sets as l → 0, so we conclude the desired convergence.

Proof. We will first show that (1/l) ∫_{E_l^c} (r + s) P_{(v,t)}(dr, ds) → 0.

The same procedure shows the analogous bound, and combining them gives (17).

Since p̃^l and q̃^l are independent exponential random variables, we may compute directly. Since a(v, l) and b(v, l) are monotone, la(v, l) and lb(v, l) converge to 1 for all v, and I → 0, this quantity converges to 2 uniformly for |v| ≤ R.

Since r + s ≤ 1/l on E_l, the second term III can be bounded as above, and the bound converges to 0, since it is exactly the same as (15) on page 1526.

To bound the term II, we observe that on the set E_l ∩ {r + s < √l}, V̄^l must lie in the interval indicated above.

The same procedure shows the analogous bound, and combining them gives (18).

Since p̃^l and q̃^l are independent exponential random variables, we may compute directly. Since a(v, l) and b(v, l) are monotone, la(v, l) and lb(v, l) converge to 1 for all v, and I → 0, the last equation converges to −2v uniformly for |v| ≤ R.

Since |r − s| ≤ 1 on E_l, the second term III can be bounded as above, and the bound converges to 0, since it is the same as (15) on page 1526.

To bound the term II, we observe that on the set E_l ∩ {r + s < √l}, V̄^l must lie in the interval indicated above. As before, (1/a + 1/b)/l → 2. For this case, we need to show that C(v, l, √l)/l → 0 uniformly on compact sets as l → 0. From (12), it is enough to show that the limit is finite (in fact, any finite limit will do), and this is an application of l'Hôpital's rule. We conclude the desired convergence. Finally, we use the triangle inequality to conclude that |b^{V̄^l}(v, t) + 2v| → 0.
Proof. Since p̃^l and q̃^l are independent exponential random variables, we may compute directly.

Since a(v, l) and b(v, l) are monotone and converge to ∞ for all v, the last equation converges to 0 uniformly for |v| ≤ R.

Next we compare a^{TT}_l and ã^{TT}_l.

Since (r + s)² ≤ 1/l² on E_l, the second term II can be bounded as above, and the bound converges to 0, since it is the same as (15) on page 1526.

To bound the term I, we observe that on the set E_l ∩ {r + s < √l}, V̄^l must lie in the interval indicated above.

In this case, both terms converge to 0 as l → 0, and we conclude the stated limit.

Proof. Since p̃^l and q̃^l are independent exponential random variables, we may compute directly.

Since a(v, l) and b(v, l) are monotone, la(v, l) and lb(v, l) converge to 1, and a(v, l) and b(v, l) converge to ∞ for all v, the last equation converges to 0 uniformly for |v| ≤ R.

Next we compare a^{VT}_l and ã^{VT}_l.

Since |r² − s²| ≤ 1/l on E_l, the second term II can be bounded as above, which we know converges to 0, since it is exactly the same as (15) on page 1526.

To bound the term I, we observe that on the set E_l ∩ {r + s < √l}, V̄^l must lie in the interval indicated above.

In this case, both terms converge to 0 as l → 0, so the stated limit holds.

Proof. In fact, we need only observe that for l < 1/2, the set in question is contained in one already treated, whose bound we have shown to converge above (again, (15) on page 1526).
Proof of Lemma 3.5. We need to show that the martingale problem for the generator L has a unique solution. Since the coefficients are either bounded or linear, we can use Theorem 5.2.9 of [6]. Once uniqueness of the solution of the martingale problem for L is established, Lemma 3.5 follows directly from Theorem 11.2.3 of [11] and Lemmas 3.6-3.11.
An issue with Lemma 3.5 is that the measures P l v,t are associated withV l (t) rather than with the originalṼ l (τ ). In fact, the convergence holds forṼ l (τ ) as well.
Proof. By the symmetry of a and b, the process interpolated on the other side (along the σ_j's) has the same limit as V̄^l(t). Since the processes interpolated along the top and the bottom of the sawtooth process converge to the same limit, the whole sawtooth process converges provided the distance between the two interpolated processes converges to 0 uniformly on finite time intervals. This follows from Lemmas 3.7 and 3.9.
The construction of V̄^l(t) involved a time change, because we forced each switch of the process to have duration l². The term 2∂/∂T of the generator L indicates that, in the limit, this is twice the duration of the actual time between switches. We can restore the original clock by dividing the generator by two. When we do so, we find that the spatial component of the process has the generator corresponding to solutions of the SDE dV(t) = −V(t) dt + dB(t). This is an Ornstein-Uhlenbeck process.
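If the limiting Ornstein-Uhlenbeck process is normalized as dV = −V dt + dB (my reading of the rescaled generator), its stationary distribution is N(0, 1/2), with density proportional to exp(−v²), agreeing with the stationary density C exp(−v²/K) of the previous subsection at K = 1. A check using the exact OU transition (parameters illustrative):

```python
import random
import math

random.seed(4)

# Exact one-step transition of dV = -V dt + dB:
#   V(t+dt) = V(t)*exp(-dt) + N(0, (1 - exp(-2*dt))/2)
dt, n = 0.1, 200_000
decay = math.exp(-dt)
step_sd = math.sqrt((1.0 - math.exp(-2.0 * dt)) / 2.0)

v = acc = acc2 = 0.0
for _ in range(n):
    v = v * decay + random.gauss(0.0, step_sd)
    acc += v
    acc2 += v * v

mean, var = acc / n, acc2 / n
assert abs(mean) < 0.05          # stationary mean 0
assert abs(var - 0.5) < 0.05     # stationary variance 1/2, density ~ exp(-v**2)
```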

A Pair of Brownian Motions Separated by an Inert Particle
In this section, we consider an arrangement of two Brownian particles X_1 and X_2 separated by an inert particle Y in R. More precisely, we construct processes X_1(t) ≤ Y(t) ≤ X_2(t), where the interactions between X_1(t) and Y(t) and between Y(t) and X_2(t) are as described in Section 2.

Figure 3: Two Brownian particles separated by an inert particle

A method of construction different from that in Section 2 is needed if the two Brownian particles ever meet. Instead we introduce a random variable T∞ to represent the first meeting of the two Brownian particles X_1(t) and X_2(t). In fact, we will show that with probability one, T∞ = ∞.

Theorem 4.1. Given independent Brownian motions B_1 and B_2, with B_j(0) = 0, constants x > 0, 0 ≤ y ≤ x, v ∈ R, and K > 0, there exist unique processes L_1(t) and L_2(t), and a random time T∞, satisfying the following conditions:

1. X_1(t), Y(t), and X_2(t) satisfy the defining equations of the model,
2. L_1(t) and L_2(t) are continuous, nondecreasing functions with L_1(0) = L_2(0) = 0,
3. L_1(t) and L_2(t) are flat off the sets {t : X_1(t) = Y(t)} and {t : X_2(t) = Y(t)}, resp.,
4. T∞ = inf{t : X_1(t) = X_2(t)}.
Proof. The construction method of Section 2, together with a sequence of stopping times, can be used to construct this process up to the stopping time T∞, the limit of the stopping times used in the construction. After time T∞ the process is not well-defined, but we show below that P(T∞ = ∞) = 1.
We define X K x (t) ≡ (X 1 (t), X 2 (t), Y (t), V (t)) for the processes constructed with initial state y = 0, v = 0, and constant K. The following lemma describes the scaling law of the process.
Proof. By Brownian scaling, the X_1 and X_2 components remain Brownian motions, and by uniqueness of local time, L_1 and L_2 scale in the same way. However, by the chain rule, the slope of the Y component is multiplied by 1/ε for each t.
The rest of the section concerns the proof that T∞ = ∞ a.s.

Theorem 4.3. Define a process X^{ρ,T}(t) for T > 0 and ρ ∈ (0, 1) as follows. By previous results, there are unique L^{ρ,T}(t) and L(t), where L^{ρ,T}(t) and L(t) are the local times of X^{ρ,T}(t) and X(t), respectively, at zero. Define L^{ρ,T}_∞ = lim_{t→∞} L^{ρ,T}(t) and L_∞ = lim_{t→∞} L(t).

Proof. First note that X(t) = X^{ρ,T}(t) and L(t) = L^{ρ,T}(t) for t ≤ T. Also note that the drift term of X^{ρ,T}(t) at time T is (1 − ρ)L(T) > 0. After time T, X^{ρ,T}(t) may or may not return to the origin. If not, then X(t) also would not have returned to the origin (B(t) ≥ B(t) + ∆(t)), so L^{ρ,T}_∞ = (1 − ρ)L_∞. Otherwise, X^{ρ,T} returns to the origin at some time τ^{ρ,T}. Define S accordingly; notice that X(S) = 0 with probability 1. Construct a Brownian motion B̃ by deleting the time interval (S, τ^{ρ,T}) from B(t), together with an associated local time and an associated reflected process with drift. Note that B̃(t) is a Brownian motion because τ^{ρ,T} is a stopping time and S depends only on B[S, T], and so is independent of B[0, S].
We will show that X̃(t) = B̃(t) + L̃(t) + ∫₀ᵗ L̃(s) ds. In fact, because of the pathwise uniqueness of the solutions L(t), we only need to check that B̃, X̃, and L̃ are continuous at S.
The limit of L̃(t) as t → ∞ is L^{ρ,T}_∞ (pathwise). But the limit of L̃(t) will have the same distribution as L_∞, because B̃(t) is a Brownian motion. Since we have either decreased L_∞ by a factor of ρ or replaced it with a new copy with identical distribution, the inequality holds.

Proof. By the previous lemma, we may assume that K = 1. We also assume that v = 0. We use slightly simplified versions of the processes X_1 and X_2 below, which incorporate the drift term (Y(t) in the definition), and which otherwise agree until time T with the definitions in Theorem 4.1.
where L 1 (t), L 2 (t) are the local times of X 1 (t) and X 2 (t) at the origin, B 1 (0) > 0 and B 2 (0) = 0, and with T a stopping time defined below.
Define T 0 = 0, T j+1 = inf{t > T j | V (t) = 0}, and define T ∞ = lim T j . On any of the intervals [T j , T j+1 ] (say that X 1 (T j ) = 0), the term V (t) behaves exactly as in the case of one Brownian particle and one inert particle, except that V (t) may decrease when X 2 (t) = 0. So V (t) is dominated in distribution by L ∞ .
Using the previous theorem, we can check Novikov's condition and then apply the Girsanov theorem to (X_1, X_2): under some measure, (X_1, X_2) is a standard reflected Brownian motion in the quadrant {(x, y) : x > 0, y > 0}. Observe that if X_1(T_j) = 0, then X_2(T_{j+1}) = 0 and X_1(T_{j+2}) = 0. Then T∞ < ∞ implies that the reflected Brownian motion hits the origin, an event with probability zero. Therefore, P(T∞ = ∞) = 1.
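The ordering X_1 ≤ Y ≤ X_2 is easy to exercise numerically. The naive Euler sketch below (illustrative names, not the proof's construction) pushes each Brownian particle off the inert particle and credits each push to the velocity V with opposite signs, so that by construction the inert particle always stays between the two:

```python
import random
import math

random.seed(6)

def simulate(x0=1.0, K=1.0, T=5.0, dt=1e-4):
    """X1 <= Y <= X2: the Brownian particles are pushed off the inert
    particle, and each push kicks the velocity V away from the pusher."""
    x1, x2, y, V = 0.0, x0, x0 / 2.0, 0.0
    min_gap = x2 - x1
    for _ in range(int(T / dt)):
        x1 += random.gauss(0.0, math.sqrt(dt))
        x2 += random.gauss(0.0, math.sqrt(dt))
        y += V * dt
        if x1 > y:             # contact below: push X1 down, kick V up
            V += K * (x1 - y)
            x1 = y
        if x2 < y:             # contact above: push X2 up, kick V down
            V -= K * (y - x2)
            x2 = y
        min_gap = min(min_gap, x2 - x1)
    return min_gap

min_gap = simulate()
assert min_gap >= 0.0    # the inert particle always separates X1 and X2
```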

The limiting process is Bess(2)
In this section, we wish to determine the law of X_2(t) − X_1(t), for the process described in Theorem 4.1, as the constant K → ∞. Heuristically, if we apply the usual Brownian rescaling to the process, the paths of the inert particle become steeper, which is equivalent to increasing K. We can therefore get a picture of the long-term behavior of the process by letting K → ∞.
As in the previous section, we approach the limit distribution through a Markov chain. We introduce stopping times T_j, and denote the transition probabilities of Y^K(j), noting that T_{j+1} − T_j is independent of the value of T_j. Now that our processes are defined, we focus on the transition probabilities of {Y^K_j}_j. The following definitions correspond to those in [11, Section 11.2]. In the calculations that follow, we focus on the first step in our Markov chain. We introduce two more random times between 0 and T_1, S_1 and S_2. The typical case will be that 0 < S_1 < S_2 < T_1; Lemma 4.8 below makes this precise.
We define τ^K_∞, as in Section 2, to be the limit of L_1(t) as t → ∞ in the absence of the process X_2(t). Applying Theorem 2.9, we can compute its distribution explicitly.

Proof. This follows from the inequality L_1(S_1) ≤ τ^K_∞ and the explicit distribution of τ^K_∞ (from Theorem 2.9). Next we need to show that S^K_1 is sufficiently small. We do this first by examining the duration of the excursions X_1 makes from the path of the inert particle. The measures are from Theorem 2.10.

Lemma 4.6. Define A^K_ε to be the number of excursions, of duration ε or larger, that X_1 makes from Y before time T_1.
If we condition the process X_1 not to make an excursion of infinite duration from Y, then A^K_ε is a Poisson random variable with rate bounded above by the displayed quantity. By a change of variables, this bound converges to 0 as K → ∞. In fact, it converges to 0 for ε(K) = K^{−1/2+δ}, a fact we will use in the next lemma.
Proof. By the previous lemma, we need only consider excursions of length less than K^{−1/2+δ}, on the set where L_1(S_1) < K^{−1/2+δ}. The next lemma allows us to work with the much nicer density of τ^K_∞ instead of that of L_1(S_1).
Proof. For L_1(S_1) < τ^K_∞, the inert particle must cross the gap between X_1(t) and X_2(t) before S^K_1, the last meeting time of X_1(t) and the inert particle. Since the particle is in contact with X_1(t) at the instant S^K_1, it is sufficient to show that the probability of such a crossing vanishes in the limit. We bound the LHS by a quantity which is 0 by a standard computation.
We will also need a lower bound for L_1(S_1), because the time it takes for the inert particle to cross the gap between the Brownian particles, S_2 − S_1, is approximately x/(K L_1(S_1)).

Proof. Because L_1(S_1) < τ^K_∞, it is enough to compute the expectation of (τ^K_∞)². Multiplying by √K and taking the limit yields the result.
Proof. By Lemma 4.8, it is enough to compute the expectation of τ K ∞ : Multiplying by √ K, and taking the limit yields the result.
Proof. Using Lemma 4.7, we disregard the contributions of S^K_1 and T^K_1 − S^K_2. By [6, p. 196], a Brownian motion with drift μ, starting at x > 0, has hitting time density at zero

    x/√(2πt³) · exp(−(x + μt)²/(2t)).

In our case, assuming X_1(S^K_1) is small, that is, that the two Brownian particles remain close to distance x apart, we obtain the stated asymptotics. The assumption that X_1(S^K_1) is small can be justified by noting that X_1(S^K_1) will have mean L_1(S_1) and variance S_1, and then applying Lemmas 4.5 and 4.7.

Proof. Using the same densities as in the previous lemma, we compute the stated limit.

Lemma 4.14. For all R > 0, the stated bound holds.

Proof. The change in X_2 − X_1 can be expressed as the displayed sum; taking the expectation leaves the stated quantity.

Proof. Following the proof of the previous lemma and taking the expectation gives the result.

Proof. As in the preceding lemmas, taking the expectation leaves the stated quantity. As in the previous lemmas, we discard the contribution from T_1 − S_2. Using the probability densities from Lemma 4.12, it is easy to see the required convergence, and the result follows.
We define the process Ŷ^K(t) to be a piecewise linear process derived from the Markov chain {Y^K_j}. We can take the domain of L to be C²((0, ∞) × [0, ∞)) in the following theorems.
From the X√π (∂/∂T) term above, we can see that the space-time process Ŷ^K(t) runs at a different rate than do the original Brownian motions which defined our process. We can perform a change of time to restore the original clock, by dividing the generator by X√π.
Proof. We actually change the clock by the factor 2X√π to get the correct Brownian motion term, because the original process is the difference of two Brownian motions. After the change of time, since the process has a linear clock rate, the first coordinate of the process is the original (X_2(t/2) − X_1(t/2)), whose generator is L with the T term omitted. This is exactly the generator of the two-dimensional Bessel process.
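Since Bess(2) is the radial part of planar Brownian motion, one sanity check is the exact identity E|W_t|² = |W_0|² + 2t for W in R², consistent with applying the Bessel generator (1/2)d²/dx² + (1/(2x))d/dx to f(x) = x², which gives 2. A Monte Carlo sketch (parameters illustrative):

```python
import random
import math

random.seed(7)

# For planar Brownian motion W started at (r0, 0), E|W_t|^2 = r0**2 + 2*t.
t, n, r0 = 1.0, 50_000, 1.0
acc = 0.0
for _ in range(n):
    wx = r0 + random.gauss(0.0, math.sqrt(t))
    wy = random.gauss(0.0, math.sqrt(t))
    acc += wx * wx + wy * wy
assert abs(acc / n - (r0 * r0 + 2.0 * t)) < 0.1
```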

A Process with Inert Drift in R^d
The Brownian motion with inert drift constructed in Section 2, considered as a process on the domain [0, ∞), can be generalized to higher-dimensional domains. In this section, we will construct such a process in C² domains. Unfortunately, an error was found in the original proof of uniqueness for this process. A correct proof for bounded C² domains will appear in [2].
We would like to point out that S. Ramasubramanian has previously extended a more general version of the process to orthants; see [9].
We will rely on some results of P. Lions and A. Sznitman from [8]. Let D be an open set in R^d, with a unit inward vector field n on ∂D satisfying (19) and (20) below. We make the following assumptions, as in [8]: there exists C_0 such that for all x ∈ ∂D, x′ ∈ D̄, and L ∈ n(x),

    (x′ − x, L) + C_0 |x′ − x|² ≥ 0.   (19)

Note that C² domains are admissible by this definition.
We will call a pair (x t , L t ) a solution of the Skorohod problem (w, D, n) if the following hold: 1. x t = w t + L t for t ≥ 0.
Notationally, x j or (x) j will denote the j-th component of x when x is a vector or vector function.
We will call a function x_t a solution to the extended Skorohod problem if condition 5 above is replaced by the following condition.

Existence when D lies above the graph of a function
The results in this section will be very similar to the one-dimensional case. We assume that D = {x ∈ R^d : x_d > f(x_1, ..., x_{d−1})} is admissible, with f(0, ..., 0) = 0, and that there is an α, 0 < α < 1, so that |∇f(x)| < 1 − α and n(x)_d > α for all x. Again note that f ∈ C² will satisfy this in a neighborhood of any point, with an appropriate choice of coordinates.
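A crude Euler sketch of a reflected path with inert drift in such a domain D = {x₂ > f(x₁)}, for d = 2 (hypothetical boundary function; a vertical push back to the graph stands in for the normal push of the Skorohod problem, and the velocity gained is proportional to the push, directed along the inward normal):

```python
import random
import math

random.seed(8)

f = lambda u: 0.3 * math.sin(u)     # hypothetical boundary function

def simulate(T=5.0, dt=1e-4, K=1.0):
    """Euler sketch in D = {x2 > f(x1)}: a vertical push back to the graph
    stands in for the Skorohod normal push; the velocity gains K times the
    push size, directed along the inward normal (-f', 1)/|(-f', 1)|."""
    p = [0.0, 1.0]
    v = [0.0, 0.0]
    inside = True
    for _ in range(int(T / dt)):
        p[0] += v[0] * dt + random.gauss(0.0, math.sqrt(dt))
        p[1] += v[1] * dt + random.gauss(0.0, math.sqrt(dt))
        floor = f(p[0])
        if p[1] < floor:
            push = floor - p[1]
            p[1] = floor
            slope = 0.3 * math.cos(p[0])          # f'(x1)
            norm = math.sqrt(1.0 + slope * slope)
            v[0] += K * push * (-slope) / norm
            v[1] += K * push * 1.0 / norm
        inside = inside and p[1] >= f(p[0]) - 1e-12
    return inside

ok = simulate()
assert ok    # the path never ends a step strictly below the boundary
```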
Lemma 5.1. If x_t = w_t + L_t is a solution to the Skorohod problem in D, then

    α|L|_t < L^d_t ≤ |L_t| ≤ |L|_t < |L_t|/α.   (22)
Proof. From [8] we have that L_t = ∫₀ᵗ n(x_s) d|L|_s. Clearly, L^d_t is a nondecreasing function, and since n(x)_d > α, we get L^d_t > α|L|_t. Since L_0 = 0, we also have that |L_t| ≤ |L|_t. Combining these, we get (22).
Details of uniqueness will appear in [2].
Proof. The construction is standard. Divide D̄ into neighborhoods N_1, ..., N_m which are nice, in the sense that, under an appropriate rotation of the standard coordinate system, each N_j ∩ ∂D is a section of the graph of a function f_j satisfying the conditions at the beginning of the previous subsection. Assume that w_t first encounters N_1. Construct the domain which lies above the graph of f_1 and construct x^{(1)}_t satisfying (21) on this new domain. Let T_1 = inf{t : x^{(1)}_t ∉ N_1}. Repeat the process starting at T_1 for the function w′_t = w_t + L_{T_1} + L_{T_1}(t − T_1). Continue the construction, so that the limit x_t satisfies (21) on [0, lim_{n→∞} T_n).
We wish to show that lim_{n→∞} T_n = ∞. If not, say T_n → T, then by Lemma 5.4 we must have that lim_{t→T} |L_t| = ∞. Then there is some 1 ≤ j ≤ d and R, S < T so that 0 < L^j_R < L^j_t for R < t < S, and L^j_S ≥ L^j_R + diam(∂D) + diam(w[0, T]). But this contradicts that x_R, x_S ∈ D.