A Boundary Local Time For One-Dimensional Super-Brownian Motion And Applications

For a one-dimensional super-Brownian motion with density $X(t,x)$, we construct a random measure $L_t$ called the boundary local time which is supported on $\partial \{x:X(t,x) = 0\} =: BZ_t$, thus confirming a conjecture of Mueller, Mytnik and Perkins (2017). $L_t$ is analogous to the local time at $0$ of solutions to an SDE. We establish first and second moment formulas for $L_t$, some basic properties, and a representation in terms of a cluster decomposition. Via the moment measures and the energy method we give a more direct proof that $\text{dim}(BZ_t) = 2-2\lambda_0>0$ with positive probability, a recent result of Mueller, Mytnik and Perkins (2017), where $-\lambda_0$ is the lead eigenvalue of a killed Ornstein-Uhlenbeck operator that characterizes the left tail of $X(t,x)$. In a companion work, the author and Perkins use the boundary local time and some of its properties proved here to show that $\text{dim}(BZ_t) = 2-2\lambda_0$ a.s. on $\{X_t(\mathbb{R})>0 \}$.


Introduction & Statement Of Main Results
Super-Brownian motion is a Markov process taking values in the space of finite measures on $\mathbb{R}^d$, $M_F(\mathbb{R}^d)$, equipped with the topology of weak convergence. We denote this process by $X = (X_t : t \geq 0)$ and denote by $P^X_{X_0}$ and $E^X_{X_0}$, respectively, a probability and its expectation under which $X$ is a super-Brownian motion with initial data $X_0 \in M_F(\mathbb{R}^d)$. In one dimension, $X_t$ is almost surely an absolutely continuous random measure and thus has a density, which we denote by $X(t,x)$. The density exists and is jointly continuous for $t > 0$, and is Hölder continuous with index $\frac{1}{2} - \epsilon$ in the spatial variable for every $\epsilon > 0$ (see [17], for example, where this is implicit in the proof of Theorem III.4.2). It was shown by Konno and Shiga in [10], and independently by Reimers in [18], that $X(t,x)$ satisfies the following stochastic partial differential equation (SPDE):
(1.1) $\displaystyle \frac{\partial X(t,x)}{\partial t} = \frac{\Delta X(t,x)}{2} + \sqrt{X(t,x)}\,\dot{W}(t,x)$,
where $\dot{W}(t,x)$ is a space-time white noise. For a complete discussion of such equations, including the precise definition of a solution, see [20] and [10].
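The SPDE (1.1) can be simulated with a naive explicit scheme. The sketch below is our own illustration, not from the paper: the grid sizes, the smooth initial density, the periodic boundary, and the truncation `max(X, 0)` (needed because the Euler step can push the density slightly negative) are all ad hoc choices.

```python
import numpy as np

# Toy explicit finite-difference scheme for the SPDE (1.1),
#   dX/dt = (1/2) X'' + sqrt(X) W_dot.
# All discretization choices here are illustrative, not from the paper.
rng = np.random.default_rng(0)

dx, dt = 0.05, 0.001           # dt << dx^2 keeps the heat part stable
x = np.arange(-5, 5 + dx, dx)
X = np.exp(-x**2)              # smooth initial density (arbitrary choice)

for _ in range(1000):          # evolve to t = 1
    # discrete Laplacian with periodic boundary (a convenience, not physics)
    lap = (np.roll(X, 1) - 2 * X + np.roll(X, -1)) / dx**2
    # cell-averaged white noise has standard deviation sqrt(dt/dx)
    noise = rng.standard_normal(x.size) * np.sqrt(dt / dx)
    X = X + 0.5 * dt * lap + np.sqrt(np.maximum(X, 0.0)) * noise
    X = np.maximum(X, 0.0)     # keep the density non-negative

print(float(X.sum() * dx))     # approximate total mass at t = 1
```

In such a simulation one can already see the zero set $Z_t$ develop interval components, the phenomenon discussed below.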
Before we discuss our results, we briefly introduce the canonical measure of super-Brownian motion. The canonical measure $\mathbb{N}_0$ is a $\sigma$-finite measure on $C([0,\infty), M_F(\mathbb{R})) \setminus \{0\}$, defined as the weak limit of $\epsilon^{-1} P^X_{\epsilon \delta_0}(X \in \cdot)$ as $\epsilon \downarrow 0$. When restricted to $\{X_t > 0\}$ for $t > 0$, $\mathbb{N}_0$ is a finite measure; in particular we have $\mathbb{N}_0(\{X_t > 0\}) = 2/t$ (see Theorem II.7.2 of [17]). $\mathbb{N}_0$ is a fundamental object; it describes the behaviour of a single cluster of super-Brownian motion, that is, the descendants of a single ancestor at the origin. (Likewise $\mathbb{N}_x$ is a cluster started from $x$ and is just a shift of $\mathbb{N}_0$.) An important fact, which we will describe more precisely later on, is that super-Brownian motion under $P^X_{X_0}$ can be understood as a superposition of canonical clusters. We will use the notation $X_t$ and $X(t,x)$ to denote the superprocess and its density, respectively, under both $P^X_{X_0}$ and $\mathbb{N}_0$. The law of the process will always be clear from context. For a complete overview of the canonical measure, including proofs of the properties just stated, see Section II.7 of [17].
In a recent work by Mueller, Mytnik and Perkins [14], the authors studied the small-scale asymptotic behaviour of $X(t,x)$, as well as the boundary of its zero set. We define the random set $Z_t = \{x \in \mathbb{R} : X(t,x) = 0\}$. The boundary of the zero set, $BZ_t$, is then defined as
$$BZ_t := \partial Z_t = \{x \in Z_t : (x-\epsilon, x+\epsilon) \cap Z_t^c \neq \emptyset \ \forall \epsilon > 0\},$$
where the second equality holds by continuity of the density. The results in [14] involve an eigenvalue $\lambda_0 \in (\frac{1}{2}, 1)$ which we describe in greater detail shortly. The authors of [14] show that the left tail of the distribution of $X(t,x)$ behaves like
(1.2) $P^X_{X_0}(0 < X(t,x) < a) \asymp t^{-1/2-\lambda_0} a^{2\lambda_0 - 1}$ as $a \downarrow 0$,
where $f(a) \asymp g(a)$ means that $f(a)$ is bounded above and below by constant multiples of $g(a)$ (with different constants). The upper bound is uniform in $x$, and the lower bound requires a localizing assumption. For details, see Section 4 and in particular Theorem 4.8 of [14]. Let $\dim(B)$ denote the Hausdorff dimension of a set $B \subseteq \mathbb{R}$.
Because $\lambda_0 \in (1/2, 1)$, the dimension satisfies $2 - 2\lambda_0 \in (0,1)$. The lower bound was conjectured to hold with full probability on $\{X_t > 0\}$, which would imply that $\dim(BZ_t) = 2 - 2\lambda_0$ almost surely on $\{X_t > 0\}$. The difficulty in proving that the lower bound for the dimension holds with probability one on $\{X_t > 0\}$ is owing to the delicate nature of $BZ_t$: it is monotone neither in the initial conditions nor in the measure $X_t$ itself.
We will construct a random measure $L_t$, which we call the boundary local time of $X_t$, supported on $BZ_t$. (See Theorems 1.1 and 1.2.) The existence of $L_t$ was conjectured in Section 5.1 of [14]. Once we have constructed $L_t$, we use it to give a simpler alternative proof of the lower bound in Theorem A. Our method is to show that $L_t$ has finite $p$-energy for all $p < 2 - 2\lambda_0$; in particular, see Theorem 1.3 below. In a future work [7], $L_t$ and several of its properties derived here, including Theorem 1.2(a), Proposition 1.6 and Theorem 1.9, will be used to resolve the problem left open in Theorem A and Theorem 1.3, by showing that $\dim(BZ_t) = 2 - 2\lambda_0$ almost surely on $\{X_t > 0\}$.
We now give a description of $\lambda_0$. Define a function $F(x)$ by
(1.3) $F(x) := -\log P^X_{\delta_0}(X(1,x) = 0) = \mathbb{N}_0(X(1,x) > 0) > 0$.
The second equality is standard and is a consequence of (1.14) below. Section 3, from (3.5) to (3.13), provides a thorough overview of $F$ as the limit as $\lambda \to \infty$ of the family of functions $\{V^\lambda_1\}_{\lambda > 0}$ which characterize the Laplace transform of the density $X(t,x)$. Let $Af(x) = \frac{1}{2} f''(x) - \frac{x}{2} f'(x)$ denote the infinitesimal generator of a standard, one-dimensional Ornstein-Uhlenbeck process $Y$. For a bounded, continuous function $\phi$ with limits at infinity ($F$ is such a function), $A_\phi f = Af - \phi f$ is the generator of an Ornstein-Uhlenbeck process with Markovian killing corresponding to $\phi$; that is, for a sample path $\{Y_s : s \in [0,\infty)\} \in C([0,\infty); \mathbb{R})$, we define the lifetime of the process as $\rho_\phi$, after which it is "killed," or put into an inert cemetery state. The distribution of $\rho_\phi$ is given by
(1.4) $\displaystyle P^Y_x(\rho_\phi > t) = E^Y_x\Big[\exp\Big(-\int_0^t \phi(Y_s)\,ds\Big)\Big]$.
Section 2 develops the relevant theory for these processes and their generators. In particular, Theorem 2.1 states that $A_\phi$, taken as an operator on the appropriate Hilbert space, has a countable orthonormal family of eigenfunctions $\{\psi^\phi_n\}_{n=0}^\infty$ with corresponding discrete spectrum $0 > -\lambda^\phi_0 > -\lambda^\phi_1 \geq \cdots$. We define $\lambda_0 = \lambda^F_0 > 0$. As we have noted, it was shown in [14] that $\lambda_0 \in (1/2, 1)$. Numerical estimates by Zhu [22], for which the stated digits are expected to be accurate, suggest that $\lambda_0 \approx 0.8882$. This implies that the value of $\dim(BZ_t)$ from Theorem A, $2 - 2\lambda_0$, is approximately $0.224$. A more detailed discussion of the numerics can be found in the introduction of [7].
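To illustrate how an eigenvalue such as $\lambda_0$ can be computed, here is a sketch of our own (not from the paper) that discretizes $A_\phi f = \frac12 f'' - \frac{x}{2} f' - \phi f$ on a truncated grid and reads off the lead eigenvalue. The true killing function $F$ has no closed form, so as a sanity check we use the constant placeholder $\phi \equiv 1$, for which the spectrum of $A$ (namely $0, -\frac12, -1, \ldots$) is shifted by $-1$ and the lead eigenvalue is exactly $-1$.

```python
import numpy as np

# Sketch: lead eigenvalue of A_phi f = (1/2) f'' - (x/2) f' - phi f
# by naive central finite differences; phi = 1 is a placeholder for the
# paper's killing function F, which is not available in closed form.
L, n = 6.0, 401
x = np.linspace(-L, L, n)
h = x[1] - x[0]
phi = np.ones_like(x)          # placeholder killing function

A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = 0.5 / h**2 + x[i] / (4 * h)   # from f'' and -x f'/2
    A[i, i]     = -1.0 / h**2 - phi[i]
    A[i, i + 1] = 0.5 / h**2 - x[i] / (4 * h)
A[0, 0] = A[-1, -1] = -1e6     # crude Dirichlet wall at the truncation

lead = max(np.linalg.eigvals(A).real)
print(round(lead, 3))          # close to -1 when phi = 1
```

With a numerical approximation to $F$ in place of `phi`, the same recipe would estimate $-\lambda_0$; the grid truncation is justified by the Gaussian decay of the stationary measure.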
$L^\lambda_t$ is defined the same way under $P^X_{X_0}$ and $\mathbb{N}_0$. The scaling factor of $\lambda^{2\lambda_0}$ can be deduced from (1.5). The convergence of $E^X_{X_0}(L^\lambda_t(\phi))$ as $\lambda \to \infty$, noted in (1.5), led the authors of [14] to conjecture (in Section 5.1 of that reference) that there is a random measure $L_t$ on $\mathbb{R}$ such that $L^\lambda_t \to L_t$ in $M_F(\mathbb{R})$ in probability. Our main result is the verification of this conjecture. In all that follows, $X_0 \in M_F(\mathbb{R})$.
Theorem 1.1. Let $t > 0$. Under both $P^X_{X_0}$ and $\mathbb{N}_0$ there is a random measure $L_t(dx) \in M_F(\mathbb{R})$, supported on $BZ_t$, such that $L^\lambda_t \to L_t$ in probability as $\lambda \to \infty$, and there is a sequence $\lambda_n \to \infty$ such that $L^{\lambda_n}_t \to L_t$ a.s. as $n \to \infty$. Moreover, under $P^X_{X_0}$ or $\mathbb{N}_0$, for all bounded and continuous functions $\phi$, $L^\lambda_t(\phi) \to L_t(\phi)$ in $L^2$ as $\lambda \to \infty$.
We note that $Z_t$ will contain intervals, unlike the zero set of a Brownian motion (which is equal to its boundary). It is easy to see that $L_t$ is supported on $BZ_t$ from the fact that, as $\lambda$ gets large, $L^\lambda_t$ concentrates on $\{x : 0 < X(t,x) = O(\lambda^{-1})\}$, and from properties of the weak topology on $M_F(\mathbb{R})$ (see the proof of Theorem 1.1 in Section 4). For fixed $t > 0$, $x \to X(t,x)$ is a continuous path taking values in $\mathbb{R}_+ = [0,\infty)$. $BZ_t$ is the set of points at which this path begins and ends its excursions from $0$. As $L_t$ is supported on $BZ_t$, in this sense $L_t$ is a local time of $x \to X(t,x)$ on these excursion endpoints, and hence the boundary local time of $X(t,\cdot)$.
The existence of a measure supported on $BZ_t$ allows us to use the energy method to study its dimension. We will provide a second moment formula for $L_t$, with which we compute the expectation of energy integrals of the form
(1.7) $\displaystyle \iint |x-y|^{-p} \, dL_t(x) \, dL_t(y)$.
If $L_t > 0$ and the above energy is finite, then $\dim(\mathrm{supp}(L_t)) \geq p$ by Frostman's connection between energy integrals and Hausdorff dimension (see Theorem 4.27 of Mörters and Peres [13]). We introduce some notation. For $h : \mathbb{R}^2 \to \mathbb{R}$, define $(L_t \times L_t)(h)$ by
$$(L_t \times L_t)(h) = \iint h(x, y) \, dL_t(x) \, dL_t(y).$$
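The mechanism behind the energy method is easy to see numerically. The sketch below (our own illustration) estimates the $p$-energy $I_p(\mu) = \iint |x-y|^{-p}\,d\mu\,d\mu$ by Monte Carlo for $\mu$ equal to Lebesgue measure on $[0,1]$, a stand-in for $L_t$; for this $\mu$ one computes directly that $I_p = 2/((1-p)(2-p))$ when $p < 1 = \dim(\mathrm{supp}(\mu))$ and $I_p = +\infty$ when $p \geq 1$.

```python
import numpy as np

# Monte Carlo sketch of the p-energy behind Frostman's lemma.
# mu = Lebesgue measure on [0,1] is an illustrative stand-in for L_t.
rng = np.random.default_rng(1)

def p_energy(p, n=200_000):
    # average |x - y|^{-p} over independent pairs sampled from mu
    x, y = rng.random(n), rng.random(n)
    return float(np.mean(np.abs(x - y) ** (-p)))

p = 0.3                               # below the dimension, so finite
est = p_energy(p)
exact = 2 / ((1 - p) * (2 - p))       # closed form for this mu
print(est, exact)
```

Finiteness of the energy for all $p$ below the dimension is exactly what Theorem 1.3 establishes for $L_t$ with the threshold $2 - 2\lambda_0$.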
For $p > 0$, we define $h_p(x,y) = |x-y|^{-p}$. The second moment formula for $L_t$ allows us to establish the following.
The fact that $\dim(BZ_t) \leq 2 - 2\lambda_0$ $P^X_{X_0}$-a.s. is already known from Theorem A, and from this it follows easily under $\mathbb{N}_0$, as we point out in the proof of Theorem 1.3. By the above, the lower bound, i.e. $\dim(BZ_t) \geq 2 - 2\lambda_0$, holds with at least the probability that $L_t > 0$, as in Theorem 1.2(a). This plays an important role in Hughes-Perkins [7]; in Theorem 1.2 of [7] we show that, with respect to both $P^X_{X_0}$ and $\mathbb{N}_0$, $L_t > 0$ almost surely on $\{X_t > 0\}$, thus improving part (a) of Theorem 1.2 above and establishing almost sure non-degeneracy of $L_t$. Combined with Theorem 1.3, this will show that $\dim(BZ_t) = 2 - 2\lambda_0$ almost surely on $\{X_t > 0\}$.
There are a number of other potential uses for such a local time. We now discuss some possibilities. By sampling a point from $L_t$, we are able to "view $X_t$ from the perspective of a typical point in $BZ_t$." More precisely, one can define $Q_{X_0}((Z, X_t) \in A) = E^X_{X_0}(\int 1_A(z, X_t) \, dL_t(z))$ and study properties of the Palm measure $Q_{X_0}(X_t \in \cdot \mid Z = z)$. The behaviour of $X_t$ near $BZ_t$ is complex and there is still much that is not understood about it. For example, the density has an improved modulus of continuity and is nearly Lipschitz (i.e. Hölder $1 - \eta$ for all $\eta > 0$) at points in $BZ_t$ (see Theorem 2.3 of [16]). This suggests that $BZ_t$ should be small, but despite this $BZ_t$ has positive dimension. Constructing and studying the Palm measure described above would give a more structured approach for investigating this phenomenon.
As a local time, $L_t$ has the potential to be used to study pathwise uniqueness in the SPDE (1.1), a problem which remains open, playing a role similar to that of the semimartingale local time in the Yamada-Watanabe Theorem for one-dimensional SDEs (see Theorem V.40 of Rogers and Williams [19]). It may also provide insight into the behaviour of some discrete processes; super-Brownian motion in high dimensions is the scaling limit of a number of lattice models and interacting particle systems. In dimension one, it is still the scaling limit of branching random walk (for example, see [21] or Theorem II.5.1(iii) of [17]). One could obtain information about the boundaries of such approximating processes by proving a limit theorem establishing weak convergence of the laws of their discrete local times to that of $L_t$. Of course, $L_t$ allows us to study $BZ_t$ more directly, as we have done in Theorem 1.3. In fact, with $L_t$ it may be possible to determine the exact Hausdorff measure function of $BZ_t$.
We now discuss the method of our proof. Upper bounds on second moments of $L^\lambda_t$ were obtained in Section 5.1 of [14], but in order to establish the existence of $L_t$ we require exact asymptotics, which are more delicate. The main ingredient is the following convergence result. In order to state it we need to introduce some notation. Recall that $m(dx)$ denotes the centred unit variance Gaussian measure. Let $\psi_0 = \psi^F_0$ (the eigenfunction of $A_F$ corresponding to the eigenvalue $-\lambda_0$). The constant $C_{1.4}$ is given explicitly in (5.80), and the function $\rho$ is defined in (5.81).
Theorem 1.4. There exist a constant $C_{1.4} > 0$ and a continuous function $\rho : \mathbb{R} \times \mathbb{R} \to (0,1]$ such that the stated convergence holds for bounded Borel $h : \mathbb{R}^2 \to \mathbb{R}$. Moreover, the limit is finite for all bounded $h$.
That the limiting formula is finite is not obvious, as $\lambda_0 > 1/2$; we discuss this in more detail shortly. From the above we can deduce that $\{L^\lambda_t(\phi)\}_{\lambda > 0}$ is Cauchy in $L^2(\mathbb{N}_0)$ and therefore has a limit by completeness; in particular, see Corollary 4.1 and its proof. We then argue that the limit is in fact the integral with respect to a unique measure, which is $L_t$. The proof of Theorem 1.4 is long and technical; Section 5 is entirely devoted to it. We use the Laplace functional to obtain a Feynman-Kac type representation for $\mathbb{N}_0(L^\lambda_t(\phi) L^{\lambda'}_t(\phi))$ and then establish its convergence. The reason we do so under $\mathbb{N}_0$ is that the Feynman-Kac formulas are simpler in this setting. We now present first and second moment formulas for $L_t$ under $\mathbb{N}_0$; as one would expect, the second moment formula in part (b) agrees with the limit of $\mathbb{N}_0((L^\lambda_t \times L^{\lambda'}_t)(h))$ given in Theorem 1.4. The terms $C_{1.4}$ and $\rho$ are the same as those appearing in that result.
Theorem 1.5. (a) For a bounded or non-negative Borel function $\phi : \mathbb{R} \to \mathbb{R}$, the first moment formula holds. (b) For measurable $h : \mathbb{R}^2 \to \mathbb{R}$, either bounded or non-negative, the second moment formula (1.9) holds. Moreover, (1.9) is finite for all bounded $h$.
As we noted earlier, finiteness of (1.9) is not obvious since $\lambda_0 > 1/2$ (although it is implicit in the proof of Theorem 1.4), and this can make (1.9) hard to use; for applications, the following upper bound for second moments is easier to apply than the exact formula. The value $\theta$ is defined as $\theta = \int \psi_0 \, dm$. $Y$ is an Ornstein-Uhlenbeck process started at $z_1$ with corresponding expectation $E^Y_{z_1}$. The exponential term in the first bound of the following proposition can be interpreted as a survival probability of $Y$, producing a $w^{\lambda_0}$ term which makes the integral finite. (The proofs of Theorem 1.3 and Theorem 1.2(b) in Section 4 both use this technique.)
Proposition 1.6. For a non-negative Borel function $h : \mathbb{R}^2 \to \mathbb{R}$, the bounds in (1.10) hold.
As we have alluded to, applying (1.10) with $h(x,y) = |x-y|^{-p}$ gives an upper bound for the expectation of energy integrals of the form (1.7), which is how we prove Theorem 1.3.
Thus far, we have not commented on the proofs of existence and properties of $L_t$ under $P^X_{X_0}$. The proofs rely on the conditional representation in terms of canonical clusters, which we will discuss shortly. First, in order to keep the moment results together, we state our results regarding the moments of $L_t$ under $P^X_{X_0}$.
Theorem 1.7. (a) For a bounded or non-negative Borel function $\phi : \mathbb{R} \to \mathbb{R}$, (1.12) holds. (b) There is a constant $C_{1.7}$ such that the corresponding second moment bound holds.
We note that the right hand side of (1.12) is equal to that of (1.5), and so was originally computed in Proposition 1.5 of [14] as $\lim_{\lambda \to \infty} E^X_{X_0}(L^\lambda_t(\phi))$. The fact that the same formula gives the mean measure of $L_t$ then follows from the $L^2$ convergence of $L^\lambda_t(\phi)$, as in Theorem 1.1.
We first establish the existence of $L_t$, as well as its properties, under the measure $\mathbb{N}_0$, owing to the fact that the second moments of $L^\lambda_t$ admit simpler formulas in this case. In order to prove the same for super-Brownian motion, we need to use the relationship between super-Brownian motion under $P^X_{X_0}$ and the canonical measure, which we now describe. We recall that $\mathbb{N}_x$ is a $\sigma$-finite measure with $\mathbb{N}_x(\{X_t > 0\}) = 2/t$ which describes the "law" of a single cluster of super-Brownian motion started at $x$; that is, the descendants of a single ancestor at $x$. More precisely, super-Brownian motion is a superposition of canonical clusters; for a bounded, non-negative Borel function $\phi : \mathbb{R} \to \mathbb{R}$, this is expressed through the Laplace functional (1.14). This expression for the Laplace functional is in fact a consequence of a distributional equality between super-Brownian motion under $P^X_{X_0}$ and a Poisson point process of canonical clusters. By Theorem 4 of Section IV.3 of [11], $(X_t : t \geq 0)$ is a super-Brownian motion with initial measure $X_0$. The "points" of the point process $\Theta_{X_0}$ are the clusters of $X$. For fixed $t > 0$, (1.15) leads to a representation in which $\{\mu^j_t : j \in I_t\}$ are the points of a Poisson point process with finite intensity. Assuming our probability space is rich enough to allow us to choose random relabellings of these points, by the above we can write (1.16), where $N$ is Poisson$(2X_0(1)/t)$ and, given $N$, the clusters $X^1_t, \ldots, X^N_t$ are iid. We can and do condition on the values of the initial points of the clusters, denoted by $x_1, \ldots, x_N$, which are iid points with distribution $X_0/X_0(1)$.
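The skeleton of the cluster decomposition (1.16) is easy to sample. The snippet below, an illustration of our own, draws $N \sim$ Poisson$(2X_0(1)/t)$ and the iid cluster starting points for a toy atomic $X_0$; the clusters themselves are not simulated.

```python
import numpy as np

# Skeleton of the decomposition (1.16): N ~ Poisson(2 X_0(1)/t) clusters,
# with iid ancestral starting points distributed as X_0 / X_0(1).
# The measure X_0 and the time t below are illustrative choices.
rng = np.random.default_rng(2)

atoms  = np.array([-1.0, 0.0, 2.0])   # support of the toy X_0
masses = np.array([0.5, 1.0, 0.5])    # total mass X_0(1) = 2
t = 0.5

N = rng.poisson(2 * masses.sum() / t)                   # number of clusters
starts = rng.choice(atoms, size=N, p=masses / masses.sum())
print(N)
```

Conditioning on $N$ and on these starting points is exactly the step used below to transfer results from $\mathbb{N}_0$ to $P^X_{X_0}$.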
In order to prove the existence and properties of $L_t$ with respect to a super-Brownian motion $X_t$, we realize the super-Brownian motion as a point process and express $X_t$ as above. Conditioning on $N$ and applying (1.16), we can write $L^\lambda_t(\phi)$ as a sum over clusters. The almost sure existence of boundary local times corresponding to the canonical clusters allows us to take this limit quite easily and so establish that $L_t$ exists under $P^X_{X_0}$ (i.e. Theorem 1.1). Furthermore, we obtain a conditional representation for $L_t$ in terms of its clusters; this allows us to transfer the properties of $L_t$ under $\mathbb{N}_0$ to $L_t$ under $P^X_{X_0}$. Let $L^i_t$ denote the boundary local time of $X^i_t$. In the statement that follows, we assume that we have realized $X_t$ using (1.16).
Theorem 1.8. Let $X_t$ be a super-Brownian motion under $P^X_{X_0}$ and let $L_t$ be its boundary local time. Conditional on $N$, $L_t$ can be written in terms of the cluster local times $L^1_t, \ldots, L^N_t$.
Remark. Given the nature of $BZ_t$, we expect this behaviour. In the cluster decomposition, each cluster has a boundary local time of its own. Since each is supported on the boundary of its respective zero set, the local time $L_t$ of $X_t$ will be equal to the sum of the cluster local times, except that the boundary of the zero set of one cluster may be "swallowed" by the support of another; hence the indicator functions.
The idea of representing the boundary local time of $X_t$ in terms of the boundary local times of its clusters is not restricted to a super-Brownian motion and its comprising canonical clusters. The following formulation of the same principle will be useful in Hughes-Perkins [7]. Recall that a sum of independent super-Brownian motions is a super-Brownian motion.
Theorem 1.9. Suppose $X^1, \ldots, X^n$ are independent super-Brownian motions with corresponding boundary local times $L^i_t$ at time $t > 0$, for $i = 1, \ldots, n$. Let $X = \sum_{i=1}^n X^i$ and let $L_t$ be the boundary local time of $X_t$. Then $L_t$ admits the analogous representation in terms of $L^1_t, \ldots, L^n_t$.
One example of superprocesses satisfying the above conditions follows from (III.1.3) of [17]. Let $X_0 \in M_F(\mathbb{R})$ and suppose that $\{A_1, \ldots, A_n\}$ is a Borel partition of $\mathbb{R}$. Define $X^i$ as the contribution to $X$ from ancestors at time $0$ which are in $A_i$. (This makes $X^i$ a super-Brownian motion with initial measure $X_0(\cdot \cap A_i)$; a precise definition of $X^i$ may be given in terms of the historical process, as in the above reference.) Then $X = \sum_{i=1}^n X^i$ satisfies the conditions of the above theorem.
Notation. We will make use of the common convention that $C$ denotes a positive constant whose value is not important. The value of $C$ may change from line to line in a derivation; to bring attention to the fact that the constant has changed, we will sometimes label the new constant $C'$. We write $f \sim g$ if $\lim_x f(x)/g(x) = 1$, where the limit will be clear from context. As the reader has probably inferred, we will write $\mu > 0$ when a measure has positive mass (that is, to indicate that $\mu(1) > 0$). For an interval $I \subseteq \mathbb{R}$, let $C(I, \mathbb{R})$ denote the space of continuous maps from $I$ to $\mathbb{R}$.
Let $S_t$ denote the semigroup of Brownian motion and $p_t$ the associated heat kernel (the Gaussian density of variance $t$). Let $N(x_0, \sigma^2)$ denote the law of a one-dimensional Gaussian with mean $x_0$ and variance $\sigma^2$.
Organization of Paper. The paper is organized as follows. Section 2 gives a brief overview of the theory of one-dimensional Ornstein-Uhlenbeck processes with Markovian killing. Our method relies on a change of variables which allows us to express certain quantities in terms of eigenvalue problems involving these processes' generators. Section 3 describes fundamental background connecting the Laplace functional of super-Brownian motion to a family of semi-linear PDEs. We also introduce the families $V^\lambda$ and $V^{\lambda,\lambda'}$, which play a key role in our analysis. Section 4 contains the proofs of all our main results, including the existence and properties of $L_t$ and the cluster representations, with the exception of Theorem 1.4. The proof of this result is reserved for Section 5.
Acknowledgements. The author gratefully acknowledges the assistance of Ed Perkins, his thesis supervisor, who introduced him to the problem, provided many useful insights and suggestions during its resolution, and gave close readings of the manuscript at several stages during its preparation. Any remaining inconsistencies are the sole responsibility of the author.

Killed Ornstein-Uhlenbeck Processes
As above, we define the operator $A$ by $Af(x) = \frac{1}{2} f''(x) - \frac{x}{2} f'(x)$. The Markov process generated by $A$ is a one-dimensional Ornstein-Uhlenbeck process with mean zero. We denote this process by $Y$, and denote its law when started at $x$ by $P^Y_x$, with corresponding expectation $E^Y_x$. For general initial conditions $Y_0 \sim \mu \in M_1(\mathbb{R})$ (the space of probability measures on $\mathbb{R}$), we write its law as $P^Y_\mu$. $Y$ has a stationary measure, the unit variance Gaussian measure $m$. When $Y_0 \sim m$, the process is reversible and can be defined for time values in $\mathbb{R}$. We will denote the law of this stationary process on $\mathbb{R}$ by $P^Y$.
We now introduce the notions of killing and lifetime for the process $(Y_t : t \geq 0)$. Let $\phi \in C^+([-\infty,\infty], \mathbb{R})$, the space of non-negative continuous functions with limits at $\pm\infty$. We will call such functions killing functions. Then $A_\phi f = Af - \phi f$ is the generator of an Ornstein-Uhlenbeck process subjected to Markovian killing at rate $\phi(Y_t)$. The lifetime of the killed process is $\rho_\phi = \inf\{t > 0 : \int_0^t \phi(Y_s)\,ds > e\}$, where $e$ is an independent Exp(1) random variable. We recall that the distribution of $\rho_\phi$ is given by (1.4).
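The lifetime construction above is straightforward to simulate. The sketch below, our own illustration, runs an Euler-Maruyama approximation of $dY = -\frac{1}{2}Y\,dt + dW$ and kills the path when the accumulated rate integral exceeds an independent Exp(1) clock; the killing function used is a bounded toy choice, not the $F$ of the paper.

```python
import numpy as np

# Euler-Maruyama sketch of the lifetime rho_phi: kill Y once
# \int_0^t phi(Y_s) ds exceeds an independent Exp(1) variable e.
# phi below is an arbitrary bounded killing function, for illustration.
rng = np.random.default_rng(3)

def lifetime(y0=0.0, phi=lambda y: 1.0 / (1.0 + y * y), dt=0.01, tmax=20.0):
    y, clock, e = y0, 0.0, rng.exponential()
    for k in range(int(tmax / dt)):
        clock += phi(y) * dt                 # accumulated killing rate
        if clock > e:
            return k * dt                    # killed: this is rho_phi
        y += -0.5 * y * dt + np.sqrt(dt) * rng.standard_normal()
    return tmax                              # survived the whole window

rhos = [lifetime() for _ in range(200)]
print(float(np.mean(rhos)))
```

Averaging indicators of $\{\rho_\phi > t\}$ over many such runs gives a crude check of the exponential decay in the bounds (2.8) and (2.9) below.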
The generators $A$ and $A_\phi$ correspond to strongly continuous contraction semigroups on $L^2(m)$. The following theorem is proved in [14], where it is stated as Theorem 2.3. We note that the statement of the result in that paper had a misprint in the description of the convergence of the transition densities, which appeared in part (c). We have corrected the statement, which appears in part (b) of the following.
(b) For $t > 0$, the diffusion $Y$ generated by $A_\phi$ has a jointly continuous transition density $q_t(x,y)$ with respect to $m$, given by the eigenfunction expansion (2.1), where the series converges in $L^2(m \times m)$ and uniformly absolutely on sets of the form $[\epsilon, \infty) \times [-\epsilon^{-1}, \epsilon^{-1}]^2$ for all $\epsilon > 0$.
(c) For $0 < \delta < \frac{1}{2}$, there exists a constant $c_\delta > 0$ such that
(2.2) $q_t(x,y) \leq c_\delta e^{-\lambda_0 t} e^{\delta(x^2+y^2)}$
for all $t \geq s^*(\delta)$, where $s^*(\delta) > 0$ is the solution of (2.3).
(d) Denote $\theta = \int \psi_0 \, dm$. For all $t \geq 0$ and $x \in \mathbb{R}$, (2.4) holds, where, for any $\delta > 0$, there is a constant $c_\delta > 0$ bounding the error term. $P^{Y,\infty}_x$ is the law of the diffusion with the given transition density with respect to $m$.
The bounds in part (c) above easily imply the following estimates, which we will use often. For $0 < \delta < 1/2$, there is a constant $C_\delta > 0$ such that
(2.8) $P^Y_x(\rho_\phi > t) \leq C_\delta e^{\delta x^2} e^{-\lambda_0 t}$ for all $x \in \mathbb{R}$, $t > 0$.
This implies that there is a constant $C > 0$ such that
(2.9) $P^Y_m(\rho_\phi > t) \leq C e^{-\lambda_0 t}$ for all $t > 0$.
The following limit result is a simple consequence of the eigenfunction expansion for $q_t(x,y)$.
Lemma 2.2. For all $x, y \in \mathbb{R}$, $\lim_{t \to \infty} e^{\lambda_0 t} q_t(x,y) = \psi_0(x)\psi_0(y)$.
The convergence is uniform on compact sets.
The absolute value of the sum above is bounded by the corresponding series of absolute terms. By Theorem 2.1(b) with $t = 1$, this series is convergent, and the convergence is uniform on compact sets. Part (a) of the same theorem states that $-\lambda_0$ is a simple eigenvalue. Hence $\lambda_1 - \lambda_0 > 0$ and the above vanishes as $t \to \infty$; in fact, because the series converges uniformly on compacts to a continuous limit, the above vanishes uniformly on compacts as $t \to \infty$, so (2.10) gives the result.
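Lemma 2.2 can be checked numerically in the one explicitly solvable case. For a constant killing function $\phi \equiv c$, we have $q_t(x,y) = e^{-ct} k_t(x,y)/p_1(y)$ (as a density with respect to $m$), with $\lambda_0 = c$ and $\psi_0 \equiv 1$, so $e^{\lambda_0 t} q_t(x,y)$ should tend to $\psi_0(x)\psi_0(y) = 1$. The script below, our own sanity check with arbitrary sample points, verifies this.

```python
import numpy as np

# Check of Lemma 2.2 for constant killing phi = c, where everything is
# explicit: q_t(x,y) = e^{-ct} k_t(x,y)/p_1(y), lambda_0 = c, psi_0 = 1.
# The points x = 0.3, y = -0.7 and c = 1 are arbitrary choices.
def ou_q(t, x, y, c=1.0):
    var = 1.0 - np.exp(-t)                       # OU variance at time t
    k = np.exp(-(y - x * np.exp(-t / 2)) ** 2 / (2 * var)) \
        / np.sqrt(2 * np.pi * var)               # un-killed OU kernel
    p1 = np.exp(-y ** 2 / 2) / np.sqrt(2 * np.pi)  # density of m
    return np.exp(-c * t) * k / p1

vals = [float(np.exp(1.0 * t) * ou_q(t, 0.3, -0.7)) for t in (2.0, 5.0, 10.0)]
print([round(v, 3) for v in vals])               # increases toward 1
```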
It will be useful for us to study the distribution of the process Y when conditioned on survival and its endpoint.
Hereafter we assume that $Y$ has killing function $\phi \in C^+([-\infty,\infty], \mathbb{R})$ and we denote its lifetime by $\rho$. For fixed $T > 0$ and $z \in \mathbb{R}$, consider the $[0,T]$-indexed inhomogeneous Markov process taking values in $\mathbb{R}$ with transition density (with respect to $dm(y_2)$)
(2.11) $\displaystyle \hat{q}_{s,t}(y_1, y_2) = \frac{q_{t-s}(y_1, y_2)\, q_{T-t}(y_2, z)}{q_{T-s}(y_1, z)}$
for $0 \leq s < t < T$. (The kernels are degenerate when $t = T$, since $Y_T = z$.) Below we verify that the finite dimensional distributions defined by this transition kernel have an extension to a (necessarily) unique law on $C([0,T], \mathbb{R})$, which we denote by $P^Y_x(\cdot \mid \rho > T, Y_T = z)$ when the initial point is $x \in \mathbb{R}$, and show that it gives an explicit version of the suggested regular conditional distribution for all $z \in \mathbb{R}$. We then establish convergence of these laws, restricted to a fixed time window, as $T \to \infty$.
Before proving the lemma, we make an observation concerning time reversals of $Y$ under $P^Y_x(\cdot \mid \rho > T, Y_T = z)$. For $T > 0$ and $t \in [0,T]$, define $\hat{Y}_t = Y_{T-t}$. Let $x, z \in \mathbb{R}$. For $0 < t_1 < t_2 < T$ and bounded Borel functions $\phi_1, \phi_2$, a direct computation identifies the two-dimensional distributions of $\hat{Y}$ with those of $Y$ under $P^Y_z(\cdot \mid \rho > T, Y_T = x)$, where the last equality in the computation uses $q_t(x,y) = q_t(y,x)$. The above equality of distributions can be extended to general finite dimensional distributions. Because the extension of the finite dimensional distributions to a law on $C([0,T], \mathbb{R})$ (i.e. from Lemma 2.3(a)) is unique, we therefore have the corresponding identity of laws for all $x, z \in \mathbb{R}$. As a last note, we will sometimes abbreviate this notation when it is clear from context that we are working with the killed process.
Proof of Lemma 2.3. Let $x, z \in \mathbb{R}$ and $T > 0$. We define a distribution $P^Y_x(\cdot \mid \rho > T, Y_T = z)$ on finite (time indexed) collections of random variables which describes the finite dimensional distributions (FDDs) of the inhomogeneous Markov process with transition density (2.11). For $0 = t_0 < t_1 < \ldots < t_n < T$ and bounded, continuous functions $\phi_1, \ldots, \phi_n$, the $n$-dimensional FDD of $(Y_{t_1}, \ldots, Y_{t_n})$ is given by (2.13), where we use the convention $y_0 = x$. We note that (2.13) also defines the FDDs of a regular conditional distribution of $(Y_t : t \in [0,T])$ under $P^Y_x$ conditioned on $\{\rho > T\}$ and $Y_T = z$ (which is why we have used this notation). Thus when we have established that these laws extend to a probability on $C([0,T], \mathbb{R})$, we will have explicitly constructed a version of the regular conditional distribution.
To prove that $P^Y_x(\cdot \mid \rho > T, Y_T = z)$ extends to a probability on $C([0,T], \mathbb{R})$, we will establish a tightness criterion. We consider the fourth moments of increments of $Y$. Let $0 < s < t < T$. Expanding using (2.13), we have
(2.14) $\displaystyle \frac{1}{q_T(x,z)} \iint (y_2 - y_1)^4\, q_s(x, y_1)\, q_{t-s}(y_1, y_2)\, q_{T-t}(y_2, z)\, dm(y_1)\, dm(y_2)$.
We now collect some elementary bounds and inequalities which will allow us to obtain a useful upper bound for the above. First, we note that while $q_t(x,y)$ is a transition density with respect to $m$, it will sometimes be useful to express it as a density with respect to Lebesgue measure. Since $p_1(\cdot)$ is the density of $m$, we have
(2.15) $q_t(x,y)\, dm(y) = q_t(x,y)\, p_1(y)\, dy$.
We will use a comparison with an un-killed Ornstein-Uhlenbeck process, whose transition kernel, for $0 \leq s < t$, is given by (2.16). Let $k_t(x,y)$ denote the transition density of the un-killed Ornstein-Uhlenbeck process with respect to Lebesgue measure. The transition densities of the killed Ornstein-Uhlenbeck process are bounded above by those of the un-killed process. This implies that
(2.17) (i) $q_t(x,y)\, dm(y) \leq k_t(x,y)\, dy$, (ii) $q_t(x,y)\, p_1(y) \leq k_t(x,y)$.
It is easy to establish from (2.16) that there is a constant $c > 0$ such that (2.18) holds for all $t \leq 2$ and $x, y \in \mathbb{R}$.
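For the reader's convenience, and since the display preceding (2.16) did not survive extraction, we record the standard form of this kernel; with the normalization $Af = \frac{1}{2}f'' - \frac{x}{2}f'$ used here, the process solves $dY = -\frac{1}{2}Y\,dt + dW$ and its transition density, consistent with the unit variance stationary measure $m$, is

```latex
% Standard un-killed OU transition density for the generator
% A f = (1/2) f'' - (x/2) f'; this is a well-known formula recorded
% here for reference, not a reconstruction of the paper's display (2.16).
k_t(x,y) \;=\; \frac{1}{\sqrt{2\pi\,(1 - e^{-t})}}
  \exp\!\left( -\,\frac{\bigl(y - x\,e^{-t/2}\bigr)^2}{2\,(1 - e^{-t})} \right),
\qquad t > 0 .
```

In particular $k_t(x,\cdot)$ is the $N(x e^{-t/2},\, 1 - e^{-t})$ density, which is the source of the Gaussian mean and variance bounds invoked repeatedly below.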
Recall that $p_t(\cdot)$ is the Gaussian density of variance $t$. Let $K > 0$. From (2.17)(ii) and (2.18) it follows that there is a constant for which (2.19) holds for $x, z \in [-K, K]$ and $t \leq T' < T$.
Next, we note that, by elementary formulas for moments of Gaussians, there is a constant $c > 0$ such that (2.20) holds. Finally, observe that $q_T(\cdot,\cdot)$ is bounded below by the transition density of $Y$ with constant killing function $\|\phi\|_\infty$. Thus for all $K > 0$ and $M \geq 1$, from (2.15) we obtain (2.21) for a sufficiently small constant $\delta(K, M) > 0$.
Let $0 < T' < T$ and suppose that $0 < s < t \leq 2T/3$. Using (2.17)(i) to bound $q_s(x,y_1)\,dm(y_1)$ and $q_{t-s}(y_1,y_2)\,dm(y_2)$, and (2.19) to bound $q_{T-t}(y_2,z)$, from (2.14) we obtain an upper bound for the fourth moments of increments. Recall that we have assumed $x, z \in [-K, K]$. By (2.16) it is clear that for $K > 0$, the integral is bounded above by some constant $C_2(K) > 0$ for all $x \in [-K, K]$ and $s > 0$. Using this along with (2.21), with a choice of $M \geq 1$ for which $T \in [M^{-1}, M]$, from the above we deduce (2.25). Hereafter we consider increments of size at most $1 \wedge T/3$. We have shown that (2.25) holds for all $0 < s < t \leq 2T/3$; it remains to show that it also holds on $[2T/3, T]$. To do so, we make use of reversibility. Suppose $T/3 \leq s < t < T$. Then the increments of $Y$ on $[s,t]$ agree in law with those of the reversed process on $[T-t, T-s]$, where the last equality uses $q_t(x,y) = q_t(y,x)$ (a consequence of (2.1)) and (2.13). Since $0 < T-t < T-s \leq 2T/3$, by (2.25) and (2.26) we have the same bound for all $x, z \in [-K, K]$. Combined with the previous statement that the bound holds for all $0 < s < t \leq 2T/3$, we obtain (2.27). The above proof can be easily modified to obtain the same bound (with a potential change to the constant) for increments in which $s = 0$ or $t = T$, and we omit the details. Thus by (2.27) and the Kolmogorov Continuity Theorem, the FDDs extend to a law on $C([0,T], \mathbb{R})$. As we noted earlier, this gives an explicit construction of the regular conditional distribution of $(Y_t : t \in [0,T])$ under $P^Y_x$ given $\rho > T$ and $Y_T = z$. Additionally, if $z_n \to z$, the aforementioned tightness gives convergence of the corresponding laws. Thus we have proved part (a).
Before proving (b), we note the following consequence of (2.27) and its proof. Let $S, K > 0$ and fix $x, z \in [-K, K]$. By considering increments of $(Y_s : s \in [0,S])$ but allowing the time $T$ at which we condition to vary, one obtains a bound which is uniform in $T$. Next we turn to part (b). Fix $S > 0$ and $x, z \in \mathbb{R}$. We now check that the FDDs of $(Y_s : s \in [0,S])$ under $P^Y_x(\cdot \mid \rho > T, Y_T = z)$ converge as $T \to \infty$. Let $0 < t_1 < t_2 \leq S$ and let $\phi_1$ and $\phi_2$ be bounded and continuous functions. Then from (2.13), we obtain the expression (2.29) for the two-dimensional FDD. Moreover, applying (2.2) with $\delta = 1/8$, we obtain (2.31), where $s^*(1/8)$ is as in (2.3). Using (2.31) (replacing $t$ with $t_2$) and (2.17)(i), we obtain a bound for the integrand in (2.29) valid for all $T \geq S + s^*(1/8)$. By (2.16), $k_{t_1}(x, y_1)$ and $k_{t_2-t_1}(y_1, y_2)$ are Gaussian with variance at most $1$, and so a short argument shows that the bounding quantity is integrable. This allows us to use Dominated Convergence in (2.29), so by (2.30) the two-dimensional FDDs converge as $T \to \infty$. The above argument can easily be generalized to $n$-fold FDDs for all $n \geq 2$ (the $\delta = 1/8$ in (2.31) can be reduced to handle larger $n$), and thus we have the desired convergence of the FDDs as $T \to \infty$. In order to obtain weak convergence of the laws on $C([0,S], \mathbb{R})$, we need tightness of the distributions as $T \to \infty$. To prove that the distributions are tight we will analyse the fourth moments of increments, as in (2.14), but first we obtain one more bound. We note that by Lemma 2.2 and joint continuity of $(T, (x,y)) \to q_T(x,y)$, for all $K > 0$, (2.32) holds for a sufficiently small $\delta(K) > 0$. Let $K > 0$ and $x, z \in [-K, K]$. In (2.14), we bound $q_{T-t}(y_2, z)$ above using (2.31) and bound the other transition densities using (2.17), which gives a bound valid for all $T \geq S + s^*(1/8)$. In the second inequality we have used (2.32) as well as (2.18) and a change of variables.
The third inequality follows from a short calculation and the fact that $t - s \leq 1$. Applying (2.20) to the above and arguing as in (2.23), we obtain, for all $T \geq S + s^*(1/8)$, the bound (2.33) for a constant $C_3(S, K) > 0$, where, to see that the integral is bounded uniformly for $|x| \leq K$, we use the fact, from (2.16), that $k_s(x, y_1)$ is Gaussian with mean absolutely bounded by $|x|$ and variance less than $1$. The fact that (2.33) holds for all $T \geq S + s^*(1/8)$ implies that the laws are tight as $T \to \infty$. Combined with the convergence of the FDDs to those of $P^{Y,\infty}_x$, this proves (b).

Some non-linear PDE
Let $B^+_b(\mathbb{R})$ denote the space of bounded, non-negative Borel functions. Recall that $S_t$ denotes the semigroup of Brownian motion. By Theorem III.5 of [11], for $\phi \in B^+_b(\mathbb{R})$, there exists a unique non-negative solution, denoted $V^\phi_t(x)$, to the evolution equation (3.1), and this solution characterizes the Laplace functional of $X_t$ for all $X_0 \in M_F(\mathbb{R})$. Applying the above with $X_0 = \delta_x$, (1.14) gives the analogous formula under the canonical measure. We are interested in the case when the initial data is a measure, and also in the differential form of the equation. The integral equation (3.1) has a corresponding PDE, (3.4). In [1], this equation was shown to have a unique $C^{1,2}$ solution when $\phi \in M_F(\mathbb{R})$, where $V_t \to \phi$ is understood as weak convergence. By Lemma 2.1 of [15], this solution is also the unique solution to (3.1). We denote the unique solution to (3.1) and (3.4) by $V^\phi_t$. Part (d) of the same lemma establishes that if $\phi_n \to \phi$ weakly as $n \to \infty$, then the corresponding solutions converge. Using this and the fact that $X_t$ has a bounded, continuous density, if we approximate measures by functions in $B^+_b(\mathbb{R})$, we can take bounded limits in (3.2) and (3.3) to establish that (3.2) and (3.3) hold for $V^\phi_t$ when $\phi \in M_F(\mathbb{R})$. We now state some useful properties of solutions to (3.4). For a proof, see Lemma 2.6 in [15].
This family was originally studied in [8]. It is an exercise to use (3.5) or the scaling properties of super-Brownian motion to show that V λ t (x) satisfies the following space-time scaling relationship. For λ, r > 0, we have By translation invariance in the initial conditions of (3.5), and by (3.2) and (3.3), we have for all x ∈ R and t > 0. It is clear from (3.7) that V λ t increases to a limit as λ → ∞. In the PDE literature this was established in [8], where it was shown that V λ converges locally uniformly as λ → ∞ to a function which is the solution of (3.5) when λ = +∞. Rigorously, it is the unique solution to the following problem: where B ǫ = B(0, ǫ), the ball of radius ǫ centred at the origin. V ∞ t was introduced and shown to solve (3.9) in [2]; uniqueness of the solution is a consequence of Theorem 3.5 of [12]. Taking λ → ∞ in (3.8), we see that We recall that (see Theorem II.7.2 of [17]) Then we have It was shown in [2] that F is the solution to an ODE problem. (In fact, their PDEs and ODEs have different (constant) coefficients, but Section 3 of [14] shows that F is a rescaled version of the function they study.) F is the unique solution of (i) This F is the function we discussed in the introduction, for which −λ 0 is the lead eigenvalue of the operator A F . In particular, by evaluating (3.10) at t = 1 we can recover (1.3), our preliminary definition.
As part of the proof of Theorem A, the authors of [14] computed the rate of convergence of V λ t to V ∞ t . In particular, Proposition 4.6 of that reference states that for some constant C. (This is closely connected to (1.5).) A similar lower bound with the same power of λ is established in the same proposition. We will make frequent use of (3.14) in this work to bound error terms arising when we make approximations to obtain an eigenvalue problem. Let Y be an Ornstein-Uhlenbeck process. We define Z T (Y ) as From (3.14), we can easily deduce that the (monotone) limit exists and is finite, and that moreover there is a constant C Z > 0 such that, uniformly for all Y , (3.17) Finally, we introduce another family of solutions to (3.4), which arise when we compute second moments of L λ t ; we will evaluate expressions that involve the density at two points. When we evaluate this function at 0 we simply write In particular, by (3.4), this is equivalent to We will make frequent use of the fact that this family of solutions is translation invariant in the initial conditions of (3.4). This implies that V . The family satisfies the following scaling relationship: for all λ, λ ′ , r, c > 0. Taking limits and applying bounded convergence in (3.2), we see that V λ,λ ′ t (x 1 , x 2 ) has a monotone limit as λ, λ ′ → ∞ (by Proposition 3.1(a)). We denote this limit by V ∞,∞ t (x 1 , x 2 ). In agreement with our previous notation we define the following.

By taking the limit as
We conclude by stating a version of (3.14) for the functions V λ,λ ′ t .

Lemma 3.2.
There is a positive constant C such that for all t, λ, λ ′ > 0, Proof. By (3.20) and (3.3), the first term is equal to where the second last line follows from (3.10) and (3.3), and the final inequality is by (3.14). We use similar reasoning to bound the second term of (3.21) by the same expression with λ ′ replacing λ, which gives the desired result.

Existence and Properties of L t
As stated in the introduction, our method first establishes the existence and properties of L t under N 0 and then uses the cluster decomposition to establish them under P X X0 . The main ingredient in the proof of Theorem 1.1 is the convergence of second moments of L λ t (φ) as λ → ∞. For a bounded Borel function φ, we show that N 0 (L λ t (φ) 2 ) converges as λ → ∞. In fact, we prove convergence of second moments of general functions of two variables. For h : R 2 → R we recall the notation L λ t (φ) 2 is easily recovered by taking h(x, y) = φ(x) φ(y). The following result is the workhorse of this paper.
The proof of Theorem 1.4 is long and technical; we defer it to Section 5. For now, we take it as given and use it to establish our other main results, the first being the existence of L t under N 0 .
Proof of Theorem 1.1 for N 0 . Fix t > 0. Because X t = 0 implies that L λ t = 0 for all λ > 0, without loss of generality we can work under the finite measure N 0 (• ∩ {X t > 0}). By Corollary 4.1, for a bounded continuous function φ, there exists a random variable l(t, φ) such that L λ t (φ) → l(t, φ) in L 2 (N 0 ) as λ → ∞. It follows that L λ t (φ) → l(t, φ) in measure. We will now establish that there exists a unique random measure L t such that l(t, φ) is the integral of φ with respect to L t , i.e. l(t, φ) = L t (φ), for all continuous and bounded functions φ.
We need to establish that the measures {L λ t : λ > 0} are tight N 0 -almost surely. To see that this is true, we recall that X(t, •) is compactly supported N 0 -a.s. (see Corollary III.1.4 of [17] for the result under P X δ0 ; condition the cluster representation on N = 1 to get it for N 0 ), and hence the mass of X t is contained in a ball B(0, R) for some R > 0. Since L λ t (dx) = λ 2λ0 X(t, x)e −λX(t,x) dx, this implies that the mass of L λ t is contained in B(0, R) for all λ > 0, which implies that {L λ t (ω) : λ > 0} is tight.
Let {φ n } ∞ n=1 be a countable determining class for M F (R) consisting of bounded, continuous functions. We choose φ 1 = 1. L 1 -boundedness of the total mass and tightness are sufficient conditions for a family in M F (R) (with the weak topology) to be relatively compact. By Corollary 4.1, {L λ t (1) : λ > 0} is L 2 (N 0 )-bounded, and hence L 1 (N 0 )-bounded, and so from the above we see that As we have noted, L λ t (φ n ) → l(t, φ n ) in measure as λ → ∞. Using the fact that convergence in measure implies almost sure convergence along a subsequence, we can iteratively define subsequences and take a diagonal subsequence {λ m } ∞ m=1 which satisfies (4.1) As we have noted, {L λm t } ∞ m=1 is relatively compact N 0 -almost surely. Combined with the above, this means that for N 0 -a.a. ω we have the above convergence for all n and relative compactness of the measures {L λm t } ∞ m=1 . Choose such an ω. By relative compactness of {L λ t } λ>0 , any subsequence of {λ m } ∞ m=1 admits a further subsequence along which the measures converge in the weak topology. It remains to show that all subsequential limits coincide. Suppose L t (ω) and L ′ t (ω) are two such limit measures. Since ω has been chosen so that (4.1) holds, we have that Since the family {φ n } n≥1 is a determining class, this implies that L t (ω) = L ′ t (ω). Hence all subsequences admit a further subsequence with the same limit L t (ω) in the weak topology. Since the weak topology on M F (R) is metrizable, the "every subsequence admits a further converging subsequence" criterion for convergence applies, and we have that L λm t (ω) converges to L t (ω) ∈ M F (R) as m → ∞. This gives the almost sure convergence along {λ m } ∞ m=1 . As the weak topology is metrizable, we also have L λ t → L t in measure. Furthermore, we observe that for continuous and bounded φ, L t (φ) = l(t, φ). To see this, recall that L λm t (φ) converges to l(t, φ) in L 2 (N 0 ). As we have just shown that lim m→∞
BZ t . We fix ω outside of a null set such that Proof of Theorem 1.5. To prove (b), by Theorem 1.4 it is enough to show that N 0 (( for a sequence λ n → ∞, which we choose to be the sequence from Theorem 1.1 along which L λn t → L t almost surely. Because L t = 0 when X t = 0, we can work under the probability measure N 0 (• | X t > 0). For bounded and continuous h : (1), which implies that L λn t (1) 2 , and hence (L λn t × L λn t )(h), are uniformly integrable (see, e.g., Theorem 4.6.3 of [5]). We can therefore exchange limit and expectation, giving Since L λn t → L t in M F (R) and h is bounded and continuous, the integrand on the left hand side is equal to (L t × L t )(h), which gives the result. By a Monotone Class Theorem (e.g. Corollary 4.4 in the Appendix of Ethier and Kurtz [6]), the same holds for all bounded and measurable h.
We now turn to part (a). Let φ : R → R be bounded and Borel. We recall from the Introduction (see (1.5)) that Proposition 4.5 of [14] states that (The fact that the constant appearing in Proposition 4.5 of [14] equals C 1.4 is implicit in the proof.) The proof uses the Palm measure formula for X t under P X X0 ; see Theorem 4.1.3 of Dawson-Perkins [4]. The corresponding Palm measure formula for the superprocess under N 0 is in fact simpler, and the same proof shows that Consider now a bounded and continuous function φ; we can also clearly assume that φ ≥ 0. By Theorem 1.1 (under N 0 ), L λ t (φ) converges in L 2 with respect to the probability measure N 0 (X t ∈ • | X t > 0), which implies that it also converges in L 1 , allowing us to exchange limit and expectation in (4.3), which gives part (a) for bounded and continuous φ. This extends to all bounded and measurable φ by a monotone class argument (as above for part (b)). Finally, it is clear that both (a) and (b) hold for general non-negative functions by the Monotone Convergence Theorem.
We now describe how to establish the existence of L t when X t is a super-Brownian motion under P X X0 via the cluster representation. In particular, we recall (1.15) and (1.16). Let X 0 ∈ M F (R) and t > 0.
Proof of Theorem 1.1 for P X X0 and Theorem 1.8. Let N, x 1 , . . ., x N , X 1 t , . . ., X N t be as in the cluster decomposition (1.16). For λ > 0, define the measure L λ t via (1.6) using X t . For i = 1, . . ., N , let L i,λ t denote the measure defined in (1.6) corresponding to X i t . By Theorem 1.1 for N 0 and translation invariance, (1.17). That is, Let φ : R → R be bounded and continuous. We will show that (4.4) Once we establish (4.4), the proof of Theorem 1.1 for N 0 applies and shows that L λ t → L t in probability in M F (R) as λ → ∞. With the exception of L 2 convergence, which we show afterward, this proves Theorem 1.1 for P X X0 . Furthermore, since L t is defined by (1.17), this also proves Theorem 1.8.
Turning to (4.4), we will argue conditionally on (N, x 1 , . . ., x N ). That is, we argue under the regular conditional distribution for (X 1 t , . . ., X N t ) given (N, x 1 , . . ., x N ). As such, we treat N ≥ 1 and x 1 , . . ., x N ∈ R as fixed, and X 1 t , . . ., X N t are independent random measures with respective laws N xi (X t ∈ • | X t > 0) for i = 1, . . ., N . Let E denote the expectation of a probability realizing this conditional representation for X t . Expanding L λ t (φ) in terms of the clusters, we have λ 2λ0 X i (t, x)e −λX i (t,x) e −λ Σ j≠i X j (t,x) φ(x) dx where we define Z i N (t, x) = Σ j≠i X j (t, x), in which the indices are understood to sum from 1 to N . Using this notation, Thus by (4.5), to prove (4.4) it is clearly enough to show that for any 1 Without loss of generality, assume that We first consider R 1 . Since X i t and Z i N (t, •) are independent and L λ t is a measurable function of X i t , conditional on X i t we have, for all λ > 1 and 1 ≤ λ ′ < λ, where in the second last line we have used (3.11), and the last follows from (3.8), (3.10), and translation invariance. We apply (3.14) to the integrand and take the expectation of the above to obtain that ) for all λ > 1 and 1 ≤ λ ′ < λ, where the last inequality is by Theorem 1.5(a) and the fact that L i,λ t (1) → L i t (1) in L 2 (N xi ) (from Theorem 1.1). Next we consider R 3 . Note that we can expand and bound this term in exactly the same way as we did R 1 in (4.7), but with L i t replacing L i,λ t . Taking the expectation and proceeding as above then gives (4.10) 1) .

It remains to show that
) for all continuous and bounded functions φ. Let φ be such a function, and suppose that X t is realized as in (1.16) under a probability P X X0 . Under P X X0 (• | N ), from (4.5) and (1.17) we have We recall that X 1 t , . . ., X N t are iid with distribution N X0 (X t ∈ • | X t > 0), where X0 = X 0 (•)/X 0 (1) and N X0 (•) = N x (•) dX 0 (x). This implies that the N summands in (4.12) are identically distributed; in particular, conditional on N we define identically distributed random variables e N,λ i ≥ 0, for i = 1, . . ., N , by By (4.6), e N,λ i converges to 0 in probability as λ → ∞ when conditioned on (x 1 , . . ., x N ). However, one can integrate the conditional probabilities over (x 1 , . . ., x N ) ∈ R N to determine that (4.14) e N,λ i → 0 in probability under It is clear from (4.13) that for all λ > 0, Conditioning on N = n and summing over n ∈ N, by (4.12) and Fubini's Theorem we have ) for all λ ≥ 1, for some constant C(t, φ) > 0 (by uniform integrability), the nth term in the above is bounded above by C(t, φ) P X X0 (N = n)n 2 . Dominated Convergence therefore allows us to exchange limit and summation in the above, which by (4.16) gives the result.
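The first-moment bookkeeping in the cluster decomposition rests on the fact that a sum of N iid cluster contributions, with N Poisson, has mean E[N] times the mean of a single cluster (Wald's identity). A hedged numerical sketch: the Poisson mean 2 corresponds to 2X 0 (1)/t with 2X 0 (1)/t = 2, and the Exp(1) "cluster masses" are a placeholder stand-in, since we do not simulate the true cluster law N x (• | X t > 0).

```python
import numpy as np

rng = np.random.default_rng(0)

# N ~ Poisson(2 X_0(1)/t); here we take 2 X_0(1)/t = 2 for illustration.
mean_n, n_samples = 2.0, 100_000
N = rng.poisson(mean_n, size=n_samples)

# Sum of N iid placeholder cluster masses (Exp(1), NOT the true cluster law).
totals = np.array([rng.exponential(1.0, size=k).sum() for k in N])

# Wald's identity: E[sum over N clusters] = E[N] * E[one cluster] = 2.0 * 1.0
est = totals.mean()
```

The same conditioning-on-N device (compute given N, then average over the Poisson distribution of N) is used repeatedly in the proofs below.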
Proof of Theorem 1.9. This is virtually identical to the above proof of Theorem 1.8 and is omitted.
As we have remarked, the expression in Theorem 1.4, which is the same as (1.9) in Theorem 1.5(b), is finite for all bounded h, despite the appearance of non-integrability (since λ 0 > 1/2). Proposition 1.6, which we restate here for convenience, provides a useful upper bound on second moments which is our main tool for studying L t . The bound is not difficult to obtain; its derivation relies only on applying trivial upper bounds to several terms and making a few changes of variables. Recall that E Y z denotes the expectation of a standard Ornstein-Uhlenbeck process Y with Y 0 = z. Proposition 1.6. For a measurable, non-negative function h : R 2 → R, Proof. Let h : R 2 → R be Borel measurable and non-negative. We use the formula for N 0 ((L t × L t )(h)) given by (1.9). We recall that ρ(z 1 , z 2 ) ≤ 1 and use this bound, and we bound above by using Since z 1 ∼ m, √ t − s z 1 has a normal distribution with variance t − s, and we interpret it as the Brownian increment B t − B s . Hence the above is equal to where in the second line we have used w = t − s and defined Applying this and letting u = e r in the integral, we obtain that the above is equal to We now define a stationary Ornstein-Uhlenbeck process Y (with stationary measure m) by Y r = e −r/2 W e r for r ∈ R. Recall that we denote its law by E Y . The above is therefore equal to By stationarity of Y , we can shift time by log w to obtain that the above is equal to Y 0 has distribution m, so we condition on the value of Y 0 and call it z 1 . This gives the desired expression and proves that (1.10) holds. The proof of (1.11) is a consequence of the following lemma.
Proof of Lemma 4.2. Expanding in terms of the transition densities, we have where • , • L 2 (m×m) denotes the inner product on L 2 (m × m) and ⊗ is the tensor product of functions. Recall from Theorem 2.1(a) that the eigenfunction expansion (2.1) converges in L 2 (m × m) to q t (•, •), and that ψ 0 L 2 (m) = 1. Thus by the above and Fubini's theorem, (4.17) is equal to where the first equality follows from orthogonality of the eigenfunctions, which implies that ψ n ψ 0 dm = 0 for all n ≥ 1, and the last line uses ψ 0 dm = θ and ψ 2 0 dm = 1.
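The time change Y r = e −r/2 W e r used in the proof of Proposition 1.6 turns a Brownian motion W into a stationary Ornstein-Uhlenbeck process with standard normal marginals and Cov(Y r , Y s ) = e −|r−s|/2 . A quick simulation check of these two facts (the sample size and the particular pair of times r = 0, s = 1 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Brownian motion sampled at times e^0 = 1 and e^1 = e:
W1 = rng.normal(0.0, 1.0, n)                       # W_1 ~ N(0, 1)
We = W1 + rng.normal(0.0, np.sqrt(np.e - 1.0), n)  # independent increment W_e - W_1

Y0 = W1                   # Y_0 = e^{-0/2} W_{e^0}
Y1 = np.exp(-0.5) * We    # Y_1 = e^{-1/2} W_{e^1}

cov = np.mean(Y0 * Y1)    # should be close to e^{-1/2}
var = np.mean(Y1 ** 2)    # stationary variance should be close to 1
```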
We now use the bounds in Proposition 1.6 to prove the remaining properties of L t .
Proof of Theorem 1.2(a). Via the second moment method, we have where we recall that ψ 0 dm = θ and we have used Theorem 1.5(a) and (1.11). We recall that N 0 (X t > 0) = 2/t, which implies that Proof of Theorem 1.3. Recall that for p > 0, h p (x, y) = |x − y| −p . We first establish that for all p < 2 − 2λ 0 . Applying (1.10) with h p , we have Recalling (1.4), the expectation is equal to the survival probability P Y z1 (ρ F > log(t/w)), so the above equals Applying (2.5) and (2.9), both with δ = 1/8, this is bounded above by The second line follows because the integrand has Gaussian tails in z 1 and z 2 and p < 2 − 2λ 0 < 1. Finally, the integral in the final line is finite because p/2 < 1 − λ 0 implies −λ 0 − p/2 > −1, which proves (4.18). In fact, we have shown that Next, we establish the same under P X X0 . That is, we will show that (4.20) E X X0 ((L t × L t )(h p )) < ∞ for p < 2 − 2λ 0 . We use the cluster decomposition and argue conditionally as in the proof of Theorem 1.1 (for P X X0 ) above. Suppose that P X X0 is a probability under which X t is realized as in (1.16). Conditioning on N, x 1 , . . ., x N , by (1.17) we have Thus we obtain that which provides a bound for the summands in the first term of (4.21). We now consider the mixed integrals in (4.21), that is, the summands in the second term. Without loss of generality, let i = 1 and j = 2, and denote their (independent) laws . Because the integrands are non-negative, we can change the order of integration and obtain To compute the inner expectation we apply translation invariance and (3.11), which gives where the last line follows from the mean measure formula (1.8). By (2.5) with δ = 1/4, we have that ψ 0 (z 2 ) dm(z 2 ) ≤ c e −z 2 2 /4 dz 2 . Thus the above is bounded above by By the above bound and another application of (1.8), (4.23) is bounded above by We note that both (4.22) and (4.24) are independent of the points x 1 , . . ., x N . Therefore by these bounds and (4.21) we have shown that Taking the expectation above with respect to N , which we recall is Poisson with mean 2X 0 (1)/t, gives which proves (4.20).
Under both P X X0 and N 0 , we have shown that the p-energy of L t has finite expectation, and hence L t has finite p-energy almost surely, for all p < 2 − 2λ 0 . By the energy method (see, for example, Theorem 4.27 of Mörters and Peres [13]), this implies that dim(BZ t ) ≥ 2 − 2λ 0 a.s. on {L t > 0} under P X X0 and N 0 . Combined with Theorem A, this completes the proof of Theorem 1.3 for P X X0 . That the upper bound on the dimension holds under N 0 follows from the cluster decomposition. Consider X t under P X δ0 . In the cluster decomposition of X t , the probability that N = 1 is positive. Conditioning on this event, X t is equal to X 1 t , which has law N 0 (X 1 t ∈ • | X t > 0). Because dim(BZ t ) ≤ 2 − 2λ 0 a.s. on this event, we therefore have N 0 {dim(BZ t ) ≤ 2 − 2λ 0 } X t > 0 = 1. This completes the proof.
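The energy method converts a finite p-energy ∫∫ |x − y| −p dµ(x) dµ(y) into a lower bound on the Hausdorff dimension of the support. As a small numerical illustration, separate from the proof: for the uniform measure on [0, 1] the p-energy is finite exactly when p < 1, with value 2/((1 − p)(2 − p)), and a midpoint-grid approximation recovers it from below (the grid size and the choice p = 1/2 are arbitrary):

```python
import numpy as np

def p_energy_uniform(p, n=400):
    """Midpoint-rule approximation of the p-energy of the uniform measure
    on [0, 1]. The singular diagonal cells are dropped, so the result
    slightly underestimates the true integral."""
    x = (np.arange(n) + 0.5) / n                 # midpoints of n equal cells
    diff = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(diff, np.inf)               # omit the singular diagonal
    return float(np.sum(diff ** (-p)) / n**2)

# Exact value for p < 1: int_0^1 int_0^1 |x - y|^(-p) dx dy = 2/((1-p)(2-p))
exact = 2.0 / ((1 - 0.5) * (2 - 0.5))
approx = p_energy_uniform(0.5)
```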
Proof of Theorem 1.7. To see part (a), we note that (4.2) gives an expression for lim λ→∞ E X X0 (L λ t (φ)). On the subsequence {λ n } ∞ n=1 from Theorem 1.1, L λn t (φ) → L t (φ) a.s. for bounded and continuous φ, so it is enough to show that lim n→∞ E X X0 (L λn t (φ)) = E X X0 (lim n→∞ L λn t (φ)). By Theorem 1.1, L λ t (φ) converges in, and hence is bounded in, L 2 (P X X0 ). It is therefore uniformly integrable, which justifies the above exchange of limit and integration. This proves the result for bounded and continuous φ. We extend the moment formula to bounded measurable functions by a Monotone Class Lemma, and to non-negative measurable functions by Monotone Convergence.
We now prove part (b). Suppose we realize X t under a probability P X X0 such that (1.16) holds. Conditionally on N , by (1.17) we have The clusters are independent with laws , the equality by (3.11). Thus, applying Theorem 1.5(a) and Proposition 1.6(b) to the above and using independence, we obtain As in the proof of Theorem 1.3, we take the expectation with respect to N , which has a Poisson(2X 0 (1)/t) distribution. This proves part (b).
Finally, we consider the atomless property of L t (see Theorem 1.2(b)). Once again we carry out the necessary moment calculations under the canonical measure. L t has an atom of mass c > 0 at x if L t ({x}) = c. We decompose L t as where Lt is atomless and ν t is strictly atomic. We begin with an elementary observation which provides an upper bound for the mass of the atoms of a measure. Let M ∈ N.
and for k = 2, 3, . . ., 2M 2 n , define the dyadic interval The following lemma is elementary. Lemma 4.3. Fix M ∈ N and suppose that µ is a finite measure supported on [−M, M ] with decomposition µ = ρ + ν, where ρ is atomless and ν = Σ i∈I c i δ xi is strictly atomic. Then for every n ≥ 1, The next lemma gives an upper bound for the second moment of L t on a ball. We denote by B(x, r) the ball of radius r > 0 centred at x ∈ R. We recall s * (δ) from Theorem 2.1(c); in what follows we use δ = 1/8, and s * denotes s * (1/8).
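Lemma 4.3 is the mechanism by which dyadic second moments detect atoms: as n → ∞, the sum Σ k µ(I n k ) 2 over the dyadic intervals stays bounded below (asymptotically) by the sum of squared atom masses, while an atomless part contributes nothing in the limit. A hedged numerical sketch, with µ taken as uniform mass 1 on [−1, 1] plus two atoms as an arbitrary example:

```python
import numpy as np

def dyadic_square_sum(n, atoms, M=1):
    """S_n = sum of mu(I)^2 over the 2M * 2^n dyadic intervals of [-M, M],
    where mu is uniform mass 1 on [-M, M] plus atoms, a list of (x_i, c_i)."""
    edges = np.linspace(-M, M, 2 * M * 2**n + 1)
    masses = np.diff(edges) / (2 * M)            # atomless (uniform) part
    for x, c in atoms:
        masses[np.searchsorted(edges, x, side="right") - 1] += c
    return float(np.sum(masses ** 2))

# Atoms placed at +-1/3, which never sit on a dyadic endpoint.
atoms = [(1 / 3, 0.3), (-1 / 3, 0.2)]
```

As n grows, dyadic_square_sum(n, atoms) decreases toward 0.3² + 0.2² = 0.13: the uniform part's contribution vanishes geometrically and only the squared atom masses survive, which is exactly how the proof of Theorem 1.2(b) below forces ν t = 0.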
Lemma 4.4. There is a constant C 4.4 > 0 and a t-dependent constant C 4.4 (t) > 0 such that for all x ∈ R and r < e −s * t, where W is a standard Brownian motion under P W 0 . Proof. We apply (1.10) with h(z 1 , z 2 ) = 1 B(x,r) (z 1 ) 1 B(x,r) (z 2 ). This gives We now divide the above into two cases depending on the size of w. We first consider the singular case, where w is small.
We interpret the exponential in (4.27) as the probability that Y survives until time log(t/w) when it is subject to Markovian killing at rate F (Y u ). Because this probability is equal to the integral of the transition density over all of R, the portion of the integral corresponding to w ∈ [0, e −s * t] equals The first inequality uses (2.2) with δ = 1/8, which applies because log(t/w) > s * for all w in the above integral, and the second uses (2.5), also with δ = 1/8. In the integral in the last line we collect all the Gaussian terms. The square-bracketed term is equal to Here we have used the convolution property of independent Gaussians. We define Gaussian random variables g 1 ∼ N (0, 4t/3) and g 2 ∼ N (0, 10/3). Substituting the last expression into (4.28), we obtain /20 e −3y 2 /8 dz dy dw Suppose that 4r 2 < e −s * t. If 2rw −1/2 > 1, we bound the probability in the integral above by 1. If 2rw −1/2 ≤ 1, the probability is simply bounded by the diameter of the ball, 4rw −1/2 . Thus (4.29), and hence (4.28), is bounded above by Finally, note that if 4r 2 ≥ e −s * t, then (4.29) is bounded above by so the upper bound for (4.28) obtained in (4.30) holds in this case as well.
In this case we simply bound the exponential term in (4.27) above by 1, effectively ignoring the killing, in which case Y log(t/w) ∼ m. We also use (2.5) with δ = 1/4. Hence the contribution to (4.27) from the case w ∈ (e −s * t, t] is bounded above by In the above, g 3 ∼ N (0, t) and g 4 ∼ N (0, 1). The third line follows by the convolution property of Gaussians. We again bound the probability in the integral by the diameter of the ball, which gives the following upper bound for the above: = Ct −3λ0+1/2 P (g 3 ∈ B(x, r)) r. (4.31) By combining (4.30) and (4.31) and interpreting the Gaussian probabilities in terms of Brownian motion, we obtain the first inequality of the result. The second bound is obtained by bounding the Brownian density above by its maximum value.
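Both cases of the proof rest on the convolution property: the sum of independent N (0, a) and N (0, b) variables is N (0, a + b). A quick numerical confirmation of this fact for densities (the variances 4/3 and 10/3 mirror the t = 1 instance of g 1 and g 2 above; the grid is an arbitrary discretization choice):

```python
import numpy as np

def gauss(x, var):
    """Centred Gaussian density with the given variance."""
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

x = np.linspace(-20, 20, 4001)   # odd length keeps the grid centred at 0
h = x[1] - x[0]

# Numerical convolution of the N(0, 4/3) and N(0, 10/3) densities ...
conv = np.convolve(gauss(x, 4 / 3), gauss(x, 10 / 3), mode="same") * h

# ... should match the N(0, 4/3 + 10/3) density pointwise.
err = float(np.max(np.abs(conv - gauss(x, 4 / 3 + 10 / 3))))
```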
Proof of Theorem 1.2(b). First consider L t under N 0 and recall the decomposition (4.27), i.e. L t = Lt + ν t , the latter strictly atomic. Fix M ∈ N and consider the restriction of L t to [−M, M ], i.e. dL . Note that the radius of the dyadic intervals is r( n+1) . By Lemma 4.4, we have Moreover, by Lemma 4.3, the first expression is greater than or equal to the expectation (under N 0 ) of the sum of the squares of the atoms of L (M) t . The above implies that this expectation must in fact be zero, so ν (M) t = 0 N 0 -a.s. As this holds for all M , ν t = 0 and L t is atomless under N 0 . To obtain the result under P X X0 , we note from the cluster decomposition and (1.17) that (conditionally) L t is a sum of N measures which are atomless by the above, and hence is atomless.

Proof of Theorem 1.4
We begin by obtaining an expression for second moments of L λ t under the canonical measure. In particular, we study N 0 (L λ t (φ 1 ) L λ ′ t (φ 2 )) for λ, λ ′ > 0. The moment representation formula is naturally suggested by a branching particle heuristic. Its proof uses PDE methods and the Laplace functional. Let E B x denote the expectation of a Brownian motion started at x. E B 1 ,B 2 (x,y) denotes the law of two independent Brownian motions B 1 and B 2 started from points x and y respectively. We recall that p δ (•) denotes the Gaussian density with variance δ. Proposition 5.1. Let h : R 2 → R be a bounded Borel function and λ, λ ′ , t > 0. Then The proof of Proposition 5.1 requires the following lemma.
Lemma 5.2. Let ϕ ∈ M F (R) and ϕ 1 , ϕ 2 ∈ L 1 (R) be non-negative and continuous. Then Proof. Let ǫ 1 , ǫ 2 > 0 and let ϕ, ϕ 1 and ϕ 2 be as in the statement. Viewing ϕ 1 and ϕ 2 as the density functions of the finite measures they induce (i.e. ϕ i (A) = A ϕ i (x) dx), let V ϕ,ǫ1,ǫ2 t denote the solution to (3.4) We differentiate this expression once with respect to ǫ 1 and once with respect to ǫ 2 . The derivatives of the inner expression on the left hand side are bounded above by integrable quantities (i.e. X t (ϕ 1 ) and X t (ϕ 1 )X t (ϕ 2 )), so we can take the differentiation inside the expectation in the probabilistic representation, and the derivatives of the right hand side exist. The resulting equation is the following: (5.1) We note that the limit of the left hand side as ǫ 1 , ǫ 2 ↓ 0 is the desired expression. We now obtain an expression for the first derivatives of V ϕ,ǫ1,ǫ2 t (0) with respect to ǫ 1 and ǫ 2 . Consider the following partial differential equation: (5.2) where u t → ϕ 1 in the sense of weak convergence of measures. The above can be obtained heuristically by formally differentiating (3.4) with respect to ǫ 1 when the initial conditions are ϕ + ǫ 1 ϕ 1 + ǫ 2 ϕ 2 . By Lemmas 2.3 and 2.5 of [15], (5.2) has a unique solution, which we denote by U 1,ǫ1,ǫ2 t , and which satisfies . We can apply the same argument to obtain a similar representation for ∂ ∂ǫ2 V ϕ,ǫ1,ǫ2 t , which we denote by U 2,ǫ1,ǫ2 t . Both U 1,ǫ1,ǫ2 t and U 2,ǫ1,ǫ2 t have Feynman-Kac representations; see, for example, Theorem 7.6 of Karatzas and Shreve [9] (p. 366). For i = 1, 2 we have We take the expression for i = 1 and differentiate it with respect to ǫ 2 . We obtain where the final line follows from another application of (5.3), this time with i = 2.
First we note that all the terms are non-negative, so we can take the internal integral over time outside the expectation. For s < t, the integrand then describes one Brownian motion started at 0 and run to time t, and a second which branches from the first at time s and evolves independently. By applying the Markov property at time s, we can equivalently view it as a Brownian path that branches at time s into two independent Brownian motions B 1 and B 2 , which themselves run for a duration of t − s. This formulation, combined with the independence of the Brownian motions, gives us The derivatives in ǫ 1 and ǫ 2 are one-sided at 0, so we cannot exactly evaluate at ǫ 1 = ǫ 2 = 0. However, V ϕ,ǫ1,ǫ2 t (x) is continuous in ǫ 1 and ǫ 2 and the integrand is bounded above by ϕ 1 ∞ ϕ 2 ∞ , so we can apply bounded convergence. As ǫ 1 , ǫ 2 ↓ 0, V ϕ,ǫ1,ǫ2 t → V ϕ t by Lemma 2.1(d) of [15]. We also take ǫ 1 , ǫ 2 ↓ 0 on the left hand side of (5.1) and apply Dominated Convergence. Evaluating at x = 0 gives the result.
Proof of Proposition 5.1. We will prove the result for functions of product form, i.e. h(x, y) = φ 1 (x) φ 2 (y), and then use a monotone class theorem. Let x 1 , x 2 ∈ R and λ, λ ′ > 0. Consider the expression from Lemma 5.2 with ϕ = λδ x1 + λ ′ δ x2 . For now we simply let ϕ 1 and ϕ 2 be functions satisfying the assumptions of Lemma 5.2, but we will shortly choose them to be approximate identities at x 1 and x 2 . Applying Lemma 5.2, we have where we have used translation invariance of V λ,λ ′ (x 1 , x 2 ). Now let ϕ i = p δ (• − x i ), let φ 1 , φ 2 be bounded, continuous functions, and integrate the above multiplied by φ 1 (x 1 )φ 2 (x 2 ) over x 1 and x 2 . The left hand side is then The above is absolutely bounded by where the change of order of integration follows because all the terms are non-negative once we bound |φ i (x i )| by φ i ∞ . Now we note that Combined with (5.6), this implies that (5.5) is absolutely integrable and absolutely bounded above by 2 ). Thus we can apply Fubini and rewrite (5.5) as As noted, the expression inside N 0 is absolutely bounded above by φ 1 ∞ φ 2 ∞ X t (1) 2 , which is integrable under N 0 , for all δ. We take δ ↓ 0 and apply Dominated Convergence to obtain that the limit of (5.8) as δ ↓ 0 is equal to •) is continuous with compact support) N 0 -a.s., and {p δ } δ>0 is an approximate identity family, which together with the above imply that X t (p δ (• − x i )) → X(t, x i ) as δ ↓ 0.
Applying (5.7) shows that the integrals in (5.9) are absolutely bounded by φ i ∞ X t (1) for i = 1, 2, uniformly in δ > 0, so by another application of Dominated Convergence in (5.9), the limit of (5.5) as δ ↓ 0 equals When rescaled by (λλ ′ ) 2λ0 , this is equal to N 0 (L λ t (φ 1 ) L λ ′ t (φ 2 )). We now turn our attention to the right hand side of (5.4). With ϕ i = p δ (• − x i ), integrating against φ 1 (x 1 )φ 2 (x 2 ) dx 1 dx 2 , we have Since the above is equal to (5.5), which we have shown is absolutely integrable, we can take the spatial integrals inside the expectations. At this point we note that we are integrating a bounded function of x 1 and x 2 with respect to the densities p δ (B s + B 1 t−s − x i ), which, because p δ is the kernel of the Brownian semigroup, is the same as viewing x i as B s + B i t−s+δ . Hence the above is equal to Taking δ ↓ 0 and applying Dominated Convergence, we note that because B i t−s+δ → B i t−s and φ 1 , φ 2 and V λ,λ ′ s are continuous, the limit is equal to the above with δ = 0. To obtain the desired form we make a time reversal of the Brownian motions. Let Bi u = B i t−s − B i t−s−u . We note that the Bi are standard Brownian motions and that Bi Making this substitution shows that (5.10) with δ = 0 is equal to The time index of the Brownian motions now matches the time index of the function V λ,λ ′ in the last two lines, allowing us to reverse the time of the integrals for a simpler expression. To obtain the desired expression we now apply the fact that and relabel Bi as B i . This proves the result for h(x 1 , x 2 ) = φ 1 (x 1 ) φ 2 (x 2 ) when φ 1 , φ 2 are bounded and continuous. The result for general bounded measurable h : R 2 → R now follows from a standard monotone class argument, such as Corollary 4.4 in the Appendix of Ethier and Kurtz [6].
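The approximate-identity step above is the familiar fact that smoothing by the Gaussian kernel p δ recovers a continuous function pointwise as δ ↓ 0. A hedged numerical sketch (the tent function, the evaluation point 0.25, and the sequence of δ values are arbitrary illustrative choices):

```python
import numpy as np

def p_delta(x, delta):
    """Gaussian kernel with variance delta, as in p_delta above."""
    return np.exp(-x**2 / (2 * delta)) / np.sqrt(2 * np.pi * delta)

def smooth_at(f, x0, delta, grid):
    """Rectangle-rule approximation of int f(x) p_delta(x - x0) dx."""
    h = grid[1] - grid[0]
    return float(np.sum(f(grid) * p_delta(grid - x0, delta)) * h)

f = lambda x: np.maximum(1.0 - np.abs(x), 0.0)   # continuous, compact support
grid = np.linspace(-3.0, 3.0, 6001)

# As delta decreases, the smoothed values approach f(0.25) = 0.75.
vals = [smooth_at(f, 0.25, d, grid) for d in (0.1, 0.01, 0.001)]
```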
Definition. Let Γ λ,λ ′ (s) denote the integrand in Proposition 5.1, so that the proposition states that (5.11) Γ λ,λ ′ (s) also depends on h, but we suppress this dependence. The next lemma changes variables to obtain an expression involving Ornstein-Uhlenbeck processes. We first introduce some notation. For bounded and measurable h : R 2 → R and a (continuous We define H c u as a scaling of V λ,λ ′ t : (5.13) The scaling in the following lemma cannot be done uniformly for all s ∈ [0, t] because it requires λ 2 > (t − s) −1 and λ ′2 > (t − s) −1 . We derive an expression for Γ λ,λ ′ (s) in terms of two independent Ornstein-Uhlenbeck processes, which we denote Y 1 and Y 2 , and for which we denote the joint (independent) expectation Then for all λ > (t − s) −1/2 and λ ′ > (t − s) −1/2 , we have Proof. We begin with the expression from Proposition 5.1. We observe that Ψ λ,λ ′ B,s appears, and we may write the quantities in the first two lines as Ψ λ,λ ′ B,s (B 1 t−s , B 2 t−s ). In the third and fourth lines we apply (3.19) to obtain , which are both standard Brownian motions. Making a time change in the integrals (i.e. letting u = λ 2 r or λ ′2 r) gives Because we have assumed λ, λ ′ > (t − s) −1/2 , the upper limits of integration in the integrals are greater than 1.
We now apply the Markov property for Bi at time u = 1. We collect the portions of the integrals from the second and third lines on the interval [0, 1], leaving the integrals from 1 to λ 2 (t − s) and λ ′2 (t − s). Conditional on Bi 1 , the Brownian motions in the integrands' arguments are Brownian motions with initial position Bi 1 . If we denote these by Bi u (in which case, essentially, Bi u = Bi u+1 ), we obtain where if B is a standard Brownian motion, then Y is a standard one-dimensional Ornstein-Uhlenbeck process with Likewise, we express the argument of Ψ λ,λ ′ B,s in terms of Y i and T i . We substitute u = e r − 1 and apply the above in (5.14) to obtain )), e r/2 Y 2 r ) dr .
We now apply (3.19) and (5.13) in the third and fourth lines. In the third line this gives the corresponding rescaled expression, and similarly in the fourth. Noting that V^{c,d}_t(a, b) = V^{d,c}_t(b, a), we have obtained the desired expression. We now obtain an upper bound for Γ_{λ,λ′}(s) and show that the contribution to N_0((L^λ_t × L^{λ′}_t)(h)) from the integral over [t − ǫ, t] vanishes as (ǫ, λ′) → (0, ∞). Lemma 5.4. Suppose λ²t ≥ 1, and let h : R² → R be bounded. There is a constant C_{5.4} > 0 such that the following hold.
where the final line follows from a time reversal of B and concatenating the time-reversed B with B^1. Applying (3.6) twice and changing the time variable, the above is equal to an expression in which the rescaled Brownian motions are themselves standard Brownian motions, which we denote by B̃^1, B̃^2. We next substitute u = e^r in both integrals and apply (3.6), writing Y^i_r = e^{−r/2} B̃^i_{e^r}, which makes Y^i a stationary Ornstein-Uhlenbeck process for r ∈ R, and we recall our assumption that λ²t ≥ 1. We condition on the value of Y^1_0, which has distribution m.
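The scaling Y^i_r = e^{−r/2} B̃^i_{e^r} yields a stationary Ornstein-Uhlenbeck process with two-time covariance e^{−|t−s|/2}. A quick simulation check (informal; the two times and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400000
s, t = 1.0, 2.5
# Sample (B_{e^s}, B_{e^t}) for a standard Brownian motion B.
B_es = np.sqrt(np.exp(s)) * rng.standard_normal(n)
B_et = B_es + np.sqrt(np.exp(t) - np.exp(s)) * rng.standard_normal(n)
Ys, Yt = np.exp(-s / 2) * B_es, np.exp(-t / 2) * B_et   # Y_r = e^{-r/2} B_{e^r}
emp_cov = np.mean(Ys * Yt) - Ys.mean() * Yt.mean()
cov_err = abs(emp_cov - np.exp(-abs(t - s) / 2))        # stationary OU covariance
```

The exact covariance e^{−(s+t)/2} min(e^s, e^t) = e^{−|t−s|/2} depends only on the time gap, which is the stationarity used in the text.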
We first use the above to prove (a). Assuming that λ′ > (t − s)^{−1/2}, the upper endpoint of the second integral is positive, so (5.15) yields a bound (5.16) in which we have also conditioned on Y^2_0. In order to approximate the expectations above by survival probabilities for killed Ornstein-Uhlenbeck processes, we add and subtract F(Y^i_u) in the integrals. Recalling the definition of Z_T(Y) from (3.15), we define the corresponding quantities in the same way. Thus (5.16) is equal to (5.17). In the first inequality we have used (3.17) twice, and the second equality follows by recognizing the expectations as survival probabilities of Ornstein-Uhlenbeck processes killed at rate F(Y^i_r). By (2.9), these survival probabilities are bounded by a constant times e^{−λ_0 T}. Using this in (5.17), which is an upper bound for (λλ′)^{2λ_0} |Γ_{λ,λ′}(s)|, proves (a).
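The survival probabilities appearing here decay like e^{−λ_0 T}, where −λ_0 is the lead eigenvalue of the killed Ornstein-Uhlenbeck operator. The following Monte Carlo sketch is purely illustrative: the killing rate F below is a stand-in (the paper's F is determined by the SPDE and is not reproduced here), and the Euler-Maruyama step size is an arbitrary choice; it only exhibits the qualitative decay of P(ρ > T).

```python
import numpy as np

rng = np.random.default_rng(1)

def survival_prob(T, n_paths=50000, dt=0.01, x0=0.0):
    """Estimate P(rho > T) for an OU process dY = -Y/2 dt + dW killed at
    rate F(Y).  F here is a stand-in killing function, not the paper's F."""
    F = lambda y: np.exp(-y**2 / 2)
    Y = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(int(T / dt)):
        Y = Y - 0.5 * Y * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
        alive &= rng.random(n_paths) > F(Y) * dt   # kill with prob ~ F(Y) dt
    return alive.mean()

probs = [survival_prob(T) for T in (1.0, 2.0, 4.0)]   # decays roughly exponentially
```

A regression of log P(ρ > T) against T would estimate the decay rate for the stand-in F; here we only assert monotone decay.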
We now show (b). Let 0 < ǫ < t. Using (5.15) we obtain the bound (5.18). We can approximate the first expectation by the survival probability of Y^1, just as in the proof of (a), and bound it above by Cλ^{−2λ_0} t^{−λ_0}. Furthermore, by the proof of part (a), when λ′ > (t − s)^{−1/2} the expectation in the integral above is bounded above by C(λ′)^{−2λ_0}(t − s)^{−λ_0}. When this is not the case we bound it above by 1. Bounding (5.18) accordingly, the result follows.
Proof of Theorem 1.4. Let h : R² → R be bounded and measurable. Clearly we may assume without loss of generality that h ≥ 0. We recall from (5.11) and Proposition 5.1 that N_0((L^λ_t × L^{λ′}_t)(h)) is the integral of Γ_{λ,λ′}(s), where h ≥ 0 implies that Γ_{λ,λ′}(s) ≥ 0. Our strategy is to compute the limit of (λλ′)^{2λ_0} Γ_{λ,λ′}(s) as λ, λ′ → ∞ and pass the limit through the integral. However, the scaling we use cannot be done uniformly in s. In order to handle this and the singularity at s = t, we fix ǫ > 0 and analyse the integral over [t − ǫ, t] separately, obtaining the decomposition (5.19). By Lemma 5.4(b), the limit superior of the absolute value of the second term as λ, λ′ → ∞ vanishes as ǫ ↓ 0. If the limit of the first term exists for all ǫ > 0, then by the Cauchy condition lim_{λ,λ′→∞} N_0((L^λ_t × L^{λ′}_t)(h)) exists and is the limit of the above as ǫ ↓ 0. Thus it suffices to fix ǫ > 0 and establish the convergence of, and find the limit of, the first term of (5.19), first as λ, λ′ → ∞ and then as ǫ ↓ 0. By Lemma 5.4(a), for all λ, λ′ > ǫ^{−1/2} the integrand is dominated by a function g(s) ≥ 0 satisfying (5.20), and so by Dominated Convergence it suffices to find the limit of (λλ′)^{2λ_0} Γ_{λ,λ′}(s) as λ, λ′ → ∞.
Let s ∈ (0, t) and assume λ, λ′ > (t − s)^{−1/2}. By Lemma 5.3, (λλ′)^{2λ_0} Γ_{λ,λ′}(s) is given by (5.21), where T_1 = T_1(s) = log(λ²(t − s)) and T_2 = T_2(s) = log(λ′²(t − s)). Inside the integral in the third term we add and subtract F(Y^1_u) and decompose accordingly. We do the same to the fourth term with the obvious changes of indices. The first term in the above is the probability that the Ornstein-Uhlenbeck process Y^1 with killing function F survives until time T_1. We extract a similar term from the symmetric term corresponding to Y^2 and T_2. Weighting the expectation of a functional with this survival probability is equivalent to restricting the expectation to the event that the process survives; in our case, we restrict to the event that Y^1 and Y^2 survive until T_1 and T_2, respectively. Thus (5.21) is equal to (5.22), where ρ^i = ρ^i_F is the lifetime of the killed process Y^i. Recall the transition density q_t(·, ·) (with respect to m) of the killed diffusion. We condition on the endpoints Y^i_{T_i} = z_i (recall from Lemma 2.3(a) that the regular conditional distributions exist for all z_i ∈ R) and integrate against q_{T_i}(·, z_i) dm(z_i) to obtain a further expression for (5.22). The conditional probabilities that appear are the same as those defined in Section 2, in particular in Lemma 2.3. We have used that the terms in the third and fourth lines are independent conditional on the endpoints. Hereafter, Y^1 and Y^2, and their respective laws, refer to killed Ornstein-Uhlenbeck processes with killing function F. Furthermore, from this point on we suppress the conditioning on ρ^i > T_i, as it is implicit in the conditioning Y^i_{T_i} = z_i that ρ^i > T_i.
Θ is the product of the function G and the rescaled transition densities, i.e. λ^{2λ_0} q_{T_1}(B^1_1, z_1) and λ′^{2λ_0} q_{T_2}(B^2_1, z_2). We will show that both of these approach finite limits as λ, λ′ → ∞. First, let us handle the transition densities. By Lemma 2.2, lim_{T_i→∞} e^{λ_0 T_i} q_{T_i}(B^i_1, z_i) = ψ_0(B^i_1) ψ_0(z_i) for i = 1, 2. Using the definitions of T_1 and T_2 (e.g. T_1 = log(λ²(t − s))), we readily obtain from the above that λ^{2λ_0} q_{T_1}(B^1_1, z_1) → (t − s)^{−λ_0} ψ_0(B^1_1) ψ_0(z_1) as λ → ∞, and similarly for the λ′ term. We now compute the limit of G. We begin by focusing on the components of G for which the analysis is most technical, namely the conditional expectations of Z̃^i_{T_i}. We will focus on i = 1, but the analysis carries over to the case i = 2. For now, we replace B^1_1 with a generic point x ∈ R. We will show that the limit in (5.39) holds. The integrand in W_S(Y, z) is negative, which implies that 0 < W_S(Y, z) ≤ 1 for all S > 0, and so W_∞(Y, z) exists and is bounded by 1. Heuristically, the Z_∞ term comes from the early part of the integral in Z̃^i_{T_i}, and the W term comes from the tail part, and these two contributions are asymptotically independent. Since the conditioning time T_i tends to infinity, in the limit the expectations are computed under the law of the process conditioned to survive forever. Because z_1 and z_2 are fixed, we will hereafter suppress the dependence of Z̃^1_{T_1} on them and simply write Z̃^1_{T_1}(Y^1, λ′/λ). Moreover, we will only be analysing Y^1, T_1 and Z̃^1_{T_1}(Y^1, λ′/λ) for the time being, so we simply denote these by Y, T and Z̃_T(Y, λ′/λ).
Let us now proceed more carefully. Let 0 < K < T/2. We apply the Markov property to E^Y_x(Z̃_T(Y, λ′/λ) | Y_T = z_1) at times K and T − K and expand in terms of the joint density of (Y_K, Y_{T−K}). As in (2.11), the joint density of (Y_K, Y_{T−K}) at (w, y) with respect to m × m under P^Y_x(Y ∈ · | Y_T = z_1) is q_K(x, w) q_{T−2K}(w, y) q_K(y, z_1) / q_T(x, z_1).
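The displayed bridge density has total mass one in (w, y) by the Chapman-Kolmogorov identity for q. As an informal numerical check we use the transition density of the unkilled Ornstein-Uhlenbeck process with respect to Lebesgue measure as a stand-in for the killed density q with respect to m (an assumption made purely for illustration; the normalization argument is identical):

```python
import numpy as np

def q(t, x, y):
    # Transition density (w.r.t. Lebesgue) of the unkilled OU process
    # dY = -Y/2 dt + dW:  Y_t | Y_0 = x  ~  N(x e^{-t/2}, 1 - e^{-t}).
    var = 1.0 - np.exp(-t)
    return np.exp(-(y - x * np.exp(-t / 2))**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

w = np.linspace(-12.0, 12.0, 40001)
h = w[1] - w[0]
x, z, K, T = 0.3, 1.1, 1.0, 4.0
# Chapman-Kolmogorov: int q_K(x,w) q_{T-K}(w,z) dw = q_T(x,z)
ck_err = abs(np.sum(q(K, x, w) * q(T - K, w, z)) * h - q(T, x, z))
# Hence the bridge density q_K(x,w) q_{T-K}(w,z) / q_T(x,z) integrates to 1
# in w; iterating the same identity gives mass 1 for the joint density in (w, y).
mass_err = abs(np.sum(q(K, x, w) * q(T - K, w, z) / q(T, x, z)) * h - 1.0)
```

The same two-line computation, applied twice, is exactly what guarantees that conditioning on Y_T and then disintegrating over (Y_K, Y_{T−K}) loses no mass.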
Thus we obtain (5.41), an integral against dm(w) dm(y). Denote the three conditional expectations appearing there by A_1(x, w, λ, λ′, K), A_2(w, y, λ, λ′, K) and A_3(y, z_1, λ, λ′, K), as in (5.42)-(5.44). We observe that A_1, A_2 and A_3 all depend on z_1 and z_2 in addition to their listed arguments, as these values appear in their integrands. As we are for the time being viewing z_1 and z_2 as fixed, we omit this additional dependence. Noting that the integrand is bounded above by F(Y_u) − V^{e^{u/2}}_1(Y_u) in each case, from (3.17) we have A_i ≤ C_Z for i = 1, 2, 3. In terms of the A_i, (5.41) can be rewritten as (5.45), the integral of A_1(x, w, λ, λ′, K) A_2(w, y, λ, λ′, K) A_3(y, z_1, λ, λ′, K) q_K(x, w) q_{T−2K}(w, y) q_K(y, z_1) / q_T(x, z_1) against dm(w) dm(y). There are two main contributions in the A_i. The first comes from F and the first argument of the H function, and is approximately equal to F(Y_u) − V^{e^{u/2}}_1(Y_u); the second comes from the second argument of the H function. We will see that, asymptotically, A_1 is only affected by the first contribution and gives the Z_∞(Y) term in (5.39); A_3 is only affected by the second contribution and gives the W_∞(Y, z_2) term in (5.39). The contribution of A_2 will be seen to be negligible. We first show that A_2 is arbitrarily close to 1 when K is made large, uniformly in T sufficiently large depending on K. Define Z̃^a_T(Y, λ′/λ, K) as Z̃_T(Y, λ′/λ) with A_2 replaced by 1; that is, as in (5.46). As in (5.41) and (5.45), we therefore have the corresponding integral representation, uniformly in T > 2K. Integrating over u shows that the exponent in A_2 is bounded above by C′e^{−(2λ_0−1)K/2} for a constant C′, uniformly in T > 2K. We choose K large enough so that the exponent in A_2 is smaller than 2. Then by (5.45) and (5.46), applying the mean value theorem, we obtain (5.50). Together, (5.48) and (5.50) imply that the absolute value appearing in the integral in (5.49) is bounded above by the sum of two terms. We have already noted that when integrated over u, the first term is bounded by C′e^{−K(2λ_0−1)/2} (uniformly in T).
The first term has no dependence on the spatial parameters w and y, so in (5.49) the transition densities can be integrated out and cancelled with the denominator. We find that for all T > 2K, (5.49) is bounded above by an expression in which the remaining term is integrated against q_K(x, w) q_{T−2K}(w, y) q_K(y, z_1) dm(w) dm(y).
We consider the time-reversed process in the above and apply (2.12), which implies that for all T > 2K, (5.49) is bounded above by (5.51), an integral against q_K(x, w) q_{T−2K}(w, y) q_K(y, z_1) dm(w) dm(y). We recall the asymptotic behaviour of F from (3.13)(iii), i.e. that F(x) ∼ c_1|x|e^{−x²/2} as |x| → ∞. This implies there is a constant c_2 > 0 such that (5.52) holds. In order for this to give a useful upper bound in (5.51), we will need to show that the argument of F is large in absolute value. It is enough to show that |Y_u| ≪ e^{(K+u)/2}|z_2 − z_1| with high probability when conditioned on its endpoint. Recall that we have assumed z_1 ≠ z_2. We bound the integrand over the two cases mentioned above and exchange the integral and expectation, which is justified since F is positive. This yields (5.53), where we have used (5.52) and the fact that F is radially decreasing. A simple substitution gives a bound for the first term. To bound the second term in (5.53) we expand the probability of the large excursion in terms of the transition densities. There are two cases, which we handle in the following lemma. In what follows, s* = s*(1/8) from Theorem 2.1(c).
Lemma 5.5. Let M > 0 and w, y ∈ R. (a) There is a constant C > 0 such that for S, u > 0 satisfying u, S − u ≥ s*, the stated bound holds. Proof. To see (a), we use (2.11) and (2.2) with δ = 1/8 to obtain that for u, S − u ≥ s*, the probability in question is at most C q_S(y, w)^{−1} e^{−λ_0 S} ∫_M^∞ e^{(y² + 2a² + w²)/8} dm(a) ≤ C q_S(y, w)^{−1} e^{−λ_0 S} e^{(y² + w²)/8} e^{−M²/4}/(M ∧ 1), where the last line uses a standard upper bound on Gaussian tails and bounds the integral above by a constant when M is small. The bound for Y_u < −M is the same. The first family in part (b) is tight as a consequence of Lemma 2.3(c). To see that the second family is tight, we consider the time reversal of Y and use (2.12), after which tightness again follows from Lemma 2.3(c).
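The Gaussian tail estimate in this proof can be checked directly. Assuming (for illustration only) that m is the standard Gaussian measure, the a-integral satisfies ∫_M^∞ e^{a²/4} dm(a) ≤ C e^{−M²/4}/(M ∧ 1); the sketch below verifies this numerically with C = 1, a constant chosen here for the check rather than taken from the paper.

```python
import numpy as np

def tail_integral(M, a_max=60.0, n=400001):
    # int_M^inf e^{a^2/4} dm(a) with dm(a) = (2 pi)^{-1/2} e^{-a^2/2} da,
    # so the integrand reduces to (2 pi)^{-1/2} e^{-a^2/4}.
    a = np.linspace(M, a_max, n)
    return np.sum(np.exp(-a**2 / 4)) / np.sqrt(2 * np.pi) * (a[1] - a[0])

# The bound e^{-M^2/4} / (M ^ 1) captures both regimes: the Gaussian tail
# rate for large M, and a bounded constant for small M.
ok = all(
    tail_integral(M) <= np.exp(-M**2 / 4) / min(M, 1.0)
    for M in (0.1, 0.5, 1.0, 2.0, 4.0)
)
```

The factor (M ∧ 1)^{−1} is what reconciles the usual tail bound ∫_M^∞ e^{−a²/4} da ≤ (2/M) e^{−M²/4} with the trivial constant bound near M = 0.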
Lemma 5.6. Let x, z_1, z_2 ∈ R be such that z_1 ≠ z_2, and let K > 0. Given this lemma, it suffices to find the limit of (the conditional expectation of) Z̃^a_T(Y, λ′/λ, K), in which A_2 has been replaced by 1.
We now analyse A*_3 in greater detail. In particular, we perform a time reversal on the process Y, using (2.12). This is the term which, in (5.39), we claimed converges in the limit to E^{Y,∞}_{z_1}(W_∞(Y, z_2)), defined in (5.40). However, the above expectation is still conditioned on the endpoint. We now show that the contribution from the tail of the integral vanishes, making the quantity asymptotically independent of the endpoint y. Let 0 < M < K. We consider the integral of |A_3(y, z_1, λ, λ′, K) − A*_3(y, z_1, M, K)| against q_{T−K}(x, y) q_K(y, z_1) dm(y).
Using (5.66) as an upper bound for the integrand, we obtain an expression which closely resembles (5.57); in particular, four terms appear, corresponding directly to δ_2, δ_3, δ_4 and δ_5 of that expression. Moreover, they can be handled by exactly the same arguments as in the proof of Lemma 5.6, but with (M, K) playing the roles of (K, T). Because the arguments are the same, we omit them.
We now comment on the limit of A*_3(w, M, K) as K → ∞. Note that the second term is now deterministic; it no longer depends on the original Ornstein-Uhlenbeck process Y or on the spatial variable y. We then have an expression of the form ∫ A_1(x, w, λ, λ′, K) q_K(x, w) q_{T−K}(w, z_1) / q_T(x, z_1) dm(w) × E^{Y,∞}_{z_1}(W_M(Y, z_2)).
Bounding A_1 ≤ C_Z and integrating out the transition densities, by (5.67) we obtain the following for each fixed M > 0.
From our starting expression for Z̃_T(Y, λ′/λ) in (5.45), all that remains to be handled in Z̃^d_T(Y, λ′/λ, M, K) is the A_1 term, whose definition we recall from (5.42). The dominant contribution to the integral in A_1 resembles F(Y_u) − V^{e^{u/2}}_1(Y_u). By Proposition 3.1 we have upper and lower bounds for the difference between the integrand and this term, for each fixed 0 < M < K.

Lemma 2.3. (a) Let x ∈ R and T > 0. For all z ∈ R, the finite-dimensional distributions described in (2.11), with initial value x, have a unique extension to C([0, T], R). The resulting laws P^Y_x(· | ρ > T, Y_T = z) are continuous in z and define a regular conditional probability for Y|_{[0,T]} under P^Y_x conditioned on Y_T. (b) Let x, z ∈ R and S > 0 be fixed. Then P

Recall that E^{Y,∞}_x is the expectation under the law of the killed process Y with Y_0 = x conditioned to survive for all time, as defined in Theorem 2.1(e). Z_T(Y) is as defined in (3.15), and we recall from (3.16) that Z_∞(Y) = lim_{T→∞} Z_T(Y) exists and is bounded by C_Z. W_S(Y, z) is defined by (5.40): W_S(Y, z) = exp( ∫_0^S [F(Y_u) − F_2(Y_u, Y_u − e^{u/2}(z − Y_0))] du ).

dm(w) dm(y), (5.49) uniformly for all T > 2K, where we have also used A_1 A_3 ≤ C²_Z. The term in the absolute value inside the integral can be positive or negative; (5.48) provides an upper bound for F − H^{λ′/λ}_{e^{K+u}}. To obtain a lower bound, we note that H^{λ′/λ}_{e^{K+u}}(a, b) ≤ F_2(a, b) ≤ F(a) + F(b) by Proposition 3.1 (using part (a) and then part (b)). Using this bound implies (5.50).