Particle representations for stochastic partial differential equations with boundary conditions

In this article, we study a weighted particle representation for a class of stochastic partial differential equations with Dirichlet boundary conditions. The locations and weights of the particles satisfy an infinite system of stochastic differential equations (SDEs). The locations evolve according to an infinite system of SDEs with reflecting boundary condition, driven by independent finite-dimensional Brownian motions. The weights evolve according to an infinite system of SDEs driven by a common cylindrical noise $W$ and interact through $V$, the associated weighted empirical measure. When a particle hits the boundary, its weight is assigned a pre-specified value. We show the existence and uniqueness of a solution of the infinite-dimensional system of SDEs modeling the locations and the weights of the particles. We also prove that the associated weighted empirical measure $V$ is the unique solution of a nonlinear stochastic partial differential equation driven by $W$ with Dirichlet boundary condition. The work is motivated by and applied to the stochastic Allen-Cahn equation and extends earlier work of Kurtz and Xiong.


Introduction
In the following, we study particle representations for a class of nonlinear stochastic partial differential equations that includes the stochastic version of the Allen-Cahn equation [1,2] and that of the equation governing the stochastic quantisation of the $\Phi^4_d$ Euclidean quantum field theory with quartic interaction [14], that is, an equation of the form
$$\partial_t v = \tfrac12\Delta v + v\,G(v,\cdot) + \dot W, \tag{1.1}$$
where $G$ is a (possibly) nonlinear function and $W$ is a space-time noise. These particle representations lead naturally to the solution of a weak version of a stochastic partial differential equation similar to (1.1). See equation (1.11) below and Section 3. The approach taken here has its roots in the study of the McKean-Vlasov problem and its stochastic perturbation. In its simplest form, the problem begins with a finite system of stochastic differential equations
$$X^n_i(t) = X^n_i(0) + \int_0^t \sigma(X^n_i(s), V^n(s))\,dB_i(s) + \int_0^t c(X^n_i(s), V^n(s))\,ds, \qquad 1 \le i \le n, \tag{1.2}$$
where the $X^n_i$ take values in $\mathbb{R}^d$, $V^n(t)$ is the empirical measure $\frac1n \sum_{i=1}^n \delta_{X^n_i(t)}$, and the $B_i$ are independent, standard Brownian motions in an appropriate Euclidean space. The goal is to prove that the sequence of empirical measures $V^n$ converges in distribution and to characterize the limit $V$ as a measure-valued process which solves the following nonlinear partial differential equation, written in weak form,
$$\langle\varphi, V(t)\rangle = \langle\varphi, V(0)\rangle + \int_0^t \langle L(V(s))\varphi, V(s)\rangle\,ds, \tag{1.3}$$
where $\varphi : \mathbb{R}^d \to \mathbb{R}$ belongs to a suitably chosen class of Borel measurable functions and $\langle\varphi, V(t)\rangle = \int \varphi\,dV(t) = \int \varphi(u)\,V(t, du)$.
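The finite system (1.2) can be illustrated numerically. The following is a minimal Euler-Maruyama sketch; the coefficients $\sigma \equiv 1$ and $c(x, \nu) = \int y\,\nu(dy) - x$ (mean reversion toward the empirical mean) are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_mckean_vlasov(n=200, T=1.0, dt=0.01, seed=0):
    """Euler-Maruyama sketch of the finite system (1.2).

    Illustrative coefficients (not from the paper): sigma = 1 and
    c(x, nu) = <y, nu> - x, i.e. mean reversion toward the empirical mean."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                    # X_i^n(0)
    for _ in range(int(T / dt)):
        drift = x.mean() - x                      # c(X_i^n(s), V^n(s))
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(n)
    return x

x = simulate_mckean_vlasov()
```

The empirical measure of the returned sample plays the role of $V^n(T)$.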
In (1.3), we use $a(x,\nu) = \sigma(x,\nu)\sigma(x,\nu)^T$, and $L(\nu)$ is the differential operator
$$L(\nu)\varphi(x) = \frac12 \sum_{k,l} a_{kl}(x,\nu)\,\partial_{x_k}\partial_{x_l}\varphi(x) + c(x,\nu)\cdot\nabla\varphi(x).$$
There are many approaches to this problem [5,11,13]. (See also the recent book [6].) The approach in which we are interested, introduced in [8] and developed further in [3,7,9], is simply to let the limit be given by the infinite system
$$X_i(t) = X_i(0) + \int_0^t \sigma(X_i(s), V(s))\,dB_i(s) + \int_0^t c(X_i(s), V(s))\,ds, \qquad i \ge 1. \tag{1.4}$$
To make sense out of this system (in particular, the relationship of $V$ to the $X_i$), note that we can assume, without loss of generality, that the finite system $\{X^n_i\}$ is exchangeable (randomly permute the indices $i$), so if one shows relative compactness of the sequence, any limit point will be an infinite exchangeable sequence, and we can require $V(t)$ to be the de Finetti measure for the sequence $\{X_i(t)\}$. (See Lemma 4.4 of [7].) Note that while the $X^n_i$ give a particle approximation of the solution of (1.3), the $X_i$ give a particle representation of the solution, that is, the de Finetti measure of $\{X_i(t)\}$ is the desired $V(t)$.
With the results of [9] in mind, we are interested in stochastic partial differential equations whose solutions $V$ are measures or perhaps signed measures. Our approach will be to represent the state $V(t)$ in terms of a sequence of weighted particles $\{(X_i(t), A_i(t))\}$, where $X_i(t)$ denotes the location of the $i$th particle at time $t$ in a state space $D$ and $A_i(t) \in \mathbb{R}$ the weight. The sequence is required to be exchangeable, so if, for example, $\varphi$ is a bounded measurable function and $E[|A_i(t)|] < \infty$, we have
$$\langle\varphi, V(t)\rangle = \lim_{n\to\infty} \frac1n \sum_{i=1}^n A_i(t)\varphi(X_i(t)). \tag{1.5}$$
If the $A_i(t)$ are nonnegative, then $V(t)$ will be a measure, but we do not rule out the possibility that the $A_i(t)$ can be negative and $V(t)$ a signed measure. The weights and locations will be solutions of an infinite system of stochastic differential equations that are coupled only through $V$ and common noise terms. We emphasize that we are talking about a representation of the solution of the equation, not a limit or approximation theorem (although these representations can be used to prove limit theorems). To specify the representation another way, let $\Xi(t)$ be the de Finetti measure for the exchangeable sequence $\{(A_i(t), X_i(t))\}$. Then $\langle\varphi, V(t)\rangle = \int a\,\varphi(x)\,\Xi(t, da, dx)$.
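The pairing (1.5) is simply an average of weighted point evaluations, which the following minimal sketch implements; the test function and data are hypothetical.

```python
import numpy as np

def weighted_pairing(phi, locations, weights):
    """Approximate <phi, V(t)> by the average (1/n) sum_i A_i phi(X_i),
    as in (1.5); negative weights are allowed, making V a signed measure."""
    locations = np.asarray(locations, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.mean(weights * phi(locations)))

# toy exchangeable sample: unit weights recover the plain empirical measure
x = np.linspace(0.0, 1.0, 5)
assert weighted_pairing(lambda u: u, x, np.ones(5)) == np.mean(x)
```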
The models in the current paper differ from those in [9] in two primary ways. First, the X i will be independent, stationary diffusion processes defined on a domain D ⊂ R d with reflecting boundary.
In the current work, we take the $X_i$ to satisfy the Skorohod equation
$$X_i(t) = X_i(0) + \int_0^t \sigma(X_i(s))\,dB_i(s) + \int_0^t b(X_i(s))\,ds + \int_0^t \eta(X_i(s))\,dL_i(s), \tag{1.6}$$
where $\eta(x)$ is a vector field defined on the boundary $\partial D$ and $L_i$ is a local time on $\partial D$ for $X_i$, that is, $L_i$ is a nondecreasing process that increases only when $X_i$ is in $\partial D$. Then, under appropriate regularity conditions and conditions on the coefficients, $X_i$ is a diffusion process whose infinitesimal generator is the closure of the second order differential operator
$$L\varphi(x) = \frac12 \sum_{k,l} a_{kl}(x)\,\partial_{x_k}\partial_{x_l}\varphi(x) + b(x)\cdot\nabla\varphi(x), \qquad a(x) = \sigma(x)\sigma(x)^T, \tag{1.7}$$
where $\sigma^T$ is the transpose of $\sigma$. See, for example, Theorem 8.1.5 in [4]. (In the notation of that theorem, $\eta(x) = -c(x)$.) Much of what we say will hold under more general conditions on the process and domain. We will always assume that $A$ is strictly elliptic and has a stationary distribution denoted by $\pi$, and that the $X_i$ are independent, stationary solutions of the martingale problem for $A$.
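In one dimension with $D = [0,\infty)$ and normal reflection, the Skorohod problem in (1.6) has the explicit solution $x = w + l$ with $l(t) = 0 \vee (-\min_{s \le t} w(s))$. The following sketch implements this discrete Skorohod map; it is an illustration only, not the general multidimensional construction.

```python
import numpy as np

def reflect_skorokhod(w):
    """Discrete Skorohod map on D = [0, inf): given a path w with w[0] >= 0,
    return (x, l) with x = w + l >= 0, where l is nondecreasing and increases
    only when x = 0; l plays the role of the local time L_i in (1.6)."""
    l = np.maximum(0.0, -np.minimum.accumulate(w))
    return w + l, l

rng = np.random.default_rng(1)
increments = 0.1 * rng.standard_normal(500)
w = np.concatenate(([0.5], 0.5 + np.cumsum(increments)))  # random walk path
x, l = reflect_skorokhod(w)
```

The reflected path stays in $D$, and $l$ increases only at the times when $x$ sits on the boundary.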
One immediate consequence of this assumption is that $V(t)$ will be absolutely continuous with respect to $\pi$, that is, we can write $V(t, du) = v(t, u)\,\pi(du)$. The second primary difference from the models in [9] is that we will place boundary conditions on the solution. Essentially, we will require that
$$v(t, u) = g(u), \qquad u \in \partial D,\ t > 0. \tag{1.9}$$
We assume that $g$ is continuous, although most of our results should also hold for piecewise continuous $g$.
To give a precise sense in which this boundary condition holds, let $\bar g : D \cup \partial D \to \mathbb{R}$ be a continuous function such that $\bar g|_{\partial D} = g$, and for a nonempty compact $K \subset \partial D$ and $\epsilon > 0$, define $\partial_\epsilon(K) = \{u \in D : d(u, K) < \epsilon\}$. Since $A$ is strictly elliptic, $\mathrm{supp}(\pi) = D$ and $\pi(\partial_\epsilon(K)) > 0$. Then, under regularity conditions on the time-reversed process, for each nonempty, compact $K \subset \partial D$,
$$\lim_{\epsilon \to 0} \frac{1}{\pi(\partial_\epsilon(K))} \int_{\partial_\epsilon(K)} E\big[\,|v(t, u) - \bar g(u)|\,\big]\,\pi(du) = 0. \tag{1.10}$$
See Section 2.4 for details.
In the same vein as equation (1.3), we will consider a class of nonlinear SPDEs written in weak form,
$$\langle\varphi, V(t)\rangle = \langle\varphi, V(0)\rangle + \int_0^t \langle A\varphi, V(s)\rangle\,ds + \int_0^t \langle \varphi\,G(v(s,\cdot),\cdot), V(s)\rangle\,ds + \int_{U\times[0,t]}\int_D \varphi(x)\rho(x,u)\,\pi(dx)\,W(du\times ds). \tag{1.11}$$
To obtain, for example, the stochastic Allen-Cahn equation (1.1), we can choose $A = \frac12\Delta$, the generator of normally reflecting Brownian motion, $b = 0$, and $G(v, x) = 1 - v^2$ for all $v \in \mathbb{R}$ and $x \in D$. In this case, $\pi$ is normalized Lebesgue measure on $D$ and, as above, $V$ is a signed-measure-valued process, $v(t)$ is the density of $V(t)$ with respect to $\pi$, and $v$ satisfies (1.10). In equation (1.11), the test functions $\varphi$ are chosen to belong to $C^2_c(D)$, the twice continuously differentiable functions with compact support in the interior of $D$. While the set of test functions $C^2_c(D)$ is included in the domain $\mathcal{D}(A)$ of $A$, it is not sufficient to ensure uniqueness of solutions of (1.11). Extensions of (1.11) to larger classes of test functions are given in Section 3.
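For the Allen-Cahn example, a crude finite-difference Euler sketch on $[0,1]$ with zero-flux (reflecting) boundary conditions can illustrate the dynamics $dv = (\frac12 v_{xx} + v(1-v^2))\,dt + dW$; the grid, step sizes, and noise intensity below are arbitrary illustrative choices, and no claim of convergence to the SPDE solution is made.

```python
import numpy as np

def allen_cahn_step(v, dx, dt, noise):
    """One explicit Euler step for dv = (0.5 v_xx + v(1 - v^2)) dt + dW on
    [0, 1] with zero-flux (reflecting) boundary conditions. A numerical
    sketch only."""
    lap = np.empty_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2.0 * (v[1] - v[0]) / dx**2       # Neumann boundary at x = 0
    lap[-1] = 2.0 * (v[-2] - v[-1]) / dx**2    # Neumann boundary at x = 1
    return v + (0.5 * lap + v * (1.0 - v**2)) * dt + noise

rng = np.random.default_rng(2)
m, dx, dt = 51, 0.02, 1e-4
v = 0.1 * rng.standard_normal(m)               # rough initial condition
for _ in range(200):
    v = allen_cahn_step(v, dx, dt, 0.1 * np.sqrt(dt / dx) * rng.standard_normal(m))
```

The explicit scheme is stable here because $\frac12\,dt/dx^2 = 0.125 < \frac12$.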
Throughout, we will assume that $U$ is a complete separable metric space, $\mu$ is a $\sigma$-finite Borel measure on $U$, $\ell$ is Lebesgue measure on $[0,\infty)$, and $W$ is Gaussian white noise on $U \times [0,\infty)$ with covariance measure $\mu \times \ell$. Written formally, the equation we have in mind is
$$dv(t,x) = \big(A^* v(t,x) + v(t,x)\,G(v(t,x), x)\big)\,dt + \int_U \rho(x, u)\,W(du, dt), \tag{1.12}$$
where $A^*$ is the formal adjoint of the operator $A$. For this equation, assuming that $D$ is sufficiently smooth and $v(0, x) = h(x)$, we can take the locations $X_i$ to satisfy the Skorohod equation (1.6) and, setting $\tau_i(t) = 0 \vee \sup\{s \le t : X_i(s) \in \partial D\}$, take the weights to satisfy
$$A_i(t) = h(X_i(0))1_{\{\tau_i(t)=0\}} + g(X_i(\tau_i(t)))1_{\{\tau_i(t)>0\}} + \int_{\tau_i(t)}^t A_i(s)\,G(v(s, X_i(s)), X_i(s))\,ds + \int_{U\times(\tau_i(t), t]} \rho(X_i(s), u)\,W(du\times ds). \tag{1.13}$$
To see that these weights should give the desired representation, let $\varphi$ be in $C^2_c(D)$ and apply Itô's formula to $A_i(t)\varphi(X_i(t))$. Since $\varphi$ vanishes in a neighborhood of $\partial D$, the next-to-last term is a martingale, and these martingales are orthogonal for different values of $i$. Assuming exchangeability, which will follow from the exchangeability (that is, the independence) of the $X_i$ provided we can show uniqueness for the system (1.13), averaging gives (1.11). The following is the first main result of the paper:

Theorem 1 Under certain assumptions (see Condition 2.1 below), there exists a unique solution $((A_i)_{i\ge 1}, V)$ of the system (1.13), coupled with the requirement that $V(t)$ be the weighted empirical measure of $\{(A_i(t), X_i(t))\}$. Moreover, $V$ satisfies equation (1.11) for every $\varphi \in C^2_c(D)$, as well as the boundary condition (1.10).
Let $\mathcal{L}(\pi)$ be the space of processes $v$, compatible with $W$ and taking values in $L^1(\pi)$, such that for each $T > 0$ there exists $\varepsilon_T > 0$ with
$$\sup_{t \le T} E\Big[\int_D e^{\varepsilon_T |v(t,x)|^2}\,\pi(dx)\Big] < \infty.$$
Theorem 1 tells us that there exists a (weak) solution of equation (1.12). Moreover, it tells us that there exists a measure-valued process $V$ that satisfies (1.11) for every $\varphi \in C^2_c(D)$. In general, uniqueness will not hold for (1.11) using test functions $\varphi \in C^2_c(D)$. Consequently, to obtain an equation that has a unique solution, we must enlarge the class of test functions. We do that in two different ways: first, by taking the test functions to be $C^2_0(D)$, the space of twice continuously differentiable functions that vanish on the boundary (Section 3.1); second, by taking the test functions to be $\mathcal{D}(A)$, the domain of the generator $A$ (Section 3.2). We summarize the second main result of the paper as follows:

Theorem 2 There exist extensions of the stochastic partial differential equation (1.11) with a unique solution in $\mathcal{L}(\pi)$, given by the above particle representation.

Particle representation
With the example in the previous section in mind, let $\{X_i\}$ be independent, stationary reflecting diffusions in $D$ with generator $A$. For our purposes in this section, it is enough to define the generator to be the collection of pairs of bounded measurable functions $(\varphi, A\varphi)$ such that
$$\varphi(X_i(t)) - \varphi(X_i(0)) - \int_0^t A\varphi(X_i(s))\,ds$$
is an $\{\mathcal{F}^{X_i}_t\}$-martingale, although in Section 3 we will need to define $A$ more precisely as the generator of the semigroup of operators corresponding to the $X_i$. Note that for $X_i$ satisfying (1.6), the domain $\mathcal{D}(A)$ will contain the class $\mathcal{D}$ given in (1.8), with $A\varphi$ given by (1.7).
Let $\tau_i(t) = 0 \vee \sup\{s < t : X_i(s) \in \partial D\}$, and consider the system
$$A_i(t) = h(X_i(0))1_{\{\tau_i(t)=0\}} + g(X_i(\tau_i(t)))1_{\{\tau_i(t)>0\}} + \int_{\tau_i(t)}^t A_i(s)\,G(v(s, X_i(s)), X_i(s))\,ds + \int_{U\times(\tau_i(t), t]} \rho(X_i(s), u)\,W(du\times ds), \tag{2.2}$$
coupled with the requirement that $V$ be the weighted empirical measure
$$\langle \varphi, V(t)\rangle = \lim_{n\to\infty} \frac1n \sum_{i=1}^n A_i(t)\varphi(X_i(t)), \tag{2.3}$$
and, for $\varphi \in C^2_c(D)$, let
$$M_{\varphi,i}(t) = \varphi(X_i(t)) - \varphi(X_i(0)) - \int_0^t A\varphi(X_i(s))\,ds.$$
Then, assuming that a solution $(X_i, A_i)$ exists, we have, for $\varphi \in C^2_c(D)$, the weak equation (1.11). The system of SDEs (2.2) must be considered in conjunction with (2.3), where the empirical distribution $V$ is required to have a density $v(t, \cdot)$ with respect to $\pi$. It is by no means clear that a solution satisfying all these constraints exists. We assume the following:

Condition 2.1
1. $g$ and $h$ are bounded, with sup norms $\|g\|$ and $\|h\|$.
Observe that Condition 2.1.4 does not imply that $G$ has a lower bound, only an upper bound. For example, $G(v, x) = 1 - v^2$ gives the classical Allen-Cahn equation.

Theorem 2.2 Assume Conditions 2.1.1-2.1.6. Then the solution of (2.2)-(2.3) exists and is unique.
Proof. Uniqueness is proved in Section 2.2, existence in Section 2.3.

Preliminary results
First we explore the properties that a solution must have by replacing $v$ by an arbitrary, measurable $L^1(\pi)$-valued stochastic process $U$ that is independent of $\{X_i\}$ and compatible with $W$. Define $A^U_i$ to be the solution of
$$A^U_i(t) = h(X_i(0))1_{\{\tau_i(t)=0\}} + g(X_i(\tau_i(t)))1_{\{\tau_i(t)>0\}} + \int_{\tau_i(t)}^t A^U_i(s)\,G(U(s, X_i(s)), X_i(s))\,ds + \int_{U\times(\tau_i(t), t]} \rho(X_i(s), u)\,W(du\times ds). \tag{2.5}$$
Existence and uniqueness of the solution of (2.5) hold under modest assumptions on the coefficients, in particular, under Condition 2.1.
Then H i is a martingale with respect to the filtration For all A U i defined as in (2.5) and K 1 , K 2 , and K 3 as above, For each T > 0, there exists ε T such that so we have a similar bound on A − i . Together the bounds give the first two inequalities in (2.6).
Note that H i is a continuous martingale with quadratic variation The first inequality in (2.7) follows by the monotonicity of Γ i and the finiteness by standard estimates on the distribution of the supremum of Brownian motion.
The $\{A^U_i\}$ will be exchangeable, so we can define $\Phi U(t, x)$ to be the density with respect to $\pi$ of the signed measure determined by
$$\langle\varphi, V^U(t)\rangle = \lim_{n\to\infty} \frac1n \sum_{i=1}^n A^U_i(t)\varphi(X_i(t)). \tag{2.8}$$
Proof. By exchangeability,
$$\langle\varphi, V^U(t)\rangle = E[A^U_i(t)\varphi(X_i(t))\,|\,W] = \int_D E[A^U_i(t)\,|\,X_i(t) = x,\, W]\,\varphi(x)\,\pi(dx),$$
where the second equality follows by the independence of $X_i(t)$ and $W$. The lemma then follows by the definition of conditional expectation.
Then there exists a version of ΦU(t, x) such that where we interpret the right side as the optional projection, and for this version Moreover the identity (2.9) holds with t replaced by any nonnegative σ(W )-measurable random variable τ .
Proof. The first equality in (2.10) is just (2.9), and the second follows from the fact that X i is Markov.
Corollary A.2, the properties of reverse martingales, and Doob's inequality give (2.11), and the last statement follows by the definition of the optional projection.

Lemma 2.6
For all T > 0 and p ≥ 1, there exists a constant C p,T so that for all t ≤ T Proof. Recall that we have the bound and that Notice that Jensen's inequality gives S was arbitrary, so the result follows.
Recalling that

Uniqueness
Then there exists a constant L 3 > 0 such that Suppose U k = ΦU k , k = 1, 2, that is, we have two solutions. Then conditioning both sides of the above inequality on W and observing we have Taking expectations of both sides and applying Hölder's inequality, The same argument and induction extends uniqueness to any interval.

Existence by iteration
The estimates of the previous section can be applied to give convergence of an iterative sequence, proving existence. For an exchangeable family $\{A^n_i\}$ obtained by iteration, the estimates of the previous section show that, for $n, m \ge 1$ and, as in the previous section, for $t < \frac{\varepsilon_T}{3(L_1 + 2L_2)}$, the expectation of the right side goes to zero as $C$ goes to infinity. Consequently, the sequence $\{A^n_i\}$ converges, and its limit gives the desired solution.
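The existence argument is a Picard iteration $U_{n+1} = \Phi U_n$ on a time interval short enough for the map to be contractive. The following toy sketch isolates that scheme; the map used is a stand-in contraction, not the operator $\Phi$ of Section 2.

```python
import numpy as np

def picard(phi_map, u0, tol=1e-10, max_iter=200):
    """Iterate U_{n+1} = Phi(U_n) until successive iterates agree; this is
    the scheme behind the existence proof, with phi_map standing in for the
    (locally contractive) map U -> Phi U of Section 2."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        u_next = phi_map(u)
        if np.max(np.abs(u_next - u)) < tol:
            return u_next
        u = u_next
    return u

# toy contraction with unique fixed point u* = 0.5 on a 10-point grid
u_star = picard(lambda u: 0.5 * u + 0.25, np.zeros(10))
```

For a contraction with ratio $1/2$, the error halves at each step, so convergence to the fixed point is geometric.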

Boundary behavior
In this section, we present two senses in which the particle representation satisfies the boundary condition (1.9). These results depend on the boundary regularity of the stationary diffusions {X i } run forward or backward in time. We begin with the result coming from regularity of the forward process, which will lead to a weak formulation of the stochastic PDE including the boundary condition in (3.1) below.
Introduce the notation α X t = inf{s > t : X(s) ∈ ∂D} and let L be the second order differential operator in (1.7). The first result depends on the following mild regularity assumptions.
Condition 2.8 Let X satisfy (1.6) with local time L. Assume that X is regular in the sense that: 1. X satisfies the strong Markov property.
Recall that we have assumed that $L$ is uniformly elliptic. If $a_{k,l}$ is continuous, $\|a_{k,l}\|_\infty, \|c_k\|_\infty < \infty$, and $\inf_{x\in\partial D} \nabla\phi(x)\cdot\eta(x) > 0$, then Condition 2.8 holds. See for example [ Suppose that Condition 2.8 holds. By stationarity, for $t \in \mathbb{R}_+$, the process $X_i(t+\cdot)$ has the same distribution as $X_i(\cdot)$. For bounded and continuous $\varphi$, define
$$Q(t, \varphi) = E\Big[\int_0^t \varphi(X_i(s))\,dL_i(s)\Big],$$
which, by the above discussion, satisfies $Q(t+s, \varphi) = Q(t, \varphi) + Q(s, \varphi)$ and $|Q(t, \varphi)| \le tC\|\varphi\|_\infty$ for some constant $C$. Therefore, since $Q$ is bounded and additive in its first coordinate, there exists a constant $C_\varphi$ so that $Q(t, \varphi) = tC_\varphi$. Since $Q$ is also linear in its second coordinate, it then follows from the Riesz representation theorem that there exists a measure $\beta$ on $\partial D$ which satisfies
$$Q(t, \varphi) = t\int_{\partial D} \varphi\,d\beta. \tag{2.13}$$
By considering test functions of product form which are step functions in time, we can see that for sufficiently regular space-time functions $\varphi$, we have
$$E\Big[\int_0^t \varphi(X_i(s), s)\,dL_i(s)\Big] = \int_0^t \int_{\partial D} \varphi(x, s)\,\beta(dx)\,ds.$$
Denote partial derivatives with respect to the time variable by $\partial$. We also have the following relation between $\pi$ and $\beta$. We have the following lemma.

Proof. Local time is a continuous measure supported on the set $\{t \ge 0 : X_i(t) \in \partial D\}$ and therefore assigns measure zero to the (countable) set of left-isolated points of $\{t \ge 0 : X_i(t) \in \partial D\}$. By (2.13) and Lemma 2.11, we have the following theorem.
Theorem 2.12 Suppose that Condition 2.8 is satisfied. Almost surely, for dL i almost every t, A i (t) = A i (t−) = g(X i (t)) and therefore In the next section, we show how Theorem 2.12 leads to a weak formulation of the stochastic partial differential equation with a broader class of test functions than those considered above. We now turn to the form of the boundary condition mentioned in the introduction at (1.10), which depends on regularity of the time-reversed process.
For each t and s ≤ t, define the time reversal of X i by X * i,t (s) = X i (t − s). For notational convenience, when it is clear from context what the value of t is, we will suppress the subscript and take the convention that X * i,t (s) ≡ X * i (s). Since X i is stationary, the time reversal X * i is a Markov process whose generator A * satisfies

Remark 2.13
If D is sufficiently smooth and A = ∆ with normally reflecting boundary conditions, then A * = A and π is normalized Lebesgue measure.
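The passage from $A$ to the adjoint $A^*$ under $\pi$ has a transparent discrete analogue: the time reversal of a stationary Markov chain with transition matrix $p$ and stationary law $\pi$ has kernel $p^*(x, y) = \pi(y)p(y, x)/\pi(x)$. A sketch, using a hypothetical three-state chain:

```python
import numpy as np

def reverse_kernel(p, pi):
    """Transition kernel of the time-reversed stationary chain:
    p*(x, y) = pi(y) p(y, x) / pi(x), the discrete analogue of passing
    from the generator A to its adjoint A* with respect to pi."""
    return (pi[None, :] * p.T) / pi[:, None]

# hypothetical 3-state chain
p = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
w, vecs = np.linalg.eig(p.T)                  # left eigenvectors of p
pi = np.real(vecs[:, np.argmax(np.real(w))])
pi = pi / pi.sum()                            # stationary law, pi P = pi
pstar = reverse_kernel(p, pi)
```

If the chain is reversible (detailed balance holds), then $p^* = p$, mirroring the remark that $A^* = A$ for normally reflecting Brownian motion.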
Define the hitting time of the boundary for the reversed process by σ i = inf{s : X * i (s) ∈ ∂D}, so if the reversal is from time t, σ i = t − τ i (t). Showing that (1.10) is satisfied will depend on the following condition.
Proof. By Lemma 2.9 and Jensen's inequality, we have Recalling the definition of A U i (t), we have ρ(X i (s), u)W (du, ds) .
Next, observe that, by compactness, there exist $x_0 \in K$ and $x_n \to x_0$ attaining the lim sup. Continuity of $g$ and (2.15) imply that the limit is zero. A similar argument and (2.14) show that $P(\tau_i(t) = 0\,|\,X_i(t) \in \partial_\epsilon K)$ tends to zero, so that the conditional expectations of the first two terms on the right of (2.16) go to zero. By the Cauchy-Schwarz inequality, the second factor in this inequality is uniformly bounded by Lemma 2.6, and the first tends to zero by (2.14). Notice that $W$ remains white noise when conditioned on $X_i$. Using Itô's isometry and the fact that the $L^2$ norm dominates the $L^1$ norm, we conclude, as above, that the last expression tends to zero by (2.14).

Weak equations with boundary terms
At least in general, uniqueness will not hold for the weak-form stochastic partial differential equation (1.11) using test functions with compact support in D o , the interior of D. (For example, consider X i reflecting Brownian motion with differing directions of reflection but whose stationary distribution is still normalized Lebesgue measure.) Consequently, to obtain an equation that has a unique solution, we must enlarge the class of test functions. We do that in two different ways obtaining two formally different weak-form stochastic differential equations; however, for both equations, we give conditions under which the process given by the particle representation constructed above is the unique solution. The first extension, which gives the simplest form for the stochastic partial differential equation, is obtained by taking the test functions to be C 2 0 (D), the space of twice continuously differentiable functions that vanish on the boundary. The second extension is obtained by taking the test functions to be D(A), the domain of the generator of the semigroup corresponding to the reflecting diffusion giving the particle locations.

Weak equation based on
In this section, we apply the results of Section 2.4 to enlarge the class of test functions in the definition of equation (1.11) to all $\varphi$ in $C^2_0(D)$, the class of $C^2$-functions that vanish on $\partial D$. More precisely, we consider test functions of the form $\varphi(x, s)$ which are twice continuously differentiable in $x$, continuously differentiable in $s$, and vanish on $\partial D \times [0, \infty)$. To simplify notation, extend $g$ to all of $D$ by setting $g(x) = h(x)$ for $x \in D^o$. We assume that $g$ is continuous on the boundary, but none of the calculations below require this extension to be continuous. The equation becomes
$$\begin{aligned} \langle\varphi(\cdot,t), V(t)\rangle = {} & \langle\varphi(\cdot,0), V(0)\rangle + \int_0^t \langle\varphi(\cdot,s)G(v(s,\cdot),\cdot), V(s)\rangle\,ds \\ & + \int_{U\times[0,t]}\int_D \varphi(x,s)\rho(x,u)\,\pi(dx)\,W(du\times ds) \\ & + \int_0^t \langle L\varphi(\cdot,s) + \partial\varphi(\cdot,s), V(s)\rangle\,ds + \int_0^t \langle g\,\nabla\varphi(\cdot,s)\cdot\eta, \beta\rangle\,ds, \end{aligned} \tag{3.1}$$
where $\pi$ is the stationary distribution for the particle location process and $\beta$ is the measure defined in Section 2.4. With (2.12) in mind, let $\mathcal{L}(\pi)$ be the collection of $L^1(\pi)$-valued processes $v$ compatible with $W$ such that for each $T > 0$ there exists $\varepsilon_T > 0$ with
$$\sup_{t \le T} E\Big[\int_D e^{\varepsilon_T |v(t,x)|^2}\,\pi(dx)\Big] < \infty.$$
Remark 3.2 It would be enough to know that there is a core $C_0$ for $A_0$ such that all $\varphi \in C_0$ are continuously differentiable.

Derivation of (3.1)
Define Then and note that Y i is a semimartingale. Since A i , or more precisely, Z i • τ i , does not appear to be a semimartingale, we derive a version of Itô's formula for A i ϕ • X i from scratch. Specifically, we consider the limit of the telescoping sum as the mesh size of the partition {t k , 0 ≤ k ≤ m} goes to zero. Since X i is a semimartingale, ϕ • X i is a semimartingale and the second term on the right converges to the usual semimartingale integral. Since (3.4) is an identity, the second sum must also converge. To distinguish limits of expressions of this form from the usual semimartingale integral, we will write Observing that the covariation of ϕ • X i and Y i is zero, applying semimartingale integral results, we get limits for everything but Note that the summands are zero unless Breaking this expression into pieces, we have and it is clear that the last three sums on the right converge to zero. It is not immediately clear that the first sum converges to zero, but it does.
Let $\gamma_i(t) = \inf\{s \ge t : X_i(s) \in \partial D\}$, and for each $n$, define $U_n(s) = g(X_i(\gamma_i(\frac{k}{n})))$ for $s \in [\gamma_i(\frac{k}{n}), \gamma_i(\frac{k+1}{n}))$. If $s > \tau_i(s)$, then $\lim_{n\to\infty} U_n(s) = g(X_i(\tau_i(s)))$, since for $n$ sufficiently large there exists $k$ such that $s \in [\gamma_i(\frac{k}{n}), \gamma_i(\frac{k+1}{n}))$ and $\tau_i(s) = \gamma_i(\frac{k}{n})$. If $s = \tau_i(s)$ and $s \in [\gamma_i(\frac{k}{n}), \gamma_i(\frac{k+1}{n}))$, then $|s - \gamma_i(\frac{k}{n})| \le \frac{1}{n}$ and $U_n(s) \to g(X_i(\tau_i(s)))$ by the continuity of $g$. The first term on the right converges by Doob's inequality and the last two by the bounded convergence theorem. Applying Theorem 2.12, the averaged identity becomes (3.1).
The assumption that $A_0$ is the closure of $\{(\varphi, L\varphi) : \varphi \in D_0\}$ implies that we can extend this identity to functions $\varphi$ which are differentiable in time and satisfy $\varphi(\cdot, s) \in \mathcal{D}(A_0)$ with $A_0\varphi$ and $\partial\varphi$ bounded and continuous. We denote by $T_0$ the semigroup generated by $A_0$. Then for $\psi \in \mathcal{D}(A_0)$ and $\varphi(x, s) = r_\epsilon(s)T_0(t-s)\psi(x)$, where $0 \le r_\epsilon \le 1$ is continuously differentiable, $r_\epsilon(s) = 0$ for $s \ge t$, and $r_\epsilon(s) = 1$ for $s \le t - \epsilon$, assuming $r_\epsilon(s) \to 1_{[0,t)}(s)$ appropriately, (3.7) converges to $\langle\psi, \delta V(t)\rangle$. With reference to Condition 2.1, taking the supremum over $\psi \in \mathcal{D}(A_0)$ with $|\psi| \le 1$, then taking expectations of both sides and applying the Hölder inequality, we obtain a bound for $t \le T$ to which, as in the proof of Theorem 2.2, the Gronwall inequality applies, implying $\delta v(t, \cdot) = 0$ for $t$ in a sufficiently small initial interval. But local uniqueness implies global uniqueness, so we have uniqueness for the linearized equation.
To complete the proof, we follow an argument used in the proof of Theorem 3.5 of [9]. Let V be the solution constructed in Section 2, and let U be another solution of (3.9) in L(π). Let ΦU be given by (2.8). Then ΦU is a solution of the linearized equation with v replaced by U. But since U is a solution of the nonlinear equation, it is also a solution of the linearized equation, and by uniqueness for the linearized equation ΦU = U. Consequently, U has a particle representation, and since V is the unique solution given by a particle representation, U = V .

Weak equation based on D(A)
In this section, we take the test functions to be $\mathcal{D}(A)$, the domain of the generator for the semigroup corresponding to the location processes. More precisely, we take $\varphi(x, t)$ that are continuously differentiable in $t$, with $\varphi(x, t) = 0$ for $t > t_\varphi$ and $\varphi(\cdot, t) \in \mathcal{D}(A)$, $t \ge 0$, so that $A\varphi$ is bounded and continuous. As above, we extend $g$ to all of $D$ by setting $g(x) = h(x)$ for $x \in D^o$. Let
$$\gamma_i(t) = \inf\{s \ge t : X_i(s) \in \partial D\}, \tag{3.8}$$
and note that $1_{\{\tau_i(t)=0\}} = 1_{\{\gamma_i(0)>t\}}$. Let $P(dy, ds|x)$ be the conditional distribution of $(X_i(\gamma_i(0)), \gamma_i(0))$ given $X_i(0) = x$, and let $P\varphi(x) = \int \varphi(y, s)\,P(dy, ds|x)$. Let $X^*$ be the reversed process and $\gamma^*$ be the first time that $X^*$ hits the boundary. We consider the following weak equation for $V$.
Theorem 3.4 V defined in (2.3) is the unique solution of (3.9) in L(π).

Derivation of (3.9)
For $\varphi$ satisfying the conditions stated above, we again consider the limit of the telescoping sum as the mesh size of the partition $\{t_k, 0 \le k \le m\}$ goes to zero. Since $M_{\varphi,i}$ is a martingale, $\varphi \circ X_i$ is a semimartingale, and the second term on the right converges to the usual semimartingale integral. Breaking $A_i$ into its components, the difficulty is again the boundary term, but the limit is different. Breaking this expression into pieces and summing by parts, we can write the first expression on the right as
$$\int_0^t g(X_i(\tau_i(s)))\big(A\varphi(X_i(s), s) + \partial\varphi(X_i(s), s)\big)\,ds + \varphi(X_i(t), t)\,g(X_i(\tau_i(t))) - \varphi(X_i(0), 0)\,g(X_i(0)).$$
For the remaining three terms, a summand is nonzero only if $\tau_i(t_{k+1}) \ne \tau_i(t_k)$, which implies that $X_i$ hits the boundary between $t_k$ and $t_{k+1}$. If the mesh size is small, by continuity, $X_i(t_{k+1})$ must be close to the boundary and hence close to $X_i(\tau_i(t_{k+1}))$. Let $E_i(t)$ be the set of complete excursions from the boundary in the interval $[0, t]$. Then the last three terms converge to an expression involving $\rho(X_i(s), u)\,W(du \times ds)$.
Note that for $\gamma_i$ given by (3.8) and $s \in (\alpha, \beta)$, with $(\alpha, \beta)$ an excursion interval, $\gamma_i(s) = \beta$, so we can rewrite the excursion sums in terms of $\varphi(X_i(s), s)\rho(X_i(s), u)\,W(du \times ds)$. Recalling that $\varphi$ is nonzero only on a finite time interval and letting $t = \infty$, averaging gives (3.9).

Proof of uniqueness
The proof of uniqueness is essentially the same as for Theorem 3.1.

A Appendix
A.1 Gaussian white noise

It follows that for disjoint $C_1, C_2, \ldots \subset U$ with $\mu(C_i) < \infty$,
$$W\Big(\bigcup_i C_i \times [0, t]\Big) = \sum_i W(C_i \times [0, t])$$
almost surely, but the exceptional event of probability zero will, in general, depend on the sequence $\{C_i\}$; that is, while $W$ acts in some ways like a signed measure, it is not a random signed measure. Define the filtration $\{\mathcal{F}^W_t\}$ by $\mathcal{F}^W_t = \sigma(W(C \times [0, s]) : \mu(C) < \infty,\ s \le t)$. By a truncation argument, the integral can be extended to adapted $Y$ satisfying
$$\int_0^t \int_U Y^2(u, s)\,\mu(du)\,ds < \infty \quad \text{a.s.},\quad t > 0.$$
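For a simple integrand $\sum_i y_i 1_{C_i}$, the stochastic integral against $W$ reduces to $\sum_i y_i W(C_i)$, and the Itô isometry gives variance $\sum_i y_i^2\,\mu(C_i)$. A Monte Carlo sketch of this identity, with hypothetical sets and values:

```python
import numpy as np

def simple_integral_samples(y, mu, n_samples=200_000, seed=3):
    """Samples of the stochastic integral of the simple integrand
    sum_i y_i 1_{C_i}: the integral equals sum_i y_i W(C_i) with
    W(C_i) ~ N(0, mu(C_i)) independent across disjoint sets."""
    rng = np.random.default_rng(seed)
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    w = rng.standard_normal((n_samples, len(mu))) * np.sqrt(mu)  # W(C_i)
    return w @ y

vals = simple_integral_samples([1.0, -2.0], [0.5, 0.25])
# Ito isometry predicts Var = 1^2 * 0.5 + (-2)^2 * 0.25 = 1.5
```

The sample variance of `vals` should approximate the isometry value $1.5$.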
Note that under this last extension, the integral is a locally square integrable martingale with A.2 Measurability of density process Lemma A.1 Let E and W be complete separable metric spaces, let X be a stationary Markov process in E with no fixed points of discontinuity, and let W be a W-valued random variable that is independent of X. Let f : [0, ∞) × D E [0, ∞) × W → R be bounded and Borel measurable and be nonanticipating in the sense that Letting {G X t } denote the reverse filtration, G X t = σ(X(s−), s ≥ t), there exists a Borel measurable g : [0, ∞) × E × W → R such that {g(t, X(t−), W ), t ≥ 0} gives the optional projection of {f (t, X, W ), t ≥ 0}, that is, for each reverse stopping time τ , ({τ ≥ t} ∈ σ(W ) ∨ G X t , t ≥ 0), E[f (τ, X, W )|σ(W ) ∨ G X τ ] = g(τ, X(τ −), W ). (A.1) Proof. Let R be the collection of bounded, Borel measurable functions f for which the assumptions and conclusions of the lemma hold. Then R is closed under bounded, pointwise limits of increasing functions and under uniform convergence. For 0 ≤ t 1 < · · · < t m , f i ∈ B([0, ∞) × E), f 0 ∈ B(W), let Then letting {T * (t)} denote the semigroup for the time-reversed process, g can be expressed in terms of {T * (t)} and the f i . For example, if m = 2, Let H 0 be the collection of functions of the form (A.2). Then by the appropriate version of the monotone class theorem (for example, Corollary 4.4 in the Appendix of [4]), R contains all bounded functions that are σ(H 0 ) measurable, that is all bounded measurable functions such that f (t, x, w) = f (t, x(· ∧ t−), w). Proof. The uniqueness of the optional projection up to indistinguishability ensures that if Z 1 (t) ≤ Z 2 (t) for all t with probability one, then the optional projection of Z 1 is less than or equal to the optional projection of Z 2 for all t with probability one.