Markov selection for constrained martingale problems

Constrained Markov processes, such as reflecting diffusions, behave as an unconstrained process in the interior of a domain but upon reaching the boundary are controlled in some way so that they do not leave the closure of the domain. In this paper, the behavior in the interior is specified by a generator of a Markov process, and the constraints are specified by a controlled generator. Together, the generators define a constrained martingale problem. The desired constrained processes are constructed by first solving a simpler controlled martingale problem and then obtaining the desired process as a time-change of the controlled process. As for ordinary martingale problems, it is rarely obvious that the process constructed in this manner is unique. The primary goal of the paper is to show that from among the processes constructed in this way one can “select”, in the sense of Krylov, a strong Markov process. Corollaries to these constructions include the observation that uniqueness among strong Markov solutions implies uniqueness among all solutions. These results provide useful tools for proving uniqueness for constrained processes including reflecting diffusions. The constructions also yield viscosity semisolutions of the resolvent equation and, if uniqueness holds, a viscosity solution, without proving a comparison principle. We illustrate our results by applying them to reflecting diffusions in piecewise smooth domains. We prove existence of a strong Markov solution to the SDE with reflection, under conditions more general than in [13]: in fact our conditions are known to be optimal in the case of simple, convex polyhedrons with constant direction of reflection on each face ([10]). We also indicate how the results can be applied to processes with Wentzell boundary conditions and nonlocal boundary conditions.


Introduction
Let A be an operator determining a Markov process X with state space E as the solution of the martingale problem in which f (X(t)) − f (X(0)) − ∫ 0 t Af (X(s))ds (1.1) is required to be a martingale with respect to a filtration {F t } for all f ∈ D(A), the domain of A. The study of stochastic processes that behave like the process determined by A when in an open subset E 0 ⊂ E, are constrained to stay in E 0 , and must behave in a prescribed way on ∂E 0 is classically carried out by restricting the domain D(A) by specifying boundary conditions, typically of the form Bf (x) = 0 for x ∈ ∂E 0 for some operator B. Then X is required to remain in E 0 and (1.1) is required to be a martingale for all functions in {f ∈ D(A) : Bf (x) = 0, x ∈ ∂E 0 }. This approach to constrained Markov processes, however, frequently introduces difficult analytical problems in identifying a set of functions both satisfying the boundary conditions and large enough to characterize the process.
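The martingale property (1.1) can be sanity-checked by simulation in the simplest case. The following toy sketch (not from the paper) takes X to be one-dimensional Brownian motion, for which Af = (1/2)f ′′; with f(x) = x², Af = 1, so M(t) = f(X(t)) − f(X(0)) − t should have mean zero.

```python
import numpy as np

# Monte Carlo sanity check of the martingale property (1.1) for
# one-dimensional Brownian motion (a toy sketch; here Af = (1/2)f'').
# For f(x) = x^2, Af = 1, so M(t) = f(W(t)) - f(W(0)) - t should have
# mean zero for every t.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 20000, 200, 1.0
dt = T / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
M = W[:, -1] ** 2 - T   # f(W(T)) - f(W(0)) - integral of Af(W(s)) ds, f(x) = x^2
print(abs(M.mean()) < 0.05)  # True: the sample mean of M(T) is near zero
```

Any other f in the domain (say f(x) = sin x, with Af = −(1/2) sin x) can be checked the same way by accumulating the integral term along each path.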
An alternative approach by Stroock and Varadhan [31] introduces a submartingale problem which weakens the restriction on the domain of A to the requirement that Bf (x) ≥ 0 for x ∈ ∂E 0 and then requires that for all such f ∈ D(A), (1.1) is a submartingale. This approach has been used to great effect by a number of authors. See, for example, [37,20,21].
Restrictions on the values of Bf on the boundary are dropped altogether in [23,24] at the cost of introducing a boundary process λ that, in the simplest settings, measures the amount of time the process spends on the boundary, in the sense that λ is nondecreasing and increases only when X (or, more precisely, X(·−)) is on the boundary. Then X is required to take values in E 0 and, for each f ∈ D(A) ∩ D(B), f (X(t)) − f (X(0)) − ∫ 0 t Af (X(s))ds − ∫ 0 t Bf (X(s−))dλ(s) (1.2) is required to be a martingale. As we will see, the form of the boundary term may be more complicated than this. A process that satisfies these requirements is a solution of the constrained martingale problem. Clearly, every solution of the constrained martingale problem is also a solution of the submartingale problem. This approach, or the corresponding one for stochastic equations, has been used, for example, in [10,5,7].
Whether the submartingale problem approach or the constrained martingale problem approach is used, the critical issue is uniqueness of the solution, which is still an open question for many examples (see e.g. [17,18]).
The primary goal of this paper is to prove a Markov selection theorem for solutions of constrained martingale problems. Beyond the intrinsic interest, this selection theorem is frequently a crucial ingredient in proving uniqueness for constrained martingale problems and hence uniqueness for semimartingale reflecting Brownian motion (see, for example, [26,34,10]) and reflecting diffusions.
In the unconstrained case, the Markov selection theorem ensures the existence of strong Markov solutions to the martingale problem. The construction of the strong Markov solution also ensures that uniqueness among strong Markov solutions implies uniqueness among all solutions. See [32], Theorems 12.2.3 and 12.2.4, for diffusions and [14], Theorem 4.5.19, for general martingale problems. All these results follow [22]. The observation that uniqueness among strong Markov solutions implies uniqueness among all solutions provides a key tool in uniqueness arguments. Unfortunately, these results do not apply immediately to solutions of submartingale or constrained martingale problems. We construct solutions of the constrained martingale problem by time-changing solutions of a controlled martingale problem (Sections 2 and 3). Solutions of the controlled martingale problem evolve on a slower time scale and may take values in all of E. Their behavior in E c 0 is determined by the operator B. Since solutions of the controlled martingale problem capture the intuition behind the controls that constrain the solution, we will refer to solutions of the constrained martingale problem that arise as time-changes of solutions of the controlled martingale problem as natural. We cannot rule out the possibility that there are solutions of the constrained martingale problem which are not natural, but, under very general conditions, uniqueness for natural solutions implies uniqueness for all solutions. See Remark 4.14.
In Section 2.1, we introduce the controlled martingale problem and discuss properties of the collection of solutions. In particular, we prove weak compactness of the collection of solutions. In Section 3, we introduce the time-changed process. Under mild conditions, the time-changed process is a natural solution of the constrained martingale problem. We note however that, even when it is not, the time-changed process still models a process constrained in E 0 , with behavior in the interior determined by A and constraints determined by B.
In Section 4 we prove that there exists a natural strong Markov solution of the constrained martingale problem (Theorem 4.9 and Corollary 4.12) and that uniqueness among natural strong Markov solutions implies uniqueness among all natural solutions (Corollary 4.13).
In Section 5, we discuss connections between solutions of the constrained martingale problem and viscosity semisolutions of the corresponding resolvent equation. In particular, generalizing the results of Section 5 of [6], we see that existence of a comparison principle for the viscosity semisolutions implies uniqueness for natural solutions of the constrained martingale problem. Conversely, uniqueness of natural solutions of the constrained martingale problem gives a viscosity solution of the resolvent equation. Thus one can obtain existence of a viscosity solution from purely probabilistic arguments, without first proving a comparison principle for the resolvent equation.
In Section 6 we apply the results of Section 4 to diffusion processes in piecewise smooth domains of R d with varying, oblique directions of reflection on each face. Existence and uniqueness results for these processes have been obtained by many authors ([34,10] for convex polyhedrons with constant direction of reflection on each face, [33,28,4,13] for nonpolyhedral domains, etc.). For nonpolyhedral domains, [13] is perhaps the most general result, but it still requires a condition that is not satisfied in some very natural examples (see Example 6.1) or is difficult to verify in other ones (see e.g. [18]). In addition, [13] does not cover the case of cusp-like singularities, such as in [17] (in dimension 2, cusp-like singularities are covered by [7]). In [34] and [10] a key point in proving uniqueness is the fact that there exist strong Markov processes that satisfy the definition of reflecting diffusion and that uniqueness among these strong Markov processes implies uniqueness. By the results of Section 4, we obtain existence of a strong Markov natural solution of the constrained martingale problem under conditions that coincide with those of [10] in the case of simple, convex polyhedrons with constant direction of reflection on each face (see Remark 6.3). In this case, [10] has shown that these conditions are necessary for existence of a semimartingale reflecting Brownian motion. Under the same assumptions, the results of Section 4 ensure also that uniqueness among strong Markov natural solutions implies uniqueness among all natural solutions. Moreover we show that the set of natural solutions of the constrained martingale problem coincides with the set of weak solutions to the corresponding stochastic differential equation with reflection (Theorem 6.12).
Further examples of application of the results of Section 4 are presented in Section 7.

Notation
For a metric space (E, r), B(E) will denote the σ-algebra of Borel subsets of E, B(E) will denote the set of bounded, Borel measurable functions on E, and ‖ · ‖ will denote the supremum norm on B(E). P(E) will denote the set of probability measures on (E, B(E)). For F ∈ B(E), with a slight abuse of notation, P(F ) will denote {P ∈ P(E) : P (F ) = 1}. For x ∈ E and F ∈ B(E), d(x, F ) will denote the distance from x to F , that is, d(x, F ) = inf y∈F r(x, y).
1 will denote the function identically equal to 1 and, for F ∈ B(E), 1 F will denote the indicator function of F . |I| will denote the cardinality of a finite set I.
For any function or operator, R(·) will denote the range and D(·) the domain. L(·) will denote the distribution of a stochastic process or a random variable.
If Z is a stochastic process defined on an arbitrary probability space, {F Z t } will denote the filtration generated by Z.
If Z is a stochastic process defined on an arbitrary filtered probability space, Z will also denote the canonical process defined on the path space. {B t } will denote the filtration generated by the canonical process.

Controlled martingale problems
We use the control formulation of constrained martingale problems given in [24] rather than the earlier version given in [23] that was based on "patchwork" martingale problems. The control formulation may be less intuitive, but it is more general and notationally simpler, and models described in the earlier manner can be translated to the control formulation.
Let E be a compact metric space, and let E 0 be an open subset of E. The requirement that E be compact is not particularly restrictive since, for example, for most processes in R d , one can take E to be the one-point compactification of R d .
Let U also be a compact metric space, and let Ξ be a closed subset of E c 0 × U . For each x ∈ E c 0 , let ξ x ≡ {u : (x, u) ∈ Ξ} be the set of controls that are admissible at x, and define F 1 ≡ {x ∈ E c 0 : ξ x ≠ ∅}, which is the set of points at which a control exists. Let B ⊂ C(E) × C(Ξ) with (1, 0) ∈ B. Using A and B, we define a controlled process Y that outside E 0 evolves on a slower time scale than the desired process X. Like X, inside E 0 the behavior of Y is determined by A, and outside E 0 the behavior of Y is determined by B. In particular, Y may take values in all of E.

Let L U denote the space of measures µ on [0, ∞) × U such that µ([0, t] × U ) < ∞ for all t ≥ 0, topologized so that µ n → µ if and only if ∫ f dµ n → ∫ f dµ for all continuous f with compact support in [0, ∞) × U . It is possible to define a metric on L U that induces the above topology and makes L U into a complete, separable metric space. We will say that an L U -valued random variable Λ 1 is adapted to a filtration {F t } if, for each t ≥ 0 and C ∈ B(U ), Λ 1 ([0, t] × C) is F t -measurable.

Definition 2.1. (Y, λ 0 , Λ 1 ) is a solution of the controlled martingale problem for (A, E 0 , B, Ξ) if Y is an E-valued process, λ 0 is nonnegative and nondecreasing and increases only when Y ∈ E 0 , Λ 1 is a random measure in L U such that λ 0 (t) + Λ 1 ([0, t] × U ) = t and Λ 1 is concentrated on {(s, u) : (Y (s), u) ∈ Ξ}, and there exists a filtration {F t } such that Y , λ 0 , and Λ 1 are {F t }-adapted and f (Y (t)) − f (Y (0)) − ∫ 0 t Af (Y (s))dλ 0 (s) − ∫ [0,t]×U Bf (Y (s), u)Λ 1 (ds × du) (2.2) is an {F t }-martingale for all f ∈ D ≡ D(A) ∩ D(B). By the continuity of f , we can assume, without loss of generality, that {F t } is right continuous.

Remark 2.2.
To get some intuition on λ 0 and Λ 1 , consider the case in which A is a bounded Markov process generator and at each point x ∈ (E 0 ) c there is exactly one control u(x), so B is the bounded Markov process generator that, at x, produces a jump u(x). Then Y is the pure jump process whose generator agrees with Af on E 0 and with Bf (·, u(·)) on (E 0 ) c . For general A and B, frequently (Y, λ 0 , Λ 1 ) can be obtained as a limit of a sequence {(Y n , λ n 0 , Λ n 1 )} corresponding to a sequence of bounded Markov process generators {(A n , B n )} (with jump rates going to infinity, if A, B are not bounded) that approximates (A, B). This construction is carried out rigorously in Theorem 2.2 of [24] and yields a quite general method to obtain solutions of the controlled martingale problem. In the case when there is a corresponding patchwork martingale problem, as defined in [23] (see Definition 6.6), this essentially amounts to constructing a solution of the patchwork martingale problem, which will be a solution of the controlled martingale problem as well: this approach is followed in Section 6. See also Section 7.2 for an example of another construction by approximation.
is an {F t }-martingale. Every solution of the controlled martingale problem for (C, Ξ 0 ) gives a solution of the controlled martingale problem for (A, E 0 , B, Ξ), and, conversely, every solution of the controlled martingale problem for (A, E 0 , B, Ξ) gives a solution of the controlled martingale problem for (C, Ξ 0 ).
Define Π to be the collection of the distributions of solutions of the controlled martingale problem for (A, E 0 , B, Ξ), and, for ν ∈ P(E), Π ν ⊂ Π to be the collection of distributions such that Y (0) has distribution ν. P 0 denotes the collection of ν ∈ P(E 0 ∪ F 1 ) such that Π ν ≠ ∅.

Lemma 2.8. Π and each Π ν , ν ∈ P 0 , are convex and compact.

Proof. Relative compactness for the family of Y follows from Theorems 3.9.4 and 3.9.1 of [14]. The relative compactness of the λ 0 and Λ 1 is immediate, as λ 0 and λ 1 are Lipschitz continuous with Lipschitz constant 1. The fact that every limit point is a solution of the controlled martingale problem follows by standard arguments from the properties of weakly converging measures and from uniform integrability of the martingales in (2.2). Convexity is immediate.

Closure properties of Π
Lemma 2.9. Let (Y, λ 0 , Λ 1 ) be a solution of the controlled martingale problem for (A, E 0 , B, Ξ) with respect to a filtration {F t }, and let H ∈ F 0 with P (H) > 0. Then the distribution of (Y, λ 0 , Λ 1 ) under P H ≡ P (·|H) is in Π.
Proof. If M is an {F t }-martingale under P and |M (t)| ≤ C(1 + t) for some C > 0, then M is an {F t }-martingale under P H .

Lemma 2.10. There exists a closed F 2 ⊂ E 0 ∪ F 1 such that P 0 = P(F 2 ).
Lemma 2.11. If (Y, λ 0 , Λ 1 ) is a solution of the controlled martingale problem, τ is an {F t }-stopping time, and H ∈ F τ with P (H) > 0, then, denoting by P τ,H the conditional distribution, given H, of the solution shifted by τ , P τ,H is the distribution of a solution of the controlled martingale problem for (A, E 0 , B, Ξ).
Proof. For 0 ≤ t < t + r and C ∈ B t , the required martingale property follows by the optional sampling theorem. Therefore, P τ,H ∈ Π.
Let P 0 be the joint distribution of the 4-tuple of random variables (Y, λ 0 , Λ 1 , τ ). Let ν be the distribution of Y (τ ), and let P 1 ∈ Π ν (not empty by Lemma 2.11). Then there exists a probability measure P such that (Y, λ 0 , Λ 1 , τ ) has the same distribution under P 0 and P and the distribution of (Y τ , λ τ 0 , Λ τ 1 ) under P is P 1 .

Constrained martingale problems
As discussed in the Introduction and at the beginning of Section 2, we are interested in processes that in E 0 behave like solutions of the martingale problem for the operator A, are constrained to remain in E 0 , and whose behavior on ∂E 0 is determined by the operator B. In Section 2, we have introduced a controlled process Y , with values in all of E, that evolves on a slower time scale, whose behavior in E c 0 is determined by B, and that is a solution of the controlled martingale problem (Definition 2.1). We now construct the constrained process, X, by time changing Y , where the time change is obtained by inverting λ 0 , namely τ (t) ≡ inf{s : λ 0 (s) > t}. (3.1) The following lemma gives conditions that ensure that the process obtained by inverting λ 0 is defined for all time.

Lemma 3.1. Suppose there is an f ∈ D and an ε > 0 satisfying (3.2). Then lim t→∞ λ 0 (t) = ∞ almost surely and E[τ (t)] < ∞ for all t ≥ 0.
Condition (3.2) is a natural condition which is also used in the study of PDEs (see, e.g., [9], Lemma 7.6). An example where it is satisfied is a reflecting diffusion in a smooth domain with a nontangential direction of reflection. More precisely, if ψ is a smooth function defining the domain, that is, E 0 = {ψ > 0}, then ψ itself satisfies (3.2) (recall that ∂E 0 is compact).
Then, by Lemma 2.11 and compactness, for every solution, λ 0 is a.s. strictly increasing.
With Lemmas 2.8, 3.1, 3.4 and 3.3 in mind, throughout the remainder of the paper, we assume the following:

Condition 3.5. For every solution of the controlled martingale problem for (A, E 0 , B, Ξ), lim t→∞ λ 0 (t) = ∞ almost surely.

Suppose there exists a sequence η n of {G t }-stopping times such that η n → ∞ and, for each n, (3.3) stopped at η n is a martingale. We may assume, without loss of generality, that {G t } is right continuous.

Proof. Since τ (t) must be a point of increase of λ 0 , Y (τ (t)) must be in E 0 . Since Y and τ are right continuous, X must be in D E0 [0, ∞).
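The time-change construction X = Y ∘ τ can be caricatured in discrete time. The following toy sketch (not the paper's construction) lets Y be a random walk on the integers: steps taken while Y is in the half line advance the interior clock λ 0 , while "control" steps that push Y back up leave λ 0 flat; inverting λ 0 removes the control excursions and yields a constrained path.

```python
import numpy as np

# Discrete-time caricature of the time change X = Y ∘ τ (a toy sketch,
# not the paper's construction).  Y is a random walk on the integers:
# while Y >= 0 it takes "interior" steps, which advance the clock lam0;
# when Y < 0, a "control" step pushes it back up and lam0 stays flat.
rng = np.random.default_rng(1)
n = 10000
Y = np.zeros(n + 1, dtype=int)
lam0 = np.zeros(n + 1, dtype=int)
for s in range(n):
    if Y[s] >= 0:                      # interior dynamics ("operator A")
        Y[s + 1] = Y[s] + rng.choice([-1, 1])
        lam0[s + 1] = lam0[s] + 1      # interior clock advances
    else:                              # control pushes back ("operator B")
        Y[s + 1] = Y[s] + 1
        lam0[s + 1] = lam0[s]          # clock flat while the control acts

# tau(t) = inf{s : lam0(s) > t}; sample Y at the start of each interior tick
T = lam0[-1] - 1
tau = np.searchsorted(lam0, np.arange(T + 1), side="right")
X = Y[tau - 1]
print(X.min())  # 0: the time-changed path never leaves the half line
```

Here Y repeatedly dips one step outside the half line and is pushed back, but those excursions happen while λ 0 is flat, so they are invisible to X.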
A solution obtained as in Theorem 3.6 from a solution of the controlled martingale problem will be called natural. Γ ⊂ P(D E0 [0, ∞)) will denote the set of distributions of natural solutions and, for ν ∈ P(E 0 ), Γ ν will denote the set of distributions of natural solutions X such that X(0) has distribution ν.

b) If λ 0 is strictly increasing, then τ is continuous and we can take η n to be the first time that τ reaches n.

We conclude this section with a result giving conditions that imply that a solution of the constrained martingale problem is natural.

Proposition 3.10. Suppose that X is a solution of the constrained martingale problem for (A, E 0 , B, Ξ) and Λ is the associated random measure. If Λ([0, ·] × Ξ) is continuous and a suitable additional condition holds for all h ∈ C(Ξ) and t > 0, then X is natural.

The Markov selection theorem
Our strategy for obtaining a Markov solution for the constrained martingale problem for (A, E 0 , B, Ξ) generally follows the approach in Section 4.5 of [14] (which in turn is based on an unpublished paper [16]). With reference to these results, for h ∈ C(E 0 ) and ν ∈ P(F 2 ) (F 2 defined in Lemma 2.10), define v h (ν) to be the supremum, over P ∈ Π ν , of the expectation under P of ∫ 0 ∞ e −t h(X(t))dt, where X is the corresponding time-changed process (4.1). Recalling that Π ν is compact (Lemma 2.8), we see that the supremum is achieved.
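The selection mechanism behind this construction, iteratively maximizing a sequence of functionals over a compact convex set of candidate distributions, can be illustrated on a finite toy example (a sketch, not the paper's construction; the candidate vectors and the family {h k } below are invented for illustration).

```python
import numpy as np

# Toy illustration of the Krylov-type selection idea: among a finite set
# of candidate "solution distributions" over 4 outcomes, successively
# retain only the maximizers of the expectation of h_1, then of h_2.
# A separating family {h_k} whittles the set down to a single element.
candidates = [
    np.array([0.25, 0.25, 0.25, 0.25]),
    np.array([0.40, 0.10, 0.40, 0.10]),
    np.array([0.40, 0.10, 0.10, 0.40]),
]
h_family = [np.array([1.0, 0.0, 0.0, 0.0]),   # h_1 discards the uniform candidate
            np.array([0.0, 0.0, 0.0, 1.0])]   # h_2 separates the remaining two

selected = candidates
for h in h_family:
    best = max(p @ h for p in selected)       # achieved: the set is finite
    selected = [p for p in selected if p @ h == best]

print(len(selected))  # 1: the iteration selects a single candidate
```

In the paper the analogous step maximizes v h over Π ν , and compactness of Π ν (Lemma 2.8) plays the role of finiteness here in guaranteeing that each supremum is achieved.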
Lemma 4.1. v h is upper semicontinuous.
where the last equality follows from (4.3).
Consequently, equality must hold here and in (4.8), giving both that P τ,H0 is in Π h1 and that Π h1 is closed under the pasting operation. For an arbitrary H as in the statement of the theorem, note that the probability measure P H can be written as a mixture of measures of this form. Now suppose that the result holds for 1 ≤ k ≤ n − 1. In particular, if the distribution of (Y, λ 0 , Λ 1 ) is in Π h1,...,hn−1 , then the distribution of (Y τ , λ τ 0 , Λ τ 1 ) under P H0 is in Π h1,...,hn−1 . With this observation, the proof of the result for n follows.
EJP 24 (2019), paper 135.

The martingale property and the Markov selection theorem
Proof. For t ≥ 0 and H bounded and F t -measurable, the identity follows by Lemma 4.6 and Remark 4.5. The left side is clearly a martingale, and (4.9) follows by taking t = 0.
Recall that we are assuming Condition 3.5. In particular, we are assuming that for all solutions of the controlled martingale problem, λ 0 (t) → ∞.

(4.10) and (4.11).
A is linear and closed under bounded pointwise convergence. As in [14], for each η > 0 and X = Y ∘ τ as in (4.11), one has a representation of v h in terms of E[∫ 0 ∞ e −ηs (ηv h (X(s)) − Av h (X(s)) + h(X(s)))ds], and, as in Proposition 4.3.5 of [14], this implies that A is dissipative.

As in [14], for each ν ∈ Q ∞ 0 uniqueness holds for the martingale problem for A with initial distribution ν, and, by construction, the distribution of the solution is the unique distribution in Γ ∞ ν . Now let (Y, λ 0 , Λ 1 ) be the canonical process with distribution P ∈ Π ∞ such that L(Y (τ (0))) = ν, so that the distribution of X ≡ Y ∘ τ , defined as in Theorem 4.8, is the unique distribution in Γ ∞ ν . In order to show that X is a strong Markov process, we need to show that, for each finite {B τ (t) }-stopping time σ, τ (σ) is a {B t }-stopping time and, setting X σ (·) = X(σ + ·), for every F ∈ B τ (σ) , (4.13) holds for all B ∈ B(D E0 [0, ∞)). The fact that τ (σ) is a {B t }-stopping time follows by the right continuity of {B t }. Fix F ∈ B τ (σ) with P (F ) > 0, and define two probability measures P 1 and P 2 accordingly. With τ σ given by the analog of (3.1), the distribution of the shifted process under P 2 is in Π by the optional sampling theorem. Moreover, for each n, the analogous identity holds, where the last equality follows from Lemma 4.6. Therefore the distribution of the shifted process under P 2 belongs to Π ∞ , so that L P2 (X σ ) ∈ Γ ∞ µ . Then, by uniqueness of the distribution in Γ ∞ µ , it must hold that L P1 (X σ ) = L P2 (X σ ), which gives (4.13).

Proof. If (Y, λ 0 , Λ 1 ) has distribution in Π ∞ ν , then τ (0) = 0.

Proof. By Lemma 4.11, ν ∈ Q ∞ 0 , and the assertion follows immediately from Theorem 4.9 by the same arguments as in Corollary 3.9.

Corollary 4.13. If there is a unique (in distribution) strong Markov process X = Y ∘ τ with initial distribution ν that can be obtained from a solution of the controlled martingale problem as in Theorem 3.6, then there is a unique (in distribution) process that can be obtained in this way.
In particular, under either condition a) or b) of Corollary 4.12, if there is a unique strong Markov, natural solution of the constrained (local) martingale problem with initial distribution ν, then there exists a unique natural solution.
Proof. If Γ ν contains more than one distribution, then, by selecting appropriate sequences {h n }, more than one strong Markov solution can be constructed.

Remark 4.14. We cannot rule out the possibility that there exist solutions of the constrained martingale problem that are not natural, but, under Condition 1.2 of [25], Theorem 2.2 of that paper yields that for any solution of the constrained martingale problem there exists a natural solution that has the same one dimensional distributions. By Theorem 3.2 of [24], uniqueness of one dimensional distributions for solutions with any given initial distribution implies uniqueness of finite dimensional distributions, so under Condition 1.2 of [25], uniqueness among natural solutions will imply uniqueness among all solutions.

Viscosity solutions
The approach taken above in the construction of a strong Markov solution to the constrained martingale problem simplifies the proof of existence of viscosity semisolutions to the problem ηv(x) − Av(x) = h(x), for x ∈ E 0 , Bv(x, u) = 0, for x ∈ ∂E 0 and some u ∈ ξ x , (5.1) given in [6], Section 5. In fact, Theorem 5.1 below shows that the function v h defined by (4.1) and Lemma 4.1 is a viscosity subsolution of (5.1), and hence the function −v −h is a viscosity supersolution. As a consequence, under mild assumptions, uniqueness of the strong Markov solution of the constrained martingale problem starting at each x ∈ E 0 implies existence of a viscosity solution (Corollary 5.3). This construction is a "probabilistic" alternative to Perron's method, and it does not require proving the comparison principle for (5.1).
For unconstrained martingale problems, the analogous result follows immediately from Section 3 of [6]. For a class of jump-diffusion processes for which uniqueness in law holds, [8] proves existence of a viscosity solution to the backward Kolmogorov equation directly, and then uniqueness of the viscosity solution by the comparison principle. The fact that the comparison principle for (5.1) implies uniqueness of the solution to the constrained (or unconstrained) martingale problem is the object of [6]. Theorem 5.1 states that v E 0 , the restriction of v h to E 0 , is a viscosity subsolution of (5.1); that is, it is upper semicontinuous, and if f ∈ D and x ∈ E 0 satisfy sup z∈E 0 (v h − f )(z) = (v h − f )(x), then the corresponding subsolution inequality holds at x (ξ x and F 1 being defined at the beginning of Section 2).
Proof. v is upper semicontinuous by Lemma 4.1.

Remark 5.2. Note that, for each
and, as noted at the beginning of this subsection, −v −h is a supersolution of (5.1).
The second assertion follows from Remark 5.2 by the same argument.

Diffusions with oblique reflection in piecewise smooth domains: existence and Markov property
We consider domains of the form E 0 = ∩ m i=1 E i 0 , where E i 0 , i = 1, ..., m, are simply connected open sets in R d with C 1 boundaries. Specifically, we will assume that for each i there is a function ψ i ∈ C 1 (R d ) such that E i 0 = {x : ψ i (x) > 0} and ∇ψ i (x) ≠ 0 for x ∈ ∂E i 0 , and we set n i (x) ≡ ∇ψ i (x)/|∇ψ i (x)|. Suppose that on ∂E i 0 a variable direction of reflection g i is assigned. We assume that g i is continuous on ∂E i 0 and ⟨∇ψ i (x), g i (x)⟩ > 0, x ∈ ∂E i 0 . It is convenient, and no loss of generality, to assume that g i is defined and continuous on all of R d with ⟨∇ψ i (x), g i (x)⟩ ≥ 0 (allowing 0 away from ∂E i 0 ). Noting that x ∈ ∂E 0 may be in more than one ∂E i 0 , for x ∈ ∂E 0 we define the cone of possible directions of reflection, G(x), generated by nonnegative combinations of the g i (x) with x ∈ ∂E i 0 , and the corresponding cone N (x) generated by the normals n i (x) with x ∈ ∂E i 0 . Starting from the late '70s, there has been a considerable amount of work devoted to proving existence and uniqueness of reflecting diffusions in E 0 with direction of reflection g i on ∂E i 0 . Perhaps the most general result in this sense is [13]. However, the assumptions in [13] are not satisfied in many natural situations, as in Example 6.1 below, where, at x 0 = 0, it can be proved by contradiction that there is no convex compact set that satisfies (3.7) of [13].
In addition [13] does not cover the case of cusp like singularities (covered by [7] in dimension 2).
[10] considers convex polyhedrons (take ψ i (x) = ⟨n i , x⟩ − b i , with n i and b i constant) with constant direction of reflection g i on each face. In this context, [10] proves existence and uniqueness (in distribution) of semimartingale reflecting Brownian motion under a condition which, in the case of simple polyhedrons, reduces to the assumption that, for every x ∈ ∂E 0 , there exists e(x) ∈ N (x), |e(x)| = 1, such that ⟨g, e(x)⟩ > 0, ∀g ∈ G(x) − {0}. (6.4) Moreover, for simple polyhedrons, [10], Propositions 1.1 and 1.2, shows that (6.4) is necessary for existence of semimartingale reflecting Brownian motion. (Non-semimartingale reflecting Brownian motion, which is studied, for example, in [19], [21] and [27], is not considered here.) Note that (6.4) is satisfied in Example 6.1.
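Condition (6.4) can be checked numerically in simple cases. The toy sketch below (the normals, directions, and the grid search are illustrative choices, not from the paper) works at the corner of the positive quadrant in R 2 : since G(x) consists of nonnegative combinations of the two face directions g 1 , g 2 , it suffices to find a unit vector e in the cone generated by the normals with ⟨g i , e⟩ > 0 for i = 1, 2.

```python
import numpy as np

# Hypothetical numerical check of condition (6.4) at the corner of the
# positive quadrant in R^2.  n1, n2 are the inward normals of the two
# faces; g1, g2 are constant oblique directions of reflection.  Because
# G(x) is the cone of nonnegative combinations of g1 and g2, positivity
# on the two generators implies positivity on all of G(x) - {0}.
n1, n2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def find_e(g1, g2, grid=201):
    """Search the normal cone for a unit e with <g1,e> > 0 and <g2,e> > 0."""
    for a in np.linspace(0.0, 1.0, grid):
        e = a * n1 + (1.0 - a) * n2
        e /= np.linalg.norm(e)
        if e @ g1 > 0.0 and e @ g2 > 0.0:
            return e
    return None

e_good = find_e(np.array([1.0, 0.5]), np.array([0.3, 1.0]))       # (6.4) holds
e_bad = find_e(np.array([1.0, -2.0]), np.array([-2.0, 1.0]))      # no admissible e
print(e_good is not None, e_bad is None)  # True True
```

In the second pair the two requirements on e are incompatible (one forces the first component to dominate, the other the second), so the search correctly returns no admissible direction.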
In [10], a key point in proving uniqueness is the fact that there exist strong Markov processes that satisfy the definition of semimartingale reflecting Brownian motion and that uniqueness among these strong Markov processes implies uniqueness among all processes that satisfy the definition (analogously in [26] and [34]). Our goal here is to prove that this key point holds for general diffusion processes on domains E 0 as defined above under Condition 6.2 below, thus providing the first step in extending proofs of uniqueness to this more general setting. In [13], [10] and in most of the literature, reflecting diffusions are defined as (weak) solutions of stochastic differential equations with reflection. Here we start by studying the corresponding controlled martingale problem and constrained martingale problem, and then show that the set of natural solutions to the constrained martingale problem coincides with the set of solutions of the stochastic differential equation with reflection.
We consider the controlled martingale problem for (A, E 0 , B, Ξ), with A the diffusion operator Af (x) = (1/2) Σ i,j (σσ T ) ij (x)∂ i ∂ j f (x) + Σ i b i (x)∂ i f (x), Bf (x, u) = ⟨u, ∇f (x)⟩, and Ξ ≡ {(x, u) : x ∈ ∂E 0 , u ∈ G(x), |u| = 1}, (6.5) and we assume that σ and b are bounded and continuous on R d . Note that F 1 , defined at the beginning of Section 2, in this case is ∂E 0 , so a solution of the controlled martingale problem must take values in E 0 (Remark 2.3).
Since (E j 0 ) c is closed, if j ∈ I(z k ) for some sequence z k → x, then j ∈ I(x). Consequently, for each x ∈ (E 0 ) c there exists δ(x) such that I(z) ⊂ I(x), for z ∈ (E 0 ) c with |z − x| < δ(x). We assume that E i 0 and g i , i = 1, ..., m, satisfy the following condition.
c) For each x ∈ ∂E 0 , I ∈ I(x), and n = Σ i∈I η i n i (x) with η i ≥ 0 and Σ i∈I η i > 0, there exists j ∈ I such that ⟨n, g j (x)⟩ > 0.
As anticipated in Remark 2.2, we will obtain a solution to the controlled martingale problem (6.5) by constructing a solution to the corresponding patchwork martingale problem ([23]), which will also be a solution to the controlled martingale problem.
Since I(x) ∈ I(z), (6.11) implies that, for some j ∈ I(x), (6.14) holds. By (6.13), each x ∈ E − E 0 is in at least one of the F i , so, defining B i f (x) = ρ(x)⟨∇f (x), g i (x)⟩, A and the B i are dissipative, and Lemma 1.1 of [23] yields that, for each ν ∈ P(E 0 ), there exists a solution, (Y, λ 0 , l 1 , ..., l m ), of the patchwork martingale problem for (A, E 0 , B 1 , E 1 , ..., B m , E m ) with initial distribution ν. Then, for f ∈ D, the corresponding process M f would be a martingale. Since we can approximate φ by C 2 functions {φ n } in such a way that φ n is constant on E 0 and ∇φ n → ∇φ uniformly on E, M φ is a martingale even if φ is not C 2 . M φ is a nonnegative martingale because ⟨∇φ, g i ⟩ ≤ 0 on E i . If Y (0) ∈ E 0 , then M φ (0) = 0, so, as in the proof of Lemma 1.4 of [23], M φ (t) = 0 for all t ≥ 0. As all terms in M φ are nonnegative, φ(Y (t)) must be zero for all t ≥ 0, and hence, by (6.12), Y (t) ∈ E 0 for all t ≥ 0. Therefore (Y, λ 0 , l 1 , ..., l m ) is a solution of the patchwork martingale problem.
Let (Y, λ 0 , Λ 1 ) be a solution of the controlled martingale problem for (A, E 0 , B, Ξ). It is easy to verify that Y is continuous. The following lemma is the analog of Lemma 3.1 of [10] and its proof is based on similar arguments.

Lemma 6.8. For every solution (Y, λ 0 , Λ 1 ) of the controlled martingale problem for (A, E 0 , B, Ξ) defined by (6.5), λ 0 (t) > 0 for all t > 0, a.s.
As mentioned at the beginning of this section, a reflecting diffusion in E 0 with direction of reflection g i on {x ∈ ∂E 0 : ψ i (x) = 0, ψ j (x) > 0 for j ≠ i}, i = 1, ..., m, is often defined as a weak solution of a stochastic differential equation with reflection of the form X(t) = X(0) + ∫ 0 t σ(X(s))dW (s) + ∫ 0 t b(X(s))ds + ∫ 0 t γ(s)dλ(s), X(t) ∈ E 0 , with γ(s) ∈ G(X(s)), |γ(s)| = 1, dλ-almost everywhere. (6.21)

Definition 6.11. X, defined on some probability space, is a weak solution of (6.21) if there exist λ, a.s. continuous and nondecreasing, γ, a.s. measurable, and a standard Brownian motion W , all defined on the same probability space as X, such that (X, γ, λ) is compatible with W (i.e. W (t + ·) − W (t) is independent of F W,X,γ,λ t , where {F W,X,γ,λ t } is the filtration generated by (W, X, γ, λ)) and (6.21) is satisfied.

Theorem 6.12. Every weak solution of (6.21) is a natural solution of the constrained martingale problem for (A, E 0 , B, Ξ) defined by (6.5).
Conversely, for every natural solution, X, of the constrained martingale problem for (A, E 0 , B, Ξ) there exists a weak solution of (6.21) with the same distribution as X.
Proof. Let X be a weak solution of (6.21); the first assertion follows by verifying directly that X, with the random measure determined by γ and λ, is a natural solution. Conversely, let X = Y ∘ τ , where (Y, λ 0 , Λ 1 ) is a solution of the controlled martingale problem for (A, E 0 , B, Ξ) with filtration {F t } and τ is given by (3.1). Without loss of generality we can suppose {F t } complete. Then (see [24], page 141) there is an {F t }-predictable, P(U )-valued process L such that, in particular, the boundary term can be represented as ∫ U u L(s, du) dλ 1 (s).
Theorem 6.13. For each ν ∈ P(E 0 ), there exists a strong Markov solution of (6.21). If uniqueness in distribution holds among strong Markov solutions of (6.21), then it holds among all solutions.
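Weak solutions of an equation of the form (6.21) can be approximated by a simple Euler-type scheme. The sketch below (not the paper's construction) works in the half space {x ∈ R 2 : x 2 ≥ 0} with a constant oblique direction of reflection; σ = I, b = 0, and g = (0.5, 1) are illustrative choices.

```python
import numpy as np

# Sketch of an Euler-type scheme for an SDE with reflection of the form
# (6.21), in the half space {x in R^2 : x_2 >= 0} with constant oblique
# direction of reflection g.  sigma = I, b = 0, and g = (0.5, 1) are
# illustrative choices, not taken from the paper.  Whenever an Euler
# step exits the half space, we push back along g to the boundary and
# record the amount of pushing in the nondecreasing process lam.
rng = np.random.default_rng(2)
g = np.array([0.5, 1.0])
n_steps, dt = 5000, 1e-3
X = np.zeros((n_steps + 1, 2))
lam = np.zeros(n_steps + 1)
for k in range(n_steps):
    step = X[k] + np.sqrt(dt) * rng.normal(size=2)  # unconstrained Euler step
    if step[1] < 0.0:
        dl = -step[1] / g[1]     # pushing along g needed to reach {x_2 = 0}
        step = step + dl * g
        lam[k + 1] = lam[k] + dl
    else:
        lam[k + 1] = lam[k]
    X[k + 1] = step
print(X[:, 1].min() >= 0.0)  # True: the path never leaves the half space
```

The process lam plays the role of λ in (6.21): it is nondecreasing and increases only at steps where the constraint is active, while the oblique component of g simultaneously drifts the first coordinate at each push.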
We conclude this section with the proof of the equivalence between the controlled martingale problem (6.5) and the corresponding patchwork martingale problem (6.16) (see Definition 6.6). This equivalence is a valuable tool. For instance, in the last step of the proof of Theorem 6.7 we have already used one direction of the equivalence, which is immediate to see, namely the fact that every solution of the patchwork martingale problem yields a solution of the controlled martingale problem. The other direction of the equivalence is nontrivial and is proved in the following theorem. Let G(x) ≡ [g 1 (x), . . . , g m (x)], and let G + (x) be the Moore-Penrose pseudo-inverse (see [3], Chapter 1). G + (x) is a Borel function of G(x), hence of x. Then, for each u ∈ R d such that G(x)η = u has at least one solution, all solutions have the form η(w) = G + (x)u + (I − G + (x)G(x))w, w ∈ R m .
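The parametrization of the solution set via the Moore-Penrose pseudo-inverse can be checked numerically; the matrix below is an illustrative choice, not taken from the paper.

```python
import numpy as np

# Numerical check of the solution formula eta(w) = G+ u + (I - G+ G) w:
# for any u in the range of G, every eta(w) solves G eta = u.  The
# matrix G is an illustrative choice (d = 2, m = 3); its columns play
# the role of the directions g_i(x).
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Gp = np.linalg.pinv(G)               # Moore-Penrose pseudo-inverse G+
u = G @ np.array([1.0, 2.0, 3.0])    # guarantees u is in the range of G

rng = np.random.default_rng(3)
for _ in range(5):
    w = rng.normal(size=3)
    eta = Gp @ u + (np.eye(3) - Gp @ G) @ w
    assert np.allclose(G @ eta, u)   # every eta(w) solves G eta = u
print("ok")
```

The second term (I − G + G)w ranges over the null space of G, which is why varying w sweeps out all solutions while leaving G η = u unchanged.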
Then the controlled martingale problem requires f (Y (t)) − f (Y (0)) − ∫ 0 t Af (Y (s))dλ 0 (s) − ∫ 0 t Bf (Y (s))dλ 1 (s) to be a martingale. Note that the assumption that η(x, E 0 ) = 1 implies that, for every solution of the controlled martingale problem, P {τ (0) < ∞} = 1. In fact, if Y (0) ∈ E c 0 , P {τ (0) > t} ≤ e −t , since B is the generator of a pure jump process with unit exponential holding times. Consequently, by Lemma 3.3, λ 0 (t) → ∞.
Processes of this type have been considered in a variety of settings, for example [11,30]. Semigroups corresponding to processes with nonlocal boundary conditions of this type have been considered in [2]. Related models are considered in [29].

Wentzell boundary conditions
Let A ⊂ C(E)×C(E) and B ⊂ C(E)×C(E) be generators such that for every µ ∈ P(E) there exist solutions of the martingale problem for (A, µ) and (B, µ), every solution of the martingale problem for A has continuous sample paths and every solution for B has cadlag sample paths. In addition, assume that if Z is a solution of the martingale problem for B with Z(0) ∈ E 0 , then Z(t) ∈ E 0 for all t > 0. Set U ≡ {1} and Bf (·, 1) ≡ Bf .
Let µ ∈ P(E 0 ), let Y (0) have distribution µ, and let Y evolve as a solution of the martingale problem for A until the first time τ 1 that Y hits ∂E 0 . After time τ 1 , let Y evolve as a solution of the martingale problem for B until σ 1 ≡ inf{t > τ 1 : inf x∈∂E 0 |Y (t) − x| ≥ ε}. Recursively define τ k and σ k , with σ 0 = 0. By pasting, Y is constructed so that for f ∈ D, f (Y (t)) − f (Y (0)) −