Stationary Solutions and Forward Equations for Controlled and Singular Martingale Problems

Stationary distributions of Markov processes can typically be characterized as probability measures that annihilate the generator in the sense that ∫_E Af dµ = 0 for every f ∈ D(A); that is, for each such µ, there exists a stationary solution of the martingale problem for A with marginal distribution µ. This result is extended to models corresponding to martingale problems that include absolutely continuous and singular (with respect to time) components and controls. Analogous results for the forward equation follow as a corollary.


Introduction
Stationary distributions for Markov processes can typically be characterized as probability measures that annihilate the corresponding generator. Suppose A is the generator for a Markov process X with state space E, where X is related to A by the requirement that

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds   (1.1)

be a martingale for each f ∈ D(A). Then, under mild additional assumptions, µ ∈ P(E) is a stationary distribution for X if and only if

∫_E Af dµ = 0, f ∈ D(A).   (1.2)

Many processes of interest in applications (see, for example, the survey paper by Shreve [24]) can be modelled as solutions of a stochastic differential equation of the form

dX(t) = b(X(t), u(t)) dt + σ(X(t), u(t)) dW(t) + m(X(t−), u(t−)) dξ_t   (1.4)

where X is the state process with E = R^d, u is a control process with values in U_0, ξ is a nondecreasing process arising either from the boundary behavior of X (e.g., the local time on the boundary for a reflecting diffusion) or from a singular control, and W is a Brownian motion. (Throughout, we will assume that the state space and control space are complete, separable metric spaces.) A corresponding martingale problem can be derived by applying Itô's formula to f(X(t)). In particular, setting a(x, u) = ((a_ij(x, u))) = σ(x, u)σ(x, u)^T and

Af(x, u) = (1/2) Σ_{i,j} a_ij(x, u) ∂²f(x)/∂x_i∂x_j + b(x, u)·∇f(x),

we have that

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s), u(s)) ds − ∫_0^t Bf(X(s−), u(s−), Δξ(s)) dξ_s   (1.5)

is a (local) martingale, where

Bf(x, u, δ) = (f(x + δ m(x, u)) − f(x))/δ, δ > 0,   (1.6)

with the obvious extension to Bf(x, u, 0) = m(x, u)·∇f(x). We will refer to A as the generator of the absolutely continuous part of the process and B as the generator of the singular part, since frequently in applications ξ increases on a set of times that is singular with respect to Lebesgue measure. In general, however, ξ may be absolutely continuous or have an absolutely continuous part.
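As a concrete numerical sketch of the characterization ∫_E Af dµ = 0 (a toy example of ours, not from the paper): for the Ornstein–Uhlenbeck generator Af(x) = f''(x) − x f'(x), the standard normal law annihilates A, which a simple quadrature makes visible.

```python
import math

# Toy check (not from the paper) that mu = N(0,1) annihilates the
# Ornstein-Uhlenbeck generator Af(x) = f''(x) - x f'(x),
# i.e. \int_R Af dmu = 0 for a smooth test function f.

def f(x):   return x**4
def fp(x):  return 4 * x**3
def fpp(x): return 12 * x**2

def Af(x):
    return fpp(x) - x * fp(x)  # OU generator: b(x) = -x, sigma^2 = 2

def phi(x):  # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# trapezoidal rule for \int Af dmu over [-10, 10]; the tails are negligible
n, lo, hi = 20000, -10.0, 10.0
h = (hi - lo) / n
integral = 0.5 * (Af(lo) * phi(lo) + Af(hi) * phi(hi))
for i in range(1, n):
    x = lo + i * h
    integral += Af(x) * phi(x)
integral *= h
print(abs(integral))  # close to 0
```

Analytically, ∫(12x² − 4x⁴) dN(0,1) = 12·1 − 4·3 = 0, so the quadrature residual is pure discretization error.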
Suppose the state process X and control process u are stationary and that the nondecreasing process ξ has stationary increments and finite first moment. Then there exist a probability measure µ_0 on E × U_0 and a measure µ_1 on E × U_0 × [0, ∞) satisfying

µ_0(H) = P{(X(s), u(s)) ∈ H}, H ∈ B(E × U_0),

for each s and

t µ_1(H) = E[∫_0^t I_H(X(s−), u(s−), Δξ(s)) dξ_s], H ∈ B(E × U_0 × [0, ∞)),

for each t. Let D be the collection of f ∈ C²(R^d) for which (1.5) is a martingale. Then the martingale property implies

E[f(X(t))] − E[∫_0^t Af(X(s), u(s)) ds] − E[∫_0^t Bf(X(s−), u(s−), Δξ(s)) dξ_s] = E[f(X(0))]

and, under appropriate integrability assumptions,

∫ Af dµ_0 + ∫ Bf dµ_1 = 0   (1.7)

for each f ∈ D.
As with (1.2), we would like to show that measures µ_0 and µ_1 satisfying (1.7) correspond to a stationary solution of a martingale problem defined in terms of A and B. The validity of this assertion is, of course, dependent on having the correct formulation of the martingale problem.

For a complete, separable metric space S, let L(S) denote the space of measures ξ on S × [0, ∞) satisfying ξ(S × [0, t]) < ∞ for each t, and let ξ_t denote the restriction of ξ to S × [0, t]. L(S) is topologized so that ξ^n → ξ if and only if there exists a sequence {t_k}, with t_k → ∞, such that, for each t_k, ξ^n_{t_k} converges weakly to ξ_{t_k}, which in turn implies ξ^n_t converges weakly to ξ_t for each t satisfying ξ(S × {t}) = 0. Finally, we define L^(m)(S) ⊂ L(S) to be the set of ξ such that ξ(S × [0, t]) = t for each t > 0.

Formulation of martingale problem
Throughout, we will assume that the state space E and control space U are complete, separable, metric spaces.
It is sometimes convenient to formulate martingale problems and forward equations in terms of multi-valued operators. For example, even if one begins with a single-valued operator, certain closure operations lead naturally to multi-valued operators. Let A ⊂ B(E) × B(E). A measurable process X is a solution of the martingale problem for A if there exists a filtration {F_t} such that, for each (f, g) ∈ A,

f(X(t)) − f(X(0)) − ∫_0^t g(X(s)) ds

is an {F_t}-martingale. Let A_S be the linear span of A. Note that a solution of the martingale problem or forward equation for A is a solution for A_S. We will say that A is dissipative if and only if A_S is dissipative, that is, for (f, g) ∈ A_S and λ > 0,

‖λf − g‖ ≥ λ‖f‖.

We will say that A is a pre-generator if A is dissipative and there exist sequences of functions λ_n : E → [0, ∞) and transition functions µ_n(x, dy) on E such that, for each (f, g) ∈ A,

g(x) = lim_{n→∞} λ_n(x) ∫_E (f(y) − f(x)) µ_n(x, dy)   (1.10)

for each x ∈ E. Note that we have not assumed that µ_n and λ_n are measurable functions of x.
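In expectation form, the martingale property reads E_x[f(X(t))] − f(x) − ∫_0^t E_x[g(X(s))] ds = 0. A minimal finite-state sketch of this identity (our own, assuming E = {0, 1}, a generator matrix Q, and g = Qf):

```python
# Finite-state sketch (assumption: E = {0,1}) of the martingale-problem
# identity: if Q is the generator matrix of a Markov chain, then
#   E_x[f(X(t))] - f(x) - \int_0^t E_x[(Qf)(X(s))] ds = 0,
# the expectation form of requiring that
# f(X(t)) - f(X(0)) - \int_0^t (Qf)(X(s)) ds be a martingale.

Q = [[-1.0, 1.0], [2.0, -2.0]]
f = [3.0, -1.0]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def expm_vec(t, v, terms=60):
    # e^{tQ} v via the truncated exponential series
    out = v[:]
    term = v[:]
    for k in range(1, terms):
        term = [t / k * x for x in mat_vec(Q, term)]
        out = [a + b for a, b in zip(out, term)]
    return out

t, x, n = 1.0, 0, 1000
Qf = mat_vec(Q, f)
# trapezoidal approximation of \int_0^t (e^{sQ} Qf)(x) ds
h = t / n
acc = 0.5 * (Qf[x] + expm_vec(t, Qf)[x])
for i in range(1, n):
    acc += expm_vec(i * h, Qf)[x]
acc *= h

lhs = expm_vec(t, f)[x] - f[x] - acc
print(abs(lhs))  # close to 0
```

Here the semigroup e^{tQ} plays the role of the transition operator, so the check is exact up to series truncation and quadrature error.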

Remark 1.1 If A ⊂ C(E) × C(E) (C(E) denotes the bounded continuous functions on E) and
for each x ∈ E, there exists a solution ν^x of the forward equation for A with ν^x_0 = δ_x that is right-continuous (in the weak topology) at zero, then A is a pre-generator. In particular, since ν^x_t f = f(x) + ∫_0^t ν^x_s g ds, for λ > 0 we have

∫_0^∞ λ e^{−λt} ν^x_t (λf − g) dt = λ f(x),

which implies ‖λf − g‖ ≥ λf(x) and hence dissipativity, and if we take λ_n = n and µ_n(x, ·) = ν^x_{1/n},

n ∫_E (f(y) − f(x)) ν^x_{1/n}(dy) = n(ν^x_{1/n} f − f(x)) = n ∫_0^{1/n} ν^x_s g ds → g(x).

(We do not need to verify that ν^x_t is a measurable function of x for either of these calculations.) If E is locally compact and D(A) ⊂ Ĉ(E) (Ĉ(E), the continuous functions vanishing at infinity), then the existence of λ_n and µ_n satisfying (1.10) implies A is dissipative. In particular, A_S will satisfy the positive maximum principle, that is, if (f, g) ∈ A_S and f(x_0) = ‖f‖, then g(x_0) ≤ 0. If E is compact, A ⊂ C(E) × C(E), and A satisfies the positive maximum principle, then A is a pre-generator. If E is locally compact, A ⊂ Ĉ(E) × Ĉ(E), and A satisfies the positive maximum principle, then A can be extended to a pre-generator on E^∆, the one-point compactification of E. See Ethier and Kurtz (1986), Theorem 4.5.4.

To obtain results of the generality we would like, we must allow relaxed controls (controls represented by probability distributions on U) and a relaxed formulation of the singular part. We now give a precise formulation of the martingale problem we will consider. To simplify notation, we will assume that A and B are single-valued.
Let A, B : D ⊂ C(E) → C(E × U) and ν_0 ∈ P(E). (Note that the example above with B given by (1.6) will be of this form for D = C²_c and U = U_0 × [0, ∞).) Let (X, Λ) be an E × P(U)-valued process and Γ be an L(E × U)-valued random variable. Let Γ_t denote the restriction of Γ to E × U × [0, t]. Then (X, Λ, Γ) is a relaxed solution of the singular, controlled martingale problem for (A, B, ν_0) if there exists a filtration {F_t} such that (X, Λ, Γ_t) is {F_t}-progressive, X(0) has distribution ν_0, and for every f ∈ D,

f(X(t)) − f(X(0)) − ∫_0^t ∫_U Af(X(s), u) Λ_s(du) ds − ∫_{E×U×[0,t]} Bf(x, u) Γ(dx × du × ds)   (1.11)

is an {F_t}-martingale. For the model (1.4) above, the L(E × U)-valued random variable Γ of (1.11) is given by

Γ(H × [0, t]) = ∫_0^t I_H(X(s−), u(s−), Δξ(s)) dξ_s.

Rather than require all control values u ∈ U to be available for every state x ∈ E, we allow the availability of controls to depend on the state. Let U ⊂ E × U be a closed set, and define U_x = {u : (x, u) ∈ U}. Let (X, Λ, Γ) be a solution of the singular, controlled martingale problem for (A, B, ν_0). The control Λ and the singular control process Γ are admissible if for every t,

∫_0^t Λ_s(U_{X(s)}) ds = t   (1.12)

and

Γ(U × [0, t]) = Γ(E × U × [0, t]).   (1.13)

Note that condition (1.12) essentially requires Λ_s to have support in U_x when X(s) = x.

Conditions on A and B
We assume that the absolutely continuous generator A and the singular generator B have the following properties.

Condition 1.2
i) A, B : D ⊂ C(E) → C(E × U), 1 ∈ D, and A1 = 0, B1 = 0.
ii) There exist ψ_A, ψ_B ∈ C(E × U), ψ_A, ψ_B ≥ 1, and constants a_f, b_f, f ∈ D, such that |Af(x, u)| ≤ a_f ψ_A(x, u) and |Bf(x, u)| ≤ b_f ψ_B(x, u) for every (x, u) ∈ E × U.
iii) Defining A_0 f = ψ_A^{−1} Af and B_0 f = ψ_B^{−1} Bf, (A_0, B_0) is separable in the sense that there exists a countable collection {g_k} ⊂ D such that (A_0, B_0) is contained in the bounded, pointwise closure of the linear span of {(g_k, A_0 g_k, B_0 g_k)}.
iv) For each u ∈ U, the operators A^u f = Af(·, u) and B^u f = Bf(·, u), f ∈ D, are pre-generators.
v) D is closed under multiplication and separates points.

Remark 1.3 Condition (ii), which will be used to establish uniform integrability, has been used in [27] with ψ depending only on the control variable and in [4] with dependence on both the state and control variables. The separability of condition (iii), which allows the embedding of the processes in a compact space, was first used in [2] for uncontrolled processes. The relaxation to the requirement that A and B be pre-generators was used in [19].
The generalization of (1.7) is

∫_{E×U} Af dµ_0 + ∫_{E×U} Bf dµ_1 = 0   (1.14)

for each f ∈ D, where µ_0 ∈ P(E × U) and µ_1 is a measure on E × U. Note that if ψ_A is µ_0-integrable and ψ_B is µ_1-integrable, then the integrals in (1.14) exist.

Example 1.4 Reflecting diffusion processes.
The most familiar class of processes of the kind we consider are reflecting diffusion processes satisfying equations of the form

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t m(X(s)) dξ_s,

where X is required to remain in the closure of a domain D (assumed smooth for the moment) and ξ increases only when X is on the boundary of D. Then there is no control, so

Af(x) = (1/2) Σ_{i,j} a_ij(x) ∂²f(x)/∂x_i∂x_j + b(x)·∇f(x),

where a(x) = ((a_ij(x))) = σ(x)σ(x)^T. In addition, under reasonable conditions ξ will be continuous, so

Bf(x) = m(x)·∇f(x).

If µ_0 is a stationary distribution for X, then (1.14) must hold with the additional restrictions that µ_0 is a probability measure on the closure of D and µ_1 is a measure on ∂D.
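A hand-worked one-dimensional instance (ours, not the paper's): for reflecting Brownian motion with drift on [0, ∞), with dX = b dt + σ dW + dξ, b < 0, and reflection direction m(0) = 1, the stationary law is exponential, µ_1 is a point mass at the boundary, and the identity of (1.14) can be checked by quadrature.

```python
import math

# Hand-worked 1-d sketch (not from the paper) of the stationary identity
# for a reflecting diffusion on [0, infinity):
#   Af(x) = (sigma^2/2) f''(x) + b f'(x),  Bf(x) = f'(x)  (m = 1 at 0),
#   mu_0(dx) = lam exp(-lam x) dx with lam = -2b/sigma^2,  mu_1 = |b| delta_0,
# so the identity reads  \int Af dmu_0 + |b| f'(0) = 0.

b, sigma = -1.5, 1.0
lam = -2.0 * b / sigma**2

def f(x):   return math.exp(-x) * math.cos(x)   # smooth test function
def fp(x):  return -math.exp(-x) * (math.cos(x) + math.sin(x))
def fpp(x): return 2.0 * math.exp(-x) * math.sin(x)

def Af(x):
    return 0.5 * sigma**2 * fpp(x) + b * fp(x)

def dens(x):
    return lam * math.exp(-lam * x)

# trapezoidal rule for \int_0^L Af dmu_0; the tail beyond L is negligible
L, n = 30.0, 100000
h = L / n
acc = 0.5 * (Af(0.0) * dens(0.0) + Af(L) * dens(L))
for i in range(1, n):
    x = i * h
    acc += Af(x) * dens(x)
acc *= h

residual = acc + abs(b) * fp(0.0)
print(abs(residual))  # close to 0
```

Integration by parts shows the boundary mass must be |b|, which is exactly what the quadrature confirms.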
If m is not continuous (which is typically the case for the reflecting Brownian motions that arise in heavy traffic limits for queues), then a natural approach is to introduce a "control" in the singular/boundary part, so that Bf(x, u) = u·∇f(x) and the set U ⊂ D̄ × U that determines the admissible controls is the closure of {(x, m(x)) : x ∈ ∂D}. Then

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds − ∫_0^t ∫_U u·∇f(X(s)) Λ_s(du) dξ_s

is a martingale, where again, under reasonable conditions, ξ is continuous and, by admissibility, Λ_s is a probability measure on U_{X(s)}. In particular, if m is continuous at X(s), then ∫_U u Λ_s(du) = m(X(s)), and if m is not continuous at X(s), then the direction of reflection ∫_U u Λ_s(du) is a convex combination of the limit points of m at X(s).

Example 1.5 Diffusions with jumps from the boundary.
Assume that D is an open domain and that, for x ∈ ∂D, x + m(x) ∈ D. Assume that

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t m(X(s−)) dξ_s,

where ξ is required to be the counting process that "counts" the number of times that X has hit the boundary of D, that is, assuming X(0) ∈ D, X diffuses until the first time τ_1 that X hits the boundary (τ_1 = inf{s > 0 : X(s−) ∈ ∂D}) and then jumps to X(τ_1) = X(τ_1−) + m(X(τ_1−)). The diffusion then continues until the next time τ_2 that the process hits the boundary, and so on. (In general, this model may not be well-defined since the {τ_k} may have a finite limit point, but we will not consider that issue.) Then A is the ordinary diffusion operator and

Bf(x) = f(x + m(x)) − f(x).

Models of this type arise naturally in the study of optimal investment in the presence of transaction costs. (See, for example, [8,25].) In the original control context, the model is of the form

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫_0^t u(s−) dξ_s,

where ξ counts the number of transactions. Note that ξ is forced to be a counting process, since otherwise the investor would incur infinite transaction costs in a finite amount of time. We then have A as before and Bf(x, u) = f(x + u) − f(x). D and m are then determined by the solution of the optimal control problem.
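A hand-worked one-dimensional instance of such a jump boundary model (ours, not the paper's): deterministic drift b ≡ 1 on D = (0, 1), with a jump from the boundary point 1 back to 0 (so m(1) = −1). Uniform µ_0 on (0, 1) and µ_1 = δ_1 satisfy the stationarity identity, as a quick quadrature confirms.

```python
import math

# Deterministic 1-d sketch (not from the paper): drift b = 1 on D = (0,1),
# jump from 1 back to 0, so Af = f' and Bf(1) = f(0) - f(1).
# With mu_0 = Lebesgue measure on (0,1) and mu_1 = delta_1, the stationary
# identity reads  \int_0^1 f'(x) dx + (f(0) - f(1)) = 0.

def f(x):  return math.sin(2 * x) + x * x
def fp(x): return 2 * math.cos(2 * x) + 2 * x

n = 10000
h = 1.0 / n
acc = 0.5 * (fp(0.0) + fp(1.0))
for i in range(1, n):
    acc += fp(i * h)
acc *= h  # trapezoidal approximation of \int_0^1 f' dx

residual = acc + (f(0.0) - f(1.0))
print(abs(residual))  # close to 0
```

The identity here is just the fundamental theorem of calculus: the absolutely continuous part transports mass to the boundary at rate 1, and the jump returns it to 0.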

Example 1.6 Tracking problems.
A number of authors (see, for example, [14,26]) have considered a class of tracking problems that can be formulated as follows: Let the location of the object to be tracked be given by a Brownian motion W, and let the location of the tracker be given by

Y(t) = Y(0) + ∫_0^t u(s) dξ_s,

where |u(s)| ≡ 1. The object is to keep X ≡ W − Y small while not consuming too much fuel, measured by ξ. Setting X(0) = −Y(0), we have

X(t) = X(0) + W(t) − ∫_0^t u(s) dξ_s,

so that Af(x) = (1/2)∆f(x) and

Bf(x, u, δ) = (f(x − δu) − f(x))/δ, δ > 0,

extending to Bf(x, u, δ) = −u·∇f(x) for δ = 0. As before, δ represents discontinuities in ξ; that is, the martingale problem requires that

f(X(t)) − f(X(0)) − ∫_0^t (1/2)∆f(X(s)) ds − ∫_0^t Bf(X(s−), u(s−), Δξ(s)) dξ_s

be a martingale. For appropriate cost functions, the optimal solution is a reflecting Brownian motion in a domain D.

Statement of main results.
In the context of Markov processes (no control), results of the type we will give appeared first in work of Weiss [29] for reflecting diffusion processes. He worked with a submartingale problem rather than a martingale problem, but ordinarily it is not difficult to see that the two approaches are equivalent.

We say that an L(E)-valued random variable Γ has stationary increments if, for a_i < b_i, i = 1, . . . , m, and H_1, . . . , H_m ∈ B(E), the distribution of (Γ(H_1 × (t + a_1, t + b_1]), . . . , Γ(H_m × (t + a_m, t + b_m])) does not depend on t. Let X be a measurable stochastic process defined on a complete probability space (Ω, F, P), and let N ⊂ F be the collection of null sets; F^X_t = σ(X(s) : s ≤ t) ∨ σ(N) then denotes the completed filtration generated by X. A transition function from E_1 to E_2 is a function q : E_1 × B(E_2) → [0, 1] such that, for each x ∈ E_1, q(x, ·) is a Borel probability measure on E_2, and, for each D ∈ B(E_2), q(·, D) ∈ B(E_1). If E = E_1 = E_2, then we say that q is a transition function on E.

Theorem 1.7 Let A and B satisfy Condition 1.2. Suppose that µ_0 ∈ P(E × U) and the measure µ_1 on E × U satisfy (1.14) and

∫_{E×U} ψ_A dµ_0 + ∫_{E×U} ψ_B dµ_1 < ∞,   (1.17)

and let η_0 and η_1 be transition functions from E to U satisfying µ_i(dx × du) = η_i(x, du) µ^E_i(dx), i = 0, 1. Then there exist a process X and a random measure Γ on E × [0, ∞) such that X is stationary with X(t) having distribution µ^E_0, Γ has stationary increments with E[Γ(· × [0, t])] = t µ^E_1(·), and
• For each f ∈ D,

f(X(t)) − f(X(0)) − ∫_0^t ∫_U Af(X(s), u) η_0(X(s), du) ds − ∫_{E×[0,t]} ∫_U Bf(x, u) η_1(x, du) Γ(dx × ds)

is a martingale.

Remark 1.8
The definition of the solution of a singular, controlled martingale problem did not require that Γ be adapted to {F^X_{t+}}, and it is sometimes convenient to work with solutions that do not have this property. Lemma 6.1 ensures, however, that for any solution with a nonadapted Γ, an adapted Γ can be constructed. Theorem 1.7 can in turn be used to extend the results in the Markov (uncontrolled) setting to operators with range in M(E), the (not necessarily bounded) measurable functions on E; that is, we relax both the boundedness and the continuity assumptions of earlier results.

Corollary 1.9 Let E be a complete, separable metric space. Let Â, B̂ : D ⊂ C(E) → M(E), and suppose µ_0 ∈ P(E) and a measure µ_1 on E satisfy

∫_E Âf dµ_0 + ∫_E B̂f dµ_1 = 0, f ∈ D.

Assume that there exist a complete, separable metric space U, operators A, B : D → C(E × U) satisfying Condition 1.2, and transition functions η_0 and η_1 from E to U such that

Âf(x) = ∫_U Af(x, u) η_0(x, du), B̂f(x) = ∫_U Bf(x, u) η_1(x, du).

Then there exists a solution (X, Γ) of the (uncontrolled) singular martingale problem for (Â, B̂, µ_0) such that X is stationary and Γ has stationary increments.
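Corollary 1.9 assumes that a measurable operator Â is an average of a continuous A over a transition function η_0. A hand-constructed illustration (ours, not the paper's), with a discontinuous drift written as a control average:

```latex
% Hand-constructed example (not from the paper) of the hypothesis of
% Corollary 1.9: a discontinuous drift expressed as a control average.
\[
E = \mathbb{R}, \qquad U = [-1,1], \qquad
\hat{A}f(x) = \operatorname{sgn}(x)\, f'(x) \in M(E),
\]
\[
Af(x,u) = u\, f'(x) \in C(E \times U), \qquad
\eta_0(x,\cdot) = \delta_{\operatorname{sgn}(x)} \ (x \neq 0), \qquad
\eta_0(0,\cdot) = \tfrac{1}{2}\bigl(\delta_{-1} + \delta_{1}\bigr),
\]
\[
\hat{A}f(x) = \int_U Af(x,u)\, \eta_0(x,du).
\]
```

Here Â is measurable but not continuous at 0, while A is jointly continuous, which is exactly the structure the corollary exploits.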
Âf(x) = (1/2) Σ_{i,j} a_ij(x) ∂²f(x)/∂x_i∂x_j + b(x)·∇f(x) + ∫_{R^d} (f(x + y) − f(x) − I_{{|y|≤1}} y·∇f(x)) ν(x, dy),

where a = ((a_ij)) is a measurable function with values in the space of nonnegative-definite d × d matrices, b is a measurable R^d-valued function, and ν is an appropriately measurable mapping from R^d into the space of measures γ satisfying

∫_{R^d} |y|² ∧ 1 γ(dy) < ∞.

Given {ν_t} satisfying the forward equation (1.23) and the integrability condition (1.22), there exists a solution (X, Γ) of the singular martingale problem for (Â, B̂, ν_0); that is, there exists a filtration {F_t} such that

f(X(t)) − f(X(0)) − ∫_0^t Âf(X(s)) ds − ∫_{E×[0,t]} B̂f(x) Γ(dx × ds)

is an {F_t}-martingale for each f ∈ D. If uniqueness holds for the martingale problem for (Â, B̂, ν_0) in the sense that the distribution of X is uniquely determined, then (1.23) uniquely determines {ν_t} among solutions satisfying the integrability condition (1.22).
The results in the literature for models without the singular term B have had a variety of applications including an infinite dimensional linear programming formulation of stochastic control problems [1,21,28], uniqueness for filtering equations [3,5,20], uniqueness for martingale problems for measure-valued processes [9], and characterization of Markov functions (that is, mappings of a Markov process under which the image is still Markov) [19]. We anticipate a similar range of applications for the present results. In particular, in a separate paper, we will extend the results on the linear programming formulation of stochastic control problems to models with singular controls. A preliminary version of these results applied to queueing models is given in [22].
The paper is organized as follows. Properties of the measure Γ (or, more precisely, of the nonadapted precursor of Γ) are discussed in Section 2. A generalization of the existence theorem without the singular operator B is given in Section 3. Theorem 1.7 is proved in Section 4, using the results of Section 3. Theorem 1.11 is proved in Section 5.
2 Properties of Γ

Theorems 1.7 and 1.11 say very little about the random measure Γ that appears in the solution of the martingale problem other than to relate its expectation to the measures µ_1 and µ. The solution, however, is constructed as a limit of approximate solutions, and under various conditions, a more careful analysis of this limit reveals a great deal about Γ.
Essentially, the approximate solutions X^n are obtained as solutions of regular martingale problems corresponding to operators of the form

C_n f(x) = β^n_0(x) ∫_U Af(x, u) η_0(x, du) + n β^n_1(x) ∫_U Bf(x, u) η_1(x, du),

where η_0 and η_1 are defined in Theorem 1.7 and β^n_0 and β^n_1 are defined as follows: For n > 1, let

µ_n = K_n^{−1} (µ_0 + (1/n) µ_1), K_n = 1 + (1/n) µ_1(E × U).

Noting that µ^E_0 and µ^E_1 are absolutely continuous with respect to µ^E_n, we define

β^n_0 = dµ^E_0/dµ^E_n, β^n_1 = (1/n) dµ^E_1/dµ^E_n,

which makes β^n_0 + β^n_1 = K_n.
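The construction above can be checked in a finite-state sketch (our toy example, with E = {0, 1}, no control, and generator matrices standing in for A and B), where the Radon–Nikodym derivatives are just pointwise ratios:

```python
# Finite-state sketch (not from the paper) of the Section 2 construction.
# Q_A and Q_B are generator matrices; mu0 and mu1 are chosen so that
# mu0.Q_A + mu1.Q_B = 0, the finite-state analogue of (1.14). We then form
# mu_n, beta_0^n, beta_1^n and check that beta_0^n + beta_1^n = K_n and that
# mu_n annihilates C_n f = beta_0^n Q_A f + n beta_1^n Q_B f.

Q_A = [[-1.0, 1.0], [2.0, -2.0]]   # stationary law (2/3, 1/3)
Q_B = [[-3.0, 3.0], [1.0, -1.0]]   # stationary law (1/4, 3/4)
mu0 = [2/3, 1/3]
mu1 = [0.5 * 1/4, 0.5 * 3/4]       # total mass 1/2, proportional to (1/4, 3/4)

def mat_vec(Q, f):  # (Qf)(x) = sum_y Q[x][y] f(y)
    return [sum(Q[x][y] * f[y] for y in range(2)) for x in range(2)]

def integral(m, g):  # \int g dm for a measure m on {0,1}
    return sum(m[x] * g[x] for x in range(2))

f = [1.7, -0.3]  # arbitrary test function

# stationarity identity: \int Q_A f dmu0 + \int Q_B f dmu1 = 0
lhs = integral(mu0, mat_vec(Q_A, f)) + integral(mu1, mat_vec(Q_B, f))
assert abs(lhs) < 1e-12

n = 5
K_n = 1 + sum(mu1) / n
mu_n = [(mu0[x] + mu1[x] / n) / K_n for x in range(2)]
beta0 = [mu0[x] / mu_n[x] for x in range(2)]          # d mu0 / d mu_n
beta1 = [(mu1[x] / n) / mu_n[x] for x in range(2)]    # (1/n) d mu1 / d mu_n

for x in range(2):
    assert abs(beta0[x] + beta1[x] - K_n) < 1e-12

Cnf = [beta0[x] * mat_vec(Q_A, f)[x] + n * beta1[x] * mat_vec(Q_B, f)[x]
       for x in range(2)]
print(integral(mu_n, Cnf))  # mu_n annihilates C_n
```

Algebraically, µ_n β^n_0 = µ_0 and n µ_n β^n_1 = µ_1, so ∫C_n f dµ^E_n collapses to the original identity.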

Remark 2.1
In many examples (e.g., the stationary distribution for a reflecting diffusion), µ 0 and µ 1 will be mutually singular. In that case, β n 0 = K n on the support of µ 0 and β n 1 = K n on the support of µ 1 . We do not, however, require µ 0 and µ 1 to be mutually singular.

It follows that

∫_E C_n f dµ^E_n = 0, f ∈ D,

and the results of Section 3 give a stationary solution X^n of the martingale problem for C_n, where X^n has marginal distribution µ^E_n. The proofs of the theorems in the generality in which they are stated involve the construction of an abstract compactification of E. In this section, we avoid that technicality by assuming that E is already compact or that we can verify a compact containment condition for {X^n}. Specifically, we assume that for each ε > 0 and T > 0, there exists a compact set K_{ε,T} ⊂ E such that

inf_n P{X^n(t) ∈ K_{ε,T}, 0 ≤ t ≤ T} ≥ 1 − ε.   (2.1)

To better understand the properties of Γ*, we consider a change of time. Let τ_n satisfy

∫_0^{τ_n(t)} (β^n_0(X^n(s)) + n β^n_1(X^n(s))) ds = t,

and define Y^n = X^n ∘ τ_n,

γ^n_0(t) = ∫_0^{τ_n(t)} β^n_0(X^n(s)) ds, γ^n_1(t) = ∫_0^{τ_n(t)} n β^n_1(X^n(s)) ds.

Since γ^n_0(t) + γ^n_1(t) = t, the derivatives γ̇^n_0 and γ̇^n_1 are both bounded by 1. It follows that {(Y^n, γ^n_0, γ^n_1)} is relatively compact in the Skorohod topology. (Since {Y^n} satisfies the compact containment condition and γ^n_0 and γ^n_1 are uniformly Lipschitz, relative compactness follows by Theorems 3.9.1 and 3.9.4 of [11].) We can select a subsequence along which (X^n, Γ^n) converges to (X, Γ*) and (Y^n, γ^n_0, γ^n_1) converges to a process (Y, γ_0, γ_1). Note that, in general, X^n does not converge to X in the Skorohod topology. (The details are given in Section 4.) In fact, one way to describe the convergence is that (X^n ∘ τ_n, τ_n) ⇒ (Y, γ_0) in the Skorohod topology and X = Y ∘ γ_0^{−1}. The nature of the convergence is discussed in [17], and the corresponding topology is given in [13]. In particular, the finite-dimensional distributions of X^n converge to those of X except for a countable set of time points.
Proof. By invoking the Skorohod representation theorem, we can assume that the convergence of (X^n, Γ^n, X^n ∘ τ_n, γ^n_0, γ^n_1) is almost sure, in the sense that X^n(t) → X(t) a.s. for all but countably many t, Γ^n → Γ* almost surely in L(E), and (X^n ∘ τ_n, γ^n_0, γ^n_1) → (Y, γ_0, γ_1) almost surely in the Skorohod topology. Parts (a) and (b) follow as in the proof of Theorem 1.7, applying (2.1) to avoid having to compactify E.
Since γ^n_0(t) and K_n τ_n(t) are asymptotically the same, we must have lim_{t→∞} γ_0(t) = ∞ a.s.
Then for bounded continuous g and all but countably many t, which implies (2.4).
and Part (e) follows.
Proof. We show that (2.2) converges in distribution to (2.7). Then (2.7) can be shown to be a martingale by essentially the same argument used in the proof of Theorem 1.7. If Af and Bf were continuous, then the convergence in distribution would be immediate. Let g be continuous.
The right side can be made arbitrarily small by selecting the appropriate g ∈ C(E). Note that for any nonnegative, bounded continuous function h, and the inequality between the left and right sides extends to all nonnegative measurable h. It follows that and the convergence of (2.2) to (2.7) follows.
In general, γ_0^{−1} need not be continuous. Continuity of γ_0^{−1} is equivalent to γ_0 being strictly increasing. The following lemma, which is a simple extension of Lemma 6.1.6 of [15], gives conditions for γ_0 to be strictly increasing. We say that (Z, ζ) is a solution of the stopped martingale problem for B if ζ is a stopping time and

f(Z(t ∧ ζ)) − f(Z(0)) − ∫_0^{t∧ζ} ∫_U Bf(Z(s), u) Λ_s(du) ds

is a martingale for each f ∈ D.

Lemma 2.4 Suppose that every solution of the stopped martingale problem for B satisfies condition (2.9). Then γ_0 is strictly increasing.

Remark 2.5 In the case of reflecting diffusions in a domain D, Bf(x) = m(x)·∇f(x), and any solution of the stopped martingale problem for B satisfies

Z(t) = Z(0) + ∫_0^t m(Z(s)) ds.

Results on solutions of ordinary differential equations can then be used to verify (2.9).
In particular, (Z, ζ) is a solution of the stopped martingale problem for B. Since, with probability one, γ_1 increases only when Y ∈ K_1, and (2.9) holds by assumption, γ_0 must be strictly increasing.

There exist q, a transition function on E satisfying (2.4), and a counting process N such that
a) (X, Γ*) is a solution of the singular, controlled martingale problem for (A, B).
and Ã and B̃ satisfy Condition 1.2. By Theorems 1.7 and 2.2, there exist (X, Θ, Γ̃*) and (Y, Φ, γ_0, γ_1). For f depending only on θ, we have that Φ is a solution of the martingale problem for C. Since the martingale problem for C is well-posed, it follows that Φ ∘ β can be extended to a solution Ψ of the martingale problem for C (see Lemma 4.5.16 of [11]). Note that N is {G_t}-adapted and that (2.10) is a {G_t}-martingale. Since (2.11) is a martingale by Lemma 2.3, and the difference of (2.11) and (2.12) is a martingale, it follows that (2.12) is a martingale.
Then, with probability one, and since the right side is a local martingale, the left side must be zero.
In the context of Theorem 2.6, the right analog of Lemma 2.4 would be a condition that implies N • γ −1 0 is still a counting process.
That is, every boundary jump lands inside the open set K^c_1, and hence Y is in K^c_1 for a positive time interval after each boundary jump.
Let M_f denote the martingale (2.11). Then, using the fact that (2.13) is a martingale and integrating against Φ, (2.14) is a martingale for every f ∈ D. But the collection of f for which (2.14) is a martingale is closed under bounded pointwise convergence, and so is all of B(E). Taking f = I_{K^c_1}, the resulting process is a martingale; but, since the integrand is nonpositive, that can hold only if the integral is identically zero.

Stationary solutions of controlled martingale problems
The objective of this section is to establish the existence of a particular form of stationary solution for the controlled martingale problem for a generator A. The formulation is obtained by taking Bf ≡ 0 for each f ∈ D above, so we drop any reference to B. We also denote µ 0 of (1.7) by µ and ψ A by ψ, since there will not be a µ 1 or a ψ B .
The first result of this type was by Echeverria [10] in the context of an uncontrolled Markov process (see also Ethier and Kurtz [11,Theorem 4.9.15]). Stockbridge [27] extended the result to controlled processes. In [27], the state and control spaces were locally compact, complete, separable, metric spaces and the control process was only shown to be adapted to the past of the state process. Bhatt and Karandikar [2] removed the local compactness assumption (on the state space) for uncontrolled processes. The stationary control process was shown to be a feedback control of the current state of the process (where the particular control is determined from the stationary measure) by Kurtz and Stockbridge [21] and Bhatt and Borkar [1]. Kurtz and Stockbridge also established this result for generators whose range consisted of bounded, measurable (not necessarily continuous) functions. The results were proved by Kurtz and Stockbridge under the assumption that the state and control spaces are locally compact and by Bhatt and Borkar under the assumption that the state space E is a complete, separable metric space and that the control space U is compact.
Here we make certain that the results are valid if both the state and control spaces are complete, separable metric spaces. Many of the recent proofs simply refer back to previous results when needed. In this section, we compile the previous results and provide complete details.
Suppose µ is a probability measure on E × U with

∫_{E×U} ψ dµ < ∞   (3.1)

and which satisfies

∫_{E×U} Af dµ = 0, f ∈ D.   (3.2)

Denote the state marginal by µ^E = µ(· × U), and let η be the regular conditional distribution of u given x, that is, η satisfies

µ(H_1 × H_2) = ∫_{H_1} η(x, H_2) µ^E(dx), H_1 ∈ B(E), H_2 ∈ B(U).   (3.3)

Our goal is to show that there exists a stationary process X such that the E × P(U)-valued process (X, η(X, ·)) is a stationary, relaxed solution of the controlled martingale problem for (A, µ^E). Note that if X is a stationary process with X(0) having distribution µ^E, the pair (X, η(X, ·)) is stationary and the one-dimensional distributions satisfy

E[I_{H_1}(X(t)) η(X(t), H_2)] = µ(H_1 × H_2), H_1 ∈ B(E), H_2 ∈ B(U).

Following Bhatt and Karandikar [2], we construct an embedding of the state space E in a compact space Ê. Without loss of generality, we can assume that {g_k} in the separability condition is closed under multiplication. Let I be the collection of finite subsets of positive integers, and for I ∈ I, let k(I) satisfy g_{k(I)} = Π_{i∈I} g_i. For each k, there exists a_k ≥ ‖g_k‖. Let

Ê = Π_{k=1}^∞ [−a_k, a_k].

Note that Ê is compact. Define G : E → Ê by

G(x) = (g_1(x), g_2(x), . . .).

Then G has a measurable inverse defined on the (measurable) set G(E). In this section and the next, we will typically denote measures on E by µ, µ̂, µ_0, µ_1, etc. and the corresponding measures on Ê by ν, ν̂, ν_0, ν_1, etc.
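A minimal sketch of the embedding with hand-picked g_k (our choice, not the paper's): on E = R, take g_k(x) = tanh(x)^k, which is closed under multiplication (g_i g_j = g_{i+j}) and separates points, so a_k = 1 and Ê is a cube.

```python
import math

# Minimal sketch (hand-picked g_k, not the paper's) of the compactification
# G : E -> E-hat. Take E = R and g_k(x) = tanh(x)^k, so {g_k} is closed
# under multiplication, separates points, a_k = 1, and E-hat = [-1,1]^infty
# (truncated here to K coordinates for the demo).

K = 8  # truncation of the infinite product

def G(x):
    return tuple(math.tanh(x) ** k for k in range(1, K + 1))

def G_inv(z):
    # measurable inverse on G(E): recover x from the first coordinate
    return math.atanh(z[0])

x = 1.234
z = G(x)
assert all(-1.0 <= zk <= 1.0 for zk in z)   # lands in the compact cube
assert abs(G_inv(z) - x) < 1e-12            # G is invertible on G(E)
assert abs(z[1] * z[2] - z[4]) < 1e-12      # closure: g_2 * g_3 = g_5
print("ok")
```

The point of the construction is that bounded functions of X become coordinates of a process with values in a compact product space, where compactness arguments are available.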
We will need the following lemmas.

Lemma 3.3 Let X^n, X be processes in D_Ê[0, ∞), and suppose that, for each t ≥ 0, X^n(t) and X(t) have a common distribution µ_t ∈ P(Ê).

Theorem 3.4 Let A satisfy Condition 1.2. Suppose µ ∈ P(E × U) satisfies (3.1), (3.2), and (3.7), and define µ^E and η by (3.3). Then there exists a stationary process X such that (X, η(X, ·)) is a stationary relaxed solution of the controlled martingale problem for (A, µ^E), η(X, ·) is an admissible (absolutely continuous) control, and for each t ≥ 0,

E[I_{H_1}(X(t)) η(X(t), H_2)] = µ(H_1 × H_2)   (3.8)

for every H_1 ∈ B(E) and H_2 ∈ B(U).

Remark 3.5
We will obtain X in the form X = G^{−1}(Z). It will be clear from the proof that there always exists a modification of Z with sample paths in D_Ê[0, ∞), but our assumptions do not imply that X will have sample paths in D_E[0, ∞). For example, take E = R and Af(x) = (1/2)(1 + x⁴) f''(x). It is easy to check that µ(dx) = c(1 + x⁴)^{−1} dx satisfies ∫_R Af(x) µ(dx) = 0, but the corresponding process will repeatedly "go out" at +∞ and "come back in" at −∞.
We first consider the case ψ ≡ 1.

Theorem 3.6 Let
A satisfy Condition 1.2 with ψ ≡ 1. Suppose µ ∈ P(E × U ) satisfies (3.1) and (3.2), and define µ E and η by (3.3). Then there exists a stationary process X such that (X, η(X, ·)) is a stationary relaxed solution of the controlled martingale problem for (A, µ E ) satisfying (3.8) and η(X, ·) is an admissible absolutely continuous control.
Proof. For n = 1, 2, 3, . . . , define the Yosida approximations A_n by

A_n g = Af for g = (I − n^{−1}A)f ∈ R(I − n^{−1}A),

and note that, for f ∈ D(A) and g = (I − n^{−1}A)f, ‖g − f‖ = n^{−1}‖Af‖ → 0. Let M be the linear subspace of functions of the form

h(x, u) = Σ_{i=1}^m f_i(x) g_i(x, u), f_1, . . . , f_m ∈ D(A) and g_1, . . . , g_m ∈ C(E × U).
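A finite-dimensional sketch of the Yosida approximation (assuming, purely for illustration, that A is a 2 × 2 generator matrix, so A_n = A(I − A/n)^{−1} is defined everywhere and bounded):

```python
# Finite-dimensional sketch (assumption: A is a 2x2 generator matrix) of
# the Yosida approximation A_n = A (I - A/n)^{-1}: each A_n is bounded,
# and A_n f -> A f as n -> infinity.

A = [[-1.0, 1.0], [2.0, -2.0]]

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def resolvent_vec(n, v):
    # solves (I - A/n) w = v in the 2x2 case by Cramer's rule
    a, b = 1 - A[0][0]/n, -A[0][1]/n
    c, d = -A[1][0]/n, 1 - A[1][1]/n
    det = a*d - b*c
    return [(d*v[0] - b*v[1])/det, (-c*v[0] + a*v[1])/det]

def A_n_vec(n, v):
    return mat_vec(A, resolvent_vec(n, v))

f = [1.0, -1.0]
Af = mat_vec(A, f)

def err(n):
    approx = A_n_vec(n, f)
    return max(abs(approx[i] - Af[i]) for i in range(2))

assert err(1000) < err(10)   # the error decreases in n
print(err(10**6))            # A_n f is close to A f for large n
```

The error is of order ‖A²f‖/n, which is what makes the approximation useful: A_n is bounded for each n while A itself need not be.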
Define the measure-valued random variable Λ^n as above. Since a probability measure on a complete, separable metric space is tight, for each ε > 0 there exists a compact K such that the corresponding bound holds. Let Z^n = G(X^n), with G defined by (3.4). Then, by (3.21) and the definition of Λ^n, for k = 1, 2, . . .,

g_k(X^n(t)) − g_k(X^n(0)) − ∫_0^t ∫_U A_n g_k(X^n(s), u) Λ^n_s(du) ds   (3.24)

is a martingale. Recalling that Π_{i∈I} Z^n_i = Z^n_{k(I)}, Theorems 3.9.4 and 3.9.1 of Ethier and Kurtz [11] imply the relative compactness of {Z^n} in D_Ê[0, ∞), and hence (Z^n, Λ^n) is relatively compact. Then Ag_k(G^{−1}(·), ·) can be approximated in L¹(ν) by bounded, continuous functions in C(Ê × U), and, as in Lemma 3.3, we see that for any limit point (Z, Λ), (3.24) converges in distribution, at least along a subsequence, to

Z_k(t) − Z_k(0) − ∫_0^t ∫_U Ag_k(G^{−1}(Z(s)), u) Λ_s(du) ds,

which will be a martingale with respect to the filtration {F^{Z,Λ}_t}. Note that Z is a stationary process (even though as continuous-time processes the X^n are not). Since for each t ≥ 0, Z(t) has distribution ν^E = ν(· × U), which satisfies ∫ f dν^E = ∫ f ∘ G dµ^E, by Lemma 3.1, Z(t) ∈ G(E) a.s., and hence we can define X(t) = G^{−1}(Z(t)). By the same argument, (3.22) converges in distribution to the corresponding martingale for (X, η(X, ·)). Since A is contained in the bounded pointwise closure of the linear span of {(g_k, Ag_k)}, we see that (X, η(X, ·)) is a solution of the controlled martingale problem for (A, µ^E). Finally, η(X, ·) is admissible since µ(U) = 1 implies η(x, U_x) = 1 a.e. µ^E(dx).

Proof of Theorem 3.4.
For each n ≥ 1, let

ψ_n = 1 + n^{−1}(ψ − 1),

and let c_n and k_n be the associated normalizing constants. Observe that ψ_n ≥ 1 for all n and that, as n → ∞, ψ_n(x, u) → 1, c_n → 1, and k_n → 1.

Define the operators A n on D(A) by
For each n, A_n and µ_n satisfy the conditions of Theorem 3.6, and η_n is defined from µ_n by (3.3). Note, in particular, that A_n = (ψ/ψ_n) A_0, so Condition 1.2(iii) is satisfied since ψ/ψ_n is bounded and A_0 satisfies the condition. Thus there exist stationary processes {Z^n} with sample paths in D_Ê[0, ∞) such that, setting X^n = G^{−1}(Z^n), (X^n, η_n(X^n, ·)) is a solution of the controlled martingale problem for (A_n, µ_n).
The relative compactness of {Z^n} is established by applying Theorem 3.9.1 of Ethier and Kurtz [11] and Theorem 4.5 of Stockbridge [27], exactly as in the proof of Theorem 4.7 of Stockbridge [27]. Let Z be a weak limit point of {Z^n}, and, to simplify notation, assume that the original sequence converges. As before, set X = G^{−1}(Z).
Let t_1, . . . , t_{m+1} ∈ {t ≥ 0 : P(Z(t) = Z(t−)) = 1} and h_1, . . . , h_m ∈ C(Ê). Since Z^n ⇒ Z as n → ∞, Lemma 3.3 does not apply directly to the integral term, but a similar argument works.

Theorem 3.4 establishes the existence of stationary processes on the complete, separable metric space E. The proof involves embedding E in the compact space Ê, demonstrating existence of appropriate stationary processes Z on Ê, and then obtaining the solution by applying the inverse G^{−1}. In the next section, it will be necessary to work directly with the processes Z.
We therefore state the corresponding existence result in terms of these processes; the proof, of course, has already been given.

Singular martingale problems
In this section, we characterize the marginal distributions of stationary solutions of the singular controlled martingale problem. Previous work of this nature includes the papers by Weiss [29] and Kurtz [16] which considered constrained processes. Weiss [29] characterized the marginal distribution of a stationary solution to a submartingale problem for diffusions in a bounded domain. Inspired by Weiss, Kurtz [16] used the results of Stockbridge [27] to characterize the stationary marginals for general constrained processes. The results of this section are more general than the previous results in that they apply to processes with singular control, constrained processes being a subclass of such processes, and the controls are identified in feedback form.
Let Ê be the compact space constructed in Section 3, and let G be the mapping from E into Ê given by (3.4). Throughout this section, A and B satisfy Condition 1.2.

Lemma 4.1 Let A and B satisfy Condition 1.2, let µ_0 and µ_1 satisfy (1.14), define ν_0 and ν_1 as in (4.2), and let ν^E, ν^E_0 and ν^E_1 denote the corresponding marginals on Ê. Then there exist a stationary process Z on Ê and non-negative, continuous, non-decreasing processes λ_0 and λ_1 such that
• λ_0 and λ_1 have stationary increments,

Remark 4.2
By defining X = G −1 (Z), the conclusions of Lemma 4.1 can be stated in terms of a stationary E-valued process X. Since we will need to use the process Z in the sequel, we have chosen to express Lemma 4.1 in terms of this process.
Proof. We redefine µ so that it is a probability measure on E × Ũ, Ũ = U × {0, 1}, by setting

µ(H × {i}) = K^{−1} µ_i(H), H ∈ B(E × U), i = 0, 1,

where K = µ_0(E × U) + µ_1(E × U) is the normalizing constant above.
Observe that µ has marginal µ^E = K^{−1}(µ^E_0 + µ^E_1) and that both µ^E_0 and µ^E_1 are absolutely continuous with respect to µ^E. Hence we can write

dµ^E_0 = β_0 dµ^E, dµ^E_1 = β_1 dµ^E, β_0 + β_1 = K.

It follows that for each f ∈ D,

∫_{E×Ũ} Cf dµ = 0, where Cf(x, u, i) = K((1 − i) Af(x, u) + i Bf(x, u)).

This identity, together with the conditions on A and B, implies that the conditions of Theorem 3.7 are satisfied. Therefore there exists a stationary process Z, with relaxed control Λ determined by the conditional distributions of µ, such that Z(t) has distribution ν^E(·) = ν(· × Ũ). For i = 0, 1, define

λ_i(t) = ∫_0^t Λ_s(U × {i}) ds.

Then λ_0 and λ_1 have stationary increments and λ_0(t) + λ_1(t) = t.

Proof of Theorem 1.7
Proof. For n = 1, 2, 3, . . . , consider the operators A and B_n = nB. By (1.14) and (1.17), the measures µ_0 and µ_{n,1} = (1/n)µ_1 satisfy

∫_{E×U} Af dµ_0 + ∫_{E×U} B_n f dµ_{n,1} = 0, f ∈ D,

and A and B_n satisfy the conditions of Lemma 4.1. Define the probability measure µ_n = K_n^{−1}(µ_0 + (1/n)µ_1), where K_n is a normalizing constant, and the measures ν_n, ν_0 and ν_{n,1} as in (4.2). Let ν^E_n, ν^E_0, and ν^E_{n,1} denote the corresponding marginals on Ê. Then for each n, Lemma 4.1 implies that there exist a stationary process Z^n and non-negative, continuous, non-decreasing processes λ^n_0 and λ^n_1 having stationary increments such that λ^n_0(t) + λ^n_1(t) = t and, for each f ∈ D, the corresponding process (4.3) is an {F^{Z^n}_t}-martingale and Z^n(t) has distribution ν^E_n(·) = ν_n(· × U). Now observe that E[λ^n_1(t)] < C t/n, which converges to zero as n → ∞, and hence

λ^n_1(t) → 0 in probability as n → ∞.   (4.5)

Since λ^n_0(t) + λ^n_1(t) = t and dµ^E_0/dµ^E_n ≤ K_n, (4.4) and (4.5) imply λ^n_0(t) → t in probability as n → ∞.
We now show existence of a limiting process Z. We verify that the conditions of Corollary 1.4 of [17] are satisfied.
Consider the collection of coordinate functions {z_k} ⊂ C(Ê). Note that the compact containment condition is trivially satisfied and {z_k} separates points in Ê.
For t > 0, consider any partition 0 = t_0 < t_1 < · · · < t_m = t. Then the required estimate holds, where the last inequality follows from (1.16) and Condition 1.2. Thus condition (1.7) of [17, Corollary 1.4] is satisfied. By selecting a weakly convergent subsequence and applying the Skorohod representation theorem, if necessary, we may assume that there exists a process Z such that Z^n(t) → Z(t) a.s. for all but countably many t.
Now, for each n, define the random measure Γ̂^n on Ê × [0, ∞) satisfying

Γ̂^n(H_1 × H_2) = n ∫_{H_2} I_{H_1}(Z^n(s)) dλ^n_1(s)

for all H_1 ∈ B(Ê), H_2 ∈ B([0, ∞)). Then {Γ̂^n} is a sequence of L(Ê)-valued random variables. We show that this sequence of measure-valued random variables is relatively compact.
Note that for a complete, separable metric space S, a collection of measures K ⊂ L(S) is relatively compact if sup_{µ∈K} µ(S × [0, T]) < ∞ for each T, and if for each T and ε > 0 there exists a compact set K_{T,ε} ⊂ S such that sup_{µ∈K} µ(K^c_{T,ε} × [0, T]) < ε. Recall that Ê is compact, so the second condition is trivially satisfied by each Γ̂^n. Taking H = Ê and applying Markov's inequality, given ε > 0, taking a sequence {T_j} with T_j → ∞ and setting M_{T_j} = 2^j T_j µ^E_1(E)/ε shows that the sequence {Γ̂^n} of random measures is tight and hence relatively compact. By passing to an appropriate subsequence {n_k}, if necessary, and applying the Skorohod representation theorem, we can assume that there exists a random measure Γ̂ on Ê × [0, ∞) such that Γ̂^{n_k} → Γ̂ a.s. in L(Ê) for all but countably many t.

We now show that, for each k, each m ≥ 1, and each choice of 0 ≤ t_1 ≤ t_2 ≤ · · · ≤ t_m < t_{m+1}, (4.8) holds, which is true if and only if, for each k, (4.9) is a martingale. The analog of (4.8) for (Z^n, Γ̂^n) is (4.10). The idea is to let n → ∞ to establish (4.8). However, care needs to be taken since the {Γ̂^n} are not necessarily bounded measures. To overcome this difficulty, for each n ≥ 0 and M ≥ 0, we define the stopping time τ_{M,n} by

τ_{M,n} = inf{t : Γ̂^n(Ê × [0, t]) ≥ M}.   (4.14)

Now recall the definitions of the measures ν^E_0, ν^E_1, and ν^E_n on Ê (from the first paragraph of the proof). Observe that the measures ν^E_0 and ν^E_1 are absolutely continuous with respect to ν^E_n. We claim that for each g ∈ L¹(ν^E_0),

E[∫_0^t g(Z^n(s)) dλ^n_0(s)] → t ∫_Ê g dν^E_0

as n → ∞. To see this, fix g ∈ L¹(ν^E_0), let ε > 0 be given, and select ĝ ∈ C(Ê) (recall that Ê is compact, so ĝ is bounded) such that ∫ |g − ĝ| dν^E_0 < ε. The claim then follows, recalling that Z^n(s) has distribution ν^E_n and the definition of λ^n_0 in (4.4). Similarly, Z(t) will have distribution ν^E_0, and so

E[∫_0^t g(Z(s)) ds] = t ∫_Ê g dν^E_0.

We now consider the convergence of the singular terms. Noting that the claim holds for g ≥ 0, this convergence can be extended to all g ∈ L¹(ν^E_1) by approximating g by ĝ ∈ C(Ê) as above.
Combining these observations with (4.14), the expression in the expectation is dominated by a quantity R_M. Let R be defined as R_M with Γ̃_M replaced by Γ̃. Noting that R_M ↑ R and E[R] < ∞, the dominated convergence theorem implies that, as M → ∞, the expectation on the left side of (4.19) converges to the left side of (4.8), while the right side converges to zero. Consequently, (4.8) holds for t_1, . . . , t_m ∈ T. Right continuity of Z and Γ̃ then implies that (4.8) holds for all t, and thus (4.9) is a martingale.
The random measure Γ̃ has stationary increments but need not be adapted to the filtration generated by Z. Without loss of generality, assume the process Z is defined for all t, not just for t ≥ 0, and assume that Γ̃ takes values in measures on Ẽ × (−∞, ∞). Let F_t^Z = σ(Z(s) : s ≤ t) ∨ N, where N denotes the collection of null sets, so that {F_t^Z} is the completion of the filtration generated by the process Z, and let F_t = F_{t+}^Z. Then by Lemma 6.1, using the space Ẽ and taking H(x, s) = e^{−|s|}, there exists a predictable random measure Γ̂ satisfying (6.16). As a result, (4.9) will be an {F_{t+}^Z}-martingale with Γ̂ replacing Γ̃. Note that Γ̂ has stationary increments.
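For orientation, the defining property of a predictable compensator can be stated generically: for a random measure Γ and its predictable compensator Γ̂, expectations of nonnegative predictable integrands agree. This is a standard fact, sketched here; the precise statement (6.16) is not reproduced in the text above and may include an additional weighting such as the function H(x, s) = e^{−|s|}.

```latex
\[
E\!\left[\int_{E\times[0,\infty)} V(x,s)\,\hat\Gamma(dx\times ds)\right]
\;=\;
E\!\left[\int_{E\times[0,\infty)} V(x,s)\,\Gamma(dx\times ds)\right]
\quad\text{for every nonnegative predictable }V.
\]
```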
Define X = G^{-1}(Z) and the random measure Γ on E × R by By working with the completions, F

Proof of Theorem 1.11
Proof. We essentially follow the proof of Theorem 4.1 of Kurtz and Stockbridge [21]. Let α be chosen to satisfy (1.21). Define the operators Ã and B̃ by The following computation verifies that (Ã, B̃, μ̃_0, μ̃_1) satisfy (1.19). For f ∈ D and φ ∈ D_1, setting φ̄(s) = (φ(−1, s) + φ(1, s))/2, where the last equality follows by interchanging the order of the integrals with respect to r and s and observing that all the terms cancel.
At this point we could apply Corollary 1.9 to obtain the existence of a stationary space-time process (Y, Θ, S) and a boundary measure Γ with stationary distribution given by μ̃_0; however, we need to specify a more explicit form for the random measure. As in the proof of Theorem 1.7, for each n, define the operators B̃_n = nB̃ and the measures μ̃_1^n = (1/n)μ̃_1 and μ̃_n = K_n^{-1}(μ̃_0 + μ̃_1^n). Apply Lemma 4.1 to obtain stationary processes Z_n, Θ_n, and S_n, and processes λ_0^n and λ_1^n satisfying (4.4). The existence of limiting processes Z, Θ, and S follows as in the proof of Theorem 1.7, and in the current setting, (Θ_n, S_n) ⇒ (Θ, S) in the Skorohod topology. This stronger convergence of (Θ_n, S_n) allows us to be more explicit in describing the boundary measure Γ.
For each n, let Γ̃_n ∈ L(Ẽ) be the random measure defined as in the proof of Theorem 1.7. In terms of Γ̃_n, (5.5) becomes the assertion that

φ(Θ_n(t), S_n(t)) f(G^{-1}(Z_n(t))) − ∫_0^t Ã(φf)(G^{-1}(Z_n(s)), Θ_n(s), S_n(s)) ds − ∫_{Ẽ×[0,t]} B̃(φf)(G^{-1}(z), Θ_n(s), S_n(s)) Γ̃_n(dz × ds)

is a martingale. The argument in the proof of Theorem 1.7 shows that the sequence {Γ̃_n} is relatively compact, and the existence of the limit (Z, Θ, S, Γ̃), at least along a subsequence, follows as before. The Ẽ × {−1, 1} × [0, ∞)-valued process (Z, Θ, S) (which we may take to be defined for −∞ < t < ∞) is stationary, and the random measure Γ̃ (which we may take to be defined on Ẽ × (−∞, ∞)) has stationary time-increments. The convergence of (5.8) to a martingale follows as before, except for the last term. Applying the Skorohod representation theorem, we may assume that the convergence is almost sure.
Taking f ≡ 1 in (5.8), we see that the resulting process is a martingale, and it follows that (Θ_n, S_n) can be written as (Θ_n(t), S_n(t)) = (Θ̂_n(λ_0^n(t)), Ŝ_n(λ_0^n(t))), where (Θ̂_n, Ŝ_n) is a solution of the martingale problem for C given by Cφ(θ, r) = φ′(θ, r) + α(φ(−θ, 0) − φ(θ, r)), for φ ∈ D_1, with φ′ denoting the derivative in r. Uniqueness for this martingale problem (cf. [11, Theorem 4.4.1]) and the fact that λ_0^n(t) → t imply that (Θ, S) is a stationary solution of the martingale problem for C. It follows (see [21], page 624) that S is exponentially distributed at each time t: S increases linearly at rate 1 up to a random time that is exponentially distributed with parameter α, at which time it jumps to 0, and the cycle repeats. Similarly, Θ(t) = Θ(0)(−1)^{N(t)}, where N(t) is the number of returns to zero made by S in the time interval (0, t]. Note also that (Θ_n, S_n) converges to (Θ, S) in the Skorohod topology.
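As a quick illustration of the sawtooth dynamics just described, the following sketch simulates (Θ(t), S(t)): S increases linearly at rate 1 and jumps to 0 at the event times of a Poisson process with rate α, and Θ flips sign at each such jump. The code and its interface are hypothetical illustrations, not part of the paper.

```python
import random

def simulate_theta_s(alpha, t_end, theta0=1, rng=None):
    """Return (Theta(t_end), S(t_end)) for the cycle process described above.

    S increases linearly at rate 1 and jumps to 0 at exponential(alpha)
    interarrival times; Theta flips sign at each jump of S, so that
    Theta(t) = Theta(0) * (-1)**N(t), with N(t) the number of resets in (0, t].
    """
    rng = rng or random.Random()
    t = 0.0          # running clock of reset times
    last_jump = 0.0  # time of the most recent reset of S
    theta = theta0
    while True:
        t += rng.expovariate(alpha)  # time of the next reset of S
        if t > t_end:
            # S(t_end) is the age since the last reset (linear growth at rate 1)
            return theta, t_end - last_jump
        last_jump = t
        theta = -theta
```

At stationarity, S(t) is approximately exponential with parameter α, which can be checked empirically by averaging many independent runs.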
Some care needs to be taken in analyzing the convergence of the last term in (5.8). We can approximate B̃(φf)(G^{-1}(z), Θ_n(s), S_n(s)) by h(z, Θ_n(s), S_n(s)) with h ∈ C(Ẽ × {−1, 1} × [0, ∞)) (that is, select h so that ∫_{Ẽ×{−1,1}×[0,∞)} |B̃(φf)(G^{-1}(z), θ, r) − h(z, θ, r)| μ̃_1(dz × dθ × dr) is small), but we cannot rule out the possibility that Γ̃ has a discontinuity at the jump times of (Θ, S). In particular, µ(E × {0}) may not be zero. For instance, in the transaction cost models (see Example 1.5 and [8, 25]), if the support of ν_0 is not a subset of the control region, then the optimal solution may instantaneously jump to the boundary of the control region at time zero. In this situation, {ν_t} will not be right continuous at zero. Since, in the process we are constructing, Y = G^{-1}(Z) "starts over" with distribution ν_0 at each time τ_k at which S jumps back to zero, in this situation Y must take an instantaneous jump governed by Γ̃ so that Y(τ_k+) has distribution ν_{0+}.
Observe that, under P, the indicated process is a {G_t}-martingale, which implies that for each m ≥ 1 and 0 ≤ t_1 ≤ ··· ≤ t_m < t_{m+1}, the corresponding conditional-expectation identity holds, where the elimination of the conditioning in the last term in the braces follows by Lemma 6.2.
The first term on the right-hand side equals ∫ h(x, s) μ̃_0(dx × {−1, 1} × ds) by the stationarity of (Y, S), and the other terms converge to 0 as T → ∞. By (5.3) and (5.12), we obtain (5.13). Let {h_k} ⊂ C(E) be a countable collection which is separating (see [11, p. 112]). Taking h(x, t) in (5.13) to be of the form φ(t)h_k(x) and using that {h_k} is separating, it follows that X(t) has distribution ν_t for a.e. t.
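The concluding step can be sketched as follows; the exact form of (5.13) is a reconstruction, with φ ranging over a suitable class of test functions (e.g., continuous with compact support):

```latex
\[
\int_0^\infty \varphi(t)\,E[h_k(X(t))]\,dt
\;=\;
\int_0^\infty \varphi(t)\int_E h_k(x)\,\nu_t(dx)\,dt
\qquad\text{for all }\varphi\in C_c[0,\infty),
\]
```

so that E[h_k(X(t))] = ∫ h_k dν_t for Lebesgue-a.e. t and each k; since {h_k} is separating, the distribution of X(t) is ν_t for a.e. t.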

Existence of an adapted compensator for a random measure
Let (Ω, F, P) be a probability space with a filtration {F_t}. Let P denote the predictable σ-algebra, that is, the smallest σ-algebra of subsets of [0, ∞) × Ω such that for each {F_t}-adapted, left-continuous process X, the mapping (t, ω) → X(t, ω) is P-measurable. A process X is {F_t}-predictable (or simply predictable, if the filtration is clear from context) if the mapping (t, ω) → X(t, ω) is P-measurable.
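An equivalent standard description, stated here for orientation, is that the predictable σ-algebra is generated by the predictable rectangles:

```latex
\[
\mathcal P \;=\; \sigma\bigl(\{(s,t]\times A \,:\, 0\le s<t,\ A\in\mathcal F_s\}
\;\cup\;\{\{0\}\times A \,:\, A\in\mathcal F_0\}\bigr),
\]
```

since the indicator of (s, t] × A with A ∈ F_s is adapted and left-continuous, and conversely every adapted left-continuous process is a pointwise limit of simple processes built from such rectangles.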

R_Z(x, s)H(x, s) Γ(dx × ds) = 0, so M_Z is a martingale, where the last equality follows by (6.18). This result implies that M_V is a martingale.