Nonlinear filtering with degenerate noise

This paper studies a new type of filtering problem, where the diffusion coefficient of the observation noise is strictly positive only in the interior of the bounded interval where the observation takes its values. We derive a Zakai and a Kushner–Stratonovich equation, and prove uniqueness of the measure-valued solution of the Zakai equation.


Introduction
We consider the following coupled pair of SDEs
$$X_t = X_0 + \int_0^t f(s,X_s,Y_s)\,ds + \int_0^t g(s,X_s,Y_s)\,dB_s, \qquad(1.1)$$
$$Y_t = y_0 + \int_0^t h(s,X_s,Y_s)\,ds + \int_0^t k(s,Y_s)\,dW_s, \qquad(1.2)$$
where the random process (X, Y) takes its values in R^d × R. The aim of this work is to study the filtering problem consisting in describing the law of X_t, conditionally upon the observation of {Y_s, 0 ≤ s ≤ t}. There is a large literature on the filtering problem (see in particular [1, 3, 4]).
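As a numerical illustration of the system (1.1)–(1.2), here is a minimal Euler–Maruyama sketch. All coefficients (f, g, h_1, h_2, k and the interval [a, b]) are hypothetical choices for this sketch only, picked so that k vanishes at the boundary and the drift keeps Y inside [a, b]:

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 0.0, 1.0          # observation interval [a, b]
T, n = 1.0, 1000
dt = T / n

# Hypothetical coefficients, chosen only for illustration.
f = lambda t, x, y: -x + y                                  # signal drift
g = lambda t, x, y: 1.0                                     # signal diffusion
h1 = lambda t, y: 0.5 * (a + b) - y                         # inward drift keeping Y in (a, b)
h2 = lambda t, x, y: 0.1 * np.tanh(x) * (y - a) * (b - y)   # vanishes at y = a, b
k = lambda t, y: np.sqrt(max((y - a) * (b - y), 0.0))       # degenerate at y = a, b

x, y = 0.0, 0.5
X, Y = [x], [y]
for i in range(n):
    t = i * dt
    dB, dW = rng.normal(0.0, np.sqrt(dt), 2)
    x = x + f(t, x, y) * dt + g(t, x, y) * dB
    y = y + (h1(t, y) + h2(t, x, y)) * dt + k(t, y) * dW
    # the exact solution stays in [a, b]; clip only the discretization error
    y = min(max(y, a), b)
    X.append(x)
    Y.append(y)

X, Y = np.array(X), np.array(Y)
```

The clipping step is a discretization artifact: the continuous-time solution never leaves [a, b] (Lemma 3.7 below), but the Euler scheme can overshoot the boundary.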
An essential assumption in all of the filtering literature is that k²(t, Y_t) > 0 for all t > 0. We consider in this paper the case where y ↦ k(t, y) vanishes at the boundary of an interval [a, b], while the process Y_t cannot leave the interval [a, b]. On the other hand, we assume that h(t, x, y) = h_1(t, y) + h_2(t, x, y), where the drift h_1 does not push Y_t outside the interval [a, b], while k^{-2}(t, y) h_2(t, x, y) is bounded, which forces h_2(t, x, y) to vanish at y = a and y = b.
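For concreteness, one model which fits all of these structural requirements (our own illustrative choice, not taken from the paper; α, β and h̃ below are arbitrary) is a Jacobi-type observation on [a, b]:

```latex
k(t,y) = \sqrt{(y-a)(b-y)}, \qquad
h_1(t,y) = \alpha\,(b-y) - \beta\,(y-a) \quad (\alpha,\beta>0), \qquad
h_2(t,x,y) = (y-a)(b-y)\,\tilde h(t,x,y),
```

with h̃ bounded: k vanishes exactly at y = a and y = b, the drift h_1 points inward at the boundary, and k^{-2} h_2 = h̃ is bounded, so h_2 vanishes at y = a and y = b as required.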
In this new setup, we derive the Zakai and Kushner–Stratonovich filtering equations, and we establish uniqueness of the measure-valued solution of the Zakai equation.

The filtering equations
Let (Ω, F, P̃) denote a complete probability space equipped with a right-continuous filtration (F_t)_{t∈[0,T[} such that F_0 contains all the P̃-negligible sets of F.
We assume that h(t, x, y) = h_1(t, y) + h_2(t, x, y). The point of this decomposition is that h_2(t, x, y) will be assumed to vanish, together with k(t, y), for y outside an open interval (a, b) ⊂ R, while h_1(t, y) will be assumed to keep the observation Y_t in the interval (a, b). The idea is that we want the observation process Y_t to stay in the interval [a, b] forever. In particular, we assume that a ≤ y_0 ≤ b.
Since we want to exploit the one-dimensionality of the SDE for Y_t, while the two equations (1.1) and (1.2) are coupled, we will first solve the reduced one-dimensional equation, where (B_t, W_t) is a standard (k+1)-dimensional F_t-Brownian motion under P̃, then solve equation (1.1), and finally we shall make use of Girsanov's theorem in order to recover the original system. This motivates the following assumptions.
Assumption 3.1. We assume that f and g are measurable maps from R_+ × R^d × R into R^d and R^{d×k} respectively, and moreover that there exists a constant C_T such that for all (t, x, x', y) ∈ [0,T] × R^d × R^d × R,
$$\|f(t,x,y)-f(t,x',y)\| + \|g(t,x,y)-g(t,x',y)\| \le C_T\,\|x-x'\|, \qquad \|f(t,0,y)\| + \|g(t,0,y)\| \le C_T.$$
In the next assumption, κ is a Borel function from (0, +∞) into itself which is such that ∫_{0+} dr/κ(r) = +∞.
Our assumption that Y_t is one-dimensional allows us to consider a diffusion coefficient k such that y ↦ k(t, y) is only 1/2-Hölder continuous. In the case of a vector-valued observation process Y_t, we would be forced to make the stronger assumption of Lipschitz regularity.
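The square-root coefficient is the prototypical 1/2-Hölder example. As a quick numerical sanity check (our own illustration, with a = 0, b = 1), the 1/2-Hölder quotient of k(y) = √((y−a)(b−y)) stays bounded on a grid; indeed |k(y) − k(y′)| ≤ √|y(1−y) − y′(1−y′)| ≤ √|y − y′| on [0, 1]:

```python
import numpy as np

a, b = 0.0, 1.0
k = lambda y: np.sqrt((y - a) * (b - y))

ys = np.linspace(a, b, 401)
Y1, Y2 = np.meshgrid(ys, ys)
mask = Y1 != Y2
# 1/2-Hölder quotient |k(y) - k(y')| / |y - y'|^{1/2} over distinct grid points
quot = np.abs(k(Y1) - k(Y2))[mask] / np.sqrt(np.abs(Y1 - Y2)[mask])
print(quot.max())  # bounded by 1 for this particular k
```

In contrast, the quotient |k(y) − k(y′)| / |y − y′| blows up near the boundary, so k is not Lipschitz there.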
Assumption 3.5. We assume that for all T > 0, sup_{(t,x,y) ∈ [0,T] × R^d × [a,b]} |ψ(t, x, y)| < ∞, where ψ(t, x, y) := k^{-2}(t, y) h_2(t, x, y).
In the case where the process Y_t spends non-zero time on the boundary {a, b} with positive probability, we shall need the additional assumption:
Assumption 3.6. For all t > 0, there exists a continuous function
Proof (of Lemma 3.7 below). Existence and uniqueness of a strong solution follows from the Yamada–Watanabe theorem; see e.g. Theorem IX.3.5 (ii) in [6], or [7].
It is now plain that under our Assumption 3.1, assuming in addition that the R^d-valued r.v. X_0 is F_0-measurable, the SDE (1.1) has a unique non-exploding R^d-valued solution {X_t, t ≥ 0}. We now define the probability measure P under which the process (X_t, Y_t) solves the system of SDEs (1.1)–(1.2).
Lemma 3.8. Under the Assumptions 3.5–3.6, there is a probability measure P such that its restriction to each F_t is absolutely continuous with respect to P̃, and on F_t the Radon–Nikodym derivative is dP/dP̃ = Z_t, where
$$Z_t = \exp\Big(\int_0^t \psi(s,X_s,Y_s)\,k(s,Y_s)\,dW_s - \frac12\int_0^t \psi^2(s,X_s,Y_s)\,k^2(s,Y_s)\,ds\Big). \qquad(3.3)$$
Under P, the process (B_t, W̄_t) is an R^{k+1}-valued standard F_t-Brownian motion, where W̄_t = W_t − ∫_0^t ψ(s,X_s,Y_s) k(s,Y_s) ds.
Proof. It is plain that our Assumptions 3.4, 3.5 and 3.6 imply that e.g. Novikov's condition is satisfied, so the Lemma follows from Girsanov's theorem; see e.g. Theorem 2.51 in [5].
It clearly follows from (3.3) that under P, (X_t, Y_t) solves the system of SDEs (1.1)–(1.2).
Remark 3.9. Our Assumption 3.5 is more than enough to ensure that Ẽ[Z_t] = 1 for all t > 0. We could weaken it, so that one of the conditions in Proposition 2.50 from [5] is satisfied, or just assume that Ẽ[Z_t] = 1 for all t > 0.
It follows from e.g. Proposition 2.2.1 in [4] that we have the following Kallianpur–Striebel formula: for every real random variable ξ ∈ L¹(Ω, F_t, P),
$$E[\xi \mid \mathcal{Y}_t] = \frac{\tilde E[\xi\, Z_t \mid \mathcal{Y}_t]}{\tilde E[Z_t \mid \mathcal{Y}_t]}, \qquad(3.4)$$
where for each t ≥ 0, 𝒴_t denotes the σ-algebra generated by {Y_s; s ∈ [0, t]} and the P̃-negligible sets. Using Proposition 2.3.1 and Corollary 2.3.2 in [4], we deduce that the process ζ given by ζ_t = Ẽ[Z_t | 𝒴_t] is a (P̃, 𝒴_t)-martingale with a continuous version. For any t > 0, the conditional law π_t of X_t given {Y_s, 0 ≤ s ≤ t} is such that for every measurable function ϕ: R^d → R_+,
$$\pi_t(\varphi) = \frac{\tilde E[\varphi(X_t)\, Z_t \mid \mathcal{Y}_t]}{\tilde E[Z_t \mid \mathcal{Y}_t]}. \qquad(3.6)$$
It is convenient to introduce the so-called "unnormalized conditional distribution" of X_t, defined by
$$\varsigma_t(\varphi) = \tilde E[\varphi(X_t)\, Z_t \mid \mathcal{Y}_t]. \qquad(3.7)$$
From the Kallianpur–Striebel formula (3.4), we deduce that if 1 denotes the constant function equal to one,
$$\pi_t(\varphi) = \frac{\varsigma_t(\varphi)}{\varsigma_t(1)}. \qquad(3.8)$$
On the other hand, for every positive and F_s-measurable random variable ξ and t > s, Ẽ[ξ | 𝒴_t] = Ẽ[ξ | 𝒴], where 𝒴 := σ{Y_t, t ≥ 0}. Indeed, it is plain that the filtration (𝒴_t)_{t≥0} is a sub-filtration of the filtration (𝒲_t)_{t≥0}, and since W is a (P̃, F_t)-Brownian motion, F_s and 𝒲_{s,t} are independent, where 𝒲_{s,t} = σ{W_r − W_s, s ≤ r ≤ t} up to null sets. In the same manner we can then replace, in the identities (3.6) and (3.7), the conditioning by 𝒴_t by conditioning by 𝒴.
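As a toy illustration of the normalization in the Kallianpur–Striebel formula (the conditional expectation as a ratio of unnormalized ones), consider a one-step Gaussian analogue. This is our own example, not from the paper: a signal X ~ N(0, 1), an observation y_obs = X + noise, and a likelihood weight playing the role of Z_t; the exact conditional mean is y/(1 + σ²):

```python
import numpy as np

rng = np.random.default_rng(2)

sigma = 0.5       # observation noise level (illustrative)
y_obs = 0.8       # one observed value (illustrative)
N = 400_000

X = rng.normal(size=N)                              # samples of the signal prior
w = np.exp(-(y_obs - X) ** 2 / (2 * sigma ** 2))    # unnormalized likelihood weights ("Z")
pi_mc = np.sum(w * X) / np.sum(w)                   # ratio of unnormalized expectations
pi_exact = y_obs / (1.0 + sigma ** 2)               # exact Gaussian conditional mean
print(pi_mc, pi_exact)
```

The numerator and denominator are the toy analogues of ς_t(φ) and ς_t(1); their ratio approximates π_t(φ).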
Before giving the first main theorem of this section, let us state a useful lemma, which is similar to Lemma 2.2.4 in [4].
Lemma 3.10. Let (ξ, η) denote an R^{k+1}-valued progressively measurable process such that for some t > 0 condition (3.10) holds, and let K(t) denote the class of random variables of the form (3.13), where ρ(t) is a deterministic function; then for any t > 0, the linear combinations of the r.v.'s from K(t) are dense in L²(Ω, 𝒴_t, P̃).
Therefore, the result is true if and only if, for any t > 0 and any η ∈ K(t), the corresponding identity holds. The last identity follows from the fact that W and B are independent Brownian motions under P̃, and similarly for the other term. For ϕ ∈ C²(R^d), we define (with Dϕ denoting the gradient of ϕ, D²ϕ the matrix of second-order derivatives of ϕ, and g* the transpose of the matrix g)
$$A_t\varphi(x,y) = \big\langle f(t,x,y),\, D\varphi(x)\big\rangle + \tfrac12\,\mathrm{Tr}\big[(gg^*)(t,x,y)\, D^2\varphi(x)\big].$$
Taking the conditional expectation under P̃ given 𝒴 in the above identity, we deduce the announced equality; we then conclude from Lemma 3.10, and, having in mind the definition of ς_t, the result follows.
We end this section with the second main result, which shows that the process π solves the Kushner–Stratonovich equation.
Theorem 3.13. Under the assumptions of Theorem 3.11, for all ϕ ∈ C²_c(R^d), the following Kushner–Stratonovich equation holds a.s. for all t ∈ [0, T]:
$$\pi_t(\varphi) = \pi_0(\varphi) + \int_0^t \pi_s(A_s\varphi)\,ds + \int_0^t \big[\pi_s\big(\varphi\,\psi(s,\cdot,Y_s)\big) - \pi_s(\varphi)\,\pi_s\big(\psi(s,\cdot,Y_s)\big)\big]\,\big[dY_s - h_1(s,Y_s)\,ds - \pi_s\big(h_2(s,\cdot,Y_s)\big)\,ds\big].$$
Remark 3.14. Note that our Kushner–Stratonovich equation is rather similar to the Kushner–Stratonovich equation of the traditional filtering problem, see e.g. equation (KS) in Theorem 2.3.7 from [4]. The division by k² which appears in that Theorem is here implicit, since h_2 is replaced by ψ = k^{-2} h_2. Note however that there is a difference between the two equations, since here h is replaced by h_2 in the integrand (i.e. the first factor) of the second integral.
Proof. In this proof, we write h_1(s) for h_1(s, Y_s). It follows from Theorem 3.11 that the Zakai equation holds; in particular, it holds with ϕ = 1. It then follows from the Itô formula that 1/ς_s(1) is a continuous semimartingale. Since from (3.8) π_s(ϕ) = ς_s(ϕ)/ς_s(1), we obtain, applying the Itô formula for the product of the two continuous scalar semimartingales ς_s(ϕ) and 1/ς_s(1), the announced equation, where we have used the identity ψ(t, x, y) k²(t, y) = h_2(t, x, y). The result follows.
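The key computation in this product rule can be spelled out. The following is our own sketch, assuming the Zakai equation has the differential form dς_s(φ) = ς_s(A_sφ) ds + ς_s(φ ψ(s,·,Y_s)) [dY_s − h_1(s) ds] (this is the structure of the stochastic-integral term in the first main theorem) and writing ψ_s := ψ(s,·,Y_s), with d⟨Y⟩_s = k²(s,Y_s) ds:

```latex
% Itô's formula for z -> 1/z applied to s -> varsigma_s(1), using A_s 1 = 0:
d\Big(\frac{1}{\varsigma_s(1)}\Big)
  = -\frac{\varsigma_s(\psi_s)}{\varsigma_s(1)^2}\,\big[dY_s - h_1(s)\,ds\big]
    + \frac{\varsigma_s(\psi_s)^2}{\varsigma_s(1)^3}\,k^2(s,Y_s)\,ds .
% The product rule for pi_s(varphi) = varsigma_s(varphi)/varsigma_s(1),
% including the covariation term, then collapses to
d\pi_s(\varphi)
  = \pi_s(A_s\varphi)\,ds
  + \big[\pi_s(\varphi\,\psi_s) - \pi_s(\varphi)\,\pi_s(\psi_s)\big]
    \big[dY_s - h_1(s)\,ds - \pi_s(\psi_s)\,k^2(s,Y_s)\,ds\big].
```

Since ψ k² = h_2, the last bracket is the innovation dY_s − h_1(s) ds − π_s(h_2(s,·,Y_s)) ds, which is how the implicit division by k² mentioned in Remark 3.14 arises.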

Uniqueness for the Zakai equation
The measure-valued process {ς_t, t ≥ 0} defined by (3.7) solves the Zakai equation (3.17). In this section, we want to show that it is the unique solution of the Zakai equation. We will assume:
Assumption 4.1. There exists p > 2 such that E[‖X_0‖^p] < ∞.
We deduce from that assumption:
Lemma 4.2. For any T > 0 and 1 < q < p/2, with n_2(x) := |x|²,
$$E\Big[\sup_{0\le t\le T} \varsigma_t(n_2)^q\Big] < \infty.$$
Proof. We first note that, as a consequence of Assumption 4.1, we can show, using standard arguments, that for any T > 0 there exists c_T such that the corresponding moment estimate holds, where we have used Jensen's and Hölder's inequalities for the conditional expectation, and Young's inequality. The claimed bound then follows, where we have used Doob's inequality for martingales, and the fact that the boundedness of ψ implies that all moments of Z_T are finite.
We now assume that the following additional regularity assumption is satisfied.

Assumption 4.3. There exists Ψ ∈ C^{1,2,2}_b(R_+ × R^d × R) such that ψ(t, x, y) = ∂Ψ/∂y(t, x, y) for all (t, x, y) ∈ R_+ × R^d × R.
It follows from Itô's formula, applied first to Ψ(s, X_s, Y_s) and consequently to Z_s e^{−Ψ(s,X_s,Y_s)}, that the martingale part of the latter semimartingale contains the term
$$\int_0^t Z_s\, e^{-\Psi(s,X_s,Y_s)}\,\big\langle D_x\Psi(s,X_s,Y_s),\, g(s,X_s,Y_s)\,dB_s\big\rangle.$$
Taking Ẽ[· | 𝒴] of both sides of this identity, we deduce:
Theorem 4.4. The measure-valued process ς̃_t(ϕ) := ς_t(e^{−Ψ(t,·,Y_t)} ϕ) satisfies the "robust version" (4.2) of the Zakai equation, in which no stochastic integral with respect to Y appears.
Remark 4.5. It is not so easy to compare the present robust Zakai equation with e.g. equation (ZR) in Theorem 4.2.2 from [4]. There k ≡ 1 and the drift in (1.2) does not depend upon the observation process Y_t. However, it is clear that the definition of the process ς̃_t in terms of the process ς_t is very similar in spirit to the definition of σ̄_t in terms of σ_t on page 116 in [4]. Note that even if h_2 did not depend upon the process Y, our ψ would still depend upon Y. Hence the above transformation, which amounts to integrating by parts the stochastic integral ∫_0^t ψ(s, X_s, Y_s) k(s, Y_s) dW_s which appears in the logarithm of Z_t, in order to get rid of the dY_t integral, cannot be carried out in general when Y_t is a vector-valued process; see Remark 4.12 below.
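The integration by parts mentioned here can be made explicit. The following is our own sketch, assuming, as in the scalar case of the condition recalled in Remark 4.12, that ψ(t, x, y) = ∂Ψ/∂y(t, x, y), and using the independence of B and W (so that ⟨X, Y⟩ ≡ 0); Itô's formula applied to Ψ(s, X_s, Y_s) gives

```latex
\int_0^t \psi(s,X_s,Y_s)\,dY_s
  = \Psi(t,X_t,Y_t) - \Psi(0,X_0,Y_0)
  - \int_0^t \partial_s\Psi(s,X_s,Y_s)\,ds
  - \int_0^t \big\langle D_x\Psi(s,X_s,Y_s),\, dX_s\big\rangle
  - \frac12\int_0^t \Big(\partial^2_{yy}\Psi\, k^2
      + \mathrm{Tr}\big[(gg^*)\,D^2_x\Psi\big]\Big)(s,X_s,Y_s)\,ds .
```

The right-hand side no longer contains a stochastic integral against Y: the observation enters only through Ψ evaluated along the path, which is the mechanism behind the robust form. In the vector-valued case, requiring ψ = ∇_yΨ for a single scalar Ψ is exactly the restrictive condition of Remark 4.12.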
We now show that we can write equation (4.2) with a test function ϕ which is a function of both t and x.

The second term on the last right-hand side converges to ∫_0^t ς_s({A_s + c_s}ϕ(s, ·)) ds as sup_{1≤i≤n}(t_i − t_{i−1}) → 0, by a.e. convergence and uniform integrability, which follows from Lemma 4.2 and the fact that for all T > 0 there exists C such that the corresponding estimate holds for all 0 ≤ s, t ≤ T. Note that the conditional expectation in the expression for ς_t is in fact a (partial) expectation.
We now treat the first term on the right. First we note that for any ε > 0 and any compact set K ⊂⊂ R^d, there exists φ with the required properties. We then consider the backward PDE
$$\frac{\partial u}{\partial t}(t,x) + A_t u(t,x) + c(t,x)\,u(t,x) = 0, \qquad u(T,x) = \kappa(x). \qquad(4.3)$$
Given an R-valued continuous process {Y_t, t ≥ 0} and an independent R^k-valued standard Brownian motion {B_t, t ≥ 0}, we consider for each 0 ≤ t < T and x ∈ R^d the R^d-valued process {U^{t,x}_s, t ≤ s ≤ T}. It follows from Theorem 3.42 in [5] that the function v(t, x) given by (4.5) is a viscosity solution of the backward PDE (4.3). However, we want to prove more, and for that purpose we need to add some regularity assumptions.
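As a sanity check on the probabilistic (Feynman–Kac) representation of solutions of a backward PDE of the type (4.3), here is a minimal Monte Carlo verification in a toy case of our own choosing: d = 1, zero drift, unit diffusion, c ≡ 0 and κ(x) = x², so that v(t, x) = x² + (T − t) solves ∂_t u + ½ ∂²_{xx}u = 0 with u(T, ·) = κ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy case: dU_s = dB_s, c = 0, kappa(x) = x^2, so the representation
# v(t, x) = E[kappa(U_T^{t,x})] equals x^2 + (T - t), a classical
# solution of  du/dt + (1/2) d^2u/dx^2 = 0,  u(T, .) = kappa.
T, t0, x0 = 1.0, 0.3, 0.7
n_paths = 200_000

U_T = x0 + np.sqrt(T - t0) * rng.normal(size=n_paths)  # exact simulation of U_T^{t0,x0}
v_mc = np.mean(U_T ** 2)                               # Monte Carlo value of v(t0, x0)
v_exact = x0 ** 2 + (T - t0)                           # exact solution
print(abs(v_mc - v_exact))
```

Of course, in the setting of this section the diffusion has coefficients f̄ and g and a zeroth-order term c, and the terminal value would be averaged against exp(∫ c ds); the toy case only checks the mechanism.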
Assumption 4.8. The coefficients f, g and c are of class C^{0,2}(R_+ × R^d), with the first- and second-order derivatives of f and g, as well as c and its first- and second-order derivatives, bounded.
We have the following.
Proposition 4.9. Under Assumptions 4.7 and 4.8, the function v(t, x) defined by (4.5) is of class C^{1,2}_b, and it is a classical solution of the backward PDE (4.3).
Proof. It is not hard to deduce from Assumption 4.8 that x ↦ U^{t,x} is twice mean-square differentiable, and that the first- and second-order derivatives V^{t,x} and Γ^{t,x} satisfy the corresponding variational equations. We note that under Assumption 4.8, all moments of sup_{t≤s≤T}(‖V^{t,x}_s‖ + ‖Γ^{t,x}_s‖) are finite. The continuity of (t, x) ↦ (D_x v(t, x), D²_x v(t, x)) follows.
Proof. We apply Proposition 4.6 with ϕ replaced by v, which is possible in view of Proposition 4.9. Since v satisfies (4.3) and v(T, ·) = κ, Proposition 4.6 implies that ς̃_T(κ) = ς̃_0(v(0, ·)).
Remark 4.12. Were the observation process Y_t vector-valued, then, in order to establish the "robust form" of the Zakai equation, we would need, instead of Assumption 4.3, to assume that there exists Ψ ∈ C^{1,2,2}_b(R_+ × R^d × R^ℓ), ℓ being the dimension of the observation, such that ψ(t, x, y) = ∇_y Ψ(t, x, y) for all (t, x, y) ∈ R_+ × R^d × R^ℓ, which is an extremely restrictive and not at all natural assumption. This means that in the case of vector-valued observation, there is in general no "robust form" of the Zakai equation for our filtering problem.

Lemma 3.7. Under Assumptions 3.2–3.4, the reduced observation equation (3.1) has a unique strong solution, which takes its values in the interval [a, b].

Proposition 4.9 (recalled). Under Assumptions 4.7 and 4.8, the function v(t, x) defined by (4.5) is of class C^{1,2}_b, and it is a classical solution of the backward PDE (4.3).
Now we state the first main theorem of this paper.
Theorem 3.11. Under the above assumptions, for all ϕ ∈ C²_c(R^d) and t ≥ 0, the unnormalized conditional distribution satisfies the Zakai equation
$$\varsigma_t(\varphi) = \varsigma_0(\varphi) + \int_0^t \varsigma_s(A_s\varphi)\,ds + \int_0^t \varsigma_s\big(\varphi\,\psi(s,\cdot,Y_s)\big)\,\big[dY_s - h_1(s,Y_s)\,ds\big]. \qquad(3.17)$$
We note that the Zakai equation here is very similar to the one of the traditional filtering problem, see e.g. equation (Z) in Theorem 2.3.3 from [4], with the small difference that dY_t is replaced by dY_t − h_1(t, Y_t) dt, which is quite natural, since the Girsanov theorem here only suppresses the part h_2 of the drift h in equation (1.2).
Proof. Let ϕ ∈ C²_c(R^d). (Z_t)_{0≤t<T} is a continuous martingale, and the identity follows from Itô's formula.
We set f̄(t, x, y) := f(t, x, y) − (gg* D_xΨ)(t, x, y), and we note that (i) ‖f̄(t, x, y)‖ ≤ C(1 + ‖x‖) for all t ≥ 0, x ∈ R^d, y ∈ [a, b]; and (ii) |c(t, x)| ≤ C for all t ≥ 0, x ∈ R^d, a.s. We now consider, for 0 ≤ t ≤ T and x ∈ R^d, the associated diffusion.
Consequently, (t, x) ↦ (v(t, x), A_t v(t, x) + c(t, x) v(t, x)) is continuous. It remains to show that t ↦ v(t, x) is of class C¹, and to compute that derivative. For t ≤ s ≤ T, let μ_s denote the law of U^{t,x}_s. From the Markov property of the process U^{t,x}, we obtain the desired limit as h → 0, where we have used Itô's formula and the regularity of x ↦ v(t + h, x), together with the just-established convergence of the last term. Since v clearly satisfies the final condition, we have established that v ∈ C^{1,2}(R_+ × R^d) and is a classical solution of the backward PDE (4.3).
Finally, we obtain the uniqueness result: under Assumptions 4.1, 4.3, 4.7 and 4.8, the robust Zakai equation (4.2) has a unique solution.