Exponential growth rate for a singular linear stochastic delay differential equation

We establish the existence of a deterministic exponential growth rate for the norm (on an appropriate function space) of the solution of the linear scalar stochastic delay equation dX(t) = X(t-1) dW(t) which does not depend on the initial condition as long as it is not identically zero. Due to the singular nature of the equation this property does not follow from available results on stochastic delay differential equations. The key technique is to establish existence and uniqueness of an invariant measure of the projection of the solution onto the unit sphere in the chosen function space via asymptotic coupling and to prove a Furstenberg-Hasminskii-type formula (like in the finite dimensional case).

For a fixed norm ∥·∥ on C we will be interested in the question whether, for the C-valued solution X_t of the SDDE (1.1), the limit
$$\lambda(\eta, \omega) := \lim_{t\to\infty} \frac{1}{t}\,\log \|X_t(\eta,\omega)\|$$
exists almost surely for each η ∈ C and is deterministic and independent of η as long as η ≠ 0. We will show in our main result (Theorem 1.1) that there exists a deterministic number Λ ∈ R such that for every η ≠ 0 we have λ(η, ω) = Λ almost surely. In this case, we call Λ the exact exponential growth rate of (1.1). To prove Theorem 1.1, we follow the path paved by Furstenberg [3] and Hasminskii [6] (see also [1]) in the finite-dimensional case: project the solution of the equation to the unit sphere of an appropriate function space and show that the induced Markov process has a unique invariant probability measure µ. Then make sure that for each initial condition on the sphere the empirical measures converge to µ, and represent the exponential growth rate as an integral with respect to µ, as in the classical Furstenberg formula. While the existence of µ is rather easy to show, uniqueness is more involved. We follow the strategy developed in [4] to show uniqueness of µ. Contrary to [4], we have to deal with degenerate equations here, which requires a modification of the approach.
Let us first justify our restriction to such a simple equation as (1.1). In spite of its simplicity, the equation is known to be singular in the sense that there does not exist any modification of the solution which almost surely depends continuously upon the initial condition η with respect to the sup-norm (see [8]). In particular, the results in [9], establishing a Lyapunov spectrum and a corresponding decomposition of the state space for a large class of regular linear SDDEs, cannot be applied.
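For orientation, the finite-dimensional formula being mimicked can be recorded explicitly. The following display is standard textbook material (cf. [1]) and not part of the present argument; the notation (A, B_i, s_t, ν) is generic and does not appear in this paper.

```latex
% Classical Furstenberg-Hasminskii formula in R^d (standard material, for comparison):
% for the linear SDE  dx(t) = A x(t) dt + \sum_i B_i x(t) dW_i(t),
% Ito's formula applied to \log|x(t)| with s_t := x(t)/|x(t)| gives
\[
  \log|x(t)| = \log|x(0)| + \int_0^t q(s_u)\,du
             + \sum_i \int_0^t \langle B_i s_u, s_u\rangle\, dW_i(u),
\]
\[
  q(s) = \langle A s, s\rangle
       + \frac12 \sum_i \Big( |B_i s|^2 - 2\,\langle B_i s, s\rangle^2 \Big),
\]
% so that, if the projected process s_t has a unique invariant measure \nu
% on the sphere S^{d-1} and the empirical measures converge to it,
\[
  \lambda_{\mathrm{top}} = \int_{S^{d-1}} q(s)\, \nu(ds).
\]
```

The functions f and g defined for equation (1.1) in Section 1 play exactly the role of |B s|² and 2⟨B s, s⟩ in this formula.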
Since equation (1.1) is the simplest possible singular stochastic delay equation, we believe that it is worthwhile studying its asymptotics in some detail. We are optimistic that in principle our method of proof can be generalized to a large class of (multidimensional) linear stochastic functional differential equations, but we expect the proofs to be quite a bit more technical.
Our main result in this paper is the following (Theorem 1.1): there exists a number Λ ∈ R such that for each η ∈ C\{0}, the solution X of equation (1.1) with initial condition η satisfies
$$\lim_{t\to\infty} \frac{1}{t}\,\log |||X_t||| = \Lambda \quad \text{almost surely.} \tag{1.2}$$
It is easy to see (and will follow from Lemma 2.1) that for each η ≠ 0, the process X_t starting at X_0 = η will almost surely never become (identically) zero. Therefore, the projected process S_t := X_t/|||X_t||| is well-defined. Since our equation (1.1) is linear, the process S_t, t ≥ 0, is a Markov process with continuous paths (with respect to both the sup-norm and the M_2-norm on C) on the unit sphere of M_2. We will show that this process has a unique invariant probability measure µ. Suppose for a moment that this has been shown. Then, by Itô's formula,
$$\log |||X_t|||^2 = \log |||X_0|||^2 + \int_0^t \Big( f(S_s) - \tfrac12\, g^2(S_s) \Big)\, ds + \int_0^t g(S_s)\, dW(s),$$
where f(η) = η²(0) and g(η) = 2η(0)η(−1). Hence,
$$\frac{1}{t}\,\log |||X_t||| = \frac{1}{2t}\,\log |||X_0|||^2 + \frac{1}{2t} \int_0^t \Big( f(S_s) - \tfrac12\, g^2(S_s) \Big)\, ds + \frac{1}{2t} \int_0^t g(S_s)\, dW(s).$$
Therefore, by Birkhoff's ergodic theorem,
$$\lim_{t\to\infty} \frac{1}{t}\,\log |||X_t||| = \frac12 \int \Big( f - \tfrac12\, g^2 \Big)\, d\mu =: \Lambda$$
for µ-almost every initial condition X_0 = η, since f is bounded (and g² is non-negative) and since the stochastic integral is asymptotically negligible compared to its quadratic variation, unless the latter process remains bounded as t → ∞, in which case the stochastic integral remains bounded in t as well and therefore does not contribute to the limit in (1.2). This is almost everything we want to show, except that we need the limit to exist almost surely for each initial condition η, not just for µ-almost every η. Since f is bounded, it follows that Λ < ∞ (in fact Λ ≤ 1/2), but it is not immediately obvious that Λ > −∞. This follows, however, from the following result, which is Theorem 2.3 in [10].
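As a quick sanity check of this representation, the growth rate can be estimated numerically. The following sketch is our own illustration, not from the paper: it runs an Euler–Maruyama discretization of (1.1) with initial segment η ≡ 1 and prints the finite-time estimate of (1/t) log |||X_t|||; the step size, horizon, and rescaling scheme are ad hoc choices, so the output is only a rough approximation of Λ.

```python
import numpy as np

# Euler-Maruyama for dX(t) = X(t-1) dW(t); illustrative sketch only.
# The segment X_t lives on [t-1, t] and is stored on a grid of N + 1 points.
# Because the equation is linear, we may rescale the segment at every step
# and accumulate the logarithms of the rescaling factors to avoid overflow.

rng = np.random.default_rng(0)
N = 200                       # grid points per unit time interval
h = 1.0 / N                   # time step
T = 200                       # time horizon

x = np.ones(N + 1)            # initial segment: eta = 1 on [-1, 0]
log_scale = 0.0               # accumulated log of rescaling factors

for _ in range(T * N):
    dW = rng.normal(0.0, np.sqrt(h))
    new = x[-1] + x[0] * dW   # x[0] approximates the delayed value X(t - 1)
    x = np.append(x[1:], new)
    m = np.max(np.abs(x))
    x /= m                    # keep the segment O(1)
    log_scale += np.log(m)

# M_2-norm of the terminal segment: |||X_T|||^2 = X(T)^2 + int_{T-1}^T X(s)^2 ds
m2 = np.sqrt(x[-1] ** 2 + h * np.sum(x[:-1] ** 2))
growth_rate = (log_scale + np.log(m2)) / T
print(growth_rate)            # rough finite-time estimate of Lambda
```

Consistently with the bound Λ ≤ 1/2 derived above, the finite-time estimate stays well below 1/2 plus the fluctuation of the martingale term.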

Proposition 1.2.
There exists a real number Λ_0 such that for every η ∈ C\{0}, we have
$$P\Big\{ \liminf_{t\to\infty} \frac{1}{t}\,\log |||X_t||| \ge \Lambda_0 \Big\} = 1,$$
where X solves (1.1) with initial condition η. Note that, as a consequence of Proposition 1.2, the function g is square integrable with respect to µ.
It is easy to see that then we also have Λ ≥ Λ_0, so in particular Λ > −∞. In order to prove Theorem 1.1, it therefore remains to prove existence and uniqueness of an invariant probability measure µ of the Markov process S_t, t ≥ 0, and to show that (1.2) holds for each initial condition η ∈ C\{0}.
We will need the following result which is Step 1 in the proof of Theorem 2.3 in [10].
Upper and lower bounds for the exponential growth rate Λ have been obtained (even for equations with an additional factor σ in front of dW(t)) in [10] and [11]; in those papers the existence of the limit (1.2) was not yet known, so the authors obtained deterministic upper bounds for the lim sup and deterministic lower bounds for the lim inf.

Existence of an invariant measure
In this section, X is always the solution of equation (1.1), possibly with a random initial condition which is independent of the σ-algebra generated by the driving Wiener process W. Let F_t, t ≥ 0, be the (right-continuous and complete) filtration generated by the initial condition and the Wiener process W. We will always assume that the initial condition satisfies E∥X_0∥² < ∞, which ensures that all moments appearing below are finite and all conditional expectations well-defined. As before, we define S_t := X_t/|||X_t|||. We need the following lemmas.

Lemma 2.1. There exists some c_1 > 0 such that for each t ≥ 1 and α ≥ 0 we have
since the supremum is finite. Defining c_1 := c ∨ 2, the assertion follows.

Lemma 2.2.
For each t ≥ 1 and each F_{t−1}-measurable positive random variable ξ, we have
We regard the process S_t, t ≥ 0, as a Markov process with state space C̃ defined as the intersection of C and the unit sphere of M_2, equipped with the supremum norm ∥·∥. Then S_t, t ≥ 0, is a Feller process with values in the Polish space C̃ (with the complete metric induced by the supremum norm).

Proposition 2.3.
For any (possibly random) C-valued initial condition X_0 which is nonzero almost surely, the laws L(S_t), t ≥ 2, are tight in C̃.
is a local martingale which has a representation H(r) = B(τ(r)) for a Brownian motion B which is independent of F_{t−1}, where
Further,
where we used Lemma 2.1 (for the first two summands) and Lemma 2.2 (for the last summand) in the final step. For fixed α, ε > 0 we obtain
The assertion follows since α > 0 can be chosen arbitrarily small.
Remark 2.4. The proof of the previous proposition shows that tightness even holds uniformly with respect to the initial condition, i.e. the family L(S_t^{(η)}), t ≥ 2, η ∈ C\{0}, is tight. Clearly, the family L(S_t), t ≥ 0, is also tight for each fixed initial condition η ∈ C\{0}, but not uniformly with respect to η.

Proposition 2.5. The Markov process S_t, t ≥ 0, admits an invariant probability measure µ on C̃.

Proof. This follows from the Krylov–Bogoliubov theorem (see [2], Theorem 3.1.1), using Proposition 2.3 and the fact that the process S_t, t ≥ 0, is Feller.
For later use, we formulate the following straightforward corollary of Lemma 2.1 and Lemma 2.2.

Corollary 2.6. For γ > 0 and t ≥ 1, we have
The previous corollary immediately implies the following one.

Uniqueness of an invariant measure
Consider the coupled system (3.3), where ρ is an adapted process taking values in {0, 1} such that ρ is constant on each interval [n, n + 1), n ∈ N_0. We will show that ρ can be defined in such a way that, for any pair of deterministic initial conditions (X_0, Y_0), the process Z(t) := X(t) − Y(t) satisfies
$$\lim_{\lambda\to\infty} \limsup_{t\to\infty} \frac{1}{t}\,\log \|Z_t\| = -\infty$$
almost surely, and such that the law of Y is absolutely continuous with respect to the law of the solution of dȲ(t) = Ȳ(t − 1) dW(t) with the same initial condition as Y, provided that λ is sufficiently large. Then we project both X and Y to the unit sphere C̃ and show that the distance between the projected processes converges to 0 as t → ∞ for λ large enough. Finally, we apply Corollary 2.2 in [4] and obtain uniqueness.
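The intended contraction mechanism can be seen numerically. The sketch below assumes that the coupling in (3.3) adds the drift λρ(t)(X(t) − Y(t)) dt to the Y-equation (our reading of the construction, since the display is not reproduced here) and takes ρ ≡ 1 throughout; this choice is not admissible for the Girsanov part of the argument, as explained below, but it suffices to illustrate the decay of Z. The discretization parameters are ad hoc.

```python
import numpy as np

# Hedged sketch: assuming the coupling drift in (3.3) is lambda*rho*(X - Y),
# the difference Z = X - Y solves dZ(t) = Z(t-1) dW(t) - lambda*rho(t)*Z(t) dt.
# With rho = 1 (switching ignored) and lambda large, the mean-reverting drift
# dominates the delayed noise term and |Z(t)| decays rapidly.

rng = np.random.default_rng(1)
N = 200                       # grid points per unit time interval
h = 1.0 / N                   # time step
lam = 20.0                    # coupling strength lambda
T = 5                         # time horizon

z = np.ones(N + 1)            # initial segment Z_0 = X_0 - Y_0, taken to be 1
for _ in range(T * N):
    dW = rng.normal(0.0, np.sqrt(h))
    new = z[-1] + z[0] * dW - lam * z[-1] * h
    z = np.append(z[1:], new)

print(abs(z[-1]))             # much smaller than the initial value 1
```

Increasing lam makes the terminal value smaller still, in line with the λ → ∞ limit above.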
Observe that the choice ρ ≡ 1 in (3.3) will not work: Y will not be absolutely continuous with respect to Ȳ, since it can happen that at some (random) time Y(t) is zero while Z(t) is not, and then the additional drift prevents Y from being absolutely continuous with respect to Ȳ. To prevent this, we will switch ρ off when this happens. Roughly speaking, we will switch ρ on as often as possible (thus guaranteeing that Z converges to 0 sufficiently quickly), but we will switch ρ off whenever Y has not been bounded away from zero sufficiently during the past unit time interval. We will always assume that the initial conditions X_0 and Y_0 are almost surely different, which implies that the process Z_t will almost surely never hit zero.
We define ρ(t) = 1 on [n, n + 1) if A_n occurs and ρ(t) = 0 otherwise. The following lemma shows that the conditional laws of the waiting times between successive A_n's have a geometric tail (uniformly in λ).

Lemma 3.1. For all n ∈ N, n ≥ 2, and all λ ≥ 0,

Proof.
On the set A^c_{n−1}, we have
where W̃ is a Wiener process which is independent of F_{n−1} and W̃*(t) := sup_{s∈[0,t]} |W̃(s)|. Corollary 2.7 shows that
Therefore, on A^c_{n−2}, we have
Further, on A^c_{n−1}, by Corollary 2.6,
Hence, on A^c_{n−2}, we have
Therefore, on A^c_{n−2}, we have
which is the assertion.
We now have to show that whenever we have an interval [n, n + 1) on which ρ is one, then with high probability |||Z_{n+1}||| is much smaller than |||Z_n||| (when λ is large). More precisely, the following lemma holds.
To obtain the assertion, it suffices to show (thanks to the first Borel–Cantelli lemma) that for each δ > 0 the sum over n of P{∥Z_{n+1}∥ ≥ e^{δn} |||Z_n|||} is finite, which is easily established by estimating the corresponding conditional probabilities (conditioned on F_n) as in the proof of Lemma 3.2.

Lemma 3.4.
There exists some λ_1 > 0 such that for all λ ≥ λ_1, the law of the process Y is absolutely continuous with respect to that of the solution of (1.1) with the same initial condition as Y (λ_1 does not depend on the initial condition (X_0, Y_0)).
Proof. We need to make sure that for λ sufficiently large, we have
Then the assertion follows from Girsanov's theorem (see [7], Chapter 7). By the definition of ρ, we have Y²(t) ≥ (1/4)∥Y_n∥² whenever t ∈ [n − 1, n] and ρ(n) = 1 (which is equivalent to ρ(s) = 1 for all s ∈ [n, n + 1)), which implies

Proposition 3.5. The Markov process S_t := X_t/|||X_t|||, t ≥ 0, has a unique invariant probability measure µ. The support of µ is C̃.
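For readers reconstructing the argument, the Girsanov condition in question has the following standard shape; this is our sketch under the assumption that the coupling drift in (3.3) is λρ(t)Z(t) dt, which must be absorbed into the diffusion coefficient Y(t − 1). The symbol h and the infinite-horizon form of the density are ours, not the paper's.

```latex
% Sketch (assumed form): write the drift as Y(t-1) h(t) dt with
%   h(t) := \lambda \rho(t) Z(t) / Y(t-1).
% Girsanov's theorem then gives absolute continuity of the law of Y
% with respect to that of \bar Y, with density
\[
  \frac{d\mathcal{L}(Y)}{d\mathcal{L}(\bar Y)}
  = \exp\Big( \int_0^\infty h(t)\, dW(t)
            - \frac12 \int_0^\infty h(t)^2\, dt \Big),
\]
% for which one needs \int_0^\infty h(t)^2 \, dt < \infty almost surely.
% The switching rule for \rho and the bound Y^2(t) \ge \tfrac14 \|Y_n\|^2
% keep the denominator Y(t-1) under control on intervals where \rho = 1.
```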
Proof. Existence of an invariant probability measure µ has been shown in Proposition 2.5. To establish uniqueness, observe that
which converges to zero exponentially fast as long as λ is sufficiently large. Lemma 3.4, together with the fact that absolute continuity of measures is preserved under measurable maps, shows that the law L(Y_t/|||Y_t|||, t ≥ 0) is absolutely continuous with respect to L(Ȳ_t/|||Ȳ_t|||, t ≥ 0), where Ȳ solves equation (1.1) with the same initial condition as Y. Uniqueness now follows from Corollary 2.2 in [4]. It remains to show that µ has full support. Let X solve (1.1) with initial distribution µ, and let G be a non-empty open subset of C̃. We show that µ(G) > 0. Assume that G contains a function f with f(0) > 0 (otherwise the proof is completely analogous). Let B_+ be the set of positive functions in B. It follows as in Lemma 3.1 that X_n visits B_+ infinitely often almost surely. If X_n ∈ B_+, then P(S_{n+1} ∈ G | F_n) > 0 and therefore µ(G) > 0.

Lemma 4.1. There exists some λ_2 > 0 such that for each φ ∈ C̃ and each λ ≥ λ_2 the following holds. Let η ∈ C̃, let (X, Y) solve (3.3) with initial condition (η, φ), and define ρ and Z as before. Then the integral
converges to 0 in probability as ∥η − φ∥ → 0.
Step 2: Fix an initial condition φ ∈ C̃ and denote the solution of (1.1) with initial condition φ by Ȳ. We will show that φ ∈ M. Let λ ≥ λ_3, with λ_3 as defined in the first step. We know that µ(M) = 1 by Birkhoff's ergodic theorem and the fact that g² is µ-integrable (cf. the statement after Proposition 1.2). From Proposition 3.5 we know that the support of µ is C̃, so M is dense in C̃. For a given λ ≥ λ_3 and ε > 0, applying Lemma 4.1, we can find some η ∈ M such that for V :=
By the Cameron–Martin–Girsanov theorem, W̃ is a Wiener process with respect to the measure P̃ defined by dP̃(ω) = U(ω) dP(ω), where
By uniqueness of solutions of (4.16), Y and Ỹ agree almost surely up to τ. In particular, P{Y ≡ Ỹ} ≥ 1 − ε. Let Γ ∈ F denote the set of all ω for which the empirical distribution of Ȳ_t/|||Ȳ_t|||, t > 0, converges weakly to µ and the corresponding integrals of g² converge as well.