Uniform convergence to the Q-process

The first aim of the present note is to quantify the speed of convergence of a conditioned process toward its Q-process under suitable assumptions on the quasi-stationary distribution of the process. Conversely, we prove that, if a conditioned process converges uniformly to a conservative Markov process which is itself ergodic, then it admits a unique quasi-stationary distribution and converges toward it exponentially fast, uniformly in its initial distribution. As an application, we provide a conditional ergodic theorem.

We refer the reader to [7,9,4] and references therein for extensive developments and further references on the subject. It is well known that a probability measure α is a quasi-stationary distribution if and only if there exists a probability measure µ on E such that, for all measurable subsets A of E,

    α(A) = lim_{t→+∞} P_µ(X_t ∈ A | t < τ_∂).    (1.1)
In [2], we provided a necessary and sufficient condition on X for the existence of a probability measure α on E and constants C, γ > 0 such that

    ‖P_µ(X_t ∈ · | t < τ_∂) − α‖_TV ≤ C e^{−γt},  ∀µ ∈ P(E), ∀t ≥ 0,    (1.2)

where ‖·‖_TV is the total variation norm and P(E) is the set of probability measures on E. This immediately implies that α is the unique quasi-stationary distribution of X and that (1.1) holds for any initial probability measure µ. The necessary and sufficient condition for (1.2) is given by the existence of a probability measure ν on E and of constants t_0, c_1, c_2 > 0 such that

    P_x(X_{t_0} ∈ · | t_0 < τ_∂) ≥ c_1 ν(·),  ∀x ∈ E,
    P_ν(t < τ_∂) ≥ c_2 P_x(t < τ_∂),  ∀x ∈ E, ∀t ≥ 0.

The first condition implies that, in the case of an unbounded state space E (like ℕ or ℝ_+), the process (X_t, t ≥ 0) comes down from infinity, in the sense that there exists a compact set K ⊂ E such that inf_{x∈E} P_x(X_{t_0} ∈ K | t_0 < τ_∂) > 0. This property is standard for biological population processes such as Lotka-Volterra birth and death or diffusion processes [1,3]. However, it fails for some classical models, such as linear birth and death processes or Ornstein-Uhlenbeck processes. Many properties can be deduced from (1.2). For instance, it implies the existence of a constant λ_0 > 0 such that P_α(t < τ_∂) = e^{−λ_0 t} for all t ≥ 0, and of a function η : E → (0, ∞) such that α(η) = 1 and

    η(x) = lim_{t→+∞} e^{λ_0 t} P_x(t < τ_∂), uniformly in x ∈ E,    (1.3)

as proved in [2, Prop. 2.3]. It also implies the existence and the exponential ergodicity of the associated Q-process, defined as the process X conditioned to never be extinct [2, Thm. 3.1]. More precisely, if (1.2) holds, then the family (Q_x)_{x∈E} of probability measures on Ω defined by

    Q_x(Γ) = lim_{T→+∞} P_x(Γ | T < τ_∂),  ∀Γ ∈ F_t, ∀t ≥ 0,    (1.4)

is well defined and the process (Ω, (F_t)_{t≥0}, (X_t)_{t≥0}, (Q_x)_{x∈E}) is an E-valued homogeneous Markov process.
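The objects above can be made concrete on a small discrete-time example. The following sketch uses a hypothetical three-state sub-Markov kernel K (a discrete-time stand-in for the semigroup of X restricted to the non-absorbed states; the kernel and all numerical values are illustrative assumptions, not taken from the text). It computes the quasi-stationary distribution α as the normalized left Perron eigenvector of K and illustrates the uniform exponential convergence of the conditioned laws in the spirit of (1.2):

```python
import numpy as np

# Hypothetical sub-Markov kernel on three transient states; the missing row
# mass is the one-step absorption probability.
K = np.array([[0.50, 0.30, 0.10],
              [0.25, 0.40, 0.25],
              [0.05, 0.30, 0.50]])

# Quasi-stationary distribution: normalized left Perron eigenvector of K.
vals, vecs = np.linalg.eig(K.T)
i = np.argmax(vals.real)
alpha = np.abs(vecs[:, i].real)
alpha /= alpha.sum()
theta = vals[i].real              # Perron eigenvalue, theta = exp(-lambda_0)

def conditioned_law(mu, t):
    """Law of X_t under P_mu, conditioned on non-absorption {t < tau}."""
    v = mu @ np.linalg.matrix_power(K, t)
    return v / v.sum()

# The total-variation distance to alpha decays geometrically, uniformly
# over the initial condition, as in (1.2).
for mu in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])):
    dists = [0.5 * np.abs(conditioned_law(mu, t) - alpha).sum()
             for t in (1, 5, 10, 20)]
    print(np.round(dists, 6))
```

In this finite setting, the rate e^{−γ} of (1.2) is governed by the ratio of the second-largest eigenvalue of K to the Perron eigenvalue θ.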
In addition, this process admits a unique invariant probability measure β (sometimes referred to as the doubly limiting quasi-stationary distribution [5]), and there exist constants C′, γ′ > 0 such that, for any x ∈ E and all t ≥ 0,

    ‖Q_x(X_t ∈ ·) − β‖_TV ≤ C′ e^{−γ′ t}.    (1.5)

The measure β is given by β(dx) = η(x) α(dx). The first aim of the present note is to refine some results of [2] in order to get sharper bounds on the convergence in (1.3) and to prove that the convergence (1.4) holds in total variation norm, with uniform bounds over the initial distribution (see Theorem 2.1). Using these new results, we obtain in Corollary 2.3 that the uniform exponential convergence (1.2) implies that, for all bounded measurable functions f : E → ℝ and all T > 0,

    | E_µ( (1/T) ∫_0^T f(X_s) ds | T < τ_∂ ) − β(f) | ≤ a ‖f‖_∞ / T

for some positive constant a. This result improves the very recent result obtained independently by He, Zhang and Zu [6, Thm. 2.1] by providing the convergence estimate in 1/T. The interested reader might look into [6] for nice domination properties between the quasi-stationary distribution α and the probability measure β.
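In the same finite toy model (the same hypothetical kernel K as in the previous snippet, repeated so that this sketch is self-contained), η is the right Perron eigenvector normalized by α(η) = 1, the Q-process is the Doob h-transform of K by η, and one can check directly that β(dx) = η(x)α(dx) is its invariant probability measure:

```python
import numpy as np

# Same hypothetical three-state sub-Markov kernel as before.
K = np.array([[0.50, 0.30, 0.10],
              [0.25, 0.40, 0.25],
              [0.05, 0.30, 0.50]])

vals_r, vecs_r = np.linalg.eig(K)
i = np.argmax(vals_r.real)
theta = vals_r[i].real                 # Perron eigenvalue, exp(-lambda_0)
eta = np.abs(vecs_r[:, i].real)        # right Perron eigenvector

vals_l, vecs_l = np.linalg.eig(K.T)
alpha = np.abs(vecs_l[:, np.argmax(vals_l.real)].real)
alpha /= alpha.sum()                   # quasi-stationary distribution
eta /= alpha @ eta                     # normalization alpha(eta) = 1

# One-step kernel of the Q-process: Doob h-transform of K by eta.
Q = K * eta[None, :] / (theta * eta[:, None])
beta = alpha * eta                     # candidate invariant measure

print(np.round(Q.sum(axis=1), 10))    # each row sums to 1: Q is conservative
print(np.round(beta @ Q - beta, 10))  # beta is invariant for the Q-process
```

The two checks mirror the identities K η = θ η (so that Q is a genuine Markov kernel) and α K = θ α (so that β Q = β).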
The second aim of this note is to prove that the existence of the Q-process with uniform bounds in (1.4) and its uniform exponential ergodicity (1.5) form in fact a necessary and sufficient condition for the uniform exponential convergence (1.2) toward a unique quasi-stationary distribution.

Main results
In this first result, we improve (1.3) and provide a uniform exponential bound for the convergence (1.4) of the conditioned process toward the Q-process.
Theorem 2.1. Assume that (1.2) holds. Then there exists a positive constant a_1 such that, for all x ∈ E and all t ≥ 0,

    | η(x) − e^{λ_0 t} P_x(t < τ_∂) | ≤ a_1 e^{−γ t} e^{λ_0 t} P_x(t < τ_∂),    (2.1)

where λ_0 and η are the constant and function appearing in (1.3) and where γ > 0 is the constant from (1.2). Moreover, there exists a positive constant a_2 such that, for all t ≥ 0, for all Γ ∈ F_t and all T ≥ t,

    sup_{x∈E} | P_x(Γ | T < τ_∂) − Q_x(Γ) | ≤ a_2 e^{−γ(T−t)}.    (2.2)

We emphasize that (2.1) is an improvement of (1.3), since the convergence is actually exponential and, in many interesting examples, inf_{x∈E} P_x(t < τ_∂) = 0. This is for example the case for elliptic diffusion processes absorbed at the boundaries of an interval, since the probability of absorption converges to 1 when the initial condition converges to the boundaries of the interval. The last theorem has a first corollary.
Corollary 2.2. Assume that (1.2) holds. Then there exists a positive constant a_3 such that, for all x ∈ E and all 0 ≤ t ≤ T,

    ‖ P_x(X_t ∈ · | T < τ_∂) − β ‖_TV ≤ a_3 e^{−γ(T−t)} + C′ e^{−γ′ t}.

This follows from (2.2), the exponential ergodicity of the Q-process stated in (1.5) and the inequality

    | E_x( f(X_t) | T < τ_∂ ) − β(f) | ≤ | E_x( f(X_t) | T < τ_∂ ) − E_{Q_x}( f(X_t) ) | + | E_{Q_x}( f(X_t) ) − β(f) |,

where E_{Q_x} is the expectation with respect to Q_x.
More generally, for any probability measure µ_T on [0, T] and any bounded measurable f : E → ℝ, integrating the previous bound yields, for some constant a > 0,

    | E_x( ∫_0^T f(X_s) µ_T(ds) | T < τ_∂ ) − β(f) | ≤ ‖f‖_∞ ∫_0^T ( a e^{−γ(T−s)} + C′ e^{−γ′ s} ) µ_T(ds).    (2.3)

In particular, choosing µ_T as the uniform distribution on [0, T], we obtain a conditional ergodic theorem.
Considering the problem of estimating β from N realizations of the unconditioned process X, one wishes to take T as small as possible in order to keep as many samples as possible satisfying T < τ_∂ (their number is of order N_T = N e^{−λ_0 T}). It is therefore important to minimize the error in (2.3) for a given T. It is easy to check that µ_T = δ_{t_0} with t_0 = γT/(γ + γ′) is optimal, with an error of the order of exp(−γγ′T/(γ + γ′)). Combining this with the Monte Carlo error of order 1/√N_T, we obtain a global error of order

    exp( −γγ′T/(γ + γ′) ) + e^{λ_0 T/2} / √N .

In particular, for a fixed N, the optimal choice for T is T ≈ log N / (λ_0 + 2γγ′/(γ + γ′)), and the error is of the order of N^{−ζ} with ζ = γγ′ / (2γγ′ + λ_0(γ + γ′)). Conversely, for a fixed T, the best choice for N is N ≈ exp( (λ_0 + 2γγ′/(γ + γ′)) T ), and the error is of the order of exp( −γγ′T/(γ + γ′) ).
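As an illustration of this estimation procedure, the following sketch reuses the hypothetical three-state chain of the earlier snippets. Since γ and γ′ are not known here, the intermediate sampling time t_0 is simply taken as T/2, standing in for the optimal γT/(γ + γ′). It estimates β(f) for f = 1_{{2}} from the surviving realizations among N independent runs and compares the result with the spectral value of β:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same hypothetical sub-Markov kernel; missing row mass = absorption probability.
K = np.array([[0.50, 0.30, 0.10],
              [0.25, 0.40, 0.25],
              [0.05, 0.30, 0.50]])

# Exact beta = eta * alpha from the spectral decomposition, for comparison.
vals_r, vecs_r = np.linalg.eig(K)
eta = np.abs(vecs_r[:, np.argmax(vals_r.real)].real)
vals_l, vecs_l = np.linalg.eig(K.T)
alpha = np.abs(vecs_l[:, np.argmax(vals_l.real)].real)
alpha /= alpha.sum()
beta = alpha * eta / (alpha @ eta)

def run(T, x=0):
    """Simulate one trajectory started from x; None if absorbed before T."""
    path = [x]
    for _ in range(T):
        stay = K[x].sum()                    # survival probability from x
        if rng.random() > stay:
            return None                      # absorbed: sample is discarded
        x = rng.choice(3, p=K[x] / stay)
        path.append(x)
    return path

N, T = 20000, 12
t0 = T // 2                                  # stand-in for gamma*T/(gamma+gamma')
survivors = [p for p in (run(T) for _ in range(N)) if p is not None]
estimate = np.mean([p[t0] == 2 for p in survivors])
print(len(survivors), round(estimate, 3), round(beta[2], 3))
```

Only the N_T ≈ N e^{−λ_0 T} surviving trajectories contribute to the estimate, which is exactly the trade-off between bias and Monte Carlo error discussed above.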
We conclude this section with a converse to Theorem 2.1. More precisely, we give a converse to the fact that (1.2) implies both (1.5) and (2.2).
Theorem 2.4. Assume that there exists a Markov process (Q_x)_{x∈E} with state space E such that, for all t > 0,

    lim_{T→+∞} sup_{x∈E} sup_{Γ∈F_t} | P_x(Γ | T < τ_∂) − Q_x(Γ) | = 0,    (2.4)

and such that there exists a probability measure β on E with

    lim_{t→+∞} sup_{x∈E} ‖ Q_x(X_t ∈ ·) − β ‖_TV = 0.    (2.5)

Then the process (P_x)_{x∈E} admits a unique quasi-stationary distribution α and there exist positive constants γ, C such that (1.2) holds.
It is well known that the strong ergodicity (2.5) of a Markov process implies its exponential ergodicity [8, Thm. 16.0.2]. Similarly, we observe in our situation that, if (2.4) and (2.5) hold, then the combination of the above results implies that both convergences hold exponentially.

Proof of Theorem 2.1
For all x ∈ E, we set

    η_t(x) = e^{λ_0 t} P_x(t < τ_∂),

and we recall from [2, Prop. 2.3] that η_t(x) is uniformly bounded w.r.t. t ≥ 0 and x ∈ E. By the Markov property,

    η_{t+s}(x) = e^{λ_0 t} P_x(t < τ_∂) e^{λ_0 s} E_x[ P_{X_t}(s < τ_∂) | t < τ_∂ ] = η_t(x) E_x[ η_s(X_t) | t < τ_∂ ].

By (1.2), there exists a constant C′′ independent of s such that

    | E_x[ η_s(X_t) | t < τ_∂ ] − α(η_s) | ≤ C′′ e^{−γ t}.

Since ∫ η_s dα = 1, there exists a constant a_1 > 0 such that, for all x ∈ E and s, t ≥ 0,

    | E_x[ η_s(X_t) | t < τ_∂ ] − 1 | ≤ a_1 e^{−γ t}.

Hence, multiplying on both sides by η_t(x) and letting s tend to infinity, we deduce from (1.3) that, for all x ∈ E,

    | η(x) − η_t(x) | ≤ a_1 e^{−γ t} η_t(x),

which is exactly (2.1). We also deduce that η(x) ≥ (1 − a_1 e^{−γ t}) η_t(x), and hence, for t large enough,

    η_t(x) ≤ 2 η(x).    (3.2)

Let us now prove the second part of Theorem 2.1. For any t ≥ 0, Γ ∈ F_t and T ≥ t, the Markov property yields

    P_x(Γ | T < τ_∂) = E_x[ 1_Γ P_{X_t}(T − t < τ_∂) ] / P_x(T < τ_∂) = e^{λ_0 t} E_x[ 1_Γ η_{T−t}(X_t) ] / η_T(x),

while, by [2, Thm. 3.1],

    Q_x(Γ) = e^{λ_0 t} E_x[ 1_Γ η(X_t) ] / η(x).

Comparing both expressions by means of (2.1) and (3.2), we obtain (2.2).
This concludes the proof of Theorem 2.1.
Proof of Theorem 2.4

By (2.5), there exists t_1 > 0 such that sup_{x,y∈E} ‖Q_x(X_{t_1} ∈ ·) − Q_y(X_{t_1} ∈ ·)‖_TV ≤ 1/4 and, by (2.4) applied with t = t_1, there exists T_1 ≥ t_1 such that, for all T ≥ T_1, the conditioned law of X_{t_1} given T < τ_∂ lies, uniformly in the initial condition, within total variation distance 1/8 of the corresponding law under the Q-process. In particular, for all s ≥ 0 and all T ≥ s + T_1,

    sup_{x,y∈E} ‖ δ_x R^T_{s,s+t_1} − δ_y R^T_{s,s+t_1} ‖_TV ≤ 1/2,    (3.4)

where, for all 0 ≤ s ≤ t ≤ T, R^T_{s,t} is the linear operator defined by

    R^T_{s,t} f(x) = E( f(X_t) | X_s = x, T < τ_∂ ) = E_x( f(X_{t−s}) P_{X_{t−s}}(T − t < τ_∂) ) / P_x(T − s < τ_∂),

where we used the Markov property. Now, for any T > 0, the family (R^T_{s,t})_{0≤s≤t≤T} is a Markov semi-group. This semi-group property and the contraction (3.4) classically imply that, for all T ≥ T_1,

    sup_{x,y∈E} ‖ δ_x R^T_{0,T} − δ_y R^T_{0,T} ‖_TV ≤ (1/2)^{⌊(T−T_1)/t_1⌋}.
Then, proceeding as in [2, Section 5.1], we deduce that (1.2) holds true. This concludes the proof of Theorem 2.4.