A user-friendly condition for exponential ergodicity in randomly switched environments

We consider random switching between finitely many vector fields leaving a compact set positively invariant. Recently, Li, Liu and Cui showed that if one of the vector fields has a globally asymptotically stable (G.A.S.) equilibrium from which one can reach a point satisfying a weak Hörmander bracket condition, then the process converges in total variation to a unique invariant probability measure. In this note, adapting the proof of Li, Liu and Cui and using results of Benaïm, Le Borgne, Malrieu and Zitt, the assumption of a G.A.S. equilibrium is weakened to the existence of an accessible point at which a barycentric combination of the vector fields vanishes. Examples are given that demonstrate the usefulness of this condition.


Introduction
Let E = {1, ..., N} be a finite set and F = {F_i}_{i∈E} a family of smooth, globally integrable vector fields on R^d. For each i ∈ E we let ϕ^i = {ϕ^i_t} denote the flow induced by F_i. We assume throughout that there exists a compact set M ⊂ R^d which is positively invariant under each ϕ^i, that is, ϕ^i_t(M) ⊂ M for all t ≥ 0. Our assumption that M ⊂ R^d is mostly for convenience; the results of this note readily generalize to the situation where M is a subset of a finite-dimensional smooth manifold.
Consider a Markov process Z = (Z_t)_{t≥0}, Z_t = (X_t, I_t), living on M × E, whose infinitesimal generator acts on functions g : M × E → R, smooth in the first variable, according to the formula

Lg(x, i) = ⟨F_i(x), ∇g_i(x)⟩ + Σ_{j∈E} a_{ij}(x)(g_j(x) − g_i(x)),    (1.1)

where g_i(x) stands for g(x, i) and a(x) = (a_{ij}(x))_{i,j∈E} is an irreducible rate matrix depending continuously on x. Here, by rate matrix, we mean a matrix having nonnegative off-diagonal entries and zero diagonal entries.
In other words, the dynamics of X is given by the ordinary differential equation

dX_t/dt = F_{I_t}(X_t),    (1.2)

while I is a continuous-time jump process taking values in E, controlled by X:

P(I_{t+h} = j | F_t) = a_{I_t j}(X_t) h + o(h)  for j ≠ I_t,

where F_t = σ((X_s, I_s) : s ≤ t).
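As an illustration, the switching dynamics (1.2) can be simulated with a first-order Euler scheme in which, over a step of length dt, the active index i jumps to j with probability a_ij(X_t) dt. The vector fields and rates below are hypothetical stand-ins, chosen so that the closed unit disk is positively invariant under both flows (each field points inward on the boundary).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example with N = 2 fields on R^2; both matrices have
# symmetric part -I, so |X_t| decreases along either flow.
def F(i, x):
    A = [np.array([[-1.0, -1.0], [1.0, -1.0]]),   # spiral toward 0, one way
         np.array([[-1.0, 1.0], [-1.0, -1.0]])]   # spiral the other way
    return A[i] @ x

def a(i, j, x):
    # state-dependent, irreducible rate matrix (hypothetical choice)
    return 1.0 + x[0] ** 2 if i != j else 0.0

def simulate(x0, i0, T, dt=1e-3):
    """Euler scheme for (1.2): follow F_{I_t} between jumps of I."""
    x, i, t = np.array(x0, float), i0, 0.0
    while t < T:
        x = x + dt * F(i, x)                 # one ODE step with the active field
        j = 1 - i                            # the only other state when N = 2
        if rng.random() < a(i, j, x) * dt:   # jump with prob a_ij(x) dt + o(dt)
            i = j
        t += dt
    return x, i

x, i = simulate([0.5, 0.0], 0, T=5.0)
```

The scheme is only first-order accurate; between jumps one could instead integrate each flow exactly or with a higher-order ODE solver.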
Using the terminology of [5], a point x ∈ M is said to satisfy the weak bracket condition if the Lie algebra generated by (F_i)_{i∈E} has full rank at x. If such a point is furthermore accessible (meaning that every neighborhood of x is reached with positive probability by X_t), then the process admits a unique invariant probability measure which is absolutely continuous with respect to the Lebesgue measure on M × E (see e.g. [1, Theorem 1] or [5, Theorem 4.5]). If the weak bracket condition is replaced by the so-called strong bracket condition (cf. Definition 2.5 below), the process moreover converges in total variation (see [5, Theorem 4.6]). Simple examples show that the weak bracket condition alone is not sufficient to ensure convergence (cf. [1]).
Recently, Li, Liu and Cui showed in [12] that the two following conditions yield convergence in total variation (see [12, Theorem 9]):
(i') There exists a globally asymptotically stable (G.A.S.) equilibrium for one of the vector fields;
(ii) The weak bracket condition holds at an accessible point.
In this note we replace (i') by the more general condition
(i) There exists an accessible point e at which a barycentric combination of the vector fields vanishes,
and prove exponential convergence in total variation (see Theorem 2.6 and Corollary 2.7). Our proof is inspired by [12] but is simplified by using results of [5].
It turns out that when the vector fields are analytic, conditions (i) and (ii) together imply that the strong bracket condition holds at e (see Proposition 2.11). For non-analytic vector fields, however, we give an example where (i) and (ii) hold while the strong bracket condition fails at every point.

Definitions and main results
We begin by recalling some general definitions. Let (P_t)_{t≥0} be a Markov semigroup on a metric space M.

Definition 2.1. We say that z* ∈ M is a Doeblin point if there exist a neighborhood U of z*, a nonzero measure ν and positive real numbers t*, c such that P_{t*}(z, ·) ≥ cν(·) for all z ∈ U.

Definition 2.2. We say that z* ∈ M is (P_t)-accessible from B ⊂ M if for every neighborhood U of z* and for all z ∈ B, there exists a positive real t such that P_t(z, U) > 0.
In the specific context of PDMPs, the latter definition can be expressed more intuitively as follows. For i = (i_1, ..., i_m) ∈ E^m and u = (u_1, ..., u_m) ∈ R^m_+, we denote by Φ^i_u the composite flow

Φ^i_u = ϕ^{i_m}_{u_m} ∘ ... ∘ ϕ^{i_1}_{u_1}.

For x ∈ M and t ≥ 0, we denote by γ^+_t(x) (resp. γ^+(x)) the set of points that are reachable from x at time t (resp. at some nonnegative time) with a composite flow:

γ^+_t(x) = {Φ^i_u(x) : m ∈ N*, i ∈ E^m, u ∈ R^m_+, u_1 + ... + u_m = t},  γ^+(x) = ∪_{t≥0} γ^+_t(x).

We say that x* ∈ M is {F_i}-accessible from B ⊂ M if for every neighborhood U of x* and every x ∈ B, one has γ^+(x) ∩ U ≠ ∅. From now on, we let (P_t)_{t≥0} be the semigroup induced by (Z_t)_{t≥0} on M = M × E. Because of the irreducibility assumption on the rate matrix a(x), these two notions of accessibility agree: (x*, j) is (P_t)-accessible from B × E if and only if x* is {F_i}-accessible from B. Therefore, in the sequel, we will say that a point x* ∈ M is accessible from B ⊂ M if it is {F_i}-accessible from B, and simply that x* is accessible if it is accessible from all of M. Here [·, ·] stands for the Lie bracket operation, defined as

[V, W](x) = DW(x)V(x) − DV(x)W(x)

for smooth vector fields V and W on R^d with differentials DV and DW. The following definition is given in [5].
Definition 2.5. Set F^w_0 = {F_i : i ∈ E} and F^s_0 = {F_j − F_i : i, j ∈ E}, and for each of the two families define recursively F_{k+1} = F_k ∪ {[F_i, V] : i ∈ E, V ∈ F_k}. We say that the weak bracket (resp. strong bracket) condition holds at p ∈ M if the vector space spanned by the vectors {V(p) : V ∈ ∪_{k≥0} F^w_k} (resp. {V(p) : V ∈ ∪_{k≥0} F^s_k}) is all of R^d.

It is clear from this definition that the strong bracket condition implies the weak one. The weak and strong bracket conditions are equivalent to Condition B and Condition A in [1], respectively. The weak bracket condition is closely related to the classical Hörmander hypoellipticity condition that yields smoothness of transition densities for diffusions (see e.g. [13]). More background on the weak and strong bracket conditions, with an emphasis on how they relate to controllability, is provided in [14].
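The weak bracket condition can be checked symbolically. The toy fields below (F1 = (1, 0), F2 = (0, x), a hypothetical example not taken from the paper) fail to span R^2 at the origin, where F2 vanishes, but a single Lie bracket restores full rank; the bracket is computed with the convention [V, W](p) = DW(p)V(p) − DV(p)W(p).

```python
import sympy as sp

x, y = sp.symbols('x y')
vars_ = sp.Matrix([x, y])

F1 = sp.Matrix([1, 0])
F2 = sp.Matrix([0, x])

def lie_bracket(V, W):
    # [V, W](p) = DW(p) V(p) - DV(p) W(p)
    return W.jacobian(vars_) * V - V.jacobian(vars_) * W

B = lie_bracket(F1, F2)          # equals (0, 1)^T

p = {x: 0, y: 0}
span0 = sp.Matrix.hstack(F1, F2).subs(p)        # F1, F2 alone
span1 = sp.Matrix.hstack(F1, F2, B).subs(p)     # with one bracket added

print(span0.rank(), span1.rank())  # the bracket raises the rank to 2 at the origin
```

For the strong bracket condition one would instead start the recursion from the differences F_j − F_i, as in Definition 2.5.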

Main result
We now state our main result.

Theorem 2.6. Assume that:
(i) There exist a point e ∈ M and real numbers (α_i)_{i∈E} with Σ_{i∈E} α_i = 1 such that Σ_{i∈E} α_i F_i(e) = 0;
(ii) The weak bracket condition holds at some point x* accessible from e.
Then for all j ∈ E, (e, j) is a Doeblin point.
Note that we do not impose that the α_i be nonnegative. In particular, condition (i) holds whenever, at some point, two of the vector fields are collinear but not equal.
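Indeed, if F_i(x) = λF_j(x) for some λ ≠ 1, the barycentric weights can be written down explicitly: taking α = 1/(1 − λ),

```latex
\alpha F_i(x) + (1-\alpha)F_j(x)
  \;=\; \bigl(\alpha\lambda + 1 - \alpha\bigr)F_j(x)
  \;=\; \Bigl(\frac{\lambda-1}{1-\lambda} + 1\Bigr)F_j(x)
  \;=\; 0 .
```

When λ < 0 (the fields point in opposite directions), α lies in (0, 1); otherwise one of the two weights is negative, which condition (i) allows.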
The following corollary is a consequence of standard results (see e.g. [5, Theorem 4.6] for a proof).

Corollary 2.7. In addition to the assumptions of Theorem 2.6, suppose that e is accessible. Then the process Z admits a unique invariant probability measure π, which is absolutely continuous with respect to Lebesgue measure. Moreover, there exist positive constants C, γ such that for all t ≥ 0 and for all (x, i) ∈ M × E,

‖P_t((x, i), ·) − π‖_TV ≤ C e^{−γt}.

In Section 3, we give further applications in a stochastic persistence context, relying on recent results in [3]. Theorem 2.6 is a direct consequence of Theorem 4.2 in [5] and of Proposition 2.9, which we state below. For convenience, we also record a version of Theorem 4.2 from [5]. Here and throughout, for s > 0 and m ∈ N*, we set D^s_m = {u ∈ R^m_+ : u_1 + ... + u_m < s}.

Theorem 2.8 ([5, Theorem 4.2]). Let x ∈ M and suppose that there exist s > 0, m ∈ N*, i = (i_1, ..., i_m, i_{m+1}) ∈ E^{m+1} and u ∈ D^s_m such that the map

v ↦ ϕ^{i_{m+1}}_{s − (v_1 + ... + v_m)} ∘ Φ^{(i_1, ..., i_m)}_v(x)

is a submersion at u. Then for all j ∈ E, (x, j) is a Doeblin point.
Proposition 2.9. Under the assumptions of Theorem 2.6, the hypothesis of Theorem 2.8 holds at e: there exist s > 0, n ∈ N*, i = (i_1, ..., i_n, i_{n+1}) ∈ E^{n+1} and u ∈ D^s_n such that the map

v ↦ ϕ^{i_{n+1}}_{s − (v_1 + ... + v_n)} ∘ Φ^{(i_1, ..., i_n)}_v(e)

is a submersion at u.

Links with the strong bracket condition
In [5] and [1], the authors show that the conclusions of Theorem 2.6 and Corollary 2.7 hold when the weak bracket condition is replaced by the strong one. A natural question is whether our assumptions already imply that the strong bracket condition holds at some point. We address this question in Propositions 2.10 and 2.11.

Proposition 2.10. Let e ∈ M satisfy condition (i) of Theorem 2.6, and suppose further that the weak bracket condition holds at e. Then the strong bracket condition also holds at e.
Proof. To simplify notation, set W(e) = {V(e) : V ∈ ∪_{k≥0} F^w_k} and S(e) = {V(e) : V ∈ ∪_{k≥0} F^s_k}, with the families from Definition 2.5. We will show that the linear spans of W(e) and S(e) coincide, which implies the proposition. Since the span of S(e) is clearly a subspace of the span of W(e), it suffices to show that W(e) is contained in the span of S(e). Fix a vector field V ∈ ∪_{k≥0} F^w_k and let j be the smallest nonnegative integer such that V ∈ F^w_j. For j = 0, V = F_i for some i ∈ E, and condition (i) gives

F_i(e) = F_i(e) − Σ_{l∈E} α_l F_l(e) = Σ_{l∈E} α_l (F_i − F_l)(e),

since Σ_{l∈E} α_l = 1; as the vector fields F_i − F_l lie in F^s_0, the vector F_i(e) is in the span of S(e). By induction on j, the same argument shows that V(e) is a combination of vectors of S(e) for every j, which finishes the proof. QED

Proposition 2.11. Assume that for all i ∈ E, F_i is analytic, and that the assumptions of Theorem 2.6 hold. Then e satisfies the strong bracket condition.
In most applications, the vector fields governing the PDMP are analytic (see also Section 3). As a consequence, the interest of Theorem 2.6 lies essentially in the fact that the weak bracket condition is easier to verify than the strong one. The proof of Proposition 2.11 relies on the following result, due to Sussmann and Jurdjevic [14, Corollary 4.7].

Theorem 2.12 (Sussmann–Jurdjevic). Assume that the vector fields (F_i)_{i∈E} are analytic, and let x be any point in M. Then there is t > 0 such that γ^+_t(x) has nonempty interior if and only if the strong bracket condition holds at x.

Proof of Proposition 2.11
By Proposition 2.9, there are s > 0, n ∈ N*, i ∈ E^{n+1} and u such that the corresponding composite-flow map, whose image is contained in γ^+_s(e), is a submersion at u. Hence γ^+_s(e) has nonempty interior, and Theorem 2.12 implies that the strong bracket condition holds at e. QED

We now give an example of a PDMP for which conditions (i) and (ii) hold while the strong bracket condition holds at no point; by Proposition 2.11, the vector fields of such an example cannot all be analytic. On the annulus M = {x ∈ R^2 : 1/2 ≤ |x| ≤ 2}, written in polar coordinates (θ, r), we define two vector fields F_0 and F_1 by means of functions f, g and h, where f and g satisfy the following properties:
1. The functions f and g are C∞ and 2π-periodic on R.
2. We have 0 < f ≤ 1 and 0 ≤ g ≤ 1.
3. We have f(π/2) = 1/2 and g(0) > 0. Moreover, there is ε ∈ (0, π/4) such that f(θ) = 1 for |θ − π/2| > ε and g(θ) = 0 for |θ| > ε.
It is easy to see that such functions f and g exist and that they cannot be analytic. Also note that M is positively invariant under the flows associated with F_0 and F_1 because h(1/2) > 0 and g(θ) + h(2) < 0 for all θ. Since M is compact and since f, g and h are smooth functions, the vector fields F_0 and F_1 are globally integrable.
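Such f and g can be obtained from the standard smooth, non-analytic bump t ↦ e^{−1/(1−t²)}. The sketch below (with the hypothetical choice ε = π/8) constructs one such pair and checks properties 1-3 numerically.

```python
import numpy as np

eps = np.pi / 8  # hypothetical choice of epsilon in (0, pi/4)

def bump(t):
    """Smooth bump: positive on (-1, 1), identically 0 outside."""
    t = np.asarray(t, float)
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

def g(theta):
    # 2*pi-periodic, 0 <= g <= 1, g(0) = 1 > 0, g = 0 for |theta| > eps
    th = np.angle(np.exp(1j * np.asarray(theta, float)))  # wrap to (-pi, pi]
    return bump(th / eps) / bump(0.0)

def f(theta):
    # 2*pi-periodic, 0 < f <= 1, f(pi/2) = 1/2, f = 1 for |theta - pi/2| > eps
    th = np.angle(np.exp(1j * (np.asarray(theta, float) - np.pi / 2)))
    return 1.0 - 0.5 * bump(th / eps) / bump(0.0)

# spot checks of properties 1-3
assert np.isclose(float(f(np.pi / 2)), 0.5) and float(g(0.0)) > 0
theta = np.linspace(-np.pi, np.pi, 1001)
assert np.all(f(theta) > 0) and np.all(f(theta) <= 1)
assert np.all(g(theta) >= 0) and np.all(g(theta) <= 1)
assert np.all(g(theta[np.abs(theta) > eps]) == 0.0)
```

The flat pieces (f ≡ 1 away from π/2, g ≡ 0 away from 0) are exactly what prevents analyticity.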
The point e = (π/2, 1)^T is an equilibrium point of the vector field 2F_1 − F_0, so condition (i) is satisfied. Since h(r) > 0 for r ∈ (0, 1) and h(r) < 0 for r > 1, the unit circle is a global attractor of F_0; thus any point on the unit circle, and in particular the point e, is accessible from every starting point in M. The weak bracket condition holds at the point (0, 1)^T because F_0(0, 1) = (1, 0)^T and F_1(0, 1) = (1, g(0))^T span the tangent space at (0, 1)^T. As (0, 1)^T lies on the unit circle, it is accessible from e, so condition (ii) is satisfied as well.
It remains to show that the strong bracket condition is nowhere satisfied. A direct computation of the Lie brackets, using that f′(θ) = 0 for |θ − π/2| > ε and that g vanishes identically for |θ| > ε, shows that each V ∈ ∪_{k≥0} F_k is at every point of the form (*, 0)^T or (0, *)^T; here * stands for some term, possibly depending on θ and r, that may differ from equation to equation. This shows that for any (θ, r) ∈ M, either V(θ, r) lies in the linear span of (1, 0)^T for all V ∈ ∪_{k≥0} F_k, or V(θ, r) lies in the linear span of (0, 1)^T for all V ∈ ∪_{k≥0} F_k. It follows that the strong bracket condition does not hold at any point (θ, r) ∈ M. QED
In the previous example, the origin had to be excluded from M in order to ensure that the unit circle is accessible from every point. It would be interesting to determine whether there are PDMPs for which conditions (i) and (ii) are satisfied, the strong bracket condition holds nowhere, and M is simply connected.

Applications
In this section, we give some applications of Theorem 2.6 in the context of population models with an extinction set. For a general framework on Markov models with an extinction set, the reader is referred to [3]. Here we only state the results we will use, in the specific context of PDMPs on a compact set (see e.g. [7] or [8]).

Stochastic persistence
In this section, we assume that there exists a closed subset M_0 of M which is invariant for the process: X_t ∈ M_0 if and only if X_0 ∈ M_0. The set M_0 will be referred to as the extinction set. We set M_+ = M \ M_0 and denote by D (resp. D^2) the domain of the generator L defined in (1.1) (resp. the set of functions f ∈ D such that f^2 ∈ D). We also let Γ denote the carré du champ operator on D^2, given by Γf = L(f^2) − 2fLf.

Definition 3.1. We say that the process Z is persistent if there exist continuous functions V_K (indexed by compact sets K ⊂ M_+) and H : M × E → R satisfying, in particular, the following two conditions (see [3] for the complete list):
4. There exists C > 0 such that for any compact set K ⊂ M_+, ‖Γ(V_K)|_K‖_∞ ≤ C;
5. For any ergodic probability measure µ of Z supported on M_0 × E, one has µH < 0.
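For the generator (1.1), the carré du champ can be computed explicitly: since the drift term ⟨F_i, ∇·⟩ is a first-order operator, it cancels in L(g²) − 2gLg, and only the jump part contributes. Writing g_i(x) = g(x, i) as before,

```latex
\Gamma g(x,i)
  = L(g^2)(x,i) - 2\,g_i(x)\,Lg(x,i)
  = \sum_{j\in E} a_{ij}(x)\,\bigl(g_j(x)-g_i(x)\bigr)^2 ,
```

so condition 4 of Definition 3.1 is a bound on the squared jump amplitudes of V_K, weighted by the switching rates.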
The following theorem is an immediate consequence of [3, Theorem 4.10] and Theorem 2.6.

Theorem 3.2. Assume that conditions (i) and (ii) hold, that Z is persistent, and that e is accessible from M_+. Then Z admits a unique invariant probability measure Π on M_+ × E, and there exist θ, C, γ > 0 such that for all t ≥ 0 and for all (x, i) ∈ M_+ × E,

‖P_t((x, i), ·) − Π‖_TV ≤ C (1 + d(x, M_0)^{−θ}) e^{−γt}.

Lotka-Volterra in random environment
In this section, we consider the competitive Lotka-Volterra model in a fluctuating environment studied in [7] and show how our method can be used to improve one of their results. More precisely, for i ∈ {0, 1}, let F_i be the competitive Lotka-Volterra vector field of environment i, with parameters α_i, β_i, a_i, b_i, c_i, d_i as in [7]. It is shown in [7] that if Λ_x > 0 and Λ_y > 0, then the process admits a unique invariant probability measure Π on M_+ × E. However, to show convergence in total variation of the law of Z_t toward Π, the authors needed to check that the strong bracket condition is satisfied at some accessible point. Except in the particular case where β_0α_1/(α_0β_1) = a_0c_1/(c_0a_1) = b_0d_1/(d_0b_1), they proved that this condition holds by using a computer algebra program. Thanks to Theorem 3.2, we can dispense with this verification and give an easier proof of the convergence in total variation. Of particular importance in [7] is the study of the averaged vector fields F_s = sF_1 + (1 − s)F_0, s ∈ [0, 1].

Lemma 3.4. Assume Λ_x > 0 and Λ_y > 0. Then there exist s ∈ [0, 1] and an equilibrium e_s ∈ M_+ of F_s that is accessible from M_+; in particular, condition (i) holds at an accessible point.

This lemma, combined with Proposition 3.3 and Theorem 3.2, implies the following corollary, which slightly improves [7, Theorem 4.1(iv)].

Corollary 3.5. Assume Λ_y > 0 and Λ_x > 0. Then there exist C, γ, θ > 0 such that for all t ≥ 0 and for all (x, y, i) ∈ M_+ × E,

‖P_t((x, y, i), ·) − Π‖_TV ≤ C (1 + d((x, y), M_0)^{−θ}) e^{−γt}.

Proof of Lemma 3.4. Since Λ_y > 0, I is nonempty by Proposition 3.3. We distinguish three cases: either I ∩ J^c is nonempty, or I is a strict subset of J, or I = J. Assume first that I ∩ J^c ≠ ∅ and take s ∈ I ∩ J^c. Then F_s admits a G.A.S. equilibrium e_s ∈ M_+; in particular, e_s is accessible from M_+. Assume now that I is a strict subset of J; in particular, I^c ∩ J and I ∩ J are nonempty. Pick s ∈ I^c ∩ J. Then F_s admits a unique equilibrium e_s ∈ M_+, which is a saddle whose stable manifold W_s separates the basins of attraction of (1/a_s, 0) and (0, 1/b_s). We show that e_s is accessible. Choose a point (x, y) ∈ M_+. If (x, y) lies above W_s, follow the flow ϕ^0: as the resulting trajectory converges to (1/a_0, 0), it must cross W_s.
If (x, y) lies below W_s, one can find a trajectory leading to (0, 1/b_u) for some u ∈ I ∩ J; in particular, this trajectory also crosses W_s. As e_s is accessible from every point of W_s, it is accessible from every point of M_+. Finally, assume that I = J = (s_1, s_2); the accessibility of an equilibrium then follows from the explicit form of the vector field F_{s_1}.
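To illustrate condition (i) in this setting, the sketch below uses a simplified competitive Lotka-Volterra form F_i(x, y) = (x(1 − a_i x − b_i y), y(1 − c_i x − d_i y)) with α_i = β_i = 1 and illustrative coefficients, not the parameterization of [7]. Convex combinations of such fields are again of Lotka-Volterra type with averaged coefficients, so the interior zero of F_{1/2} is found by a linear solve; at that point F_0 and F_1 are nonzero and antiparallel.

```python
import numpy as np

# Hypothetical coefficients (a_i, b_i, c_i, d_i), chosen so that the
# averaged field has an interior equilibrium at (1/3, 1/3).
def make_F(a, b, c, d):
    return lambda x, y: np.array([x * (1 - a * x - b * y),
                                  y * (1 - c * x - d * y)])

F0 = make_F(3.0, 1.5, 0.5, 3.0)
F1 = make_F(1.0, 0.5, 1.5, 1.0)

# F_{1/2} = (F0 + F1)/2 has averaged coefficients (2, 1, 1, 2); its
# interior zero solves 2x + y = 1, x + 2y = 1.
p = np.linalg.solve(np.array([[2.0, 1.0], [1.0, 2.0]]), np.ones(2))

v0, v1 = F0(*p), F1(*p)
# Condition (i): the barycentric combination (F0 + F1)/2 vanishes at p,
# although neither field vanishes there (they are antiparallel).
print(p, v0, v1)
```

The same computation run with coefficients from [7] would locate the equilibrium e_s of the proof for s = 1/2.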

Epidemiological models : SIS in dimension 2
In this section we discuss an application of Theorem 3.2 to an SIS model with two groups and two environments, as studied in [8, Section 4]. We look at random switching between differential equations on [0, 1]^2 of the form

dx_i/dt = (1 − x_i) Σ_{j=1,2} C^k_{ij} x_j − D^k_i x_i,  i = 1, 2,    (3.2)

where for k ∈ E = {0, 1}, C^k = (C^k_{ij}) is an irreducible matrix with nonnegative entries and D^k_i > 0. Let A^k = C^k − diag(D^k) and let λ(A^k) denote the largest real part of the eigenvalues of A^k. Then we have the following result, due to Lajmanovich and Yorke.

Theorem 3.6 (Lajmanovich and Yorke, [11]). If λ(A^k) ≤ 0, then 0 is a G.A.S. equilibrium for the semiflow induced by (3.2) on [0, 1]^2. If λ(A^k) > 0, there exists another equilibrium x*_k ∈ (0, 1)^2 whose basin of attraction is [0, 1]^2 \ {0}.
Lemma 3.7. Assume that λ(A^0) < 0, λ(A^1) < 0, and that there exists s ∈ (0, 1) such that λ(A^s) > 0, where A^s = sA^1 + (1 − s)A^0. Then conditions (i) and (ii) are satisfied.
An example where the assumptions of this lemma hold can be found in [8, Example 4.7]. If the assumptions of Lemma 3.7 hold, Corollary 2.14 and Section 5 in [8] imply that Z is persistent provided the switching occurs sufficiently often. In that case, Theorem 3.2 gives convergence in total variation to a unique invariant probability measure. Compare this to [8, Theorem 4.11], which only gives convergence in a certain Wasserstein distance. Note that the conclusion of Lemma 3.7 is no longer true in general if λ(A^0) > 0 and λ(A^1) > 0. An easy counterexample arises when the two equilibria x*_0, x*_1 given by Theorem 3.6 coincide (see e.g. [8, Example 4.10]). In that case, condition (i) is satisfied but condition (ii) obviously is not.
Proof of Lemma 3.7. For k ∈ E, let F_k denote the vector field given by the right-hand side of (3.2). It is readily seen that for s ∈ (0, 1), the vector field F_s = sF_1 + (1 − s)F_0 is of the same form as F_0 and F_1, with matrix C^s = sC^1 + (1 − s)C^0 and vector D^s = sD^1 + (1 − s)D^0. As a consequence, since there exists s ∈ (0, 1) such that λ(A^s) > 0, Theorem 3.6 implies that F_s vanishes at some point x*_s ∈ (0, 1)^2, so condition (i) is satisfied at x*_s. Moreover, since λ(A^0) < 0 and λ(A^1) < 0, the first part of Theorem 3.6 implies that neither F_0 nor F_1 vanishes at x*_s. In particular, F_0(x*_s) and F_1(x*_s) are collinear and point in opposite directions. For k ∈ {0, 1}, let γ_k(x*_s) denote the positive orbit of x*_s under F_k. By the first part of Theorem 3.6, γ_0(x*_s) is a curve linking x*_s to 0. To obtain a contradiction, assume that condition (ii) is not satisfied. Then F_0 and F_1 are collinear and of opposite directions on all of γ_0(x*_s). Hence, for all x ∈ γ_0(x*_s), we have x*_s ∈ γ_1(x), meaning that for all ε > 0 one can find x with |x| < ε and t > 0 such that ϕ^1_t(x) = x*_s. This contradicts the fact that 0 is a G.A.S. equilibrium for F_1, hence condition (ii) holds as well. QED
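The hypotheses of Lemma 3.7 are not vacuous: the spectral abscissa s ↦ λ(A^s) need not be convex for Metzler matrices. The sketch below uses hypothetical matrices (not those of [8, Example 4.7]) for which both environments are subcritical while their average is supercritical.

```python
import numpy as np

def lam(A):
    """Largest real part of the eigenvalues of A."""
    return max(np.linalg.eigvals(A).real)

# Two hypothetical environments: each couples the groups asymmetrically.
C0, D0 = np.array([[0.0, 4.0], [0.1, 0.0]]), np.array([1.0, 1.0])
C1, D1 = np.array([[0.0, 0.1], [4.0, 0.0]]), np.array([1.0, 1.0])

A0 = C0 - np.diag(D0)          # eigenvalues -1 +/- sqrt(0.4): stable
A1 = C1 - np.diag(D1)          # same spectrum by symmetry: stable
As = 0.5 * A1 + 0.5 * A0       # A^s for s = 1/2: off-diagonals 2.05

print(lam(A0), lam(A1), lam(As))  # endpoints negative, midpoint positive
```

Geometrically, averaging symmetrizes the cross-infection terms, and the Perron root of the symmetrized matrix exceeds both endpoint roots.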

Proof of Proposition 2.9
To prove Proposition 2.9, we will use [5, Theorem 4.1], which we quote here.

Theorem 4.1 ([5, Theorem 4.1]). Suppose that the weak bracket condition holds at x* ∈ M. Then there exist m ≥ d, i = (i_1, ..., i_m) ∈ E^m and u ∈ R^m_+ such that the map v ↦ Φ^i_v(x*) is a submersion at u.
The following proposition is the key point of the proof.

Proposition 4.2. Under the hypotheses of Theorem 2.6, there exist s > 0, i_{n+1} ∈ E, i = (i_1, ..., i_n) ∈ E^n and u = (u_1, ..., u_n) ∈ R^n_+ with s > u_1 + ... + u_n, such that the map

(v, t) ↦ ϕ^{i_{n+1}}_{s − (v_1 + ... + v_n) − t} ∘ Φ^i_v(e)

is a submersion at (u, 0).

This proposition remains valid if we replace e by any point in M from which one can access a point x* where the weak bracket condition holds. In particular, it is independent of our assumption that e is an equilibrium of a vector field of the form Σ_{i∈E} α_i F_i. The proposition is a consequence of the two lemmas we give now. Thanks to the first lemma, we may assume from now on that there exist ī = (ī_1, ..., ī_p) ∈ E^p and ū = (ū_1, ..., ū_p) ∈ R^p_+ such that x* = Φ^ī_ū(e). Since x* satisfies the weak bracket condition, Theorem 4.1 implies that there exist m ≥ d, i = (i_1, ..., i_m) ∈ E^m and u = (u_1, ..., u_m) ∈ R^m_+ such that the map ψ : v ↦ Φ^i_v(x*) is a submersion at u. We write i_− = (i_1, ..., i_{m−1}) and v_− = (v_1, ..., v_{m−1}), and for all s > 0 we define a map Ψ^s : D^s_{m+p} → R^d built from the flows of the F_i. We also set σ_t(v_−, v̄) = v_1 + ... + v_{m−1} + v̄_1 + ... + v̄_p + t, so that the total elapsed time of Ψ^s equals s for all (v_−, v̄, t) ∈ D^s_{m+p}. With this property, the next lemma, computing the partial derivatives of Ψ^s, is straightforward. In particular, setting s = u_1 + ... + u_m + ū_1 + ... + ū_p and t = 0, one gets

∂Ψ^s/∂v_k(u_−, ū, 0) = −∂ψ/∂v_m(u) + ∂ψ/∂v_k(u),  1 ≤ k ≤ m − 1.

Proof of Proposition 2.9
We first construct a map Ψ̃ and then verify that it is indeed a submersion. By Proposition 4.2, there exist s > 0, i = (i_1, ..., i_n, i_{n+1}) ∈ E^{n+1} and u = (u_1, ..., u_n) ∈ R^n_+ such that the map

Ψ : (v, t) ↦ ϕ^{i_{n+1}}_{s − Σ_i v_i − t} ∘ Φ^i_v(e)

is a submersion at (u, 0). In the sequel, we denote by Ψ(v, t) the map given by Ψ(v, t)(x) = ϕ^{i_{n+1}}_{s − Σ_i v_i − t} ∘ Φ^i_v(x). We define the map Ψ̃ on D^s_{n+N}, with values in R^d, by

Ψ̃(v, v̄) = ϕ^{i_{n+1}}_{s − Σ_i v_i − Σ_j v̄_j} ∘ Φ^i_v ∘ Φ^ī_v̄(e),

where ī = (1, 2, ..., N). Then, with the previous notation, Ψ̃(v, v̄) = Ψ(v, Σ_j v̄_j) ∘ Φ^ī_v̄(e). Now we show that the map Ψ̃ is a submersion at (u, 0); here, 0 denotes the zero vector in R^N. For all 1 ≤ k ≤ n, since Ψ̃(v, 0) = Ψ(v, 0), we have ∂Ψ̃/∂v_k(u, 0) = ∂Ψ/∂v_k(u, 0).