Large deviations for Markov processes with switching and homogenisation via Hamilton-Jacobi-Bellman equations

We consider molecular motors modelled by a diffusion process driven by the gradient of a weakly periodic potential that depends on an internal degree of freedom. The switching of the internal state, which can be interpreted as a molecular switch, is modelled as a Markov jump process whose rates depend on the location of the motor. Rescaling space and time, the limit of the trajectory of the diffusion process homogenises over the periodic potential as well as over the internal degree of freedom. Around the homogenised limit, we prove a large deviation principle for the trajectories with a method developed by Feng and Kurtz, based on the analysis of an associated Hamilton--Jacobi--Bellman equation with a Hamiltonian that, as a novel feature, depends on both position and momentum.


Introduction
In biochemical and biophysical processes occurring in a cell, an important role is played by several classes of active enzymatic molecules, generally called motor proteins or molecular motors. These motors are protein molecules that convert chemical energy into mechanical work and motion (see [17-19,33] for more details). In the last decades, such biological phenomena have been investigated extensively, and this analysis was made possible in part by the study of particular Markov processes called ''switching Markov processes'' (see for instance [3,4,17,27,34]).
The process that we will consider is such a process. It is a two-component process $(X_t, I_t)$ where the first component $X_t$ is a drift-diffusion process and the second component $I_t$ is a jump process on a finite set. In the context of molecular motors, the spatial component $X_t$ models the location of the motor, for example on a filament, while $I_t$ models the molecular configuration. The two processes together evolve in accordance with the following stochastic differential equation
$$\mathrm{d}X_t = -\nabla_x \psi(X_t, I_t)\,\mathrm{d}t + \mathrm{d}B_t, \qquad (1.1)$$
where $\nabla_x$ is the gradient with respect to $x$ and $B_t$ is a Brownian motion. It is clear that the two processes are linked: the jump rates of $I_t$ depend on the location $X_t$, and the drift of $X_t$ depends on $I_t$. This means that $I_t$ stays in a first discrete state for a random duration while the diffusion component $X_t$ evolves following a stochastic differential equation with a particular drift. Then, when a switch of the configurational component occurs, the potential $\psi(\cdot, i)$ changes and therefore $X_t$ diffuses according to a new equation up to another switch of $I_t$ (see Fig. 1 for a typical behaviour of this type of process). For more details about the construction of such switching hybrid diffusions see [34]. To allow more flexibility in separating the local dynamics caused by the internal switching from macroscopic effects, e.g. modelling the presence of energy molecules in the solution, we will work with families $\psi_\epsilon$ and $r_\epsilon$ instead of $\psi$ and $r$, and we will typically assume that $\{\psi_\epsilon, r_\epsilon\}_\epsilon$ exhibit a separation of scales. The simplest instance of this separation of scales is
$$\psi_\epsilon(x, i) = \psi_1(\epsilon x, i) + \psi_2(x, i), \qquad r_\epsilon(x, i, j) = r_1(\epsilon x, i, j) + r_2(x, i, j),$$
i.e. $\psi_1$ and $r_1$ model the global macroscopic scale while $\psi_2$ and $r_2$ correspond to the local dynamics. We will also assume that $\psi_2$ is 1-periodic. Moreover, the most general context that we will consider is such that the sequences of functions $\psi_\epsilon$ and $r_\epsilon$ are actually given by two functions $\psi \in C^\infty(\mathbb{R}^d \times \mathbb{R}^d \times \{1, \dots, J\})$ and a smooth rate function $r$ as
$$\psi_\epsilon(x, i) = \psi(\epsilon x, x, i), \qquad r_\epsilon(x, i, j) = r(\epsilon x, x, i, j). \qquad (1.2)$$
The process arising from the stochastic differential equation with $\psi_\epsilon$ and $r_\epsilon$ as potential and rate function will be called $(X^\epsilon_t, I^\epsilon_t)$. However, we are interested in the macroscopic motion of the molecule. Therefore, we work with the rescaled, or ''zoomed out'', process that we obtain by scaling space and time by the positive parameter $\epsilon > 0$. More precisely, we look at $(\bar X^\epsilon_t, \bar I^\epsilon_t) := (\epsilon X^\epsilon_{t/\epsilon}, I^\epsilon_{t/\epsilon})$ with $\psi_\epsilon$ and $r_\epsilon$ given by (1.2), and we are interested in the limit $\epsilon \to 0$.
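With the symbols as reconstructed above, the generator of the rescaled pair $(\bar X^\epsilon_t, \bar I^\epsilon_t)$ follows by the chain rule; the display below is a sketch in our notation, assuming the diffusion carries the usual factor $1/2$:

```latex
A_\epsilon f(x, i) = -\nabla_x \psi_\epsilon\bigl(\tfrac{x}{\epsilon}, i\bigr) \cdot \nabla_x f(x, i)
  + \frac{\epsilon}{2}\, \Delta_x f(x, i)
  + \frac{1}{\epsilon} \sum_{j} r_\epsilon\bigl(\tfrac{x}{\epsilon}, i, j\bigr)\bigl[ f(x, j) - f(x, i) \bigr]
```

This form makes visible the two competing effects of the rescaling: the noise vanishes at order $\epsilon$ while the internal switching accelerates at order $1/\epsilon$.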
Intuitively, looking at the process from far away, the periodicity becomes smaller and smaller as $\epsilon$ decreases and the internal flip rate diverges. Thus, we expect the periodicity and the internal dynamics to homogenise, effectively yielding a deterministic limit. The numerical simulation in Fig. 2 confirms this intuition. It shows sample paths of numerical approximations of a particular switching process $(\bar X^\epsilon_t, \bar I^\epsilon_t)$ for various $\epsilon$. The figure suggests that for small $\epsilon$, the spatial component $\bar X^\epsilon_t$ tends to concentrate around a limiting path that, in the case of the simulated process, is a path with constant velocity.
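Fig. 2 is not reproduced here, but its described setup is easy to mimic. The following Euler-Maruyama sketch (the function name and step sizes are ours; the potentials, cyclic switching and unit rate follow the caption of Fig. 2) simulates one sample path of the zoomed-out spatial component:

```python
import math
import random

def simulate(eps, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama sketch of the switching diffusion of Fig. 2.

    Assumptions (ours, for illustration): potentials psi(y, i) = sin y, cos y,
    -sin y, -cos y for i = 0..3, cyclic switching i -> i+1 (mod 4) at rate 1,
    and the zoom-out (eps * X_{t/eps}, I_{t/eps}).
    """
    rng = random.Random(seed)
    # derivatives psi'(y, i) of the four periodic potentials
    dpsi = [math.cos,
            lambda y: -math.sin(y),
            lambda y: -math.cos(y),
            math.sin]
    x, i = 0.0, 0
    n = round(T / dt)
    path = [x]
    for _ in range(n):
        # unscaled dynamics: dX = -psi'(X, I) dt + dB
        x += -dpsi[i](x) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # switch with rate 1: probability 1 - exp(-dt) per step
        if rng.random() < 1.0 - math.exp(-dt):
            i = (i + 1) % 4
        path.append(x)
    # zoomed-out trajectory eps * X_t, i.e. the rescaled path on [0, eps * T]
    return [eps * v for v in path]

path = simulate(eps=0.1)
print(f"steps: {len(path) - 1}, endpoint: {path[-1]:.3f}")
```

For decreasing eps (with correspondingly longer unscaled horizons T), the zoomed-out paths flatten onto a deterministic limit, which is the concentration behaviour the figure illustrates.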
The aim of this work is to investigate large deviations around such a deterministic limit for this kind of process. Having established a large deviation principle, we are then able to characterise the limit path via a Lagrangian rate function. Indeed, we will show in the main theorem, Theorem 2.8, that there exists a non-negative rate function $I \colon C_{\mathbb{R}^d}[0, \infty) \to [0, \infty]$ with which $\{\bar X^\epsilon\}_{\epsilon > 0}$ satisfies a path-wise large deviation principle in the sense of Definition 2.5 below. Intuitively, this means that $\mathbb{P}(\bar X^\epsilon \approx \gamma) \sim e^{-I(\gamma)/\epsilon}$ as $\epsilon \to 0$, with $I$ written in terms of a Lagrangian function. It follows that $\bar X^\epsilon$ has a limit path $x \in C_{\mathbb{R}^d}[0, \infty)$, and this limit is the unique minimiser of the rate function $I$. Moreover, for any path $\gamma \neq x$ with $I(\gamma) > 0$, the probability that $\bar X^\epsilon$ is close to $\gamma$ is exponentially small in $\epsilon^{-1}$. In Corollary 2.10, we characterise the minimiser of $I$ by finding a representation of the velocity $\dot x$ in terms of the drift, similar to what one would expect from an averaging principle.
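In Feng-Kurtz-type results the Lagrangian is the Legendre transform of the limiting Hamiltonian; in the notation of our reconstructed formulas ($I$ the rate function, $\mathcal{H}$ the Hamiltonian), the expected structure is

```latex
I(\gamma) = I_0(\gamma(0)) + \int_0^\infty \mathcal{L}\bigl(\gamma(s), \dot{\gamma}(s)\bigr)\,\mathrm{d}s,
\qquad
\mathcal{L}(x, v) = \sup_{p \in \mathbb{R}^d} \bigl[ \langle p, v \rangle - \mathcal{H}(x, p) \bigr],
```

for absolutely continuous $\gamma$, and $I(\gamma) = \infty$ otherwise. This is a sketch of the standard structure, not the paper's exact statement.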
Our work falls into a long tradition of studying dynamical large deviations around limiting trajectories, starting with [15] for small-noise diffusions and [25] for two-scale systems. In the last decades this approach has been used for the study of different kinds of processes (see for instance [13] or [23]). Regarding jump-diffusions, there are very few large deviations results, but see for example [24,29]. More recently, in [26], the authors prove large deviations for a class of switching Markov processes and apply their result to examples, including the molecular motors model. Regarding this example, our work diverges from [26] primarily through two improvements: the transition from a compact to a non-compact setting, and the introduction of global macroscopic effects in the potentials and rates via the two components $\psi_1$ and $r_1$. These two facts complicate the proof of the large deviation principle. Most importantly, but without going into details, we need to prove the comparison principle for a spatially inhomogeneous Hamilton-Jacobi-Bellman equation, where the two above generalisations introduce non-trivial complications. Indeed, we prove the large deviations property using a method due to Feng and Kurtz [14] in which a central role is played by associated Hamilton-Jacobi-Bellman equations. We will explain the main innovations of our work compared to [26] in more detail in Section 6.
The work is organised as follows. We give some preliminary content and the statement of the large deviations theorem in Section 2. In Section 3 we give an overview of the theory behind the method that we use for proving the large deviations result, which is carried out in Section 4 for the switching process modelling molecular motors. Finally, in Section 5 we extract the main mathematical structures used in the preceding section and apply them to obtain a large deviations result for a more general class of switching Markov processes.

General setting and main theorem
In this first section we describe the setting and give some basic notions needed for the statement of the main theorem. We begin by fixing some frequently used notation.
• $C(A, B)$ denotes the space of continuous functions from a set $A \subseteq \mathbb{R}^d$ to a set $B \subseteq \mathbb{R}^m$;
• $C^k(A, B)$, with $k$ an integer, denotes the space of $k$-times differentiable functions on the quotient of $A$ with respect to the equivalence relation defined by $\mathbb{Z}^d$ (i.e. $\mathbb{Z}^d$-periodic functions), taking values in $B$;
• $\mathcal{P}(E)$ is the set of probability measures on a space $E$;
• $C_E[0, \infty)$ is the space of functions defined on $[0, \infty)$ and taking values in a metric space $E$.

Preliminaries
We begin with the definition of the process that we are going to study. It is a two-component Markov process $(X^\epsilon_t, I^\epsilon_t)$, to which we refer throughout this work as the ''molecular motors model'' or ''motor proteins model''.

Definition 2.1 (Molecular Motors). Given an integer $J$, we consider the setting with $\psi$ smooth and $\nabla_x \psi$ its gradient with respect to $x$. We suppose that $\psi$ grows at most linearly in the first component and is periodic in the second one. Finally, given the following operator, we define the $E$-valued Markov process $(X^\epsilon_t, I^\epsilon_t)_{t \geq 0}$ as the solution to the martingale problem corresponding to $\tilde A_\epsilon$. More precisely, $(X^\epsilon_t, I^\epsilon_t)$ is such that for all $f \in \mathcal{D}(\tilde A_\epsilon)$ the associated process is a martingale.

Remark 2.2. In our case the coefficients are regular enough that the martingale problem associated to $\tilde A_\epsilon$ is well posed (see [12,31]).
Remark 2.3. It is straightforward to see that the process defined above solves the stochastic differential equation (1.1) given in the introduction.
We first study this particular model, for which we prove the large deviations property. Then, building on this model, we arrive at a theorem for a general class of processes called switching Markov processes (see Section 5).
As mentioned in the introduction, we will work with the rescaled process. By the chain rule, the generator transforms accordingly. We will assume in the main theorem that the matrix $(R)_{ij} := \sup_{x \in \mathbb{R}^d} r(x, i, j)$ is irreducible. Here we give the rigorous definition.
The main goal of this work is to prove that the spatial component $\bar X^\epsilon$ of the above Markov process satisfies the large deviation principle. Here we give the main definitions, which are written down in terms of a general Polish space but will later be applied to concrete choices such as the path space.
We recall the definition of exponential tightness and the compact containment condition, properties that typically arise in a large deviations context.

Definition 2.6 (Exponential Tightness).
A sequence of probability measures $\{P_\epsilon\}$ on a Polish space $E$ is said to be exponentially tight if for each $a > 0$ there exists a compact set $K_a \subset E$ such that $\limsup_{\epsilon \to 0} \epsilon \log P_\epsilon(E \setminus K_a) \leq -a$. In what follows, the above definitions will be applied with the space specified below.

Remark 2.9. $E'$ captures the periodic behaviour and the internal state. In the homogenisation context described in the introduction, $E'$ is exactly what is being homogenised over, while $E$ describes the dynamics on it.

Law of large numbers and speed of the limit process
The following corollary characterises the limit process. We know that $\mathcal{H}(x, 0) = 0$ for all $x$. Then, if $\mu^*_x$ is the optimal measure for $\mathcal{H}(x, 0)$, we can conclude that $\mu^*_x$ is the unique stationary measure of $L_{x,0}$ (see Proposition Appendix A.1 in the appendix for existence and uniqueness of $\mu^*_x$). We thus find a formula for $\partial_p \mathcal{H}(x, 0)$, and hence $I(\gamma) = 0$ if and only if $\dot\gamma(t) = \partial_p \mathcal{H}(\gamma(t), 0)$ for almost all $t$.

Remark 2.11. The above corollary confirms the suggestion of Fig. 2 that, when there is no dependence on $x$ in the drift, the spatial component converges to a path with constant speed. Indeed, for small $\epsilon$, $\bar X^\epsilon$ tends to concentrate around a path with constant velocity $v = \partial_p \mathcal{H}(0)$.
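Written out in the notation of our reconstructed formulas ($\mu^*_x$ the unique stationary measure of the frozen fast dynamics $L_{x,0}$, $b$ the drift), the zero-cost velocity is the averaged drift:

```latex
\dot{x}(t) = \partial_p \mathcal{H}\bigl(x(t), 0\bigr)
= \int_{E'} b\bigl(x(t), y, i\bigr)\, \mu^*_{x(t)}(\mathrm{d}y, \mathrm{d}i)
```

The average runs over both the periodic variable and the internal state, exactly as in a classical averaging principle; this display is a sketch of the structure, not the paper's exact statement.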

Connection with Hamilton-Jacobi equations and strategy of proof
We now present a brief overview of the technical aspects of the Hamilton-Jacobi approach to path-space large deviations theory for Markov processes introduced in [14]. Below we give an outline of the main steps of this argument.
Feng and Kurtz [14, Theorem 5.15] used a variation of the projective limit method [8,9] to prove that the large deviations property can be obtained as a consequence of large deviations of the finite-dimensional distributions together with exponential tightness of the processes. By Bryc's theorem and the Markov property, large deviations for the finite-dimensional distributions follow from the convergence of the conditional ''cumulant functions''
$$V^\epsilon_t f(x) = \epsilon \log \int e^{f(y)/\epsilon}\, P^\epsilon_t(x, \mathrm{d}y),$$
which form a semigroup, with $P^\epsilon_t(x, \mathrm{d}y)$ the transition probabilities of $\bar X^\epsilon_t$. Note that $V^\epsilon_t(f) = \epsilon \log S^\epsilon_t(e^{f/\epsilon})$, where $S^\epsilon_t$ is the linear semigroup associated to the generator $A_\epsilon$. Computing $V^\epsilon_t$ and verifying its convergence directly is usually hard. In analogy to results for linear semigroups and their generators, the convergence of $V^\epsilon_t$ follows from the convergence of the nonlinear generators $H_\epsilon$. Formally applying the chain rule to $V^\epsilon_t(f)$ in terms of the linear semigroup $S^\epsilon_t$ yields the following definition, which can be put on more rigorous grounds as exhibited in [14,22].
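Carrying out the formal chain-rule computation mentioned above gives the standard Feng-Kurtz expression for the nonlinear generator (restated in our notation):

```latex
H_\epsilon f := \frac{\mathrm{d}}{\mathrm{d}t}\Big|_{t=0} V^\epsilon_t f
= \epsilon\, e^{-f/\epsilon}\, A_\epsilon\, e^{f/\epsilon},
\qquad f \in \mathcal{D}(H_\epsilon),
```

so that $H_\epsilon$ is an exponential tilt of the linear generator $A_\epsilon$.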

Definition 3.1 (Nonlinear Generator).
Let $A_\epsilon$ be the generator of the process $\bar X^\epsilon_t$. The nonlinear generator of $\bar X^\epsilon_t$ is the map $H_\epsilon$ defined on the domain $\mathcal{D}(H_\epsilon)$. More precisely, the problem comes down to two steps. First, one needs to prove the convergence of the generators $H_\epsilon \to H$ in a suitable sense. Second, one has to show that the limiting operator generates a semigroup. Sufficient conditions, by the Crandall-Liggett theorem [6], are the range condition and the dissipativity property.

Definition 3.2 (Range Condition).
Let $E$ be an arbitrary metric space and $A \colon \mathcal{D}(A) \subseteq C_b(E) \to C_b(E)$ a nonlinear operator. We say that $A$ satisfies the range condition if there exists $\lambda_0 > 0$ such that the domain of $A$ is contained in the closure of the range of $(\mathbf{1} - \lambda A)$ for all $0 < \lambda < \lambda_0$.

Definition 3.3 (Dissipative Operator).
We say that an operator $A$ is dissipative if for all $(f_1, g_1), (f_2, g_2) \in A$ and all $\lambda > 0$ we have $\|f_1 - f_2 - \lambda(g_1 - g_2)\| \geq \|f_1 - f_2\|$. The range condition corresponds to the existence of classical solutions for the equation $(\mathbf{1} - \lambda A)f = h$. Hence, in order to prove large deviations, we need convergence of the nonlinear generators to a dissipative operator $H$ for which existence of classical solutions of $(\mathbf{1} - \lambda H)f = h$ holds. However, it is well known that existence of classical solutions is too strong a condition for this method to work in most cases. As observed by Crandall and Lions in [7], the use of viscosity solutions allows one to overcome this problem. These weak solutions are defined so as to create an extension $\hat H$ of $H$ that automatically satisfies the range condition and that is still dissipative. Below we give the definitions for both single- and multi-valued operators.
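Under the range condition and dissipativity, the Crandall-Liggett theorem produces the limit semigroup via iterated resolvents (standard statement, restated in our notation):

```latex
V(t) f = \lim_{n \to \infty} \Bigl( \mathbf{1} - \tfrac{t}{n} H \Bigr)^{-n} f,
```

where each resolvent $(\mathbf{1} - \lambda H)^{-1} h$ is defined through the Hamilton-Jacobi equation $(\mathbf{1} - \lambda H) f = h$.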

Definition 3.4 (Sub- and Supersolutions for Single-Valued Operators).
Let  ∶ () ⊆ () → () be a nonlinear operator.Then for  > 0 and ℎ ∈ (), define viscosity sub-and supersolutions of (1 − ) = ℎ as follows: (i) We say that  ∶  → R is a viscosity subsolution if it is bounded and upper semicontinuous and, for every  ∈ (), there exists a sequence   ∈  such that S. Della Corte and R.C. Kraaij (ii) We say that  ∶  → R is a viscosity supersolution if it is bounded and lower semicontinuous and, for every  ∈ (), there exists a sequence   ∈  such that A function  ∈ () is called a viscosity solution of (1 − ) = ℎ if it is both a viscosity sub-and supersolution.
Remark 3.6. Consider the definition of subsolutions. If the test function $f \in \mathcal{D}(A)$ has compact sublevel sets, then instead of working with a sequence $x_n$, there exists a point $x_0 \in E$ at which the subsolution inequalities hold. A similar simplification holds in the case of supersolutions.
In the classical context, the range condition combined with the dissipativity of the operator can be shown to imply unique solvability of the equation $f - \lambda A f = h$. However, for viscosity solutions this argument no longer works. The main reason is that viscosity solutions are in general not in the domain of $A$. In order to address this issue, one option is to assume that the following comparison principle (which implies uniqueness) holds.

Definition 3.7 (Comparison Principle).
We say that a Hamilton-Jacobi equation $(\mathbf{1} - \lambda A)f = h$ satisfies the comparison principle if for any viscosity subsolution $u$ and viscosity supersolution $v$, the inequality $u \leq v$ holds on $E$.
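Uniqueness of viscosity solutions is immediate from the comparison principle: a solution $u_1$ is in particular a subsolution and a solution $u_2$ a supersolution, and vice versa, so

```latex
u_1 \le u_2 \quad \text{and} \quad u_2 \le u_1
\qquad \Longrightarrow \qquad u_1 = u_2 .
```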
The theory above was made rigorous in [14,22]. We present the key result in our context and notation. Theorem 3.8 (Adaptation of Theorem 7.18 of [14]). Moreover, $\bar X^\epsilon$ satisfies the large deviation principle with a rate function given as a supremum over finite-dimensional marginals, where the supremum is taken over all finite tuples $t_0 = 0 < t_1 < t_2 < \cdots < t_k$.

Proof of the main theorem
Using the discussion of the previous section and Theorem 3.8, the proof of the main theorem reduces to a number of claims. We prove these claims respectively in Propositions 4.3, 4.2, 4.16, 4.22 and 4.25 in the following subsections. Once these facts are proved, we can apply Theorem 3.8 and the required large deviation property follows. □

The convergence of generators and an eigenvalue problem
The first step of the proof of large deviations is based on operator convergence. Since the process and its limit do not live in the same space, we cannot work with the usual definition of convergence. In the following, we therefore introduce a notion of limit for functions and multivalued operators defined on different spaces.
The following basic example illustrates the intuition behind the definitions above.
where the images are as specified. Proof. We want to prove that $H_\epsilon$ converges to $H$ in the sense of Definition 4.2. To this end, note that, by the definitions of $H_\epsilon$ and $H$, and choosing functions $f_\epsilon(x, y)$ of corrector form, we obtain an expansion in which $\nabla_y$ and $\Delta_y$ denote the gradient and Laplacian with respect to the fast variable $y = x/\epsilon$. We can conclude the desired convergence. Note that for all $f \in \mathcal{D}(H)$ the image $Hf$ has the representation $Hf(x) = \mathcal{H}(x, \nabla f(x))$, where the map $\mathcal{H}$ is given by a principal-eigenvalue formula. Note that $\mathcal{H}$ is jointly convex in $x$ and $p$. By Proposition 4.6, for every $x$ and $p$ there exists an eigenfunction for which equality holds, i.e. the eigenvalue relation holds pointwise for every $z' \in E'$. Therefore, for $\lambda \in [0, 1]$ and any $p_1, p_2 \in \mathbb{R}^d$ with corresponding eigenfunctions $g_{p_1}$ and $g_{p_2}$, convexity follows. Regarding coercivity of $\mathcal{H}(x, p)$, we isolate the quadratic term in $p$ to obtain a lower bound: any eigenfunction $g \in C^2(E')$ admits a minimum on the compact set $E'$, and with the thereby obtained uniform lower bound coercivity follows. Regarding $\mathcal{H}(x, 0) = 0$, the first inequality $\mathcal{H}(x, 0) \geq 0$ follows from the variational representation; for the opposite inequality we choose the function $g = (1, \dots, 1)$ in the representation of $\mathcal{H}$.
We will prove the continuity of $\mathcal{H}$ by showing that it is both lower and upper semicontinuous. For that, we need the following auxiliary results. In particular, for the lower semicontinuity we will make use of $\Gamma$-convergence in the sense expressed in the following lemma, in which we prove this property in a general context; later, we will apply it to the relevant functionals. Lemma 4.9 ($\Gamma$-Convergence). Given two sets $A, B \subseteq \mathbb{R}^d$ and a constant $C \geq 0$, we define the associated family of functionals as follows.

The set $\phi(x, p)$ is non-empty, as the optimiser belongs to it, and it is compact because any closed subset of $\mathcal{P}(E')$ is compact. We are left to show that $\phi$ is upper hemicontinuous. Let $(x_n, p_n, \theta_n) \to (x, p, \theta)$ with $\theta_n \in \phi(x_n, p_n)$. We establish that $\theta \in \phi(x, p)$. By the lower semicontinuity of the underlying functional and the definition of $\phi$, we find an inequality which implies indeed that $\theta \in \phi(x, p)$. Thus, upper hemicontinuity follows by an application of Lemma 4.11.

Comparison principle
In this section we prove the comparison principle for the Hamilton-Jacobi equation in terms of $H$ by relating it to a set of Hamilton-Jacobi equations constructed from $\mathcal{H}$ (Fig. 3). We introduce the operators $H_\dagger$, $H_\ddagger$ and $H_1$, $H_2$. In both cases, the new Hamiltonians will serve as natural upper and lower bounds for $Hf(x) = \mathcal{H}(x, \nabla f(x))$ and $H$ respectively, where $H$ and $\mathcal{H}$ are the operators introduced in Propositions 4.6 and 4.3. These new Hamiltonians are defined in terms of a containment function $\Upsilon$, which allows us to restrict our analysis to compact sets. Here we give the rigorous definition:
• $\Upsilon$ has compact sub-level sets, i.e. for every $c \geq 0$ the set $\{x \mid \Upsilon(x) \leq c\}$ is compact;
Proof. First note that $\Upsilon$ has compact sub-level sets. Regarding the second property, by the definition of $\Upsilon$ we have, for every $x$, a bound on $\mathcal{H}(x, \nabla\Upsilon(x))$. Recalling that the drift grows at most linearly in $x$, we can conclude that $\sup_x \mathcal{H}(x, \nabla\Upsilon(x)) < \infty$. □ Using the above lemma we are now able to define the auxiliary operators in terms of $\Upsilon$. In the following we will denote by $C^\infty_l(E)$ the set of smooth functions on $E$ that have a lower bound and by $C^\infty_u(E)$ the set of smooth functions on $E$ that have an upper bound. Definition 4.14. Fix $\varepsilon \in (0, 1)$; given the containment function, the constant $C_\Upsilon := \sup_x \mathcal{H}(x, \nabla\Upsilon(x))$ and $Hf(x) = \mathcal{H}(x, \nabla f(x))$, we define the auxiliary operators.
We now prove the comparison principle for $f - \lambda H f = h$ based on the results summarised in Fig. 3. The rest of this subsection is devoted to establishing Fig. 3. More precisely, we establish Fig. 3 through results 4.17, 4.18, 4.19 and 4.24.
The next theorem contains the comparison principle for $H_\dagger$ and $H_\ddagger$. The proof follows standard ideas that can be found, for instance, in [2,5]. In order to be able to use both the subsolution and supersolution properties in the estimate of $\sup_x u(x) - v(x)$, we use the following strategy based on the introduction of doubled variables.
1. First of all, note that the supremum over $x$ of $u(x) - v(x)$ can be replaced, sending $\alpha \to 0$, by the supremum over $x$ and $y$ of the doubled-variables function $u(x) - v(y) - (2\alpha)^{-1}|x - y|^2$.
2. Once the optimising pair $(x_\alpha, y_\alpha)$ is found, we are able to use the sub- and supersolution properties in the following way:
• fixing $y_\alpha$ and optimising over $x$, it can be used in the application of the subsolution property of $u$;
• fixing $x_\alpha$ and optimising over $y$, it can be used in the application of the supersolution property of $v$.
Proof. Following the above steps we define the doubled-variables function. Note that the containment function $\Upsilon$ is introduced in order to be able to work in a compact set, and the positive constant will allow us to use the convexity of $\mathcal{H}$. Since the doubled-variables function is upper semicontinuous and tends to $-\infty$ as $|x| + |y| \to \infty$, for every $\alpha \in (0, 1)$ there exists an optimising pair $(x_\alpha, y_\alpha)$. Suppose by contradiction that $\delta = u(\hat x) - v(\hat x) > 0$ for some $\hat x$. We choose the parameters such that the required bounds hold; therefore there exist thresholds such that the relevant quantities belong to $(0, \varepsilon_0)$.
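The first step can be written as the classical doubling identity (generic form; the actual proof adds containment-function corrections):

```latex
\sup_{x} \bigl[ u(x) - v(x) \bigr]
= \lim_{\alpha \downarrow 0} \, \sup_{x, y} \Bigl[ u(x) - v(y) - \frac{|x - y|^2}{2\alpha} \Bigr],
```

where the quadratic penalisation forces the optimisers $x_\alpha, y_\alpha$ to merge as $\alpha \downarrow 0$, so the subsolution inequality at $x_\alpha$ and the supersolution inequality at $y_\alpha$ can be subtracted against the same momentum $p_\alpha = (x_\alpha - y_\alpha)/\alpha$.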

With a similar argument for the supersolution, we obtain a matching bound from the supersolution inequality. By the coercivity property obtained in Proposition 4.7 in Section 4.2 and by inequality (4.6), the relevant sequence of momenta is bounded in $\alpha$, allowing us to extract a converging subsequence. We conclude by combining the resulting limits, as the parameters were chosen appropriately.

Proof. Fix $\lambda > 0$ and $h \in C_b(E)$. Let $u$ be a subsolution to $(\mathbf{1} - \lambda H)f = h$. We prove it is also a subsolution to $(\mathbf{1} - \lambda H_1)f = h$. Fix $\varepsilon \in (0, 1)$, $g \in C^2(E')$ and $\varphi \in C^\infty_l(E)$, so that the corresponding test function belongs to $\mathcal{D}(H_1)$, with the notation as in Definition 4.15. We will prove that there are points at which the required inequalities hold. Given $M := \lambda^{-1} \sup_x [u(x) - (1 - \varepsilon)\varphi(x)] < \infty$, which is finite as $u$ is bounded and $\varphi \in C^\infty_l(E)$, the sequence $x_n$ along which the limit in (4.7) is attained is contained in the compact set $K := \{x \mid \Upsilon(x) \leq M\}$. We define a smooth increasing function $\chi \colon \mathbb{R} \to \mathbb{R}$ and denote by $\varphi_\chi$ the function on $E$ obtained by composition. By construction, $\varphi_\chi$ is smooth and constant outside a compact set and thus lies in $\mathcal{D}(H)$. We conclude the subsolution property.

The definition of viscosity solutions, Definition 3.5, is written down in terms of the existence of a sequence of points that maximises $u - f$ or minimises $v - f$. To prove the lemma above, we would like to have the subsolution and supersolution inequalities at any point that maximises or minimises the difference. This is achieved by the following auxiliary lemma, for whose proof we refer to Lemma 5.7 of [23].

Proof of Lemma 4.19. We only prove the subsolution statement. Fix $\lambda > 0$ and $h \in C_b(E)$. Let $u$ be a subsolution of $(\mathbf{1} - \lambda H_1)f = h$. We prove that it is also a subsolution of $(\mathbf{1} - \lambda H_\dagger)f = h$. Let $f_1 \in \mathcal{D}(H_1)$ be of the prescribed form and let $x_0$ be such that $u(x_0) - f_1(x_0) = \sup_x [u(x) - f_1(x)]$.
For each $\varepsilon > 0$, since $\mathcal{H}(x, p)$ is a principal eigenvalue (as remarked in Proposition 4.6), there exists an eigenfunction $g$ such that, by the subsolution property of $u$, there is a point at which the required inequality holds, where the second inequality follows by (4.10); this establishes that $u$ is a subsolution for $(\mathbf{1} - \lambda H_\dagger)f = h$. □ We conclude this subsection by proving the right part of Fig. 3. Proof. Fix $\lambda > 0$ and $h \in C_b(E)$. Let $u$ be a subsolution to $(\mathbf{1} - \lambda H)f = h$. We prove it is also a subsolution to the corresponding auxiliary equation.

Exponential tightness
To establish exponential tightness, we first note that by [14, Corollary 4.19] it suffices to establish the exponential compact containment condition. This is the content of the next proposition.
Proposition. For all compact $K \subset E$, $T > 0$ and $a > 0$ there is a compact set $K'$ such that the containment estimate holds. By construction the relevant process is a martingale. Let $K \subset E$ be compact. Since the supremum of $\mathcal{H}$ over $K \times E'$ is bounded, and $\mathcal{H}$ is the limit of $H_\epsilon$ for $\epsilon \to 0$ in the sense of Definition 4.2, we obtain that the term in the exponential is bounded, which yields the compact set $K'$. □

Action-integral representation of the rate function
In this section we establish a representation of the rate function as an integral of a Lagrangian function $\mathcal{L}$. We refer to this representation as the ''action-integral representation'' of the rate function $I$. We argue on the basis of Section 8 of [14], for which we need to check two conditions. Finally, Condition 8.11 is implied by (c) above, with the control $\lambda(\mathrm{d}v \times \mathrm{d}s) = \delta_{\dot x(s)}(\mathrm{d}v) \times \mathrm{d}s$.
The comparison principle for $H$ follows from Proposition 4.21 and Theorem 4.17. □ In the following, we prove the integral representation of the rate function. First, recall that Theorem 3.8 gives the existence of a semigroup $V(t)$ and a family of resolvent operators. Let $\mathbf{V}(t) \colon C_b(\mathbb{R}^d) \to C_b(\mathbb{R}^d)$ be the Nisio semigroup with cost function $\mathcal{L}$, and let $\mathbf{R}(\lambda)h$ be the corresponding resolvent operator. The proof of the result below is based on the following four main steps.
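For orientation, a Nisio semigroup with running cost $\mathcal{L}$ acts on $f \in C_b(\mathbb{R}^d)$ as (standard form, restated in our notation):

```latex
\mathbf{V}(t) f(x) = \sup_{\substack{\gamma \ \text{abs.\ cont.} \\ \gamma(0) = x}}
\Bigl[ f(\gamma(t)) - \int_0^t \mathcal{L}\bigl(\gamma(s), \dot{\gamma}(s)\bigr)\, \mathrm{d}s \Bigr];
```

equality of the resolvents of $V(t)$ and $\mathbf{V}(t)$ then upgrades to equality of the semigroups, which is what the steps below establish.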
• Starting from the equality of the resolvents, we work toward an equality of the semigroups $V(t)$ and $\mathbf{V}(t)$.

A more general theorem
Analysing the proofs in the previous sections, we can state the following facts:
• In the proof of the large deviation principle, the main steps are:
1. convergence of the nonlinear operators $H_\epsilon$ to a multivalued operator $H$;
2. the comparison principle for $(\mathbf{1} - \lambda H)f = h$.
• The existence of an eigenvalue $\mathcal{H}(x, p)$ and its convexity, coercivity and continuity are crucial for our approach to the comparison principle. The arguments for existence, convexity and coercivity (proofs of Propositions 4.1 and 4.7) are based on the fact that $\mathcal{H}(x, p)$ is the eigenvalue of an operator given as a sum of three operators verifying particular properties, such as coercivity and the maximum principle. To show the continuity of $\mathcal{H}$, the representation (2.2) is needed; in particular, some properties of $\psi$ and $r$, such as convergence, are necessary.
The above observations allow for a straightforward generalisation in Theorem 5.3 and justify the assumptions of the next subsection.
In this section we indeed prove the large deviation principle for a general switching Markov process. In particular, we will study the Markov process $(\bar X^\epsilon_t, \bar I^\epsilon_t)$ that solves the martingale problem corresponding to the following operator, where $A^{(i)}_\epsilon \colon \mathcal{D}(A^{(i)}_\epsilon) \subseteq C_b(\mathbb{R}^d) \to C_b(\mathbb{R}^d)$ is the generator of a strong $\mathbb{R}^d$-valued Markov process, with domain $\mathcal{D}(A^{(i)}_\epsilon)$.
(iii)  ↦  , is coercive uniformly with respect to .
The above assumption implies the convergence of the nonlinear operators and the existence of the principal eigenvalue. Moreover, it will imply convexity and coercivity of $\mathcal{H}(x, p)$. (ii) the rate function is continuous and bounded in the appropriate norm; (iii) there exists a containment function for $\mathcal{H}$ in the sense of Definition 4.12; (iv) for all $x$, there exists a unique measure $\mu^*_x$ such that $L_{x,0}(\mu^*_x) = 0$.
Assumption 5.2 implies the continuity of $\mathcal{H}$.

Large deviation for a switching Markov process
We are ready to state the general theorem.

Fig. 2.
Fig. 2. Sample paths of a numerical simulation of the spatial component $\bar X^\epsilon_t$ of a switching process for different values of $\epsilon$. We chose a drift equal to the periodic part $\psi_2$ for all $i \in \{1, \dots, 4\}$, with $\psi_2(x, i)$ equal to $\sin(x/\epsilon)$, $\cos(x/\epsilon)$, $-\sin(x/\epsilon)$ and $-\cos(x/\epsilon)$ for $i = 1, 2, 3, 4$ respectively, and a rate equal to 1. The jump process switches from a value $i \in \{1, 2, 3\}$ to the value $i + 1$ and from 4 to 1. In this way, the process starts diffusing around a minimum of $\sin(x/\epsilon)$ ($i = 1$). The first horizontal part of the ''stair'' corresponds to this evolution. Then, a switch of $\bar I^\epsilon_t$ takes place, so the value of $i$ becomes $i = 2$, and the spatial component goes on to diffuse around a minimum of $\cos(x/\epsilon)$, which is the second horizontal part, until another switch.

Theorem 5.3 (Large Deviation for a Switching Markov Process)
Let $(\bar X^\epsilon_t, \bar I^\epsilon_t)$ be the solution of the martingale problem corresponding to the operator given in (5.1). Suppose that Assumptions 5.1 and 5.2 hold and that, at time zero, the family of random variables $\{\bar X^\epsilon(0)\}_{\epsilon > 0}$ satisfies a large deviation principle in $\mathbb{R}^d$ with good rate function $I_0 \colon \mathbb{R}^d \to [0, \infty]$. Then the spatial component $\{\bar X^\epsilon\}$ satisfies a large deviation principle in $C_{\mathbb{R}^d}[0, \infty)$. The proof of the above theorem follows the same lines as Section 4.3.

Definition 2.7. We say that a sequence of $E$-valued random variables is exponentially tight if the corresponding sequence of distributions is exponentially tight. We say that processes $(Y_\epsilon(t))$ satisfy the exponential compact containment condition if for all $T > 0$ and $a > 0$ there is a compact set $K = K(T, a) \subseteq E$ such that $\limsup_{\epsilon \to 0} \epsilon \log \mathbb{P}\bigl(Y_\epsilon(t) \notin K \text{ for some } t \leq T\bigr) \leq -a$.
Now we state the main theorem, in which we give sufficient conditions for the large deviation property of the spatial component of the switching process defined in Definition 2.1.

Theorem 2.8 (Large Deviation for the ''Molecular Motors Model'')
Let $(X^\epsilon_t, I^\epsilon_t)$ be the Markov process of Definition 2.1. Suppose that the matrix $(R)_{ij} = \sup_{x \in \mathbb{R}^d} r(x, i, j)$ is irreducible. Denote by $\bar X^\epsilon_t = \epsilon X^\epsilon_{t/\epsilon}$ the rescaled process. Suppose further that at time zero the family of random variables $\{\bar X^\epsilon(0)\}_{\epsilon > 0}$ satisfies a large deviation principle in $\mathbb{R}^d$ with good rate function $I_0$.