A MODERATE DEVIATION PRINCIPLE FOR STOCHASTIC HAMILTONIAN SYSTEMS

Abstract. We prove a moderate deviation principle for stochastic differential equations (SDEs) with non-Lipschitz conditions. As an application of our result, we also study stochastic Hamiltonian systems.

A natural question is whether a moderate deviation principle (MDP) holds for stochastic Hamiltonian systems. In this paper, we give a positive answer. Our proof is based on the weak convergence approach. The core idea is to characterize tightness via the Arzelà-Ascoli theorem. The main difficulty of this paper is to deal with the non-Lipschitz property of the coefficients of the SDEs, which requires the Lyapunov function technique, the exponential martingale technique and the stochastic Gronwall inequality.
Consider the following stochastic nonlinear oscillator equation:

z̈_t = C₀ż_t − ∇V(z_t) + Θ(z_t)Ẇ_t, (1.1)

where C₀ ∈ R, V ∈ C²(R) and ∇V(·) satisfies (A1) in Section 4, Θ ∈ C²(R) has a bounded first-order derivative, and W_t is a one-dimensional Brownian white noise. Intuitively, we can see that (1.1) is equivalent to the following SDE:

dx_t = f_t(x_t)dt + g_t(x_t)dW_t,

where x = (z, u), f_t(x) = f_t(z, u) := (u, C₀u − ∇V(z)), g_t(x) = g(z, u) := (0, Θ(z)).

Consider the following general SDE with small perturbation:

dX^ε_t = b(t, X^ε_t)dt + √ε σ(t, X^ε_t)dW_t, X^ε_0 = x ∈ R^d, (1.2)

where b : R₊ × R^d → R^d and σ : R₊ × R^d → R^d ⊗ R^m are Borel measurable, and {W_t}_{t≥0} is an m-dimensional standard Brownian motion on the classical Wiener space (Ω, F, P). Moreover, the coefficients are non-globally Lipschitz and of super-linear growth. Under assumptions (H1)-(H3) (see Sect. 2), by [29], there is a unique strong solution to equation (1.2). As ε ↓ 0, the solution X^ε_t of equation (1.2) tends to the solution of the following deterministic equation:

dX⁰_t = b(t, X⁰_t)dt, X⁰_0 = x. (1.3)

We shall investigate the deviations of X^ε from the deterministic solution X⁰ as ε ↓ 0, that is, the asymptotic behavior of the trajectory

Y^ε_t := (X^ε_t − X⁰_t)/(√ε h(ε)), t ∈ [0, 1],

in which h(ε) is some deviation scale. In particular:
(1) For h(ε) = 1/√ε, it corresponds to the LDP, which has been established by Ren in [16].
(2) For h(ε) = 1, it is associated with the central limit theorem (CLT), which is our future work.
(3) For h(ε) satisfying

h(ε) → +∞ and √ε h(ε) → 0 as ε → 0, (1.4)

it is concerned with the MDP, which is our main interest in this article. Throughout this paper, we always assume that h(ε) satisfies equation (1.4).

The organization of the paper is as follows. In Section 2, we give some preliminaries and recall some useful results. Section 3 is devoted to the proof of the MDP. In Section 4, we apply our result to stochastic Hamiltonian systems.
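To make the small-noise limit X^ε → X⁰ concrete, here is a minimal numerical sketch (our own illustration, not taken from the paper) of an oscillator-type SDE under an Euler-Maruyama discretization; the concrete choices V(z) = z²/2 (so ∇V(z) = z), Θ(z) = cos z, C₀ = −0.5, the initial condition and the step count are all illustrative assumptions.

```python
import numpy as np

def euler_maruyama(eps, x0=(1.0, 0.0), C0=-0.5, T=1.0, n=1000, seed=0):
    """Simulate dz = u dt, du = (C0*u - V'(z)) dt + sqrt(eps)*Theta(z) dW
    for the illustrative choices V(z) = z**2/2 and Theta(z) = cos(z)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    z, u = x0
    path = np.empty((n + 1, 2))
    path[0] = (z, u)
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        z_new = z + u * dt
        u_new = u + (C0 * u - z) * dt + np.sqrt(eps) * np.cos(z) * dW
        z, u = z_new, u_new
        path[k + 1] = (z, u)
    return path

# As eps -> 0 the noisy path approaches the deterministic flow (eps = 0):
noisy = euler_maruyama(eps=0.01)
det = euler_maruyama(eps=0.0)
gap = np.abs(noisy - det).max()
```

For small ε the maximal gap between the noisy and deterministic trajectories is small, consistent with X^ε tending to X⁰; the MDP then quantifies the rescaled fluctuation (X^ε − X⁰)/(√ε h(ε)).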
Throughout the paper, C with or without indices will denote various constants whose values may change from line to line. Unless otherwise stated, all expectations E are taken with respect to P.

Preliminaries
In this section, we recall some basic notations, notions, assumptions and give some important theorems and lemmas that will be used later. Let us first introduce some notations and notions.
• For simplicity of notation, we restrict our discussion to the time interval [0, 1]. We shall use | · | and ⟨·, ·⟩ to denote the Euclidean norm and inner product, respectively. By σ* and ∇^j σ we denote the transpose and the j-th order gradient of the matrix σ.
• We set C as the space of R^d-valued continuous functions on [0, 1] with the norm ‖f‖_C := sup_{t∈[0,1]} |f(t)|.
• Let H be the Cameron-Martin space of functions which are absolutely continuous, vanish at 0 and whose derivative is square integrable, i.e.,

H := { h : [0, 1] → R^m : h(t) = ∫₀ᵗ ḣ(s)ds, ∫₀¹ |ḣ(s)|² ds < ∞ }.

Moreover, H is a Hilbert space with inner product ⟨h₁, h₂⟩_H := ∫₀¹ ⟨ḣ₁(s), ḣ₂(s)⟩ ds. For N > 0, set B_N := { h ∈ H : ∫₀¹ |ḣ(s)|² ds ≤ N }; then, by Kolmogorov and Fomin [13], it is easy to find that B_N is a compact Polish space under the weak topology. B_N is endowed with the weak topology in this paper.
• Define A_N := { ϕ : ϕ is an R^m-valued F_t-predictable process such that ϕ(·, ω) ∈ B_N for P-a.s. ω }.

Next, we recall the definition of large deviations from [9, 10]. Let E be a Polish space with the Borel σ-field B(E).
Definition 2.1. A family {Y^ε}_{ε>0} of E-valued random elements is said to satisfy the large deviation principle on E with rate function I and with the speed function h²(ε), a family of positive numbers tending to +∞ as ε → 0, if the following condition holds: for any A ∈ B(E),

−inf_{x∈A°} I(x) ≤ lim inf_{ε→0} h⁻²(ε) log P(Y^ε ∈ A) ≤ lim sup_{ε→0} h⁻²(ε) log P(Y^ε ∈ A) ≤ −inf_{x∈Ā} I(x),

where A° and Ā are the interior and closure of the set A.
We use the method of weak convergence to prove the MDP. By [5], the Laplace principle and the LDP are equivalent, so it is enough to prove the following theorem.

Theorem 2.2. Suppose that {Γ^ε}_{ε>0} satisfies the following assumptions: there exists a measurable map Γ⁰ : H → E such that
(i) for any N < ∞ and any family {ϕ^ε} ⊂ A_N such that ϕ^ε converges in distribution to ϕ ∈ A_N as ε → 0, Γ^ε(W_· + h(ε)∫₀· ϕ̇^ε(s)ds) converges in distribution to Γ⁰(∫₀· ϕ̇(s)ds);
(ii) for any N < ∞, the set {Γ⁰(∫₀· ϕ̇(s)ds) : ϕ ∈ B_N} is a compact subset of E.
Then the family {Γ^ε}_{ε>0} satisfies a large deviation principle in E with rate function I given by

I(f) := inf_{ {ϕ∈H : f = Γ⁰(∫₀· ϕ̇(s)ds)} } (1/2)∫₀¹ |ϕ̇(s)|² ds, f ∈ E,

with the convention inf ∅ = ∞.
The following stochastic Gronwall's inequality can be found in [26], Lemma 3.8.
Lemma 2.3. Let ξ(t) and η(t) be two nonnegative càdlàg F_t-adapted processes, A_t a continuous nondecreasing F_t-adapted process with A_0 = 0, and M_t a local martingale with M_0 = 0. Suppose that

ξ(t) ≤ η(t) + ∫₀ᵗ ξ(s) dA_s + M_t, t ∈ [0, 1].

Then for any 0 < q < κ < 1 and any stopping time τ, the moment estimate (2.1) of [26], Lemma 3.8, holds, where ξ(t)* := sup_{s∈[0,t]} ξ(s).

We impose the following assumptions throughout this paper:

(H1) b and σ are continuous in x, and b is differentiable with respect to x. For a Lyapunov function V ∈ C²(R^d) and α ∈ [0, 1], there exist constants C₁, C₂, C₃, C₄ > 0 such that the corresponding growth and monotonicity conditions hold for all t ∈ [0, 1] and x, y ∈ R^d, where ∇ is the gradient along the x direction, i.e., ∇b = (∂b_i/∂x_j)_{1≤i,j≤d} is the Jacobian matrix of b.

(H2) There exist constants C₅, C₆ ∈ R such that for all t ∈ [0, 1],

Moderate deviation principle
In this section, we shall prove the MDP. First of all, we give the main result of this section.
Theorem 3.1. Assume that (H1)-(H3) hold. Then Y^ε satisfies an LDP with the speed h²(ε) and with the rate function I, which is defined by

I(f) := inf_{ {ϕ∈H : f = Y^ϕ} } (1/2)∫₀¹ |ϕ̇(s)|² ds, f ∈ C,

where Γ⁰(∫₀· ϕ̇(s)ds) := Y^ϕ(·) satisfies the following equation:

Y^ϕ_t = ∫₀ᵗ ∇b(s, X⁰_s) Y^ϕ_s ds + ∫₀ᵗ σ(s, X⁰_s) ϕ̇(s) ds, t ∈ [0, 1]. (3.1)

Before giving the proof of Theorem 3.1, we first establish some necessary lemmas.
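For orientation, here is a worked special case (our own illustration, not from the paper), assuming the skeleton equation has the standard linearized form Ẏ^ϕ_t = ∇b(t, X⁰_t)Y^ϕ_t + σ(t, X⁰_t)ϕ̇(t) with Y^ϕ_0 = 0: for b(t, x) = Ax with a constant matrix A and σ ≡ I_d, variation of constants solves the skeleton explicitly:

```latex
Y^{\phi}_t=\int_0^t A\,Y^{\phi}_s\,\mathrm{d}s+\int_0^t\dot{\phi}(s)\,\mathrm{d}s
\quad\Longrightarrow\quad
Y^{\phi}_t=\int_0^t e^{A(t-s)}\,\dot{\phi}(s)\,\mathrm{d}s .
```

In this case the rate function is the classical quadratic energy inf{ ½∫₀¹|ϕ̇(s)|²ds : ϕ steers the linear system from 0 to f }.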
Lemma 3.2. Assume that (H1)-(H3) hold. Then equation (3.1) has a unique solution for any ϕ ∈ B_N. Moreover, there exists a constant C > 0 such that the a priori bound (3.2) holds.

Proof. Following [16], we obtain Y^n_t by introducing the truncated equation (3.3), in which χ_n : R^d → R is a family of non-negative smooth cutoff functions satisfying χ_n(x) = 1 for |x| ≤ n and χ_n(x) = 0 for |x| ≥ n + 1. For any N > 0 and |x|, |y| ≤ N, equations (2.3), (2.4) and (2.8) show that the coefficients of (3.3) are Lipschitz continuous on bounded sets. Hence, by the classical Carathéodory existence theorem for ODEs in [8], (3.3) has a unique solution Y^n_t. For ϕ ∈ B_N, by equations (2.4), (2.8), (3.2), Hölder's inequality and Gronwall's inequality, we obtain a moment bound with a constant C independent of n. By a method similar to Lemma 3.5 below (or see [19], Lem. 3.6), using stopping times and Gronwall's inequality, the family {Y^n}_{n∈N} is consistent. Letting Y^ϕ_t := lim_{n→∞} Y^n_t, it is easy to verify that Y^ϕ_t is a non-explosive solution of equation (3.1).

Next, we prove the uniqueness. Suppose that Y¹ and Y² are two solutions of equation (3.1). By equations (2.8) and (3.2) together with Gronwall's inequality, we get Y¹ = Y², which completes the proof.
Lemma 3.3. Assume that (H1)-(H3) hold. Then, for any 0 < N < ∞, the family {Γ⁰(∫₀· ϕ̇(s)ds) : ϕ ∈ B_N} is a compact subset of C.

Proof. According to the compactness of B_N, it is enough to prove that Γ⁰ is continuous on B_N. Let ϕ_n → ϕ weakly in B_N. By equation (3.1), we can write the difference Y^{ϕ_n} − Y^ϕ in integral form. By equations (2.4), (2.8) and (3.2), and since ϕ_n → ϕ weakly in B_N, the definition of weak convergence in H together with equation (3.5) shows that the term driven by ϕ̇_n − ϕ̇ tends to zero. For ϕ, ϕ_n ∈ B_N, equation (3.5) and Hölder's inequality, and, for 0 ≤ s ≤ t ≤ 1, equations (2.4), (2.8), (3.2) and Hölder's inequality, give the required equicontinuity estimates. Using equations (3.8), (3.9) and Gronwall's inequality, we find that Y^{ϕ_n} converges to Y^ϕ in C. The proof is complete.
Let Y^ε := (X^ε − X⁰)/(√ε h(ε)); then Y^ε satisfies the following equation:

Y^ε_t = (1/(√ε h(ε))) ∫₀ᵗ [b(s, X⁰_s + √ε h(ε)Y^ε_s) − b(s, X⁰_s)] ds + (1/h(ε)) ∫₀ᵗ σ(s, X⁰_s + √ε h(ε)Y^ε_s) dW_s.

By the Yamada-Watanabe theorem (see [27]), there exists a measurable map Γ^ε : C → C such that Y^ε_· = Γ^ε(W_·). For any ϕ^ε ∈ A_N, we can get

E exp{ (h²(ε)/2) ∫₀¹ |ϕ̇^ε(s)|² ds } ≤ exp{ h²(ε)N/2 } < ∞,

that is, Novikov's condition holds. Then we define a probability measure P^ε by

dP^ε/dP := exp{ −h(ε) ∫₀¹ ϕ̇^ε(s) dW_s − (h²(ε)/2) ∫₀¹ |ϕ̇^ε(s)|² ds }.

By the Girsanov theorem,

W̃_· := W_· + h(ε) ∫₀· ϕ̇^ε(s) ds

is a Brownian motion under the probability measure P^ε. Moreover, we can obtain that Y^{ε,ϕ}_· = Γ^ε(W_· + h(ε)∫₀· ϕ̇^ε(s)ds) satisfies the following equation:

Y^{ε,ϕ}_t = (1/(√ε h(ε))) ∫₀ᵗ [b(s, X⁰_s + √ε h(ε)Y^{ε,ϕ}_s) − b(s, X⁰_s)] ds + ∫₀ᵗ σ(s, X⁰_s + √ε h(ε)Y^{ε,ϕ}_s) ϕ̇^ε(s) ds + (1/h(ε)) ∫₀ᵗ σ(s, X⁰_s + √ε h(ε)Y^{ε,ϕ}_s) dW_s. (3.10)

The following exponential integrability is critical.
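As a numerical sanity check of the change of measure used here (our own illustration; the constant θ plays the role of the drift h(ε)ϕ̇ in the simplest one-dimensional, one-step case), Girsanov's identity says that reweighting by the exponential density makes the shifted Gaussian behave like a standard one, i.e. E[exp(−θZ − θ²/2) g(Z + θ)] = E[g(Z)] for Z ~ N(0, 1):

```python
import numpy as np

# Monte Carlo check of the one-dimensional Girsanov identity
# E[exp(-theta*Z - theta**2/2) * g(Z + theta)] = E[g(Z)], with g(x) = x**2,
# so both sides should be close to E[Z^2] = 1.
rng = np.random.default_rng(42)
theta = 1.5
Z = rng.normal(size=1_000_000)
density = np.exp(-theta * Z - theta**2 / 2)  # Girsanov density for drift theta
reweighted = np.mean(density * (Z + theta) ** 2)  # E under the new measure
plain = np.mean(Z**2)                             # E under the original measure
```

Both averages concentrate near 1, illustrating why, under P^ε, the shifted process W̃ is again a Brownian motion.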
Lemma 3.4. Assume that (H1)-(H3) hold. Then an exponential moment estimate holds for Y^{ε,ϕ} with the rate

λ(t) := t(|C₅| + |C₆|/2 + C₁αβ/4 + 2/(αβ)) + √C₁ ∫₀ᵗ |ϕ̇^ε(s)| ds.

Proof. The proof is similar to [16], Lemma 3.7, so we omit the details.

Lemma 3.5. Assume that (H1)-(H3) hold. Then, for any ϕ^ε ∈ A_N, equation (3.10) has a unique non-explosive strong solution.

Proof. We denote Y_t = Y^{ε,ϕ}_t for the sake of simplicity. We use the truncation method of Lemma 3.2 to obtain the truncated equation (3.12), which has a unique strong solution Y^n_t. For m < n, we introduce the corresponding stopping times and apply Itô's formula, which yields a decomposition into terms J₁-J₄. For J₁, we use equation (2.5); for J₂, equation (2.4); J₄ is handled similarly. For ϕ^ε ∈ A_N, by Hölder's inequality, equations (2.4), (3.13), the Burkholder-Davis-Gundy inequality and Gronwall's inequality, it follows that, for almost all ω, (3.14) is non-explosive by Lemma 3.4. Thus τ_∞ = 1 and Y_t = lim_{n→∞} Y^n_t. Moreover, X⁰_t is non-explosive by [16], Theorem 3.2. Hence Y_t is a non-explosive solution.
It remains to prove the uniqueness. Suppose that Y and Ỹ are two solutions of equation (3.10) and set Z := Y − Ỹ. By Itô's formula we obtain the decomposition (3.15), whose terms are given in (3.16); estimating them as above and applying the stochastic Gronwall inequality (2.1) yields Z ≡ 0, that is, uniqueness.
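The exponential martingale technique mentioned in the introduction rests on a standard fact, recorded here for convenience (it is classical, not a result of this paper): for a continuous local martingale M with M₀ = 0, the stochastic exponential is a supermartingale, which yields a uniform exponential tail bound:

```latex
\mathbb{E}\Big[\exp\Big(M_t-\tfrac{1}{2}\langle M\rangle_t\Big)\Big]\le 1,
\qquad
\mathbb{P}\Big(\sup_{t\in[0,1]}\Big(M_t-\tfrac{1}{2}\langle M\rangle_t\Big)\ge R\Big)\le e^{-R},
\quad R>0 .
```

Bounds of this type, combined with the Lyapunov function V, are what make the exponential integrability in Lemma 3.4 available under the super-linear growth assumptions.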
Lemma 3.6. Assume that (H1)-(H3) hold, and let ϕ^ε, ϕ ∈ A_N be such that ϕ^ε converges in distribution to ϕ as ε → 0. Then Y^{ε,ϕ} converges in distribution to Y^ϕ in C.

Proof. At first, we show that {Y^{ε,ϕ}}_{ε∈(0,1)} is tight in C. By virtue of the Arzelà-Ascoli theorem (see [12], Thm. 4.11), it is sufficient to verify that
(a) sup_{ε∈(0,1)} E[sup_{t∈[0,1]} |Y^{ε,ϕ}_t|^α] < ∞;
(b) sup_{ε∈(0,1)} E[|Y^{ε,ϕ}_t − Y^{ε,ϕ}_s|^β] ≤ C|t − s|^{1+γ} for all 0 ≤ s ≤ t ≤ 1,
for some positive constants α, β, γ and C. By Itô's formula, we have, for any p ≥ 2, a decomposition into terms L₁-L₃. For L₁, it is easy to derive an estimate from equation (2.5); for L₂, we use equations (2.4), (2.8), (3.2) and Young's inequality; L₃ is handled similarly. Putting the estimates of L₁(t)-L₃(t) together for ϕ^ε ∈ A_N and applying the stochastic Gronwall inequality (2.1), for any 0 < q < κ < 1 and stopping time τ we obtain a uniform moment bound, which gives (a). Similar to the proof of equation (3.18), there exists a time T₀ such that the increment estimate holds on small intervals; since A_t is a continuous adapted process, combining this with equation (3.22) gives

E[ sup_{u∈[s,t]} |Y^{ε,ϕ}_u − Y^{ε,ϕ}_s|^{2p} ] ≤ C|t − s|^p.

Therefore, we can choose p > 2 such that (a) and (b) hold. Thus {Y^{ε,ϕ}}_{ε∈(0,1)} is tight in C.

Consequently, it suffices to show that Y^ϕ is the unique limit point of {Y^{ε,ϕ}}_{ε∈(0,1)}. Let M^ε denote the stochastic integral term in equation (3.10). Since {Y^{ε,ϕ}}_{ε∈(0,1)} is tight, we can choose a subsequence of (Y^{ε,ϕ}, ϕ^ε, M^ε) converging weakly to (Y, ϕ, 0) as ε → 0. By the Skorokhod representation theorem, there exist a probability space (Ω̃, F̃, P̃) carrying a Brownian motion W̃ and a family of F̃_t-predictable processes {ϕ̃^ε}, ϕ̃, taking values in A_N, such that (ϕ^ε, ϕ, W) has the same law as (ϕ̃^ε, ϕ̃, W̃) for each ε and lim_{ε→0} ⟨ϕ̃^ε − ϕ̃, g⟩_H = 0 for every g ∈ H, P̃-a.s.
For the sake of simplicity, we drop the tilde from the notation. Thus, we may assume

Application to stochastic Hamiltonian systems
In this section, we shall apply the result of Section 3 to stochastic Hamiltonian systems. Consider the following stochastic Hamiltonian system in the phase space R^{2d}:

dz_t = ∇_u H(x_t) dt,
du_t = (−∇_z H(x_t) + Φ_t(x_t)) dt + Θ_t(x_t) dW_t, (4.1)

where x = (z, u) ∈ R^d × R^d, z and u denote the position and the velocity of the motion of a particle in statistical physics, respectively, Φ_t : R₊ × R^{2d} → R^d and Θ_t : R₊ × R^{2d} → R^d ⊗ R^m, and V(z) is a potential function bounded from below by 1. A Lyapunov function V is given by V(x) := H(x) = (1/2)|u|² + V(z) for x = (z, u).
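To connect with the introduction (our own check, assuming the Hamiltonian system takes the standard form dz_t = ∇_u H(x_t)dt, du_t = (−∇_z H(x_t) + Φ_t(x_t))dt + Θ_t(x_t)dW_t): with H(x) = ½|u|² + V(z) and the choices Φ_t(z, u) = C₀u and Θ_t(z, u) = Θ(z), the system reduces to

```latex
\mathrm{d}z_t=u_t\,\mathrm{d}t,\qquad
\mathrm{d}u_t=\big(C_0u_t-\nabla V(z_t)\big)\,\mathrm{d}t+\Theta(z_t)\,\mathrm{d}W_t,
```

which is exactly the first-order form of the nonlinear oscillator z̈_t = C₀ż_t − ∇V(z_t) + Θ(z_t)Ẇ_t from Section 1.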
By convention, we use ∂_u H(x) and ∇_u H(x) interchangeably.
Now we apply our result to stochastic Hamiltonian systems.
The corresponding perturbed equation is as follows:

dz^ε_t = ∇_u H(x^ε_t) dt,
du^ε_t = (−∇_z H(x^ε_t) + Φ_t(x^ε_t)) dt + √ε Θ_t(x^ε_t) dW_t.

By equations (4.2), (4.5) and the bound |u| ≤ C V^{1/2}(x), we obtain the required Lyapunov-type estimates. By equation (4.5) and V(x) ≥ 1, we get the remaining bounds.