$L_1$-distance for additive processes with time-homogeneous Lévy measures

We give an explicit bound for the $L_1$-distance between two additive processes with local characteristics $(f_j(\cdot),\sigma^2(\cdot),\nu_j)$, $j = 1,2$. The cases $\sigma = 0$ and $\sigma > 0$ are both treated. We allow $\nu_1$ and $\nu_2$ to be equivalent time-homogeneous Lévy measures, possibly of infinite variation. Some examples of possible applications are discussed.


Introduction and main result
In this note we give an upper bound for the $L_1$-distance between the laws induced on the Skorokhod space by two additive processes observed up to a time $T > 0$. By the $L_1$-distance between two $\sigma$-finite measures $\mu_1$ and $\mu_2$ on $(E, \mathcal{E})$ such that $\mu_1$ is absolutely continuous with respect to $\mu_2$ we mean
$$L_1(\mu_1, \mu_2) := \int_E \Big| \frac{d\mu_1}{d\mu_2} - 1 \Big| \, d\mu_2.$$
Note that, with our definitions, the $L_1$-distance is twice the so-called total variation distance.
Giving bounds for the $L_1$-distance is a classical problem which, in recent decades, has been reinterpreted in more modern terms via Stein's method (see, e.g., [14, 17, 16]). Problems of this kind arise in several fields, such as Bayesian statistics, convergence rates of Markov chains, and Monte Carlo algorithms (see [7], Section 4, and the references therein). However, to the best of our knowledge, results bounding the $L_1$-distance between laws on the Skorokhod space are much less abundant. In this setting other distances have been favoured, such as the Wasserstein-Kantorovich-Rubinstein metric (see [5]). More relevant to our purposes is a result due to Memin and Shiryayev [11] computing the Hellinger distance between the laws of any two processes with independent increments. In particular, this gives a bound for the $L_1$-distance between additive processes. In order to state their result, let us fix some notation.
Observe that (1) implies $\gamma_{\nu_j} < \infty$, $j = 1, 2$. When $\sigma^2 = 0$ it follows from Theorem 1.2 below that, for example, the $L_1$-distance between the corresponding laws admits an explicit bound. The proof of Theorem 1.1, however, makes heavy use of the general theory of semimartingales. This note originated from the search for a proof based only on classical results for Lévy processes, Esscher-type transformations and the Cameron-Martin formula. It turned out that this method, when applicable, gives a sharper bound on the $L_1$-distance. More precisely, our main result is as follows.
If $\sigma^2 > 0$, an explicit bound on $L_1\big(P^{(f_1,\sigma^2,\nu_1)}, P^{(f_2,\sigma^2,\nu_2)}\big)$ holds; if $\sigma^2 = 0$ and $f_1 - f_2 \equiv \gamma_{\nu_1} - \gamma_{\nu_2}$, a corresponding bound holds for the pure-jump laws.

Remark that, in the case $\nu_1 = \nu_2 = 0$, i.e. when there are no jumps, the upper bound in Theorem 1.2 is achieved. Indeed, an explicit formula for the $L_1$-distance between Gaussian processes is well known. Denoting by $\phi$ the cumulative distribution function of a normal random variable $N(0,1)$, we have, for any $0 < T < \infty$:
$$L_1\big(P^{(f_1,\sigma^2,0)}, P^{(f_2,\sigma^2,0)}\big) = 2\Big(2\phi\Big(\frac{1}{2}\Big(\int_0^T \frac{(f_1(t)-f_2(t))^2}{\sigma^2(t)}\,dt\Big)^{1/2}\Big) - 1\Big),$$
whenever the right-hand side makes sense (see, e.g., [1]).

The reason for our interest in the $L_1$-distance lies in the fact that it is a fundamental tool in the Le Cam theory of comparison of statistical models ([9, 10]). More precisely, the present result will be needed in a forthcoming paper by the second author, establishing an equivalence result, in the Le Cam sense, for additive processes. Similar estimates appear in many other results concerning the Le Cam $\Delta$-distance; see for example [1, 15, 2], where the $L_1$-distance between Gaussian processes is computed, or [12, 4, 6] concerning diffusion processes without jumps. In recent years, however, there has been growing interest in models with jumps, due to their numerous applications in econometrics, insurance theory and financial modelling. Because of that, it is useful to have simple formulas for estimating distances between such processes at one's disposal.

Theorem 1.2 is proved in Section 3. In Section 2 we collect some preliminary results about additive processes that will play a role in the proof. Before that, we give some examples of situations where our result can be applied. The choice of these examples is inspired by the models exhibited in [3].
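In the simplest case of constant drifts $f_j \equiv \mu_j$ and constant $\sigma$, the Gaussian formula above reduces to the classical identity $\int_{\mathbb{R}} |p_{\mu_1,\sigma}(x) - p_{\mu_2,\sigma}(x)|\,dx = 2\big(2\phi\big(\tfrac{|\mu_1-\mu_2|}{2\sigma}\big) - 1\big)$ for one-dimensional Gaussian densities. The following numerical sketch of that identity is ours, not part of the paper; all function names are hypothetical:

```python
import math

def norm_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x):
    """CDF of N(0, 1), via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def l1_trapezoid(mu1, mu2, sigma, lo=-20.0, hi=20.0, n=200001):
    """Trapezoidal approximation of the L1-distance between two Gaussian densities."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        w = h if 0 < i < n - 1 else h / 2
        x = lo + i * h
        total += w * abs(norm_pdf(x, mu1, sigma) - norm_pdf(x, mu2, sigma))
    return total

def l1_closed_form(mu1, mu2, sigma):
    """Closed form: twice the total variation distance."""
    return 2 * (2 * norm_cdf(abs(mu1 - mu2) / (2 * sigma)) - 1)

print(l1_trapezoid(0.0, 1.0, 1.0), l1_closed_form(0.0, 1.0, 1.0))
```

The two values agree to high precision, illustrating the "$L_1$ equals twice total variation" convention used throughout.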
Suppose that $\{X^j_t\}$ is a Lévy process with characteristic triplet $\big(\lambda_j \int_{|y|\le 1} y\,G_j(dy),\, 0,\, \lambda_j G_j\big)$. Furthermore, let $A$ be a subset of $\mathbb{R}$ and suppose that $G_j$ is equivalent to the Lebesgue measure restricted to $A$. Denote by $g_j$ the density $\frac{dG_j}{d\mathrm{Leb}}\big|_A$; then an application of Theorem 1.2 yields a bound on the $L_1$-distance in terms of $\lambda_1$, $\lambda_2$, $g_1$ and $g_2$.

An additive process of jump-diffusion type on $[0,T]$ has the following form:
$$X_t = \int_0^t f(s)\,ds + \int_0^t \sigma(s)\,dW_s + \sum_{i=1}^{N_t} Y_i,$$
where $\{W_t\}$ is a standard Brownian motion, $\{N_t\}$ is the Poisson process counting the jumps of $\{X_t\}$, and the $Y_i$ are the jump sizes (i.i.d. random variables). Consider now two additive processes of jump-diffusion type $\{X^j_t\}$ with local characteristics $\big(f_j(\cdot) + \lambda_j \int_{|y|\le 1} y\,G_j(dy),\, \sigma^2(\cdot),\, \lambda_j G_j\big)$, $j = 1, 2$, and suppose again that $G_j$ is equivalent to the Lebesgue measure restricted to some $A \subseteq \mathbb{R}$. Letting $g_j$ denote the density of $G_j$ as above, Theorem 1.2 again yields an explicit bound.

Example 1.5 ($L_1$-distance between tempered stable processes). Let $\{X^1_t\}$ and $\{X^2_t\}$ be two tempered stable processes, i.e. Lévy processes on $\mathbb{R}$ with no Gaussian component and such that their Lévy measures $\nu_j$ have densities of the form
$$\frac{d\nu_j}{dx}(x) = \frac{C_+}{x^{1+\alpha}}\, e^{-\lambda^j_+ x}\,\mathbb{1}_{(0,\infty)}(x) + \frac{C_-}{|x|^{1+\alpha}}\, e^{-\lambda^j_- |x|}\,\mathbb{1}_{(-\infty,0)}(x),$$
for some parameters $C_\pm > 0$, $\lambda^j_\pm > 0$ and $\alpha < 2$. Then hypothesis (1) is satisfied and Theorem 1.2 bounds the $L_1$-distance.
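To make the jump-diffusion representation above concrete, here is a simulation sketch of our own (constant $f$ and $\sigma$, hypothetical jump law $Y_i \sim N(0.1, 0.2^2)$): it samples $X_T = fT + \sigma W_T + \sum_{i=1}^{N_T} Y_i$ and checks empirically that $E[X_T] = (f + \lambda\, E[Y_1])\,T$.

```python
import math
import random

def jump_diffusion_endpoint(T, f, sigma, lam, rng):
    """One sample of X_T = f*T + sigma*W_T + sum_{i<=N_T} Y_i.

    Constant drift f and volatility sigma; jumps arrive at Poisson rate lam,
    with (hypothetical) jump sizes Y_i ~ N(0.1, 0.2^2).
    """
    n_jumps, t = 0, rng.expovariate(lam)
    while t < T:  # count the Poisson arrivals on [0, T]
        n_jumps += 1
        t += rng.expovariate(lam)
    jumps = sum(rng.gauss(0.1, 0.2) for _ in range(n_jumps))
    return f * T + sigma * rng.gauss(0.0, math.sqrt(T)) + jumps

rng = random.Random(0)
samples = [jump_diffusion_endpoint(1.0, 0.5, 0.3, 2.0, rng) for _ in range(50000)]
mean = sum(samples) / len(samples)
# Theory: E[X_1] = f + lam * E[Y_1] = 0.5 + 2.0 * 0.1 = 0.7
print(mean)
```

The empirical mean matches the theoretical value up to Monte Carlo error.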
Definition 2.1. A stochastic process $\{X_t : t \in [0,T]\}$ on $\mathbb{R}$ with $X_0 = 0$ a.s. is an additive process if the following conditions are satisfied: it has independent increments, it is continuous in probability, and its paths are càdlàg a.s.
Thanks to the Lévy-Khintchine formula, the characteristic function of any additive process $\{X_t\}$ can be expressed, for all $u \in \mathbb{R}$, as
$$E\big[e^{iuX_t}\big] = \exp\Big( iu\int_0^t f(r)\,dr - \frac{u^2}{2}\int_0^t \sigma^2(r)\,dr + t\int_{\mathbb{R}} \big(e^{iuy} - 1 - iuy\,\mathbb{1}_{|y|\le 1}\big)\,\nu(dy) \Big), \qquad (2)$$
where $f(\cdot)$ and $\sigma^2(\cdot)$ are functions in $L_1[0,T]$ and $\nu$ is a measure on $\mathbb{R}$ satisfying $\nu(\{0\}) = 0$ and $\int_{\mathbb{R}} (y^2 \wedge 1)\,\nu(dy) < \infty$. In the sequel we shall refer to $(f(\cdot), \sigma^2(\cdot), \nu)$ as the local characteristics of the process $\{X_t\}$, and $\nu$ will be called its Lévy measure. These data uniquely characterise the law of the process $\{X_t\}$. In the case in which $f(\cdot)$ and $\sigma(\cdot)$ are constant functions, a process $\{X_t\}$ satisfying (2) is said to be a Lévy process with characteristic triplet $(f, \sigma^2, \nu)$. Let $\mathcal{D}_t$ and $\mathcal{D}$ be the $\sigma$-algebras generated by $\{x_s : 0 \le s \le t\}$ and $\{x_s : 0 \le s < \infty\}$, respectively (here we use the same notation as in [18]).
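For a finite-activity, pure-jump example (compound Poisson with rate $\lambda$ and $N(0,1)$ jumps, no drift and no Gaussian part), the characteristic function specialises to $E[e^{iuX_t}] = \exp\big(\lambda t\,(e^{-u^2/2} - 1)\big)$. The following Monte Carlo sketch of this specialisation is our own illustration:

```python
import math
import random

def compound_poisson_sample(t, lam, rng):
    """X_t = sum of N_t standard normal jumps, N_t ~ Poisson(lam * t)."""
    n, s = 0, rng.expovariate(lam)
    while s < t:  # Poisson arrivals on [0, t] via exponential waiting times
        n += 1
        s += rng.expovariate(lam)
    return sum(rng.gauss(0.0, 1.0) for _ in range(n))

rng = random.Random(1)
u, t, lam = 1.0, 1.0, 2.0
xs = [compound_poisson_sample(t, lam, rng) for _ in range(100000)]
# Empirical characteristic function: by symmetry of the jump law the
# imaginary part vanishes, so the real part suffices.
empirical = sum(math.cos(u * x) for x in xs) / len(xs)
closed = math.exp(lam * t * (math.exp(-u * u / 2) - 1))
print(empirical, closed)
```

The empirical and closed-form values agree up to Monte Carlo error.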
Here and in the sequel we will denote by $\Delta x_r$ the jump of the process $\{x_t\}$ at time $r$: $\Delta x_r = x_r - \lim_{s \uparrow r} x_s$.

Definition 2.2. Consider $(\{x_t\}, P^{(f,\sigma^2,\nu)})$ and define the jump part of $\{x_t\}$ as
$$x^{d,\nu}_t = \lim_{\varepsilon \downarrow 0} \Big( \sum_{r \le t:\, |\Delta x_r| > \varepsilon} \Delta x_r - t \int_{\varepsilon < |y| \le 1} y\,\nu(dy) \Big) \quad \text{a.s.} \qquad (3)$$
and its continuous part as
$$x^{c,\nu}_t = x_t - x^{d,\nu}_t. \qquad (4)$$

We now recall the Lévy-Itô decomposition, i.e. the decomposition of an additive process into its continuous and discontinuous parts.

Theorem 2.3. Consider $(\{x_t\}, P^{(f,\sigma^2,\nu)})$ and define $\{x^{d,\nu}_t\}$ and $\{x^{c,\nu}_t\}$ as in (3) and (4), respectively. Then the following hold.
(i) There is $D_1 \in \mathcal{D}$ with $P^{(f,\sigma^2,\nu)}(D_1) = 1$ such that, for any $\omega \in D_1$, $x^{d,\nu}_t(\omega)$ is defined for all $t \in [0,T]$ and the convergence in (3) is uniform in $t$ on any bounded interval, $P^{(f,\sigma^2,\nu)}$-a.s. The process $\{x^{d,\nu}_t\}$ is a Lévy process on $\mathbb{R}$ with characteristic triplet $(0, 0, \nu)$.
(ii) There is $D_2 \in \mathcal{D}$ with $P^{(f,\sigma^2,\nu)}(D_2) = 1$ such that, for any $\omega \in D_2$, $x^{c,\nu}_t(\omega)$ is continuous in $t$. The process $\{x^{c,\nu}_t\}$ is an additive process on $\mathbb{R}$ with local characteristics $(f(\cdot), \sigma^2(\cdot), 0)$.
(iii) The two processes $\{x^{d,\nu}_t\}$ and $\{x^{c,\nu}_t\}$ are independent.

2.2. Change of measure for additive processes. For the proof of Theorem 1.2 we also need some results on the equivalence of measures for additive processes. By the notation $\ll$ we will mean "is absolutely continuous with respect to". Let $(\{x_t\}, P^{(0,0,\nu)})$ and $(\{x_t\}, P^{(\eta,0,\tilde\nu)})$ be two Lévy processes on $\mathbb{R}$, where $\eta$ is supposed to be finite. Then $P^{(\eta,0,\tilde\nu)}_t \ll P^{(0,0,\nu)}_t$ for all $t \ge 0$ if and only if $\tilde\nu \ll \nu$ and the density $\frac{d\tilde\nu}{d\nu}$ satisfies (6). Remark that the finiteness in (6) implies that in (5). When $P^{(\eta,0,\tilde\nu)}_t \ll P^{(0,0,\nu)}_t$, the convergence in (7) is uniform in $t$ on any bounded interval, $P^{(0,0,\nu)}$-a.s. Besides, $\{U_t(x)\}$ defined by (7) is a Lévy process satisfying $E_{P^{(0,0,\nu)}}\big[e^{U_t(x)}\big] = 1$ for all $t \in [0,T]$.
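In the finite-activity case $\nu = \lambda G$, $\tilde\nu = \tilde\lambda \tilde G$, the density process takes the compound-Poisson form $U_t = \sum_{r \le t} \ln\varphi(\Delta x_r) - t\int(\varphi - 1)\,d\nu$ with $\varphi = \frac{d\tilde\nu}{d\nu}$, and the normalisation $E[e^{U_t}] = 1$ can be checked by simulation. A sketch of ours, under hypothetical parameters ($\lambda = 1$, $G = N(0,1)$, $\tilde\lambda = 1.5$, $\tilde G = N(0.3,1)$, so that $\varphi(y) = 1.5\,e^{0.3y - 0.045}$ and $\int(\varphi - 1)\,d\nu = 0.5$):

```python
import math
import random

def exp_U_t(t, rng):
    """Sample exp(U_t) for nu = 1 * N(0,1), nu_tilde = 1.5 * N(0.3, 1).

    phi(y) = d(nu_tilde)/d(nu)(y) = 1.5 * exp(0.3*y - 0.045), and
    U_t = sum_{jumps} ln(phi(Y_i)) - t * integral(phi - 1) d(nu),
    where integral(phi - 1) d(nu) = 1.5 - 1 = 0.5.
    """
    n, s = 0, rng.expovariate(1.0)
    while s < t:  # jumps of the nu-process arrive at Poisson rate 1
        n += 1
        s += rng.expovariate(1.0)
    log_u = -0.5 * t
    for _ in range(n):
        y = rng.gauss(0.0, 1.0)
        log_u += math.log(1.5) + 0.3 * y - 0.045
    return math.exp(log_u)

rng = random.Random(2)
vals = [exp_U_t(1.0, rng) for _ in range(100000)]
mean = sum(vals) / len(vals)
print(mean)  # should be close to 1
```

The martingale property $E[e^{U_t}] = 1$ is recovered up to Monte Carlo error.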
Proof. The existence of the limit in (12) is guaranteed by (8). By Theorem 2.3, $\{C_t(x)\}$ and $\{D_t(x)\}$ are independent under $P^{(f_2,\sigma^2,\nu_2)}$. Moreover, the law of $\{C_t(x)\}$ (resp. $\{D_t(x)\}$) is the same under $P^{(f_2,\sigma^2,\nu_2)}$ as under $P^{(f_2,\sigma^2,0)}$ (resp. the same under $P^{(f_2,0,\nu_2)}$ as under $P^{(0,0,\nu_2)}$). Further, using Theorem 2.4, we know that $\{D_t(x)\}$ is a Lévy process such that $E_{P^{(0,0,\nu_2)}}[\exp(D_{t-s}(x))] = 1$ for all $s < t$. These facts, together with the independence of the increments of $(\{x_t\}, P^{(f_2,\sigma^2,\nu_2)})$ and the stationarity of $\{D_t(x)\}$, imply the claim.

Then, using the same notation as above, $P^{(f_1,\sigma^2,\nu_1)}_t \ll P^{(f_2,\sigma^2,\nu_2)}_t$ for all $t$, and the density is given by (13).

Proof. For $s < t$, we prove (14). To that aim, remark that, thanks again to Theorem 2.3, the two factors in (14) can be computed separately. For the first factor of (14), we use the Girsanov theorem, thanks to the fact that $\int_0^t \frac{1}{\sigma(r)}\,(dx_r - f_2(r)\,dr)$ is a Brownian motion under $P^{(f_2,\sigma^2,0)}$, together with (2). We compute the second factor of (14) by means of Theorem 2.4 and another application of (2). Consequently, we obtain (15).

Fix $t$ and define a probability measure $P_t$ on $\mathcal{D}_t$ by $P_t(B) = E_{P^{(f_2,\sigma^2,\nu_2)}}[M_t I_B]$ for $B \in \mathcal{D}_t$. As a consequence of Lemma 2.5 and the Bayes rule, the two processes $(\{x_s : 0 \le s \le t\}, P^{(f_1,\sigma^2,\nu_1)}_t)$ and $(\{x_s : 0 \le s \le t\}, P_t)$ are identical in law. Indeed, by (15), both have independent increments and the prescribed characteristic function. Consequently, (13) holds.
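For the reader's convenience, the Girsanov/Cameron-Martin density invoked for the continuous factor can be written out explicitly; the display below is our reconstruction from standard references, not one of the paper's numbered equations:

```latex
\[
\frac{dP^{(f_1,\sigma^2,0)}_t}{dP^{(f_2,\sigma^2,0)}_t}(x)
  = \exp\Big( \int_0^t \frac{f_1(r) - f_2(r)}{\sigma^2(r)}\,\big(dx_r - f_2(r)\,dr\big)
  - \frac{1}{2}\int_0^t \frac{\big(f_1(r) - f_2(r)\big)^2}{\sigma^2(r)}\,dr \Big).
\]
```

Here the stochastic integral is taken against the Brownian motion $\int_0^t \frac{1}{\sigma(r)}(dx_r - f_2(r)\,dr)$ under $P^{(f_2,\sigma^2,0)}$, with Girsanov kernel $\theta(r) = \frac{f_1(r)-f_2(r)}{\sigma(r)}$.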

Proof of Theorem 1.2
For the proof we will need the following three calculus lemmas.
Lemma 3.1. Let $X$ be a random variable with normal law $N(m, \sigma^2)$. Then
$$E\big|1 - e^X\big| = e^{m + \sigma^2/2}\Big(2\phi\Big(\frac{m}{\sigma} + \sigma\Big) - 1\Big) + 1 - 2\phi\Big(\frac{m}{\sigma}\Big).$$
Proof. By definition we have
$$E\big|1 - e^X\big| = \int_{\mathbb{R}} \big|1 - e^x\big|\, \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-m)^2}{2\sigma^2}}\,dx.$$
To conclude, just split the integral according to the sign of $1 - e^x$ and use the change of variables $y = \frac{x-m}{\sigma} - \sigma$, resp. $y = \frac{x-m}{\sigma}$.
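A closed-form expression consistent with the change of variables in the proof is $E|1 - e^X| = e^{m+\sigma^2/2}\big(2\phi(\frac{m}{\sigma}+\sigma) - 1\big) + 1 - 2\phi(\frac{m}{\sigma})$. The following numerical check of that expression is our own sketch:

```python
import math

def norm_cdf(x):
    """CDF of N(0, 1)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def closed_form(m, sigma):
    """Candidate closed form for E|1 - e^X|, X ~ N(m, sigma^2)."""
    return (math.exp(m + sigma ** 2 / 2) * (2 * norm_cdf(m / sigma + sigma) - 1)
            + 1 - 2 * norm_cdf(m / sigma))

def numeric(m, sigma, n=400001, width=12.0):
    """Trapezoidal integration of |1 - e^x| against the N(m, sigma^2) density."""
    lo, hi = m - width * sigma, m + width * sigma
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        w = h if 0 < i < n - 1 else h / 2
        density = math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += w * abs(1 - math.exp(x)) * density
    return total

print(closed_form(0.2, 0.7), numeric(0.2, 0.7))
```

Both evaluations agree to high precision for moderate parameter values.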

Lemma 3.2. For all $x, y \in \mathbb{R}$ we have:
$$\big|1 - e^{x+y}\big| \le \frac{1 + e^x}{2}\,\big|1 - e^y\big| + \frac{1 + e^y}{2}\,\big|1 - e^x\big|. \qquad (16)$$
Proof. By symmetry we may restrict to $x \ge 0$.
• $x, y \ge 0$: in this case $|1 - e^{x+y}|$ is exactly equal to $\frac{1+e^x}{2}|1 - e^y| + \frac{1+e^y}{2}|1 - e^x|$.
• $x \ge 0$, $y \le 0$, $x + y \ge 0$: then the right-hand side of (16) is equal to $e^x - e^y \ge e^x - 1 \ge e^{x+y} - 1$.
• $x \ge 0$, $y \le 0$, $x + y \le 0$: in this case the right-hand side of (16) is equal to $e^x - e^y \ge 1 - e^y \ge 1 - e^{x+y}$.

Then, using Lemma 3.2 and the fact that $A^+(x) \ge 0$ and $A^-(x) \le 0$, we obtain the corresponding bound. In order to compute the last quantity we apply Theorem 2.4 and the fact that both $A^+(x)$ and $A^-(x)$ have the same law under $P$
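The elementary inequality of Lemma 3.2, $|1 - e^{x+y}| \le \frac{1+e^x}{2}|1-e^y| + \frac{1+e^y}{2}|1-e^x|$, can be spot-checked numerically on a grid; a quick sketch of ours:

```python
import math

def lhs(x, y):
    return abs(1 - math.exp(x + y))

def rhs(x, y):
    return ((1 + math.exp(x)) / 2 * abs(1 - math.exp(y))
            + (1 + math.exp(y)) / 2 * abs(1 - math.exp(x)))

# Check rhs >= lhs on a grid of points in [-5, 5]^2
grid = [i * 0.25 - 5.0 for i in range(41)]
worst = min(rhs(x, y) - lhs(x, y) for x in grid for y in grid)
print(worst)  # nonnegative up to float rounding; equality when x, y share a sign
```

As the case analysis in the proof shows, the bound is attained with equality whenever $x$ and $y$ have the same sign, which the grid check reflects.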