Rate of convergence in total variation for the generalized inverse Gaussian and the Kummer distributions

Abstract: The generalized inverse Gaussian distribution converges in law to the inverse gamma or the gamma distribution under certain conditions on the parameters. The same holds for the Kummer distribution, which converges to the gamma or beta distribution. We provide explicit upper bounds for the total variation distance between such a generalized inverse Gaussian distribution and its gamma or inverse gamma limit laws, on the one hand, and between the Kummer distribution and its gamma or beta limit laws on the other hand.

In [1], the authors established the rate of convergence of the GIG distribution to the gamma distribution by Stein's method. In order to compare the rate obtained via Stein's method with the rate obtained using another distance, they also established an explicit upper bound for the total variation distance between a GIG random variable and a gamma random variable, which is of order $n^{-1/4}$ in the case $p = \frac{1}{2}$. We generalize this result by providing the order of the rate of convergence in total variation of the GIG distribution to the gamma distribution for all $p = k + \frac{1}{2}$, $k \in \mathbb{N}$. In particular, we obtain a rate of convergence of order $n^{-1/2}$ for $p = \frac{1}{2}$, which is better than the one in [1].

For $p \in \mathbb{R}$, $a > 0$, $b > 0$, the generalized inverse Gaussian distribution $GIG(p, a, b)$ has density function
$$GIG(p, a, b)(x) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})}\, x^{p-1} e^{-\frac{1}{2}(ax + b/x)} \mathbf{1}_{\{x > 0\}},$$
where $K_p$ is the modified Bessel function of the third kind. For $a > 0$, $b \in \mathbb{R}$, $c > 0$, the Kummer distribution $K(a, b, c)$ has density function
$$K(a, b, c)(x) = \frac{1}{\Gamma(a)\, \psi(a, 1 - b; c)}\, x^{a-1} (1 + x)^{-a-b} e^{-cx} \mathbf{1}_{\{x > 0\}},$$
where $\psi$ is the confluent hypergeometric function of the second kind and $\Gamma$ is the gamma function. Details on the GIG and the Kummer distributions can be found in [1][2][3][4][5] and the references therein. For $\theta > 0$, $\lambda > 0$, the gamma distribution $\gamma(\theta, \lambda)$ has density function
$$\gamma(\theta, \lambda)(x) = \frac{\lambda^{\theta}}{\Gamma(\theta)}\, x^{\theta - 1} e^{-\lambda x} \mathbf{1}_{\{x > 0\}}.$$
For $\theta > 0$, $\lambda > 0$, the inverse gamma distribution $I\gamma(\theta, \lambda)$ has density function
$$I\gamma(\theta, \lambda)(x) = \frac{\lambda^{\theta}}{\Gamma(\theta)}\, x^{-\theta - 1} e^{-\lambda/x} \mathbf{1}_{\{x > 0\}}.$$
The beta distribution of type 2, $\beta^{(2)}(a, b)$, has density
$$\beta^{(2)}(a, b)(x) = \frac{1}{B(a, b)}\, x^{a-1} (1 + x)^{-a-b} \mathbf{1}_{\{x > 0\}}.$$
We have the following definition and a property of the total variation distance.
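As a sanity check on the Kummer density above, the following minimal Python sketch verifies numerically that it integrates to 1, evaluating the normalizing constant $\Gamma(a)\,\psi(a, 1-b; c)$ via SciPy's `hyperu`. The helper name `kummer_pdf` and the parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, hyperu

def kummer_pdf(x, a, b, c):
    # Density of K(a, b, c): x^{a-1} (1+x)^{-(a+b)} e^{-cx},
    # normalized by Gamma(a) * psi(a, 1 - b; c) with psi = hyperu.
    norm = Gamma(a) * hyperu(a, 1.0 - b, c)
    return x**(a - 1) * (1.0 + x)**(-(a + b)) * np.exp(-c * x) / norm

# The density should integrate to 1 over (0, infinity).
total, _ = quad(lambda x: kummer_pdf(x, 2.0, 0.5, 1.5), 0.0, np.inf)
```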

Definition 1.
Let $W$ and $Z$ be two continuous real random variables, with densities $f_W$ and $f_Z$ respectively. Then, the total variation distance between $W$ and $Z$ is given by
$$d_{TV}(W, Z) = \sup_{B \in \mathcal{B}(\mathbb{R})} |P(W \in B) - P(Z \in B)| = \frac{1}{2} \int_{\mathbb{R}} |f_W(x) - f_Z(x)|\, dx.$$

Property 1. Consider $W$ and $Z$ two continuous random variables. Let $f_W$ (resp. $f_Z$) be the density of $W$ (resp. $Z$) on $(0, \infty)$. Assume that the function $f_W - f_Z$ changes sign exactly once on $(0, \infty)$, at a point $x_0$.
1. If $f_W - f_Z > 0$ on $(0, x_0)$, then $d_{TV}(W, Z) = \int_0^{x_0} (f_W(x) - f_Z(x))\, dx$.
2. If $f_W - f_Z < 0$ on $(0, x_0)$, then $d_{TV}(W, Z) = \int_0^{x_0} (f_Z(x) - f_W(x))\, dx$.

Proof. In case 1, since both densities integrate to 1 on $(0, \infty)$, we have $\int_{x_0}^{\infty} (f_Z - f_W)(x)\, dx = \int_0^{x_0} (f_W - f_Z)(x)\, dx$, so that $\int_0^{\infty} |f_W - f_Z|(x)\, dx = 2 \int_0^{x_0} (f_W - f_Z)(x)\, dx$, which proves item 1. For item 2, using similar arguments as in the previous case leads to the result.

Remark 1.
The support of the densities may be any interval, but here we take this support to be $(0, \infty)$ for the purpose of the application to the GIG and Kummer distributions.
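Definition 1 and Property 1 can be illustrated numerically. The sketch below uses two gamma laws as an illustrative choice (their density difference changes sign exactly once, at $x_0 = 2$) and checks that half the $L^1$ distance between the densities coincides with the single-crossing formula of Property 1.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gamma as gamma_dist

# Illustrative pair: W ~ gamma(2, 1) and Z ~ gamma(3, 1); the difference
# f_W - f_Z = x e^{-x} (1 - x/2) is positive before x0 = 2, negative after.
w = gamma_dist(2.0)
z = gamma_dist(3.0)

x = np.linspace(1e-8, 60.0, 400_000)
fw, fz = w.pdf(x), z.pdf(x)

# Total variation as half the L1 distance between the densities.
tv_l1 = 0.5 * trapezoid(np.abs(fw - fz), x)

# Locate the single sign change x0 of fw - fz; by Property 1,
# d_TV(W, Z) = integral_0^{x0} (fw - fz) = F_W(x0) - F_Z(x0).
x0 = x[np.argmax((fw - fz) < 0)]
tv_crossing = w.cdf(x0) - z.cdf(x0)
```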
The aim of this paper is to provide a bound for the distance between a GIG (resp. a Kummer) random variable and its limiting inverse gamma or gamma variable (resp. gamma or beta variable), and thereby to contribute to the study of the rate of convergence in the limit theorems involved. Section 2 presents the main results; their proofs are given in Section 3.

On the rate of convergence of the generalized inverse Gaussian distribution to the inverse gamma distribution
The first main result is presented in Theorem 1 below. We first recall the convergence of the GIG distribution to the inverse gamma distribution as Proposition 1.

Proposition 1. For $k \in \mathbb{N}$, $b > 0$, let $(X_n)_{n \ge 1}$ be a sequence of random variables such that $X_n \sim GIG(-k - \frac{1}{2}, \frac{1}{n}, b)$ for each $n \ge 1$. Then, as $n \to \infty$, the sequence $(X_n)_{n \ge 1}$ converges in law to a random variable $X$ following the $I\gamma(k + \frac{1}{2}, \frac{b}{2})$ distribution.

Theorem 1.
Under the assumptions and notations of Proposition 1, we have:

Remark 2. The upper bound provided by Theorem 1 is of order $n^{-1/2}$. Tables 1 and 2 present some numerical results for $k = 0$. This case is particularly interesting since it corresponds to the inverse Gaussian distribution, which is used in data analysis when the observations are highly right-skewed [6,7]. The inverse Gaussian law is the distribution of the first hitting time of a level by a Brownian motion with drift [8].
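A numerical sketch of Proposition 1 and the $n^{-1/2}$ rate of Remark 2, for the illustrative choice $k = 0$, $b = 1$ (values not fixed by the paper). SciPy's `geninvgauss(p, t)` has density proportional to $x^{p-1} e^{-t(x + 1/x)/2}$, so $GIG(p, a, b)$ is obtained by scaling with $\sqrt{b/a}$ and taking $t = \sqrt{ab}$ — an assumed but standard reparametrization.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import geninvgauss, invgamma

b = 1.0
limit = invgamma(0.5, scale=b / 2.0)   # Igamma(1/2, b/2)
x = np.logspace(-6, 7, 200_000)        # log grid covering the heavy tail

tvs = []
for n in [10, 100, 1000]:
    a = 1.0 / n
    # GIG(-1/2, a, b) via geninvgauss: scale sqrt(b/a), argument sqrt(a b)
    gig = geninvgauss(-0.5, np.sqrt(a * b), scale=np.sqrt(b / a))
    tvs.append(0.5 * trapezoid(np.abs(gig.pdf(x) - limit.pdf(x)), x))
```

The computed distances should decrease with $n$, shrinking by roughly $\sqrt{10}$ per tenfold increase of $n$.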

On the rate of convergence of the generalized inverse Gaussian distribution to the gamma distribution
Theorem 2 below gives an upper bound for the total variation distance between $Y_n \sim GIG(p, a, \frac{1}{n})$ and its gamma limit $Y \sim \gamma(p, \frac{a}{2})$, where $\alpha_n = \frac{(an)^{p/2}}{2 K_p(\sqrt{a/n})}$ and $\alpha = \frac{(a/2)^p}{\Gamma(p)}$.
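The constants can be checked numerically: by the small-argument behaviour of $K_p$, $\alpha_n \to \alpha$ as $n \to \infty$. A minimal sketch with the illustrative values $p = 3/2$, $a = 2$:

```python
import numpy as np
from scipy.special import gamma as Gamma, kv

p, a = 1.5, 2.0
alpha = (a / 2.0)**p / Gamma(p)        # limit constant (a/2)^p / Gamma(p)

def alpha_n(n):
    # alpha_n = (a n)^{p/2} / (2 K_p(sqrt(a/n)))
    return (a * n)**(p / 2.0) / (2.0 * kv(p, np.sqrt(a / n)))

err_small = abs(alpha_n(10) - alpha)
err_large = abs(alpha_n(10_000) - alpha)
```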

Corollary 1.
The upper bound provided by Theorem 2 is of order $n^{-1/2}$ for $p = \frac{1}{2}$ and of order $n^{-1}$ for all $p$ of the form $k + \frac{1}{2}$ with $k \ge 1$ an integer. For such $p$, the upper bound provided in [1] by Stein's method is also of order $n^{-1}$ (Proposition 3.3), so our result achieves the same order. In addition, our upper bound is quite simple when compared to the one in [1] obtained by Stein's method (Theorem 3.1), and sharper than the one obtained in Proposition 3.4 of [1].
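The two rates in Corollary 1 can be observed empirically: a tenfold increase of $n$ should shrink the total variation distance by roughly $\sqrt{10}$ for $p = 1/2$ but by roughly $10$ for $p = 3/2$. A sketch under the illustrative choice $a = 2$ (the grid and the helper `tv` are ours):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gamma as gamma_dist, geninvgauss

a = 2.0
x = np.logspace(-9, 4, 200_000)

def tv(p, n):
    # TV distance between GIG(p, a, 1/n) and gamma(p, a/2), by quadrature;
    # geninvgauss(p, sqrt(a/n)) scaled by sqrt(1/(n a)) gives GIG(p, a, 1/n).
    gig = geninvgauss(p, np.sqrt(a / n), scale=np.sqrt(1.0 / (n * a)))
    lim = gamma_dist(p, scale=2.0 / a)   # rate a/2 <-> scale 2/a
    return 0.5 * trapezoid(np.abs(gig.pdf(x) - lim.pdf(x)), x)

ratio_half = tv(0.5, 100) / tv(0.5, 1000)        # expect about sqrt(10)
ratio_three_half = tv(1.5, 100) / tv(1.5, 1000)  # expect about 10
```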

On the rate of convergence of the Kummer distribution to the gamma distribution
As in the previous subsection, the following theorem contains the rate of convergence in total variation of the Kummer distribution to the gamma distribution.

Theorem 3. Let $(V_n)_{n \ge 1}$ be a sequence of random variables such that $V_n \sim K(a, -a + \frac{1}{n}, c)$ with $a > 0$, $c > 0$. Then, as $n \to \infty$, $(V_n)$ converges in law to a random variable $V$ following the $\gamma(a, c)$ distribution, and the total variation distance admits an explicit upper bound involving $\delta = \frac{c^a}{\Gamma(a)}$.

Tables 3 and 4 present some numerical results.
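A numerical sketch of the convergence in Theorem 3, for the illustrative values $a = 2$, $c = 1$ (the helpers `kummer_pdf` and `tv` are ours; the Kummer normalizing constant is evaluated via SciPy's `hyperu`):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, hyperu

a, c = 2.0, 1.0

def kummer_pdf(x, b):
    # Density of K(a, b, c) with normalizer Gamma(a) * psi(a, 1 - b; c).
    norm = Gamma(a) * hyperu(a, 1.0 - b, c)
    return x**(a - 1) * (1.0 + x)**(-(a + b)) * np.exp(-c * x) / norm

def gamma_pdf(x):
    # Density of gamma(a, c).
    return c**a / Gamma(a) * x**(a - 1) * np.exp(-c * x)

def tv(n):
    b = -a + 1.0 / n       # V_n ~ K(a, -a + 1/n, c)
    val, _ = quad(lambda x: abs(kummer_pdf(x, b) - gamma_pdf(x)), 0.0, np.inf)
    return 0.5 * val

tv_small_n, tv_large_n = tv(5), tv(50)
```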

On the rate of convergence of the Kummer distribution to the beta distribution
We have the following result.

Theorem 4. Let $(W_n)_{n \ge 1}$ be a sequence of random variables such that $W_n \sim K(a, b, \frac{1}{n})$ with $a > 0$, $b > 0$. Then:
1. As $n \to \infty$, $(W_n)$ converges in law to a random variable $W$ following the $\beta^{(2)}(a, b)$ distribution.
2. An explicit upper bound holds for the total variation distance $d_{TV}(W_n, W)$.
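A numerical sketch of item 1 of Theorem 4, with the illustrative values $a = 2$, $b = 3$: the type-2 beta limit is SciPy's `betaprime` distribution, and the total variation distance should decrease as $n$ grows (the helpers `kummer_pdf` and `tv` are ours).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, hyperu
from scipy.stats import betaprime

a, b = 2.0, 3.0
limit = betaprime(a, b)    # type-2 beta law beta2(a, b)

def kummer_pdf(x, c):
    # Density of K(a, b, c) with normalizer Gamma(a) * psi(a, 1 - b; c).
    norm = Gamma(a) * hyperu(a, 1.0 - b, c)
    return x**(a - 1) * (1.0 + x)**(-(a + b)) * np.exp(-c * x) / norm

def tv(n):
    c = 1.0 / n            # W_n ~ K(a, b, 1/n)
    val, _ = quad(lambda x: abs(kummer_pdf(x, c) - limit.pdf(x)), 0.0, np.inf)
    return 0.5 * val

tv_small_n, tv_large_n = tv(5), tv(50)
```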

Proofs of main results
Proof of Proposition 1. For all $x > 0$, the density of $X_n \sim GIG(-k - \frac{1}{2}, \frac{1}{n}, b)$ is
$$g_n(x) = \beta_n x^{-k - \frac{3}{2}} e^{-\frac{1}{2}(\frac{x}{n} + \frac{b}{x})}, \quad \text{where } \beta_n = \frac{(nb)^{(2k+1)/4}}{2 K_{k + \frac{1}{2}}(\sqrt{b/n})}.$$
We now use the well-known fact (see for instance [9,10]) that, as $x \to 0$,
$$K_{\nu}(x) \sim \frac{\Gamma(\nu)}{2} \left(\frac{2}{x}\right)^{\nu} \quad (\nu > 0),$$
to see that $\beta_n \to \beta = \frac{(b/2)^{k + \frac{1}{2}}}{\Gamma(k + \frac{1}{2})}$, so that $g_n(x)$ converges, for every $x > 0$, to the density of the $I\gamma(k + \frac{1}{2}, \frac{b}{2})$ distribution, which gives the convergence in law.

Proof of Theorem 1. Let $g_n$ and $g$ be the densities of $X_n \sim GIG(-k - \frac{1}{2}, \frac{1}{n}, b)$ and $X \sim I\gamma(k + \frac{1}{2}, \frac{b}{2})$. We have
$$g_n(x) = \beta_n x^{-k - \frac{3}{2}} e^{-\frac{1}{2}(\frac{x}{n} + \frac{b}{x})} \quad \text{and} \quad g(x) = \beta x^{-k - \frac{3}{2}} e^{-\frac{b}{2x}}.$$
Let $v_n(x) = \beta_n e^{-\frac{x}{2n}} - \beta$; then $v_n$ is decreasing on $(0, +\infty)$ with $\lim_{x \to 0^+} v_n(x) = \beta_n - \beta > 0$ and $\lim_{x \to +\infty} v_n(x) = -\beta < 0$. Hence $v_n$ has a unique zero $\lambda_n = 2n \ln(\beta_n/\beta)$ on $(0, \infty)$. Since $g_n(x) - g(x) = x^{-k - \frac{3}{2}} e^{-\frac{b}{2x}} v_n(x)$, we get $g_n(x) - g(x) > 0$ if $x < \lambda_n$ and $g_n(x) - g(x) < 0$ if $x > \lambda_n$. Using Property 1, we have:
$$d_{TV}(X_n, X) = \int_0^{\lambda_n} (g_n(x) - g(x))\, dx.$$
Then, integrating $\int_0^{\lambda_n} g_n(x)\, dx$ by parts, and using that the relevant function is decreasing and positive on $(0, \infty)$ so that the integrands can be compared for all $x$ and $t$ in the relevant range, we obtain, for $k \ge 2$, the announced bound.

Proof of Theorem 2. Let $\alpha_n = \frac{(an)^{p/2}}{2 K_p(\sqrt{a/n})}$ and $\alpha = \frac{(a/2)^p}{\Gamma(p)}$. Denote by $h_n$ (resp. $\gamma$) the density of $Y_n \sim GIG(p, a, \frac{1}{n})$ (resp. $Y \sim \gamma(p, \frac{a}{2})$). We have
$$h_n(x) = \alpha_n x^{p-1} e^{-\frac{1}{2}(ax + \frac{1}{nx})} \quad \text{and} \quad \gamma(x) = \alpha x^{p-1} e^{-\frac{a}{2} x},$$
and the argument proceeds as in the proof of Theorem 1, with integrals of the form $\int_0^x t^{p-1} e^{-\frac{a}{2} t}\, dt$.
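The sign-change argument in the proof of Theorem 1 can be checked numerically: with $\beta_n = (nb)^{(2k+1)/4} / (2 K_{k+1/2}(\sqrt{b/n}))$ and $\beta = (b/2)^{k+1/2} / \Gamma(k+\frac{1}{2})$, the difference $g_n - g$ should be positive before $\lambda_n = 2n \ln(\beta_n/\beta)$ and negative after it. A sketch for the illustrative values $k = 0$, $b = 1$, $n = 10$:

```python
import numpy as np
from scipy.special import gamma as Gamma, kv

k, b, n = 0, 1.0, 10

beta_n = (n * b)**((2 * k + 1) / 4.0) / (2.0 * kv(k + 0.5, np.sqrt(b / n)))
beta = (b / 2.0)**(k + 0.5) / Gamma(k + 0.5)
lam = 2.0 * n * np.log(beta_n / beta)   # unique crossing point lambda_n

def g_n(x):
    # Density of GIG(-k - 1/2, 1/n, b).
    return beta_n * x**(-k - 1.5) * np.exp(-0.5 * (x / n + b / x))

def g(x):
    # Density of Igamma(k + 1/2, b/2).
    return beta * x**(-k - 1.5) * np.exp(-b / (2.0 * x))

sign_before = np.sign(g_n(0.9 * lam) - g(0.9 * lam))
sign_after = np.sign(g_n(1.1 * lam) - g(1.1 * lam))
```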