On the gamma difference distribution

The gamma difference distribution is defined as the distribution of the difference of two independent gamma random variables, in general with different shape and rate parameters. Starting with knowledge of the corresponding characteristic function, a second order linear differential equation characterisation of the probability density function is given. This is used to derive a Stein-type differential identity relating to the expectation, with respect to the gamma difference distribution, of a general twice differentiable function $g(x)$. Choosing $g(x) = x^k$ gives a second order recurrence for the positive integer moments, which are also shown to permit evaluations in terms of ${}_2 F_1$ hypergeometric polynomials. A hypergeometric function evaluation is given for the absolute continuous moments. Specialising the gamma difference distribution gives the variance gamma distribution. Results of the type obtained herein have previously been obtained for the latter distribution, allowing for comparisons to be made.


Introduction

Functional form of the probability density function
With $\alpha, \beta > 0$, and $\chi_A$ the indicator function of the condition $A$ ($\chi_A = 1$ for $A$ true and $\chi_A = 0$ otherwise), define the gamma random variable $X \in \Gamma[\alpha, \beta]$ by the probability density function

$$p_X(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x} \chi_{x>0}. \tag{1.1}$$

The corresponding characteristic function is

$$E(e^{itX}) = \Big(1 - \frac{it}{\beta}\Big)^{-\alpha}. \tag{1.2}$$

From the multiplicative property of the characteristic function for sums of independent random variables, it follows that for independent $X_1 \in \Gamma[\alpha_1, \beta_1]$ and $X_2 \in \Gamma[\alpha_2, \beta_2]$,

$$\phi_{X_1 - X_2}(t) := E(e^{it(X_1 - X_2)}) = \Big(1 - \frac{it}{\beta_1}\Big)^{-\alpha_1}\Big(1 + \frac{it}{\beta_2}\Big)^{-\alpha_2}. \tag{1.3}$$

The random variable $X_1 - X_2$ is said to define the gamma difference distribution. The 2015 review of Klar [16] discusses a number of statistical properties and provides historical context. In the case that $X_1, X_2$ are identically distributed as chi-squared random variables, a comprehensive review is given in the recent work [15]. Another recent work containing comprehensive review material is [12], which furthermore goes into some depth in relation to the question of numerical evaluations. By Fourier inversion it follows from (1.3) that the probability density function for the gamma difference distribution has the integral form

$$p_{X_1 - X_2}(x) = \frac{1}{2\pi} \int_{-\infty}^\infty e^{-ixt}\, \phi_{X_1 - X_2}(t) \, dt. \tag{1.4}$$

An alternative integral form of the probability density function for $X_1 - X_2$ follows from the convolution structure

$$p_{X_1 - X_2}(x) = \int_{\max(0,\, -x)}^\infty p_{X_1}(x + y)\, p_{X_2}(y) \, dy. \tag{1.5}$$

It is noted in [16, Eq. (5)] that the integrals in (1.5) (the cases $x > 0$ and $x < 0$) can be found in [11, Eq. 3.383(4)], implying an evaluation in terms of Whittaker's confluent hypergeometric function. It is furthermore the case that the Whittaker function can be replaced in favour of the Kummer hypergeometric function of the second kind (Tricomi function) $U(a, b; z)$ (also denoted $\Psi(a, b; z)$) according to [12, Eq. (2.3)], giving the functional form (1.6). This form can also be found in [13]. From the evaluation formula for $U(a, b; z)|_{z=0}$ with ${\rm Re}(b) < 0$ [18, Eq. 13.2.22], one notes from (1.6) the behaviour of the probability density function at the origin. With $M(a, b; z)$ denoting the confluent hypergeometric function of the first kind (also denoted ${}_1F_1(a, b; z)$), we know from [18, Eq. 13.2.42] that $U(a, b; z)$ can be written as a linear combination of $M(a, b; z)$ and $z^{1-b} M(a - b + 1, 2 - b; z)$. Thus for $\alpha_1$ a positive integer one obtains the form (1.9), which has the analytic structure of an exponential times a polynomial. For more on this particular special case, see [5] and [17].
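As a quick numerical sanity check on the definition, the following Python sketch (with arbitrarily chosen illustrative parameter values) compares Monte Carlo estimates of the mean and variance of $X_1 - X_2$ against the exact values $\alpha_1/\beta_1 - \alpha_2/\beta_2$ and $\alpha_1/\beta_1^2 + \alpha_2/\beta_2^2$ implied by the gamma moments. Note that numpy parameterises the gamma distribution by shape and scale $= 1/\beta$.

```python
import numpy as np

# Monte Carlo sketch of the gamma difference random variable X1 - X2,
# with X1 ~ Gamma(shape alpha1, rate beta1), X2 ~ Gamma(shape alpha2, rate beta2).
# numpy's rng.gamma takes (shape, scale) with scale = 1/rate.
rng = np.random.default_rng(0)
alpha1, beta1, alpha2, beta2 = 2.5, 1.5, 1.2, 0.8   # illustrative values
n = 10**6
x = rng.gamma(alpha1, 1/beta1, n) - rng.gamma(alpha2, 1/beta2, n)

# Exact mean and variance follow from the first two gamma moments
mean_exact = alpha1/beta1 - alpha2/beta2
var_exact = alpha1/beta1**2 + alpha2/beta2**2
print(x.mean(), mean_exact)   # close
print(x.var(), var_exact)     # close
```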
Write $Y = X_1 - X_2$ for the special case of the gamma difference random variable with $\alpha_1 = \alpha_2 = r/2$, parameterised furthermore by setting

$$\frac{1}{\beta_1} = \frac{1}{2}\Big(\sqrt{\theta^2 + 2\sigma^2} + \theta\Big), \qquad \frac{1}{\beta_2} = \frac{1}{2}\Big(\sqrt{\theta^2 + 2\sigma^2} - \theta\Big). \tag{1.10}$$

Then $Y = Y_{r,\theta,\sigma}$ is a variance gamma random variable, with probability density function $p_Y(x)$ of the functional form (1.11). A feature of $p_Y(x)$ known from [3, Eq. (2.26)] (see also [8]) is that it is the solution of a second order linear differential equation (1.12). This can be used to show [8] that for a twice differentiable $g(x)$ and $Y = Y_{r,\theta,\sigma}$ a variance gamma random variable, a Stein-type differential identity (1.13) holds, where it is assumed too that the growth of $g(y)$ as $y \to \pm\infty$ is such that all implied expectations are well defined. The positive integer moments $m_k(Y)$ admit a closed form evaluation (1.14). These moments satisfy the three term recurrence (1.15) [3, Eq. (2.28)], with initial condition $m_0 = 1$. This implies that for $k$ even, $m_k(Y)$ is a polynomial in $r, \sigma, \theta$, each of degree $k$, and even in $\sigma, \theta$. For $k$ odd the recurrence implies $m_k(Y)$ is a polynomial in $r, \sigma, \theta$ of degree $k$ in $r, \theta$ and degree $k - 1$ in $\sigma$, and is even in $\sigma$ and odd in $\theta$. Note that these structural properties are not immediate from the closed form (1.14). There is also a known closed form for the continuous absolute moment $E(|Y|^k)$, with $k$ real and satisfying $k > \max\{-1, -r\}$; see (1.16), obtained from [10, Eq. (2.3)] and [3, §2.7]. Our aims in this work are to give generalisations for the gamma difference distribution of (1.12), (1.13), (1.14), (1.15) and (1.16).

Summary of results
Our first result begins with (1.4) to deduce a differential equation for $p_{X_1-X_2}(x)$. Associated with this is a Stein-type differential identity relating to the expectation of a general twice differentiable function $g(x)$ with respect to the gamma difference distribution.

Proposition 1. For $p_{X_1-X_2}(x)$ denoting the probability density function for the gamma difference distribution,

$$x\, p_{X_1-X_2}''(x) + \big((\beta_1 - \beta_2)x + 2 - \alpha_1 - \alpha_2\big)\, p_{X_1-X_2}'(x) + \big(\beta_1 - \beta_2 + \alpha_1\beta_2 - \alpha_2\beta_1 - \beta_1\beta_2\, x\big)\, p_{X_1-X_2}(x) = 0. \tag{1.17}$$

This is valid for general $\alpha_1, \alpha_2 > 0$, except at $x = 0$ in the case $\alpha_1 + \alpha_2 < 1$, when $p_{X_1-X_2}(x)$ diverges and so is not differentiable. Consequently, for $g(x)$ a twice differentiable function on $x \in \mathbb{R}$, and $X$ denoting a gamma difference random variable,

$$E\Big( X g''(X) + \big(\alpha_1 + \alpha_2 - (\beta_1 - \beta_2)X\big)\, g'(X) + \big(\alpha_1\beta_2 - \alpha_2\beta_1 - \beta_1\beta_2\, X\big)\, g(X) \Big) = 0, \tag{1.18}$$

assuming the behaviour of $g(x)$ as $x \to \pm\infty$ is such that all implied expectations are well defined.
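The Stein-type identity lends itself to a Monte Carlo spot check. The following Python sketch (illustrative parameter values) assumes the identity takes the form $E\big(Xg''(X) + (\alpha_1+\alpha_2-(\beta_1-\beta_2)X)g'(X) + (\alpha_1\beta_2-\alpha_2\beta_1-\beta_1\beta_2 X)g(X)\big) = 0$, as obtained by integrating the differential equation for the density against $g$, and uses the bounded test function $g(x) = \cos x$.

```python
import numpy as np

# Monte Carlo spot check of the Stein-type identity for the gamma
# difference distribution, with g(x) = cos(x), g' = -sin, g'' = -cos.
rng = np.random.default_rng(1)
alpha1, beta1, alpha2, beta2 = 2.0, 1.5, 1.0, 2.0   # illustrative values
x = rng.gamma(alpha1, 1/beta1, 10**6) - rng.gamma(alpha2, 1/beta2, 10**6)

g, dg, d2g = np.cos(x), -np.sin(x), -np.cos(x)
stein = (x*d2g
         + (alpha1 + alpha2 - (beta1 - beta2)*x)*dg
         + (alpha1*beta2 - alpha2*beta1 - beta1*beta2*x)*g)
print(stein.mean())  # close to zero
```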
A recurrence for the positive integer moments easily follows from (1.18).
Corollary 2. The $k$-th positive integer moment $m_k(X)$ of the gamma difference random variable $X$ satisfies the three term recurrence

$$\beta_1\beta_2\, m_k(X) = \big(\alpha_1\beta_2 - \alpha_2\beta_1 - (k-1)(\beta_1 - \beta_2)\big)\, m_{k-1}(X) + (k-1)(\alpha_1 + \alpha_2 + k - 2)\, m_{k-2}(X), \quad k \ge 1, \tag{1.19}$$

subject to the initial condition $m_0(X) = 1$ (the term involving $m_{-1}(X)$ vanishes for $k = 1$).
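The recurrence can be checked in exact rational arithmetic against the direct binomial expansion of $E((X_1 - X_2)^k)$, using the gamma moment $E(X^l) = (\alpha)_l/\beta^l$ with $(\alpha)_l$ the rising factorial. A sketch (illustrative rational parameter values), assuming the recurrence reads $\beta_1\beta_2 m_k = (\alpha_1\beta_2-\alpha_2\beta_1-(k-1)(\beta_1-\beta_2))m_{k-1} + (k-1)(\alpha_1+\alpha_2+k-2)m_{k-2}$ as follows from choosing $g(x) = x^{k-1}$ in the Stein-type identity:

```python
from fractions import Fraction
from math import comb

def poch(a, l):
    # rising factorial (a)_l = a (a+1) ... (a+l-1)
    out = Fraction(1)
    for j in range(l):
        out *= a + j
    return out

a1, b1, a2, b2 = Fraction(5, 2), Fraction(3, 2), Fraction(7, 3), Fraction(4, 5)

def m_direct(k):
    # E((X1 - X2)^k) by the binomial expansion and independence
    return sum(Fraction(comb(k, l)) * (-1)**(k - l)
               * poch(a1, l)/b1**l * poch(a2, k - l)/b2**(k - l)
               for l in range(k + 1))

m = [Fraction(1)]  # m_0 = 1; m_{-1} is taken as 0
for k in range(1, 9):
    prev2 = m[k - 2] if k >= 2 else Fraction(0)
    m.append(((a1*b2 - a2*b1 - (k - 1)*(b1 - b2))*m[k - 1]
              + (k - 1)*(a1 + a2 + k - 2)*prev2) / (b1*b2))

assert all(m[k] == m_direct(k) for k in range(9))
print("recurrence agrees with the direct expansion")
```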
We turn now to a closed form evaluation of the positive integer moments, and the absolute continuous moments.
Proposition 3. The $k$-th positive integer moment $m_k(X)$ of the gamma difference random variable $X$ admits the ${}_2F_1$ hypergeometric polynomial expressions

$$m_k(X) = (-1)^k \frac{(\alpha_2)_k}{\beta_2^k}\, {}_2F_1\Big(-k, \alpha_1; 1 - \alpha_2 - k; -\frac{\beta_2}{\beta_1}\Big) = \frac{(\alpha_1)_k}{\beta_1^k}\, {}_2F_1\Big(-k, \alpha_2; 1 - \alpha_1 - k; -\frac{\beta_1}{\beta_2}\Big). \tag{1.20}$$

Proposition 4. The $(b-1)$-th absolute continuous moment of the gamma difference random variable $X$ can be expressed in terms of the ${}_2F_1$ hypergeometric function according to (1.21).

Proofs of the above results are given in Section 2, with a following discussion in Section 3.
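The ${}_2F_1$ evaluation can also be confirmed in exact arithmetic. The sketch below (illustrative rational parameter values) assumes the first form reads $m_k(X) = (-1)^k ((\alpha_2)_k/\beta_2^k)\, {}_2F_1(-k, \alpha_1; 1-\alpha_2-k; -\beta_2/\beta_1)$, sums the terminating hypergeometric series directly, and compares with the binomial-expansion moments.

```python
from fractions import Fraction
from math import comb

def poch(a, l):
    # rising factorial (a)_l
    out = Fraction(1)
    for j in range(l):
        out *= a + j
    return out

def hyp2f1_poly(a, b, c, z, k):
    # terminating series for 2F1(-k, b; c; z); l! = (1)_l
    return sum(poch(a, l)*poch(b, l)/poch(c, l) * z**l / poch(Fraction(1), l)
               for l in range(k + 1))

a1, b1, a2, b2 = Fraction(3, 2), Fraction(2), Fraction(5, 4), Fraction(1, 3)

def m_direct(k):
    # E((X1 - X2)^k) via the binomial expansion and gamma moments
    return sum(Fraction(comb(k, l))*(-1)**(k - l)
               * poch(a1, l)/b1**l * poch(a2, k - l)/b2**(k - l)
               for l in range(k + 1))

for k in range(1, 7):
    m_hyp = (-1)**k * poch(a2, k)/b2**k * hyp2f1_poly(
        Fraction(-k), a1, 1 - a2 - k, -b2/b1, k)
    assert m_hyp == m_direct(k)
print("2F1 evaluation matches the binomial expansion")
```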

Proof of Proposition 1 and Corollary 2
Taking the logarithmic derivative of (1.3) gives

$$(\beta_1 - it)(\beta_2 + it)\, \phi_{X_1-X_2}'(t) = \big(i(\alpha_1\beta_2 - \alpha_2\beta_1) - (\alpha_1 + \alpha_2)t\big)\, \phi_{X_1-X_2}(t). \tag{2.1}$$

Now multiply both sides by $\frac{1}{2\pi} e^{-ixt}$ and integrate over $t \in \mathbb{R}$. With the assumption $\alpha_1 + \alpha_2 > 2$, the integrands decay sufficiently fast that the integrals on both sides exist.
Focusing attention on the LHS, use of integration by parts gives (2.2). Here the equality follows from (1.4) and the fact that for $\alpha_1 + \alpha_2 > 2$ the implied decay of the integrand allows for one differentiation under the integral sign, since the resulting integral is absolutely convergent. Strengthening the assumption to $\alpha_1 + \alpha_2 > 3$ allows the final term in (2.2) to be identified as (2.3). On the RHS, the assumption $\alpha_1 + \alpha_2 > 2$ is sufficient for the resulting integral to be identified as (2.4). Substituting (2.3) for the final term in (2.2) and equating with (2.4) gives the differential equation (1.17), proved at this stage under the assumption $\alpha_1 + \alpha_2 > 3$.

To extend the validity of (1.17) to all $\alpha_1, \alpha_2 > 0$, we examine the dependence of the various terms on the parameters, viewed as functions of complex $\alpha_1, \alpha_2$ in the domain ${\rm Re}\,\alpha_1 > 0$, ${\rm Re}\,\alpha_2 > 0$. First, we see from the large $t$ behaviour of the integrand in (1.4) that $p_{X_1-X_2}(x)$ is analytic in $\alpha_1, \alpha_2$ for ${\rm Re}\,\alpha_1 > 0$ and ${\rm Re}\,\alpha_2 > 0$ for all fixed $x$ excluding $x = 0$ (to include $x = 0$ would require the extra condition ${\rm Re}(\alpha_1 + \alpha_2) > 1$). One also sees the validity of repeated differentiation with respect to $\alpha_1, \alpha_2$ under the integral sign, justified by the absolute convergence of the resulting integral. However, we cannot immediately draw the same conclusion in relation to $p_{X_1-X_2}'(x)$ and $p_{X_1-X_2}''(x)$, as only with the extra conditions ${\rm Re}(\alpha_1 + \alpha_2) > 2$ and ${\rm Re}(\alpha_1 + \alpha_2) > 3$ respectively can taking the differentiation inside the integral be justified using the criterion that the resulting integrals are absolutely convergent.
Fortunately, following an idea in [9, paragraph above Remark 2.10] (see also [19, §2.1]), functional forms of $p_{X_1-X_2}'(x)$, $p_{X_1-X_2}''(x)$ valid for general ${\rm Re}\,\alpha_1 > 0$ and ${\rm Re}\,\alpha_2 > 0$ can be given, and from these forms the sought analyticity can be concluded. Thus in the integrand of (1.4) the exponential function $e^{-ixt}$ is written in terms of a derivative with respect to $t$, and then integration by parts is carried out. This gives the identity (2.5), where the notation $|_{+,+}$ indicates that the parameters $\alpha_1, \alpha_2$ are each to be increased by 1.
In this expression the derivative term $p_{X_1-X_2}'(x)|_{+,+}$ permits (indeed is defined by) differentiation under the integral sign, which is justified by the absolute convergence of the integral. Furthermore, if we iterate this identity one more time, we see that the decay of all the resulting integrands on the RHS is, for ${\rm Re}\,\alpha_1 > 0$ and ${\rm Re}\,\alpha_2 > 0$, faster than $1/|t|^2$ for large $t$. Hence differentiation with respect to $x$ can be carried out under the integral sign in this parameter range, giving rise to integrals which by inspection are analytic for ${\rm Re}\,\alpha_1 > 0$ and ${\rm Re}\,\alpha_2 > 0$. This then holds true of $p_{X_1-X_2}'(x)$, i.e. the derivative with respect to $x$ of the LHS of (2.5), except possibly at $x = 0$, as seen from the factor of $1/x$ on the RHS of (2.5). A further iteration of (2.5) (bringing the total to three) allows for the same conclusion in relation to $p_{X_1-X_2}''(x)$. Thus we have the situation that the identity (differential equation) (1.17), between functions analytic in ${\rm Re}\,\alpha_1 > 0$ and ${\rm Re}\,\alpha_2 > 0$, has been proved in the open subset of this domain specified by ${\rm Re}(\alpha_1 + \alpha_2) > 3$. The uniqueness of analytic continuation extends the identity to all of the domain of analyticity.
With (1.17) established, multiplying through by $g(x)$ and integrating by parts gives (1.18). Here it is assumed that the behaviour of $g(x)$ for large $|x|$ is such that the implied expectations are well defined. Choosing $g(x) = x^k$ in (1.18) for $k$ a positive integer gives (1.19).
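The differential equation itself can be spot checked numerically by inverting (1.3) with a simple quadrature and differentiating under the integral sign. The sketch below (illustrative parameter values, with $\alpha_1 + \alpha_2$ large enough for rapid decay of the integrand) assumes (1.17) in the form $x p'' + ((\beta_1-\beta_2)x + 2 - \alpha_1 - \alpha_2)p' + (\beta_1 - \beta_2 + \alpha_1\beta_2 - \alpha_2\beta_1 - \beta_1\beta_2 x)p = 0$.

```python
import numpy as np

# Invert the characteristic function (1.3) on a truncated t-grid, and
# obtain p, p', p'' by differentiating under the integral sign
# (extra factors -i t and -t^2 in the integrand).
a1, b1, a2, b2 = 3.0, 1.5, 3.0, 2.0   # illustrative values
t = np.linspace(-300.0, 300.0, 600001)
dt = t[1] - t[0]
phi = b1**a1 * b2**a2 / ((b1 - 1j*t)**a1 * (b2 + 1j*t)**a2)

def inv(f, x):
    # simple Riemann-sum Fourier inversion at the point x
    return np.sum(np.real(np.exp(-1j*x*t) * f)) * dt / (2*np.pi)

x = 0.5
p0 = inv(phi, x)
p1 = inv(-1j*t*phi, x)
p2 = inv(-t**2*phi, x)
resid = (x*p2 + ((b1 - b2)*x + 2 - a1 - a2)*p1
         + (b1 - b2 + a1*b2 - a2*b1 - b1*b2*x)*p0)
print(p0, resid)  # density value; residual near zero
```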

Proof of Propositions 3 and 4
In relation to Proposition 3, we return to the definition of the gamma difference random variable $X$ as $X = X_1 - X_2$, where $X_1, X_2$ are independent gamma random variables from the distributions $\Gamma[\alpha_1, \beta_1]$ and $\Gamma[\alpha_2, \beta_2]$ respectively. Thus, as noted in [16, second displayed equation in Section 3], it follows that

$$m_k(X) = \sum_{l=0}^k \binom{k}{l} (-1)^{k-l}\, E(X_1^l)\, E(X_2^{k-l}). \tag{2.6}$$

For $X_2$ a gamma random variable from $\Gamma[\alpha_2, \beta_2]$, we can compute directly from the corresponding probability density function (1.1) that for $l$ a non-negative integer

$$E(X_2^l) = \frac{(\alpha_2)_l}{\beta_2^l}, \tag{2.7}$$

where $(\alpha_2)_l := \alpha_2(\alpha_2+1)\cdots(\alpha_2+l-1)$ denotes the increasing Pochhammer symbol. Starting with (2.7) as it applies to $E(X_1^l)$, then replacing $l$ by $k - l$ and manipulating $(\alpha_2)_{k-l}$ gives

$$(\alpha_2)_{k-l} = (-1)^l \frac{(\alpha_2)_k}{(1 - \alpha_2 - k)_l}. \tag{2.8}$$

Furthermore, we note that in terms of the Pochhammer symbol we can write

$$\binom{k}{l} = (-1)^l \frac{(-k)_l}{l!}. \tag{2.9}$$

Substituting (2.7), (2.8) and (2.9) in (2.6), and recalling the series form of the degree $k$ ${}_2F_1$ hypergeometric polynomial, the first of the functional forms in (1.20) results. To obtain the second expression, we note from the fact $(X_1 - X_2)^k = (-1)^k (X_2 - X_1)^k$ that the only effect of the interchange $(\alpha_1, \beta_1) \leftrightarrow (\alpha_2, \beta_2)$ is a factor of $(-1)^k$.

We turn our attention now to the derivation of (1.21) in Proposition 4. Our starting point is (1.6), from which we read off an integral expression for the absolute continuous moment. The integral in this expression is tabulated in [11, Eq. 7.621(6)], where subject to the requirement that $b > \max(0, 1 - \alpha_1 - \alpha_2)$ it is expressed in terms of a particular ${}_2F_1$ hypergeometric function, implying (1.21).
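The two Pochhammer manipulations used in this proof, namely $(\alpha)_{k-l} = (-1)^l (\alpha)_k/(1-\alpha-k)_l$ and $\binom{k}{l} = (-1)^l (-k)_l/l!$, are standard identities; the short sketch below confirms them in exact rational arithmetic for an arbitrary non-integer parameter value.

```python
from fractions import Fraction
from math import comb, factorial

def poch(a, l):
    # rising factorial (a)_l = a (a+1) ... (a+l-1)
    out = Fraction(1)
    for j in range(l):
        out *= a + j
    return out

a = Fraction(7, 5)   # arbitrary non-integer value
k = 6
for l in range(k + 1):
    # (a)_{k-l} = (-1)^l (a)_k / (1 - a - k)_l
    assert poch(a, k - l) == (-1)**l * poch(a, k) / poch(1 - a - k, l)
    # binom(k, l) = (-1)^l (-k)_l / l!
    assert Fraction(comb(k, l)) == (-1)**l * poch(Fraction(-k), l) / factorial(l)
print("Pochhammer identities verified")
```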

Discussion
It is a classical result that the Tricomi function $U(a, b; z)$ satisfies the second order differential equation known as Kummer's equation [18, Eq. 13.2.1]. Combining this knowledge with the functional form (1.6) allows the differential equation characterisation (1.17) of $p_{X_1-X_2}(x)$ to be independently verified.
If we substitute $g(x) = e^{itx}$ in (1.18), we recover the first order equation (2.1) for the characteristic function $\phi_{X_1-X_2}(t)$. This shows that the validity of (1.18) for general twice differentiable $g(x)$ implies that the random variable $X$ is uniquely determined as having the gamma difference distribution.
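The first order equation for the characteristic function, in the form $(\beta_1 - it)(\beta_2 + it)\phi'(t) = (i(\alpha_1\beta_2 - \alpha_2\beta_1) - (\alpha_1 + \alpha_2)t)\phi(t)$ as obtained from the logarithmic derivative of (1.3), can be spot checked with a central finite difference approximation to $\phi'(t)$ (illustrative parameter values):

```python
# Finite-difference spot check of the first order ODE satisfied by the
# characteristic function of the gamma difference distribution.
a1, b1, a2, b2 = 2.5, 1.5, 1.2, 0.8   # illustrative values

def phi(t):
    return b1**a1 * b2**a2 / ((b1 - 1j*t)**a1 * (b2 + 1j*t)**a2)

t, h = 0.7, 1e-6
dphi = (phi(t + h) - phi(t - h)) / (2*h)   # central difference for phi'(t)
lhs = (b1 - 1j*t)*(b2 + 1j*t)*dphi
rhs = (1j*(a1*b2 - a2*b1) - (a1 + a2)*t)*phi(t)
print(abs(lhs - rhs))  # small
```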
The structure of (2.1) can be generalised to the class of characteristic functions $\phi_X(t)$, say, satisfying an analogous first order differential equation.¹ In relation to the recurrence (1.19) for the moments, we see that it exhibits $m_k(X)$ as a polynomial of degree $k$ in each of $1/\beta_1$, $1/\beta_2$, $\alpha_1$ and $\alpha_2$. This structural feature is consistent with the exact evaluation (1.20).
¹ I thank R. E. Gaunt for pointing this out to me.
Each term on the RHS can be evaluated using the same integral evaluation as used in (2.11), leading to an evaluation formula for ${\rm PV}\, E(X^{-1})$ in terms of particular ${}_3F_2$ and digamma functions [14, Th. 3.3].
It must be that with $b - 1 = k$, $k$ even, the positive integer moment formula (1.20) agrees with the continuous moment formula (1.21). To get some insight into this, we first note that the argument of the second (implied) ${}_2F_1$ function in (1.21) relates to the argument of the first ${}_2F_1$ function by $z \mapsto 1 - z$. This suggests applying the transformation identity between a ${}_2F_1$ of argument $z$ and a linear combination of two ${}_2F_1$ functions of argument $1 - z$ [11, Eq. 9.131(2)]. Doing this gives the rewrite (3.1) of (1.21). We see from this a simplification in the case $b - 1 = k$, $k$ even, namely that the prefactor of the first ${}_2F_1$ function then vanishes, leaving us with (3.2). A standard Pfaff transformation, mapping the ${}_2F_1$ function with argument $z$ to a ${}_2F_1$ function with argument $z/(z-1)$, shows that (3.2) is equivalent to the second of the forms given in (1.20).

We now specialise the parameters of the gamma difference distribution to those of the variance gamma distribution, as specified above (1.10). Doing this in (1.17), we see that the known differential equation (1.12) for the probability density function of the variance gamma distribution is reclaimed. Similarly, we see that the three term recurrence (1.19) for the positive integer moments reduces to the three term recurrence (1.15) known for the variance gamma distribution. In relation to the formula (1.16) for the continuous moments, we apply the particular quadratic transformation formula [18, Eq. 15.8.7] as it applies to ${}_2F_1(a, b; 1/2; z)$. We obtain the appropriate specialisation of the gamma difference result (1.21).
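The Pfaff transformation invoked here, ${}_2F_1(a, b; c; z) = (1-z)^{-a}\, {}_2F_1(a, c-b; c; z/(z-1))$, can be spot checked by direct series summation (arbitrary parameter values with both arguments inside the unit disk):

```python
def hyp2f1(a, b, c, z, terms=200):
    # direct Gauss series for 2F1(a, b; c; z), valid for |z| < 1
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n)*(b + n)/((c + n)*(n + 1))*z
    return s

a, b, c, z = 0.3, 1.7, 2.2, -0.4   # arbitrary values
lhs = hyp2f1(a, b, c, z)
rhs = (1 - z)**(-a) * hyp2f1(a, c - b, c, z/(z - 1))
print(lhs, rhs)  # agree
```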
Finally, we comment on the appearance of the case $\alpha_1 = \alpha_2$, $\beta_1 = \beta_2 = 1$ of (1.3), to be denoted $w(t; \alpha_1)$, in random matrix theory, and an associated linear differential and difference equation. In this field, one encounters the eigenvalue probability density function on the space of Hermitian matrices proportional to $\prod_{l=1}^N w(x_l; \alpha) \prod_{1 \le j < k \le N} |x_k - x_j|^2$; see [4, Eq. (3.124)]. For this to be normalisable, it is required that ${\rm Re}\,\alpha > 1/2$. This functional form is well known to give rise to a determinantal point process (see [4, Ch. 5]), for which the eigenvalue density $\rho_{(1),N}(x)$, normalised to integrate to $N$, has an explicit functional form. There are two points of interest in relation to $\rho_{(1),N}(x)$ in the present context. One is its characterisation as the solution of a third order linear differential equation valid for general positive integer $N$ [7, Prop. 3.5]. (As we know from (2.1), the special case $N = 1$ admits a simpler first order differential equation characterisation.) The other is that in the so-called symmetric case of $\alpha$ real, the sums of successive absolute continuous moments $\mu_k := m_{2k+2} + m_{2k}$, with $m_{2k} := \int_{-\infty}^\infty |x|^{2k} \rho_{(1),N}(x)\, dx$ (these are well defined for $-1/2 < {\rm Re}\,k < \alpha - 1/2$), satisfy a three term recurrence [7, Cor. 4.3]. Moreover, the $\mu_k$ admit an explicit evaluation in terms of particular continuous Hahn polynomials [2].
We draw attention too to known formulas for the moments of $p_Y(x)$. In this regard, for $k$ a positive integer, set $\ell = \lceil k/2 \rceil + 1/2$ and $m = k \bmod 2$. With $m_k(Y)$ denoting the $k$-th moment of the variance gamma distribution, one has an evaluation in terms of polynomials $A_d$ and $B_d$ of degrees $d$ and $d$ respectively. From this starting point, the Stein characterisation of the law of $X$,