Moments of Gamma type and the Brownian supremum process area

We study positive random variables whose moments can be expressed by products and quotients of Gamma functions; this includes many standard distributions. General results are given on existence, series expansion and asymptotics of density functions. It is shown that the integral of the supremum process of Brownian motion has moments of this type, as well as a related random variable occurring in the study of hashing with linear displacement, and the general results are applied to these variables.


1. Introduction
We say that a positive random variable X has moments of Gamma type if, for s in some interval,

E X^s = C D^s ∏_{j=1}^J Γ(a_j s + b_j) / ∏_{k=1}^K Γ(a'_k s + b'_k),   (1.1)

for some integers J, K ≥ 0 and some real constants C, D > 0, a_j, b_j, a'_k, b'_k. We may and will assume that a_j ≠ 0 and a'_k ≠ 0 for all j and k. We often denote the right-hand side of (1.1) by F(s); this is a meromorphic function defined for all complex s (except at its poles).
Similarly, we say that a real random variable Y has moment generating function of Gamma type if, for s in some interval,

E e^{sY} = C e^{ds} ∏_{j=1}^J Γ(a_j s + b_j) / ∏_{k=1}^K Γ(a'_k s + b'_k),   (1.2)

for some integers J, K ≥ 0 and some real constants C, d, a_j ≠ 0, b_j, a'_k ≠ 0, b'_k, and that Y has characteristic function of Gamma type if, for all real t,

E e^{itY} = C e^{idt} ∏_{j=1}^J Γ(i a_j t + b_j) / ∏_{k=1}^K Γ(i a'_k t + b'_k).   (1.3)

Remark 1.2. The constant C is determined by the relation F(0) = E X^0 = 1, which shows that C = ∏_{k=1}^K Γ(b'_k) / ∏_{j=1}^J Γ(b_j), provided no b_j or b'_k is a non-positive integer. In general, C can be found by taking limits as s → 0.

Remark 1.4. If r ∈ R, then x − r = Γ(x − r + 1)/Γ(x − r). Hence, any rational function Q(x) with all poles and zeros real can be written as a finite product ∏_ℓ Γ(x + c_ℓ)/Γ(x + c'_ℓ) with c_ℓ, c'_ℓ ∈ R. Consequently we may allow such a rational factor Q(s) in (1.1) and (1.2), or Q(it) in (1.3), without changing the class of distributions.

Remark 1.5. If X has moments of Gamma type and α is a real number, then X^α has moments of Gamma type. (Just substitute αs for s in (1.1).) Similarly, if X_1 and X_2 are independent and both have moments of Gamma type, then X_1 X_2 has too. (Just use E(X_1 X_2)^s = E X_1^s · E X_2^s.)

Several well-known distributions have moments or moment generating functions of Gamma type. We give a number of examples in Section 3.
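For illustration (this numerical check is ours, not part of the paper), the simplest instance of (1.1) is T ∈ Exp(1), with E T^s = Γ(s + 1), i.e. J = 1, K = 0, C = D = 1, a_1 = b_1 = 1; the sketch below verifies it by direct numerical integration. The helper name and truncation parameters are illustrative assumptions.

```python
import math

def exp_moment(s, upper=60.0, n=200000):
    # Trapezoidal approximation of E T^s = ∫_0^∞ x^s e^{-x} dx for T ~ Exp(1);
    # the integrand is negligible beyond x = upper (chosen ad hoc).
    h = upper / n
    total = 0.5 * (upper ** s) * math.exp(-upper)  # endpoint term; x = 0 contributes 0
    for i in range(1, n):
        x = i * h
        total += x ** s * math.exp(-x)
    return h * total

# Gamma-type formula (1.1) with J = 1, K = 0, C = D = 1, a_1 = b_1 = 1:
s = 0.7
numeric = exp_moment(s)
exact = math.gamma(s + 1)
```

By Remark 1.5, the same check applied to exp_moment(2 * s) verifies the moments of T^2 as well.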
The main motivation for the present paper is that also several less wellknown distributions have moments of Gamma type. It is then straightforward to use Mellin transform techniques to obtain expansions or asymptotics of the density function, and it seems advantageous to do so, and to study other properties, in general for this class of distributions.
In particular, this paper was inspired by the realization that some recently studied random variables have moments of Gamma type. One is the integral of the supremum process of a Brownian motion, i.e., the area under the supremum process (up to some fixed time T). Let B(t), t ≥ 0, be a standard Brownian motion. Consider the supremum process S(t) := max_{0≤u≤t} B(u) and its area A(T) := ∫_0^T S(t) dt. By Brownian scaling, E A(T)^s = T^{3s/2} E A^s, where A := A(1), and it is enough to study A. The random area A was studied by Janson and Petersson [19], and using their results we will in Section 7 prove the following formula, showing that A has moments of Gamma type. (The result for the integer moments E A^n, n ∈ N, was given in [19].) We give several different, but equivalent, formulas of the type (1.1) for E A^s, which exemplifies Remark 1.1. The third version, with only two non-constant Gamma factors, is perhaps the simplest. The last, where all Gamma factors are of the type Γ(s/2 + b) with 0 < b ≤ 1, is of a canonical type where there are no cancellations of poles, and it is thus easy to see the poles and zeros, cf. Remark 4.5. Further, E A^s = ∞ for real s ≤ −1.
Remark 1.7. Several related Brownian areas are studied in Janson [17], for example the integral of |B(t)| or the integral of a normalized Brownian excursion. These areas do not have moments of Gamma type. In fact, for most of the Brownian areas studied there, E X^s extends to an entire function [17, §29], which is impossible for moments of Gamma type, see Theorem 4.1(iv). (For the remaining two areas in [17], we have no formal proof that they do not have moments of Gamma type, but it seems very unlikely since the integer moments satisfy more complicated recursion formulas [17].)

As a consequence of Theorem 1.6 and our general results in Section 6, we can express the density function of A using the confluent hypergeometric function 1F1 (denoted M in [1] and Φ in [20]) or the confluent hypergeometric function of the second kind U [1] (also denoted Ψ [20]). (Also the proofs of the next two theorems are given in Section 7.)

Theorem 1.8. A has a density function f_A(x) given by, for x > 0, series with general terms (−1)^n (Γ(n + 5/6)/(n! Γ(n + 2/3))) (3/2)^{n+1/2} x^{2n} and (−1)^n (Γ(n + 7/6)/(n! Γ(n + 4/3))) (3/2)^{n+5/6} x^{2n+2/3}, which can also be written in terms of 2^{1/2} π^{1/2} 1F1.

It follows (most easily from the second formula above) that f_A has a finite, positive limit f_A(0+) = 2/π as x ց 0. More precisely, f_A(x) = 2/π + O(x^{2/3}). As x → ∞, we obtain from Theorem 1.6 and our general theorems in Section 6 the following asymptotic result. Note that the two terms in the first or second formula for f_A in Theorem 1.8 are each much larger, of the order x^{−5/3} by the asymptotics of 1F1 in [1, (13.5.1)], but they cancel each other almost completely for large x.
This result was conjectured in [19], where the weaker result P(A > x) = exp(−3x²/2 + o(x²)) was shown from the moment asymptotics

E A^s ∼ (Γ(1/3)/π^{1/2}) s^{1/6} (s/(3e))^{s/2}   (1.7)

for integer s and a Tauberian theorem. (Only integer moments were considered in [19]. Note that (1.7) for arbitrary real s → ∞ follows easily from Theorem 1.6 and Stirling's formula; see Theorem 5.7 and (7.8).)

Remark 1.10. Theorem 1.9 also follows from either of the last two formulas in Theorem 1.8 and the asymptotic formula for U in [1, (13.5.2)]. Indeed, this gives an asymptotic expansion with further terms, cf. Remark 6.3; in this case, by [1, (13.5.2)], the complete asymptotic expansion can be written in terms of the hypergeometric series 2F0, which is divergent; the asymptotic expansion is interpreted in the usual way: if we truncate the series after any fixed number of terms, the error is of the order of the first omitted term. (For the general definition of the (generalized) hypergeometric series pFq, see e.g. [13, Section 5.5].)

Theorem 1.9 may be compared with similar results for several other Brownian areas in Janson and Louchard [18], see also Janson [17] and Remark 1.7. In these results for other Brownian areas, the exponent of x is always an integer (0, 1 or 2), while here the exponent is 1/3, which is related to the power s^{1/6} in (1.7).
Another example with moments of Gamma type comes from Petersson [24]. He studied the maximum displacement in hashing with linear probing, and found for dense tables, after suitable normalization, convergence to a limit distribution given by a random variable M with the distribution given in [24], where ψ(s) := E e^{−sA} is the Laplace transform of A; equivalently, (1.9) holds. This type of relation preserves moments of Gamma type; we give a general result.
Lemma 1.11. Suppose that V and Z are two positive random variables and α > 0. Then (1.10) holds if and only if (1.11) does; in that case, the moments of Z and V determine each other. In particular, if one of Z and V has moments of Gamma type, then so has the other.
We postpone the simple proof until Section 8. By (1.9), Lemma 1.11 applies to M and A, and thus M has moments of Gamma type. More precisely, Theorem 1.6 implies the following, see Section 8 for details.
Our general theorems apply again; they show that M has a density, and they yield a series expansion and asymptotics for the density. (Proofs are given in Section 8.) Again, the results can be expressed using various hypergeometric functions and series. (Again, see [13] for definitions.)

Theorem 1.13. M has a continuous density function given by, for x > 0, a series with terms involving (−1)^n Γ(1 + n/2) Γ(4/3 + n/2) Γ(7/6 + n/2) and n!. In particular, for small x we have an asymptotic formula. For large x, there is a similar formula, which is the beginning of a divergent asymptotic expansion (interpreted as in Remark 1.10) (1.14); more precisely, f_M(x) has as x → ∞ an asymptotic expansion.

Yet another recent example of moments of Gamma type comes from the study of generalized Pólya urns [10], [16]; see Section 9.
We give some basic results in Section 2, and further results on poles and zeros in Section 4. Many examples with standard distributions are given in Section 3. Asymptotics of the moments are studied in Section 5, and asymptotics and series expansions of the density function are given in Section 6. As stated above, we give proofs of the results above for A and M in Sections 7 and 8, and we give some results for generalized Pólya urns in Section 9. We end with a couple of more technical examples (counterexamples) in Section 10 and some further remarks in Section 11. Some standard formulas for the Gamma function are for convenience collected in Appendix A.
2. The basic theorem and some notation

Let F(s) denote the right-hand side of (1.1) or (1.2). (Thus, the right-hand side of (1.3) is F(it).) Evidently, F is a meromorphic function in the complex plane, and all poles are on the real axis. Let ρ+ and ρ− be the poles closest to 0:

ρ+ := inf{s > 0 : F has a pole at s},  ρ− := sup{s < 0 : F has a pole at s},   (2.1)

with ρ+ := ∞ or ρ− := −∞ if no such pole exists. Note that we ignore any pole at 0 in the definitions (2.1); however, it follows from Theorem 2.1 that such a pole cannot exist; F(s) is always analytic at s = 0.
(iii) (1.2) holds for all real s in some non-empty interval.
(v) =⇒ (iv). Note first that ϕ(t) := E e^{itY} → 1 as t → 0. Hence, F(it) → 1 as t ց 0, and thus F does not have a pole at 0. This shows that F(z) is analytic in the strip ρ− < Re z < ρ+, and thus F(iz) is analytic in the strip −ρ+ < Im z < −ρ−. By continuity, ϕ(t) = F(it) also for t = 0, and thus for the entire interval (−t_0, t_0). Hence, on this interval at least, ϕ(t) equals the boundary values of the function F(iz), which is analytic for 0 ≤ Im z < −ρ−, and by a theorem of Marcinkiewicz [22], (2.2) holds. It is well-known that (2.2) implies that ψ(z) := E e^{zY} is defined and finite for ρ− < Re z < ρ+ and that ψ(z) is an analytic function of z in this strip. It then follows that z ↦ E e^{zY} is defined and analytic for 0 < Re z < s_0. Since E e^{zY} = F(z) on an interval in this strip, E e^{zY} = F(z) for 0 < Re z < s_0.
For any real t, we may take z = it + ε for 0 < ε < s_0; letting ε ց 0 we have E e^{(it+ε)Y} → E e^{itY} by dominated convergence (using E(1 + e^{s_0 Y}) < ∞). If further t ≠ 0, then also E e^{(it+ε)Y} = F(it + ε) → F(it) since F has only real poles, and thus E e^{itY} = F(it). Hence (v) holds.
This completes the proof of the equivalences. Suppose E e^{sY} < ∞ for some s ≥ ρ+ (and thus ρ+ < ∞). Letting z ր ρ+, we then have, by dominated convergence, F(z) = E e^{zY} → E e^{ρ+ Y} < ∞, while the definition of ρ+ as a pole yields |F(z)| → ∞. This contradiction shows that E e^{sY} = ∞ for s ≥ ρ+. Similarly, or by considering −Y, E e^{sY} = ∞ for s ≤ ρ−.
Finally observe that F(s) = 0 only when some a'_k s + b'_k is a pole of Γ, i.e., a non-positive integer, which implies that s is real. However, if ρ− < s < ρ+, then F(s) = E e^{sY} > 0.
Remark 2.2. The equivalence (iii) ⇐⇒ (iv) (or, equivalently, (i) ⇐⇒ (ii)) is an instance of the well-known fact that a (two-sided) Laplace transform of a positive function or measure has singularities where the real axis intersects the boundary of the natural strip of definition, see e.g. [6, §3.4]. The result by Marcinkiewicz [22] used above is a sharper version of this.
We make some simple but useful observations.

Corollary 2.3. The distribution of X is determined by the function F(s) on the right-hand side of (1.1): if E X_1^s = F(s) for s ∈ I_1 and E X_2^s = F(s) for s ∈ I_2, for non-empty intervals I_1 and I_2, then X_1 d= X_2.
Remark 2.5. Every pole or zero of F(s) must be a pole of one of the Γ factors in (1.1). However, the converse does not hold, since poles in the Γ factors may cancel; if s_0 is a pole of some factors in the numerator, but also a pole of at least as many factors in the denominator, then s_0 is a removable singularity of F, and F(s_0) is well-defined by continuity. Note that such s_0 do not count in the definition (2.1) of ρ±.
In particular, s = 0 may be a pole of some of the Gamma factors in F(s). (This happens when some b_j or b'_k is 0 or a negative integer. This is the reason we exclude t = 0 in Theorem 2.1(v).) However, by Remark 2.4, all such poles must cancel; i.e., there must be an equal number of such factors in the numerator and denominator in (1.1).
Remark 2.6. In Theorem 2.1(v), it is important that we consider an interval about 0 (unlike in (i) and (iii)). In fact, for any ε > 0, there exist a random variable Y and C, d, a_j, b_j, a'_k, b'_k such that (1.3) holds for |t| ≥ ε but not for all t.
For an example, let Y be any random variable with characteristic function of Gamma type, say E e^{itY} = F(it). Further, let Z be a random variable with characteristic function (1 − |t|)+, see [9, Section XV.2], and let W be the mixture of Z and the constant 0 obtained as W := VZ with V ∈ Be(1/2), i.e. P(V = 0) = P(V = 1) = 1/2, and V independent of Z; assume further that Y is independent of Z and V. Then, for |t| ≥ 1, the characteristic functions E e^{itZ} = 0 and E e^{itW} = (1/2)(E e^{itZ} + 1) = 1/2, and thus Ỹ := Y + ε^{−1}W has the characteristic function

E e^{itỸ} = E e^{itY} · E e^{itε^{−1}W} = (1/2)F(it) =: F̃(it),  |t| ≥ ε.   (2.3)

Here F̃(it) := (1/2)F(it) is another function of the type in (1.3); however, (2.3) does not hold for all t since F̃(0) = 1/2 ≠ 1.
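The building block of this construction can be checked numerically (this check is ours, not the paper's). The variable Z with characteristic function (1 − |t|)+ has density (1 − cos x)/(πx²) (Feller's example); the sketch below, with ad hoc truncation parameters, verifies E e^{itZ} ≈ 1 − |t| for |t| < 1 and ≈ 0 for |t| ≥ 1.

```python
import math

def cf_Z(t, upper=3000.0, h=0.02):
    # E e^{itZ} = ∫_R cos(tx) (1 - cos x)/(pi x^2) dx, computed by the
    # trapezoidal rule on [0, upper] and doubled (the density is even).
    def dens(x):
        if x == 0.0:
            return 1.0 / (2.0 * math.pi)  # limit of (1 - cos x)/(pi x^2) at 0
        return (1.0 - math.cos(x)) / (math.pi * x * x)
    n = int(upper / h)
    total = 0.5 * (dens(0.0) + math.cos(t * upper) * dens(upper))
    for i in range(1, n):
        x = i * h
        total += math.cos(t * x) * dens(x)
    return 2.0 * h * total

inside = cf_Z(0.5)    # expected near 1 - 0.5 = 0.5
outside = cf_Z(1.5)   # expected near 0, as used in the construction above
```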
We do not know whether (1.3) for some interval, together with F (0) = 1, implies that (1.3) holds for all t.
For future use, in particular for the asymptotic results in Sections 5 and 6, we define the following parameters, given a random variable X or Y or a function F(s) as in (1.1) or (1.2):

γ := Σ_{j=1}^J |a_j| − Σ_{k=1}^K |a'_k|,   (2.4)
γ' := Σ_{j=1}^J a_j − Σ_{k=1}^K a'_k,   (2.5)
δ := Σ_{j=1}^J (b_j − 1/2) − Σ_{k=1}^K (b'_k − 1/2),   (2.6)
κ := d + Σ_{j=1}^J a_j log|a_j| − Σ_{k=1}^K a'_k log|a'_k|,   (2.7)
C_1 := C (2π)^{(J−K)/2} ∏_{j=1}^J |a_j|^{b_j−1/2} / ∏_{k=1}^K |a'_k|^{b'_k−1/2}.   (2.8)

These parameters do not depend on the choice of representation (1.1) (Proposition 2.7). The proof is given in Section 5.
Remark 2.8. To replace X by X −1 , or equivalently Y by −Y , means that F (s) is replaced by F (−s), which has the same form but with d replaced by −d (D by D −1 ) and similarly the sign of each a j and a ′ k is changed. This does not affect γ, δ, and C 1 , but γ ′ and κ change signs. (This also follows from (5.2) and (5.3) below.) Remark 2.9. More generally, X α , with α real and non-zero, has parameters |α|γ, αγ ′ , δ, ακ + γ ′ α log |α|, C 1 |α| δ . Remark 2.10. If X = X 1 X 2 with X 1 , X 2 independent and both having moments of Gamma type, cf. Remark 1.5, then the parameters γ, γ ′ , δ, κ for X are the sums of the corresponding parameters for X 1 and X 2 , while C 1 is the corresponding product.
Remark 2.11. If X has moments of Gamma type, then so has a suitably conjugated (a.k.a. tilted) distribution: if, for simplicity, X has a density function f(x), x > 0, let X̃ have the density function x^r f(x)/E X^r for a real r such that E X^r < ∞. Then E X̃^s = E X^{s+r}/E X^r and thus X̃ has moments of Gamma type, obtained by a simple substitution in (1.1). It follows that γ, γ' and κ are the same for X̃ as for X, while δ is increased by rγ' and C_1 is multiplied by e^{rκ}/E X^r. (Cf. (5.3) below.) Clearly, ρ+ and ρ− are both decreased by r.
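As a small sanity check of the tilting identity E X̃^s = E X^{s+r}/E X^r (our illustration, not from the paper): tilting T ∈ Exp(1) by x^r with r = 2 gives the density x² e^{−x}/Γ(3), whose moments should be Γ(s + 3)/Γ(3).

```python
import math

def tilted_moment(s, r, upper=80.0, n=200000):
    # Trapezoidal approximation of ∫_0^∞ x^s · x^r e^{-x} / Γ(r+1) dx,
    # the moment of the x^r-tilted Exp(1) distribution.
    h = upper / n
    c = math.gamma(r + 1)
    total = 0.5 * (upper ** (s + r)) * math.exp(-upper) / c
    for i in range(1, n):
        x = i * h
        total += x ** (s + r) * math.exp(-x) / c
    return h * total

r, s = 2, 0.5
numeric = tilted_moment(s, r)
exact = math.gamma(s + r + 1) / math.gamma(r + 1)  # = E T^{s+r} / E T^r
```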
For a random variable Y with moment generating function of Gamma type, the same applies to Ỹ with density function e^{ry} f(y)/E e^{rY}, if Y has density function f. We note also the following relations (Lemma 2.12), for a function F(s) as above.

3. Some examples
There are several well-known examples of distributions with moments of Gamma type. We collect some of them here. The results below are all well-known. We usually omit scale parameters that may be added to the definitions of the distributions.
This can be rewritten as a Gamma type formula by the functional equation (A.2), which yields

E U^s = 1/(s + 1) = Γ(s + 1)/Γ(s + 2).   (3.5)

Of course, it would be silly to claim that (3.5) is a simplification of (3.4), but it shows that the uniform distribution has moments of Gamma type and thus belongs to the class studied here.

Example 3.5 (Chi-square distribution). The chi-square distribution χ²(n) is the distribution of Q_n := Σ_{i=1}^n N_i², where N_1, N_2, . . . are i.i.d. standard normal variables. It is well-known, see e.g. [9, Section II.3], that the chi-square distribution is a Gamma distribution, differing from the normalized version in Example 3.1 by a scale factor; more precisely, Q_n d= 2Γ_{n/2}. Consequently, the chi-square distribution has moments of Gamma type, with, by (3.1),

E Q_n^s = 2^s Γ(n/2 + s)/Γ(n/2),

which also follows from (3.7) and (3.3).
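The chi-square moment formula is easy to test numerically (our illustration; the helper and test values n = 3, s = 0.9 are arbitrary): since Q_n d= 2Γ_{n/2}, integrating x^s against the χ²(n) density should give 2^s Γ(n/2 + s)/Γ(n/2).

```python
import math

def chi2_moment(s, n, upper=150.0, steps=200000):
    # Trapezoidal approximation of ∫_0^∞ x^s f(x) dx for the chi-square(n)
    # density f(x) = x^{n/2-1} e^{-x/2} / (2^{n/2} Γ(n/2)).
    c = 2.0 ** (n / 2.0) * math.gamma(n / 2.0)
    def f(x):
        return x ** (n / 2.0 - 1.0) * math.exp(-x / 2.0) / c
    h = upper / steps
    total = 0.5 * (upper ** s) * f(upper)
    for i in range(1, steps):
        x = i * h
        total += (x ** s) * f(x)
    return h * total

n, s = 3, 0.9
numeric = chi2_moment(s, n)
exact = 2.0 ** s * math.gamma(n / 2.0 + s) / math.gamma(n / 2.0)
```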
Example 3.10 (Stable distribution). Let S_α be a positive stable random variable with the Laplace transform E e^{−tS_α} = e^{−t^α}, with 0 < α < 1. Recall that any positive stable distribution is of this type, for some α ∈ (0, 1), up to a scale factor, see Feller [9, Section XIII.6]. (We may also allow α = 1, but in this exceptional case S_α is degenerate with S_1 = 1 a.s.) For any s > 0, by (A.10),

E S_α^{−s} = (1/Γ(s)) ∫_0^∞ t^{s−1} E e^{−tS_α} dt = (1/Γ(s)) ∫_0^∞ t^{s−1} e^{−t^α} dt = Γ(s/α)/(α Γ(s)) = Γ(1 + s/α)/Γ(1 + s).

(In particular, this moment is finite.) Thus, for s < 0,

E S_α^s = Γ(1 − s/α)/Γ(1 − s).   (3.16)

We have shown (3.16) for s < 0, but Theorem 2.1 (with ρ+ = α and ρ− = −∞) shows that (3.16) holds whenever Re s < α, while E S_α^s = ∞ for s ≥ α. (The case α = 1 is exceptional; E S_1^s = 1 for every real s, so (3.16) holds but ρ+ = ∞.) Thus S_α has moments of Gamma type.

Example 3.11 (Mittag-Leffler distribution). The Mittag-Leffler distribution with parameter α ∈ (0, 1) can be defined as the distribution of the random variable M_α := S_α^{−α}, where S_α is a positive stable random variable with E e^{−tS_α} = e^{−t^α} as in Example 3.10. Since S_α has the moments given by (3.16), the Mittag-Leffler distribution too has moments of Gamma type given by, cf. Remark 1.5,

E M_α^s = Γ(1 + s)/Γ(1 + αs).   (3.17)

In particular, the integer moments are given by

E M_α^n = n!/Γ(1 + αn),  n ∈ N.   (3.18)

The reason for the name "Mittag-Leffler distribution" is that its moment generating function is, by (3.18),

E e^{sM_α} = Σ_{n=0}^∞ s^n/Γ(1 + αn),   (3.19)

which converges for any complex s and is known as the Mittag-Leffler function E_α(s) since it was studied by Mittag-Leffler [23]. The formula (3.19), or equivalently (3.18), is often taken as the definition of the Mittag-Leffler distribution.
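The derivation via (A.10) can be tested numerically without knowing the stable density (our illustration, with arbitrary test values α = 0.6, s = 1.3): using only E e^{−tS_α} = e^{−t^α}, the negative moment E S_α^{−s} = (1/Γ(s)) ∫_0^∞ t^{s−1} e^{−t^α} dt should equal Γ(1 + s/α)/Γ(1 + s), i.e. (3.16) evaluated at −s.

```python
import math

def stable_neg_moment(s, alpha, upper=400.0, steps=200000):
    # E S_alpha^{-s} = (1/Γ(s)) ∫_0^∞ t^{s-1} exp(-t^alpha) dt,
    # computed with the trapezoidal rule (integrand negligible beyond upper).
    h = upper / steps
    total = 0.5 * upper ** (s - 1.0) * math.exp(-upper ** alpha)
    for i in range(1, steps):
        t = i * h
        total += t ** (s - 1.0) * math.exp(-t ** alpha)
    return h * total / math.gamma(s)

alpha, s = 0.6, 1.3
numeric = stable_neg_moment(s, alpha)
exact = math.gamma(1.0 + s / alpha) / math.gamma(1.0 + s)
```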
In the special case α = 1/2, the duplication formula (A.3) yields

E M_{1/2}^s = Γ(1 + s)/Γ(1 + s/2) = 2^s Γ((s + 1)/2)/Γ(1/2),   (3.20)

which by comparison with (3.9) shows that M_{1/2} d= 2^{1/2}|N|, which is equivalent to the well-known relation S_{1/2} d= (2N²)^{−1}.

Example 3.12 (A different Mittag-Leffler distribution). Another distribution related to the Mittag-Leffler function E_α(s) in (3.19), and, rather unfortunately, therefore also called "Mittag-Leffler distribution", was introduced by Pillai [25] as the distribution of a random variable L_α with distribution function 1 − E_α(−x^α), where 0 < α ≤ 1; equivalently, by (3.19), P(L_α > x) = E_α(−x^α) = E e^{−x^α M_α}. This is another instance of the relation (1.10), and Lemma 1.11 shows that L_α has moments of Gamma type with, using (3.17),

E L_α^s = Γ(1 + s/α) Γ(1 − s/α)/Γ(1 − s),  −α < Re s < α.   (3.22)

Note also that Lemma 1.11 yields the representation [25]

L_α d= T^{1/α} S_α,   (3.23)

with T ∈ Exp(1) and the stable variable S_α independent. Equivalently, see Example 3.9, L_α d= W_α S_α, with the Weibull variable W_α and S_α independent. It follows easily from (3.23) that L_α has the Laplace transform E e^{−tL_α} = 1/(1 + t^α).

Example 3.13 (Pareto distribution). The Pareto variable P_α, α > 0, has the distribution function 1 − x^{−α} for x > 1; hence P_α has density function αx^{−α−1}, x > 1. Direct integration shows that the moments are of Gamma type and given by

E P_α^s = α/(α − s),  Re s < α,

so alternatively these results follow from Example 3.3 and Remarks 1.5 and 2.9.
Example 3.14 (Shifted Pareto distribution). The shifted Pareto variable P̃_α := P_α − 1, where α > 0, has support (0, ∞) and density function α(x + 1)^{−α−1}, x > 0. The moments are given by, using (A.9),

E P̃_α^s = Γ(s + 1)Γ(α − s)/Γ(α),  −1 < Re s < α.   (3.25)

Hence also the shifted Pareto distribution has moments of Gamma type, with ρ+ = α and ρ− = −1. For α = 1, (3.25) yields E P̃_1^s = Γ(1 + s)Γ(1 − s), which is invariant under s ↔ −s (so log P̃_1 has a symmetric distribution). Using (A.6), we have E P̃_1^s = πs/sin(πs). Further, comparing (3.25) and (3.11) we see that P̃_1 d= F_{2,2}. This is also an instance of Lemma 1.11 (with α = 1 in (1.10) and (1.11)).

Example 3.15 (Extreme value distributions). The extreme value distributions are traditionally divided into three types, see e.g. [Chapter I]. We let X_I, X_{II,α}, X_{III,α} denote corresponding random variables; they have the distribution functions

P(X_I ≤ x) = exp(−e^{−x}),  x ∈ R,   (3.26)
P(X_{II,α} ≤ x) = exp(−x^{−α}),  x > 0,   (3.27)
P(X_{III,α} ≤ x) = exp(−(−x)^α),  x < 0,   (3.28)

where (for types II and III) α > 0 is a real parameter. The distribution (3.26) (the Gumbel distribution) has the entire real line as support, and is therefore not qualified to have moments of Gamma type. It has, however, moment generating function of Gamma type, see Example 3.18. The distribution (3.27) (the Fréchet distribution) satisfies, with T ∈ Exp(1), P(X_{II,α} ≤ x) = P(T ≥ x^{−α}), and thus X_{II,α} d= T^{−1/α}. Hence it has moments of Gamma type with, see (3.2) and (3.15),

E X_{II,α}^s = Γ(1 − s/α),  Re s < α.

We have so far considered distributions with moments of Gamma type; we now turn to a few examples with moment generating function of Gamma type. Of course, if X is any of the examples above, log X yields such an example. (One such example was mentioned in Example 3.11.)

Example 3.16 (Exponential distribution again). We noted in Example 3.2 that T ∈ Exp(1) has moments of Gamma type. It has moment generating function of Gamma type too, since

E e^{sT} = (1 − s)^{−1} = Γ(1 − s)/Γ(2 − s),  Re s < 1.   (3.31)

Example 3.17 (Gamma distribution again). A Gamma distributed random variable Γ_n ∈ Γ(n) with integer n ≥ 1 can be obtained by taking the sum of n independent copies of T ∈ Exp(1), and thus (3.31) implies that Γ_n has moment generating function of Gamma type with

E e^{sΓ_n} = (1 − s)^{−n} = (Γ(1 − s)/Γ(2 − s))^n,  Re s < 1.

More generally, for any real α > 0, Γ_α has moment generating function E e^{sΓ_α} = (1 − s)^{−α}.
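The shifted Pareto moments (3.25) can be checked directly (our illustration; the test values α = 3, s = 1.2 are arbitrary): integrating x^s against the density α(x + 1)^{−α−1} should give Γ(s + 1)Γ(α − s)/Γ(α).

```python
import math

def shifted_pareto_moment(s, alpha, upper=2000.0, steps=200000):
    # Trapezoidal approximation of E P̃_alpha^s = ∫_0^∞ x^s · alpha (1+x)^{-alpha-1} dx.
    h = upper / steps
    def f(x):
        return (x ** s) * alpha * (1.0 + x) ** (-alpha - 1.0)
    total = 0.5 * f(upper)
    for i in range(1, steps):
        total += f(i * h)
    return h * total

alpha, s = 3.0, 1.2
numeric = shifted_pareto_moment(s, alpha)
exact = math.gamma(s + 1.0) * math.gamma(alpha - s) / math.gamma(alpha)
```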
If α is not an integer, then this function has a singularity as s ր 1 that is not a pole; hence E e sΓα cannot be extended to a meromorphic function in the complex plane, and Γ α does not have moment generating function of Gamma type. Consequently, the Gamma distribution Γ(α) has moment generating function of Gamma type if and only if α is an integer.
Example 3.18 (Gumbel distribution). If X_I has the Gumbel distribution (3.26), and T ∈ Exp(1), then X_I d= −log T. Consequently, the Gumbel distribution has moment generating function of Gamma type with, see (3.2),

E e^{sX_I} = E T^{−s} = Γ(1 − s),  Re s < 1.

Example 3.19 (Lévy area). The Lévy stochastic area is defined by the stochastic integral

A_t := (1/2) ∫_0^t (X_u dY_u − Y_u dX_u),

where (X_u, Y_u), u ≥ 0, is a two-dimensional Brownian motion starting at 0 (i.e., X_u and Y_u, u ≥ 0, are two independent standard Brownian motions). By Brownian scaling, A_t d= tA_1, so we consider only A := A_1. Then, for real t, see e.g. Protter [28, Theorem II.43], using (A.6),

E e^{itA} = 1/cosh(t) = Γ(1/2 + it/π) Γ(1/2 − it/π)/π.

Consequently, by Theorem 2.1, A has moment generating function of Gamma type, with ρ± = ±π/2 and

E e^{sA} = Γ(1/2 + s/π) Γ(1/2 − s/π)/π = 1/cos(s),  |Re s| < π/2.

It is known that A has the density function 1/(2 cosh(πx/2)), −∞ < x < ∞, see e.g. Protter [28, Corollary to Theorem II.43] and Example 6.18 below.
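The stated density and characteristic function fit together numerically (our illustration, with ad hoc truncation parameters): ∫ e^{itx} dx/(2 cosh(πx/2)) should equal 1/cosh(t), and the density should integrate to 1.

```python
import math

def sech_cf(t, upper=60.0, steps=60000):
    # ∫_R e^{itx} / (2 cosh(pi x / 2)) dx, trapezoidal rule on [0, upper],
    # doubled since the density is even; expected to equal 1/cosh(t).
    h = upper / steps
    def f(x):
        return math.cos(t * x) / (2.0 * math.cosh(math.pi * x / 2.0))
    total = 0.5 * (f(0.0) + f(upper))
    for i in range(1, steps):
        total += f(i * h)
    return 2.0 * h * total

mass = sech_cf(0.0)            # total mass of the density
cf = sech_cf(0.7)
target = 1.0 / math.cosh(0.7)  # characteristic function at t = 0.7
```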
Of course, e^A has moments of Gamma type, and so has e^{cA} for every real c. A comparison with (3.11) shows that e^{πA} d= F_{1,1}.

Remark 3.20. We have shown that a large number of classical continuous distributions have moments of Gamma type, but there are exceptions. For example, if X ∈ U(1, 2), then E X^s = (2^{s+1} − 1)/(s + 1) has complex zeros at s = −1 + 2πik/log 2, k = ±1, ±2, . . . , which is impossible when (1.1) holds. More generally, if X is non-degenerate and supported in a finite interval [a, b] with 0 < a < b < ∞, then E X^s is an entire function of s, which by Theorem 4.1 below shows that X does not have moments of Gamma type.
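The complex zeros claimed in Remark 3.20 are easy to verify directly (our illustration): at s_0 = −1 + 2πi/log 2 we have 2^{s_0+1} = e^{2πi} = 1, so E X^{s_0} = (2^{s_0+1} − 1)/(s_0 + 1) vanishes.

```python
import cmath, math

# complex zero of E X^s for X ∈ U(1, 2), as in Remark 3.20 (k = 1)
s0 = complex(-1.0, 2.0 * math.pi / math.log(2.0))
value = (cmath.exp((s0 + 1) * math.log(2.0)) - 1.0) / (s0 + 1.0)

# sanity check of the moment formula at a real point: E X^2 = ∫_1^2 x^2 dx = 7/3
m2 = (2.0 ** 3 - 1.0) / 3.0
```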

4. Poles and zeros
In this section it will be convenient to consider the class F of all functions of the type in (1.1) and (1.2), regardless of whether they equal E X^s [E e^{sY}] for some random variable X [Y] or not. Thus, F is the set of functions

F(s) = C D^s ∏_{j=1}^J Γ(a_j s + b_j) / ∏_{k=1}^K Γ(a'_k s + b'_k),   (4.1)

where J, K ≥ 0 and C, D, a_j, b_j, a'_k, b'_k are real with D > 0 and a_j ≠ 0, a'_k ≠ 0 for all j and k. We let F_m ⊂ F be the set of such functions that appear in (1.1) and (1.2), i.e., the set of F(s) ∈ F such that F(s) = E X^s for some positive random variable X and s in some interval.
A function F(s) ∈ F is a meromorphic function of s in the complex plane C. We can easily locate its poles (and zeros) precisely as follows. Define ν_F(s) := m if F has a pole of order m at s; ν_F(s) := −m if F has a zero of order m at s; and ν_F(s) := 0 otherwise (i.e., F is regular at s with F(s) ≠ 0).
Since D^s has neither poles nor zeros, while Γ(z) has a simple pole at each z ∈ Z_{≤0} := {0, −1, −2, . . . } but no zeros,

ν_F(s) = #{j : a_j s + b_j ∈ Z_{≤0}} − #{k : a'_k s + b'_k ∈ Z_{≤0}}.   (4.3)

Note that all poles and zeros of F lie on the real axis. We further define F_0 := {F ∈ F : ν_F(0) = 0}, i.e., the set of functions F ∈ F that have neither a pole nor a zero at 0. By Remark 2.4, F_m ⊂ F_0 ⊂ F. Note that F is a group under multiplication and that F_0 is a subgroup.
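The pole/zero count ν_F is exactly computable for rational s with exact rational arithmetic; the sketch below (our own helper, not from the paper) takes a representation as lists of (a, b) pairs for the numerator and denominator factors Γ(as + b).

```python
from fractions import Fraction

def nu(s, num, den):
    # ν_F(s): order of the pole of F at s (negative for a zero),
    # counting factors Γ(a s + b) whose argument is a non-positive integer.
    def hits(factors):
        c = 0
        for a, b in factors:
            z = Fraction(a) * s + Fraction(b)
            if z <= 0 and z.denominator == 1:  # z ∈ Z_{<=0}: a pole of Γ
                c += 1
        return c
    return hits(num) - hits(den)

# F(s) = Γ(s+1)/Γ(s+2) = 1/(s+1): a simple pole at s = -1, cancellation at s = -2
num, den = [(1, 1)], [(1, 2)]
p1 = nu(Fraction(-1), num, den)   # pole of order 1
p2 = nu(Fraction(-2), num, den)   # poles cancel: removable singularity
z1 = nu(Fraction(-1), den, num)   # Γ(s+2)/Γ(s+1) = s+1: zero of order 1
```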
For F ∈ F_m we defined ρ± in Section 2; the definition (2.1) can be written

ρ+ = inf{s > 0 : ν_F(s) > 0},  ρ− = sup{s < 0 : ν_F(s) > 0}.   (4.4)

For general F ∈ F we define

ρ+ := inf{s > 0 : ν_F(s) ≠ 0},  ρ− := sup{s < 0 : ν_F(s) ≠ 0},   (4.5)

and note that this is consistent with (4.4) for F ∈ F_m by the fact (Theorem 2.1) that such F has no zeros in (ρ−, ρ+) and no pole at 0. If all a_j, a'_k, b_j, b'_k > 0, then F(s) has no poles or zeros in the right half-plane Re s ≥ 0, since none of the factors in (4.1) has; thus ρ+ = ∞. Similarly, if all a_j, a'_k < 0 and b_j, b'_k > 0, then F(s) has no poles or zeros in the left half-plane Re s ≤ 0 and ρ− = −∞. The converses do not hold, because the representation (4.1) is not unique and we may, e.g., add cancelling factors that separately have poles at other places. However, we may always choose a representation of the desired type; this is part of the following theorem.
Theorem 4.1. Let F ∈ F_0.
(i) It is always possible to find a representation (4.1) of F in which all b_j, b'_k > 0.
(ii) If ρ− = −∞, it is possible to find a representation (4.1) in which all a_j, a'_k < 0 and all b_j, b'_k > 0.
(iii) If ρ+ = ∞, it is possible to find a representation (4.1) in which all a_j, a'_k > 0 and all b_j, b'_k > 0.
(iv) If ρ+ = ∞ and ρ− = −∞ (i.e., F(s) is entire and without zeros), then F(s) = CD^s for some real constants C and D > 0.

In particular, if X is a random variable with moments of Gamma type, this applies to the meromorphic extension F(s) of E X^s. In this case, it is not possible that both ρ+ = ∞ and ρ− = −∞ (i.e., that E X^s is entire), except in the trivial case X = D a.s. for some D > 0 (and thus E X^s = D^s).
We begin by proving a lemma.
Lemma 4.2. Suppose that F ∈ F is analytic and non-zero in a half-plane Re s < B for some B ∈ (−∞, ∞). Then F(s) = e^{αs} Q(s) for some rational function Q with real poles and zeros and some α ∈ R.
Proof. Say that two non-zero real numbers a and a ′ are commensurable if a/a ′ ∈ Q; this is an equivalence relation on R * := R \ {0}. (In algebraic language, the equivalence classes are the cosets of Q * in R * .) The poles of Γ(as + b) are regularly spaced with distances 1/|a|. Thus, if a and a ′ are incommensurable, and b, b ′ are any real numbers, then Γ(as + b) and Γ(a ′ s + b ′ ) have at most one common pole.
We divide the set {a_j}_{j=1}^J ∪ {a'_k}_{k=1}^K into equivalence classes of commensurable numbers. This gives a corresponding factorization F(s) = ∏_{ℓ=1}^L F_ℓ(s), where L is the number of equivalence classes and each F_ℓ is of the same form as F but with all a_j and a'_k commensurable. It follows that two different factors F_{ℓ₁}(s) and F_{ℓ₂}(s) have at most a finite number of common zeros or poles; hence, by decreasing B, we may assume that there are no such common poles or zeros with Re s < B. Hence, a pole or zero of a factor F_ℓ(s) in Re s < B cannot be cancelled by another factor, and thus such poles or zeros do not exist. Consequently, each factor F_ℓ(s) satisfies the assumption of the lemma, and we may thus treat each F_ℓ(s) separately. This means that we may assume that all a_j and a'_k are commensurable. In this case, there is a positive real number r such that all a_j and a'_k are (positive) integer multiples of r. Using Gauss's multiplication formula (A.5), we may convert each factor Γ(a_j s + b_j) or Γ(a'_k s + b'_k) into a product of a constant, an exponential factor e^{βs}, and a number of Gamma factors Γ(rs + u_i) with u_i ∈ R. Using the functional relation (A.2), we may further assume that each u_i ∈ (0, 1], provided we also allow factors (rs + u)^{±1} with u ∈ R. Collecting the factors, we see that

F(s) = e^{αs} Q(s) ∏_j Γ(rs + u_j) / ∏_k Γ(rs + u'_k)   (4.6)

for a constant α, a rational function Q(s) with real zeros and poles, and some u_j, u'_k ∈ (0, 1]. We may assume that Q has no poles or zeros with Re s < B (by again decreasing B if necessary). The factors Γ(rs + u_j) and Γ(rs + u'_k) have no zeros but each has an infinite number of poles with Re s < B, and two factors Γ(rs + u_j) and Γ(rs + u'_k) have disjoint sets of poles unless u_j = u'_k. Since the poles with Re s < B must cancel in (4.6), this shows that the Gamma factors must cancel each other completely, and thus (4.6) reduces to F(s) = e^{αs} Q(s).
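The key conversion step uses Gauss's multiplication formula Γ(mz) = (2π)^{(1−m)/2} m^{mz−1/2} ∏_{k=0}^{m−1} Γ(z + k/m), which turns a factor Γ(mrs + b) into Gamma factors with coefficient r. The identity itself is easy to check numerically (our illustration, m = 3, arbitrary z):

```python
import math

def gauss_product(z, m):
    # Right-hand side of Gauss's multiplication formula for Γ(mz).
    prod = 1.0
    for k in range(m):
        prod *= math.gamma(z + k / m)
    return (2.0 * math.pi) ** ((1 - m) / 2.0) * m ** (m * z - 0.5) * prod

z, m = 0.37, 3
lhs = math.gamma(m * z)
rhs = gauss_product(z, m)
```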
Proof of Theorem 4.1. (i): Using Γ(z) = Γ(z + 1)/z repeatedly on any factor with b_j ≤ 0 or b'_k ≤ 0, we may write F(s) as

F(s) = C D^s Q(s) ∏_j Γ(a_j s + b̃_j) / ∏_k Γ(a'_k s + b̃'_k)   (4.7)

with b̃_j, b̃'_k > 0, where Q(s) is a rational function with only real poles and zeros; say Q(s) = c ∏_{i=1}^I (s − r_i) / ∏_{ℓ=1}^L (s − r'_ℓ) for some real r_i and r'_ℓ; we may further assume that r_i ≠ r'_ℓ for all i and ℓ. Since 0 is not a pole or zero of F, it is by (4.7) not a zero or pole of Q, and thus all r_i, r'_ℓ ≠ 0. If r < 0, then

s − r = Γ(s − r + 1)/Γ(s − r),   (4.8)

and if r > 0, then

s − r = −(r − s) = −Γ(r − s + 1)/Γ(r − s),   (4.9)

so Q(s) may be written as a product of quotients of Gamma factors of the desired type (and a constant), and thus the result follows from (4.7).
(ii): We use (i) and may thus assume that (4.1) holds with b_j, b'_k > 0. We factorize F(s) as, with d = log D,

F(s) = C e^{ds} F_+(s) F_−(s),   (4.10)

where F_+(s) contains all factors Γ(a_j s + b_j) and Γ(a'_k s + b'_k) with a_j, a'_k > 0, and F_−(s) contains the factors with a_j, a'_k < 0, so F_− is of the desired form. Since b_j, b'_k > 0, F_−(s) has no poles or zeros with Re s ≤ 0. By assumption, ρ− = −∞, and thus F(s) is analytic and non-zero in the half-plane Re s ≤ 0. Consequently, by (4.10), F_+(s) also has no poles or zeros in Re s ≤ 0. By Lemma 4.2, F_+(s) = Q(s)e^{αs} for a rational function Q(s) = c ∏_j (s − u_j) / ∏_k (s − v_k), where u_j and v_k are real. We may assume that u_j ≠ v_k for all j, k, and then each u_j or v_k is a zero or pole of F_+(s), and thus all u_j, v_k > 0. Using (4.9), we thus can write Q(s) in the desired form; hence F_+(s) and F(s) can be so written.
(iii): Follows from (ii) by replacing F(s) by F(−s).

(iv): In this case, F(s) is an entire function without zeros. We use again the factorization (4.10). By the proof of (ii), F_+(s) = e^{αs}Q(s) for a rational function Q. By symmetry, as in the proof of (iii), similarly F_−(s) = e^{α_− s}Q_−(s) for another rational function Q_−. Thus,

F(s) = C e^{(d+α+α_−)s} Q(s) Q_−(s).   (4.11)

Hence, the rational function Q(s)Q_−(s) is entire and has neither poles nor zeros; consequently, it is constant. Thus F(s) = C_1 e^{d_1 s} for some C_1 and d_1.
As a consequence we show that the function ν_F(s) describing the poles and zeros of F(s) essentially determines F, and thus the distribution of X and Y satisfying (1.1) or (1.2).

Theorem 4.3. If F_1, F_2 ∈ F and ν_{F_1} = ν_{F_2}, then F_2(s) = CD^s F_1(s) for some real constants C ≠ 0 and D > 0.
Proof. F := F_2/F_1 ∈ F and ν_F(s) = ν_{F_2}(s) − ν_{F_1}(s) = 0 for every s. Hence, F ∈ F_0 and ρ− = −∞, ρ+ = ∞; thus Theorem 4.1(iv) shows that F(s) = CD^s.

Corollary 4.4. If X_1 and X_2 are positive random variables with moments of Gamma type and ν_{X_1} = ν_{X_2}, then X_2 d= DX_1 for some constant D > 0. In other words, a distribution with moments of Gamma type is uniquely determined up to a scaling factor by the function ν_X(s).
Similarly, if Y_1 and Y_2 have moment generating functions of Gamma type with the same ν, then Y_2 d= Y_1 + d for some real constant d.
Proof. Let F_j(s) be the meromorphic extension of E X_j^s, j = 1, 2. Then ν_{F_1} = ν_{F_2}, so Theorem 4.3 yields F_2(s) = CD^s F_1(s); since F_1(0) = F_2(0) = 1, we have C = 1, and thus E X_2^s = E(DX_1)^s, whence X_2 d= DX_1 by Corollary 2.3.

Proof. Use, for simplicity, a representation as in Theorem 4.1(i). Then, using (4.3), the terms with a_j > 0 and a'_k > 0 give no contributions to ν_F(s) and N_+(x) for s > 0 and x > 0, while each a_j < 0 gives a contribution |a_j|x + O(1) to N_+(x) (poles regularly spaced at distances 1/|a_j|), and similarly each a'_k < 0 gives a contribution −|a'_k|x + O(1) to N_+(x). Consequently, for x > 0, using Lemma 2.12,

N_+(x) = (Σ_{j: a_j<0} |a_j| − Σ_{k: a'_k<0} |a'_k|) x + O(1) = ((γ − γ')/2) x + O(1),

and the result as x → ∞ follows. The result as x → −∞ follows similarly, or by replacing F(s) by F(−s).

5. Asymptotics of moments or moment generating function
In this section we assume that X > 0 and Y = log X are random variables such that (1.1)-(1.3) hold (for ρ− < Re s < ρ+ and t ∈ R), i.e.,

E X^s = E e^{sY} = F(s) := C D^s ∏_{j=1}^J Γ(a_j s + b_j) / ∏_{k=1}^K Γ(a'_k s + b'_k),   (5.1)

and we write as above D = e^d. Recall the definitions (2.4)-(2.8).
We begin with asymptotics of F along the imaginary axis and close to it.
Theorem 5.1. As t → ±∞,

|F(it)| ∼ C_1 |t|^δ e^{−γπ|t|/2}.   (5.2)

Moreover, for any fixed real σ, and uniformly for σ in any bounded set,

|F(σ + it)| ∼ C_1 e^{κσ} |t|^{δ+γ'σ} e^{−γπ|t|/2}.   (5.3)

Proof. It is an easy, and well-known, consequence of Stirling's formula, see e.g. (A.13), that for any complex constant c and all complex z in a sector |arg z| < π − ε (where ε > 0) with |z| large enough, for example |z| ≥ 2|c|/sin ε,

log Γ(z + c) = (z + c − 1/2) log z − z + (1/2) log(2π) + o(1),   (5.4)

uniformly for c in any bounded set and such |z|.
If a > 0 and b ∈ R, we thus have for real t → +∞, taking z = iat in (5.4) and in Stirling's formula (A.12), an expansion of log Γ(iat + b); taking the real part, we find log |Γ(ait + b)| = Re(log Γ(ait + b)). Consequently, the asserted asymptotics follow for a > 0, and for general real a and t we then obtain them since Γ(z̄) is the complex conjugate of Γ(z). (Alternatively, at least for fixed σ, we may apply (5.2) with b_j replaced by b_j + σa_j, b'_k replaced by b'_k + σa'_k, and C replaced by CD^σ = Ce^{dσ}; note that the proof holds for any function F of this type, without assuming the existence of random variables X and Y.)

Proof of Proposition 2.7. By (5.2), the values of F(it) determine γ, δ and C_1. Further, choosing any fixed σ > 0 in (5.3), we see that F determines γ' and κ too.
Proof. By Theorem 5.1, the characteristic function E e^{itY} = F(it) is integrable, and thus Y has a continuous density f_Y obtained by Fourier inversion: Since Y = log X, X is also absolutely continuous, with the density function (Alternatively and equivalently, F(s) is the Mellin transform of f_X, and this is the Mellin inversion formula.) Since Theorem 5.1 further implies that |t|^N F(it) is integrable for every N ≥ 0, f_Y and f_X are infinitely differentiable and we may differentiate (5.9) and (5.8) under the integral sign an arbitrary number of times.
The integrands in (5.9) and (5.8) are analytic in s for ρ_− < Re s < ρ_+, and thus the estimate (5.3) implies that we can move the line of integration to any line Re s = σ with ρ_− < σ < ρ_+.
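The Mellin inversion just described is concrete enough to compute. The sketch below recovers the density in the simplest case X ~ Exp(1), where F(s) = E X^s = Γ(s + 1) and inversion should return f_X(x) = e^{−x}; the contour position, truncation and grid size are illustrative choices.

```python
import numpy as np
from scipy.special import gamma

# Numerical Mellin inversion for X ~ Exp(1): F(s) = Gamma(s+1), and
#   f_X(x) = (1/2 pi) Int_R x^{-(sigma+it)-1} Gamma(sigma+it+1) dt.
# The integrand decays like e^{-pi|t|/2}, so truncating at |t| = 40 is ample.
def density_via_mellin(x, sigma=0.5, T=40.0, n=200001):
    t = np.linspace(-T, T, n)
    s = sigma + 1j * t
    integrand = x ** (-s - 1) * gamma(s + 1)
    return (integrand.sum() * (t[1] - t[0])).real / (2 * np.pi)

fx = density_via_mellin(1.5)
print(fx, np.exp(-1.5))
```

Because the integrand is analytic and decays exponentially, even this plain Riemann sum matches e^{−1.5} to high accuracy.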
Remark 5.5. For X, we consider the density only for x > 0, and 'infinitely differentiable' here means on (0, ∞). Continuity and differentiability of f_X at 0 will be considered in Theorem 6.11.

Remark 5.6. In the case γ = 0, the same argument shows that if δ < −1, then X and Y have continuous density functions, which have at least ⌈|δ|⌉ − 2 continuous derivatives. However, Example 3.4, where δ = −β, shows that in general we do not have more derivatives. Similarly, Examples 3.3 and 3.4 show that we do not necessarily have continuous density functions when γ = 0 and −1 ≤ δ ≤ 0. Example 10.1 gives an example with γ = δ = 0 where the distribution is mixed, with a point mass besides the absolutely continuous part.
Note that, by (5.2), γ = δ = 0 if and only if |E X^{it}| = |E e^{itY}| has a non-zero limit as t → ±∞; by the Riemann–Lebesgue lemma, this implies that Y and X do not have absolutely continuous distributions.
We next consider asymptotics of F along the real axis, when possible. If ρ_+ < ∞, then F has poles, and possibly zeros, on the positive real axis. Typically there is an infinite number of such poles (but see Example 3.3 for a counterexample), and then we cannot consider asymptotics for all s → +∞. However, we can restrict s to a subset of R and obtain asymptotic results similar to Theorem 5.7 in this case too.
Lemma 5.8. Given real a_j, b_j, a'_k, b'_k for 1 ≤ j ≤ J and 1 ≤ k ≤ K, with a_j, a'_k ≠ 0, there exists a closed set E ⊂ R and a constant ξ > 0 such that E ∩ I has measure greater than 1/2 for every interval I of length 1, and |sin(π(a_j s + b_j))| ≥ ξ and |sin(π(a'_k s + b'_k))| ≥ ξ for every j and k and all s ∈ E.
Proof. Let N be the set of all (real) s such that a_j s + b_j ∈ Z for some j or a'_k s + b'_k ∈ Z for some k. There exists a constant M such that no interval of length 1 contains more than M points of N. (For example, M = J + K + Σ_j |a_j| + Σ_k |a'_k| works, since the points of N coming from the j-th factor are spaced 1/|a_j| apart.) It follows that E := {x : |x − s| ≥ 1/(2M + 3) for all s ∈ N} has the stated properties, for some ξ > 0.
In the sequel we let E denote this set, defined for a given representation (5.1) of F(s). By considering only s ∈ E, we can extend Theorem 5.7 to arbitrary F.

Proof. If a < 0 and b ∈ R, then for real s → +∞, by (A.6) and (5.12), Note that if ρ_+ = ∞, then γ' = γ, while if ρ_− = −∞, then γ' = −γ, by (2.4)-(2.5) together with Theorem 4.1 and Proposition 2.7; hence the exponents in Theorems 5.7 and 5.9 agree (as they must).
For complex arguments, we will use the following estimate.
Proof. We may assume t > 0. Consider first a factor Γ(as + b) with a > 0 and s = σ + it, σ > 0. (We may assume that σ is large enough that aσ + b > 0, e.g. by using (5.3) for small σ.) By (A.13), Consequently, integrating from 0 to t, If a < 0, we argue as in the proof of Theorem 5.9 and have by (A.6) If further (a, b) = (a_j, b_j) or (a'_k, b'_k) for some j or k, and σ ∈ E, then log |sin(π(a(σ + it) + b))| = π|a|t + O(1), and it follows, using (5.18) with |a| instead of a, that The result follows by multiplying the factors in F, using Lemma 2.12.

Asymptotics of density function
We continue to assume that X and Y = log X are random variables such that (1.1)-(1.3) hold; as above we write E X^s = E e^{sY} = F(s). We assume γ > 0, so that density functions of X and Y exist by Theorem 5.4, and consider asymptotics of the density function f_X(x) as x → 0 or x → ∞, or equivalently of f_Y(y) as y → −∞ or y → ∞. By symmetry it suffices to consider one side, and we concentrate on x → ∞, but for convenience in applications we state most results for both sides and for both X and Y.
We consider first x → ∞ (y → ∞) and begin with the case ρ_+ = ∞, when X has moments of all (positive) orders and f_X decreases rapidly (as we will see in detail soon). We use the saddle point method, see e.g. Flajolet and Sedgewick [12, Chapter VIII], in a standard way.

Theorem 6.1. Suppose that ρ_+ = ∞ and γ > 0. Then where c_1 := (δ + 1/2)/γ,

Proof. By Theorem 4.1 and Proposition 2.7, we may assume that all a_j, a'_k, b_j, b'_k > 0. We will use (5.6), which now is valid for all x > 0 and σ > 0.
Remark 6.2. The derivative f'_X(x) and higher derivatives f^{(n)}_X(x) can be obtained by repeated differentiation of (5.6) under the integral sign, which multiplies the integrand by a factor (−s − 1) · · · (−s − n) x^{−n}. The argument above, including the estimates (6.5) and (6.6), applies to this integral as well and shows that, for any n ≥ 0, In particular, every derivative of f_X tends to 0 rapidly (faster than any power of x) as x → ∞.
Remark 6.3. The saddle-point method also yields more precise asymptotics, including higher-order terms, by refining the estimates around s = σ in the proof above, see e.g. Flajolet and Sedgewick [12, Section VIII.3]; we leave the details to the reader. This yields an asymptotic expansion in powers of σ^{−1}, with σ as in the proof above, i.e., in powers of x^{−1/γ}. See Remark 1.10 for an example of such an expansion (there obtained from a known result rather than by performing the calculations).
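The saddle-point recipe can be tried out in the simplest case F(s) = Γ(s + 1), i.e. X ~ Exp(1), where the exact density e^{−x} is available for comparison. The sketch below shifts the contour to the saddle σ solving ψ(σ + 1) = log x and applies the Gaussian approximation; the bracketing interval and test point are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma, gammaln, polygamma

# Saddle-point evaluation of a density from its Mellin transform, toy case
# F(s) = Gamma(s+1) (X ~ Exp(1), f_X(x) = e^{-x}).  The saddle sigma solves
# psi(sigma+1) = log x; the Gaussian approximation then gives
#   f_X(x) ~ Gamma(sigma+1) x^{-sigma-1} / sqrt(2 pi psi'(sigma+1)).
def saddle_density(x):
    sigma = brentq(lambda s: digamma(s + 1) - np.log(x), 1e-6, 10 * x + 10)
    log_f = (gammaln(sigma + 1) - (sigma + 1) * np.log(x)
             - 0.5 * np.log(2 * np.pi * polygamma(1, sigma + 1)))
    return np.exp(log_f)

approx, exact = saddle_density(30.0), np.exp(-30.0)
print(approx, exact, approx / exact)
```

At x = 30 the saddle sits near σ ≈ x − 1/2 and the relative error is of order 1/σ, well below one percent.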
We continue with the case ρ_+ < ∞, when F(s) has a pole at ρ_+ of order ν_F(ρ_+) ≥ 1. (Recall the notation ν_F from (4.2).) We denote the coefficients of the singular part of the Laurent expansion of F at a point s_0 by c_ℓ(s_0): In particular, c_1(s_0) is the residue Res_{s_0} F. We have the following standard result by Mellin inversion, see [11].
If ρ_+ is a simple pole of F, i.e. ν = 1, this can be written

(ii) More precisely, there is an asymptotic expansion, for any fixed σ > 0, summing over all poles ρ of F in (0, σ]. (The inner sum vanishes unless ρ is a pole, so formally we may sum over all ρ.) Corresponding asymptotics for f_Y(y) = e^y f_X(e^y) are obtained by replacing each x^{−r−1} by e^{−ry} and log^ℓ x by y^ℓ.
Proof. As said above, this is a standard result, and we refer to [11] for details, but for completeness and later use we give the simple proof.
It suffices to prove (ii), since (i) follows by taking σ = ρ_+ + η. We may assume that σ is not a pole of F (otherwise we increase σ a little). We start with (5.6), where we integrate over a line with Re s ∈ (ρ_−, ρ_+). We may, using Theorem 5.1, shift the line to Re s = σ > ρ_+ too, but then we have to subtract the residues at the traversed poles. Thus Res_{s=ρ}(x^{−s−1} F(s)), (6.8) and the result follows by computing the residues, using (6.7) and

In Theorem 6.4(ii) we have an asymptotic expansion, valid for fixed σ as x → ∞. It is natural to ask whether this asymptotic expansion actually yields a series representation for f_X(x), i.e., whether we can let σ → ∞ for fixed x (with the error term tending to 0) so that f_X(x) is represented as a convergent series. This is possible sometimes, but not always. In fact, the following theorem shows that this is possible exactly when γ' < 0, at least provided that there is an infinite number of poles ρ > 0 and that these are simple.
(i) If γ' < 0, then, for all x > 0, summing over all poles ρ > 0 of F. In particular, if F has only simple poles,

(ii) If γ' > 0 and there is an infinite number of poles ρ > 0 of F, all simple, then the sum (6.10) diverges for all x > 0.
(iii) If γ' = 0, then (6.9) holds for x > e^κ; hence (6.10) holds for x > e^κ provided all poles are simple. However, at least provided that there is an infinite number of poles ρ > 0 of F and all such poles are simple, the sum (6.10) diverges for 0 < x < e^κ.
Corresponding results for f_Y(y) are obtained by replacing x^{−ρ−1} by e^{−ρy} and log^ℓ x by y^ℓ. The cut-offs in (iii) become y > κ and y < κ.
(ii) and (iii) (divergence): Let ρ > 0 be a pole of F that is not too close to a zero or another pole, meaning that the distance to every zero or other pole is at least some small constant ξ > 0. (This is true for all poles if all a_j, a'_k are commensurable and ξ is small enough; in general it is true for a large fraction of the poles, and certainly for an infinite number of them.) A simple modification of the proof of Theorem 5.9 then yields the same estimate as there for the residue at ρ: |Res_ρ(F)| = ρ^δ e^{γ'ρ log ρ + (κ−γ')ρ + O(1)}, and thus, for every fixed x, Letting ρ → ∞, we see that the terms of (6.10) are unbounded if γ' > 0, or if γ' = 0 and κ > log x; hence the sum diverges.
Remark 6.6. To show divergence in (ii) and (iii), we assumed for simplicity that F has only simple poles on the positive axis; we conjecture that, more generally, (6.9) diverges also without this restriction.
To show divergence we also assumed that F has an infinite number of positive poles; this is, on the contrary, obviously necessary for divergence, since otherwise the sums (6.9) and (6.10) are finite. However, if F has only a finite number of positive poles, then the sum in (6.9) or (6.10) is not integrable, since it is ∼ c x^{−ρ−1} log^ℓ x as x → 0, where ρ > 0 is the largest pole of F, c ≠ 0 and ℓ = ν_F(ρ) − 1; hence the sum cannot equal f_X(x) for all x > 0. Example 10.2 yields an example where the sum does not equal f_X(x) for any x > 0 (although the difference tends to 0 rapidly as x → ∞ by Theorem 6.4).
In this connection, note that if γ > γ ′ , then there is an infinite number of poles in (0, ∞) by Proposition 4.6.
(iii) If γ' = 0, then (6.11) holds for 0 < x < e^κ; hence (6.12) holds for 0 < x < e^κ provided all poles are simple. However, at least provided that there is an infinite number of poles ρ < 0 of F and all such poles are simple, the sum (6.12) diverges for x > e^κ.
Corresponding results for f_Y(y) are obtained by replacing x^{|ρ|−1} by e^{|ρ|y} and log^ℓ(1/x) by (−y)^ℓ. The cut-offs in (iii) become y < κ and y > κ.

Theorems 6.5 and 6.9 say that (at least if γ > 0) f_X(x) has a series expansion in positive (but not necessarily integer) powers of x if γ' > 0, and a series expansion in negative (but not necessarily integer) powers of x if γ' < 0, in both cases allowing for terms with logarithmic factors too; if γ' = 0, one expansion holds for 0 < x < e^κ and the other for x > e^κ.

Remark 6.10. Suppose that all a_j, a'_k are commensurable; then F(s) may, as in Section 4 (see the proof of Lemma 4.2), be rewritten with all a_j, a'_k = ±r for some real r > 0. The poles of F in (−∞, 0) then form one or several arithmetic series {s_l − n/r} with gap 1/r, possibly apart from a finite number of other poles. If further all poles are simple, then the residue at such a pole s_l − n/r is of the form (C/r)(−D)^n (n!)^{−1} ∏_{j≠l} Γ(n + c_j) / ∏_k Γ(n + c'_k), and the contribution to (6.12) from this series of poles is a (generalized) hypergeometric series with argument −Dx^{1/r}, times a constant and a power of x. Consequently, if further γ, γ' > 0, then the density function may be expressed using one or several hypergeometric functions. Typical examples are given in Theorems 1.8 and 1.13.
As a corollary, we get results on continuity and differentiability at 0.

Theorem 6.11. Suppose that γ > 0.
(i) The density f_X is continuous at 0, and thus everywhere on R, if and only if ρ_− < −1.
(ii) The density f_X has a finite jump at 0 if and only if ρ_− = −1 and this is a simple pole of F. In this case f_X(0+) = Res_{ρ_−}(F).
(iii) The density f_X is infinitely differentiable on R if and only if ρ_− = −∞.
Parts (i) and (ii) follow immediately from Theorems 6.7 and 6.8.
If f_X is infinitely differentiable at 0, then every derivative f^{(n)}_X Conversely, if ρ_− = −∞, then Theorem 6.7 shows that f_X(x) tends to 0 rapidly as x ց 0. Moreover, by Remark 6.2 and the usual change of variables x → 1/x, the same holds for each derivative f^{(n)}_X. Hence f_X is infinitely differentiable.

Remark 6.12. More generally, f_X has n continuous derivatives (at 0) if and only if ρ_− < −n − 1; we omit the details.

Remark 6.13. We have in this section assumed γ > 0 in order to have good estimates of F(s) as |Im s| → ∞ in the proofs. It seems likely that the results can be extended to the case γ = 0 too, under suitable conditions, but we have not pursued this beyond noting that the results above hold also for the examples in Section 3 with γ = 0.
The same holds, mutatis mutandis, for the Pareto distribution in Example 3.13, where now there is a single pole at α > 0 and the density vanishes on (0, 1).
We give some examples of applying the theorems above to the distributions in Section 3. This is mainly as an illustration of the theorems; we cannot expect to obtain any new results for these classical distributions. Other applications of the theorems are given in Theorems 1.8, 1.9, 1.13, 1.14, 9.1, 9.3, 9.6, 9.7.

Example 6.14. For the exponential distribution in Example 3.2, Theorem 6.1 yields f(x) ∼ e^{−x} as x → ∞ (this is actually an identity for all x > 0) and Theorem 6.9 yields, since the poles are at −n − 1 with n = 0, 1, . . . , f(x) = Σ_{n=0}^∞ (−1)^n x^n/n!, x > 0, again a trivial result.

Example 6.15. Similarly, for the Gumbel distribution in Example 3.18, f(y) = e^{−y−e^{−y}} and the asymptotic formula in Theorem 6.7 is actually an equality for all real y.

Example 6.16. Consider the stable distribution in Example 3.10 with 0 < α < 1. Since γ > 0 > γ', we can apply Theorem 6.5(i). By (This includes the case when nα is an integer, in which case nα is not a pole because of cancellation; (6.13) then correctly yields Res_{nα}(F) = 0.) We thus obtain by (6.10) This is the well-known formula for the stable density, see Feller [9, XVII.(6.8)] (with γ = −α for the positive case studied here).
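The residue series for the stable density can be checked concretely for α = 1/2, where the positive stable law with Laplace transform e^{−√λ} has the closed-form density f(x) = (2√π)^{−1} x^{−3/2} e^{−1/(4x)}. The sketch below sums the classical series (cf. Feller); only odd n contribute, since sin(πnα) vanishes for even n. The truncation level and test point are illustrative choices.

```python
import numpy as np
from scipy.special import gammaln

# Series expansion of the positive 1/2-stable density: the term for odd
# n = 2m+1 is (-1)^m Gamma(n/2+1)/(pi n!) x^{-n/2-1}; even n drop out because
# sin(pi n / 2) = 0.  Compared against the closed form
#   f(x) = x^{-3/2} e^{-1/(4x)} / (2 sqrt(pi)).
def stable_half_series(x, terms=40):
    total = 0.0
    for m in range(terms):
        n = 2 * m + 1                       # only odd n contribute
        sin_term = (-1.0) ** m              # sin(pi n / 2) for odd n
        coeff = np.exp(gammaln(n / 2 + 1) - gammaln(n + 1))   # Gamma(n/2+1)/n!
        total += (-1.0) ** (n + 1) * sin_term * coeff * x ** (-n / 2 - 1) / np.pi
    return total

x = 2.0
series_val = stable_half_series(x)
closed = x ** -1.5 * np.exp(-1.0 / (4.0 * x)) / (2.0 * np.sqrt(np.pi))
print(series_val, closed)
```

The terms decay like 4^{−m}/(m! x^m), so forty terms are far more than enough at x = 2.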

Brownian supremum process area
We consider the integral A = A(1) of the Brownian supremum process defined in (1.5). Janson and Petersson [19] used (7.2) to compute the integer moments E A^n, n ∈ N; Theorem 1.6 extends their formula to all real and complex moments.
We guess that it is also possible to derive this equation directly from (7.2) by manipulations of Laplace transforms, but we have not pursued this.
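The first moment of A admits a simple independent check: since E sup_{[0,t]} B = E|B(t)| = √(2t/π), Fubini gives E A = ∫_0^1 √(2t/π) dt = (2/3)√(2/π) ≈ 0.532. The Monte Carlo sketch below verifies this; the grid and sample sizes are arbitrary choices, and the grid supremum slightly underestimates the true one.

```python
import numpy as np

# Monte Carlo check of E A for A = int_0^1 S(t) dt, S(t) = sup_{u<=t} B(u).
# By the reflection principle E S(t) = E|B(t)| = sqrt(2t/pi), so
# E A = (2/3) sqrt(2/pi).
rng = np.random.default_rng(1)
paths, steps = 2000, 2000
incr = rng.normal(scale=np.sqrt(1.0 / steps), size=(paths, steps))
B = np.cumsum(incr, axis=1)                  # Brownian paths on a grid of [0, 1]
S = np.maximum.accumulate(B, axis=1)         # running supremum process
est = S.mean(axis=1).mean()                  # Riemann sum for the area, averaged
target = (2.0 / 3.0) * np.sqrt(2.0 / np.pi)
print(est, target)
```

The estimate carries a small downward discretization bias of order steps^{−1/2} in addition to the Monte Carlo error.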

A hashing variable
As said in Section 1, when studying the maximum displacement in hashing with linear probing, Petersson [24, Theorem 5.1] found as a limit a random variable M with the distribution where A is the Brownian supremum area studied in Section 7. Lemma 1.11 shows that this type of relation preserves moments of Gamma type; hence M has moments of Gamma type, but we have postponed the proof until now.
Proof of Lemma 1.11. For x ≥ 0, which shows that (1.10) and (1.11) are equivalent. If (1.10) or (1.11) holds, and thus both hold, then, for s > −α, using (3.2),

Note also that Lemma 1.11 yields the representation where T_{2/3} has a Weibull distribution with parameter 3/2, cf. Example 3.9, and is independent of A.
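The Weibull factor appearing here is itself a basic instance of moments of Gamma type. Assuming the parametrization P(T > x) = exp(−x^α) (one common convention; the paper's "parameter 3/2" may differ by a scaling), one has E T^s = Γ(s/α + 1), which the sketch below checks by quadrature for α = 3/2.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# If P(T > x) = exp(-x^alpha), the density is alpha x^{alpha-1} e^{-x^alpha}
# and E T^s = Gamma(s/alpha + 1): substitute u = x^alpha in the integral.
alpha = 1.5
def weibull_moment(s):
    val, _ = quad(lambda x: x ** s * alpha * x ** (alpha - 1) * np.exp(-x ** alpha),
                  0, np.inf)
    return val

checks = [(s, weibull_moment(s), gamma(s / alpha + 1)) for s in (0.5, 1.0, 2.0)]
for s, num, exact in checks:
    print(s, num, exact)
```

The substitution u = x^α reduces each integral to a Gamma integral, which is exactly what "moments of Gamma type" means in this simple case.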

Triangular and diagonal Pólya urns
A generalized Pólya urn contains balls of several different colours. At each time n ≥ 1, one of the balls is drawn at random, and a set of new balls, depending on the colour of the drawn ball, is added to the urn. We consider for simplicity only the case of two colours, say black and white; the replacement rule may then be described by a matrix (a b; c d), meaning that if the drawn ball is black [white], it is replaced together with a black and b white balls [c black and d white balls]. It is here natural to let a, b, c, d be non-negative integers, but in fact the model can be defined (and the results below hold) for arbitrary real a, b, c, d ≥ 0, see [15; 16]. (Further, under certain conditions some of the entries can be negative too, but that case is not interesting here.) Different values of the parameters yield a variety of different limit laws for the numbers B_n and W_n of black and white balls in the urn after n steps, see e.g. [10; 15; 16] and the references given there. We are here interested in the special case of a triangular urn, meaning that the replacement matrix is triangular, say b = 0. We start with B_0 = b_0 ≥ 0 black and W_0 = w_0 ≥ 0 white balls, and assume w_0 > 0 (otherwise there will never be any white balls).

9.1. Balanced triangular urns. Assume that the urn is triangular and balanced, meaning that the total number of added balls does not depend on the drawn ball, i.e., a = c + d; we further assume that a, c, d > 0; thus a > d > 0 and c = a − d. In this case, it is shown by Puyhaubert [ In the special case (b_0, w_0) = (c, d), and thus b_0 + w_0 = a, this simplifies to d^s Γ(s + 1)/Γ(ds/a + 1), so W/d has a Mittag-Leffler distribution with parameter d/a ∈ (0, 1), see (3.17). All poles of F(s) := E W^s are on the negative real axis, so ρ_+ = ∞. In general, (9.1) shows that there is a pole at −w_0/d, but if b_0 = 0, then this singularity is removable and the first pole on the negative real axis is −w_0/d − 1.
We thus have ρ_− = −w_0/d when b_0 > 0, but ρ_− = −w_0/d − 1 when b_0 = 0. In fact, if b_0 = 0, so we start with only w_0 white balls, the first drawn ball is necessarily white, and thus the urn after the first draw contains c black and w_0 + d white balls. Thus the limit random variable W is the same for the initial conditions (0, w_0) and (c, w_0 + d), and we may without loss of generality assume that b_0 > 0.
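The urn dynamics described above are easy to simulate. The minimal sketch below implements the triangular replacement rule (a, 0; c, d) and checks the defining property of a balanced urn: since every draw adds exactly a = c + d balls, the total after n steps is the deterministic quantity b_0 + w_0 + an. The parameter values are illustrative choices.

```python
import random

# Two-colour triangular Polya urn with replacement matrix (a, 0; c, d):
# a drawn black ball is returned with a new black balls; a drawn white ball
# is returned with c black and d white balls.  Balanced means a = c + d,
# so each step adds exactly a balls and the total is deterministic.
def simulate_urn(a, c, d, b0, w0, n, seed=0):
    rng = random.Random(seed)
    black, white = b0, w0
    for _ in range(n):
        if rng.random() < black / (black + white):
            black += a                       # drew black: add a black balls
        else:
            black += c                       # drew white: add c black, d white
            white += d
    return black, white

a, c, d = 3, 2, 1                            # balanced: a = c + d
b0, w0, n = 1, 1, 1000
black, white = simulate_urn(a, c, d, b0, w0, n)
print(black, white, black + white)
```

With b_0 > 0 the white-ball count W_n grows only like n^{d/a}, which is the regime producing the Mittag-Leffler-type limits discussed below.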
The function F(s) in (9.1) has simple poles at s = −w_0/d − n, n = 0, 1, . . . (except that some of these may in fact be removable singularities), and Theorems 5.4 and 6.9 yield, by a straightforward calculation of the residues using (A.11) and (A.6), the following:

Theorem 9.1. The limit variable W for a balanced triangular urn (a 0; c d) with a = c + d and a, c, d, w_0 > 0 has a density function f_W on (0, ∞) given by, for x > 0, In fact, [10] even gives a local limit theorem to this density function.
Remark 9.2. It follows from Theorem 9.1 by comparison with Example 6.17, or more simply directly from (9.1) and (3.17), that in the special case b_0 = 0, W/d has a Mittag-Leffler(d/a) distribution conjugated with x^{w_0/d}, see Remark 2.11; similarly, in the special case b_0 = c = a − d, W/d has a Mittag-Leffler(d/a) distribution conjugated with x^{(w_0−d)/d}. Theorem 9.1 shows immediately that as x ց 0, the density f_W(x) satisfies f_W(x) ∼ C' x^{w_0/d−1}, where C' = Γ((b_0 + w_0)/a)(Γ(w_0/d)Γ(b_0/a))^{−1} d^{−w_0/d} > 0, provided b_0 > 0. For large x, Theorem 6.1 yields:

Theorem 9.3. As x → ∞, f_W(x) ∼ C_2 x^{c_1−1} e^{−c_2 x^{a/c}}, with c_1 = (δ + 1/2)a/c, c_2 = c a^{−a/c} d^{−1} and C_2 = C_1 (2πc/a)^{−1/2} (d a^{d/c})^{−(δ+1/2)}, where δ and C_1 are given above.

Remark 9.4. For non-balanced triangular urns (a ≠ c + d), limit results are given in [16], but the results are more complicated and we do not believe that the limits have moments of Gamma type. (See for example [16, Theorem 1.6], which gives a complicated integral formula for the moments in the case a > d > 0, c > 0. In the balanced case, it simplifies to (9.1), but as far as we know, there is no similar simplification in general.)
x^n which, of course, also follows directly from (10.4). However, in Theorem 6.5, although the sum in (6.10) consists of a single term x^{−2} and thus converges, the sum x^{−2} ≠ f(x) for all x > 0, as asserted in Remark 6.6. (But the error is exponentially small, and the estimates in Theorem 6.4 apply.)

11. Further remarks

Remark 11.1. Suppose that X is a positive random variable with finite moments (of all positive orders): E X^n < ∞ for n ≥ 0. If X has moments of Gamma type, then ρ_+ = ∞ and (1.1) gives, in particular, a formula for all integer moments E X^n in terms of Gamma functions. However, the converse does not hold; even if (1.1) holds for every integer s ≥ 0, it does not necessarily hold for other s. for any integer n ≥ 0 and any λ ∈ [−1, 1]; thus the variables X_λ have the same integer moments. For λ = 0, the same calculation applies to non-integer n as well, and shows that E X_0^s = (1/6)Γ(4s + 4), −1 < s < ∞, so X_0 has moments of Gamma type. However, this formula cannot hold for any other λ (and s in an interval), by the uniqueness Corollary 2.3.
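The mechanism behind families with identical integer moments can be seen numerically. In the standard construction of this kind (the paper's exact normalization of X_λ may differ), the substitution y = x^{1/4} reduces the n-th moment of a density proportional to y^3 e^{−y}(1 + λ sin y) to (1/6)∫_0^∞ y^{4n+3} e^{−y}(1 + λ sin y) dy, and the sine integral vanishes because ∫_0^∞ y^m e^{−y} sin y dy = Im[Γ(m+1)(1−i)^{−(m+1)}] and (1−i)^4 = −4 is real, so the imaginary part is 0 whenever m + 1 is a multiple of 4.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check that int_0^oo y^{4n+3} e^{-y} sin(y) dy = 0 for n = 0, 1, 2,
# so a perturbation by lambda*sin(y) leaves all these moments unchanged.
vals = []
for n in range(3):
    val, _ = quad(lambda y: y ** (4 * n + 3) * np.exp(-y) * np.sin(y), 0, np.inf)
    vals.append((n, val, gamma(4 * n + 4)))  # compare against the moment scale
    print(n, val)
```

Relative to the unperturbed moment Γ(4n + 4), each sine integral is numerically zero.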
Remark 11.2. Many of the examples in Section 3 are infinitely divisible, for example the Gamma distribution Γ(α), W_α [5, p. 26], P_α and thus P_α [5, p. 26], L_α [25]. We do not know whether there are any interesting connections between moments of Gamma type and infinite divisibility.

Remark 11.3. It is possible to consider, more generally, moments of the form (1.1) where a_j, b_j, a'_k, b'_k may be complex (and appearing in conjugate pairs to make the function real for real s). We have not pursued this extension and do not know whether there are any interesting results or examples for this class. A trivial example is the following.
where the logarithm log z is the principal value with imaginary part in (−π, π). (Here, ε > 0 is arbitrary, but the implicit constant in the O term depends on ε.) By differentiating (A.12) twice we find, for | arg z| < π − ε (e.g., for Re z > 0), (Note that also the error term may be differentiated since the functions are analytic in a larger sector and we may use Cauchy's estimate for the derivative.)
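The derivatives of Stirling's expansion mentioned here give the standard asymptotics ψ(z) = log z − 1/(2z) + O(1/z²) and ψ'(z) = 1/z + 1/(2z²) + O(1/z³), which the following minimal sketch verifies on the real axis (the test point is an arbitrary choice).

```python
import numpy as np
from scipy.special import digamma, polygamma

# Differentiated Stirling expansion:
#   psi(z)  = log z - 1/(2z)       + O(1/z^2)
#   psi'(z) = 1/z   + 1/(2 z^2)    + O(1/z^3)
x = 50.0
e_psi = abs(digamma(x) - (np.log(x) - 1.0 / (2.0 * x)))
e_psi1 = abs(polygamma(1, x) - (1.0 / x + 1.0 / (2.0 * x ** 2)))
print(e_psi, e_psi1)
```

The residual errors are governed by the next Bernoulli-number terms, of sizes about 1/(12x²) and 1/(6x³) respectively.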