Beta-gamma tail asymptotics

We compute the tail asymptotics of the product of a beta random variable and a generalized gamma random variable which are independent and have general parameters. A special case of these asymptotics was proved and used in a recent work of Bubeck, Mossel, and Rácz in order to determine the tail asymptotics of the maximum degree of the preferential attachment tree. The proof presented here is simpler and highlights why these asymptotics hold.


Introduction
There has been a lot of recent interest in various urn schemes due to their appearance in many graph growth models (see, e.g., [12,2,3,13,4,14,5,8,15]). The limiting distributions arising in these urn schemes are often related to the beta and gamma distributions. Consequently, the computation of various statistics in random graph models often boils down to using algebraic properties of these distributions, commonly referred to as the beta-gamma algebra [6].
The purpose of this note is to simplify and demystify a recent computation done in [5] involving beta and generalized gamma random variables. Bubeck, Mossel, and Rácz [5] were interested in the influence of the seed graph in the preferential attachment model, which led them to study the tail asymptotics of the maximum degree of the preferential attachment tree. This, in turn, essentially reduces to computing the asymptotics of P(BZ > t) as t → ∞, where B ∼ Beta(a, b) and Z ∼ GGa(a + b + 1, 2) are independent random variables and a and b are positive integers. Here Beta(a, b) denotes the beta distribution with positive parameters a and b, with density x^{a−1} (1 − x)^{b−1} / B(a, b) · 1_{0<x<1}, where B(a, b) = Γ(a)Γ(b)/Γ(a + b) is the beta function, and GGa(c, p) denotes the generalized gamma distribution with density (p/Γ(c/p)) x^{c−1} e^{−x^p} 1_{x>0} for c, p > 0. We refer to [5] for details on these connections; see also [10,14].
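As a concrete aside, a GGa(c, p) random variable can be simulated by drawing W ∼ Gamma(c/p, 1) and returning W^{1/p}; this is the same change of variables used in the core calculation below. A minimal sketch in Python (standard library only; the function name and the parameter choices are ours, for illustration):

```python
import math
import random

def sample_gga(c, p, rng=random):
    """Sample from GGa(c, p) by raising a Gamma(c/p, 1) variable to the power 1/p."""
    w = rng.gammavariate(c / p, 1.0)  # W ~ Gamma(c/p, 1)
    return w ** (1.0 / p)             # Z = W^(1/p) ~ GGa(c, p)

# Sanity check: if Z ~ GGa(c, p), then Z^p ~ Gamma(c/p, 1), so E[Z^p] = c/p.
random.seed(0)
c, p = 5.0, 2.0
n = 200_000
mean_zp = sum(sample_gga(c, p) ** p for _ in range(n)) / n
print(mean_zp)  # should be close to c/p = 2.5
```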
The computation in [5] involves a few pages of alternating sums cancelling each other out in just the right way. Here, in contrast, we provide a short and simple proof of these asymptotics. The core calculation is only a few lines long, involving approximations at three points which are natural and which can be justified in a relatively straightforward manner. Moreover, the argument works for all positive values of the parameters a, b, c, and p. Throughout the paper we use standard asymptotic notation; for instance, f(t) ∼ g(t) as t → ∞ if lim_{t→∞} f(t)/g(t) = 1.
Claim 1. Let a, b, c, and p be positive, let B ∼ Beta(a, b), and let Z ∼ GGa(c, p), with B and Z independent. Then we have

P(BZ > t) ∼ (Γ(a + b) / (Γ(a) Γ(c/p) p^b)) · t^{c − p(b+1)} e^{−t^p}   as t → ∞.

There has been much work on understanding the distribution and tail asymptotics of products of random variables; see, e.g., [16] for a paper from nearly half a century ago, and [9] and references therein for recent developments. In particular, Claim 1 is a special case of [9, Theorem 4.1], where the authors prove a general result for any product BZ where B ∼ Beta(a, b) and Z has a law which is in the maximum domain of attraction of the Gumbel distribution. Due to the generality of their result their proof is fairly involved. We thus believe that the simple proof we present here is useful in highlighting why these asymptotics hold.
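It is instructive to sanity-check Claim 1 numerically in the simplest special case a = b = 1 (so B is uniform on (0, 1)), p = 2, c = 4, where the claim (as follows from the calculation in the next section) reads P(BZ > t) ∼ (1/2) e^{−t²}. Integrating the GGa(4, 2) density against P(B > t/z) = 1 − t/z gives the closed form P(BZ > t) = e^{−t²} − (√π t/2) erfc(t); this closed form is our own elementary computation for this special case, not taken from [5]. A sketch in Python:

```python
import math

def exact_tail(t):
    """P(BZ > t) for B ~ Uniform(0,1), Z ~ GGa(4, 2), via direct integration."""
    return math.exp(-t * t) - 0.5 * math.sqrt(math.pi) * t * math.erfc(t)

def asymptotic_tail(t):
    """The prediction of Claim 1 in this special case: (1/2) e^{-t^2}."""
    return 0.5 * math.exp(-t * t)

for t in (2.0, 5.0, 10.0):
    print(t, exact_tail(t) / asymptotic_tail(t))  # ratios approach 1 as t grows
```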
The product BZ studied in Claim 1 has many nice properties; for instance, it has moments of Gamma type [11]. When p = 1, Z is a gamma random variable and BZ has a so-called G distribution, with its moments described by Meijer's G-function [7]. The case p = 2 appears in many settings, including the preferential attachment model mentioned above; see also [2,15] for connections to critical random graphs, random walks, and various random trees, including Aldous's Brownian continuum random tree (CRT). In a very interesting recent work, Peköz, Röllin, and Ross [15] showed that generalized gamma random variables with p an integer greater than 2 arise as limits in time-inhomogeneous Pólya-type urn schemes. It would be interesting to find connections to urn schemes and random graph models for general values of p.

The core calculation
In this section we prove Claim 1 modulo some approximations whose validity is justified later in Section 3. Put W = Z^p; by the change of variables z = w^{1/p}, dz = (1/p) w^{1/p−1} dw, one checks that W has a gamma distribution: W ∼ Gamma(c/p, 1). We have P(BZ > t) = P(B^p W > t^p), and so the claim is equivalent to

P(B^p W > w) ∼ (Γ(a + b) / (Γ(a) Γ(c/p) p^b)) · w^{c/p − b − 1} e^{−w}   as w → ∞.

We can write

P(B^p W > w) = P(W > w) · P(B^p W > w | W > w).
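For completeness, the change of variables behind W ∼ Gamma(c/p, 1) is the following one-line computation:

```latex
% With Z \sim \mathrm{GGa}(c,p), W = Z^p, i.e. z = w^{1/p}, dz = \tfrac{1}{p} w^{1/p-1}\,dw:
f_W(w) = f_Z\bigl(w^{1/p}\bigr)\,\frac{dz}{dw}
       = \frac{p}{\Gamma(c/p)}\, w^{(c-1)/p}\, e^{-w} \cdot \frac{1}{p}\, w^{1/p-1}
       = \frac{1}{\Gamma(c/p)}\, w^{c/p-1}\, e^{-w}, \qquad w > 0,
% which is exactly the density of the Gamma(c/p, 1) distribution.
```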
The first factor has well-known asymptotics: P(W > w) ∼ f_W(w) = w^{c/p−1} e^{−w} / Γ(c/p) as w → ∞ [1, formula 6.5.32]. Also, it is well known that for any random variable W which has a Gamma(r, 1) distribution, the conditional distribution of the overshoot W − w given {W > w} converges to that of a standard exponential random variable E as w → ∞. This convergence is rather strong; e.g., we also have convergence of densities. So

P(B^p W > w | W > w)   (1)
  ∼ P(B^p (w + E) > w)   (2)
  ∼ P(B ≥ 1 − E/(pw))   (3)
  ∼ (Γ(a + b)/Γ(a)) · (pw)^{−b},   (4)

and the conclusion follows. The only thing that remains is to rigorously justify the three points where asymptotic equivalence was used in the line of reasoning above; see Section 3 for details.
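Both classical facts invoked above can be made concrete in the special case r = 2: for W ∼ Gamma(2, 1) one computes directly that P(W > w) = (w + 1) e^{−w} and f_W(w) = w e^{−w}, so P(W > w)/f_W(w) = 1 + 1/w → 1, and E[W − w | W > w] = (w + 2)/(w + 1) → 1 = E[E]. A short sketch (the closed forms are elementary calculus, stated for illustration, not taken from the paper):

```python
# W ~ Gamma(2, 1): density f_W(w) = w e^{-w}, tail P(W > w) = (w + 1) e^{-w}.
def tail_over_density(w):
    return (w + 1.0) / w          # P(W > w) / f_W(w) = 1 + 1/w

def mean_overshoot(w):
    return (w + 2.0) / (w + 1.0)  # E[W - w | W > w], by direct integration

for w in (1.0, 10.0, 100.0):
    print(w, tail_over_density(w), mean_overshoot(w))  # both columns tend to 1
```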

Justifying the approximations
Here we justify why the expressions in (1), (2), (3), and (4) are all asymptotically equivalent as w → ∞.

Asymptotic equivalence of (3) and (4). Since (4) is of order w^{−b}, we can neglect all terms that are o(w^{−b}) as w → ∞. Using that 1 − B ∼ Beta(b, a), conditioning on B, and using that P(E ≥ s) = e^{−s}, we have

P(B ≥ 1 − E/(pw)) = (1/B(a, b)) ∫_0^1 x^{b−1} (1 − x)^{a−1} e^{−pwx} dx.

The integral from 1/2 to 1 is negligible, since it is at most e^{−pw/2} ∫_0^1 x^{b−1} (1 − x)^{a−1} dx = B(a, b) e^{−pw/2}. For the integral from 0 to 1/2 we first drop the factor (1 − x)^{a−1}, and we justify the validity of this later. Since ∫_0^∞ x^{b−1} e^{−pwx} dx = Γ(b) (pw)^{−b}, it suffices to show that the integral from 1/2 to ∞ is negligible. Using that (1/2 + v)^{b−1} ≤ 2 + (2v)^{b−1} for all b > 0 and v > 0, we have

∫_{1/2}^∞ x^{b−1} e^{−pwx} dx = e^{−pw/2} ∫_0^∞ (1/2 + v)^{b−1} e^{−pwv} dv ≤ e^{−pw/2} (2 (pw)^{−1} + 2^{b−1} Γ(b) (pw)^{−b}) = o(w^{−b}).

We have thus shown that

∫_0^{1/2} x^{b−1} e^{−pwx} dx ∼ Γ(b) (pw)^{−b}.

Finally, to justify dropping the (1 − x)^{a−1} factor, note that |1 − (1 − x)^{a−1}| ≤ max{a, 2} x for all a > 0 and x ∈ (0, 1/2), and so

(1/B(a, b)) ∫_0^{1/2} x^{b−1} |1 − (1 − x)^{a−1}| e^{−pwx} dx ≤ (max{a, 2}/B(a, b)) ∫_0^∞ x^b e^{−pwx} dx = max{a, 2} Γ(b + 1) / (B(a, b) (pw)^{b+1}) = o(w^{−b}).

Since Γ(b)/B(a, b) = Γ(a + b)/Γ(a), combining the displays above shows that (3) ∼ (4).

Asymptotic equivalence of (2) and (3). We need to show that, as w → ∞,

|P(B^p (w + E) > w) − P(B ≥ 1 − E/(pw))| = o(w^{−b}).   (5)

Note that B^p (w + E) > w if and only if B > (1 + E/w)^{−1/p}. By convexity we have (1 + x)^{−1/p} ≥ 1 − x/p for all x > 0, and a second-order Taylor expansion gives (1 + x)^{−1/p} ≤ 1 − x/p + ((p+1)/(2p²)) x² for all x > 0. Consequently, the left-hand side of (5) is at most

P(1 − E/(pw) ≤ B ≤ 1 − E/(pw) + ((p+1)/(2p²)) (E/w)², E ≤ pw/(p+1)) + P(E > pw/(p+1)).

We know that P(E > pw/(p+1)) = e^{−pw/(p+1)}, so we only have to deal with the first term in the sum above. Write u = E/(pw); on the event {E ≤ pw/(p+1)} we have u ≤ 1/(p+1), and the interval above has length ((p+1)/2) u² ≤ u/2. Using simple estimates on the density of B near 1, we have

P(1 − u ≤ B ≤ 1 − u + ((p+1)/2) u²) ≤ C u^{b+1}   for all u ∈ (0, 1/(p+1)),

where C = C(a, b, p) is a constant. Taking expectation over E then gives the bound C E[(E/(pw))^{b+1}] = C Γ(b + 2) (pw)^{−(b+1)} = o(w^{−b}), which concludes the proof of (5).
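The equivalence of (3) and (4) can also be observed numerically. Conditioning on B and using P(E ≥ s) = e^{−s}, the probability in (3) equals (1/B(a,b)) ∫_0^1 x^{b−1} (1 − x)^{a−1} e^{−pwx} dx, which a simple midpoint rule can evaluate and compare against Γ(a+b)/Γ(a) · (pw)^{−b}. A sketch with the illustrative choices a = b = p = 2 (ours, not from the paper):

```python
import math

a, b, p = 2.0, 2.0, 2.0
beta_ab = math.gamma(a) * math.gamma(b) / math.gamma(a + b)  # B(a, b)

def prob_3(w, n=200_000):
    """Midpoint-rule evaluation of P(B >= 1 - E/(pw))
    = (1/B(a,b)) * int_0^1 x^{b-1} (1-x)^{a-1} e^{-pwx} dx."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += x ** (b - 1.0) * (1.0 - x) ** (a - 1.0) * math.exp(-p * w * x)
    return s * h / beta_ab

def expr_4(w):
    """The claimed asymptotic equivalent: Gamma(a+b)/Gamma(a) * (pw)^{-b}."""
    return math.gamma(a + b) / math.gamma(a) * (p * w) ** (-b)

for w in (10.0, 50.0, 200.0):
    print(w, prob_3(w) / expr_4(w))  # ratio tends to 1 as w grows
```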
Asymptotic equivalence of (1) and (2). We justify this via a direct calculation, though there might be a more elegant way to do this. To abbreviate notation, let r := c/p. By definition we have

P(B^p W > w | W > w) = (w^{r−1} e^{−w} / (Γ(r) P(W > w))) ∫_0^∞ (1 + z/w)^{r−1} e^{−z} P(B^p (w + z) > w) dz.

For r = 1 this is exactly equal to P(B^p (w + E) > w). Since P(W > w) ∼ f_W(w) as w → ∞, the fraction in front of the integral in the display above goes to 1 as w → ∞, so what remains to show is that

∫_0^∞ |(1 + z/w)^{r−1} − 1| e^{−z} P(B^p (w + z) > w) dz = o(w^{−b}).

We partition the integral into two parts: from 0 to √w, and from √w to ∞. For the first part note that for r ≥ 1 and z ∈ (0, √w) we have (1 + z/w)^{r−1} − 1 ≤ e^{r/√w} − 1, while for r ∈ (0, 1) and z ∈ (0, √w) we have 1 − (1 + z/w)^{r−1} ≤ 1 − (1 + 1/√w)^{−1} ≤ 1/√w. In both cases the integral from 0 to √w is at most o(1) · ∫_0^∞ e^{−z} P(B^p (w + z) > w) dz = o(1) · P(B^p (w + E) > w) = o(w^{−b}).

For the second part we can bound the factor P(B^p (w + z) > w) in the integral by 1. For r ∈ (0, 1) we have 1 − (1 + z/w)^{r−1} ≤ 1 − (1 + z/w)^{−1} ≤ z/w, and so we get the upper bound

∫_{√w}^∞ (z/w) e^{−z} dz = ((√w + 1)/w) e^{−√w} = o(w^{−b}).

For r ≥ 1 we use the bound (1 + z/w)^{r−1} − 1 ≤ e^{rz/w} to obtain the upper bound

∫_{√w}^∞ e^{(r/w − 1) z} dz = (1/(1 − r/w)) e^{(r/w − 1)√w} ≤ 2 e^{−√w/2} = o(w^{−b}),

where we assumed that w ≥ 2r.
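The driver of this last equivalence is that the weight correction (1 + z/w)^{r−1} − 1 vanishes as w → ∞. One can watch this happen numerically by evaluating ∫_0^∞ |(1 + z/w)^{r−1} − 1| e^{−z} dz, which is an upper bound for the error term once P(B^p (w + z) > w) is bounded by 1; it decays roughly like |r − 1|/w. A sketch (r = 2.5 and the truncation point are arbitrary illustrative choices of ours):

```python
import math

def weight_error(w, r=2.5, z_max=50.0, n=100_000):
    """Midpoint-rule approximation of int_0^inf |(1+z/w)^{r-1} - 1| e^{-z} dz,
    truncated at z_max, beyond which the e^{-z} factor makes the tail negligible."""
    h = z_max / n
    s = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        s += abs((1.0 + z / w) ** (r - 1.0) - 1.0) * math.exp(-z)
    return s * h

for w in (10.0, 100.0, 1000.0):
    print(w, weight_error(w))  # decays roughly like (r - 1)/w
```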