On the tails of the limiting QuickSort density

We give upper and lower asymptotic bounds for the left tail and for the right tail of the continuous limiting QuickSort density f that are nearly matching in each tail. The bounds strengthen results from a paper of Svante Janson (2015) concerning the corresponding distribution function F. Furthermore, we obtain similar bounds on absolute values of derivatives of f of each order.


Introduction
Let X_n denote the (random) number of comparisons when sorting n distinct numbers using the algorithm QuickSort. Clearly X_0 = 0, and for n ≥ 1 we have the recurrence relation

  X_n =_L X_{U_n − 1} + X*_{n − U_n} + n − 1,

where =_L denotes equality in law (i.e., in distribution); X_k =_L X*_k; the random variable U_n is uniformly distributed on {1, . . . , n}; and U_n, X_0, . . . , X_{n−1}, X*_0, . . . , X*_{n−1} are all independent. It is well known that EX_n = 2(n + 1)H_n − 4n, where H_n := Σ_{k=1}^{n} k^{−1} is the nth harmonic number, and (from a simple exact expression) that Var X_n = (1 + o(1))(7 − 2π²/3)n². To study distributional asymptotics, we first center and scale X_n, setting

  Z_n := (X_n − EX_n)/n.

Using the Wasserstein d_2-metric, Rösler [8] proved that Z_n converges weakly to a limit Z as n → ∞. Using a martingale argument, Régnier [7] proved that the slightly renormalized (n/(n + 1)) Z_n converges to Z in L^p for every finite p, and thus in distribution; equivalently, the same conclusions hold for Z_n. The random variable Z has everywhere finite moment generating function, with EZ = 0 and Var Z = 7 − 2π²/3. Moreover, Z satisfies the distributional identity

  Z =_L UZ + (1 − U)Z* + g(U).

On the right, Z* =_L Z; U is uniformly distributed on (0, 1); U, Z, Z* are independent; and g(u) := 2u ln u + 2(1 − u) ln(1 − u) + 1.
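As an illustrative aside (not part of the paper), the mean recurrence implied by the identity above, E_n = (n − 1) + (2/n) Σ_{k=0}^{n−1} E_k, can be checked numerically against the closed form EX_n = 2(n + 1)H_n − 4n. The function names below are ours:

```python
def quicksort_mean(nmax):
    # E[X_n] via the expectation of the recurrence
    # X_n =_L X_{U_n - 1} + X*_{n - U_n} + (n - 1):
    # E_n = (n - 1) + (2/n) * sum_{k=0}^{n-1} E_k.
    E = [0.0] * (nmax + 1)
    s = 0.0  # running sum E_0 + ... + E_{n-1}
    for n in range(1, nmax + 1):
        E[n] = (n - 1) + 2.0 * s / n
        s += E[n]
    return E

def closed_form(n):
    # E[X_n] = 2(n+1)H_n - 4n, with H_n the nth harmonic number
    H = sum(1.0 / k for k in range(1, n + 1))
    return 2 * (n + 1) * H - 4 * n

E = quicksort_mean(1000)
print(E[1000], closed_form(1000))
```

The two values agree to within floating-point error, confirming that the recurrence and the closed form encode the same mean.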
Further, the distributional identity, together with the condition that EZ (exists and) vanishes, characterizes the limiting QuickSort distribution; this was first shown by Rösler [8] under the additional condition that Var Z < ∞, and later in full generality by Fill and Janson [1].
Fill and Janson [2] derived basic properties of the limiting QuickSort distribution L(Z). In particular, they proved that L(Z) has a (unique) continuous density f which is everywhere positive and infinitely differentiable, and, for every k ≥ 0, that f^{(k)} is bounded and enjoys superpolynomial decay in both tails; that is, for each p ≥ 0 and k ≥ 0 there exists a finite constant C_{p,k} such that |f^{(k)}(x)| ≤ C_{p,k} |x|^{−p} for all x ≠ 0.

In this paper, we study asymptotics of f(−x) and f(x) as x → ∞. Janson [3] concerned himself with the corresponding asymptotics for the distribution function F and wrote this: "Using non-rigorous methods from applied mathematics (assuming an as yet unverified regularity hypothesis), Knessl and Szpankowski [4] found very precise asymptotics of both the left tail and the right tail." Janson specifies these Knessl–Szpankowski asymptotics for F in his equations (1.6)–(1.7). But Knessl and Szpankowski actually did more, producing asymptotics for f, which Janson integrated to obtain the corresponding asymptotics for F. We use the same abbreviation γ := (2 − 1/ln 2)^{−1} as Janson [3]. With the same constant c_3 as in (1.6) of [3], the density analogues of (1.6) (omitting the middle expression) and (1.7) of [3] are the asymptotics, as x → ∞, that Knessl and Szpankowski [4] find for the left tail and for the right tail, which we label (1.1) and (1.2), respectively. We will come as close to these non-rigorous results for the density as Janson [3] does for the distribution function, and we also obtain similar asymptotic bounds for tail suprema of absolute values of derivatives of the density. Although our asymptotics for f imply the asymptotics for F in Janson's main Theorem 1.1, it is important to note that in the case of upper bounds (but not lower bounds) on f we use his results in the proofs of ours.
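For concreteness, the constant γ and the centering property E g(U) = 0 (consistent with EZ = 0 in the fixed-point identity) are easy to check numerically. The following sketch is purely illustrative; the midpoint-rule quadrature is ours:

```python
import math

# gamma := (2 - 1/ln 2)^(-1), the constant in the left-tail asymptotics;
# numerically gamma is about 1.794.
gamma = 1.0 / (2.0 - 1.0 / math.log(2.0))

def g(u):
    # g(u) = 2u ln u + 2(1-u) ln(1-u) + 1, from the distributional identity
    return 2 * u * math.log(u) + 2 * (1 - u) * math.log(1 - u) + 1

# Midpoint rule for E g(U) = integral of g over (0,1); exact value is 0,
# since the integral of u ln u over (0,1) equals -1/4.
N = 100000
mean_g = sum(g((i + 0.5) / N) for i in range(N)) / N
print(gamma, mean_g)
```

Taking expectations in Z =_L UZ + (1 − U)Z* + g(U) gives EZ = EZ + E g(U), so E g(U) must vanish; the quadrature confirms this.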
The next two theorems are our main results. The non-rigorous results of Knessl and Szpankowski [4] suggest that the asymptotics as x → ∞ obtained by repeated formal differentiation of (1.1)–(1.2) are correct for every k ≥ 0; but these remain conjectures for now. Unfortunately, for k ≥ 1 we do not even know how to identify rigorously the asymptotic signs of f^{(k)}(∓x)! Concerning k = 1, it has long been conjectured that f is unimodal. This would of course imply that f′(−x) > 0 and f′(x) < 0 for sufficiently large x.
As already mentioned, Fill and Janson [2] proved that C_{p,k} < ∞ for each p ≥ 0 and k ≥ 0. Our technique for proving the upper bounds in Theorems 1.1 and 1.2 is to use explicit bounds on the constants C_k := C_{0,k} together with the Landau–Kolmogorov inequality (see, for example, [9]).
Our paper is organized as follows. In Section 2 we deal with preliminaries: we recall an integral equation for f that is the starting point for our lower-bound results in Theorem 1.1, review the Landau–Kolmogorov inequality, and bound C_k explicitly in terms of k. Sections 3 and 4 derive the stated lower bounds on the left and right tails, respectively, of f using an iterative approach similar to that of Janson [3] for the distribution function. In Section 5 we establish the left-tail results claimed in (1.3) and (1.6). In Section 6 we establish the right-tail results claimed in (1.4) and (1.7).
From the distributional identity for Z, the density f satisfies the integral equation

  f(x) = ∫_0^1 u^{−1} ∫_{−∞}^{∞} f(y) f((x − g(u) − (1 − u)y)/u) dy du.

This integral equation will be used in the proofs of our lower-bound results for f.

Landau-Kolmogorov inequality.
For an overview of the Landau–Kolmogorov inequality, see [6, Chapter 1]. Here we state a version of the inequality well suited to our purposes; see [5] and [9, display (21) and the display following (17)].
Lemma 2.1. Let n ≥ 2, and suppose h : R → R has n derivatives. If h and h^{(n)} are both bounded, then for 1 ≤ k < n so is h^{(k)}. Moreover, there exist constants c_{n,k} (not depending on h) such that, for every x ∈ R, the supremum norm ‖·‖_x defined at (1.5) satisfies

  ‖h^{(k)}‖_x ≤ c_{n,k} ‖h‖_x^{1 − (k/n)} ‖h^{(n)}‖_x^{k/n}.

Using these two results, it is now easy to bound f^{(k)}.
Proof. For every integer k ≥ 0 we obtain the claimed bound, as desired.
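The n = 2, k = 1 case of Lemma 2.1 is the classical Landau inequality ‖h′‖² ≤ 2 ‖h‖ ‖h″‖ on the real line. As a purely illustrative sketch (not part of the proof), h(x) = sin(wx) shows the inequality is essentially sharp, since ‖h‖ = 1, ‖h′‖ = w, ‖h″‖ = w²:

```python
# Landau inequality on the real line (n = 2, k = 1 case of
# Landau-Kolmogorov): ||h'||^2 <= 2 ||h|| ||h''||.
# For h(x) = sin(w x): ||h|| = 1, ||h'|| = w, ||h''|| = w^2,
# so lhs and rhs differ only by the constant factor 2.
checks = []
for w in [0.5, 1.0, 3.0, 10.0]:
    lhs = w ** 2               # ||h'||^2
    rhs = 2.0 * 1.0 * w ** 2   # 2 * ||h|| * ||h''||
    checks.append(lhs <= rhs)
print(checks)
```

Because both sides scale the same way in w, no single frequency can violate the inequality, which is why interpolating a middle derivative between h and h^{(n)} costs only a fixed constant c_{n,k}.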

Left Tail Lower Bound on f
Our iterative approach to finding the left-tail lower bound on f in Theorem 1.1 is similar to the method used by Janson [3] for F. The following lemma gives us an inequality that is essential in this section; as we shall see, it is established from a recurrence inequality. For z ≥ 0 define m_z := min_{0 ≤ x ≤ z} f(−x).

We delay the proof of Lemma 3.1 in order to show next how the lemma leads us to the desired lower bound in (1.3) on the left tail of f, using the same technique as in [3] for F.

Proof. By Lemma 3.1, for x > a we have the stated lower bound, provided ε is sufficiently small that 2ε³ m_{2a} < 1. As in Janson [3], we pick ε = x^{−1/2} and, setting γ = (2 − 1/ln 2)^{−1}, get 1/a = γ ln 2 + O(x^{−1}) and the claimed bound on ln f(−x).

Since f is everywhere positive, we can get a lower bound on f(−z − a) by restricting the range of integration in (3.1). Observe that when ε is small enough and u is in the range of integration, we have y + z ≤ ε²; so we conclude that y + z ≤ (−a − g(u))/(1 − u).

Next, we claim that (−z − a − g(u) − (1 − u)y)/u ≤ 0 in this region of integration if z is large enough. Here is a proof. Write (−z − a − g(u) − (1 − u)y)/u = −z + δ with δ ≥ 0. Then in the region of integration we have 0 ≤ y + z = −a − g(u) − uδ, where the last inequality can be verified to hold for ε < 1/10. That means that if we pick z large enough, for example z ≥ 20ε², then (−z − a − g(u) − (1 − u)y)/u = −z + δ will be negative. It can also be verified that a ≥ 30ε² for ε < 1/10.

Now consider ε < 1/10, an integer k ≥ 3, and z ∈ [(k − 2)a, (k − 1)a], and note that 2ε³ < 1 and m_{(k−1)a} ≤ 1 by definition. Combining these two facts, we conclude that for x ∈ [0, ka] we have f(−x) ≥ 2ε³ m²_{(k−1)a}. This implies the recurrence inequality m_{ka} ≥ 2ε³ m²_{(k−1)a}. The desired inequality follows by iterating.
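The doubling in the recurrence m_{ka} ≥ 2ε³ m²_{(k−1)a} is what produces a doubly exponentially small lower bound. The sketch below (our own illustration, with hypothetical parameter values) iterates the recurrence in log form and watches −ln m grow geometrically with ratio 2:

```python
import math

# Iterate t_k = 2*eps**3 * t_{k-1}**2, a stand-in for the recurrence
# m_{ka} >= 2 eps^3 m_{(k-1)a}^2, working with L_k = ln t_k to avoid
# underflow: L_k = ln(2 eps^3) + 2 L_{k-1}.
eps = 0.05              # illustrative value; the proof takes eps = x**(-1/2)
logc = math.log(2 * eps ** 3)
L = math.log(0.1)       # hypothetical starting value (not from the paper)
neglogs = []
for k in range(10):
    L = logc + 2 * L
    neglogs.append(-L)  # -ln t_k
# successive ratios of -ln t_k approach 2: doubly exponential decay
ratios = [neglogs[i + 1] / neglogs[i] for i in range(len(neglogs) - 1)]
print(ratios[0], ratios[-1])
```

Solving the log recurrence exactly gives −ln t_k = 2^k (−ln t_0 − ln c) + ln c, so −ln t_k grows like a constant times 2^k, which is the doubly exponential shape of the left tail.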

Right Tail Lower Bound on f
Once again we use an iterative approach to derive our right-tail lower bound on f in Theorem 1.1. The following key lemma is established from a recurrence inequality. For z ≥ 0 define m̃_z := min_{0 ≤ x ≤ z} f(x).
Lemma 4.1. Suppose b ∈ [0, 1) and that δ ∈ (0, 1/2) is sufficiently small that g(δ) ≥ b. Then the conclusion holds for any integer k ≥ 1 satisfying the stated condition.

We delay the proof of Lemma 4.1 in order to show next how the lemma leads us to the desired lower bound in (1.4) on the right tail of f.
Proof. Given x ≥ 3 suitably large, we will show that we can apply Lemma 4.1 with suitably chosen b > 0 and δ, and with k = ⌈(x − 2)/b⌉ ≥ 2; then, by the lemma, we will use (4.1) to establish the proposition. We make the same choices of δ and b as in [3, Sec. 4], namely, δ = 1/(x ln x) and b = 1 − (2/ln x). To apply Lemma 4.1, we need to check that x ln x (ln x + ln ln x) ≥ 1 − 4/x, where the elementary first inequality is (4.1) in [3]. Finally, we use (4.1) to establish the proposition, as claimed.

Now we go back to prove Lemma 4.1, but first we need two preparatory results.
Since f is positive everywhere, a lower bound on f(z + b) can be achieved by shrinking the region of integration; the equality then comes from a change of variables. We next claim that the interval of integration for ξ contains (0, z − 1), and then the desired result follows. Indeed, suppose u ∈ (0, δ) and ξ ∈ (0, z − 1). Then the needed upper bound holds because b ≥ 0 and g(u) ≤ 1; and, because g(u) ≥ g(δ) ≥ b and z ≤ [g(δ) − b]/δ, the needed lower bound holds as well.
We are now ready to complete this section by proving Lemma 4.1.
Proof of Lemma 4.1. By iterating the recurrence inequality of Lemma 4.4, it follows that the stated product bound holds. Lemma 4.1 then follows since b < 1.
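The parameter choices δ = 1/(x ln x) and b = 1 − 2/ln x made in the proof above can be sanity-checked numerically against the hypothesis g(δ) ≥ b of Lemma 4.1. This sketch is our own illustration, not part of the proof:

```python
import math

def g(u):
    # g(u) = 2u ln u + 2(1-u) ln(1-u) + 1, as in the fixed-point identity
    return 2 * u * math.log(u) + 2 * (1 - u) * math.log(1 - u) + 1

# For a few large x, verify that delta = 1/(x ln x) lies in (0, 1/2),
# that b = 1 - 2/ln x lies in [0, 1), and that g(delta) >= b,
# i.e. the hypothesis of Lemma 4.1 is satisfied.
ok = []
for x in [10.0, 100.0, 1000.0, 1e6]:
    delta = 1.0 / (x * math.log(x))
    b = 1.0 - 2.0 / math.log(x)
    ok.append(0 < delta < 0.5 and 0 <= b < 1 and g(delta) >= b)
print(ok)
```

The check succeeds because g(δ) = 1 − O(δ ln(1/δ)) = 1 − O(1/x), while 1 − b = 2/ln x shrinks much more slowly, so g(δ) ≥ b comfortably for all large x (here already from x = 10 on, since ln x > 2).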

Left Tail Bounds for Tail Suprema of Absolute Derivatives
From Section 3 (respectively, Section 4) we know the left-tail lower bound of (1.3) (respectively, the right-tail lower bound of (1.4)). In this section we establish the left-tail bounds of (1.3) and (1.6), and in the next section we do the same for the right tails.

5.1. Lower bounds.
As discussed in Remark 1.3(a), in light of the main theorem of Janson [3] and our Section 3, to finish our treatment of left-tail lower bounds we need only prove the lower bound in (1.6) for fixed k ≥ 2. For that, choose any x and apply the Landau–Kolmogorov Lemma 2.1, bounding the function F′(·) = −f(−·) in terms of the functions F and F^{(k)}. But recall the bounds of (1).
Plugging in these bounds, we obtain the desired result.

5.2. Upper bounds.
In this subsection we prove the following stronger Proposition 5.1, which implies that λ_k is non-increasing in k ≥ 0 and therefore that λ_k < ∞ for every k. In preparation for the proof, see the definition of µ_j in (5.2), and note that if µ_j ≤ 0 for j = 0, . . . , k − 1, then λ_j is non-increasing for j = 0, . . . , k; in particular, (5.1) then holds.
Proposition 5.1. For each fixed k ≥ 0 the bound in (5.2) holds.

Proof. We proceed by induction on k. Choosing any x and applying the Landau–Kolmogorov inequality (Lemma 2.1) to the function h = F^{(k)}, we find the corresponding bound for n ≥ 2. We can bound the norm ‖F^{(k+n)}‖_x using Proposition 2.4 simply by a_{n,k} := 2^{(k+n−1)² + 10(k+n−1) + 17}.
Thus the argument of the lim sup in (5.2) can be bounded above by

  −ln(1 − 1/n) − 2 − ln 4 + ln n + n^{−1} ln a_{n,k} − ln ‖F^{(k)}‖_x.

By Janson's bound giving λ_0 < ∞ if k = 0, and by induction on k if k ≥ 1, we know that (5.1) holds. Thus, letting n ≡ n(x) → ∞ with n(x) = o(e^{γx}), the claimed inequality follows.
Remark 5.2. According to Remark 1.3, it is natural to conjecture that for every k the lim sup in (5.1) is a limit and equals −c_3, and hence that the lim sup in (5.2) is a vanishing limit.

Right Tail Bounds for Tail Suprema of Absolute Derivatives
In this section we establish the right-tail bounds of (1.4) and (1.7).
6.1. Lower bounds.
As discussed in Remark 1.3(a), in light of the main theorem of [3] and our Section 4, to finish our treatment of right-tail lower bounds we need only prove the lower bound in (1.7) for fixed k ≥ 2. For that, proceed using the Landau–Kolmogorov Lemma 2.1 as in Section 5.1, and recall the corresponding bounds. Plugging in these bounds, we obtain the desired result.

6.2. Upper bounds.
We claim, for every k ≥ 0, that ρ_k < ∞ (6.1); note also that the right-tail upper bound in (1.4) of Theorem 1.1 follows from ρ_1 < ∞. As discussed in Remark 1.3(a), (6.1) is known for k = 0 from Janson [3]. So to finish our treatment of right-tail upper bounds in Theorems 1.1–1.2 we need only prove (6.1) for k ≥ 1.
In this subsection we prove the next stronger Proposition 6.1, a right-tail analogue of Proposition 5.1; it then follows, by choosing r(x) ≡ x, that ρ_k is non-increasing in k ≥ 0 and therefore that ρ_k < ∞ for every k. In the proof, we again bound the norm ‖F^{(k+n)}‖_x by (5.3). Thus the argument of the lim sup in (6.2) can be bounded above by

  r(x)^{−1} [n^{−1}(−ln ‖F^{(k)}‖_x) + 2 − ln 4 + ln n + n^{−1} ln a_{n,k}].
By the right-tail lower bound for ‖F^{(k)}‖_x in (1.7) (established in the preceding subsection), we know that −ln ‖F^{(k)}‖_x ≤ x ln x + (k ∨ 1) x ln ln x + O(x) = (1 + o(1)) x ln x. Thus, letting n ≡ n(x) satisfy n(x) = ω((x ln x)/r(x)) and n(x) = o(r(x)), the claimed inequality follows.

Remark 6.2. According to Remark 1.3, it is natural to conjecture that for every k we have ρ_k = −∞ and that the lim sup in (6.2) with r(x) ≡ x is a vanishing limit.