On Efficiency of the Plug-in Principle for Estimating Smooth Integrated Functionals of a Nonincreasing Density

We consider the problem of estimating smooth integrated functionals of a monotone nonincreasing density $f$ on $[0,\infty)$ using the nonparametric maximum likelihood based plug-in estimator. We find the exact asymptotic distribution of this natural (tuning parameter-free) plug-in estimator, properly normalized. In particular, we show that the simple plug-in estimator is always $\sqrt{n}$-consistent, and is additionally asymptotically normal with zero mean and the semiparametric efficient variance for estimating a subclass of integrated functionals. Compared to the previous results on this topic (see e.g., Nickl (2008), Jankowski (2014), and Söhl (2015)) our results hold under minimal assumptions on the underlying $f$ --- we do not require $f$ to be (i) smooth, (ii) bounded away from $0$, or (iii) compactly supported. Further, when $f$ is the uniform distribution on a compact interval we explicitly characterize the asymptotic distribution of the plug-in estimator --- which now converges at a non-standard rate --- thereby extending the results in Groeneboom and Pyke (1983) for the case of the quadratic functional.


Introduction
Estimation of functionals of data generating distributions is a fundamental research problem in nonparametric statistics. Consequently, substantial effort has gone into understanding estimation of nonparametric functionals (e.g., linear, quadratic, and other "smooth" functionals) both under density and regression (specifically Gaussian white noise models) settings; a comprehensive snapshot of this huge literature can be found in Hall and Marron (1987), Bickel and Ritov (1988), Donoho and Nussbaum (1990), Fan (1991), Kerkyacharian and Picard (1996), Nemirovski (2000b), Laurent (1996), Cai and Low (2003, 2004, 2005), and the references therein. However, most of the above papers have focused on smoothness restrictions on the underlying (nonparametric) function class. In contrast, parallel results for purely shape-restricted function classes (e.g., monotonicity/convexity/log-concavity) are limited.
Estimation of a shape-restricted density is somewhat special among nonparametric density estimation problems. In shape-constrained problems classical maximum likelihood and/or least squares estimation strategies (over the class of all such shape-restricted densities) lead to consistent (see e.g., Grenander (1956), Groeneboom et al. (2001), Dümbgen and Rufibach (2009)) and adaptive estimators (see e.g., Birgé (1987), Kim et al. (2018)) of the underlying density. Consequently, the resulting procedures are tuning parameter-free; compare this with density estimation techniques under smoothness constraints that usually rely on tuning parameter(s) (e.g., bandwidth for kernel based procedures or resolution level for wavelet projections), the choice of which requires delicate care for the purposes of data adaptive implementations.
The simplest and most well-studied shape-restricted density estimation problem is that of estimating a nonincreasing density on [0, ∞). In this case, the most natural estimator of the underlying density is the Grenander estimator: the nonparametric maximum likelihood estimator (NPMLE), which also turns out to coincide with the least squares estimator; see e.g., Grenander (1956), Prakasa Rao (1969), Groeneboom (1985), Groeneboom and Jongbloed (2014). Even for this problem only a few papers exist that study nonparametric functional estimation; see e.g., Nickl (2008), Giné and Nickl (2016, Chapter 7), Jankowski (2014), and Söhl (2015). Moreover, these papers often assume additional constraints such as: (i) smoothness, (ii) compact support, and (iii) restrictive lower bounds on the underlying true density.
In this paper we study estimation of smooth integrated functionals of a nonincreasing density on [0, ∞) using simple (tuning parameter-free) plug-in estimators. We characterize the limiting distribution of the plug-in estimator in this problem, under minimal assumptions. Moreover, we show that the limiting distribution (of the plug-in estimator) is asymptotically normal with the efficient variance for a large class of functionals, again under minimal assumptions on the underlying density. To the best of our knowledge, this is the first time that such a large class of integrated functionals based on the Grenander estimator is treated in a unified fashion.

Problem formulation and a summary of our contributions
Suppose that X_1, ..., X_n are independent and identically distributed (i.i.d.) random variables from a nonincreasing density f on [0, ∞). Let P denote the distribution of each X_i and F the corresponding distribution function. The goal is to study estimation and uncertainty quantification of the functional τ defined as

τ(g, f) := ∫_0^∞ g(f(x), x) dx,    (1)

where g : R² → R is a known (smooth) function. Functionals of the form (1) belong to the class of problems considered by many authors, including Levit (1978), Bickel and Ritov (1988), Ibragimov and Khas'minskiȋ (1991), Birgé and Massart (1995), Kerkyacharian and Picard (1996), Laurent (1996), Laurent (1997), Nemirovski (2000a), etc., and include the monomial functionals ∫_0^∞ f^p(x) dx for p ≥ 2.
Of particular significance among them is the quadratic functional corresponding to p = 2, being related to other inferential problems in density models such as goodness-of-fit testing and construction of confidence balls (see e.g., Giné and Nickl (2016, Chapter 8.3) and the references therein for more details). Let f̂_n be the Grenander estimator in this problem (see Section 2 for its characterization). Then, a natural estimator of τ(g, f) is the simple plug-in estimator

τ(g, f̂_n) := ∫_0^∞ g(f̂_n(x), x) dx.    (2)

In this paper we characterize the asymptotic distribution of τ(g, f̂_n) for estimating τ(g, f), when g is assumed to be smooth; see Section 2.2 for the exact assumptions on g. Specifically, in Theorem 2.2, we show that the simple plug-in estimator τ(g, f̂_n) is √n-consistent, and we characterize its limiting distribution. Further, when F is strictly concave we show that τ(g, f̂_n) (properly normalized) is asymptotically normal with the (semiparametric) efficient variance (see Theorem 2.3). Moreover, we can consistently estimate this limiting variance, which readily yields asymptotically valid confidence intervals for τ(g, f). Compared to the previous results on this topic (see e.g., Nickl (2007), Nickl (2008), Jankowski (2014), Giné and Nickl (2016, Chapter 7), and Söhl (2015)) our results hold under less restrictive assumptions on the underlying density (see conditions (A1)-(A3) in Section 2.2); in particular, we do not assume that f is smooth, bounded away from 0, or compactly supported.
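As discussed in Section 2.3, f̂_n is the vector of left-hand slopes of the least concave majorant (LCM) of the empirical distribution function, so the plug-in estimator (2) reduces to a finite sum over the constant pieces of f̂_n. A minimal numerical sketch for the quadratic functional g(z, x) = z² (the helper names `grenander` and `plugin_quadratic` are ours, not from the paper):

```python
import numpy as np

def grenander(x):
    """Grenander estimator: left-hand slopes of the least concave majorant
    (LCM) of the empirical CDF.  Returns (knots, slopes): the estimator
    equals slopes[j] on the interval (knots[j], knots[j+1]]."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # upper convex hull of the points (0, 0), (X_(1), 1/n), ..., (X_(n), 1)
    hull = [(0.0, 0.0)]
    for i, xi in enumerate(x, start=1):
        p = (xi, i / n)
        # pop the last hull point while it lies on or below the chord
        # from the second-to-last hull point to p
        while len(hull) >= 2 and \
              (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-1][0]) <= \
              (p[1] - hull[-1][1]) * (hull[-1][0] - hull[-2][0]):
            hull.pop()
        hull.append(p)
    h = np.array(hull)
    return h[:, 0], np.diff(h[:, 1]) / np.diff(h[:, 0])

def plugin_quadratic(x):
    """Plug-in estimate of the quadratic functional: integral of f_hat^2."""
    knots, slopes = grenander(x)
    return float(np.sum(slopes ** 2 * np.diff(knots)))

rng = np.random.default_rng(0)
x = rng.exponential(size=5000)      # f(x) = e^{-x}, so integral f^2 = 1/2
knots, slopes = grenander(x)
assert np.all(np.diff(slopes) <= 1e-12)                   # f_hat nonincreasing
assert abs(np.sum(slopes * np.diff(knots)) - 1.0) < 1e-9  # integrates to one
print(plugin_quadratic(x))          # close to the true value 1/2
```

Note that the integral here is exact rather than numerical quadrature: f̂_n equals slopes[j] on (knots[j], knots[j+1]], so ∫ f̂_n² reduces to Σ_j slopes[j]² (knots[j+1] − knots[j]).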
An important subclass of the above considered functional class is

µ(h, f) := ∫_0^∞ h(f(x)) dx,    (3)

where h : R → R is a known (smooth) function.

imsart-generic ver. 2014/10/16 file: Eff-Gren-Arxiv-III.tex date: March 9, 2021

In Section 3 we study estimation of this functional in detail. The simple plug-in estimator of µ(h, f) is µ(h, f̂_n) := ∫_0^∞ h(f̂_n(x)) dx. For the class of functionals µ(h, f), under certain assumptions on the function h(·), we prove in Theorem 3.1 that the plug-in estimator is asymptotically normal with the semiparametric efficient variance, even when F is not strictly concave. In fact, the asymptotic normality even holds when f is piecewise constant. Further, in the special case when f is constant (i.e., f is uniform on a compact interval) we explicitly characterize the asymptotic distribution of the plug-in estimator µ(h, f̂_n), which now converges at a non-standard rate; see Theorem 3.2. This theorem extends the results in Groeneboom and Pyke (1983) beyond the quadratic functional. Finally, in Section 4 we focus on a special case of µ(h, f):

ν(h, f) := ∫_0^∞ h(f(x)) f(x) dx = P[h ∘ f],    (4)

where h : R → R is a known (smooth) function. As before, one may estimate ν(h, f) by substituting f̂_n for f in either expression in (4). It is easy to show that in this case both the above plug-in estimators are exactly the same (see Lemma 4.1). Further, we show in Remark 4.1 that both these estimators are also equivalent to the standard one-step bias-corrected estimator used in nonparametric statistics. As a consequence, we show that all these three estimators are asymptotically normal with the (semiparametric) efficient variance.
A general feature of the results for estimating "smooth" integrated nonparametric functionals is an elbow effect in the rate of estimation based on the smoothness of the underlying function class. For example, when estimating the quadratic functional, √n-efficient estimation can be achieved as soon as the underlying density has Hölder smoothness index β ≥ 1/4, whereas the optimal rate of estimation is n^{−4β/(4β+1)} (in root mean squared sense) for β < 1/4. Since monotone densities have one weak derivative, one can therefore expect √n-consistent efficient estimation of τ(g, f) (under suitable assumptions on g). However, standard methods of estimation under smoothness restrictions involve expanding the infinite dimensional function in a suitable orthonormal basis of L² and estimating an approximate functional created by truncating the basis expansion at a suitable point. The point of truncation determines the approximation error of the truncated functional as a surrogate for the actual functional, and depends on the smoothness of the functional of interest and the approximation properties of the orthonormal basis used. This truncation point is then balanced against the bias and variance of the resulting estimator and therefore directly depends on the smoothness of the function. Consequently, most optimal estimators proposed in the literature depend explicitly on knowledge of the smoothness indices. In this regard, our main result shows the optimality of the tuning parameter-free plug-in procedure based on the Grenander estimator (under less restrictive assumptions) for estimating integrated functionals of monotone densities.
We also mention that the results of this paper are directly motivated by the general questions tackled in Bickel and Ritov (2003) and Nickl (2007). In particular, Bickel and Ritov (2003) considered nonparametric function estimators that satisfy the desirable "plug-in property". An estimator is said to have the plug-in property if it is simultaneously minimax optimal in a function space norm (e.g., in L²-norm) and can also be "plugged in" to estimate specific classes of functionals efficiently (and/or at √n-rate). Subsequently, Bickel and Ritov (2003) demonstrated general principles for constructing such nonparametric estimators pertaining to linear functionals (allowing for certain extensions), with examples arising in various problems such as nonparametric regression, survival analysis, and density estimation. Indeed, in this paper we show that the Grenander estimator satisfies such a plug-in property for estimating smooth integrated functionals of monotone densities.
Note that such a plug-in property of the Grenander based estimator is also illustrated in the papers Söhl (2015) and Nickl (2007), where the authors study linear functionals and the entropy functional, crucially using a Kiefer-Wolfowitz type uniform central limit theorem (CLT) for the Grenander estimator. Such a proof strategy, however, comes with the price that more restrictive assumptions need to be made on the density f to derive such a CLT. In particular, the results in Söhl (2015) and Nickl (2007) assume a bounded domain and a uniform positive lower bound on the true density f, assumptions we forgo in this paper. Jankowski (2014) also considers estimation of linear functionals (and the entropy functional) based on the Grenander estimator. In Jankowski (2014, Theorem 3.1), the limiting distribution of the Grenander based plug-in estimator is derived for estimating a linear functional (under possible misspecification, i.e., f need not be nonincreasing). Jankowski (2014, Theorem 4.1) gives the limiting distribution of the plug-in estimator when estimating the entropy functional. However, all these results assume that the support of f is bounded, an assumption that we do not make in this paper.

Organization of the paper
The rest of the paper is organized as follows. In Section 2.1 we collect notation, definitions, and preliminary results that aid the presentation in the rest of the paper. In Section 2.2, we define the class of integrated functionals of interest along with our main assumptions. Section 2.3 discusses the construction of the plug-in estimator and the main results of this paper (Theorems 2.2 and 2.3), which give √n-consistency and a characterization of the asymptotic distribution of the estimator. Subsequently, in Sections 3 and 4 we focus on the special classes of functionals µ(h, f) and ν(h, f), and provide explicit results (Theorem 3.1 and Corollary 4.2) to show that our plug-in estimator is always √n-consistent, asymptotically normal, and semiparametric efficient. Since the case of the uniform density on a compact interval deserves a finer asymptotic expansion, we devote Section 3.1 to this purpose. Section 5 presents some numerical results that validate and illustrate our theoretical findings. In Section 6, we discuss some potential future research problems. Finally, all the technical proofs are relegated to Section 7.
2. Estimation of the Integrated Functional τ(g, f)

Preliminaries
In this subsection we introduce some notation and definitions to be used in the rest of the paper. Throughout we let R₊ denote the (compact) nonnegative real line [0, ∞]. The underlying probability space on which all random elements are defined is (Ω, F, P). Suppose that we have nonnegative random variables X_1, ..., X_n i.i.d. ∼ P having a nonincreasing density f and (concave) distribution function F. In the rest of the paper, we use the operator notation, i.e., for any function ψ : R → R, P[ψ] := ∫ ψ(x) dP(x) denotes the expectation of ψ(X) under the distribution X ∼ P. For any two functions γ₁, γ₂ : [0, ∞) → R with ∫ |γ₁(x)| dx < +∞, we view ∫ γ₂(x) d(γ₁(x)) as a Lebesgue-Stieltjes integral. Let P_n denote the empirical measure of the X_i's and let F_n denote the corresponding empirical distribution function, i.e., for x ∈ R,

F_n(x) := (1/n) Σ_{i=1}^n 1{X_i ≤ x}.

For a nonempty set T, we let ℓ∞(T) denote the set of uniformly bounded, real-valued functions on T. Of particular importance is the space ℓ∞(R₊), which we equip with the uniform metric ‖·‖∞ and the ball σ-field; see Pollard (1984, Chapters IV and V) for background.
Following Beare and Fang (2017), we now define the notion of the least concave majorant (LCM) operator. Given a nonempty convex set T ⊂ R₊, the LCM over T is the operator M_T that maps each θ ∈ ℓ∞(R₊) to the least concave function on T that majorizes θ on T. We write M as shorthand for M_{R₊} and refer to M as the LCM operator. Beare and Fang (2017, Proposition 2.1) shows that the LCM operator M is Hadamard directionally differentiable (see e.g., Beare and Fang (2017, Definition 2.2)) at any concave function θ ∈ ℓ∞(R₊) tangentially to C(R₊) (here C(R₊) denotes the collection of all continuous real-valued functions on R₊ vanishing at infinity). Let us denote the Hadamard directional derivative of M at the concave function θ ∈ ℓ∞(R₊) by M′_θ. The following result, due to Beare and Fang (2017, Proposition 2.1), characterizes the Hadamard directional derivative M′_θ, for concave θ ∈ ℓ∞(R₊), tangentially to C(R₊), and is crucial to our subsequent analysis.
Proposition 2.1 (Proposition 2.1 of Beare and Fang (2017)). The derivative M′_θ is uniquely determined as follows: for any ξ ∈ ℓ∞(R₊) and x ≥ 0, M′_θ ξ(x) equals the LCM of ξ over the maximal interval containing x on which θ is affine, evaluated at x; if θ is affine on no interval around x, this maximal interval is the singleton {x} and M′_θ ξ(x) = ξ(x).

Let F̂_n be the LCM of F_n, i.e., F̂_n is the smallest concave function that sits above F_n. Let B(·) be the Brownian bridge process on [0, 1]. By Donsker's theorem (see e.g., van der Vaart (1998, Theorem 19.3)) we know that the empirical process D_n := √n(F_n − F) converges in distribution in ℓ∞(R₊) to G := B ∘ F. As the delta method is valid for Hadamard directionally differentiable functions (see Shapiro (1991)), as a consequence of the above discussion it follows that (see Beare and Fang (2017, Theorem 2.1)), for any concave F, the stochastic process D̂_n := √n(F̂_n − F) converges in distribution to Ĝ := M′_F(G).

To give more intuition about the stochastic process Ĝ and the operator M′_F, let us consider two simple scenarios. First, suppose that F is strictly concave. Then Ĝ = M′_F G = G, as by Proposition 2.1 the maximal interval on which F is affine is the singleton {x} for every x > 0; also see Beare and Fang (2017, Proposition 2.2). Thus, D_n and D̂_n converge to the same limiting object in this case. Next, let us suppose that F is piecewise affine. In this case f is piecewise constant with jumps (say) at 0 < t_1 < t_2 < ... < t_k < ∞ and values v_1 > ... > v_k, for some integer k ≥ 2, i.e., for x > 0,

f(x) = Σ_{i=1}^k v_i 1_{(t_{i−1}, t_i]}(x),    (6)

where t_0 ≡ 0. Then

Ĝ(x) = M_{[t_{i−1}, t_i]} G(x) for x ∈ [t_{i−1}, t_i], i = 1, ..., k,    (7)

and Ĝ(x) = 0 for x > t_k, i.e., Ĝ is obtained by applying the LCM operator to G separately on each interval on which F is affine.
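On a finite grid the LCM operator is just the upper convex hull of the graph points, so its two defining properties (it majorizes its argument and acts as the identity on concave input) are easy to check numerically. A small sketch (the helper `lcm` is ours, not from Beare and Fang (2017)):

```python
import numpy as np

def lcm(xs, ys):
    """Least concave majorant of the points (xs[i], ys[i]), evaluated at xs
    (xs strictly increasing): upper convex hull + linear interpolation."""
    hull = [(xs[0], ys[0])]
    for p in zip(xs[1:], ys[1:]):
        # pop the last hull point while it lies on or below the chord
        # from the second-to-last hull point to p
        while len(hull) >= 2 and \
              (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-1][0]) <= \
              (p[1] - hull[-1][1]) * (hull[-1][0] - hull[-2][0]):
            hull.pop()
        hull.append(p)
    hx, hy = map(np.array, zip(*hull))
    return np.interp(xs, hx, hy)

xs = np.linspace(0.0, 2.0, 201)
theta = np.minimum(xs, 0.5 + 0.5 * xs)     # concave (minimum of two lines)
assert np.allclose(lcm(xs, theta), theta)  # M is the identity on concave input

xi = theta + 0.05 * np.sin(20 * xs)        # a non-concave perturbation
m = lcm(xs, xi)
assert np.all(m >= xi - 1e-9)              # M(xi) majorizes xi
slopes = np.diff(m) / np.diff(xs)
assert np.all(np.diff(slopes) <= 1e-9)     # M(xi) is concave
```

The same hull computation, applied to the empirical CDF, yields F̂_n and (via its slopes) the Grenander estimator.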

Assumptions
In this subsection we state the assumptions needed for our main results.
(A1) The true underlying density f : [0, ∞) → [0, ∞) is nonincreasing and bounded.

(A2) We assume that, for some α ∈ (0, 1], F satisfies a polynomial-type tail decay condition (see the discussion below).

(A3) Let S denote the support of f. The function g in (1) satisfies the following conditions: (i) the integral defining τ(g, f) in (1) exists and is finite; (ii) g(·, x) ∈ C²([0, ∞)) for every x, where C²([0, ∞)) denotes the class of all twice continuously differentiable functions on [0, ∞).
(iii) Let ġ and g̈ denote the first and second derivatives of g(·, ·) with respect to the first coordinate, i.e., for z, x ∈ [0, ∞), ġ(z, x) := ∂g(z, x)/∂z and g̈(z, x) := ∂²g(z, x)/∂z². We assume that: (a) ġ and g̈ satisfy suitable integrability conditions; and (b) x ↦ ġ(f(x), x) has finite total variation on S.

Let us briefly discuss assumptions (A1)-(A3). Conditions (A1) and (A2) on the true density f imply that f is bounded and satisfies a tail decay condition. Note that (A2) is satisfied if, for some m > 1, lim sup_{x→∞} x^m f(x) < ∞; see e.g., van de Geer (2000, Example 7.4.2). In particular, if f is bounded and compactly supported, conditions (A1) and (A2) immediately hold. Condition (A3)-(i) is unavoidable since it pertains to the existence of the functional we are interested in estimating. Parts (ii) and (iii,a) of (A3) pertain to the function g(·, ·) and quantify the notion of smooth functional we consider. Finally, part (iii,b) of (A3) is slightly stronger than the requirement that ∫ ġ(f(x), x) f(x) dx exists, which in turn appears in the information bound for estimating τ(g, f) and therefore needs to be finite for efficient estimators to exist. More precisely, the finite total variation of x ↦ ġ(f(x), x) allows us to define several Lebesgue-Stieltjes integrals with respect to ġ(f(x), x) without further conditions.
Observe that our assumption (A3) implies twice (right) differentiability of g(·, x) (for every x) at 0, and hence does not cover the entropy functional corresponding to g(z, x) = −z log z, unless f is assumed to be bounded away from 0.

2.3. √n-Consistency of τ(g, f̂_n)

As mentioned before, a natural estimator of f in this situation is f̂_n, the Grenander estimator: the (nonparametric) maximum likelihood estimator of a nonincreasing density on [0, ∞); see Grenander (1956). The Grenander estimator f̂_n is defined as the left-hand slope of F̂_n, the LCM of the empirical distribution function F_n. Note that F̂_n is piecewise affine and a valid concave distribution function by itself. Further, f̂_n is a piecewise constant (nonincreasing) density with possible jumps only at the data points. We study estimation and uncertainty quantification of τ(g, f) via the plug-in estimator τ(g, f̂_n). The following result shows that the plug-in estimator τ(g, f̂_n) is √n-consistent and explicitly characterizes its asymptotic distribution.
Theorem 2.2. Assume that conditions (A1)-(A3) hold. Then

√n ( τ(g, f̂_n) − τ(g, f) ) →_d Y := − ∫ Ĝ(x) d[ġ(f(x), x)].    (8)

We defer the proof of Theorem 2.2 to Section 7.1 and, before proceeding further, make a few comments regarding the implications of Theorem 2.2.
Remark 2.1. The integral in (8) may be restricted to S, as Ĝ is zero outside S, the support of the distribution P. This follows from the fact that Ĝ is a (measurable) random element in ℓ∞(R₊); see e.g., Beare and Fang (2017).

Remark 2.2 (Assumptions in Theorem 2.2). Note that Theorem 2.2 is valid under much weaker assumptions on f than previously studied in the literature. Although similar results have been derived in Nickl (2008) and Giné and Nickl (2016, Chapter 7), the derivations relied on further smoothness, lower bound, and compact support assumptions on f. In contrast, our only technical assumptions (A1) and (A2) are significantly less restrictive and only demand that f be bounded and have a polynomial-type tail decay.
Remark 2.3 (Connection to estimation in smoothness classes). For smoothness classes of functions it is well known that √n-consistent estimation of τ(g, f) is possible if f has more than 1/4 derivatives (see e.g., Birgé and Massart (1995)), and a typical construction of such an estimator proceeds via a bias-corrected one-step estimator; see e.g., Bickel and Ritov (1988), Robins et al. (2008), Robins et al. (2017). Since monotone densities have one weak derivative, one can expect √n-consistent estimation of τ(g, f). Theorem 2.2 shows that, in fact, the plug-in principle based on the Grenander estimator is √n-consistent with a distributional limit without any further assumptions on f. It will further be shown in Section 4 that for estimating the special functional ν(h, f), defined in (4), the one-step bias-corrected estimator is equivalent to the plug-in estimator. Consequently, this will provide more intuition on √n-consistent efficient estimation of ν(h, f), without further assumptions.
Remark 2.4 (Tuning parameter-free estimation). Being based on the Grenander estimator and the plug-in principle, the estimator τ(g, f̂_n) is completely tuning parameter-free, unlike those considered in Robins et al. (2008), Mukherjee et al. (2016), Mukherjee et al. (2017).
In the following result (proved in Section 7.2), we show that when F is strictly concave, the plug-in estimator τ(g, f̂_n) is √n-consistent and asymptotically normal with the (semiparametric) efficient variance.

Theorem 2.3. Suppose that F is strictly concave on [0, ∞). Then, under the assumptions of Theorem 2.2, we have

√n ( τ(g, f̂_n) − τ(g, f) ) →_d N(0, σ²_eff(g, f)),    (9)

where, for X ∼ P,

σ²_eff(g, f) := Var( ġ(f(X), X) ).    (10)

In general, when F is not strictly concave, we cannot say whether Y (described by (8)) admits a simpler description. Indeed, the integral ∫ Ĝ(x) d[ġ(f(x), x)] can be a highly non-linear functional of the Gaussian process G, and consequently there is no immediate reason to believe that the limiting distribution is normal. Surprisingly though, for the special class of functionals µ(h, f) introduced in (3), the corresponding limit is always Gaussian. The next section is therefore devoted to a complete understanding of this special case.
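For example, take g(z, x) = z² (the quadratic functional) and f the Exponential(1) density, so that ġ(z, x) = 2z and the efficient variance is Var(2f(X)) = 4∫e^{−3x}dx − (2∫e^{−2x}dx)² = 4/3 − 1 = 1/3, the value that also appears in the simulations of Section 5. A quick quadrature check:

```python
import numpy as np
from scipy.integrate import quad

# Var(g_dot(f(X), X)) for g(z, x) = z^2 (so g_dot(z, x) = 2z) and f(x) = e^{-x}
f = lambda t: np.exp(-t)
m1, _ = quad(lambda t: 2.0 * f(t) * f(t), 0, np.inf)         # E[2 f(X)]
m2, _ = quad(lambda t: (2.0 * f(t)) ** 2 * f(t), 0, np.inf)  # E[(2 f(X))^2]
var_eff = m2 - m1 ** 2
print(var_eff)   # -> 1/3
```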
Remark 2.5 (Linear functional). Theorem 2.2 covers the case of estimating an integrated linear functional ∫ w(x) f(x) dx, where w(·) is any weight function with finite variation on S. Indeed, this follows by taking g(z, x) = w(x) z in Theorem 2.2.
When F is strictly concave, Theorem 2.3 shows that the limiting distribution of the plug-in estimator is normal. When F has a density f which is piecewise constant of the form (6) and w(·) is continuous, Theorem 2.2 along with (7) yields the limiting distribution in closed form; cf. Jankowski (2014), where the author derives the limiting distribution for linear functionals (under model misspecification) and expresses the limit in a different form.

Plug-in Efficient Estimation of µ(h, f )
In this section we focus on the special case of estimating µ(h, f) (as defined in (3)), where h : [0, ∞) → R is assumed to be a known function satisfying:

(A4) h ∈ C²([0, ∞)), where C²([0, ∞)) denotes the class of all twice continuously differentiable functions on [0, ∞), along with suitable integrability conditions.

The natural plug-in estimator of µ(h, f) is µ(h, f̂_n) := ∫_0^∞ h(f̂_n(x)) dx. Theorem 2.2 can be used to obtain the √n-consistency and the asymptotic distribution of µ(h, f̂_n) by considering g(z, x) = h(z). Note that for h(·) satisfying (A4), the corresponding g automatically satisfies (A3) defined in Section 2.2. However, in this case we can simplify the form of the limiting distribution Y (see (8)) and prove that it is indeed a mean zero normal random variable with the efficient variance. Theorem 3.1 (proved in Section 7.3) formalizes this.
Remark 3.1 (Assumptions in Theorem 3.1). Assumption (A4) in Theorem 3.1 implies assumption (A3) when one considers g(z, x) = h(z) with h ∈ C²([0, ∞)). Thus, the √n-consistency and the existence of an asymptotic distribution for the plug-in estimator are automatic from Theorem 2.2. However, unlike Theorem 2.3, the requirement that F be strictly concave is no longer necessary for the validity of Theorem 3.1.
Remark 3.2 (Asymptotic efficiency). Observe that the limiting variance in Theorem 3.1 (see (11)) matches the nonparametric efficiency bound in this problem (Bickel et al. (1993), van der Vaart (1998), van der Vaart (2002), Laurent (1996)). Indeed, this efficiency bound corresponds to the case when the maximal tangent space in the model is the whole of L²(F). Although such a bound can be shown to hold for models where f has a lower bound on its slope (in absolute value) and is compactly supported, it is not immediate for more general monotone densities. Consequently, the statement that the "plug-in estimator µ(h, f̂_n) is asymptotically efficient for any concave F (under assumptions (A1), (A2), and (A4))" should be understood in the sense of simply achieving the efficiency bound in the nonparametric model. When the tangent space is L²(F), this is indeed the semiparametric efficiency bound.

Remark 3.3 (Monomial functionals). Of special interest is the case when h(x) = x^p, where p ≥ 2 is an integer. For example, the quadratic functional ∫ f² is obtained when p = 2. In this case, Theorem 3.1 yields the asymptotically efficient variance

σ²_eff(f) = p² { ∫_0^∞ f^{2p−1}(x) dx − ( ∫_0^∞ f^p(x) dx )² }    (12)

for any concave F.

Remark 3.4 (Construction of confidence intervals). As an important consequence of Theorem 3.1, one can construct asymptotically valid confidence intervals for the functional µ(h, f). In particular, one can estimate the efficient asymptotic variance σ²_eff as follows. Note that

σ²_eff(h, f) = µ(h₁, f) − (µ(h₂, f))²,

where h₁(z) := z (h′(z))² and h₂(z) := z h′(z) for z ≥ 0. As a result, if h ∈ C³([0, ∞)) then h₁, h₂ ∈ C²([0, ∞)) and Theorem 3.1 implies that σ̂²_n := µ(h₁, f̂_n) − (µ(h₂, f̂_n))² is a consistent estimator of σ²_eff(h, f). Consequently, µ(h, f̂_n) ± z_{α/2} σ̂_n/√n (with z_{α/2} being the 1 − α/2 quantile of the standard normal distribution) is an asymptotically valid 100 × (1 − α)% confidence interval for µ(h, f). Indeed, the additional assumption h ∈ C³([0, ∞)) (compared to h ∈ C²([0, ∞)) in Theorem 3.1) can be relaxed, since we only demand consistency of σ̂_n instead of asymptotic normality; we omit the details to avoid repetition. Finally, the validity of this confidence interval is pointwise (true for every fixed underlying f) and not uniform.

Remark 3.5 (Degenerate limit for the uniform density). When f is the uniform density on a compact interval, h′(f(X)) is almost surely constant, and hence the efficient variance in (11) vanishes, so Theorem 3.1 yields a degenerate limit. In Section 3.1 below we deal with this scenario (i.e., when f is uniform) and obtain a non-degenerate distributional limit, after proper normalization.
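For h(z) = z² this recipe gives h₁(z) = 4z³ and h₂(z) = 2z², and all the plug-in quantities reduce to finite sums over the constant pieces of f̂_n. A sketch of the resulting interval (the helpers `grenander` and `mu` are ours; 1.96 ≈ z_{0.025}):

```python
import numpy as np

def grenander(x):
    # left-hand slopes of the LCM of the empirical CDF (upper-hull pass)
    x = np.sort(np.asarray(x, dtype=float)); n = len(x)
    P = [(0.0, 0.0)]
    for i, xi in enumerate(x, 1):
        p = (xi, i / n)
        while len(P) >= 2 and (P[-1][1] - P[-2][1]) * (p[0] - P[-1][0]) \
                <= (p[1] - P[-1][1]) * (P[-1][0] - P[-2][0]):
            P.pop()
        P.append(p)
    k = np.array(P)
    return k[:, 0], np.diff(k[:, 1]) / np.diff(k[:, 0])

def mu(hfun, knots, slopes):
    # mu(h, f_hat) = sum_j h(v_j) * (length of the j-th constant piece)
    return float(np.sum(hfun(slopes) * np.diff(knots)))

rng = np.random.default_rng(1)
x = rng.exponential(size=20000)    # h(z) = z^2: estimating integral f^2 = 1/2
n = len(x)
knots, slopes = grenander(x)
est = mu(lambda z: z ** 2, knots, slopes)
# h1(z) = z h'(z)^2 = 4 z^3  and  h2(z) = z h'(z) = 2 z^2
var_hat = mu(lambda z: 4 * z ** 3, knots, slopes) \
          - mu(lambda z: 2 * z ** 2, knots, slopes) ** 2
half = 1.96 * np.sqrt(var_hat / n)
print(est - half, est + half)      # approximate 95% CI for integral f^2
```

For Exponential(1) data, var_hat should be close to the efficient variance 1/3 computed in Remark 3.3 (with p = 2).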

When f is the uniform density
As mentioned in Remark 3.5, Theorem 3.1 does not give a non-degenerate limit when f is the uniform density on the interval [0, c], for any c > 0. Indeed, as we will see, in such a case µ(h, f̂_n) has a faster rate of convergence. In this subsection we focus our attention on the case when f is the uniform density on [0, 1]; an obvious scaling argument can then be used to generalize the result to the uniform density on [0, c], for any c > 0.
Suppose that P is the uniform distribution on [0, 1], i.e., f(x) = 1_{[0,1]}(x) for x ∈ R. Groeneboom and Pyke (1983) derived the exact limiting behavior of ∫_0^1 f̂²_n(x) dx in this case, which yields the asymptotic distribution of the plug-in estimator of the quadratic functional (as ∫ f²(x) dx = 1), properly normalized. In the following theorem we extend this result to general smooth functionals of the Grenander estimator.
Theorem 3.2. Let X_1, ..., X_n be i.i.d. Uniform([0, 1]). Suppose that h ∈ C⁴([0, ∞)), where C⁴([0, ∞)) is the space of all four times continuously differentiable functions on [0, ∞). Then µ(h, f̂_n), properly centered and normalized, has a non-degenerate distributional limit at a non-standard (faster than √n) rate; the precise statement is given in Section 7.5.

The proof of the above result is given in Section 7.5 and closely follows the line of argument in Groeneboom and Pyke (1983). Indeed, as is somewhat apparent from the statement of the theorem, the proof relies on a Taylor expansion of h and thereafter controlling the error terms; a three-term Taylor expansion then yields the desired result.
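The non-standard rate can be glimpsed numerically: for Uniform(0, 1) samples the √n-scaled deviation of ∫ f̂_n² from ∫ f² = 1 is close to zero (the fluctuations live on a smaller scale), while under a strictly concave F such as Exponential(1) the √n-scaled deviation is non-degenerate. A rough sketch (the helper `grenander` is ours):

```python
import numpy as np

def grenander(x):
    # left-hand slopes of the LCM of the empirical CDF (upper-hull pass)
    x = np.sort(np.asarray(x, dtype=float)); n = len(x)
    P = [(0.0, 0.0)]
    for i, xi in enumerate(x, 1):
        p = (xi, i / n)
        while len(P) >= 2 and (P[-1][1] - P[-2][1]) * (p[0] - P[-1][0]) \
                <= (p[1] - P[-1][1]) * (P[-1][0] - P[-2][0]):
            P.pop()
        P.append(p)
    k = np.array(P)
    return k[:, 0], np.diff(k[:, 1]) / np.diff(k[:, 0])

rng = np.random.default_rng(2)
n = 5000

u = rng.random(n)                  # Uniform(0, 1): integral f^2 = 1
ku, su = grenander(u)
dev_unif = np.sqrt(n) * (np.sum(su ** 2 * np.diff(ku)) - 1.0)

e = rng.exponential(size=n)        # Exponential(1): integral f^2 = 1/2
ke, se = grenander(e)
dev_exp = np.sqrt(n) * (np.sum(se ** 2 * np.diff(ke)) - 0.5)

# dev_unif is typically much closer to zero than dev_exp
print(dev_unif, dev_exp)
```

Note that dev_unif is always positive here: by Cauchy-Schwarz, ∫ f̂_n² ≥ 1/X_(n) > 1 since f̂_n integrates to one over [0, X_(n)].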

Other Efficient Estimators of ν(h, f )
In this section we focus on the special case of estimating the functional ν(h, f) (as defined in (4)), where h : [0, ∞) → R is assumed to be a known function, and consider other (efficient) estimators of ν(h, f). Our plug-in estimator of ν(h, f) is

ν(h, f̂_n) := P̂_n[h ∘ f̂_n],    (15)

where P̂_n denotes the probability measure associated with the Grenander estimator (i.e., P̂_n has distribution function F̂_n). We first contrast the above plug-in estimator with two other natural estimators and argue that the special structure of the Grenander estimator implies their equivalence. Observe that another natural estimator of ν(h, f) is the empirical average P_n[h ∘ f̂_n] = (1/n) Σ_{i=1}^n h(f̂_n(X_i)). Indeed, Lemma 4.1 below shows that this natural estimator is exactly the same as the plug-in estimator P̂_n[h ∘ f̂_n].

Remark 4.1 (One-step estimator). There is however another interesting by-product of Lemma 4.1: the plug-in estimator is also equivalent to the classically studied one-step estimator, which is traditionally efficient for estimating integrated functionals, such as ν(h, f), over smoothness classes for f; see e.g., Bickel et al. (1993), van der Vaart (1998), van der Vaart (2002). In particular, a first order influence function of ν(h, f) at P is based on h ∘ f + f · (h′ ∘ f), and consequently a one-step estimator, obtained as a bias-corrected version of an f̃-based plug-in estimator (here f̃ is any estimator of f), is given by

ν(h, f̃) + P_n[h ∘ f̃ + f̃ · (h′ ∘ f̃)] − P̃[h ∘ f̃ + f̃ · (h′ ∘ f̃)],

where P̃ denotes the probability measure associated with f̃. However, Lemma 4.1 implies that when f̃ = f̂_n (the Grenander estimator), the correction term vanishes, as P_n[h ∘ f̂_n + f̂_n · (h′ ∘ f̂_n)] = P̂_n[h ∘ f̂_n + f̂_n · (h′ ∘ f̂_n)]. Consequently, in this case of estimating a monotone density, the first order influence function based one-step estimator, obtained as a bias-corrected version of a Grenander-based plug-in estimator, coincides with the simple plug-in estimator (15).
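The equivalence rests on the fact that F̂_n agrees with F_n at the knots of the LCM, so the empirical measure P_n and the fitted measure P̂_n assign identical mass to each constant piece of f̂_n. This is easy to verify numerically (the helper `grenander` and the test function h below are ours):

```python
import numpy as np

def grenander(x):
    # left-hand slopes of the LCM of the empirical CDF (upper-hull pass)
    x = np.sort(np.asarray(x, dtype=float)); n = len(x)
    P = [(0.0, 0.0)]
    for i, xi in enumerate(x, 1):
        p = (xi, i / n)
        while len(P) >= 2 and (P[-1][1] - P[-2][1]) * (p[0] - P[-1][0]) \
                <= (p[1] - P[-1][1]) * (P[-1][0] - P[-2][0]):
            P.pop()
        P.append(p)
    k = np.array(P)
    return k[:, 0], np.diff(k[:, 1]) / np.diff(k[:, 0])

rng = np.random.default_rng(3)
x = rng.exponential(size=2000)
knots, slopes = grenander(x)
h = lambda z: z / (1.0 + z)                 # any smooth test function

# P_n[h(f_hat)] : empirical average of h(f_hat(X_i));
# f_hat(x) = slopes[j] for x in (knots[j], knots[j+1]]
idx = np.searchsorted(knots, x, side="left") - 1
emp = float(np.mean(h(slopes[idx])))

# P_hat_n[h(f_hat)] = integral h(f_hat) f_hat dx : fitted-measure version
fit = float(np.sum(h(slopes) * slopes * np.diff(knots)))

print(emp - fit)                            # agrees up to rounding
```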
Since all three intuitive estimators in this problem turn out to be equivalent, one can expect them to be efficient as well (in a semiparametric sense). Indeed, this is the case; the following result, an immediate consequence of Theorem 3.1, shows this.

Simulation study
In this section we illustrate the distributional convergence of our plug-in estimator τ(g, f̂_n) (see (2)) for estimating τ(g, f). Let us consider the case of estimating the quadratic functional, i.e., τ(g, f) = ∫ f²(x) dx, which corresponds to g(z, x) = z². We first consider the case when the true distribution function F is strictly concave, so that the asymptotic distribution of τ(g, f̂_n) is given by (9) (in Theorem 2.3), where σ²_eff(f) is given in (12) (with p = 2). For the Q-Q plots in Figure 1 we took f to be the exponential density with parameter 1, for which τ(g, f) = 1/2 and σ²_eff(f) = 1/3. The plots show the sampling distribution of √n{τ(g, f̂_n) − τ(g, f)} as the sample size increases (n = 5000, 20000, 100000); the sampling distribution is approximated from 1000 independent replications. The sampling distribution of √n{τ(g, f̂_n) − τ(g, f)} seems to converge to N(0, σ²_eff(f)) as n increases, but even for moderate sample sizes there is a non-negligible bias. Although the Q-Q plots in Figure 1 show a visible deviation between the sampling quantiles and the limiting quantiles, the quantiles lie approximately on a straight line, suggesting that the sampling distribution is well-approximated by a normal (with a non-zero mean).
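The Monte Carlo exercise behind Figure 1 can be sketched as follows (with a smaller sample size and fewer replications than in the paper; the finite-sample bias noted above shows up as a non-zero mean):

```python
import numpy as np

def grenander(x):
    # left-hand slopes of the LCM of the empirical CDF (upper-hull pass)
    x = np.sort(np.asarray(x, dtype=float)); n = len(x)
    P = [(0.0, 0.0)]
    for i, xi in enumerate(x, 1):
        p = (xi, i / n)
        while len(P) >= 2 and (P[-1][1] - P[-2][1]) * (p[0] - P[-1][0]) \
                <= (p[1] - P[-1][1]) * (P[-1][0] - P[-2][0]):
            P.pop()
        P.append(p)
    k = np.array(P)
    return k[:, 0], np.diff(k[:, 1]) / np.diff(k[:, 0])

rng = np.random.default_rng(4)
reps, n = 300, 2000
devs = np.empty(reps)
for r in range(reps):
    x = rng.exponential(size=n)            # tau(g, f) = integral f^2 = 1/2
    k, s = grenander(x)
    devs[r] = np.sqrt(n) * (np.sum(s ** 2 * np.diff(k)) - 0.5)

# variance near the efficient value 1/3; the mean reflects the finite-n bias
print(devs.mean(), devs.var())
```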

Fig 1: The Q-Q plots compare the sampling distribution of √n{τ(g, f̂_n) − τ(g, f)}, for g(z, x) = z², against the limiting normal distribution N(0, σ²_eff(f)) (see Theorem 2.3) when F = Exponential(1). The three plots correspond to sample sizes n = 5000, 20000, and 100000. Drawn in red is the y = x line, which illustrates the difference between the finite sample distribution and the limiting distribution.

Fig 2: The same plots as in Figure 1 for the case when F is given in (17).
Although these findings broadly corroborate the theoretical result in Theorem 2.3, the asymptotic regime does seem to kick in quite slowly. Let us now consider the case when $f$ is piecewise constant. To fix ideas, as in Beare and Fang (2017), let us take $F$ as defined in (17). Then, from Beare and Fang (2017, Proposition 2.1), Theorem 3.1 gives the asymptotic distribution of $\tau(g,\hat f_n) = \mu(h,\hat f_n) = \int \hat f_n^2(x)\,dx$ (where $h(x) = x^2$) in this case. Figure 2 shows the corresponding Q-Q plots (for sample sizes $n = 5000, 20000, 100000$) when $F$ is defined in (17), for which $\tau(g,f) \approx 1.828$ and $\sigma^2_{\mathrm{eff}}(f) \approx 3.314$. We can see that the sampling distribution of $\sqrt{n}\{\tau(g,\hat f_n) - \tau(g,f)\}$ converges to the desired limiting normal distribution; in this case, the normal approximation seems quite good even for moderately small sample sizes.
Let us now provide some numerical evidence illustrating that the limiting distribution in (8) (in Theorem 2.2) need not always be normal when estimating a general functional of the form $\tau(g,f)$ (see (1)). In this simulation study we consider the case when $f$ is piecewise constant; in particular, we take $F$ as defined in (17). Consider estimating the functional $\tau(g,f) = \int x f^2(x)\,dx$ (i.e., $g(z,x) = xz^2$). The Q-Q plots in Figure 3 show the sampling distribution of $\sqrt{n}\{\tau(g,\hat f_n) - \tau(g,f)\}$ against the quantiles of a mean zero normal distribution with the efficient variance (which can be computed using (10)) as the sample size varies ($n = 5000, 20000$ and $100000$). The Q-Q plots indicate several interesting features: (i) the sampling quantiles show a non-linear trend and are quite different from the theoretical quantiles (assuming the normal limit); (ii) the sampling distribution does not seem to change with $n$, suggesting that it is already close to its asymptotic limit (which is given by (8)). Thus, we conjecture that in this scenario the sampling distribution of $\sqrt{n}\{\tau(g,\hat f_n) - \tau(g,f)\}$ does not converge to a normal limit with (mean $0$ and) the efficient variance.

Fig 3: The same plots as in Figure 1 for the case when $F$ is given in (17) and the functional of interest is $\tau(g,f) = \int x f^2(x)\,dx$ (i.e., $g(z,x) = xz^2$).
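Readers who wish to probe the piecewise-constant regime numerically can run a small Monte Carlo along the following lines. Since the specific $F$ of (17) is not reproduced here, this sketch uses a hypothetical two-piece nonincreasing density ($f = 1.5$ on $[0, 0.5)$ and $f = 0.5$ on $[0.5, 1]$, so $\tau(g,f) = \int x f^2(x)\,dx = 0.375$); all names and the density are our own choices for illustration.

```python
import numpy as np

def lcm_slopes(x):
    """Knots and slopes of the least concave majorant of the empirical CDF
    (the Grenander estimator is piecewise constant at these slopes)."""
    x = np.sort(np.asarray(x, dtype=float))
    pts = np.column_stack([np.concatenate([[0.0], x]),
                           np.arange(x.size + 1) / x.size])
    hull = [pts[0]]
    for p in pts[1:]:
        # Pop while appending p would make the majorant non-concave.
        while len(hull) >= 2 and (
            (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-1][0])
            <= (p[1] - hull[-1][1]) * (hull[-1][0] - hull[-2][0])
        ):
            hull.pop()
        hull.append(p)
    hull = np.array(hull)
    return hull[:, 0], np.diff(hull[:, 1]) / np.diff(hull[:, 0])

def plugin_x_f2(x):
    """Plug-in estimate of the functional with g(z, x) = x z^2: on each
    interval where the estimator equals s_k, the integral of x over
    (t_{k-1}, t_k] is (t_k^2 - t_{k-1}^2) / 2."""
    knots, slopes = lcm_slopes(x)
    return float(np.sum(slopes**2 * np.diff(knots**2) / 2.0))

# Hypothetical two-piece density (NOT the F of (17)): f = 1.5 on [0, 0.5),
# f = 0.5 on [0.5, 1]; the true functional value is 2.25/8 + 0.25*3/8 = 0.375.
TAU = 0.375
rng = np.random.default_rng(1)

def sample(n):
    left = rng.random(n) < 0.75  # mass 1.5 * 0.5 = 0.75 on the left piece
    u = rng.random(n)
    return np.where(left, 0.5 * u, 0.5 + 0.5 * u)

n = 10000
errs = np.array([np.sqrt(n) * (plugin_x_f2(sample(n)) - TAU) for _ in range(40)])
# The sqrt(n)-scaled errors stay stochastically bounded, consistent with
# Theorem 2.2, even though the limiting law need not be normal.
```

Plotting a Q-Q plot of `errs` against normal quantiles reproduces the kind of diagnostic shown in Figure 3, here for the hypothetical density rather than the $F$ of (17).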

Discussion
In this paper we characterize the asymptotic distribution of the nonparametric maximum likelihood (NPML) based plug-in estimator (see Theorem 2.2) for estimating smooth integrated functionals of a monotone nonincreasing density f , under less restrictive assumptions on f . We also show that for a large class of functionals the asymptotic limit is normal with mean zero and the semiparametric efficient variance (see Theorem 3.1), under minimal assumptions on f . In particular, we do not assume that the underlying true density f is (i) smooth, or (ii) compactly supported, or (iii) lower bounded away from zero.
Finally, we believe that our basic proof idea can be used to study the asymptotic behavior of estimated functionals (based on the NPMLE) in other shape-restricted problems (e.g., decreasing convex densities or log-concave densities). A key open question in this direction is to express the NPMLE of the underlying density as a "nice" (e.g., Hadamard directionally differentiable) functional of the empirical distribution function of the data and to derive a corresponding weak convergence result analogous to Beare and Fang (2017, Theorem 2.1). As future research, we plan to explore this direction for other shape-restricted problems.

Proof of Theorem 2.2
We first give an overview of the main ideas and steps involved in the proof of Theorem 2.2. Observe the decomposition in (18), whose remainder term is $w_n$. Step 1: We claim that $n^{1/2} w_n = o_P(1)$.
We prove (19) in Section 7.1.1. Below we complete the proof of Theorem 2.2 assuming the validity of (19).
Step 2: The first term in (18) can be handled as follows.
We have to study the probability in (20), where we have used assumption (A3), which implies that $|g(z,x)| \le K$, and, for any function $\psi : \mathbb{R} \to \mathbb{R}$, $\|\psi\|_2^2 := \int \psi^2(x)\,dx$. First, observe that by Woodroofe and Sun (1993, Theorem 2 and Remark 3), the sequence of random variables $\hat f_n(0+)/f(0+)$ is asymptotically $P$-tight; consequently, given any $\epsilon > 0$ there exists $M^* > 0$ (depending on $\epsilon$) such that, for $n$ large enough (depending on $\epsilon$), the second term in (20) is bounded from above by $\epsilon/2$.
As $\epsilon$ and $\eta$ are arbitrary, we conclude that $w_n = o_P(n^{-1/2})$ as $n \to \infty$.

Proof of Theorem 2.3
We first note that $\sigma^2_{\mathrm{eff}}(g,f) < +\infty$. This follows from the fact that $\dot g^2(f(\cdot),\cdot)$ is a bounded function on $[0,\infty)$ (as $\dot g(f(\cdot),\cdot)$ is a function of bounded total variation on $[0,\infty)$). Now, to show (9), first note that when $F$ is strictly concave on $\mathbb{R}_+$, it follows from Proposition 2.1 that $M_F\xi = \xi$ for all $\xi \in C(\mathbb{R}_+)$. Thus, if $F$ is strictly concave then $\hat G = G$ and consequently $Y = -\int_S G(x)\,d\psi(x)$, where $\psi(\cdot) := \dot g(f(\cdot),\cdot)$ and $S = [0,s]$ with $s \in \mathbb{R}$ or $s = +\infty$. As $\psi(\cdot)$ is of finite total variation on $S$ (by (A3)-(iii,b)), we may assume without loss of generality that $\psi(s) := \lim_{x \to s} \psi(x) = 0$ (otherwise we can work with $\psi(\cdot) - \psi(s)$ instead of $\psi(\cdot)$, which does not change the value of $Y$). If $s \in \mathbb{R}$ then it is easy to see that $Y$ is a centered Gaussian random variable. In the following we show that even when $s = +\infty$, $Y$ is a centered Gaussian random variable: $Y$ is the in-probability (and hence in-distribution) limit of $-\int_0^N G(x)\,d\psi(x)$ as $N \to \infty$, and for every $N$, $-\int_0^N G(x)\,d\psi(x)$ is a Gaussian random variable. Therefore, to show (9), we only need to show that the mean and variance of $-\int_0^N G(x)\,d\psi(x)$ converge to $0$ and $\sigma^2_{\mathrm{eff}}(g,f)$, respectively, as $N \to \infty$.
We next show that $\mathrm{Var}(Y)$ matches $\sigma^2_{\mathrm{eff}}(g,f)$. If $s = +\infty$, the same proof also establishes the convergence of the variance of $-\int_0^N G(x)\,d\psi(x)$ to $\sigma^2_{\mathrm{eff}}(g,f)$ (by simply interpreting integrals with respect to $\psi$ on $[0,\infty)$ as limits of integrals with respect to $\psi$ on $[0,N]$). To operationalize our argument, we begin with a simplification expressing $Y$ through $W$, the standard Brownian motion on $[0,1]$, where $\stackrel{d}{=}$ stands for equality in distribution; to simplify notation, in the following every integral is taken over the set $[0,s]$. Then, by simple integration by parts, since $\lim_{x\to\infty}\psi(x) = 0$, it is enough to prove the following lemma to complete the proof of Theorem 2.3 (as then $\mathrm{Var}(Y)$ reduces to $\mathrm{Var}(\psi(X)) = \mathrm{Var}(\dot g(f(X),X))$).
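As a consistency check on this last step, the integration-by-parts identity and the covariance of the $F$-Brownian bridge $G = B \circ F$ yield the claimed variance directly (a sketch in the notation above, assuming $\psi(s) = 0$ and $G(0) = G(s) = 0$):

```latex
% Integration by parts (using psi(s) = 0 and G(0) = G(s) = 0):
Y = -\int_0^{s} G(x)\,d\psi(x) = \int_0^{s} \psi(x)\,dG(x),
\qquad \mathbb{E}[Y] = 0.
% Since Cov(G(x), G(y)) = F(x \wedge y) - F(x)F(y) for G = B \circ F:
\operatorname{Var}(Y)
  = \int_0^{s} \psi^{2}(x)\,dF(x) - \Big(\int_0^{s} \psi(x)\,dF(x)\Big)^{2}
  = \operatorname{Var}\big(\psi(X)\big)
  = \operatorname{Var}\big(\dot g(f(X), X)\big).
```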

Proof of Theorem 3.1
Proposition 2.1 will help us demonstrate that $Y$ (as defined in (16)) indeed has a Gaussian distribution with the desired efficient variance. To give the reader more intuition, we begin by discussing two simple scenarios. Case (i): First, let $F$ be strictly concave on $\mathbb{R}_+$. From Proposition 2.1 it follows that $M_\theta$ is linear if and only if $\theta$ is strictly concave on $\mathbb{R}_+$ (Beare and Fang (2017)). Thus, $M_F\xi = \xi$ for all $\xi \in C(\mathbb{R}_+)$; hence, if $F$ is strictly concave, then $\hat G = G$ and consequently $Y = \int G(x)\,d[h'(f(x))]$. In this form, it is easy to see that $Y$ is a centered Gaussian random variable. The fact that its variance matches the efficiency bound can be demonstrated as in the proof of Theorem 2.3 in Section 7.2 (with $\psi(x) = h'(f(x))$).
Case (ii): Next consider the other extreme case, i.e., $F$ piecewise affine. In this case $f$ is piecewise constant with jumps (say) at $0 < t_1 < t_2 < \ldots < t_k < \infty$ and values $v_1 > \ldots > v_k$, for some integer $k \ge 2$, where $t_0 \equiv 0$. As $f$ only takes $k+1$ values (namely, $\{v_1, \ldots, v_k, 0\}$), an application of Proposition 2.1 for every $i = 1, \ldots, k$ shows that $Y$ has the same distribution as in the case when $F$ was strictly concave. Since in both of the above cases we have $Y = \int G(x)\,d[h'(f(x))]$, one can conjecture that the result should hold for any concave $F$. This intuition indeed turns out to be correct, as shown below.
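Under the piecewise-constant structure of Case (ii), the Lebesgue–Stieltjes integral defining $Y$ collapses to a finite Gaussian sum (a sketch, writing $v_{k+1} := 0$ for the value of $f$ beyond $t_k$):

```latex
% h'(f(x)) is a step function with jumps only at t_1 < ... < t_k, so
Y = \int_0^{\infty} G(x)\, d\big[h'(f(x))\big]
  = \sum_{i=1}^{k} G(t_i)\,\big[h'(v_{i+1}) - h'(v_i)\big],
% a linear combination of the jointly Gaussian variables G(t_i) = B(F(t_i)),
% and hence a centered Gaussian random variable.
```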
Let $\psi(x) := h'(x)$, for $x \ge 0$. Note that by assumption (A3) applied to $g(z,x) = h(z)$, we have that $\psi(f(x))$ has bounded total variation. Consequently, let us assume that the function $\psi \circ f$ is monotone (otherwise we can split $\psi \circ f$ into the difference of two monotone functions and apply the following argument to each part). It is enough to establish the claim for any $\xi \in C(\mathbb{R}_+)$; we can then take $\xi$ to be $G \equiv B \circ F$ (which is almost surely continuous) and obtain the desired result. Let $I$ be the set of points $x$ at which $M_F\xi(x)$ may differ from $\xi(x)$; it is easy to show that $I$ is a Borel measurable set. Consider the collection $\{T_{F,x} : x \in I\}$, where $T_{F,x}$ is defined in Proposition 2.1 (also see Beare and Fang (2017, Proposition 2.1)). First note that $x \in T_{F,x}$, for every $x \in I$. We now claim that there are at most countably many distinct $T_{F,x}$'s as $x$ varies in $I$. This follows from the fact that for any $x$ in the support of $F$, $F$ has a strictly positive slope on $T_{F,x}$ (i.e., $F$ cannot be affine with slope $0$ on $T_{F,x}$, as $F$ is concave and nondecreasing and $x$ is in the support of $F$), which implies that we can associate a unique rational number in the range of $F$ to the interval $T_{F,x}$. Let $\{T_{F,x_i}\}_{i \ge 1}$ (for $x_i \in (0,\infty)$) be an enumeration of this countable collection $\{T_{F,x} : x \in I\}$. Obviously, the quantity of interest is then bounded by a sum of integrals over the $T_{F,x_i}$'s, where the last inequality follows from the fact that for $x \in [0,\infty) \setminus I$, $M_F\xi(x) = \xi(x)$ (see Beare and Fang (2017, Remark 2.2)), and the above integrals are viewed in the Lebesgue–Stieltjes sense. Notice now that on each open interval $T_{F,x_i}$, $f$ is constant (as $F$ is affine on $T_{F,x_i}$). Hence each of the integrals in the last display equals $0$. This completes the proof.
7.5.1. Proof for the case $\int_0^1 (\hat f_n(x)-1)^3\,dx / \sqrt{\log n}$

Let us start by considering the distribution of $L_n$. By an argument similar to that in Groeneboom and Pyke (1983, Section 3), it can be seen that $L_n$ has the same distribution as a suitably conditioned version $L_n^*$. For constants $c_0, c_n, \gamma_n$, we define the sequence of random variables $U_n(c_0, c_n, \gamma_n)$. The particular choice of $c_0, c_n, \gamma_n$ is crucial in the analysis that follows. In particular, a specific choice of the constant $c_0$ induces certain fine-tuned cancellations which are necessary for controlling the asymptotic variance of $U_n$ at the desired level. It is worth noting that such a definition, and the eventual choice $c_0 = 1$, is also present in Groeneboom and Pyke (1983, Equation (3.2)); our choice of $c_0, c_n, \gamma_n$ is more tailored to the current problem. Finally, conditional on the joint event that $W_n = 1$ and $V_n = 0$ (i.e., $T_n = n$ and $S_n = n$) one has $U_n(c_0, c_n, \gamma_n)\,|\,\{W_n = 1, V_n = 0\} \stackrel{d}{=} c_n^{-1}(nL_n^* - \gamma_n)\,|\,\{T_n = n, S_n = n\}$.

Lemma 7.2. Let $c_n = \sqrt{\log n}$, $\gamma_n = 0$ and $c_0 = -1$. Then $(U_n, V_n, W_n) \stackrel{d}{\to} (U, V, W)$, where $U = \delta_0$, the Dirac measure at $0$, and $(V, W)$ has the infinitely divisible characteristic function $\phi_{V,W}(s,t) = \exp\big(\int_0^1 (e^{itu - s^2 u/2} - 1)\,u^{-1}\,du\big)$.

Proof. With the choice of $c_0, c_n, \gamma_n$ provided in the statement of the lemma, write $U_n \equiv U_n(c_0, c_n, \gamma_n)$. We only prove that $U_n \stackrel{P}{\to} 0$; the subsequent conclusions follow along similar lines as in the proof of Groeneboom and Pyke (1983, Lemma 3.1). In the following analysis we repeatedly use standard asymptotics for the Gamma function $\Gamma(j)$. First, define $U_n^*$ to be $U_n$ without its first three summands in $j$. We will show that $U_n^* \stackrel{P}{\to} 0$, and as $U_n - U_n^* \stackrel{P}{\to} 0$ (since $c_n \to \infty$), this will establish the desired result.
This completes the proof of the lemma.
We finally note that Lemma 7.2 immediately implies that $U_n$ is asymptotically independent of $(W_n, V_n)$ and that $U_n - U_m \stackrel{P}{\to} 0$ as $n, m \to \infty$. Consequently, the analysis provided in the proof of Groeneboom and Pyke (1983, Theorem 3.1) goes through verbatim, yielding $U_n\,|\,\{W_n = 1, V_n = 0\} \stackrel{d}{\to} 0$, which in turn proves (using (26)) that $c_n^{-1}(nL_n^* - \gamma_n)\,|\,\{T_n = n, S_n = n\} \stackrel{d}{\to} 0$ for our choice $\gamma_n = 0$. This yields the desired result.

Proof for the case $\int_0^1 (\hat f_n(x)-1)^4\,dx / \sqrt{\log n}$
The proof technique is similar to that for the case $\int_0^1 (\hat f_n(x)-1)^3\,dx / \sqrt{\log n}$, albeit with much more cumbersome details and a different choice of $c_0$. As before, let us start by considering the distribution of the statistic of interest. As argued in Groeneboom and Pyke (1983, Section 3), it is equivalent to study the (conditional) distribution of $nL_n^*$. We once again introduce a sequence of random variables $U_n(c_0, c_n, \gamma_n)$, now defined with fourth powers of the summands. Then, conditional on the joint event that $W_n = 1$ and $V_n = 0$ (i.e., $T_n = n$ and $S_n = n$), one has $U_n(c_0, c_n, \gamma_n)\,|\,\{W_n = 1, V_n = 0\} \stackrel{d}{=} c_n^{-1}(nL_n^* - \gamma_n)\,|\,\{T_n = n, S_n = n\}$.
Once again we have a crucial lemma which is the analogue of Groeneboom and Pyke (1983, Lemma 3.1); a correction to the characteristic function formula in Groeneboom and Pyke (1983, Lemma 3.1) was recently pointed out in Groeneboom (2019).
Lemma 7.3. Let $c_n = \sqrt{\log n}$, $\gamma_n = 0$ and $c_0 = 1$. Then $(U_n, V_n, W_n) \stackrel{d}{\to} (U, V, W)$, where $U = \delta_0$ and $(V, W)$ has the infinitely divisible characteristic function $\phi_{V,W}(s,t) = \exp\big(\int_0^1 (e^{itu - s^2 u/2} - 1)\,u^{-1}\,du\big)$.

Proof. With the above choice of $c_0, c_n, \gamma_n$, write $U_n \equiv U_n(c_0, c_n, \gamma_n)$. We only prove that $U_n \stackrel{P}{\to} 0$; the subsequent conclusions follow along similar lines as in the proofs of Groeneboom and Pyke (1983, Lemma 3.1) and Lemma 7.2.
This completes the proof of the lemma.
We finally note that, as before, Lemma 7.3 immediately implies that in the asymptotic distributional limit $U_n$ is independent of $(W_n, V_n)$ and that $U_n - U_m \stackrel{P}{\to} 0$ as $n, m \to \infty$. Consequently, the analysis provided in the proof of Groeneboom and Pyke (1983, Theorem 3.1) goes through verbatim, yielding $U_n\,|\,\{W_n = 1, V_n = 0\} \stackrel{d}{\to} 0$, which in turn proves that $c_n^{-1}(nL_n^* - \gamma_n)\,|\,\{T_n = n, S_n = n\} \stackrel{d}{\to} 0$ for our choice $\gamma_n = 0$. Consequently, $\int_0^1 (\hat f_n(x)-1)^4\,dx / \sqrt{\log n} \stackrel{P}{\to} 0$.