A new computational framework for log-concave density estimation

In Statistics, log-concave density estimation is a central problem within the field of nonparametric inference under shape constraints. Despite great progress in recent years on the statistical theory of the canonical estimator, namely the log-concave maximum likelihood estimator, adoption of this method has been hampered by the complexities of the non-smooth convex optimization problem that underpins its computation. We provide enhanced understanding of the structural properties of this optimization problem, which motivates the proposal of new algorithms, based on both randomized and Nesterov smoothing, combined with an appropriate integral discretization of increasing accuracy. We prove that these methods enjoy, both with high probability and in expectation, a convergence rate of order $1/T$ up to logarithmic factors on the objective function scale, where $T$ denotes the number of iterations. The benefits of our new computational framework are demonstrated on both synthetic and real data, and our implementation is available in the GitHub repository \texttt{LogConcComp} (Log-Concave Computation).


Introduction
In Statistics, the field of nonparametric inference under shape constraints dates back at least to Grenander (1956), who studied the nonparametric maximum likelihood estimator of a decreasing density on the non-negative half line. But it is really over the last decade or so that researchers have begun to realize its full potential for addressing key contemporary data challenges such as (multivariate) density estimation and regression. The initial allure is the flexibility of a nonparametric model, combined with estimation methods that can often avoid the need for tuning parameter selection, which can be troublesome for other nonparametric techniques such as those based on smoothing. Intensive research efforts over recent years have revealed further great attractions: for instance, these procedures frequently attain optimal rates of convergence over relevant function classes. Moreover, it is now known that shape-constrained procedures can possess intriguing adaptation properties, in the sense that they can estimate particular subclasses of functions at faster rates, even (nearly) as well as the best one could do if one were told in advance that the function belonged to this subclass.
Typically, however, the implementation of shape-constrained estimation techniques requires the solution of an optimization problem, and, despite some progress, there are several cases where computation remains a bottleneck and hampers the adoption of these methods by practitioners. In this work, we focus on the problem of log-concave density estimation, which has become arguably the central challenge in the field, because the class of log-concave densities enjoys stability properties under marginalization, conditioning, convolution and linear transformations that make it a very natural infinite-dimensional generalization of the class of Gaussian densities (Samworth, 2018).
The univariate log-concave density estimation problem was first studied in Walther (2002), and fast algorithms for the computation of the log-concave maximum likelihood estimator (MLE) in one dimension are now available through the R packages logcondens (Dümbgen and Rufibach, 2011) and cnmlcd (Liu and Wang, 2018). Cule et al. (2010) introduced and studied the multivariate log-concave maximum likelihood estimator, but their algorithm, which is described below and implemented in the R package LogConcDEAD (Cule et al., 2009), is slow; for instance, Cule et al. (2010) report a running time of 50 seconds for computing the bivariate log-concave MLE with 500 observations, and 224 minutes for computing the log-concave MLE in four dimensions with 2,000 observations. An alternative, interior point method for a suitable approximation was proposed by Koenker and Mizera (2010). Recent progress on theoretical aspects of the computational problem in the computer science community includes Axelrod et al. (2019), who proved that there exists a polynomial-time algorithm for computing the log-concave maximum likelihood estimator; we are unaware of any attempt to implement this algorithm. Rathke and Schnörr (2019) compute an approximation to the log-concave MLE by treating $-\log p$ as a piecewise affine maximum function, using the log-sum-exp operator to approximate the non-smooth maximum and a Riemann sum to compute the integral and its gradient, obtaining a solution via L-BFGS. This reformulation means that the problem is no longer convex.
To describe the problem more formally, let $\mathcal{C}^d$ denote the class of proper, convex, lower semi-continuous functions $\phi : \mathbb{R}^d \to (-\infty, \infty]$ that are coercive in the sense that $\phi(x) \to \infty$ as $\|x\| \to \infty$. The class of upper semi-continuous log-concave densities on $\mathbb{R}^d$ is denoted by $\mathcal{P}^d := \{e^{-\phi} : \phi \in \mathcal{C}^d, \int_{\mathbb{R}^d} e^{-\phi} = 1\}$. Cule et al. (2010, Theorem 1) proved that whenever the convex hull $C_n$ of $x_1, \ldots, x_n$ is $d$-dimensional, there exists a unique

$$\hat{p}_n \in \operatorname*{argmax}_{p \in \mathcal{P}^d} \frac{1}{n}\sum_{i=1}^n \log p(x_i). \qquad (1)$$

If $x_1, \ldots, x_n$ are regarded as realizations of independent and identically distributed random vectors on $\mathbb{R}^d$, then the objective function in (1) is a scaled version of the log-likelihood function, so $\hat{p}_n$ is called the log-concave MLE. The existence and uniqueness of this estimator is not obvious, because the infinite-dimensional class $\mathcal{P}^d$ is non-convex, and even the class of negative log-densities $\{\phi \in \mathcal{C}^d : \int_{\mathbb{R}^d} e^{-\phi} = 1\}$ is non-convex. In fact, the estimator belongs to a finite-dimensional subclass; more precisely, for a vector $\varphi = (\varphi_1, \ldots, \varphi_n) \in \mathbb{R}^n$, define $\mathrm{cef}[\varphi] \in \mathcal{C}^d$ to be the (pointwise) largest function with $\mathrm{cef}[\varphi](x_i) \le \varphi_i$ for $i = 1, \ldots, n$. Cule et al. (2010) proved that $\hat{p}_n = e^{-\mathrm{cef}[\varphi^*]}$ for some $\varphi^* \in \mathbb{R}^n$, and refer to the function $-\mathrm{cef}[\varphi^*]$ as a 'tent function'; see the illustration in Figure 1. Cule et al. (2010) further defined the non-smooth, convex objective function $f : \mathbb{R}^n \to \mathbb{R}$ by

$$f(\varphi) := \frac{1}{n}\sum_{i=1}^n \varphi_i + \int_{C_n} e^{-\mathrm{cef}[\varphi](x)}\,dx, \qquad (2)$$

and proved that $\varphi^* = \operatorname*{argmin}_{\varphi \in \mathbb{R}^n} f(\varphi)$.
The two main challenges in optimizing the objective function $f$ in (2) are that the value and subgradient of the integral term are hard to evaluate, and that $f$ is non-smooth, so vanilla subgradient methods lead to a slow rate of convergence. To address the first issue, Cule et al. (2010) computed the exact integral and its subgradient using the qhull algorithm (Barber et al., 1993) to obtain a triangulation of the convex hull of the data, evaluating the function value and subgradient over each simplex in the triangulation. However, in the worst case, the triangulation can have $O(n^{\lfloor d/2 \rfloor})$ simplices (McMullen, 1970). The non-smoothness is handled via Shor's r-algorithm (Shor, 1985, Chapter 3), as implemented by Kappel and Kuntsevich (2000).
Figure 1: An illustration of a tent function, taken from Cule et al. (2010).

In Section 2, we characterize the subdifferential of the objective function in terms of the solution of a linear program (LP), and show that the solution lies in a known, compact subset of $\mathbb{R}^n$. This understanding allows us to introduce our new computational framework for log-concave density estimation in Section 3, based on an accelerated version of a dual averaging approach (Nesterov, 2009). This relies on smoothing the objective function, and encompasses two popular strategies, namely Nesterov smoothing (Nesterov, 2005) and randomized smoothing (Lakshmanan and De Farias, 2008; Yousefian et al., 2012; Duchi et al., 2012), as special cases. A further feature of our algorithm is the construction of approximations to gradients of our smoothed objective, and this in turn requires an approximation to the integral in (2). While a direct application of the theory of Duchi et al. (2012) would yield a rate of convergence for the objective function of order $n^{1/4}/T + 1/\sqrt{T}$ after $T$ iterations, we show in Section 4 that by introducing finer approximations of both the integral and its gradient as the iteration number increases, we can obtain an improved rate of order $1/T$, up to logarithmic factors. Moreover, we translate the optimization error in the objective into a bound on the error in the log-density, which is uncommon in the literature in the absence of strong convexity. A further advantage of our approach is that we are able to extend it in Section 5 to the more general problem of quasi-concave density estimation (Koenker and Mizera, 2010; Seregin and Wellner, 2010), thereby providing a computationally tractable alternative to the discrete Hessian approach of Koenker and Mizera (2010). Section 6 illustrates the practical benefits of our methodology in terms of improved computational timings on simulated data. Additional experimental details and applications on real data sets are provided in Appendix A. Proofs of all main results can be found in Appendix B, and background on the field of nonparametric inference under shape constraints can be found in Appendix C.

Notation:
We write $[n] := \{1, 2, \ldots, n\}$, let $\mathbf{1} \in \mathbb{R}^n$ denote the all-ones vector, and denote the cardinality of a set $S$ by $|S|$. For a Borel measurable set $C \subseteq \mathbb{R}^d$, we use $\mathrm{vol}(C)$ to denote its volume (i.e. $d$-dimensional Lebesgue measure). We write $\|\cdot\|$ for the Euclidean norm of a vector. For $\mu > 0$, a convex function $f : \mathbb{R}^n \to \mathbb{R}$ is said to be $\mu$-strongly convex if $\varphi \mapsto f(\varphi) - \frac{\mu}{2}\|\varphi\|^2$ is convex. The notation $\partial f(\varphi)$ denotes the subdifferential (set of subgradients) of $f$ at $\varphi$. Given a real-valued sequence $(a_n)$ and a positive sequence $(b_n)$, we write $a_n = O(b_n)$ if $\limsup_{n \to \infty} |a_n|/b_n < \infty$, and $a_n = \tilde{O}(b_n)$ if $a_n = O(b_n)$ up to logarithmic factors.

As mentioned in the introduction, in computing the MLE, we seek

$$\varphi^* = \operatorname*{argmin}_{\varphi \in \mathbb{R}^n} f(\varphi), \qquad (4)$$

where

$$f(\varphi) := \frac{1}{n}\sum_{i=1}^n \varphi_i + I(\varphi), \qquad I(\varphi) := \int_{C_n} e^{-\mathrm{cef}[\varphi](x)}\,dx. \qquad (5)$$

Note that (4) can be viewed as a stochastic optimization problem by writing

$$f(\varphi) = \mathbb{E}\,F(\varphi, \xi), \qquad F(\varphi, x) := \frac{1}{n}\sum_{i=1}^n \varphi_i + \Delta\,e^{-\mathrm{cef}[\varphi](x)}, \qquad (6)$$

where $\xi$ is uniformly distributed on $C_n$ and $\Delta := \mathrm{vol}(C_n)$. For $x \in \mathbb{R}^d$, let $E(x)$ denote the set of all weight vectors for which $x$ can be written as a weighted convex combination of $x_1, \ldots, x_n$; thus $E(x)$ is a compact, convex subset of $\mathbb{R}^n$. The cef function is given by a linear program (LP) (Koenker and Mizera, 2010; Axelrod et al., 2019):

$$\mathrm{cef}[\varphi](x) = \inf_{\alpha \in E(x)} \sum_{i=1}^n \alpha_i \varphi_i, \qquad (Q_0)$$

and, with the standard convention that $\inf \emptyset := \infty$, we see that $(Q_0)$ agrees with (3). From the LP formulation, it follows that $\varphi \mapsto \mathrm{cef}[\varphi](x)$ is concave, being an infimum of linear functions of $\varphi$. Given a pair $\varphi \in \mathbb{R}^n$ and $x \in C_n$, an optimal solution to $(Q_0)$ may not be unique, in which case the map $\varphi \mapsto \mathrm{cef}[\varphi](x)$ is not differentiable (Bertsekas, 2016, Proposition B.25(b)). Noting that the infimum in $(Q_0)$ is attained whenever $x \in C_n$, let $E^*(x; \varphi)$ denote the set of optimal solutions to $(Q_0)$. Danskin's theorem (Bertsekas, 2016, Proposition B.25(b)) applied to $-\mathrm{cef}[\varphi](x)$ then yields that for each $x \in C_n$, the subdifferential of $F(\varphi, x)$ with respect to $\varphi$ is given by

$$\partial_\varphi F(\varphi, x) = \frac{1}{n}\mathbf{1} - \Delta\,e^{-\mathrm{cef}[\varphi](x)}\,\mathrm{conv}\bigl(E^*(x; \varphi)\bigr).$$

Since both $f$ and $F(\cdot, x)$ are finite convex functions on $\mathbb{R}^n$ (for each fixed $x \in C_n$ in the latter case), by Clarke (1990, Proposition 2.3.6(b) and Theorem 2.7.2), the subdifferential of $f$ satisfies $\partial f(\varphi) = \mathbb{E}\,\partial_\varphi F(\varphi, \xi)$. Observe that given any $\varphi \in \mathbb{R}^n$, the function $x \mapsto -\mathrm{cef}[\varphi + \log I(\varphi)\mathbf{1}](x)$ (where $I(\varphi)$ is the integral defined in (5)) is a log-density. It is also convenient to let $\tilde{\varphi} \in \mathbb{R}^n$ be such that $\exp\{-\mathrm{cef}[\tilde{\varphi}]\}$ is the uniform density on $C_n$, so that $f(\tilde{\varphi}) = \log \Delta + 1$.
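The LP $(Q_0)$ can be evaluated directly with an off-the-shelf solver. The following sketch (our own illustration, using scipy's `linprog` rather than the Gurobi setup of our experiments; all names are ours) computes $\mathrm{cef}[\varphi](x)$ by minimizing $\alpha^\top \varphi$ over the weight vectors $\alpha \in E(x)$:

```python
import numpy as np
from scipy.optimize import linprog

def cef(phi, X, x):
    """Evaluate cef[phi](x) via the LP (Q_0): minimize sum_i alpha_i * phi_i
    over alpha >= 0 with sum(alpha) = 1 and sum_i alpha_i x_i = x."""
    n, d = X.shape
    A_eq = np.vstack([X.T, np.ones((1, n))])   # (d+1) x n: barycentre + simplex constraints
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=phi, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun if res.success else np.inf  # infeasible LP: inf-over-empty-set convention

# Example: four points in R^2 with tent values phi; evaluate at the centroid.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
phi = np.array([0.0, 1.0, 1.0, 2.0])
val = cef(phi, X, np.array([0.5, 0.5]))   # equals 1.0 for this symmetric example
```

Points outside the convex hull make the LP infeasible, which matches the convention $\inf \emptyset = \infty$ above.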
Proposition 1 below (an extension of Axelrod et al. (2019, Lemma 2)) provides uniform upper and lower bounds on this log-density, whenever the objective function $f$ evaluated at $\varphi$ is at least as good as that at $\tilde{\varphi}$. In more statistical language, these bounds hold whenever the log-likelihood of the density $\exp\{-\mathrm{cef}[\varphi + \log I(\varphi)\mathbf{1}](\cdot)\}$ is at least as large as that of the uniform density on the convex hull of the data, so in particular, they must hold for the log-concave MLE (i.e. when $\varphi = \varphi^*$). Let $\varphi_0 := (n-1) + d(n-1)\log 2n + 2nd\log(2nd) + \log \Delta$.

The following corollary is an immediate consequence of Proposition 1.
Corollary 1 gives a sense in which any $\varphi \in \mathbb{R}^n$ for which the objective function is 'good' cannot be too far from the optimizer $\varphi^*$; here, 'good' means that the objective should be no larger than that of the uniform density on the convex hull of the data. Moreover, an upper bound on the integral $I(\varphi)$ provides an upper bound on the norm of any subgradient $g(\varphi)$ of $f$ at $\varphi$.
3 Computing the log-concave MLE

As mentioned in the introduction, subgradient methods (Shor, 1985; Polyak, 1987), such as that employed by Cule et al. (2010), tend to be slow for minimizing the non-smooth objective function $f$ defined in (5). Our alternative approach involves minimizing the representation of $f$ given in (6) via smoothing techniques, which offer superior computational guarantees and practical performance in our numerical experiments.

Smoothing techniques
We present two smoothing techniques to find the minimizer $\varphi^* \in \mathbb{R}^n$ of the non-smooth convex optimization problem (4). By Proposition 1, we have that $\varphi^* \in \Phi$, where $\Phi := \{\varphi \in \mathbb{R}^n : \underline{\varphi}_0 \le \varphi_i \le \overline{\varphi}_0 \text{ for all } i \in [n]\}$ with $\underline{\varphi}_0, \overline{\varphi}_0 \in \mathbb{R}$. The first technique is based on Nesterov smoothing (Nesterov, 2005) and the second on randomized smoothing (Duchi et al., 2012).
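Before specializing to our objective, it may help to see the generic mechanism of Nesterov smoothing on a toy example. The sketch below is our own illustration, not the smoothed problem solved in the paper (which regularizes the LP $(Q_0)$, yielding QPs): it smooths a finite maximum with the entropy prox-function, giving the classical uniform bound $\max_j v_j \le \breve f_u(v) \le \max_j v_j + u \log m$.

```python
import numpy as np

def smooth_max(v, u):
    """Entropic Nesterov smoothing of v -> max(v): u * log(sum(exp(v/u))).
    Uniform approximation: max(v) <= smooth_max(v, u) <= max(v) + u*log(len(v))."""
    m = v.max()                                    # shift for numerical stability
    return m + u * np.log(np.exp((v - m) / u).sum())

v = np.array([0.3, -1.2, 0.1])
approx = {u: smooth_max(v, u) for u in (1.0, 0.1, 0.01)}
# The approximation tightens as the smoothing parameter u decreases.
```

The gradient of this smoothed maximum is a softmax weight vector and is $1/u$-Lipschitz, mirroring the $L_t = B_1/u_t$ scaling of the Lipschitz constants used later in Theorem 2.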

Randomized smoothing
Our second smoothing technique is randomized smoothing (Lakshmanan and De Farias, 2008; Yousefian et al., 2012; Duchi et al., 2012): we take the expectation of a random perturbation of the argument of $f$. Specifically, for $u \ge 0$, let

$$\bar f_u(\varphi) := \mathbb{E}\,f(\varphi + uz), \qquad (13)$$

where $z$ is uniformly distributed on the unit $\ell_2$-ball in $\mathbb{R}^n$. Thus, similarly to Nesterov smoothing, $\bar f_0 = f$, and the amount of smoothing increases with $u$. From a stochastic optimization viewpoint, we can write $\bar f_u(\varphi) = \mathbb{E}\,F(\varphi + uz, \xi)$ and $\nabla_\varphi \bar f_u(\varphi) = \mathbb{E}\,G(\varphi + uz, \xi)$, where $G(\varphi + uv, x) \in \partial F(\varphi + uv, x)$, and where the expectations are taken over independent random vectors $z$, distributed uniformly on the unit Euclidean ball in $\mathbb{R}^n$, and $\xi$, distributed uniformly on $C_n$. Here the gradient expression follows from, e.g., Lakshmanan and De Farias (2008, Lemma 3.3(a)) or Yousefian et al. (2012, Lemma 7); since $F(\varphi + uv, x)$ is differentiable almost everywhere with respect to $\varphi$, the expression for $\nabla_\varphi \bar f_u(\varphi)$ does not depend on the choice of subgradient. Proposition 4 below lists some properties of $\bar f_u$ and its gradient. It extends Yousefian et al. (2012, Lemmas 7 and 8) by exploiting special properties of the objective function to sharpen the dependence of the bounds on $n$.
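To make the definition concrete, the following sketch (a toy illustration of ours, not the paper's implementation) estimates the smoothed value and its gradient by Monte Carlo for the nonsmooth function $f(\varphi) = \|\varphi\|_1$, averaging function values and subgradients at points perturbed uniformly over a scaled Euclidean ball:

```python
import numpy as np

rng = np.random.default_rng(0)

def ball_sample(n):
    """Uniform sample from the unit Euclidean ball in R^n."""
    z = rng.standard_normal(n)
    z /= np.linalg.norm(z)                  # uniform direction on the sphere
    return z * rng.random() ** (1.0 / n)    # radius with density prop. to r^(n-1)

def smoothed(f, grad_f, phi, u, m=20000):
    """Monte Carlo estimates of E f(phi + u z) and E g(phi + u z),
    with z uniform on the unit ball and g a subgradient selection of f."""
    vals, grads = [], []
    for _ in range(m):
        pert = phi + u * ball_sample(phi.size)
        vals.append(f(pert))
        grads.append(grad_f(pert))
    return np.mean(vals), np.mean(grads, axis=0)

# Toy nonsmooth objective: f(phi) = ||phi||_1, with subgradient sign(phi).
f = lambda p: np.abs(p).sum()
g = lambda p: np.sign(p)
val, grad = smoothed(f, g, np.zeros(3), u=0.1)
# At the kink phi = 0: the smoothed value is positive, the gradient is near 0.
```

The averaged subgradient at the kink is close to zero by symmetry, illustrating why the smoothed function is differentiable even where $f$ is not.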

Stochastic first-order methods for smoothing sequences
Our proposed algorithm for computing the log-concave MLE is given in Algorithm 1. It relies on the choice of a smoothing sequence of $f$, which may be constructed using Nesterov or randomized smoothing, for instance. For a non-negative sequence $(u_t)_{t \in \mathbb{N}_0}$, this smoothing sequence is denoted by $(\ell_{u_t})_{t \in \mathbb{N}_0}$, where $\ell_{u_t} := \breve f_{u_t}$ is given by (11) or $\ell_{u_t} := \bar f_{u_t}$ is given by (13). In Algorithm 1, $P_\Phi : \mathbb{R}^n \to \Phi$ denotes the projection operator onto the closed convex set $\Phi$, which is essentially a threshold clipping operator. In fact, Algorithm 1 is a modification of an algorithm due to Duchi et al. (2012), and can be regarded as an accelerated version of the dual averaging scheme (Nesterov, 2009) applied to $(\ell_{u_t})$.
Algorithm 1 Accelerated stochastic dual averaging on a smoothing sequence with increasing grids
Input: smoothing sequence $(\ell_{u_t})$ whose gradients have Lipschitz constants $(L_t)_{t \in \mathbb{N}_0}$; initialization $\varphi_0 \in \mathbb{R}^n$; learning rate sequence $(\eta_t)_{t \in \mathbb{N}}$ of positive real numbers; number of iterations $T \in \mathbb{N}$
3: Compute an approximation $g_t$ of $\nabla_\varphi \ell_{u_t}(\varphi^{(y)}_t)$; see Section 3.2.1
9: end for
10: return $\varphi^{(x)}_{t+1}$
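As a rough illustration of the kind of scheme involved (our own simplified rendering, not the authors' exact algorithm), the following combines Nesterov-style extrapolation with a projected dual-averaging step over a box, using the $\theta_t$ recursion that appears before Theorem 2 and coordinatewise clipping for the projection $P_\Phi$; the proximal constants and the fixed smooth toy objective are our own stand-ins for the smoothing sequence $(\ell_{u_t})$ and the approximate gradients $g_t$:

```python
import numpy as np

def theta_seq(T):
    """theta_0 = 1 and theta_{t+1} = 2 / (1 + sqrt(1 + 4 / theta_t^2))."""
    th = [1.0]
    for _ in range(T):
        th.append(2.0 / (1.0 + np.sqrt(1.0 + 4.0 / th[-1] ** 2)))
    return th

def accel_dual_averaging(grad, x0, lo, hi, L, T):
    """Accelerated dual averaging over the box [lo, hi]^n; the projection
    P_Phi reduces to coordinatewise clipping, as for the box Phi in the text."""
    proj = lambda v: np.clip(v, lo, hi)
    th = theta_seq(T)
    x = z = proj(x0)
    s = np.zeros_like(x)                     # weighted running sum of gradients
    for t in range(T):
        y = (1.0 - th[t]) * x + th[t] * z    # extrapolation step
        s += grad(y) / th[t]                 # dual (gradient) accumulation
        z = proj(x0 - s / L)                 # prox step for psi = 0.5*||.-x0||^2
        x = (1.0 - th[t]) * x + th[t] * z
    return x

# Toy smooth objective f(x) = x1^2 + 0.5*x2^2 (gradient Lipschitz constant 2).
grad = lambda v: np.array([2.0 * v[0], v[1]])
x_hat = accel_dual_averaging(grad, np.array([1.0, 1.0]), -1.0, 1.0, L=2.0, T=500)
```

On this toy problem the iterates contract rapidly towards the minimizer at the origin, with the mild oscillation typical of accelerated methods.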

Approximating the gradient of the smoothing sequence
In Line 3 of Algorithm 1, we need to compute an approximation of the gradient $\nabla_\varphi \ell_u$, for a general $u \ge 0$. A key step in this process is to approximate the integral $I(\cdot)$, as well as a subgradient of $I$, at an arbitrary $\varphi \in \mathbb{R}^n$. Cule et al. (2010) provide explicit formulae for these quantities, based on a triangulation of $C_n$, using tools from computational geometry. For practical purposes, Cule and Dümbgen (2008) apply a Taylor expansion to approximate the analytic expression. The R package LogConcDEAD (Cule et al., 2009) uses this method to evaluate the exact integral at each iteration, but since this is time-consuming, we will only use this method at the final stage of our proposed algorithm as a polishing step.
An alternative approach is to use numerical integration. Among deterministic schemes, Rathke and Schnörr (2019) observed empirically that the simple Riemann sum with uniform weights appears to perform the best among several multi-dimensional integration techniques. Random (Monte Carlo) approaches to approximating the integral are also possible: given a collection of grid points $S = \{\xi_1, \ldots, \xi_m\}$, we approximate the integral as $I_S(\varphi) := (\Delta/m)\sum_{\ell=1}^m \exp\{-\mathrm{cef}[\varphi](\xi_\ell)\}$. This leads to an approximation of the objective $f$ given by $f_S(\varphi) := \frac{1}{n}\sum_{i=1}^n \varphi_i + I_S(\varphi)$. Since $f_S$ is a finite, convex function on $\mathbb{R}^n$, it has a subgradient at each $\varphi \in \mathbb{R}^n$, given by $\frac{1}{n}\mathbf{1} - (\Delta/m)\sum_{\ell=1}^m \exp\{-\mathrm{cef}[\varphi](\xi_\ell)\}\,\alpha_\ell$, where $\alpha_\ell$ is an optimal solution to $(Q_0)$ with $x = \xi_\ell$. As the effective domain of $\mathrm{cef}[\varphi](\cdot)$ is $C_n$, we consider grid points $S \subseteq C_n$. We now illustrate how these ideas allow us to approximate the gradient of the smoothing sequence, and initially consider Nesterov smoothing, with $\ell_u = \breve f_u$. If $S = \{\xi_1, \ldots, \xi_m\} \subseteq C_n$ denotes a collection of grid points (either deterministic or Monte Carlo based), then $\nabla_\varphi \ell_u$ can be approximated by $\breve g_{u,S}$, obtained by replacing the integral in the expression for $\nabla_\varphi \breve f_u$ with its approximation over $S$. We distinguish the cases of deterministic and random $S$ by writing this approximation as $\breve g^{\mathrm{D}}_{u,S}$ and $\breve g^{\mathrm{R}}_{u,S}$ respectively. For the randomized smoothing method with $\ell_u = \bar f_u$, the approximation is slightly more involved. Given $m$ grid points $S = \{\xi_1, \ldots, \xi_m\} \subseteq C_n$ (again either deterministic or random), and independent random vectors $z_1, \ldots, z_m$, each uniformly distributed on the unit Euclidean ball in $\mathbb{R}^n$, we can approximate $\nabla_\varphi \bar f_u(\varphi)$ by $\bar g^{\bullet}_{u,S}(\varphi) := \frac{1}{m}\sum_{\ell=1}^m G(\varphi + u z_\ell, \xi_\ell)$, with $\bullet \in \{\mathrm{D}, \mathrm{R}\}$ again distinguishing the cases of deterministic and random $S$.
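As a concrete toy version of the Monte Carlo approximation $I_S(\varphi) = (\Delta/m)\sum_{\ell} \exp\{-\mathrm{cef}[\varphi](\xi_\ell)\}$ (our own illustration, with a simple convex function standing in for $\mathrm{cef}[\varphi]$ and the unit square standing in for $C_n$):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_integral(neg_log_density, sampler, volume, m):
    """Monte Carlo analogue of I_S: draw m uniform points xi_l from the
    domain and return (volume/m) * sum_l exp(-neg_log_density(xi_l))."""
    pts = sampler(m)
    return volume / m * np.exp(-neg_log_density(pts)).sum()

# Toy example on the square C = [0, 1]^2 (volume Delta = 1), with the convex
# "tent-like" function g(x) = x_1 + x_2 standing in for cef[phi].
g = lambda P: P[:, 0] + P[:, 1]
sampler = lambda m: rng.random((m, 2))
I_hat = mc_integral(g, sampler, volume=1.0, m=200000)
# Exact value of the integral: (1 - e^{-1})^2, approximately 0.3996.
```

Replacing the random sampler with a fixed lattice of points gives the deterministic Riemann-sum variant of the same estimator.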

Related literature
As mentioned above, Algorithm 1 is an accelerated version of the dual averaging method of Nesterov (2009), which to the best of our knowledge has not previously been studied in the context of log-concave density estimation. Nevertheless, related ideas have been considered for other optimization problems (e.g. Xiao, 2010; Duchi et al., 2012). Relative to previous work, our approach is quite general, in that it applies to both of the smoothing techniques discussed in Section 3.1, and allows the use of both deterministic and random grids to approximate the gradients of the smoothing sequence. Another key difference from earlier work is that we allow the grid $S$ to depend on $t$, so we write it as $S_t$, with $m_t := |S_t|$; in particular, inspired by both our theoretical results and numerical experience, we take $(m_t)$ to be a suitably increasing sequence.

Theoretical analysis of optimization error of Algorithm 1
We have seen in Propositions 3 and 4 that the two smooth functions $\breve f_u$ and $\bar f_u$ enjoy similar properties: according to Proposition 3(a) to (c) and Proposition 4(a) to (d), both $\breve f_u$ and $\bar f_u$ satisfy the following assumption.

Assumption 1 (Assumptions on smoothing sequence). There exists $r \ge 0$ such that for any $\varphi \in \Phi$, (a) we can find constants $B_0, B_1 > 0$ for which $\ell_u$ approximates $f$ to within $B_0 u$ and $\nabla \ell_u$ is Lipschitz with constant $B_1/u$.

Recall from Section 3 that we have four possible choices corresponding to a combination of the smoothing and integral approximation methods, as summarized in Table 1. Once we select an option, in Line 3 of Algorithm 1, we can take $g_t = \check g^{\bullet}_{u_t, S_t}(\varphi^{(y)}_t)$, where $\check{} \in \{\breve{}, \bar{}\}$ and $\bullet \in \{\mathrm{D}, \mathrm{R}\}$. To encompass all four approximation choices in Line 3 of Algorithm 1, we make the following assumption on the gradient approximation error $e_t := g_t - \nabla_\varphi \ell_{u_t}(\varphi^{(y)}_t)$.

Assumption 2 (Gradient approximation error). There exists $\sigma > 0$ such that

$$\mathbb{E}\bigl(\|e_t\|^2 \mid \mathcal{F}_{t-1}\bigr) \le \sigma^2/m_t \quad \text{almost surely}, \qquad (17)$$

where $\mathcal{F}_{t-1}$ denotes the $\sigma$-algebra generated by all random sources up to iteration $t-1$ (with $\mathcal{F}_{-1}$ denoting the trivial $\sigma$-algebra).
When $S$ is a Monte Carlo random grid (options 2 and 4), the approximate gradient $g_t$ is an average of $m_t$ independent and identically distributed random vectors, each being an unbiased estimator of $\nabla \ell_{u_t}(\varphi^{(y)}_t)$. Hence, (17) holds with $\sigma^2$ determined by the bounds in Proposition 3(d) (option 2) and Proposition 4(e) (option 4). For a deterministic Riemann grid $S$ and Nesterov's smoothing technique (option 1), $e_t$ is deterministic, and arises from using $\breve g_{u,S}(\varphi)$ in (15) to approximate $\nabla_\varphi \breve f_u(\varphi)$. For the deterministic Riemann grid and randomized smoothing (option 3), the error $e_t$ can be decomposed into a random estimation error term (induced by $z_1, \ldots, z_{m_t}$) and a deterministic approximation error term (induced by $\xi_1, \ldots, \xi_{m_t}$). It can be shown using this decomposition that $\mathbb{E}(\|e_t\|^2 \mid \mathcal{F}_{t-1}) = O(1/m_t)$ under regularity conditions. Theorem 1 below establishes our desired computational guarantees for Algorithm 1. We write $D := \sup_{\varphi, \tilde{\varphi} \in \Phi} \|\varphi - \tilde{\varphi}\|$ for the diameter of $\Phi$.
For related general results that control the expected optimization error for smoothed objective functions, see, e.g., Nesterov (2005), Tseng (2008), Xiao (2010) and Duchi et al. (2012). With deterministic grids (corresponding to options 1 and 3), if we take the grid size to be a constant $m$, then the gradient approximation error does not decrease with $t$, and the upper bound in (19) does not converge to zero as $T \to \infty$. On the other hand, if we take $|S_t| = t^2$, for example, then the relevant error terms remain bounded over $T \in \mathbb{N}$, and we find that $\varepsilon_T = \tilde{O}(1/T)$. For random grids (options 2 and 4), taking $|S_t| = m$ for all $t$ recovers the $O(1/\sqrt{T})$ rate for stochastic subgradient methods (Polyak, 1987). A direct application of the theory of Duchi et al. (2012) would yield an error rate of $\varepsilon_T = O(n^{1/4}/T + 1/\sqrt{T})$. On the other hand, Theorem 1 shows that, owing to the increasing sequence of grid sizes used to approximate the gradients in Step 3 of Algorithm 1, we can improve this rate to $\tilde{O}(1/T)$. Note, however, that this improvement is in terms of the number of iterations $T$, and not the total number of stochastic oracle queries (equivalently, the total number of LPs $(Q_0)$), which is given by $T_{\mathrm{query}} := \sum_{t=0}^{T-1} m_t$. Agarwal et al. (2012) and Nemirovsky and Yudin (1983) have shown that the optimal expected error after $T_{\mathrm{query}}$ stochastic oracle queries is of order $1/\sqrt{T_{\mathrm{query}}}$, which is attained by the algorithm of Duchi et al. (2012). For our framework, by taking $m_t = t$, we have $T_{\mathrm{query}} = \sum_{t=0}^{T-1} m_t = \tilde{O}(T^2)$, so after $T_{\mathrm{query}}$ stochastic oracle queries, our algorithm also attains the optimal error on the objective function scale, up to a logarithmic factor. Other advantages of our algorithm and the theoretical guarantees provided by Theorem 1, relative to the earlier contributions of Duchi et al. (2012), are that we do not require an upper bound on $I(\varphi)$, and that we are able to provide a unified framework that includes Nesterov smoothing and an alternative gradient approximation approach by numerical integration, in addition to the randomized smoothing scheme with stochastic gradients. Moreover, we can exploit the specific structure of the log-concave density estimation problem to provide much better Lipschitz constants for the randomized smoothing sequence than would be obtained using the generic constants of Duchi et al. (2012). For example, our upper bound in Proposition 4(a) is of order $O(n^{-1/2}\log^{1/2} n)$, whereas a naive application of the general theory of Duchi et al. (2012) would only yield a bound of $O(1)$. A further improvement in our bound comes from the fact that it now involves $I(\varphi)$ directly, as opposed to an upper bound on this quantity.
In Theorem 1, the computational guarantee depends upon $B_0$, $B_1$ and $\sigma$ in Assumptions 1 and 2. In light of Propositions 3 and 4, Table 2 illustrates how these quantities, and hence the corresponding guarantees, differ according to whether we use Nesterov smoothing or randomized smoothing.
The randomized smoothing procedure requires solving LPs, whereas Nesterov's smoothing technique requires solving QPs. While both of these problems are highly structured and can be solved efficiently by off-the-shelf solvers (e.g. Gurobi Optimization, LLC, 2021), we found the LP solution times to be faster than those for the QP. Additional computational details are discussed in Section 6. Here, $\sigma$ corresponds to random grid points (options 2 and 4), the optimal $\eta$ is taken to be proportional to $\sigma$, the optimal $u$ is proportional to $B_1/B_0$, we take $C_1 = \sqrt{\Delta}e^{-\varphi_0}$, and $\sqrt{B_0 B_1}$ determines the first term in the error rate.
Note that Theorem 1 presents error bounds in expectation, though for option 1, since we use Nesterov's smoothing technique and the Riemann sum approximation of the integral, the guarantee in Theorem 1 holds without the need to take an expectation. Theorem 2 below presents corresponding high-probability guarantees. For simplicity, we present results for options 2 and 4, which rely on the following assumption.

Theorem 2. Suppose that Assumptions 1 and 3 hold, and define the sequence $(\theta_t)_{t \in \mathbb{N}_0}$ by $\theta_0 := 1$ and $\theta_{t+1} := 2\bigl(1 + \sqrt{1 + 4/\theta_t^2}\bigr)^{-1}$ for $t \in \mathbb{N}_0$. Let $u > 0$, let $u_t := \theta_t u$ and take $L_t = B_1/u_t$ and $\eta_t = \eta$ for all $t \in \mathbb{N}_0$ as input parameters to Algorithm 1. Writing $M_T$ as in Theorem 1, for any $\delta \in (0, 1)$, the optimization error $\varepsilon_T$ satisfies, with probability at least $1 - \delta$, a bound of the same $\tilde{O}(1/T)$ order as in Theorem 1.
For option 3, we would need to consider the approximation error from the Riemann sum, and the final error rate would include additional $O(1/T)$ terms. We omit the details for brevity.
Finally in this section, we relate the error in the objective to the error in terms of $\varphi$, as measured through the squared $L_2$ distance between the corresponding lower convex envelope functions.
Theorem 3. For any $\varphi \in \Phi$, the squared $L_2$ distance between the lower convex envelope functions corresponding to $\varphi$ and $\varphi^*$ is controlled by the optimization error $f(\varphi) - f(\varphi^*)$.

5 Beyond log-concave density estimation

In this section, we extend our computational framework beyond the log-concave density family, through the notion of $s$-concave densities. For $s \in \mathbb{R}$, define domains $D_s$ and transformations $\psi_s : D_s \to \mathbb{R}$ as in the following definition.

Definition 1 ($s$-concave density, Seregin and Wellner (2010)). For $s \in \mathbb{R}$, the class $P_s(\mathbb{R}^d)$ of $s$-concave density functions on $\mathbb{R}^d$ consists of the densities of the form $\psi_s \circ g$, where $g : \mathbb{R}^d \to D_s$ is concave.

For $s = 0$, the family of $s$-concave densities reduces to the family of log-concave densities. Moreover, for $s_1 < s_2$, we have $P_{s_2}(\mathbb{R}^d) \subseteq P_{s_1}(\mathbb{R}^d)$ (Dharmadhikari and Joag-Dev, 1988, p. 86). The $s$-concave density family introduces additional modelling flexibility, in particular allowing much heavier tails when $s < 0$ than the log-concave family, but we note that there is no guidance available in the literature on how to choose $s$.
For the problem of s-concave density estimation, we discuss two estimation methods, both of which have been previously considered in the literature, but for which there has been limited algorithmic development.The first is based on the maximum likelihood principle (Section 5.1), while the other is based on minimizing a Rényi divergence (Section 5.2).

Computation of the s-concave maximum likelihood estimator
Seregin and Wellner (2010) proved that a maximum likelihood estimator over $P_s(\mathbb{R}^d)$ exists with probability one for $s \in (-1/d, \infty)$ and $n > \max\bigl(\frac{dr}{r-d}, d\bigr)$, where $r := -1/s$, and does not exist if $s < -1/d$. Doss and Wellner (2016) provide some statistical properties of this estimator when $d = 1$. The maximum likelihood estimation problem (21) is to maximize the log-likelihood over $P_s(\mathbb{R}^d)$. We establish the following theorem.

Theorem 4. Let $s \in [0, 1]$ and suppose that the convex hull $C_n$ of the data is $d$-dimensional (so that the $s$-concave MLE $\hat{p}_n$ exists and is unique). Then computing $\hat{p}_n$ in (21) is equivalent to the convex minimization problem (23).

Remark 1. The equivalence result in Theorem 4 holds for any $s$ (outside $[0, 1]$) as long as the $s$-concave MLE exists. However, when $s \in [0, 1]$, (23) is a convex optimization problem. The family of $s$-concave densities with $s < 0$ appears to be more useful from a statistical viewpoint, as it allows for heavier tails than log-concave densities, but the MLE cannot then be computed via convex optimization. Nevertheless, the entropy minimization methods discussed in Section 5.2 can be used to obtain $s$-concave density estimates for $s > -1$.

Quasi-concave density estimation
Another route to estimating an $s$-concave density (or even a more general class) is via the following problem:

$$\hat{\phi} := \operatorname*{argmin}_{\phi} \; \frac{1}{n}\sum_{i=1}^n \phi(x_i) + \int_{\mathbb{R}^d} \Psi\bigl(\phi(x)\bigr)\,dx, \qquad (24)$$

where the minimum is taken over suitable convex functions $\phi$, and $\Psi : \mathbb{R} \to (-\infty, \infty]$ is a decreasing, proper convex function. When $\Psi(y) = e^{-y}$, (24) is equivalent to the MLE for log-concave density estimation (1), by Cule et al. (2010, Theorem 1). This problem, proposed by Koenker and Mizera (2010), is called quasi-concave density estimation. Koenker and Mizera (2010, Theorem 4.1) show that under some assumptions on $\Psi$, there exists a solution to (24), and if $\Psi$ is strictly convex, then the solution is unique. Furthermore, if $\Psi$ is differentiable on the interior of its domain, then the optimal solution to the dual of (24) is a probability density $p$ such that $p = -\Psi'(\phi)$, and the dual problem can be regarded as minimizing different distances or entropies (depending on $\Psi$) between the empirical distribution of the data and $p$. In particular, when $\beta \ge 1$ and $\Psi(y) = \mathbb{1}_{\{y \le 0\}}(-y)^\beta/\beta$, and when $\beta < 0$ and $\Psi(y) = -y^\beta/\beta$ for $y \ge 0$ (with $\Psi(y) = \infty$ otherwise), the dual problem of (24) essentially minimizes a Rényi divergence, and we have the primal-dual relationship $p = |\phi|^{\beta-1}$. In fact, this amounts to estimating an $s$-concave density via Rényi divergence minimization with $\beta = 1 + 1/s$ and $s \in (-1, \infty) \setminus \{0\}$. We therefore consider the finite-dimensional analogue (25) of this problem. The proof of Theorem 5 is similar to that of Theorem 4, and is omitted for brevity.
Theorem 5. Given a decreasing proper convex function $\Psi$, the quasi-concave density estimation problem (24) is equivalent to the convex problem (26), in the sense that $\hat{\phi} = \mathrm{cef}[\varphi^*]$, with corresponding density estimator $\hat{p}_n = -\Psi'(\mathrm{cef}[\varphi^*])$.

The objective in (26) is convex, so our computational framework can be applied to solve this problem.

Computational experiments on simulated data
In this section, we present numerical experiments to study the different variants of our algorithm and compare them with existing methods based on convex optimization for the log-concave MLE. Our results are based on large-scale synthetic datasets with $n \in \{5{,}000, 10{,}000\}$ observations generated from standard $d$-dimensional normal and Laplace distributions with $d = 4$. Code for our experiments is available from the GitHub repository LogConcComp at https://github.com/wenyuC94/LogConcComp.
All computations were carried out on the MIT Supercloud Cluster (Reuther et al., 2018) on an Intel Xeon Platinum 8260 machine, with 24 CPUs and 24GB of RAM.Our algorithms were written in Python; we used Gurobi (Gurobi Optimization, LLC, 2021) to solve the LPs and QPs.
Our first comparison method is that of Cule et al. (2010), implemented in the R package LogConcDEAD (Cule et al., 2009), and denoted by CSS. The CSS algorithm terminates when its stopping criterion falls below a tolerance $\tau$, and we consider $\tau \in \{10^{-2}, 10^{-3}, 10^{-4}\}$. Our other competing approach is the randomized smoothing method of Duchi et al. (2012), with random grids of a fixed grid size, which we denote here by RS-RF-$m$, with $m$ being the grid size. To the best of our knowledge, this method has not previously been used to compute the log-concave MLE.
We denote the different variants of our algorithm by Alg-V, where Alg $\in$ {RS, NS} represents Algorithm 1 with randomized smoothing or Nesterov smoothing respectively, and V $\in$ {DI, RI} indicates whether we use deterministic or random grids of increasing sizes to approximate the gradient. Further details of our input parameters are given in Appendix A.3.
Figure 2 presents the relative objective error, defined in (27) for an algorithm with iterates φ_1, …, φ_t, against both time (in minutes) and number of iterations. In the definition of the relative objective error in (27), φ* is taken to be the CSS solution with tolerance τ = 10^{−4}. The figure shows that randomized smoothing appears to outperform Nesterov smoothing in terms of the time taken to reach a desired relative objective error, since the former solves an LP (Q_0), whereas the latter has to solve a QP (Q_u); the number of iterations taken by the different methods is, however, similar. There is no clear winner between randomized and deterministic grids, and both perform well. Table 3 compares our proposed methods against the CSS solutions with different tolerances, in terms of running time, final objective function value, and distances of the algorithm outputs to the optimal solution φ* and the truth φ_truth. We find that all of our proposals yield marked improvements in running time over the CSS solution: with n = 10,000, d = 4 and τ = 10^{−4}, CSS takes more than 20 hours on every data set we considered, whereas the RS-DI variant is around 50 times faster. The CSS solution may have a slightly smaller objective function value on termination, but as shown in Table 3, all of our algorithms achieve an optimization error that is small by comparison with the statistical error, and from a statistical perspective there is no point in seeking to reduce the optimization error further. Table 4 shows that the distances ‖φ* − φ_truth‖/n^{1/2} are well concentrated around their means (i.e. they do not vary greatly over different random samples drawn from the underlying distribution), which provides further reassurance that our solutions are sufficiently accurate for practical purposes. On the other hand, the CSS solution with tolerance 10^{−3} is not always sufficiently reliable in terms of its statistical accuracy, e.g. for a Laplace distribution with n = 5,000.
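For concreteness, here is a sketch of how such a relative objective error profile might be computed. The paper's exact definition (27) is not reproduced in this excerpt; the monotone best-so-far convention below, and the function name, are our own assumptions:

```python
def relative_objective_error(obj_values, obj_star):
    """Relative objective error profile (sketch; definition (27) assumed).

    obj_values: objective values f(phi_1), ..., f(phi_t) along the iterates.
    obj_star: f(phi*), with phi* the high-accuracy CSS solution (tau = 1e-4).
    Returns the best-so-far relative error after each iteration, which is
    monotone decreasing and suitable for log-scale plotting.
    """
    best = float("inf")
    profile = []
    for f_t in obj_values:
        # relative gap of the current iterate, then keep the running best
        best = min(best, (f_t - obj_star) / abs(obj_star))
        profile.append(best)
    return profile
```
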
Our further experiments on real data sets reported in Appendix A.4 provide qualitatively similar conclusions.
Finally, Figure 3 compares our proposed multi-stage increasing grid sizes (RS-DI/RS-RI) (see Tables 5 and 6) with the fixed grid size (RS-RF) proposed by Duchi et al. (2012), in the randomized smoothing setting. We see that the benefits of increasing grid sizes, as described by our theory, carry over to improved practical performance, both in terms of running time and number of iterations.
A.1 Initialization

To approximate the integral, we use the same simple Riemann grid points mentioned in Section 3.2.1. Subgradients of the objective (28) are straightforward to compute via the chain rule and the subgradient of the maximum function (see, e.g., Bertsekas, 2009). After standardizing each coordinate of our data to have mean zero and unit variance, which does not affect the final outcome owing to the affine equivariance of the log-concave MLE (Dümbgen et al., 2011, Remark 2.4), we generate m = 10 initializing hyperplanes from a standard (d + 1)-dimensional Gaussian distribution. We then obtain the initializer for our main algorithm by applying a vanilla subgradient method to the objective (28) (Shor, 1985; Polyak, 1987) with stepsize t^{−1/2} at the tth iteration, terminating when the difference in the objective function between successive iterations drops below 10^{−4}, or after 100 iterations, whichever is sooner. This technique is related to the non-convex method for log-concave density estimation proposed by Rathke and Schnörr (2019), who considered a smoothed version of (28). Our goal here is only to find a good initializer rather than the global optimum, and we found the approach described above effective in this respect, as well as faster to compute than the method of Rathke and Schnörr (2019).
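To make this procedure concrete, here is a minimal Python sketch. It assumes (28) is a max-of-hyperplanes objective of the form φ(x) = max_j (a_j⊤x + b_j), with empirical term (1/n)Σ_i φ(x_i) plus a Riemann-sum approximation of ∫ exp(−φ); the function name, signature, and these modelling details are our assumptions, not the LogConcComp implementation:

```python
import numpy as np

def subgrad_init(X, grid, weights, m=10, max_iter=100, tol=1e-4, seed=0):
    """Vanilla subgradient method to find an initializer (sketch only).

    X: (n, d) standardized data; grid: (M, d) Riemann grid points;
    weights: (M,) Riemann weights for the integral term.
    Returns an (m, d + 1) array of hyperplanes (a_j, b_j).
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((m, d + 1))  # m random initializing hyperplanes

    def phi(P, H):
        # evaluate the max-of-hyperplanes function and record the argmax
        vals = P @ H[:, :d].T + H[:, d]
        return vals.max(axis=1), vals.argmax(axis=1)

    def objective(H):
        ph_X, _ = phi(X, H)
        ph_G, _ = phi(grid, H)
        return ph_X.mean() + np.sum(weights * np.exp(-ph_G))

    obj_prev = objective(H)
    for t in range(1, max_iter + 1):
        ph_X, jX = phi(X, H)
        ph_G, jG = phi(grid, H)
        G = np.zeros_like(H)
        # chain rule + subgradient of the max: only active hyperplanes move
        for i in range(n):
            G[jX[i], :d] += X[i] / n
            G[jX[i], d] += 1.0 / n
        w = -weights * np.exp(-ph_G)
        for k in range(grid.shape[0]):
            G[jG[k], :d] += w[k] * grid[k]
            G[jG[k], d] += w[k]
        H = H - G / np.sqrt(t)  # stepsize t^{-1/2} at iteration t
        obj = objective(H)
        if abs(obj_prev - obj) < tol:  # early stopping, as described above
            break
        obj_prev = obj
    return H
```

The early-stopping rule mirrors the text: terminate when successive objective values differ by less than 10^{−4}, or after 100 iterations, whichever is sooner.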

A.2 Final polishing step
As mentioned in Section 3.2.1, once our algorithm terminates at φ̃_T, say, we evaluate the integral I(φ̃_T) in the same way as Cule et al. (2010). Our final output is then φ_T := φ̃_T + log I(φ̃_T)·1; this final step not only improves the objective function, but also guarantees that exp[−cef[φ_T](·)] is a log-concave density. This can be shown by following the same arguments as in Steps 2–3 of the proof of Theorem 4. For the competing RS-RF-m method, we present the better of the results from σ ∈ {10^{−3}, 10^{−4}}.
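The shift itself is elementary: since cef[φ + c·1] = cef[φ] + c for any constant c, adding log I(φ̃_T) to every coordinate divides the induced density exp(−cef[·]) pointwise by I(φ̃_T), so it integrates to exactly one. A minimal sketch (the helper name is ours, and the integral value is assumed precomputed as in Cule et al. (2010)):

```python
import numpy as np

def polish(phi_tilde, integral_value):
    """Final polishing step (sketch).

    phi_tilde: (n,) values of the terminal iterate at the data points.
    integral_value: I(phi_tilde), the integral of exp(-cef[phi_tilde]),
    assumed evaluated as in Cule et al. (2010).
    Shifting every coordinate by log I shifts cef by the same constant,
    which divides the density exp(-cef) by I, so it integrates to one.
    """
    return phi_tilde + np.log(integral_value)
```
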

A.3 Input parameter settings
To illustrate the increasing grid size strategy used in our experiments, we first present in Table 5 some potential schemes that achieve the Õ(1/T) error rate on the objective function scale. In our experiments, we used the multi-stage increasing grid size scheme with C_1 = 8 and ρ_1 = 2. For the random grid (RI), we take C = 5,000 and ρ = 2. For the deterministic grid (DI), we first choose an axis-aligned grid with m_{0,t} points in each dimension that encloses the convex hull C_n of the data; m_t is then the number of these grid points that fall within C_n. Table 6 illustrates the multi-stage strategy used in the numerical experiments for a Laplace distribution with n = 5,000 and d = 4. Code for the other settings is available in the github repository LogConcComp.
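As a purely hypothetical illustration of how such a multi-stage schedule might be generated (the authoritative schemes are in Tables 5 and 6; the stage-length rule below is our assumption, chosen only to be consistent with the constants C_1 = 8, ρ_1 = 2, C = 5,000 and ρ = 2 quoted above):

```python
def multistage_schedule(n_stages=4, C1=8, rho1=2, C=5000, rho=2):
    """Hypothetical sketch of a multi-stage increasing grid-size schedule.

    Assumed form (NOT the paper's exact scheme): stage a = 1, ..., n_stages
    lasts C1 * rho1**(a-1) iterations and uses a random grid of size
    C * rho**(a-1), so both stage length and grid size grow geometrically.
    Returns a list of (stage_length, grid_size) pairs.
    """
    return [(C1 * rho1 ** (a - 1), C * rho ** (a - 1))
            for a in range(1, n_stages + 1)]
```
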

A.4 Experimental results on real data sets
We provide additional simulation results on three real data sets:
• Stock returns: These data consist of daily returns of four stocks over n = 10,000 randomly sampled days between 1970 and 2010, normalized so that each dimension has unit variance. The data are available at https://stooq.com/db/h/.
• Census: These data consist of percentages of the population in different age groups (18-24, 25-44, 45-64 and 65+) for n = 10,000 randomly sampled Census tracts, based on the 2015-2019 5-year ACS (American Community Survey), normalized so that each dimension has unit variance. The data and description are available at https://www.census.gov/topics/research/guidance/planning-databases/2021.html.
• Gas turbine: These data consist of four sensor measurements aggregated over one hour from a gas turbine, for n = 10,000 hours between 2011 and 2015, normalized so that each dimension has unit variance. The data are available at https://archive.ics.uci.edu/ml/datasets/Gas+Turbine+CO+and+NOx+Emission+Data+Set.
Table 7, Figure 4 and Figure 5 provide results corresponding to those in Table 3, Figure 2 and Figure 3 respectively, but for the three real data sets. The table and figures reveal a qualitatively very similar story to that presented for the simulated data in Section 6: the main conclusion is that our randomized smoothing approaches are significantly more computationally efficient than both the Nesterov smoothing and CSS methods.
Table 3: Comparison of our proposed methods with the CSS solution (Cule et al., 2010) and RS-RF (Duchi et al., 2012). On a single data set, we ran 5 repetitions of each algorithm with different random seeds and report the median statistics. Here, obj and relobj denote the objective and relative objective error respectively, runtime denotes the running time (in minutes), dopt and dtruth denote the (Euclidean) distances between the algorithm outputs and the optimal solution and the truth respectively, iter denotes the number of iterations, tO denotes the total number of oracles (grid points), aO denotes the average number of oracles (grid points) per iteration, and hO denotes the harmonic average of grid sizes (which equals T/(M_T^{(1)})^2). For CSS, param is the tolerance τ; for RS-RF, param is the (fixed) grid size m. Here '-' means that the running time of the corresponding algorithm exceeded 20 hours.

Table 5: Potential schemes for the grid sizes (with M_T^{(1)} = Õ(1) for random S_t). Here, C and C_1 are positive constants. For the multi-stage scheme, a ≥ 1 denotes the current stage number.

B Proofs

B.1 Proofs of Propositions 1 and 2
The proof of Proposition 1 is adapted from the proof of Axelrod et al. (2019, Lemma 2), which in turn is based on Carpenter et al. (2018, Lemma 8).
Proof of Proposition 1. The proof has three parts.
Part 1. We first prove the result for φ̂. We proceed to obtain an upper bound on R. To this end, let p̄_n denote the uniform density over C_n.
If p̂_n(x_i) = p̄_n(x_i) = 1/∆ for all i, then (29) holds. So we may assume that p̂_n ≠ p̄_n, so that R > 1 and M > 1/∆. For a density p on R^d and for t ∈ R, let L_p(t) := {x ∈ R^d : p(x) ≥ t} denote the super-level set of p at height t. Since p̂_n is supported on C_n, and since p̂_n(x) On the other hand, since inf_{x∈C_n} p̂_n(x) = M/R, we have (M/R)·∆ ≤ 1, so for R < e, we have M ≤ R/∆ < e/∆. We deduce that for all R > 1, where log_+(x) := 1 ∨ log x. Now, by the optimality of p̂_n, we have so that R ≤ (M∆)^n. It follows that when R ≥ e, we have from (31), and the fact that log y ≤ y for y > 0, that Since (33) holds trivially when R < e, we may combine (33) with (31) to obtain Moreover, from (32) and (34), we also have as required.
Part 2. Now we extend the above result to all φ ∈ R^n such that I(φ) = 1 and f(φ) ≤ f(φ̄), where φ̄ is defined just after Proposition 1. The key observation here is that the proof of Part 1 applies to any density with log-likelihood at least that of the uniform distribution over C_n. In particular, for any φ satisfying these conditions, the density p ∈ F_d given by p(x) = exp{−cef[φ](x)} has log-likelihood at least that of the uniform distribution over C_n, so the conclusion of Part 1 applies.
Part 3. We now consider the case of a general φ ∈ R^n with f(φ) ≤ f(φ̄). Let φ̃ := φ + log I(φ)·1, so that I(φ̃) = 1 and f(φ̃) ≤ f(φ). The result therefore follows by Part 2.

B.2 Proofs of Proposition 3 and Proposition 4
The proof of Proposition 3 is based on the following properties of the quadratic program q_u[φ](x) defined in (Q_u), as well as its unique optimizer α*_u[φ](x).

Proposition 5. For φ ∈ Φ and x ∈ C_n, we have (a) ‖α*_u[φ](x)‖ ≤ 1 for any x ∈ C_n and φ ∈ Φ; ‖α*_u[φ](x) − α*_u[φ′](x)‖ ≤ (1/u)‖φ − φ′‖ for any u > 0, any φ, φ′ ∈ Φ and any x ∈ C_n.

Proof. The proof exploits ideas from Nesterov (2005). For (a), observe that α*_u[φ](x) lies in a simplex, and this simplex is the convex hull of n + 1 points that all lie in the closed unit Euclidean ball in R^n.
The lower bound in (b) follows immediately from the definition of the quadratic program in (Q_u). For the upper bound, for u′ ∈ [0, u], we have. By the mean value theorem, there exists t_0 ∈ [0, 1] such that where the final bound follows from Proposition 5(a) and (c). Now, for any x ∈ C_n, we have by (38), as well as Proposition 5(a), (c) and (d), that It follows that for any φ, φ′ ∈ Φ, we have (d) For u ≥ 0 and φ ∈ Φ, it follows from Proposition 5(a) and (c) that Therefore, for any x ∈ C_n, we have cef Recall that all subgradients of I at φ̃ ∈ R^n are of the form −γ(φ̃), where for some α ∈ A[φ̃](x). Moreover, as we saw in the proof of Proposition 2, ‖γ(φ̃)‖_1 = I(φ̃). We deduce from Rockafellar (1997, Theorem 24.7) that where the final inequality uses Proposition 6.
(b) By the convexity of f, we have where the last inequality uses property (a). (e) The proof is very similar to that of Proposition 3(d), and is omitted for brevity.
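To build intuition for (Q_u) and Proposition 5, the following self-contained sketch computes a Nesterov-smoothed maximum of linear functions with a Euclidean prox term over the standard probability simplex. This is a simplification of the actual feasible set of (Q_u) (the convex hull of n + 1 points in the unit ball), and all names are ours; it illustrates the closed-form maximizer and the (1/u)-Lipschitz behaviour asserted in Proposition 5:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def smoothed_max(v, u):
    """Nesterov smoothing of max_i v_i with a Euclidean prox (sketch).

    q_u(v) = max_{alpha in simplex} <alpha, v> - (u/2) ||alpha||^2,
    whose unique maximizer is the projection of v/u onto the simplex.
    As u -> 0 this recovers max_i v_i; since projections are 1-Lipschitz,
    the maximizer is (1/u)-Lipschitz in v, mirroring Proposition 5.
    """
    alpha = project_simplex(v / u)
    return alpha @ v - 0.5 * u * (alpha @ alpha), alpha
```
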

B.3 Proofs of Theorem 1 and Theorem 2
We will make use of the following lemma.

Lemma 1 (Lemma 4.2 of Duchi et al. (2012)). Let (f_{u_t})_t be a smoothing sequence such that φ ↦ f_{u_t}(φ) has an L_t-Lipschitz gradient, and let (φ_t)_{t=0}^T, (φ_t^{(y)})_{t=0}^T and (φ_t^{(z)})_{t=0}^T be the sequences generated by Algorithm 1. Let g_t denote an approximation of ∇f_{u_t}(φ_t^{(y)}) with error e_t := g_t − ∇f_{u_t}(φ_t^{(y)}). Then for any φ ∈ Φ and t ∈ N, we have

Recall the definition of the diameter D of Φ given just before Theorem 1.
Corollary 2. Fix u, η > 0, and assume that Assumption 1 holds with r ≥ u. Suppose in Algorithm 1 that u_t = θ_t u, L_t = B_1/u_t and η_t = η. Let (φ_t)_{t=0}^T be the sequences generated by Algorithm 1, and let e_t := g_t − ∇f_{u_t}(φ_t^{(y)}). Then for any φ ∈ Φ, we have

Proof. By induction, we have θ_t ≤ 2/(t + 2) and Σ_{τ=0}^t 1/θ_τ = 1/θ_t² for all t = 0, 1, …, T (Tseng, 2008; Duchi et al., 2012). Using Assumption 1, we have Hence, by Lemma 1, where we have used the facts that θ

Proof of Theorem 1. By Corollary 2, it suffices to bound θ² To this end, we have by Assumption 2 that We deduce that Moreover, by Assumption 2 again, The bound (18) follows from Corollary 2, together with (44) and (45), and the bound (19) then follows directly from the choices of u and η and the fact that where the second equality uses the fact that φ − φ_t^{(z)} is F_{t−1}-measurable. This allows us to remove the last term of the two inequalities in the theorem.
Proof of Theorem 2. By Corollary 2, it suffices to obtain a high-probability bound for θ_t²; we have from the proof of Theorem 1 that ⟨e_t, ζ_t⟩/θ_t is a martingale difference sequence under Assumption 3. Note that ζ_t is F_{t−1}-measurable, and we now show that ⟨e_t, ζ_t⟩ is √2 σ_t D sub-Gaussian, conditional on F_{t−1}. For any x ∈ R, we have e^x ≤ x + e^{x²}. Hence, for λ ∈ R such that λ²σ_t²D² ≤ 1, we have by the conditional version of Jensen's inequality that We deduce that ⟨e_t, ζ_t⟩/θ_t is (√2 σ_t D)/θ_t sub-Gaussian, conditional on F_{t−1}. Applying the Azuma-Hoeffding inequality (e.g. Azuma, 1967) therefore yields that for every ε > 0, where the last inequality uses the facts that θ_{T−1} ≤ θ_t and θ_{T−1} ≤ 2/T. Therefore, for every δ ∈ (0, 1), we have with probability at least 1 − δ/2 that Next, we turn to a tail bound for Σ_{t=0}^{T−1} ‖e_t‖². By Assumption 3 and Jensen's inequality, we have Now define the random variables Ξ_t := ‖e_t‖² − E(‖e_t‖² | F_{t−1}). Then by Markov's inequality, for every ε > 0, Moreover, by Markov's inequality again, and then Jensen's inequality, we have for every ε > 0 that It follows by, e.g., Duchi et al. (2012, Lemma F.7), that Ξ_t is sub-exponential with parameters λ_t := 1/(2σ_t²) = m_t/(2σ²) and τ_t² for |λ| ≤ 1/(2σ_t²). Now define Λ_T := min_{t=0,…,T−1} λ_t = m_0/(2σ²) (as we assume (m_t) is increasing) and C_T := T. We claim that Σ_{t=0}^{T−1} Ξ_t is sub-exponential with parameters Λ_T and C_T, and prove this by induction on T. The base case T = 1 holds by (48), so suppose the claim holds for a given T ∈ N.
Then for λ ∈ R with |λ| ≤ min(Λ_T, λ_T) = Λ_{T+1}, we have which proves the claim by induction. We deduce by, e.g., Buldygin and Kozachenko (2000, Lemma 1.4.1), that for every ε > 0 and T ∈ N, In other words, with probability at least 1 − δ/2, Applying (46), (47) and (49) in Corollary 2, together with a union bound, yields that with probability at least 1 − δ, Taking the same choices of φ, u and η as in Theorem 1, we obtain the final result.
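For reference, the conditional form of the Azuma-Hoeffding inequality invoked in the proof above can be stated as follows (standard form; the constants in the version cited may differ slightly):

```latex
% Azuma--Hoeffding for conditionally sub-Gaussian martingale differences:
% if (D_t) is a martingale difference sequence with respect to a filtration
% (\mathcal{F}_t), and each D_t is conditionally \sigma_t-sub-Gaussian given
% \mathcal{F}_{t-1}, i.e. \mathbb{E}(e^{\lambda D_t} \mid \mathcal{F}_{t-1})
% \leq e^{\lambda^2 \sigma_t^2/2} for all \lambda \in \mathbb{R}, then
\[
  \mathbb{P}\biggl(\sum_{t=0}^{T-1} D_t \geq \epsilon\biggr)
  \leq \exp\biggl(-\frac{\epsilon^2}{2\sum_{t=0}^{T-1} \sigma_t^2}\biggr)
  \quad \text{for every } \epsilon > 0.
\]
```

In the proof, this is applied with D_t = ⟨e_t, ζ_t⟩/θ_t, which is shown to be (√2 σ_t D)/θ_t sub-Gaussian conditional on F_{t−1}.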

B.4 Proof of Theorem 3
To prove Theorem 3, we first introduce the following lemma.
(2) By (53).

Proof of Theorem 4. The proof is split into four steps. The first three steps hold for any s ∈ R, while in Step 4 we establish the convexity of the objective when s ∈ [0, 1].
Step 1: We claim that any solution p̃_n to (21) is supported on C_n, so that ϕ*(x) = ∞ when x ∉ C_n, where ϕ* is the solution to (22). Indeed, suppose for a contradiction that p = ψ_s ∘ ϕ ∈ P_s(R^d) is such that Σ_{i=1}^n log p(x_i) > −∞ and ∫_{C_n} p(x) dx = c < 1 = ∫_{R^d} p(x) dx. We may assume that c > 0, because otherwise p(x) = 0 for almost all x ∈ C_n, which would mean that Σ_{i=1}^n log p(x_i) = −∞. This establishes our desired contradiction.
Beyond the papers already discussed, early theoretical work on log-concave density estimation includes Pal et al. (2007), Dümbgen and Rufibach (2009), Walther (2009), Cule and Samworth (2010), Dümbgen et al. (2011), Schuhmacher et al. (2011), Samworth and Yuan (2012) and Chen and Samworth (2013). Sometimes, the class P_d is considered as a special case of the class of s-concave densities (Koenker and Mizera, 2010; Seregin and Wellner, 2010; Han and Wellner, 2016a; Doss and Wellner, 2016; Han, 2021); see also Section 5. Much recent work has focused on rates of convergence, which are best understood in the Hellinger distance d_H, given by d_H²(p, q) := ∫_{R^d} (p^{1/2} − q^{1/2})². For the case of correct model specification, i.e. where p̂_n is computed from an independent and identically distributed sample of size n from p_0 ∈ P_d, it is now known (Kim and Samworth, 2016; Kur et al., 2019) that sup_{p_0 ∈ P_d} E d_H²(p̂_n, p_0) ≤ K_d n^{−4/5} when d = 1 and sup_{p_0 ∈ P_d} E d_H²(p̂_n, p_0) ≤ K_d n^{−2/(d+1)} log n when d ≥ 2, where K_d > 0 depends only on d, and that this risk bound is minimax optimal (up to the logarithmic factor when d ≥ 2). See also Carpenter et al. (2018) for an earlier result in the case d ≥ 4, and Xu and Samworth (2021) for an alternative approach to high-dimensional log-concave density estimation that seeks to evade the curse of dimensionality in the additional presence of symmetry constraints. It is further known that when d ≤ 3, the log-concave maximum likelihood estimator can adapt to certain subclasses of log-concave densities, including log-concave densities whose logarithms are piecewise affine (Kim et al., 2018; Feng et al., 2021). See also Barber and Samworth (2021) for recent work on extensions to the misspecified setting (where the true distribution from which the data are drawn does not have a log-concave density).

Figure 2: Plots on a log-scale of Relative Objective versus time (mins) [left panel] and number of iterations [right panel].For each of our four synthetic data sets, we ran five repetitions of each algorithm, so each bold line corresponds to the median of the profiles of the corresponding algorithm, and each thin line corresponds to the profile of one repetition.For the right panel, we show the profiles up to 128 iterations.

Figure 4: Additional plots on a log-scale of Relative Objective versus time (mins) [left panel] and number of iterations [right panel].Details are given in the caption of Figure 2.

Figure 5: Plots on a log-scale of Relative Objective versus time (mins) [left panel] and number of iterations [right panel].

Table 1: Summary of options for smoothing and gradient approximation methods.

Table 2: Comparison of constants in Assumption 1 for different smoothing schemes with u ∈ [0, r].

Table 4: Statistics of the distance between the optimal solution and the truth. For each type of data set, we drew 40 random samples of the sizes given, and computed the log-concave MLE by CSS with tolerance 10^{−4}.

Table 6: Summary of the increasing grid size strategy (illustrated with n = 5,000 observations from a Laplace distribution in four dimensions). We take a four-stage grid strategy with 128 iterations in total, with the stage lengths shown in the second line. For deterministic grids (denoted by DI), we use m_{0,t} to determine the grid size (third line of the table), and the fourth line gives the corresponding grid size. For random grids (denoted by RI), the fifth line gives the size of the random sample grid.

Table 7: Comparison of our proposed methods with the CSS solution (Cule et al., 2010) and RS-RF (Duchi et al., 2012), on the three real data sets. Details are given in the caption of Table 3.
Recall that C_d denotes the class of proper, convex, lower-semicontinuous functions ϕ : R^d → (−∞, ∞] that are coercive in the sense that ϕ(x) → ∞ as ‖x‖ → ∞. Recall further from Dümbgen et al. (2011, Theorem 2.2) that if P is a distribution on R^d with ∫_{R^d} ‖x‖ dP(x) < ∞ and P(H) < 1 for all hyperplanes H, then the strictly convex function