Nonparametric estimation of simplified vine copula models: comparison of methods

In the last decade, simplified vine copula models have been an active area of research. They build a high-dimensional probability density from the product of marginal densities and bivariate copula densities. Besides parametric models, several approaches to nonparametric estimation of vine copulas have been proposed. In this article, we extend these approaches and compare them in an extensive simulation study and a real data application. We identify several factors driving the relative performance of the estimators, the most important being the strength of dependence. No method was found to be uniformly better than all others. Overall, the kernel estimators performed best, but they do worse than the penalized B-spline estimators when dependence is weak and there is no tail dependence.


Introduction
Simplified vine copulas, or pair-copula constructions (PCC), have become very popular over the last decade (Cooke, 2002, Aas et al., 2009, Czado, 2010). In this article, we extend and compare several nonparametric approaches to estimating such models. Our main contributions are the following:
• The estimators of Kauermann and Schellhase (2014) and Scheffer and Weiß (2016) are extended to allow for general R-vine structures (as opposed to just D- and/or C-vine structures).
• Beyond the classical kernel density estimator used in Nagler and Czado (2016), we further consider local linear and local quadratic likelihood kernel estimators.
• All pair-copula estimators can be combined with structure selection algorithms using both Kendall's τ and a corrected AIC as target criterion.
The remainder of this article is organized as follows. Section 2 introduces simplified vine copula models. Section 3 presents and extends several existing nonparametric methods for pair-copula estimation, describes a step-wise estimation algorithm for vine-copula estimation, and discusses approaches for model selection. We describe the design of our simulation study in Section 4 and summarize the results in Section 5. In Section 6, a real data set is used to illustrate the estimators' behavior and demonstrate the necessity for nonparametric estimators. Section 7 contains our conclusions.

Background on simplified vine copula models
This section gives a brief introduction to pair-copula constructions. For a more extensive treatment, we refer to Aas et al. (2009), Czado (2010), and Joe (2014). By Sklar's theorem (Sklar, 1959), any multivariate distribution function F can be split into its marginal distributions F_1, \dots, F_d and a copula C:

F(x_1, \dots, x_d) = C\bigl(F_1(x_1), \dots, F_d(x_d)\bigr).

The copula C describes the dependence structure of the random vector X. It is, in fact, the joint distribution of the random vector U = (U_1, \dots, U_d) = \bigl(F_1(X_1), \dots, F_d(X_d)\bigr). Note that U_1, \dots, U_d are uniformly distributed on the unit interval. If F admits a density, we can differentiate the above equation to get

f(x_1, \dots, x_d) = c\bigl(F_1(x_1), \dots, F_d(x_d)\bigr) \prod_{j=1}^{d} f_j(x_j),    (1)

where c, f_1, \dots, f_d are the probability density functions corresponding to C, F_1, \dots, F_d, respectively. Following Joe (1997) and Cooke (2001, 2002), any copula density c can be decomposed into a product of d(d − 1)/2 bivariate (conditional) copula densities. The decomposition is not unique, but all possible decompositions can be organized in a graphical structure, called a regular vine (R-vine). It is a sequence of trees \mathcal{V} = (T_1, \dots, T_{d−1}) with T_m = (V_m, E_m), m = 1, \dots, d − 1, satisfying the following conditions: (i) T_1 is a tree with nodes V_1 = {1, \dots, d} and edges E_1.
(ii) For m ≥ 2, T_m is a tree with nodes V_m = E_{m−1} and edges E_m.
(iii) (Proximity condition) Whenever two nodes in T_{m+1} are joined by an edge, the corresponding edges in T_m must share a common node. Figure 1 shows an example of a regular vine with each edge e annotated by (j_e, k_e; D_e). The notation for an edge e in T_i depends on the two edges in T_{i−1} that it joins, denoted by a = (j_a, k_a; D_a) and b = (j_b, k_b; D_b) with V_a = {j_a, k_a} ∪ D_a and V_b = {j_b, k_b} ∪ D_b. Here D_e is called the conditioning set, while {j_e, k_e} is the conditioned set of an edge e. In tree T_i, the nodes a and b are joined by the edge e = (j_e, k_e; D_e) with j_e = min{l : l ∈ (V_a ∪ V_b) \ D_e}, k_e = max{l : l ∈ (V_a ∪ V_b) \ D_e}, and D_e = V_a ∩ V_b.
A vine copula is a graphical model describing the dependence of a d-variate random vector U = (U_1, \dots, U_d) ∼ C. The vine tree sequence is also called the vine structure of the vine copula model. This model identifies each edge e of the vine with a bivariate copula c_{j_e,k_e;D_e} (called pair-copula). The joint density of the vine copula can then be written as the product of all pair-copula densities:

c(u) = \prod_{m=1}^{d-1} \prod_{e \in E_m} c_{j_e,k_e;D_e}\bigl(C_{j_e|D_e}(u_{j_e} \mid u_{D_e}), C_{k_e|D_e}(u_{k_e} \mid u_{D_e}); u_{D_e}\bigr),    (2)

where u_{D_e} := (u_\ell)_{\ell \in D_e} is a subvector of u = (u_1, \dots, u_d) ∈ [0, 1]^d and C_{j_e|D_e} is the conditional distribution of U_{j_e} \mid U_{D_e} = u_{D_e}. The pair-copula density c_{j_e,k_e;D_e} is the copula density corresponding to the two variables U_{j_e} and U_{k_e}, conditional on U_{D_e} = u_{D_e}. The density decomposition (2) holds for any copula density c. In this general form, the pair-copulas c_{j_e,k_e;D_e} depend on the value of the conditioning vector u_{D_e}. To make the model more tractable, one usually makes the simplifying assumption that the pair-copula densities do not change with u_{D_e}. In this case, the model is called a simplified vine copula model and the corresponding density can be written as

c(u) = \prod_{m=1}^{d-1} \prod_{e \in E_m} c_{j_e,k_e;D_e}\bigl(C_{j_e|D_e}(u_{j_e} \mid u_{D_e}), C_{k_e|D_e}(u_{k_e} \mid u_{D_e})\bigr).

Example 1. The density of a simplified vine copula model corresponding to the tree sequence in Figure 1 is the product of the pair-copula densities associated with the edges in Figure 1, where we use the abbreviation u_{j_e|D_e} := C_{j_e|D_e}(u_{j_e} \mid u_{D_e}).
Vine copula densities involve conditional distributions C_{j_e|D_e}. We can express them in terms of conditional distributions corresponding to bivariate copulas in the previous tree as follows: let l_e ∈ D_e be another index such that c_{j_e,l_e;D_e \setminus l_e} is a pair-copula in the previous tree, and define D'_e = D_e \setminus l_e. Then, we can express

C_{j_e|D_e}(u_{j_e} \mid u_{D_e}) = h_{j_e|l_e;D'_e}\bigl(C_{j_e|D'_e}(u_{j_e} \mid u_{D'_e}) \,\big|\, C_{l_e|D'_e}(u_{l_e} \mid u_{D'_e})\bigr),    (3)

where the h-function is defined as

h_{j_e|l_e;D'_e}(u \mid v) := \int_0^{u} c_{j_e,l_e;D'_e}(s, v) \, \mathrm{d}s.    (4)

The arguments C_{j_e|D'_e}(u_{j_e} \mid u_{D'_e}) and C_{l_e|D'_e}(u_{l_e} \mid u_{D'_e}) of the h-function in (3) can be rewritten in the same manner. In each step of this recursion the conditioning set is reduced by one element. Eventually, this allows us to write any of the conditional distributions C_{j_e|D_e} as a recursion over h-functions that are directly linked to the pair-copula densities in previous trees.
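For instance, in three dimensions with first-tree edges 1-2 and 2-3 (a generic illustration, not the structure of Figure 1), the simplified density and the recursion (3)-(4) read

c(u_1, u_2, u_3) = c_{12}(u_1, u_2) \, c_{23}(u_2, u_3) \, c_{13;2}\bigl(C_{1|2}(u_1 \mid u_2), C_{3|2}(u_3 \mid u_2)\bigr),

C_{1|2}(u_1 \mid u_2) = h_{1|2}(u_1 \mid u_2) = \int_0^{u_1} c_{12}(s, u_2) \, \mathrm{d}s.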

Nonparametric estimation of simplified vine copula models
We now discuss how simplified vine copula models can be estimated nonparametrically. First, we give an overview of nonparametric estimators of bivariate copula densities. Second, we outline a general step-wise estimation algorithm for the full vine copula density, which can be used with any bivariate copula density estimator. We also describe a data-driven structure selection algorithm that was initially proposed by Dißmann et al. (2013).

Nonparametric estimation of bivariate copula densities
The classical approach to density estimation is to assume a parametric model and estimate its parameters by maximum likelihood. There is a large variety of bivariate parametric copula models. Special classes are the elliptical copulas (including the Gaussian and Student t families), and the Archimedean class (including the Clayton, Frank and Gumbel families) (for more, see, Joe, 2014).
However, parametric models notoriously lack flexibility and bear the risk of misspecification. Nonparametric density estimators are designed to remedy these issues. In the context of copula densities, these estimators have to take the bounded support ([0, 1]^d) into account. In the following we summarize the state of the art of the major strands of nonparametric copula density estimation. For simplicity, we only consider the bivariate case. We assume throughout that we are given n observations (U_1^{(i)}, U_2^{(i)}), i = 1, \dots, n, from a copula density c that we want to estimate.

Empirical Bernstein copula
Bernstein polynomials are a classical tool in function approximation (see Lorentz (1953)). The normalized Bernstein polynomial of degree K is defined as

B_{Kk}(u) := (K + 1) \binom{K}{k} u^{k} (1 - u)^{K - k}, \quad k = 0, \dots, K, \ u ∈ [0, 1].

The collection of all Bernstein polynomials forms a basis of the space of all square-integrable functions on [0, 1]. A natural idea is to approximate an arbitrary function by a linear combination of a finite number of basis functions. Based on this idea, Sancetta and Satchell (2004) defined the Bernstein copula density. It is an approximation of the true copula density and can be expressed as

c(u_1, u_2) = \sum_{k_1=0}^{K} \sum_{k_2=0}^{K} v_{k_1,k_2} B_{Kk_1}(u_1) B_{Kk_2}(u_2),

where \bar{K} = K + 1. Note that the coefficient v_{k_1,k_2} describes the probability that (U_1, U_2) is contained in the cell [k_1/\bar{K}, (k_1 + 1)/\bar{K}] × [k_2/\bar{K}, (k_2 + 1)/\bar{K}]. The empirical copula density estimator is defined by c(u_1, u_2), but replacing v_{k_1,k_2} by the empirical frequencies obtained from a contingency table,

\hat{v}_{k_1,k_2} = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\bigl\{ U_1^{(i)} ∈ [k_1/\bar{K}, (k_1 + 1)/\bar{K}], \, U_2^{(i)} ∈ [k_2/\bar{K}, (k_2 + 1)/\bar{K}] \bigr\},

which is the maximum-likelihood estimator for v_{k_1,k_2}. The Bernstein copula density estimator was used in the context of vine copulas by Scheffer and Weiß (2016). As the marginal distributions of the Bernstein copula density do not need to be uniform, the authors calculate an approximation to the contingency table by solving a quadratic program, imposing constraints for uniform marginal distributions. The smoothing parameter for the Bernstein copula density estimator is K, the number of knots. Rose (2015) proposed selection rules for K that adapt to the sample size and the strength of dependence. Our implementation is available in the kdecopula R package (Nagler, 2016b) and uses a rule of this type, in which the selected K depends on the sample size n and the empirical Spearman's ρ.
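To make the construction concrete, the following R sketch evaluates the empirical Bernstein copula density for a fixed K (an illustration only; it does not impose uniform margins or the data-driven choice of K discussed above):

# Minimal sketch of the empirical Bernstein copula density estimator
bernstein_copula_density <- function(u, u_eval, K = 10) {
  # u: n x 2 matrix of copula data, u_eval: m x 2 matrix of evaluation points
  n  <- nrow(u)
  Kb <- K + 1                                    # number of cells per margin
  # empirical cell frequencies (contingency table) for cells [k/Kb, (k+1)/Kb)
  k1 <- pmin(floor(u[, 1] * Kb), K)
  k2 <- pmin(floor(u[, 2] * Kb), K)
  v  <- matrix(table(factor(k1, levels = 0:K), factor(k2, levels = 0:K)), Kb, Kb) / n
  # normalized Bernstein polynomials B_{Kk}(u) = (K + 1) * dbinom(k, K, u)
  B1 <- sapply(0:K, function(k) Kb * dbinom(k, K, u_eval[, 1]))
  B2 <- sapply(0:K, function(k) Kb * dbinom(k, K, u_eval[, 2]))
  rowSums((B1 %*% v) * B2)                       # sum over k1, k2 of v * B1 * B2
}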

Penalized Bernstein polynomials and B-splines
For fixed K, the Bernstein copula density estimator is a parametric model with (K + 1)^2 parameters. Like any parametric model with many parameters, it is prone to overfitting. Kauermann and Schellhase (2014) formulated a penalized likelihood approach to gain control over the smoothness of the fit.
Viewing the Bernstein copula density as a parametric model with parameter vector v = (v_{00}, \dots, v_{0K}, \dots, v_{KK}), i.e.,

c(u_1, u_2; v) = \sum_{k_1=0}^{K} \sum_{k_2=0}^{K} v_{k_1,k_2} B_{Kk_1}(u_1) B_{Kk_2}(u_2),    (5)

we can estimate the parameters by maximizing the log-likelihood

\ell(v) = \sum_{i=1}^{n} \log c\bigl(U_1^{(i)}, U_2^{(i)}; v\bigr).    (6)

As each of the normalized Bernstein polynomials is a density, the weighted sum of normalized Bernstein polynomials is a density if we ensure that

v_{k_1,k_2} \ge 0 \ \text{for all } k_1, k_2, \qquad \sum_{k_1=0}^{K} \sum_{k_2=0}^{K} v_{k_1,k_2} = 1.    (7)

We need additional constraints to enforce uniform marginal distributions: for Bernstein polynomials, \int_0^1 c(u_1, u_2; v) \, \mathrm{d}u_1 ≡ 1 holds if the marginal coefficients fulfill

v_{k_1 \cdot} = \sum_{k_2=0}^{K} v_{k_1,k_2} = 1/(K + 1), \quad \text{for all } k_1 = 0, \dots, K.    (8)
The same constraints follow for \int_0^1 c(u_1, u_2; v) \, \mathrm{d}u_2 ≡ 1. These constraints can be formulated in matrix notation, yielding the linear system

A_K^T v = \frac{1}{K + 1} \mathbf{1},

where A_K sums up the elements of v_{k_1,k_2} column-wise (i.e. over k_2) and row-wise (i.e. over k_1); that is, A_K^T is obtained by stacking (I_K ⊗ \mathbf{1}_K^T) and (\mathbf{1}_K^T ⊗ I_K), where \mathbf{1}_K is the column vector of dimension K with all elements equal to 1 and I_K is the K-dimensional identity matrix.
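For illustration, such a summing matrix can be constructed in R as follows (a sketch under the vec-ordering assumption stated in the comments, not the penRvine code):

# Sketch of a constraint matrix that sums the coefficient vector v = vec(v_{k1,k2})
# over k2 for each k1 and over k1 for each k2; v is assumed to stack the columns
# of the (K + 1) x (K + 1) coefficient array (column-major order).
K    <- 5                                    # illustrative degree
I1   <- diag(K + 1)
ones <- matrix(1, 1, K + 1)
A    <- rbind(kronecker(ones, I1),           # sums of v_{k1,k2} over k2, one row per k1
              kronecker(I1, ones))           # sums of v_{k1,k2} over k1, one row per k2
# uniform margins then amount to A %*% v == rep(1 / (K + 1), 2 * (K + 1))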
The log-likelihood (6) can be maximized under these constraints using quadratic programming (e.g., with the quadprog R package, Weingessel, 2013). But since this is a parametric model with many parameters, the fitted copula density may be wiggly (see, e.g., Wahba, 1990). This issue can be resolved by imposing an appropriate penalty on the basis coefficients. Kauermann and Schellhase (2014) postulate that the integrated squared second-order derivatives of the fitted density are small (see also Wood, 2006) and formulate a penalty based on them. This penalty can be written as a quadratic form v^T P v with a penalty matrix P (see Kauermann and Schellhase, 2014, Appendix), and the corresponding penalized log-likelihood is defined as

\ell_p(v, λ) = \ell(v) − \frac{λ}{2} v^T P v.    (10)

The penalty parameter λ needs to be selected adequately, that is, data-driven. In Section 2.5 of Kauermann and Schellhase (2014), the authors propose a method that formulates the penalized likelihood approach as a linear mixed model and interprets the penalty as a normal prior imposed on the coefficient vector. We apply this methodology, too. Additionally, Kauermann and Schellhase (2014) propose to use B-spline basis functions (see de Boor (1978)) instead of Bernstein polynomials in (5). They replace each B_{Kk} in (5) with a B-spline of degree q, located at equidistant knots κ_k = k/K with k = 0, \dots, K − 1 + q, and normalized so that it satisfies \int_0^1 B_{Kk}(u) \, \mathrm{d}u = 1 for k = 0, \dots, K. Kauermann and Schellhase (2014) only used normalized linear (q = 1) B-splines. To allow for more flexibility, we will also use normalized quadratic (q = 2) B-splines in our study.
In order to guarantee that c(u_1, u_2; v) is a bivariate copula density, we impose similar constraints as the ones for the Bernstein polynomials. The linear constraints (7) remain the same for B-splines, but the uniform margins condition (8) has to be adapted to the B-spline basis. For the penalization, we work with a penalty on the m-th order differences of the spline coefficients v, as suggested for B-spline smoothing by Eilers and Marx (1996), defining a penalty matrix P_m, where we choose m = q + 1. Further details of this smoothing concept can be found in Ruppert et al. (2003). In the following, we define the difference-based penalty matrix for the m-th order differences as P_m = L_m^T L_m, where L_m forms the m-th order differences of adjacent coefficients; for example, L_1 has rows of the form (−1, 1, 0, \dots, 0), (0, −1, 1, 0, \dots, 0), and so on.
Then, for B-splines, the penalized log-likelihood becomes

\ell_p(v, λ) = \ell(v) − \frac{λ}{2} v^T P_m v.    (12)

Note that we obtain the independence copula if we set the penalty parameter λ to infinity in (10) or (12). The penalized Bernstein and B-spline estimators are implemented in the R package penRvine (Schellhase, 2016). The package will be published soon and is available upon request.
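The difference penalty itself is easy to construct; a minimal R sketch for a single coefficient dimension (an illustration of the construction, not the penRvine implementation):

# Difference-based penalty matrix P_m = t(L_m) %*% L_m, following Eilers and Marx (1996)
difference_penalty <- function(n_coef, m = 2) {
  L <- diff(diag(n_coef), differences = m)   # m-th order difference operator
  crossprod(L)                               # P_m = t(L) %*% L
}
P3 <- difference_penalty(10, m = 3)          # e.g. m = q + 1 = 3 for quadratic B-splines
# the penalized log-likelihood (12) subtracts (lambda / 2) * t(v) %*% P %*% v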

Kernel weighted local likelihood
Kernel estimators are well-established tools for nonparametric density estimation. Several kernel methods have been tailored to the problem of copula density estimation. Their main challenge is to avoid bias and consistency issues at the boundaries of the support. The earliest contribution is the mirror-reflection method (Gijbels and Mielniczuk, 1990). Later, Charpentier et al. (2006) extended the beta kernel density estimator of Chen and Hall (1993) to the bivariate case. The more recent contributions all focus on a transformation trick. Assume we want to estimate a copula density c given a random sample (U_1^{(i)}, U_2^{(i)}), i = 1, \dots, n. Let Φ be the standard normal distribution function and φ its density. Then the random vectors (Z_1^{(i)}, Z_2^{(i)}) = \bigl(Φ^{-1}(U_1^{(i)}), Φ^{-1}(U_2^{(i)})\bigr) have standard normally distributed margins and are supported on the full R^2. In this domain, kernel density estimators work very well and do not suffer from any boundary problems. By Sklar's theorem for densities, eq. (1), the density f of (Z_1, Z_2) satisfies

f(z_1, z_2) = c\bigl(Φ(z_1), Φ(z_2)\bigr) φ(z_1) φ(z_2).    (13)

By isolating c in (13) and the change of variables u_j = Φ(z_j), j = 1, 2, we get

c(u_1, u_2) = \frac{f\bigl(Φ^{-1}(u_1), Φ^{-1}(u_2)\bigr)}{φ\bigl(Φ^{-1}(u_1)\bigr) φ\bigl(Φ^{-1}(u_2)\bigr)}.    (14)

We can use any kernel estimator \hat{f} of f to define a kernel estimator of the copula density c via (14):

\hat{c}(u_1, u_2) = \frac{\hat{f}\bigl(Φ^{-1}(u_1), Φ^{-1}(u_2)\bigr)}{φ\bigl(Φ^{-1}(u_1)\bigr) φ\bigl(Φ^{-1}(u_2)\bigr)}.    (15)

Estimators of this kind have an interesting feature. The denominator of (15) vanishes when u_1 or u_2 tends to zero or one. If the numerator vanishes at a slower rate, the estimated copula density explodes towards the corners of the unit square. This behavior is common for many popular parametric families, including the Gaussian, Student, Gumbel, and Clayton families. The transformation estimator (15) is well suited to resemble such shapes. However, its variance also explodes towards the corners, and the estimator becomes numerically unstable there.
To accommodate this, we restrict the estimator to [0.001, 0.999]^2 and set estimates outside of this region to the closest properly defined estimate.
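For illustration, a minimal R sketch of the local constant version of (15), using a Gaussian product kernel and a simple normal-reference bandwidth (not the bandwidth rule used in this study):

# Local constant transformation kernel estimator of a copula density, eq. (15)
trafo_kernel_copula <- function(u, u_eval, mult = 1) {
  # u: n x 2 matrix of copula data, u_eval: m x 2 matrix of evaluation points
  z  <- qnorm(u)                                      # move the data to the z-scale
  b  <- mult * apply(z, 2, sd) * nrow(z)^(-1/6)       # rule-of-thumb bandwidths
  ue <- pmin(pmax(as.matrix(u_eval), 0.001), 0.999)   # restrict to [0.001, 0.999]^2
  ze <- qnorm(ue)
  f  <- apply(ze, 1, function(p)                      # kernel estimate of f at each point
    mean(dnorm(z[, 1], p[1], b[1]) * dnorm(z[, 2], p[2], b[2])))
  f / (dnorm(ze[, 1]) * dnorm(ze[, 2]))               # back-transform via (14)
}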
To estimate the density f , Nagler and Czado (2016) proposed to use the classical bivariate kernel density estimator (see, e.g., Silverman, 1986). We will extend this approach by resorting to the more general class of local polynomial likelihood estimators; see Loader (1999) for a general account and Geenens et al. (2014) in the context of bivariate copula estimation.
Assume that the log-density \log f(z_1, z_2) of the random vector (Z_1, Z_2) can be approximated locally by a polynomial of order q. For example, using a log-quadratic expansion (q = 2), we get

\log f(z'_1, z'_2) \approx P_a(z'_1 - z_1, z'_2 - z_2) = a_1 + a_2 (z'_1 - z_1) + a_3 (z'_2 - z_2) + a_4 (z'_1 - z_1)^2 + a_5 (z'_1 - z_1)(z'_2 - z_2) + a_6 (z'_2 - z_2)^2

for (z'_1, z'_2) in the neighborhood of z = (z_1, z_2). The polynomial coefficients a can be found by solving the weighted maximum likelihood problem

\hat{a}(z) = \arg\max_{a} \, \sum_{i=1}^{n} \mathcal{K}\bigl(B^{-1}(Z^{(i)} - z)\bigr) P_a(Z^{(i)} - z) - n \int_{\mathbb{R}^2} \mathcal{K}\bigl(B^{-1}(s - z)\bigr) \exp\bigl(P_a(s - z)\bigr) \, \mathrm{d}s,

where the kernel K is a symmetric probability density function, \mathcal{K}(z) = K(z_1) K(z_2) is the product kernel, and B ∈ R^{2×2} is a matrix with \det(B) > 0. B is called the bandwidth matrix and controls the degree of smoothing. The kernel \mathcal{K} serves as a weight function that localizes the above optimization problem around z. We obtain \hat{a}_1 as an estimate of \log f(z_1, z_2) and, consequently, \exp(\hat{a}_1) as an estimate of f(z_1, z_2). An estimate of the copula density can be obtained by plugging this estimate into (14). For a detailed treatment of this estimator's asymptotic behavior we refer to Geenens et al. (2014). In general, the estimator does not yield a bona fide copula density because the margins may not be uniform. This issue can be resolved by normalizing the density estimate (see Nagler, 2016a, for details).
For applications of the estimator, an appropriate choice of the bandwidth matrix is crucial. For the local constant approximation, a simple rule of thumb was shown to perform well in Nagler (2016a). We use an improved version of this rule that also adjusts to the degree q of the polynomial; it depends on the empirical covariance matrix Σ_Z of Z^{(i)}, i = 1, \dots, n, and on the constants ν_0 = 1.25, ν_1 = ν_2 = 5. An implementation of the estimator is available in the R package kdecopula (Nagler, 2016b).
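In practice, the kernel estimators can be used through the kdecopula package; a short usage sketch (function and argument names as we understand the package interface, which may change between versions):

# Fit a local quadratic transformation estimator (tll2) and evaluate it
library(kdecopula)
library(VineCopula)
u   <- BiCopSim(500, family = 3, par = 2)    # Clayton(2) sample, for illustration only
fit <- kdecop(u, method = "TLL2")            # local quadratic likelihood estimator
dkdecop(c(0.3, 0.7), fit)                    # estimated copula density at (0.3, 0.7)
hkdecop(c(0.3, 0.7), fit, cond.var = 2)      # estimated h-function C_{1|2}(0.3 | 0.7)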

Step-wise estimation of vine copula densities
We now turn to the question how a simplified vine copula density can be estimated. Most commonly, this is done in a sequential procedure introduced by Aas et al. (2009). The procedure is generic in the sense that it can be used with any consistent estimator for a bivariate copula. It is summarized in Algorithm 1.

Algorithm 1 (Sequential estimation of simplified vine copula densities)
Output: Estimates of pair-copula densities and h-functions required to evaluate the vine copula density (16).
for m = 1, . . . , d − 1:
  for all e ∈ E_m:
    (i) Collect the pseudo-observations \hat{U}_{j_e|D_e}^{(i)} and \hat{U}_{k_e|D_e}^{(i)}, i = 1, . . . , n (for m = 1, these are the original observations U_{j_e}^{(i)} and U_{k_e}^{(i)}).
    (ii) Based on the pseudo-observations from step (i), obtain an estimate of the copula density c_{j_e,k_e;D_e}, which we denote as \hat{c}_{j_e,k_e;D_e}.
    (iii) Set the pseudo-observations for the next tree by applying the h-functions derived from \hat{c}_{j_e,k_e;D_e} via eq. (4): \hat{U}_{j_e|D_e ∪ k_e}^{(i)} := \hat{h}_{j_e|k_e;D_e}\bigl(\hat{U}_{j_e|D_e}^{(i)} \mid \hat{U}_{k_e|D_e}^{(i)}\bigr), i = 1, . . . , n, and analogously with the roles of j_e and k_e interchanged.
  end for
end for
From now on we use c to denote a d-dimensional vine copula density. Assume we have a random sample U^{(i)} = (U_1^{(i)}, \dots, U_d^{(i)}), i = 1, \dots, n, from c. Recall that this density can be written as

c(u) = \prod_{m=1}^{d-1} \prod_{e \in E_m} c_{j_e,k_e;D_e}\bigl(C_{j_e|D_e}(u_{j_e} \mid u_{D_e}), C_{k_e|D_e}(u_{k_e} \mid u_{D_e})\bigr).    (16)

In the first tree, the conditioning set D_e is empty. So for e ∈ E_1, estimation of the pair-copula densities c_{j_e,k_e;D_e} is straightforward, since no conditioning is involved. We simply apply one of the estimators from Section 3.1 to the bivariate random vectors (U_{j_e}^{(i)}, U_{k_e}^{(i)}), i = 1, \dots, n. This gives us estimates \hat{c}_{j_e,k_e;D_e}, e ∈ E_1. By one-dimensional integration (eq. (4)) we can derive estimates of the corresponding h-functions. They can be derived in closed form for Bernstein and B-spline estimators (see Kauermann and Schellhase, 2014). For kernel estimators, the h-functions have to be computed numerically.
In the next step, we transform the initial copula data by applying the estimated h-functions to obtain pseudo-observations from the pair-copulas in the second tree. Using these, we can estimate the pair-copula densities c_{j_e,k_e;D_e}, e ∈ E_2. We iterate through the trees in this manner until all pair-copula densities and h-functions have been estimated. Nagler and Czado (2016, Theorem 1) show that simplified vine copula density estimators defined by Algorithm 1 are consistent under rather mild conditions. An appealing property of these estimators is the absence of the curse of dimensionality: the convergence rate does not depend on the dimension. In fact, it achieves the optimal rate for a two-dimensional nonparametric density estimator.
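To make the sequential procedure concrete, a sketch of the first two tree levels for a three-dimensional D-vine with first-tree edges 1-2 and 2-3, using kdecopula for the pair-copula estimates (independent uniforms serve only as placeholder data):

library(kdecopula)
u <- matrix(runif(3 * 500), 500, 3)                # placeholder copula data (n = 500, d = 3)
# Tree 1: estimate c_12 and c_23 directly from the copula data
fit_12 <- kdecop(u[, 1:2])
fit_23 <- kdecop(u[, 2:3])
# Pseudo-observations for tree 2 via the estimated h-functions, cf. eqs. (3)-(4)
u_1g2 <- hkdecop(u[, 1:2], fit_12, cond.var = 2)   # C_{1|2}(u_1 | u_2)
u_3g2 <- hkdecop(u[, 2:3], fit_23, cond.var = 1)   # C_{3|2}(u_3 | u_2)
# Tree 2: estimate c_{13;2} from the pseudo-observations
fit_13_2 <- kdecop(cbind(u_1g2, u_3g2))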

Selection strategies for the vine structure
So far we assumed that the structure of the vine (i.e., the edge sets E_1, \dots, E_{d−1}) is known. In practice, however, the structure has to be chosen by the statistician. This choice is very difficult, since there are d!/2 × 2^{(d−2)(d−3)/2} possible vine structures (Morales-Nápoles et al., 2011), a number that grows excessively with d. When d is very small, it may still be practicable to estimate vine copula models for all possible structures and compare them by a suitable criterion (such as AIC). But already for a moderate number of dimensions one has to rely on heuristics. Dißmann et al. (2013) proposed a selection algorithm that seeks to capture most of the dependence in the first couple of trees. This is achieved by finding the maximum spanning tree using a dependence measure as edge weights, e.g., the absolute value of the empirical Kendall's τ. The resulting estimation and structure selection procedure is summarized in a general form in Algorithm 2.

Algorithm 2 (Joint structure selection and estimation of simplified vine copula densities)
Output: Selected edge sets E_1, \dots, E_{d−1} and estimates of pair-copula densities and h-functions required to evaluate the vine copula density (16).
for m = 1, . . . , d − 1:
  Calculate weights w_e for all possible edges e = (j_e, k_e; D_e) that satisfy the proximity condition (see Section 2) and select the edge set E_m as

  E_m = \arg\max_{E^*_m} \sum_{e \in E^*_m} w_e,

  under the constraint that E^*_m corresponds to a spanning tree.
  for all e ∈ E_m:
    (i) Collect the pseudo-observations \hat{U}_{j_e|D_e}^{(i)} and \hat{U}_{k_e|D_e}^{(i)}, i = 1, . . . , n.
    (ii) Based on these pseudo-observations, obtain an estimate of the copula density c_{j_e,k_e;D_e}, which we denote as \hat{c}_{j_e,k_e;D_e}.
    (iii) Set the pseudo-observations \hat{U}_{j_e|D_e ∪ k_e}^{(i)} and \hat{U}_{k_e|D_e ∪ j_e}^{(i)}, i = 1, . . . , n, by applying the estimated h-functions as in Algorithm 1.
  end for
end for
Several specifications of the edge weight were investigated in a fully parametric context by Czado et al. (2013). The most common edge weight w_e is the absolute value of the empirical Kendall's τ, as proposed by Dißmann et al. (2013) and used in a nonparametric context by Nagler and Czado (2016). Kauermann and Schellhase (2014), on the other hand, used the corrected Akaike information criterion (cAIC) (Hurvich et al., 1998) as edge weight. Using the notation of Algorithm 2, the cAIC for edge e is defined as

\mathrm{cAIC}_e = −2 \ell_e + 2 \, \mathrm{df}_e + \frac{2 \, \mathrm{df}_e (\mathrm{df}_e + 1)}{n − \mathrm{df}_e − 1},

where \ell_e is the log-likelihood and \mathrm{df}_e is the effective degrees of freedom (EDF) of the estimator \hat{c}_e. For explicit formulas for the EDF we refer to Kauermann and Schellhase (2014) for the spline approach and to Loader (1999, Section 5.3.2) for the kernel estimators. For parametric copula estimation, the EDF equals the number of estimated parameters of the chosen copula family. From a computational point of view, the cAIC has a big disadvantage: before a tree can be selected, the pair-copulas of all possible edges in this tree have to be estimated. Just for the first tree, this amounts to estimating \binom{d}{2} bivariate copula densities. The empirical Kendall's τ, on the other hand, can be computed rapidly for all pairs. It allows us to select the tree structure before any pair-copula has been estimated. Then, only d − 1 pair-copulas have to be estimated in the first tree. The situation is similar for subsequent trees. Both approaches will be compared in our simulation study with regard to estimation accuracy and speed.
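For illustration, the first-tree selection step with Kendall's τ weights can be sketched as follows (using igraph for the maximum spanning tree; the full algorithm of Dißmann et al. (2013) additionally enforces the proximity condition in higher trees):

library(igraph)
u <- matrix(runif(5 * 400), 400, 5)                  # placeholder copula data, d = 5
w <- abs(cor(u, method = "kendall"))                 # pairwise |tau| as edge weights
g <- graph_from_adjacency_matrix(w, mode = "undirected", weighted = TRUE, diag = FALSE)
# maximum spanning tree = minimum spanning tree of the weights 1 - |tau|
t1 <- mst(g, weights = 1 - E(g)$weight)
as_edgelist(t1)                                      # edges of the selected first tree T_1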

Description of the simulation study design
We compare the performance of the vine copula density estimators discussed in Section 3 over a wide range of scenarios. We consider several specifications of sample size, dimension, strength of dependence, and tail dependence. We randomize the simulation models and characterize the scenarios by probability distributions for the pair-copula families and dependence parameters. A detailed description of the study design procedure will be given in the following sections.

Simulation scenarios based on model randomization
To investigate how various factors influence the estimators' performance, we create a number of scenarios. Each of these scenarios is characterized by a combination of the factors shown in Table 1.
To make the results for a particular dependence scenario as general as possible, we randomly generate a model in the following steps.
Step 1. Draw the R-vine structure: (i) Draw n samples of d independent uniform random variables, Ũ_{i,j}, i = 1, . . . , n, j = 1, . . . , d.
(ii) On these samples, run the structure selection algorithm of Dißmann et al. (2013) (only allowing for the independence family).
(iii) Set the model structure to the one selected by the algorithm.
Step 2. Draw pair-copula families: depending on the 'type of dependence' factor, the families are drawn as follows.
• no tail dependence: draw each of the d(d − 1)/2 pair-copula families with equal probabilities from the Gaussian and Frank copulas.
• both: for each of d(d − 1)/2 pair-copulas: (i) choose with equal probabilities whether the copula has tail dependence or not, (ii) proceed as above.
The densities are shown in Figure 2.
Step 3. Draw the pair-copula parameters:
(iii) Usually, partial dependence is weaker than direct pair-wise dependence. To mimic this behavior, we decrease the simulated absolute Kendall's τ by a factor of 0.8^m, where m is the tree level of the pair-copula.
(iv) For all families under consideration there is a one-to-one relationship between the copula parameter and Kendall's τ (see, e.g., Brechmann and Schepsmeier, 2013, Table 2). Hence, we set the copula parameter by inverting the reduced value of Kendall's τ; a short sketch of steps (iii) and (iv) is given after Step 4.
Step 4. Draw observations from the final model: With the selected structure, copula families and their parameters, the vine copula model is fully specified. We can draw random samples from this vine copula model using the algorithm of Stöber and Czado (2012) (as implemented in the VineCopula R library Schepsmeier et al., 2016).
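For illustration, steps (iii) and (iv) for a single pair-copula in tree m can be sketched with the τ-inversion from the VineCopula package (the Beta parameters below are placeholders, not the study's exact specification):

library(VineCopula)
tau_abs <- rbeta(1, shape1 = 2, shape2 = 4)        # illustrative draw of |tau|
m       <- 2                                        # tree level of the pair-copula
tau_m   <- 0.8^m * tau_abs                          # weaken partial dependence, step (iii)
par_m   <- BiCopTau2Par(family = 1, tau = tau_m)    # Gaussian parameter via tau inversion, step (iv)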
The stochastic model characterized by steps 1-4 can be interpreted as a whole: it is a mixture of vine copula models, mixed over structure, families, and parameters. The mixing distribution for the families is uniform over sets determined by the 'type of dependence' hyper-parameter. The mixing distribution for the absolute Kendall's τ follows a Beta distribution with parameters characterized by the 'strength of dependence' hyper-parameter. Each scenario corresponds to a particular specification of the mixture's hyper-parameters. The benefit of this construction is that it yields models that are representative of a wide range of scenarios encountered in practice. It also limits the degrees of freedom we would have when specifying all pair-copula families and parameters manually.
We further implemented two structure selection methods for each pair-copula estimator (based on Kendall's τ and the cAIC, see Section 3.3); additionally, we computed each estimator under the true structure.

Performance measurement
As a performance measure, we choose the integrated absolute error (IAE)

\mathrm{IAE} = \int_{[0,1]^d} \bigl| \hat{c}(u) − c(u) \bigr| \, \mathrm{d}u,

where \hat{c} is the estimated and c is the true copula density. The above expression requires us to calculate a d-dimensional integral, which can be difficult when d becomes large. To overcome this, we estimate the integral via importance sampling Monte Carlo integration (e.g., Ripley, 1987, Section 5.2). That is,

\mathrm{IAE} \approx \frac{1}{N} \sum_{i=1}^{N} \frac{\bigl| \hat{c}(U_i) − c(U_i) \bigr|}{c(U_i)},

where the U_i are iid random vectors drawn from the true copula density c. This results in an unbiased estimator of the IAE with relatively small variance: usually the numerator is large/small when the denominator is large/small. Hence, the variance of the terms of the sum is small and, thereby, the variance of the sum is small. All results will be based on an importance sample of size N = 1 000.
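The importance sampling approximation can be computed generically as in the following sketch, where c_hat, c_true, and u_sample are hypothetical arguments standing for the fitted density, the true density, and a sample from the true copula:

# Importance sampling estimate of the IAE with the true copula density as proposal
estimate_iae <- function(c_hat, c_true, u_sample) {
  # u_sample: N x d matrix drawn from the true copula density c_true
  mean(abs(c_hat(u_sample) - c_true(u_sample)) / c_true(u_sample))
}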
For each estimator and each possible simulation scenario emerging from Table 1, we record the IAE on R = 100 simulated data sets.

Results of the simulation study

Figures 3 and 4 present the results of the simulation study described in Section 4. The analysis will be divided into several sections. The first takes a very broad view, whereas the remaining ones investigate the influence of individual factors. We acknowledge that the information density in these figures is extremely high, so we start with a detailed description of the figures' layout. Figure 3 contains the results for all scenarios with weak dependence; Figure 4 those with strong dependence. The left columns correspond to the smaller sample size (n = 400) and the right columns to the larger sample size (n = 2 000). The figures are also partitioned row-wise with an alternating pattern of the dimensions d = 5 and d = 10. Two subsequent rows correspond to the same type of dependence (no tail dependence, both, only tail dependence). In total there are 32 panels, each representing one of the 32 possible combinations of the factors listed in Table 1.

Each panel contains 24 boxes in 8 groups. Each group corresponds to one estimation method for the pair-copulas. The three boxes in each group represent the three different methods for structure selection: known structure, maximum spanning trees with Kendall's τ, and maximum spanning trees with cAIC (from left to right). The box spans the interquartile range, the median is indicated by a horizontal line, and the whiskers represent the 10% and 90% percentiles.

Table 2: Average rank of the pair-copula estimation methods across all scenarios.

              par   bern   pbern   pspl1   pspl2   tll0   tll1   tll2
average rank  1.28  7.17   6.35    5.00    5.01    5.01   3.83   2.35

Overall ranking of methods for pair-copula estimation
We begin our analysis with a broad view on the relative performance of the pair-copula estimators. We want to assess the performance of the estimation methods, averaged over all scenarios and structure selection strategies. But just taking the average IAE could be misleading: it is evident from Figures 3 and 4 that the scale of the IAE varies between scenarios, and averaging the bare IAE would lead to an unbalanced view, placing more weight on particular scenarios. As a more robust alternative, we take the following approach: in each scenario, we average the IAE over replications and structure selection strategies and then rank the estimation methods by their relative performance. Ranks are comparable across scenarios, so our final criterion is the average rank across all scenarios. These numbers are listed in Table 2. The parametric estimator performs best, which is no surprise since our simulation models consist of only parametric copula families. We included it in this study mainly to get a sense of what is possible in each scenario. Remarkably, it is outperformed by a nonparametric estimator in a few cases. This is due to the need for structure selection, which will be discussed in more detail later on.
Among the nonparametric estimators, the kernel estimators (tll2, tll1, tll0) perform best, followed by the spline methods (pspl1, pspl2), which perform about as well as the worst kernel estimator tll0. The Bernstein estimators (pbern, bern) perform worst. Within these three classes, the accuracy mostly improves with the complexity of the estimation method: going from regular Bernstein copulas to penalized ones, and going from local constant, to local linear, to local quadratic likelihood. It is the other way around for the B-spline methods, but the difference in average rank is minuscule.
We will find that this relative ranking is fairly robust across scenarios. In the following analysis, we treat it as the benchmark ranking and focus on deviations from it.

Strength and type of dependence
By looking at the scale in each panel, we see that the performance of all estimators gets worse for increasing strength of dependence and increasing proportion of tail dependent families. This is explained by the behavior of the true densities. Many copula densities (and their derivatives) explode at a corner of the unit square. From the pair-copula families in our simulation model, only the Frank copula is bounded. Within each family, the tails explode faster when the strength of dependence increases. And tail dependence means that the tails explode particularly fast. Exploding curves are difficult to estimate for nonparametric estimators because their asymptotic bias and variance are usually proportional to the true densities' derivatives. Our results give evidence that this effect transfers to finite samples.
The estimators' response to these difficulties is the main driver behind their relative performance. In most scenarios, the ranking of estimators is similar to the benchmark rankings. But there are deviations. Let us walk through the scenarios one by one.
• weak, no tail dependence: pbern and pspl1 perform better than pspl2, the kernel estimators, and even the parametric estimator for n = 400. For n = 2 000, the parametric estimator gets ahead and the penalized methods are on par with tll1 and tll2.
• weak, only tail dependent copulas: similar to the benchmark ranking.
• strong, no tail dependence: pspl2 beats tll0 and tll1 for n = 400 and is on par for n = 2 000.
• strong, both: similar to benchmark ranking.
• strong, only tail dependent copulas: similar to benchmark ranking.
Overall, the penalized estimators tend to do better under weak dependence and only little tail dependence, whereas the kernel estimators do better in the other scenarios. The method tll2 is the top performer in all but a few cases.

Sample size and dimension
When the sample size increases, the estimators become more accurate. Any reasonable estimator should satisfy this property. The kernel estimators and the non-penalized Bernstein estimator seem to benefit more from an increased sample size. The effect is most obvious in the weak dependence, no tail dependence case. This has an explanation: theoretically, the number of knots used by the penalized estimators should increase with the sample size. But our implementation uses a fixed number of knots, as the computational burden is already substantial compared with the other methods (see Section 5.5). All other methods adapt their smoothing parameterization to the sample size. It is very likely that the penalized methods improve when the number of knots is further increased.
Comparing a pair of panels corresponding to d = 5 and d = 10, we see very little differences. We conclude that the results are very robust to changes in the dimensionality.

Structure selection
The first aspect we want to discuss is the loss in accuracy caused by the need to select the tree structure. Recall that the three subsequent boxes for each estimator correspond to: estimation under the true structure (in practice unknown), selection based on Kendall's τ , selection based on cAIC.
The IAEs for the two selection methods are always higher than the 'oracle' results with known structure. This makes sense: the true model is a simplified vine copula; if the true structure is known, the models are correctly specified and all estimators are consistent. In practice, the true structure is unknown, and a different structure will be selected most of the time. For the selected structure, there is no guarantee that the model is still simplified or that the estimators are consistent. For more details, we refer to Spanhel and Kurz (2015), and Nagler and Czado (2016, Section 8).
Overall, the average loss in accuracy when going from the true to a heuristically selected structure increases with the strength of dependence and the prevalence of tail dependence. But the extent of this effect varies between estimation methods. The parametric estimator suffers the most substantial losses. In fact, the parametric estimator's performance is often very close to that of the best nonparametric estimator when the structure is unknown. This is quite remarkable considering that we simulate from parametric models. Interestingly, the loss for the penalized Bernstein and B-spline methods (pbern, pspl1, pspl2) is negligible in most scenarios when cAIC is used, but not when Kendall's τ is used. This is a distinct property of these penalized methods. The non-penalized Bernstein and kernel methods perform similarly for the two structure selection criteria. In most scenarios, the relative performance ordering of the estimators is the same for each type of structure selection. But there are a few cases (strong dependence, n = 400) where the pspl2 estimator is worse than tll0 or tll1 with Kendall's τ, but better with cAIC.
The results give evidence that the cAIC is the better criterion in terms of the estimators' accuracy. But it also makes the vine copula estimators more costly to fit (see Section 3.3). So there is a trade-off between speed and accuracy, and it usually depends on the problem at hand which to prioritize. We investigate this issue further in the next section.

Computation time
Table 3 lists the average computation time required to fit a vine copula and evaluate its density on 1 000 importance Monte Carlo samples. The results are divided into the combinations of dimension d and sample size n. Let us first focus on the selection criterion. We clearly see that the computation time increases substantially for all estimators when cAIC is used instead of Kendall's τ. The size of this effect differs, but is usually a factor of around two or three. The fastest two estimators are the simplest ones: bern and tll0. The other two kernel estimators are in the same ballpark, but the computation time increases slightly with the order of the polynomial. Only slightly slower is the parametric estimator; the reason is that it has to iterate through several candidate copula families before it can select the final model. The penalized estimators are orders of magnitude slower than their competitors. Take, for example, the case d = 10 and n = 2 000, where most estimators take around one minute (using τ), but the penalized estimators take more than one and a half hours.
The large difference in computational demand is caused by the penalized estimation problem. One has to optimize over more than 100 parameters with more than 100 side constraints. Even worse, such a problem has to be solved multiple times until an optimal choice for the penalty parameter λ has been found. Reducing K (the number of knots) does significantly reduce this burden, but also limits the flexibility of the estimators. In the end, the statistician has to choose which K yields the best balance between speed and accuracy.

Limitations
The referees pointed out some limitations of our study which are addressed in the following.

Performance measure
All results focus on a single performance measure and therefore provide only a limited view of the estimators' performance. Although this is true, we considered several other measures in preliminary versions of this study and found the results to be quite robust with respect to the measure.

Estimation of marginal distributions
The study neglects the fact that observations from the copula are never observed directly and that one has to rely on pseudo-observations that depend on estimated marginal distributions. Kim et al. (2007) showed in an extensive simulation study that this can be a problem in misspecified fully parametric models. But the issue is largely resolved when the margins are estimated nonparametrically. In this case, maximum likelihood estimators are unbiased and only slightly less efficient (Genest et al., 1995).
In a purely nonparametric context, it is even less of an issue. In fact, many authors have found that errors stemming from estimating the marginal distributions are asymptotically negligible when estimating the copula density (e.g., Janssen et al., 2014, Geenens et al., 2014, Nagler, 2016a). This is explained by the fact that distribution functions can be estimated consistently at the parametric rate, whereas density estimators are bound to (slower) nonparametric rates. Accordingly, we can expect results similar to the ones presented even if the margins were treated as unknown.

Choice of smoothing parameters
It is a common theme in nonparametric estimation that the quality of estimators depends heavily on the choice of smoothing parameters. This is certainly also the case for the estimators considered in this study. However, we do not think it is feasible to assess the sensitivity of our results to this choice:
• Smoothing parameters are hardly comparable across estimation methods because the methods arrive at the density estimate in fundamentally different ways.
• There are too many smoothing parameters in a vine copula model: There are 10 (d = 5) resp. 45 (d = 10) pair-copulas, and for each pair-copula there are between one and three smoothing parameters (depending on the estimation method).
• Due to the sequential nature of the joint estimator, pair-copula estimators in later trees are affected by the estimates in earlier trees. This leads to significant interactions between smoothing parameters at different levels.
In our study, all smoothing parameters were selected by automatic procedures that are state of the art. This realistically reflects statistical practice. But one should keep in mind that the performance of most estimators can likely be improved by advances in automatic selection procedures.

Illustration with real data
In the simulation study, the parametric estimator performed best in virtually all scenarios. This is simply a consequence of simulating from parametric models. Real data, however, do not always behave that nicely, and nonparametric methods may be required to appropriately capture the dependence. Such a case is illustrated in the following real-data example.
We consider a data set of measurements taken on images from the MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes with 19 020 observations, focusing on gamma observations. It has been previously analyzed with a kernel-based simplified vine copula estimator by Nagler and Czado (2016). To show exemplary results of the different nonparametric copula density estimators, we select a random subset (n = 2 000) from the MAGIC data with respect to the variables fConc1, fM3Long, and fM3Trans. We compute pseudo-observations from the data by applying the marginal empirical distribution functions to each variable. Figure 5 shows scatter plots of the three pairs of pseudo-observations. The shapes we see are different from what we know from popular parametric families. We fit several copula density estimators to each pair and show the results in Figure 6. The first column of Figure 6 shows the fitted pair-copula density between fConc1 (U_5) and fM3Long (U_7), the second column between fConc1 (U_5) and fM3Trans (U_8), and the third column contains the copula density between fM3Long (U_7) and fM3Trans (U_8).
For the first pair of variables, fConc1 (U_5) and fM3Long (U_7), many pseudo-observations accumulate around the point (0, 1), which is reflected as a high density peak in all fitted copula densities. But for the accumulation around the point (1, 0.3), we observe a difference between the nonparametric estimators bern, pspl2, and tll2 and the parametric copula density, which does not mirror this accumulation.
For the second pair, fConc1 (U_5) and fM3Trans (U_8), the estimated density varies considerably between methods. The estimates of pspl2 and tll2 show peaks around the points (0, 0) and (0, 1), which reflects the large concentration of points in the scatter plot in Figure 5. The estimates of bern and par do not contain these peaks. We observe similar differences for the estimated densities of the third data pair, presented in the right column of Figure 6. While bern, pspl2, and tll2 show density peaks around the accumulation points (1, 0) and (1, 1), the estimated parametric copula does not capture this structure of the data.
The previous examples illustrate situations in which the parametric estimator fails because of its lack of flexibility. In such situations, nonparametric methods are required to adequately capture the true dependence structure. However, we merely looked at unconditional pairs of variables, not a full dependence model. Similar to our simulation study, we assess the performance of the estimators via cross-validation. We randomly draw a size-n subset U_train of the data, apply the estimators, and calculate the mean out-of-sample log-likelihood on 1 000 randomly selected remaining observations U_test, i.e.,

\frac{1}{1000} \sum_{U \in U_{test}} \log \hat{c}(U),

where \hat{c} is a vine copula density estimator based on U_train. This is repeated N = 100 times for sample sizes n = 400 and n = 2 000. The results are summarized as box plots in Figure 7 for all estimators and structure selection based on Kendall's τ (left box) and cAIC (right box).
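The criterion itself can be computed generically as in the following sketch, where dens_hat is a hypothetical function evaluating the fitted vine copula density at the rows of u_test:

# Mean out-of-sample log-likelihood of a fitted vine copula density estimator
oos_loglik <- function(dens_hat, u_test) {
  mean(log(dens_hat(u_test)))
}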
The parametric estimator performs unsatisfactorily for n = 400, since it varies enormously for both data sets. But also for n = 2 000, the parametric estimator is outperformed by most nonparametric alternatives. The performance of the nonparametric methods varies notably between methods. The methods bern and tll1 do not perform well, but the other methods clearly outperform par. Furthermore, the performance of bern, pspl1, and pspl2 differs significantly with respect to the structure selection criterion. For the small sample size (n = 400) and Kendall's τ as selection criterion, tll0 achieves the highest mean, directly followed by pspl2. When cAIC is chosen instead, the log-likelihood of pspl2 increases, but that of tll0 does not. The situation is similar for n = 2 000. We conclude that the more sophisticated nonparametric methods adequately reflect the distribution of the data. In contrast, the dependence structure observed in Figure 5 cannot be captured adequately with standard parametric models.

Conclusion
This article compared existing methods for nonparametric estimation of simplified vine copula densities. The estimators considered are the non-penalized Bernstein estimator, the penalized Bernstein estimator, penalized B-spline estimators (linear and quadratic), and kernel weighted local likelihood estimators (local constant, local linear, and local quadratic). We compared these methods in an extensive simulation study and on two real data sets.
The simulation study comprises several scenarios for sample size, dimension, strength of dependence, and tail dependence. The simulation models are set up as parametric vine copulas with randomized vine structure, pair-copula families, and parameters. Overall, the kernel methods were found to perform best (especially the local quadratic version), followed by the penalized B-spline estimators; the Bernstein estimators performed worst. An exception to this pattern was found in scenarios with small sample size, weak dependence, and no tail dependence, where the penalized B-spline and Bernstein estimators outperformed the kernel methods. Additionally, we demonstrated the need for nonparametric methods on real data whose dependence structure cannot be adequately captured by the parametric estimator.
Overall, we found that no estimator is uniformly better than the others; which one is to be preferred depends on the data. Our analysis highlighted which factors drive the performance of the various methods and which methods should be preferred in certain scenarios. In applications, statisticians can determine the characteristics of their data by an exploratory analysis and make a well-informed choice based on these results.