VaR bounds in models with partial dependence information on subgroups

Abstract We derive improved estimates for the model risk of risk portfolios when, in addition to the marginals, some partial dependence information is available. We consider models which are split into k subgroups and consider various classes of dependence information either within the subgroups or between the subgroups. As a consequence we obtain improved VaR bounds for the joint portfolio compared to the case with information on the marginals only. Our paper adds to various recent approaches to obtain reliable and usable risk bounds resp. estimates of the model risk by including partial dependence information in addition to the information on the marginals. In particular we extend an approach suggested in Bignozzi, Puccetti and Rüschendorf (2015) and in Puccetti, Rüschendorf, Small and Vanduffel (2017), which is based on positive dependence resp. independence information available for some subgroups.


Introduction
Several approaches have been developed in recent years in order to obtain realistically usable risk bounds for portfolio vectors based on reliable information on the marginals and on the dependence structure. It has become clear that, in order to obtain a not too wide range of model risk, the inclusion of further partial dependence information in addition to the marginal information is necessary. This paper extends methods introduced in Bignozzi et al. (2015) and in Puccetti et al. (2015), who consider models which are split into k subgroups. We consider a variety of possible types of dependence information either within the subgroups or between the subgroups. As a result we obtain usable and typically strongly improved risk bounds for the joint portfolio, where we concentrate mainly on the Value at Risk (VaR) and on the TVaR. For a survey of various further methods to reduce the model risk by partial dependence information we refer to Rüschendorf (2016).
In Section 2 we introduce the model of risks with information on subgroups and describe some basic notions and results. Section 3 collects some results on stochastic ordering used for the comparison of different models. Section 4 is concerned with partial dependence information within the subgroups, keeping the dependence between the subgroups fixed. In particular we consider the case of elliptical subgroup copulas with only partial knowledge of the correlations, the case of a completely unknown dependence structure, and the case of subgroups with a partially specified factor model structure. In Section 5 we consider the case with additional dependence information between the subgroups. As examples we consider subgroup models with a copula between the subgroups bounded above (or below) by a Gaussian copula, a t-copula, or a Clayton or Gumbel copula. The effects of the various kinds of dependence information on the risk bounds resp. on the reduction of dependence uncertainty (DU) are demonstrated in several examples of copula models.
The paper develops a flexible class of tools which may lead to a relevant reduction of risk bounds in various applications.

Risk bounds in risk models with a subgroup structure
We consider a risk vector X = (X_1, ..., X_d) of d risks defined on a probability space (Ω, A, P). Our basic assumption is that the marginal distributions (resp. distribution functions) F_i = F_{X_i}, 1 ≤ i ≤ d, are known, while only partial information is available on the joint distribution (function) F.
Our aim is to derive (good) upper and lower bounds for the Value at Risk (VaR) of the joint portfolio S = S_d = Σ_{i=1}^d X_i resp. for further risk measures like the TVaR. Here VaR_α(S) = F_S^{-1}(α) is defined as the α-quantile of S. Dual representations of (sharp) upper and lower VaR bounds were given in Embrechts and Puccetti (2006) and in Puccetti and Rüschendorf (2012a). In some homogeneous cases exact sharp bounds were derived in Wang and Wang (2011) and extended in Puccetti and Rüschendorf (2013) and in Wang (2014). Since the dual bounds are difficult to calculate in higher dimensions in the inhomogeneous case, the introduction of the rearrangement algorithm (RA) in Puccetti and Rüschendorf (2012b) and Embrechts et al. (2013) was an important step to approximate the sharp VaR bounds also in high dimensional examples.
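The idea of the RA can be sketched in a few lines. The following is our own minimal illustration, not the implementation of the cited papers: each column of a discretized quantile matrix is iteratively rearranged to be antimonotone to the sum of the remaining columns; the minimal row sum then approximates the sharp upper VaR bound. The Pareto(2) discretization in the example is an illustrative choice.

```python
import numpy as np

def rearrangement_algorithm(Q, n_iter=100):
    """Iteratively make each column antimonotone to the sum of the remaining
    columns; the minimal row sum then approximates the sharp upper VaR bound
    for the discretized upper tail."""
    X = np.array(Q, dtype=float)
    n, d = X.shape
    for _ in range(n_iter):
        changed = False
        for j in range(d):
            others = X.sum(axis=1) - X[:, j]
            # rank 0 = row with largest "others" -> gets the smallest value of column j
            order = np.argsort(np.argsort(-others))
            new_col = np.sort(X[:, j])[order]
            if not np.array_equal(new_col, X[:, j]):
                X[:, j] = new_col
                changed = True
        if not changed:
            break
    return X.sum(axis=1).min()

# illustration: three Pareto(2) risks, upper 5% tail on a grid of n = 1000 points
alpha, n, d = 0.95, 1000, 3
u = alpha + (1 - alpha) * (np.arange(n) + 0.5) / n
Q = np.column_stack([(1 - u) ** -0.5 - 1] * d)   # Pareto(2) quantile function
var_upper = rearrangement_algorithm(Q)           # approximate worst-case VaR_0.95
```

For heavy-tailed marginals the resulting bound exceeds the comonotonic VaR, in line with the discussion below.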
As a result it has been found that the DU interval typically is very wide and thus the VaR bounds are not tight enough to be useful in practice. The comonotonic sum S^c = Σ_{i=1}^d X_i^c is typically not the worst dependence structure, and the worst case VaR often exceeds the comonotonic VaR, denoted VaR^+, by a factor of 2 or more. For a detailed discussion of these issues see [8].
Defining the TVaR resp. the LTVaR by

TVaR_α(X) = (1/(1−α)) ∫_α^1 VaR_u(X) du  resp.  LTVaR_α(X) = (1/α) ∫_0^α VaR_u(X) du,

and since the bounds with only marginal information are typically too wide, some additional dependence information of the form

G ≤ F (a positive dependence restriction on the lower tail probabilities) resp. a negative dependence restriction on the upper tail probabilities (2.5)

has been considered in the literature, leading to 'improved standard bounds' (for the ample history see [8] and [26]). An interesting class of models with additional dependence information was introduced in Bignozzi et al. (2015), and an effective variant of it was considered in Puccetti et al. (2015). Assume that the risk index set {1, ..., d} = ∪_{j=1}^k I_j is split into k disjoint subgroups. In Bignozzi et al. (2015) it is assumed that a vector Z is available such that

Z ≤ X (positive dependence restriction) or X ≤ Z (negative dependence restriction). (2.6)

Here ≤ is a positive dependence order like the upper orthant order ≤_uo, the lower orthant order ≤_lo, the concordance ordering ≤_c, or the weakly conditionally increasing in sequence order ≤_wcs (see Section 3). Z is assumed to have independent subgroups Z_{I_j} = (Z_i)_{i∈I_j}, while the components within the subgroups I_j are comonotonic.
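The tail functionals TVaR and LTVaR can be approximated directly from a quantile function. A minimal sketch (the helper names are ours) using a midpoint rule:

```python
import numpy as np

def tvar(qf, alpha, n=10_000):
    """TVaR_alpha(X) = (1/(1-alpha)) * integral of VaR_u(X) over (alpha, 1),
    approximated by a midpoint rule on the quantile function qf."""
    u = alpha + (1 - alpha) * (np.arange(n) + 0.5) / n
    return qf(u).mean()

def ltvar(qf, alpha, n=10_000):
    """LTVaR_alpha(X) = (1/alpha) * integral of VaR_u(X) over (0, alpha)."""
    u = alpha * (np.arange(n) + 0.5) / n
    return qf(u).mean()

# check against the Exp(1) case: VaR_u = -log(1-u), TVaR_alpha = 1 - log(1-alpha)
q_exp = lambda u: -np.log(1 - u)
```

For Exp(1) the closed form TVaR_α = 1 − log(1 − α) allows a direct accuracy check of the midpoint approximation.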
In [22] it is assumed that the subgroups X_{I_i} themselves are independent, while within the subgroups any kind of dependence is possible. This kind of model assumption is motivated by hierarchical models, e.g. in insurance, where several independence assumptions can naturally be expected. The usefulness of these assumptions has been demonstrated in several applied examples (see [22] or [4]).
In the following sections we extend this kind of model assumption and discuss related possibilities for the reduction of the DU spread. In our extension we consider the case that copula information in terms of upper resp. lower bounds on the copulas of the subgroup vectors X_{I_i} is available. We further assume that bounds are available for the copula of the vector of subgroup sums, i.e. we have information on the copula between the subgroups. Besides the motivation by possible independence or strong positive dependence within subgroups, as often available in hierarchical models, e.g. in insurance models (see the above mentioned papers [5] and [22]), this kind of information is often also available, e.g., in models with uncertainty about some dependence parameters. This uncertainty then often leads to stochastic ordering bounds for the copulas within or between the subgroups. As a result it turns out that our models yield flexible tools to formulate partial dependence information and to determine (often strongly) reduced VaR and TVaR bounds.
Denoting by Y_i := Σ_{j∈I_i} X_j and W_i := Σ_{j∈I_i} Z_j the subgroup sums, we have S = Σ_{i=1}^k Y_i, which has to be compared with the sum T = Σ_{i=1}^k W_i of the comparison vector Z. We denote the distribution function of Y_i by G_i and that of W_i by H_i. The basic comparison results between S and T use dependence conditions inside the groups I_i in order to infer that G_i ≤ H_i or G_i ≥ H_i w.r.t. suitable orderings ≤, and further conditions on the copulas between the groups, C = C_Y for Y = (Y_1, ..., Y_k) and D = C_W for W = (W_1, ..., W_k), to conclude the comparison between the generic sum S and the bound T.
In the next section we review and develop some of the ordering results that are then used in the following sections to obtain risk bounds.

Some results from stochastic ordering
In this section we collect some results from stochastic ordering which are useful for the ordering of the subgroup structure models. Let ≤_cx, ≤_dcx and ≤_sm denote the convex, the directionally convex and the supermodular ordering on the class of random vectors resp. probability measures on R^m, i.e. X ≤_cx Y iff Ef(X) ≤ Ef(Y) for all convex functions f: R^m → R such that the integrals exist; similarly for ≤_dcx and ≤_sm. The orderings ≤_dcx and ≤_sm are positive dependence orders, while ≤_cx is an ordering of diffuseness. For general properties of these orders we refer to [15] and [27].
Further we denote by ≤_wcs the weakly conditionally increasing in sequence order (see Rüschendorf (2004)), which is defined by comparing conditional expectations of the components given the preceding ones, for all increasing functions f such that the expectations exist.
In particular, X is called weakly associated in sequence (WAS) if for all 1 ≤ i ≤ d−1 and all increasing sets A ⊂ R^i

P^{X_{(i+1)}} ≤_st P^{X_{(i+1)} | (X_1, ..., X_i) ∈ A},

where ≤_st is the stochastic order, X_{(i+1)} = (X_{i+1}, ..., X_d), and P^{X_{(i+1)}} is the distribution of X_{(i+1)}.
X is called conditionally increasing in sequence (CIS) if, for all i, the conditional distribution of X_{i+1} given (X_1, ..., X_i) = x is stochastically increasing in x. Some of the basic connections between these dependence and variability orderings are the following (see [25]).

Theorem 3.1 (Relations between dependence orderings).
d) If X and Y have the same CI copula and X_i ≤_cx Y_i for all i, then X ≤_dcx Y.

Remark 3.2. [13] established this dcx-comparison for vectors X and Y with the same CI copula. The conclusion also follows from a combination of (3.7) and (3.8).
In the following examples we consider the classes of elliptical and Archimedean copulas and describe criteria for the various orderings within these classes.

A) Elliptical copulas
Elliptical copulas are generalizations of the copulas of the multivariate normal and of the multivariate t-distributions. A random vector X ∈ R^d is elliptically distributed if its characteristic function (after centering) is of the form φ(t^⊤Σt) for some φ: R_+ → R, the characteristic generator. Elliptical distributions are characterized by a stochastic representation of the form X =_d μ + R A U, where Σ = AA^⊤, U is uniformly distributed on the unit sphere S^{d−1}, and the radial part R is independent of U. Σ is called the correlation matrix of X (see [11], [10], and [12]).
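The stochastic representation X = μ + R A U can be sampled directly. The following sketch is our own illustration (function and parameter names are not from the literature); with R² chi-square distributed with d degrees of freedom it recovers the multivariate normal N(μ, Σ).

```python
import numpy as np

def sample_elliptical(mu, Sigma, radial, n, seed=0):
    """Sample X = mu + R * A U with Sigma = A A^T, U uniform on the unit
    sphere S^{d-1}, and the radial part R independent of U."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    d = mu.size
    A = np.linalg.cholesky(np.asarray(Sigma, dtype=float))
    G = rng.standard_normal((n, d))
    U = G / np.linalg.norm(G, axis=1, keepdims=True)   # uniform on the sphere
    R = radial(n, rng)                                 # radial part, independent of U
    return mu + R[:, None] * (U @ A.T)

# with R^2 ~ chi-square(d) (here d = 2) this recovers N(mu, Sigma)
chi_radial = lambda n, rng: np.sqrt(rng.chisquare(2, size=n))
X = sample_elliptical([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], chi_radial, 200_000)
```

Other radial distributions give the t-distribution and further elliptical families with the same dispersion matrix Σ.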
Recall that a matrix A = (a_ij)_{i,j≤d} is called an M-matrix if a_ij ≤ 0 for all i ≠ j and all principal minors of A are positive. For a positive definite M-matrix A the following holds: there exists a lower triangular M-matrix L such that A = LL^⊤ (3.13) (see [24, Lemma 1]). The following result extends Theorem 2 in Rüschendorf (1981) from the normal case to elliptical distributions (see also [28]).
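The M-matrix property is easy to check numerically. A minimal sketch (the helper is our own; it uses the standard fact that for matrices with nonpositive off-diagonal entries, positivity of the leading principal minors already implies positivity of all principal minors):

```python
import numpy as np

def is_m_matrix(A, tol=1e-12):
    """Check a_ij <= 0 for all i != j and all leading principal minors > 0;
    for matrices with nonpositive off-diagonal entries this is equivalent
    to all principal minors being positive."""
    A = np.asarray(A, dtype=float)
    d = A.shape[0]
    off_diag = A - np.diag(np.diag(A))
    if (off_diag > tol).any():
        return False
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, d + 1))

# the inverse of a 2x2 correlation matrix with positive correlation is an M-matrix
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
```

This is the condition on Σ^{-1} that appears in the CI criterion for elliptical distributions below.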
(QΣQ^⊤)^{−1} is also an M-matrix and thus by (3.13) there exists a lower triangular M-matrix L with QΣQ^⊤ = LL^⊤. Partitioning L = (L_{11}, 0; L_{12}, L_{22}) according to (X_i, X_{(i)}), it holds that L_{11}, L_{22} ≥ 0 and L_{12} ≤ 0, so that h is increasing in both arguments. As a consequence we obtain (see [1, Lemma 4.8, p. 147]) that X is CI.
"⇒" The proof of this direction is similar to that in the normal case (see [24, Proof of Theorem 2]).
As a consequence, Theorem 3.1 combined with Proposition 3.3 implies a dcx-ordering result between elliptically distributed vectors X and Y with identical CI copulas.
The following theorem gives criteria for the increasing convex ordering ≤ icx and the supermodular ordering ≤ sm for elliptical distributions.

B) Archimedean Copulas
A copula is called Archimedean if it has the form C_Ψ(u_1, ..., u_d) = Ψ(Ψ^{−1}(u_1) + ... + Ψ^{−1}(u_d)) for a generator Ψ. If Ψ is d-alternating and furthermore Ψ(0) = 1 and lim_{x→∞} Ψ(x) = 0, then Ψ is the generator of an Archimedean copula. In the following we restrict ourselves to this subclass of Archimedean copulas.
The following characterization of positive dependence properties is due to Scarsini (2001, 2005). b) For an Archimedean copula C_Ψ the positive dependence properties are characterized by equivalent conditions on the generator Ψ; the condition for MTP_2 is strictly stronger than the condition for CI.
Let C*_∞ denote the class of completely monotone generators, i.e. generators which are d-alternating for all d ≥ 1. The following criterion for the ≤_sm ordering of Archimedean copulas is due to Wei and Hu (2002).
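For completely monotone generators, Archimedean copulas admit frailty (Marshall-Olkin) sampling. The following sketch is our own illustration for the Clayton family, using that Ψ(t) = (1 + t)^{−1/θ} is the Laplace transform of a Gamma(1/θ) variable:

```python
import numpy as np

def sample_clayton(theta, d, n, seed=0):
    """Marshall-Olkin sampling: for the completely monotone generator
    Psi(t) = (1 + t)^(-1/theta) (Laplace transform of Gamma(1/theta)),
    U_j = Psi(E_j / M) with M ~ Gamma(1/theta) and E_j iid Exp(1) has the
    d-dimensional Clayton copula with parameter theta > 0."""
    rng = np.random.default_rng(seed)
    M = rng.gamma(1.0 / theta, 1.0, size=(n, 1))   # frailty / mixing variable
    E = rng.exponential(1.0, size=(n, d))          # iid unit exponentials
    return (1.0 + E / M) ** (-1.0 / theta)

U = sample_clayton(2.0, 3, 100_000)
```

The same construction works for any generator in C*_∞ once the mixing distribution with Laplace transform Ψ is identified (e.g. a stable law for the Gumbel family).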

Dependence structures within the subgroups
We consider risk bounds for risk models with a subgroup structure as introduced in Section 2, under various partial dependence assumptions within the subgroups I_i, while keeping the copula C = C_Y of the subgroup sums Y_1, ..., Y_k fixed.
We assume first that C is a positively dependent copula in the sense that C is weakly associated in sequence (WAS). By Theorem 3.1 this is fulfilled in particular if C is CI or CIS. WAS holds in particular in the case that the subgroups X_{I_i} are independent.
Proof. a) The assumption that C is WAS is by definition equivalent to the ordering condition C^⊥ ≤_wcs C, where C^⊥ is the independence copula.
From the assumption Y_i ≤_cx W_i we, therefore, conclude by Theorem 3.1 c) the ordering S ≤_cx T. If X_{I_i} ≤_dcx Z_{I_i}, then as a consequence Y_i ≤_cx W_i, and the converse conclusion holds when switching the roles of X and Z in order to get lower bounds for the portfolio risk vector X. As a consequence, under these ordering conditions Proposition 4.1 applies and delivers risk bounds.

b) Completely unknown dependence structure within subgroups
If besides the marginal distributions F_j of X_j no dependence information is available on the i-th subgroup X_{I_i}, then with Z_j = F_j^{−1}(U_i), j ∈ I_i, U_i ∼ U(0, 1), we have by the well-known ordering property of the comonotonic vector X_{I_i} ≤_sm Z_{I_i}, and as a consequence Y_i ≤_cx W_i. If (U_1, ..., U_k) ∼ C, then we obtain X ≤_sm Z and S ≤_cx T. (4.9)

c) Partially specified risk factor model

Let the i-th subgroup be modelled by a partially specified risk factor model, i.e.
X_j = f_j(Z_i^f, ε_j), j ∈ I_i, where Z_i^f are systemic risk factors and the ε_j are individual risk factors. It is assumed that the joint distributions of (X_j, Z_i^f), j ∈ I_i, are known, but the joint distributions of (ε_j) and of (Z_i^f) are not specified. These partially specified risk factor models were introduced and analyzed in Bernard et al. (2016a), and they turned out to be very flexible models with considerable potential for risk reduction. It was shown in [3] that the conditionally comonotonic vector (X_j^c)_{j∈I_i} given Z_i^f yields a convex upper bound for the subgroup sum, where the F_{j|z_i} are the conditional distribution functions of X_j given Z_i^f = z_i. If (W_1, ..., W_k) has the copula C, specifying the dependence between the groups, then with V ∼ U(0, 1) independent of (U_i) ∼ C, the following bounds were shown in [3] for S = Σ_{j=1}^d X_j: (4.13). The bounds in (4.13) give upper resp. lower estimates of the VaR in the partially specified risk factor models based on upper resp. lower estimates by TVaR in the conditional models.
In the following example we consider the case where the different subgroups are independent, i.e. C = C^⊥ is the independence copula. We compare various risk bounds and demonstrate the effects of various dependence restrictions within the subgroups. In particular we compare the case of marginal information only, the case of independent subgroups with no dependence information within the subgroups, and the case of independent subgroups with additional partial factor information within the subgroups.
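The comonotonic bound (4.9) with independent subgroups can be evaluated by Monte Carlo. The following sketch is our own helper (with identical Pareto(3) marginals as an illustrative choice); it estimates VaR_α(T) for T = Σ_i W_i with comonotone subgroups and independent U_i:

```python
import numpy as np

def var_comonotone_subgroups(qf, k, m, alpha, n=200_000, seed=0):
    """Monte Carlo VaR_alpha of T = sum_i W_i, where W_i = sum_{j in I_i}
    F_j^{-1}(U_i) (comonotone within each subgroup of size m, identical
    marginals with quantile function qf) and U_1, ..., U_k are iid
    (independence copula between the k subgroups)."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=(n, k))
    T = (m * qf(U)).sum(axis=1)
    return np.quantile(T, alpha)

# Pareto(3) marginals: F^{-1}(u) = (1-u)^{-1/3} - 1
q_par = lambda u: (1.0 - u) ** (-1.0 / 3.0) - 1.0
```

For k = 1 this reduces to the comonotonic VaR d · F^{-1}(α), while splitting the same portfolio into more independent subgroups reduces the bound, in line with the example below.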
Example 4.1. We consider a portfolio X of d risks with k independent subgroups I_i of the same size m, i.e. d = km. Within the subgroups we consider partially specified risk factor models. One half of the elements of the i-th subgroup are of the form X_j = (1 − U_i)^{−1/3} − 1 + ε_j, while the other half is of an analogous form, where p ∈ (0, 1) is a parameter describing the dependence of the systemic risk factors within the subgroups; p = 0 corresponds to antimonotonic, p = 1 to comonotonic behaviour.
The variables ε_j and U_i are independent for j ∈ I_i, while the {ε_j}_{j∈I_i} may have any dependence. An example of this kind of factor model without different subgroups (i.e. for k = 1) was considered in [3]. Using only the marginal information, we get for d = 100 and α = 0.95 by the rearrangement algorithm (RA) the sharp upper and lower VaR bounds; with partial factor information within the subgroups, the improved bounds VaR^f_α follow from (4.13). It has been shown in [3] that the determination of the bounds VaR^f_α can be reduced to VaR upper and lower bounds for the conditional models given the values of the risk factors.
As a result one finds, as expected, a strong improvement of the upper bounds with increasing number k of independent subgroups. Also the assumption of partially specified factor models within the subgroups strongly reduces the risk bounds compared to the case of arbitrary dependence within the subgroups. The improvements of the bounds are shown in Table 4. For p ≈ 0 the chosen structure produces strong negative dependence, for p ≈ 1 strong positive dependence between the two parts of the subgroups. In the first case this leads to a strong improvement of the upper bounds, while in the second case the improvement of the lower bound is more pronounced.

a) Gauss factor model: Let X = (X_i) be a one-factor Gauss model of the form X_i = r_i Z^f + (1 − r_i^2)^{1/2} ε_i, (4.14) where Z^f and the ε_i are standard normal.

b) Partially specified Gauss factor submodels: Let X be a portfolio of d risks with k independent subgroups I_i of the same size m. For the subgroups we assume homogeneous Gauss factor models as in (4.14), i.e. X_j = r_i Z_i^f + (1 − r_i^2)^{1/2} ε_j, j ∈ I_i,
where r_i = r ∈ [−1, 1], the Z_i^f ∼ N(0, 1) are independent of the ε_j for j ∈ I_i, and (Z_i^f) and (ε_{I_i}) are independent. If the ε_j within the subgroups are independent, then with the assumption of independence between the subgroups the distribution of X is uniquely determined and VaR_α(Σ_{i=1}^d X_i) can easily be calculated (see Table 4.4). With increasing value of r the dependence within the subgroups increases and the VaR of the sum increases; with increasing number of independent subgroups it decreases.
Here, ϕ and Φ are the pdf and the cdf of N(0, 1). Therefore, we obtain the bounds from (4.16) and (2.3). The conditional distributions F_{j|Z_i^f} are independent of j. Therefore, the convex bounds in (4.16), i.e. the conditionally comonotonic sums for the partially specified risk factor models in the subgroups, coincide with the comonotonic sums. In comparison, the VaR bounds in (4.13) are for r > 0 improvements of the TVaR bounds in Table 4.5, in particular of the lower bounds.
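In the fully specified Gauss factor submodel with independent ε_j, VaR_α(S) is available in closed form, since each subgroup sum is normal and the subgroup sums are independent. A minimal sketch (the helper name is ours; unit-variance normal factors and errors as in (4.14)):

```python
from statistics import NormalDist

def var_gauss_submodels(k, m, r, alpha):
    """Exact VaR_alpha of S = sum_j X_j in the homogeneous Gauss factor
    submodel with independent eps_j: X_j = r Z_i^f + sqrt(1-r^2) eps_j for
    j in I_i, so each subgroup sum is N(0, (m r)^2 + m (1 - r^2)) and the
    k subgroup sums are independent."""
    group_var = (m * r) ** 2 + m * (1.0 - r ** 2)
    return (k * group_var) ** 0.5 * NormalDist().inv_cdf(alpha)
```

The closed form reproduces the qualitative behaviour described above: the VaR increases in r and decreases with the number k of independent subgroups (at fixed portfolio size d = km).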

Dependence structure between the subgroups
In contrast to Section 4 we now consider the case where the dependence structure between the subgroups is estimated from above or below. As in Section 2, let C denote the copula of the vector Y of subgroup sums and let D denote the copula of the comparison vector W = (W_1, ..., W_k) of subgroup sums of Z.
Proof.
The assumption that C ≤_wcs D together with the ordering of the subgroup sums yields the convex comparison of S and T. Thus part a) follows. Part b) follows by an analogous argument.
Also the following related criterion in terms of the supermodular ordering can be given.
Proof. Let D be CI and let V be a random vector with the copula C and the same marginals as W. Then by [14] it holds that V ≤_dcx W, which implies the convex ordering of the corresponding sums, and a) follows from the transitivity of the convex ordering ≤_cx.
The proof under the assumption that C is CI is similar, and the proof of b) is analogous.
In Section 3 we collected stochastic ordering results for a variety of concrete copulas like the independence copula, the Gauss copula, the t-copula, the Clayton copula and the Gumbel copula. We use copulas of this kind in the following examples as bounds for the dependence structure between the groups. Within the subgroups we allow any dependence structure.
Example 5.1 (Bounds for the copulas between subgroups). Let X be a risk portfolio of d = 50 Pareto distributed risk variables. The portfolio is split into k subgroups of equal size. The copula C between the subgroups can be bounded by one of the above mentioned copulas D in the sense of the supermodular order, as in Proposition 5.2. For the application of Proposition 5.2 we need to verify the CI condition for D.
If D is a Gauss copula or a t-copula and all elements σ_ij of the correlation matrix Σ are positive, then Σ^{−1} is an M-matrix and, therefore, by Proposition 3.3, D is CI. If D is a Clayton or a Gumbel copula, then D is an Archimedean copula with completely monotone generator and thus, by Theorem 3.5, D is CI. Thus, by application of Proposition 5.2, we obtain by means of the comparison vector W estimates for the Value at Risk of S. Since there are no usable convolution properties of the Pareto distribution, we calculate all bounds in the following by means of Monte Carlo simulations.
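The Monte Carlo computation of the TVaR bounds via the comparison vector W can be sketched as follows. This is our own illustrative helper, assuming an equicorrelated Gauss copula between the subgroups, comonotone subgroups, and Pareto(3) marginals:

```python
import math
import numpy as np

def tvar_bound_gauss_between(k, m, rho, alpha, theta=3.0, n=50_000, seed=0):
    """Monte Carlo TVaR_alpha of T = sum_i W_i with W_i = m * F^{-1}(U_i),
    Pareto(theta) marginals, comonotone subgroups of size m, and an
    equicorrelated (rho >= 0) Gauss copula for (U_1, ..., U_k)."""
    rng = np.random.default_rng(seed)
    ncdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
    Z0 = rng.standard_normal((n, 1))                    # common factor
    G = rng.standard_normal((n, k))
    Z = math.sqrt(rho) * Z0 + math.sqrt(1.0 - rho) * G  # corr(Z_i, Z_j) = rho
    U = ncdf(Z)                                         # Gauss copula sample
    T = (m * ((1.0 - U) ** (-1.0 / theta) - 1.0)).sum(axis=1)
    T.sort()
    return T[int(alpha * n):].mean()                    # tail average = TVaR
```

Decreasing the correlation parameter ρ of the bounding copula tightens the upper bound, as observed in the tables below.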
In Table 5.2 we consider the case that the copula C of the vector Y of subgroup sums is bounded above by the independence copula D = C^⊥. Within the subgroups we allow arbitrary dependence. From (5.1) we obtain the improved TVaR bounds with this subgroup information. We observe strong improvements of the upper bounds and minor improvements of the lower bounds. This is to be expected when posing the independence copula as upper bound, since it restricts only the positive dependence from above. In Table 5.3 we consider as examples for the comparison copula D the case of Gaussian copulas and t-copulas with various values of the correlation parameter ρ = σ_ij, i ≠ j.
Again we see a strong improvement of the upper bounds. The improvements increase with decreasing correlation. For the case of t-copulas the improvements increase with increasing degree of freedom ν. The independence copula gives the strongest improvement among all upper bounds by CI copulas considered. Table 5.4 is concerned with the case of a Clayton copula and a Gumbel copula. Again, with decreasing value of the dependence parameter the upper bounds improve considerably.

Remark 5.3. a) Proposition 5.1 also allows to derive lower bounds for TVaR_α(S), assuming that D ≤_wcs C and W_i ≤_cx Y_i. b) Lower bounds for VaR_α(S) are also obtainable by the conclusion: S ≤_cx T implies LTVaR_α(T) ≤ VaR_α(S). (5.5) c) If the copula C between the subgroups is modelled by an elliptical copula with negative correlations σ_ij < 0 between subgroups i ≠ j, then by Proposition 5.2 an upper bound for VaR_α(S) is obtained by the independence copula D = C^⊥. Table 5.2 shows some results for the VaR bound induced by the independence copula D for Pareto(3, 1)-marginals.
The results in this section show that the assumption of a CI upper bound D for the copula C between the subgroups leads to a considerable reduction of the upper VaR and TVaR bounds and of the DU spread of the portfolio. The largest reduction is obtained by this method when D is the independence copula.
In this section we considered the case of no dependence information within the subgroups. By the results in Sections 4 and 5, both kinds of dependence information, between and within the subgroups, can however be combined, leading to accumulated reduction effects. The magnitudes of the single reduction effects are considerable and can be well estimated from the examples treated in the sections above. Altogether, this approach yields quite flexible tools with promising potential for various real applications.