1 Introduction

Researchers and practitioners have long disputed the most effective ways of allocating assets in an investment portfolio. We assume that an efficient portfolio provides the best expected return for a given level of risk that the investor is willing to take. The first quantitative treatment of the trade-off between profit and risk dates back to Harry Markowitz (1952), who pioneered Modern Portfolio Theory (MPT) and introduced the probabilistic formalization of the concepts of "return" and "risk". The main weakness of the Markowitz model is the assumption of linear dependence between asset returns. In practice, estimation errors in the expected returns and in the covariance matrix of these returns can significantly affect the asset allocations: a small change in the estimates may lead to a drastic change in portfolio weights. Moreover, during the past two decades, researchers have come to an agreement on several stylized facts about financial markets, i.e., heavy tails in asset return distributions, volatility clustering, etc. (Cont 2001; Mantegna et al. 2000). Therefore, applying classical Markowitz theory under the assumption of normally distributed logarithmic returns may lead to significant drawdowns during unstable periods of the economy. Several approaches, e.g., the Bayesian framework, higher moments, robust portfolio optimization, tail risk optimization, and weight constraints, have been developed to alleviate the effects of estimation error.

A major novelty of this paper is that the performance of the suggested copula-based Black-Litterman portfolio is compared against that of the copula-CVaR portfolio, since the two share a common multidimensional distribution modeling component and avoid the excessive corner solutions that many optimization approaches generate under extreme parameter estimates. Moreover, to further control sensitivity to input parameters, we evaluate the performance of the optimization strategies at three levels of weight constraints (30%, 40%, and 50%). Weight constraints are often encountered in practice and can be inherent to internal investment fund policy or set by regulators in the case of bank-based markets. Moreover, instead of setting the level of confidence exogenously, as is usually done in other papers (Beach and Orlov 2007; Sahamkhadam et al. 2022), we test a range of τ values, as the level of confidence may significantly affect the obtained results and inferences. The confidence in the estimates determines the extent to which the model deviates from the equilibrium weightings. We also support our results with classical mean–variance and equally weighted optimization results as benchmarks. In addition, we provide tail risk measures, the maximum drawdown over 1 day and over 3 months, portfolio turnover, and the break-even transaction cost for all optimization approaches.

In Sect. 2 we present a literature review, notations, and definitions. The data we use are briefly discussed in Sect. 3. In Sect. 4 we describe the methodology. Section 5 summarizes the empirical results of the analyses, while Sect. 6 concludes and provides suggestions for future research.

2 Literature review

Modern portfolio management and asset pricing theory, built on the famous Markowitz portfolio optimization concept, makes intuitive sense. Nevertheless, the Markowitz model has a number of disadvantages that have inspired refinements of the existing approach and the development of new portfolio optimization methods.

In particular, the choice of a risk proxy is the Achilles' heel. Among the many proposed risk measure innovations, only a few have been widely accepted by practitioners, despite their active interest in this area. Traditionally, the standard deviation has been considered the main measure of the risk of an asset or a portfolio of assets and is widely used in coefficients such as the Sharpe ratio and models such as the Black–Scholes option pricing model. However, volatility as a risk measure implicitly assumes a normal distribution of the underlying data, which makes it a weaker instrument for risk estimation when this condition is not met.

Another popular tool for managing risk is VaR, which was introduced by experts of the investment bank J.P. Morgan after the 1987 crisis, when all the basic model assumptions concerning the correlation of stock markets, currencies, and bonds were violated due to market inefficiency. Initially, VaR was designed to include the likelihood of the extremely high losses observed in historical data in the appraisal of market risk. VaR can be defined as the maximum potential loss that a portfolio of financial instruments will suffer with a given probability over a defined period. Soccorsi (2016) concludes that, despite the criticism of VaR for highly volatile markets, the risk is measured accurately. The use of VaR for market risk quantification is also part of the Basel II regulatory requirements. In the academic literature, many works exploit this risk indicator for investment decisions (Basak and Shapiro 2001).

However, Artzner et al. (1999) show that the VaR measure suffers from some theoretical limitations and requires additional adjustments for emerging capital markets (Teplova and Ruzanov 2019). First, VaR is not a convex and smooth function in the case of discrete distributions and can therefore have many local extrema. Second, VaR is not a coherent measure of risk; in particular, there may be situations where portfolio diversification increases the value of VaR (subadditivity is violated). CVaR has been proposed as an alternative measure to VaR and measures how much of the investment is lost on average, given that the VaR limit is exceeded. Rockafellar and Uryasev (2000, 2002) analyze portfolio optimization with the expected shortfall as a measure of risk. They show that CVaR can be used in conjunction with an optimization algorithm that reduces the problem to a linear programming problem, which allows optimizing portfolios of large dimensions and gives a stable numerical implementation. CVaR as a measure of risk has attracted considerable research interest in portfolio management and other economic and financial problems (Mulvey and Erkan 2006; Huang et al. 2008; Zhu and Fukushima 2009).
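To make the distinction between the two measures concrete, a minimal historical-simulation sketch of VaR and CVaR (illustrative synthetic data; not tied to any particular study cited above) can be written as:

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Historical-simulation VaR and CVaR on a vector of returns;
    losses are expressed as positive numbers."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)       # loss level exceeded with prob. 1 - alpha
    cvar = losses[losses >= var].mean()    # mean loss in the tail beyond VaR
    return var, cvar

rng = np.random.default_rng(42)
r = rng.standard_t(df=3, size=100_000) * 0.01   # heavy-tailed "daily returns"
var95, cvar95 = var_cvar(r)
```

By construction CVaR is at least as large as VaR at the same level, which is why it is preferred for capturing tail severity.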

Another important question that attracts the attention of many researchers and practitioners in portfolio optimization is modeling the dependencies between financial assets. The linear correlation metric provides no accurate estimate, especially during unstable periods, when the asymmetry of financial asset returns increases and distributions strongly deviate from normality (Erb et al. 1994; Longin and Solnik 2001; Ang and Chen 2002; Patton 2006). The first stylized fact about financial returns is that they have heavy-tailed distributions. The second is volatility clustering, meaning that there are periods of low volatility and periods where volatility is high. The importance of taking this effect into account when predicting the future volatility of returns inspired the creation of a new class of GARCH models, formulated by Engle (1982) and Bollerslev (1986). Asymmetric volatility in financial market returns has been widely documented and indicates a different response to positive and negative events. Traditionally, negative events cause a greater spillover in volatility than positive ones, as empirically shown by Nelson (1991). Asymmetric volatility is associated with the financial leverage effect: the company's debt increases after an initial fall in the stock market and, consequently, leads to an increase in the riskiness of this specific security, further increasing the volatility of the market (Black 1976; Christie 1982).

Copula models take a step toward preserving the mentioned stylized facts in multidimensional distribution modeling, allowing the simulation of various types of structural dependencies in both the upper and lower tails of the distributions (Embrechts et al. 2003; Lee and Long 2009). The theory of copula functions originates from the work of Hoeffding (1940) and Sklar (1959), but its development and widespread application occurred only by the end of the 1990s. A copula is a multidimensional distribution function defined on the n-dimensional unit cube. According to Sklar's (1959) theorem, any multidimensional distribution can be constructed from a set of marginal distributions and a particular copula function that specifies the dependence structure between the random variables. The number of studies investigating applications of copula functions has grown quite rapidly. In recent years, various methods for estimating the parameters of copula functions have been proposed, from parametric (Jondeau and Rockinger 2003) and semiparametric (Breymann et al. 2003) to nonparametric methods (Fermanian and Scaillet 2003).
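As a rough illustration of Sklar's theorem (all parameter values are illustrative, and scipy is assumed available), the following sketch builds a bivariate distribution with heavy-tailed t margins coupled through a Gaussian copula — the dependence structure and the margins are specified separately:

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)
rho = 0.7                                   # copula correlation (assumed)
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw from the multivariate normal defining the Gaussian copula.
z = rng.multivariate_normal(mean=[0, 0], cov=cov, size=50_000)

# 2. Push through the standard normal CDF: u lives on the unit square
#    and carries only the dependence structure (the copula itself).
u = norm.cdf(z)

# 3. Apply arbitrary marginal quantile functions -- here heavy-tailed t(4).
x = t.ppf(u, df=4)

# The dependence survives the change of margins.
corr = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
```

Vine copulas generalize this idea by chaining bivariate copulas into flexible high-dimensional dependence structures.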

One of the first studies of portfolio optimization based on copula functions belongs to Lauprete et al. (2002). The authors consider the problem of selecting risk-minimizing portfolios, accounting for deviations from the Gaussian distribution of returns. The meta-elliptical t copula with non-central t-GARCH univariate margins is studied as a model for time-series forecasting, and portfolio optimization is performed with respect to the mean-CVaR measure. The authors provide an out-of-sample backtesting exercise and compare performance with common asset allocation techniques. Chirag and Mark (2017) show that the copula function is capable of rendering simulations that retain the most essential statistical traits of the underlying data, and that the copula-GARCH optimization framework can capture a broad range of risk preferences. Autchariyapanitkul et al. (2014) evaluate minimum risk portfolios using the Student's t copula. Bai and Sun (2007) exploit three-dimensional Archimedean copulas for data analysis. The authors demonstrate that the copula-based CVaR method outperforms approaches that assume a normal distribution. Kakouris and Rustem (2014) show how copula-based models can be incorporated into the Worst-Case CVaR (WCVaR) framework. Krzemienowski and Szymczyk (2016) introduce a new measure of risk named copula-based conditional value-at-risk (CCVaR). Vine copulas are the most flexible multivariate distribution modeling tool, able to model highly complicated dependence structures even in high dimensions (Joe 1996; Bedford and Cooke 2001, 2002; Kurowicka and Cooke 2006; Aas et al. 2009). Vine copulas have found wide application in the portfolio optimization problem. Mendes and Marques (2012) empirically show that pair-copula-based robust portfolios always outperform their classical versions, providing higher gains in the long run and requiring a smaller number of updates.
Hernandez (2014) concludes that the combination of a C-vine copula and nonlinear portfolio optimization produces the highest return relative to risk. Bekiros et al. (2015) perform vine copula-based minimum risk allocation for mining stock portfolios during a financial crisis and show the outperformance of vine copulas in forecasting tail dependence. Hernandez et al. (2017) show the outperformance of R-vine copulas in modeling the dependence between stocks in different sectors of the economy. Pang and Karan (2018) find that the outperformance of vine copula models becomes more prominent as the portfolio size increases.

Another approach popular in the financial industry that deals with the instability of Markowitz's optimization is the Black-Litterman model. Black and Litterman (1991, 1992) made a long-awaited step toward closing the gap between investment theory and practice by allowing investors to include their subjective views on expected returns. This model functions within a Bayesian framework in which these views are inputs to an optimization procedure that leads to portfolio weights that can be close to, or far from, the market equilibrium weights, depending on the investor's confidence in their views. Despite the popularity of the Black-Litterman model in the investment industry, it has received limited attention in the academic literature. Beach and Orlov (2007) suggest using a GARCH process to define the views vector and its uncertainty. Kolm and Ritter (2017) utilize a generalized BL approach with views modeled by asset pricing model parameters, for example, risk premia in Arbitrage Pricing Theory. Silva et al. (2017) address the difficulty of quantifying subjective views and suggest an alternative approach using the Formal Index of Quality. Deng (2018) incorporates investors' views using a VECM model augmented with DCC. Pang and Karan (2018) suggest a closed-form solution for the classical BL portfolio optimization problem using conditional value-at-risk (CVaR) as the risk measure. Bessler et al. (2017), utilizing a sample-based version of the Black–Litterman model, show its significant outperformance relative to naively diversified, mean–variance, Bayes–Stein, and minimum-variance strategies. Platanakis and Urquhart (2019) show that the Black-Litterman framework with variance-based constraints yields superior out-of-sample risk-adjusted returns compared with equally weighted and Markowitz optimization.

In our paper, we suggest extending the Black-Litterman approach with copula-generated views, since copula models capture many properties of financial returns in an elegant and systematic way. In practice, the contribution of these views to the final solution can be calibrated through the confidence level τ based on validation performance. Sahamkhadam et al. (2022) use a similar extension of the Black-Litterman model and show better performance of copula-based views portfolios in terms of lower tail risk and higher risk-adjusted returns compared with Markowitz's optimization. In our paper, we evaluate the performance of the copula-based Black-Litterman model against the copula-CVaR approach. Since the two share a common multidimensional distribution modeling component, it is interesting to estimate how much calibrating the views' confidence (the contribution of the views to the final optimization) can bring in terms of profitability and risk metrics control. Therefore, unlike Sahamkhadam et al. (2022), where the authors set the confidence level τ exogenously at 0.5, we test a range of values from 0.1 to 0.5. Typically, in the literature, the confidence level τ is not set higher than 0.5 (Black and Litterman 1992; He and Litterman 1999; Idzorek 2007; Drobetz 2001). Additionally, we provide Markowitz's and equally weighted portfolio performances as benchmarks. Since Markowitz's optimization is known to be highly sensitive to the input data, we analyze the performance of the considered strategies under different levels of weight constraints. Weight constraints can be inherent to internal investment fund policy or set by regulators in the case of bank-based markets. Therefore, another contribution of this paper is the empirical evaluation of the impact of weight limits on the different strategies; such limits are actively used in the financial industry and are helpful in decreasing rebalancing volumes and increasing break-even points.

3 Dataset and sources

Our study analyzes portfolio optimization on ETFs. The selection among all listed ETFs is made according to the following criteria: asset class, liquidity, size, and ETF provider. We first limited our sample to 84 ETFs of middle and large capitalization, with a high degree of liquidity, invested in Equity and Fixed Income, with an inception date no later than 02/01/2012, and provided by Blackrock (https://www.blackrock.com). “Appendix Table 3” provides the details of the ETFs chosen for the current research: EEM Equity, TDXPEX GR Equity, AIA US Equity, DVY US Equity, SHY US Fixed Income, IEMB LN Fixed Income, IEF US Fixed Income, EWK US Equity, SOXX US Equity, and CSNDX SW Equity. The data are obtained from the Thomson Reuters Eikon database, run from 02/01/2012 to 03/01/2022, and are expressed in US dollars. Our analysis is based on logarithmic daily returns, obtained using the formula:

$$r_{i} = \ln\frac{p_{i}}{p_{i-1}},$$
(1)

where \(p_{i}\) is the asset price at time i.
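A minimal sketch of this transformation on a toy price series (illustrative numbers, not from the dataset):

```python
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3])     # toy daily close prices
log_returns = np.log(prices[1:] / prices[:-1])     # r_i = ln(p_i / p_{i-1})
```

A convenient property of log returns is that they sum over time: the total log return over the sample equals `ln(p_last / p_first)`.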

Before building a copula function, it makes sense to check whether the returns follow a normal distribution, whose density is given by the formula:

$$f\left( x \right) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{\left( x - \mu \right)^{2}}{2\sigma^{2}}},$$
(2)

where \(\mu\) stands for the distribution’s mean and \(\sigma\) for its standard deviation.

Skewness and kurtosis statistics are presented in Table 4, “Appendix”. The distribution of logarithmic ETF returns is characterized by strong skewness and heavy tails, indicating that these return time series are non-normally distributed. We additionally run the Shapiro–Wilk test, whose null hypothesis is that the logarithmic returns are normally distributed. The null hypothesis is rejected in all cases; that is, the logarithmic return time series of the ETFs cannot be considered normally distributed. In turn, mean–variance optimization assumes normally distributed returns, which can lead to suboptimal solutions.
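A check of this kind can be reproduced with scipy's Shapiro–Wilk implementation; here we use synthetic data for illustration, with a heavy-tailed Student-t sample standing in for a daily ETF return series:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
heavy = rng.standard_t(df=3, size=1000)   # heavy-tailed, like the ETF returns
gauss = rng.normal(size=1000)             # Gaussian control sample

_, p_heavy = shapiro(heavy)               # H0: sample is normally distributed
_, p_gauss = shapiro(gauss)
```

For the heavy-tailed sample the p-value is far below any conventional significance level, so normality is rejected, mirroring the result reported for the ETF return series.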

4 Methodology

4.1 Black-Litterman equation

Capital market expectations are the key inputs for asset allocation in the MV optimization framework, which, however, often leads to instability, concentration, and underperformance. The BL model is a way to incorporate an investor’s views into the portfolio optimization process. Based on a Bayesian framework, the BL approach incorporates the investor’s views about expected asset returns. In the BL model, the posterior distribution of returns is estimated using (i) the prior distribution, (ii) investors’ views, and (iii) the dependence structure between assets. Therefore, copula models are applied to obtain both the prior covariance and the dependence structure. In this paper, we use the R-vine copula due to its high flexibility and popularity.

The Black-Litterman expected return is calculated from the vector of equilibrium returns (prior mean \({\uppi }\)) and the vector of investor’s views (V):

$$E\left( R \right) = \left[ \left( \tau \Omega \right)^{-1} + P^{\prime}\Sigma^{-1}P \right]^{-1}\left[ \left( \tau \Omega \right)^{-1}\pi + P^{\prime}\Sigma^{-1}V \right]$$
(3)

The \({\Omega }\) matrix is the covariance matrix of excess returns, and \({\Sigma }\) is a diagonal matrix of the error terms (or variances) of the views. The P matrix selects the assets for which views are imposed. The effective weight placed on the views is set by the value of \({\uptau }\): in the Black-Litterman equation, a lower value of \({\uptau }\) gives greater weight to the implied equilibrium return vector \({\uppi }\).

The equilibrium return vector represents the required (excess) returns that would clear the market, given the vector of market capitalization weights \(w^{mkt}\), the covariance matrix of excess returns \({\Omega }\), and the risk aversion coefficient \(\lambda\):

$$\pi = \lambda \Omega w^{mkt},$$
(4)

The value of \(\lambda\) is an estimate of the investor’s required reward-to-risk ratio:

$$\lambda = \frac{{E\left( {R_{m} - r_{f} } \right)}}{{\sigma_{m}^{2} }},$$
(5)

We estimate the returns’ posterior distribution using the expected return \(V\) and the covariance matrix \({\Sigma }\) of the returns generated with the vine copula model. In the copula-based approach, both the simulated residuals and the posterior covariance matrix preserve the dependence structure between assets.
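The posterior mean of Eq. (3) can be sketched directly in numpy (toy three-asset numbers, all illustrative; the paper's notation is used, with Omega the asset covariance and Sigma the diagonal view-error matrix):

```python
import numpy as np

def bl_posterior(pi, Omega, P, V, Sigma, tau):
    """Black-Litterman posterior mean, Eq. (3), in the paper's notation:
    Omega = covariance of excess returns, Sigma = diagonal view-error matrix."""
    A = np.linalg.inv(tau * Omega) + P.T @ np.linalg.inv(Sigma) @ P
    b = np.linalg.inv(tau * Omega) @ pi + P.T @ np.linalg.inv(Sigma) @ V
    return np.linalg.solve(A, b)

# Toy 3-asset example (all numbers illustrative).
Omega = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
pi = np.array([0.03, 0.05, 0.08])      # equilibrium (prior) returns
P = np.eye(3)                          # absolute views on every asset
V = np.array([0.04, 0.04, 0.10])       # copula-simulated view means (assumed)
Sigma = np.diag([0.02, 0.02, 0.02])    # view uncertainties
post = bl_posterior(pi, Omega, P, V, Sigma, tau=0.2)
```

The sketch also makes the role of τ visible: as τ shrinks toward zero, the posterior collapses onto the equilibrium vector π, consistent with the discussion above.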

4.2 Simulation and optimization procedure

In this section, the steps involved in constructing the CBL-based portfolio strategies are presented. Let \(T_{t}\) = \(O_{t}\) + \(H_{t}\) be the time points in the observation interval \(O_{t}\) (250 days) and the out-of-sample holdout interval \(H_{t}\) (here we utilize the one-period-ahead estimate, i.e., the next day after the last observed day in \(O_{t}\)) at time t, with \(O_{t}\) ∩ \(H_{t}\) = ∅, ∀t. Steps 1–8 are repeated for each specified level of confidence and each level of weight constraint.

Step 1 :

Estimate the return’s prior distribution.

Calculate the prior covariance matrix \({\Omega }\) and the prior mean \(\pi\) defined in Eq. (4).

Step 2 :

Fit the ARMA-GARCH model and get standardized residuals.

Let \(y_t\) be the return of a financial asset at time t; then the regression equation of the ARMA(p, q) model is:

$$y_{t} = \mu + \sum_{i = 1}^{p} a_{i} y_{t - i} + \sum_{j = 1}^{q} b_{j} \varepsilon_{t - j} + \varepsilon_{t},$$
(6)

where \(\mu\) is the mean value, \(a_{i}\) the autoregressive coefficients, \(b_{j}\) the moving average coefficients, and \(\varepsilon_{t}\) the random regression error of the model at time t. The formal criterion for choosing the order of the model is the value of the Akaike Information Criterion (AIC) or the Schwarz Information Criterion (SIC). The regression equation of the standard GARCH model is as follows:

$$\begin{aligned} \varepsilon_{t} & = \sigma_{t} z_{t}, \quad z_{t} \sim i.i.d.\left( 0,1 \right), \\ \sigma_{t}^{2} & = \omega + \sum_{i = 1}^{p} \alpha_{i} \varepsilon_{t - i}^{2} + \sum_{j = 1}^{q} \beta_{j} \sigma_{t - j}^{2} \\ \end{aligned}$$
(7)

where \(\sigma_{t}^{2}\) is the conditional variance, which depends on the previous squared errors and its own lagged values, and \(\varepsilon_{t}\) is the random regression error of the model for the mean. The results of fitting the ARMA-GARCH model to the data are given in Table 5 in the “Appendix”. Using the estimated parametric model, we obtain the standardized residuals:

$$\hat{z}_{i,t} = \frac{\hat{\varepsilon}_{i, t}}{\hat{\sigma}_{i, t}}$$
(8)
Step 3 :

Estimate the conditional multivariate cumulative distribution function of the returns using the R-vine copula model on the pseudo-observations defined by:

$$u_{i, t} = F_{i} \left( {\hat{z}_{i,t} } \right)$$
(9)
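Steps 2–3 can be sketched as follows. We use fixed, illustrative GARCH(1,1) parameters in place of fitted ones, and the empirical CDF (rank transform) stands in for \(F_i\) in Eq. (9):

```python
import numpy as np

def garch_filter(r, mu, omega, alpha, beta):
    """Run a GARCH(1,1) variance recursion over returns r and return the
    standardized residuals z_t = eps_t / sigma_t, as in Eq. (8)."""
    eps = r - mu
    sigma2 = np.empty_like(r)
    sigma2[0] = omega / (1 - alpha - beta)    # start at unconditional variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return eps / np.sqrt(sigma2)

rng = np.random.default_rng(7)
r = 0.0005 + rng.normal(scale=0.01, size=500)    # toy return series
z = garch_filter(r, mu=0.0005, omega=1e-6, alpha=0.05, beta=0.90)

# Pseudo-observations u_{i,t} via the empirical CDF (rank transform), Eq. (9).
u = (np.argsort(np.argsort(z)) + 1) / (len(z) + 1)
```

The pseudo-observations lie strictly inside the unit interval, as required before fitting a copula model to them.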
Step 4 :

Estimate the copula-based investors’ views.

  1. (a)

    Simulate the returns from the fitted copula cumulative distribution function a sufficient number of times. Obtain the matrix of innovations using the quantile function:

    $$\tilde{\varepsilon}_{i,t} = F_{i}^{-1}\left( \tilde{u}_{i,t} \right)$$
    (10)
(b):

Then we utilize the estimated values of \(\hat{\omega }\), \(\hat{\alpha }\), and \(\hat{\beta }\) from the ARMA-GARCH model to obtain the simulated return series:

$$\begin{aligned} \sigma_{t}^{2} & = \omega + \alpha (r_{t - 1} - \mu )^{2} + \beta \sigma_{t - 1}^{2} \\ r_{t} & = \mu + \sigma_{t} \varepsilon_{t} \\ \end{aligned}$$
(11)

where \(\tilde{\sigma }_{0}^{2}\) = \(\frac{{\hat{\omega }}}{{\left( {1 - \hat{\alpha } - { }\hat{\beta }} \right){ }}}\).

(c):

Based on the simulated returns, estimate the vector of investor’s views (V) and the matrix of error terms (or variances) of the views \({\Sigma }\).
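Step 4 can be sketched as follows: copula-simulated innovations are propagated through the fitted GARCH recursion of Eq. (11), and the cross-simulation mean and variance of the simulated returns serve as the view and its error. All parameters below are illustrative, and standard normal draws stand in for the copula-simulated innovations:

```python
import numpy as np

def simulate_garch_paths(eps_sim, mu, omega, alpha, beta):
    """Propagate simulated innovations eps_sim (n_sims x horizon)
    through the GARCH(1,1) recursion of Eq. (11)."""
    n_sims, horizon = eps_sim.shape
    sigma2 = np.full(n_sims, omega / (1 - alpha - beta))   # sigma_0^2
    r = np.empty_like(eps_sim)
    for t in range(horizon):
        r[:, t] = mu + np.sqrt(sigma2) * eps_sim[:, t]
        sigma2 = omega + alpha * (r[:, t] - mu) ** 2 + beta * sigma2

    # Step 4(c): the view is the cross-simulation mean; its error is the
    # cross-simulation variance of the simulated returns.
    view = r[:, -1].mean()
    view_var = r[:, -1].var()
    return r, view, view_var

rng = np.random.default_rng(3)
eps = rng.standard_normal((10_000, 1))              # one-day-ahead innovations
r_sim, view, view_var = simulate_garch_paths(eps, mu=0.0004,
                                             omega=1e-6, alpha=0.05, beta=0.90)
```

With symmetric innovations the simulated view mean stays close to μ, while the view variance approaches the unconditional GARCH variance ω/(1 − α − β).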

Step 5 :

Estimate the Black-Litterman returns’ posterior distribution defined in Eq. (3).

Step 6 :

Solve the portfolio optimization problem with CVaR as a risk measure.

$$\mathop {\max }\limits_{{w_{i} }} \frac{{E\left( {r_{p} } \right) - r_{f} }}{CVaR}$$
(12)
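A rough sketch of the optimization in Eq. (12) over simulated scenarios is given below (toy data; the CVaR quantile objective is non-smooth, and in production one would typically use the Rockafellar–Uryasev linear programming reformulation rather than direct SLSQP as here):

```python
import numpy as np
from scipy.optimize import minimize

def max_return_over_cvar(scenarios, rf=0.0, alpha=0.95):
    """Long-only weights maximizing (E[r_p] - rf) / CVaR_alpha over
    simulated return scenarios (rows: scenarios, columns: assets)."""
    n = scenarios.shape[1]

    def cvar(w):
        losses = -(scenarios @ w)
        var = np.quantile(losses, alpha)
        return losses[losses >= var].mean()

    def neg_ratio(w):
        c = cvar(w)
        if c <= 1e-12:                  # guard against degenerate allocations
            return 1e6
        return -((scenarios @ w).mean() - rf) / c

    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    res = minimize(neg_ratio, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=[cons])
    return res.x

rng = np.random.default_rng(5)
scen = rng.multivariate_normal([0.001, 0.0005, 0.0002],
                               np.diag([4e-4, 1e-4, 2.5e-5]), size=5000)
w_opt = max_return_over_cvar(scen)
```

Maximum weight constraints, as studied later in the paper, would simply tighten the upper bounds from 1.0 to, e.g., 0.3.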
Step 7 :

Register the performance of the portfolio.

Calculate \(P_{t} = \hat{w}_{t}^{T} p_{t}\) using the observed prices \(p_{t}\), with comparisons to the representative benchmark portfolios constructed and held over \(H_{t}\). In the copula-CVaR portfolio, we utilize the copula-generated returns and covariance matrix as the posterior distribution parameters. The copula-CVaR portfolio is optimized based on the maximum ratio of excess return over CVaR.

Step 8 :

Step ahead and repeat.

Let t = t + Δt and repeat Steps 1–8 until the end of the data sample.

4.3 Performance metrics

In this study, we compare the results of the vine-copula-generated-views BL portfolio with the copula-CVaR, mean–variance, and equally weighted portfolios. We assess out-of-sample portfolio allocation performance and its associated risks by means of the following statistics: mean return, standard deviation, maximum drawdown between two consecutive days, maximum drawdown between two days within a period of 3 months, Sharpe ratio, Sortino ratio, turnover, break-even costs, and CVaR 0.95. We compute the portfolio mean excess return by:

$$\hat{\mu }_{p} = { }\frac{1}{M - T}\mathop \sum \limits_{t = T + 1}^{M - 1} w_{t - 1}^{{\text{T}}} r_{p,t}$$
(13)

The portfolio standard deviation and Sharpe Ratio are given, respectively, by:

$$\hat{\sigma }_{p} = { }\sqrt {\frac{1}{M - T - 2}\mathop \sum \limits_{t = T + 1}^{M - 1} \left( {w_{t - 1}^{{\text{T}}} r_{{p,{ }t}} - { }\hat{\mu }_{p} } \right)^{2} }$$
(14)
$$SR = \frac{{\hat{\mu }_{p} - r_{f} }}{{\hat{\sigma }_{p} }}$$
(15)

The maximum drawdown at time t is a risk indicator that measures the largest difference between the maximum and the minimum of the cumulative returns over the history preceding time t.

$$Max\,DD = \mathop {\max }\limits_{0 \le \tau \le t} r_{p} \left( {w, \tau } \right) - \mathop {\min }\limits_{0 \le \tau \le t} r_{p} \left( {w, \tau } \right)$$
(16)

The Sortino ratio (Sortino and Price 1994) is given by:

$$SR = \frac{{\hat{\mu }_{p} - r_{f} }}{{\hat{\sigma }_{p, n} }}$$
(17)

where \(\hat{\sigma }_{{p,{ }n}} = { }\sqrt {\frac{1}{M - T - 2}\mathop \sum \limits_{t = T + 1}^{M - 1} min\left( {0,{ }w_{t - 1}^{{\text{T}}} r_{{p,{ }t}} - { }\hat{\mu }_{p} } \right)^{2} }\).
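The downside-deviation denominator of Eq. (17) can be computed as follows (toy return series; the 1/(M − T − 2) normalization mirrors the paper's convention):

```python
import numpy as np

def sortino_ratio(r, rf=0.0):
    """Sortino ratio: mean excess return over downside deviation,
    mirroring the denominator convention of Eq. (17)."""
    mu = r.mean()
    downside = np.minimum(0.0, r - mu)                 # only below-mean returns
    dd = np.sqrt((downside ** 2).sum() / (len(r) - 2))
    return (mu - rf) / dd

rng = np.random.default_rng(11)
r = rng.normal(0.0005, 0.01, size=750)                 # toy daily return series
s = sortino_ratio(r)
```

Unlike the Sharpe ratio, only below-mean observations contribute to the denominator, so upside volatility is not penalized.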

We further compute the break-even transaction cost, defined as the level of transaction cost that leads to zero net profit. In other words, the break-even point is the transaction cost that can be imposed before the strategy becomes unprofitable. We compute the net returns after transaction costs \(\hat{\mu }_{net}\) by:

$$\hat{\mu}_{net} = \frac{1}{M - T}\sum_{t = T + 1}^{M - 1} \left[ \left( 1 + w_{t - 1}^{\text{T}} r_{p, t} \right)\left(1 - c\sum_{j = 1}^{N} \left| w_{j, t} - w_{j, t + m} \right|\right) - 1 \right]$$
(18)

where c is the break-even transaction cost obtained by solving \(\hat{\mu }_{net}\) = 0.
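Since the mean net return of Eq. (18) is monotonically decreasing in c, the break-even cost can be found by simple bisection; the sketch below uses toy gross returns and turnovers:

```python
import numpy as np

def break_even_cost(port_rets, turnovers):
    """Bisect for the proportional cost c that drives the mean net
    return of Eq. (18) to zero."""
    def net(c):
        return np.mean((1.0 + port_rets) * (1.0 - c * turnovers) - 1.0)
    lo, hi = 0.0, 1.0
    if net(hi) > 0:           # strategy survives even a 100% cost
        return hi
    for _ in range(60):       # bisection: net(c) is decreasing in c
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if net(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(13)
port_rets = rng.normal(0.002, 0.01, size=500)    # toy gross portfolio returns
turnovers = rng.uniform(0.0, 0.2, size=500)      # toy per-period sum |w change|
c_star = break_even_cost(port_rets, turnovers)
```

A strategy with high turnover reaches its break-even point at a much lower cost level, which is the effect discussed for the MV portfolio in the empirical results.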

We also report the portfolio turnover, calculated as the sum of the absolute changes in the N asset weights over a certain period:

$$TO = { }\frac{1}{M - T - 1}\mathop \sum \limits_{t = T}^{M - 1} \mathop \sum \limits_{j = 1}^{N} \left( {\left| {w_{{j,{ }t}} - { }w_{{j,{ }t + m}} } \right|} \right)$$
(19)

where \(w_{{j,{ }t}}\) is the weight in asset j before rebalancing and \(w_{{j,{ }t + m}}\) the weight in asset j after rebalancing at time t + m. The portfolio turnover measures the amount of trading required by each optimization strategy under consideration.
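A small worked example of Eq. (19) on a toy two-asset weight history:

```python
import numpy as np

def turnover(weights):
    """Average one-period turnover: sum of absolute weight changes
    between consecutive rebalancing dates, as in Eq. (19)."""
    w = np.asarray(weights)
    return np.abs(np.diff(w, axis=0)).sum(axis=1).mean()

# Three rebalancing dates, two assets: 10% then 20% shifted between assets.
w_hist = np.array([[0.5, 0.5],
                   [0.6, 0.4],
                   [0.4, 0.6]])
to = turnover(w_hist)   # (0.2 + 0.4) / 2 = 0.3
```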

5 Empirical results

Figure 1 presents the performance of the portfolios under consideration: the copula-based Black-Litterman portfolio with τ = 0.2 (CBL), Copula-CVaR, Mean–Variance (MV), and the equally weighted portfolio (EW). The EW portfolio’s cumulative return trajectory shows the least variation, while the other strategies fluctuate with a higher magnitude. The EW portfolio significantly underperforms the other portfolios until 2020. In 2020, cumulative returns under all strategies experienced a dramatic drop due to the COVID-19 outbreak; however, the CBL portfolio had the lowest drawdown and the fastest upward rebound. The MV portfolio showed the least stable performance after 2020: in the second half of 2020 it recovered faster than the Copula-CVaR and EW portfolios, but suffered a significant drawdown in 2021. The Copula-CVaR portfolio shows the most stable positive trend after the 2020 crisis, and the CBL portfolio, which partially depends on copula-based optimization, also experienced a drawdown in 2021, though smaller than in the case of the MV approach. The main reason for the outperformance of the CBL portfolio after 2020 lies in its smaller drawdown at the beginning of the global pandemic crisis.

Fig. 1

Out-of-sample cumulative excess returns of the portfolio strategies without maximum weight constraint

Table 1 summarizes out-of-sample performance statistics under the different strategies. We examined the impact of the level of \(\tau\), the measure of confidence in the Black-Litterman implied returns. As \(\tau\) increases, the weight of the views, proxied by the copula-CVaR model, also increases. We tested portfolio performance at different levels of τ (from 0.1 to 0.5 in steps of 0.1). We empirically found that the optimal level of τ for our optimization strategy is 0.2 and include out-of-sample performance statistics for the three best CBL portfolios, with τ set to 0.1, 0.2, and 0.3. MDD (1 day) is the maximum drawdown between two consecutive days, and MDD (3 months) is the maximum drawdown within 3 months. The results indicate that the CBL portfolios show better risk-return performance than the Copula-CVaR portfolio and the two benchmarks, the MV and EW portfolios. The portfolios with τ set to 0.4 and 0.5 underperformed and are therefore not included in Table 1. According to the obtained results, the allocations based on copula strategies outperform MV in terms of profitability. The copula-CVaR approach shows the best results, as expected, in terms of the conditional risk measure. Judged by CVaR and maximum drawdown, the CBL (τ = 0.1) and CBL (τ = 0.2) portfolios yield less risky trajectories than CBL (τ = 0.3). The Copula-CVaR and CBL (τ = 0.3) portfolios can be identified as the riskiest with respect to volatility. The MV portfolio showed lower risk than the copula-based models as measured by the standard deviation. The maximum Sharpe ratio, equal to 0.202, is reached with the CBL strategy at τ = 0.2. Overall, the Copula-CVaR standard deviations are larger than those obtained with the MV approach. Thus, our results show that the suggested method can be recognized as an effective strategy for an investor who tries to minimize tail losses rather than the standard deviation.
Another important observation is that the copula-CVaR portfolio has the lowest turnover after the EW portfolio, while the MV portfolio has the highest. The high turnover of the MV portfolio affects the break-even point and makes this approach less effective when transaction costs are taken into consideration. This confirms that the MV portfolio is the most sensitive to even small changes in the input parameters, which leads to greater turnover. The EW portfolio has the lowest turnover, as expected, since there is no rebalancing.

Table 1 Out-of-sample statistics of portfolios

Figure 2 depicts the cumulative returns of the MV portfolio with the maximum weight constraint imposed at three levels: 30%, 40%, and 50%. The maximum weight constraint limits the portfolio weights in order to obtain a more robust portfolio with lower turnover and lower concentration risk. Figure 2 shows that, in the case of the MV approach, the less strict constraints at the 50% and 40% levels effectively increased the cumulative return. Since we have only 10 assets in the portfolio, a 30% weight constraint might be too strict and lead to less effective allocations, even though it helps to decrease turnover. The 30% weight constraint shows greater drawdowns in 2016, 2019, and 2020.

Fig. 2

Out-of-sample cumulative excess returns of the MV portfolio with weight constraints

Figure 3 depicts the cumulative returns of the Copula-CVaR portfolio with the maximum weight constraint imposed. Overall, weight constraints are less effective for the copula-based approach than for the MV portfolios. The least effective constraint is 30%, as in the case of the MV portfolio, leading to the greatest drawdowns in 2016, 2019, and 2020. Figure 3 shows that the 40% weight constraint significantly improved performance at the beginning of 2021.

Fig. 3

Out-of-sample cumulative excess returns of the Copula-CVaR portfolio with weight constraints

Figure 4 presents the cumulative returns of the CBL portfolio under the maximum weight constraints. Until the beginning of 2020, every level of weight constraint was ineffective in terms of cumulative return, and CBL optimization without any constraint provided the best performance. After 2020, the 30% limit shows the best performance, with an effect similar to that observed for MV optimization over the same period. A possible explanation is the heightened fear in financial markets at the beginning of 2020 and the lower predictability of financial series: optimization based on a more complex dependence structure might then incur higher estimation errors, so a more conservative weight constraint helps to further diversify the portfolio and yields a better out-of-sample cumulative return.

Fig. 4

Out-of-sample cumulative excess returns of the CBL portfolio with weight constraints

Table 2 presents out-of-sample performance statistics under the weight constraints for the Copula-CVaR, CBL, and MV optimization approaches. Without weight constraints (100%), the highest out-of-sample Sharpe ratio was attained by CBL optimization (0.202). However, the 40% weight constraint applied to the copula-CVaR approach led to a significant decrease in standard deviation and an increase in the Sharpe ratio to 0.206. Mean–variance optimization with a 50% maximum weight limit also showed an increase in the Sharpe ratio, owing to both an increase in return and a decrease in standard deviation. Table 2 also reveals another positive effect of imposing weight constraints: a decrease in turnover and an increase in the break-even point for all strategies. The break-even point accounts for transaction costs as a function of rebalancing volume. The lowest turnover and the highest break-even point were reached under the Copula-CVaR approach with the 30% weight constraint. A very close result in terms of profitability was reached by the MV benchmark with the 50% weight constraint; however, the tail risk measures, maximum drawdown, and break-even point still favor the copula-based optimization. The lowest CVaR values were realized under the Copula-CVaR strategy at all levels of weight constraint, in comparison with the CBL and MV portfolios. The same holds for the 1-day MDD risk measure: regardless of the level of weight constraint, the Copula-CVaR approach provides the lowest risk.

Table 2 Statistics of out-of-sample portfolios with maximum weight constraints

We found that the maximum weight constraint was most effective for the MV approach, increasing profitability, improving the Sharpe ratio, and decreasing turnover. This result, however, might depend on the rebalancing period, since with more frequent rebalancing the estimation error might be higher and additional control of the weights helps to lower total risk. The optimal weight constraint for the MV portfolio in terms of risk-adjusted profitability and break-even point is the least strict one, 50%; however, in terms of lowering the standard deviation, tail risk, and turnover, the optimal weight limit is 40%. The Copula-CVaR portfolio showed the best performance in terms of tail risk control at the 30% weight constraint, and at 40% for all remaining metrics. The effect of weight constraints on the CBL strategy is the least clear-cut. Risk measured by CVaR decreased at the 40% and 30% weight constraints, but at the cost of a noticeable decrease in the Sharpe ratio. Moreover, imposing weight limits helped to decrease turnover and increase the break-even point, although this improvement is smaller than for the MV portfolio: the 30% weight constraint decreased turnover from 0.585 to 0.521 at the cost of reducing the Sharpe ratio from 0.202 to 0.153. The obtained results suggest that weight limits must be tested and chosen specifically for a given portfolio, with all relevant factors taken into consideration: strategy, number of assets, rebalancing period, etc.

As a robustness check, we compare the performance of the portfolios with different rolling windows (200 days and 300 days, against 250 days in the original version). We also change the rebalancing period from daily to weekly and monthly. Changing the rebalancing period preserves the relative inferences among the strategies; however, it decreases the turnover metrics and increases the break-even point, with the effect being most pronounced for the MV approach. While the CBL portfolio still outperforms the competing portfolios, the absolute differences in risk-adjusted return and tail risk measures were least significant under weekly rebalancing.
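This robustness exercise can be organized as a rolling backtest in which the estimation window and the rebalancing frequency are varied independently. The sketch below is a generic illustration under our own naming, where `weight_fn` stands for any of the studied allocation rules (MV, Copula-CVaR, CBL):

```python
import numpy as np

def rolling_backtest(returns, weight_fn, window=250, rebalance_every=1):
    # returns: (T, N) array of daily asset returns; weight_fn maps an
    # estimation window of returns to portfolio weights. Weights are
    # re-estimated every `rebalance_every` days (1 = daily, 5 = weekly,
    # 21 = monthly) from the trailing `window` days.
    T, N = returns.shape
    port, w = [], None
    for t in range(window, T):
        if w is None or (t - window) % rebalance_every == 0:
            w = weight_fn(returns[t - window:t])
        port.append(returns[t] @ w)
    return np.array(port)
```

With such a loop, the same out-of-sample return series can be recomputed for each (window, frequency) pair, and the statistics of Tables 1 and 2 compared across configurations.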

There are a few limitations to the proposed optimization approach and the study of its effectiveness. First, the procedure implies 100% allocation to assets without short selling; therefore, the portfolio value will decline in falling markets. Second, the procedure does not offer any criterion for the selection of assets. Portfolio managers can select assets according to various criteria depending on their management style and the risk preferences of investors, which were not covered in this paper.

6 Concluding remarks

The output of the Black-Litterman model is a mixture of equilibrium returns and investor views. The main value added by this paper comes from using copula-based views in the Black-Litterman model instead of relying on financial analyst views, an approach that has received limited attention in the literature. Copula models capture many properties and dependencies of financial returns in an elegant and systematic way. We utilize vine copulas in our analysis due to their flexibility and use CVaR as the risk measure in the optimization procedure instead of the classical variance. We compare the CBL model with copula-CVaR optimization since they share a common part of multidimensional distribution modeling and avoid the excessive corner solutions that many optimization approaches would generate in the case of extreme parameter estimates. The results presented in this paper indicate the great potential of the Black-Litterman methodology for generating global portfolios. Our empirical analysis indicates better performance for the CBL portfolio in terms of risk-adjusted returns, while the copula-CVaR portfolio is better in terms of tail risk control, lower turnover, and a higher break-even point. However, the EW and MV portfolios showed the lowest risk as measured by standard deviation. We showed that the MV portfolio is the most sensitive to the input parameters and has the greatest turnover, further decreasing the effectiveness of this approach once transaction costs are taken into account. We therefore imposed an additional risk control instrument, the weight constraint, and evaluated the performance of all strategies under three levels of weight limits. We showed that the weight constraint imposition is most effective for MV optimization and did not provide a significant, well-defined positive effect on the CBL portfolio.

Finally, we offer a few suggestions for future research. First, a dynamic confidence parameter could be incorporated into the Black-Litterman approach instead of a static value. The level of confidence could depend on a threshold applied to the covariance matrix of returns over the previous 250-day period. If the portfolio risk is higher than acceptable, the investor can alter the confidence parameter until a risk-appropriate allocation is generated. Second, we use rolling 250-day historical estimates as the prior covariance matrix and prior mean in the Black-Litterman equation. Another approach would be to use a decay factor that weights recent observations more heavily in the prior input parameter estimates. Additional suggestions include relaxing the no-short-selling assumption and considering other copula-function models.
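The decay-factor idea for the prior inputs can be illustrated with a RiskMetrics-style exponentially weighted estimator. This is an assumed formulation for the suggested extension, not an estimator taken from the paper:

```python
import numpy as np

def ewma_cov(returns, lam=0.94):
    # Exponentially weighted mean and covariance: an observation that is
    # `age` days old receives weight proportional to lam**age, so recent
    # days dominate the estimate (lam=1 recovers equal weighting)
    R = np.asarray(returns)
    T = R.shape[0]
    w = lam ** np.arange(T - 1, -1, -1)  # most recent day gets weight 1
    w /= w.sum()                         # normalize weights to sum to 1
    mu = w @ R                           # weighted mean return vector
    X = R - mu
    return (w[:, None] * X).T @ X        # weighted covariance matrix
```

Plugging such an estimate in place of the flat 250-day historical covariance would let the prior react faster to regime changes at the cost of a higher effective estimation variance.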