Range-based DCC models for covariance and value-at-risk forecasting

The dynamic conditional correlation (DCC) model by Engle (2002) is one of the most popular multivariate volatility models. This model is based solely on closing prices. It has been documented in the literature that the high and low prices of a given day can be used to obtain more efficient volatility estimates. We therefore suggest a model that incorporates high and low prices into the DCC framework. We conduct an empirical evaluation of this model on three datasets: currencies, stocks, and commodity exchange traded funds. Regardless of whether we consider in-sample fit, covariance forecasts, or value-at-risk forecasts, our model outperforms not only the standard DCC model, but also an alternative range-based DCC model.


Introduction
Models that can describe the dynamic properties of two or more asset returns play an important role in financial econometrics. Multivariate volatility models have been used to understand and predict the temporal dependence in second order moments of asset returns. These models can explain how covariances change over time and therefore describe temporal dependencies among assets. Such relations are vital in many financial applications, such as asset pricing, portfolio optimization, risk management, the estimation of systemic risk in banking, value-at-risk estimation, asset allocation and many others.
One of the most popular multivariate volatility models is the dynamic conditional correlation (DCC) model introduced independently by Engle (2002) and Tse and Tsui (2002). The latter representation, however, has attracted considerably less interest in the literature. The advantages of the DCC model are the positive definiteness of the conditional covariance matrices and the ability to describe time-varying conditional correlations and covariances in a parsimonious way. The parameters of the DCC model can be estimated in two stages, which makes this approach relatively simple and possible to apply even for very large portfolios. The DCC model has become extremely popular and has been widely applied and modified (e.g. Heaney and Sriananthakumar, 2012; Lehkonen and Heimonen, 2014; Bouri et al., 2017; Bernardi and Catania, 2018; Dark, 2018; Karanasos et al., 2018).
Most volatility models are return-based models, i.e. they are estimated on returns, which are calculated based only on closing prices. Meanwhile, the use of daily low and high prices leads to more accurate estimates and forecasts of variances (see e.g. Chou, 2005; Brandt and Jones, 2006; Lin et al., 2012; Fiszeder and Perczak, 2016; Molnár, 2016) and covariances (see e.g. Fiszeder, 2018). Daily low and high prices are almost always available alongside closing prices in financial series. Therefore, making use of them in volatility models is very important from a practical viewpoint.


The DCC-GARCH model

Let $\varepsilon_t$ be the $k \times 1$ vector of residuals from the mean equations of returns, with the conditional distribution
$$\varepsilon_t \mid \Omega_{t-1} \sim N(0, H_t), \quad (1)$$
where $\Omega_{t-1}$ is the set of all information available at time $t-1$, $N$ is the multivariate normal distribution and $H_t$ is the $k \times k$ symmetric conditional covariance matrix.
The DCC$(M,N)$-GARCH$(p,q)$ model by Engle (2002) can be presented as:
$$H_t = D_t R_t D_t, \quad (2)$$
$$R_t = Q_t^{*-1} Q_t Q_t^{*-1}, \quad (3)$$
$$Q_t = \left(1 - \sum_{m=1}^{M} a_m - \sum_{n=1}^{N} b_n\right)\bar{Q} + \sum_{m=1}^{M} a_m z_{t-m} z_{t-m}' + \sum_{n=1}^{N} b_n Q_{t-n}, \quad (4)$$
where $D_t = \mathrm{diag}(h_{1t}^{1/2}, h_{2t}^{1/2}, \ldots, h_{kt}^{1/2})$, the conditional variances $h_{it}$ (for $i = 1, 2, \ldots, k$) are described as univariate GARCH models (Eqs. (5)-(6)), $z_t$ is the standardized $k \times 1$ residual vector assumed to be serially independently distributed and given as $z_t = D_t^{-1}\varepsilon_t$, $R_t$ is the time-varying $k \times k$ conditional correlation matrix of $z_t$, $\bar{Q}$ is the unconditional $k \times k$ covariance matrix of $z_t$ (according to Engle, 2002) and $Q_t^{*}$ is the diagonal $k \times k$ matrix composed of the square roots of the diagonal elements of $Q_t$. The parameters $a_m$ (for $m = 1, 2, \ldots, M$) and $b_n$ (for $n = 1, 2, \ldots, N$) are nonnegative and satisfy the condition $\sum_{m=1}^{M} a_m + \sum_{n=1}^{N} b_n < 1$.
The univariate GARCH$(p,q)$ model applied in the DCC-GARCH model can be written as:
$$\varepsilon_{it} = z_{it} h_{it}^{1/2}, \quad (5)$$
$$h_{it} = \alpha_{0i} + \sum_{j=1}^{q} \alpha_{ji}\varepsilon_{i,t-j}^{2} + \sum_{j=1}^{p} \beta_{ji} h_{i,t-j}, \quad (6)$$
where $\alpha_{0i} > 0$, $\alpha_{ji} \geq 0$, $\beta_{ji} \geq 0$ (for $i = 1, 2, \ldots, k$); weaker conditions for the non-negativity of the conditional variance can be assumed (see Nelson and Cao, 1992). The requirement for covariance stationarity of $\varepsilon_{it}$ is $\sum_{j=1}^{q} \alpha_{ji} + \sum_{j=1}^{p} \beta_{ji} < 1$.
A nice feature of the DCC-GARCH model is that its parameters can be estimated by the quasi-maximum likelihood method using a two-stage approach (see Engle and Sheppard, 2001). Let the parameters of the model $\Theta$ be written in two groups $\Theta' = (\Theta_1', \Theta_2')$, where $\Theta_1$ is the vector of parameters of the conditional means and variances and $\Theta_2$ is the vector of parameters of the correlation part of the model. The log-likelihood function can be written as the sum of two parts:
$$\ln L(\Theta) = \ln L_V(\Theta_1) + \ln L_C(\Theta_2 \mid \Theta_1), \quad (7)$$
where $\ln L_V(\Theta_1)$ represents the volatility part:
$$\ln L_V(\Theta_1) = -\frac{1}{2}\sum_{t=1}^{T}\left(k\ln(2\pi) + \ln|D_t|^{2} + \varepsilon_t' D_t^{-2}\varepsilon_t\right), \quad (8)$$
while $\ln L_C(\Theta_2 \mid \Theta_1)$ can be viewed as the correlation component:
$$\ln L_C(\Theta_2 \mid \Theta_1) = -\frac{1}{2}\sum_{t=1}^{T}\left(\ln|R_t| + z_t' R_t^{-1} z_t - z_t' z_t\right). \quad (9)$$
$\ln L_V(\Theta_1)$ can be written as the sum of the log-likelihood functions of the univariate GARCH models:
$$\ln L_V(\Theta_1) = -\frac{1}{2}\sum_{i=1}^{k}\sum_{t=1}^{T}\left(\ln(2\pi) + \ln h_{it} + \frac{\varepsilon_{it}^{2}}{h_{it}}\right). \quad (10)$$
This means that in the first stage the parameters of the univariate GARCH models can be estimated separately for each of the assets and the estimates of $h_{it}$ can be obtained.
In the second stage the residuals standardized by their estimated standard deviations, $z_t = D_t^{-1}\varepsilon_t$, are used to estimate the parameters of the correlation part ($\Theta_2$), conditioning on the parameters estimated in the first stage ($\Theta_1$).
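To make the two-stage logic concrete, the correlation recursion of the common DCC(1,1) case can be sketched in a few lines of NumPy. This is an illustrative sketch, not the estimation code used in the paper; the function name and the initialization of the recursion at $\bar{Q}$ are our choices.

```python
import numpy as np

def dcc_correlation(z, a, b):
    """DCC(1,1) correlation recursion:
    Q_t = (1 - a - b) * Qbar + a * z_{t-1} z_{t-1}' + b * Q_{t-1},
    R_t = Q_t*^{-1} Q_t Q_t*^{-1}.

    z : (T, k) array of standardized residuals from the first stage.
    Returns the (T, k, k) sequence of conditional correlation matrices R_t.
    """
    T, k = z.shape
    Qbar = z.T @ z / T              # unconditional covariance of the residuals
    Q = Qbar.copy()                 # initialize the recursion at Qbar
    R = np.empty((T, k, k))
    for t in range(T):
        d = np.sqrt(np.diag(Q))     # Q_t*: square roots of the diagonal of Q_t
        R[t] = Q / np.outer(d, d)   # rescale Q_t into a correlation matrix
        Q = (1 - a - b) * Qbar + a * np.outer(z[t], z[t]) + b * Q
    return R
```

Given parameters with $a + b < 1$ and first-stage residuals, every $R_t$ has a unit diagonal and off-diagonal entries in $[-1, 1]$, so the implied $H_t = D_t R_t D_t$ is a valid covariance matrix.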

The CARR model
The second benchmark to compare with our new model is the range-based DCC model. This is based on the CARR model by Chou (2005), which we describe now.
Let us assume that $H_t$ and $L_t$ are the high and low prices over a fixed period such as a day, week or month, and that the observed price range is given as $R_t = \ln(H_t) - \ln(L_t)$. The CARR$(p,q)$ model can be described as:
$$R_t = \lambda_t \eta_t, \quad (11)$$
$$\eta_t \mid \Omega_{t-1} \sim \exp(1), \quad (12)$$
$$\lambda_t = \omega + \sum_{j=1}^{q}\alpha_j R_{t-j} + \sum_{j=1}^{p}\beta_j\lambda_{t-j}, \quad (13)$$
where $\lambda_t$ is the conditional mean of the range and $\eta_t$ is the disturbance term. The exponential distribution is a natural choice for the conditional distribution of $\eta_t$ because it takes positive values. To ensure the positivity of $\lambda_t$, the parameters of the CARR model have to meet conditions analogous to those in the GARCH model (see Nelson and Cao, 1992). The process is covariance stationary if the following condition is met:
$$\sum_{j=1}^{q}\alpha_j + \sum_{j=1}^{p}\beta_j < 1. \quad (14)$$
It is worth emphasizing that the CARR model describes the dynamics of the conditional mean of the price range, not the conditional variance of returns as in the case of the GARCH model. The parameters of the CARR model can be estimated by the quasi-maximum likelihood method. The log-likelihood function can be written as:
$$\ln L(\varsigma) = -\sum_{t=1}^{T}\left(\ln\lambda_t + \frac{R_t}{\lambda_t}\right), \quad (15)$$
where $\varsigma$ is a vector containing the unknown parameters of the model. The estimators obtained by the quasi-maximum likelihood method are consistent (see Engle and Russell, 1998; Engle, 2002; Chou, 2005).
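The exponential quasi-likelihood above can be sketched as follows for the CARR(1,1) case (a hypothetical helper; the function name and the initialization of the recursion at the sample mean range are our assumptions):

```python
import numpy as np

def carr_loglik(params, ranges):
    """Quasi log-likelihood of a CARR(1,1) model under the exponential distribution.

    ranges : array of observed price ranges R_t = ln(high_t) - ln(low_t)
    params : (omega, alpha, beta), with omega > 0 and alpha, beta >= 0.
    """
    omega, alpha, beta = params
    lam = np.empty_like(ranges, dtype=float)
    lam[0] = ranges.mean()                    # initialize at the sample mean range
    for t in range(1, len(ranges)):
        lam[t] = omega + alpha * ranges[t - 1] + beta * lam[t - 1]
    # exponential QML: ln L = -sum(ln lambda_t + R_t / lambda_t)
    return -float(np.sum(np.log(lam) + ranges / lam))
```

Maximizing this function over (omega, alpha, beta) with a numerical optimizer yields the QML estimates.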

The DCC-CARR model
In this paper the new DCC-RGARCH model is compared not only with the DCC-GARCH model, formulated on closing prices, but also with the range-based DCC model which, like the proposed model, is formulated using low and high prices. The range-based DCC model, which we refer to as the DCC-CARR model in this paper, combines the CARR model by Chou (2005) with the DCC model by Engle (2002). The CARR model describes the dynamics of the conditional mean of the price range, and so in order to estimate the conditional standard deviation of returns the conditional price range has to be scaled according to the formula $\lambda_{it}^{*} = \mathrm{adj}_i\,\lambda_{it}$ for $i = 1, 2, \ldots, k$. The scaling factor $\mathrm{adj}_i$ is estimated as the quotient of the unconditional standard deviation of returns and the sample mean of the conditional range.
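The scaling step can be expressed directly (a hypothetical helper mirroring the formula above):

```python
import numpy as np

def scale_carr_range(lam, returns):
    """Scale the conditional range lambda_t into a conditional standard deviation:
    lambda*_t = adj * lambda_t, with adj = sd(returns) / mean(lambda_t)."""
    lam = np.asarray(lam, dtype=float)
    adj = np.std(returns, ddof=1) / lam.mean()
    return adj * lam
```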
The DCC$(M,N)$-CARR$(p,q)$ model can be expressed as:
$$H_t = D_t R_t D_t, \quad (16)$$
$$D_t = \mathrm{diag}(\lambda_{1t}^{*}, \lambda_{2t}^{*}, \ldots, \lambda_{kt}^{*}), \quad (17)$$
$$R_t = Q_t^{*-1} Q_t Q_t^{*-1}, \quad (18)$$
$$Q_t = \left(1 - \sum_{m=1}^{M} a_m - \sum_{n=1}^{N} b_n\right)\bar{Q} + \sum_{m=1}^{M} a_m z_{t-m}^{CARR} z_{t-m}^{CARR\prime} + \sum_{n=1}^{N} b_n Q_{t-n}, \quad (19)$$
where $z_t^{CARR}$ is the standardized $k \times 1$ residual vector which contains the standardized residuals calculated from the CARR model (Eqs. (11)-(13)) as $z_{it}^{CARR} = \varepsilon_{it}/\lambda_{it}^{*}$; the other variables are defined in the same way as in the DCC-GARCH model.
The parameters of the DCC-CARR model can be estimated by the quasi-maximum likelihood method using a two-stage approach. The log-likelihood function can be written as the sum of two parts:
$$\ln L(\Theta) = \ln L_V(\Theta_1) + \ln L_C(\Theta_2 \mid \Theta_1), \quad (20)$$
the volatility part:
$$\ln L_V(\Theta_1) = -\frac{1}{2}\sum_{t=1}^{T}\left(k\ln(2\pi) + \ln|D_t|^{2} + \varepsilon_t' D_t^{-2}\varepsilon_t\right), \quad (21)$$
and the correlation part:
$$\ln L_C(\Theta_2 \mid \Theta_1) = -\frac{1}{2}\sum_{t=1}^{T}\left(\ln|R_t| + z_t^{CARR\prime} R_t^{-1} z_t^{CARR} - z_t^{CARR\prime} z_t^{CARR}\right). \quad (22)$$
This means that in the first stage the parameters of the CARR models can be estimated separately for each of the assets. In the second stage the standardized residuals $z_t^{CARR}$ are used to maximize Eq. (22) in order to estimate the parameters of the correlation component.

The Range-GARCH model
In the new specification of the DCC-RGARCH model we use the Range-GARCH model introduced by Molnár (2016). The RGARCH$(p,q)$ model can be formulated as:
$$\varepsilon_{it} = z_{it} h_{it}^{1/2}, \quad (23)$$
$$h_{it} = \alpha_{0i} + \sum_{j=1}^{q}\alpha_{ji}\sigma_{P\,i,t-j}^{2} + \sum_{j=1}^{p}\beta_{ji} h_{i,t-j}, \quad (24)$$
where $\sigma_{Pt}^{2}$ is the Parkinson (1980) estimator calculated as $\sigma_{Pt}^{2} = [\ln(H_t/L_t)]^{2}/(4\ln 2)$. In this formulation other variance estimators based on low, high and opening or closing prices, like the Garman and Klass (1980) or Rogers and Satchell (1991) estimators, can be applied instead of the Parkinson estimator. For an overview of range-based volatility estimators see Molnár (2012) and Fiszeder and Perczak (2013).
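The Parkinson estimator above is a one-liner (the function name is ours):

```python
import numpy as np

def parkinson_variance(high, low):
    """Parkinson (1980) daily variance estimator:
    sigma_P^2 = [ln(high / low)]^2 / (4 ln 2)."""
    return np.log(np.asarray(high) / np.asarray(low)) ** 2 / (4.0 * np.log(2.0))
```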
To ensure the positivity of $h_{it}$, the parameters of the RGARCH model must meet conditions analogous to those in the GARCH model (see Nelson and Cao, 1992). The RGARCH process is covariance stationary if the following condition is met:
$$\sum_{j=1}^{q}\alpha_{ji} + \sum_{j=1}^{p}\beta_{ji} < 1. \quad (25)$$
It is worth emphasizing that the RGARCH model describes the dynamics of the conditional variance of returns, not the conditional mean of the price range, as in the case of the CARR model. The parameters of the RGARCH model can be estimated by the quasi-maximum likelihood method, and the likelihood function is the same as in the return-based GARCH model.
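Putting the pieces together, the RGARCH(1,1) variance recursion can be sketched as follows (the function name and the initialization at the unconditional level are our choices):

```python
import numpy as np

def rgarch_variance(sigma_p2, omega, alpha, beta):
    """RGARCH(1,1) recursion: h_t = omega + alpha * sigma_P^2_{t-1} + beta * h_{t-1}.

    sigma_p2 : array of daily Parkinson estimates; h_1 is initialized at the
    unconditional level omega / (1 - alpha - beta), assuming alpha + beta < 1.
    """
    s = np.asarray(sigma_p2, dtype=float)
    h = np.empty_like(s)
    h[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(s)):
        h[t] = omega + alpha * s[t - 1] + beta * h[t - 1]
    return h
```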
The DCC$(M,N)$-RGARCH$(p,q)$ model can be expressed analogously to the DCC-GARCH model:
$$H_t = D_t R_t D_t, \quad (26)$$
$$R_t = Q_t^{*-1} Q_t Q_t^{*-1}, \quad (27)$$
$$Q_t = \left(1 - \sum_{m=1}^{M} a_m - \sum_{n=1}^{N} b_n\right)\bar{Q} + \sum_{m=1}^{M} a_m z_{t-m}^{RGARCH} z_{t-m}^{RGARCH\prime} + \sum_{n=1}^{N} b_n Q_{t-n}, \quad (29)$$
where $z_t^{RGARCH}$ is the standardized residual vector from the RGARCH model, $z_{it}^{RGARCH} = \varepsilon_{it}/h_{it}^{1/2}$, and the other variables are defined in the same way as in the DCC-GARCH model.
The parameters of the DCC-RGARCH model can be estimated by the quasi-maximum likelihood method using a two-stage approach. The log-likelihood function can be written as the sum of two parts, the volatility part:
$$\ln L_V(\Theta_1) = -\frac{1}{2}\sum_{t=1}^{T}\left(k\ln(2\pi) + \ln|D_t|^{2} + \varepsilon_t' D_t^{-2}\varepsilon_t\right), \quad (31)$$
and the correlation part:
$$\ln L_C(\Theta_2 \mid \Theta_1) = -\frac{1}{2}\sum_{t=1}^{T}\left(\ln|R_t| + z_t^{RGARCH\prime} R_t^{-1} z_t^{RGARCH} - z_t^{RGARCH\prime} z_t^{RGARCH}\right). \quad (32)$$
This means that in the first stage the parameters of univariate RGARCH models can be estimated separately for each of the assets.
In the second stage the standardized residuals $z_t^{RGARCH}$ are used to maximize Eq. (32) in order to estimate the parameters of the correlation component.

Data
We apply the proposed model and its competitors to three different sets of data: three currency rates, three commodity exchange traded funds and five stocks. The currency rates are the three most heavily traded currency pairs in the Forex market, namely: EUR/USD, USD/JPY and GBP/USD.
The second set consists of three exchange-traded funds (ETFs) listed on the New York Stock Exchange Arca, namely (the names given in brackets will be used later in tables): the United States Oil Fund (Oil), the United States Natural Gas Fund (Natural Gas) and the Energy Select Sector SPDR Fund (Energy). Commodity exchange traded funds provide investors with the convenience of commodity exposure without a commodity futures account. The first two ETFs offer exposure to a single commodity (oil/gas), whereas the third ETF tracks the price and performance of the Standard and Poor's Energy Select Sector Index.
The third set of data consists of five selected U.S. stocks, namely: Amazon, Apple, Goldman Sachs, Google and IBM. Since there are many stocks that could be chosen for this purpose, we decided to follow CBOE and select the stocks for which CBOE calculates implied volatility indices (even though implied volatility indices are not used in this paper).
We evaluate the models considered for daily data in the nine-year period from January 2, 2008, to December 30, 2016. This is a relatively long period, which includes both very volatile periods (the collapse of Lehman Brothers, the worst phase of the global financial crisis, the European sovereign debt crisis and Brexit) and tranquil periods with low volatility.
The descriptive statistics for the percentage returns, calculated as $r_t = 100\ln(P_t/P_{t-1})$, where $P_t$ is the closing price at time $t$, are presented in Table 1. The means of returns are positive for stocks and the Energy Select Sector SPDR Fund and negative for currencies and the other ETFs. The standard deviation of returns is considerably lower for currencies. Most distributions of returns are asymmetric, and all display high leptokurtosis.
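The returns above are simple to compute (the function name is ours):

```python
import numpy as np

def pct_log_returns(close):
    """Percentage log returns: r_t = 100 * ln(P_t / P_{t-1})."""
    p = np.asarray(close, dtype=float)
    return 100.0 * np.diff(np.log(p))
```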

Results
We consider three DCC models in the analysis: (1) the DCC-GARCH model by Engle (2002), summarized by Eqs. (1)-(6), whose parameters are estimated based only on closing prices; (2) the range-based DCC-CARR model; and (3) the proposed DCC-RGARCH model.
We also consider a DCC model using two asymmetric GARCH models, i.e. the EGARCH (Nelson, 1991) and GJR (Glosten et al., 1993) models, instead of the standard GARCH model. These models are able to capture the often-reported asymmetric responses of the conditional variance to positive and negative shocks. However, we find that covariance forecasts based on the DCC-EGARCH and DCC-GJR models are not significantly better than forecasts from the DCC-GARCH model for any of the currencies and ETFs considered, or for most stocks (the results are given in Tables A.1 and A.2 in the Appendix), and so we do not extend our models to describe the effect of asymmetry in variance.
The considered exchange rates, ETFs and stocks are not cointegrated (according to the Johansen test). The mean equations for returns are very simple: each mean equation is a constant, because in our data the returns of any asset do not depend on its own past returns or on the past returns of other assets.
We first compare the fit of the models estimated on the whole sample of data, and then compare the forecasts from these models. We analyse forecasts of variances and forecasts of covariances separately, because models for variances already exist whereas forecasting covariances is our main contribution.

In-sample comparison of models
The parameters of the considered models are estimated using the quasi-maximum likelihood method. The results of the estimation are presented in Tables 2-4 separately for exchange rates, ETFs and stocks.
The estimation of parameters for the GARCH, RGARCH and CARR models is based on different kinds of data: on closing prices for the first two models and on range data for the third model. However, for the DCC-CARR model, which uses the CARR model, it is possible to calculate the likelihood function based on the scaled conditional price range according to formula (21). Thanks to this, it is possible to evaluate all the DCC models based on the whole likelihood function, including both the volatility and correlation parts. In order to assess whether the differences between the values of the likelihood function are statistically significant, we apply the Rivers and Vuong (2002) and Clarke (2007) tests for non-nested model selection. The values of the likelihood function are higher for the DCC-RGARCH model than for the benchmark DCC-GARCH model for all analysed data sets, which means that the DCC-RGARCH model better describes the considered time series. The results for the DCC-CARR model are ambiguous and depend on the type of test applied.
The application of range data changes the parameter estimates of the considered models significantly. Specifically, the estimates of the parameters $\alpha_1$ are much higher and the estimates of the parameters $\beta_1$ much lower in the CARR and RGARCH models compared with the GARCH model. This is important in terms of both modelling and forecasting volatility, because in the CARR and RGARCH models the shocks of the previous period have a stronger impact on current volatility than in the GARCH model. Thus models formulated with range data respond more quickly to changing market conditions. A slow response to abrupt changes in the market is widely cited as one of the greatest weaknesses of GARCH-type models formulated based on closing prices (e.g. Andersen et al., 2003; Hansen et al., 2012).
Direct comparison of the parameters of the CARR model with the parameters of the GARCH and RGARCH models is, however, difficult, because they describe different measures of volatility. The CARR model describes the dynamics of the conditional mean of the price range, while the GARCH and RGARCH models describe the conditional variance of returns.
One can also notice that the sum of the estimates of the parameters $\alpha_1$ and $\beta_1$ in the RGARCH model is higher than one for ETFs and stocks. However, this does not mean that the analysed processes are not covariance stationary. It results from the fact that the Parkinson estimator underestimates the volatility of returns in the presence of opening jumps (such jumps do not occur in the Forex market since it does not close overnight), causing an increase in the estimate of the parameter $\alpha_1$ (see Molnár, 2016).
On the other hand, there are no considerable differences between the considered models in the estimates of the parameters of the correlation component. Thus, the main differences in the behaviour of the time-varying covariances from those models result from the usage of the different standardized residuals $z_t$, $z_t^{CARR}$ and $z_t^{RGARCH}$ in Eqs. (4), (19) and (29) of the DCC-GARCH, DCC-CARR and DCC-RGARCH models, respectively.

Comparison of variance forecasts
In this section we compare the forecasting performance of the three univariate models, which are used in the DCC models. We formulate out-of-sample one-day-ahead forecasts of variance based on the GARCH model (Eq. (6)), the CARR model (Eq. (13)) and the RGARCH model (Eq. (24)), where parameters are estimated separately each day based on a rolling sample of a fixed size of 500 (approximately a two-year period); the first forecasts are thus formulated for the start of the evaluation period, January 4, 2010. The sum of squares of 15-min returns (the realized variance) is used as a proxy of the daily variance. The forecasts from the models are evaluated based on two primary measures, namely, the mean squared error (MSE) and the mean absolute error (MAE). In order to evaluate the statistical significance of the results the Diebold-Mariano test (Diebold and Mariano, 1995) corrected for small-sample bias (Harvey et al., 1997) is applied.
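The realized-variance proxy and the two primary error measures can be sketched as follows (function names ours):

```python
import numpy as np

def realized_variance(intraday_returns):
    """Realized variance: the sum of squared intraday (e.g. 15-min) returns."""
    return float(np.sum(np.square(intraday_returns)))

def mse_mae(forecasts, proxies):
    """MSE and MAE of variance forecasts against the realized-variance proxy."""
    e = np.asarray(forecasts, dtype=float) - np.asarray(proxies, dtype=float)
    return float(np.mean(e ** 2)), float(np.mean(np.abs(e)))
```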
A pairwise comparison is performed and the results for the RGARCH model are presented with respect to the two benchmarks: first the GARCH model and second the CARR model. The GARCH and CARR models are the most popular univariate volatility models formulated based on closing-price returns and on the price range, respectively. The forecasting performance results are presented in Tables 5 and 6 for the MSE and MAE criteria, respectively.
According to the MSE criterion, the forecasts of variance from the RGARCH model are more accurate for currencies and the Energy Select Sector SPDR Fund. For the other ETFs and stocks, the results are mixed. However, there are large outliers in the data set, which affect the MSE measure. Such outliers are present for ETFs and stocks (see e.g. the minimum and maximum returns in Table 1). A quite different picture emerges from the MAE criterion. According to this measure the best forecasts are those formulated based on the RGARCH model (except for Amazon and Apple stocks) and, in almost all cases, the higher forecasting accuracy of this model is statistically significant at the 10% significance level (the exceptions are the GBP/USD currency pair and Google stock with respect to the CARR benchmark model). The forecasting superiority of the CARR and RGARCH models over the GARCH model has already been documented by Chou (2005) and Molnár (2016), respectively. Higher forecast accuracy of the RGARCH model in comparison to the CARR model has not previously been demonstrated in the literature.
In order to check the robustness of the results, we also consider 5-min returns instead of 15-min returns and three additional evaluation measures (the coefficient of determination, the logarithmic loss function and the linear exponential loss function). The results for the MSE and MAE criteria for 5-min returns are presented in Table A.3 in the Appendix. The conclusions are very similar to those presented for 15-min returns.
The first additional measure is the coefficient of determination from the Mincer-Zarnowitz regression. A proxy of volatility is regressed on a constant and the forecast of volatility. It is a very simple and popular way to evaluate the forecasting performance of volatility models (see e.g. Poon and Granger, 2003). The values of the coefficient of determination for the competing models are presented in Table 7. These results are in accordance with those for the MSE measure.
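The Mincer-Zarnowitz R² can be computed with an ordinary least-squares fit (a sketch; the function name is ours):

```python
import numpy as np

def mincer_zarnowitz_r2(proxy, forecast):
    """R^2 from regressing the volatility proxy on a constant and the forecast."""
    proxy = np.asarray(proxy, dtype=float)
    X = np.column_stack([np.ones(len(proxy)), np.asarray(forecast, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, proxy, rcond=None)   # OLS coefficients
    resid = proxy - X @ beta
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((proxy - proxy.mean()) ** 2))
    return 1.0 - ss_res / ss_tot
```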
To reduce the impact of outliers, we also use the logarithmic loss function. This is calculated similarly to the MSE measure, but the logarithm of a volatility proxy and the logarithm of the volatility forecast are applied (see Pagan and Schwert, 1990). The estimates of the logarithmic loss function are given in Table 8. These results are very similar to those for the MAE criterion and indicate that the forecasts from the RGARCH model are superior.
Additionally, we apply a linear exponential loss function (LINEX). For a positive coefficient $a$ of the LINEX, the function is approximately linear for over-prediction errors and exponential for under-prediction errors. This means that under-prediction errors have a higher impact on the loss function than over-prediction errors. For a negative coefficient the situation is exactly the opposite. The values of the LINEX function for $a = -1$ and $a = 1$ are presented in the Appendix in Tables A.4 and A.5, respectively. The results for all currency rates indicate that the variance forecasts based on the RGARCH model are more accurate than the forecasts from the competing models. The outcomes for the other assets are ambiguous, but they depend heavily on outliers. When the highest 1% of values are excluded, the values of the LINEX loss function are much smaller and more often indicate the RGARCH model as the best forecasting model.
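The two outlier-robust criteria can be sketched as follows (function names ours; for the LINEX we take the error as proxy minus forecast, so that a positive coefficient penalizes under-prediction exponentially, matching the description above):

```python
import numpy as np

def log_loss(forecasts, proxies):
    """Logarithmic loss (Pagan and Schwert, 1990): squared error computed on the
    logs of the volatility proxy and the forecast, which damps outliers."""
    f = np.log(np.asarray(forecasts, dtype=float))
    p = np.log(np.asarray(proxies, dtype=float))
    return float(np.mean((p - f) ** 2))

def linex_loss(forecasts, proxies, a):
    """LINEX loss with coefficient a on the error e_t = proxy_t - forecast_t:
    L = mean(exp(a * e) - a * e - 1)."""
    e = np.asarray(proxies, dtype=float) - np.asarray(forecasts, dtype=float)
    return float(np.mean(np.exp(a * e) - a * e - 1.0))
```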

Comparison of covariance forecasts
In this section, we compare out-of-sample one-day-ahead forecasts of covariance from the DCC-GARCH and DCC-CARR models with the forecasts from the DCC-RGARCH model. We use the same estimation and forecasting samples as for variances in Section 4.2. The sum of products of 15-min returns (the realized covariance) is employed as a proxy of the daily covariance for the evaluation of the forecasts. We use the same evaluation measures as in the previous section. We perform a pairwise comparison by the Diebold-Mariano test for the DCC-RGARCH model with respect to the two benchmarks: first the DCC-GARCH model and second the DCC-CARR model.
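The realized-covariance proxy is the bivariate analogue of the realized variance (the function name is ours):

```python
import numpy as np

def realized_covariance(r1, r2):
    """Realized covariance: the sum of products of the synchronized intraday
    (e.g. 15-min) returns of the two assets."""
    return float(np.sum(np.asarray(r1, dtype=float) * np.asarray(r2, dtype=float)))
```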
The forecasting performance results for the covariance of returns are presented in Tables 9 and 10 for the MSE and MAE criteria, respectively. For all analysed relations except the one between the United States Oil and United States Natural Gas Funds based on the MSE measure, the lowest values of the loss functions are found for the DCC-RGARCH model. In most cases, this model's higher forecasting accuracy is statistically significant.² In the MAE measure less weight is assigned to outliers and the results for this measure clearly indicate that the DCC-RGARCH model is the best forecasting model. The forecasts formulated based on the DCC-RGARCH model are more precise than the forecasts from both of the benchmark models. The first benchmark, DCC-GARCH, is based on returns formulated on the closing prices. This result shows that the application of range data in the standard univariate GARCH model increases the accuracy of covariance forecasts based on the DCC model. The second benchmark, DCC-CARR, is based on range data. This means that the way in which range data is utilized in the univariate volatility model is decisive in determining the forecasting accuracy of the DCC model. Since both benchmarks, i.e. the DCC-GARCH and DCC-CARR models, share the same structure in the correlation component as the DCC-RGARCH model, our results clearly show that more precise volatility estimates improve covariance forecasts.
The DCC-CARR model, which can be treated as the main benchmark model for models constructed based on range data, was not only inferior to the DCC-RGARCH model for most assets, but also inferior to the DCC-GARCH model for currencies and ETFs.
To check the robustness of the results, we also consider 5-min returns instead of 15-min returns and two additional evaluation measures (the coefficient of determination and the LINEX loss function). The results for the MSE and MAE criteria for 5-min returns are presented in Table A.6 in the Appendix. The outcomes are very similar to those presented for 15-min returns.
² Under the MSE criterion the difference between the loss function of the DCC-RGARCH model and the benchmark model is not statistically significant for EUR/USD-GBP/USD, JPY/USD-GBP/USD, Apple-IBM (with both benchmark models), Oil-Energy (with the DCC-GARCH benchmark) and Amazon-Apple, Amazon-Goldman Sachs, Apple-Google (with the DCC-CARR benchmark). Under the MAE measure there are only two relations for which there is no evidence to reject the null hypothesis of equal predictive ability. These are JPY/USD-GBP/USD (with both benchmark models) and Amazon-Apple (with the DCC-CARR benchmark).

Table 11 presents the coefficient of determination values from the Mincer-Zarnowitz regression. A proxy of covariance is regressed on a constant and the forecast of covariance. We are unable to calculate the logarithmic loss function (see Section 4.2) because some covariances are negative. For all covariances except the relation between the United States Oil and United States Natural Gas Funds the highest R² values are obtained for the DCC-RGARCH model. In most cases the superiority of this model is considerable.
We obtain different results for the asymmetric LINEX loss function. The values of the function for $a = -1$ and $a = 1$ are presented in the Appendix in Tables A.7 and A.8, respectively. The results for all relations between currency rates indicate that the covariance forecasts based on the DCC-RGARCH model are more accurate than the forecasts from the competing DCC models. The outcomes for the other assets are mixed, but outliers have a considerable influence on the evaluation. After excluding the highest 1% of values, the results depend on the relative valuation of over- and under-prediction errors. For $a = -1$, i.e. when over-prediction errors have a higher impact on the loss function, the best forecasts are based on the DCC-RGARCH model, whereas for $a = 1$, i.e. when under-prediction errors have a greater influence on the LINEX, the DCC-CARR model is better according to this criterion.

Forecasting value-at-risk
Covariance forecasting is crucial for most multivariate financial applications, such as portfolio construction, the valuation of assets, risk management and the analysis of contagion effects. More accurate covariance forecasts give an advantage in various financial applications. That is why covariance forecasting, like volatility forecasting, has not only statistical but also economic consequences.
In this subsection we apply the considered DCC models to one such application, namely the evaluation of risk, using the value-at-risk (VaR) measure. VaR was developed by financial practitioners as an easily interpretable number which encodes information about a portfolio's risk. Despite being a single number, VaR enables managers to interpret the cost of risk and allocate capital efficiently. We formulate daily forecasts of VaR for three separate portfolios of currency rates, commodity exchange traded funds and stocks. All the portfolios are constructed with equal weights. The same assets and forecasting period are assumed as in the analysis of variances and covariances in Sections 4.2 and 4.3. We construct VaR forecasts for the 95% and 99% confidence levels.
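The section does not spell out the VaR formula, but under the models' conditional normality a one-day-ahead VaR of an equally weighted portfolio would follow from the covariance forecast as sketched below (a Gaussian, zero-conditional-mean sketch; the function name is ours):

```python
import numpy as np
from statistics import NormalDist

def gaussian_portfolio_var(cov_forecast, weights=None, level=0.95):
    """One-day-ahead Gaussian VaR (as a positive number, zero conditional mean):
    VaR = z_level * sqrt(w' H_t w). Equal weights are used when none are given."""
    H = np.asarray(cov_forecast, dtype=float)
    k = H.shape[0]
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, dtype=float)
    sigma = float(np.sqrt(w @ H @ w))
    return NormalDist().inv_cdf(level) * sigma
```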
Our evaluation of the forecasts is based on two approaches: the first involves testing the competing VaR models for statistical accuracy, while the second pertains to measuring the loss to the economic agent as a result of using the model. We test the statistical adequacy of the forecasts based on: the unconditional coverage test by Kupiec (1995), the independence and conditional coverage tests by Christoffersen (1998), and the unconditional coverage, independence and conditional coverage tests by Candelon et al. (2011). The results of these tests for the 95% VaR forecasts are presented in Table 12 (the outcomes for the 99% confidence level are given in Table A.9 in the Appendix). The results for the Candelon et al. (2011) tests are presented for 5 moments, but we also obtained very similar results for 1, 2, 3, 4 and 6 moments.
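As an example of the statistical backtests, the Kupiec (1995) unconditional coverage statistic compares the observed violation rate with the nominal one (a sketch; the function name is ours and the edge cases x = 0 and x = T are not handled):

```python
import numpy as np

def kupiec_lr_uc(x, T, p):
    """Kupiec (1995) unconditional coverage LR statistic; asymptotically chi^2(1)
    under the null that the VaR violation probability equals p.

    x : number of VaR violations observed in T days (0 < x < T)."""
    pi = x / T   # observed violation rate
    return -2.0 * (x * np.log(p) + (T - x) * np.log(1.0 - p)
                   - x * np.log(pi) - (T - x) * np.log(1.0 - pi))
```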
We do not obtain fully satisfactory results for all portfolios for any of the models, but the outcomes depend heavily on the kind of assets and tests applied. The statistical test results do not differ sufficiently between the competing models to clearly indicate which is a better model.
In the second approach, we perform an economic evaluation of the models based on loss functions. We concentrate on firm loss functions, namely the loss function by Sarma et al. (2003) and three loss functions by Caporin (2008). This approach emphasizes the role of the utility function of risk managers, who have to consider their firms' profitability and therefore prefer smaller scaled VaR measures for efficient capital allocation. In order to assess whether the differences between loss functions are statistically significant, we apply the Diebold-Mariano test. The results for the 95% VaR forecasts are given in Table 13 (the outcomes for the 99% confidence level are presented in Table A.10 in the Appendix). A p-value of the Diebold-Mariano test lower than the significance level means that economic losses for the DCC-RGARCH model are lower than the losses for a benchmark model (the DCC-GARCH or DCC-CARR model).
For most of the considered loss functions, significantly more accurate VaR forecasts are constructed based on the DCC-RGARCH model than the DCC-GARCH or DCC-CARR models. This means that risk managers should prefer the DCC-RGARCH model for the estimation of their VaR forecasts. The results are very similar for both commonly employed confidence levels, 95% and 99%.

Conclusion
The DCC-GARCH model is one of the most popular multivariate volatility models, due to its simplicity and ease of estimation. However, its parameters are usually estimated based only on closing prices, even though high and low prices contain more information about volatility. In this study, we have proposed a new specification of the DCC model called the DCC-Range-GARCH
model, which is a combination of the DCC model by Engle (2002) and the Range-GARCH model by Molnár (2016). The DCC-Range-GARCH model is very similar to the DCC model by Engle, but it is based on a much more efficient volatility estimator that uses the daily range, i.e. the log-difference between the daily high and low prices.
We have compared our DCC-Range-GARCH model to the DCC-GARCH model by Engle (2002) and to the DCC-CARR model. All three models are very similar in their correlation part, but differ in their specifications for the conditional variances: the DCC-GARCH model is based on the GARCH model, the DCC-Range-GARCH model on the Range-GARCH model, and the DCC-CARR model on the CARR model by Chou (2005). We have evaluated these models on three data sets: currencies, exchange traded funds and stocks.
The univariate range-based models, CARR and Range-GARCH, had not been compared previously, so we first compared the forecasting accuracy of these two models. We found that the CARR model is outperformed by the Range-GARCH model. Surprisingly, the CARR model is often inferior even to the standard GARCH model, whereas the Range-GARCH model outperforms the standard GARCH model in most cases. We then turned our attention to the multivariate models and the comparison of covariance forecasts, which were the main focus of this paper. We found that the proposed DCC-Range-GARCH model is superior not only to the standard DCC-GARCH model but also to the DCC-CARR model.
Our results illustrate that using range data in the DCC model improves the estimation of return covariances and increases the accuracy of the covariance and VaR forecasts based on this model, compared with using closing prices only. Moreover, the way the range is utilized matters, as our proposed model outperforms the DCC-CARR model, which is also based on the range. Therefore, other multivariate range-based volatility models, such as the double smooth transition conditional correlation CARR model by Chou and Cai (2009), the range-based copula models by Chiang and Wang (2011) and Wu and Liang (2011), and the range-based regime-switching dynamic conditional correlation model by Su and Wu (2014), would probably also benefit from using the Range-GARCH model in place of the CARR specification.

Notes to the VaR backtesting tables: the evaluation period is January 4, 2010, to December 30, 2016; LR UC is the statistic for the Kupiec (1995) unconditional coverage test; LR IND is the statistic for the Christoffersen (1998) independence test; LR CC is the statistic for the Christoffersen (1998) conditional coverage test; J UC is the statistic for the Candelon et al. (2011) unconditional coverage test; J IND is the statistic for the Candelon et al. (2011) independence test for up to five lags; J CC is the statistic for the Candelon et al. (2011) conditional coverage test with the number of moments fixed to 5. The p-values for J UC, J IND and J CC were corrected by Dufour's (2006) Monte Carlo procedure.
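To make the unconditional coverage criterion concrete, the Kupiec (1995) LR UC statistic can be computed from the hit series of VaR violations. The sketch below is our own illustration (function name and interface are not from the paper):

```python
import numpy as np
from scipy import stats

def kupiec_lr_uc(violations, alpha):
    """Kupiec (1995) unconditional coverage test.

    violations : 0/1 array, 1 when the realized loss exceeds the VaR forecast.
    alpha : nominal violation probability (0.05 for the 95% VaR).
    Returns the LR_UC statistic (chi-squared with 1 df under H0) and p-value.
    """
    x = np.asarray(violations)
    n, n1 = x.size, int(x.sum())
    n0 = n - n1
    pi_hat = n1 / n  # empirical violation rate
    # log-likelihood under H0 (rate = alpha) and under the MLE pi_hat;
    # the MLE log-likelihood is 0 in the degenerate cases n1 = 0 or n1 = n
    ll0 = n0 * np.log(1.0 - alpha) + n1 * np.log(alpha)
    ll1 = (n0 * np.log(1.0 - pi_hat) + n1 * np.log(pi_hat)
           if 0 < n1 < n else 0.0)
    lr = -2.0 * (ll0 - ll1)
    return lr, 1.0 - stats.chi2.cdf(lr, df=1)
```

A large LR UC (small p-value) rejects the hypothesis that the empirical violation rate matches the nominal level.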