Compatibility of market risk measures

An important element of everyday financial decisions is assessing the scale of the risk of investing in various financial products. Knowledge of the degree of risk of the activities undertaken in this field allows a certain predictability of the negative effects that may occur. This is of great importance in the context of risk aversion, and thus for a better allocation of resources. The multitude of market risk measures is substantial, and in addition they provide information about investment risk from different perspectives and in different forms. A very interesting issue is the compatibility of the scales of these measures: whether such measures define the scale of risk to the same extent, and whether their signals about dangers overlap in time. The above considerations, based on Value at Risk and the so-called RiskGrade, have become the contribution to the creation of this publication.


INTRODUCTION
An inherent element of modern finance is a continuous "struggle" with risk and its marginalization. Over the years, a number of methods have been developed which, to a lesser or greater extent, enable the identification of potential threats as well as their measurement. Every day we face the problem of choosing the right one, both in terms of the correctness of its indications and the simplicity of its estimates. It is important to remember that the whole methodology should also serve average investors, and not just a small circle which either has a vast knowledge of mathematics or the appropriate software for more advanced calculations.
The purpose of this publication is, therefore, an attempt to assess two interesting, yet different, approaches to the analysis of the market risk of financial instruments. One of them is the well-known Value at Risk 1 , the second the rarely used RiskGrades™ methodology. These two distinct concepts present risk differently. The first of them clarifies the scale of losses we may incur at a specific confidence level; the second, in turn, shows the amount of risk in relation to a baseline. The fundamental question that the authors of this publication tried to answer is the "compatibility" of the indications of these methodologically distinct concepts with regard to the information about the scale of threats that we receive in each case.
This publication is, therefore, a fragment of a series of articles devoted directly to the VaR methodology and its determinants: an assessment of the impact of the significance level on the effectiveness of VaR estimates (Mentel, 2011), a comparison of parametric and nonparametric methods (Mentel, 2013), a discussion focused on the significance of historical observations which affect current market situations and consequently impact short-term forecasts (Mentel & Brożyna, 2014), an analysis of the impact of the decay factor λ on VaR estimates (Mentel & Brożyna, 2015a), and the characteristics of a computer program for the calculation of Value at Risk (Mentel & Brożyna, 2015b).

A REVIEW OF RESEARCH - THE RISKMETRICS CONTRIBUTION
The practical, widespread use of VaR should be traced to the 1996 publication issued by JP Morgan (RiskMetrics, 1996). It describes not only the basis of Value at Risk and the methods of its estimation, but also the assumptions underlying its determination. This publication is thus a kind of compendium of knowledge about the practical applications of this measure, from the definition to the mechanisms of its operation. Most importantly, it then became possible to create a method that served to express the overall level of risk, regardless of the type of assets. Its additional advantage was that the information obtained was given in fully intelligible units.
The basic version of the VaR method, which was initially developed (the variance-covariance method), was built on the assumption of a normal distribution. But we must remember that this distribution is "normal" only in the sense that in the everyday world we quite frequently encounter little else. While many variables observed in nature actually take values in accordance with a Gaussian distribution, in the financial world this is not always the case. This fact is a fundamental drawback of the original concept of VaR, which in times of smooth market functioning one tried not to notice (Brown, 2004).
The main problem of classical VaR is the tails of the distribution of variables, because what is most interesting in the market happens at the ends of the distribution. This is the issue of the so-called "black swans" (Taleb, 2007). The biggest gains and the greatest losses are brought not by normal trading days, but by the extreme ones. A model postulated for better risk pricing should therefore be based on a distribution which pays attention to the ends of the distribution of the variables. However, a distribution which is sensitive at the tails will be vague in its central part. Meanwhile, VaR has been used primarily in periods of normal functioning of markets.
Without going further into the details, quite an extensive publication devoted to Value at Risk is the book by Philippe Jorion (Jorion, 2006). This publication is valuable for anyone wishing to explore the intricacies of measuring potential losses.
The second of the measures under consideration, i.e. RiskGrades, was presented by RiskMetrics in 2001 in a technical document (Kim & Mina, 2001). Despite the wide range of applications in the form of several of its "derivatives", like the RiskGrade Diversification Benefit, RiskImpact or Chance of Losing Money, it is not as common as the measure discussed previously. In 2002 it went through further development in the publication RiskGrade Your Investments... (Elmiger, Kim & Berman, 2002). It is worth emphasizing that this measure, described fully later in this article, does not require assumptions about the variable's distribution.

VALUE AT RISK IN THE RISK ANALYSIS
Value at Risk is, as mentioned earlier, the most common method of measuring the uncertainty of the future states of a given asset. Significant in its determination is that it can be estimated in several ways. The basic division, in this respect, is into parametric and nonparametric methods.
The first group is based on a variety of models describing the "behavior" of financial instruments in the portfolio. The development of such models belongs to the large organizations of the financial world, such as the RiskMetrics™ group, recognized as the greatest expert in this field. This group of methods for calculating VaR uses the assumption that the return series taken for analysis have a certain probability distribution.
Since an assumption of this type can sometimes be problematic, as it is often very difficult to fit the distribution, one turns to the second group of VaR estimation methods, namely nonparametric ones. In this case the simulation methods, i.e. historical simulation or the Monte Carlo method, are the most significant.
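As an illustration of the nonparametric route, a historical-simulation VaR is simply an empirical quantile of past returns. The following is a minimal sketch, not the specific implementation used in the cited studies; the function name and the sample data are ours.

```python
def historical_var(returns, alpha=0.05):
    """Historical-simulation VaR: the empirical alpha-quantile of past
    returns, reported as a positive loss figure (as a fraction of value)."""
    ordered = sorted(returns)                    # worst returns come first
    idx = max(0, int(alpha * len(ordered)) - 1)  # index of the alpha-quantile
    return -ordered[idx]

# Twenty synthetic daily log returns (illustration only)
rets = [0.01, -0.03, 0.02, -0.01, 0.005, -0.02, 0.015, 0.0, -0.005, 0.01,
        0.02, -0.015, 0.01, -0.025, 0.005, 0.0, 0.01, -0.01, 0.02, -0.02]
print(historical_var(rets, alpha=0.05))  # → 0.03
```

No distributional assumption is made here: the estimate depends only on the observed sample, which is exactly the appeal of the nonparametric approach described above.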
A resultant element in this respect may be another class of methods, not distinguished in the literature, namely semi-parametric ones. This group is usually included in the non-parametric methods, hence the lack of a clear separation of it as a separate class. A representative of this group is the EKT method (S. Emmer, C. Klüppelberg, M. Trüstedt) (Mentel, 2008).
There are many analytical models describing the fluctuation of financial instruments over time. Most of them are very important because of their practical applications. These are mainly models introduced and successfully used by analysts and financial engineers gathered around the RiskMetrics™ group. The main difference between them lies in the approach to the modeling of random disturbances (e.g. the normal distribution, Student's t, or even the GED) and in the methodology itself for calculating VaR. In this case we can distinguish, for example, models based on generalized autoregressive conditional heteroskedasticity processes (such as GARCH(1,1)), so-called Mean Reversion models, and the Random Walk, which are also successfully used in many issues of financial engineering (Pisula & Mentel, 2003).
The most common model in this class is the RiskMetrics Normal Drift model, with random disturbances modeled by a normal distribution (RiskMetrics Technical Document, 1996; RiskMetrics Monitor, 1996).
In this model it is assumed that the logarithmic returns of share prices, r_t = ln(P_t / P_{t-1}), where P_t is the share price in the considered period t and P_0 is the share price at baseline, are generated according to the following relation:

r_t = μ + σ_t ε_t,   ε_t : N(0, 1).   (1)

In this model, the so-called conditional variance of daily returns of the share prices (under the practical assumption that their average value is zero) is calculated as an infinite moving average with exponential weights:

σ_t² = (1 − λ) Σ_{i=1..∞} λ^{i−1} r²_{t−i}.   (2)

Approximately, for a sufficiently large number of historical observations (n → ∞), this relation can be written as

σ_t² ≈ [(1 − λ) / (1 − λⁿ)] Σ_{i=1..n} λ^{i−1} r²_{t−i},

and recursively as:

σ_t² = λ σ²_{t−1} + (1 − λ) r²_{t−1}.   (3)

For returns with longer time horizons (T > 0), the scaling of the variance relative to the length of the horizon is applied (practical for logarithmic returns), i.e. σ²_{t,T} = T σ_t² (Mentel, 2012).
RiskMetrics™ uses in its analysis a universal smoothing constant λ = 0.97 for daily returns.
The limits of VaR (at an assumed significance level α), estimated on the basis of the above model for the daily time horizon, will be, respectively for returns and for share prices:

VaR_t(r) = z_α σ_t,   VaR_t(P) = P_{t−1} (1 − exp(z_α σ_t)),

where z_α is the quantile of order α of the standard normal distribution.
The relevant parameters of the model can be determined by the method of maximum likelihood.
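The recursive EWMA variance and the resulting one-day VaR described above can be sketched in a few lines. This is a minimal illustration under the document's assumptions (λ = 0.97, zero mean daily returns, a normal quantile of 1.645 for the 95% level); the function names and the synthetic return series are ours.

```python
import math

def ewma_volatility(returns, lam=0.97):
    """Recursive EWMA estimate of the conditional daily volatility:
    sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2."""
    var = returns[0] ** 2  # seed the recursion with the first squared return
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return math.sqrt(var)

def one_day_var(price, returns, z=1.645, lam=0.97):
    """One-day Value at Risk of a long position worth `price`,
    assuming normally distributed log returns (z = 1.645 for 95%)."""
    sigma = ewma_volatility(returns, lam)
    return price * (1.0 - math.exp(-z * sigma))

# Synthetic daily log returns (illustration only)
rets = [0.01, -0.02, 0.015, -0.005, 0.007, -0.012, 0.003]
print(round(one_day_var(100.0, rets), 4))
```

The recursion makes the estimate cheap to update daily: each new return shifts the variance by a fixed fraction (1 − λ), which is exactly why the method adapts to recent market conditions.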

RISKGRADES™ METHODOLOGY
The RiskGrade™ statistic is a measure of volatility developed by the RiskMetrics™ Group. Its measurements are based on exactly the same data and analyses as the RiskMetrics™ Value at Risk; however, it enables investors to understand market risk more fully. This measure is scaled, which makes it more intuitive and easier to use than VaR. RiskGrades™ are measured on a scale from 0 to 1,000; however, it often happens that the upper limit is exceeded. A value of 100 corresponds to the average value of RiskGrade for the major stock market indices in normal market conditions in 1995-1999. The zero level of this measure is adopted for funds kept in cash.
It is a measure of risk based on volatility, expressed as a standard deviation determined for the logarithmic (annualized) returns of the given financial instrument. The higher the volatility, the greater the "risk grade" (RG), and thus the higher the risk of monetary investment in the instrument for a potential investor. It is a good measure of risk because it is a dynamic (time-varying) measure, which enables the investor to keep control of the risks of the investment. It also allows a comparison of investment risk within a given class of financial instruments (e.g. shares), because it is a relative, that is comparative, measure. The risk grade (RG) is defined as follows (Kim & Mina, 2001):

RG_t^(i) = 100 · σ_t^(i,252) / σ_base,   (4)

where σ_t^(i,252) is the standard deviation of annual (252-day) logarithmic returns for the prices of the studied instrument, and σ_base is the base volatility taken as a reference point.
The estimate of σ_t^(i,252) in formula (4) can be determined from the following relationship:

σ_t^(i,252) = √252 · √{ [(1 − λ) / (1 − λⁿ)] Σ_{k=1..n} λ^{k−1} (r^(i)_{t−k})² }.   (5)

In formula (5), for the estimator of the standard deviation of daily logarithmic price returns of the tested instrument, a "smoothing" constant λ = 0.97 is used and n = 151 recent historical observations are taken into account (this is determined by RiskMetrics as a sufficient number of necessary historical observations).
This estimator was derived assuming that the logarithms of the daily price changes of the tested financial instrument are modeled by a random walk process, and that the expected daily returns of this instrument are zero.

The base volatility, against which the volatility of the financial instrument is compared in the RG calculation, was determined from a group of 21 international stock indices for the financial markets with the largest capitalization (the data were obtained from the statistics of the London Stock Exchange). For each index j, the average standard deviation σ̄^(j,252) of its annual logarithmic returns (calculated using formula (5)) was determined within 5 years, i.e. from January 1995 to December 1999. The base volatility was assumed as a weighted average (with weights resulting from market capitalization) calculated from the average annual volatility of the studied group of indices during the relevant period. During the considered 5-year period the weighted average annual volatility of the studied indexes was approximately 20% (Kim & Mina, 2001). Therefore, σ_base = 0.20 is assumed as the base volatility in formula (4), and the value of RG is ultimately determined by the formula

RG_t^(i) = 100 · σ_t^(i,252) / 0.20 = 500 · σ_t^(i,252).

Table 1 shows the composition of the group of international indexes, the determined average annual volatility for each index in the period from 1995 to 1999, and the weighted average base volatility.
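Under the assumptions above (λ = 0.97, n = 151, a 20% base volatility), the RG calculation can be sketched as follows. The helper names are ours, not from the RiskMetrics documents.

```python
import math

LAM = 0.97       # smoothing constant used in formula (5)
N_OBS = 151      # number of recent historical observations
BASE_VOL = 0.20  # base volatility (20% annualized) in formula (4)

def annualized_ewma_vol(returns, lam=LAM, n=N_OBS):
    """Formula (5): exponentially weighted standard deviation of daily
    log returns, annualized with the 252-trading-day scaling rule."""
    rs = returns[-n:]                      # most recent n observations
    weights = [lam ** k for k in range(len(rs))]
    wsum = sum(weights)                    # equals (1 - lam^n) / (1 - lam)
    # newest return gets weight lam^0, the oldest lam^(n-1)
    var_daily = sum(w * r * r for w, r in zip(weights, reversed(rs))) / wsum
    return math.sqrt(252.0 * var_daily)

def risk_grade(returns):
    """RG = 100 * annualized volatility / base volatility."""
    return 100.0 * annualized_ewma_vol(returns) / BASE_VOL

# An instrument with ~20% annualized volatility should score near RG = 100
daily_sigma = 0.20 / math.sqrt(252.0)
flat = [daily_sigma] * 200  # constant-magnitude synthetic returns
print(round(risk_grade(flat), 1))  # → 100.0
```

The scaling is what makes the measure intuitive: cash scores 0, a typical major index in calm conditions scores about 100, and doubling the volatility doubles the RG.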

ANALYSIS
In order to assess the compatibility of the two presented measures, their estimates were computed for the companies included in the WIG30 index of the Warsaw Stock Exchange. The time horizon considered was the period from 2010 to 2014. The five-year research period seems to be long enough to confirm any regularity. The study omitted the indications for the shares of ING BSK 2 and Boryszew 3, which reduces the research sample to twenty-eight elements.
Both VaR and RiskGrades were determined for one-day data based on logarithmic rates of return. Such an approach was aimed at a reliable, relative assessment of the agreement between the indications of these two different measures.

2
In the case of ING BSK, a 1:10 share split took place in November 2011.

3
In the case of Boryszew we deal with two events interfering with the VaR estimates. The first is the resolution on the capital increase (October 2010), resulting in a reduced price of 3.19 zł. The second, in turn, is a split of the company's shares at a ratio of 10:1 (April 2014).

As a result of the conducted estimates, the results obtained have been presented in figures 1 and 2. As can be observed, the indications in the form of box-plots overlap significantly, which allows one to assume that the assessment of the risk scale using the first or the second option is almost identical.
Confirmation of this state can be traced in the analysis of interdependence. The correlation coefficients determined throughout the analyzed period (Figure 3) reinforce the belief that the overall scale of risk presented by both methods is largely the same. In more than twenty cases the value of the correlation coefficient was greater than 0.9, with the average at 0.91. It is also worth making a relative evaluation of the indications of the estimated Value at Risk and RiskGrade in relation to the classical measure of risk determined on the basis of William Sharpe's single-index model. Starting directly from the regression equation, commonly known as the characteristic line of a security, we can determine the derived measures, i.e. market risk and specific risk. Market risk reflects undiversifiable risk, resulting mainly from the behavior of the stock exchange as a whole. The latter is the diversifiable risk connected with a specific share, which in theory can be reduced to zero. The total risk of a security is the sum of the two above.
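The single-index decomposition used as the benchmark here splits the total variance into beta² times the market variance plus the residual variance. A minimal sketch on synthetic data follows; the function and variable names are ours, and the numbers are illustrative only.

```python
def sharpe_decomposition(asset_returns, market_returns):
    """Single-index model r_i = alpha + beta * r_m + e.
    Returns (market_risk, specific_risk) as variance components,
    with beta estimated by ordinary least squares."""
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    beta = cov / var_m
    var_a = sum((a - mean_a) ** 2 for a in asset_returns) / n
    market_risk = beta ** 2 * var_m      # undiversifiable component
    specific_risk = var_a - market_risk  # diversifiable residual variance
    return market_risk, specific_risk

# Synthetic example: asset tracks the market with beta ~ 1.2 plus noise
market = [0.01, -0.02, 0.015, -0.005, 0.007, -0.012]
noise = [0.001, -0.002, 0.0, 0.002, -0.001, 0.001]
asset = [1.2 * m + e for m, e in zip(market, noise)]
mr, sr = sharpe_decomposition(asset, market)
print(mr > 0 and sr >= 0)  # → True
```

With an OLS beta the two components always sum to the total variance, which is the decomposition the text relies on when comparing VaR and RiskGrade against total risk.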
The analysis of the interrelations of the indications of the previously considered risk measures in relation to the risk estimated by Sharpe's model is shown in Figure 4. Perhaps the values of the correlation coefficients are not too large; however, for the most part they exceed the level of 0.4. There are entities for which the measured correlation of VaR and RiskGrade in relation to the total risk is quite large, even reaching the level of 0.7.
But this quite "big" coverage in the indications is mainly due to the different approaches to variability, which in the case of the methods developed by the JP Morgan group is modeled using the exponentially weighted moving average (EWMA) (Crowder, 1987). In the case of the risk determined on the basis of Sharpe's model, the volatility is determined in the classic way, in relation to the WIG index. Thus, the models proposed by RiskMetrics™ are a good suggestion for a constantly changing value of assets. The non-stationary volatility, which we have to deal with in this case, is constantly updated and the received estimates are more adequate to reality (Mentel & Brożyna, 2015). It seems, therefore, that the risk estimated on the basis of VaR and RiskGrade is more flexible and actually supported by recent market events.

CONCLUSIONS
As mentioned earlier, the RiskGrade™ statistic is a measure of volatility developed by the RiskMetrics™ Group affiliated with JP Morgan. Thus, its measurements are based on exactly the same data and analysis as the RiskMetrics™ Value at Risk. It seems, however, that it allows the average investor to understand market risk better. Its idea is a clear presentation of the scale of threats in relation to the resultant market value. A big advantage is the scaling, which makes it more intuitive compared to VaR. Its reference to the level of zero for funds kept in cash, or to 100, which is equal to the RiskGrade average for major stock indexes, makes it much easier to interpret.
RiskGrades change over time; the measure is dynamic and adapts to current market conditions. In difficult times, such as the Asian and the Russian crises, RiskGrades can easily exceed the level of 200, while in more stable periods they can drop below 50. This measure can help investors monitor their exposure to market risk dynamically. It also allows comparisons between investments. It is a standardized measure of volatility, and thus allows a comparison of the investment risks of various asset classes and regions. It can be said that a Brazilian share with a RiskGrade of 300 is six times riskier than an Asian bond fund with a RiskGrade of 50.
Basically, there are two elements that are the key advantages of this measure. The first is the fact that RiskGrade estimates are based on exponentially weighted historical data, allowing them to adjust better to current market conditions. This approach significantly improves the accuracy of forecasting and the response to extreme cases. Another important consideration is the calibration of the measurement, which makes it easier to interpret for the general public and not only for professional players.
It seems, therefore, that the use of RiskGrade is, for a typical stock exchange player, more reasonable than VaR estimates. The significant similarity in how both metrics measure the scale of threats makes the first one more attractive. But we must remember an important asset of Value at Risk, namely the valuable measurement of the scale of potential losses we may incur. In the case of RiskGrade we do not receive such information. An important issue in the method selection, therefore, is the information we want to obtain.
To sum up, an important prerequisite for making the right decisions is the continuous updating of theoretical knowledge and practical skills. A positive result of a decision depends largely on the choice of method. The choice is, however, made by a person who relies on confidence in the chosen solution. Each time the solution is based on certain assumptions and can give specific guidance as to particular elements of decisions, but usually only in the general form of an optimal strategy. There is, therefore, a need to develop its detailed tactical implementation.

Figure 1. A box-plot of the Value at Risk in the cross-section of the concerned companies. Source: own work.

Figure 2. A box-plot of RiskGrades in the cross-section of the concerned companies. Source: own work.

Figure 3. The values of correlation coefficients determined for VaR and RiskGrades in the cross-section of the considered companies. Source: own work.

Figure 4. The values of correlation coefficients determined for the VaR and RiskGrades in relation to the total risk in the cross-section of the considered companies. Source: own work.