Parametric or Non-Parametric Estimation of Value-At-Risk
International Journal of Business and Management, Vol. 8, No. 11; 2013 (www.ccsenet.org/ijbm)

In financial analysis, predicting the future states of instruments chosen for investment is extremely important: it reduces risk and maximizes potential profits. Any method that helps anticipate the negative consequences of decisions is therefore valuable, and knowledge of the available measures and their efficiency is an additional advantage. A paper that discusses Value at Risk and assesses this measure thus seems to be of interest.


Introduction
The financial world has its own rules, and all decisions result in financial consequences. It is therefore crucial to understand the mechanisms that create behaviors in this area and, most importantly, the methods which allow managing them to some extent and possibly reducing their negative effects. It is indeed important to know the instruments in which one invests, but it is even more important to know how to reduce the risk which always accompanies this type of financial investment. Any measure, including value at risk, may not only allow a fuller exploration of the mechanisms that drive the financial markets but, more importantly, can be used as a tool to fight the negative consequences of our decisions.
Thus, in the paper Value at Risk (VaR) is presented as an instrument which reduces risk and defines its scale. The analysis covers twenty companies of the Warsaw Stock Exchange currently included in the WIG20. Given the multitude of VaR calculation methods, some of them are selected and confronted with one another. Such an approach is designed to assess the effectiveness of simulation methods against the more complex parametric methods. As a resultant which combines both of these groups, a method belonging to the semi-parametric EKT (Note 1) group is also proposed. The Monte Carlo and historical simulations are taken into consideration as the non-parametric methods. The methods developed by Risk Metrics with different approaches to the modeling of random noise (the normal distribution, the Student's t distribution and the GED (Note 2)) are the representatives of the second group. In order to evaluate the effectiveness of these methods fully, three significance levels are taken into account: 0.01, 0.05 and 0.10. The research horizon covers the period from 01.01.2010 to 31.12.2012, and 151 historical observations are used as the information necessary to estimate the models. In order to estimate the quantity, the dependence developed by Risk Metrics™ was used, namely the exponentially weighted moving average of the conditional variance:

$\sigma_t^2 = \lambda \sigma_{t-1}^2 + (1 - \lambda) r_{t-1}^2.$

Parametric Methods
There are many analytical models describing the fluctuation of financial instruments over time. Most of them are very important because of their practical applications. They are mostly models introduced and successfully used by the financial analysts and engineers gathered around the Risk Metrics™ group (Note 3). The core difference between them lies mainly in the different approaches to the modeling of random noise (e.g. the normal distribution, the Student's t distribution or the GED) and in the method of calculating VaR itself. Here one may distinguish, inter alia, models based on generalized autoregressive conditional variance processes (such as GARCH(1,1)), the so-called "reverting to the mean" (Mean Reversion) models, and the Random Walk, which are also successfully used in many problems of financial engineering (Pisula & Mentel, 2003).
The most common model in this class is Risk Metrics Normal Drift, a model with random noise modeled by the normal distribution (Risk Metrics Technical Document, Risk Metrics Monitor, 1996).
In this model it is assumed that the logarithmic returns of stock prices, $r_t = \ln(P_t / P_{t-1})$, where $P_t$ is the stock price in research period $t$ and $P_{t-1}$ is the price in the preceding (baseline) period, are generated according to the following process:

$r_t = \mu + \sigma_t \varepsilon_t, \qquad \varepsilon_t \sim N(0, 1).$

In this model the so-called conditional variance of the daily returns of stock prices (under the practical assumption that the average return is zero) is calculated as an infinite moving average with exponential weights:

$\sigma_t^2 = (1 - \lambda) \sum_{i=0}^{\infty} \lambda^i r_{t-1-i}^2.$

Approximately, for a sufficiently large number $n$ of historical observations ($n \to \infty$) this dependence can be written as:

$\sigma_t^2 \approx (1 - \lambda) \sum_{i=0}^{n-1} \lambda^i r_{t-1-i}^2,$

and recursively as:

$\sigma_t^2 = \lambda \sigma_{t-1}^2 + (1 - \lambda) r_{t-1}^2.$

For returns with longer time horizons ($T > 0$) the variance is scaled with respect to the horizon length, $\sigma_T^2 = T \sigma_1^2$ (Pisula, 2002).
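The exponential-weighting recursion above can be sketched in Python as follows. This is an illustrative sketch, not part of Risk Metrics™ itself: the function name `ewma_variance` and the choice to seed the recursion with the first squared return are our own assumptions.

```python
import numpy as np

def ewma_variance(returns, lam=0.97):
    """Recursive EWMA conditional variance:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2."""
    returns = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns[0] ** 2  # seed the recursion with the first squared return (assumption)
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return sigma2
```

With the smoothing constant λ = 0.97 used in the paper, roughly the last hundred observations carry non-negligible weight, which matches the 151-observation estimation window mentioned in the introduction.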
Risk Metrics™ applies in the analyses a universal smoothing constant $\lambda = 0.97$ for daily returns. The VaR limits estimated on the basis of this model (at the accepted significance level $\alpha$) for the daily time horizon are, for the returns and stock prices respectively:

$VaR_t = -(\mu + \sigma_t z_\alpha), \qquad VaR_t^P = P_{t-1}\left(1 - e^{\mu + \sigma_t z_\alpha}\right),$

where $z_\alpha$ is the quantile of order $\alpha$ of the standard normal distribution. The relevant parameters of the model ($\mu$ and $\lambda$) are determined by the maximum likelihood method.
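The normal-noise VaR quantile can be sketched in Python as below; `var_normal` is a hypothetical helper name, and the convention of reporting VaR as a positive loss is our assumption.

```python
import numpy as np
from scipy.stats import norm

def var_normal(mu, sigma, alpha=0.05, price=None):
    """One-day VaR under normal noise: the alpha-quantile of the return
    mu + sigma * z_alpha, expressed as a positive loss."""
    q = mu + sigma * norm.ppf(alpha)      # alpha-quantile of the one-day return
    if price is None:
        return -q                         # VaR on returns
    return price * (1.0 - np.exp(q))      # price-level VaR for log-returns
```

For example, with zero drift, a daily volatility of 2% and α = 0.05, the return-level VaR is about 3.3%.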
Another model recommended by Risk Metrics™ is the Risk Metrics t-Student model, with random noise modeled by the Student's t distribution.
In this model it is assumed that the returns are generated according to the following process:

$r_t = \mu + \sigma_t \varepsilon_t, \qquad \varepsilon_t \sim t(\nu).$

The VaR limits estimated on the basis of this model (at the accepted significance level $\alpha$) for the daily time horizon are, for the returns and stock prices respectively:

$VaR_t = -(\mu + \sigma_t t_\alpha(\nu)), \qquad VaR_t^P = P_{t-1}\left(1 - e^{\mu + \sigma_t t_\alpha(\nu)}\right),$

where $t_\alpha(\nu)$ is the quantile of order $\alpha$ of the Student's t distribution with $\nu$ degrees of freedom.
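The t-quantile version differs from the normal one only in the quantile used, which the following sketch makes explicit (the function name `var_student` and the use of the raw, unstandardized t quantile are assumptions of this illustration):

```python
from scipy.stats import t

def var_student(mu, sigma, nu, alpha=0.05):
    """One-day VaR with Student-t noise: heavier tails than the normal,
    so small-alpha quantiles lie further out."""
    return -(mu + sigma * t.ppf(alpha, df=nu))
```

At α = 0.01 and ν = 5 the t quantile is noticeably further out than the normal quantile 2.33, which is exactly the fat-tail effect the paper relies on; as ν grows the t-based VaR converges to the normal one.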
The model parameters ($\mu$, $\lambda$, $\nu$) are determined as in the previous case, applying the maximum likelihood method. The Risk Metrics GED model discussed in the article has differently modeled noises: in this case they are modeled by the General Error Distribution.
In this model it is assumed that the returns are generated according to the following process:

$r_t = \mu + \sigma_t \varepsilon_t, \qquad \varepsilon_t \sim GED(\kappa).$

The density function of a generalized error distribution $GED(\mu, \sigma, \kappa)$ with location $\mu$, scale $\sigma$ and shape $\kappa$ is of the form:

$f(x; \mu, \sigma, \kappa) = \frac{\kappa \exp\left(-\frac{1}{2}\left|\frac{x-\mu}{\sigma}\right|^{\kappa}\right)}{\sigma\, 2^{1+1/\kappa}\, \Gamma(1/\kappa)}.$

The GED distribution is often used in practice because for $\kappa < 2$ it has so-called "fat tails". This means that forecasts constructed on the basis of the GED capture extreme observations more easily (Figure 2). If the shape parameter is $\kappa = 2$, then the GED distribution reduces to the normal distribution $N(\mu, \sigma)$. The VaR limits of returns and stock prices estimated on the basis of this model for the daily time horizon can be given as:

$VaR_t = -(\mu + \sigma_t g_\alpha(\kappa)), \qquad VaR_t^P = P_{t-1}\left(1 - e^{\mu + \sigma_t g_\alpha(\kappa)}\right),$

where $g_\alpha(\kappa)$ is the quantile of order $\alpha$ of the GED distribution.
The model parameters ($\mu$, $\lambda$, $\kappa$) are determined as in the previous ones, applying the maximum likelihood method.
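A GED quantile can be obtained from SciPy's generalized normal distribution, which uses the same exponential-power kernel. Note the caveats: `gennorm`'s shape parameter plays the role of κ, its scale parameterization differs by a constant from the density written above, and its scale is not the standard deviation; `var_ged` is a hypothetical helper name for this sketch.

```python
from scipy.stats import gennorm

def var_ged(mu, sigma, shape, alpha=0.05):
    """One-day VaR with GED-type noise via scipy's generalized normal.
    `shape` < 2 gives fat tails; `shape` = 2 recovers a Gaussian shape.
    Caution: scipy's `scale` is the distribution's scale parameter,
    not the standard deviation."""
    return -gennorm.ppf(alpha, shape, loc=mu, scale=sigma)
```

The fat-tail property shows up directly: at a small α, the shape-1.2 quantile lies further out than the shape-2 (Gaussian-kernel) one, mirroring the paper's remark that GED forecasts capture extreme observations more easily.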

Simulation and Nonparametric Models
In the historical simulation method real data is used to estimate VaR, which makes it reflect the actual behavior of the market even better than the covariance method does. The main advantage of this method is that it is non-parametric. This means not only that there is no restriction resulting from the assumption of normality, but also that the estimation of parameters (such as the mean and standard deviation) on the basis of historical data is avoided (Mentel, 2011; Jajuga, 2000).
In the case of "fat tails" in the real distribution of prices, the historical simulation method gives a more reliable level of VaR. A further advantage of historical simulation is the fact that, unlike the other methods, it is easier to estimate.
The historical approach is a very intuitive method of estimating VaR. It is based on the historical return rates of the instrument (or portfolio) and their empirical distribution. It is important that the return rates are calculated over the same period as the VaR (if the investment horizon is one day, then the return rates should be determined daily). While using the historical model it is necessary to collect a long series of data. The longer the series, the more accurate the estimate, but the data then often reaches far into the past, and distant observations are less relevant than recent ones. Sometimes gathering sufficient data is not possible, and the use of the historical method is then limited. Historical simulation is also sensitive to the extreme return rates included in the distribution. As a result, the size of VaR varies discretely, and the size of the risk is often underestimated or overestimated.
The historical model assumes that the development of risk is determined by its historical behavior.
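Operationally, historical simulation reduces to taking an empirical quantile of past returns, as in the following sketch (the name `var_historical` and the linear-interpolation quantile method are assumptions of this illustration):

```python
import numpy as np

def var_historical(returns, alpha=0.05):
    """Historical-simulation VaR: the empirical alpha-quantile of past
    returns, reported as a positive loss. No distributional assumption."""
    return -np.quantile(np.asarray(returns, dtype=float), alpha)
```

The discrete, stepped behavior discussed later in the paper follows directly from this definition: the estimate only moves when observations enter or leave the neighborhood of the empirical quantile.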
Turning to the second of the considered simulation methods, namely the Monte Carlo method, it is worth emphasizing that it is based on a hypothetical stochastic model that describes the evolution of the prices of the financial instrument. The essence of stochastic processes is that it is not possible to predict the values of the process; one can only determine the probability with which a given value is reached.
In the Monte Carlo method a hypothetical model is assumed to describe the mechanism of the formation of prices (or return rates) of the financial instruments. It is often assumed that this process is a geometric Brownian motion.
Using this or another model as a basis, a large number of observations of the financial instrument's prices is generated. In this way one obtains the distribution of the return rates of the financial instrument, and determining the quantile of this distribution yields VaR directly. The process parameters are usually estimated from historical data (Jajuga, Kuziak & Papla, 1999).
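A minimal Monte Carlo VaR under the geometric Brownian motion assumption can be sketched as follows; the function name, the path count and the fixed seed are assumptions of this illustration, not prescriptions of the paper.

```python
import numpy as np

def var_monte_carlo(price, mu, sigma, alpha=0.05, n_paths=100_000, seed=0):
    """Monte Carlo VaR: simulate one-day log-returns under geometric
    Brownian motion and take the empirical alpha-quantile of the P&L."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # one-day GBM step: S_1 = S_0 * exp((mu - sigma^2/2) + sigma * z)
    pnl = price * (np.exp((mu - 0.5 * sigma**2) + sigma * z) - 1.0)
    return -np.quantile(pnl, alpha)
```

With zero drift and 2% daily volatility on a position worth 100, the simulated one-day 5% VaR is close to the analytic value of roughly 3.25.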

Semi-Parametric Concepts
The resultant of the two groups of models for estimating value at risk is the so-called group of semi-parametric methods. Their common representative is EKT.
This method is a modification of the historical method. It is based on extreme value theory, which deals with probability distributions having thick tails (Emmer, Klüppelberg & Trüstedt, 1999). This is quite an important point because, in spite of the widely used assumption that the distribution of return rates of financial instruments is normal, the extreme observations in fact exclude such an assumption. In practice, distributions which capture detached observations better than the normal distribution have greater application. Modeling the return rates by, for example, GARCH(1,1) or the Student's t distribution is much better in that regard (Mentel, 2011).
In the discussed method, in line with the definition of value at risk, the left tail of the portfolio returns distribution is examined, i.e. all the negative return values. The values of these returns are then multiplied by -1, which makes them positive, and sorted in decreasing order: $r_1 \geq r_2 \geq \ldots$. The threshold value $r_M$ is designated with the help of a QQ-plot chart, which illustrates the observations above and below the threshold value.
The estimator of the tail of the distribution is as follows:

$\hat{\bar{F}}(x) = \frac{M}{n}\left(\frac{x}{r_M}\right)^{-1/\hat{\xi}} \ \text{for } x > r_M, \qquad \hat{\xi} = \frac{1}{M}\sum_{i=1}^{M} \ln\frac{r_i}{r_M}.$

From the above one gets the value-at-risk formula:

$VaR_\alpha(T) = T^{\hat{\xi}}\, r_M \left(\frac{n\,\alpha}{M}\right)^{-\hat{\xi}},$

where $T$ is the time during which one decides to keep the portfolio.
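The tail-estimator steps above can be sketched as a Hill-type estimator in Python. This is an illustrative sketch under standard extreme-value-theory conventions: the function name `var_evt`, the inclusion of the threshold observation in the Hill average and the heavy-tail horizon scaling $T^{\hat{\xi}}$ are assumptions of this illustration, and the threshold index $m$ would in practice be chosen from a QQ-plot as the paper describes.

```python
import numpy as np

def var_evt(returns, m, alpha=0.05, horizon=1):
    """EVT (Hill-estimator) VaR sketch: negate the negative returns to get
    positive losses, sort them decreasingly, estimate the tail index above
    the m-th largest loss, and invert the tail estimator at level alpha."""
    returns = np.asarray(returns, dtype=float)
    losses = np.sort(-returns[returns < 0])[::-1]  # r_1 >= r_2 >= ...
    n = losses.size
    r_m = losses[m - 1]                            # threshold: m-th largest loss
    xi = np.mean(np.log(losses[:m] / r_m))         # Hill estimate of the tail index
    var_one = r_m * (n * alpha / m) ** (-xi)       # inverted tail estimator
    return horizon ** xi * var_one                 # heavy-tail horizon scaling
```

On synthetic Pareto-tailed losses with a known tail index, the estimator recovers the true quantile to within sampling error, which is the behavior the EKT method relies on.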

Efficiency Assessment
As indicated before, VaR is estimated using the three proposed significance levels. Such an approach allows a fuller assessment of the efficiency of the methods as the probability changes. Some methods are frequently "highly effective" only in specific circumstances, in this case under certain significance levels; as the level changes, their overall assessment may improve or deteriorate. Value at Risk is computed for twenty instruments, which eliminates the chance indications that can occur with a very small sample. The estimation was done on the basis of the daily return rates of the considered instruments.
The basic summary of the results is presented in Table 1, which reports the percentage of exceedances beyond the VaR threshold for each adopted significance level, i.e. 1%, 5% and 10%. Such distinctly different values allow a good evaluation of the considered models.
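The exceedance percentages of the kind reported in Table 1 can be computed with a simple backtest like the sketch below (the function name `exceedance_rate` is a hypothetical helper for this illustration):

```python
import numpy as np

def exceedance_rate(returns, var_forecasts):
    """Backtest: the fraction of days on which the realized loss exceeded
    the VaR forecast; a well-calibrated model keeps this close to the
    nominal significance level alpha."""
    losses = -np.asarray(returns, dtype=float)
    return float(np.mean(losses > np.asarray(var_forecasts, dtype=float)))
```

Comparing this rate against the nominal 1%, 5% and 10% levels is exactly the efficiency criterion the discussion that follows applies to the competing models.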
A rapid assessment on the basis of that summary may create the illusion that the EKT method is highly efficient. This is because its exceedance rates stay within the acceptable limits remarkably well, and it is hard to compare them with the other models, which seem to lag behind. However, as can be seen in Figures 3-5, the method is not very reliable for small significance levels; there the differences are significant enough that its use as a risk-limiting measure is unreasonable. The indications improve as α increases, but the EKT estimates of value at risk nevertheless overestimate the risk scale compared with the other models already discussed.
Quite characteristic in visual terms is the historical simulation method, whose values change less frequently. This explains the stepped shape of the curve of VaR values determined by this method: changes in the tail values occur so rarely that the estimated value often remains at a constant level over longer periods. This method is thus much less flexible than the others.
All the other parametric methods and the Monte Carlo simulation look much more favorable visually, mainly because they respond quickly to market changes and show a much weaker aversion to risk. Taking the previous observations into account, it should be noted that the highest efficiency of the value-at-risk estimates is achieved by the Risk Metrics t-Student method. It is fair to say that this method is the most universal one across the estimated probability values: in each considered variant it was more efficient than the threshold values would indicate. The Monte Carlo simulation method also performs well in this type of analysis; importantly, it is a representative of the group of non-parametric methods.

Conclusions
Drawing conclusions, it can be argued that the best estimates and the most efficient forecasts, irrespective of the significance level, are provided by the models with random noise modeled by the Student's t distribution. Furthermore, they are characterized by relatively low variation in the efficiency of the calculated forecasts.
Apart from EKT and historical simulation, the VaR forecasts follow market changes. Thanks to this, that group of methods is more flexible and more relevant to the actual market conditions.
Relatively good indications are obtained for the simulation methods regardless of the assumed significance level. The Monte Carlo simulation is the better of the two in this respect: it is characterized by a high stability of indications and is thus more resistant to extreme observations. The same cannot be said about the historical simulation, where the influence of detached observations is visible.
It may be noted that for small levels of α the models with random noise modeled by the GED distribution also give satisfactory forecasts, much better than the normal distribution. This is due to the so-called "fat tails", which allow extreme observations to be captured better. However, the situation worsens as the α level increases.
Some models, in spite of their significant efficacy, often overestimate the Value at Risk, so that some indications are much higher than the observed values; examples of such a situation are the historical simulation and the semi-parametric EKT method. Similar conclusions hold whether VaR is estimated for return rates or for the gains/losses of the analyzed values; thus the majority of the dependences remain the same regardless of how VaR is calculated.
With the increase in the significance level assumed for the analysis, the overall efficiency of the analyzed models increases. An exception may be the already mentioned Risk Metrics GED.