A goodness-of-fit test for ARCH(∞) models
Introduction
We shall consider an observable process x_t satisfying

x_t = ε_t σ_t,   (1.1)

where {ε_t} is an independent identically distributed (iid) sequence of random variables with zero mean and unit variance, and σ_t² = E(x_t² | F_{t−1}), where F_{t−1} is the sigma-algebra generated by {x_s, s < t}. The existing literature deals with parametric modelling of the conditional heteroskedasticity σ_t². One very general model is

σ_t² = α₀(θ) + Σ_{j=1}^∞ α_j(θ) x_{t−j}²,   (1.2)

where θ is a finite-dimensional parameter vector.
The model (1.1)–(1.2) was introduced by Robinson (1991) as a class of alternatives for testing serial independence of x_t, and it is a generalization of Bollerslev's (1986) GARCH(p, q) model, defined as

σ_t² = α₀ + Σ_{j=1}^q α_j x_{t−j}² + Σ_{j=1}^p β_j σ_{t−j}².   (1.3)

The latter model is in turn a generalization of Engle's (1982) original ARCH model. However, contrary to the previous models, (1.1)–(1.2) allows for long memory behaviour in the volatility, since many fractional models are nested within (1.1)–(1.2). Among others, we can cite Ding and Granger's (1996) long memory GARCH model and the fractionally integrated GARCH (FIGARCH) model of Baillie et al. (1996); see also Robinson and Zaffaroni (2006), henceforth abbreviated as RZ, who discuss these and other parameterizations of interest covered by (1.1)–(1.2).
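The GARCH(1,1) special case of (1.3) is straightforward to simulate with a short recursion. The following minimal Python sketch is purely illustrative: the function name and parameter values are ours, not the paper's, and the condition α₁ + β₁ < 1 is imposed so that the process is covariance stationary.

```python
import math
import random

def simulate_garch11(n, alpha0=0.05, alpha1=0.10, beta1=0.85, burn=500, seed=42):
    """Simulate x_t = eps_t * sigma_t with GARCH(1,1) conditional variance
    sigma_t^2 = alpha0 + alpha1 * x_{t-1}^2 + beta1 * sigma_{t-1}^2.
    Gaussian innovations; a burn-in is discarded so the start-up value
    (the unconditional variance) has negligible effect."""
    rng = random.Random(seed)
    sig2 = alpha0 / (1.0 - alpha1 - beta1)   # unconditional variance
    x_prev = 0.0
    out = []
    for t in range(n + burn):
        sig2 = alpha0 + alpha1 * x_prev ** 2 + beta1 * sig2
        x_prev = rng.gauss(0.0, 1.0) * math.sqrt(sig2)
        if t >= burn:
            out.append(x_prev)
    return out

x = simulate_garch11(1024)
```

With these parameter values the unconditional variance is α₀/(1 − α₁ − β₁) = 1, so the sample variance of a simulated path should be close to one.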
However, the previous class of models is only one possibility among many alternatives, some of them nonnested, considered in the theoretical and empirical literature on volatility modelling. For instance, one can consider ARCH-type asymmetric models where, unlike in (1.2), the conditional variance is an asymmetric function of past observations. Some examples are Engle's (1990) asymmetric GARCH, the exponential GARCH model of Nelson (1991), the linear ARCH model of Robinson (1991) and the power GARCH model of Ding and Granger (1996). See also the work of Linton and Mammen (2005) and Glosten et al. (1993). Another class of models is the stochastic volatility (SV) models introduced by Taylor (1986) and explored by Harvey et al. (1994). The SV model is a popular approach for modelling dynamic conditional heteroskedasticity, particularly after the work of Hull and White (1987), as a way to approximate continuous time diffusions underlying option pricing models featuring changing volatility. See also the surveys by Ghysels et al. (1995) and Shephard (2004).
The previous discussion suggests that when testing the adequacy of (1.2), a sensible way to proceed is to leave the alternative model unspecified, that is, to provide a goodness-of-fit test. This type of test, also known as an omnibus test, traces back to Kolmogorov's (1933) pioneering work on testing for a specific probability distribution function, and to Grenander and Rosenblatt (1957, Chapter 6) on testing the hypothesis of white noise. More recently, such tests have attracted growing interest; see, for example, Stute (1997) or Delgado et al. (2005).
In the framework of model (1.3), there are several rival procedures to test the adequacy of σ_t²(θ). Among them is the Box–Pierce–Ljung portmanteau test, see Ljung and Box (1978), which resembles Neyman's (1937) smooth test. In our context, it is defined as

Q_m = n(n + 2) Σ_{j=1}^m ρ̂_j² / (n − j),

where ρ̂_j is an estimator of the jth correlation coefficient of the squared observations and m is a parameter to be chosen by the practitioner. The portmanteau test has been relatively well explored, see Li and Mak (1994) and Berkes et al. (2003a) for GARCH models with finite p and q, although its validity for the general models considered in this paper is an open question. This test is regarded as a compromise between omnibus and directional tests. On the other hand, as discussed in Delgado et al. (2005), our omnibus test, described in (2.7) below, as well as Q_m, can be obtained as particular functionals of (2.5) given in Section 2. However, contrary to the goodness-of-fit test that we propose, the power of Q_m depends very much on the choice of m. For instance, it has only trivial power in the direction of local alternatives converging to the null at the parametric rate n^{−1/2}, although it is able to detect local alternatives converging to the null at a slower rate depending on m. Therefore, size accuracy requires a fairly large m, whereas a smaller m may be desirable for power improvements; that is, the choice of m induces a trade-off between size and power. Finally, there remains the issue of how to choose m in empirical applications.
On the other hand, contrary to the portmanteau test, which has a standard asymptotic distribution, the distribution of our test is nonstandard and model-dependent, and its critical values are difficult, if at all possible, to compute. The paper therefore provides valid bootstrap-based goodness-of-fit tests for model (1.1), our results being valid both for (short memory) GARCH and for (long memory) FIGARCH models, among many others. Furthermore, we will not require finite second moments for the observable process x_t. Finally, it should be mentioned that the results of the paper carry over if (1.1) is suitably modified in terms of an additional finite-dimensional parameter and a stationary strong mixing sequence. However, to simplify the already lengthy arguments of our results, we have decided to use (1.1)–(1.2) instead.
The remainder of the paper is organized as follows. Section 2 describes the test and its properties. Because of its nonstandard limiting distribution, Section 3 presents a bootstrap procedure and shows its asymptotic validity. A small Monte Carlo experiment examining the performance of the test in small samples is given in Section 4, whereas Section 5 gives the proofs of the main results of Sections 2 and 3, which employ a series of lemmas collected in Section 6.
Section snippets
The test
We will describe and examine a test for the adequacy of model (1.1)–(1.2). To that end, consider the family of conditional variances {σ_t²(θ): θ ∈ Θ}, where Θ is a compact parameter space. The null hypothesis is that σ_t² in (1.1) belongs to this family for some value of the parameter, that is,

H₀: σ_t² = σ_t²(θ₀) for some θ₀ ∈ Θ.

The alternative hypothesis is the negation of H₀.
Before we describe the test we need some preliminaries. Given a stretch of data,
The bootstrap test
The basic idea of the bootstrap is, given a stretch of data, to treat the data as if it were the true population and to carry out Monte Carlo experiments in which pseudo-data are drawn from it. Since Efron's (1979) work on the bootstrap, an immense effort has been devoted to its development, for two main reasons. First, bootstrap methods are capable of approximating the finite sample distribution of statistics better than those based on their asymptotic
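For conditionally heteroskedastic models, the resampling step is typically residual-based. The sketch below illustrates the idea for a fitted GARCH(1,1): compute standardized residuals, recentre and rescale them, draw from them with replacement, and rebuild a pseudo-series through the fitted variance recursion. All names and parameter values are our own illustration; in practice the parameters would come from a QML fit, which is not shown.

```python
import random

def std_residuals(x, alpha0, alpha1, beta1):
    """Standardized residuals eps_t = x_t / sigma_t under a fitted GARCH(1,1).
    The recursion starts at the unconditional variance with x_0 = 0."""
    sig2 = alpha0 / (1.0 - alpha1 - beta1)
    eps, xp = [], 0.0
    for xt in x:
        sig2 = alpha0 + alpha1 * xp * xp + beta1 * sig2
        eps.append(xt / sig2 ** 0.5)
        xp = xt
    return eps

def bootstrap_sample(x, alpha0, alpha1, beta1, rng):
    """One residual-bootstrap pseudo-series: draw eps* iid (with replacement)
    from the recentred, rescaled residuals and rebuild x*_t recursively
    through the fitted conditional variance equation."""
    eps = std_residuals(x, alpha0, alpha1, beta1)
    m = sum(eps) / len(eps)
    s = (sum((e - m) ** 2 for e in eps) / len(eps)) ** 0.5
    eps = [(e - m) / s for e in eps]      # impose mean 0, variance 1
    sig2 = alpha0 / (1.0 - alpha1 - beta1)
    xs, xp = [], 0.0
    for _ in range(len(x)):
        sig2 = alpha0 + alpha1 * xp * xp + beta1 * sig2
        xt = rng.choice(eps) * sig2 ** 0.5
        xs.append(xt)
        xp = xt
    return xs
```

Repeating the draw B times and recomputing the test statistic on each pseudo-series yields the bootstrap distribution against which the observed statistic is compared.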
Monte-Carlo experiment
In order to investigate how well the bootstrap test given in (3.4) performs in finite samples, a small Monte Carlo experiment was carried out. Throughout, we employed 1,000 replications with several sample sizes, including 512 and 1024. To calculate the bootstrap statistics, for all the models and sample sizes considered, 999 bootstrap samples were employed.
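Given the observed statistic and its 999 bootstrap replicates, the rejection rule in each Monte Carlo replication reduces to a p-value computation. A minimal sketch (the function name is ours, not the paper's):

```python
def bootstrap_pvalue(t_obs, t_boot):
    """Bootstrap p-value with B bootstrap statistics: the (slightly
    conservative) proportion (1 + #{T*_b >= T}) / (B + 1). With B = 999,
    a 5% level test rejects when the p-value falls below 0.05."""
    B = len(t_boot)
    return (1.0 + sum(1 for t in t_boot if t >= t_obs)) / (B + 1.0)

# example: observed statistic 950 against bootstrap draws 1, 2, ..., 999
p = bootstrap_pvalue(950.0, [float(b) for b in range(1, 1000)])  # 0.051
```

The empirical size is then the fraction of the 1,000 replications in which the p-value falls below the nominal level.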
To examine the empirical size of the test, we have considered the GARCH model
Proofs
In several places in the proofs we make use of an expansion obtained by recursive substitution. We also introduce some notation used in this and the next section, together with its bootstrap counterparts.
Lemmas
Lemma 6.1. Assuming C1–C7, (6.1) holds for all admissible parameter values. Proof. We begin with part (a). First, a Taylor expansion around an intermediate point between the estimator and the true parameter value shows that the left side of part (a) of (6.1) is suitably bounded. Using this bound and proceeding as in the proof of Lemma 6 of RZ, for
Acknowledgments
The first author's research was supported by ESRC Grant R000239936. We also thank two referees for their comments on a previous version of the paper. Of course, all remaining errors are our sole responsibility.
References

- An, H.-Z., Chen, Z.-G., Hannan, E.J. (1983). The maximum of the periodogram. Journal of Multivariate Analysis.
- Baillie, R.T., Bollerslev, T., Mikkelsen, H.O. (1996). Fractionally integrated generalized autoregressive conditional heteroskedasticity. Journal of Econometrics.
- Berkes, I., Horváth, L., Kokoszka, P. (2003a). Asymptotics for GARCH squared residual correlations. Econometric Theory.
- Berkes, I., Horváth, L., Kokoszka, P. (2003b). GARCH processes: structure and estimation. Bernoulli.
- Billingsley, P. (1968). Convergence of Probability Measures. Wiley.
- Bollerslev, T. (1986). Generalized autoregressive conditional heteroscedasticity. Journal of Econometrics.
- Bougerol, P., Picard, N. (1992). Stationarity of GARCH processes and some nonnegative time series. Journal of Econometrics.
- Brillinger, D.R. (1981). Time Series: Data Analysis and Theory. Holden-Day.
- Delgado, M.A., Hidalgo, J., Velasco, C. (2005). Distribution free goodness-of-fit tests for linear processes. Annals of Statistics.
- Ding, Z., Granger, C.W.J. (1996). Modeling volatility persistence of speculative returns: a new approach. Journal of Econometrics.
- Efron, B. (1979). Bootstrap methods: another look at the jackknife. Annals of Statistics.
- Engle, R.F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica.
- Engle, R.F. (1990). Discussion: stock market volatility and the crash of '87. Review of Financial Studies.
- Giraitis, L., Kokoszka, P., Leipus, R. (2000). Stationary ARCH models: dependence structure and central limit theorem. Econometric Theory.
- Robinson, P.M. (1991). Testing for strong serial correlation and dynamic conditional heteroskedasticity in multiple regression. Journal of Econometrics.
- (2006). Bootstrap specification tests for linear covariance stationary processes. Journal of Econometrics.