Bootstrap conditional distribution tests in the presence of dynamic misspecification
Introduction
In recent years, there has been growing interest in providing tests for the correct specification of conditional distributions. One reason for this is that testing for the correct conditional distribution is equivalent to jointly evaluating many conditional features of a process, including the conditional mean, variance, and symmetry. Along these lines, Bai and Ng (2001) construct tests for conditional symmetry. Just as importantly, these sorts of tests allow for the evaluation of predictive densities, thus generalizing the evaluation of point and interval forecasts.
In this paper, we show the first order validity of the block bootstrap in the context of Kolmogorov-type conditional distribution tests when there is dynamic misspecification and parameter estimation error. Our approach differs from the literature to date because we construct a bootstrap statistic that allows for dynamic misspecification under both hypotheses, rather than assuming correct dynamic specification under the null hypothesis. This difference between our approach and that taken elsewhere can be most easily motivated within the framework used by Diebold et al. (DGT) (1998a), Hong (2002) and Bai (2003). In their paper, DGT use the probability integral transform (see e.g. Rosenblatt, 1952) to show that $F(y_t|\Im_{t-1},\theta_0)$ is identically and independently distributed as a uniform random variable on $[0,1]$, where $F(\cdot|\Im_{t-1},\theta_0)$ is a parametric distribution with underlying parameter $\theta_0$, $y_t$ is the random variable of interest, and $\Im_{t-1}$ is the information set containing all “relevant” past information (see below for further discussion). They thus suggest using the difference between the empirical distribution of $F(y_t|\Im_{t-1},\hat\theta_T)$ and the 45° line as a measure of “goodness of fit”, where $\hat\theta_T$ is some estimator of $\theta_0$. This approach has been shown to be very useful for financial risk management (see e.g. Diebold et al., 1999), as well as for macroeconomic forecasting (see e.g. Diebold et al., 1998b; Clements and Smith, 2000, 2002). Likewise, Bai (2003) proposes a Kolmogorov-type test based on the comparison of the empirical distribution of $F(y_t|\Im_{t-1},\hat\theta_T)$ with the CDF of a uniform random variable on $[0,1]$. As a consequence of using estimated parameters, the limiting distribution of his test reflects the contribution of parameter estimation error and is not nuisance parameter free. To overcome this problem, Bai (2003) uses a novel approach based on a martingalization argument to construct a modified Kolmogorov test which has a nuisance parameter free limiting distribution. This test has power against violations of uniformity but not against violations of independence.
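The probability integral transform diagnostic described above can be sketched numerically. The following is a minimal illustration, not the paper's own design: the Gaussian AR(1) data-generating process, the sample size, and all variable names are our illustrative choices.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Simulate a Gaussian AR(1): here the null model coincides with the DGP
rng = np.random.default_rng(0)
n = 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

# Estimate the AR(1) parameters by OLS
x, z = y[:-1], y[1:]
a_hat = float(np.dot(x, z) / np.dot(x, x))
resid = z - a_hat * x
s_hat = float(resid.std(ddof=1))

# Probability integral transform: under correct specification the u_t are
# (up to parameter estimation error) iid U(0,1)
u = np.sort([norm_cdf(r / s_hat) for r in resid])

# Kolmogorov distance between the empirical CDF of u and the 45-degree line
grid = np.arange(1, n) / (n - 1)
ks = float(np.max(np.abs(grid - u)))
```

Under correct specification the distance `ks` shrinks at the parametric rate $T^{-1/2}$; it is the empirical counterpart of the "goodness of fit" measure suggested by DGT.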
Two features differentiate our approach from that taken in the above papers. First, we assume strict stationarity, while they do not. Second, we allow for dynamic misspecification under the null hypothesis, while they do not. While our approach is clearly less general because of the first feature, the second feature allows us to obtain asymptotically valid critical values even when the conditioning information set does not contain all of the relevant past history. More precisely, we are interested in testing for correct specification given a particular information set, which may or may not contain all of the relevant past information. This is relevant when a Kolmogorov test is constructed, as one is generally faced with the problem of defining $\Im_{t-1}$. If not enough history is included, then there may be dynamic misspecification. Additionally, finding out how much information (e.g. how many lags) to include may involve pre-testing, hence leading to a form of sequential test bias. By allowing for dynamic misspecification, we do not require such pre-testing. Another key feature of our approach concerns the fact that the limiting distribution of Kolmogorov-type tests is affected by dynamic misspecification: critical values derived under correct specification given $\Im_{t-1}$ are not in general valid in the case of correct specification given a subset of $\Im_{t-1}$. Consider the following example. Assume that we are interested in testing whether the conditional distribution of $y_t$ given $y_{t-1}$ is $N(\alpha_1 y_{t-1},\sigma_1^2)$. Suppose also that in actual fact the “relevant” information set $\Im_{t-1}$ includes both $y_{t-1}$ and $y_{t-2}$, so that the true conditional model is $y_t|\Im_{t-1}\sim N(\beta_1 y_{t-1}+\beta_2 y_{t-2},\sigma_2^2)$, where $\beta_1$ in general differs from $\alpha_1$. In this case, we have correct specification with respect to the information contained in $y_{t-1}$; but we have dynamic misspecification with respect to $\Im_{t-1}$, i.e. with respect to $y_{t-1}$ and $y_{t-2}$. Even without taking account of parameter estimation error, the critical values obtained assuming correct dynamic specification are invalid, thus leading to invalid inference.
Stated differently, tests that are designed to have power against both uniformity and independence violations (i.e. tests that assume correct dynamic specification under $H_0$) will reject in this example; such an inference is incorrect, at least in the sense that the “normality” assumption is not false. In summary, if one is interested in the particular problem of testing for correct specification for a given information set, then our approach is appropriate.
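The example above can be checked by simulation. The sketch below (with hypothetical coefficients of our choosing) generates Gaussian AR(2) data and fits a dynamically misspecified AR(1) model: because the data are jointly Gaussian, the conditional distribution given $y_{t-1}$ alone is still normal, so the probability integral transform values remain marginally uniform, yet they are serially dependent.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Gaussian AR(2) data: conditional on (y_{t-1}, y_{t-2}) the distribution is
# normal, and by joint normality so is the distribution given y_{t-1} alone,
# though with different coefficients and a larger variance
rng = np.random.default_rng(0)
n = 5000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.4 * y[t - 1] + 0.4 * y[t - 2] + rng.standard_normal()

# Fit the (dynamically misspecified) AR(1) model by OLS
x, z = y[:-1], y[1:]
b_hat = float(np.dot(x, z) / np.dot(x, x))
resid = z - b_hat * x
u = np.array([norm_cdf(r / resid.std(ddof=1)) for r in resid])

# Marginal uniformity still holds: small Kolmogorov distance ...
u_sorted = np.sort(u)
grid = np.arange(1, len(u) + 1) / len(u)
ks = float(np.max(np.abs(grid - u_sorted)))

# ... but the PIT values are serially dependent: nonzero lag-1 correlation
acf1 = float(np.corrcoef(u[1:], u[:-1])[0, 1])
```

The serial dependence in `u` is exactly what a test assuming correct dynamic specification under the null would (wrongly, for our purposes) penalize.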
We consider two Kolmogorov-type test statistics; one is the CK test of Andrews (1997), and the other is based on the arguments presented in DGT (1998a), and is similar to the statistic proposed by Bai (2003). The limiting distribution of both tests is a Gaussian process with a covariance kernel that reflects dynamic misspecification and parameter estimation error. Therefore, critical values are data dependent and cannot be tabulated. In addition to the generalized spectrum test mentioned above, Hong (2002) also proposes a test for uniformity that is robust to nonindependence, and that is based on the comparison of a kernel density estimator and the uniform density. His test has a normal limiting distribution, but converges at a nonparametric rate. The tests suggested here instead converge at a parametric rate and do not require the choice of a bandwidth, although nuisance parameter free limiting distributions do not obtain. With regard to the CK test, for the case of nonvanishing parameter estimation error and independent observations, Andrews (1997) suggests a parametric bootstrap based on drawing observations from the distribution implied under the null, evaluated at the estimated parameters, conditional on the observed covariates. If our null is correct dynamic specification (i.e. if the conditioning set coincides with $\Im_{t-1}$), then we can still use Andrews' parametric bootstrap and draw observations from the null conditional distribution. However, if instead the conditioning set is a strict subset of $\Im_{t-1}$, then the long run variance of the resampled statistic does not properly mimic the long run variance of the original statistic, thus leading to invalid asymptotic critical values. In the case of dependent observations and dynamic misspecification, but no parameter estimation error, we could almost straightforwardly apply an empirical process version of either the block bootstrap (see e.g. Bühlmann, 1994; Naik-Nimbalkar and Rajarshi, 1994; Peligrad, 1998) or the stationary bootstrap of Politis and Romano (1994a, 1994b), as the only difference is that we are evaluating conditional rather than marginal distributions. In the present context, though, we require a bootstrap that is valid for dependent observations, possible dynamic misspecification under both hypotheses, and nonvanishing parameter estimation error. One possibility in this case is to use the conditional p-value approach of Corradi and Swanson (2002), which extends Inoue's (2001) approach to the case of parameter estimation error. A drawback of this approach is that the simulated critical values under the alternative grow at rate $l$ (where $l$ plays the same role as the block length in the block bootstrap), and so the finite sample power can be somewhat low with small and medium size samples. Another possibility, which we examine in this paper, is the use of an extension of the empirical process version of the block bootstrap to the case of nonvanishing parameter estimation error.
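The mechanics of block bootstrap critical values for a statistic of this kind can be sketched as follows. This is only an illustrative skeleton under assumptions of our own (a Gaussian AR(2) DGP, an AR(1) null model, arbitrary block length and replication count); in particular it omits the recentering and adjustment terms that the paper's formal first-order validity argument requires.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def fit_ar1(y):
    # OLS fit of an AR(1) model; returns slope and residual std. dev.
    x, z = y[:-1], y[1:]
    a = float(np.dot(x, z) / np.dot(x, x))
    return a, float((z - a * x).std(ddof=1))

def ks_stat(y):
    # Kolmogorov distance between the PIT empirical CDF and the uniform CDF
    a, s = fit_ar1(y)
    u = np.sort([norm_cdf(v / s) for v in (y[1:] - a * y[:-1])])
    grid = np.arange(1, len(u) + 1) / len(u)
    return float(np.max(np.abs(grid - u)))

def block_bootstrap_crit(y, block_len=25, n_boot=199, level=0.90, seed=1):
    # Resample overlapping blocks of the observed series: blocks preserve the
    # (possibly misspecified) dependence structure, and the model is
    # re-estimated on each pseudo-series so that parameter estimation error
    # is mimicked as well.
    rng = np.random.default_rng(seed)
    n = len(y)
    starts = np.arange(n - block_len + 1)
    k = -(-n // block_len)  # ceil(n / block_len)
    stats = []
    for _ in range(n_boot):
        chosen = rng.choice(starts, size=k, replace=True)
        idx = np.concatenate([np.arange(s, s + block_len) for s in chosen])[:n]
        stats.append(ks_stat(y[idx]))
    return float(np.quantile(stats, level))

# Gaussian AR(2) data with an AR(1) null: correct specification given
# y_{t-1} alone, but dynamic misspecification
rng = np.random.default_rng(0)
n = 1000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.4 * y[t - 1] + 0.4 * y[t - 2] + rng.standard_normal()

stat = ks_stat(y)
crit = block_bootstrap_crit(y)
```

Because each pseudo-series is built from blocks of the raw data rather than drawn from the null model, the bootstrap distribution reflects whatever serial dependence the data actually exhibit, which is the key difference from Andrews' parametric bootstrap.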
The rest of this paper is organized as follows. In Section 2 we describe our setup, and examine the asymptotic behavior of the two statistics. In Section 3 we show the first order validity of the block bootstrap in our context. The fourth section contains the results of a small Monte Carlo study, and Section 5 concludes. All proofs are contained in the appendix.
Setup and asymptotic behavior of the tests
Before stating the hypotheses and defining the test statistics, it is worthwhile to sketch some examples of conditional distributions which are correctly specified for a given information set, but misspecified for a larger information set.
Assume that the vector $y=(y_t,\ldots,y_{t-k})'$ is jointly elliptically distributed; then the density can be expressed as $f(y)=|\Omega|^{-1/2}\,g\big((y-\mu)'\Omega^{-1}(y-\mu)\big)$ (see e.g. Ingersoll, 1987, Chapter 4, Appendix B), where $\mu$ is the mean vector and $\Omega$ is a positive definite matrix,
Validity of the block bootstrap
Given that the limiting distributions of the two statistics are not nuisance parameter free, our approach is to construct bootstrap critical values for the tests. In order to show the first order validity of the bootstrap, we shall obtain the limiting distribution of the bootstrapped statistic and show that it coincides with the limiting distribution of the actual statistic under the null hypothesis. Then, a test with correct asymptotic size and unit asymptotic power can be obtained by comparing the value of the
Monte Carlo results
In this section we report the results of a small Monte Carlo study of the two test statistics. Data are generated according to the following processes:
Size Experiments
Size 1: Generate $y_t$ as a Gaussian AR(1) process. Estimate an AR(1) model.
Size 2: Generate $y_t$ as a Gaussian AR(2) process, so that the conditional distribution of $y_t$ given $y_{t-1}$ alone is still normal. Estimate an AR(1) model.
The null hypothesis is that the conditional distribution of $y_t$ given $y_{t-1}$ is normal, while the
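The two size designs can be sketched as below. Note that the specific parameterizations (AR coefficients, sample size) are our own reconstruction for illustration, not necessarily those used in the study; both designs satisfy the null of correct specification given $y_{t-1}$, while Size 2 is dynamically misspecified.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def pit_ks(y):
    # AR(1) OLS fit, PIT, and Kolmogorov distance from the uniform CDF
    x, z = y[:-1], y[1:]
    a = float(np.dot(x, z) / np.dot(x, x))
    r = z - a * x
    u = np.sort([norm_cdf(v / r.std(ddof=1)) for v in r])
    grid = np.arange(1, len(u) + 1) / len(u)
    return float(np.max(np.abs(grid - u)))

rng = np.random.default_rng(0)
n = 3000

# Size 1 (hypothetical parameterization): Gaussian AR(1) data, AR(1) model
y1 = np.zeros(n)
for t in range(1, n):
    y1[t] = 0.5 * y1[t - 1] + rng.standard_normal()

# Size 2 (hypothetical parameterization): Gaussian AR(2) data, AR(1) model;
# correct specification given y_{t-1} alone, but dynamic misspecification
y2 = np.zeros(n)
for t in range(2, n):
    y2[t] = 0.4 * y2[t - 1] + 0.4 * y2[t - 2] + rng.standard_normal()

ks1, ks2 = pit_ks(y1), pit_ks(y2)
```

In both designs the Kolmogorov distance is small, so a test whose critical values allow for dynamic misspecification should not reject in either case.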
Concluding remarks
We propose an extension of two conditional Kolmogorov-type tests to the case of dynamic misspecification under both the null and alternative hypotheses. We additionally outline conditions under which a version of the block bootstrap can be used to construct valid critical values for the tests, in the context of parameter estimation error and dynamic misspecification. Our approach is useful because critical values derived under correct (dynamic) specification given $\Im_{t-1}$ are not in general valid
Acknowledgements
We are grateful to the organizers (Jean-Marie Dufour and Benoit Perron), as well as the participants of the 2001 C.R.D.E. Conference on Resampling Methods in Econometrics, Université de Montréal for providing many useful comments and suggestions. Additionally, we would like to thank two anonymous referees, Walter Distaso, Marcelo Fernandes, Sílvia Gonçalves, Atsushi Inoue, Lutz Kilian, Shinichi Sakata, Paolo Zaffaroni, and seminar participants at Brunel University, Cambridge University,
References
- Andrews, D.W.K. (1992). Generic uniform convergence. Econometric Theory.
- Andrews, D.W.K. (1993). An introduction to econometric applications of empirical process theory for dependent random variables. Econometric Reviews.
- Andrews, D.W.K. (1997). A conditional Kolmogorov test. Econometrica.
- Andrews, D.W.K. (2002). Higher-order improvements of a computationally attractive k-step bootstrap for extremum estimators. Econometrica.
- Andrews, D.W.K., Buchinsky, M. (2000). A three step method for choosing the number of bootstrap replications. Econometrica.
- Bai, J. (2003). Testing parametric conditional distributions of dynamic models. Review of Economics and Statistics.
- Bai, J., Ng, S. (2001). A consistent test for conditional symmetry in time series models. Journal of Econometrics.
- Bühlmann, P. (1994). The blockwise bootstrap for general empirical processes of stationary sequences. Stochastic Processes and their Applications.
- Cambanis, S., Huang, S., Simons, G. (1981). On the theory of elliptically contoured distributions. Journal of Multivariate Analysis.
- Carlstein, E. (1986). The use of subseries methods for estimating the variance of a general statistic from a stationary time series. Annals of Statistics.
- Chao, J., Corradi, V., Swanson, N.R. (2001). An out of sample test for Granger causality. Macroeconomic Dynamics.
- Christoffersen, P.F. (1998). Evaluating interval forecasts. International Economic Review.
- Clark, T.E., McCracken, M.W. (2001). Tests of equal forecast accuracy and encompassing for nested models. Journal of Econometrics.
- Clements, M.P., Smith, J. (2000). Evaluating the forecast densities of linear and nonlinear models: applications to output growth and unemployment. Journal of Forecasting.
- Clements, M.P., Smith, J. (2002). Evaluating multivariate forecast densities: a comparison of two approaches. International Journal of Forecasting.
- Consistent specification testing for conditional moment restrictions. Economics Letters (2001).
- Corradi, V., Swanson, N.R. (2002). A consistent test for out of sample nonlinear predictive ability. Journal of Econometrics.
- Corradi, V., Swanson, N.R., Olivetti, C. (2001). Predictive ability with cointegrated variables. Journal of Econometrics.
- Davidson, R., MacKinnon, J.G. (1999). Bootstrap testing in nonlinear models. International Economic Review.
- Davidson, R., MacKinnon, J.G. (2000). Bootstrap tests: how many bootstraps? Econometric Reviews.
- Diebold, F.X., Mariano, R.S. (1995). Comparing predictive accuracy. Journal of Business and Economic Statistics.
- McCracken, M.W. (2000). Robust out of sample inference. Journal of Econometrics.