Open Access. Published by De Gruyter, February 24, 2020. Licensed under CC BY 4.0.

Macroeconomic uncertainty and forecasting macroeconomic aggregates

Magnus Reif

Abstract

Can information on macroeconomic uncertainty improve the forecast accuracy for key macroeconomic time series for the US? Since previous studies have demonstrated that the link between the real economy and uncertainty is subject to nonlinearities, I assess the predictive power of macroeconomic uncertainty in both linear and nonlinear Bayesian VARs. For the latter, I use a threshold VAR that allows for regime-dependent dynamics conditional on the level of the uncertainty measure. I find that the predictive power of macroeconomic uncertainty in the linear VAR is negligible. In contrast, using information on macroeconomic uncertainty in a threshold VAR can significantly improve the accuracy of short-term point and density forecasts, especially in the presence of high uncertainty.

JEL Classification: C11; C53; C55; E32

1 Introduction

Since the seminal contribution of Bloom (2009), the contractive effects of uncertainty shocks on the real economy are uncontroversial.[1] Moreover, recent studies show that uncertainty shocks have nonlinear effects. On the one hand, uncertainty shocks induce stronger effects during recessionary episodes or in times of financial distress (see, for instance, Alessandri and Mumtaz 2019; Caggiano, Castelnuovo, and Groshenny 2014; Ferrara and Guérin 2018). On the other hand, the magnitude of the variables’ response to an uncertainty shock depends on the shock’s sign (Foerster 2014; Jones and Enders 2016). While a great deal of the literature focuses on structural analysis of fluctuations in uncertainty, evidence regarding the impact of uncertainty on forecast performance is rather sparse.

This paper explores the link between economic uncertainty and forecast performance, making two contributions to the literature. First, I assess the predictive power of uncertainty in a linear model. I derive the baseline results using the large Bayesian VAR (BVAR) approach introduced by Bańbura, Giannone, and Reichlin (2010).[2] The impact of economic uncertainty on forecast performance is assessed by adding a recursively estimated version of the macroeconomic uncertainty index of Jurado, Ludvigson, and Ng (2015) to a medium-sized dataset of macroeconomic indicators for the US. Second, I investigate whether allowing for nonlinearity improves forecast accuracy relative to standard, linear models. To this end, I employ a threshold BVAR (T-VAR) that accounts for nonlinear relations between macroeconomic uncertainty and the real economy. This model makes it possible to link the nonlinearity directly to the threshold variable, which in my application is the uncertainty index mentioned above.[3] Moreover, the T-VAR accommodates two distinct regimes, which can be interpreted as high and low uncertainty regimes. Since these regimes can differ in all of the model’s parameters, the model allows for regime-dependent shock propagation processes and heteroscedasticity. As shown by several studies (for example, Alessandri and Mumtaz 2017; Barnett, Mumtaz, and Theodoridis 2014; Clark and Ravazzolo 2015), although not in the context of uncertainty, both features can significantly increase forecast accuracy. To estimate the threshold VAR, I combine the Gibbs sampler provided by Chen and Lee (1995) with the large Bayesian VAR framework mentioned above.

First, I perform an in-sample analysis based on quarterly US data from 1960 to 2017 to demonstrate that the T-VAR yields reasonable full-sample estimates. I illustrate that the estimated high uncertainty regimes are similar to, but do not fully coincide with, the recession dates provided by the NBER business cycle dating committee. Moreover, I assess whether the T-VAR captures nonlinearity with respect to the effects of sudden hikes in uncertainty and whether this nonlinearity is driven mainly by regime-dependent shock sizes or by regime-dependent shock propagation. I compute generalized impulse responses à la Koop, Pesaran, and Potter (1996) that allow for nonlinear shock propagation and find that uncertainty shocks have negative effects on the real economy and that these effects are nonlinear, depending on the level of the uncertainty proxy. In particular, the effects of an uncertainty shock on labor market variables are much stronger during episodes of high uncertainty. The peak response of the unemployment rate, for instance, is roughly twice as large in times of high uncertainty as in normal times.

Second, I conduct a rigorous out-of-sample forecast exercise using a recursive estimation scheme that mimics the information set of the actual forecaster at each point in time. I evaluate the forecasts with respect to both point forecasts and predictive densities. The point forecasts are evaluated in terms of mean forecast errors and root mean squared forecast errors. The predictive densities are evaluated using log predictive scores and continuous ranked probability scores.

My main results are that information on economic uncertainty can improve forecast accuracy and that density forecasts benefit more from this information than point forecasts. Concerning the point forecasts, I find that adding the uncertainty proxy to the otherwise standard linear BVAR yields only marginal improvements. Although, in most cases, the T-VAR is outperformed by the linear specifications, interest and unemployment rate forecasts can be significantly improved. With regard to the predictive densities, the linear models are dominated by the T-VAR. Indeed, in most cases, each model overestimates the true uncertainty of the data, as indicated by overly wide predictive densities. Controlling for uncertainty regimes, though, reduces this bias and provides a better description of the data. This suggests that accounting for state-dependent disturbances is more important for forecasting purposes than state-dependent shock propagation. Finally, I document substantial variation of the models’ predictive abilities over time and show that the gains in forecast accuracy are particularly large when uncertainty is high. Thus, the T-VAR can serve as a complement to existing approaches for obtaining a better picture of the actual uncertainty surrounding the point estimate in times of high uncertainty.

This paper adds to the literature investigating the predictive power of uncertainty indicators. Balcilar, Gupta, and Segnon (2016) and Pierdzioch and Gupta (2017) focus on forecasting recessions and show that information on uncertainty improves forecast accuracy. Bekiros, Gupta, and Paccagnini (2015) and Segnon et al. (2018) employ bivariate models including information on uncertainty and suggest that uncertainty can be helpful in predicting GNP growth and oil prices even in small-scale models. None of these contributions considers a large set of indicators that an applied forecaster would probably use, or directly allows for nonlinearity with respect to the uncertainty measure.

The paper is structured as follows. Section 2 describes the Bayesian VAR as well as the Bayesian threshold VAR and outlines the estimation methodology. Section 3 describes the dataset and the forecast methodology. Section 4 presents the in-sample results. Section 5 discusses the results of the forecast experiment. Section 6 concludes.

2 The models

In this section, I first describe a standard Bayesian VAR model and then outline the Bayesian threshold VAR.

2.1 The Bayesian VAR

The VAR(p) is specified as follows:

(1) $y_t = c + \sum_{j=1}^{p} A_j y_{t-j} + \varepsilon_t, \quad \text{with } \varepsilon_t \sim N(0, \Sigma),$

where $y_t$ and $c$ are $n \times 1$ vectors of endogenous variables and intercept terms, respectively. $\varepsilon_t$ denotes the vector of normally distributed residuals. The $A_j$ are $n \times n$ matrices of coefficients, with $j = 1, \ldots, p$. I employ Bayesian estimation techniques to estimate the model. Specifically, I use the Minnesota prior developed by Litterman (1986), which assumes that every economic time series can be sufficiently described by a random walk with drift. Thus, the prior shrinks all coefficients on the main diagonal of $A_1$ towards one while the remaining coefficients are shrunk towards zero. Moreover, the classical Minnesota prior assumes a diagonal covariance matrix of the residuals. In the following, I use the generalized version of the classical Minnesota prior provided by Kadiyala and Karlsson (1997), which allows for a non-diagonal residual covariance matrix while retaining the idea of the Minnesota prior described above. As demonstrated by Bańbura, Giannone, and Reichlin (2010), using a normal-inverse Wishart prior generates accurate forecasts despite the additional parameters to be estimated. In addition, I follow Doan, Litterman, and Sims (1984) as well as Sims (1993) by implementing the “sum-of-coefficients” and “co-persistence” priors. The former accounts for unit roots in the data; the latter introduces beliefs on cointegration relations among the series. Each prior is implemented using dummy observations. For details regarding the prior implementation and the estimation procedure, see Appendix A.

2.2 The Bayesian threshold VAR

The threshold VAR is defined as follows:

(2) $y_t = \left(c_1 + \sum_{i=1}^{p} A_{1,i} y_{t-i} + \Omega_1^{0.5}\varepsilon_t\right) S_t + \left(c_2 + \sum_{i=1}^{p} A_{2,i} y_{t-i} + \Omega_2^{0.5}\varepsilon_t\right)(1 - S_t),$

(3) with: $S_t = \begin{cases} 1, & \text{if } r_{t-d} \le r^{*} \\ 0, & \text{if } r_{t-d} > r^{*}, \end{cases}$

(4) and: $\varepsilon_t \sim N(0, I),$

where $y_t$ is the vector of endogenous variables. Contrary to the linear VAR in (1), the intercept terms $c_j$, the matrices of coefficients $A_{j,i}$, and the residual covariance matrices $\Omega_j$ are state-dependent, with $j \in \{1, 2\}$. The regime prevailing in period t depends on whether the level of the threshold variable, $r$, in period $t-d$ is below or above a latent threshold level, $r^{*}$. This mechanism allows for different model dynamics depending on the respective regime. As in the previous section, I use natural conjugate priors for the VAR coefficients and implement the priors using dummy observations. While it is, in general, possible to apply different prior information for the two regimes, I follow Alessandri and Mumtaz (2017) and impose the same prior beliefs for both regimes. On the one hand, this specification shrinks both regimes towards the same model dynamics. On the other hand, it lets the data decide how important and pronounced differences in model dynamics are. I follow Chen and Lee (1995) in specifying the priors for the threshold level and the delay coefficient:

(5) $p(d) = \frac{1}{d_{\max}} \quad \text{and} \quad p(r^{*}) \sim N(\bar{r}, \nu),$

where $d_{\max} = 8$ denotes the maximal delay and $\bar{r}$ is the sample average of $r$. Given the variability of the macroeconomic uncertainty index (see Figure 1 in Section 4), I set $\nu = 1$ to keep the prior for the threshold level loose. To simulate the posterior distribution of the model’s parameters, I apply the Gibbs sampler introduced by Chen and Lee (1995), which works as follows:

  1. At iteration k = 1, set starting values $d^{k} = d_0$ and $r^{*,k} = r_0$.

  2. Draw $\Sigma_j^{k} \mid d^{k}, r^{*,k}, \Lambda_j^{k}, y_j$ and $\beta_j^{k} \mid d^{k}, r^{*,k}, \Lambda_j^{k}, \Sigma_j^{k}, y_j$ from their posteriors given by (21).

  3. Draw a candidate value for the threshold, $r^{*,\text{cand}} = r^{*,k-1} + \Phi\epsilon$ with $\epsilon \sim N(0,1)$, where $\Phi$ is a scaling factor ensuring an acceptance rate of about 20%.

  4. Accept the draw with probability

    (6) $p^{k} = \min\left\{1, \frac{p(Y_t \mid r^{*,\text{cand}}, \theta)}{p(Y_t \mid r^{*,k-1}, \theta)}\right\},$

    where $p(\cdot)$ denotes the posterior density given all other parameters of the model.

  5. Draw d from

    (7) $p(d = i \mid Y_t, \theta) = \frac{p(Y_t \mid d = i, \theta)}{\sum_{d=1}^{d_{\max}} p(Y_t \mid d, \theta)} \quad \text{for } i = 1, \ldots, d_{\max}.$

  6. Generate $e_{j,T+1}, \ldots, e_{j,T+h}$ from $\epsilon_{j,t} \sim N(0, \Sigma_j^{k})$ and compute h-step-ahead forecasts recursively by iterating (2) and (3) h periods into the future.

  7. Repeat steps 2–6 until k = D + R.

I employ 25,000 iterations of the Gibbs sampler and discard the first 20,000 as burn-ins. Convergence statistics for the algorithm are provided in Appendix B.
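To fix ideas, the following is a minimal sketch, in Python, of steps 3–5 of the sampler. All names are illustrative placeholders rather than the paper’s actual code; in particular, log_post stands for the conditional log posterior density of the data given all remaining parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_threshold_and_delay(r_star, d, log_post, phi, d_max=8):
    # Step 3: Gaussian random-walk proposal for the threshold level r*.
    r_cand = r_star + phi * rng.standard_normal()
    # Step 4: accept with probability min{1, p(Y|r_cand)/p(Y|r_star)},
    # evaluated on the log scale for numerical stability.
    if np.log(rng.uniform()) < log_post(r_cand, d) - log_post(r_star, d):
        r_star = r_cand
    # Step 5: draw the delay d from its discrete conditional posterior.
    log_w = np.array([log_post(r_star, i) for i in range(1, d_max + 1)])
    w = np.exp(log_w - log_w.max())  # rescale before exponentiating
    d = int(rng.choice(np.arange(1, d_max + 1), p=w / w.sum()))
    return r_star, d
```

In practice, the scaling factor phi would be calibrated in a preliminary run such that roughly 20% of the candidate draws are accepted.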

The key element of this model is the threshold variable r, which governs the regime dependency. Different specifications for r have been proposed in the literature. Caggiano, Castelnuovo, and Figueres (2017) and Caggiano, Castelnuovo, and Groshenny (2014) argue that recessions are particularly informative regarding the identification of uncertainty shocks. These studies follow Auerbach and Gorodnichenko (2012) and use a moving average of GDP growth rates as the threshold variable. Other studies emphasize the importance of the uncertainty proxy itself and condition on either the historical change (for example, Foerster 2014; Henzel and Rengel 2017) or the historical level of the uncertainty proxy (for example, Berg 2017a; Jones and Enders 2016). Since this paper aims at identifying uncertainty regimes, I follow the latter and specify r as the level of the uncertainty indicator.

Nowadays, however, various uncertainty proxies are available, for example, stock market volatility (Bloom 2009), newspaper-based indices (Baker et al. 2016), indices based on firm-level data (Bachmann, Elstner, and Sims 2013), indices based on macroeconomic forecast errors (Rossi and Sekhposyan 2015), and indices based on the residuals from factor-augmented regressions (Jurado, Ludvigson, and Ng 2015). I choose the macroeconomic uncertainty index provided by Jurado, Ludvigson, and Ng (2015), a choice motivated by two factors. First, this proxy defines uncertainty in terms of the variation in the unforecastable component of macroeconomic variables and not in terms of the variables’ raw volatility.[4] Second, and in contrast to other measures, it is based on a large number of economic indicators and, hence, should represent an aggregate uncertainty factor that affects many series, sectors, or markets (Jurado, Ludvigson, and Ng 2015).[5]

I construct the index recursively so that, at any given point in time, it incorporates no information that would have been unavailable to a forecaster at that moment. As already pointed out by Jurado et al. (2015), indices based on in-sample forecasts and on out-of-sample forecasts are highly correlated.

3 Data and forecast methodology

The dataset includes 11 quarterly US macroeconomic series from 1960Q3 through 2017Q4 covering a broad range of economic activity especially relevant for policymakers and central bankers.[6] The series are obtained via the Federal Reserve Economic Database (FRED). To study the impact of macroeconomic uncertainty on the forecast performance, I further augment the dataset with the economic uncertainty index developed by Jurado et al. (2015). Specifically, I employ the index for horizon h = 1.

Most of the series enter the model in annualized log levels, that is, I take logarithms and multiply by 400 (see Table 1), except for those series that are already expressed as annualized rates. For the stationary variables, I utilize a white noise prior ($\delta_i = 0$), whereas for integrated series a random walk prior ($\delta_i = 1$) is used. A detailed description of the data, their corresponding transformations, and sources is provided in Table 1. For both models, I generate 1- up to 4-quarter-ahead forecasts using a recursive estimation scheme over an expanding window. The initial sample runs from 1960Q3 to 2004Q3; thus, the first recursion generates forecasts for 2004Q4 through 2005Q3. Subsequently, I iterate the procedure by updating the estimation sample with the observations from the next quarter until 2016Q4 is reached. This procedure generates a total of 50 forecasts for each horizon. Forecasts for horizons larger than one are obtained iteratively, as sketched below. The lag length in all VARs is set to four. While I estimate the model with both stationary and integrated variables, I report results solely in terms of annualized percentage growth rates. To this end, I transform the models’ level forecasts for the integrated variables into growth rates.
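As an illustration of this timing, the following sketch outlines the expanding-window loop; estimate_bvar and iterate_forecasts are hypothetical placeholders for the estimation routine and the recursive iteration of the VAR, not code from the paper.

```python
import numpy as np

def recursive_forecast_exercise(data, first_end, horizons=4, p=4):
    """data: T x n array of transformed series; first_end: length of the
    initial estimation sample (here 1960Q3-2004Q3)."""
    all_forecasts = []
    for t_end in range(first_end, data.shape[0] - horizons + 1):
        window = data[:t_end]                  # expanding estimation window
        model = estimate_bvar(window, p)       # placeholder estimation step
        # Iterated multi-step forecasts: each h-step forecast feeds on the
        # (h-1)-step forecast, starting from the last p observations.
        path = iterate_forecasts(model, window[-p:], horizons)
        all_forecasts.append(path)             # horizons x n array
    return np.array(all_forecasts)
```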

Table 1:

Dataset.

Variable Mnemonic Source Transformation
Real GDP GDPC1 FRED log × 400
GDP Deflator GDPDEF FRED log × 400
Industrial Production Index INDPRO FRED log × 400
All Employees: Total Nonfarm PAYEMS FRED log × 400
Civilian Unemployment Rate UNRATE FRED
Real Gross Private Domestic Investment GPDIC1 FRED log × 400
ISM Manufacturing: PMI Composite Index NAPM FRED
Personal Consumption Expenditures, Price Index PCECTPI FRED log × 400
Capacity Utilization: Total Industry TCU FRED
Federal Funds Rate FEDFUNDS FRED
S&P 500 Composite – Price Index S&PCOMP FRED log × 100
Macroeconomic Uncertainty Index Own calculations
  1. The macroeconomic uncertainty index is calculated using the codes provided by Jurado et al. (2015) modified to provide a recursively estimated index.

4 In-sample analysis

Having outlined the empirical setup, I now investigate the in-sample properties of the Bayesian threshold VAR, which are based on full-sample estimates. Figure 1 plots the macroeconomic uncertainty index along with NBER recessions. The dashed-dotted line refers to the episodes of the endogenously identified high uncertainty regime, while the dashed line corresponds to the normal times regime. The figure reflects the common knowledge that macroeconomic uncertainty is countercyclical. Moreover, while the uncertainty regimes partly coincide with NBER recessions, they are more persistent and more frequently identified.[7] These discrepancies can be explained by differences in the underlying concepts. The NBER defines recessions as a significant decline in economic activity, whereas the macroeconomic uncertainty index focuses on predictability. The latter implies that booms and recoveries, which are characterized by high growth rates of macroeconomic aggregates, are excluded from the NBER recessions but can be part of the high uncertainty regime if the evolution of these aggregates is hard to predict during these episodes. Nevertheless, these results suggest that recessions are a useful proxy for uncertainty regimes. Directly identifying regimes based on the prevailing level of uncertainty, however, might be more appropriate for capturing possible nonlinear dynamics.

Having identified uncertainty regimes, I assess whether uncertainty has different effects on the economy depending on the prevailing regime. For this purpose, I perform a structural analysis based on impulse responses.[8] As the threshold VAR from Section 2.2 is nonlinear, standard impulse responses are not appropriate for capturing the effects of a shock. Thus, I follow Koop, Pesaran, and Potter (1996) and compute generalized impulse responses (GIRFs).[9] The shocks are identified using a recursive identification scheme based on a Cholesky decomposition, with the S&P 500 ordered first and uncertainty ordered second. This ordering allows real and nominal variables to react instantaneously to an uncertainty shock (see Baker et al. 2016; Bloom 2009; Fernández-Villaverde et al. 2015, among others). Due to space constraints, I only present the results for GDP, the GDP deflator, investment, consumption, the unemployment rate, and the federal funds rate.[10] The red line is the response in the high uncertainty regime; the blue line corresponds to the normal times regime. Shaded areas and dashed lines refer to 68% error bands.

Figure 1: Estimated uncertainty regimes.
Shaded areas correspond to NBER recessions. The dashed-dotted line refers to the high uncertainty regime, i.e. the median estimate of $(1 - S_t)$ from (2) and (3).

Figure 2 shows that, independently of the regime, the uncertainty shock resembles a negative supply shock, leading to an increase in the price level and a drop in output. Thus, the responses support the findings of Alessandri and Mumtaz (2019), Mumtaz and Theodoridis (2015, 2018), and Popescu and Smets (2010). Other studies stress the deflationary effects of uncertainty shocks (see, for instance, Carriero et al. 2015; Christiano, Motto, and Rostagno 2014; Leduc and Liu 2016). From a theoretical point of view, the responses provide evidence in favor of an “inverse Oi (1961)-Hartman (1972)-Abel (1983) effect”. As pointed out by Born and Pfeifer (2014) and Fernández-Villaverde et al. (2015), given sticky prices, firms can set a price that is either too low or too high. The former is obviously not optimal because the firm has to sell too many units at too low a price. In the latter case, the firm sells too few units but is compensated by a higher price per unit. Therefore, firms are prone to bias future prices upwards, which can lead to inflationary effects of an uncertainty shock.

The variables’ response to the uncertainty shock is, in each case, stronger in the high uncertainty regime than in the normal times regime. Thus, in line with previous studies, the contractionary effects of uncertainty shocks are especially large when uncertainty is already at a high level (Bijsterbosch and Guérin 2013; Jones and Enders 2016). However, in most cases the differences between the regime-dependent responses are small and insignificant, indicating that the autoregressive dynamics are similar in both regimes. For the unemployment rate, however, I document a significantly stronger increase during the first year after the shock has occurred, suggesting that unemployment rate forecasts in particular might benefit from modeling regime-dependency with regard to the level of uncertainty. Since shifts in the residual variances can also play an important role for forecasting, I compute impulse responses according to state-specific shock sizes (see Figure 9 in the Appendix). This analysis shows that the estimated size of the uncertainty shock is roughly 1.5 times larger in the high uncertainty regime than in the normal times regime (0.33 vs. 0.22). Thus, modeling shifts in the variance-covariance matrix seems to be a more important driver of nonlinearity with respect to the effects of sudden hikes in uncertainty than regime-dependent autoregressive dynamics. In total, the in-sample analysis hence suggests that density forecasts in particular should benefit from the T-VAR approach because the latter is probably better suited to identify the true variance of the data generating process.

Figure 2: Generalized impulse responses to an uncertainty shock.
The figure displays the impact of a unit uncertainty shock on selected variables in normal times and in times of high uncertainty. The responses are generated using a recursive identification scheme with uncertainty ordered second. Gray shaded areas and dashed blue lines refer to 68% error bands. The macro uncertainty index enters the model standardized.

5 Forecast evaluation

In this section, the forecasts of the competing models are evaluated. I first discuss the measures used to evaluate both the point forecasts and the predictive densities; subsequently, the forecast performance is discussed. In the following, j, i, and h denote the model, variable, and forecast horizon, respectively, for the forecast sample t = 1, …, N.

5.1 Forecast metrics

I measure point forecast accuracy using root mean squared errors:

(8) $\mathrm{RMSFE}_{i,j}^{h} = \sqrt{\frac{1}{N}\sum_{T}\left(\bar{y}_{i,T|T-h}^{\,j} - y_{i,T}\right)^{2}},$

where $\bar{y}_{i,T|T-h}^{\,j}$ and $y_{i,T}$ denote the mean of the model’s predictive density and the corresponding realization, respectively. The RMSFE is only useful in assessing the accuracy of a model compared to that of other models. Therefore, I report the RMSFEs relative to a benchmark model ($\mathrm{RMSFE}_{i,B}^{h}$):

(9) $\text{relative RMSFE}_{i,j}^{h} = \mathrm{RMSFE}_{i,j}^{h} / \mathrm{RMSFE}_{i,B}^{h}.$

I apply the test provided by Diebold and Mariano (1995), adjusted with the small-sample correction of Harvey, Leybourne, and Newbold (1997), to gauge whether the point forecasts are significantly different from each other. A sketch of this test is given below.
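As an illustration, a minimal sketch of this test under squared-error loss follows. The variable names are illustrative, and the long-run variance is estimated with a simple rectangular kernel truncated at h − 1 lags.

```python
import numpy as np
from scipy import stats

def dm_test(e1, e2, h=1):
    """e1, e2: h-step forecast errors of two competing models."""
    d = e1**2 - e2**2                     # squared-error loss differential
    N = d.size
    # Long-run variance: rectangular kernel truncated at h - 1 lags.
    gamma = [np.cov(d[k:], d[:N - k])[0, 1] for k in range(h)]
    lrv = gamma[0] + 2.0 * sum(gamma[1:])
    dm = d.mean() / np.sqrt(lrv / N)
    # Harvey-Leybourne-Newbold small-sample correction; compare to t(N-1).
    hln = np.sqrt((N + 1 - 2 * h + h * (h - 1) / N) / N)
    stat = hln * dm
    return stat, 2 * stats.t.sf(abs(stat), df=N - 1)
```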

To take into account the uncertainty around the point estimate, I additionally evaluate the predictive densities. Specifically, I apply the average log predictive score, which has become a commonly accepted tool for comparing the forecast performance of different models (see Clark 2012; Geweke and Amisano 2010, among others). It is defined as the logarithm of the predictive density evaluated at the realized value:

(10) $\mathrm{LS}_{i,j}^{h} = \frac{1}{N}\sum_{t=1}^{N} \log p_t(y_{i,t+h} \mid j).$

The predictive density, $p_t(y_{t+h} \mid j)$, is obtained by applying a kernel estimator to the forecast sample.[11] Hence, if the competing model has a higher log score than the benchmark, its predictive density assigns a higher probability to values close to the realizations. As for the RMSFE, the log scores are not informative on their own, which is why I report them relative to the benchmark model ($\mathrm{LS}_{i,B}^{h}$):

(11) $\text{relative LS}_{i,j}^{h} = \mathrm{LS}_{i,j}^{h} - \mathrm{LS}_{i,B}^{h}.$
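For concreteness, the following sketch computes the average log score in (10) from posterior predictive draws. It assumes, purely for illustration, a Gaussian kernel estimator (scipy’s gaussian_kde); the text above only specifies that some kernel estimator is applied.

```python
import numpy as np
from scipy.stats import gaussian_kde

def average_log_score(draw_list, realizations):
    """draw_list: one 1-d array of predictive draws per forecast origin;
    realizations: the matching realized values."""
    scores = [gaussian_kde(draws).logpdf(y)[0]        # log p_t(y | j)
              for draws, y in zip(draw_list, realizations)]
    return float(np.mean(scores))                     # average over t
```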

Furthermore, I evaluate the predictive densities in terms of the (average) continuous ranked probability score (CRPS). As argued by, for instance, Gneiting and Raftery (2007), the CRPS has two advantages compared to the log scores. First, it can be reported in the same units as the respective variable and therefore facilitates a direct comparison of deterministic and probabilistic forecasts. Second, in contrast to log scores, CRPSs are both less sensitive to extreme outcomes and better able to assess forecasts close but not equal to the realization. I follow Gneiting and Ranjan (2011) and express the CRPS in terms of a score function:

(12) $S(p_t^{j,i}, y_{i,t}, \nu(\alpha)) = \int_0^1 \mathrm{QS}_{\alpha}\left(P_t^{-1}(\alpha), y_{i,t}\right) \nu(\alpha)\, d\alpha,$

where $\mathrm{QS}_{\alpha}(P_t^{-1}(\alpha), y_{i,t}) = 2\left(\mathbb{I}\{y_{i,t} < P_t^{-1}(\alpha)\} - \alpha\right)\left(P_t^{-1}(\alpha) - y_{i,t}\right)$ is the quantile score for the forecast quantile $P_t^{-1}(\alpha)$ at level $0 < \alpha < 1$. $\mathbb{I}\{y_{i,t} < P_t^{-1}(\alpha)\}$ is an indicator function taking the value one if $y_{i,t} < P_t^{-1}(\alpha)$ and zero otherwise. $\nu(\alpha)$ is a weighting function. Applying a uniform weighting scheme ($\nu(\alpha) = 1$) and dividing by the number of generated densities yields the average CRPS:

(13) $\mathrm{CRPS}_{i,j}^{h} = \frac{1}{N}\sum_{t=1}^{N} S(p_{t+h}, y_{i,t+h}, 1).$

To compute this expression, P(⋅) is approximated by the empirical distribution of forecasts and the integral is calculated numerically.[12] According to this definition, a lower CRPS implies that the predictive density is closer to the actual density. As for the log scores, I report the CRPS in terms of the average across all evaluation periods and relative to the benchmark model. To provide a rough gauge of whether these scores are significantly different from the benchmark, I follow D’Agostino, Gambetti, and Giannone (2013) by regressing the differences between the scores of each model and the benchmark on a constant. A t-test with Newey-West standard errors on the constant indicates whether these average differences are significantly different from zero.
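As an aside, with uniform weights the integral in (12) admits the well-known sample representation CRPS = E|X − y| − 0.5 E|X − X′| (Gneiting and Raftery 2007). The following sketch uses this identity to approximate (13) from predictive draws, sidestepping the explicit numerical integration over α.

```python
import numpy as np

def crps_from_draws(draws, y):
    """draws: 1-d array of predictive draws; y: scalar realization."""
    term1 = np.mean(np.abs(draws - y))                        # E|X - y|
    term2 = np.mean(np.abs(draws[:, None] - draws[None, :]))  # E|X - X'|
    return term1 - 0.5 * term2
```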

Finally, I compute probability integral transforms (PITs) that are defined as the CDF corresponding to the predictive density evaluated at the respective realizations:

(14) $z_{t+h}^{i} = \int_{-\infty}^{y_{t+h}^{i}} p_t(u)\, du \quad \text{for } t = 1, \ldots, N.$

Thus, with regard to the respective predictive density, the PIT denotes the probability that a forecast is less than or equal to the realization. For example, a realization that corresponds to the 10th percentile receives a PIT of 0.1. Hence, if the predictive densities match the true densities, the PITs are uniformly distributed over the unit interval. To assess the accuracy of the predictive densities according to the PITs, I divide the unit interval into k equally sized bins and count the number of PITs in each bin. If the predictive density equals the actual density, each bin contains N/k observations. I set k = 10, implying that each bin accounts for 10% of the probability mass. Moreover, I follow Rossi and Sekhposyan (2014) and compute 90% confidence bands using a normal approximation to gauge significant deviations from uniformity.
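For illustration, the following sketch computes empirical PITs from predictive draws, bins them into deciles, and forms the normal-approximation bands: under uniformity, each bin count is approximately distributed as N(N/k, N(1/k)(1 − 1/k)).

```python
import numpy as np

def pit_histogram(draw_list, realizations, k=10, z90=1.645):
    """Returns decile bin counts of the PITs and 90% confidence bands."""
    pits = np.array([(draws <= y).mean()              # empirical CDF at y
                     for draws, y in zip(draw_list, realizations)])
    counts, _ = np.histogram(pits, bins=np.linspace(0.0, 1.0, k + 1))
    N = pits.size
    se = np.sqrt(N * (1.0 / k) * (1.0 - 1.0 / k))     # binomial std. dev.
    return counts, (N / k - z90 * se, N / k + z90 * se)
```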

5.2 Point forecasts

Table 2 summarizes the results of the forecast evaluation based on MFEs and RMSFEs. Both measures are expressed in percentage points. While the models provide forecasts for each variable in the dataset, for the sake of brevity, I present results only for the variables depicted in Section 4, namely, inflation (measured in terms of GDP deflator growth), GDP growth, consumption growth, investment growth, the unemployment rate, and the federal funds rate.[13] Let us begin by analyzing the results for the MFE presented in the left panel of Table 2. The table shows that the benchmark VAR, on average and in most cases, overpredicts the realizations. Inflation for the next quarter, for instance, is overpredicted by 0.14 annualized percentage points. Adding the uncertainty index to the otherwise standard VAR (VARU) tends to increase this bias, except for the unemployment rate and for investment growth. In the latter case, the MFE is, on average over all horizons, about one percentage point smaller. The MFEs of the threshold VAR (T-VAR) are distinct from the former ones. First, compared to the linear models, the MFEs from the T-VAR are in most cases larger. Only for certain variables and horizons (for example, output growth at h = 3) are reductions detectable. Thus, identifying uncertainty regimes seems to be less fruitful for generating well-calibrated predictive means. Second, while the linear models consistently underpredict unemployment and overpredict the federal funds rate, the T-VAR overpredicts unemployment and underpredicts the federal funds rate. The latter result stems from the fact that the T-VAR predicts federal funds rate values below zero even though the federal funds rate is fixed at its lower bound.[14] Overall, the evaluation of the MFEs thus provides only little evidence in favor of either the VARU or the T-VAR. In fact, the benchmark model provides very competitive MFEs and in some cases outperforms the remaining models.

Table 2:

MFEs and RMSEs.

Specification  MFE (h = 1, 2, 3, 4)  RMSFE (h = 1, 2, 3, 4)
Inflation
VAR 0.14 0.17 0.18 0.18 0.95 1.02 1.09 1.15
VARU 0.16 0.21 0.25 0.27 1.10 1.06 1.02 1.04
T-VAR 0.35 −0.34 −0.84*** 1.20*** 1.19*** 1.08 1.28*** 1.45***
Output growth
VAR 0.97** 1.11** 1.15*** 1.00 2.35 2.79 2.91 2.80
VARU 1.14*** 1.16*** 1.25*** 1.15*** 1.06* 1.03 1.01 0.99
T-VAR 1.39** 0.77 −0.05 0.79 1.17*** 1.04 0.92 0.97
Investment growth
VAR 4.30*** 5.44*** 5.65*** 4.48*** 10.27 15.01 15.52 13.76
VARU 3.36*** 4.00*** 4.64*** 3.83*** 0.96 0.99 1.03 0.97
T-VAR 4.88*** 0.55 −5.11*** −4.42*** 1.18*** 0.92 1.00 1.06
Consumption growth
VAR 0.77*** 0.72*** 0.73*** 0.87*** 2.14 2.13 2.08 2.34
VARU 0.77*** 0.74*** 0.82*** 0.95** 1.01 0.98 1.03 0.97
T-VAR 1.11*** 1.00** 0.86 2.07*** 1.12** 1.19*** 1.11 1.30***
Unemployment rate
VAR −0.04 −0.09 −0.15 −0.20 0.22 0.47 0.73 1.00
VARU −0.03 −0.06 −0.11 −0.16 1.01 0.97 0.97 0.96
T-VAR −0.03 0.08 0.29*** 0.50*** 0.94* 0.93* 1.06 1.13***
Federal funds rate
VAR 0.01 0.09 0.17 0.27 0.65 1.16 1.42 1.58
VARU 0.08 0.19 0.29** 0.40*** 0.90* 0.98 1.06 1.14***
T-VAR 0.04 −0.24 −0.51** −0.71*** 0.86** 0.92* 1.04 1.16***
  1. VAR and VARU denote the linear VAR without and with macro uncertainty, respectively. T-VAR refers to the threshold VAR. RMSFEs are reported in absolute terms for the benchmark model (top row of each panel) and as ratios relative to the benchmark for the remaining specifications. Ratios below unity indicate that the model outperforms the benchmark. ***, **, and * denote that the errors are significantly different from zero (MFE) or from the benchmark (RMSFE) at the 1%, 5%, and 10% level, respectively. Sample: 1960Q3–2017Q4.

The right panel of Table 2 depicts the results for the RMSFE. For the benchmark model, the RMSFEs are reported in absolute terms, while for the remaining specifications they are reported as ratios relative to the benchmark model, i.e. a figure below unity indicates that the model outperforms the benchmark specification. The differences between the VAR and the VARU are again very small and in most cases insignificant, suggesting that the uncertainty index has, on average, only a marginal impact on forecast performance in a linear setting. Only for the federal funds rate does the VARU provide significantly smaller RMSFEs. The results for the threshold VAR are mixed. In most cases, it is outperformed by its linear counterparts, implying that identifying uncertainty regimes is not beneficial with regard to point forecasting. The worst relative performance is obtained for inflation forecasts. Moreover, neither for GDP growth nor for investment or consumption growth does the T-VAR deliver a reduction in RMSFEs.

While regime-dependency apparently does not pay off for the former indicators, unemployment and interest rate forecasts benefit significantly. Regarding the federal funds rate at the one- and two-quarter-ahead horizons, the T-VAR’s forecasts are on average 14% and 8% more precise, respectively, while unemployment rate forecast accuracy increases by 6% and 7% at these horizons. These results are particularly appealing since labor market variables possess an especially strong regime dependency with regard to uncertainty (see Figure 2). In addition, these findings underpin the results of Alessandri and Mumtaz (2017) and Barnett, Mumtaz, and Theodoridis (2014). While the former demonstrate that regime-dependent VARs are inferior to linear VARs and VARs with time-varying parameters with regard to GDP growth and inflation, the latter provide evidence that financial variables particularly benefit from regime dependency. This suggests that for activity variables there is, if any, only gradual structural change, which cannot be captured by a threshold VAR, while for labor market and financial variables the structural shift is more abrupt and thus can be captured by the T-VAR.

Figure 3 explores the models’ forecast performance over time. To this end, I calculate four-quarter moving averages of the MFE (upper panel) and relative RMSFE (lower panel) for one-quarter-ahead forecasts of the unemployment rate (left column) and the federal funds rate (right column). Evidently, the degree of predictability varies substantially over time. Regarding unemployment rate forecasts, the VARU and the T-VAR work particularly well during the Great Recession and the subsequent recovery, when uncertainty was high. In the remaining periods, when uncertainty was rather low, the forecast performance is very similar (VARU) or even worse (T-VAR) compared to the linear VAR, suggesting that uncertainty is especially relevant when it is high. A similar pattern arises for the federal funds rate. The largest gains in forecast accuracy are obtained from 2008 to 2012, when uncertainty was high. In contrast to the unemployment rate, however, federal funds rate forecasts are also more precise from 2013 to 2016. The short hike of the federal funds rate in 2012, by contrast, is captured best by the linear VAR; both the VARU and the T-VAR strongly overestimate the actual increase. Overall, the results suggest that including information on economic uncertainty can improve point forecast accuracy for some variables and for short horizons, with the largest gains during episodes of high uncertainty.

Figure 3: Forecast performance over time – point forecasts.
The figure displays mean forecast errors (upper panel) and relative root mean squared forecast errors (bottom panel), computed as four-quarter moving averages over the forecast sample, for unemployment and federal funds rate forecasts.

5.3 Density forecasts

Subsequently, I evaluate the models’ forecasts with respect to the predictive densities. Thus, apart from the predictive means evaluated above, the variances have to be precisely estimated as well to ensure an accurate predictive density. Table 3 sets out the results for the CRPS and the LS. First, consider the results for the LS, which are reported in levels for the benchmark and in differences for the remaining models. Positive differences indicate that the respective model outperforms the benchmark. With regard to the linear models, the LS provide a pattern similar to that in the previous section. Again, the differences between both models are rather small, indicating that the marginal impact of the uncertainty index in a linear setting is on average almost negligible. However, in some cases, already the linear VAR using additional information on economic uncertainty provides significantly better (higher) LS. Turning to the T-VAR reveals that for medium- to long-term forecasts, controlling for regime-dependency with respect to uncertainty leads to considerably less accurate predictive densities. Regarding short-term forecasts, though, the T-VAR provides, for most variables, remarkably better log scores, with the largest improvements obtained for the activity variables. For instance, the LS for output growth at h = 1 is 0.19 points higher than the benchmark’s score. Hence, while the T-VAR is inferior in generating precise point forecasts for the activity variables, it is superior in computing the complete predictive distribution of these indicators and thus is better suited for describing the uncertainty around the point estimate.

In total, the CRPS underpin the findings of the LS. However, there are noteworthy differences with regard to the unemployment rate. While according to the LS the predictive distributions of the T-VAR are virtually identical to those of the benchmark, according to the CRPS the T-VAR provides significantly more accurate densities. For instance, the one-quarter-ahead CRPS for the unemployment rate is 16% lower than the benchmark’s CRPS, while the average log score is virtually identical. This suggests that the log scores for the unemployment forecasts are partly distorted by outliers. Overall, the evaluation of both the LS and the CRPS underpins the findings of previous studies demonstrating that nonlinearity is particularly useful in calibrating accurate predictive densities (see Chiu, Mumtaz, and Pintér 2017; Groen, Paap, and Ravazzolo 2013; Huber 2016, among others). However, while those studies mainly focus on forecasting output, inflation, and interest rates, this paper shows that unemployment rate forecasts also benefit significantly. Figure 4 presents evidence on time-varying predictability. Similar to Figure 3, the T-VAR provides more accurate densities during the Great Recession and the subsequent recovery.

Table 3:

CRPS and LS.

Specification  CRPS (h = 1, 2, 3, 4)  LS (h = 1, 2, 3, 4)
Inflation
VAR 0.50 1.91 3.62 5.53 −2.17 −3.20 −3.75 −4.12
VARU 0.89 0.97 1.02 1.09*** 0.04 0.10** −0.04 −0.05
T-VAR 1.60*** 3.20*** 3.73*** 4.24*** −0.36*** −1.23*** −1.29*** −1.31***
Output growth
VAR 3.55 10.40 5.73 5.94 −3.61 −4.72 −4.28 −4.30
VARU 1.00 0.99* 0.98 0.99 0.01 0.07** 0.01 −0.01
T-VAR 0.92*** 1.59*** 2.26*** 3.03*** 0.19*** −0.35*** −0.73*** −1.00***
Investment growth
VAR 16.64 54.25 32.74 34.62 −5.38 −6.53 −6.11 −6.07
VARU 0.97* 0.97* 0.98* 0.98* −0.02 0.02 0.07 0.07
T-VAR 1.10*** 1.79*** 2.46*** 3.12*** −0.04 −0.55*** −0.83*** −1.02***
Consumption growth
VAR 3.26 7.77 12.38 15.79 −3.56 −4.43 −4.86 −5.10
VARU 0.96** 0.95** 0.97** 0.96*** 0.05** 0.04 0.03 0.01
T-VAR 0.87*** 1.61*** 2.14*** 2.75*** 0.23*** −0.41*** −0.67*** −0.96***
Unemployment rate
VAR 0.15 0.47 0.76 1.10 −1.99 −2.29 −2.49 −2.67
VARU 0.89*** 0.92*** 0.92*** 0.91*** −0.01 0.05 0.07* 0.07**
T-VAR 0.84*** 1.00 0.94 0.89*** −0.00 −0.05 0.02 0.11**
Federal funds rate
VAR 0.36 1.09 1.77 2.42 −2.14 −2.73 −3.12 −3.39
VARU 0.91*** 1.04 1.05 1.07 0.02 0.01 0.02 −0.03
T-VAR 0.79*** 0.84*** 0.93*** 0.99 0.10*** 0.16*** 0.14*** 0.13***
  1. VAR and VARU denote the linear VAR without and with macro uncertainty, respectively. T-VAR refers to the threshold VAR. The scores are reported in absolute terms for the benchmark model (top row of each panel). For the remaining models, LSs are expressed as differences from the benchmark and CRPSs as ratios to the benchmark model. A positive difference and a ratio below unity indicate that the model outperforms the benchmark. ***, **, and * denote significance at the 1%, 5%, and 10% level, respectively, according to a t-test on the average difference in scores relative to the benchmark model with Newey-West standard errors. Sample: 1960Q3–2017Q4.

Between 2011 and the end of 2013, the T-VAR’s entire forecast distribution is stretched by a few forecasts far away from the realizations, which leads to low log scores. Since the CRPS is better able to reward the observations close to the realization and is more robust to outliers, according to the CRPS, the T-VAR provides more precise densities even for this period and thus for almost the entire evaluation period. For the federal funds rate, the picture is more clear-cut. The LS indicate that the T-VAR is superior at the beginning of the Great Recession, but the CRPS display more accurate predictive densities for almost the entire sample. As for the unemployment rate, the T-VAR provides the best relative forecast performance during the Great Recession and the subsequent recovery when economic uncertainty was very high. In total, Figure 4 provides evidence for important changes in the predictive ability of the models.

Figure 4: Forecast performance over time – density forecasts.
The figure displays log scores (upper panel) and continuous ranked probability scores (bottom panel), computed as four-quarter moving averages over the forecast sample, for unemployment and federal funds rate forecasts.

Finally, I compute PITs to gauge the calibration of the predictive densities. Figure 5 facilitates a graphical inspection of the PITs and shows that the predictive densities look similar across the different models.[15] As I computed 50 forecasts for each horizon, each bin in Figure 5 should contain five observations (depicted by the solid black line) to ensure uniformity. Thus, the closer the histograms are to the solid black line, the more accurate are the predictive densities. In the case of inflation, output, investment, and consumption, the PITs appear hump-shaped, with significant departures from uniformity. In fact, the models assign too much probability to the center of the distribution, with too many PIT values around 0.5. This indicates that the kurtosis of the predictive densities at each horizon and recursion is higher than the kurtosis of the true density, which implies that the models overestimate the actual uncertainty around the point estimate. This pattern is frequently found in the VAR forecasting literature – see, for example, Bekiros and Paccagnini (2015), Rossi and Sekhposyan (2014), or Gerard and Nimark (2008) – and can be caused by an overly dense parametrization of the model.[16] With regard to one-quarter-ahead forecasts (blue bars), the T-VAR mitigates this issue by generating more forecasts that correspond to the lower percentiles of the actual distribution and thus provides a better description of the data. At higher horizons, however, the densities are again too wide. Regarding unemployment rate forecasts, the PITs of each model are closer to uniformity for h = 1 and h = 2; both the lower and the upper percentiles of the actual distribution are captured by the models. At the remaining horizons, the models again overestimate the actual uncertainty. The PITs for the interest rate forecasts appear to be right-skewed, thus missing the left tail of the actual distribution. This stems from the phase of extraordinarily low interest rates at the end of the sample, which is barely captured by the models. Only the VARU is able to generate forecasts corresponding to the lower percentiles. Jointly with the results from Table 3, the evaluation of the PITs suggests that estimating regime-dependent covariance matrices with respect to the prevailing level of uncertainty helps calibrate accurate predictive densities.

Figure 5: Probability integral transforms (PITs).
The figure displays the CDF of the probability integral transforms (PITs). Solid and dashed black lines denote uniformity and 90% confidence bands, respectively.

5.4 The role of financial uncertainty

While the previous sections have demonstrated that information on macroeconomic uncertainty can be helpful for economic forecasting, they have left open the question of whether fluctuations in uncertainty are better characterized as an exogenous force driving the business cycle or as an endogenous response to it. Indeed, as shown by Ludvigson, Ma, and Ng (2019), macroeconomic uncertainty is best characterized as an endogenous response to the business cycle, while a measure of financial uncertainty is more likely to be an exogenous impulse to the business cycle. In the following, I therefore repeat the forecast experiment and replace the macro uncertainty index with the financial uncertainty index provided by Ludvigson, Ma, and Ng (2019).[17] The results from this analysis are depicted in Table 4 and suggest that – from a forecasting perspective – the distinction between macro and financial uncertainty is of minor importance. In a linear framework, the impact of financial uncertainty on point forecast accuracy is only minor; the RMSFEs of the VAR with financial uncertainty (VARFU) are rather close to those of the VARU and the VAR without information on uncertainty. Using the threshold VAR with financial uncertainty (T-VARFU) improves on the linear VAR. Compared with the T-VAR from Section 5.2, however, it provides a very similar forecast performance. Only for the interest rate and for investment growth does the T-VARFU provide slightly more precise forecasts in the short run. This result might be driven by two factors. First, as suggested by the previous sections, gains in forecast accuracy are mainly driven by identifying uncertainty regimes, not by the additional information on uncertainty, and the T-VARFU identifies uncertainty regimes that are broadly in line with those of the T-VAR.[18] Second, both series are partly based on the same information and display significant comovement (the correlation between the two series is roughly 0.60 in the final recursion). Thus, both the models’ regimes and the information on uncertainty are similar, leading to similar forecast performance.

Table 4:

Forecast results for financial uncertainty.

Specification  CRPS (h = 1, 2, 3, 4)  RMSFE (h = 1, 2, 3, 4)
Inflation
VARFU 0.86 0.95 1.03 1.06** 1.08* 1.06* 1.06* 1.04
T-VARFU 1.60*** 3.20*** 3.73*** 4.24*** 1.21*** 1.07* 1.28*** 1.43***
Output growth
VARFU 1.01 0.99* 0.96* 0.98* 1.07** 1.05** 1.02 1.00
T-VARFU 0.92*** 1.59*** 2.26*** 3.03*** 1.14*** 1.03 0.93 0.99
Investment growth
VARFU 0.99 0.97* 0.96* 0.96* 0.96 0.98 1.00 0.99
T-VARFU 1.10*** 1.79*** 2.46*** 3.12*** 1.14*** 0.91 0.99 1.01
Consumption growth
VARFU 0.98* 0.97* 0.98* 0.96*** 1.03 1.00 1.00 0.99
T-VARFU 0.87*** 1.61*** 2.14*** 2.75*** 1.16** 1.21*** 1.16** 1.26***
Unemployment rate
VARFU 0.92** 0.95** 0.95** 0.94** 1.03 0.99 1.03* 1.01
T-VARFU 0.84*** 1.00 0.94 0.89*** 0.95* 0.93* 1.05 1.15**
Federal funds rate
VARFU 0.89*** 1.01 1.02 1.05 0.87** 0.96 1.01 1.08
T-VARFU 0.79*** 0.84*** 0.93*** 0.99 0.83*** 0.91*** 1.02 1.18***
  1. VARFU and T-VARFU denote the linear VAR and the threshold VAR including financial uncertainty, respectively. RMSFEs and CRPSs are reported as ratios relative to the benchmark. A ratio below unity indicates the model outperforms the benchmark. ***, **, and * denote significance on the 1%, 5%, and 10% level, respectively, according to a t-test on the average difference in scores relative to the benchmark model with Newey-West standard errors. Sample: 1960Q3–2017Q4.

Figure 6: Estimated uncertainty regimes.
Shaded areas correspond to NBER recessions. The dashed-dotted line refers to the high uncertainty regime, i.e. the median estimate of $(1 - S_t)$ from (2) and (3).

6 Conclusion

Evidence from studies on the effects of uncertainty shocks suggests that uncertainty affects real economic variables and that these effects depend on the prevailing level of uncertainty. This paper answers the questions of whether these insights can be used to achieve more accurate forecasts from VAR models and whether one has to account for nonlinearities to achieve this goal. To this end, I compare the forecast performance of different Bayesian VAR specifications. The analysis provides four main results. First, in a linear setting, point forecast accuracy cannot be significantly improved by considering information from the macroeconomic uncertainty index. Second, accounting for regime-specific model dynamics depending on the level of uncertainty improves point forecast accuracy for unemployment rate and interest rate forecasts, while the accuracy for real activity variables deteriorates. Third, predictive densities benefit significantly from the macroeconomic uncertainty index in both a linear and a nonlinear setting. However, the nonlinear model outperforms the linear models, especially at short horizons. The largest gains are obtained for unemployment rate forecasts. Moreover, and in contrast to the point forecasts, the threshold VAR also provides strong improvements for the predictive densities of the real activity variables. Finally, I document substantial variation in the models’ predictive ability. In particular, during episodes of high uncertainty, the T-VAR provides strong gains in forecast accuracy with respect to the predictive densities. Thus, it can serve as a complement to existing approaches in arriving at a better picture of the actual uncertainty surrounding the point estimate in times of high uncertainty, especially for unemployment forecasts.

Acknowledgement

I am thankful to editor-in-chief Bruce Mizrach, two anonymous referees, Tim Oliver Berg, Markus Heinrich, Christian Grimme, Robert Lehmann, Svetlana Rujin, Maik Wolters, and the participants of the 27th Annual Symposium of the SNDE and the 2018 Spring Meeting of Young Economists for helpful comments.

Appendix

A Prior implementation

For the prior implementation, I express the VAR(p) in (1) in companion form:

(15) $Y = XB + U,$

with $Y = (y_1, \ldots, y_T)'$, $X = (X_1, \ldots, X_T)'$ where $X_t = (y_{t-1}', \ldots, y_{t-p}', 1)$, $U = (u_1, \ldots, u_T)'$, and $B = (A_1, \ldots, A_p, c)'$.

The normal-inverse Wishart prior takes the following form:

(16) $\Sigma \sim iW(\Psi, \underline{\alpha}) \quad \text{and} \quad \mathrm{vec}(B) \mid \Sigma \sim N(\mathrm{vec}(\underline{B}), \Sigma \otimes \underline{\Omega}),$

where $\underline{B}$, $\underline{\Omega}$, $\underline{\alpha}$, and $\Psi$ are functions of hyperparameters. To implement these prior beliefs, I follow Bańbura, Giannone, and Reichlin (2010) and augment the dataset with dummy observations:

(17) $Y_{D,1} = \begin{pmatrix} \mathrm{diag}(\delta_1\sigma_1, \ldots, \delta_n\sigma_n)/\lambda_1 \\ 0_{n(p-1) \times n} \\ \mathrm{diag}(\sigma_1, \ldots, \sigma_n) \\ 0_{1 \times n} \end{pmatrix}, \quad X_{D,1} = \begin{pmatrix} J_p \otimes \mathrm{diag}(\sigma_1, \ldots, \sigma_n)/\lambda_1 & 0_{np \times 1} \\ 0_{n \times np} & 0_{n \times 1} \\ 0_{1 \times np} & \epsilon \end{pmatrix},$

with $J_p = \mathrm{diag}(1, \ldots, p)$.

$\delta_1$ to $\delta_n$ denote the prior means of the coefficients on the first lag. $\delta_i$ is set to one, implying a random walk prior, for non-stationary variables and set to zero for stationary variables. $\sigma_1$ to $\sigma_n$ are scaling factors, which are set to the standard deviations from univariate autoregressions of the endogenous variables using the same lag length as in the VAR. I impose a flat prior on the intercept terms by setting $\epsilon$ to 1/10,000. The hyperparameter $\lambda_1$ controls the overall tightness of the prior; as $\lambda_1$ increases, the degree of shrinkage declines. The “sum-of-coefficients” prior imposes the restriction that the coefficients on the lags of the dependent variable sum to unity, whereas the lags of the other variables sum to zero. It is implemented by the following dummy observations:

(18) $Y_{D,2} = \mathrm{diag}(\delta_1\mu_1, \ldots, \delta_n\mu_n)/\lambda_2, \quad X_{D,2} = \left( (1_{1 \times p}) \otimes \mathrm{diag}(\delta_1\mu_1, \ldots, \delta_n\mu_n)/\lambda_2 \;\; 0_{n \times 1} \right),$

where $\mu_i$ denotes the sample average of variable i. The degree of shrinkage is determined by the hyperparameter $\lambda_2$; the prior becomes less informative for higher values of $\lambda_2$.

The “co-persistence” prior allows for the possibility of stable cointegration relations among the variables. Sims (1993) proposes to add the following dummy observations to the sample to implement the prior:

(19) $Y_{D,3} = \mathrm{diag}(\delta_1\mu_1, \ldots, \delta_n\mu_n)/\lambda_3, \quad X_{D,3} = \left( (1_{1 \times p}) \otimes \mathrm{diag}(\delta_1\mu_1, \ldots, \delta_n\mu_n) \right)/\lambda_3,$

where $\lambda_3$ controls the degree of shrinkage of this prior; as $\lambda_3$ approaches zero, the prior becomes tighter. Defining $Y^{*} = [Y', Y_{D,1}', Y_{D,2}', Y_{D,3}']'$, $X^{*} = [X', X_{D,1}', X_{D,2}', X_{D,3}']'$, and $U^{*} = [U', U_{D,1}', U_{D,2}', U_{D,3}']'$ yields the augmented dataset, which is used for inference via:

(20) $Y^{*} = X^{*} B + U^{*}.$

The posterior expectations are determined by an OLS regression of $Y^{*}$ on $X^{*}$. The posterior takes the form:

(21) $\Sigma \mid \lambda, y \sim IW(\tilde{\Sigma}, T + n + 2), \qquad \mathrm{vec}(B) \mid \Sigma, \lambda, y \sim N\left(\mathrm{vec}(\hat{B}), \Sigma \otimes (X^{*\prime} X^{*})^{-1}\right),$

where $\hat{B}$ is the matrix of coefficients from the regression of $Y^{*}$ on $X^{*}$, and $\tilde{\Sigma}$ is the corresponding residual covariance matrix. In sampling B, I follow Cogley and Sargent (2001) and discard draws leading to an unstable VAR.
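To make the prior implementation concrete, the following is a minimal sketch of how the dummy observations in (17) might be assembled; the function name and the numerical value of eps are illustrative.

```python
import numpy as np

def minnesota_dummies(delta, sigma, p, lam1, eps=1e-4):
    """Dummy observations implementing the Minnesota prior in (17)."""
    delta, sigma = np.asarray(delta, float), np.asarray(sigma, float)
    n = sigma.size
    Y = np.vstack([np.diag(delta * sigma) / lam1,   # shrink first own lags
                   np.zeros((n * (p - 1), n)),      # shrink higher lags to 0
                   np.diag(sigma),                  # prior on the covariance
                   np.zeros((1, n))])               # intercept row
    X = np.zeros((n * p + n + 1, n * p + 1))
    Jp = np.diag(np.arange(1, p + 1))               # lag-decay weights
    X[:n * p, :n * p] = np.kron(Jp, np.diag(sigma)) / lam1
    X[-1, -1] = eps                                 # (near-)flat intercept prior
    return Y, X
```

Given the augmented data $Y^{*}$ and $X^{*}$, the posterior mean of B then follows from the OLS regression in (20) and (21), e.g. via np.linalg.solve(Xs.T @ Xs, Xs.T @ Ys).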

B Convergence diagnostics

Table 5 depicts convergence statistics for the algorithm described in Section 2.2. I report both the convergence diagnostic (CD) according to Geweke (1991) and inefficiency factors. The statistics refer to full-sample estimates. The Geweke statistic is obtained as

(22) $Z = \frac{\bar{\theta}_1 - \bar{\theta}_2}{\sqrt{S_1(0)/N_1 + S_2(0)/N_2}},$

where $\bar{\theta}_i$ refers to the mean of the posterior draws for the coefficient from subsample $N_i$, for i = 1, 2. As recommended by Geweke (1991), $N_1 = 0.1N$ and $N_2 = 0.5N$, where N refers to the total number of retained draws from the Gibbs sampler. $S_i(0)$ is the spectral density of subsample i at frequency zero. The inefficiency factors (IF) are obtained as the inverse of the relative numerical efficiency measure, that is, $[\mathrm{var}(\theta) S(0)^{-1}]^{-1}$. Inefficiency factors below 20 are regarded as satisfactory (Primiceri 2005). For the sake of brevity, the upper panel of Table 5 reports only summary statistics of the distribution of the k = 588 VAR coefficients per regime. The statistics show that the IFs of the VAR coefficients are far below 20 in both regimes. The same holds for the threshold and delay coefficients (middle and bottom panels of Table 5). For the latter, I moreover report p-values corresponding to the test statistic (22), which indicate that the null of identical means across subsamples is clearly not rejected for either coefficient. Figure 7 moreover shows the Markov chain and the autocorrelation function for the threshold value. The figure indicates that the Markov chain has converged to its ergodic distribution and that the autocorrelation in the chain dies out quickly.
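For illustration, both diagnostics can be computed from a stored chain as follows. Note that, as a simplification, the spectral densities at frequency zero are approximated by subsample variances and truncated autocorrelation sums; a tapered estimator would typically be used in practice.

```python
import numpy as np

def geweke_z(chain, frac1=0.1, frac2=0.5):
    """Geweke's Z in (22), comparing the first 10% and last 50% of the chain."""
    chain = np.asarray(chain, float)
    n = chain.size
    a, b = chain[:int(frac1 * n)], chain[-int(frac2 * n):]
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / a.size
                                           + b.var(ddof=1) / b.size)

def inefficiency_factor(chain, max_lag=50):
    """IF = 1 + 2 * sum of autocorrelations; values below 20 are satisfactory."""
    c = np.asarray(chain, float) - np.mean(chain)
    rho = [np.corrcoef(c[:-k], c[k:])[0, 1] for k in range(1, max_lag + 1)]
    return 1.0 + 2.0 * float(np.sum(rho))
```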

Table 5:

Convergence statistics.

VAR Coefficients
Regime 1
Regime 2
Mean Median Min Max 10th 90th Mean Median Min Max 10th 90th
B 0.99 0.96 0.44 1.80 0.71 1.30 0.92 0.91 0.43 1.68 0.67 1.21

Threshold Coefficient
Geweke statistic Inefficiency factor

r 0.58 4.23

Delay Coefficient
Geweke statistic Inefficiency factor

d 0.77 1.17
  1. Geweke statistics refer to the p-values corresponding to the test statistic (22); inefficiency factors are defined as the inverse of the relative numerical efficiency measure.

Figure 7: Convergence analysis for r∗.
The figures display the Markov chain of the threshold coefficient (left panel) and its autocorrelation function (right panel), taken from the estimation of the final period.

C Generalized impulse responses

Formally, the GIRF at horizon $h$ of variable $y$ to a shock of size $\epsilon$, conditional on an initial condition $I_{t-1}$, is defined as the difference between two conditional expectations:

(23) $\operatorname{GIRF}_y(h, \epsilon, I_{t-1}) = E\left[y_{t+h} \mid \epsilon, I_{t-1}\right] - E\left[y_{t+h} \mid I_{t-1}\right],$

where the terms on the right-hand side are approximated by a stochastic simulation of the model, as proposed by Kilian and Vigfusson (2011). Specifically, for each block of $p$ consecutive values of $x$ and $y$ (the initial conditions), I randomly draw from the Gibbs sampler output one matrix of VAR coefficients $B$ and a corresponding variance-covariance matrix $\Sigma$. Based on this random draw from the model's posterior, I compute the reduced-form residuals $u_t$ using the relationship $u_t = A_0 \epsilon_t$, where $\epsilon_t$ contains the structural shocks. Using the reduced-form residuals, I simulate two time paths for $x_{t+h}$ and $y_{t+h}$, $h = 0, 1, \ldots, H$. For the first path, I set $\epsilon^{\ast}_{i,1} = \epsilon_{i,1} + 1$, while for the second path $\epsilon^{\ast}_{i,1} = \epsilon_{i,1}$, where $\epsilon_{i,1}$ refers to the structural uncertainty shock. The impulse response is obtained by calculating the difference between the two paths for $y_{t+h}$, $h = 0, 1, \ldots, H$. For each initial condition, I repeat this procedure 500 times and average over these 500 differences to obtain the GIRF for the respective history. To compute regime-dependent responses, I average over the GIRFs of the histories belonging to the normal-times and high-uncertainty regimes, respectively.
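The simulation loop can be summarized schematically as follows. This is a stylized sketch for a single history under simplifying assumptions: the coefficient layout, the regime-selection rule, and all names (`girf`, `step`, `B1`, `B2`, `r_star`) are illustrative, not the paper's implementation, and the delay `d` is assumed to satisfy $1 \le d \le p$.

```python
import numpy as np

def girf(history, B1, B2, A0, r_star, d, u_index, H=20, n_rep=500, seed=0):
    """history: (p, n) array of initial observations (oldest first);
    B1/B2: (n, 1 + n*p) coefficient matrices for the two regimes;
    A0: impact matrix mapping structural shocks to reduced-form residuals;
    r_star, d: threshold value and delay; u_index: uncertainty's position."""
    rng = np.random.default_rng(seed)
    p, n = history.shape

    def step(lags, eps_t):
        # regime depends on the uncertainty level d periods ago
        B = B2 if lags[-d, u_index] > r_star else B1
        x = np.concatenate(([1.0], lags[::-1].ravel()))  # constant + lags (newest first)
        return B @ x + A0 @ eps_t                        # u_t = A0 * eps_t

    irf = np.zeros((H + 1, n))
    for _ in range(n_rep):
        eps = rng.standard_normal((H + 1, n))            # common baseline shocks
        paths = []
        for shock in (1.0, 0.0):                         # shocked vs. baseline path
            lags = history.copy()
            path = np.zeros((H + 1, n))
            for h in range(H + 1):
                e = eps[h].copy()
                if h == 0:
                    e[u_index] += shock                  # unit uncertainty shock at h = 0
                y = step(lags, e)
                path[h] = y
                lags = np.vstack([lags[1:], y])          # roll the lag window forward
            paths.append(path)
        irf += paths[0] - paths[1]                       # difference of the two paths
    return irf / n_rep                                   # average over the simulations
```

Averaging the output of `girf` over all histories classified into a given regime would then deliver the regime-dependent responses described above.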

Figure 8: Generalized impulse responses to a unit uncertainty shock. The figure displays the impact of a unit uncertainty shock on selected variables in normal times and in times of high uncertainty. The responses are generated using a recursive identification scheme with uncertainty ordered second. Gray shaded areas and dashed blue lines refer to 68% error bands. The macro uncertainty index enters the model standardized.

Figure 9: Generalized impulse responses to a one standard deviation uncertainty shock. The figure displays the impact of a one standard deviation uncertainty shock on selected variables in normal times and in times of high uncertainty. The responses are generated using a recursive identification scheme with uncertainty ordered second. Gray shaded areas and dashed blue lines refer to 68% error bands. The macro uncertainty index enters the model standardized.

References

Abel, A. B. 1983. “Optimal Investment Under Uncertainty.” American Economic Review 73 (1): 228–233.

Adolfson, M., J. Lindé, and M. Villani. 2007. “Forecasting Performance of an Open Economy DSGE Model.” Econometric Reviews 26 (2–4): 289–328.

Alessandri, P., and H. Mumtaz. 2017. “Financial Conditions and Density Forecasts for US Output and Inflation.” Review of Economic Dynamics 24: 66–78.

Alessandri, P., and H. Mumtaz. 2019. “Financial Regimes and Uncertainty Shocks.” Journal of Monetary Economics 101: 31–46.

Auerbach, A. J., and Y. Gorodnichenko. 2012. “Measuring the Output Responses to Fiscal Policy.” American Economic Journal: Economic Policy 4 (2): 1–27.

Bachmann, R., and C. Bayer. 2013. “Wait-and-See Business Cycles?” Journal of Monetary Economics 60 (6): 704–719.

Bachmann, R., S. Elstner, and E. R. Sims. 2013. “Uncertainty and Economic Activity: Evidence from Business Survey Data.” American Economic Journal: Macroeconomics 5 (2): 217–249.

Baker, S. R., N. Bloom, and S. J. Davis. 2016. “Measuring Economic Policy Uncertainty.” The Quarterly Journal of Economics 131 (4): 1593–1636.

Balcilar, M., R. Gupta, and M. Segnon. 2016. “The Role of Economic Policy Uncertainty in Predicting US Recessions: A Mixed-Frequency Markov-Switching Vector Autoregressive Approach.” Economics: The Open-Access, Open-Assessment E-Journal 10 (2016-27): 1–20.

Bańbura, M., D. Giannone, and L. Reichlin. 2010. “Large Bayesian Vector Auto Regressions.” Journal of Applied Econometrics 25 (1): 71–92.

Barnett, A., H. Mumtaz, and K. Theodoridis. 2014. “Forecasting UK GDP Growth and Inflation Under Structural Change. A Comparison of Models with Time-Varying Parameters.” International Journal of Forecasting 30 (1): 129–143.

Bekiros, S., R. Gupta, and A. Paccagnini. 2015. “Oil Price Forecastability and Economic Uncertainty.” Economics Letters 132: 125–128.

Bekiros, S., and A. Paccagnini. 2015. “Estimating Point and Density Forecasts for the US Economy with a Factor-Augmented Vector Autoregressive DSGE Model.” Studies in Nonlinear Dynamics & Econometrics 19 (2): 107–136.

Berg, T. O. 2016. “Multivariate Forecasting with BVARs and DSGE Models.” Journal of Forecasting 35: 718–740.

Berg, T. O. 2017a. “Business Uncertainty and the Effectiveness of Fiscal Policy in Germany.” Macroeconomic Dynamics 23 (4): 1442–1470.

Berg, T. O. 2017b. “Forecast Accuracy of a BVAR Under Alternative Specifications of the Zero Lower Bound.” Studies in Nonlinear Dynamics & Econometrics 21 (2): 1081–1826.

Bernanke, B. S. 1983. “Irreversibility, Uncertainty, and Cyclical Investment.” The Quarterly Journal of Economics 98 (1): 85–106.

Bijsterbosch, M., and P. Guérin. 2013. “Characterizing Very High Uncertainty Episodes.” Economics Letters 121 (2): 239–243.

Bloom, N. 2009. “The Impact of Uncertainty Shocks.” Econometrica 77 (3): 623–685.

Born, B., and J. Pfeifer. 2014. “Policy Risk and the Business Cycle.” Journal of Monetary Economics 68: 68–85.

Caballero, R., and R. S. Pindyck. 1996. “Uncertainty, Investment, and Industry Evolution.” International Economic Review 37 (3): 641–662.

Caggiano, G., E. Castelnuovo, and J. M. Figueres. 2017. “Economic Policy Uncertainty and Unemployment in the United States: A Nonlinear Approach.” Economics Letters 151: 31–34.

Caggiano, G., E. Castelnuovo, and N. Groshenny. 2014. “Uncertainty Shocks and Unemployment Dynamics in U.S. Recessions.” Journal of Monetary Economics 67: 78–92.

Carriero, A., G. Kapetanios, and M. Marcellino. 2009. “Forecasting Exchange Rates with a Large Bayesian VAR.” International Journal of Forecasting 25 (2): 400–417.

Carriero, A., H. Mumtaz, K. Theodoridis, and A. Theophilopoulou. 2015. “The Impact of Uncertainty Shocks Under Measurement Error: A Proxy SVAR Approach.” Journal of Money, Credit and Banking 47 (6): 1223–1238.

Chen, C. W. S., and J. C. Lee. 1995. “Bayesian Inference of Threshold Autoregressive Models.” Journal of Time Series Analysis 16 (5): 483–492.

Chiu, C.-W. J., H. Mumtaz, and G. Pintér. 2017. “Forecasting with VAR Models: Fat Tails and Stochastic Volatility.” International Journal of Forecasting 33 (4): 1124–1143.

Christiano, L., R. Motto, and M. Rostagno. 2014. “Risk Shocks.” American Economic Review 104 (1): 27–65.

Clark, T. E. 2012. “Real-Time Density Forecasts From Bayesian Vector Autoregressions With Stochastic Volatility.” Journal of Business & Economic Statistics 29 (3): 327–341.

Clark, T. E., and F. Ravazzolo. 2015. “Macroeconomic Forecasting Performance Under Alternative Specifications of Time-Varying Volatility.” Journal of Applied Econometrics 30 (4): 551–575.

Cogley, T., and T. J. Sargent. 2001. “Evolving Post-World War II US Inflation Dynamics.” In NBER Macroeconomics Annual 2001, edited by B. S. Bernanke and K. Rogoff, Volume 16, 331–373. Cambridge, MA: MIT Press.

D’Agostino, A., L. Gambetti, and D. Giannone. 2013. “Macroeconomic Forecasting and Structural Change.” Journal of Applied Econometrics 28 (1): 82–101.

Diebold, F. X., and R. S. Mariano. 1995. “Comparing Predictive Accuracy.” Journal of Business & Economic Statistics 13 (3): 253–263.

Doan, T., R. Litterman, and C. Sims. 1984. “Forecasting and Conditional Projection Using Realistic Prior Distributions.” Econometric Reviews 3 (1): 1–100.

Fernández-Villaverde, J., P. Guerrón-Quintana, K. Kuester, and J. Rubio-Ramírez. 2015. “Fiscal Volatility Shocks and Economic Activity.” American Economic Review 105 (11): 3352–3384.

Ferrera, L., and P. Guérin. 2018. “What Are the Macroeconomic Effects of High-Frequency Uncertainty Shocks?” Journal of Applied Econometrics 33 (5): 662–679.

Foerster, A. T. 2014. “The Asymmetric Effects of Uncertainty.” Economic Review, Federal Reserve Bank of Kansas City Q III: 5–26.

Gerard, H., and K. Nimark. 2008. “Combining Multivariate Density Forecasts Using Predictive Criteria.” Economics Working Papers 1117, Department of Economics and Business, Universitat Pompeu Fabra.

Geweke, J., and G. Amisano. 2010. “Comparing and Evaluating Bayesian Predictive Distributions of Asset Returns.” International Journal of Forecasting 26 (2): 216–230.

Geweke, J. F. 1991. “Evaluating the Accuracy of Sampling-Based Approaches to the Calculation of Posterior Moments.” Staff Report 148, Federal Reserve Bank of Minneapolis.

Gilchrist, S., J. Sim, and E. Zakrajšek. 2014. “Uncertainty, Financial Frictions, and Investment Dynamics.” NBER Working Paper 20038.

Gneiting, T., and A. E. Raftery. 2007. “Strictly Proper Scoring Rules, Prediction, and Estimation.” Journal of the American Statistical Association 102 (477): 359–378.

Gneiting, T., and R. Ranjan. 2011. “Comparing Density Forecasts Using Threshold and Quantile Weighted Scoring Rules.” Journal of Business & Economic Statistics 29 (3): 411–422.

Groen, J. J. J., R. Paap, and F. Ravazzolo. 2013. “Real-Time Inflation Forecasting in a Changing World.” Journal of Business & Economic Statistics 31 (1): 29–44.

Hartman, R. 1972. “The Effects of Price and Cost Uncertainty on Investment.” Journal of Economic Theory 5 (2): 258–266.

Harvey, D., S. Leybourne, and P. Newbold. 1997. “Testing the Equality of Prediction Mean Squared Errors.” International Journal of Forecasting 13 (2): 281–291.

Henzel, S. R., and M. Rengel. 2017. “Dimensions of Macroeconomic Uncertainty: A Common Factor Analysis.” Economic Inquiry 55 (2): 843–877.

Huber, F. 2016. “Density Forecasting Using Bayesian Global Vector Autoregressions with Stochastic Volatility.” International Journal of Forecasting 32 (3): 818–837.

Jones, P. M., and W. Enders. 2016. “The Asymmetric Effects of Uncertainty on Macroeconomic Activity.” Macroeconomic Dynamics 20 (5): 1219–1246.

Jurado, K., S. C. Ludvigson, and S. Ng. 2015. “Measuring Uncertainty.” The American Economic Review 105 (3): 1177–1216.

Kadiyala, K. R., and S. Karlsson. 1997. “Numerical Methods for Estimation and Inference in Bayesian VAR-Models.” Journal of Applied Econometrics 12 (2): 99–132.

Kilian, L., and R. J. Vigfusson. 2011. “Are the Responses of the U.S. Economy Asymmetric in Energy Price Increases and Decreases?” Quantitative Economics 2 (3): 419–453.

Koop, G. M. 2013. “Forecasting with Medium and Large Bayesian VARs.” Journal of Applied Econometrics 28 (2): 177–203.

Koop, G. M., M. H. Pesaran, and S. M. Potter. 1996. “Impulse Response Analysis in Nonlinear Multivariate Models.” Journal of Econometrics 74 (1): 119–147.

Leduc, S., and Z. Liu. 2016. “Uncertainty Shocks are Aggregate Demand Shocks.” Journal of Monetary Economics 82: 20–35.

Litterman, R. B. 1986. “Forecasting with Bayesian Vector Autoregressions – Five Years of Experience.” Journal of Business & Economic Statistics 4 (1): 25–38.

Ludvigson, S., S. Ma, and S. Ng. 2019. “Uncertainty and Business Cycles: Exogenous Impulse or Endogenous Response?” American Economic Journal: Macroeconomics, forthcoming.

McCracken, M. W., and S. Ng. 2016. “FRED-MD: A Monthly Database for Macroeconomic Research.” Journal of Business & Economic Statistics 34 (4): 574–589.

Mumtaz, H., and K. Theodoridis. 2015. “The International Transmission of Volatility Shocks: An Empirical Analysis.” Journal of the European Economic Association 13 (3): 512–533.

Mumtaz, H., and K. Theodoridis. 2018. “The Changing Transmission of Uncertainty Shocks in the U.S.” Journal of Business & Economic Statistics 36 (2): 239–252.

Oi, W. Y. 1961. “The Desirability of Price Instability Under Perfect Competition.” Econometrica 29 (1): 58–64.

Panagiotelis, A., and M. Smith. 2008. “Bayesian Density Forecasting of Intraday Electricity Prices Using Multivariate Skew t Distributions.” International Journal of Forecasting 24 (4): 710–727.

Pierdzioch, C., and R. Gupta. 2017. “Uncertainty and Forecasts of U.S. Recessions.” Working Papers 201732, University of Pretoria, Department of Economics.

Popescu, A., and F. R. Smets. 2010. “Uncertainty, Risk-Taking, and the Business Cycle in Germany.” CESifo Economic Studies 56 (4): 596–626.

Primiceri, G. E. 2005. “Time Varying Structural Vector Autoregressions and Monetary Policy.” Review of Economic Studies 72 (3): 821–852.

Rossi, B., and T. Sekhposyan. 2014. “Evaluating Predictive Densities of US Output Growth and Inflation in a Large Macroeconomic Data Set.” International Journal of Forecasting 30 (3): 662–682.

Rossi, B., and T. Sekhposyan. 2015. “Macroeconomic Uncertainty Indices Based on Nowcast and Forecast Error Distributions.” The American Economic Review 105 (5): 650–655.

Segnon, M., R. Gupta, S. Bekiros, and M. E. Wohar. 2018. “Forecasting US GNP Growth: The Role of Uncertainty.” Journal of Forecasting 37 (5): 541–559.

Sims, C. A. 1993. “A Nine-Variable Probabilistic Macroeconomic Forecasting Model.” In Business Cycles, Indicators, and Forecasting, edited by J. H. Stock and M. W. Watson, 179–212. NBER Book Series Studies in Business Cycles. Chicago: The University of Chicago Press.

Smith, M. S., and S. P. Vahey. 2015. “Asymmetric Forecast Densities for U.S. Macroeconomic Variables from a Gaussian Copula Model of Cross-Sectional and Serial Dependence.” Journal of Business & Economic Statistics 34 (3): 416–434.

Wolters, M. H. 2015. “Evaluating Point and Density Forecasts of DSGE Models.” Journal of Applied Econometrics 30 (1): 74–96.


Supplementary Material

The online version of this article offers supplementary material (DOI: https://doi.org/10.1515/snde-2019-0073).


Published Online: 2020-02-24

© 2020 Magnus Reif, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
