Nowcasting in a Pandemic using Non-Parametric Mixed Frequency VARs

This paper develops Bayesian econometric methods for posterior and predictive inference in a non-parametric mixed frequency VAR using additive regression trees. We argue that regression tree models are ideally suited for macroeconomic nowcasting in the face of the extreme observations produced by the pandemic due to their flexibility and ability to model outliers. In a nowcasting application involving four major countries in the European Union, we find substantial improvements in nowcasting performance relative to a linear mixed frequency VAR. A detailed examination of the predictive densities in the first six months of 2020 shows where these improvements are achieved.


Introduction
Mixed Frequency Vector Autoregressions (MF-VARs) have enjoyed great popularity in recent years as a tool for producing timely high frequency nowcasts of low frequency variables. A common practice (see, e.g., Schorfheide and Song, 2015) is to choose a quarterly macroeconomic variable such as GDP and a set of monthly variables and model them together in a VAR so as to produce monthly nowcasts of GDP. The fact that statistical agencies release data such as GDP with a delay, whereas appropriately chosen monthly variables are released with less of a delay, further enhances the benefits of the MF-VAR: nowcasts can be produced in a timely fashion.
The pandemic lockdown of 2020 has further increased the need for timely, high frequency nowcasts of economic activity. And the increasing availability of a variety of high frequency (i.e. monthly, weekly or daily) and quickly released data (i.e. some variables are released almost instantly) presents rich opportunities for the mixed frequency modeller. However, the pandemic also poses challenges to the conventional, linear, MF-VAR. During the pandemic, we have seen values of variables that are far from the range of past values. Linear time series econometric methods seek to find patterns in past data. If current data are very different, using such patterns and linearly extrapolating them may be highly questionable. This has led researchers to develop new VAR frameworks for nowcasting during the pandemic. For instance, Schorfheide and Song (2020) find that the model developed in Schorfheide and Song (2015) nowcasts poorly, but that improvements are obtained if they estimate their MF-VAR using data through 2019 and then produce conditional forecasts for the first half of 2020. In essence, the extreme data in the first half of 2020 caused estimates of the full sample MF-VAR coefficients to change in a manner which led to poor forecasts. Lenza and Primiceri (2020) propose an alternative VAR-based approach which allows the error covariance matrix to have a mixture distribution. In essence, the pandemic is treated as a large variance shock and pandemic observations are, thus, drastically downweighted in the model estimation. They conclude: "Our results show that the ad-hoc strategy of dropping these observations may be acceptable for the purpose of parameter estimation. However, disregarding these recent data is inappropriate for forecasting the future evolution of the economy, because it vastly underestimates uncertainty."
Thus, although Schorfheide and Song (2020) and Lenza and Primiceri (2020) adopt very different approaches, they end up with similar advice: discard the pandemic observations when estimating the model.
It is possible to envisage other approaches to modifying the MF-VAR for pandemic times. These would involve parameter change of some form (e.g. structural break or time-varying parameter, TVP, models). But structural break models would be plagued by the fact that there are too few observations post-break to permit reliable estimation. This problem would not occur with TVP models, which assume smoothly adjusting coefficients. But TVP models are not capable of adjusting for sudden and strong jumps in the endogenous variables within a few months such as have been occurring in the pandemic. In light of these considerations, in this paper we adopt a different, non-parametric, approach. We argue that such an approach should automatically decide how to treat the pandemic observations in a sensible fashion. In an empirical exercise involving four European countries, we demonstrate the superior nowcasting performance of our approach.
The non-parametric model we adopt involves Bayesian Additive Regression Trees (BART, see Chipman et al., 2010). BART is a flexible and popular approach in many fields of statistics. But BART has rarely been used in time series econometrics. Huber and Rossini (2020) develop Bayesian methods which build BART into a VAR, leading to the Bayesian Additive Vector Autoregressive Tree (BAVART) model, and demonstrate that it forecasts well. In this paper, we develop Bayesian methods for the mixed frequency version of this model (MF-BAVART). This development is non-trivial and, thus, represents a substantial econometric contribution to the literature even apart from the pandemic context. The MF-VAR is a Gaussian linear state space model and well-established Bayesian methods exist for estimation and forecasting. However, MF-BAVART is not linear and, thus, these methods are not directly available. MF-VARs treat the unobserved high frequency values of the low frequency variables as latent states. Conditional on these latent states, we obtain the BAVART and the methods of Huber and Rossini (2020) can be used. It is drawing the latent states (conditional on the BAVART parameters) which is more challenging. We deal with this challenge by rendering the model conditionally Gaussian using recently developed methods for estimating effect sizes in so-called black box models such as BART (see Crawford et al., 2018, 2019). We apply the resulting model to nowcast GDP growth in selected Eurozone economies (Germany, Spain, France and Italy) and show that our approach outperforms the linear MF-VAR model. With some exceptions, it produces slightly better nowcasts through 2019. But when the first two quarters of 2020 are included, the improvements offered by MF-BAVART rise substantially. We investigate where these improvements are coming from in a detailed study of the predictive densities for the first six months of 2020.
The remainder of this paper is organized as follows. In the next section, we define the MF-BAVART and illustrate how it can effectively handle extreme observations such as have occurred during the pandemic and briefly sketch the Markov Chain Monte Carlo (MCMC) algorithm for posterior and predictive Bayesian inference. The third section of the paper contains our empirical work. The fourth section offers a summary and conclusions. Appendix A provides full details of our Bayesian methods including the prior and MCMC algorithm. Appendix B provides additional empirical results.

The MF-BAVART
Suppose we are interested in modeling an M-dimensional vector of time series y_t = (y_m,t, y_q,t) where y_m,t is an M_m-vector, y_q,t is an M_q-vector and t = 1, . . . , T indexes time at the monthly frequency. The variables in y_m,t are observed, but we do not observe y_q,t at any point in time. Instead the statistical agency produces a quarterly figure, y_Q,t. Assuming that y_q,t are monthly growth rates (log difference relative to the previous month) and y_Q,t are quarterly growth rates (log difference relative to the previous quarter), the relationship between them is (see Mariano and Murasawa, 2003):

y_Q,t = (1/9) y_q,t + (2/9) y_q,t-1 + (1/3) y_q,t-2 + (2/9) y_q,t-3 + (1/9) y_q,t-4.   (1)
We refer to this as the intertemporal restriction and note that it applies every third month (e.g. the statistical agency produces quarterly data for the quarter covering January, February and March, but not for the quarter covering February, March and April). We assume that y_t evolves according to a VAR of the form:

y_t = F(X_t) + ε_t,  ε_t ~ N(0, Σ),   (2)

with F(X_t) = (f_1(X_t), . . . , f_M(X_t)) being a vector of potentially nonlinear functions, f_j: R^K -> R, and X_t a K-vector containing the lags of y_t. This is a state space model where the unobserved monthly growth rates, y_q,t, are treated as states. The state equations are given by (2). The measurement equations are the intertemporal restriction in (1) (applicable every third month) and those which simply state that y_m,t are observed every month.
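The weighting in the intertemporal restriction above can be checked numerically. The sketch below is our own illustration (not the authors' code); the function name and argument ordering are ours:

```python
# Mariano-Murasawa (2003) intertemporal restriction: map the five most
# recent latent monthly growth rates to the observed quarterly growth rate.
def quarterly_from_monthly(y_q):
    """y_q: list [y_{q,t-4}, y_{q,t-3}, y_{q,t-2}, y_{q,t-1}, y_{q,t}]."""
    weights = [1/9, 2/9, 1/3, 2/9, 1/9]  # symmetric, sums to one
    return sum(w * y for w, y in zip(weights, y_q))

# A constant monthly growth rate implies the same quarterly growth rate,
# since the weights sum to one.
print(quarterly_from_monthly([2.0] * 5))  # -> 2.0
```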
If F (X t ) is a vector of linear functions then we obtain the linear MF-VAR of, e.g., Schorfheide and Song (2015). Assuming a conditionally Gaussian prior for the VAR coefficients (e.g. the Minnesota prior or a conditionally Gaussian global-local shrinkage prior), posterior and predictive inference is straightforward. That is, standard Bayesian MCMC methods such as Forward-Filtering Backward-Sampling (FFBS, see e.g., Frühwirth-Schnatter, 1994) for Gaussian linear state space models can be used.
In this paper, we wish to treat F(X_t) non-parametrically. In principle, any model can be used for F (e.g. kernel regression, deep neural networks, tree-based models, Gaussian process regression) and the methods derived below could be used with minor modifications. In this paper, we approximate F using BART since, for reasons discussed below, it should be well-designed to capture large shocks and outliers such as those produced by the pandemic. BART approximates each f_j(X_t) as a sum of trees:

f_j(X_t) ≈ Σ_{s=1}^S g(X_t; T_js, μ_js),

where the T_js are so-called tree structures related to the j-th element in y_t, the μ_js are tree-specific terminal nodes and S denotes the total number of trees used. The dimension of μ_js is denoted by b_js, which depends on the complexity of the tree (i.e. this dimension is the number of leaves on the tree). In our empirical work, we follow Chipman et al. (2010) and set S = 250. To understand how BART works, we begin with a single tree (and, for simplicity, suppress the js subscripts which distinguish the various trees and equations in the VAR). In the language of regression, a tree takes as an input the values of the explanatory variables for an observation and produces as an output a fitted value for the dependent variable for that observation. These fitted values are the parameters related to the terminal nodes. It does this by dividing the space of explanatory variables into various regions using a sequence of binary rules. These so-called splitting rules take the form {X ∈ A_r} or {X ∉ A_r}, with A_r being a partition set for r = 1, . . . , b and X = (X_1, . . . , X_T) a full-data matrix of dimension T × K. The partition rules involve an explanatory variable and depend on whether it is above or below a threshold, c.
If we let X_•i denote the i-th column of X, then the partition set takes the form A_r = {X_•i ≤ c} or A_r = {X_•i > c} for a threshold c. The fitted value of the dependent variable for an observation with explanatory variables in the set A_r produced by a single regression tree takes the form:

g(X; T, μ) = μ_r, if X ∈ A_r, r = 1, . . . , b.
A key point to emphasize is that everything defining the tree is treated as an unknown parameter and estimated. This includes the terminal node parameters (µ, which is the vector of fitted values the algorithm can choose between), their number (b), as well as all the elements of the tree structure (i.e. the explanatory variable, X_•i, and threshold, c, chosen to define each splitting rule) and even the number of splitting rules each tree involves. In the following sub-section, we provide a brief empirical illustration of what an estimated tree looks like in our data set.
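As a concrete sketch of these definitions (our own illustrative code, not the authors'), a single tree can be represented as nested splitting rules that map an observation's explanatory variables to a terminal-node fitted value:

```python
# A single regression tree: internal nodes hold a splitting rule (a variable
# name and threshold c); terminal nodes hold the fitted value mu_r.
def tree_predict(x, node):
    """x: dict of explanatory variables; node: nested-dict tree structure."""
    if "mu" in node:                    # terminal node reached
        return node["mu"]
    if x[node["var"]] < node["c"]:      # splitting rule {X_i < c}: go left
        return tree_predict(x, node["left"])
    return tree_predict(x, node["right"])

# Illustrative tree with b = 3 terminal nodes (all numbers are made up)
tree = {"var": "x1", "c": 0.0,
        "left": {"mu": -2.0},
        "right": {"var": "x2", "c": 1.5,
                  "left": {"mu": 0.5}, "right": {"mu": 3.0}}}
print(tree_predict({"x1": 0.7, "x2": 2.0}, tree))  # -> 3.0
```

In BART, the variable names, thresholds, number of splits and terminal-node values in such a structure are all sampled within the MCMC algorithm rather than fixed in advance.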
The preceding discussion involved a single tree and illustrated its flexibility. But BART involves not just one regression tree, but rather a sum of them. By adding up various regression trees, even greater flexibility is produced. Thus, BART can be interpreted as a non-parametric approach capable of approximating any nonlinear function. But, as with any non-parametric approach, additive regression trees risk over-fitting. This is why Bayesian methods have been commonly used, as prior information can mitigate this problem. We use regularization priors to reduce the complexity of the tree structures and to shrink the terminal nodes. In the jargon of this literature, we force each tree to be small and, thus, act as a weak learner. This essentially implies that for a large S, each tree explains only a limited fraction of the variation in y_t.
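The sum-of-trees idea can be illustrated with a toy sketch (ours, not the paper's code): summing many one-split "stump" trees, each a weak learner, builds a flexible step function.

```python
# A stump is the smallest possible tree: one splitting rule, two leaves.
def stump(x, c, mu_left, mu_right):
    return mu_left if x < c else mu_right

def sum_of_trees(x, stumps):
    return sum(stump(x, c, l, r) for c, l, r in stumps)

# Four hand-chosen stumps (illustrative values) produce a staircase
# approximating a linear ramp; with hundreds of estimated trees, far
# richer nonlinear shapes can be captured.
stumps = [(-1.5, 0.0, 0.5), (-0.5, 0.0, 0.5), (0.5, 0.0, 0.5), (1.5, 0.0, 0.5)]
print([sum_of_trees(x, stumps) for x in (-2.0, 0.0, 2.0)])  # -> [0.0, 1.0, 2.0]
```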
In terms of posterior and predictive computation, the point to note is that efficient MCMC algorithms have been derived for estimating BART models. In our MF-BAVART model, we use these conditional on y_q,t. That is, one block of the MCMC algorithm (to be discussed below) provides draws of y_q,t and, conditional on these draws, we use standard algorithms for drawing the BART parameters. In principle, we could draw the parameters of the trees and Σ as an entire M-dimensional system. However, we follow Carriero et al. (2019) and estimate the model on an equation-by-equation basis by conditioning on the lower Cholesky factor of Σ. This speeds up computation time enormously. Complete details of our prior and posterior simulation methods are provided in Appendix A.

Empirical illustration of how BART works and handles the pandemic
To provide some additional intuition of what BART is doing and why it might be a good approach to handle the extreme observations associated with the pandemic, we preview our empirical application in a simple way. Full details of our data and application are provided below; suffice it to note here that results in this sub-section are for GDP growth for Germany and estimated on the full sample of data which runs through 2020Q2. We use a single tree with a relatively non-informative prior so as to allow for more complex tree structures. This is just for illustration. In our main empirical work, we use many trees and a regularization prior. Figure 1 shows the estimated regression tree for Germany. The tree is organized with a condition (e.g. XIP(t − 1) < −16.96) at the top of every binary split. If this condition holds, you move down the left branch, else you move down the right branch. So, for example, the rightmost terminal node (1.594) is chosen by observations which have XIP(t − 1) greater than or equal to −16.96 (go right at the first split), XGDP(t − 1) greater than or equal to −1.392 (go right at the second split) and XIP(t − 1) greater than or equal to 1.774 (go right at the third split). Hence, the fitted value for GDP growth for observations with last month's industrial production growth at or above 1.774 and last month's estimated GDP growth at or above −1.392 is 1.594.
The next point to emphasize is that everything in the tree is estimated by the algorithm. This includes all the numbers (i.e. the values of the terminal nodes and the thresholds in the splitting conditions), the choice of variables in the splitting conditions (e.g. some of the conditions depend on GDP growth, others depend on the growth in industrial production and both appear at various lags) and the number of splits that occur. For instance, to get to the leftmost terminal node involves checking one condition (one split), to get to the rightmost terminal node involves checking three conditions (three splits). To get to some terminal nodes there are multiple splits involving different variables, which is particularly useful for correlated explanatory variables. All in all, BART has great flexibility in capturing any sort of behavior, including characteristics common with macroeconomic data.
Another point to note is that our MF-BAVARTs involve six variables, not just the four variables which appear in the tree. The BART algorithm has decided that the other two variables should not be involved in the splitting conditions and have no useful explanatory power for GDP growth (loosely analogous to these other variables being insignificant).
With regards to modelling during the pandemic, note that there are some terminal nodes chosen by very few observations, and a couple contain a single observation. One of these is the leftmost node in the figure, with fitted value −23.99. This is a pandemic observation. What BART is doing is creating nodes for capturing outliers. Whereas the parameter estimates in a linear model can be substantially affected by an outlier, BART can simply add a new branch to control for it without affecting the main body of the tree. We will explore this issue in more detail in our empirical work, but this is the intuition for why our MF-BAVART ends up nowcasting better than the linear MF-VAR, particularly around the time of the pandemic.

Drawing the latent states in the MF-BAVART
Sub-section 2.1 defined the MF-BAVART and discussed how well-established MCMC methods can be used to draw the BART parameters conditional on the states (i.e. the unobserved high frequency values of the low frequency variables). To complete the MCMC algorithm we need a method for drawing the states, conditional on the BART parameters. In a linear MF-VAR this is done using standard Bayesian state space algorithms such as FFBS. But with MF-BAVART this is more complicated since the model is highly non-linear and FFBS is not directly applicable. Accordingly, we borrow from the literature that deals with estimating effect sizes in black-box models (see Crawford et al., 2018, 2019) to produce a linear approximation to F(X_t). Given this linear approximation, FFBS can be used to draw y_q,t. Thus, this step in the MCMC algorithm is an approximate one, but our empirical results indicate the approximation is a good one.
To explain our linear approximation, note that in linear regression models the effect size is commonly interpreted as the magnitude of the projection of X onto Y = (y_1, . . . , y_T), which takes the form:

Proj(X, Y) = X^+ Y,

where X^+ denotes the Moore-Penrose pseudoinverse of X. In the case where X is a full rank matrix this projection is simply (X'X)^{-1} X'Y and the effect size is simply the least squares estimate (i.e. it is an estimate of the magnitude of the marginal effect of the explanatory variables on the dependent variables). We follow Crawford et al. (2018, 2019) and adopt this idea but using the non-parametric functions F = (F(X_1), . . . , F(X_T)) in place of Y. This produces the following estimate which can be interpreted as an effect size:

Ã = X^+ F.   (3)

The argument for why Ã can be interpreted as an effect size similar to the least squares estimator in linear models is provided in detail in papers such as Crawford et al. (2018), Crawford et al. (2019) and Ish-Horowicz et al. (2020). But in essence the justification is based on the idea that it can be shown that at the T observations it is the case that F ≈ XÃ. We use this fact to produce a linear approximation to the non-parametric multivariate model:

y_t ≈ Ã'X_t + ε_t,  ε_t ~ N(0, Σ).   (4)

Since we now have a linear model with Gaussian shocks, standard techniques such as FFBS can be used to draw y_q,t based on the Gaussian linear state space model defined by (4) and the intertemporal restriction, (1).
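The effect-size idea above can be sketched in a few lines (an assumed illustration, not the authors' implementation): project the fitted non-parametric values back onto X via the Moore-Penrose pseudoinverse. When F happens to be exactly linear in X, the construction reduces to least squares and recovers the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 200, 3
X = rng.standard_normal((T, K))

# For illustration, let F be an exactly linear function of X with known
# coefficients beta (our choice); in the paper F comes from BART draws.
beta = np.array([1.0, -2.0, 0.5])
F = X @ beta

A_tilde = np.linalg.pinv(X) @ F       # effect size: A_tilde = X^+ F
print(np.allclose(A_tilde, beta))     # -> True

# F is then approximated by X @ A_tilde, giving the conditionally linear
# state equation to which FFBS can be applied.
```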

Empirical Results
In this section, we investigate the performance of our MF-BAVART model for forecasting GDP growth using four data sets with relatively short samples. The short sample arises since some of the variables have only been collected for a short time period. This is an issue which arises with many of the new data sets that are becoming popular (e.g. internet search data) and, accordingly, we felt it useful to test our methodology in the type of context where it might be used in the future. All models use a lag length of 5 since this is the number of lags in the intertemporal restriction in (1).
Figure 1: Estimated tree structure for Germany using a single tree. Notes: The variables in the tree are GDP, IP (industrial production), ESI (economic sentiment indicator) and PMI (purchasing managers' index); the prefix X denotes the growth rate of a variable. Complete definitions are given in the empirical section of this paper. The number of observations choosing each terminal node is denoted by n. The splitting rules are defined such that, if the condition holds, you move down the left branch of the tree, else you move down the right branch.

Data
We use monthly and quarterly data on Germany (DE), France (FR), Italy (IT) and Spain (ES) from 2005M03/2005Q1 to 2020M06/2020Q2 on the following M = 6 variables:

1. GDP growth: quarterly GDP growth (abbreviated GDP), released six weeks after the end of the respective quarter.
2. Industrial production: monthly growth rate of industrial production (abbreviated IP), released with a lag of approximately six weeks.
3. Economic sentiment indicator: monthly growth rate of the economic sentiment indicator (abbreviated ESI), released on the next-to-last working day of the respective month.
4. New car registrations: monthly growth rate of new car registrations (abbreviated CAR), released with a delay of two and a half weeks.
4. Purchasing managers' index: monthly growth rate of the purchasing managers' index (abbreviated PMI), released on the first working day of the next month.
6. One-year-ahead interest rates (abbreviated EUR), monthly average, available immediately after the end of the respective month.
Data on GDP and industrial production is obtained from Eurostat, the Economic Sentiment Indicator is provided by the European Commission, figures on new car registrations are released by the European Automobile Manufacturers Association (ACEA), PMI readings come from Markit and the interest rate data is obtained from Macrobond.

The design of the pseudo-real time nowcasting exercise
Given the relatively short sample size we begin evaluating nowcasts in 2011Q1. Within each quarter, we produce three nowcasts, one for each month in the quarter. Our model nowcasts monthly growth rates, y q,t , which are turned into quarterly growth rates for comparison with the actual realization of quarterly GDP growth. All of our nowcasts respect the release calendar (e.g. a nowcast produced for January will be made at the beginning of February using the data that has been released by then). We compare results from our MF-BAVART specification to a standard linear MF-VAR which is identical in all respects except that it is linear. This implies that we set F (X t ) = AX t with A being an M × K coefficient matrix. On a = vec(A) we use a Horseshoe prior analogous to the one defined in (A.2). The prior on Σ is the same in the two models.

Comparing the MF-BAVART to the MF-VAR
Tables 1 and 2 summarise our findings. They offer a comparison of MF-BAVART to the conventional MF-VAR in terms of root mean squared forecast errors (RMSEs), log predictive scores (LPSs, which are log predictive likelihoods summed over the nowcast evaluation period) and continuous ranked probability scores (CRPSs). To investigate the pandemic period, we produce two sets of results: one for the full sample (including the pandemic period) and one ending in 2019.
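For a Gaussian predictive density N(mu, sigma^2), all three metrics have simple closed forms; the sketch below (ours, purely illustrative — the paper's predictive densities are simulation-based and non-Gaussian) shows how each would be computed for a single nowcast:

```python
import math
from statistics import NormalDist

def log_predictive_likelihood(y, mu, sigma):
    """Log of the Gaussian predictive density evaluated at the realization y."""
    return math.log(NormalDist(mu, sigma).pdf(y))

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS for a N(mu, sigma^2) predictive density."""
    z = (y - mu) / sigma
    std = NormalDist()
    return sigma * (z * (2 * std.cdf(z) - 1) + 2 * std.pdf(z)
                    - 1 / math.sqrt(math.pi))

def rmse(errors):
    """Root mean squared forecast error over a list of nowcast errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

Lower CRPS and RMSE, and higher LPS, indicate better nowcasts; LPS and CRPS reward the whole predictive density, RMSE only the point nowcast.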
Note first that, as we move from month to month within a quarter, our nowcasts almost always improve. This statement holds true for all nowcast evaluation metrics and countries. This provides evidence that mixed frequency methods are useful for nowcasting in these data sets. As new information is released each month, our nowcasts of GDP growth improve. In terms of the comparison of linear versus non-parametric mixed frequency methods, with some exceptions, prior to the pandemic, we are finding that MF-BAVART nowcasts somewhat better than the linear MF-VAR. But the improvements from using the non-parametric model are not large. The main exception is Germany where the MF-VAR nowcasts slightly better than MF-BAVART. For Spain and Italy, which had more volatile GDP growth over our sample period, MF-BAVART is nowcasting substantially better than the MF-VAR. For France the various methods of nowcast comparison tell slightly different stories. LPSs indicate the nonparametric approach is nowcasting better, but CRPSs indicate the linear model is doing slightly better.
When we turn to Table 2, which includes the pandemic period, we tend to see much better nowcast performance of the MF-BAVART relative to the MF-VAR. There are exceptions to this, both for Germany and in some RMSEs. It is interesting to note that measures using the entire predictive density (i.e. LPSs and CRPSs) show large improvements for the MF-BAVART relative to the MF-VAR, whereas the measure which uses only point nowcasts does not. Clearly, the benefits of using the non-parametric approach lie largely in its ability to better model second and higher predictive moments for the extreme observations in the first half of 2020.
The pattern for Germany's LPSs is interesting. In Table 1, the MF-VAR was producing slightly better LPSs for every month within the quarter. However, in Table 2, MF-BAVART is producing LPSs which are worse for the first month within a quarter, but better for the second and third months. To investigate such patterns more deeply, consider Figures 2a, 2b, 2c and 2d, which plot cumulative sums of log predictive likelihoods over time. For Spain and Italy, it can be seen that MF-BAVART is nowcasting well relative to MF-VAR throughout the entire sample, but there is a particularly large jump in 2020. For France, pre-2020 the performance of MF-BAVART is mixed, but in the first half of 2020 there is the same kind of large jump in BART's performance as was observed for Italy and Spain. For these three countries, for every month within each quarter, BART is clearly doing a much better job of modelling the pandemic shock than the linear model. For Germany, a similar jump in BART's performance is observed during the pandemic but only in the second and third months of the quarter. In the first month, however, BART is nowcasting very poorly during the pandemic and it is this that is driving the aforementioned finding for Germany. But with this one exception, the MF-BAVART model is doing an excellent job of handling the pandemic.

Are the nowcasts well calibrated?
The preceding sub-section compared the relative performance of the MF-BAVART to the MF-VAR, but did not present any evidence on the nowcast performance of either in an absolute sense.
In Appendix B, we provide graphs of the nowcasts of both approaches plotted against realized GDP growth for the four countries and three monthly nowcasts within each quarter. An examination of them indicates that the MF-BAVART's nowcasts are better calibrated, particularly for Spain. In this sub-section, we investigate this issue more formally using Probability Integral Transforms (PITs). In particular, we follow a common practice (e.g. Clark, 2011) and produce PITs for our nowcasts and transform them using the inverse of the c.d.f. of a standard Gaussian. We denote these transformed PITs by r_t, with t running over our nowcast evaluation period. Perfectly calibrated nowcasts should lead to r_t having mean zero, variance one and being uncorrelated over time. We calculate the sample mean (labelled µ in the tables), variance (labelled σ^2) and estimated AR(1) coefficient (labelled AR(1)) along with 95% credible intervals. Tables 3 and 4 report these summary statistics for the sample through 2019 and the full sample, respectively.
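The calibration check described above can be sketched as follows (our own illustrative code, with function names of our choosing): transform the PITs through the inverse standard normal c.d.f. and summarise the resulting r_t.

```python
from statistics import NormalDist, fmean

def calibration_stats(pits):
    """Return (mean, variance, AR(1) coefficient) of transformed PITs r_t."""
    # Clamp away from 0 and 1 since inv_cdf is undefined at the endpoints
    r = [NormalDist().inv_cdf(min(max(u, 1e-10), 1 - 1e-10)) for u in pits]
    mu = fmean(r)
    var = sum((x - mu) ** 2 for x in r) / (len(r) - 1)
    # AR(1) coefficient via least squares of (r_t - mu) on (r_{t-1} - mu)
    num = sum((r[t] - mu) * (r[t - 1] - mu) for t in range(1, len(r)))
    den = sum((x - mu) ** 2 for x in r[:-1])
    return mu, var, num / den

# Perfectly calibrated nowcasts imply i.i.d. uniform PITs, so r_t should
# have mean close to zero, variance close to one and AR(1) close to zero.
```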
Beginning with the linear MF-VAR, note that even in the pre-pandemic sample, there is some evidence of poor calibration. For the sample mean, the point estimates are consistently well away from zero, although the credible intervals always contain zero. The sample variances are substantially higher than one and credible intervals all lie completely above 1.0 indicating the predictive variance of the linear model is too small. There is sometimes evidence of autocorrelation in r t , particularly for the first month in a quarter. When we move to the full sample, these problems get much worse, particularly the sample variance of r t which now becomes very large.
If we turn to the MF-BAVART in Table 4, it can be seen that its nowcasts are better calibrated. Even for the full sample, the credible intervals for the sample mean of r_t always contain zero and, with the exception of a couple of cases in the first month, the estimated AR(1) coefficient is insignificant. It is the case that the sample variance of r_t is still too high, but to a much lesser extent than for the MF-VAR. Indeed this sample variance tends to be roughly half of what it was with the MF-VAR (again with the exception of Germany in the first month of the quarter). Thus, use of the MF-BAVART has gone a long way towards improving the calibration problems of the MF-VAR, even if it has not completely fixed them.

A deeper look at the pandemic
In this section, we provide more insight into how MF-BAVART is nowcasting the two pandemic quarters and into the role of the individual variables. We do so by estimating five different versions of our models using different sets of variables. The first version is the one we have used thus far, involving all six variables (this is labelled Full in the figures). The other models all involve GDP growth and industrial production (the two main variables) along with one additional high frequency variable (these are labelled by the name of the additional variable in the tables). We can then examine aspects of the six predictive densities (i.e. for the six months in the first half of 2020) for the five different models for each country for MF-BAVART and MF-VAR. Figure 3 presents the log predictive likelihoods for each individual observation. The story that emerges reinforces our previous evidence that MF-BAVART offers substantial advantages in nowcasting during pandemic times. When the pandemic hit, the log predictive likelihoods from both models tended to become quite negative, particularly during 2020Q2. But, with one main exception, this drop in predictive likelihoods was much larger for the linear model than the non-parametric one. The one exception was noted previously and occurred for Germany for nowcasts made for the first month of each quarter. It can now be seen why this occurs: in the first month of 2020Q2, the MF-VAR produced a nowcast of German GDP growth which proved more accurate than that of MF-BAVART. If we turn to the issue of which variables are most useful in the nowcasts, for MF-BAVART the five different models tend to produce similar log predictive scores during the pandemic months. However, for MF-VAR there are sometimes substantial differences between models. We noted previously how the one time and country where the MF-VAR nowcasts better than MF-BAVART was Germany in the first month of 2020Q2.
This finding occurred for the full model, but it can be seen that for several of the smaller models the MF-VAR is actually nowcasting worse than MF-BAVART for this month.
It is interesting to note that, particularly for 2020Q1, we often observe that the full model produces slightly inferior density forecasts compared to the smaller models. For instance, we find in 2020Q1 that models which contain just GDP, IP and CAR perform as well as or slightly better than the full model. In 2020Q2, however, the full specifications tend to nowcast better, with different variables tending to be of varying importance across countries (e.g. interest rates seem to be important for Italy and France, while car registrations work well for Germany). Figures 4a, 4b, 4c and 4d plot the predictive densities for the various models and countries for the first six months of 2020. The key general finding is that, as expected, MF-BAVART is much more flexible than the MF-VAR. Particularly in 2020Q2, the predictive densities it produces tend to be much more dispersed, feature fatter tails, are often asymmetric and there is even some slight evidence (in the case of Germany) of multi-modality. This contrasts with the MF-VAR, where the predictive densities tend to be closer to Gaussian. In light of the recent interest in macroeconomics in models involving asymmetries and multimodalities (see, e.g., Adrian et al., 2019a,b), this feature of BART is particularly attractive and is the source of the improvements in nowcast performance during the pandemic.
Another feature of the predictive densities worth noting is that for the MF-VAR, the nowcast density for the Full model is sometimes shifted towards zero relative to the smaller models. It does not look like a combination of the nowcast densities for the smaller models. These properties do not occur with MF-BAVART. This is due to the horseshoe prior used with the MF-VAR shrinking larger models more aggressively and highlights the importance of prior elicitation in linear VARs. With MF-BAVART we are using a standard prior from the BART literature (see Chipman et al., 2010) and are obtaining results which are robust over model dimension, such that the Full model nowcast density appears like a sensible and flexible combination of those of the small models.

Summary and Conclusions
MF-VARs have been a standard tool for producing timely, high frequency nowcasts of low frequency variables for several years. With the arrival of the pandemic, the need for such nowcasts has become even more acute. However, conventional linear MF-VARs have nowcast poorly during the pandemic due to their inability to deal effectively with the extreme observations that have occurred. In this paper, we have developed MF-BAVART, a non-parametric model using additive regression trees. MF-BAVART can be cast as a nonlinear state space model. We develop an approximate MCMC algorithm in which the parameters defining the conditional mean of the VAR are drawn using a standard BART algorithm and, conditional on these, the states are drawn using a linear approximation taken from the machine learning literature on black box models. Our nowcasting exercise, involving four major EU countries, shows that MF-BAVART, with few exceptions, forecasts better than the linear MF-VAR at all times in our sample, with particularly large nowcasting benefits during the pandemic. We show how and why this occurs by providing a detailed comparison of nowcast densities in the first six months of 2020.

A Priors and Posterior Simulation Algorithm
The model outlined in Section 2 is estimated using Bayesian techniques. This implies that we have to specify suitable priors on the parameters associated with the trees as well as on $\Sigma$.
Before discussing the precise prior setup, we show how to rewrite the VAR as a system of unrelated regression models. This approach has the advantage that the computational burden is drastically reduced since we can perform equation-by-equation estimation. Let $Q$ be an $M \times M$ lower triangular matrix with unit diagonal such that $\Sigma = Q H Q'$, where $H = \text{diag}(\sigma_1^2, \ldots, \sigma_M^2)$ denotes a diagonal matrix with variances $\sigma_j^2$. Notice that the first equation of (2) can be written as: $y_{1t} = f_1(X_t) + \eta_{1t}$, $\eta_{1t} \sim N(0, \sigma_1^2)$. The second equation is given by: $y_{2t} = f_2(X_t) + q_{21}\eta_{1t} + \eta_{2t}$, $\eta_{2t} \sim N(0, \sigma_2^2)$.
In general, the $j$th ($j > 1$) equation can be written as: $y_{jt} = f_j(X_t) + q_j' Z_{jt} + \eta_{jt}$, $\eta_{jt} \sim N(0, \sigma_j^2)$. This implies that, conditional on the shocks to the previous $j-1$ equations, the $j$th equation is a standard regression model that features a non-parametric part given by $f_j(X_t)$ and a regression part $q_j' Z_{jt}$, with $q_j = (q_{j1}, \ldots, q_{j,j-1})'$ and $Z_{jt} = (\eta_{1t}, \ldots, \eta_{j-1,t})'$. The $(j-1)$-dimensional vector $q_j$ stores the first $j-1$ elements of the $j$th row of $Q$. These equations are conditionally independent and standard MCMC techniques can be readily applied. Alternative algorithms replace the shocks with the contemporaneous values of $y_t$; this introduces order dependence, which we avoid by conditioning on the shocks. Thus, we are using a standard sampling algorithm that is commonly used to sample from the multivariate Gaussian (Carriero et al., 2019).
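The decomposition underlying this equation-by-equation rewriting can be sketched numerically. The following is an illustrative numpy snippet (not from the paper's code) showing how, for a generic positive definite covariance matrix, the unit lower triangular $Q$ and the diagonal $H$ can be recovered from a Cholesky factor so that $\Sigma = Q H Q'$:

```python
import numpy as np

# Sketch: recover the unit lower triangular Q and the diagonal H with
# Sigma = Q H Q' from the Cholesky factor of a generic covariance matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)          # illustrative positive definite Sigma

L = np.linalg.cholesky(Sigma)            # Sigma = L L'
d = np.diag(L)                           # square roots of the error std devs' scale
Q = L / d                                # rescale columns: unit diagonal
H = np.diag(d ** 2)                      # H = diag(sigma_1^2, ..., sigma_M^2)

assert np.allclose(Q @ H @ Q.T, Sigma)   # Sigma = Q H Q' holds exactly
```

The column rescaling leaves $Q$ lower triangular with ones on the diagonal, which is what makes the recursive, shock-conditional system above valid.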

A.1 The Prior
The priors we use are all specified in an equation-specific manner and are thus (up to minor differences caused by the fact that the dimension of $Z_{jt}$ differs across equations) symmetric across equations. For each equation $j$, we closely follow Chipman et al. (2010) and use a regularization prior that can be factorized as follows: $p((T_{j1}, \mu_{j1}), \ldots, (T_{jS}, \mu_{jS}), \sigma_j, q_j) = \left[\prod_s p(\mu_{js}|T_{js}) p(T_{js})\right] p(q_j)\, p(\sigma_j),$
with $p(\mu_{js}|T_{js}) = \prod_i p(\mu_{i,js}|T_{js})$ and $\mu_{i,js}$ being the $i$th element of $\mu_{js}$. This prior implies independence between equations, trees, covariance parameters and error variances. Within trees, we assume that the terminal leaf parameters are independent of each other but depend on the specific tree structure $T_{js}$. Starting with the prior on $T_{js}$, we follow Chipman et al. (1998) and specify a tree generating stochastic process that consists of three parts. The first part relates to the probability that a given node at depth $n = 0, 1, 2, \ldots$ is not a terminal node, given by $\alpha (1+n)^{-\beta}$, where $\alpha \in (0, 1)$ and $\beta > 0$ denote scalar hyperparameters. Larger (smaller) values of $\beta$ ($\alpha$) introduce a larger penalty on more complex tree structures. This prior thus controls for overparameterization by keeping trees rather small and simple (so that they act as weak learners). In our empirical application we set $\alpha = 0.95$ and $\beta = 2$. This is the standard choice proposed by Chipman et al. (2010) that works well for a wide range of datasets and in simulations. The second part concerns the possible values the thresholds $c$ can take. Here we assume a discrete uniform distribution over all observed values of the $i$th covariate $X_{\bullet i}$. Finally, the last part deals with the specific variables used in the splitting rule. Again, in the absence of substantial prior information, we use a uniform distribution over the $K$ columns of $X$.
Consistent with Chipman et al. (2010), we construct the prior on $\mu_{i,js}$ by transforming $Y$ such that the transformed values range from $-0.5$ to $0.5$. This allows us to use a zero mean Gaussian prior on $\mu_{i,js}$ that places substantial prior mass on conditional mean values between the minimum and maximum values of the columns of $Y$.
The prior variance $V_\mu$ is set as follows: $V_\mu = \left(0.5/(\pi\sqrt{S})\right)^2$, with $\pi$ denoting a suitable positive constant. Notice that if $S$ or $\pi$ is increased, the prior is increasingly pushed towards zero and the effect of a single tree becomes smaller. This prior has the big advantage that values far outside the range of $Y$ are highly unlikely but not ruled out a priori. We follow much of the recent literature and set $\pi = 2$. For $\sigma_j^2$ we use the conjugate scaled inverse chi-square distribution, $\sigma_j^2 \sim \nu_j \xi_j / \chi^2_{\nu_j}$, whereby $\nu_j$ and $\xi_j$ denote hyperparameters that are calibrated using a data-based estimate of $\sigma_j$, denoted $\hat{\sigma}_j$. This data-based estimate is taken to be the OLS standard deviation from a univariate AR(5) model. The values of $\nu_j$ and $\xi_j$ are then chosen such that the $v$th quantile of the prior is located at $\hat{\sigma}_j$, i.e. $P(\sigma_j < \hat{\sigma}_j) = v$. In our application we use $v = 0.75$ and set the degrees of freedom $\nu_j = T/2$. We found that this choice avoids excessively large values of $\sigma_j^2$ during the pandemic. Smaller values of $\nu_j$ yield similar but slightly more unstable results if the sample is expanded to include the first two quarters of 2020.

A.2 Posterior Simulation
Under this prior and likelihood configuration we can derive a posterior simulation algorithm that consists of simple, well known steps. Hence, we only briefly summarize the main steps involved and point to references that give more details on the specific steps.
The tree structure $T_{js}$ can be obtained marginally of $\mu_{js}$ using the strategy outlined in Section 5.1 of Chipman et al. (1998). In brief, this consists of sampling the tree $T_{js}$ marginally of $\mu_{js}$ and conditional on the other trees $T_{js'}$ for $s' \neq s$. Using a Metropolis-Hastings algorithm, we propose a new tree from the last accepted tree by picking one of four moves. The first move grows a terminal node (with probability 0.25), the second prunes two terminal nodes (with probability 0.25), the third changes a non-terminal splitting rule (with probability 0.40) and the final move swaps a decision rule between a parent (i.e. the node above) and child (i.e. the node below) node (with probability 0.10). The key feature of this algorithm which leads to its nice properties is that $\mu_{js}$ is integrated out and thus the dimension of the estimation problem is kept fixed.
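The move-type selection in this Metropolis-Hastings step amounts to a simple categorical draw. As a minimal illustration (names and weights taken from the text above, code otherwise assumed):

```python
import numpy as np

# Sketch of the proposal-type selection in the tree MH step:
# grow (0.25), prune (0.25), change (0.40), swap (0.10).
MOVES = ["grow", "prune", "change", "swap"]
PROBS = [0.25, 0.25, 0.40, 0.10]

rng = np.random.default_rng(3)
draws = rng.choice(MOVES, size=100_000, p=PROBS)
freq = {m: float(np.mean(draws == m)) for m in MOVES}
print(freq)   # empirical frequencies close to the design probabilities
```

Once a move type is drawn, the proposed tree is accepted or rejected with the usual MH probability, evaluated with $\mu_{js}$ integrated out.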
Next, we simulate the terminal node parameters $\mu_{js}$. These can be obtained by simulating $\mu_{i,js}$ from independent Gaussian distributions which take a textbook conjugate form. The same can be said of the error variances, which are obtained by simulating from a conditional posterior which follows an inverse Gamma distribution.
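For one terminal node this conjugate update has the familiar Gaussian form. The sketch below is illustrative (variance values and residuals are hypothetical, not the paper's): if the residuals assigned to a leaf are $N(\mu, \sigma^2)$ and the prior is $\mu \sim N(0, V_\mu)$, the posterior moments are the textbook precision-weighted combination:

```python
import numpy as np

# Conjugate update for a single terminal-node parameter mu: residuals in
# the leaf are N(mu, sigma2) and the prior is mu ~ N(0, V_mu).
rng = np.random.default_rng(4)
sigma2, V_mu = 0.5, 0.04                         # illustrative variances
r = rng.normal(0.3, np.sqrt(sigma2), size=40)    # residuals assigned to the leaf

post_var = 1.0 / (len(r) / sigma2 + 1.0 / V_mu)  # combine precisions
post_mean = post_var * r.sum() / sigma2          # shrink the data mean toward 0
mu_draw = rng.normal(post_mean, np.sqrt(post_var))
print(post_mean, post_var)
```

The tight prior variance $V_\mu$ shrinks each leaf mean toward zero, which is exactly the weak-learner regularization described in the prior section.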
We sample $q_j$ using (A.1) from a multivariate Gaussian posterior, $q_j | \bullet \sim N(m_j, \Omega_j)$, with moments $\Omega_j = (Z_j' Z_j + V_j)^{-1}$ and $m_j = \Omega_j Z_j' \tilde{y}_j$, where $\tilde{y}_j = y_{\bullet j} - f_j(X)$, $y_{\bullet j}$ denotes the $j$th column of $Y$ and $Z_j = (Z_{j1}, \ldots, Z_{jT})'$. The $\bullet$ notation indicates that we condition on the remaining model parameters and the latent states.
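This draw is a standard Bayesian linear regression step. A minimal numpy sketch with toy data (the prior term $V_j$ and the suppressed error-variance scaling are simplified here for illustration; this is not the paper's code):

```python
import numpy as np

# Gaussian draw for q_j: regress y_tilde = y_j - f_j(X) on the stacked
# shocks Z_j, with moments Omega = (Z'Z + V)^(-1), m = Omega Z' y_tilde.
rng = np.random.default_rng(5)
T, k = 200, 3
Z = rng.standard_normal((T, k))                  # stacked shocks Z_j
q_true = np.array([0.5, -0.3, 0.8])
y_tilde = Z @ q_true + rng.standard_normal(T)    # y_j minus the BART fit

V = np.eye(k)                                    # illustrative prior term
Omega = np.linalg.inv(Z.T @ Z + V)
m = Omega @ Z.T @ y_tilde
q_draw = rng.multivariate_normal(m, Omega)
print(np.round(m, 2))   # close to q_true for this sample size
```

Because $\tilde{y}_j$ nets out the non-parametric fit, this step is linear even though the conditional mean of the VAR is modelled by trees.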
Finally, we use the methods outlined in Sub-section 4 to simulate y q,t .
We repeat this algorithm 30,000 times and discard the first 15,000 draws as burn-in. Standard convergence diagnostics point towards rapid convergence to the joint posterior distribution, closely mirroring the excellent performance of the original algorithm of Chipman et al. (2010).

B Additional Empirical Results
In this appendix, we plot the nowcasts against the realizations. Our model produces monthly nowcasts of GDP growth, which are converted into quarterly nowcasts so as to be comparable to the realizations. To improve readability, we present results through 2019 and through 2020Q2 as separate graphs.