svars: An R Package for Data-Driven Identification in Multivariate Time Series Analysis

Structural vector autoregressive (SVAR) models are frequently applied to trace the contemporaneous linkages among (macroeconomic) variables back to an interplay of orthogonal structural shocks. Under Gaussianity the structural parameters are unidentified without additional (often external and not data-based) information. In contrast, the often reasonable assumption of heteroskedastic and/or non-Gaussian model disturbances offers the possibility to identify unique structural shocks. We describe the R package svars which implements statistical identification techniques that are either heteroskedasticity based or independence based. Moreover, it includes a rich variety of analysis tools that are well known in the SVAR literature. Next to a comprehensive review of the theoretical background, we provide a detailed description of the associated R functions. Furthermore, a macroeconomic application serves as a step-by-step guide on how to apply these functions to the identification and interpretation of structural VAR models.


Introduction
Particularly in macroeconometrics, structural vector autoregressive (SVAR) models have become a prominent tool to determine the impacts of different (economic) shocks in a system of variables. Within these models, the unobserved structural shocks represent information that is hidden in the reduced form vector autoregressive (VAR) model. Nevertheless, analysts might be interested in the system's reaction to exactly this type of isolated shock, which is commonly visualized by means of impulse-response functions. For instance, policy makers could be interested in revealing the effects of an unexpected interest rate cut. Estimating the reduced form VAR by means of least squares (LS) or maximum likelihood (ML) methods is straightforward (see, e.g., Lütkepohl 2005); however, identifying the non-unique structural form is a controversial topic in the SVAR literature.
Beginning with the pioneering work of Sims (1980), two main types of identification strategies have been developed. On the one hand, strategies following Sims's (1980) original ideas refer to economic theory. Theory-based methods impose economic restrictions a priori, for example short-run restrictions (Sims 1980), long-run restrictions (Blanchard and Quah 1989) or specific sign patterns (Uhlig 2005). On the other hand, statistical identification methods, which have been developed more recently, exploit the informational content of specific data features (heteroskedasticity of structural shocks, uniqueness of non-Gaussian independent components). The R package svars, which we describe in this paper, focuses on these statistical methods to identify the structural shocks.
The R (R Core Team 2017) archive network comprises several widely applied packages for multivariate time series models and, in particular, for analyzing VAR models. The vars package (Pfaff 2008) contains estimation techniques for reduced form VAR models, and functions to determine the lag order and to perform several diagnostic tests. Moreover, the vars package allows for the estimation of a basic structural form by means of theory-based short- and long-run restrictions. Further R packages for multivariate time series analysis and VAR estimation are tsDyn (Stigler 2010) and MTS (Tsay 2015). To the authors' knowledge, currently only the VARsignR package (Danne 2015) contains functions for SVAR identification by means of theory-based sign restrictions.
Given the lack of implementations of statistical identification techniques in R, the package svars has been explicitly developed to fill this gap by providing various recent statistical methods to estimate SVAR models. These methods build upon distinct but not mutually exclusive statistical properties of the data (i.e., covariance changes and the uniqueness of independent non-Gaussian structural shocks). The svars package supports six identification techniques. Three identification methods make use of the assumption of heteroskedastic shocks, i.e., identification (i) via changes in volatility (Rigobon 2003), (ii) via smooth transitions of covariances (Lütkepohl and Netsunajev 2017b) and (iii) via generalized autoregressive conditional heteroskedasticity (GARCH) (Normandin and Phaneuf 2004; Bouakez and Normandin 2010). Three further identification methods connect to the uniqueness of non-Gaussian independent components, namely the detection of least dependent innovations based on (iv) Cramér-von Mises (CVM) statistics (Herwartz 2018), (v) distance covariances (Matteson and Tsay 2017) and (vi) a parametric non-Gaussian ML approach (Lanne, Meitz, and Saikkonen 2017).
By offering a variety of identification methods, the svars package can be applied in diverse data settings. Additionally, it extends the existing pool of SVAR techniques in R with more recent bootstrap procedures, further statistics and hypothesis tests directly related to inference in SVAR models. In this sense, the svars package is designed as a complete toolbox for the structural analysis of multivariate time series. Based on objects from reduced form estimations, svars is compatible with other packages such as vars, tsDyn and MTS. Moreover, computationally demanding modules are fully implemented in C++ and linked to R using the Rcpp (Eddelbuettel and François 2011) and RcppArmadillo (Eddelbuettel and Sanderson 2014) libraries. The package is available on CRAN at https://cran.r-project.org/package=svars. The article is organized as follows: Section 2 outlines the SVAR model and the alternative identification methods. In Section 3, we describe bootstrap methods and further diagnostic tools for SVAR analysis. Section 4 details the package design, and Section 5 provides an illustrative application of two identification schemes to a real world dataset. Lastly, a summary and an outlook on future extensions of the svars package complete this article.

Structural vector autoregressive models
Consider a K-dimensional VAR model of order p,

y_t = µ + A_1 y_{t−1} + ... + A_p y_{t−p} + u_t, (1)

where y_t = [y_{1t}, ..., y_{Kt}]⊤ is a vector of observable variables, A_i, i = 1, ..., p, are (K × K) coefficient matrices, and intercept parameters are collected in µ. We focus on the case of time invariant deterministic terms for notational clarity. Model augmentation with time-varying deterministic terms (e.g., breaks, linear trends), however, is straightforward. Furthermore, the VAR model is stationary (invertible) by assumption. The vector u_t consists of reduced-form residuals, which are serially uncorrelated with E(u_t) = 0 and Cov(u_t) = Σ_u. The nonsingular matrix B captures the instantaneous effects of the structural shocks ε_t = B^{−1} u_t on the variables of the system.
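To make the reduced form concrete, the following base-R sketch (with hypothetical parameter values, no svars dependency) simulates a bivariate VAR(1) and recovers µ and A_1 by equation-wise least squares:

```r
# Simulate y_t = mu + A1 y_{t-1} + u_t and re-estimate mu, A1 by LS
set.seed(1)
K <- 2; T <- 2000
A1 <- matrix(c(0.5, 0.1, 0.2, 0.4), K, K)   # stationary coefficient matrix
mu <- c(0.3, -0.1)
y <- matrix(0, T, K)
for (t in 2:T) y[t, ] <- mu + A1 %*% y[t - 1, ] + rnorm(K, sd = 0.5)
# LS: regress y_t on a constant and y_{t-1}, equation by equation
Z <- cbind(1, y[1:(T - 1), ])
beta_hat <- solve(t(Z) %*% Z, t(Z) %*% y[2:T, ])   # (1 + K*p) x K
mu_hat <- beta_hat[1, ]
A1_hat <- t(beta_hat[-1, ])                        # row k holds equation k
```

With a sample of this size the LS estimates are close to the true values, illustrating why estimation of the reduced form is considered straightforward.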
In the following, we briefly discuss the identification problem in SVAR analysis. Subsequently, we present six alternative statistical approaches to uniquely determine the structural shocks. Finally, we provide short guidance on how to choose among these alternative identification approaches.

The identification problem
Cross-equation relations between the reduced-form residuals in Equation 1 are characterized by the covariance matrix

Σ_u = E(u_t u_t⊤) = B Σ_ε B⊤, (2)

where the covariance of the structural shocks Cov(ε_t) = Σ_ε is a diagonal matrix. Thus, structural shocks are uncorrelated, which enables a meaningful impulse-response analysis (Lütkepohl 2005). Normalizing the variances of the structural shocks to unity (Σ_ε = I_K) yields

Σ_u = B B⊤. (3)

Without any further model specification, Equation 3 holds for every matrix B which decomposes the covariance matrix Σ_u. Hence, additional restrictions are necessary to identify a (unique) matrix B. In this paper, we focus on identification techniques which use the underlying data structure to determine the structural matrix. After estimating the model in Equation 1 by means of LS or ML methods, the resulting reduced form residual estimates ût and the corresponding covariance estimate Σ̂_u provide the starting point for the subsequent identification techniques. The following two sections introduce the statistical identification methods which constitute the core functions of the svars package.

Identification by means of heteroskedastic innovations
Time series are often characterized by time-varying covariance structures. Therefore, it is tempting to unravel the structural relationships by means of such changes in the second order moments (see, e.g., Sentana and Fiorentini 2001; Rigobon 2003). The svars package includes three alternative heteroskedasticity based SVAR identification schemes. The first approach builds upon unconditional shifts in the covariance (Rigobon 2003), while the second procedure allows for a smooth transition between the covariance regimes (Lütkepohl and Netsunajev 2017b). The third scheme implements the identification of the structural shocks via conditional heteroskedasticity (Normandin and Phaneuf 2004).
Changes in volatility (CV)

Rigobon (2003) uses the presence of shifts in the time series' variance at known time points for the identification of structural shocks. He considers a model of exogenous covariance changes. More precisely, the changes of the covariance matrix occur at prespecified break dates, implying

Cov(u_t) = Σ_m for t in regime m.

Here, the index m = 1, ..., M indicates the respective variance regime. In the most simple framework of two volatility states (i.e., M = 2) with a structural break at time point T_sb ∈ {1, ..., T}, the reduced form covariance matrix is

Cov(u_t) = Σ_1 for t < T_sb and Cov(u_t) = Σ_2 for t ≥ T_sb,

where Σ_1 ≠ Σ_2. The two covariance matrices can be decomposed as Σ_1 = B B⊤ and Σ_2 = B Λ B⊤, where Λ is a diagonal matrix with diagonal elements λ_ii > 0, i = 1, ..., K. The matrix Λ formalizes the change of the variance of the structural shocks ε_t in the second regime.
In other words, the structural shocks have unit variance in the first regime, and variances λ_ii, i = 1, ..., K, in the second regime. The structural shocks are uniquely identified if all diagonal elements in Λ are distinct. Under the assumption of Gaussian residuals u_t, the log-likelihood function for the estimation of B and Λ is

ln L(B, Λ) = const − (T_1/2) ln det(B B⊤) − (T_1/2) tr[(B B⊤)^{−1} Σ̂_1] − (T_2/2) ln det(B Λ B⊤) − (T_2/2) tr[(B Λ B⊤)^{−1} Σ̂_2], (4)

where T_1 and T_2 denote the number of observations in the two regimes, and Σ̂_1 and Σ̂_2 are retrieved from the estimated residuals ût, respectively, as

Σ̂_1 = T_1^{−1} Σ_{t=1}^{T_sb−1} ût ût⊤ and Σ̂_2 = T_2^{−1} Σ_{t=T_sb}^{T} ût ût⊤.

For the numerical log-likelihood optimization of (4), the initial matrix B is the lower triangular decomposition of T^{−1} Σ_{t=1}^{T} ût ût⊤, and the initial matrix Λ is set to the identity matrix. Lanne and Lütkepohl (2008) introduce an iterative procedure to improve the estimation precision of this routine. The matrices B̂ and Λ̂, which are obtained from maximizing the log-likelihood function, are used for iterative generalized least squares (GLS) estimation of the deterministic and autoregressive parameters,

β̂_GLS = [ Σ_{t=1}^{T} (Z_t Z_t⊤ ⊗ Σ̂_{u,t}^{−1}) ]^{−1} Σ_{t=1}^{T} (Z_t ⊗ Σ̂_{u,t}^{−1}) y_t,

where Z_t⊤ = [1, y_{t−1}⊤, ..., y_{t−p}⊤] and Σ̂_{u,t} = B̂ B̂⊤ for t < T_sb and Σ̂_{u,t} = B̂ Λ̂ B̂⊤ for t ≥ T_sb. Then, the GLS estimator β̂ is used to update the covariance estimates by means of ût = y_t − (Z_t⊤ ⊗ I_K) β̂. This algorithm iterates until the log-likelihood converges. Furthermore, standard errors for the structural parameters can be obtained from the square root of the inverted information matrix (Hamilton 1994). Identification through changes in volatility is conditional on the determination of the variance regimes. If available, the analyst might use external information for the selection of suitable break points (T_sb). Typically these are extraordinary events in history which can be associated with a change in data variation (see, e.g., Rigobon and Sack 2004). Alternatively, the model might be evaluated conditional on a range of alternative break point candidates from which the analyst selects the model with the highest log-likelihood, as described in Lütkepohl and Schlaak (2018).
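A small numerical illustration (with a hypothetical B and Λ) of why distinct diagonal elements of Λ carry identifying information: the eigenvalues of Σ_1^{−1} Σ_2 recover the variance change of each structural shock.

```r
# Illustrative structural matrix and variance-change matrix
B <- matrix(c(1, 0.5, 0.3, 2), 2, 2)
Lambda <- diag(c(0.4, 3))               # distinct diagonal elements
Sigma1 <- B %*% t(B)                    # regime 1 covariance
Sigma2 <- B %*% Lambda %*% t(B)         # regime 2 covariance
# Sigma1^{-1} Sigma2 = (B')^{-1} Lambda B' is similar to Lambda,
# so its eigenvalues equal the diagonal elements of Lambda
lam_hat <- sort(Re(eigen(solve(Sigma1) %*% Sigma2)$values))
```

If two diagonal elements of Λ coincided, the corresponding eigenvectors (and hence the matching columns of B) would only be determined up to rotation, which is exactly the identification failure discussed above.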

Smooth transition (co)variances (ST)
The implementation of identification via smooth transition covariances follows the descriptions in Lütkepohl and Netsunajev (2017b) and generalizes the identification via changes in volatility. The covariance matrix of the error terms u_t consists of several volatility states, and the transition from one state to another is formalized by means of a non-linear function. For two volatility regimes with distinct covariance matrices Σ_1 and Σ_2, the covariance structure is

Cov(u_t) = Ω_t = (1 − G(γ, c, s_t)) Σ_1 + G(γ, c, s_t) Σ_2. (5)

In (5), G(•) is the transition function determined by the transition variable s_t. While the transition variable is usually deterministic (e.g., s_t = t), the model also allows for stochastic transition variables, for instance, lagged dependent variables (see Lütkepohl and Netsunajev 2017b, for more details). The most frequently employed transition function is the logistic function proposed by Maddala (1977), which is of the form

G(γ, c, s_t) = (1 + exp[−γ (s_t − c)])^{−1}. (6)

The coefficient γ determines the slope of the function and c is the time point of transition.
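A minimal sketch of the logistic transition in Equation 6 (the parameter values are purely illustrative):

```r
# Logistic transition function: gamma controls the slope, c the midpoint
G <- function(s, gamma, c) 1 / (1 + exp(-gamma * (s - c)))
s <- 1:100                          # deterministic transition variable s_t = t
g <- G(s, gamma = 0.5, c = 50)
# The implied covariance path would be (1 - g[t]) * Sigma1 + g[t] * Sigma2
```

For large γ the function approaches a step at c, so the CV model with a single abrupt break emerges as a limiting case.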
Based on the covariance structure in Equation 5 and Equation 6, and the assumption of normally distributed residuals u_t, the log-likelihood function reads as

ln L = const − (1/2) Σ_{t=1}^{T} ln det(Ω_t) − (1/2) Σ_{t=1}^{T} u_t⊤ Ω_t^{−1} u_t. (7)

Grid optimization enables the determination of the transition parameters γ and c. Lütkepohl and Netsunajev (2017b) suggest an iterative procedure for every pair of parameters (γ, c). The first step is the maximization of the log-likelihood in (7) with respect to the structural parameters B and Λ. In the second step, the estimated matrices B̂ and Λ̂ are used to reestimate the reduced form VAR parameters by means of GLS estimation,

β̂_GLS = (Z̄⊤ W_T Z̄)^{−1} Z̄⊤ W_T y, with Z̄ = (Z⊤ ⊗ I_K),

where W_T is a blockdiagonal (KT × KT) weighting matrix,

W_T = diag(Ω̂_1^{−1}, ..., Ω̂_T^{−1}), with Ω̂_t = (1 − G(γ, c, s_t)) B̂ B̂⊤ + G(γ, c, s_t) B̂ Λ̂ B̂⊤.

The GLS step obtains β̂ to update the covariance estimates by means of ût = y_t − (Z_t⊤ ⊗ I_K) β̂. The two steps are performed until the log-likelihood converges. The iterative procedure is evaluated at every parameter pair (γ, c) within a prespecified range. The parameter pair which maximizes the log-likelihood in Equation 7 is considered to provide the best estimate of the true transition. For a more detailed discussion of the parameter choice see Lütkepohl and Netsunajev (2017b).

Conditional heteroskedasticity (GARCH)
As proposed by Normandin and Phaneuf (2004), Lanne and Saikkonen (2007) and Bouakez and Normandin (2010), structural shocks are unique if their conditional variances are of the GARCH type. For the formal exposition, let F_t denote a filtration that summarizes systemic information which is available until time t. Accordingly, the time-varying covariance can be represented as

Cov(u_t | F_{t−1}) = Σ_{t|t−1} = B Λ_{t|t−1} B⊤,

where

Λ_{t|t−1} = diag(σ²_{1,t|t−1}, ..., σ²_{K,t|t−1})

is a (K × K) matrix with GARCH implied variances on the main diagonal. In the context of SVAR identification, typically low order GARCH(1,1) specifications are assumed, such that the individual variances exhibit a dynamic structure as

σ²_{k,t|t−1} = (1 − γ_k − g_k) + γ_k ε²_{k,t−1} + g_k σ²_{k,t−1|t−2}, k = 1, ..., K. (10)

Higher-order GARCH structures are rarely employed in practice, even though they can be accommodated in principle. Under suitable distributional and parametric restrictions, γ_k > 0, g_k ≥ 0 and γ_k + g_k < 1, the marginal GARCH processes ε_{k,t} are covariance stationary (Milunovich and Yang 2013). Sentana and Fiorentini (2001) have shown that the structural parameters in B are uniquely identified if there are at least K − 1 GARCH-type variances present in Λ_{t|t−1}. The parameters γ_k and g_k can be estimated by means of standard univariate ML approaches. The multivariate Gaussian log-likelihood to obtain the structural parameters in B is

ln L = Σ_{t=1}^{T} [ −(K/2) ln 2π − (1/2) ln det(Σ_{t|t−1}) − (1/2) u_t⊤ Σ_{t|t−1}^{−1} u_t ]. (11)

For the practical implementation of identification through patterns of conditional heteroskedasticity, we follow the approach suggested by Lütkepohl and Milunovich (2016), and estimate the parameters in (10) and (11) iteratively until the log-likelihood in (11) converges.
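The GARCH(1,1) recursion in (10) can be sketched in a few lines of base R (parameter values are illustrative; the unconditional variance is normalized to one):

```r
# Simulate one structural shock with GARCH(1,1) conditional variance:
# sigma2_t = (1 - gamma - g) + gamma * eps_{t-1}^2 + g * sigma2_{t-1}
set.seed(2)
garch_sim <- function(T, gamma, g) {
  eps <- numeric(T); sigma2 <- numeric(T)
  sigma2[1] <- 1                      # start at the unconditional variance
  eps[1] <- rnorm(1)
  for (t in 2:T) {
    sigma2[t] <- (1 - gamma - g) + gamma * eps[t - 1]^2 + g * sigma2[t - 1]
    eps[t] <- sqrt(sigma2[t]) * rnorm(1)
  }
  list(eps = eps, sigma2 = sigma2)
}
out <- garch_sim(5000, gamma = 0.15, g = 0.6)
```

Because γ + g < 1, the simulated shock is covariance stationary with unit unconditional variance, while its conditional variance clusters over time.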

Identification through independent components
As implied by a result of Comon (1994), independence of the components of ε_t could serve to identify the matrix B if at most one component ε_it exhibits a Gaussian distribution. Furthermore, partial identification of the non-Gaussian components is possible if the system contains multiple Gaussian components (cf. Maxand 2019). The svars package implements three distinct approaches for identification by means of independent components. Referring to principles of Hodges-Lehmann estimation (HL estimation, Hodges and Lehmann 2006), the first two identification strategies allow for the detection of least dependent structural shocks by the minimization of nonparametric dependence criteria. More specifically, the first technique reveals the structural shocks by minimizing the CVM distance of Genest, Quessy, and Rémillard (2007). Following a suggestion of Matteson and Tsay (2017), the distance covariance statistic of Székely, Rizzo, and Bakirov (2007) is employed as a nonparametric independence diagnostic for the second estimator. The third identification scheme is a fully parametric ML approach for detecting independent Student-t distributed shocks (Lanne et al. 2017).

Least dependent innovations built on Cramér-von Mises statistics (CVM)
Under Gaussianity, the decomposition factor B of the covariance matrix Σ_u is not unique, as Gaussian random vectors do not change their joint distribution under rotation. In contrast, assuming not more than one Gaussian distributed component ε_it in ε_t, the structural matrix B can be uniquely determined. Introducing the nonparametric identification scheme, let D denote a lower triangular Choleski factor of the covariance matrix of the reduced-form errors, Σ_u = D D⊤, which links the structural and reduced form errors by ε_t = D^{−1} u_t. Further candidate structural shocks can be generated as

ε̃_t = Q⊤ ε_t, (12)

where Q is a rotation matrix such that Q ≠ I_K, Q Q⊤ = I_K. The rotation matrix can be parameterized as the product of K(K − 1)/2 distinct forms of orthogonal Givens rotation matrices. In the case of K = 3, for instance, Q(θ) is defined as

Q(θ) = [cos θ_1, −sin θ_1, 0; sin θ_1, cos θ_1, 0; 0, 0, 1] × [cos θ_2, 0, −sin θ_2; 0, 1, 0; sin θ_2, 0, cos θ_2] × [1, 0, 0; 0, cos θ_3, −sin θ_3; 0, sin θ_3, cos θ_3],

with rotation angles 0 ≤ θ_i ≤ π, i = 1, 2, 3. By definition, the random vector ε̃_t in Equation 12 is a rotation of ε_t. The set of possible structural matrices B(θ) = D Q(θ) is defined in terms of the Choleski factor D and the vector of rotation angles θ of the Givens matrices Q(θ).
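For K = 3, the rotation matrix Q(θ) can be built as a product of three Givens rotations; a base-R sketch (the angle values are arbitrary examples):

```r
# One Givens rotation in the (i, j) coordinate plane of a K-dimensional space
givens <- function(K, i, j, theta) {
  Q <- diag(K)
  Q[i, i] <- cos(theta); Q[j, j] <- cos(theta)
  Q[i, j] <- -sin(theta); Q[j, i] <- sin(theta)
  Q
}
# Q(theta) for K = 3 as the product of the three pairwise rotations
Q_theta <- function(theta) {
  givens(3, 1, 2, theta[1]) %*% givens(3, 1, 3, theta[2]) %*% givens(3, 2, 3, theta[3])
}
Q <- Q_theta(c(0.3, 1.1, 2.0))
```

Any such Q is orthogonal with determinant one, so B(θ) = D Q(θ) always reproduces the same reduced form covariance Σ_u = D D⊤.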
To avoid any restrictive assumption on the distribution of ε_t, nonparametric independence tests are applied to measure the degree of dependence. For instance, the copula-based CVM distance of Genest et al. (2007) has been successfully applied in the SVAR literature (Herwartz and Plödt 2016a; Herwartz 2018) to assess mutual dependence. The CVM distance is a Cramér-von Mises functional of the difference between the empirical copula and the independence copula,

CVM(θ) = T ∫_{[0,1]^K} [ C(u_1, ..., u_K) − U(u_1) ⋯ U(u_K) ]² dC(u_1, ..., u_K), (13)

where C is the empirical copula of ε̃_t(θ) and U is the distribution function of a uniformly distributed variable on {1/T, ..., T/T}. The CVM algorithm provides a matrix estimate B̂ such that the rotated structural shocks ε̃_t minimize the CVM dependence criterion. Hence, the obtained structural shocks are least dependent according to the statistic in (13), and the corresponding structural matrix B̂ is the HL estimator. Standard errors for B̂ are obtained by means of bootstrap procedures as presented in Section 3.6.

Least dependent innovations built on distance covariance (DC)
There is a variety of nonparametric criteria available to measure the degree of dependence between random variables, one of which, namely the CVM distance, has been described before.
The ICA algorithm by Matteson and Tsay (2017) provides a matrix estimate B̂ such that the respective structural shocks ε̃_t = B̂^{−1} u_t minimize the distance covariance of Székely et al. (2007), which we denote as U_T(ε̃_t), i.e., the elements in ε̃_t are least dependent according to U_T(.). Similar to the procedure building on the CVM statistic, the set of possible structural matrices B(θ) is defined in terms of the Choleski factor D and the vector of rotation angles θ of Q(θ). The rotation angles θ̂ = argmin_θ U_T(ε̃_t(θ)) determine the estimated structural matrix B̂ = B(θ̂). In the svars package, we take advantage of the function steadyICA from the R package steadyICA (Risk, James, and Matteson 2015) to estimate B̂. The minimum is determined by means of a gradient algorithm.

Non-Gaussian maximum likelihood (NGML)
The identification technique described by Lanne et al. (2017) is also based on the assumption of non-Gaussian structural error terms. They propose ML estimation to determine the set of independent structural innovations, which are assumed to exhibit a Student t-distribution. Moreover, Lanne et al. (2017) suggest a three-step estimation method for computationally demanding situations. The first step consists of LS estimation of the VAR parameters β = vec[µ, A_1, ..., A_p] and of the reduced form residuals

ût = y_t − (Z_t⊤ ⊗ I_K) β̂.

In the second step, the log-likelihood function is maximized conditional on the first step estimates β̂. The log-likelihood function is

ln L(β, δ) = Σ_{t=1}^{T} l_t(β, δ), (14)

where

l_t(β, δ) = Σ_{i=1}^{K} ln f_i( σ_i^{−1} ι_i⊤ B^{−1} u_t(β); df_i ) − ln |det(B)| − Σ_{i=1}^{K} ln σ_i,

and ι_i is the i-th unit vector. The parameter vector of the log-likelihood function is composed of β and δ = (b, σ, df). Regarding the latter, b is a K(K − 1) × 1 vector which contains the off-diagonal elements of the covariance decomposition matrix B. The parameters σ_i and df_i are the scale and the degrees of freedom parameters of the density function f_i of a Student t-distribution, respectively. In the third step, the parameter vector δ is replaced by the estimate δ̂ and the log-likelihood is maximized with respect to β.

Choice of an adequate identification technique
In the face of a variety of statistical approaches available to model latent structural relationships, method selection becomes an important step of statistical identification. To facilitate this selection step, Table 1 provides an overview of the assumptions on the error terms ε_t within the alternative identification models. Estimating the structural parameters by means of heteroskedasticity based approaches necessitates the corresponding type of covariance structure.
In contrast, identification through independent components is only possible in non-Gaussian distributional frameworks. Note that we distinguish between nonparametric models (i.e., CVM and DC), for which no further specification of the distribution of the innovations is required, and fully parametric ML approaches.

Model   Variance of ε_t                   Distribution of ε_t
CV      heteroskedastic (unconditional)   Gaussian
ST      heteroskedastic (unconditional)   Gaussian
GARCH   heteroskedastic (conditional)     Gaussian
CVM     homoskedastic                     non-Gaussian (arbitrary)
DC      homoskedastic                     non-Gaussian (arbitrary)
NGML    homoskedastic                     non-Gaussian (t-distribution)

Table 1: Overview of identification models and respective underlying assumptions on the error term ε_t.
A more detailed discussion on method selection in the context of identification via heteroskedasticity can be found in Lütkepohl and Netsunajev (2017a) and Lütkepohl and Schlaak (2018). Moreover, Herwartz, Lange, and Maxand (2019) compare heteroskedasticity and independence based models in a large scale simulation study. They show that identification by means of covariance changes provides precise estimation results if the log-likelihood is correctly specified, whereas under (co)variance misspecification such identification schemes lack efficiency or might suffer from estimation bias. In contrast, simulation based evidence suggests that identification via independent components is more robust with respect to alternative distributional frameworks and heteroskedasticity as long as the innovations are non-Gaussian.

SVAR tests, tools and bootstrap methods
As a basis for the six identification techniques, the statistical analysis of SVAR models requires a diagnostic analysis of the underlying data structure. The presented package comprises two types of data-driven procedures, where the first group assumes heteroskedasticity and the second one non-Gaussianity of the error terms.

Tests for structural breaks
As described in Section 2, identification based on changes in volatility presumes at least one break point to occur in the covariance structure. To detect different types of breaks in the data, several tests that have been implemented in the strucchange package (Zeileis, Leisch, Hornik, and Kleiber 2002) are accessible for VAR analysis via the method stability() of the vars package. In the following, we consider two additional types of multivariate Chow tests, the sample split and the break point test. The sample split test addresses the null hypothesis of constant VAR parameters µ and A_i, i = 1, 2, ..., p. The break point test works similarly, but also tests if the covariance matrix of the residuals u_t is constant over time (Lütkepohl and Kraetzig 2004, Chapter 3). To implement suitable likelihood ratio statistics, the VAR is estimated conditional on the full sample of T observations and conditional on the first T_1 = T_sb − p − 1 and the last T_2 = T − p − T_sb observations, with T_sb indicating the break point. The resulting residuals are denoted by ût, ût^(1) and ût^(2). Then, the sample split and break point test statistics are defined, respectively, as

λ_SP = (T_1 + T_2) [ ln det(Σ̃_{1,2}) − ln det( (T_1 + T_2)^{−1} (T_1 Σ̃_1 + T_2 Σ̃_2) ) ]

and

λ_BP = (T_1 + T_2) ln det(Σ̃_{1,2}) − T_1 ln det(Σ̃_1) − T_2 ln det(Σ̃_2),

where the covariance estimators are

Σ̃_{1,2} = (T_1 + T_2)^{−1} ( Σ_{t=1}^{T_1} ût ût⊤ + Σ_{t=T−T_2+1}^{T} ût ût⊤ ), Σ̃_1 = T_1^{−1} Σ_{t=1}^{T_1} ût^(1) ût^(1)⊤ and Σ̃_2 = T_2^{−1} Σ_{t=T−T_2+1}^{T} ût^(2) ût^(2)⊤.

Candelon and Lütkepohl (2001) show that both test statistics λ_BP and λ_SP converge to a non-pivotal asymptotic limit distribution. Hence, bootstrap procedures are a natural device to obtain critical values for the statistic at hand.
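The break point statistic can be computed in a few lines. The sketch below uses simulated residuals with a variance break and, for simplicity, pools the regime covariances in place of the full-sample residual covariance:

```r
# Break point LR statistic on simulated residuals with doubled volatility
set.seed(3)
K <- 2; T1 <- 120; T2 <- 120
u1 <- matrix(rnorm(T1 * K, sd = 1), ncol = K)   # regime 1 residuals
u2 <- matrix(rnorm(T2 * K, sd = 2), ncol = K)   # regime 2 residuals
S1 <- crossprod(u1) / T1
S2 <- crossprod(u2) / T2
S12 <- (T1 * S1 + T2 * S2) / (T1 + T2)          # pooled covariance (simplification)
lambda_BP <- (T1 + T2) * log(det(S12)) - T1 * log(det(S1)) - T2 * log(det(S2))
```

By concavity of the log-determinant, λ_BP is nonnegative, and a pronounced variance break as in the simulation produces a large value.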

Testing for identical diagonal elements
Since the structural shocks are estimated by the volatility models under the assumption that the variances of the structural shocks change differently across regimes, respective diagnostic tests are frequently employed in the SVAR literature (see, e.g., Herwartz and Plödt 2016b; Lütkepohl and Velinov 2016; Lütkepohl and Netsunajev 2017a). A suitable Wald statistic to test the null hypothesis of proportional variance shifts, H_0: λ_ii = λ_jj, is defined as

λ_W,ij = (λ̂_ii − λ̂_jj)² / ( Var(λ̂_ii) + Var(λ̂_jj) − 2 Cov(λ̂_ii, λ̂_jj) ),

where parameter estimates and (co)variances obtain from the ML estimation. The null hypothesis is rejected for large values of λ_W,ij.

Test for overidentifying restrictions
The non-Gaussian ML and heteroskedasticity based models rest on a stylized log-likelihood optimization, which also allows for restricting the structural parameter space. Subsequently, the implied restrictions can be tested by means of the likelihood ratio statistic

λ_LR = 2 [ ln L(B̂) − ln L(B̂_r) ],

where B̂ is the unrestricted ML estimator as defined in Equation 4, Equation 7 or Equation 14. Moreover, B̂_r denotes the restricted ML estimator, and N is the number of restrictions. Under the null hypothesis that the restricted model holds, λ_LR is asymptotically χ²(N) distributed, and the null is rejected for large values of λ_LR (Lütkepohl 2005).

Test on joint parameter significance
To test joint hypotheses of parameter significance for non likelihood based models as in Herwartz (2018), the package provides a χ²-test. The statistic for testing a number of J linearly independent hypotheses is defined as

W = [R vec(B̂) − r]⊤ [ R Ĉov(vec(B**)) R⊤ ]^{−1} [R vec(B̂) − r], (19)

where R is a known J × K² dimensional selection matrix of rank J, and r is a known J × 1 vector, which represents the considered restrictions, such that the composite null hypothesis is H_0: R vec(B) = r. The matrix B** is the bootstrap version of the covariance decomposition matrix, and can be obtained from one of the bootstrap procedures described in Section 3.6 below.

Tools for SVAR analysis
The identified structural matrix B̂ can help to capture the dynamic and instantaneous impacts of the structural shocks within the set of variables under consideration. Several tools to analyze these relations are described, for instance, in Kilian and Lütkepohl (2017) and Lütkepohl (2011). The svars package provides impulse-response functions, forecast error variance decompositions as well as historical decompositions.

Impulse-response functions
Impulse-response functions describe the impact of isolated unit shocks on the variables of the system with respect to a certain response delay (e.g., the zero delay gives the instantaneous impact). For the model formulation in Equation 1, the response matrices can be derived from the moving average representation (see, e.g., Lütkepohl 2005)

y_t = ν + Σ_{i=0}^{∞} Φ_i u_{t−i},

where ν is the unconditional mean of the series and

Φ_0 = I_K, Φ_i = Σ_{j=1}^{i} Φ_{i−j} A_j, i = 1, 2, ..., with A_j = 0 for j > p.

The elements of Θ_i := Φ_i B can be interpreted as the responses of the system to the shocks ε_t, which summarize the informational content of the dynamic parameters in Φ_i, i = 1, 2, 3, ..., and of the structural matrix B. In particular, Θ_0 = B.
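The Φ recursion and the structural responses Θ_i = Φ_i B translate directly into code (A_1 and B below are illustrative values):

```r
# Structural impulse responses: Phi_0 = I, Phi_i = sum_{j=1}^{i} Phi_{i-j} A_j,
# Theta_i = Phi_i B, with A_j = 0 for j > p
K <- 2
A <- list(matrix(c(0.5, 0.1, 0.2, 0.4), K, K))   # VAR(1): only A_1
B <- matrix(c(1, 0.3, 0, 2), K, K)
irf <- function(A, B, h) {
  p <- length(A); K <- nrow(B)
  Phi <- vector("list", h + 1)
  Phi[[1]] <- diag(K)                            # Phi_0
  for (i in 1:h) {
    Phi[[i + 1]] <- matrix(0, K, K)
    for (j in 1:min(i, p))
      Phi[[i + 1]] <- Phi[[i + 1]] + Phi[[i - j + 1]] %*% A[[j]]
  }
  lapply(Phi, function(P) P %*% B)               # Theta_0, ..., Theta_h
}
Theta <- irf(A, B, h = 10)
```

The element Theta[[i + 1]][k, j] gives the response of variable k to a unit impulse in structural shock j after i periods; in particular, Theta[[1]] reproduces B.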

Forecast error variance decompositions
Forecast error variance decompositions (FEVD) highlight the relative contribution of each shock to the variation of a variable under scrutiny. For the multivariate series y_t, the corresponding h-step ahead forecast error is y_{t+h} − y_{t|t}(h) = Θ_0 ε_{t+h} + ... + Θ_{h−1} ε_{t+1}, and the forecast error variance of the k-th variable is

σ²_k(h) = Σ_{j=1}^{K} ( θ²_{kj,0} + ... + θ²_{kj,h−1} )

(see, e.g., Lütkepohl 2005), where θ_{kj,i} denotes the (k, j)-th element of Θ_i. Since Σ_ε = I_K holds by assumption, the relative contribution of shock ε_it to the h-step forecast error variance of variable y_kt is

ω_{ki}(h) = ( θ²_{ki,0} + ... + θ²_{ki,h−1} ) / σ²_k(h).
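A short sketch of the decomposition: square the response coefficients, accumulate over horizons, and normalize each row so the shares sum to one (the Θ_i matrices below are illustrative values):

```r
# FEVD: share of the h-step forecast error variance of variable k (rows)
# attributable to each structural shock (columns)
fevd <- function(Theta, h) {
  K <- nrow(Theta[[1]])
  num <- matrix(0, K, K)
  for (i in 1:h) num <- num + Theta[[i]]^2       # elementwise squared responses
  sweep(num, 1, rowSums(num), "/")               # normalize rows to sum to one
}
Theta <- list(matrix(c(1, 0.3, 0, 2), 2, 2),     # Theta_0 = B
              matrix(c(0.5, 0.2, 0.1, 0.4), 2, 2))
Omega <- fevd(Theta, h = 2)
```

Each row of Omega is a complete variance budget for one variable, which is why FEVD tables are read row-wise.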

Historical decompositions
Further information on the contribution of structural shocks to a variable of interest can be drawn from historical decompositions. The contribution of shock ε_jt to a variable y_kt in time period t is

HD_{kj,t} = Σ_{i=0}^{t−1} θ_{kj,i} ε_{j,t−i},

up to an effect of the initial values y_0, ..., y_{−p+1}, which is captured through [A_1^{(t)}, ..., A_p^{(t)}], consisting of the first K rows of the companion form matrix with exponent t, 𝔸^t (see Lütkepohl 2005, for more details).

Wild bootstrap
Inferential issues (e.g., estimating standard errors of point estimates or confidence intervals of impulse-responses) might rely on the so-called wild bootstrap approach, which is robust under various forms of heteroskedasticity (Goncalves and Kilian 2004; Hafner and Herwartz 2009). For instance, under a fixed-design, bootstrap samples can be constructed as

y_t* = µ̂ + Â_1 y_{t−1} + ... + Â_p y_{t−p} + u_t*, (20)

where Â_j, j = 1, ..., p, and µ̂ are LS parameter estimates retrieved from the data. To determine bootstrap error terms u_t* = ω_t ût, the scalar random variable ω_t is drawn from a distribution with zero mean and unit variance (ω_t ∼ (0, 1)) which is independent of the observed data. A prominent distribution choice for sampling ω_t is the Gaussian distribution. Two other frequently considered approaches are drawing ω_t (i) from the so-called Rademacher distribution with ω_t being either unity or minus unity with probability 0.5 (Liu 1988), and (ii) from the distribution suggested by Mammen (1993), where ω_t = −(√5 − 1)/2 with probability (√5 + 1)/(2√5) and ω_t = (√5 + 1)/2 with probability (√5 − 1)/(2√5). For the error terms u_t*, estimated from (20), we determine the bootstrap structural parameter matrix as

B** = Σ̂_u^{1/2} Σ̂_{u*}^{−1/2} B*.

Here, B* is a decomposition of Σ̂_{u*} derived by the described identification procedures. The matrices Σ̂_u^{1/2} and Σ̂_{u*}^{1/2} are symmetric eigenvalue decompositions of Σ̂_u and Σ̂_{u*}, respectively. Thus, B** provides a factorization of the sample covariance matrix Σ̂_u such that it can be used for inference on the structural parameters as depicted, for instance, in (19).
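Drawing the multiplier ω_t from the three distributions mentioned above is straightforward; each has mean zero and unit variance:

```r
# Wild bootstrap multipliers: Gaussian, Rademacher and Mammen two-point draws
set.seed(4)
T <- 1e5
omega_gauss <- rnorm(T)                             # Gaussian weights
omega_rade  <- sample(c(-1, 1), T, replace = TRUE)  # Rademacher weights
a <- -(sqrt(5) - 1) / 2                             # Mammen two-point support
b <- (sqrt(5) + 1) / 2
p <- (sqrt(5) + 1) / (2 * sqrt(5))                  # P(omega = a)
omega_mam <- sample(c(a, b), T, replace = TRUE, prob = c(p, 1 - p))
# Bootstrap residuals are then formed rowwise as u_star[t, ] <- omega[t] * u_hat[t, ]
```

Because the same scalar ω_t multiplies the whole residual vector ût, the cross-equation correlation and any conditional heteroskedasticity pattern of the residuals are preserved in the bootstrap draws.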
Moving-block bootstrap

Brüggemann, Jentsch, and Trenkler (2016) suggest the moving-block bootstrap for inference in VAR models characterized by conditional heteroskedasticity. The moving-block bootstrap depends on a chosen block length ℓ < T, which determines the number of blocks n = ⌈T/ℓ⌉ needed for data generation. The (K × ℓ)-dimensional blocks M_{i,ℓ} = (û_{i+1}, ..., û_{i+ℓ}), i = 0, ..., T − ℓ, are laid randomly end-to-end together to obtain the bootstrap residuals u_1*, ..., u_T*. After centering the residuals, the bootstrap time series may be constructed recursively as

y_t* = µ̂ + Â_1 y*_{t−1} + ... + Â_p y*_{t−p} + u_t*. (21)

It is important to note that asymptotic theory for block bootstrap schemes is typically derived under the assumption that ℓ → ∞ as T → ∞. Yet, there is no consensus in the literature on the choice of ℓ in finite samples and, hence, choosing a block length in practice is not straightforward. In general, the chosen block length should ensure that residuals more than ℓ time points apart from each other are uncorrelated. A more thorough discussion on the choice of the block length can be found in Lahiri (2003). The bootstrap covariance decomposition B** is determined analogously to the case of wild bootstrap sampling described before. Note that both the wild bootstrap and the moving-block bootstrap can be implemented either under a fixed-design as in (20) or a recursive-design as in (21).
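A simplified sketch of the block resampling step (for brevity the stacked draws are simply demeaned, rather than centered position-by-position within blocks):

```r
# Moving-block resampling of residuals: blocks of length ell laid end-to-end
set.seed(5)
mbb_residuals <- function(u, ell) {
  T <- nrow(u)
  n <- ceiling(T / ell)                          # number of blocks needed
  starts <- sample.int(T - ell + 1, n, replace = TRUE)
  u_star <- do.call(rbind,
                    lapply(starts, function(s) u[s:(s + ell - 1), , drop = FALSE]))
  u_star <- u_star[1:T, , drop = FALSE]          # trim to the sample length
  scale(u_star, center = TRUE, scale = FALSE)    # center the bootstrap residuals
}
u_hat <- matrix(rnorm(200 * 2), ncol = 2)        # placeholder residual matrix
u_star <- mbb_residuals(u_hat, ell = 10)
```

Keeping residuals together in blocks of length ℓ is what preserves short-range dependence, such as volatility clustering, in the bootstrap sample.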

Bootstrap-after-bootstrap
Kilian (1998) proposes the bootstrap-after-bootstrap, a two-stage procedure: in the first stage, a preliminary bootstrap approximates the small-sample bias of the LS estimator β̂, which yields a bias-corrected estimate β̂_BC. In the second stage, the actual bootstrap samples are obtained from substituting β̂_BC for β̂ in Equation 20 or Equation 21. Kilian (1998) shows by means of a simulation study that in small samples the bootstrap-after-bootstrap method tends to be more accurate than standard bootstrap approaches. Kilian and Lütkepohl (2017) provide more insights into the merits of bias adjustments in resampling, as well as a detailed overview of further bootstrap approaches in the context of SVAR models.

Package design
Table 2 summarizes the design of the svars package. The package is built around the six core functions for identification of the structural VAR form (id.cv, id.cvm, id.dc, id.garch, id.ngml, id.st). Moreover, various methods and further diagnostic tools, which have been described in Section 3, are available for the resulting objects of class svars. In the following, we describe the mandatory and optional input arguments of the implemented functions in a detailed manner.


Core functions for SVAR identification
To apply the implemented identification techniques, the user needs to provide an estimated reduced form VAR or vector error correction model (VECM) object of class varest or vec2var from the vars package. Alternatively, an object of class nlVar or VECM from the tsDyn package or the list delivered by the function VAR of the MTS package can serve as an input argument for id.cv, id.cvm, id.dc, id.garch, id.ngml or id.st. Besides the estimated VAR objects, the identification procedures allow for further input arguments which differ across the techniques. In the following, we describe these options separately.

SVAR models built on (co)variance changes
For identification by means of changes in volatility the following command can be used: id.cv(x, SB, start = NULL, end = NULL, frequency = NULL, format = NULL, dateVector = NULL, max.iter = 50, crit = 0.001, restriction_matrix = NULL).
The function id.cv() requires the specification of a structural break point. Conditional on the data structure, the user may provide the break point SB in various formats. Firstly, the sample can be separated into two parts by specifying the break point in either integer or date format. Secondly, single time instances can be assigned to a variance regime by passing a vector consisting of zeros and ones to the function. If the estimation of the reduced form VAR is not based on an object of a time series class (e.g., ts), the user can add the information on date and frequency by making use of the parameter dateVector or by specifying start/end and format/frequency. However, providing time series class objects or specifying dates is optional and the function also handles conventional observation numbers.
The log-likelihood and VAR coefficients are re-estimated in the algorithm until the log-likelihood changes by less than the value of crit or the maximum number of iterations (max.iter) is reached. Additionally, the function id.cv() allows for restricted ML estimation via the input argument restriction_matrix. There are two formats for specifying the restriction matrix: either pass (i) a K × K matrix, in which NA indicates an unrestricted element and 0 a restricted element, or (ii) a K² × K² matrix of rank M, where M is the number of unrestricted coefficients (Lütkepohl 2005). In this case, unit (zero) values on the main diagonal refer to the unrestricted (restricted) coefficients. In case of over-identifying restrictions, id.cv() estimates both the unrestricted and the restricted SVAR to perform the likelihood ratio test outlined in Section 3.3.
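To illustrate format (i), the snippet below builds a hypothetical 3 × 3 restriction matrix that forces B to be lower triangular; the matrix construction is our own illustration of the documented convention.

```r
# Hypothetical K = 3 restriction matrix in format (i):
# NA = unrestricted element, 0 = restricted (zero) element.
restMat <- matrix(NA, nrow = 3, ncol = 3)
restMat[upper.tri(restMat)] <- 0   # zeros above the main diagonal
restMat
```

Such a matrix could then be passed via the restriction_matrix argument, e.g., to id.cv().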
The function id.garch(x, max.iter = 5, crit = 0.001, restriction_matrix = NULL) provides model identification if structural shocks exhibit conditional heteroskedasticity. Identification proceeds in two steps. In the first step, K univariate GARCH(1,1) models (see Equation 10) are estimated. In the second step, a full, joint ML estimation of the parameters in B is performed. These two steps are iterated until the multivariate log-likelihood changes by less than the value of crit or the maximum number of iterations (max.iter) is reached.
Analogously to the id.cv() function, passing a restriction_matrix enables the user to estimate and test restricted models.
Identification by means of smooth covariance transitions is implemented as id.st(x, nc = 1, c_lower = 0.3, c_upper = 0.7, c_step = 5, c_fix = NULL, transition_variable = NULL, gamma_upper = 2, gamma_step = 0.5, gamma_fix = NULL, max.iter = 5, crit = 0.001, restriction_matrix = NULL, lr_test = FALSE), which entails several input arguments for adjustments. However, the user may run the function without any further specification of input arguments by only passing the estimated reduced form VAR object. Since finding the optimal parameters γ and c as described in Section 2.2.2 is computationally demanding, the id.st function supports parallelization, with nc determining the number of cores used. Grid optimization is optional. By default, the function searches for the transition point c between 0.3T (c_lower) and 0.7T (c_upper) with a step width of 5 time points (c_step). If the user wants to specify the transition point in advance, she can pass an observation number to c_fix. Analogously, for the slope parameter γ the user can either specify a fixed slope parameter gamma_fix, or let the function optimize the transition coefficient between gamma_lower and gamma_upper.
Conditional on the location (c) and slope (γ) parameters, the algorithm iterates between log-likelihood optimization and GLS estimation until the improvement of the log-likelihood is smaller than crit or the maximum number of iterations (max.iter) is reached. By default, the transition variable corresponds to time; however, the user may choose another transition variable by passing a numeric vector to transition_variable. Note that the input argument for the location parameter has to be adjusted to the scale of the transition variable. Analogously to the previous functions, passing a restriction_matrix enables the estimation of restricted models. Since the smooth transition covariance model is computationally demanding, the user can decide whether the function performs a likelihood ratio test by specifying lr_test as either TRUE or FALSE.
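As an illustrative sketch, a smooth-transition model with the transition point fixed at an observation number and a preset slope could be estimated as follows; the object names and the particular parameter values are hypothetical, and plain.var stands for a previously estimated varest object.

```r
# Hypothetical call: fix the transition point at observation 59 and
# the slope parameter at 1, using a single core.
usa.st <- id.st(plain.var, c_fix = 59, gamma_fix = 1, nc = 1)
plot(usa.st)  # shows the optimized transition function between volatility regimes
```
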

SVAR models built on independent components
For identifying independent components by means of the CVM distance, the function id.cvm(x, dd = NULL, itermax = 500, steptol = 100, iter2 = 75) can be employed. In Section 2 we have elaborated on how this approach evaluates a CVM test for rotated versions of the shocks. We use the implementation of the CVM test in the package copula (Hofert, Kojadinovic, Maechler, and Yan 2017). The function indepTestSim from the copula package generates an independent sample to calculate the p-value for the test statistic.
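For instance, the simulated sample can be generated once and reused across repeated estimations on the same data set; the following sketch assumes that copula::indepTestSim() accepts the sample size and dimension as its first two arguments, and the object names are illustrative.

```r
library(svars)    # provides id.cvm() and the USA data set
library(copula)   # provides indepTestSim()
data("USA")
v1 <- vars::VAR(USA, p = 6, type = "const")
# simulate the independent sample once ...
dd <- indepTestSim(v1$obs, ncol(USA), verbose = FALSE)
# ... and reuse it in (possibly repeated) calls to id.cvm()
x1 <- id.cvm(v1, dd = dd)
```
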
The sample is passed to the svars function id.cvm as argument dd. If dd = NULL, the sample is simulated within the id.cvm function. Simulating the independent sample in advance and passing the object to the id.cvm function may save computation time if the estimation is repeatedly applied to the same data set. The estimation of independent components through CVM statistics proceeds in two steps. The first stage is a global optimization using the differential evolution algorithm from the DEoptim package (Ardia, Mullen, Peterson, and Ulrich 2016). In the second stage, the test statistic is optimized locally around the estimated parameters from the first stage. The precision of the algorithm can be controlled by the input arguments itermax and steptol at the first stage (for more details see the help file of DEoptim) and iter2 at the second stage. The function

id.dc(x, PIT = FALSE)
identifies the structural shocks by means of distance covariance statistics. The implementation is built on the ICA algorithm from the package steadyICA (Risk et al. 2015). The function steadyICA therein applies a gradient algorithm to determine the minimum of the dependence criterion. The option PIT determines whether the probability integral transform (PIT) is applied to the marginal densities of the structural shocks prior to the evaluation of the dependence criterion.
Estimation of the structural shocks via non-Gaussian ML is implemented in the function id.ngml(x, stage3 = FALSE, restriction_matrix = NULL).
The input argument stage3 indicates whether the autoregressive parameters of the VAR model are estimated by maximizing the log-likelihood function, which Lanne et al. (2017) describe as the third step of their model. Since this step does not change the result of the estimated covariance decomposition, and the estimation of the autoregressive parameters is computationally rather demanding, the default is set to FALSE. Analogously to the functions id.cv, id.garch and id.st, the user may run a restricted estimation by passing an appropriate restriction_matrix argument to id.ngml.
All identification functions (id.cv, id.garch, id.st, id.cvm, id.dc, id.ngml) return an object of class svars. The summary method for this class returns the estimated impact relation matrix with standard errors and various further information depending on the chosen identification method, while print only returns the covariance decomposition. The plot method is only applicable to objects from the function id.st and shows the optimized transition function of the variance from the first to the second volatility regime.

Functions and methods for SVAR analysis
The following functions and methods are built around the cornerstone functions introduced in the previous section. To obtain a user-friendly environment within the svars package, most of the implementations require only an object of class svars or sboot, leaving further specifications optional. Moreover, to facilitate compatibility with other R packages, we refer to the vars package (Pfaff 2008) and adapt methods for parameter tests, impulse-response analysis and forecast error variance decompositions.

Pre-tests and joint significance tests
For prior analysis of parameter stability, the function chow.test(x, SB, nboot = 500, start = NULL, end = NULL, frequency = NULL, format = NULL, dateVector = NULL) includes two versions of structural break tests. The input argument x needs to be a reduced form estimation result of class varest, vec2var or nlVar. The time point of the assumed structural break has to be passed in SB. The user can work with date formats in the same way as described for the id.cv() function above. To calculate the p-values and critical values, the function employs a fixed-design wild bootstrap. The number of bootstrap replications needs to be provided by nboot. The summary() method returns the results from the sample split and break point tests. Additionally, the package includes an augmentation of the stability() method of the vars package (Pfaff 2008), which provides access to a variety of parameter stability analysis tools of strucchange (Zeileis et al. 2002). The method has been extended to contain multivariate Chow tests: stability(x, type = "mv-chow-test", h = 0.15).
By specifying type = "mv-chow-test" and h = 0.15, the test statistics for all possible structural break points between (h/2)T and (1 − h/2)T are calculated. The resulting object of class chowpretest from stability() can be used as an input argument for x in chow.test() afterwards without any further input specifications. The function then provides the test results for the structural break at the observation with the highest break point test statistic resulting from stability().
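Putting the two steps together, a sketch of this pre-test workflow might look as follows; the object names are illustrative, and the reduced form model is estimated with vars::VAR() as elsewhere in this paper.

```r
library(svars)
data("USA")
plain.var <- vars::VAR(USA, p = 6, type = "const")
# scan all admissible break points for the multivariate Chow statistic
stab <- stability(plain.var, type = "mv-chow-test", h = 0.15)
plot(stab)
# formally test the break point with the highest test statistic
ct <- chow.test(stab)
summary(ct)
```
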
After obtaining point and bootstrap estimates, the user can test joint hypotheses on the estimated elements in the structural matrix B by means of the corresponding test function, where x is an object of class sboot. If r = NULL, the function performs a test of the hypothesis

Further SVAR statistics
Following the descriptions in Section 3, impulse-response functions can be calculated by means of irf(x, n.ahead = 20), where x is an object of class svars. The user can specify the time horizon of the impulse-response functions, which is 20 periods by default. The same input arguments are passed to calculate forecast error variance decompositions using fevd(x, n.ahead = 10).
Historical decompositions are calculated by the function hd(x, series = 1).
By default, the first series, i.e., the series in the first column of the original data set, is decomposed. For all three analysis tools, plot methods are available to visualize the resulting objects.

Bootstrap procedures
The bootstrap procedures described in Section 3 are implemented in the functions mb.boot, wild.boot and ba.boot. The required input object x is of class svars. Furthermore, it is possible to record how often one or multiple bootstrap shocks show a specific sign pattern. This helps to evaluate the plausibility of the signs of instantaneous effects as described in Herwartz (2018). The appearance of specific sign patterns is documented by passing a list of vectors containing 1 and −1 to the input argument signrest. Every list entry represents the impact effects of a shock on the variables in the system. Thus, each list entry is of the same size as the VAR model, i.e., contains K elements. The list can consist of 1 up to K entries, one for each structural shock. By default, the bootstrap functions evaluate the occurrence of the sign pattern of the point estimate. The R function for the moving-block bootstrap is mb.boot(x, design = "recursive", b.length = 15, n.ahead = 20, nboot = 500, nc = 1, dd = NULL, signrest = NULL, itermax = 300, steptol = 200, iter2 = 50), where the user needs to specify the block length with the input argument b.length. As described in Section 3.6, there is no consensus in the literature about the optimal block length in finite samples. In applied work, however, a typical block length is about 10% of the sample size (see, e.g., Brüggemann et al. 2016; Lütkepohl and Schlaak 2019). The wild bootstrap method is implemented as wild.boot(x, design = "fixed", distr = "rademacher", n.ahead = 20, nboot = 500, nc = 1, dd = NULL, signrest = NULL, itermax = 300, steptol = 200, iter2 = 50).
The user can choose to draw ω_t from a Rademacher distribution with distr = "rademacher", from a Gaussian distribution with distr = "gaussian" or from the distribution suggested in Mammen (1993) with distr = "mammen". The remaining input arguments for the two bootstrap functions are identical, e.g., both can be called with a fixed design (design = "fixed") or a recursive design (design = "recursive"). Bootstrap impulse-responses are calculated in the functions, for which the horizon needs to be determined via n.ahead. An integer for the number of bootstrap replications is supplied via the nboot argument. Parallelization is possible with a suitable choice of nc. The arguments dd, itermax, steptol and iter2 correspond to the input arguments of the id.cvm model and are only applied if the point estimates have been derived by this method. Both bootstrap functions return an object of class sboot for which summary and plot methods can be applied.
The bootstrap-after-bootstrap procedure is implemented in the function ba.boot. In contrast to the other bootstrap functions of svars, its input x is of class sboot, since the function only performs the bias correction and the second step of the procedure described above in Section 3.6. The necessary results from the first step of the algorithm are taken from the bootstrap object which is passed to the function, yielding the most efficient implementation of this hierarchical bootstrap procedure. The second step bootstrap (after the bias correction) is executed with exactly the same specifications as in the first stage. Hence, no further input arguments are needed.
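A sketch of the full two-stage workflow, assuming an identified model usa.cv obtained from id.cv() as in the example below; the object names are illustrative.

```r
# first stage: standard wild bootstrap returning an object of class 'sboot'
first.stage <- wild.boot(usa.cv, design = "fixed", nboot = 500, n.ahead = 20)
# second stage: bias correction and re-bootstrapping with identical settings
second.stage <- ba.boot(first.stage)
plot(second.stage)
```
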

Example
To illustrate the functions and methods of the svars package, we replicate the empirical results of Herwartz and Plödt (2016b) obtained through identification by means of unconditional covariance shifts (id.cv()). We augment their analysis by further statistics and complement it with results from identification through independent components using the DC approach (id.dc()). The main objective of the application in this section is to present the usage of the functions rather than to discuss the results in depth. Herwartz and Plödt (2016b) apply identification by means of the CV approach to investigate the effects of a monetary policy shock on the economy of the United States (US). They consider three series: the output gap "x", which is defined as the log-deviation of real gross domestic product (GDP) from the potential output, the inflation rate "pi", measured as quarter-on-quarter growth rates of the GDP deflator, and the federal funds rate "i". The data come from the Federal Reserve Economic Data (FRED) database of the Federal Reserve Bank of St. Louis. The time series are sampled at the quarterly frequency and cover the period from 1965Q1 until 2008Q3. The svars package contains this example data set labeled "USA".
The first step of the analysis is to load the svars package into the workspace. Furthermore, the ggplot2 (Wickham 2009) package enables displaying the data in a convenient way.
R> library("svars")
R> library("ggplot2")
R> data("USA")

In order to estimate the structural shocks via the id.cv() function, the user has to specify the time point of the variance shift in advance. An appropriate time point might be found by visual inspection of the series, historical information or previous analyses. Figure 1 depicts the three time series. Inflation data ("pi") show less fluctuation during the second half of the data set.

R> autoplot(USA, facets = T) + theme_bw() + ylab('')
Herwartz and Plödt (2016b) determine the break point at 1979Q3 due to a policy shift of the Federal Reserve Bank which caused a reduction of the volatility in US macroeconomic data (Stock and Watson 2003).
The next step of the analysis is the estimation of the reduced form VAR, for instance, by means of the function VAR() from the vars package. We specify a VAR model with intercept of order p = 6. After model estimation, we can use the resulting varest object to estimate the structural form with the function id.cv(). We provide the structural break point with the function argument SB in ts date format.
R> plain.var <- vars::VAR(USA, p = 6, type = 'const')
R> usa.cv <- id.cv(plain.var, SB = c(1979, 3))
R> summary(usa.cv)

The summary of the identified object displays the estimated decomposition of the covariance matrix B, as well as the covariance shift matrix Λ and their corresponding standard errors. Moreover, the summary provides the results of pairwise Wald-type tests for distinct diagonal elements of Λ, which is necessary for unique identification of the structural shocks. In the present case, all three test statistics yield a rejection of the null hypothesis of equal diagonal elements at the 10% significance level. The ordering of the columns of B is arbitrary and the user has to arrange them in an economically meaningful way. For instance, Herwartz and Plödt (2016b) order the columns according to a unique sign pattern which indicates the direction of the shocks on impact. The code below orders the columns in the same way.

R> usa.cv$B <- usa.cv$B[, c(3, 2, 1)]
R> usa.cv$B[, 3] <- usa.cv$B[, 3] * (-1)
R> usa.cv$B_SE <- usa.cv$B_SE[, c(3, 2, 1)]
R> usa.cv$Lambda <- diag(diag(usa.cv$Lambda)[c(3, 2, 1)])
R> usa.cv$Lambda_SE <- diag(diag(usa.cv$Lambda_SE)[c(3, 2, 1)])
R> round(usa.cv$B, 3)
      [,1]   [,2]   [,3]
x    0.224 -0.593 -0.612
pi   0.113  1.299 -0.756
i    0.708  0.157  0.029
R> round(usa.cv$Lambda, 3)
     [,1]  [,2]  [,3]
x   1.244 0.000 0.000
pi  0.000 0.192 0.000
i   0.000 0.000 0.393

Herwartz and Plödt (2016b) interpret the impact effects in the first column of the matrix B as characterizing a demand shock. Similarly, the effects in the second (third) column indicate a supply (monetary policy) shock. The authors argue that their shock labeling according to the estimated sign patterns is in line with the relevant literature. Since the matrix Λ represents the variances of the structural shocks in the second regime, Herwartz and Plödt (2016b) interpret the diagonal elements of Λ such that the supply and monetary policy shocks have relatively lower variances and the demand shock a higher variance in regime two (i.e., for time instances t > T_SB = 59, or after the second quarter of 1979). The authors compare the results from this statistical identification scheme with a model structure implied by a covariance decomposition matrix B which is lower triangular by assumption (Sims 1980). The id.cv() function enables the user to test such restrictions by setting up a restriction matrix as described in the code below.
R> restMat <- matrix(rep(NA, 9), ncol = 3)
R> restMat[1, c(2, 3)] <- 0
R> restMat[2, 3] <- 0
R> restricted.model <- id.cv(plain.var, SB = c(1979, 3),
+    restriction_matrix = restMat)
R> summary(restricted.model)

Since the structural shocks are just identified by the change in the covariance matrix, any further restriction on B over-identifies the model and makes the restrictions testable. The function automatically performs a likelihood ratio test in case of such over-identifying restrictions. The summary depicts the estimation results from the restricted model as well as the test statistic and p-value. The likelihood ratio test indicates that the null hypothesis of a lower triangular structural impact matrix has to be rejected at the 5% significance level. Herwartz and Plödt (2016b) argue that identification by means of zero restrictions according to a lower triangular matrix lacks economic intuition, which we can support with the obtained diagnostic. Therefore, the unrestricted model should be preferred for further analysis. The next step is the calculation of impulse-response functions with bootstrap confidence bands to investigate future effects of the economically labeled structural shocks on the variables included in the model. Moreover, the implemented bootstrap functions allow for an evaluation of the significance of unique sign patterns in B as described in Herwartz (2018). We define a list of sign restrictions and label them as demand, supply and monetary policy shock, respectively.

R> signrest <- list(demand = c(1, 1, 1), supply = c(-1, 1, 1),
+    monetary_policy = c(-1, -1, 1))

For illustration, we use the wild bootstrap implemented with a Rademacher distribution, fixed design and 1000 bootstrap replications as in Herwartz and Plödt (2016b). To reduce computation time we parallelize the bootstrap and specify a seed to obtain reproducible results. The time horizon for the impulse-response analysis has to be determined in advance using the argument n.ahead.

R> cores <- parallel::detectCores() - 1
R> set.seed(231)
R> usa.cv.boot <- wild.boot(usa.cv, design = "fixed", distr = "rademacher",
+    n.ahead = 30, nboot = 1000, nc = cores, signrest = signrest)
R> summary(usa.cv.boot)
R> plot(usa.cv.boot, lowerq = 0.16, upperq = 0.84)

The summary reveals that only 12.7% of all bootstrap estimates are in line with all economically motivated sign patterns jointly. The sign pattern of the monetary policy shock appears in only 28.4% of all bootstrap samples. Furthermore, the bootstrap means indicate that the third shock is more in line with the sign pattern of the demand shock. This result is plausible noting that the point estimate in the lower right corner is close to zero and, therefore, lacks a significantly positive effect on the interest rate. Figure 2 shows the impulse-response functions of normalized shocks having unit variance in the first regime. Herwartz and Plödt (2016b) argue that the negative reaction of the interest rate to a monetary policy shock after the initial period is implausible and puts the interpretation of this shock as a monetary policy shock into question. The results from the bootstrap support the authors' argumentation with regard to the shock labeling. Furthermore, we can calculate the forecast error variance decomposition to investigate the contribution of each shock to the prediction mean squared error of the variables. The fevd() method creates an object for visual inspection of the forecast error variance decomposition by means of the plot function.

R> fev.cv <- fevd(usa.cv, n.ahead = 30)
R> plot(fev.cv)

Figure 3 depicts the forecast error variance decomposition. It is evident that the monetary policy shock accounts for more than 50% of the prediction mean squared error of the output gap, whereas the demand shock constantly accounts for only about 5% of the prediction mean squared error. Moreover, the demand shock contributes almost 100% of the forecast error variance of the interest rate on impact. Thus, the forecast error variance decompositions hint at a shock labeling which differs from the one developed above on the basis of the sign patterns of B. Furthermore, they confirm the conclusion of Herwartz and Plödt (2016b) that the empirical model fails to identify a monetary policy shock according to its theoretical effect patterns.
We re-estimate the structural form with the DC method under the assumption of independent non-Gaussian shocks. The estimated structural matrix differs from the estimated matrix obtained from the CV approach. The matrix identified by means of the DC method does not allow for a labeling of the shocks that accords with a unique sign pattern. Nevertheless, it is possible to label the shocks in a meaningful way, since one could assume that the loading of the structural shocks on reduced form errors is stronger for own effects in comparison with cross variable effects. The finding that a positive monetary policy shock has a positive effect on output and inflation might seem to be at odds with intuition at first, although this mechanism is observed rather frequently in the literature (e.g., Lütkepohl and Netsunajev 2017a) and is usually referred to as a price puzzle (Eichenbaum 1992). Conditional on the estimate B, we construct the historical decomposition. As an example, we decompose the output into its underlying determinants over the sample period. In the data set, output is the first column and, hence, series = 1 is the provided option.
R> hd.cv.1 <- hd(usa.dc, series = 1)
R> plot(hd.cv.1)

Figure 4 indicates that output fluctuations are mainly explained by demand shocks rather than supply or monetary policy shocks.

Summary
The R package svars provides a vast set of estimation techniques that build on several assumptions on the data and a variety of input arguments. In the present article, we describe how the implemented identification techniques for SVAR models depend on assumptions of heteroskedasticity and independence coupled with non-Gaussianity to retrieve the structural shocks from the reduced form VAR model. Furthermore, we provide a set of auxiliary functions which complement the cornerstone identification methods and thereby offer a complete toolbox for structural analysis in a multivariate time series context.
We give a step-by-step guide on how to use the functions on a real data set, comparing one representative from each of the two groups of heteroskedasticity based and independence based identification. Even though the estimation results are similar, identification by means of covariance shifts might imply a misleading sign pattern, which is indicated by the forecast error variance decomposition. Moreover, we illustrate how to test sign and zero restrictions by means of restricted log-likelihood estimation and bootstrap methods.
The svars package contains six alternative and recent SVAR identification techniques. Besides these, further popular data-driven identification approaches include, e.g., the heteroskedastic model with Markov switching mechanisms (Lanne, Lütkepohl, and Maciejowska 2010; Herwartz and Lütkepohl 2014) or pseudo ML estimation (Gourieroux, Monfort, and Renne 2017). Moreover, an option to test long run restrictions by means of likelihood based identification schemes is a possible augmentation of the package. We regard both directions as promising for future developments of svars.

Figure 2 :
Figure 2: Impulse-response functions with 68% confidence bands based on 1000 bootstrap replications. Structural shocks identified through an unconditional shift in the covariance structure.

Figure 3 :
Figure 3: Forecast error variance decomposition for 30 periods. Structural shocks identified by means of the CV model.

R> usa.dc <- id.dc(plain.var, PIT = FALSE)
R> summary(usa.dc)

Figure 4 :
Figure 4: Historical decomposition of the US output gap in percent deviations from the mean. Structural shocks are identified by means of the DC algorithm.
To decide on Gaussianity of the data, a number of normality tests are available in respective R packages (see, e.g., normtest and ICtest; Gavrilov and Pusev 2015; Nordhausen, Oja, Tyler, and Virta 2018). Furthermore, the svars package contains several useful tests for SVAR analysis which have not yet been implemented in R. Next, we describe the diagnostics and discuss several tools which support the economic interpretation of SVAR estimation results.
Kilian (1998) proposes a bias-corrected bootstrap procedure to account for small sample biases. By means of the so-called bootstrap-after-bootstrap method, the true underlying data generating process (DGP) is not approximated by the bootstrap DGP as in Equation 20 and Equation 21, but rather by means of a bootstrap DGP with bias-corrected VAR parameters β_BC = [µ_BC, A_BC,1, ..., A_BC,p]. The approach consists of two stages. In the first stage, bootstrap replications β* = [µ*, A*_1, ..., A*_p] are generated according to Equation 20 or Equation 21, and bias terms are approximated as Ψ = β̄* − β. Subsequently, the modulus of the largest root of the companion matrix associated with β can be calculated, which is denoted by m.