
Financial Condition Indices in an Incomplete Data Environment

  • Miguel C. Herculano and Punnoose Jacob

Abstract

We construct a Financial Conditions Index (FCI) for the United States using a dataset that features many missing observations. The novel combination of probabilistic principal component techniques and a Bayesian factor-augmented VAR model resolves the challenges posed by data points being unavailable within a high-frequency dataset. Even with up to 62 % of the data missing, the new approach yields a less noisy FCI that tracks the movement of 22 underlying financial variables more accurately both in-sample and out-of-sample.

JEL Classification: C11; C32; C52; C53; C66

Corresponding author: Miguel C. Herculano, Adam Smith Business School, University of Glasgow, Glasgow, Scotland.

The views expressed herein are those of the authors, and do not necessarily represent the official views of the Reserve Bank of New Zealand. We are grateful for the advice and guidance of Dimitris Korobilis and Gary Koop who commented on an early draft. We thank the Editor and an anonymous referee for their helpful comments.


Appendix A: Econometric Methods

A.1 Bayesian Kalman Filter with Incomplete Data

The model defined in (1)–(4) configures an MF-TVP-FAVAR and can be written compactly in state-space form as follows:

(6) $X_t = Z_t \lambda_t + u_t, \quad u_t \sim N(0, V_t),$

(7) $Z_t = Z_{t-1} \beta_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, Q_t),$

(8) $\beta_t = \beta_{t-1} + \eta_t, \quad \eta_t \sim N(0, R_t),$

(9) $\lambda_t = \lambda_{t-1} + v_t, \quad v_t \sim N(0, W_t),$

where $\lambda_t = [\lambda_t^y, \lambda_t^f]$ and $Z_t = [Y_t', F_t']'$. Note that $Z_t$ depends on the latent factor $F_t$, which is taken as data.[6] Let $\theta_t = \{\lambda_t, \beta_t\}$ denote the parameter set and $D_t = \{X_t, Z_t\}$ the data for t = {1, …, T}. Assuming that we know the posterior of θ at time t − 1, Bayesian filtering/smoothing is based on the equations below:

(10) $p(\theta_t, \theta_{t-1} \mid D_{t-1}) = p(\theta_t \mid \theta_{t-1}, D_{t-1})\, p(\theta_{t-1} \mid D_{t-1}),$

(11) $p(\theta_t \mid D_{t-1}) = \int_{\Omega} p(\theta_t \mid \theta_{t-1}, D_{t-1})\, p(\theta_{t-1} \mid D_{t-1})\, d\theta_{t-1},$

where Ω is the support of θ_{t−1}. The prediction step is given by the Chapman–Kolmogorov Equations (10) and (11).

Next, at each iteration t, the prior p(θ_t|D_{t−1}) is updated according to Equation (11), and the measurement likelihood p(D_t|θ_t) is augmented by an additional observation of D_t. Hence the posterior distribution is updated according to Bayes' rule:

(12) $p(\theta_t \mid D_t) = \frac{1}{H_t}\, p(D_t \mid \theta_t, D_{t-1})\, p(\theta_t \mid D_{t-1}),$

where $H_t = \int p(D_t \mid \theta_t)\, p(\theta_t \mid D_{t-1})\, d\theta_t$ is the normalizing constant. Equation (12) is referred to as the updating step. To summarize, the algorithm extends the one derived in Koop and Korobilis (2014) to an incomplete data environment in which $D_t$ is allowed to contain missing values. It consists of two steps, iterating through prediction (11) and updating (12) after the system is initialized. These two main steps are repeated for t = {1, …, T}.

A.1.1 Kalman Filter

A.1.1.1 Initialization (Priors)

All quantities are initialized according to their priors, which follow the diffuse choices of Koop and Korobilis (2014): $f_0 \sim N(0, 10)$, $\lambda_0 \sim N(0, I)$, $\beta_0 \sim N(0, I)$. The variances of the innovations in Equations (6)–(9) can be seen as hyperparameters and are set to $\hat{V}_0 = 0.1 \times I$, $\hat{Q}_0 = 0.1 \times I$, $\hat{R}_0 = 10^{-5} \times I$ and $\hat{W}_0 = 10^{-5} \times I$. In this setting, however, the hyperparameters are allowed to change smoothly over time following an Exponentially Weighted Moving Average (EWMA) scheme.
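For concreteness, a minimal sketch of this initialization in Python follows; the dimensions n and k are hypothetical placeholders, not the paper's actual settings.

```python
import numpy as np

# A minimal sketch of the diffuse initialization above; n (number of
# observables) and k (dimension of z_t) are hypothetical placeholders.
n, k = 22, 5
lam = np.zeros((n, k))                      # lambda_0 ~ N(0, I): prior mean zero
Sigma_lam = np.tile(np.eye(k), (n, 1, 1))   # identity prior covariance per row
beta = np.zeros((k, k))                     # beta_0 ~ N(0, I), VAR coefficients
Sigma_beta = np.eye(k * k)
f0_mean, f0_var = 0.0, 10.0                 # f_0 ~ N(0, 10)

V_hat = 0.1 * np.eye(n)                     # V_0 = 0.1 * I
Q_hat = 0.1 * np.eye(k)                     # Q_0 = 0.1 * I
R_hat = 1e-5 * np.eye(k * k)                # R_0 = 1e-5 * I
W_hat = 1e-5 * np.eye(k)                    # W_0 = 1e-5 * I
kappas = dict(kappa1=0.99, kappa2=0.99, kappa3=0.99, kappa4=0.99)
```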

A.1.1.2 Prediction

(13) $\lambda_t \sim N\left(\lambda_{t|t-1}, \Sigma^{\lambda}_{t|t-1}\right),$

(14) $\beta_t \sim N\left(\beta_{t|t-1}, \Sigma^{\beta}_{t|t-1}\right),$

where $\lambda_{t|t-1} = \lambda_{t-1|t-1}$, $\beta_{t|t-1} = \beta_{t-1|t-1}$ and

(15) $\Sigma^{\beta}_{t|t-1} = \Sigma^{\beta}_{t-1|t-1} + \hat{R}_{t|t-1},$

(16) $\Sigma^{\lambda}_{t|t-1} = \Sigma^{\lambda}_{t-1|t-1} + \hat{W}_{t|t-1}.$

The state covariances in the equations above are estimated by

(17) $\hat{R}_{t|t-1} = \frac{1}{\kappa_3}\, \hat{R}_{t-1|t-1},$

(18) $\hat{W}_{t|t-1} = \frac{1}{\kappa_4}\, \hat{W}_{t-1|t-1},$

where κ_3 and κ_4 are forgetting factors that define the law of motion of the parameters. We set these quantities to κ_3 = κ_4 = 0.99.[7] The prediction step allows us to compute the measurement-equation prediction errors, which are necessary inputs for the updating step and are computed as

(19) $\hat{u}_t = x_t - \hat{x}_{t|t-1},$

(20) $\hat{\varepsilon}_t = z_t - \hat{z}_{t|t-1},$

where $\hat{x}_{t|t-1} = z_t \lambda_{t|t-1}$ and $\hat{z}_{t|t-1} = z_{t-1} \beta_{t|t-1}$. With missing data, we simply set to zero the errors corresponding to the variables that are missing at a given point in time t.
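A sketch of the prediction step, under the assumptions of the earlier snippet (per-row loading blocks, NaNs marking missing observations); this is illustrative, not the authors' code.

```python
import numpy as np

def predict_step(lam, Sig_lam, beta, Sig_beta, R_hat, W_hat,
                 x_t, z_t, z_tm1, kappa3=0.99, kappa4=0.99):
    """One prediction step, Eqs. (13)-(20). lam is n x k, beta is k x k
    (VAR(1) coefficients for z_t); x_t may contain NaNs for missing series."""
    # Random-walk states: predicted means equal the previous filtered means.
    lam_pred, beta_pred = lam, beta
    # Covariance inflation through the forgetting factors, Eqs. (15)-(18).
    Sig_beta_pred = Sig_beta + R_hat / kappa3
    Sig_lam_pred = Sig_lam + W_hat / kappa4
    # Measurement-equation prediction errors, Eqs. (19)-(20).
    u_hat = x_t - lam_pred @ z_t          # x_hat_{t|t-1} = z_t lambda_{t|t-1}
    eps_hat = z_t - beta_pred @ z_tm1     # z_hat_{t|t-1} = z_{t-1} beta_{t|t-1}
    # Missing observations: zero the corresponding errors so they carry
    # no information into the update step.
    miss_mask = np.isnan(u_hat)
    u_hat = np.nan_to_num(u_hat)
    return lam_pred, Sig_lam_pred, beta_pred, Sig_beta_pred, u_hat, eps_hat, miss_mask
```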

A.1.1.3 Update

Update each $\lambda_{it}$ for i = 1, …, n, and $\beta_t$:

(21) $\lambda_{it} \sim N\left(\lambda_{it|t}, \Sigma^{\lambda}_{ii,t|t}\right),$

(22) $\beta_t \sim N\left(\beta_{t|t}, \Sigma^{\beta}_{t|t}\right).$

The terms in (21) are calculated as

(23) $\lambda_{it|t} = \lambda_{it|t-1} + \Sigma^{\lambda}_{ii,t|t-1} z_t' \left(\hat{V}_t + z_t \Sigma^{\lambda}_{ii,t|t-1} z_t'\right)^{-1} \hat{u}_{it},$

(24) $\Sigma^{\lambda}_{ii,t|t} = \Sigma^{\lambda}_{ii,t|t-1} - \Sigma^{\lambda}_{ii,t|t-1} z_t' \left(\hat{V}_t + z_t \Sigma^{\lambda}_{ii,t|t-1} z_t'\right)^{-1} z_t \Sigma^{\lambda}_{ii,t|t-1},$

where the term $\Sigma^{\lambda}_{ii,t|t-1} z_t' \left(\hat{V}_t + z_t \Sigma^{\lambda}_{ii,t|t-1} z_t'\right)^{-1}$ is the Kalman gain for each time period t and is set to zero for the variables missing at each step.

The terms in (22) are calculated as

(25) $\beta_{t|t} = \beta_{t|t-1} + \Sigma^{\beta}_{t|t-1} z_{t-1}' \left(\hat{Q}_t + z_{t-1} \Sigma^{\beta}_{t|t-1} z_{t-1}'\right)^{-1} \left(z_t - z_{t-1} \beta_{t|t-1}\right),$

(26) $\Sigma^{\beta}_{t|t} = \Sigma^{\beta}_{t|t-1} - \Sigma^{\beta}_{t|t-1} z_{t-1}' \left(\hat{Q}_t + z_{t-1} \Sigma^{\beta}_{t|t-1} z_{t-1}'\right)^{-1} z_{t-1} \Sigma^{\beta}_{t|t-1},$

where the term $\Sigma^{\beta}_{t|t-1} z_{t-1}' \left(\hat{Q}_t + z_{t-1} \Sigma^{\beta}_{t|t-1} z_{t-1}'\right)^{-1}$ is the Kalman gain for each time period t and is set to zero for the variables missing at each step.

The only remaining terms to define are the measurement-equation error covariance matrices, which are obtained via EWMA as follows:

(27) $\hat{V}_t = \kappa_1 \hat{V}_{t-1} + (1 - \kappa_1)\, \hat{u}_t \hat{u}_t',$

(28) $\hat{Q}_t = \kappa_2 \hat{Q}_{t-1} + (1 - \kappa_2)\, \hat{\varepsilon}_t \hat{\varepsilon}_t',$

where κ_1 and κ_2 are decay factors that define the laws of motion of the idiosyncratic volatilities in the measurement equation and of the volatilities of the observable variables and the factors in the state equation. We set these quantities to κ_1 = κ_2 = 0.99.
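A sketch of the loadings update consistent with these formulas, under the same stacked-block assumptions as the earlier snippets (again illustrative, not the authors' code):

```python
import numpy as np

def update_loadings(lam_pred, Sig_lam_pred, u_hat, z_t, V_hat, miss_mask,
                    kappa1=0.99):
    """Update step for the loadings, Eqs. (21)-(24) and (27). Sig_lam_pred
    stacks one k x k block per variable i; miss_mask[i] is True when x_it
    is missing, in which case the Kalman gain is set to zero and the
    moments are carried forward unchanged."""
    n = lam_pred.shape[0]
    lam_filt = lam_pred.copy()
    Sig_lam_filt = Sig_lam_pred.copy()
    for i in range(n):
        if miss_mask[i]:
            continue                            # zero gain for missing x_it
        S = Sig_lam_pred[i]
        denom = V_hat[i, i] + z_t @ S @ z_t     # scalar innovation variance
        gain = (S @ z_t) / denom                # Kalman gain in Eq. (23)
        lam_filt[i] = lam_pred[i] + gain * u_hat[i]          # Eq. (23)
        Sig_lam_filt[i] = S - np.outer(gain, z_t @ S)        # Eq. (24)
    # EWMA update of the measurement error covariance, Eq. (27).
    V_new = kappa1 * V_hat + (1.0 - kappa1) * np.outer(u_hat, u_hat)
    return lam_filt, Sig_lam_filt, V_new
```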

A.1.2 Kalman Smoother

The Kalman filter algorithm described above works by forward recursion and outputs estimates of E(θ_t|D_t) for all parameters $\theta = \{\hat{V}_t, \hat{Q}_t, \hat{R}_t, \hat{W}_t, \hat{\beta}_t, \hat{\lambda}_t\}$ in the model, using only the data D_t available up to time t. However, we are ultimately interested in an estimate of E(θ_t|D_T), which yields the parameter states conditional on the entire sample t = 1, …, T. Therefore, the Kalman smoother is applied to the output of the Kalman filter, working by backward recursion. Given that the Kalman filter already accounts for missing data, no alteration to the Kalman smoother is necessary.
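Because the transition equations (8)–(9) are random walks, the backward pass reduces to a standard Rauch–Tung–Striebel recursion with an identity transition matrix; a generic sketch (consistent with, but not taken from, the authors' implementation) follows.

```python
import numpy as np

def rts_smoother(means_filt, covs_filt, means_pred, covs_pred):
    """Backward recursion for a random-walk state: theta_{t|T} =
    theta_{t|t} + C_t (theta_{t+1|T} - theta_{t+1|t}), with smoother gain
    C_t = Sigma_{t|t} Sigma_{t+1|t}^{-1}. Inputs are lists of the filtered
    and one-step-ahead predicted moments from the forward pass."""
    T = len(means_filt)
    means_sm = list(means_filt)
    covs_sm = list(covs_filt)
    for t in range(T - 2, -1, -1):
        C = covs_filt[t] @ np.linalg.inv(covs_pred[t + 1])
        means_sm[t] = means_filt[t] + C @ (means_sm[t + 1] - means_pred[t + 1])
        covs_sm[t] = covs_filt[t] + C @ (covs_sm[t + 1] - covs_pred[t + 1]) @ C.T
    return means_sm, covs_sm
```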

A.1.3 Kalman Smoother/Filter for Factors

The full three-step procedure described in Section 4.2 is completed by applying the Kalman filter and smoother algorithm to the factors, which are initialized with a PCA estimate. The algorithm and its incomplete-data extensions are analogous to those previously described.

A.2 PCA Algorithms for Incomplete Data

A.2.1 Least-Squares PCA in the Presence of Missing Data

Several simple ways to deal with missing values in a classical least-squares (LS) PCA framework consist of setting missing values to zero or using Expectation Maximization PCA (EM PCA). The latter technique is used by Stock and Watson (2002) and consists of an iterative procedure that alternates between imputing missing values in X (E-step) and applying standard PCA to the pseudo-balanced panel of data (M-step) until convergence is reached. To summarize, the algorithm proceeds as follows:

  1. E-step: Reconstruct $X_t$ by filling in its missing values:

    (29) $X_t^* = \begin{cases} X_t & \text{for observed values} \\ \hat{\Lambda}^k \hat{F}_t^k & \text{for missing values.} \end{cases}$

  2. M-step: Perform standard PCA by SVD on the infilled matrix $X_t^*$ and obtain new values for $\hat{\Lambda}^k$, $\hat{F}_t^k$.

The algorithm alternates between the E- and M-steps until convergence is reached, that is, until the parameter estimates from iteration k no longer improve the least-squares objective relative to iteration k − 1. A minimal sketch is given below.
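The sketch assumes a standardized T × N panel with NaNs marking missing entries; function and variable names are our own.

```python
import numpy as np

def em_pca(X, k, n_iter=200, tol=1e-8):
    """EM-PCA in the spirit of Stock and Watson (2002): alternate between
    imputing missing entries (E-step) and an SVD of the completed panel
    (M-step). X is T x N, standardized, with NaNs for missing values."""
    miss = np.isnan(X)
    X_star = np.where(miss, 0.0, X)       # initialize missing entries at zero
    prev_obj = np.inf
    for _ in range(n_iter):
        # M-step: principal components of the completed panel via SVD.
        U, s, Vt = np.linalg.svd(X_star, full_matrices=False)
        F = U[:, :k] * s[:k]              # T x k factor estimates
        Lam = Vt[:k]                      # k x N loadings
        fitted = F @ Lam
        # E-step, Eq. (29): refill only the missing cells with the fit.
        X_star = np.where(miss, fitted, X)
        # Least-squares objective over observed entries; stop once it settles.
        obj = np.nansum((X - fitted) ** 2)
        if prev_obj - obj < tol:
            break
        prev_obj = obj
    return F, Lam
```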

A.2.2 Variational View of the Expectation Maximization (EM) Algorithm for Incomplete Data Environments

Consider the standard PC regression

(30) $X_t = \Lambda F_t + \xi_t, \quad \xi_t \sim N(0, v_x I),$

where θ = {Λ, F_t, v_x} are model parameters, and a subset X_mis of the data matrix is missing and treated as hidden variables. The variational view of the EM algorithm (see Attias 2000; Neal and Hinton 1999) consists of minimizing the objective function

(31) $V\left(\theta, p(X_{\text{mis}})\right) = \int p(X_{\text{mis}}) \log \frac{p(X_{\text{mis}})}{p(X \mid \theta)}\, dX_{\text{mis}}$

(32) $\qquad = \int p(X_{\text{mis}}) \log \frac{p(X_{\text{mis}})}{p(X_{\text{mis}} \mid \theta)}\, dX_{\text{mis}} - \log p(X_{\text{obs}} \mid \theta),$

with respect to (wrt) the model parameters θ and the density over the missing data p(X_mis). X_obs denotes the observed data, such that X = X_mis ∪ X_obs.

E-step. The first term in Equation (32) is the Kullback–Leibler divergence between the assumed and model-implied pdfs over the unobservable data. The minimization of this expression wrt p(X_mis), given θ, can be shown to yield

(33) $p(X_{\text{mis}} \mid \theta) = \prod_{ij \notin O} N\left(\hat{x}_{ij}(\theta), v_x\right),$

where O is the set of indices for which observation x_ij is observed, and $\hat{x}_{ij}(\theta)$ results from the reconstruction of the incomplete data matrix X by the model in (30), for a given θ. This procedure is referred to as the E-step of the algorithm.

M-step. Next, the results from the E-step are substituted back into expression (32). The terms in the resulting expression that depend on θ are given by

(34) $-\int p(X_{\text{mis}}) \log p(X \mid \theta)\, dX_{\text{mis}}.$

It can be shown that minimizing (34) wrt θ is equivalent to minimizing the LS objective function in the case of no missing data (Neal and Hinton 1999). Thus, the M-step of the algorithm consists of performing an SVD on the imputed data matrix X. The algorithm alternates between the E- and M-steps until convergence is reached (i.e. when the reconstruction error stabilizes).

A.2.3 Probabilistic PCA (PPCA)

A probabilistic PCA specification has been found to provide a good foundation for handling missing data (Ilin and Raiko 2010). The probabilistic PCA set forth by Tipping and Bishop (1999) can be written as follows:

(35) $X_t = m + \Lambda F_t + \xi_t,$

where both the principal component and the noise term are assumed normally distributed as follows:

(36) $p(F_t) \sim N(0, I_K),$

(37) $p(\xi_t) \sim N\left(0, \tau^{-1} I_N\right),$

where θ = {m, Λ, τ} are model parameters, I_K and I_N denote identity matrices, and τ is the scalar inverse variance of ξ_t. It can be shown that Maximum Likelihood (ML) estimation of the PPCA is identical to PCA in the case of no missing data. The great advantage of the PPCA is that, in the case of incomplete data, it allows for regularization that arises naturally from the choice of Gaussian priors. The model can then be estimated with a standard EM algorithm. The necessary extensions to handle missing data are discussed in Ilin and Raiko (2010). Below we summarize their procedure.

E-step. Estimate the conditional distribution of the hidden variables F given the data X and model parameters θ,

(38) $p(F \mid X, \theta) = \prod_{j=1}^{K} N\left(\bar{F}_j, \Sigma_{F_j}\right),$

based on the following updating rules

(39) $\Sigma_{F_j} = \tau^{-1} \left(\tau^{-1} I + \sum_{i \in O_j} \lambda_i \lambda_i^T\right)^{-1},$

(40) $\bar{F}_j = \tau\, \Sigma_{F_j} \sum_{i \in O_j} \lambda_i (x_{ij} - m_i), \quad j = 1, \ldots, K,$

(41) $m_i = \frac{1}{|O_i|} \sum_{j \in O_i} \left(x_{ij} - \lambda_i^T \bar{F}_j\right).$

M-step. Re-estimate the model parameters as

(42) $\lambda_i = \left(\sum_{j \in O_i} \bar{F}_j \bar{F}_j^T + \Sigma_{F_j}\right)^{-1} \sum_{j \in O_i} \bar{F}_j (x_{ij} - m_i), \quad i = 1, \ldots, N,$

(43) $\tau = \left[\frac{1}{N} \sum_{ij \in O} \left(\left(x_{ij} - \lambda_i^T \bar{F}_j - m_i\right)^2 + \lambda_i^T \Sigma_{F_j} \lambda_i\right)\right]^{-1},$

where O_i, O_j and O denote the sets of indices i, j for which x_ij is observed.
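The following sketch implements the updating rules (39)–(43) directly, without the additional refinements that Ilin and Raiko (2010) discuss; the layout (X as an N × T matrix with NaNs for missing cells) and all names are our own.

```python
import numpy as np

def ppca_em(X, K, n_iter=100, seed=0):
    """EM for probabilistic PCA with missing data, a minimal sketch of the
    updating rules (39)-(43). X is N x T with NaNs for missing cells; every
    row and column is assumed to have at least one observation."""
    N, T = X.shape
    obs = ~np.isnan(X)
    rng = np.random.default_rng(seed)
    Lam = 0.1 * rng.standard_normal((N, K))        # row i holds lambda_i'
    m = np.where(obs, X, 0.0).sum(1) / obs.sum(1)  # initial bias terms m_i
    tau = 1.0                                      # inverse noise variance
    F_bar = np.zeros((K, T))
    Sig_F = np.zeros((T, K, K))
    for _ in range(n_iter):
        # E-step, Eqs. (38)-(40): posterior moments of each factor score.
        for j in range(T):
            Oj = obs[:, j]
            Sig_F[j] = np.linalg.inv(np.eye(K) / tau + Lam[Oj].T @ Lam[Oj]) / tau
            F_bar[:, j] = tau * Sig_F[j] @ Lam[Oj].T @ (X[Oj, j] - m[Oj])
        # Bias update, Eq. (41): average observed residuals per row.
        m = np.where(obs, X - Lam @ F_bar, 0.0).sum(1) / obs.sum(1)
        # M-step, Eq. (42): re-estimate each loading row.
        for i in range(N):
            Oi = np.flatnonzero(obs[i])
            A = F_bar[:, Oi] @ F_bar[:, Oi].T + Sig_F[Oi].sum(0)
            Lam[i] = np.linalg.solve(A, F_bar[:, Oi] @ (X[i, Oi] - m[i]))
        # Noise precision, Eq. (43), averaged over the observed cells.
        fit = Lam @ F_bar + m[:, None]
        quad = np.einsum('ik,jkl,il->ij', Lam, Sig_F, Lam)  # lambda_i' Sig_Fj lambda_i
        tau = obs.sum() / np.where(obs, (X - fit) ** 2 + quad, 0.0).sum()
    return Lam, F_bar, m, tau
```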

A.2.4 Variational Bayesian PCA (VBPCA)

Some studies suggest that the standard PPCA is still vulnerable to over-fitting (see, for example, Ilin and Raiko 2010). One possible reason for an overfitted solution is the nontrivial choice of the number of principal components to include in (35): including a large number of common components F_t may cause the model to over-learn the data.

One possible solution to this problem consists of penalizing large values in the matrices Λ and F_t. The probabilistic PCA model is flexible enough to allow for an automatic, data-driven selection of relevant common components by shrinking to zero the solutions λ_j that are small relative to the noise variance. This can be achieved through a Variational Bayesian PCA algorithm, as explained below. We follow Oba et al. (2003) in imposing additional regularization to penalize parameter values that yield more complex explanations of the data. Hence, in addition to (35)–(37), one can further impose

(44) $p(m \mid \tau) \sim N\left(0, (\gamma_{m_0} \tau)^{-1} I_T\right),$

(45) $p(\lambda_j \mid \tau, \alpha_j) \sim N\left(0, (\alpha_j \tau)^{-1} I_T\right),$

(46) $p(\tau) \sim G\left(\tau \mid \bar{\tau}_0, \gamma_{\tau_0}\right),$

where $\psi = \{\gamma_{m_0}, \gamma_{\tau_0}, \bar{\tau}_0, \alpha_j\}$ are hyperparameters and λ_j are the parameters in column j of the loadings matrix Λ that define the importance of each principal component F_j, j = {1, 2, …, K}. The prior p(Λ|α, τ), which has a hierarchical structure, is called an automatic relevance determination (ARD) prior. This structure plays a key role in guaranteeing the parsimony of the model. Its variance $(\alpha_j \tau)^{-1}$ is determined by a hyperparameter α_j that becomes large when the Euclidean norm ‖λ_j‖ is small relative to the noise variance τ^{−1}.

Estimation of the model now requires a variational EM algorithm, as proposed by Attias (2000), to cope with the unknown analytical form of the posterior of the parameters p(Λ, F_t, m|X, ψ), which invalidates the E-step of the standard EM algorithm. To overcome this difficulty, the author proposes approximating this quantity by a simpler q(Λ, F_t, m). Using a variational approach, the E-step is modified such that the objective function approximates p(θ|X) with a simpler pdf p(θ), written as follows:

(47) $V(p(\theta), \psi) = \int p(\theta) \log \frac{p(\theta)}{p(X, \theta \mid \psi)}\, d\theta$

(48) $\qquad = \int p(\theta) \log \frac{p(\theta)}{p(\theta \mid X, \psi)}\, d\theta - \log p(X \mid \psi).$

E-step. The first term in Equation (48) is the Kullback–Leibler divergence between the true posterior and its approximation. In this step the approximation p(θ) is updated, which corresponds to minimizing this distance wrt p(θ).

M-step. Next, the approximation p(θ) is used as if it were the actual posterior p(θ|X, ψ) in order to increase p(X|ψ). This consists of optimizing expression (47) wrt ψ.

The algorithm alternates between the E- and M-steps until convergence is reached (i.e. when the reconstruction error stabilizes). A stylized sketch of the ARD mechanism follows.
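To illustrate the pruning mechanism only, the sketch below grafts a point-estimate (MAP-style) version of the ARD update onto the PPCA M-step from the previous subsection; it is a caricature of the full variational treatment in Oba et al. (2003), and the update rule for α_j is an assumption on our part.

```python
import numpy as np

def m_step_with_ard(F_bar, Sig_F, x_row, m_i, obs_row, alpha):
    """Row-wise M-step of Eq. (42) with the ARD prior (45) folded in: the
    hyperparameters alpha_j act as a ridge penalty on the normal equations,
    shrinking weak columns of Lambda toward zero."""
    Oi = np.flatnonzero(obs_row)
    A = F_bar[:, Oi] @ F_bar[:, Oi].T + Sig_F[Oi].sum(0) + np.diag(alpha)
    return np.linalg.solve(A, F_bar[:, Oi] @ (x_row[Oi] - m_i))

def update_alpha(Lam, tau, eps=1e-12):
    """Type-II-style update for the ARD hyperparameters: alpha_j becomes
    large when the column norm ||lambda_j|| is small relative to the noise
    variance 1/tau, effectively pruning that component."""
    N = Lam.shape[0]
    return N / (tau * (Lam ** 2).sum(axis=0) + eps)
```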

Appendix B: Additional Tables and Figures

Table 2:

Description of the data used to construct the financial conditions index.

| # | Mnemonic | Description | Frequency | Sample start | t-code | Source |
|---|----------|-------------|-----------|--------------|--------|--------|
| 1 | GDPC1 | Real gross domestic product, SA, annual rate | Q | 1971-01-01 | 5 | St. Louis FRED |
| 2 | CPIAUCSL | Consumer price index for all urban consumers | M | 1971-01-01 | 5 | St. Louis FRED |
| 3 | UNRATE | Unemployment rate, percent, SA | M | 1971-01-01 | 5 | St. Louis FRED |
| 4 | DFF | Effective federal funds rate | D | 1971-01-01 | 1 | St. Louis FRED |
| 5 | CMDEBT | Households and nonprofit organizations debt, % GDP | Q | 1971-01-01 | 5 | St. Louis FRED |
| 6 | ABSITCMAHDFS | Issuers of asset-backed securities, % GDP | Q | 1983-07-01 | 5 | St. Louis FRED |
| 7 | DRTSCILM | Net percentage of domestic banks tightening standards for commercial and industrial loans | Q | 1990-04-01 | 1 | St. Louis FRED |
| 8 | TERMCBAUTO48NS | Finance rate on consumer installment loans at commercial banks, new autos 48-month loan, percent | Q | 1972-01-01 | 5 | St. Louis FRED |
| 9 | TDSP | Household debt service payments as a percent of disposable personal income | Q | 1980-01-01 | 1 | St. Louis FRED |
| 10 | TOTALSL | Total consumer credit owned and securitized, % GDP | M | 1971-01-01 | 1 | St. Louis FRED |
| 11 | LOANHPI | Loan performance index, U.S. | M | 1976-03-01 | 5 | Bloomberg |
| 12 | CONSEXFI | UMich expected change in financial conditions | M | 1978-02-01 | 1 | Uni Michigan |
| 13 | BPLR | Bank prime loan rate/LIBOR spread | M | 1971-01-01 | 1 | St. Louis FRED |
| 14 | JPMNEER | JPMorgan broad nominal effective exchange rate (2010 = 100) | M | 1971-01-01 | 5 | Bloomberg |
| 15 | LDR | All commercial banks loan-to-deposit ratio | M | 1973-01-01 | 1 | Haver Analytics |
| 16 | 2/3TBS | 2-year/3-month treasury bill spread | M | 1976-06-01 | 1 | St. Louis FRED |
| 17 | MORTGAGE30US | Mortgage rate/10-year treasury bill spread | W | 1971-04-02 | 1 | St. Louis FRED |
| 18 | T10Y2Y | 10-year minus 2-year treasury constant maturity yield, percent | D | 1976-06-01 | 1 | St. Louis FRED |
| 19 | BAMLH0A0HYM2EY | ICE BofAML US high yield master II effective yield, percent | D | 1996-12-31 | 1 | Bloomberg |
| 20 | MOVE index | Yield-curve-weighted index of the normalized implied volatility on 1-month treasury options | D | 1988-04-04 | 1 | Bloomberg |
| 21 | CRY index | Thomson Reuters/CoreCommodity CRB commodity index | D | 1994-01-03 | 1 | Bloomberg |
| 22 | VXOVIX | Cboe S&P 100/500 volatility index | D | 1990-01-02 | 1 | St. Louis FRED |
| 23 | BASPTDSP | TED spread | D | 2001-01-02 | 1 | St. Louis FRED |
| 24 | WILL5000PRFC | Wilshire 5000 full cap price index | D | 1971-01-01 | 5 | St. Louis FRED |
| 25 | CPFF | 3-month commercial paper minus federal funds rate, percent, daily, not seasonally adjusted | D | 1997-01-02 | 1 | St. Louis FRED |
| 26 | SP500 | S&P 500 price index | D | 1971-01-01 | 5 | St. Louis FRED |

Notes: Mnemonic refers to the statistical reference with which the time series can be fetched from the source. Frequency is either Q: quarterly, M: monthly, W: weekly or D: daily. Sample start refers to the first observation for a specific time series in our sample period 1971–2020. t-code refers to the transformation applied to each variable: 1: levels; 5: log-differences.

Figure 5: FCI calculated with different signal extraction methods within the FAVAR. The four methods are simple principal components (PCA), expectation maximization PCA (EM-PCA), probabilistic PCA (PPCA) and variational Bayes PCA (VBPCA).

Figure 6: Reconstruction mean squared error across four alternative PCA methods. Notes: Mean-squared error statistics are calculated in-sample as $\text{MSE} = \frac{1}{N} \sum_{ij \in O} (x_{ij} - \hat{x}_{ij})^2$, where O includes all indices for which x_ij is observed and $\hat{x}_{ij}$ results from the projection of X_t on the factors F_t in Equation (1), permitting the reconstruction of the incomplete data set of financial variables that load onto the FCI. Statistics are computed for the observed section of the standardized unbalanced panel of financial indicators across the different signal extraction methods.
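A one-line implementation of this reconstruction error, assuming NaNs mark the missing cells and averaging over the observed entries:

```python
import numpy as np

def reconstruction_mse(X, X_hat):
    """Reconstruction error of Figure 6: mean squared error over the
    observed cells only; X holds NaNs where data are missing."""
    obs = ~np.isnan(X)
    return ((X - X_hat)[obs] ** 2).mean()
```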

References

Adrian, T., N. Boyarchenko, and D. Giannone. 2019. "Vulnerable Growth." The American Economic Review 109 (4): 1263–89. https://doi.org/10.1257/aer.20161923.

Attias, H. 2000. "A Variational Bayesian Framework for Graphical Models." In Advances in Neural Information Processing Systems, 18. Cambridge, MA: MIT Press.

Bańbura, M., and G. Rünstler. 2011. "A Look into the Factor Model Black Box: Publication Lags and the Role of Hard and Soft Data in Forecasting GDP." International Journal of Forecasting 27 (2): 333–46. https://doi.org/10.1016/j.ijforecast.2010.01.011.

Bańbura, M., D. Giannone, M. Modugno, and L. Reichlin. 2013. "Chapter 4 – Now-Casting and the Real-Time Data Flow." In Handbook of Economic Forecasting, Vol. 2, edited by G. Elliott, and A. Timmermann. Amsterdam: Elsevier (North Holland Publishing Co.). https://doi.org/10.1016/B978-0-444-53683-9.00004-9.

Bernanke, B. S., J. Boivin, and P. Eliasz. 2005. "Measuring the Effects of Monetary Policy: A Factor-Augmented Vector Autoregressive (FAVAR) Approach." Quarterly Journal of Economics 120 (1): 387–422. https://doi.org/10.1162/0033553053327452.

Bordo, M. D. 2017. "An Historical Perspective on the Quest for Financial Stability and the Monetary Policy Regime." NBER Working Papers 24154. National Bureau of Economic Research, Inc. https://doi.org/10.3386/w24154.

Brave, S., R. A. Butters, and D. Kelley. 2011. "Monitoring Financial Stability: A Financial Conditions Index Approach." Economic Perspectives 35 (1): 22–43. https://doi.org/10.21033/ep-2019-1.

Doz, C., D. Giannone, and L. Reichlin. 2011. "A Two-Step Estimator for Large Approximate Dynamic Factor Models Based on Kalman Filtering." Journal of Econometrics 164 (1): 188–205. https://doi.org/10.1016/j.jeconom.2011.02.012.

Eraslan, S., and M. Schröder. 2022. "Nowcasting GDP with a Pool of Factor Models and a Fast Estimation Algorithm." International Journal of Forecasting 39 (3): 1460–76. https://doi.org/10.1016/j.ijforecast.2022.07.009.

Foroni, C., and M. G. Marcellino. 2013. "A Survey of Econometric Methods for Mixed-Frequency Data." SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2268912.

Giannone, D., L. Reichlin, and D. Small. 2008. "Nowcasting: The Real-Time Informational Content of Macroeconomic Data." Journal of Monetary Economics 55 (4): 665–76. https://doi.org/10.1016/j.jmoneco.2008.05.010.

Giglio, S., B. Kelly, and S. Pruitt. 2016. "Systemic Risk and the Macroeconomy: An Empirical Evaluation." Journal of Financial Economics 119 (3): 457–71. https://doi.org/10.1016/j.jfineco.2016.01.010.

Ilin, A., and T. Raiko. 2010. "Practical Approaches to Principal Component Analysis in the Presence of Missing Values." Journal of Machine Learning Research 11: 1957–2000.

Koop, G., and D. Korobilis. 2012. "Forecasting Inflation Using Dynamic Model Averaging." International Economic Review 53 (3): 867–86. https://doi.org/10.1111/j.1468-2354.2012.00704.x.

Koop, G., and D. Korobilis. 2013. "Large Time-Varying Parameter VARs." Journal of Econometrics 177 (2): 185–98. https://doi.org/10.1016/j.jeconom.2013.04.007.

Koop, G., and D. Korobilis. 2014. "A New Index of Financial Conditions." European Economic Review 71: 101–16. https://doi.org/10.1016/j.euroecorev.2014.07.002.

Koopman, S. J., and J. J. Commandeur. 2008. Introduction to State Space Time Series Analysis. Oxford: Oxford University Press.

Mariano, R. S., and Y. Murasawa. 2003. "A New Coincident Index of Business Cycles Based on Monthly and Quarterly Series." Journal of Applied Econometrics 18 (4): 427–43. https://doi.org/10.1002/jae.695.

Neal, R. M., and G. E. Hinton. 1999. "A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants." In Learning in Graphical Models, 89. Dordrecht: Springer. https://doi.org/10.1007/978-94-011-5014-9_12.

Oba, S., M. A. Sato, I. Takemasa, M. Monden, K. I. Matsubara, and S. Ishii. 2003. "A Bayesian Missing Value Estimation Method for Gene Expression Profile Data." Bioinformatics 19 (16): 2088–96. https://doi.org/10.1093/bioinformatics/btg287.

Raftery, A. E., M. Kárný, and P. Ettler. 2010. "Online Prediction under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill." Technometrics 52 (1): 52–66. https://doi.org/10.1198/tech.2009.08104.

Stock, J. H., and M. W. Watson. 2002. "Macroeconomic Forecasting Using Diffusion Indexes." Journal of Business & Economic Statistics 20 (2): 147–62. https://doi.org/10.1198/073500102317351921.

Stock, J. H., and M. W. Watson. 2016. "Chapter 8 – Dynamic Factor Models, Factor-Augmented Vector Autoregressions, and Structural Vector Autoregressions in Macroeconomics." In Handbook of Macroeconomics, Vol. 2, 415–525. Amsterdam: Elsevier (North Holland Publishing Co.). https://doi.org/10.1016/bs.hesmac.2016.04.002.

Tipping, M. E., and C. M. Bishop. 1999. "Probabilistic Principal Component Analysis." Journal of the Royal Statistical Society – Series B: Statistical Methodology 61 (3): 611–22. https://doi.org/10.1111/1467-9868.00196.


Supplementary Material

This article contains supplementary material (https://doi.org/10.1515/snde-2022-0115).


Received: 2022-12-16
Accepted: 2023-11-09
Published Online: 2023-12-21

© 2023 Walter de Gruyter GmbH, Berlin/Boston
