A method for constructing the Composite Indicator of business cycles based on information granulation and Dynamic Time Warping
Introduction
Business cycles are a type of fluctuation found in the aggregate economic activity of nations over several months or years: a cycle consists of expansions occurring at about the same time in many economic activities, followed by similarly general recessions, contractions, and revivals which merge into the expansion phase of the next cycle. Business cycle analysis is important for policy-makers, who must assess the current state of the economy in real time and make further decisions. Real Gross Domestic Product (GDP) is arguably the best single measure of the aggregate economy. Unfortunately, it is released only quarterly, which is slow relative to the frequency of decision-making. To address this problem, [2] suggested using a wide range of higher-frequency series, such as monthly coincident indicators, in place of the real GDP series. The Business Cycle Dating Committee of the National Bureau of Economic Research (NBER) primarily relies on four monthly indicators of economic performance for the analysis of business cycles: the index of industrial production, payroll employment, real manufacturing and trade sales, and real personal income less current transfers.
A Composite Indicator (CI) is formed when individual indicators are compiled into a single index on the basis of an underlying model. A composite indicator should ideally measure multidimensional concepts which cannot be captured by a single indicator, e.g., competitiveness, industrialisation, sustainability, single market integration, knowledge-based society, etc. [20]. In the U.S., the Conference Board calculates the CI by taking a weighted cross-section average of those four monthly coincident indicators [19]. Starting with the seminal paper of [33], Dynamic Factor Models (DFMs) have been increasingly applied to business cycle analysis. In that work, a single-index factor model was constructed, and the single common factor extracted from four monthly coincident indicators of the U.S. served as the CI, which performed well. Afterwards, aware of the paramount importance of real GDP, [17], [18] extended the Stock–Watson model by including the quarterly real GDP rate in DFMs, that is, the mixed-frequency data case. They set up a state-space framework by treating quarterly real GDP as the temporal aggregation of a monthly estimate of real GDP. [17] used the estimated common factor as the CI, while the monthly estimate of real GDP served as an alternative CI in [18]. The success of mixed-frequency DFMs inspired researchers to develop further mixed-frequency models to account for business cycles. Because of its flexibility and ease of use, the mixed-frequency Vector Auto-Regressive (VAR) model has been widely applied [8], [31]. Moreover, other VAR-based models have been developed, for instance the Markov-switching mixed-frequency VAR model [6] and the mixed-frequency Vector Error Correction Model (VECM) [32].
Evidently, more and more researchers are inclined to incorporate quarterly real GDP data when establishing the CI, so the construction method used by the Conference Board now seems somewhat outdated. In models with mixed-frequency data, variables are normally log-transformed before modelling so as to stabilize their sample variances, which complicates the temporally aggregated low-frequency series: although the sum of the original variables is observed, the logarithm of a sum is not equal to the sum of the logarithms. To address this, a temporally aggregated variable is normally treated as the geometric average of the unobserved high-frequency variable. This approximation holds only when the adjacent monthly real GDP values in the same quarter are nearly equal; otherwise problems occur, and that assumption is not always true in practice.
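The size of the approximation error can be made concrete with a small sketch. Using hypothetical monthly real-GDP levels (assumed purely for illustration), it compares the exact logarithm of the quarterly sum with the geometric-average approximation:

```python
import math

# Hypothetical monthly real-GDP levels within one quarter (illustrative only).
months_equal = [100.0, 100.0, 100.0]   # nearly equal months
months_uneven = [80.0, 100.0, 120.0]   # uneven months

def log_of_sum(ms):
    """Exact log of the observed quarterly aggregate."""
    return math.log(sum(ms))

def geometric_approx(ms):
    """Common approximation: log(sum) ~ log(n) + average of monthly logs."""
    n = len(ms)
    return math.log(n) + sum(math.log(m) for m in ms) / n

for ms in (months_equal, months_uneven):
    print(ms, log_of_sum(ms) - geometric_approx(ms))
```

When the three months are equal the two quantities coincide; as the within-quarter dispersion grows, the approximation (being a geometric rather than arithmetic mean) falls strictly below the exact value, which is exactly the failure mode described above.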
The ultimate objective of this study is to develop a fast construction method for the CI, one which not only takes the GDP variable into consideration but also refrains from uncertain assumptions about temporal aggregation. Information granularity is a suitable tool for this purpose. Since the seminal concept of information granularity was proposed by [37], a growing number of influential works have concentrated on the generalization and development of this concept [1], [27], [38]. By virtue of its powerful abilities of data compression and information extraction, information granularity has emerged as a useful vehicle to represent and solve problems in many domains, such as fuzzy time series [16], [35], digital images [36], spatial network structure [28] and decision making [3], [4]. In light of the advantages of information granularity in dealing with incomplete information [14], this paper incorporates information granularity into establishing the CI of business cycles. First, guided by the principle of justifiable granularity [23], this paper divides the quarterly real GDP series into several consecutive information granules (intervals) by the Particle Swarm Optimization (PSO) algorithm [7], [11]. Next, monthly coincident indicators are segmented correspondingly and Dynamic Time Warping (DTW) [29], [30], [34] is applied to compute the distance between monthly coincident indicator segments and quarterly GDP segments. The weights are derived by normalizing the reciprocals of these distance values. Finally, the composite indicator of business cycles is obtained by taking a weighted cross-section average of those four monthly coincident indicators. The numerical experiment shows that the proposed method not only constructs an accurate CI but also makes the CI more interpretable and meaningful.
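The weighting step described above is straightforward once the DTW distances are available. The following sketch uses assumed distance values and assumed standardized indicator readings (not the paper's data) to show how reciprocal-distance normalization and the weighted cross-section average fit together:

```python
# One assumed DTW distance per coincident indicator (illustrative values only).
dtw_distances = {"EMP": 2.0, "INC": 4.0, "IIP": 1.0, "SLS": 4.0}

# Weights are the normalized reciprocals of the distances: indicators that
# track GDP more closely (smaller distance) get larger weight.
recip = {k: 1.0 / d for k, d in dtw_distances.items()}
total = sum(recip.values())
weights = {k: r / total for k, r in recip.items()}   # weights sum to 1

# One month's standardized indicator readings (again, assumed for illustration).
indicators = {"EMP": 0.5, "INC": -0.2, "IIP": 1.1, "SLS": 0.3}

# The CI for that month is the weighted cross-section average.
ci = sum(weights[k] * indicators[k] for k in indicators)
print(weights, ci)
```

With these assumed distances, IIP (closest to GDP) receives half the total weight, so the CI leans toward the indicator that best co-moves with the aggregate series.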
The remaining parts of this paper are arranged as follows: Section 2 presents some basic concepts of information granulation and dynamic time warping; Section 3 details the proposed CI construction method; Section 4 shows the numerical experiment; the conclusions are covered in Section 5.
Preliminaries
Before elucidating the proposed CI construction method of business cycles, it is necessary to recall some basic concepts of information granulation and dynamic time warping.
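As a concrete companion to the DTW concept recalled here, the sketch below gives the standard textbook dynamic-programming recurrence with the absolute difference as the local cost; this is a generic formulation, not necessarily the exact variant used in the paper:

```python
def dtw_distance(x, y):
    """Classic DTW: minimal cumulative cost of a monotone alignment path."""
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j] = cost of best alignment of x[:i] with y[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local distance
            D[i][j] = cost + min(D[i - 1][j],        # step in x only
                                 D[i][j - 1],        # step in y only
                                 D[i - 1][j - 1])    # step in both
    return D[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 3]))        # identical sequences: 0.0
print(dtw_distance([1, 2, 3, 3], [1, 1, 2, 3]))  # time-shifted copy: 0.0 via warping
```

Unlike a point-by-point Euclidean comparison, the warping path lets a monthly segment align with a quarterly pattern even when their turning points are slightly out of phase, which is why DTW suits the mixed-frequency comparison in this paper.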
The proposed CI construction method
This paper aims to develop a method to establish a new CI for the analysis of business cycles. The method should take into account quarterly real GDP, which is considered the most important coincident indicator, while avoiding complex mixed-frequency model estimation. In light of the ability of information granularity to extract valuable information from data, we divide the mixed-frequency coincident indicators of interest into several meaningful information
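The granulation step can be illustrated with a small sketch of the principle of justifiable granularity mentioned in the Introduction: a candidate interval is scored by the product of its coverage (how many data points it contains) and its specificity (how narrow it is). Here a simple grid search stands in for the PSO optimizer used in the paper, and all data values are hypothetical:

```python
def justifiability(data, a, b, data_range):
    """Score an interval [a, b]: coverage (fraction of points inside)
    times specificity (1 minus normalized interval length)."""
    coverage = sum(a <= x <= b for x in data) / len(data)
    specificity = 1.0 - (b - a) / data_range
    return coverage * specificity

# Hypothetical quarterly GDP-growth values (illustrative only).
data = [1.1, 1.3, 1.2, 2.8, 0.9, 1.0]
lo, hi = min(data), max(data)
rng = hi - lo

# Grid search over candidate bounds (PSO would explore the same space).
candidates = [(a, b)
              for a in [lo + i * rng / 20 for i in range(21)]
              for b in [lo + j * rng / 20 for j in range(21)] if a < b]
best = max(candidates, key=lambda ab: justifiability(data, *ab, rng))
print(best, justifiability(data, *best, rng))
```

The coverage–specificity trade-off rules out both degenerate extremes: the full data range covers everything but has zero specificity, while a very tight interval is specific but covers almost nothing; the optimizer settles on a compact interval around the dense cluster of values.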
Numerical Studies
Details of the experimental data used in this paper are listed in Table 1. These are the standard data used for business cycle analysis by the Business Cycle Dating Committee of the NBER. One picture is worth a thousand words: Fig. 3 visualizes the monthly coincident indicators EMP, INC, IIP, SLS and the quarterly GDP sequence. As shown in Fig. 3, on the one hand these five coincident indicators present quite a consistent overall trend, but on the other hand, compared to
Conclusions
This paper develops a fast method for constructing the CI of business cycles by combining information granulation and the DTW measure. The proposed method takes both monthly coincident indicators and quarterly real GDP data into account, while avoiding the uncertain assumptions and tedious computation caused by temporal aggregation. The numerical experiment reveals that the CI constructed by the proposed approach reflects the state of the national economy well and accurately
Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grants 61175041 and 61533005, and by the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20110041110017.
References (38)
- et al., A method based on PSO and granular computing of linguistic information to solve group decision making problems defined in heterogeneous contexts, European Journal of Operational Research (2013)
- et al., Building consensus in group decision making with an allocation of information granularity, Fuzzy Sets and Systems (2014)
- et al., Piecewise statistic approximation based similarity measure for time series, Knowledge-Based Systems (2015)
- Mixed-frequency VAR models with Markov-switching dynamics, Economics Letters (2013)
- et al., A novel three-way decision model based on incomplete information system, Knowledge-Based Systems (2016)
- et al., Using interval information granules to improve forecasting in fuzzy time series, International Journal of Approximate Reasoning (2015)
- et al., The modeling and prediction of time series based on synergy of high-order fuzzy cognitive map and fuzzy c-means clustering, Knowledge-Based Systems (2014)
- et al., Shape-based template matching for time series data, Knowledge-Based Systems (2012)
- Information granules and their use in schemes of knowledge management, Scientia Iranica (2011)
- et al., Building granular fuzzy decision support systems, Knowledge-Based Systems (2014)
- Data description: a general framework of information granules, Knowledge-Based Systems
- A fuzzy ensemble of parallel polynomial neural networks with information granules formed by fuzzy clustering, Knowledge-Based Systems
- Automatic recognition of 200 words, International Journal of Man-Machine Studies
- Time series long-term forecasting model based on information granules and fuzzy clustering, Engineering Applications of Artificial Intelligence
- Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic, Fuzzy Sets and Systems
- Recursive information granulation: aggregation and interpretation issues, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
- Measuring business cycles
- Particle swarm optimization: developments, applications and resources, Proceedings of the 2001 Congress on Evolutionary Computation
- Mixed frequency vector autoregressive models, Technical Report