A New Approach for Reconstruction of IMFs of Decomposition and Ensemble Model for Forecasting Crude Oil Prices



Introduction
Crude oil is a very important commodity because of its unique nature, affecting the life of every individual in many ways. According to the IEA, in early 2018 the world consumed about 99.3 million barrels of oil and liquid fuels daily. In the global market, it is the most active and heavily traded commodity. Due to the high demand for crude oil in every field of life, it requires more attention than other commodities. Oil is a nonrenewable commodity, yet the world consumes it in many ways; thus, it is a challenge for mathematicians, statisticians, and econometricians to develop a better strategy for understanding how crude oil prices change. Due to the irregular and stochastic nature of oil prices, developing an appropriate model for forecasting crude oil prices (COPs) is a very complex and challenging task. The compound and complex nature of COPs leaves this area wide open for researchers to develop many different forecasting procedures.
In the existing literature, several models have been proposed for forecasting time series data and applied to forecasting COPs; this is generally described in [1,2] as y_{t+l} = f(Y_t) + ϵ_{t+l}, where y_t represents the price of crude oil at time t, Y_t = (y_{t−1}, y_{t−2}, ..., y_{t−h}) collects the previous values up to lag h, l is the prediction horizon, and ϵ_{t+l} denotes the prediction error, an independent and identically distributed random variable. The function f(·) is designed differently depending on the parameter estimation technique, and current COPs forecasting models can be divided broadly into three categories. The first category contains traditional econometric models with strict assumptions, such as stationarity and linearity in parameter estimation, but with relatively simple fixed functional forms. The second category is AI, which consists of flexible functions capable of self-learning at the model-training stage. The third category, very popular these days, systematically combines several single models into a hybrid model. This study focuses on the third category, hybridizing popular single models into one to achieve high forecasting accuracy with a simple technique.
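The formulation y_{t+l} = f(Y_t) + ϵ_{t+l} can be sketched in code by building a lagged design matrix and fitting some stand-in for f(·). The sketch below is illustrative only (the helper name `lag_matrix` and the linear least-squares choice of f are ours, not from the paper):

```python
import numpy as np

def lag_matrix(y, h, horizon=1):
    """Build (features, targets): predict y_{t+horizon-1} from (y_{t-1}, ..., y_{t-h}).
    Illustrative helper, not the paper's code."""
    rows, targets = [], []
    for t in range(h, len(y) - horizon + 1):
        rows.append(y[t - h:t][::-1])      # (y_{t-1}, ..., y_{t-h})
        targets.append(y[t + horizon - 1])
    return np.array(rows), np.array(targets)

# toy example: an AR(1)-like series y_t = 0.8 y_{t-1} + noise
rng = np.random.default_rng(42)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.1)

X, z = lag_matrix(y, h=2)
coef, *_ = np.linalg.lstsq(X, z, rcond=None)  # a linear stand-in for f(.)
```

On this toy series the first estimated coefficient lands near the true value 0.8, showing that the lagged-matrix formulation captures the dependence on past prices.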
The first category consists of the ARIMA, RW, GARCH, VAR, and ECM models, which are frequently used to forecast COPs. For the prediction of monthly COPs, Y. Xiang and X. H. Zhuang [3] used the ARIMA model on Brent data covering the period from November 2012 to April 2013. For estimation of the volatility and the conditional mean, the GARCH model was used in [4] for daily WTI COPs covering the period from 9 December 2000 to 2 January 2010.
The VAR model was utilized in [5] for US monthly COPs covering January 1980 to November 2002. The RW model has also been used as a benchmark for forecasting COPs movements [6]. The ECM model was used in [7] for the prediction of Brent and WTI COPs covering the period from 1994 to 2002.
The second category concerns AI methods, the predominant techniques frequently used to forecast COPs; empirical studies have demonstrated the superiority of SVR, ANN, and LSSVR over traditional econometric models. W. Xie et al. [8] used SVR for the prediction of WTI monthly COPs covering the period from January 1970 to December 2003, and the empirical study showed that the SVR model provides more accurate predictions than the ARIMA and BPNN models. Similarly, the SVR model was used in [9] for the prediction of WTI COPs covering the period from January 1986 to December 2009. Movagharnejad et al. [10] used the ANN model for forecasting COPs data covering the period from January 2000 to April 2010.
An evolutionary neural network was presented by Chiroma et al. [11] for forecasting WTI monthly COPs covering the period from May 1987 to December 2011. Li et al. [12] utilized the LSSVR model for forecasting WTI COPs from 4 January 2008 to 18 October 2018, concluding that the LSSVR model outperformed the BPNN, ARIMA, and SVR models.
The AI models performed better than the traditional models; however, overfitting and best-parameter selection are their weaknesses [13]. The third category concerns the hybridization of models into a single model. In this category, the famous "divide and conquer" concept [14] is used to decompose the original series into meaningful components and then ensemble them after applying different techniques to each component. Nowadays, the decomposition and ensemble strategy is frequently used to forecast COPs. This methodology has three main steps: first, the time series is divided into small independent components; second, a suitable technique is applied to each component to predict it; and last, the individual predictions are ensembled into the prediction of the original series [13,15]. Using the "divide and conquer" concept in decomposition and ensemble modelling, some recent studies have demonstrated the usefulness of these techniques and the resulting enhancement of forecasting accuracy. Yu et al. [15] decomposed the original series using EMD and predicted WTI and Brent daily COPs covering the periods from 20 May 1987 to 30 September 2006 and from 1 January 1986 to 30 September 2006, respectively. They concluded that the "divide and conquer" approach performed well and enhanced the forecasting accuracy for both the WTI and Brent COPs series.
The complementary ensemble empirical mode decomposition technique was introduced in [16] for forecasting COPs; its empirical results support the theory that the decomposition and ensemble strategy improves a model's forecasting accuracy. The same outcome was achieved in [17], which suggested a new AI learning paradigm with compressed sensing for forecasting daily COPs covering the period from 3 January 2011 to 17 July 2013. In the EEMD and EELM paradigm introduced in [18] for the prediction of WTI COPs covering the period from 2 January 1986 to 21 October 2013, the results supported the decomposition and ensemble methodologies. In conclusion, the decomposition and ensemble methodologies worked well and significantly improved the performance of models used for forecasting COPs.
Moreover, Lin and Sun [19] proposed a hybrid method combining CEEMDAN with an MLGRU neural network to forecast COPs and concluded that the new model enhanced forecasting accuracy. Xian et al. [20] suggested a new model, EEMD-ICA, integrating the two methods for financial time-series signal processing. Wu et al. [21] also proposed a novel model based on EEMD and LSTM. The proposed EEMD-LSTM model was applied to WTI COPs, and the numerical results proved the superiority of the new technique.
Furthermore, decomposition ensemble models handle the complex COPs data more effectively than single models such as ARIMA, GARCH, and ANN. However, an essential issue arises regarding computational time, cost, and complexity. In decomposition ensemble models, the time series is divided into several independent components called IMFs. After the decomposition, modelling all IMFs can be time-consuming, because each IMF must be modelled and forecasted individually. Moreover, the estimation errors of all the individual models accumulate in the final ensemble step, which sometimes produces poor results [2,22]. To address this problem, a reconstruction of IMFs is incorporated after the decomposition step. During the IMFs reconstruction step, all the IMFs obtained from decomposing the original series are reconstructed into a few meaningful components. To overcome this problem in decomposition and ensemble models, B. Zhu et al. [23] proposed a new multiscale paradigm through a kernel function incorporating EEMD, PSO, and LSSVM. Using the EEMD method, the original series was decomposed into IMFs and reconstructed into groups such as HFs, LFs, and trend. The HFs were then forecasted using the ARIMA model, while the LFs and trend were forecasted by the PSO method, and the forecasts were simply combined for the final prediction. Aamir et al. [24] also reconstructed the IMFs of the EEMD model using the order of the ARIMA model for forecasting weekly Brent and WTI COPs. The authors concluded that the reconstruction of IMFs was an effective technique that reduces time and cost and enhances forecasting accuracy. Rios and De Mello [25] empirically proved that the acquired IMFs present various levels of stochasticity.
This was the motivation to develop formal research proving that the decomposed IMFs can be separated into two components, namely, stochastic and deterministic.
The problem arises as to how to divide these IMFs into SD components. The IMFs were divided in [25] into two components based on RP and AMI, and it was concluded that dividing the IMFs into two components enhanced the forecasting accuracy. Furthermore, the authors in [26] claimed that the IMFs can be divided into two additive components, called stochastic and deterministic, and that researchers can work on modelling these components better. Aamir and Shabri [27] also reconstructed the IMFs into stochastic and deterministic components through AMI. The authors concluded that the reconstructed components enhanced the forecasting accuracy, taking WTI and Brent daily COPs as a sample. Furthermore, Du et al. [28] proposed the BPE index method to solve the problem based on PEEMD. Those authors also used the reconstruction-of-IMFs procedure by implementing the T-test.
Most of the above studies have used one or two criteria for reconstructing the IMFs into components, such as HFs, LFs, trend, RP, AMI, the order of the ARIMA model, the T-test, and the average, while ignoring other data characteristics. In order to capture the important components of the data dynamics efficiently, a comprehensive data analysis or some effective technique is strongly recommended for all the decomposed modes, as well as a better procedure for forecasting the reconstructed modes. In this study, the EEMD technique is used to decompose the time series so that a wider class of time series can be analyzed.
To overcome the above problem and further improve forecasting efficiency, this study proposes a novel way of using the decomposed IMFs obtained from EEMD. The innovations and contributions of this study are as follows: (i) a new forecasting approach for COPs utilizing the reconstructed IMFs is proposed, following the well-known "decomposition and ensemble" framework; (ii) to the best of our knowledge, this is the first time that IMFs are reconstructed in this format and used for time-series forecasting; (iii) a threshold value of autocorrelation is fixed, based on simulations and statistical tests, for separating the IMFs into SD components; (iv) extensive experiments on publicly accessible Brent and WTI (daily and weekly) COPs show that the proposed approach outperforms several state-of-the-art methods for forecasting COPs; and (v) we further analyze the characteristics of the SC, which needs more consideration and can help in significantly improving the forecasting accuracy of COPs.
In the following section, the framework of the proposed study, methods, and materials are outlined in detail.

Data Description.
The data used in this study were obtained from DataStream and can be found in the supplementary files. A total of four datasets are used to evaluate the performance of the proposed procedure. The datasets consist of the daily and weekly COPs of Brent and WTI.
Each dataset has a different number of observations, to check the correctness, robustness, and generalizability of the suggested procedure. Every time series was divided into training and testing sets. The first 80% of the observations in every time series were used as the training set, whereas the remaining 20% were used as the testing set. The details of every time series are discussed briefly in subsequent sections. The proposed procedure computes the autocorrelation (ρ(y_t, y_{t−1})) of every IMF. If the autocorrelation is less than a threshold value, that IMF is considered stochastic; otherwise, it is deterministic. To obtain the two components, all stochastic IMFs are summed into the SC and all deterministic IMFs into the DC. The idea of reconstructing the IMFs into SD components should be examined in more detail for further improvement, because the SC requires more attention, which can help in improving the forecasting accuracy significantly. The IMFs forming the SC are handled separately because most of the total variation is shared by the SC [29]. Thus, in this study, the stochastic IMFs are modelled separately: a different model is chosen for every stochastic IMF, and the models' forecasts are ensembled for the final forecast, whereas all deterministic IMFs are treated as one component. The ARIMA and ANN models are fitted to the stochastic IMFs and the DC, and the final output is obtained by adding all these components, so the proposed models are denoted by EEMD-R-ARIMA and EEMD-R-ANN. For the ANN, the AR terms of the ARIMA model are used as inputs. The complete framework of the proposed method is shown in Figure 1.
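The reconstruction rule described above can be sketched as follows. The function names are ours, and the threshold 0.95 is the value the study later recommends; this is a minimal illustration, not the paper's implementation:

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.sum(x[1:] * x[:-1]) / np.sum(x * x))

def split_imfs(imfs, threshold=0.95):
    """Sum IMFs with |lag-1 autocorrelation| below the threshold into a
    stochastic component (SC); the rest form the deterministic component (DC)."""
    sc = np.zeros(len(imfs[0]))
    dc = np.zeros(len(imfs[0]))
    for imf in imfs:
        if abs(lag1_autocorr(imf)) < threshold:
            sc += imf
        else:
            dc += imf
    return sc, dc

# a noisy IMF (low autocorrelation) and a smooth one (high autocorrelation)
rng = np.random.default_rng(0)
noise = rng.normal(size=1000)
smooth = np.sin(np.linspace(0, 4 * np.pi, 1000))
sc, dc = split_imfs([noise, smooth])
```

By construction, SC + DC recovers the sum of all IMFs, so no information is lost in the reconstruction.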

Experimental Study.
As discussed, the main aim of this study is to use the minimum number of IMFs while attaining higher forecasting accuracy. To use a minimum number of IMFs, the reconstruction of IMFs is proposed in this study. Thus, all IMFs are first obtained from EEMD and divided into two components, namely, stochastic and deterministic. However, a procedure or a threshold value is required to decide whether a given IMF is stochastic or deterministic. The proposed criterion is the autocorrelation of the IMFs, from which a decision is taken as to whether a given IMF has stochastic or deterministic influences. Based on four different threshold values of the autocorrelation, i.e., ρ(y_t, y_{t−1}) = {0.500, 0.900, 0.950, 0.990}, the obtained IMFs are divided into two components. The SC consists of all those IMFs whose autocorrelation is less than the threshold value; otherwise, the IMF belongs to the DC. To obtain the best threshold value for the division of IMFs into SD components, synthetic time series are used, characterized through additive noise and organized into four different scenarios as follows: (1) a time series composed of a sine function and normal noise, where the sine function represents the DC and the normal noise the SC; (2) a time series composed of a sine function and an ARMA model, where the sine function represents the DC and the ARMA model the SC, with an error of 0.25; (3) a time series composed of a sine function, an ARMA model, and normal noise, where the sine function represents the DC and the ARMA model and normal noise together the SC; and (4) a time series composed of a Lorenz system and normal noise, where the Lorenz system represents the DC and the normal noise the SC. Therefore, to divide the IMFs into SD components, the autocorrelation is used in this study. IMFs with stochastic influences become part of the SC, while those with deterministic influences become part of the DC. Adding both SD components recovers the original series. The autocorrelation of each decomposed IMF is computed as ρ(y_t, y_{t−1}) = Σ_t (y_t − ȳ)(y_{t−1} − ȳ) / Σ_t (y_t − ȳ)². The problem is how to decide whether a given IMF has a stochastic or deterministic influence. To fix this problem, an experimental study is performed to obtain a threshold value (rather than a confidence interval) by which the IMFs can be separated according to their influences. Once the IMFs are separated according to their influences, the next step is to obtain the SD components by summing the respective IMFs. A correlation test is performed to check whether the separated components are independent. The test statistic t = r_SD √(n − 2) / √(1 − r_SD²) is used with the alternative hypothesis H_A: ρ_SD ≠ 0, where n denotes the total number of observations and r_SD represents the sample correlation between the SD components. To double-check the distribution of these two components, another assessment is also conducted, testing the difference between the correlation produced by the synthetic SD components and that of the SD components reconstructed from the IMFs. Using Fisher's transformation z(r) = (1/2) ln((1 + r)/(1 − r)), the test statistic Z = (z(r_s) − z(r_d)) / √(1/(n_s − 3) + 1/(n_d − 3)) is used with the alternative hypothesis that the difference between the two correlations is not equal to zero, where the subscript "s" represents the synthetic data and "d" the decomposed data.
The test is performed for the different threshold values of the autocorrelation, each dividing the IMFs into the two (SD) components. The best threshold value is then chosen and applied to the real-life datasets.
The four scenarios above are used to check how well the proposed method extracts the SD components. The purpose of using different series is to check the generalizability and robustness of the proposed method. One advantage of using synthetic data is that it provides a controlled scenario and better supports the conclusions.
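The two statistical tests above can be sketched directly from their standard forms (the function names are ours, and the formulas are the usual t-test for zero correlation and Fisher's z test for a difference of correlations, which is our reading of the tests used):

```python
import math

def corr(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def t_zero_corr(r, n):
    """t statistic for H0: rho = 0."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

def fisher_z_diff(r_s, n_s, r_d, n_d):
    """Fisher z statistic for the difference of two independent correlations."""
    z = lambda r: 0.5 * math.log((1 + r) / (1 - r))
    se = math.sqrt(1 / (n_s - 3) + 1 / (n_d - 3))
    return (z(r_s) - z(r_d)) / se

# two exactly uncorrelated sequences: r = 0, so t = 0
s = [1, -1, 1, -1]
d = [1, 1, -1, -1]
```

Equal correlations give a Fisher statistic of zero, and perfectly uncorrelated SD components give t = 0, matching the null hypotheses of the two tests.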

The ARIMA Box-Jenkins Approach.
The inclusion of both AR and MA terms makes the Box-Jenkins models exceptionally flexible. The interdependencies within a time series (Y_t) are measured by the AR terms, while the dependencies on preceding error terms are measured by the MA terms [30,31]. An ARMA model of order (p, q) for a univariate series has the form Y_t = c + Σ_{i=1}^{p} α_i Y_{t−i} + Σ_{j=1}^{q} β_j ϵ_{t−j} + ϵ_t, where p and q represent the numbers of AR and MA terms, respectively, c represents the constant, α the AR coefficients, and β the MA coefficients. The white noise process is denoted by ϵ_t ∼ N(0, σ²). The Box-Jenkins procedure requires stationary data, which can be achieved by differencing the time series (usually at most twice). The ACF and PACF plots can be used to select the appropriate orders of the polynomials by comparison with the theoretical bounds of [32]. Rival ARIMA models can also be selected based on AIC [33] and BIC [34]. Next, diagnostic checking verifies that the fitted model is adequate, using the LB test [35]. After obtaining the appropriate ARIMA model, the last step is forecasting the series.
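A minimal numerical sketch of the estimation and diagnostic steps, assuming pure AR terms for brevity (no MA part) and using least squares plus a hand-rolled Ljung-Box Q statistic; in practice a full ARIMA library routine would be used instead:

```python
import numpy as np

def fit_ar(y, p):
    """Fit an AR(p) model with intercept by least squares; return
    coefficients, residuals, and AIC.  A stand-in for full Box-Jenkins
    estimation, not a complete ARIMA fitter."""
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    z = y[p:]
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    n, k = len(z), p + 1
    aic = n * np.log(np.sum(resid ** 2) / n) + 2 * k
    return beta, resid, aic

def ljung_box(resid, lags=10):
    """Ljung-Box Q statistic for residual autocorrelation."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r * r)
    q = 0.0
    for k in range(1, lags + 1):
        rk = np.sum(r[k:] * r[:-k]) / denom
        q += rk * rk / (n - k)
    return n * (n + 2) * q

# simulate an AR(1) series with coefficient 0.6 and fit it
rng = np.random.default_rng(1)
y = np.zeros(800)
for t in range(1, 800):
    y[t] = 0.6 * y[t - 1] + rng.normal()
beta1, resid1, aic1 = fit_ar(y, 1)
```

The fitted AR coefficient comes out near 0.6 and the residual Q statistic stays in the white-noise range, mirroring the identify-estimate-diagnose cycle described above.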

Artificial Neural Network.
ANNs have many distinguishing characteristics compared with ARIMA models and are a very successful alternative to linear models for handling nonlinearity when forecasting time series. One of the best characteristics of an ANN is the universal approximation of any nonlinear function to any desired degree of accuracy [36,37]. The most commonly used ANN for time-series forecasting applications is the FNN with a single hidden layer and one output node [38,39]. An m × n × 1 ANN model has the form Y_t = μ_0 + Σ_{j=1}^{n} μ_j g(ϕ_{0j} + Σ_{i=1}^{m} ϕ_{ij} Y_{t−i}) + ϵ_t, where μ_j (j = 1, 2, ..., n) and ϕ_{ij} (i = 1, 2, ..., m; j = 1, 2, ..., n) denote the weights, μ_0 and ϕ_{0j} represent the bias terms, ϵ_t is the white noise, Y_{t−i} (i = 1, 2, ..., m) represents the input patterns, Y_t is the single output of the model, m is the number of input variables, n is the number of hidden nodes, and g(·) is the transfer function of the hidden layer. This equation performs a nonlinear mapping Y_t = ϑ(Y_{t−1}, ..., Y_{t−m}, ω) + ϵ_t from past observations, where ϑ is a function determined by the neural network training and ω is a vector of all parameters. Therefore, from some viewpoint, the FNN model is equivalent to an NAR model [40,41]. The ANN effectively models nonlinear time series, but for linear problems it has provided mixed results [38].
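The m × n × 1 forward pass above can be written out directly. The weights below are toy values chosen so the output is easy to verify by hand (a logistic transfer function g is assumed, one common choice):

```python
import numpy as np

def ann_forecast(y_lags, mu0, mu, phi0, phi):
    """One-step forecast of an m x n x 1 feedforward network:
    Y_t = mu0 + sum_j mu_j * g(phi0_j + sum_i phi_ij * Y_{t-i}),
    with logistic transfer g.  Weights here are illustrative, not trained."""
    g = lambda x: 1.0 / (1.0 + np.exp(-x))
    hidden = g(phi0 + phi.T @ y_lags)   # n hidden-node activations
    return mu0 + mu @ hidden

# m = 2 inputs, n = 2 hidden nodes, fixed toy weights
y_lags = np.array([1.0, 0.5])           # (Y_{t-1}, Y_{t-2})
mu0, mu = 0.1, np.array([0.4, -0.2])
phi0 = np.array([0.0, 0.0])
phi = np.zeros((2, 2))                  # phi_ij = 0 -> every hidden = g(0) = 0.5
out = ann_forecast(y_lags, mu0, mu, phi0, phi)
```

With all ϕ_{ij} = 0 each hidden node outputs g(0) = 0.5, so the forecast is 0.1 + 0.4·0.5 − 0.2·0.5 = 0.2, confirming the formula term by term.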

EEMD.
New opportunities to extract time-varying components from data were provided in [42] with the advancement of EMD, a temporally local and adaptive strategy. This temporal locality is one of the most important qualities of EMD, acquired from the spline fitting through the maxima/minima of the input data. The spline fitting is highly local in time and can easily be verified using numerical programming.
The temporal locality of EMD is well preserved if the number of sifting iterations is small and fixed. The sifting process of EMD deteriorates with excessive use of both the transient region and the oscillatory components. One benefit of the EMD procedure is that it automatically bypasses the stationarity assumption on the data. For the physical interpretability of the results, the EMD criterion provides a necessary but not a sufficient condition. The EMD approach proceeds by successively removing oscillatory components of lower frequency, and noise-contaminated results at one level can propagate distortion into subsequent oscillatory components, making the EMD output hardly physically interpretable. Hence, because of this lack of robustness, the EMD procedure can become inadequate.
Several attempts have been made to overcome this robustness problem. For example, Huang et al. [42] proposed the intermittency test, in which the variation is controlled by one element. Because of the controlled scenarios, it reduces the adaptiveness of EMD, calling for more attractive methodologies. To solve this robustness problem, a noise-assisted data analysis procedure, namely, EEMD, was introduced in [43], which addresses exactly the issue raised for EMD. The following are the steps involved in EEMD: (1) a series of white noise is added to the true series Y_t; (2) the new series obtained in step (1) is decomposed into IMFs; (3) steps (1) and (2) are repeated, adding different white noise series to the true series Y_t; and (4) the ensemble means of the corresponding IMFs are obtained as the outcome. EEMD utilizes the following properties of white noise: the distribution of minima and maxima should be uniform in time, and EMD acts effectively as a dyadic filter bank for white noise [44-46]. With the added white noise, the distribution of minima and maxima over all time spans is relatively uniform. Another property of EEMD is the control of the oscillatory elements after adding the white noise to the series, which works like a dyadic filter bank and significantly reduces the mode-mixing problem. Thus, as an outcome, the decomposition becomes more stable and physically meaningful.
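The four EEMD steps can be sketched as an ensemble-averaging loop. Since a full sifting routine is too long to inline, the decomposer below is a deliberately crude stand-in (a moving-average split into a fast "detail" and a slow "trend"); only the noise-add / decompose / repeat / average structure illustrates EEMD itself:

```python
import numpy as np

def toy_decompose(x, window=11):
    """Stand-in for EMD: split a series into a fast 'detail' and a slow
    'trend' via a moving average.  Purely illustrative -- real EEMD uses
    sifting to extract IMFs."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.concatenate([np.full(pad, x[0]), x, np.full(pad, x[-1])])
    trend = np.convolve(padded, kernel, mode="valid")
    return x - trend, trend

def eemd_like(x, trials=200, noise_amp=0.2, rng=None):
    """The four EEMD steps: add white noise, decompose, repeat with fresh
    noise realizations, then ensemble-average each component."""
    rng = rng or np.random.default_rng(0)
    comps = np.zeros((2, len(x)))
    for _ in range(trials):
        noisy = x + rng.normal(scale=noise_amp, size=len(x))
        detail, trend = toy_decompose(noisy)
        comps[0] += detail
        comps[1] += trend
    return comps / trials

t = np.linspace(0, 4 * np.pi, 400)
x = np.sin(t)
comps = eemd_like(x)
```

Because the added noise averages out across trials, the summed ensemble components reconstruct the clean input almost exactly, which is precisely why the ensemble step stabilizes the decomposition.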

EEMD-ARIMA and ANN Model.
Wang et al. [47] summarized the hybrid EEMD-ARIMA model in the following steps: (1) first, the true series Y_t is decomposed into different IMFs; (2) second, for every kth IMF, the best ARIMA model is selected and used to forecast accordingly; and (3) in the last step, the outputs of all the kth IMFs are summed to obtain the targeted series. The EEMD-ANN model is fitted in the same way. Furthermore, for every ANN model, the delay parameter, or initial input parameter, is the AR terms of the ARIMA model of each kth IMF.
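The decompose-forecast-sum pipeline above can be sketched as follows. The per-component forecaster here is a naive drift rule, purely a placeholder for the ARIMA/ANN model fitted per IMF in the paper:

```python
import numpy as np

def forecast_component(imf, steps):
    """Stand-in one-component forecaster (naive drift); the hybrid model
    fits a separate ARIMA or ANN model per IMF instead."""
    drift = imf[-1] - imf[-2]
    return imf[-1] + drift * np.arange(1, steps + 1)

def hybrid_forecast(imfs, steps):
    """Decomposition-ensemble scheme: forecast every IMF separately,
    then sum the component forecasts into the final forecast."""
    return sum(forecast_component(imf, steps) for imf in imfs)

# two toy 'IMFs': a rising detail and a flat trend
imfs = [np.array([0.0, 0.1, 0.2, 0.3]), np.array([5.0, 5.0, 5.0, 5.0])]
fc = hybrid_forecast(imfs, steps=2)
```

The first component extrapolates to (0.4, 0.5) and the second stays at 5, so the ensembled forecast is (5.4, 5.5): the final prediction is exactly the sum of the component forecasts, as in step (3).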

Forecast Evaluation Criteria.
Forecasting accuracy measures the degree of exactness of the proposed and state-of-the-art methods. When there are competing models, the forecasting accuracy measures are the most important criteria. In this study, four typical forecasting accuracy measures are used: MAE, RMSE, MAPE, and DS. The first three are defined as MAE = (1/F) Σ_t |Y_t − Ŷ_t|, RMSE = √((1/F) Σ_t (Y_t − Ŷ_t)²), and MAPE = (100/F) Σ_t |(Y_t − Ŷ_t)/Y_t|, where F represents the total number of forecasts in the testing set, Y_t denotes the original value of the given time series at time t, and Ŷ_t denotes the forecasted value. Four different datasets are used in this study for validation and justification purposes.
Next is the directional prediction measure, which is more appropriate for business purposes since investors are concerned with the market trend. Thus, DS is used to measure the directional forecasting accuracy of competing models and is defined as DS = (100/F) Σ_t d_t, where d_t = 1 if (Y_t − Y_{t−1})(Ŷ_t − Y_{t−1}) ≥ 0 and d_t = 0 otherwise, and F corresponds to the total number of forecasts in the testing set. The lower the values of MAPE, RMSE, and MAE, the higher the forecast accuracy; in contrast, higher values of DS indicate more accurate forecasts. Next, to compare the prediction errors of two models, we employed the DM test, which is defined as follows.
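The four accuracy measures can be coded directly from their standard definitions (the DS variant below, comparing the predicted and actual movement relative to the previous actual value, is our reading of the usual formulation):

```python
import math

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root mean squared error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(y, yhat)) / len(y))

def mape(y, yhat):
    """Mean absolute percentage error."""
    return 100 / len(y) * sum(abs((a - f) / a) for a, f in zip(y, yhat))

def ds(y, yhat):
    """Directional statistic: percentage of forecasts whose movement
    relative to the previous actual value matches the actual movement."""
    hits = sum(
        1 for t in range(1, len(y))
        if (y[t] - y[t - 1]) * (yhat[t] - y[t - 1]) >= 0
    )
    return 100 * hits / (len(y) - 1)

# tiny worked example
y, yhat = [1.0, 2.0, 4.0], [1.5, 2.0, 3.0]
```

For this toy pair, MAE = 0.5, MAPE = 25%, and DS = 100% (both forecasted movements point in the correct direction even though their magnitudes are off).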
(5) DM test: it is used to test the statistical significance of the difference in forecast accuracy between two forecasting methods. The variable d_t is the difference of the absolute errors of the two methods, and the statistic is DM = d̄ / √(V̂_d / F), with V̂_d = γ_0 + 2 Σ_{k≥1} γ_k, where γ_k is the autocovariance of d_t at lag k. Under the null hypothesis, DM ∼ N(0, 1); if the p value > α, we conclude that there is no significant difference between the two forecasts.
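A minimal DM-test sketch on absolute-error loss, truncating the long-run variance at lag h − 1 (one common convention for h-step forecasts; for h = 1 only γ_0 is used):

```python
import math

def dm_test(e1, e2, h=1):
    """Diebold-Mariano statistic on absolute-error loss.
    d_t = |e1_t| - |e2_t|; long-run variance from autocovariances
    up to lag h-1; p value from the N(0,1) approximation."""
    d = [abs(a) - abs(b) for a, b in zip(e1, e2)]
    n = len(d)
    dbar = sum(d) / n
    def gamma(k):  # autocovariance of d at lag k
        return sum((d[t] - dbar) * (d[t - k] - dbar) for t in range(k, n)) / n
    var = gamma(0) + 2 * sum(gamma(k) for k in range(1, h))
    dm = dbar / math.sqrt(var / n)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(dm) / math.sqrt(2))))  # two-sided
    return dm, p

# method 1 has uniformly larger errors, so the statistic is positive
e1 = [1.0, -1.2, 0.9, -1.1, 1.05, -0.95]
e2 = [0.1, -0.2, 0.15, -0.1, 0.2, -0.12]
dm, p = dm_test(e1, e2)
```

A positive DM value indicates that the first method's losses are larger; the sign therefore tells which model forecasts better, and p tells whether the gap is significant.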

ARIMA Model.
Data stationarity is one of the assumptions required when using the ARMA model. In most cases, real time-series data are not stationary because of a trend factor, which is where the ARIMA model comes in. To obtain a stationary time series, we take successive differences of the original series [48,49]. The ADF test is used to check the stationarity of a time series [50]. When using the ARIMA model, the appropriate model is first identified by selecting the orders of the AR and MA terms after obtaining a stationary time series. ACF and PACF plots are used for the selection of the best orders of the AR and MA terms.
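The differencing step (the "I" in ARIMA) is easy to demonstrate; the snippet below shows a single difference removing a linear trend (in practice a unit-root test such as ADF would confirm stationarity, e.g. via a statistics library):

```python
import numpy as np

def difference(y, d=1):
    """Apply d successive first differences (the 'I' in ARIMA)."""
    for _ in range(d):
        y = np.diff(y)
    return y

# a trending (non-stationary) series: linear trend with slope 0.5 plus noise
rng = np.random.default_rng(7)
t = np.arange(500)
y = 0.5 * t + rng.normal(size=500)
dy = difference(y)          # one difference removes the linear trend
```

After one difference the series fluctuates around the constant slope 0.5 with no trend left, so an ARMA model can be fitted to `dy` directly.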
Other, theoretical approaches, namely AIC and BIC, are also available for obtaining the orders of the AR and MA terms. For model adequacy checking, the LB test is used. In the last step, the selected model is used to forecast the series. In this section, the ARIMA model is fitted to all training periods and used to forecast the testing periods. Table 1 presents the orders of the best ARIMA models for all datasets.
The fitted ARIMA models are given in Table 2, where the estimated values of their respective parameters are presented with the standard errors in parentheses. The next step after estimating the parameters is the extraction of the fitted models' residuals and testing whether the residuals are uncorrelated.
Model diagnostic tests are also performed for all four datasets to verify that the fitted models are adequate and can be used for forecasting.
From Figure 2, it is observed that the LB test reveals that all the fitted models are adequate and can be used for forecasting. The third panel of Figure 2, showing the LB statistic, indicates that all p values for all datasets are greater than 0.05, which means that the hypothesis of no serial correlation among the fitted models' residuals cannot be rejected. So, all these models are adequate and provide the best forecasts for the future. Finally, the forecasting accuracies of these models are presented in Tables 3 and 4 for the daily and weekly datasets, respectively.

ANN Model.
The input structure of the ANN depends on the order of the AR terms of the best chosen ARIMA model. To fix the input parameters of the ANN, the order p of the AR terms provided in Table 1 is used. After fulfilling the necessary requirements, the ANN structure is constructed; the training and testing processes then only need the input and target variables for the training and testing periods, respectively.

Data Decomposition.
When using EEMD, two parameters should be fixed in advance, the white noise amplitude and the number of ensembles, because they affect the decomposition process. Wu and Huang [43] described the rule for these parameters as ε_h = e/√h, where h is the ensemble number, ε_h represents the standard deviation of the error, and e is the amplitude of the added white noise. One important point to keep in mind when adding white noise to the original time series is that its amplitude should be neither too large nor too small: if it is too large, EEMD will return some redundant IMFs; on the contrary, if it is too small, the EEMD IMFs remain the same as those of EMD. The authors of [43,52] suggested setting the amplitude of the added white noise to 20% of the standard deviation. More details about the white noise amplitude and the ensemble size can be found in [43]. In this study, the EEMD technique is used with the ensemble size and the white noise amplitude equal to 100 and 0.2 times the standard deviation, respectively. The EEMD technique is applied to all datasets using the above settings. The R package "Rlibeemd" [53] is used for data decomposition. All the decomposed IMFs are presented in Figure 4 for all four datasets. The total number of IMFs, including the residual, is 13 for daily Brent and WTI, while for weekly Brent and WTI it is 11 and 10, respectively. From Figures 4(a)-4(d), IMF.1, IMF.2, IMF.3, and IMF.4 have no specific trend, and their observations are distributed almost randomly. After IMF.4, however, the IMFs follow a trend and move slowly around the long-term mean, whereas the last IMF, called the residual, represents the overall trend of the series. The overall decomposition provided by EEMD is physically meaningful, all IMFs are independent, and each IMF is consistent with the previous one [54].
Therefore, the decomposition obtained from EEMD is very helpful because it transforms a nonstationary and nonlinear series into stationary components, which improves the forecasting accuracy of the models.
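The parameter rule ε_h = e/√h ties the study's two EEMD settings together: solving for h gives the ensemble size needed to push the residual noise down to a target level (the helper name below is ours):

```python
import math

def required_ensemble_size(e, target_err):
    """Wu and Huang's rule eps_h = e / sqrt(h), solved for h: the ensemble
    size needed so the residual noise level falls to target_err, given
    white noise amplitude e.  Small epsilon guards float rounding."""
    return math.ceil((e / target_err) ** 2 - 1e-9)

# amplitude e = 0.2 (20% of the series' standard deviation, as in this study)
h = required_ensemble_size(0.2, 0.02)
```

With e = 0.2 and a target residual noise of 0.02, the rule yields h = 100, which matches the ensemble size of 100 used in this study.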

Simulations for Threshold Value Determination.
This study aims to determine a threshold value for the division of IMFs into SD components. After obtaining the decomposed IMFs for all synthetic datasets, the autocorrelation is computed for every IMF. From the autocorrelation, it is easy to determine how much the current observation depends on its past observations. A value close to zero means the current value does not depend on its past values, whereas a value close to one means the current value is strongly influenced by its past values. The problem is how to fix a point below which the considered IMF can be called stochastic. To solve this issue, different values of the autocorrelation are considered for dividing the IMFs into two components. First, the IMFs are separated into two components through the autocorrelation using the four selected values, i.e., ρ(y_t, y_{t−1}) = {0.500, 0.900, 0.950, 0.990}.
The SC consists of all IMFs whose autocorrelation is less than the given threshold value; the remaining IMFs are included in the DC. Initially, more values of the autocorrelation were selected and tested for the reconstruction of IMFs, i.e., ρ(y_t, y_{t−1}) = {0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99}, but the values ρ(y_t, y_{t−1}) = {0.60, 0.70, 0.80, 0.90} provided almost the same results; thus, to avoid redundant analysis, only ρ(y_t, y_{t−1}) = 0.900 was kept from this group. Therefore, in this study, the following four values of the autocorrelation are used for the reconstruction of IMFs: ρ(y_t, y_{t−1}) = {0.500, 0.900, 0.950, 0.990}. In this section, only the synthetic data are used because they provide a controlled scenario and better support the conclusions. The number of IMFs belonging to each of the SD components is presented in Table 5 for all scenarios. The next step after dividing the IMFs into two components using the above threshold values is testing whether the two components are independently distributed. The T-test is used to test the hypothesis of zero correlation between the SD components. The best threshold value is the one for which the null hypothesis is accepted for all scenarios and all sample sizes. Table 6 presents the p values confirming that both SD components are independently distributed.
To double-check the threshold values, another test is also performed, the difference-of-correlations test. This test confirms that the difference between the correlation among the original components and that among the components obtained from the reconstruction of IMFs is equal to zero. So, only those threshold values of the autocorrelation are selected for which the null hypothesis of no difference is not rejected for all scenarios and all samples.
From Tables 6 and 7, it is observed that, for scenarios I and IV, the recommended threshold values for dividing the IMFs into two components are 0.90 and 0.95; for scenario III, the recommended values are 0.50, 0.90, and 0.95; and for scenario II, the only recommended value is 0.95. Thus, across all scenarios, the only value that significantly divides the IMFs into two components is |ρ(y_t, y_{t−1})| = 0.95.
Thus, the recommended value of autocorrelation for dividing the IMFs into two components is |ρ(y_t, y_{t−1})| = 0.95. The summary of the threshold value determination is presented in Table 8.
The only value recommended at both levels of significance is 0.95.
Thus, for the division of IMFs into SD components, the threshold value of 0.95 is recommended. In Section 3.5, the real-world applications are tested using this recommended threshold value.

IMFs Reconstruction.
In this study, the reconstruction of IMFs is based on the autocorrelation of every kth IMF obtained from EEMD. The autocorrelation of each IMF is compared with the threshold value of |ρ(y_t, y_{t−1})| = 0.95. An IMF is considered stochastic if its autocorrelation is less than the threshold value; otherwise, it is considered deterministic. For comparison, the IMFs are also reconstructed through AMI. The AMI method relies on visual inspection of the plots obtained for all IMFs of each dataset. A detailed analysis of every dataset follows in the subsequent sections.
From Section 3.3, the total number of IMFs for the daily datasets is 12 plus the residue. For simplicity, the residue term is treated as an additional IMF, so the total number of IMFs for both daily datasets is 13. For weekly data, the total number of IMFs is 11 for Brent and 10 for WTI. The autocorrelation of all IMFs is computed and presented in Table 9.

Mathematical Problems in Engineering
It is observed from Table 9 that the autocorrelations of IMF.1, IMF.2, and IMF.3 for all datasets are less than 0.95. So, the first three IMFs are considered stochastic in all datasets, while the rest are considered deterministic. To form the SC, all stochastic IMFs are summed; for the DC, all deterministic IMFs are summed. Next, to obtain the SD components from the AMI method for Brent daily data, the AMI plots are presented in Figure 5.
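The summation of IMFs into the SC and DC can be written compactly. This is an illustrative numpy sketch under the assumption that the IMFs (with the residue appended as the last row) are stored as a 2-D array of shape (number of IMFs, series length):

```python
import numpy as np

def reconstruct_components(imfs, stochastic_idx):
    """Sum the stochastic IMFs into the SC and the remaining
    (deterministic) IMFs, including the residue, into the DC."""
    imfs = np.asarray(imfs, dtype=float)
    mask = np.zeros(imfs.shape[0], dtype=bool)
    mask[list(stochastic_idx)] = True
    sc = imfs[mask].sum(axis=0)
    dc = imfs[~mask].sum(axis=0)
    return sc, dc
```

By construction, SC + DC exactly reproduces the sum of all IMFs, i.e., the original series, so the reconstruction loses no information.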
From Figure 5, it is observed that the AMI plots show the same pattern after the 6th IMF, so the first six IMFs are considered stochastic and the remainder deterministic. To form the two components, the first six IMFs are summed for the SC and IMFs seven through thirteen for the DC. Next, to obtain the SD components of WTI daily data, the AMI plots for all IMFs are presented in Figure 6.
From Figure 6, the same behaviour is observed: the AMI plots show the same pattern after the 6th IMF, so the first six IMFs are considered stochastic and the remainder deterministic. The SC is the sum of the first six IMFs and the DC the sum of IMFs seven through thirteen. Next, to obtain the SD components of Brent weekly data, the AMI plots for all IMFs are presented in Figure 7.
From Figure 7, the AMI plots again show the same pattern after the 6th IMF, so the first six IMFs are considered stochastic and the remainder deterministic. The SC is the sum of the first six IMFs and the DC the sum of IMFs seven through eleven. Next, to obtain the SD components of WTI weekly data, the AMI plots for all IMFs are presented in Figure 8.
From Figure 8, the AMI plots likewise show the same pattern after the 6th IMF, so the first six IMFs are considered stochastic and the remainder deterministic. The SC is the sum of the first six IMFs and the DC the sum of IMFs seven through ten. The next step is the correlation test to verify that the SD components obtained from the reconstruction of IMFs are independent.
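The zero-correlation test used on the SC and DC can be sketched as follows. This is a minimal illustration using the usual t statistic for a sample correlation coefficient, with a standard-normal approximation to the p value, which is adequate for samples of the size used here; the exact test in the paper may have used the t distribution instead.

```python
import math
import numpy as np

def corr_zero_test(x, y):
    """Test H0: corr(x, y) = 0 via t = r*sqrt((n-2)/(1-r^2)),
    with a two-sided standard-normal p value approximation."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    r = float(np.corrcoef(x, y)[0, 1])
    t = r * math.sqrt((n - 2) / (1.0 - r * r))
    p = math.erfc(abs(t) / math.sqrt(2.0))  # two-sided p value
    return r, p
```

A large p value means the hypothesis of zero correlation between the two components is not rejected, which is the criterion applied to Table 10.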
From Table 10, it is observed that, for all reconstructed components, the hypothesis of zero correlation is not rejected, meaning that the division into SD components is independent at the 5% level of significance. Moreover, there is a theorem, put forward by Takens [55] and known as the embedding theorem, stating that a time series can be reconstructed as vectors of m values (m is called the embedding dimension). Each of these values corresponds to an observation spaced out in intervals of a time delay (or separation dimension) called τ. As numerous studies have shown, these vectors represent the interdependent relationships between observations, and since they increase the accuracy of the modelling, they also enhance the accuracy of the prediction [56]. The AMI plots of the SD components are presented in Figure 9, showing the interdependency among the SD components.
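The delay embedding described by Takens' theorem can be sketched in a few lines. This is an illustrative numpy version, with the embedding dimension m and delay τ as plain parameters:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Takens delay embedding: each row is the vector
    (x_t, x_{t+tau}, ..., x_{t+(m-1)*tau})."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau  # number of complete delay vectors
    return np.column_stack([x[k * tau : k * tau + n] for k in range(m)])
```

Each row of the result is one reconstructed state vector, so a series of length N yields N − (m − 1)τ such vectors.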

From Figures 9(a)-9(d), it is observed that the division of IMFs into SD components is independently distributed. After obtaining the two components, the next step is selecting an appropriate model for every component using the same steps discussed above. Every model is selected based on the LB test, with p values greater than 0.05 for both daily and weekly datasets and for all IMFs and components, so that all the fitted models are adequate and can be used for forecasting. In the next step, all the selected models are used for forecasting, with the last 20% of every series retained as the testing set. The ARIMA models are forecast in the R software; for the ANN, the data are transferred to MATLAB, and ntstool is used to forecast every component. After forecasting the series, the accuracy measures are calculated and presented in Tables 3 and 4 for the daily and weekly series, respectively.
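The component-wise forecast-and-recombine pipeline can be sketched as follows. Since the actual models were ARIMA (fitted in R) and ANN (MATLAB's ntstool), the least-squares AR(p) model below is only a simple linear stand-in used to show the flow: fit each component on the first 80%, produce one-step-ahead forecasts on the last 20%, and sum the component forecasts.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns [intercept, a_1, ..., a_p]."""
    x = np.asarray(x, dtype=float)
    cols = [np.ones(len(x) - p)] + [x[p - i : len(x) - i] for i in range(1, p + 1)]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), x[p:], rcond=None)
    return beta

def one_step_forecasts(x, beta, start):
    """One-step-ahead forecasts for x[start:], always conditioning on
    the actually observed past (rolling out-of-sample evaluation)."""
    x = np.asarray(x, dtype=float)
    p = len(beta) - 1
    preds = [beta[0] + float(np.dot(beta[1:], x[t - p : t][::-1]))
             for t in range(start, len(x))]
    return np.array(preds)

def ensemble_forecast(components, p=2, train_frac=0.8):
    """Fit each SD component separately and sum the component
    forecasts to obtain the ensemble forecast of the series."""
    n = len(components[0])
    start = int(train_frac * n)
    total = np.zeros(n - start)
    for comp in components:
        beta = fit_ar(comp[:start], p)
        total += one_step_forecasts(comp, beta, start)
    return total
```

The additivity of EEMD (series = sum of IMFs = SC + DC) is what justifies summing the component forecasts at the end.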

Discussion
The performance of the proposed model EEMD-R-ANN is evaluated using four quantitative measures: RMSE, MAPE, MAE, and DS. The results of all these measures for all models are presented in Table 3 for both markets of daily COPs. The benchmark models include the well-known single ARIMA and ANN models; the hybrid models EEMD-ARIMA and EEMD-ANN; the hybrid models using only the SD components obtained from the proposed reconstruction method, EEMD-SD-ANN and EEMD-SD-ARIMA; the models using AMI for the reconstruction of IMFs, EEMD-AMI-ARIMA and EEMD-AMI-ANN; and, last, the proposed reconstruction (R) of IMFs based on autocorrelation, denoted EEMD-R-ARIMA and EEMD-R-ANN.
The RMSE is computed for all models included in this study, shown in Tables 3 and 4 for both markets and all datasets of COPs, and plotted in Figures 10 and 11 for a clearer picture. Among all models, EEMD-R-ANN and EEMD-R-ARIMA achieved almost the same and the best (lowest) values for both COPs series. The reconstructed hybrid models EEMD-AMI-ARIMA and EEMD-AMI-ANN achieved nearly similar but higher values; EEMD-SD-ANN, EEMD-SD-ARIMA, and the hybrid models EEMD-ANN and EEMD-ARIMA achieved low values, though still higher than those of EEMD-R-ANN.
The single models ARIMA and ANN achieved the worst (highest) values among all considered models. Therefore, from the RMSE perspective, the proposed model EEMD-R-ANN confirmed its usefulness.
As another measure of forecasting performance, the MAE values of all models on the WTI and Brent (daily and weekly) COPs are shown in Tables 3 and 4 and plotted in Figures 10 and 11. It can be seen that EEMD-R-ANN outperformed all other models, and the interpretation does not differ from that of the RMSE for both markets and all datasets. Hence, from the MAE viewpoint, the proposed model EEMD-R-ANN confirmed its usefulness by attaining the lowest values.
The MAPE is another measure of forecasting performance. The MAPE values of all models on both markets, WTI and Brent (daily and weekly), are presented in Tables 3 and 4 and plotted in Figures 10 and 11. From Tables 3 and 4, the model EEMD-R-ANN attained the lowest values for both markets and all datasets (daily and weekly).
The three models EEMD-ANN, EEMD-ARIMA, and EEMD-R-ARIMA attained nearly similar values, whereas EEMD-SD-ANN and EEMD-SD-ARIMA attained nearly similar values but higher than the preceding three models. The models ANN, ARIMA, EEMD-AMI-ANN, and EEMD-AMI-ARIMA achieved nearly similar values but the highest among all models.
Therefore, from the MAPE perspective, the proposed model EEMD-R-ANN confirmed its usefulness by attaining the lowest values. The model EEMD-R-ANN produced the lowest MAPE values for Brent and WTI daily COPs, 0.542% and 0.572%, respectively, and thus falls in the category of highly accurate forecasts [57]. The MAPE values for Brent and WTI daily COPs of the three models EEMD-R-ARIMA, EEMD-ANN, and EEMD-ARIMA were in the range of 0.559% to 0.697%, achieving the same category as EEMD-R-ANN but with higher values. The MAPE values for the models EEMD-SD-ARIMA, EEMD-SD-ANN, EEMD-AMI-ARIMA, EEMD-AMI-ANN, ARIMA, and ANN were in the range of 1.018% to 1.489% for both COPs, falling in the category of good forecasts. Thus, from the MAPE perspective, the best model for obtaining highly accurate forecasts of daily COPs of both markets, Brent and WTI, is EEMD-R-ANN.
For weekly data, the model EEMD-R-ANN produced the lowest MAPE values for Brent and WTI COPs, 0.933% and 0.936%, respectively, and attained the class of highly accurate forecasts [57]. The MAPE values for Brent and WTI weekly COPs of the model EEMD-R-ARIMA were 0.936% and 0.952%, achieving the same category as EEMD-R-ANN but with higher values. The MAPE values for the models ANN, ARIMA, EEMD-SD-ARIMA, EEMD-SD-ANN, EEMD-AMI-ARIMA, EEMD-AMI-ANN, EEMD-ARIMA, and EEMD-ANN were in the range of 1.018% to 1.489% for both COPs and fall in the class of good forecasts. Thus, from the MAPE perspective, the best model for obtaining highly accurate forecasts of weekly COPs of both markets, Brent and WTI, is EEMD-R-ANN.
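For reference, the three scale- and percentage-based accuracy measures used above follow the standard definitions and can be computed as below (a minimal numpy sketch):

```python
import numpy as np

def rmse(y, f):
    """Root mean squared error."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    return float(np.sqrt(np.mean((y - f) ** 2)))

def mae(y, f):
    """Mean absolute error."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    return float(np.mean(np.abs(y - f)))

def mape(y, f):
    """Mean absolute percentage error, in percent."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    return float(np.mean(np.abs((y - f) / y)) * 100.0)
```

Note that MAPE requires the actual values to be nonzero, which holds for price series such as COPs.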
The next forecast accuracy measure concerns directional forecasts, which are particularly important for policymakers and investors, since they care about the market trend, i.e., whether the COPs are going up or down. The DS values are presented in Tables 3 and 4 for daily and weekly data, respectively, and plotted in Figure 12 for both markets.
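The directional statistic can be sketched as the percentage of test points at which the forecast moves in the same direction as the actual series. This is an illustrative implementation; the exact DS definition in the paper may differ in how ties are handled.

```python
import numpy as np

def directional_statistic(y, f):
    """DS: percentage of test points where the forecast change from the
    last observed value has the same sign as the actual change."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    actual = np.sign(y[1:] - y[:-1])       # actual direction of movement
    predicted = np.sign(f[1:] - y[:-1])    # forecast direction of movement
    return float(np.mean(actual == predicted) * 100.0)
```

A DS near 50% corresponds to a random guess, which is the benchmark against which the single models are judged below.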
The DS values of the ANN and ARIMA models for both daily and weekly COPs and both markets are in the range of 51.29%-55.55%, barely better than a random guess. Hence, it is very challenging to achieve directional forecasts at a reasonable level with single models, likely because of the complex and compound nature of COPs. Next are the decomposition and ensemble models that used all IMFs obtained from EEMD, whose DS values for both daily and weekly COPs are in the range of 80.17%-83.38%. Thus, the decomposition and ensemble models achieved higher forecasting accuracy and have a good capability of producing more accurate directional forecasts compared with single models. However, these models are very time-consuming. To cope with this issue, the idea of reconstruction of IMFs is introduced. Next is the directional forecasting performance of the models which used the reconstructed IMFs.
The forecasting performance of the models that used the AMI-based reconstruction is better than that of the single models but not far from a random guess. Next is the forecasting performance of the models that used only the SD components from the proposed method of reconstruction, whose DS values are in the range of 75.01%-78.39%, showing satisfactory directional forecasts. Finally, the directional forecasts of the proposed models EEMD-R-ARIMA and EEMD-R-ANN are in the range of 88.25%-94.46% for both COPs and both markets, the highest among all models. Thus, the proposed method of reconstruction of IMFs attained the highest values for directional forecasts and outperformed all other benchmark models, including ARIMA, ANN, EEMD-ARIMA, EEMD-ANN, EEMD-AMI-ARIMA, EEMD-AMI-ANN, EEMD-SD-ARIMA, and EEMD-SD-ANN.
The model EEMD-R-ANN performed relatively better than EEMD-R-ARIMA, so the highly recommended model regarding the DS values is EEMD-R-ANN for both COPs and both markets. Thus, the proposed method of IMF reconstruction through autocorrelation significantly improved the performance of the well-known ARIMA and ANN models. From this study, the suggested model is EEMD-R-ANN, which produced the highest values of DS and the lowest values of RMSE, MAPE, and MAE for both the Brent and WTI (daily and weekly) datasets.
Furthermore, for visual inspection, the forecasting accuracy measures are also plotted and presented in Figures 10-12 for both Brent and WTI daily and weekly datasets.
To check the superiority of the proposed model EEMD-R-ANN, the DM test is also conducted. The DM statistics and their p values are shown in Table 11 for both Brent and WTI COPs. The output of the DM test statistically validates the conclusions drawn from the forecasting accuracy measures above. The proposed model EEMD-R-ANN statistically outperformed the ARIMA, ANN, EEMD-ANN, EEMD-ARIMA, EEMD-AMI-ANN, EEMD-AMI-ARIMA, EEMD-SD-ANN, and EEMD-SD-ARIMA models, with p values far below 0.01 for both daily and weekly data and for both Brent and WTI COPs. However, the DM test statistic was not statistically significant between the EEMD-R-ANN and EEMD-R-ARIMA models for both markets and all datasets, which confirms the usefulness of the proposed reconstruction strategy.
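For one-step forecasts, the Diebold-Mariano test can be sketched as below. This is a simplified illustration using squared-error loss and a standard-normal approximation to the p value; the published test additionally uses a HAC variance estimate for multi-step horizons, which is omitted here.

```python
import math
import numpy as np

def dm_test(e1, e2):
    """One-step Diebold-Mariano test on squared-error loss.
    e1, e2 are the forecast errors of the two competing models;
    a positive statistic means model 2 has the smaller loss."""
    d = np.asarray(e1, dtype=float) ** 2 - np.asarray(e2, dtype=float) ** 2
    dm = d.mean() / math.sqrt(d.var(ddof=1) / len(d))
    p = math.erfc(abs(dm) / math.sqrt(2.0))  # two-sided p value
    return dm, p
```

A p value far below 0.01, as reported in Table 11, rejects the hypothesis of equal predictive accuracy between the two models.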
The above analysis and discussions are summarized in Section 4.1.

Summary of the Work.
From the above discussions and analysis, some important findings are summarized as follows: (1) When forecasting the COPs, it is hard to attain satisfactory results with single models (i.e., ARIMA and ANN) because of the nonstationary and nonlinear structure of the data. (2) The models which used all IMFs performed relatively better than the models which used the IMFs reconstructed into two components, namely, stochastic and deterministic. (3) The threshold value of the autocorrelation |ρ(y_t, y_{t−1})| = 0.95, recommended on the basis of synthetic data, was used for the reconstruction of IMFs on the four real datasets. The analysis proved that the reconstruction of IMFs through autocorrelation is a better and simpler strategy that enhanced the performance of all models.
(6) The experimental results demonstrated that the reconstruction of IMFs into SD components using the proposed method was effective, but modelling the IMFs belonging to the stochastic component separately was an even more effective and powerful approach for forecasting COPs. (7) As crude oil is one of the most important commodities in the world, we analyzed the COPs for illustration and verification of the proposed method. The empirical results showed that our proposed reconstruction-based decomposition and ensemble approach is vital and effective and could be tested on more complex tasks.

Conclusion
This paper proposed a new approach for the reconstruction of the IMFs of a decomposition and ensemble model according to their influences, i.e., stochastic and deterministic. We considered four different synthetic series to fix a value of autocorrelation for the reconstruction of IMFs with minimum human interference. All these analyses confirm that the new approach supports fast reconstruction of IMFs and helps researchers make predictions under time constraints. In that sense, we believe that our approach can improve the most complex stage, the reconstruction of IMFs according to their influences. Hence, reconstruction of IMFs based on autocorrelation is one of the best methods to reconstruct the IMFs according to their influences (Section 3.5).
Thus, from this study, the autocorrelation threshold value of 0.95 is recommended for separating the IMFs into SD components. To evaluate the performance of the proposed approach, four real-world COPs series were considered.
The experimental findings demonstrated that all methodologies, including the two single benchmark models and eight ensemble models, were effective. However, the forecasting accuracy measures in terms of MAE, RMSE, MAPE, and DS highlighted that the models EEMD-R-ANN and EEMD-R-ARIMA, based on the reconstruction of IMFs through autocorrelation, were the most efficient methods for forecasting COPs. Moreover, the performance of the EEMD-R-ANN model is slightly better than that of the EEMD-R-ARIMA model. Thus, the recommended model for forecasting world COPs is EEMD-R-ANN. Hence, the reconstruction of IMFs based on autocorrelation is the best method to reconstruct the IMFs according to their influences. The advantages of the newly proposed reconstruction approach stem from handling the stochastic IMFs separately, with the linear and nonlinear parts of the ARIMA and ANN models combined to handle the stochastic uncertainty. Furthermore, the proposed method of reconstruction also overcomes the supervision problem of the AMI method, in which the reconstruction is carried out through visual inspection of the AMI plots; in the proposed method, the reconstruction of IMFs is done through the autocorrelation of each IMF, so that every researcher obtains the same number of reconstructed IMFs from the same data, with high forecasting accuracy and less computational time.
In the future, the work could be extended in two directions: (1) studying the stochastic IMFs in more detail to improve the forecasting performance on COPs and (2) applying EEMD-R-ANN to other energy time series, such as wind speed and electricity prices.

Figure 1: The framework of the proposed method.

Figure 8: AMI plots of WTI weekly data for all IMFs.

Figure 11: Forecasting accuracy measures of weekly Brent and WTI data.


(4) With the benefits of EEMD, reconstruction, ARIMA, and ANN, the proposed ensemble models EEMD-R-ANN and EEMD-R-ARIMA significantly outperform all other models listed in this paper in terms of MAE, RMSE, MAPE, DS, and the DM test. (5) The difference in performance between the proposed models EEMD-R-ANN and EEMD-R-ARIMA is statistically insignificant in terms of the DM test. However, in terms of MAE, RMSE, MAPE, and DS, EEMD-R-ANN performed relatively better than EEMD-R-ARIMA. Thus, the suggested model for forecasting COPs is EEMD-R-ANN.

The complete structure of the ANN is presented in Figure 3 for both daily and weekly datasets. After completing the ANN construction, the next step is the training and testing process to obtain the fitted and forecast series.

Table 1: p values of the ADF unit root test and order of the selected ARIMA models for all data.

Table 2: Estimated parameters and their standard errors of the ARIMA models.
1 p value less than 0.01.

Table 3: Measures of testing forecast accuracy for daily COPs.

Table 4: Measures of testing forecast accuracy for weekly COPs.

Table 5: The number of IMFs belonging to the SD components for all threshold values and all scenarios.

Table 6: p values of the Pearson correlation test for all samples and threshold values.

Table 7: p values of the correlation difference test between the true correlation and the correlation among the components obtained from the reconstruction of IMFs. The bold values are statistically significant at 5%, indicating that the correlation among the reconstructed components differs from the true correlation.

Table 8: Summary of the threshold values for all four scenarios.
✓: the threshold value is recommended. ✗: the threshold value is not recommended.

Table 9: Autocorrelation of Brent and WTI daily and weekly data.

Table 10: Test of correlation between the two components for daily and weekly data.
Figure 10: Forecasting accuracy measures of daily Brent and WTI data.

Table 11: DM test results for Brent and WTI daily and weekly data.