Article

Transductive Feature Selection Using Clustering-Based Sample Entropy for Temperature Prediction in Weather Forecasting

ESAT-STADIUS (Department of Electrical Engineering-Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics), KU Leuven, Kasteelpark Arenberg 10, B-3001 Leuven, Belgium
*
Author to whom correspondence should be addressed.
Entropy 2018, 20(4), 264; https://doi.org/10.3390/e20040264
Submission received: 27 February 2018 / Revised: 30 March 2018 / Accepted: 7 April 2018 / Published: 10 April 2018
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

Entropy measures have long been used to quantify the information content of a dynamical system. One of the well-known methodologies is sample entropy, a model-free approach that can be deployed to measure the information transfer in time series. Sample entropy is based on conditional entropy, where a major concern is the number of past delays (lags) included in the conditioning term. In this study, we deploy a lag-specific conditional entropy to identify the informative past values. Moreover, considering the seasonal structure of the data, we propose a clustering-based sample entropy to exploit the temporal information. It follows the sample entropy definition while taking into account the clustering information of the training data and the membership of the test point to the clusters. We utilize the proposed method for transductive feature selection in black-box weather forecasting and conduct experiments on minimum and maximum temperature prediction in Brussels for 1–6 days ahead. The results reveal that considering the local structure of the data can improve the feature selection performance. In addition, despite the large reduction in the number of features, the performance is competitive with the case of using all features.

1. Introduction

Entropy measures have been used for many years to quantify the amount of information that a system contains. They play a significant role in interpreting and describing the dynamics of real-life complex networks such as climate, financial, physiological, Earth and medical systems [1,2,3,4,5,6]. Entropy measures can be evaluated with model-based or model-free approaches. While model-based approaches benefit from prior knowledge about the probability distribution of the data, model-free methods estimate the probability distribution from the data. Since the probability distribution of the data is unknown in many real-life applications, in this study we use a model-free approach known as sample entropy, which is one of the popular methods for analyzing the complexity of a dynamical system.
Moreover, in time series analysis, entropy measures can be utilized to illustrate the strength and the direction of causality. The authors in [7] investigate a bivariate dynamical system and suggest conditional entropy to evaluate the amount of information in one particular state of a process when the history of the other one is known. One major concern while using conditional entropy is the number of previous values, known as lag or delay, in the conditioning term. Furthermore, it is important to indicate which lags are more influential. In [8], a lag-specific transfer entropy method was proposed, which evaluates the causality between two time series only based on the informative lags; i.e., the informative delays are selected and the others are discarded.
In this study, we focus on a weather forecasting application as a complex system. Reliable weather forecasting is a central issue since weather conditions can affect our daily life and activities in different ways. State-of-the-art methods make use of Numerical Weather Prediction (NWP), which requires thousands of CPUs for the simulations and is consequently computationally intensive [9]. In recent years, black-box modeling has been used to address the issue of reliable weather forecasting. Some studies take into account the spatial and temporal properties of the dataset, e.g., Geographically Weighted Regression (GWR) explores the variation of regression coefficients considering spatial information [10]. Other studies have taken advantage of the locality structure of weather conditions to obtain better performance. In global learning methods, the same weights are considered for all data points in the training data, while transductive learning algorithms assume that the samples in the test point vicinity are more influential for model fitting [11]. In [12], a clustering-based feature selection for temperature prediction is proposed in which the feature selection and model fitting are done for each cluster independently, and the trained models are then combined based on the membership values of the test point to each cluster. In [13], Moving Least Squares Support Vector Machines (M-LSSVM) was proposed as a soft localization of Least Squares Support Vector Machines (LSSVM) for temperature prediction in Brussels. In this study, we propose a transductive approach for measuring the sample entropy in dynamical systems.
In a data-driven approach, weather forecasting can be seen as a Nonlinear AutoRegressive eXogenous (NARX) model; i.e., the historical data of some nearby cities are taken into account as input features. One may use a feature vector, which is the concatenation of the weather variables from different cities. Taking into account several lags for each variable leads to a high dimensional feature vector. Different studies have deployed information theory to find relevant features in static or dynamic problems [14,15,16,17,18,19]. In this paper, we investigate a global and transductive feature selection for a weather forecasting application. Note that the terms “local” and “global” are considered here in the machine learning sense as in [11] and are not referring to the geographical location of the weather stations. In the global approach, the same weights are considered for all data points in the training data for feature selection, while in the case of the transductive method, the samples in the test point vicinity in feature space are more influential. For the purpose of feature selection, in this study, we deploy the lag-specific information transfer idea to find relevant features in our problem as the globally selected features for the prediction task. In addition, we propose a clustering-based sample entropy methodology, which can be beneficial for transductive feature selection when the local structure of the data is taken into account. In this approach, depending on the clustering information of the training data and the membership values of the test point to the clusters, the samples have different impacts on the sample entropy. Deploying hard clustering can result in using only a part of the training data to measure the sample entropy with the same impact while discarding the other samples. However, using soft clustering, one may exploit the information of the whole dataset while considering different weights for the training samples. In this study, Soft Kernel Spectral Clustering (SKSC) is utilized for the clustering task. Least Squares Support Vector Machines (LSSVM), which is one of the popular learning methods, is used to model the data using the globally- and transductively-selected features.
In this study, the experiments are carried out for temperature prediction in weather forecasting. The data have been collected from the Weather Underground website [20] and include real measurements of weather elements such as temperature, dew point, humidity and wind speed for 10 cities including Brussels, Liege, Antwerp, Amsterdam, Eindhoven, Dortmund, London, Frankfurt, Groningen and Dublin. To evaluate the performance of the proposed method, there are two test sets in different periods of the year: (i) from mid-November 2013 to mid-December 2013 (November/December) and (ii) from mid-April 2014 to mid-May 2014 (April/May). Temperature forecasting is done for both minimum and maximum temperature prediction for 1–6 days ahead in Brussels.
The remaining part of the paper proceeds as follows: first, we explain the background and the proposed method. Then, we present and discuss the results for the application of temperature prediction in weather forecasting, and finally, concluding remarks are presented.

2. Materials and Methods

In this section, first we will explain the background of the methodologies that are used as baselines for transductive feature selection using clustering-based sample entropy. Afterwards, the proposed methods will be described in detail.

2.1. Background

In this section, we cover the methods used in our algorithm. First, we explain different entropy measures in static and dynamic cases. Secondly, we describe a lag-specific information transfer method, which is used as the main idea of feature selection in our proposed method. Later, Soft Kernel Spectral Clustering (SKSC), which is utilized to find the local structure in our data, and Least Squares Support Vector Machines (LSSVM), which is deployed as a learner, are explained respectively.

2.1.1. Entropy and Information Transfer

Entropy measures are popular methods to investigate the uncertainty of data. In [21], Shannon argues that the fundamental problem in a communication system is the reproduction at one point of a message selected at another point. A communication system includes five elements: (1) the information source, which generates one or more messages to be delivered to the destination; (2) the transmitter, which manipulates the messages to pass them through the channel; (3) the channel, which is a medium to transfer the messages to the destination; (4) the receiver, which retrieves the original message by inverting what the transmitter did; and (5) the destination, which is the intended target of the message.
Shannon defines a measure of uncertainty for the outcome of the system known as Shannon entropy. Given a set of possible events $\Delta = \{\delta_1, \delta_2, \ldots, \delta_n\}$ with occurrence probability $p(\delta_i)$ for $i \in \{1, \ldots, n\}$, the Shannon entropy can be defined as follows:
$$H(\Delta) = -\sum_{i=1}^{n} p(\delta_i)\log_2 p(\delta_i).$$
In (1), $H(\Delta)$ shows the uncertainty in the information that the variable $\Delta$ gives about itself. Joint entropy is a measure that evaluates the uncertainty of the outcome when there is more than one random process. Assuming there is another variable $\Pi$ with a set of possible events $\Pi = \{\pi_1, \pi_2, \ldots, \pi_n\}$ and with occurrence probability $p(\pi_j)$ for $j \in \{1, \ldots, n\}$, the joint entropy can be defined as follows:
$$H(\Delta, \Pi) = -\sum_{i,j=1}^{n} p(\delta_i, \pi_j)\log_2 p(\delta_i, \pi_j),$$
where $p(\delta_i, \pi_j)$ is the probability of the joint occurrence of $\delta_i$ and $\pi_j$ [21].
Conditional entropy is a measure to assess the uncertainty of a random process while the other one is known. Given the value of $\Pi$, the conditional entropy of $\Delta$ given $\Pi$ can be defined as the average of the Shannon entropy as follows [22]:
$$H(\Delta \,|\, \Pi) = \sum_{j=1}^{n} p(\pi_j)\, H(\Delta \,|\, \Pi = \pi_j) = -\sum_{i,j=1}^{n} p(\delta_i, \pi_j)\log_2 p(\delta_i \,|\, \pi_j).$$
The aforementioned Equations (1)–(3) do not consider the time of occurrence and hence are known as static. However, these definitions from information theory play significant roles in the analysis of dynamical systems [23,24,25]. In dynamic processes, the entropy measures can be used to express the information content of a process over time, e.g., the information that the process contains at a specific state or the information it receives from previous states. These methods have been used in a wide range of real-world applications such as climatology, physiology, finance and biology [2,3,4,8,26].
The definition of entropy measures for dynamical processes is similar to the static case. To express the uncertainty in these systems, assume $X_i$ indicates a random variable sampled from a dynamic process $X$ at time $i$, and $X_i^- = \{X_1, X_2, \ldots, X_{i-1}\}$ denotes its past states. Given that $p(x_i)$ is the probability that $X_i$ holds the value $x_i$ and $S_i$ is the set of possible values for $x_i$, the Shannon entropy expresses the information content at the current state of the process ($H(X_i) = -\sum_{x_i \in S_i} p(x_i)\log_2 p(x_i)$).
Furthermore, joint entropy expresses the information content of the current and the past states of the random variable $X$ as follows:
$$H(X_i \cup X_i^-) = H(X_i, X_i^-) = -\sum_{x_j \in S_j} p(x_1, \ldots, x_i)\log_2 p(x_1, \ldots, x_i).$$
Moreover, conditional entropy equals the amount of information that the current state of the random process contains in addition to the past states and can be written as follows:
$$H(X_i \,|\, X_i^-) = H(X_i, X_i^-) - H(X_i^-) = -\sum_{x_j \in S_j} p(x_1, \ldots, x_i)\log_2 p(x_i \,|\, x_1, \ldots, x_{i-1}),$$
where $p(x_i \,|\, x_1, \ldots, x_{i-1})$ is the probability that $X_i$ holds the value $x_i$ given that $X_1$ to $X_{i-1}$ are $x_1$ to $x_{i-1}$, respectively.
In [27], the entropy of dynamical processes is introduced as a predictability measure. Considering that conditional entropy in dynamical processes can be interpreted as the new information gained at the current state that is unknown from the previous ones, one may say that if the value of the current state is completely predictable from the previous ones, then there is no new information in the current state; hence, the conditional entropy is equal to zero. Conversely, a large conditional entropy shows that the amount of information generated by the current state is large; thus, there is a lack of information to predict the current state based on its history.
In order to use these measurements on real-world datasets, the experiments rely on time series prediction. Assume $X = \{x_1, x_2, \ldots, x_N\}$ is a time series of length $N$. To deploy the entropy measures, the probability density function can be approximated as follows [28]:
$$p(x_i) = \frac{1}{N}\sum_{j=1}^{N} K(x_j, x_i),$$
where $K(\cdot,\cdot)$ is a kernel function to measure the similarity of $x_j$ and $x_i$. After having the probability distribution, the Shannon entropy of the time series can be written as follows:
$$H(X_i) = -\ln\left(\langle p(x_i)\rangle\right),$$
where $\langle p(x_i)\rangle$ indicates the average of $p(x_i)$ over all possible values $x_i$. Furthermore, substituting (7) into (5), the conditional entropy can be expressed in terms of joint probabilities as follows [23]:
$$H(X_i \,|\, X_i^-) = H(X_i, X_i^-) - H(X_i^-) = -\ln\left(\frac{\langle p(x_1, \ldots, x_{n-1}, x_n)\rangle}{\langle p(x_1, \ldots, x_{n-1})\rangle}\right).$$
In this study, in order to measure entropy values, we use the special case of the dynamical entropy known as sample entropy [3]. In the sample entropy method, the kernel function $K(\cdot,\cdot)$ is taken to be the Heaviside kernel. The Heaviside kernel sets a threshold $r$ on the distance of $x_i$ to each sample, i.e., it indicates how many samples are within distance $r$ of $x_i$. One of the popular approaches is to measure the distance based on the maximum norm, which is the maximum of the absolute differences between the features of two samples. Considering $\theta(x_i, x_j) = \max_{1 \le q \le d} |x_j^q - x_i^q|$, where $x_i^q$ is the $q$-th component (feature) of the $i$-th data point and $d$ is the number of features, the Heaviside kernel is expressed as follows:
$$K(x_j, x_i) = \begin{cases} 1, & \theta(x_i, x_j) \le r \\ 0, & \theta(x_i, x_j) > r. \end{cases}$$
Measuring entropy in time series has its own challenges. One important issue in computing entropy criteria is the curse of dimensionality [29]. As the length of the considered history becomes larger, the conditional entropy gets closer to zero. Thus, in practice, a limited length of history is taken into account. Considering only m previous values for the joint probability (to avoid the curse of dimensionality) and excluding the self-match, Equation (8) is equivalent to the sample entropy of the time series [3,23].
Sample entropy can be described as follows: assuming $X$ is a realization of a time series $\{x_1, x_2, \ldots, x_N\}$ of length $N$, $X_i^m$ is a vector of length $m$ defined as follows:
$$X_i^m = \{x_i, x_{i+1}, \ldots, x_{i+m-1}\}, \quad i = 1, \ldots, N-m+1.$$
Excluding the self-match, for $i$ ranging from $1$ to $N-m$, $A_i^m(r)$ and $B_i^m(r)$ in the $m$- and $(m+1)$-dimensional spaces are calculated as follows:
$$A_i^m(r) = \frac{1}{N-m-1}\sum_{\substack{j=1 \\ j \ne i}}^{N-m} K(X_j^m, X_i^m),$$
$$B_i^m(r) = \frac{1}{N-m-1}\sum_{\substack{j=1 \\ j \ne i}}^{N-m} K(X_j^{m+1}, X_i^{m+1}),$$
where $K(\cdot,\cdot)$ is the Heaviside kernel, which is used to indicate how many samples are within distance $r$ of $X_i^m$. Afterwards, $A^m(r)$ and $B^m(r)$ are defined to be equal to the averages of $A_i^m(r)$ and $B_i^m(r)$ over all possible $X_i^m$:
$$A^m(r) = \frac{1}{N-m}\sum_{i=1}^{N-m} A_i^m(r),$$
$$B^m(r) = \frac{1}{N-m}\sum_{i=1}^{N-m} B_i^m(r).$$
Finally, the sample entropy in the $m$-dimensional space is calculated as follows:
$$SampEnt(m, r, N) = -\ln\left(\frac{B^m(r)}{A^m(r)}\right).$$
Note that B m ( r ) is always smaller than or equal to A m ( r ) ; thus, S a m p E n t ( m , r , N ) has a non-negative value.
As previously explained, the entropy measures can be interpreted as predictability power. In this study, we deploy the sample entropy definition to find relevant delays in a NARX model. In this scheme, the predictor time series form the $m$ dimensions (refer to (11)), and the target time series is appended as the $(m+1)$-th dimension (refer to (12)).
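As an illustration of how (10)–(15) can be computed in practice, the following minimal NumPy sketch (not part of the original paper; function and variable names are ours) estimates $SampEnt(m, r, N)$ of a univariate series with the Heaviside kernel, the maximum-norm distance and self-matches excluded:

```python
import numpy as np

def sample_entropy(x, m, r):
    """Estimate SampEnt(m, r, N) of a 1-D series x with the Heaviside kernel
    and maximum-norm distance, excluding self-matches (Eqs. (10)-(15))."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # delay vectors X_i^m and X_i^{m+1}, i = 1, ..., N - m
    Xm  = np.array([x[i:i + m]     for i in range(N - m)])
    Xm1 = np.array([x[i:i + m + 1] for i in range(N - m)])

    def match_fraction(templates):
        # average fraction of templates within distance r of each template
        n = len(templates)
        total = 0.0
        for i in range(n):
            d = np.max(np.abs(templates - templates[i]), axis=1)  # max norm
            total += (np.sum(d <= r) - 1) / (n - 1)               # drop self-match
        return total / n

    A = match_fraction(Xm)    # Eq. (13), matches in m dimensions
    B = match_fraction(Xm1)   # Eq. (14), matches in m + 1 dimensions
    return -np.log(B / A)     # Eq. (15)

# toy usage: irregular white noise yields a larger value than a smooth sine
rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500), m=2, r=0.2))
print(sample_entropy(np.sin(np.linspace(0, 8 * np.pi, 500)), m=2, r=0.2))
```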

2.1.2. Lag-Specific Information Transfer

As previously mentioned, entropy criteria can be utilized to determine how much information is transferred from the previous states of a dynamical process to the current one. This can be extended to more than one dynamical process in order to evaluate their relations and influences on each other. The authors in [8] have proposed a lag-specific transfer entropy method to evaluate the information transfer. Given that $X_n$ and $Y_n$ are the values of time series $X$ and $Y$ at time $n$, and $X_n^- = \{X_{n-1}, X_{n-2}, \ldots\}$ and $Y_n^- = \{Y_{n-1}, Y_{n-2}, \ldots\}$ indicate the past values of the time series, then based on Granger causality (G-causality), there is a G-causality from $X$ to $Y$ if $X_n^-$ includes information that can improve the prediction of $Y_n$ above and beyond the information that $Y_n^-$ involves [30]. The amount of information contained in $X_n^-$ can be measured using the following:
$$I(Y_n; X_n^- \,|\, Y_n^-) = H(Y_n \,|\, Y_n^-) - H(Y_n \,|\, X_n^-, Y_n^-).$$
Based on this definition, there is G-causality from $X$ to $Y$ if and only if $I(Y_n; X_n^- \,|\, Y_n^-) > 0$ [31].
The authors in [8] have discussed the fact that this approach accumulates the G-causal influence of all past values and therefore does not consider lag-specific information; i.e., the amount of information that a specific state $X_{n-t}$ gives to $Y_n$ is unknown. In order to make it lag-specific, an itemized approach is proposed: there is G-causality from $X_{n-t}$ to $Y$ if $X_{n-t}$ includes information that can improve the prediction of $Y_n$ above and beyond the information that both $Y_n^-$ and $X_n^- \setminus X_{n-t}$ involve. Note that $X_n^- \setminus X_{n-t} = \{X_{n-1}, \ldots, X_{n-t+1}, X_{n-t-1}, \ldots\}$. This approach can exploit the amount of information in each lag of different dynamical processes, and therefore the informative lags can be identified. Eventually, the transfer entropy can aggregate only the information contained in the informative past values. The procedure for finding informative past values can be described as follows.
Assume $V$ is the set of selected influential and informative components and $\bar{V}$ is the set of candidate components; then $V \cap \bar{V} = \emptyset$ and $V \cup \bar{V} = \{X_{n-1}, \ldots, X_{n-L_{max}}, Y_{n-1}, \ldots, Y_{n-L_{max}}\}$, where $L_{max}$ is the maximum lag to be taken into account. Note that in this study, $X_i$ and $Y_i$ for $i \in \{1, \ldots, n\}$ are univariate time series. Detecting the influential lags is an iterative procedure where $V_k$ and $\bar{V}_k$ indicate $V$ and $\bar{V}$ at iteration $k$, respectively. The algorithm starts with $V_0$ as an empty set. In each iteration, for each $W \in \bar{V}_{k-1}$, a candidate set $\{W, V_{k-1}\}$ is created, and the conditional entropy $H(Y_n \,|\, W, V_{k-1})$ is computed. The component $W_k = \arg\min_{W} H(Y_n \,|\, W, V_{k-1})$ that results in the minimum conditional entropy is selected, and subsequently, $V$ and $\bar{V}$ are updated as follows: $V_k = \{W_k, V_{k-1}\}$ and $\bar{V}_k = \bar{V}_{k-1} \setminus W_k$. The procedure terminates when an irrelevant component is added to the selected set $V$.
The relevance of the selected component is evaluated based on the significance of the reduction in the conditional entropy as follows:
$$I(Y_n; W_k \,|\, V_{k-1}) = H(Y_n \,|\, V_{k-1}) - H(Y_n \,|\, V_k).$$
To determine the significance of the reduction in the conditional entropy, a statistical approach is used. The statistical significance is estimated by deploying time-shifted surrogate data [8]. In this approach, the surrogate data are generated by repeatedly shifting the original series $W_k$ by a randomly selected lag with respect to $Y_n$. For example, assuming $W_k$ has $N$ elements, $W_k = [W_k^1, W_k^2, \ldots, W_k^N]$, the series shifted by a lag equal to $l$ is $[W_k^{l+1}, W_k^{l+2}, \ldots, W_k^N, W_k^1, \ldots, W_k^l]$ [32]. Afterwards, the reduction of the conditional entropy is evaluated for the original series and the shifted ones. If the reduction of the conditional entropy for the original series is below the $100(1-\alpha)$ percentile of its distribution on the surrogate data, $W_k$ is considered to be an irrelevant feature (delay), and the termination condition is fulfilled; otherwise, $W_k$ is a relevant variable and is added to the selected set $V$. Note that the shift $l$ has to be large enough (not close to one or $N$) to eliminate the causality effect between the shifted time series and the output. If the null hypothesis is rejected, one can conclude that the reduction in the conditional entropy is indeed due to causality rather than chance.
The authors in [8] have used the idea of relevant component selection to evaluate the transfer entropy between the variables and to measure the amount of information that they transfer to each other. Nevertheless, in this paper, regardless of the amount of transferred information, we use the lag-specific component selection idea as a forward feature selection approach to find relevant features in an NARX model. Therefore, if a lag-specific component contains information according to (17), then it is selected as a relevant feature.
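The following NumPy sketch (our own illustration, not code from [8]; the names and the simplified stopping rule are assumptions) shows a forward selection of informative lags using the Heaviside-kernel conditional entropy of Section 2.1.1 and circular time-shift surrogates for the significance test:

```python
import numpy as np

def cond_entropy(F, y, r):
    """H(y | F) with the Heaviside kernel and maximum-norm distance,
    estimated as -ln(<p(F, y)> / <p(F)>), self-matches excluded."""
    n = len(y)
    Z = np.column_stack([F, y])
    num = den = 0.0
    for i in range(n):
        dF = np.max(np.abs(F - F[i]), axis=1) if F.shape[1] else np.zeros(n)
        dZ = np.max(np.abs(Z - Z[i]), axis=1)
        den += (np.sum(dF <= r) - 1) / (n - 1)
        num += (np.sum(dZ <= r) - 1) / (n - 1)
    return -np.log(num / den)

def forward_lag_selection(cands, y, r, n_surr=50, alpha=0.05, seed=0):
    """Forward selection of informative lags: add the candidate that minimises
    H(y | selected); stop when the entropy reduction does not exceed the
    100(1 - alpha) percentile obtained with time-shifted surrogates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    selected, remaining = [], list(cands)
    while remaining:
        F_sel = (np.column_stack([cands[k] for k in selected])
                 if selected else np.empty((n, 0)))
        h_sel = cond_entropy(F_sel, y, r)
        scores = {w: cond_entropy(np.column_stack([F_sel, cands[w]]), y, r)
                  for w in remaining}
        best = min(scores, key=scores.get)
        # surrogate reductions: circularly shift the candidate by a random lag
        surr_red = []
        for _ in range(n_surr):
            shift = int(rng.integers(n // 4, 3 * n // 4))
            surr = np.roll(cands[best], shift)
            surr_red.append(h_sel - cond_entropy(np.column_stack([F_sel, surr]), y, r))
        if h_sel - scores[best] <= np.percentile(surr_red, 100 * (1 - alpha)):
            break                      # best candidate is not significant: stop
        selected.append(best)
        remaining.remove(best)
    return selected

# toy usage: y depends on x lagged by 2, so "x_lag2" should be selected
rng = np.random.default_rng(1)
x = rng.standard_normal(400)
y = 0.9 * np.roll(x, 2) + 0.1 * rng.standard_normal(400)
cands = {f"x_lag{l}": np.roll(x, l) for l in range(1, 4)}
print(forward_lag_selection(cands, y, r=0.2))
```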

2.1.3. Soft Kernel Spectral Clustering

To take advantage of the local structure of the data, one may use clustering. As previously said, using soft clustering can be beneficial to exploit the knowledge of all samples while considering different weights for the samples in each cluster depending on the membership of the test point to that cluster. In this study, we use Soft Kernel Spectral Clustering (SKSC), which is one of the popular non-linear clustering methods [33].
Assume $\kappa$ is the number of clusters and $x_i$ is a row vector of $d$ features for $i \in \{1, 2, \ldots, N\}$. Considering $l$ as indexing the $\kappa - 1$ score variables needed to encode the $\kappa$ clusters, the projections of the training data in the feature space can be represented by $e^{(l)} = [e_1^{(l)}, \ldots, e_N^{(l)}]^T$. Let $\gamma_l \in \mathbb{R}^+$ be the regularization parameter. The SKSC primal formulation is expressed as follows [33,34]:
$$\min_{w^{(l)}, b_l, e^{(l)}} \; \frac{1}{2}\sum_{l=1}^{\kappa-1} w^{(l)T} w^{(l)} - \frac{1}{2N}\sum_{l=1}^{\kappa-1} \gamma_l\, e^{(l)T} D_{\Omega}^{-1} e^{(l)} \quad \text{subject to} \quad e^{(l)} = \Phi w^{(l)} + b_l 1_N, \quad l = 1, \ldots, \kappa-1.$$
Here, $\varphi(\cdot): \mathbb{R}^d \rightarrow \mathbb{R}^h$ is the feature map that maps the data to a high- or infinite-dimensional space, and $\Phi = [\varphi(x_1)^T, \ldots, \varphi(x_N)^T]$ is an $N \times h$ matrix. $\Omega$ is the kernel matrix with $\Omega_{ij} = K(x_i, x_j)$, and Mercer's theorem [35] can be expressed as follows:
$$\Omega_{ij} = \varphi(x_i)^T \varphi(x_j) = K(x_i, x_j), \quad i, j = 1, 2, \ldots, N.$$
Note that for positive definite kernel function K ( · , · ) , one may exploit Mercer’s theorem to implicitly use the feature map, and thus, φ ( · ) does not have to be explicitly defined.
In addition, $D_{\Omega} \in \mathbb{R}^{N \times N}$ is the diagonal degree matrix associated with $\Omega$, where $D_{\Omega}(i,i) = \sum_j \Omega_{ij}$.
Let $\alpha^{(l)} \in \mathbb{R}^N$ be the vector of Lagrange multipliers. Then, based on the Lagrangian
$$\mathcal{L}(w^{(l)}, b_l, e^{(l)}; \alpha^{(l)}) = \frac{1}{2}\sum_{l=1}^{\kappa-1} w^{(l)T} w^{(l)} - \frac{1}{2N}\sum_{l=1}^{\kappa-1} \gamma_l\, e^{(l)T} D_{\Omega}^{-1} e^{(l)} + \sum_{l=1}^{\kappa-1} \alpha^{(l)T}\big(e^{(l)} - \Phi w^{(l)} - b_l 1_N\big),$$
the optimality conditions for $l = 1, \ldots, \kappa-1$ are as follows [34]:
$$\frac{\partial \mathcal{L}}{\partial w^{(l)}} = 0 \rightarrow w^{(l)} = \Phi^T \alpha^{(l)}, \quad
\frac{\partial \mathcal{L}}{\partial b_l} = 0 \rightarrow 1_N^T \alpha^{(l)} = 0, \quad
\frac{\partial \mathcal{L}}{\partial e^{(l)}} = 0 \rightarrow \alpha^{(l)} = \frac{\gamma_l}{N} D_{\Omega}^{-1} e^{(l)}, \quad
\frac{\partial \mathcal{L}}{\partial \alpha^{(l)}} = 0 \rightarrow e^{(l)} = \Phi w^{(l)} + b_l 1_N.$$
After eliminating $w^{(l)}$, $b_l$ and $e^{(l)}$, the dual problem is described as follows:
$$D_{\Omega}^{-1} M_D \Omega\, \alpha^{(l)} = \lambda_l \alpha^{(l)},$$
where $\alpha^{(l)}$ is the vector of dual variables, $\lambda_l = \frac{N}{\gamma_l}$, and $M_D = I_N - \frac{1}{1_N^T D_{\Omega}^{-1} 1_N}\, 1_N 1_N^T D_{\Omega}^{-1}$ is a centering matrix.
The clustering model in the dual space, evaluated at a given sample $x_i$, is as follows:
$$e_i^{(l)} = \sum_{j=1}^{N} \alpha_j^{(l)} K(x_j, x_i) + b_l, \quad l = 1, \ldots, \kappa-1.$$
After finding the initial borders of the clusters, the prototypes in the space of the score variables are recalculated to improve the cluster borders, and subsequently, the data points are assigned to a cluster based on their distance to the prototypes. The prototype $\psi_c$ of cluster $c$, for $c = 1, \ldots, \kappa$, can be found as follows:
$$\psi_c = \frac{1}{N_c}\sum_{i=1}^{N_c} e_i^{(l)},$$
where $N_c$ is the number of samples in cluster $c$ and $e_i^{(l)}$ are the score variables of the samples in cluster $c$.
In this study, we use the Radial Basis Function (RBF) kernel (24) as the kernel function; thus, the kernel bandwidth $\sigma$ together with the number of clusters $\kappa$ are the two parameters that have to be tuned. The RBF kernel is:
$$K(x_i, x_j) = \exp\left(-\|x_i - x_j\|_2^2 / \sigma^2\right).$$
In this study, Average Membership Strength (AMS) is employed to tune the hyperparameters based on a grid search, and the values that yield the maximum AMS are selected. Thus, for different numbers of clusters (in this study, from 2 to 10) and different values of the kernel bandwidth, AMS is evaluated on the validation set, and the combination with the maximum AMS is selected. In AMS, the average membership value of the samples in the validation set to each cluster is calculated based on the cosine similarities between the samples and the prototype of the corresponding cluster.
For a given test point $x_{test}$, the membership value to cluster $c$ is expressed as follows [33]:
$$Memb_{test}(c) = \frac{\prod_{j \ne c} d_{test,j}^{cos}}{\sum_{h=1}^{\kappa} \prod_{j \ne h} d_{test,j}^{cos}}, \qquad \sum_{h=1}^{\kappa} Memb_{test}(h) = 1,$$
where $\kappa$ is the number of clusters and $d_{test,j}^{cos}$ is the cosine similarity between the test sample and the prototype of cluster $j$ in the space of the score variables.
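As a small illustration of the soft assignment step, the sketch below (our own; it assumes the score variables $E$ and an initial hard assignment are already available from the dual problem (21)–(22), and it takes $d^{cos}$ as the cosine distance to each prototype, as in the SKSC reference [33]) computes the prototypes of (23) and membership values in the spirit of (25):

```python
import numpy as np

def cluster_prototypes(E, labels, n_clusters):
    """Prototype of each cluster: mean of the score variables e_i of the
    training points assigned to that cluster (Eq. (23))."""
    return np.array([E[labels == c].mean(axis=0) for c in range(n_clusters)])

def soft_membership(e_test, prototypes):
    """Soft membership of a test point to each cluster in the spirit of (25):
    product over the other clusters of the cosine distances to the prototypes,
    normalised so that the memberships sum to one."""
    cos_sim = np.array([e_test @ p / (np.linalg.norm(e_test) * np.linalg.norm(p))
                        for p in prototypes])
    d_cos = 1.0 - cos_sim                       # cosine distance to each prototype
    prod_wo = np.array([np.prod(np.delete(d_cos, c)) for c in range(len(prototypes))])
    return prod_wo / prod_wo.sum()

# toy usage: kappa = 3 well-separated clusters in a 2-D score space
rng = np.random.default_rng(1)
E = np.vstack([rng.normal(size=(30, 2)) + mu for mu in ([3, 0], [0, 3], [-3, -3])])
labels = np.repeat([0, 1, 2], 30)
proto = cluster_prototypes(E, labels, 3)
print(soft_membership(np.array([2.5, 0.5]), proto))   # highest weight on cluster 0
```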

2.1.4. Least Squares Support Vector Machines

Least Squares Support Vector Machines (LSSVM) is a well-known machine learning method proposed in [36,37]. The main difference between Support Vector Machines (SVM) and LSSVM is the fact that instead of quadratic programming in SVM, LSSVM solves a set of linear equations. Let $x \in \mathbb{R}^d$, $y \in \mathbb{R}$ and $\varphi: \mathbb{R}^d \rightarrow \mathbb{R}^h$, where $\varphi(\cdot)$ is a feature map and $h$ is the dimension of the feature map. The model in the primal space is formulated as follows:
$$y(x) = w^T \varphi(x) + b,$$
where $b \in \mathbb{R}$ and $w \in \mathbb{R}^h$. The optimization problem is written as follows [37]:
$$\min_{w, b, e} \; \frac{1}{2} w^T w + \frac{\gamma}{2}\sum_{j=1}^{N} e_j^2 \quad \text{subject to} \quad y_j = w^T \varphi(x_j) + b + e_j, \quad j = 1, \ldots, N,$$
where $\{x_j, y_j\}_{j=1}^{N}$ is the training set, $\gamma \in \mathbb{R}^+$ is the regularization parameter and $e_j = y_j - \hat{y}_j$ is the error between the actual and predicted output for data point $j$.
Let $\alpha_j \in \mathbb{R}$ be the Lagrange multipliers. Then, based on the Lagrangian
$$\mathcal{L}(w, b, e; \alpha) = \frac{1}{2} w^T w + \frac{\gamma}{2}\sum_{j=1}^{N} e_j^2 - \sum_{j=1}^{N} \alpha_j\big(w^T \varphi(x_j) + b + e_j - y_j\big),$$
the optimality conditions are as follows:
$$\frac{\partial \mathcal{L}}{\partial w} = 0 \rightarrow w = \sum_{j=1}^{N} \alpha_j \varphi(x_j), \quad
\frac{\partial \mathcal{L}}{\partial b} = 0 \rightarrow \sum_{j=1}^{N} \alpha_j = 0, \quad
\frac{\partial \mathcal{L}}{\partial e_j} = 0 \rightarrow \alpha_j = \gamma e_j, \; j = 1, \ldots, N, \quad
\frac{\partial \mathcal{L}}{\partial \alpha_j} = 0 \rightarrow y_j = w^T \varphi(x_j) + b + e_j, \; j = 1, \ldots, N.$$
After eliminating $w$ and $e$, the dual problem can be formulated as follows:
$$\begin{bmatrix} 0 & 1_N^T \\ 1_N & \Omega + \frac{1}{\gamma} I_N \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix},$$
where Ω is the kernel matrix. In this study, we deploy RBF as a kernel function, which is formulated in (24). Thus, the regularization parameter γ and the kernel parameter σ are the tuning parameters.
Having $\alpha_j$ and $b$ as the solution of the linear system, the LSSVM model as a function estimator is expressed as follows:
$$\hat{y}(x) = \sum_{j=1}^{N} \alpha_j K(x, x_j) + b.$$
In this study, we use LSSVM to learn the model based on the selected features; thus, good performance can be an indication that relevant features have been selected.
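To make the training step concrete, the following NumPy sketch (our own minimal illustration; it is not the LS-SVMlab toolbox used later in the experiments) builds the RBF kernel of (24), solves the dual linear system (29) and evaluates the estimator (30):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel of Eq. (24): exp(-||a - b||^2 / sigma^2)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / sigma**2)

def lssvm_train(X, y, gamma, sigma):
    """Solve the LSSVM dual linear system (29) for alpha and b."""
    N = len(y)
    Omega = rbf_kernel(X, X, sigma)
    K = np.block([[np.zeros((1, 1)), np.ones((1, N))],
                  [np.ones((N, 1)),  Omega + np.eye(N) / gamma]])
    sol = np.linalg.solve(K, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, sigma, X_new):
    """Function estimator of Eq. (30)."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# toy usage: fit a noisy sine and predict at one point
rng = np.random.default_rng(0)
X = np.linspace(0, 6, 80)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)
alpha, b = lssvm_train(X, y, gamma=10.0, sigma=1.0)
print(lssvm_predict(X, alpha, b, 1.0, np.array([[1.5]])))   # close to sin(1.5)
```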

2.2. Transductive Feature Selection Using Clustering-Based Sample Entropy

In this study, we propose a methodology for transductive feature selection based on the clustering-based sample entropy. The seasonal behavior of weather conditions is the intuition behind investigating transductive feature selection. Most feature selection methods take into account the relevance of the features for prediction over the whole dataset. However, in transductive feature selection, we assume that some features can be relevant in some part of the data while being irrelevant when all samples are taken into account, and vice versa. In [12], a clustering-based feature selection is deployed in weather forecasting, and it is shown that selecting features based on the clustering information can result in better performance for weather prediction.
Given that weather forecasting can be seen as a Nonlinear AutoRegressive eXogenous (NARX) model [12,38], and assuming $Y_t$ and $X_{p,t}$ for $p = 1, \ldots, d$ are the output and the $p$-th exogenous input of the system at time $t$, and $s$ is a positive integer denoting the number of steps ahead in the future to predict, the NARX model can be written as follows:
$$\hat{Y}_{t+s} = f(Y_t, Y_{t-1}, \ldots, Y_{t-L_{max}}, X_{1,t}, X_{1,t-1}, \ldots, X_{1,t-L_{max}}, \ldots, X_{d,t}, X_{d,t-1}, \ldots, X_{d,t-L_{max}}).$$
Having $X \in \mathbb{R}^{N \times d}$ and $Y \in \mathbb{R}^N$, where $X_{j,i}$ is the value of the exogenous input $j$ at time $i$ and $Y_i$ is the value of the univariate time series $Y$ at time $i$, we define $X_{train} = [X_{1:d,\,1:N-L_{max}},\, X_{1:d,\,2:N-L_{max}+1},\, \ldots,\, X_{1:d,\,L_{max}:N-1},\, Y_{1:N-L_{max}},\, \ldots,\, Y_{L_{max}:N-1}]$ and $X_{test} = [X_{1:d,\,N-L_{max}+1},\, X_{1:d,\,N-L_{max}+2},\, \ldots,\, X_{1:d,\,N},\, Y_{N-L_{max}+1:N}]$. Note that $X_{train} \in \mathbb{R}^{(N-L_{max}) \times ((d+1) \times L_{max})}$ and $X_{test} \in \mathbb{R}^{1 \times ((d+1) \times L_{max})}$.
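The construction of these lagged regressor matrices can be sketched as follows (a minimal NumPy illustration with our own naming; the ordering of the columns inside the concatenation is one possible convention):

```python
import numpy as np

def build_narx_regressors(X, Y, L_max):
    """Stack L_max lagged copies of each exogenous series and of the target,
    giving X_train of size (N - L_max) x ((d + 1) * L_max) and a single test
    regressor of size 1 x ((d + 1) * L_max), as described in Section 2.2."""
    N, d = X.shape
    blocks = []
    for lag in range(L_max):                      # lags of the exogenous inputs
        blocks.append(X[lag:N - L_max + lag, :])
    for lag in range(L_max):                      # lags of the target series
        blocks.append(Y[lag:N - L_max + lag, None])
    X_train = np.hstack(blocks)
    # regressor built from the most recent L_max days, used for the new test day
    x_test = np.concatenate([X[N - L_max:, :].ravel(), Y[N - L_max:]])
    return X_train, x_test[None, :]

# toy usage: d = 2 exogenous series, N = 12 days, L_max = 3
rng = np.random.default_rng(0)
X = rng.standard_normal((12, 2))
Y = rng.standard_normal(12)
X_train, x_test = build_narx_regressors(X, Y, 3)
print(X_train.shape, x_test.shape)   # (9, 9) and (1, 9)
```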
The diagram of the proposed method is depicted in Figure 1. As is shown, the algorithm consists of three main steps. In the first block, a clustering algorithm is applied on the data. The output of this block includes the clustering information of the training samples and the membership of the test point to each cluster. Depending on the membership values of the test point, some parts of the data-set are considered for the feature selection. Using hard clustering, only a subset of the data, which includes the samples of the cluster that the test point belongs to, is passed to the next block to find relevant features. However, using soft clustering, all data points can be used in the next block depending on the membership values of the test point to the clusters. In this study, we deploy SKSC in the clustering block. Thus, we exploit the information of all data: the data in each cluster affect the feature selection procedure based on the membership of the test point to the corresponding cluster.
In the second block, the feature selection procedure is defined by finding informative lags or delays of the input time series. Knowing the samples in each cluster and the membership values of the test point to each cluster, the feature selection using the clustering-based sample entropy is an iterative procedure that has been shown in Figure 2 and can be expressed as follows.
Let $V$ be the set of selected informative lags of the time series and $\bar{V}$ the set of candidate components; thus, similar to lag-specific transfer entropy, $V \cap \bar{V} = \emptyset$ and $V \cup \bar{V} = \{X_{1,\,1:N-L_{max}}, \ldots, X_{1,\,L_{max}:N-1}, \ldots, X_{d,\,1:N-L_{max}}, \ldots, X_{d,\,L_{max}:N-1}, Y_{1:N-L_{max}}, \ldots, Y_{L_{max}:N-1}\}$, where $X_{i,\,t_1:t_2}$ is a column vector including the values of the time series $X_i$ in the time period $t_1$ to $t_2$. The feature selection is done in an iterative procedure where $V_k$ and $\bar{V}_k$ indicate $V$ and $\bar{V}$ at iteration $k$, respectively. The algorithm starts with $V_0$ as an empty set. Each iteration can be explained in three steps:
  • For each $W \in \bar{V}_{k-1}$, a candidate set $\{W, V_{k-1}\}$ is created, and the conditional entropy $H(Y_n \,|\, W, V_{k-1})$ is computed based on the clustering-based sample entropy.
  • The component $W_k = \arg\min_W H(Y_n \,|\, W, V_{k-1})$ that minimizes the conditional entropy is selected to be added to the selected set $V$.
  • $V$ and $\bar{V}$ are updated as follows: $V_k = \{W_k, V_{k-1}\}$ and $\bar{V}_k = \bar{V}_{k-1} \setminus W_k$, and the termination condition is checked.
The procedure terminates when an irrelevant component is added to the selected set V. In this study, we utilize the surrogate data to evaluate the relevance of the selected feature as explained in Section 2.1.2.
The clustering-based sample entropy in iteration k can be expressed as follows:
  • Assuming $S^k \in \mathbb{R}^{N \times k}$ is the concatenation of the selected set of features $V_k$ for all samples, the samples can be partitioned into separate groups based on the clustering information such that $S^{k,c} \in \mathbb{R}^{N_c \times k}$ represents the selected features for the samples in cluster $c$.
  • In each cluster, for $i$ ranging from $1$ to $N_c$, $A_i^{k,c}(r)$ and $B_i^{k,c}(r)$ in the $k$- and $(k+1)$-dimensional spaces are calculated as follows:
    $$A_i^{k,c}(r) = \frac{1}{N-1}\sum_{\substack{j=1 \\ j \ne i}}^{N_c} K(S_{j,k}^c, S_{i,k}^c),$$
    $$B_i^{k,c}(r) = \frac{1}{N-1}\sum_{\substack{j=1 \\ j \ne i}}^{N_c} K([S_{j,k}^c, Y_j^c], [S_{i,k}^c, Y_i^c]),$$
    where $K(\cdot,\cdot)$ is the Heaviside kernel, which is used to indicate how many samples are within distance $r$ of $S_{i,k}^c$. In this step, the probability density function is calculated for two cases: using only the selected features, and using the selected features together with the target value. Therefore, the conditional entropy of the target value given the selected features can be calculated. Note that $\sum_c A_i^{k,c}(r) = A_i^k(r)$ and $\sum_c B_i^{k,c}(r) = B_i^k(r)$, where $A_i^k(r)$ and $B_i^k(r)$ are equivalent to (11) and (12) in the sample entropy definition.
  • Similar to sample entropy, $A^{k,c}(r)$ and $B^{k,c}(r)$ in the $k$- and $(k+1)$-dimensional spaces are defined to be equal to the averages of $A_i^{k,c}(r)$ and $B_i^{k,c}(r)$ over all possible $V_{i,k}^c$:
    $$A^{k,c}(r) = \frac{1}{N}\sum_{i=1}^{N} A_i^{k,c}(r),$$
    $$B^{k,c}(r) = \frac{1}{N}\sum_{i=1}^{N} B_i^{k,c}(r).$$
    Note that $\sum_c A^{k,c}(r) = A^k(r)$ and $\sum_c B^{k,c}(r) = B^k(r)$, where $A^k(r)$ and $B^k(r)$ are equivalent to (13) and (14) in the sample entropy definition.
  • Finally, the clustering-based sample entropy (CluSampEnt), which represents the conditional entropy in the $k$-dimensional space, is calculated as follows:
    $$CluSampEnt(k, r, N) = -\ln\left(\frac{\sum_c Memb_{test}(c)\, B^{k,c}(r)}{\sum_c Memb_{test}(c)\, A^{k,c}(r)}\right),$$
    where $Memb_{test}(c)$ is the membership value of the test point to cluster $c$. Note that $B^{k,c}(r)$ is always smaller than or equal to $A^{k,c}(r)$; thus, $CluSampEnt(k, r, N)$ has a non-negative value.
Clustering-based sample entropy can be considered as a transductive entropy measure as it gives us more information about the samples that are more similar to the given test point. Note that if the membership values of the test point to all clusters are equal, the clustering-based sample entropy is equivalent to the sample entropy; thus, all training data points have the same influence on the conditional entropy.
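For illustration, the sketch below (our own NumPy rendering of (32)–(36), with assumed variable names; the cluster labels and membership values are taken as given) computes the clustering-based sample entropy of a target series given a matrix of selected features:

```python
import numpy as np

def clu_samp_ent(S, y, labels, memb_test, r):
    """Clustering-based sample entropy, Eq. (36): per-cluster neighbour counts
    A^{k,c}, B^{k,c} (Eqs. (32)-(35)) weighted by the membership values of the
    test point to the clusters."""
    Z = np.column_stack([S, y])            # selected features plus the target
    N = len(y)
    clusters = np.unique(labels)
    A_c = np.zeros(len(clusters))
    B_c = np.zeros(len(clusters))
    for ci, c in enumerate(clusters):
        idx = np.where(labels == c)[0]
        Sc, Zc = S[idx], Z[idx]
        for i in range(len(idx)):
            dS = np.max(np.abs(Sc - Sc[i]), axis=1)     # max-norm distances
            dZ = np.max(np.abs(Zc - Zc[i]), axis=1)
            A_c[ci] += (np.sum(dS <= r) - 1) / (N - 1)  # Eq. (32)
            B_c[ci] += (np.sum(dZ <= r) - 1) / (N - 1)  # Eq. (33)
    A_c /= N                                             # Eq. (34)
    B_c /= N                                             # Eq. (35)
    w = np.asarray(memb_test, dtype=float)
    return -np.log((w @ B_c) / (w @ A_c))                # Eq. (36)

# toy usage: two clusters of 100 samples, one informative feature
rng = np.random.default_rng(0)
S = rng.standard_normal((200, 1))
y = 0.8 * S[:, 0] + 0.1 * rng.standard_normal(200)
labels = np.repeat([0, 1], 100)
print(clu_samp_ent(S, y, labels, memb_test=[0.8, 0.2], r=0.2))
```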
Finally, in the last block in Figure 1, a learner is used to model the data using the selected features. In this study, we use LSSVM for learning the data based on the selected features. A better performance on prediction indicates that more relevant features have been selected.

3. Results

3.1. Experiments on the Simulated Dataset

In this section, we have deployed the proposed methods to find the relevant delays of variables in linear and nonlinear synthetic datasets. In addition, we have compared the proposed method with three other methodologies. The first one is Automatic Relevance Determination (ARD), which is a popular feature selection approach in a Bayesian framework. We have used the implementation of ARD in the framework of LSSVM (LSSVM Toolbox Version 1.8, KU Leuven, Leuven, Belgium) [39]. This method involves three levels of inference: in the first level, the model parameters (the primal weights and bias) are estimated based on the prior, which corresponds to the sum of the squared error and the regularization parameters; in the second level, the hyperparameters, which are utilized to avoid over-fitting and under-fitting, are estimated; and in the third level, the kernel parameter estimation and the model comparison are done [37]. The second approach deploys partial conditioning based on Mutual Information (MI) as an entropic measure. In this method, to select the first feature, the mutual information of each feature with the target value is evaluated, and the one that leads to the maximum value is added to the selected set. In the next iterations, the feature that jointly with the previously selected features has the maximum mutual information with the target value is added to the selected set. The procedure continues until a pre-defined number of features is selected [18]. Finally, we utilized Least Absolute Shrinkage and Selection Operator (LASSO), which is a popular feature selection approach proposed by [40]. LASSO is a regularization method that produces sparse models by imposing an L1-norm penalty on the regression coefficients. Note that both proposed methods and the MI-based method are model-free approaches, while ARD and LASSO are model-based methods.
We have created 10 realizations of all systems for 1000 time steps and with random initialization. To evaluate the performance of different methods for a linear system, consider the following system:
$$u_t = 0.9\, u_{t-1} - 0.6\, u_{t-2} - 2.1 + e_{u,t}, \quad \text{for } 1 \le t \le 1000,$$
$$y_t = 0.7\, y_{t-1} + 0.8\, u_{t-3} + 1.8 + e_{y,t}, \quad \text{for } 1 \le t \le 1000,$$
where $e_{u,t}$ and $e_{y,t}$ are independent white noise processes with zero mean and variance 0.5. As was previously mentioned, in this paper, we assume that the data in different clusters are a function of different variables. Therefore, we define the localized linear system example as follows:
$$\text{Cluster 1 } (1 \le t \le 500): \quad u_t = 0.9\, u_{t-1} - 0.6\, u_{t-2} + e_{u,t}, \quad y_t = 0.7\, y_{t-1} + 0.8\, u_{t-3} + e_{y,t};$$
$$\text{Cluster 2 } (501 \le t \le 1000): \quad u_t = 0.9\, u_{t-1} - 0.6\, u_{t-2} + e_{u,t}, \quad y_t = 0.81\, y_{t-2} + 0.95\, u_{t-4} + e_{y,t}.$$
Note that in the first 500 points, $y_t$ is a function of $y_{t-1}$ and $u_{t-3}$, while in the next 500 points, it depends on $y_{t-2}$ and $u_{t-4}$.
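A minimal simulation of the localized linear system (38), under assumed initial conditions and simplified boundary handling, could look as follows:

```python
import numpy as np

def simulate_localized_linear(T=1000, noise_std=np.sqrt(0.5), seed=0):
    """Simulate the localized linear system (38): for the first 500 points,
    y_t depends on y_{t-1} and u_{t-3}; afterwards, on y_{t-2} and u_{t-4}."""
    rng = np.random.default_rng(seed)
    e_u = noise_std * rng.standard_normal(T)
    e_y = noise_std * rng.standard_normal(T)
    u = np.zeros(T)
    y = np.zeros(T)
    for t in range(4, T):
        u[t] = 0.9 * u[t - 1] - 0.6 * u[t - 2] + e_u[t]
        if t < 500:                                        # cluster 1
            y[t] = 0.7 * y[t - 1] + 0.8 * u[t - 3] + e_y[t]
        else:                                              # cluster 2
            y[t] = 0.81 * y[t - 2] + 0.95 * u[t - 4] + e_y[t]
    return u, y

u, y = simulate_localized_linear()
print(u.shape, y.shape)   # (1000,) (1000,)
```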
In the rest of the paper, we refer to Systems (37) and (38) as the global and localized linear system, respectively. Similar to the linear systems, consider a nonlinear global system defined as follows:
$$u_t = 3.4\, u_{t-1}(1 - u_{t-1}^2)\exp(-u_{t-1}^2) + e_{u,t}, \quad \text{for } 1 \le t \le 1000,$$
$$y_t = 3.4\, y_{t-2}(1 - y_{t-2}^2)\exp(-y_{t-2}^2) + 1.5\, u_{t-1}^2 + e_{y,t}, \quad \text{for } 1 \le t \le 1000.$$
We have made some changes in the global system so that the underlying processes for the first and the second 500 time steps are different; the localized nonlinear system is defined as follows:
$$\text{Cluster 1 } (1 \le t \le 500): \quad u_t = 3.4\, u_{t-1}(1 - u_{t-1}^2)\exp(-u_{t-1}^2) + e_{u,t}, \quad y_t = 3.4\, y_{t-1}(1 - y_{t-1}^2)\exp(-y_{t-1}^2) + 3.9\, u_{t-3}^2 + e_{y,t};$$
$$\text{Cluster 2 } (501 \le t \le 1000): \quad u_t = 1.4\, u_{t-1}(1 - u_{t-1}^2)\exp(-u_{t-1}^2) + e_{u,t}, \quad y_t = 4.4\, y_{t-2}(1 - y_{t-2}^2)\exp(-y_{t-2}^2) - 3.9\, u_{t-1}^2 + e_{y,t}.$$
Note that in the first 500 points, $y_t$ is a function of $y_{t-1}$ and $u_{t-3}$, while in the next 500 points, it depends on $y_{t-2}$ and $u_{t-1}$.
Considering $L_{max} = 5$ and $r = 0.1$, the occurrence of the first two relevant features selected by the global and transductive feature selection approaches, together with the ARD, MI-based and LASSO approaches, is shown in Table 1. As is shown, all methods perform equally well for the global linear system, and they are competitive in the case of the global nonlinear system. Note that in all cases, the most relevant features for the prediction of $y_t$ are selected.
In Table 2 and Table 3, the occurrence of the first two selected features is presented for two test samples with different membership values to the clusters in the localized systems; that is, Table 2 shows the occurrence when the test point membership values to the first and second cluster are 0.8 and 0.2, respectively, so the dependency structure at the test point is closer to the underlying model of the first cluster. In Table 3, the membership values to the clusters are 0.2 and 0.8, so the pattern at the test point is closer to the underlying model of the second cluster. The results reveal that in this scenario, the proposed transductive feature selection approach outperforms the other approaches, as it selects the relevant features more often. This is expected, as the other methods select the features under the assumption that all data points have the same effect.
In the rest of the experiment part, we evaluate the performance of the proposed global and transductive feature selection approaches on the application of weather forecasting.

3.2. Weather Dataset

In this study, data have been gathered from the Weather Underground website and include real measurements of weather variables such as minimum and maximum temperature, dew point, humidity and wind speed for 10 cities including Brussels, Antwerp, Liege, Amsterdam, Eindhoven, Dortmund, London, Frankfurt, Groningen and Dublin, as shown in Figure 3.
In order to assess the performance of the proposed methods in different weather conditions, the performance is reported on two test sets in different time periods: (i) from mid-November 2013 to mid-December 2013 (November/December) and (ii) from mid-April 2014 to mid-May 2014 (April/May).
The data cover the time period from the beginning of 2007 to mid-2014 and contain 180 measured weather variables for each day. Note that the number of training samples is different for each test data point: it is the number of days from the beginning of 2007 until the day before the test date.

3.3. Weather Forecasting Experiments

In this study, the experiments are done for minimum and maximum temperature prediction in Brussels for 1–6 days ahead. For both SKSC and LSSVM, the dual problem is implemented, and there are different parameters that need to be tuned: the number of clusters and the kernel bandwidth in SKSC are tuned using AMS, considering 60% of the data for training and 40% for validation; furthermore, the regularization parameter and the kernel bandwidth in LSSVM are tuned using 10-fold cross-validation. In this study, we consider $L_{max}$ to be 10 and generate 50 realizations of the shifted surrogate data. Three different values $[0.7, 1, 1.6]$ have been considered for the $r$ value in the Heaviside kernel, and the results are reported for all of them.
Figure 4 shows the AMS value for different numbers of clusters, and it can be seen that the maximum value of AMS is when the data are divided into two clusters. In addition, the clusters have been depicted in Figure 4, and one may say that the clusters indicate different patterns in different periods of a year. Note that having one cluster means that the same weights are considered for all data points, and hence, it is equivalent to the global feature selection approach.
In order to compare the accuracy of the data-driven approaches with the state-of-the-art methods in weather forecasting, the performances of the black-box methods are compared with that of the Weather Underground company. In addition, we utilize the global and transductive feature selection to identify relevant delays of each weather variable. Note that there are 180 weather variables in the dataset, and for each of them, we consider $L_{max}$ to be 10. In this case, there are 1800 features, which is equal to $180 \times L_{max}$. Furthermore, we compare the results with the case in which no feature selection is performed.
In Table 4 and Table 5, the average Mean Absolute Error (MAE) of the prediction using the global and transductive feature selection for the minimum and maximum temperature on the two test sets is compared. For each value of $r$, the method with the better performance is shown in bold. In most of the cases, transductive feature selection results in a lower MAE, which can be an indication that selecting features transductively identifies relevant features better than the global approach.
In Figure 5 and Figure 6, the performance of the Weather Underground prediction is compared with those of the black-box methods for both test sets. The black-box approaches utilize LSSVM when: (1) there is no feature selection; (2) global feature selection is deployed; and (3) transductive feature selection is used. Note that in the case of utilizing feature selection, the $r$ value is tuned using cross-validation. As is shown, the data-driven approaches are competitive with the state-of-the-art methods in weather forecasting. In the November/December test set, the black-box methods outperform Weather Underground, while Weather Underground shows more reliable weather forecasting in April/May. This can be due to the lower variance in the observations of the November/December test set. In addition, the performances of the black-box methods when feature selection is employed are competitive with the case in which there is no feature selection.
In order to analyze the reduction in the complexity of the methods in terms of the number of features, Figure 7 depicts the average number of selected features for different values of r. Obviously, increasing the value of r results in a larger number of selected features. This phenomenon is expected as with a larger r value, it takes more iterations for the conditional entropy to decay to zero; thus, more features are selected at the end of the feature selection procedure.
As was mentioned, there are 180 weather variables, and considering L m a x to be 10, the total number of features is 1800. Figure 7 suggests that in all cases, there is more than a 97 % reduction in the number of features. Taking Figure 5 and Figure 6 into account, it can be seen that although there is a huge reduction in the number of features, the results are competitive with the case of using all features. In addition, we have investigated the prediction intervals as described in [41], and we have observed that even though we are reducing the number of features significantly, the prediction intervals are competitive, which indicates that the level of uncertainty is not increased.
In addition to LSSVM as a NARX machine learning approach, we have investigated the impact of the feature selection on a linear approach such as the AutoRegressive with eXogenous input (ARX) model. The overall prediction performance on both test sets using the ARX model in three cases, (1) without feature selection, (2) with features selected by the global approach and (3) with features selected by the transductive approach, is presented in Figure 8. As is shown, the feature selection methods can improve the performance of the linear models significantly even though very few features are selected.
The comparison between the performance of the linear model (ARX) and the nonlinear model (LSSVM), while the proposed feature selection methods are used, is depicted in Figure 9. As is shown, the performances of both models are competitive, and this may be due to the efficient feature selection approach.
Moreover, since the proposed method benefits from the clustering information to find informative features, we have compared the results on the weather dataset with those of the method proposed in [12], which also deploys clustering information to improve the feature selection performance. The mean absolute error of the minimum and maximum temperature prediction is shown in Figure 10. The experimental results reveal that while the number of features selected using the method in [12] is three to four times larger than the number of features selected by the proposed transductive feature selection approach, the performances of both methods are competitive.
Note that there are some major differences between the proposed method and the one in [12]. While the proposed transductive feature selection method in this study is model-free, the one in [12] is a linear feature selection approach. In addition, in the proposed methods, depending on the membership values of each test point, the selected features can be different, while in [12], the features are selected per cluster, and the membership values of the test point affect the prediction.
In order to investigate the influence of each neighboring city on the temperature of Brussels, Figure 11 and Figure 12 show the percentage of the selected features from different cities and different delays. As is depicted, in both global and transductive feature selection, for short-term prediction (one day ahead), closer cities such as Brussels itself, Dortmund and Amsterdam seem more relevant, while for long-term prediction (six days ahead), farther cities such as Dublin, London and Groningen are more important. Moreover, in short-term prediction, short histories (smaller delays) are more relevant, while for the long-term one, larger delays should also be taken into account.

4. Discussion

In this study, we proposed a feature selection approach based on entropy measures in an application of weather forecasting. Sample entropy was used to measure the conditional entropy of the target value, i.e., the minimum and maximum temperature in Brussels, given a set of selected features consisting of weather variables. This set was formed by forward selection of the time series that affect the target time series. The influence of the time series on each other is measured using the conditional entropy; i.e., a smaller conditional entropy indicates a higher predictive power.
The results suggest that selecting the informative time series that are affecting the target value can reduce the complexity of the model in terms of the number of features significantly. However, the performance of the model remains good even when there is a smaller number of variables in the model. Surprisingly, in many cases, the performance is improved. In addition, a smaller number of features can be beneficial for the visualization task in a complex network such as climate data.
The results of these experiments support the conclusion of [12], in which it is shown that taking into account the local structure of the data can result in better performance in weather forecasting. In addition, it is shown that in different periods of the year, i.e., under different weather conditions, the influence of the neighboring cities on the weather variables of the target city can be different. For example, in Figure 11 and Figure 12, in the case of six-day-ahead prediction, London seems more influential in the April/May test set, while in the November/December test set, Frankfurt is more informative.
A major drawback of the proposed transductive feature selection method, which uses the clustering-based sample entropy, is the fact that for each test point the whole procedure should be done independently. In daily weather forecasting, in each day, there is only one test point for which the weather conditions for 1–6 days ahead should be predicted. Considering the fact that in this application, the trained model should be updated on a daily basis, the transductive approach does not have higher complexity than the global one. However, in some datasets with more than one test point, the transductive feature selection becomes time consuming. One possible solution for this problem can be clustering the test points. Since the test points in each cluster are similar to each other, their membership values to the clusters in the training data should be similar, as well. Therefore, transductive feature selection can be done for each cluster of the test points independently. Note that the proposed method is applicable for any time series prediction, such as climate, financial or medical systems, since it investigates the impact of regressor time series on the target one.

5. Conclusions

In this study, we investigated a feature selection approach based on entropy measures in an application of weather forecasting. We deployed the sample entropy to evaluate the conditional entropy of the target value when a set of selected features is given. A forward selection approach is followed; thus, in each iteration, the variable that minimizes the conditional entropy is added to the set of selected features. In addition, considering the local structure of the data, we proposed the clustering-based sample entropy, which follows the sample entropy definition except that the clustering information of the training data and the membership of the test point to the clusters are taken into account to perform the feature selection.
The performances of the black-box methods are compared with that of the Weather Underground company, and the experiments show that data-driven weather forecasting is competitive with the state-of-the-art methods in this field. The results reveal that utilizing the proposed feature selection methodologies leads to a significant decrease in the number of features, while the performance remains adequate. Moreover, the experiments suggest that transductive feature selection can improve the performance of finding relevant variables.

Acknowledgments

EU: The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013)/ERC AdG A-DATADRIVE-B (290923). This paper reflects only the authors’ views, and the Union is not liable for any use that may be made of the information contained within it. Research Council KU Leuven: CoE PFV/10/002 (OPTEC), BIL12/11T; Ph.D./Postdoc grants. Flemish Government: FWO projects G.0377.12 (Structured systems), G.088114N (Tensor-based data similarity), G0A4917N (Deep restricted kernel machines); Ph.D./Postdoc grants. iMinds Medical Information Technologies SBO 2015. IWT: POM II SBO 100031. Belgian Federal Science Policy Office: IUAP P7/19 (DYSCO, Dynamical systems, control and optimization, 2012–2017).

Author Contributions

In this work, Zahra Karevan and Johan A. K. Suykens conceived and designed the experiments; Zahra Karevan performed the experiments; Zahra Karevan and Johan A. K. Suykens analyzed the data and contributed analysis tools and Zahra Karevan wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Anand, K.; Bianconi, G. Entropy measures for networks: Toward an information theory of complex topologies. Phys. Rev. E 2009, 80, 045102.
2. Sandoval, L. Structure of a global network of financial companies based on transfer entropy. Entropy 2014, 16, 4443–4482.
3. Richman, J.S.; Moorman, J.R. Physiological time series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, H2039–H2049.
4. Shuangcheng, L.; Qiaofu, Z.; Shaohong, W.; Erfu, D. Measurement of climate complexity using sample entropy. Int. J. Climatol. 2006, 26, 2131–2139.
5. Balasis, G.; Donner, R.V.; Potirakis, S.M.; Runge, J.; Papadimitriou, C.; Daglis, I.A.; Eftaxias, K.; Kurths, J. Statistical mechanics and information-theoretic perspectives on complexity in the Earth system. Entropy 2013, 15, 4844–4888.
6. Wang, Z.; Li, Y.; Childress, A.R.; Detre, J.A. Brain entropy mapping using fMRI. PLoS ONE 2014, 9, e89948.
7. Porta, A.; Baselli, G.; Lombardi, F.; Montano, N.; Malliani, A.; Cerutti, S. Conditional entropy approach for the evaluation of the coupling strength. Biol. Cybern. 1999, 81, 119–129.
8. Faes, L.; Marinazzo, D.; Montalto, A.; Nollo, G. Lag-specific transfer entropy as a tool to assess cardiovascular and cardiorespiratory information transfer. IEEE Trans. Biomed. Eng. 2014, 61, 2556–2568.
9. Bauer, P.; Thorpe, A.; Brunet, G. The quiet revolution of numerical weather prediction. Nature 2015, 525, 47–55.
10. Brunsdon, C.; Fotheringham, S.; Charlton, M. Geographically weighted regression. J. R. Stat. Soc. Ser. D 1998, 47, 431–443.
11. Bottou, L.; Vapnik, V. Local learning algorithms. Neural Comput. 1992, 4, 888–900.
12. Karevan, Z.; Suykens, J.A.K. Clustering-based feature selection for black-box weather temperature prediction. In Proceedings of the 2016 International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016.
13. Karevan, Z.; Feng, Y.; Suykens, J.A.K. Moving Least Squares Support Vector Machines for weather temperature prediction. In Proceedings of the European Symposium on Artificial Neural Networks, Brugge, Belgium, 27–29 April 2016; pp. 611–616.
14. Hmamouche, Y.; Casali, A.; Lakhal, L. Causality based feature selection approach for multivariate time series forecasting. In Proceedings of the International Conference on Advances in Databases, Knowledge, and Data Applications, Barcelona, Spain, 21–25 May 2017.
15. Van Dijck, G.; Van Hulle, M.M. Speeding up the wrapper feature subset selection in regression by mutual information relevance and redundancy analysis. In Proceedings of the International Conference on Artificial Neural Networks, Athens, Greece, 10–14 September 2006; pp. 31–40.
16. Ramírez-Gallego, S.; Mouriño-Talín, H.; Martínez-Rego, D.; Bolón-Canedo, V.; Benítez, J.M.; Alonso-Betanzos, A.; Herrera, F. An Information Theory-Based Feature Selection Framework for Big Data under Apache Spark. IEEE Trans. Syst. Man Cybern. Syst. 2017.
17. Wang, Y.; Wang, J.; Liao, H.; Chen, H. An efficient semi-supervised representatives feature selection algorithm based on information theory. Pattern Recognit. 2017, 61, 511–523.
18. Marinazzo, D.; Pellicoro, M.; Stramaglia, S. Causal information approach to partial conditioning in multivariate data sets. Comput. Math. Methods Med. 2012, 2012, 303601.
19. Wang, H.; Wang, G.; Zeng, X.; Peng, S. Online Streaming Feature Selection Based on Conditional Information Entropy. In Proceedings of the 2017 IEEE International Conference on Big Knowledge (ICBK), Hefei, China, 9–10 August 2017; pp. 230–235.
20. Weather Underground. Available online: www.wunderground.com (accessed on 5 April 2018).
21. Shannon, C.E. A mathematical theory of communication. ACM Sigmob. Mob. Comput. Commun. Rev. 2001, 5, 3–55.
22. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 2012.
23. Xiong, W.; Faes, L.; Ivanov, P.C. Entropy measures, entropy estimators, and their performance in quantifying complex dynamics: Effects of artifacts, nonstationarity, and long-range correlations. Phys. Rev. E 2017, 95, 062114.
24. Kolmogorov, A.N. Entropy per unit time as a metric invariant of automorphisms. Dokl. Akad. Nauk SSSR 1959, 124, 754–755.
25. Sinai, Y.G. On the notion of entropy of a dynamical system. Dokl. Akad. Nauk SSSR 1959, 124, 768–771.
26. Keller, K.; Unakafov, A.M.; Unakafova, V.A. Ordinal patterns, entropy, and EEG. Entropy 2014, 16, 6212–6239.
27. Ebeling, W. Entropy, information and predictability of evolutionary systems. World Futures J. Gen. Evol. 1997, 50, 467–481.
28. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076.
29. Runge, J.; Heitzig, J.; Petoukhov, V.; Kurths, J. Escaping the curse of dimensionality in estimating multivariate transfer entropy. Phys. Rev. Lett. 2012, 108, 258701.
30. Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econom. J. Econom. Soc. 1969, 37, 424–438.
31. Amblard, P.O.; Michel, O.J. The relation between Granger causality and directed information theory: A review. Entropy 2012, 15, 113–143.
32. Faes, L.; Nollo, G.; Porta, A. Information-based detection of nonlinear Granger causality in multivariate processes via a nonuniform embedding technique. Phys. Rev. E 2011, 83, 051112.
33. Langone, R.; Mall, R.; Suykens, J.A.K. Soft Kernel Spectral clustering. In Proceedings of the International Joint Conference on Neural Networks, Dallas, TX, USA, 4–9 August 2013; pp. 1–8.
34. Alzate, C.; Suykens, J.A.K. Multiway spectral clustering with out-of-sample extensions through weighted kernel PCA. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 335–347.
35. Mercer, J. Functions of positive and negative type, and their connection with the theory of integral equations. Philos. Trans. R. Soc. Lond. Ser. A 1909, 209, 415–446.
36. Suykens, J.A.K.; Vandewalle, J. Least Squares Support Vector Machine Classifiers. Neural Process. Lett. 1999, 9, 293–300.
37. Suykens, J.A.K.; Van Gestel, T.; De Brabanter, J.; De Moor, B.; Vandewalle, J. Least Squares Support Vector Machines; World Scientific: Singapore, 2002.
38. Leontaritis, I.; Billings, S.A. Input-output parametric models for non-linear systems part I: Deterministic non-linear systems. Int. J. Control 1985, 41, 303–328.
39. De Brabanter, K.; Karsmakers, P.; Ojeda, F.; Alzate, C.; De Brabanter, J.; Pelckmans, K.; De Moor, B.; Vandewalle, J.; Suykens, J.A.K. LS-SVMlab Toolbox User’s Guide: Version 1.8. 2011. Available online: https://www.esat.kuleuven.be/sista/lssvmlab/ (accessed on 10 April 2018).
40. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288.
41. De Brabanter, K.; De Brabanter, J.; Suykens, J.A.K.; De Moor, B. Approximate confidence and prediction intervals for least squares support vector regression. IEEE Trans. Neural Netw. 2011, 22, 110–120.
Figure 1. The flowchart of transductive feature selection.
Figure 2. Feature selection using clustering-based sample entropy.
Figure 3. Weather data stations.
Figure 4. (a) Average Membership Strength (AMS) value for different numbers of clusters; (b) clustered training data.
Figure 5. Mean Absolute Error (MAE) of maximum (top) and minimum (bottom) temperature prediction in the test set November/December. LSSVM: Least Squares Support Vector Machines.
Figure 6. MAE of maximum (top) and minimum (bottom) temperature prediction in the test set April/May.
Figure 7. Average number of selected features for the November/December and April/May test set for minimum (left) and maximum (right) temperature using different r values.
Figure 8. Mean Absolute Error (MAE) of minimum (top) and maximum (bottom) temperature prediction of the AutoRegressive with eXogenous input (ARX) model in both test sets.
Figure 9. MAE of minimum (top) and maximum (bottom) temperature prediction of the ARX and LSSVM models while the proposed feature selection methods are deployed in both test sets.
Figure 10. MAE of minimum (top) and maximum (bottom) temperature prediction of the proposed transductive feature selection and the method in [12] in both test sets.
Figure 11. The average percentage of the selected features per delay in each city in the test set November/December.
Figure 12. The average percentage of the selected features per delay in each city in the test set April/May.
Table 1. Number of times that the corresponding feature is selected using 10 different initial values (test point in the global system (37), (39)).

Linear system:
| Method | u(t-1) | y(t-1) | u(t-2) | y(t-2) | u(t-3) | y(t-3) | u(t-4) | y(t-4) | u(t-5) | y(t-5) |
|---|---|---|---|---|---|---|---|---|---|---|
| Global-FS | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 |
| Transductive-FS | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 |
| ARD [37] | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 |
| MI-based [18] | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 |
| LASSO [40] | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 |

Nonlinear system:
| Method | u(t-1) | y(t-1) | u(t-2) | y(t-2) | u(t-3) | y(t-3) | u(t-4) | y(t-4) | u(t-5) | y(t-5) |
|---|---|---|---|---|---|---|---|---|---|---|
| Global-FS | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 |
| Transductive-FS | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 |
| ARD [37] | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 |
| MI-based [18] | 10 | 4 | 1 | 2 | 1 | 0 | 0 | 2 | 0 | 0 |
| LASSO [40] | 10 | 3 | 0 | 1 | 2 | 0 | 0 | 6 | 0 | 0 |
FS: Feature Selection; ARD: Automatic Relevance Determination; MI: Mutual Information; LASSO: Least Absolute Shrinkage and Selection Operator.
Table 2. Number of times that the corresponding feature is selected using 10 different initial values (test point in the localized systems (38), (40): membership values to clusters [0.8, 0.2]).

Linear system:
| Method | u(t-1) | y(t-1) | u(t-2) | y(t-2) | u(t-3) | y(t-3) | u(t-4) | y(t-4) | u(t-5) | y(t-5) |
|---|---|---|---|---|---|---|---|---|---|---|
| Global-FS | 0 | 9 | 0 | 2 | 7 | 0 | 1 | 1 | 0 | 0 |
| Transductive-FS | 0 | 9 | 0 | 0 | 10 | 1 | 0 | 0 | 0 | 0 |
| ARD [37] | 0 | 0 | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 |
| MI-based [18] | 0 | 5 | 0 | 7 | 3 | 0 | 5 | 0 | 0 | 0 |
| LASSO [40] | 0 | 10 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 |

Nonlinear system:
| Method | u(t-1) | y(t-1) | u(t-2) | y(t-2) | u(t-3) | y(t-3) | u(t-4) | y(t-4) | u(t-5) | y(t-5) |
|---|---|---|---|---|---|---|---|---|---|---|
| Global-FS | 8 | 1 | 0 | 9 | 2 | 0 | 0 | 0 | 0 | 0 |
| Transductive-FS | 0 | 10 | 0 | 0 | 9 | 0 | 0 | 1 | 0 | 0 |
| ARD [37] | 1 | 0 | 0 | 9 | 10 | 0 | 0 | 0 | 0 | 0 |
| MI-based [18] | 1 | 0 | 0 | 9 | 0 | 7 | 0 | 1 | 0 | 2 |
| LASSO [40] | 0 | 0 | 0 | 10 | 6 | 1 | 0 | 0 | 0 | 3 |
Table 3. Number of times for which the corresponding feature is selected using 10 different initial values (test point in the localized system (38), (40): membership values to clusters [0.2, 0.8]).

Linear system:
| Method | u(t-1) | y(t-1) | u(t-2) | y(t-2) | u(t-3) | y(t-3) | u(t-4) | y(t-4) | u(t-5) | y(t-5) |
|---|---|---|---|---|---|---|---|---|---|---|
| Global-FS | 0 | 9 | 0 | 2 | 7 | 0 | 1 | 1 | 0 | 0 |
| Transductive-FS | 0 | 0 | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 |
| ARD [37] | 0 | 0 | 0 | 10 | 0 | 0 | 10 | 0 | 0 | 0 |
| MI-based [18] | 0 | 5 | 0 | 7 | 3 | 0 | 5 | 0 | 0 | 0 |
| LASSO [40] | 0 | 10 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 |

Nonlinear system:
| Method | u(t-1) | y(t-1) | u(t-2) | y(t-2) | u(t-3) | y(t-3) | u(t-4) | y(t-4) | u(t-5) | y(t-5) |
|---|---|---|---|---|---|---|---|---|---|---|
| Global-FS | 8 | 1 | 0 | 9 | 2 | 0 | 0 | 0 | 0 | 0 |
| Transductive-FS | 10 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 |
| ARD [37] | 1 | 0 | 0 | 9 | 10 | 0 | 0 | 0 | 0 | 0 |
| MI-based [18] | 1 | 0 | 0 | 9 | 0 | 7 | 0 | 1 | 0 | 2 |
| LASSO [40] | 0 | 0 | 0 | 10 | 6 | 1 | 0 | 0 | 0 | 3 |
Table 4. Mean Absolute Error (MAE) for minimum and maximum temperature prediction in November/December.

| Step ahead | Temp. | Global-FS (r = 0.7) | Transductive-FS (r = 0.7) | Global-FS (r = 1) | Transductive-FS (r = 1) | Global-FS (r = 1.6) | Transductive-FS (r = 1.6) |
|---|---|---|---|---|---|---|---|
| 1 | Min | 1.48 ± 0.001 | 1.54 ± 0.001 | 1.52 ± 0.001 | 1.45 ± 0.004 | 1.68 ± 0.001 | 1.50 ± 0.001 |
| 1 | Max | 1.76 ± 0.001 | 1.73 ± 0.003 | 1.42 ± 0.001 | 1.47 ± 0.003 | 1.45 ± 0.003 | 1.39 ± 0.001 |
| 2 | Min | 2.15 ± 0.0001 | 1.95 ± 0.004 | 1.98 ± 0.001 | 1.77 ± 0.01 | 1.76 ± 0.001 | 1.89 ± 0.001 |
| 2 | Max | 2.13 ± 0.001 | 1.72 ± 0.002 | 1.88 ± 0.003 | 1.80 ± 0.001 | 1.73 ± 0.003 | 1.49 ± 0.02 |
| 3 | Min | 2.07 ± 0.005 | 2.00 ± 0.003 | 1.90 ± 0.001 | 1.98 ± 0.01 | 2.16 ± 0.001 | 2.33 ± 0.004 |
| 3 | Max | 1.77 ± 0.002 | 1.88 ± 0.03 | 2.13 ± 0.001 | 2.33 ± 0.2 | 2.14 ± 0.001 | 1.90 ± 0.003 |
| 4 | Min | 1.59 ± 0.003 | 1.80 ± 0.002 | 2.21 ± 0.001 | 2.05 ± 0.01 | 2.22 ± 0.001 | 1.96 ± 0.02 |
| 4 | Max | 2.37 ± 0.001 | 2.25 ± 0.001 | 2.18 ± 0.003 | 2.15 ± 0.001 | 1.54 ± 0.002 | 2.06 ± 0.001 |
| 5 | Min | 2.37 ± 0.001 | 2.21 ± 0.001 | 2.20 ± 0.001 | 2.25 ± 0.001 | 2.46 ± 0.001 | 2.29 ± 0.004 |
| 5 | Max | 2.19 ± 0.001 | 1.94 ± 0.01 | 1.92 ± 0.001 | 2.29 ± 0.2 | 1.79 ± 0.001 | 1.89 ± 0.05 |
| 6 | Min | 2.40 ± 0.006 | 2.31 ± 0.005 | 1.66 ± 0.001 | 2.19 ± 0.02 | 2.17 ± 0.001 | 2.30 ± 0.1 |
| 6 | Max | 1.95 ± 0.001 | 1.93 ± 0.002 | 2.42 ± 0.001 | 1.82 ± 0.005 | 2.36 ± 0.004 | 1.71 ± 0.01 |
Table 5. MAE for minimum and maximum temperature prediction in April/May.

| Step ahead | Temp. | Global-FS (r = 0.7) | Transductive-FS (r = 0.7) | Global-FS (r = 1) | Transductive-FS (r = 1) | Global-FS (r = 1.6) | Transductive-FS (r = 1.6) |
|---|---|---|---|---|---|---|---|
| 1 | Min | 1.65 ± 0.001 | 1.59 ± 0.001 | 1.74 ± 0.001 | 1.46 ± 0.001 | 1.63 ± 0.001 | 1.53 ± 0.001 |
| 1 | Max | 2.09 ± 0.001 | 2.04 ± 0.001 | 2.23 ± 0.001 | 2.23 ± 0.001 | 2.31 ± 0.001 | 2.18 ± 0.003 |
| 2 | Min | 2.01 ± 0.001 | 2.20 ± 0.002 | 2.09 ± 0.001 | 1.98 ± 0.002 | 2.06 ± 0.001 | 1.98 ± 0.002 |
| 2 | Max | 2.31 ± 0.001 | 2.18 ± 0.005 | 2.09 ± 0.001 | 2.29 ± 0.002 | 2.12 ± 0.001 | 2.25 ± 0.001 |
| 3 | Min | 2.11 ± 0.001 | 2.29 ± 0.004 | 2.27 ± 0.001 | 2.03 ± 0.002 | 2.12 ± 0.001 | 2.12 ± 0.01 |
| 3 | Max | 2.52 ± 0.001 | 2.48 ± 0.004 | 2.83 ± 0.002 | 2.56 ± 0.001 | 2.47 ± 0.001 | 2.40 ± 0.002 |
| 4 | Min | 3.01 ± 0.001 | 2.69 ± 0.001 | 2.59 ± 0.004 | 2.63 ± 0.001 | 2.01 ± 0.001 | 2.25 ± 0.003 |
| 4 | Max | 2.39 ± 0.004 | 2.10 ± 0.001 | 2.32 ± 0.004 | 2.42 ± 0.03 | 2.49 ± 0.001 | 2.28 ± 0.003 |
| 5 | Min | 2.90 ± 0.001 | 2.98 ± 0.002 | 2.50 ± 0.001 | 2.40 ± 0.002 | 2.87 ± 0.002 | 2.80 ± 0.001 |
| 5 | Max | 2.56 ± 0.004 | 2.39 ± 0.001 | 2.62 ± 0.005 | 2.54 ± 0.005 | 2.27 ± 0.001 | 2.37 ± 0.001 |
| 6 | Min | 2.74 ± 0.003 | 2.59 ± 0.001 | 2.66 ± 0.001 | 2.70 ± 0.004 | 2.80 ± 0.001 | 2.57 ± 0.001 |
| 6 | Max | 2.25 ± 0.02 | 2.35 ± 0.001 | 1.96 ± 0.002 | 2.64 ± 0.008 | 2.26 ± 0.005 | 1.91 ± 0.002 |
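Each entry in Tables 4 and 5 is reported as the mean ± standard deviation of the MAE over repeated runs. A minimal illustration of how such an entry can be obtained is given below; the numbers and the helper name mae are made up for demonstration and are not taken from the experiments.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error over the test days of a single run."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# made-up example: true temperatures and the predictions of two repeated runs
y_true = [3.2, 4.1, 2.8, 5.0, 6.3]
runs = [[2.9, 4.5, 3.1, 4.2, 6.0],
        [3.0, 4.4, 3.0, 4.4, 6.1]]

errors = [mae(y_true, y_hat) for y_hat in runs]
print(f"{np.mean(errors):.2f} ± {np.std(errors):.3f}")  # reported as mean ± std
```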
