Improved Permutation Entropy for Measuring Complexity of Time Series under Noisy Condition

Measuring the complexity of observed time series plays an important role in understanding the characteristics of the system under study. Permutation entropy (PE) is a powerful tool for complexity analysis, but it has some limitations: the amplitude information is discarded; equalities (i.e., equal values in the analysed signal) are not properly dealt with; and its performance under noisy conditions leaves room for improvement. In this paper, the improved permutation entropy (IPE) is proposed. The presented method combines some advantages of previous modifications of PE. Its effectiveness is validated through both synthetic and experimental analyses. Compared with PE, IPE is capable of detecting spiky features and correctly differentiating heart rate variability (HRV) signals. Moreover, it performs better under noisy conditions. Ship classification experiments demonstrate that IPE achieves a 28.66% higher recognition rate than PE at 0 dB. Hence, IPE could be used as an alternative to PE for analysing time series under noisy conditions.


Introduction
Measuring the complexity of observed time series allows a better understanding of the characteristics of the system under study [1]. There is a lack of consensus on the definition of complexity [2][3][4][5]. Entropy is one of the most powerful metrics for evaluating the complexity of a signal [6]. By this definition, complexity is associated with the degree of disorder (randomness) and unpredictability. Many entropy approaches have been proposed in recent years, such as permutation entropy (PE) [7], approximate entropy [8], sample entropy [9], and fuzzy entropy [10], each of which has its own strengths and weaknesses. Compared with other entropy algorithms, PE stands out for being computationally fast and conceptually simple. Furthermore, it is applicable to any type of signal, be it deterministic, chaotic, stochastic, stationary, or nonstationary [11]. We will therefore concentrate on PE in what follows.
PE was first introduced by Bandt and Pompe in 2002. Since then, it has been extensively applied in various fields; without being exhaustive, applications such as fault diagnosis [12,13], biomedical signal processing [6,14,15,16], and stock market analysis [17,18] can be enumerated. Despite considerable success, PE still has defects, which have motivated researchers to propose modifications of the original algorithm. First, PE is single-scale-based. Signals generated by complex systems usually show structures on multiple temporal scales; as a result, single-scale-based PE cannot describe such time series comprehensively [19]. To remedy this, Zunino et al. proposed calculating PE as a function of the time delay, offering a way to unveil the presence of structures on multiple temporal scales [20]. Moreover, Costa et al. proposed a coarse-graining technique [19,21,22]. Based on Costa's work, Aziz introduced the multiscale permutation entropy (MPE) [23] by combining the coarse-graining procedure with PE. The method is able to provide more precise descriptions of complex signals. Second, when a signal is mapped to permutation patterns (or ordinal patterns) using Bandt and Pompe's approach, information regarding the amplitudes is not taken into consideration. To this end, weighted-permutation entropy (WPE) [2] and amplitude-aware permutation entropy (AAPE) [24] have been developed. By assigning weights to distinct patterns, these modified methods greatly improve the ability to detect abrupt changes in magnitude. Third, the PE estimation is liable to be affected by equal values in the time series [25][26][27]. In the case that the sequence under study has a continuous distribution, equal values are very rare and can simply be ignored. Unfortunately, real-world data are digitized; thus equalities inevitably exist, and the situation can be more serious if the amplitude resolution is low. Bandt and Pompe suggested ranking the equal values according to their temporal order or breaking ties by adding random perturbations [7]. However, as pointed out in a recent study [26], Bandt's method for processing equal values might lead to erroneous conclusions. To address this issue, Bian et al. proposed modified permutation entropy (mPE) as an alternative [27], which assigns the same symbol to equal values. Although mPE can significantly improve the performance of distinguishing heart rate variability (HRV) signals under different conditions, it also brings additional problems. For example, mPE does not assign the maximum entropy value to white Gaussian noise (WGN), which disagrees with the fact that WGN is completely random. Lastly, PE is susceptible to noise. In order to improve PE's ability under noisy conditions, researchers have suggested applying symbolic dynamics to symbolize the time series prior to entropy estimation [28,29]. For example, Porta et al. proposed an integrated approach based on uniform quantization (IAUQ), which has shown great ability to differentiate normal subjects from heart failure patients.
Although previous works solve some problems of PE, these methods are still deficient in some respects: (I) mPE still overlooks the amplitude information; (II) the presence of equal values also harms the WPE and AAPE algorithms; and (III) the fluctuations of signals are not taken into account by IAUQ. In the present study, the improved permutation entropy (IPE) is proposed. The presented method not only considers the amplitude information and fluctuations of signals but also tackles the limitation of equal values. Besides, it can be directly combined with the coarse-graining technique for multiscale analysis. As will be shown below, IPE is capable of detecting spiky features and correctly differentiating HRV signals (time series with many equal values). Moreover, compared with PE and its modifications, IPE performs better under noisy conditions. The experimental results further validate the effectiveness of the proposed method.
The remainder of this paper is organized as follows: a detailed description of the IPE algorithm is provided in Section 2; the effect of different parameters is studied in Section 3; synthetic and experimental data are analysed in Section 4; the paper is concluded in Section 5.

Methods
In this section, the proposed IPE algorithm is described in detail. To gain insight into the advantages of IPE, the differences between PE, IAUQ, and IPE are compared. For the purpose of multiscale analysis, a multiscale version of IPE is also introduced.

Improved Permutation Entropy.
The IPE algorithm is composed of two major parts: (I) definition of the pattern and (II) entropy estimation. Consider the reconstruction vectors in (1). The first column of the embedding vectors, that is, Y(:, 1), is first symbolized based on uniform quantization (UQ). As shown in (5), y_min and y_max stand for the minimum and maximum values of the observed time series y, respectively, L denotes the discretization level, and Δ = (y_max − y_min)/L. For an input sample, the UQ procedure produces an integer ranging from 0 to L − 1.
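The UQ step can be sketched in a few lines of Python (the function name and the convention that y_max falls into the top bin are our assumptions; the paper defines UQ only through (5)):

```python
def uniform_quantize(x, x_min, x_max, L):
    """Uniform quantization as in (5): map x to an integer in {0, ..., L-1}."""
    delta = (x_max - x_min) / L
    q = int((x - x_min) // delta)
    return min(q, L - 1)  # x == x_max falls into the top bin
```

For example, with a series spanning [1, 4] and L = 3, the samples 1.9, 2.1, and 4 quantize to 0, 1, and 2, respectively.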

Figure 1: An example of some m-dimensional subvectors that are symbolized to the same ordinal pattern (m = 3 is used in this example).
Let S(:, 1) denote the symbolization result of Y(:, 1). Then, for the kth column of the embedding vectors Y(:, k), 2 ≤ k ≤ m, S(:, k) is calculated by (6). Finally, S is defined as the pattern matrix. Each row of S corresponds to a pattern π_j, 1 ≤ j ≤ L^m. Computing the probability distribution p_j of each pattern π_j, the normalized IPE is written as

IPE = −(1/ln(L^m)) Σ_j p_j ln p_j,

where ln(L^m) is the maximum value of the unnormalized entropy, which is reached only when the patterns have a uniform distribution.
It is worth noting that the main difference between IAUQ and IPE lies in the definition of the pattern. Unlike IAUQ, only the first element of the embedding vector is symbolized by UQ in the IPE algorithm. The patterns of the other elements are calculated by (6), which takes the fluctuations of the signal into consideration. Take the vector [1.9, 1, 2.1, 4] as an example; let L = 3 and m = 4; the vector is symbolized as [0, 0, 1, 2] by IAUQ and as [0, 0, 0, 2] by IPE.
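The contrast can be made concrete in Python. Equation (6) is not reproduced in this text; one deviation rule that is consistent with the worked example above is to offset the first symbol by (y_k − y_1)/Δ truncated toward zero, and the sketch below uses that assumption (function names are ours):

```python
import math

def symbolize_iauq(v, v_min, v_max, L):
    # IAUQ: every element of the vector is quantized independently.
    delta = (v_max - v_min) / L
    return [min(int((x - v_min) // delta), L - 1) for x in v]

def symbolize_ipe(v, v_min, v_max, L):
    # IPE (sketch): quantize only the first element; encode the others by
    # their deviation from the first element, truncated toward zero.
    delta = (v_max - v_min) / L
    s0 = min(int((v[0] - v_min) // delta), L - 1)
    return [s0] + [s0 + math.trunc((x - v[0]) / delta) for x in v[1:]]

v = [1.9, 1, 2.1, 4]
print(symbolize_iauq(v, 1, 4, 3))  # [0, 0, 1, 2]
print(symbolize_ipe(v, 1, 4, 3))   # [0, 0, 0, 2]
```

Under this rule the small fluctuation between 1.9 and 2.1 is absorbed by the IPE pattern but not by the IAUQ one, which is the behaviour the example illustrates.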
There are four major differences between the PE and IPE algorithms. First, amplitude information and fluctuations of the signal are considered in IPE: unlike PE, which assigns the same pattern (012) to all the vectors in Figure 1, IPE (with L = 3) symbolizes them to distinct patterns such as (002), (022), (000), and (222); for example, the nearly flat vector [2.8, 2.9, 3] is mapped to a constant pattern rather than to (012). Second, the same symbol is assigned to equal values in IPE. For example, [1, 1, 1] and [2, 2, 2] are symbolized as (000) and (111), respectively. The way IPE processes repeated values does not cause overestimation of permutation patterns; thus a more precise complexity measure can be obtained for time series with numerous equal values. Third, IPE is more robust to noise interference. The vectors [1.01, 1, 1.01] and [1, 1.01, 1.01] are both transformed to (000), meaning that the presence of moderate noise will not influence the IPE estimation. Finally, there are L^m possible patterns in IPE, whereas PE has m!.
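Putting the pieces together, a complete (normalized) IPE estimator can be sketched as follows. This is our reading of the procedure, not the authors' reference implementation: the deviation rule (truncation toward zero) and the clipping of symbols to {0, ..., L−1}, which guarantees at most L^m patterns, are assumptions:

```python
import math
from collections import Counter

def ipe(x, m=4, L=4, tau=1):
    """Normalized improved permutation entropy (sketch)."""
    x_min, x_max = min(x), max(x)
    delta = (x_max - x_min) / L
    patterns = Counter()
    for i in range(len(x) - (m - 1) * tau):
        v = x[i:i + (m - 1) * tau + 1:tau]       # m-dimensional embedding vector
        s0 = min(int((v[0] - x_min) // delta), L - 1)
        pat = [s0]
        for e in v[1:]:
            s = s0 + math.trunc((e - v[0]) / delta)
            pat.append(min(max(s, 0), L - 1))    # clip: at most L**m patterns
        patterns[tuple(pat)] += 1
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(L ** m)                  # normalize by ln(L^m)
```

A quick sanity check: a slowly varying sinusoid yields far fewer patterns, and hence a lower IPE, than white Gaussian noise of the same length.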

Multiscale Improved Permutation Entropy.
Within the multiscale improved permutation entropy (MIPE) algorithm, only the coarse-graining procedure is required prior to the IPE estimation. Given a scale factor s, the input sequence x = {x_i}, i = 1, ..., N, is decomposed by the coarse-graining technique, yielding a new subsequence of length N/s whose jth element is the average of x_((j−1)s+1), ..., x_(js), where 1 ≤ j ≤ N/s. The obtained new time series then serves as the input to the IPE algorithm for multiscale analysis. It is important to note that IPE can also be combined with other multiscale analysis techniques (e.g., [20]). In the present study, we select the prevalent coarse-graining technique for the subsequent multiscale analysis.
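The coarse-graining step of (8) amounts to averaging consecutive, non-overlapping windows of length s (the function name below is ours):

```python
def coarse_grain(x, s):
    """Coarse-graining as in (8): average non-overlapping windows of length s."""
    n = len(x) // s          # length of the coarse-grained subsequence
    return [sum(x[j * s:(j + 1) * s]) / s for j in range(n)]
```

For instance, coarse-graining [1, 2, 3, 4, 5, 6] at scale s = 2 yields [1.5, 3.5, 5.5]; any trailing samples that do not fill a complete window are discarded.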

Selection of Parameters
There are some parameters that need to be predetermined for computing the IPE and MIPE algorithms: the embedding dimension m, time delay τ, discretization level L, scale factor s, and data-length N. The time lag is analogous to downsampling to some extent, and τ = 1 is usually taken for structural preservation [30,31]. Unless otherwise specified, τ = 1 is chosen for the subsequent study. In the following, the selection of the other parameters is investigated through two synthetic signals whose characteristics are known: (I) WGN and (II) 1/f noise.

Selection of Embedding Dimension.
In this subsection, we examined how the IPE estimates vary as a function of the embedding dimension m. Thirty independent realizations were generated for each synthetic signal with a data-length of 50000, and a discretization level L = 4 was used. The average IPE values with their standard deviation (SD) error bars over varying m are provided in Figure 2. As can be seen, IPE has a very low SD, implying that it offers consistent entropy estimation. There is a slight decrease in entropy values for both synthetic signals as m increases. This entropy loss at large embedding dimensions agrees with the inference in [32], which states that the trajectory of higher-dimensional embedding vectors is more predictable than that of lower-dimensional ones; hence, lower entropy (complexity) can be expected at higher embedding dimensions. For practical purposes, Bandt and Pompe suggested setting 3 ≤ m ≤ 7 for computing PE [7]. Since m has only little influence on the IPE evaluation within that range, without loss of generality, we selected m = 4 for the subsequent study.

Selection of Data-Length.
The effect of the data-length on the IPE evaluation is depicted in Figure 3. The results were obtained by averaging 30 independent trials. The data-length of the synthetic signals was varied from 10 to 10000 with a step of 50, and a discretization level L = 4 was chosen. As shown in the figure, with increasing sample points, the IPE curves of both synthetic signals first increase and then gradually converge to a constant value. The result implies that the IPE method provides unreliable entropy estimates for very short time series. For example, the WGN and the 1/f noise become indistinguishable by IPE when N ≤ 100. According to [26,33], N >> m! must be satisfied to achieve a reliable PE measurement, where m! is the number of potential permutation patterns in PE. Analogously, N should fulfill N >> L^m in the IPE algorithm, where L^m is the number of possible patterns in IPE. Because L = 4 and m = 4 were used in Figure 3, there are 256 possible patterns. As can be seen from Figure 3, the IPE curves of both synthetic signals start to converge when N = 1000. Since 1000 ≈ 4 × 256, we can roughly deduce that N ≥ 4L^m should be satisfied for a reliable IPE estimation.

Selection of Discretization Level.
The effect of the discretization level on the IPE evaluation is depicted in Figure 4. With an increasing L, the IPE curve of the WGN first increases and then gradually converges, while that of the 1/f noise keeps progressively increasing. It is also seen that IPE does not reach its maximum for the completely random WGN. This is due to the fact that the definition of the pattern in the IPE algorithm is partly based on UQ, in which only the dominant features of the dynamics are faithfully preserved [29,34]. A larger L means that more abundant information of the observed time series is retained, but the estimate becomes more sensitive to noise and calls for more sample points to provide reliable results; the situation is reversed when L is small. Hence, the selection of L involves a trade-off between accurate entropy estimation and high noise immunity. Although the IPE approach does not assign the maximum entropy value to the WGN, it obtains large enough entropy measurements (>0.93) when L ≥ 4, which is generally consistent with the fact. Without loss of generality, L = 4 is selected for the subsequent study.

Selection of Scale Factor.
According to (8), an increasing scale factor rapidly shortens the coarse-grained subsequence. As mentioned in Section 3.2, N ≥ 4L^m must be satisfied in the IPE algorithm. Since the data-length of the subsequence equals N/s, we can deduce that s ≤ N/(4L^m) must be fulfilled in the MIPE method.
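The largest admissible scale factor under this rule of thumb follows directly (the helper name is ours):

```python
def max_scale_factor(N, m=4, L=4):
    """Largest scale factor s satisfying N/s >= 4 * L**m (rule of thumb above)."""
    return N // (4 * L ** m)
```

With m = 4 and L = 4 (so 4L^m = 1024), a record of 52734 samples, as used in the ship experiment below, supports scale factors up to 51, comfortably above the range 1-20 analysed in Section 4.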

Results and Discussion
Having established the basic properties of the IPE algorithm, in this section its advantages are demonstrated through synthetic and experimental analyses. For comparison purposes, other entropic approaches, namely PE, WPE, AAPE, mPE, and IAUQ (Shannon entropy is used to implement IAUQ), are also utilized.

Spiky Data Analysis.
A signal with abrupt changes in magnitude was tested first. As shown in Figure 5(a), the synthetic signal consists of an impulse and additive WGN. Sliding windows of 500 samples with a 400-point overlap were used for the entropy calculation. It is important to point out that the parameters in PE, WPE, AAPE, mPE, and IAUQ were set the same as those in IPE, and the adjusting coefficient was set to 0.5 for AAPE; unless otherwise specified, the same parameters are used in the subsequent study. Figure 5(b) shows that both PE and mPE remain constant across all windows, while a remarkable drop of the entropy values can be noticed in the impulse region for the IPE, IAUQ, WPE, and AAPE methods. This is because PE and mPE overlook the amplitude information, whereas it is fully considered in the IPE, IAUQ, WPE, and AAPE approaches [2,7,24,27]. The result suggests the strong ability of IPE to detect spiky features.
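The windowing scheme above (500-sample windows overlapping by 400 points, i.e., a hop of 100 samples) can be sketched as a small generator; the entropy of choice is then computed on each window in turn (the function name is ours):

```python
def sliding_windows(x, win=500, overlap=400):
    """Yield successive windows of `win` samples overlapping by `overlap` points."""
    step = win - overlap
    for start in range(0, len(x) - win + 1, step):
        yield x[start:start + win]
```

For a 1000-sample signal this produces six windows starting at samples 0, 100, ..., 500.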

Heart Rate Variability Signal Analysis.
Typically, HRV signals derived from the electrocardiogram have numerous equal values because of the limited sampling frequency. We therefore used such time series to examine how the repeated values affect the entropy values of the various entropic methods. The HRV signals analysed in this paper originate from the MIT-BIH Fantasia database, which has been widely used in scientific works [25,27,35]. Herein, we analysed a collection of 10 heart-beat time series including 5 young and 5 elderly subjects, each of which has 4096 sample points. Averaging over all subjects, Table 1 gives the percentage of equal values in the embedding vectors for different embedding dimensions. It is found that the percentage of equal values grows by approximately 10% when the embedding dimension increases by 1, indicating that the equal values are almost randomly distributed in the time series. Figure 6 provides the entropy analysis results for the HRV signals. Unfortunately, the young and elderly subjects are unclassifiable by the PE, WPE, and AAPE methods. This occurs because the presence of numerous randomly distributed equal values leads to a stochastic distribution of ordinal patterns; consequently, the entropy values of all these methods are close to 1 and indistinguishable. On the other hand, IPE, IAUQ, and mPE, which handle equal values explicitly, are able to separate the two groups of subjects.

Autoregressive Process Analysis.
To examine the ability of IPE to distinguish signals with different degrees of predictability, autoregressive (AR) processes of different orders were synthesized according to

x(n) = a_1 x(n − 1) + a_2 x(n − 2) + ... + a_p x(n − p) + w(n), (9)

where w(n) is WGN with zero mean and unit variance, p denotes the order of the AR process, and a_i stands for the correlation coefficients. The parameters for generating AR processes of diverse orders are listed in Table 2. For each order, 30 independent realizations with 10000 samples were produced.
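An AR(p) series of the form (9) can be generated with a short recursion. The coefficient used below is illustrative only; the paper's actual coefficients are listed in Table 2, and the burn-in length is our choice for discarding start-up transients:

```python
import random

def generate_ar(coeffs, n, seed=None, burn_in=500):
    """Generate x(n) = sum_i a_i * x(n-i) + w(n), with w(n) ~ N(0, 1).
    coeffs[0] is a_1, coeffs[1] is a_2, and so on."""
    rng = random.Random(seed)
    p = len(coeffs)
    x = [0.0] * p                      # zero initial conditions
    for _ in range(n + burn_in):       # burn-in discards the transient
        x.append(sum(a * x[-1 - i] for i, a in enumerate(coeffs)) + rng.gauss(0, 1))
    return x[-n:]
```

As expected from (9), a higher-order (more strongly correlated) process is more predictable; e.g., an AR(1) series with a_1 = 0.5 shows a lag-one autocorrelation near 0.5.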
The entropy analysis results for the synthetic AR time series are shown in Figure 7, where the averaged entropy values with their SD error bars over varying scale factors (1-20) are plotted. Figures 7(b)-7(e) show the results of PE, WPE, AAPE, and mPE, respectively. Very similar entropy curves are obtained for these methods, apart from differences in the absolute entropy values; as can be seen, the AR(6) and AR(7) processes are not well distinguished by them. By contrast, both IPE and IAUQ differentiate all the synthetic AR time series well. In particular, for all scales, the mean IPE values are ranked in descending order as the order of the AR time series increases, which is consistent with the underlying predictability: as shown in (9), as the order of the AR process grows, there is an increasing correlation among sample points; in terms of predictability, a higher-order AR time series is more predictable than a lower-order one and should therefore be assigned a lower entropy. Comparing Figure 7(a) with Figure 7(f), the result of IPE ranges from 0.2 to 0.9, while that of IAUQ ranges from 0.5 to 1; the difference is due to the different definitions of the pattern in the two methods. The results in Figure 7 suggest that IPE is powerful for distinguishing signals with different predictability.

Analysis of Ship-Radiated Noise.
We finally tested the effectiveness of IPE under noisy conditions. To this end, three types of real ship-radiated noise were analysed. Due to the effect of ocean ambient noise, ship sounds are usually recorded in noisy conditions. The experimental data were taken from ShipsEar [36], an open database of underwater recordings of ship sounds. The sounds of three types of marine vessels, a passenger ship, an ocean liner, and a motorboat, were measured at a sampling rate of 52734 Hz. Classifying these ships from their radiated noise can be helpful for monitoring maritime traffic. For more detailed descriptions of the data, please refer to [36]. Figure 8 shows the recorded time series of the three ships.

Table 2: The correlation coefficients for generating AR processes.
For each type of ship-radiated noise, the data were cut into 50 equal pieces, each of which contains 52734 sample points. Figure 9 provides the feature extraction results using the various entropic methods; the averaged entropy values with their SD error bars over varying scale factors (1-20) are plotted. Again, PE and mPE achieve very similar entropy curves except for the difference in the absolute entropy values. This occurs owing to the high sampling rate in the experiment: the high sampling rate results in very few equal values existing in the ship signals, the definitions of the pattern in both approaches become similar in such a situation, and mPE thus approximates PE. Because both WPE and AAPE consider the amplitude information of signals by assigning weights to different patterns, similar entropy curves can also be found for these two algorithms. Visually, the three types of ships are more distinguishable when utilizing IPE, IAUQ, WPE, and AAPE, which may be due to the fact that all these methods take the amplitude information into consideration.
Lower signal-to-noise ratio (SNR) conditions were generated by adding WGN to the ship-radiated noise. Figure 10 gives the entropy analysis results under the 5 dB condition. Except for IPE, adding WGN seriously affects the entropy estimation of the other algorithms, especially at lower scale factors; for example, compared with the corresponding results in Figure 9, these methods assign much higher entropy values to all three types of ships when s = 1. As can be seen, there is little difference between Figures 9(a) and 10(a), meaning that IPE is more robust to noise. It is also found that the difference between IPE and IAUQ is obvious when comparing Figure 10(a) with Figure 10(f): since IPE also considers the fluctuations of signals, it performs better under noisy conditions.
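Degrading a record to a target SNR amounts to scaling the added WGN so that the signal-to-noise power ratio matches the requested value in dB (the function name is ours):

```python
import math
import random

def add_wgn(signal, snr_db, seed=None):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    rng = random.Random(seed)
    p_signal = sum(v * v for v in signal) / len(signal)
    p_noise = p_signal / (10 ** (snr_db / 10))   # SNR = 10*log10(Ps/Pn)
    sigma = math.sqrt(p_noise)
    return [v + rng.gauss(0, sigma) for v in signal]
```

For a long enough record, the empirical SNR of the output closely matches the requested value (e.g., 5 dB).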
The extracted entropy features under different SNR conditions were further processed using the probabilistic neural network (PNN) [37], a powerful tool for classification. For each type of vessel, 20 noise-free pieces were used for training and the other 30 pieces for testing; for the situations with different SNRs, all 50 pieces were used as test datasets. Table 3 shows the detailed classification results, which agree well with the entropy analysis results in Figures 9 and 10. All the entropic methods perfectly classify the three types of ships in the noise-free or high-SNR (10 dB) condition. With a decreasing SNR (5 dB), the classification performance of PE, AAPE, and mPE declines sharply, while that of IPE, IAUQ, and WPE remains unchanged. As the SNR further decreases (0 dB), the recognition rate of the other entropic methods drops to 53.33% or lower, while IPE still achieves an acceptable accuracy of 69.33%. This result validates the effectiveness of IPE under noisy conditions.
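A PNN is essentially a Gaussian Parzen-window classifier: each training feature vector contributes a kernel response to its class, and the class with the largest class-averaged response wins. The minimal sketch below illustrates the idea only; it is not the configuration of [37], and the smoothing parameter sigma is an assumed free parameter:

```python
import math

def pnn_classify(train, labels, x, sigma=0.1):
    """Minimal probabilistic neural network: Gaussian Parzen-window score per
    class; predict the class with the largest class-averaged response."""
    scores, counts = {}, {}
    for xi, yi in zip(train, labels):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        scores[yi] = scores.get(yi, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
        counts[yi] = counts.get(yi, 0) + 1
    # Averaging by class size keeps unbalanced training sets from biasing the vote.
    return max(scores, key=lambda c: scores[c] / counts[c])
```

In the paper's setting, `train` would hold the multiscale entropy vectors of the 20 training pieces per vessel and `x` the entropy vector of a test piece.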

Conclusions
In this paper, the improved permutation entropy (IPE) was proposed for measuring the complexity of time series under noisy conditions. Synthetic and experimental analyses show that IPE is capable of detecting spiky features and of correctly differentiating HRV signals with numerous equal values. Moreover, IPE achieves a higher recognition rate for classifying ships under noisy conditions than PE and its modifications, implying that it is applicable for analysing signals under noisy conditions. In future work, IPE could be applied to various engineering applications such as fault diagnosis, acoustic signal processing, and stock market analysis.

Figure 2: Error bar plot of IPE over a varying embedding dimension.

Figure 3: Error bar plot of IPE over a varying data-length.

Figure 4: Error bar plot of IPE over a varying discretization level.

Figure 5: Entropy analysis for time series having spiky features. (a) Waveform of the synthetic signal. (b) Entropy estimation of the synthetic signal.

Figure 6: Entropy analysis of HRV time series.

Figure 7: Entropy analysis of AR time series. (a) Result of IPE. (b) Result of PE. (c) Result of WPE. (d) Result of AAPE. (e) Result of mPE. (f) Result of IAUQ.

Figure 8: Recorded time series of three types of ship-radiated noise.

Figure 9: Entropy analysis of three types of ship-radiated noise. (a) Result of IPE. (b) Result of PE. (c) Result of WPE. (d) Result of AAPE. (e) Result of mPE. (f) Result of IAUQ.

Figure 10: Entropy analysis of three types of ship-radiated noise under the 5 dB condition. (a) Result of IPE. (b) Result of PE. (c) Result of WPE. (d) Result of AAPE. (e) Result of mPE. (f) Result of IAUQ.

Table 1: Percentage of equal values found in embedding vectors with diverse embedding dimensions.

Table 3: Classification accuracy of three types of ships by PNN.