Self-Adaptive Spectrum Analysis Based Bearing Fault Diagnosis

Bearings are critical parts of rotating machines, making signal-based bearing fault diagnosis a long-standing research hotspot. In real application scenarios, bearing signals are normally non-linear and unstable, and thus difficult to analyze in the time or frequency domain alone. Meanwhile, fault feature vectors conventionally extracted with fixed dimensions may cause insufficiency or redundancy of diagnostic information and result in poor diagnostic performance. In this paper, Self-adaptive Spectrum Analysis (SSA) and an SSA-based diagnosis framework are proposed to solve these problems. Firstly, signals are decomposed into components with better analyzability. Then, SSA is developed to extract fault features adaptively and construct non-fixed-dimension feature vectors. Finally, Support Vector Machine (SVM) is applied to classify the different fault features. Data collected under different working conditions are selected for the experiments. The results show that the diagnosis method based on the proposed framework has better performance. In conclusion, combined with signal decomposition methods, the SSA method proposed in this paper achieves higher reliability and robustness than the other tested feature extraction methods. At the same time, the diagnosis methods based on SSA achieve higher accuracy and stability under different working conditions with different sample division schemes.


Introduction
Bearings are critical parts in rotating machines and their health condition has a great impact on production. However, because of non-linear factors such as friction, clearance and stiffness, vibration signals of bearings acquired in real application scenarios are characterized by non-linearity and instability, which makes bearing fault diagnosis difficult [1].
The general fault diagnosis process involves three main steps, namely signal acquisition and processing, fault feature extraction and fault feature classification [2]. Sensors are utilized to acquire signals with noises, and signal processing techniques are applied subsequently to improve the signal-to-noise ratio [3]. Particularly, ideal fault feature extraction can express the feature information of filtered signals comprehensively and efficiently, and it is the basis to produce an accurate fault feature classification. Therefore, a reasonable and efficient fault feature extraction plays an important role in fault diagnosis. Current fault extraction methods mainly include time domain, frequency domain and time-frequency domain analysis [4].
Time domain analysis is one of the earliest methods studied and applied. It calculates various statistical parameters in the time domain, such as peak amplitude, kurtosis and skewness [5][6][7], to construct feature vectors. Frequency domain analysis first transforms signals from the time domain into the frequency domain, mainly via the Fourier Transform (FT) [8]; then the periodical features, frequency features and distribution features of signals are extracted with methods such as cepstrum analysis and envelope spectrum analysis to construct feature vectors [9,10]. However, time domain or frequency domain analysis only extracts the information in the corresponding domain, losing the information in the other domain. With in-depth study, time-frequency domain fault feature extraction methods were developed accordingly. They can extract both time and frequency information, and have shown superiority in analyzing nonlinear and unstable signals.
As a typical time and frequency domain analysis method, Short Time Fourier Transform (STFT) [11] improves the analysis capability for unstable signals by introducing a fixed-width time window function. However, a fixed-width time window function in STFT cannot guarantee optimal time and frequency resolution simultaneously. The Wavelet Transform (WT) [12] introduces time and frequency scale coefficients to overcome the drawbacks of STFT. WT is based on the theory of inner product mapping and a reasonable basis function is the key to guarantee the effectiveness of WT. However, it is difficult to select a proper basis function. Therefore, to improve the adaptive analysis capability to signals, Empirical Mode Decomposition (EMD) [13] and Local Mean Decomposition (LMD) [14] methods were successively studied and applied. According to the local characters of signals themselves, EMD and LMD adaptively decompose a signal into various components which have better statistical characters for later analysis. Compared with each other, EMD is a mature tool for long-term study and usage, while LMD has an improved decomposition process and better decomposition results with physical explanations [15].
In recent years, EMD and LMD have been extensively studied and implemented. Mejia-Barron et al. [16] developed a method based on EMD to decompose signals and extract features, completing the fault diagnosis of winding faults. Saidi et al. [17] introduced a synthetical application of bi-spectrum and EMD to detect bearing faults. Cheng et al. [18] combined EEMD and entropy fusion to extract fault features for planetary gearboxes, and furthermore implemented fault diagnosis successfully. Yi et al. [19] also utilized EEMD to pre-process signals for further fault diagnosis for bearings. Liu and Han et al. [20] applied LMD and multi-scale entropy methods to extract fault features and analyzed faults successfully. Yang et al. [21] proposed an ensemble local mean decomposition method and applied it in rub-impact fault diagnosis for rotor systems. Han and Pan et al. [22] integrated LMD, sample entropy and energy ratio to process vibration signals and realized the fault feature extraction and fault diagnosis in rolling element bearings. Yasir and Koh et al. [23] adopted LMD and multi-scale permutation entropy and realized bearing fault diagnosis. Guo et al. [24] studied an improved fault diagnosis method for gearbox combining LMD and a synchrosqueezing transform.
Fault feature classification is implemented after fault feature extraction. Nowadays, shallow machine learning methods are extensively utilized to solve the classification problem. Support Vector Machine, Artificial Neural Network and Fuzzy Logic System are widely applied in condition monitoring and fault diagnosis [25]. In particular, SVM is based on statistics and the structural risk minimization theory, and it has better classification performance when dealing with practical problems involving small amounts of non-linear samples. To solve multi-class classification problems based on SVM, Cherkassky [26] proposed a one-against-all (oaa) strategy, transforming an N-class classification problem into N binary classification problems. Kressel [27] used a method to transform an N-class classification problem into N(N − 1)/2 binary classification problems, namely the one-against-one (oao) strategy. Wu et al. [28] adopted SVM for diagnosis by analyzing the full spectrum to extract fault features. Saimurugan et al. [29] improved diagnosis performance by integrating SVM and a decision tree. Santos et al. [30] selected SVM for classification in wind turbine fault diagnosis after several trials of different kernels.
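As a concrete illustration of the oao strategy described above, the following minimal sketch (a hypothetical example, not the implementation used in the cited works) enumerates the N(N − 1)/2 class pairs for which binary classifiers would be trained:

```python
from itertools import combinations

def oao_pairs(classes):
    """One-against-one: one binary classifier per unordered class pair,
    giving N*(N-1)/2 classifiers for N classes."""
    return list(combinations(classes, 2))

# Four bearing states -> 4*3/2 = 6 binary classifiers
pairs = oao_pairs(["normal", "ball", "inner race", "outer race"])
```

At prediction time, each of the six classifiers votes for one of its two classes and the class with the most votes is output; the oaa strategy instead trains one classifier per class against all the others.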
Currently, researchers all over the world have carried out extensive studies on bearing fault diagnosis. To the best of our knowledge, fault diagnosis methods still need further study, although various solutions have been investigated from different aspects. The main problems to be solved in this paper are summarized as follows: (1) Vibration signals acquired in real application scenarios are non-linear and unstable, and their statistical characteristics are time-varying. Hence, it is difficult to extract effective and comprehensive fault features in the time domain or the frequency domain alone. (2) Conventional fault feature extraction methods take the overall characteristics of signals into account by calculating statistical parameters to construct feature vectors with fixed dimensions; however, local detailed characteristics are neglected. Therefore, the fault information contained in the vectors may be insufficient or redundant under different working conditions because the vectors have a fixed dimension, consequently leading to lower reliability and robustness of fault feature extraction. Meanwhile, data-driven classifiers are sensitive to classification features, and minor changes in classification features may result in performance reduction [31].
In order to improve fault diagnosis performance, SSA is proposed in this paper to adaptively extract fault features and construct unfixed-dimension feature vectors according to the local characteristics of signals. SSA is then implemented under the designed framework. Signals are first decomposed to obtain components with better analyzability; LMD and EEMD are both utilized to decompose signals into different components from different analysis aspects. SSA is utilized to extract fault features adaptively, and feature vectors with non-fixed dimensions are constructed subsequently. Finally, SVM is selected to classify the fault features considering its inherent advantages with small training sample sets.

Self-Adaptive Spectrum Analysis
Self-adaptive Spectrum Analysis (SSA) is proposed to address the problem that conventional feature extraction methods neglect the local details of signals, so that fault information may be redundant or insufficient because of fixed-dimension feature vectors. With the SSA method, unfixed-dimension feature vectors are constructed by extracting the local characteristics of signals adaptively.

At first, a number of signals corresponding to different fault categories are selected. To implement the SSA method efficiently, the Fast Fourier Transform (FFT) is used to transform the signals into the frequency domain to obtain the corresponding spectra for better readability. Then an overall frequency window is applied to all spectra according to their fluctuation, and the local feature information inside the frequency window is extracted to construct feature vectors.
In order to implement the proposed SSA, some definitions are given.

Definition 1. Differential frequency f_z.

f_z is the minimum frequency unit in SSA. Normally, feature information is extracted at the points corresponding to n·f_z (n = 1, 2, 3, ...). f_z is calculated as follows. Firstly, in each spectrum, the maximum amplitude and its corresponding frequency value are found. All the frequency values are denoted as f_1, f_2, f_3, ..., f_m, where m is the number of signals. More than two fault categories must be included within the selected signals.

Secondly, the frequency values are grouped into different vectors according to the categories of the samples, denoted F_i (i = 1, 2, ..., k), where k is the number of fault categories; here we assume that the different categories contain the same number of signals. Then, the average of the elements in each vector is calculated, denoted v_1, v_2, ..., v_k respectively, and a vector f = [v_1, v_2, ..., v_k] is constructed. Thirdly, the minimum frequency value f_min and the maximum frequency value f_max are selected from vector f. In addition, the pair of neighboring frequency values in f whose difference is the largest among all neighboring pairs is selected; the lower frequency is denoted f_low and the higher one f_high.

Finally, f_min, f_max, f_low and f_high are arranged in ascending order, and the absolute values f_diff of the differences between every two neighboring frequencies are calculated. The minimum non-zero f_diff value is picked as the value of f_z:

f_z = min{ f_diff | f_diff ≠ 0 }    (2)

Definition 2. Frequency window W.

The frequency window is the specific frequency section within which feature information is extracted; f_l is the left boundary and f_r is the right boundary. The window is determined with fixed boundaries, calculated as:

f_l = floor(f_min / f_z) · f_z    (3)
f_r = ceil(f_max / f_z) · f_z    (4)

where floor(*) is a round-down function and ceil(*) is a round-up function.
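The computation of f_z can be sketched as follows. This is a minimal illustration with hypothetical helper names, assuming the dominant (peak) frequency of each training spectrum has already been extracted:

```python
def differential_frequency(class_peak_freqs):
    """Compute the differential frequency f_z from per-class dominant
    (peak) frequencies, following the definition of SSA.

    class_peak_freqs: dict mapping fault category -> list of peak
    frequencies (Hz), one per training signal of that category.
    """
    # Average peak frequency per category -> vector f = [v_1, ..., v_k]
    f = sorted(sum(v) / len(v) for v in class_peak_freqs.values())
    f_min, f_max = f[0], f[-1]
    # Neighboring pair with the largest gap -> f_low, f_high
    gaps = [(f[i + 1] - f[i], f[i], f[i + 1]) for i in range(len(f) - 1)]
    _, f_low, f_high = max(gaps)
    # Ascending order, absolute neighboring differences, minimum non-zero
    s = sorted([f_min, f_max, f_low, f_high])
    diffs = [abs(s[i + 1] - s[i]) for i in range(3)]
    return min(d for d in diffs if d > 0)
```

For example, class averages of 100, 160, 400 and 420 Hz give f_min = 100, f_max = 420, f_low = 160, f_high = 400 and hence f_z = 20 Hz.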

Definition 3. Tolerance µ.

The tolerance µ is the semidiameter of the searching section (n·f_z − µ, n·f_z + µ] centered at n·f_z; the maximum amplitude value corresponding to a frequency within this section is regarded as the amplitude value at n·f_z. Since the searching sections tile the frequency axis without overlap, µ is calculated as:

µ = f_z / 2    (5)

Definition 4. Peak value ratio coefficient h.

h denotes the degree of the peak amplitude value. It is utilized to judge whether an amplitude value is normal or not, and all h values construct the fault feature vectors. h is calculated as follows. Firstly, the average of all the amplitude values in the frequency window [f_l, f_r] is calculated and denoted A_ave; also, the maximum amplitude value in the section (n·f_z − µ, n·f_z + µ] is selected and denoted A_max. Finally, h is calculated as:

h = A_max / A_ave    (6)

Figure 1 gives a description of the definitions mentioned above.
Combined with Figure 1, SSA is implemented on each spectrum as follows: (1) Calculate the differential frequency f_z and the boundaries f_l and f_r, determining the frequency window W; (2) Calculate all the n·f_z values and take µ as the side interval to determine the different searching sections; (3) Select the maximum amplitude in each searching section and its corresponding frequency value, and calculate the absolute frequency interval d between this frequency value and the section center n·f_z; also calculate h, constructing the frequency interval vector D = [d_1, d_2, d_3, ..., d_n] and the peak value ratio coefficient vector H = [h_1, h_2, h_3, ..., h_n]; (4) Set a threshold value h_t for h; h_t can be optimized automatically according to the overall accuracy. h_t is used to judge whether an anomaly exists in a section: when h > h_t, the corresponding section is regarded as abnormal; (5) If an anomaly is found, determine whether the frequency values corresponding to all abnormal sections lie on the same side of n·f_z (n = 1, 2, 3, ...) along the frequency axis. If they do, select the minimum d in D, shift the spectrum in the opposite direction by d, and repeat steps (1) to (3); if they do not, skip steps (5) and (6); (6) H is taken as the fault feature vector extracted from the spectrum.
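Steps (1)-(3) above can be sketched as follows. This is a simplified illustration with hypothetical helper names; the anomaly detection and spectrum-shift logic of steps (4)-(6) is omitted:

```python
def ssa_features(freqs, amps, f_z, f_l, f_r):
    """For each section center n*f_z inside the frequency window
    [f_l, f_r], find the maximum amplitude in the searching section
    (n*f_z - mu, n*f_z + mu], then build the frequency interval vector D
    and the peak value ratio coefficient vector H (h = A_max / A_ave)."""
    mu = f_z / 2.0  # tolerance (Definition 3)
    window = [(f, a) for f, a in zip(freqs, amps) if f_l <= f <= f_r]
    a_ave = sum(a for _, a in window) / len(window)
    D, H = [], []
    n = 1
    while n * f_z <= f_r:
        center = n * f_z
        if center >= f_l:
            # Maximum amplitude within the searching section
            section = [(a, f) for f, a in window if center - mu < f <= center + mu]
            if section:
                a_max, f_peak = max(section)
                D.append(abs(f_peak - center))  # interval d to the center
                H.append(a_max / a_ave)         # peak value ratio h
        n += 1
    return D, H
```

The dimension of H depends on f_z and the window boundaries, which is how SSA produces feature vectors of non-fixed dimension.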

Framework Construction of Fault Diagnosis
The overall framework of the proposed SSA-based fault diagnosis method is shown in Figure 2. The proposed fault diagnosis method includes three parts, namely data processing, fault feature extraction and fault feature classification.


Data Processing
As shown in Figure 3, a signal segment containing 120,000 points is selected and segmented into 100 parts of equal length, yielding 100 samples per signal segment. These 100 samples are then divided into a training sample set and a test sample set.
Each sample is decomposed into a set of components with better analyzability using a time-frequency analysis method; LMD and EEMD are two commonly used ones. The first component in each set of components is chosen for fault feature extraction because it accumulates the main part of the energy.
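The segmentation and sample division described above can be sketched as follows (a minimal illustration using the point and sample counts given in the text; the decomposition step itself is left to an LMD or EEMD implementation):

```python
def segment_signal(signal, n_samples=100):
    """Split one signal segment into n_samples equal-length samples,
    as in the data processing step (120,000 points -> 100 samples)."""
    length = len(signal) // n_samples
    return [signal[i * length:(i + 1) * length] for i in range(n_samples)]

def split_train_test(samples, n_train):
    """Divide the samples per a division scheme such as 40/60."""
    return samples[:n_train], samples[n_train:]
```

With a 120,000-point segment, this yields 100 samples of 1200 points each, which are then divided according to schemes such as 5/95 or 60/40.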


Fault Feature Extraction
FFT is utilized to transform the decomposed component into the frequency domain, and then SSA is implemented to extract fault features. The components of the training samples are first used to calculate f_z, and this f_z is then applied to both the training samples and the test samples to extract fault features.

Fault Feature Classification
Fault feature vectors are classified into different fault patterns. Vectors extracted from the training samples are utilized to train the classification model and parameters are tuned to optimize the model. Here, SVM is selected because of its better performance in classification with small samples. Eventually, categories are output with the well-trained model.

Data Selection and Processing
Vibration signals acquired from bearings are utilized for validation. In this paper, selected bearing data published by Case Western Reserve University were used [32]. Single-point faults are introduced into the test bearings at different parts (ball, inner race and outer race) to simulate different kinds of faults. Vibration signals of different kinds of faults with different failure degrees are collected under different loads to construct the experimental data set.
The data set consists of vibration data collected on SKF bearings with a sampling frequency of 12 kHz. The combinations of four loads (0, 1, 2 and 3 hp) and three failure degrees (0.007, 0.014 and 0.021 inch) form 12 different working conditions.
Under each working condition, four kinds of fault mode (normal, ball fault, inner race fault and outer race fault) are simulated, and four time-varying signals corresponding to the faults are collected, respectively. Each signal is processed with the proposed method given in Figure 3 to extract 100 samples, and 100 feature vectors are subsequently constructed. Eventually, 400 feature vectors are determined under every working condition.
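The composition of the data set described above can be summarized in a short sketch (variable names are illustrative):

```python
from itertools import product

# The 12 working conditions: 4 motor loads x 3 failure degrees,
# denoted load-failure degree (e.g. "0-0.007").
loads_hp = [0, 1, 2, 3]
degrees_inch = [0.007, 0.014, 0.021]
conditions = [f"{l}-{d}" for l, d in product(loads_hp, degrees_inch)]

# Under each condition: 4 fault modes x 100 samples -> 400 feature vectors.
fault_modes = ["normal", "ball", "inner race", "outer race"]
vectors_per_condition = len(fault_modes) * 100
```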



Parameter Determination
Parameters corresponding to the decomposition methods, the fault feature extraction process and the fault feature classification model are determined as follows.

Parameters in the signal decomposition methods: (1) In LMD, parameters are determined according to reference [33]; (2) In EEMD, parameters are determined according to reference [34].

Parameters in the SSA method: (1) f_z, the differential frequency, is calculated according to Equation (2); (2) f_l, the left boundary, is calculated according to Equation (3); (3) f_r, the right boundary, is calculated according to Equation (4); (4) µ, the tolerance, is calculated according to Equation (5); (5) h, the peak value ratio coefficient, is calculated according to Equation (6); (6) h_t: the minimum value in vector H is selected as the threshold value of h.

Parameters in the pattern recognition method: In SVM, the cost c is a basic parameter while g is specific to the RBF kernel. In this paper, grid search [35] is applied and the overall accuracy is taken into consideration to tune the two parameters.
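The grid search over (c, g) can be sketched as follows. This is an illustrative stand-in: the accuracy function here is a toy surface, not the paper's cross-validated SVM accuracy, and the grids are assumed values:

```python
from itertools import product

def grid_search(accuracy_fn, c_grid, g_grid):
    """Tune SVM parameters (cost c, RBF parameter g) by grid search [35]:
    accuracy_fn(c, g) is assumed to return the overall accuracy
    (e.g. via cross-validation); the best (c, g) pair is kept."""
    return max(product(c_grid, g_grid), key=lambda cg: accuracy_fn(*cg))

# Toy stand-in accuracy surface peaking at c = 4, g = 0.5:
toy_acc = lambda c, g: -(c - 4) ** 2 - (g - 0.5) ** 2
c_grid = [2 ** i for i in range(-2, 6)]   # 0.25 ... 32
g_grid = [2 ** i for i in range(-4, 3)]   # 0.0625 ... 4
best_c, best_g = grid_search(toy_acc, c_grid, g_grid)
```

Powers of two are a common choice of grid for c and g in LIBSVM-style tuning; finer grids can then be searched around the best coarse point.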

Experiment Results and Analysis
In this subsection, a simulated signal x(t) is utilized to evaluate the effectiveness of the decomposition methods [33]. x(t) consists of two superimposed component signals:

x(t) = sin(200πt + 2 sin(10πt)) + 3 sin(20πt² + 6πt),  t ∈ [0, 1]

The LMD and EEMD methods are used to decompose the signal. Figure 4 illustrates the results of the decomposition.
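A sketch of generating such a two-component test signal (a frequency-modulated component plus a chirp component) follows; the exact coefficients used here are assumptions and should be checked against reference [33]:

```python
import math

def simulated_signal(n=1000):
    """Generate a two-component test signal for evaluating decomposition
    methods: an FM component plus a chirp component on t in [0, 1].
    The coefficients are assumed, not taken verbatim from [33]."""
    ts = [i / (n - 1) for i in range(n)]
    fm = [math.sin(200 * math.pi * t + 2 * math.sin(10 * math.pi * t))
          for t in ts]                                   # FM component
    chirp = [3 * math.sin(20 * math.pi * t ** 2 + 6 * math.pi * t)
             for t in ts]                                # chirp component
    return ts, fm, chirp, [a + b for a, b in zip(fm, chirp)]
```

A good decomposition method should recover the FM component and the chirp component from their sum, which is exactly what Figure 4 compares for LMD and EEMD.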

Figure 4 shows the results of the decomposition of the simulated signal. Figure 4a shows the oscillograph of the simulated signal. In Figure 4b,c, the oscillographs in red are the two original components of the raw simulated signal, and the ones in blue are the Product Function (PF) components extracted with the LMD method. The original components and the extracted PF components have a high similarity except for several end points on the right. In Figure 4d,e, the oscillographs in blue are the first two Intrinsic Mode Function (IMF) components extracted with the EEMD method; both of them have less similarity with the original ones. These results show that the LMD adopted in this research can effectively decompose the raw signal into PF components which have physical significance, and that EEMD decomposes the raw signal into IMF components by another mechanism [18].
Four experimental sets are designed by combining two signal decomposition methods, two fault feature extraction methods and a fault feature classification method. The four experimental sets are arranged as shown in Table 1. LMD and EEMD are utilized to decompose the signals. The fault feature extraction methods include the proposed SSA and the combination of Sample Entropy (SE) and Energy Ratio (ER) [36], and the LIBSVM [37] software package is selected to implement the pattern classification. In each experimental set, 12 kinds of working conditions are tested (a working condition is denoted as load-failure degree, for example 0-0.007), and five kinds of sample division schemes are tested (a sample division scheme is denoted as the number of training samples out of the 100 samples for every fault / the number of test samples, namely 5/95, 10/90, 20/80, 40/60 and 60/40); with each scheme, 10 independent experiments are repeated. Ultimately, 2400 experiments are carried out in total within the four experimental sets. Table 2 shows the f_z values and the dimensions of the feature vectors under the 12 working conditions with sample division schemes of 5/95 and 60/40, respectively, in experimental set 1. The results illustrate that when the working condition or division scheme changes, the differential frequency f_z and the dimension of the feature vectors change accordingly.
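The total experiment count follows directly from the experimental design; as a quick check:

```python
# Total number of experiments across the four experimental sets:
n_sets = 4                                        # {LMD, EEMD} x {SSA, SE+ER}
n_conditions = 12                                 # load-failure degree combinations
schemes = ["5/95", "10/90", "20/80", "40/60", "60/40"]
n_repeats = 10                                    # independent runs per scheme
total = n_sets * n_conditions * len(schemes) * n_repeats  # 2400
```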
Without considering sample division schemes, the overall diagnostic capability of the proposed model is evaluated first. The average values and variances of the accuracy values over all independent experiments (50 per working condition) are listed in Tables 3 and 4. As shown in Figure 5a, Set 3 achieves the best average accuracies under six kinds of working conditions and Set 1 achieves the best under five kinds, while Set 4 only ranks first under one kind of working condition. The average accuracies under different working conditions with different sample division schemes are then listed in Table 7, and Figure 7 transforms Table 7 into diagrams.
In Table 7, 60 comparisons are conducted under different working conditions with different sample division schemes. The results show that Set 1 and Set 3 achieve better average accuracy in 56 of the 60 comparisons, while Set 2 or Set 4 only achieve higher accuracies under the working condition 0-0.007 with four kinds of schemes. As shown in Figure 7, as the number of training samples increases, the average accuracies of Set 1 and Set 3 converge toward the highest value noticeably faster than those of Set 2 and Set 4. Considering all the results comprehensively, further analysis is carried out. LMD and EEMD can decompose non-linear and unstable signals into a set of components in the time domain, and these components have better analyzability. The proposed SSA method can adaptively extract feature information according to local characteristics and construct unfixed-dimension fault feature vectors, and it is shown to have better efficiency and robustness. SSA-based fault diagnosis methods obtain higher accuracies under different working conditions with different sample division schemes in most comparisons (56/60), and the accuracies show less fluctuation between different conditions. With an increasing number of samples, the accuracies achieved with the SSA-based methods converge towards the highest values faster. Especially with a small sample division scheme (5/95), the results show that the methods based on SSA still maintain high accuracy and stability, making them particularly suitable for practical application scenarios with small amounts of training samples.

Conclusions
To improve fault feature extraction performance, SSA is proposed in this paper. Combined with signal decomposition methods, SSA extracts fault features from non-linear and unstable signals effectively, and the extracted features are then classified with SVM. Bearing data under 12 different working conditions obtained from CWRU are utilized to evaluate the diagnosis methods. The conclusions may be summarized as follows:

1. SSA extracts fault features and constructs unfixed-dimension vectors adaptively, which reduces the side effects caused by information insufficiency and redundancy. Moreover, SSA has higher efficiency and robustness in fault feature extraction.

2. Fault diagnosis methods based on SSA achieve higher accuracy and stability than the other tested methods under the same framework, and as the number of training samples increases, the accuracies achieved with the SSA-based methods converge to the highest values faster.

3. Especially with a small amount of training samples, the SSA-based methods still provide high accuracy, with more obvious superiority in accuracy and stability; therefore, they have the potential to be implemented in real application scenarios.

Future Lines of Work
In recent years, deep learning has gradually been adopted in fault diagnosis. It can extract fault features automatically thanks to its multi-layer structure, which can further improve feature extraction. At the same time, transfer learning [38] has achieved great success in many fields. Its generalization capability can also be utilized in fault diagnosis to move diagnostic theory toward application. Therefore, our future work will focus on combining deep learning and transfer learning in fault diagnosis.

Conflicts of Interest:
The authors declare no conflict of interest.