Optimization of Kurtosis in the Extend-Infomax Blind Signal Separation Algorithm

A kurtosis optimization method is proposed to improve the quality of blindly separated signals based on the extend-infomax algorithm. The kurtosis of the hypothetical source signal was optimized based on the probability density function of sub-Gaussian signals. The parameters obtained after kurtosis optimization were then used to validate the effectiveness of the algorithm: the running time of the algorithm was significantly reduced, and the quality of the separated signals was enhanced. Methods. Using kurtosis as a control variable, a one-way analysis of variance (ANOVA) was carried out on the algorithm's performance metrics, the number of iterations, and the signal-to-noise ratio of the separated signals. Results. The results showed significant differences in the above metrics under different kurtosis levels. The curves of average metric values indicate that, as the kurtosis of the hypothetical source signal increases, the performance of the algorithm improves.


Introduction
Blind signal separation (BSS) refers to the estimation of source signals from observed signals when both the source signals and the mixing system are unknown. BSS as a digital signal processing technology was first proposed in the 1990s and has since become a popular research topic in the field of signal processing [1][2][3][4]. Several studies on BSS have been carried out in various areas, such as mechanical fault detection [5,6], audio signal processing [7], image processing [8], biomedical engineering [9], and radar signal detection [10].
In a linear instantaneous mixture BSS model, n statistically independent source signals S(t) = [s_1(t), s_2(t), ..., s_n(t)]^T are processed by an unknown aliasing matrix A = (a_ij)_{n×n}, and n observation signals X(t) = [x_1(t), x_2(t), ..., x_n(t)]^T are obtained:

X(t) = AS(t), t = 1, 2, ..., T_0, (1)

where T_0 is the number of samples and A is an n × n nonsingular constant matrix. The task of a BSS algorithm is to recover the source signals from the mixtures X(t). BSS separation models can be divided into two categories: the batch approach and the extraction approach. The goal of BSS is to obtain the source signals based on the observed signal X(t) only, without knowing the source signals S(t) or the aliasing matrix A. It has been applied in various scenarios. For example, in the "cocktail party" problem, BSS technology was used to separate speech and music signals from a mixed background signal [11]. In electroencephalogram (EEG) processing, BSS technology was used to automatically remove eye-movement and blink artifacts in order to extract the characteristics of neural signals [12,13]. In addition, BSS was used to extract the components of mechanical vibration signals in mechanical fault detection, thereby increasing the accuracy of fault detection [14].
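As a minimal sketch of the mixing model in equation (1) (the two source waveforms and the matrix A below are illustrative assumptions, not the paper's data), the observations X(t) = AS(t) can be generated as follows:

```python
import numpy as np

# T0 samples of n = 2 statistically independent source signals
# (illustrative choices: a sine wave and a square wave).
T0 = 1000
t = np.linspace(0.0, 1.0, T0)
S = np.vstack([np.sin(2 * np.pi * 5 * t),             # s_1(t)
               np.sign(np.sin(2 * np.pi * 3 * t))])   # s_2(t)

# Unknown nonsingular n x n aliasing matrix A (assumed here).
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])

# Observation signals: X(t) = A S(t), equation (1).
X = A @ S

assert X.shape == (2, T0)
assert abs(np.linalg.det(A)) > 0     # A must be nonsingular
```

The BSS task is then to recover S from X alone, without access to A.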
Owing to its popularity and the wide range of areas to which BSS can be applied, a great deal of research has been conducted on this problem. Scarpiniti developed an effective blind source separation method based on the Adam algorithm [15]. The method builds on a stochastic optimization approach known as Adaptive Moment Estimation (Adam) [16], which lends its favorable properties to the BSS solution. One of the most effective algorithms is the infomax algorithm proposed by Bell and Sejnowski [11]. The algorithm is based on maximizing the joint entropy of the output of a single neural network. The efficiency of this method is guaranteed by the fact that the joint-entropy gradient can be evaluated in a simple closed form. The extend-infomax algorithm was proposed by Girolami et al. [17] and Lee et al. [18]. The extended version has been shown to separate 20 sources with a variety of source distributions with ease.
However, little research has been done on further improving the performance of the extend-infomax algorithm.
This paper adopts a kurtosis-optimization approach to improve BSS performance. Research on BSS is of significant practical and application value. An overview of the main methods applied in the field of BSS is included in Section 2.1.
The major contributions of this paper are as follows: (1) Independent component analysis (ICA) is used for blind signal separation. (2) Different algorithms from the literature are discussed. The infomax algorithm is an optimization principle for artificial neural networks and other information-processing systems, and infomax algorithms are learning algorithms that perform this optimization. (3) The extend-infomax algorithm is used; one of its objectives is to provide a simple learning rule that can separate sources with a variety of distributions. (4) The influence of kurtosis optimization on the performance of the algorithm is analyzed. (5) One-way ANOVA is then carried out on the performance metrics of the algorithm under different kurtosis levels to derive source signals with improved quality. The outline of this paper is as follows. Section 2, the Methods section, discusses the independent component analysis (ICA) method used for separating the blind signal, describes the infomax algorithm, which is a neural network method, and then discusses the extend-infomax algorithm, which provides a simple learning rule that can separate sources with a variety of distributions.
In Section 3, aiming at the mixture separation of sub-Gaussian source signals in the extend-infomax algorithm, the parameter setting of this algorithm is explained from the perspective of kurtosis optimization of the assumed source signals, and a way to improve the algorithm is given.
In Section 4, the Analysis section, the one-way ANOVA analyses of the running time, the number of iterations, SN1, and SN2 are discussed.

Methods
In this section, we describe independent component analysis (ICA), a technique used for blind signal separation. The infomax algorithm, a neural network method, is also described. Then, the extend-infomax algorithm is discussed, which provides a simple learning rule that can separate sources with a variety of distributions.

Independent Component Analysis (ICA)
ICA is currently the primary technology used for blind signal separation. It is generally assumed that the source signals are statistically independent, and their independence is measured by a cost function. Using optimization algorithms, the separation matrix W_{n×m} that drives the cost function to its extreme value is obtained, and the signal y(t) = [y_1(t), y_2(t), ..., y_n(t)]^T = WX(t) is taken as the estimate of the source signals. The cost function can be selected using three different criteria: minimum mutual information (MMI), maximum entropy (ME) [11], and maximum likelihood estimation (MLE) [15,19]. Optimization of the cost function is performed using the stochastic gradient (SG) [11] and natural gradient (NG) [20] methods. Different algorithms have been proposed in the literature to solve the BSS problem. In general, these algorithms can be classified into two groups: (i) methods based on statistical analysis and (ii) methods based on neural networks. The neural network-based methods are considered more computationally efficient, whereas the statistical analysis methods are slower by comparison and may also converge slowly.
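To make the separation model y(t) = WX(t) concrete, the sketch below cheats by assuming the mixing matrix A is known (real ICA never has it and must estimate W from X alone by optimizing one of the cost functions above); the sources and A are illustrative assumptions. The ideal separation matrix is then W = A^{-1}:

```python
import numpy as np

rng = np.random.default_rng(1)
T0 = 5000

# Independent sub-Gaussian (uniform) sources -- illustrative assumption.
S = rng.uniform(-1.0, 1.0, size=(2, T0))
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
X = A @ S                       # observed mixtures

# The separation matrix ICA tries to estimate from X alone; here we use
# the known inverse to show what a perfect solution looks like.
W_ideal = np.linalg.inv(A)
Y = W_ideal @ X                 # y(t) = W X(t)

assert np.allclose(Y, S)        # sources recovered exactly
# Independence shows up as near-zero sample correlation between outputs,
# which is (part of) what the MMI/ME/MLE cost functions reward.
assert abs(np.corrcoef(Y)[0, 1]) < 0.1
```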

Infomax Algorithm.
Infomax is an optimization principle for artificial neural networks and other information-processing systems, first described by Linsker in 1988. It prescribes that a function mapping a set of input values I to a set of output values O should be chosen or learned so as to maximize the average Shannon mutual information between I and O, subject to a set of specified constraints and/or noise processes. Infomax algorithms are learning algorithms that perform this optimization. One limitation of the infomax algorithm is that it cannot adapt itself to inputs with a variety of distributions.
Because the learning rule defined by Bell and Sejnowski uses only one nonlinear function for the mapping network, it can only separate signals with the same type of distribution.

Extend-Infomax Algorithm.
The extend-infomax algorithm was proposed by Girolami et al. There are three types of signal distributions: super-Gaussian, Gaussian, and sub-Gaussian. The difference between the PDFs of sub-Gaussian and super-Gaussian distributions is shown in Figure 1. Signals with different distributions, such as (a) a sub-Gaussian signal, (b) a Gaussian signal, and (c) a super-Gaussian signal, are reproduced from Ashouri et al. 2009, under the Creative Commons Attribution License/public domain [21]. One limitation of the existing BSS algorithms is that they cannot adapt themselves to a variety of input distributions. The algorithm measures the independence of the separated signals by their mutual information (MI, i.e., the Kullback-Leibler divergence):

I(y) = ∫ p(y) log ( p(y) / ∏_{i=1}^{n} p_i(y_i) ) dy.

If the separated signals are independent, then I(y) = 0. Minimizing the MI of the separated signals is equivalent to maximizing the likelihood function in equation (2). Based on the conventional gradient method, the following update is obtained:

ΔW ∝ (W^T)^{-1} + φ(y)X^T, φ(y) = ∂ log p(y)/∂y,

where p(y) is the hypothetical probability density of the source signal.
To avoid inverting the separation matrix and to speed up the convergence of the algorithm, the NG method was used:

ΔW = η[I − K tanh(y)y^T − yy^T]W.

The probability density of the hypothetical source signal in the extend-infomax algorithm is a Gaussian mixture. Letting μ = σ = 1, the switching matrix is obtained as

K = diag(sgn(kurt(y_1)), sgn(kurt(y_2)), ..., sgn(kurt(y_n))).
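A minimal sketch of one natural-gradient update in this form (the sources, mixing matrix, and learning rate below are illustrative assumptions; the kurtosis signs are estimated from the data):

```python
import numpy as np

def kurt(y):
    """Sample fourth-order cumulant k4 = E[y^4]/E[y^2]^2 - 3 (y made zero-mean)."""
    y = y - y.mean()
    return np.mean(y**4) / np.mean(y**2) ** 2 - 3.0

def extended_infomax_step(W, X, eta=0.01):
    """One NG update: W <- W + eta * (I - K tanh(Y) Y^T - Y Y^T) W,
    with K = diag(sgn(kurt(y_i))) switching the nonlinearity per output."""
    n, T = X.shape
    Y = W @ X
    K = np.diag([np.sign(kurt(Y[i])) for i in range(n)])
    grad = np.eye(n) - (K @ np.tanh(Y) @ Y.T) / T - (Y @ Y.T) / T
    return W + eta * grad @ W

rng = np.random.default_rng(2)
S = np.vstack([rng.uniform(-1.0, 1.0, 20000),   # sub-Gaussian source
               rng.laplace(0.0, 1.0, 20000)])   # super-Gaussian source
A = np.array([[1.0, 0.4],
              [0.5, 1.0]])
X = A @ S

# The sign of the sample kurtosis identifies each source type.
assert kurt(S[0]) < 0 and kurt(S[1]) > 0

W1 = extended_infomax_step(np.eye(2), X)
assert W1.shape == (2, 2) and np.all(np.isfinite(W1))
```

In practice this step is iterated until the global matrix WA approaches a scaled permutation.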
The infomax algorithm is not effective in separating sub-Gaussian sources, because only one nonlinear function is used in the learning rule of the neural network. Hence, one of the objectives of the extend-infomax algorithm is to provide a simple learning rule that can separate sources with a variety of distributions. In this paper, based on the hypothetical probability density of a sub-Gaussian source signal in the extend-infomax algorithm, the influence of kurtosis optimization on the performance of the algorithm was analyzed. One-way ANOVA was then carried out on the performance metrics of the algorithm under different kurtosis levels to derive source signals with improved quality.

Proposed Algorithm
In this section, the kurtosis optimization is discussed: the assumed kurtosis of the sub-Gaussian source signal in the extend-infomax algorithm is optimized, and kurtosis is used as a control variable.

Kurtosis.
Kurtosis is the fourth-order cumulant of a signal. For a normal distribution, the kurtosis can be calculated from its characteristic function [22]. The probability density of the normal distribution is

p(x) = (1/(√(2π)σ)) e^{−(x−μ)²/(2σ²)}.

The central moments of the normal distribution are m_1 = 0, m_2 = σ², m_3 = 0, and m_4 = 3σ⁴. The kurtosis of a normally distributed random variable is therefore

k_4 = m_4 − 3m_2² = 0.

Assuming source signals s_i(t), i = 1, 2, ..., n, by the central limit theorem the mixture

x_p(t) = a_{p1}s_1(t) + a_{p2}s_2(t) + ... + a_{pn}s_n(t)

approaches a Gaussian distribution, where (a_{p1}, a_{p2}, ..., a_{pn}) is the pth row of the matrix A. Therefore, the kurtosis of the mixture signal should be approximately zero. If kurtosis > 0, the signal is called super-Gaussian; if kurtosis < 0, it is sub-Gaussian. |k_4| can be used as a measure of how far the signal is from a Gaussian signal. The hypothetical probability density of the sub-Gaussian source signal in the extend-infomax algorithm comes from the Gaussian mixture model of the study by Pearson [18]:

p(y) = a N(μ, σ²) + (1 − a) N(−μ, σ²),

where c = μ/σ² and η = a/(a − 1). The kurtosis of the hypothetical source signal, kurt(y), follows from the moments of this mixture. The following constrained optimization problem is to be solved:

min kurt(y) subject to a ∈ [0, 1], μ ∈ [−100, 100], σ² ∈ (0, 100].

Using a random parameter setting of a ∈ [0, 1], μ ∈ [−100, 100], and σ² ∈ (0, 100] as the initial iteration conditions, the minimum value of kurt(y) was −2 with a = 0.5 after 100 iterations, while μ and σ² showed irregular variation. According to the Gaussian mixture model of Pearson [18], setting a = 0.5 gives the probability density function graph shown in Figure 2.
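For the symmetric case a = 1/2, the mixture (1/2)N(μ, σ²) + (1/2)N(−μ, σ²) has zero mean, variance μ² + σ², and fourth moment μ⁴ + 6μ²σ² + 3σ⁴, so kurt(y) = −2μ⁴/(μ² + σ²)². The sketch below (grid bounds taken from the constraint box above) checks the two values quoted in the text:

```python
import numpy as np

def mixture_kurtosis(mu, sigma2):
    """Kurtosis of 0.5*N(mu, sigma2) + 0.5*N(-mu, sigma2).
    Zero mean; variance mu^2 + sigma2; E[y^4] = mu^4 + 6 mu^2 sigma2
    + 3 sigma2^2; hence kurt = -2 mu^4 / (mu^2 + sigma2)^2."""
    return -2.0 * mu**4 / (mu**2 + sigma2) ** 2

# With mu = sigma = 1 the hypothetical source has kurtosis -0.5.
assert abs(mixture_kurtosis(1.0, 1.0) - (-0.5)) < 1e-12

# Over the constraint box, the infimum -2 is approached as sigma2 -> 0.
mus = np.linspace(-100.0, 100.0, 401)
s2s = np.linspace(1e-3, 100.0, 400)
vals = mixture_kurtosis(mus[:, None], s2s[None, :])
assert -2.0 < vals.min() < -1.99
```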
For BSS of sub-Gaussian signals, the algorithm update takes the form

ΔW = η[I + tanh(y)y^T − yy^T]W.

Influences of Kurtosis on the Source Signal.
In the probability density function of the hypothetical source signal in the extend-infomax algorithm, kurt(y) = −0.5 when μ = σ = 1. A higher |kurt(y)| means a larger distance between the source signal and a Gaussian signal, and a lower |kurt(y)| indicates a smaller distance between them. Both c and 1/σ² change with |kurt(y)|. The parameters can be obtained by solving kurt(y) = k_4 for μ and σ² at each target kurtosis level. In this paper, s_1(t) = sin(2π · 300t) and s_2(t) = sin(2π · 5t)sin(2π · 600t). A total of 10000 equidistant sampling points were selected within the interval [−10π, 10π]. Starting from the baseline (Table 1), derived with μ = 1 and σ² = 1, the parameters obtained with increasing |k_4| were used to validate the performance of the algorithm based on the following metrics: (1) the performance index (PI) of the global matrix G = WA,

PI = Σ_i (Σ_j |g_ij| / max_k |g_ik| − 1) + Σ_j (Σ_i |g_ij| / max_k |g_kj| − 1),

and (2) the signal-to-noise ratio (SNR),

SNR(S, S*) = 10 log(‖S‖² / ‖S − S*‖²),

where S* is the separated signal. The other experimental settings of k_4, ranging from −0.51 to −0.55 in steps of −0.01, and their corresponding performance indices are included in Tables 2-6, where SN1 refers to the SNR of the recovered signal corresponding to the first source signal, SNR(S_1, S*_1) = 10 log(‖S_1‖² / ‖S_1 − S*_1‖²), and SN2 refers to the SNR of the recovered signal corresponding to the second source signal, SNR(S_2, S*_2) = 10 log(‖S_2‖² / ‖S_2 − S*_2‖²). From the results, it can be seen that at the same kurtosis level, the running times and iteration counts of different combinations of μ and σ² are similar. As shown in Figure 4, when the kurtosis of the source signal increases to a certain level, the PI value does not tend to zero at the maximum number of iterations, or the global matrix is empty. If |k_4| is too big (e.g., |k_4| = 0.6), the algorithm becomes unstable after the obtained parameters are imported. In this case, the iteration step can be reduced (e.g., to 0.0003) to ensure convergence.
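The two metrics can be sketched as follows (the PI below is the common cross-talk form for the global matrix G = WA and may differ in normalization from the paper's exact definition; the test signals are toy values):

```python
import numpy as np

def performance_index(G):
    """Cross-talk PI of the global matrix G = W A; approaches 0 when G
    approaches a scaled permutation matrix (perfect separation)."""
    G = np.abs(G)
    rows = (G / G.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    cols = (G / G.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return rows.sum() + cols.sum()

def snr_db(s, s_hat):
    """SNR(S, S*) = 10 log10(||S||^2 / ||S - S*||^2)."""
    return 10.0 * np.log10(np.sum(s**2) / np.sum((s - s_hat) ** 2))

# A scaled permutation (ideal separation up to scale/order) gives PI = 0.
G = np.array([[0.0, 2.0],
              [-1.5, 0.0]])
assert abs(performance_index(G)) < 1e-12

# SNR of the first source against a slightly corrupted copy.
t = np.linspace(-10 * np.pi, 10 * np.pi, 10000)
s1 = np.sin(2 * np.pi * 300 * t)
s1_hat = s1 + 0.01 * np.sin(2 * np.pi * 40 * t)
assert snr_db(s1, s1_hat) > 30.0
```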
To evaluate the influence of the kurtosis of the hypothetical source signal on the performance of the extend-infomax algorithm, kurtosis was used as a control variable. Under six kurtosis levels, one-way ANOVA analyses of the running time, the number of iterations, SN1, and SN2 were carried out using SPSS 25.0.

Analysis
In this section, the one-way ANOVA [23] analyses of the running time, the number of iterations, SN1, and SN2 are discussed.

One-Way ANOVA Analysis of the Running Time.
At a significance level of α = 0.05, the running time at different kurtosis levels does not meet the homogeneity-of-variance assumption, so the Welch and Brown-Forsythe tests were used; their results are shown in Table 7.
The results show significant differences in the algorithm running time among the six kurtosis levels. The average running time decreases as the kurtosis increases, as shown in Figure 5.

One-Way ANOVA Analysis of the Number of Iterations.
At a significance level of α = 0.05, the number of iterations at different kurtosis levels does not meet the homogeneity-of-variance assumption. The results of the Welch and Brown-Forsythe tests are shown in Table 8.
There are significant differences in the number of iterations among the six kurtosis levels. The average number of iterations decreases with increasing kurtosis, as shown in Figure 6.

One-Way ANOVA Analysis of SN1.
At a significance level of α = 0.05, the SN1 results at different kurtosis levels meet the homogeneity-of-variance assumption, so the standard F test was applied; its result, shown in Table 9, indicates significant differences in SN1 among the kurtosis levels. The average SN1 increases with increasing kurtosis (Figure 7).
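A sketch of this step using SciPy in place of SPSS (the SN1 samples below are synthetic, assumed values; only the rising-mean trend mirrors the paper's result):

```python
import numpy as np
from scipy.stats import levene, f_oneway

rng = np.random.default_rng(3)

# Illustrative SN1 samples at three kurtosis levels (values assumed).
sn1_a = rng.normal(12.0, 1.0, 30)
sn1_b = rng.normal(14.0, 1.0, 30)
sn1_c = rng.normal(16.0, 1.0, 30)

# Homogeneity of variance (Levene's test): p > 0.05 means the assumption
# is met and the standard one-way ANOVA F test is appropriate.
_, p_levene = levene(sn1_a, sn1_b, sn1_c)

# Standard one-way ANOVA F test across the kurtosis levels.
F, p = f_oneway(sn1_a, sn1_b, sn1_c)
assert p < 0.05       # significant differences among the levels
```

When the homogeneity assumption fails, as in the running-time and SN2 analyses, robust alternatives such as the Welch test are used instead.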
One-Way ANOVA Analysis of SN2.
At a significance level of α = 0.05, SN2 at different kurtosis levels does not meet the homogeneity-of-variance assumption. The results of the Welch and Brown-Forsythe tests are shown in Table 10.
There are significant differences in SN2 among the different kurtosis levels, and the average SN2 increases with increasing kurtosis (Figure 8).

Conclusions
Using the hypothetical probability density of sub-Gaussian signals in the extend-infomax algorithm, the constrained optimization problem was solved according to the principle of optimal kurtosis. The optimal parameter a in the probability density of the hypothetical source signal was obtained: a = 1/2. Different combinations of parameters at different kurtosis levels were examined to validate the performance of the algorithm using indices such as the performance index, running time, number of iterations, SN1, and SN2. With kurtosis as a control variable, a one-way ANOVA analysis of the above indices was carried out. The results showed significant differences in the indices among the different kurtosis levels. With increasing kurtosis, the average running time and the number of iterations showed a decreasing trend, whereas the average signal-to-noise ratio increased. Once the kurtosis reaches a certain level, the iteration step size needs to be reduced for the algorithm to converge. The experiments validated the effectiveness of the proposed method in improving the quality of blindly recovered signals.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.