Wiener Model Identification Using a Modified Brain Storm Optimization Algorithm

Abstract: The Wiener model is widely used in industrial processes. It is composed of a linear dynamic block followed by a nonlinear static block. Estimating the Wiener model is challenging because of the diversity of static nonlinear functions and the unmeasurable intermediate signal caused by the series structure of the model. Existing optimization algorithms cannot satisfy the accuracy and efficiency requirements of identification and often fall into a local optimum. Herein, a modified Brain Storm Optimization (mBSO) algorithm is proposed to estimate the parameters of the Wiener model. Many different combinations of individuals from intra- and extra-groups ensure the diversity of the proposed mBSO algorithm. Furthermore, the mBSO algorithm incorporates a multiplicative term, triggered by the current state of the population, that achieves a good balance between global exploration and local exploitation. Comparative experiments demonstrate the effectiveness and efficiency of the proposed method.


Introduction
The Wiener model describes the behavior of nonlinear systems adequately and is widely applied in industrial processes, such as chemical [1], biological [2], and chromatographic separation processes [3]. Although the effectiveness of the Wiener model has been confirmed, estimating its structure and parameters is still challenging in industrial applications [4,5].
Many studies have reported different techniques for identifying the Wiener model using various classical optimization methods. Wang et al. proposed a least-squares iterative identification algorithm and a gradient iteration algorithm to identify Wiener nonlinear systems [6]. Furthermore, Ding et al. offered a Newton iterative identification algorithm to determine a unique Wiener system [7]. However, classical optimization methods assume a smooth search space with continuous derivatives and optimize along the gradient direction; therefore, they easily fall into a local extremum.
To improve the modeling accuracy and efficiency of the optimization process and to avoid trapping in local optima, stochastic evolutionary optimization algorithms have been presented as an alternative and effective tool for solving nonlinear optimization problems. Different stochastic algorithms can be classified according to the inspiration behind their population update mechanism [8]. One category is the evolutionary algorithm, in which the biological evolution process inspires the update process. A representative is the genetic algorithm (GA), which obtains the optimal solution by mimicking the mechanism of natural selection and inheritance [9]. It is well known that classical optimization algorithms iteratively seek the optimal solution from a single initial value, whereas the GA processes multiple individuals in the group simultaneously. Bipin et al. adopted an adaptive GA, and the corresponding results were compared with those of the classical GA and the particle swarm optimization (PSO) algorithm [9]. The results indicated that the convergence accuracy of the GA was low.
The other category is the swarm intelligence algorithm, in which the update process is inspired by the behaviors of living organisms [10]. Many swarm intelligence algorithms are called foraging algorithms because they mimic the foraging behavior of animals and/or insects; examples include particle swarm optimization [11] and ant colony optimization (ACO) [12], which are inspired by the flocking behavior of birds and the food-searching behavior of ants, respectively. Compared with GAs, PSO models a society rather than the principle of survival of the fittest: every particle in the group represents a potential solution. Wu used the Bacterial Foraging Optimization Particle Swarm Optimization (BFOPSO) algorithm to test four classical functions and to model a simulated Wiener system. The results indicated its superiority in overall searching ability; however, the BFOPSO algorithm failed to demonstrate its efficiency in high-dimensional optimization problems. Although ACO exhibits strong robustness, its searching time is long, which affects its convergence speed. Moreover, if excessive "elites" are selected, ACO suffers premature stagnation owing to early convergence to a local suboptimal solution.
The Brain Storm Optimization (BSO) algorithm is a meta-heuristic optimization algorithm developed by mimicking the human brainstorming procedure [13]. Compared with other animals, the human thinking pattern is more intelligent [14]; therefore, optimization algorithms based on brainstorming should demonstrate better accuracy and convergence than other swarm intelligence optimizations. However, the original BSO cannot obtain the global solution during successive iterations because of premature convergence or local-minimum trapping [15]. Hence, a modified BSO (mBSO) algorithm is presented herein. In its update process, new ideas can be generated from a cluster center and another randomly chosen cluster. Compared with the original BSO algorithm, many combination methods are used to generate new ideas, improving the diversity of the mBSO algorithm. In addition, to prevent the algorithm from falling into a local optimum and to facilitate obtaining the optimal value in a population, a multiplicative term is introduced. It intelligently changes the searching domain according to the current state of an individual combined with a global-best version. Numerical and industrial cases are presented to illustrate the performance of the mBSO algorithm.
The remainder of the paper is organized as follows: Section 2 describes the optimization problem. Section 3 details the original BSO and the mBSO. Section 4 presents the numerical simulation and industrial cases. Finally, the discussion and conclusions are summarized in Sections 5 and 6, respectively.

Problem Description
A Wiener model consists of a linear, time-invariant, dynamic subsystem followed by a static nonlinearity [16]:

x(t) = (B(z)/A(z)) u(t),  y(t) = f(x(t)) + v(t),    (1)

where f(⋅) is a nonlinear function, u(t) and x(t) are the input and output of the dynamic linear block, respectively, y(t) is the output of the Wiener model, v(t) is white noise, i.e., v(t) ~ N(0, σ²), and t = 1, 2, ..., N indexes the sampling instants.

The polynomials A(z) and B(z) are defined as follows:

A(z) = 1 + a_1 z^(-1) + ... + a_na z^(-na),  B(z) = b_1 z^(-1) + ... + b_nb z^(-nb).

The vector θ collects the unknown coefficients:

θ = [a_1, ..., a_na, b_1, ..., b_nb]^T.    (2)

Combining Eq. (1) and Eq. (2), the output of the linear block is rewritten as the linear regression x(t) = φ(t)^T θ, where φ(t) = [−x(t−1), ..., −x(t−na), u(t−1), ..., u(t−nb)]^T. The parameter θ can be estimated by minimizing the objective function

J(θ) = Σ_{t=1}^{N} [y(t) − ŷ(t)]².

Owing to the nonlinear static block of the Wiener model, the parameter θ cannot be solved for directly. In this study, an intelligent algorithm, i.e., BSO, is used as a tool to obtain the optimal value.
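As a concrete sketch of this setup, the following Python fragment simulates a Wiener model output and evaluates the objective function J(θ). The second-order linear block and the choice of nonlinearity f passed by the caller are illustrative assumptions, not the paper's specific test system.

```python
import numpy as np

def wiener_output(theta, u, f):
    """Simulate a Wiener model: IIR linear block followed by static nonlinearity f.

    theta = [a1, a2, b1, b2]: an assumed second-order linear block for illustration.
    """
    a1, a2, b1, b2 = theta
    x = np.zeros(len(u))
    for t in range(len(u)):
        x[t] = (b1 * (u[t - 1] if t >= 1 else 0.0)
                + b2 * (u[t - 2] if t >= 2 else 0.0)
                - a1 * (x[t - 1] if t >= 1 else 0.0)
                - a2 * (x[t - 2] if t >= 2 else 0.0))
    return f(x)  # static nonlinearity applied to the intermediate signal

def objective(theta, u, y_meas, f):
    """Sum of squared output errors J(theta), the criterion the optimizer minimizes."""
    y_hat = wiener_output(theta, u, f)
    return float(np.sum((y_meas - y_hat) ** 2))
```

Because the intermediate signal x(t) is never measured, J(θ) is nonconvex in θ, which is why a stochastic optimizer such as BSO is used instead of a gradient method.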

BSO Algorithm
The BSO algorithm is inspired by the brainstorming process in human problem-solving [17]. In BSO, each idea represents a potential solution to the problem and is updated in each iteration. Initially, n ideas are grouped into m clusters with k-means clustering, and the best idea in each cluster is maintained as the cluster center. In each iteration, a new individual x_new is generated as follows:

• Clustering: The n individuals are classified into m clusters. In each cluster, the individual with the best fitness value is chosen as the cluster center.
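The clustering step can be sketched as follows; the simple k-means loop and the `cluster_population` name are illustrative, not the paper's implementation.

```python
import numpy as np

def cluster_population(pop, fitness, m, rng):
    """Group n ideas into m clusters with a few k-means passes, then record the
    index of the best (lowest-fitness) idea in each non-empty cluster."""
    # initialize centers from randomly chosen population members
    centers = pop[rng.choice(len(pop), size=m, replace=False)].astype(float)
    for _ in range(10):  # a handful of k-means iterations suffices for a sketch
        d = np.linalg.norm(pop[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)          # nearest-center assignment
        for j in range(m):
            if np.any(labels == j):
                centers[j] = pop[labels == j].mean(axis=0)
    # the cluster center kept by BSO is the fittest member, not the mean
    best = {j: int(np.flatnonzero(labels == j)[np.argmin(fitness[labels == j])])
            for j in range(m) if np.any(labels == j)}
    return labels, best
```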
• Disruption: Randomly select an original idea x_org from the population, and use it to change the value of a randomly selected cluster via Eq. (6):

x_new = x_org + ξ ⋅ N(0, 1),    (6)

where N(0, 1) represents a Gaussian distribution with mean 0 and standard deviation 1, and ξ is a dynamically updated step-size:

ξ = logsig((0.5 H_max − H_cur)/s) ⋅ rand(),

in which logsig(⋅) is the logarithmic sigmoid transfer function, H_max is the maximum number of iterations, H_cur is the current iteration number, s changes the slope of the logsig function, and rand() is a random value within (0, 1).
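A minimal sketch of the dynamically updated step-size ξ, following the logsig expression above (function and parameter names are illustrative):

```python
import numpy as np

def step_size(h_cur, h_max, s, rng):
    """Step-size xi of the disruption step: a logistic sigmoid of the remaining
    iteration budget, scaled by a uniform random number in (0, 1)."""
    logsig = lambda x: 1.0 / (1.0 + np.exp(-x))
    return logsig((0.5 * h_max - h_cur) / s) * rng.random()
```

Early in the run (small H_cur), ξ is large on average and favors exploration; late in the run it shrinks toward zero and favors exploitation.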

• Generation: The generation of x_new exhibits three characteristics: leading, colonial, and flexible.
(1) Leading: Good ideas serve as clues for the next generation; i.e., cluster centers are assigned a high probability of participating in the creation of new individuals.
(2) Colonial: All individuals in the group are thoroughly discussed according to the learning mechanism. Assume that m clusters exist and that m_j is the number of individuals in cluster j. A new idea can be generated from a randomly chosen individual x in cluster j (see Eq. (9)). Furthermore, it can be obtained from the combination of two cluster centers (see Eq. (10)) or of two random individuals selected by probability (see Eq. (11)).
Here, rand() is a random value between 0 and 1. The detailed combination methods are described in Fig. 2.
(3) Flexible: Similar to a random mutation process, the algorithm adds a random value rand() to the generated individuals in Eq. (7).

• Updating: During each iteration, each existing individual generates a new idea. After comparison, the better of the two is retained and enters the next iteration [18]. The entire process continues until the number of iterations reaches the upper limit.
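The greedy updating rule above can be sketched as follows (the function name and the sphere-style objective used in the test are illustrative):

```python
import numpy as np

def update_population(pop, fitness, candidates, objective):
    """Greedy replacement: a newly generated idea replaces its parent only if
    it attains a better (lower) fitness value, as in the BSO updating step."""
    for i, x_new in enumerate(candidates):
        f_new = objective(x_new)
        if f_new < fitness[i]:
            pop[i] = x_new       # accept the improvement
            fitness[i] = f_new   # keep the fitness cache consistent
    return pop, fitness
```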
The procedure of the BSO algorithm is listed in Algorithm 1.

Modified BSO Algorithm
In this section, two modifications that improve the performance of the BSO algorithm are presented.
• The generation process: In the BSO algorithm, the generation of new individuals relies on combinations within and between groups [19]. One or two individuals can be chosen randomly from different clusters, or two cluster centers can be combined linearly, as shown in Fig. 2. The modified BSO algorithm increases the population diversity through the proposed strategy shown in Fig. 3.
Additionally, a new idea can be generated from one cluster center and one random individual drawn from two different clusters (Fig. 3f).

• The global-best update: As shown in Eq. (16), global-best information significantly improves the performance of meta-heuristic algorithms. The effect of the global-best idea x_gb in the population is added after the new idea is generated:

x_new = x_new + α ⋅ rand() ⋅ (x_gb − x_new),    (16)

where F(⋅) is the fitness value of an individual, x_new is the current individual, and α is a multiplicative term determined by the fitness of x_new. The larger the fitness value of the current individual, the larger α is. This implies that the current idea must learn from other good ideas; in other words, the searching domain tends toward a global search. On the contrary, the smaller the fitness value of the current individual, the smaller α is, and the searching domain tends toward a local search. Triggered by the current state of the individuals, the multiplicative term α intelligently determines the searching area of the next iteration to achieve a balance between local search and global search in the population. The procedure of the mBSO algorithm is listed in Algorithm 2.
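A sketch of the global-best update: the exact expression for α in Eq. (16) is not reproduced here, so the fitness ratio below is only an assumed stand-in that preserves the stated behavior (worse current fitness gives a larger α and a wider move toward the global best).

```python
import numpy as np

def global_best_update(x_new, x_gb, f_new, f_gb, rng):
    """Pull a freshly generated idea toward the global-best idea x_gb.

    alpha grows with the (worse, for minimization) fitness of the current idea,
    widening the move toward x_gb. This ratio is an illustrative assumption,
    not the paper's exact Eq. (16).
    """
    alpha = f_new / (f_new + f_gb + 1e-12)  # assumed form of the multiplicative term
    return x_new + alpha * rng.random(len(x_new)) * (x_gb - x_new)
```

Because alpha ⋅ rand() lies in [0, 1), each coordinate moves part of the way toward x_gb, so the update never overshoots the global best.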

Case Studies
The experimental section is divided into three parts: a performance comparison among the mBSO, original BSO, and PSO algorithms on four benchmark functions; a comparison between the original BSO and the mBSO on a second-order Wiener model; and a comparison on an actual nonlinear CE8 coupled electric drive system.
Two indexes, i.e., the absolute relative error (ARE) and the root mean square error (RMSE), are used to evaluate the algorithms' performance.
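For concreteness, the two indexes can be computed as follows; since the paper does not spell out its exact ARE formula, the mean absolute relative error of the parameters (in percent) is assumed here.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between measured and model outputs."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def are(theta_true, theta_est):
    """Absolute relative error of the estimated parameters, in percent.

    Assumed definition: mean of |theta_est - theta_true| / |theta_true|.
    """
    t, e = np.asarray(theta_true, float), np.asarray(theta_est, float)
    return float(np.mean(np.abs(e - t) / np.abs(t)) * 100.0)
```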

Benchmark Functions
On the Matlab simulation platform, four benchmark functions (Rosenbrock, Griewank, Rastrigin, and Quadric) are selected to test the performance of the PSO, BSO, and mBSO algorithms. The information on the four functions is shown in Tab. 1.
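Standard definitions of the four benchmarks, which can be used to reproduce such a comparison ("Quadric" is taken here to be the rotated hyper-ellipsoid, a.k.a. Schwefel 1.2, which is an assumption since Tab. 1 is not reproduced):

```python
import numpy as np

def rosenbrock(x):
    """Narrow curved valley; global minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def rastrigin(x):
    """Highly multimodal; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    """Many regularly spaced local minima; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    i = np.arange(1, len(x) + 1)
    return float(np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

def quadric(x):
    """Rotated hyper-ellipsoid (Schwefel 1.2); global minimum 0 at the origin."""
    x = np.asarray(x, float)
    return float(np.sum(np.cumsum(x) ** 2))
```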
The parameters of the three intelligent optimization algorithms are listed in Tab. 2. The probability values were determined through multiple random experiments.
The results are shown in Fig. 4. Whether the dimension of the problem is 30 or 50, the performance of the mBSO algorithm is clearly the best among the three algorithms.

Numerical Simulation
A classical Wiener model is used as an example herein. The identified results are shown in Tabs. 3-5.

Figure 4: Comparisons of the algorithms on the benchmark functions

As shown in Tab. 3 and Tab. 4, the convergence rate of the mBSO algorithm is faster than that of the original BSO algorithm. Additionally, the accuracy of the mBSO improved significantly; the ARE was reduced to only 0.6806% after 50 iterations.
The iterative process is shown in Fig. 5; the optimal fitness value is gradually reduced, and the estimated parameters approach the true values. As observed, the red curve exhibits a steeper slope during the interval (5-50), and the mBSO achieves excellent convergence.

Industrial Case
The CE8 couples two current-controlled electric motor drive systems through a pulley. The pulley is suspended by a fixed spring at one end to form a lightly damped dynamic mode, as shown in Fig. 6. The pulse sensor used in the system cannot detect the sign of the angular velocity of the pulley; hence, an absolute value function describes this irreversible nonlinearity. The system model is a dynamic third-order linear model, as described in [20]. To confirm the performance of the mBSO, the PSO and BSO algorithms were compared with it. The parameters used in the PSO, BSO, and mBSO algorithms are shown in Tab. 5.
The difference between the actual and estimated outputs of the mBSO algorithm is shown in Fig. 7, and the corresponding iterative process is shown in Fig. 8. The iterative process of each stochastic algorithm is shown in Figs. 9-11. The three algorithms were implemented on a computer with an Intel(R) Core(TM) i5-3317U CPU, 4 GB of memory, and the Windows 7 operating system. The PSO algorithm converged in 400 iterations (consuming 214.2940 s), the BSO algorithm in 327 iterations (180.3679 s), and the mBSO algorithm in 173 iterations (92.2496 s). It can be concluded that the convergence speed of the mBSO algorithm is faster than those of the BSO and PSO algorithms.

Discussion
As optimization problems become increasingly complex, classical numerical optimization algorithms can no longer meet the demands, and many evolutionary optimization algorithms have been presented that have achieved great success in real-valued and portfolio optimization problems. However, most stochastic optimization algorithms still suffer from the "curse of dimensionality". Herein, the dimensional sensitivity of the mBSO algorithm is further examined using the four benchmark functions. The results indicate that the increase in magnitude of the mean fitness value of the mBSO algorithm is 19.83% (shown in Fig. 4). The accuracy of the mBSO algorithm is thus relatively insensitive to the dimension, indicating that the mBSO algorithm can be applied to large-scale optimization problems.

Conclusion
To estimate the parameters of the Wiener model accurately and quickly, the mBSO algorithm was presented herein. Multiple combination strategies for ideas in the update process and a real-time variable parameter combined with the global-best design were introduced to improve the performance of the mBSO. The mBSO-based optimization technique can be extended to other block-oriented models, such as the Hammerstein model and cascaded combinations of Hammerstein and Wiener models.

Conflicts of Interest:
No potential conflict of interest was reported by the authors.