Human Behavior-Based Particle Swarm Optimization

Particle swarm optimization (PSO) has attracted many researchers dealing with various optimization problems, owing to its easy implementation, few tuning parameters, and acceptable performance. However, the algorithm is prone to becoming trapped in local optima because it rapidly loses population diversity. Improving the performance of PSO and decreasing its dependence on parameters are therefore two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two notable differences between HPSO and PSO. First, the global worst particle is introduced into the velocity equation of PSO and is endowed with a random weight that obeys the standard normal distribution; this strategy helps trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients c1 and c2 of the standard PSO (SPSO) to reduce the sensitivity of the solved problems to parameter settings. Experimental results on 28 benchmark functions, which consist of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed at a lower computational cost.


Introduction
Particle swarm optimization (PSO) [1] is a population-based intelligent algorithm, and it has been widely employed to solve various kinds of numerical and combinatorial optimization problems because of its simplicity, fast convergence, and high performance.
Researchers have proposed various modified versions of PSO to improve its performance; however, premature convergence and slow convergence rates remain problems. In PSO research, how to increase population diversity to enhance the precision of solutions and how to speed up convergence at the least computational cost are two vital issues. Generally speaking, there are four strategies to achieve these goals, as follows.
(1) Tuning control parameters. As for the inertia weight, the linearly decreasing inertia weight [2], fuzzy adaptive inertia weight [3], random inertia weight [4], and adaptive inertia weight based on velocity information [5] can all enhance the performance of PSO. Concerning acceleration coefficients, time-varying acceleration coefficients [6] are widely used. Clerc and Kennedy analyzed convergence behavior by introducing a constriction factor [7], which has been proved to be equivalent to the inertia weight [8].
(3) Changing the topological structure. The global and local versions of PSO are the main types of swarm topologies. The global version converges fast but has the disadvantage of being trapped in local optima, while the local version can obtain a better solution with slower convergence [15]. The von Neumann topology is helpful for solving multimodal problems and may perform better than other topologies, including the global version [16].
Although the various variants of PSO have enhanced its performance, problems remain, such as difficult implementation, new parameters to adjust, or high computational cost. It is therefore necessary to investigate how to trade off the exploration and exploitation abilities of PSO, reduce the parameter sensitivity of the solved problems, and improve convergence accuracy and speed with the least computational cost and easy implementation. To achieve these goals, in this paper the global worst position (solution) is introduced into the velocity equation of the standard PSO (SPSO), in what is called impelled/penalized learning according to the corresponding weight coefficient. Meanwhile, we eliminate the two acceleration coefficients c1 and c2 from SPSO to reduce the parameter sensitivity of the solved problems. The resulting HPSO has been applied to a set of nonlinear benchmark functions, which comprise unimodal, multimodal, rotated, and shifted high-dimensional functions, to confirm its high performance by comparison with other well-known modified PSO algorithms.
The remainder of the paper is structured as follows. In Section 2, the standard particle swarm optimization (SPSO) is introduced. The proposed HPSO is given in Section 3. Experimental studies and discussion are provided in Section 4. Some conclusions are given in Section 5.

Standard PSO (SPSO)
PSO is inspired by the behavior of bird flocking and fish schooling; it was first introduced by Kennedy and Eberhart in 1995 [1] as a new heuristic algorithm. In the standard PSO (SPSO) [2], a swarm consists of a set of particles, and each particle represents a potential solution of an optimization problem. Consider the i-th particle of a swarm of N particles in a D-dimensional space; its position and velocity at iteration t are denoted by X_i(t) = (x_i1(t), ..., x_iD(t)) and V_i(t) = (v_i1(t), ..., v_iD(t)), respectively. The velocity and position on the d-th dimension of this particle at iteration t + 1 are calculated by using the following:

  v_id(t + 1) = w v_id(t) + c1 r1 (pbest_id(t) - x_id(t)) + c2 r2 (gbest_d(t) - x_id(t)),

where i = 1, 2, ..., N, and N is the population size; d = 1, 2, ..., D, and D is the dimension of the search space; r1 and r2 are two uniformly distributed random numbers in the interval [0, 1]; the acceleration coefficients c1 and c2 are nonnegative constants which control the influence of the cognitive and social components during the search process. Pbest_i(t) = (pbest_i1(t), ..., pbest_iD(t)), called the personal best solution, represents the best solution found by the i-th particle itself until iteration t; Gbest(t) = (gbest_1(t), ..., gbest_D(t)), called the global best solution, represents the global best solution found by all particles until iteration t. The inertia weight w is linearly decreased:

  w = w_max - (w_max - w_min) t / T,

where w_max is the initial weight, w_min is the final weight, t is the current iteration number, and T is the maximum iteration number. Then, the particle's position is updated using the following:

  x_id(t + 1) = x_id(t) + v_id(t + 1),

and checked against x_min_d <= x_id(t + 1) <= x_max_d, where x_min_d and x_max_d represent the lower and upper bounds of the d-th variable, respectively.
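As an illustration, the SPSO update above can be sketched in Python for a single particle; this is a minimal sketch, and the default bound values are illustrative placeholders rather than settings prescribed by the algorithm.

```python
import random

def spso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0,
              x_min=-100.0, x_max=100.0):
    """One SPSO velocity/position update for a single particle.

    x, v, pbest are lists of length D; gbest is the swarm's best
    position.  x_min/x_max are illustrative search-space bounds.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]
              + c1 * r1 * (pbest[d] - x[d])    # cognitive component
              + c2 * r2 * (gbest[d] - x[d]))   # social component
        xd = min(max(x[d] + vd, x_min), x_max)  # clamp to search space
        new_v.append(vd)
        new_x.append(xd)
    return new_x, new_v
```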

Human Behavior-Based PSO (HPSO)
In this section, a modified version of SPSO based on human behavior, called HPSO, is proposed to improve the performance of SPSO. In SPSO, all particles learn only from the best particles Pbest and Gbest. Obviously, this is an idealized social condition. Considering human behavior, however, there are people with bad habits or behaviors around us, and, as is well known, these bad habits or behaviors affect the people around them. If we take warning from these bad habits or behaviors, they benefit us; conversely, if we learn from them, they harm us. Therefore, we must take an objective and rational view of these bad habits or behaviors.
In HPSO, we introduce the global worst particle, which has the worst fitness in the entire population at each iteration. It is denoted as Gworst and defined as follows:

  Gworst(t) = arg max { f(Pbest_1(t)), f(Pbest_2(t)), ..., f(Pbest_N(t)) },

where f(.) represents the fitness value of the corresponding particle.
To simulate human behavior and make full use of Gworst, we introduce a learning coefficient r3, which is a random number obeying the standard normal distribution; that is, r3 ~ N(0, 1). If r3 > 0, we consider it an impelled learning coefficient, which increases the "flying" velocity of the particle and therefore enhances its exploration ability. Conversely, if r3 < 0, we consider it a penalized learning coefficient, which decreases the "flying" velocity of the particle and is therefore beneficial to exploitation. If r3 = 0, the bad habits or behaviors have no effect on the particle. Meanwhile, to reduce the parameter sensitivity of the solved problems, we replace the two acceleration coefficients c1 and c2 with two random learning coefficients r1 and r2, respectively. The velocity equation therefore becomes the following:

  v_id(t + 1) = w v_id(t) + r1 (pbest_id(t) - x_id(t)) + r2 (gbest_d(t) - x_id(t)) + r3 (gworst_d(t) - x_id(t)),

where r1 and r2 are two random numbers in the range [0, 1] with r1 + r2 = 1. The random numbers r1, r2, and r3 are the same for all dimensions d = 1, 2, ..., D but different for each particle, and they are generated anew in each iteration. If v_id(t + 1) overflows the boundary, we set it to the boundary value:

  v_id(t + 1) = v_min_d if v_id(t + 1) < v_min_d,
  v_id(t + 1) = v_max_d if v_id(t + 1) > v_max_d,

where v_min_d and v_max_d are the minimum and maximum velocities on the d-th dimension of the search space, respectively. Similarly, if x_id(t + 1) flies out of the search space, we limit it to the corresponding bound value.
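A minimal sketch of this modified velocity update, assuming the equation above: r1 is drawn uniformly with r2 = 1 - r1, r3 is drawn from the standard normal distribution, and one shared set of coefficients is used across all dimensions of a particle. The velocity bound v_max is an illustrative placeholder.

```python
import random

def hpso_velocity(x, v, pbest, gbest, gworst, w, v_max=50.0):
    """One HPSO velocity update for a single particle (sketch)."""
    r1 = random.random()
    r2 = 1.0 - r1                 # r1 + r2 = 1 replaces c1, c2
    r3 = random.gauss(0.0, 1.0)   # impelled (>0) / penalized (<0) coefficient
    new_v = []
    for d in range(len(x)):
        vd = (w * v[d]
              + r1 * (pbest[d] - x[d])
              + r2 * (gbest[d] - x[d])
              + r3 * (gworst[d] - x[d]))      # global-worst learning term
        new_v.append(min(max(vd, -v_max), v_max))  # clamp velocity
    return new_v
```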
In SPSO, the cognitive and social learning terms move a particle toward good solutions based on Pbest and Gbest in the search space, as shown in Figure 1. This strategy makes a particle fly quickly toward good solutions, so it easily becomes trapped in local optima. From Figure 2, we can clearly observe that both the impelled learning term and the penalized learning term give a particle the chance to change its flying direction. Therefore, the impelled/penalized learning term plays a key role in increasing population diversity, which helps particles escape from local optima and enhances the convergence speed. In HPSO, the impelled/penalized learning term achieves a proper tradeoff between exploration and exploitation.
To sum up, Figure 3 illustrates the flowchart of HPSO, and the pseudocode for implementing HPSO is listed in Algorithm 1.

Experimental Studies and Discussion
To evaluate the performance of HPSO, 28 minimization benchmark functions are selected [22,24,25] as detailed in Section 4.1. HPSO is compared with SPSO in different search spaces and the results are given in Section 4.2. In addition, HPSO is compared with some well-known variants of PSO in Section 4.3.

Benchmark Functions.
In the experimental study, we choose 28 minimization benchmark functions, which consist of unimodal, multimodal, rotated, shifted, and shifted rotated functions. Table 1 lists the main information; please refer to [22, 24, 25] for further details about these functions. Among them, f1-f6 are unimodal functions. f7 is the Rosenbrock function, which is unimodal for D = 2 and D = 3 but may have multiple minima in high-dimensional cases. f8-f15 are unrotated multimodal functions, and the number of their local minima increases exponentially with the problem dimension. f16-f23 are rotated functions. f24-f26 are shifted functions, and f27 and f28 are shifted rotated multimodal functions, where o = (o_1, o_2, ..., o_D) is a randomly generated shift vector located in the search space. To obtain a rotated function, an orthogonal matrix M [26] is generated and the rotated variable y = M x is computed. Then, the vector y is used to evaluate the objective function value.
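The shift and rotation operations can be illustrated as follows; the sphere function and the 2-D rotation matrix in the usage example are illustrative choices for this sketch, not functions or matrices taken from Table 1.

```python
def rotate(M, x):
    """Compute the rotated variable y = M x for an orthogonal matrix M."""
    return [sum(M[i][j] * x[j] for j in range(len(x)))
            for i in range(len(M))]

def shifted(f, o):
    """Shift the optimum of f to o: the shifted function evaluates f(x - o)."""
    return lambda x: f([xi - oi for xi, oi in zip(x, o)])

def sphere(x):
    """Illustrative unimodal base function with minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

# Usage: a 90-degree (orthogonal) rotation leaves the sphere function
# unchanged, while shifting moves its optimum to the shift vector o.
M = [[0.0, -1.0], [1.0, 0.0]]
```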

Comparison of HPSO with SPSO.
The convergence accuracy of HPSO is compared with that of SPSO on the test functions listed in Table 1. For a fair comparison, we use the same parameter values. The population size is set to 30 (N = 30), the upper bounds of velocity to V_max = 0.5 x (X_max - X_min), and the corresponding lower bounds to V_min = -V_max, where X_min and X_max are the lower and upper bounds of the variables, respectively. The inertia weight is linearly decreased from 0.9 to 0.4 in both SPSO and HPSO. The acceleration coefficients c1 and c2 in SPSO are set to 2. The two algorithms are independently run 30 times on the benchmark functions. The results in terms of the best, worst, median, mean, and standard deviation (SD) of the solutions obtained in the 30 independent runs by each algorithm in different search spaces are shown in Tables 2, 3, and 4. The maximum iteration number is 1000 for D = 30, 2000 for D = 50, and 3000 for D = 100.
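The parameter schedule above (a linearly decreasing inertia weight and velocity bounds derived from the variable bounds) can be expressed as two small helper functions; this is a sketch of the stated settings, not code from the paper.

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decrease w from w_max to w_min over t_max iterations."""
    return w_max - (w_max - w_min) * t / t_max

def velocity_bounds(x_min, x_max, ratio=0.5):
    """V_max = ratio * (X_max - X_min), V_min = -V_max."""
    v_max = ratio * (x_max - x_min)
    return -v_max, v_max
```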
From Tables 2-4, we can clearly observe that the convergence accuracy of HPSO is better than that of SPSO on most benchmark functions. An interesting result is that HPSO can find the global optimal solutions on functions f2, f4, and f5, among others. We can also conclude from the data in the different search spaces that HPSO has better stability than SPSO.
In the ninth column of Tables 2-4, we report the statistical significance of the difference of the means of the two algorithms. Note that "+" indicates that the difference is significant at the 0.05 level by a two-tailed test, and "-" indicates that the difference of means is not statistically significant. Figure 4 graphically presents the comparison in terms of the convergence characteristics of the evolutionary processes in solving the selected benchmark functions in the 30-dimensional search space with N = 30 and T = 1000.

Comparison of HPSO with Other PSO Algorithms.
In this section, a comparison of HPSO with some well-known PSO algorithms which are listed in Table 5 is performed to evaluate the efficiency of the proposed algorithm.
First, we choose 10 unimodal and multimodal test functions for this evaluation. According to [22], the algorithms GPSO [2], LPSO [16], VPSO [27], FIPS [28], HPSO-TVAC [6], DMS-PSO [29], CLPSO [24], and APSO [22] are considered, as detailed in Table 5. The experimental results of these algorithms are taken directly from [22], as shown in Table 6. In this trial, the population size is N = 20, the dimension is D = 30, and the maximum number of fitness evaluations (FEs) is set to 2 x 10^5 as well. The parameter configurations of the selected algorithms are set according to their corresponding references. The inertia weight is linearly decreased from 0.9 to 0.4 in HPSO. HPSO is independently run 30 times, and the mean and SD are shown in Table 6. As seen, HPSO ranks first among the algorithms; it obtains the global minimum on functions f1, f2, f5, f9, f10, and f12 and gives good near-global optima on functions f6 and f11. Meanwhile, HPSO has the worst performance on functions f3 and f14. As for f3, APSO has the best convergence accuracy, and HPSO outperforms only CLPSO. Considering f14, DMS-PSO has the best performance.
In the next step, we choose six functions from [25] and seven algorithms, including GPSO, QIPSO [30], UPSO [31], FIPS, AFSO [25], and AFSO-Q1 [25], as detailed in Table 5. For a fair comparison, the population size is N = 30, the dimension is D = 30, and the maximum iteration number is T = 10,000 in HPSO as well, with the inertia weight linearly decreased from 0.9 to 0.4. HPSO is independently run 30 times, and the mean and SD are shown in Table 7. As seen, HPSO shows better performance and ranks first. HPSO finds the global optimal solution on functions f9, f13, f21, and f22. FIPS and UPSO have better convergence accuracy on functions f27 and f28, respectively. Therefore, it is worth saying that the proposed algorithm has considerably better performance than the other well-known PSO algorithms on unimodal and multimodal high-dimensional functions.

Conclusion
In this paper, a modified version of PSO called HPSO has been introduced to enhance the performance of SPSO. To simulate human behavior, the global worst particle is introduced into the velocity equation of SPSO, and its learning coefficient, which obeys the standard normal distribution, balances the exploration and exploitation abilities by changing the flying direction of particles. When the coefficient is positive, it is called the impelled learning coefficient, which helps enhance the exploration ability. When the coefficient is negative, it is called the penalized learning coefficient, which is beneficial for improving the exploitation ability. At the same time, the acceleration coefficients c1 and c2 have been replaced with two random numbers whose sum is 1.