A Fusion Crossover Mutation Sparrow Search Algorithm

Aiming at the inherent problems of swarm intelligence algorithms, such as falling into local extrema in the early stage and low precision in the later stage, this paper proposes an improved sparrow search algorithm (ISSA). Firstly, we introduce the idea of flight behavior from the bird swarm algorithm into SSA to maintain the diversity of the population and reduce the probability of falling into a local optimum. Secondly, we introduce the idea of crossover and mutation from the genetic algorithm into SSA to obtain a better next-generation population. These two improvements not only maintain population diversity throughout the search but also remedy the defect that the sparrow search algorithm easily falls into a local optimum at the end of the iteration. The optimization ability of the improved SSA is greatly improved.


1. Introduction
The swarm intelligence optimization algorithm is a kind of bionic algorithm that simulates the behavior of certain creatures in nature or is inspired by physical phenomena. Its central idea is to balance global search and local search in a solution space to find the optimal solution. Swarm intelligence algorithms have attracted the attention of many scholars because of their simple operation and high efficiency. At present, these algorithms are widely used in image processing [1], training neural networks [2], signal processing [3], and feature selection [4]. The common swarm intelligence optimization algorithms mainly include particle swarm optimization (PSO) [5], the grey wolf optimizer (GWO) [6], the whale optimization algorithm (WOA) [7], the salp swarm algorithm (SSO) [8], the sine cosine algorithm (SCA) [9], and the sparrow search algorithm (SSA) [10].
Swarm intelligence optimization algorithms easily fall into a local optimum and lose population diversity in later iterations. In response to this problem, many researchers have proposed introducing various learning techniques into swarm intelligence optimization algorithms. For example, Li et al. [11] conducted a comprehensive study of how traditional intelligent optimization algorithms combine with learning operators and gave an outlook on future development directions. To solve numerical optimization problems, Li and Wang [12] proposed an improved elephant herding optimization using dynamic topology and biogeography-based learning (BLEHO), while ensuring a better evolutionary process for the population through an elite strategy. Meanwhile, a global velocity strategy and a new learning strategy were introduced into EHO to update the velocity and position of individuals [13], with good results. The cuckoo search (CS) algorithm is an effective evolutionary method for global optimization, but it suffers from premature convergence and a poor balance between exploitation and exploration. To address these problems, a balanced learning function was introduced into the cuckoo search algorithm, through which a balance between exploitation and exploration was achieved [14]. Li et al. [15] proposed a new extension of CS with Q-learning steps and genetic operators, namely, the dynamic step cuckoo search algorithm (DMQL-CS). The step control strategy was treated as the action, used to check the effect of a single multistep evolution and to learn a single optimal step by calculating the Q-function value, while the crossover and mutation operations expanded the search range of the population and increased its diversity. Li et al. 
[16] introduced a learning model combining individual historical knowledge and group knowledge into the CS algorithm, while using a threshold statistical learning strategy to select the optimal learning model, exploiting the potential of individual and group knowledge learning and providing a good trade-off between exploration and exploitation. To address the premature convergence suffered by CS and its poor balance between exploration and exploitation, a dynamic CS with Taguchi opposition-based search (TOB-DCS) was proposed [17], employing two new strategies: Taguchi opposition-based search and dynamic evaluation. The Taguchi search strategy provides random generalized learning based on opposing relationships to enhance the exploration ability of the algorithm, and the dynamic evaluation strategy reduces the number of function evaluations and accelerates convergence. Wang et al. [18] and Feng et al. [19] introduced opposition-based learning into the krill herd algorithm and monarch butterfly optimization, respectively, providing a strategy for optimizing the population of individuals. Wang et al. [20] introduced chaos theory into CS to further improve its optimization performance. Lu et al. [21] studied three chaotic strategies with 11 different chaotic map functions on GWO and successfully applied the most suitable one, as the chaotic GWO, to real-world engineering problems. Inspired by the SCA algorithm, Nabil et al. [22] updated the positions of the followers in the SSO algorithm through a sine function, which helped improve the exploration stage and avoid stagnation in local areas. Liu et al. [23] introduced a chaos strategy into SSA and used adaptive inertia weights to balance the convergence speed and exploration ability of the algorithm, although one of the two must be partially sacrificed for the other. Yuan et al. 
[24] used a center-of-gravity opposition-based learning mechanism to initialize the population, giving it a better spatial distribution of solutions, and introduced a learning coefficient into the location update of the discoverer to improve the global search ability of the algorithm, though the local search ability of SSA was ignored. Lei et al. [25] introduced the Levy flight strategy into SSA, which improved the global optimization ability of SSA but increased its complexity. Zhang et al. [26] used the sine cosine algorithm as a hybrid component to help SSA jump out of local optima. Although all the above improvements contribute to optimization performance, certain shortcomings remain, and thus optimization performance can be further improved.
In this paper, building on these predecessors' ideas, the flight behavior of the bird swarm algorithm and the crossover and mutation ideas of the genetic algorithm are introduced into SSA, assisted by an improved tent chaotic map. The experimental results show that the proposed ISSA is effective and feasible in terms of convergence accuracy, stability, convergence speed, and comprehensive performance. The rest of this paper is organized as follows. Section 2 introduces SSA and its improvements. Section 3 verifies the effectiveness of the improved algorithm through experiments on test functions. Section 4 concludes and discusses future work on the proposed algorithm.

2. Sparrow Search Algorithm and Its Improvement

2.1. Sparrow Search Algorithm

Xue and Shen [10] proposed the sparrow search algorithm. The idea of the algorithm comes from the foraging and antipredation behaviors of sparrows, which can be abstracted as an explorer-follower-forewarner model. The explorer has a high energy reserve and a high fitness value; it provides the foraging area and direction for the followers. The followers follow the explorer with the best fitness value to find food and increase their own energy reserves and fitness values. Some followers may also constantly monitor the explorer to compete for food. A forewarner sounds an alarm when it becomes aware of danger and, at the same time, quickly moves to a safe area to obtain a better position. Sparrows in the middle of the flock randomly walk close to other sparrows, which is an antipredation behavior. At the same time, if the alarm value is greater than the safety threshold, the explorer takes all the followers away from the dangerous area.
In SSA, assume that there are N sparrows in a D-dimensional search space. The position of each sparrow is given by

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1D} \\ x_{21} & x_{22} & \cdots & x_{2D} \\ \vdots & \vdots & & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{ND} \end{bmatrix}, \tag{1}$$

where i = 1, 2, ..., N, d = 1, 2, ..., D, and x_{id} represents the position of the ith sparrow in the dth dimension. As the explorer guides the movement of the entire sparrow group, its search for food can occur anywhere. Therefore, the location of the explorer is updated as follows:

$$x_{id}^{t+1} = \begin{cases} x_{id}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot iter_{\max}}\right), & R_2 < ST, \\[2mm] x_{id}^{t} + Q \cdot L, & R_2 \ge ST, \end{cases} \tag{2}$$

where t represents the current iteration, iter_max is the maximum number of iterations, α is a random number in (0, 1], Q is a random number following a normal distribution, and L represents a 1 × d matrix in which each element is 1. R_2 ∈ [0, 1] represents the alarm value, and ST ∈ [0.5, 1] represents the safety threshold. If R_2 < ST, there are no predators around and the explorer enters the wide search mode. If R_2 ≥ ST, some sparrows have found natural predators, and all sparrows need to fly quickly to a safe area.
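As a concrete illustration, the explorer position update described above can be sketched in Python (a minimal sketch; the list-based population layout and the parameter sampling are illustrative assumptions, not the authors' implementation):

```python
import math
import random

def update_explorers(X, iter_max, R2, ST):
    """Sketch of the SSA explorer (producer) update described above.

    X is a list of position vectors, ordered so that index i = 1 is the
    best explorer. If R2 < ST (no predators), positions shrink
    exponentially with the sparrow index; otherwise every explorer takes
    a Gaussian step Q in all dimensions (the Q*L term).
    """
    alpha = random.uniform(1e-12, 1.0)        # random number in (0, 1]
    new_positions = []
    for i, x in enumerate(X, start=1):
        if R2 < ST:                           # wide search mode
            new_positions.append([v * math.exp(-i / (alpha * iter_max)) for v in x])
        else:                                 # predators found: fly away
            Q = random.gauss(0.0, 1.0)
            new_positions.append([v + Q for v in x])
    return new_positions
```

Because exp(−i/(α·iter_max)) < 1, every coordinate shrinks toward zero when R_2 < ST, which is the behavior the BSA-based modification later in the paper addresses.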
Mathematical Problems in Engineering

The followers follow the explorer to find food and may compete with the explorer for food to increase their own intake. The position update equation is as follows:

$$x_{id}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{xworst_{d}^{t} - x_{id}^{t}}{i^{2}}\right), & i > \dfrac{n}{2}, \\[2mm] xbest_{d}^{t+1} + \left|x_{id}^{t} - xbest_{d}^{t+1}\right| \cdot A^{+} \cdot L, & \text{otherwise}, \end{cases} \tag{3}$$

where xworst_d^t represents the global worst position in the dth dimension at the tth iteration, xbest_d^{t+1} represents the best position of the explorer in the dth dimension at the (t + 1)th iteration, and A is a 1 × d matrix whose elements are randomly assigned 1 or −1, with A^+ = A^T(AA^T)^{−1}. When i > (n/2), the ith follower with poor fitness is most likely to starve; otherwise, the ith follower forages at a random location near the best position of the explorer.
Assuming that the forewarner sparrows account for about 10% to 20% of the sparrow population and that their initial positions are randomly determined, the mathematical model can be expressed as follows:

$$x_{id}^{t+1} = \begin{cases} xbest_{d}^{t} + \beta \cdot \left|x_{id}^{t} - xbest_{d}^{t}\right|, & f_i \ne f_g, \\[2mm] x_{id}^{t} + K \cdot \left(\dfrac{\left|x_{id}^{t} - xworst_{d}^{t}\right|}{(f_i - f_w) + \varepsilon}\right), & f_i = f_g, \end{cases} \tag{4}$$

where β is a step-size control parameter drawn from a normal distribution with mean 0 and variance 1; K is a random number in [−1, 1]; f_i, f_g, and f_w represent, respectively, the current fitness value of the sparrow, the current global best fitness value, and the current global worst fitness value; and ε is a small constant that avoids division by zero. f_i ≠ f_g indicates a sparrow at the edge of the group, while f_i = f_g indicates that a sparrow in the middle of the group is aware of the danger and needs to move closer to the others.
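A sketch of the follower and forewarner updates just described (the A⁺·L term of the follower update is simplified to a random ±1 sign per dimension, which is an illustrative assumption):

```python
import math
import random

def update_follower(i, x, x_best, x_worst, n):
    """Follower update: i is the fitness rank of the sparrow (1 = best)."""
    if i > n / 2:
        # a poorly ranked, hungry follower flies elsewhere for food
        Q = random.gauss(0.0, 1.0)
        return [Q * math.exp((w - v) / (i * i)) for v, w in zip(x, x_worst)]
    # otherwise forage near the explorer's best position
    return [b + abs(v - b) * random.choice((-1.0, 1.0))
            for v, b in zip(x, x_best)]

def update_forewarner(x, x_best, x_worst, f_i, f_g, f_w, eps=1e-50):
    """Forewarner update: edge sparrows move toward the best position,
    middle sparrows move relative to the worst position."""
    beta = random.gauss(0.0, 1.0)             # step-size control, N(0, 1)
    K = random.uniform(-1.0, 1.0)
    if f_i != f_g:                            # sparrow at the edge of the group
        return [b + beta * abs(v - b) for v, b in zip(x, x_best)]
    return [v + K * abs(v - w) / ((f_i - f_w) + eps)
            for v, w in zip(x, x_worst)]
```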

2.2. Initial Population

The quality and diversity of the initial population in a swarm intelligence optimization algorithm have a great impact on its optimization performance. A high-quality initial population can increase the convergence speed of the algorithm and help find the global optimal solution. The standard SSA algorithm has no prior knowledge, and the initial population generated by random initialization easily leads to poor population diversity and uneven distribution. Therefore, the initialization of the sparrow population strongly affects the search accuracy of the SSA algorithm. In order to maintain the diversity of the population, this paper fuses an improved tent chaotic map with an elite opposition-based learning strategy to initialize the population, replacing the random population generation of the SSA algorithm. We first use the improved tent chaotic map to make the initial population distribution more uniform, and we then exploit the fact that elite individuals carry more effective information in the elite opposition-based learning strategy to construct superior individuals and further improve the quality of the population.

2.2.1. Improved Tent Chaotic Sequence

Tent chaos, as a type of chaos, is a more complicated nonlinear dynamic behavior than logistic chaos. The study by Shan et al. [27] has shown that the tent chaotic map has better traversal uniformity and faster search speed than logistic chaos. Using the randomness, ergodicity, and regularity of the tent chaotic sequence to optimize the SSA algorithm can effectively maintain the diversity of the algorithm's population, thereby improving the global search capability. The expression of the tent chaotic map is as follows:

$$y_{i+1} = \begin{cases} 2y_i, & 0 \le y_i \le \dfrac{1}{2}, \\[2mm] 2(1 - y_i), & \dfrac{1}{2} < y_i \le 1. \end{cases} \tag{5}$$

Analysis of equation (5) shows that the tent chaotic sequence is not perfect: it contains points with small periods and points with unstable periods. To avoid falling onto these points during iteration, a random variable, rand(0, 1) × (1/N), is introduced into the tent chaos expression. The improved tent chaotic map is

$$y_{i+1} = \begin{cases} 2y_i + \text{rand}(0,1) \times \dfrac{1}{N}, & 0 \le y_i \le \dfrac{1}{2}, \\[2mm] 2(1 - y_i) + \text{rand}(0,1) \times \dfrac{1}{N}, & \dfrac{1}{2} < y_i \le 1, \end{cases} \tag{6}$$

which, after the Bernoulli shift transformation, is expressed as

$$y_{i+1} = (2y_i) \bmod 1 + \text{rand}(0,1) \times \dfrac{1}{N}. \tag{7}$$

According to the characteristics of the tent chaotic map, the sparrow population is initialized in the feasible region as

$$x_{id} = lb_{id} + q \cdot (ub_{id} - lb_{id}), \tag{8}$$

where q ∈ [0, 1] is the chaotic value generated by equation (7), and lb_id and ub_id represent the lower and upper bounds of the feasible solution, respectively.
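The improved map and the chaotic initialization can be sketched as follows (a minimal sketch; the seed value y0 is an illustrative assumption):

```python
import random

def improved_tent(y, N):
    """One step of the improved tent map after the Bernoulli shift:
    y <- (2y mod 1) + rand(0,1)/N. The random term avoids the small and
    unstable periodic points of the plain tent map; the final mod keeps
    the chaotic variable inside [0, 1)."""
    return ((2.0 * y) % 1.0 + random.random() / N) % 1.0

def chaotic_init(N, D, lb, ub, y0=0.37):
    """Initialize N sparrows in [lb, ub]^D using x = lb + q*(ub - lb)."""
    population, y = [], y0
    for _ in range(N):
        individual = []
        for d in range(D):
            y = improved_tent(y, N)
            individual.append(lb[d] + y * (ub[d] - lb[d]))
        population.append(individual)
    return population
```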

2.2.2. Elite Opposition-Based Learning Strategy

Opposition-based learning (OBL) is a relatively new strategy in computational intelligence. A great number of studies have shown that an opposition-based solution is often closer to the global optimal solution than the existing solutions. Therefore, using this strategy can effectively enhance the diversity of the population and, to a certain extent, prevent the algorithm from falling into a local optimum. The elite opposition-based learning strategy builds on OBL to overcome its drawback that the current solution and the resulting opposition-based solution are not necessarily any closer to the global optimum in the current search space. The strategy selects the elite individuals of the current solution set to construct the opposition-based solutions, thus avoiding the invalid opposition-based solutions generated by nonelite individuals. Assuming that s_i is an elite individual, its opposition-based solution s'_i is defined as follows:

$$s'_{i,j} = \delta \cdot (lb_j + ub_j) - s_{i,j}, \tag{9}$$

where s_{i,j} is the jth-dimension component of the elite solution s_i, j = 1, 2, ..., D, δ is a random value in [0, 1], and lb_j = min_i(s_{i,j}) and ub_j = max_i(s_{i,j}) are the lower and upper bounds of the jth-dimension search space, respectively. After the opposition-based solution s'_i is obtained, any component that crosses the bounds is reset:

$$s'_{i,j} = \text{rand}(lb_j, ub_j), \quad \text{if } s'_{i,j} < lb_j \text{ or } s'_{i,j} > ub_j, \tag{10}$$

where rand(lb_j, ub_j) is a random value in [lb_j, ub_j].
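In code, the elite opposition step might look like this (a sketch; the boundary handling follows the resampling rule above):

```python
import random

def elite_opposition(elites, delta=None):
    """Opposition-based solutions for a set of elite individuals:
    s'_ij = delta*(lb_j + ub_j) - s_ij, where lb_j/ub_j are the
    per-dimension min/max over the elites; out-of-range components are
    resampled uniformly in [lb_j, ub_j]."""
    D = len(elites[0])
    lb = [min(e[j] for e in elites) for j in range(D)]
    ub = [max(e[j] for e in elites) for j in range(D)]
    d = random.random() if delta is None else delta
    opposition = []
    for e in elites:
        row = []
        for j in range(D):
            v = d * (lb[j] + ub[j]) - e[j]
            if v < lb[j] or v > ub[j]:
                v = random.uniform(lb[j], ub[j])   # boundary repair
            row.append(v)
        opposition.append(row)
    return opposition
```

With delta = 1 the operation mirrors each elite through the midpoint of the elites' bounding box, e.g. `elite_opposition([[0, 1], [2, 3]], delta=1.0)` returns `[[2.0, 3.0], [0.0, 1.0]]`.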

2.2.3. Population Initialization

According to the two methods described in Sections 2.2.1 and 2.2.2, the pseudocode of the population initialization used in this paper is shown in Algorithm 1.

2.3. Bird Swarm Algorithm Strategy

The bird swarm algorithm (BSA) was proposed by Meng et al. [28] in 2015 based on the flight, foraging, and vigilance behaviors of birds. Similar to SSA, the BSA also has explorers and followers. The equations for updating their locations are as follows:

$$x_{i,d}^{t+1} = x_{i,d}^{t} + \text{randn}(0,1) \cdot x_{i,d}^{t}, \qquad x_{i,d}^{t+1} = x_{i,d}^{t} + \left(x_{k,d}^{t} - x_{i,d}^{t}\right) \cdot FL \cdot \text{rand}(0,1), \tag{11}$$

where randn(0, 1) represents a Gaussian random number with mean 0 and standard deviation 1, x_k is the explorer being followed, and FL represents the probability that a follower follows the explorer to find food. In SSA, when R_2 < ST, the position of each individual explorer sparrow shrinks in every dimension as the iterations proceed, while the explorers in the BSA do not suffer from this defect, as shown in Figure 1. Therefore, borrowing the explorer idea of BSA, the position update of the explorer in SSA is modified as follows:

$$x_{id}^{t+1} = \begin{cases} x_{id}^{t} + \text{randn}(0,1) \cdot x_{id}^{t}, & R_2 < ST, \\ x_{id}^{t} + Q \cdot L, & R_2 \ge ST. \end{cases} \tag{12}$$

At the same time, in SSA, the followers approaching the best position in all dimensions leads to rapid convergence but also reduces the diversity of the population, making it easy for the algorithm to fall into a local optimum. In the BSA, the follower approaches the explorer with a certain probability; while ensuring global convergence and population diversity, the local optimum can thus be avoided effectively. Therefore, the follower idea of BSA is introduced into SSA, and the position update of the improved SSA follower becomes

$$x_{id}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{xworst_{d}^{t} - x_{id}^{t}}{i^{2}}\right), & i > \dfrac{n}{2}, \\[2mm] x_{id}^{t} + \left(xbest_{d}^{t+1} - x_{id}^{t}\right) \cdot FL \cdot \text{rand}(0,1), & \text{otherwise}. \end{cases} \tag{13}$$
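The borrowed BSA flight behavior can be sketched as follows (a simplified sketch: the producer/scrounger split and the FL value are illustrative assumptions, not the exact ISSA update):

```python
import random

def bsa_flight(X, fitness, FL=0.5):
    """Flight behavior in the spirit of BSA [28]: the better half act as
    producers and perturb their own positions with a Gaussian factor
    (x += randn(0,1)*x), while the rest act as scroungers and move
    toward a randomly chosen producer, scaled by the probability FL."""
    order = sorted(range(len(X)), key=lambda i: fitness[i])
    producers = set(order[: max(1, len(X) // 2)])
    new_positions = []
    for i, x in enumerate(X):
        if i in producers:
            r = random.gauss(0.0, 1.0)
            new_positions.append([v + r * v for v in x])
        else:
            k = random.choice(sorted(producers))
            new_positions.append([v + (pk - v) * FL * random.random()
                                  for v, pk in zip(x, X[k])])
    return new_positions
```

Note how a producer's own-position perturbation never collapses toward zero the way the exponential factor in the original SSA explorer update does.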

2.4. Cauchy Mutation

The Cauchy mutation comes from the Cauchy distribution, a continuous probability distribution. The probability density of the one-dimensional Cauchy distribution is

$$f(x) = \frac{1}{\pi} \cdot \frac{a}{a^{2} + x^{2}}, \qquad -\infty < x < +\infty. \tag{14}$$

When a = 1, the distribution is the standard Cauchy distribution. Figure 2 shows the probability density curves of the Cauchy distribution and the Gaussian distribution. As the figure shows, the main feature of the Cauchy distribution is that, compared with the Gaussian distribution, its peak at zero is lower and its descent from the peak toward zero is more gradual, giving it longer tails. Therefore, the Cauchy mutation has a stronger perturbation ability than the Gaussian mutation, and the range of mutation is more uniform. Introducing the Cauchy mutation into the SSA algorithm takes full advantage of the perturbation ability of the Cauchy operator and improves the algorithm's global optimization ability.

The Cauchy mutation equation is

$$\text{mutation}(x) = x \cdot \left(1 + \tan\left(\pi (u - 0.5)\right)\right), \tag{15}$$

where x is the original individual position, mutation(x) is the individual position after the Cauchy mutation, and u is a random number in (0, 1).
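A sketch of the Cauchy mutation: tan(π(u − 0.5)) is the inverse CDF of the standard Cauchy distribution, so it turns the uniform number u into a standard Cauchy deviate (the multiplicative form x·(1 + C) is a common convention and an assumption here):

```python
import math
import random

def cauchy_mutation(x):
    """Perturb an individual with a heavy-tailed Cauchy step:
    tan(pi*(u - 0.5)) with u ~ U(0,1) is a standard Cauchy deviate, so
    most steps are small but occasional large jumps escape local optima."""
    u = random.random()
    c = math.tan(math.pi * (u - 0.5))
    return [v * (1.0 + c) for v in x]
```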

2.5. Chaotic Perturbance

The purpose of introducing chaotic perturbation into the algorithm is to prevent it from falling into a local optimum and to improve the global search ability and optimization accuracy. The steps are as follows:
(1) Generate chaotic variables Y_d through equation (7).
(2) Map the chaotic variables into the solution space of the problem to be solved:

$$newX_d = \min{}_d + Y_d \cdot (\max{}_d - \min{}_d), \tag{16}$$

where min_d and max_d are the minimum and maximum values of the dth-dimension variable newX_d.
(3) Perturb the selected individual with the chaotic variable:

$$newX'_d = \frac{X'_d + newX_d}{2}, \tag{17}$$

where X' is the individual that needs the chaotic perturbance, newX is the amount of chaotic perturbance generated, and newX' is the individual after the chaotic perturbance.
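The steps above can be sketched as follows (the chaotic generator and the averaging of the final step follow the description; N as the population size and the seed y0 are assumptions):

```python
import random

def chaotic_perturb(x_prime, mins, maxs, N=30, y0=0.61):
    """Chaotic perturbation of one individual: generate a chaotic value
    per dimension with the improved tent map, map it into
    [min_d, max_d], and average it with the original coordinate."""
    y = y0
    perturbed = []
    for v, lo, hi in zip(x_prime, mins, maxs):
        y = ((2.0 * y) % 1.0 + random.random() / N) % 1.0   # tent map step
        chaotic_point = lo + y * (hi - lo)                  # map to space
        perturbed.append((v + chaotic_point) / 2.0)         # blend
    return perturbed
```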

2.6. Introduction of Genetic Algorithm

The genetic algorithm (GA) is introduced into the SSA algorithm [29]. Owing to its position update equations, the core idea of the SSA algorithm is position transfer, that is, improving population quality by sharing useful information among individual sparrows and by self-learning between individuals. After the genetic algorithm is introduced, the improved individuals obtain a better next-generation population through crossover and mutation. Moreover, the SSA algorithm then not only has the powerful global search ability of the genetic algorithm but also retains the position transfer idea of the SSA algorithm; it makes full use of the population and individual information that the genetic algorithm ignores, and its optimization performance is better.

At the same time, the choice of the crossover probability p_c and the mutation probability p_m in the genetic algorithm is one of the important factors affecting the optimization ability of the algorithm. If p_c is too small, the generation of new individuals during iteration slows down, which can lead to early termination of the search. If p_c is too large, too many new individuals are generated, which can destroy the excellent individuals already present in the group. If p_m is too small, the ability to generate new individuals by mutation is weak, and many excellent genes are lost prematurely without entering the next generation, which is not conducive to maintaining the diversity of the population. Lastly, if p_m is too large, the algorithm degenerates into a random search. Therefore, this paper improves the crossover rate and the mutation rate and proposes an adaptive crossover and mutation strategy based on a golden ratio index function, so that the good individuals at a given moment of evolution can also change. In the improved equations (18) and (19), p_c1 is the maximum crossover probability, set to 0.7, and p_m1 is the maximum mutation probability, set to 0.01. Meanwhile, the golden ratio is introduced into the equations, and the optimal solutions are approached step by step according to the principles of equal ratio, symmetric contraction, and optimal selection; this improves the speed of the global optimal solution search. A flow chart of the genetic algorithm introduced into SSA is shown in Figure 3.

Algorithm 1: Pseudocode of the initialization population.
  Initialize the population S using equation (8)
  Sort S according to the fitness value
  Select the better first half of the individuals from S to form the elite population E
  Obtain the opposition-based population OE using equations (9) and (10)
  Combine S and OE to obtain a new population
  Select the N individuals with the best fitness from the new population to form the initial population
In Figure 3, an individual sparrow in SSA is regarded as a chromosome in GA. In the Nth-generation group, individual sparrows enter the (N + 1)th generation after being improved, crossed over, and mutated. The steps for introducing GA into SSA are as follows: (1) Improved selection operator: in each generation, first calculate the fitness value of each individual sparrow, sort by fitness value, and choose the top half of the best individuals as excellent samples to be improved through the SSA algorithm (these samples thus enter the next generation). (2) Crossover and mutation operator: to obtain the remaining p/2 next-generation individuals, select outstanding individuals from the SSA population as parents, and generate new offspring for the next generation through the dynamic crossover and mutation operators.
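The two steps can be sketched as a single generation update (a sketch: the SSA improvement of the promoted half is abstracted away, fixed p_c/p_m stand in for the adaptive probabilities, and the crossover/mutation operators are supplied by the caller):

```python
import random

def next_generation(population, fitness, crossover, mutate,
                    p_c=0.7, p_m=0.01):
    """One GA-in-SSA generation: the better half is promoted directly
    (in ISSA it is first improved by the SSA position updates), and the
    rest are bred from those promoted individuals by crossover and
    mutation."""
    ranked = sorted(population, key=fitness)        # minimization assumed
    promoted = ranked[: len(population) // 2]
    offspring = []
    while len(offspring) < len(population) - len(promoted):
        mother, father = random.sample(promoted, 2)
        child = crossover(mother, father) if random.random() < p_c else mother[:]
        if random.random() < p_m:
            child = mutate(child)
        offspring.append(child)
    return promoted + offspring
```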

2.7. Improved Sparrow Search Algorithm

The pseudocode of the improved sparrow search algorithm is given in Algorithm 2.

3. Experimental Results and Analysis
In order to verify the optimization performance of the ISSA algorithm, the particle swarm optimization algorithm, the grey wolf optimizer, the whale optimization algorithm, the sine cosine algorithm, the salp swarm algorithm, the basic sparrow search algorithm, and ISSA were used in simulation experiments on 15 benchmark functions. The convergence accuracy, stability, convergence speed, improvement advantages, and comprehensive performance of the ISSA algorithm were analyzed. The simulation experiments were performed on a Windows 10 64-bit operating system with an Intel Core i7 CPU at 2.60 GHz, 16 GB of RAM, and MATLAB R2016b. The parameters of the algorithms are shown in Table 1.

3.1. Comparative Experiment with Benchmark Functions

The benchmark functions are shown in Table 2: f1-f6 are unimodal functions, f7-f10 are multimodal functions, and f11-f15 are fixed-dimension functions. In order to make the experimental results fair and objective, the number of individuals in each algorithm is set to 30, and the maximum number of iterations is 500. Fifty independent simulation experiments were performed with each algorithm on each benchmark function, and the average value, optimal value, and standard deviation over the 50 experiments were calculated; the results are shown in Table 3.
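The bookkeeping behind such a comparison (50 independent runs per function, then mean, best, and standard deviation) is straightforward to reproduce; here with a random-search stand-in for the optimizers, purely to show the protocol:

```python
import random
import statistics

def sphere(x):
    """Unimodal benchmark (f1-style), optimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(fn, iters=200, D=5, lo=-5.0, hi=5.0):
    """Stand-in optimizer: best-of-iters uniform sampling."""
    best = float("inf")
    for _ in range(iters):
        best = min(best, fn([random.uniform(lo, hi) for _ in range(D)]))
    return best

def evaluate(optimizer, fn, runs=50):
    """Report (mean, best, std) over independent runs, as in Table 3."""
    results = [optimizer(fn) for _ in range(runs)]
    return statistics.mean(results), min(results), statistics.pstdev(results)
```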
For the unimodal functions f1-f4, ISSA accurately finds the optimal value of zero, and the average value and standard deviation are also zero. Although SSA can find the optimal value of zero for functions f1 and f2, its average values and standard deviations are not zero. The optimization performance of the other five algorithms can only approach zero. Among them, WOA performs better on f1 and f2, with optimization performance several orders of magnitude higher than that of the other four algorithms. On the unimodal functions f5 and f6, the optimization performance of ISSA is only one to two orders of magnitude higher than that of SSA; meanwhile, the remaining five algorithms do not differ much from SSA. For the multimodal functions f7-f10, ISSA, SSA, and WOA all find the optimal value zero on f7, and the average value and standard deviation are all zero. On f8, ISSA and SSA stop searching after finding a point that is infinitesimally close to the optimal value and show the same stability; WOA is slightly inferior and not as stable as ISSA and SSA. On f9, ISSA, SSA, GWO, and WOA also find the optimal value zero, but GWO and WOA are less stable. On f10, the performance of the ISSA algorithm does not improve much, with the optimal value only three orders of magnitude better than that of SSA; moreover, ISSA and SSA have stability of the same order of magnitude. For the fixed-dimension functions, all seven algorithms find the optimal value on both f11 and f13. The standard deviation of SSO on f11 is the lowest, and the standard deviation of PSO on f13 is the lowest, indicating that they are the most stable there. On f12, only ISSA and SSA find the optimal value; the stability of ISSA is three orders of magnitude higher than that of SSA, and the remaining algorithms come infinitesimally close to the optimal value. On f14, only SCA and WOA fail to find the optimal value, and the optimization capabilities of the other algorithms do not differ much. On f15, only SCA fails to find the optimal value, whereas ISSA has the best stability, 14 orders of magnitude better than that of the other algorithms. Therefore, the performance of ISSA is greatly improved on the unimodal and multimodal functions. All the algorithms perform well on the fixed-dimension functions, whose optimal values are relatively easy to find, so ISSA's improvement there is weak. Figure 4 shows the convergence curves of the different algorithms on the benchmark functions.

Algorithm 2: Pseudocode of the ISSA.
  Initialize the parameters of SSA, for instance, the lower and upper bounds lb and ub and the maximum number of iterations T
  Initialize the sparrow population X_i (i = 1, 2, ..., N) by Algorithm 1
  Calculate the fitness value f_i of each individual sparrow
  Obtain the current optimal position, the worst position, and the corresponding optimal and worst fitness values
  While (t < T)
    For i = 1 to N
      Update the positions of the explorers using equation (12)
      Update the positions of the followers using equation (13)
      Update the positions of the forewarners using equation (4)
      Retain the better first half of the individuals promoted by SSA so that they directly enter the next generation
      Select parents from the promoted individuals and generate offspring by crossover and mutation using equations (18) and (19)
      Update the current optimal position, the worst position, and the corresponding optimal and worst fitness values
    End for
    t = t + 1
  End while
  Return X
It can be seen that, for the unimodal functions f1, f2, and f3 and the multimodal functions f7 and f9, the ISSA algorithm finds the optimal value before reaching the maximum number of iterations. From the overall performance on all functions, the convergence speed of ISSA is faster than that of the other algorithms. For the fixed-dimension functions f12, f13, f14, and f15, every algorithm is able to find the optimal value or come infinitesimally close to it. Although the convergence speed of the ISSA algorithm does not improve much on these functions, it still reflects ISSA's advantages.

3.2. Rank-Sum Test

Derrac et al. [30] suggest that a statistical test should be carried out to evaluate the performance of improved algorithms: it is inadequate to evaluate an algorithm only on the average and standard deviation, and a statistical test is needed to demonstrate that the proposed algorithm improves significantly on existing algorithms. Each result is compared independently to reflect the stability and fairness of the comparison. In this paper, the Wilcoxon rank-sum test is used to determine whether each result of ISSA is statistically significantly different from the best results of the other algorithms at p < 0.05. Table 4 shows the p values obtained by the rank-sum tests between ISSA and the other algorithms on the 15 benchmark functions. When the two algorithms under comparison both reach the optimal value, they cannot be compared; NaN in the table therefore means "not applicable," that is, a significance assessment cannot be made. R is the result of the significance assessment: "+," "-," and "=" represent ISSA's performance as superior, inferior, and equivalent to the compared algorithm, respectively. Table 4 shows that only the p values of ISSA versus WOA on f9, ISSA versus PSO on f14, and ISSA versus SSA on f12 and f13 are slightly greater than 0.05; the other p values are much less than 0.05. This indicates that the superior performance of ISSA is statistically significant. In the comparison between ISSA and SSA, R for the f12 and f13 functions is "-." This is because the optimization performance of SSA itself is already good there: both ISSA and SSA find the optimal values, but their average values differ. Although the optimization performance of ISSA on the fixed-dimension functions improves somewhat, there was not much room for improvement to begin with.
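For reference, a two-sided Wilcoxon rank-sum test under the normal approximation (adequate for 50-run samples; ties receive average ranks, and the tie correction to the variance is omitted for brevity) can be computed without any external library:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation.

    Pools both samples, assigns average ranks within tie groups, sums
    the ranks of the first sample, standardizes against the null
    mean/variance, and converts |z| to a two-sided tail probability.
    """
    pooled = sorted(list(a) + list(b))
    rank_of = {}
    i = 0
    while i < len(pooled):             # average rank over each tie group
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0     # ranks are 1-based
        i = j
    n1, n2 = len(a), len(b)
    W = sum(rank_of[v] for v in a)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Well-separated samples yield p close to 0 and interleaved samples yield large p; in practice `scipy.stats.ranksums` performs the same computation.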
In order to evaluate the performance of the algorithms from multiple aspects, the mean absolute error (MAE) is used as an evaluation index. The MAE ranking shown in Table 5 corresponds to the algorithms listed in Table 1. As can be seen, ISSA ranks first: compared with the other six algorithms, ISSA has the smallest MAE, which further demonstrates the effectiveness of ISSA.
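The MAE index is simply the average absolute gap between each algorithm's obtained values and the true optima of the benchmark set:

```python
def mean_absolute_error(obtained, optima):
    """MAE over a benchmark suite: mean of |obtained_f - optimum_f|."""
    assert len(obtained) == len(optima)
    return sum(abs(o - t) for o, t in zip(obtained, optima)) / len(obtained)
```

For example, `mean_absolute_error([1.0, 2.0], [0.0, 0.0])` is 1.5.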

3.3. Box Plot

In Section 3.1, each algorithm performed 50 experiments on each of the 15 benchmark functions, and the results are compared in Table 3. However, it is difficult to observe the various data and the relationships between them from Table 3 alone. To verify the stability and convergence of ISSA, we also conduct a box plot analysis [31]. The box plot summarizes six types of data: the maximum, upper quartile, median, lower quartile, minimum, and outliers. Figure 5 shows the plots for the 15 benchmark functions in the experiment. In the figure, "+" in each box subgraph represents an outlier, the middle line of the box represents the median, the two ends of the rectangular box are the upper and lower quartiles, and the whisker ends represent the maximum and minimum. The differences across the seven algorithms are large, so to make the box plot comparison clearer, three subgraphs are generated for each benchmark function: the comparison box plots of all seven algorithms, the comparison box plots of the remaining six algorithms after the SCA algorithm is removed, and the comparison box plots of ISSA and SSA alone.

It can be seen from Figure 5 that the median of ISSA is always close to the optimal value of each function, and the variation in optimal values across the 50 runs is the smallest. Furthermore, the distribution of the convergence values is more concentrated than for the other six algorithms, indicating that the ISSA algorithm is more robust. Comparing ISSA and SSA separately, both can find the optimal value in every run on functions f7, f8, f9, and f13, so the performance of ISSA cannot improve further there. Additionally, except for f11 and f14, SSA has more outliers than ISSA on each benchmark function, indicating that ISSA performs better in terms of stability. Although no SSA outliers appear on f11 and f14, the variation range of SSA's optimal values is relatively large, which shows that SSA is less stable than ISSA.
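The six quantities each box summarizes can be computed directly; here with the conventional 1.5×IQR rule for flagging outliers:

```python
import statistics

def box_stats(data):
    """Return the box plot summary of a sample: whisker minimum and
    maximum (over the inliers), quartiles, median, and the outliers
    flagged by the 1.5*IQR rule."""
    q1, median, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inliers = [v for v in data if lo <= v <= hi]
    outliers = [v for v in data if v < lo or v > hi]
    return {"min": min(inliers), "q1": q1, "median": median,
            "q3": q3, "max": max(inliers), "outliers": outliers}
```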

3.4. Radar Chart

In order to analyze the overall optimization capabilities of the seven algorithms, the results of the algorithms on each benchmark function are compared and ranked according to their mean values (see Table 3). If the means are equal, the standard deviations are compared; if the standard deviations are also equal, the two algorithms share the same rank. The ranking results are shown in Table 6, whose last row is the average rank of each algorithm. ISSA ranks first and thus has the best optimization performance among the seven algorithms; it is also significantly better than SSA. The remaining algorithms rank, in order, SSA, GWO/WOA, PSO, SSO, and SCA. To display the rankings of the seven algorithms on the different benchmark functions more intuitively, the results of Table 6 are plotted as a radar chart in Figure 6. The smaller the area enclosed by an algorithm's performance curve, the smaller the algorithm's ranking values and the better its optimization performance. The area enclosed by the black curve representing ISSA is the smallest, indicating that ISSA has the best overall optimization performance among the seven algorithms.
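The average-rank computation behind Table 6 takes only a few lines (a sketch; ties here are broken by iteration order rather than by standard deviation, a simplification of the rule above):

```python
def average_ranks(scores):
    """scores maps algorithm name -> list of mean results per function
    (lower is better). Rank the algorithms on each function, then
    average each algorithm's ranks across all functions."""
    algorithms = list(scores)
    n_functions = len(next(iter(scores.values())))
    totals = {a: 0 for a in algorithms}
    for f in range(n_functions):
        ordered = sorted(algorithms, key=lambda a: scores[a][f])
        for rank, a in enumerate(ordered, start=1):
            totals[a] += rank
    return {a: totals[a] / n_functions for a in algorithms}
```

For example, `average_ranks({"A": [1, 1], "B": [2, 3], "C": [3, 2]})` gives A an average rank of 1.0 and B and C 2.5 each.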

4. Conclusion
In this paper, the flight behavior of BSA and the crossover and mutation ideas of GA are introduced into SSA, and thus ISSA is proposed. Through these improvements, ISSA keeps the population diverse at the end of the iteration and keeps the population in a better state. The improved method has good global convergence performance and robustness and shows better optimization ability and better comprehensive performance than the original algorithm. In future work, it will be compared with other novel algorithms, such as Monarch Butterfly Optimization (MBO) [32], the Earthworm Optimization Algorithm (EWA) [33], Elephant Herding Optimization (EHO) [34], the Moth Search (MS) algorithm [35, 36], the Slime Mold Algorithm (SMA) [37], and Harris Hawk Optimization (HHO) [38], on specific problems, and it will be applied to practical engineering to verify the advantages of ISSA.

Data Availability
All data are included within the article.

Additional Points
Highlights. (1) The bird swarm algorithm and the genetic algorithm are integrated into SSA; (2) Cauchy mutation and the tent chaotic sequence are introduced into SSA; (3) the superior performance of ISSA is verified against several classic algorithms.

Conflicts of Interest
The authors declare that they have no conflicts of interest.