Multi-strategy sparrow search algorithm with non-uniform mutation

The sparrow search algorithm (SSA) tends to fall into local optima and exhibits a bias toward the origin. To address these drawbacks, we propose a non-uniform mutation sparrow search algorithm (NMSSA). In the population initialization stage, we introduce a tent chaos map and a generalized opposition-based learning strategy to improve population diversity; we introduce an adaptive weight to dynamically adjust the search range of the producers and improve search efficiency; and, to prevent the algorithm from deviating from the target in the early stage, we adopt a non-uniform mutation strategy that makes the followers' search more flexible and improves convergence accuracy. Finally, we use a somersault strategy to reduce the probability of falling into a local optimum. In experiments on 10 benchmark functions and the CEC2017 functions, we compare NMSSA with other algorithms, and the results verify its effectiveness. In addition, we apply NMSSA to engineering optimization problems and K-means image segmentation, and the experimental results show that NMSSA performs well in practical applications.


Introduction
Over the years, as optimization problems have grown more difficult, swarm intelligence optimization algorithms have attracted many scholars with their simple structure and high efficiency. The sparrow search algorithm (SSA), proposed by Xue and Shen (2020), imitates the foraging behaviour of sparrows in nature. The algorithm requires few parameters and is easy to implement. It has been convincingly shown to have stronger function optimization ability than grey wolf optimization (GWO) (Mirjalili et al., 2014), social group optimization (SGO) (Satapathy & Naik, 2016), particle swarm optimization (PSO) (Kennedy & Eberhart, 1997) and the genetic algorithm (GA) (Houck et al., 1995). SSA has also been successfully applied to engineering problems (Yuan et al., 2021).
However, SSA also has some defects. On the one hand, the search efficiency and convergence speed of SSA still need improvement (Fang et al., 2022), and the algorithm tends to fall into local extremes (Ma et al., 2022). On the other hand, SSA initializes the population with a simple randomized method, which limits population diversity and leads to poor performance. Many scholars have therefore proposed methods to remedy these shortcomings and further improve the optimization ability of SSA. Ouyang et al. (2021a) incorporated the lensing principle into an opposition-based learning strategy to improve the sparrow search region, used a variable spiral search strategy to improve the search ability of the followers, and finally combined a simulated annealing algorithm to refine solution quality; the algorithm was applied to 3D UAV path planning with good results. They also proposed a learning sparrow algorithm (Ouyang et al., 2021b) that enhances population diversity with a stochastic opposition-based learning strategy, introduces an improved sine cosine mechanism to guide the producer search, and adds a differential local search to obtain high-quality solutions; the algorithm was applied to robot path planning and its effectiveness verified. Li et al. (2022) proposed an improved chaotic sparrow search algorithm: a Kent chaotic map initializes the sparrow population, an adaptive T-distribution improves the search mechanism of producers and scouts, and a Lévy flight strategy improves the location update of followers. Zhang et al. (2022) improved the location update of the sparrow search algorithm to speed up convergence and used a neighbourhood search strategy to enhance the fitness of the optimal individual.
Good results were then achieved in path planning applications. Wang et al. (2021) combined opposition-based learning and Gaussian mutation to reduce the probability of stagnating at a local optimum and to balance the exploration and exploitation abilities of the algorithm; 12 benchmark functions were used to evaluate the improved method. Zhang and Ding (2021) introduced a logistic map, adaptive hyperparameters and a mutation operator to enhance global search capability, and confirmed the feasibility and practicability of the algorithm on test functions and a random state network. Although these improvements achieve good results, considerable randomness remains, as does the probability of sinking into local optima on high-difficulty problems. The details are as follows:
• Traditional chaos theory can enrich population diversity, but uncertainty and randomness remain. Because SSA already has impressive optimization ability and strong global search ability, the effect of employing traditional chaos theory is not obvious; a more flexible optimization mechanism is needed to obtain more reliable, higher-quality solutions.
• The traditional opposition-based learning strategy only produces reverse solutions in a fixed space; the optimization method is monotonous and lacks flexibility.
• Finally, since most of the above authors tune their methods on functions whose optimal solution lies at the origin, the improved algorithms tend toward the origin, which limits their universality.
In this paper, the non-uniform mutation sparrow search algorithm is proposed. In the initialization stage, a Tent map combined with generalized opposition-based learning is introduced to avoid the uncertainty of traditional chaos theory and to improve the learning ability of individuals in the population. An adaptive weight strategy is introduced to balance local search and global exploration. Non-uniform mutation is then proposed as a flexible search method. Finally, a somersault strategy is used to prevent the clustering of sparrow individuals and improve the optimization ability of the algorithm. On the benchmark functions, NMSSA is compared with SSA, the chaos sparrow search algorithm (CSSA) (Lv et al., 2020), the improved sparrow search algorithm (ISSA) (Lv et al., 2021), PSO, GWO, manta ray foraging optimization (MRFO) (Zhao et al., 2020), teaching-learning-based optimization (TLBO) (Črepinšek et al., 2012) and beetle swarm optimization (BSO) (Wang & Yang, 2018). The firefly algorithm with courtship learning (FACL) and triple distinct search dynamics (TDSD) (Li et al., 2020), both previously validated on the CEC test sets, are also compared on the CEC2017 test set. The results indicate that NMSSA has better optimization ability and universality. NMSSA is further applied to engineering optimization problems to verify its practicality, with good optimization results, and to improved K-means image segmentation, where the experiments show better segmentation performance.
The work and innovations of NMSSA are as follows:
(1) The improved Tent map and generalized opposition-based learning jointly initialize the population, promoting the optimization ability of the algorithm.
(2) Adaptive weight and non-uniform mutation are utilized to improve convergence.
(3) The somersault strategy is introduced into the adapted algorithm for the first time to reduce its tendency to fall into local optima.
(4) NMSSA is applied to engineering optimization problems and to K-means image segmentation, proving its practicality.
The structure of this paper is organized as follows. Section 2 describes and analyzes the classical SSA; Section 3 describes NMSSA in detail and gives the corresponding flow; Section 4 presents experiments with NMSSA on the standard test functions and the CEC2017 test set; Section 5 verifies the practicality of NMSSA through engineering optimization and NMSSA-based K-means image segmentation experiments; finally, the strengths and weaknesses of this work are summarized and future work is discussed.

Sparrow search algorithm
The SSA population consists of three roles: producers, followers, and watchers. According to the sparrow search strategy, the producers provide the foraging direction and area for the whole population; the followers follow the producers to forage; the watchers monitor the foraging site. During foraging, the three kinds of positions are updated continuously to acquire resources. The producer position is updated as follows:

X_{i,j}^{t+1} = X_{i,j}^{t} · exp(−i / (β₁ · M)),   if R₂ < ST
X_{i,j}^{t+1} = X_{i,j}^{t} + Q · β₂,               if R₂ ≥ ST        (1)

In formula (1), X_{i,j}^{t} is the coordinate of the i-th sparrow in the j-th dimension at the t-th iteration; M is the fixed maximum number of iterations; β₁ ∈ (0, 1] is a random variable; R₂ ∈ [0, 1] is the warning value; ST ∈ [0.5, 1] is the safety value; Q is a random number drawn from the standard normal distribution; and β₂ is a 1 × d matrix of all ones. When R₂ < ST, there is no threat in the foraging environment and the producers can search a wide area; when R₂ ≥ ST, predators are assumed to be present in the foraging environment, and all sparrows must quickly fly to a safe area to feed.
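To make the producer rule concrete, here is a minimal sketch of formula (1) in Python (NumPy assumed; variable names are ours, not the paper's):

```python
import numpy as np

def producer_update(x, i, M, R2, ST, rng):
    """One producer update following formula (1).

    x: current position of the i-th sparrow (d-vector);
    i: 1-based sparrow index; M: maximum iterations;
    R2: warning value; ST: safety value.
    """
    if R2 < ST:                              # no predator: wide search
        beta1 = rng.uniform(1e-8, 1.0)       # beta_1 in (0, 1]
        return x * np.exp(-i / (beta1 * M))
    Q = rng.standard_normal()                # Q ~ N(0, 1)
    return x + Q * np.ones_like(x)           # beta_2: 1 x d all-ones
```

In the safe branch the exponential factor lies in (0, 1), so the producer contracts its coordinates; in the danger branch every coordinate is shifted by the same normal step.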
Followers follow the producers in the food search, which amounts to a local search; individuals with better fitness obtain food first. The follower position changes dynamically according to:

X_{i,j}^{t+1} = Q · exp((X_worst^{t} − X_{i,j}^{t}) / i²),                     if i > NP/2
X_{i,j}^{t+1} = X_best^{t+1} + |X_{i,j}^{t} − X_best^{t+1}| · A⁺ · β₂,         otherwise        (2)

In formula (2), X_worst is the worst position in the current foraging environment and X_best is the best position in the current population, occupied by the producer. A is a 1 × d matrix whose elements are either 1 or −1, and A⁺ = Aᵀ(AAᵀ)⁻¹. If i > NP/2, the i-th follower is hungry, has a poor fitness value, and needs to fly elsewhere to gain energy. To model the situation in which the sparrow population is hunted by natural enemies and falls into danger, SSA maps this situation to falling into a local optimum. A number of watchers is randomly selected from the producers and followers; when danger is detected, an alarm is raised and the producers lead the other individuals to a safe place. This behaviour can be expressed mathematically as:

X_{i,j}^{t+1} = X_best^{t} + β · |X_{i,j}^{t} − X_best^{t}|,                        if f_i > f_g
X_{i,j}^{t+1} = X_{i,j}^{t} + K · (|X_{i,j}^{t} − X_worst^{t}| / (f_i − f_w + ε)),  if f_i ≤ f_g        (3)

In formula (3), X_best^{t} is the optimal position of the population at the t-th iteration; β is the step-control parameter, with β ∼ N(0, 1); K ∈ [−1, 1] is a random number; f_i is the fitness of the current individual; f_g and f_w are the best and worst fitness values in the current population, respectively; and ε is a small constant that keeps the denominator meaningful. When f_i > f_g, the sparrow is near the edge of the group and vulnerable to predation. When f_i ≤ f_g, the sparrow in the centre of the population is aware of the danger and moves closer to the others to avoid being hunted.

Multi-strategy sparrow search algorithm with non-uniform mutation
Section 3 describes and analyzes the improvement strategies proposed in this paper: Tent chaos mapping, generalized opposition-based learning, adaptive weight, non-uniform mutation, and the somersault strategy. The flowchart of the proposed algorithm and its time complexity analysis are also given in this section.

Hybrid initialization strategy
Population initialization is a very important stage of intelligent optimization algorithms. Its main purpose is to prepare sufficient coverage of the search space for the subsequent optimization and to make the population distribution more even; at the same time, it can speed up optimization and, to a certain extent, prevent premature convergence. To better control population initialization, this paper uses a hybrid strategy based on the Tent map and generalized opposition-based learning. On the one hand, the Tent map has advantages in uniformity and ergodicity over other maps (Kaur & Arora, 2018). On the other hand, the generalized opposition-based learning strategy refines the initial population, choosing the better sparrow individuals and providing a suitable starting environment that accelerates convergence. The mathematical model of the Tent map is:

x_{n+1} = 2 · x_n,         0 ≤ x_n < 0.5
x_{n+1} = 2 · (1 − x_n),   0.5 ≤ x_n ≤ 1        (4)

The Tent map generates its sequence as follows: first, a random initial value x₀ between 0 and 1 is generated, avoiding the values 0.2, 0.4, 0.6 and 0.8; then the sequence x_n is generated according to formula (4); finally, whenever x falls onto a short periodic orbit or a fixed point, the sequence is perturbed so that the Tent map re-enters the chaotic state (Zhang et al., 2008).
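A possible implementation of the Tent-map sequence described above; the perturbation rule is simplified here to re-seeding with a fresh random value, which is one of several options (an assumption on our part):

```python
import random

def tent_sequence(x0, n):
    """Chaotic sequence from the tent map of formula (4).

    x0 should avoid the short-cycle points 0.2, 0.4, 0.6 and 0.8;
    if an iterate lands on a fixed point or known short cycle it is
    re-seeded to keep the sequence chaotic.
    """
    xs, x = [], x0
    for _ in range(n):
        x = 2 * x if x < 0.5 else 2 * (1 - x)
        if x in (0.0, 0.2, 0.4, 0.6, 0.8):  # escape periodic points
            x = random.random()
        xs.append(x)
    return xs
```

Each individual of a D-dimensional population can be seeded from a different starting value x0 and then scaled into the decision bounds.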

Generalized opposition-based learning
Generalized opposition-based learning is more effective than general opposition-based learning (Wang et al., 2011) because it is more flexible and offers more diverse optimization moves. Its expression is:

x*_j = K · (a_j + b_j) − x_j        (5)

where K is a random number in (0, 1) and [a_j, b_j] is the range of the j-th dimension. If x*_j ∉ [a_j, b_j], the component is regenerated:

x*_j = rand(a_j, b_j)        (6)

The hybrid Tent-map and generalized opposition-based learning initialization proceeds in three steps:
(1) Use the chaotic sequence generated by the Tent map to initialize the sparrow population positions X_ij (i = 1, 2, …, D; j = 1, 2, …, N), where N is the population size.
(2) Following the definition of the opposite solution, generate the opposite position X*_ij for each individual of the initial population.
(3) Sort the individuals produced by the two methods by fitness and select the N fittest to form the initial population.
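The three steps above can be sketched as follows (a minimal version, assuming minimization and a population already generated by the Tent map):

```python
import random

def gobl_init(pop, a, b, fitness):
    """Steps (1)-(3): take a Tent-map population `pop`, build the
    generalized opposite of every individual (formulas (5)-(6)),
    merge, and keep the N fittest (lower fitness = better).
    """
    n = len(pop)
    opposites = []
    for x in pop:
        k = random.random()                           # K ~ U(0, 1)
        xo = [k * (aj + bj) - xj for xj, aj, bj in zip(x, a, b)]
        # regenerate any component that left its range [a_j, b_j]
        xo = [v if aj <= v <= bj else random.uniform(aj, bj)
              for v, aj, bj in zip(xo, a, b)]
        opposites.append(xo)
    merged = pop + opposites
    merged.sort(key=fitness)
    return merged[:n]
```

Because the selection pool is twice the population size, the returned best individual is never worse than the best of the original Tent-map population.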

Adaptive weight
Weighting strategies are common in particle swarm optimization, where the weight typically adapts between preset maximum and minimum values to reduce the chance of sinking into a local optimum. The producers guide the population, so this paper introduces an adaptive weight W into the producer position update to balance the search ability of the algorithm (Chander et al., 2011; Xie et al., 2002). The producers lead the population toward the optimal food source; the adaptive weight narrows the search scope in an orderly way, strengthening targeted search and reducing ineffective moves. The adaptive weight is designed as in formula (7): w varies nonlinearly between 0 and 1, so the weight is small in the early stage of the algorithm, when convergence is fast, and grows larger in the later stage, when its rate of change slows down. The modified producer position update is given in formula (8). By fusing the adaptive weight into the position update, the producers guide the population differently at different times, which makes the algorithm more flexible. As the iteration count increases, the sparrows approach the optimal position, and the growing weight accelerates this movement, so the algorithm converges faster.
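Formulas (7) and (8) did not survive extraction, so the sketch below substitutes a representative nonlinear schedule with the stated properties (near 0 early, approaching 1 late, with a slowing rate of change). Both `adaptive_weight` and the way it enters the producer update are our assumptions, not the paper's exact equations:

```python
import math

def adaptive_weight(t, M):
    # Assumed schedule standing in for formula (7): w(0) = 0,
    # w grows nonlinearly toward 1 and its slope decreases with t.
    return 1.0 - math.exp(-3.0 * t / M)

def weighted_producer_step(x, x_best, t, M):
    # Assumed form of formula (8): the weight scales the pull of
    # the producer toward the current best food source.
    w = adaptive_weight(t, M)
    return [xi + w * (bi - xi) for xi, bi in zip(x, x_best)]
```

Under this schedule the pull toward the best position is weak while the population still explores and strengthens as the run matures, matching the behaviour the paper describes.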

Non-uniform mutation
The follower position update only performs a local search behind the producers, which makes the followers' search blind and prone to premature convergence when local extreme points are encountered. A new search strategy is therefore needed to remedy this shortcoming. Mutation is a common device in population-based evolutionary algorithms: as the name implies, it frees an algorithm from the constraints of its original mechanism and produces a new search behaviour, improving flexibility (Alireza, 2011; dos Santos Coelho, 2008; Higashi & Iba, 2003). In the non-uniform mutation operation (Chauhan et al., 2021; Zhao et al., 2007), cloned and duplicated individuals undergo variations of varying magnitude that allow them to evolve, which refines the search and improves accuracy. The working principle is as follows. The mutation is performed on the j-th component of an individual. Let ub and lb be the upper and lower limits of x_ij; the mutated component is then

x'_ij = x_ij + Δ(t, ub − x_ij),   if r ≥ 0.5
x'_ij = x_ij − Δ(t, x_ij − lb),   if r < 0.5        (9)

with Δ(t, y) = y · (1 − r^((1 − t/T)^b)), where t is the current iteration, T is the set maximum number of iterations, b is a control parameter that determines the non-uniformity of the mutation, and r is a random number uniformly generated between 0 and 1. The non-uniform mutation step Δ(t, y) adaptively adjusts the step size, so that early on the search area of the algorithm extends over the whole domain and potential regions can be explored. As the run continues, the search radius shrinks probabilistically; when the algorithm nears its end, it searches only in a narrow neighbourhood of the current solution. This mechanism ensures accurate location of the optimal solution without escaping the current neighbourhood.
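A minimal sketch of the Michalewicz-style non-uniform mutation described above (direction chosen by a coin flip, which is one common convention and an assumption here):

```python
import random

def nonuniform_mutate(x, j, lb, ub, t, T, b=2.0):
    """Non-uniform mutation of the j-th component (formula (9)):
    the step Delta(t, y) = y * (1 - r**((1 - t/T)**b)) can span the
    whole range early and shrinks to zero as t approaches T."""
    r = random.random()
    delta = lambda y: y * (1.0 - r ** ((1.0 - t / T) ** b))
    out = list(x)
    if random.random() < 0.5:
        out[j] = out[j] + delta(ub - out[j])   # push toward upper bound
    else:
        out[j] = out[j] - delta(out[j] - lb)   # push toward lower bound
    return out
```

Because the step never exceeds the distance to the nearer bound, mutated components always stay inside [lb, ub], and at t = T the step vanishes entirely.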

Somersault
According to formula (3), the individual escape behaviour of a sparrow in danger is monotonous and narrow, which leads to premature convergence. Although the traditional opposition-based learning strategy can improve population diversity, it only obtains the reverse solution in a parallel space, and the sensitivity of the algorithm is poor. To overcome this defect, this paper adopts the somersault foraging strategy from MRFO. With its more diverse search moves, the somersault strategy updates the sparrow position and flexibly searches nearby reliable solutions, reducing the probability of SSA falling into a local optimum. The somersault foraging strategy takes the current optimum as the central point; each updated position lies on the line between the current position and its position symmetric about the centre point. The location update formula is:

X_i^j(t+1) = X_i^j(t) + S · (r₁ · X_best^j − r₂ · X_i^j(t)),   i = 1, 2, …, N        (10)

In formula (10), S is the somersault factor, which lets each sparrow jump between its current position and the symmetric position at every update; this widens the individual's search space and helps it move away from local optima. Following the MRFO paper, S takes the value 2. X_i^j(t) is the j-th dimension of the i-th individual at the t-th iteration; X_best^j is the current optimal position; N is the total number of sparrows; and r₁, r₂ are random numbers between 0 and 1. The somersault strategy is illustrated in Figure 1.
This strategy improves the accuracy of the solution at each iteration, and with every iteration the probability of finding the optimal solution increases. It not only alleviates the tendency to fall into local optima but also improves the optimization speed and convergence accuracy of the algorithm.
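The somersault update of formula (10) reduces to a one-liner per individual (a sketch with variable names of our choosing):

```python
import random

def somersault(x, x_best, S=2.0):
    """Somersault update (formula (10), adopted from MRFO): move to a
    point on the line between the current position and its mirror
    about the current best, with somersault factor S = 2."""
    r1, r2 = random.random(), random.random()
    return [xi + S * (r1 * bi - r2 * xi) for xi, bi in zip(x, x_best)]
```

When the best position sits at the origin, the update is a pure rescaling x_i → (1 − S·r₂)·x_i, which makes the symmetric-jump behaviour easy to see.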

Improved sparrow search algorithm with non-uniform multi-strategy
The sparrow search algorithm has performance advantages over other algorithms, but it contains a variety of random parameters, which increases randomness and the probability of falling into local extrema. This paper therefore proposes the non-uniform multi-strategy sparrow search algorithm. Generalized opposition-based learning and Tent mapping are used to initialize the population, fully preparing the sparrow individuals for optimization. An adaptive weight strategy balances the producers' search pattern; a non-uniform mutation strategy makes the followers' search more flexible and detailed; finally, the somersault strategy lets sparrows escape local optima and obtain high-quality solutions. The main steps of NMSSA are summarized in Algorithm 1:

Algorithm 1: The NMSSA algorithm
(1) Set parameters: T, P, S, POP  % T is the maximum number of iterations; P is the number of producers; S is the number of sparrows aware of the danger; POP is the population size
    Initialize the population using Tent mapping and generalized opposition-based learning
(2) t = 1
(3) Calculate fitness values
(4) while (t < T)
(5)     Sort the fitness values to find the worst and best individual positions
(6)     R2 = rand(1)
(7)     for k = 1:P
(8)         Update the producer positions according to formula (8)  % adaptive weight
(9)     end for
(10)    for j = (P+1):POP
(11)        Update the follower positions according to formulas (2) and (9)  % non-uniform mutation
(12)    end for
(13)    for h = 1:S
(14)        Update the position of each danger-aware sparrow according to formulas (3) and (10)  % somersault strategy
(15)    end for
(16)    t = t + 1
(17) end while
(18) Output the best position and its fitness value

Effectiveness analysis
To further verify the effectiveness of the improved algorithm, the individual distributions of the NMSSA and SSA algorithms on the multimodal Schwefel function are given. The Schwefel function model diagram is shown in Figure 2. The optimal value 0 is attained when each coordinate is approximately 420. The population size of both algorithms is set to 50 and the number of iterations to 10. The distributions are shown in Figures 3 and 4.
It can be seen that most NMSSA individuals are close to the optimal value and converge quickly, while after 10 generations none of the SSA individuals is close to the optimal value; they remain distributed around other local extreme points. The improved algorithm thus refines the optimization mechanism of the original algorithm, verifying the effectiveness of the improvement strategies.
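The effectiveness check above uses the Schwefel function; assuming the standard d-dimensional form (a common choice, and an assumption about the exact variant used), it can be written as:

```python
import math

def schwefel(x):
    """Standard Schwefel function: highly multimodal, with global
    minimum f ~= 0 at x_i = 420.9687... in every dimension."""
    d = len(x)
    return 418.9829 * d - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)
```

The optimum lying far from the origin is exactly why this function exposes the origin bias discussed in the introduction.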

Time complexity analysis
Time complexity is an important index reflecting the rationality and timeliness of an algorithm. Let the population size of NMSSA be P, the maximum number of iterations M, the problem dimension D, and the proportions of followers and scouts r₁ and r₂, respectively. The time complexity of NMSSA is then analyzed as follows. At the macro level, the time of the whole process is O(P × M × D), the same as SSA. Although NMSSA adds O(P) complexity in the initialization phase, it changes neither the structure of the algorithm nor the number of loops, so its overall time complexity remains O(P × M × D), just like the basic SSA.
Microscopically, let r be the proportion of followers, t₁ the time to compute a generalized opposite solution, t₂ the time of a non-uniform mutation, and t₃ the time of a somersault position update. The Tent-map initialization has the same time complexity as the original initialization, and the other added computations are small enough to ignore. As the algorithm flow in Figure 5 shows, compared with SSA, NMSSA adds extra terms of order O(P · t₁) at initialization and of order t₂ or t₃ per updated individual per iteration; this does not change the order of the overall complexity, while clearly improving the efficiency and precision of the algorithm. The added time cost is therefore worthwhile and necessary.

Experimental results and analysis
In this section, we test the proposed algorithm on the benchmark functions and the CEC2017 functions and compare it with other intelligent optimization algorithms; the results show that NMSSA has more outstanding optimization capability. In addition, we verify the significance of the experimental results with the rank sum test.

Benchmark function test
An algorithm's optimization capability needs to be well validated on function tests. To verify the optimization ability of NMSSA, this article first selects 10 standard test functions and compares NMSSA with nine algorithms: PSO (Kennedy & Eberhart, 1997), GWO (Mirjalili et al., 2014), SSA (Xue & Shen, 2020), ISSA (Lv et al., 2021), CSSA (Lv et al., 2020), MRFO (Zhao et al., 2020), TLBO (Črepinšek et al., 2012), the whale optimization algorithm (WOA) (Mirjalili & Lewis, 2016) and BSO (Wang & Yang, 2018). BSO is one of the most popular fusion algorithms of recent years, combining particle swarm optimization with the beetle antennae search algorithm. The specific function information is shown in Table 1. F1-F6 are simple unimodal functions, F7-F8 are complex multimodal functions, and the rest are fixed-dimension functions. F1-F8 are variable-dimension functions and are tested in 30 and 100 dimensions, respectively. To ensure fairness, each algorithm's parameters are set as given in the literature, with a population size of 100 and a maximum of 500 iterations, and each algorithm is run independently 30 times. All simulation experiments were implemented in MATLAB R2019a on a PC with an Intel(R) Core(TM) i5-10200H CPU at 2.40 GHz and 16 GB of memory. Three indicators, the average, the best and the standard deviation of the results, are calculated to comprehensively evaluate the optimization ability of each algorithm on each function. The optimization results are shown in Tables 2 and 3.
Tables 2 and 3 show that NMSSA performs better than the other algorithms on every function. In particular, on F1-F3, F6 and F9-F10 the theoretical optimum can be found with good stability, while the other algorithms all lag behind NMSSA in optimization capability. On F1 and F4, SSA itself can also find the theoretical optimum, which indicates that NMSSA does not weaken the optimization capability of SSA. Besides, the difference between the 30-dimensional and 100-dimensional results of NMSSA is small, which further illustrates its rationality and validity. To clearly show the convergence behaviour of each algorithm on the selected functions, the average convergence curves over the 30 runs are given in Figure 6.
As Figure 6 shows, the advantage of NMSSA lies in its convergence accuracy and optimization speed on every function. For unimodal functions it has an obvious speed advantage in convergence; for multimodal functions it has better resistance to local attraction and reaches higher convergence accuracy. It can therefore be judged that NMSSA creates a better search space and breaks free of the constraints of the original search mechanism.
Three indicators alone give a one-sided view of the improved algorithm's performance. To show rationality and fairness, the superiority of the improved algorithm over the others is evaluated with a statistical test. In this paper, the Wilcoxon rank test is used to determine whether there are significant differences between NMSSA and each comparison algorithm; the results are listed in Table 4. Table 4 shows clear differences between NMSSA and the other algorithms, which further illustrates the effectiveness and feasibility of the improvement.

CEC2017 function test
Tests on the benchmark functions alone cannot fully demonstrate the universality and validity of the algorithm. To better emphasize the practicability of NMSSA and to rule out any dependence on an optimal value of 0, seven algorithms are tested on the CEC2017 test functions, with 10000 × dim evaluations, dimension 30 and a population size of 100. Note that only a few recently proposed variants, such as CSSA, FACL and TDSD, are compared here; FACL and TDSD have been validated on the CEC test sets. Table 5 lists the precise parameters of each algorithm. Each algorithm runs independently 30 times, and five indicators are then calculated: best, worst, median, mean and standard deviation. These indicators reflect the optimization ability of each algorithm, and the best value of each metric is shown in bold. At the same time, the Wilcoxon rank test is used at the 5% significance level to test for significant differences among the algorithms: '+' means that NMSSA performs better than the comparison algorithm, '−' means the opposite, and '=' means equivalent performance. The results are shown in Tables 6 and 7. Note that most of the data come from the literature (Ouyang et al., 2021b), and all experiments were carried out on the same equipment.
Table 6 shows that NMSSA has a strong advantage on most functions and performs poorly only on F10, F13, F28 and F30. Overall, NMSSA beats the other algorithms with clear differences. In light of the no free lunch theorem (Wolpert & Macready, 1997), it can be concluded that NMSSA has strong universality and keeps good performance even when the theoretical optimum is not 0.

Applications of NMSSA on engineering problems
To highlight the optimization capability of NMSSA and verify its practicality, two engineering design optimization experiments are conducted in this section: cantilever beam design and pressure vessel design. These problems carry many constraints and thoroughly test the optimization ability of the algorithm.

Cantilever beam design
A stepped cantilever beam, shown in Figure 7, consists of five hollow blocks of rectangular cross-section with constant thickness. The beam is rigidly supported at end 1 and subject to a vertical force at the free end 5 (Gandomi et al., 2013). The optimization objective is to minimize the weight of the cantilever beam while satisfying the constraints. The variables are the widths (or heights) of the rectangular hollow blocks, each taking values in the range [0.01, 100] (Mirjalili, 2015). The mathematical model of the problem can be expressed as:

min f(x) = 0.0624 · (x₁ + x₂ + x₃ + x₄ + x₅)
s.t. g(x) = 61/x₁³ + 37/x₂³ + 19/x₃³ + 7/x₄³ + 1/x₅³ − 1 ≤ 0

In the cantilever beam design optimization experiment, NMSSA was run 30 times, the optimal value was recorded as f(x), and the results were compared with those of the six algorithms reported in the literature (Bayzidi et al., 2021); the detailed comparison is shown in Table 7. NMSSA achieves a better optimization effect than CS and MFO, and its difference from the remaining algorithms is very small. Overall, NMSSA achieves good results on the cantilever beam design problem.
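The beam model can be coded directly; the sketch below follows the standard formulation of this benchmark from the cited Gandomi et al. (2013) line of work (an assumption about which variant the paper uses):

```python
def cantilever_weight(x):
    """Objective: beam weight for block widths x = (x1, ..., x5)."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Single constraint; the design is feasible when the value <= 0."""
    coeffs = [61.0, 37.0, 19.0, 7.0, 1.0]
    return sum(c / v ** 3 for c, v in zip(coeffs, x)) - 1.0
```

A metaheuristic such as NMSSA would minimize `cantilever_weight` while penalizing positive values of `cantilever_constraint`.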

Pressure vessel design problem
The pressure vessel design problem is a classic problem in engineering optimization. Its objective is to minimize the manufacturing cost of a cylindrical vessel with hemispherical heads. The pressure vessel model is shown in Figure 8 (Bayzidi et al., 2021). As seen from the figure, the variables of the problem are the cylinder thickness T_s, the head thickness T_h, the vessel radius R, and the cylinder length L. The mathematical model of the pressure vessel is as follows:

min f(T_s, T_h, R, L) = 0.6224·T_s·R·L + 1.7781·T_h·R² + 3.1661·T_s²·L + 19.84·T_s²·R
s.t. g₁ = −T_s + 0.0193·R ≤ 0
     g₂ = −T_h + 0.00954·R ≤ 0
     g₃ = −π·R²·L − (4/3)·π·R³ + 1296000 ≤ 0
     g₄ = L − 240 ≤ 0
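A direct coding of the vessel model, using the standard benchmark formulation (the paper's own equations were lost in extraction, so the exact variant is an assumption):

```python
import math

def vessel_cost(Ts, Th, R, L):
    """Manufacturing cost of the cylindrical vessel with
    hemispherical heads (standard benchmark form)."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def vessel_constraints(Ts, Th, R, L):
    """Four constraints; each value must be <= 0 for feasibility."""
    return [
        -Ts + 0.0193 * R,                                  # shell thickness
        -Th + 0.00954 * R,                                 # head thickness
        -math.pi * R ** 2 * L
        - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0,      # volume
        L - 240.0,                                         # length limit
    ]
```

As with the beam problem, an optimizer minimizes `vessel_cost` subject to non-positive constraint values, typically via a penalty term.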

K-means image segmentation based on NMSSA
With the development of computer vision, digital image preprocessing has become very important. Image segmentation, an important part of image preprocessing, assigns pixels of the same type to the same class and thus divides the image into disjoint regions, so that the feature information of the image can be provided quickly and accurately for computer vision tasks. Depending on the segmentation scenario, the main existing methods are based on edges, regions, clustering, and so on. In this paper, K-means clustering fused with NMSSA is used to achieve image segmentation. The standard K-means algorithm initializes K clustering centres, assigns each point to the nearest centre, recalculates the centres, and iterates continuously until clustering is completed. In this paper, NMSSA is used to optimize the initial clustering centres of K-means and to find the best clustering centres so as to achieve the best segmentation effect. The objective function of this optimization problem is shown in formula (13), where X represents a gray value in the image and Y_j represents the j-th clustering centre.
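As a minimal sketch, assuming formula (13) is the usual total distance between each gray value and its nearest clustering centre, the fitness that NMSSA minimizes when searching for the k centres could be written as follows (the toy pixel values and k = 2 are illustrative assumptions):

```python
import numpy as np

def kmeans_objective(centers, gray_values):
    """Assumed fitness for formula (13): sum over all gray values X of the
    distance to the nearest clustering centre Y_j (smaller is better)."""
    centers = np.asarray(centers, dtype=np.float64)
    gray = np.asarray(gray_values, dtype=np.float64)
    d = np.abs(gray[:, None] - centers[None, :])  # distance to every centre
    return d.min(axis=1).sum()                    # nearest-centre distances

# NMSSA would search the centre positions in [0, 255]; a toy evaluation:
pixels = np.array([10.0, 12.0, 11.0, 200.0, 205.0, 198.0])
print(kmeans_objective([11.0, 201.0], pixels))  # prints 10.0
```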
In this paper, PSNR, SSIM and FSIM are used to verify the quality of image segmentation. PSNR measures the difference between the segmented image and the original image (Abd El Aziz et al., 2017); SSIM measures the similarity between the segmented image and the original image (Wang et al., 2004). The larger the value of these two indicators, the better the segmentation effect. Their calculation formulas are shown in formulas (14) and (15): PSNR = 20 · log10(255 / RMSE), where RMSE = sqrt((1 / (H × W)) · ΣΣ (I(i, j) − Seg(i, j))^2). In formula (14), RMSE is the root mean square error of the pixels, H × W is the size of the image, I(i, j) represents the gray value of the original image, and Seg(i, j) represents the gray value of the pixel after segmentation.
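Formula (14) translates directly into code; the 4 × 4 constant images below are a toy check (a uniform error of 10 gray levels gives RMSE = 10):

```python
import numpy as np

def psnr(original, segmented):
    """PSNR of formula (14): 20 * log10(255 / RMSE) over an H x W gray image."""
    original = np.asarray(original, dtype=np.float64)
    segmented = np.asarray(segmented, dtype=np.float64)
    rmse = np.sqrt(np.mean((original - segmented) ** 2))
    return 20.0 * np.log10(255.0 / rmse)

a = np.full((4, 4), 100.0)
b = np.full((4, 4), 110.0)
print(round(psnr(a, b), 2))  # 20 * log10(255 / 10) ≈ 28.13
```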

SSIM = ((2μ_x μ_y + C_1)(2σ_xy + C_2)) / ((μ_x^2 + μ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2)), where μ_x and μ_y represent the average intensity of the original image and the segmented image, respectively; σ_x^2 and σ_y^2 represent the variances (signal contrast) of the original image and the segmented image; σ_xy is the covariance between the original image and the segmented image; and C_1 and C_2 are constants used to keep the result stable.
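A global (whole-image) evaluation of formula (15) can be sketched as follows; the constants C1 = (0.01·255)^2 and C2 = (0.03·255)^2 are the common defaults from Wang et al. (2004), and the global rather than windowed computation is a simplifying assumption:

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Formula (15) computed once over the whole image (windowed SSIM would
    average this statistic over local patches instead)."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # variances sigma_x^2, sigma_y^2
    cov = np.mean((x - mx) * (y - my))  # covariance sigma_xy
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).uniform(0, 255, (8, 8))
print(round(ssim_global(img, img), 6))  # identical images give 1.0
```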
FSIM is an important indicator for judging the similarity between the original image and the segmented image (Bayzidi et al., 2021). Its value is between 0 and 1; the closer it is to 1, the better the segmentation effect. Its calculation formula is as follows: FSIM = Σ_{x∈X} S_L(x) · PC_m(x) / Σ_{x∈X} PC_m(x). In formula (16), X represents all regions of the original image, S_L(x) represents the similarity of the images before and after segmentation, and PC_m(x) represents the phase consistency mapping.

In order to demonstrate the practicality of NMSSA, this paper designs a K-means image segmentation experiment based on NMSSA. Four sets of images from different scenes were selected to verify the segmentation performance, and PSO, the flower pollination algorithm (FPA) (Yang, 2012), artificial bee colony (ABC) (Karaboga & Basturk, 2007) and the standard K-means algorithm were chosen for comparison. To ensure the objectivity of the experimental results, the parameters of NMSSA are set according to Table 5, and the parameters of the remaining algorithms are set as given in the literature. To make the segmentation results more stable, after many experiments, the K value of the K-means algorithm was set to 5, the population size of each algorithm was uniformly set to 30, and each algorithm was run 10 times. The mean values of PSNR, SSIM and FSIM were recorded; the results are shown in Figure 9 and Table 9, and the optimal data for each indicator in Table 9 are bolded.

Conclusion
In order to improve the convergence speed and search efficiency of SSA and to remedy its tendency to fall into local optima, we propose a non-uniform mutation sparrow search algorithm (NMSSA). We introduce tent chaotic mapping and generalized opposition-based learning to obtain a better initial population; we adopt an adaptive weighting strategy to optimize the discoverer's search method and incorporate a non-uniform mutation strategy into the follower's position update, thereby improving the search efficiency and convergence accuracy of the algorithm; and we adopt a somersault strategy to reduce the probability of the algorithm falling into a local optimum. We demonstrate the good optimization ability of NMSSA on the benchmark functions and the CEC2017 functions, and achieve good results on engineering optimization problems and K-means segmentation, which proves the practicality of NMSSA. However, NMSSA also has some shortcomings. On the one hand, as the dimensionality grows, the computation time increases. On the other hand, the CEC2017 results show that NMSSA's optimization ability on some individual functions is still imperfect and lags behind other algorithms. Future work will therefore focus on reducing the running time of the algorithm and further improving its ability to find the optimal solution.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This research was funded by the National Natural Science Foundation of China [grant numbers 62272418, 62102058].