Research on multi-strategy improved sparrow search optimization algorithm

Abstract: To address the sparrow search algorithm's (SSA) inadequate coverage of the search space, slow convergence and tendency to fall into local optima during iteration, a multi-strategy improved sparrow search algorithm (ISSA) is developed. First, a population dynamic adjustment strategy is used to regulate the numbers of discoverers and joiners in the sparrow population. Second, the update strategy of the digging phase of the honey badger algorithm (HBA) is combined into the joiners' position update formula to enhance the global exploration ability of the algorithm. Finally, the optimal position of the population discoverers is perturbed using a perturbation operator and the Levy flight strategy to improve the algorithm's ability to jump out of local optima. In experimental simulations, the proposed algorithm is compared against the basic sparrow search algorithm and four other swarm intelligence (SI) algorithms on 13 benchmark test functions, and the Wilcoxon rank sum test is used to determine whether the algorithm differs significantly from the others. The results show that the improved sparrow search algorithm has better convergence and solution accuracy, and its global optimization ability is greatly improved. When the proposed algorithm is applied to pilot optimization in channel estimation, the bit error rate is greatly reduced, which shows the superiority of the proposed algorithm in engineering applications.


Introduction
The sparrow search algorithm (SSA) [1] is a new SI algorithm put forward by Xue in 2020, built on observations of the predation behavior and reconnaissance mechanism of sparrow populations in nature. The algorithm has the advantages of high search accuracy, fast optimization and few parameters, which has attracted growing attention from scholars. However, like other SI algorithms, the sparrow search algorithm also has some shortcomings, such as premature convergence and a tendency to fall into local optima.
Many scholars have studied the improvement and application of the sparrow search algorithm. Gao et al. [2] put forward a multi-strategy improved evolutionary sparrow search algorithm that adds tent chaos in the population initialization phase, which speeds up convergence and improves convergence precision; the algorithm also uses a greedy strategy to make full use of each individual sparrow and strengthen its pursuit of the global optimal solution. Liu et al. [3] added circle chaotic mapping to the sparrow search algorithm to improve its global search ability during population initialization, and introduced the t-distribution into the position update formula for different iteration periods to help the algorithm jump out of local optima. Ren et al. [4] proposed a sparrow search algorithm based on sine cosine and firefly disturbance: the sine cosine algorithm with random inertia weight is added to the discoverer position update, and all sparrows are updated using the optimal sparrows obtained by the firefly disturbance method to improve the search ability. Brezočnik et al. [5] analyzed various SI algorithms for feature selection problems, provided a unified framework for SI-based feature selection and discussed the application prospects of SI-based feature selection methods in different fields. Zhang et al. [6] proposed a stochastic configuration network based on a chaotic sparrow search algorithm, which mainly uses logistic mapping, adaptive hyperparameters and mutation operators to enhance the global optimization capability of the sparrow search algorithm; since the accuracy of a stochastic configuration network depends on the allocation and selection of some network parameters, the chaotic sparrow search algorithm is used to provide better parameters for the network. Fan et al. [7] used a hybrid sparrow search algorithm to optimize the hyperparameters of a deep learning algorithm; the hybrid sparrow search algorithm combines the sparrow search algorithm with particle swarm optimization, avoiding SSA's tendency to settle on local optima while compensating for the limited search efficiency of particle swarm optimization. Dong et al. [8] used an improved multi-objective sparrow search algorithm to allocate distributed generation capacity, introducing the Levy flight strategy to strengthen the multi-objective sparrow search algorithm's ability to jump out of local optima, establishing a multi-objective optimization model covering investment cost, environmental protection and power supply quality and using the algorithm to solve it. Zhu et al. [9] used an improved sparrow search algorithm to optimize the control of a chilled water system, disturbing the sparrows with a random walk strategy to improve global search and adding Gaussian mutation in the iterative process to enhance local search, which effectively addresses the large time lag and inertia of the chilled water system. Li et al. [10] proposed an improved sparrow search algorithm to solve the hyperparameter selection problem of the support vector machine (SVM) model; through a new dynamic adaptive t-distribution mutation, the performance of the sparrow search algorithm is enhanced, and the proposed method effectively improves prediction accuracy.
Although the revised approaches above have improved the algorithm's search performance to some degree, there is still much room for advancement. To improve the algorithm's convergence speed and convergence accuracy simultaneously, we propose a multi-strategy improved sparrow search algorithm building on the existing research. The major contributions of this article are as follows: 1) A multi-strategy improved sparrow search algorithm (ISSA) is proposed, comprising mainly the following three points. a) The numbers of discoverers and joiners in the population are adjusted dynamically, which helps the algorithm balance local search and global search. b) The update strategy of the digging phase of the honey badger algorithm (HBA) is introduced to improve the position update of the joiners in SSA and enhance the global exploration capability of the algorithm. c) The optimal position of the discoverers is disturbed by the perturbation operator and Levy flight to increase the algorithm's capacity to depart from local optima. 2) The effectiveness of the proposed algorithm is verified against five baseline algorithms on 13 benchmark functions; the results demonstrate that it has faster convergence speed and higher convergence accuracy in solving function optimization problems.
3) ISSA is applied to the pilot optimization problem in channel estimation, where the bit error rate is greatly reduced, indicating the superiority of the proposed algorithm in engineering applications. The remainder of this article is organized as follows: The second section reviews the basic sparrow search algorithm, describing the population division and update rules of the original algorithm. The third section presents the improved sparrow search algorithm (ISSA) and introduces the three improvement strategies. The fourth section reports simulation experiments on unimodal and multimodal test functions along with Wilcoxon rank sum tests. The fifth section applies the proposed algorithm to channel estimation in an OFDM system, and the sixth section concludes.

Basic sparrow search algorithm
The sparrow search algorithm is modeled on the predation behavior and reconnaissance mechanism of sparrow populations in nature. The sparrow population is divided into two categories, discoverers and joiners: the discoverers account for a fixed proportion of the population and supply foraging guidance for the whole flock, while the remaining sparrows are joiners, which search for food around the discoverers with the best fitness values. Additionally, certain sparrows are chosen at random to serve as scouts, adding an early-warning mechanism.
In SSA, discoverers are sparrows with higher fitness values, and they supply foraging directions and locations for the joiners. The position update formula of the discoverers is as follows:

$$X_{i,j}^{t+1}=\begin{cases}X_{i,j}^{t}\cdot \exp\left(\dfrac{-i}{\alpha \cdot T_{max}}\right), & R_{2}<ST\\[2mm] X_{i,j}^{t}+Q\cdot L, & R_{2}\geq ST\end{cases}\tag{2.1}$$

where $t$ indicates the current iteration number, $T_{max}$ is the maximum iteration number, $X_{i,j}^{t}$ indicates the position of the $i$-th sparrow in the $j$-th dimension at iteration $t$, $\alpha$ is a random number in $(0,1]$, $Q$ is a random number satisfying the normal distribution, $L$ is a $1\times d$ matrix in which all elements are 1 and $d$ is the maximum dimension. $R_{2}\in[0,1]$ and $ST\in[0.5,1]$ represent the alarm value and the safety value, respectively. When $R_{2}<ST$, the current environment is free of predators and the discoverers can conduct an extensive search. When $R_{2}\geq ST$, a portion of the population has found a predator and issued an alarm, and all sparrows need to move closer to the safe zone.
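As a concrete illustration, the discoverer update above can be sketched in Python (the function and variable names are ours, not the paper's):

```python
import numpy as np

def update_discoverers(X, T_max, R2, ST, rng):
    """Standard SSA discoverer update; X holds the discoverers' positions
    (one row per sparrow, ranked by fitness)."""
    n, d = X.shape
    alpha = rng.uniform(1e-8, 1.0)            # alpha in (0, 1]
    if R2 < ST:                               # no predator: wide search
        i = np.arange(1, n + 1).reshape(-1, 1)
        return X * np.exp(-i / (alpha * T_max))
    Q = rng.standard_normal((n, 1))           # Q ~ N(0, 1)
    L = np.ones((1, d))                       # 1 x d matrix of ones
    return X + Q * L                          # predator seen: move to safety
```

Note that when $R_{2}<ST$ the multiplicative factor is always below 1, so the discoverers contract toward the origin of the coordinate frame.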
Joiners always observe the behavior of the discoverers and adjust their positions based on the information from the discoverers. The position update formula of the joiners is as follows:

$$X_{i,j}^{t+1}=\begin{cases}Q\cdot \exp\left(\dfrac{X_{worst}^{t}-X_{i,j}^{t}}{i^{2}}\right), & i>n/2\\[2mm] X_{P}^{t+1}+\left|X_{i,j}^{t}-X_{P}^{t+1}\right|\cdot A^{+}\cdot L, & i\leq n/2\end{cases}\tag{2.2}$$

where $X_{worst}$ denotes the global worst position, $X_{P}$ denotes the optimal position currently occupied by the discoverers, $A$ is a $1\times d$ matrix whose elements are randomly assigned the value 1 or -1 and $A^{+}=A^{T}(AA^{T})^{-1}$. When $i>n/2$, the $i$-th joiner with low fitness is not getting sufficient nutrition and needs to fly elsewhere to forage for better food; otherwise, the joiner searches near the optimal position found by the discoverers.
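A minimal sketch of the joiner update, assuming the joiners are ranked after the discoverers so that their global ranks end at the population size (names are ours):

```python
import numpy as np

def update_joiners(X, X_best, X_worst, n_pop, rng):
    """Standard SSA joiner update. X holds the joiners' positions; X_best is
    the best discoverer position and X_worst the global worst position."""
    n, d = X.shape
    out = np.empty_like(X)
    for k in range(n):
        i = n_pop - n + k + 1                 # global rank of this joiner
        if i > n_pop / 2:                     # starving: fly elsewhere
            Q = rng.standard_normal()
            out[k] = Q * np.exp((X_worst - X[k]) / i**2)
        else:                                 # forage near the best discoverer
            A = rng.choice([-1.0, 1.0], size=d)
            # A+ = A^T (A A^T)^-1; for a 1 x d row of +/-1, A A^T = d
            step = np.dot(np.abs(X[k] - X_best), A) / d
            out[k] = X_best + step            # same scalar step in every dim
    return out
```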
During foraging, in order to avoid attacks from predators, 10-20% of the sparrows in the population are randomly selected as scouts, and when danger is detected, individuals in the population make corresponding adjustments. The position update formula of the scouts is as follows:

$$X_{i,j}^{t+1}=\begin{cases}X_{best}^{t}+\beta \cdot \left|X_{i,j}^{t}-X_{best}^{t}\right|, & f_{i}>f_{g}\\[2mm] X_{i,j}^{t}+K\cdot \left(\dfrac{\left|X_{i,j}^{t}-X_{worst}^{t}\right|}{f_{i}-f_{w}+\varepsilon}\right), & f_{i}=f_{g}\end{cases}\tag{2.3}$$

where $X_{best}$ is the current global optimum position, $\beta$ is the step control parameter, which satisfies the standard normal distribution, $K$ is a random number in $(-1,1)$, $f_{i}$ is the fitness of the current sparrow, $f_{g}$ and $f_{w}$ are the global best and worst fitness values, respectively, and $\varepsilon$ is a small constant that prevents the denominator from being 0. When $f_{i}>f_{g}$, the sparrow is at the edge of the population and vulnerable to predators, and needs to move toward the best individual position of the population. When $f_{i}=f_{g}$, the sparrow in the middle of the population is aware of the danger and needs to move closer to other sparrows to reduce the risk of being attacked.
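The scout update can likewise be sketched as follows, assuming minimization so that the worst sparrow carries the largest fitness value (names are ours):

```python
import numpy as np

def update_scouts(X, fit, X_best, f_best, f_worst, idx, rng, eps=1e-50):
    """Standard SSA scout update applied to the randomly chosen 10-20%
    of the population whose indices are listed in idx."""
    out = X.copy()
    worst_row = X[np.argmax(fit)]             # position of the worst sparrow
    for i in idx:
        if fit[i] > f_best:                   # edge of the flock: move to best
            beta = rng.standard_normal(X.shape[1])
            out[i] = X_best + beta * np.abs(X[i] - X_best)
        else:                                 # already best: shift cautiously
            K = rng.uniform(-1, 1)
            out[i] = X[i] + K * np.abs(X[i] - worst_row) / (fit[i] - f_worst + eps)
    return out
```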

Multi-strategy improved sparrow search algorithm (ISSA)
First, the population dynamic adjustment strategy is used to control the numbers of discoverers and joiners in the sparrow population. In the original sparrow search algorithm these numbers are fixed, with the discoverers performing the global search and the joiners the local search. As the iterations increase, the algorithm tends to fall into local optima and requires more discoverers for the global search, so the population dynamic adjustment strategy is designed to balance the algorithm's global and local search capabilities and avoid stagnation in local optima. Second, the joiners' position update formula is improved. In the original algorithm, the joiners' global-search update multiplies a step length, determined by the current position and the global worst position, by a normally distributed random number; only the global worst position is considered, while the global optimal position is ignored. The improved update formula introduces the digging phase of the honey badger algorithm (HBA), incorporating both the global optimal and global worst positions to enhance the global exploration capability of the algorithm. Finally, the optimal position of the population discoverers is perturbed using the perturbation operator and the Levy flight strategy. In the original algorithm, the joiners always search near the discoverers' optimal position, which may itself lie in a local optimum; the move step is therefore perturbed by the perturbation operator, and Levy flight is added at the optimal position to strengthen the algorithm's ability to depart from local optima.

Sparrow population dynamic adjustment strategy
Discoverers in the sparrow population primarily undertake the global search, while the joiners' activity falls into two categories: one portion of the joiners conducts a local search around the discoverers' optimal position, while the other portion conducts a global search. In the original sparrow search algorithm, the numbers of discoverers and joiners are fixed, so a fixed number of discoverers undertake the global search in each iteration, and the joiners then search according to the direction supplied by the discoverers. Once the discoverers' best position falls into a local optimum, a fixed number of joiners perform a local search at that point before the global search, making it difficult to escape the local optimum. Later in the iteration process, a larger number of discoverers is needed to explore better regions globally, and a greater proportion of the joiners should also perform the global search. To balance the algorithm's global and local search abilities, a dynamic adjustment strategy for the numbers of discoverers and joiners is designed as follows: the improved discoverer proportion grows with the current iteration number from the original discoverer ratio, generally set to 20%, by at most 0.1 over the maximum number of iterations; likewise, the proportion of joiners performing the global search grows from its original ratio, generally set to 0.5, by at most 0.1.
As the number of iterations increases, the numbers of discoverers and of global-search joiners increase, while the number of joiners searching near the discoverers' optimal position decreases, which helps the algorithm jump out of local optima and strengthens its global search ability.
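The paper's exact schedule formula is not reproduced in the text; a minimal sketch, assuming a simple linear growth capped at the stated upper limits, could look like this (names and the linear form are our assumptions):

```python
def adjust_ratios(t, T_max, r0=0.2, dr=0.1, b0=0.5, db=0.1):
    """Dynamic population-ratio schedule (hypothetical linear form).
    r0: original discoverer ratio (20%), dr: its maximum increase (0.1);
    b0: original global-search joiner ratio (0.5), db: its increase (0.1)."""
    r = r0 + dr * t / T_max    # proportion of discoverers
    b = b0 + db * t / T_max    # proportion of joiners doing global search
    return r, b
```

Any monotone schedule with the same endpoints would serve the same purpose of shifting effort from local to global search as iterations progress.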

Honey badger optimization strategy
The honey badger algorithm [11] is a new meta-heuristic intelligent algorithm proposed by Fatma A. Hashim in 2021. HBA seeks the optimum mainly by mimicking the foraging behavior of the honey badger; the model has few parameters and good global search capability. Its search process has two phases: the digging phase, in which the badger tracks prey around the excavation site, and the honey phase, in which it follows an existing guide to the food. Here we mainly introduce the update strategy of the digging phase of HBA, in which the position update formula is as follows:

$$x_{new}=x_{prey}+F\cdot \beta \cdot I\cdot x_{prey}+F\cdot r_{3}\cdot \alpha \cdot d_{i}\cdot \left|\cos(2\pi r_{4})\cdot \left[1-\cos(2\pi r_{5})\right]\right|$$

where $x_{prey}$ is the position of the prey, i.e., the best position found globally, and $\beta \geq 1$ (default 6) represents the honey badger's ability to find food. $d_{i}=x_{prey}-x_{i}$ is the distance between the prey and the $i$-th honey badger, and $r_{3}$, $r_{4}$ and $r_{5}$ are three different random numbers between 0 and 1. $F$ is a flag that changes the search direction, updated by the following equation:

$$F=\begin{cases}1, & r_{6}\leq 0.5\\-1, & r_{6}>0.5\end{cases}$$

where $r_{6}$ is a random number between 0 and 1. $I$ is the smell intensity: if the odor is strong, the movement will be fast and vice versa, and it is given by the inverse square law:

$$I=r_{2}\cdot \frac{S}{4\pi d_{i}^{2}}$$

where $S$ is the source strength and $r_{2}$ is a random number between 0 and 1. $\alpha$ is the time-varying search decay factor, which controls the randomness of the search process over time; its value decreases with the number of iterations and is defined as

$$\alpha =C\cdot \exp\left(\frac{-t}{T_{max}}\right)$$

where $C\geq 1$ is a constant (default 2). In the digging phase of the honey badger algorithm, the search is carried out mainly near the global optimal position, with two random step components added; in other words, a position between the current and optimal positions is randomly selected as the update position for the next iteration. In the original sparrow search algorithm, the update formula used by the joiners for the global search takes into account only the current and worst positions, leaving out the optimal position. We therefore apply the digging-phase search strategy of HBA to the joiners' position update formula in SSA, introducing the discoverers' optimal position into the global search update formula and randomly selecting a step between the optimal and worst positions for the next iteration. In the improved joiner update formula, $n$ represents the size of the sparrow population. When $i>n/2$, the current joiner has not found a better position and needs to expand its search interval for the global search; in the other cases, the joiner uses the information provided by the discoverers for a local search.
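Two of the HBA ingredients reused here, the decay factor and the smell intensity, are simple enough to state directly in code (a sketch; the combined ISSA joiner formula itself is not reproduced):

```python
import math

def hba_alpha(t, T_max, C=2.0):
    """HBA time-varying search decay factor: alpha = C * exp(-t / T_max),
    with C >= 1 (default 2); randomness shrinks as iterations progress."""
    return C * math.exp(-t / T_max)

def hba_intensity(S, d_i, r2):
    """Smell intensity I = r2 * S / (4*pi*d_i^2): the inverse square law,
    with S the source strength and d_i the distance to the prey."""
    return r2 * S / (4.0 * math.pi * d_i ** 2)
```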

Perturbation operator and levy flight strategy
In the original algorithm, the joiners search at the optimal position found by the discoverers. On multimodal test functions, if the discoverers find a locally optimal position, the joiners search around that local optimum and the algorithm struggles to escape it. It is therefore necessary to perturb the joiners' position update formula: a small perturbation is applied when the current sparrow's fitness value is low and a larger perturbation when the fitness value is larger, with the perturbation also depending on the iteration number. In addition, Levy flight [12] is added near the discoverers' optimal position to help the algorithm depart from local optima. The perturbation operator and Levy flight are defined as follows: the current perturbation varies between the minimum and maximum perturbations, which take the values 0.5 and 1.5 here. The Levy flight step $s$ follows the standard definition

$$s=\frac{\mu}{|\nu|^{1/\beta}},\qquad \mu \sim N(0,\sigma_{\mu}^{2}),\quad \nu \sim N(0,1)$$

$$\sigma_{\mu}=\left[\frac{\Gamma (1+\beta)\cdot \sin(\pi \beta /2)}{\Gamma \left(\frac{1+\beta}{2}\right)\cdot \beta \cdot 2^{(\beta -1)/2}}\right]^{1/\beta}$$

where $a$ is the step scaling factor. In the experiments, Levy flight should introduce only small changes, so the value of $a$ should not be too large; after repeated experiments, we found that the performance is best when $a$ is set to 0.01. $\beta$ is a number in $(0,2]$ and is taken as 1.5 here. Levy flight performs small steps most of the time, with occasional large jumps. By introducing Levy flight at the optimal position in the joiners' local search formula, the discoverers' optimal position is perturbed with a small deviation with high probability and a large deviation with low probability. The joiners thus retain the discoverers' position information while the algorithm gains the ability to escape local optima. In the improved update formula, $n$ represents the size of the sparrow population.
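The Levy step above can be generated with Mantegna's algorithm; a self-contained sketch with $\beta = 1.5$ and scale $a = 0.01$ as in the text (the function name is ours):

```python
import math
import numpy as np

def levy_step(d, beta=1.5, a=0.01, rng=None):
    """Levy-flight step via Mantegna's algorithm: s = a * mu / |nu|^(1/beta),
    with mu ~ N(0, sigma_mu^2) and nu ~ N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, sigma, d)            # numerator: N(0, sigma^2)
    nu = rng.normal(0.0, 1.0, d)              # denominator: N(0, 1)
    return a * mu / np.abs(nu) ** (1 / beta)  # heavy-tailed step per dimension
```

Most samples are small, but the heavy tail occasionally produces a large jump, which is exactly the escape behavior the perturbation strategy relies on.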
In the global search stage of the joiners, we added the search strategy of the honey badger algorithm, expanding the search space and strengthening the global search ability of the algorithm. In the local search stage, we added the perturbation operator and Levy flight, which free the joiners from the constraint of the discoverers' optimal position and give the algorithm the ability to jump out of local optima.

Steps for a multi-strategy improved sparrow search algorithm
The specific implementation steps of the ISSA algorithm are as follows: Step 1. Set the population size, the maximum number of iterations, the scout ratio, the alarm value and the safety value.
Step 2. Calculate each sparrow's fitness value using the fitness function and rank the sparrows; record the best position and best fitness value, and the worst position and worst fitness value, in the population.
Step 3. Calculate the proportion of discoverers and the proportion of joiners performing a global search according to Eq (3.1).
Step 4. The population is divided into discoverers and joiners according to the ratios calculated in Step 3, and the positions of the discoverers and joiners are updated according to Eqs (2.1) and (3.7).
Step 5. Update the location of the scout according to Eq (2.3).
Step 6. Update the best position and best fitness value, and the worst position and worst fitness value.
Step 7. When the maximum number of iterations has been reached, the best result is output and the algorithm terminates; otherwise, go to Step 3.
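Steps 1-7 can be sketched as a runnable skeleton; the update rules below are simplified stand-ins (plain SSA-style moves plus a linear discoverer-ratio schedule), not the paper's exact formulas, and the scout step is omitted for brevity:

```python
import numpy as np

def issa_sketch(fitness, dim, lb, ub, n=30, T=100, ST=0.8, seed=0):
    """Minimal ISSA-style loop over Steps 1-7 (a sketch under simplifying
    assumptions, not the authors' code)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))                       # Steps 1-2
    fit = np.apply_along_axis(fitness, 1, X)
    best, f_best = X[fit.argmin()].copy(), fit.min()
    for t in range(T):
        r = 0.2 + 0.1 * t / T                               # Step 3: ratio schedule
        order = np.argsort(fit)
        X, fit = X[order], fit[order]
        nd = max(1, int(r * n))
        if rng.random() < ST:                               # Step 4: discoverers
            i = np.arange(1, nd + 1).reshape(-1, 1)
            X[:nd] *= np.exp(-i / (rng.uniform(1e-8, 1) * T))
        else:
            X[:nd] += rng.standard_normal((nd, 1))
        # Step 4: joiners forage near the current best discoverer
        X[nd:] = X[0] + rng.standard_normal((n - nd, dim)) * np.abs(X[nd:] - X[0])
        np.clip(X, lb, ub, out=X)
        fit = np.apply_along_axis(fitness, 1, X)            # Step 6: records
        if fit.min() < f_best:
            f_best, best = fit.min(), X[fit.argmin()].copy()
    return best, f_best                                     # Step 7
```

On a simple sphere objective this skeleton contracts toward the origin, which is enough to illustrate the control flow the steps describe.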
The pseudocode of the ISSA algorithm is shown in Table 1.

Benchmarking functions
To test the performance of the improved sparrow search algorithm (ISSA), 13 standard benchmark test functions were selected. To ensure the reliability of the evaluation, these functions include both unimodal and multimodal functions. F1-F7 are the unimodal benchmark test functions and F8-F13 are the multimodal benchmark test functions; the specific function information is shown in Table 2, where Dim represents the dimension of the function, Range represents the upper and lower limits of each dimension and Fmin is the theoretical optimal value of the test function.
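Such suites typically mix functions like the following two, one unimodal and one highly multimodal (whether these exact definitions appear in Table 2 is our assumption):

```python
import numpy as np

def sphere(x):
    """Typical unimodal benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def rastrigin(x):
    """Typical multimodal benchmark with many regularly spaced local minima;
    global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))
```

Unimodal functions probe convergence speed and accuracy, while multimodal ones probe the ability to escape local optima, which is why both kinds are included.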

Algorithm performance testing
The improved algorithm in this paper was compared with the original sparrow search algorithm (SSA) [1], grey wolf optimizer (GWO) [13], particle swarm optimization (PSO) [14], whale optimization algorithm (WOA) [15] and Harris hawks optimization (HHO) [16], each with a population size of 50 and 500 iterations, on the 13 benchmark test functions. To verify the accuracy and reliability of the improvements, each algorithm was run 30 times independently to obtain the best value, the worst value, the mean and the standard deviation. The experimental data are shown in Table 3.
As shown in Table 3, for the unimodal test functions, the proposed ISSA achieves a better search effect than SSA, GWO, PSO, WOA and HHO, and on several of them it finds the theoretical optimal value with a standard deviation of 0, so the optimization effect is stable. On two functions, ISSA's optimization accuracy is lower than that of SSA, but the loss of accuracy is not great. On one function, ISSA shows only a small improvement, but its standard deviation and mean are the lowest. On one function, HHO achieves the best result, with optimal and average values close to the theoretical optimum, while ISSA's performance still improves significantly over SSA, particularly in the average value. On one function, ISSA, SSA and HHO perform similarly and all find the test function's optimal value, which means that ISSA retains the optimization ability of SSA without degrading it. On one function, SSA outperforms ISSA, which ranks second and still achieves good optimization results. In summary, ISSA performs slightly worse on a few test functions and achieves better performance on the others. To highlight the superiority of the algorithm more intuitively and to facilitate the presentation of the experimental results, the first 12 test functions were selected for plotting; each algorithm was executed independently 30 times and the average convergence curves were plotted from the fitness values and the number of iterations. As shown in Figure 1, the improved algorithm has a faster convergence speed and higher convergence accuracy on most test functions; on some functions it converges to 0, a large improvement in both convergence speed and convergence accuracy.
On a few functions, ISSA performs worse than SSA, but without a major loss in accuracy, and it is still more accurate than GWO, HHO, PSO and WOA. On one function, ISSA has the best optimization precision among the six algorithms; on another, ISSA performs worse than HHO but still better than SSA, GWO, PSO and WOA. On the functions where ISSA, SSA and HHO all find the theoretical optimum, ISSA converges much faster. On one function, the convergence accuracy of ISSA is lower than that of SSA, but the difference is small. In summary, ISSA performs well on both unimodal and multimodal functions. Because of the improvements to the original algorithm, ISSA has higher computational complexity, and the running time can indirectly reflect the complexity of an algorithm. We therefore recorded the running time of each algorithm, with the number of iterations set to 500, the population size set to 50 and 100 runs in total. The average running times of the six algorithms are reported in Table 4, in seconds. The average running time of ISSA is longer than that of SSA and the other algorithms. The running time of SSA is longer than that of GWO and PSO, because these two are traditional SI algorithms with low complexity and weaker optimization performance, so their times are short. HHO and WOA are newer SI algorithms; the computation time of SSA is longer than that of WOA and shorter than that of HHO. ISSA builds on SSA with three added strategies, which increases the computational complexity and running time, but yields better convergence speed and accuracy.

Wilcoxon rank sum test
To evaluate the effectiveness of the proposed algorithm more thoroughly, we introduce the Wilcoxon rank sum test [17] to test the significance of the differences between the optimal results of ISSA and those of the other algorithms over 30 independent runs. The null hypothesis H0 is that there is no significant difference between the two algorithms, and the alternative hypothesis H1 is that there is a significant difference. When p < 5%, the null hypothesis is rejected and the alternative hypothesis is accepted, meaning that the two algorithms differ significantly; when p >= 5%, the null hypothesis is accepted, implying that there is no significant difference and the two algorithms are equivalent in their optimal results. The rank sum test results of ISSA against SSA, GWO, PSO, WOA and HHO are shown in Table 5, where "/" indicates that the two algorithms have equivalent performance and cannot be compared.
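For concreteness, the test on two samples of 30 run-optima can be sketched with the usual normal approximation (equivalent to library rank-sum routines when there are no ties; names are ours):

```python
import numpy as np
from math import erf, sqrt

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum test p-value via the normal approximation
    (valid for tie-free samples of this size)."""
    n1, n2 = len(x), len(y)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1.0
    W = ranks[:n1].sum()                       # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
```

For example, 30 near-zero ISSA optima tested against 30 optima centered at 0.5 yield a p-value far below 0.05, so H0 would be rejected.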
According to Table 5, the performance of ISSA is equivalent to that of SSA and HHO on some functions, which shows that all three algorithms determine the best value of those test functions in every experiment; the convergence curves show that ISSA has a faster convergence speed and higher stability there. The remaining p-values are less than 0.05, indicating that the proposed ISSA differs significantly from the other algorithms. Figure 2 shows boxplots of the optimal values of the six algorithms; each algorithm was run independently 50 times on each test function, with four functions selected from the unimodal set and two from the multimodal set. On two functions, the maximum, minimum and median of ISSA in the boxplot are all 0, much smaller than those of the other algorithms, indicating that ISSA has strong balancing ability and high robustness. On one function, the optimal-value accuracy of ISSA is lower than that of SSA but much higher than that of the other algorithms; its upper quartile is smaller than SSA's and the optimal values are relatively concentrated, improving the robustness of the original algorithm. On another function, the accuracy of the optimal values found by ISSA is greater than that of SSA, and the entire boxplot of ISSA lies below SSA's, again improving robustness; the search accuracy of ISSA and HHO is the same, but the median and mean of ISSA are lower than HHO's, so the stability of ISSA is higher. On one function, HHO has greater robustness, while the accuracy of ISSA is significantly higher than that of SSA and its optimal values are concentrated close to the theoretical optimum with minimal variation, which further increases robustness. On the function where the optimal value of ISSA, SSA and HHO is 0, all three algorithms are highly robust, and the corresponding convergence curve in Figure 1 shows that ISSA converges fastest.
In summary, ISSA not only improves search accuracy, but also has high robustness.

Engineering applications
Orthogonal frequency division multiplexing (OFDM) [18] is a core technology of 4G networks. As a low-complexity transmission technology, it is widely used in broadcast systems and wireless LAN standards, and has great advantages in resistance to multipath fading, narrowband interference, multiple access and signal processing. The main difficulty in such a system is obtaining the channel state information matrix accurately so as to recover the transmitted signal at the receiver, so channel estimation is the key step. The major traditional channel estimation methods are least squares (LS) [19], minimum mean square error (MMSE) [20], maximum likelihood [21] and Bayesian channel estimation [22]. The three broad classes of OFDM channel estimation are non-blind, blind and semi-blind channel estimation [23]. Among them, blind channel estimation performs better than semi-blind channel estimation, but its complexity is quite high and it has few practical use cases. Non-blind channel estimation is based on pilots or training sequences. To track channel changes in real time and reduce errors, pilot-based channel estimation algorithms are generally used, although they suffer from high pilot overhead and poor robustness and perform poorly at low signal-to-noise ratios.
In pilot-based channel estimation, the LS algorithm is used to obtain the channel at the pilot positions; the whole channel is then estimated through an interpolation algorithm, and finally the transmitted signal is recovered. Pilot-based channel estimation centers on pilot design. The traditional pilot design is fixed, generally inserting pilots at equal intervals; this arrangement is set manually, so it cannot attain the lowest possible bit error rate. To address the defects of traditional pilot design, we design a least squares method based on the improved sparrow search algorithm (ISSA-LS). The improved sparrow search algorithm is used to determine the optimal pilot positions: the fitness value is taken as the average bit error rate (BER) of each experiment, and ISSA is used to identify the pilot arrangement with the lowest average bit error rate.
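The LS-plus-interpolation step can be sketched as follows; the function name and the choice of linear interpolation are ours, and the pilot index array is exactly the quantity ISSA would optimize:

```python
import numpy as np

def ls_pilot_estimate(Y, X_pilot, pilot_idx, n_sub):
    """LS channel estimate at the pilot subcarriers, H_p = Y_p / X_p,
    followed by linear interpolation across all n_sub subcarriers."""
    pilot_idx = np.asarray(pilot_idx)
    H_p = Y[pilot_idx] / X_pilot               # per-pilot LS estimate
    return np.interp(np.arange(n_sub), pilot_idx, H_p)
```

A flat channel with unit pilots recovers its gain exactly at every subcarrier, which makes the sketch easy to sanity-check before plugging in a BER-based fitness.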
The experimental signal modulation methods are phase shift keying (PSK) and quadrature amplitude modulation (QAM), the signal-to-noise ratio ranges over 0-30, the number of subcarriers is 52, the number of pilots is 12, the population size of the improved sparrow search algorithm is 30 and the number of iterations is set to 50 (Figure 3). For 4PSK signals, the LS algorithm reduces the bit error rate to 0 when the SNR (signal-to-noise ratio) is 5, while ISSA-LS already reduces it to 0 at an SNR of 3. For 8PSK signals, LS brings the bit error rate to 0 at an SNR of 8, while ISSA-LS does so at 7. When the transmitter uses PSK modulation, the bit error rate of ISSA-LS is significantly lower than that of LS at both low and high signal-to-noise ratios. For 16QAM signals, LS reduces the bit error rate to 0 at an SNR of 7, whereas ISSA-LS reaches 0 at an SNR of 6. For 64QAM signals, LS reduces the bit error rate to 0 at an SNR of 14, whereas ISSA-LS already achieves 0 at 11. When the transmitter uses QAM modulation, the performance of ISSA-LS at low signal-to-noise ratios is comparable to that of LS, while at high SNR ISSA-LS has a substantially lower bit error rate, and its advantage grows as the SNR increases. Repeated experiments were conducted, and error bars were drawn from the mean and standard deviation of the bit error rate at each signal-to-noise ratio; the bit error rate of the ISSA-LS algorithm is consistently lower than that of LS. Overall, for both PSK and QAM modulation, the least squares method optimized by the improved sparrow search algorithm (ISSA-LS) has a lower bit error rate, and its performance is superior to the traditional least squares (LS) method at both low and high signal-to-noise ratios.

Conclusions
In this paper, a multi-strategy improved sparrow search algorithm is proposed. First, a sparrow population dynamic adjustment strategy is added to dynamically adjust the numbers of discoverers and joiners: as the iterations increase, more discoverers find foraging directions for the whole population and the number of joiners performing the global search increases, which helps the algorithm depart from local optima. Second, in the position update formula of the joiners, the update mechanism of the digging phase of the honey badger algorithm is introduced; after this change, the algorithm's global search ability increases and it can search over a larger range. Finally, the optimal position used in the joiners' update formula is perturbed by the perturbation operator and the Levy flight strategy, which further improves the algorithm's capacity to depart from local optima. The algorithm is tested on 13 test functions to verify its superiority over other algorithms, and it is applied to pilot optimization in channel estimation, achieving a lower BER. However, this work still has some shortcomings. For instance, the improved algorithm does not achieve better performance on a few of the test functions, and in the pilot optimization for channel estimation the performance of the improved method is equivalent to that of the least squares method when the signal-to-noise ratio is low. In future work, we will strive to improve the algorithm's convergence accuracy on these test functions through various means, and then apply the enhanced algorithm to pilot optimization to increase the channel estimation accuracy even further.