A multi-sample particle swarm optimization algorithm based on electric field force

To address the premature convergence problem of the particle swarm optimization algorithm, a multi-sample particle swarm optimization (MSPSO) algorithm based on electric field force is proposed. Firstly, we introduce the concept of the electric field into the particle swarm optimization algorithm: the particles are affected by the electric field force, which makes them exhibit diverse behaviors. Secondly, MSPSO constructs multiple samples through two new strategies to guide particle learning. An electric field force-based comprehensive learning strategy (EFCLS) is proposed to build attractive samples and repulsive samples, thus improving search efficiency. To further enhance the convergence accuracy of the algorithm, a segment-based weighted learning strategy (SWLS) is employed to construct a global learning sample so that the particles learn more comprehensive information. In addition, the parameters of the model are adjusted adaptively to suit the population status in different periods. We have verified the effectiveness of these newly proposed strategies through experiments. Sixteen benchmark functions and eight well-known particle swarm optimization variants are employed to demonstrate the superiority of MSPSO. The comparison results show that MSPSO achieves better accuracy, especially in high-dimensional spaces, while maintaining a faster convergence rate. Besides, a real-world problem also verifies that MSPSO has practical application value.


Introduction
In recent years, traditional optimization algorithms have been unable to solve many complex real-life problems optimally. In contrast, metaheuristic algorithms can give a feasible solution within a limited range and approach the optimal solution as closely as possible. Therefore, they have gradually become the first choice for solving these complex problems and are used to deal with multi-dimensional, single-objective or multi-objective, continuous, or combinatorial optimization problems. Metaheuristic algorithms deal with issues by abstracting phenomena in nature and animal behavior into algorithms. Some representative algorithms have appeared in this field, such as the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), symbiotic organisms search (SOS), simulated annealing (SA), and the salp swarm algorithm (SSA). To further improve performance, hybrid metaheuristic algorithms that combine the advantages of various algorithms have also received extensive attention. We cannot expect one algorithm to solve all optimization problems; therefore, researchers continually tailor metaheuristic algorithms to specific issues. Kaplan et al. [1] introduced GA into the field search of SA, which improves search performance and effectively solves the excitation current estimation of synchronous motors. To solve complex multimodal functions and high-dimensional problems, Mohamed et al. [2] introduced the concepts of attraction and repulsion into the imperialist competitive algorithm (ICA) and verified the algorithm's effectiveness on a multi-objective engineering problem. Çelik et al. [3] proposed an improved salp swarm algorithm (mSSA) to avoid the premature convergence of SSA.
They also offered an improved stochastic fractal search (ISFS) algorithm, which replaced the first update process of SFS with a chaos-based strategy to effectively deal with the automatic generation control problem of power systems [4]. Lin et al. [5] proposed a comprehensive algorithm combining PSO and the estimation of distribution algorithm (EDA) to solve the maximum cut problem. In addition, considering SOS's premature convergence, ISOS [6] combined the advantages of quasi-oppositional based learning (QOBL) and chaotic local search (CLS), which balances exploration and exploitation. hSOS-SA [7] integrated SA into the SOS algorithm to improve convergence accuracy. Similarly, Singh et al. [8] proposed a hybrid metaheuristic algorithm (HSSAPSO), using the velocity phase of PSO in SSA, which reduced the risk of PSO falling into a local optimum. The above studies show that metaheuristic algorithms have developed significantly.
Among the developing methods in the field of metaheuristic algorithms, the PSO algorithm has attracted wide attention from researchers and practitioners for its easy implementation, strong adaptability, and low complexity. PSO [9] is a metaheuristic algorithm proposed by Kennedy and Eberhart to simulate the predation behavior of birds. The algorithm compares the problem's search space to the flight space of birds, and each bird is abstracted into a particle representing a candidate solution to the problem. Guided by fitness values, all particles search the domain randomly to find better solutions; the algorithm thus relies on a random process similar to evolution. This randomness makes the PSO algorithm generate uncertainty, so researchers usually adopt new strategies to make the particles move purposefully, thereby upgrading the algorithm. Uncertainty modeling has also made progress over the years. Given the uncertainty of gene expression data, Taylan et al. [10] introduced stochastic differential equations into uncertainty modeling for the first time. Kropat et al. [11] used an interval algorithm to overcome the susceptibility of given data to noise. Since the data of practical problems are usually discrete, CMARS [12] applied the framework of conic quadratic programming to improve multivariate adaptive regression splines (MARS). Özmen et al. [13] further improved the CMARS algorithm through a robust optimization technique to deal with data uncertainty. A robust and flexible regulatory network regression model was introduced in [14] to determine unknown system parameters from uncertain data. Semialgebraic sets were used in [15] to express the uncertain states of genes and environmental factors, addressing data uncertainty in gene-environment networks and improving model prediction accuracy. These researchers have adopted effective methods to address data uncertainty and reduce errors.
As a simple but powerful method, PSO is superior to deterministic algorithms on certain problems. It has shown good performance in many practical optimization tasks and has been used in many research fields, including function optimization [16][17][18], classification prediction [19,20], neural network training [21][22][23], feature selection [24][25][26], and image encryption [27,28].
Although PSO is simple and easy to implement, it still suffers from premature convergence to local optima and slow convergence speed. To overcome these problems, many researchers have proposed PSO variants. To enhance the search ability of the algorithm, Liang et al. [29] offered a dynamic multi-swarm particle swarm optimization (DMSPSO) algorithm, which divides the population into multiple subgroups. The members of a group exchange information for better local exploration and, at the same time, are frequently regrouped to change the learning samples and realize information exchange among the subpopulations. HCLDMS-PSO [30] introduced a non-uniform mutation operator into PSO to enhance global search ability and adjusts the global best position through a Gaussian mutation operator, reducing the risk of falling into a local optimum. Zhang et al. [31] added two constraint factors to PSO through transfer learning, namely a source domain factor and a target domain factor, to balance the PSO algorithm's search ability and search efficiency.
Furthermore, to avoid premature convergence, CMPSOWV [32] discarded the velocity component in the particle velocity update formula and introduced a constraint handling method to guide the particle search space with the personal best position and the global best position. Chen et al. [33] also introduced chaos mapping in the PSO algorithm and adjusted the inertia weight through the sine map, balancing local exploitation and global exploration effectively. To better deal with multimodal functions, Zhang et al. [34] introduced a local optimal topology (LOT) based on the comprehensive learning particle swarm optimization (CLPSO) algorithm, which expanded the search space of particles and improved the convergence speed of the algorithm. To increase the convergence speed of the algorithm, Zhu et al. [35] proposed a multi-ion particle swarm optimization algorithm (MIONPSO) based on repulsive and attractive forces. They introduced the concept of charge into the PSO algorithm, divided the population into multiple sub-populations, and let the optimal solution of each group guide the update of individuals. In addition, papers [36,37] both adopted various strategies in the PSO algorithm to make the particles search the feasible domain space better, which improves the accuracy of the solution.
Based on the above research, we find that most work on PSO algorithms puts forward novel concepts to diversify particle behavior, thereby reducing the risk of falling into a local optimum. Nevertheless, the matching update strategies of many new PSO algorithms are imperfect and cannot account for convergence speed and accuracy simultaneously. Additionally, when solving complex optimization problems, a single improvement strategy has no advantage in improving convergence accuracy; on the contrary, adopting a variety of improvement strategies can enhance the diversity of the algorithm and achieve higher convergence accuracy.
To further improve the performance of the PSO algorithm, and inspired by MIONPSO, this paper proposes a multi-sample particle swarm optimization algorithm based on electric field force. MSPSO introduces the concept of the electric field into the PSO algorithm and matches it with more complete strategies. We construct multiple learning samples to improve the performance of the algorithm on unimodal and multimodal problems. Extensive experiments have verified the effectiveness of these additional strategies. We further evaluated the performance of MSPSO through a practical case: the design of multiphase codes for spread-spectrum pulse radar. The main contributions of this paper can be summarized as follows.
• To show the diverse behavior of particles, we regard the feasible region of the population as an electric field, in which each particle experiences the electric field force of the other charged particles.
• We propose an electric field force-based comprehensive learning strategy to construct the attractive sample and the repulsive sample, which guide particle movement purposefully. The interaction of the two samples directs the particles to areas that are more conducive to exploration.
• We use the historical experience information of elite particles and general particles obtained by the tournament mechanism to update the corresponding weight coefficients adaptively, which enhances the diversity of the population.
• To reduce the risk of falling into a local optimum, we construct a global learning sample by a segment-based weighted learning strategy so that particles can learn more helpful information from the elite particles.

The rest of the paper is organized as follows. Section 2 introduces related works. Section 3 describes the details of MSPSO. The performance of the newly introduced strategies is experimentally verified in Section 4. In Section 5, sixteen benchmark functions and eight well-known PSO variants are employed to verify the effectiveness of MSPSO. Finally, Section 6 summarizes the relevant conclusions.

Basic PSO
PSO is a population-based stochastic optimization algorithm in which each particle represents a potential solution to the problem. In the D-dimensional space, each particle has two attributes, the velocity vector V_i = (v_{i,1}, v_{i,2}, ..., v_{i,D}) and the position vector X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}). X_i is a candidate solution of the problem, and V_i represents the search direction and step size of the i-th particle. Each individual adjusts its trajectory according to its own best historical experience pbest_i = (pb_{i,1}, pb_{i,2}, ..., pb_{i,D}) and the best overall experience in history gbest = (g_1, g_2, ..., g_D) in the feasible domain space. The velocity and position update rules are defined as Equations (2.1) and (2.2), respectively:

v_{i,j}(t+1) = ω·v_{i,j}(t) + c_1·r_1·(pb_{i,j} − x_{i,j}(t)) + c_2·r_2·(g_j − x_{i,j}(t))    (2.1)

x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1)    (2.2)

where j = 1, 2, ..., D; ω is the inertia weight, which determines the proportion of the previous velocity that is retained; c_1 and c_2 are acceleration coefficients, representing the weights of learning from pbest_i and gbest, respectively; and r_1 and r_2 are random numbers in the interval [0, 1).
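The standard update rules above can be sketched in a few lines of Python. This is a generic illustration rather than any particular implementation; the parameter values (ω = 0.7, c₁ = c₂ = 1.5, the search range, and the sphere test function) are assumptions.

```python
import numpy as np

def pso(f, dim=5, n=30, iters=200, seed=0):
    """Minimal standard PSO: velocity and position updates of Eqs (2.1)-(2.2)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n, dim))   # particle positions
    V = np.zeros((n, dim))                 # particle velocities
    pbest = X.copy()                       # personal best positions
    pcost = np.array([f(x) for x in X])    # personal best fitness values
    gbest = pbest[pcost.argmin()].copy()   # global best position
    w, c1, c2 = 0.7, 1.5, 1.5              # assumed parameter values
    for _ in range(iters):
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq (2.1)
        X = X + V                                                   # Eq (2.2)
        cost = np.array([f(x) for x in X])
        better = cost < pcost               # update personal bests
        pbest[better] = X[better]
        pcost[better] = cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, float(pcost.min())

best, val = pso(lambda x: float(np.sum(x ** 2)))  # sphere function
```

On the 5-dimensional sphere function this settles near the origin; velocity clamping and boundary handling are omitted for brevity.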

Other improved PSO algorithms
Improvements to the PSO algorithm mainly fall into four directions: parameter adjustment, learning mode adjustment, topology change, and hybridization. The proposed MSPSO covers the two directions of adjusting parameters and changing particle learning modes, so other approaches will not be discussed.
For parameter adjustment, Shi and Eberhart [38] proposed a strategy of linearly reducing the inertia weight from 0.9 to 0.4, changing the step length of the particle's velocity component in different evolutionary periods. XPSO [37] also adopted this method, which effectively improves the performance of the algorithm. However, the strategy of linear weight reduction does not perform well on objective functions with multiple optimal solutions. To solve this problem, Farooq et al. [39] proposed a phased update strategy: in the first half of the iterations, the inertia weight is linearly reduced from 0.9 to 0.4, and the same schedule is repeated in the second half. Chatterjee et al. [40] proposed a new nonlinear function to adaptively change the inertia weight, which effectively balances local exploitation and global exploration. Similarly, the sigmoid activation function from neural networks is used to change the inertia weight nonlinearly in HCLDMS-PSO [30]. These studies reveal that the adaptive adjustment of parameters benefits the evolution of the population.
For changing particle learning modes, Liang et al. [41] proposed CLPSO, which encourages each particle to learn from different particles in different dimensions to obtain more comprehensive information. The opposition-based learning competitive particle swarm optimizer (OBL-CPSO) [42] introduced the two mechanisms of opposition-based learning and competitive learning: competitive learning allows particles with poor fitness to learn from particles with good fitness, while particles with moderate fitness are updated through opposition-based learning. To address the curse of dimensionality, Shi et al. [43] proposed a segment-based learning strategy, which randomly divides the dimensions into several segments, together with a predominant learning strategy that lets elite particles guide each dimension segment. Additionally, MPCPSO [36] introduced two new strategies, a dynamic segment-based mean learning strategy (DSMLS) and a multidimensional comprehensive learning strategy (MDCLS), which effectively improved the performance of the algorithm: DSMLS realizes information exchange between the elite population and the ordinary population, and MDCLS accelerates convergence. XPSO [37] extends the social learning part of each particle from one sample to two so that particles learn from both the global best particle and the local best particle. These studies show that the dynamic selection of learning samples effectively maintains population diversity, which is conducive to solving complex multimodal problems.

The proposed MSPSO algorithm
In this part, we introduce the proposed MSPSO in detail. Section 3.1 gives the particle model, and Section 3.2 explains the electric field force-based comprehensive learning strategy. The segment-based weighted learning strategy is introduced in Section 3.3, and Section 3.4 lists the framework of the MSPSO algorithm.

Learning model based on electric field force
In MSPSO, the feasible region of the particles is seen as an electric field, and every particle is regarded as an electric charge. Hence, the electric field exerts force on the particles: when a particle's velocity is updated, the electric field force around it affects the update. We hope that the particle swarm can exhibit diverse behaviors through the electric field forces between particles, which benefits the evolution of the population. At the same time, we propose an electric field force-based comprehensive learning strategy to construct attractive and repulsive samples and utilize the historical knowledge of particles to adjust the weight coefficients adaptively. Additionally, to increase the convergence accuracy of the algorithm, we adopt a segment-based weighted learning strategy to construct a global learning sample. The proposed MSPSO is introduced in detail as follows. The velocity update rule of positively charged particles is given by Equation (3.1), that of negatively charged particles by Equation (3.2), and the position is updated according to Equation (3.3).
where V_i^t = (v_{i,1}, v_{i,2}, ..., v_{i,D}) is the velocity of the i-th particle at iteration t, and X_i^t = (x_{i,1}, x_{i,2}, ..., x_{i,D}) is its position. ω^t is the inertia weight at iteration t. PE_1^t and PG_1^t represent the attractive and repulsive samples of positively charged particles at iteration t, respectively; PE_2^t and PG_2^t denote those of negatively charged particles. α_1^t, β_1^t, α_2^t, and β_2^t represent the weight coefficients of the corresponding learning samples at iteration t. GM^t denotes the global learning sample at iteration t, and f_ig is the electric field force of GM^t on the currently updated particle X_i^t. The particle's velocity update equation consists of four parts. The first part is the velocity of the particle itself. Attraction is reflected in the second part: to reduce the risk of falling into a local optimum, we select elite particles to construct an attractive sample by EFCLS. The third part is repulsion: particles with poor fitness may not find the optimal value along their current trajectories and should be urged to explore in other directions, so we choose general particles to construct the repulsive sample by EFCLS. The last part is the global learning sample built by SWLS, which guides the particles' movement and increases the convergence accuracy of the algorithm.
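The four-part decomposition described above can be illustrated as follows. The exact combination in Equations (3.1)–(3.3) is not reproduced in this excerpt, so a simple linear combination with random scaling factors is assumed here; all names and signs are illustrative, not the authors' definitive formulas.

```python
import numpy as np

def velocity_update(v, x, pe, pg, gm, w, alpha, beta, f_ig, rng):
    """Hedged sketch of the four-part MSPSO velocity update (assumed form)."""
    r1, r2, r3 = rng.random(3)
    inertia     = w * v                   # part 1: the particle's own velocity
    attraction  = alpha * r1 * (pe - x)   # part 2: pull toward attractive sample PE
    repulsion   = beta  * r2 * (x - pg)   # part 3: push away from repulsive sample PG
    global_pull = f_ig  * r3 * (gm - x)   # part 4: field force of global sample GM
    return inertia + attraction + repulsion + global_pull

rng = np.random.default_rng(0)
v_new = velocity_update(np.zeros(3), np.zeros(3), np.ones(3), np.ones(3),
                        np.ones(3), 0.5, 1.0, 1.0, 0.5, rng)
```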
The inertia weight ω plays a vital role in the PSO algorithm. A larger ω in the early stage can enhance the global exploration ability, while a smaller ω in the later stage is beneficial to local exploitation. Existing studies have shown that, during the evolution of the population, dynamically changing the inertia weight obtains better optimization results than a fixed value. In MSPSO, a nonlinearly decreasing inertia weight based on the sigmoid function, proposed in reference [30], is adopted. The calculation of ω is defined as Equation (3.4), where ω_max is the maximum inertia weight, ω_min is the minimum inertia weight, t is the current iteration number, and T represents the maximum number of iterations.
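Since Equation (3.4) is not reproduced in this excerpt, the sketch below shows one common sigmoid-based decreasing schedule with the properties described (near ω_max early, near ω_min late); the specific functional form and the steepness constant are assumptions.

```python
import math

def sigmoid_weight(t, T, w_max=0.9, w_min=0.4, steepness=10.0):
    """Hedged sketch of a sigmoid-decreasing inertia weight (assumed form).

    s runs from ~1 at t = 0 down to ~0 at t = T, so the weight decreases
    smoothly from roughly w_max to roughly w_min.
    """
    s = 1.0 / (1.0 + math.exp(steepness * (2.0 * t / T - 1.0)))
    return w_min + (w_max - w_min) * s
```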

Electric field force-based comprehensive learning strategy
In MSPSO, all particles are in an electric field. Particles with opposite charges attract each other, and particles with the same charge repel each other. To purposefully guide particles to a better area, we enable the attraction of particles with opposite charges to lead the particles to move in the direction of the elite particles, thereby accelerating the convergence speed. At the same time, the repulsion of the particles with the same charge drives the poorer particles to explore other better directions, reducing the risk of falling into the local optimum. Therefore, we select elite particles and general particles through the tournament mechanism and construct attractive sample PE and repulsive sample PG based on the electric field force.
Firstly, we screen out the elite particles and general particles of the positively and negatively charged subpopulations through the tournament mechanism, which classifies the particles of the population according to fitness ranking. The specific method is as follows. We arrange all particles of the positively charged subpopulation in ascending order of fitness. After sorting, the set of positively charged particles POS = {X_1, X_2, ..., X_{n_1}} satisfies f(X_1) ≤ f(X_2) ≤ ... ≤ f(X_{n_1}), where f denotes the fitness function and n_1 is the number of positively charged particles. We then select the first n_e particles as the elite particles to create set S_1, and the last n_g particles as the general particles of the positively charged subpopulation to form set S_2. Similarly, we arrange all particles of the negatively charged subpopulation in ascending order of fitness. After sorting, the set of negatively charged particles NEG = {X_{n_1+1}, X_{n_1+2}, ..., X_{n_1+n_2}} satisfies f(X_{n_1+1}) ≤ f(X_{n_1+2}) ≤ ... ≤ f(X_{n_1+n_2}), where n_2 is the number of negatively charged particles. We then select the first n_e particles as the elite particles to create set S_3, and the last n_g particles as the general particles of the negatively charged subpopulation to form set S_4. Finally, we obtain the following four sets: the elite particle set of the positively charged subpopulation S_1 = {X_1, X_2, ..., X_{n_e}}; the general particle set of the positively charged subpopulation S_2 = {X_{n_1−n_g+1}, X_{n_1−n_g+2}, ..., X_{n_1}}; the elite particle set of the negatively charged subpopulation S_3 = {X_{n_1+1}, X_{n_1+2}, ..., X_{n_1+n_e}}; and the general particle set of the negatively charged subpopulation S_4 = {X_{n_1+n_2−n_g+1}, X_{n_1+n_2−n_g+2}, ..., X_{n_1+n_2}}.
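The set construction just described can be sketched directly: sort each charged subpopulation by fitness (ascending) and split off the best n_e and worst n_g particles. The function name and array layout are illustrative.

```python
import numpy as np

def select_sets(pos, neg, f, n_e, n_g):
    """Tournament-style split into elite/general sets S1-S4, per the text."""
    pos = pos[np.argsort([f(x) for x in pos])]   # ascending fitness
    neg = neg[np.argsort([f(x) for x in neg])]
    S1, S2 = pos[:n_e], pos[-n_g:]   # positive subpop: elites / generals
    S3, S4 = neg[:n_e], neg[-n_g:]   # negative subpop: elites / generals
    return S1, S2, S3, S4

rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))              # example fitness (minimization)
S1, S2, S3, S4 = select_sets(rng.uniform(-1, 1, (10, 2)),
                             rng.uniform(-1, 1, (10, 2)), f, 3, 3)
```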
In physics, the magnitude of the force between two charges is calculated through Coulomb's law: it is inversely proportional to the square of the distance between the charges. In the early stage of population evolution, the distance between particles is large, so the force is small. On the contrary, particles are more concentrated in the later stage, and the distance between them is small. Therefore, we calculate the electric field force based on the distance between the particles. The electric field force between any two charged particles X_i and X_j is defined as Equation (3.5).
where f_ij denotes the electric field force of X_j on X_i, and d_ij represents the distance between X_i and X_j. ε is a real number in (0, 1) that prevents the particles' search space from becoming too large. The distance between any two particles X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}) and X_j = (x_{j,1}, x_{j,2}, ..., x_{j,D}) is the Euclidean distance:

d_ij = sqrt( Σ_{k=1}^{D} (x_{i,k} − x_{j,k})² )    (3.6)

where D is the dimension of a particle, x_{i,k} represents the value of X_i in the k-th dimension, and x_{j,k} denotes the value of X_j in the k-th dimension. This paper adopts the same method to build PE_1 and PE_2, but from different populations; PG_1 and PG_2 are also constructed in the same way. Therefore, we take positively charged particles as an example to illustrate the construction of PE_1 = (PE_{1,1}, PE_{1,2}, ..., PE_{1,D}) and PG_1 = (PG_{1,1}, PG_{1,2}, ..., PG_{1,D}).
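The distance and force computations can be sketched as below. Equation (3.6) is the standard Euclidean distance; Equation (3.5)'s exact force law is not reproduced in this excerpt, so an inverse-square, Coulomb-like form scaled by ε is assumed for illustration.

```python
import numpy as np

def euclidean(xi, xj):
    """Eq (3.6): Euclidean distance between two particles."""
    return float(np.sqrt(np.sum((xi - xj) ** 2)))

def field_force(xi, xj, eps=0.5):
    """Hedged sketch of Eq (3.5): assumed inverse-square force, damped by eps."""
    d = euclidean(xi, xj)
    return eps / (d ** 2 + 1e-12)   # tiny term guards against d = 0
```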
n_e elite particles are selected from the negatively charged subpopulation to construct the attractive sample PE_1 through the electric field force-based comprehensive learning strategy, directing the particles to a better area. These n_e elite particles constitute the set S_3. For the currently updated positively charged particle X_i, we calculate the electric field force f_ij of each elite particle X_j in S_3 on X_i, where j represents the serial number of the elite particle in S_3.
where d = 1, 2, ..., D; PE_{1,d} is the value of PE_1 in the d-th dimension; pbest_{j,d} represents the d-th dimension of the historical best position of the elite particle X_j in S_3; and f_ij is the electric field force of X_j on X_i. Similarly, n_g general particles are selected from the positively charged subpopulation to construct the repulsive sample PG_1 through the electric field force-based comprehensive learning strategy. These n_g general particles constitute the set S_2. For the currently updated positively charged particle X_i, we calculate the electric field force f_ik of each general particle X_k in S_2 on X_i, where k represents the serial number of the general particle in S_2.
where PG_{1,d} is the value of PG_1 in the d-th dimension, pbest_{k,d} represents the d-th dimension of the historical best position of the general particle X_k in S_2, and f_ik is the electric field force of X_k on X_i. Algorithm 1 lists the pseudo-code of EFCLS, taking positively charged particles as an example. It is worth noting that when we construct PE_2, X_j comes from S_1, and when we build PG_2, X_k comes from S_4.
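Equations (3.7) and (3.8) are not reproduced in this excerpt; the sketch below assumes the sample is a force-weighted average of the selected particles' historical best positions, which matches the quantities the text names (pbest vectors and forces f_ij), but the exact aggregation is an assumption.

```python
import numpy as np

def build_sample(pbests, forces):
    """Hedged sketch of Eqs (3.7)/(3.8): force-weighted average of pbests.

    pbests: (m, D) array of historical best positions of the selected particles.
    forces: length-m array of electric field forces on the updated particle.
    """
    forces = np.asarray(forces, dtype=float)
    return (forces[:, None] * np.asarray(pbests)).sum(axis=0) / forces.sum()

sample = build_sample(np.array([[0.0, 0.0], [2.0, 4.0]]), [1.0, 1.0])
```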
Algorithm 1 Electric field force-based comprehensive learning strategy
1: Sort positively charged particles and negatively charged particles respectively by fitness value
2: for i = 1 → n_1 do
3:   for each X_j in S_3 do
4:     Calculate the distance d_ij between X_i and X_j by Eq (3.6)
5:     Calculate the electric field force f_ij of X_j on X_i by Eq (3.5)
6:   end for
7:   Construct the attractive sample PE_1 by Eq (3.7)
8:   for each X_k in S_2 do
9:     Calculate the distance d_ik between X_i and X_k by Eq (3.6)
10:    Calculate the electric field force f_ik of X_k on X_i by Eq (3.5)
11:  end for
12:  Construct the repulsive sample PG_1 by Eq (3.8)
13: end for

In MSPSO, once a particle is updated, the selection of elite particles and general particles changes according to the fitness values. To enhance the diversity of the population, we adaptively adjust the weight coefficients α_1, β_1, α_2, and β_2 using the historical information of the particles.
We assign a weight coefficient w_i to each particle and initialize w_i with a random real number generated from N(0.1, σ²). Besides, real numbers randomly generated from the Gaussian distributions N(µ_1, σ²), N(µ_2, σ²), N(µ_3, σ²), and N(µ_4, σ²) are assigned to α_2, β_1, α_1, and β_2, respectively. µ_1, µ_2, µ_3, and µ_4 are updated according to the historical knowledge of the elite particles and general particles of the subpopulations, as in Equation (3.9).
where λ represents the degree to which µ_k (k = 1, 2, 3, 4) learns from the historical knowledge of the corresponding subpopulation, S_k (k = 1, 2, 3, 4) denotes the four sets filtered through the tournament mechanism, and |S_k| is the number of particles in S_k. We update w_i with a random real number generated from its corresponding N(µ_k, σ²). Based on experience, we set σ to 0.1.

Segment-based weighted learning strategy
Although the attraction and repulsion in the velocity update equation can disperse the particles and facilitate exploration, the particles may fall into the local optima of the two subpopulations. Therefore, to reduce the risk of falling into the local optimum, we construct a global learning sample GM by SWLS to guide particle motion; Algorithm 2 lists the pseudo-code of SWLS. Firstly, the dimensions of the global learning sample are divided evenly into m segments, and each segment follows the elite particles of the entire population to construct the learning sample. Specifically, we sort all particles in ascending order through the tournament mechanism and select a portion of the top particles to form an elite population. For each segment, we randomly select k particles from this elite population to form the set E. Then we use a weighting strategy to construct the global learning sample GM = (GM_1, GM_2, ..., GM_D).
where d = 1, 2, ..., D, and GM_d represents the value of GM in the d-th dimension. E is the set of the k selected elite particles, and E_i denotes the i-th elite particle in E; E_{i,d} is the value of E_i in the d-th dimension. The fitness function is denoted by f, and γ is a small positive number that prevents the denominator from being zero.
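The weighting itself is not reproduced in this excerpt; given that f appears with a guard term γ in the denominator, the sketch below assumes an inverse-fitness weighted average over the k selected elite particles (better fitness contributes more). This is an assumption, not the paper's exact formula.

```python
import numpy as np

def swls_segment(elite, fitness, gamma=1e-6):
    """Hedged sketch of the SWLS weighting for one dimension segment.

    elite:   (k, D_seg) array, the k randomly selected elite particles
             restricted to this segment's dimensions.
    fitness: length-k fitness values (minimization; smaller is better).
    """
    w = 1.0 / (np.asarray(fitness, dtype=float) + gamma)  # small f -> large weight
    return (w[:, None] * np.asarray(elite)).sum(axis=0) / w.sum()

seg = swls_segment(np.array([[0.0, 0.0], [10.0, 10.0]]), [1e-6, 1e6])
```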
In this way, the particles can obtain more comprehensive information, which is conducive to global exploration and increases the probability of finding the global best value. Based on SWLS, we calculate the electric field force of the newly constructed sample on the currently updated particles, which represents the influence of the global learning sample GM on the particles in the electric field.

Algorithm framework
By integrating the above strategies, Algorithm 3 shows the implementation process of MSPSO.

Algorithm 3 The MSPSO algorithm
1: /*Initialization*/
2: for i = 1 → n do
3:   Randomly initialize V_i and X_i
4:   Evaluate f(X_i); pbest_i = X_i
5: end for
6: Initialize all particles' weight coefficients w_i; set t = 1
7: /*Main Loop*/
8: while t < Maxiter do
9:   Compute ω^t by Eq (3.4)
10:  Compute µ_1, µ_2, µ_3 and µ_4 by Eq (3.9)
11:  Update α_1, β_1, α_2, β_2
12:  Construct the global elite sample GM^t by SWLS
13:  for i = 1 → n_1 do
14:    Construct PE_1^t and PG_1^t by EFCLS
15:    Compute the electric field force f_ig of GM^t on X_i by Eq (3.5)
16:    Update V_i by Eq (3.1)
17:  end for
18:  for i = (n_1 + 1) → (n_1 + n_2) do
19:    Construct PE_2^t and PG_2^t by EFCLS
20:    Compute the electric field force f_ig of GM^t on X_i by Eq (3.5)
21:    Update V_i by Eq (3.2)
22:  end for
23:  Update each position X_i by Eq (3.3); evaluate f(X_i) and update pbest_i
24:  t = t + 1
25: end while

In this section, we conduct experiments to evaluate the performance of MSPSO. First, the parameters of MSPSO are tuned and their influence on performance is analyzed. Then the diversity of the algorithm's search space is analyzed experimentally, and the effectiveness of the newly proposed strategies is verified.

Parameter sensitivity
We carry out experiments to analyze the influence of the parameters n_e, n_g, and λ on the performance of MSPSO. We set the number of particles in the positively charged subpopulation to be the same as that in the negatively charged subpopulation, n_1 = n_2 = n/2. For simplicity, we set n_e = η_e · n/2 and n_g = η_g · n/2. The experimental settings are a population size of n = 60 and a maximum number of function evaluations MaxFes = 1000D. Each parameter is varied from 0 to 1 in steps of 0.1. One unimodal function (Schwefel's P2.21) and three multimodal functions (Quartic, Alpine, Penalized 1) are employed to evaluate the results. The evaluation standard is the mean value over 30 independent runs, and the tests are performed on the 50D problem. Note that to analyze the influence of each parameter on the performance of the algorithm separately, we set a default value for each parameter, namely η_e = 0.5, η_g = 0.5, λ = 0.5.

(1) Sensitivity of η_e

η_e represents the proportion of elite particles that attract the currently updated particle; its function is to guide particles to a better area. A smaller η_e will increase the convergence speed of the algorithm, but it may also cause the particles to ignore the information of other better particles.
Although a larger η_e can obtain more information from the oppositely charged particles, it may cause the particles to evolve in a worse direction. From the first row of Figure 1, we can see that a smaller η_e is more conducive to improving accuracy, so we set η_e to 0.1.
(2) Sensitivity of η_g

η_g represents the proportion of general particles that repel the currently updated particle; its function is to disperse the particle swarm and drive the poorer particles to explore other directions, increasing the global exploration ability. A smaller η_g may increase the convergence speed of the algorithm but may cause the particles to cluster too tightly and weaken the global exploration ability. A larger η_g would make the particles more dispersed but may change the direction of movement of better particles. According to the experimental results in the second row of Figure 1, η_g performs best in the range [0.1, 0.2], and we set η_g to 0.1.
(3) Sensitivity of λ. λ represents the degree to which the weight coefficients of attraction and repulsion learn from the historical information of elite particles and general particles. If λ is small, this knowledge has little influence on the adjustment of μ_k, and some useful historical information of the elite or general particles may be ignored. The greater λ is, the more μ_k depends on elite or general experience. Based on the experimental results in the third row of Figure 1, λ = 0.4 is adopted in this work.
According to the above experiments and analysis, we set η_e, η_g, and λ to 0.1, 0.1, and 0.4, respectively, in the subsequent experiments.
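The sweep procedure above can be sketched as follows. Here `runner` is a hypothetical stand-in (not code from the paper) for a single MSPSO run that returns the best fitness found for a given parameter setting and random seed; the harness only reproduces the experimental protocol: one parameter varied over [0, 1] in steps of 0.1, the others held at their default 0.5, and the mean taken over independent runs.

```python
import statistics

def sweep_parameter(runner, name, runs=30):
    """Vary one MSPSO parameter over [0, 1] in steps of 0.1 while the
    others stay at their default value of 0.5, and record the mean best
    fitness over `runs` independent runs (30 in the paper's setup)."""
    defaults = {"eta_e": 0.5, "eta_g": 0.5, "lam": 0.5}
    results = {}
    for step in range(11):
        value = round(step * 0.1, 1)
        params = dict(defaults, **{name: value})
        # One run per seed so repetitions are independent but reproducible.
        scores = [runner(seed=s, **params) for s in range(runs)]
        results[value] = statistics.mean(scores)
    return results
```

The returned dictionary maps each tested parameter value to its mean result, which is what each row of Figure 1 plots.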

Diversity analysis
MSPSO introduces an electric field into PSO and proposes two learning strategies to improve the performance of the algorithm: the electric field force-based comprehensive learning strategy and the segment-based weighted learning strategy. Additionally, we adopt a parameter adaptation strategy to update the attractive and repulsive weight coefficients. To verify the effectiveness of the proposed strategies, we selected four representative test functions for the population diversity experiment: a unimodal function (Different Powers) and three multimodal functions (Quartic, Ackley, Penalized 1). The diversity experiment settings are as follows: population size n = 60, problem dimension D = 50, and maximum number of function evaluations MaxFEs = 1000D. Population diversity is defined according to reference [30]:

diversity = (1/n) Σ_{i=1}^{n} √( Σ_{j=1}^{D} (x_{i,j} − x̄_j)² )
where x_{i,j} represents the position of the i-th particle in the j-th dimension and x̄_j represents the average position of all particles in the j-th dimension. MSPSO-EFF, MSPSO-PA, and MSPSO-SEW denote the variants obtained by removing, respectively, the electric field force, the parameter adaptation, and the segment-based weighted learning strategy from MSPSO. The experimental comparison of diversity is shown in the first row of Figure 2. In addition, to further analyze the different effects of these strategies on the algorithm, we also measured solution accuracy, as shown in the second row of Figure 2. As Figure 2 shows, MSPSO-SEW delivers the best diversity on the four test functions. However, MSPSO-SEW sacrifices convergence accuracy and cannot provide the best solutions. On the contrary, MSPSO-PA loses diversity faster than the other algorithms. Although MSPSO does not obtain the best diversity, it is always better than MSPSO-EFF except on the Quartic function, which shows that the electric field force is beneficial for enhancing population diversity. Additionally, compared with MSPSO-SEW, MSPSO-EFF, and MSPSO-PA, MSPSO has a slight disadvantage in convergence speed but always achieves the highest accuracy.
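As a minimal sketch, assuming the centroid-based definition that the description above implies (the mean Euclidean distance of the particles from the swarm centroid), the diversity measure can be computed like this:

```python
import math

def swarm_diversity(positions):
    """Population diversity: mean Euclidean distance of the particles
    from the swarm centroid,
        diversity = (1/n) * sum_i sqrt(sum_j (x_ij - xbar_j)^2)."""
    n = len(positions)
    dim = len(positions[0])
    # Centroid: per-dimension average of all particle positions.
    centroid = [sum(p[j] for p in positions) / n for j in range(dim)]
    total = sum(
        math.sqrt(sum((p[j] - centroid[j]) ** 2 for j in range(dim)))
        for p in positions
    )
    return total / n
```

A swarm collapsed onto a single point has diversity 0, which is why the diversity curves in Figure 2 decay as the population converges.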
From the above experimental results, we can draw some preliminary conclusions about the newly introduced strategies. First, in terms of diversity, both the electric field force and the adaptive update of the weight coefficients play an essential role. Second, all three strategies are beneficial for improving the accuracy of the algorithm.

Benchmark functions and baseline algorithms
To evaluate the performance of the MSPSO algorithm from multiple aspects, we selected sixteen widely used benchmark functions; their detailed information is shown in Table 1. The third and fourth columns of Table 1 give the search range and the global optimum of each benchmark function, respectively. The benchmark functions fall into two categories: unimodal functions (f_1–f_8) and multimodal functions (f_9–f_16). Among the multimodal functions, f_11–f_16 are relatively more complicated than f_9–f_10.
We selected eight well-known particle swarm optimization variants to verify the superiority of the MSPSO algorithm: PSO, DMSPSO, CLPSO, OBL-CPSO, CLPSO-LOT, MPCPSO, XPSO, and MIONPSO. The specific parameter settings of each algorithm are listed in Table 2. Note that, for fairness of comparison, each algorithm's parameter and experimental settings are the same as in the corresponding original paper.

Comparison of solution accuracy
The experimental settings are as follows: population size n = 60, maximum number of function evaluations MaxFEs = 1000D, and 30 independent runs per algorithm. We adopt the evaluation indicators commonly used for PSO algorithms: the mean and the standard deviation. Additionally, to show the performance characteristics of each algorithm in different dimensions, we conducted experiments in 30D, 50D, and 100D. Table 3, Table 4, and Table 5 show the results of the different algorithms on the different dimensions and benchmark functions, including the CPU running time.
(1) Unimodal functions (f_1–f_8). Table 3 shows that, on the 30D problems, MSPSO achieves a better mean and standard deviation on most unimodal functions. Except for f_2, f_3, and f_8, MSPSO finds the optimal value of every unimodal function, and it is also highly stable. On f_2 and f_3, the accuracy of MSPSO is slightly lower than that of MIONPSO but better than that of the other seven algorithms. All algorithms perform similarly on f_8: XPSO performs best, and none finds the optimal value. Moreover, MSPSO shows similar behavior on the high-dimensional problems (Table 4 and Table 5), almost always finding the optimal value, and it takes first place five times on both 50D and 100D.
(2) Multimodal functions (f_9–f_16). The results in Table 3 indicate that, among the eight multimodal functions, MSPSO easily escapes the local optima of f_12, f_13, and f_14. No algorithm finds the optimal value of f_9, but MSPSO achieves the highest accuracy. Although MSPSO does not reach the global optimum of f_10, its result differs only slightly from that of MIONPSO and is better than those of the other algorithms, which fall into local optima. In addition, Table 4 and Table 5 show that MSPSO takes first place four times and five times on the 50D and 100D multimodal functions, respectively. These results show that the advantage of MSPSO grows as the dimension increases.
To further illustrate the comprehensive performance of the compared algorithms in terms of solution accuracy, Table 6 reports a Friedman test of the mean values, with significance level α = 0.05, over the sixteen benchmark functions and all dimensions. The results show that MSPSO takes first place on the unimodal functions by a clear margin and ranks third on the multimodal functions. Although MSPSO is slightly worse than MPCPSO and OBL-CPSO on multimodal problems, it still ranks first overall, followed by MPCPSO and OBL-CPSO. These results show that MSPSO outperforms the eight comparison algorithms and can effectively handle high-dimensional unimodal and multimodal optimization problems with high robustness and convergence accuracy.
The better performance of MSPSO derives from the following advantages. First, the interaction of attraction and repulsion disperses particles instead of gathering them at one point, enhancing the capability of global exploration, while the attraction between particles exchanges information between the two subpopulations, making the particles co-evolve in a better direction. Second, the attractive and repulsive samples constructed by EFCLS make particles move purposefully and lead them to better areas; at the same time, the global learning sample created by SWLS lets particles obtain more comprehensive information from the elite particles, which is conducive to global exploration. Third, the adaptive adjustment of the weight coefficients enhances the diversity of the population, which benefits its evolution. Finally, the non-linear decrease of the inertia weight makes particles focus on global exploration in the early stage and on local exploitation later, which improves the convergence accuracy.
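The average Friedman ranks reported in Table 6 can be reproduced with a short sketch (a generic rank computation, not code from the paper): on each benchmark function the algorithms are ranked by mean value, lower being better and ties sharing the average rank, and the per-function ranks are then averaged.

```python
def friedman_average_ranks(scores):
    """scores[f][a] is the mean value of algorithm a on function f
    (lower is better). Returns each algorithm's average Friedman rank."""
    n_funcs = len(scores)
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda a: row[a])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:
            # Extend j over the group of algorithms tied with position i.
            j = i
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # tied group shares the average 1-based rank
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_algs):
            totals[a] += ranks[a]
    return [t / n_funcs for t in totals]
```

The algorithm with the smallest average rank is the overall winner, which is how Table 6 orders the nine competitors.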
Table 3, Table 4, and Table 5 also record the CPU time consumed by each algorithm on the sixteen test functions, and Table 7 shows the overall ranking by CPU time. The top three are PSO, DMSPSO, and OBL-CPSO, with MSPSO close behind. Figure 3 shows the comprehensive performance of MSPSO and the comparison algorithms as a joint ranking of accuracy and CPU time on the benchmark functions; the closer an algorithm lies to the lower-left corner, the better its performance. From Figure 3, we can see that MSPSO and OBL-CPSO are the best among all algorithms and perform similarly: MSPSO attains higher accuracy than OBL-CPSO but consumes more CPU time. This indicates that MSPSO offers high solution accuracy at a reasonable computational cost.
To further examine the superiority of MSPSO, a Nemenyi test was performed on the basis of the Friedman test, as shown in Figure 4, where CD denotes the critical difference. Figure 4 shows that MSPSO performs significantly better than PSO, CLPSO-LOT, DMSPSO, and CLPSO. Compared with MPCPSO, OBL-CPSO, XPSO, and MIONPSO, MSPSO obtains the best Friedman ranking, but the difference is not significant. In addition, we use the Wilcoxon signed-rank test with significance level α = 0.05 to quantify the difference between MSPSO and the baseline algorithms on the different dimensions, as shown in Table 8. Table 8 shows that MSPSO achieves a significant improvement over CLPSO-LOT, CLPSO, DMSPSO, and PSO. It is worth noting that there is no significant difference between MSPSO and PSO on the 30D problems, but the significance level between them changes from 0.1 to 0.01 as the dimension increases. There is no statistically significant difference between MSPSO and MIONPSO, XPSO, MPCPSO, or OBL-CPSO; however, MSPSO shows better results than these baselines on most of the test functions. Therefore, the Friedman, Nemenyi, and Wilcoxon signed-rank tests consistently verify the validity of MSPSO on the benchmark problems, and MSPSO performs better in high-dimensional spaces.
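The critical difference used in the Nemenyi test of Figure 4 follows the standard formula CD = q_α · sqrt(k(k + 1)/(6N)), where k is the number of algorithms, N the number of datasets (benchmark functions), and q_α the tabulated Studentized-range-based critical value. A minimal sketch; the value q_0.05 ≈ 3.102 for k = 9 comes from Demšar's table and is an assumption about the authors' exact setup:

```python
import math

def nemenyi_cd(k, n_datasets, q_alpha):
    """Nemenyi post-hoc critical difference:
    CD = q_alpha * sqrt(k * (k + 1) / (6 * N))."""
    return q_alpha * math.sqrt(k * (k + 1) / (6 * n_datasets))

# Two algorithms differ significantly when the gap between their average
# Friedman ranks exceeds CD. For k = 9 algorithms at alpha = 0.05,
# q_alpha is about 3.102 (Demsar's table); N = 16 benchmark functions.
cd = nemenyi_cd(k=9, n_datasets=16, q_alpha=3.102)
```

This CD is the length of the horizontal bar typically drawn on a Nemenyi (critical-difference) diagram such as Figure 4.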

Analysis of algorithm convergence
The dynamic convergence curve of each benchmark function over the complete iteration cycle is employed to illustrate the convergence behavior. We select the first 1000 rounds to observe the differences in convergence speed among the algorithms, as shown in Figure 5 and Figure 6. For the eight unimodal functions in Figure 5, MSPSO maintains a high convergence speed together with high convergence accuracy, whereas most of the other PSO algorithms converge quickly in the early stage and then slow down. On function f_8, MSPSO converges quickly in the initial stage, but its final accuracy is relatively poor. As shown in Figure 6, MSPSO has obvious advantages in both convergence speed and convergence accuracy on most multimodal functions. On functions f_11, f_15, and f_16, however, MSPSO converges quickly early on and then flattens without reaching the global optimum.
Based on the above analysis, MSPSO achieves the best convergence speed and accuracy among the compared PSO algorithms on both unimodal and multimodal functions, mainly owing to the learning strategies analyzed above. Additionally, the introduction of the electric field force and the adaptive update of the parameters increase the diversity of the population and speed up convergence.

MSPSO performance on a real-world problem
In this section, we verify the performance of MSPSO on a widely used practical optimization problem. Dukic et al. [45] proposed a polyphase code design method for spread spectrum pulse radar based on the properties of the aperiodic autocorrelation function, which is used in the design of polyphase pulse compression codes. Gil-López et al. [46] formulated this spread spectrum radar polyphase (SSRP) code design task as a nonlinear max–min optimization problem, defined as follows.
min f(X) = max{φ_1(X), …, φ_{2m}(X)}, X = {(x_1, …, x_D) ∈ R^D | 0 ≤ x_j ≤ 2π, j = 1, …, D}, m = 2D − 1. (5.1)
Table 9 shows the results of MSPSO and the other algorithms on the SSRP code design problem. The fifth row of Table 9 gives the final ranking of all algorithms on the SSRP problem in the different dimensions. Although the performance of MSPSO on SSRP is slightly worse than that of MPCPSO, it is better than that of the other seven algorithms. The experimental results show that the proposed MSPSO also performs well on practical problems and thus has practical application value.
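For reference, a sketch of the SSRP max–min objective as popularized in the CEC 2011 real-world problem definitions, which the paper's Eq. (5.1) is assumed to match: with m = 2D − 1 autocorrelation terms φ_i and variables constrained to 0 ≤ x_j ≤ 2π, the task is to minimize f(X) = max{φ_1(X), …, φ_{2m}(X)}.

```python
import math

def ssrp_objective(x):
    """Spread spectrum radar polyphase (SSRP) code design objective:
    f(X) = max{phi_1(X), ..., phi_2m(X)} with m = 2D - 1."""
    d = len(x)
    m = 2 * d - 1
    phi = []
    # phi_{2i-1}(X), i = 1..D (odd-indexed autocorrelation terms).
    for i in range(1, d + 1):
        phi.append(sum(
            math.cos(sum(x[k - 1] for k in range(abs(2 * i - j - 1) + 1, j + 1)))
            for j in range(i, d + 1)
        ))
    # phi_{2i}(X), i = 1..D-1 (even-indexed terms).
    for i in range(1, d):
        phi.append(0.5 + sum(
            math.cos(sum(x[k - 1] for k in range(abs(2 * i - j) + 1, j + 1)))
            for j in range(i + 1, d + 1)
        ))
    # phi_{m+i}(X) = -phi_i(X), i = 1..m.
    phi += [-p for p in phi[:m]]
    return max(phi)
```

Because the objective is the maximum of each φ_i and its negation, f(X) is always non-negative, and minimizing it drives all autocorrelation sidelobes toward zero, which is the design goal of pulse compression codes.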

Comparison with other metaheuristic algorithms
This section further compares MSPSO with other recently published metaheuristic algorithms, namely MOMICA [2] and mSSA [3]. MOMICA is an improved ICA that is well suited to solving multimodal problems, and mSSA is an improved SSA that applies to a wider range of optimization problems. Since unimodal functions alone cannot evaluate an algorithm's ability to jump out of local optima, multimodal functions are also included; the results on the eight selected test functions are shown in Table 10. The third-to-last row of Table 10 gives the average rank of each algorithm over the eight test functions. From Table 10, we can see that MSPSO and mSSA are effective on the unimodal problems and achieve good performance on the multimodal functions. The Wilcoxon signed-rank test shows no significant difference between MSPSO and the other two algorithms. However, mSSA offers the best performance on the test functions in terms of solution accuracy and Friedman test results, with the proposed MSPSO following closely behind. It is worth noting that MSPSO is inferior to the other two algorithms on the multimodal functions f_15 and f_16. The reason may be that the global learning sample guides the particles to move too fast, which increases the probability of falling into a local optimum. These results show that the ability of MSPSO to jump out of local optima needs further improvement, which would make it more suitable for complex optimization problems.

Conclusions
In this study, we propose a multi-sample particle swarm optimization algorithm based on electric field force. In the proposed MSPSO, the electric field force-based comprehensive learning strategy and the segment-based weighted learning strategy are employed to construct learning samples, enhancing the ability of global exploration. Additionally, the adaptive changes of the inertia weight and the weight coefficients strengthen the diversity of the population and help the particles escape local optima. The experimental comparison of MSPSO with eight PSO variants demonstrates its superiority: it obtains faster convergence and higher accuracy on both unimodal and multimodal problems. MSPSO is also effective in solving practical problems.
Nevertheless, the proposed MSPSO algorithm still has some limitations that warrant further research. On the one hand, although MSPSO converges quickly, its accuracy on some multimodal functions is not ideal; for example, it cannot find the optimal value of the multimodal function f_11. Moreover, adjusting the parameters through the normal distribution function introduces a certain degree of randomness, which affects the algorithm's accuracy. Our subsequent work will therefore further explore the local exploitation behavior of MSPSO, such as matching appropriate strategies to specific particle dimensions to improve search efficiency. On the other hand, only a single application is considered in this article; we will next extend the proposed MSPSO algorithm to further applications.