Particle Swarm Optimization Algorithm With Adaptive Two-Population Strategy

The particle swarm optimization (PSO) algorithm is a swarm intelligence (SI) algorithm used to solve optimization problems. Owing to its simplicity and small number of parameters, PSO has become one of the most popular optimization algorithms. However, the single structure of PSO makes it difficult to find the appropriate optima, resulting in low convergence accuracy. To overcome these defects, it is necessary to increase the diversity of the population and enhance the local exploitation ability of the algorithm. In this study, we propose a PSO algorithm with an adaptive two-population strategy (PSO-ATPS), which adaptively divides a population into two groups: an excellent population and an ordinary population. Inspired by animal hunting behavior, a new velocity-position update method is proposed for the ordinary population, while a velocity update formulation with decreasing inertia weights based on logistic chaotic mapping is applied to the excellent population. The algorithm increases population diversity by continuously changing the search strategy of the particles. In addition, a new neighborhood search strategy (the oscillation strategy) is proposed, in which a particle searches randomly in its own adaptive neighborhood when its motion stagnates and updates its position using an elite strategy. In comparisons with several other optimization algorithms, PSO-ATPS achieved first place in 7, 8, and 9 of 10 groups of test functions in 10, 30, and 50 dimensions, respectively, indicating its accuracy and effectiveness. The results show that the performance of PSO-ATPS is competitive, and many improvements developed for PSO can also be applied to PSO-ATPS, demonstrating its potential for further development.


I. INTRODUCTION
Computation is a scientific method that connects theory and reality. A large number of real-world problems can be transformed into theoretical problems through analysis and modeling. The solutions to these abstract theoretical problems require computation, and algorithms are a general term for this class of computational methods [1]. Optimization algorithms are computational methods used to solve optimization problems. The scope of optimization problems is very broad, and many real problems in engineering, science, and economics can be classified as optimization problems [2]. For optimization problems with few parameters and low dimensionality, researchers have proposed computational methods based on problem gradients, which are often called traditional optimization algorithms. These algorithms have high accuracy and fast computational speed. However, if the problem to be optimized is complex and high-dimensional, traditional optimization algorithms are not useful. With the remarkable increase in the availability of data in this era, the problems we face have become more complex and higher-dimensional, motivating the development of meta-inspired algorithms to address such complex and high-dimensional optimization problems.
Meta-inspired algorithms are a class of intelligent optimization algorithms inspired by natural phenomena and biological behavior. Many excellent meta-inspired algorithms have been proposed in recent years, such as Henry gas solubility optimization [10], the Archimedes optimization algorithm [11], the Rime optimization algorithm [12], social network search [13], black widow optimization [14], and the coronavirus herd immunity optimizer [15]. According to the source of inspiration, meta-inspired algorithms can be broadly classified as those motivated by physics, chemistry, and biology. Optimization algorithms inspired by biological behavior are known as swarm intelligence optimization algorithms. Many novel and excellent SI optimization algorithms have been proposed based on the premise that the behavior of organisms is guided by group behavior and that this biological intelligence exists only in the group and not in the individual organisms. Many SI optimization algorithms, such as the genetic algorithm (GA) [16], sparrow search algorithm [17], whale optimization algorithm (WOA) [18], Harris hawks optimizer [19], particle swarm optimization (PSO) [20], slime mould algorithm [21], honey badger algorithm [22], and colony predation algorithm [23], utilize the information shared in the population to perform the search. These meta-inspired algorithms have spawned many new algorithms applied to real-world optimization problems, and some examples of this research are listed in Table 1.
The idea of PSO originates from the process of foraging in a population containing social properties such as a flock of birds or colony of bees. In these social groups, individual members share information when performing foraging tasks. Examples of this sharing are leaving a scent or vibrating wings at a certain frequency. Thus, members of the group have information about other members at all times; in PSO, this information is reduced to information about the position of any particular individual, and information regarding the optimal position is communicated to all individuals. All members of the group not only receive the optimal information from other individuals but also record their optimal positions, and the acceleration of the individuals is based on these two optimal positions. The PSO model is divided into two parts: the velocity update equation and the position update equation. Therefore, PSO essentially involves the summation of vectors.

II. RELATED WORK
We first describe the framework of the original PSO algorithm. Then, we present some representative improved versions of the PSO algorithm, which can be divided into parameter improvement, topology improvement, strategy combination with other algorithms, and practical problem solving.

A. THEORY OF PSO
The PSO algorithm was proposed by Kennedy and Eberhart in 1995 [20], and it has shown great research potential in solving optimization problems. The PSO model consists of two parts, namely the velocity model and the position model. In D-dimensional space, the candidate solutions are matrices with the corresponding dimensions. Thus, the positions and velocities of a swarm of N particles can be represented by matrices of size N × D, where x_{i,j} denotes the value of position X_i in dimension j and v_{i,j} denotes the value of velocity V_i in dimension j. The operations of the algorithm are matrix operations, and the velocity and position update models of the standard PSO are represented as [24]

v_{i,j}^{t+1} = \omega v_{i,j}^{t} + c_1 r_1 (Pbest_{i,j}^{t} - x_{i,j}^{t}) + c_2 r_2 (Gbest_{j}^{t} - x_{i,j}^{t}),
x_{i,j}^{t+1} = x_{i,j}^{t} + v_{i,j}^{t+1},

where j = 1, 2, ..., D; c_1 and c_2 are the acceleration coefficients controlling the particle deflection, which are constants in the original PSO; r_1 and r_2 are uniformly distributed random numbers in [0, 1]; and t is the current iteration number. Gbest_j^t denotes the value of the jth dimension of the globally optimal particle in the previous t iterations, and Pbest_{i,j}^t denotes the value of the jth dimension of the optimal position of particle i in the previous t iterations.
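For reference, the standard velocity and position updates above can be sketched in a short, self-contained Python implementation. This is a minimal illustration, not the authors' code; the sphere objective, swarm size, iteration budget, and coefficient values are arbitrary choices:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        x_min=-5.0, x_max=5.0, seed=0):
    """Minimal standard PSO with inertia weight w and coefficients c1, c2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(x_min, x_max, (n, dim))   # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest = x.copy()                          # personal best positions
    pbest_f = np.apply_along_axis(f, 1, x)    # personal best fitness
    g = pbest[np.argmin(pbest_f)].copy()      # global best position
    for t in range(iters):
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        # velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, x_min, x_max)      # position update with bounds
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=5)
```

On the 5-dimensional sphere function, this sketch converges to a value close to zero within the given budget.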

B. VARIANTS OF PSO 1) PARAMETER IMPROVEMENT
Parameter optimization in PSO mainly focuses on the inertia weight coefficients ω and acceleration coefficients c 1 , c 2 . Because these coefficients control the migration of the particles, the fluctuation of the coefficients affects the search area of the particles and the convergence speed of the algorithm, whereby a suitable combination of parameters can achieve a balance between population diversity and the convergence speed of the algorithm.
To enhance the local search and escape from local optima in PSO, Chen et al. [25] used a sine mapping scheme to describe the inertia weight and sine and cosine mapping schemes to describe the acceleration coefficients c_1 and c_2, respectively. Because sine mapping demonstrates good ergodicity as well as nonrepeatability and randomness, it can be used to improve the population diversity of PSO. Ghasemi et al. [26] proposed PPSO, which removes the inertial part of the velocity from PSO and uses adaptive phase angles to control the motion of the particles. Feng et al. [27] proposed the chaotic descending inertia weight PSO (CDIWPSO), which randomly decreases the inertia weight of PSO by logistic chaos mapping, and Duan et al. [28] initialized the particles using logistic chaos mapping, demonstrating that chaos-mapping PSO has an advantage over standard PSO in global search. Tian et al. [29] used the Sigmoid function to update the acceleration coefficients of the particles and added a slow-change function to the position update formulation to ensure that the algorithm does not converge prematurely; in the later stage of the search, a Gaussian variation strategy is adopted for poorly adapted particles. Kassoul et al. [30] and Moazen et al. [31] normalized the fitness of the particles to adjust the parameters, with each particle given a unique combination of parameters. Sedighizadeh et al. [32] proposed automatically adjusted dynamic inertia weights and introduced two terms in the velocity update model to increase the diversity of the PSO population.

2) TOPOLOGY IMPROVEMENT
To improve PSO, changes can also be made to its topology. In standard PSO, the motion of a particle is influenced by two factors: social cognition and self-cognition. In the velocity update of a particle, the former causes the particle to converge toward the optimal position of the population, whereas the latter allows the population to perform a more comprehensive search of the solution space. That is, individual particles receive information regarding the optimal positions of both the population and the individual. This relationship can cause problems such as slow convergence, premature convergence, and low convergence accuracy.
To solve the problem that the PSO algorithm can become trapped near a local optimum, Liu and Nishi [33] proposed a new update formula for Pbest, which expands the search range of particles by searching positions adjacent to Pbest to avoid premature convergence to some extent. Lee et al. [34] added a random noise term to the velocity update formula of PSO and proposed a repulsive method based on repulsion theory; this algorithm causes the particles to adopt repulsive and attractive strategies toward Gbest in the exploitation and exploration phases, respectively. Mousavirad and Rahnamayan [35] added a centroid of all particles to the velocity update formula of PSO to promote faster convergence. The algorithm in [36] utilizes an average dimensional learning strategy that allows particles trapped in a local optimum to escape using information from other dimensions. Meanwhile, [37] and [38] utilize learning strategies to modify the PSO topology by changing the source of the particle information, which allows the particles to diversify their search in the solution space.

3) HYBRID PSO
Some studies have combined PSO with theories from other metaheuristic algorithms. For example, [39] combined the PSO algorithm with the crossover method of GA to improve the diversity of the population by performing crossover with the positions of random particles. In [40], a crossover and variation strategy is applied to some of the particles in PSO (a specific proportion is suggested in the study) to ensure diversity in the population. Chen et al. [41] combined the artificial bee colony (ABC) algorithm with PSO, borrowing the lead, follow, and scout strategies of ABC to diversify the behavior of the PSO population. Singh et al. [42] combined the salp swarm algorithm with PSO, searching the solution space by setting different position update formulas for leaders and followers. Hu et al. [43] and Khan and Ling [44] combined the gravitational search algorithm (GSA) with PSO, using the excellent global search capability of GSA to help PSO escape from local optima.

4) PROBLEM SOLVING WITH PSO
Owing to its simple structure and good plasticity, PSO and its variants have been applied to many real-world problems.
In [45], a quadratic binary PSO is proposed to solve the scheduling problem of smart home appliances. In [46] and [47], enhanced leader PSO and time-varying acceleration coefficients PSO are proposed and applied to parameter estimation for photovoltaic cells and modules, respectively. In [48], PSO is used to plan the flight path of a vehicle, with the optimization reducing the number of collisions of the vehicle. In [49], a new PSO algorithm is proposed to solve the microgrid unit commitment problem. In [50], a hybrid binary-continuous PSO algorithm is used to solve the coordinated control problem of generator sets. In [51], PSO and some of its variants are used to solve the flexible AC transmission systems optimization problem for power systems. In [52], the fuzzy expected value model problem is solved by combining stochastic PSO with a back-propagation neural network. In [53], PSO is used to reduce the operating cost of a power system. In [54], an opposition-based PSO algorithm incorporating an opposition learning strategy is proposed and used to solve the distribution and dispatching problem of active distribution networks.

III. PARTICLE SWARM OPTIMIZATION ALGORITHM WITH ADAPTIVE TWO-POPULATION STRATEGY
As one of the most well-known optimization algorithms, PSO is very influential and flexible. However, owing to its simple and easily adjustable model, PSO can easily become trapped in local optima when facing high-dimensional complex problems. As PSO is a mathematical model that abstracts the predatory behavior of a flock of birds, the population of PSO can be made to perform more than one type of search behavior. To improve the solution quality as well as the convergence speed of PSO, this section proposes a PSO algorithm with an adaptive two-population strategy (PSO-ATPS). The idea of this algorithm is to divide the population into two subpopulations by fitness sorting, with the two populations executing different search strategies. The sizes of the two populations are dynamically adjusted by an adaptive function to achieve a balance between exploitation and exploration. An explanation of the notation used in the optimization models is presented in Table 2.
This section is divided into six parts: (1) The first part introduces a new, simple classification function that divides the individuals in the population into two subpopulations according to fitness. (2) The second part introduces the velocity update formulas for the two classified subpopulations, presenting a new velocity update formula to enhance both the exploitation and search abilities of the population.

A. ADAPTIVE FUNCTIONS TO BE USED FOR CLASSIFICATION
To apply PSO-ATPS, the particles in population POP must first be sorted; if a minimization (maximization) problem is solved, the fitness is sorted from low to high (high to low). The sorted population is called POPnew and is adaptively divided into excellent and ordinary populations by the classification function shown in (2), where random is a random number within the interval [0,1], N is the size of the population, t is the current iteration number of the algorithm, and iter_max is the maximum number of iterations set for the algorithm.
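The sort-and-split step can be sketched as follows. The exact form of the adaptive classification function (2) is defined in the paper and not reproduced here, so the split index `phi` below, which shrinks with the iteration count and is perturbed by a random factor, is only an illustrative assumption:

```python
import numpy as np

def split_population(pop, fitness, t, iter_max, rng):
    """Sort a population by fitness (minimization) and split it into an
    excellent group POPnew1 and an ordinary group POPnew2.

    The split index phi is a hypothetical stand-in for the paper's
    classification function (2): it contracts as iterations proceed and
    is perturbed by a random number in [0, 1].
    """
    order = np.argsort(fitness)            # low fitness first (minimization)
    pop_new = pop[order]                   # sorted population POPnew
    n = len(pop)
    phi = max(1, int(np.ceil(n * rng.random() * (1 - t / iter_max))))
    return pop_new[:phi], pop_new[phi:]    # (POPnew1, POPnew2)

rng = np.random.default_rng(1)
pop = rng.normal(size=(10, 3))
fit = (pop ** 2).sum(axis=1)
p1, p2 = split_population(pop, fit, t=5, iter_max=100, rng=rng)
```

Because the split is recomputed each iteration, particles migrate between the two groups as their relative fitness changes, which is the mechanism the paper uses to vary search strategies.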

B. FORMULA FOR THE NEW UPDATED VELOCITY
Before introducing the velocity update formula, we first classify POPnew into excellent population POPnew1 and ordinary population POPnew2 according to the classification function. When a particle's ordinal number i is less than the classification number φ, the particle belongs to POPnew1; the remaining particles belong to POPnew2. A velocity update formula is provided for each of these two subgroups, where r_1 is a random number in the interval [-1,1] and r_4 is a random number in the interval [0,1]. Parameter ω_t is the decreasing inertia weight based on the logistic chaos mapping [27]:

z_{t+1} = 4 z_t (1 - z_t),
\omega_t = (\omega_{max} - \omega_{min}) \frac{iter_{max} - t}{iter_{max}} + \omega_{min} z_t,

where z_t denotes the value of the logistic chaos mapping at iteration t, the initial value z_0 is set to a random number rand, ω_max denotes the upper limit of the inertia weight at the beginning of the iterations, and ω_min denotes the lower limit of the inertia weight at the termination of the iterations. Fig. 2(a) shows the iterations of the chaotic inertia weight for ω_max = 0.9 and ω_min = 0.6, and Fig. 2(b) shows the iterations of the inertia weight for random number rand with the same parameter settings. It is evident that the inertia weights are more uniformly distributed with the help of the logistic chaos mapping; thus, the logistic chaos mapping is suitable.
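The chaotic decreasing inertia weight can be sketched in a few lines of Python. This is a minimal illustration of the CDIWPSO-style weight of [27]; the exact combination of the linearly decreasing term and the chaotic term is an assumption based on that reference:

```python
import numpy as np

def chaotic_inertia_weights(iter_max, w_max=0.9, w_min=0.6, z0=None, seed=0):
    """Decreasing inertia weight driven by the fully chaotic logistic map
    z_{t+1} = 4 z_t (1 - z_t), following the CDIWPSO idea of [27]."""
    if z0 is None:
        z0 = np.random.default_rng(seed).random()  # random initial z_0
    z, ws = z0, []
    for t in range(iter_max):
        # linear decrease perturbed by the chaotic sequence
        ws.append((w_max - w_min) * (iter_max - t) / iter_max + w_min * z)
        z = 4.0 * z * (1.0 - z)                    # logistic chaos map
    return np.array(ws)

w = chaotic_inertia_weights(1000)
```

The resulting sequence trends downward from roughly ω_max toward small values while oscillating chaotically, which is the behavior shown in Fig. 2(a).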
Levy is a random number obeying the Levy distribution, which is defined as [55]

L(x; \gamma, \mu) = \sqrt{\frac{\gamma}{2\pi}} \, \frac{\exp\!\left[-\gamma / (2(x - \mu))\right]}{(x - \mu)^{3/2}}, \quad x > \mu,

where parameter γ controls the shape of the distribution curve and parameter μ controls the position of the distribution curve.
After a Fourier transform, the Levy distribution takes the form

F(k) = \exp(-\alpha |k|^{\beta}),

where α is a scale parameter and the Levy index β is a parameter in the interval (0,2); a typical value is β = 1.5. Obtaining random numbers obeying the Levy distribution requires Mantegna's algorithm, which has the form

S = \frac{u}{|\upsilon|^{1/\beta}},

where S is a random number generated according to Mantegna's algorithm, obeying the Levy distribution, and u and υ obey Gaussian distributions of the form

u \sim N(0, \sigma_u^2), \quad \upsilon \sim N(0, 1), \quad \sigma_u = \left[\frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma((1+\beta)/2)\,\beta\,2^{(\beta-1)/2}}\right]^{1/\beta}.

The random walk based on the step sizes generated by this method is referred to as Levy flight [56], [57], which describes many of the random-motion trajectories found in nature. Compared with Brownian motion, Levy flight makes large step spans, a feature that permits escape from local optima during a stochastic search. Fig. 3(a) shows the path of a Levy flight for Levy index β = 1.5 over 10^3 steps; the large stepwise jumps are advantageous for a global stochastic search. Fig. 3(b) plots 10^3 randomly generated numbers obeying the Levy distribution for β = 1.5, from which we can see that the Levy random numbers are not uniform; this randomness is more in line with biological reality.
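Mantegna's algorithm described above can be implemented directly. The following sketch generates Levy-distributed step sizes (illustrative code, not the authors' implementation):

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(n, beta=1.5, seed=0):
    """Generate n Levy-flight step sizes via Mantegna's algorithm:
    S = u / |v|**(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    # Mantegna's scale factor sigma_u for the numerator Gaussian
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v) ** (1 / beta)

s = levy_steps(1000)
```

Most generated steps are small, but the heavy tail occasionally produces very large jumps, which is exactly the property that helps a stochastic search escape local optima.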

C. FORMULA FOR THE NEW UPDATE POSITION
As the position update essentially involves a summation of vectors, we define the position updates of the two subgroups based on the standard PSO. For POPnew1, the position update formula of the standard PSO is followed, as the form of its velocity update formula does not differ from that of the standard PSO. The velocity update formula of POPnew2 is taken from the biopredator algorithm; thus, the form of its position update formula must be changed accordingly. The newly introduced velocity and position update formulas of POPnew2 cause particles with poor adaptation to search randomly in the neighborhood of the optimal solution, which enhances the efficiency of particle utilization, strengthens the local search ability, and improves the accuracy of the solution. From the velocity update formula of POPnew2, the velocity can be regarded as a position in (3). If the velocity or position of a particle exceeds the maximum (minimum) bound during the iterations, (12) is used to adjust particles in POPnew1 and (13) is used to adjust particles in POPnew2, resetting out-of-range values to the corresponding bound, where v_min is the lower bound of the velocity, v_max is the upper bound of the velocity, x_max is the upper bound of the solution space, and x_min is the lower bound of the solution space.
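The bound-handling behavior can be sketched as a simple clamp. The paper distinguishes rules (12) and (13) for the two subpopulations; the sketch below shows only the common reset-to-bound behavior they share:

```python
import numpy as np

def clamp(a, lo, hi):
    """Reset any component of a velocity or position vector that leaves
    the feasible range [lo, hi] back to the violated bound, as in the
    bound-handling rules of the paper."""
    return np.minimum(np.maximum(a, lo), hi)

# example: a velocity vector clipped to [v_min, v_max] = [-1, 1]
v = clamp(np.array([-3.0, 0.5, 7.0]), -1.0, 1.0)
```

The same helper applies to positions with `x_min` and `x_max` as the bounds.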

D. OSCILLATION STRATEGY
To solve the trapping problem of PSO, a new adaptive neighborhood search strategy is introduced. Inspired by the ABC optimization algorithm, we use the neighborhood to help particles with constant fitness escape the optimization trap by applying an oscillation strategy, where ϕ is the neighborhood control parameter and k is an adaptive function of the iteration count. Fig. 4 shows the graph of the function k. In the early iterations, k is concave with respect to the maximum number of iterations, which allows a particle to oscillate over a large range; in the later iterations, k is convex, allowing the particle to oscillate over a small range, so that the search achieves a balance. When the fitness of a particle is equal to its historical optimal fitness, the particle executes the oscillation strategy and compares the new solution with its historical optimal solution. If the result is better, the solution is updated; otherwise, the current position of the particle remains unchanged. Fig. 1 shows the flowchart of the new algorithm, and Table 3 shows the corresponding pseudocode; together they can help the reader understand the relationship between the components. The time complexity of PSO-ATPS is also O(DN); thus, PSO-ATPS has the same time complexity as PSO.
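The oscillation strategy can be sketched as follows. The paper's adaptive function k is not reproduced here, so the decaying radius `k` below is a placeholder assumption; only the stagnation trigger and the elite-acceptance logic follow the description above:

```python
import numpy as np

def oscillate(x, pbest_f, f, t, iter_max, phi=25, rng=None):
    """Hypothetical sketch of the oscillation strategy.

    A stagnant particle (fitness equal to its historical best) samples a
    random point in an adaptive neighborhood of radius k/phi and keeps it
    only if it improves (elite strategy). The decaying k below is an
    assumption standing in for the paper's adaptive function.
    """
    rng = rng or np.random.default_rng()
    k = (1 - t / iter_max) ** 2              # assumed decaying neighborhood
    candidate = x + (k / phi) * rng.uniform(-1, 1, x.shape)
    fc = f(candidate)
    if fc < pbest_f:                         # elite acceptance
        return candidate, fc
    return x, pbest_f                        # otherwise keep current position

rng = np.random.default_rng(0)
x0 = np.array([0.2, -0.1])
sphere = lambda z: float(np.sum(z ** 2))
x1, f1 = oscillate(x0, sphere(x0), sphere, t=10, iter_max=100, rng=rng)
```

Because of the elite acceptance rule, the particle's historical best can never get worse, so the strategy is a pure refinement step.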

E. FRAMEWORK OF THE PROPOSED ALGORITHM
Typically, PSO-ATPS takes more time on simple functions than PSO, while it takes nearly the same amount of time as PSO on complex multimodal functions. This is because the oscillation strategy is more likely to be executed on simple functions and less often on multimodal functions.

IV. EXPERIMENTAL RESULTS AND DISCUSSION
This section compares PSO-ATPS with seven algorithms through numerical experiments on a test set consisting of 15 benchmark test functions. Three of the comparison algorithms are PSO variants: PSO, CDIWPSO, and PPSO. The other four are intelligent optimization algorithms widely used in recent years: the sine cosine algorithm (SCA) [58], WOA [18], the grey wolf optimizer (GWO) [59], and the marine predators algorithm (MPA) [60].

A. NUMERICAL BENCHMARK FUNCTIONS
PSO is chosen as a comparison algorithm because the strategy adopted by PSO-ATPS for POPnew1 is inspired by PSO. CDIWPSO is chosen because both it and PSO-ATPS employ decreasing inertia weights influenced by chaotic mapping. PPSO is chosen because of its unique velocity update approach, which is related to the changes made in PSO-ATPS for POPnew2. SCA is chosen because it is a swarm intelligence optimization algorithm in which the search range is controlled by trigonometric functions, and a neighborhood search strategy is also used in PSO-ATPS. WOA and GWO are chosen because of their relationship with PSO-ATPS regarding the position update approach; as two of the most well-known SI optimization algorithms, comparison with them can better display the advantages and disadvantages of PSO-ATPS. MPA is chosen because the adaptive process of PSO-ATPS is taken from MPA, and as one of the latest well-known SI optimization algorithms, MPA can provide many new ideas. The parameters of all algorithms are listed in Table 9.
Test functions are divided into three categories. The first category is unimodal test functions, which have only one extreme value in the solution space. The second category is multimodal functions, which have many extreme points, i.e., many local optima, in the solution space; these problems are very complex, and an optimization algorithm can easily fall into the local optima. The third category is test functions with fixed dimensionality. For the sake of fairness, all algorithms are set with the same population size N = 100 and the same maximum number of iterations iter_max = 10^3. All tests are executed 25 times, from which the mean, standard deviation, and optimal fitness values are recorded. For the first two types of functions, the performance of the algorithms is examined for dim = 10, dim = 30, and dim = 50 to determine whether the algorithms suffer a sharp decay in accuracy owing to the increase in dimensionality.
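The experimental protocol (25 independent runs per configuration, recording the mean, standard deviation, and best fitness) can be sketched as a small harness. `random_search` below is a toy stand-in optimizer used only to make the sketch runnable, not one of the compared algorithms:

```python
import numpy as np

def benchmark(optimizer, f, runs=25):
    """Run an optimizer independently `runs` times on objective f and
    report (mean, standard deviation, best) of the final fitness values,
    following the protocol described above."""
    results = np.array([optimizer(f, seed=s) for s in range(runs)])
    return results.mean(), results.std(), results.min()

def random_search(f, n=2000, dim=10, seed=0):
    """Toy stand-in optimizer: best of n uniform random samples."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-5, 5, (n, dim))
    return min(f(p) for p in pts)

mean, std, best = benchmark(random_search, lambda z: float(np.sum(z ** 2)))
```

Each of the 25 runs uses a different seed, so the standard deviation reflects run-to-run variability, which is the stability measure analyzed later in this section.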
The information on the 15 benchmark test functions is presented in Table 4, Table 5, and Table 6. The five test functions in Table 4 are unimodal test functions, which are used to examine the local search capability of the algorithms, and the five test functions in Table 5 are multimodal.

B. PARAMETER ANALYSIS
To examine parameter sensitivity, we run each group of parameters on f9 25 times and record the average value. Here, f9 is selected as a test function from Table 5: the test function dimension D is set to 30, and the maximum number of iterations iter_max is set to 10^3. Table 7 presents the test results, showing that the parameter ω_min has a small effect, with a maximum difference in the average value of only 6.92E-04. Our recommended choice for ω_min is 0.6 to 0.7.
For the neighborhood control parameter ϕ, we selected values of 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 for comparison. Table 8 shows the average values for these 10 groups of parameters on f9 in 30 dimensions, with each group of experiments conducted 25 times. The effect of the parameter ϕ is very small, with a maximum average difference of only 1.97E-04. We recommend selecting ϕ in the range of 20 to 30.

C. ACCURACY COMPARISON
Tables 10, 11, and 12 show the test results of the compared algorithms on the unimodal and multimodal test functions in 10, 30, and 50 dimensions, reporting the mean, standard deviation, and optimal values of the results from 25 independent experiments. The best result among the compared algorithms for each test function is shown in bold [61].
The results in Table 10 show that for the 10-dimensional test functions, PSO-ATPS obtained the theoretically optimal solutions for f1, f4, f6, and f8; PPSO obtained the best results for f9 and f10; SCA found the theoretically optimal solution for f6; WOA found the theoretically optimal solution for f1, f2, and f6; GWO found the theoretically optimal solution for f1 and f6; and MPA found the theoretically optimal solution for f5 and f8. Among the 10 groups of test functions, PSO-ATPS achieved first place in seven groups with an overall ranking of 1, PPSO achieved first place in two groups with an overall ranking of 5, SCA achieved first place in one group with an overall ranking of 6, WOA achieved first place in two groups with an overall ranking of 4, GWO achieved first place in two groups with an overall ranking of 3, and MPA achieved first place in four groups with an overall ranking of 2; PSO ranked 8th and CDIWPSO ranked 7th.
The results in Table 11 show that for the 30-dimensional test functions, PSO-ATPS achieved first place for f1 to f8 and obtained the theoretically optimal solution on all of these tests except f5; PPSO achieved first place for f9; WOA found the theoretically optimal solution for f1 and f2; and MPA achieved first place for f6 and f8. Among the 10 groups of test functions, PSO-ATPS achieved first place in eight groups of tests and ranked first overall. PPSO achieved first place in one group and ranked fifth. WOA achieved first place in three groups and ranked second. PSO ranked eighth, CDIWPSO sixth, SCA seventh, and GWO fourth.
The results in Table 12 show that PSO-ATPS achieved first place on all 50-dimensional test functions from f1 to f9. WOA found theoretically optimal solutions for f1, f2, and f6 and achieved first place for f10. MPA achieved first place for f6 and f8. Among the 10 groups of test functions, PSO-ATPS achieved first place in nine groups and ranked first overall. WOA achieved first place in four groups and ranked third. MPA achieved first place in two groups and ranked second. PSO ranked seventh, CDIWPSO sixth, PPSO fifth, SCA eighth, and GWO fourth.
The results in Table 13 show that for the fixed-dimension test functions, PSO-ATPS obtained the theoretically optimal solution for f11, f12, f14, and f15, and obtained fourth place for f13. MPA found the theoretically optimal solution for all five problems.
The above results show that PSO-ATPS has strong local exploitation and global search capabilities, and because the population of PSO-ATPS is dynamically updated, its accuracy does not decline sharply as the dimensionality increases. In addition, the experimental results show that the error of PSO increases sharply with increasing dimensionality; PSO-ATPS solves this problem and enhances the applicability of PSO. In tests on the unimodal test functions, PSO-ATPS performs very well, demonstrating that the new strategy significantly improves the accuracy of the algorithm.

D. STABILITY ANALYSIS
On the 10-dimensional test set, the standard deviations of the results of 25 independent experiments of PSO-ATPS for f1 to f4 and f6 to f8 are all 0, indicating that PSO-ATPS is very stable in these tests. The standard deviation of PSO-ATPS for f5 is slightly higher than that of MPA but smaller than those of the other algorithms, and the PSO-ATPS standard deviation for f9 is better than those of PSO, SCA, and GWO. The PSO-ATPS standard deviation for f10 is only better than that of WOA.
On the 30-dimensional test set, PSO-ATPS outperforms the other algorithms in terms of the standard deviation of results over 25 independent experiments on f1 to f8; on f9, it only slightly outperforms PPSO but clearly outperforms the other algorithms. Its standard deviation on f10 is only better than that of WOA.
On the 50-dimensional test set, PSO-ATPS outperforms other comparative algorithms in terms of the standard deviation of results for 25 independent experiments on f1 to f9 and outperforms GWO in terms of standard deviation on f10.
In the test results of the fixed-dimension test functions, the standard deviation of the experimental results of PSO-ATPS ranked 5th, 1st, 4th, 4th, and 1st in the tests of f11 to f15, respectively. Thus, the stability of the PSO-ATPS algorithm is excellent, which stems from the mechanism of PSO. It can be seen that the standard deviations of PSO and its variants on the fixed-dimension test functions are generally better than those of the other intelligent optimization algorithms. Fig. 5 shows the convergence curves of the compared algorithms on the 30-dimensional unimodal test functions. In Fig. 5(a) to Fig. 5(d), part of each graph is enlarged to show the convergence speed more intuitively, because the algorithms converge very quickly, and in Fig. 5(e), a logarithmic coordinate system is used to clearly show the difference in convergence speed. From Fig. 5, it is clear that PSO-ATPS converges fastest on all five sets of test functions, indicating that PSO-ATPS demonstrates good convergence. Fig. 6 shows the convergence curves of all compared algorithms on the 30-dimensional multimodal test functions. In Fig. 6(a) and Fig. 6(b), the curves are shown in a logarithmic coordinate system; Fig. 6(c) and Fig. 6(d) show enlarged plots of certain intervals; and Fig. 6(e) is not adjusted because the curves are clearly distinguishable. From Fig. 6(a) to Fig. 6(c), PSO-ATPS converges faster than the other comparison algorithms on f6 and f8, and Fig. 6(d) shows that PSO-ATPS converges slightly more slowly than PPSO on f9 but faster than the other comparison algorithms. Fig. 6(e) clearly shows that WOA converges fastest on f10.

E. COMPARISON OF CONVERGENCE SPEED
To provide a more intuitive view of particle motion in PSO-ATPS, Fig. 7 to Fig. 10 show the two- and three-dimensional particle motion trajectories of PSO-ATPS on the unimodal and multimodal test functions, respectively. The trajectories in Fig. 7(a) to Fig. 7(e) and Fig. 9(a) to Fig. 9(e) show that the particles search the solution space with large steps at the beginning of the algorithm; as the step size undergoes irregular changes over the iterations, it decreases so that the particles can exploit the global optimum. The trajectories of the particles on the unimodal test functions shrink steadily toward the global optimum. In Fig. 8(a) to Fig. 8(d) and Fig. 10(a) to Fig. 10(d), each particle is either searching or exploiting at any given iteration, and this property of the particles is well suited for optimization in high-dimensional complex problems. In Fig. 10(e), three particles fall into a local optimum, but the search of the solution space is very comprehensive.
The above comparison shows that PSO-ATPS has excellent optimization accuracy with negligible errors and a very fast convergence rate. Therefore, PSO-ATPS is an effective optimization algorithm that can dynamically balance search and development.

F. DISCUSSION
This study has potential limitations. The oscillation strategy is triggered when a particle's solution after the current iteration equals its historical best. This condition is intended to free particles trapped in a local optimum; however, it also fires continuously once the algorithm has already found the global optimum. Because PSO-ATPS converges very quickly, the oscillation strategy then performs useless work after the algorithm has converged to the optimal solution, which increases the time complexity of the algorithm. We believe the solution is to propose or add new conditions so that the oscillation strategy is no longer executed after the algorithm converges, reducing the time complexity of the algorithm.
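The paper does not give the trigger condition in code form; the sketch below is one hypothetical way to express it, together with the proposed guard against post-convergence firing. The function name, the `tol` tolerance, and the `stall_count`/`max_stalls` bookkeeping are all illustrative assumptions, not the authors' implementation.

```python
def should_oscillate(particle_best, current_value, global_best,
                     stall_count, tol=1e-12, max_stalls=5):
    """Hypothetical guard for the oscillation strategy.

    Trigger when a particle's new objective value equals its historical
    best (motion is stagnant), but suppress the trigger once the swarm
    has effectively converged, so no useless neighborhood searches run.
    """
    # Original trigger: the new solution equals the particle's best.
    stagnant = abs(current_value - particle_best) <= tol
    # Proposed extra condition: treat the swarm as converged when the
    # particle's best matches the global best within tolerance and has
    # done so for several consecutive iterations.
    converged = (abs(particle_best - global_best) <= tol
                 and stall_count >= max_stalls)
    return stagnant and not converged
```

With such a guard, the neighborhood search stops being invoked once the swarm sits on the global best, which is exactly the wasted work described above.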

V. CONCLUSION
Inspired by biological predation behavior, we propose a new variant of PSO, PSO-ATPS, designed to address the tendency of the PSO algorithm to fall prematurely into local optima, which degrades its convergence accuracy. We classify the original population according to fitness and apply different strategies to the different types of particles. To verify the performance of PSO-ATPS, we compared it on a classical function test set with several recently proposed, well-known optimization algorithms. The experimental results show that PSO-ATPS has great potential for this problem, yielding good comparative results.
The main research contributions of this study are as follows. (1) To address the premature convergence of the PSO algorithm, we propose a new algorithmic structure that increases population diversity by shifting particles between two subpopulations; the number of individuals in each subpopulation is controlled by an adaptive classification function. (2) We propose a new oscillation strategy based on the elite strategy, which addresses the poor solution accuracy of PSO on high-dimensional complex functions. (3) We propose a new, concise velocity–position update formulation.
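To make contribution (1) concrete, the sketch below shows one possible shape of an adaptive two-population split and of a decreasing inertia weight perturbed by logistic chaotic mapping, as described in the abstract. The linear schedule in `split_population`, the `r = 4` logistic map, and the `w_max`/`w_min` bounds are illustrative assumptions; the paper's actual classification function and weight formula may differ.

```python
import numpy as np

def logistic_map(z):
    """One step of the logistic chaotic map (illustrative r = 4)."""
    return 4.0 * z * (1.0 - z)

def split_population(fitness, t, T):
    """Hypothetical adaptive split of the swarm into 'excellent' and
    'ordinary' subpopulations; the excellent fraction grows linearly
    with the iteration count t out of T total iterations."""
    frac = 0.2 + 0.6 * t / T               # illustrative schedule
    k = max(1, int(frac * len(fitness)))   # size of the excellent group
    order = np.argsort(fitness)            # minimization: lower is better
    return order[:k], order[k:]            # excellent, ordinary indices

def chaotic_inertia(t, T, z, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight modulated by the chaotic
    state z; returns the weight and the next chaotic state."""
    w = w_max - (w_max - w_min) * t / T
    return w * z, logistic_map(z)
```

Re-running `split_population` every iteration is what lets particles migrate between the two groups as their fitness changes, which is the diversity mechanism the contribution describes.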
Following Occam's razor, we provide a simple velocity–position update formulation, which makes PSO-ATPS very flexible. Many improved versions of PSO and strategies from other metaheuristic algorithms can easily be mixed into PSO-ATPS. The capability of any single optimization algorithm is limited; it may therefore be better to integrate strategies from several optimization algorithms. Interested readers can borrow the ideas in this paper to introduce other excellent strategies, such as the spiral search strategy of WOA or the crossover and mutation strategies of GA. These strategies can be triggered under suitable conditions, making the algorithm better suited to the problem at hand.
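As an illustration of how such a strategy could be grafted on, the sketch below implements WOA's spiral move as a standalone position operator that a hybrid could trigger in place of the usual velocity update. The shape constant `b = 1` follows the common WOA convention; the function name and its use as a drop-in operator are assumptions for illustration, not part of PSO-ATPS.

```python
import numpy as np

def spiral_update(x, best, b=1.0, rng=None):
    """WOA-style logarithmic-spiral move toward the current best.

    x, best: 1-D position vectors. Returns a new candidate position
    spiraling around `best`; distance to the best shrinks as the
    random parameter l approaches -1.
    """
    rng = rng or np.random.default_rng()
    l = rng.uniform(-1.0, 1.0)          # random spiral parameter in [-1, 1]
    d = np.abs(best - x)                # element-wise distance to the best
    return d * np.exp(b * l) * np.cos(2.0 * np.pi * l) + best
```

A hybrid PSO-ATPS could, for example, call such an operator only for particles flagged as stagnant, leaving the concise velocity–position update untouched for the rest of the swarm.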