A particle swarm optimization algorithm based on an improved Deb criterion for constrained optimization problems

To solve nonlinear constrained optimization problems, a particle swarm optimization algorithm based on an improved Deb criterion (CPSO) is proposed. Building on the Deb criterion, the algorithm retains information from 'excellent' infeasible solutions and uses it to escape local best solutions and converge quickly to the global best solution. Additionally, to further improve the global search ability of the algorithm, a differential evolution (DE) strategy is used to optimize the personal best positions of the particles, which accelerates convergence. The performance of our method was tested on 24 benchmark problems from IEEE CEC2006 and three real-world constrained optimization problems from CEC2020. The simulation results show that the CPSO algorithm is effective.


INTRODUCTION
Various types of constraints in numerous practical problems increase the difficulty of optimization. Problems with constraints are referred to as constrained optimization problems (COPs) (Sun & Yuan, 2006). Efficiently solving COPs is an important research topic in the optimization field, so studying methods for solving them is both theoretically and practically important.
The core of any solution method for a COP is the constraint-handling technique employed. The key to designing a constraint-handling technique is balancing the objective function against the constraint functions. The most popular constraint-handling techniques fall into three categories: (1) penalty function-based methods; (2) methods based on a feasibility criterion (the Deb criterion); and (3) multiobjective optimization-based methods. In the first category, a penalty factor is applied to the constraint functions, which are then incorporated into the objective function, thereby converting the COP into an unconstrained optimization problem. These methods include the static penalty function (Kulkarni & Tai, 2011), the dynamic penalty function (Liu et al., 2016), and the adaptive penalty function (Yu et al., 2010). They are simple and easy to implement and have therefore been widely used in numerous algorithms. However, selecting the penalty factors, which directly affect the results, is very difficult. The second category is based on the criterion proposed by Deb (2000) for selecting between solutions: a feasible solution is always preferred to an infeasible solution; of two feasible solutions, the one with the smaller fitness value is retained; and of two infeasible solutions, the one with the lower degree of constraint violation is retained. This method is easy to implement and converges quickly, and it is therefore widely used in algorithms for solving COPs (Wang et al., 2016; Sarker, Elsayed & Ray, 2014; Elsayed, Sarker & Mezura-Montes, 2014). However, it overemphasizes feasibility: under certain conditions, infeasible solutions may be closer to the global optimum than feasible solutions. Nevertheless, these 'excellent' infeasible solutions, which contain useful information, are discarded under the Deb criterion. Thus, studying how to better utilize these 'excellent' infeasible solutions is necessary.
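The Deb selection rule described above can be written as a small comparator (a minimal Python sketch; the function and argument names are ours, not from any particular paper):

```python
def deb_better(f_a, viol_a, f_b, viol_b):
    """Deb's feasibility criterion: True if solution a is preferred to b
    (minimization; viol == 0 means the solution is feasible)."""
    if viol_a == 0 and viol_b == 0:   # both feasible: smaller objective wins
        return f_a < f_b
    if viol_a == 0 or viol_b == 0:    # exactly one feasible: it wins
        return viol_a == 0
    return viol_a < viol_b            # both infeasible: smaller violation wins
```

Note that objective values and violation values are never compared against each other, which is why no penalty factor has to be tuned.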
In the third category, a COP is transformed into an unconstrained multiobjective optimization problem with two objectives (Wang et al., 2007): the objective function of the original problem and a constraint-violation-degree function composed of all the constraint functions of the original problem. Such problems can be solved by existing multiobjective algorithms (Wang, Dai & Hu, 2010; Cai, Hu & Fan, 2013) and yield good solution diversity. However, multiobjective optimization problems have a high computational time complexity and are difficult to solve.
With the rapid development of intelligent algorithms, different algorithms have been integrated into the abovementioned methods for solving COPs. For example, Wang et al. (2016) proposed the FROFI algorithm, in which feasibility criteria were incorporated into a differential evolution (DE) algorithm to solve COPs. Karaboga & Akay (2011) integrated the Deb criterion into an improved artificial bee colony (ABC) algorithm and used a probabilistic selection scheme for feasible solutions based on their fitness values. Francisco, Costa & Rocha (2016) combined the firefly algorithm with feasibility and dominance rules and a fitness function based on global competitive ranking to solve nonsmooth nonconvex constrained global optimization problems. Kohli & Arora (2018) proposed the chaotic grey wolf algorithm for COPs. Kimbrough et al. (2008) used a data-driven approach to analyse the dynamics of a two-population genetic algorithm (GA). Among the numerous available algorithms, the particle swarm optimization (PSO) algorithm proposed by Kennedy & Eberhart (1995) has few parameters and fast convergence and has therefore attracted considerable attention for solving COPs. Yadav & Deep (2014) proposed a coswarm PSO algorithm for nonlinear constrained optimization, in which the total swarm is subdivided into two subswarms: the first uses the shrinking hypersphere PSO (SHPSO), and the second uses DE. Venter & Haftka (2010) converted the COP into an unconstrained biobjective optimization problem, which was solved by a multiobjective PSO algorithm. Liu et al. (2018) proposed a parallel boundary search PSO algorithm, in which an improved constrained PSO algorithm is adopted to conduct a global search in one branch, while the subset constrained boundary narrower (SCBN) function and sequential quadratic programming (SQP) are applied to perform a local search.

INTRODUCTION TO COPS
A general COP can be expressed as follows (Sun & Yuan, 2006):

min f(x)
s.t. g_j(x) ≤ 0, j = 1, 2, …, q
     h_j(x) = 0, j = q + 1, …, m
     x = (x_1, x_2, …, x_n) ∈ S, L_i ≤ x_i ≤ U_i, i = 1, 2, …, n

where S is the decision space; L_i and U_i are the lower and upper bounds of the ith component, respectively; f(x) is the objective function; g_j(x) is the jth inequality constraint; h_j(x) is the jth equality constraint; and q and m − q are the numbers of inequality and equality constraints, respectively.
For a COP, the degree of violation of a decision vector x for the jth constraint is defined as

G_j(x) = max(0, g_j(x)),          1 ≤ j ≤ q
G_j(x) = max(0, |h_j(x)| − δ),    q + 1 ≤ j ≤ m

where δ is the slack allowed in the equality constraints. Therefore, the degree of violation of the decision vector x over all constraints can be defined as

G(x) = Σ_{j=1}^{m} G_j(x).
If the degree of constraint violation of a decision vector x is G(x) = 0, then x is a feasible solution of the COP; otherwise, it is an infeasible solution. The feasible solution with the minimum objective function value is the optimal feasible solution. Thus, the goal of solving a COP is to find the feasible solution with the minimum objective function value.
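Under these definitions, the total violation G(x) can be computed as in the following sketch (the helper name and the representation of constraints as lists of callables are our assumptions; `delta` is the equality slack δ):

```python
def violation(x, ineq, eq, delta=1e-4):
    """Total constraint violation G(x): inequality constraints g_j(x) <= 0
    contribute max(0, g_j(x)); equality constraints h_j(x) = 0 are relaxed
    by the slack delta and contribute max(0, |h_j(x)| - delta)."""
    G = sum(max(0.0, g(x)) for g in ineq)
    G += sum(max(0.0, abs(h(x)) - delta) for h in eq)
    return G
```

For example, with ineq = [lambda x: x[0] - 1.0] (i.e., x_0 ≤ 1) and eq = [lambda x: x[1]] (i.e., x_1 = 0), the point [0.5, 0.0] is feasible (G = 0) while [2.0, 0.0] violates the inequality by 1.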

A PSO ALGORITHM BASED ON AN IMPROVED DEB CRITERION
PSO is an evolutionary technique that was proposed by Eberhart and Kennedy in 1995 (Kennedy & Eberhart, 1995). PSO originated from research on the predatory behaviour of bird flocks. PSO is similar to the genetic algorithm (GA) in that it is an iteration-based optimization method; that is, a system is initialized with a set of random solutions, and the optimal value is then searched for through iteration.
(1) Classical PSO algorithm Consider an n-dimensional target search space containing a population of N particles, each of which is regarded as a point. Each particle is characterized by a unique position vector x = (x_1, x_2, …, x_n), whose fitness is given by the value of the objective function at that position.
Algorithm 3.1 Adaptive PSO (APSO) Step 1: Initialization. The population size N, self-cognitive coefficient c_1, and social-cognitive coefficient c_2 are determined. The ith particle (i = 1, 2, …, N) in the n-dimensional space is characterized by a position vector x_i and a velocity vector v_i. The maximum number of iterations is T_max. An initial swarm X(0) of N particles is randomly generated, and t := 0 is set.
Step 2: Particle evaluation. The fitness value of each particle is calculated or evaluated.
Step 3: The velocity and position of each particle are updated using the following equations:

v_i(t+1) = w·v_i(t) + c_1·r_1·(p_ibest − x_i(t)) + c_2·r_2·(g_best − x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)
w = w_max − (w_max − w_min)·t/T_max

where r_1 and r_2 are uniform random numbers in [0, 1], and w_max and w_min are the maximum and minimum values of the inertia weight, respectively.
Step 4: The best position p_ibest of the ith particle and the global best position g_best are updated for each particle.
Step 5: Termination. If the termination criterion is met, the global optimal value is output as the optimal solution, and the calculation process is terminated. Otherwise, t := t + 1 is set, and the process is repeated from Step 2.
Unlike the classical PSO algorithm, the APSO algorithm uses an adaptive, linearly decreasing inertia weight, which favours global search in the early iterations and local refinement in the later ones.
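The update of Step 3 with the linearly decreasing inertia weight can be sketched as follows (NumPy; the coefficient defaults are common choices in the PSO literature, not values taken from this paper):

```python
import numpy as np

def apso_step(x, v, pbest, gbest, t, T_max,
              c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """One APSO velocity/position update with a linearly decreasing
    inertia weight. x, v, pbest are (N, n) arrays; gbest is an (n,) array."""
    w = w_max - (w_max - w_min) * t / T_max        # linear decrease of w
    r1 = np.random.rand(*x.shape)                  # per-component randomness
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```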
(2) Improved Deb criterion Unlike in general unconstrained optimization problems, the entire space must be searched to solve COPs. Having to both optimize the objective function and ensure the feasibility of the solution during the evolution process inevitably increases the difficulty of the solution procedure. Hence, the Deb criterion has attracted considerable attention. Under the Deb criterion, a feasible solution always replaces an infeasible solution as the optimal particle position of the current generation. However, the situation shown in Fig. 1 often occurs: the infeasible solution is closer to the global best position than the feasible solution. Discarding such an infeasible solution to satisfy the Deb criterion slows convergence. Therefore, we propose an improved Deb criterion in which 'excellent' infeasible solutions, whose objective function values are close to that of the global best solution, are stored. The rules for this procedure are given below. i) If both particles represent feasible solutions, the particle with the smaller fitness value is retained. ii) If one particle represents a feasible solution and the other represents an infeasible solution, the particle representing the feasible solution is retained. Moreover, if the infeasible solution has a small fitness value, it is stored in an 'excellent' infeasible solution set A.
iii) If both particles represent infeasible solutions, the particle with the lower degree of constraint violation is retained. Moreover, if the unselected particle's infeasible solution has a small fitness value, it is also stored in the 'excellent' infeasible solution set A.
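Rules i)-iii) can be sketched in Python as follows (the particle layout and the 'small fitness' test, here taken to mean 'smaller than the retained particle's fitness', are our assumptions):

```python
def improved_deb_select(p, q, archive):
    """Improved Deb criterion between particles p and q, each a
    (position, fitness, violation) tuple. Returns the retained particle
    and appends an 'excellent' infeasible loser to the archive (set A)."""
    (_, fp, vp), (_, fq, vq) = p, q
    if vp == 0 and vq == 0:                      # rule i): both feasible
        return p if fp < fq else q
    if vp == 0 or vq == 0:                       # rule ii): one feasible
        winner, loser = (p, q) if vp == 0 else (q, p)
    else:                                        # rule iii): both infeasible
        winner, loser = (p, q) if vp < vq else (q, p)
    if loser[2] > 0 and loser[1] < winner[1]:    # 'small fitness' test (assumed)
        archive.append(loser)
    return winner
```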
Thus, an 'excellent' infeasible solution set is built from infeasible solutions with small fitness values. In subsequent operations, these particles are used to update the current swarm, so that the information carried by infeasible solutions enters the iterative process. (3) Updating the optimal particle position After each iteration of the algorithm, the optimal particle positions are updated using the improved Deb criterion. Because the PSO algorithm can easily fall into a local optimum, DE has been integrated into PSO to improve its global search ability (Yadav & Deep, 2014; Liu, Cai & Wang, 2010; Wang & Cai, 2009) with good results. Therefore, DE was used to improve the global search ability of the algorithm in this study.
The following steps are performed.
(i) Two distinct particle indices r_1 and r_2 (both different from i) are randomly selected. (ii) Mutation: v_i = Pbest_i + F·(Pbest_{r_2} − Pbest_{r_1}), where v_i is an intermediate variable and F is a scaling factor.
(iii) Crossover: for j = 1, …, n,

u_{i,j} = v_{i,j} if rand ≤ CR or j = j_rand; otherwise u_{i,j} = Pbest_{i,j},

where u_i is an intermediate variable, CR is the crossover probability, j_rand is a random integer from {1, 2, …, n}, and rand is a uniformly distributed random number within [0, 1].
(v) Compare u_i and Pbest_i based on the improved Deb criterion. If u_i is better, then update Pbest_i and the infeasible solution set A.
Note that the HMPSO algorithm (Wang & Cai, 2009) also incorporates a DE strategy to update the optimal positions of individual particles. However, that algorithm mutates a particle's optimal position by using three other particles to generate an intermediate particle, without using any information from the optimal position of the particle being mutated. The optimal position of a particle in the PSO algorithm is the best position of that particle over all iterations; it contains historical information and should not be directly discarded. Therefore, in our method, the optimal position of the particle being mutated is retained as the base vector, and the optimal positions of only two other particles are used in the mutation operation.
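Under the design just described (the particle's own pbest as the DE base vector, two other pbests supplying the difference term), one such update can be sketched as follows (Python; `evaluate`, the data layout, and the F/CR defaults are our assumptions, and the archive step is omitted):

```python
import random

def de_update_pbest(i, pbest, fitness, viol, evaluate, F=0.5, CR=0.9):
    """DE-style update of particle i's personal best. The base vector is
    pbest[i] itself; only two other personal bests supply the difference
    term. evaluate(x) must return (objective, violation)."""
    n = len(pbest[i])
    r1, r2 = random.sample([k for k in range(len(pbest)) if k != i], 2)
    # Mutation: perturb the particle's own pbest
    v = [pbest[i][j] + F * (pbest[r2][j] - pbest[r1][j]) for j in range(n)]
    # Binomial crossover
    j_rand = random.randrange(n)
    u = [v[j] if (random.random() <= CR or j == j_rand) else pbest[i][j]
         for j in range(n)]
    # Selection by the Deb criterion (minimization)
    fu, vu = evaluate(u)
    if ((vu == 0 and viol[i] == 0 and fu < fitness[i])
            or (vu == 0 and viol[i] > 0)
            or (vu > 0 and viol[i] > 0 and vu < viol[i])):
        pbest[i], fitness[i], viol[i] = u, fu, vu
```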
(4) Updating the current swarm using the infeasible solution set The PSO algorithm differs from the DE and GA algorithms, both of which evolve only the current population. The PSO algorithm stores both the current swarm and the optimal-particle swarm, that is, the best solutions found by the particles since the first iteration. Many PSO modifications have been developed based on the optimal particle swarm, while the effect of the efficient evolution of the current swarm on the entire algorithm has been neglected. In "(2) Improved Deb criterion", we used the improved Deb criterion to obtain an 'excellent' infeasible solution set A. Inspired by the FROFI algorithm (Wang et al., 2016), this infeasible solution set is used to update the current particle swarm. However, the FROFI algorithm uses an external set to update only one particle of the current swarm in each iteration, so much of the information from excellent infeasible solutions goes unused. Hence, the infeasible solution set was used in this study to guide the evolution of the current swarm.
The following steps are performed.
i) A nondominance operation is performed on the elements of set A; that is, the objective function value and the degree of constraint violation are regarded as two objective values. Particle dominance is determined as follows: if both the objective function value and the degree of constraint violation of a particle a_1 are higher than those of a particle a_2, then a_1 is removed from the set. This process is repeated until all dominated particles are removed.
ii) The particle with the lowest degree of constraint violation in set A is compared with the particle with the highest degree of constraint violation in the current swarm. If the former degree is smaller than the latter, the current particle is replaced with the archive particle, which is then deleted from set A. This process is repeated until the lowest degree of constraint violation in set A is larger than the highest degree of constraint violation in the current swarm.
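Steps i) and ii) can be sketched as follows (Python; each element is a hypothetical (position, fitness, violation) tuple, and clearing the archive at the end corresponds to Step 6 of the overall algorithm):

```python
def update_swarm_with_archive(swarm, archive):
    """Use the 'excellent' infeasible set A (archive) to guide the swarm:
    keep only non-dominated archive members, then replace the swarm's
    most-violating particles with less-violating archive members."""
    # Step i): drop dominated members (higher fitness AND higher violation)
    nd = [a for a in archive
          if not any(b[1] < a[1] and b[2] < a[2] for b in archive)]
    nd.sort(key=lambda a: a[2])                    # ascending violation
    swarm.sort(key=lambda p: p[2], reverse=True)   # descending violation
    k = 0
    # Step ii): swap while an archive member violates less than the worst particle
    while k < len(nd) and swarm and nd[k][2] < swarm[0][2]:
        swarm[0] = nd[k]
        swarm.sort(key=lambda p: p[2], reverse=True)
        k += 1
    del archive[:]                                 # A is cleared afterwards
```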
(5) Retention strategy used to determine the best global position The following retention strategy is used to determine the global best position. (i) The current global best position and the optimal particle swarm are combined into a candidate swarm U. (ii) If swarm U contains feasible solutions, the feasible solution with the smallest fitness value is stored as the global best position. If swarm U does not contain feasible solutions, the solution with the smallest constraint violation value is stored as the global best position.
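This retention strategy is a straightforward filter over the candidate swarm (a sketch; each candidate is a hypothetical (position, fitness, violation) tuple):

```python
def retain_gbest(gbest, pbests):
    """Retention strategy for the global best position: among the old
    gbest and all personal bests, prefer the feasible solution with the
    smallest fitness; if none is feasible, take the smallest violation."""
    U = [gbest] + list(pbests)              # candidate swarm U
    feasible = [u for u in U if u[2] == 0]
    if feasible:
        return min(feasible, key=lambda u: u[1])
    return min(U, key=lambda u: u[2])
```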
(6) PSO algorithm based on the improved Deb criterion The procedure of the PSO algorithm for constrained optimization problems (CPSO), Algorithm 2, is summarized below.
Step 1: Initialization. Randomly initialize the position vectors X = (x_1, x_2, …, x_N) and the velocity vectors V = (v_1, v_2, …, v_N); set the parameters c_1, c_2, w_max, w_min, F, CR, T_max, N, and D (the dimension of the decision vector); and set t = 0.
Step 2: Calculate the objective function value of each particle in the initial swarm, where the initial optimal particle positions are P_best = X. Store g_best using the retention strategy described in "(5) Retention strategy used to determine the best global position".
Step 3: Update the current swarm based on Eqs. (2) and (3) to generate the new generation x_{i,j}^{t+1}, and apply the boundary-handling scheme of Liu et al. (2018) to repair out-of-bound particles. Step 4: Update the optimal particle positions p_ibest, i = 1, 2, …, N, based on the improved Deb criterion proposed in "(2) Improved Deb criterion", and store the 'excellent' infeasible solutions in set A.
Step 5: Use DE (as described in "(3) Updating the optimal particle position") to update the optimal particle positions and set A.
Step 6: Update the current swarm using set A, and after the update, let A = ∅.
Step 7: Update g best based on the retention strategy.
Step 8: Termination. If the termination criterion is met, the global best particle is output as the optimal solution, and the calculation process is terminated. Otherwise, let t := t + 1, and return to Step 3.
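The skeleton above can be exercised end to end with a stripped-down, self-contained sketch (PSO update plus Deb-criterion selection of the personal and global bests on a toy 1-D problem; the DE and archive-A operations of Steps 5-6 are omitted for brevity, and all parameter values are illustrative defaults, not the paper's):

```python
import random

def cpso_sketch(f, g, n=1, N=20, T_max=200, lo=-5.0, hi=5.0,
                c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Stripped-down constrained PSO loop on box bounds [lo, hi]^n.
    f is the objective; g is a list of inequality constraints g_j(x) <= 0."""
    viol = lambda x: sum(max(0.0, gj(x)) for gj in g)
    def better(a, b):                     # Deb criterion, minimization
        va, vb = viol(a), viol(b)
        if va == 0 and vb == 0:
            return f(a) < f(b)
        if va == 0 or vb == 0:
            return va == 0
        return va < vb
    X = [[random.uniform(lo, hi) for _ in range(n)] for _ in range(N)]
    V = [[0.0] * n for _ in range(N)]
    P = [x[:] for x in X]                 # personal bests
    gbest = min(P, key=lambda x: (viol(x), f(x)))[:]
    for t in range(T_max):
        w = w_max - (w_max - w_min) * t / T_max
        for i in range(N):
            for j in range(n):
                V[i][j] = (w * V[i][j]
                           + c1 * random.random() * (P[i][j] - X[i][j])
                           + c2 * random.random() * (gbest[j] - X[i][j]))
                X[i][j] = min(hi, max(lo, X[i][j] + V[i][j]))  # clamp bounds
            if better(X[i], P[i]):
                P[i] = X[i][:]
            if better(P[i], gbest):
                gbest = P[i][:]
    return gbest
```

On the toy problem min x² subject to x ≥ 1, the constrained optimum is x = 1, and the sketch converges to a feasible point near that boundary.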
(7) Complexity analysis We present the computational time complexity of the CPSO algorithm below, considering only the worst case for each main step during one iteration with a swarm of size N.
1. Updating the particles: O(N)
2. Updating the optimal particle positions and set A: O(N)
3. DE updating strategy: O(N)
4. Updating the current swarm using set A (worst case, in which all current particles must be replaced): O(N log N)
5. Retention strategy for the global best position: O(N)
Thus, the worst-case computational time complexity of one CPSO iteration is O(N log N), which shows that the algorithm is computationally efficient.

TEST FUNCTIONS AND PARAMETER SETTINGS
A total of 24 test functions from IEEE CEC2006 (Liang et al., 2006) were used to evaluate the performance of the proposed algorithm. However, because finding feasible solutions for functions G20 and G22 is widely believed to be difficult, these two functions were excluded. The 22 remaining functions are of various types; Table 1 shows their specific characteristics. N represents the number of decision variables. Linear, nonlinear, polynomial, quadratic, and cubic objective functions are considered. ρ = |F|/|S| is the estimated ratio of the feasible region to the search space, where |S| is the number of randomly generated solutions in the search space and |F| is the number of feasible solutions among them; the number of simulations is usually 1,000,000. LI and NI represent linear and nonlinear inequality constraints, respectively. LE and NE represent linear and nonlinear equality constraints, respectively. a is the number of active constraints near the optimal solution. The optimal solutions of the considered functions are known, and f(x*) denotes the global optimal function value. The improved optimal solution of G17 obtained in the literature (Wang & Cai, 2011) was used in this study.
The Wilcoxon rank-sum test was used to compare the performance of the algorithms. The null hypothesis is that there is no significant difference in performance between the proposed algorithm and the corresponding algorithm. The symbol (+) indicates that the proposed algorithm is significantly better than the corresponding algorithm based on the Wilcoxon rank-sum test at the α = 0.05 significance level, the symbol (−) indicates that it is significantly worse, and the symbol (=) indicates no significant difference (Derrac et al., 2011).
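For reference, such a rank-sum comparison can be reproduced with SciPy (the run data below are illustrative, not the paper's; for minimization, a significantly smaller sample is marked '+'):

```python
from scipy.stats import ranksums

# Two algorithms' results over repeated runs (illustrative data only).
runs_cpso = [0.10, 0.12, 0.11, 0.13, 0.10]
runs_other = [0.20, 0.22, 0.21, 0.19, 0.23]

stat, p = ranksums(runs_cpso, runs_other)
# Minimization: significantly smaller values mean CPSO is better ('+')
if p < 0.05:
    mark = '+' if stat < 0 else '-'
else:
    mark = '='
```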

RESULTS
(1) Comparison of improved PSO algorithms based on different strategies We introduced the improved Deb criterion (IDeb) and the DE update strategy into the classical PSO algorithm. To demonstrate the effectiveness of the individual strategies, two further algorithms were constructed, i.e., PSO+Deb+DE and PSO+IDeb. The same parameters were used for CPSO, PSO+Deb+DE, and PSO+IDeb to facilitate comparison of the numerical results. Table 2 shows the feasible rate and success rate of CPSO, PSO+Deb+DE, and PSO+IDeb for the 22 test functions after 25 runs. The feasible rate is the proportion of runs that produced a feasible solution, and the success rate is the proportion of runs for which the error in the function value satisfies f(x) − f(x*) ≤ 0.0001, where x is a feasible solution. The two rates are computed as follows:

feasible rate = A*/A × 100%
success rate = A**/A × 100%

where A* denotes the number of feasible solutions obtained by the algorithm, A denotes the number of solutions obtained by the algorithm, A** denotes the number of solutions for which the error in the function value satisfies f(x) − f(x*) ≤ 0.0001, x is a feasible solution, and x* is the known optimal solution. The numerical results show that the feasible rates of CPSO and PSO+Deb+DE reached 100% for all functions except G21. The feasible rate of CPSO was 96% for G21 (that is, one out of 25 runs converged outside the feasible region), which was higher than that of PSO+Deb+DE (92%) and PSO+IDeb (20%). PSO+IDeb also had a low feasible rate of 64% for G23. A 100% success rate was obtained for 17 functions by CPSO, for 15 functions by PSO+Deb+DE, and for eight functions by PSO+IDeb. After 25 runs, PSO+Deb+DE had failed to solve one function (G02), compared to 10 functions for PSO+IDeb. These results show that introducing DE to update the optimal particle solution set increased the feasible rate of the PSO algorithm and improved its global search ability.
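The two rates can be computed directly from the per-run results (a sketch; each result is a hypothetical (fitness, violation) pair):

```python
def rates(results, f_opt, tol=1e-4):
    """Feasible rate and success rate over repeated runs. A run counts as
    feasible if its violation is 0, and as successful if it is feasible
    and its error f - f_opt is at most tol."""
    A = len(results)
    A_star = sum(1 for fx, v in results if v == 0)                    # A*
    A_2star = sum(1 for fx, v in results if v == 0 and fx - f_opt <= tol)
    return 100.0 * A_star / A, 100.0 * A_2star / A
```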
In particular, the feasible rates of all three algorithms reached 100% for G02, but the success rates were only 32%, 0%, and 0%, respectively. Thus, the feasible space of a function can be easy to find even when the global best solution is difficult to obtain. The success rate of PSO+IDeb was not high but still represented an improvement over that of PSO.
The abovementioned numerical results were analysed in detail. Table 3 shows the numerical results obtained using the three algorithms for the 22 test functions after 25 runs. In the table, Best denotes the best solution, Worst the worst solution, Median the median value, Mean the mean value, and Std the standard deviation; Wil test denotes the Wilcoxon test result. The functions for which a success rate of 100% was obtained were relatively stable, with small standard deviations. Next, the five functions for which the CPSO success rate did not reach 100% (i.e., G02, G03, G13, G17, and G21) were considered. (a) For G02, only CPSO obtained the optimal solution of −8.036191E−01. The Std obtained using CPSO was also the smallest, indicating that the algorithm's results fluctuated closely around the optimal solution. (b) For G03, the results of CPSO and PSO+Deb+DE were close, although a smaller Std was obtained by CPSO. By comparison, PSO+IDeb did not produce good results. (c) For G13, CPSO and PSO+Deb+DE produced very close and good results, whereas the optimal value obtained by PSO+IDeb was far from the best known solution, indicating a nonideal performance. (d) For G17, both CPSO and PSO+Deb+DE obtained the most recently reported optimal solution (8,853.53387480648); the mean of CPSO was closer to the global best solution, but the Std of PSO+Deb+DE was smaller. In addition, the PSO+IDeb result was some distance from the latest optimal solution. (e) For G21, the stability of the three algorithms was comparable. Overall, Tables 2 and 3 show that CPSO was superior to the other two algorithms on the 22 functions; that is, the integration of IDeb and DE improved the ability of the PSO algorithm to solve COPs. The Wilcoxon test results show that the performance of CPSO is significantly better than that of PSO+IDeb and equivalent to that of PSO+Deb+DE.
Combined with the success rates, these results indicate that the IDeb and DE strategies are best used together to solve these 22 problems. Figures 2-9 show the mean convergence curves of the 22 functions for the three algorithms. The x-axis is the number of evaluations of the fitness function, and the y-axis is the mean of the function values over 25 runs. Note that for some functions (e.g., G01, G17, and G18), the plotted function values may be smaller than the known optimal solution. These solutions appeared in the early stage of iteration; they were infeasible, invalid solutions that were eliminated during the later stages. Figures 2-9 also show that CPSO did not converge as fast as PSO+Deb+DE during the early stage for G02 and G05 but converged more rapidly during the later stage than the other two algorithms.
(2) Comparison of CPSO and HMPSO As the DE strategy is used in both CPSO and HMPSO (Wang & Cai, 2009) to perform evolutionary operations on the optimal particle set, the numerical results of the two algorithms were compared to confirm the effectiveness of CPSO. The source program of the HMPSO algorithm was obtained from Yong Wang's personal website. The same parameters were used to facilitate comparison. As a local-search PSO algorithm is used for the subswarms in HMPSO, the parameters c_1, c_2, and w are not required; the other parameters were the same as those given in "Test Functions and Parameter Settings". Table 4 shows that the same numerical results were obtained using CPSO and HMPSO for seven functions (i.e., G01, G06, G11-G12, G15-G16, and G24). Both algorithms obtained the optimal solution for the remaining 15 functions, except that HMPSO did not obtain the optimal solution for G13. The five worst solutions obtained by CPSO (i.e., for G03, G13, G17, G19, and G23) were better than those obtained by HMPSO, whereas the two worst solutions obtained by HMPSO (for G02 and G21) were better than those obtained by CPSO; both algorithms converged to the same solution for the other functions. Both algorithms achieved the same mean value for nine functions (i.e., G04-G05, G07-G10, G14, G18, and G23). CPSO produced a better mean than HMPSO for four functions (i.e., G03, G13, G17, and G19), and HMPSO produced better mean values for the remaining two functions. A smaller standard deviation was obtained by CPSO than by HMPSO for 11 functions, not including G02, G08, G13, and G21. In particular, the standard deviation obtained by CPSO for G18 and G19 was nearly nine orders of magnitude smaller than that of HMPSO. The Wilcoxon test results show that CPSO had the same performance as HMPSO on 18 functions, a better performance on three functions, and a worse performance on G21.
These results demonstrate better convergence and stability for CPSO than for HMPSO in solving the 22 selected functions.
(3) Comparison of CPSO and other algorithms To further illustrate its effectiveness, the CPSO algorithm was compared with other intelligent algorithms from the literature. The main parameters of each algorithm were set as follows: the number of evaluations of the fitness function was 2.4 × 10^5, the mutation probability was 0.6, and the crossover probability was 0.8 in the GA algorithm (Mezura-Montes & Coello, 2005); the number of evaluations of the fitness function was 2.4 × 10^5 and the correction rate was 0.8 in the ABC algorithm (Karaboga & Akay, 2011); the number of evaluations of the fitness function was 2.8 × 10^5 in the CSHPSO algorithm (Yadav & Deep, 2014); the number of evaluations of the fitness function was 3.5 × 10^5 in the PESO algorithm (Zavala, Aguirre & Diharce, 2005); and the number of evaluations of the fitness function was in [1.06 × 10^4, 1.401 × 10^5], F ∈ [0.9, 1.0], and CR ∈ [0.95, 1] in the PSO-DE algorithm (Liu, Cai & Wang, 2010). Because most studies only discuss the test problems G01-G13, the performance of each algorithm on these 13 problems is analysed below.
Table 5 lists the best and average values obtained by the six algorithms on the 13 test problems, with the best value for each problem shown in bold (because the number of digits retained differs among the studies, the comparison is based on rounding to the last digit); if several algorithms achieve the same optimal value, all are shown in bold. The best and average values of G12 were successfully obtained by all six algorithms. However, for G02, G05, and G13, the performance of each algorithm decreased. Except for the average value of G02 obtained by the GA algorithm, the best values of these problems were obtained only by the CPSO algorithm, and the other algorithms were far from the true best solution; the GA algorithm did not obtain a feasible solution for G05 or G13. The PSO-DE, PESO, and ABC algorithms obtained both the best value and the best average value on seven, six, and four problems, respectively. The CSHPSO algorithm obtained the best value only on G01, G08, and G12, but its best and average values were essentially the same, which demonstrates that the stability of the CSHPSO algorithm is good, although its ability to escape local optima needs to be improved. For the CPSO algorithm, except that the average values of G02 and G03 did not reach the best solution, all other numerical results are among the best values. Thus, the CPSO algorithm showed better robustness and optimization performance than the other five algorithms in solving these 13 problems.

CONCLUSIONS
A PSO algorithm based on an improved Deb criterion, referred to as CPSO, was proposed for solving COPs. We developed a strategy for updating the current swarm using an 'excellent' infeasible solution set to address the incomplete utilization of infeasible-solution information under the existing Deb criterion, and the DE strategy was used to update the optimal particle set to improve the global convergence of the algorithm. We verified the effectiveness of the proposed algorithm by comparing the numerical results obtained by the CPSO algorithm to those obtained using PSO+Deb+DE and PSO+IDeb. The numerical results show that PSO incorporating Deb and DE effectively solved 22 test functions from CEC2006. Finally, comparing the performance of the CPSO algorithm to that of other algorithms demonstrated the effectiveness and stability of the proposed algorithm for solving the considered 13 functions and three real-world constrained optimization problems.
In future research, we will mainly focus on two aspects. First, effective operation strategies will be designed to solve more complex constrained optimization problems, such as multiobjective constrained optimization problems and mixed-integer constrained optimization problems. Second, more effective and targeted operators will be designed to form an operator pool, enabling the CPSO algorithm to solve more real-world problems such as data clustering, engineering optimization and image segmentation.

ADDITIONAL INFORMATION AND DECLARATIONS
Funding