A Hybrid Multi-Objective Particle Swarm Optimization with Central Control Strategy

In recent years, researchers have solved multi-objective optimization problems by making various improvements to the multi-objective particle swarm optimization algorithm. In this paper, we propose a hybrid multi-objective particle swarm optimization with a central control strategy (CCHMOPSO). In this algorithm, a disturbance strategy based on boundary fluctuations is first applied to the newly updated particles and the nondominated particles; some particles are disturbed to prevent the population from falling into a local extremum. Then, when the external archive reaches its capacity limit, a central control strategy is used to update the external archive so that the archived solutions achieve a good distribution. When the dominance relationship between the current particle and its individual best particle cannot be determined, a combination of the two is used to update the individual best particle, which enhances the diversity of the population. The experimental results show that CCHMOPSO outperforms four multi-objective particle swarm optimization algorithms and four multi-objective evolutionary algorithms, making it a feasible method for solving multi-objective optimization problems.


Introduction
Multi-objective optimization problems (MOPs) are based on population-based meta-heuristic optimization designs. Generally speaking, MOPs are more challenging than single-objective optimization problems (SOPs). For MOPs [1], there exists a set of optimal solutions obtained by trading off different objective values. Meta-inspired optimization algorithms have been widely applied to various optimization problems, such as the creation of graphical characters [2], optimal outcomes of evolutionary games [3], and inventory control [4]. Particle swarm optimization (PSO) [5] has been widely used to solve SOPs due to its easy implementation and fast convergence speed. Some research reports [6,7] show that the particle swarm optimization algorithm can find the potential solutions of MOPs very well. Nowadays, in the optimal tuning of fuzzy controllers, the meta-heuristic slime mould algorithm [8] and Takagi-Sugeno type-1 and type-2 fuzzy logic controllers [9] have been introduced, which effectively handle fuzzy controller tuning and better solve the optimization problem.
In the research field of MOPs, the purpose is to improve the efficiency of the algorithm and the data structures for storing nondominated vectors. Researchers have produced some techniques to maintain the diversity of the population (such as the adaptive grid used by the Pareto archived evolution strategy (PAES) [10]) and data structures for handling unconstrained external archives (such as the dominated tree [11]). To use PSOs for MOPs, at least two issues need to be considered. The first is how to select learning samples. Since the optimal solution of a MOP is a set, it is difficult to find a solution that performs best on every objective. Because each particle in the population is guided by learning samples to determine its search direction, the selection of learning samples is quite important for PSO, especially for solving MOPs [12]. Therefore, we adopt certain methods to choose the learning samples. The second issue is how to balance the convergence and diversity of the population. Because the purpose of multi-objective optimization is to obtain a set of optimal solutions, it is particularly important to maintain diversity. PSO-based multi-objective algorithms converge quickly but easily fall into local optima of the MOP. Therefore, external archive maintenance and a perturbation strategy are used to maintain the diversity of the population. To address the above problems, researchers have proposed many improved multi-objective particle swarm optimization algorithms (MOPSOs), such as learning samples selected by a Pareto sorting scheme [13], global marginal sorting [14], and learning samples selected by a competition-mechanism strategy [15].
In the past ten years, PSO has been widely used in various fields, such as neural network training, industrial production, and hydropower dispatching. In these applications, optimization means finding the solution with the least cost, the best adaptability, and the highest economic benefit among many candidates. For example, Niu et al. proposed a parallel MOPSO [16] to effectively balance benefit and enterprise output in the operation of cascade hydropower reservoirs. Feng et al. proposed a multi-objective quantum particle swarm optimization [17] to effectively solve the economic-environmental hydrothermal scheduling optimization problem. Zhang et al. proposed a solution algorithm based on naked particle swarm optimization [18] to effectively reduce the energy consumption of high-energy-consuming buildings. Precup et al. proposed an experiment-based method for teaching optimization technology courses in undergraduate systems engineering curricula [19], addressing problems in the optimization, modeling, and control of complex systems.
In this study, we propose a multi-objective particle swarm optimization based on central control and combination methods. CCHMOPSO can deal with MOPs better than PSO algorithms. This algorithm improves on the MOPSO [20]: an update strategy for the individual best particles is added, and the perturbation operator is improved. At the same time, a central control strategy is adopted to update the external archive, which improves the exploratory capacity of the particles in the population while increasing the diversity of the population. These methods help CCHMOPSO solve MOPs effectively. Compared with some existing MOPSOs, the main contributions of this algorithm are summarized as follows: (1) The study found that during the maintenance of the external archive of MOPSO, 40% of the nondominated solutions are randomly deleted from the grid with the highest density in the archive. This method blindly deletes good nondominated solutions and affects the search results of the particles. Therefore, this study designs a new central control method, which deletes nondominated solutions according to the distribution of the solutions and the Euclidean distance to the central particle. It improves the quality of the solutions in the archive and accelerates the convergence speed of the algorithm. This helps guide the search direction of the particles towards the true Pareto front.
(2) For the traditional MOPSO, the individual best particle is selected by comparing the objective function values. When the objective function values of two particles cannot be compared, one is selected randomly. If the individual best particle is selected improperly, the algorithm is easily trapped in a local optimum. Therefore, the algorithm introduces a combination method to update the individual best position, which enhances the diversity of the population and helps avoid falling into local optima. This helps guide the particles to search for the globally optimal particles. (3) In CCHMOPSO, an improved mutation strategy is applied to the nondominated particles and the new particles obtained after each update. This method effectively prevents particles from falling into local optima, so that nondominated particles have a chance to become dominant. Moreover, perturbing the particles helps them search for the global optimal particle. The advantages of CCHMOPSO are verified by experiments. In the experiments, 22 standard test problems are used for verification, and CCHMOPSO is compared with four improved MOPSOs and four highly competitive MOEAs: CMOPSO [15], MOPSOCD [21], MPSOD [12], NMPSO [22], MOEAD [23], NSGA-II [24], SPEA2 [25], and MOEAIGDNS [26].
The experimental results show that CCHMOPSO's overall performance in terms of solution set quality is better than that of the other eight algorithms. The rest of this study is organized as follows. The second section reviews MOPs and briefly introduces PSO and existing MOPSOs. The third section gives the details of the proposed CCHMOPSO. The fourth section presents the experimental comparison and relevant discussion. The fifth section concludes this study.

Multi-Objective Optimization.
Many optimization problems in the engineering field require the simultaneous optimization of multiple different objectives. This kind of problem is called a MOP, in which the objectives to be optimized often conflict with each other. Therefore, a multi-objective optimization algorithm mainly seeks a set of relatively optimal trade-off solutions in the solution space, and the more uniform the distribution of these solutions, the better. In general, a multi-objective optimization problem (taking minimization as an example) is expressed as $\min F(\vec{x}) = (f_1(\vec{x}), f_2(\vec{x}), \ldots, f_m(\vec{x}))$, subject to $\vec{x} \in \Omega$, (1) where $\vec{x} = (x_1, \ldots, x_D)$ is a decision vector in the decision space $\Omega$ and $f_i$ is the ith objective function. Definition 1 (Pareto dominance): a solution $\vec{x}_1$ dominates a solution $\vec{x}_2$ (written $\vec{x}_1 \prec \vec{x}_2$) if and only if $f_i(\vec{x}_1) \le f_i(\vec{x}_2)$ for all $i = 1, \ldots, m$ and $f_j(\vec{x}_1) < f_j(\vec{x}_2)$ for at least one j. (2) Definition 2 (Pareto optimal solution): let $\vec{x}$ be a solution of the multi-objective optimization problem (1). If no $\vec{x}' \in \Omega$ dominates $\vec{x}$, i.e., $\neg\exists\, \vec{x}' \in \Omega : \vec{x}' \prec \vec{x}$, (3) then $\vec{x}$ is called a Pareto optimal solution or nondominated solution.
Definition 3 (Pareto optimal solution set): the set of all Pareto optimal solutions in the decision space is called the Pareto optimal set (PS), i.e., $PS = \{\vec{x} \in \Omega \mid \neg\exists\, \vec{x}' \in \Omega : \vec{x}' \prec \vec{x}\}$. (4) Definition 4 (Pareto front): the image of the Pareto optimal set in the objective space, i.e., $PF = \{F(\vec{x}) \mid \vec{x} \in PS\}$, (5) is called the Pareto front. Figure 1 plots the Pareto front of a special case of a bi-objective optimization function.
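As a minimal illustration of Definitions 1-3, a Pareto dominance test and a nondominated filter for minimization can be sketched as follows (the helper names are illustrative, not from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the objective vectors that no other vector dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, `nondominated([(1, 3), (2, 2), (3, 1), (2, 4)])` drops `(2, 4)`, which is dominated by `(1, 3)`.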

Particle Swarm Optimization.
The particle swarm optimization (PSO) algorithm was originally developed by Kennedy and Eberhart [5] for optimization problems and was inspired by the foraging behavior of bird flocks. In PSO, individuals fly in groups through a multidimensional space to find the potential optimal solution. It is worth noting that each individual learns from its own past experience and the experience of successful peers, adaptively updating its velocity and position. In the standard PSO [27], the individuals in the swarm are called particles, and each particle is a potential solution. The velocity and position of the particles are updated according to $v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_1 (x^{pbest}_{ij}(t) - x_{ij}(t)) + c_2 r_2 (x^{gbest}_{j}(t) - x_{ij}(t))$ (6) and $x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1)$, (7) where $x_{ij}$ and $v_{ij}$ represent the position and velocity of the ith particle in the jth dimension of the D-dimensional search space; $x^{pbest}_{ij}$ represents the jth-dimensional position of the ith particle's individual optimum, usually called the individual best position (pbest); $x^{gbest}_{j}$ represents the jth-dimensional position of the globally optimal particle in the population, usually called the global best position (gbest); w is the inertia coefficient, w = 0.4; $c_1$ and $c_2$ represent the acceleration coefficients, $c_1 = c_2 = 2.0$; $r_1$ and $r_2$, respectively, represent two random coefficients generated uniformly in the range [0, 1]; and t represents the number of iterations, t = 1, 2, ..., T (T is the maximum number of iterations). In general, to keep the particles from flying out of the search space, a maximum value ($v_{max}$) is defined for each dimension of the particle's velocity vector. When the particle velocity $v_{ij}$ exceeds the defined $v_{max}$, it is set directly to $v_{max}$.
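Equations (6) and (7) can be sketched for a single particle as follows (a minimal illustration using the parameter values given above; the velocity clamp `vmax = 0.5` is an assumed example value):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0, vmax=0.5):
    """One PSO velocity/position update, as in equations (6) and (7)."""
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        vj = max(-vmax, min(vmax, vj))   # clamp each dimension to [-vmax, vmax]
        new_v.append(vj)
        new_x.append(x[j] + vj)          # equation (7)
    return new_x, new_v
```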

Existing Multi-Objective Particle Swarm Optimization
Algorithms. In recent years, some MOPSOs have been proposed. Next, we briefly review some representative MOPSOs. The original MOPSO was proposed by Coello et al. [6]; it used the Pareto dominance relationship to determine the learning samples and stored the nondominated particles in an external archive. Although the MOPSO performed better than traditional MOEAs such as NSGA-II [24] and PAES [10] in solving MOPs, it had difficulty with MOPs with complex landscapes. Aiming at the problem that the learning samples in the MOPSO are determined by the dominance relationship, Zhang et al. [23] built on the decomposition-based framework and replaced the genetic operators with a PSO-based search method. Dai et al. [12] proposed another decomposition-based MOPSO, which divided the objective space into subregions based on a set of direction vectors.
In the MOPSO literature, some methods to enhance the diversity of the population have been proposed. Zhang et al. [15] proposed a MOPSO based on a competition mechanism, in which a competition mechanism was used to select the global optimal solution. Liu et al. [22] proposed a MOPSO with balanceable fitness estimation, in which a new update formula was adopted to enhance the convergence of the algorithm. Lin et al. [28] proposed a MOPSO in which a PSO-based search method updated the particles in the population, while crossover [24] and mutation [29] operations updated the particles in the archive. Gu et al. [30] proposed a MOPSO based on R2 contribution and an adaptive method, in which a new global optimal solution selection mechanism was adopted to maintain the diversity of the population. These algorithms could avoid falling into local optima and enhanced the diversity of the population.
In recent years, researchers have proposed some hybrid swarm intelligence algorithms. For example, Liu et al. proposed a particle swarm optimizer [31] that simulates human social interaction behavior. The learning strategy of this algorithm makes full use of the excellent information of each particle to guide the particle search, which improves the diversity of the population. Liu et al. proposed a PSO based on genetic-interference hybrid learning [32], which uses a genetic-interference hybrid strategy to update particles and improves the ability of the population to avoid falling into local optima. Qian et al. proposed an improved genetic algorithm [33] based on a memory update and environmental response scheme, which improves the adaptability of the algorithm to different dynamic environments. Leng et al. proposed a MOPSO based on optimal grid distance [34], which integrates two grid sorting methods when maintaining the size of the external archive. Niu et al. proposed a bacterial foraging optimization algorithm [35] based on a coevolutionary structure redesign, so that all bacteria can learn from each other and search for the optimal solution cooperatively, which speeds up convergence and promotes search accuracy. Moattari et al. discussed a brain-inspired approach to designing an evolutionary optimization algorithm [36]. Kaur et al. designed silicon fin-shaped field-effect transistor devices on insulator and their best indicators; after using a set of parameters to train a neural network and determine the fitness function, they applied the genetic algorithm and the whale optimization algorithm [37].
In summary, the main concern of the existing MOPSO is how to effectively improve the diversity of the population. In this study, a MOPSO based on central control and combination method (CCHMOPSO) is proposed. In CCHMOPSO, a new strategy is proposed to improve the MOPSO.
This central control method uses the Euclidean distance to maintain the external archive size, which can improve the quality of the solutions. In addition, a new method is proposed to update the individual best positions, which enhances the diversity of the population in a combined manner. The details of the algorithm are introduced in Section 3.

Description of CCHMOPSO Algorithm
In this section, we first elaborate on three improvement strategies of the CCHMOPSO and then give the basic framework of the CCHMOPSO.

Central Control Strategy for External Archiving.
The purpose of external archiving is to preserve the high-quality nondominated solutions found during the search process. In the early stage of the algorithm, there are few nondominated solutions, so the archive can effectively preserve them. In the late stage of the algorithm, since the population has a limited search range in the feasible space, it is necessary to consider whether the capacity of the archive is full. Therefore, we need to maintain the external archive to guide the population to search toward the Pareto front and to ensure the distribution of solutions on the Pareto front. When the number of nondominated solutions generated by the algorithm exceeds the maximum capacity of the archive, some particles need to be deleted from the archive. This study proposes a central control strategy to update the external archive. This strategy deletes some particles from the archive one by one while storing high-quality nondominated solutions. Next, we describe the update of the external archive in detail.
During each iteration of PSO, a set of nondominated solutions is generated (the relatively dominant solutions obtained by Pareto nondominated ranking [38], among which no solution dominates any other). This set of nondominated solutions is compared one by one with the individuals in the external archive (which is initially empty), and the selection after each comparison can be described as follows: (1) a new solution that dominates members of the archive is stored, and the dominated members are removed; (2) when the capacity of the external archive is within its size limit, all mutually nondominated solutions are stored in the external archive; and (3) when the capacity reaches the limit, this study uses a central control strategy to delete some particles one by one while storing the new nondominated solutions one by one.
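The archive update just described can be sketched as follows, assuming an external `prune_one` callback that implements the capacity-reduction step (the function names are illustrative, not from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, new_solutions, capacity, prune_one):
    """Insert candidate objective vectors into the external archive."""
    archive = list(archive)
    for s in new_solutions:
        if any(dominates(a, s) for a in archive):
            continue                                  # s is dominated: discard it
        archive = [a for a in archive if not dominates(s, a)]
        archive.append(s)                             # keep mutually nondominated s
        while len(archive) > capacity:
            archive = prune_one(archive)              # capacity reached: delete one
    return archive
```

Here `prune_one` stands in for the central control deletion; any rule that removes one archive member would fit this interface.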
When the capacity of the external archive reaches its limit, an adaptive grid [10] is first established, and then a method is used to delete some particles from the archive. In the original MOPSO [20], objects were removed based on the density of the grid, following the idea that the greater the density of a grid cell, the greater the probability that particles within it would be deleted; it randomly deleted a fixed proportion of particles from the grid cell with the highest density. Although this method guarantees the scale of the archive, it has a certain degree of blindness and makes it harder for the population to converge. Therefore, the update and maintenance strategy of the external archive is critical for the MOPSO. In this study, a central control strategy is used to determine the objects to be deleted. The determining factors are the grid density and the Euclidean distance to the central particle. The basic idea is to apply the adaptive grid in the external archive, find the central particle of the densest grid cell, and mark it as $N^*$. To get a good distribution, we select 40% [20] of the total number of particles in the densest grid cell and delete them. Equation (8) calculates the total number q of particles to delete in the densest grid cell; (9) and (10), respectively, calculate the distances $d_i^*$ and $d_i$ between the central particle $N^*$ and the ith individual on its left and right sides: $q = [0.4 \cdot s]$, (8) $d_i^* = \|\vec{x}_{N^*} - \vec{x}_{N^*-i}\|$, (9) $d_i = \|\vec{x}_{N^*} - \vec{x}_{N^*+i}\|$. (10)
We then delete the q particles with the smallest distances one by one. The specific method is as follows: (1) if the total number of particles in the densest grid cell is odd, the middle particle in the cell is directly selected as the central particle $N^*$, and the q particles with the smallest distances are deleted one by one (Figure 2(a)); (2) if the total number is even, one of the two middle particles is randomly selected as the central particle $N^*$, and then the q particles with the smallest distances are deleted one by one (Figures 2(b) and 2(c)). It is worth noting that while particles are deleted one by one from the archive, newly found nondominated solutions are stored one by one.
where $[\cdot]$ represents the rounding operation; $\|\cdot\|$ represents the Euclidean distance; s represents the total number of individuals in the densest cell; $\vec{x}_{N^*}$ represents the position of the central particle; $\vec{x}_{N^*-i}$ represents the position of the ith individual to the left of the central particle; and $\vec{x}_{N^*+i}$ represents the position of the ith individual to the right of the central particle.
When the external archive is updated each time, some particles closest to the central particle in the most densely distributed area of the archive are deleted according to the central control strategy. It can maintain the diversity of the solution set and ensure that the solutions in the external archive have a good distribution.
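Under the assumptions above (the particles of the densest cell are ordered along the cell, and $q = [0.4 \cdot s]$), the central-control deletion can be sketched as:

```python
import math
import random

def central_control_prune(cell, frac=0.4):
    """Delete the q = round(frac*s) particles closest to the central particle
    of the densest grid cell; `cell` is a list of ordered position vectors."""
    s = len(cell)
    q = round(frac * s)                               # equation (8)
    # odd size: middle particle; even size: pick one of the two middle ones
    center = s // 2 if s % 2 == 1 else random.choice([s // 2 - 1, s // 2])
    others = sorted((i for i in range(s) if i != center),
                    key=lambda i: math.dist(cell[i], cell[center]))
    doomed = set(others[:q])                          # q nearest neighbours
    return [p for i, p in enumerate(cell) if i not in doomed]
```

For five collinear particles, the two neighbours of the central one are removed: `central_control_prune([(0,), (1,), (2,), (3,), (4,)])` keeps `(0,)`, `(2,)`, and `(4,)`.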

Update Strategy of Individual Optimal Particle.
In PSO, the individual best position (pbest) of a particle is the best position it has found so far. Reasonable selection of pbest is conducive to enhancing the population's ability to exploit local space, so that particles can find optimal solutions in multiple regions. When solving SOPs, we only need to compare whether the fitness value of the current particle is better than that of the pbest. However, for MOPs, the best individual is chosen using the information of the externally archived solutions. In this study, an individual best particle update strategy is adopted in which the Pareto dominance relationship is used to update the individual best particle, so as to improve the convergence speed. After the individual best particles are updated, it is also necessary to keep the particles within the feasible region, preventing them from exceeding the boundary of the space (i.e., avoiding newly generated particles that lie outside the valid feasible region). The dominance relationship between the current particle and the individual best is first compared. There are three situations: (1) if the individual best particle dominates the current particle, the individual best particle remains unchanged; (2) if the current particle dominates the individual best particle, the current particle replaces the individual best particle; and (3) if they do not dominate each other, it is difficult to choose one as the individual best solution. Therefore, this study uses a combination method [39] to update the individual best particles, which is conducive to the search of the population particles in their own local space and improves the exploitation ability of the algorithm. At the same time, it also enhances the diversity of the population and helps avoid falling into local minima.
The update formula of the individual best is $\vec{x}^{pbest}_i(t+1) = u\,\vec{x}_i(t+1) + (1-u)\,\vec{x}^{pbest}_i(t) + \vec{c}$, (11) where the parameter u is the uniform factor that balances the influence of the search directions of the two solutions: if u = 1, the position of the current particle is selected as pbest; if u = 0, the previous pbest is kept. For u ∈ (0, 1), the formula combines the respective advantages of the two solutions; in this study, u = 0.1. $\vec{c} \sim N(0, 0.01^2 I)$ is a normally distributed vector, where $I_{D\times D}$ is the identity matrix; $\vec{c}$ is a random term that mimics the mutation of an evolutionary algorithm to enhance the exploration ability of the algorithm.
A bi-objective optimization problem is taken as an example: the objective vector of the current particle i in the (t + 1)th iteration is compared with that of its individual best particle under the Pareto dominance relationship, and the individual best particle is updated according to the three situations described above.
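The combination update for two mutually nondominated solutions can be sketched as follows, assuming the form pbest' = u·x + (1−u)·pbest + c with c ~ N(0, 0.01²I) as described in the text (the function name and the exact formula are assumptions):

```python
import random

def combine_pbest(x, pbest, u=0.1, sigma=0.01):
    """Combination update used when x and pbest do not dominate each other:
    pbest' = u*x + (1-u)*pbest + c,  c ~ N(0, sigma^2 I)  (assumed form)."""
    return [u * xi + (1 - u) * pi + random.gauss(0.0, sigma)
            for xi, pi in zip(x, pbest)]
```

With the default u = 0.1, the new pbest stays close to the old one but drifts slightly toward the current particle, plus a small Gaussian mutation.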

Perturbation Strategy Based on Boundary Fluctuation.
The implementation of a perturbation strategy in PSO (e.g., RPSO [40]) can promote the search ability of particles, because a perturbation strategy helps to enhance the diversity of the population and lets particles escape local optima. The following situation may occur in the population: when a particle searching the feasible region finds a good position, the other particles fly in its direction. If the good position explored by the particle corresponds to a local optimal solution, the particles cannot search the entire feasible region. As a result, the diversity of the population is lost and the algorithm falls into a local optimum [41,42]. To avoid this situation and enhance the diversity of solutions, this study introduces a perturbation strategy to improve the diversity of search solutions during CCHMOPSO optimization. The disturbed objects are the nondominated particles and the newly updated particles. The strategy randomly selects particles under an exponential coefficient; they are first perturbed toward the upper bound. If the perturbation causes the particle position to exceed the upper bound, the perturbation is carried out toward the lower bound instead. This reduces the probability of the algorithm falling into a local optimum and maintains the diversity of the population. From (13), it can be seen that the perturbation amplitude of the particles decreases as the number of iterations increases. In the perturbation operation, the disturbed range of the whole population gradually shrinks, and the algorithm converges as the disturbed particles gradually decrease.
The perturbation formula is as follows: $x_{ij}(t+1) = x_{ij}(t) + r_3\,\psi\,(ub_j - x_{ij}(t))$, (13) where $\vec{ub}$ represents the upper bound of the particle position, $\vec{ub} = (1, 1, \ldots, 1)$; ψ is an exponential disturbance factor that decreases as the iteration count t increases, shrinking the disturbance range; n (n ≤ N) represents the number of particles affected by the perturbation factor; j (j ≤ D) represents the specific dimension of the ith particle to be disturbed, which depends on the number of iterations; $r_3$ represents a random coefficient uniformly distributed in the range [0, 1]; and T represents the maximum number of iterations.
A perturbation strategy is applied to the new particles obtained after each update and to all nondominated particles after each external archive update. Through perturbation, the diversity of the solutions is enhanced and some particles become feasible again, which effectively prevents the population from falling into a local extremum.
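A sketch of the boundary-fluctuation perturbation, assuming an exponential decay psi = exp(−t/T) and a uniformly random choice of the disturbed dimension (both the decay form and the selection rule are assumptions, since the exact formula (13) is not fully specified here):

```python
import math
import random

def perturb(x, t, T, ub=1.0, lb=0.0):
    """Perturb one randomly chosen dimension toward the upper bound with an
    amplitude that shrinks over iterations; redirect toward the lower bound
    if the result would leave the feasible region."""
    x = list(x)
    j = random.randrange(len(x))         # dimension to disturb
    psi = math.exp(-t / T)               # assumed exponential decay factor
    r3 = random.random()
    cand = x[j] + r3 * psi * (ub - x[j])
    if cand > ub:                        # overflow: perturb toward lower bound
        cand = x[j] - r3 * psi * (x[j] - lb)
    x[j] = cand
    return x
```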
All in all, the abovementioned perturbation strategy makes the particles converge to the true Pareto front faster and accelerates the convergence speed. It is worth noting that, compared with the traditional MOPSO, the CCHMOPSO proposed in this study does not increase the amount of computation too much.

Framework of CCHMOPSO.
The description of CCHMOPSO is as follows:
Step 1. Initialization. First, set the relevant parameters, such as the population size N and the final stop condition T_max = 10000. Second, set the archive to an empty set, and initialize the particles' velocity $\vec{v} = 0$ and position $\vec{x}$;
Step 2. Perturb the particles in the population using the method described in Section 3.3;
Step 3. Calculate the objective function values of all particles in the population;
Step 4. Update the external archive using the method shown in Section 3.1;
Step 5. Update the positions of pbest and gbest (use (11) in Section 3.2 to update the position of pbest);
Step 6. Update the velocity and position using (6) and (7);
Step 7. Use the method described in Section 3.3 to perturb the new particles in the population;
Step 8. Calculate the objective function values of all new particles in the population;
Step 9. Update the external archive again using the method described in Section 3.1;
Step 10. Use the method described in Section 3.3 again to perturb the nondominated particles;
Step 11. Update the positions of pbest and gbest (use (11) in Section 3.2 to update the position of pbest);
Step 12. Judge whether the algorithm has reached the final stop condition. If yes, stop and output the final result; otherwise, return to Step 6.
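Steps 1-12 can be condensed into the following high-level loop (a structural sketch only; each operator argument stands for one of the strategies of Sections 3.1-3.3 and is passed in as a function):

```python
def cchmopso(pop, evaluate, perturb, update_archive, select_pbest_gbest,
             pso_update, t_max=10000):
    """High-level CCHMOPSO loop (Steps 1-12); operators are injected."""
    pop = perturb(pop)                                     # Step 2
    objs = evaluate(pop)                                   # Step 3
    archive = update_archive([], pop, objs)                # Step 4
    pbest, gbest = select_pbest_gbest(pop, objs, archive)  # Step 5
    for _ in range(t_max):
        pop = pso_update(pop, pbest, gbest)                # Step 6
        pop = perturb(pop)                                 # Step 7
        objs = evaluate(pop)                               # Step 8
        archive = update_archive(archive, pop, objs)       # Steps 9-10
        pbest, gbest = select_pbest_gbest(pop, objs, archive)  # Step 11
    return archive                                         # Step 12
```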

Validation Results
In this section, several experiments are performed to verify the performance of CCHMOPSO. The content of this section is divided into four parts. The first part covers the performance indicators; the second part covers the parameter settings; the third part compares CCHMOPSO with four other MOPSOs; and the fourth part compares CCHMOPSO with four competitive MOEAs.

Performance Indicators.
In this study, two performance metrics are used to compare CCHMOPSO with the selected MOPSOs and MOEAs: the inverted generational distance and the hyper-volume. The inverted generational distance [43] (IGD) metric is used to evaluate the performance of the nine algorithms. Normally, this indicator measures the distance between the true Pareto front and the nondominated solution set obtained by the algorithm, evaluating the quality of the obtained solution set in terms of both convergence and diversity. The smaller the IGD value, the better the quality of the solution set obtained by the algorithm. IGD is defined as $IGD(P, F^*) = \frac{1}{|F^*|} \sum_{x^* \in F^*} dist(x^*, P)$, (14) where $F^*$ represents a set of reference points uniformly sampled on the true Pareto front; P represents the solution set obtained by the algorithm; and $dist(x^*, P)$ represents the Euclidean distance between a point $x^* \in F^*$ and the nearest solution in P. The hyper-volume [44] (HV) metric is used to evaluate the overall performance of the nine algorithms. This metric measures the hyper-volume of the region in the objective space bounded by the obtained solution set and a reference point, evaluating both distribution and convergence. The larger the HV value, the better the overall performance of the algorithm. The reference point of HV is set to (1.1, 1.1).
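Following the definition above, IGD can be computed directly (the reference points are assumed to be sampled from the true Pareto front):

```python
import math

def igd(approx, reference):
    """Mean Euclidean distance from each true-front reference point to its
    nearest solution in the obtained set `approx`."""
    return sum(min(math.dist(r, a) for a in approx)
               for r in reference) / len(reference)
```

For instance, `igd([(0, 1), (1, 0)], [(0, 1), (1, 0)])` is 0, since every reference point coincides with an obtained solution.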
$HV(P) = \mathrm{volume}\Big(\bigcup_{\vec{x} \in P} [f_1(\vec{x}), R_1] \times \cdots \times [f_m(\vec{x}), R_m]\Big)$, (15) where $f_i$ represents the ith objective function value of a solution in P and $R_i$ represents the ith coordinate of the reference point, which is set to (1.1, 1.1). Four MOPSOs are selected for performance comparison, namely CMOPSO [15], MOPSOCD [21], MPSOD [12], and NMPSO [22]; four competitive MOEAs are also selected, namely MOEAIGDNS [26], SPEA2 [25], NSGA-II [24], and MOEAD [23]. We use 22 benchmark problems from the three test series ZDT [45], DTLZ [46], and UF [47]. To make a fair comparison, the relevant parameters of all the comparison algorithms used in this study are set according to the recommended values in Table 1. $p_c$ and $p_m$ represent the crossover and mutation probabilities, respectively; $\eta_c$ and $\eta_m$ represent the corresponding distribution indices. For CMOPSO and MOPSOCD, the control parameter R is randomly sampled in [0, 1], and the inertia weight w of MOPSOCD is 0.4; for MPSOD and CCHMOPSO, the control parameter c is 2.0, the inertia weight w of MPSOD is randomly sampled in [0.1, 0.9], and the inertia weight w of CCHMOPSO is 0.4. For NMPSO, the control parameter c is randomly sampled in [1.5, 2.5]. For MOEAD, T represents the neighborhood size between weight coefficients, and F represents the crossover probability of differential evolution. The div of CCHMOPSO represents the number of grid divisions per cell. In this study, the population size of all algorithms is set to N = 200. Except for test problem ZDT4, whose maximum number of fitness evaluations is 40000, the maximum number of fitness evaluations for the other 21 test problems is 10000. On each test problem, the experiment is carried out with 30 independent runs, and the IGD and HV values, together with their averages and standard deviations, are recorded. All the algorithm source codes used for comparison are provided in PlatEMO [48]. Tables 2 and 3 show the comparison of the IGD and HV values obtained by the five algorithms.
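For the bi-objective case with reference point (1.1, 1.1), HV reduces to a sweep over the sorted front (a sketch for minimization; higher-dimensional HV requires a dedicated algorithm):

```python
def hv2d(points, ref=(1.1, 1.1)):
    """Hyper-volume (area) dominated by a bi-objective minimization set,
    bounded by the reference point `ref`."""
    pts = sorted(p for p in points if p[0] <= ref[0] and p[1] <= ref[1])
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # skip dominated points
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area
```

For example, the two-point front {(0, 1), (1, 0)} dominates an area of 0.11 + 0.11 − 0.01 = 0.21 relative to (1.1, 1.1).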
Compared with the existing MOEAs, CCHMOPSO achieves better overall performance on the test problems. From the IGD values, among the 22 test problems, CCHMOPSO, NSGA-II, MOEAIGDNS, SPEA2, and MOEAD obtain the best average performance on 13, 5, 1, 2, and 1 test problems, respectively. Therefore, over the 22 test problems, the IGD values obtained by CCHMOPSO are better than those of the other four algorithms. It is found from Table 2 that CCHMOPSO significantly outperforms the other four algorithms on the test problems UF1-UF7, because it uses a central control method to maintain the external archive and improve the search capability of the algorithm. In the 22 one-to-one comparisons, CCHMOPSO performs better than MOEAIGDNS, SPEA2, NSGA-II, and MOEAD 16, 14, 13, and 17 times, respectively, and worse only 6, 8, 9, and 5 times, respectively. On the test problems DTLZ5 and DTLZ6 with degenerate PFs, MOEAD, SPEA2, and MOEAIGDNS perform poorly because their evolutionary search cannot solve these problems effectively. Therefore, in terms of the IGD index, CCHMOPSO performs better than the other four algorithms on most of the 22 test problems.
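The one-to-one win/loss counts above can be tallied straightforwardly; a simplified sketch over hypothetical per-problem mean IGD values is shown below (the paper's tables typically pair such counts with a statistical significance test, which is omitted here):

```python
def tally(mean_igd_a, mean_igd_b):
    """Count on how many test problems algorithm A has a strictly lower
    (better) mean IGD than algorithm B, and vice versa; equal values tie."""
    wins = sum(a < b for a, b in zip(mean_igd_a, mean_igd_b))
    losses = sum(a > b for a, b in zip(mean_igd_a, mean_igd_b))
    ties = len(mean_igd_a) - wins - losses
    return wins, losses, ties

# hypothetical mean IGD values on three problems
print(tally([0.1, 0.2, 0.3], [0.2, 0.1, 0.3]))  # → (1, 1, 1)
```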

Comparison Experiments with Four MOEAs.
To further verify the effectiveness of CCHMOPSO, the HV index is used in addition to the IGD index. As can be seen from the HV values, CCHMOPSO, NSGA-II, MOEAD, MOEAIGDNS, and SPEA2 obtain the best overall performance on 15, 3, 1, 1, and 1 of the 22 test problems, respectively. Therefore, CCHMOPSO obtains relatively better HV values than the other four algorithms on the 22 test problems. Note that on the test problems ZDT3 and DTLZ7, the IGD values obtained by CCHMOPSO are only moderate, but the HV values show that it has the best overall performance.
As shown in Figure 4, box plots of the IGD results of the five algorithms are drawn, which illustrate the distribution of the data. As can be seen from Figure 4, CCHMOPSO shows a significant improvement over the other four algorithms on the test problems ZDT1, ZDT2, ZDT4, ZDT6, UF1, UF2, UF5, UF7, UF9, and UF10. A lower mean IGD value and a shorter box indicate that the algorithm obtains a better IGD value with more consistent results. On the other test problems, except for DTLZ1, DTLZ2, DTLZ3, and DTLZ5, it is hard to see a difference between CCHMOPSO, NSGA-II, MOEAIGDNS, and SPEA2, while the performance of MOEAD is obviously poor. In general, CCHMOPSO gives better results than the other four algorithms.

Table 1: Parameter settings for all the algorithms.

As shown in Figure 5, convergence diagrams of the five algorithms on the test problems UF2, UF3, and ZDT6 are drawn; each algorithm records the IGD value 10 times at fixed intervals of function evaluations. It can be observed that CCHMOPSO converges faster than the other four algorithms because the combination method is used to select learning samples, which effectively maintains the diversity of the population. Meanwhile, MOEAD also converges quickly on the test problems UF3 and ZDT6, because the heuristic method adopted by that algorithm accelerates its convergence. The good convergence speed of CCHMOPSO indicates that the three strategies proposed in this study can balance convergence and diversity well.
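The per-problem entries summarized in the tables and box plots are the mean and standard deviation of a metric over the 30 independent runs; a minimal sketch of that aggregation, with hypothetical run data in place of real experiment results:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical IGD values from 30 independent runs of one algorithm on one problem
runs = rng.normal(loc=4.5e-3, scale=3e-4, size=30)

mean, std = runs.mean(), runs.std(ddof=1)  # sample standard deviation
# the "mean (std)" cell format used in the result tables
print(f"{mean:.4e} ({std:.2e})")
```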

Comparison Experiments with Four MOPSOs. Four competitive MOPSOs are selected for performance comparison, namely CMOPSO, MOPSOCD, MPSOD, and NMPSO. As can be seen from the IGD values, CCHMOPSO, CMOPSO, and NMPSO achieve the optimal IGD values on 12, 7, and 3 of the 22 test problems, respectively. MOPSOCD and MPSOD do not perform well on the DTLZ, UF, and ZDT test series. The IGD values obtained by CCHMOPSO and CMOPSO are relatively better than those of the other three algorithms. On ZDT1, ZDT2, and ZDT6, CCHMOPSO performs worse than CMOPSO but better than the other algorithms. On DTLZ6, CCHMOPSO performs best, because the central control strategy proposed in this algorithm can effectively search for the global optimum. A one-to-one comparison shows that CCHMOPSO performs better than CMOPSO, MOPSOCD, MPSOD, and NMPSO in 14, 16, 20, and 15 of the 22 comparisons, respectively, while being surpassed by them 8, 6, 2, and 7 times, respectively. In terms of the IGD index, this shows that CCHMOPSO is better than the other four MOPSOs on most of the 22 test problems.
It can be seen from Table 5 that the comparison using the HV index gives results similar to those of the IGD index. From the HV values, CMOPSO and NMPSO achieve the best performance 4 and 5 times, respectively, while CCHMOPSO achieves the best performance 11 times. MOPSOCD and MPSOD perform worse than the other three algorithms on the 22 test problems. In addition, on the test problems DTLZ1, DTLZ3, ZDT1, ZDT2, and ZDT6, CCHMOPSO obtains better IGD values but is slightly worse than the best algorithm in terms of HV. On the test problem ZDT3, the IGD value of CCHMOPSO is slightly worse, but it has the best overall performance in terms of HV. Therefore, CCHMOPSO is still superior to these four MOPSOs on these 22 test problems.
According to Figure 8, CCHMOPSO shows a significant improvement over the other four algorithms on the test problems ZDT4, UF1, UF2, UF5, UF7, UF8, UF9, and UF10. It is worth noting that a lower IGD value and a shorter box indicate that the algorithm obtains a better average IGD value with more consistent results. On the other test problems, except for DTLZ2 and DTLZ5, CCHMOPSO, CMOPSO, and NMPSO perform well, while the performance of MOPSOCD and MPSOD is relatively poor.
These figures further confirm that CCHMOPSO obtains better results and has better overall performance than the other four algorithms on the 22 test problems.
As shown in Figure 9, the convergence graphs of the five algorithms on the test problems UF2, UF3, and ZDT6 are drawn; each algorithm records the IGD value 10 times at fixed intervals of function evaluations. It can be observed from the figure that CCHMOPSO balances global search and local exploitation better than the other four algorithms, resulting in better convergence. This good convergence is mainly because the three strategies proposed in CCHMOPSO balance convergence and diversity well.
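The logging scheme behind these convergence plots can be sketched as follows; `step_fn` and `evaluate_igd` are hypothetical callbacks standing in for one optimizer's search loop and metric evaluation:

```python
def run_with_logging(step_fn, evaluate_igd, max_evals=10_000, n_records=10):
    """Run an optimizer and record the IGD 10 times at equal intervals of
    function evaluations, as in the convergence diagrams.
    `step_fn(n)` is assumed to advance the search by n evaluations."""
    interval = max_evals // n_records
    history = []
    for i in range(1, n_records + 1):
        step_fn(interval)
        history.append((i * interval, evaluate_igd()))
    return history

# toy stand-in: the "optimizer" halves its IGD every interval
state = {"igd": 1.0}
def step(n): state["igd"] *= 0.5
hist = run_with_logging(step, lambda: state["igd"])
print(hist[-1])  # → (10000, 0.0009765625)
```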
As shown in Figures 10 and 11, the Pareto fronts obtained by CCHMOPSO, CMOPSO, MOPSOCD, MPSOD, and NMPSO on the bi-objective test problems ZDT2 and ZDT4 are drawn, with the true Pareto front of each problem plotted as a continuous line. It can be observed from Figure 10 that on ZDT2, only CMOPSO and CCHMOPSO cover the true Pareto front well, while the other three comparison algorithms have poor coverage. The approximate fronts of CCHMOPSO and CMOPSO are evenly distributed along the true Pareto front, mainly because of their better convergence. It can be seen from Figure 11 that on ZDT4, only CCHMOPSO can cover the true Pareto front, while the other four comparison algorithms hardly cover it at all.

Conclusion
This study proposes a hybrid multi-objective particle swarm optimization with a central control strategy, which uses three strategies to improve MOPSO. First, a perturbation strategy gives the particles in the population a wider search range and at the same time prevents the population from being trapped in a local optimum. Second, the individual best particle update strategy based on the combination method effectively improves the search ability of the algorithm and increases the diversity of the population. Third, the central control strategy used to update the external archive allows the archive to store nondominated solutions effectively. The experimental results on 22 test problems show that, compared with existing MOPSOs and classic MOEAs, CCHMOPSO has better overall performance.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this study.

Computational Intelligence and Neuroscience