Handling multi-objective optimization problems with a comprehensive indicator and layered particle swarm optimizer

Abstract: The multi-objective particle swarm optimization algorithm suffers from several drawbacks, such as premature convergence, inadequate convergence, and inadequate diversity. This is particularly true for complex, high-dimensional, multi-objective problems, where it easily falls into a local optimum. To address these issues, this paper proposes a novel algorithm called IMOPSOCE. The innovations of the proposed algorithm mainly contain three crucial factors: 1) an external archive maintenance strategy based on the inflection point distance and distribution coefficient is designed, and the comprehensive indicator (CM) is used to remove the non-dominated solutions with poor comprehensive performance, improving the convergence of the algorithm and the diversity of the swarm; 2) a random inertia weight strategy is used to efficiently control the movement of particles, balance the exploration and exploitation capabilities of the swarm, and avoid excessive local and global searches; and 3) different flight modes are offered for particles at different levels after each update to further enhance the optimization capacity. Finally, the algorithm is tested on 22 typical test functions and compared with 10 other algorithms, demonstrating its competitiveness: it outperforms the compared algorithms on the majority of test functions.


Introduction
Most real-life problems involve multiple conflicting objectives that must be considered simultaneously. Solving these multi-objective optimization problems (MOPs) [1] requires optimizing multiple objectives at once [2,3]. If one of the objectives is optimal, it is impossible to simultaneously obtain the optimal solutions for all the other objectives; improving it may even make the results of the other objectives worse. As a kind of complicated optimization problem, research on MOPs appears in production scheduling [4,5], urban transportation [6], network communication [7,8], and other areas. It also arises in problems like engineering design [9], data mining [10], and fund planning [11], and is therefore significant from both a theoretical and a practical standpoint. With the rapid development of the real world, MOPs confront numerous challenges, such as diversification and dynamic optimization.
Optimization problems are generally divided into single-objective and multi-objective optimization problems. Unlike a single-objective optimization problem, a multi-objective optimization problem has no unique global optimal solution. Therefore, solving real-life MOPs faces many difficulties and challenges. Intelligent algorithms are an effective way to solve MOPs, and many kinds have been developed. Swarm intelligence algorithms such as the particle swarm optimization algorithm (PSO) [12], the whale optimization algorithm (WOA) [13] and the dragonfly optimization algorithm (DA) [14] are widely used to solve MOPs.
Among the above intelligent algorithms, researchers have extended PSO to multi-objective particle swarm optimization (MOPSO) [15] due to its simple structure and high efficiency. However, similar to other optimization algorithms, MOPSO has several problems, including how to achieve a good balance between exploration and exploitation, insufficient search accuracy, premature convergence, and a tendency to fall into local optima. To solve these problems, many academics have committed their time to relevant research and proposed numerous improvement strategies [16][17][18][19][20][21] to enhance the performance of MOPSO. These strategies can be divided into the following three groups: 1) parameter settings. Chen et al. [16] proposed a heuristic algorithm in which the probability density function is adjusted adaptively and a random inertia weight is generated during the search; 2) neighborhood topology. Roshanzamir et al. [17] proposed a new hierarchical multi-swarm particle swarm optimization algorithm with different task assignments. This structure greatly improves the performance of the particle swarm; and 3) hybrid strategies. Deb et al. [18,19] proposed an adaptive multi-objective optimization algorithm based on Pareto dominance, with a "distribution entropy" diversity measurement strategy that can measure the distribution of solutions more accurately. However, its main purpose is to increase diversity without considering convergence, so it cannot improve the overall performance. Raquel and Naval [20] proposed an extended particle swarm optimization algorithm in which the diversity of the Pareto optimal solutions in the external archive is maintained by incorporating the crowding distance into MOPSO. Compared with the traditional algorithm, this algorithm shows some improvement in diversity. However, only the crowding distance is taken as the core, and other influencing factors are not fully considered.
In order to balance convergence and diversity, Liu et al. [21] used the R2 indicator and a particle swarm optimizer based on decomposition. A new speed updating method is proposed to improve the capability of exploration and exploitation.
Although the aforementioned algorithms have a certain effect on improving the comprehensive performance of the algorithm and escaping local optima, there are still some drawbacks: diversity and convergence are not taken into account at the same time; too many parameters must be set; and other influencing factors are not fully considered. Based on these problems, this paper proposes a novel algorithm with a comprehensive indicator and layered particle swarm optimizer (IMOPSOCE) to further improve the performance of MOPSO. While introducing some new strategies, IMOPSOCE retains some settings of MOPSO. Twenty-two test functions validate the effectiveness of the algorithm. The primary contributions of the proposed IMOPSOCE are the following: 1) A new strategy for external archive maintenance is proposed. A comprehensive indicator is employed to measure the comprehensive performance of the non-dominated solutions in the external archive and to update and maintain the archive, which effectively improves the convergence of the algorithm and the diversity of the swarm.
2) The random inertia weight is utilized to further balance the capacity for exploration and exploitation of swarm by taking into account the global search in the early stage and the local exploration in the later stage.
3) The swarm is divided into two layers based on the levels of particles after each update, and the speed update mode of the first level of particles is altered to improve the search efficiency of the algorithm.
The rest of this paper is organized as follows. The definitions of MOPs and PSO are briefly introduced in Section 2. In Section 3, IMOPSOCE is proposed, and its improvement strategy is described in detail. Section 4 shows results for an experimental study and discussions that demonstrate the effectiveness of IMOPSOCE. Finally, Section 5 draws some conclusions based on the work done in this paper and describes future work.

Multi-objective optimization problems
The majority of problems encountered in both practical life and scientific research belong to MOPs [1]. Unlike single-objective optimization problems, which have one optimal solution, MOPs have a solution set [22], the Pareto optimal solution set. An MOP contains multiple conflicting objective functions to be minimized or maximized. A minimization MOP can be described as follows:

min F(x) = (f_1(x), f_2(x), ..., f_M(x)), subject to x ∈ Ω,  (1)

where M is the number of objective functions, x = (x_1, ..., x_D) is a decision vector, and Ω is the decision space. A solution x is said to dominate a solution y if f_i(x) ≤ f_i(y) for every objective i and f_j(x) < f_j(y) for at least one objective j. When a solution is not dominated by any other solution, it is considered to be a non-dominated (Pareto optimal) solution.

Particle swarm optimization

Each particle has a position and a velocity. The position of the i-th particle is denoted by x_i = (x_i1, x_i2, ..., x_iD), and its corresponding velocity is v_i = (v_i1, v_i2, ..., v_iD), where i = 1, 2, ..., N, N is the number of particles, and D is the size of the decision space. The i-th particle in the population updates its velocity and position according to the two following formulas:

v_ij(t+1) = ω·v_ij(t) + c1·r1·(pbest_ij(t) − x_ij(t)) + c2·r2·(gbest_j(t) − x_ij(t)),  (3)
x_ij(t+1) = x_ij(t) + v_ij(t+1),  (4)

where ω is the inertia weight, c1 and c2 are the learning factors, r1 and r2 are uniformly distributed random numbers in the range [0, 1], v_ij(t) and x_ij(t) denote the velocity and position of the i-th particle in the j-th dimension at the t-th iteration, respectively, pbest_ij(t) denotes the individual historical optimal position of the i-th particle in the j-th dimension, and gbest_j(t) denotes the optimal position of the population in the j-th dimension. As the fundamental formula of the algorithm, Eq (3) has three parts: the first part represents the state of the previous generation of particles; the second part represents the influence of the individual historical optimal position on the particle; and the third part represents the influence of the optimal position of the population on the particle. In Eq (4), the current position is composed of the position of the previous generation and the current velocity.
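To make the update rules concrete, the following Python sketch implements Eqs (3) and (4) for a whole swarm at once. The default values of w, c1 and c2 are illustrative only; they are not the settings used in this paper.

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0, rng=None):
    """One PSO step following Eqs (3) and (4): inertia, cognitive, and
    social components update the velocity, and the position then moves
    by the new velocity. x, v, pbest are (N, D) arrays; gbest is (D,)."""
    rng = np.random.default_rng(rng)
    r1 = rng.random(x.shape)  # uniform in [0, 1), per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq (3)
    x_new = x + v_new                                              # Eq (4)
    return x_new, v_new
```

Drawing fresh r1 and r2 for every particle and dimension keeps the search stochastic, which is what prevents the whole swarm from following identical trajectories.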

Existing MOPSOs
In addition to the MOPSOs mentioned above, some MOPSOs are designed to improve the algorithm performance and balance exploration and exploitation. Next, we briefly review some representative MOPSOs.
Hu and Yen [23] proposed a new algorithm named pccsAMOPSO. A density estimation method (PCCS) is designed, and based on this method, the distribution entropy of non-dominated solutions is used to evaluate the approximate Pareto front uniformity obtained by the MOP optimizer. At the same time, this method can also be used for the selection of leaders and to update the external archive in MOPSO.
Han et al. [24] proposed an AMOPSO algorithm based on a hybrid framework of solution distribution entropy and population spacing (SP) information. The leader selection mechanism based on the solution distribution entropy can analyze the evolutionary trend and select suitable leaders to balance the convergence and diversity of non-dominant solutions. In addition, a flight parameter adjustment mechanism based on population spacing information is proposed to balance the global and local search abilities of particles. The above strategies can obtain a set of optimal solutions with a high diversity and achieve a balance between exploration and exploitation capabilities in the search process.
In terms of parameter setting, Tripathi et al. [25] described a time-varying multi-objective particle swarm optimization algorithm (TV-MOPSO). In a typical MOPSO algorithm, the inertia weight ω and the learning factors c1 and c2 have a very important impact on the algorithm performance. Proper parameter settings enable the algorithm to achieve a good balance between exploration and exploitation. TV-MOPSO is inherently adaptive in terms of the inertia weight and acceleration coefficients. This adaptability helps the algorithm explore the search space more efficiently. Shibata et al. [26] proposed a multi-objective discrete particle swarm optimizer (DPSO). This method introduces a hierarchical structure composed of DPSOs and a multi-objective genetic algorithm (MOGA). The hierarchical structure can reduce the computational cost of learning; therefore, the method is effective in high-dimensional problems. In addition, the diversity and accuracy of the solutions obtained are equal to or higher than those obtained using traditional methods.
The above MOPSOs propose different strategies from different aspects to obtain better solutions. Motivated by them, IMOPSOCE is proposed to improve the convergence of the algorithm and the diversity of the population. A comprehensive indicator is used to measure the performance of the solutions in the external archive to help maintain it better. To take both global and local search into account, a random inertia weight strategy is designed, and a hierarchical structure is proposed that divides the swarm according to the levels of particles after each update. This is described in detail in Section 3.

The comprehensive indicator application
The convergence and diversity of the algorithm have a direct impact on its overall performance. This work uses a comprehensive indicator that combines the inflection point distance (CPI) [27] and the distribution coefficient (MPI) to measure the performance of the non-dominated solutions in the external archive after each iteration. The external archive is maintained by the comprehensive indicator: the non-dominated solutions with poor convergence and diversity are deleted, while those with good convergence and diversity are saved in the external archive. This strategy boosts the convergence and diversity of the non-dominated solutions in the external archive. The comprehensive indicator CM combines the two components in Eq (5), where CPI is the convergence indicator and MPI is the diversity indicator. When the external archive reaches its threshold, the proposed IMOPSOCE calculates the CM value of each non-dominated solution in the external archive and eliminates the non-dominated solutions with poor comprehensive performance by comparing the CM values. In Eq (5), CPI reflects the convergence of the non-dominated solutions in the external archive, while MPI measures their distribution. Depending on the number of objective functions, the calculation of CPI is separated into the two following cases: 1) When the objective function is a bi-objective function, the CPI of a non-dominated solution is its distance to the extreme line determined by the two extreme non-dominated solutions in the external archive:

CPI = |A·x0 + B·y0 + C| / sqrt(A^2 + B^2),  (6)

where x0 and y0 are the coordinate values of each non-dominated solution in the external archive, and A, B and C are real numbers determined by the extreme line composed of the two extreme non-dominated solutions.
The extreme line is denoted by

A·x + B·y + C = 0,  (7)

where x and y are the coordinate values of the extreme non-dominated solutions. 2) When there are three or more objective functions, CPI is the distance between each non-dominated solution in the external archive and the extremal "hyperplane":

CPI = |A·x0 + B·y0 + C·z0 + D| / sqrt(A^2 + B^2 + C^2),  (8)

where A, B, C and D are determined by the extremal "hyperplane", and x0, y0 and z0 are the coordinate values of each non-dominated solution in the external archive. The extremal "hyperplane" is given by

A·x + B·y + C·z + D = 0,  (9)

where x, y and z are the coordinate values of the extreme non-dominated solutions.
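As a sketch, the CPI of Eqs (6) and (8) is a point-to-line (or point-to-plane) distance. The selection of the extreme solutions below, as the archive members with the best value on each objective, is an illustrative assumption; the paper does not spell out how they are chosen.

```python
import numpy as np

def cpi(points):
    """Convergence indicator sketch: distance from each non-dominated
    point to the line (two objectives, Eq (6)) or plane (three
    objectives, Eq (8)) through the extreme solutions of the archive."""
    pts = np.asarray(points, dtype=float)
    if pts.shape[1] == 2:
        # Assumed extreme solutions: best value on each objective.
        p1 = pts[np.argmin(pts[:, 0])]
        p2 = pts[np.argmin(pts[:, 1])]
        # Line A*x + B*y + C = 0 through p1 and p2 (Eq (7)).
        A, B = p2[1] - p1[1], p1[0] - p2[0]
        C = -(A * p1[0] + B * p1[1])
        return np.abs(pts @ np.array([A, B]) + C) / np.hypot(A, B)
    # Three objectives: plane through the three extreme solutions (Eq (9)).
    e = np.array([pts[np.argmin(pts[:, j])] for j in range(3)])
    n = np.cross(e[1] - e[0], e[2] - e[0])  # plane normal (A, B, C)
    D = -n @ e[0]
    return np.abs(pts @ n + D) / np.linalg.norm(n)
```

The extreme solutions themselves lie on the line or plane, so their CPI is zero; interior solutions get larger values the farther they sit from it.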
CPI is an indicator that measures the convergence of non-dominated solutions. The smaller the CPI value, the better the convergence of the non-dominated solution; in other words, the closer the non-dominated solutions are to the extreme line or extremal "hyperplane", the better their convergence. Using Figure 1 as an illustration, the extreme points of the non-dominated solutions are shown in blue; N1 is a point with good convergence, while N3 has poor convergence. Many scholars have suggested approaches for improving population diversity, which led to the concept of crowding distance; based on it, Deb et al. obtained an improved population diversity [18,19]. In this paper, to improve the population diversity, the distribution of the non-dominated solutions in the objective space is measured using the distribution coefficient. The distribution coefficient is defined in Eq (10) for m objective functions, where f_ij is the distance from the i-th solution to its previous adjacent solution on the j-th objective function, and b_ij is the distance from the i-th solution to its next adjacent solution on the j-th objective function. The smaller the MPI value of a particle, the better its distribution.
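The exact form of Eq (10) is not fully legible in the source, so the sketch below assumes one plausible reading: for each solution and each objective, the gap to the previous neighbor (f_ij) is compared with the gap to the next neighbor (b_ij), and balanced gaps give a small value, which the paper states means a better distribution. The normalization used here is an assumption for illustration.

```python
import numpy as np

def mpi(points):
    """Distribution-coefficient sketch (assumed form of Eq (10)).
    For each non-boundary solution and each objective, accumulate the
    imbalance |f_ij - b_ij| / (f_ij + b_ij) between the gaps to its two
    sorted neighbors. Boundary solutions keep the minimum value 0, so
    they are always retained, as the paper prescribes."""
    pts = np.asarray(points, dtype=float)
    n, m = pts.shape
    score = np.zeros(n)
    for j in range(m):
        order = np.argsort(pts[:, j])
        col = pts[order, j]
        f = col[1:-1] - col[:-2]   # distance to previous neighbor, f_ij
        b = col[2:] - col[1:-1]    # distance to next neighbor, b_ij
        score[order[1:-1]] += np.abs(f - b) / np.maximum(f + b, 1e-12)
    return score
```

Uniformly spaced solutions score 0 on every objective, while a solution crowded toward one neighbor accumulates a larger (worse) value.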
Equations (10)-(13) give the distribution coefficients; solution A clearly has a better distribution than solution B. It follows that the distribution coefficient can reflect the distribution of non-dominated solutions in the objective space; thus, the non-dominated solutions with a better distribution in the external archive are chosen. When computing the MPI value of the non-dominated solutions, the boundary non-dominated solutions are assigned the minimum distribution coefficient value, so that these solutions are always selected. Equations (10)-(13) are used to obtain the distribution coefficients of the non-dominated solutions in the middle. The entire procedure for the external archive maintenance is listed in Algorithm 1. This strategy maintains the external archive using the CM indicator. First, the CPI value of each non-dominated solution is determined using Eqs (6) and (8) (line 2). Then, its MPI value is determined using Eq (10) (line 3). Next, Eq (5) is utilized to obtain its CM value (line 4). Finally, the non-dominated solutions with a poor comprehensive performance are eliminated by comparing the CM values of the non-dominated solutions (line 5).

Algorithm 1: Update Archive
Input: R_max (external archive threshold), g_best (global optimal position)
Output: g_best (global optimal position)
1 While the size of the external archive exceeds R_max
2   Calculate the value of CPI using Eqs (6) and (8)
3   Calculate the value of MPI using Eq (10)
4   Calculate the value of CM using Eq (5)
5   Delete the non-dominated solutions with poor performance according to the value of CM
6 End While
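The pruning loop of Algorithm 1 can be sketched as follows. Since the combination in Eq (5) is not legible in the source, CM is assumed here to be the sum of the two indicators, with smaller values better for both; the real combination may differ.

```python
import numpy as np

def update_archive(archive, cpi_vals, mpi_vals, r_max):
    """Algorithm 1 sketch: while the archive exceeds its threshold r_max,
    rank the non-dominated solutions by the comprehensive indicator and
    drop the worst. CM = CPI + MPI is an assumed stand-in for Eq (5)."""
    archive = list(archive)
    while len(archive) > r_max:
        cm = np.asarray(cpi_vals) + np.asarray(mpi_vals)  # assumed Eq (5)
        worst = int(np.argmax(cm))  # poorest comprehensive performance
        for seq in (archive, cpi_vals, mpi_vals):
            del seq[worst]
    return archive
```

Deleting one solution at a time (rather than truncating in one pass) matches the While loop of the listing: removing a neighbor changes the crowding situation, so re-ranking after each deletion is the safe order of operations.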

The proposed random inertia weight method
The motion of the particles in the standard MOPSO follows Eqs (3) and (4). In Eq (3), the learning factors c1 and c2 determine the impact of the particle's own experience and of the experience of other particles on its motion, reflecting the information exchange within the particle swarm. ω stands for the effect of the previous velocity on the current velocity; it can be used to adjust the flying speed of a particle, limit its movement range, and balance its capacity for exploration and exploitation. Therefore, in order for the particles to search more effectively, a proper ω must be set. In conventional MOPSO, ω is typically taken as a fixed value, making it difficult to balance the global search in the early stage and the local exploration in the later stage. Therefore, we design a random inertia weight strategy to adjust ω, taking both the global search and the local exploration of particles into account, in order to efficiently regulate the movement of particles. Its specific formula is given in Eq (14), where t is the current iteration number, t_max is the maximum iteration number, and ω_max and ω_min are the maximum and minimum values of the inertia weight, respectively. Compared to the traditional MOPSO, the proposed IMOPSOCE improves the search efficiency of the algorithm and reasonably balances the exploration and exploitation capabilities of particles.
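Eq (14) itself is not legible in the extraction, so the sketch below assumes one common random inertia weight of this kind: a baseline that decreases linearly from ω_max toward ω_min (global search early, local exploration late), randomized uniformly between ω_min and that baseline. This is an illustrative form only, not necessarily the paper's exact formula.

```python
import random

def random_inertia_weight(t, t_max, w_min=0.4, w_max=0.9, rng=random):
    """Assumed random inertia weight in the spirit of Eq (14): the
    admissible range shrinks as the iteration count t grows, so early
    iterations may use large weights (exploration) while late ones are
    forced toward w_min (exploitation)."""
    baseline = w_max - (w_max - w_min) * t / t_max  # decreases with t
    return w_min + (baseline - w_min) * rng.random()
```

At t = 0 the weight can be anywhere in [w_min, w_max]; at t = t_max it collapses to w_min, so the particles settle into local refinement.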

Bilayer velocity update with different task allocations
Figure 3. Bilayer velocity update with different task allocations: the particles at the first level after updating fly according to Eq (15), while the particles at the other levels fly according to Eq (3).

Particles in the classical MOPSO move in accordance with Eqs (3) and (4). The IMOPSOCE proposed in this paper uses two layers. After each update, the particles of the population are split into two layers based on their levels, and the velocity update mode of the particles at the first level is modified. Since the particles at the first level after updating already have a good movement trend, learning from the social part may cause them to deviate from the correct movement direction. Therefore, in this paper, after each update, the particles in the first level maintain their own movement trend without being influenced by the social part; that is, Eq (15) is Eq (3) with the social term removed. Particles in the second layer continue to fly using Eq (3). In this way, some particles can explore more, diversity is maintained, and premature convergence is prevented, while other particles can exploit appropriate information more effectively during the search, thus improving the solutions obtained. Giving particles in different layers different velocity update modes strikes the right balance between exploration and exploitation, as illustrated in Figure 3: the particles in the first level adopt the velocity update mode of Eq (15) after each update, while the particles in the other levels fly in accordance with Eq (3).
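The bilayer rule can be sketched in a few lines: first-level particles keep their own movement trend, so their social term is dropped, while the remaining particles follow the full update of Eq (3). The parameter defaults are illustrative, not the paper's settings.

```python
import numpy as np

def layered_velocity(x, v, pbest, gbest, first_level,
                     w=0.4, c1=2.0, c2=2.0, rng=None):
    """Bilayer update sketch: particles flagged in the boolean mask
    first_level fly by Eq (15) (Eq (3) without the social gbest term);
    all others fly by the full Eq (3)."""
    rng = np.random.default_rng(rng)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    social = c2 * r2 * (gbest - x)
    social[first_level] = 0.0  # Eq (15): no social influence at level 1
    return w * v + c1 * r1 * (pbest - x) + social
```

Computing the social term for everyone and then zeroing it for the first level keeps the code vectorized; only the inertia and cognitive components steer the leading particles.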
Combining the above, the pseudo-code of the particle update procedure is described in Algorithm 2. Equation (14) is used to compute the inertia weight (line 1). Next, the individual optimal positions, velocities and positions of the first-level particles are found (lines 2-4). Finally, the velocities and positions of the particles after updating are determined using Eqs (15), (4) and (3) (lines 5-8).

Algorithm 2: Update Particles
Input: ω_max (inertia weight maximum value), ω_min (inertia weight minimum value), p (particle position), vel (particle velocity), p_best (individual optimal position), g_best (global optimal position)
Output: v (new velocity of each particle), v' (new velocity of each first-level particle), newp (new position of each particle), newp' (new position of each first-level particle)
1 Calculate the inertia weight using Eq (14)
2 Find p'_best  % individual optimal position of the first-level particles
3 Find vel'  % velocity of the first-level particles
4 Find p'  % position of the first-level particles
5 Calculate v' using Eq (15)
6 Calculate newp' using Eq (4)
7 Calculate v using Eq (3)
8 Calculate newp using Eq (4)

Algorithm 3: IMOPSOCE
1 Initialize the position and velocity of each particle
2 Calculate the fitness values
3 Initialize P_best, g_best and the external archive NA
4 While the termination criterion is not met
5   Calculate the inertia weight using Eq (14)
6   Update Particles (Algorithm 2)
7   Calculate the fitness values
8   Update Archive (Algorithm 1)
9   Update P_best and g_best
10 End While
11 Return NA
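The overall control flow can be sketched as a compact loop. This is a heavily simplified stand-in, under stated assumptions: the inertia weight is a plain uniform value in [0.4, 0.9] instead of Eq (14), the comprehensive-indicator pruning of Algorithm 1 is replaced by simple truncation, the leader is a random archive member, and the bilayer rule is omitted. Only the loop structure of Algorithm 3 is reproduced.

```python
import numpy as np

def mopso_loop_sketch(objective, n=20, d=2, t_max=50, r_max=10, seed=0):
    """Schematic MOPSO-style main loop: initialize, then repeatedly
    update velocities/positions, evaluate fitness, refresh pbest, and
    rebuild the non-dominated archive (capped at r_max)."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, d)); v = np.zeros((n, d))
    pbest = x.copy()
    pbest_f = np.array([objective(xi) for xi in x])
    archive = list(range(n))  # indices of provisional non-dominated solutions

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    for _t in range(t_max):
        w = 0.4 + 0.5 * rng.random()            # stands in for Eq (14)
        g = pbest[rng.choice(archive)]          # leader drawn from archive
        r1, r2 = rng.random((n, d)), rng.random((n, d))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)  # Eq (3)
        x = np.clip(x + v, 0.0, 1.0)            # Eq (4), box-bounded
        f = np.array([objective(xi) for xi in x])
        improved = [dominates(f[i], pbest_f[i]) for i in range(n)]
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        nd = [i for i in range(n)
              if not any(dominates(pbest_f[j], pbest_f[i]) for j in range(n))]
        archive = nd[:r_max]                    # stands in for Algorithm 1
    return pbest_f[archive]                     # objective values of NA
```

Each pass through the loop mirrors lines 5-9 of Algorithm 3; the returned array plays the role of the final archive NA.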

The proposed IMOPSOCE algorithm
The proposed IMOPSOCE algorithm is presented in this section. IMOPSOCE is mainly composed of three parts. First, in IMOPSOCE, the population is split into two layers based on the levels of particles after each iteration, and different speed update modes are offered for particles in different layers. Second, in terms of the parameter setting, a random inertia weight strategy is proposed to balance the capacity for exploitation and exploration of the population. Finally, the CM indicator is used to maintain the external archive once it has reached its maximum capacity. The non-dominated solutions with a good convergence and diversity are retained in the external archive, which ensures that the non-dominated solutions in the external archive have an improved comprehensive performance. The pseudo-code of IMOPSOCE is displayed in Algorithm 3.

Experimental studies
In order to fully verify the performance of the proposed algorithm, three sets of benchmark functions are used in this section: ZDT, DTLZ, and UF. This section contrasts the proposed algorithm with five existing MOPSOs: MOPSO [15], dMOPSO [28], NMPSO [29], SMPSO [30], and MPSOD [31]. In addition, to further assess the proposed IMOPSOCE, we compare it with five classical MOEAs: NSGAIII [32], MOEAD [33], MOEAIGDNS [34], SPEAR [35], and VaEA [36]. Detailed discussions of the experimental procedure and results follow. The comprehensive performance of IMOPSOCE is verified using three different sets of benchmark functions. Specifically, five bi-objective test functions of ZDT [37], seven three-objective test functions of DTLZ [38], and seven bi-objective and three three-objective test functions of UF [39] are employed. ZDT5 is not used in this paper since it is a discrete optimization problem, and DTLZ8 and DTLZ9 are excluded because they are constrained optimization problems. Table 1 displays the relevant settings for these test problems, where N is the number of particles, M is the number of objective functions, D is the dimension of the decision variables, and FEs is the number of evaluations.

Performance indicators
In this paper, we employ two comprehensive indicators, namely the inverted generational distance (IGD) [40] and the hypervolume (HV) [41], to test the algorithms and verify the comprehensive performance of IMOPSOCE. Although each indicator alone can detect the convergence and diversity of an algorithm, both are used here to fully validate it.
The IGD is used to measure the distance between the true Pareto front and the set of Pareto optimal solutions obtained by the algorithm. A smaller IGD means that the set of non-dominated solutions is closer to the true Pareto front. It is calculated as follows:

IGD(PF, PF*) = ( Σ_{v ∈ PF*} d(v, PF) ) / |PF*|,

where PF is the Pareto front obtained by the algorithm, PF* is a set of sampling points from the true Pareto front, d(v, PF) is the minimum Euclidean distance from v to the solutions in PF, and S = |PF| is the number of non-dominated solutions. HV is obtained by measuring the hypervolume of the region enclosed by the optimal set S and the reference point in the objective space. HV can simultaneously evaluate the convergence and diversity of the algorithm. The larger the HV, the better the overall performance obtained by the algorithm. The reference point of HV is set so that f_i < R_i on every objective, where f_i denotes the i-th objective function value of a solution in S and R_i denotes the i-th objective function value of the reference point. Table 2. Parameter settings of all algorithms.
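The IGD computation described above is short enough to sketch directly: average, over the sampled points of the true front, the distance to the nearest obtained solution.

```python
import numpy as np

def igd(pf_true, pf_approx):
    """Inverted generational distance: the mean, over sampled points of
    the true Pareto front, of the Euclidean distance to the nearest
    solution in the obtained set. Smaller means the obtained front is
    both closer to and covers more of the true front."""
    pf_true = np.asarray(pf_true, dtype=float)
    pf_approx = np.asarray(pf_approx, dtype=float)
    # Pairwise distances, then the nearest obtained point per sample.
    d = np.linalg.norm(pf_true[:, None, :] - pf_approx[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

Because the average runs over the *true* front samples, a front that converges well but misses whole regions of PF* is still penalized, which is why IGD captures diversity as well as convergence.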

In this paper, five representative MOPSOs and five classical MOEAs are compared with IMOPSOCE. These algorithms have demonstrated an excellent performance in solving many MOPs. To make the different algorithms comparable and to conduct a fair comparison, the parameter settings of the compared algorithms follow the original references, as indicated in Table 2. For the compared MOPSOs, the inertia weight ω and the flight parameters c1 and c2 are used to update the velocity; p_c is the crossover probability, and p_m is the mutation probability. The original code of the compared algorithms is provided by PlatEMO [42], and each algorithm is independently run 30 times on each test function.

Experimental results and data analysis
This part compares the characteristics of the improved algorithm, selects the 22 test functions of the ZDT, DTLZ, and UF series to evaluate the performance of the proposed IMOPSOCE, and analyzes the experimental results in order to verify the comprehensive performance of IMOPSOCE. The means and standard deviations of the IGD and HV indicators obtained by the five existing MOPSOs and IMOPSOCE are reported in Tables 3 and 4. Comparing the experimental results reveals that the proposed IMOPSOCE performs better overall than the other five algorithms. To visually show the optimization results, the optimal values in the tables are bolded. Table 3 shows the means and standard deviations of the IGD indicator of the five compared algorithms and IMOPSOCE on the 22 test functions. On the test functions ZDT1, ZDT2, ZDT3, DTLZ6, UF4, UF5, UF8, UF9, and UF10, it is immediately obvious that IMOPSOCE performs noticeably better than the five existing MOPSOs. IMOPSOCE performs well, obtaining the best IGD on 13 of the 22 test functions; its IGD is better than that of the other five algorithms, which verifies its good performance. Among the remaining compared algorithms, NMPSO and SMPSO perform better. Table 3 also shows that MOPSO and dMOPSO perform poorly on the 22 test functions: neither obtains the best IGD on any of them. According to the Wilcoxon rank sum test results presented in the second-last row, IMOPSOCE performs significantly better than MOPSO, dMOPSO, NMPSO, SMPSO, and MPSOD on 18, 17, 12, 16, and 19 out of 22 comparisons, respectively, while it performs worse on 3, 4, 7, 5, and 3 comparisons, respectively. Besides, IMOPSOCE obtains results similar to MOPSO, dMOPSO, NMPSO, SMPSO, and MPSOD on 1, 1, 3, 1, and 0 test functions, respectively. All the statistical results demonstrate the great efficacy of IMOPSOCE in solving MOPs. On ZDT1 and UF10, IMOPSOCE outperforms NMPSO by factors of 5 and 4, respectively.
Only the proposed IMOPSOCE performs well on nearly all test functions, despite other compared algorithms performing well on some test functions. For example, NMPSO and SMPSO perform admirably on the DTLZ test functions, but poorly on the ZDT. Additionally, NMPSO and SMPSO perform well on the ZDT6 test function, but much worse on UF10 test function. IMOPSOCE performs significantly better than the other compared algorithms on UF1-UF10. IMOPSOCE shows its superior performance compared with the other five algorithms. This is mostly attributable to the adoption of the random inertia weight strategy, which effectively balances the exploitation and exploration capacities of particles to enable the swarm to discover better non-dominated solutions. The aforementioned evidence demonstrates that, when compared to the existing MOPSOs, the proposed IMOPSOCE performs the best overall.
In addition to the IGD indicator, the HV indicator is employed to further confirm the effectiveness of IMOPSOCE. The comparison results using the HV indicator are similar to those using the IGD indicator, as shown in Table 4. MOPSO, dMOPSO, and MPSOD perform more poorly. It is noteworthy that the proposed IMOPSOCE has only a very small gap from the optimal result on the test functions ZDT6, DTLZ2, DTLZ5, UF1, and UF4. On the test function UF6, IMOPSOCE has a slightly worse HV value but the best overall performance. As a result, combined with the information in Table 4, we can conclude that IMOPSOCE performs the best on the 22 test functions in terms of the HV indicator and is very competitive compared with the existing MOPSO algorithms. The proposed IMOPSOCE achieves a better convergence and diversity when solving MOPs. Figure 4 displays box plots of the IGD indicator for the six algorithms. It is worth noting that the lower the mean IGD value and the shorter the box in the figure, the better and more consistent the IGD values obtained by the algorithm. When the different algorithms are run independently 30 times on each test function, the box plots of the IMOPSOCE algorithm and the compared algorithms with respect to the IGD indicator are as shown in Figure 4 (1, 2, 3, 4, 5, and 6 on the horizontal coordinate represent MOPSO, dMOPSO, NMPSO, SMPSO, MPSOD, and IMOPSOCE, respectively, and the vertical coordinate represents the IGD value of each algorithm). Figure 4 records the data fluctuations of the six algorithms on ZDT1-ZDT4, ZDT6, DTLZ1-DTLZ7, and UF1-UF10. It is evident that IMOPSOCE can obtain better solutions than the other MOPSOs, in agreement with the findings of Table 3. Meanwhile, the proposed IMOPSOCE shows significant improvement on the test functions ZDT1, ZDT2, ZDT3, ZDT4, DTLZ6, UF2, UF5, UF6, UF7, and UF9.
On all but the test functions DTLZ1, DTLZ2, and UF8, IMOPSOCE, NMPSO, and SMPSO perform well, while dMOPSO and MPSOD perform relatively poorly. These figures further demonstrate the superior results of IMOPSOCE and show that it outperforms the other five algorithms in terms of overall performance on the 22 test functions. It is clear from the above analysis that, compared with the existing MOPSOs, IMOPSOCE has certain advantages and is effective in improving the performance of the algorithm.
IMOPSOCE better balances the exploitation and exploration capabilities, leading to a better convergence of the algorithm. The proposed IMOPSOCE has a good convergence because its strategies successfully prevent the population from falling into local optima, allowing it to find solutions with better convergence. Figures 6 and 7 plot the approximate Pareto fronts obtained on the bi-objective test functions ZDT1 and ZDT3 to visualize the optimization results. Figure 6 shows that NMPSO and IMOPSOCE can cover the true Pareto front very well, mainly because of their good convergence, while the other four compared algorithms have a poor coverage of the true Pareto front. Meanwhile, the approximate Pareto fronts of the other five compared algorithms are not as uniformly distributed as that obtained by IMOPSOCE. Figure 7 shows that on ZDT3, except for IMOPSOCE, which can cover the true Pareto front, the other five compared algorithms have difficulty approaching the true Pareto front. Additionally, MOPSO, SMPSO, and MPSOD cannot converge to the true Pareto front of ZDT3, which has a disconnected PF; this is consistent with the conclusion in Table 3. NMPSO can only find a small number of Pareto optimal solutions on the Pareto front of the ZDT3 function. This intuitively shows that, compared with the other five algorithms, IMOPSOCE has the best convergence. This is mainly because IMOPSOCE uses an external archive maintenance strategy based on the inflection point distance and the distribution coefficient to guarantee that the non-dominated solutions in the external archive have a better comprehensive performance. IMOPSOCE also takes advantage of the hierarchical idea to provide different flight modes for particles in different layers, which improves the search capability of the algorithm and gives it a better convergence.
Tables 5 and 6 show the means and standard deviations of the IGD and HV indicators over 30 independent runs on the ZDT, DTLZ, and UF functions for NSGAIII, MOEAD, MOEAIGDNS, SPEAR, VaEA, and the proposed IMOPSOCE. The experiments again adopt the Wilcoxon rank sum test to obtain statistically sound conclusions at the 0.05 significance level, and the optimal values in each table are bolded. Combining the data in Tables 5 and 6, the proposed IMOPSOCE still performs better than the five classical MOEAs on these test functions. Table 5 shows the means and standard deviations of the IGD indicator of the five classical algorithms and IMOPSOCE on the 22 test functions. IMOPSOCE performs best on ZDT1-ZDT4, ZDT6, DTLZ6, UF3-UF4, and UF7-UF10. According to the Wilcoxon rank sum test results shown in the second-to-last row of Table 5, IMOPSOCE has significantly better IGD values than NSGAIII, MOEAD, MOEAIGDNS, SPEAR, and VaEA on 12, 16, 12, 12, and 11 test functions, respectively. IMOPSOCE is the most competitive, and although its advantage over these five MOEAs is not outstanding on every function, it can be substantial: for instance, IMOPSOCE outperforms VaEA on ZDT1 by a factor of 10 and SPEAR on DTLZ6 by a factor of 17. IMOPSOCE outperforms the compared MOEAs and exhibits the best performance on the ZDT and UF test functions. On the three-objective DTLZ test functions, however, its somewhat higher IGD values make its performance less than optimal, indicating that the classical algorithms are better suited to this three-objective series of problems; MOEAD in particular demonstrates its superiority there. The comparison results also demonstrate that the proposed IMOPSOCE has good overall performance.
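As a reminder of how the IGD indicator reported in Table 5 is defined, a minimal sketch follows; the reference and obtained fronts here are hypothetical toy data, not the paper's results:

```python
import numpy as np

def igd(reference_front, approx_front):
    """Inverted Generational Distance: the mean Euclidean distance from each
    point of the reference (true) Pareto front to its nearest obtained
    solution. Lower values indicate better convergence and coverage."""
    ref = np.asarray(reference_front, float)
    app = np.asarray(approx_front, float)
    # pairwise distance matrix of shape (|ref|, |app|) via broadcasting
    d = np.linalg.norm(ref[:, None, :] - app[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Hypothetical bi-objective fronts (illustrative only)
true_front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
obtained = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(igd(true_front, obtained))  # 0.0 for a perfect match
```

Because every reference point must find a nearby obtained solution, IGD penalizes both poor convergence and gaps in coverage, which is why it is used as the primary comparison metric here.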
IMOPSOCE achieves 12 of the statistically best results on the 22 test functions, according to the last row of Table 5. In summary, the proposed IMOPSOCE ranks first among the compared algorithms because its external archive maintenance strategy, built on the comprehensive indicator combining the inflection point distance and distribution coefficient, attends to the convergence and diversity of the non-dominated solutions simultaneously; thus, the non-dominated solutions in the external archive have better comprehensive performance.
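For readers unfamiliar with the significance test used above, the sketch below implements the two-sided Wilcoxon rank-sum test via the usual normal approximation; the two 30-run samples are synthetic stand-ins for per-function indicator values, and the helper name `rank_sum_test` is our own:

```python
import numpy as np
from math import erf, sqrt

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test using the normal approximation
    (a sketch without the tie-variance correction). Returns (z, p)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    combined = np.concatenate([a, b])
    order = combined.argsort()
    ranks = np.empty(n1 + n2)
    ranks[order] = np.arange(1, n1 + n2 + 1)
    for v in np.unique(combined):            # average the ranks of tied values
        tied = combined == v
        ranks[tied] = ranks[tied].mean()
    w = ranks[:n1].sum()                     # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0            # mean of w under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p

# Synthetic 30-run indicator samples (hypothetical; lower is better)
rng = np.random.default_rng(0)
alg_a = rng.normal(0.01, 0.002, 30)
alg_b = rng.normal(0.05, 0.002, 30)
z, p = rank_sum_test(alg_a, alg_b)
print(p < 0.05)  # True: the two samples differ significantly
```

A p-value below 0.05, as in this toy example, is the criterion behind the "significantly better/worse" counts reported in the tables.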

A similar conclusion, that IMOPSOCE has superior performance, can also be drawn from Table 6. The proposed IMOPSOCE achieves more than half of the best HV values on the 22 test functions: it obtains the best results on ZDT1-ZDT4, ZDT6, DTLZ6, UF2-UF4, and UF7-UF10. SPEAR obtains no optimal HV values, NSGAIII and MOEAIGDNS obtain 1 each, MOEAD and VaEA obtain 3 each, and the proposed IMOPSOCE obtains 13. SPEAR thus performs relatively poorly compared to the other algorithms on the 22 test functions. IMOPSOCE significantly outperforms NSGAIII, MOEAD, MOEAIGDNS, SPEAR, and VaEA on 13, 14, 13, 14, and 13 test functions, respectively. These findings show that IMOPSOCE performs well on both the bi-objective and three-objective test functions. In general, compared with the existing MOEAs, the proposed IMOPSOCE achieves the best overall performance, especially on the bi-objective test functions. Owing to the hierarchical update strategy of particle velocity, particles at different layers are given different flight modes after each update, with particles at the first layer continuing their original movement trend; this effectively improves the convergence of the algorithm. To illustrate the distribution of the results, Figure 8 presents box plots of the IGD indicator for the six algorithms over 30 independent runs on each test function (on the horizontal axis, 1-6 represent NSGAIII, MOEAD, MOEAIGDNS, SPEAR, VaEA, and IMOPSOCE, respectively; the vertical axis shows the IGD value of each algorithm). The data fluctuations of the six algorithms on ZDT1-ZDT4, ZDT6, DTLZ1-DTLZ7, and UF1-UF10 are shown in Figure 8.
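The HV indicator in Table 6 measures the objective-space region dominated by a front relative to a reference point. For the bi-objective case it reduces to an area and can be sketched as follows; the points and reference point are illustrative, and the function assumes a minimization problem with a mutually non-dominated input:

```python
import numpy as np

def hv_2d(front, ref):
    """Hypervolume for a bi-objective minimization front: the area dominated
    by the front and bounded above by the reference point `ref`. Larger is
    better. Assumes `front` contains mutually non-dominated points."""
    pts = np.asarray(front, float)
    pts = pts[np.argsort(pts[:, 0])]          # sort by f1 ascending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # slab of newly dominated area
        prev_f2 = f2
    return hv

# Hypothetical example: two trade-off points against reference point (1, 1)
print(hv_2d([[0.2, 0.8], [0.8, 0.2]], (1.0, 1.0)))  # ~0.28, the union area
```

Unlike IGD, HV needs no true Pareto front, only a reference point, which makes it a complementary check on both convergence and spread.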
The above results demonstrate unequivocally that, compared with the other MOEAs, IMOPSOCE achieves the best Pareto optimal solutions, matching the results in Table 5. On the test functions ZDT1, ZDT2, ZDT3, ZDT6, UF7, and UF9, IMOPSOCE outperforms the other five algorithms by a wide margin. With the exception of DTLZ4 and DTLZ6, there is no discernible difference between NSGAIII, MOEAD, MOEAIGDNS, SPEAR, and VaEA on the DTLZ test functions. Overall, IMOPSOCE performs better than the other five algorithms, and these experimental results show that it is highly competitive in solving MOPs compared with the existing MOEAs.
The IGD convergence trajectories of the six algorithms on the test functions ZDT3, ZDT4, and UF2 are plotted in Figure 9, with IGD values recorded 10 times at fixed intervals for each algorithm. The figure shows that IMOPSOCE converges more quickly than the other five algorithms because it utilizes an external archive maintenance strategy that combines the inflection point distance and distribution coefficient, so the non-dominated solutions in the external archive have the best performance. Figures 10 and 11 plot the Pareto fronts of NSGAIII, MOEAD, MOEAIGDNS, SPEAR, VaEA, and IMOPSOCE on the bi-objective test functions ZDT1 and ZDT2. In Figure 10, only IMOPSOCE fits the true Pareto front, whereas the other five compared algorithms barely approach it; the primary reason is that the random inertia weight strategy makes particle searches more effective. Figure 11 shows that none of the five MOEAs cover the true Pareto front of ZDT2, while IMOPSOCE covers it almost completely. These two figures show that, compared with the other five algorithms, only IMOPSOCE distributes its solutions evenly on the true Pareto front. The main reason is that its external archive maintenance strategy, by combining the inflection point distance and distribution coefficient, measures the performance of non-dominated solutions from the two perspectives of convergence and diversity and retains solutions with good comprehensive performance. This good diversity and convergence allows IMOPSOCE to better approximate the true Pareto front on the majority of test functions.

To obtain more comprehensive comparative statistics and verify the obtained results, we conducted a Friedman rank test [43,44] on the experimental results to compare the performance of the algorithms.
Table 7 presents the Friedman rank test rankings of the IGD values for all algorithms on the ZDT, DTLZ, and UF benchmark suites, with the highest ranking shown in bold. As shown in Table 7, IMOPSOCE ranks first on both the ZDT and UF benchmark suites. IMOPSOCE and VaEA obtain the best overall rankings, followed in turn by MOEAIGDNS, NMPSO, SPEAR, NSGAIII, MOEAD, dMOPSO, SMPSO, MPSOD, and MOPSO. Table 8 presents the Friedman rank test rankings of the HV values for all algorithms on the same benchmark suites. Similar to Table 7, IMOPSOCE achieves the first ranking on the ZDT and UF benchmark suites, and it is ranked second overall, with only a slight gap between it and the first-ranked NMPSO. Combining Tables 7 and 8, the proposed IMOPSOCE obtains the best rankings on the ZDT and UF benchmark suites for both IGD and HV. The experimental results show that IMOPSOCE provides better overall performance than the compared algorithms and has obvious advantages.
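The per-suite rankings in Tables 7 and 8 are built from Friedman-style average ranks. A minimal sketch of that ranking step, using hypothetical IGD means and ignoring ties, is:

```python
import numpy as np

def friedman_ranks(scores):
    """Average Friedman ranks for k algorithms over n problems.
    scores[i, j] is the indicator value (e.g. mean IGD, lower is better)
    of algorithm j on problem i. Ties are not handled in this sketch."""
    scores = np.asarray(scores, float)
    n, k = scores.shape
    ranks = np.empty_like(scores)
    for i in range(n):
        ranks[i, scores[i].argsort()] = np.arange(1, k + 1)  # best gets rank 1
    return ranks.mean(axis=0)

# Hypothetical mean IGD values for 3 algorithms on 4 problems
igd_means = [[0.01, 0.03, 0.02],
             [0.02, 0.05, 0.04],
             [0.01, 0.02, 0.03],
             [0.03, 0.06, 0.05]]
print(friedman_ranks(igd_means))  # algorithm 0 ranks best on every problem
```

The algorithm with the lowest average rank is reported first in the tables; the associated chi-square statistic then decides whether the rank differences are significant.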

Conclusions
In this paper, IMOPSOCE, a novel MOPSO algorithm with a comprehensive indicator and a layered particle swarm optimizer, has been proposed for solving MOPs. First, the CM indicator, which is based on the inflection point distance and distribution coefficient, is used to measure the performance of the non-dominated solutions in the external archive: solutions with superior comprehensive performance are retained, while inferior ones are eliminated. Second, the random inertia weight strategy is used to balance the exploitation and exploration capacities of the particles. Finally, after each update, the particles in the population are divided into two layers, and different flight modes are offered to particles at different layers to improve the problem-solving efficiency and optimization capability of the algorithm. Twenty-two test functions are used to validate the proposed IMOPSOCE. The experimental results demonstrate that, compared with five MOPSOs and five MOEAs, IMOPSOCE better maintains the diversity of the Pareto optimal solutions and makes them converge to the true Pareto front.
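As a rough illustration of how the second and third strategies might interact, the sketch below combines a random inertia weight with layer-dependent flight modes. The weight range U(0.4, 0.9), the coefficients c1 = c2 = 1.5, and the specific layer rule are our illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

rng = np.random.default_rng(1)

def update_velocity(v, x, pbest, gbest, layer, c1=1.5, c2=1.5):
    """Illustrative layered velocity update (an assumed form, not the paper's
    exact formulas). A random inertia weight balances exploration and
    exploitation; first-layer particles continue their original movement
    trend, while second-layer particles take the full PSO update."""
    w = rng.uniform(0.4, 0.9)                 # random inertia weight
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    if layer == 1:
        return w * v                          # keep the original trend
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Hypothetical 2-dimensional particle
v, x = np.array([0.1, -0.2]), np.array([0.5, 0.5])
pbest, gbest = np.array([0.4, 0.6]), np.array([0.3, 0.7])
print(update_velocity(v, x, pbest, gbest, layer=2))
```

Redrawing the inertia weight at every update, rather than decaying it on a fixed schedule, is what prevents the swarm from committing too early to either a global or a local search.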
Several questions can be studied in future work. First, it would be interesting to design a better hierarchical structure to further balance the exploration and exploitation capabilities. Second, learning strategies are highly efficient for improving MOPSO performance, and various learning strategies could be considered to obtain better solutions. Finally, the proposed algorithm could be further refined and applied to specific real-world problems, which is a new topic that merits more research.

Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.