A Novel Angular-Guided Particle Swarm Optimizer for Many-Objective Optimization Problems

Most multiobjective particle swarm optimizers (MOPSOs) face the challenges of keeping diversity and achieving convergence when tackling many-objective optimization problems (MaOPs), as they usually use the nondominated sorting method or a decomposition-based method to select the local or global best particles, which is not so effective in a high-dimensional objective space. To better solve MaOPs, this paper presents a novel angular-guided particle swarm optimizer (called AGPSO). A novel velocity update strategy is designed in AGPSO, which aims to enhance the search intensity around the particles selected based on their angular distances. Using an external archive, the local best particles are selected from the surrounding particles with the best convergence, while the global best particles are chosen from the top 20% of particles with the best convergence in the entire particle swarm. Moreover, an angular-guided archive update strategy is proposed in AGPSO, which maintains a consistent population with balanceable convergence and diversity. To evaluate the performance of AGPSO, the WFG and MaF test suites with 5 to 10 objectives are adopted. The experimental results indicate that AGPSO shows superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four competitive evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on most of the test problems used.


Introduction
Many real-world applications often face the problem of optimizing m (often conflicting) objectives [1]. This kind of engineering problem is called a multiobjective optimization problem (MOP) when m is 2 or 3, or a many-objective optimization problem (MaOP) when m > 3. A generalized MOP or MaOP can be modeled as follows:

minimize F(x) = (f_1(x), f_2(x), . . . , f_m(x)), subject to x ∈ Ω,  (1)

where x = (x_1, x_2, . . . , x_D) is a decision vector in the D-dimensional search space Ω and F(x) defines m objective functions. Assuming that two solutions x and y (x ≠ y) locate in the search space Ω, if f_i(x) ≤ f_i(y) for all i ∈ {1, 2, . . . , m} and f_j(x) < f_j(y) for at least one j, x is said to dominate y. Then, if no solution can dominate x, x is called a nondominated solution.
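The Pareto dominance relation above can be expressed directly in code (a minimal sketch for minimization):

```python
def dominates(fx, fy):
    """Pareto dominance for minimization: fx dominates fy if it is no worse
    in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))
```

For example, (1, 2) dominates (2, 3), while (1, 3) and (2, 2) are mutually nondominated.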
For MOPs or MaOPs, the aim is to search for a set of nondominated solutions called the Pareto-optimal set (PS), whose mapping in the objective space is called the Pareto-optimal front (PF).
During the past two decades, a number of nature-inspired computational methodologies have been proposed to solve various kinds of MOPs and MaOPs, such as multiobjective evolutionary algorithms (MOEAs) [2,3], multiobjective particle swarm optimizers (MOPSOs) [4][5][6], and multiobjective ant colony optimizers [7]. In particular, MOPSOs have become one of the most outstanding population-based approaches due to their easy implementation and strong search ability [8], and they can also be applied to other kinds of optimization problems, such as multimodal optimization problems [9] and standard image segmentation [10], as well as some real-world engineering problems [11,12].
Currently, MOPSOs have been validated to be very effective and efficient when solving MOPs, as they can search a set of approximate solutions with balanceable diversity and convergence to cover the entire PFs. Based on the selection of local or global best particles, which can effectively guide the flight of particles in the search space, most MOPSOs can be classified into three main categories. The first category embeds the Pareto-based sorting approach [2] into MOPSO (called Pareto-based MOPSOs), such as OMOPSO [13], SMPSO [14], CMPSO [15], AMOPSO [5], and AGMOPSO [16]. The second category decomposes MOPs into a set of scalar subproblems and optimizes all the subproblems simultaneously using a collaborative search process (called decomposition-based MOPSOs), including dMOPSO [17], D2MOPSO [18], and MMOPSO [19]. The third category employs performance indicators (e.g., HV [20] and R2 [21]) to guide the search process in MOPSOs (called indicator-based MOPSOs), such as IBPSO-LS [22], S-MOPSO [23], NMPSO [24], R2HMOPSO [25], and R2-MOPSO [26].
Although the above MOPSOs are effective for tackling MOPs with the objective number m ≤ 3, their performance significantly deteriorates on MaOPs with m > 3, mainly due to the challenges brought by "the curse of dimensionality" in MaOPs [27,28]. Generally, there are three main challenges for MOPSOs in dealing with MaOPs. The first challenge is to provide sufficient selection pressure to approach the true PFs of MaOPs, i.e., the challenge of maintaining convergence. With the increase of objectives in MaOPs, it becomes very difficult for MOPSOs to pay the same attention to optimizing all the objectives, which may lead to an imbalanced evolution such that solutions are very good at some objectives but perform poorly on the others [29]. Moreover, most solutions of MaOPs are often nondominated with each other at each generation. Therefore, MOPSOs based on Pareto dominance may lose the selection pressure on addressing MaOPs and show poor convergence performance [24]. The second challenge is to search a set of solutions that are evenly distributed along the whole PF, i.e., the challenge of diversity maintenance. Since the objective space enlarges rapidly with the increase of dimensionality in MaOPs, a large number of solutions is required to approximate the whole PF. Some well-known diversity maintenance methods may not work well on MaOPs, e.g., the crowding distance [2] may prefer dominance resistant solutions [30], the decomposition approaches require a larger set of well-distributed weight vectors [3], and the performance indicators require an extremely high computational cost [31]. Thus, diversity maintenance in MOPSOs becomes less effective with the increase of objectives in MaOPs [32]. The third challenge is to design effective velocity update strategies for MOPSOs to guide the particle search in the high-dimensional space [33].
There have been a number of studies presenting improved velocity update strategies for MOPs [34]. For example, in SMPSO [14], the velocity of the particles is constrained in order to avoid cases in which the velocity becomes too high. In AgMOPSO [35], a novel archive-guided velocity update method is used to select the global best and personal best particles from the external archive, which is more effective in guiding the swarm. In MaOPSO/2s-pccs [36], a leader group is selected from the nondominated solutions with better diversity in the external archive according to the parallel cell coordinates, and its members serve as the global best solutions to balance the exploitation and exploration of the population. In NMPSO [24], another search direction, from the personal best particle pointing to the global best one, is added to introduce more disturbance in the velocity update. However, the abovementioned velocity update strategies are not so effective for MaOPs. With the enlargement of the objective space, it becomes more difficult to select suitable personal or global best particles for velocity update.
To overcome the abovementioned challenges, this paper proposes a novel angular-guided MOPSO (called AGPSO). An angular-guided archive update strategy is designed in AGPSO to well balance convergence and diversity in the external archive. Moreover, a density-based velocity update strategy is proposed to effectively guide the PSO-based search. When compared to existing MOPSOs and some competitive MOEAs for MaOPs, the experimental results on the WFG and MaF test suites with 5 to 10 objectives show the superiority of AGPSO in most cases.
To summarize, the main contributions of this paper are clarified as follows.
(1) An angular-guided archive update strategy is proposed in this paper. As the local-best and global-best particles are both selected from the external archive, the elitist solutions with balanceable convergence and diversity should be preserved to effectively guide the search toward the true PF. In this strategy, N evenly distributed particles (N is the swarm size) are first selected by the angular distances to maintain the diversity in different search directions, which helps to alleviate the second challenge above (diversity maintenance). After that, the remaining particles are each associated with the closest selected particle based on the angular distance, which forms N groups of similar particles. Finally, the particle with the best convergence in each group is saved in the external archive, which helps to alleviate the first challenge above (maintaining convergence).
(2) A density-based velocity update strategy is designed in this paper, which can search high-quality particles with fast convergence and good distribution and helps to alleviate the third challenge above in MOPSOs. The local best and global best particles are selected according to the densities of the particles, for guiding the PSO-based search. The sparse particles are guided by the local best particles to encourage exploitation in their local regions, as their surrounding particles are very few, while the crowded particles are disturbed by the global best particles to put more attention on convergence. In this way, the proposed velocity update strategy is more suitable for solving MaOPs, as experimentally validated in Section 4.5.
The rest of this paper is organized as follows. Section 2 introduces the background of the particle swarm optimizer and reviews some current MOPSOs and MOEAs for MaOPs. Section 3 presents the details of the proposed AGPSO algorithm. Section 4 provides the experimental studies, and the final section concludes this paper.

2.1. Particle Swarm Optimizer.
The particle swarm optimizer is a simple yet efficient swarm intelligence algorithm, inspired by the social behaviors of individuals in flocks of birds and schools of fish, as proposed by Kennedy and Eberhart [6] in 1995. Generally, a particle swarm S is stochastically initialized with N particles according to the corresponding search range of the problem (N is the swarm size). Each particle x_i ∈ S is a candidate solution to the problem, characterized by the velocity v_i(t) and the position x_i(t) at the tth iteration. Moreover, each particle x_i records its historical position information in S, i.e., the personal best position of x_i (pbest_i) and the global best position (gbest_i), which are used to guide the particle to fly to the next position x_i(t + 1). Then, the velocity and position of each particle are iteratively updated as follows:

v_i(t + 1) = w · v_i(t) + c_1 · r_1 · (pbest_i − x_i(t)) + c_2 · r_2 · (gbest_i − x_i(t)),  (2)

x_i(t + 1) = x_i(t) + v_i(t + 1),  (3)

where w is an inertia weight applied to the previous velocity, r_1 and r_2 are two random numbers uniformly generated in [0, 1], and c_1 and c_2 are the two learning weight factors for the personal and global best particles [14].
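The canonical update rules can be sketched as follows (a minimal sketch; the parameter defaults are illustrative, not the paper's settings):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0):
    """One canonical PSO step: velocity update followed by position update,
    with fresh uniform random numbers r1, r2 per dimension."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

Note that when a particle sits exactly at both pbest and gbest with zero velocity, it stays put, which is one motivation for the velocity constriction used in SMPSO.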

2.2. Some Current MOPSOs and MOEAs for MaOPs.
Recently, a number of nature-inspired optimization algorithms have been proposed for solving MaOPs, such as MOEAs and MOPSOs. In particular, many MOEAs have been proposed, which can be classified into three main kinds: Pareto-based MOEAs, indicator-based MOEAs, and decomposition-based MOEAs. A lot of approaches have been proposed to enhance their performance on MaOPs. For example, the Pareto dominance relation was modified to enhance the convergence pressure for Pareto-based MOEAs, such as ε-dominance [30,37], θ-dominance, a strengthened dominance relation [38], and a special dominance relation [39]. New association mechanisms for solutions and weight vectors were designed for decomposition-based MOEAs, such as the use of reference vectors in NSGA-II [40], a dynamical decomposition in DDEA [41], and a self-organizing map-based weight vector design in MOEA/D-SOM [42]. Effective and efficient performance indicators were presented for indicator-based MOEAs, such as a unary diversity indicator based on reference vectors [43], an efficient indicator combining the sum of objectives and shift-based density estimation called I_SDE+ [44], and an enhanced inverted generational distance (IGD) indicator [45].
In contrast to the abovementioned MOEAs, only a few MOPSOs have been presented to solve MaOPs. The reason may lie in the three challenges faced by MOPSOs as introduced in Section 1. The main difference between MOPSOs and MOEAs is their evolutionary search, e.g., most MOEAs adopt differential evolution or simulated binary crossover, while most MOPSOs use the flight of particles to run the search. The environmental selection in the external archive of MOPSOs and MOEAs is actually similar. Inspired by the environmental selection in some MOEAs, MOPSOs can be easily extended to solve MaOPs. Recently, some competitive MOPSOs have been proposed to better solve MaOPs. For example, in pccsAMOPSO [5], a parallel cell coordinate system was employed to update the external archive for maintaining the diversity, which was used to select the global best and personal best particles. In its improved version MaOPSO/2s-pccs [36], a two-stage strategy was further presented to emphasize convergence and diversity, respectively, by using a single-objective optimizer and a many-objective optimizer. In NMPSO [24], a balanceable fitness estimation method was proposed by summing differently weighted convergence and diversity factors, which aims to offer sufficient selection pressure in the search process. However, this method is very sensitive to the used parameters and requires a high computational cost. In CPSO [29], a coevolutionary PSO with a bottleneck objective learning strategy was designed for solving MaOPs. Multiple swarms coevolve in a distributed fashion to maintain diversity for approximating the entire PFs, while the bottleneck objective learning strategy is used to accelerate convergence for all objectives.
In MaPSO [46], a novel MOPSO based on the acute angle was proposed, in which the leader of each particle is selected from its historical particles by using scalar projections, and each particle owns K historical particles (K was set to 3 in its experiments). Moreover, the environmental selection in MaPSO is run based on the acute angle between each pair of particles. Although these MOPSOs are effective for solving MaOPs, there is still room for improvement in the design of MOPSOs. Thus, this paper proposes a novel angular-guided MOPSO with an angular-guided archive update strategy and a density-based velocity update strategy to alleviate the three challenges mentioned in Section 1.

The Proposed AGPSO Algorithm
In this section, the details of the proposed algorithm are provided. First, the two main strategies used in AGPSO, i.e., the density-based velocity update strategy and the angular-guided archive update strategy, are respectively introduced.
The density-based velocity update strategy considers the density around each particle, which is used to determine whether the local best particle or the global best particle is used to guide the swarm search. In this strategy, the local best particle is selected based on the local angular information, aiming to encourage exploitation in the local region around the current particle, while the global best particle is selected by the convergence performance, which is used to perform exploration in the whole search space. In order to provide elite particles for leading the PSO-based search, the angular-guided archive update strategy is designed to provide an angular search direction for each particle, which is more effective for solving MaOPs. At last, the complete framework of AGPSO is also shown in order to clarify its implementation as well as other components.

3.1. A Density-Based Velocity Update Strategy. As introduced in Section 2.1, the original velocity update strategy uses two best positions of a particle, i.e., the personal-best (pbest) and global-best (gbest) particles, as defined in equation (2), which are used to guide the swarm search.
Theoretically, the quality of generated particles depends on the guiding information from pbest and gbest in the velocity update. However, the traditional velocity update strategy faces great challenges in tackling MaOPs, as the selection of effective personal-best and global-best particles is difficult. Moreover, the two velocity components used in equation (2) may lead to too much disturbance for the swarm search, as most of the particles are usually sparse in the high-dimensional objective space. Thus, the quality of particles generated by the traditional velocity formula may not be promising, as the large disturbance in the velocity update lowers the search intensity in the local region around the current particle. Some experiments are given in Section 4.5 to study the effectiveness of different velocity update strategies. In this paper, a density-based velocity update strategy is proposed in equation (4), which can better control the search intensity in the local region of particles:

v_i(t + 1) = w · v_i(t) + c_1 · r_1 · (lbest_i − x_i(t)), if D_{x_i} ≥ med_SDE,
v_i(t + 1) = w · v_i(t) + c_2 · r_2 · (gbest_i − x_i(t)), if D_{x_i} < med_SDE,  (4)

where t is the iteration number; v_i(t) and v_i(t + 1) are the velocities of particle x_i at the tth and (t + 1)th iterations, respectively; w is the inertia weight; c_1 and c_2 are two learning factors; r_1 and r_2 are two uniformly distributed random numbers in [0, 1]; lbest_i and gbest_i are the positions of the local-best and global-best particles for x_i, i = (1, 2, . . . , N), respectively; D_x is the shift-based density estimation (SDE) of particle x as defined in [47]; and med_SDE is the median value of all D_x in the swarm. In this way, the sparse particles with D_x ≥ med_SDE are guided by the local best particles to encourage exploitation in their local regions, as their surrounding particles are very few, while the crowded particles with D_x < med_SDE are disturbed by the global best particles to put more attention on convergence. Please note that lbest_i and gbest_i of x_i(t) are selected from the external archive A as introduced below.
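A minimal sketch of the density-based rule, assuming the SDE density of a particle is taken as the distance to its nearest shift-based neighbor in the spirit of [47] (function names and the single random number per update are illustrative):

```python
import math
import random

def sde_distance(p, swarm_objs):
    """Shift-based density estimation (SDE) sketch: distance from objective
    vector p to its nearest shifted neighbor; larger means sparser."""
    dists = []
    for q in swarm_objs:
        if q is p:
            continue
        # shift q toward p: only the components worse than p's contribute
        dists.append(math.sqrt(sum(max(0.0, qk - pk) ** 2
                                   for pk, qk in zip(p, q))))
    return min(dists)

def agpso_velocity(x, v, lbest, gbest, d_x, med_sde, w=0.3, c1=2.0, c2=2.0):
    """Density-based velocity update: sparse particles (d_x >= med_sde)
    follow lbest for local exploitation; crowded ones follow gbest."""
    guide, c = (lbest, c1) if d_x >= med_sde else (gbest, c2)
    r = random.random()
    return [w * vi + c * r * (g - xi) for xi, vi, g in zip(x, v, guide)]
```

A particle already sitting at its guide keeps only the inertia term, so the disturbance is indeed smaller than with the two-component rule of equation (2).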
For the selection of gbest_i of particle x_i, the focus is on the particles with better convergence in the archive. Meanwhile, a certain perturbation also needs to be ensured; thus, gbest_i is randomly selected from the top 20% particles in the external archive with the best convergence performance values. For each particle x, the convergence performance value is calculated using the following formulation:

Con(x) = ∑_{k=1}^{m} f′_k(x),  (5)

where m is the number of objectives and f′_k(x) returns the kth normalized objective value of x, which is obtained by the normalization procedure as follows:

f′_k(x) = (f_k(x) − z*_k) / (z^nadir_k − z*_k),  (6)

where z*_k and z^nadir_k are the kth objective values of the ideal and nadir points, respectively, k = 1, 2, . . . , m.
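The convergence measure and the normalization step can be sketched as follows (a minimal sketch, assuming the convergence value is the sum of the normalized objectives, as the surrounding text indicates; `z_star` and `z_nadir` are the ideal and nadir points):

```python
def normalize(obj, z_star, z_nadir):
    """Map each objective value into [0, 1] using the ideal point z* and
    the nadir point z_nadir (per-objective min-max normalization)."""
    return [(f - lo) / (hi - lo) for f, lo, hi in zip(obj, z_star, z_nadir)]

def convergence(obj, z_star, z_nadir):
    """Convergence value Con(x): sum of the normalized objectives.
    Smaller means better convergence (closer to the ideal point)."""
    return sum(normalize(obj, z_star, z_nadir))
```

For instance, with z* = (1, 1) and z_nadir = (5, 9), the objective vector (3, 5) normalizes to (0.5, 0.5) and has convergence value 1.0.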
For the selection of lbest_i, the current particle x_i in the particle swarm (i = 1, . . . , N, where N is the swarm size) is first associated with the closest particle y in the external archive A, by comparing the angular distance of x_i to each particle in A; this particle y is then used to find lbest_i in A. Here, the angular distance [41] between a particle x and another particle y, termed AD(x, y), is computed by

AD(x, y) = arccos( (λ(x) · λ(y)) / (‖λ(x)‖ · ‖λ(y)‖) ),  (7)

where λ(x) = (f′_1(x), f′_2(x), . . . , f′_m(x)) is the normalized objective vector of x calculated by equation (6). Then, lbest_i is selected as the particle with the best convergence value by equation (5) among the T angular-distance-based closest neighbors of y. Specifically, each particle of A owns T nearest neighbors, which are found by calculating the angular distance of each particle to the other particles of A using equation (7). The neighborhood information of x_i is recorded as B(i) = (x_{i1}, x_{i2}, . . . , x_{iT}), where x_{i1}, . . . , x_{iT} are the T closest angular-distance-based solutions to x_i by equation (7). Then, for each particle in S, lbest_i and gbest_i are obtained to update the velocity by equation (4) and the position by equation (3), respectively. After that, the objective values of the particle are evaluated. To further clarify the selection of the local best and global best particles and the density-based velocity update, the related pseudocode is given in Algorithm 1.
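The angular distance and the lbest selection can be sketched as follows (helper names and the return convention, an archive index, are illustrative; `conv` holds precomputed convergence values and all vectors are assumed already normalized):

```python
import math

def angular_distance(fx, fy):
    """Angle (in radians) between two normalized objective vectors."""
    dot = sum(a * b for a, b in zip(fx, fy))
    nx = math.sqrt(sum(a * a for a in fx))
    ny = math.sqrt(sum(b * b for b in fy))
    # clamp to guard against floating-point drift outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

def select_lbest(x_norm, archive_norm, conv, T=3):
    """Associate x with its angularly closest archive member y, then return
    the index of the best-convergence particle among y's T nearest
    angular neighbors in the archive."""
    j = min(range(len(archive_norm)),
            key=lambda i: angular_distance(x_norm, archive_norm[i]))
    neigh = sorted((i for i in range(len(archive_norm)) if i != j),
                   key=lambda i: angular_distance(archive_norm[j],
                                                  archive_norm[i]))[:T]
    return min(neigh, key=lambda i: conv[i])
```

Because only angles matter, particles that differ in magnitude but share a search direction are treated as neighbors, which is what makes the criterion robust in high-dimensional objective spaces.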

3.2. An Angular-Guided Archive Update Strategy.
After performing the abovementioned density-based velocity update for the swarm search, each particle flies to a new position, producing the new particle swarm S. In order to get a population consisting of a fixed number of elitist solutions, which maintains an excellent balance between convergence and distribution in the external archive, an appropriate selection strategy is required to update A. In this paper, we propose an angular-guided archive update strategy, following the principle of distribution first and convergence second, and the pseudocode of updating the external archive is given in Algorithm 2.
In Algorithm 2, the inputs are S and A; N is the size of both S and A. First, S and A are combined into a union set U and A is set to ∅ in line 1. Then, all the particles in U are normalized by equation (6) in line 2. After that, in lines 3-6, a set of particles is found to represent the distribution of all particles. To do this, each time, the particle pair (x_h, x_u) with the minimum angular distance in U is found in line 4, which is computed as

(x_h, x_u) = argmin_{x,y∈U, x≠y} AD(x, y),  (8)

where AD(x, y) is computed by equation (7). Then, the particle x* in (x_h, x_u) with the minimum angular distance to U is found, added to A, and deleted from U in lines 5-6.
Here, the angular distance of a particle x in (x_h, x_u) to U, termed AD(x, U), is computed as

AD(x, U) = min_{y∈U, y∉{x_h, x_u}} AD(x, y),  (9)

where AD(x, y) is computed by equation (7). Next, N subsets S_1, S_2, . . . , S_N are obtained in lines 8-11, where each particle x_i remaining in U is saved into S_i, i = 1, 2, . . . , N. In lines 12-15, each particle x in A is associated with the particle x_t in U that has the minimum angular distance to x, and then this particle x is added into the subset S_t. Finally, A is emptied, and for each subset S_i, i = 1, 2, . . . , N, the particle with the best convergence performance computed by equation (5) is added into A, in lines 17-20. The final A is returned in line 21.
In order to facilitate the understanding of this angular-guided strategy, a simple example is illustrated in Figure 1. The union set U includes ten individuals x_1, x_2, . . . , x_10 and their mapping solutions, which map the individuals to the hyperplane f′_1 + f′_2 = 1 (for calculating angular distances) in the normalized biobjective space, as shown in Figure 1(a). First, five particles (the swarm size N = 5, i.e., half of U), namely, x_1, x_4, x_7, x_9, and x_10, that represent the distribution of all ten particles are kept in U, as shown in Figure 1(b), while the remaining particles, i.e., x_2, x_3, x_5, x_6, and x_8, are moved to A, as shown in Figure 1(c). Second, each particle in Figure 1(c) is associated with the particle in Figure 1(b) having the minimum angular distance to it. After that, five subsets S_1, S_2, . . . , S_5 are obtained, where S_1 preserves x_1 and its associated particle in A, i.e., x_2. Similarly, S_2 preserves x_3 and x_4; S_3 preserves x_5, x_6, and x_7; S_4 preserves x_8 and x_9; and S_5 preserves x_10, as shown in Figure 1(d). Finally, in each subset, only the particle with the best convergence is selected, as shown in Figure 1(e).
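The archive update illustrated above can be sketched compactly as follows (assuming already-normalized objective vectors; function and variable names are illustrative, and ties between equally crowded particles are broken arbitrarily):

```python
import math

def angular_distance(fx, fy):
    """Angle between two normalized objective vectors."""
    dot = sum(a * b for a, b in zip(fx, fy))
    nx = math.sqrt(sum(a * a for a in fx))
    ny = math.sqrt(sum(b * b for b in fy))
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

def update_archive(norm_objs, conv, n):
    """Angular-guided archive update sketch over 2n candidates: remove the
    more crowded member of the closest pair n times, re-associate the
    removed particles with the n survivors, and keep the particle with the
    best (smallest) convergence value in each group."""
    U = list(range(len(norm_objs)))
    removed = []
    for _ in range(n):
        # closest pair (x_h, x_u) in U by angular distance
        best = None
        for a in range(len(U)):
            for b in range(a + 1, len(U)):
                d = angular_distance(norm_objs[U[a]], norm_objs[U[b]])
                if best is None or d < best[0]:
                    best = (d, U[a], U[b])
        _, h, u = best
        # drop whichever pair member is closer to the rest of U (more crowded)
        rest = [j for j in U if j not in (h, u)]
        def ad_to_rest(i):
            return min((angular_distance(norm_objs[i], norm_objs[j])
                        for j in rest), default=0.0)
        star = h if ad_to_rest(h) <= ad_to_rest(u) else u
        U.remove(star)
        removed.append(star)
    groups = {i: [i] for i in U}           # survivors seed the groups
    for r in removed:                      # associate removed particles back
        t = min(U, key=lambda i: angular_distance(norm_objs[r], norm_objs[i]))
        groups[t].append(r)
    # keep the best-convergence particle of each group
    return [min(g, key=lambda i: conv[i]) for g in groups.values()]
```

With two tight angular clusters, the sketch keeps one well-converged representative per cluster, mirroring the Figure 1 example.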

3.3. The Complete Algorithm of AGPSO.
The above sections have introduced the main components of AGPSO, including the velocity update strategy and the archive update operator. In addition, an evolutionary search is also performed on the external archive. In order to describe the remaining operators and to facilitate the implementation of AGPSO, the pseudocode of the complete algorithm is provided in Algorithm 3. The initialization procedure is first activated in lines 1-6 of Algorithm 3. For each particle in S, its position is randomly generated and its velocity is set to 0 in line 3. After that, the objectives of each particle are evaluated in line 4. The external archive A is updated with Algorithm 2 in line 7 and sorted in ascending order by the convergence fitness of each particle using equation (5) in line 8. After that, the iteration counter t is increased by one. Then, AGPSO steps into the main loop, which contains the particle search and the evolutionary search, until the maximum number of iterations is reached.
In the main loop, the SDE distance of each particle in S is calculated, and these particles are sorted by the SDE distance to get the median SDE distance of the particles in S, called med_SDE, as in lines 11-12. Then, the PSO-based search is performed in line 13 with the density-based velocity update of Algorithm 1. After that, the angular-guided archive update strategy of Algorithm 2 is executed in line 14, with the inputs S, A, and N (the sizes of S and A are both N). Then, in line 15, the evolutionary strategy is applied on A to optimize the swarm leaders, providing another search strategy to cooperate with the PSO-based search.
Algorithm 1: selection of lbest_i and gbest_i and the density-based velocity update
(1) sort the external archive A in ascending order based on the convergence value in equation (5)
(2) for each solution x_i ∈ A, i = (1, 2, . . . , N)
(3) identify its T neighborhood indexes as B(i)
(4) end for
(5) for each particle x_i ∈ S, i = (1, 2, . . . , N)
(6) associate x_i with the angular-distance-based closest particle y ∈ A with index j in A
(7) get the T nearest neighbors in A of y by the neighborhood index B(j)
(8) sort the T neighbors of y in ascending order based on the convergence value by equation (5)
(9) select the first angular-distance-based neighboring particle of y as lbest_i
(10) select gbest_i randomly from the top 20% particles in A
(11) update the velocity v_i of x_i by equation (4)
(12) update the position of x_i by equation (3)
(13) end for

Algorithm 2: the angular-guided archive update strategy
(1) combine S and A into a union set U and set A = ∅
(2) normalize all particles in U by equation (6)
(3) for i = 1 to N
(4) find the particle pair (x_h, x_u) in U with the minimum angular distance among all particle pairs in U
(5) find x* in (x_h, x_u) such that x* has the smaller angular distance to U by equation (9)
(6) add x* to A, and then delete x* from U
(7) end for
(8) for each particle x_i ∈ U
(9) initialize an empty subset S_i
(10) add the particle x_i into S_i
(11) end for
(12) for each particle x_i ∈ A
(13) associate x_i with the particle x_t in U that has the smallest angular distance to x_i
(14) add x_i into the subset S_t
(15) end for
(16) set A = ∅
(17) for i = 1 to N
(18) find the particle x* of S_i that has the best convergence performance computed by equation (5)
(19) add x* into A
(20) end for
(21) return A

Four competitive MOEAs are used to validate the performance of the proposed AGPSO. They are briefly introduced as follows:
(1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in the environmental selection to guarantee the wideness and uniformity of the solution set.
(2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance the convergence in high-dimensional optimization.
(3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, balancing the convergence and the diversity of the population.
(4) SPEA2 + SDE [47]: this algorithm develops a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization.

The Experimental Studies
Four competitive MOPSOs are also used to validate the performance of the proposed AGPSO. They are briefly introduced as follows:
(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, which considers both convergence and diversity with weight vectors.
(2) MaPSO [46]: this algorithm is an angle-based MOPSO, which chooses the leader positional information of each particle from its historical particles by using scalar projections to guide the particles.
(3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems, which are solved simultaneously using a collaborative search process by applying PSO directly.
(4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

Benchmark Problems.
In our experiments, sixteen various unconstrained MOPs are considered to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below. In this study, the WFG [50] and MaF [31] test problems were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k − 1, where n and m are, respectively, the number of decision variables and the number of objectives. As suggested in [31], the value of k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters. As recommended in [51], k is set to 2 × (m − 1) and l is set to 20.
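Under these settings, the decision-variable counts follow directly (a small helper, assuming the common WFG convention k = 2 × (m − 1) with l = 20):

```python
def wfg_maf_sizes(m, k_maf=10, l_wfg=20):
    """Decision-variable counts used in the experiments:
    n = m + k - 1 for MaF (k = 10), and n = k + l for WFG with
    k = 2 * (m - 1) position-related and l = 20 distance-related params."""
    n_maf = m + k_maf - 1
    k_wfg = 2 * (m - 1)
    return n_maf, k_wfg + l_wfg
```

For example, a 5-objective instance has 14 decision variables on MaF and 28 on WFG.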
Algorithm 3: the complete algorithm of AGPSO
(1) let t = 0, A = ∅, initialize the particle swarm S = {x_1, x_2, . . . , x_N}, and let T be the neighborhood size
(2) for i = 1 to N
(3) randomly initialize the position x_i and set v_i = 0 for x_i
(4) evaluate the objective values of x_i
(5) end for
(6) initialize the ideal and nadir points from the objective values of S
(7) update the external archive A by Algorithm 2
(8) sort A in ascending order by the convergence fitness of each particle using equation (5)
(9) t = t + 1
(10) while t ≤ t_max do
(11) calculate the SDE of each particle in S
(12) get med_SDE as the median SDE value of the particles in S
(13) perform the PSO-based search with the density-based velocity update of Algorithm 1
(14) update the external archive A by Algorithm 2
(15) apply the evolutionary search on A
(16) sort A in ascending order by equation (5)
(17) t = t + 1
(18) end while
(19) return A

The goals of MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniform and spread distribution of the solutions over the true PF (i.e., diversity).
The hypervolume (HV) [20] metric measures the size of the objective space between the obtained solution set S and a prespecified reference point z^r = (z^r_1, . . . , z^r_m)^T that is dominated by all points on the true PF. The HV metric is calculated as follows:

HV(S) = Vol(∪_{x∈S} [f_1(x), z^r_1] × [f_2(x), z^r_2] × · · · × [f_m(x), z^r_m]),

where Vol(·) denotes the Lebesgue measure. When calculating the HV value, the points are first normalized as suggested in [51] by the vector 1.1 × (z^max_1, z^max_2, . . . , z^max_m), with z^max_k (k = 1, 2, . . . , m) being the maximum value of the kth objective in the true PF.
Solutions that cannot dominate the reference point are discarded, i.e., they are not included in computing HV. For each objective, an integer larger than the worst value of the corresponding objective in the true PF is adopted as the reference point. After the normalization operation, the reference point is set to (1.0, 1.0, . . . , 1.0). A larger value of HV indicates a better approximation of the true PF.
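For the biobjective case, the HV of a normalized (minimization) solution set can be computed with a simple sweep over f1-sorted points (an illustrative sketch; exact HV computation for general m is considerably more involved):

```python
def hv_2d(points, ref):
    """Hypervolume of a biobjective minimization set with respect to
    reference point `ref`: the area dominated by the set and bounded by
    `ref`, accumulated rectangle by rectangle in a left-to-right sweep."""
    # keep only points that dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # point is nondominated so far in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

With the normalized reference point (1.0, 1.0), a set closer to the ideal point yields a larger HV, matching the "larger is better" reading above.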
Because weight vectors are required by dMOPSO and MOEA/DD, the population sizes of these two algorithms are set equal to the number of weight vectors. The number of weight vectors is set to 210, 240, and 275, following [51], for the test problems with 5, 8, and 10 objectives, respectively. To ensure fairness, the other algorithms adopt a population/swarm size equal to the number of weight vectors.
To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. p_c and p_m are the crossover and mutation probabilities, and η_c and η_m are, respectively, the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, and AGPSO), the control parameters c_1, c_2, and c_3 are sampled in [1.5, 2.5], and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the size of the history maintained by each particle, which is set to 3; the other algorithmic parameters are θ_max = 0.5, w = 0.1, and C = 2.0. In MOEA/DD, T is the neighborhood size; δ is the probability of selecting parent solutions from the T neighborhoods; and n_r is the maximum number of parent solutions replaced by each child solution. Regarding VaEA, σ is a condition for determining whether two solutions search in similar directions, which is set to π/2(N + 1).
All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (included in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (t_max) was reached. In this paper, t_max is set to 600, 700, and 1000 for the 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can be easily determined by MFE = N × t_max, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank sum test was run at a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+," "−," and "∼" indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO according to this statistical test, respectively.
All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

Comparison with State-of-the-Art Algorithms.
In this section, AGPSO is compared to four PSO algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+," "−," and "∼" indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO according to this statistical test, respectively. The best mean result for each problem is highlighted in boldface.

Comparison Results with Four Current MOPSOs
(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV performance comparisons of the four PSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance in solving the WFG problems, as it is best in 21 out of 27 cases. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are best in 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second last row of Table 2. The last row of Table 2 gives the one-to-one comparisons of AGPSO with each PSO on WFG, where "−/∼/+" denotes the number of test problems on which the corresponding algorithm performs worse than, similarly to, and better than AGPSO. It is found that AGPSO shows a clear advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF; WFG3 with a linear and degenerate PF; and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO shows better performance than the other PSOs except MaPSO: AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparable position and distance parameters, AGPSO is worse than NMPSO with 5 objectives, but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO performs best only with 10 objectives and worse with 5 and 8 objectives.
As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, while AGPSO shows superior performance over these traditional PSOs.
From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively. Conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows superior performance over NMPSO and MaPSO on most of WFG1-WFG9.
(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO with four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second last row in Table 3, AGPSO obtains 12 best results out of 21 test problems, while NMPSO performs best in 7 cases, MaPSO and SMPSO each perform best in only 1 case, and dMOPSO is not best on any MaF test problem.
MaF1 is a modified inverted DTLZ1 [52], so the distribution of reference points cannot fit the PF shape of MaF1.
The performance of the reference-point-based PSO (dMOPSO) is therefore worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), and AGPSO performs best on the MaF1 test problem. MaF2 is obtained from DTLZ2 by increasing the difficulty of convergence, requiring all objectives to be optimized simultaneously to reach the true PF. AGPSO performs best on MaF2 with all numbers of objectives. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above: AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than all the other PSOs in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; NMPSO is slightly better than AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, with a disconnected PF, very well, and NMPSO performs best with all numbers of objectives. As shown in the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the high-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional spaces, while the purely decomposition-based dMOPSO cannot solve MaOPs very well because a finite set of fixed reference points does not provide sufficient search ability in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, showing superior performance over dMOPSO and SMPSO on the MaF problems.
Regarding NMPSO, it is competitive with AGPSO, but AGPSO is still better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows better performance than NMPSO on the MaF test problems.

Comparison Results with Four Current MOEAs.
In the sections above, it was experimentally shown that AGPSO performs better than four existing MOPSOs on most test problems (WFG and MaF). However, there are not many existing PSO algorithms designed for MaOPs, and a comparison with PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, we further compare AGPSO with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE).
(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparison of AGPSO with four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four competitive MOEAs are specifically designed for MaOPs and outperform most MOPSOs on such problems; nevertheless, the experimental results show that they are still worse than AGPSO. As observed from the second last row in Table 4, AGPSO obtains 22 best results out of 27 test problems, while MOEA/DD and SPEA2 + SDE perform best in 2 and 3 out of 27 cases, respectively, and VaEA and θ-DEA are not best on any WFG test problem.
Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO excels with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2 + SDE have the advantage: AGPSO is slightly worse than θ-DEA and SPEA2 + SDE on this test problem, but still better than VaEA and MOEA/DD. For the remaining comparisons on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most of the test problems, which demonstrates the superiority of the proposed algorithm.
As shown in the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2 + SDE perform better than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.
(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results between AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second last row of Table 5 shows that AGPSO performs best in 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE perform best in 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.
Regarding MaF1-MaF2, AGPSO is the best among the MOEAs mentioned above on these test problems. For MaF3, MOEA/DD is the best, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4, with a concave PF, AGPSO shows the best performance with all numbers of objectives, while MOEA/DD performs poorly on this problem. For MaF5, AGPSO only obtains the 3rd rank: it is better than VaEA, MOEA/DD, and SPEA2 + SDE, while outperformed by θ-DEA with 5, 8, and 10 objectives. For MaF6, AGPSO is the best with 10 objectives but worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2 + SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE in 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than them in 7, 6, 3, and 5 out of 21 cases, respectively.
In conclusion, it is reasonable to state that AGPSO shows better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity strategy, which enhances the search in the areas around each particle, and the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Further Discussion and Analysis on AGPSO.
To further justify the advantage of AGPSO, it was compared in particular to VaEA, a recently proposed angle-based evolutionary algorithm. Their comparison results with HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 3 and 5. According to these results, we can conclude that AGPSO shows superior performance over VaEA on most of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is defeated by VaEA in only 7 cases. We compared the performance of our algorithm with VaEA's environmental selection on test problems with different PF shapes (MaF1-MaF7) and on problems that share the same PF shape but have different characteristics, such as the difficulty of convergence and a deceptive PF (WFG1-WFG9). Therefore, the angular-guided method embedded into PSOs effectively improves their ability to tackle MaOPs. The angular-guided method is adopted as a distribution indicator, transformed from the vector angle, to distinguish the similarities of particles. The traditional vector angle performs better on concave PFs but worse on convex or linear PFs, as observed in MOEA/C [52]. The angular-guided method improves this situation: it performs better than the traditional vector angle on convex or linear PFs and does not behave badly on concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO focuses on diversity first and convergence second in the update of its external archive. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence, while balancing convergence and diversity concurrently.
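The basic building block of such angle-based similarity measures is the vector angle between two objective vectors. A minimal sketch is given below; translating the vectors by the ideal point before measuring the angle is an assumption of this illustration, and AGPSO's exact angular-distance formulation is not reproduced here.

```python
import math

def vector_angle(f_a, f_b, ideal):
    """Angle (in radians) between two objective vectors after translating
    them by the ideal point; a smaller angle means the two solutions
    search more similar directions in objective space."""
    a = [fa - zi for fa, zi in zip(f_a, ideal)]
    b = [fb - zi for fb, zi in zip(f_b, ideal)]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Clamp to [-1, 1] to guard against floating-point drift.
    cos = max(-1.0, min(1.0, dot / (na * nb)))
    return math.acos(cos)
```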
From these analyses, we believe that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies an evolutionary operator on the external archive to improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.
To visually support the above discussion and to show the solution distributions in high-dimensional objective spaces, the final solution sets with the median HV values over 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, the boundary found by AGPSO is not complete, but the approximate PF obtained by AGPSO attaches more closely to the true PF. This also supports our analysis of environmental selection, i.e., that it is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other four optimizers.

Further Discussion and Analysis on Velocity Update Strategy.
The comparisons above have fully demonstrated the superiority of AGPSO over four MOPSOs and four MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we examine the velocity update formula more deeply. To show the superiority of the density-based velocity update, AGPSO is compared with a variant using the traditional velocity update formula as in [14] (denoted AGPSO-I). Furthermore, to show that merely embedding the local best into the traditional velocity update method is not sufficient, AGPSO is also compared with a variant using equation (12) (denoted AGPSO-II). Table 6 shows the comparisons of AGPSO and the two modified variants with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (included in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second last row of Table 6, AGPSO obtains 34 best results out of 48 test problems, while AGPSO-I performs best in 10 cases and AGPSO-II in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better in 33 cases, similarly in 5 cases, and worse in 10 cases. The novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy improves AGPSO and makes particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO with AGPSO-II, AGPSO performs better in 31 cases, similarly in 12 cases, and worse in 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12).
Adding only the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search strength in the region surrounding that particle. This also confirms that the proposed formula can produce higher-quality particles.
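A generic three-term velocity update of the kind discussed above can be sketched as follows. This is an illustrative formula only, not AGPSO's exact equation: the weighting of the pbest, lbest, and gbest terms is an assumption, and the random coefficients r1, r2, r3 are passed explicitly to keep the example deterministic.

```python
def velocity_update(v, x, pbest, lbest, gbest, w, c1, c2, c3, r1, r2, r3):
    """Illustrative per-dimension velocity update: inertia plus attraction
    toward personal-best, local-best, and global-best positions.
    In practice r1, r2, r3 are uniform random numbers in [0, 1]."""
    return [w * vi
            + c1 * r1 * (pb - xi)   # attraction toward personal best
            + c2 * r2 * (lb - xi)   # attraction toward local best
            + c3 * r3 * (gb - xi)   # attraction toward global best
            for vi, xi, pb, lb, gb in zip(v, x, pbest, lbest, gbest)]
```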

Computational Complexity Analysis on AGPSO.
The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2. Algorithm 2 requires a time complexity

Conclusions and Future Work
This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an effective angular-guided archive update strategy. The density-based velocity update strategy uses adjacent information (local best) to explore the regions around particles and uses globally optimal information (global best) to search for better-performing locations globally. This strategy improves the quality of the particles produced by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms in solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current PSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE). The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be further investigated. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it sacrifices some performance on many-objective optimization problems with convex PFs, and this will also be improved in future work.

Data Availability
Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or provided by their original authors, respectively. Also, most codes of the compared algorithms, such as VaEA [27], θ-DEA [48], and MOEA/DD [50], can be found in our source code. The test problems are WFG [23] and MaF [24]; WFG [23] can be found at http://jmetal.sourceforge.net/problems.html and MaF [24] at https://github.com/BIMK/PlatEMO.

Conflicts of Interest
The authors declare that they have no conflicts of interest.