Abstract

Most multiobjective particle swarm optimizers (MOPSOs) face the challenges of maintaining diversity and achieving convergence when tackling many-objective optimization problems (MaOPs), as they usually use nondominated sorting or decomposition-based methods to select the local or global best particles, which are not so effective in a high-dimensional objective space. To better solve MaOPs, this paper presents a novel angular-guided particle swarm optimizer (called AGPSO). A novel velocity update strategy is designed in AGPSO, which aims to enhance the search intensity around the particles selected based on their angular distances. Using an external archive, the local best particles are selected from the surrounding particles with the best convergence, while the global best particles are chosen from the top 20% of particles with better convergence in the entire particle swarm. Moreover, an angular-guided archive update strategy is proposed in AGPSO, which maintains a consistent population with balanceable convergence and diversity. To evaluate the performance of AGPSO, the WFG and MaF test suites with 5 to 10 objectives are adopted. The experimental results indicate that AGPSO outperforms four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four competitive evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2-SDE) on most of the test problems used.

1. Introduction

Many real-world applications often face the problem of optimizing m (often conflicting) objectives [1]. This kind of engineering problem is called a multiobjective optimization problem (MOP) when m is 2 or 3, or a many-objective optimization problem (MaOP) when m > 3. A generalized MOP or MaOP can be modeled as follows:

minimize F(x) = (f1(x), f2(x), …, fm(x)), subject to x ∈ Ω,

where x = (x1, …, xD) is a decision vector in the D-dimensional search space Ω and F defines m objective functions. Assuming that two solutions x and y locate in the search space Ω, if fk(x) ≤ fk(y) for k = 1, …, m and F(x) ≠ F(y), x is said to dominate y. Then, if no solution dominates x, it is called a nondominated solution. For MOPs or MaOPs, the aim is to search a set of nondominated solutions called the Pareto-optimal set (PS), with its mapping in the objective space called the Pareto-optimal front (PF).
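To make the dominance definition concrete, the following minimal sketch checks Pareto dominance for a minimization problem; the class and method names are illustrative, not from the paper.

```java
// Minimal Pareto-dominance test for minimization (illustrative sketch).
public class ParetoDominance {
    // Returns true if x dominates y: x is no worse on every objective
    // and strictly better on at least one.
    public static boolean dominates(double[] x, double[] y) {
        boolean strictlyBetter = false;
        for (int k = 0; k < x.length; k++) {
            if (x[k] > y[k]) return false;       // worse on objective k
            if (x[k] < y[k]) strictlyBetter = true;
        }
        return strictlyBetter;
    }
}
```

For example, (1, 2) dominates (2, 2), while (1, 3) and (2, 2) are mutually nondominated.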

During the past two decades, a number of nature-inspired computational methodologies have been proposed to solve various kinds of MOPs and MaOPs, such as multiobjective evolutionary algorithms (MOEAs) [2, 3], multiobjective particle swarm optimizers (MOPSOs) [4–6], and multiobjective ant colony optimizers [7]. In particular, MOPSOs have become one of the most outstanding population-based approaches due to their easy implementation and strong search ability [8], and they can also be applied to other kinds of optimization problems, such as multimodal optimization problems [9] and standard image segmentation [10], as well as some real-world engineering problems [11, 12].

Currently, MOPSOs have been validated to be very effective and efficient when solving MOPs, as they can search a set of approximate solutions with balanceable diversity and convergence to cover the entire PF. Based on the selection of local or global best particles, which can effectively guide the flight of particles in the search space, most MOPSOs can be classified into three main categories. The first category embeds the Pareto-based sorting approach [2] into MOPSO (called Pareto-based MOPSOs), such as OMOPSO [13], SMPSO [14], CMPSO [15], AMOPSO [5], and AGMOPSO [16]. The second category decomposes an MOP into a set of scalar subproblems and optimizes all the subproblems simultaneously using a collaborative search process (called decomposition-based MOPSOs), including dMOPSO [17], the algorithm in [18], and MMOPSO [19]. The third category employs performance indicators (e.g., HV [20] and R2 [21]) to guide the search process in MOPSOs (called indicator-based MOPSOs), such as IBPSO-LS [22], S-MOPSO [23], NMPSO [24], R2HMOPSO [25], and R2-MOPSO [26].

Although the above MOPSOs are effective for tackling MOPs with the objective number m ≤ 3, their performance significantly deteriorates when solving MaOPs with m > 3, mainly due to several challenges brought by "the curse of dimensionality" in MaOPs [27, 28]. Generally, there are three main challenges for MOPSOs in dealing with MaOPs.

The first challenge is to provide sufficient selection pressure to approach the true PFs of MaOPs, i.e., the challenge of maintaining convergence. With the increase of objectives in MaOPs, it becomes very difficult for MOPSOs to pay the same attention to all the objectives, which may lead to an imbalanced evolution such that solutions are very good at some objectives but perform poorly on the others [29]. Moreover, most solutions of MaOPs are often nondominated with each other at each generation. Therefore, MOPSOs based on Pareto dominance may lose selection pressure when addressing MaOPs and show poor convergence performance [24].

The second challenge is to search a set of solutions that are evenly distributed along the whole PF, i.e., the challenge of diversity maintenance. Since the objective space enlarges rapidly with the increase of dimensionality in MaOPs, a large number of solutions is required to approximate the whole PF. Some well-known diversity maintenance methods may not work well on MaOPs; e.g., the crowding distance [2] may prefer dominance-resistant solutions [30], the decomposition approaches require a larger set of well-distributed weight vectors [3], and the performance indicators require an extremely high computational cost [31]. Thus, diversity maintenance in MOPSOs becomes less effective with the increase of objectives in MaOPs [32].

The third challenge is to design effective velocity update strategies for MOPSOs to guide the particle search in the high-dimensional space [33]. A number of studies have presented improved velocity update strategies for MOPs [34]. For example, in SMPSO [14], the velocity of the particles is constrained in order to avoid cases in which the velocity becomes too high. In AgMOPSO [35], a novel archive-guided velocity update method is used to select the global best and personal best particles from the external archive, which is more effective in guiding the swarm. In MaOPSO/2s-pccs [36], a leader group is selected from the nondominated solutions with better diversity in the external archive according to the parallel cell coordinates, and its members serve as the global best solutions to balance the exploitation and exploration of the population. In NMPSO [24], an additional search direction, from the personal best particle toward the global best one, is introduced to add more disturbance to the velocity update. However, the abovementioned velocity update strategies are not so effective for MaOPs: with the enlargement of the objective space, it becomes more difficult to select suitable personal or global best particles for the velocity update.

To overcome the abovementioned challenges, this paper proposes a novel angular-guided MOPSO (called AGPSO). A novel angular-guided archive update strategy is designed in AGPSO to balance convergence and diversity in the external archive. Moreover, a density-based velocity update strategy is proposed to effectively guide the PSO-based search. When compared with existing MOPSOs and some competitive MOEAs for MaOPs, the experimental results on the WFG and MaF test suites with 5 to 10 objectives show the superiority of AGPSO in most cases.

To summarize, the main contributions of this paper are clarified as follows.

(1) An angular-guided archive update strategy is proposed in this paper. As the local-best and global-best particles are both selected from the external archive, elitist solutions with balanceable convergence and diversity should be preserved to effectively guide the search toward the true PF. In this strategy, N evenly distributed particles (N is the swarm size) are first selected by their angular distances to maintain the diversity of different search directions, which helps to alleviate the second challenge above (diversity maintenance). After that, the remaining particles are each associated with the closest selected particle based on the angular distance, forming N groups of similar particles. At last, the particle with the best convergence in each group is saved in the external archive, which helps to alleviate the first challenge above (keeping convergence).

(2) A density-based velocity update strategy is designed in this paper, which can search high-quality particles with fast convergence and good distribution and helps to alleviate the third challenge above. The local best and global best particles are selected according to the densities of the particles, for guiding the PSO-based search. The sparse particles are guided by the local best particles to encourage exploitation in the local region, as their surrounding particles are very few, while the crowded particles are disturbed by the global best particles to put more attention on convergence. In this way, the proposed velocity update strategy is more suitable for solving MaOPs, as experimentally validated in Section 4.5.

The rest of this paper is organized as follows. Section 2 introduces the background of the particle swarm optimizer and some current MOPSOs and MOEAs for MaOPs. The details of AGPSO are given in Section 3. To validate the performance of AGPSO, some experimental studies are provided in Section 4. At last, our conclusions and future work are given in Section 5.

2. Background

2.1. Particle Swarm Optimizer

Particle swarm optimizer is a simple yet efficient swarm intelligence algorithm, inspired by the social behaviors of individuals in flocks of birds and schools of fish, as proposed by Kennedy and Eberhart [6] in 1995. Generally, a particle swarm S is stochastically initialized with N particles according to the corresponding search range of the problem (N is the swarm size). Each particle xi is a candidate solution to the problem, characterized by its velocity vi(t) and its position xi(t) at the t-th iteration.

Moreover, each particle xi records historical positional information in S, i.e., the personal best position for xi (pbesti) and the global best position for xi (gbesti), which are used to guide the particle's flight to the next position. The velocity and position of each particle are then iteratively updated as follows:

vi(t + 1) = w · vi(t) + c1 · r1 · (pbesti − xi(t)) + c2 · r2 · (gbesti − xi(t)), (1)

xi(t + 1) = xi(t) + vi(t + 1), (2)

where w is an inertia weight that keeps part of the previous velocity, r1 and r2 are two random numbers uniformly generated in [0, 1], and c1 and c2 are the two learning weight factors for the personal and global best particles, respectively [14].
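The canonical update above can be sketched as follows; this is a generic illustration of equations (1) and (2), with parameter names taken from the text and all concrete values left to the caller.

```java
import java.util.Random;

// Illustrative sketch of the canonical PSO update in equations (1) and (2).
public class PsoUpdate {
    static final Random RNG = new Random(42);

    // Equation (1): velocity update from pbest and gbest.
    public static double[] updateVelocity(double[] v, double[] x,
                                          double[] pbest, double[] gbest,
                                          double w, double c1, double c2) {
        double[] vNew = new double[v.length];
        for (int d = 0; d < v.length; d++) {
            double r1 = RNG.nextDouble(), r2 = RNG.nextDouble();
            vNew[d] = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d]);
        }
        return vNew;
    }

    // Equation (2): position update.
    public static double[] updatePosition(double[] x, double[] vNew) {
        double[] xNew = new double[x.length];
        for (int d = 0; d < x.length; d++) xNew[d] = x[d] + vNew[d];
        return xNew;
    }
}
```

In practice the new position is also clipped to the search range of the problem, which is omitted here.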

2.2. Some Current MOPSOs and MOEAs for MaOPs

Recently, a number of nature-inspired optimization algorithms have been proposed for solving MaOPs, such as MOEAs and MOPSOs. In particular, many MOEAs have been proposed, which can be classified into three main kinds: Pareto-based MOEAs, indicator-based MOEAs, and decomposition-based MOEAs, and many approaches have been proposed to enhance their performance for solving MaOPs. For example, the Pareto dominance relation has been modified to enhance the convergence pressure of Pareto-based MOEAs, such as the modified dominance relations in [30, 37], a strengthened dominance relation [38], and a special dominance relation [39]. New association mechanisms between solutions and weight vectors have been designed for decomposition-based MOEAs, such as the use of reference vectors in NSGA-II [40], a dynamical decomposition in DDEA [41], and a self-organizing map-based weight vector design in MOEA/D-SOM [42]. Effective and efficient performance indicators have been presented for indicator-based MOEAs, such as a unary diversity indicator based on reference vectors [43], an efficient indicator combining the sum of objectives and shift-based density estimation, called ISDE+ [44], and an enhanced inverted generational distance (IGD) indicator [45].

In contrast to the abovementioned MOEAs, only a few MOPSOs have been presented to solve MaOPs. The reason may be attributed to the three challenges faced by MOPSOs, as introduced in Section 1. The main difference between MOPSOs and MOEAs is their evolutionary search; e.g., most MOEAs adopt differential evolution or simulated binary crossover, while most MOPSOs use the flight of particles to run the search. The environmental selection in the external archive of MOPSOs and MOEAs is actually similar; inspired by the environmental selection in some MOEAs, MOPSOs can be easily extended to solve MaOPs. Recently, some competitive MOPSOs have been proposed to better solve MaOPs. For example, in pccsAMOPSO [5], a parallel cell coordinate system is employed to update the external archive for maintaining diversity, and this archive is used to select the global best and personal best particles. In its improved version MaOPSO/2s-pccs [36], a two-stage strategy is further presented to emphasize convergence and diversity, respectively, by using a single-objective optimizer and a many-objective optimizer. In NMPSO [24], a balanceable fitness estimation method is proposed by summing differently weighted convergence and diversity factors, which aims to offer sufficient selection pressure in the search process; however, this method is very sensitive to the parameters used and requires a high computational cost. In CPSO [29], a coevolutionary PSO with a bottleneck objective learning strategy is designed for solving MaOPs: multiple swarms coevolve in a distributed fashion to maintain diversity for approximating the entire PF, while the bottleneck objective learning strategy is used to accelerate convergence on all objectives. In MaPSO [46], a novel MOPSO based on the acute angle is proposed, in which the leader of each particle is selected from its historical particles by using scalar projections, and each particle owns K historical particles (K was set to 3 in its experiments); moreover, the environmental selection in MaPSO is run based on the acute angle between each pair of particles. Although these MOPSOs are effective for solving MaOPs, there is still room for improvement in the design of MOPSOs. Thus, this paper proposes a novel angular-guided MOPSO with an angular-guided archive update strategy and a density-based velocity update strategy to alleviate the three abovementioned challenges in Section 1.

3. The Proposed AGPSO Algorithm

In this section, the details of the proposed algorithm are provided. First, the two main strategies used in AGPSO, i.e., a density-based velocity update strategy and an angular-guided archive update strategy, are introduced. The density-based velocity update strategy considers the density around each particle, which is used to determine whether the local best particle or the global best particle guides the swarm search. In this strategy, the local best particle is selected based on local angular information, aiming to encourage exploitation in the local region around the current particle, while the global best particle is selected by convergence performance and is used to perform exploration in the whole search space. In order to provide elite particles for leading the PSO-based search, the angular-guided archive update strategy is designed to provide an angular search direction for each particle, which is more effective for solving MaOPs. Finally, the complete framework of AGPSO is presented in order to clarify its implementation as well as its other components.

3.1. A Density-Based Velocity Update Strategy

As introduced in Section 2.1, the original velocity update strategy includes two best positions of a particle, i.e., the personal-best (pbest) and global-best (gbest) particles, as defined in equation (1), which are used to guide the swarm search. Theoretically, the quality of the generated particles depends on the guiding information from pbest and gbest in the velocity update. However, the traditional velocity update strategy faces great challenges in tackling MaOPs, as the selection of effective personal-best and global-best particles is difficult. Moreover, the two velocity components used in equation (1) may introduce too much disturbance for the swarm search, as most of the particles are usually sparse in the high-dimensional objective space. Thus, the quality of particles generated by the traditional velocity formula may not be promising, as the large disturbance in the velocity update lowers the search intensity in the local region around the current particle. Some experiments are given in Section 4.5 to study the effectiveness of different velocity update strategies. In this paper, a density-based velocity update strategy is proposed in equation (4), which can better control the search intensity in the local region of particles:

vi(t + 1) = w · vi(t) + c1 · r1 · (lbesti − xi(t)), if SDE(xi) > medSDE,
vi(t + 1) = w · vi(t) + c2 · r2 · (gbesti − xi(t)), otherwise, (4)

where t is the iteration number; vi(t) and vi(t + 1) are the t-th and the (t + 1)-th iteration velocities of particle xi, respectively; w is the inertia weight; c1 and c2 are two learning factors; r1 and r2 are two uniformly distributed random numbers in [0, 1]; lbesti and gbesti are the positions of the local-best and global-best particles for xi, respectively; SDE(xi) is the shift-based density estimation (SDE) of particle xi as defined in [47]; and medSDE is the median of all SDE values in the swarm. In this way, the sparse particles with SDE(xi) > medSDE are guided by the local best particles to encourage exploitation in the local region, as their surrounding particles are very few, while the crowded particles with SDE(xi) ≤ medSDE are disturbed by the global best particles to put more attention on convergence. Please note that lbesti and gbesti of xi are selected from the external archive A, as introduced below.
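The SDE value used above can be sketched as follows. In shift-based density estimation [47], when measuring the distance from particle p to q, any objective on which q is better than p is shifted up to p's value, so purely diverse but poorly converged particles appear crowded. This sketch takes SDE(p) as the distance to the nearest shifted neighbor, which is one common instantiation; the exact variant used in the paper may differ.

```java
// Hedged sketch of shift-based density estimation (SDE) over objective vectors.
public class Sde {
    public static double sde(double[][] objs, int p) {
        double best = Double.POSITIVE_INFINITY;
        for (int q = 0; q < objs.length; q++) {
            if (q == p) continue;
            double dist2 = 0.0;
            for (int k = 0; k < objs[p].length; k++) {
                double shifted = Math.max(objs[q][k], objs[p][k]); // shift q toward p
                double d = shifted - objs[p][k];
                dist2 += d * d;
            }
            best = Math.min(best, Math.sqrt(dist2));
        }
        return best; // larger value => sparser particle
    }
}
```

Note that a dominated particle gets an SDE value of 0, since its dominating neighbor is shifted exactly onto it.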

For the selection of gbesti of particle xi, the focus is on the particles with better convergence in the archive; meanwhile, a certain perturbation is still needed, so gbesti is randomly selected from the top 20% of particles in the external archive with the better convergence performance values. For each particle x, the convergence performance value is calculated using the following formulation:

con(x) = Σ_{k=1..m} f̄k(x), (5)

where m is the number of objectives and f̄k(x) is the kth normalized objective value of x, obtained by the normalization procedure

f̄k(x) = (fk(x) − zk_min) / (zk_nad − zk_min), (6)

where zk_min and zk_nad are the kth objective values of the ideal and nadir points, respectively, k = 1, …, m.
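The convergence value of equations (5) and (6) can be sketched directly; the ideal and nadir points are passed in here, whereas in AGPSO they would be estimated from the current archive.

```java
// Sketch of the convergence value: sum of normalized objectives (smaller is better).
public class Convergence {
    public static double con(double[] f, double[] zMin, double[] zNad) {
        double sum = 0.0;
        for (int k = 0; k < f.length; k++) {
            sum += (f[k] - zMin[k]) / (zNad[k] - zMin[k]); // equation (6)
        }
        return sum; // equation (5)
    }
}
```

Sorting the archive in ascending order of this value, as Algorithm 1 does, places the best-converged particles first.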

For the selection of lbesti, the current particle xi in the particle swarm (i = 1, …, N, where N is the swarm size) is first associated with the closest particle y in the external archive A, by comparing the angular distance of xi to each particle in A. The associated particle y is then used to find lbesti in A. Here, the angular distance [41] between a particle x and another particle y, termed Ad(x, y), is computed by

Ad(x, y) = arccos( F̄(x) · F̄(y) / (‖F̄(x)‖ · ‖F̄(y)‖) ), (7)

where the inner product is defined as

F̄(x) · F̄(y) = Σ_{k=1..m} f̄k(x) · f̄k(y), (8)

where ‖·‖ is the Euclidean norm and f̄k is calculated by equation (6).
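The angular distance of equations (7) and (8) can be sketched as the angle between two normalized objective vectors; this interpretation is assumed from the surrounding text, and the paper's exact scaling may differ.

```java
// Sketch of the angular distance between two normalized objective vectors.
public class AngularDistance {
    public static double angle(double[] fx, double[] fy) {
        double dot = 0.0, nx = 0.0, ny = 0.0;
        for (int k = 0; k < fx.length; k++) {
            dot += fx[k] * fy[k];   // equation (8): inner product
            nx += fx[k] * fx[k];
            ny += fy[k] * fy[k];
        }
        double cos = dot / (Math.sqrt(nx) * Math.sqrt(ny));
        // clamp against floating-point drift before taking the arccosine
        return Math.acos(Math.min(1.0, Math.max(-1.0, cos)));
    }
}
```

Two particles on the same ray from the ideal point have angular distance 0, regardless of how far along the ray they lie, which is why the measure captures diversity rather than convergence.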

Then, lbesti is selected as the particle with the best convergence value by equation (5) among the T angular-distance-based closest neighbors of y. Specifically, each particle of A owns its T nearest neighbors, which are found by calculating the angular distance of each particle to the other particles of A using equation (7). The neighborhood information of the ith particle in A is stored in B(i), which records the indexes of its T closest angular-distance-based particles by equation (7).

Then, for each particle xi in S, lbesti and gbesti are obtained to update the velocity by equation (4) and the position by equation (3), respectively. After that, the objective values of the particle are evaluated. To further clarify the selection of the local best and global best particles and the density-based velocity update, the related pseudocode is given in Algorithm 1.

(1) sort the external archive A in ascending order based on the convergence value in equation (5)
(2) for each particle xi in A
(3)  identify its T neighborhood indexes as B(i)
(4) end for
(5) for each particle xi in S
(6)  associate xi with the angularly closest particle y (with index j) in A
(7)  get the T nearest neighbors in A of y by the neighborhood index B(j)
(8)  sort the T neighbors of y in ascending order based on the convergence value by equation (5)
(9)  select the first sorted neighbor of y as lbesti
(10) select gbesti randomly from the top 20% particles in A
(11) update the velocity of xi by equation (4)
(12) update the position of xi by equation (3)
(13) evaluate the objective values of xi
(14) end for
(15) return S
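The core branch of equation (4), as used inside Algorithm 1, can be sketched as follows; the surrounding bookkeeping (neighborhoods, archive sorting) is omitted, and parameter values are left to the caller.

```java
import java.util.Random;

// Sketch of the density-based velocity update of equation (4): a sparse particle
// (SDE above the median) follows its local best; a crowded one follows a global
// best drawn from the top 20% of the archive by the caller.
public class DensityVelocity {
    static final Random RNG = new Random(7);

    public static double[] update(double[] v, double[] x, double[] lbest,
                                  double[] gbest, double sde, double medSde,
                                  double w, double c1, double c2) {
        boolean sparse = sde > medSde;        // exploitation for sparse particles
        double[] guide = sparse ? lbest : gbest;
        double c = sparse ? c1 : c2;
        double[] vNew = new double[v.length];
        for (int d = 0; d < v.length; d++) {
            double r = RNG.nextDouble();
            vNew[d] = w * v[d] + c * r * (guide[d] - x[d]);
        }
        return vNew;
    }
}
```

Compared with equation (1), only one attractor acts on each particle per step, which is the intended reduction of disturbance in a high-dimensional objective space.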
3.2. An Angular-Guided Archive Update Strategy

After performing the abovementioned density-based velocity update for the swarm search, each particle flies to a new position, producing the new particle swarm S. In order to obtain a population consisting of a fixed number of elitist solutions, which maintains an excellent balance between convergence and distribution in the external archive, an appropriate selection strategy is required to update A. In this paper, we propose an angular-guided archive update strategy, following the principle of distribution first and convergence second; the pseudocode of updating the external archive is given in Algorithm 2.

(1) combine S and A into a union set U and set A = ∅
(2) normalize all particles in U by equation (6)
(3) for i = 1 to N
(4)  find the particle pair (xh, xu) in U with the minimum angular distance among all particle pairs in U
(5)  find the particle x in (xh, xu) that has the smaller angular distance to U by equation (9)
(6)  add x to A, and then delete x from U
(7) end for
(8) for each particle xi in U
(9)  initialize an empty subset Si
(10)  add the particle xi into Si
(11) end for
(12) for each particle xi in A
(13)  associate xi with the particle xt in U that has the smallest angular distance to xi
(14)  add xi into St
(15) end for
(16) set A = ∅
(17) for i = 1 to N
(18)  find the particle x of Si that has the best convergence performance computed by equation (5)
(19)  add x into A
(20) end for
(21) return A

In Algorithm 2, the inputs are S and A, and N is the size of both S and A. First, S and A are combined into a union set U, and A is emptied in line 1. Then, all the particles in U are normalized by equation (6) in line 2. After that, in lines 3–7, a set of N particles is found to represent the distribution of all particles. To do this, each time the particle pair (xh, xu) with the minimum angular distance in U, computed by equation (7), is found in line 4. Then, the particle x in (xh, xu) with the minimum angular distance to U is found, added to A, and deleted from U in lines 5-6. Here, the angular distance of x to U, termed Ad(x, U), is computed as

Ad(x, U) = min_{y ∈ U \ {xh, xu}} Ad(x, y), (9)

where Ad(x, y) is computed by equation (7). Next, N subsets Si (i = 1, …, N) are obtained in lines 8–11, where the ith particle of U is saved into Si. In lines 12–15, each particle of A is associated with the particle of U with the minimum angular distance, and this particle of A is added into the corresponding subset St in line 14. Then, A is emptied again in line 16. Finally, in each subset Si (i = 1, …, N), the particle with the best convergence performance computed by equation (5) is added into A in lines 17–20. A is returned in line 21 as the final result.

In order to facilitate understanding of this angular-guided strategy, a simple example is illustrated in Figure 1. The set U includes ten particles and their mapping solutions, which project the particles onto the hyperplane (for calculating angular distances) in the normalized biobjective space, as shown in Figure 1(a). First, five particles (half of U) that represent the distribution of all ten particles are kept in U, as shown in Figure 1(b), while the remaining five particles are moved into A, as in Figure 1(c). Second, each particle in Figure 1(c) is associated with its minimum-angular-distance particle in Figure 1(b). After that, five subsets are obtained, each preserving one representative particle of U together with its associated particles from A, as shown in Figure 1(d). Finally, in each subset, only the particle with the best convergence is selected, as in Figure 1(e).
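The whole procedure can be sketched compactly for the normalized biobjective case; this is a simplified reading of Algorithm 2 (angular pair pruning, association, then best convergence per group), not the authors' implementation, and the helper names are illustrative.

```java
import java.util.*;

// Compact sketch of the angular-guided archive update over normalized objective
// vectors: prune the closest angular pair until n representatives remain,
// associate the pruned particles back to their nearest representative, and keep
// the best-converged particle of each group.
public class AngularArchive {
    static double angle(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int k = 0; k < a.length; k++) { dot += a[k]*b[k]; na += a[k]*a[k]; nb += b[k]*b[k]; }
        return Math.acos(Math.min(1, Math.max(-1, dot / Math.sqrt(na * nb))));
    }
    static double con(double[] f) { double s = 0; for (double v : f) s += v; return s; }

    // Smallest angle from reps[i] to any other representative except reps[other].
    static double nearestAngle(List<double[]> reps, int i, int other) {
        double best = Double.POSITIVE_INFINITY;
        for (int j = 0; j < reps.size(); j++) {
            if (j == i || j == other) continue;
            best = Math.min(best, angle(reps.get(i), reps.get(j)));
        }
        return best;
    }

    public static List<double[]> update(List<double[]> union, int n) {
        List<double[]> reps = new ArrayList<>(union);  // U: candidate representatives
        List<double[]> removed = new ArrayList<>();    // particles pruned for crowding
        while (reps.size() > n) {
            int bi = 0, bj = 1; double best = Double.POSITIVE_INFINITY;
            for (int i = 0; i < reps.size(); i++)
                for (int j = i + 1; j < reps.size(); j++) {
                    double a = angle(reps.get(i), reps.get(j));
                    if (a < best) { best = a; bi = i; bj = j; }
                }
            // prune the pair member that is angularly closer to the rest of U
            double ai = nearestAngle(reps, bi, bj), aj = nearestAngle(reps, bj, bi);
            removed.add(reps.remove(ai <= aj ? bi : bj));
        }
        // associate each pruned particle with its nearest representative
        List<List<double[]>> groups = new ArrayList<>();
        for (double[] r : reps) { List<double[]> g = new ArrayList<>(); g.add(r); groups.add(g); }
        for (double[] x : removed) {
            int t = 0; double best = Double.POSITIVE_INFINITY;
            for (int i = 0; i < reps.size(); i++) {
                double a = angle(x, reps.get(i));
                if (a < best) { best = a; t = i; }
            }
            groups.get(t).add(x);
        }
        // keep the best-converged particle of each group
        List<double[]> archive = new ArrayList<>();
        for (List<double[]> g : groups)
            archive.add(Collections.min(g, Comparator.comparingDouble(AngularArchive::con)));
        return archive;
    }
}
```

The pairwise angle scan makes this sketch quadratic per removal; the design point is that search directions are fixed first, and only then is convergence used to pick one particle per direction.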

3.3. The Complete Algorithm of AGPSO

The above sections have introduced the main components of AGPSO, including the velocity update strategy and the archive update operator. In addition, an evolutionary search is also performed on the external archive. In order to describe the remaining operators and to facilitate the implementation of AGPSO, the pseudocode of the complete algorithm is provided in Algorithm 3. The initialization procedure is first executed in lines 1–6 of Algorithm 3. For each particle in S, its position is randomly generated and its velocity is set to 0 in line 3. After that, the objectives of each particle are evaluated in line 4. The external archive A is updated with Algorithm 2 in line 7 and sorted in ascending order of the convergence fitness of each particle, computed by equation (5), in line 8. After that, the iteration counter t is increased by one. Then, AGPSO steps into the main loop, which contains the particle search and the evolutionary search, until the maximum number of iterations is reached.

(1) let t = 0, initialize the swarm S with N particles, and set T as the neighborhood size
(2) for i = 1 to N
(3)  randomly initialize the position xi and set the velocity vi = 0
(4)  evaluate the objective values of xi
(5) end for
(6) randomly initialize A with N new particles
(7) A = Angular-guided Archive Update (S, A, N)
(8) sort A in ascending order based on equation (5)
(9) t = t + 1
(10) while t < tmax
(11)  calculate the SDE of each particle in S
(12)  sort the particles in S to get medSDE
(13)  Snew = Density-based Velocity Update (S, A, medSDE)
(14)  A = Angular-guided Archive Update (Snew, A, N)
(15)  apply the evolutionary search strategy on A to get a new swarm Snew
(16)  evaluate the objectives of the new particles in Snew
(17)  A = Angular-guided Archive Update (Snew, A, N)
(18)  sort A in ascending order based on equation (5)
(19)  t = t + 2
(20) end while
(21) output A

In the main loop, the SDE of each particle in S is calculated, and the particles are sorted by their SDE values to obtain the median SDE value, called medSDE, in lines 11-12. Then, the PSO-based search is performed in line 13 with the density-based velocity update of Algorithm 1. After that, the angular-guided archive update strategy of Algorithm 2 is executed in line 14, with the inputs S, A, and N (the sizes of S and A are both N). Then, in line 15, the evolutionary strategy is applied on A to optimize the swarm leaders, providing another search strategy to cooperate with the PSO-based search.

Finally, the objectives of the solutions newly generated by the evolutionary strategy are evaluated in line 16. The angular-guided archive update strategy of Algorithm 2 is then carried out again in line 17, and A is sorted in ascending order again in line 18 for selecting the global-best (gbest) particles. The iteration counter t is increased by 2 in line 19 because one PSO-based search and one evolutionary search are carried out in each iteration of the main loop. The abovementioned operations are repeated until the maximum number of iterations is reached. At last, the final particles in A are saved as the final approximation of the PF.

4. The Experimental Studies

4.1. Related Experimental Information
4.1.1. Involved MOEAs

Four competitive MOEAs are used to evaluate the performance of the proposed AGPSO. They are briefly introduced as follows. (1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in the environmental selection to guarantee the wideness and uniformity of the solution set. (2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance convergence in high-dimensional optimization. (3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, balancing the convergence and diversity of the population. (4) SPEA2 + SDE [47]: this algorithm develops a general modification of density estimation (shift-based density estimation) in order to make Pareto-based algorithms suitable for many-objective optimization.

Four competitive MOPSOs are also used to validate the performance of the proposed AGPSO. They are briefly introduced as follows. (1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, which considers both convergence and diversity with weight factors. (2) MaPSO [46]: this angle-based MOPSO chooses the leader positional information of each particle from its historical particles by using scalar projections to guide the particles. (3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems and solves them simultaneously using a collaborative search process by applying PSO directly. (4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

4.1.2. Benchmark Problems

In our experiments, sixteen unconstrained test problems are considered to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below.

In this study, the WFG [50] and MaF [31] test problems were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k − 1, where n and m are, respectively, the number of decision variables and the number of objectives. As suggested in [31], the value of k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters. As recommended in [51], k is set to 2 × (m − 1) and l is set to 20.

4.1.3. Performance Metrics

The goals of MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniform and spread distribution of solutions over the true PF (i.e., diversity).

The hypervolume (HV) [20] metric calculates the volume of the objective space between a prespecified reference point r, which is dominated by all points on the true PF, and the obtained solution set S. The HV metric is computed as

HV(S) = λ( ∪_{x ∈ S} [f1(x), r1] × [f2(x), r2] × … × [fm(x), rm] ),

where λ denotes the Lebesgue measure. When calculating the HV value, the points are first normalized, as suggested in [51], by the nadir vector (z1_nad, …, zm_nad), with zk_nad being the maximum value of the kth objective on the true PF. Solutions that cannot dominate the reference point are discarded (i.e., they are not included in the HV computation). For each objective, an integer larger than the worst value of the corresponding objective in the true PF is adopted for the reference point before normalization; after the normalization operation, the same reference point is used for all compared algorithms. A larger value of HV indicates a better approximation of the true PF.
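For intuition, the HV of a biobjective (minimization) set can be computed by sorting the points by the first objective and summing the rectangular slices they dominate up to the reference point; this simple sketch is only for m = 2, and higher-dimensional HV needs dedicated algorithms.

```java
import java.util.*;

// Sketch of the biobjective HV: sum of dominated rectangles up to reference point r.
public class Hypervolume {
    public static double hv2d(double[][] points, double[] r) {
        List<double[]> s = new ArrayList<>();
        for (double[] p : points)
            if (p[0] < r[0] && p[1] < r[1]) s.add(p);   // must dominate the reference point
        s.sort(Comparator.comparingDouble(p -> p[0]));  // sweep along the first objective
        double volume = 0.0, prevF2 = r[1];
        for (double[] p : s) {
            if (p[1] < prevF2) {                        // skip dominated points
                volume += (r[0] - p[0]) * (prevF2 - p[1]);
                prevF2 = p[1];
            }
        }
        return volume;
    }
}
```

For instance, the set {(0.25, 0.75), (0.75, 0.25)} with reference point (1, 1) covers two overlapping rectangles with a total volume of 0.3125.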

4.2. General Experimental Parameter Settings

In this paper, four MOPSOs (dMOPSO [17], SMPSO [14], NMPSO [24], and MaPSO [46]) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE) were used for performance comparison.

Because dMOPSO and MOEA/DD rely on weight vectors, the population sizes of these two algorithms are set equal to the number of weight vectors. The number of weight vectors is set to 210, 240, and 275, following [51], for the test problems with 5, 8, and 10 objectives, respectively. In order to ensure fairness, the other algorithms adopt a population/swarm size equal to the number of weight vectors.

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. pc and pm are the crossover probability and mutation probability, and ηc and ηm are, respectively, the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, and AGPSO), the control parameters c1 and c2 are sampled in [1.5, 2.5], and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the number of historical particles maintained by each particle, which is set to 3; the other algorithmic parameters are set as in [46]. In MOEA/DD, T is the neighborhood size, δ is the probability of selecting parent solutions from the T neighborhoods, and nr is the maximum number of parent solutions replaced by each child solution. Regarding VaEA, a threshold condition is used to determine whether two solutions search in similar directions, and it is set as recommended in [48].

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (given in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations tmax was reached. In this paper, tmax is set to 600, 700, and 1000 for 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can then be determined by MFE = N × tmax, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank sum test was run with a significance level α (0.05 in this paper), showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols “+,” “−,” and “∼” indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO under this statistical test, respectively.
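The labeling rule above can be sketched as follows. This is a normal-approximation rank-sum test without tie correction (a simplification of the exact test typically used), and the HV samples in the test are hypothetical:

```python
from math import erf, sqrt

# Illustrative sketch: two-sided Wilcoxon rank-sum test via the normal
# approximation (no tie correction), used to label a competitor "+", "-",
# or "~" against AGPSO at significance level alpha = 0.05.

def rank_sum_p(a, b):
    # Pool and rank all observations (1-based ranks; ties broken by order,
    # i.e., average ranks are omitted in this simplified sketch).
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = {i: r for r, (v, i) in enumerate(pooled, start=1)}
    n1, n2 = len(a), len(b)
    w = sum(ranks[i] for i in range(n1))          # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                 # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # std of W under H0
    z = (w - mu) / sigma
    # Two-sided p-value from the standard normal distribution.
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def label(competitor_hv, agpso_hv, alpha=0.05):
    if rank_sum_p(competitor_hv, agpso_hv) >= alpha:
        return "~"  # no statistically significant difference
    mean = lambda s: sum(s) / len(s)
    return "+" if mean(competitor_hv) > mean(agpso_hv) else "-"  # larger HV is better
```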

All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

4.3. Comparison with State-of-the-Art Algorithms

In this section, AGPSO is compared to four PSO-based algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols “+,” “−,” and “∼” have the meanings defined above, and the best mean result for each problem is highlighted in boldface.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV comparisons of the five MOPSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance on the WFG problems, as it is best in 21 out of 27 cases. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are best in 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second to last row of Table 2. The last row of Table 2 presents the one-to-one comparisons of AGPSO with each MOPSO on the WFG problems, where “−/∼/+” gives the number of test problems on which the corresponding algorithm performs worse than, similarly to, or better than AGPSO.

From the comparisons with NMPSO, MaPSO, dMOPSO, and SMPSO, AGPSO shows a clear advantage on five test problems: WFG1 with a convex and mixed PF; WFG3 with a linear and degenerate PF; and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO performs better than all the other MOPSOs except MaPSO: AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best with 10 objectives. For WFG7, with a concave PF and nonseparable position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO performs best only with 10 objectives and worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly on the WFG problems, and AGPSO shows superior performance over these traditional MOPSOs. From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively; conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows superior performance over NMPSO and MaPSO on most of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO and the four PSO-based algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second to last row in Table 3, AGPSO obtains 12 best results on the 21 test problems, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], whose inverted PF shape cannot be fitted well by a set of predefined reference points. Consequently, the reference point-based dMOPSO performs worse than the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), and AGPSO performs best on MaF1. MaF2 is derived from DTLZ2 to increase the difficulty of convergence, as all objectives must be optimized simultaneously to reach the true PF; AGPSO performs best on MaF2 with all numbers of objectives. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the MOPSOs considered, and AGPSO performs better than NMPSO with 5 objectives but worse with 8 and 10 objectives. MaF4, which contains a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better on it than any of the other MOPSOs in Table 3. Regarding MaF5, modified from DTLZ4 and with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; NMPSO is slightly better than AGPSO and MaPSO with 5 and 8 objectives but worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO with 10 objectives. Finally, AGPSO does not solve MaF7, which has a disconnected PF, very well, and NMPSO performs best on it with all numbers of objectives.

As shown in the last row of Table 3, SMPSO and dMOPSO perform poorly on the MaF problems, especially in high-dimensional objective spaces such as 8 and 10 objectives. The main reason is that the Pareto dominance relation used by SMPSO fails in high-dimensional spaces, while the purely decomposition-based dMOPSO cannot solve MaOPs well because a finite, fixed set of reference points does not provide enough search guidance in such spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, showing superior performance over both on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, but AGPSO is still better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows better performance than NMPSO on the MaF test problems.

4.3.2. Comparison Results with Four Current MOEAs

As shown in the preceding sections, AGPSO experimentally outperforms the four existing MOPSOs on most of the WFG and MaF test problems. However, there are not many existing PSO algorithms designed for MaOPs, and comparisons against PSO algorithms alone do not fully reflect the superiority of AGPSO; thus, we further compared AGPSO with four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparisons of AGPSO with the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving them; however, the experimental results show that they are still worse than AGPSO. As observed from the second to last row in Table 4, AGPSO obtains 22 best results on the 27 test cases, while MOEA/DD and SPEA2 + SDE perform best in 2 and 3 cases, respectively, and VaEA and θ-DEA are not best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO excels with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2 + SDE have the advantage; AGPSO is slightly worse than both on this problem but still better than VaEA and MOEA/DD. For the remaining comparisons on WFG1-WFG3 and WFG5-WFG7, AGPSO performs best on most of the test problems, which confirms the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2 + SDE perform better than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents superior performance over the four competitive MOEAs in most cases on WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the HV comparison results between AGPSO and the four competitive MOEAs on MaF1-MaF7 with 5, 8, and 10 objectives. The second to last row of Table 5 shows that AGPSO performs best in 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE perform best in 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1-MaF2, AGPSO is the best among the MOEAs mentioned above. For MaF3, MOEA/DD is the best, and AGPSO has a median performance among the compared algorithms. Concerning MaF4 with a concave PF, AGPSO shows the best performance with all numbers of objectives, and MOEA/DD performs poorly on this problem. For MaF5, AGPSO obtains only the third rank: it is better than VaEA, MOEA/DD, and SPEA2 + SDE overall but outperformed by θ-DEA with 5 and 10 objectives and by another competitor with 8 objectives. For MaF6, AGPSO is the best with 10 objectives but worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2 + SDE is the best with 5 and 8 objectives and another compared MOEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE in 13, 15, 18, and 16 out of 21 cases, respectively, while it is worse than them in 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, AGPSO shows better performance than the four compared MOEAs in most cases on MaF1-MaF7. This superiority stems mainly from the novel density-based velocity strategy, which intensifies the search in the areas around each particle, and the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

4.4. Further Discussion and Analysis on AGPSO

To further justify the advantages of AGPSO, it was compared in particular to VaEA, a recently proposed angle-based evolutionary algorithm. Their HV comparison results on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 3 and 5. According to these results, AGPSO shows superior performance over VaEA on most of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is defeated by VaEA in only 7 cases. This comparison contrasts our environmental selection with that of VaEA on test problems with different PF shapes (MaF1-MaF7), as well as on problems that share a PF shape but differ in characteristics such as convergence difficulty and deceptive PF properties (WFG1-WFG9). Therefore, the angular-guided method embedded into PSO is effective for tackling MaOPs. The angular-guided method is adopted as a distribution indicator, obtained by transforming the vector angle, to distinguish the similarity of particles. As reported for MOEA/C [52], the traditional vector angle performs better on concave PFs but worse on convex or linear PFs. The angular-guided method improves this situation: it performs better than the traditional vector angle on convex or linear PFs without behaving badly on concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly considers diversity first and convergence second when updating its external archive. On the other hand, AGPSO uses a novel density-based velocity update strategy to produce new particles, which intensifies the search in the areas around each particle and accelerates convergence while balancing convergence and diversity.
Based on these analyses, we believe that our environmental selection is more favorable for linear PFs and for concave PFs whose boundaries are not difficult to find. In addition, AGPSO applies an evolutionary operator to the external archive to further improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.
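The vector angle underlying these angle-based indicators can be sketched as follows. This is the plain angle between two particles' (normalized) objective vectors, as used in methods such as VaEA; AGPSO's angular-guided indicator is a transformation of this quantity, whose exact form is defined in the paper:

```python
from math import acos, sqrt, degrees

# Illustrative sketch: vector angle between two objective vectors, a common
# diversity/similarity indicator in angle-based many-objective methods.

def vector_angle(f1, f2):
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = sqrt(sum(a * a for a in f1))
    n2 = sqrt(sum(b * b for b in f2))
    # Clamp the cosine for numerical safety before acos.
    c = max(-1.0, min(1.0, dot / (n1 * n2)))
    return acos(c)

# Orthogonal objective vectors subtend an angle of 90 degrees.
angle = degrees(vector_angle([1.0, 0.0], [0.0, 1.0]))
```

A small angle means two particles search similar directions in the objective space, so one of them can be pruned for diversity during environmental selection.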

To visually support the above discussion and show the distribution of solutions in high-dimensional objective spaces, the final solution sets with the median HV values over 30 runs are plotted in Figures 2 and 3 for MaF1, with an inverted PF, and WFG3, with a linear and degenerate PF, respectively. In Figure 2, compared with VaEA, AGPSO does not cover the boundary completely, but the approximate PF obtained by AGPSO attaches more closely to the true PF. This supports our analysis of environmental selection, i.e., that it is more favorable for linear PFs and for concave PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4, from which it can be observed that AGPSO converges faster than the other four optimizers.

4.5. Further Discussion and Analysis on Velocity Update Strategy

The above comparisons have fully demonstrated the superiority of AGPSO over four MOPSOs and four MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we examine the velocity update formula more deeply. To demonstrate the superiority of the density-based velocity update formula, AGPSO is compared with a variant using the traditional velocity update formula of [14] (denoted AGPSO-I). Furthermore, to show that merely embedding the local best into the traditional velocity update method is not sufficient, AGPSO is also compared with a variant using equation (12) (denoted AGPSO-II).

Table 6 shows the HV comparisons of AGPSO and the two modified variants with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7. The mean HV values and the standard deviations (given in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second to last row of Table 6, AGPSO obtains 34 best results on the 48 test problems, while AGPSO-I performs best in 10 cases and AGPSO-II in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better in 33 cases, similarly in 5 cases, and worse in 10 cases; the density-based velocity update strategy is particularly effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these data, the novel velocity update strategy improves AGPSO and makes particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO with AGPSO-II, AGPSO performs better in 31 cases, similarly in 12 cases, and worse in 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant based on the traditional formula in (12): adding the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search intensity in the region surrounding that particle. It further confirms that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis on AGPSO

The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2. Algorithm 2 requires O(mN) time to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. Lines 3-7 require O(m^2 N^2) time to classify the population, and lines 8-11 require O(N). The association loop in lines 12-15 also requires O(m^2 N^2), and lines 17-20 need O(N^2). In conclusion, the overall worst-case time complexity of one generation of AGPSO is O(m^2 N^2), which is competitive with most optimizers, e.g., θ-DEA [48].
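The per-step costs listed above can be combined in a tiny bookkeeping sketch (illustrative only; the function and its keys are inventions of this sketch, with m and N as defined in the text):

```python
# Illustrative sketch: symbolic operation counts for one generation of the
# environmental selection, using the per-step costs stated in the text.

def worst_case_terms(m, N):
    return {
        "union_and_normalize": m * N,      # lines 1-2: O(mN)
        "classification": m * m * N * N,   # lines 3-7: O(m^2 N^2)
        "selection_setup": N,              # lines 8-11: O(N)
        "association": m * m * N * N,      # lines 12-15: O(m^2 N^2)
        "archive_update": N * N,           # lines 17-20: O(N^2)
    }

# With, e.g., m = 10 objectives and N = 275 particles, the O(m^2 N^2)
# classification/association terms dominate the total cost.
terms = worst_case_terms(m=10, N=275)
dominant = max(terms, key=terms.get)
```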

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The density-based velocity update strategy uses neighboring information (local best) to explore the area around each particle and uses globally optimal information (global best) to search for better-performing locations globally; this improves the quality of the produced particles by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of these two strategies enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO was assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2 + SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which deserves further investigation. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than traditional angle-based selection on concave and linear PFs, it sacrifices some performance on many-objective optimization problems with convex PFs; improving this will also be part of our future work.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or obtained from their original authors. Most of the compared algorithms, such as VaEA [27], θ-DEA [48], and MOEA/DD [50], are also included in our source code. The test problems are taken from the WFG [23] and MaF [24] suites; WFG [23] can be found at http://jmetal.sourceforge.net/problems.html and MaF [24] at https://github.com/BIMK/PlatEMO.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342, Natural Science Foundation of Guangdong Province under Grant 2017A030313338, Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489, and Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.