A Multiobjective Particle Swarm Optimization Algorithm Based on Competition Mechanism and Gaussian Variation

To address the shortcomings of particle swarm optimization (PSO) in solving multiobjective optimization problems, an improved multiobjective particle swarm optimization (IMOPSO) algorithm is proposed. In this study, a competitive strategy is introduced into the construction of the Pareto external archive to speed up the search for nondominated solutions and thus accelerate the establishment of the archive. In addition, sorting by crowding distance in descending order is used to limit the size of the external archive, and the particle parameters are adjusted dynamically. To address the insufficient population diversity in the later iterations of the algorithm, a time-varying Gaussian mutation strategy is applied to the particles in the external archive to improve diversity. The simulation results show that the improved algorithm has better convergence and stability than the other compared algorithms.


Introduction
In many engineering problems, the problem is composed of multiple goals that influence and conflict with each other. In practice, people often need to obtain the best solution for several objectives at the same time; an optimization problem with more than one objective that must be handled simultaneously is called a multiobjective optimization problem (MOP). Usually, the optimal solution of a multiobjective optimization problem obtained after analyzing the objective functions is the Pareto optimal solution set [1]. Therefore, in solving a multiobjective optimization problem [2], three key requirements must be addressed: (1) the solution set should be as close to the Pareto front as possible; (2) the good diversity of the population should be kept as much as possible; (3) the particles should be distributed effectively and uniformly in the solution space. In recent years, in order to solve MOPs effectively, multiobjective optimization algorithms based on different optimization theories have been continuously proposed. Deb et al. [3] proposed a multiobjective evolutionary algorithm based on nondominated sorting, called NSGA-II, and introduced a new selection operator into the algorithm to reduce its complexity. Zitzler et al. [4] proposed SPEA-II using the idea of Pareto domination. Coello et al. [5] proposed an improved multiobjective particle swarm algorithm that uses the concept of Pareto dominance to determine the flight direction of a particle and maintains previously found nondominated vectors in a global repository that is later used by other particles to guide their own flight.
This algorithm improves the efficiency of solving multiobjective problems. Tsai et al. [6] proposed an improved multiobjective particle swarm optimizer with proportional distribution and a jump improved operation, named PDJI-MOPSO, for dealing with multiobjective problems. PDJI-MOPSO maintains the diversity of newly found nondominated solutions via proportional distribution and combines the advantages of the wide-ranged exploration and extensive exploitation of PSO in the external repository with the jump improved operation to enhance the solution-searching abilities of particles. The introduction of clustering and disturbance allows the proposed method to sift representative nondominated solutions from the external repository and prevent solutions from falling into local optima. Mnif et al. [7] introduced a new approach called the multiobjective firework algorithm (MFWA). Mellouli et al. [8], in order to solve the two-dimensional cutting stock problem, combined a genetic algorithm with a linear programming model to estimate the best Pareto frontier for the two goals; the Pareto front provided by this algorithm is very close to the optimal front. Ali et al. [9] proposed a modified variant of the Differential Evolution (DE) algorithm for solving multiobjective optimization problems. The proposed algorithm, named the Multiobjective Differential Evolution Algorithm (MODEA), utilizes the advantages of Opposition-Based Learning for generating an initial population of potential candidates and the concept of random localization in the mutation step. Zhou et al. [10] proposed a generic transformation strategy, referred to as the Mucard strategy, which converts an MSCCOP into a low-dimensional multiobjective optimization problem (MOP) to simultaneously obtain all the (near-)optima of the constrained optimization problems in a single algorithmic run. Vargas et al. [11] studied the performance of the combination of the adaptive penalty technique called APM and the GDE3 algorithm. Sun et al.
[12] proposed a novel multiobjective particle swarm optimization algorithm based on Gaussian mutation and an improved learning strategy. This method uses a Gaussian mutation strategy to improve the consistency of the external archive and the current population. To improve the global optimal solution, different learning strategies are proposed for nondominated and dominated solutions, and an indicator is proposed to measure the distribution width of the nondominated solution sets generated by various algorithms. Coello et al. [13] proposed that an external set could be used to retain the nondominated solutions found during the iterative process, which improved the efficiency of the algorithm. Tsai et al. [6] used a specific global optimal value to replace the individual optimal value, making full use of the guiding role of the global optimal value. Zhang [14] proposed an improved MOPSO algorithm with a mutation operator that can maintain the diversity of optimal solutions and has good convergence. Zhang [15] proposed a MOPSO algorithm based on fuzzy dominance, and the experimental results showed the effectiveness of the proposed algorithm. Tao [16] proposed a multiobjective optimization algorithm combining PSO and differential algorithms; by generating common new particles and updating the particle velocity formula, the search efficiency of the algorithm was effectively improved. Li [17] proposed an improved MOPSO algorithm that updates the optimal positions of all particles through the Pareto dominance relationship; experiments show that the proposed algorithm can obtain better noninferior solutions. Ni [18] proposed an adaptive dynamic recombination PSO algorithm that adopts a high-level clustering algorithm; the experimental results show that this algorithm can improve the convergence speed and evolutionary ability of the algorithm.
More and more improved algorithms and strategies are used to solve various multiobjective optimization problems. However, few researchers have improved the algorithm from the perspective of the balance between local search capability (exploitation) and global search capability (exploration). Related research has pointed out that an effective balance between the exploration and exploitation of intelligent algorithms has a vital impact on optimization performance [19]. Different from other studies, we propose a multiobjective particle swarm optimization algorithm based on a competition mechanism strategy and Gaussian mutation to balance the exploration and exploitation of the algorithm and enable it to search the optimal locations and converge to the Pareto front more quickly. Finally, through simulation tests on multiobjective test functions and comparison with other multiobjective optimization algorithms, the results show that the proposed algorithm is superior to the compared algorithms in terms of convergence and population distribution. The rest of the paper is organized as follows. Section 2 describes the multiobjective optimization problem and its basic concepts. Section 3 presents the PSO algorithm. Section 4 introduces the improved multiobjective particle swarm optimization algorithm. Section 5 introduces the evaluation indicators and test functions used. The numerical experiment results and data analysis are described in Section 6. Finally, conclusions and prospects are presented in Section 7.

Formal Definition of Multiobjective Optimization Problem.
Generally, a multiobjective optimization problem (MOP) includes a set of objective functions and some constraints. Without loss of generality, an MOP with m objective functions and n decision variables can be described as follows [20, 21]:

min f(x) = (f_1(x), f_2(x), ..., f_m(x)),
s.t. g_i(x) ≤ 0, i = 1, 2, ..., p,
     h_j(x) = 0, j = 1, 2, ..., q,

where x = (x_1, x_2, ..., x_n) ∈ R^n is the decision vector; f = (f_1, f_2, ..., f_m) is the multiobjective function (m ≥ 2); f_k(x): R^n ⟶ R (k = 1, 2, ..., m) is the k-th objective function; g_i: R^n ⟶ R (i = 1, 2, ..., p) is the i-th inequality constraint; and h_j: R^n ⟶ R (j = 1, 2, ..., q) is the j-th equality constraint.

Pareto Optimal Solution Concepts.
The Pareto optimal solution was discovered by the Italian economist Pareto. Originally limited to the field of economics, this rule has gradually extended to various fields of social life and has been widely recognized. The strict Pareto optimal solution can be described by multiobjective mathematical programming. Assume that there are several objectives at the same time, that these objectives are independent of each other, and that they cannot be weighed and summed; new optimization theories are needed to solve such problems. Generally, if one subobjective is improved, other subobjectives are sacrificed, so it is impossible to improve all subobjectives at the same time. The Pareto optimal solution is also called the nondominated solution. In a multiobjective optimization problem, due to conflict and incomparability among the subobjectives, a solution is often the best in one subobjective and may be the worst in others. If, for a given solution, improving any subobjective function inevitably weakens at least one other subobjective function, that solution is called a nondominated solution or Pareto optimal solution. All Pareto optimal solutions constitute the Pareto optimal solution set, and these solutions are mapped by the objective functions to form the Pareto optimal front. Pareto proposed the concept of the nondominated set of multiobjective solutions in 1896, which is defined as follows: for any two solutions S_1 and S_2, if S_1 is better than S_2 on all objectives, then we say that S_1 dominates S_2; if S_1 is not dominated by any other solution, then S_1 is called a nondominated solution, also called a Pareto optimal solution.

Related Definitions of Multiobjective Optimization Problem.
The following introduces some basic concepts in the multiobjective optimization problem [22, 23].

Definition 1 (feasible domain). In the decision space, the feasible domain is represented by X, the set of all decision vectors that satisfy the constraints: X = {x ∈ R^n | g_i(x) ≤ 0, i = 1, 2, ..., p; h_j(x) = 0, j = 1, 2, ..., q}.

Definition 2 (Pareto dominance). For two solutions x_1, x_2 ∈ X, if f_k(x_1) ≤ f_k(x_2) for every k ∈ {1, 2, ..., m} and f_k(x_1) < f_k(x_2) for at least one k, then x_1 dominates x_2, so x_1 ≺ x_2.
Definition 4 (Pareto optimal solution). The Pareto optimal solution is also known as the nondominated solution. For a solution x* in the feasible domain, if x* is not dominated by any other solution in the feasible domain, x* is called a Pareto optimal solution, which can be written as follows: ∄x ∈ X: x ≺ x*.
Definition 5 (Pareto optimal solution set, PS). The set of all nondominated solutions in the feasible domain is called the Pareto optimal solution set and can be defined as follows: PS = {x* | ∄x ∈ X: x ≺ x*}.
Definition 6 (Pareto optimal frontier, PF). The objective vector set corresponding to the Pareto optimal solution set is the Pareto optimal frontier, also known as the Pareto optimal front end or Pareto equilibrium surface, which is defined as follows: PF = {f(x*) | x* ∈ PS}.

Particle Swarm Optimization Algorithm
Particle swarm optimization (PSO) is a heuristic swarm intelligence algorithm that solves optimization problems by imitating the swarm behavior of birds [24]. The algorithm has high stability and good adaptability; it is an intelligent global optimization algorithm that has attracted the attention of many scholars in recent years. Each particle in the PSO algorithm corresponds to a bird in the population, so each particle has its own velocity and position. Through self-learning and social learning, the particles move in the solution space to find the global optimum. Assuming that the population size is N and the dimension of the search space is D, the update formulas for the velocity and position of a particle are as follows:

v_ij(t + 1) = ω·v_ij(t) + c_1·r_1·(pbest_ij − x_ij(t)) + c_2·r_2·(gbest_j − x_ij(t)),
x_ij(t + 1) = x_ij(t) + v_ij(t + 1),

where v_ij (x_ij) is the velocity (position) in the j-th dimension, t is the current iteration number, velocity and position are limited to a certain range, ω is the inertia weight, c_1 and c_2 are two positive constants in [0, 4] representing the learning factors, r_1 and r_2 are two random numbers in [0, 1], pbest_ij is the coordinate of the individual best position pbest_i of particle i in the j-th dimension, and gbest_j is the coordinate of the global best position of the population in the j-th dimension. ω is determined by the following formula:

ω = ω_max − (ω_max − ω_min)·t/T_max,

where ω_max and ω_min are the maximum and minimum values of the inertia weight (generally set to 0.9 and 0.4, respectively), t is the number of current iterations, and T_max is the maximum number of iterations.
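As an illustration, the velocity, position, and inertia-weight formulas above can be sketched in NumPy. The velocity bound v_max and the default learning factors c1 = c2 = 2.0 are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, v_max=1.0):
    """One swarm-wide velocity/position update, following the formulas above.

    x, v, pbest have shape (N, D); gbest has shape (D,). The bound v_max
    and the defaults c1 = c2 = 2.0 are illustrative assumptions."""
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = np.clip(v_new, -v_max, v_max)    # velocity limited to a range
    return x + v_new, v_new

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight from the formula above."""
    return w_max - (w_max - w_min) * t / t_max
```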

Competition Mechanism.
The competition mechanism in this paper quickly finds the nondominated solution set and builds the external archive set in the MOPSO algorithm. First, select a particle x from the population s; generally, the first particle in the population is selected. Then, let s = s − {x} and compare x with each particle in the population s on the basis of the Pareto dominance relation between their objective function values. If x ≺ y, particle y is removed from the population s. This method is also adopted when the nondominated solution set enters the external archive set. As more and more particles are removed, fewer comparisons are performed, which effectively reduces the complexity of the algorithm and improves its search speed. The pseudocode of the algorithm is shown in Table 1.
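A minimal sketch of the competition mechanism described above, assuming a minimization problem and the standard Pareto dominance test; the helper names are hypothetical.

```python
def dominates(fx, fy):
    """Pareto dominance for minimization: fx dominates fy."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def competitive_filter(population, f):
    """Competition-mechanism sketch: pick the first particle x, remove every
    particle it dominates, and keep x only if no survivor dominated it."""
    s = list(population)
    archive = []
    while s:
        x = s.pop(0)                      # select the first particle of s
        fx = f(x)
        x_dominated = False
        survivors = []
        for y in s:                       # compare x with each remaining particle
            fy = f(y)
            if dominates(fx, fy):
                continue                  # x dominates y: remove y from s
            if dominates(fy, fx):
                x_dominated = True
            survivors.append(y)
        s = survivors
        if not x_dominated:
            archive.append(x)             # x survives into the external archive
    return archive
```

Because Pareto dominance is transitive, removing each dominated particle as soon as it loses a comparison still yields exactly the nondominated set, while the shrinking population keeps later rounds cheap.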

External Archives Maintenance Strategy.
In a MOPSO algorithm, each iteration produces a set of Pareto solutions. Therefore, an external archive is needed to store the nondominated solutions produced by each iteration, and this solution set forms the Pareto front [25]. After each iteration, the Pareto front is updated as well. However, as the number of iterations increases, the size of the external archive grows, and the complexity of the algorithm greatly increases. Therefore, it is necessary to limit the size of the external archive to reduce the complexity of the algorithm. This paper sorts the archive members by crowding distance in descending order to limit the size of the external archive set.
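The truncation step can be sketched as follows. The crowding-distance computation uses the usual NSGA-II definition, which the text does not spell out, so treat that choice as an assumption.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each point in the objective matrix F of shape (n, m)."""
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        d[order[0]] = d[order[-1]] = np.inf       # boundary points are kept
        span = F[order[-1], k] - F[order[0], k]
        if span == 0:
            continue
        for i in range(1, n - 1):
            d[order[i]] += (F[order[i + 1], k] - F[order[i - 1], k]) / span
    return d

def truncate_archive(archive, F, max_size):
    """Sort by crowding distance in descending order and keep the first max_size."""
    if len(archive) <= max_size:
        return archive
    d = crowding_distance(F)
    keep = np.argsort(-d)[:max_size]              # descending order of distance
    return [archive[i] for i in keep]
```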

Selecting pbest and gbest.
In a MOPSO algorithm, selecting gbest is very important, since it directly affects the convergence speed and search ability. In IMOPSO, the size of the particle population is fixed and particles are not deleted from the population, but the positions of the particles need to be adjusted to update pbest and gbest. Under multiple objectives, gbest normally lies in a group of noninferior solutions rather than being a single gbest position. When gbest and pbest are mutually nondominated, each particle may have more than one pbest. Therefore, pbest and gbest must be chosen by an appropriate method.

Selecting pbest.
The specific process is as follows: pbest_i^t is used to record the individual position and to save the nondominated solutions of the particles during evolution.
The update formula of pbest_i^t for the t-th generation is as follows:

pbest_i^t = x_i^t, if x_i^t ≺ pbest_i^(t−1); pbest_i^t = pbest_i^(t−1), otherwise,

where pbest_i^t is the optimal position of the i-th particle over the first t generations and x_i^t is the position of the i-th particle in the t-th generation.

Selecting gbest.
The specific process is as follows: in the selection of gbest, this paper adopts the Pareto principle, also known as the asymmetry principle or the 80/20 law; that is, 80% of the results in practical issues are produced by 20% of the key factors [26]. Therefore, in this study, the global optimal value is randomly selected from the top 20% of the nondominated solutions in the external archive.
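A sketch of the gbest selection. The text does not state how the archive is ranked before taking the top 20%, so ranking by crowding distance (from the maintenance strategy above) is an assumption here.

```python
import random

def select_gbest(archive, crowding):
    """gbest selection sketch: rank the external archive by crowding distance
    in descending order (an assumed criterion) and sample uniformly from the
    top 20%, per the 80/20 rule in the text."""
    ranked = sorted(range(len(archive)), key=lambda i: -crowding[i])
    top = ranked[:max(1, len(ranked) // 5)]   # top 20%, at least one member
    return archive[random.choice(top)]
```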

Parameter Improvement Strategy.
The inertia weight ω and the learning factors c_1 and c_2 in the PSO algorithm have a considerable influence on the searching ability of the population in the target region. However, the traditional linear adjustment strategies for these two types of parameters cannot effectively reflect the search process of the algorithm. Therefore, this article adopts a nonlinear dynamic adjustment strategy to reflect the search process more accurately and to balance exploitation and exploration more effectively.

Inertia Weight.
The inertia weight ω determines the influence of the velocity of the previous generation of particles on the current velocity. An appropriate adjustment rule for ω can effectively balance the exploitation and exploration of the algorithm. If ω = 0, the particle velocity is determined only by the current position and the particle inherits nothing from the previous generation's velocity, so the algorithm easily falls into a local optimum. If ω ≠ 0, a larger inertia weight strengthens the global search capability, while a smaller inertia weight strengthens the local search capability. In order to balance exploration and exploitation effectively and improve the search performance of the algorithm, this paper adopts a new nonlinear adjustment strategy in which ω(t) changes with the number of iterations; the parameter p_1 is set to one-third of the maximum number of iterations.
Figure 1(a) shows the change curve of the new inertia weight.

Learning Factor.
The learning factor determines the influence of the self-learning ability and the social-learning ability on particle motion during the search, which reflects the state of information exchange between particles. In recent years, many scholars have revised the learning factors. Some have proposed asynchronous learning factors; that is, in the initial search phase, the particles have a greater self-learning ability and a smaller social-learning ability, while in the later phase, the asynchronous learning factors enhance the ability of particles to move toward the global optimal position and obtain high-quality particles, giving the algorithm a higher probability of converging to the global optimal solution. For the learning factors, the nonlinear function of the new inertia weight mentioned above [27] is adopted in this paper, so the learning factors also adjust dynamically as the inertia weight changes. Figure 1(b) shows the change curve of the learning factor.

Time-Varying Gaussian Mutation.
In solving multiobjective optimization problems, the fast convergence of PSO is not necessarily advantageous. Instead, it may make the population prematurely gather around certain particles or a certain area and lose diversity, which easily leads to premature convergence. Therefore, to enhance population diversity and prevent the algorithm from falling into a local optimum, based on the literature [13], this paper designs a time-varying particle mutation that produces new solutions by applying a mutation operator to the external archive. In the perturbation formula, P_m is the mutation probability; r_g follows a Gaussian distribution with a mean of 0 and a variance of 1; mut_range represents the scope of action of the mutation; ub(j) and lb(j) are the upper and lower bounds of the decision variable in the j-th dimension, respectively; and M_r is the mutation parameter, set to 0.5 in this paper.
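The mutation step might look as follows. Since the perturbation formula itself is not reproduced here, the specific time-varying form of mut_range is an assumed linear decay over the iterations, not the paper's exact expression.

```python
import numpy as np

def gaussian_mutate(x, lb, ub, t, t_max, p_m=0.1, m_r=0.5, rng=None):
    """Time-varying Gaussian mutation sketch. The shrinking action range
    mut_range is an ASSUMED form: m_r * (1 - t / t_max)."""
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()
    mut_range = m_r * (1 - t / t_max)             # assumed time-varying range
    for j in range(len(x)):
        if rng.random() < p_m:
            r_g = rng.normal(0.0, 1.0)            # N(0, 1) perturbation
            x[j] += r_g * mut_range * (ub[j] - lb[j])
            x[j] = min(max(x[j], lb[j]), ub[j])   # clip to the variable bounds
    return x
```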

e Specific Steps of the IMOPSO Algorithm
Step 1. Initialize the group's position and speed. Set the iteration times, population size, and algorithm parameters.
Step 2. Calculate the fitness value of each particle. Generate a nondominating solution set according to dominating relation.
Step 3. Update the external archive set.
Step 4. Sort the particles of the external archive set in descending order of crowding distance and determine whether the archive size is exceeded. If it is exceeded, remove the nondominated solutions beyond the size limit.
Step 5. Update the individual optimal position; if it is the first generation, directly select the initial position of each particle as the individual optimal value; otherwise, update according to the Pareto dominance relationship.
Step 6. Randomly select the global optimal location from the external archive set ranked in the top 20% nondominant solution.
Step 7. Update the velocity according to the velocity formula. If the particle velocity v_i > v_max, then let v_i = v_max; if the particle velocity v_i < v_min, then let v_i = v_min.
Step 8. Update the position of the next generation of each particle and apply the time-varying Gaussian mutation to the particles in the external archive according to the probability P to avoid premature convergence of the algorithm.
Step 9. Determine whether the condition (the maximum number of iterations) is met, and if so, end the loop. Otherwise, return to Step 2 to continue the iteration.
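Steps 1-9 can be sketched as a compact, runnable loop. This is a simplified illustration only: it uses a linear inertia weight, fixed learning factors, uniform random gbest selection, and random archive trimming in place of the paper's nonlinear schedules, top-20% selection, crowding-distance truncation, and Gaussian mutation.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return bool(np.all(a <= b) and np.any(a < b))

def imopso_sketch(f, lb, ub, n=30, t_max=50, archive_max=50, seed=0):
    """Runnable sketch of Steps 1-9 with simplified components (see lead-in)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n, dim))                      # Step 1: initialize
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])                  # Step 2: fitness
    archive = [x[i].copy() for i in range(n)               # initial nondominated set
               if not any(dominates(pbest_f[j], pbest_f[i]) for j in range(n))]
    for t in range(t_max):
        w = 0.9 - 0.5 * t / t_max                          # simplified inertia weight
        for i in range(n):
            g = archive[rng.integers(len(archive))]        # Step 6 (uniform, simplified)
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + 2 * r1 * (pbest[i] - x[i]) + 2 * r2 * (g - x[i])  # Step 7
            x[i] = np.clip(x[i] + v[i], lb, ub)            # Step 8
            fx = f(x[i])
            if dominates(fx, pbest_f[i]):                  # Step 5: pbest update
                pbest[i], pbest_f[i] = x[i].copy(), fx
        cand = archive + [x[i].copy() for i in range(n)]   # Step 3: archive update
        cf = [f(c) for c in cand]
        archive = [cand[i] for i in range(len(cand))
                   if not any(dominates(cf[j], cf[i])
                              for j in range(len(cand)) if j != i)]
        if len(archive) > archive_max:                     # Step 4 (random trim here)
            idx = rng.choice(len(archive), archive_max, replace=False)
            archive = [archive[i] for i in idx]
    return archive                                         # Step 9: iteration bound hit
```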

Performance Measurement.
The quality evaluation mainly focuses on the distance between the solutions produced by the algorithms and the Pareto optimal solutions of the MOPs, and on the extent covered by the produced solutions. In this paper, the following performance indicators are adopted.

Generational Distance.
Generational Distance (GD) is used to evaluate the convergence performance of multiobjective algorithms. It computes the average minimum Euclidean distance from each point in the solution set n to the reference set n*. The calculation formula is defined as follows [28]:

GD(n, n*) = sqrt( Σ_{y∈n} min_{x∈n*} dis(x, y)² ) / |n|,

where n is the solution set obtained by the algorithm, n* is a set of uniformly distributed reference points sampled from the true Pareto front (PF), and dis(x, y) is the Euclidean distance between the point y in the solution set n and the point x in the reference set n*. The smaller GD is, the closer the Pareto optimal solution set obtained by the algorithm is to the true Pareto front, and the better the convergence of the algorithm is. The ideal value of GD is 0; that is, the Pareto optimal solutions obtained by the algorithm lie on the true Pareto front.
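GD can be computed directly from the formula above:

```python
import numpy as np

def gd(n_set, n_ref):
    """Generational Distance: root of the summed squared nearest-neighbor
    distances from each solution to the reference set, divided by |n|."""
    n_set = np.asarray(n_set, dtype=float)
    n_ref = np.asarray(n_ref, dtype=float)
    dmin = np.array([np.min(np.linalg.norm(n_ref - y, axis=1)) for y in n_set])
    return float(np.sqrt(np.sum(dmin ** 2)) / len(n_set))
```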

Spacing.
Spacing (SP) is used to evaluate the distribution of the Pareto optimal solutions obtained by the algorithm. Its mathematical expression is [29, 30]

SP = sqrt( (1/(n − 1)) Σ_{i=1}^{n} (d̄ − d_i)² ),

where n represents the number of nondominated solutions found and d̄ is the average of all d_i, with d_i calculated as follows:

d_i = min_{j≠i} Σ_{m=1}^{k} |f_m(x_i) − f_m(x_j)|,

where k represents the number of objective functions. The smaller SP is, the better the distribution of the solutions obtained by the algorithm. The ideal value of SP is 0; that is, the obtained Pareto optimal solutions are evenly distributed in the objective space.
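The SP metric can be computed as follows; this follows Schott's spacing definition, which matches the formulas above:

```python
import numpy as np

def spacing(F):
    """Schott's spacing metric: standard deviation of each solution's minimum
    Manhattan distance (in objective space) to any other solution."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    d = np.array([min(np.sum(np.abs(F[i] - F[j])) for j in range(n) if j != i)
                  for i in range(n)])
    return float(np.sqrt(np.sum((d.mean() - d) ** 2) / (n - 1)))
```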

Inverted Generational Distance.
GD can only evaluate the convergence of the algorithm. In order to further evaluate the comprehensive performance of the algorithm, the Inverted Generational Distance (IGD) was proposed. IGD is the average Euclidean distance from each reference point in the reference set n* to the closest solution in n. The closer the IGD value is to 0, the better the overall performance of the algorithm. The calculation formula of IGD is as follows:

IGD(n, n*) = ( Σ_{x∈n*} min_{y∈n} dis(x, y) ) / |n*|,

where n is the solution set obtained by the algorithm, n* is a set of uniformly distributed reference points sampled from the true PF, and dis(x, y) is the Euclidean distance between the point x in the reference set n* and the point y in the solution set n.
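IGD can likewise be computed directly from its formula:

```python
import numpy as np

def igd(n_set, n_ref):
    """Inverted Generational Distance: mean distance from each reference
    point to its nearest obtained solution."""
    n_set = np.asarray(n_set, dtype=float)
    n_ref = np.asarray(n_ref, dtype=float)
    dmin = np.array([np.min(np.linalg.norm(n_set - x, axis=1)) for x in n_ref])
    return float(np.mean(dmin))
```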

Test Functions.
To test the performance of the IMOPSO algorithm, this paper selects classic multiobjective test functions for the simulation tests [27, 31]. The test functions are given in Tables 2 and 3.

Experimental Analysis and Comparison
Through experiments, the IMOPSO algorithm proposed in this paper was compared with the NSGA-II, SPEA-II, MOPSO, and NSGA-III [32] algorithms. The population size was set to 100, the maximum number of iterations to 10,000, and the size of the external archive to 100. The specific parameters of each algorithm were set as shown in Table 3. Among them, pc is the crossover probability, pm is the mutation probability, mu is the mutation rate, sep is the variable step length, ng is the grid control maximal particle number, cg is the cross probability, aph is the expansion probability, and t_1 and t_2 are mutation parameters. Each of the NSGA-II, SPEA-II, MOPSO, NSGA-III, and IMOPSO algorithms was run independently 30 times on each test function. Tables 4-6, respectively, show the convergence, distribution, and comprehensive performance of the algorithms. The tests were completed in MATLAB 2018a on a Windows 10 system, and the computer was configured with an Intel Core i7 3.40 GHz processor. The numerical experiment results are shown in Tables 4-6, with the optimal results marked in bold. Table 4 gives the numerical results of the evaluation index GD, which measures the convergence performance of the algorithms; Table 5 gives the results of the evaluation index SP, which measures the distribution; Table 6 gives the results of the evaluation index IGD, which represents the comprehensive performance. It can be seen from Table 4 that, compared with the other algorithms, on the ZDT test functions IMOPSO obtained the best results 4 times and NSGA-III obtained the best result once; on the DTLZ test functions, IMOPSO obtained the best results 5 times and NSGA-III obtained the best results 2 times.
It can be concluded that the convergence of IMOPSO is superior to those of the other compared algorithms, and it can further be concluded that the learning-factor adjustment rules adopted in this paper effectively improve the convergence of the algorithm. The distribution results are given in Table 5: on the ZDT test functions, IMOPSO obtained the best results 5 times; on the DTLZ test functions, IMOPSO obtained the best results 4 times, NSGA-II obtained the best results 2 times, and SPEA-II obtained the best result once. By analyzing the numerical results in Table 5, it can be seen that the distribution of the algorithm is effectively improved by introducing the time-varying Gaussian mutation strategy into IMOPSO. By analyzing Table 6, it can be seen that, on the ZDT test functions, IMOPSO obtained the best results 3 times, while NSGA-II and SPEA-II each obtained the best result once; on the DTLZ test functions, IMOPSO obtained the best results 4 times, and NSGA-II and MOPSO obtained the best results 2 times and once, respectively. On the whole, IMOPSO therefore has the best comprehensive performance (Table 7).
Examining the individual test functions shows more intuitively the characteristics of each optimization problem and the behavior of each algorithm: in terms of GD, the convergence of IMOPSO on DTLZ2 and DTLZ4 is worse than that of NSGA-III, but better than those of the other compared algorithms on the remaining test functions; in terms of SP, NSGA-II is better than IMOPSO on DTLZ1 and DTLZ6, and SPEA-II is better than IMOPSO on DTLZ6, but on the remaining DTLZ test functions the performance of IMOPSO is better than those of the other compared algorithms.

Conclusions and Prospects
To address the tendency of MOPSO toward premature convergence, poor distribution, and so forth in solving multiobjective optimization problems, this paper proposes an improved multiobjective particle swarm optimization (IMOPSO) algorithm. By introducing a competition mechanism strategy into IMOPSO and dynamically adjusting the inertia weight and learning factors of the algorithm, the convergence performance is effectively improved. In addition, to address the insufficient distribution of the algorithm during optimization, a time-varying Gaussian mutation is introduced to increase the diversity of the population and improve its distribution. The analysis of the numerical experiment results on the ZDT and DTLZ test functions shows that, as a whole, the convergence and distribution of the algorithm are better than those of the other compared algorithms. Finally, in the comparison of comprehensive performance, the IMOPSO algorithm is also the best among the compared algorithms.
It should be pointed out that this article only conducts numerical experiments on benchmark functions; the performance of the algorithm on actual multiobjective optimization problems, such as multiobjective job shop scheduling and redundancy allocation with a cold-standby strategy, needs further verification and analysis. This will be our future research work.

Data Availability
At present, the raw data cannot be shared because they form part of an ongoing study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.