Multiobjective Parallel Chaos Optimization Algorithm with Crossover and Merging Operation

Chaos optimization algorithm (COA) usually utilizes chaotic maps to generate pseudorandom numbers that are mapped onto the decision variables of global optimization problems. Recently, COA has been applied to many single objective optimization problems, and simulation results have demonstrated its effectiveness. In this paper, a novel parallel chaos optimization algorithm (PCOA) is proposed for multiobjective optimization problems (MOOPs). As an improvement to COA, PCOA is a population-based optimization algorithm that not only reduces the sensitivity to initial values but also adapts itself to MOOPs. In the proposed PCOA, crossover and merging operations are applied to exchange information between parallel solutions and produce new potential solutions, which enhances the global and fast search ability of the algorithm. To test the performance of PCOA, it is simulated on several benchmark functions for MOOPs and on mixed H2/H∞ controller design. The simulation results show that PCOA is a viable alternative approach for MOOPs.


Introduction
Optimization problems in which at least two objectives need to be optimized simultaneously are called multiobjective, and multiobjective optimization problems (MOOPs) are very common in the real world and in many engineering areas. Solving MOOPs has become a crucial part of the optimization field. The solution of MOOPs is usually not easy because the multiple objectives tend to be in conflict with each other. Generally, the optimal solution of a MOOP is a set of optimal solutions, widely known as Pareto-optimal solutions [1]. Each solution represents a particular performance trade-off between the objectives and can be considered optimal.
Generating the Pareto-optimal set can be computationally expensive and is often infeasible, because the complexity of the underlying application prevents exact methods from being applicable. For this reason, a number of multiobjective search strategies have been developed over the past decades. One of the most important developments is that of evolutionary algorithms, and a number of multiobjective evolutionary algorithms have been suggested [2][3][4][5][6][7][8][9]. Evolutionary algorithms are used for MOOPs because they provide a set of solutions in a single run and do not require the objectives to be aggregated. Additionally, the performance of evolutionary algorithms is not affected by the shape of the Pareto front [6]. Other recent effective multiobjective optimization algorithms include multiobjective particle swarm optimization [10,11], multiobjective artificial immune algorithms [12,13], ant colony optimization [14], and artificial bee colony (ABC) [15] for MOOPs.
For single objective optimization problems, many existing algorithms, such as genetic algorithm (GA), simulated annealing (SA), particle swarm optimization (PSO), differential evolution (DE), harmony search algorithm (HSA), and compact pigeon-inspired optimization [16][17][18], have demonstrated excellent performance, but getting trapped in local optima remains a challenge. Chaos optimization algorithm (COA) is a recently developed global optimization technique based on chaos theory, specifically on the use of numerical chaotic sequences. The literature has demonstrated that COA can carry out overall global searches at higher speed than stochastic ergodic searches that depend on probabilities [16][17][18][19][20][21][22]. In addition to the development of COA, chaos has also been integrated with optimization algorithms for MOOPs, such as the chaotic nondominated sorting genetic algorithm (CNSGA) [23,24], chaotic sequences based multiobjective differential evolution (CS-MODE) [25], the chaos multiobjective immune algorithm [26], multiobjective chaotic particle swarm optimization [27,28], multiobjective chaotic ant swarm optimization [29], and the multiobjective chaotic artificial bee colony algorithm [30]. Because of the ergodicity and pseudorandomness of chaos, applying chaos in multiobjective optimization algorithms is an interesting alternative for enriching population diversity and escaping from local optima [24,29]. The parallel strategy is widely used for its great capability of overcoming sensitivity to initial values, by producing a number of initial solutions that are uniformly distributed in the solution space. New optimization algorithms developed by applying the parallel strategy to conventional intelligent algorithm frameworks, such as the parallel particle swarm optimization algorithm and the parallel multiverse optimizer, show excellent robustness and convergence speed.
Although COA has been successfully applied to single objective optimization problems, as far as we know, there has been no literature concerning COA for MOOPs until now. The reasons for this may be as follows: (1) COA is an individual-based algorithm, which is not suitable for MOOPs with sets of Pareto-optimal solutions; (2) the chaotic sequences are pseudorandom and sensitive to the initial conditions; therefore, the success of COA crucially depends on appropriate starting values.
In this paper, a novel population-based parallel chaos optimization algorithm (PCOA) with crossover and merging operations is proposed for MOOPs. In PCOA, multiple chaos variables (like a population) are simultaneously mapped onto one decision variable, so PCOA searches from diverse initial points and reduces the sensitivity to initial values. In addition, crossover and merging operations are used to exchange information within the population and produce new potential solutions. In effect, the proposed algorithm combines the global search ability of PCOA with the local search accuracy of the crossover and merging operations. To preserve the diversity of Pareto optimality, an external elitist archive and an accurate crowding measure are applied in PCOA for MOOPs. The rest of this paper is organized as follows. Section 2 briefly describes MOOPs. The PCOA approach is introduced in Section 3. Section 4 presents PCOA for MOOPs. Test problem simulation results show the effectiveness of the PCOA approach in Section 5. In Section 6, the PCOA approach is applied to mixed H2/H∞ controller design. Conclusions are presented in Section 7.

Multiobjective Optimization Problems (MOOPs)
MOOPs with conflicting objectives do not have a single solution. Therefore, multiobjective algorithms (MOAs) aim to obtain a diverse set of nondominated solutions, i.e., solutions that balance the trade-off between the various objectives, referred to as the Pareto-optimal front (POF). Another goal of MOAs is to find a POF that is as close as possible to the true POF of the MOOP. The objectives of MOOPs are normally in conflict with one another; i.e., improvement in one objective leads to a worse solution for at least one other objective. Therefore, when solving MOOPs, the definition of optimality used for single objective optimization problems has to be adjusted. For MOOPs, when one decision vector dominates another, the dominating decision vector is considered the better decision vector.
The MOOPs can be mathematically described as

min f(x) = [f1(x), f2(x), ..., fD(x)]^T, (1)

where f(x) is the objective vector to be optimized and D is the number of objective functions, subject to

g_i(x) ≤ 0, i = 1, 2, ..., m, (2)
h_j(x) = 0, j = 1, 2, ..., p, (3)

where (2) is the set of inequality constraints, (3) is the set of equality constraints, and m and p are the numbers of inequality constraints and equality constraints, respectively.

We call x = [x1, x2, ..., xn]^T the vector of decision variables, and R is the feasible region. The MOOP determines the particular set of values x*1, x*2, ..., x*n, yielding the optimum values of all the objective functions, from the set F of all vectors that satisfy (2) and (3).
Several definitions for MOOPs are given as follows [1].

Definition 3 (Pareto-optimal set): for a given MOP f(x), the Pareto-optimal set (P*) is defined as P* = {x ∈ Ω | there is no x′ ∈ Ω such that f(x′) dominates f(x)}.

Definition 4 (Pareto front): for a given MOP f(x) and Pareto-optimal set P*, the Pareto front PF* is defined as PF* = {f(x) | x ∈ P*}.

In the general case, it is impossible to find an analytical expression for the line or surface that contains these points. The normal procedure to generate the Pareto front is to compute a sufficient number of feasible points Ω and their corresponding objective values f(Ω); it is then possible to determine the nondominated points and produce the Pareto front.
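As a concrete illustration of Definitions 3 and 4, Pareto dominance and extraction of the nondominated set can be sketched as follows for a minimization problem (the function names are ours, not from the original formulation):

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Return the nondominated subset (the image of the Pareto-optimal set)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, pareto_front([(1, 5), (2, 2), (3, 1), (2, 4), (4, 4)]) keeps only the three trade-off points (1, 5), (2, 2), and (3, 1).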

PCOA Approach
A novel population-based parallel chaos optimization algorithm (PCOA) is proposed for MOOPs. The salient feature of PCOA lies in its pseudoparallel mechanism. In addition, crossover and merging operations are applied to exploit the fitness and diversity information of the population. In effect, the proposed algorithm combines the global search ability of PCOA with the local search accuracy of the crossover and merging operations.

Twice Carrier Wave
Mechanism-Based PCOA. Consider a single objective optimization problem for a nonlinear multimodal function with boundary constraints:

min f(x), x = [x1, x2, ..., xn]^T, Li ≤ xi ≤ Ui, i = 1, 2, ..., n, (4)

where x is the decision solution vector consisting of n variables xi bounded by lower (Li) and upper (Ui) limits. In PCOA, multiple stochastic parallel chaos variables (like a population) are simultaneously mapped onto one decision variable, and the search result is the best solution among the parallel candidate individuals. The process of PCOA is based on the twice carrier wave mechanism, which is described in Table 1. The first part of PCOA is a raw search along different chaotic traces, and the second part of PCOA is a refined search that enhances the search precision.

Crossover and Merging Operation within Population.
In this paper, crossover and merging operations within the population are employed in PCOA. Both operations exchange information within the population and produce new potential parallel variables, which usually differ from the chaotic sequences.

Crossover Operation.
The motion step of chaotic maps between two successive iterations is usually large, which results in big jumps of the decision variable in the search space. This kind of randomness of chaotic maps is beneficial for jumping out of local optima; however, it is not efficient for local exploitation. In this paper, crossover is used for information interaction between parallel solutions. The crossover operation within the population is illustrated in Figure 1. In the crossover, a decision variable from one parallel solution x*_j is randomly chosen to be crossed with the corresponding variable from another parallel solution. The new candidate individual produced by the crossover operation is denoted by x^(C)_ij(k).
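Since the crossover is specified only schematically (Figure 1), the following sketch shows one plausible reading, in which a randomly chosen decision variable is exchanged with the corresponding variable of a randomly chosen partner; the pairing scheme and the function name are our assumptions:

```python
import random

def crossover(pop, p_cross=0.3, rng=None):
    """Exchange one randomly chosen decision variable between parallel solutions.
    pop: list of decision vectors (lists of floats); returns the trial vectors."""
    rng = rng or random.Random()
    trials = []
    for sol in pop:
        if rng.random() < p_cross:          # only a fraction of solutions cross
            partner = rng.choice(pop)       # randomly chosen parallel solution
            d = rng.randrange(len(sol))     # index of the variable to cross
            child = list(sol)
            child[d] = partner[d]           # information exchange between solutions
            trials.append(child)
    return trials
```

Each trial vector keeps all but one component of its parent, so the operation recombines existing coordinates rather than generating new values.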

Merging Operation.
Even if PCOA has reached the neighborhood of the global optimum, it still needs to spend much computational effort searching numerous points before eventually reaching the optimum [22]. In order to improve PCOA's precise exploitation capability, a merging operation within the population is employed here, as illustrated in Figure 2. The merging operation between two parallel solutions x_aj and x_bj can be denoted by

x_j^(M)(k) = c · x_aj(k) + (1 − c) · x_bj(k), (7)

where c is a chaotic variable; the frequently used Logistic map is defined by the following equation [21,29]:

c(k + 1) = 4c(k)(1 − c(k)). (8)

This kind of merging operation randomly chooses two parallel solutions to merge, and it may produce new potential solutions for the optimization problem. In essence, the merging operation within the population is a kind of local exploiting search, as shown in (7). The crossover and merging operations within the population are also used as a supplement to the twice carrier wave search during each iteration.
This means that if a new parallel variable produced by the crossover or merging operation reaches a better objective function value than the original parallel variables, the new parallel variable replaces one of the original parallel variables. Otherwise, if a new parallel variable yields a worse objective function value than the original parallel variables, it is discarded.
Both crossover and merging operations within the population are conducted during each iteration of the PCOA search procedure. Another issue is how many parallel variables to choose for the crossover or merging operation. The more crossover or merging operations are performed, the greater the diversity of the parallel variables, but also the greater the computing cost. In this paper, the crossover rate and the merging rate are set as P_cross = 0.1–0.5 and P_merging = 0.1–0.5; that is, about 10% to 50% of the parallel variables undergo the crossover or merging operation. These parameter values are usually chosen by trial, taking into account both search quality and computing cost.
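Assuming (7) is the convex combination x^(M) = c·x_a + (1 − c)·x_b, a common choice in parallel chaos optimization, the merging operation and the Logistic map in (8) can be sketched as:

```python
def logistic(c):
    """Logistic map with control parameter 4, as in (8); chaotic on (0, 1)."""
    return 4.0 * c * (1.0 - c)

def merge(xa, xb, c):
    """Merge two parallel solutions by a chaotic convex combination
    (our reading of (7)): component-wise c * xa_i + (1 - c) * xb_i,
    where c is a chaotic variable in (0, 1)."""
    return [c * a + (1.0 - c) * b for a, b in zip(xa, xb)]
```

Iterating c = logistic(c) between merges keeps the weights chaotic, so repeated merges of the same pair explore different intermediate points near both parents.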

PCOA for MOOPs
As far as we know, there has been no literature concerning COA for MOOPs until now, and this motivates us to extend PCOA to MOOPs. In the following discussion, the Pareto dominance concept and an elitist archive mechanism are used to extend PCOA to tackle MOOPs.

Dominance Selection Operator.
To extend PCOA to MOOPs, the most important work is the selection mechanism, where the selection operation is based on the concept of Pareto dominance. According to the dominance relation between potential solutions x_i′(k) and x_i(k), there are at most three situations: (1) x_i′(k) dominates x_i(k); (2) x_i(k) dominates x_i′(k); (3) x_i′(k) and x_i(k) are nondominated with respect to each other. Thus, the dominance selection operator is defined as follows:

x_i(k + 1) = x_i′(k), if x_i′(k) dominates x_i(k),
x_i(k + 1) = x_i(k), if x_i(k) dominates x_i′(k),
x_i(k + 1) = LC(x_i′(k), x_i(k)), otherwise,

where LC(x_i′(k), x_i(k)) denotes the less crowded one of x_i(k) and x_i′(k). The crowding degree estimation is introduced in [7].
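The dominance selection operator can be sketched as below; the crowding measure is passed in as a callable because the paper defers the crowding-degree estimation to [7], so its implementation is not fixed here:

```python
def dominance_select(x_new, x_old, f, crowding):
    """Keep the dominating solution; if x_new and x_old are mutually
    nondominated, keep the less crowded one, LC(x_new, x_old)."""
    def dom(a, b):
        return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))
    fa, fb = f(x_new), f(x_old)
    if dom(fa, fb):          # situation (1): the trial solution dominates
        return x_new
    if dom(fb, fa):          # situation (2): the current solution dominates
        return x_old
    # situation (3): mutually nondominated, prefer the less crowded one
    return x_new if crowding(x_new) < crowding(x_old) else x_old
```

Passing f and crowding as callables keeps the operator independent of the particular test problem and crowding estimator.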

Mathematical Problems in Engineering
Figure 1: Crossover operation within population.
Figure 2: Merging operation within population.

Table 1: Algorithmic description of the PCOA.
Initialization: specify the maximum iterations K1 and K2 for the twice carrier wave; set the population size N; randomly initialize the chaos variables c_ij; set the parallel optima P*_j = ∞ and the global optimum P* = ∞.
Begin the first carrier wave
for1 k = 1 to K1
  Map c_ij(k) onto the decision variable x_ij(k) by the first carrier wave.
  Compute the objective function value f(x_ij(k)).
  Update the search results P*_j and P* with f(x_ij(k)).
  Generate the next value of the chaos variable by a one-dimensional map function (M): c_ij(k + 1) = M(c_ij(k)).
end for1
End the first carrier wave
Begin the second carrier wave
for2 k′ = 1 to K2
  Compute the second carrier wave around the parallel solution x*_j: x_ij(k′) = x*_ij + λ_i(c_ij(k′) − 0.5).
  Compute f(x_ij(k′)) and update the search results P*_j and P*.
  Generate the next value of the chaos variable: c_ij(k′ + 1) = M(c_ij(k′)).
  Adjust the small ergodic range: λ_i ⟵ tλ_i.
end for2
End the second carrier wave
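A minimal one-variable sketch of the twice carrier wave mechanism in Table 1, assuming the usual first carrier wave x = L + c(U − L) and second carrier wave x = x* + λ(c − 0.5) (the exact carrier formulas are our reconstruction, and the function name is ours):

```python
import random

def pcoa_1d(f, lo, hi, N=10, K1=200, K2=200, lam=0.1, t=0.99, seed=1):
    """Twice-carrier-wave PCOA sketch for a bounded 1-D minimization problem."""
    rng = random.Random(seed)
    c = [rng.uniform(0.01, 0.99) for _ in range(N)]   # N parallel chaos variables
    best_x, best_f = None, float("inf")
    for _ in range(K1):                               # first carrier wave: raw search
        for j in range(N):
            x = lo + c[j] * (hi - lo)                 # map chaos variable onto [lo, hi]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
            c[j] = 4.0 * c[j] * (1.0 - c[j])          # Logistic map iteration
    for _ in range(K2):                               # second carrier wave: refinement
        for j in range(N):
            x = min(hi, max(lo, best_x + lam * (c[j] - 0.5)))  # small ergodic range
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
            c[j] = 4.0 * c[j] * (1.0 - c[j])
        lam *= t                                      # shrink the ergodic range
    return best_x, best_f
```

For instance, pcoa_1d(lambda x: (x - 2.0) ** 2, -5.0, 5.0) converges close to the minimizer x = 2: the raw chaotic sweep locates the basin, and the shrinking second wave refines within it.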

Handling Constraints.
For optimization problems with constraints, the penalty function approach is frequently used. Here, constrained-domination, a penalty-parameterless constraint handling approach, is used to handle constraints [2]. A solution A is said to dominate a solution B if any of the following conditions is true [9]: (i) solution A is feasible and solution B is not; (ii) solutions A and B are both infeasible, but solution A has a smaller overall constraint violation; (iii) solutions A and B are feasible and solution A dominates solution B. The effect of using this constrained-domination principle is that any feasible solution has a better nondomination rank than any infeasible solution. All feasible solutions are ranked according to their nondomination level based on the objective function values, whereas, between two infeasible solutions, the one with the smaller constraint violation has the better rank [6].
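The three constrained-domination rules can be written directly as a comparison; here the overall violation is taken as the sum of positive parts of the inequality constraints g_i(x) ≤ 0, which is a common convention rather than the paper's explicit definition:

```python
def violation(g_values):
    """Overall constraint violation for inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def constrained_dominates(fa, ga, fb, gb):
    """A constrained-dominates B: (i) A feasible, B not; (ii) both infeasible
    and A has a smaller violation; (iii) both feasible and A Pareto-dominates B."""
    va, vb = violation(ga), violation(gb)
    if va == 0.0 and vb > 0.0:
        return True                                   # rule (i)
    if va > 0.0 and vb > 0.0:
        return va < vb                                # rule (ii)
    if va == 0.0 and vb == 0.0:                       # rule (iii)
        return (all(x <= y for x, y in zip(fa, fb))
                and any(x < y for x, y in zip(fa, fb)))
    return False                                      # A infeasible, B feasible
```

Because no penalty weight appears anywhere, the comparison is penalty-parameterless, as stated above.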

External Elitist Archive. Since Zitzler and Thiele formally introduced the strength Pareto evolutionary algorithm (SPEA) [31] with the elitist reservation mechanism in 1999, many researchers have adopted similar elitist reservation concepts in practice [1,[7][8][9]. The main motivation for this mechanism is the fact that a solution that is nondominated with respect to its current population is not necessarily nondominated with respect to all the populations produced by an optimization algorithm. Thus, what is needed is a way of guaranteeing that the solutions reported to the user are nondominated with respect to every other solution the algorithm has produced [6]. Therefore, the most intuitive way of doing this is to store all the nondominated solutions found in an external memory (or archive). If a solution that wishes to enter the archive is dominated by its contents, it is not allowed to enter. Conversely, if a solution dominates any solution stored in the archive, the dominated solution must be deleted.
In this paper, the elitist reservation strategy is also adopted. Initially, the archive is empty. As the evolution progresses, the trial vectors that are not dominated by the corresponding target vectors at each generation are compared one by one with the current archive, and good solutions enter the archive. There are three cases when a nondominated trial vector is compared with the current archive [7,9]: (a) if the trial vector is dominated by member(s) of the external archive, the trial vector is rejected; (b) if the trial vector dominates some member(s) of the archive, the dominated members are deleted from the archive and the trial vector enters the archive; and (c) if the trial vector does not dominate any archive member and no archive member dominates it, the trial vector belongs to the current Pareto front and it enters the archive.
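The three archive cases (a)-(c) amount to the following update, sketched with the dominance test passed in as a callable:

```python
def update_archive(archive, trial, dominates):
    """Insert one nondominated trial vector into the external elitist archive:
    (a) reject it if any member dominates it; (b) delete members it dominates,
    then insert it; (c) insert it if mutually nondominated with all members."""
    if any(dominates(a, trial) for a in archive):
        return archive                                       # case (a): rejected
    kept = [a for a in archive if not dominates(trial, a)]   # case (b): purge
    kept.append(trial)                                       # cases (b)/(c): enters
    return kept
```

Applied repeatedly, this update keeps the archive a mutually nondominated set at all times.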
When the external archive reaches its maximum capacity, the crowding entropy measure in (10) is used to keep the external archive at its maximum size [1,7]. In the following, x is the decision solution vector consisting of n variables x_i bounded by lower (L_i) and upper (U_i) limits; i = 1, 2, ..., n indexes the decision variables; j = 1, 2, ..., N indexes the N chaos variables simultaneously mapped onto each decision variable, where N can also be regarded as the population size, as in other evolutionary algorithms. The process of the PCOA approach for MOOPs is described as follows and is illustrated in Figure 3.
Step 1: set the iteration number k = 1, specify the maximum number of iterations k_max and the switch point S1 from the first carrier wave to the second carrier wave, initialize the chaotic maps 0 < c_ij(k) < 1 randomly, set the population size N, set the crossover probability and merging probability, and initialize the external elitist archive A(k) = ∅ with maximum size N_max.
Step 2: map the chaotic maps c_ij(k) onto the variance range of the decision variables x_ij(k) in one of two ways. If k < S1, PCOA searches using the first carrier wave

x_ij(k) = L_i + c_ij(k)(U_i − L_i). (14)

If k ≥ S1, PCOA searches using the second carrier wave

x_ij(k) = x*_ij + λ_i(c_ij(k) − 0.5), (15)

where λ is a very important local search parameter that adjusts a small ergodic range around the so-far global solutions x*_ij.
Step 3: evaluate the fitness value of each target vector x_ij(k).
Step 4: in the crossover operation within the population, produce and evaluate the trial vectors x^(C)_ij(k).
Step 5: in the merging operation within the population, produce and evaluate the trial vectors x^(M)_ij(k).
Step 6: perform the selection operation among x_ij(k), x^(C)_ij(k), and x^(M)_ij(k) with the dominance selection operator.

If x_ij(k) dominates x^(C)_ij(k) and x^(M)_ij(k), update A(k) with the elitist archive update rules. If x_ij(k), x^(C)_ij(k), and x^(M)_ij(k) are nondominated with respect to each other, update A(k) with the elitist archive update rules. Each vector in A(k) stores its respective global solution x*_ij.
Step 7: trim the external elitist archive A(k). When A(k) exceeds its maximum size, only the less crowded vectors, measured by the distance in (10), are kept so that the archive size stays at N_max.
Step 8: generate the next values of the chaotic maps by the chaotic mapping function (M) as in (8).
Step 9: if k ≥ k_max, stop the search process; otherwise set k ⟵ k + 1 and go to Step 2.

MOOPs Test Simulation
This section presents the experiments performed in this work. First, the set of MOOPs used as a benchmark and the quality indicators applied for measuring the performance of the resulting Pareto fronts are introduced. Next, our preliminary experiments with PCOA are described and analyzed. Then, PCOA is evaluated and compared to other multiobjective optimization algorithms.

MOOPs Test Problems. Different sets of classical test problems suggested in the MOOPs literature are used to estimate the performance of PCOA.

Performance Measures.
In order to determine whether an algorithm can solve MOOPs efficiently, the algorithm's performance should be quantified with functions referred to as performance measures. A comprehensive overview of performance measures currently used in the multiobjective optimization literature is provided in [32].
There are three goals in multiobjective optimization: (i) convergence to the Pareto-optimal set, (ii) maintenance of diversity among the solutions of the Pareto-optimal set, and (iii) maximal distribution bound of the Pareto-optimal set. In this article, three quality indicators, one for each of the above goals, are introduced as follows.
(a) Generational distance (GD) [1]: the generational distance measures how far the elements in the set of nondominated vectors found so far are from those in the Pareto-optimal set. This indicator is defined as

GD = (Σ_{i=1}^{n} d_i^2)^{1/2} / n,

where n is the number of vectors in the set of nondominated solutions found so far and d_i is the Euclidean distance (measured in objective space) between each of these solutions and the nearest member of the Pareto-optimal set. A smaller value of GD demonstrates a better convergence to the Pareto front.

(b) Spread (Δ) [2]: this indicator measures the extent of spread achieved among the obtained nondominated solutions. The metric is defined as

Δ = (d_f + d_l + Σ_{i=1}^{N−1} |d_i − d̄|) / (d_f + d_l + (N − 1)d̄),

where N is the number of nondominated solutions found so far, d_i is the Euclidean distance between neighboring solutions in the obtained nondominated solution set, d̄ is the mean of all d_i, and d_f and d_l are the Euclidean distances between the extreme solutions and the boundary solutions of the obtained nondominated set. A smaller value of Δ indicates a better distribution and diversity of the nondominated solutions.

(c) Hypervolume (HV) [1]: the reference point can be found simply by constructing a vector of the worst objective function values. Thereafter, a union of the hypercubes v_i spanned between each nondominated solution and the reference point is formed, and its hypervolume is calculated:

HV = volume(∪_{i∈Ω} v_i),

where Ω is the nondominated set of solutions. Algorithms with larger HV values are desirable.
Since the calculation of the HV is related to the reference point, in our experiment, the HV value of a set of solutions is normalized by a reference set of Pareto-optimal solutions with the same reference point. After normalization, the HV values are confined in [0, 1].
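The GD and Δ indicators above can be computed directly from the obtained front; this sketch assumes the front points are already sorted along the Pareto curve for Δ, and the extreme points of the true front are supplied:

```python
import math

def generational_distance(front, true_front):
    """GD: root of the summed squared distances from each obtained point
    to its nearest true-front point, divided by the number of points."""
    dists = [min(math.dist(p, q) for q in true_front) for p in front]
    return math.sqrt(sum(d * d for d in dists)) / len(front)

def spread(front, extremes):
    """Delta: spacing uniformity plus distance to the extreme (boundary) points.
    front must be sorted along the Pareto curve; extremes = (first, last)."""
    gaps = [math.dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_mean = sum(gaps) / len(gaps)
    d_f = math.dist(extremes[0], front[0])       # distance to first extreme
    d_l = math.dist(extremes[1], front[-1])      # distance to last extreme
    num = d_f + d_l + sum(abs(g - d_mean) for g in gaps)
    den = d_f + d_l + len(gaps) * d_mean
    return num / den
```

A perfectly uniform front that touches both extremes gives Δ = 0, and a front lying exactly on the true front gives GD = 0, matching the "smaller is better" reading of both indicators.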
In order to assess how competitive the proposed PCOA approach is, it is compared with several popular multiobjective algorithms (MOAs): nondominated sorting genetic algorithm-II (NSGA-II), strength Pareto evolutionary algorithm 2 (SPEA2), and multiobjective particle swarm optimization (MOPSO); these MOAs are representative of the state of the art [1]. In the simulations, the parameter values of NSGA-II, SPEA2, and MOPSO are the same as in [1]. The parameters of PCOA used in this simulation are chosen by trial as follows: parallel number N = 100, the Logistic map in (8) as the chaotic map, crossover probability P_cross = 0.5, merging probability P_merging = 0.5, archive size N_max = 100, k_max = 300 for the low-dimensional tests (Schaffer, Fonseca, Kursawe, ConstrEx, Srinivas, and Tanaka) and k_max = 600 for the high-dimensional tests (ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6), switch point S1 = 0.5 k_max, and local search parameter in (15) λ = 1.5. To reduce the effect of randomness, all these MOAs were independently run 30 times on each test problem.

Simulation Results.
The Pareto fronts obtained with the proposed PCOA on different MOOPs are illustrated in Figures 4 and 5. It can be seen from Figures 4 and 5 that the proposed PCOA can usually obtain Pareto fronts on different MOOPs, and the Pareto fronts obtained with PCOA are smoother and more uniformly distributed for the low-dimensional tests (Figure 4). Tables 1-3 and Figures 6-8 report the simulation results of the previously described quality indicators (generational distance (GD), spread (Δ), and hypervolume (HV)) for the four MOAs: PCOA, NSGA-II, SPEA2, and MOPSO. In the simulation results, "mean" is the average value over 30 runs, and "SD" is the standard deviation of the 30 runs of each MOA. Table 1 shows the generational distance (GD) indicator on these tests, and Figure 6 shows the mean GD for these MOAs. In the group of low-dimensional tests (Schaffer, Fonseca, Kursawe, ConstrEx, Srinivas, and Tanaka), the resulting Pareto fronts from PCOA are as close to the true fronts as those computed by NSGA-II, SPEA2, and MOPSO. In the group of ZDT problems (high-dimensional tests), PCOA obtains better results than MOPSO.
It can also be seen from Figures 4 and 5 that the Pareto fronts in Figure 5 (high-dimensional tests) are not as good as those in Figure 4 (low-dimensional tests). From Table 1 and Figure 6, it can be seen that PCOA can reach Pareto fronts for all these tests, while its performance on the low-dimensional tests is more competitive. Table 2 shows the spread indicator (Δ) on these tests, while Figure 7 shows the mean Δ for these MOAs. The results for Δ in Table 2 show that PCOA obtains the lowest values in almost all MOOPs, as PCOA is a kind of global search. From Table 2 and Figure 7, we can see that PCOA obtains better Δ results than NSGA-II on all the test problems, and better results than SPEA2 and MOPSO on most test problems. This means that the PCOA approach has shown good diversity, which may be attributed to PCOA's parallel search pattern and its ability to escape from local optima. Table 3 shows the hypervolume (HV) indicator on these tests, while Figure 8 shows the mean HV for these MOAs. From Table 3 and Figure 8, we can see that the PCOA approach obtains very large HV values on the low-dimensional tests, while its HV on the high-dimensional tests is also good. It can be seen from these results that the PCOA approach outperforms MOPSO based on the HV indicator (Table 4).
From the above results, it can be seen that the proposed PCOA approach shows good performance based on the generational distance, spread, and HV indicators. In all these simulation results, PCOA outperforms MOPSO and is as good as NSGA-II and SPEA2. This means that PCOA can be used as an alternative approach for MOOPs.

Comparing Algorithm Parameters.
In order to test the effect of the crossover probability and merging probability on the proposed PCOA approach, PCOA is also compared under different crossover and merging probabilities (the other PCOA parameter values are the same as in the former simulation). In Case 1, the crossover probability and merging probability are both 0.5 (denoted by PCOA1); in Case 2, both are 0.3 (denoted by PCOA2); in Case 3, both are 0.2 (denoted by PCOA3). The simulation results of PCOA with different crossover and merging probabilities are shown in Figure 9.
It can be seen from Figure 9 that higher crossover and merging probabilities yield more potential trial solutions for MOOPs; therefore, the performance on the quality indicators (generational distance (GD), spread (Δ), and hypervolume (HV)) is better.

Mixed H2/H∞ Controller Design
The mixed H2/H∞ control synthesis problem is an important multiobjective controller design problem in the field of control theory, and it has received a great deal of attention in recent years. The most popular approach for solving this problem is the linear matrix inequalities (LMIs) approach [33,34]. Recently, the mixed H2/H∞ control synthesis problem has been stated as a multiobjective optimization problem, with the objectives of minimizing the H2 and H∞ norms simultaneously. In this section, we apply the proposed PCOA approach to mixed H2/H∞ multiobjective optimal control design. Consider the system equations for H2/H∞ control synthesis where z = z2 = z∞. For the multiobjective control synthesis, the solutions obtained by PCOA are compared with the solutions calculated with the function "msfsyn" provided by the MATLAB LMI Control Toolbox. As the LMI-based approach can only find a single solution in each run, the set of solutions of the multiobjective problem is obtained by varying the H∞ upper bound as γ = 0.1, 0.2, . . . , 0.9. Figure 10 illustrates the Pareto estimates (H2 and H∞ closed-loop norms) obtained by the two approaches. It can be seen that PCOA is able to find a set of estimates of the Pareto front that is more evenly distributed and has a better extension over the conflicting objectives. Figure 11 shows the obtained solutions in the parameter space (k1 × k2). For the analyzed example, the proposed synthesis procedure presents better results than the LMI-based approach.

Conclusion
As far as we know, there has been no literature concerning COA for MOOPs until now, and this motivated us to propose PCOA for MOOPs. In this paper, a PCOA with crossover and merging operations is proposed for MOOPs. Both crossover and merging operations exchange information between parallel variables and produce new potential solutions, which enhances the global and fast search ability of the proposed algorithm. To test the effectiveness of PCOA, it is simulated on several benchmark functions for MOOPs and on mixed H2/H∞ controller design. The simulation results show that PCOA can be an alternative approach for MOOPs.

Data Availability
The raw/processed data required to reproduce these findings cannot be shared at this time as the data also form part of an ongoing study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.