A Dual-Population Constrained Many-Objective Evolutionary Algorithm Based on Reference Point and Angle Easing Strategy

Constrained many-objective optimization problems (CMaOPs) have gradually emerged in various areas and are significant for this field. These problems often involve intricate Pareto fronts (PFs) that are both refined and uneven, making their resolution difficult. Traditional algorithms tend to over-prioritize convergence, leading to premature convergence of the decision variables, which greatly reduces the possibility of finding the constrained Pareto fronts (CPFs) and results in poor overall performance. To tackle this challenge, we propose a novel dual-population constrained many-objective evolutionary algorithm based on a reference point and angle easing strategy (dCMaOEA-RAE). It relies on a relaxed selection strategy utilizing reference points and angles to facilitate cooperation between the two populations by retaining solutions that may currently perform poorly but contribute positively to the overall optimization process. The population is thus guided toward the optimal feasible region in a timely manner, yielding a series of superior solutions. Experimental results on 77 test problems demonstrated our proposed algorithm's competitiveness across all three evaluation indicators, and comparisons with ten other cutting-edge algorithms further validated its efficacy.


INTRODUCTION
The constrained multi-objective optimization problem (CMOP) is a type of problem that occurs widely in real-life scenarios (Zhao et al., 2023; Wang et al., 2023). One major characteristic of CMOPs is the involvement of diverse and complex constraints on the decision variables or objective functions, which makes it difficult to find ideal or approximately ideal solutions. The theoretical model of a CMOP can be presented as:

minimize F(x) = (f_1(x), ..., f_m(x))^T
subject to x ∈ S,
g_j(x) ≥ 0, j = 1, 2, ..., M;
h_k(x) = 0, k = 1, 2, ..., N.
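To make the formulation concrete, the following is a minimal Python sketch of a toy CMOP instance in the form above, together with the standard total constraint-violation aggregation for the g ≥ 0 / h = 0 convention. The particular objectives and constraints are hypothetical, chosen only for illustration; the function names are our own.

```python
def evaluate_cmop(x):
    """Toy two-objective CMOP instance (hypothetical, for illustration):
    minimize F(x) = (f1, f2) subject to g1(x) >= 0 and h1(x) = 0."""
    f = [x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + x[1] ** 2]  # objectives
    g = [x[0] + x[1] - 0.5]   # inequality constraint: feasible when >= 0
    h = [x[0] - x[1]]         # equality constraint: feasible when == 0
    return f, g, h

def total_cv(g_values, h_values):
    """Standard total constraint violation for the g >= 0 / h = 0 form:
    sum of inequality shortfalls plus absolute equality residuals."""
    return (sum(max(0.0, -gv) for gv in g_values)
            + sum(abs(hv) for hv in h_values))
```

A solution is feasible exactly when its total violation is zero; infeasible solutions can still be compared by how badly they violate the constraints.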
The total constraint violation (CV) can be determined by:

CV(x) = Σ_{j=1}^{M} max(0, −g_j(x)) + Σ_{k=1}^{N} |h_k(x)|.

Over the past few decades, continuous efforts have been made to solve CMOPs. Algorithms often need to find new and better feasible solutions by bypassing infeasible regions; therefore, transforming the original problem into a multi-population cooperative optimization problem is a frequently used approach (Bao et al., 2023; Liang et al., 2023a). Relevant evidence indicates that such methods can effectively balance objectives and constraints (Liang et al., 2023b). Previous algorithms have been successful in solving traditional CMOPs. However, they have encountered limitations when it comes to constrained many-objective optimization problems (CMaOPs). These challenges predominantly stem from the following:
• They get stuck in local optima caused by overemphasizing convergence. To illustrate the drawbacks of traditional selection strategies, we present the following example.
Figure 1 demonstrates the evolution of decision variables and objective values for three traditional optimization algorithms, CTAEA (Li et al., 2019), NSGA-III (Jain & Deb, 2014), and IDBEA (Asafuddoula, Ray & Sarker, 2015), tackling the C1-DTLZ3 (Jain & Deb, 2014) problem. As iterations progress, these algorithms gradually approach the optimal CPFs but often get stuck in local optima. Simultaneously, the diversity of their decision variables gradually decreases, which is not coincidental. These behaviors are driven by their selection strategies: to move toward CPFs, poorly performing solutions must be promptly eliminated. However, this may also prevent the population from finding the final CPF. To further support this, we conducted multiple runs and obtained a notably ideal result. Figure 2 displays the distribution of decision variables and objective values in this ideal result. It is evident that the entire population is uniformly distributed over the CPF, with a significant portion of its decision variables distributed around 0.5. In the results depicted in Fig. 1, the presence of decision variables at 0.5 gradually diminishes with further iterations because these solutions perform poorly in the current environment. However, these solutions are crucial for the population's progression towards CPFs. As a result, some solutions that might not perform well in the current context but are instrumental for the overall optimization process were discarded. It is important to note how our study differs from Wang & Xu (2020), which focused on infeasible solutions that show good convergence, and Ming et al. (2023a), which concentrated on suboptimal solutions tailored for the current population. Instead, our study prioritized solutions that perform poorly in the current environment but contribute to the overall optimization process. This highlights the need to ease the population's convergence pressure, which is crucial in CMaOPs, as excessive pressure might prevent the population from converging to CPFs in time. Employing a multi-population collaborative approach is effective, but it also brings forth the second issue.
• In the process of population collaboration, it is crucial to strike a balance between exploring feasible and infeasible regions. While exploring infeasible regions, it is also important to avoid missing feasible areas because of excessively rapid convergence (Zeng, Cheng & Liu, 2023), which could result in inefficiency and wasted computational resources. This balance becomes more fragile in CMaOPs. To address this issue, Zeng, Cheng & Liu (2023) divided the archive into an inverse archive and a diversity archive, enabling better performance across various types of problems. However, the algorithm relies on the distance between solutions, making it susceptible to local optima in smaller dimensions with different value ranges. Moreover, distances lose meaning in higher dimensions (Myszkowski & Laszczyk, 2021). Similarly, Li et al. (2019) used two archives, CA and DA, to store convergent and diverse solutions. However, in CMaOPs, updating a solution in the archive often requires traversing the entire population to assess convergence or diversity, which is inefficient. Furthermore, mating-selection strategies lead to deficiencies in exploring new feasible regions (Yang et al., 2023). Some dual-population algorithms (Bao et al., 2023; Ming et al., 2022) utilize a bidirectional search strategy: an infeasible population rapidly converges towards the unconstrained Pareto fronts (UPFs) and then, together with the population exploring feasible solutions, gradually moves towards the CPFs from both directions. This method proves highly effective in CMOPs. However, in CMaOPs, the emergence of numerous non-dominated solutions weakens the population's convergence toward the PF (Zou et al., 2023; Elarbi, Bechikh & Ben Said, 2021), preventing the traditional method from moving the population closer to the PF in time. There is a critical need for a search strategy that explores infeasible solutions while balancing convergence and diversity.
• In MaOPs, preserving diversity among solutions, aiming for their uniform distribution along the PF, remains crucial. Two representative approaches to achieving this are reference points (Jain & Deb, 2014) and angles among solutions (He & Yen, 2017). The former ensures uniformity across the solution space by generating reference vectors, while the latter promptly eliminates poor-quality solutions based on the angles between them. However, in CMaOPs, the scenario differs: the reference points might be uniform while the CPF is not, meaning that traditional reference point-based methods cannot guarantee a uniform solution distribution (Wang, Huang & Pan, 2023). Similarly, the angle-based approach has its drawbacks. To illustrate the limitations of these traditional strategies, consider the artificial selection scenario depicted in Fig. 3. The solutions A, B, C, and D are four non-dominated solutions, where B has a y-coordinate of 0, and one solution needs to be eliminated. If the angle-based method is applied, A or B would be chosen from the AB pair: solutions dominating B would also have a y-coordinate of 0, so the solution space dominating B would be smaller than that of A.
In this case, eliminating A seems like a favorable choice. However, it would drive the population closer to the x-axis; in MaOPs, such a strategy could severely damage population diversity. If B is eliminated instead, non-dominated solutions with a y-coordinate of 0 but an extremely high x-coordinate might appear, especially in problems with multimodal attributes like DC3-DTLZ3 (Li et al., 2019). Once B has been eliminated, any new solutions of this type would be non-dominated, consequently compromising the overall quality of the population. The optimization problem of irregular PFs can be effectively addressed by employing neural networks (Wang, Huang & Pan, 2023; Liu et al., 2020), which produces favorable outcomes. However, Ming et al. (2023c) noted that the excessive integration of multiple techniques tends to complicate the algorithm. Additionally, while some problems have a uniform CPF, others do not; algorithms need to balance both scenarios to provide a series of high-performance solutions.
These challenges lead to varying degrees of reduced efficiency when using traditional methods in CMaOPs.Striking the right balance between feasibility, convergence, and diversity has been a critical challenge.
The motivation for this article is as follows: as a type of meta-heuristic, the evolutionary algorithm does not rely on problem continuity or differentiability, making it highly suitable for solving CMOPs, particularly CMaOPs with intricate constraints. However, it should be noted that there is no universally versatile meta-heuristic algorithm capable of effectively addressing all types of optimization problems. This concept is known as the No Free Lunch theorem (Wang et al., 2020; Del Ser et al., 2019). The verification of this theorem also underscores the necessity of continuous theoretical research on meta-heuristic algorithms.
In contrast to other evolutionary algorithms that have undergone substantial development, there has been limited research on CMaOEAs, especially methods based on multi-population collaborative techniques. Moreover, performance decreases significantly when existing CMOEAs are used to handle CMaOPs (Ming et al., 2023c). Hence, conventional multi-objective optimization algorithms are not suitable for CMaOPs, leaving ample room for advancement when using CMaOEAs to handle CMaOPs. The design of effective CMaOEAs requires a proper balance among convergence, diversity, and feasibility (Ming et al., 2023c); algorithms could yield better performance in CMaOPs if appropriate consideration is given to feasibility. In this article, a unique dual-population constrained many-objective evolutionary algorithm based on a reference point and angle easing strategy (dCMaOEA-RAE) was developed. This method effectively resolves CMaOPs by striking a balance among feasibility, convergence, and diversity. Specifically, we divided the population into two parts. The main population (called PopulationMain) was responsible for searching feasible zones; a selection strategy combining reference points and angles was proposed to optimize the distribution of the population on discrete and irregular CPFs. The other, auxiliary population (called PopulationExplore) was dedicated to exploring infeasible regions. By slowing down the convergence rate of some solutions, we retained some currently poor but beneficial solutions for the evolutionary process, thereby uncovering new and superior feasible regions.
The primary achievements of this study can be briefly described as follows:
• For the main population (PopulationMain), the process involves selecting non-dominated sets from parents and offspring using the constraints. Subsequently, the solution region is divided into a series of sub-regions using reference vectors. Within each region, the solution closest to the reference line is first chosen to ensure the distribution of solutions in the outcome. Then, among the remaining solutions, those closest in angle within the current region and adjacent regions are eliminated. This ensures a more uniform distribution within the population.
• For the auxiliary population (PopulationExplore), solutions were compared in pairs using binary tournaments based on angular distance, favoring solutions with better dominance relationships and smaller distances from the ideal point. This approach enhanced diversity while guiding the population to spread in various directions within the search space, balancing convergence and diversity and leading the population to superior feasible solution regions.
• Extensive experiments encompassing three test suites with a total of 77 benchmark problems were conducted. The goals were to confirm the effectiveness and competitiveness of dCMaOEA-RAE against 10 advanced CMOEA/CMaOEA methods.
The remainder of this article is structured as follows: Section 'Related work' offers a short review of related work on reference point adaptation methods and their underlying motivations. Section 'Proposed algorithm' explains the overall structure of dCMaOEA-RAE with a detailed description of its components. Section 'Experimental results and analysis' describes the experimental setup and the comprehensive experiments conducted. Finally, Section 'Conclusions' concludes this article and highlights further directions for research.

RELATED WORK
Existing constraint-handling techniques
With the development of CMaOEAs/CMOEAs, an increasing number of constraint-handling techniques (CHTs) have been invented. This article briefly reviews some representative techniques in this regard: it first provides a concise introduction to several representative CHTs, followed by an introduction to collaborative optimization. Generally, these techniques can be divided into six classes: (1) penalty functions; (2) the constrained dominance principle (CDP); (3) stochastic ranking (SR); (4) ε-constraints; (5) multi-objective methods (MOs); and (6) hybrid methods.

Penalty function
Penalty functions convert constrained optimization problems into unconstrained problems by incorporating the degree of CV into the objectives. In general, the fitness F'(x) of solution x can be calculated on the basis of a penalty function:

F'(x) = F(x) + β · ϕ(x),

where F(x) represents the fitness of solution x without considering constraints, ϕ(x) measures the constraint violation, and β is the penalty factor, which can be set in three ways: static, dynamic, or adaptive. For instance, in Yahya & Tokhi (2017), the authors successfully tackled constrained optimization problems by introducing penalty functions into the bat algorithm, which produced promising results. In Nargundkar & Kulkarni (2023), a combination of cohort intelligence algorithms and penalty functions yielded very promising results. Chen & Ni (2014) employed an enhanced logistic chaotic mapping combined with penalty functions to address a resource-constrained project scheduling problem. However, adjusting the penalty factor to adapt to complex and dynamic problems poses a significant challenge (Ming et al., 2023b).
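The static-penalty variant of the formula above can be sketched in a few lines of Python; the function name and the vector-valued fitness are our own illustrative choices, not part of any cited algorithm.

```python
def penalized_fitness(f, cv, beta):
    """Static-penalty fitness: each objective value is inflated by
    beta * CV(x), so infeasible solutions look uniformly worse.
    Dynamic or adaptive schemes would vary beta over the run instead."""
    return [fi + beta * cv for fi in f]
```

For example, a solution with objectives [1.0, 2.0], violation 0.5, and β = 10 is scored as [6.0, 7.0], pushing it behind comparable feasible solutions.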

CDP
CDP was presented by Jain & Deb (2014). Specifically, for two given solutions A and B, A constraint-dominates B (denoted A ≺ B) if any of the following conditions hold:
• A is a feasible solution whereas B is not.
• Both A and B are feasible solutions, but A dominates B.
• Both A and B are infeasible solutions, and the CV of A is less than that of B.
Since it was first proposed, CDP has been widely accepted and used due to its simple structure and ease of implementation. When the violation degree is considered as an objective, this handling process can be seen as a one-dimensional search: CDP only moves the population toward lower violation degrees, often leading to the population getting stuck in locally optimal solutions, particularly when feasible regions are discrete.
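The three CDP conditions above translate directly into a comparison routine; the following is a minimal Python sketch (function names are our own), where a solution is represented by its objective vector and its total constraint violation.

```python
def pareto_dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def cdp_dominates(fa, cva, fb, cvb):
    """Constrained dominance principle: A constraint-dominates B if
    (1) A is feasible and B is not; (2) both are feasible and A
    Pareto-dominates B; or (3) both are infeasible and CV(A) < CV(B)."""
    a_feasible, b_feasible = cva == 0, cvb == 0
    if a_feasible and not b_feasible:
        return True
    if a_feasible and b_feasible:
        return pareto_dominates(fa, fb)
    if not a_feasible and not b_feasible:
        return cva < cvb
    return False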

ε-constraint
To address the limitations of CDP, Takahama & Sakai (2006) relaxed the definition of feasible solutions. They introduced a variable ε, considering solution x feasible as long as its violation degree does not exceed ε; the rest remains similar to CDP. Noman & Iba (2011) combined differential evolution with the ε-constraint technique to solve economic load dispatch problems. Wang & Li (2022) developed a multi-objective distribution planning system using an improved ε-constraint algorithm; experimental outcomes demonstrated that its optimization results were nearly identical to the optimal path. While the ε-constraint allows the population to break out of local optima, it also leads to unstable outcomes. Additionally, defining the size of ε remains a challenge (Ming et al., 2023c).
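The ε-relaxation amounts to one change in the CDP comparison: violations up to ε count as feasible. A minimal Python sketch (our own helper, with the dominance test inlined for self-containment):

```python
def eps_dominates(fa, cva, fb, cvb, eps):
    """epsilon-constrained comparison: solutions with CV <= eps are treated
    as feasible; otherwise fall back to comparing violation degrees."""
    a_ok, b_ok = cva <= eps, cvb <= eps
    if a_ok and b_ok:
        # both "feasible" under the relaxed threshold: Pareto dominance
        return (all(x <= y for x, y in zip(fa, fb))
                and any(x < y for x, y in zip(fa, fb)))
    if a_ok != b_ok:
        return a_ok          # the relaxed-feasible one wins
    return cva < cvb         # both beyond eps: smaller violation wins
```

With eps = 0, this reduces exactly to CDP; larger eps lets slightly infeasible solutions compete on objectives, which is what allows the population to cross infeasible barriers.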

Stochastic ranking
A probability parameter pr (where pr ∈ (0, 1)) is introduced when comparing two individuals. With probability pr, only the constraint violation degrees are compared; with probability (1 − pr), only the objective functions are compared. This allows the comparison to retain infeasible solutions.
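This probabilistic comparison can be sketched as follows for scalar fitness values; the function name and the injectable random source are our own (the latter only to make the behavior testable at the extremes pr = 0 and pr = 1).

```python
import random

def sr_better(fa, cva, fb, cvb, pr, rng=random):
    """Stochastic-ranking comparison (sketch): with probability pr compare
    only the constraint violations; otherwise compare only the objective
    values. Infeasible solutions can therefore survive a comparison."""
    if rng.random() < pr:
        return cva < cvb   # compare violation degrees only
    return fa < fb         # compare objective values only
```

Because the objective-only branch fires with probability (1 − pr), an infeasible solution with an excellent objective value is not automatically discarded, which is the whole point of SR.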

Multi-objective methods
In this approach, constraints are regarded as one or more independent objective functions, thereby transforming the initial problem into an unconstrained optimization problem. Unlike penalty functions, this method increases the number of objective functions, often leading to multi-objective or even many-objective problems. Different problems pose various challenges to optimization within this framework.
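The recasting itself is a one-line transformation; a minimal sketch (our own helper name), appending the total violation as an extra objective so an unconstrained (M+1)-objective optimizer can trade it off against the original goals:

```python
def as_unconstrained(objectives, cv):
    """Multi-objective CHT (sketch): treat the total constraint violation
    as one additional objective to be minimized alongside the original M
    objectives, yielding an unconstrained (M+1)-objective problem."""
    return list(objectives) + [cv]
```

The cost, as noted above, is that a problem with already many objectives becomes even harder for the selection mechanism to rank.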

Hybrid methods
Due to the individual limitations of the aforementioned methods, an increasing number of researchers combine multiple handling mechanisms to improve performance on CMaOPs/CMOPs. Wang, Huang & Pan (2023) observed that in CMaOPs, feasible regions are often irregular and discrete, whereas reference point algorithms assume uniformly distributed feasible regions. Based on this observation, they proposed a constraint-based many-objective evolutionary algorithm, CMaOEA/RPA, which adapts reference points by using a learning vector quantization network to generate feasible-region-adaptive reference points. They also introduced an adaptive constraint-handling technique based on ε-truncation to incorporate infeasible solutions. Similarly, Ming et al. (2023a) combined machine learning with ε-constraint techniques, proposing CMaDPPs, which retains temporarily underperforming solutions to enhance overall performance and provides a range of high-performance solutions. However, the combination of multiple techniques can lead to algorithmic complexity. Moreover, the limitations of combined CHTs might result in imbalances among convergence, diversity, and feasibility, thereby leading to suboptimal performance (Ming et al., 2023c).

Use the information of the other solutions
In order to effectively push the population toward CPFs, solutions that are non-dominated or infeasible have gradually attracted attention due to their clear advantages. Wang & Xu (2020) proposed an angle-based constrained dominance relation to exploit the objective information carried by infeasible solutions. Myszkowski & Laszczyk (2021) proposed NTGA2, which guides the evolution of the population towards unexplored parts of the space, promoting diversity and spread of the population. Bao et al. (2022) archived promising solutions to improve search performance. Long et al. (2023) proposed EGDCMO, which uses an efficient global diversity strategy to maintain some infeasible solutions. Liang et al. (2023c) proposed CMaOEA-AIR, which explores potentially feasible regions and escapes from local optima over time by adjusting the selection criteria for infeasible solutions. Ming et al. (2023a) noted that existing algorithms mainly focus on evaluating the quality of individual solutions instead of the quality of the solution set as a whole; they pay more attention to poorly converged, poorly distributed, and infeasible solutions by selecting the next-generation population in its entirety.

Multi-population collaborative techniques
To more effectively address CMOPs/CMaOPs, many researchers have translated the problem into other forms, such as collaborative optimization. This involves dividing the original population into two functionally different populations, which allows better exploration of the solution space, uncovers unexplored and potential information, and ultimately obtains more comprehensive CPFs. Chafekar, Xuan & Rasheed (2003) explored decomposing a CMOP into multiple single-objective optimization problems and designing different algorithms to optimize each objective. However, the effectiveness of the algorithm may be compromised when faced with an excessive number of constraints, potentially leading to local optima if the focus is placed solely on a single objective. In a similar manner, Wang, Liang & Zhang (2019) established (M+1) populations, where M subpopulations were used for the constrained optimization of single objectives, while another population was used for the constrained optimization of the M-objective problem; each subpopulation optimized its respective problem using differential evolution. To enhance solution diversity, Yang, Liu & Tan (2021) partitioned the objective space, dividing the initial problem into multiple sub-problems and employing multiple CHTs to solve them. However, irregular feasible regions might lead to poorer performance. Liu et al. (2007) proposed COGA, which assigned populations to optimize objectives and constraints separately while allowing them to exchange information. Li et al. (2019) proposed C-TAEA, which maintains two archives concurrently during evolution: one focusing on convergence (CA) and the other emphasizing diversity (DA). The former is used for simultaneous optimization of constraints and objectives, ensuring the final result's reliability, while the latter primarily aims to explore infeasible regions. However, the algorithm suffers from the drawback of updating the archives sequentially, which results in low efficiency; moreover, the offspring do not exhibit good feasibility and convergence (Tian et al., 2021). Considering that information sharing could diminish population diversity, Tian et al. (2021) presented a collaborative evolution framework, called CCMO, in which two populations evolve independently. This class of independent approaches is known as ''weak cooperation''; it enhanced diversity by not sharing information, thereby improving performance. To avoid excessive exploration that neglects feasible solutions, Ming et al. (2022) employed a new mechanism in their dual-stage, dual-population constrained multi-objective evolutionary algorithm (DDCMOEA): initially, the population rapidly converges to explore infeasible solutions near the UPFs, and then both populations converge towards the CPFs simultaneously from two directions. Similarly, Bao et al. (2023) used bidirectional searches to enhance search capabilities and exploit infeasible solutions. Liang et al. (2023b) noted that when UPFs and CPFs do not fully overlap, the population used for exploration plays a diminished role in assisting the main population in later stages. Based on this observation, they proposed a dual-population constrained multi-objective evolutionary algorithm with variable auxiliary population size (DPVAPS), which strategically reduces the computational resource consumption of the exploration population and allows the primary population to devote a larger amount of available resources to the search for the CPF. Additionally, Zeng, Cheng & Liu (2023) proposed a method for updating diversity and reverse archives, which was capable of handling deceptive constraints or narrow feasible areas. In the context of CMaOPs, new environments pose new challenges for algorithms. Some CMaOPs may contain complex constraints, which can make it challenging for the population to traverse multiple disjoint infeasible zones close to the CPFs. Geng et al. (2023) introduced a dual-population NSGA-III, which improved performance.

Dealing with complex CPFs
In order to obtain a better distribution of the population on discrete and discontinuous CPFs in many-objective problems, Wang & Xu (2020) used the angle as an index to judge population density and evaluate population diversity. However, it did not perform well on some complex CMOPs (such as DC3-DTLZ3; Jain & Deb, 2014). In Cheng et al. (2016), there exist two sets of reference vectors, one of which remains uniformly distributed while the other adaptively adjusts. To adapt to discrete CMaOPs of different sizes, clustering methods have been added to optimization algorithms. For instance, an adaptive strategy based on the k-means clustering method was proposed by Liu et al. (2022b), in which the method and the PF shape could be fitted gradually over the evolutionary process. In addition, Liu et al. (2022a) used an improved growing neural gas (GNG) to adapt the reference vectors in order to solve CMaOPs. In Wang, Huang & Pan (2023), to better adjust the reference points toward the feasible regions, both feasible and infeasible solutions were used as two classes of samples to train the learning model. However, the addition of other techniques can complicate the algorithm and thus affect its performance (Ming et al., 2023c).

PROPOSED ALGORITHM
Overview of the proposed dCMaOEA-RAE
dCMaOEA-RAE maintains two populations, both of size N. The main population, PopulationMain, is responsible for exploring feasible solutions and providing a uniformly distributed final result with strong convergence. The exploration population, PopulationExplore, explores infeasible solutions to discover new, better feasible regions. Different selection strategies are applied to the two populations to fulfill their distinct functions. Algorithm 1 outlines the overall framework of dCMaOEA-RAE. In the beginning, each population is initialized and reference points are generated. Subsequently, the populations enter an evolutionary loop. Parents are selected using a binary tournament based on non-dominated sorting, generating offspring P'. The offspring and parents together undergo environmental selection, which is conducted differently for the two populations: feasible non-dominated solutions that meet the constraints undergo the environmental selection strategy of the main population, where crowded solutions are screened out based on angles to form the new generation of the main population. The remaining solutions undergo the environmental selection strategy of the exploration population, creating the new generation of the exploration population.

Environmental selection of PopulationMain
After obtaining a set of feasible non-dominated solutions exceeding the population size, an environmental selection process was required. As all solutions within this set are non-dominated, this step focuses on enhancing diversity. Algorithm 2 delineates the selection process. The search space was divided into subspaces using pre-generated weight vectors, and each solution was associated with its perpendicularly closest vector (Line 1); this step effectively reduced the computational complexity. For each subspace, the algorithm first identified the solution closest to the weight vector and designated it as the 'key' of that vector (Line 2). Following this, we identified the weight vector associated with the highest number of connected solutions, denoted by z (Line 4). We calculated the angles between the solution set S (excluding the 'key') associated with vector z and the set comprising S itself along with the solutions of adjacent weight vectors (Line 6). The method is further illustrated by Fig. 5, which depicts a hyperplane composed of ten weight vectors (A-J) in a three-objective environment; for F, the adjacent weight vectors were C, E, I, and J. We then found the pair of solutions with the smallest angle, at least one of which should be from set S. If the other solution was not part of set S, we eliminated it (Lines 8-11). Otherwise, we compared the two solutions by their second-smallest angles (excluding each other) as the criterion for diversity.
The solution with the smaller angle was then eliminated (Lines 13-14). The process continued iteratively until the number of solutions equaled N. The corresponding excerpt of Algorithm 2 reads:

5: In the set of solutions connected to each reference point, select the solution with the shortest distance and name it the 'key' of that reference point.
6: repeat
7: Find the reference point connected to the largest number of solutions, denoted as z.
8: Denote the solutions associated with vector z as S, and the set of solutions belonging to the reference vectors adjacent to z as S'.
9: Calculate the angles between (S − key) and (S' ∪ S); denote the result as R.
10: Find the pair of solutions with the smallest angle in R, denoted as A and B.

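The core of this angle-based truncation can be sketched as follows. This is a deliberately simplified illustration (our own function names and a greedy pairwise loop over the whole set), omitting the per-subspace bookkeeping and adjacent-vector restriction of Algorithm 2; 'key' solutions are protected from removal, as in the main population.

```python
import math

def angle(u, v):
    """Angle (radians) between two objective vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def truncate_by_angle(solutions, n, keys=()):
    """Repeatedly drop one member of the most crowded (smallest-angle)
    pair until n solutions remain; solutions listed in `keys` (closest
    to a reference line) are preferentially kept."""
    pop = list(solutions)
    while len(pop) > n:
        best = None
        for i in range(len(pop)):
            for j in range(i + 1, len(pop)):
                a = angle(pop[i], pop[j])
                if best is None or a < best[0]:
                    best = (a, i, j)
        _, i, j = best
        # prefer removing the non-key member of the crowded pair
        drop = j if tuple(pop[j]) not in keys else i
        pop.pop(drop)
    return pop
```

For instance, among (1, 0), (0.9, 0.1), and (0, 1) with (1, 0) as a key, the crowded pair is the first two, and (0.9, 0.1) is removed, leaving a well-spread pair.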
Environmental selection of PopulationExplore
After determining the primary population, the remaining individuals required filtration. The exploration population, which disregards the constraints, was used to explore infeasible solutions and discover new feasible regions. Algorithm 3 outlines the selection process for the exploration population. Similar to Section 'Environmental selection of PopulationMain', predefined reference points initially generated weight vectors, which were linked to nearby solutions. Subsequently, the algorithm calculated the angles between the solution sets associated with each reference vector and recorded the minimum angle within each reference vector's set of solutions (Line 1). This method avoids traversing the entire population each time a solution is chosen, which reduces computational complexity. Then, the algorithm conducted non-dominated sorting of the solution sets and computed the distance of each solution from the origin (Lines 2-3) for assessment.
Entering the 'while' loop, the algorithm selected the reference vector corresponding to the minimum value in Angle_min and picked the two solutions with the smallest angle among those associated with it (Line 6). It first compared the domination levels of the two solutions and eliminated the one with the larger domination level (Lines 7-10). If both solutions had equal domination levels, it eliminated the solution farthest from the origin (Line 12). Subsequently, the algorithm updated Angle_min for that reference vector (Line 14). This process was repeated until the population reached the required size.
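The elimination rule inside the loop, worse non-domination level first, then distance from the origin as a tie-breaker, can be sketched as follows; the function name and the (objectives, level) pair representation are our own, and the non-domination levels are assumed to be precomputed by the sorting step above.

```python
import math

def explore_eliminate(a, b):
    """Decide which of two crowded solutions to eliminate in the
    exploration population: the one on the worse non-domination level,
    or, when levels tie, the one farther from the origin.
    Each solution is an (objectives, level) pair with level 0 best."""
    (fa, la), (fb, lb) = a, b
    if la != lb:
        return a if la > lb else b          # worse level goes first
    da = math.sqrt(sum(x * x for x in fa))  # distance to the origin
    db = math.sqrt(sum(x * x for x in fb))
    return a if da > db else b              # tie-break: farther one goes
```

This ordering matters: a dominated solution is discarded even when it happens to sit closer to the origin than a non-dominated one.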

Remark
It is important to note the significant differences between the use of reference points and angles in the search methods of these two populations. The main population aims for an even distribution to ensure the quality of the outcome. We used the 'key' solution to maintain uniform distribution, particularly in cases of a uniform CPF. In situations with non-uniform CPFs, we calculated the angle with adjacent reference vectors, allowing the solutions to distribute as evenly as possible. Figure 6 demonstrates the effectiveness of this strategy through an artificial selection scenario (Fig. 6A): our proposed search strategy accurately eliminated D by calculating the angle between BD and other solutions.
For the auxiliary population tasked with exploring infeasible solutions, maximum uniformity was not crucial, unlike for the main population. Instead, the angle-based strategy aimed to ease convergence pressure, preserving some sub-optimal yet beneficial solutions throughout the evolution process. Since the main population employed nondominated sorting, overall convergence pressure was still guaranteed. In addition, comparing nondomination layers and comparing distances from the origin are similar but not equivalent criteria, which we illustrate with an example. Figure 7 displays three solutions, A, B, and C, in a selection environment where one must be eliminated. A and C are mutually nondominated, and B is dominated by C. Considering only their distances from the origin, C < B < A, which would lead to the elimination of A. Considering the dominance layer instead, B is eliminated, which is the intended outcome. Therefore, by first comparing dominance layers and then distances, we achieved the intended objective.
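The Fig. 7 scenario can be checked numerically. The sketch below is illustrative, not the authors' code: the coordinates of A, B, and C are invented so that the stated relations hold (A and C mutually nondominated, B dominated by C, origin distances C < B < A), and the nondomination "level" is approximated by counting dominators, which coincides with the true layer in this three-point example.

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def pick_for_elimination(objs):
    """Index to eliminate: worst nondomination level first,
    ties broken by the largest distance to the origin."""
    n = len(objs)
    # Number of solutions dominating each one: a simple stand-in for the
    # nondomination layer that is exact for this small example.
    level = [sum(dominates(objs[j], objs[i]) for j in range(n) if j != i)
             for i in range(n)]
    dist = np.linalg.norm(objs, axis=1)
    return max(range(n), key=lambda i: (level[i], dist[i]))

# Hypothetical coordinates matching the Fig. 7 relations.
A, B, C = [3.0, 0.5], [1.5, 1.5], [1.0, 1.0]
objs = np.array([A, B, C])
print(pick_for_elimination(objs))   # eliminates B (index 1), not A
```

A distance-only criterion would instead pick A (the farthest point from the origin), which is exactly the failure mode the layer-first comparison avoids.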

EXPERIMENTAL RESULTS AND ANALYSIS
This section gives a concise overview of the experimental setup and algorithm parameters. We then evaluated the designed method against ten advanced CMOEAs/CMaOEAs on three representative benchmark suites containing a total of 77 test problems and analyzed the results. All experiments were performed using PlatEMO (Tian et al., 2017).

The methods used for comparison and parameter settings
We ran comparative trials with four CMOEAs (Top (Liu & Wang, 2019), CCMO (Tian et al., 2021), DDCMOEA (Ming et al., 2022), and BiCo (Liu, Wang & Tang, 2022)) and six CMaOEAs (NSGA-III (Jain & Deb, 2014), the improved decomposition-based evolutionary algorithm (IDBEA) (Asafuddoula, Ray & Sarker, 2015), the two-archive evolutionary algorithm for constrained multiobjective optimization (C-TAEA) (Li et al., 2019), TiGE2 (Zhou et al., 2020), DCNSGAIII (Jiao et al., 2021b), and CMME (Ming et al., 2023b)) to show the effectiveness of our work. The selected algorithms are representative of their respective categories. Among the CMOEAs, Top utilizes DE operators and is a well-established algorithm; CCMO is a highly effective weakly cooperative dual-population algorithm; DDCMOEA and BiCo are recent prominent algorithms based on bidirectional search. Among the CMaOEAs, NSGA-III is a classic algorithm that has guided many others; IDBEA relies on decomposition for CMaOPs; C-TAEA is an archive-based algorithm frequently included in literature comparisons; TiGE2 converts constraints into a third criterion alongside convergence and diversity; DCNSGAIII and CMME are notable CMaOEAs introduced in recent years, known for their competitiveness. All of these methods were implemented within PlatEMO (Tian et al., 2017).
Following Ming et al. (2023a) and Wang, Huang & Pan (2023), the population sizes (N) and maximum numbers of fitness evaluations (maxFE) for the various test problems are listed in Tables 1 and 2. Among the compared algorithms, Top uses DE-based genetic operators, while the other CMOEAs/CMaOEAs employ GA-based operators with simulated binary crossover (Deb & Agrawal, 2000) and polynomial mutation (Edupuganti, Prasad & Ravi, 2010), using a crossover probability of 1 and a distribution index of 20. The mutation probability p_m is set to 1/n, where n is the number of decision variables. To maintain fairness, all remaining parameters, including the mutation distribution index, follow the settings suggested in the algorithms' original references and are unchanged unless otherwise stated.
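For concreteness, the polynomial mutation operator with per-variable probability p_m = 1/n can be sketched as below. This is a minimal illustrative version of the simple (non-highly-disruptive) form of Deb & Agrawal's operator, not PlatEMO's implementation; the bounds `lb`/`ub` and the default distribution index are assumptions for the sketch.

```python
import numpy as np

def polynomial_mutation(x, lb, ub, eta=20.0, rng=None):
    """Polynomial mutation, simple form.

    Each variable mutates with probability 1/n (n = len(x));
    eta is the mutation distribution index.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    for i in range(n):
        if rng.random() < 1.0 / n:
            u = rng.random()
            if u < 0.5:
                # Perturbation toward the lower bound
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                # Perturbation toward the upper bound
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            x[i] = np.clip(x[i] + delta * (ub[i] - lb[i]), lb[i], ub[i])
    return x
```

A larger eta concentrates the distribution of delta around zero, producing smaller perturbations, which is why the distribution index controls the locality of the search.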
Every algorithm was independently run 30 times on each test case, and both the mean and standard deviation were recorded. Statistics were computed with MATLAB (The MathWorks, Natick, MA, USA), using the Wilcoxon test at a significance level of 0.05 and the Friedman test with Bonferroni correction at a significance level of 0.05 to analyze the experimental results. In particular, we transformed the HV metric as HV' = 1 − HV so that it satisfies the 'the smaller, the better' property required by the Friedman test.
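The HV transform and the Friedman average ranking can be sketched together. This is an illustrative Python version, not the MATLAB analysis script; the small HV matrix is made-up data for demonstration.

```python
import numpy as np

def avg_ranks(row):
    """Ranks of one problem's scores (1 = best); ties get the average rank."""
    order = np.argsort(row, kind="stable")
    ranks = np.empty(len(row))
    vals = row[order]
    i = 0
    while i < len(row):
        j = i
        while j + 1 < len(row) and vals[j + 1] == vals[i]:
            j += 1                                  # extend the tie group
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0  # average rank of the group
        i = j + 1
    return ranks

def friedman_ranks(scores):
    """Average rank of each algorithm over all problems.

    scores: (problems, algorithms) matrix, smaller is better.
    """
    return np.vstack([avg_ranks(r) for r in scores]).mean(axis=0)

# HV is 'larger is better'; transforming it as HV' = 1 - HV makes all
# metrics follow the 'smaller, the better' convention of the Friedman test.
hv = np.array([[0.90, 0.85, 0.70],    # hypothetical HV on problem 1
               [0.60, 0.65, 0.55]])   # hypothetical HV on problem 2
print(friedman_ranks(1.0 - hv))       # average Friedman rank per algorithm
```

Since 1 − HV is a strictly decreasing transform, it reverses the ordering on each problem without changing which algorithm is best, so the resulting ranks are exactly those of HV under 'larger is better'.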

Comparison results
The designed algorithm was compared with the ten methods mentioned above. Entries such as ''NaN'' and ''0.0000e+0'' indicate that the results were too far from the true Pareto front for the metric to be computed. The symbols ''+'', ''-'', and ''='' respectively indicate results that were statistically superior to, significantly inferior to, or similar to dCMaOEA-RAE. The best result for each problem is shown in bold. In the Friedman rankings, the second-best values are bolded in addition to the best ones.

Comparison results on DTLZ test problem
Table 3 presents the Friedman rankings of the eleven algorithms under various numbers of objectives on the C-DTLZ and DC-DTLZ test problems. It is evident that dCMaOEA-RAE consistently outperformed the other algorithms on all three metrics, while CMME and DCNSGAIII, as recent CMaOEAs, exhibited relatively good performance. These test sets include diverse feasible regions and offer a comprehensive evaluation of algorithm performance. Tables 4, 5 and 6 report the IGD, IGDp, and HV values for these problems, respectively. Taken together, dCMaOEA-RAE achieved the highest number of best rankings on all three metrics: 34, 31, and 35, respectively, each exceeding half of the total. Particularly notable performance was observed on C1-DTLZ3, DC1-DTLZ3, DC2-DTLZ1, and DC3-DTLZ3. Owing to the complex nature of their constraints, these problems pose significant challenges, yet the designed algorithm outperformed the other algorithms on them. For instance, C1-DTLZ3 places infeasible obstacles on the way to the PF, and DC2-DTLZ1 contains extensive infeasible regions, demonstrating the ability of dCMaOEA-RAE to reach superior CPFs through such regions. Likewise, DC1-DTLZ3 has a narrow feasible region, while the CPF of DC3-DTLZ3 consists of several segmented, narrow, tapered strips and a flexible sheet area above the PF. dCMaOEA-RAE strengthened population diversity in the auxiliary population through binary tournaments and guided the main population toward the precise exploration of minute feasible regions. In addition, algorithms such as CCMO were competitive on some 3-objective problems, including C2-DTLZ2, DC1-DTLZ1, and DC2-DTLZ3. This competitiveness stems from CCMO's weak-cooperation approach, in which two independent populations evolve, enhancing certain aspects of diversity and enabling the discovery of smaller feasible regions. However, the slow convergence of CCMO as the number of objectives grows prevents it from being suitable for CMaOPs. It is also notable that dCMaOEA-RAE exhibited relatively inferior performance on C1-DTLZ1 and DC1-DTLZ1. This may be due to the simplicity of these problems' constraints, where even the straightforward CDP of NSGA-III yields reasonably good performance.

With their small feasible ratios, the MW and CF test suites highlighted the robust integrated capability of dCMaOEA-RAE in navigating infeasible regions and exploring minute, irregular feasible regions. It is evident that dCMaOEA-RAE consistently secured the top position across all indicators and test problems, followed closely by CMME. Tables 8, 9 and 10 give the detailed results under the different metrics. CMME showed a certain effectiveness due to its enhanced mating and environmental selection, but its performance was relatively unstable, resulting in higher standard deviations. Meanwhile, dCMaOEA-RAE performed favorably on MW8, CF4, and CF12. These problems encompass scenarios such as multimodal landscapes, irregular CPFs, convex shapes, and small feasible regions, showcasing dCMaOEA-RAE's adeptness at complexities that are challenging for other algorithms.
The best results are in bold. Regarding the changes in the objective space and decision variables of both the main and exploration populations, the proposed algorithm managed to retain certain suboptimal solutions, thereby preventing the decision variables from becoming overly uniform and avoiding local optima.

Summaries and discussion
Based on the aforementioned results, the performance of dCMaOEA-RAE on various types of CMaOPs can be summarized as follows:
• dCMaOEA-RAE demonstrated suitability for problems with complex constraints, such as C1-DTLZ3 and DC2-DTLZ1, due to its ability to retain solutions that may not perform well in the current context but are critical for the overall optimization process. Moreover, it effectively utilized information from these solutions to guide the population towards CPFs in a timely manner.
• Additionally, dCMaOEA-RAE was well-suited for problems featuring irregular or discrete CPFs, like CF4 and CF12.The selection strategy based on reference points and

Figure 1: Changes in decision variables and objective values of the conventional algorithm in C1-DTLZ3. Full-size DOI: 10.7717/peerjcs.2102/fig-1

Algorithm 3: EnvironmentSelectionForExplore
1 Input: P (the population), Z (reference points)
2 Output: P (the screened population)
3 Calculate the minimum angle among the solutions in each set of solutions linked to the same reference point, denoted Angle_min. If a set contains only 0 or 1 solution, its angle is recorded as inf.