1 Introduction

The term "optimization" is commonly used to refer to the process of determining which of several possible actions would yield the best results under specified constraints. Because of the interdependence and complexity of sophisticated engineering systems, one will need an analyst with a broad perspective to help one optimize their production, laboratory, retail, or service system. Furthermore, when studying a system, the subsets' interaction should be considered to preserve its integrity and optimality. Additionally, the system's components' specifications, and existing uncertainties, should be described and incorporated into the system's intended goals. Metaheuristic algorithms are search techniques that use a higher-level approach to find the optimal solution to a given problem. Genetic Algorithm (GA) [1], Particle Swarm Optimizer (PSO) [2], Ant colony Optimization (ACO) [3], Stochastic Paint Optimizer (SPO) [4] and Mountain Gazelle Optimizer (MGO) [5] are some of the well-known metaheuristic algorithms. Additionally, optimization is applied in a number of fields, such as control, medicine, image processing and structural engineering [6, 7].

Everyone desires to gain the greatest benefit at the lowest cost [8]. This goal can be stated mathematically as an optimization problem. However, many real-world optimization problems involve several objectives that frequently conflict with one another [9]. Therefore, it is often impossible to find a single solution that optimizes all objectives simultaneously [10]. Accordingly, multi-objective problems typically have a set of trade-off solutions rather than a single one, and multi-objective optimizers have attracted considerable research interest. Ordinarily, optimization problems with two or three objectives are designated multi-objective problems, while problems with more than three objectives are designated many-objective problems [11].

After lengthy investigation, research on multi-objective problems is now well advanced, and increasing attention is being given to many-objective problems [12]. Generally speaking, techniques for tackling optimization problems fall into two kinds. The conventional optimizers are gradient search, Newton search, quasi-Newton search, and conjugate gradient search methods. The other kind is heuristic search optimizers, which are inspired by human expertise in addressing difficult problems or by the behavior of living organisms. Classical optimizers typically require derivatives or differentials, so they are hard to apply to many complex real-world problems. Therefore, heuristic optimizers are usually employed when tackling multi-objective problems, such as the Multi-Objective Genetic Algorithm (MOGA) [13], Multi-Objective Artificial Bee Colony Optimizer (MOABC) [14], Multi-Objective Artificial Hummingbird Algorithm (MOAHA) [15], Multi-Objective Seagull Optimization Algorithm (MOSOA) [16], Multi-Objective Particle Swarm Optimization (MOPSO) [17], Multi-Objective Firefly Algorithm (MOFA) [18], Multi-Objective Atomic Orbital Search (MOAOS) [19], Multi-Objective Artificial Vultures Optimization Algorithm (MOAVOA) [20], Multi-Objective Bonobo Optimizer (MOBO) [21], Multi-Objective Stochastic Paint Optimizer (MOSPO) [22], Multi-Objective Moth-Flame Optimization (MMFO) [23], Archive-Based Multi-objective Harmony Search (AMHS) [24], and the Multi-objective Non-dominated Advanced Butterfly Optimization Algorithm (MONSBOA) [25]. This paper proposes a novel optimization structure with distinguished convergence and coverage as a new multi-objective optimizer. The proposed method modifies the Chaos Game Optimizer (CGO) [26] to produce dynamic control factors that decrease the time needed to find the best solutions for various multi-objective benchmark functions and industrial engineering problems.

Nevertheless, the number of non-dominated solutions is negligible at the beginning of the optimization process, so they may drive the population members in the wrong direction. Hence, the main idea is to generate a diverse set of solutions on the Pareto front that encourages the candidate solutions to progress toward promising areas of the search space in successive iterations. The presented multi-objective CGO approach, referred to as MOCGO, uses a leader selection methodology to strengthen its capabilities and avoid the drawbacks of the original CGO method, as well as an archive method to save non-dominated solutions. The proposed MOCGO is tested on a wide variety of constrained and unconstrained problems from mathematics and industrial engineering optimization. A series of comparisons between the proposed MOCGO method and other state-of-the-art multi-objective approaches, using several common performance metrics such as Inverted Generational Distance (IGD), Generational Distance (GD), Spacing (S), and Maximum Spread (MS), demonstrated the superior ability of MOCGO to handle multiple complex problems.

This article continues as follows. Section 2 covers related work on multi-objective optimization. Section 3 describes the single-objective CGO and the proposed multi-objective Chaos Game Optimizer (MOCGO). Section 4 tabulates and discusses the experimental outcomes. Section 5 presents the conclusion and future works.

2 Literature review

Most real-world optimization problems, including big data, data mining, design, scheduling, mathematics, and control, are essentially characterized by multiple conflicting objectives. The variables are often imprecise because of uncontrollable circumstances, leading to more complex problem formulations [27]. Single-objective problems are distinct from multi-objective problems [28]. Only one best solution is sought in the first type, whereas many solutions, called Pareto-optimal solutions, are obtained in multi-objective problems [29]. The objective function in single-objective problems is a scalar, so it is sufficient to compare objective values to judge the quality of candidate solutions; for minimization problems, smaller objective values are better. In multi-objective problems, however, the objective value is a vector. Therefore, the concept of Pareto dominance is used to compare the quality of candidate solutions with different objective vectors [30].
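To make the dominance comparison concrete, the following minimal Python sketch (illustrative only; the function name and example vectors are ours, not part of any cited method) checks whether one objective vector Pareto-dominates another under minimization.

```python
import numpy as np

def dominates(a, b):
    """Return True if objective vector `a` Pareto-dominates `b` under minimization:
    `a` is no worse than `b` in every objective and strictly better in at least one."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a <= b) and np.any(a < b))

# Example: f1 dominates f2, but f1 and f3 are mutually non-dominated.
f1, f2, f3 = [1.0, 2.0], [1.5, 2.5], [0.5, 3.0]
print(dominates(f1, f2))                     # True
print(dominates(f1, f3), dominates(f3, f1))  # False False
```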

As an example, a multi-objective GA is proposed in [31] for optimizing the parameters of a modular neural network; this is only one of a number of new multi-objective techniques recently introduced in the literature. The advantages of the proposed multi-objective strategy are illustrated using face and ear datasets. Results from the modular neural network with the granular strategy were shown to be more reliable than those from the traditional method without optimization. A new optimization structure is presented in [32] by connecting multi-objective optimization with multicriteria decision-making. The proposed method combines multi-objective ABC, best–worst, and grey relational methods to address the optimization problem. The outcomes demonstrated the efficacy of the proposed approach for resolving problems with multiple objectives.

A new multi-objective hybrid forecasting method based on the Ant Lion optimizer is proposed in [33], comprising four steps: data preprocessing, optimization, forecasting, and evaluation. The decomposition approach splits the initial wind speed data into a finite collection of segments. The outcomes demonstrated that the suggested methodology produced lower average mean absolute errors. For resolving multi-objective problems in rapidly changing environments, an innovative multi-objective evolutionary PSO has been developed in [34]. Furthermore, a new multi-swarm-based PSO structure is utilized to tackle the given issues in changing settings. The results showed that the proposed method obtained better outcomes for dealing with these multi-objective problems in quickly changing settings.

In [35], a modified version of multi-objective FA is applied to big data problems and evaluated on six single- and multi-objective optimization problems. As seen in the findings, the proposed strategy outperformed the competitors. A multi-objective optimizer was also introduced for addressing flow shop scheduling problems considering energy losses, and it was compared with other well-known optimizers by analyzing the results. A novel framework for multi-objective evolutionary methods is introduced in [36]. Several multi-objective methods are used in the proposed framework, which is applied to address various problems. The proposed methodologies had good results, indicating that the design is feasible and practicable. A new multi-objective feasibility PSO is presented in [37] to address constrained multi-objective problems. A comparison of the suggested method with the original multi-objective PSO and other popular methods revealed significant improvements over them.

Khodadadi et al. [38] have created a multi-objective version of the Crystal Structure Algorithm (CryStAl), which draws its inspiration from crystal structure principles. Congress on Evolutionary Computation (CEC-09), real-world engineering, and mathematical multi-objective optimization benchmark problems are used to evaluate the effectiveness of the given method. The results show that the presented strategies can deliver outstanding performance when applied to multi-objective problems.

Pereira et al. [39] introduced the Multi-objective Lichtenberg Algorithm (MOLA), a new metaheuristic inspired by the propagation of radial intra-cloud lightning and Lichtenberg figures that can handle multiple objectives. In each iteration, the algorithm uses a Lichtenberg figure, generated in various sizes and with varied rotations, to distribute points for evaluation in the objective function, which allows a great deal of exploration and exploitation. As the first hybrid multi-objective metaheuristic, MOLA has been tested against classic and current metaheuristics on well-known and complicated test function groups as well as constrained complex engineering challenges. With expressive values of convergence and maximum spread, MOLA stands out as a promising multi-objective algorithm.

Zhong [40] suggested the multi-objective marine predator algorithm (MOMPA). This approach incorporates an external archive component storing previously discovered non-dominant Pareto-optimal solutions. The concept of elite selection serves as the foundation for a technique that is being developed for selecting top predators. Using the predator's foraging strategy as a model, this method selects the most powerful solutions from the repository to serve as top predators. Algorithm performance is evaluated using the CEC2019 multi-modal multi-objective benchmark functions and compared to nine current metaheuristics techniques. In addition, the proposed approach is tested using seven multi-objective engineering design problems. The findings show that the suggested MOMPA algorithm outperforms previous algorithms and gives very competitive outcomes.

Multi-objective thermal exchange optimization (MOTEO) is a physics-inspired metaheuristic approach suggested by Khodadadi et al. [40] to address multi-objective optimization problems. The single-objective version of TEO uses Newtonian cooling laws to solve single-objective optimization problems effectively, and MOTEO is based on that principle. Different problems are used to assess MOTEO's efficacy in this research. In comparison with existing algorithms, the recommended method provides accurate solutions, consistency, and coverage for addressing multi-objective problems, resulting in high-quality Pareto fronts.

Dhiman et al. [16] introduced the Multi-objective Seagull Optimization Algorithm (MOSOA). In this method, a dynamic archive caches the non-dominated Pareto-optimal solutions. A roulette wheel selection approach, driven by seagull migration and attacking behaviors, is utilized to select the most promising archive solutions. The suggested algorithm is validated on twenty-four benchmark test functions, and its performance is evaluated alongside previously developed metaheuristic algorithms. To determine whether the proposed method is suitable for finding solutions to real-world problems, it is tested on six constrained engineering design problems. Empirical analyses demonstrate that the suggested method outperforms others and obtains Pareto-optimal solutions with a high convergence rate.

This research is motivated by the development of the first multi-objective version of CGO in the literature. In addition, several analyses have been carried out on the uses of MOO in various fields of study. A survey of existing MOO solution methods reveals that many of them rely on complicated mathematical formulations and complex solution procedures. The fundamental contribution of this study is to suggest an MOO solution approach that does not involve sophisticated mathematical calculations. As the majority of existing optimizers are population-based, they can simultaneously handle a large number of candidate solutions, whereas other search methods employ the same procedure to iteratively improve a single solution. Recent novel optimizers have distinct optimization procedures to address different problems with various objectives. However, the well-known No-Free-Lunch (NFL) theorem [41] reasonably establishes that none of the existing search methods can tackle all problems efficiently. This statement is true for both single- and multi-objective optimization approaches. As a result, it can be concluded that important problems can be solved by modifying existing, well-known techniques. Some methods are better adapted to unconstrained problems than to constrained ones, which require careful operators or components.

MOCGO adopts an archive mechanism and a leader selection rule, techniques popularized by multi-objective particle swarm optimization, and both are used to find the best solutions. Heuristic algorithms can store and manage Pareto-optimal solutions in various ways; in this work, Pareto-optimal solutions are stored in an archive. Evidently, the MOCGO algorithm's convergence originates from the CGO method, and CGO can enhance the quality of a solution chosen from the repository.

Nevertheless, it is difficult to identify a set of Pareto-optimal solutions with a wide range of variation. Chaos Game Optimization (CGO) [26] is a novel search algorithm that handles various optimization challenges, and its concept is based on chaos theory.

3 Multi-objective chaos game optimization (MOCGO)

The next part describes the CGO, its inspiration, and the mathematical model of the optimization technique. Then, the multi-objective extension of this method and its features are described.

3.1 Chaos game optimization (CGO)

Talatahari and Azizi [26] devised the CGO, a population-based metaheuristic algorithm that replicates the self-similar and self-organized dynamical systems of chaos theory. The majority of chaotic processes exhibit fractal graphical forms. The chaos game generates fractals by starting with a polygon and a randomly chosen initial point; a sequence of points is then built iteratively to create a picture with a comparable form at various scales. The number of vertices dictates the primary shape of the polygon. A Sierpinski triangle is formed from three vertices (see Fig. 1). As can be seen in Fig. 1, the triangle is repeatedly split into sub-triangles.
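As a concrete illustration of the chaos game described above, the short Python sketch below (a toy reconstruction for illustration, not code from the CGO paper) generates points of a Sierpinski triangle by repeatedly jumping halfway toward a randomly chosen vertex.

```python
import random

def chaos_game_sierpinski(n_points=10000, seed=0):
    """Generate Sierpinski-triangle points: start from an arbitrary point and
    repeatedly move halfway toward a randomly selected vertex of the triangle."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
    x, y = random.random(), random.random()   # random starting point
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)      # roll the "dice" to pick a vertex
        x, y = (x + vx) / 2, (y + vy) / 2     # jump halfway toward that vertex
        points.append((x, y))
    return points

pts = chaos_game_sierpinski()
print(len(pts), pts[-1])                      # plotting pts reproduces Fig. 1's pattern
```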

Fig. 1: The process of creating the Sierpinski triangle

The CGO method considers several solution candidates that represent eligible seeds within a Sierpinski triangle. The initial positions of the eligible seeds in the search space are picked at random. In each iteration, the algorithm generates four new seeds (\({X}_{\mathrm{new}}\)) for the next iteration based on the location of each seed. The new seeds are constructed using three vertices in the search space: \({X}_{i}\), \({X}_{Mean}\), and \({X}_{best}\). \({X}_{i}\) represents the location of the ith eligible seed, \({X}_{Mean}\) represents the mean of a randomly selected collection of eligible seeds, and \({X}_{best}\) represents the location of the best seed. These three vertices form a temporary triangle, and the selected vertices are marked red (\({MG}_{i}\)), blue (\({X}_{i}\)), and green (\(GB\)), respectively. A die with two red faces, two blue faces, and two green faces is used. Figure 2 shows the temporary triangle.

Fig. 2: The temporary triangles in the search space

There are four ways to control and adjust the CGO algorithm's exploration and exploitation rate by manipulating the movement constraints of the seeds. Four distinct formulations for \({\alpha }_{i}\) are presented as follows [26]:

$$\alpha_{i} = \left\{ {\begin{array}{*{20}c} {{\text{rand}}} \\ {2 \times {\text{rand}}} \\ {\left( {\delta \times {\text{rand}}} \right) + 1} \\ {\left( {\varepsilon \times {\text{rand}}} \right) + \left( {\sim \varepsilon } \right)} \\ \end{array} } \right.$$
(1)

where rand denotes a uniformly distributed random number in the interval [0, 1], while \(\delta\) and \(\varepsilon\) are random integers in the interval [0, 1]. As the die is rolled, the ith seed is moved toward the corresponding vertex based on which color comes up. The die is modeled using a combination of three random factors \({\alpha }_{i}\), \({\beta }_{i}\), and \({\gamma }_{i}\). Each initial seed contributes to the production of four new seeds, based on the vertices of the temporary triangles, as follows [26]:

$$X_{new}^{1} = X_{i} + \alpha_{i}^{p\left( 1 \right)} \times \left( {\beta_{i} \times GB - \gamma_{i} \times MG_{i} } \right)$$
(2)
$$X_{new}^{2} = GB + \alpha_{i}^{p\left( 2 \right)} \times \left( {\beta_{i} \times MG_{i} - \gamma_{i} \times X_{i} } \right)$$
(3)
$$X_{new}^{3} = MG_{i} + \alpha_{i}^{p\left( 3 \right)} \times \left( {\beta_{i} \times GB - \gamma_{i} \times X_{i} } \right)$$
(4)
$$X_{new}^{4} = X_{i} \left( {x_{i}^{k} = x_{i}^{k} + R} \right)$$
(5)

where k is a random integer in the range [1, d], d is the number of design variables, and R is a uniformly distributed random number in [0, 1]. In addition, \({\beta }_{i}\) and \({\gamma }_{i}\) are random integers equal to 1 or 2, which model the probabilities of rolling the die. It is also worth noting that \({\alpha }_{i}\) produces four unique random vectors; the exploration and exploitation rate of the CGO algorithm is controlled and adjusted by changing their order with a random permutation p of the integers 1 to 4. The process is carried out for each seed and repeated in every iteration until a termination criterion is satisfied. A schematic representation of this procedure is shown in Fig. 3.

Fig. 3: The process of forming temporary triangles
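The seed-generation rules of Eqs. (1)-(5) can be summarized in the following Python sketch. It is a schematic transcription of the formulas above, with illustrative variable names; the interpretation of \((\sim \varepsilon)\) as the complement \(1-\varepsilon\) is our assumption.

```python
import numpy as np

def cgo_new_seeds(X_i, GB, MG_i, rng):
    """Generate the four candidate seeds of Eqs. (2)-(5) for one seed X_i,
    given the best seed GB and the mean MG_i of a random group of seeds."""
    d = X_i.size
    beta, gamma = rng.integers(1, 3, size=2)      # random integers in {1, 2}
    delta, eps = rng.integers(0, 2, size=2)       # random integers in {0, 1}
    alphas = np.array([rng.random(),              # the four alpha_i options of Eq. (1)
                       2 * rng.random(),
                       delta * rng.random() + 1,
                       eps * rng.random() + (1 - eps)])   # (~eps) read as 1 - eps
    p = rng.permutation(4)                        # random ordering of the alpha options
    x1 = X_i + alphas[p[0]] * (beta * GB - gamma * MG_i)    # Eq. (2)
    x2 = GB + alphas[p[1]] * (beta * MG_i - gamma * X_i)    # Eq. (3)
    x3 = MG_i + alphas[p[2]] * (beta * GB - gamma * X_i)    # Eq. (4)
    x4 = X_i.copy()                                         # Eq. (5): perturb one dimension
    x4[rng.integers(d)] += rng.random()
    return np.vstack([x1, x2, x3, x4])

rng = np.random.default_rng(0)
print(cgo_new_seeds(rng.random(5), rng.random(5), rng.random(5), rng).shape)  # (4, 5)
```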

3.2 Multi-objective chaos game optimization (MOCGO)

There is a wide variety of multi-objective algorithms and methods for solving complex challenges. Since no method or algorithm solves every multi-objective problem with complete efficiency, researchers are constantly looking for new ideas and methods with improved capabilities. In this study, we propose a multi-objective CGO method to solve multi-objective problems; the comparisons in the results section show how it performs against other methods. Because CGO was designed for single-objective optimization, it cannot be utilized directly for multi-objective optimization. Therefore, we introduce a multi-objective variant of CGO for addressing optimization problems that must satisfy several objectives simultaneously. CGO is extended to multi-objective optimization through three additional mechanisms. These mechanisms are similar to those used by MOGWO [42], but the exploration and exploitation phases of MOCGO are inherited from the CGO algorithm. The mechanisms are discussed in detail as follows:

The Archive: A fixed-size external archive is integrated into CGO to save the non-dominated Pareto-optimal solutions obtained so far. The archive has its own controller that decides which solutions are allowed in and which are not, and the number of saved solutions is restricted. During the iterations, each newly generated non-dominated solution is compared against the archive members. Three scenarios are possible (a code sketch of this update rule follows the list):

1. It is not possible to add the new solution to the archive if at least one archive member dominates it.

2. If the new solution dominates one or more of the existing archive members, the dominated members are removed and the new solution is added to the archive.

3. If neither the new solution nor any archive member dominates the other, the new solution is added to the archive.
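A minimal Python sketch of this archive-update rule follows (assuming minimization and the `dominates` helper sketched in Sect. 2; the data layout, a list of (solution, objective-vector) pairs, is an illustrative choice).

```python
def update_archive(archive, new_sol, new_obj):
    """Apply the three archive scenarios listed above.
    `archive` is a list of (solution, objective_vector) pairs."""
    # Scenario 1: some archive member dominates the new solution -> reject it.
    if any(dominates(obj, new_obj) for _, obj in archive):
        return archive
    # Scenario 2: the new solution dominates one or more members -> remove them.
    archive = [(sol, obj) for sol, obj in archive if not dominates(new_obj, obj)]
    # Scenario 3: the new solution is mutually non-dominated -> add it.
    archive.append((new_sol, new_obj))
    return archive   # overflow beyond the fixed size is handled by the grid mechanism
```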


The grid mechanism: this is the second mechanism integrated into CGO, used to manage the non-dominated solutions in the archive. When the archive is full, the grid mechanism is activated so that the segmentation of the objective space is reorganized and the most crowded region is identified, from which one solution is removed. To improve the diversity of the final approximated Pareto-optimal front, the new solution is then inserted into the least crowded region. The more solutions a hypercube contains, the greater the chance that one of them will be eliminated. A special case arises when a new solution lies outside the existing hypercubes: the grid segments are then extended to accommodate it, which may also change the grid coordinates of other solutions.
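The grid mechanism can be sketched as follows: every archived objective vector is mapped to a hypercube index by discretizing each objective range, and when the archive overflows a member is removed from the most crowded hypercube. This is a simplified illustration; the number of grid divisions is an assumed parameter, not a value given by the authors.

```python
import numpy as np

def grid_indices(objs, n_divisions=10):
    """Map each objective vector to a hypercube index by splitting every
    objective range into `n_divisions` equal segments."""
    objs = np.asarray(objs, dtype=float)
    lo, hi = objs.min(axis=0), objs.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)                      # avoid division by zero
    cells = np.floor((objs - lo) / span * n_divisions).clip(0, n_divisions - 1)
    return [tuple(c) for c in cells.astype(int)]

def most_crowded_index(archive, n_divisions=10):
    """Return the index of one archive member located in the most crowded hypercube."""
    cells = grid_indices([obj for _, obj in archive], n_divisions)
    counts = {c: cells.count(c) for c in cells}
    worst_cell = max(counts, key=counts.get)
    return cells.index(worst_cell)
```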


The Leader Selection Mechanism: this is the last mechanism included in CGO. The leader guides the other search agents toward promising areas of the search space, with the goal of obtaining solutions close to the global optimum. However, due to the Pareto optimality principles covered above, it is difficult to compare solutions in a multi-objective search space, and the leader selection process was created to address this problem. As already indicated, the best non-dominated solutions found so far are archived. The leader selection component chooses one of the non-dominated solutions from the least crowded regions of the search space. For each hypercube, the selection is performed by a roulette wheel with the following probability:

$$P_{i} = \frac{C}{{N_{i} }}$$
(6)

where \(N_{i}\) is the number of obtained Pareto-optimal solutions in the \(i\)th hypercube and \(C\) is a constant number greater than one.

According to Eq. (6), less crowded hypercubes are more likely to provide new leaders: when fewer solutions occupy a hypercube, that hypercube becomes a more likely candidate for leader selection. As the archive is updated, its diversity is protected by the grid mechanism and the leader selection component. The roulette wheel also gives a low probability of selecting leaders from the most populated hypercubes, which helps MOCGO avoid converging to a local Pareto front.
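A sketch of the roulette-wheel leader selection of Eq. (6) is shown below; hypercubes holding fewer archived solutions receive a higher selection probability. This is again an illustrative implementation that reuses the grid helper above, and the value of C is an arbitrary assumption.

```python
import numpy as np

def select_leader(archive, n_divisions=10, C=10.0, rng=None):
    """Pick a leader with per-hypercube probability proportional to C / N_i (Eq. 6),
    then return one archive member from the chosen (less crowded) hypercube."""
    rng = rng or np.random.default_rng()
    cells = grid_indices([obj for _, obj in archive], n_divisions)
    unique_cells = list(set(cells))
    counts = np.array([cells.count(c) for c in unique_cells], dtype=float)
    probs = (C / counts) / (C / counts).sum()            # normalized roulette-wheel weights
    chosen = unique_cells[rng.choice(len(unique_cells), p=probs)]
    members = [i for i, c in enumerate(cells) if c == chosen]
    return archive[rng.choice(members)]
```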

Obviously, the MOCGO algorithm derives its convergence from the CGO method, and selecting leaders from the archive further improves the consistency of the solutions. On the other hand, it is difficult to determine which solutions are best according to the Pareto principle when there is a lot of variability; this issue is resolved by the leader selection and archive maintenance components. The computational complexity of MOCGO is \(O({mn}^{2})\), where \(n\) is the population size and \(m\) is the number of objectives. This is a significant improvement over traditional methods such as NSGA [43] and SPEA [44], which have \(O({mn}^{3})\) complexity. MOCGO's pseudo-code is shown in Fig. 4.

Fig. 4: Pseudo-code of the MOCGO algorithm
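To complement the pseudo-code in Fig. 4, the main MOCGO loop can be paraphrased in Python as follows. This is a high-level, non-authoritative sketch that reuses the helper functions sketched above; details such as bound handling, the replacement rule for seeds, and the archive size limit are simplifying assumptions.

```python
import numpy as np

def mocgo(obj_fun, lb, ub, n_seeds=50, n_iter=1000, max_archive=100, seed=0):
    """High-level MOCGO loop: CGO seed generation guided by a leader drawn from
    the external archive of non-dominated solutions (illustrative sketch only).
    `obj_fun` returns the objective vector of a candidate solution."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = lb + rng.random((n_seeds, lb.size)) * (ub - lb)          # initial seeds
    archive = []
    for x in X:
        archive = update_archive(archive, x, obj_fun(x))
    for _ in range(n_iter):
        leader, _ = select_leader(archive)                       # GB taken from the archive
        for i in range(n_seeds):
            group = X[rng.choice(n_seeds, rng.integers(1, n_seeds + 1), replace=False)]
            MG_i = group.mean(axis=0)                            # mean of a random group
            for cand in cgo_new_seeds(X[i], leader, MG_i, rng):
                cand = np.clip(cand, lb, ub)
                archive = update_archive(archive, cand, obj_fun(cand))
                X[i] = cand            # simplified replacement rule for this sketch
        while len(archive) > max_archive:                        # grid-based trimming
            archive.pop(most_crowded_index(archive))
    return archive
```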

4 Results and discussion

Performance measurements and case studies are used to evaluate how well the methods in this section perform. The test suite includes advanced multi-modal benchmark functions and real-world engineering design and mathematics problems, which are used to test how well multi-objective optimizers handle non-convex and nonlinear constraints. Experiments are carried out in MATLAB (R2021a) on a Macintosh (macOS Monterey) with a Core i9 processor and 16 GB of RAM.

4.1 Performance metrics

The algorithms' performance is evaluated using the following four metrics [45,46,47]:

Generational Distance (GD) is one of the measures regularly utilized to assess the convergence of multi-objective metaheuristic optimization algorithms. It measures the distance between the solution candidates obtained by a method and the true Pareto front [48].

$$GD = \left( {\frac{1}{{n_{pf} }}\mathop \sum \limits_{i = 1}^{{n_{pf} }} dis_{i}^{2} } \right)^{\frac{1}{2}}$$
(7)

Spacing (S) measures how uniformly the solution candidates are distributed within the set obtained by an optimization technique [49].

$$\begin{array}{*{20}l} {S = \left( {\frac{1}{{n_{pf} }}\mathop \sum \limits_{i = 1}^{{n_{pf} }} \left( {d_{i} - \overline{d}} \right)^{2} } \right)^{\frac{1}{2}} } \hfill & {{\text{Where}}} \hfill & { \overline{d} = \frac{1}{{n_{pf} }}\mathop \sum \limits_{i = 1}^{{n_{pf} }} d_{i} } \hfill \\ \end{array}$$
(8)

The maximum spread (MS) measures the extent of a solution set, i.e., how widely the obtained solution candidates cover the range of the true Pareto front [50].

$${\text{MS}} = \left[ {\frac{1}{m}\mathop \sum \limits_{i = 1}^{m} \left[ {\frac{{\min \left( {f_{i}^{{{\text{max}}}} ,F_{i}^{{{\text{max}}}} } \right) - {\text{max}}\left( {f_{i}^{{{\text{min}}}} ,F_{i}^{{{\text{min}}}} } \right)}}{{F_{i}^{{{\text{max}}}} - F_{i}^{{{\text{min}}}} }}} \right]^{2} } \right]^{\frac{1}{2}}$$
(9)

The Inverted Generational Distance (IGD) is a statistic for comparing the Pareto front approximations obtained by various multi-objective algorithms [51].

$${\text{IGD}} = \frac{{\sqrt {\mathop \sum \nolimits_{i = 1}^{n} d_{i}^{2} } }}{n}$$
(10)
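For reference, the four metrics of Eqs. (7)-(10) can be computed along the following lines (a plain Python/NumPy sketch using Euclidean distances and SciPy's `cdist`; `front` and `true_front` are 2-D arrays of objective vectors).

```python
import numpy as np
from scipy.spatial.distance import cdist

def gd(front, true_front):
    """Generational Distance, Eq. (7): root-mean-square distance from each obtained
    point to its nearest point on the true Pareto front."""
    d = cdist(front, true_front).min(axis=1)
    return np.sqrt((d ** 2).mean())

def spacing(front):
    """Spacing, Eq. (8): spread of nearest-neighbour distances within the obtained front."""
    dist = cdist(front, front)
    np.fill_diagonal(dist, np.inf)
    d = dist.min(axis=1)
    return np.sqrt(((d - d.mean()) ** 2).mean())

def maximum_spread(front, true_front):
    """Maximum Spread, Eq. (9): overlap of the obtained objective ranges with the
    ranges of the true Pareto front."""
    f_min, f_max = front.min(axis=0), front.max(axis=0)
    F_min, F_max = true_front.min(axis=0), true_front.max(axis=0)
    ratio = (np.minimum(f_max, F_max) - np.maximum(f_min, F_min)) / (F_max - F_min)
    return np.sqrt((ratio ** 2).mean())

def igd(front, true_front):
    """Inverted Generational Distance, Eq. (10): measured from the true front
    toward the obtained approximation."""
    d = cdist(true_front, front).min(axis=1)
    return np.sqrt((d ** 2).sum()) / len(true_front)
```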

4.2 Experimental setting

This section compares the proposed multi-objective Chaos Game Optimizer (MOCGO) with other well-known competitive approaches on numerous benchmark problems. The comparisons were carried out to validate the suggested method's outcomes in terms of several standard performance measures, namely IGD, MS, GD, and S. Several comparative methods have been used, including the multi-objective Particle Swarm Optimizer (MOPSO) [17], multi-objective Grey Wolf Optimizer (MOGWO) [52], multi-objective Ant Lion Optimizer (MOALO) [53], multi-objective Crystal Structure Algorithm (MOCryStAl) [38], multi-objective Harris Hawks Optimization (MOHHO) [54], and multi-objective Salp Swarm Algorithm (MSSA) [55]. The population size (number of candidate solutions, N) and the total number of iterations (T) of all tested algorithms are fixed at 50 and 1000, respectively. The parameter settings of the comparative methods are taken from their original papers and are presented in Table 1. The benchmark functions used in the experiments are presented in Tables 2, 3, and 8.

Table 1 Parameters of methods
Table 2 Bi-objective CEC-09 benchmark functions
Table 3 Tri-objective CEC-09 benchmark functions

4.2.1 Discussion of the CEC-09 test function

The outcomes of the comparison methods using the CEC-09 are provided in the following section.

Tables 2 and 3 include descriptions of the evaluated Bi-objective and Tri-objective CEC-09 benchmark functions. These problems are usually used to evaluate the performance of the multi-objective methods in the literature. The following section contains the findings of the comparison approaches.

Table 4 provides the statistical findings for the CEC-09 benchmark functions in terms of the IGD performance measure. The findings clearly reveal that the suggested MOCGO produced outstanding outcomes compared with previous methodologies. MOCGO obtained the best results in six out of ten test cases (i.e., UF2, UF3, UF4, UF8, UF9, and UF10), followed by MOGWO, which obtained the best results in three out of ten test cases (i.e., UF5, UF6, and UF7), and MOPSO, which obtained the best result in one test case (i.e., UF1). The results in Table 4 show the strength of the proposed method in solving various complex problems with multiple objectives compared with other similar methods in the literature. The proposed modifications in the new MOCGO method clearly helped improve the results and yielded substantial gains in all comparisons, which confirms the ability of the proposed MOCGO method to solve such problems. These problems are usually hard to solve with traditional methods, and a method that achieves excellent results on them can be considered an advanced search method for complicated problems.

Table 4 Statistical analysis of the CEC-09 benchmark function to determine IGD performance

Table 5 analyzes the CEC-09 benchmark functions using the GD performance metric. The findings clearly demonstrate that the proposed MOCGO outperformed the comparison approaches. MOCGO achieved the top outcomes in six out of ten test cases, followed by MOGWO, which acquired the best results in three out of ten test cases (i.e., UF1, UF5, and UF6), and MOPSO, which obtained the best result in one test case (i.e., UF7). The findings in Table 5 demonstrate the suggested method's robustness in handling various complicated problems with multiple objectives compared with other comparable techniques in the literature. The proposed new MOCGO method clearly improved the results and achieved substantial results in all measurements, confirming its strength in solving such problems. The SD values showed that the proposed approach produced consistent results. We conclude from these results that the proposed multi-objective method is effective and can solve complicated problems.

Table 5 Statistical analysis of the CEC-09 benchmark function to determine GD performance

Table 6 gives the statistical outcomes of the CEC-09 benchmark functions in terms of the MS performance metric. The results show that the suggested MOCGO performed better than the other comparative methods. MOCGO obtained the best results in eight out of ten test cases, while MOPSO obtained the best results in the remaining two (i.e., UF3 and UF8). The results presented in Table 6 confirm the quality of the results produced by the proposed MOCGO method for tackling different complex problems with multiple objectives compared with similar methods employed in the literature. The proposed MOCGO method improved the results in all mentioned measurements, proving its robustness in addressing such problems. Moreover, according to these results, the proposed method obtained more Pareto-optimal solutions than the other comparative algorithms in the decision space.

Table 6 Statistical analysis of the CEC-09 benchmark function to determine MS performance

Table 7 summarizes the statistical findings for the CEC-09 benchmark functions in terms of the S performance metric. The findings demonstrate that the suggested MOCGO approach outperformed the comparable methodologies (MOPSO, MOGWO, and MOALO). MOCGO achieved the most reliable results in six out of ten test cases (i.e., UF2, UF3, UF5, UF8, UF9, and UF10), while MOALO produced the best results in the remaining four (i.e., UF1, UF4, UF6, and UF7). The results shown in Table 7 verify the quality of the results obtained by the proposed MOCGO method for addressing several complex problems with multiple objectives compared with other comparable methods used in the literature. The proposed MOCGO method clearly improved the results in terms of all considered measurements, demonstrating its robustness for such optimization problems. The SD values also showed that the proposed strategy consistently produced similar outcomes regardless of the evaluation measure.

Table 7 Statistical analysis of the CEC-09 benchmark function to determine S performance

Figures 5 and 6 present the best Pareto fronts obtained on the CEC-09 problems by MOPSO, MOGWO, MOALO, and the proposed MOCGO algorithm. Figure 5 depicts the outcomes of the comparative methodologies on UF1-UF5, and Fig. 6 illustrates the results on UF6-UF10. Based on these figures, the proposed MOCGO displays excellent convergence as it approaches all of the true Pareto-optimal fronts, whereas the MOPSO, MOGWO, and MOALO methods exhibit the worst convergence, consistent with the tabulated results. The suggested approach is compared with these well-known methods on each problem to illustrate its usefulness. As demonstrated in Figs. 5 and 6, MOCGO can cover all Pareto regions, while the optimal regions reported by the other methods are only partial. This demonstrates MOCGO's excellent performance and its efficacy.

Fig. 5: True and obtained Pareto front results for the CEC-09 problems (UF1-UF5)

Fig. 6: True and obtained Pareto front results for the CEC-09 problems (UF6-UF10)

4.2.2 Discussion of the ZDT and DTLZ test function

The advanced multi-modal benchmark functions with fixed dimension, including ZDT (i.e., ZDT1-ZDT6) and DTLZ (DTLZ2 and DTLZ4), are tested in this section to further validate the performance of the proposed MOCGO algorithm. The findings achieved using the proposed approach are compared with the results acquired using other well-known comparison methods (i.e., MOPSO, MOGWO, and MOALO). The descriptions of the tested multi-modal benchmark functions with fixed dimension are presented in Table 8.

Table 8 Multi-modal benchmark functions with fixed-dimension

The ZDT and DTLZ benchmark functions are statistically compared in Table 9 using the GD performance measure. Compared with the other approaches, the suggested MOCGO performed exceptionally well. MOCGO obtained the best results in four out of seven test cases (i.e., ZDT1, ZDT2, ZDT3, and ZDT4), followed by MOPSO, which obtained the best results in two out of seven test cases (i.e., ZDT6 and DTLZ2), and MOGWO, which obtained the best result in one test case (i.e., DTLZ4). The results shown in Table 9 compare the proposed method with similar approaches that have been used to solve advanced, difficult problems with multiple objectives; according to the findings, the suggested approach is superior in this regard. In addition, the standard deviation values demonstrate that the suggested method produces consistent results across multiple runs.

Table 9 Statistical analysis of the mathematical functions to determine GD performance

Table 10 summarizes the statistical outcomes for the ZDT and DTLZ benchmark functions using IGD. Compared to other comparison algorithms, the results show that MOCGO performed quite well. MOCGO achieved the best results in five problems (i.e., ZDT1, ZDT2, ZDT3, ZDT4, ZDT6). MOPSO finished in second place, achieving the best possible scores in two of the seven tests (DTLZ2 and DTLZ4). In contrast to other comparable approaches utilized in the literature, Table 10 demonstrates the power of the suggested method in addressing various advanced complicated problems with multiple objectives. In addition, the SD values demonstrated that the suggested method is capable of producing results that are consistent across a range of different applications.

Table 10 Statistical analysis of the mathematical functions to determine IGD performance

Tables 11 and 12 show the MS and S performance on the ZDT and DTLZ benchmark functions. The findings indicate that the suggested MOCGO produced outstanding results compared with the other comparison algorithms. According to the MS measure in Table 11, MOCGO achieved the best results in six out of seven test instances, including ZDT2, ZDT3, ZDT6, DTLZ2, and DTLZ4, while MOGWO finished second, achieving the highest marks in one of the seven tests (ZDT4). The results of the S measure are presented in Table 12: out of seven test cases, MOCGO obtained the best results in three (i.e., ZDT3, ZDT6, and DTLZ4), MOPSO came second with the best results in two (i.e., ZDT2 and ZDT4), MOGWO obtained the best result in one case (i.e., DTLZ2), and MOALO obtained the best result in one case (i.e., ZDT1). In contrast to other similar methods utilized in the literature, these results demonstrate the power of the suggested approach in addressing various advanced, complicated problems with multiple objectives. The SD values confirm the suggested technique's capability to provide consistent results.

Table 11 Statistical analysis of the mathematical functions to determine MS performance
Table 12 Statistical analysis of the mathematical functions to determine S performance

Figures 7 and 8 show the best Pareto fronts produced by MOPSO, MOGWO, MOALO, and the proposed MOCGO algorithm on the ZDT and DTLZ problems. The results of the comparative methods on ZDT (i.e., ZDT1-ZDT6) are shown in Fig. 7, and the results on DTLZ (i.e., DTLZ2 and DTLZ4) are shown in Fig. 8. These diagrams demonstrate that the proposed MOCGO approaches all true Pareto-optimal fronts with almost complete convergence, whereas the MOPSO, MOGWO, and MOALO approaches exhibit the poorest convergence.

Fig. 7: True and obtained Pareto front results for the ZDT problems

Fig. 8: True and obtained Pareto front results for the DTLZ problems

4.2.3 Discussion of engineering problems

This section tests the proposed MOCGO on eight multi-objective engineering problems (see Appendix), some of which are discussed as follows:

4.2.3.1 The 4-bar truss

In this well-known structural optimization problem, shown in Fig. 9 [56], the goal is to minimize both the volume (\({f}_{1}\)) and the displacement (\({f}_{2}\)) of a 4-bar truss. The following equations involve the four design variables (\({x}_{1}-{x}_{4}\)), which represent the cross-sectional areas of members 1 to 4.

$${\text{Minimize}}: f_{1} \left( x \right) = 200 \times \left( {2 \times x_{1} + \sqrt 2 \times x_{2} + \sqrt {x_{3} } + x_{4} } \right)$$
(11)
$${\text{Minimize}}: f_{2} \left( x \right) = 0.01 \times \left( {\frac{2}{{x_{1} }} + \frac{{2\sqrt 2 }}{{x_{2} }} - \frac{{2\sqrt 2 }}{{x_{3} }} + \frac{2}{{x_{4} }}} \right)$$
(12)
$$1 \le x_{1} \le 3,\quad 1.4142 \le x_{2} \le 3,\quad 1.4142 \le x_{3} \le 3,\quad 1 \le x_{4} \le 3$$
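As an illustration, the two objectives of Eqs. (11)-(12) translate directly into code; the sketch below simply mirrors the equations as written above, and any multi-objective optimizer (including MOCGO) would call such a function for each candidate design.

```python
import numpy as np

def four_bar_truss(x):
    """Bi-objective 4-bar truss of Eqs. (11)-(12): structural volume f1 and joint
    displacement f2; x = (x1, x2, x3, x4) are the member cross-section variables."""
    x1, x2, x3, x4 = x
    f1 = 200.0 * (2.0 * x1 + np.sqrt(2.0) * x2 + np.sqrt(x3) + x4)
    f2 = 0.01 * (2.0 / x1 + 2.0 * np.sqrt(2.0) / x2 - 2.0 * np.sqrt(2.0) / x3 + 2.0 / x4)
    return np.array([f1, f2])

lb = np.array([1.0, 1.4142, 1.4142, 1.0])   # variable bounds given with Eq. (12)
ub = np.array([3.0, 3.0, 3.0, 3.0])
print(four_bar_truss((lb + ub) / 2))
```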
Fig. 9: The schematic view of the four-bar truss

4.2.3.2 The welded beam

Ray and Liew [57] proposed this problem with four design constraints for welded beams; the welded beam is shown schematically in Fig. 10. In this problem, the manufacturing cost (\({f}_{1}\)) and the beam deflection (\({f}_{2}\)) of a welded beam should be minimized. The four design variables are the weld thickness (\({x}_{1}\)), the length of the clamped bar (\({x}_{2}\)), the height of the clamped bar (\({x}_{3}\)), and the thickness of the clamped bar (\({x}_{4}\)).

$${\text{Minimize}}: f_{1} \left( x \right) = 1.10471 \times x_{1}^{2} \times x_{2} + 0.04811 \times x_{3} \times x_{4} \times \left( {14 + x_{2} } \right)$$
(13)
$${\text{Minimize}}: f_{2} \left( x \right) = 65856000/\left( {30 \times 10^{6} \times x_{4} \times x_{3}^{3} } \right)$$
(14)
$${\text{where}}: g_{1} \left( x \right) = \tau - 13600$$
(15)
$$g_{2} \left( x \right) = \sigma - 30000$$
(16)
$$g_{3} \left( x \right) = x_{1} - x_{4}$$
(17)
$$g_{4} \left( x \right) = 6000 - P$$
(18)
$$0.125 \le x_{1} \le 5, 0.1 \le x_{2} \le 10$$
$$0.1 \le x_{3} \le 10, 0.125 \le x_{4} \le 5$$
$${\text{where}}: Q = 6000 \times \left( {14 + \frac{{x_{2} }}{2}} \right),\quad D = \sqrt {\frac{{x_{2}^{2} }}{4} + \frac{{\left( {x_{1} + x_{3} } \right)^{2} }}{4}} $$
(19)
$$J = 2 \times \left( {x_{1} \times x_{2} \times \sqrt 2 \times \left( {\frac{{x_{2}^{2} }}{12} + \frac{{\left( {x_{1} + x_{3} } \right)^{2} }}{4}} \right)} \right)$$
(20)
$$\alpha = \frac{6000}{{\sqrt 2 \times x_{1} \times x_{2} }}$$
(21)
$$\beta = Q \times \frac{D}{J}$$
(22)
Fig. 10: The welded beam

4.2.3.3 Disk brake

According to Ray and Liew [56], several constraints must be considered when designing a disc brake. Two goals need to be attained: minimizing the brake mass (\({f}_{1}\)) and the stopping time (\({f}_{2}\)). Figure 11 shows a schematic representation of the disc brake. The design variables are the disc's inner radius (\({x}_{1}\)), the outer radius (\({x}_{2}\)), the engaging force (\({x}_{3}\)), and the number of friction surfaces (\({x}_{4}\)); the five constraints are given below as equations.

$${\text{Minimize}}: f_{1} \left( x \right) = 4.9 \times 10^{ - 5} \times \left( {x_{2}^{2} - x_{1}^{2} } \right) \times \left( {x_{4} - 1} \right)$$
(23)
$${\text{Minimize}}: f_{2} \left( x \right) = \frac{{9.82 \times 10^{6} \times \left( {x_{2}^{2} - x_{1}^{2} } \right)}}{{\left( {x_{2}^{3} - x_{1}^{3} } \right) \times x_{4} \times x_{3} }}$$
(24)
$$g_{1} \left( x \right) = 20 + x_{1} - x_{2}$$
(25)
$$g_{2} \left( x \right) = 2.5 \times \left( {x_{4} + 1} \right) - 30$$
(26)
$$g_{3} \left( x \right) = \frac{{x_{3} }}{{3.14 \times \left( {x_{2}^{2} - x_{1}^{2} } \right)^{2} }} - 0.4$$
(27)
$$g_{4} \left( x \right) = \frac{{2.22 \times 10^{ - 3} \times x_{3} \times \left( {x_{2}^{3} - x_{1}^{3} } \right)}}{{\left( {x_{2}^{2} - x_{1}^{2} } \right)^{2} }} - 1$$
(28)
$$g_{5} \left( x \right) = 900 - \frac{{2.66 \times 10^{ - 2} \times x_{3} \times x_{4} \times \left( {x_{2}^{3} - x_{1}^{3} } \right)}}{{\left( {x_{2}^{2} - x_{1}^{2} } \right)^{2} }}$$
(29)
$$55 \le x_{1} \le 80, 75 \le x_{2} \le 110$$
$$1000 \le x_{3} \le 3000, 2 \le x_{4} \le 20$$
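The disc brake model of Eqs. (23)-(29) can likewise be evaluated as below. The sketch mirrors the equations above and assumes the g(x) ≤ 0 feasibility convention; the sample design point is arbitrary.

```python
import numpy as np

def disk_brake(x):
    """Disc brake problem: objective vector (f1, f2) of Eqs. (23)-(24) and the
    constraint values of Eqs. (25)-(29), assumed feasible when all g <= 0."""
    x1, x2, x3, x4 = x
    f1 = 4.9e-5 * (x2**2 - x1**2) * (x4 - 1.0)
    f2 = 9.82e6 * (x2**2 - x1**2) / ((x2**3 - x1**3) * x4 * x3)
    g = np.array([
        20.0 + x1 - x2,                                                      # Eq. (25)
        2.5 * (x4 + 1.0) - 30.0,                                             # Eq. (26)
        x3 / (3.14 * (x2**2 - x1**2) ** 2) - 0.4,                            # Eq. (27)
        2.22e-3 * x3 * (x2**3 - x1**3) / (x2**2 - x1**2) ** 2 - 1.0,         # Eq. (28)
        900.0 - 2.66e-2 * x3 * x4 * (x2**3 - x1**3) / (x2**2 - x1**2) ** 2,  # Eq. (29)
    ])
    return np.array([f1, f2]), g

objs, cons = disk_brake([60.0, 90.0, 2000.0, 10.0])
print(objs, bool(np.all(cons <= 0)))
```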
Fig. 11: The disk brakes

4.2.3.4 Speed reducer

It is well known in mechanical engineering that the design of a speed reducer must minimize the component's mass (\({f}_{1}\)) and stress (\({f}_{2}\)) (see Fig. 12). The details of this example, with seven variables and eleven constraints, can be found in [56, 58].

$${\text{Minimize}}: f_{1} \left( x \right) = 0.7854 \times x_{1} \times x_{2}^{2} \times \left( {3.3333 \times x_{3}^{2} + 14.9334 \times x_{3} - 43.0934} \right) \ldots - 1.508 \times x_{1} \times \left( {x_{6}^{2} + x_{7}^{2} } \right)$$
(30)
$${\text{Minimize}}: f_{2} \left( x \right) = \frac{{\sqrt {\left( {\frac{{745 \times x_{4} }}{{x_{2} \times x_{3} }}} \right)^{2} + 1.69 \times 10^{7} } }}{{0.1 \times x_{6}^{3} }}$$
(31)
$${\text{where}}: g_{1} \left( x \right) = 27/\left( {x_{1} \times x_{2}^{2} \times x_{3} } \right) - 1$$
(32)
$$g_{2} \left( x \right) = 397.5/\left( {x_{1} \times x_{2}^{2} \times x_{3}^{2} } \right) - 1$$
(33)
$$g_{3} \left( x \right) = \frac{{1.93 \times x_{4}^{3} }}{{x_{2} \times x_{3} \times x_{6}^{4} }} - 1$$
(34)
$$g_{4} \left( x \right) = \frac{{1.93 \times x_{5}^{3} }}{{x_{2} \times x_{3} \times x_{7}^{4} }} - 1$$
(35)
$$g_{5} \left( x \right) = \frac{{\sqrt {\frac{{745 \times x_{4} }}{{x_{2} }} \times \left( {x_{3}^{2} + 16.9 \times 10^{6} } \right)} }}{{110 \times x_{6}^{3} }} - 1$$
(36)
$$g_{6} \left( x \right) = \frac{{\sqrt {\frac{{745 \times x_{4} }}{{x_{2} }} \times \left( {x_{3}^{2} + 157.9 \times 10^{6} } \right)} }}{{85 \times x_{7}^{3} }} - 1$$
(37)
$$g_{7} \left( x \right) = \left( {\left( {x_{2} \times x_{3} } \right)/40} \right) {-} 1$$
(38)
$$\tau = {\text{sqrt}}\left( {\alpha^{2} + 2 \times \alpha \times \beta \times \frac{{x_{2} }}{2 \times D} + \beta^{2} } \right)$$
(39)
$$\sigma = \frac{504000}{{x_{4} \times x_{3}^{2} }}$$
(40)
$${\text{tmpf}} = 4.013 \times \frac{{30 \times 10^{6} }}{196}$$
(41)
$$P = {\text{tmpf}} \times \sqrt {x_{3}^{2} \times \frac{{x_{4}^{6} }}{36}} \times \left( {1 - x_{3} \times \frac{{\sqrt {\frac{30}{{48}}} }}{28}} \right)$$
(42)
Fig. 12: The speed reducer

4.2.3.5 Comparison of MOCGO with MOPSO, MOALO, MOGWO

In this subsection, MOCGO is compared with MOPSO, MOALO, and MOGWO for solving engineering problems based on the average (Ave) and standard deviation (SD) criteria. The outcomes of the comparison methodologies in terms of GD, IGD, MS, and S are presented in Tables 13, 14, 15, and 16, respectively. Table 13 demonstrates that the proposed strategy achieved promising outcomes in almost all of the situations tested using the GD measure. MOPSO and MOALO each achieved some of the best results, whereas MOGWO did not obtain any of the best scores in that table. Table 14 presents the findings of the comparison approaches for all of the examined problems in terms of IGD. The proposed method also proved its ability to solve real-world engineering problems effectively, which is consistent with the results in terms of MS and S shown in Tables 15 and 16. It can be concluded that the proposed method can solve complex problems, with results proven on many tested problems, and it can be considered an attractive alternative in this domain for solving multi-objective problems.

Table 13 Statistical analysis of the engineering problems to determine GD performance
Table 14 Statistical analysis of the engineering problems to determine IGD performance
Table 15 Statistical analysis of the engineering problems to determine MS performance
Table 16 Statistical analysis of the engineering problems to determine S performance

Figures 13 and 14 show the best Pareto fronts produced by MOPSO, MOGWO, MOALO, and the proposed MOCGO algorithm on the given real-world industrial engineering problems. The results of the comparative methods on BNH, CONSTR, DISK BRAKE, and 4-BAR TRUSS are shown in Fig. 13, and the results on WELDED BEAM, OSY, SPEED REDUCER, and SRN are shown in Fig. 14. These diagrams confirm that the fronts obtained by the proposed MOCGO are very close to the actual Pareto-optimal fronts with almost complete convergence, whereas MOPSO, MOGWO, and MOALO demonstrate the poorest convergence. The proposed method is therefore preferable to the alternative methods for finding optimal Pareto front values.

Fig. 13: Pareto front results for BNH, CONSTR, DISK BRAKE, and 4-BAR TRUSS with MOCGO, MOPSO, MOALO, and MOGWO

Fig. 14: Pareto front results for WELDED BEAM, OSY, SPEED REDUCER, and SRN with MOCGO, MOPSO, MOALO, and MOGWO

4.2.3.6 Comparison of MOCGO with MOCryStAl, MOHHO, MSSA

The effectiveness of the proposed MOCGO is examined using additional optimization problems, since new multi-objective algorithms should be assessed on a few challenging real-world optimization problems. This subsection compares MOCGO with the MOCryStAl [38], MOHHO [54], and MSSA [55] algorithms based on the Ave and SD criteria for engineering problems. The outcomes of the comparative approaches for GD, IGD, MS, and S are displayed in Tables 17, 18, 19, and 20. According to the GD measure in Table 17, the suggested strategy achieved encouraging outcomes in six of the evaluated problems. MOCGO and MSSA obtained some of the best outcomes in terms of Ave in this table, whereas MOCryStAl and MOHHO did not. Table 18 presents the results obtained by the various techniques on each of the investigated problems with regard to IGD; based on the Ave findings, MOCGO achieves acceptable results in every case. According to Table 19, MOCryStAl and MOHHO offer the best results for only one or two of the test problems under the MS metric, while the suggested MOCGO outperforms the other approaches in six of these problems, proving its ability to handle this class of challenging issues. Table 20 compares and summarizes the statistical outcomes of the various multi-objective strategies together with the suggested algorithm. In four of the cases, MOCGO outperforms the other approaches in terms of the S index, although the other approaches, such as MSSA and MOHHO, also produce highly competitive results.

Table 17 Statistical analysis of the engineering problems to determine GD performance
Table 18 Statistical analysis of the engineering problems to determine IGD performance
Table 19 Statistical analysis of the engineering problems to determine MS performance
Table 20 Statistical analysis of the engineering problems to determine S performance

The results of the comparative methods on all engineering problems are shown in Figs. 15 and 16. Examining these figures, it can be seen that MSSA and MOHHO exhibit the worst convergence while still having strong coverage on CONSTR, DISK BRAKE, and WELDED BEAM. MOCryStAl and MOCGO, however, both offer excellent convergence toward all the true Pareto-optimal fronts.

Fig. 15: Pareto front results for BNH, CONSTR, DISK BRAKE, and 4-BAR TRUSS with MOCGO, MOCryStAl, MOHHO, and MSSA

Fig. 16: Pareto front results for WELDED BEAM, OSY, SPEED REDUCER, and SRN with MOCGO, MOCryStAl, MOHHO, and MSSA

5 Conclusion and future works

This work develops a multi-objective version of Chaos Game Optimization (CGO), one of the recently suggested innovative metaheuristic algorithms. The CGO's inspiring concept is based on certain chaos theory principles, namely the formation of fractals through the chaos game and the self-similarity of fractals. The proposed approach was compared with well-known algorithms such as MOPSO, MOGWO, MOALO, MOCryStAl, MOHHO, and MSSA for result confirmation, and the results of the proposed technique are quite competitive with these methods. The Congress on Evolutionary Computation (CEC-09) benchmark problems, together with additional mathematical benchmark functions (i.e., ZDT and DTLZ), are utilized for performance evaluation of the multi-objective version of CGO, and several real-world engineering design problems are tested to evaluate the MOCGO method's efficiency. The research shows that the proposed MOCGO achieves higher rankings than competing methods when evaluated with the IGD, GD, S, and MS indices. The results also show that the proposed MOCGO technique gets closer to the true Pareto front on mathematical and engineering problems, which means better solutions.

In the future, the proposed MOCGO may be applied to multi-modal, nonlinear, and computationally demanding technical problems and engineering design tasks, such as truss structures and structural health evaluation.