A chaos-based adaptive equilibrium optimizer algorithm for solving global optimization problems

Abstract: The equilibrium optimizer (EO) algorithm is a recently developed physics-based optimization algorithm, inspired by a mixed dynamic mass balance equation on a controlled fixed volume. The EO algorithm has a number of strengths, such as a simple structure, easy implementation and few parameters, and its effectiveness has been demonstrated on numerical optimization problems. However, the canonical EO still presents some drawbacks, such as a poor balance between the exploration and exploitation operations, a tendency to get stuck in local optima and low convergence accuracy. To tackle these limitations, this paper proposes a new EO-based approach with an adaptive gbest-guided search mechanism and a chaos mechanism, called the chaos-based adaptive equilibrium optimizer algorithm (ACEO). First, an adaptive gbest-guided mechanism is injected to enrich the population diversity and expand the search range. Next, the chaos mechanism is incorporated to enable the algorithm to escape from local optima. The effectiveness of the developed ACEO is demonstrated on 23 classical benchmark functions and compared with the canonical EO, EO variants and other frontier metaheuristic approaches.


Introduction
Real-world design and decision problems can be considered as global optimization problems consisting of different types of objective functions [1]. The objective functions can be classified as continuous or discrete, single or multi-objective, constrained or unconstrained, depending on their characteristics. Numerous real-world multi-modal and non-linear optimization problems are complex, such as parameter calibration, structural design and the optimization of neural networks [2]. Without any gradient information of the objective function, it is challenging to find global or near-global best solutions to real-world problems [3]. Traditional mathematical optimization methods are time-consuming and ineffective when faced with these complex problems. Consequently, in order to obtain the global best solutions of complex real-world problems, metaheuristic approaches are continuously being developed; these methods search for the best solution based on intuition, experience or the simulation of natural phenomena and mechanistic constructs [4]. Furthermore, metaheuristic algorithms do not require gradient information, consider only inputs and outputs, and have undergone great development. Well-performing algorithms provide a tool for researchers to address optimization problems in different fields [5]. Moreover, metaheuristics are highly flexible and do not need specific adaptation to the type of problem; thus, they are becoming increasingly popular and have been successfully adopted for complex optimization problems in various domains [6].
The equilibrium optimizer (EO) algorithm is a novel physics-based metaheuristic technique proposed by Faramarzi et al. [7] in 2020. The algorithm is inspired by a hybrid dynamic mass equilibrium differential equation on a controlled fixed volume, which describes the elementary physical phenomenon of the conservation of mass during entrance, departure and generation within a controlled volume. The principle is to consider each particle and its concentration as an independent individual and then update the individuals stochastically in accordance with the concentration of the equilibrium candidate, finally reaching the equilibrium state. Compared with other intelligent optimization algorithms, EO has several merits, including a simple framework, ease of implementation, strong adaptability, few parameters and ease of hybridization with other algorithms. Compared with the genetic algorithm (GA) [8], particle swarm optimization (PSO) [9], the grey wolf optimization algorithm (GWO) [10], the gravitational search algorithm (GSA) [11], the salp swarm algorithm (SSA) [12] and the covariance matrix adaptation evolution strategy (CMA-ES) [13], EO has been shown to perform exceptionally well.
Although EO is superior to other popular methods, it still has some defects, such as a strong tendency to fall into local optima, slow convergence and an immature balance between exploration and exploitation. Therefore, many scholars have studied it in depth and proposed effective ways to improve EO performance. In [14], an opposition-based learning EO algorithm is proposed, called EOOBLE. First, an opposition-based learning mechanism is injected into the initialization and update processes of the basic EO. Second, a Lévy flight mechanism is employed in the concentration update equation. Finally, an evolutionary population dynamics mechanism is adopted to avoid getting trapped in local optima. The performance of EOOBLE is verified on 25 benchmark functions with dimensions from 100 to 5000, and compared with the original EO, EO variants and some well-known metaheuristics. The statistical results show that EOOBLE is an advantageous algorithm for tackling high-dimensional global optimization problems. Furthermore, the effectiveness of EOOBLE is proven on a high-dimensional engineering design problem. In [15], an enhanced EO (EEO) algorithm based on three communication strategies is proposed. The accuracy of EEO is verified on 28 benchmark functions and compared with existing optimization methods. The analysis illustrates that the EEO algorithm outperforms its competitors. Additionally, EEO is utilized to address a discrete job shop scheduling problem (JSSP) and compared with the three improvement approaches of EEO. The experimental results reveal that EEO achieves significant improvements in solving the JSSP. In [16], a self-adaptive EO (self-EO) is introduced, which integrates four effective exploring stages. The performance of the self-EO algorithm is verified on numerous optimization problems, including ten functions of the CEC'20 benchmark, three engineering optimization problems, two combinatorial optimization problems and three multi-objective problems.
Moreover, the proposed self-EO is compared with nine metaheuristic techniques, including the standard EO and eight other well-performing metaheuristic techniques. The analysis shows that self-EO has a better searching capability and a faster convergence rate than the other algorithms. In [17], a new multi-objective EO (MOEO) is proposed. The crowding distance mechanism is used to balance exploitation and exploration during the search process. Furthermore, a non-dominated sorting mechanism is combined with the MOEO algorithm to maintain population diversity. The performance of the MOEO algorithm is evaluated on 33 contextual problems and compared with other state-of-the-art multi-objective optimization methods. Quantitative and qualitative experiments show that the MOEO algorithm has high efficiency and exploration capability for multi-objective problems. In [18], a new EO version, called EEO, is proposed, which incorporates a new performance-enhancing Lévy flight strategy. The effectiveness of the EEO algorithm is confirmed on the ten functions of the CEC'20 test suite, compared with other high-performance algorithms. Subsequently, EEO is utilized to resolve the optimal power flow (OPF) problem. The results of EEO are compared with the standard EO and other metaheuristics. These simulations show that EEO performs better than 20 published approaches and the original EO. Moreover, the superiority of EEO is illustrated by six different cases that involve the minimization of different objectives. The comparisons show that EEO can provide viable solutions for various OPF problems. In [19], a new algorithm based on the hybridization of EO and pattern search (PS) techniques, called EO-PS, is introduced. The EO-PS algorithm operates in two stages. The first stage performs EO to explore the search space and reach the desired region by utilizing the equilibrium pool of elite particles.
The second stage incorporates PS to lead the search to better neighborhoods and gain high-quality solutions by employing its detection and pattern motion. The proposed EO-PS is utilized to handle the single and multi-objective optimization problems of wind farm layout optimization in different wind speed scenarios. Additionally, EO-PS is studied on irregular land space in the Gulf of Suez-Red Sea, Egypt. The comprehensive results show that EO-PS achieves superiority over other advanced methods in terms of the quality and reliability of the solution. In [20], a multi-objective equilibrium optimizer slime mould algorithm (MOEOSMA) is proposed. In the MOEOSMA, the elite archiving mechanism is utilized to facilitate convergence of the algorithm, the crowding distance method is employed to preserve the distribution of the Pareto fronts, dynamic coefficients are provided for adjusting exploration and exploitation, and the equilibrium pool method is employed to simulate the collaborative foraging behavior of slime moulds and thereby improve the global search ability of the algorithm. The performance of MOEOSMA is investigated on the CEC2020 test suite, eight real multi-objective constrained engineering problems and four large-scale truss structure optimization problems. The results reveal that MOEOSMA is significantly better than other comparable algorithms. Meanwhile, the algorithm finds more Pareto optimal solutions and remains well-distributed in both the decision space and the objective space. In [21], an improved quantum equilibrium optimizer (QEO) algorithm combining quantum coding and quantum rotating gate strategies for linear antenna arrays is introduced. The excitation amplitude of the array elements in the linear antenna array model is optimized by numerical simulation using QEO to minimize the interference of the side lobe levels on the main lobe radiation.
Six different metaheuristics are employed to optimize the excitation amplitude of the linear antenna array elements for three different arrays. Experimental results demonstrate that QEO is more competitive than the other optimization algorithms and is more advantageous in obtaining maximum side lobe level reduction. In [22], an improved version of EO, called the equilibrium optimizer slime mould algorithm (EOSMA), is proposed. First, the exploration and exploitation capabilities of the slime mould algorithm (SMA) are adjusted. Next, the anisotropic search operator of SMA is replaced by the search operator of EO to guide the search space of SMA. Finally, a stochastic differential mutation operator is incorporated to help SMA escape local optima and increase the diversity of the population. The performance of EOSMA is validated on the CEC2019 and CEC2021 test suites and nine engineering design problems. The results reveal that EOSMA is significantly better than 15 famous comparative algorithms on the CEC2019 benchmark problems, and performs clearly better than the three comparable algorithms on the CEC2021 benchmark functions. In addition, EOSMA outperforms other state-of-the-art comparison algorithms on all nine engineering problems.
Although the aforementioned EO variants have improved the performance of the basic EO, there are still some shortcomings. For example, in [14], although EOOBLE demonstrates better performance in solving high-dimensional problems, the additional search mechanism increases the computational complexity of the algorithm. In [15], EEO overlooks the balance between exploitation and exploration, leading to low convergence accuracy. In [16], self-EO improves the convergence speed of the basic EO but fails to address the issue of loss of population diversity. In [17], MOEO maintains population diversity and balances exploration and exploitation, but the additional search mechanism prolongs the optimization time. In [18], EEO aims to enhance the exploration and exploitation processes of the algorithm but neglects the diversity of particles and the possibility of premature stagnation during the search. Moreover, other variants of EO only focus on certain deficiencies of the basic EO without providing a comprehensive solution, leaving remaining flaws such as imbalanced exploitation and exploration operations, low quality of randomly generated initial populations and limited potential for large-scale jumps during population iterations, leading to poor convergence performance [23]. The main limitations of the EO algorithm are analyzed in detail in the next section. Based on the above motivations, the present work develops an improved EO based on chaos, known as the chaos-based adaptive equilibrium optimizer algorithm (ACEO). First, a new chaos-based update rule is proposed to reduce the possibility of falling into a local optimum. Then, an adaptive gbest-guided search mechanism is developed to enrich the population diversity and expand the search area. The main objective of this work is to thoroughly analyze the shortcomings of the EO algorithm and to propose an improved EO variant that enhances the performance and stability of the basic method.
The main highlights of this paper are outlined as follows.

- A novel EO variant, called ACEO, is developed. ACEO employs two mechanisms: an adaptive gbest-guided search mechanism and a chaos mechanism. The effectiveness of the two components is examined in the ablation study in Section 4.4.


- The efficacy of the developed ACEO is verified on 23 classical benchmark functions and compared with the canonical EO, EO variants and other state-of-the-art metaheuristic approaches. In addition, the experimental results are statistically analyzed using the Friedman mean rank test and the Wilcoxon signed rank test.


- To inspect the feasibility of the ACEO algorithm, it is adopted to resolve a mobile robot path planning (MRPP) task and compared with some classical metaheuristic methods.

The rest of this paper is organized as follows. In Section 2, the update principle and drawbacks of the canonical EO are described. Section 3 discusses and analyzes the developed ACEO algorithm. Section 4 investigates the performance of the developed ACEO on 23 benchmark functions, together with the ablation study of each mechanism employed. Section 5 discusses the ACEO-based path-planning task for mobile robots. Section 6 elaborates comprehensive conclusions.

Initialization
Similar to most metaheuristic algorithms, EO utilizes candidate solutions initialized in the search space to initiate the optimization process. The initial concentration is given as follows.

C_i = C_min + rand_i (C_max − C_min), i = 1, 2, …, N   (1)

where C_i is the concentration of the ith initial particle, C_max and C_min are the upper and lower limits of each dimension of the search space, rand_i is a random vector in the interval [0,1] and N is the number of particles in the population.
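As a concrete illustration, the random initialization of Eq (1) can be sketched in a few lines of NumPy (the function name and array layout are illustrative, not taken from the original implementation):

```python
import numpy as np

def initialize_population(n_particles, dim, c_min, c_max, rng=None):
    """Scatter N particle concentrations uniformly inside [c_min, c_max]."""
    rng = np.random.default_rng(rng)
    # C_i = C_min + rand_i * (C_max - C_min), one row per particle
    return c_min + rng.random((n_particles, dim)) * (c_max - c_min)
```

Each row of the returned array is one particle's concentration vector C_i.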

Equilibrium pool and candidates (Ceq)
The four particles with the optimal fitness values and the mean of these four particles are selected, and these five particles are utilized to build the equilibrium pool. The Ceq_pool and Ceq_ave are expressed as follows, respectively.
Ceq_pool = {Ceq1, Ceq2, Ceq3, Ceq4, Ceq_ave}   (2)

Ceq_ave = (Ceq1 + Ceq2 + Ceq3 + Ceq4) / 4   (3)
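A minimal sketch of how the equilibrium pool of Eqs (2) and (3) can be assembled, assuming a minimization problem where smaller fitness values are better (helper names are illustrative):

```python
import numpy as np

def build_equilibrium_pool(population, fitness):
    """Select the four best particles and their mean as the equilibrium pool."""
    order = np.argsort(fitness)          # ascending: best fitness first
    best4 = population[order[:4]]        # Ceq1 ... Ceq4
    c_eq_ave = best4.mean(axis=0)        # Ceq_ave, Eq (3)
    return np.vstack([best4, c_eq_ave])  # five candidates, Eq (2)
```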

Exponential term (F)
The exponential term F is a parameter that maintains the balance between global search and local search during the optimization process, and is described as follows.

F = e^(−l(t − t0))   (4)

where l is a vector of random variables in the interval [0,1] and t is a nonlinear function of the iteration number, calculated as follows.

t = (1 − Iter/Max_iter)^(a2 · Iter/Max_iter)   (5)

where Iter and Max_iter are the number of current iterations and the maximum number of iterations, respectively, and a2 is equal to 1 and controls the exploitation capacity: the larger a2, the higher the exploitation ability and the lower the exploration ability. The initial time t0 is expressed as follows.

t0 = (1/l) ln(−a1 sign(r − 0.5)(1 − e^(−lt))) + t   (6)

where a1 is equal to 2 and controls the exploration capability, sign(r − 0.5) manages the direction of the search and r is a random vector between 0 and 1. Substituting Eq (6) into Eq (4), the exponential term F can be expressed as follows.

F = a1 sign(r − 0.5)(e^(−lt) − 1)   (7)
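Following the formulation of the original EO paper, the exponential term F can be computed as below (function and parameter names are illustrative):

```python
import numpy as np

def exponential_term(iter_no, max_iter, dim, a1=2.0, a2=1.0, rng=None):
    """Exponential term of Eq (7): F = a1*sign(r - 0.5)*(exp(-l*t) - 1)."""
    rng = np.random.default_rng(rng)
    # Eq (5): t decreases nonlinearly from 1 towards 0 over the iterations
    t = (1.0 - iter_no / max_iter) ** (a2 * iter_no / max_iter)
    l = rng.random(dim)  # random vector l in [0, 1]
    r = rng.random(dim)  # random vector r in [0, 1]
    return a1 * np.sign(r - 0.5) * (np.exp(-l * t) - 1.0)
```

Since |exp(−lt) − 1| < 1, each component of F is bounded by a1 in magnitude.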
Generation rate (G)

The generation rate G is a parameter that controls the local search capability, and is defined as follows.

G = G0 e^(−l(t − t0)) = G0 F   (8)

G0 = GCP (Ceq − l C)   (9)

GCP = 0.5 r1, if r2 ≥ GP; GCP = 0, otherwise   (10)

where r1 and r2 are random numbers in the interval [0,1], and GCP is the generation rate control parameter, which is determined by the generation probability GP. Therefore, the concentration update formula of the EO algorithm is as follows.

C = Ceq + (C − Ceq) F + (G / (l V)) (1 − F)   (11)

where V is taken as a unit volume (V = 1).
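Putting the generation rate and the concentration update rule together, a single-particle EO update step can be sketched as follows, under the standard EO convention of a unit volume V = 1 (helper names are illustrative):

```python
import numpy as np

def eo_update(c, c_eq, F, l, gp=0.5, V=1.0, rng=None):
    """One EO concentration update for a single particle."""
    rng = np.random.default_rng(rng)
    r1, r2 = rng.random(), rng.random()
    # generation rate control parameter, gated by generation probability GP
    gcp = 0.5 * r1 if r2 >= gp else 0.0
    g0 = gcp * (c_eq - l * c)  # initial generation rate
    g = g0 * F                 # generation rate G = G0 * F
    # equilibrium term + exploration term + exploitation term
    return c_eq + (c - c_eq) * F + (g / (l * V)) * (1.0 - F)
```

When F approaches zero, the particle collapses onto the selected equilibrium candidate, which is the exploitation limit of the rule.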

Deficiencies of EO
The canonical EO has some advantages, such as a simple framework, ease of implementation, few parameters and ease of hybridization with other algorithms. Although the EO algorithm has shown competitive performance on global optimization problems, it still has some limitations. This subsection provides a detailed analysis of the shortcomings of the EO algorithm as follows:

- The information of the equilibrium candidates in the equilibrium pool is not sufficiently utilized.

The canonical EO establishes an equilibrium pool, which is the core component of the algorithm. The equilibrium candidates in the equilibrium pool provide information about the equilibrium state of the algorithm, but they are selected by ranking fitness values, which increases the risk of obtaining similar solutions, thereby reducing the diversity of particles, and cannot guarantee that the algorithm converges to the optimum. Hence, making fuller use of the information of the equilibrium candidates can guide the particles to more promising regions and improve performance.

- Low exploration ability.

Based on the concentration update rule, each particle updates its position only according to the equilibrium state equation. During the iterative updating process, the differences between the positions of the equilibrium candidate particles become smaller and smaller, until in the later stage the particle positions are all extremely similar, which can lead to falling into a local optimum and premature stagnation. This issue is particularly pronounced in the canonical EO and reduces the performance of the algorithm.

The above analysis indicates that EO has limitations that affect its expected performance. Existing EO research has worked to alleviate some of these limitations, but not all of them. This is the motivation to propose a new EO variant. Thus, we propose a new EO variant that addresses the above drawbacks, as well as the drawbacks that still exist in existing EO variants, through novel modifications. In the following sections, we discuss the new EO variant in detail.

The proposed ACEO
In this section, the developed ACEO is explained in detail. At the particle update concentration stage, an adaptive gbest-guided mechanism is introduced to enrich the selection of the equilibrium candidate. During the concentration update process, the chaos mechanism is performed for all particles to avoid getting caught up in the local optimum, thus reaching a proper balance between exploration and exploitation.

Adaptive gbest-guided search mechanism
Reaching an appropriate balance between global and local search is an important feature of metaheuristic algorithms [24]. For the canonical EO, the exponential term F is a key operator responsible for balancing exploration and exploitation, as shown in Eq (11). There are three terms in Eq (11): the first term is the equilibrium candidate, the second term is responsible for the global search to find the optimal solution and the third term is responsible for exploitation to make the solution more precise. Obviously, in this equation, the latter two terms move the algorithm in the desirable direction. Nevertheless, the equilibrium candidate in the first term is selected from the equilibrium pool by ranking fitness values, which increases the risk of obtaining similar solutions and thus does not ensure convergence to the optimal result. To alleviate the above limitations, we propose an adaptive gbest-guided concentration update equation to replace Eq (11). The proposed new update equation is as follows.
where w is an inertia weight coefficient. Inspired by PSO, the inertia weight mechanism has been widely applied in metaheuristic algorithms. For example, Wang et al. [25] propose an inertia weight strategy to enhance the leader's position update equation of the SSA, thus taking full advantage of the information provided by the food source to the leader. Pathak and Srivastava [26] incorporate a Sugeno fuzzy inertia weight into the velocity update equation of the bat algorithm (BA). Chen et al. [27] employ a dynamically adjusted inertia weight to enhance the position update method of the wolf pack in GWO. Ding et al. [28] introduce an adaptive inertia weight factor to modify the follower position update equation of the SSA. Yin and Mao [29] add inertia weights to modify the position vector of the grey wolf in GWO. Consequently, we propose a novel adaptive inertia weight, described by the following mathematical formula.
where wmax and wmin denote the maximum and minimum values of the weight, respectively, and a and b are constants. Figure 1 plots the nonlinear decline curve of the inertia weight w. As depicted in Figure 1, the curve resembles an "inverse S" with a range of [0.2, 0.9]. Over this range, the slope of the curve gradually steepens until it is steepest near the center point, after which it gradually flattens and finally approaches 0; the curve thus changes relatively smoothly near the center point and more rapidly on both sides. Based on these characteristics, the values of w are large at the initial stage of the iterations, which facilitates exploration of the search space. As the iterations progress, the values of w gradually decrease, which favors refining the solution in the regions already searched.
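The qualitative behavior described above can be mimicked with a sigmoid-style decreasing curve. Note that the functional form below, along with the values of a and b, is an assumption for illustration only and is not necessarily the paper's exact weight formula:

```python
import numpy as np

def inertia_weight(iter_no, max_iter, w_max=0.9, w_min=0.2, a=10.0, b=0.5):
    """Illustrative "inverse S" inertia weight, decreasing from w_max
    towards w_min, steepest near the midpoint b of the iteration budget."""
    x = iter_no / max_iter
    return w_min + (w_max - w_min) / (1.0 + np.exp(a * (x - b)))
```

Early in the run the weight stays near w_max (exploration); late in the run it settles near w_min (exploitation).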

Chaos mechanism
In this subsection, we introduce a chaos mechanism to help EO avoid being trapped in a local optimum. Chaos is a random-like phenomenon that exists in nonlinear, deterministic systems. It is exceedingly sensitive to initial conditions, and is neither periodic nor convergent. Chaotic motion is not repetitive, and can traverse the search space at a higher rate than a random search that relies on probabilities. Its pseudo-randomness is reflected in the stochastic search process, which greatly improves search efficiency. As a result, chaotic search has more strengths than random search; hence it has been broadly applied to metaheuristic techniques and has achieved excellent results in improving optimization algorithms. For example, Gezici and Livatyalı [30] hybridize Harris hawks optimization (HHO) with 10 different chaotic maps to improve the performance of the algorithm. Liang et al. [31] utilize the ergodicity of the chaos factor to make it easier for the marine predators algorithm (MPA) to jump out of local optima. Feng et al. [32] propose a chaos-based loudness approach to enhance the BA. Gharehchopogh et al. [33] embed twelve chaotic maps into the farmland fertility algorithm (FFA) to search for the optimal number of prospectors and improve the exploitation of the most promising solutions. Joshi [34] embeds chaos-based opposition learning in the GSA to achieve stagnation-free search.
Compared with other chaos models, the logistic chaotic model has particularly complex dynamic behavior and is easy to implement. Consequently, the present work adopts the logistic chaotic model to map the population. The logistic chaotic sequence is expressed as follows.

X_{i+1} = μ X_i (1 − X_i), i = 1, 2, …, N   (14)

where μ is the control parameter, usually μ = 4; when μ = 4, the particles move fully chaotically. X_i is a random number in (0,1) and N is the population size.
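The logistic map iteration can be sketched directly (the function name is illustrative):

```python
def logistic_sequence(x0, n, mu=4.0):
    """Generate n chaotic values via the logistic map X_{i+1} = mu*X_i*(1-X_i).

    x0 should lie in (0, 1) and avoid 0.25, 0.5 and 0.75, for which the
    sequence with mu = 4 degenerates to a fixed point or to zero.
    """
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs
```

For μ = 4 the values remain in [0, 1] while visiting the interval ergodically, which is what makes the map useful as a substitute for uniform random numbers.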
The concentration update model for the particles with chaotic disturbance is as follows.

where C_i^c is the ith particle concentration with chaotic disturbance and X_i is the ith chaotic value in the chaotic sequence. The population positions of the canonical EO are always determined by the equilibrium state equation, which results in the algorithm falling into a local optimum and stagnating. Consequently, our idea is to first enrich the diversity of equilibrium candidates using the update equation of Eq (12), to enlarge the search area, and then use Eq (15) to perform a chaotic search for the particles in the population, to improve the search efficiency and avoid falling into the local optimum. In the concrete implementation, a long numerical sequence is generated from the initial value of the logistic chaotic map. The ergodic character of the generated chaotic numbers allows the algorithm, during the iterative search, to evaluate possible future states without repetition. This non-repetitiveness of chaotic numbers not only enhances the convergence ability but also improves the search efficiency of the algorithm. Based on the above analysis of chaos, it is very advantageous to use chaotic maps instead of randomness, and these properties of chaos provide a significant improvement in the performance of the canonical EO.

The framework of ACEO
The developed ACEO algorithm introduces two mechanisms, and its implementation does not alter the canonical EO framework. Because the candidate solutions are selected within the equilibrium pool by ranking fitness values, which increases the risk of obtaining similar solutions, convergence to the optimum cannot be ensured. Therefore, the adaptive gbest-guided search mechanism is introduced to overcome this shortcoming. During the concentration update stage, the adaptive gbest-guided search mechanism enhances the navigational ability of the equilibrium candidate through inertia weights, thus increasing its diversity and expanding the search area. In the early stages of the search, the inertia weight has larger values, and the randomness in the selection of the equilibrium candidate is also larger; thus, the solution space can be sufficiently explored. As the number of iterations increases, the smaller inertia weight values in the later stages sufficiently exploit the already searched range, and the equilibrium candidate is chosen to be more favorable to equilibrium concentrations. Throughout the search process, the equilibrium candidate is expected to move to a more promising region under the constant adjustment of the inertia weight values. Nevertheless, during the updating process, the particles in the population undergo random exploration, and the positions of the particles are always determined by the equilibrium state equation. This may cause the algorithm to easily fall into a local optimum and result in premature stagnation of the search. The chaos mechanism solves this problem by utilizing the ergodic properties and non-repeatability of the produced chaotic number sequences, which accelerates the convergence speed and effectively allows the particles to jump out of the local optimum.
Consequently, based on the constantly changing inertia weight values and the characteristics of chaos, a proper balance is achieved between exploration and exploitation of particle concentration. The pseudo-code of ACEO is given in Algorithm 1.

Algorithm 1. Pseudo-code of ACEO
Initialize the population and the parameters; set Iter = 0
while Iter < Max_iter
    Calculate the fitness of each particle and update the equilibrium candidates Ceq1, Ceq2, Ceq3, Ceq4
    if Iter > 1
        Perform memory saving on the population to retain high-quality individuals
    end if
    Calculate the average particle Ceq_ave using Eq (3)
    Construct the equilibrium pool Ceq_pool using Eq (2)
    Update the adaptive parameter t using Eq (5)
    for i = 1:N
        Generate random number matrices r and l
        Select Ceq randomly from the equilibrium pool Ceq_pool
        Construct F using Eq (7)
        Calculate GCP using Eq (10)
        Construct G0 using Eq (9)
        Construct G using Eq (8)
        Update the individual using Eq (12)
    end for
    for i = 1:N
        Update the concentrations of the particles using Eq (15)
    end for
    Iter = Iter + 1
end while
Return the elite candidate solution Ceq1

Computational complexity
Computational complexity is an important criterion for analyzing the performance of an algorithm. In this paper, Big-O notation is employed to represent the complexity. The computational complexity of the proposed ACEO consists of initialization, fitness evaluation, memory saving, concentration update and the chaos update mechanism. The initialization of N particles in a D-dimensional search space requires O(N × D). In each of the Max_iter iterations, the fitness evaluation and memory saving require O(N), while the concentration update and the chaos update each require O(N × D); the overall complexity is therefore O(N × D × Max_iter). In conclusion, the developed ACEO has the same computational complexity as the canonical EO.

Simulation and discussion
To evaluate the performance of the ACEO algorithm, a series of experiments is conducted on different benchmark functions.

Benchmark test functions
To facilitate comparison, we test the developed algorithm on 23 classical benchmark functions. The details of these benchmark functions are outlined in Table 1.
In Table 1, the 23 classical benchmark functions are subdivided into unimodal functions and multimodal functions. The unimodal functions are appropriate for testing the exploitation capability of the algorithm because each has only one global best solution. In contrast, the multimodal functions have multiple local best solutions, making them suitable for testing the global search ability of the optimizer [35].
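As concrete examples of the two categories, the sphere function is a classical unimodal benchmark and the Rastrigin function a classical multimodal one (both are standard test functions; whether they appear under these exact labels in Table 1 is not reproduced here):

```python
import numpy as np

def sphere(x):
    """Unimodal: a single global optimum f(0) = 0; probes exploitation."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: many local optima around f(0) = 0; probes exploration."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```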

ACEO in comparison with EO, EO variants, and other metaheuristics
In order to demonstrate the effectiveness of the developed ACEO, the 23 benchmark functions in Table 1 are adopted. The dimension of these functions is set to 100. In this paper, we classify the comparative algorithms into two categories: the EO variants and the state-of-the-art metaheuristic algorithms. Using basic EO and recently reported EO variants as the comparative algorithms allows us to highlight the superiority of our proposed algorithm over EO-based methods, thereby demonstrating that the proposed EO variant further alleviates the limitations of basic EO and improves the algorithm's performance. On the other hand, the inclusion of other state-of-the-art metaheuristic algorithms aims to showcase the superiority of our proposed method over different types of representative algorithms in the optimization community. These advanced metaheuristic algorithms cover various optimization paradigms.
By comparing our method with these algorithms, we can assess the effectiveness and competitiveness of our approach. Based on the above analysis, the comparison algorithms include the canonical EO algorithm, the efficient EO with mutation strategy (mEO) [36], the EO algorithm based on Lévy flight, the whale optimization algorithm's spiral encirclement strategy and an adaptive proportional mutation mechanism (LWMEO) [37], information utilization strengthened EO (IS-EO) [38], the improved EO with a decreasing equilibrium pool (IEO) [39], the SSA algorithm based on a random replacement tactic and a double adaptive weighting mechanism (RDSSA) [40], the hybrid enhanced whale optimization SSA algorithm (IWOSSA) [41], the selective opposition based grey wolf optimizer (SOGWO) [42], the opposition-based learning grey wolf optimizer (OGWO) [43] and the moth-flame optimization algorithm with diversity and mutation mechanism (DMMFO) [44]. The statistical results obtained by ACEO and the comparison algorithms are reported in Table 2. Moreover, the results acquired by ACEO and its competitors are analyzed using the Friedman mean rank test, whose results are recorded in the final column of Table 2.
According to the results in Table 2, it can be observed that ACEO converges to the theoretical global optimum on all functions other than f4, f7, f13 and f15; on these four functions, ACEO still achieves better efficiency than the other algorithms. Among the EO variants, compared with the canonical EO, ACEO is superior in 18 out of 23 cases, and shows similar performance on f5, f12, f14, f17 and f21. ACEO outperforms LWMEO and IS-EO on all functions. Additionally, the comparative analysis suggests that ACEO is better than mEO and IEO on a considerable number of functions, and similar to them on some functions. Among the other advanced metaheuristic algorithms, compared with RDSSA, ACEO performs better on 18 functions, with RDSSA showing similar results on the remaining 5. ACEO provides better performance than IWOSSA and DMMFO across all statistics. With the exception of f5, ACEO surpasses SOGWO and OGWO on all evaluated functions, demonstrating its competitiveness in addressing 100-dimensional optimization problems. For f5, ACEO, EO, mEO, IEO, RDSSA, SOGWO and OGWO all find the optimal value. For f12, f14, f17 and f21, ACEO, EO, mEO, IEO and RDSSA achieve the theoretical results, which suggests that the advantages of the introduced strategies in ACEO are not significant in these multimodal cases. For f13, ACEO, EO, mEO, IEO and RDSSA obtain extremely similar results, which suggests that the performance of the developed ACEO is unremarkable in tackling this multimodal case. Furthermore, the mean and Std of the majority of functions evaluated by ACEO are zero or close to zero, which indicates the stability of ACEO. Figure 3 visually plots the results of the Friedman mean rank test. According to Figure 3, ACEO achieves the highest rank, followed by RDSSA, mEO, IEO, EO, OGWO, SOGWO, IWOSSA, LWMEO, IS-EO and DMMFO, which illustrates that ACEO provides the best performance among all implemented algorithms.
These data demonstrate the superiority of the proposed ACEO algorithm. This is because the adaptive gbest-guided search mechanism, introduced through inertia weight values, enriches the diversity of the equilibrium candidates and expands the search region, while the chaos mechanism uses generated chaotic number sequences to replace the original stochastic search, accelerating convergence and avoiding entrapment in local optima. The combined effect of these mechanisms allows the developed algorithm to converge to the best results on most of the unimodal and multimodal functions, as verified in the above experiments. These findings also highlight that the ACEO algorithm exhibits superior performance in terms of convergence and solution quality compared with EO, the EO variants and the variants of advanced metaheuristic algorithms. Overall, the comprehensive experimental analysis illustrates that ACEO is a reliable and efficient algorithm that can provide prospective solutions for complex optimization problems.
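The two mechanisms can be sketched in code. The paper's exact update equations are not reproduced here: the logistic map below is one common choice of chaotic generator, and `inertia_weight` and `gbest_guided_step` are hypothetical illustrations of a nonlinearly decreasing inertia weight driving a gbest-guided move, with chaotic numbers standing in for the usual uniform random draws.

```python
def logistic_sequence(n, x0=0.7, mu=4.0):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k).  With mu = 4 the
    orbit is chaotic and dense in (0, 1), so it can replace the uniform
    random numbers of a stochastic search."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Hypothetical nonlinear decay: large early in the run (exploration),
    small late (exploitation).  The paper's actual schedule may differ."""
    return w_min + (w_max - w_min) * (1.0 - t / t_max) ** 2

def gbest_guided_step(pos, gbest, t, t_max, chaos):
    """Pull a candidate toward the global best solution; the step size is
    scaled by the inertia weight and one chaotic number per dimension."""
    w = inertia_weight(t, t_max)
    return [x + w * c * (g - x) for x, g, c in zip(pos, gbest, chaos)]
```

Early in the run (`t` near 0) the weight is near `w_max`, so candidates take large chaotic steps toward the global best; as `t` approaches `t_max` the steps shrink, which matches the exploration-to-exploitation transition described above.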
To show the statistical differences between the ACEO algorithm and the other comparative approaches, the Wilcoxon signed rank test with a significance level of 5% is employed. The p-values of the Wilcoxon signed rank test between ACEO and the EO, EO variants and other metaheuristic algorithms are given in Table 3. The symbols "+/−/=" in Table 3 indicate that the ACEO algorithm is better than, worse than, or equal to the corresponding approach. From Table 3, the p-values are less than 0.05 on the majority of test functions, which illustrates that the performance of ACEO is significantly superior to that of the other comparative algorithms.
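For reference, the signed-rank statistic underlying the p-values in Table 3 can be computed as below; in practice the exact or approximate p-value would be taken from a statistics package such as `scipy.stats.wilcoxon`, and the paired samples here are illustrative placeholders, not results from Table 3.

```python
def wilcoxon_statistic(a, b):
    """Wilcoxon signed-rank statistic T = min(T+, T-) for paired samples,
    e.g. per-run best fitness of two algorithms on one function.
    Zero differences are discarded; tied |differences| share the
    average rank."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    t_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    t_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(t_plus, t_minus)
```

A statistic of 0 (one algorithm wins every paired run) gives the smallest possible p-value for the sample size, which is the typical pattern when ACEO converges to the optimum while a competitor stagnates.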

Convergence analysis
To explore the performance of the developed ACEO intuitively, in this section we plot and analyze the convergence curves of all the tested methods. The convergence curves of all the tested algorithms on some of the benchmark functions are shown in Figure 4. Based on Figure 4, it can be observed that the developed ACEO exhibits higher competitiveness than the other methods, which demonstrates its optimization efficiency. The convergence curves of ACEO drop sharply at the beginning of the iterations, indicating its ability to explore the search space extensively. This can be attributed to the larger inertia weights of the adaptive gbest-guided search at the beginning of the iterations, which enrich the diversity and expand the exploration area. As the iterations progress, the nonlinear gradual decrease of the inertia weight values shifts the search from exploration to exploitation, so the movement of ACEO gradually decreases and the algorithm converges to the optimal solution. This trend is particularly evident for most of the functions. The reduction in movement can be attributed to the gradual reduction of the inertia weights and the accelerated convergence provided by the generated chaotic number sequences, which effectively prevents falling into local optima. The fitness value of the proposed ACEO ultimately converges to zero after several stages, whereas the other algorithms do not achieve zero fitness and display stagnation. Notably, the convergence performance of DMMFO is the worst among all the implemented methods. Additionally, ACEO exhibits a faster decay rate on all the functions. Overall, ACEO demonstrates a stronger ability to locate the optimal area and a faster convergence rate to the best solution than the other methods.

Component analysis
ACEO adds two mechanisms to the canonical EO to improve its overall performance, namely the adaptive gbest-guided search mechanism and the chaotic map. In this subsection, the validity of these two operators is analyzed. We perform a series of experiments on the 23 100-dimensional benchmark problems shown in Table 1. To isolate the two mechanisms, two single-strategy variants are constructed: AEO, which employs only the adaptive gbest-guided search mechanism, and LEO, which introduces only the chaotic map strategy. The general parameters are kept the same as in Section 4.2. The performance of the methods involved is evaluated by the means and Stds, as shown in Table 4. Furthermore, the Friedman rank test results are also used to examine performance and are recorded at the bottom of Table 4. To display the results of the Friedman rank test intuitively, we draw them in Figure 5. The convergence curves of the relevant methods on some of the selected functions are plotted in Figure 6.
From Table 4, it can be seen that ACEO outperforms AEO, LEO and EO on the vast majority of functions, which indicates that the two mechanisms, both individually and in combination, significantly enhance the canonical EO. From Figure 5, ACEO ranks first, followed by AEO, LEO and EO, which further demonstrates that the two implemented mechanisms are effective and the developed ACEO provides superior performance. On f7 and f13, ACEO achieves outcomes similar to those of AEO and LEO, which shows that the synergistic effect of the two mechanisms is less evident in solving these two functions. ACEO, AEO, LEO and EO all find the theoretical optima on f5, f12, f14, f17 and f21; the advantage of introducing the two mechanisms is not apparent on these functions because the canonical EO already converges to the optimum when solving them. On the remaining functions, each of the two introduced mechanisms can independently enhance the canonical EO. Moreover, the two mechanisms achieve the best performance in synergy, as can be intuitively observed from the convergence curves in Figure 6. Additionally, Figure 6 highlights that AEO converges faster than LEO, while the canonical EO performs the worst, which suggests that both introduced mechanisms improve the performance of the canonical EO and that the adaptive gbest-guided search mechanism contributes more than the chaos mechanism. This is because the inertia weight in the adaptive gbest-guided search mechanism plays an important role throughout the iteration process. Ultimately, the two implemented mechanisms are extremely advantageous and enhance the overall performance remarkably.

Application in path planning
In recent years, with the widespread application of mobile robots, mobile robot technology has developed rapidly. Path planning, a key technology for mobile robots, has accordingly become a critical research direction. Mobile robot path planning (MRPP) is the process of finding an optimal or near-optimal path around obstacles from a starting position to a target position in an established map environment.
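To make the optimization formulation concrete, a minimal sketch of an MRPP objective is given below. It assumes a path encoded as a sequence of 2-D waypoints and circular obstacles given as (center, radius) pairs, and adds a large penalty for any waypoint inside an obstacle; this is a generic illustration, not the exact model used in the experiments that follow.

```python
import math

def path_length(waypoints):
    """Euclidean length of a piecewise-linear path through 2-D waypoints."""
    return sum(math.dist(p, q) for p, q in zip(waypoints, waypoints[1:]))

def mrpp_fitness(waypoints, obstacles, penalty=1e6):
    """Hypothetical MRPP objective: path length plus a large penalty for
    every waypoint that falls inside a circular obstacle.  A metaheuristic
    such as ACEO would minimize this over the free waypoint coordinates
    (with the start and goal fixed)."""
    cost = path_length(waypoints)
    for (cx, cy), r in obstacles:
        for (x, y) in waypoints:
            if math.dist((x, y), (cx, cy)) < r:
                cost += penalty
    return cost
```

Shorter, collision-free paths then score strictly lower, so minimizing this fitness trades off path length against obstacle avoidance, which is the behavior evaluated on the five maps below.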
In order to improve the efficiency and path smoothness of MRPP, a number of path-planning algorithms have been introduced successively. Traditional path-planning methods such as the A* algorithm, the artificial potential field method, the visibility graph method and the RRT algorithm all have corresponding disadvantages [45]. The A* algorithm can improve search efficiency, but it consumes considerable computation when the target point is unreachable. The artificial potential field method is prone to becoming trapped in local optima. The visibility graph method has difficulty obtaining the optimal path in environments with dense obstacle information. RRT is a stochastic sampling planning algorithm whose complexity is not influenced by the dispersion of the map, and it retains high search accuracy in high-dimensional spaces; however, the paths that the RRT algorithm finds tend to have more inflection points and longer lengths. To deal with these disadvantages, and given the increasing spatial complexity of robot environments, numerous metaheuristic algorithms have been applied to MRPP. Li et al. [46] propose an MRPP technique based on a modified GA and the dynamic window method. Wang et al. [47] employ an orthogonal opposition-based learning-driven dynamic SSA to deal with the MRPP problem. Wang et al. [48] propose a modified SSA method and use the new SSA variant to resolve the MRPP problem. Parhi and Kashyap [49] employ a hybrid enhanced gravitational search algorithm (IGSA) to solve the optimal path planning of humanoid robots in rugged terrain. Wu et al. [50] propose a novel ACO variant (MAACO) to resolve the MRPP problem.
To address problems such as low search efficiency and path redundancy in global MRPP, this study employs the developed ACEO. The five maps provided in [51] are used to inspect the performance of the ACEO-based path-planning task, and the environment maps are described in detail in Table 5. Furthermore, the ACEO algorithm is compared with some classical metaheuristics, namely SSA [12], GWO [10], the artificial bee colony algorithm (ABC) [52], PSO [9] and the firefly algorithm (FA) [53]. The general parameter settings are the same as in the original literature, and the parameter settings of ACEO are the same as in Section 4.2. The shortest path lengths produced by the six approaches on the five maps are listed in Table 6, with the shortest path marked in bold. The trajectory routes of the six algorithms are displayed in Figures 7-11. From the data in Table 6, it can be noted that the developed ACEO plans the shortest collision-free paths on all five terrains compared with the other techniques. The trajectory routes generated by the six algorithms are analyzed as follows. Figure 7(a)-(f) compares the trajectory routes of the six approaches on the first map. This environment is a simple terrain; all six algorithms can plan a collision-free route, and ACEO plans the shortest path. FA and GWO plot an identical path, which has more redundancy and inflection points, resulting in a longer path length. ABC, SSA and PSO design an identical trajectory path. The routes of PSO and ABC have no extra inflection points but are not optimal, while the route of SSA has more inflection points and the longest path length. Figure 8(a)-(f) depicts the trajectory routes on the second environment map. It can be observed from the figure that ACEO plans the shortest route, whereas the routes planned by FA and GWO have more inflection points than that of ACEO, producing longer routes.
The routes of ABC and PSO have high search efficiency, but they are not optimal. The route of SSA gets stuck in a local optimum in the initial search phase, which leads to more path redundancy. Figure 9(a)-(f) visually shows the routes of the six algorithms on the third map, which contains 13 obstacle areas of different sizes. It can be seen that ACEO has the shortest route and the highest efficiency. The routes of FA and GWO have more inflection points than that of ACEO. PSO and ABC plan different styles of routes that have no redundancy or inflection points, but they are not optimal paths. The route of SSA is trapped in a local optimum in the initial search phase, generating redundancy and inflection points, which increases its length.
The trajectories of the six algorithms on an environmental map with 30 threat areas are shown in Figure 10(a)-(f). It can be seen from the figure that ACEO has the highest efficiency. GWO and SSA plan a trajectory route that bypasses all threat areas; although this route avoids collisions with obstacle areas, it increases the path length and the fuel consumption of the robot. The routes of FA and ABC have more inflection points than that of ACEO, and the path of FA is not smooth. The route of PSO follows a different style, and it is not smooth and has inflection points. Figure 11(a)-(f) gives the simulation results on the fifth environmental map, which consists of 45 dangerous areas and is the most complex terrain among the five maps. The results show that the route of ACEO is the shortest and its search is the most efficient. Compared with ACEO, the routes of PSO, ABC and FA are not optimal even though they have no redundancy or redundant inflection points. The routes of GWO and SSA have more inflection points, and the path of GWO is not smooth. Based on the above analysis, the ACEO method can quickly find the optimal collision-free path with better path quality and improved operational efficiency, making it a feasible and effective path-planning algorithm.

Conclusions
In this paper, a new EO variant, known as ACEO, is developed to address the imbalance between exploration and exploitation in the canonical EO and its tendency to fall into local optima. The canonical EO has a simple structure, strong search capability and easy implementation. However, the position of the population is always determined by the equilibrium state equation, which causes the algorithm to fall into local optima and stagnation, thus reducing its ability to search for the optimum. Meanwhile, the candidate solutions are selected within the equilibrium pool by ranking according to fitness value, which increases the risk of similarity among the obtained solutions and reduces population diversity. Hence, the developed ACEO introduces two mechanisms to improve the EO performance. First, the inertia weight embedded in the adaptive gbest-guided search mechanism is adopted to enrich the population diversity of the equilibrium candidates. Next, a chaos mechanism is utilized to increase the possibility of escaping from local optima, thus improving the search capability. The combination of the two mechanisms achieves a proper balance between exploration and exploitation. The performance of the proposed ACEO is demonstrated on 23 classical benchmark test functions and compared with the canonical EO, EO variants and other metaheuristic methods. The analysis reveals that the mechanisms added to the canonical EO are effective, and that ACEO delivers the best performance and is a highly competitive algorithm. In addition, this paper employs the ACEO method to resolve the slow convergence speed and the high number of path inflection points of traditional path-planning algorithms, comparing it with some classical metaheuristic techniques. The experimental results illustrate that the ACEO method is a prospective tool for solving the MRPP task.
From the above research, ACEO presents enormous potential for addressing various optimization problems. Consequently, in our future work, we will primarily work on the application of the ACEO algorithm to tackle various practical engineering optimization problems, such as vehicle scheduling problems and feature selection problems.

Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.