Chaotic Harris hawks optimization algorithm

Harris hawks optimization (HHO) is a population-based metaheuristic algorithm inspired by the hunting strategy and cooperative behavior of Harris hawks. In this study, HHO is hybridized with 10 different chaotic maps to adjust its critical parameters, using four different hybridization methods. First, 15 test functions with unimodal and multimodal features are used to determine the most successful chaotic map and hybridization method. The results reveal that chaotic maps increase the performance of HHO and show that the piecewise map method is the most effective one. Moreover, the proposed chaotic HHO is compared to four metaheuristic algorithms from the literature on the CEC2019 test set. Next, the proposed chaotic HHO is applied to three mechanical design problems as benchmarks: the pressure vessel, tension/compression spring, and three-bar truss problems. The performance and results are compared with those of other popular algorithms in the literature. They show that the proposed chaotic HHO algorithm competes very successfully with HHO and other algorithms in solving the given engineering problems.


Introduction
Global optimization problems in continuous spaces have generally been a focal point for researchers. A global optimization problem can be defined as follows (Kaelo & Ali, 2006): minimize f(x) subject to x ∈ Ω, where x is a continuous variable vector with domain Ω ⊂ R^n and f(x): Ω → R is a continuous real-valued function. The domain Ω is defined by specifying upper (u_j) and lower (l_j) limits of each component j. In optimization problems, certain features of a system must be optimized by choosing the system parameters properly. For easier handling, the parameters of a system are generally represented as a vector (Storn & Price, 1997). When dealing with an optimization problem, there is usually a need to design an objective function that models the requirements of the problem. Most of the time, the objective function is designed so as to convert the problem into a minimization problem. In minimization problems, the global minimum can be reached using a Gradient Descent Optimizer (GDO). However, this holds only for convex problems. In problems with more than one local minimum, GDO can be trapped in a local minimum (De et al., 2020). The scale factor and the initial step direction also affect the performance of GDO significantly.
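This trapping behavior is easy to reproduce. The sketch below runs plain gradient descent on a one-dimensional Rastrigin-style function; the test function, step size, and starting points are illustrative choices, not taken from the text:

```python
import numpy as np

def f(x):
    # 1-D Rastrigin-style function: global minimum at x = 0, local minima near the integers
    return x ** 2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def grad_f(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def gradient_descent(x0, lr=0.002, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

x_good = gradient_descent(0.2)   # starts in the global basin, reaches x close to 0
x_bad = gradient_descent(2.4)    # starts in another basin, gets stuck near x = 2
```

Both runs converge, but only the first reaches the global minimum; the second ends in the local minimum near x = 2, illustrating why multimodal problems call for stochastic, population-based methods.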
Optimization is the process of searching for the most suitable solution among all available solutions to a problem (Arora & Anand, 2019; Ibrahim & Tawhid, 2019). Real-world problems involve nonlinear constraints, high computational cost, and complex and numerous solutions (Yan et al., 2014). To solve real-world problems, metaheuristic algorithms (MHAs) have been used extensively in recent years due to their powerful and efficient performance and simple structure (Dhiman & Kumar, 2019b; Rezaie et al., 2019). MHAs are inspired by nature and have a stochastic structure (Mitić et al., 2015; Mostafa Bozorgi & Yazdani, 2019). Generally, MHAs can be divided into two groups (Mirjalili et al., 2014). The first group contains single solution-based algorithms such as Simulated Annealing (Mafarja & Mirjalili, 2017). The second group includes population-based algorithms, the Genetic Algorithm being an example (Gharoonifard et al., 2010; Goharzay et al., 2020). In single solution-based algorithms, a solution is randomly generated and improved until the best result is obtained (Dhiman & Kumar, 2019a). In population-based algorithms, a series of random solutions is generated within some limitations and the solution values are updated until the best solution is found. These algorithms begin the optimization process by creating a group (population) in which each individual represents a candidate solution to the optimization problem (Heidari et al., 2019). The algorithm then tries to improve the existing population, often using stochastic operators (Mafarja et al., 2018b). The optimization process continues until a stop criterion is reached, such as the maximum number of iterations (Aljarah et al., 2018; Mafarja et al., 2018a). Population-based algorithms have few parameters and are therefore easier to apply to problems. Each MHA searches for the most suitable solution in a solution space.
Search proceeds in two stages, namely exploration and exploitation (Salcedo-Sanz, 2016). During the exploration stage, the algorithm should use random operators as effectively as possible to explore the search space deeply. Efficient use of these random operators makes it easier to identify quality solutions in the search space of the optimization algorithm (OA; Lozano & García-Martínez, 2010; Heidari et al., 2020). The exploitation phase is performed after the exploration phase. At this stage, the OA focuses on the solutions in the regions of the search space identified during exploration (Heidari et al., 2019), concentrating on certain parts of the search space rather than the whole. The balance between the exploration and exploitation stages must be maintained in an efficient OA (Dhiman & Kumar, 2019a). Otherwise, the OA runs the risk of falling into a local optima trap (Kohli & Arora, 2018).
This study focuses on the idea that chaotic maps (CMs) can help MHAs escape from local optimum traps. As the MHA, the recently published Harris hawks optimization (HHO), whose exploration and exploitation abilities are high, is selected. Even though HHO presents successful results, the random variables in its exploration and exploitation procedures can cause it to be trapped by a local optimum. To determine the random variables of HHO chaotically, four different modification methods and ten different CMs are proposed. The modification method and the CM that increase the performance of HHO the most are identified. Then, the proposed algorithm is compared to other chaotic MHAs in the literature. For comparisons, 15 well-acknowledged test functions and the CEC2019 function set are used. Finally, the proposed algorithm is tested on three engineering problems, and the results are supported using statistical tests. The results indicate that CMs make HHO more competitive.
In the next section of this article, a review of recent literature is given. Section 3 contains information about the HHO algorithm. Chaotic methods are explained in Section 4. Information on the proposed chaotic HHO (CHHO) is given in Section 5. Experimental results for the test functions and engineering problems are given in Section 6. In the "Conclusions" section, the analysis results and ideas for future work are discussed.

Literature Review
Many MHAs have been presented in the literature to solve optimization problems with high complexity and constraints. These algorithms are generally inspired by nature. For example, by simulating the migration behavior of monarch butterflies, Wang et al. propose a nature-inspired MHA named Monarch Butterfly Optimization (MBO; Wang et al., 2015). In the MBO algorithm, the search direction of the monarch butterflies is determined by the migration operator and the butterfly adjusting operator. Moreover, the two operators can be applied simultaneously. Therefore, the MBO method is well suited for parallel processing and can manage a balance between the exploration and exploitation processes that are highly significant for an MHA. The authors determine that MBO has competitive performance. Li et al. suggest a new MHA based on the oscillation behavior of slime moulds, named the Slime Mould Algorithm (SMA; Li et al., 2020). SMA has three basic processes: approaching food, wrapping food, and oscillation. The authors state that the adaptive weight parameter in SMA accelerates the convergence rate of the algorithm while enabling it to escape from local optimum traps. They also indicate that the location update parameter and three different location update strategies help SMA adapt better to different search procedures.
Inspired by the phototaxis and Lévy flights (LF) of moths in nature, Wang proposed Moth Search (MS), a general-purpose metaheuristic method (Wang, 2016). In the MS method, the best moth of each generation is taken as a light source. Moths close to the light source tend to fly around their own positions with a flying motion that can be modeled by LF, whereas moths far from the light source fly in a straight line towards it. The exploration and exploitation processes of MS are designed around these two behaviors. The author suggests that MS obtains competitive results but should be tested on a wider comparison set. In addition, it is not known what effect careful tuning of the parameters could have on the algorithm.
Mirjalili proposes a swarm-based Moth-Flame Optimization (MFO) algorithm (Mirjalili, 2015). The inspiration for this algorithm is the moth's navigation method named transverse orientation. Despite the effectiveness of transverse orientation, moths have often been observed to fly spirally around lights. This spiral flight continues with converging to the source of light. The mathematical model of spiral flight motion is integrated into the MFO algorithm. The author informs that the MFO's ability to search the solution space is high and it is not trapped by the local optimum. He also emphasizes that the convergence behavior is competitive as MFO saves the best solution and looks for new solutions related to this solution.
Inspired by the food searching and hunting characteristics of African vultures, Abdollahzadeh et al. developed a novel optimization algorithm (AVOA; Abdollahzadeh et al., 2021a). AVOA is a population-based method in which the two best solutions represent the two most powerful vultures, and the other vultures try to get close to them. AVOA includes four different processes. AVOA is claimed to present fast and competitive results. Moreover, it is emphasized that the parameter settings should be tuned carefully and that AVOA should be tested on different problems.
Abdollahzadeh et al. developed a new MHA named Artificial Gorilla Troops Optimizer (GTO), inspired by the social intelligence of gorilla troops (Abdollahzadeh et al., 2021b). GTO has five different operators to carry out the exploration and exploitation processes. In addition, GTO uses a special operator for the transition between exploration and exploitation. The authors tested GTO on a wide data set and state that the results are competitive.
Yang et al. present a general-purpose population-based MHA called Hunger Games Search (HGS; Yang et al., 2021). HGS is designed according to the hunger-related activities and behavioral preferences of animals and has a simple structure. HGS's adaptive, time-varying mechanisms reduce the method's chance of being trapped by local optima. The authors state that HGS obtains better results than competing algorithms. They also argue that different approaches to generating the initial population could further improve the performance of HGS.
Ahmadianfar et al. introduce the Runge Kutta Optimizer (RUN) to solve optimization problems (Ahmadianfar et al., 2021). The RUN algorithm uses the Runge-Kutta method as a search engine to find the best solution in the search space. To improve solution quality, avoid local optima, and increase the convergence rate, an Enhanced Solution Quality (ESQ) mechanism is integrated into the RUN algorithm. ESQ calculates the average of three random solutions and produces a new solution by combining it with the best solution. The authors state that the RUN algorithm strikes a successful balance between global and local search and is more successful than its competitors. Moreover, they note that various crossover and mutation operators could be included to improve the performance of the RUN algorithm further.
Again inspired by the social characteristics of animals, Tu et al. suggest an MHA named Colony Predation Algorithm (CPA; Tu et al., 2021). CPA strikes a fitting balance between exploration and exploitation and can converge in the early stages without being trapped by local optima. Motivated by the cooperative food searching of social animals, CPA includes a procedure based on animals' selective abandonment behavior to simulate the impact of this strategy on individual activities. Experimental results indicate that CPA has an effective search ability, scans the solution space quickly, and presents more competitive results than other algorithms. The authors recommend further work on parameter tuning.
The HHO algorithm is inspired by Harris hawks hunting as a group (Heidari et al., 2019). HHO is a population-based MHA consisting of exploration and exploitation phases. In the exploration phase, the algorithm aims to locate the best possible solutions within the solution space. In the exploitation phase, the algorithm aims to find the best solution by focusing on the locations determined during the exploration phase. HHO has six stages in total: two in exploration and four in exploitation (Heidari et al., 2019). To find the optimal solution, HHO performs one of the six stages randomly. Thanks to its powerful structure, the original performance of HHO has been shown to be much better than that of many well-acknowledged methods such as Particle Swarm Optimization (PSO), Grey Wolf Optimizer (GWO), Cuckoo Search (CS), MFO, Differential Evolution (DE), Bat Algorithm (BA), and the Whale Optimization Algorithm (WOA) (Zhang et al., 2020b). HHO has advantages over other swarm-based OAs, as it does not require any parameter to be set apart from the initial population of the swarm. HHO does not require derivative information. Moreover, it is easy to use, robust, and complete. The most advantageous feature of HHO is the balance between exploration and exploitation: as the HHO iteration number increases, so does the exploitation ability (Al-Betar et al., 2021). Due to these strengths, HHO and its derivatives have recently been widely used for real-world problems. HHO has achieved significant success in different fields and applications such as parameter estimation of photovoltaic models (Jiao et al., 2020), parameter estimation of three-diode photovoltaic models (Qais et al., 2020), drug design and discovery (Houssein et al., 2020), image segmentation on digital mammograms (Rodríguez-Esparza et al., 2020), fault diagnosis for rolling bearings (Fu et al., 2020), parameter identification of single-diode solar cells (Ridha et al., 2020), and satellite image segmentation (Jia et al., 2019).
Its success places HHO ahead of its competitors.
HHO can solve optimization problems with satisfactory results within acceptable periods. However, the existence of random variables in exploration and exploitation stages reduces the convergence speed of HHO and decreases the ability to find the global minimum in all attempts. In this study, chaos theory has been applied to HHO to increase the convergence speed and prevent it from being caught in the local minimum trap.
The randomly determined parameters that exist in MHA cannot fully enable the algorithm to scan the solution space in detail. It has been observed that the performance of MHA increases when the values of these random parameters are determined by chaotic mapping methods (Yang et al., 2007).
Some significant studies using chaotic methods are as follows. Xu et al. propose single and multiple chaotic GWO variants to prevent GWO from early convergence and local optimum traps (Xu et al., 2021). In the single chaotic GWOs, one CM is used to perform a chaotic local search (CLS), while in the multiple chaotic GWOs, all the CMs are used at the same time. In the multiple chaotic GWOs, CMs are ranked according to their success, which increases their probability of being used in the next iteration. Experimental results indicate that both proposed methods perform better than the original GWO. The authors point out that the multiple chaotic GWO yields more promising results than the single chaotic GWO. Gao et al. introduce two different CLS-based Chaotic Joint Approximate Diagonalization of Eigenmatrices (CJADE) algorithms. The first is a single-CM variant named CJADE-S. The second is a multichaotic CJADE with a mechanism that selects among 12 different CMs; this mechanism has random (CJADE-R), in-parallel (CJADE-P), and memory-selective (CJADE-M) variants, so the algorithm has four variants in total. The initial population of the algorithm is generated by the CMs. The authors argue that multiple CMs improve the performance of CLS more than single CMs and that the CJADE-M algorithm performs better than the other variants.
Wang et al. state that although the Gravitational Search Algorithm (GSA) can optimize various problems successfully, its search ability is limited by its mechanism. To improve the performance of GSA, the authors propose four different chaotic neural oscillators to adjust the gravitational constant (G). These chaotic neural oscillators also enable transitions between exploration and exploitation. The authors stress that chaotic neural oscillators are more effective than CMs for tuning G.
To create a balance between the exploration and exploitation processes of the brainstorm optimization (BSO) algorithm, Yu et al. hybridize BSO with the CLS process. The CLS process is applied when BSO goes into stagnation. The authors point out that the proposed method solves the stagnation problem of BSO and improves its performance.
Ji et al. perform a comprehensive empirical analysis of the GSA to examine the role of G in the optimization process (Ji et al., 2017). To balance exploration and exploitation, they propose an adaptive mechanism that adjusts the value of G automatically. In addition, to increase the convergence rate of GSA, they modify the standard CLS and integrate it into the optimization process of GSA. The authors emphasize that the convergence rate and the quality of the solution are highly sensitive to the value of G. Moreover, with these two techniques, the basic weaknesses of the GSA are effectively remedied.
To present an impactful and productive OA for the solution of numerical functions, Gao et al. hybridize GSA and CMs (Gao et al., 2014). They propose two types of hybridization methodologies. In the first one, the parameters that are determined randomly in GSA are determined with CMs (CGSA1). In the second, the CLS approach is added to the GSA (CGSA2). The authors state that CGSA1 and CGSA2 algorithms perform better than the GSA. They also emphasize that the CGSA2 method outperforms the CGSA1 method.
To increase the searching ability of GSA, Song et al. hybridize GSA with different CMs and different modification strategies. In their study, the authors concentrate on three main points: comprehensively evaluating how chaos affects the performance of GSA, figuring out which CM best improves the performance of GSA, and designing a strategy that fully exploits the search dynamics of chaos. To increase the performance of the GSA, the authors propose 12 single chaotic GSAs (CGSAs) and 3 multiple CGSAs (random, in-parallel, and memory-selective). The results indicate that the memory-selective multiple CGSA presents the most successful results.
Caponetto et al. study the use of chaotic number generators instead of random number generators in evolutionary algorithms (EAs; Caponetto et al., 2003). The authors recommend using numbers generated by chaotic systems at all stages of EAs that require random selection. Many examples and statistical tests indicate that using CMs instead of random numbers improves the performance of EAs.
In previous studies, CMs were used to increase the performance of HHO. However, these studies do not present a detailed analysis. Ibrahim et al. used a CM to strengthen HHO's exploitation procedure (Ibrahim et al., 2020). Elgamal et al. used a CM when establishing the initial population and then analysed the performance (Elgamal et al., 2020). Sahoo et al. used the Chebyshev map to calculate the energy of the rabbit in HHO (Sahoo et al., 2020). Singh created the initial population by using a logistic map (Singh, 2020). Menesy et al. used a CM to determine q, a parameter of HHO's exploration procedure (Menesy et al., 2020). So far, the most comprehensive study has been presented by Ewees and Elaziz (Ewees & Elaziz, 2020); in their study, only the random parameters in equation (1) were determined by CM.
Al-Betar et al. propose three different variants of HHO (Al-Betar et al., 2021). These variants are developed based on the principle of natural selection. These variants are Tournament HHO (THHO), Proportional HHO (PHHO), and Linear-Rank HHO (LHHO). In these variants, the exploration operator in the original HHO that chooses solutions from the population randomly to replace the available solution is replaced, respectively, with tournament selection in THHO, proportional selection in PHHO, and linear-rank selection in LHHO. This process enables the survival of the fittest, which leads to rapid convergence. Depending on their results, the authors inform that the THHO variant is more successful than PHHO and LHHO variants.
Zhang et al. hybridize the Salp Swarm Algorithm (SSA) with HHO to increase the exploration ability of HHO (Zhang et al., 2020b). They call the new algorithm IHHO. The main task of the SSA is to increase population diversity. At each iteration, the population is updated in three stages. In the first stage, the SSA mechanism tunes the location of the hawk population and generates an SSA-based hawk population. In the second stage, a new hybrid search agent is generated from the combination of the HHO-based hawk and the SSA-based hawk. The first two stages increase the diversity of the population. In the third stage, after greedy selection, the HHO process is used to update the available search agent. Greedy selection chooses the best candidate solution among the HHO-based hawk, the SSA-based hawk, and the hybrid hawk as the available search agent. The authors emphasize that the proposed algorithm achieves significant success in feature selection.
As stated in the No Free Lunch theorem, no single OA is able to solve all optimization problems (Wolpert & Macready, 1997). Therefore, researchers continue to develop new MHAs or improve the performance of existing ones. HHO has superior abilities for solving real-world problems (Al-Betar et al., 2021). However, the early convergence problem and the presence of random variables influence the performance of HHO negatively (Zhang et al., 2020b). Previous studies have observed that determining the randomly set parameters of MHAs with CMs increases the performance of the related algorithm, because CMs increase an OA's ability to search the solution space. Scanning the solution space thoroughly is especially important for NP-hard problems. The abundance of random variables in the structure of HHO supports the idea that the performance of the algorithm can be improved. In the earlier studies, only some of the random parameters of HHO were determined with a small number of CMs, or HHO was hybridized with the CLS method. Therefore, more variants of HHO need to be analysed with more CMs. It should also be borne in mind that integrating the CLS method into HHO adds extra cost to the algorithm (Xu et al., 2021).
In this study, 10 different CMs are integrated into HHO to increase its exploration and exploitation capabilities. The integration proceeds as follows: the random parameters of HHO are determined by using a CM, and four different methods are used for the integration. In the first method, the value used to select between the exploration and exploitation stages of HHO is determined by the CM; this modification is also applied in the other methods. In the second method, HHO's exploration parameters are determined by the CM; in the third, the parameters of the exploitation stage; and in the fourth, the critical parameters of both the exploration and exploitation stages. The purpose of this integration is to increase the convergence speed of HHO by exploiting the ability of chaotic mapping methods to generate nonrepetitive numbers and to increase the efficiency of HHO in reaching the best result. The HHO and CHHO methods are applied to 15 different test functions with unimodal and multimodal properties. The minimum, average, and standard deviation values of the results obtained on the functions are tabulated and compared. In addition, the running times and the Friedman test average rank values of the proposed algorithms are compared. The best CHHO variant is determined according to the Friedman test results and is then compared to four recently published MHAs. For this comparison, CEC2019 test functions are used in addition to the 15 test functions stated earlier. The minimum, mean, and standard deviation values and the solution times of the algorithms are recorded, and the results are interpreted with the Wilcoxon statistical test. To validate the proposed CHHO algorithm, it is applied to three different engineering problems and compared with popular MHAs from the literature. The results show that chaotic mapping methods increase the convergence and solution capability of HHO.

HHO Algorithm
The HHO algorithm was developed inspired by the hunting strategies of Harris hawks. Harris hawks are among the smartest animals in nature and they hunt in groups. When they catch prey, they share it among all the hawks in the group. In developing HHO, the prey is assumed to be a rabbit. Harris hawks search the desert for a long time to catch a rabbit. If a rabbit is detected, the group begins to follow it and attacks at the appropriate moment. The hawks conduct their attacks in coordination with each other. This type of hunting involves a very clever strategy: the first attacks aim to reduce the defensive power of the rabbit, and the hawks then perform their main attacks to catch the surrounded rabbit with its defenses lowered. The rabbit becomes easier to catch because it moves away from its nest towards possible hiding places during the attacks. Finally, the hawks easily catch the tired rabbit without consuming too much energy. This strategy shows that Harris hawks have optimized their hunting process. The HHO algorithm is modeled in two stages inspired by their exploration and attack strategies (Fig. 1; Heidari et al., 2019; Moayedi et al., 2021).

Exploration phase
During exploration, Harris hawks perch at a location to search for their prey, the rabbit in this case. Here, each hawk represents a possible solution, while the rabbit represents the best solution. At each stage, the hawks get closer to the rabbit, that is, the best solution. Harris hawks use two different perching strategies, which are given equal chances through a random number q. The first strategy is perching according to the rabbit's position (q < 0.5). The second is perching at nearby positions (q ≥ 0.5); this strategy helps the members of the group stay close to each other during hunting. The mathematical model of these two strategies is given in equation (2) (Jia et al., 2019).
where t is the current iteration, X(t + 1) is the position of the hawks in the next iteration, X_rabbit(t) represents the position of the rabbit, X(t) is the current position vector of the hawks, r_1, r_2, r_3, and r_4 are random numbers between 0 and 1 that are refreshed at each iteration, X_rand(t) is a randomly selected hawk in the current population, X_m(t) is the average position of the hawks in the current iteration, and ub and lb are the upper and lower limits of the variables, respectively. The average position of the hawks is calculated using equation (3).
where t is the current iteration, N is the total number of hawks, and X_i(t) is the position vector of each hawk. After the exploration phase, the exploitation phase begins. However, the phase transition depends on the rabbit's energy, which is modeled as in equation (4).
where E indicates the rabbit's escape energy, E_0 is the initial energy of the rabbit, t is the current iteration, and T is the maximum iteration number. E_0 is randomly drawn between −1 and 1 in each iteration. If E_0 is between 0 and 1, the rabbit is still strong; if it is between 0 and −1, the power of the rabbit is fading. When |E| ≥ 1, the hawks continue the exploration phase to determine the position of the rabbit. When |E| < 1, the location of the rabbit has been determined and the hunting phase starts. E decays toward zero as the iterations progress.
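Equations (2)-(4) are given in the original HHO paper (Heidari et al., 2019). A sketch consistent with the variable definitions above might look as follows; the 5-hawk demo population, dimension, and bounds are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def explore(X, i, X_rabbit, lb, ub):
    """One exploration move for hawk i (equation 2), using the mean position (equation 3)."""
    q, r1, r2, r3, r4 = rng.random(5)
    X_m = X.mean(axis=0)                      # equation (3): average hawk position
    X_rand = X[rng.integers(len(X))]          # a randomly selected hawk
    if q >= 0.5:                              # perch relative to a random hawk
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # perch relative to the rabbit's position
    return (X_rabbit - X_m) - r3 * (lb + r4 * (ub - lb))

def escape_energy(t, T):
    """Equation (4): E = 2 * E0 * (1 - t / T), with E0 drawn anew from [-1, 1]."""
    E0 = 2.0 * rng.random() - 1.0
    return 2.0 * E0 * (1.0 - t / T)

# hypothetical 5-hawk population in 3 dimensions
X = rng.uniform(-10, 10, size=(5, 3))
new_pos = explore(X, 0, X_rabbit=X[1], lb=-10, ub=10)
E = escape_energy(t=10, T=100)
phase = "exploration" if abs(E) >= 1 else "exploitation"
```

The shrinking factor (1 − t/T) is what drives the transition: early iterations frequently yield |E| ≥ 1 (exploration), while late iterations almost always yield |E| < 1 (exploitation).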

Exploitation phase
At this stage, the Harris hawks try to catch the rabbit whose position has been determined. However, the rabbit tries to avoid danger, so not every hunting attempt succeeds. Four different hunting strategies are modeled at this stage. The rabbit's chance of escaping is modeled by a random number r ranging from 0 to 1, determined anew in each iteration. If r < 0.5, it is assumed that the rabbit can successfully escape; if r ≥ 0.5, the rabbit is not able to escape. The escape success or failure of the rabbit changes the hawks' attack strategy. The hawks apply either a soft or a hard besiege to the rabbit: if the rabbit's escape energy satisfies |E| ≥ 0.5, soft besiege procedures are applied, and if |E| < 0.5, hard besiege procedures are applied (Chen et al., 2020a).

Soft besiege
If r ≥ 0.5 and |E| ≥ 0.5, the energy of the rabbit is high and its chance of escaping is weak. At this stage, the hawks try to tire the rabbit and reduce its energy through a soft besiege. This strategy is modeled as in equations (5) and (6) (Houssein et al., 2020).
where ΔX(t) is the difference between the position of the rabbit and the current position of the hawk, J is the rabbit's random jump strength, and r_5 is randomly determined between 0 and 1 (Jiao et al., 2020).
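Under these definitions, the soft besiege update can be sketched as follows (the jump strength J = 2(1 − r_5) follows Heidari et al., 2019; the demo positions are illustrative):

```python
import numpy as np

def soft_besiege(X_i, X_rabbit, E, r5):
    """Equations (5)-(6): circle the rabbit softly to tire it out."""
    J = 2.0 * (1.0 - r5)             # rabbit's random jump strength
    delta_X = X_rabbit - X_i         # equation (6): distance between rabbit and hawk
    return delta_X - E * np.abs(J * X_rabbit - X_i)

# hypothetical hawk at the origin, rabbit at (1, 1)
new_pos = soft_besiege(np.zeros(2), np.ones(2), E=0.6, r5=0.5)
```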

Hard besiege
If r ≥ 0.5 and |E| < 0.5, the rabbit has low escape energy and a weak chance of escaping. The Harris hawks make surprise dives to catch the rabbit (Fig. 2). The positions at this stage are calculated using equation (7).
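Equation (7) reduces to a single line; the demo positions below are illustrative:

```python
import numpy as np

def hard_besiege(X_i, X_rabbit, E):
    """Equation (7): dive straight at the exhausted rabbit."""
    return X_rabbit - E * np.abs(X_rabbit - X_i)

# hypothetical hawk at the origin, rabbit at (1, 1)
new_pos = hard_besiege(np.zeros(2), np.ones(2), E=0.4)
```

Because |E| < 0.5 here, the new position lies close to the rabbit itself, which is why this stage is pure exploitation.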

Soft besiege with progressive rapid dives
If r < 0.5 and |E| ≥ 0.5, the rabbit's energy and its chance of escaping are both high (Fig. 3). At this stage, the hawks dive gradually and smoothly, while the rabbit makes unexpected leaps to escape. The rabbit's jumping movements are modeled by LF, a method commonly used to model the movements of animals (Yang et al., 2014; Heidari et al., 2019). The diving movements of the hawks at this stage are modeled with equation (8).
If the hawks fail in their diving movements, it is assumed that they will dive irregularly. To model this situation, equation (9) is used (Qais et al., 2020).
where D is the dimension of the problem and S is a randomly determined 1 × D vector. LF is calculated using equation (10), where U and V are random numbers between 0 and 1, and β is a constant set to 1.5. The hawks decide which attack strategy to apply with equation (11).
where Y and Z are calculated by using equations (8) and (9).
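Assuming U and V in equation (10) are drawn uniformly from (0, 1) as stated, equations (8)-(11) can be sketched as follows (the sphere objective and hawk position are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy(D, beta=1.5):
    """Equation (10): Levy-flight step of dimension D."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.random(D), rng.random(D)
    return 0.01 * (u * sigma) / np.abs(v) ** (1 / beta)

def soft_besiege_dives(X_i, X_rabbit, E, f):
    """Equations (8)-(11): try a smooth dive Y, then an irregular (Levy) dive Z."""
    J = 2.0 * (1.0 - rng.random())                  # rabbit's random jump strength
    D = len(X_i)
    Y = X_rabbit - E * np.abs(J * X_rabbit - X_i)   # equation (8): smooth dive
    Z = Y + rng.random(D) * levy(D)                 # equation (9): irregular dive
    if f(Y) < f(X_i):                               # equation (11): keep an improving dive
        return Y
    if f(Z) < f(X_i):
        return Z
    return X_i

sphere = lambda x: float(np.sum(x ** 2))
hawk = np.array([2.0, -2.0])
new_pos = soft_besiege_dives(hawk, X_rabbit=np.array([0.5, 0.5]), E=0.6, f=sphere)
```

The greedy check in equation (11) guarantees the hawk never moves to a worse position at this stage.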

Hard besiege with progressive rapid dives
If r < 0.5 and |E| < 0.5, the energy of the rabbit has decreased. The hawks perform a hard besiege to catch the prey, aiming to reduce the distance between themselves and the prey. At this stage, the decision mechanism of the hawks is the same as in equation (11). However, while modeling the behavior of the hawks at this stage, the Y and Z calculations are changed as in equations (12) and (13).
where X m (t) is calculated by using equation (3). The pseudo-code of the HHO algorithm is given in Fig. 4.

Chaotic Maps
In this study, one-dimensional maps are used to make the HHO algorithm chaotic. Chaotic maps (CMs) produce nonlinear, nonperiodic, random-like numbers within certain limits (Mitić et al., 2015). The mathematical expressions of CMs are straightforward and they have few parameters (Kaur & Arora, 2018). They can scan the solution space faster than random number generators (Misaghi & Yaghoobi, 2019). Starting parameters can be selected between 0 and 1; however, CMs are very sensitive to initial values. In this study, the initial values of the CMs are set to 0.7. Many CMs have been proposed in the literature. Among them, the 10 most frequently used maps are integrated into the HHO algorithm to increase its performance (Table A1). The chaotic methods are integrated into the HHO algorithm separately and the CHHO algorithms are generated. The proposed algorithms are named according to each chaotic method applied; the names are given in the last column of Table A1.
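Three of the maps used here can be sketched as follows. The definitions (logistic, sinusoidal, and piecewise with control parameter P = 0.4) follow the forms most common in the chaotic-optimization literature; exact parameterizations vary between papers, so treat these as representative rather than the paper's exact Table A1 entries.

```python
import math

def logistic(x):
    """Logistic map: x' = 4x(1 - x)."""
    return 4.0 * x * (1.0 - x)

def sinusoidal(x):
    """Sinusoidal map: x' = 2.3 x^2 sin(pi x)."""
    return 2.3 * x * x * math.sin(math.pi * x)

def piecewise(x, p=0.4):
    """Piecewise map with control parameter P = 0.4 (four cases)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    if x < 1.0 - p:
        return (1.0 - p - x) / (0.5 - p)
    return (1.0 - x) / p

def chaotic_sequence(chaos_map, n, x0=0.7):
    """Iterate a map n times from x0 = 0.7, the initial value used in the study."""
    values, x = [], x0
    for _ in range(n):
        x = chaos_map(x)
        values.append(x)
    return values
```

All three maps keep their iterates inside [0, 1], so a chaotic sequence can directly replace a uniform random-number stream.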

CHHO Algorithms
The randomly determined numbers found in MHAs suit the stochastic structure of these algorithms and have significant effects on their performance. These effects are among the determining factors of OAs' global and local search capabilities. Random variables are also used in modeling the HHO algorithm. HHO has important features in terms of convergence speed and problem-solving ability; however, the random variables in its mathematical expressions can prevent HHO from reaching the best result, because its global search is weak in its current form. Four different modifications and 40 different chaotic algorithms are proposed to overcome this deficiency. In the equations, the parameters determined with CMs are denoted by prefixing them with the letter C.

Proposed modification 1
The rabbit's escape energy, calculated by equation (3), is a critical parameter that determines whether HHO performs the global exploration or the exploitation procedure. This parameter is calculated from the rabbit's initial energy (E0), which is determined randomly in each iteration. This randomness prevents the HHO algorithm from scanning the search space deeply. A CM is therefore used to determine the E0 variable, to overcome this deficiency and increase the performance of the HHO algorithm (equation 14). This modification is named PM1.
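One plausible realization of PM1 is sketched below: the chaotic value replaces the uniform draw for E0, which is then mapped into (−1, 1) and decayed over the iterations as in the original HHO. The exact scaling in equation (14) is an assumption here.

```python
def chaotic_escape_energy(t, max_iter, chaos_value):
    """PM1 sketch (equation 14): derive the rabbit's initial energy E0
    from the current chaotic-map value in (0, 1) instead of a uniform draw."""
    e0 = 2.0 * chaos_value - 1.0              # chaotic E0 mapped into (-1, 1)
    return 2.0 * e0 * (1.0 - t / max_iter)    # escaping energy E, equation (3)
```

With |E| ≥ 1 the algorithm explores and with |E| < 1 it exploits, so the chaotic sequence now drives this exploration/exploitation switch.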

Proposed modification 2
At this stage, the changes made in PM1 are preserved and additional changes are made in the HHO algorithm. In the exploration phase of HHO, the positions of the hawks are redetermined in each iteration. Equation (2) is used to determine the hawks' positions relative to each other and relative to the rabbit. One of the main parameters of equation (2) is a randomly selected hawk [X rand (t)]. In this step, that hawk is chosen by using chaotic mapping methods. The randomly determined coefficients r 1 , r 2 , r 3 , and r 4 used in equation (2) are determined by chaotic methods as well (equations 15 and 16). With these changes, the exploration performance of HHO increases, as does its ability to find the global minimum. This modification is named PM2.
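A sketch of the exploration step with chaotic draws is given below. It follows the standard form of HHO's equation (2); the generator interface `chaos()` returning the next chaotic value in (0, 1) is an assumption standing in for the paper's chaotic sequences.

```python
import numpy as np

def chaotic_exploration(hawks, i, x_rabbit, lb, ub, chaos):
    """PM2 sketch of the exploration step (equation 2).

    hawks : (N, D) array of all hawk positions
    i     : index of the hawk being updated
    chaos : callable returning the next chaotic value in (0, 1)
    """
    q, r1, r2, r3, r4 = (chaos() for _ in range(5))
    if q >= 0.5:
        # a hawk chosen chaotically instead of uniformly at random
        j = int(chaos() * len(hawks)) % len(hawks)
        x_rand = hawks[j]
        return x_rand - r1 * np.abs(x_rand - 2.0 * r2 * hawks[i])
    x_mean = hawks.mean(axis=0)
    return (x_rabbit - x_mean) - r3 * (lb + r4 * (ub - lb))

# usage with a logistic-map generator started from 0.7
def make_logistic(x0=0.7):
    state = [x0]
    def nxt():
        state[0] = 4.0 * state[0] * (1.0 - state[0])
        return state[0]
    return nxt

hawks = np.random.default_rng(2).uniform(-10, 10, size=(5, 3))
x_new = chaotic_exploration(hawks, 0, hawks.mean(axis=0), -10.0, 10.0, make_logistic())
```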

Proposed modification 3
At this stage, the scheme in PM1 is preserved and additional changes are made in the HHO algorithm. The rabbit's jump strength is another important parameter in the exploitation phase of HHO, appearing in equations (5), (8), and (12). The jump strength, indicated by J in the equations, is normally determined randomly; this randomness can cause the HHO algorithm to get caught in the local minimum trap. Therefore, the rabbit's jump strength is determined by chaotic mapping methods (equations 17-20). This modification is named PM3.
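A minimal sketch of the chaotic jump strength, assuming the same (0, 2) range as the original HHO's J = 2(1 − r5); the exact form of equations (17)-(20) may differ.

```python
def chaotic_jump_strength(chaos_value):
    """PM3 sketch: rabbit's jump strength J computed from a chaotic value
    in (0, 1) rather than a uniform draw, keeping J inside (0, 2)."""
    return 2.0 * (1.0 - chaos_value)
```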

Proposed modification 4
In the fourth modification alternative (PM4), the changes made in PM1, PM2, and PM3 are preserved with some more additions to the algorithm. The significant random parameters found in the HHO algorithm are determined by CM methods (equations 21-25). With these changes, the efficiency of both exploration and exploitation procedures of the HHO algorithm is expected to increase.

Summary of the proposed modifications
In the PM1 modification, only the E 0 parameter, which is used in calculating the rabbit's energy, is determined by chaotic mapping methods. The E 0 parameter is changed in order to achieve a more efficient balance between the exploration and exploitation phases of the HHO algorithm, and no changes are made in the exploration and exploitation stages themselves so that the effects of E 0 alone can be observed. This change is preserved in the other proposed methods. In the PM2 modification, only the exploration procedure of the HHO algorithm is changed, and the effects of CMs on HHO's exploration capability are observed. In the PM3 modification, only the exploitation procedure is changed, and the effects of CMs on HHO's exploitation performance are examined. All these changes are combined in the PM4 modification, and their joint effects on the performance of HHO are examined. The changes made in the proposed modifications are listed in Table 1.

Computational complexity
The HHO algorithm has three main processes. These processes and their computational complexities are given below (equation 26):

1. The complexity of the initialization process is O(N).
2. The complexity of determining the best location is O(T×N).
3. The complexity of updating the positions of all hawks is O(T×N×D).

Here, T stands for the maximum number of iterations, N for the population size, and D for the dimension of the problem.

Computational Results
In Section 6.1, the suggested modifications are tested with 15 well-known functions and the best-performing algorithm is determined. In Section 6.2, the proposed algorithm is compared to other algorithms. In Section 6.3, the effectiveness of the exploration and exploitation processes of the proposed algorithm is presented. In Sections 6.4, 6.5, and 6.6, the results on the engineering problems are explained.

Comparison of proposed algorithms
The proposed algorithms are tested with 15 well-defined test functions for performance evaluation; these functions are given in Table A2 (Appendix A). The test functions are divided into two main groups, unimodal and multimodal: F1-F7 are unimodal and F8-F15 are multimodal functions (Dhiman & Kumar, 2017). Unimodal functions have a single optimum and test the algorithms' convergence rates and exploitation capabilities. Multimodal functions have multiple optima, only one of which is the global optimum while the rest are local optima; therefore, multimodal functions are more difficult than unimodal ones. A well-designed MHA must find the global optimum without getting caught in the local optima trap. Information about the experimental setup is given in Table 2. The experiments are performed on a computer with 64 GB RAM, a 10th generation i9 processor, and the Ubuntu operating system.
The proposed algorithms are applied to the test functions in Table A2 (Appendix A). Each algorithm is run independently 30 times and the results are recorded. The minimum, average, and standard deviation values of the results are used as the comparison metrics. The results of the chaotic algorithms created with the PM1, PM2, PM3, and PM4 modifications of the original HHO algorithm are given in Tables 3-6, respectively. In all tables, the best results are shown in bold and underlined.

Table 3 shows the results obtained with the PM1 modification of the HHO algorithm (Table 1, Section 5.1). When Table 3 is examined, it is seen that the PM1 modification increases the performance of the HHO algorithm. In terms of the minimum value metric, PM1C9 (Sinusoidal) achieved the best results in four test functions (26.6%) and PM1C10 (Tent) also in four (26.6%). In terms of the mean value metric, PM1C9 (Sinusoidal) achieved the most successful results in five (33.3%) and PM1C10 (Tent) in three (20%) test functions. In terms of the standard deviation metric, PM1C9 (Sinusoidal) achieved the most successful results in five (33.3%) and PM1C1 (Chebyshev) in four (26.6%) test functions. The results show that the Sinusoidal and Tent CMs are more successful than the other chaotic mapping methods in increasing the performance of HHO.

Table 4 shows the results obtained with the PM2 modification of the HHO algorithm (Table 1, Section 5.2). When Table 4 is examined, it is seen that the PM2 modification increases the performance of the HHO algorithm. In terms of the minimum value metric, PM2C3 (Gauss) obtained the best results in four (26.6%) test functions, while each of PM2C6 (Piecewise), PM2C9 (Sinusoidal), and PM2C10 (Tent) obtained the best results in two (13.3%) test functions. In terms of the mean value metric, both PM2C9 (Sinusoidal) and PM2C3 (Gauss) achieved the most successful results in four (26.6%) test functions.
In terms of the standard deviation metric, PM2C9 (Sinusoidal) achieved the most successful results in five (33.3%) test functions and PM2C3 (Gauss) in four (26.6%). The results show that the Sinusoidal and Gauss CMs are more successful than the other chaotic mapping methods in increasing the performance of HHO.

Table 5 shows the results obtained with the PM3 modification of the HHO algorithm (Table 1, Section 5.3). When Table 5 is examined, it is seen that the PM3 modification increases the performance of the HHO algorithm. In terms of the minimum value metric, PM3C9 (Sinusoidal) achieved the best results in four (26.6%) test functions, while both PM3C6 (Piecewise) and PM3C10 (Tent) achieved the best results in three (20%). In terms of the mean value metric, PM3C9 (Sinusoidal) achieved the most successful results in four (26.6%) test functions, and each of PM3C2 (Circle), PM3C5 (Logistic), and PM3C10 (Tent) in three (20%). In terms of the standard deviation metric, PM3C9 (Sinusoidal) achieved the most successful results in five (33.3%) and PM3C2 (Circle) in four (26.6%) test functions. The results show that the Sinusoidal and Tent CMs are more successful than the other chaotic mapping methods in increasing the performance of HHO.

Table 6 shows the results obtained with the PM4 modification of the HHO algorithm (Table 1, Section 5.4). When Table 6 is examined, it is seen that the PM4 modification increases the performance of the HHO algorithm. In terms of the minimum value metric, PM4C3 (Gauss) achieved the best results in six (40%) test functions.

Evaluating the proposed algorithms in terms of the number of test functions in which they are more successful is not a suitable method for choosing the best algorithm. Instead, a better comparison metric is to make a selection based on the success ranking of the proposed algorithms over all test functions.
The rank values of the proposed algorithms obtained from the Friedman test are given in Table 7. While creating Table 7, average values are used to better evaluate the performance of the algorithms. While the C6 (Piecewise) algorithm is the most successful one under the PM1, PM3, and PM4 modifications, the most successful algorithm under the PM2 modification is C3 (Gauss). When all rank values are added together, the C6 (Piecewise) algorithm shows the best performance over all test problems with 271 points; notably, it obtains its best score under the PM4 modification. The HHO algorithm is in fourth place with 322 points, while the C1 (Chebyshev) and C2 (Circle) algorithms outperform it. The Piecewise, Chebyshev, and Circle CM methods thus increase the performance of HHO in terms of the Friedman rank value metric.
In addition, the algorithms are compared according to their computation times; this comparison is given in Table 8. Columns 1-4 of Table 8 list the average times each algorithm obtains under the PM modifications, the fifth column gives the average of columns 1-4, and the last column contains the rank scores of the algorithms. The C5 algorithm is the best in terms of the time metric, while HHO is the seventh best with 0.576 s. CMs therefore improve HHO's solution quality and also positively affect its solution times.
To evaluate which PM modification is more successful, the success scores of the PM modifications are given in Table 9. In this scoring, for example, the PM modification that solves the F1 test function best in terms of the mean value metric receives 1 point and the worst-solving modification receives 4 points. These scores are given in columns 2-4 of Table 9, and the total scores are in the last row. According to these scores, the most successful modification is PM4. The convergence behaviors of some test functions under the PM4 modification are given in Fig. 5. In a general evaluation of these data, the PM4 modification with the CHHO6 (Piecewise) algorithm can be preferred as the best, because it exhibits more stable behavior and a competitive solution time. Testing newly proposed OAs on engineering problems is a frequently preferred method in the literature; therefore, the CHHO6 (Piecewise) algorithm proposed in this study is applied to three engineering problems to test its performance and to compare it with competing algorithms.

Comparison of the proposed algorithm to the other chaotic MHAs
In this section, the PM4C6 algorithm, which obtained the best results in Section 6.1, is compared to other chaotic MHAs in the literature. For comparison, four recently published popular algorithms are selected. Two of them, Nonlinear-based CHHO (NCHHO) (Dehkordi et al., 2021) and Binary HHO (BHHO) (Too et al., 2019), are chaotic variants of HHO. The other two, CGSA (Rather & Bala, 2020) and Hybrid Particle Swarm Optimization and Butterfly OA (HPSOBOA) (Zhang et al., 2020a), are chaotic variants of different MHAs. For a fair comparison, the population sizes and iteration numbers are kept equal for all algorithms. The results of 30 independent runs of each algorithm are recorded, with 500 iterations per run. Moreover, a comprehensive analysis is performed with four different population sizes. Information about the experimental setup is given in Table 10. Experiments are performed on a computer with 64 GB RAM, a 10th generation i9 processor, and the Ubuntu operating system. All the algorithms are tested with two groups of test functions. The first group consists of the well-acknowledged test functions used in Section 6.1, detailed in Table A2 (Appendix A).
The other test group is the CEC2019 set, which consists of 10 functions (Abdullah & Ahmed, 2019). Information about the CEC2019 functions is given in Table A3 (Appendix A).
The results of all algorithms are given in separate tables according to population size (Appendix B, Tables B1-B8). For the test functions, the minimum, average, and standard deviation values are used as comparison metrics. Moreover, to determine whether the methods differ, a nonparametric Wilcoxon test at the 5% significance level is applied. The Wilcoxon test results for the 15 well-acknowledged test functions and the CEC2019 test functions are given, respectively, in Tables 11 and 13 according to population size. In these tables, the "+" sign indicates that the proposed algorithm is more successful than its competitor, the "-" sign indicates that it is less successful, and the "=" sign indicates no significant difference between them. In addition, the solution time of the algorithms is used as another comparison metric: solution times for the 15 well-acknowledged test functions and the CEC2019 test functions are given in seconds in Tables 12 and 14, respectively, according to population size.
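The per-function "+"/"−"/"=" comparison described above can be reproduced with SciPy's signed-rank test at the 5% level. The run results below are fabricated placeholders, not the paper's data; lower objective values are assumed to be better.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare(results_a, results_b, alpha=0.05):
    """Return '+', '-', or '=' for paired run results of two algorithms,
    using the sign convention of Tables 11 and 13 (lower mean is better)."""
    a = np.asarray(results_a, dtype=float)
    b = np.asarray(results_b, dtype=float)
    if np.allclose(a, b):
        return "="                     # identical runs: no difference to test
    _, p = wilcoxon(a, b)
    if p >= alpha:
        return "="                     # not significant at the 5% level
    return "+" if a.mean() < b.mean() else "-"

# 30 placeholder run results: algorithm A is uniformly better by 5 units
a = [float(i) for i in range(30)]
b = [v + 5.0 for v in a]
```

The `allclose` guard matters because `wilcoxon` rejects all-zero difference vectors.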
Analysing Table 11, it is observed that the proposed algorithm performs better than the competing algorithms at the 5% significance level. While the proposed algorithm produces competitive results on functions F1-F12, it is unsuccessful on F13-F15. The proposed algorithm is more successful than NCHHO and BHHO, the two chaotic variants of HHO, and this superiority is observed for all population sizes. The success of the proposed algorithm relative to CGSA and HPSOBOA varies with population size, which can be explained by the sensitivity of CGSA and HPSOBOA to population size. Studying Table 12, it is observed that the proposed algorithm operates faster than its competitors; it should also be specifically noted that the solution times of the HHO variants are shorter than those of the other MHA variants. Table 13 shows that the proposed algorithm performs better than its competitors at the 5% significance level, and this superiority is also apparent on the CEC2019 set. From Table 14, it is seen that the proposed algorithm performs better than its competitors for all population sizes; in this test group, too, the solution times of the HHO variants are shorter than those of the other chaotic MHA variants.

Effectiveness of exploration and exploitation parts in proposed algorithm
The PM4C6 algorithm is a chaotic variant of HHO. Accordingly, the proposed algorithm, just like HHO, consists of exploration and exploitation processes. The balance between these two processes is a significant matter for OAs. In Fig. 6, PM4C6-1 shows just exploration, PM4C6-2 shows just exploitation, and PM4C6 shows that the exploration and exploitation processes are used simultaneously. Fig. 6 informs us that the proposed algorithm can apply exploration and exploitation processes successfully.
According to the results of the Friedman test in Section 6.1, the most successful algorithm is PM4C6. CGSA, HPSOBOA, NCHHO, and BHHO are compared in Section 6.2 using different population sizes, and the results are interpreted according to the Wilcoxon statistical test. All algorithms are also compared in terms of their solution times. The results show that the proposed PM4C6 algorithm achieves slightly better results than previous studies. Some points of discussion may be summarized as follows:

1. The random variables in the structure of HHO are normally generated by the built-in functions of the language in which the algorithm is programmed. However, this method does not guarantee that the generated values are nonrepeating. Generating the random variables with CMs increases the performance of HHO.
2. The CM chosen to generate the random variables affects the solution time. This is caused by the way each CM generates its numbers.
3. Determining all the random variables in the structure of HHO with CMs (as in modification-4) is more successful than the other modification methods, because the solution space is examined in more detail.
4. The piecewise CM increases the performance of HHO more than the others. This is because the piecewise CM generates numbers through four different cases, which parallels HHO's six different searching procedures.
5. The results of F1-F15 indicate that as the population sizes of CGSA and HPSOBOA increase, so do their performances, because CGSA and HPSOBOA perform weakly with small populations. Moreover, the proposed algorithm, NCHHO, and BHHO, which are variants of HHO, are more successful than the other algorithms in terms of the solution time metric. This could be explained by the fact that HHO's decision-making mechanism is less complex.
6. The proposed algorithm fails on the F13-F15 (Shekel 1, 2, 3) functions. This is caused by the fact that F13-F15 are multidimensional, multimodal, and deterministic; it can also be explained by the no free lunch theorem.
7. To make up for this deficiency, determining the random variables with more than one CM may be considered.
8. In the CEC2019 tests, the proposed algorithm obtains competitive results with better search capability, especially as the population size increases. It is natural that the performance of all algorithms increases with population size; however, it is significant that the performance of the proposed algorithm increases more than that of its competitors, especially the two CHHO variants. This indicates that determining all of the randomly determined variables of an MHA through CMs gives better results.
9. HHO variants obtain better results than CGSA and HPSOBOA, which could be related to the fact that the structure of HHO contains more random variables than CGSA and HPSOBOA.
Next, as a final benchmark, PM4C6 is applied to solve three engineering design problems that do not have analytical solutions.

Pressure vessel design
The pressure vessel design problem is a classic engineering design problem (Abualigah et al., 2021). The main purpose of this common benchmark problem is to minimize the cost of welding, manufacturing, and material of pressure vessels (equation 28), and no exact solution exists. The variables of this problem are shown in Fig. 7. This problem has four parameters: thickness of the shell (Ts), thickness of the head (Th), inner radius (R), and length of the section without the head (L) (equation 27). Also, the problem has four constraints (equation 29).
Consider z = [z1, z2, z3, z4] = [Ts, Th, R, L]. (27)

Minimize: f(z) = 0.6224 z1 z3 z4 + 1.7781 z2 z3² + 3.1661 z1² z4 + 19.84 z1² z3 (28)

Subject to:
g1(z) = −z1 + 0.0193 z3 ≤ 0,
g2(z) = −z2 + 0.00954 z3 ≤ 0,
g3(z) = −π z3² z4 − (4/3) π z3³ + 1296000 ≤ 0,
g4(z) = z4 − 240 ≤ 0, (29)

where 0 ≤ z1, z2 ≤ 99 and 10 ≤ z3, z4 ≤ 200. As in the other algorithms compared, the population size of the proposed algorithm is 30, the number of iterations is 500, and the results of 30 independent runs are recorded. The results obtained are given in Table 15. The values of the competitor algorithms are taken from articles in the literature. Accordingly, it is seen that the proposed algorithm achieves competitive results and is more successful than the HHO algorithm, with approximately 3% better results.
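The objective and constraints can be coded directly; the constraints below follow the standard formulation of this benchmark in the literature, and the static-penalty wrapper is one common (assumed, not paper-specified) way to feed the constrained problem to an unconstrained metaheuristic such as CHHO.

```python
import math

def vessel_cost(z):
    """Cost function of equation (28); z = (Ts, Th, R, L)."""
    z1, z2, z3, z4 = z
    return (0.6224 * z1 * z3 * z4 + 1.7781 * z2 * z3 ** 2
            + 3.1661 * z1 ** 2 * z4 + 19.84 * z1 ** 2 * z3)

def vessel_constraints(z):
    """Constraint values g_i(z), all required to satisfy g_i(z) <= 0."""
    z1, z2, z3, z4 = z
    return [
        -z1 + 0.0193 * z3,
        -z2 + 0.00954 * z3,
        -math.pi * z3 ** 2 * z4 - (4.0 / 3.0) * math.pi * z3 ** 3 + 1296000.0,
        z4 - 240.0,
    ]

def penalized_cost(z, penalty=1e6):
    """Static-penalty objective: add a large cost per unit of violation."""
    violation = sum(max(0.0, g) for g in vessel_constraints(z))
    return vessel_cost(z) + penalty * violation

# a frequently reported near-optimal design, for reference
z_ref = (0.8125, 0.4375, 42.0984, 176.6366)
```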

Tension/compression spring design
The tension/compression spring problem is a classic engineering problem (Faramarzi et al., 2020). The purpose of the problem is to minimize the weight of the spring (equation 31). The design parameters are the wire diameter (d), the mean coil diameter (D), and the number of active coils (N) (Fig. 8) (equation 30). The design is subject to constraints such as shear stress, surge frequency, and minimum deflection (equation 32).
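The objective of equation (31) and the constraints can be sketched as below; the constraint expressions follow the standard formulation of this benchmark in the literature, so they should be checked against the paper's equation (32) before reuse.

```python
def spring_weight(z):
    """Spring weight of equation (31); z = (d, D, N)."""
    d, D, N = z
    return (N + 2.0) * D * d ** 2

def spring_constraints(z):
    """g_i(z) <= 0: minimum deflection, shear stress, surge frequency,
    and outer-diameter limits in the standard formulation."""
    d, D, N = z
    return [
        1.0 - (D ** 3 * N) / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1.0,
    ]

# a frequently reported near-optimal design, for reference
z_ref = (0.051689, 0.356718, 11.288966)
```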

Three-bar truss design problem
This problem is frequently used in evaluating the performance of OAs. Figure 9 shows the shape of the beams and the forces acting on them. This problem aims to minimize the total weight of the structure (equation 34). The problem has two parameters, namely the cross-sectional area of the first and third beams and the cross-sectional area of the second beam (equation 33). In addition, the problem has constraints on stress, deflection, and buckling (equation 35).
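The weight objective and stress constraints can be sketched as follows; the bar length l = 100 and the load/stress values of 2 follow the standard formulation of this benchmark in the literature and are assumptions relative to the paper's equations (33)-(35).

```python
import math

def truss_weight(z, length=100.0):
    """Total weight of equation (34); z = (A1, A2), bar length l = 100."""
    a1, a2 = z
    return (2.0 * math.sqrt(2.0) * a1 + a2) * length

def truss_constraints(z, load=2.0, stress=2.0):
    """Stress constraints g_i(z) <= 0 in the standard formulation,
    with load P = 2 and allowable stress sigma = 2."""
    a1, a2 = z
    denom = math.sqrt(2.0) * a1 ** 2 + 2.0 * a1 * a2
    return [
        (math.sqrt(2.0) * a1 + a2) / denom * load - stress,
        a2 / denom * load - stress,
        1.0 / (a1 + math.sqrt(2.0) * a2) * load - stress,
    ]

# a frequently reported near-optimal design, for reference
z_ref = (0.78867, 0.40825)
```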
The population size of the proposed algorithm and the number of iterations are taken as 30 and 500, respectively, and the results of 30 independent runs are recorded to make a fair comparison. The results of the proposed algorithm and the compared algorithms are given in Table 17. The values of the competitor algorithms are taken from articles in the literature. The results show that the proposed algorithm is more successful than the other algorithms, achieving approximately 0.1% better results than HHO.

Conclusions
HHO, a nature-inspired metaheuristic algorithm (MHA), is hybridized with CMs to improve its performance. Some variables in the structure of HHO are determined by using 10 different CM methods, and the hybridization is performed with four alternative modifications. The proposed algorithms are applied to 15 well-defined test functions with unimodal and multimodal properties for comparison. Modification-1 increases the performance of HHO because of more balanced transitions between the exploration and exploitation phases. This balance is critical for an MHA, and thus modification-1 is kept fixed in the other alternatives. In modification-2, only the parameters in the exploration stage are determined by CM methods; the results show that CMs improve HHO's exploration capability. In the third approach, only the exploitation-stage parameters are determined by CMs; the results confirm that CMs increase HHO's exploitation capability. In the modification called PM4, parameters at every stage of HHO are determined by CM methods, and consequently, the best results are obtained with this fourth modification. The results of the proposed algorithm on standard engineering design problems are also promising. The effects of other chaotic methods on MHAs will be investigated in the future. Furthermore, the use of CHHO for the solution of alternative design problems, as well as the three-dimensional bin packing problem in the NP-hard class, is being tested. A new algorithm based on CMs to improve the performance of MHAs also needs to be developed.