An improved hybrid Aquila Optimizer and Harris Hawks Optimization for global optimization

Abstract: This paper introduces an improved hybrid Aquila Optimizer (AO) and Harris Hawks Optimization (HHO) algorithm, namely IHAOHHO, to enhance searching performance on global optimization problems. In IHAOHHO, the valuable exploration and exploitation capabilities of AO and HHO are first retained, and then representative-based hunting (RH) and opposition-based learning (OBL) strategies are added to the exploration and exploitation phases to effectively improve the diversity of the search and the local optima avoidance capability of the algorithm, respectively. To verify its optimization performance and practicability, the proposed algorithm is comprehensively analyzed on standard and CEC2017 benchmark functions and three engineering design problems. The experimental results show that the proposed IHAOHHO has superior global search performance and faster convergence speed compared to the basic AO and HHO and selected state-of-the-art meta-heuristic algorithms.


Introduction
Meta-heuristic optimization algorithms have developed rapidly [1][2][3] because of their simple concepts, flexibility and ability to avoid local optima, and have been widely used to solve various complex optimization problems in the real world [4,5]. According to their sources of inspiration, meta-heuristics can be divided into three main categories: evolutionary, physics-based and swarm-intelligence-based techniques. Evolutionary algorithms are inspired by the laws of evolution. Among recent HHO-based hybrids, several works dynamically adjust candidate solutions to avoid solution stagnancy in HHO. Bao et al. [40] proposed HHO-DE by hybridizing the HHO and DE algorithms; its convergence accuracy, ability to avoid local optima and stability are greatly improved compared to HHO and DE. Houssein et al. [41] proposed a hybrid algorithm called CHHO-CS by combining HHO with CS and chaotic maps; CHHO-CS achieves a better balance between the exploration and exploitation phases and effectively avoids premature convergence. Kaveh et al. [42] hybridized HHO with the Imperialist Competitive Algorithm (ICA); combining the exploration strategy of ICA with the exploitation technique of HHO helps to achieve better search performance. The satisfactory outcomes of these HHO-based hybrid algorithms point to a promising research direction.
Thus, in view of the slow convergence and local optima stagnation of HHO, and inspired by the above research, we attempt a hybridization to enhance the performance of HHO and AO. An improved hybrid Aquila Optimizer and Harris Hawks Optimization, namely IHAOHHO, is proposed. First of all, we combine the exploration phase of AO with the exploitation phase of HHO, which extracts and retains the strong exploration and exploitation capabilities of the basic AO and HHO. Then, in order to further improve the performance of IHAOHHO, the representative-based hunting (RH) and opposition-based learning (OBL) strategies are introduced: RH is mixed into the exploration phase to increase diversification, and OBL is added into the exploitation phase to avoid local optima stagnation. Thus, the exploration, exploitation and local optima avoidance capabilities are effectively enhanced in the proposed algorithm. The standard and CEC2017 benchmark functions and three engineering design problems are utilized to test the exploration and exploitation capabilities of IHAOHHO. The proposed algorithm is compared with the basic AO and HHO and several well-known meta-heuristic algorithms, including HOA, SSA, WOA, GWO, MVO, IPOP-CMA-ES [43], LSHADE [44], the Sine-cosine and Spotted Hyena-based Chimp Optimization Algorithm (SSC) [45] and the RUNge Kutta Optimizer (RUN) [46]. The experimental results show that the proposed IHAOHHO algorithm outperforms the other state-of-the-art algorithms.
The rest of this paper is organized as follows: Section 2 provides a brief overview of the related work: the basic AO and HHO algorithms, as well as the RH and OBL strategies. Section 3 describes the proposed hybrid algorithm in detail. Section 4 presents the simulation experiments and analyzes the results. Finally, Section 5 concludes the paper.

Aquila Optimizer (AO)
AO is a novel swarm intelligence algorithm proposed by Abualigah et al. in 2021. It models four hunting behaviors of the Aquila for different kinds of prey: the Aquila switches hunting strategies flexibly depending on the prey, and then uses its speed together with its sturdy feet and claws to attack. A brief description of the mathematical model is as follows.
Step 1: Expanded exploration (high soar with a vertical stoop). In this method, the Aquila flies high above the ground and explores the search space widely; once it determines the area of the prey, it takes a vertical dive. This behavior is written in Eqs (1) and (2), where X_best(t) represents the best position obtained so far, and X_M(t) denotes the average position of all Aquilas in the current iteration. t and T are the current iteration and the maximum number of iterations, respectively, N is the population size and rand is a random number between 0 and 1.
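For concreteness, the expanded-exploration update can be sketched in Python. The formula follows the published AO description of Eq (1); the function and variable names are ours, so treat this as an illustrative reconstruction rather than the authors' exact code.

```python
import numpy as np

def ao_expanded_exploration(X, X_best, t, T, rng):
    """High soar with vertical stoop (AO Eq (1), reconstructed):
    X_new = X_best * (1 - t/T) + (X_M - X_best * rand),
    where X_M is the mean position of the current population."""
    X_M = X.mean(axis=0)                 # average position of all Aquilas
    rand = rng.random(X.shape)           # fresh uniforms in [0, 1)
    return X_best * (1.0 - t / T) + (X_M - X_best * rand)

rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(5, 3))    # 5 Aquilas in a 3-D search space
X_new = ao_expanded_exploration(X, X[0], t=1, T=100, rng=rng)
```

The (1 - t/T) factor shrinks the pull toward the best solution as iterations proceed, which is what narrows the search over time.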
Step 2: Narrowed exploration (contour flight with short glide attack). This is the most commonly used hunting method of the Aquila: it descends within the selected area, flies around the prey, and attacks with a short glide. The position update formula is represented as follows, where X_R(t) represents a random position of the hawk and D is the dimension size.
LF(D) represents the Levy flight function, which is presented as follows, where s and β are constant values equal to 0.01 and 1.5 respectively, and u and v are random numbers between 0 and 1. y and x are used to present the spiral shape of the search and are calculated as follows, where r_1 is the number of search cycles between 1 and 20, D_1 is composed of the integer numbers from 1 to the dimension size D, and ω is equal to 0.005.
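The Levy flight term LF(D) is commonly realized with the Mantegna algorithm; the sketch below assumes that realization, with s = 0.01 and β = 1.5 as stated above (the draws u and v are taken as Gaussian here, which is the usual Mantegna form, so this is an assumption rather than a transcription of the paper's equation).

```python
import math
import numpy as np

def levy_flight(D, s=0.01, beta=1.5, rng=None):
    """LF(D) via the Mantegna algorithm with s = 0.01 and beta = 1.5."""
    rng = rng if rng is not None else np.random.default_rng()
    # Scale of the numerator draw, from the Mantegna formula
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)        # numerator draw
    v = rng.normal(0.0, 1.0, D)         # denominator draw
    return s * u / np.abs(v) ** (1 / beta)

step = levy_flight(5, rng=np.random.default_rng(42))
```

The heavy-tailed step sizes produce occasional long jumps, which is exactly why Levy flight supports exploration within the selected area.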
Step 3: Expanded exploitation (low flight with a slow descent attack). In the third method, when the area of the prey is roughly determined, the Aquila descends vertically to make a preliminary attack; AO exploits the selected area to get close to and attack the prey. This behavior is presented as follows, where α and δ are the exploitation adjustment parameters fixed to 0.1, and UB and LB are the upper and lower bounds of the problem.
Step 4: Narrowed exploitation (walking and grabbing prey). In this method, the Aquila chases the prey along its escape trajectory and then attacks it on the ground. The mathematical representation of this behavior is as follows, where X(t) is the current position and QF(t) represents the quality-function value used to balance the search strategy. G_1 denotes the movement parameter of the Aquila during tracking, a random number in [-1, 1], and G_2 denotes the flight slope when chasing prey, which decreases linearly from 2 to 0.
The flowchart of AO is shown in Figure 1.

Harris Hawks Optimization (HHO)
HHO is a meta-heuristic optimization algorithm proposed by Heidari et al. in 2019. It is inspired by the unique cooperative foraging activities of Harris' hawks, which show a variety of chasing patterns according to the dynamic nature of the environment and the escaping patterns of the prey. These switching activities help to confuse the running prey, and the cooperative strategies help the hawks chase the detected prey to exhaustion, increasing its vulnerability. A brief description of the mathematical model is as follows.

Exploration phase
The Harris' hawks usually perch on some random locations, wait and monitor the desert to detect the prey. There are two perching strategies based on the positions of other family members and the prey, which are selected in accordance with the random q value.
where q is a random number between 0 and 1.

Transition from exploration to exploitation phase
The HHO algorithm transitions from the exploration to the exploitation phase based on the escaping energy of the prey, and then switches among different exploitative behaviors. The energy of the prey, which decreases during the escaping behavior, is modeled as follows.
where E represents the escaping energy of the prey and E_0 is the initial state of the energy. When |E| ≥ 1, the algorithm performs the exploration stage, and when |E| < 1, the algorithm performs the exploitation phase.
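A minimal sketch of this energy schedule, assuming the common HHO form E = 2·E0·(1 - t/T) with the initial energy E0 drawn uniformly from (-1, 1):

```python
import numpy as np

def escaping_energy(t, T, rng):
    """Escaping energy of the prey, assuming the common HHO form
    E = 2 * E0 * (1 - t/T), with initial energy E0 drawn from (-1, 1).
    |E| >= 1 triggers exploration; |E| < 1 triggers exploitation."""
    E0 = rng.uniform(-1.0, 1.0)
    return 2.0 * E0 * (1.0 - t / T)

rng = np.random.default_rng(1)
E_early = escaping_energy(t=0, T=100, rng=rng)    # magnitude up to 2
E_late = escaping_energy(t=99, T=100, rng=rng)    # magnitude at most 0.02
```

Because the envelope shrinks linearly with t, the algorithm spends early iterations mostly exploring and late iterations mostly exploiting.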

Exploitation phase
In this phase, four different chasing and attacking strategies are proposed on the basis of the escaping energy of the prey and the chasing styles of the Harris' hawks. Besides the escaping energy, a parameter r is also utilized to choose the chasing strategy; it indicates the chance of the prey successfully escaping (r < 0.5) or not (r ≥ 0.5) before the attack.
i. Soft besiege. When r ≥ 0.5 and |E| ≥ 0.5, the prey still has enough energy and tries to escape, so the Harris' hawks encircle it softly to make it more exhausted and then attack. This behavior is modeled as follows, where ΔX(t) indicates the difference between the position of the prey and the current position, and J represents the random jump strength of the prey.
ii. Hard besiege. When r ≥ 0.5 and |E| < 0.5, the prey has a low escaping energy, and the Harris' hawks encircle it readily and finally attack. In this situation, the positions are updated as follows.
iii. Soft besiege with progressive rapid dives. When |E| ≥ 0.5 and r < 0.5, the prey has enough energy to successfully escape, so the Harris' hawks perform a soft besiege with several rapid dives around the prey, trying to progressively correct their position and direction. This behavior is modeled as follows, where S is a random vector. Note that only the better position between Y and Z is selected as the next position.
iv. Hard besiege with progressive rapid dives. When |E| < 0.5 and r < 0.5, the prey does not have enough energy to escape, so the hawks perform a hard besiege to decrease the distance between their average position and the prey, and finally attack and kill it. The mathematical representation of this behavior is as follows. Note that only the better position between Y and Z will be the next position for the new iteration.
The flowchart of HHO is displayed in Figure 2.
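The dispatch among the four strategies depends only on |E| and r; a small helper makes the four cases explicit (strategy names as in the text; the position-update equations themselves are omitted here):

```python
def select_hho_strategy(E, r):
    """Choose among the four HHO exploitation strategies from |E| and r."""
    if r >= 0.5 and abs(E) >= 0.5:
        return "soft besiege"                                  # case i
    if r >= 0.5 and abs(E) < 0.5:
        return "hard besiege"                                  # case ii
    if r < 0.5 and abs(E) >= 0.5:
        return "soft besiege with progressive rapid dives"     # case iii
    return "hard besiege with progressive rapid dives"         # case iv

strategy = select_hho_strategy(E=0.8, r=0.7)   # -> "soft besiege"
```

The four conditions partition the (|E|, r) plane, so exactly one strategy applies at each step.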

Representative-based hunting (RH)
The strategy of representative-based hunting was first proposed in 2021 to improve the exploration and diversification of the GWO algorithm [47]. To achieve this, an archive called the representative archive (RA) is constructed to maintain representative solutions. A random representative search agent is selected from the five best search agents stored in the RA, and another random search agent is selected from the whole RA; meanwhile, two random search agents are selected from the population. These four selections efficiently improve the diversity, the exploration capability and the avoidance of premature convergence. The mathematical model of RH is as follows, where XR_best and XR_archive are randomly selected from the five best representative search agents and the whole archive, respectively.
X_rand1 and X_rand2 are randomly selected from the whole population. σ and the Cauchy distribution coefficient cd are calculated as follows, where σ is a nonlinearly decreasing parameter from 1 to 0, which shifts the search from exploration toward exploitation over the course of the iterations. The cd coefficient enhances the random behavior, favoring exploration and escape from local optima.
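The heavy-tailed behavior of the cd coefficient can be illustrated with a standard Cauchy draw via the inverse CDF; this sampling method is our choice for illustration, while the source defines cd by its own equation.

```python
import math
import random

def cauchy_sample(rng):
    """Standard Cauchy draw via the inverse CDF: tan(pi * (U - 1/2)).
    The heavy tails occasionally yield very long jumps, which is what
    lets a Cauchy-based coefficient favor exploration and escape from
    local optima."""
    return math.tan(math.pi * (rng.random() - 0.5))

samples = [cauchy_sample(random.Random(i)) for i in range(1000)]
```

Unlike Gaussian noise, a Cauchy sample has no finite variance, so a small fraction of the draws are extreme and push agents far from their current neighborhood.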

Opposition-based learning (OBL)
Opposition-based learning (OBL) is a powerful optimization tool proposed by Tizhoosh in 2005 [48]. The main idea of OBL is to simultaneously consider the fitness of an estimate and its corresponding counter-estimate in order to obtain a better candidate solution (Figure 3). An optimization process usually starts from a random initial solution. If the random solution is near the optimal solution, the algorithm converges fast; however, the initial solution may be far away from the optimum, or even at the exact opposite position, in which case convergence may take a long time or fail altogether. Thus, considering the opposite direction of the candidate solution at each step increases the probability of finding a better solution: whenever the opposite solution is beneficial, we can choose it as the candidate solution and proceed to the next iteration. The OBL concept has been used successfully in a variety of meta-heuristic algorithms [49][50][51][52][53] to improve convergence speed. OBL is defined by Eq (24), where x_j^OBL represents the opposite solution, and l_j and u_j are the lower and upper bounds of the problem in the j-th dimension. The opposite solution described by Eq (24) can effectively help the population jump out of local optima.
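Eq (24) is a one-liner; a minimal sketch, assuming box bounds l and u per dimension:

```python
import numpy as np

def opposite_solution(x, lb, ub):
    """Opposition-based learning, Eq (24): x_opp_j = l_j + u_j - x_j."""
    return lb + ub - x

x = np.array([1.0, -2.0, 3.0])
lb = np.full(3, -5.0)
ub = np.full(3, 5.0)
x_opp = opposite_solution(x, lb, ub)   # -> array([-1.,  2., -3.])
```

In practice, the fitness of x and x_opp are both evaluated and the better of the two is kept, which is how OBL helps the population jump out of local optima.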

The detail design of IHAOHHO
The AO simulates hunting behaviors for fast-moving prey within a wide flying area in the exploration phase. These behaviors give AO a strong global search ability and fast convergence speed. However, the selected search space is not exhaustively searched during the exploitation phase, and the role of Levy flight weakens in the late iterations, which tends to result in premature convergence. Thus, the AO algorithm possesses good exploration capability and fast convergence speed, but struggles to escape from local optima in the exploitation stage. For the HHO algorithm, experimental results show that its exploration phase suffers from insufficient population diversity and low convergence speed. In its exploitation phase, four different hunting strategies, chosen on the basis of the energy and escape probability of the prey, implement various position-updating methods. In addition, the transition mechanism from exploration to exploitation mimics the animals' behavior well: as a whole, the energy of the prey decreases as the iterations proceed, driving the algorithm into the exploitation stage. In this work, we retain the exploration phase of AO and the exploitation phase of HHO, and combine them through this transition mechanism. The exploration phase of HHO is dominated by randomization, which amounts to an almost blind search mechanism; in contrast, the position updating in the exploration phase of AO is based on the best solution and the average position with some randomness, which is more purposeful. Meanwhile, the four exploitation strategies based on the different values of E and r help the algorithm exploit the search space thoroughly. This hybridization gives full play to the advantages of the two basic algorithms: the global search capability, fast convergence speed and detailed exploitation are all preserved.
However, the diversity of the population in the exploration phase is still insufficient due to the lack of randomness. As described in Section 2.3, RH is designed to improve the exploration and diversification of an optimization algorithm; selections from different sub-populations efficiently improve the diversity and exploration capability. Thus, the RH strategy is utilized to further improve the diversification of the population in the exploration phase, which is conducive to finding the most promising region quickly. Besides, AO and HHO share a common defect of local optima stagnation, and the OBL strategy can use the opposite solution to make the population jump out of local optima. Therefore, the OBL strategy is added to the exploitation phase to enhance the ability to escape local optima. Together, these strategies effectively improve the convergence speed, convergence accuracy and overall optimization performance of the hybrid algorithm. The improved hybrid Aquila Optimizer and Harris Hawks Optimization algorithm is named IHAOHHO. The different phases of IHAOHHO are illustrated in Figure 4, the pseudo-code is given in Algorithm 1, and the summarized flowchart is shown in Figure 5.
1: Set the initial values of the population size N and the maximum number of iterations T
2: Initialize positions of the population X
3: While t < T
4:   Update x, y, cd
5:   For i = 1 to N
6:     If t < T/2 % Exploration part of AO
7:       Archiving
8:       If rand < 0.5
9:         Update the position of Xnewi and X_Ri using Eqs (1) and (21), respectively
10:        Calculate the fitness of Xnewi and X_Ri
11:        Update the position of Xi
12:      Else
13:        Update the position of Xnewi and X_Ri using Eqs (3) and (21), respectively
14:        Calculate the fitness of Xnewi and X_Ri
15:        Update the position of Xi
16:      End if
17:    Else % Exploitation part of HHO
18:      If r ≥ 0.5 and |E| ≥ 0.5
19:        Update the position of Xi using Eq (11)
20:      End if
21:      If r ≥ 0.5 and |E| < 0.5
22:        Update the position of Xi using Eq (14)
23:      End if
24:      If r < 0.5 and |E| ≥ 0.5
25:        Update the position of Xi using Eq (15)
26:      End if
27:      If r < 0.5 and |E| < 0.5
28:        Update the position of Xi using Eq (18)
29:      End if
30:    End if
31:  End for
32:  t = t + 1
33: End while
34: Return the best solution
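The control flow of Algorithm 1 can be sketched as a Python skeleton. The actual AO/HHO position updates (Eqs (1)-(21)) are replaced here by a plain random-perturbation placeholder, so only the structure (half the budget for exploration-style moves, half for exploitation-style moves, with greedy replacement) mirrors the pseudo-code; the function and variable names are ours.

```python
import numpy as np

def ihaohho_skeleton(f, lb, ub, N=20, T=100, seed=0):
    """Structural sketch of Algorithm 1: explore for the first half of the
    iteration budget, exploit for the second half, with greedy replacement.
    The real AO/HHO updates are replaced by a Gaussian perturbation."""
    rng = np.random.default_rng(seed)
    D = lb.size
    X = rng.uniform(lb, ub, size=(N, D))          # initialize positions
    fit = np.array([f(x) for x in X])
    for t in range(T):
        # wide moves while "exploring" (t < T/2), fine moves afterwards
        scale = 1.0 if t < T / 2 else 0.1
        for i in range(N):
            cand = np.clip(X[i] + scale * rng.normal(size=D), lb, ub)
            fc = f(cand)
            if fc < fit[i]:                        # greedy replacement
                X[i], fit[i] = cand, fc
    best_idx = fit.argmin()
    return X[best_idx].copy(), float(fit[best_idx])

sphere = lambda x: float(np.sum(x ** 2))
best, fbest = ihaohho_skeleton(sphere, np.full(5, -10.0), np.full(5, 10.0))
```

The two-stage structure makes explicit why the first half of the budget governs diversification and the second half governs refinement.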

Computational complexity analysis
Computational complexity is an important indicator for an algorithm and is used to evaluate its time consumption during operation. The computational complexity of IHAOHHO depends on three components: initialization, fitness evaluation, and the updating of hawks. In the initialization stage, generating the positions of N hawks costs O(N). The fitness evaluation for finding the best solution costs O(N) per iteration. Finally, the position updating of hawks and the fitness comparison in one iteration cost O(2 × N × D) and O(N) respectively, where D is the dimension size of the problem. Therefore, the total computational complexity of the proposed IHAOHHO algorithm is O(N × (1 + 2 × D × T + 2 × T)). As described in the literature, the computational complexities of AO and HHO are both O(N × (1 + D × T + T)). Compared to the basic AO and HHO, the computational complexity of IHAOHHO is slightly increased due to the RH and OBL strategies, which is acceptable given the improvement in convergence accuracy and speed.

Experimental results and discussion
In this section, we implement four main experiments to evaluate the performance of the proposed IHAOHHO algorithm. A standard benchmark function experiment is carried out first to evaluate the algorithm on 23 simple numerical optimization problems. Secondly, the CEC2017 benchmark functions are used to assess the algorithm on complex numerical optimization problems. Then, a sensitivity analysis investigates the effect of the control parameters. The last experiment covers engineering design problems and assesses the performance of IHAOHHO on real-world problems. All experiments are implemented in MATLAB R2016a on a PC with an Intel(R) Core(TM) i5-9500 CPU @ 3.00 GHz and 16 GB RAM running Windows 10.

Standard benchmark function experiments
We utilize 23 standard benchmark functions to test the performance of the IHAOHHO algorithm; they are divided into three types: unimodal, multimodal and fixed-dimension multimodal benchmark functions. The main characteristic of unimodal functions is that they have only one global optimum and no local optima, so they can be used to evaluate the exploitation capability and convergence rate of an algorithm. Unlike unimodal functions, multimodal and fixed-dimension multimodal functions have one global optimum and multiple local optima; these functions are utilized to evaluate the exploration and local optima avoidance capabilities. The benchmark function details are listed in Tables 1-3, and the parameter settings in Table 4. The average and standard deviation results of these test functions are exhibited in Table 5. Figure 6 shows the convergence curves of the 23 test functions, and partial search history, trajectory and average fitness maps are shown in Figure 7. The Wilcoxon signed-rank test results are listed in Table 6.

Results of the algorithms on unimodal test functions (F1-F7)
Unimodal test functions are usually used to investigate the exploitation capability of an algorithm since they have only one global optimum and no local optima. As seen from Table 5, IHAOHHO performs much better than the other selected algorithms except on F6. For all unimodal functions except F6, IHAOHHO obtains the smallest average values and standard deviations, indicating the best accuracy and stability among all these algorithms. Hence, the exploitation capability of the proposed IHAOHHO algorithm is competitive with all the selected meta-heuristic algorithms.
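Two of the classic benchmarks can illustrate the unimodal/multimodal distinction. The definitions below are the standard Sphere and Rastrigin functions; the mapping to the paper's F-numbers (F1, F9) follows the usual convention and is an assumption here, not confirmed by the tables.

```python
import numpy as np

def sphere(x):
    """Sphere function (conventionally F1): unimodal, global optimum
    f = 0 at x = 0 and no local optima."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def rastrigin(x):
    """Rastrigin function (conventionally F9): multimodal, with a grid of
    local optima and the global optimum f = 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

f_uni = sphere(np.zeros(30))       # -> 0.0
f_multi = rastrigin(np.zeros(30))  # -> 0.0
```

The cosine term in Rastrigin creates an exponentially growing number of local optima with dimension, which is exactly the property the multimodal suite uses to probe exploration.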

Results of the algorithms on multimodal test functions (F8-F23)
Multimodal test functions F8-F23 contain plentiful local optima, whose number increases exponentially with the dimension size of the problem. These functions are very useful for evaluating the exploration ability and local optima avoidance of an algorithm. It can be seen from Table 5 that IHAOHHO outperforms the other algorithms on most of the multimodal and fixed-dimension multimodal functions. For the multimodal functions F8-F13, IHAOHHO shows complete superiority over the other selected algorithms, with the best average values and standard deviations. For the ten fixed-dimension multimodal functions F14-F23, IHAOHHO performs reasonably well: it outperforms the others in terms of both average values and standard deviations on F21-F23, and achieves the best accuracy on F16-F18. These results reveal that IHAOHHO also provides superior exploration capability.

Analysis of convergence behavior
Search agents tend to change drastically in early iterations to investigate promising regions of the search space, and then exploit those regions in detail and converge gradually as the number of iterations increases. Convergence curves of IHAOHHO, AO, HHO, HOA, SSA, WOA, GWO and MVO on the 23 standard benchmark functions are given in Figure 6, which show the convergence rates of the algorithms. As we can see, IHAOHHO shows competitive performance compared to the other state-of-the-art algorithms. It presents faster convergence than all other algorithms on F1-F4 and F8-F11. On the remaining test functions, IHAOHHO may not hold much advantage in convergence speed because some competing algorithms also perform excellently, but its convergence accuracy is better than the other algorithms on most of the test functions.
The superiority of IHAOHHO in terms of convergence speed likely comes from the RH strategy in the exploration phase. To be specific, the RH strategy provides better randomness and diversity for the search agents, making them explore the search space widely and randomly; this increases the probability of finding the most promising region quickly. The advantage in convergence accuracy likely derives from the OBL strategy, which improves the randomness of the search agents: in each iteration they can choose the better of a solution and its opposite to jump out of local optima. These two strategies help the hybrid algorithm outperform the basic AO and HHO. Overall, IHAOHHO efficiently achieves excellent solutions on all 23 standard benchmark functions.

Qualitative results and analysis
Furthermore, Figure 7 shows the results of several representative test functions in terms of search history, trajectory, average fitness and convergence curve. The search history maps show the distribution of the IHAOHHO search agents while exploring and exploiting the search space; because of the fast convergence, the vast majority of search agents are concentrated near the global optimum. Inspecting the trajectory figures in Figure 7, the first search agent oscillates constantly in the first dimension of the search space, which suggests that it investigates the most promising areas and better solutions widely. This powerful search capability likely comes from the RH and OBL strategies. The average fitness indicates whether exploration and exploitation improve the initial random population and whether an accurate approximation of the global optimum is found in the end. It can be noticed that the average fitness oscillates like the trajectories in the early iterations, then decreases abruptly and levels off, showing the great improvement of the initial random population and the acquisition of the final global optimal approximation. At last, the convergence curves record the best fitness value found by the search agents after each iteration; they show that IHAOHHO converges very quickly.
The Wilcoxon signed-rank test is a non-parametric statistical test useful for evaluating the statistical performance differences between the proposed IHAOHHO algorithm and the other algorithms. As is well known, a p-value less than 0.05 indicates a significant difference between the two compared algorithms. The Wilcoxon signed-rank test results between IHAOHHO and the other seven algorithms for each benchmark function are listed in Table 6. According to the 0.05 criterion, IHAOHHO outperforms all other algorithms to varying degrees. This superiority is statistically significant on the unimodal functions F1-F6, which strongly indicates that IHAOHHO possesses high exploitation capability. IHAOHHO also shows better results on the multimodal functions F8-F23, which suggests a high exploration capability. To sum up, the IHAOHHO algorithm provides better results than the other comparative algorithms on almost all benchmark functions.
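The Wilcoxon signed-rank test itself is available in SciPy; the sketch below runs it on hypothetical per-run fitness samples (synthetic data, not the paper's results) and applies the 0.05 criterion from the text.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
# Hypothetical per-run best fitness values for two algorithms on one
# benchmark (30 independent runs each); synthetic illustration only.
alg_a = rng.normal(1.0, 0.2, 30)
alg_b = alg_a + rng.normal(0.5, 0.1, 30)   # systematically worse than alg_a

stat, p = wilcoxon(alg_a, alg_b)           # paired, non-parametric test
significant = p < 0.05                     # the criterion used in the text
```

The test is paired, so it compares the two algorithms run by run on the same function, which is why it suits the 30-run experimental protocol described above.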

Experiments on the CEC2017 benchmark function
The standard benchmark function experiments demonstrate the superior performance of the proposed IHAOHHO algorithm on simple problems; complex functions help to investigate its performance on intricate problems. One of the most challenging test suites, CEC2017 [54], is selected to further test the performance of IHAOHHO; it contains 30 functions, more than half of which are challenging hybrid and composition functions, as shown in Table 7. The test results are compared to some well-known and recently proposed algorithms, among which IPOP-CMA-ES and LSHADE are reported as the best performers on CEC2017 in the literature. As described in the previous section, each algorithm is run 30 times with 500 iterations, and the average and standard deviation results are presented in Table 8. From the comparison results, the proposed IHAOHHO ranks 3rd, following IPOP-CMA-ES and LSHADE, and completely exceeds the SSC, RUN and HOA methods. This reveals that IHAOHHO can also achieve good results on complex functions.

Sensitivity analysis
The performance of an optimization algorithm is affected by the values of its control parameters, so their influence should be investigated to select appropriate values. The IHAOHHO algorithm has three parameters, σinitial, σfinal and Exponent, in Eq (22). At the end of the iterations, the algorithm needs to search in detail and minimize randomness as much as possible; thus, σfinal should be set to 0 to remove the random term in Eq (21). The remaining two parameters, σinitial and Exponent, are assessed on representative standard and CEC2017 benchmark functions in Table 9. The mean-square error values are obtained on benchmark functions from different categories, including unimodal, multimodal and fixed-dimension multimodal standard benchmark functions, and unimodal, multimodal, hybrid and composite CEC2017 functions, under different parameter settings. The best performance, shown in bold, is obtained with σinitial = 1 and Exponent = 2.
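A plausible form for the decreasing parameter of Eq (22) is a polynomial decay between σinitial and σfinal controlled by Exponent; the exact expression is in the source, so the sketch below is an assumption that merely reproduces the reported best setting.

```python
def sigma(t, T, sigma_initial=1.0, sigma_final=0.0, exponent=2):
    """Assumed polynomial decay between sigma_initial and sigma_final;
    the defaults reproduce the reported best setting (sigma_initial = 1,
    Exponent = 2, sigma_final = 0)."""
    return (sigma_initial - sigma_final) * (1 - t / T) ** exponent + sigma_final

start, mid, end = sigma(0, 100), sigma(50, 100), sigma(100, 100)
# -> 1.0, 0.25, 0.0
```

With Exponent = 2 the parameter falls faster than linearly, so the random term shrinks early and the late iterations search in detail, consistent with the reasoning above.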

Experiments on engineering design problems
Considering equality and inequality constraints is a necessary part of optimization because most real-world optimization problems are constrained. In this subsection, three well-known constrained engineering design problems, the speed reducer design problem, the tension/compression spring design problem and the three-bar truss design problem, are solved to further verify the performance of IHAOHHO. The results of IHAOHHO are compared to those of the basic AO and HHO, as well as HOA, SSA, WOA, GWO and MVO. The parameter settings are the same as in the previous numerical experiments. For all tests, each algorithm is run 15 times independently. The best result among the 15 runs for each algorithm and the Wilcoxon signed-rank test results between IHAOHHO and the other algorithms are shown in Tables 10-12.

Speed reducer design problem
This problem aims to optimize seven variables to minimize the speed reducer's total weight: the face width (x1), module of teeth (x2), a discrete design variable representing the number of teeth in the pinion (x3), length of the first shaft between bearings (x4), length of the second shaft between bearings (x5), diameter of the first shaft (x6) and diameter of the second shaft (x7). Four kinds of constraints, covering the bending stress of the gear teeth, the surface stress, the stresses in the shafts and the transverse deflections of the shafts as shown in Figure 8, should be satisfied. The mathematical formulation is represented as follows. Compared to the other algorithms, IHAOHHO clearly achieves better results on the speed reducer design problem, as shown in Table 10. The p-values in Table 10 show the significant difference between IHAOHHO and the other algorithms, proving the statistical superiority of the proposed algorithm.

Tension/compression spring design problem
In this case, the intention is to minimize the weight of the tension/compression spring shown in Figure 9. Constraints on surge frequency, shear stress and deflection must be satisfied during the optimum design. There are three parameters to be optimized: the wire diameter (d), the mean coil diameter (D) and the number of active coils (N). The mathematical form of this problem can be written as follows. The experimental results listed in Table 11 show that IHAOHHO attains the best weight values compared to all other algorithms, and obtains significantly different results compared to the others except HOA.
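The spring problem has a standard formulation widely used in the meta-heuristics literature; the objective and constraint constants below follow that textbook form and are an assumption here, not taken from the paper's tables.

```python
def spring_weight(d, D, N):
    """Objective of the standard tension/compression spring formulation:
    minimize (N + 2) * D * d^2 (wire diameter d, mean coil diameter D,
    number of active coils N)."""
    return (N + 2) * D * d ** 2

def spring_constraints(d, D, N):
    """The four inequality constraints g_i(x) <= 0 of the standard
    formulation (deflection, shear stress, surge frequency and a
    diameter limit); constants follow the textbook statement."""
    g1 = 1 - (D ** 3 * N) / (71785 * d ** 4)
    g2 = ((4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
          + 1 / (5108 * d ** 2) - 1)
    g3 = 1 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1
    return [g1, g2, g3, g4]

# An approximate near-optimal design frequently reported in the literature
w = spring_weight(0.0517, 0.3567, 11.289)   # roughly 0.0127
```

A constrained solver would penalize any design with g_i > 0; at near-optimal designs the first two constraints are nearly active, which is what makes this benchmark discriminative.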

Three-bar truss design problem
The three-bar truss design problem is a classical optimization application in the civil engineering field. The main intention of this case is to minimize the weight of a truss with three bars by considering two structural parameters, as illustrated in Figure 10. Deflection, stress and buckling are the three main constraints. The mathematical formulation of this problem is given as follows:
Table 12. Comparison of IHAOHHO results with other competitors for the three-bar truss design problem.

Conclusions
In this paper, an improved hybrid Aquila Optimizer and Harris Hawks Optimization algorithm is proposed by combining the exploration part of AO with the exploitation part of HHO. The advantageous parts of the basic AO and HHO are combined to retain their well-behaved exploration and exploitation capabilities. Two strategies, representative-based hunting and opposition-based learning, are incorporated into the proposed IHAOHHO to further improve its optimization performance: the representative-based hunting strategy effectively enhances the diversity of the population and the coverage of the search space, while the opposition-based learning strategy helps keep the algorithm from being trapped in local optima. The algorithm is evaluated on standard benchmark functions and the CEC2017 test functions to analyze its exploration, exploitation and local optima avoidance capabilities. The experiments show competitive results compared to other state-of-the-art meta-heuristic algorithms, indicating that IHAOHHO has better optimization performance. Three engineering design problems are solved as well to further verify the superiority of the algorithm, and the results are also competitive with other meta-heuristic algorithms.
The performance of the proposed algorithm on the CEC2017 benchmark functions still needs to be improved, and its exploration and exploitation capabilities deserve further investigation to overcome its limitations on this test suite. Moreover, the transition from the exploration to the exploitation phase of IHAOHHO is simple; in future work, the transition mechanism can be improved to provide a better balance between the two phases. Besides, the current IHAOHHO algorithm can only solve single-objective optimization problems; a multi-objective version of IHAOHHO may be developed to solve multi-objective problems in the future.