Enhanced Aquila optimizer algorithm for global optimization and constrained engineering problems

The Aquila optimizer (AO) is a recently developed swarm algorithm that simulates the hunting behavior of Aquila birds. On complex optimization problems, especially highly complex ones, AO may converge slowly or become trapped in sub-optimal regions. This paper addresses these problems using three different strategies: a restart strategy, opposition-based learning, and chaotic local search. The developed algorithm, named mAO, was tested using 29 CEC 2017 functions and five different constrained engineering problems. The results demonstrate the superiority and efficiency of mAO in solving many optimization problems.


Introduction
The process of selecting appropriate and applicable variable values for a specific task is known as optimization [1][2][3]. Optimization exists in almost every domain, including job shop scheduling [4], feature selection [5][6][7], image processing [8,9], face detection and recognition [10], predicting chemical activities [11], classification [12,13], network allocation [14], internet of vehicles [15], routing [16], and neural networks [17]. Due to the nature of real-world problems, optimization becomes very challenging and presents many difficulties, such as multiobjectivity [18], memetic optimization [19], large-scale optimization [20], fuzzy optimization [21], uncertainties [22], and parameter estimation [23]. Metaheuristic algorithms have been used to solve such problems due to their advantages, such as flexibility, efficiency, and finding a near-optimal solution in a reasonable time. Despite the power and superiority of the Aquila optimizer (AO), and as stated by the no-free-lunch theorem, the AO cannot solve all optimization problems. Hence, the AO still needs further enhancement and development.
This paper introduces a novel version of the AO in which three different strategies have been used to overcome the original optimizer drawbacks such as getting stuck in local optima and slow convergence. These strategies are the chaotic local search (CLS), opposition-based learning (OBL) and the restart strategy (RS). Using OBL and the RS enhances the AO exploratory search capabilities whereas the CLS improves AO exploitative search abilities.
The main contributions of this paper are as follows:
• A novel Aquila algorithm has been developed using three strategies: OBL, the RS, and the CLS.
• The developed optimizer has been compared with the original AO and nine other algorithms, namely, the CSA [40], EHO [60], GOA [46], LSHADE [61], Lshade-EpSin [62], MFO [63], MVO [64], and PSO [24].
• A scalability test and ablation experiments (removing one strategy at a time from the developed algorithm) have been carried out.
• mAO was tested using 29 benchmark functions and five constrained engineering problems.
This paper is organized as follows: Section 2 discusses the background and preliminaries of the original algorithm, OBL, the CLS, and the RS; Section 3 introduces the structure of the modified optimizer and its complexity; Sections 4 and 5 discuss the results of the proposed mAO and its competitors on CEC2017 and five different constrained engineering problems; and Section 6 concludes the paper.

Aquila optimizer (AO)
The Aquila algorithm is one of the latest population-based swarm intelligence optimizers, developed by Abualigah et al. [49]. The Aquila can be considered among the most well-known birds of prey in the Northern Hemisphere. It is brown with a golden back. The Aquila uses its agility and strength, together with its wide talons and strong feet, to catch various types of prey, usually squirrels, rabbits, marmots, and hares [65].
Aquila optimizer (AO) simulates the four different Aquila strategies in hunting. The next subsection shows Aquila's mathematical model.

Mathematical model
AO begins with a random set of individuals that can be represented mathematically as follows:

$$X = \begin{bmatrix} X_{1,1} & X_{1,2} & \cdots & X_{1,j} & \cdots & X_{1,D} \\ X_{2,1} & X_{2,2} & \cdots & X_{2,j} & \cdots & X_{2,D} \\ \vdots & \vdots & & \vdots & & \vdots \\ X_{N-1,1} & X_{N-1,2} & \cdots & X_{N-1,j} & \cdots & X_{N-1,D} \\ X_{N,1} & X_{N,2} & \cdots & X_{N,j} & \cdots & X_{N,D} \end{bmatrix} \tag{2.1}$$

where X is the matrix of agent positions (solutions), each component of which is computed using the following equation:

$$X_{i,j} = rand \times (UB_j - LB_j) + LB_j, \quad i = 1, \ldots, N, \; j = 1, \ldots, D \tag{2.2}$$

where D refers to the number of decision variables, N indicates the number of agents, UB_j and LB_j are the j-th upper and lower boundaries, and X_{i,j} refers to the j-th decision variable of the i-th agent. AO simulates Aquila hunting in four different phases, where the optimizer switches between exploration and exploitation using the following condition:

$$\begin{cases} \text{Perform exploration} & \text{if } t \leq \frac{2}{3} \times T \\ \text{Perform exploitation} & \text{otherwise} \end{cases}$$

Expanded exploration (X_1)

In this phase, the Aquila determines the hunting area and selects the prey by a vertical stoop and high soar. The mathematical formula for this behavior is given by the following two equations:

$$X_1(t+1) = X_{best}(t) \times \left(1 - \frac{t}{T}\right) + \left(X_M(t) - X_{best}(t) \times rand\right) \tag{2.3}$$

$$X_M(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \tag{2.4}$$

where X_M(t) indicates the mean position of all agents in the t-th iteration, X_{best}(t) is the best Aquila position found so far, rand is a randomly generated number in the interval [0, 1], t is the current generation, T is the maximum number of generations, and N is the number of Aquilas.
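As an illustration, the random initialization and the expanded-exploration move described above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the sphere function used as a stand-in objective and the parameter values are our own choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def initialize(n, dim, lb, ub):
    # X[i, j] = rand * (UB_j - LB_j) + LB_j, uniform inside the bounds
    return lb + rng.random((n, dim)) * (ub - lb)

def expanded_exploration(X, X_best, t, T):
    # X1(t+1) = X_best * (1 - t/T) + (X_M - X_best * rand)
    X_M = X.mean(axis=0)                       # mean position of all agents
    return X_best * (1 - t / T) + (X_M - X_best * rng.random())

n, dim, lb, ub = 20, 5, -10.0, 10.0
X = initialize(n, dim, lb, ub)
fitness = np.sum(X ** 2, axis=1)               # sphere function as a stand-in objective
X_best = X[np.argmin(fitness)]
X1 = expanded_exploration(X, X_best, t=1, T=500)
```

Note how the move is pulled toward a blend of the current best and the population mean, with the (1 − t/T) factor shrinking the contribution of the best position as the run progresses.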

Narrowed exploration (X_2)
This is the technique most frequently used by Aquila for hunting. To attack the prey, short gliding with contour flight is used. Aquila's position is updated as follows:

$$X_2(t+1) = X_{best}(t) \times Levy(D) + X_R(t) + (y - x) \times rand \tag{2.5}$$

where X_R indicates a randomly generated Aquila position, rand is a random real number between 0 and 1, D is the number of variables, and Levy refers to the lévy flight function, which is presented as follows:

$$Levy(D) = s \times \frac{u \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}} \right)^{1/\beta} \tag{2.6}$$

where s is a fixed value equal to 0.01, u and v are random numbers, and β is a constant equal to 1.5. Both y and x are used to model the spiral shape and can be computed using the following two equations:

$$y = r \times \cos(\theta), \qquad x = r \times \sin(\theta) \tag{2.7}$$

where r and θ are calculated as follows:

$$r = r_1 + U \times D_1, \qquad \theta = -\omega \times D_1 + \frac{3\pi}{2} \tag{2.8}$$

where U equals 0.00565, ω equals 0.005, D_1 consists of integer values from 1 to D, and r_1 takes a value between 1 and 20.
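A lévy step generator of this kind is usually implemented with the Mantegna scheme, drawing u and v from a normal distribution scaled by σ; the following is a sketch of that common implementation, not the paper's code:

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

def levy(dim, beta=1.5, s=0.01):
    # sigma from the standard Mantegna formula for Levy-stable step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.standard_normal(dim) * sigma       # numerator random draw, scaled
    v = rng.standard_normal(dim)               # denominator random draw
    return s * u / np.abs(v) ** (1 / beta)     # heavy-tailed step vector

step = levy(5)                                 # occasional long jumps aid exploration
```

The heavy tail of the resulting distribution is what lets narrowed exploration combine many small local moves with occasional long jumps.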

Expanded exploitation (X_3)
In the third technique, the prey area is determined and agents vertically perform a preliminary attack with low flight. Agents attack the prey as follows:

$$X_3(t+1) = (X_{best}(t) - X_M(t)) \times \alpha - rand + ((UB - LB) \times rand + LB) \times \delta \tag{2.9}$$

where X_M(t) indicates the mean position in the t-th iteration, X_{best}(t) is the best Aquila position found so far, rand is a randomly generated number in the interval [0, 1], α and δ are exploitation adjustment parameters equal to 0.1, and UB and LB refer to the upper and lower boundaries.

Narrowed exploitation (X_4)
In this phase, the Aquila chases the prey along its escape trajectory and attacks it in flight, which can be modeled as follows:

$$X_4(t+1) = QF(t) \times X_{best}(t) - (G_1 \times X(t) \times rand) - G_2 \times Levy(D) + rand \times G_1 \tag{2.10}$$

$$QF(t) = t^{\frac{2 \times rand - 1}{(1-T)^2}}, \qquad G_1 = 2 \times rand - 1, \qquad G_2 = 2 \times \left(1 - \frac{t}{T}\right) \tag{2.11}$$

where QF(t) is a quality function used to balance the search, G_1 refers to the various AO motions used to track the prey, and G_2 represents the flight slope used in chasing the prey, decreasing linearly from 2 to 0.

Opposition-based learning
Opposition-based learning (OBL) is a technique introduced by Tizhoosh [66] that has been employed by many researchers to improve swarm optimizers. For example, Hussien [67] embedded OBL in SSA to avoid getting trapped in local optima. Moreover, Hussien and Amin used OBL with chaotic local search to improve the exploration abilities of HHO [7], and Zhao et al. employed OBL with the arithmetic optimization algorithm [1]. OBL works by comparing the original solution with its opposite. Let x be a real number that falls in the interval [lb, ub]; then its opposite can be calculated from the following equation:

$$\bar{x} = lb + ub - x \tag{2.12}$$

where lb and ub are the lower and upper boundaries, respectively, and x̄ indicates the opposite solution. If x is a vector of multiple values, then x̄ can be computed from the following equation:

$$\bar{x}_j = lb_j + ub_j - x_j, \quad j = 1, \ldots, D \tag{2.13}$$

where x_j indicates the j-th value of x, and ub_j and lb_j refer to the j-th upper and lower boundaries, respectively.
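The opposite-point computation is a one-liner; the sample vector and bounds below are illustrative only:

```python
import numpy as np

def opposite(x, lb, ub):
    # Opposite point: x_bar_j = lb_j + ub_j - x_j (element-wise)
    return lb + ub - x

x  = np.array([1.0, -3.0, 7.5])
lb = np.array([-10.0, -10.0, 0.0])
ub = np.array([10.0, 10.0, 10.0])
x_bar = opposite(x, lb, ub)   # -> [-1.0, 3.0, 2.5]
```

In practice, both x and x̄ are evaluated and the fitter of the two is kept, so each OBL application costs one extra fitness evaluation per agent.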

Chaotic local search
The chaotic local search (CLS) technique has been integrated with many swarm optimizers such as WOA [68], HHO [7], brain storm optimization [69], and the Jaya algorithm (JAYA) [70]. The CLS technique is most often used with the logistic map, which is given by the following equation:

$$o_{s+1} = C \times o_s \times (1 - o_s) \tag{2.14}$$

where s is the index of the current iteration, C is a control parameter equal to 4, and the initial value o_1 ∈ (0, 1) must not equal 0.25, 0.50, or 0.75. The local search explores the neighborhood of the best solution found so far. CLS can be represented by the following equation:

$$Cs^{i} = (1 - \mu) \times X_{best} + \mu \times \bar{C}^{i} \tag{2.15}$$

where Cs^i refers to the candidate generated by CLS in iteration i, and C̄^i can be calculated as follows:

$$\bar{C}^{i} = lb + C^{i} \times (ub - lb) \tag{2.16}$$

µ is a shrinking factor that can be computed from the following equation:

$$\mu = \frac{T - t + 1}{T} \tag{2.17}$$

where t and T refer to the current and maximum number of iterations, respectively.
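A CLS step can be sketched as below. The exact form of the blend and the shrinking factor are assumptions reconstructed from the symbols named in the text (the extracted equations were incomplete), so treat this as an illustration rather than the paper's exact scheme:

```python
import numpy as np

def logistic_map(o, C=4.0):
    # o_{s+1} = C * o_s * (1 - o_s); o_1 in (0, 1), not 0.25, 0.5, or 0.75
    return C * o * (1 - o)

def chaotic_local_search(x_best, lb, ub, t, T, o):
    # Assumed form: blend the best solution with a chaotic point inside [lb, ub]
    mu = (T - t + 1) / T                       # shrinking factor (assumed form)
    c_bar = lb + o * (ub - lb)                 # chaotic value mapped into the bounds
    return (1 - mu) * x_best + mu * c_bar

o = 0.7
for _ in range(3):                             # iterate the chaotic sequence
    o = logistic_map(o)
cand = chaotic_local_search(np.array([0.5, -0.5]), -1.0, 1.0, t=450, T=500, o=o)
```

Since µ shrinks toward 1/T as t grows, late CLS candidates stay close to the incumbent best, giving the fine-grained exploitation the text describes.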

Restart strategy
During the search, some agents may fail to find a better location because they are trapped in local optimum regions. Such agents degrade the overall search, as they consume generations without improving the search process. The restart strategy (RS), proposed by Zhang et al. [71], helps such poor agents jump out of local regions. RS counts, for each individual, the number of consecutive generations without improvement: if the i-th agent is updated, its trial value is reset to zero; otherwise, the trial value is increased by 1. If the trial value reaches a given threshold, the individual's position is changed using the following two equations:

$$X_i = lb + rand \times (ub - lb) \tag{2.18}$$

$$X_i = rand \times (ub + lb) - X_i \tag{2.19}$$

where ub and lb refer to the upper and lower boundaries and rand indicates a random number in the interval [0, 1].
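The trial-counter bookkeeping can be sketched as follows. The threshold value and the uniform re-initialization shown are assumptions for illustration; the paper does not state its threshold:

```python
import numpy as np

rng = np.random.default_rng(3)

LIMIT = 5                                      # stagnation threshold (assumed value)
n, dim, lb, ub = 10, 4, -10.0, 10.0
X = lb + rng.random((n, dim)) * (ub - lb)
trial = np.zeros(n, dtype=int)                 # consecutive non-improving generations

def on_generation(i, improved):
    # Reset or grow the counter; restart the agent once the limit is hit.
    trial[i] = 0 if improved else trial[i] + 1
    if trial[i] >= LIMIT:
        X[i] = lb + rng.random(dim) * (ub - lb)  # uniform random restart
        trial[i] = 0

for _ in range(LIMIT):                         # agent 0 never improves -> restarted
    on_generation(0, improved=False)
```

The counter costs O(N) memory and makes the restart decision purely local to each agent.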

Shortcoming of Aquila algorithm
Like other swarm optimizers, AO may stagnate in sub-optimal areas and converge slowly, especially when handling complex, high-dimensional problems.

Architecture of modified AO
Our proposed algorithm, termed mAO, addresses the limitations of the original optimizer. In the proposed mAO, three different strategies are used to improve the classical AO, namely opposition-based learning, the restart strategy, and chaotic local search. The OBL strategy is used both in the initialization phase and in the agent-position updating process: in initialization, OBL selects the best N solutions from the pool X ∪ X̄ to ensure the algorithm starts with a good set of agents, whereas in the updating process it improves the algorithm's exploration abilities. Moreover, a chaotic local search mechanism is used to refine the best solution found so far, which in turn improves the whole population. Finally, a restart strategy is employed to relocate the worst individuals when they have fallen into local regions. The pseudo-code of the developed optimizer is given in Algorithm 1.
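The overall flow can be sketched as a simplified skeleton. This is a sketch, not the authors' Algorithm 1: a single AO move (expanded exploration) stands in for all four phases, and the OBL probability, CLS blend, and restart threshold are our own simplifications:

```python
import numpy as np

rng = np.random.default_rng(0)

def mao(fobj, n, dim, lb, ub, T, limit=5):
    # mAO skeleton: OBL init, AO move, OBL jump, restart strategy, CLS.
    X = lb + rng.random((n, dim)) * (ub - lb)
    pool = np.vstack([X, lb + ub - X])             # OBL initialization pool X U X_bar
    pool_fit = np.array([fobj(x) for x in pool])
    keep = np.argsort(pool_fit)[:n]                # keep the best n agents
    X, fit = pool[keep], pool_fit[keep]
    gbest, gval = X[0].copy(), fit[0]
    trial = np.zeros(n, dtype=int)                 # RS stagnation counters
    for t in range(1, T + 1):
        for i in range(n):
            # expanded exploration stands in for all four AO moves here
            cand = gbest * (1 - t / T) + (X.mean(axis=0) - gbest * rng.random())
            if rng.random() < 0.5:                 # OBL jump on the candidate
                cand = lb + ub - cand
            cand = np.clip(cand, lb, ub)
            f = fobj(cand)
            if f < fit[i]:
                X[i], fit[i], trial[i] = cand, f, 0
            else:
                trial[i] += 1
            if trial[i] >= limit:                  # RS: restart stagnant agents
                X[i] = lb + rng.random(dim) * (ub - lb)
                fit[i], trial[i] = fobj(X[i]), 0
            if fit[i] < gval:
                gbest, gval = X[i].copy(), fit[i]
        mu = (T - t + 1) / T                       # CLS around the global best
        o = rng.random()
        cls = np.clip((1 - mu) * gbest + mu * (lb + 4 * o * (1 - o) * (ub - lb)), lb, ub)
        f_cls = fobj(cls)
        if f_cls < gval:
            gbest, gval = cls, f_cls
    return gbest, gval

sol, val = mao(lambda x: float(np.sum(x ** 2)), n=15, dim=5, lb=-10.0, ub=10.0, T=100)
```

Tracking the global best outside the population matters here: without it, the restart strategy could overwrite the best solution found so far.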

Complexity of mAO
The complexity of the proposed algorithm can be computed by calculating the complexity of each phase separately, i.e., initialization, evaluation, and updating process.
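For concreteness, the standard accounting (a sketch, assuming N agents, D dimensions, T iterations, and a per-solution fitness cost of O(F)) is:

$$O(\text{mAO}) = \underbrace{O(N \times D)}_{\text{initialization}} + \underbrace{O(T \times N \times F)}_{\text{evaluation}} + \underbrace{O(T \times N \times D)}_{\text{updating}} = O(T \times N \times (D + F))$$

The OBL pool, the CLS refinement, and the RS re-initialization each add at most a constant factor to these terms, so the asymptotic complexity of mAO matches that of the original AO.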

Parameter setting
To validate the proposed approach, 29 functions from CEC2017 have been used to test mAO's performance. These CEC2017 functions are very challenging and contain different types of functions (unimodal, multimodal, composite, and hybrid). The description of the CEC2017 functions is shown in Table 4, where opt. refers to the global optimal value. All experiments were performed in Matlab 2021b on an Intel Core i7 with 8 GB of RAM. The parameter settings of all experiments are shown in Table 2. mAO is compared with the original Aquila optimizer and nine other well-known and powerful swarm algorithms, namely the crow search algorithm [40], elephant herding optimization [60], grasshopper optimization algorithm [46], LSHADE [61], Lshade-EpSin [62], moth-flame optimization [63], multi-verse optimization [64], and particle swarm optimization [24]. The parameters of each mentioned algorithm are given in Table 3.

CEC2017
The results of the developed optimizer and its competitors are shown in Table 5 in terms of best (min), worst (max), mean (average), and standard deviation. From the table, it can be seen that the suggested technique performs well in solving all function types. For example, in terms of average, it ranked first in all unimodal functions (F1 and F3) and all multimodal functions (F4-F10). Overall, mAO achieved better results than the original optimizer and the other competitors: it ranked first in almost all functions, and it ranked first in 5 out of the 10 composite problems. Besides the statistical measures, the convergence curve is a powerful tool for comparing a new algorithm with its competitors and assessing whether its convergence is fast or slow. As shown in Figures 1-3, mAO achieves fast convergence on all mentioned function types.
Furthermore, a statistical comparison using the Wilcoxon test [72,73] has been carried out between the developed algorithm and all other competitors. Table 6 shows the p-values, which indicate a significant difference in the outputs of the different optimizers. From Table 6, the results prove the superiority of the mAO algorithm in finding near-optimal solutions compared with the others. To show the power and efficiency of the proposed algorithm, a scalability test has been performed on 10 and 50 dimensions using the same functions and the same comparison algorithms. The results of this scalability test are shown in Table 7. It can be seen that mAO is better than the other competitors in almost all functions.
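A Wilcoxon signed-rank comparison of this kind can be reproduced with SciPy; the run values below are made-up placeholders, not the paper's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical best-fitness values from 10 paired runs of two optimizers
mao_runs   = np.array([1.20, 0.90, 1.10, 1.30, 0.80, 1.00, 1.20, 0.95, 1.05, 1.15])
rival_runs = np.array([2.10, 1.80, 2.40, 1.90, 2.20, 2.00, 1.70, 2.30, 2.05, 1.95])

stat, p = wilcoxon(mao_runs, rival_runs)       # paired, two-sided by default
significant = p < 0.05                         # reject H0 of no difference at the 5% level
```

The test is non-parametric, so it makes no normality assumption about the fitness distributions, which is why it is the standard choice for comparing stochastic optimizers run-by-run.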
To demonstrate the benefit of integrating the three strategies with AO, we also test the standard AO with each operator separately. Table 8 shows the average and standard deviation of four algorithms: AO with OBL (AOOBL), AO with CLS (AOCLS), AO with RS (AORS), and the developed algorithm mAO, which combines AO with CLS, RS, and OBL.

Table 3. Setting of all meta-heuristic algorithms parameters.


Engineering problems
In this section, the performance of the developed optimizer is tested on several real-world constrained problems that contain many inequality constraints. These problems are the welded beam design problem, pressure vessel design problem, tension/compression spring design problem, speed reducer design problem, and three-bar truss design problem. The mathematical formulations of these problems can be found in [68,74,75].
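The paper does not state how the inequality constraints are handled inside mAO; a common approach, shown here only as an assumed sketch, is a static penalty added to the objective:

```python
import numpy as np

def penalized(fobj, constraints, x, rho=1e6):
    # Static penalty: fobj(x) + rho * sum of squared violations of g_i(x) <= 0
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return fobj(x) + rho * violation

# Toy example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f_feasible   = penalized(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]], np.array([2.0]))
f_infeasible = penalized(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]], np.array([0.5]))
# f_feasible carries no penalty; f_infeasible is dominated by the penalty term
```

With a large ρ, any infeasible point scores far worse than any feasible one, so an unconstrained optimizer like mAO can be applied to the penalized objective unchanged.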

Welded beam design problem
The first constrained problem used in this study is welded beam design (WBD), which was proposed by Coello [76]. The aim of this problem is to minimize the cost of the welded beam; its design structure is shown in Figure 4. WBD has 7 constraints and 4 design variables, namely bar thickness (b), bar height (t), weld thickness (h), and length of the attached part of the bar (l); its full mathematical formulation can be found in [76]. Results for WBD are shown in Table 9, where mAO is compared with the classical AO, GSA [46], GA [77], SSA [78], MPA [79], HHO [80,81], WOA [82], and CSA [40]. From the table, it is notable that mAO outperformed the other swarm optimizers with an objective value of 1.6565 and decision values (x_1, x_2, x_3, x_4) = (0.1625, 3.4705, 9.0234, 0.2057).

Tension/compression spring design problem
The third constrained engineering problem discussed here is tension/compression spring design (TCSD), which was introduced by Arora [89]. Its main objective is to minimize the weight of the tension spring by determining the optimal design-variable values that satisfy its constraints. TCSD has 3 variables, namely the wire diameter (d = x_1), the mean coil diameter (D = x_2), and the number of active coils (N = x_3). The TCSD design is given in Figure 6 and its mathematical formulation is as follows:

Minimize:
$$f(x) = (x_3 + 2) \, x_2 \, x_1^2$$

Subject to:
$$g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \leq 0$$
$$g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \leq 0$$
$$g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \leq 0$$
$$g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \leq 0$$

with 0.05 ≤ x_1 ≤ 2.0, 0.25 ≤ x_2 ≤ 1.3, and 2.0 ≤ x_3 ≤ 15.0. Table 11 shows the results of TCSD, where mAO is compared with RO [90], WOA [82], PSO [87], [91], OBSCA [92], GSA [93], and CPSO [87]. From the table, we can conclude that mAO achieved better results than the original AO and the other competitors, with a fitness value of 0.011056 and decision values (x_1, x_2, x_3) = (0.0502339, 0.32282, 10.5244).
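The TCSD objective and constraints, in the standard formulation from the literature, can be evaluated directly; the sample point below is a hand-picked illustrative design, not an optimizer output:

```python
import numpy as np

def tcsd(x):
    # x = (x1, x2, x3) = (wire diameter d, mean coil diameter D, active coils N)
    x1, x2, x3 = x
    f = (x3 + 2) * x2 * x1 ** 2                        # spring weight
    g = np.array([
        1 - (x2 ** 3 * x3) / (71785 * x1 ** 4),
        (4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
            + 1 / (5108 * x1 ** 2) - 1,
        1 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1,
    ])
    return f, g                                        # feasible when all g <= 0

f, g = tcsd(np.array([0.05, 0.30, 14.0]))              # sample design point
```

Pairing this evaluator with a penalty wrapper turns TCSD into an unconstrained problem a swarm optimizer can search directly.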

Three-bar truss design problem
The last engineering problem addressed in this manuscript is the three-bar truss design (TBD) problem. TBD is a fractional, nonlinear civil engineering problem introduced by Nowacki [101]. Its objective is to minimize the truss weight. It has two variables, and its mathematical formulation is shown below:

Minimize:
$$f(x) = (2\sqrt{2}\, x_1 + x_2) \times l$$

Subject to:
$$g_1(x) = \frac{\sqrt{2}\, x_1 + x_2}{\sqrt{2}\, x_1^2 + 2 x_1 x_2} P - \sigma \leq 0$$
$$g_2(x) = \frac{x_2}{\sqrt{2}\, x_1^2 + 2 x_1 x_2} P - \sigma \leq 0$$
$$g_3(x) = \frac{1}{\sqrt{2}\, x_2 + x_1} P - \sigma \leq 0$$

with 0 ≤ x_1, x_2 ≤ 1, l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm². mAO's results are compared with those of CS [102], GOA [46], DEDS [103], MBA [104], and PSO-DE [84].
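The standard TBD formulation from the literature can be checked numerically; the point below is the well-known near-optimal design x ≈ (0.7887, 0.4082) reported in many comparative studies, used here only to sanity-check the evaluator:

```python
import numpy as np

def three_bar_truss(x, l=100.0, P=2.0, sigma=2.0):
    # Standard formulation: l in cm, P and sigma in kN/cm^2
    x1, x2 = x
    f = (2 * np.sqrt(2) * x1 + x2) * l                 # truss weight
    denom = np.sqrt(2) * x1 ** 2 + 2 * x1 * x2
    g = np.array([
        (np.sqrt(2) * x1 + x2) / denom * P - sigma,    # stress in bar 1
        x2 / denom * P - sigma,                        # stress in bar 2
        1 / (np.sqrt(2) * x2 + x1) * P - sigma,        # stress in bar 3
    ])
    return f, g                                        # feasible when all g <= 0

f, g = three_bar_truss(np.array([0.7887, 0.4082]))     # near-optimal point
```

At this point the weight is close to the widely cited optimum of about 263.9, and the first stress constraint is essentially active, which is characteristic of the problem's optimum lying on the constraint boundary.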

Conclusions and future work
In this study, a novel AO version called mAO is suggested to tackle various optimization issues. mAO is based on three different techniques: 1) opposition-based learning, to improve the optimizer's exploration phase; 2) a restart strategy, to replace the worst agents with entirely random ones; and 3) chaotic local search, to add more exploitation ability to the original algorithm. mAO was tested using 29 CEC2017 functions and five different engineering optimization problems. Statistical analysis and experimental results show the significance of the suggested optimizer in solving various optimization issues. However, mAO, like other swarm-based algorithms, converges slowly on high-dimensional problems, so it cannot solve all types of optimization problems.
In future work, mAO can be applied to feature selection, job scheduling, combinatorial optimization problems, and stress suitability. Binary and multi-objective versions may also be proposed.