The Arithmetic Optimization Algorithm

This work proposes a new meta-heuristic method called the Arithmetic Optimization Algorithm (AOA), which utilizes the distribution behavior of the main arithmetic operators in mathematics: Multiplication (M), Division (D), Subtraction (S), and Addition (A). AOA is mathematically modeled and implemented to perform optimization in a wide range of search spaces. The performance of AOA is evaluated on twenty-nine benchmark functions and several real-world engineering design problems to showcase its applicability. The performance, convergence behavior, and computational complexity of the proposed AOA are assessed under different scenarios. Experimental results show that AOA provides very promising results in solving challenging optimization problems compared with eleven other well-known optimization algorithms. Source codes of AOA are publicly available at http://www.mathworks.com/matlabcentral/fileexchange/84742 and https://seyedalimirjalili.com/projects.


Introduction
In recent decades, the ever-increasing complexity and difficulty of real-world problems have resulted in the need for more reliable optimization techniques, especially meta-heuristic optimization algorithms. These techniques are mostly stochastic and estimate optimal solutions for different optimization problems [1,2]. Such algorithms supersede conventional optimization algorithms due to their gradient-free mechanisms and high capability for local-optima avoidance [3,4]. The optimization process finds the optimal decision variables of a function or a problem by minimizing or maximizing its objective function. Generally speaking, real-world optimization problems often involve non-linear constraints, non-convex objectives, high computational cost, and wide search spaces [5,6], which make them challenging to solve.
Meta-heuristic optimization algorithms have two important search strategies: (1) exploration/diversification and (2) exploitation/intensification [7,8]. Exploration is the capability to explore the search space globally; this ability is related to avoiding local optima and escaping local-optima entrapment. In contrast, exploitation is the capability to search nearby promising solutions to improve their quality locally [9]. Excellent performance of an algorithm requires a proper balance between these two strategies [10][11][12]. All population-based algorithms use these features, but with different operators and mechanisms.
One popular classification of meta-heuristics is based on their inspiration: evolutionary algorithms, swarm intelligence algorithms, physics-based methods, and human-based methods [13,14]. Evolutionary algorithms simulate the mechanisms of natural evolution and use biologically inspired operators such as crossover and mutation. A conventional evolutionary algorithm is the Genetic Algorithm (GA), which is motivated by Darwinian evolutionary ideas. Conventional methods in this group include Evolutionary Programming [15], Differential Evolution [16], and Evolution Strategy [17].
Swarm intelligence algorithms are another group of meta-heuristics, which simulate the behavior of animals moving or hunting in groups [18,19]. The main characteristic of this group is the sharing of information among all individuals throughout the optimization course. Conventional methods in this group include the Krill Herd Algorithm [20], Salp Swarm Algorithm [20], Symbiotic Organisms Search [21], Sine Cosine Algorithm [22], and Dolphin Echolocation [23].
Physics-based methods are another group of optimization algorithms. This group draws on physical laws observed in the real world and typically describes the interaction of search agents according to rules rooted in physical processes. The most commonly utilized algorithms in this group are Simulated Annealing [24], the Gravitational Search Algorithm [25], the Multi-verse Optimizer [26], and Charged System Search [27].
The final group of optimization is human-based methods, motivated by human co-operations and human behavior in communities. One of the most used algorithms in this group is the Imperialist Competitive Algorithm [28], which is motivated by the human socio-political growth practice. Another algorithm in this group is the Teaching-Learning-Based Optimization Algorithm [29].
The theoretical studies published in the literature can be classified into three areas: modifying current algorithms, hybridizing various algorithms, and proposing novel algorithms. All three areas are very active, with a large body of algorithms and applications. Researchers do not rely on a single algorithm because, according to the No Free Lunch theorem [30], no single optimization algorithm can solve all optimization problems. Therefore, we need to modify existing algorithms or propose new ones to better solve current problems or provide solutions for new ones. This motivates our attempt to propose a new optimization algorithm called the Arithmetic Optimization Algorithm (AOA). The remainder of the paper is structured as follows.
The particular implementation of the proposed AOA is illustrated in Section 2. The results of the proposed AOA in solving various benchmark test functions and real-world problems are given in Section 3. Finally, the conclusion and potential future research directions are presented in Section 4.

The arithmetic optimization algorithm (AOA)
Generally, population-based algorithms begin their improvement (optimization) process with a set of randomly generated candidate solutions. This set of solutions is improved incrementally by a set of optimization rules and evaluated iteratively by a specific objective function; this is the essence of optimization methods. Since population-based algorithms seek the optimal solution of optimization problems stochastically, finding the global optimum in a single run is not guaranteed. Nevertheless, the probability of reaching the global optimal solution for a given problem increases with a sufficient number of random solutions and optimization iterations [22].
Despite the differences between meta-heuristic algorithms in the area of population-based optimization methods, the optimization process consists of two main phases: exploration versus exploitation. The former refers to the broad coverage of the search space by the algorithm's search agents so as to avoid local solutions. The latter refers to improving the accuracy of the solutions obtained during the exploration phase. In the following sub-sections, we present the exploration (diversification) and exploitation (intensification) mechanisms in the proposed AOA, which are achieved by the Arithmetic operators in mathematics, i.e., (1) Multiplication (M " × "), (2) Division (D " ÷ "), (3) Subtraction (S " − "), and (4) Addition (A " + "). Fig. 1 shows the exploratory and exploitative mechanisms in AOA. The algorithm is a population-based meta-heuristic capable of solving optimization problems without calculating their derivatives.

Inspiration
Arithmetic is a fundamental component of number theory and one of the important parts of modern mathematics, along with geometry, algebra, and analysis. The Arithmetic operators (i.e., Multiplication, Division, Subtraction, and Addition) are the traditional calculation measures used to study numbers [31]. We use these simple operators as a mathematical optimization mechanism to determine the best element, subject to specific criteria, from a set of candidate alternatives (solutions). Optimization problems occur in all quantitative disciplines, from engineering, economics, and computer science to operations research and industry, and the improvement of solution techniques has attracted the interest of mathematicians for centuries.
The main inspiration of the proposed AOA arises from the use of Arithmetic operators in solving arithmetic problems. In the following subsections, the behavior of the Arithmetic operators (i.e., Multiplication, Division, Subtraction, and Addition) and their influence on the proposed algorithm are discussed. Fig. 2 shows the hierarchy of the Arithmetic operators and their dominance from the outside to the inside. AOA is then proposed based on this mathematical model.

Initialization phase
In AOA, the optimization process begins with a set of candidate solutions (X), generated randomly as shown in Matrix (1); the best candidate solution in each iteration is considered the best-obtained solution, or near-optimum, so far.

X = [ x_{1,1}  ...  x_{1,j}  ...  x_{1,n} ]
    [ x_{2,1}  ...  x_{2,j}  ...  x_{2,n} ]
    [   ...    ...    ...    ...    ...   ]
    [ x_{N,1}  ...  x_{N,j}  ...  x_{N,n} ]                    (1)

Before searching, AOA selects between the exploration and exploitation phases using the Math Optimizer Accelerated (MOA) function, calculated by Eq. (2):

MOA(C_Iter) = Min + C_Iter × ((Max − Min) / M_Iter)            (2)

where MOA(C_Iter) denotes the function value at the current iteration, C_Iter denotes the current iteration, which is between 1 and the maximum number of iterations (M_Iter), and Min and Max denote the minimum and maximum values of the accelerated function, respectively.
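The MOA schedule of Eq. (2) is straightforward to sketch. The following Python snippet is illustrative only (the released AOA code is MATLAB), and the default values Min = 0.2 and Max = 1.0 are assumptions of this sketch, not values stated in the text:

```python
def moa(c_iter, m_iter, moa_min=0.2, moa_max=1.0):
    """Math Optimizer Accelerated (MOA) value at iteration c_iter (Eq. (2)).

    moa_min / moa_max play the roles of Min and Max; the defaults here are
    illustrative assumptions, not values taken from the paper's text.
    """
    return moa_min + c_iter * ((moa_max - moa_min) / m_iter)
```

The schedule grows linearly from Min toward Max as iterations progress, so the probability of entering the exploitation branch (r1 ≤ MOA) increases over the run.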

Exploration phase
In this section, the exploratory behavior of AOA is introduced. Among the Arithmetic operators, mathematical calculations using either the Division (D) operator or the Multiplication (M) operator produce highly distributed values, spread over various regions, which contributes to the exploration search mechanism. However, these operators (D and M) cannot easily approach the target because of their high dispersion, unlike the other operators (S and A). Fig. 3 shows the influence and behavior of the Arithmetic operators in mathematical calculations; a function based on the four operations is employed to show the differences in the operators' distributions of values. Hence, the exploration search may locate the near-optimal solution only after several attempts (iterations). In addition, the exploration operators (D and M) are employed at this stage of the optimization to support the other stage (exploitation) through enhanced communication between them.
The exploration operators of AOA search the space randomly across several regions based on two main strategies, the Division (D) search strategy and the Multiplication (M) search strategy, which are modeled in Eq. (3). This phase of searching (exploration by executing D or M, see Fig. 4) is entered when r1 > MOA, where r1 is a random number and MOA is given by Eq. (2). Fig. 4 shows how the operators converge toward the optimal area. The first operator (D), corresponding to the first rule in Eq. (3), is applied when r2 < 0.5, and the other operator (M) is neglected; otherwise, the second operator (M) performs the task instead of D (r2 is a random number). Note that a stochastic scaling coefficient is applied to produce more diversification and explore different regions of the search space. We employ the simplest rules that can simulate the behavior of the Arithmetic operators. The following position-updating equations are proposed for the exploration phase:

x_{i,j}(C_Iter + 1) = { best(x_j) ÷ (MOP + ε) × ((UB_j − LB_j) × µ + LB_j),   r2 < 0.5
                      { best(x_j) × MOP × ((UB_j − LB_j) × µ + LB_j),          otherwise       (3)

where x_{i,j}(C_Iter + 1) denotes the jth position of the ith solution in the next iteration, x_{i,j}(C_Iter) denotes the jth position of the ith solution at the current iteration, and best(x_j) is the jth position of the best-obtained solution so far. ε is a small positive number (to avoid division by zero), UB_j and LB_j denote the upper and lower bound values of the jth position, respectively, and µ is a control parameter that adjusts the search process, fixed to 0.5 according to the experiments in this paper.
The Math Optimizer Probability (MOP) coefficient is calculated by Eq. (4):

MOP(C_Iter) = 1 − C_Iter^(1/α) / M_Iter^(1/α)                  (4)

where MOP(C_Iter) denotes the function value at the current iteration, C_Iter denotes the current iteration, and M_Iter denotes the maximum number of iterations. α is a sensitive parameter that defines the exploitation accuracy over the iterations, fixed to 5 according to the experiments in this paper.
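As a sketch, the MOP schedule of Eq. (4) and the exploration rules of Eq. (3) might be coded as follows; the function names and the per-position random draws are conventions of this sketch, with µ = 0.5 and α = 5 following the settings stated in the paper:

```python
import random

def mop(c_iter, m_iter, alpha=5.0):
    """Math Optimizer Probability (Eq. (4)); alpha = 5 as in the paper."""
    return 1.0 - (c_iter ** (1.0 / alpha)) / (m_iter ** (1.0 / alpha))

def explore(best_x, lb, ub, mop_val, mu=0.5, eps=1e-12):
    """One exploration update per position (Eq. (3)):
    Division (D) strategy if r2 < 0.5, otherwise Multiplication (M)."""
    new = []
    for j, b in enumerate(best_x):
        scale = (ub[j] - lb[j]) * mu + lb[j]
        if random.random() < 0.5:       # r2 < 0.5: Division (D) search strategy
            new.append(b / (mop_val + eps) * scale)
        else:                           # otherwise: Multiplication (M) strategy
            new.append(b * mop_val * scale)
    return new
```

Because MOP decays toward 0 as C_Iter approaches M_Iter, the Division rule produces increasingly dispersed values over time while the Multiplication rule produces increasingly contracted ones.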

Exploitation phase
In this section, the exploitation strategy of AOA is introduced. Among the Arithmetic operators, mathematical calculations using either Subtraction (S) or Addition (A) produce densely distributed results, which corresponds to the exploitation search mechanism. These operators (S and A) can easily approach the target because of their low dispersion, unlike the other operators, as shown in Fig. 3. Hence, the exploitation search refines the near-optimal solution over several iterations. In addition, the exploitation operators (S and A) are employed at this stage of the optimization to intensify the search through enhanced communication between them.
This phase of searching (exploitation by executing S or A) is entered when r1 is not greater than the current MOA(C_Iter) value (see Eq. (2)). In AOA, the exploitation operators (Subtraction (S) and Addition (A)) search deeply within several dense regions based on two main strategies, the Subtraction (S) search strategy and the Addition (A) search strategy, which are modeled in Eq. (5):

x_{i,j}(C_Iter + 1) = { best(x_j) − MOP × ((UB_j − LB_j) × µ + LB_j),   r3 < 0.5
                      { best(x_j) + MOP × ((UB_j − LB_j) × µ + LB_j),   otherwise       (5)

where r3 is a random number and the remaining symbols are as defined for Eq. (3).
This phase exploits the search space through a deep local search, which is clear in Fig. 3. The first operator (S), corresponding to the first rule in Eq. (5), is applied when r3 < 0.5, and the other operator (A) is neglected; otherwise, the second operator (A) performs the task instead of S. The procedures in this phase mirror the partitioning of the previous phase. However, the exploitation operators (S and A) are designed to avoid getting stuck in local search areas. This assists the exploration strategies in finding the optimal solution while preserving the diversity of the candidate solutions. The µ parameter is carefully designed to produce a stochastic step at each iteration, maintaining exploration not only during the first iterations but also during the last ones, which is very helpful against local-optima stagnation, particularly in the late iterations. Eventually, the AOA algorithm stops when the end criterion is satisfied. The pseudo-code of the proposed AOA is described in Algorithm 1, and the intuitive, detailed process of AOA is shown in Fig. 5.

Algorithm 1: Pseudo-code of the AOA
1:  Initialize the AOA parameters α and µ.
2:  Initialize the solutions' positions randomly.
3:  while (C_Iter < M_Iter) do
4:      Calculate the Fitness Function (FF) for the given solutions.
5:      Find the best solution (determined best so far).
6:      Update the MOA value using Eq. (2).
7:      Update the MOP value using Eq. (4).
8:      for (i = 1 to Solutions) do
9:          for (j = 1 to Positions) do
10:             Generate random values between [0, 1] (r1, r2, and r3).
11:             if r1 > MOA then                        (Exploration phase)
12:                 if r2 < 0.5 then
13:                     Apply the Division math operator (D " ÷ ") and update
                        the ith solution's position using the first rule in Eq. (3).
14:                 else
15:                     Apply the Multiplication math operator (M " × ") and update
                        the ith solution's position using the second rule in Eq. (3).
16:                 end if
17:             else                                    (Exploitation phase)
18:                 if r3 < 0.5 then
19:                     Apply the Subtraction math operator (S " − ") and update
                        the ith solution's position using the first rule in Eq. (5).
20:                 else
21:                     Apply the Addition math operator (A " + ") and update
                        the ith solution's position using the second rule in Eq. (5).
22:                 end if
23:             end if
24:         end for
25:     end for
26:     C_Iter = C_Iter + 1
27: end while
28: Return the best solution (x).
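Putting the phases together, a minimal Python sketch of the whole AOA loop (for minimization) could look like the following. It mirrors Algorithm 1 but is not the authors' implementation; in particular, the MOA bounds moa_min and moa_max are assumed illustrative values:

```python
import random

def aoa(fitness, lb, ub, n_solutions=30, m_iter=500,
        mu=0.5, alpha=5.0, moa_min=0.2, moa_max=1.0, eps=1e-12, seed=None):
    """Minimal AOA sketch (minimization). mu = 0.5 and alpha = 5 follow the
    paper; moa_min / moa_max are assumed bounds for the MOA function."""
    rng = random.Random(seed)
    dim = len(lb)
    # Initialization phase: random candidate solutions within the bounds.
    pop = [[rng.uniform(lb[j], ub[j]) for j in range(dim)]
           for _ in range(n_solutions)]
    best_x, best_f = None, float("inf")
    for c_iter in range(1, m_iter + 1):
        # Evaluate solutions and track the best-obtained solution so far.
        for x in pop:
            f = fitness(x)
            if f < best_f:
                best_f, best_x = f, list(x)
        moa = moa_min + c_iter * ((moa_max - moa_min) / m_iter)      # Eq. (2)
        mop = 1.0 - (c_iter ** (1 / alpha)) / (m_iter ** (1 / alpha))  # Eq. (4)
        for i in range(n_solutions):
            for j in range(dim):
                r1, r2, r3 = rng.random(), rng.random(), rng.random()
                scale = (ub[j] - lb[j]) * mu + lb[j]
                if r1 > moa:                # exploration phase, Eq. (3)
                    if r2 < 0.5:
                        v = best_x[j] / (mop + eps) * scale   # Division (D)
                    else:
                        v = best_x[j] * mop * scale           # Multiplication (M)
                else:                       # exploitation phase, Eq. (5)
                    if r3 < 0.5:
                        v = best_x[j] - mop * scale           # Subtraction (S)
                    else:
                        v = best_x[j] + mop * scale           # Addition (A)
                pop[i][j] = min(max(v, lb[j]), ub[j])         # keep within bounds
    return best_x, best_f
```

For example, minimizing a 2D sphere function on [0, 10]^2 with this sketch drives the best fitness close to zero within a few hundred iterations.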
To achieve a fair comparison, the considered algorithms have been implemented using the same number of iterations (500) and population size (30), so the number of function evaluations is 15,000. The values used for the main controlling parameters of the comparative algorithms can be seen in Table 1.
The test functions are presented in Tables 2-5 and Figs. 6-9. The algorithms are compared using mean, standard deviation, Friedman ranking (Rank) test, and Wilcoxon signed-rank test.

Results comparisons using benchmark test functions
At the beginning of this section, we test the impact of changing the AOA parameter values on its performance. Different scenarios are constructed based on the values of the parameters µ and α of AOA. These parameters are assessed at one value from {0.1, 0.5, 0.9}; therefore, we have nine scenarios (as in Table 6). Table 7 presents the statistical results achieved in each scenario over the thirteen benchmark functions used. From these results, it can be seen that the fifth scenario (i.e., µ = 0.5 and α = 5) gives better results across all the tested functions; it is followed by the sixth and fourth scenarios, which take the second and third ranks, respectively. The performance of the proposed AOA algorithm is then tested with respect to the impact of dimensionality, as shown in Table 8. This is a standard test in the benchmark-function literature, which shows the effect of the problem dimension on the performance of AOA and demonstrates its ability on both low-dimensional and high-dimensional problems.
Furthermore, Table 8 shows how a population-based algorithm can maintain its search merits in high-dimensional problems. In this part of the experiments, the proposed AOA is employed to address the scalable unimodal and multimodal test functions (F1-F13) with various dimension sizes (30, 100, 500, and 1000). Table 8 illustrates that the results obtained by the proposed AOA on the thirteen test functions (F1-F13) with different dimensions are competitive; AOA achieved the best ranking when evaluated in 30 dimensions. This confirms that optimization algorithms generally work more efficiently when the dimension is low; moreover, the results obtained in high-dimensional settings are competitive and not far from those on the low-dimensional test functions. The AOA is applied to highly scalable optimization problems to prove its ability to solve complicated optimization problems. In this part of the experiments, as shown in Tables 9-12, the proposed AOA algorithm is compared with other well-known optimization algorithms using the thirteen benchmark functions (F1-F13) with several dimensions (30, 100, 500, and 1000). From these results, it is observed that the performance of AOA is superior in most cases and competitive in the remaining cases across the various dimensions.
Besides the given evaluation metrics (i.e., average and standard deviation), the Friedman ranking test has been carried out to compare the above-mentioned algorithms statistically, as shown in Table 13. The Friedman ranking test investigates the ranking of the comparative algorithms on the thirteen test functions (F1-F13) with different dimension sizes (10, 100, 500, and 1000). The obtained results show that the proposed AOA ranks first among the comparative algorithms, followed by GWO (second), CS (third), FA (fourth), GSA (fifth), BBO (sixth), FPA (seventh), GA (eighth), DE (ninth), MSO (tenth), PSO (eleventh), and finally BAT (twelfth). According to this test, the proposed AOA algorithm proved its ability to reach the optimal solution by obtaining the first rank on different test functions compared with the other comparative optimization algorithms.
To illustrate the convergence behavior of the proposed AOA, three qualitative metrics are also applied in 2D environments, as shown in Fig. 10: the search history, trajectory, and convergence curve.
To analyze the convergence performance of the proposed AOA compared with six other well-known optimization algorithms, Fig. 11 presents exemplary convergence curves on the thirteen test functions (F1-F13). It is clear from Fig. 11 that the proposed AOA shows steady, accelerating convergence on these test functions compared with the other comparative algorithms (GA, FPA, BBO, BAT, PSO, and GWO). Furthermore, AOA obtained better solutions than the other algorithms on these test functions in terms of global search behavior and convergence speed; that is, AOA achieves a faster convergence rate and a more effective global search ability.
Moreover, it is obvious from Fig. 11(i), (j), and (k) that AOA does not gain a distinct advantage in the first iterations. One reason for this phenomenon is that AOA distributes the solutions' positions across various local search areas instead of accumulating all the positions in a single area around the current best-obtained solutions. Nevertheless, this distribution mechanism, as mentioned earlier, increases the global search capability of AOA. AOA can largely avoid getting stuck in local search areas and shows competitive ability in searching global areas.
In this part of the experiments, the average running time of the proposed AOA algorithm compared with other well-known optimization algorithms is presented in Table 14. It can be observed that the proposed AOA needs less running time (in seconds) than the other comparative algorithms. Since AOA relies only on the simple arithmetic operations (Multiplication, Division, Subtraction, and Addition), it requires no computationally expensive operators during the optimization process. Consequently, we conclude that the computational performance of the proposed AOA algorithm is better than that of the other comparative algorithms. Moreover, according to the Friedman ranking test in Table 14, the algorithms are ranked by their average running time over the given thirteen test functions (F1-F13); the proposed AOA algorithm obtained the first rank, followed by BBO in second, and so on. These observations are also consistent with the computational complexity analysis of AOA.
The reported results in Table 15 confirm that AOA obtained superior and highly competitive results on the given test functions (F14-F29): superior in almost all test cases and competitive in one test case (F26). All optimization algorithms in these experiments obtained high-quality results, yet the proposed AOA almost always obtained the best result on the given test cases (F14-F29) in comparison with the other well-known optimization algorithms. AOA is proficient in achieving high-quality solutions and in outperforming its competitors. Moreover, the Friedman ranking test has also been applied to these results; the proposed AOA obtained the first ranking compared with the other comparative methods, followed by DE, CS, TLBO, FA, MFO, GWO, FPA, BBO, PSO, GA, and BAT. We can conclude that the proposed AOA is a highly competitive optimization algorithm compared with the eleven other well-known optimization algorithms. Fig. 12 also confirms the superiority of the AOA algorithm based on the average fitness values on the test functions (F14-F29).

Real-world applications
This section solves five engineering design problems using the proposed algorithm: the welded beam design problem, the tension/compression spring design problem, the pressure vessel design problem, the 3-bar truss design problem, and the speed reducer problem. To address these problems, a set of 30 solutions and 500 iterations are used in each run [41,42]. The obtained results are compared with several similar techniques published in the literature. The following subsections show the results of the proposed AOA compared with those of the state-of-the-art methods.
In this paper, bound-constrained and general constrained optimization problems are chosen to examine the effectiveness of the proposed AOA. For the bound-constrained optimization problems [43,44], each decision variable is required to satisfy a boundary limitation:

LB_j ≤ x_{i,j} ≤ UB_j,   j = 1, 2, ..., n                      (6)

where LB_j and UB_j are the lower and upper bounds of the position x_{i,j}, and n is the number of positions. Furthermore, a general constrained problem can be presented as:

minimize   f(X)
subject to g_i(X) ≤ 0,   i = 1, 2, ..., m
           h_k(X) = 0,   k = 1, 2, ..., l                      (7)

where m is the number of inequality constraints and l is the number of equality constraints.
In the performance evaluation of the proposed AOA, all the constrained optimization problems in Eq. (7) are mapped into bound-constrained problems by applying a static penalty function: for any infeasible solution, a penalty term is added to the underlying objective function. The static penalty approach is convenient to employ, needs only an auxiliary penalty function, and is suitable for a wide range of problems [6,45,46]. Using this procedure, the above-mentioned constrained optimization problem can be presented as:

F(X) = f(X) + Σ_{j=1}^{m} Pe_j × max{g_j(X), 0} + Σ_{k=1}^{l} Pe_k × max{|h_k(X)| − ε, 0}

where Pe_j and Pe_k are penalty coefficients, usually assigned a large value, and ε is the tolerance on the equality constraints, which is set to 1e-6 in this paper.
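A minimal sketch of the static penalty construction described above, assuming simple callables for the objective and constraints (the function names and the use of a single penalty coefficient pe are conventions of this sketch):

```python
def penalized(f, g_list, h_list, x, pe=1e6, eps=1e-6):
    """Static penalty: charge a large cost pe for each violated constraint.

    g_list holds inequality constraints g(x) <= 0; h_list holds equality
    constraints h(x) = 0 with tolerance eps (1e-6, as in the paper).
    """
    value = f(x)
    for g in g_list:                    # inequality constraints
        value += pe * max(g(x), 0.0)
    for h in h_list:                    # equality constraints
        value += pe * max(abs(h(x)) - eps, 0.0)
    return value
```

Feasible points keep their original objective value, while infeasible points are pushed far above any feasible objective, so a bound-constrained optimizer such as AOA can be applied unchanged.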

Welded beam design problem
The main objective of the welded beam design problem is to find the minimum fabrication cost by determining the optimal values of the four design variables shown in Fig. 13, namely the length of the attached part of the bar (l), the thickness of the weld (h), the height of the bar (t), and the thickness of the bar (b). These variables must satisfy seven constraints. The mathematical representation of this problem can be found in the original paper.
The proposed algorithm (AOA) is applied to the welded beam design problem and compared with several optimization algorithms published in the literature. From Table 17, we conclude that the results of AOA are better than those of all comparative algorithms. Hence, it can be stated that AOA can find the best possible design for the welded beam problem. Moreover, Fig. 14 shows the qualitative results for the welded beam design problem.

Tension/compression spring design problem
The main objective of the tension/compression spring design problem is to find the minimum weight of the tension/compression spring subject to its design constraints, namely shear stress, surge frequency, and deflection, as shown in Fig. 15. Three design variables need to be taken into account: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The mathematical representation of this problem can be found in its original paper. The proposed AOA is applied to this engineering problem and compared with a mathematical technique and other well-known optimization algorithms published in the literature, as shown in Table 18. It can be observed that AOA outperformed all other comparative algorithms. Fig. 16 shows the qualitative results for the tension/compression spring design problem.
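The paper defers the mathematical model to the original source. For illustration, the widely used formulation of this benchmark from the general literature (an assumption of this sketch, not restated in this paper) minimizes the spring weight (N + 2) · D · d^2 subject to four inequality constraints:

```python
def spring_weight(x):
    """Tension/compression spring weight for x = (d, D, N)."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Standard g_i(x) <= 0 constraints of this benchmark, as commonly
    stated in the general literature (not restated in this paper)."""
    d, D, N = x
    return [
        1.0 - (D ** 3 * N) / (71785.0 * d ** 4),                   # deflection
        (4.0 * D ** 2 - d * D) / (12566.0 * (D ** 3 * d - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,                           # shear stress
        1.0 - 140.45 * d / (D ** 2 * N),                           # surge frequency
        (d + D) / 1.5 - 1.0,                                       # outer diameter
    ]
```

Under this formulation, the PSO design listed in Table 18 (d ≈ 0.051728, D ≈ 0.357644, N ≈ 11.2445) evaluates to a weight near 0.01267 with all four constraints satisfied, which is consistent with the comparison in that table.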

Pressure vessel design problem
The main objective of the pressure vessel design problem is to minimize the overall cost of the cylindrical pressure vessel subject to its design constraints on forming, material, and welding, as shown in Fig. 17. Both ends of the vessel are capped, and the head has a hemispherical shape. Four design variables need to be taken into account in the optimization to satisfy the four constraints: the inner radius (R), the thickness of the head (T_h), the thickness of the shell (T_s), and the length of the cylindrical section excluding the head (L). The mathematical representation of this problem can be found in its original paper.
The results obtained by the AOA for the pressure vessel design problem are compared with several other optimization algorithms in Table 19. From this table, we conclude that AOA outperformed almost all the other comparative algorithms. Fig. 18 shows the qualitative results for the pressure vessel design problem.

3-bar truss design problem
This engineering design problem aims to design a truss with three bars so as to minimize its weight. The problem has a very restricted search space [39,47]. The structural parameters of this problem are shown in Fig. 19. The mathematical representation of this problem can be found in its original paper.
The results of AOA on the 3-bar truss design problem are shown in Table 20, where they are compared with several other optimization algorithms published in the literature. It can be observed that AOA is competitive with the well-known optimization techniques published in the literature. Fig. 20 shows the qualitative results for the 3-bar truss design problem.

Speed reducer problem
The main objective of the speed reducer design problem [48], which is considered a discrete problem, is to find the minimum weight of the speed reducer subject to four design constraints: bending stress of the gear teeth, covering stress, transverse deflections of the shafts, and stresses in the shafts, as shown in Fig. 21. Consequently, one discrete and six continuous variables are involved. Here, x1 is the face width, x2 is the module of the teeth, and x3 is a discrete design variable that represents the number of teeth in the pinion. Similarly, x4 is the length of the first shaft between bearings, and x5 is the length of the second shaft between bearings. The sixth and seventh design variables (x6 and x7) are the diameters of the first and second shafts, respectively. The mathematical representation of this problem can be found in its original paper.
The results obtained by the AOA for the speed reducer design problem are compared with several other optimization algorithms published in the literature, as shown in Table 21. It is clear that the results of AOA are better than those of all comparative algorithms, and AOA gives very competitive results on this problem. Moreover, Fig. 22 shows the qualitative results for the speed reducer design problem. Taken together, the results of this work show that the proposed AOA algorithm can be considered a reliable alternative to existing optimization algorithms. The proposed mechanisms allow the algorithm to exhibit both exploratory and exploitative behaviors when solving a wide range of problems.

Conclusion and potential future researches
In this paper, inspired by the behavior of the Arithmetic operators in mathematical calculations, a novel meta-heuristic optimization algorithm, the Arithmetic Optimization Algorithm (AOA), is proposed. In contrast to most well-known optimization algorithms, AOA has an easy and straightforward implementation, owing to its mathematical formulation, and can readily be adapted to tackle new optimization problems. It does not need many parameters to be adjusted beyond the population size and stopping criterion, which are standard parameters in all optimization algorithms. The random and adaptive parameters also expedite the divergence (exploration) and convergence (exploitation) of the search solutions in AOA. Comprehensive experiments were conducted to validate the performance of the proposed AOA. Firstly, a set of twenty-nine well-known benchmark test functions, including unimodal, multimodal, composite, and hybrid composition functions, was used to examine the exploration, exploitation, local-optima escape, and convergence behavior of the proposed AOA. Secondly, AOA was tested on benchmark test functions in two-dimensional space, where various qualitative performance metrics (search history, trajectory, the average of fitness values, and the best-obtained solution during the optimization process) were applied to observe and confirm its performance. Statistical ranking tests were conducted to confirm the statistical significance of the improvement of the proposed AOA on the benchmark test functions.
Finally, the proposed AOA was also used to solve five real-life engineering design problems (the welded beam design problem, tension/compression spring design problem, pressure vessel design problem, 3-bar truss design problem, and speed reducer design problem) to test and confirm its performance. According to the obtained results, the proposed AOA can find better solutions for most of the examined problems compared with other well-known optimization algorithms in terms of solution quality and computational performance. Moreover, the results of the proposed AOA sufficiently demonstrate its ability to avoid being trapped in local optima. Consequently, we achieved the intended goals of proposing a new algorithm in this paper.
We have proposed the AOA algorithm with a simple yet effective framework and a minimum number of operators to build the foundations of this algorithm. We leave exploring other arithmetic and evolutionary operators (e.g., mutation and crossover, multi-swarm composition, evolutionary updating composition, and chaotic maps) to future works. In addition, improved versions of the proposed AOA can be developed to solve optimization problems with binary, discrete, and multiple objectives. Levy flight, disruption, mutation, and opposition-based learning can be combined with AOA to enhance its performance. The AOA algorithm can also be hybridized with other stochastic components, including local search or global search methods, in the area of optimization. Finally, investigating the use of AOA in various other disciplines would be a valuable contribution, such as neural networks, image processing applications, feature selection, task scheduling in cloud computing, text and data mining applications, big data applications, signal denoising, resource management applications, smart home applications, network applications, industry and engineering applications, and other benchmark test functions and real-world problems.

Table 19
Results of the comparative algorithms for solving the pressure vessel design problem.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.