An Advanced Crow Search Algorithm for Solving Global Optimization Problem

Abstract: The conventional crow search (CS) algorithm is a swarm-based metaheuristic algorithm that has few parameters, is easy to apply to problems, and is utilized in various fields. However, it has a disadvantage: it falls easily into local minima because it relies mainly on exploitation to find approximate solutions. Therefore, in this paper, we propose the advanced crow search (ACS) algorithm, which improves the conventional CS algorithm and solves the global optimization problem. The ACS algorithm differs from the conventional CS algorithm in three ways. First, we propose using a dynamic AP (awareness probability) to perform exploration of the global region for the selection of the initial population. Second, we improved the exploitation performance by introducing a formula that probabilistically selects the best crow instead of selecting a crow at random. Third, we improved the exploration phase by adding an equation for local search. The ACS algorithm proposed in this paper improves on the exploitation and exploration performance of other metaheuristic algorithms on both unimodal and multimodal benchmark functions, and it found the best solutions for five engineering problems.


Introduction
The optimization of engineering problems is of great interest to many researchers, and various strategies for incorporating optimization into the engineering field are being studied [1]. As an example, metaheuristic algorithms that are easy to apply to engineering problems are being developed for optimization. These algorithms are applied to various fields in order to optimize engineering problems by minimizing costs, shortening paths, and maximizing performance.
Metaheuristic algorithms originated in 1965 with the development of the evolution strategy (ES) algorithm [2], and algorithms based on various natural phenomena have since been proposed. Figure 1 classifies metaheuristic algorithms by the natural phenomena that they emulate. Metaheuristic algorithms can be divided into four main categories: evolution-based, swarm-based, physics-based, and human behavior-based [3][4][5][6][7]. Evolution-based algorithms draw on the genetic characteristics and evolutionary mechanisms of nature; representative algorithms include ES, evolutionary programming (EP), the genetic algorithm (GA), genetic programming (GP), and differential evolution (DE). Swarm-based algorithms draw on the collective behavior of organisms such as birds or ants; representative algorithms include ant colony optimization (ACO), particle swarm optimization (PSO), artificial bee colony (ABC), cuckoo search, and crow search (CS). Physics-based algorithms draw on physical phenomena; representative algorithms include simulated annealing (SA), harmony search (HS), gravitational search (GS), black hole (BH), and sine cosine (SC).
Finally, human behavior-based algorithms draw on intelligent human behavior; representative algorithms include human-inspired (HI), social emotional optimization (SEO), brain storm optimization (BSO), teaching learning-based optimization (TLBO), and social-based (SB) algorithms [8]. All metaheuristic algorithms perform optimization through exploitation and exploration. If an algorithm relies mainly on exploration, it can easily locate the region of the global minimum but has difficulty finding the exact solution. Conversely, algorithms that rely mainly on exploitation can find accurate solutions but are prone to falling into local minima [9][10][11]. The convergence performance of an algorithm therefore depends greatly on how exploitation and exploration are used, and the two should be used in harmony [12]. Swarm-based algorithms are efficient in searching for global optima and are easy to apply to a variety of optimization problems. They also lend themselves well to parallelization, making them a popular choice for many researchers [13]. With these advantages, swarm-based algorithms are applied in various engineering fields, and many researchers are working to improve their convergence performance. The conventional CS algorithm, proposed by Askarzadeh in 2016 and belonging to the swarm-based category, performs optimization by mimicking the high intelligence of crows [14]. Crows' brains are large relative to their body size, making crows intelligent enough to recognize food or humans. Because crows are intelligent, they can remember where other birds hide food and steal this hidden food when the other birds are not around. The conventional CS algorithm performs optimization using these characteristics of crows and rests on the following four principles:
• Crows live in groups.
• Crows remember the location of their hidden prey.
• Crows steal food from other birds.
• Crows protect their food caches from theft with a certain probability.
The conventional CS algorithm uses a small number of parameters and demonstrates excellent convergence performance. Because it is easy to apply and performs well, it is widely used in civil and architectural engineering, electrical engineering, mechanical engineering, and image processing [15]. However, the conventional CS algorithm is prone to falling into local minima because it performs optimization mainly through exploitation rather than exploration. Given that real optimization problems are often characterized by multimodal functions, optimization algorithms should rely mainly on exploration rather than exploitation to find accurate solutions [16]. To address this issue, Mohammadi et al. proposed the modified crow search (MCS) algorithm in 2018, which adopts a new method for selecting a target crow and varies fl (flight length) depending on the distance between crows [17]. In the same year, Díaz et al. proposed the improved crow search (ICS) algorithm, which uses a random adoption method based on AP (awareness probability) and a Lévy flight that varies with the fitness in generation t [18]. AP is one of the important parameters of the conventional CS algorithm; depending on the size of AP, the algorithm performs exploitation or exploration. In 2019, Zamani et al. proposed the conscious neighborhood-based crow search (CCS) algorithm, which utilizes three strategies: neighborhood-based local search (NLS), non-neighborhood-based global search (NGS), and wandering around-based search (WAS) [19]. In the same year, Javidi et al. proposed the enhanced crow search (ECS) algorithm, which adds three mechanisms [20], and its convergence performance was evaluated against the conventional CS algorithm with the three mechanisms applied. In 2020, Wu et al. proposed the Lévy flight crow search (LFCS) algorithm, which combines Lévy flight with the conventional CS algorithm [21]. Recently, in 2022, Necira et al. proposed the dynamic crow search (DCS) algorithm, which utilizes an AP that decreases linearly with the number of generations and an fl that is randomly drawn from a probability density function [22].
In this paper, the advanced crow search (ACS) algorithm is proposed as a means to solve the global optimization problem. The ACS algorithm uses a dynamic AP, which varies nonlinearly with the number of generations, and follows the best result of previous generations with a probability-based approach rather than randomly chasing the prey selected by other crows. In addition, instead of sampling randomly from the entire problem range, the algorithm reduces the randomly sampled space as the number of generations increases. Section 2 briefly reviews the conventional CS algorithm and the studies that improve upon it, Section 3 explains the ACS algorithm and compares its convergence performance under parameter changes, Section 4 solves numerical optimization problems and compares the results with those of other algorithms, and Section 5 presents the conclusions drawn from this study.

Related Work
In this section, we explain the process of optimizing the conventional CS algorithm and briefly outline research projects that have improved upon the conventional CS algorithm.

Conventional CS Algorithm
The conventional CS algorithm proposed by Askarzadeh describes the intelligent behavior of crows and performs optimization by repeating the following five steps [14]:
Step 1. Define the problem and set the parameters
The problem undergoing optimization is defined, and the initial values of the parameters used in the conventional CS algorithm are set. The parameters are AP (awareness probability), fl (flight length), N (flock size), pd (dimension of the problem), and t_max (maximum number of generations).
Step 2. Initialize the memory of crows and evaluate
The size of the crow group, determined by pd and N, is expressed as Equation (1), and the initial position of each crow is randomly assigned within the range between lb (lower boundary) and ub (upper boundary). In this context, i is 1, 2, . . . , N; t is 1, 2, . . . , t_max; and d is 1, 2, . . . , pd. The initial position of each randomly placed crow is memorized as Equation (2), and the initial position of the crow is evaluated by the objective function.
Step 3. Generate and evaluate the new positions for crows
Step 3 is the most important step that the conventional CS algorithm uses to perform optimization. Crow i (x_i,t) follows crow j (m_j,t), and two cases arise depending on whether crow j is aware of crow i's following. In the first case, crow j (m_j,t) does not notice that crow i (x_i,t) is following. The position of crow i is adjusted by Equation (3):

x_i,t+1 = x_i,t + r_i × fl × (m_j,t − x_i,t) (3)

where r_i is a random number between 0 and 1. A local (fl < 1) or global (fl > 1) area search is performed depending on the size of fl, and the best convergence performance is known to occur when fl = 2.0. Figure 2 is a diagram that expresses this characteristic.
In the second case, crow j (m_j,t) notices that crow i (x_i,t) is following. The position of crow i is then adjusted by Equation (4), moving to a random position in the range between lb and ub:

x_i,t+1 = a random position (4)

One of the two cases is selected in each generation by the AP, which is usually set to 0.1. Depending on the size of AP, the conventional CS algorithm performs exploitation or exploration in order to find the optimal solution. The positions of the newly moved crows are again evaluated by the objective function.
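The two cases of the position update can be sketched in Python (a minimal illustration; the function and variable names are ours, not the authors' code):

```python
import numpy as np

def csa_update(x_i, m_j, fl, AP, lb, ub, rng):
    """One position update for crow i following crow j (Equations (3) and (4)).

    x_i : current position of crow i
    m_j : memorized position of crow j (its hidden food)
    fl  : flight length; fl < 1 favors local search, fl > 1 global search
    """
    if rng.random() >= AP:
        # Crow j is unaware: crow i moves toward crow j's memory (Equation (3)).
        r_i = rng.random()
        return x_i + r_i * fl * (m_j - x_i)
    # Crow j notices the pursuit: crow i flies to a random position (Equation (4)).
    return lb + rng.random(x_i.shape[0]) * (ub - lb)
```

With AP = 0.1, roughly 90% of the moves follow Equation (3), which is why the conventional CS algorithm is dominated by exploitation.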
Step 4. Update the memory
The new positions produced by Equations (3) and (4) are evaluated and compared with the evaluations of the crows stored in memory. If the new position is better, it replaces the position in the crow's memory.
Step 5. Termination of repetition
The process of Steps 2-4 is repeated continuously, and when t reaches t_max, the conventional CS algorithm terminates and the optimization results are derived. The pseudo code of the above-mentioned process is provided in Algorithm 1.

Algorithm 1 Pseudo code of the conventional CS algorithm
Initialize the parameters (AP, fl, N, pd, t_max)
Initialize the position of crows in the search space and memorize
Evaluate the position of crows
while t < t_max do
    for i = 1 : N do
        Randomly choose one of the crows to follow (crow j)
        if r_i >= AP then
            Update the position of crow i by Equation (3)
        else
            x_i,t+1 = a random position
        end if
    end for
    Evaluate the new position of crows
    Update the memory of crows
end while
Show the results
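Algorithm 1 can also be written as a short, runnable NumPy sketch (parameter defaults and the feasibility check are our assumptions; this illustrates the published pseudo code and is not the authors' implementation):

```python
import numpy as np

def crow_search(obj, lb, ub, N=20, AP=0.1, fl=2.0, t_max=500, seed=None):
    """Conventional CS algorithm (Algorithm 1) as a plain NumPy loop."""
    rng = np.random.default_rng(seed)
    pd = lb.size
    x = lb + rng.random((N, pd)) * (ub - lb)   # Step 2: random initial positions
    mem = x.copy()                             # each crow memorizes its best position
    fit = np.apply_along_axis(obj, 1, mem)
    for _ in range(t_max):
        for i in range(N):                     # Step 3: generate new positions
            j = rng.integers(N)                # randomly choose a crow to follow
            if rng.random() >= AP:
                x[i] = x[i] + rng.random() * fl * (mem[j] - x[i])
            else:
                x[i] = lb + rng.random(pd) * (ub - lb)
        new_fit = np.apply_along_axis(obj, 1, x)
        inside = np.all((x >= lb) & (x <= ub), axis=1)
        better = (new_fit < fit) & inside      # Step 4: update the memory
        mem[better] = x[better]
        fit[better] = new_fit[better]
    best = np.argmin(fit)
    return mem[best], fit[best]
```

Because the memory is only ever replaced by better feasible positions, the best fitness is monotonically non-increasing over generations.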

Modified CS Algorithm
The modified CS (MCS) algorithm was proposed by Mohammadi et al. in 2018 [17]. The MCS algorithm has a structure similar to that of the conventional CS algorithm, but two new equations were introduced.
First, unlike the conventional CS algorithm, the MCS algorithm uses a parameter K, together with a random variable between 0 and 1, to select the target crow (crow j). K is defined by Equation (5) in terms of K_max and K_min and decreases with the number of generations.
If K has a large value, then the probability that a crow in a bad position will be selected increases; if K has a small value, then the probability that the crow in the best position will be selected increases. Therefore, exploration is primarily performed in the initial generations, and exploitation is primarily performed in the latter generations.
Second, the MCS algorithm varies fl depending on the distance between crow i and crow j, with fl defined by Equation (6). Here, fl_thr and D_thr are initially set parameters, and D_i,j is the distance between crow i and crow j.
Askarzadeh noted that the conventional CS algorithm has the best convergence performance when fl = 2 [14]. However, the MCS algorithm uses an fl_i,t greater than 2 when the distance D_i,j is smaller than D_thr.

Dynamic CS Algorithm
The dynamic CS (DCS) algorithm was proposed by Necira et al. in 2022 [22], and it introduces a dynamic AP and an fl that change with the number of generations.
First, the dynamic AP is defined by Equation (7). It decreases linearly from AP_max to AP_min as the number of generations increases, which causes the early generations to perform exploration over the global search space.
Second, fl_c is used in place of the fl of the conventional CS algorithm and is defined by Equation (8). The conventional CS algorithm fixes fl at the start and performs a local or global search based on that value. The DCS algorithm, in contrast, mainly performs a global search below a certain number of generations and a local search beyond it. The switching point is determined by τ, which is typically set to 0.9.
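Under a straightforward reading of this description, the two DCS schedules can be sketched as follows (the values of AP_max, AP_min, and the two fl levels are assumed placeholders; Equations (7) and (8) themselves are not reproduced here):

```python
def dcs_ap(t, t_max, ap_max=0.2, ap_min=0.01):
    """Linearly decreasing awareness probability (a reading of Equation (7))."""
    return ap_max - (ap_max - ap_min) * t / t_max

def dcs_fl(t, t_max, tau=0.9, fl_global=2.0, fl_local=0.5):
    """Flight length switching from global to local search at tau * t_max
    (a reading of Equation (8))."""
    return fl_global if t < tau * t_max else fl_local
```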

Advanced CS Algorithm
The conventional CS algorithm, which repeats the above process to perform optimization, chooses between exploitation and exploration according to the size of AP, which is usually set to 0.1. That is, the conventional CS algorithm mainly performs exploitation rather than exploration. Figure 3 shows the exploitation and exploration that occur while optimizing the Sphere function over 1000 generations of the conventional CS algorithm with N = 50; exploitation clearly dominates in all generations. Optimization algorithms that rely mainly on exploitation are likely to fall into local minima [10], and the performance of the conventional CS algorithm depends largely on the initial population. In this paper, to address this problem, we improve the quality of the initial population using a dynamic AP that varies with the number of generations, and we improve the exploitation and exploration performance using two proposed equations. Like the conventional CS algorithm, the ACS algorithm consists of a total of five steps.
Step 1. Define the problem and set the parameters
As in the conventional CS algorithm, the problem to be optimized is defined in Step 1, and the parameters used in the ACS algorithm are set. The parameters added in the ACS algorithm are AP_max, AP_min, and FAR (flight awareness ratio). Here, AP_max and AP_min are used for the dynamic AP.
Step 2. Initialize the memory of crows and evaluate
The size of the crow group used in the ACS algorithm is expressed as Equation (1), as in the conventional CS algorithm, and the initial positions are memorized as Equation (2). The memorized initial positions of the crows are evaluated by the objective function.
Step 3. Generate and evaluate the new positions for crows
The ACS algorithm differs most from the conventional CS algorithm in Step 3. First, the ACS algorithm uses a dynamic AP that changes with the number of generations, computed by Equation (9), where AP_max and AP_min have values between 0 and 1. Figure 4 shows the AP changing dynamically with the number of generations when t_max is 2000. Using the dynamic AP, as shown in Figure 5, increases the probability of exploration at the beginning of the run, which can improve the quality of the initial population; compared to Figure 3, the number of explorations increases in the early generations. The larger the AP, the higher the probability that the initial population performs exploration, and the smaller the AP, the higher the probability of exploitation. A dynamic AP of appropriate size is therefore required for harmony between exploitation and exploration.
Second, unlike the conventional Equation (3), in which crow i follows a randomly selected crow j (m_j,t), in the ACS algorithm crow i follows the best crow (gb_j,t) according to FAR. This is expressed as Equation (10), where r2_i,t and r3_i,t are random numbers between 0 and 1 and FAR is an initially set value between 0 and 1. This change improves the exploitation performance relative to the conventional CS algorithm. As FAR approaches 0, the crow follows the best solution stored in the crows' memory; as FAR approaches 1, it follows a randomly selected crow, as in the conventional CS algorithm. Therefore, with an appropriate FAR, the convergence performance of the optimization algorithm can be improved by harmonizing exploitation and exploration.
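These first two changes can be sketched as follows, under assumed functional forms (the paper gives the exact expressions in Equations (9) and (10); here a quadratic decay is assumed for the dynamic AP, and the defaults are ours):

```python
import numpy as np

def dynamic_ap(t, t_max, ap_max=0.4, ap_min=0.01):
    # Assumed nonlinear decay from AP_max to AP_min (the paper's Equation (9);
    # the exact curve is the one illustrated in its Figure 4).
    return ap_min + (ap_max - ap_min) * (1.0 - t / t_max) ** 2

def acs_follow(x_i, m_j, gb, fl, far, rng):
    """Exploitation step in the spirit of Equation (10): with probability FAR
    the crow chases a randomly selected crow (as in conventional CS);
    otherwise it chases the best crow found so far (gb)."""
    target = m_j if rng.random() < far else gb
    return x_i + rng.random() * fl * (target - x_i)
```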
Third, the exploration phase of the conventional CS algorithm was improved. When the random number is less than the AP, the conventional CS algorithm adopts a random position in the range between lb and ub; that is, it mainly performs a global search. A global search contributes to the convergence performance of the algorithm at the beginning of a run because it covers a large area, but it contributes little as the generations progress. Therefore, Equation (11) reduces the range from which positions can be sampled toward the end of the run, allowing the ACS algorithm to perform a local search. Here, r4_i,t and r5_i,t are random numbers between 0 and 1. Figure 6 illustrates this method.
Step 4. Update the memory
The new positions produced by Equations (10) and (11) are evaluated and compared with the evaluations of the crows stored in memory. If the new position is better, it replaces the position in the crow's memory.
Step 5. Termination of repetition
The ACS algorithm performs optimization by repeating the process of Steps 2-4. When the current number of generations (t) reaches the maximum number of generations (t_max), execution of the ACS algorithm ends, and the optimization result of the problem is derived. The pseudo code of the above-mentioned process is provided in Algorithm 2.

Algorithm 2 Pseudo code of the ACS algorithm
Initialize the parameters (AP_max, AP_min, FAR, fl, N, pd, t_max)
Initialize the position of crows in the search space and memorize
Evaluate the position of crows
while t < t_max do
    Compute the dynamic AP by Equation (9)
    for i = 1 : N do
        Randomly choose one of the crows to follow (crow j)
        if r_i >= dynamic AP then
            Update the position of crow i by Equation (10)
        else
            Update the position of crow i by Equation (11)
        end if
    end for
    Evaluate the new position of crows
    Update the memory of crows
end while
Show the results
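Putting the three changes together, Algorithm 2 can be sketched as a runnable NumPy loop. The exact forms of Equations (9)-(11) are assumptions based on the descriptions in Step 3 (a quadratic AP decay, FAR-based target selection, and a linearly shrinking sampling window); the parameter defaults are ours:

```python
import numpy as np

def advanced_crow_search(obj, lb, ub, N=20, fl=2.0, far=0.3,
                         ap_max=0.4, ap_min=0.01, t_max=500, seed=None):
    """Sketch of Algorithm 2; the three ACS changes are marked below."""
    rng = np.random.default_rng(seed)
    pd = lb.size
    x = lb + rng.random((N, pd)) * (ub - lb)
    mem = x.copy()
    fit = np.apply_along_axis(obj, 1, mem)
    for t in range(t_max):
        # Change 1: nonlinearly decreasing awareness probability (Equation (9)).
        ap = ap_min + (ap_max - ap_min) * (1.0 - t / t_max) ** 2
        gb = mem[np.argmin(fit)]               # best crow found so far
        for i in range(N):
            if rng.random() >= ap:
                # Change 2 (Equation (10)): follow a random crow with
                # probability FAR, otherwise follow the best crow.
                j = rng.integers(N)
                target = mem[j] if rng.random() < far else gb
                x[i] = x[i] + rng.random() * fl * (target - x[i])
            else:
                # Change 3 (Equation (11)): random position drawn from a
                # window that shrinks around the crow's memory over time.
                half = 0.5 * (1.0 - t / t_max) * (ub - lb)
                low = np.maximum(lb, mem[i] - half)
                high = np.minimum(ub, mem[i] + half)
                x[i] = low + rng.random(pd) * (high - low)
        new_fit = np.apply_along_axis(obj, 1, x)
        better = (new_fit < fit) & np.all((x >= lb) & (x <= ub), axis=1)
        mem[better] = x[better]
        fit[better] = new_fit[better]
    best = np.argmin(fit)
    return mem[best], fit[best]
```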

Characteristic of the ACS Algorithm
Unlike the conventional CS algorithm, the ACS algorithm adds the dynamic AP and FAR parameters. This section therefore compares the convergence performance as the newly added parameters change and seeks the values with the best convergence performance. The benchmark functions used to compare convergence performance are summarized in Table 1. Here, d was set to 10 in order to identify the characteristics of the ACS algorithm.

A total of 13 functions were used to compare the convergence performance according to the values of the added parameters. In Table 1, f1-f7 are unimodal benchmark functions that test the exploitation performance of each algorithm, and f8-f13 are multimodal benchmark functions that test the exploration performance of each algorithm. The multimodal benchmark functions have many local minima, making it difficult to find an exact solution.
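For reference, two representative entries of such a suite, the unimodal Sphere function and the multimodal Rastrigin function, can be written as follows (the numbering f1/f9 follows the commonly used suite and is assumed here):

```python
import numpy as np

def sphere(x):
    """f1, unimodal: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """f9 in the common suite, multimodal: many local minima, global minimum 0."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```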

Dynamic AP
The ACS algorithm uses a dynamic AP, which varies with the number of generations, to increase exploration performance early in the run. The dynamic AP is calculated by Equation (9) and takes different values depending on the size of AP_max. Figure 7 plots the dynamic AP for several values of AP_max. The larger AP_max is, the higher the probability of sampling randomly over the entire boundary early on and the better the initial population selection. This section therefore compares results as AP_max changes. When AP is 0, only exploitation occurs in all generations, so AP_min was set to a small value (0.01). AP_max was varied over 0.01, 0.1, 0.2, 0.4, 0.6, 0.8, and 1.0, while N, fl, and FAR were set to 20, 2.0, and 1.0, respectively. t_max was set to 2000, and each analysis was repeated a total of 50 times. Table 2 presents the analysis result for each benchmark function according to the change in AP_max, and the last row indicates the average ranking of the BF (best fitness) or MF (mean fitness) for each AP_max. If two or more values were ranked the same, the average ranking was used. The average ranking of BF was best at 1.88 when AP_max = 0.4, and the average ranking of MF was best at 2.31 when AP_max = 0.8. Conversely, when AP_max = 0.01, both BF and MF performance deteriorated. In other words, a dynamic AP with an appropriate value yields better convergence performance than the conventional CS algorithm, and the convergence performance of the ACS algorithm is best when AP_max is in the range of 0.4-0.6.

FAR
The ACS algorithm follows either a randomly selected crow (m_j,t) or the crow with the favorite prey (gb_j,t), according to FAR. The closer FAR is to 1.0, the more likely the algorithm is to follow a randomly selected crow (m_j,t), as in the conventional CS algorithm; the closer FAR is to 0.0, the more likely it is to follow the crow with the favorite prey (gb_j,t). In this section, FAR was varied over 0.0, 0.2, 0.4, 0.6, 0.8, and 1.0 in order to compare convergence performance as FAR changes, and N, fl, AP_max, and AP_min were set to 20, 2.0, 0.4, and 0.01, respectively. t_max was set to 2000, and each analysis was repeated a total of 100 times. Table 3 presents the analysis results according to the change in FAR. The mean ranking of BF was best at 2.04 when FAR = 0.2, and the mean ranking of MF was best at 2.92 when FAR = 0.6. Conversely, the closer FAR was to 0.0 or 1.0, the worse the average ranking. Furthermore, for f1, f4, and f6, the closer FAR was to 0.0, the better the convergence performance. In other words, an appropriate value of FAR yields better convergence performance than the conventional CS algorithm, and convergence performance was best when FAR was in the range of 0.2-0.4.

Numerical Examples
In this section, the ACS algorithm was applied to benchmark functions and engineering problems, and the results were compared with those of other algorithms. The 23 benchmark functions shown in Tables 1 and 4 were used [23]. Five engineering problems were solved: a pressure vessel design (PVD) problem, a welded beam design problem, a weight of a tension/compression spring problem, a three-bar truss optimization problem, and a stepped cantilever beam design problem. Table 4. Fixed-dimension multimodal benchmark function for comparison.


Benchmark Function Problems
The algorithms used to compare the convergence performance of the ACS algorithm were the conventional CS algorithm, HS, DE, the grasshopper optimization (GO) algorithm, the salp swarm (SS) algorithm, and GA. Table 5 presents the parameters used in each algorithm. t_max, N, and Dim were set to 2000, 30, and 30, respectively, and each analysis was repeated a total of 30 times. Table 6 presents the analysis results of each algorithm, and the last row shows the ranking based on the BF of each algorithm. The ACS algorithm has the best convergence performance on the unimodal (f1-f7) and multimodal (f8-f13) functions. On the fixed-dimension multimodal functions (f14-f23), it achieved the best convergence performance on f15-f19 but not on f14 and f20-f23; even there, however, the ACS algorithm showed better convergence performance than the conventional CS algorithm. Using the rankings of BF and MF, the ACS algorithm obtained average rankings of 1.65 and 1.78, respectively, confirming that it was the best. It can therefore be seen that the ACS algorithm has improved exploitation and exploration capabilities compared to the conventional CS algorithm.
Table 7 lists the parameters of the ACS algorithm used to solve the engineering problems. Each engineering problem was analyzed 20 times. The fitness of an engineering problem was calculated as shown in Equation (12), where f(x), P(x), and x indicate the objective value, the penalty value, and the design variables defined in each problem, respectively. P(x) is defined in Equation (13), where np and p_i represent the number of constraints and the value assigned by constraint i, respectively. If a constraint is met, then p_i is 0; if it is not met, a penalty of 10^4 is imposed.
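The penalty scheme of Equations (12) and (13) can be sketched as follows (constraint functions are assumed to be written in the g_i(x) <= 0 form):

```python
def penalized_fitness(f, constraints, x, penalty=1e4):
    """Fitness = f(x) + P(x) (Equations (12) and (13)): each violated
    constraint g_i(x) <= 0 adds a fixed penalty of 10^4."""
    p = sum(penalty for g in constraints if g(x) > 0.0)
    return f(x) + p
```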

Pressure Vessel Design (PVD) Problem
The problem posed here is to design a cylindrical vessel capped at both ends by hemispherical heads, as shown in Figure 9, in such a way as to minimize material, forming, and welding costs. The design variables are T_s (shell thickness; x1), T_h (head thickness; x2), R (inner radius; x3), and L (length of the cylindrical section; x4), and their ranges are given by Equation (14). The cost minimization problem for the cylindrical vessel is expressed by Equation (15). In addition, each design variable is subject to the constraints presented in Equation (16).
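The commonly used formulation of this benchmark can be written down and checked against the reported optimum (the coefficients below are the standard ones for this problem and are assumed to match Equations (15) and (16); because the reported design variables are rounded to four decimals, the active constraints hold only up to a small rounding residual):

```python
import math

def pvd_cost(x):
    """Material, forming and welding cost (standard form of Equation (15))."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def pvd_constraints(x):
    """g_i(x) <= 0 for a feasible design (standard form of Equation (16))."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,                                           # shell thickness
        -th + 0.00954 * r,                                          # head thickness
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000.0,
        l - 240.0,                                                  # length limit
    ]
```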
Table 8 compares the results of the ACS algorithm with those of previous studies [24][25][26][27]. The ACS algorithm derived the smallest cost of 5885.333 (the design variables were 0.7782, 0.3846, 40.3196, and 200.0), and all of the constraints were satisfied. The ACS algorithm reduced the cost by about 0.08% compared to the conventional CS algorithm and by 6.85% compared to Coello's results.

Welded Beam Design Problem

The problem posed here is to minimize the costs of welding and materials for a welded joint of two beams, as shown in Figure 10. The design variables are h (welding height; x1), l (welding length; x2), t (thickness of Beam 2; x3), and b (width of Beam 2; x4), and their ranges are given by Equation (17). The welding cost minimization problem is expressed by Equation (18). Here, the load (P) applied to Beam 2 is 6000 lb, the length (L) of Beam 2 is 14.0 inches, the modulus of elasticity (E) is 30 × 10^6 psi, the shear modulus (G) is 12 × 10^6 psi, the maximum shear stress (τ_max) is 13,600 psi, the maximum stress (σ_max) is 30,000 psi, and the maximum displacement (δ_max) is 0.25 inches. In addition, each design variable is subject to the constraints provided by Equation (19).
Table 9 compares the results of the ACS algorithm with those of previous studies [24,[28][29][30]. The ACS algorithm derived the smallest cost of 1.7254 (the design variables were 0.2057, 3.4747, 9.0365, and 0.2057), and all of the constraints were satisfied. The ACS algorithm reduced the cost by about 0.23% compared to the conventional CS algorithm and by 1.33% compared to the study by Coello [24].

Weight of a Tension/Compression Spring Problem
The problem presented here is to minimize the weight of a spring that satisfies the constraints when a load is applied to it, as shown in Figure 11. The design variables are d (wire diameter; x1), D (spring diameter; x2), and N (number of active coils; x3), and their ranges are given by Equation (20). The spring weight minimization problem is expressed by Equation (21). In addition, each design variable is subject to the constraints provided by Equation (22).
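The standard objective for this benchmark, assumed to match Equation (21), can be checked against the reported optimum (up to the rounding of the reported design variables):

```python
def spring_weight(x):
    """Weight of the tension/compression spring (standard form of
    Equation (21)): (N + 2) * D * d^2."""
    d, D, n = x
    return (n + 2.0) * D * d ** 2
```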
Table 10 shows the results of the ACS algorithm and those of other researchers. The ACS algorithm derived the smallest spring weight of 1.2665 × 10^-2 (the design variables were 0.0517, 0.3578, and 11.2240), and all of the constraints were satisfied. The ACS algorithm reduced the weight by about 0.03% compared to the conventional CS algorithm and by 0.31% compared to the study by Coello [24].

Three-Bar Truss Optimization Problem

This problem aims to find the minimum truss weight that satisfies the constraints when a load (P) is applied to a truss structure of three members, as shown in Figure 12. The design variables are A1 (cross-sectional area of Member 1; x1 = x3) and A2 (cross-sectional area of Member 2; x2), and their ranges are given by Equation (23). The three-bar truss weight minimization problem is expressed by Equation (24). Here, the distance (L) between nodes is 100 cm, the load (P) is 2 kN/cm2, and the maximum stress (σ_max) is 2 kN/cm2. In addition, each design variable is subject to the constraints provided in Equation (25). The maximum number of generations (t_max) was set to 20 in this problem.
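The standard objective for this benchmark, assumed to match Equation (24), reproduces the reported weight from the reported design variables up to rounding:

```python
import math

def truss_weight(x, L=100.0):
    """Weight of the three-bar truss (standard form of Equation (24)):
    (2 * sqrt(2) * A1 + A2) * L, with A3 = A1 by symmetry."""
    a1, a2 = x
    return (2.0 * math.sqrt(2.0) * a1 + a2) * L
```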
Table 11 shows the results of the ACS algorithm and those of a previous study [14]. Here, SoC, MB, and DSS-MDE refer to the society and civilization (SoC) algorithm, the mine blast (MB) algorithm, and the dynamic stochastic selection with multimember differential evolution (DSS-MDE) algorithm, respectively. The ACS algorithm determined the weight of the three-bar truss structure to be 263.895843 (the design variables were 0.7887 and 0.4081), and all of the constraints were satisfied. The result of the ACS algorithm was lighter than the results of the conventional CS algorithm and Askarzadeh.

Stepped Cantilever Beam Design Problem

The problem posed here is to calculate the widths of a stepped cantilever beam, as shown in Figure 13, so as to minimize its weight. The widths λ1-λ5 of the five beam sections are the design variables (x1-x5), and their ranges are provided by Equation (26). Equation (27) expresses the stepped cantilever beam design problem, and Equation (28) gives its constraint.
Subject to:

g1(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 − 1 ≤ 0 (28)

Table 12 shows the results of the ACS algorithm and those of Hijjawi et al. [33]. Here, AOACS and HHO stand for the hybrid of the arithmetic optimization algorithm and cuckoo search and for Harris hawks optimization, respectively. The ACS algorithm determined a minimum weight of the stepped cantilever beam of 1.3418 and satisfied the constraints. The ACS algorithm showed better results than the conventional CS algorithm.

Conclusions
This paper proposed the ACS algorithm, which improves Step 3 of the conventional CS algorithm. The ACS algorithm adds three methods to the conventional CS algorithm. First, unlike the conventional CS algorithm, which uses a fixed-value AP, we proposed a dynamic AP that decreases nonlinearly with the number of generations; this change improved the algorithm's exploration performance. Second, we proposed an expression that follows the crow in the best position rather than a randomly adopted crow, which improved the algorithm's exploitation performance. Third, in later generations we proposed a local search around the adopted value rather than a global search of the entire area. The convergence performance under changes in AP_max and FAR, the parameters added to the ACS algorithm, was compared, and the convergence performance was verified to be best when AP_max was in the range of 0.4-0.6 and FAR was in the range of 0.2-0.4. Finally, the ACS algorithm was applied to benchmark functions and five engineering problems, confirming that it had the fastest convergence speed and the best convergence performance compared to other metaheuristic algorithms.
In future work, applying the ACS algorithm to various large-scale or real-scale engineering problems is expected to yield optimal solutions for a wide variety of engineering problems.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.