An adaptive dimension differential evolution algorithm based on ranking scheme for global optimization

In recent years, evolutionary algorithms based on swarm intelligence have drawn much attention from researchers. Such artificial intelligence algorithms can be applied to a wide range of problems, including big data and information processing in today's heterogeneous sensor and IoT systems. The differential evolution (DE) algorithm is one of the most important algorithms in the field of optimization because of its power and simplicity. DE has excellent exploitation performance and can approach the global optimal solution quickly. At the same time, it is prone to getting trapped in local optima, so it can converge prematurely. In view of these shortcomings, this article focuses on improving DE and proposes an adaptive dimension differential evolution (ADDE) algorithm that adapts the dimension-update strategy appropriately and better balances exploration and exploitation. In addition, this article uses elitism to improve the location update strategy, increasing the efficiency and accuracy of the search. To verify the performance of the new ADDE, this study carried out experiments against other well-known algorithms on the CEC2014 test suite. The comparison results show that ADDE is more competitive.


INTRODUCTION
Optimization exists in various fields of engineering and applications (Song, Pan & Chu, 2020; Wang, Pan & Chu, 2020), and intelligent algorithms are one of the important ways to solve optimization problems (Pan et al., 2021). In recent years, with the rapid development of artificial intelligence, more and more scholars have begun to explore and study optimization algorithms based on swarm intelligence. This kind of algorithm is useful and effective for optimization problems in heterogeneous sensor networks (Wang et al., 2019; Elhoseny et al., 2015; Binh et al., 2020; Bhushan, Pal & Antoshchuk, 2018), and such algorithms are also widely used in sensor data and information processing (Moldovan et al., 2020; Awgner et al., 2020; Sheoran, Mittal & Gelbukh, 2020). Among swarm intelligence-based algorithms, the differential evolution (DE) algorithm (Storn & Price, 1997), the artificial bee colony (ABC) algorithm (Karaboga et al., 2007), and the QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm (Meng, Pan & Xu, 2016) are characterized by simple structure and ease of implementation. They are widely used in engineering practice and have become an important part of the contemporary optimization field. Among them, the DE algorithm attracts the most attention because of its strong optimization ability and robustness.
To solve numerical optimization problems, Price and Storn first proposed the differential evolution (DE) algorithm in 1995. Since the beginning of the 21st century, more and more advanced DE variants have been proposed and applied in different fields (Tanabe & Fukunaga, 2013a; Meng, Pan & Zheng, 2018; Meng, Pan & Tseng, 2019; Ali et al., 2016; Meng, Pan & Kong, 2018), and DE has become an effective technique for solving complex optimization problems. The DE algorithm combines the idea of swarm intelligence with evolutionary computation and searches for the global optimal solution through the competition and cooperation of individuals. DE adopts a population-based global optimization strategy and uses a differential operation for mutation. Through one-to-one competition, better solutions are generated, which improves the search efficiency of evolutionary computation. Moreover, the memory ability of DE allows it to track the current state of the optimization and adjust the search strategy, so it has strong optimization ability and robustness, which makes it well suited to difficult optimization problems. In recent years, new variants of DE (Price, Storn & Lampinen, 2006; Das & Suganthan, 2011; Das, Mullick & Suganthan, 2016) have also achieved top rankings at the IEEE Congress on Evolutionary Computation. However, DE has its shortcomings. For example, the crossover determined by the crossover-rate parameter CR can lead to search bias. For this defect, Meng, Pan & Xu (2016) proposed the QUATRE algorithm (which can be regarded as an improved version of DE), which uses a matrix M instead of CR to complete the crossover. Since the location update equation of DE provides strong exploitation ability, its multi-dimensional update mode makes the algorithm converge more easily and fall into local optimal solutions.
We know that a multi-dimensional strategy brings more exploitation; especially in the early stage of the optimization process, it can approach the global optimal solution faster. A one-dimensional update strategy searches more slowly, but it can avoid falling into local optima early in the search process. The artificial bee colony (ABC) algorithm adopts a one-dimensional update strategy and has strong exploration performance. The balance between exploration and exploitation has always been a difficult problem in optimization algorithms, and adaptive schemes (Islam et al., 2012; Ajithapriyadarsini, Mary & Iruthayarajan, 2019) are regarded as an effective technique for addressing it. At present, the mainstream adaptive schemes for DE are realized by adjusting control parameters (Draa, Chettah & Talbi, 2019). In addition, there are adaptive dimension frameworks (Deng, Li & Sun, 2020) and adaptive mutation strategies. These methods optimize the algorithm at different levels, and they use different adaptive trigger conditions, such as ranking schemes or reaching a certain number of failed updates. Among them, ranking behavior has been adopted by more and more scholars because it intuitively reflects the quality of the current individual and integrates well into adaptive schemes. Within the research on ranking schemes, there are few adaptive strategies concerning dimension. Inspired by the one-dimensional updating strategy of the artificial bee colony algorithm, this study proposes a novel DE algorithm based on ranking behavior (Stanovov, Akhmedova & Semenkin, 2020; Tong et al., 2021; Xu et al., 2019; Leon & Xiong, 2018; Zhong, Duan & Cheng, 2021) and an adaptive dimensional strategy (Deng, Li & Sun, 2020).
The algorithm adaptively selects the update strategy according to the individual's ranking, and combines the single-dimension strategy with the multi-dimension strategy to achieve a balance between exploration and exploitation.
The rest of this article is arranged as follows: in the second section, we briefly review the development of DE, the ABC algorithm, and their well-known variants; the third section introduces the principle and structure of the proposed algorithm in detail. In "Experiment and Result", experiments are carried out on the CEC2014 test suite (Liang, Qu & Suganthan, 2014), and the performance of the proposed algorithm is verified by comparison with well-known variants of DE and ABC. Finally, conclusions and prospects are given in "Conclusion".

BACKGROUND REVIEW AND RELATED WORK
In this section, two important branches of the stochastic optimization algorithm, the famous ABC and the powerful DE, are reviewed as well as their variants.

ABC algorithm and its variants
In the ABC algorithm, the global optimal solution is found through three different stages. In the first stage, each employed bee looks for a better honey source in the neighborhood of its current honey source (feasible solution). In the second stage, each onlooker bee selects one of the employed bees to follow according to the quality of the honey source found by that employed bee, and then searches for a better honey source in its neighborhood. In the third stage, when an employed bee fails to find a better honey source after many attempts, it becomes a scout bee, and a new honey source location is randomly generated within the search range. The initialization, position update, and selection probability of the original ABC algorithm are as follows:

h_{ij} = h_j^{min} + r \, (h_j^{max} - h_j^{min})

h_{ij}(t+1) = h_{ij}(t) + \lambda \, (h_{ij}(t) - h_{kj}(t))

P_i = \frac{F(h_i)}{\sum_{s=1}^{SN} F(h_s)}

where i = 1, 2, ..., SN and j = 1, 2, ..., D; h_i is the ith feasible solution with D dimensions among the SN solutions, and j is the dimension index of each feasible solution; r is a random number with r ∈ [0, 1]; t is the iteration number; h_k is a randomly selected honey source with k ≠ i; λ is a random number in the range [−1, 1]. When the function value of the new honey source is better, the new honey source replaces the original one; otherwise, the original honey source is retained. P_i is the probability of selecting the ith employed bee, SN is the number of employed bees, and F(h_i) is the fitness value of the ith bee, which is related to the objective function value of the corresponding honey source.
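The employed-bee phase and the onlooker selection probability described above can be sketched in Python. This is a minimal illustration, not the authors' code: the sphere objective, the population size, and the standard ABC fitness transform F = 1/(1 + f) are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

def fitness(f_val):
    # Common ABC fitness transform: higher is better (assumed form)
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)

SN, D = 10, 5                                   # food sources and dimensionality
low, high = -100.0, 100.0
H = low + rng.random((SN, D)) * (high - low)    # h_ij = h_min + r (h_max - h_min)
f_vals = np.array([sphere(h) for h in H])

# Employed-bee phase: perturb one randomly chosen dimension j per source
for i in range(SN):
    j = rng.integers(D)
    k = rng.choice([s for s in range(SN) if s != i])  # k != i
    lam = rng.uniform(-1.0, 1.0)                      # lambda in [-1, 1]
    v = H[i].copy()
    v[j] = H[i, j] + lam * (H[i, j] - H[k, j])
    if sphere(v) < f_vals[i]:                         # greedy one-to-one selection
        H[i], f_vals[i] = v, sphere(v)

# Onlooker selection probabilities P_i = F(h_i) / sum_s F(h_s)
F = np.array([fitness(fv) for fv in f_vals])
P = F / F.sum()
```

Note that only a single dimension of each source changes per update, which is the one-dimensional character of ABC that the proposed ADDE later borrows.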
The original ABC algorithm updates positions by randomly moving toward or away from other bees, which greatly enhances its exploration ability but also weakens its exploitation ability, so it does not easily converge to the global optimal solution. During the development of the ABC algorithm, many excellent ABC variants have been proposed (Bajer & Zorić, 2019; Cui et al., 2018; Peng, Deng & Wu, 2019; Wang et al., 2014; Li et al., 2020). For example, the global-best-guided GABC algorithm (Zhu & Kwong, 2010) adds guidance from the global optimal solution (Gbest) to the update equation, which effectively enhances the exploitation ability of the algorithm. The authors of that method later improved GABC again and proposed the IABC algorithm (Cao et al., 2019), which improves the search mechanism of the onlooker bee to solve continuous optimization problems. Inspired by the differential evolution algorithm, a modified ABC algorithm (MABC) (Gao & Liu, 2012) was proposed by combining DE with ABC. The main idea of MABC is to generate feasible solutions in the vicinity of the global optimal solution of the previous iteration during position updates. In the population initialization and scout-bee phases, a chaotic system and opposition-based learning are used to enhance the exploitation ability and global convergence performance of the algorithm. Inspired by the QUATRE algorithm, the QABC algorithm was proposed in 2021. This algorithm uses multi-dimensional coevolution to complete the location update, which greatly enhances its exploitation ability. However, because it replaces the original ABC update strategy, it also weakens the global search ability to a certain extent.

DE algorithm and its variants
DE is a powerful evolutionary algorithm whose core idea is survival of the fittest. The evolution process of DE imitates the mutation, crossover, and selection operations of genetics, and it is a robust global optimization algorithm. The operation mechanism of DE consists of four steps (Storn & Price, 1997). First, starting from a randomly generated initial population, two individuals are selected from the parents to generate a difference vector. Second, another individual is selected and summed with the scaled difference vector to generate a mutant individual. Third, the parent individual and the corresponding mutant individual are crossed to generate a new trial individual. Finally, a selection operation is carried out between the parent individual and the trial individual, and the individual that meets the requirements is saved to the next generation. Through continuous evolution, good individuals are retained, poor ones are eliminated, and the search is guided toward the optimal solution. Compared with crossover and selection, the mutation operation has attracted the most attention. As shown in Table 1, there are five common mutation strategies for the DE algorithm: DE/rand/1, DE/rand/2, DE/best/1, DE/best/2, and DE/target-to-best/1. They are used to generate new populations.
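The four steps above can be sketched as a complete DE/rand/1 cycle. This is an illustrative sketch, not the authors' implementation: the sphere objective, population size, and control parameter values (F = 0.5, CR = 0.9) are assumed for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x ** 2))

NP, D = 20, 5              # population size and dimensionality (illustrative)
F, CR = 0.5, 0.9           # scaling factor and crossover rate (assumed values)
low, high = -100.0, 100.0
X = low + rng.random((NP, D)) * (high - low)
f_vals = np.array([sphere(x) for x in X])
best0 = f_vals.min()       # initial best, for checking progress

for g in range(100):
    for i in range(NP):
        # Step 1-2: DE/rand/1 mutation V = X_r1 + F * (X_r2 - X_r3), r1,r2,r3 != i
        r1, r2, r3 = rng.choice([r for r in range(NP) if r != i], 3, replace=False)
        V = X[r1] + F * (X[r2] - X[r3])
        # Step 3: binomial crossover with a guaranteed index k
        k = rng.integers(D)
        mask = rng.random(D) < CR
        mask[k] = True
        U = np.where(mask, V, X[i])
        # Step 4: one-to-one greedy selection
        fu = sphere(U)
        if fu <= f_vals[i]:
            X[i], f_vals[i] = U, fu
```

Because selection is greedy, the best objective value is monotonically non-increasing over generations.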
In the past 20 years, scholars have done a great deal of research on DE, and many DE variants (Brest et al., 2006; Zhang & Sanderson, 2009; Tanabe & Fukunaga, 2013a, 2013b) have been proposed to enhance its performance. We know that in the mutation and crossover process, the choice of parents directly affects the quality of the offspring. Sutton, Lunacek & Whitley (2007) proposed the RBDE algorithm, whose main improvement is to optimize offspring individuals by changing how parent individuals are selected. In RBDE, the probability of being selected as a parent differs within the current population: the better the individual, the greater its probability of being selected. Some researchers achieve this through population structure. Das et al. (2009) proposed the DEGL algorithm with a new mutation strategy; its main innovation is a neighborhood-search mutation strategy based on a ring population topology. Moreover, its control parameters are generated randomly, and successful control parameters are more likely to be retained. The SADE algorithm was proposed based on adaptive strategy and crossover-rate selection. Multiple strategies are gathered in SADE, in which successful strategies are more likely to be selected and used. In each generation's crossover operation, the crossover rate is generated from a normal distribution centered on the average of the historically successful crossover rates. Meng, Pan & Xu (2016) proposed a swarm-based evolutionary algorithm called the QUATRE algorithm. Aiming at the inherent weakness of DE's crossover selection from a high-dimensional perspective, this algorithm uses a collaborative search matrix M to replace the crossover control parameter CR in DE. Individuals then update their positions by quasi-affine transformation, which alleviates the positional bias in the search. Pan et al. (2017) proposed a new QUATRE algorithm based on a sorting method (S-QUATRE); its main innovation is that the population is divided into better and worse groups through a sorting strategy: the better group is passed directly to the next generation, while the individuals in the worse group are passed to the next generation only after evolution. Aiming at the difficulty QUATRE has in escaping local optimal solutions in the later stage of iteration, the QUATRE-DEG algorithm was proposed. The algorithm uses a dual-population structure to maintain the diversity of information and constructs a gravity model of the global optimal and global suboptimal solutions to coordinate the balance between exploration and exploitation.

PROPOSED ALGORITHM
As we all know, exploration and exploitation are equally important for the performance of swarm intelligence algorithms. DE is well known for its powerful exploitation performance, which largely benefits from its multi-dimensional search strategy. In the search process, it can quickly approach the global optimal solution, but premature convergence can occur. This is caused by a lack of exploration in the late iterations, and a one-dimensional search strategy can make up for this deficiency. The ABC algorithm is famous for its powerful searching ability; it uses a one-dimensional search strategy to update locations. Inspired by ABC, a new DE algorithm is proposed in this study, which updates by adaptively selecting a single- or multi-dimensional search strategy. The arrangement of this chapter is as follows: in the first part, the improved location update equation is given; in the second part, the adaptive selection scheme for multi-dimensional or single-dimensional updates is described; finally, the pseudo-code of the proposed ADDE algorithm is given in the third part.

Improved location update strategy
Generally speaking, elitism is an effective technique for enhancing the performance of an algorithm during optimization: using elite (better) individuals to guide other individuals through evolution approaches the global optimal solution more quickly. However, when only one elite is used, for example when only the global optimal position of the current iteration provides guidance, it is easy to fall into a local optimal solution and converge prematurely. The improved one-dimensional update equation is as follows:

x_{ij}(t+1) = x_{ij}(t) + k \, (x_{ij}(t) - x_{rj}(t)) + b \, (x_{bj}(t) - x_{ij}(t))

where x_r is a randomly selected individual different from the current individual x_i, x_b is a randomly selected individual that is better than the current individual, k is a random number generated in [−1, 1], and b is a random number generated in [0, 1.5].
As we can see, this update mechanism adds guidance from better individuals. The better individuals are selected randomly, not only the optimal individual, which undoubtedly brings more possibilities to the search. The new mutation equation is as follows:

B_{i,G} = X_{b,G} + F \, (X_{r2,G} - X_{r3,G})

where X_b is likewise a randomly selected individual better than the current one, X_{r2} and X_{r3} are randomly selected individuals in the population that are different from each other, F is the scaling factor used to control the influence of the difference vector, and B is the generated trial population. Similarly, X_{b,G} can produce better trial vectors for DE while still maintaining randomness.
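Both elitist update rules can be sketched in a few lines. This is a hedged illustration under assumptions: the sphere objective and population values are invented for the demo, and `random_better` is a hypothetical helper implementing "a randomly selected individual better than the current one".

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    return float(np.sum(x ** 2))

NP, D, F = 20, 5, 0.5
X = rng.uniform(-100, 100, (NP, D))
f_vals = np.array([sphere(x) for x in X])
order = np.argsort(f_vals)              # order[0] is the best individual

def random_better(i):
    """Pick a random individual strictly better than individual i (or the best)."""
    better = np.where(f_vals < f_vals[i])[0]
    return int(rng.choice(better)) if better.size else int(order[0])

i = int(order[-1])                      # demonstrate on the worst individual
b_idx = random_better(i)
r = int(rng.choice([s for s in range(NP) if s != i]))

# One-dimensional elitist update on a single random dimension j
j = rng.integers(D)
k = rng.uniform(-1.0, 1.0)              # k in [-1, 1]
b = rng.uniform(0.0, 1.5)               # b in [0, 1.5]
v = X[i].copy()
v[j] = X[i, j] + k * (X[i, j] - X[r, j]) + b * (X[b_idx, j] - X[i, j])

# Multi-dimensional elitist mutation: B = X_b + F * (X_r2 - X_r3)
r2, r3 = rng.choice([s for s in range(NP) if s != i], 2, replace=False)
B = X[b_idx] + F * (X[r2] - X[r3])
```

The key design choice is that the guiding elite is drawn at random from all individuals better than the current one, rather than always being the single global best.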

Adaptive dimension selection scheme
In the second stage of the ABC algorithm, the onlooker bees are more likely to follow the better bees to collect honey. This means that the better the individual, the more likely it is to be selected for one-dimensional location updating, which undoubtedly benefits the whole optimization process. In contrast, poor individuals in the population are usually far from the global optimal solution, and multi-dimensional updating can speed up the whole optimization process for them. Based on this, this article proposes an adaptive selection scheme based on ranking. Its selection probability is:

P_i = \frac{rank_i}{NP}

and the one-dimensional update strategy is chosen when rand > P_i, while the multi-dimensional update strategy is chosen otherwise. Here rank_i is the current individual's ranking: the best individual has rank one and the worst has rank NP. NP is the total number of individuals in the population, and rand is a random number in [0, 1]. x_{ij}(t+1) is the offspring generated by the one-dimensional update strategy. B is the trial population generated by the multi-dimensional update strategy; it still needs to complete the crossover operation using Eq. (6) to generate the offspring u_{ij}:

u_{ij,G} = \begin{cases} B_{ij,G}, & \text{if } rand \le CR \text{ or } j = k \\ x_{ij,G}, & \text{otherwise} \end{cases}

where CR ∈ (0, 1) is the crossover control parameter and k ∈ {1, 2, ..., D} is a random parameter index that ensures each individual always inherits at least one parameter from the trial individual.
As shown above, the higher the current individual's ranking, the more likely it is to choose the single-dimensional update strategy, while the lower the ranking, the more likely it is to choose the multi-dimensional update strategy. In particular, the multi-dimensional strategy still requires crossover, while the single-dimensional strategy does not. In the adaptive dimension selection scheme, selecting the multi-dimensional or single-dimensional update strategy according to ranking effectively plays to the strengths of each, achieving a balance between exploration and exploitation.
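The rank-based choice can be verified numerically with a small sketch. This assumes the selection probability takes the form P_i = rank_i / NP with the one-dimensional branch taken when rand > P_i, which is an inferred reading of Eq. (4), not the authors' exact formula.

```python
import numpy as np

rng = np.random.default_rng(3)

NP = 20
ranks = np.arange(1, NP + 1)        # rank 1 = best individual, rank NP = worst
P = ranks / NP                      # assumed Eq. (4)-style probability

# Estimate how often each rank picks the one-dimensional strategy
trials = 10_000
one_dim_freq = np.array([
    np.mean(rng.random(trials) > P[i]) for i in range(NP)
])

# Better-ranked individuals choose the one-dimensional update more often
assert one_dim_freq[0] > one_dim_freq[-1]
```

Under this reading, the best individual uses the one-dimensional (fine-search) update almost always, while the worst individual always takes the multi-dimensional (fast-moving) branch.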

INPUT: Bounds: [X_min^D, X_max^D], objective function: f(X), maximal generation number: MaxGen, population size: NP.
OUTPUT: Individual of global optimal solution: x_gbest, global optimal value: f(x_gbest).
1: Initialize the population within the bounds, and calculate the fitness of all individuals.
3: for g = 1 to MaxGen
4:   Rank all individuals according to fitness values.
5:   for i = 1 to NP
6:     Randomly select a better individual than the current one.
7:     Calculate the probability of the current individual's adaptive selection P_i using Eq. (4).
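Putting the pieces together, a minimal end-to-end sketch of the ADDE loop might look as follows. This is an assumed reconstruction, not the authors' code: the objective, the parameter values, the boundary handling via clipping, and the exact selection rule rand > rank_i/NP are all illustrative choices consistent with the description above.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    return float(np.sum(x ** 2))

NP, D, MaxGen = 20, 5, 200
F, CR = 0.5, 0.9
low, high = -100.0, 100.0
X = low + rng.random((NP, D)) * (high - low)
fit = np.array([sphere(x) for x in X])
best0 = fit.min()

def pick(exclude, n=1):
    idx = rng.choice([s for s in range(NP) if s not in exclude], n, replace=False)
    return idx if n > 1 else int(idx[0])

for g in range(MaxGen):
    ranks = np.empty(NP, dtype=int)
    ranks[np.argsort(fit)] = np.arange(1, NP + 1)      # rank 1 = best
    for i in range(NP):
        better = np.where(fit < fit[i])[0]
        b = int(rng.choice(better)) if better.size else int(np.argmin(fit))
        if rng.random() > ranks[i] / NP:
            # One-dimensional elitist update (no crossover needed)
            r = pick({i})
            j = rng.integers(D)
            U = X[i].copy()
            U[j] = X[i, j] + rng.uniform(-1, 1) * (X[i, j] - X[r, j]) \
                 + rng.uniform(0, 1.5) * (X[b, j] - X[i, j])
        else:
            # Multi-dimensional elitist mutation plus binomial crossover (Eq. (6))
            r2, r3 = pick({i}, 2)
            B = X[b] + F * (X[r2] - X[r3])
            k = rng.integers(D)
            mask = rng.random(D) < CR
            mask[k] = True
            U = np.where(mask, B, X[i])
        U = np.clip(U, low, high)
        fu = sphere(U)
        if fu <= fit[i]:
            X[i], fit[i] = U, fu
```

On a smooth test function such as the sphere, this loop steadily improves the best fitness as generations proceed.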

EXPERIMENT AND RESULT
In our experiments, the CEC2014 benchmark suite is used to evaluate the performance of the proposed algorithm. The benchmark functions can be divided into four categories: unimodal functions (f1-f3), simple multimodal functions (f4-f16), hybrid functions (f17-f22), and composition functions (f23-f30). All benchmark functions are treated as black-box problems. According to Liang, Qu & Suganthan (2014), the search range is set to [−100, 100]. As can be seen from the method part of Chapter 3, the improvement of the ADDE algorithm is inspired by the ABC algorithm, in two main respects. One is that the one-dimensional update equation of ABC brings greater exploration; the other is that in the onlooker-bee stage of the ABC algorithm, individuals with better fitness values get more position-update opportunities. These two ideas are reflected in the location update strategy and the adaptive selection scheme of the ADDE algorithm, respectively. Therefore, we compare our algorithm with some well-known ABC variants and advanced DE variants to verify its effectiveness. The ABC variants include GABC, MABC, IABC, and QABC, and the DE variants include DE/rand/1, RBDE, DEGL, SADE, QUATRE, S-QUATRE, and QUATRE-DEG. The parameters of all algorithms follow the originally recommended values, and the detailed parameters are listed in Table 2.
All experiments are carried out on the Windows 10 operating system with 8 GB of memory and an Intel i5-4210M processor. The simulation platform is MATLAB 2019a. For the reliability of the data, we conducted 51 independent runs of all the comparison algorithms and collected the fitness error Δf = f* − f(o) on these test functions, where f* represents the final value to which the algorithm converges and f(o) represents the global optimal value of the test function. In each run, the total number of function evaluations, maxFES, is set to 10^4 × D, where D is the dimension. We then compared the mean fitness error Δf and the standard deviation (std) over the 51 runs of each algorithm. To evaluate the significance of differences between two algorithms, Wilcoxon's signed-rank test, a paired nonparametric statistical procedure, was performed on the experimental data (Derrac et al., 2011). This test has been widely used by scholars to compare the results of two different populations (Deng, Li & Sun, 2020). In the experiment, the significance level was set at 0.05, and in the comparison results the symbols "+", "−", and "=" respectively indicate that the performance of the new ADDE algorithm is "better", "worse", or "similar" compared with that of the other algorithm.
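The evaluation protocol above can be sketched with SciPy's paired signed-rank test. The run data here are synthetic stand-ins (the real data come from 51 runs of each algorithm); `f_o` and the noise scales are invented for the demo.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(5)

# Hypothetical final fitness values from 51 independent runs of two algorithms
# on one benchmark whose known global optimum is f_o = 100.0.
f_o = 100.0
runs_adde = f_o + np.abs(rng.normal(0.0, 1e-3, 51))    # smaller error (assumed)
runs_other = f_o + np.abs(rng.normal(0.0, 1e-1, 51))   # larger error (assumed)

delta_adde = runs_adde - f_o       # fitness error Δf = f* - f(o)
delta_other = runs_other - f_o

# Paired Wilcoxon signed-rank test at the 0.05 significance level
stat, p = wilcoxon(delta_adde, delta_other)
symbol = "+" if (p < 0.05 and delta_adde.mean() < delta_other.mean()) else \
         "-" if p < 0.05 else "="
```

The resulting "+", "−", or "=" symbol corresponds directly to the entries reported in the comparison tables.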

Comparison between ADDE and ABC variants
In this section, we compare ADDE with several well-known ABC variants, GABC, MABC, IABC, and QABC; the statistical results based on Wilcoxon's signed-rank test are recorded in Tables 3-5. As we can see from Tables 3 to 5, first, for the 10D real-parameter optimization, the novel ADDE performs better than GABC on 21 benchmark functions and similarly on two, better than IABC on 20 benchmark functions and similarly on three, better than MABC on 23 benchmark functions and similarly on two, and better than QABC on 22 benchmark functions and similarly on four. Second, for the 30D real-parameter optimization, the novel ADDE performs better than GABC on 21 benchmark functions and similarly on two, better than IABC on 23 benchmark functions and similarly on two, better than MABC on 27 benchmark functions and similarly on two, and better or similar on all 30 benchmark functions compared with QABC. Finally, for the 50D real-parameter optimization, the novel ADDE performs better than GABC on 21 benchmark functions and similarly on one, better than IABC on 25 benchmark functions and similarly on one, better than MABC on 26 benchmark functions and similarly on one, and better than QABC on 23 benchmark functions and similarly on one. In general, the performance of our algorithm is better than that of all the ABC variants.

According to the statistical results, compared with GABC, the ADDE algorithm achieves similar improvements across the different dimensions, with 21 benchmarks achieving better results. Compared with QABC, ADDE shows the best improvement at 30D, where 29 benchmarks achieve better results and one benchmark is similar, while none of the 30 benchmarks becomes worse. Although high-dimensional problems are more difficult to handle, compared with IABC, ADDE is superior on 20, 23, and 25 benchmarks at 10D, 30D, and 50D, respectively; the higher the dimension, the better its relative performance. Finally, the improvement of ADDE over MABC is similar at 10D and 50D, obtaining 26 better benchmarks, and slightly better at 30D, obtaining 27 better benchmarks. In general, the performance of our algorithm outperforms all the comparative ABC variants, especially when dealing with more complex high-dimensional problems.

Comparison between ADDE and DE variants
In this section, we compare ADDE with advanced DE variants, including DE/rand/1, RBDE, DEGL, SADE, QUATRE, S-QUATRE, and QUATRE-DEG. The experimental results are recorded in Tables 6-11. As shown in Tables 6-11, first of all, for the 10D real-parameter optimization, the novel ADDE performs better than DE/rand/1 and RBDE on 25 benchmark functions and similarly on two, better than SADE on 21 benchmark functions and similarly on six, better than QUATRE-DEG on 26 functions and similarly on two, and better or similar compared with DEGL, QUATRE, and S-QUATRE on all 30 benchmark functions. Then, for the 30D real-parameter optimization, the novel ADDE performs better than DE/rand/1 on 27 benchmark functions and similarly on two, better than SADE on 26 functions and similarly on one, better than QUATRE-DEG on 24 functions and similarly on three, and better or similar compared with RBDE, DEGL, QUATRE, and S-QUATRE on all 30 benchmark functions. Finally, for the 50D real-parameter optimization, our algorithm performs better than DE/rand/1, RBDE, and DEGL on 28 benchmark functions and similarly on one, better than SADE on 24 functions and similarly on two, better than QUATRE-DEG on 23 functions and similarly on one, better than QUATRE on 23 benchmark functions and similarly on one, and better than S-QUATRE on 23 benchmark functions and similarly on two. Therefore, compared with these DE variants, we still obtain the best performance. According to the statistical results, at 10D and 30D, ADDE improves extremely well compared with DEGL, QUATRE, and S-QUATRE, and none of the 30 benchmarks becomes worse. However, the effect is not as good on the higher-dimensional (50D) benchmarks, where some benchmarks still become worse.

In addition, ADDE improved markedly over DE/rand/1 and RBDE on high-dimensional problems, achieving 28 better results and one similar result on the 50D benchmarks. Secondly, compared with SADE, which also uses an adaptive scheme, ADDE performs best on the 30D benchmarks, achieving 26 better results and one similar result; it only becomes worse on f6, f7, and f27. Finally, the improvement of ADDE over the advanced QUATRE-DEG decreases as the dimension increases, being best at 10D with 26 better benchmarks, and obtaining 24 and 23 better benchmarks at 30D and 50D, respectively. It can be seen that compared with the more advanced DE variants, the effect on the high-dimensional benchmarks decreases slightly. This is because the ADDE algorithm has enhanced exploration ability (inherited from ABC), so its exploitation is relatively weaker, and some high-dimensional problems require more exploitation. Overall, however, we still obtained the best performance compared with these DE variants. In terms of convergence, our algorithm is better on FUNC1 and FUNC3 as a whole; it can be noted that our algorithm converges fastest and reaches the global optimal solution. On FUNC6, except for our ADDE and S-QUATRE, the other algorithms converge prematurely and fall into local optimal solutions. Our ADDE and S-QUATRE perform particularly well on this function because they continue to search quickly for a better solution; more importantly, our ADDE performs better than S-QUATRE at the same number of iterations. On FUNC9 and FUNC15, the convergence speed of the new ADDE is slightly better than that of the other algorithms. On FUNC29, it converges to a global optimum similar to that of QABC, which is slightly better than S-QUATRE, QUATRE, and IABC, while GABC, MABC, DE, and RBDE converge too early and fall into local optima.
In general, our algorithm ADDE has achieved the best results in comparison, so it is competitive in terms of convergence speed.

Comparison of complexity
We evaluate the algorithm complexity of the novel ADDE at 30D; the procedure for computing algorithm complexity on CEC2014 is given in Liang, Qu & Suganthan (2014). The running time of the given arithmetic test expression is denoted T0. The time spent evaluating the f18 function of the CEC2014 test suite 200,000 times is recorded as T1. The time the optimization algorithm takes to perform 200,000 complete evaluations of the f18 function is recorded as T2. We measured T2 five times and took the average time T̂2. Accordingly, the complexity (T̂2 − T1)/T0 can be obtained. The comparison results are recorded in Table 12. As shown in Table 12, compared with some DE variants, the novel ADDE takes more time: the sorting behavior of ADDE and the new one-dimensional update equation increase the complexity of the algorithm. However, the improvement of an algorithm inevitably adds some code, which is tolerable. Compared with all the ABC variants and DEGL, ADDE has better time complexity.
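The complexity measurement protocol can be sketched as follows. Everything here is a stand-in: `arithmetic_probe` approximates the CEC2014 reference expression, `f18_stub` replaces the real f18 function, and the evaluation count is reduced so the demo runs quickly.

```python
import time
import numpy as np

def bench(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

rng = np.random.default_rng(6)
D, N_EVALS = 30, 20_000             # reduced evaluation count for a quick demo

def arithmetic_probe():
    # Stand-in for the CEC2014 reference arithmetic expression timed as T0
    x = 0.55
    for _ in range(N_EVALS):
        x = x + x; x = x / 2; x = x * x; x = np.sqrt(x); x = np.log(x + 1)

def f18_stub(x):
    # Placeholder objective; the real protocol times CEC2014's f18
    return float(np.sum(x ** 2))

def evaluate_only():
    # N_EVALS bare function evaluations, timed as T1
    for _ in range(N_EVALS):
        f18_stub(rng.random(D))

def optimizer_run():
    # Evaluations embedded in a (dummy) optimization loop, timed as T2
    best = np.inf
    for _ in range(N_EVALS):
        best = min(best, f18_stub(rng.random(D)))

T0 = bench(arithmetic_probe)
T1 = bench(evaluate_only)
T2_hat = np.mean([bench(optimizer_run) for _ in range(5)])   # average of 5 runs
complexity = (T2_hat - T1) / T0
```

Subtracting T1 isolates the algorithm's own overhead (sorting, update equations) from the cost of the objective function itself, which is exactly why ADDE's ranking step shows up in this metric.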

CONCLUSION
DE is a simple and powerful algorithm that has attracted more and more attention in recent years. We know that the outstanding exploitation ability of DE is largely due to its multi-dimensional search strategy, which enables it to approach the global optimal solution quickly. But for finding better solutions, the one-dimensional search strategy ensures a greater update success rate; especially in the late stage of the optimization, the one-dimensional search strategy reduces the chance of particles falling into local optimal solutions. Based on this, this article proposes an adaptive updating scheme that uses both multi-dimensional and one-dimensional updating strategies to achieve a better balance between exploration and exploitation. The multi-dimensional updating strategy comes from an improvement of the standard DE algorithm, while the one-dimensional updating strategy comes from an improvement of the standard ABC algorithm. The main idea in improving them is to add elitism, which increases the overall optimization speed. To verify the performance of our ADDE algorithm, we compared it with other well-known ABC variants and DE variants on the CEC2014 benchmark suite. The results show that the new algorithm is competitive. In the future, we will further apply ADDE to practical engineering problems (Sung, Sun & Chang, 2020; Sung, Zhao & Chang, 2020; Xu et al., 2020).