A Multistrategy Artificial Bee Colony Algorithm Enlightened by Variable Neighborhood Search

Artificial bee colony (ABC) has a strong exploration ability but a comparatively weak exploitation ability. To enhance its overall performance, we propose a multistrategy artificial bee colony algorithm (ABCVNS for short) based on the variable neighborhood search method. First, a search strategy candidate pool composed of two search strategies, i.e., ABC/best/1 and ABC/rand/1, is proposed and employed in the employed bee phase and the onlooker bee phase. Second, we present another search strategy candidate pool, which consists of the original random search strategy and the opposition-based learning method; it is used to further balance the exploration and exploitation abilities in the scout bee phase. Last but not least, motivated by the neighborhood change scheme of variable neighborhood search, a simple yet efficient choice mechanism of search strategies is presented. Subsequently, the effectiveness of ABCVNS is evaluated on two test suites composed of fifty-eight problems, and comparisons between ABCVNS and several famous methods are carried out. The related experimental results clearly demonstrate the effectiveness and superiority of ABCVNS.


Introduction
There are a vast number of problems to be optimized in industrial production and daily life. In order to achieve better solutions to these problems, a great number of algorithms have been developed. Owing to the shortcomings of deterministic approaches (e.g., the requirement that the objective function be continuous), various nature-inspired algorithms have been developed to better deal with extremely difficult optimization problems. Generally speaking, during the past four decades, researchers have developed different nature-inspired approaches such as genetic algorithms [1,2], particle swarm optimization [3], differential evolution (DE) [4], and artificial bee colony [5,6].
Besides practical applications of ABC in various research fields, many researchers have concentrated on improving the performance of the traditional ABC on function optimization problems with different characteristics such as nonconvexity, noncontinuity, and separability. For example, Alatas [9] proposed a chaotic ABC, in which both a chaotic initialization technique and a chaotic search method are introduced. Inspired by the search process of particle swarm optimization, Zhu and Kwong [10] designed a novel search technique for improving ABC; in the proposed GABC, the search can effectively utilize the information of the global best individual. Motivated by the search strategy of DE [4], Gao and Liu [11] developed a modified artificial bee colony (named MABC), in which a new equation named ABC/best/1 is proposed. To enhance the level of information sharing among individuals, Akay and Karaboga [12] introduced a new control parameter called the modification rate, which controls how many parameters are randomly changed and effectively enhances the exploitation ability. ABC has also frequently been hybridized with local search approaches [13] to enhance its search ability. More recently, many researchers have utilized hybridization of multiple search strategies to balance the exploration and exploitation abilities of ABC [18,21]. For instance, Gao et al. [18] first constructed a strategy candidate pool composed of three search strategies with different search abilities and then proposed an adaptive selection mechanism to select a search strategy for each individual based on previous search experiences. The proposed algorithm, namely, MuABC, achieved a better performance when compared with the standard ABC and other state-of-the-art algorithms. Kiran et al. [21] first chose five search strategies to form a strategy candidate pool and then designed a probabilistic selection scheme for choosing a search strategy during the evolving process. The proposed approach, called ABCVSS for short, outperformed the basic ABC, ABC variants, and other kinds of methods in terms of solution quality in most cases.
To further enhance the basic ABC's performance, a multistrategy ABC inspired by the variable neighborhood search technique [28-30] is proposed in this work. For convenience, it is named ABCVNS. In ABCVNS, both ABC/best/1 and ABC/rand/1 are used to construct the first search strategy candidate pool, which is utilized in both the employed bee phase and the onlooker bee phase. Furthermore, a second search strategy candidate pool, consisting of the original random search strategy and an opposition-based learning method, is employed in the scout bee phase. Then, a novel mechanism for choosing search strategies is proposed, inspired by the variable neighborhood search method. In addition, an opposition-based learning method is employed to generate an initial population with better diversity. To comprehensively show the advantages of ABCVNS, experiments on a large number of benchmark problems are conducted, and comparisons between ABCVNS and many other famous methods are also provided. The related comparative results demonstrate that the proposed ABCVNS is a competitive method. The remainder of the work is organized as follows. Section 2 briefly describes the basic ABC. Next, the proposed multistrategy ABC is described in detail in Section 3. In Section 4, comparative experiments are carried out and the results are discussed in detail. Finally, Section 5 concludes the work and puts forward a few future research directions.

Classical ABC
Inspired by the collective intelligence behaviors of bee swarms [5], Karaboga developed ABC in 2005. In ABC, the bee swarm is divided into three groups: employed bees, onlooker bees, and scout bees. Among them, employed bees account for half of the swarm and onlooker bees form the other half. As far as the labor division of honey bees is concerned, the task of exploring nectar sources is undertaken by the employed bees. After that, they pass the information about nectar amounts on to the onlooker bees. On the basis of the shared information, a food source is selected and exploited by an onlooker bee with a certain probability. If an employed bee or an onlooker bee exhausts a food source, then the corresponding bee takes on the role of a scout bee, which performs a random search to escape from a local trap.
Generally, by imitating the foraging behavior of a honey bee colony, ABC is made up of four sequential phases: the initialization, employed bee, onlooker bee, and scout bee phases.
Before the bees begin gathering nectar, a population of N_p artificial individuals is randomly generated according to the following equation:

x_ij = x_j^min + ξ · (x_j^max − x_j^min),  (1)

where i = 1, 2, ..., N_p and j = 1, 2, ..., D; x_j^max and x_j^min represent the upper and lower bounds of component j, respectively; and ξ is a random number in the range [0, 1). Each randomly generated D-dimensional vector x_i represents an artificial agent. Meanwhile, a suitable stopping criterion and the parameter limit, which controls the appearance of scout bees, should be predefined. Following the initialization phase, employed bees explore food sources according to the following equation:

v_ij = x_ij + φ_ij · (x_ij − x_kj),  (2)

where i = 1, 2, ..., N_p, and k ∈ {1, ..., N_p} together with j ∈ {1, ..., D} are randomly produced from a uniform distribution. In addition, k has to be different from i. φ_ij is a randomly generated number in [−1, 1].
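The two equations above can be sketched in Python as follows. This is a minimal illustrative sketch, not the paper's implementation; the function names `init_population` and `neighbor_search` are our own.

```python
import random

def init_population(n_p, lb, ub):
    """Random initialization, equation (1): x_ij = x_j^min + xi*(x_j^max - x_j^min)."""
    return [[l + random.random() * (u - l) for l, u in zip(lb, ub)]
            for _ in range(n_p)]

def neighbor_search(pop, i):
    """Basic ABC neighbor search, equation (2): perturb one randomly chosen
    dimension j of solution i relative to a random neighbor k != i."""
    n_p, dim = len(pop), len(pop[0])
    k = random.choice([t for t in range(n_p) if t != i])  # neighbor index, k != i
    j = random.randrange(dim)                             # single dimension to change
    phi = random.uniform(-1.0, 1.0)                       # phi_ij in [-1, 1]
    v = pop[i][:]                                         # copy the current solution
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    return v
```

Note that, exactly as in the text, only one randomly chosen component of the candidate solution is changed per search.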
Next, the fitness values of the artificial individuals are calculated as follows:

fit_i = 1 / (1 + f_i) if f_i ≥ 0, and fit_i = 1 + |f_i| otherwise,  (3)

where f_i and fit_i denote the cost value and the fitness value of the i-th artificial individual, respectively. At the beginning of the second stage, probability values are calculated according to the following formula:

p_i = fit_i / Σ_{n=1}^{N_p} fit_n,  (4)

where p_i denotes the probability of the i-th artificial food source being chosen by an onlooker bee. It depends on the nectar amount of the corresponding food source. That is, the higher fit_i is, the higher the chance of choosing the i-th food source. In this way, employed bees pass information on to onlooker bees.
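Equations (3) and (4) translate directly into code; the following sketch (with hypothetical function names) shows the fitness transformation and the roulette-wheel probabilities used by the onlooker bees.

```python
def fitness(f):
    """Fitness of a solution with cost f, equation (3)."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def selection_probabilities(fits):
    """Roulette-wheel selection probabilities for onlooker bees, equation (4)."""
    total = sum(fits)
    return [fit / total for fit in fits]
```

For example, costs of 0 and 3 map to fitness values 1 and 0.25, so the first food source would be chosen four times as often as the second.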
Based on the probabilities of equation (4), each onlooker bee chooses a food source in turn and then exploits around the corresponding food source using equation (2). Next, the predetermined parameter limit is employed to decide whether a scout bee occurs. Concretely speaking, if a bee performs limit consecutive searches around the same food source without achieving a better one, the bee becomes a scout bee. That is, it randomly searches for a new food source to jump out of a local trap according to the following equation:

x_ij = x_j^min + ξ · (x_j^max − x_j^min),  (5)

where j = 1, 2, ..., D and the other parameters have the same settings as those of equation (1). During the foraging process of the honey bee colony, the bees may cross some borders.
That is, the artificial individuals/solutions may violate the boundary constraints. To make such solutions feasible, the following equation is employed to repair them:

x_ij = x_j^min if x_ij < x_j^min, and x_ij = x_j^max if x_ij > x_j^max.  (6)

To summarize, after initializing a population, the other stages of ABC are executed repeatedly until a halt condition is met.
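The repair rule can be sketched as a simple clamp to the nearest violated bound. This is one common form of such a repair step, offered here as an assumption since the original displayed equation is not reproduced in the text; `repair` is an illustrative name.

```python
def repair(x, lb, ub):
    """Clamp each violated component back to its nearest bound
    (one common form of the repair rule described as equation (6))."""
    return [min(max(xj, l), u) for xj, l, u in zip(x, lb, ub)]
```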

A Multistrategy Artificial Bee Colony Algorithm

Initialize a Population in View of Opposition-Based Learning.
First of all, a population of N_p artificial individuals is randomly produced according to equation (1). Based on these initial individuals/solutions, opposite solutions are generated to improve population diversity. That is, an opposition-based learning (OBL) method [31] is used to construct the opposite solutions. Since 2008, the OBL method has been widely applied in many population-based algorithms such as DE [32,33] and ABC [11]. More concretely, the following equation is employed here to produce oppositional vectors:

x̆_ij = x_j^min + x_j^max − x_ij,  (7)

where j = 1, 2, ..., D and the rest of the parameters have the same settings as those of equation (1).
By integrating the OBL and the random initialization approaches, the corresponding integrated initialization approach can be listed in Algorithm 1.
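The integrated initialization can be sketched as follows: each random solution is paired with its opposite, and the better of the two is kept. This is a minimal sketch of the idea behind Algorithm 1 under our own naming (`opposite`, `obl_init`, and the `cost` callback are assumptions), not the paper's exact listing.

```python
import random

def opposite(x, lb, ub):
    """Opposition-based learning, equation (7): x_opp_j = x_j^min + x_j^max - x_j."""
    return [l + u - xj for xj, l, u in zip(x, lb, ub)]

def obl_init(n_p, lb, ub, cost):
    """Integrated initialization (Algorithm 1 sketch): pair each random
    solution with its opposite and keep whichever has the lower cost."""
    pop = [[l + random.random() * (u - l) for l, u in zip(lb, ub)]
           for _ in range(n_p)]
    return [min(x, opposite(x, lb, ub), key=cost) for x in pop]
```

Because each survivor is the better member of its pair, the initial population starts no worse, and usually more diverse, than a purely random one.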

Search Strategy Candidate Pool.
To simultaneously obtain better solution accuracy and faster convergence, Gao et al. [18] and Kiran et al. [21] proposed multiple search strategies to coordinate the exploitation and exploration abilities of ABC from different perspectives. In addition, different neighbor search operators have also been employed in other methods [34].
In this work, a new search strategy candidate pool is constructed from two search strategies, and it is used in both the employed bee phase and the onlooker bee phase.
Inspired by DE/best/1, Gao and Liu [11,35,36] first designed an effective search equation named ABC/best/1, and ABC variants such as MABC [11] that adopt it achieved a better performance. The reason is that the global best individual can guide the search of the i-th individual toward the region around the best individual faster than the current individual can. Namely, ABC/best/1 is better than the traditional search strategy in terms of exploitation ability. Therefore, ABC/best/1 is suitable to be integrated into the first candidate pool in this work. Its formula can be described as follows:

v_ij = x_best,j + ξ · (x_aj − x_bj),  (8)

where j ∈ {1, 2, ..., D}, i ∈ {1, 2, ..., N_p}, and a, b ∈ {1, 2, ..., N_p} are mutually different random integers. The two indices i and best are also different from each other. best is the index of the global best individual in the population, and ξ produces a uniformly distributed random number in [0, 1).
Inspired by DE/rand/1, Gao and Liu [36] designed ABC/rand/1. Its exploitation ability is worse than that of ABC/best/1, but its exploration ability is better. To better coordinate the two kinds of abilities, ABC/rand/1 is also added into the candidate pool. Its formula is described as follows:

v_ij = x_aj + ξ · (x_bj − x_cj),  (9)

where a, b, c ∈ {1, 2, ..., N_p} are randomly generated and a ≠ b ≠ c ≠ i. The remainder of the parameters have the same settings as those of equation (8).
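The two strategies of the first candidate pool can be sketched side by side; the function names are our own, and the index handling follows the constraints stated in the text (a, b, c mutually different and different from i, with only one dimension changed per search).

```python
import random

def abc_best_1(pop, i, best):
    """ABC/best/1, equation (8): search in one dimension around the global best."""
    dim = len(pop[0])
    a, b = random.sample([t for t in range(len(pop)) if t not in (i, best)], 2)
    j = random.randrange(dim)
    v = pop[i][:]
    v[j] = pop[best][j] + random.random() * (pop[a][j] - pop[b][j])
    return v

def abc_rand_1(pop, i):
    """ABC/rand/1, equation (9): search in one dimension around a random base."""
    dim = len(pop[0])
    a, b, c = random.sample([t for t in range(len(pop)) if t != i], 3)
    j = random.randrange(dim)
    v = pop[i][:]
    v[j] = pop[a][j] + random.random() * (pop[b][j] - pop[c][j])
    return v
```

The only structural difference is the base vector: the global best in equation (8) versus a random individual in equation (9), which is exactly what shifts the balance from exploitation toward exploration.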
To further coordinate the exploration and exploitation abilities of ABC, another search strategy candidate pool is constructed for the last stage. The second candidate pool is also composed of two search strategies.
Equation (5) is chosen as the first search strategy in the second strategy candidate pool. e second search strategy is described by equation (7).
That is, the original random search strategy and the OBL search strategy together make up the second candidate pool of search strategies.
To improve the comprehensive performance of ABC, two candidate pools with different strategies are presented in this work.

The Choice of Search Strategies.
As far as multiple search strategies are concerned, the choice of search strategy also plays a key role in improving the performance of ABC. Inspired by the variable neighborhood search (VNS) method [28-30], a simple yet efficient mechanism for choosing a search strategy for each bee is proposed.
In VNS, the process of neighborhood change consists of three steps: (i) set an iterative variable denoting a neighborhood as k = 1; (ii) if k does not exceed a predetermined k_max, perform a local search in the k-th neighborhood and compare the objective value of the newly generated solution x′ with that of the incumbent x previously achieved; and (iii) if an improvement is obtained, reset k to its initial value and update the incumbent, namely, replace x with x′; otherwise, consider the next neighborhood, i.e., k = k + 1, and go to (ii). More details can be found in the literature [37].
In this work, each search strategy in a candidate pool is considered as a neighborhood. Thus, the main idea of choosing a search strategy (or changing the neighborhood, in VNS terms) is described as follows: (i) set k = 1; (ii) a bee performs a search according to search strategy k in the corresponding candidate pool; and (iii) if an improvement is obtained, the next bee continues to search for a food source using the same search strategy; otherwise, the next search strategy is chosen, namely, k = k + 1, which means that the next bee performs a search using the next strategy. If k exceeds its predefined maximum value, k is reset to 1, which indicates that the first search strategy will be used by the next bee.
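The switching rule above can be sketched as a small driver that, given the success/failure outcome of each bee's search, records which strategy index each bee uses. This is an illustrative sketch (`strategy_schedule` is our own name), not the paper's code.

```python
def strategy_schedule(improved_flags, k_max=2):
    """VNS-inspired strategy choice: after a successful search the next bee
    keeps the same strategy k; after a failure it moves to the next strategy,
    wrapping back to strategy 1 once k exceeds k_max."""
    k, schedule = 1, []
    for improved in improved_flags:
        schedule.append(k)
        if not improved:
            k = k % k_max + 1  # advance and wrap: 1 -> 2 -> ... -> k_max -> 1
    return schedule
```

With two strategies in the pool (k_max = 2), a run of failures simply alternates between the strategies, while a success pins the current strategy for the following bee.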

The Proposed Method.
In the light of the aforementioned analysis, the major steps of ABCVNS are summarized in Algorithm 2.

Benchmark Problems and Experimental Settings.
To test the effects of the modifications in ABCVNS, the twenty-eight benchmark problems given in Table 1 are employed in the following comparisons. These benchmark problems cover many different kinds of optimization problems; their detailed descriptions can be found in the research work [21].
Except for functions f22 and f23, all benchmark problems are partitioned into two groups in the following experiments: one group consists of 30-dimensional functions, and the other consists of 60-dimensional functions. In the first group, functions f22 and f23 are tested with D = 100; in the second group, they are tested with D = 200. Accordingly, the maximum number of function evaluations (parameter maxFEs) is set to 15e4 and 30e4, respectively. To demonstrate the advantage of ABCVNS, it is first compared with ABC. For the two contenders, the remaining parameters are given below: (i) ABC: the colony size is 40, namely, N_p = 20 [18], and the control parameter limit is set to N_p * D [8,21]; (ii) ABCVNS: the colony size is 40, namely, N_p = 20 [18], and the control parameter limit is also set to N_p * D [8,21]. For the following experiments, the aforementioned parameter values are used unless a change is explicitly mentioned. Moreover, each algorithm is independently run 30 times on each benchmark problem.

Comparison of ABC vs. ABCVNS.
To assess the effectiveness of ABCVNS, comprehensive comparisons between ABCVNS and ABC are carried out. The sizes of the test problems are set to D = 30 and D = 60, respectively, and the corresponding results are provided in Tables 2 and 3. Concretely, we provide statistical results such as standard deviation values (Std., listed in the ninth column of Tables 2 and 3) obtained by ABC and ABCVNS over 30 independent runs. Furthermore, Wilcoxon signed rank tests between ABC and ABCVNS are performed at the 5% significance level. The related significance statuses (Sig. for short) are listed in the tenth column of Tables 2 and 3, respectively. The symbols "+/≈/−" indicate that ABCVNS is better than, equal to, or inferior to ABC, respectively. Moreover, some representative convergence curves of ABC and ABCVNS are shown in Figures 1 and 2 to display the convergence rate of ABCVNS more clearly.
In the light of the tenth column of Table 2, ABCVNS is superior or equal to ABC on almost all the test problems. In terms of the mean values found by ABC and ABCVNS, the solution accuracies of ABCVNS are obviously enhanced with respect to those of ABC on nine benchmark functions, namely, f01, f02, f03, f04, f05, f08, f18, f20, and f26. As a note, both algorithms are coded in MATLAB R2014a, which implies that results smaller than 1e−308 are reported as zero. Furthermore, it is worth noting that ABCVNS achieves the global optima of seven test problems, namely, f07, f11, f12, f14, f20, f24, and f25, which cover unimodal-separable (US), multimodal-separable (MS), and multimodal-nonseparable characteristics.
ALGORITHM 1: The integrated initialization method (for each individual x_i, generate the opposite solution x̆_i via equation (7) and keep the better of x̆_i and x_i).

In the tables, "†" indicates that ABCVNS is better than ABC by the Wilcoxon signed rank test at α = 0.05, "−" means that ABCVNS is inferior to ABC, and "≈" means that there is no significant difference between ABCVNS and ABC. From the experimental results, it is clear that ABCVNS is obviously better than ABC. As seen from Figure 1, ABCVNS is superior to ABC in terms of solution accuracy or convergence speed on most of the representative problems. In particular, the convergence speed of ABCVNS is faster than that of ABC even in cases where the solution accuracies found by the two algorithms are the same. The superiority of ABCVNS on problems such as f11 and f24 is shown in Figures 1(f) and 1(j).
According to the last column of Table 3, ABCVNS is superior or equal to ABC on almost all benchmark problems even though the problem size rises from 30 to 60, which verifies that ABCVNS is robust to the problem size. Furthermore, ABCVNS achieves the global optima of seven benchmark functions, namely, f07, f11, f12, f13, f20, f24, and f26. As with the test problems with D = 30, the superiority of ABCVNS in terms of solution accuracy and convergence speed is also maintained on the test problems with D = 60, as shown in Figure 2. Generally speaking, ABCVNS achieves better performance than ABC on most of the test problems. That is, our modifications to ABC are effective.

Comparisons among ABCVNS and Other Famous ABCs.
To further verify the superiority of ABCVNS, comparisons between ABCVNS and a few well-known or recently published methods are conducted.
These famous contenders include GABC [10], ABCBest1 [35], MABC [11], and ABCVSS [21]. To make a fair comparison, the terminal condition for all approaches is the maximum number of function evaluations, which is set to 15e4 for the 30-dimensional test problems (problems f22 and f23 with D = 100) and 30e4 for the 60-dimensional test problems (problems f22 and f23 with D = 200). The remaining parameters are the same as those employed in the research work [21]. The statistical results found by each algorithm are provided in Tables 4 and 5. For brevity, the results of the competitors of ABCVNS are directly adopted from the research findings of Kiran et al. [21].
Next, comparisons among ABCVNS and its competitors are carried out based on rank. The related results are listed in Tables 6 and 7, respectively. For these comparisons, the mean value is first used to compare ABCVNS and a contender. If the mean values are identical, the standard deviation values are used to decide which method is better. If both are identical, the methods share the same rank.
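The ranking procedure just described can be sketched as follows. This is a simple dense-ranking sketch under our own naming (`rank_on_problem`); the paper does not specify how subsequent ranks are numbered after a tie, so that detail is an assumption.

```python
def rank_on_problem(stats):
    """Rank algorithms on one problem by mean, tie-broken by standard
    deviation; algorithms with identical (mean, std) share a rank.

    stats maps algorithm name -> (mean, std)."""
    distinct = sorted(set(stats.values()))  # tuples compare by mean, then std
    return {name: distinct.index(key) + 1 for name, key in stats.items()}
```

Averaging these per-problem ranks over all benchmark functions yields the average rank reported in the last rows of Tables 6 and 7.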
As seen from Tables 4 and 6, ABCVNS is superior or equal to its competitors, i.e., GABC, ABCBest1, MABC, and ABCVSS, in most cases. As seen from Table 6, ABCVNS ranks first among the five algorithms in terms of average rank. In sum, ABCVNS is a competitive algorithm.
From Tables 5 and 7, it can be observed that the best results are mainly achieved by ABCVNS or ABCVSS. In particular, ABCVNS keeps its competitive advantage over its contenders even though the problem size increases from 30 to 60. As before, ABCVNS ranks first among the five approaches, as shown in the last row of Table 7.
For a fair comparison, the maximum number of function evaluations is employed as the halt condition. It is set to 10e4, which is also used in other research works [38,39]. The comparative results are reported in Table 8. Note that the average and the standard deviation of the function error values f(X_best) − f(X*) are provided, where X_best denotes the best solution obtained by the corresponding algorithm in each run and X* represents the true global optimal solution. Furthermore, except for the results found by ABCVNS, all other reported results are directly taken from the research work [38].
As shown in Table 8, both DFSABC_elite and ABCVNS are better than dABC, qABC, and ABCVSS. More concretely, DFSABC_elite wins first place according to the average rank value reported in the last row of Table 8. Although ABCVNS is slightly inferior to DFSABC_elite, the orders of magnitude of the mean values obtained by ABCVNS are very close to those of DFSABC_elite on ten test functions, i.e., F7, F10, F11, F12, F14, F16, F21, F22, F26, and F28. In particular, for the benchmark functions F26 and F28, the mean values of the solutions found by ABCVNS are equal to those found by DFSABC_elite; DFSABC_elite beats ABCVNS on these two functions merely because its standard deviation values are slightly better. In sum, the proposed ABCVNS is also suitable for solving very difficult problems.
To investigate ABCVNS comprehensively, another comparison among three recent ABC variants and ABCVNS is carried out on twenty-two traditional complex functions with D = 30. These problems are also employed in the research work [38]. The experimental results are reported in Table 9. Except for the results found by ABCVNS, all other results are directly adopted from the research work [38].
As seen from Table 9, ABCVNS wins first place according to the average rank value. Furthermore, it is worth noting that ABCVNS ranks first on thirteen of the twenty-two functions. DFSABC_elite takes second place, and qABC is better than dABC. In particular, ABCVNS finds the global optima of five test functions, including f07 and f11. (In the tables, bold entries denote the best results.)

Conclusion
In this work, two search strategy candidate pools are proposed in order to resolve the contradiction between fast convergence speed and high solution accuracy. The first candidate pool is composed of two search strategies, i.e., ABC/best/1 and ABC/rand/1, and is employed in the employed bee phase and the onlooker bee phase. The second candidate pool also consists of two search strategies, namely, the original random search strategy and the OBL method; it is employed in the scout bee phase to achieve a better compromise between the exploration and exploitation abilities. In addition, a simple yet efficient choice mechanism of search strategies is presented, inspired by the variable neighborhood search algorithm. A new variant of ABC, called ABCVNS for short, is thus proposed. To validate the convergence performance of the proposed ABCVNS, experiments on twenty-eight benchmark functions are performed. The basic comparison results between ABC and ABCVNS demonstrate that the modifications to ABC take effect; that is, ABCVNS obtains better performance than ABC. To fully validate the effectiveness of ABCVNS, it is compared with four other famous algorithms including ABCVSS. The related experimental results show that ABCVNS wins first place according to the average rank. Subsequently, ABCVNS is further tested on a very difficult test suite, i.e., the CEC2014 benchmark functions. The related experimental results also demonstrate its superiority.
In a word, the proposed ABCVNS can be considered a promising method. In the future, smarter mechanisms for choosing among different strategies are worth developing to take full advantage of the various strategies.

Data Availability
The related benchmark problems used to support the findings of this study can be found in this article or at the web site https://www.ntu.edu.sg/home/epnsugan/.

Conflicts of Interest
All authors declare that there are no conflicts of interest regarding the publication of this paper.