Integrated Optimization of Differential Evolution with Grasshopper Optimization Algorithm

This paper proposes a scheme to improve the performance of the differential evolution (DE) algorithm by integrating it with the grasshopper optimization algorithm (GOA). GOA mimics the behavior of grasshoppers, which move slowly in small steps in the larval stage but abruptly and over long ranges in adulthood; these two behaviors correspond to exploitation and exploration. The GOA concept is added to DE to guide the search toward potential solutions. The efficiency of DE/GOA is validated on unimodal and multimodal benchmark optimization problems. The results show that the DE/GOA algorithm is competitive with other meta-heuristic algorithms.


INTRODUCTION
Meta-heuristic techniques range from simple local search procedures to complex learning processes. The basic concepts of meta-heuristics permit description at an abstract level. Nature-inspired meta-heuristics can be grouped into three main classes: evolution-based, physics-based, and swarm-based.
Evolution-based algorithms are inspired by the concepts of natural evolution. The search starts from a randomly generated population that is developed over subsequent generations: each new generation is created by combining individuals of the previous one, so the population improves over the course of generations. The most popular evolution-based algorithm is the genetic algorithm (GA) [1]. GA uses fitness-based selection and recombination to produce a successor population for the next generation. During recombination, child chromosomes are produced by recombining genetic material from selected parent chromosomes. As this process is repeated, a sequence of consecutive generations evolves and the average fitness of the chromosomes tends to increase until some stopping criterion is reached. Another popular algorithm is differential evolution (DE) [2]; the principal difference between genetic algorithms and differential evolution is that DE uses mutation as the primary search mechanism, while GA uses crossover as the probabilistic mechanism for exchanging information among solutions to locate better ones. The biogeography-based optimizer (BBO) [3] is inspired by the natural distribution of living creatures across islands: different ecosystems are explored to find the relations between species in terms of immigration, emigration, and mutation, and the algorithm seeks a stable situation based on migration and mutation in the evolution of ecosystems.
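The operational contrast between the two families can be sketched in a few lines (a minimal illustration with hypothetical helper names, not the exact operators of any cited work):

```python
import random

def ga_one_point_crossover(parent_a, parent_b):
    """GA: crossover is the primary mechanism for exchanging information
    among solutions; a child mixes genetic material from two parents."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def de_rand_1_mutation(population, i, F=0.5):
    """DE: mutation drives the search; a mutant is built from a scaled
    difference of two population members added to a third."""
    r1, r2, r3 = random.sample([j for j in range(len(population)) if j != i], 3)
    x1, x2, x3 = population[r1], population[r2], population[r3]
    return [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]
```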
The second class of meta-heuristics comprises physics-based algorithms that imitate physical rules. For example, the gravitational search algorithm (GSA) [4] is a nature-inspired framework based on gravitational kinematics, the branch of physics that models the motion of masses under the influence of gravity. The galaxy-based search algorithm [5] is inspired by the spiral arms of spiral galaxies, which it mimics to search the surroundings of a solution; this spiral movement is augmented with chaos to escape from local optima. Curved space optimization [6], inspired by the theory of general relativity, is used to enhance the efficiency of a simple random search and convert it into a very robust optimization tool.
The third class of meta-heuristics comprises swarm-based algorithms that imitate the social behavior of swarms, herds, flocks, or schools of animals. The most popular is particle swarm optimization (PSO) [7], inspired by bird flocking and fish schooling. Each particle is iteratively updated to improve its solution as it moves around the search space, leading the swarm toward the best global solution. Another popular swarm-based algorithm is the bat algorithm (BA) [8], inspired by the echolocation behavior of microbats, which use it to avoid obstacles, find their homes, and detect prey in the darkness. The firefly algorithm (FA) [9] is derived from the flashing light of fireflies, used to attract mates or to warn others about predators.
This paper focuses on the differential evolution algorithm, a global search optimization algorithm, and improves its performance by adding the grasshopper optimization algorithm (GOA) to increase the convergence rate.

Article History
Received 24 March 2018; Accepted 6 June 2018

Keywords
Meta-heuristics; differential evolution algorithm; grasshopper optimization algorithm; optimization

The rest of the paper is structured as follows. Section 2 reviews the differential evolution algorithm. Section 3 presents the proposed algorithm, Section 4 reports the experimental results, and Section 5 draws the conclusions.

THE DIFFERENTIAL EVOLUTION ALGORITHM
The differential evolution algorithm optimizes a problem by maintaining a population of candidate solutions and creating new candidates by combining existing ones according to a simple procedure, then keeping whichever candidate has the better fitness on the optimization problem at hand.
The procedure starts by initializing the target vectors X_i^G, where i = 1, 2, ..., NP. NP is the population size, N is the dimension of each vector, and the superscript G identifies the G-th generation.
The target vectors are used to generate the mutant vectors in the next step. The DE/current-to-best/1 mutation scheme is shown in Eq. (1):

V_i^G = X_i^G + F (X_best^G - X_i^G) + F (X_r1^G - X_r2^G)        (1)

where r1 and r2 are integers chosen from the set {1, 2, ..., NP} and must be different from the index i, X_best^G is the best solution in generation G, and F controls the amplification of the difference vector. The mutant vectors are then recombined with the target vectors through the crossover operation to produce the trial vectors (Eq. (2)). The last step is the selection operation, which chooses the better vector for the next generation: the new population is formed by comparing the fitness of each target vector with that of its trial vector, as shown in Eq. (3):

X_i^{G+1} = U_i^G if f(U_i^G) <= f(X_i^G), otherwise X_i^G        (3)

The whole process is repeated until the termination criteria are satisfied or a predefined number of iterations is reached.

THE PROPOSED ALGORITHM
The DE/current-to-best/1 mutation scheme is modified by splitting the mutation parameter F into two parameters, l and F. The new mutation scheme is shown in Eq. (4):

V_i^G = X_i^G + l (X_best^G - X_i^G) + F (X_r1^G - X_r2^G)        (4)

where l balances the difference vector between the best vector and the target vector. The value of l, generated by Eq. (5), decreases in each generation. Meanwhile, the mutation parameter F amplifies the difference between two random population members; when the procedure is trapped in a local optimum, the value of F is increased by Eq. (6).
The parameters c_1 in Eq. (5) and c_2 in Eq. (6) are uniform random numbers, with c_1 ∈ [0, 0.1] and c_2 ∈ [0, 1]. The crossover parameter CR is regenerated at the end of each generation to find an optimized value; the new crossover parameter is generated by Eq. (7), where mean_A is the usual arithmetic mean, S_CR is the set of crossover parameter values, and c_3 ∈ [0, 1] is a uniform random number.
The crossover operation is also rearranged. The vectors created from the target vectors and the mutant vectors are called the prime vectors U'_i. A prime vector is used as the trial vector if its fitness is better than that of the corresponding mutant vector; otherwise the trial vector is generated by the grasshopper optimization algorithm.
The grasshopper optimization algorithm, proposed by Saremi et al. [10], mimics the behavior of grasshoppers. In the larval stage the swarm moves slowly in small steps, whereas in adulthood it makes abrupt, long-range movements; these two behaviors correspond to exploitation and exploration, respectively. Search agents are encouraged to move abruptly during exploration and to move locally during exploitation, and grasshoppers perform both functions, as well as target seeking, naturally.
The proposed algorithm adopts this movement behavior to generate new values for the trial vectors, as shown in Eq. (8), whose coefficient

c = c_max - l (c_max - c_min) / L

shrinks over the run, where c_max and c_min are the maximum and minimum values of the coefficient, l is the current iteration, and L is the maximum number of iterations.
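The hybrid loop described above can be sketched as follows. This is a simplified illustration: the exact update rules of Eqs. (5)-(7) are replaced by a plain decay tied to the decreasing coefficient of Eq. (8), the GOA step is reduced to a pull toward the best vector, and all parameter values are hypothetical.

```python
import random

def de_goa_minimize(f, dim, bounds, NP=30, L=200, F=0.5, CR=0.9,
                    c_max=1.0, c_min=0.00004):
    """Sketch of the DE/GOA hybrid: DE/current-to-best/1 mutation with the
    scale factor split into l and F, binomial crossover, and a GOA-style
    move (with the Eq. (8)-style shrinking coefficient c) used to
    regenerate trial vectors that do not improve on the target."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for it in range(L):
        best = pop[min(range(NP), key=lambda i: fit[i])]
        c = c_max - it * (c_max - c_min) / L   # shrinking coefficient, as in Eq. (8)
        l = c * F                              # simplified stand-in for Eq. (5)
        for i in range(NP):
            r1, r2 = random.sample([j for j in range(NP) if j != i], 2)
            # mutation, Eq. (4): F split into l (best term) and F (difference term)
            v = [pop[i][d] + l * (best[d] - pop[i][d])
                 + F * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
            # binomial crossover -> prime vector
            jrand = random.randrange(dim)
            u = [v[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
            if f(u) > fit[i]:
                # GOA-style regeneration: pull the vector toward the best
                u = [pop[i][d] + c * (best[d] - pop[i][d]) for d in range(dim)]
            # selection, Eq. (3): keep whichever of target/trial is fitter
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    return min(zip(fit, pop))
```

On a smooth test function such as the sphere, the shrinking coefficient produces wide moves early (exploration) and small refinements late (exploitation), which is the behavior the grasshopper analogy is meant to capture.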

THE EXPERIMENTAL RESULTS
The performance of the proposed algorithm was evaluated on thirteen benchmark functions [11]. The test functions are unimodal (f1-f7) and multimodal (f8-f13). The details of the benchmark functions are shown in Table 1. The mutation parameters were set to l = F = 0.5, the crossover parameter to CR = 0.9, and the population size to NP = 100.
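As a concrete illustration of the unimodal/multimodal distinction, two standard benchmarks in their usual forms are shown below (the sphere function is commonly f1 in such suites, and Rastrigin is a common multimodal entry; that these match the definitions in Table 1 exactly is an assumption):

```python
import math

def sphere(x):
    """Unimodal: a single global optimum at the origin, f(0) = 0."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: many regularly spaced local optima; global optimum at
    the origin with f(0) = 0."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
```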
The proposed algorithm was run 30 times on each benchmark function. The average results are summarized in Table 2. The performance of the proposed algorithm is compared with two evolutionary algorithms, DE and fast evolutionary programming (FEP), and with a swarm-based algorithm, PSO. The results of DE, FEP, and PSO were taken from those reported in Saremi et al. [12].
According to the results, the proposed algorithm increases the convergence rate because GOA contributes to both exploration and exploitation. Meanwhile, the results on the multimodal functions are not perfect, which may be because those functions contain a large number of local optima.

CONCLUSION
This work improved the performance of the DE algorithm by integrating it with the GOA. Thirteen benchmark functions were employed to evaluate performance in terms of exploitation and exploration. The results showed that DE/GOA was able to provide competitive results compared with well-known heuristics such as DE, FEP, and PSO.