Utilizing cumulative population distribution information in differential evolution
Graphical abstract
The mutation, crossover, and selection of CPI-DE.
Introduction
Differential Evolution (DE), proposed by Storn and Price [1], [2] in 1995, is a very popular evolutionary algorithm (EA) paradigm. During the past two decades, DE has attracted a lot of attention and has been successfully applied to solve a variety of numerical and real-world optimization problems [3], [4], [5].
The remarkable advantages of DE are its simple structure and ease of implementation. In DE, each individual in the population is called a target vector. DE contains three basic operators: mutation, crossover, and selection. During the evolution, DE generates a trial vector for each target vector through the mutation and crossover operators. Afterward, the trial vector competes with its target vector for survival according to their fitness. DE also involves three control parameters: the population size, the scaling factor, and the crossover control parameter. The performance of DE depends mainly on these three operators and three control parameters. To further improve the performance of DE, many DE variants have been designed, such as JADE [6], jDE [7], SaDE [8], EPSDE [9], CoDE [10], and so on.
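The mutation, crossover, and selection operators described above can be sketched for the classic DE/rand/1/bin scheme as follows. This is a minimal illustration, not the paper's exact setup; the function name `de_generation` and the default parameter values are ours:

```python
import numpy as np

def de_generation(pop, fitness, fobj, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    for i in range(NP):
        # Mutation: combine three distinct individuals other than the target.
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: mix mutant and target vector component-wise.
        jrand = rng.integers(D)          # guarantees at least one mutant component
        mask = rng.random(D) < CR
        mask[jrand] = True
        trial = np.where(mask, mutant, pop[i])
        # Selection: the trial vector replaces the target if it is at least as good.
        f_trial = fobj(trial)
        if f_trial <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness
```

Because selection is greedy, the best fitness in the population can never deteriorate from one generation to the next.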
DE is a population-based optimization algorithm; however, population distribution information has not yet been widely utilized in the DE community, which makes DE inefficient especially when solving some optimization problems with complex characteristics. Very recently, two attempts have been made along this line [11], [12]. However, the methods proposed in Refs. [11], [12] only utilize the distribution information from a single population of one generation, and the cumulative distribution information of the population over the course of evolution has been ignored. Moreover, these methods introduce some extra parameters. Therefore, new insights into the usage of the population distribution information in DE are quite necessary.
In 2001, Hansen and Ostermeier [13] proposed the well-known covariance matrix adaptation evolution strategy, called CMA-ES. CMA-ES generates offspring by sampling a multivariate normal distribution, which includes three main elements: mean vector of the search distribution, covariance matrix, and step-size. Indeed, covariance matrix reflects the population distribution information to a certain degree [12]. In CMA-ES, the covariance matrix is self-adaptively updated according to the information from the previous and current generations.
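The cumulative update of the covariance matrix can be illustrated with a simplified, rank-μ-style blending step loosely modeled on CMA-ES. This is a hedged sketch only: the function name, the learning rate `c_mu`, and the choice of weights are illustrative assumptions, not the exact CMA-ES update:

```python
import numpy as np

def update_covariance(C, selected, mean_old, weights, c_mu=0.2):
    """Blend the old covariance matrix with the weighted empirical
    covariance of the current generation's selected individuals
    (a simplified, rank-mu-style sketch; c_mu is an assumed learning rate)."""
    # Deviations of the selected individuals from the previous mean.
    Y = selected - mean_old                  # shape (mu, D)
    # Weighted empirical covariance of the current generation.
    C_gen = (weights[:, None] * Y).T @ Y     # = Y^T diag(weights) Y
    # Cumulative update: old information decays, new information accumulates.
    return (1 - c_mu) * C + c_mu * C_gen
```

The key point mirrored here is that the covariance matrix is not re-estimated from scratch each generation; information from previous generations is retained and blended with the current one.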
In this paper, we make use of the cumulative distribution information of the population to establish an Eigen coordinate system in DE, taking covariance matrix adaptation (CMA) as an effective tool. Furthermore, we suggest a cumulative population distribution information based DE framework called CPI-DE. In CPI-DE, for each target vector, the crossover operator of DE is implemented in both the original coordinate system and the Eigen coordinate system and, as a result, two trial vectors are generated. Subsequently, the target vector is compared with these two trial vectors, and the best one enters the next population. CPI-DE is applied to two classic DE versions as well as three state-of-the-art DE variants. Extensive experiments on two benchmark test sets from the 2013 IEEE Congress on Evolutionary Computation (IEEE CEC2013) [14] and the 2014 IEEE Congress on Evolutionary Computation (IEEE CEC2014) [15] have been conducted to verify the effectiveness of CPI-DE.
The main contributions of this paper can be summarized as follows:
- Because a single population does not contain enough information to estimate the covariance matrix reliably, this paper updates the covariance matrix in DE by an adaptation procedure, which makes use of the cumulative distribution information of the population.
- CPI-DE provides a simple yet efficient synergy of two kinds of crossover: the crossover in the Eigen coordinate system and the crossover in the original coordinate system. The former aims to identify the properties of the fitness landscape and to improve the efficiency and effectiveness of DE by producing offspring along promising directions, while the latter maintains the strengths of the original DE. Moreover, no extra parameters are required in CPI-DE.
- Our experimental studies have shown that CPI-DE is capable of enhancing the performance of several classic DE versions and advanced DE variants.
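The crossover in the Eigen coordinate system mentioned above can be sketched as an ordinary binomial crossover carried out after rotating both parents into the eigenvector basis. The function name `eigen_crossover` and its signature are ours, for illustration only:

```python
import numpy as np

def eigen_crossover(target, mutant, B, CR, rng):
    """Binomial crossover performed in the Eigen coordinate system.
    B is an orthogonal matrix whose columns are the eigenvectors of the
    (cumulatively adapted) population covariance matrix; a sketch of the
    CPI-DE idea, with illustrative names."""
    # Rotate target and mutant into the Eigen coordinate system.
    t_e, m_e = B.T @ target, B.T @ mutant
    D = target.size
    jrand = rng.integers(D)           # at least one mutant component survives
    mask = rng.random(D) < CR
    mask[jrand] = True
    trial_e = np.where(mask, m_e, t_e)
    # Rotate the trial vector back to the original coordinate system.
    return B @ trial_e
```

In CPI-DE, a second trial vector is generated by the usual crossover in the original coordinate system (equivalent to taking `B` as the identity matrix), and the best of the target vector and the two trial vectors survives.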
The rest of this paper is organized as follows. Section 2 describes the basic procedure of DE. Section 3 briefly reviews the recent developments of DE in the last five years. The proposed CPI-DE is presented in Section 4. The experimental results and the performance comparison are given in Section 5. Finally, Section 6 concludes this paper.
Section snippets
Differential evolution (DE)
Similar to other EA paradigms, DE starts with a population of NP individuals, i.e., P_g = {x_{1,g}, ..., x_{NP,g}}, where g is the generation number, D is the dimension of the decision space, and NP is the population size. In P_g, each individual x_{i,g} = (x_{i,1,g}, ..., x_{i,D,g}) is also called a target vector. At g = 0, the jth decision variable of the ith target vector is initialized as follows: x_{i,j,0} = L_j + rand(0,1) * (U_j - L_j), where rand(0,1) represents a uniformly distributed random number
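The initialization step can be sketched as follows; a minimal illustration assuming `L` and `U` are the vectors of lower and upper bounds of the decision variables:

```python
import numpy as np

def initialize(NP, L, U, rng=None):
    """Standard DE initialization: each decision variable is drawn
    uniformly at random within its bounds (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    D = len(L)
    # x_{i,j,0} = L_j + rand(0,1) * (U_j - L_j), for all i and j.
    return L + rng.random((NP, D)) * (U - L)
```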
The related work
The recent two decades have witnessed significant progress in the development of DE. In 2011, Das and Suganthan [16] presented a comprehensive survey on DE, including the basic concepts and major variants of DE, as well as the applications and theoretical studies of DE. Next, we will briefly introduce the recent developments of DE in the last five years.
Motivation
Based on the above introduction, it is clear that population distribution information has seldom been involved in the current state-of-the-art DE.
Very recently, Guo and Yang [11] and Wang et al. [12] made the first attempt to exploit the population distribution information in DE. The methods proposed in Refs. [11], [12] share some similar ideas. More specifically, these two methods firstly compute the covariance matrix of the population. Subsequently, the Eigenvectors obtained from the Eigen decomposition of this covariance matrix
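The first step shared by the methods of Refs. [11], [12], computing the population covariance matrix and its eigendecomposition to obtain a new coordinate system, can be sketched as follows (variable names are illustrative):

```python
import numpy as np

# Sketch: derive an Eigen coordinate system from the current population.
pop = np.random.default_rng(2).normal(size=(50, 4))  # toy population, NP=50, D=4
C = np.cov(pop, rowvar=False)      # D x D covariance matrix of the population
eigvals, B = np.linalg.eigh(C)     # columns of B are the eigenvectors of C
# B is orthogonal, so it defines a rotation between the original
# coordinate system and the Eigen coordinate system.
rotated = pop @ B                  # population expressed in Eigen coordinates
```

Note that this uses only a single generation's population; CPI-DE instead adapts the covariance matrix cumulatively over the course of evolution.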
Experimental study
In this paper, two sets of benchmark test functions are employed to demonstrate the effectiveness of CPI-DE, i.e., 28 test functions with 30 dimensions (30D) and 50 dimensions (50D) at IEEE CEC2013 [14], and 30 test functions with 30 dimensions (30D) and 50 dimensions (50D) at IEEE CEC2014 [15]. The 28 test functions in the first set are denoted as CEC2013-1 to CEC2013-28, and the 30 test functions in the second set are denoted as CEC2014-1 to CEC2014-30.
In our experiments, the function error value, i.e., the difference between the best objective function value found and the objective function value of the global optimum, is used as the performance metric
Conclusion
A simple yet efficient DE framework, referred to as CPI-DE, has been presented in this paper. In CPI-DE, the cumulative population distribution information is utilized to establish an Eigen coordinate system for DE's crossover. Moreover, CPI-DE performs the crossover in both the original coordinate system and the Eigen coordinate system in a deterministic manner. As a result, two trial vectors are generated for each target vector, and the best one among the target vector and the two trial vectors survives into the next population
Acknowledgments
The authors would like to thank the anonymous reviewers for their very constructive and helpful suggestions. This work was supported in part by the National Basic Research Program 973 of China (Grant No. 2011CB013104), in part by the Innovation-driven Plan in Central South University (No. 2015CXS012 and No. 2015CX007), in part by the National Natural Science Foundation of China under Grant 61273314, in part by the EU Horizon 2020 Marie Sklodowska-Curie Individual Fellowships (Project ID:
References (72)
- et al., An improved (μ+λ)-constrained differential evolution for constrained optimization, Inf. Sci. (2013)
- et al., Differential evolution algorithm with ensemble of parameters and mutation strategies, Appl. Soft Comput. (2011)
- et al., Differential evolution based on covariance matrix learning and bimodal distribution parameter setting, Appl. Soft Comput. (2014)
- et al., A differential evolution algorithm with intersect mutation operator, Appl. Soft Comput. (2013)
- et al., A directional mutation operator for differential evolution algorithms, Appl. Soft Comput. (2015)
- et al., Subspace clustering mutation operator for developing convergent differential evolution algorithm, Math. Probl. Eng. (2014)
- et al., Empirical investigations into the exponential crossover of differential evolutions, Swarm Evol. Comput. (2013)
- et al., Enhancing the search ability of differential evolution through orthogonal crossover, Inf. Sci. (2012)
- et al., Repairing the crossover rate in adaptive differential evolution, Appl. Soft Comput. (2014)
- et al., Adaptive population tuning scheme for differential evolution, Inf. Sci. (2013)
- Self-adaptive differential evolution algorithm with discrete mutation control parameters, Expert Syst. Appl.
- Enhancing distributed differential evolution with multicultural migration for global numerical optimization, Inf. Sci.
- A new self-adaptation scheme for differential evolution, Neurocomputing
- hABCDE: a hybrid evolutionary algorithm based on artificial bee colony algorithm and differential evolution, Appl. Math. Comput.
- DE-VNS: self-adaptive differential evolution with crossover neighborhood search for continuous global optimization, Comput. Oper. Res.
- Differential evolution improved with self-adaptive control parameters based on simulated annealing, Swarm Evol. Comput.
- Adaptive memetic differential evolution with global and local neighborhood-based mutation operator, Inf. Sci.
- Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim.
- MOMMOP: multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems, IEEE Trans. Cybern.
- JADE: adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput.
- Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems, IEEE Trans. Evol. Comput.
- Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput.
- Differential evolution with composite trial vector generation strategies and control parameters, IEEE Trans. Evol. Comput.
- Enhancing differential evolution utilizing Eigenvector-based crossover operator, IEEE Trans. Evol. Comput.
- Completely derandomized self-adaptation in evolution strategies, Evol. Comput.
- Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-parameter Optimization
- Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-parameter Numerical Optimization, Technical Report 201311
- Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput.
- Differential evolution with ranking-based mutation operators, IEEE Trans. Cybern.
- Differential evolution with neighborhood and direction information for numerical optimization, IEEE Trans. Cybern.
- Differential evolution enhanced with multiobjective sorting-based mutation operators, IEEE Trans. Cybern.
- Improving differential evolution with successful-parent-selecting framework, IEEE Trans. Evol. Comput.
- Differential evolution using mutation strategy with adaptive greediness degree control, Proc. Genet. Evol. Comput. Conf. (GECCO)
- Gaussian bare-bones differential evolution, IEEE Trans. Cybern.
Cited by (83)
- An adaptive biogeography-based optimization with integrated covariance matrix learning for robust visual object tracking, Expert Systems with Applications (2023)
- A population state evaluation-based improvement framework for differential evolution, Information Sciences (2023)
- Function value ranking aware differential evolution for global numerical optimization, Swarm and Evolutionary Computation (2023)