
Applied Soft Computing

Volume 41, April 2016, Pages 479-487

Not guaranteeing convergence of differential evolution on a class of multimodal functions

https://doi.org/10.1016/j.asoc.2016.01.001

Highlights

  • We construct a Linear Deceptive function as a representative of a class of multimodal functions.

  • DE cannot guarantee convergence in probability on the above class of multimodal functions.

  • A random drift model is used for the first time to analyze the convergence of a real-coded evolutionary algorithm.

  • DE's mutation operators prefer to search within the region where the target individuals aggregate.

Abstract

The theoretical study of the differential evolution (DE) algorithm has gradually attracted the attention of more and more researchers. According to recent research, the classical DE cannot guarantee global convergence in probability except on some special functions. From this perspective, a natural question arises: on which functions can DE not guarantee global convergence? This paper first shows that DE variants have difficulty solving a class of multimodal functions (such as the Shifted Rotated Ackley's function) identified by two characteristics. One is that the global optimum of the function lies near a boundary of the search space. The other is that the function has a large deceptive optima set in the search space. By simplifying this class of multimodal functions, the paper then constructs a Linear Deceptive function. Finally, the paper develops a random drift model of the classical DE algorithm and proves that the algorithm cannot guarantee global convergence on the class of functions identified by the two characteristics above.

Introduction

The differential evolution (DE) algorithm proposed by Storn and Price in 1995 [1] is a population-based stochastic parallel evolutionary algorithm. DE has emerged as a very competitive form of evolutionary computing [2], [3], [4] and has found many practical applications, such as function optimization, multi-objective optimization, classification, and scheduling.

Since theoretical studies help in understanding an algorithm's search behavior and in developing more efficient algorithms, more and more researchers have paid attention to the theory of DE as its applications have grown in popularity. In 2005, Zielinski et al. [5] theoretically investigated the runtime complexity of DE under various stopping criteria, including a fixed number of generations (Gmax) and the maximum distance criterion (MaxDist). From 2001 to 2010, Zaharie [6], [7], [8], [9], [10], [11], Dasgupta et al. [12], [13] and Wang et al. [14] analyzed the dynamical behavior of DE's population from different perspectives, namely its statistical characteristics, its gradient-descent-type search characteristics, and its stochastic evolving characteristics, respectively. Recently, several convergent DE algorithms [15], [16], [17], [18], [19] have been developed.

This paper focuses on convergence analyses of the classical DE. Several important conclusions on convergence have been drawn. In 2005, Xu et al. [20] performed mathematical modeling and a convergence analysis of continuous multi-objective differential evolution (MODE) under certain simplified assumptions, and this work was extended in [21]. In 2012, Ghosh, Das et al. [22] used the Lyapunov stability theorem to establish the asymptotic convergence behavior of a classical DE (DE/rand/1/bin) algorithm on a class of special functions identified by the following two properties: (1) the function has continuous second-order derivatives in the search space, and (2) it possesses a unique global optimum within the search range. In 2013, Hu et al. [23] proposed and proved a sufficient condition for global convergence of modified DE algorithms. In 2014, Hu et al. [24] developed a Markov chain model of the classical DE and then proved that it cannot guarantee global convergence in probability. In a word, the classical DE cannot guarantee global convergence in probability except on some special functions.

From this perspective, this paper makes the following two contributions:

  • First, this paper shows that DE variants have difficulty solving a class of multimodal functions. By abstracting the characteristics of this class of functions, the paper then constructs a Linear Deceptive function that simplifies the theoretical analysis of DE.

  • Second, this paper develops a random drift model of the classical DE algorithm to prove that the algorithm cannot guarantee global convergence on the class of functions represented by the Linear Deceptive function.

The rest of the paper is organized as follows. Section 2 introduces the classical DE algorithm. As the research background of this paper, Section 3 presents the problem that many DE variants have difficulty solving a class of multimodal functions. Section 4 qualitatively analyzes the reason for this problem using the distribution characteristics of the trial population, and outlines the proof idea for the main conclusion of the subsequent sections. Sections 5 (Construction and analysis of Linear Deceptive function) and 6 (Random drift analysis of not guaranteeing global convergence of differential evolution) prove the main conclusion that the classical DE cannot guarantee global convergence on a class of multimodal functions, by constructing a Linear Deceptive function and developing a random drift model of the classical DE. Finally, concluding remarks are presented in Section 7.

Section snippets

Classical differential evolution

DE is designed for continuous optimization problems. We suppose in this paper that the objective function to be minimized is $f(x)$, $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$, and that the feasible solution space is $\Psi = \prod_{j=1}^{n} [L_j, U_j]$. The classical DE [1], [3], [26] works through a simple cycle of operators, namely mutation, crossover and selection, after initialization. The classical DE procedure is described in detail as follows.
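As a reading aid, the mutation–crossover–selection cycle just described can be sketched as a minimal DE/rand/1/bin implementation. This is a sketch only; the parameter names pop_size, F, CR and max_gen are illustrative and not the paper's notation.

```python
import numpy as np

def de_rand_1_bin(f, lower, upper, pop_size=50, F=0.5, CR=0.9,
                  max_gen=1000, seed=0):
    """Minimal sketch of classical DE (DE/rand/1/bin) minimizing f on a box."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    # Initialization: uniform random population in the feasible space Psi.
    pop = lower + rng.random((pop_size, n)) * (upper - lower)
    fit = np.array([f(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            # Mutation (rand/1): three mutually distinct individuals, all != i.
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i],
                                    size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lower, upper)
            # Binomial crossover: at least one component is taken from v.
            mask = rng.random(n) < CR
            mask[rng.integers(n)] = True
            u = np.where(mask, v, pop[i])
            # Greedy selection between the trial vector u and the target pop[i].
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```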

Experimental background of DE on two multimodal functions

The analyses in the introduction demonstrate that the classical DE cannot guarantee global convergence in probability except on some special functions. A question that then arises is on which functions DE algorithms are not convergent in probability. We notice that the following two test functions are difficult to solve using the classical DE and improved DE algorithms.
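For concreteness, one of the two test functions discussed below is built on the Ackley function. A sketch of its standard (unshifted, unrotated) form follows; the benchmark actually referred to in the paper, the Shifted Rotated Ackley's Function with Global Optimum on Bounds, additionally applies a shift vector and a rotation matrix that are not reproduced here.

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    # Standard Ackley function: global minimum f(0) = 0, surrounded by a
    # nearly flat outer region riddled with shallow local minima.
    x = np.asarray(x, dtype=float)
    return (-a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(c * x))) + a + np.e)
```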

Distribution analysis of trial population and insight of proof idea

What is the theoretical reason that DE and its variants cannot solve the above two functions well? Some intuition can be gained by analyzing the exploration ability of the DE algorithm. Since the exploration ability of DE depends to a great extent on the distribution of its trial population, this section qualitatively analyzes the distribution characteristics of DE's trial population.
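A toy one-dimensional illustration of this dependence (made-up numbers, not an experiment from the paper): if the current population aggregates in one region, the DE/rand/1 mutant vectors v = x_r1 + F(x_r2 − x_r3) remain centred on that same region, so the trial population rarely probes far away from it.

```python
import numpy as np

rng = np.random.default_rng(1)
F = 0.5
# Hypothetical 1-D population aggregated around x = 0.8 (a deceptive region).
pop = rng.normal(loc=0.8, scale=0.05, size=200)

# Sample DE/rand/1 mutant vectors v = x_r1 + F * (x_r2 - x_r3).
idx = np.array([rng.choice(200, size=3, replace=False) for _ in range(10000)])
v = pop[idx[:, 0]] + F * (pop[idx[:, 1]] - pop[idx[:, 2]])

print(f"population: mean {pop.mean():.3f}, std {pop.std():.3f}")
print(f"mutants:    mean {v.mean():.3f}, std {v.std():.3f}")
# The mutants stay centred on the population's aggregation region; their
# spread grows only by a factor of roughly sqrt(1 + 2 F^2) ~ 1.22 here.
```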

Construction and analysis of Linear Deceptive function

In order to give a succinct and rigorous proof, we construct the following Linear Deceptive function by simplifying the DE Deceptive function:
$$f(x)=\begin{cases}-kx-1, & -2/k \le x < -1/k,\\ kx+1, & -1/k \le x < 0,\\ -x/k+1, & 0 \le x < k-1,\\ 1/k, & k-1 \le x \le k.\end{cases}$$
Here k ≥ 10. The global minimum of the function is x = −1/k with the function value 0, and a local minimal region is [k − 1, k], on which the function value of each point is 1/k.
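A direct sketch of this piecewise function is given below; the branch bounds, including the search interval [−2/k, k], are reconstructed from the definition above.

```python
def linear_deceptive(x, k=10.0):
    # Piecewise-linear deceptive function: global minimum 0 at x = -1/k and a
    # deceptive local-minimal plateau of value 1/k on [k - 1, k].
    if -2.0 / k <= x < -1.0 / k:
        return -k * x - 1.0
    if -1.0 / k <= x < 0.0:
        return k * x + 1.0
    if 0.0 <= x < k - 1.0:
        return -x / k + 1.0
    if k - 1.0 <= x <= k:
        return 1.0 / k
    raise ValueError("x is outside the search space [-2/k, k]")

# linear_deceptive(-0.1) -> 0.0 (global minimum); linear_deceptive(9.5) -> 0.1.
```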

Now, let k equal 10. In this case, the global minimum of this function is −0.1 with the function value 0, while the

Random drift analysis of not guaranteeing global convergence of differential evolution

In order to analyze the convergence properties of the classical DE, a convergence definition must be given. There are several different convergence definitions. The following definition, global convergence in probability, is employed in this paper.

Definition 1

[24] (Global convergence in probability) Let {x(t), t = 0, 1, 2, … } be a population sequence generated by a population-based stochastic algorithm, then the algorithm holds global convergence in probability if and only if, for any initial
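Definition 1 is truncated in this snippet. For orientation, a common formalization of global convergence in probability (notation assumed here and possibly differing from the exact statement in [24]) is:

```latex
% X^* denotes the set of global optima of f and B(X^*, \delta) its
% \delta-neighbourhood; \{x(t)\} is the population sequence. The algorithm
% converges globally in probability iff, for every initial population x(0)
% and every \delta > 0,
\lim_{t \to \infty} \Pr\!\bigl( x(t) \cap B(X^*, \delta) \neq \emptyset \bigr) = 1 .
```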

Concluding remark and future work

Numerous studies have shown that DE variants have difficulty solving the following two multimodal functions: the DE Deceptive function and the Shifted Rotated Ackley's Function with Global Optimum on Bounds. This paper revealed that these two functions share two common characteristics. One is that the global optima of the functions lie near the boundary of their search spaces; the other is that the functions have large deceptive optima sets in their search spaces. The distribution analyses for

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (no. 61370092), the Institute of Applied Mathematics of Yangtze University Support Foundation (no. KF1502), the Hubei Provincial Department of Education Outstanding Youth Scientific Innovation Team Support Foundation (T201410) and the Natural Science Foundation of Hubei Province of China (no. 2013CFC2005).

References (31)

  • D. Zaharie

    Parameter adaptation in differential evolution by controlling the population diversity

  • D. Zaharie

    A comparative analysis of crossover variants in differential evolution

  • D. Zaharie

    Statistical properties of differential evolution and related random search algorithms

  • S. Dasgupta et al.

    The population dynamics of differential evolution: a mathematical model

  • S. Dasgupta et al.

    On stability and convergence of the population-dynamics in differential evolution

    AI Commun.

    (2009)