Elsevier

Information Sciences

Volume 193, 15 June 2012, Pages 36-53

Investigating Smart Sampling as a population initialization method for Differential Evolution in continuous problems

https://doi.org/10.1016/j.ins.2011.12.037

Abstract

Recent research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. Taking a different approach to achieve similar results, this paper presents a technique to discover promising regions in the continuous search-space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high probability of containing a global optimum. A metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (yielding the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE and UQODE authors. The results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced and that the success rate improves when SS is employed first. These results also agree with findings from the literature on the importance of an adequate starting population. Moreover, SS is more effective at finding initial populations of superior quality than the other three algorithms, which employ oppositional learning. Finally, and most importantly, the performance of SS in finding promising regions is independent of the metaheuristic with which it is combined, making SS suitable for improving the performance of a large variety of optimization techniques.

Introduction

The task of global optimization arises in several areas of real-world problems, such as protein structure prediction [8], logistics or circuit design (the traveling salesman problem) [6], chemical engineering [35], and airspace design [10]. This task involves the minimization or maximization of a known objective function or an unknown black-box function. In general, these functions are highly complex and may be time-consuming to optimize, taking several days, weeks or months to reach an adequate result, which may not be the global optimum. To solve this type of task, several global optimization metaheuristics have been developed.

Metaheuristics [13], [46], [21] are optimization techniques used to search for high-quality solutions to a problem, one of which is expected to be the global optimum. One of their main characteristics is that metaheuristics require neither gradient information to guide the search nor problem-specific knowledge (heuristics), which makes them useful for solving a wide range of problems, including black-box ones. Several strategies have been investigated to improve the exploratory efficiency of metaheuristics in reaching the global optimum of a problem. For instance, strategies have been developed to reduce premature convergence and to increase the chance of escaping from local optima. Essentially, when applied to population-based metaheuristics, those strategies involve maintaining diversity in the set of solutions. Another procedure that has been investigated to improve a metaheuristic’s performance is the population initialization step [15], [27], [34].

Initial population generation involves an exploration phase. This phase allows the algorithm to select locations to be explored and others to be discarded. Traditional metaheuristics move toward the best solutions. Thus, a bad initialization that generates solutions close to each other (clusters of solutions) can leave large areas with no solutions to explore. On the other hand, if the population is very dispersed, a large number of iterations may be required to reach a local optimum. Moreover, if two solutions far from each other are combined, there is a high chance that the offspring will lie closer to the best solution found, which can leave a large unexplored gap. Thus, an initialization method capable of providing a better exploration of the search-space while presenting only high-quality solutions should improve the performance of a metaheuristic.

Metaheuristics themselves are naturally guided towards promising regions [36] (see Fig. 1). There is an exploration phase, and then the population moves towards the best solution found in order to exploit that region. A more desirable movement, in contrast, is the one presented in Fig. 2: the exploration phase may take longer, and the population can be split into more than one region. After that, the regions can be exploited independently.

This paper presents an approach to explore the search-space and find promising regions. The objective is to aid global optimization algorithms by indicating the initial search-space areas with the highest probability of containing the global optimum. The approach is applied iteratively to explore the search-space inside promising regions, which become smaller at each iteration, similarly to the strategy proposed in [17], excluding areas considered unfavorable. The new approach, called Smart Sampling (SS), seeks to preserve diversity across more than one promising region, providing a better exploration of the search-space.
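To make the iterative narrowing concrete, the toy routine below repeatedly resamples only inside the bounding box of the points currently labelled promising, so the sampled area shrinks at every iteration. This is a minimal illustration only, assuming NumPy and a user-supplied objective to be minimized; it is not the SS resampling procedure (Algorithm 2), and the simplification of tracking a single box instead of several disjoint regions is ours.

```python
import numpy as np

def shrink_and_resample(objective, bounds, iterations=5, n_samples=100,
                        good_fraction=0.3, rng=None):
    """Toy illustration of iterative narrowing: resample only inside the
    bounding box of the current 'good' points, so the area shrinks each step.
    Not the paper's Algorithm 2; names and defaults are illustrative."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T          # per-dimension bounds
    for _ in range(iterations):
        X = rng.uniform(lo, hi, size=(n_samples, len(lo)))
        f = np.apply_along_axis(objective, 1, X)
        good = X[f <= np.quantile(f, good_fraction)]    # keep the best fraction
        lo, hi = good.min(axis=0), good.max(axis=0)     # tighten bounds around them
    return lo, hi                                       # final, much smaller box

# Example on the 2-D sphere function: the returned box closes in on the origin.
box = shrink_and_resample(lambda x: np.sum(x**2), [(-5, 5), (-5, 5)])
```

Unlike this single-box toy, SS keeps several disjoint promising regions alive at the same time, which is precisely what preserves diversity instead of collapsing the search toward one local optimum.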

First, SS generates a set of solutions, evaluates them using the objective function, and splits them into good and bad solutions based on a threshold applied to the objective value of each solution. Then, SS employs a machine-learning algorithm to map the characteristics of the good solutions, allowing it to predict whether a new solution is good without evaluating it. If a new solution is identified as good, it can then be evaluated by the objective function. This approach works well on problems with both small and large numbers of variables. In the last step, another machine-learning algorithm separates the different promising regions to be exploited by any metaheuristic, which will refine the high-quality solutions found during the SS process.
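A minimal sketch of this sample–label–screen–cluster pipeline is given below. It assumes scikit-learn's k-nearest-neighbours classifier and k-means clustering as placeholder machine-learning components (the text above does not fix the components, so these are our stand-ins), and all function and parameter names are illustrative rather than the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

def smart_sampling_sketch(objective, bounds, n_samples=200, good_fraction=0.3,
                          n_candidates=1000, n_regions=3, rng=None):
    """Illustrative Smart-Sampling-style preprocessing (not the paper's exact algorithm).

    1. Sample the search-space uniformly and evaluate the objective.
    2. Label the best `good_fraction` of the samples as 'good' (threshold split).
    3. Train a classifier on those labels and keep only candidates predicted 'good'.
    4. Cluster the retained points into separate promising regions.
    """
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T           # bounds: [(lo, hi), ...] per dimension

    # Step 1: initial uniform sample and (costly) evaluation
    X = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    f = np.apply_along_axis(objective, 1, X)

    # Step 2: threshold split into good (1) / bad (0) by objective value (minimization)
    y = (f <= np.quantile(f, good_fraction)).astype(int)

    # Step 3: learn what 'good' looks like, then screen new candidates without evaluating them
    clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    candidates = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    promising = candidates[clf.predict(candidates) == 1]

    # Step 4: split the promising points into distinct regions for later exploitation
    # (assumes at least n_regions candidates survive the screening step)
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(promising)
    return [promising[labels == k] for k in range(n_regions)]

# Example: promising regions of the 2-D sphere ("Parabola") function.
regions = smart_sampling_sketch(lambda x: np.sum(x**2), bounds=[(-5, 5), (-5, 5)])
```

Each returned group of points can then seed a separate metaheuristic run, as described above.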

Therefore, SS is employed to increase the efficiency of global optimization algorithms and can be essential for obtaining satisfactory results in situations in which executing a large number of experiments is not viable. Furthermore, and most importantly, while several researchers have studied ways to improve well-known metaheuristics by adding heuristics or local-search methods, or by creating hybrids, the technique proposed here is neither an operator nor a strategy to be included in a search technique; it has been developed for use as a preprocessing phase. It can be directly used to improve the performance of any population-based continuous global optimization technique.

SS is tested in conjunction with Differential Evolution (DE), a well-known metaheuristic, on several bound-constrained optimization problems with different properties. SSDE obtains solutions of higher quality than DE while requiring fewer function evaluations. The paper also presents a performance comparison with three other approaches (ODE, QODE and UQODE) that improve DE's exploration of the search-space in an attempt to find promising regions and escape from local optima. The results show that SSDE performs considerably better than the other three approaches.

The paper is organized as follows: Section 2 reviews related work on population initialization and on machine-learning techniques used to improve metaheuristics; Section 3 presents the proposed SS algorithm in detail; Section 4 briefly describes the Differential Evolution algorithm; Section 5 presents a preliminary study of SSDE's behavior using some well-known benchmark functions, with charts showing the distribution of high-quality solutions during the SS procedure and convergence curves throughout the optimization process; Section 6 presents and discusses the experiments comparing SSDE, DE and the other three DE improvements. Finally, Section 7 concludes the paper and presents future work.

Related works

The use of machine-learning algorithms to improve metaheuristics is not new. Jourdan et al. [12] showed how classification and clustering techniques can be applied to hybridize metaheuristics, reducing computational time, simplifying the objective function through an approximation technique, or improving the quality of the search by adding background knowledge to the operators. Ramsey and Grefenstette [34] initialized a genetic algorithm using case-based reasoning to allow the system to bias the

Smart Sampling

The basic flowchart of SS is presented in Fig. 3. First, SS samples the search-space to identify the first large regions that must be explored. The higher the dimensionality of the problem, the larger the first sample. The main idea of SS is to resample only in areas considered promising regions, avoiding wasted evaluations in non-promising areas. If a simple random resampling were executed inside the promising regions, the final iterations would lead to local optima. This behavior should be

Differential Evolution

Differential Evolution was introduced by Storn and Price in 1995 [37]. It is a floating-point-encoded, population-based metaheuristic, similar to classical evolutionary algorithms, that has been successfully used to solve several benchmark and real-world problems [22], [26], [43], [42].

The population P, of D dimensions, is randomly initialized (using a uniform distribution) inside the problem's bounds and evaluated using the fitness function of the problem. Next, until a stop criterion is met, the algorithm
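For reference, a compact sketch of the classic DE/rand/1/bin scheme is shown below: uniform random initialization within the bounds, differential mutation with scale factor F, binomial crossover with rate CR, and greedy one-to-one selection. The parameter values are common defaults, not necessarily those used in the paper's experiments.

```python
import numpy as np

def de_rand_1_bin(objective, bounds, pop_size=30, F=0.5, CR=0.9,
                  max_gens=200, rng=None):
    """Classic DE/rand/1/bin for minimization on box-constrained problems."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    D = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, D))        # uniform random initialization
    fit = np.apply_along_axis(objective, 1, pop)

    for _ in range(max_gens):
        for i in range(pop_size):
            # three mutually distinct individuals, all different from i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)

            # binomial crossover: at least one coordinate comes from the mutant
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True
            trial = np.where(cross, mutant, pop[i])

            # greedy one-to-one selection
            f_trial = objective(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial

    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

In the SSDE combination, the initial population would instead be drawn from the solutions inside one promising region returned by SS, rather than uniformly over the whole search-space.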

Evaluating SS

The functioning of SS on continuous 2D test functions (Ackley, Alpine, Griewank, Parabola, Rastrigin, Rosenbrock and Tripod; see the mathematical definitions in Table 1) is illustrated in Figs. 7–13, respectively. For this experiment, SS was configured as follows: window_size = 0.01, lower_lim = −0.5 and upper_lim = 1.5; these lim values are used in Algorithm 2.

The graphs in the figures show the evolution/reduction in promising regions through SS iterations

Computational experiments

This paper presents the effects of SS on global optimization problems using a well-known global optimization algorithm, i.e., the classical Differential Evolution (DE) [38]. The results using SS (our approach) in conjunction with DE – called SSDE – are compared to the results presented by ODE [33], QODE [31] and UQODE [29].

Conclusions and future works

This paper has presented a technique to find promising regions of the search-space of continuous functions. The approach, named Smart Sampling (SS), uses a machine-learning technique to identify promising and non-promising solutions, guiding the resampling procedure toward smaller areas where higher-quality solutions can be found. This iterative process ends when a stop criterion is achieved, for instance, when a promising region becomes too small. At this point, another machine-learning technique

Acknowledgments

The authors would like to acknowledge CAPES (a Brazilian Research Agency) for the financial support given to this research.

References (46)

  • D.W. Aha, D. Kibler, M.K. Albert, Instance-based learning algorithms, in: Machine Learning, 1991, pp....
  • T. Back et al., Handbook of Evolutionary Computation (1997)
  • G.E.P. Box et al., Statistics for Experimenters (1978)
  • T.R. Dastidar et al., A synthesis system for analog circuits based on evolutionary search and topological reuse, IEEE Transactions on Evolutionary Computation (2005)
  • K. Deb et al., An investigation of niche and species formation in genetic function optimization
  • P. Gabriel et al., Representations for evolutionary algorithms applied to protein structure prediction problem using HP model
  • D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning (1989)
  • S. Hitzel et al., Aerodynamic optimization of an UCAV configuration
  • M. Jelasity et al., UEGO, an abstract clustering technique for multimodal global optimization, Journal of Heuristics (2001)
  • L. Jourdan, C. Dhaenens, E.-G. Talbi, Using datamining techniques to help metaheuristics: a short survey, in: Hybrid...
  • A.Y.S. Lam et al., Chemical-reaction-inspired metaheuristic for optimization, IEEE Transactions on Evolutionary Computation (2010)
  • K.-H. Liang et al., Evolutionary search of approximated n-dimensional landscapes, International Journal of Knowledge-Based Intelligent Engineering Systems (2000)
  • H. Maaranen et al., On initial populations of a genetic algorithm for continuous optimization problems, Journal of Global Optimization (2007)