Genetic algorithm optimization of multi-peak problems: studies in convergence and robustness

https://doi.org/10.1016/0954-1810(95)95751-Q

Abstract

Engineering design studies can often be cast as optimization problems. For such an approach to be worthwhile, however, designers must be confident that the optimization techniques employed are fast, accurate and robust. This paper describes recent studies of the convergence and robustness problems found when applying genetic algorithms (GAs) to the constrained, multi-peak optimization problems often encountered in design. It poses a two-dimensional test problem that exhibits a number of features designed to cause difficulties for standard GAs and other optimizers. The application of the GA to this problem is then posed as a further, essentially recursive problem, in which the control parameters of the GA must be chosen to give good performance on the test problem over a number of optimization attempts. This overarching problem is tackled both with the GA itself and with simulated annealing. It is shown that, with an appropriate choice of control parameters, sophisticated niche-forming techniques can significantly improve the speed and performance of the GA on the original problem when combined with the simple rejection strategy commonly employed for handling constraints. More importantly, it is also shown that more sophisticated multi-pass constraint penalty functions, drawn from the literature of classical optimization theory, can render such niche-forming methods redundant, yielding good performance with traditional GA methods.
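To illustrate the penalty-function approach the abstract contrasts with simple rejection, the following sketch runs a basic GA on a generic constrained, multi-peak 2D surface. The objective, constraint, and all parameter values here are illustrative assumptions, not the paper's actual test problem or tuned settings: infeasible points are degraded by an exterior penalty rather than discarded.

```python
import math
import random

# Illustrative multi-peak 2D objective (maximisation); NOT the paper's test problem.
def objective(x, y):
    return (math.sin(3 * x) ** 2) * (math.sin(3 * y) ** 2) * math.exp(-(x - 1) ** 2 - (y - 1) ** 2)

def constraint_violation(x, y):
    # Feasible region (assumed for illustration): x + y <= 2.5; zero when satisfied.
    return max(0.0, x + y - 2.5)

def penalised_fitness(ind, weight=10.0):
    # Exterior penalty: infeasible individuals stay in the population but score worse,
    # unlike the rejection strategy, which would simply discard them.
    x, y = ind
    return objective(x, y) - weight * constraint_violation(x, y)

def run_ga(pop_size=40, generations=60, mutation_sigma=0.1, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 2), rng.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the better half as parents.
        parents = sorted(pop, key=penalised_fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            alpha = rng.random()  # blend crossover between two parents
            child = tuple(alpha * pa + (1 - alpha) * pb for pa, pb in zip(a, b))
            # Gaussian mutation on each coordinate.
            child = tuple(c + rng.gauss(0, mutation_sigma) for c in child)
            children.append(child)
        pop = children
    return max(pop, key=penalised_fitness)

best = run_ga()
```

With the penalty active, the search settles on a feasible peak even though infeasible regions contain attractive objective values.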


Cited by (73)

  • Evolutionary ORB-based model with protective closing strategies

    2021, Knowledge-Based Systems
    Citation Excerpt:

    However, an uneven solution space with multiple peaks cannot guarantee a gradient indicating the correct direction. GA is a heuristic approach in evolutionary computation [22], which has proven highly effective in nonconvex multi-peak optimization problems [23]. This approach is based on the concept of survival by natural selection [24], in which a strong individual (a strong parameter set) has a higher likelihood of survival.
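The "survival by natural selection" mechanism mentioned in this excerpt is commonly realised as fitness-proportionate ("roulette wheel") selection. The sketch below is a generic illustration of that idea, not code from the cited article; the toy population and fitness function are assumptions for demonstration.

```python
import random

def roulette_select(population, fitness, rng, k):
    # Sample k individuals with probability proportional to fitness,
    # so a strong individual has a higher likelihood of survival.
    weights = [fitness(ind) for ind in population]
    return rng.choices(population, weights=weights, k=k)

rng = random.Random(1)
pop = [1, 2, 3, 4]                       # toy individuals; fitness = identity
chosen = roulette_select(pop, fitness=lambda x: x, rng=rng, k=1000)
counts = {ind: chosen.count(ind) for ind in pop}
# The fittest individual (4) is selected most often, but weaker ones still survive.
```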

  • A meta optimisation analysis of particle swarm optimisation velocity update equations for watershed management learning

    2018, Applied Soft Computing
    Citation Excerpt:

    The field of meta optimisation dates back to 1978 when it was first applied to tune a genetic algorithm [32]. In the years since, meta optimisation has been applied to ant colony optimisation [6], differential evolution [34], COMPLEX-RF [22], particle swarm optimisation [31,35] and genetic algorithms [14,4,20]. There are limitations to the previous studies conducted in applying meta optimisation to PSO.
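The meta-optimisation loop this excerpt describes — an outer optimizer tuning the control parameters of an inner one — can be sketched minimally as follows. This is a generic illustration under assumed settings (a tiny inner GA on a 1D quadratic, with an outer grid search over mutation step sizes), not the method of any cited work; the paper itself scores each setting over several optimisation attempts, which the averaging below mimics.

```python
import random

def inner_ga(mutation_sigma, seed, generations=30, pop_size=20):
    # Tiny inner GA maximising f(x) = -(x - 3)^2; optimum at x = 3.
    rng = random.Random(seed)
    f = lambda x: -(x - 3.0) ** 2
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=f, reverse=True)[: pop_size // 2]
        pop = [rng.choice(parents) + rng.gauss(0, mutation_sigma) for _ in range(pop_size)]
    return max(map(f, pop))

def meta_optimise(candidate_sigmas, runs=5):
    # Outer loop: score each control-parameter setting by its average
    # result over several seeded inner runs, then keep the best setting.
    def score(sigma):
        return sum(inner_ga(sigma, seed) for seed in range(runs)) / runs
    return max(candidate_sigmas, key=score)

best_sigma = meta_optimise([0.001, 0.05, 0.5, 5.0])
```

Averaging over repeated runs matters because a single lucky run can make a poor parameter setting look good.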

  • Meta-harmony search algorithm for the vehicle routing problem with time windows

    2015, Information Sciences
    Citation Excerpt:

    The main role of the meta-HSA optimizer is to adjust HSA parameter values, the local search algorithm type and the local search configurations (parameter values and the neighborhood structures) during the search without any external influence. The main difference between the proposed meta-HSA and the existing ones [24–28] is that the proposed meta-HSA is used to adjust the parameter values, local search types and local search configurations (parameter values and neighborhood operators) while the existing meta-optimizers only adjust the parameter values. Furthermore, the proposed meta-HSA adjusts these components and configurations in an online manner, while existing ones use training and testing instances, which might make them well suited to the training instances only.

  • Field performance of a genetic algorithm in the settlement prediction of a thick soft clay deposit in the southern part of the Korean peninsula

    2015, Engineering Geology
    Citation Excerpt:

    A back-analysis method based on a genetic algorithm (GA) can be used as a parallel and global search tool that emulates natural genetic operators. GAs generally show better performance when searching for a solution than conventional optimization algorithms because GAs, which make use of an entire set of solutions spread throughout the solution space, are less affected by local optima (Holland, 1975; Goldberg, 1989; Keane, 1995). Park et al. (2009) showed that the GA back-analysis method has the advantage of robustly searching for a global solution while avoiding local solutions compared with conventional optimization schemes in a multi-dimensional consolidation problem with three consolidation layers.
