
Applied Soft Computing

Volume 13, Issue 2, February 2013, Pages 1247-1264

Global optimization using a multipoint type quasi-chaotic optimization method

https://doi.org/10.1016/j.asoc.2012.10.025

Abstract

This paper proposes a new global optimization method called the multipoint type quasi-chaotic optimization method. In the proposed method, the simultaneous perturbation gradient approximation is introduced into a multipoint type chaotic optimization method in order to carry out optimization without gradient information. The multipoint type chaotic optimization method, which has been proposed recently, is a global optimization method for solving unconstrained optimization problems in which multiple search points, each implementing a global search driven by a chaotic gradient dynamic model, are advected to their elite search points (the best search points among the current search histories). The chaotic optimization method uses a gradient to drive the search points. Hence, its application is restricted to the class of problems in which the gradient of the objective function can be computed. In this paper, the simultaneous perturbation gradient approximation is introduced into the multipoint type chaotic optimization method in order to approximate gradients, so that the chaotic optimization method can be applied to the class of problems for which only the objective function values can be computed. The effectiveness of the proposed method is then confirmed through application to several unconstrained multi-peaked, noisy, or discontinuous optimization problems with 100 or more variables, in comparison with other major meta-heuristics.

Highlights

- A global optimization method based on the chaotic optimization method is proposed.
- A stochastic gradient approximation technique is introduced.
- A quasi-chaotic search trajectory generated from the approximated gradient dynamics is utilized.
- The proposed method is applied to high-dimensional and multi-peaked problems.
- The proposed method performs well compared with other major meta-heuristics.

Introduction

The development of global optimization methods, which obtain global minima without being trapped at local minima, has been investigated extensively.

So-called physically inspired optimization methods, which use dynamic models as computation models, have been proposed and have mainly been applied to continuously differentiable problems. The common characteristic among these models is that a global search is executed using the autonomous movement of the search point, which is driven by a vector quantity given by its dynamic system, such as a gradient vector, and the search range is then narrowed by an annealing procedure. Examples of these methods include the chaotic optimization method [1], [25], [17], [20] and the Hamiltonian algorithm [18].

Okamoto and Aiyoshi [20] proposed a multipoint type chaotic optimization method (M-COM) for unconstrained optimization problems with continuous variables. The M-COM is a global optimization method in which multiple search points, each implementing a global search driven by a chaotic gradient dynamic model, are advected to their elite search points by means of a coupling model. Its superior global search capability has been confirmed through application to several unconstrained multi-peaked optimization problems with 100 or 1000 variables. The M-COM uses a gradient as the driving force for the search points. Hence, computation of a gradient is required to implement the M-COM. However, in actual optimization problems, the gradient often cannot be computed easily, because the algorithms or formulae for the computation of the objective functions are not available or because the objective functions are nondifferentiable. In this paper, we consider introducing gradient approximation methods into the chaotic optimization method so that it can be applied to the class of problems for which only the objective function values can be computed.
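The idea behind the M-COM can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's exact algorithm: we use a steepest-descent map with a deliberately large step size as the chaos-inducing gradient dynamics, a simple linear advection term toward each point's elite in place of the paper's coupling model, and the Rastrigin function as a multi-peaked test objective. All function and parameter names are ours.

```python
import numpy as np

def rastrigin(x):
    # Multi-peaked test objective (illustrative; global minimum f(0) = 0).
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def rastrigin_grad(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def m_com_sketch(f, grad, x0, alpha=0.5, beta=0.05, decay=0.995, steps=500,
                 lower=-5.12, upper=5.12):
    """Multipoint chaotic-optimization sketch: each search point follows
    chaotic gradient dynamics (large step size) and is advected toward
    its elite (best point in its own search history)."""
    points = x0.copy()                       # shape (M, N): M search points
    elites = x0.copy()
    elite_vals = np.array([f(p) for p in points])
    for _ in range(steps):
        for i in range(len(points)):
            x = points[i]
            x = x - alpha * grad(x)          # large alpha -> chaotic behavior
            x = x + beta * (elites[i] - x)   # advection toward the elite point
            x = np.clip(x, lower, upper)     # stay in the bounded search space S
            points[i] = x
            fx = f(x)
            if fx < elite_vals[i]:           # elite only improves over time
                elites[i], elite_vals[i] = x.copy(), fx
        alpha *= decay                       # annealing narrows the search range
    best = int(np.argmin(elite_vals))
    return elites[best], elite_vals[best]

rng = np.random.default_rng(0)
x0 = rng.uniform(-5.12, 5.12, size=(10, 2))  # 10 search points in 2-D for brevity
xb, fb = m_com_sketch(rastrigin, rastrigin_grad, x0)
```

Because each elite is updated only on improvement, the best elite value is non-increasing over the run, which mirrors how the annealed chaotic search narrows toward good regions.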

Gradient approximation methods include the finite difference gradient approximation (FDGA), which is a classical approach, and the simultaneous perturbation gradient approximation (SPGA) [23]. The SPGA can approximate a gradient vector by evaluating the objective function only twice, whereas the FDGA requires a number of evaluations proportional to the number of decision variables. Regarding the accuracy of the SPGA, it is known that the expectation of the gradient estimated by the SPGA approximates the true gradient.
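The cost difference can be made concrete. Below is a minimal sketch of both estimators: a one-sided FDGA and the two-evaluation SPGA with a Bernoulli ±1 perturbation vector, following Spall's standard form. Function names and step sizes are our own illustrative choices.

```python
import numpy as np

def fdga(f, x, c=1e-5):
    """One-sided finite-difference gradient: N + 1 function evaluations."""
    fx = f(x)
    g = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = c
        g[i] = (f(x + e) - fx) / c
    return g

def spga(f, x, c=1e-4, rng=None):
    """Simultaneous perturbation gradient: exactly 2 function evaluations,
    regardless of the problem dimension N."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=x.shape)  # Bernoulli +/-1 perturbation
    return (f(x + c * delta) - f(x - c * delta)) / (2 * c * delta)

# On f(x) = sum(x^2) the true gradient is 2x. A single SPGA estimate is
# noisy, but its average over many perturbations approaches 2x, which
# illustrates the expectation property mentioned above.
f = lambda x: float(np.sum(x**2))
x = np.array([1.0, 2.0, -1.0])
g_fd = fdga(f, x)
rng = np.random.default_rng(0)
g_sp_mean = np.mean([spga(f, x, rng=rng) for _ in range(20000)], axis=0)
```

For a 100-variable problem, one FDGA gradient costs 101 evaluations while one SPGA gradient costs 2, which is the cost argument made in the text.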

Herein, we propose a new global optimization method in which the SPGA is introduced into the M-COM in order to carry out optimization without gradient information at low cost. We then confirm the effectiveness of the proposed method through application to several unconstrained multi-peaked optimization problems with 100 or more variables. When introducing gradient approximation methods into chaotic optimization, the first and natural choice is the FDGA. However, if the FDGA is simply introduced into the M-COM to solve an optimization problem with a large number of decision variables, the computational cost becomes very high. This computational cost is one of our motivations for introducing the SPGA, rather than the FDGA, into the M-COM. The search trajectory generated from the approximated gradient dynamics using the SPGA in the proposed method is analogous to the search trajectory generated from the gradient dynamics of the M-COM. Furthermore, our conclusion based on a theoretical analysis given by Maryak and Chin [15] is that the search trajectory generated from the approximated gradient dynamics using the SPGA has the potential to find a global optimal solution. Thus, we introduce the SPGA into the M-COM.

Among global optimization methods based only on objective function evaluations, meta-heuristics, in which heuristics are combined on the basis of a sound search strategy such as diversification or intensification [5], have achieved a certain level of success. Examples of meta-heuristics for unconstrained optimization problems with continuous variables include particle swarm optimization (PSO) [7], [8] and differential evolution (DE) [24], [22]. Generally, in these methods, interaction among all search points is the main driving force. Therefore, these methods have the drawback that once all of the search points have been attracted to a single search point, diversity is lost and the search stagnates. In such a case, either the search is terminated or a random search, which is not necessarily based on a good strategy, may be executed in order to reestablish diversity. In the proposed method, the search points autonomously implement global searching using the quasi-chaotic search trajectory generated by the gradient dynamics with the SPGA, regardless of attraction to a particular search point. Hence, the aforementioned stagnation does not tend to occur with the proposed method; therefore, we expect the proposed method to perform well on high-dimensional and multi-peaked optimization problems.

This paper is organized as follows. In Section 2, we briefly describe the chaotic optimization method and the M-COM. In Section 3, we describe two gradient approximation methods, focusing on the SPGA. In addition, we describe the simultaneous perturbation stochastic approximation (SPSA) method, in which a gradient model using the SPGA is used; the SPSA is closely related to the proposed method. In Section 4, we propose a new global optimization method called the quasi-chaotic optimization method, in which the SPGA is introduced into the chaotic optimization method; we then explain the dynamic characteristics of the proposed method and its similarity to the chaotic optimization method; finally, we propose the multipoint type quasi-chaotic optimization method based on the M-COM. In Section 5, we confirm the effectiveness of the proposed method through applications to several unconstrained multi-peaked optimization problems with 100 or more variables, comparing it to the parallel SPSA, the conventional method (M-COM) with FDGA, and other major meta-heuristics.

Section snippets

Optimization problem

Let us consider an unconstrained optimization problem:

$$\min_{x}\ f(x) \quad \text{where } x \in S \subset \mathbb{R}^N, \qquad S = \{\, x \mid x_n^l < x_n < x_n^u,\ n = 1, \ldots, N \,\}. \tag{1}$$

Here, x = (x_1, …, x_N)^T is the decision variable vector, f(x) is the objective function with f: \mathbb{R}^N \to \mathbb{R}, and n = 1, …, N unless otherwise stated. S defines the bounded search space. We assume that a global minimum of problem (1) lies within the bounded search space S and apply the optimization methods over this space.
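As a concrete instance of problem (1), the sketch below defines a 100-variable multi-peaked objective over the box S with bounds x_n^l = -5.12 and x_n^u = 5.12. This is our own illustration (the Rastrigin function), not necessarily one of the paper's Section 5 benchmarks.

```python
import numpy as np

N = 100
lower, upper = -5.12, 5.12             # x_n^l and x_n^u defining the box S

def f(x):
    # Rastrigin function: multi-peaked, with global minimum f(0) = 0 inside S.
    return 10 * N + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

rng = np.random.default_rng(0)
x = rng.uniform(lower, upper, size=N)  # a feasible point drawn from S
assert np.all((lower < x) & (x < upper))
```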

Single-point type chaotic optimization method and its drawbacks

Let us assume the objective function f(x) to be twice continuously differentiable in

Gradient approximation methods

In this paper, we introduce a gradient approximation method so that the M-COM can be applied to a class of problems in which only the objective function value can be computed. There are two gradient approximation methods commonly used. One is the FDGA, which is a classical method. The other is the SPGA [23].

Proposed method

In this section, we propose a new global optimization method called the quasi-chaotic optimization method, in which the SPGA is introduced into the chaotic optimization method in order to approximate the gradient so that the chaotic optimization method can be applied to a class of problems for which only the objective function values can be computed. Then, we explain its dynamic characteristics and its similarity to the chaotic optimization method using bifurcation diagrams and linear stability
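The exact update rule of Section 4 is not shown in this snippet, but the core idea, replacing the true gradient in the chaotic descent dynamics with an SPGA estimate, can be sketched as follows. This is our own minimal single-point illustration with assumed parameter names and schedules; the full proposed method is multipoint and advects each point toward its elite.

```python
import numpy as np

def spga(f, x, c, rng):
    # Two-evaluation simultaneous perturbation gradient estimate.
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    return (f(x + c * delta) - f(x - c * delta)) / (2 * c * delta)

def quasi_chaotic_sketch(f, x0, alpha=0.5, c=1e-3, decay=0.995, steps=2000,
                         lower=-5.12, upper=5.12, seed=0):
    """Quasi-chaotic search sketch: the gradient in the chaotic descent map
    is replaced by an SPGA estimate, and the step size alpha is annealed."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    best_x, best_f = x.copy(), f(x)           # elite: best point seen so far
    for _ in range(steps):
        x = x - alpha * spga(f, x, c, rng)    # quasi-chaotic gradient dynamics
        x = np.clip(x, lower, upper)          # stay in the bounded space S
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        alpha *= decay                        # annealing narrows the search
    return best_x, best_f

rastrigin = lambda x: 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
x0 = np.full(2, 3.0)
xb, fb = quasi_chaotic_sketch(rastrigin, x0)
```

Each iteration costs only two objective evaluations regardless of dimension, which is what makes the quasi-chaotic search affordable for the 100-variable problems considered later.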

Numerical experiments and comparisons with other methods

In this section, we confirm the effectiveness of the proposed method through application to several benchmark problems. In addition, we briefly compare the proposed method with other major meta-heuristics in terms of computational complexity.

Conclusion

In this paper, we proposed a new global optimization method called the multipoint type quasi-chaotic optimization method. In the proposed method, simultaneous perturbation gradient approximation is introduced into a multipoint type chaotic optimization method in order to carry out optimization without gradient information. We explained the similarities between the proposed method and the chaotic optimization method using bifurcation diagrams and linear stability theory. We also explained the

Takashi Okamoto was born on August 25th, 1980. He completed the doctoral program at Keio University in 2007, receiving a Ph.D. in engineering. He is now an assistant professor in the Graduate School of Engineering, Chiba University. His research interests are systems engineering, optimization theory, and complex systems; in particular, he is interested in optimization methods using nonlinear dynamics.

References (26)

- S. Kiranyaz et al., Fractional particle swarm optimization in multidimensional search space, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (2010)
- J. Liang et al., Performance evaluation of multiagent genetic algorithm, Natural Computing (2006)
- J.J. Liang et al., Novel composition test functions for numerical global optimization


Hironori Hirata was born on June 2nd, 1948. He completed the doctoral program at Tokyo Institute of Technology in 1976 and joined the faculty of Chiba University as a research associate. He has been a professor there since 1994. His research interests are the modeling, analysis, and design of complex systems, especially ecological systems, and VLSI layout. He is also interested in the fundamental theory of distributed systems. He holds a D.Eng. degree.
