
Information Sciences

Volume 586, March 2022, Pages 176-191

PSO-sono: A novel PSO variant for single-objective numerical optimization

https://doi.org/10.1016/j.ins.2021.11.076

Abstract

Particle Swarm Optimization (PSO) is a well-known and powerful meta-heuristic algorithm in Swarm Intelligence (SI); it was invented in 1995 by simulating the foraging behavior of bird flocks. Many different PSO variants have since been proposed to tackle different optimization applications; however, the overall performance of these variants is often unsatisfactory. In this paper, a new PSO variant is advanced for single-objective numerical optimization, and the paper makes three contributions. First, a sorted particle swarm with hybrid paradigms is proposed to improve optimization performance. Second, novel adaptation schemes are proposed both for the ratio of each paradigm and for the constriction coefficients during the iteration. Third, a fully-informed search scheme based on the global optimum of each generation is proposed, which helps the algorithm escape local optima and improves the overall performance. A large test suite containing the benchmarks from the CEC2013, CEC2014 and CEC2017 test suites on real-parameter single-objective optimization is employed for algorithm validation, and the experimental results show that our algorithm is competitive with well-known and recently proposed state-of-the-art PSO variants.

Introduction

Particle Swarm Optimization (PSO) is a meta-heuristic global optimization algorithm proposed by Kennedy and Eberhart [1] in 1995. The underlying physical model upon which the transition rules are based is one of emergent collective behavior arising out of the social interaction of flocks of birds or schools of fish [2], [3], [4]. In PSO, each individual in the swarm is called a particle, which represents a potential solution with two main characteristic vectors, namely the position and the velocity [5], [6], [7], [8]. In the initialization stage, the position of the $i$-th particle (in the 0th generation) is initialized according to the following equation:

$$X_{i,0} = X_{\min} + \mathrm{rand}_{1\times D}(0,1) \cdot (X_{\max} - X_{\min})$$

where $X_{\min}$ and $X_{\max}$ are the bounds of the solution space [9], [10], [11]. There are two different approaches to the initialization of the velocities in PSO: in the first, the velocities are initialized to random values within the velocity bounds [11], [12], [13]; in the second, the velocities are initialized to zeros [14], [15], [16]. For a single-objective optimization problem defined as

$$\Omega^* = \mathop{\arg\min}_{X \in \Omega} f(X) = \{X^* \in \Omega : f(X^*) \le f(X),\ \forall X \in \Omega\},$$

where $X$ denotes a $D$-dimensional vector in the solution space $\Omega$ and $\Omega^*$ denotes the set of best candidates of the solution space, the PSO algorithm returns a tolerable solution either after finding one or after reaching the maximum number of function evaluations.
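As a minimal illustration of the initialization schemes above, the following Python sketch (our own illustration, not the authors' code; the choice of velocity bounds equal to the search range is an assumption) initializes a swarm's positions uniformly within the bounds and its velocities either randomly or to zeros:

    import numpy as np

    def initialize_swarm(pop_size, dim, x_min, x_max, zero_velocity=True):
        # Position: X_{i,0} = X_min + rand(0,1) * (X_max - X_min), per dimension
        positions = x_min + np.random.rand(pop_size, dim) * (x_max - x_min)
        if zero_velocity:
            # Second scheme: velocities initialized to zeros [14], [15], [16]
            velocities = np.zeros((pop_size, dim))
        else:
            # First scheme: random velocities within the velocity bounds [11], [12], [13];
            # equating the velocity bounds with the search range is our assumption
            v_range = x_max - x_min
            velocities = (2.0 * np.random.rand(pop_size, dim) - 1.0) * v_range
        return positions, velocities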

The iteration paradigm of the canonical PSO is simple: the velocity of a particle is affected only by its current velocity, its historical best location, and the population's global best location; in other words, a particle can only learn from its own memory Xpbest,G and the best of the population Xgbest,G. This insufficient use of population information leads to many disadvantages, not only in scientific research [17], [18], [19], [20], [21], [22], [23], [24] but also in engineering applications of PSO [25], [26], [27], [28], [29], [30]. Therefore, Suganthan [17] proposed a dynamic neighborhood operator to enhance the PSO algorithm: the neighborhood was initialized with a few particles at the beginning of the iteration and then gradually grew to the whole population by the end of the iteration. Mendes et al. [19] proposed a fully informed particle swarm optimizer in which all neighbors of a particle are considered sources of influence, so that the size of the neighborhood determines the diversity of the influence. Liang et al. [12], in the CLPSO algorithm, proposed a comprehensive learning strategy in which a particle's velocity is updated using other particles' historical bests; this technique improved the performance of PSO on multi-modal objectives at the expense of slowing down convergence. Nasir et al. [13] further extended the CLPSO algorithm by restricting the exemplar particle to a dynamic neighborhood, which further improved the diversity of particles; however, both CLPSO and DNLPSO are time-consuming. Lynn et al. [31] reviewed population topologies (sociometries) whose enhanced population diversity improves PSO performance. By incorporating neighborhood topology, these PSO variants make better use of population information and consequently obtain significant performance improvements on different objectives.
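In symbols, the canonical update reads $v_{i,G+1} = v_{i,G} + c_1 r_1 \cdot (X_{pbest,G} - X_{i,G}) + c_2 r_2 \cdot (X_{gbest,G} - X_{i,G})$ followed by $X_{i,G+1} = X_{i,G} + v_{i,G+1}$. A minimal Python sketch of one such iteration (our own illustration with the classical setting $c_1 = c_2 = 2$, not the paper's implementation) follows:

    import numpy as np

    def canonical_pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0):
        # x, v, pbest: (pop, dim) arrays; gbest: (dim,) array
        pop, dim = x.shape
        r1 = np.random.rand(pop, dim)
        r2 = np.random.rand(pop, dim)
        # cognitive component (own memory) + social component (population best)
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v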

Besides exploiting the information within the population, PSO researchers have also proposed parameter-based techniques for performance improvement [32], [33], [34], [35], [36], [37], [38]. Shi and Eberhart [32] found that convergence is usually accelerated by adding an inertia weight in front of the velocity term of the PSO iteration equation, and the inertia-weighted iteration equation was therefore adopted in subsequent PSO variants. The same authors also found that better optimization performance is obtained with a linear reduction strategy for the inertia weight [33], with a recommended value of 0.9 at initialization decreasing to 0.4 at termination. Taherkhani and Safabakhsh [37] proposed a stability-based adaptive inertia weight PSO in which the inertia weight differs for each dimension of each particle, determined by the particle's performance and its distance from its historical best position. Bratton and Kennedy [34] reviewed advances in the PSO algorithm and defined a standard PSO whose iteration equation employs the constriction form, with the coefficient of the velocity set to the constant value 0.72984. Bansal et al. [36] compared different PSO variants under different inertia weight strategies and found that the chaotic inertia weight strategy outperformed the others in optimization accuracy, while the random inertia weight strategy converged faster. However, the comparison was conducted on a test suite containing a small number of benchmarks, so it may be biased according to the "No Free Lunch Theorem" [39]. Harrison et al. [38] further examined 18 inertia weight strategies on a large test suite containing 60 benchmarks, and the results showed that a fixed constant inertia weight is also a good choice for an arbitrary optimization problem.
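As a concrete instance, the linear-reduction strategy of [33] can be written as $w(g) = w_{start} - (w_{start} - w_{end}) \cdot g / G_{max}$, decreasing from 0.9 to 0.4 over the run. A small Python helper (our own illustration) follows:

    def linear_inertia_weight(gen, max_gen, w_start=0.9, w_end=0.4):
        # Decreases linearly from w_start (favoring exploration) to w_end
        # (favoring exploitation), as recommended in [33]
        return w_start - (w_start - w_end) * gen / max_gen

    # e.g. halfway through a run: linear_inertia_weight(500, 1000) == 0.65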

To summarize, three aspects can improve the performance of the PSO algorithm on single-objective numerical optimization. The first is the improvement of the iteration paradigm of the particles, which may involve improving a single iteration paradigm or hybridizing several paradigms. The second is the improvement of adaptation strategies, either for the constriction coefficients or for the adaptive selection of an iteration paradigm. The third is fully-informed techniques that improve the exploration capacity of a PSO variant or help it escape local optima. In this paper, we propose a novel PSO variant with hybrid paradigms and parameter adaptation schemes for single-objective numerical optimization, and the main highlights of our algorithm are as follows:

  1. A novel sorted particle swarm with hybrid paradigms is proposed, which achieves a balance between exploitation and exploration.

  2. Novel adaptation schemes, both for the ratio of each sub-population and for the constriction coefficients, are proposed during the iteration, which enhance the advantage of each iteration paradigm.

  3. A novel fully-informed search based on the global optimum of each generation is proposed, which helps the algorithm escape local optima and improves the overall performance.

  4. A large test suite containing all the benchmarks from the CEC2013, CEC2014 and CEC2017 test suites for real-parameter single-objective optimization is used for algorithm validation, which, to some extent, avoids the over-fitting problem compared with employing a single test suite containing a small number of benchmarks.

The rest of this paper is organized as follows: Section 2 presents a brief review of several PSO variants that are closely related to the proposed variant. Section 3 gives the details of our algorithm, and the novel algorithm is verified in Section 4 on our test suite containing 88 benchmarks. Finally, the conclusion is given in Section 5.


Several closely related PSO variants

In this section, several closely related PSO variants are reviewed, including the inertia weight PSO (iwPSO) [32], the Comprehensive Learning PSO (CLPSO) [12], the Dynamic Neighborhood Learning PSO (DNLPSO) [13], the Social Learning PSO (SLPSO) [40], the Ensemble PSO (EPSO) [41], and the modified PSO with chaotic-based inertia weight (MPSO) [42].

The novel PSO-sono algorithm

In this section, we present the novel PSO variant in detail; we name it PSO-sono because the algorithm aims at providing excellent performance on single-objective numerical optimization. The description of PSO-sono can be divided into three parts: the first part discusses the hybrid paradigms of our novel PSO algorithm; the second part presents the adaptation schemes both for the ratio of each paradigm and for the coefficients which constrict the cognitive and social components; the third part presents the fully-informed search scheme based on the global optimum of each generation.
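The concrete update rules are defined later in this section; purely as a structural sketch of the sorted, hybrid-paradigm swarm described above (the split direction, the paradigm bodies, and the adaptation steps below are placeholders of our own, not the authors' formulas), one generation might be organized as:

    import numpy as np

    def pso_sono_generation(x, v, fitness, ratio, paradigm_a, paradigm_b):
        # Sort the swarm by fitness and split it according to the adaptive ratio
        order = np.argsort(fitness)
        n_a = int(round(ratio * len(order)))
        part_a, part_b = order[:n_a], order[n_a:]
        # Each sub-population iterates with its own paradigm (placeholder callables)
        x[part_a], v[part_a] = paradigm_a(x[part_a], v[part_a])
        x[part_b], v[part_b] = paradigm_b(x[part_b], v[part_b])
        # Afterwards: evaluate, update pbest/gbest, adapt the ratio and the
        # constriction coefficients, and apply the fully-informed search
        # around the generation's global optimum (see the following subsections)
        return x, v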

Experiment analysis

In this part, we present a detailed analysis of the PSO-sono algorithm. All experiments were conducted in Matlab 2011b on a personal computer with an Intel(R) Core(TM) i5-8265U 1.8 GHz CPU and Microsoft Windows 10 Enterprise 64-bit operating system. Based on the observation that employing a smaller test suite in algorithm validation may cause over-fitting, we employed 88 benchmarks from the CEC2013, CEC2014 and CEC2017 test suites for real-parameter single-objective optimization.

Conclusion

In this paper, we propose a new PSO variant, namely PSO-sono, aiming at improving the performance of PSO on single-objective numerical optimization. The novel algorithm has three contributions. The first is that a sorted particle swarm with hybrid paradigms is proposed; by employing an adaptive selection rate for each paradigm, PSO-sono makes good use of the advantages of all the paradigms. The second contribution is that novel adaptation schemes both for the ratio of each paradigm and for the constriction coefficients are proposed during the iteration. The third contribution is a fully-informed search scheme based on the global optimum of each generation, which helps the algorithm escape local optima and improves the overall performance.

CRediT authorship contribution statement

Zhenyu Meng: Conceptualization, Methodology, Supervision, Writing - review & editing, Software. Yuxin Zhong: Writing - original draft. Guojun Mao: Supervision. Yan Liang: Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grants No. 61906042, No. 61773415 and No. 42077395, the Natural Science Foundation of Fujian Province under Grant No. 2021J05227, and the Scientific Research Startup Foundation of Fujian University of Technology (GY-Z19013).

References (50)

  • A. Lin et al., Global genetic learning particle swarm optimization with diversity enhancement by ring topology, Swarm and Evolutionary Computation (2019).

  • N. Lynn et al., Population topologies for particle swarm optimization and differential evolution, Swarm and Evolutionary Computation (2018).

  • M. Taherkhani et al., A novel stability-based adaptive inertia weight for particle swarm optimization, Applied Soft Computing (2016).

  • R. Cheng et al., A social learning particle swarm optimization algorithm for scalable optimization, Information Sciences (2015).

  • N. Lynn et al., Ensemble particle swarm optimizer, Applied Soft Computing (2017).

  • H. Liu et al., A modified particle swarm optimization using adaptive strategy, Expert Systems with Applications (2020).

  • G. Xu et al., Particle swarm optimization based on dimensional learning strategy, Swarm and Evolutionary Computation (2019).

  • Z. Meng et al., QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm: A cooperative swarm based algorithm for global optimization, Knowledge-Based Systems (2016).

  • Z. Meng et al., QUasi-Affine TRansformation Evolution with External ARchive (QUATRE-EAR): An enhanced structure for differential evolution, Knowledge-Based Systems (2018).

  • R. Eberhart, J. Kennedy, Particle swarm optimization, in: Proceedings of the IEEE international conference on neural...

  • M. Clerc et al., The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation (2002).

  • M.R. Bonyadi et al., Particle swarm optimization for single objective continuous space problems: A review, Evolutionary Computation (2017).

  • J. Kennedy, The particle swarm: social adaptation of knowledge, in: Proceedings of 1997 IEEE International Conference...

  • Y. Shi, Particle swarm optimization: developments, applications and resources, in: Proceedings of the 2001 congress on...

  • D. Wang et al., Particle swarm optimization algorithm: An overview, Soft Computing (2018).