Information Sciences

Volume 427, February 2018, Pages 63-76

A competitive mechanism based multi-objective particle swarm optimizer with fast convergence

https://doi.org/10.1016/j.ins.2017.10.037

Abstract

In the past two decades, multi-objective optimization has attracted increasing interest in the evolutionary computation community, and a variety of multi-objective optimization algorithms have been proposed on the basis of different population based meta-heuristics, among which the family of multi-objective particle swarm optimization is one of the most representative. While the performance of most existing multi-objective particle swarm optimization algorithms largely depends on the global or personal best particles stored in an external archive, in this paper we propose a competitive mechanism based multi-objective particle swarm optimizer, where the particles are updated on the basis of pairwise competitions performed in the current swarm at each generation. The performance of the proposed competitive multi-objective particle swarm optimizer is verified by benchmark comparisons with several state-of-the-art multi-objective optimizers, including three multi-objective particle swarm optimization algorithms and three multi-objective evolutionary algorithms. Experimental results demonstrate the promising performance of the proposed algorithm in terms of both optimization quality and convergence speed.

Introduction

In the real world, many optimization problems involve multiple conflicting objectives to be optimized simultaneously [10], [11], [14], [29], [30], [46], [49]. Such problems are usually called multi-objective optimization problems (MOPs) and are generally more challenging to solve than single-objective optimization problems (SOPs), since an MOP typically admits a set of solutions representing trade-offs between the different objectives [39].

In the past two decades, multi-objective optimization has attracted increasing interest in the evolutionary computation community, and a large number of multi-objective optimization algorithms have been developed on the basis of different population based meta-heuristics, such as the genetic algorithm [41], the immune clone algorithm [31], the differential evolution algorithm [1], the firefly algorithm [17] and neural network regression [9]. It is worth noting that nature-inspired optimization algorithms have also been extensively applied to other optimization problems, e.g., the creation of graphic characters [18], the optimal outcome of evolutionary games [45] and inventory control [34], [35], [36], [38]. Particle swarm optimization (PSO) [20], one of the most classical swarm intelligence algorithms, has been widely applied to SOPs due to its simple implementation and fast convergence. Moreover, as reported in several recent studies [5], [6], [25], PSO also shows good potential in solving MOPs.

In order to apply PSO to multi-objective optimization, at least two fundamental issues have to be addressed. The first issue is how to define the personal and global best particles, given that no single particle can perform best on all objectives of an MOP. Since the personal and global best particles guide the search direction of particles in the swarm, they have a considerable influence on the performance of PSO algorithms, especially in solving MOPs [6]. The second issue is how to balance convergence and diversity of the swarm. Since the target of multi-objective optimization is to obtain a set of trade-off solutions, diversity maintenance is particularly important; due to its fast convergence, a PSO based multi-objective algorithm is prone to being trapped in one of the many local optima of an MOP. Therefore, striking a balance between convergence and diversity is crucial to the performance of multi-objective PSO algorithms [23].
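
To make the first issue concrete, it helps to recall the notion of Pareto dominance that underlies it. The following minimal sketch, written for a minimization problem with hypothetical objective vectors, only illustrates the concept and is not part of the proposed algorithm.

    # Pareto dominance for minimization: f_a dominates f_b if it is no worse on
    # every objective and strictly better on at least one (illustrative sketch).
    def dominates(f_a, f_b):
        no_worse = all(a <= b for a, b in zip(f_a, f_b))
        strictly_better = any(a < b for a, b in zip(f_a, f_b))
        return no_worse and strictly_better

    # Neither of these two hypothetical particles dominates the other, so
    # neither one can serve as a single "global best" for both objectives.
    print(dominates([1.0, 3.0], [2.0, 2.0]))  # False
    print(dominates([2.0, 2.0], [1.0, 3.0]))  # False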

In the past ten years, many multi-objective PSO algorithms have been proposed to address the above two issues [24], [25], [28], [32], [43], and they can be roughly divided into two categories. The first category defines the personal and global best particles based on the Pareto ranking scheme [19]. Three representatives of this category are multi-objective particle swarm optimization [5], the improved multi-objective particle swarm optimizer [32] and speed-constrained multi-objective PSO [25]. Some multi-objective PSO algorithms have also been proposed on the basis of enhanced ranking schemes, such as global margin ranking [22] and preference order ranking [42]. In these algorithms, an archive is maintained to store the elite particles determined by the ranking schemes, and these elite particles serve as candidates for the personal and global best particles. The second category adopts a decomposition strategy to transform an MOP into a set of SOPs, such that single-objective PSO algorithms can be directly applied to multi-objective optimization. The first decomposition based multi-objective PSO algorithm was suggested by Parsopoulos and Vrahatis based on dynamic weighted aggregation [16], [27], and several improved decomposition based multi-objective PSO algorithms have been reported more recently [6], [23], [24], [28]. Generally, the multi-objective PSO algorithms mentioned above achieve a good balance between convergence and diversity on most MOPs, but they still encounter great challenges when tackling complex MOPs, especially those with a large number of local optima (e.g., DTLZ1 and DTLZ3 [8]).
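
The decomposition strategy of the second category can be illustrated with standard scalarizing functions. The weighted-sum and Tchebycheff forms below are generic textbook choices used purely for illustration; they are not necessarily the aggregation functions adopted in [16], [27] or in the other decomposition based algorithms cited above.

    # Each weight vector turns the MOP into one single-objective subproblem
    # that a standard single-objective PSO could optimize (generic forms).
    def weighted_sum(f, w):
        return sum(wi * fi for wi, fi in zip(w, f))

    def tchebycheff(f, w, z_star):
        # z_star is an (estimated) ideal point with one entry per objective.
        return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

    weights = [[0.0, 1.0], [0.25, 0.75], [0.5, 0.5], [0.75, 0.25], [1.0, 0.0]]
    f = [0.4, 0.7]  # hypothetical objective vector of one particle
    print([round(weighted_sum(f, w), 3) for w in weights])
    print([round(tchebycheff(f, w, [0.0, 0.0]), 3) for w in weights])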

To further enhance the robustness of PSO in solving MOPs, in this paper we propose a multi-objective PSO algorithm inspired by the recently developed competitive swarm optimizer [2]. The competitive swarm optimizer is a variant of PSO whose main difference is that the search is guided by competitors in the current swarm rather than by historical positions, i.e., the personal and global best particles. Both theoretical analysis and empirical results have demonstrated that, by adopting this competition mechanism, the competitive swarm optimizer achieves a better balance between convergence and diversity than the original PSO [2]. Taking advantage of this competition mechanism, we propose a competitive mechanism based multi-objective PSO, termed CMOPSO, where a competition mechanism based learning strategy is designed to guide the search for multi-objective optimization. In summary, the main contributions of this paper are as follows.

  • (1)

    A competition mechanism based learning strategy is suggested for updating the particles in multi-objective PSO. In the proposed strategy, pairwise competitions are randomly performed between particles in the current swarm, and the winner of each competition is used to guide the other particle by updating its velocity accordingly (see the sketch after this list). Compared to the updating strategies in existing multi-objective PSO algorithms, the proposed competition mechanism based learning strategy achieves a better balance between convergence and diversity.

  • (2)

    A novel multi-objective PSO algorithm, called CMOPSO, is proposed on the basis of the competition mechanism based learning strategy. CMOPSO requires no additional storage for recording historical information during the search, and hence needs no external archive. By contrast, most existing multi-objective PSO algorithms have to maintain such an archive at a high computational cost, e.g., the multi-objective PSO algorithms developed in [23], [25], [32], [43].

  • (3)

    The performance of the proposed CMOPSO is verified by comparing it with six existing algorithms on 21 benchmark MOPs, including three popular multi-objective PSO algorithms, namely MPSOD [6], MMOPSO [23] and SMPSO [25], and three well-known multi-objective evolutionary algorithms (MOEAs), namely NSGA-II [7], MOEA/D [47] and SPEA2 [53]. The experimental results demonstrate that CMOPSO achieves significantly better overall performance than the compared algorithms in terms of both the quality of the obtained solution set and the convergence speed.
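
To make contribution (1) above more tangible, the following minimal sketch shows what a pairwise-competition update may look like. The winner criterion (Pareto dominance with a random tie break), the random coefficients r1 and r2, and the absence of any inertia or mean term are simplifying assumptions made here for illustration; the exact learning strategy of CMOPSO is specified in Section 3.

    import random

    def _dominates(f_a, f_b):
        # Pareto dominance for minimization.
        return (all(a <= b for a, b in zip(f_a, f_b))
                and any(a < b for a, b in zip(f_a, f_b)))

    def compete(f_a, f_b):
        # Index of the winner of a pairwise competition; a random winner is
        # taken when the two particles are mutually non-dominated (an
        # assumption, not necessarily the criterion used in the paper).
        if _dominates(f_a, f_b):
            return 0
        if _dominates(f_b, f_a):
            return 1
        return random.randint(0, 1)

    def competition_update(x, v, x_winner):
        # Move a particle toward the winner of its competition (simplified
        # velocity update with random coefficients).
        r1, r2 = random.random(), random.random()
        new_v = [r1 * vi + r2 * (wi - xi) for vi, wi, xi in zip(v, x_winner, x)]
        new_x = [xi + vi for xi, vi in zip(x, new_v)]
        return new_x, new_v

In each generation, every particle would be updated in this manner using a winner drawn from the current swarm, rather than from a personal memory or an external archive.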

The rest of this paper is organized as follows. In Section 2, we review several representative multi-objective PSO algorithms and briefly introduce the competitive swarm optimizer. The details of the proposed CMOPSO are given in Section 3, and its performance is verified in Section 4 by comparisons with existing multi-objective PSO algorithms and MOEAs. Finally, conclusions and future work are presented in Section 5.

Section snippets

Existing multi-objective PSO algorithms

PSO is a well-known swarm intelligence paradigm originally inspired by the flocking behavior of birds in nature, and it has since been widely applied to solve SOPs [15], [26], [33], [37], [44]. Owing to its high convergence speed and simple implementation, a number of multi-objective PSO algorithms have also been proposed in recent years [5], [6], [23], [24], [25], [28], [32], [43]. In the following, we briefly review some representative multi-objective PSO algorithms.

The first PSO based

The proposed CMOPSO

In this section, we first present the framework of the proposed CMOPSO and then elaborate the details of the competition mechanism based learning strategy adopted in it.
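
As a rough orientation before these details, one generation of such an algorithm can be outlined as in the assumed skeleton below. The helper callables for elite selection, the competition-and-update step and environmental selection are deliberately left abstract, since their concrete realizations are exactly what this section specifies; the outline is not the actual implementation.

    import random

    def one_generation(swarm, select_elites, compete_and_update, environmental_selection, n):
        # Assumed skeleton of one generation: elites are taken from the current
        # swarm only, so no external archive is maintained.
        elites = select_elites(swarm)
        offspring = []
        for particle in swarm:
            a, b = random.sample(elites, 2)  # two candidates for a pairwise competition
            offspring.append(compete_and_update(particle, a, b))
        # Keep n particles for the next generation.
        return environmental_selection(swarm + offspring, n)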

Experimental studies

In this section, we verify the performance of CMOPSO by comparing it with three existing multi-objective PSO algorithms, MPSOD [6], MMOPSO [23] and SMPSO [25], and three popular MOEAs, NSGA-II [7], MOEA/D [47] and SPEA2 [53]. A total of 21 benchmark MOPs from three test suites, ZDT [52], DTLZ [8] and WFG [13], are used to evaluate the performance of the algorithms, where ZDT1 to ZDT4 and ZDT6 are bi-objective problems, and two- and three-objective DTLZ and WFG test problems are considered. For

Conclusion and remark

In this paper, we have proposed a competitive mechanism based multi-objective particle swarm optimizer (CMOPSO) inspired by the recently proposed competitive swarm optimizer [2]. In CMOPSO, a competition mechanism based learning strategy has been suggested for updating the particles, where each particle is made to learn from the winner of each pairwise competition. Since the competitions are performed among the elite particles selected from the current swarm, there is no external archive

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61672033, 61502004 and 61502001), and by the Joint Research Fund for Overseas Chinese, Hong Kong and Macao Scholars of the National Natural Science Foundation of China (Grant No. 61428302).

References (54)

  • Y. Wang et al., Particle swarm optimization with preference order ranking for multi-objective optimization, Inf. Sci. (2009).
  • H. Abbass et al., A Pareto-frontier differential evolution approach for multi-objective optimization problems, Proceedings of the IEEE Congress on Evolutionary Computation (2001).
  • R. Cheng et al., A competitive swarm optimizer for large scale optimization, IEEE Trans. Cybern. (2015).
  • R. Cheng et al., Test problems for large-scale multi-objective and many-objective optimization, IEEE Trans. Cybern. (2016, in press).
  • R. Cheng et al., A benchmark test suite for evolutionary many-objective optimization, Complex Intell. Syst. (2017).
  • C.C. Coello et al., MOPSO: a proposal for multiple objective particle swarm optimization, Proceedings of the IEEE Congress on Evolutionary Computation (2002).
  • K. Deb et al., A fast and elitist multi-objective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. (2002).
  • K. Deb et al., Scalable multiobjective optimization test problems, Proceedings of the IEEE Congress on Evolutionary Computation (2002).
  • I. Fister et al., Artificial neural network regression as a local search heuristic for ensemble strategies in differential evolution, Nonlinear Dyn. (2016).
  • J. Handl et al., Multiobjective optimization in bioinformatics and computational biology, IEEE Trans. Comput. Biol. Bioinf. (2007).
  • J. Herrero et al., Effective evolutionary algorithms for many-specifications attainment: application to air traffic control tracking filters, IEEE Trans. Evol. Comput. (2009).
  • W. Hu et al., Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system, IEEE Trans. Evol. Comput. (2015).
  • S. Huband et al., A review of multiobjective test problems and a scalable test problem toolkit, IEEE Trans. Evol. Comput. (2006).
  • H. Ishibuchi et al., A multi-objective genetic local search algorithm and its application to flowshop scheduling, IEEE Trans. Syst., Man, Cybern., Part C (1998).
  • Y. Jin et al., Dynamic weighted aggregation for evolutionary multi-objective optimization: why does it work and how?, Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation (2001).
  • I. Fister Jr. et al., A review of chaos-based firefly algorithms: perspectives and research challenges, Appl. Math. Comput. (2015).
  • I. Fister Jr. et al., Particle swarm optimization for automatic creation of complex graphic characters, Chaos, Solitons Fractals (2015).