Advanced orthogonal learning-driven multi-swarm sine cosine optimization: Framework and case studies

https://doi.org/10.1016/j.eswa.2019.113113

Highlights

  • Orthogonal learning, multi-population, and greedy selection are embedded into the sine cosine algorithm.

  • Orthogonal learning procedure is introduced to improve its neighborhood search capabilities.

  • Multi-population scheme with three sub-strategies is adopted to enhance the global exploration capabilities.

  • Greedy selection strategy is applied to improve the quality of the search agents.

  • Extensive benchmark problems and methods are used to verify the proposed method.

Abstract

The sine cosine algorithm (SCA) is a widely used nature-inspired algorithm that is simple in structure and involves only a few parameters. For some complex tasks, especially high-dimensional and multimodal problems, the basic method may suffer from premature convergence or become trapped in local optima. To alleviate this deficiency efficiently, an improved variant of the basic SCA is proposed in this paper. Orthogonal learning, multi-swarm, and greedy selection mechanisms are utilized to improve the global exploration and local exploitation powers of SCA. First, the orthogonal learning procedure is introduced into the conventional method to expand its neighborhood searching capabilities. Next, a multi-swarm scheme with three sub-strategies is adopted to enhance the global exploration capabilities of the algorithm. Also, a greedy selection strategy is applied to the conventional approach to improve the quality of the search agents. Based on these three strategies, we call the improved SCA OMGSCA. The proposed OMGSCA is compared with a comprehensive set of meta-heuristic algorithms, including six other improved SCA variants, the basic version, and ten advanced meta-heuristic algorithms, on thirty IEEE CEC2014 benchmark functions, and against eight advanced meta-heuristic algorithms on seventeen real-world benchmark problems from IEEE CEC2011. Also, the non-parametric Wilcoxon signed-rank and Friedman tests are performed to assess the performance of the proposed method. The experimental results demonstrate that the introduced strategies can significantly improve the exploratory and exploitative inclinations of the basic algorithm. The convergence speed of the original method has also been improved substantially. The results suggest that the proposed OMGSCA can be used as an effective and efficient auxiliary tool for solving complex optimization problems.

Introduction

In the past two decades, meta-heuristic algorithms (MAs) have been widely implemented in various optimization scenarios to solve complex problems in modern industry. The inspiration for these MAs frequently comes from biological or physical phenomena in nature (Chen et al., 2019; Chen, Yang, Heidari & Zhao, 2019; Chen, Zhang, Luo, Xu & Zhang, 2019; Heidari et al., 2019; Y. Xu et al., 2019; Y. Xu et al., 2019; Yu, Zhao, Wang, Chen & Li, 2019). Their gradual process of finding the optimal solution follows the principle of "trial and error." In general, MAs contain many different operators, such as crossover, mutation, orthogonal learning, chaotic local search, chaotic initialization, and greedy selection. These operators are used to generate new individuals, and the quality of the solutions is continually improved during an iterative process. In terms of efficiency, MAs have proven to be more efficient than gradient-based algorithms (Zhang, Wang, Zhou & Ma, 2019). However, MAs also have some shortcomings. For example, the convergence rate can be slow, and, as the no free lunch (NFL) theorem proves, there is no universally best method for solving all possible types of problems. For these reasons, the invention and successful development of new MAs remain a challenging task.

MAs consist of two major classes: swarm intelligence and evolutionary algorithms. Evolutionary algorithms, which are inspired by the evolutionary processes of biology, embody the concepts of competition and elimination. Representative evolutionary algorithms include the genetic algorithm (GA) (Deng et al., 2017; Zheng, Lu, Guo, Guo & Xu, 2014), differential evolution (DE) (Storn & Price, 1997), evolution strategies (ES) (Rechenberg, 1973), and others. The inspiration for swarm intelligence algorithms comes from the social organizational behavior of animal groups. Representative swarm intelligence algorithms include particle swarm optimization (PSO) (Kennedy & Eberhart, 1995; Zhang, Hu, Xie, Bao & Maybank, 2015), ant colony optimization (ACO) (Deng, Xu & Zhao, 2019; Dorigo, Maniezzo & Colorni, 1996), artificial bee colony (ABC) (Karaboga & Basturk, 2007), the grey wolf optimizer (GWO) (Cai et al., 2019; Mirjalili, Mirjalili & Lewis, 2014), the Harris hawks optimizer (HHO) (Heidari et al., 2019), and others (Chen, Xu, Wang & Zhao, 2019; Zhang et al., 2019).

As a recently developed meta-heuristic algorithm, the sine cosine algorithm (SCA) (Mirjalili, 2016) belongs to the group of swarm intelligence algorithms. To determine the optimal solution in the search space, SCA makes use of both the sine and cosine functions. Owing to its simplicity, flexibility, and efficiency, SCA has gained much attention from the intelligent optimization community. Solving multi-modal problems has remained intractable for many years, and on such problems SCA has exposed a methodological shortcoming: it is prone to falling into local optima. Similarly, how new particles are generated is closely related to the optimization problem at hand, which means different particle generation methods should be tried when dealing with different types of problems. Owing to its updating formula based on the sine and cosine functions, SCA can explore the search space near the current best solution very efficiently. However, lacking effective corresponding strategies, the traditional sine cosine algorithm has insufficient exploration capability to scan the entire feature space. Because the mechanism of the original SCA is so simple, it may not show strong search capabilities on some problems, and its potential is limited in terms of exploitation as well. This shortcoming limits the convergence rate of SCA, which may fail to obtain a better solution even over many iterations. Also, the original SCA is inclined to fall into local optima in some feature spaces because it has limited efficacy in avoiding them; as a result, it is prone to stagnation, and its convergence speed is mediocre.
SCA does not have stable power for jumping out of local optima: when the algorithm falls into a local optimum, it cannot escape, and no better solution can be obtained by further iterations. This behavior is very unfavorable for complex search processes. Therefore, this paper is devoted to mitigating these core shortcomings of SCA by revising its core structure with more efficient strategies.
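The sine- and cosine-based updating formula referred to above follows the original SCA (Mirjalili, 2016). The following is a minimal vectorized sketch of one such update step; the function name and the NumPy formulation are ours, while the four random components r1–r4 and the linear decay of r1 from a constant a (typically 2) to 0 are per the original paper:

```python
import numpy as np

def sca_step(X, dest, t, T, a=2.0):
    """One iteration of the basic SCA position update (Mirjalili, 2016).

    X    : (n_agents, dim) array of current positions
    dest : (dim,) best solution found so far (the destination point)
    t, T : current iteration index and total iteration budget
    a    : constant; r1 decays linearly from a to 0 over the run
    """
    n, dim = X.shape
    r1 = a - t * (a / T)                       # controls exploration -> exploitation
    r2 = 2.0 * np.pi * np.random.rand(n, dim)  # random angle for the sine/cosine term
    r3 = 2.0 * np.random.rand(n, dim)          # random weight on the destination
    r4 = np.random.rand(n, dim)                # switch between the two branches
    dist = np.abs(r3 * dest - X)
    sine_move = X + r1 * np.sin(r2) * dist
    cosine_move = X + r1 * np.cos(r2) * dist
    return np.where(r4 < 0.5, sine_move, cosine_move)
```

Because r1 shrinks over the iterations, the oscillation amplitude around the destination point contracts, which is exactly why early iterations explore while late iterations exploit a narrow neighborhood of the best solution.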

In this paper, we propose a novel, modified version of SCA. The reason is that SCA has some excellent features compared to other well-established methods in dealing with various kinds of problems (Mirjalili, 2016). This has been verified not only in the original work but also in the literature that followed the invention of SCA (Gupta & Deep, 2018; Kumar, Hussain, Singh & Panigrahi, 2017; Qu, Zeng, Dai, Yi & He, 2018). These features benefit the improvement of the algorithm, and the strategies proposed in this paper can further help the method achieve better performance. SCA has attracted many researchers because its structure is relatively simple compared to other well-established optimizers and its computational complexity is not high, which makes it favorable for integration with other efficient evolutionary strategies (Abd Elaziz, Oliva & Xiong, 2017; Nenavath & Jatoth, 2018). Also, the structure of the population in SCA is relatively clear, which is very suitable for constructing a multi-swarm scheme; embedding a multi-swarm strategy into SCA can fully exploit the effectiveness of the multi-swarm structure in searching the feature space (Niu, Zhu, He & Wu, 2007; Xia et al., 2018). For the basic local search ability of SCA, the orthogonal learning strategy is a good way to further improve performance (Rizk-Allah, 2018). The greedy selection strategy plays a screening role for the solutions obtained by the core functions of SCA. Therefore, compared with other modified versions of SCA, the improvements made in this paper are more in accordance with the inherent characteristics of SCA, and the observed improvements are also very significant compared to other well-regarded optimizers.

This new SCA variant, named OMGSCA, incorporates a multi-swarm mechanism and combines an orthogonal learning mechanism with a greedy selection strategy. In initial experiments, we compared many mechanisms to find a proper approach to improving the core mechanisms of SCA: different mechanisms were combined with SCA and then compared in the same experimental environment. Through this exploratory analysis, we found that the improved variant of SCA resulting from the combination of the orthogonal learning, multi-swarm, and greedy selection strategies shows very good performance. For improving the local search of SCA, the orthogonal learning strategy performs best. For adjusting the structure of SCA's population and augmenting its global searching trends, the multi-swarm strategy has the best effect. The greedy selection strategy helps to preserve useful information in the solutions. Therefore, we chose these three strategies to improve SCA.

Initially, all particles are divided into a number of sub-swarms, which helps to extensively explore the global space. Second, swarm-based algorithms try to shift gradually from global exploration to local exploitation over the iterations; therefore, different sub-swarm numbers are set at various stages of the run, so that the number of sub-swarms and the number of particles per sub-swarm change dynamically. Next, the greedy selection strategy is introduced into the basic method to increase the convergence speed. This paper introduces greedy selection at the level of dimensional iteration in the original approach to combine it effectively with SCA; at the final stage, high-quality offspring are retained after greedy selection. Through the variation mechanism based on the orthogonal learning principle, SCA can produce more promising solutions with better fitness values. Thirty benchmark functions with 30 dimensions (30D), selected from the well-regarded IEEE CEC 2014 suite (Liang, Qu & Suganthan, 2013), are employed to evaluate the performance of the proposed algorithm in depth. Also, a comprehensive set of methods, including basic and advanced MAs, is used to validate the novel method. We conducted the Wilcoxon signed-rank test (García, Fernández, Luengo & Herrera, 2010) and the Friedman test (Alcalá-Fdez et al., 2009) to verify significant differences.
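The orthogonal learning principle mentioned above can be illustrated with a small sketch: a two-level orthogonal array decides, dimension by dimension, which of two candidate solutions contributes each coordinate, and greedy selection keeps the recombination only if it improves on both parents. This is a minimal illustration under our own assumptions, not the paper's exact operator (OMGSCA's OL procedure and its Limit parameter are specified in Section 4); both function names are hypothetical:

```python
import numpy as np

def two_level_oa(n_factors):
    """Construct a two-level orthogonal array with 2**k rows,
    where k = ceil(log2(n_factors + 1)). Entry 0/1 says which of
    two parent solutions supplies that dimension in a trial vector."""
    k = int(np.ceil(np.log2(n_factors + 1)))
    rows = 2 ** k
    # Column c uses bit mask c+1; each entry is the parity of (row & mask),
    # i.e. an inner product over GF(2) -- the classic L_{2^k} construction.
    oa = np.zeros((rows, n_factors), dtype=int)
    for r in range(rows):
        for c in range(n_factors):
            oa[r, c] = bin(r & (c + 1)).count("1") % 2
    return oa

def orthogonal_learning(x1, x2, f):
    """Recombine two solutions dimension-wise according to the OA rows,
    then greedily keep the best vector found (minimization of f)."""
    oa = two_level_oa(len(x1))
    trials = np.where(oa == 0, x1, x2)  # each row mixes the two parents
    best = min(trials, key=f)
    # Greedy selection: accept the recombination only if it beats both parents.
    if f(best) < min(f(x1), f(x2)):
        return best
    return x1 if f(x1) <= f(x2) else x2
```

For a 3-dimensional problem this yields an L4(2^3) array, so only 4 of the 8 possible mixtures are evaluated while every pair of dimensions is still tried in all four 0/1 combinations; this economy is what makes orthogonal design attractive as a neighborhood search tool.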

The organization of this paper is as follows. Section 2 reviews the literature and summarizes the research and development history of SCA and the multi-swarm mechanism. Section 3 briefly describes the original SCA. Section 4 explains the proposed algorithm and the mechanisms it contains. Section 5 is devoted to the experiments. Finally, conclusions and future works are summarized in Section 6.

Section snippets

Sine cosine algorithm

Until now, many improved SCA-based methods have been proposed to deal with various problems. Abd Elaziz et al. (2017) proposed a variant of SCA based on opposition-based learning (OBL). First, in the swarm initialization phase and the iterative process, new solutions were generated based on the principle of OBL. Then, based on fitness values, the better half of the solutions was retained. The proposed algorithm was tested on 29 standard functions. Kumar et al. (2017) proposed a variant of SCA that

Sine cosine algorithm (SCA)

Any problem can be modeled and solved based on the concepts, rules, and verified equations of mathematics (Gao, Guirao, Abdel-Aty & Xi, 2019, 2018; Gao, Wang, Dimitrov & Wang, 2018; Gao, Wu, Siddiqui & Baig, 2018; Wei, Darko & Hosam, 2018). The basic SCA is a swarm-intelligence meta-heuristic optimizer proposed in 2016 based on well-known mathematical rules (Mirjalili, 2016). The core idea of SCA is to take advantage of the mathematical sine and cosine functions. Similar to many MAs,

Proposed OMGSCA

SCA has rapid convergence and a coherent structure. However, for some complex optimization problems, the basic method will fall into local optima. The weaknesses of SCA are increasingly evident on high-dimensional and multi-modal functions. The optimization ability of the standard SCA is directly related to the leading role of the best individual. When an isolated individual is captured by a local optimum, it can quickly jump out with the power of the population. However, if most individuals

Experimental results

The experimental evaluation of the proposed OMGSCA was carried out in five parts. First, OMGSCA was compared with other basic methods. Second, OMGSCA was compared against other advanced methods. Third, for the key parameter Limit in the OL strategy, we performed a number of tests to select its optimal value. Fourth, selected real-world benchmark problems from CEC2011 (Das & Suganthan, 2010) are used to test the performance of the

Conclusion and future directions

This paper proposed an effective method that introduces orthogonal learning, multi-swarm, and greedy selection mechanisms into the recently proposed SCA algorithm. We utilized these mechanisms to enhance the performance of this method significantly. The successful fusion of these key strategies has further balanced the extensive exploration and intensive exploitation of SCA and substantially enhanced the diversity of the swarm. Based on an extensive set of benchmark functions from CEC2014, the

CRediT authorship contribution statement

Hao Chen: Conceptualization, Methodology, Software, Writing - original draft, Investigation, Writing - review & editing. Ali Asghar Heidari: Writing - original draft, Writing - review & editing, Software, Visualization, Investigation. Xuehua Zhao: Writing - review & editing, Software, Visualization. Lejun Zhang: Writing - review & editing, Software, Visualization. Huiling Chen: Conceptualization, Funding acquisition, Resources.

Declaration of Competing Interest

The authors declare that there is no conflict of interest regarding the publication of this article.

Acknowledgement

This research was supported by the National Natural Science Foundation of China (U1809209), the Natural Science Foundation of the Jiangsu Higher Education Institutions (17KJB520044), the Six Talent Peaks Project in Jiangsu Province (XYDXX-108), the Guangdong Natural Science Foundation (2018A030313339), the Ministry of Education in China Youth Fund Project of Humanities and Social Sciences (17YJCZH261), and the Scientific Research Team Project of Shenzhen Institute of Information Technology (SZIIT2019KJ022).

References (111)

  • W. Gao et al.

    Partial multi-dividing ontology learning algorithm

    Information Sciences

    (2018)
  • W. Gao et al.

    Nano properties analysis via fourth multiplicative ABC indicator calculating

    Arabian Journal of Chemistry

    (2018)
  • W. Gao et al.

    Study of biological networks using graph theory

    Saudi Journal of Biological Sciences

    (2018)
  • S. García et al.

    Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power

    Information Sciences

    (2010)
  • S. Gupta et al.

    A hybrid self-adaptive sine cosine algorithm with opposition based learning

    Expert Systems with Applications

    (2019)
  • A.A. Heidari et al.

    Harris hawks optimization: Algorithm and applications

    Future Generation Computer Systems

    (2019)
  • F.Z. Huang et al.

    An effective co-evolutionary differential evolution for constrained optimization

    Applied Mathematics and Computation

    (2007)
  • M. Issa et al.

    ASCA-PSO: Adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local sequence alignment

    Expert Systems with Applications

    (2018)
  • D. Jia et al.

    A hybrid particle swarm optimization algorithm for high-dimensional problems

    Computers & Industrial Engineering

    (2011)
  • A. Kaveh et al.

    A new meta-heuristic method: Ray optimization

    Computers and Structures

    (2012)
  • K.S. Lee et al.

    A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice

    Computer Methods in Applied Mechanics and Engineering

    (2005)
  • S. Mirjalili

    Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm

    Knowledge-Based Systems

    (2015)
  • S. Mirjalili

    SCA: A sine cosine algorithm for solving optimization problems

    Knowledge-Based Systems

    (2016)
  • S. Mirjalili et al.

    Salp swarm algorithm: A bio-inspired optimizer for engineering design problems

    Advances in Engineering Software

    (2017)
  • S. Mirjalili et al.

    The whale optimization algorithm

    Advances in Engineering Software

    (2016)
  • S. Mirjalili et al.

    Grey wolf optimizer

    Advances in Engineering Software

    (2014)
  • H. Nenavath et al.

    Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking

    Applied Soft Computing Journal

    (2018)
  • H. Nenavath et al.

    A synergy of the sine-cosine algorithm and particle swarm optimizer for improved global optimization and object tracking

    Swarm and Evolutionary Computation

    (2018)
  • B. Niu et al.

    MCPSO: A multi-swarm cooperative particle swarm optimizer

    Applied Mathematics and Computation

    (2007)
  • R.V. Rao et al.

    Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems

    CAD Computer Aided Design

    (2011)
  • R.M. Rizk-Allah

    Hybridizing sine cosine algorithm with multi-orthogonal search strategy for engineering design problems

    Journal of Computational Design and Engineering

    (2018)
  • P. Savsani et al.

    Passing vehicle search (PVS): A novel metaheuristic algorithm

    Applied Mathematical Modelling

    (2016)
  • A.M. Turky et al.

    A multi-population harmony search algorithm with external archive for dynamic optimization problems

    Information Sciences

    (2014)
  • Y. Wang et al.

    cPSO-CNN: An efficient PSO-based algorithm for fine-tuning hyper-parameters of convolutional neural networks

    Swarm and Evolutionary Computation

    (2019)
  • X. Xia et al.

    A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting

    Applied Soft Computing Journal

    (2018)
  • X. Xia et al.

    Particle swarm optimization using multi-level adaptation and purposeful detection operators

    Information Sciences

    (2017)
  • G. Xiong et al.

    Enhancing the performance of biogeography-based optimization using polyphyletic migration operator and orthogonal learning

    Computers and Operations Research

    (2014)
  • X. Xu et al.

    Dynamic multi-swarm particle swarm optimizer with cooperative learning strategy

    Applied Soft Computing Journal

    (2015)
  • Y. Xu et al.

    An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks

    Expert Systems with Applications

    (2019)
  • Y. Xu et al.

    Enhanced moth-flame optimizer with mutation strategy for global optimization

    Information Sciences

    (2019)
  • J.B. Yu

    Evolutionary manifold regularized stacked denoising autoencoders for gearbox fault diagnosis

    Knowledge-Based Systems

    (2019)
  • J. Alcalá-Fdez et al.

    KEEL: A software tool to assess evolutionary algorithms for data mining problems

    Soft Computing

    (2009)
  • J.J. Bird et al.

    A deep evolutionary approach to bioinspired classifier optimisation for brain-machine interaction

    Complexity

    (2019)
  • Z. Cai et al.

    Evolving an optimal kernel extreme learning machine by using an enhanced grey wolf optimization strategy

    Expert Systems with Applications

    (2019)
  • H. Chen et al.

    Efficient multi-population outpost fruit fly-driven optimizers: Framework and advances in support vector machines

    Expert Systems with Applications

    (2019)
  • H. Chen et al.

    An efficient double adaptive random spare reinforced whale optimization algorithm

    Expert Systems with Applications

    (2019)
  • H. Chen et al.

    An enhanced bacterial foraging optimization and its application for training kernel extreme learning machine

    Applied Soft Computing

    (2019)
  • W. Chen et al.

    Particle swarm optimization with an aging leader and challengers

    IEEE Transactions on Evolutionary Computation

    (2013)
  • X. Chen et al.

    Biogeography-based learning particle swarm optimization

    Soft Computing

    (2017)
  • C.A. Coello Coello

    Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art

    Computer Methods in Applied Mechanics and Engineering

    (2002)

    1

    These authors contributed equally to this work
