Advanced orthogonal learning-driven multi-swarm sine cosine optimization: Framework and case studies
Introduction
In the past two decades, meta-heuristic algorithms (MAs) have been widely applied to solve complex optimization problems in modern industry. The inspiration for these MAs frequently comes from biological or physical phenomena in nature (Chen et al., 2019; Chen, Yang, Heidari & Zhao, 2019; Chen, Zhang, Luo, Xu & Zhang, 2019; Heidari et al., 2019; Y. Xu et al., 2019; Yu, Zhao, Wang, Chen & Li, 2019). Their gradual process of finding the optimal solution follows the principle of “trial and error.” In general, MAs contain many different operators, such as crossover, mutation, orthogonal learning, chaotic local search, chaotic initialization, and greedy selection. These operators are used to generate new individuals, and the quality of the solutions is continually improved during an iterative process. In terms of efficiency, MAs have proven to be more efficient than gradient-based algorithms (Zhang, Wang, Zhou & Ma, 2019). However, MAs also have shortcomings: convergence can be slow, and, as the no free lunch (NFL) theorem proves, there is no universal best method for solving all types of possible problems. For these reasons, the invention and successful development of new MAs remain a challenging task.
MAs consist of two major classes: swarm intelligence and evolutionary algorithms. Evolutionary algorithms, inspired by the process of biological evolution, embody the concepts of competition and elimination. Representative evolutionary algorithms include the genetic algorithm (GA) (Deng et al., 2017; Zheng, Lu, Guo, Guo & Xu, 2014), differential evolution (DE) (Storn & Price, 1997), and evolution strategies (ES) (Rechenberg, 1973), among others. Swarm intelligence algorithms draw their inspiration from the social organizational behavior of animal groups. Representative swarm intelligence algorithms are particle swarm optimization (PSO) (Kennedy & Eberhart, 1995; Zhang, Hu, Xie, Bao & Maybank, 2015), ant colony optimization (ACO) (Deng, Xu & Zhao, 2019; Dorigo, Maniezzo & Colorni, 1996), artificial bee colony (ABC) (Karaboga & Basturk, 2007), grey wolf optimizer (GWO) (Cai et al., 2019; Mirjalili, Mirjalili & Lewis, 2014), Harris hawks optimizer (HHO) (Heidari et al., 2019), and others (Chen, Xu, Wang & Zhao, 2019; Zhang et al.; Zhang et al., 2019).
As a recently developed meta-heuristic algorithm, the sine cosine algorithm (SCA) (Mirjalili, 2016) belongs to the family of swarm intelligence algorithms. SCA employs both the sine and cosine functions to steer the search for the optimal solution. Owing to its simplicity, flexibility, and efficiency, SCA has received much attention from the intelligent optimization community. Multi-modal problems have long been intractable, and on such problems SCA exposes a methodological shortcoming: it is prone to falling into local optima. Likewise, the way new particles are generated is closely tied to the optimization problem at hand, which means different particle generation methods should be tried when dealing with different problem types. Thanks to its sine- and cosine-based updating formula, SCA can explore the search space near the current best solution very efficiently. However, lacking effective complementary strategies, the traditional SCA has insufficient exploration capability to scan the entire feature space, and because its mechanism is so simple, it may not show strong search capability on some problems. The exploitation potential of the original SCA is limited as well: this shortcoming slows convergence, and the algorithm may fail to reach a better solution even after many iterations. The original SCA is also inclined to fall into local optima in some feature spaces because it has limited ability to avoid them, which makes it prone to stagnation; its convergence speed is likewise mediocre. Therefore, this paper is devoted to mitigating these core shortcomings of SCA.
The SCA algorithm lacks a reliable mechanism for jumping out of local optimal solutions. Once it falls into a local optimum, it cannot escape, and no better solution can be obtained through further iterations. This behavior is very unfavorable for complex search processes. Therefore, we aim to remedy these weaknesses of SCA by revising its core structure with more efficient strategies.
In this paper, we propose a novel, modified version of SCA. SCA has some excellent features compared to other well-established methods across various kinds of problems (Mirjalili, 2016); this has been verified not only in the original work but also in the literature that followed SCA's introduction (Gupta & Deep, 2018; Kumar, Hussain, Singh & Panigrahi, 2017; Qu, Zeng, Dai, Yi & He, 2018). These features make the algorithm a good candidate for improvement, and the strategies proposed in this paper can further help it achieve better performance. The method has attracted many researchers because its structure is relatively simple compared to other well-established optimizers and its computational complexity is low, which makes it favorable for integration with other efficient evolutionary strategies (Abd Elaziz, Oliva & Xiong, 2017; Nenavath & Jatoth, 2018). Moreover, the population structure of SCA is relatively clear, which makes it well suited to a multi-swarm design: embedding a multi-swarm strategy into SCA can fully exploit the effectiveness of the multi-swarm structure in searching the feature space (Niu, Zhu, He & Wu, 2007; Xia et al., 2018). Given the basic local search ability of SCA, the orthogonal learning strategy is a good way to further improve its performance (Rizk-Allah, 2018). The greedy selection strategy can, in turn, screen the solutions produced by the core functions of SCA. Therefore, compared with other modified versions of SCA, the improvements made in this paper are more in accordance with the inherent characteristics of SCA, and the observed gains over other well-regarded optimizers are also very significant.
This new SCA variant, named OMGSCA, incorporates a multi-swarm mechanism and combines an orthogonal learning mechanism with a greedy selection strategy. In initial experiments, we compared many mechanisms to find a proper approach to improving the core mechanisms of SCA: different mechanisms were combined with SCA and then compared in the same experimental environment. Through this exploratory analysis, we found that the SCA variant combining the orthogonal learning, multi-swarm, and greedy selection strategies shows very good performance. For improving the local search of SCA, the orthogonal learning strategy performs best; for adjusting the population structure of SCA and strengthening its global search, the multi-swarm strategy has the best effect; and the greedy selection strategy helps preserve useful information in the solutions. Therefore, we chose these three strategies to improve SCA.
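As an illustration of how an orthogonal learning step can combine two candidate solutions, the following sketch uses the standard Taguchi L4(2^3) orthogonal array. The round-robin grouping of dimensions into three factors and the function name `orthogonal_combine` are our illustrative assumptions, not the exact OMGSCA operator:

```python
# Standard Taguchi L4(2^3) orthogonal array: four trials over three
# two-level factors (level 0 = take from parent a, level 1 = from parent b).
OA_L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

def orthogonal_combine(a, b, fitness):
    """Group the dimensions round-robin into three factors, build the
    four L4 trial vectors mixing parents `a` and `b`, and return the
    best trial found (minimization assumed)."""
    d = len(a)
    groups = [list(range(f, d, 3)) for f in range(3)]  # factor -> dimension indices
    best, best_f = None, float("inf")
    for row in OA_L4:
        trial = list(a)
        for factor, level in enumerate(row):
            if level == 1:
                for j in groups[factor]:
                    trial[j] = b[j]
        f = fitness(trial)
        if f < best_f:
            best, best_f = trial, f
    return best
```

With only four fitness evaluations instead of the 2^3 = 8 exhaustive combinations, the array still tests every factor at both levels in a balanced way, which is the appeal of orthogonal learning as a local search aid.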
Initially, all particles are divided into a number of sub-swarms, which helps to explore the global space extensively. Since swarm-based algorithms should gradually shift from global exploration to local exploitation as the iterations proceed, different sub-swarm counts are set at different stages of the iteration; hence, the number of sub-swarms and the number of particles per sub-swarm change dynamically. Next, the greedy selection strategy is introduced into the basic method to increase the convergence speed; this paper applies it at the level of individual dimensions to combine it effectively with SCA. In the final stage, high-quality offspring are retained after greedy selection, and through a variation mechanism based on the orthogonal learning principle, SCA can produce more promising solutions with better fitness values. Thirty benchmark functions with 30 dimensions (30D), selected from the well-regarded IEEE CEC 2014 suite (Liang, Qu & Suganthan, 2013), are employed to evaluate the performance of the proposed algorithm in depth. A comprehensive set of competitors, including both basic and advanced MAs, is used to validate the new method, and we conducted the Wilcoxon signed-rank test (García, Fernández, Luengo & Herrera, 2010) and the Friedman test (Alcalá-Fdez et al., 2009) to verify the significance of the differences.
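The dynamic sub-swarm partition and the dimension-level greedy selection described above can be sketched in Python. The helper names (`partition_swarm`, `greedy_select_dimensionwise`) and the halving schedule for the sub-swarm count are our illustrative assumptions, not the paper's exact scheme:

```python
import random

def partition_swarm(population, stage):
    """Illustrative schedule (our assumption): halve the number of
    sub-swarms at each later stage, moving from many small groups
    (exploration) toward one large group (exploitation)."""
    n = len(population)
    n_swarms = max(1, n // (2 ** (stage + 1)))
    shuffled = random.sample(population, n)  # random regrouping of particles
    return [shuffled[i::n_swarms] for i in range(n_swarms)]

def greedy_select_dimensionwise(parent, child, fitness):
    """Dimension-level greedy selection: adopt each coordinate of
    `child` only if doing so does not worsen fitness (minimization)."""
    current = list(parent)
    for j in range(len(parent)):
        trial = list(current)
        trial[j] = child[j]
        if fitness(trial) <= fitness(current):
            current = trial
    return current
```

Applying selection per dimension, rather than accepting or rejecting the whole offspring, lets a partly good offspring still contribute its useful coordinates, which matches the "preserve useful information" role of greedy selection described above.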
The organization of this paper is as follows. Section 2 reviews the literature and summarizes the research and development history of SCA and the multi-swarm mechanism. Section 3 briefly describes the original SCA. Section 4 explains the proposed algorithm and the mechanisms it contains. Section 5 is devoted to the experiments. Finally, the conclusion and future works are summarized in Section 6.
Section snippets
Sine cosine algorithm
To date, many improved SCA-based methods have been proposed to deal with various problems. Abd Elaziz et al. (2017) proposed a variant of SCA based on opposition-based learning (OBL). First, in the swarm initialization phase and during the iterative process, new solutions were generated based on the principle of OBL. Then, the better half of the solutions was retained according to fitness. The proposed algorithm was tested on 29 standard functions. Kumar et al. (2017) proposed a variant of SCA that
Sine cosine algorithm (SCA)
Any problem can be modeled and solved based on the concepts, rules, and verified equations of mathematics (Gao, Guirao, Abdel-Aty & Xi, 2019, 2018; Gao, Wang, Dimitrov & Wang, 2018; Gao, Wu, Siddiqui & Baig, 2018; Wei, Darko & Hosam, 2018). The basic SCA is a swarm-intelligence meta-heuristic optimizer proposed in 2016 based on well-known mathematical rules (Mirjalili, 2016). The core idea of SCA is to take advantage of the mathematical sine and cosine functions. Similar to many MAs,
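The snippet above is truncated, but the core position update of SCA (Mirjalili, 2016) is well known. A minimal Python sketch of one update step (the function name `sca_update` is ours, not from the paper) could look like:

```python
import math
import random

def sca_update(x, dest, t, max_iter, a=2.0):
    """One SCA position update of solution vector `x` toward the
    destination (best-so-far) point `dest`, following the
    sine/cosine rules of Mirjalili (2016)."""
    r1 = a - t * (a / max_iter)  # amplitude decreases linearly over iterations
    new_x = []
    for xj, pj in zip(x, dest):
        r2 = random.uniform(0.0, 2.0 * math.pi)  # direction of the move
        r3 = random.uniform(0.0, 2.0)            # random weight on the destination
        r4 = random.random()                     # switch between sine and cosine
        step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
        new_x.append(xj + step * abs(r3 * pj - xj))
    return new_x
```

Because r1 shrinks toward zero, late-iteration moves contract around the destination point; this is the exploitation bias of SCA, and also the source of the stagnation risk discussed in this paper.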
Proposed OMGSCA
SCA has rapid convergence and a coherent structure. However, on some complex optimization problems the method can fall into local optima, and its weaknesses become increasingly evident on high-dimensional and multi-modal functions. The optimization ability of the standard SCA is directly related to the leading role of the best individual. When an isolated individual is captured by a local optimum, it can quickly jump out with the power of the population. However, if most individuals
Experimental results
The experimental evaluation of the proposed OMGSCA was carried out in five aspects. First, OMGSCA was compared with other basic methods. Second, OMGSCA was compared against other advanced methods. Third, for the key parameter Limit in the OL strategy, we performed a number of tests to select its optimal value. Fourth, some selected real-world benchmark problems from CEC2011 (Swagatam Das & Suganthan, 2010) are used to test the performance of the
Conclusion and future directions
This paper proposed an effective method that introduces orthogonal learning, multi-swarm, and greedy selection mechanisms into the recently proposed SCA algorithm. We utilized these mechanisms to enhance the performance of this method significantly. The successful fusion of these key strategies has further balanced the extensive exploration and intensive exploitation of SCA and substantially enhanced the diversity of the swarm. Based on an extensive set of benchmark functions from CEC2014, the
CRediT authorship contribution statement
Hao Chen: Conceptualization, Methodology, Software, Writing - original draft, Investigation, Writing - review & editing. Ali Asghar Heidari: Writing - original draft, Writing - review & editing, Software, Visualization, Investigation. Xuehua Zhao: Writing - review & editing, Software, Visualization. Lejun Zhang: Writing - review & editing, Software, Visualization. Huiling Chen: Conceptualization, Funding acquisition, Resources.
Declaration of Competing Interest
The authors declare that there is no conflict of interest regarding the publication of this article.
Acknowledgement
This research is supported by the National Natural Science Foundation of China (U1809209), the Natural Science Foundation of the Jiangsu Higher Education Institutions (17KJB520044), the Six Talent Peaks Project in Jiangsu Province (XYDXX-108), the Guangdong Natural Science Foundation (2018A030313339), the Humanities and Social Sciences Youth Fund of the Ministry of Education of China (17YJCZH261), and the Scientific Research Team Project of Shenzhen Institute of Information Technology (SZIIT2019KJ022).
References (111)
- An improved opposition-based sine cosine algorithm for global optimization, Expert Systems with Applications (2017)
- Economic dispatch using chaotic bat algorithm, Energy (2016)
- PSOSCALF: A new hybrid PSO based on sine cosine algorithm and Levy flight for solving optimization problems, Applied Soft Computing (2018)
- An opposition-based sine cosine approach with local search for parameter estimation of photovoltaic models, Energy Conversion and Management (2019)
- A balanced whale optimization algorithm for constrained engineering design problems, Applied Mathematical Modelling (2019)
- A hybrid particle swarm optimizer with sine cosine acceleration coefficients, Information Sciences (2018)
- Parameters identification of solar cell models using generalized oppositional teaching learning based optimization, Energy (2016)
- Dynamic multi-swarm differential learning particle swarm optimizer, Swarm and Evolutionary Computation (2018)
- Solution of short-term hydrothermal scheduling using sine cosine algorithm, Soft Computing (2017)
- Water cycle algorithm: A novel metaheuristic optimization method for solving constrained engineering optimization problems, Computers and Structures (2012)
- Partial multi-dividing ontology learning algorithm, Information Sciences
- Nano properties analysis via fourth multiplicative ABC indicator calculating, Arabian Journal of Chemistry
- Study of biological networks using graph theory, Saudi Journal of Biological Sciences
- Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power, Information Sciences
- A hybrid self-adaptive sine cosine algorithm with opposition based learning, Expert Systems with Applications
- Harris hawks optimization: Algorithm and applications, Future Generation Computer Systems
- An effective co-evolutionary differential evolution for constrained optimization, Applied Mathematics and Computation
- ASCA-PSO: Adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local sequence alignment, Expert Systems with Applications
- A hybrid particle swarm optimization algorithm for high-dimensional problems, Computers & Industrial Engineering
- A new meta-heuristic method: Ray optimization, Computers and Structures
- A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice, Computer Methods in Applied Mechanics and Engineering
- Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm, Knowledge-Based Systems
- SCA: A sine cosine algorithm for solving optimization problems, Knowledge-Based Systems
- Salp swarm algorithm: A bio-inspired optimizer for engineering design problems, Advances in Engineering Software
- The whale optimization algorithm, Advances in Engineering Software
- Grey wolf optimizer, Advances in Engineering Software
- Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking, Applied Soft Computing Journal
- A synergy of the sine-cosine algorithm and particle swarm optimizer for improved global optimization and object tracking, Swarm and Evolutionary Computation
- MCPSO: A multi-swarm cooperative particle swarm optimizer, Applied Mathematics and Computation
- Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems, CAD Computer Aided Design
- Hybridizing sine cosine algorithm with multi-orthogonal search strategy for engineering design problems, Journal of Computational Design and Engineering
- Passing vehicle search (PVS): A novel metaheuristic algorithm, Applied Mathematical Modelling
- A multi-population harmony search algorithm with external archive for dynamic optimization problems, Information Sciences
- cPSO-CNN: An efficient PSO-based algorithm for fine-tuning hyper-parameters of convolutional neural networks, Swarm and Evolutionary Computation
- A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting, Applied Soft Computing Journal
- Particle swarm optimization using multi-level adaptation and purposeful detection operators, Information Sciences
- Enhancing the performance of biogeography-based optimization using polyphyletic migration operator and orthogonal learning, Computers and Operations Research
- Dynamic multi-swarm particle swarm optimizer with cooperative learning strategy, Applied Soft Computing Journal
- An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks, Expert Systems with Applications
- Enhanced moth-flame optimizer with mutation strategy for global optimization, Information Sciences
- Evolutionary manifold regularized stacked denoising autoencoders for gearbox fault diagnosis, Knowledge-Based Systems
- KEEL: A software tool to assess evolutionary algorithms for data mining problems, Soft Computing
- A deep evolutionary approach to bioinspired classifier optimisation for brain-machine interaction, Complexity
- Evolving an optimal kernel extreme learning machine by using an enhanced grey wolf optimization strategy, Expert Systems with Applications
- Efficient multi-population outpost fruit fly-driven optimizers: Framework and advances in support vector machines, Expert Systems with Applications
- An efficient double adaptive random spare reinforced whale optimization algorithm, Expert Systems with Applications
- An enhanced bacterial foraging optimization and its application for training kernel extreme learning machine, Applied Soft Computing
- Particle swarm optimization with an aging leader and challengers, IEEE Transactions on Evolutionary Computation
- Biogeography-based learning particle swarm optimization, Soft Computing
- Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art, Computer Methods in Applied Mechanics and Engineering
1. These authors contributed equally to this work.