Orthogonally-designed adapted grasshopper optimization: A comprehensive analysis

https://doi.org/10.1016/j.eswa.2020.113282

Highlights

  • This paper proposes an improved variant of the grasshopper optimization algorithm.

  • Orthogonal learning and chaos-based exploitative search are introduced.

  • Extensive comparisons using various datasets and benchmark problems are performed.

  • A new feature selection model is established using the proposed method.

Abstract

The grasshopper optimization algorithm (GOA) is a recently proposed meta-heuristic algorithm that simulates the foraging behavior of grasshoppers seeking food sources. Nonetheless, the basic version of GOA has some shortcomings: it may quickly drop into local optima and show slow convergence when facing complex landscapes. In this work, an improved GOA is proposed to alleviate these core shortcomings and handle continuous optimization problems more efficiently. For this purpose, two strategies, orthogonal learning and chaotic exploitation, are introduced into the conventional GOA to find a more stable trade-off between the exploration and exploitation cores. Adding orthogonal learning to GOA enhances the diversity of agents, whereas the chaotic exploitation strategy updates the positions of grasshoppers within a limited local region. To confirm the efficacy of the proposed method, we compared it with a variety of well-known classical meta-heuristic algorithms on 30 IEEE CEC2017 benchmark functions. It is also applied to feature selection cases, and three structural design problems are employed to validate its efficacy in terms of different metrics. The experimental results illustrate that the above tactics mitigate the deficiencies of GOA and that the improved variant reaches high-quality solutions for different problems.

Introduction

In recent years, several meta-heuristic algorithms (MAs) have been developed and adapted to deal with various problems (Chen, Heidari, Zhao, Zhang & Chen, 2019; Chen, Wang & Zhao, 2020; Chen, Zhang, Luo, Xu & Zhang, 2019; Deng, Zhao, Yang, Xiong, Sun & Li, 2017a; Deng, Zhao, Zou, Li, Yang & Wu, 2017b; Deng, Xu & Zhao, 2019; Luo et al., 2019; Wang & Chen, 2019; Y. Xu et al., 2019, 2019; Yu, Zhao, Wang, Chen & Li, 2020) owing to their simplicity, effectiveness, and excellent global search ability. MAs have been shown to be more effective than traditional gradient-based algorithms (Zhang, Wang, Zhou & Ma, 2019; Qiao, Tian, Tian, Yang, Wang & Zhang, 2019; Qiao and Yang, 2019, Qiao and Yang, 2019). There are many novel and traditional methods, each of which is better suited to specific kinds of problems (Zhou, Moayedi and Foong, 2020; Zhou, Moayedi, Bahiraei and Lyu, 2020). The Harris hawks optimizer (HHO) (Chen, Jiao, Wang, Heidari & Zhao, 2019; Heidari et al., 2019) is a new swarm intelligence algorithm in this field. There are also various famous MAs in the literature, for instance: particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), differential evolution (DE) (Storn & Price, 1997), bacterial foraging optimization (BFO) (Passino, 2002), artificial bee colony optimization (ABC) (Karaboga & Basturk, 2007), the firefly algorithm (FA) (Yang, 2009), the bat algorithm (BA) (Yang, 2010), the fruit fly optimization algorithm (FOA) (Pan, 2012), the flower pollination algorithm (FPA) (Yang, 2012), grey wolf optimization (GWO) (Mirjalili, Mirjalili & Lewis, 2014), the moth-flame optimization algorithm (MFO) (Mirjalili, 2015b), ant lion optimization (ALO) (Mirjalili, 2015a), the sine cosine algorithm (SCA) (Mirjalili, 2016b), the whale optimization algorithm (WOA) (Mirjalili & Lewis, 2016), the multi-verse optimization algorithm (MVO) (Mirjalili, Mirjalili & Hatamlou, 2016), the dragonfly algorithm (DA) (Mirjalili, 2016a), the salp swarm algorithm (SSA) (Mirjalili et al., 2017), the moth search algorithm (MSA) (Wang, 2018), and the grasshopper optimization algorithm (GOA) (Saremi, Mirjalili & Lewis, 2017). Among all these algorithms, GOA has been widely studied in recent years owing to its simple implementation and relatively impressive performance on complex problems.

To date, the basic GOA has been widely utilized in various fields because of its reasonably good optimization capability and simple implementation. Aljarah et al. (2018) optimized the parameters of support vector machines using GOA. Arora and Anand (2018) improved the original GOA by using chaotic maps to keep exploration and exploitation in a proper balance. Ewees, Abd Elaziz and Houssein (2018) improved GOA through an opposition-based learning strategy and evaluated it on four engineering cases. Luo et al. (2018) equipped GOA with three approaches, namely Lévy flight, opposition-based learning, and Gaussian mutation, and successfully demonstrated its predictive ability on financial stress problems. Mirjalili, Mirjalili, Saremi, Faris and Aljarah (2018) put forward a multi-objective GOA based on the original GOA and evaluated it on a group of standard multi-objective test problems. The reported results reveal that this method has substantial superiority and competitiveness.

Saxena, Shekhawat and Kumar (2018) developed a modified GOA based on ten kinds of chaotic maps. Tharwat, Houssein, Ahmed, Hassanien and Gabel (2018) designed an enhanced multi-objective GOA and observed that its results are better than those of other algorithms. Barik and Das (2018) proposed a GOA-based method for coordinating the generation and load demand of a microgrid to cope with the unpredictability of renewable energy and its dependence on natural conditions. Crawford, Soto, Peña and Astorga (2019) obtained strong results on combinatorial problems (such as the set covering problem) with an improved GOA in which a percentile concept was combined with a general binarization mechanism for continuous meta-heuristics. El-Fergany (2018) showed that it is feasible and effective to optimize the parameters of a fuel cell stack based on the search phases of GOA. Hazra, Pal and Roy (2019) presented a comprehensive method demonstrating the superiority of GOA over other algorithms in handling wind power availability when realizing the economic operation of a hybrid power system. Jumani et al. (2019) used GOA to tune a grid-connected microgrid (MG) controller; based on the performance of the controller under MG injection and sudden load changes, the superiority of GOA was demonstrated. Mafarja et al. (2018) adopted GOA as the search strategy in a wrapper-based feature selection method; the experimental results on 22 UCI datasets demonstrated the advantage of the proposed GOA methods over others. Taher, Kamel, Jurado and Ebeed (2019) proposed a modified GOA (MGOA) for the optimal power flow problem, obtained by modifying the mutation process of the traditional GOA. Wu et al. (2017) came up with an adaptive GOA (AGOA) for finding better cooperative target tracking trajectories; AGOA adds several strategies to the original GOA, such as a dynamic feedback mechanism, a survival-of-the-fittest mechanism, and a democratic selection strategy. Tumuluru and Ravi (2017) proposed GOA-based deep belief neural networks for cancer classification with improved classification accuracy, in which a logarithmic transformation and the Bhattacharyya distance were used.

Although the above GOA variants improve search capability or convergence speed, they still struggle to avoid local optima when faced with complex, high-dimensional optimization tasks. The following deductions can be drawn from the literature. First, the limited search capability makes the basic GOA prone to falling into local optima and results in slow convergence. Second, a single mutation strategy can hardly achieve the right balance between exploration and exploitation. To alleviate this situation and enhance the efficacy of this method, a revised variant of GOA named orthogonal learning and chaotic exploitation-based GOA (OLCGOA) is developed in this work. In OLCGOA, two useful strategies, orthogonal learning (OL) and chaotic exploitation (CLS), are combined into GOA. Orthogonal learning is used to enhance the ability to search the solution space, while the CLS mechanism gives the current best agent more opportunities to perform deeper exploitation in its adjacent area. In other words, the developed OLCGOA enriches the individual diversity of GOA by inserting patterns induced by orthogonal experiments into its exploratory movement, and it enhances the local search ability by adding CLS to the position-updating procedure of GOA. The performance of OLCGOA was assessed on 30 classical benchmark functions from CEC2017 (Wu, Mallipeddi & Suganthan, 2016) against several classical MAs and some advanced optimization methods. The results illustrate that the enhanced OLCGOA is superior to the basic GOA and the other MAs. Besides, OLCGOA has also been successfully validated on several well-known engineering problems and feature selection problems. The experimental results demonstrate that OLCGOA outperforms the other methods in terms of generating more competitive optimal solutions when dealing with constrained problems.

The rest of the paper is organized as follows: a brief description of GOA, OL, and CLS is provided in Section 2. Section 3 illustrates the improved GOA in detail. Section 4 describes the experimental research and simulation results. Finally, Section 5 gives a summary and outlook.

Section snippets

Grasshopper optimization algorithm (GOA)

Saremi et al. (2017) proposed a new heuristic algorithm named GOA, which mimics the aggregation and foraging behavior of grasshoppers in nature. Grasshopper populations establish relationships with each other, and the repulsion and attraction between individuals drive each grasshopper toward an advantageous position for finding the food source. Inspired by this behavior, the position of the i-th grasshopper can be mathematically defined by

$$X_i = S_i + G_i + A_i \tag{1}$$

where, in the original GOA formulation, $S_i$ denotes the social interaction, $G_i$ the gravity force, and $A_i$ the wind advection acting on the i-th grasshopper.

According to Eq. (1), the way grasshoppers search for food is influenced by
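To make the interaction model concrete, here is a minimal Python sketch of a GOA-style position update built around the social-force function s(r) = f·exp(−r/l) − exp(−r) from the original GOA paper. The parameter values f = 0.5 and l = 1.5, the use of the plain Euclidean distance between agents, and the function names are illustrative assumptions; the reference implementation additionally remaps pairwise distances to a narrow interval before applying s(·), so this sketch is not the exact code evaluated in this article.

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    """GOA social-force function: attraction/repulsion strength at distance r."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa_step(X, target, c, lb, ub, eps=1e-12):
    """One simplified GOA position update (minimization setting).

    X      : (n, d) array of grasshopper positions
    target : (d,)  best position found so far (the "food source")
    c      : decreasing coefficient that shrinks the comfort zone over iterations
    lb, ub : (d,)  lower/upper bounds of the search space
    """
    n, d = X.shape
    X_new = np.empty_like(X)
    for i in range(n):
        social = np.zeros(d)
        for j in range(n):
            if i == j:
                continue
            dist = np.linalg.norm(X[j] - X[i])
            unit = (X[j] - X[i]) / (dist + eps)           # direction from i towards j
            social += c * (ub - lb) / 2.0 * s(dist) * unit
        # total social force plus a pull towards the best-known position
        X_new[i] = np.clip(c * social + target, lb, ub)
    return X_new
```

The coefficient c is typically decreased from a value near 1 towards a small constant over the iterations, which gradually shifts the swarm from exploration to exploitation around the target.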

Proposed OLCGOA method

In this part, building on the previous section, we describe OLCGOA based on the OL and CLS strategies in detail. In OLCGOA, the grasshopper search process employs two significant strategies, namely CLS and OL, to maintain a more stable equilibrium between the exploitation and exploration cores.
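As a rough illustration of the chaotic exploitation (CLS) idea, the sketch below performs a logistic-map-driven local search around the current best agent, accepting only improving moves. The choice of map, the fixed search radius, the greedy acceptance rule, and all names and defaults are assumptions made for illustration; the exact CLS formulation used in OLCGOA is given in the full text of this section.

```python
import numpy as np

def chaotic_local_search(best, fitness, lb, ub, iters=20, radius=0.1):
    """Exploit the neighborhood of the current best agent with a logistic-map
    chaotic sequence and a greedy acceptance rule (minimization)."""
    x_best, f_best = best.copy(), fitness(best)
    rng = np.random.default_rng(42)
    z = rng.uniform(0.01, 0.99, size=best.shape)       # chaotic variables in (0, 1)
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                        # logistic map, control parameter 4
        step = radius * (ub - lb) * (2.0 * z - 1.0)    # map (0, 1) to a signed local step
        candidate = np.clip(x_best + step, lb, ub)
        f_cand = fitness(candidate)
        if f_cand < f_best:                            # keep only improvements
            x_best, f_best = candidate, f_cand
    return x_best, f_best
```

In an optimizer loop, such a routine would be invoked on the global best position after each generation, and the search radius is often shrunk as iterations proceed so that the exploitation becomes progressively finer.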

Increasing the diversity of the population can reduce the chance of falling into local optima and improve the exploratory behavior of the evolutionary algorithm. In the proposed method, the
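To illustrate how orthogonal learning can inject such diversity, the following sketch recombines two candidate solutions dimension-wise according to a two-level orthogonal array and keeps the best resulting trial vector. The array construction, the choice of parents (e.g., a grasshopper and the current global best), and the greedy selection are generic orthogonal-crossover conventions, not the authors' exact orthogonal design.

```python
import numpy as np

def two_level_oa(n_factors):
    """Build a two-level orthogonal array with M = 2^ceil(log2(n_factors + 1)) rows.

    Entry (i, j) is the parity of popcount(i AND j); the resulting columns are
    balanced and pairwise orthogonal (a strength-2 orthogonal array)."""
    M = 1
    while M < n_factors + 1:
        M *= 2
    return np.array([[bin(i & j).count("1") % 2 for j in range(1, n_factors + 1)]
                     for i in range(M)])

def orthogonal_crossover(x1, x2, fitness):
    """Recombine two solutions dimension-wise following an orthogonal array and
    return the best trial vector found (minimization)."""
    d = x1.size
    oa = two_level_oa(d)                        # row r: take x1 where 0, x2 where 1
    trials = np.where(oa == 0, x1, x2)          # (M, d) trial solutions
    candidates = np.vstack([trials, x1, x2])    # also keep the parents themselves
    scores = np.apply_along_axis(fitness, 1, candidates)
    return candidates[np.argmin(scores)]
```

In practice, OL-based optimizers usually group the D dimensions into a small number of factors so that each orthogonal experiment costs only a handful of extra fitness evaluations.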

Experimental research

In this part, the results of the proposed GOA-based technique are compared with those of other algorithms on benchmark functions, engineering applications, and feature selection problems. The experiments are conducted to validate the achieved efficacy compared to other peers. Firstly, the strategies added to GOA are tested on the 30 CEC2017 functions to verify the improvement. After that, OLCGOA, the original algorithm, and some advanced MAs are compared based on the results attained under the same test environment.

Conclusion and future works

In this study, a new variant of GOA named orthogonal learning and chaotic exploitation-based GOA (OLCGOA) is developed. In OLCGOA, two effective strategies, orthogonal learning (OL) and chaotic exploitation (CLS), are embedded in GOA. The simulation results reveal that these two mechanisms are significantly useful for further augmenting the main trends of GOA and mitigating its immature convergence drawback. Firstly, the effectiveness of the proposed OLCGOA method was demonstrated by comparing it

CRediT authorship contribution statement

Zhangze Xu: Writing - original draft, Writing - review & editing, Software, Visualization, Investigation. Zhongyi Hu: Conceptualization, Methodology, Formal analysis, Investigation, Writing - review & editing, Funding acquisition, Supervision. Ali Asghar Heidari: Writing - original draft, Writing - review & editing, Software, Visualization, Investigation. Mingjing Wang: Writing - review & editing, Software, Visualization. Xuehua Zhao: Writing - review & editing, Software, Visualization. Huiling

Declaration of Competing Interest

The authors declare that there is no conflict of interest regarding the publication of this article.

Acknowledgement

This research is supported by National Natural Science Foundation of China (U1809209), Science and Technology Plan Project of Wenzhou, China (ZY2019020, ZG2017019), and Guangdong Natural Science Foundation (2018A030313339), MOE (Ministry of Education in China) Youth Fund Project of Humanities and Social Sciences (17YJCZH261), Scientific Research Team Project of Shenzhen Institute of Information Technology (SZIIT2019KJ022), National Natural Science Foundation of China (71803136, 61471133). We

References (90)

  • H. Faris et al., An efficient binary Salp Swarm Algorithm with crossover scheme for feature selection problems, Knowledge-Based Systems (2018)
  • W. Gao et al., Partial multi-dividing ontology learning algorithm, Information Sciences (2018)
  • W. Gao et al., Nano properties analysis via fourth multiplicative ABC indicator calculating, Arabian Journal of Chemistry (2018)
  • W. Gao et al., Study of biological networks using graph theory, Saudi Journal of Biological Sciences (2018)
  • Q. He et al., An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Engineering Applications of Artificial Intelligence (2007)
  • A.A. Heidari et al., Harris hawks optimization: Algorithm and applications, Future Generation Computer Systems (2019)
  • D. Jia et al., An effective memetic differential evolution algorithm based on chaotic local search, Information Sciences (2011)
  • J. Luo et al., Multi-strategy boosted mutative whale-inspired optimization approaches, Applied Mathematical Modelling (2019)
  • J. Luo et al., An improved grasshopper optimization algorithm with application to financial stress prediction, Applied Mathematical Modelling (2018)
  • M. Mafarja et al., Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems, Knowledge-Based Systems (2018)
  • S. Mirjalili, The ant lion optimizer, Advances in Engineering Software (2015)
  • S. Mirjalili, Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm, Knowledge-Based Systems (2015)
  • S. Mirjalili, SCA: A sine cosine algorithm for solving optimization problems, Knowledge-Based Systems (2016)
  • S. Mirjalili et al., Salp swarm algorithm: A bio-inspired optimizer for engineering design problems, Advances in Engineering Software (2017)
  • S. Mirjalili et al., S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization, Swarm and Evolutionary Computation (2013)
  • S. Mirjalili et al., The whale optimization algorithm, Advances in Engineering Software (2016)
  • S. Mirjalili et al., Grey wolf optimizer, Advances in Engineering Software (2014)
  • H. Nenavath et al., Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking, Applied Soft Computing Journal (2018)
  • W.T. Pan, A new fruit fly optimization algorithm: Taking the financial distress model as an example, Knowledge-Based Systems (2012)
  • S. Saremi et al., Grasshopper optimisation algorithm: Theory and application, Advances in Engineering Software (2017)
  • R.P. Singh et al., Particle swarm optimization with an aging leader and challengers algorithm for the solution of optimal power flow problem, Applied Soft Computing Journal (2016)
  • J. Wu et al., Distributed trajectory optimization for multiple solar-powered UAVs target tracking in urban environment by adaptive grasshopper optimization algorithm, Aerospace Science and Technology (2017)
  • Y. Xu et al., An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks, Expert Systems with Applications (2019)
  • Y. Xu et al., Enhanced Moth-flame optimizer with mutation strategy for global optimization, Information Sciences (2019)
  • H. Yu et al., Chaos-enhanced synchronized bat optimizer, Applied Mathematical Modelling (2020)
  • G. Zhou et al., Employing artificial bee colony and particle swarm techniques for optimizing a neural network in prediction of heating and cooling loads of residential buildings, Journal of Cleaner Production (2020)
  • J. Alcalá-Fdez et al., KEEL: A software tool to assess evolutionary algorithms for data mining problems, Soft Computing (2009)
  • I. Aljarah et al., Simultaneous feature selection and support vector machine optimization using the grasshopper optimization algorithm, Cognitive Computation (2018)
  • S. Arora et al., Chaotic grasshopper optimization algorithm for global optimization, Neural Computing and Applications (2018)
  • A.K. Barik et al., Expeditious frequency control of solar photovoltaic/biogas/biodiesel generator based isolated renewable microgrid using grasshopper optimisation algorithm, IET Renewable Power Generation (2018)
  • Y. Cao et al., Comprehensive learning particle swarm optimization algorithm with local search for multimodal functions, IEEE Transactions on Evolutionary Computation (2018)
  • H. Chen et al., Advanced orthogonal learning-driven multi-swarm sine cosine optimization: Framework and case studies, Expert Systems with Applications (2019)
  • H. Chen et al., Parameters identification of photovoltaic cells and modules using diversification-enriched Harris hawks optimization with chaotic drifts, Journal of Cleaner Production (2019)
  • H. Chen et al., An enhanced bacterial foraging optimization and its application for training kernel extreme learning machine, Applied Soft Computing (2019)
  • X. Chen et al., An improved particle swarm optimization with biogeography-based learning strategy for economic dispatch problems, Complexity (2018)
1 These authors contributed equally to this work.
