Applied Soft Computing

Volume 93, August 2020, 106367
A memory-based Grey Wolf Optimizer for global optimization tasks

https://doi.org/10.1016/j.asoc.2020.106367

Highlights

  • A modified Grey Wolf Optimizer (mGWO) is proposed for global optimization.

  • The mGWO is validated on standard IEEE CEC 2014 and IEEE CEC 2017 benchmark problems.

  • The mGWO algorithm is used for solving engineering design and thresholding problems.

  • Comparison with other algorithms illustrates the effectiveness of the mGWO.

Abstract

Grey Wolf Optimizer (GWO) is a nature-inspired metaheuristic algorithm based on the leadership hierarchy and social behaviour of grey wolves in nature. It has shown potential in several real-life applications, but on some complex optimization tasks it may still get trapped at local optima and converge prematurely. Therefore, in this study, to prevent these drawbacks and to achieve a more stable balance between exploitation and exploration, a new modified GWO called the memory-based Grey Wolf Optimizer (mGWO) is proposed. In the mGWO, the search mechanism of the wolves is modified based on the personal best history of each individual wolf, crossover, and greedy selection. These strategies help to enhance global exploration, local exploitation, and an appropriate balance between them during the search procedure. To investigate the effectiveness of the proposed mGWO, it has been tested on the standard and complex benchmarks given in IEEE CEC 2014 and IEEE CEC 2017. Furthermore, some real engineering design problems and the multilevel thresholding problem are also solved using the mGWO. The analysis of the results and their comparison with other algorithms demonstrate the better search efficiency, solution accuracy, and convergence rate of the proposed mGWO in performing global optimization tasks.

Introduction

Mathematical optimization refers to finding an element within an available domain for a given function such that the function value is maximum or minimum. Optimization problems exist in all fields of study, and thus the development of optimization methods is essential. Most conventional optimization methods are deterministic and based on the derivative information of the functions involved in the problem. But, in real life, it is not always possible to calculate the derivatives of these functions. Nature-inspired algorithms are becoming increasingly popular due to their derivative-free mechanism and excellent optimization ability. Flexibility, easy implementation, and the ability to escape from local optima are other advantages of these algorithms.

In the category of nature-inspired algorithms, swarm intelligence based algorithms are quite popular. Their main advantage is the collaboration among search agents. This collaboration helps in exchanging the information available to the search agents and contributes to exploiting and exploring the promising areas of the solution space. Exploration is the ability to discover new search regions, and exploitation is the process of evaluating the potential of the available promising areas of the solution space. Exploration and exploitation are considered the two foundational factors of any nature-inspired algorithm. These factors are conflicting [1], and therefore maintaining an appropriate balance between them is a challenging task for the proper working of an algorithm. Thus, the main advantages of swarm intelligence based algorithms are

  • 1.

    Swarm intelligence based algorithms preserve the potential of solution space during the iterations, which helps in discovering new promising regions.

  • 2.

    Swarm intelligence based algorithms have fewer parameters to adjust.

  • 3.

    Swarm intelligence based algorithms are easy to implement.

  • 4.

    The collaborative strength among individual search agents helps to avoid the situation of getting trapped at local optima.

A large number of swarm intelligence based algorithms have been developed in the literature and successfully applied to solve several real-life application problems, for example PSO [2], ABC [3], ACO [4], CS [5], BA [6], and FA [7]. Some other algorithms recently added to this field are GWO [8], MFO [9], WOA [10], SSA [11], SS [12], HHO [13], and many others.

The present article focuses on the GWO developed by Mirjalili, Mirjalili, and Lewis [8]. The reason for working with the GWO is its distinctive search mechanism, which is based on the leadership hierarchy of grey wolves. In the GWO, three different wolves (alpha, beta and delta) are used to guide the search activities. This leadership-based search mechanism maintains an appropriate level of exploration in the algorithm and prevents the search agents from stagnating at local optima. Over the past few years, GWO has become quite popular. In the literature, GWO has been successfully used to solve several real-life application problems such as the economic dispatch problem [14], [15], training of q-Gaussian radial basis functions [16], the power dispatch problem [17], scheduling problems [18], parameter estimation in surface waves [19], feature selection [20], and many others.

Despite the successful applicability of the GWO to several real-life problems arising in different fields, several works [21], [22] have criticized that its performance degrades when applied to highly multimodal problems. In order to deal with such problems and to improve solution accuracy, several attempts have been made in the literature. Some of them are pointed out as follows:

In the GWO, the control parameter a plays an important role in the transition from the exploration process to the exploitation process. Therefore, non-linear formulations of this parameter have been adopted in [23], [24], [25] to establish an appropriate balance between exploitation and exploration, and to model the non-linear nature of the GWO search procedure. Although these variants have performed well on the test problems considered in those works, in some cases, especially on multimodal problems, they suffer from excessive diversity. This high diversity is the main cause of skipping true solutions during the search process. In order to speed up the convergence rate of the GWO, opposition-based learning is employed in [26], [27], [28], [29]. Opposition-based learning is effective when the optimum lies far away from the currently explored regions. The search based on the current solutions and their opposites improves search performance, but it demands high computational effort, as it requires twice as many function evaluations as the population size. Lévy-flight based search [21] and random walk search mechanisms [30], [31] have also been employed in the GWO to enhance its local exploitation and global exploration skills. These strategies occasionally generate long jumps, which can cause true solutions to be skipped. The search equation of the GWO, which takes the mean of the positions obtained through the guidance of the leading wolves, has also been modified to improve the search mechanism [32], [33]. These variants focus on exploiting only the elite areas found by the wolves, and insufficient diversity is therefore their main disadvantage. This degrades the performance of the algorithm on multimodal problems with a massive number of optima. The tuning of the GWO parameters also plays a key role in determining their best settings.
In this direction, fuzzy logic has been used in the GWO to adapt these parameters dynamically [34], [35]. In order to combine the advantages of different algorithms and thereby maintain a comparatively better balance between exploitation and exploration, the GWO has been hybridized with the SCA [36] and DE [37]. The hybridization with the SCA utilizes the exploration ability of the SCA, but since the SCA suffers from high diversity in its search mechanism [38], the hybridized algorithm may experience the same problem. The hybridization with DE likewise adds the exploration ability of DE to the GWO. This variant is not able to increase the exploitation skills, and therefore its results on unimodal problems are not good enough. In [39], various selection schemes are introduced into the GWO to analyse its performance. The results of that study show that tournament-based selection is the best among the tested schemes for selecting the leaders of the pack, but they also indicate low exploration and exploitation in the algorithm. Chaotic maps have also been used in the GWO to increase the randomness during the search procedure [40], [41]. This randomness perturbs the wolves in a chaotic manner, and the algorithm may therefore skip true solutions during the search. In [42], different search strategies, namely enhancement of the global search ability, a cooperative hunting strategy, and a disperse foraging strategy, are employed in the GWO. The first strategy helps to increase local exploitation around the best obtained solution; the second increases diversity by alternating between one-dimensional and all-dimensional updates; the third avoids local optima during the search procedure and speeds up the convergence rate. However, this variant uses extra parameters, and tuning them is another tedious task for users.
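To make the role of the control parameter a concrete, the following is a minimal sketch (with an assumed iteration budget T) contrasting the standard linear decrease of a from 2 to 0 with one illustrative quadratic alternative; the cited variants [23], [24], [25] each define their own non-linear formula.

```python
T = 500  # assumed maximum number of iterations (illustrative value)

def a_linear(t, T):
    # standard GWO: a decreases linearly from 2 to 0 over the run
    return 2.0 * (1.0 - t / T)

def a_nonlinear(t, T):
    # one illustrative non-linear schedule (quadratic decay):
    # a stays larger for longer, prolonging the exploration phase
    return 2.0 * (1.0 - (t / T) ** 2)

for t in (0, 250, 500):
    print(t, a_linear(t, T), a_nonlinear(t, T))
```

At mid-run (t = 250) the quadratic schedule yields a = 1.5 versus 1.0 for the linear one, which is the sense in which such schedules delay the switch from exploration to exploitation.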

Although the above modifications have tried to improve the search performance and solution accuracy of the GWO, in some situations, such as more complex optimization problems, the algorithm still suffers from stagnation at local optima. Another reason for modifying the GWO follows from the "No Free Lunch" theorem [43], which states that no single optimization algorithm is available, or can be developed, that is suitable for all optimization problems. Hence, this theorem opens the area of research on improving the search mechanisms of existing algorithms. Motivated by these facts, the present study proposes a modified GWO called the Memory-based Grey Wolf Optimizer (mGWO). To the best of our knowledge, this modified variant is the first version of the GWO in which the leading and personal best guidance of the wolves are used simultaneously to direct the search procedure.

The mGWO algorithm tries to enhance the collaborative strength and exploration ability of the wolves by employing four different strategies. In the first strategy, the information collected by each wolf over the history of its search is utilized to modify its hunting mechanism; this information concerns the quality of the search regions it has visited. In the second strategy, an additional search mechanism is proposed to incorporate the information available to each wolf. This strategy enhances the exploration strength of the algorithm through personal-best-guided search. The third strategy applies crossover between the positions obtained by the modified hunting mechanism and a novel search equation based on personal best guidance. This crossover helps to preserve the features of both the leading and the personal best guidance. In the fourth strategy, a greedy selection is applied between the positions of the current and previous iterations to preserve the information about the best areas of the solution space found so far and to avoid the divergence of the wolves from the available promising areas of the search space. To investigate the effectiveness of the proposed mGWO, it has been tested on the standard and complex benchmarks given in IEEE CEC 2014 [44] and IEEE CEC 2017 [45]. The efficacy of the proposed mGWO algorithm is evaluated on these benchmark problems using statistical, convergence, and diversity analyses. In the paper, the mGWO is also used to solve real engineering problems and to find optimal thresholds for the segmentation of grey images.
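The four strategies above can be sketched as one schematic iteration. This is a simplified illustration on the sphere function, not the authors' exact method: the form of the personal-best-guided move is an assumption for illustration, and the precise update equations are given in Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # toy objective used only for this illustration
    return float(np.sum(x ** 2))

def mgwo_step(wolves, pbest, fitness, leaders, a, Cr=0.5):
    """One schematic mGWO-style iteration: leader-guided hunting,
    a personal-best-guided move, crossover between the two candidates,
    and greedy selection against the previous position."""
    alpha, beta, delta = leaders
    new_wolves = wolves.copy()
    for i in range(len(wolves)):
        X = wolves[i]
        # strategy 1: leader-guided move (classical hunting mechanism)
        cand = np.zeros_like(X)
        for L in (alpha, beta, delta):
            A = 2 * a * rng.random(X.size) - a
            C = 2 * rng.random(X.size)
            cand += L - A * np.abs(C * L - X)
        cand /= 3.0
        # strategy 2: personal-best-guided move (assumed illustrative form)
        pb_move = pbest[i] + rng.random(X.size) * (alpha - X)
        # strategy 3: crossover between the two candidate positions
        mask = rng.random(X.size) < Cr
        trial = np.where(mask, cand, pb_move)
        # strategy 4: greedy selection keeps the better of trial/current
        f_trial = sphere(trial)
        if f_trial < fitness[i]:
            new_wolves[i] = trial
            fitness[i] = f_trial
            if f_trial < sphere(pbest[i]):
                pbest[i] = trial
    return new_wolves, pbest, fitness
```

Because of the greedy selection in strategy 4, no wolf's fitness can worsen between iterations, which is what preserves the promising areas already found.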

The rest of the paper is structured as follows: Section 2 describes the classical version of GWO. Section 3 presents the main inspiration for this paper and proposes the Memory-based Grey Wolf Optimizer (mGWO). The experimental results of the mGWO algorithm on benchmark problems are presented and analysed in Section 4. Section 5 uses the mGWO for solving real engineering problems. In Section 6, the mGWO is used for multilevel thresholding of grey images. Finally, Section 7 concludes the work and discusses possible future research.

Overview of GWO

The GWO algorithm [8] mimics the social interaction and dominant leadership behaviour of the grey wolf pack. Grey wolves prefer to live in a group (pack), where the leadership behaviour and discipline are maintained by dividing the wolves into four types, namely —

  • 1.

    Alpha wolf — the most dominant wolf in the pack, which is responsible for all the crucial and major decisions in the pack

  • 2.

    Beta wolf — the secondary wolf, which acts as a leading wolf in the absence of alpha

  • 3.
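The classical leader-guided position update of [8] can be sketched as follows; this is a minimal illustration, where a is the control parameter that decreases from 2 to 0 over the iterations.

```python
import numpy as np

rng = np.random.default_rng(42)

def gwo_update(X, alpha, beta, delta, a):
    """Classical GWO position update: each wolf moves toward the mean
    of three positions dictated by the alpha, beta and delta leaders."""
    guided = []
    for L in (alpha, beta, delta):
        A = 2 * a * rng.random(X.size) - a   # A uniform in [-a, a]
        C = 2 * rng.random(X.size)           # C uniform in [0, 2]
        D = np.abs(C * L - X)                # distance to the leader
        guided.append(L - A * D)
    return np.mean(guided, axis=0)

# a single wolf guided by three (here coincident) leader positions
X = np.array([4.0, -3.0])
alpha = beta = delta = np.zeros(2)
print(gwo_update(X, alpha, beta, delta, a=1.0))
```

Note that with |A| ≤ a, the updated position stays within a·D of each leader, which is why shrinking a over time shifts the pack from exploration to exploitation around the leaders.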

Motivation of the work

The search equation of the standard GWO indicates the dependency of the search directions on the leading wolves only. The leading wolves sometimes get trapped at local solutions, particularly in multimodal problems where a large number of valleys are present. In the standard GWO, when the leading wolves get trapped at these local optima, it is difficult for the pack to escape from them because of the dependency of the search on the leading wolves. The stagnation at local optima is the cause of

Benchmark suites and parameter setting

A benchmark suite is a collection of test problems used for the evaluation, characterization, and measurement of the performance of an optimization method. In the present paper, the search efficiency of the proposed Memory-based GWO (mGWO) algorithm is examined on the standard and complex benchmark sets CEC 2014 [44] and CEC 2017 [45]. Each of these benchmark sets contains 30 test problems with varying difficulty levels. These benchmark sets have four categories of functions — unimodal,

Engineering optimization problems

In this section, the efficiency of the proposed mGWO algorithm is investigated on real engineering optimization problems. In these problems, constraints are handled by a simple constraint handling mechanism, which evaluates the constraint violation [56] value corresponding to each candidate solution (wolf). A general constrained optimization problem is

Minimize F(X), X = (x_1, x_2, …, x_D) ∈ R^D
s.t. G_j(X) ≤ 0, j = 1, 2, …, J (inequality constraints)
     H_k(X) = 0, k = 1, 2, …, K (equality constraints)
     lb_i ≤ x_i ≤ ub_i, i = 1, 2, …, D
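A minimal sketch of such a violation-based comparison, in the spirit of Deb's feasibility rules [56], is given below; the constraint g is a hypothetical example used only for illustration.

```python
import numpy as np

def constraint_violation(x, ineq=(), eq=(), eps=1e-4):
    """Total constraint violation: the sum of the positive parts of the
    G_j(x) plus the amount by which each |H_k(x)| exceeds a tolerance."""
    cv = sum(max(0.0, g(x)) for g in ineq)
    cv += sum(max(0.0, abs(h(x)) - eps) for h in eq)
    return cv

def better(f1, cv1, f2, cv2):
    """Feasibility-based comparison: a feasible solution beats an
    infeasible one; among infeasible ones, smaller violation wins;
    among feasible ones, smaller objective wins."""
    if cv1 == 0 and cv2 == 0:
        return f1 <= f2
    if cv1 == 0 or cv2 == 0:
        return cv1 == 0
    return cv1 <= cv2

# hypothetical problem: minimize x0 + x1 s.t. x0*x1 >= 1, i.e. 1 - x0*x1 <= 0
g = lambda x: 1.0 - x[0] * x[1]
print(constraint_violation(np.array([2.0, 1.0]), ineq=[g]))  # feasible -> 0.0
```

This comparison rule needs no penalty coefficient, which is what makes it a simple mechanism to pair with a greedy selection step.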

Image thresholding/segmentation

In this section, the proposed mGWO algorithm is employed to find the optimal thresholds for image segmentation. Multilevel thresholding is useful for achieving this objective. However, when the histogram of an image consists of many peaks with different heights and wide valleys, multilevel thresholding becomes a time-consuming and difficult task. In this situation, there is a higher chance of any search algorithm getting trapped at local optima. In the literature, researchers have attempted to
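As an illustration of the kind of objective such a search optimizes, the following sketches the classical Otsu between-class-variance criterion for multilevel thresholding, a common fitness choice; the paper's exact objective is defined in Section 6.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style objective: the thresholds split the grey-level
    histogram into classes, and we score the weighted between-class
    variance (to be maximized by the search algorithm)."""
    p = hist / hist.sum()                     # normalized histogram
    levels = np.arange(len(hist))
    edges = [0] + sorted(thresholds) + [len(hist)]
    mu_T = np.sum(levels * p)                 # global mean grey level
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                    # class probability
        if w > 0:
            mu = np.sum(levels[lo:hi] * p[lo:hi]) / w
            var += w * (mu - mu_T) ** 2
    return var

# a toy bimodal histogram: a threshold between the two modes scores higher
hist = np.array([10, 30, 10, 0, 0, 10, 30, 10], dtype=float)
print(between_class_variance(hist, [4]), between_class_variance(hist, [2]))
```

With k thresholds over 256 grey levels the search space grows combinatorially, which is why metaheuristics such as the mGWO are used instead of exhaustive enumeration.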

Conclusions

The present study proposes a modified version of the GWO called the Memory-based GWO (mGWO). In the mGWO, the personal best history of each wolf and the elite wolves of the pack are simultaneously used to update the wolves' positions. Crossover and greedy selection mechanisms are also employed during the search to include the contribution of each wolf and to avoid ignoring the promising areas of the search space already obtained. These strategies enhance the collaborative strength of the pack and maintain and

CRediT authorship contribution statement

Shubham Gupta: Conceptualization, Methodology, Writing - original draft, Validation, Writing - review & editing. Kusum Deep: Writing - review & editing, Visualization, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

The first author would like to thank the Ministry of Human Resources, Government of India, for funding this research (Grant No. MHR-02-41-113-429).

References (67)

  • Song, X., et al., Grey wolf optimizer for parameter estimation in surface waves, Soil Dyn. Earthq. Eng. (2015)
  • Emary, E., et al., Binary grey wolf optimization approaches for feature selection, Neurocomputing (2016)
  • Heidari, A.A., et al., An efficient modified grey wolf optimizer with Lévy flight for optimization tasks, Appl. Soft Comput. (2017)
  • Long, W., et al., An exploration-enhanced grey wolf optimizer to solve high-dimensional numerical optimization, Eng. Appl. Artif. Intell. (2018)
  • Ibrahim, R.A., et al., Chaotic opposition-based grey-wolf optimization algorithm based on differential evolution and disruption operator for global optimization, Expert Syst. Appl. (2018)
  • Saxena, A., et al., Intelligent grey wolf optimizer–development and application for strategic bidding in uniform price spot energy market, Appl. Soft Comput. (2018)
  • Gupta, S., et al., A novel random walk grey wolf optimizer, Swarm Evol. Comput. (2019)
  • Rodríguez, L., et al., A fuzzy hierarchical operator in the grey wolf optimizer algorithm, Appl. Soft Comput. (2017)
  • Long, W., et al., Inspired grey wolf optimizer for solving large-scale function optimization problems, Appl. Math. Model. (2018)
  • Singh, N., et al., A novel hybrid GWO-SCA approach for optimization problems, Eng. Sci. Technol. Int. J. (2017)
  • Gupta, S., et al., Improved sine cosine algorithm with crossover scheme for global optimization, Knowl.-Based Syst. (2019)
  • Kohli, M., et al., Chaotic grey wolf optimization algorithm for constrained optimization problems, J. Comput. Des. Eng. (2018)
  • Tu, Q., et al., Multi-strategy ensemble grey wolf optimizer and its application to feature selection, Appl. Soft Comput. (2019)
  • Muro, C., et al., Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations, Behav. Process. (2011)
  • Derrac, J., et al., A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. (2011)
  • Rashedi, E., et al., GSA: a gravitational search algorithm, Inform. Sci. (2009)
  • Gao, W., et al., A global best artificial bee colony algorithm for global optimization, J. Comput. Appl. Math. (2012)
  • Deb, K., An efficient constraint handling method for genetic algorithms, Comput. Methods Appl. Mech. Engrg. (2000)
  • Mlakar, U., et al., A hybrid differential evolution for optimal multilevel image thresholding, Expert Syst. Appl. (2016)
  • Sathya, P.D., et al., Optimal multilevel thresholding using bacterial foraging algorithm, Expert Syst. Appl. (2011)
  • Hussein, W.A., et al., A fast scheme for multilevel thresholding based on a modified bees algorithm, Knowl.-Based Syst. (2016)
  • Dey, S., et al., Multi-level thresholding using quantum inspired meta-heuristics, Knowl.-Based Syst. (2014)
  • Črepinšek, M., et al., Exploration and exploitation in evolutionary algorithms: A survey, ACM Comput. Surv. (2013)