Dynamically Dimensioned Search Grey Wolf Optimizer Based on Positional Interaction Information

The grey wolf optimizer (GWO) algorithm is a recently developed, population-based optimization technique that is inspired by the hunting mechanism of grey wolves. The GWO algorithm has some distinct advantages, such as few algorithm parameters, strong global optimization ability, and ease of implementation on a computer. However, its paramount challenge is that there are cases in which the GWO is prone to stagnation in local optima. This drawback may be attributed to an insufficiency in its position-updated equation, which disregards the positional interaction information about the three best grey wolves (i.e., the three leaders). This paper proposes an improved version of the GWO algorithm that is based on a dynamically dimensioned search, a spiral walking predation technique, and positional interaction information (referred to as the DGWO). In addition, a nonlinear control parameter strategy, i.e., a control parameter that is nonlinearly increased with the number of iterations, is designed to balance the exploration and exploitation of the GWO algorithm. The experimental results for 23 general benchmark functions and 3 well-known engineering optimization design applications validate the effectiveness and feasibility of the proposed DGWO algorithm. The comparison results for the 23 benchmark functions show that the proposed DGWO algorithm performs significantly better than the GWO and its improved variants for most benchmarks. The DGWO provides the highest solution precision, strongest robustness, and fastest convergence rate among the compared algorithms in almost all cases.


Introduction
The rapid development of artificial intelligence (AI) is primarily attributed to the considerable progress of computational intelligence (CI). CI that is based on complex systems mainly consists of two categories [1], i.e., single-solution-based metaheuristics and population-based metaheuristics. Both single-solution-based algorithms and population-based algorithms employ a variety of mechanisms and are designed to solve extremely challenging problems in different complex systems.
Single-solution-based metaheuristics are usually only suitable for specific complex optimization problems because of their single particle scale and weak coordination capability. A heuristic based on simulated annealing (SA) is designed to solve the machine reassignment problem [2]. The threshold-accepting (TA) metaheuristic method is applied to solve the job shop scheduling problem of dehydration plants [3]. The microcanonical annealing (MA) algorithm is proposed for remote sensing image segmentation [4]. The tabu search (TS) metaheuristic is combined with a regenerator-reducing procedure to solve the regenerator location problem [5]. The guided local search (GLS) approach is introduced for multiuser detection in ultra-wideband systems [6]. The dynamically dimensioned search (DDS) is introduced for automatic calibration in watershed simulation models [7][8][9][10][11].
Compared with single-solution-based algorithms, the research and applications of population-based algorithms are more extensive because of the following three main advantages [11,12]: more information can be obtained to guide the trial solutions toward a promising area within the search space by a set of trial solutions; local optima can be effectively avoided because of the interaction of a set of trial solutions; and in terms of exploration ability, population-based heuristic algorithms are superior to single-solution-based heuristic algorithms. The genetic algorithm (GA) is used to address the characterization of hyperelastic materials [12,13]. Particle swarm optimization (PSO) is applied to improve and evaluate the performance of automated engineering design optimization [13,14]. Differential evolution (DE) is presented for mobile robots to avoid obstacles [14,15]. The dragonfly algorithm (DA) has been improved to train multilayer perceptrons [15,16]. Shuffled complex evolution (SCE) is designed to optimize the load balancing of gateways in wireless sensor networks [16,17]. The dolphin echolocation algorithm (DEA) is applied to design a steel frame structure [17,18]. The bat algorithm (BA) is introduced to optimize the placement of a steel plate shear wall [18,19], and the artificial bee colony (ABC) algorithm is applied to image steganalysis [19,20].
The grey wolf optimizer (GWO) is adopted for parameter estimation in surface waves [20,21].
The GWO is one of the most impressive swarm intelligence algorithms and is the only algorithm that is based on leadership hierarchy theory; it was introduced by Mirjalili et al. [22].
The GWO algorithm has three advantages [23,24]: it has universal applicability to some real-life optimization problems; it is insensitive to derived information in the initial search; and it requires fewer algorithm parameters for adjustment. These features render it a simple, flexible, adaptable, usable, and stable algorithm [24,25]. Therefore, since the GWO was proposed, researchers have conducted a considerable amount of in-depth research and applications. Regarding improvements to the GWO algorithm, researchers tend to improve its performance from four aspects [25]: position-updating mechanisms, new control parameters, the encoding scheme of individuals, and the population structure and hierarchy. Typical study cases are listed as follows: Mittal et al. [26] used an exponential decay function a to enhance the exploration process in the GWO. However, this algorithm suffers from premature convergence. Kishor and Singh [27] proposed a modified version of the GWO by incorporating a simple crossover operator between two randomly selected individuals. However, this technique has low capacity for solving high-dimensional complex problems. A complex-valued encoding strategy was employed by Luo et al. [28] to substitute the typical real-valued strategy that was adopted in the standard GWO and propose a complex-encoded GWO.
The main shortcoming of this method is that it suffers from premature convergence. Yang et al. [29] used an effective cooperative hunting group and a random scout group strategy to propose a novel grouped grey wolf optimizer.
This approach employs a complex mechanism. Xu et al. [30] proposed a chaotic dynamic weight grey wolf optimizer (CDGWO), in which a new position-updated equation, formed by employing a chaotic map and dynamic weights, was built to guide the search process for potential candidate solutions. Gupta and Deep [23] proposed a novel random walk grey wolf optimizer (RW-GWO); in the RW-GWO, a random walk strategy was used to improve the search ability of the GWO. However, it shows low solution accuracy. In addition, an improved grey wolf optimization (VW-GWO) algorithm based on variable weight strategies and the social hierarchy in the searching positions was presented by Gao and Zhao [31]. However, it employs a complex methodology. In terms of successful applications of the GWO, representative application research can be summarized as flow shop scheduling [32], machine learning [33][34][35][36], economic load dispatch [37], robotics and path planning [38,39], channel estimation in wireless communication systems [40], and other applications detailed in References [24,25]. Theoretical and practical research has shown the potential of the GWO algorithm in real life. However, numerous studies and experimental results have concluded that the optimization performance of the GWO algorithm needs improvement. Specifically, the trial solution diversity would be hampered by the three best wolves that were identified in the accumulative search [12]. Many metaheuristics, such as the GWO, can be easily trapped in local optima when solving multimodal optimization problems, where multiple global optimum solutions exist [41], and the linear control parameter strategy is not the perfect design for balancing exploration and exploitation. These drawbacks may lead to undesirable optimization performance [27,42]. In addition, existing research on the GWO algorithm does not discuss improvements in its performance by considering the positional interaction information among
the three leaders (i.e., the first three best wolves). In the actual hunting process, however, better predation efficiency can be obtained only when positional information is communicated among the three leaders. In this paper, the positional interaction information refers to the information communication among the three leaders in their predation process, as reflected by the relative change in position. In addition to not considering the positional interaction information among the three leaders, existing research does not explore other predation methods, such as spiral walking hunting, which may help hunting and increase the chance of jumping out of local optima for the GWO algorithm. In summary, the GWO algorithm is a strong algorithm but suffers from the abovementioned shortcomings. Considering these drawbacks, this paper sets out to improve upon the GWO algorithm.
Based on this analysis, this paper improves the GWO algorithm from the following three aspects: a hunting model is built based on spiral walking, a position-updated equation is rebuilt based on the positional interaction information among the three leaders of the grey wolves, and a nonlinear control parameter is designed to replace the linear control parameter of the standard GWO algorithm. The proposed algorithm is tested on 23 classical benchmark problems, the CEC2014 suite, and three well-known engineering optimization problems. The experimental results reveal that the proposed method is robust, efficient, and superior compared to other algorithms.
The remaining sections of the paper are organized as follows: The original GWO algorithm and DDS are briefly overviewed in Section 2. In Section 3, the dynamically dimensioned search grey wolf optimizer, which is based on the deep search strategy (DGWO), is proposed. The principle of searching the GWO by the DDS is detailed, and the position-updated equations, which are based on the deep search strategy, and the nonlinear control parameter equation are constructed. Section 4 provides the experimental results and a discussion of a set of well-known test functions. This paper is concluded and future research directions are presented in Section 5.

Overview of GWO and DDS
2.1. Standard GWO Algorithm. In this section, the four parts of the basic GWO algorithm [22], inspired by the complete hunting process of grey wolves, are described.

2.1.1. Foundation of the Social Hierarchy.
In the GWO algorithm, the search is executed under the joint guidance of the first three best grey wolves (i.e., α, β, and δ), and the positions of the ω grey wolves (solutions) are constantly adjusted under the guidance of these three leaders as the iteration number increases.

2.1.2. Encircling Prey.
Grey wolves hunt their prey by encircling them, which is considered to be wise behavior. To describe the principle of this predation from the perspective of a mathematical model, Mirjalili et al. [22] constructed the following equations:

\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|, (1)

\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}, (2)

\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}, (3)

\vec{C} = 2\vec{r}_2, (4)

where t is the current number of iterations, \vec{X}(t+1) is the position vector of the grey wolf at the (t+1)th iteration, the symbol "·" indicates the dot product, \vec{X}_p(t) is the position vector of the prey at the tth iteration, \vec{D} is a vector that is relative to the position of the prey \vec{X}_p(t), \vec{A} and \vec{C} are the coefficient vectors, \vec{a} is a vector whose values are linearly decreased from 2 to 0 over the iterations, and \vec{r}_1 and \vec{r}_2 are randomly generated vectors whose values lie between 0 and 1.
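The encircling-prey update above can be sketched in Python (a minimal illustration, not the authors' MATLAB implementation; the function and variable names are our own):

```python
import numpy as np

def encircle(X, X_p, a, rng):
    """One encircling-prey step for a single grey wolf.

    X   : current position vector of the wolf
    X_p : position vector of the prey (best solution found so far)
    a   : control parameter, linearly decreased from 2 to 0 over iterations
    """
    r1 = rng.random(X.shape)      # r1, r2 ~ U(0, 1)
    r2 = rng.random(X.shape)
    A = 2.0 * a * r1 - a          # A = 2*a*r1 - a, components in [-a, a]
    C = 2.0 * r2                  # C = 2*r2, components in [0, 2]
    D = np.abs(C * X_p - X)       # distance to a stochastically weighted prey
    return X_p - A * D            # new position X(t+1)

rng = np.random.default_rng(0)
X_next = encircle(np.zeros(5), np.ones(5), a=1.5, rng=rng)
```

As a shrinks toward 0, so does A, and the update collapses onto the prey position, which is the exploitation end of the spectrum.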

2.1.3. Hunting.
As described in Section 2.1.2, the action of the grey wolves that encircle their prey provides the leaders of the grey wolf group with the necessary position information and forces the prey into promising areas. After the leaders of the grey wolf group receive the position information about the prey, the next step is to guide the omega (ω) wolves to conduct the hunting. To describe the hunting behaviors of grey wolves from the perspective of a mathematical model, we assume that \vec{X}_α, \vec{X}_β, and \vec{X}_δ represent the positions of the α wolf, β wolf, and δ wolf, respectively. Therefore, the mathematical models for grey wolf hunting are described as follows:

\vec{X}_1 = \vec{X}_α - \vec{A}_1 \cdot \vec{D}_α, (5)

\vec{X}_2 = \vec{X}_β - \vec{A}_2 \cdot \vec{D}_β, (6)

\vec{X}_3 = \vec{X}_δ - \vec{A}_3 \cdot \vec{D}_δ, (7)

\vec{X}(t+1) = (\vec{X}_1 + \vec{X}_2 + \vec{X}_3)/3, (8)

where \vec{D}_α, \vec{D}_β, and \vec{D}_δ are calculated using equation (1) as follows:

\vec{D}_α = |\vec{C}_1 \cdot \vec{X}_α - \vec{X}|, (9)

\vec{D}_β = |\vec{C}_2 \cdot \vec{X}_β - \vec{X}|, (10)

\vec{D}_δ = |\vec{C}_3 \cdot \vec{X}_δ - \vec{X}|. (11)

2.1.4. Attacking Prey.

In the GWO algorithm, the behaviors of grey wolves that attack their prey are controlled by constant changes in the value of the linear control parameter \vec{a}. According to equation (3), the expression of the vector \vec{A} is correlated with the parameter \vec{a}. When the value of \vec{a} linearly decreases from 2 to 0, the value of the vector \vec{A} also decreases. When |\vec{A}| ≤ 1, the hunting of a grey wolf will occur at any position between its current position and that of its prey. When |\vec{A}| > 1, the wolves will search the entire solution space to locate the prey (optima). Therefore, |\vec{A}| represents a controlling parameter that switches between exploration and exploitation. Different values of the control parameter \vec{a} play different roles in the exploration and exploitation of the GWO algorithm. According to Reference [42], a larger \vec{a} is favorable for global exploration, while a smaller \vec{a} facilitates local exploitation. Therefore, the control parameter \vec{a} has an important role in balancing the exploration and exploitation of the GWO algorithm. However, for the standard GWO algorithm, several studies have shown that the linear change in the value of the control parameter \vec{a} and the design of the position-updated equation cause some drawbacks, such as premature convergence and powerlessness when solving multimodal problems [12,27,42].
Based on this description, the pseudocode of the GWO algorithm is shown in Algorithm 1.
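The three-leader position update described above can be sketched as follows (a hedged Python illustration rather than the authors' code; the function name is our own):

```python
import numpy as np

def omega_update(X, X_alpha, X_beta, X_delta, a, rng):
    """Update one omega wolf toward the mean of the three leader-guided moves."""
    candidates = []
    for X_lead in (X_alpha, X_beta, X_delta):
        r1 = rng.random(X.shape)
        r2 = rng.random(X.shape)
        A = 2.0 * a * r1 - a
        C = 2.0 * r2
        D = np.abs(C * X_lead - X)         # D_alpha, D_beta, D_delta
        candidates.append(X_lead - A * D)  # X_1, X_2, X_3
    return sum(candidates) / 3.0           # X(t+1) = (X_1 + X_2 + X_3) / 3
```

With a = 0, the candidate moves coincide with the leaders themselves, so the new position is simply the centroid of the three best wolves; larger a lets the wolf stray from that centroid and explore.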

2.2. DDS Algorithm.
The DDS algorithm is a powerful single-solution-based metaheuristic algorithm that was employed for calibration problems that arise in watershed simulation models. DDS was developed by Tolson and Shoemaker in 2007 [7] and was proposed for optimization problems that are bound constrained. Thus, achieving excellent optimization results for bound-constrained global optimization problems is the advantage of the DDS algorithm.
DDS is a point-to-point, stochastic, heuristic global search algorithm with no parameter tuning; global solutions are obtained by scaling within a user-specified maximum number of function evaluations (MaxIter) [43]. Since it is a simple model that is easily programmed and a global search algorithm, many researchers have focused great attention on it. At the beginning, when the number of iterations is small, the global search of the algorithm is dominant. As the number of iterations approaches the maximum, the algorithm evolves into a local search. The key idea for the DDS algorithm to transition from a global search to a local search is to dynamically and probabilistically reduce the number of dimensions to be perturbed in the neighborhood of the current best solution [11,43]. The operation to dynamically and probabilistically reduce the number of dimensions to be perturbed can be summarized as follows: in each iteration, the jth variable is randomly selected with probability P_t from the m decision variables for inclusion in the perturbed set I_perturb. The probability P_t is expressed as

P_t = 1 - \ln(t)/\ln(MaxIter), (12)

where t indicates the current iteration and MaxIter represents the maximum number of iterations. At each iteration t, a new potential solution \vec{X}^{new}(t) is obtained by perturbing the current best \vec{X}^{best}(t) in the randomly selected dimensions. These perturbation magnitudes are sampled using the standard normal random variable N(0, 1) and are reflected at the decision variable bounds as [11]

X_j^{new}(t) = X_j^{best}(t) + r(ub_j - lb_j)μ_j, (13)

where j = 1, 2, ..., m; r is a scalar neighborhood size perturbation factor; μ_j is the standard normal random value that is generated for the jth variable to be perturbed; and ub_j and lb_j correspond to the upper bound and lower bound of the jth variable. A perturbed value that falls outside [lb_j, ub_j] is reflected back inside the bounds.
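The DDS perturbation rule can be sketched in Python. This follows Tolson and Shoemaker's selection probability P_t = 1 - ln(t)/ln(MaxIter) and their reflect-then-clamp treatment of the bounds; the helper names are our own:

```python
import numpy as np

def dds_perturb(X_best, lb, ub, t, max_iter, r=0.2, rng=None):
    """One DDS perturbation of the current best solution.

    Each variable j enters the perturbed set with probability
    P_t = 1 - ln(t)/ln(max_iter); if none is selected, one variable is
    chosen at random so the search never stalls.  Perturbation magnitudes
    are r * (ub_j - lb_j) * N(0, 1), reflected at the variable bounds.
    """
    rng = rng or np.random.default_rng()
    m = X_best.size
    P_t = 1.0 - np.log(t) / np.log(max_iter)
    mask = rng.random(m) < P_t
    if not mask.any():
        mask[rng.integers(m)] = True
    X_new = X_best.copy()
    for j in np.flatnonzero(mask):
        x = X_best[j] + r * (ub[j] - lb[j]) * rng.standard_normal()
        if x < lb[j]:                  # reflect off the lower bound ...
            x = lb[j] + (lb[j] - x)
            if x > ub[j]:              # ... clamping if reflection overshoots
                x = lb[j]
        elif x > ub[j]:                # reflect off the upper bound
            x = ub[j] - (x - ub[j])
            if x < lb[j]:
                x = ub[j]
        X_new[j] = x
    return X_new
```

Early in the run (small t) P_t is close to 1 and nearly all dimensions are perturbed (global search); near t = max_iter, P_t approaches 0 and typically a single dimension moves (local search).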

Proposed Algorithm
As presented in the previous sections, the GWO algorithm encounters a few drawbacks, such as premature convergence and a low capability to handle the difficulties of a multimodal search landscape [25]. To overcome these weaknesses, the most effective improvement is to increase the diversity of candidate solutions and further improve the balance between exploration and exploitation during the iterations. In terms of increasing the diversity of candidate solutions, inspired by the core idea of the DDS algorithm, this study adopts two improvement strategies to increase the performance of the GWO algorithm. One approach is to dynamically and probabilistically reduce the number of dimensions to be perturbed in the neighborhood, which enables candidate solutions to be perturbed between the current solutions and each of the three best solutions. Another is to use the positional interaction information about the first three best grey wolf individuals (i.e., α, β, and δ) in the process of encircling and preying on the prey to perform a deep search.
The position-updated equation of the GWO algorithm, which is based on the dynamically dimensioned perturbation and the positional interaction information, is proposed; the resulting algorithm is referred to as the DGWO. To balance the exploration and exploitation of the GWO algorithm, the introduction of the search mechanism of the DDS algorithm enables the GWO algorithm to gradually transform from a global search to a local search with an increase in the number of iterations. The GWO algorithm then has a strong exploration ability in the initial search stage and a strong exploitation ability in the subsequent stage of the iterations. The nonlinear control parameter \vec{a}' is proposed to replace the linear control parameter \vec{a} of the standard GWO algorithm. This nonlinear control parameter strategy produces a GWO algorithm with a strong exploitation ability in the early stage of searching and a strong exploration ability in the subsequent stage of searching. Therefore, the introduction of the DDS and the nonlinear control parameter strategy strengthens the balance between the exploration and exploitation of the GWO algorithm, and the positional interaction information is utilized to conduct an in-depth search and ensure the diversity of the candidate solutions.

3.1. Two Ways to Hunt Prey Are Freely Switched Using DDS.
As described in Section 2.1.3, a grey wolf hunts by direct encirclement. However, through actual observation, we found that, in addition to the previously mentioned hunting strategy, the grey wolf also approaches its prey by spiral walking when hunting. This way of spiral walking around the prey is often considered to be a very effective way to hunt [44]. Although we have determined that a grey wolf hunts by direct encirclement and spiral walking, a reasonable conversion between these two hunting methods has not been established by prior research. The traditional method is to randomly convert between the two hunting methods with equal probability [44]. In an actual situation, the conversion probability between these two methods is not equal. A reasonable conversion method is one in which the grey wolf can freely switch between these two hunting methods during its predation process. Thus, the grey wolf has the best hunting effect; that is, it is ensured that the grey wolf achieves the best prey (global optimum) in the best situation or obtains relatively better prey (a global approximation solution) in poorer conditions. We determined that the DDS method employs the conversion technique that we expected. According to Section 2.2, the core principle of the DDS algorithm is to transition the search from global to local by dynamically and probabilistically reducing the number of dimensions to be perturbed in the neighborhood of the current best solution, which causes the DDS to converge to the desired region to locate the global optimum in the best case or a reasonable local optimum in the worst case. (The pseudocode of the DDS algorithm is shown in Algorithm 2.) Based on this analysis, the DDS method is introduced in the GWO algorithm to conduct free switching of the hunting behavior between direct encirclement and spiral walking to improve the quality of the solutions of the GWO algorithm. The implementation steps are described as follows: (iii) Finally, using the ideas of the DDS algorithm to transition the search from global to local, \vec{D}_α, \vec{D}_β, and \vec{D}_δ are recalculated, where \vec{r}_3, \vec{r}_4, and \vec{r}_5 are random vectors between 0 and 1, and r is a scalar neighborhood size perturbation factor, whose value is 0.2 in this paper.
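The paper does not reproduce its spiral-walking model, so the sketch below substitutes the common logarithmic-spiral update used in related swarm optimizers; treat the formula, the constant b, and the function name as assumptions for illustration only:

```python
import numpy as np

def spiral_step(X, X_p, b=1.0, rng=None):
    """Spiral walk toward the prey X_p (an ASSUMED logarithmic-spiral model):

        X(t+1) = D' * exp(b*l) * cos(2*pi*l) + X_p,   D' = |X_p - X|,

    where l ~ U(-1, 1) and b controls the spiral shape."""
    rng = rng or np.random.default_rng()
    l = rng.uniform(-1.0, 1.0)
    D_prime = np.abs(X_p - X)                        # distance to the prey
    return D_prime * np.exp(b * l) * np.cos(2.0 * np.pi * l) + X_p
```

A DDS-style selection probability could then decide, per wolf and per iteration, whether to take a direct-encirclement step or a spiral step, which is the free switching described above.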

3.2. Position-Updated Equation Based on the Positional Interaction Information.

As described in the original literature [22] of the GWO algorithm, the alpha (α) wolf is the supreme leader of the grey wolf pack and is primarily responsible for commanding all wolves to hunt, sleep, and wake. The leader in the second tier is referred to as the beta (β) wolf, which is controlled by α and is responsible for commanding the remaining wolves. The third tier of leadership entails the delta (δ) wolf, which has to submit to α and β but dominates the ω wolves. ω is the common wolf and has the subordinate role of listening to the orders of the first three leaders. This top-down leadership mechanism of the grey wolf pack enables the GWO algorithm to have strong exploration ability. As previously described, the cooperative hunting behavior of the grey wolf group is outstanding. In one situation, the first three best grey wolves (leaders) directly lead the ω wolves to hunt. In another situation, the α wolf commands the β wolf and the δ wolf to hunt, and the β wolf commands the δ wolf to hunt.
The leadership relationship between these three leaders usually manifests via relative position changes, that is, positional interaction information. In the standard GWO algorithm, however, only the former case is considered and the latter case is disregarded, even though it is very important for the hunting of the grey wolf group. Based on this shortcoming of the standard GWO algorithm, we design a position-updated equation that is based on the positional interaction information, i.e., equation (26), together with the nonlinear control parameter equation (27), where t and MaxIter indicate the current iteration and the maximum iteration number, respectively. Figure 1 shows the transition between exploration and exploitation that is caused by the adaptive values of the control parameters \vec{a} and \vec{a}'. As shown in Figure 1, with respect to the GWO algorithm, half of the iterations are devoted to exploration (|\vec{A}| > 1) and the rest of the iterations are devoted to exploitation (|\vec{A}| ≤ 1). However, with regard to the DGWO algorithm, the proportions of iterations used for exploration and exploitation are 60.2% and 39.8%, respectively.
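The exploration/exploitation split quoted above can be checked numerically for any control-parameter schedule. The sketch below verifies the one-half figure for the standard linear schedule; the nonlinear schedule shown is only a stand-in assumption (equation (27) is not reproduced here), and it is not claimed to match the 60.2% figure:

```python
import numpy as np

def exploration_fraction(schedule, max_iter):
    """Fraction of iterations whose control parameter exceeds 1, i.e.
    where |A| > 1 is possible and the pack can still explore globally."""
    t = np.arange(1, max_iter + 1)
    return float(np.mean(schedule(t, max_iter) > 1.0))

# Standard GWO: a(t) decreases linearly from 2 to 0.
linear_a = lambda t, T: 2.0 * (1.0 - t / T)

# HYPOTHETICAL nonlinear, increasing schedule (illustration only, NOT eq. (27)).
nonlinear_a = lambda t, T: 2.0 * np.sqrt(t / T)

print(exploration_fraction(linear_a, 1000))     # roughly half of the run
print(exploration_fraction(nonlinear_a, 1000))
```

Such a one-line check makes it easy to compare candidate schedules before committing to one in the algorithm.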

3.3. Framework and Pseudocode of the DGWO Algorithm.
In this paper, the proposed hunting strategy of spiral walking is added to the GWO algorithm to enhance the predation ability. This strategy is freely switched with the original encirclement method using the search mechanism of the DDS.
This method combines the proposed nonlinear control parameter strategy and the position-updated equation, which considers the positional interaction information, to develop the DGWO algorithm. The pseudocode of the proposed DGWO algorithm is shown in Algorithm 3.

3.4. Time Complexity of DGWO.
The time complexities of the DGWO and GWO are summarized as follows:

3.5. Analysis and Comparison of the Diversity between GWO and DGWO.
From equations (5) to (8), we know that the grey wolves update their positions under the leadership of the three best wolves. However, when the three fittest wolves fall into a local optimum, all search agents will concentrate in this region, which leads to a decrease in the diversity of the population, and the algorithm easily falls into the local optimum. Based on this point, the DGWO algorithm is proposed to enhance the diversity of the GWO algorithm. To analyze and compare the diversity between the GWO and the DGWO, we choose the Sphere function as the benchmark test problem to observe the difference in diversity between the GWO and the DGWO at different iterations. We set the population size to 30, the dimension of the problem to 2, and the upper and lower boundaries of the problem to 10 and −10, respectively. The diversity distributions of the DGWO and GWO at different iterations are plotted in Figure 2.
From Figure 2(a), when the number of iterations is 2, both the DGWO and GWO have high-diversity individuals. However, from Figures 2(b) to 2(d), the DGWO algorithm shows better diversity of solutions than the GWO algorithm. This comparison confirms that the DGWO algorithm has a higher diversity of solutions than the standard GWO algorithm.
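A comparison like Figure 2 requires a concrete population-diversity measure, which the paper does not state; the sketch below assumes a common choice, the mean distance to the population centroid, under the same settings (N = 30, 2 dimensions, bounds ±10):

```python
import numpy as np

def diversity(pop):
    """Mean Euclidean distance of individuals to the population centroid
    (one common diversity measure; an assumption, not the paper's metric)."""
    centroid = pop.mean(axis=0)
    return float(np.mean(np.linalg.norm(pop - centroid, axis=1)))

rng = np.random.default_rng(42)
pop = rng.uniform(-10.0, 10.0, size=(30, 2))   # N = 30 wolves on a 2-D problem

print(diversity(pop))          # early iterations: a spread-out pack
print(diversity(0.01 * pop))   # a converged pack: diversity near 0
```

Tracking this quantity per iteration for both algorithms reproduces the kind of curves shown in Figure 2.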

Experimental Results and Discussion

4.1. Test Function Selection and Control Parameter Settings.
In this section, to validate the performance of the proposed DGWO algorithm, 23 benchmark problems with various complexities and sizes are collected from studies [21,23,43]. The characteristics of the selected test functions are summarized in Table 1 (Description of 23 classic benchmark functions), where f_min denotes the global optimal value. In this table, the key to the test functions, the mathematical expression of each benchmark test problem, the boundary of the variables, the dimensions of the solution, and the category of each function are detailed. These test problems are divided into three categories: unimodal, multimodal, and fixed-dimension multimodal. In Table 1, f_1-f_7 are unimodal problems that are used to benchmark the exploitation of algorithms because they have one global optimum and no local optima. Conversely, functions f_8-f_23 are multimodal and fixed-dimension multimodal problems, which are helpful in examining the exploration and local optima avoidance of algorithms, since they have a large number of local optima [26,42].
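For concreteness, two standard members of these categories can be written out (the textbook definitions, assumed to match Table 1): the unimodal Sphere function and the multimodal Rastrigin function:

```python
import numpy as np

def sphere(x):
    """Unimodal: one global optimum f(0) = 0 and no local optima,
    so it mainly tests exploitation."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: global optimum f(0) = 0 surrounded by a large number
    of local optima, so it tests exploration and local-optima avoidance."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

print(sphere(np.zeros(30)), rastrigin(np.zeros(30)))   # 0.0 0.0
```

An optimizer that reaches near-zero values on both kinds of landscape demonstrates both a strong exploitation ability and effective local-optima avoidance.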
The control parameter settings, identical for the GWO and DGWO algorithms, are listed in Table 2, where "m" represents the dimension of the problem, "N" represents the size of the population, "MaxIter" represents the maximum number of iterations, "w1" and "w2" represent the position weight values, and "R" represents the number of independent simulation runs for each test problem. The proposed DGWO and standard GWO algorithms were coded in MATLAB R2015a. All simulation experiments were performed on a personal computer with the Windows 10 64-bit Professional OS and 4 GB of RAM.

4.2. Impact of Position Weights w1 and w2.

As described in Section 3.2, the strategy of the modified position-updated equation (i.e., equation (26)) has an important role in balancing exploration and exploitation in the evolution process. In equation (26), w1 and w2 are two crucial position weights for improving the optimization performance of the DGWO. In this section, to further investigate the impact of the position weight coefficients w1 and w2, several independent experiments were designed and conducted. We varied the values of w1 and w2 and kept the other algorithm parameters fixed for all benchmark functions. The values w1 = 0.1, w2 = 0.9; w1 = 0.3, w2 = 0.7; w1 = 0.5, w2 = 0.5; w1 = 0.7, w2 = 0.3; and w1 = 0.9, w2 = 0.1 are selected to conduct the experiments on the 23 test functions. Among these test functions, the dimension of 13 test problems (f1-f13) is 30. All experimental results are reported in Table 3. The "Mean" and "St. dev." values are the two performance evaluation indexes shown in Table 3.
As shown in Table 3, the comprehensive optimization performance of the DGWO algorithm with position weights w1 = 0.1 and w2 = 0.9 is superior to that of the other settings. Compared with the DGWO with w1 = 0.1 and w2 = 0.9, the DGWO with position weights w1 = 0.3 and w2 = 0.7 achieved better and similar results for 9 functions (i.e., f4-f8, f12-f14, and f16) and 3 functions (i.e., f11, f17, and f18), respectively, and achieved worse results for 11 functions (i.e., f1-f3, f9-f10, f15, and f19-f23). Compared with the DGWO with w1 = 0.1 and w2 = 0.9, the DGWO with w1 = 0.5 and w2 = 0.5 obtained better and similar results for 6 problems (i.e., f5-f7, f12-f13, and f16) and 3 problems (i.e., f11, f14, and f18), respectively, and presented worse optimization results for 14 functions (i.e., f1-f4, f8-f10, f15, f17, and f19-f23). Compared with the DGWO with w1 = 0.1 and w2 = 0.9, the DGWO with w1 = 0.7 and w2 = 0.3 attained better results for 6 functions (i.e., f5, f12-f14, f16, and f20), obtained similar results for two functions (i.e., f11 and f18), and obtained worse results for 15 functions (i.e., f1-f4, f6-f10, f15, f17, f19, and f21-f23). Compared with the DGWO with w1 = 0.1 and w2 = 0.9, the DGWO with w1 = 0.9 and w2 = 0.1 achieved better optimization performance for 4 functions (f5, f14, f16, and f20), achieved similar results for one function (i.e., f18), and achieved worse results for the remaining functions. Based on this analysis, the optimization performance of the DGWO worsens as the position weight w1 increases and w2 decreases. Therefore, considering all w1 and w2 values, we concluded that setting the position weights to w1 = 0.1 and w2 = 0.9 was the ideal choice for the DGWO algorithm, and the position weights w1 and w2 were set to 0.1 and 0.9, respectively, in the subsequent experiments.
The convergence curves of the average objective function values of the DGWO with different position weight values w1 and w2 for 10 typical benchmark functions are plotted in Figure 3.

4.3. Effectiveness Analysis of the Two Components in DGWO.
In the DGWO algorithm, two main components, namely, the modified position-updated equation and the nonlinear control parameter strategy, are proposed. To validate the effectiveness of these two components in improving the optimization performance of the DGWO, two experiments were conducted on the 23 benchmark functions recorded in Table 1. Among those functions, the dimension of f1-f13 is 30. The algorithm parameters are set the same as in Table 2. In the first experiment, the DGWO employed the modified position-updated equation (i.e., equation (26)) and the linear control parameter \vec{a} similar to that in the study of Mirjalili et al. [22]; this variant is referred to as the DGWO-1. In the second experiment, the DGWO used only the nonlinear control parameter strategy (i.e., equation (27)) and the original position-updated equation (8); this variant is referred to as the DGWO-2. Two statistical criteria, "Mean" and "St. dev.," and the results of the DGWO-1, DGWO-2, and DGWO are shown in Table 4. Sign-rank sum tests at the 0.05 and 0.1 significance levels were performed between the DGWO and each of the DGWO-1 and DGWO-2.
From Table 4, compared to the DGWO, DGWO-1 achieved better results on 6 functions (i.e., f4, f6, f8, f16, f21, and f22), showed similar or approximately equal performance on 3 test functions (i.e., f9, f10, and f11), and provided slightly poorer results on the remaining test functions. It should be emphasized that DGWO-1 obtained very competitive optimization results compared to the DGWO, and its performance is not significantly inferior. We attribute this first experimental result to the fact that the modified position-updated equation is more effective at balancing exploration and exploitation and maintains greater solution diversity during the evolution process. Therefore, we can conclude that the performance differences between the DGWO and DGWO-1 were not significant. The results of the second experiment show that the DGWO surpassed DGWO-2 on 19 test functions and obtained similar results on function f18. To understand this phenomenon, note that the nonlinear control parameter strategy was specifically designed for the modified position-updated equation and is not suitable for independent use in the search process. Thus, the DGWO significantly outperformed DGWO-2. The convergence curves of the average objective function values of the DGWO, DGWO-1, and DGWO-2 on 10 typical test functions are plotted in Figure 4. From Table 4 and Figure 4, we can conclude that the two components of the DGWO complement each other to improve the optimization performance of the GWO.
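For reference, the standard GWO position update (equation (8)) that DGWO-2 retains can be sketched as follows. This is the textbook rule of Mirjalili et al. [22], not the paper's modified equation (26); the function name and NumPy array layout are illustrative assumptions.

```python
import numpy as np

def gwo_position_update(X, alpha, beta, delta, a, rng=None):
    """Standard GWO update: each wolf moves to the average of three
    pulls toward the alpha, beta, and delta leaders (equation (8))."""
    rng = rng or np.random.default_rng()

    def pull(leader):
        A = 2.0 * a * rng.random(X.shape) - a   # coefficient vector A in [-a, a)
        C = 2.0 * rng.random(X.shape)           # coefficient vector C in [0, 2)
        D = np.abs(C * leader - X)              # distance to the leader
        return leader - A * D

    return (pull(alpha) + pull(beta) + pull(delta)) / 3.0
```

Note that when the control parameter a reaches 0, every wolf collapses onto the centroid of the three leaders, which is exactly the stagnation risk that the positional interaction information in the DGWO is meant to address.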

Performance Comparison with the Standard GWO Algorithm. We independently tested each problem 30 times to obtain four statistical criteria for comparing algorithm performance: "Best" indicates the best value, "Worst" the worst value, "Mean" the average of the best values, and "St.dev." the standard deviation.
The simulation results are described in Table 5.
As shown in Table 5, the DGWO has better optimization performance than the standard GWO on the seven unimodal benchmarks, with the exception of f5, since the DGWO provides the best "Best," "Worst," "Mean," and "St.dev." values for 6 of the 7 unimodal benchmarks. For the six multimodal benchmarks (f8-f13) in Table 5, the standard GWO does not outperform the DGWO on any test problem under the "Mean" criterion. As observed in Table 5, the DGWO achieved better performance than the GWO on 5 fixed-dimension multimodal test functions (i.e., f14 and f20-f23) and provided slightly better results than the GWO on functions f16, f18, and f19. For function f16, however, the GWO obtained better results than the DGWO.
The percentage of problems solved by the GWO and DGWO is recorded in Table 6. Note that, for the 13 test functions f1-f13 and the 10 test functions f14-f23 listed in Table 1, a problem is regarded as successfully solved if the error between the obtained value and the theoretical optimum is less than 10⁻⁵ and 10⁻³, respectively. From Table 6, it can be seen that, for functions f1, f2, f10, f16, f18, and f23, the DGWO and GWO obtained the same solving percentage. On 13 test problems (i.e., f3, f4, f6-f9, f11-f15, and f21-f22), the DGWO showed a higher percentage than the GWO. However, the GWO showed a higher percentage than the DGWO on functions f17, f19, and f20.
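The solving criterion above can be computed directly. The thresholds 10⁻⁵ (f1-f13) and 10⁻³ (f14-f23) come from the text; the strict-inequality check and the function name are assumptions.

```python
def solve_percentage(run_results, theoretical_opt, tol):
    """Percentage of independent runs whose absolute error to the
    theoretical optimum falls within the tolerance (10**-5 for
    f1-f13 and 10**-3 for f14-f23, per the text)."""
    solved = sum(1 for v in run_results if abs(v - theoretical_opt) < tol)
    return 100.0 * solved / len(run_results)
```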
To gain an intuitive sense of the convergence rates of the DGWO and GWO algorithms, Figure 5 shows their convergence curves for 12 typical test functions with m = 10, 30, 50, and 100. As shown in Figure 5, the DGWO achieved faster convergence than the standard GWO on all 12 test problems.
This finding verifies that the position-updated strategy and the nonlinear control parameter proposed in this paper enable faster search and excellent optimization performance of the DGWO for both low- and high-dimensional problems. To further compare the optimization performance of the proposed DGWO with other improved GWO variants, i.e., the modified grey wolf optimizer (mGWO) [26], the grey wolf optimizer based on Powell local optimization (PGWO) [45], and the exploration-enhanced grey wolf optimizer (EEGWO) [42], the parameters of the mGWO, PGWO, and EEGWO were set as follows: the population size was 30, and the maximum number of iterations was 500. The 23 benchmark test functions were selected from Table 1. The dimensions of the 13 test functions f1-f13 were set to 10, 30, 50, and 100. Each algorithm was independently run 30 times on each test function for each corresponding dimension. The mean (denoted by "Mean") and standard deviation (denoted by "St.dev.") of the fitness value are the two statistical criteria used to evaluate performance. The simulation results of these four algorithms are recorded in Table 7.
As shown in Table 7, the DGWO obtained the best "Mean" and "St.dev." values for functions f1, f2, f3, f8, and f13 at both low dimensions (m = 10 and 30) and high dimensions (m = 50 and 100) compared with the mGWO, PGWO, and EEGWO. For test problems f4 and f7 with m = 30, 50, and 100, the EEGWO achieved the best results among the four modified GWO algorithms, and the DGWO achieved slightly worse results than the EEGWO but better results than the mGWO and PGWO. For functions f9, f10, and f11, the DGWO and EEGWO achieved the same results and outperformed the mGWO and PGWO; note that the DGWO and EEGWO obtain the theoretical optimum (0) for functions f9 and f11.
The PGWO obtained the best results on test problems f5, f6, and f12 at all dimensions (m = 10, 30, 50, and 100) and attained the global theoretical optimum (0) on problem f6. However, the DGWO obtained the second best results for functions f5, f6, and f12, which are close to those of the PGWO. For functions f14 to f23 with fixed dimensions, the DGWO achieved the best results on 7 test functions (f14, f15, f17, f19, and f21-f23). Compared to the mGWO, the PGWO attained almost the same results for functions f16 and f20, which are better than those of the DGWO and EEGWO. On test function f18, the mGWO and PGWO obtained the best fitness values. From Table 7, we can see that the EEGWO provides very competitive results compared to the DGWO, and it is challenging to determine which algorithm is better.
Therefore, it is necessary to conduct an appropriate statistical analysis to determine whether the differences between the algorithms are significant at a given confidence level. In this paper, the sign test is adopted, following references [11,46]. The statistical results are recorded in Table 8. Note that this statistical analysis is based on the average of the 20 independently obtained best results. As seen from Table 8, the DGWO is significantly better than the GWO, mGWO, and PGWO on the unimodal and multimodal test functions at a significance level of 0.05 but shows a nonsignificant performance difference on the 10 fixed-dimension multimodal benchmark functions. In addition, compared to the EEGWO, the DGWO shows a nonsignificant performance difference on the 13 unimodal and multimodal test functions but obtains significantly better results on the 10 fixed-dimension multimodal benchmark functions at a significance level of 0.1.
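A two-sided sign test of the kind adopted here can be sketched with nothing more than the binomial distribution; this is the generic textbook test, not necessarily the exact procedure of references [11, 46].

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided sign test: under H0, each non-tied pairwise comparison
    between two algorithms is a fair coin flip (ties are discarded)."""
    n = wins + losses
    k = min(wins, losses)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)
```

For example, if one algorithm beats another on 11 of 13 functions with no ties, the resulting p-value is below 0.05, so the difference would be declared significant at that level.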
The percentages of problems solved by the mGWO, PGWO, and EEGWO are recorded in Table 9. Compared to the mGWO, the DGWO obtained the same percentage on six functions (i.e., f1, f2, f5, f16, f18, and f23) and a higher percentage on thirteen functions (i.e., f3-f4, f6-f9, f11-f13, f15, and f21-f22), while the DGWO showed a lower percentage on three functions (i.e., f17, f19, and f20). Compared to the PGWO, the DGWO provided the same percentage on five functions (i.e., f1, f2, f6, f11, and f18) and a higher percentage on eleven functions (i.e., f3-f4, f7-f12, f14-f15, and f23); conversely, the PGWO showed a higher percentage than the DGWO on six functions (i.e., f5, f17, f19, and f20-f22). For function f13, the DGWO showed a higher percentage than the PGWO at dimensions 10, 30, and 50 but a lower percentage at dimension 100. Compared to the EEGWO, the DGWO achieved the same percentage on nine functions (i.e., f1-f5, f9-f11, and f20) and a higher percentage on twelve functions (i.e., f8, f12-f13, f14-f19, and f21-f23). For function f7, however, the EEGWO obtained a higher percentage than the DGWO.
To investigate the convergence speed of the three modified GWO versions mentioned in this paper and the proposed DGWO for low- and high-dimensional problems, Figure 6 plots the convergence curves of 10 typical functions (f1-f4, f6-f7, f9-f10, and f12-f13) with dimensions of 30 and 100. For functions f1-f4, f7, and f9, the DGWO and EEGWO achieve the fastest convergence: the EEGWO converges faster on the high-dimensional versions, whereas the DGWO converges faster on the low-dimensional ones.
The PGWO converges quickly on functions f6 and f12, with the DGWO ranking second. The DGWO exhibits the fastest convergence on functions f10 and f13, and the EEGWO shows the same convergence speed on function f10. These results verify that the proposed DGWO achieves excellent convergence performance on both low- and high-dimensional problems.
In addition to the abovementioned GWO versions, an interesting GWO variant named "GWO-EPD" [47] has successfully caught our attention because it exhibits some similarities and differences compared with our proposed DGWO algorithm.
The GWO-EPD algorithm shares some features with the DGWO, such as dynamically removing some inferior solutions and repositioning them by adopting the alpha, beta, and delta wolves. However, the differences between the two algorithms are also easy to distinguish. For example, in the DGWO, some variables of the current best solution are removed and repositioned using the probability modeled by equation (12), while in the GWO-EPD, half of the worst search agents are eliminated and reinitialized with equal probability. In addition, in the DGWO, the variables are repositioned by the modified position-updated equation (see equation (26)); in the GWO-EPD, the EPD mechanism randomly reinitializes the worst search agents. To further verify the scalability of the DGWO, we compared it with the GWO-EPD on the 13 test functions (i.e., f1-f13) listed in Table 1, with dimensions of 30 and 100. All DGWO parameters were kept the same as those defined in the above section. The parameter values of the GWO-EPD were kept the same as in its original paper. In addition, the maximum number of iterations and population size were set to 500 and 30, respectively, and 30 independent runs were executed for each test function. The experimental results are presented in Table 10.
As seen from Table 10, for m = 30, the DGWO provided better results than the GWO-EPD on eleven functions (i.e., f1-f4, f6-f7, and f9-f13). Similarly, for m = 100, the DGWO also offered better results on the same eleven functions. However, the GWO-EPD obtained better results for f5 and f8. In summary, the increase in dimension has little impact on the performance of the DGWO. Even on large-scale optimization problems, the DGWO still works well and obtains promising results.
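The probability-driven dimension selection that the DGWO borrows from dynamically dimensioned search can be sketched as below. The decay schedule p = 1 - ln(t)/ln(MaxIter) is the classic DDS rule of Tolson and Shoemaker, used here as a stand-in for the paper's equation (12), which is not reproduced in this copy.

```python
import math
import random

def dds_select_dims(m, t, max_iter, rng=None):
    """Pick the subset of dimensions to perturb at iteration t: each of
    the m variables is selected with probability p(t), which decays as
    the search progresses, and at least one dimension is always kept."""
    rng = rng or random.Random()
    p = 1.0 - math.log(t) / math.log(max_iter)   # classic DDS schedule (assumed)
    dims = [j for j in range(m) if rng.random() < p]
    if not dims:
        dims = [rng.randrange(m)]                # guarantee at least one dimension
    return dims
```

Early in the run nearly every dimension of a leader is perturbed (global exploration); near MaxIter only one dimension changes at a time, which concentrates the search locally.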

Performance Comparison with Other State-of-the-Art Algorithms (m = 30). In this section, we compared the DGWO with seven recently proposed state-of-the-art population-based optimization methods: autonomous particles groups for particle swarm optimization (AGPSO) [48], improved PSO with time-varying accelerator coefficients (IPSO) [49], improved PSO based on asymmetric time-varying acceleration coefficients (MPSO) [50], time-varying acceleration coefficients particle swarm optimization (TACPSO) [51], hybrid differential evolution with biogeography-based optimization (DEBBO) [52], the hybrid whale optimization algorithm with simulated annealing (WOA-SA) [53], and the salp swarm algorithm (SSA) [54]. All DGWO parameters were kept the same as those listed in Section 4.1. The parameter settings of the seven algorithms are as follows: the population size is 30, the maximum number of iterations is 500, and the other algorithm parameters are the same as in their original papers.
To compare the optimization performance of the algorithms, the results were collected over 30 independent runs. The best (denoted by "Best"), average (denoted by "Mean"), and standard deviation (denoted by "St.dev.") of the best solution in the last iteration are collected in Table 11. The best obtained results are highlighted in boldface.
Table 11 shows the results for the 23 test functions. As presented in this table, the DGWO had the best results on three of the seven unimodal benchmark problems (i.e., f3, f4, and f7). For function f6, the DGWO performed slightly worse than the SSA and obtained the second best result. For functions f1 and f2, the WOA-SA achieved the global optimal value (0), and the DGWO provided solutions near 0. For the multimodal benchmark functions f8-f13, the DGWO presented the best results, with the exception of functions f12 and f13. For functions f8, f12, and f13, the DEBBO obtained almost the same results as the DGWO. Compared to the WOA-SA, the DGWO obtained similar or worse results. The percentages of problems solved by the seven state-of-the-art algorithms are listed in Table 12. As seen from this table, the AGPSO, IPSO, MPSO, and TACPSO all failed to solve the thirteen test functions f1-f13 but completely solved five test functions (i.e., f14 and f16-f19). Of the thirteen functions f1-f13, nine are completely solved by the DGWO, three by the DEBBO, six by the WOA-SA, and two by the SSA. Of the ten functions f14-f23, four are completely solved by the DEBBO and SSA, three by the WOA-SA, and two by the DGWO. However, for functions f15 and f22-f23, the DGWO achieved the highest percentage.
Figure 7 plots the convergence curves of the average objective function values of the algorithms for some typical test problems, where f1, f3, f4, and f7 are unimodal functions, f9, f10, and f11 are multimodal functions, and f15, f21, and f23 are fixed-dimension multimodal functions. As observed from these curves, the DGWO has the best convergence rate on all 10 classic benchmark functions. Note that unimodal test problems are suitable for benchmarking the convergence ability of algorithms since they have only one global minimum and no local minima in the search space [48]. Since multimodal and fixed-dimension multimodal benchmark functions have more than one local optimum, they are suitable for benchmarking an algorithm's ability to avoid local minima [48]. As indicated by the results, the DGWO performs better than the seven compared algorithms on both the unimodal and the multimodal benchmark functions. The DGWO achieves superior results because population diversity and the balance between exploration and exploitation during the iterations are maintained by the modified position-updated equation (i.e., equation (26)) and the nonlinear control parameter (i.e., equation (27)).
To further investigate the optimization performance of the DGWO on some standard and complex benchmark problems, we compared it with the TACPSO, IPSO, and GWO on the CEC2014 benchmark test suite with dimension 30. The parameter settings of the DGWO and the other selected algorithms were the same as mentioned above. The maximum number of iterations for the DGWO was 5 × 10⁴, and 20 independent runs were performed for each problem. The experimental results are shown in Table 13.
From Table 13, it can be seen that, among the unimodal test functions (f1-f3), the proposed DGWO achieves the best performance on f1 and f2. Among the 13 multimodal functions (f4-f16), the DGWO shows better results on 11 benchmark functions and similar results on two (i.e., f13 and f16). Among the 6 hybrid functions (f17-f22), the DGWO gives the best results on four functions (f17, f19, f21, and f22) and is the second best algorithm on f18. Among the 8 composition functions (f23-f30), the proposed DGWO gives the best results on all functions except f26, on which it provides the worst result.
From the statistical analysis listed in Table 13, the DGWO performs better than the TACPSO at a significance level of 0.05 and better than the IPSO and GWO at a significance level of 0.1.

Experiment on Real-World Engineering Problems.
In this section, several classic real-world engineering optimization problems were selected to validate the practical optimization performance of the proposed algorithm. The DGWO and GWO were applied to solve three well-known constrained engineering design problems: Himmelblau's problem, the gear train design, and the pressure vessel design. Penalty function methods were employed to handle the constraints [55]. The DGWO and GWO parameters for these three real-world applications were as follows: the population size was 30, the maximum number of iterations was 1000, and each problem was run independently 30 times.
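A minimal static-penalty wrapper of the kind referred to in [55] might look as follows; the quadratic penalty term and the coefficient value are illustrative assumptions, with constraints expressed in the usual g(y) ≤ 0 form.

```python
def penalized(objective, constraints, penalty=1e6):
    """Convert a constrained problem into an unconstrained one by adding
    penalty * (sum of squared violations) for each constraint g(y) <= 0."""
    def f(y):
        violation = sum(max(0.0, g(y)) ** 2 for g in constraints)
        return objective(y) + penalty * violation
    return f
```

A feasible point keeps its original objective value, while an infeasible one is pushed far above any feasible cost, so an unconstrained optimizer such as the DGWO naturally steers away from it.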
Several researchers have employed different algorithms to solve Himmelblau's problem, such as the generalized reduced gradient (GRG) [56], the genetic algorithm (GA) [57], a GA solution based on a global reference (GA-G) [58], and a GA solution based on a local reference (GA-L) [58]. Table 14 illustrates the results of the best run obtained by the DGWO and the previously mentioned methods. Table 14 reveals that the results achieved by the DGWO are better than the previously reported best feasible solutions. For the gear train design problem discussed next, the gear ratio is defined as y1y2/(y3y4).
Table 15 shows the results of the best run on the gear train design problem obtained by different algorithms and the proposed DGWO. The statistical results of these algorithms, together with the results of the CS and GSA-GA algorithms proposed by Gandomi et al. [60] and Garg [61], respectively, are shown in Table 16; they indicate that the result obtained by the DGWO is superior to those of the two algorithms, with low worst (Worst), mean (Mean), and standard deviation (St.dev.) values. The results obtained by the DGWO are slightly better than those of the GWO and significantly better than those reported by other methods in [59-62].
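The gear train objective can be written down directly from the ratio definition. The target ratio 1/6.931 is the standard value from Sandgren's formulation and is stated here as an assumption, since the paper's own equation is garbled in this copy.

```python
def gear_train_cost(y):
    """Squared error between the achieved gear ratio y1*y2/(y3*y4)
    and the required ratio 1/6.931 (Sandgren's formulation)."""
    y1, y2, y3, y4 = y
    # teeth counts are integers between 12 and 60 in this benchmark
    assert all(12 <= yi <= 60 and yi == int(yi) for yi in y)
    return (1.0 / 6.931 - (y1 * y2) / (y3 * y4)) ** 2
```

The widely reported best integer design, (19, 16, 43, 49), yields a cost on the order of 10⁻¹².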

Pressure Vessel Design Problem.
In this problem, a cylindrical pressure vessel is capped at both ends by hemispherical heads, and the cylindrical section is formed by joining two longitudinal welds, as described in Figure 9 [63]. Four decision variables, namely, the thickness of the vessel shell (Ts), the thickness of the head (Th), the inner radius of the vessel (R), and the length of the cylindrical section (L), are optimized to minimize the total cost of the pressure vessel. Therefore, the formulation for this problem involves the four variables Y = [Ts, Th, R, L]. The results obtained for this problem by the proposed DGWO method are compared with the results of the best runs achieved by other algorithms in Table 17.
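Since the cost model itself did not survive extraction in this copy, the sketch below uses the widely cited pressure vessel benchmark formulation (material, forming, and welding cost with four g(y) ≤ 0 constraints); its constants should be checked against the paper's equations before reuse.

```python
import math

def vessel_cost(y):
    """Total cost of the vessel in the standard benchmark formulation;
    y = [Ts, Th, R, L]."""
    Ts, Th, R, L = y
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def vessel_constraints(y):
    """Constraint values g_i(y), all required to satisfy g_i(y) <= 0."""
    Ts, Th, R, L = y
    return [
        -Ts + 0.0193 * R,                                                    # shell thickness
        -Th + 0.00954 * R,                                                   # head thickness
        -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0,  # working volume
        L - 240.0,                                                           # length limit
    ]
```

A frequently reported near-optimal design, Ts = 0.8125, Th = 0.4375, R ≈ 42.0984, L ≈ 176.6366, gives a cost of roughly 6060.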
The practical optimization performance of the DGWO is superior to that of the existing approaches but slightly worse than that of the GWO.
The statistical results over 30 independent runs are recorded in Table 18. They further show that the standard deviation of the proposed DGWO is smaller than that of the other algorithms, except for the result reported in [64], and that its worst result is better than those of the compared algorithms. In addition, the DGWO obtains best and average results very close to those of the GWO and better than those of the other compared algorithms.

Several Insights for Applying the DGWO Algorithm.
As discussed above, the optimization performance of the DGWO has been validated on several classical well-known benchmark functions. As seen from Table 3, the position weight values w1 and w2 play an important role in the optimization performance on different types of problems. If the objective problems are unimodal or multimodal (such as f1-f13), the values of w1 and w2 can be set to 0.1 and 0.9 or to 0.3 and 0.7, respectively; both settings obtain relatively high-quality solutions. If the objective problems are fixed-dimension multimodal (such as f14-f23), w1 = 0.1 and w2 = 0.9 achieve better results than the other position weight values. In addition, from Table 4, we can observe that the DGWO-1 algorithm, which employs the modified position-updated equation (i.e., equation (26)) and the linear control parameter a⃗, can provide very competitive results for unimodal and multimodal problems and slightly poorer results on fixed-dimension multimodal problems. Therefore, for unimodal or multimodal problems, the control parameter of the DGWO can adopt either a⃗ or a⃗′; otherwise, for fixed-dimension multimodal problems, we recommend that practitioners use the nonlinear control parameter strategy proposed in this paper (i.e., equation (27)).

Conclusions
In this paper, an improved version of the GWO algorithm (referred to as the DGWO) is proposed to solve continuous numerical optimization problems. First, the DDS method is introduced into the GWO to perturb a subset of dimensions of the three best solutions, which increases particle diversity and enhances the exploration ability of the GWO. This method realizes a predation mode that switches freely between direct encirclement and spiral walking. Second, the positional interaction information about the three leaders (i.e., α, β, and δ) in the predation process is considered, and a position-updated equation based on this information is proposed to increase the ability of the GWO to escape local optima. Finally, a nonlinear control parameter strategy is designed to enhance the exploitation ability of the GWO, as well as its convergence precision and convergence rate. With these three improvements, the balance between exploration and exploitation, the convergence precision, and the convergence rate are all enhanced. Twenty-three benchmark test problems, the CEC2014 benchmark suite, and three classic real-world engineering design applications were employed to verify the practical optimization performance of the proposed DGWO. First, the experimental results on the unimodal functions demonstrate the exploitation ability of the DGWO, which accelerates convergence and enhances solution accuracy. Second, the exploration capability of the DGWO was demonstrated by the results on the multimodal functions. Third, the results on the fixed-dimension multimodal and composite functions show that the DGWO succeeds in escaping local optima by balancing exploration and exploitation.
The simulations confirmed that the DGWO finds very competitive optimization results compared to recent GWO variants and state-of-the-art heuristic algorithms. However, the optimization performance of the DGWO on Himmelblau's nonlinear engineering design problem is not very competitive, whereas it shows excellent results on the gear train design and pressure vessel design problems. Although several experiments have demonstrated that the DGWO is efficient, effective, and robust, it also has some obvious shortcomings: it has more parameters to adjust than the original GWO, its optimization performance on the complex problems in the CEC2014 suite is relatively poor, and a 100% solving percentage is not guaranteed.
In future work, two main directions will be pursued. One interesting research direction is to further simplify the spiral walking predation technique and improve the positional interaction information strategy so as to propose a GWO variant with a simpler structure and higher optimization performance. In addition, we intend to apply the proposed DGWO to multiobjective optimization problems, economic load dispatch problems, and the training of neural networks.

(1) In the initialization phase, the DGWO and GWO require O(N × m) time, where N is the population size and m is the dimension of the problem
(2) Calculating the control parameters of the DGWO and GWO requires O(N × m) time
(3) Updating the agents' positions in the DGWO and GWO requires O(N × m) time
(4) Evaluating the fitness value of each agent requires O(N × m) time
Based on the above analysis, each generation has a total time complexity of O(N × m), and given a maximum number of iterations, the total time complexity of the DGWO and GWO is O(N × m × MaxIter), where MaxIter denotes the maximum number of iterations.

Figure 1 :
Figure 1: Updating the values of the control parameters a⃗ and a⃗′ over the course of iterations.
"n" refers to nonsignificant at the significance levels of 0.05 and 0.1.

Figure 8 :
Figure 8: Structure of the gear train design problem.

Figure 9 :
Figure 9: Structure of the pressure vessel design problem.
(1) Randomly initialize N individuals' positions to construct a population
(2) Calculate the fitness value of each individual and find α, β, and δ
(3) while t ≤ MaxIter or stopping criteria not met do

Table 3 :
Experimental results of the DGWO using different position weight values w1 and w2 for 23 functions.

Table 2 :
Experimental parameter settings for the GWO and DGWO.

Table 5 :
Results obtained by the DGWO and GWO algorithms on 23 test problems.

Table 6 :
Percentage of problems solved by the GWO and DGWO.

Table 7 :
Results obtained by the three GWO variants and DGWO on 23 test problems.

Table 8 :
Sign test results of the five different GWO algorithms.

Table 9 :
Percentage of problems solved by the mGWO, PGWO, and EEGWO.

Table 10 :
Mean and St.dev. results of the DGWO and GWO-EPD for thirteen functions (m = 30 and 100).

Table 11 :
Comparison results of the DGWO and the other seven algorithms on 23 test functions (m = 30).

As seen from Table 11, the results of the AGPSO, IPSO, MPSO, and TACPSO are equal on four functions (f14, f16, f18, and f20) and better than those of the DGWO. However, the DGWO achieved the best results of all algorithms on six fixed-dimension multimodal benchmark problems (i.e., f15, f17, f19, and f21-f23). The WOA-SA and SSA obtained similar results on function f20, which are better than those of the other algorithms.

Table 12 :
Percentage of problems solved by the seven state-of-the-art algorithms and the DGWO.

Table 13 :
Comparison between the DGWO and other algorithms on CEC2014 benchmark functions.

The gear train design problem involves integer variables and was initially introduced by Sandgren [59]. The task is to determine the optimal numbers of teeth of the gearwheels, each between 12 and 60, so as to minimize the gear ratio error of the gear train displayed in Figure 8. The optimization model of this problem, with the decision vector Y = [Td, Tb, Ta, Tf] = [y1, y2, y3, y4], is formulated as follows:

min f(Y) = (1/6.931 − y1y2/(y3y4))², subject to 12 ≤ y1, y2, y3, y4 ≤ 60, yi ∈ Z⁺.

Table 14 :
Comparison results of the best Himmelblau's nonlinear optimization problem obtained by different algorithms.
NA: not available.

Table 17 :
Comparison results of the best pressure vessel design problem obtained by different algorithms.