An improved hybrid whale optimization algorithm for global optimization and engineering design problems

The whale optimization algorithm (WOA) is a widely used metaheuristic optimization approach with applications in various scientific and industrial domains. However, WOA relies solely on the best solution to guide the population in subsequent iterations, overlooking the valuable information embedded in other candidate solutions. To address this limitation, we propose a novel improved variant called the Pbest-guided differential WOA (PDWOA), which combines the strengths of WOA, the particle swarm optimizer (PSO), and differential evolution (DE). In this study, we conduct a comprehensive evaluation of the proposed PDWOA algorithm on both benchmark and real-world optimization problems. The benchmark tests comprise 30-dimensional functions from the CEC 2014 test suite, while the real-world problems include the optimal design of a pressure vessel, a tension/compression spring, and a welded beam. We present the simulation results, including two non-parametric statistical tests, the Wilcoxon signed-rank test and the Friedman test, which validate the performance improvements achieved by PDWOA over other algorithms. The results of our evaluation demonstrate the superiority of PDWOA compared to recent methods, including the original WOA, and provide valuable insights into the effectiveness of the proposed hybrid algorithm. Furthermore, we offer recommendations for future research to further enhance its performance and open new avenues of exploration in the field of optimization algorithms. The MATLAB code of PDWOA is publicly available at https://github.com/ebrahimakbary/PDWOA.


INTRODUCTION
As optimization problems in various disciplines become increasingly challenging, it becomes apparent that classical optimization methods suffer from limitations. These include convergence to local optima, requirements of differentiability and continuity, and high computational burdens. Consequently, there is a growing need for more robust tools for optimal problem-solving. In recent years, metaheuristic methods such as particle swarm optimization (PSO) (Kennedy & Eberhart, 1995) and the genetic algorithm (GA) (Holland, 1992) have gained popularity and success in solving optimization problems. New metaheuristic methods continue to be proposed, such as the termite life cycle optimizer (TLCO) (Minh et al., 2023b; Minh et al., 2023a), the K-means optimizer (KO) (Minh et al., 2022), the planet optimization algorithm (POA) (Sang-To et al., 2022), a combination of an artificial neural network (ANN) and balancing composite motion optimization (BCMO) (Tran et al., 2023), and the new movement strategy of cuckoo search (NMS-CS) (Cuong-Le et al., 2021).
Researchers tend to utilize metaheuristic methods for optimization problems due to their derivative-free formulation and their ability to escape local optima and find global optima. However, it is important to consider the No Free Lunch theorem (Wolpert & Macready, 1997), which suggests that no single optimization algorithm performs best for all problems. Therefore, there is a need to explore and develop new metaheuristic algorithms that are specifically designed to address the challenges of different optimization problems.
The whale optimization algorithm (WOA) is a recent metaheuristic method suggested by Mirjalili & Lewis (2016), inspired by the hunting strategy of humpback whales. WOA has gained significant attention from engineers, designers, and researchers worldwide for its effectiveness in optimizing various problems. However, the original WOA formulation has a limitation: it only considers the best solution from each iteration, neglecting valuable information from other individuals and their best positions. This limitation can hinder the algorithm's overall optimization performance.
To address this drawback, our proposed approach introduces an enhanced version of WOA called the Pbest-guided differential Whale Optimization Algorithm (PDWOA). PDWOA incorporates efficient features from the PSO and differential evolution (DE) algorithms (Storn & Price, 1997) to improve the algorithm's ability to avoid local optima and achieve global optima, particularly in shifted optimization problems. In addition, two non-parametric statistical tests, the Wilcoxon signed-rank test and the Friedman test (Derrac et al., 2011; Buch, Trivedi & Jangir, 2017; Ghasemi et al., 2023), are employed to validate the performance improvements achieved by PDWOA over the original WOA.
The contributions of this study are outlined as follows:
1. Overview and analysis of the Whale Optimization Algorithm (WOA) to understand its functionality and limitations, particularly in complex real-world problems.
2. Development of a new enhanced version of WOA, known as the Pbest-guided differential Whale Optimization Algorithm (PDWOA), to address the identified limitations of the original algorithm.
3. Evaluation of the performance of PDWOA compared to the original WOA through experiments on 30 shifted test functions from CEC 2014. The results demonstrate the efficiency of PDWOA in obtaining optimal solutions. Statistical tests, such as the Wilcoxon signed-rank test and the Friedman test, are employed to validate the performance improvements.
4. Application of PDWOA to solve three real-world engineering problems, providing practical validation of its optimization performance in real-world scenarios.
5. Discussion of potential future improvements by exploring the integration of models from other powerful optimization algorithms, aiming to expand the range of problems that can be accurately solved by the proposed algorithm.
The remaining sections of this paper are organized as follows. The ''Related Work'' section provides an overview of related work in the field. ''WOA'' presents a brief introduction to the WOA. The ''Challenges and Enhanced Hybrid Version of WOA'' section discusses the main drawbacks of WOA and proposes the Pbest-guided differential WOA (PDWOA) by incorporating efficient features of the PSO and DE algorithms. The ''Simulation Results'' section presents the simulation results, where extensive experiments are conducted to evaluate the performance of PDWOA, including the statistical tests. ''Discussion and Future Studies'' discusses the results and provides potential areas for future studies. Finally, the paper is concluded in the ''Conclusion'' section.

RELATED WORKS
A comprehensive overview of the applications of WOA, including various improvements, has been presented in Gharehchopogh & Gholizadeh (2019). Some notable examples of these improvements include the use of WOA for detecting weak signals in rotating machinery (He et al., 2019), analyzing clinical data of anaemic pregnant women (Saidala & Devarakonda, 2017), scheduling tasks in cloud computing (Sreenu & Sreelatha, 2017), and suppressing sidelobes in multiple-input multiple-output radar systems (Yuan et al., 2018). Additionally, Mohammadi & Mehdizadeh (2020) proposed a novel hybrid model that combines support vector regression with WOA for the daily estimation of reference evapotranspiration, demonstrating superior performance compared to support-vector-regression-only models. Qais, Hasanien & Alghuwainem (2020a) proposed a new enhanced version of WOA, called EWOA, specifically designed for maximizing power extraction from variable-speed wind generators (VSWGs). Instead of using the parameters suggested in the original WOA, EWOA incorporates a cosine function to control the searching and encircling behavior. Wang & Chen (2020) proposed a novel approach for medical diagnosis by improving a support vector machine (SVM) using chaotic WOA with multiple swarms (CMWOA). Their technique exhibited excellent performance in terms of avoiding local optima and achieving fast convergence. Cao et al. (2020) incorporated chaos theory to enhance the exploration ability and convergence characteristics of WOA, resulting in a new chaotic-based improved version called CIWOA. This approach was specifically applied to achieve efficient terminal voltage control for proton exchange membrane fuel cells (PEMFCs).
Akyol & Alatas (2020) applied WOA and social impact theory-based optimization for sentiment classification in online social media. Furthermore, Zeng et al. (2021) proposed a competitive mechanism enhanced WOA (CMWOA) for effectively addressing multiobjective optimization problems. Qais, Hasanien & Alghuwainem (2020b) introduced a novel design of Sugeno fuzzy logic controllers (FLCs) based on WOA (WOA-FLCs) to enhance the low-voltage ride-through of VSWGs, resulting in improved time response characteristics surpassing those obtained by GA and the grey wolf optimizer (GWO). Jain, Katarya & Sachdeva (2020) employed a novel social network-based WOA (SNWOA) to identify opinion leaders in social networks. Rosyadi, Penangsang & Soeprijanto (2017) applied the WOA to determine the optimal placement and size of filters in distribution systems. Chen, Li & Yang (2020) utilized a chaos mechanism and quasi-opposition to enhance the convergence speed of WOA and mitigate the issue of local optima when solving large-scale problems. Liu et al. (2020) proposed the utilization of WOA for evaluating the resilience of regional flood disasters, demonstrating improved generalization performance and remarkable stability. Wang et al. (2019) introduced an opposition-based variant of WOA for tackling multi-objective optimization problems. Srivastava et al. (2018) utilized WOA to estimate the parameters of a permanent magnet synchronous motor. An improved version of the WOA optimizer was suggested in Abdel-Basset, Mohamed & Mirjalili (2021), which comprises three modifications compared to the original WOA. Firstly, a dynamic distance control factor was used rather than a fixed one. Secondly, a certain probability was used to achieve a compromise between movement towards the best solution and its opposite, for escaping from local optima. Finally, Nelder-Mead was used along with the Pareto archived evolution strategy (PAES) to further improve WOA. The authors of Mahdad (2018) solved the optimal power flow (OPF) problem utilizing a new partitioning whale algorithm.
In Chen et al. (2020), an improved WOA named RDWOA was suggested for improving the convergence and global optimization performance of WOA in solving multidimensional problems. The improvement included two schemes, random spare/random replacement and double adaptive weight, which were used for advancing the convergence, the exploration in the initial phases, and the exploitation in subsequent phases. The proposed strategies considerably increased the convergence speed and the optimization performance of WOA. The efficiency of RDWOA was demonstrated on typical benchmarks and engineering problems. Trivedi et al. (2016) applied WOA to solve emission-constrained environmental economic dispatch problems. Tu et al. (2021) proposed another enhanced variant of WOA to improve its convergence performance and prevent it from being trapped in local optima. The enhancement employs a new communication mechanism (CM) for improving the global optimization performance and biogeography-based optimization (BBO) to compromise between the exploration and exploitation performances. The effectiveness of this enhanced variant was confirmed using benchmark and engineering problems.
Abdel-Basset, Abdle-Fatah & Sangaiah (2018) proposed an enhanced version of WOA that incorporates Lévy flight (LF) for problem-solving in the cloud computing environment. Mafarja & Mirjalili (2017) proposed a hybrid approach that combines WOA with simulated annealing for feature selection. In Nazari-Heris et al. (2017), the optimal generations of combined heat and power units were determined using WOA. Guo et al. (2020) augmented WOA by incorporating adaptive social learning (ASL) and wavelet mutation. First, a novel exploration probability was formulated for improving the performance of WOA. Then, an ASL strategy was utilized for constructing the adaptive social network (ASN) of the WOA population, to enhance its diversity. Finally, the suggested procedure was augmented using the Morlet wavelet mutation strategy. WOA was proposed in Reddy, Reddy & Manohar (2017) for optimally identifying the size of renewable energy resources.
In Samadianfard et al. (2020), a hybridization of the multi-layer perceptron (MLP) neural network and WOA was proposed for wind speed forecasting. Content-based image retrieval was addressed in Aziz, Ewees & Hassanien (2018) using the multi-objective WOA (MOWOA) algorithm. In Wu et al. (2018), the path planning problem for a solar-powered UAV in an urban environment was solved using WOA enhanced with an adaptive chaos-Gaussian switching solving strategy and a coordinated decision-making mechanism. In Hou et al. (2020), a hybrid of quantum simultaneous WOA (QSWOA) and multi-objective economic model predictive control (MOEMPC) was proposed for controlling gas turbines. A new improved opposition-based WOA (IOWOA) was used for estimating the parameters of solar cell diode models (Abd Elaziz & Oliva, 2018). Binary WOA was utilized in Eid (2018) to deal with feature selection problems. In Liu, Yao & Li (2020), a hybridization of LF-augmented WOA and DE was suggested for dealing with the job shop scheduling problem (JSSP), where LF and DE are used for improving the exploration and exploitation performances, respectively. Data clustering based on WOA was proposed in Canayaz & Özdağ (2017). Qiao et al. (2020) employed a novel improved variant of WOA called IWOA for short-term natural gas consumption forecasting. Khalilpourazari, Pasandideh & Ghodratnama (2018) proposed the utilization of WOA and the Water Cycle Algorithm (WCA) for programming a multi-item economic order quantity model. Pham et al. (2020) proposed the utilization of WOA for the optimal allocation of resources in wireless networks.

WOA
This section briefly describes the mathematical model of the original WOA (Mirjalili & Lewis, 2016), which is inspired by the hunting strategy of humpback whales and consists of three stages: encircling prey, the bubble-net hunting maneuver, and the search for prey.

Encircling prey
During the encircling prey stage, the WOA algorithm emulates the ability of humpback whales to recognize the prey's position and encircle it. In each iteration, the best solution, acting as the leader, is considered the target prey. This behavior is defined in Eqs. (1)-(3):

$\vec{D} = |\vec{C} \circ \vec{X}_{Leader}(t) - \vec{X}(t)|$ (1)

$\vec{X}(t+1) = \vec{X}_{Leader}(t) - \vec{A} \circ \vec{D}$ (2)

$\vec{A} = 2\vec{a} \circ \vec{r} - \vec{a}, \quad \vec{C} = 2\vec{r}$ (3)

where $\vec{X}_{Leader}$ denotes the position vector of the best solution found so far, and $\vec{X}$ represents the position vector of each member of the algorithm population. Furthermore, the sign $\circ$ denotes element-wise multiplication. Notably, to enhance the performance of WOA, the value of $a$ declines linearly from 2 to 0 over the iterations, and $\vec{r}$ is a vector of uniformly distributed random numbers between 0 and 1.
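To make the encircling-prey step concrete, the following is a minimal Python sketch of the update in Eqs. (1)-(3) for one whale; the function name and the list-based vector representation are illustrative and not taken from the reference MATLAB code.

```python
import random

def encircling_update(x, x_leader, a):
    """One encircling-prey step (Eqs. (1)-(3)): per dimension,
    A = 2*a*r - a, C = 2*r, D = |C*x_leader - x|, x_new = x_leader - A*D."""
    new = []
    for xi, li in zip(x, x_leader):
        r = random.random()        # r ~ U(0, 1), drawn per dimension
        A = 2.0 * a * r - a        # |A| shrinks as a decays from 2 to 0
        C = 2.0 * r
        D = abs(C * li - xi)       # distance to the leader (best solution)
        new.append(li - A * D)
    return new

# At the end of the schedule (a = 0), A vanishes and the whale
# lands exactly on the leader.
x_new = encircling_update([1.0, -2.0], [0.5, 0.5], a=0.0)
```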

Exploitation stage: bubble-net attacking maneuver
The bubble-net attacking maneuver, inspired by the hunting behavior of humpback whales, is modeled using two strategies:
1. Declining surrounding strategy: This strategy is achieved by reducing the value of $a$ in Eq. (2). Notably, the range of variation of the vector $\vec{A}$ is directly proportional to $a$, as $\vec{A}$ consists of randomly generated values between $-a$ and $a$.
2. Spiral position update: The whale's displacement towards the prey's position, simulating the spiral motion of humpback whales, is formulated as Eq. (4):

$\vec{X}(t+1) = \vec{D}' \cdot e^{BL} \cos(2\pi L) + \vec{X}_{Leader}(t), \quad \vec{D}' = |\vec{X}_{Leader}(t) - \vec{X}(t)|$ (4)

where $\vec{D}'$ denotes how far the ith whale is from the prey, $B$ is a constant that describes the logarithmic spiral motion, and $L$ is a random value between −1 and 1.
It is important to mention that the selection between the declining surrounding strategy and the spiral position update is equally probable.
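The spiral position update of Eq. (4) can be sketched as follows; the function name is illustrative, and the default B = 1 is an assumption (the paper treats B as a fixed spiral constant without specifying its value here).

```python
import math
import random

def spiral_update(x, x_leader, B=1.0):
    """Spiral position update (Eq. (4)):
    x_new = D' * exp(B*L) * cos(2*pi*L) + x_leader, with
    D' = |x_leader - x| and L ~ U(-1, 1)."""
    L = random.uniform(-1.0, 1.0)  # random value in [-1, 1], shared across dimensions
    return [abs(li - xi) * math.exp(B * L) * math.cos(2.0 * math.pi * L) + li
            for xi, li in zip(x, x_leader)]
```

When a whale already sits on the leader, D' is zero and the spiral leaves it in place, which is the fixed point of this move.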

Search for prey
The search for prey, representing the exploration stage of WOA, is achieved by adjusting the vector $\vec{A}$: setting its absolute value greater than one forces a whale to move relative to a randomly chosen member rather than the leader, which facilitates a global search. Mathematically, this stage can be formulated as Eqs. (5) and (6):

$\vec{D} = |\vec{C} \circ \vec{X}_{rand} - \vec{X}(t)|$ (5)

$\vec{X}(t+1) = \vec{X}_{rand} - \vec{A} \circ \vec{D}$ (6)

where $\vec{X}_{rand}$ denotes the position vector of a randomly chosen solution. It is important to note that the WOA method relies on two main parameters, $\vec{A}$ and $\vec{C}$, which need to be tuned.
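A minimal sketch of the exploration step of Eqs. (5)-(6) is given below; the function name is illustrative, and the random member is drawn uniformly from the population, which is the usual convention.

```python
import random

def search_for_prey(x, population, a):
    """Exploration step (Eqs. (5)-(6)): D = |C*x_rand - x|,
    x_new = x_rand - A*D, where x_rand is a randomly chosen whale.
    With a > 1 it becomes possible that |A| >= 1, which moves the
    search away from the current leader and promotes global search."""
    x_rand = random.choice(population)   # randomly chosen solution
    new = []
    for xi, ri in zip(x, x_rand):
        r = random.random()
        A, C = 2.0 * a * r - a, 2.0 * r
        new.append(ri - A * abs(C * ri - xi))
    return new
```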

CHALLENGES AND ENHANCED HYBRID VERSION OF WOA
This section discusses the challenges of WOA and then introduces a novel improved version of WOA to address them.

Challenges of WOA
In practical applications, we encounter optimization problems with diverse behaviors and levels of complexity. Therefore, researchers strive to find an algorithm that is robust, requires minimal parameter tuning, and offers simplicity and fast convergence (Talbi, 2009). Real-world problems often involve shifted functions, where the global optimal solutions do not reside at the origin of coordinates and vary across dimensions. It is well documented in the literature that many algorithms exhibit reduced performance on shifted functions (Liang, Qu & Suganthan, 2013), which necessitates appropriate modifications.
To investigate this issue with WOA, we conducted experiments using the conventional model of the sphere function (Mirjalili & Lewis, 2016) and its shifted counterpart, known as the shifted sphere function (Suganthan et al., 2005). We aimed to determine the optimal solutions of these functions with 30 dimensions using the WOA, PSO, and DE methods. Each function was independently evaluated over 25 runs, with 300,000 function evaluations (Suganthan et al., 2005) and a population size of 30 for each algorithm. The mean values obtained by WOA for the optimal response of the traditional sphere function and the shifted sphere function were 0 and 0.478, respectively. Figure 1 illustrates the convergence characteristics of the WOA, DE/best/1, and PSO algorithms for both functions. It is worth noting that all algorithm parameters were set according to the recommendations in the original codes, which lead to improved average performance across a wide range of problems. From the figure, it is evident that the original WOA exhibits reduced performance on shifted functions. Therefore, it is crucial to either tune the key controlling parameters or modify the WOA formulation to enhance its efficiency in solving a wider range of engineering and real-world problems.
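The difference between the two test functions used in this experiment can be stated in a few lines of Python; the shift vector below is an arbitrary value for illustration, not the one used in the CEC 2005/2014 definitions.

```python
def sphere(x):
    """Conventional sphere function: the global optimum sits at the origin."""
    return sum(v * v for v in x)

def shifted_sphere(x, shift):
    """Shifted sphere: the optimum moves to `shift`, removing any
    origin bias an algorithm may implicitly exploit."""
    return sum((v - s) ** 2 for v, s in zip(x, shift))

# The origin is optimal for the plain sphere but not for the shifted one.
shift = [1.5] * 30
origin = [0.0] * 30
```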
Another issue with the original WOA is that it only stores the best solution among the entire population in each iteration. In contrast, algorithms like particle swarm optimization (PSO) store the personal best position (Pbest) of each member in each iteration, which enables directing the population members to avoid local optima. Therefore, an enhanced hybrid model of WOA can be developed by leveraging the advantageous features of other algorithms as auxiliary operators. In this study, we present a new efficient hybrid variant of WOA that incorporates the formulations of PSO (Eberhart & Kennedy, 1995) and differential evolution (DE) (Storn & Price, 1997). This hybrid variant is discussed in detail in the next section.

Pbest-guided differential WOA
The storage of only the best solution in WOA, as in GA, is identified as a fundamental weakness of the algorithm based on our investigation. This limitation arises from eliminating many candidate solutions in each iteration that could be useful in subsequent iterations and enhance the algorithm's optimization capability, as observed in the DE and PSO algorithms. Consequently, we can leverage the models and formulations of basic DE, PSO, and their advanced variants, which have gained significant popularity in recent years, to enhance WOA's ability to locate the global optimum of real-world optimization problems.

PSO-based modification:
The first modification proposed in this study involves storing the personal best (Pbest) position of each member in each iteration, denoted by $\vec{X}_{pbest}$, similar to the PSO algorithm. With this Pbest-guided modification, the search equations can be rewritten as Eqs. (7)-(14), where Eqs. (9) and (10) denote the encircling prey phase, Eqs. (11) and (12) model the search for prey phase, and Eqs. (13) and (14) describe the spiral position update. Note that in the proposed algorithm, as in PSO, the personal best position of each member of the population is updated in each iteration whenever a better solution is found.
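The PSO-style bookkeeping that this modification grafts onto WOA can be sketched as follows (assuming minimization; the function name and data layout are illustrative):

```python
def update_pbests(positions, fitnesses, pbest_pos, pbest_fit):
    """PSO-style personal-best bookkeeping: after evaluating the new
    positions, keep the best position each member has ever visited."""
    for i, (x, f) in enumerate(zip(positions, fitnesses)):
        if f < pbest_fit[i]:          # member i improved on its personal best
            pbest_fit[i] = f
            pbest_pos[i] = list(x)
    return pbest_pos, pbest_fit
```

These stored personal bests are exactly the vectors that guide the rewritten search equations, instead of the single leader used by the original WOA.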

DE-based modification:
In the DE-based modification, we incorporate the best position found by each individual over all previous iterations. This enables us to leverage the mutations proposed in the DE algorithm to effectively enhance the original WOA. Therefore, in the second stage of the modification, a mutation phase, as defined in Eq. (16), is added to the formulation of WOA immediately after the main phases of the algorithm, where $\vec{X}_{pbest,r1}$ and $\vec{X}_{pbest,r2}$ are the personal best positions of two solutions randomly chosen from the population for updating each solution. Similarly, rand1 and rand2 are random vectors of dimension D (the problem's dimension), whose elements range between 0 and 1. Subsequently, a random variable randj is generated for each dimension j of each solution, leading to Eq. (17). Here, Cr represents a control parameter, similar to the crossover rate used in evolutionary algorithms.
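Since the exact expressions of Eqs. (16) and (17) appear only in the paper's figures, the sketch below assumes a common pbest-difference form for the mutation and a standard binomial crossover gated by Cr; the function names and the precise mutation form are assumptions for illustration, not the reference implementation.

```python
import random

def de_style_move(x, pbest_r1, pbest_r2):
    """Assumed differential mutation toward two randomly chosen personal
    bests: v = x + rand1*(pbest_r1 - x) + rand2*(pbest_r2 - x),
    with fresh uniform random factors per dimension."""
    return [xi + random.random() * (p1 - xi) + random.random() * (p2 - xi)
            for xi, p1, p2 in zip(x, pbest_r1, pbest_r2)]

def binomial_crossover(x, v, Cr):
    """Crossover in the spirit of Eq. (17): per dimension j, take the
    mutant component when rand_j < Cr, otherwise keep the current one."""
    return [vj if random.random() < Cr else xj for xj, vj in zip(x, v)]
```

With Cr = 1 every dimension is taken from the mutant, and with Cr = 0 the solution is left unchanged, which is why a random Cr interpolates between the two behaviors studied in the experiments.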
Finally, we want to emphasize that we employed the penalty method, a widely used approach for handling constraints in constrained optimization problems. The penalty method utilizes penalty functions to guide the optimization algorithm towards feasible solutions while penalizing infeasible ones. To provide a visual representation of the proposed approach, a flowchart of the Pbest-guided differential Whale Optimization Algorithm (PDWOA) is included in Fig. 2.
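A static penalty scheme of the kind described above can be sketched in a few lines (assuming minimization and inequality constraints written as g_i(x) <= 0; the penalty coefficient is an illustrative choice, not the paper's setting):

```python
def penalized(f_value, constraint_values, penalty=1e6):
    """Static penalty method: each violated inequality g_i(x) <= 0
    adds penalty * violation to the objective, steering the search
    toward feasible solutions."""
    violation = sum(max(0.0, g) for g in constraint_values)
    return f_value + penalty * violation
```

A feasible point keeps its raw objective, while any constraint violation inflates the fitness enough that the population is pushed back into the feasible region.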

SIMULATION RESULTS
The effectiveness of PDWOA is verified using two sets of experiments: first, it is used for solving the CEC 2014 test functions (Liang, Qu & Suganthan, 2013), and then it is applied to three engineering problems.

Solving CEC 2014 test functions using PDWOA
In order to compare the performance of PDWOA with that of the original WOA, 30 test functions with 30 dimensions have been selected from the CEC 2014 test functions (Liang, Qu & Suganthan, 2013). These functions include unimodal (F1-F3), multimodal (F4-F16), hybrid (F17-F22), and composition (F23-F30) functions. For both algorithms, we consider a population size of 30 and 10,000 iterations, i.e., the number of function evaluations (NFEs) performed by each algorithm for each test function is 300,000. To find the optimal solution of each function, 25 separate runs were executed for each algorithm, and statistical analysis was then performed on the results.
A comparative study between DE, PSO, the original WOA, and the proposed PDWOA with three different Cr settings, i.e., a random value and the fixed values of 0.1 and 0.9, is presented in Table 1. In this table, the terms ''Mean'' and ''Std'' represent the average value and standard deviation, respectively, of the results obtained from 25 independent runs for optimizing each function using each algorithm. The term ''Rank'' indicates the ranking of the algorithm's Mean index, reflecting its effectiveness in optimizing the considered function. Additionally, ''NB'' represents the number of functions for which the algorithm achieves the best Mean index, while ''MR'' represents the mean of the Rank indices of the algorithm across all functions. It is evident from the table that the proposed algorithm, with two different Cr tunings, significantly outperforms the original algorithm. Specifically, PDWOA with Cr set to a random value and to 0.1 surpasses the performance of the original WOA on 21 and 24 shifted test functions, respectively. Notably, even in cases where the suggested algorithm exhibits worse performance, the resulting outcomes do not deviate significantly from those obtained by the original WOA.
It can further be seen from the results in Table 1 that the suggested PDWOA attains results of much higher quality for test functions F1, F2, F3, F7, F10, F17, F18, F20, and F30 compared to the original algorithm. Furthermore, the convergence characteristics of the algorithms for some of the test functions are depicted in Fig. 3, which confirms the higher performance and convergence rate of the suggested PDWOA. Table 2 presents the average simulation time of 25 runs for each of the CEC 2014 test functions, with the aim of comparing the computational burden of the proposed PDWOA to that of the original WOA, PSO, and DE algorithms. It is important to note that, due to the small difference in computational burden between the different versions of PDWOA, only the simulation times of PDWOA/Cr = rand are reported in this table. The results indicate that, for 29 and 25 out of the 30 test functions, PDWOA has lower mean simulation times than the DE and PSO algorithms, respectively. However, for 24 out of the 30 test functions, PDWOA has higher mean simulation times than the original WOA. Nonetheless, the maximum increase in mean simulation time caused by the proposed improved version of WOA is only about 8%, occurring for test function F1. This increase is modest considering the degree of improvement in the final solutions. Table 3 displays a comparative analysis of the performance of the selected variant of the proposed Pbest-guided differential Whale Optimization Algorithm (i.e., PDWOA/Cr = rand) and several other state-of-the-art methods, including the Arithmetic Optimization Algorithm (AOA) (Abualigah et al., 2021), Hierarchical Multi-swarm Cooperative TLBO (HMCTLBO) (Zou et al., 2017), the Moth-Flame Optimization algorithm (MFO) (Mirjalili, 2015), the Adaptive Weighted Particle Swarm Optimizer (AWPSO) (Liu et al., 2021), and Gaussian bare-bones gradient-based optimization (GOMGBO) (Qiao et al., 2022). In this table, the symbols '=', '−', and '+' are used to indicate the comparison between the method under consideration and the proposed PDWOA. The symbol '=' represents an equal result, '−' indicates that the method performs worse than the proposed PDWOA, and '+' signifies that the method performs better than the proposed PDWOA. Furthermore, Nw, Nb, and Ne represent the number of times the considered method performs worse than, better than, or equal to the proposed PDWOA, respectively. The table presents a comprehensive comparison of the results achieved by PDWOA in relation to the benchmarked algorithms, shedding light on the efficacy and competitiveness of PDWOA in addressing the CEC 2014 test functions.

Statistical analysis
In this subsection, we present the results of two non-parametric statistical tests conducted to assess the performance of the proposed improved versions of the whale optimization algorithm (WOA), namely ''PDWOA/Cr = rand'', ''PDWOA/Cr = 0.1'', and ''PDWOA/Cr = 0.9''. The tests are the Wilcoxon signed-rank test and the Friedman test, which provide insights into the algorithm rankings and pairwise comparisons.
The Friedman test was used to rank the algorithms based on their mean performance across all benchmark functions. The results of the Friedman test are presented in Table 4. In this table, the Mean Rank index represents the average of the Rank indices of each algorithm over all test functions, while the RankT index shows the rank of each algorithm in the list of sorted Mean Rank indices. Specifically, PDWOA/Cr = rand achieved the best performance with the lowest mean rank of 3.0167, followed by PDWOA/Cr = 0.1 (mean rank: 3.9), PSO (mean rank: 5.9), and PDWOA/Cr = 0.9 (mean rank: 5.9667).
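The mean-rank index underlying the Friedman comparison can be computed as sketched below (lower score = better; tie handling is omitted for brevity, whereas the actual test averages tied ranks):

```python
def friedman_mean_ranks(scores_per_function):
    """Friedman-style mean ranks: on every function, rank the algorithms
    by score (1 = best, lower is better), then average each algorithm's
    rank across all functions."""
    n_alg = len(scores_per_function[0])
    totals = [0.0] * n_alg
    for scores in scores_per_function:
        order = sorted(range(n_alg), key=lambda i: scores[i])
        for pos, i in enumerate(order):
            totals[i] += pos + 1      # rank of algorithm i on this function
    return [t / len(scores_per_function) for t in totals]
```

For example, an algorithm that is best on every function receives a mean rank of 1.0, which is the sense in which PDWOA/Cr = rand's 3.0167 is the lowest in Table 4.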
Additionally, the results of the Wilcoxon signed-rank test with a significance level of 0.05 are presented in Table 5, showing the p-values and confidence intervals for pairwise comparisons between PDWOA/Cr = rand and the other algorithms. In this table, SoPR and SoNR represent the sums of the positive and negative ranks, while MoPR and MoNR represent the average positive and negative ranks, respectively. The notation F(i)<F(j) indicates how many times the first algorithm performs better than the second one, while F(j)<F(i) signifies the opposite scenario. It is important to highlight that in the Wilcoxon test, positive ranks correspond to cases where the first algorithm surpasses the second one. The test results reveal statistically significant differences in performance between PDWOA/Cr = rand and several other algorithms.
PDWOA/Cr = rand was found to perform significantly better than DE/rand/1, PSO, WOA, PDWOA/Cr = 0.9, AOA, AWPSO, GOMGBO, MFO, and LJA, as indicated by the low p-values obtained. The confidence intervals further support this finding, showing that PDWOA/Cr = rand consistently outperformed these algorithms over a wide range of objective function values. However, when comparing PDWOA/Cr = rand with PDWOA/Cr = 0.1, the obtained p-value (0.0661) suggests that the difference in performance between these two algorithms is not statistically significant at the 0.05 level. It is worth noting that PDWOA/Cr = rand still exhibits a slightly better performance trend.
Overall, the results of the statistical tests support the superiority of PDWOA/Cr = rand over the other algorithms tested. It demonstrates consistent and competitive performance, as evidenced by its lower mean rank in the Friedman test and its significant performance advantages in the pairwise comparisons based on the Wilcoxon signed-rank test.

PDWOA for solving constrained engineering optimization
To further demonstrate the optimization power of the suggested algorithm, we selected three renowned engineering problems and solved them with the proposed method. For these problems, the population size of each algorithm is 60 and the number of iterations per run is 1,000. For each problem, optimization was performed over 30 independent runs. All parameters of the compared algorithms are set exactly as recommended in the main references by the algorithm designers.

Welded beam optimal design (engineering problem 3)
The problem is focused on optimally finding four continuous decision variables to minimize the cost of a welded beam (Fig. 6) subject to two linear and five nonlinear inequality constraints. The optimization variables are x1 or h, x2 or l, x3 or t, and x4 or b (Askarzadeh, 2016). The objective is to minimize the fabrication cost

$f(X) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)$

subject to seven inequality constraints $g_1(X)$-$g_7(X)$ that bound the shear stress τ, the bending stress σ, the beam deflection δ, the buckling load, and the geometry; for example,

$g_5(X) = 0.125 - x_1 \le 0$ (33)

where P = 6,000 lb; L = 14 in; E = 30e6 psi; G = 12e6 psi; τmax = 13,000 psi; σmax = 30,000 psi; and δmax = 0.25 in. Table 10 presents the results of the proposed method for solving engineering problem 3 in comparison to several other algorithms, including a cooperative PSO with stochastic movements (EPSO) (Ngo, Sadollah & Kim, 2016), BFOA (Mezura-Montes & Hernández-Ocana, 2008), the T-Cell Algorithm (Aragón, Esquivel & Coello, 2010), CDE (Huang, Wang & He, 2007), CPSO (He & Wang, 2007), the Derivative-Free Filter Simulated Annealing Method (FSA) (Hedar & Fukushima, 2006), TEO (Kaveh & Dadras, 2017), SBO (Ray & Liew, 2003), GA4 (Coello Coello et al., 2002), (l + k)-ES (Mezura-Montes & Coello, 2005), UPSO (Parsopoulos & Vrahatis, 2005), GWO (Mirjalili, Mirjalili & Lewis, 2014), SFO (Shadravan, Naji & Bardsiri, 2019), HGSO (Hashim et al., 2019), WCA (Eskandar et al., 2012), BIANCA (Montemurro, Vincenti & Vannucci, 2013), KO (Minh et al., 2022), TLCO (Minh et al., 2023b; Minh et al., 2023a), POA (Sang-To et al., 2022), CS (Cuong-Le et al., 2021), and NMS-CS (Cuong-Le et al., 2021). Table 11 presents the best solutions found by the original and proposed versions of WOA for this problem. The outcomes demonstrate the efficacy of the suggested PDWOA in attaining high-quality solutions for this optimization problem. The mean results of the compared WOA variants differ only slightly, whereas the proposed algorithm with Cr equal to 0.1 achieved the best final value, as demonstrated in Table 12. As part of future work, an efficient modification can be explored to improve the Mean obtained by PDWOA and align it with the best value. Figure 7 illustrates the average performance of the
suggested algorithm across multiple test functions over 30 independent runs. These results demonstrate the significant improvement the suggested algorithm achieves over WOA. Although the proposed algorithm is robust and effective in many cases, further development could draw on the numerous enhanced versions of PSO and DE; some examples of such enhancements are outlined here. For instance, inspiration can be taken from the colonial competitive differential evolution algorithm (Ghasemi et al., 2016), which distributes the population into several groups and conducts colonial competition between them, applying a specific mutation to each group. A similar mechanism could be implemented in PDWOA by dividing the WOA population into multiple groups, with the best member of each group serving as its leader, and conducting colonial competition among the groups. Additionally, instead of the specific mutation defined in Eq. (17), alternative mutations (or crossover coefficients) could be employed for each group of whales. By leveraging efficient operators from various evolutionary algorithms, population diversity can be increased during iterations. This approach, guided by multiple leaders within distinct groups, allows the population to explore several different areas of the search space, effectively avoiding local optima. In this context, several recent optimization algorithms that divide the population into multiple groups (Mallipeddi et al., 2011; Zhang, 2015; Chen et al., 2018; Band et al., 2022) could be applied to enhance the proposed version of WOA.
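The grouping scheme described above can be sketched in a few lines. The following Python fragment is an illustrative toy, not the authors' MATLAB implementation: it partitions a population into groups, lets each group's best member act as leader, and applies a DE-style move around that leader with greedy selection. All names, the `sphere` objective, and the parameter values are assumptions for demonstration only.

```python
import random

def sphere(x):
    """Toy objective for the demo; any minimization function works."""
    return sum(v * v for v in x)

def group_guided_search(obj, dim=5, pop_size=20, n_groups=4, iters=200,
                        lb=-5.0, ub=5.0, f_scale=0.5, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(iters):
        # Partition the population into groups; each group's best member leads it.
        groups = [list(range(g, pop_size, n_groups)) for g in range(n_groups)]
        for members in groups:
            leader = min(members, key=lambda i: fit[i])
            for i in members:
                a, b = rng.sample(members, 2)
                # DE-style move around the group leader.
                trial = [pop[leader][d] + f_scale * (pop[a][d] - pop[b][d])
                         for d in range(dim)]
                trial = [min(max(v, lb), ub) for v in trial]
                ft = obj(trial)
                if ft < fit[i]:          # greedy selection, as in DE
                    pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Because each group follows its own leader, the groups explore different regions of the search space; the colonial-competition step of Ghasemi et al. (2016), which reallocates members between groups, could be layered on top of this skeleton.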

DISCUSSION AND FUTURE STUDIES
Furthermore, a highly effective adaptive method for selecting Cr was proposed by Zhang & Sanderson (2009) in what is recognized as one of the most powerful DE variants; the mutation equation presented in Eq. (17) draws inspiration from this method. Future studies could explore applying this adaptation strategy to enhance the performance of the suggested method. Other efficient adaptive techniques, proposed by Brest et al. (2006) and Zhu et al. (2013), can likewise be investigated as potential avenues for improving the proposed algorithm. Moreover, several newer DE models proposed in the literature extend beyond the scope of this study but warrant further investigation, including fuzzy adaptive differential evolution (Al-Dabbagh et al., 2014), Gaussian bare-bones differential evolution (Wang et al., 2013), and parallel DE with self-adapting control parameters and generalized opposition-based learning (Wang, Rahnamayan & Wu, 2013). Future studies can examine these models in more detail to explore their potential.
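The adaptive-Cr idea of Zhang & Sanderson (2009) can be summarized as follows: each individual draws its Cr from a normal distribution around a running mean, and the mean is nudged toward the Cr values that actually produced improved trials. The sketch below is a minimal Python rendition of that update rule, with function names and the learning-rate value chosen for illustration, not taken from the authors' code.

```python
import random

def adapt_cr(successful_crs, mu_cr, c=0.1):
    """Move the crossover-rate mean toward the arithmetic mean of the Cr
    values that produced improved trial solutions (JADE-style adaptation)."""
    if not successful_crs:
        return mu_cr           # no successes this generation: keep the mean
    mean_success = sum(successful_crs) / len(successful_crs)
    return (1.0 - c) * mu_cr + c * mean_success

def sample_cr(mu_cr, rng):
    """Draw an individual's Cr from N(mu_cr, 0.1), truncated to [0, 1]."""
    return min(max(rng.gauss(mu_cr, 0.1), 0.0), 1.0)
```

For example, starting from mu_cr = 0.5, a generation whose successful trials used Cr values 0.9 and 0.8 shifts the mean to 0.9 * 0.5 + 0.1 * 0.85 = 0.535, so later individuals tend to sample larger Cr values.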

CONCLUSIONS
In this study, we addressed the limitations of the original WOA, such as susceptibility to getting trapped in locally optimal solutions, particularly in complex real-world problems.
To overcome these drawbacks, we proposed a new, high-performing variant of WOA called PDWOA. The performance of PDWOA was evaluated by comparing it with the original WOA on 30 shifted test functions from CEC 2014, each with a dimension of 30, under identical conditions. The simulation results demonstrated the efficiency of the suggested algorithm in reaching optimal solutions on the test cases. Moreover, the proposed PDWOA algorithm was assessed using two non-parametric statistical tests, the Friedman test and the Wilcoxon signed-rank test, which confirmed its superior performance compared with the other algorithms. Furthermore, PDWOA was applied to three real-world engineering problems, providing additional evidence of its optimization performance. Finally, we discussed powerful algorithmic models from the literature that could be explored in future studies to further enhance the proposed algorithm; by integrating these models into the proposed formulation, we aim to achieve accurate solutions for a wider range of real-world optimization problems.
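As an illustration of how the constrained engineering designs above are evaluated, the welded-beam problem can be expressed as a penalized objective. The sketch below is a Python rendition of the standard welded-beam cost and constraint set (Askarzadeh, 2016) using the constants stated in the problem description; it is not the authors' MATLAB code, and the penalty weight is an arbitrary illustrative choice.

```python
import math

# Problem constants as stated in the text.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIG_MAX, DELTA_MAX = 13000.0, 30000.0, 0.25

def welded_beam_cost(x):
    """Fabrication cost of the welded beam, x = (h, l, t, b)."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def welded_beam_constraints(x):
    """The seven inequality constraints g_i(x) <= 0."""
    x1, x2, x3, x4 = x
    tau_p = P / (math.sqrt(2.0) * x1 * x2)                  # primary shear
    M = P * (L + x2 / 2.0)                                  # bending moment
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0) ** 2)
    tau_pp = M * R / J                                      # secondary shear
    tau = math.sqrt(tau_p**2 + tau_p * tau_pp * x2 / R + tau_pp**2)
    sigma = 6.0 * P * L / (x4 * x3**2)                      # bending stress
    delta = 4.0 * P * L**3 / (E * x3**3 * x4)               # end deflection
    p_c = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2) \
          * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G)))   # buckling load
    return [tau - TAU_MAX, sigma - SIG_MAX, x1 - x4,
            0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
            0.125 - x1, delta - DELTA_MAX, P - p_c]

def penalized(x, weight=1e6):
    """Static-penalty objective: cost plus weighted squared violations."""
    return welded_beam_cost(x) + weight * sum(max(0.0, g) ** 2
                                              for g in welded_beam_constraints(x))
```

Any of the compared metaheuristics can then minimize `penalized` directly; feasible designs incur no penalty, so the comparison in Tables 10 and 11 reduces to comparing the raw costs.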
• Seyedali Mirjalili analyzed the data, authored or reviewed drafts of the article, and approved the final draft.
• Stephen Andrew Gadsden analyzed the data, authored or reviewed drafts of the article, and approved the final draft.
• Pavel Trojovský analyzed the data, authored or reviewed drafts of the article, and approved the final draft.
• Eva Trojovská analyzed the data, authored or reviewed drafts of the article, and approved the final draft.

Rahimnejad et al. (2023), PeerJ Comput. Sci., DOI 10.7717/peerj-cs.1557 25/37
Table 12 presents the best solutions found by the proposed and original versions of WOA. The results indicate that tuning the algorithm's parameters, particularly the value of Cr, can significantly enhance optimization performance. For example, when comparing the final results for F29 (shown in Table 1), the original WOA yielded the best mean value by only a slight margin.
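The role of Cr can be seen in a generic DE loop: Cr is the probability that a trial component is inherited from the mutant rather than the parent, so a small value such as 0.1 changes only a few coordinates per trial. The sketch below is a standard DE/rand/1/bin in Python, not the PDWOA code; the objective and all parameter values are illustrative assumptions.

```python
import random

def sphere(x):
    """Toy objective for the demo."""
    return sum(v * v for v in x)

def de_rand_1_bin(obj, dim=5, cr=0.1, f=0.5, pop_size=20, iters=300,
                  lb=-5.0, ub=5.0, seed=3):
    """Minimal DE/rand/1/bin: cr is the probability that a trial component
    comes from the mutant rather than the parent."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)   # guarantees at least one mutant gene
            trial = []
            for d in range(dim):
                if rng.random() < cr or d == j_rand:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])   # mutant gene
                else:
                    v = pop[i][d]                                 # parent gene
                trial.append(min(max(v, lb), ub))
            ft = obj(trial)
            if ft <= fit[i]:              # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return min(fit)
```

Running this loop with cr = 0.1 and cr = 0.9 on the same problem is a quick way to reproduce the kind of Cr sensitivity reported in Table 12 on one's own test functions.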