DCWPSO: particle swarm optimization with dynamic inertia weight updating and enhanced learning strategies

Particle swarm optimization (PSO) stands as a prominent and robust meta-heuristic algorithm within swarm intelligence (SI). It was introduced in 1995, inspired by the foraging behavior of bird flocks. In recent years, numerous PSO variants have been proposed to address various optimization applications. However, the overall performance of these variants has not been deemed satisfactory. This article introduces a novel PSO variant, presenting three key contributions: First, a novel dynamic oscillation inertia weight is introduced to strike a balance between exploration and exploitation; Second, the utilization of cosine similarity and a dynamic neighborhood strategy enhances both solution quality and the diversity of particle populations; Third, a unique worst-best example learning strategy is proposed to enhance the quality of the least favorable solution and consequently improve the overall population. The algorithm's validation is conducted using a test suite comprised of benchmarks from the CEC2014 and CEC2022 test suites on real-parameter single-objective optimization. The experimental results demonstrate the competitiveness of our algorithm against recently proposed state-of-the-art PSO variants and well-known algorithms.


INTRODUCTION
Optimization algorithms are methodologies crafted to explore solutions for optimization problems, with the goal of identifying the most favorable solution based on a predefined criterion (Engelbrecht, 2007). The primary objective of the optimization process is to discover viable solutions that effectively address the given problem while satisfying any constraints. The intricate nature of specific optimization problems has contributed to the increasing prominence of meta-heuristic algorithms for solving optimization problems. Intelligent optimization algorithms have gained significant attention in recent years due to their ability to efficiently solve complex problems across various domains, such as medicine and engineering. These algorithms, inspired by natural phenomena or artificial intelligence principles, leverage advanced computational techniques to explore solution spaces and find optimal or near-optimal solutions. The intersection of nature-inspired computing, machine learning, and optimization has paved the way for the development of highly adaptive and efficient algorithms capable of addressing real-world challenges. One prominent category within this realm is metaheuristic algorithms, which encompass a diverse set of optimization techniques. Metaheuristics such as the genetic algorithm (GA) (Wang, 2003), artificial bee colony (ABC) (Karaboga, 2010), differential evolution (DE) (Price, 1996), simulated annealing (SA) (Bertsimas & Tsitsiklis, 1993), ant colony optimization (ACO) (Dorigo, Birattari & Stutzle, 2006) and particle swarm optimization (PSO) (Kennedy & Eberhart, 1995) mimic natural processes or societal behaviors to iteratively improve candidate solutions.
PSO has emerged as a powerful optimization algorithm, drawing inspiration from the collective behavior of birds and fish. In 1995, Kennedy and Eberhart introduced the PSO algorithm, which navigates the problem space through the continuous adjustment of particles' velocity and position (Kennedy & Eberhart, 1995). Its simplicity, ease of implementation, and ability to explore high-dimensional solution spaces have made it a popular choice for solving complex optimization problems in various domains. Nevertheless, investigation has uncovered shortcomings in the PSO algorithm, particularly in terms of premature convergence and diminished convergence performance, especially as the optimization problem dimension increases (Liang et al., 2006; Mendes, Kennedy & Neves, 2004; Qu, Suganthan & Das, 2012).
The design of a rational and efficient evolutionary strategy has been a prevalent focus among researchers in recent years. Employing a single learning strategy may constrain the intelligence level of each particle, thereby diminishing the performance of PSO on optimization problems with intricate fitness landscapes. Consequently, employing a hybrid learning strategy throughout the entire search process is considered to enhance the diversity of particle populations. In this study, we propose DCWPSO, a PSO variant with a dynamic oscillation inertia weight, a cosine-similarity-based dynamic neighborhood strategy, and a worst-best example learning strategy, which introduces enhancements not only in the selection of inertia weights but also in the learning strategy. The contributions of this article are as follows:
• A novel dynamic oscillation inertia weight is proposed to strike a more effective balance between exploration and exploitation in the algorithm.
• A dynamic neighborhood strategy is proposed, deviating from the singular selection of Pbest and Gbest. Instead, particles are randomly chosen from their respective neighborhoods. This modification proves beneficial in enhancing both the diversity of particle motions and the diversity of particle populations. Additionally, the evolution of particles is fine-tuned by considering the cosine similarity between Pbest and Gbest.
• A worst-best example learning strategy is introduced to fine-tune the worst particles, thereby enhancing the overall performance of the particle population.
The proposed algorithm is analyzed in terms of accuracy, stability, and convergence, together with statistical tests, through experiments comparing it with PSO variants and well-known algorithms.
The remainder of this article is organized as follows: "Related Work" introduces classic PSO, parameter adjustment and strategy hybridization. "Proposed Algorithm" introduces the proposed algorithm. "Experimental Results and Analysis" discusses the setup of the experiments and analyzes the experimental results. "Conclusions and Future Works" gives the conclusion and directions for future work.

RELATED WORK
In this section, the primary focus is on the velocity update mechanism of the canonical PSO algorithm and the key strategies employed by researchers to enhance the PSO algorithm.

Canonical PSO
The canonical PSO updates each particle's velocity and position as

v_i(t+1) = ω·v_i(t) + c_1·r_1·(Pbest_i(t) − x_i(t)) + c_2·r_2·(Gbest(t) − x_i(t))   (1)
x_i(t+1) = x_i(t) + v_i(t+1)   (2)

where Pbest_i denotes the historical optimal solution of particle i, Gbest represents the historical optimal solution of the entire population, the position of the ith particle at the tth iteration is denoted as x_i(t) = (x_{i1}(t), x_{i2}(t), ..., x_{iD}(t)), and the velocity of particle i at the tth iteration is represented by v_i(t) = (v_{i1}(t), v_{i2}(t), ..., v_{iD}(t)). The parameter ω, known as the inertia weight, regulates the impact of the previous velocity on the current velocity. Additionally, r_1 and r_2 are two random numbers drawn from a uniform distribution on [0, 1]. c_1 represents the individual cognitive acceleration coefficient, while c_2 represents the social acceleration coefficient. These coefficients play a crucial role in shaping the behavior of the PSO algorithm.
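As an illustration only (not the authors' implementation), the canonical update of Eqs. (1) and (2) can be sketched as follows; the array layout and parameter values are placeholder assumptions:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One canonical PSO update: Eq. (1) velocity, Eq. (2) position.

    x, v  : (N, D) arrays of positions and velocities
    pbest : (N, D) personal-best positions
    gbest : (D,) global-best position
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)   # uniform [0, 1), drawn per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new          # Eq. (2)
    return x_new, v_new
```

When a particle already sits at both its personal best and the global best, the cognitive and social terms vanish and only the inertia term w·v remains, which is a quick sanity check for the implementation.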

Parameter adjustment
Parameter adjustment in PSO primarily centers on the inertia weight ω and the acceleration coefficients c_1 and c_2. The effectiveness of an optimization algorithm typically relies on achieving a balance between global search and local search across the entire search space. In light of this consideration, an inertia weight is introduced into Eq. (1) for a particle. In previous studies, researchers have proposed various enhancements to inertia weights (Chatterjee & Siarry, 2006; Arumugam & Rao, 2008; Al-Hassan, Fayek & Shaheen, 2006; Panigrahi, Pandi & Das, 2008; Feng et al., 2007). In PSO, c_1 and c_2 are referred to as the cognitive component and the social component, respectively. They serve as stochastic acceleration coefficients responsible for adjusting the particle velocity with respect to Pbest and Gbest. Hence, these two components play a crucial role in achieving the optimal solution rapidly and accurately. Some researchers have dedicated efforts to the selection of these two parameters (Chen et al., 2018; Tian, Zhao & Shi, 2019; Kassoul, Belhaouari & Cheikhrouhou, 2021; Moazen et al., 2023; Sedighizadeh et al., 2021; Harrison, Engelbrecht & Ombuki-Berman, 2018).

Strategy hybridization
In general, there are two main ways to improve PSO through hybrid strategies, as shown in the following: Improving PSO's performance by combining it with other search approaches. Engelbrecht (2016) introduced two adaptations of a parent-centric crossover PSO algorithm, leading to enhancements in solution accuracy compared to the original parent-centric PSO algorithms. The amalgamation of GA and PSO involves the partial integration of gene operations from GA, encompassing selection, crossover, and mutation, into PSO to enhance population diversity (Molaei et al., 2021; Shi, Gong & Zhai, 2022). Inspired by the bee-foraging search mechanism of the artificial bee colony algorithm, Chen, Tianfield & Du (2021) proposed a novel bee-foraging learning PSO (BFL-PSO) algorithm. Singh, Singh & Houssein (2022) proposed a novel hybrid approach known as the hybrid salp swarm algorithm with PSO (HSSA-PSO) for the exploration of high-quality optimal solutions in standard and engineering functions. Hu, Cui & Bai (2017) modified the constant acceleration coefficients by employing the exponential function, based on the combination of the gravitational search algorithm (GSA) and PSO (PSO-GSA). Khan & Ling (2021) proposed a novel hybrid gravitational search PSO algorithm (HGSPSO). The fundamental idea behind this approach is to integrate the local search ability of GSA with the social thinking capability (Gbest) of PSO.
Incorporating topology in the PSO algorithm. Liu & Nishi (2022) proposed a novel strategy for exploring the neighbors of elite solutions. Additionally, the proposed algorithm was equipped with a constraint handling method to enable it to address constrained optimization problems. Lee, Baek & Kim (2008) proposed the repulsive PSO (RPSO) algorithm as a relatively recent heuristic search method. This algorithm was proposed as an effective approach to enhance the search efficiency for unknown radiative parameters. Mousavirad & Rahnamayan (2020) proposed a center-based velocity, incorporating a new component known as the "opening center of gravity factor" into the velocity update rule to formulate the center-based PSO (CenPSO). The center of gravity factor leveraged the center-based sampling strategy, a novel direction in population-based metaheuristics, particularly effective for addressing large-scale optimization problems. Xu et al. (2019) proposed the Two-Swarm Learning PSO (TSLPSO) algorithm, which was based on different learning strategies. One subpopulation constructed learning exemplars using the Dynamic Learning Strategy (DLS) to guide the local search of the particles, while the other subpopulation constructed learning exemplars using a comprehensive learning strategy to guide the global search. Meng et al. (2022) proposed a sorted particle swarm with hybrid paradigms to enhance optimization performance.

PROPOSED ALGORITHM
The pseudo-code for the DCWPSO algorithm is presented in detail as Algorithm 1. The novelty of the proposed algorithm is encapsulated in the following contributions: (1) A new dynamic oscillation inertia weight that achieves a better balance between global and local exploration.
(2) The change involves altering the single method of selecting Pbest and Gbest, fostering increased population diversity. Additionally, cosine similarity is employed to assess the similarity between Pbest and Gbest, directing populations with low similarity to advance.
(3) Strengthening the p worst particles within the population to enhance the overall performance of the particle swarm.

Dynamic oscillation inertia weight
Within the context of PSO, the inertia weight holds significance as a pivotal parameter governing the dynamics of particle movement. Primarily, the role of the inertia weight lies in harmonizing the historical velocities of particles with the influences arising from individual experiences and group synergies. Traditional PSO employs a fixed-value inertia weight, limiting particles' ability to adapt to diverse environments and making the algorithm susceptible to local optima (Kennedy & Eberhart, 1995). Recognizing this limitation, Shi & Eberhart (1999) observed substantial enhancements in PSO performance by introducing a linearly changing inertia weight. While some investigations have utilized linear adaptive weights (Xu & Pi, 2020; Van Den Bergh, 2001; Eberhart & Shi, 2000), it has been acknowledged that, especially in the case of intricate optimization problems, nonlinear adaptive weights offer a better fit to the environment and possess superior dynamic adjustment capabilities (Ratnaweera, Halgamuge & Watson, 2004; Liu, Zhang & Tu, 2020; Chatterjee & Siarry, 2006). This article introduces a novel nonlinear inertia weight represented by Eq. (3).
where r is a random number uniformly distributed in the interval [0, 1]. The parameters ω_max and ω_min are set to 0.9 and 0.4, respectively. Function evaluations (FEs) denotes the current number of evaluations, while the maximum number of function evaluations (MaxFEs) represents the predefined evaluation budget. Figure 1 depicts the trends of the proposed and original weight curves. From Fig. 1, it can be observed that, as the population iterates, the right side of Fig. 1 exhibits a linear decrease, while the left side of Fig. 1 demonstrates a fluctuating descent pattern. Incorporating this fluctuation strategy into the inertia weight helps the population transition more frequently between the searching, following, and escaping stages. This approach enhances the diversity of particle movement and contributes to increased population diversity. This dynamic oscillation method achieves a better balance between the global and local search capabilities of particles, preventing them from becoming ensnared in local optima.
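The concrete formula of Eq. (3) is defined in the article; the sketch below only illustrates the behaviour it describes — a nonlinear decay from ω_max = 0.9 to ω_min = 0.4 over the evaluation budget with a random oscillation superimposed. The specific decay curve and oscillation term here are our illustrative assumptions, not the authors' equation:

```python
import math
import random

def oscillating_inertia_weight(fes, max_fes, w_max=0.9, w_min=0.4, r=None):
    """Illustrative dynamic oscillation inertia weight (NOT the exact Eq. (3)).

    Decays nonlinearly from w_max to w_min as FEs approach MaxFEs, with a
    random oscillation (r in [0, 1]) superimposed so the weight fluctuates
    rather than decreasing monotonically, then is clipped to [w_min, w_max].
    """
    r = random.random() if r is None else r
    progress = fes / max_fes                                  # in [0, 1]
    base = w_min + (w_max - w_min) * (1.0 - progress) ** 2    # assumed nonlinear decay
    oscillation = 0.1 * r * math.cos(10 * math.pi * progress) # assumed fluctuation term
    return min(w_max, max(w_min, base + oscillation))
```

Clipping keeps the weight inside the conventional [0.4, 0.9] range regardless of the oscillation amplitude, mirroring how the proposed weight stays between ω_min and ω_max.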

Cosine similarity and dynamic neighborhood strategy
Throughout the evolution of the particle swarm algorithm, particles guide the population in the pursuit of the optimal solution by assimilating knowledge from historical personal best experiences (Pbest) and global best experiences (Gbest). However, depending solely on these two learning paradigms might not be adequate to provide the population with comprehensive search knowledge. As the iteration progresses into its later stages, the Pbest and Gbest particles gradually converge towards the identified optimal regions, and the particle population may incline towards local optimal solutions due to a shortage of search information. Hence, specific measures can be employed to assess the similarity between particles, followed by the selection of an appropriate learning paradigm. This ensures that all particles gain access to informative search information throughout the evolutionary process. There are two primary methods for assessing the similarity between two vectors in a high-dimensional space (Qian et al., 2004). Generally, whereas cosine similarity characterizes the relative distinction in direction, Euclidean distance characterizes the absolute distinction in objective value. In the PSO algorithm, Pbest and Gbest mainly guide the direction of movement of the particle swarm. Therefore, in this article we use cosine similarity to compute the similarity between these two vectors, guiding population evolution through angular information. Cosine similarity is independent of vector length, relying solely on the direction in which the vector is oriented. The mathematical expression for cosine similarity is denoted by Eq. (4). Figure 2 shows the cosine similarity of two particles, one from each neighborhood.
cos θ = (M · N) / (||M||_2 · ||N||_2)   (4)

where M = [y_1, y_2, ..., y_D] and N = [z_1, z_2, ..., z_D]. M · N denotes the inner product of vectors M and N, and ||M||_2 and ||N||_2 represent the 2-norm of vectors M and N, respectively.
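The cosine similarity of Eq. (4) can be computed directly; a minimal sketch:

```python
import math

def cosine_similarity(m, n):
    """Cosine similarity of Eq. (4): (M . N) / (||M||_2 * ||N||_2).

    Depends only on the direction of the vectors, not on their length.
    """
    dot = sum(a * b for a, b in zip(m, n))       # inner product M . N
    norm_m = math.sqrt(sum(a * a for a in m))    # ||M||_2
    norm_n = math.sqrt(sum(b * b for b in n))    # ||N||_2
    return dot / (norm_m * norm_n)
```

Note that scaling either vector leaves the result unchanged (e.g. [1, 2, 3] and [2, 4, 6] have similarity 1), which is exactly the length-independence property the text relies on.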
The characteristic of neighborhood has been employed in the variable neighborhood search (VNS) algorithm (Mladenović & Hansen, 1997). It has the capability to discover the optimal solution within the current neighborhood and the flexibility to escape the current neighborhood in search of a superior solution. In the classical PSO, the update of each particle is solely determined by the Pbest of an individual particle and the Gbest acquired from the entire particle swarm. This single selection method elevates the probability of particles being trapped in local optima. In this method, the closest K particles are selected to form a neighborhood by calculating the Euclidean distance of all particles from Pbest and Gbest; the equations are presented in Eqs. (5) and (6). A single particle is randomly chosen as the updated reference to guide the entire population within the respective neighborhoods of Pbest and Gbest. Figure 2 shows the neighborhoods of Pbest_i(t) and Gbest, along with the particles within these neighborhoods, where the red particle represents Gbest and the green particle represents Pbest_i(t). The particles depicted in white signify the particles within the solution space. The blue particle and the red particle are randomly selected from the neighborhoods of Pbest_i(t) and Gbest, respectively.
In general, a higher degree of similarity between two learning paradigms implies that their motion directions are more aligned, the positional difference is smaller, and the number of feasible solutions contained between the paradigms is reduced. This results in less information that particles can learn from these paradigms, which hinders the evolutionary process. Conversely, a low degree of similarity indicates relatively independent motion directions and larger positional differences between the paradigms. In such cases, the paradigms encompass more feasible solutions, allowing particles to extract more valuable search information and facilitating the enhancement of solution quality. Therefore, in this method, the expandable range of Pbest and Gbest is augmented, by elevating the particles' knowledge acquisition capability through the neighborhood method, when the similarity between Pbest and Gbest is high (cos θ ≥ 0.5).
The velocity update equation of the cosine similarity dynamic neighborhood strategy is presented as Eq. (7).
Following the description above, the pseudo-code of the update method is detailed in Algorithm 2.
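A hedged sketch of the dynamic neighborhood mechanism described above: the k particles nearest to Pbest and to Gbest (Euclidean distance, as in Eqs. (5) and (6)) form two neighborhoods, and one random member of each replaces Pbest/Gbest in the velocity update. The neighborhood size k and the exact form of Eq. (7) below are illustrative assumptions, not the authors' precise formulation:

```python
import numpy as np

def neighborhood_exemplar(positions, center, k, rng):
    """Pick one exemplar at random from the k particles nearest to `center`
    (Euclidean distance, in the spirit of Eqs. (5) and (6))."""
    d = np.linalg.norm(positions - center, axis=1)  # distance of every particle to center
    neighborhood = np.argsort(d)[:k]                # indices of the k closest particles
    return positions[rng.choice(neighborhood)]

def dn_velocity_update(x, v, positions, pbest_i, gbest, w, c1, c2, k, rng):
    """Sketch of an Eq. (7)-style update: Pbest and Gbest are replaced by
    random members of their respective neighborhoods (assumed form)."""
    np_exemplar = neighborhood_exemplar(positions, pbest_i, k, rng)
    ng_exemplar = neighborhood_exemplar(positions, gbest, k, rng)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (np_exemplar - x) + c2 * r2 * (ng_exemplar - x)
```

Because the exemplar is re-drawn each iteration, two particles with identical Pbest and Gbest can still follow different guides, which is the source of the added diversity.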

Worst-best example learning strategy
In endeavors to enhance the collective performance of a group, the emphasis is often placed on elevating the capabilities of the least proficient individuals rather than solely promoting the top performers. This approach aims to catalyze substantial improvements in the overall group performance. The phenomenon is referred to as the "cask effect", also known as the "short board effect": the water-holding capacity of a wooden bucket is determined not by its longest board but by its shortest board. Lengthening the shortest board and removing the constraints it creates can augment the water storage capacity of the bucket. Similarly, within PSO, the overall performance of the entire population can be enhanced by adjusting the movement direction of the worst p particles. The velocity update equation is given in Eq. (8), where x_w^j is the position of particle j from the worst particle's neighborhood in the current population.
From Eq. (8), the update direction of a worst particle is oriented solely towards the global optimal experience and remains unaffected by individual optimal experiences. This facilitates rapid improvement of the particle. These aspects enable the worst-best example learning strategy to enhance the quality of the population and mitigate the risk of falling into local optima.
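A hedged sketch of this worst-best example learning update. Only the structural idea from the text is reproduced — the worst particle learns from Gbest and a neighborhood member x_w^j rather than from its own Pbest; the coefficients below are assumptions, not the exact Eq. (8):

```python
import numpy as np

def worst_best_update(x_worst, v_worst, x_wj, gbest, w, c2, rng):
    """Sketch of an Eq. (8)-style update for one of the p worst particles.

    No personal-best (cognitive) term: the particle is pulled only towards
    the global best experience and a neighbor x_w^j drawn from the worst
    particle's neighborhood (assumed form of the learning terms).
    """
    r2 = rng.random(x_worst.shape)
    r3 = rng.random(x_worst.shape)
    v_new = w * v_worst + c2 * r2 * (gbest - x_worst) + r3 * (x_wj - x_worst)
    return x_worst + v_new, v_new
```

Dropping the Pbest term is what gives the rapid improvement the text describes: a poor particle's own history carries little useful information, so only the better exemplars steer it.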

Complexity analysis
Time complexity is a key indicator of an algorithm's efficiency. The time complexity of the canonical PSO algorithm is O(N·D), where N is the population size and D is the dimension. The time complexity of DCWPSO mainly comprises two parts: the dynamic neighborhood strategy and the velocity and position update. For the dynamic neighborhood strategy, the distances from each particle's position to Pbest and Gbest are first calculated; the particles are then sorted by these distances, and the k closest particles are selected. The resulting time complexity is O(N²). The time complexity of the update operation is consistent with that of canonical PSO. In summary, the time complexity of DCWPSO is O(N² + N·D), which is slightly higher than the O(N·D) of canonical PSO.

EXPERIMENTAL RESULTS AND ANALYSIS
In this section, the proposed algorithm was rigorously compared with various PSO variants and other well-known algorithms on the CEC2014 and CEC2022 test suites, respectively.Comprehensive statistical analyses were performed to meticulously evaluate and elucidate the comparative performance of these algorithms.

Setup of experiments
To validate DCWPSO, numerous experiments were conducted on complex functions from the CEC2014 (Liang, Qu & Suganthan, 2013) and CEC2022 (Yazdani et al., 2021) test suites. This choice was made due to the high complexity of the functions within the CEC2014 test suite compared to classical functions, rendering them notably challenging to solve. In CEC2014, the 30 functions can be divided into four types according to their properties: unimodal functions (f_1–f_3), simple multimodal functions (f_4–f_16), hybrid functions (f_17–f_22) and composition functions (f_23–f_30). Additionally, 12 functions from the latest CEC2022 suite were selected to further assess the algorithm's capability in addressing contemporary complex optimization problems. Specifically, f_1 is a unimodal function, f_2–f_5 are basic functions, f_6–f_8 are hybrid functions, and f_9–f_12 are composition functions.
Each function is independently run 30 times. To accurately reproduce the performance of the comparison algorithms, the termination criteria are defined as follows: for CEC2014, the maximum number of evaluations is set to D × 10^4, where D is the dimension; for CEC2022, the termination criterion is the maximum number of iterations, set to 10^4. The search range is [−100, 100]^D.

Comparisons of the solution accuracy and stability
The proposed algorithm and the comparison algorithms are tested on the 30-D CEC2014 and 20-D CEC2022 test suites. Tables 3 and 4 list the mean and standard deviation for each function, with the best results denoted in bold. In Table 3, among the 30 test functions, DCWPSO achieves the highest number of best-performing functions, with 12 in terms of mean and 10 in terms of standard deviation; both counts are the highest among all the algorithms. The proposed algorithm achieves the top ranking across all unimodal functions (f_1–f_3), underscoring its effectiveness in solving such functions. For the evaluation of the 13 simple multimodal functions (f_4–f_16), the proposed algorithm secures the third position, following APSO and ADFPSO. Notably, in addressing the hybrid functions (f_17–f_22), the proposed algorithm attains the first rank in f_17, f_18, f_20 and f_21, outperforming the other PSO variants. For the composition functions (f_23–f_30), DCWPSO is ranked second among all algorithms, demonstrating notable proficiency in solving f_23, f_29 and f_30. In terms of solution accuracy across the 30 test functions, DCWPSO ranks first in 12 of them, securing the top overall rank with a significant advantage over the other algorithms. DCWPSO also demonstrates commendable stability in the comparisons. Table 3 presents the standard deviation of the outcomes from 30 independent executions for each of the 30 test functions in CEC2014.
Figure 3 illustrates a subset of these test functions through box plots of their results. The proposed algorithm secured the first rank in 10 of all the tested functions. It not only excels in accuracy but also demonstrates notably competitive stability, holding a substantial advantage in comprehensive performance when compared with other PSO variants. As shown in Table 4, among the 12 test functions, DCWPSO achieves the highest number of best-performing functions, with 9 in terms of mean and 5 in terms of standard deviation; both counts are the highest among all the algorithms. The algorithm proposed in this article ranks first on the unimodal function f_1 and also exhibits the lowest standard deviation, indicating significant advantages in solving unimodal functions. Similarly, for the basic functions (f_2–f_5), the proposed algorithm consistently ranked first, indicating a distinct advantage in solving basic functions. For the hybrid functions (f_6–f_8), the proposed algorithm achieves a strong performance on functions f_6 and f_7, but a mediocre performance on function f_8. For the composition functions (f_9–f_12), the proposed algorithm is ranked first in two out of four functions. This suggests that the DCWPSO algorithm has significant advantages in addressing certain challenging composition functions.
Among the 12 test functions of CEC2022, the proposed algorithm achieved the highest mean value rankings in nine test functions and the lowest standard deviation rankings in five test functions.This demonstrates that the proposed algorithm excels in both accuracy and stability.

Comparisons of convergence performance
This experiment is conducted on the CEC2014 test suite to scrutinize the convergence performance of the DCWPSO algorithm across the four types of functions. To accentuate the performance of the DCWPSO algorithm, only nine convergence curves of representative functions are presented. Specific experimental results are shown in the figures below.
Figure 4 displays the convergence curves of the proposed algorithm alongside those of the comparison algorithms for the CEC2014 functions. The consistent outperformance on functions f_1 and f_2 over the other PSO variants, from the beginning to the end of the iterations, suggests that the proposed algorithm has a clear advantage in solving unimodal functions of this type. For function f_4, the proposed algorithm consistently achieves superior solutions compared to other PSO variants. Analysis of the iteration curves shows its capability to avoid local optima in later stages and find global optimum solutions, highlighting its strength in escaping local optima. For function f_7, the proposed algorithm excels in rapidly achieving superior solutions. While most other PSO variants converge to nearly identical solutions, DCWPSO and APSO require fewer iterations. For function f_12, in the initial stages the proposed algorithm may not discover as optimal a solution as the APSO algorithm. However, as iterations progress, it demonstrates the ability to escape local optima and achieve superior solutions more rapidly than several other PSO variants, underscoring the robust tuning capability of DCWPSO. For functions f_17 and f_20, the proposed algorithm consistently explores new globally optimal solutions during the initial and middle stages of iteration, with gradual convergence observed in later stages. This behavior indicates the algorithm's proficiency in effectively tackling hybrid functions as well. Similarly, for functions f_23 and f_30, the proposed algorithm is also fast in finding better solutions when solving composition functions.

Statistical analysis of experimental results
In this section, we employ two widely recognized statistical tests to assess the efficacy of the proposed DCWPSO algorithm compared to other peer algorithms. Specifically, the Wilcoxon signed-rank test (Derrac et al., 2011) is utilized to determine whether there exists a significant difference between the performance of DCWPSO and those of the other competitors on individual test functions. Additionally, a Friedman test is applied to evaluate the overall performance of all the peer algorithms.

Wilcoxon signed-rank test
To highlight distinctions between DCWPSO and other PSO variants on the CEC2014 test suite, this study employs the Wilcoxon signed-rank test, a nonparametric statistical analysis method. The objective of this test is to scrutinize the performance variations between these algorithms.
The results of the Wilcoxon nonparametric test for DCWPSO and the compared algorithms are shown in Table 5. In the notation "n/+/−/=", n is the number of test functions, and "+", "−" and "=" denote the numbers of functions on which DCWPSO is superior to, inferior to, and equal to the compared algorithm, respectively. The table is indexed by the various function types in CEC2014. It is evident that the proposed algorithm is only slightly worse in the assessment of simple multimodal functions. Nevertheless, it consistently outperforms the other algorithms overall.

Friedman test
A Friedman test is conducted to provide a comprehensive assessment of the performance of the six algorithms. The results are presented in Table 6, with the algorithms arranged in ascending order based on their ranking values (lower values indicating better performance). Furthermore, separate Friedman tests are performed on the four different types of functions, and the outcomes are also detailed in Table 6.
To enhance the presentation of the Friedman test results, we construct a heat map in Fig. 5 illustrating the performance of all algorithms across the four distinct types of test functions and the overall test set. The visual analysis reveals the outstanding performance of the proposed algorithm in solving unimodal functions and hybrid functions, even though the overall results may not surpass ADFPSO.

CONCLUSIONS AND FUTURE WORKS
Addressing the drawbacks of conventional PSO, such as premature convergence and susceptibility to local optima, this article formulates a hybrid learning strategy to enhance the performance of the particle swarm algorithm. Firstly, this study proposes a novel dynamic oscillation inertia weight, which produces oscillatory nonlinear inertia weights during iterations. This methodology achieves a more effective equilibrium between algorithmic exploration and exploitation through the modification of the search process.
Secondly, this study presents a neighborhood learning strategy and cosine similarity to modify the update of particle velocity, based on the observed cosine similarity between the Pbest and Gbest neighborhoods. Finally, to enhance the overall performance of the entire population, this article introduces the worst-best example learning strategy. This strategy facilitates rapid improvement of the worst p particles, contributing to an overall enhancement in effectiveness.
To validate the proposed algorithm's performance, this study conducts experiments comparing accuracy, stability, and convergence, together with statistical analysis. The experimental results indicate that the proposed algorithm generally outperforms peer algorithms in many aspects. However, it is observed that the proposed algorithm shows some limitations in solving multimodal functions. In future work, the integration of

Table 1
The parameter settings of PSO variants.

Table 4
Comparison results of DCWPSO with well-known algorithms on the CEC2022 test set (D = 20). The best values are highlighted in bold.

Table 5
Wilcoxon signed-rank test with different function types on the CEC2014 test suite.

Table 6
Friedman test results on the CEC2014 test suite.