An Expanded Heterogeneous Particle Swarm Optimization Based on Adaptive Inertia Weight

In this study, a modified version of the multiswarm particle swarm optimization algorithm (MsPSO) is proposed. The classical MsPSO algorithm suffers from premature stagnation due to limited particle diversity; as a result, it easily slips into a local optimum. To overcome this weakness, this work presents a heterogeneous multiswarm PSO algorithm based on adaptive inertia weight strategies, called A-MsPSO. The MsPSO's main advantages are that it is simple to use and that there are few settings to alter. In the MsPSO method, the inertia weight is a key parameter that considerably affects convergence, exploration, and exploitation. In this manuscript, an adaptive inertia weight is adopted to improve the global search ability of the classical MsPSO algorithm. Its performance is based on exploration, defined as an algorithm's capacity to search through a variety of search spaces. It also aids in determining the capability for searching a small region and locating the candidate solution. If a swarm discovers a global best location during iterations, the inertia weight is increased and exploration in that direction is enhanced. The standard tests and indicators provided in the specialized literature are used to show the efficiency of the proposed algorithm. Furthermore, findings of comparisons between A-MsPSO and six other common PSO algorithms show that our proposal has a highly promising performance for handling various types of optimization problems, leading to both greater solution accuracy and more efficient solution times.


Introduction
Particle swarm optimization (PSO) is a stochastic population-based algorithm developed by Eberhart and Kennedy [1]. PSO was derived from the intelligent collective behavior of animal species (e.g., flocks of birds), especially when searching for food. At the start of foraging, one bird discovers the food source. Then, it is followed very quickly by another and so on. In this process, the information regarding a potential feast is widely disseminated within the group of gulls that fly in search of food in a more or less orderly manner. The gathering takes place through a (voluntary or not) social exchange of information between individuals of the same species: one of them finds a solution and the others adopt it. Kennedy and Eberhart [2] initially attempted to simulate the birds' ability to fly synchronously and to shift path suddenly while staying in optimal formation. Then, they expanded their model into a simple and efficient optimization algorithm in which the particles are individuals moving through the search hyperspace. PSO has been effectively deployed in a number of areas in recent years. Due to its multiple advantages, such as a wide search range, speedy convergence, and ease of implementation, it has been used in a variety of research fields and applications. On the other hand, since the PSO algorithm easily gets stuck in a local optimum, many improvements have been proposed. The enhancements below are organized by the particular gap in PSO they seek to fill.
The Binary PSO. To evolve network architectures, Eberhart and Kennedy introduced a binary version of PSO [3] to represent and solve problems of a binary nature, as well as to compare binary-encoded genetic algorithms (GA) with PSO.
Rate of Convergence Improvements. Different strategies have been suggested to increase the PSO's convergence rate. They usually involve improving the PSO update equations without altering the algorithm structure, which generally enhances local optimization performance on functions with several local minima. There are three parameters that change the PSO update equations: the inertia weight control [4][5][6] and the two acceleration coefficients. In fact, the inertia weight is among the earliest improvements of the original PSO made to further enhance the algorithm's convergence rate. One of the most common enhancements is the implementation of Shi and Eberhart's inertia weights [7]. In 1998, the authors suggested a technique to dynamically adapt the inertia weight using a fuzzy controller [8]. Then, researchers, in [9], demonstrated that the application of a constriction factor is required for the PSO algorithm to converge.
In the same framework and from another analytical view, PSO algorithms have been successfully used in a multitude of industrial applications, yet they are readily trapped in local optima. To solve this issue, several improvements have been made: enhancing particle displacement adjustment parameters and improving topological structures, heterogeneous updating rules, and population grouping using multiple swarms. In [10], Liang and Suganthan proposed a dynamic multiswarm particle swarm optimizer (DMSPSO) wherein the entire population is split into numerous swarms by applying clustering strategies to regroup these swarms and ensure the exchange of information between them. In [11], Yen and Leong suggested two strategies, namely, a swarm growth strategy and a swarm decline strategy, which enable swarms to share information with the best individuals in the population. In addition, in [12], a cooperative method of PSO, which divides the population into four subswarms and significantly affects the convergence of the algorithm, has been introduced. In [13], Li et al. proposed an improved adaptive holonic PSO algorithm that did not consider connections between particles and used a clustering strategy. Obviously, the introduced algorithm enhanced the search efficiency. In [14], two optimization examples were solved using an improved optimization algorithm (OIPSO): optimization of resource constraints with the shortest project duration and optimization of resource leveling at fixed duration. Through these last two, it was proven that the developed algorithm improved the optimization effect of PSO and accelerated the optimization speed. Similarly, in 2014, a new metaheuristic search algorithm was proposed by Ngaam [15]; the population was partitioned into four subswarms: two basic subswarms for exploitation search, as well as an adaptive subswarm and an exploration subswarm.
The main advantage of MsPSO is the ease of using the tuning parameters, which improves the search performance of the algorithm. Despite the reported improvements, they cannot always meet the requirements of applications in various areas. Because of the importance of the inertia weight as a tuning parameter used to ensure convergence and improve accuracy, Zdiri et al. [16] performed a comparative study of the different inertia weight tuning strategies and showed that the adaptive inertia weight is the technique that achieves the greatest precision. In this context, and to assess the MsPSO algorithm's performance, a new modified version, named multiswarm particle swarm optimization algorithm based on adaptive inertia weight (A-MsPSO), is proposed in this study. The main contribution of this article is summarized as follows: the population is split into four subswarms, each of which runs a single PSO. In each subswarm, we use an adaptive inertia weight strategy that monitors the search situation and adapts according to the evolutionary state of each particle. Adaptation is carried out, at each iteration, by a mechanism driven by the best overall fitness (Gbest). The latter is determined from the best overall fitness of each subswarm. This takes place with two constant acceleration coefficients applied to enhance the local search capacity of each subswarm as well as the global exploration and the speed of convergence. The remainder of this paper is structured as follows: Section 2 presents the PSO algorithm. The MsPSO algorithm is defined in Section 3. Section 4 presents the introduced optimization algorithm (A-MsPSO). In Section 5, the relation between the position and the velocity of the particles in each subswarm of A-MsPSO is explained. Section 6 presents the simulation results and shows the efficiency of the introduced algorithm by comparing it with other approaches. Section 7 suggests future work possibilities.
Finally, Section 8 concludes this work.

Definition.
At the outset of the algorithm, the particles of the swarm are randomly distributed in the search space, where each particle has a randomly defined movement speed and position. Thereafter, it is able, at each iteration, to (i) evaluate its current position and memorize the best one it has reached so far, together with the fitness function value in this position, and (ii) communicate with the neighboring particles, obtain from each of them its own best performance, modify its speed according to the best solutions, and, consequently, define the direction of the next displacement. The strategy of particle displacement is explained in Figure 1.

Formalization.
The i-th particle displacement between iteration t and iteration t + 1 is formulated analytically by the two following relations of velocity and position, respectively:

v_i(t + 1) = w v_i(t) + c_1 r_1 (x_pbest,i - x_i(t)) + c_2 r_2 (x_gbest - x_i(t)),
x_i(t + 1) = x_i(t) + v_i(t + 1),

where w is a constant called the inertia weight; x_pbest,i denotes the best historical personal position determined by the i-th particle; x_gbest corresponds to the best overall position found by the population; c_1 designates the cognitive parameter; c_2 is the social parameter; and r_1, r_2 are generated randomly by a uniform distribution in [0, 1]. In what follows, we briefly review some approaches to improving PSO. Among the techniques widely used to increase PSO performance, we can mention hybridization with evolutionary algorithms (EA) and particularly genetic algorithms (GA). Angeline [17] utilized a tournament selection mechanism whose purpose is to replace the position and speed of each poorly performing particle with those of the best performing particle. This movement in space improved performance on 3 of the test functions used, yet it moved away from the classical PSO. Moreover, Zhang and Xie used, in [18], different techniques in tandem in their PSO with differential evolution (DEPSO). The canonical PSO and DE (differential evolution) operators were utilized in alternate generations. Hybridization was efficiently applied to some test functions, and the obtained results indicate that DEPSO can improve the PSO efficiency in solving larger-dimensional problems. Liu and Abraham, in [19], hybridized a turbulent PSO (TPSO) with a fuzzy logic controller. The TPSO was based on the idea that particles stuck in a suboptimal solution lead to the premature convergence of PSO. The velocity parameters were, then, adaptively regulated during optimization using a fuzzy logic extension. The efficiency of the suggested method in solving high- and low-dimensionality problems was tested, with positive results in both cases.
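The update relations above can be sketched in code. The following is an illustrative Python sketch (the array layout and the default parameter values are our own choices, not taken from the paper):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One canonical PSO iteration: velocity update, then position update.

    x, v, pbest: arrays of shape (n_particles, dim); gbest: array of shape (dim,).
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new   # new position, new velocity
```

Each particle is pulled toward its own best position and toward the population's best position, while the inertia term w v keeps part of its previous momentum.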
In particular, when the performance of canonical PSO deteriorated considerably when facing problems of high dimensionality, the TPSO and FATPSO were only slightly affected. Ying Tan described a PSO algorithm based on a cloning process (CPSO), inspired by the natural immune system, in [20]. The method is characterized by its quick convergence and ease of implementation. The applied algorithm enlarged the area adjacent to the potential solution and accelerated the evolution of the swarm by cloning the best individual in the following generations, resulting in greater optimization capacity and convergence. To diversify the population, Xiao and Zuo [21] adopted a multipopulation technique, assigning each subpopulation to a separate peak. Then, they utilized a hybrid DEPSO operator in order to determine the optima in each one. Moving peaks benchmark tests yielded a much lower average offline error than competing solutions. Chen et al. integrated PSO and CS (cuckoo search), in [22], to develop PSOCS, where cuckoos near excellent solutions interacted with each other, migrated slowly towards the ideal solutions, and were led by the global bests in PSO. In [23], a new crow swarm optimization (CSO) algorithm, which combines PSO with the CSA (crow search algorithm) and allows individuals to explore unknown locations while being guided by a random individual, was suggested. The CSO algorithm was tested on several benchmark functions. The suggested CSO's performance was measured in terms of optimization efficiency and global search capability, both of which are vastly enhanced over either PSO or CSA.

Improvement of the Topological Structure.
There are two original versions of PSO: the classic local ring version (Lbest) and the global star version (Gbest). In addition to these two original models, several topologies were proposed. The majority of these enhanced the PSO algorithm's performance. A variety of topologies were introduced and improved, from static topologies to dynamic ones. Some of the widely used and successful topologies are presented below.
(1) Static Topologies. The best known static versions are the global version and the classic local version. In the latter, the swarm converges more slowly than in Gbest but can more reliably locate the global optimum. In general, using the Lbest model reduces the risk of premature convergence of the algorithm, which positively affects the performance of the algorithm, particularly for multimodal problems. We present some more recent examples as follows: (2) Four Clusters [24]. The four-cluster topology uses four groups of particles linked together by several gateways. From a sociological point of view, this topology resembles four isolated communities, in each of which a few particles can communicate with a particle belonging to another community. The particles are divided into four groups, each of which consists of five particles and exchanges information with the other three groups. Each gateway is linked to only one group. This topology is characterized by the number of edges that indirectly connect two particles. The diameter of the graph is three, which means that information can be transmitted from one particle to another by traversing a maximum of three edges.
(3) Wheel or Focal [25]. In this topology, all particles are isolated from each other and are related only to the focal particle. Information is communicated by this particle, which uses the data to adapt its trajectory. In [26], the authors developed a new improved algorithm (FGLT-PSO), named fusion global-local topology particle swarm optimization, which simultaneously utilizes local and global topologies in PSO to escape local optima.
This proposal enhanced the PSO algorithm's performance in terms of solution precision and convergence velocity. The most important studies dealing with this network are represented in Figure 2.
(4) Dynamic Topologies. The authors tried to set up dynamic topologies with better performance than static topologies by changing their structures from one iteration to another.
This change allows particles to alter their travel paths so that they can escape local optima. In the following part, we discuss a set of dynamic topologies: Fitness-distance ratio [27]: it is a variation of PSO based on a special social network; it was inspired by the observation of animal behavior.
This social network does not use a specific geometric structure for communication between particles. In fact, the position and speed of a particle are updated per dimension. For each dimension, the particle has only one informant, which prevents several informants from communicating information that cancels each other out. The choice of this informant, named nbest (best nearest), is based on two criteria: (i) it has to be close to the current particle, and (ii) it must have visited a position with better fitness. Bastos-Filho [28]: in this structure, the swarm is divided into groups or clans of particles. Each clan, containing a fixed number of fully connected particles, admits a leader. The authors added the possibility of particle migration from one clan to another and, at each iteration, the particle with the best location was the leader.

Control of the PSO Algorithm Parameters.
Many recent works focusing on optimization have shown that the best performance can be obtained only by selecting adequate adjustment parameters [29,30]. In PSO, the two acceleration coefficients and the inertia weight are the three main factors affecting the algorithm's performance. They also affect the orientation of the particle in its future displacement in order to guarantee a balance between local search and global search. Their control prevents the swarm from exploding and makes convergence easier. The inertia weight defines the exploration capacity of each particle in order to improve its convergence. A large value implies a large range of motion and, therefore, global exploration; a weak value implies a low range of motion and, thus, local exploration. Therefore, fixing this factor amounts to finding a compromise between local exploration and global exploration. The size of the inertia weight influences the size of the explored hyperspace, and no value can guarantee convergence towards the optimal solution. Like the performance of other swarm algorithms, that of PSO can be improved by selecting the right parameters. Thus, Clerc [31] gave some general guidelines for choosing the right combination. The acceleration coefficients were first used by Kennedy and Eberhart. The social parameter they utilized is the same as the cognitive parameter, with a constant value. Although several enhancements were made, the combination of the parameters w, c_1, and c_2 is the only solution that makes it possible to adjust the balance between the intensification and diversification phases of the search process. For example, in [32], a new optimization strategy for particle swarms (APSO) was formulated; the coefficients are initialized to 2.0 and adjusted adaptively according to the evolutionary state (exploitation, exploration, convergence, and jumping out).
It allows the automated adjustment of acceleration coefficients and inertia weight during runtime, resulting in faster convergence and better search efficiency. PSO presents a major problem called premature convergence. This problem can lead to the stagnation of the algorithm in a local optimum. In fact, much effort has been made by scientists and researchers to enhance the performance of this algorithm. In 2014 [15], a new algorithm, called multiswarm particle swarm optimization, based on the distribution of the population into four subswarms cooperating through a specific strategy, was proposed by Ngaam. In the following section, we present MsPSO in detail.

General Description.
This new optimization algorithm is an improved version of PSO. In this approach, the population is divided into four subswarms (an adaptive subswarm, an exploring subswarm, and two basic subswarms) to obtain the best management of the exploration and the exploitation of the search process in order to reach the best solution.
The subswarm cooperation and communication model is shown in Figure 3. S_1 and S_2 are the basic subswarms, whereas S_3 and S_4 are the adaptive and exploration subswarms, respectively. All the subswarms share information with each other and participate in the overall exploitation. Based on cooperation and information sharing, the particles in S_3 use the speed messages and fitness values of the particles in S_1 and S_2 to refresh their speeds and positions. However, the speed of the particles in S_4 is updated from the concentration of particles in S_1, S_2, and S_3. To improve the PSO convergence rate, several techniques modifying the inertia weights and/or the acceleration coefficients in the algorithm's update equations have recently been developed.

The Inertia Weight in the Literature.
The inertia weight, first introduced by Shi and Eberhart to ensure a balance between exploitation and exploration, controls the effect of the particle's orientation on its future displacement. The authors showed that, if w ∈ [0.8, 1.2], convergence is faster with a reasonable number of iterations; a large value of w facilitates global exploration, whereas a small value makes local exploration easier. Several methods were suggested to solve PSO-based optimization problems by using an inertia weight in the particle velocity equation with some modifications. The strategies for setting the inertia weight can be classified into three groups.

Constant Inertia Weight.
In many introduced methods, PSO-based optimization problems were solved by utilizing a constant inertia weight in a modified particle velocity equation. In [33], for example, Shi and Eberhart used a random value of w to follow the optima:

w = 0.5 + rand( )/2,

where rand( ) is generated randomly by a uniform distribution in [0, 1].
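As a quick illustration, this random strategy amounts to one line of code; a minimal Python sketch, assuming the usual form w = 0.5 + rand( )/2 of Shi and Eberhart's random inertia weight:

```python
import random

def random_inertia_weight():
    # w = 0.5 + rand()/2: uniformly distributed in [0.5, 1.0), with mean 0.75
    return 0.5 + random.random() / 2
```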

Time-Varying Inertia Weight.
In this class, the inertia weight value is determined as a function of time using a specific number of iterations. These strategies can be nonlinear or linear and decreasing or increasing. In what follows, we present some of these methods that have had an impact in the literature. For example, in [34], Xin and Chen introduced a decreasing linear inertia weight efficiently used to enhance the fine-tuning characteristics of the PSO. Besides, in [35], the authors added a chaotic term to a linearly growing inertia weight. Moreover, Kordestani and Meybodi [36] developed an oscillating triangular inertia weight (OTIW) to follow the optima in a dynamic environment. Other methods adjust the inertia weight using feedback parameters [37]. In [38], Rao and Arumugam employed the ratio between the global best fitness and the mean local best fitness of the particles in order to calculate the inertia weight at each iteration and avoid premature convergence towards a local minimum. The adaptive inertia weight strategy has been introduced to improve the search capacity and control the diversity of the population by an adaptive adjustment of the inertia weight.
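For concreteness, the decreasing linear strategy is commonly written as w(t) = w_max - (w_max - w_min) * t / t_max; a small Python sketch (the bounds 0.9 and 0.4 are typical literature values, not taken from this paper):

```python
def linear_decreasing_w(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decrease the inertia weight from w_max at t=0 to w_min at t=t_max."""
    return w_max - (w_max - w_min) * t / t_max
```

Early iterations thus favor global exploration (large w), and late iterations favor local exploitation (small w).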

Limitations and Discussion of MsPSO
Algorithm. The MsPSO algorithm has three main adjustment parameters, w, c_1, and c_2, which significantly affect the convergence of the algorithm. The four subswarms use two constant acceleration coefficients and a time-varying inertia weight. The MsPSO algorithm contains a time-varying inertia weight function, while c_1 and c_2 are adjusted to constant values.
This category of functions can cause incorrect updates of the particle velocity, as no data about the evolutionary state reflecting the population diversity is identified or used. Therefore, the choice of the parameters can disturb the direction of the future displacement of each particle and degrade the algorithm's performance in terms of computation time and convergence. To overcome the mentioned problems, we suggest, in this manuscript, a modified version of the MsPSO algorithm called adaptive multiswarm particle swarm optimization (A-MsPSO), where a mechanism is applied to adjust both the inertia weight and the acceleration coefficients. The advantage of this mechanism is that it considers the evolutionary state of the particles of the four abovementioned subswarms. This enhancement introduces a process of exchange and cooperation between the subswarms to guarantee the best exploration and the most efficient exploitation of the search space.

General Description of A-MsPSO Algorithm.
The A-MsPSO algorithm is used in this paper to enhance the exploration and exploitation of the basic MsPSO; it incorporates four subswarms, an adaptive inertia weight method, and two constant acceleration coefficients. The population is split into four subswarms containing N particles in a search space of dimension D, where each subswarm runs a single PSO and searches for a local optimum or, at the same time, a global optimum. Thus, from the best local individual of the four subswarms, the best global optimum can be obtained as follows: g_best = best(g_best^(sub)), sub = 1, 2, 3, 4. The particle settings guarantee the update of speed and position using the equations of their subswarms (the details are presented in subsection 4.2). The four subswarms maintain specific bindings to enhance the search capability. The cooperation and communication between the subswarms are presented in Figure 4. The pseudocode of the introduced A-MsPSO is provided in Algorithm 1 and the flowchart is shown in Figure 5. The A-MsPSO algorithm contains four subswarms and uses various cooperative strategies to broaden the exploration and exploitation of the MsPSO. Figure 6 describes the mechanism of exploring a new region, where the positions x_1, x_2, and x_3 of the particles of the three subswarms S_1, S_2, and S_3 are randomly placed in the search space; they approach the newly captured Gbest position. At time T, the velocities of each particle are defined randomly, and their positions, at time T + 1, are determined according to the update equations. Position x_4 has the ability to explore a new area, from which another unknown search space is reached using the modification equation (12). The search solutions are obtained from a circle containing the overall optimum. Graphically, the old region and the new one are separated by an arc of a circle. Furthermore, the i-th particle in S_3 is modified according to the fitness values and the velocity of the particles in the basic subswarms.
In S_4, the i-th particle changes its velocity depending on the velocities of the particles in S_1, S_2, and S_3.

A-MsPSO Search Behavior.
The four subswarms collaborate and search for the optimum Gbest solution by using the best solutions of each subswarm. As shown in Figure 4, the global optimum is defined as a function of the global optima of the four subswarms. The expressions of the speed and position of the particles of S_1 and S_2 are given by the following equations:

v_i^(1/2)(t + 1) = w_a v_i^(1/2)(t) + c_1 r_1 (p_best,i - x_i^(1/2)(t)) + c_2 r_2 (g_best - x_i^(1/2)(t)),
x_i^(1/2)(t + 1) = x_i^(1/2)(t) + v_i^(1/2)(t + 1),

where r_1 and r_2 are randomly generated in [0, 1], the superscript (1/2) represents the basic subswarms, and w_a is the adaptive inertia weight, computed from w_min and w_max and the evolutionary state of the swarm; g_best represents the best position shared by the entire population, and w_min and w_max denote its initial and final values, respectively. The particles in the adaptive subswarm S_3 adjust their flight directions on the basis of the best particles in S_1 and S_2. The speed in S_3 is revised in accordance with the velocities and fitness values of the particles in the basic subswarms,
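The exact expressions for w_a are not reproduced here; a common adaptive scheme of this kind raises w toward w_max when the swarm has just improved g_best (favoring exploration in that direction) and lowers it otherwise. A hypothetical Python sketch under that assumption (the step size 0.05 is our own choice, not the paper's):

```python
def adaptive_w(w, improved, w_min=0.4, w_max=0.9, step=0.05):
    """Hypothetical adaptive inertia weight: increase w after an improvement of
    g_best (more exploration), decrease it otherwise; clamp to [w_min, w_max]."""
    w = w + step if improved else w - step
    return min(max(w, w_min), w_max)
```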

where r_3 and r_4 are randomly generated in [0, 1]. The fitness values c_1 and c_2 of the particles in S_1 and S_2, the velocity update equation of S_4 (which combines the velocities v_1 + v_2 - v_3, as annotated in Figure 6), and the position update of S_4 are given by the corresponding equations (Figure 6 illustrates the cooperative evolutionary process in A-MsPSO, including the newly explored region).

Begin
    Set the particle size of each subswarm
    Initialize the positions, velocities, inertia weight, and acceleration factors
    Evaluate each particle's fitness value
    Find the Pbest in subswarms S_1, S_2, and S_3 and the g_best in the population
    Repeat until the maximum number of iterations is reached {
        Calculate the velocities v_1, v_2, v_3, and v_4 in subswarms S_1, S_2, S_3, and S_4
        Update the positions
        Evaluate the fitness of the i-th particle
        Update g_best
        If the guide condition is satisfied
            In each subswarm, use the diversity-guided convergence strategy
        End if
    }
    Return the g_best of the algorithm
End

ALGORITHM 1: The proposed algorithm (A-MsPSO).

The impact factors α_1, α_2, and α_3 are employed to control the effect of the information on the position of the particle in S_4.
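Algorithm 1 can be sketched as a runnable Python skeleton. This is an illustrative reconstruction only: the four subswarms here share one g_best and use the same update rule, whereas the paper gives S_3 and S_4 their own velocity equations and a guide condition; those details are replaced below by a simple adaptive-w placeholder.

```python
import numpy as np

def a_mspso(fitness, dim=2, n_sub=4, n_particles=10, iters=300,
            w_min=0.4, w_max=0.9, c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
    """Skeleton of the A-MsPSO loop: four subswarms run PSO updates and
    share a single g_best (simplified placeholder rules, minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_sub, n_particles, dim))   # positions per subswarm
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                     # personal bests
    pbest_f = np.apply_along_axis(fitness, 2, x)         # personal best fitness
    i = np.unravel_index(np.argmin(pbest_f), pbest_f.shape)
    gbest, gbest_f = pbest[i].copy(), pbest_f[i]         # shared global best
    w = w_max
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(fitness, 2, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        i = np.unravel_index(np.argmin(pbest_f), pbest_f.shape)
        improved = pbest_f[i] < gbest_f
        if improved:
            gbest, gbest_f = pbest[i].copy(), pbest_f[i]
        # placeholder for the adaptive strategy: explore more after an improvement
        w = min(w + 0.01, w_max) if improved else max(w - 0.01, w_min)
    return gbest, gbest_f
```

On a simple objective such as the sphere function f(x) = Σ x_i², this skeleton drives g_best toward the origin.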
In A-MsPSO, information is shared by all particles in various areas. The basic subswarms (S_1 and S_2) provide the velocity and position information for the particles in the other swarms.

Particle Paths: Relation between the Particles Position and Velocity
We examine theoretically, in this section, the trajectories of the particles, the convergence of the A-MsPSO algorithm, and the speed of the particles of each subswarm. Convergence is defined based on the following limit:

lim_{t→∞} x_i(t) = P_i,

where x_i is the position at time t of the i-th particle and P_i corresponds to a local or global optimum. The velocity and position update equations for A-MsPSO with the adaptive inertia weight are those presented in the previous section. The authors, in [39], investigated the convergence of MsPSO and demonstrated that parameter settings affect particle performance. The performed analysis shows that the relation between the particles in the four subswarms can be presented as a system to validate the convergence of the A-MsPSO algorithm.
Applying (2)-(11), we get the A-MsPSO system, where F is the system matrix, R is the constant matrix, and [p^(1), p^(2), p^(3), p^(4), g]^T is the vector consisting of the individual optimums and the single global optimum. In the used symbols, r_ij is a random number in [0, 1], with j = 1, 2 and i = 1, 2, 3. The equations of a particle position in S_1, in subswarm S_3, and in S_4 are formulated by the corresponding update rules. Then, the A-MsPSO system is obtained as shown in (B_1).
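The convergence argument can be illustrated numerically: iterating any linear system of the form z(t+1) = F z(t) + R converges to the fixed point z* = (I - F)^(-1) R whenever the spectral radius of F is below 1. The following Python sketch uses a generic contractive F and R, not the paper's specific matrices:

```python
import numpy as np

# Generic 2x2 illustration of z(t+1) = F z(t) + R with spectral radius < 1.
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])      # eigenvalues 0.5 and 0.4, both below 1
R = np.array([1.0, 2.0])

z = np.zeros(2)
for _ in range(200):
    z = F @ z + R               # the A-MsPSO system has this same linear form

fixed_point = np.linalg.solve(np.eye(2) - F, R)   # z* = (I - F)^(-1) R
assert np.allclose(z, fixed_point)                # the iteration has converged
```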
According to the convergence analysis performed in [40], the particles of each subswarm converge to stable points defined by the corresponding limits.

Experimental Study
This section includes three parts. The first part presents the test functions used in the experimental study. The second part shows the good performance of the A-MsPSO algorithm through computational experiments, while a nonlinear system is used, in the last part, to validate the performance of the suggested method.

Test Function and Parameter Settings.
To test the efficiency of A-MsPSO on the benchmark functions available in the literature, we chose 4 unimodal functions (F_1, F_2, F_3, and F_4) and 11 multimodal functions (from F_5 to F_15) [41]. The formulas and the properties of these functions are shown in Table 1. The utilized experimental machine has a third-generation i3 processor at 2.5 GHz and 128 GB of storage capacity, and the applied programming language is MATLAB.

Results and Discussions.
In this subsection, we compare the introduced method with the most widely used inertia weight strategies, on the one hand, and with other versions of PSO, on the other hand. First, several inertia weight strategies, such as random-MsPSO, constant-MsPSO, linear time-varying-MsPSO, and sigmoid-MsPSO, are analyzed on the 15 test functions. Then, the A-MsPSO algorithm is compared to a number of PSO versions and to the basic MsPSO algorithm. Next, the computational cost of the A-MsPSO algorithm is compared to that of the existing PSO variations. Finally, the performance of this algorithm on the Box and Jenkins gas furnace data is validated.

Comparisons of Inertia Weight Diversity.
The most intensively used inertia weight strategies and the A-MsPSO algorithm are applied to solve the benchmark problems. Table 2 lists the parameters used in each approach. All of the functions were tested on 30 dimensions. The maximum number of iterations was set to 2000, while the number of executions for each test was equal to 30. The performance of the algorithms was measured by the standard deviation and the mean values, as illustrated in Table 3. The values in bold indicate the best solution at a significance level of 5%. In this context, A-MsPSO uses an adaptive parameter adjustment strategy based on the evolving state of each particle, improving performance in terms of overall optimization and convergence velocity. This finding shows that A-MsPSO has a better search capacity compared to the other search algorithms (random-MsPSO, constant-MsPSO, linear-time-varying-MsPSO, and sigmoid-MsPSO) applied on 13 of the functions from F_1 to F_15, the exceptions being F_8 and F_15.
In Figure 7, the vertical axis and the horizontal axis represent the best fitness and the number of iterations, respectively. Each curve shows the evolution of the best fitness for each algorithm over a predetermined number of iterations. The final point of each curve represents the global optimum.
The proposed algorithm converges significantly faster on the functions F1, F2, F3, F4, F5, F6, F9, and F11 than on the functions F7, F8, F10, F12, F13, F14, and F15. Nevertheless, its ability to jump out of a local optimum is improved. The curves for the functions F7, F10, F12, F14, and F15 remain flat after 200 iterations; this shows that the swarm is trapped in a local minimum, and the A-MsPSO method loses its global search capability on these functions.

Comparison of the Introduced Algorithm with Other PSO Algorithms.
The proposed A-MsPSO was compared to six other variants of PSO: LW-PSO, CLWPSO, SIW-APSO, MSCPSO, NPSO, and the standard MsPSO. The parameters of the algorithms are listed in Table 4 according to their references. The PSO variants were applied to the 6 test functions (F1, F2, F3, F4, F5, and F6) in 30 dimensions. The maximum number of iterations was set to 1000, and 30 independent runs of each algorithm were performed. The standard deviation (STD) and mean values (Mean) of the results are listed in Table 5, with the best in bold. From this table, we notice that, on the 4 reference functions F1, F2, F3, and F4, the results obtained by A-MsPSO are better than those provided by the other PSO variants. They show that a search method with an adaptive parameter adjustment strategy based on the evolutionary state of each particle is the most efficient at improving the performance of the PSO algorithm in solving optimization functions, compared to the other variants.
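The core update that all these variants share is the PSO velocity/position rule. The sketch below is a loose illustration, not the paper's exact A-MsPSO: it pairs the standard update with an adaptive rule that raises the inertia weight when the swarm improves its global best (enhancing exploration in that direction, as described for A-MsPSO) and lowers it otherwise. The step size 0.05 and the bounds [0.4, 0.9] are illustrative assumptions; equations (7) and (8) in the paper give the actual form. The acceleration coefficients 1.494 match the values used in the experiments.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w, c1=1.494, c2=1.494):
    """One velocity/position update for a (sub)swarm.

    positions, velocities, pbest: lists of equal-length coordinate lists;
    gbest: coordinate list of the swarm's best-known position.
    """
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])   # cognitive pull
                    + c2 * r2 * (gbest[d] - x[d]))     # social pull
            x[d] += v[d]
    return positions, velocities

def adapt_inertia(w, improved, step=0.05, w_min=0.4, w_max=0.9):
    """Increase w when the swarm found a new global best this iteration,
    otherwise decrease it. A loose reading of the adaptive rule; step and
    bounds here are assumed values, not the paper's equations (7)-(8)."""
    return min(w + step, w_max) if improved else max(w - step, w_min)
```

Each subswarm would call `pso_step` once per iteration and then `adapt_inertia` with a flag saying whether its gbest improved.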

Computational Cost.
The CPU time was used to compare the computational efficiency of the PSO variants; the computational efficiency of each algorithm on a test function f was computed using the following formula:

C_f = T_e / T_tot,

where T_e is the CPU time the algorithm spends computing the benchmark function f, and T_tot denotes the cumulative time of all the algorithms on the function f. Figure 8 shows the relative costs on the test functions: A-MsPSO ranks first in F1 and third in F12 and F14. The comparison of S-MsPSO and A-MsPSO demonstrates that the latter is the slowest on 13 reference functions, while the standard MsPSO is the fastest. This observation means that the computational complexity of the introduced algorithm increases when the adaptive inertia weight is used.
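The cost measure C_f = T_e / T_tot can be computed with a small timing harness like the one below. This is an illustrative helper, not the authors' code: `relative_cost` and its arguments are hypothetical names, and `time.process_time` stands in for whatever CPU timer was actually used.

```python
import time

def relative_cost(algorithms, run):
    """Relative computational cost of each algorithm on one benchmark:
    C_f = T_e / T_tot, where T_e is the algorithm's own CPU time on the
    function and T_tot is the cumulative time of all algorithms on it.

    algorithms: dict mapping a name to whatever `run` needs to execute it.
    run: callable that executes one algorithm on the benchmark function.
    """
    times = {}
    for name, algo in algorithms.items():
        start = time.process_time()
        run(algo)
        times[name] = time.process_time() - start
    t_tot = sum(times.values())
    # Each ratio is in [0, 1] and the ratios sum to 1 over the algorithms.
    return {name: t_e / t_tot for name, t_e in times.items()}
```

By construction the ratios sum to 1, so a larger C_f directly means the variant consumed a larger share of the total CPU time on that function.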

Box-Jenkins Gas Furnace Data.
The Box and Jenkins gas furnace dataset is a benchmark widely used as a standard test in fuzzy modeling and identification techniques. It consists of 296 input/output pairs of observations [u(k); y(k)], k = 1, ..., 296. Methane and air were combined to form a mixture of CO2-containing gases. The input u(k) of the furnace corresponds to the flow of methane, and the output y(k) is the percentage concentration of CO2 in the outgoing gases.
Step 1: the experiment was conducted on the 296 pairs of data. The population size in each of the four subswarms was set to 6, the number of iterations to 50, and the number of rules to 3; the acceleration coefficients were fixed at 1.494 (equation (5)), and the inertia weight was chosen as declared previously according to equations (7) and (8). The model obtained for the candidate characteristic variables is visualized in Figure 9. The A-MsPSO algorithm achieved a performance index of 0.106, the best compared to those of the other algorithms reported in Table 6.
Step 2: the first 148 data pairs were employed as training data to verify the robustness of A-MsPSO, while the remaining 148 data pairs were utilized as test data to assess the prediction performance. The performance of A-MsPSO was then compared with that of the real system on the test data.
Table 2: Description of the various inertia weight adjustment mechanisms.
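The 148/148 split described in Step 2 amounts to the following (the helper name is hypothetical):

```python
def split_box_jenkins(pairs):
    """Split the 296 (u(k), y(k)) observation pairs of the Box-Jenkins
    gas furnace dataset: the first 148 pairs form the training set and
    the remaining 148 the test set, as in the validation experiment."""
    if len(pairs) != 296:
        raise ValueError("expected the 296 Box-Jenkins input/output pairs")
    return pairs[:148], pairs[148:]
```

The split is purely sequential (no shuffling), which matches the time-series nature of the furnace data: the model is trained on the first half of the record and asked to predict the second.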

Future Work Possibilities
In future work, the proposed approach can be applied in several domains: (i) to encrypt images using a 7D hyperchaotic map [30]; (ii) to adjust the hyperparameters of a 4D chaotic map [58]; (iii) to optimize the parameters of a 5D hyperchaotic map with A-MsPSO to perform encryption and decryption [29]; and (iv) in photovoltaic water pumping applications [59]. In addition, future research will focus on a thorough examination of A-MsPSO's applications in increasingly complicated practical optimization problems, in order to analyze its attributes in depth and evaluate its performance.

Conclusion
An adaptive inertia weight approach was utilized in this paper to present a modified version of the multiswarm particle swarm optimization algorithm. Based on the comparison of four strategies for setting the inertia weight in the MsPSO algorithm, the experimental results on higher-dimensional problems demonstrate the ability of the A-MsPSO algorithm to optimize problems with a larger search space. The performed tests led to the conclusion that the MsPSO algorithm with the adaptive inertia weight strategy achieves the best accuracy. Theoretically, the four subswarms of A-MsPSO can also converge towards their own positions of stable equilibrium. Furthermore, the experimental findings show that the suggested algorithm is capable of producing a robust model with improved generalization ability. We expect that this study will be extremely useful to researchers and will inspire them to develop good solutions to optimization problems using the A-MsPSO algorithm.