Enhanced Comprehensive Learning Particle Swarm Optimization with Dimensional Independent and Adaptive Parameters

Comprehensive learning particle swarm optimization (CLPSO) and enhanced CLPSO (ECLPSO) are two metaheuristics from the literature for global optimization. ECLPSO significantly improves the exploitation and convergence performance of CLPSO through perturbation-based exploitation and adaptive learning probabilities. However, ECLPSO still cannot locate the global optimum or find a near-optimum solution for a number of problems. In this paper, we study further improving the exploration performance of ECLPSO. We propose to assign an independent inertia weight and an independent acceleration coefficient to each dimension of the search space, as well as an independent learning probability for each particle on each dimension. Like ECLPSO, a normative interval bounded by the minimum and maximum personal best positions is determined with respect to each dimension in each generation. The dimensional independent maximum velocities, inertia weights, acceleration coefficients, and learning probabilities are adaptively updated based on the dimensional normative intervals in order to facilitate exploration, exploitation, and convergence, particularly exploration. Our proposed metaheuristic, called adaptive CLPSO (ACLPSO), is evaluated on various benchmark functions. Experimental results demonstrate that the dimensional independent and adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities significantly improve on ECLPSO's exploration performance, and ACLPSO is able to derive the global optimum or a near-optimum solution on all the benchmark functions in all the runs with appropriately set parameters.


1. Introduction
Particle swarm optimization (PSO) [1,2] is a powerful class of metaheuristics for global optimization. PSO simulates the social behavior of sharing individual knowledge when a flock of birds searches for food. In PSO, the flock and the birds are, respectively, termed the swarm and the particles, and each particle represents a candidate solution. Suppose the problem to be solved has D decision variables; then each particle, denoted as i, "flies" in a D-dimensional search space and is accordingly associated with a D-dimensional velocity V_i = {V_i,1, V_i,2, ..., V_i,D}, a D-dimensional position P_i = {P_i,1, P_i,2, ..., P_i,D}, and a fitness f(P_i) indicating the optimization performance of P_i. The swarm of particles randomly initializes velocities and positions and searches for the global optimum iteratively, and the final solution found is the historical position that exhibits the best fitness value among all the particles. In each iteration (or generation), i updates V_i according to the present value of V_i, the historical position giving i's best fitness value so far (i.e., i's personal best position), and the personal best positions of other particles.
Many different PSO variants have been proposed in the literature since the introduction of PSO in 1995 [3]. For the earliest proposed global PSO (GPSO) [3,4], the global best position with the best fitness value among all the particles' personal best positions is used for particle velocity update. To be specific, in each generation, i's velocity V_i and position P_i are adjusted on each dimension as follows:

V_i,d = wV_i,d + ar(B_i,d - P_i,d) + bs(G_d - P_i,d), (1)
P_i,d = P_i,d + V_i,d, (2)

where d (1 ≤ d ≤ D) is the dimension index; w is the inertia weight; a and b are the acceleration coefficients; r and s are two random numbers uniformly distributed in [0, 1]; B_i = {B_i,1, B_i,2, ..., B_i,D} is i's personal best position; and G = {G_1, G_2, ..., G_D} is the global best position. GPSO is liable to get stuck in a local optimum if the global best position is far from the global optimum. Local PSO (LPSO) [5] sets up a social topology with a shape such as a ring, star, or pyramid. i's neighborhood comprises i itself and the particles that are directly connected with i in the topology. Unlike GPSO, LPSO takes advantage of i's local best position L_i = {L_i,1, L_i,2, ..., L_i,D} that gives the best fitness value among i's neighborhood to guide the flight trajectory update of i, as can be seen from the following equation:

V_i,d = wV_i,d + ar(B_i,d - P_i,d) + bs(L_i,d - P_i,d). (3)

Compared with GPSO, LPSO reduces the chance of getting trapped in a local optimum. With respect to both GPSO and LPSO, i's personal best position and the global/local best position are used for updating V_i on all the dimensions. However, the personal best position and the global/local best position do not always contribute to the velocity update on each dimension. Comprehensive learning PSO (CLPSO) [6] and orthogonal learning PSO (OLPSO) [7] instead encourage i to learn from different exemplars on different dimensions according to equation (4) when updating V_i:

V_i,d = wV_i,d + ar(E_i,d - P_i,d), (4)
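The per-particle GPSO update of equations (1) and (2) can be sketched as follows; the function name and default parameter values are illustrative, not prescribed by the paper.

```python
import random

def gpso_update(V, P, B, G, w=0.7, a=1.5, b=1.5):
    """One GPSO generation step for a single particle (equations (1) and (2)).

    V, P: the particle's velocity and position (lists of length D);
    B: its personal best position; G: the global best position.
    """
    D = len(P)
    for d in range(D):
        r, s = random.random(), random.random()
        V[d] = w * V[d] + a * r * (B[d] - P[d]) + b * s * (G[d] - P[d])
        P[d] = P[d] + V[d]
    return V, P
```

When the personal best and global best coincide with the current position, both attraction terms vanish and only the inertia term w·V remains, which is the behavior the test below checks.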
where E_i = {E_i,1, E_i,2, ..., E_i,D} is i's exemplar position. In CLPSO, i is additionally associated with a fixed learning probability that controls whether E_i,d = B_i,d or E_i,d = B_j,d on each dimension d, where j is a randomly selected particle with j ≠ i. OLPSO sets E_i,d as i's personal best position or the global/local best position on each dimension d with the aid of orthogonal experimental design; OLPSO therefore has two versions: the global version OLPSO-G and the local version OLPSO-L. CLPSO and OLPSO redetermine E_i if i's personal best fitness value f(B_i) does not improve for a consecutive number of generations. CLPSO and OLPSO significantly outperform GPSO and LPSO in terms of preserving the particles' diversity and probing different regions of the search space to obtain a promising solution.
Metaheuristics including PSO need to address three important issues, namely, exploration, exploitation, and convergence. Exploration means searching diversely to locate a small region that possibly contains the global optimum, while exploitation refers to concentrating the search around the small region for solution refinement. Convergence is the gradual transition from initial exploration to ensuing exploitation. We studied enhancing the exploitation and convergence performance of CLPSO in [8], and our proposed PSO variant is called enhanced CLPSO (ECLPSO). In each generation, ECLPSO calculates B̲_d and B̄_d, which are, respectively, the lower bound and the upper bound of all the particles' personal best positions on each dimension d. If the resulting dimensional normative interval [B̲_d, B̄_d] is judged to be indeed small (the criterion in [8] requires two normalized interval statistics to stay within given bounds simultaneously), the swarm of particles enter the exploitation phase on dimension d (i.e., the global optimum or a near-optimum solution has been identified to be likely around the normative interval on dimension d); otherwise, the particles are still in the exploration phase on dimension d (i.e., searching different regions on dimension d). ECLPSO adaptively updates the learning probability of each particle based on the ranking of all the particles' personal best fitness values and the number of dimensions that have entered the exploitation phase. In addition, ECLPSO conducts perturbation on each dimension d that has entered the exploitation phase in order to find a high-quality solution around the normative interval on that dimension.
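The per-dimension normative bounds are simply the componentwise minimum and maximum over all personal best positions; a minimal sketch (the function name is ours):

```python
def normative_intervals(B):
    """Compute the normative interval bounds (B_low_d, B_high_d) on each
    dimension d, i.e., the min and max over all particles' personal best
    positions B (a list of N position lists, each of length D)."""
    D = len(B[0])
    lows = [min(b[d] for b in B) for d in range(D)]
    highs = [max(b[d] for b in B) for d in range(D)]
    return lows, highs
```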
For a PSO variant, the velocity V_i,d of each particle i on each dimension d is usually clamped by a maximum velocity V̄_d, i.e., V_i,d is confined to [-V̄_d, V̄_d]. Previous studies have also indicated that the search process of the particles evolves differently on each dimension; hence, in this paper, we propose to assign an independent inertia weight and an independent acceleration coefficient to each dimension, as well as an independent learning probability for each particle on each dimension. The dimensional independent maximum velocities, inertia weights, acceleration coefficients, and learning probabilities are adaptively updated based on the dimensional normative intervals in order to facilitate exploration, exploitation, and convergence, particularly exploration. We call the variant with the dimensional independent and adaptive parameters adaptive CLPSO (ACLPSO).
We note that existing PSO variants have rarely considered using dimensional independent parameters other than the dimensional independent maximum velocities, and we find only one work [26] as an exception. In [26], Taherkhani and Safabakhsh modified GPSO, CLPSO, and OLPSO with an independent inertia weight and an independent acceleration coefficient for each particle i on each dimension d; the inertia weight and the acceleration coefficient are adaptively adjusted according to the improvement status of i's personal best fitness value and the distance between i's dimensional position P_i,d and i's dimensional personal best position B_i,d to achieve better exploration and faster convergence. The rest of this paper is organized as follows. Section 2 reviews the related work on PSO. The working principles of CLPSO and ECLPSO are elaborated in more detail in Section 3. Section 4 presents our proposed dimensional independent and adaptive parameters and the space and time complexity analysis of ACLPSO. Performance evaluation of ACLPSO on a variety of benchmark functions is given in Section 5. Section 6 concludes this paper.

2. Related Work
Many researchers worldwide have studied PSO. The status quo and research trend of PSO-related research are to investigate multistrategy and adaptivity based on the four typical PSO variants, i.e., GPSO, LPSO, CLPSO, and OLPSO. Multistrategy refers to employing multiple strategies, while adaptivity stands for adaptively setting some parameters as well as appropriately invoking and switching the strategies. Multistrategy and adaptivity aim to realize goals such as exploration, exploitation, and convergence and help the particles efficiently find the global optimum or a near-optimum solution.
2.1. Related Work Based on GPSO/LPSO. Zhan et al. [9] proposed adaptive GPSO; the variant identifies the swarm's evolution status based on the distribution of the distances between each particle and all the other particles; the inertia weight and acceleration coefficients are adaptively adjusted according to the swarm's evolution status to expedite convergence; the variant additionally takes advantage of Gaussian mutation to appropriately impose some momentum on the global best position to help escape from a local optimum. Median-oriented GPSO was studied in [10]; the variant assigns an independent acceleration coefficient to each particle i; during the flight velocity update, i is intentionally guided away from the swarm's median position, which gives the median fitness value among all the particles' fitness values, and i's associated acceleration coefficient is adaptively updated based on i's fitness value, the swarm's worst fitness value, and the swarm's median fitness value so as to help jump out of premature stagnancy in a local optimum and accelerate convergence. Chen et al. [11] introduced an aging mechanism with an aging leader and challengers for GPSO to address exploration; by evaluating the improvement status of the global best fitness value f(G), all the particles' personal best fitness values, and the leader's fitness value, the variant adaptively analyzes the leader's leading capability, adjusts the leader's life span, and generates a challenger through uniform mutation to possibly replace the leader when the leader's life span becomes exhausted.
GPSO augmented with multiple adaptive strategies was presented in [12]; nonuniform mutation and adaptive subgradient search are alternately applied to the global best position, respectively contributing to escaping from a local optimum and conducting local search; the variant also performs Cauchy mutation on a randomly selected particle; as Cauchy mutation hinders convergence, the variant assigns an independent inertia weight and an independent acceleration coefficient to each particle and minimizes the sum of the distances between each particle and the global best position such that the inertia weights and acceleration coefficients are adaptively set and convergence is accordingly accelerated. In [13], LPSO with adaptive time-varying topology connectivity was investigated; for each particle i, the variant determines i's historical contribution status to the global best position and the historical status of i's topology connectivity getting stuck at a threshold value over every 5 consecutive generations and then adaptively updates i's connectivity in the topology; the variant relies on neighborhood search to help the particles whose personal best fitness values cease improving in the present generation to jump out of stagnancy. Xia et al. [14] discussed GPSO with tabu detection and local search in a shrunk space; each dimension d is segmented into 7 regions of equal size; for every 5 consecutive generations, the variant calculates the excellence level of each region on dimension d based on the ranking of all the particles' personal best fitness values and the distribution of all the particles' personal best positions in the regions; according to the excellence level of the region that the global best position belongs to, the variant randomly generates a possible replacement from some other region when appropriate to assist escaping from a local optimum; when the global best position stays in a region on dimension d for 80 consecutive generations, the variant shrinks the dimensional search space to that specific region for the purpose of speeding up convergence; moreover, the variant conducts local search with the aid of differential evolution. Other recent works related to integrating GPSO/LPSO with multistrategy and/or adaptivity include [15-32].
Computational Intelligence and Neuroscience

2.2. Related Work Based on CLPSO/OLPSO. Liang and Suganthan [33] proposed adaptive CLPSO with history learning; for every 20 consecutive generations, the variant adaptively updates each particle's learning probability based on the best learning probability out of all the particles' learning probabilities (i.e., the one having resulted in the biggest improvement of the personal best fitness value) and a Gaussian distribution. Memetic CLPSO was introduced in [34]; the variant employs chaotic local search to help each particle that cannot improve its personal best fitness value for 10 consecutive generations get out of stagnancy and, for solution refinement, applies simulated annealing to any particle whose personal best fitness value continues improving for 3 consecutive generations and whose personal best position is actually the global best position. Zheng et al. [35] studied adaptively determining the inertia weight for CLPSO according to the ratio of the number of particles with improved personal best fitness values in the present generation and adaptively setting the acceleration coefficient by considering the sum of the ratio of each particle's fitness change over the particle's position change in the present generation.
Superior solution guided CLPSO was presented in [36]; for the variant, the set of superior solutions includes not only each particle's personal best position but also other historically experienced positions with excellent fitness values; each particle learns from the superior solutions for velocity update; the variant applies nonuniform mutation on each particle i to help escape from a local optimum, and the mutation is activated only when i's personal best fitness value ceases improving for 50 consecutive generations and the average distance between i's position in the present generation and i's positions in the previous 5 generations is less than a threshold value; the variant additionally takes advantage of some local search techniques (e.g., quasi-Newton, pattern search, and simplex search) to refine the global best position after 80% of the search progress. Qin et al. [37] investigated 4 auxiliary strategies for OLPSO to generate an appropriate exemplar position, respectively for the purposes of preserving diversity, jumping out of premature stagnancy, accelerating convergence, and local search; the variant mutates the global best position to further strengthen exploration. Other recent works including [38-42] are also related to multistrategy and/or adaptivity research based on CLPSO/OLPSO.

3.1. Comprehensive Learning Particle Swarm Optimization. In equation (4), the inertia weight w linearly decreases in each generation, and the acceleration coefficient a is a constant equal to 1.5. Let k_max be the predefined maximum number of generations; w is updated in each generation k according to the following equation:

w = w_max - (w_max - w_min)k/k_max, (7)

where w_max = 0.9 and w_min = 0.4 are, respectively, the maximum and minimum inertia weights. Equation (8) is the empirical expression for setting each particle i's learning probability L_i; all the particles are associated with different learning probabilities:

L_i = L_min + (L_max - L_min)(exp(10(i - 1)/(N - 1)) - 1)/(exp(10) - 1), (8)
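The two schedules can be sketched directly; the closed forms follow the standard CLPSO formulation, with N denoting the number of particles and particles indexed i = 1..N.

```python
import math

def inertia_weight(k, k_max, w_max=0.9, w_min=0.4):
    # Equation (7): linear decrease from w_max to w_min over k_max generations.
    return w_max - (w_max - w_min) * k / k_max

def learning_probability(i, N, L_min=0.05, L_max=0.5):
    # Equation (8): CLPSO's empirical per-particle learning probability.
    # Low-indexed particles get probabilities near L_min, high-indexed near L_max.
    return L_min + (L_max - L_min) * (math.exp(10 * (i - 1) / (N - 1)) - 1) / (math.exp(10) - 1)
```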
where L_max = 0.5 is the maximum learning probability and L_min = 0.05 is the minimum learning probability. For each particle i on each dimension d, a random number uniformly distributed in [0, 1] is generated; if the number is no less than L_i, the dimensional exemplar E_i,d is set as i's own dimensional personal best position B_i,d; otherwise, E_i,d is set as B_j,d for some other particle j. To determine j, two different particles excluding i are randomly selected, and j is the winner with the better fitness value out of the two particles. If E_i,d is the same as B_i,d on all the dimensions, CLPSO randomly chooses one dimension to learn from some other particle's personal best position. CLPSO redetermines i's exemplar position E_i if i's personal best fitness value ceases improving for 7 consecutive generations.
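A sketch of the exemplar-construction rule just described, assuming minimization; the function and variable names are ours, not the authors' code.

```python
import random

def build_exemplar(i, B, fitness, L_i):
    """Construct particle i's exemplar position E_i (CLPSO rule).

    B: all particles' personal best positions; fitness: f(B_j) for each j
    (smaller is better); L_i: i's learning probability.
    """
    N, D = len(B), len(B[i])
    E = list(B[i])
    learned = False
    for d in range(D):
        if random.random() < L_i:  # learn from another particle on dimension d
            j1, j2 = random.sample([j for j in range(N) if j != i], 2)
            j = j1 if fitness[j1] <= fitness[j2] else j2  # tournament of two
            E[d] = B[j][d]
            learned = True
    if not learned:
        # ensure at least one dimension learns from some other particle
        d = random.randrange(D)
        j = random.choice([j for j in range(N) if j != i])
        E[d] = B[j][d]
    return E
```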
CLPSO calculates the fitness value of particle i only if i's position is feasible (i.e., within the dimensional search space [P̲_d, P̄_d] on each dimension d). If i's position is infeasible, then, as all the dimensional exemplars are feasible, i will eventually be drawn back into the search space.

3.2. Enhanced Comprehensive Learning Particle Swarm Optimization. ECLPSO introduces two enhancements, namely, perturbation-based exploitation (PbE) and adaptive learning probabilities (ALPs), to improve the exploitation and convergence performance of CLPSO.
In each generation, regarding each dimension d, if the dimensional normative interval [B̲_d, B̄_d] is indeed small, the PbE enhancement updates the dimensional velocity V_i,d for each particle i according to equation (9) instead of equation (4), where w_PbE = 0.5 is the inertia weight used exclusively for the PbE enhancement; a_PbE = 1.5 is the acceleration coefficient used exclusively for the PbE enhancement; and c is the perturbation coefficient. c is randomly generated from a Gaussian distribution with mean 1 and standard deviation 0.65, and c is clamped to within 10 times the standard deviation on both sides of the mean. Each particle i is pulled towards a perturbed position around the dimensional normative interval. The PbE enhancement contributes to sufficient exploitation around the indeed small dimensional normative interval. Note that V_i,d updated by equation (9) is not limited by the dimensional maximum velocity V̄_d.

The minimum learning probability L_min is fixed at 0.05. As expressed in equation (10), the maximum learning probability L_max logarithmically increases in each generation k, where M_k is the number of exploitation valid dimensions (i.e., the number of dimensions whose normative intervals have ever become indeed small) before or just in generation k; h = 0.25 is the difference coefficient; and q = 0.45 is the rate coefficient. L_max is small (i.e., 0.3) when M_k = 0 and benefits initial exploration. L_max increases rapidly with the particles' exploitation progress to facilitate convergence. The ALP enhancement adaptively determines all the particles' learning probabilities based on the ranking of the particles' personal best fitness values, as in equation (11), where T_i is i's rank. If i gives the best personal best fitness value, then T_i = 1. A low-ranked particle is often better on more dimensions with respect to the personal best position than a high-ranked particle.
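The clamped Gaussian perturbation coefficient c described above can be sketched as follows (the function name is ours):

```python
import random

def perturbation_coefficient(mean=1.0, sd=0.65):
    # c ~ Gaussian(mean, sd), clamped to within 10 standard deviations of the
    # mean on both sides, as described for the PbE enhancement.
    c = random.gauss(mean, sd)
    return max(mean - 10 * sd, min(mean + 10 * sd, c))
```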

4.1. Dimensional Independent and Adaptive Maximum Velocities. Suppose the optimization problem to be solved is f(X) with X being the D-dimensional decision vector, and the global optimum is X* = (X*_1, X*_2, ..., X*_D). CLPSO and ECLPSO fail to observe and address the fact that, on a dimension d, if the dimensional global optimum X*_d is located near either bound of the dimensional search space [P̲_d, P̄_d] and all the particles' dimensional personal best positions are scattered (i.e., the dimensional normative interval [B̲_d, B̄_d] is large), then it is difficult for the swarm of particles to locate X*_d; this is because the dimensional maximum velocity V̄_d used in equation (4) is restricted to be 20% of [P̲_d, P̄_d]. Figure 1 illustrates this phenomenon. In Figure 1(a), a particle i's dimensional position P_i,d and X*_d are close to different bounds of [P̲_d, P̄_d], E_i,d is located in between, and i needs at least 2 generations to reach around E_i,d. As can be seen from Figure 1(b), when i flies past E_i,d, i's dimensional velocity update is influenced by two forces, i.e., the inertia force wV_i,d and the exemplar force ar(E_i,d - P_i,d); the farther i is away from E_i,d, the stronger the exemplar force pulling it back to E_i,d. In Figure 1(c), X*_d is around one bound, and P_i,d is not that far from X*_d; however, E_i,d is close to the other bound and guides i to fly away from X*_d. As a result, the chance for a particle to reach close to the dimensional global optimum is small. Furthermore, in case the dimensional global optimum is located near a dimensional search space bound and the dimensional normative interval is large on a significant number of dimensions, CLPSO and ECLPSO would fail to find the global optimum or a near-optimum solution, e.g., on Rosenbrock's function, rotated Schwefel's function, and rotated Rastrigin's function, in all the runs as reported in [8]. Therefore, V̄_d should not be fixed at 20% of [P̲_d, P̄_d].
We propose to adaptively adjust V̄_d in each generation according to the following equation:

V̄_d = s(B̄_d - B̲_d), (12)

where s is the scaling coefficient and is a positive value. V̄_d is thus positively related with the dimensional normative interval's size B̄_d - B̲_d. When [B̲_d, B̄_d] is large, V̄_d is large and contributes to timely flight for getting close to X*_d; on the contrary, V̄_d is small when [B̲_d, B̄_d] becomes small in order to benefit fine-grained search.
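With equation (12) read as above (the velocity bound proportional to the normative interval, scaling coefficient s), clamping a dimensional velocity can be sketched as follows; the helper name is ours.

```python
def clamp_velocity(v, B_low, B_high, s=1.0):
    # Clamp a dimensional velocity to the adaptive maximum of equation (12):
    # V_max_d = s * (B_high - B_low), where [B_low, B_high] is the dimensional
    # normative interval and s > 0 is the scaling coefficient.
    v_max = s * (B_high - B_low)
    return max(-v_max, min(v_max, v))
```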
Allowing each particle i's position P_i,d on each dimension d to be infeasible also inhibits the particles from moving close to X*_d. Figure 1(d) shows an example: X*_d is near one bound of [P̲_d, P̄_d]; P_i,d trespasses that bound and is infeasible; and E_i,d is around the opposite bound; because of the force imposed by E_i,d, P_i,d is pulled back to a feasible dimensional position far from X*_d. Accordingly, we propose that an infeasible dimensional position be repaired immediately by reinitialization between the previous feasible dimensional position and the trespassed dimensional search space bound [43,44].
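The repair rule just described can be sketched per dimension as follows (function name ours):

```python
import random

def repair_position(p_new, p_prev, lower, upper):
    """Repair an infeasible dimensional position by reinitializing it uniformly
    between the previous feasible dimensional position p_prev and the
    trespassed bound of [lower, upper], as described above (cf. [43, 44])."""
    if p_new < lower:
        return random.uniform(lower, p_prev)
    if p_new > upper:
        return random.uniform(p_prev, upper)
    return p_new  # already feasible; leave unchanged
```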

4.2. Dimensional Independent and Adaptive Inertia Weights and Acceleration Coefficients.
For CLPSO and ECLPSO, the inertia weight w used in equation (4) is initially large, resulting in a large inertia force that is helpful for exploration, and w linearly decreases in each generation to gradually reduce the inertia force for the purpose of facilitating convergence and solution refinement. As w is dynamically updated according to the generation counter k in equation (7), w might obstruct exploration if the swarm of particles has not yet found the global optimum or a near-optimum solution even when k is large, and w might also impede convergence if a promising solution has already been located and the particles could thus start solution refinement even when k is small. In addition, the same w is used with respect to all the dimensions, yet the search processes of the particles often evolve differently on different dimensions, i.e., taking different numbers of generations for the exploration phase. We thus propose to assign an independent inertia weight w_d for each dimension d to replace w in equation (4); w_d is adaptively updated based on the dimensional normative interval according to equations (13) and (14), where u is the tradeoff coefficient and is a positive number that adjusts the tradeoff between the two terms involved. We further propose to assign an independent acceleration coefficient a_d for each dimension d to replace a in equation (4). w_d and a_d must satisfy the so-called stability condition given in equation (15) [26,45,46]. Hence, a_d is simply adaptively adjusted according to equation (16).

4.3. Dimensional Independent and Adaptive Learning Probabilities. Regarding each particle i in CLPSO and ECLPSO, a large value for i's learning probability L_i enables i to learn more from its own personal best position for velocity update and hence is beneficial for solution refinement, while a small value for L_i lets i learn more from other particles' personal best positions and accordingly encourages i to search diversely. L_i is adaptively updated based on i's fitness rank T_i and the number of exploitation valid dimensions M_k in each generation k. A serious issue occurs if M_k is 0 or a small value in all the generations, e.g., as reported on Rosenbrock's function, rotated Schwefel's function, and rotated Rastrigin's function in [8]; a small M_k leads to small learning probabilities for the swarm of particles and fails to realize convergence. We propose to assign an independent learning probability L_i,d for each particle i on each dimension d and adaptively set L_i,d in each generation k as follows, where L_min = 0.05; L_max = 0.75; and v is the learning probability-based coefficient and is a positive number no greater than 1. The term log_{k_max} k grows logarithmically with the generation counter k in order to facilitate convergence.
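The term log_{k_max} k is the logarithm of k to base k_max; a minimal sketch showing that it grows from 0 towards 1 over the run:

```python
import math

def log_term(k, k_max):
    # log base k_max of k, computed via the change-of-base identity.
    # Rises from 0 at k = 1 to 1 at k = k_max, pushing the learning
    # probabilities upward late in the search to favor convergence.
    return math.log(k) / math.log(k_max)
```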

4.4. Workflow and Complexity Analysis. ACLPSO is our proposed PSO variant based on ECLPSO with dimensional independent and adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities. In each generation, after the dimensional normative intervals and the dimensional independent parameters are updated, the main steps of ACLPSO are as follows:

Step 5: for each particle i on each dimension d, if the dimensional normative interval [B̲_d, B̄_d] is indeed small, update V_i,d according to equation (9); otherwise, update the dimensional inertia weight w_d according to equations (13) and (14), the dimensional acceleration coefficient a_d according to equation (16), and V_i,d according to equations (4) and (6); update P_i,d according to equation (2), and repair P_i,d if P_i,d trespasses [P̲_d, P̄_d].

Step 6: for each particle i, evaluate i's fitness and update i's personal best position and fitness value if improved.

Step 7: output the global best position with the best fitness value among all the particles' personal best positions.

As analyzed in [8], the space and time complexities of ECLPSO are, respectively, O(ND) bytes and O(k_max(N log N + ND)) basic operations plus O(k_max N) function evaluations (FEs). Concerning ACLPSO, storing the dimensional independent inertia weights and acceleration coefficients requires O(D) bytes, and storing the dimensional independent learning probabilities needs O(ND) bytes. Adaptively updating the dimensional independent maximum velocities, inertia weights, and acceleration coefficients calls for O(k_max D) basic operations, and adaptively adjusting the dimensional independent learning probabilities demands O(k_max ND) basic operations. Therefore, the space and time complexities of ACLPSO are the same as those of ECLPSO.
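The overall loop structure can be sketched as below. This is a structural sketch only, not the authors' algorithm: the PbE branch, equations (13)-(16), exemplar re-determination, and the per-dimension learning probabilities are omitted, and the constants 0.7 and 1.5 are illustrative stand-ins for the adaptive w_d and a_d; the only ACLPSO-specific element retained is the adaptive maximum velocity of equation (12).

```python
import random

def aclpso_skeleton(f, lower, upper, D, N=40, k_max=200, s=1.0):
    """Structural sketch (minimization) of the ACLPSO main loop."""
    P = [[random.uniform(lower, upper) for _ in range(D)] for _ in range(N)]
    V = [[0.0] * D for _ in range(N)]
    B = [list(p) for p in P]              # personal best positions
    fB = [f(p) for p in P]                # personal best fitness values
    for _k in range(k_max):
        # dimensional normative intervals over all personal bests
        B_low = [min(B[i][d] for i in range(N)) for d in range(D)]
        B_high = [max(B[i][d] for i in range(N)) for d in range(D)]
        for i in range(N):
            for d in range(D):
                j = random.randrange(N)   # crude stand-in for exemplar selection
                E_d = B[j][d]
                # equation (12): adaptive dimensional maximum velocity
                v_max = s * (B_high[d] - B_low[d]) or (upper - lower)
                V[i][d] = 0.7 * V[i][d] + 1.5 * random.random() * (E_d - P[i][d])
                V[i][d] = max(-v_max, min(v_max, V[i][d]))
                P[i][d] += V[i][d]
                P[i][d] = max(lower, min(upper, P[i][d]))  # simple repair
            fi = f(P[i])
            if fi < fB[i]:
                fB[i], B[i] = fi, list(P[i])
    g = min(range(N), key=fB.__getitem__)
    return B[g], fB[g]
```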

5.1. Experimental Settings. The experimental hardware platform is a Microsoft Surface Pro laptop computer with an Intel Core i5-7300U central processor at a frequency of 2.6 GHz, 8 GB internal memory, and a 256 GB solid-state disk. The operating system is 64-bit Windows 10.
16 commonly studied 30-dimensional functions [6-8, 24, 47] are used in this paper for benchmarking ACLPSO and other PSO variants. The name, the expression, the global optimum, the function value of the global optimum, the search space, and the initialization space of each function are listed in Table 1. The functions are classified into 5 categories, namely, unimodal, multimodal, shifted, rotated, and shifted rotated. Rosenbrock's function f_3 is unimodal in a 2-dimensional or 3-dimensional search space but is multimodal in higher-dimensional cases [48]; it features a narrow valley from perceived local optima to the global optimum. With the incorporation of the cosine term cos(2πX_d), there are a large number of regularly distributed local optima for Rastrigin's function f_5. Ackley's function f_7 has a deep global optimum and many minor local optima. Griewank's function f_8 contains a cosine multiplication term (the product of cos(X_d/sqrt(d)) over all the dimensions) that causes linkages among the decision variables; f_8 is similar to f_5 in terms of many regularly distributed local optima. Schwefel's function f_9 has a global optimum that is distant from the local optima. With respect to the unimodal and multimodal functions f_1 to f_9, the dimensional values of the global optimum are the same on all the dimensions. A shifted function shifts the global optimum X* to a vector Z that can be different on each dimension. A rotated function multiplies the original decision vector X by an orthogonal matrix O to get a rotated decision vector Y = XO; because of the rotation, if one dimension of X changes, all the dimensions of Y get affected. A shifted rotated function is both shifted and rotated.
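Several of the base functions have standard closed forms, sketched here before any shift or rotation (the subscript comments match the numbering in Table 1):

```python
import math

def sphere(X):          # f1: unimodal
    return sum(x * x for x in X)

def rosenbrock(X):      # f3: narrow valley towards the global optimum at (1, ..., 1)
    return sum(100 * (X[d + 1] - X[d] ** 2) ** 2 + (X[d] - 1) ** 2
               for d in range(len(X) - 1))

def rastrigin(X):       # f5: many regularly distributed local optima
    return sum(x * x - 10 * math.cos(2 * math.pi * x) + 10 for x in X)

def griewank(X):        # f8: the cosine product links the decision variables
    return (sum(x * x for x in X) / 4000
            - math.prod(math.cos(x / math.sqrt(d + 1)) for d, x in enumerate(X))
            + 1)
```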
The shifted global optima of the shifted functions f_10 to f_12 and the shifted rotated function f_16 can be found in [47]. The orthogonal matrices of the rotated functions f_13 to f_15 and the shifted rotated function f_16 are generated by Salomon's method [49]. The initialization spaces of the functions f_1, f_2, f_4, f_5, f_6, f_7, f_8, f_13, and f_15 are intentionally set to be asymmetric.
We conduct experiments to investigate the following 3 issues: (1) what are the key parameters of ACLPSO, and how do they impact the performance of ACLPSO? (2) How do the dimensional independent and adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities improve the performance of ACLPSO? (3) How does the performance of ACLPSO compare with that of other PSO variants? We consider 3 variants of ACLPSO, i.e., ACLPSO-1, ACLPSO-2, and ACLPSO-3. They are the same as ACLPSO, except that ACLPSO-1 does not repair the dimensional position P_i,d for each particle i on each dimension d if P_i,d trespasses the dimensional search space [P̲_d, P̄_d], ACLPSO-2 does not adopt the dimensional independent and adaptive inertia weights and acceleration coefficients, and ACLPSO-3 does not take advantage of the dimensional independent and adaptive learning probabilities. Besides ACLPSO-1, ACLPSO-2, and ACLPSO-3, ACLPSO is further compared with CLPSO [6], ECLPSO [8], OLPSO-L [7], adaptive GPSO (AGPSO) [9], feedback learning GPSO with quadratic inertia weight (FLGPSO-QIW) [15], and GPSO with an aging leader and challengers (ALC-GPSO) [11]. ACLPSO, ACLPSO-1, ACLPSO-2, ACLPSO-3, CLPSO, and ECLPSO are all implemented in Java; the number of particles N is set to 40, and 25 runs are executed on each function. The parameters of CLPSO, ECLPSO, OLPSO-L, AGPSO, FLGPSO-QIW, and ALC-GPSO take the recommended values that were empirically determined based on extensive experiments on various benchmark functions in [6-9, 11, 15]. Note that the value of N can differ among PSO variants: N is fixed at 40 for CLPSO, ECLPSO, and OLPSO-L in [6-8], while it is equal to 20 for AGPSO, FLGPSO-QIW, and ALC-GPSO in [9, 11, 15]. As we do not have the source codes of OLPSO-L, AGPSO, FLGPSO-QIW, and ALC-GPSO, we directly copy the results of these 4 variants from [7, 24] for performance comparison.
For all the compared PSO variants, each run consumes 200,000 FEs.

5.2. Experimental Results and Discussion.
The mean and standard deviation (SD) global best fitness value results of ACLPSO with different (s, v) combinations on all the functions are listed in Table 2.

Figure 1: Illustration of why it is difficult for particle i to reach the dimensional global optimum X*_d on dimension d. (a) P_i,d and X*_d are close to different bounds of [P̲_d, P̄_d], and E_i,d is located in between.

The mean and SD global best fitness value results of ACLPSO-1, ACLPSO-2, and ACLPSO-3 are listed in Table 4. A two-tailed t-test with degree of freedom 48 and significance level 0.05 is performed between the global best fitness value results of ACLPSO with the best (s, v) combination and those of ECLPSO on each function, and the t-test results on all the functions are listed in Table 5. Table 6 gives the mean and SD execution time results of ACLPSO with the best (s, v) combination, ECLPSO, and CLPSO on all the functions. Table 7 lists the mean and SD global best fitness value results of ACLPSO with other parameter settings on f_3, f_4, f_12, and f_14. The mean and SD global best fitness value and execution time results of CLPSO with other parameter settings on f_1 and f_2 are given in Table 8. Figure 2 illustrates the changes of the global best fitness value during the search process of ACLPSO with the best (s, v) combination and in the best run on f_2, f_3, f_4, f_12, f_13, and f_14.
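The pooled two-sample t statistic underlying such a test (two samples of 25 runs give 25 + 25 - 2 = 48 degrees of freedom) can be sketched as follows; comparing |t| against the 0.05 two-tailed critical value (approximately 2.01 for df = 48) decides significance. The helper name is ours.

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled two-sample t statistic and its degrees of freedom
    (df = len(a) + len(b) - 2), matching the two-tailed test with
    48 degrees of freedom used for two result sets of 25 runs each."""
    n1, n2 = len(a), len(b)
    # pooled sample variance
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```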
As can be seen from Table 2, the best (s, v) combinations are, respectively, IV, IV, III, IV, I, II, II, II, I, I, II, IV, II, II, II, and IV on the 16 functions. ACLPSO with the best (s, v) combination is able to find the global optimum or a near-optimum solution on each function in all the 25 runs. ACLPSO is likely to get trapped in an unsatisfactory local optimum on f_3 with combinations II and IV, on f_4 with combinations I, II, and III, on f_5 with combinations II and IV, on f_9 with combinations II, III, and IV, on f_10 with combinations II and IV, on f_12 with combinations I and II, on f_13 with combinations I, III, and IV, and on f_14 with combinations III and IV. The accuracy of the mean global best fitness value with the best (s, v) combination is noticeably excellent on f_1, f_2, f_5, f_6, f_7, f_8, f_11, f_15, and f_16. These observations indicate that s and v are the key parameters of ACLPSO, and the performance of ACLPSO is sensitive to their values. The normative interval scaling coefficient s determines the search granularity; a large s encourages the particles to search at a large granularity so as to escape from an unsatisfactory local optimum and locate the global optimum or a near-optimum solution, whereas a small s lets the particles search at a small granularity so as not to miss the deep global optimum or a deep near-optimum solution during the search process.
The learning probability-based coefficient v controls, for each particle, the number of dimensions on which the particle learns from its own personal best position; large and small values for v, respectively, contribute to the exchange of valuable information among the particles and the preservation of valuable information embodied in each particle; a large v also benefits convergence and exploitation but might lead to premature stagnancy and hinder exploration in case some valuable information about the global optimum or a near-optimum solution is not preserved. We can see from Table 3 that … In Table 4, ACLPSO-1, ACLPSO-2, and ACLPSO-3 take the same values for s and/or v as the best (s, v) combination of ACLPSO on each function. The mean global best fitness value results of ACLPSO-1 are worse than those of ACLPSO on f_1, f_2, f_3, f_10, … The results of ACLPSO-1, ACLPSO-2, and ACLPSO-3 validate that the repairing of a particle's dimensional position when it trespasses the dimensional search space, the dimensional independent and adaptive inertia weights and acceleration coefficients, and the dimensional independent and adaptive learning probabilities are all appropriate to be employed by ACLPSO.