A Novel Particle Swarm Optimization with Improved Learning Strategies and Its Application to Vehicle Path Planning

In order to balance the exploration and exploitation capabilities of the PSO algorithm and enhance its robustness, this paper presents a novel particle swarm optimization with improved learning strategies (ILSPSO). Firstly, the proposed ILSPSO algorithm uses a self-learning strategy, whereby each particle stochastically learns from any better particle among the current personal history best positions (pbest), and the self-learning strategy is adjusted by an empirical formula that relates the learning probability to the evolution iteration number. The cognitive learning part is improved by the self-learning strategy, and the optimal individual is reserved to ensure the convergence speed. Meanwhile, based on the multilearning strategy, the global best position (gbest) of the particles is replaced with a position randomly chosen from the top k gbest solutions, which further improves the population diversity and prevents premature convergence. This strategy improves the social learning part and enhances the global exploration capability of the proposed ILSPSO algorithm. Then, the performance of the ILSPSO algorithm is compared with five representative PSO variants in the experiments. The test results on benchmark functions demonstrate that the proposed ILSPSO algorithm achieves significantly better overall performance and outperforms the other tested PSO variants. Finally, the ILSPSO algorithm shows satisfactory performance in vehicle path planning and yields a good planned path.


Introduction
The general optimization problem is to find strategies or a set of parameter values such that one or more criteria attain the optimum under some restricted conditions. Research on optimization addresses problems that arise in every field of engineering technology, scientific research, the natural world, and human society. As production develops, traditional optimization algorithms may no longer satisfy the requirements of actual applications owing to their limited performance, which consequently calls for new rapid and efficient algorithms [1]. Over the last decades, researchers have designed many feasible and effective heuristic intelligent optimization algorithms for optimization problems by simulating all kinds of organisms in nature. Intelligent optimization algorithms are typical heuristic algorithms, and they can be divided into two types according to the number of solutions generated in each iteration [2]. One type is individual-based algorithms, such as Simulated Annealing (SA) [3], Mountain Climb Servo (MCS) [4], and Tabu Search (TS) [5]. The other is population-based algorithms, such as Genetic Algorithms (GAs) [6], Ant Colony Optimization (ACO) [7], Particle Swarm Optimization (PSO) [8], and Artificial Bee Colony (ABC) [9]. The abovementioned algorithms have been successfully implemented in practical applications. The PSO algorithm is a relatively new branch of swarm intelligence. It was proposed in 1995 by Kennedy and Eberhart [8], and its core idea is that particles share information and learn from each other's experience, thereby driving the evolution of the population. It is characterized by simple implementation, fast convergence, and few parameters.
As a result, the PSO algorithm has developed rapidly in recent years and has been successfully applied in various scientific and engineering fields, such as water distribution network design [10], power systems [11], manipulator motion planning [12], optimal control [13], image processing [14], artificial neural networks [15], and other fields [16][17][18][19].
The velocity and position of each particle in the basic PSO algorithm are updated through its own personal history best position (pbest) and the global best position (gbest). The PSO algorithm does not use external information during the search process and only utilizes the fitness value of the function. When applied to complicated large-scale combinatorial optimization problems, the basic PSO algorithm usually tends to converge prematurely and lacks solution quality because of insufficient diversity of individuals [20]. Besides, the algorithm converges prematurely on high-dimension optimization problems because of the intense flow of information that attracts particles to the gbest in the basic PSO algorithm. Therefore, the basic PSO algorithm has some inherent shortcomings which are hard to overcome, and a modified PSO algorithm is needed. Numerous PSO variants have been proposed to enhance the search performance; they can be roughly divided into the following categories: adaptive control of the parameters, new learning strategies to enhance population diversity, hybridization with other algorithms, and direct control of the probability distribution of the population. Moreover, other researchers have proposed improved PSO algorithms for specific application environments. However, these PSO variants often maintain population diversity at the cost of slow convergence speed or complex algorithmic structures. Thus, synthetically improving the performance of PSO is still a challenging task in algorithm research. This article proposes a new PSO variant, named PSO with improved learning strategies (ILSPSO), obtained by further modifying the velocity update mechanism of the PSO algorithm.
This new PSO variant employs the self-learning strategy and the multilearning strategy to increase the convergence speed and global search capability of the PSO algorithm. Then, the performance of ILSPSO is tested on 16 well-known benchmark functions and compared with five representative PSO variants. The proposed ILSPSO algorithm is superior to the other tested PSO variants on the majority of the benchmark functions.
In recent years, the agricultural equipment industry has developed rapidly in China. As one of the key technologies of autonomous navigation, path planning is a research focus of intelligent agricultural equipment. According to the application purpose, the path planning of mobile agricultural vehicles can be divided into optimal-time planning, optimal-path planning, and so on. Under known environmental information, path planning from the initial location to the target location according to a prebuilt environment model is called global path planning. Its advantage is that the optimality and accessibility of the planned path are guaranteed. Optimal path search algorithms for global path planning can be mainly divided into graph search algorithms, random sampling algorithms, intelligent algorithms, and so on [21]. GA and ACO among the intelligent algorithms have potential parallelism, which makes them suitable for solving and optimizing complex problems, but they suffer from slow operation speed and premature convergence. Therefore, the PSO algorithm, which is simple and converges fast, has broad application prospects in the path planning of mobile vehicles.
This article aims to propose a new PSO variant by further modifying the velocity update mechanism in PSO and to employ it for vehicle path planning. The article is organized as follows. The basic PSO algorithm and several PSO-related studies are briefly discussed in Section 2. Section 3 elucidates the methodology of the ILSPSO algorithm. Subsequently, to verify the performance of the proposed ILSPSO algorithm, the experimental settings and simulation results are given in Section 4. Section 5 applies the proposed ILSPSO algorithm to vehicle path planning. Finally, the article is concluded with a brief summary in Section 6.

Related Works
Firstly, this section discusses the mechanism of the basic PSO algorithm. Then, the PSO variants are reviewed according to the different improvement ideas of scholars.

Basic PSO Algorithm.
In the basic PSO algorithm, each particle has its own trajectory, namely, position x_i and velocity v_i, and moves in the search space. The position of each particle represents a potential solution, and it evolves with regard to the fitness function. In the D-dimensional search space, the position and velocity of the ith particle are expressed as x_i = [x_i^1, x_i^2, ..., x_i^D] and v_i = [v_i^1, v_i^2, ..., v_i^D], respectively. During the search process, the trajectory of the ith particle is adjusted according to the current velocity, pbest_i, and gbest, where pbest_i = [pbest_i^1, pbest_i^2, ..., pbest_i^D] and gbest = [gbest^1, gbest^2, ..., gbest^D]. Therefore, the velocity and position of the ith particle on dimension d are updated using equations (1) and (2) as follows [22]:

v_i^d(t + 1) = w * v_i^d(t) + c_1 * rand_1^d * (pbest_i^d - x_i^d(t)) + c_2 * rand_2^d * (gbest^d - x_i^d(t)),   (1)
x_i^d(t + 1) = x_i^d(t) + v_i^d(t + 1),   (2)

where w is the inertia weight, which measures the contribution of the previous velocity, c_1 and c_2 are the acceleration parameters, rand_1^d and rand_2^d are two uniformly distributed random numbers in the range [0, 1], pbest_i^d is the personal history best position of the ith particle, and gbest^d is the global best position of the swarm. The 1st term in equation (1) represents the effect of the inertia of the particle, the 2nd term represents the cognitive learning (particle memory influence), and the 3rd term represents the social learning (swarm influence). It is worth pointing out that, in general, the social part of the PSO algorithm is guided by the best solution in the particle's neighborhood.
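The update rules of equations (1) and (2) can be sketched in Python as follows (a minimal illustration; the default parameter values w = 0.729 and c1 = c2 = 1.494 are the settings used later in Section 5, not a prescription):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494):
    """One velocity/position update for a single particle (equations (1) and (2))."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # rand_1^d, rand_2^d in [0, 1]
        vd = (w * v[d]
              + c1 * r1 * (pbest[d] - x[d])        # cognitive learning term
              + c2 * r2 * (gbest[d] - x[d]))       # social learning term
        new_v.append(vd)
        new_x.append(x[d] + vd)                    # position update, equation (2)
    return new_x, new_v
```

If a particle already sits at both pbest and gbest, only the inertia term remains, which is exactly what the three-term decomposition above predicts.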

PSO Variants and Improvements.
Since the introduction of the PSO algorithm, researchers have studied it from different angles to meet the requirements of complex multimodal function optimization and have proposed many PSO variants. A brief review of these variants is presented as follows.
Adaptive control of the parameters is one of the widely used strategies to enhance the search performance of the basic PSO algorithm. As a global parameter of the PSO algorithm, the inertia weight w plays an important role, and it can easily control the search capability and convergence speed of the algorithm. An inertia weight w within [0.9, 1.2] was found to be a good choice, but a strategy of linearly decreasing the inertia weight can also help faster convergence [22]. Shi and Eberhart proposed a fuzzy adaptive method to modify the inertia weight w [23]. A novel parameter automation strategy for the acceleration coefficients c_1 and c_2 (HPSO-TVAC) was introduced by Ratnaweera et al. [24]. Tang et al. proposed a feedback learning PSO with quadratic inertia weight (FLPSO-QIW), in which the parameters are controlled by introducing a fitness feedback mechanism [25]. Xu developed a new strategy in which the inertia weight is dynamically adjusted according to the mean absolute value of velocity [26]. In the CDW-PSO algorithm, Chen et al. introduced a chaotic map and a dynamic weight to modify the search process [27]. Overall, adjusting parameters can improve the performance of the PSO algorithm, but the improvement from this strategy alone is usually limited [28].
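As a concrete example of this category, the linearly decreasing inertia weight mentioned above can be written as follows (a sketch; the bounds w_max = 0.9 and w_min = 0.4 are the common PSO-W settings cited later in the parameter table, assumed here for illustration):

```python
def linear_inertia_weight(t, max_gen, w_max=0.9, w_min=0.4):
    """Linearly decrease the inertia weight from w_max to w_min over the run,
    so the swarm shifts from global exploration toward local exploitation."""
    return w_max - (w_max - w_min) * t / max_gen
```

At generation 0 the weight is w_max, at the final generation it reaches w_min, and it decreases linearly in between.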
Another category of PSO variants introduces new learning strategies to enhance population diversity. Peram et al. proposed the fitness-distance-ratio-based PSO (FDR-PSO) algorithm, which adds a new dimension to this approach: each particle also learns from the experience of neighboring particles that have a better fitness than itself [29]. Mendes et al. developed a fully informed PSO (FIPS) in which each particle is updated based on the history best positions of several neighbors instead of gbest or pbest [30]. Liang et al. proposed comprehensive learning PSO (CLPSO), in which particles learn from different personal history best positions [31]. A distance-based locally informed particle swarm optimizer was proposed by Qu et al. [32]. Sabat et al. proposed a novel integrated learning PSO (ILPSO) for optimizing complex multimodal functions [33]. Cheng and Jin aimed to improve the learning efficiency of particles by introducing social learning mechanisms into PSO, developing a social learning PSO (SL-PSO) [34]. A self-regulating PSO (SRPSO) algorithm incorporating the best human learning strategies for finding the optimum solution was proposed by Tanweer et al. [35]. To solve unconstrained single-objective optimization problems with a continuous search space, Lim and Mat Isa proposed a new PSO variant with adaptive division of labor (ADOLPSO) [20]. Wang et al. developed a hybrid PSO algorithm which employs a self-learning, candidate-generation strategy to ensure the exploration ability and a competitive learning-based prediction strategy to guarantee the exploitation ability of the algorithm [36]. Hybrid PSO algorithms, which combine PSO with other search strategies to enhance performance, can be classified as the third category of PSO variants. Lvbjerg et al. added a crossover operation, one of the three basic genetic operators used in the genetic algorithm, to increase the population diversity [37]. Liu et al.
have recently proposed a novel algorithm (PSO-DE) which integrates PSO with differential evolution (DE) to solve constrained numerical and engineering optimization problems [38]. A diversity-guided hybrid PSO based on gradient search was proposed by Han and Liu [39]. Other specific search operators have also been introduced into PSO, including repulsion techniques [40], variable neighborhood search [41], local stochastic search [42], Newton's laws of motion [43], the multicrossover mechanism of the GA, and the bee colony mechanism [44]. In general, hybrid PSO algorithms improve the performance of the basic PSO algorithm, but they are difficult to implement and generally require more computation time than the basic PSO algorithm. The PSO algorithm is a stochastic global optimization technique, and its search performance is closely related to the probability distribution of the population. Therefore, the algorithm can be improved through direct control of the probability distribution of the population. Kennedy analyzed the distribution of particles as iterations proceed and proposed Bare Bones PSO (BBPSO) based on a probability distribution [45]. In [46], Kennedy summarized the previous studies, decomposed the algorithm into its essential steps, and studied alternative interpretations of those steps. Based on Kennedy's research, Lin and Feng replaced the normal distribution with a chaotic search, which can avoid the influence of pseudorandom numbers [47]. Mendes and Kennedy improved the way the center of the normal distribution function is selected: the mean of the personal best positions of several particles in the same neighborhood is explored, and the β-distribution is used to define the mean of the Gaussian sample [48].
Zhao and Suganthan proposed two local-bests-based multiobjective PSO algorithms which focus the search around small regions of the parameter space in the vicinity of the best existing fronts and apply different mutation operators to different subswarms to accelerate the convergence [49].

ILSPSO Algorithm
Although many PSO variants have been reported in the related literature (as shown in Section 2.2), they still have problems in algorithm implementation or lack solution quality. In this section, a novel particle swarm optimization with improved learning strategies (ILSPSO) is proposed to further modify the cognitive learning strategy and the social learning strategy of the basic PSO algorithm. In the ILSPSO algorithm, the self-learning strategy and the multilearning strategy are adopted to overcome these shortcomings and improve the performance of the basic PSO algorithm when optimizing higher-dimension multimodal objective functions.

Self-Learning Strategy.
In the basic PSO algorithm, each particle only learns from its pbest in the cognitive learning part (the 2nd term in equation (1)). The pbest represents the personal history best position of each particle. However, a particle's pbest is not always among the better positions of the whole swarm within the current evolutionary search space. As a result, the potential information of the better particles is not efficiently used.
In view of the above problem, this section presents a self-learning strategy to improve pbest in the cognitive learning part of the basic PSO algorithm. In order to evaluate the pbest solutions, they are sorted according to their fitness values. Subsequently, the pbest of the ith particle, except for the best one, stochastically learns from any better particle among the current personal best solutions. After the self-learning strategy, the pbest of the ith particle on dimension d can be expressed as pbest_si^d. Therefore, pbest_i^d in the cognitive learning part of equation (1) can be replaced by pbest_si^d. Various simulation experiments show that a fixed self-learning probability is bad for population diversity [31]. For achieving better population diversity and faster convergence speed, an empirical self-learning probability given by equation (3) is adopted. Figure 1 presents an example of the assigned probability for an evolution iteration number of 100000: over iterations 1 to 100000, the self-learning probability ranges from 0.3 to 0.9. Finally, the self-learning procedure is described in Algorithm 1, where M is the population size and D is the search dimensionality.
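Since Algorithm 1 itself is not reproduced in the text, the self-learning step it describes can be sketched as follows; the dimension-wise learning and the choice of donor particle are assumptions consistent with the description above (the best particle is reserved, and each remaining pbest learns from a randomly chosen strictly better pbest with the current self-learning probability):

```python
import random

def self_learning_pbest(pbests, fitnesses, prob):
    """Sketch of the self-learning strategy: for each particle except the best
    one, each dimension of its pbest is replaced, with probability `prob`, by
    the corresponding value of a randomly chosen better pbest (minimization)."""
    order = sorted(range(len(pbests)), key=lambda i: fitnesses[i])
    sbest = [list(p) for p in pbests]       # pbest_si, initialized to pbest_i
    for rank, i in enumerate(order):
        if rank == 0:
            continue                         # the best particle is reserved
        for d in range(len(pbests[i])):
            if random.random() < prob:
                j = order[random.randrange(rank)]  # any strictly better particle
                sbest[i][d] = pbests[j][d]
    return sbest
```

With prob = 1.0 and only one better particle available, a worse particle copies that particle's pbest entirely, while the best particle is left unchanged.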

Multilearning Strategy.
In addition, learning from a single gbest in the social learning part (the 3rd term in equation (1)) improves the convergence speed of the basic PSO algorithm. Yet, the population diversity of the particles is reduced, and they are easily attracted to a locally optimal region. Hence, the basic PSO algorithm often falls into local optima when solving higher-dimension multimodal objective functions. For heuristic swarm intelligence optimization algorithms, including PSO, the crux of optimization is that the algorithm should provide not only good local search capability but also strong global search capability. Generally, the global search capability of the PSO algorithm is more important [50], because only global search can cover the total solution space and find the globally optimal solution over a domain. The global search capability of the PSO algorithm is closely related to the diversity of the population. Therefore, after the self-learning of pbest, the global search capability should also be enhanced in the improved PSO algorithm.
In order to balance the exploration and exploitation capabilities of the PSO algorithm and enhance its robustness, the multilearning strategy is adopted to improve the social learning part in the velocity update process (the 3rd term in equation (1)), which ultimately achieves rapid convergence speed and strong global search capability. Let gbestd = [gbest_1^d, gbest_2^d, ..., gbest_k^d] denote the top k gbest solutions on dimension d, where k determines the greediness of the novel multilearning strategy. The gbest on dimension d can then be expressed as gbestk^d = gbestd(unidrnd(k)), where the function unidrnd(k) generates random numbers from the discrete uniform distribution with maximum k. After the multilearning strategy, the gbest of the ith particle on dimension d can be expressed as gbestk^d. Therefore, gbest^d in the social learning part of equation (1) can be replaced by gbestk^d. The multilearning strategy is illustrated in Figure 2 [33]. In Figure 2, the orange dots represent the gbestd solutions, which are surrounded by a circle that shows the probable learning region of the particles, and the black dots represent the positions of the particles in the evolutionary search space. The gbestd represents the multiple gbest solutions on the corresponding dimension d, which include the current gbest solution, and their number is decided by the parameter k (k = 3 in Figure 2). For the particles on dimension d, the role of the current gbest^d solution can be taken by a solution randomly selected from gbestd according to the multilearning strategy. That is, all particles move toward gbestk^d (gbestk^d = gbestd(unidrnd(k))) in the social learning part (the 3rd term in equation (1)). In this way, the multilearning strategy not only keeps the population diversity but also avoids premature convergence.
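The per-dimension selection from the top-k pool can be sketched as follows (a Python stand-in for MATLAB's unidrnd(k); representing the pool as k complete solutions is an assumption for illustration):

```python
import random

def multilearning_gbest(gbest_top_k):
    """For each dimension d, pick the guiding value gbestk^d uniformly at
    random from the top-k gbest solutions, mirroring gbestd(unidrnd(k))."""
    k, dim = len(gbest_top_k), len(gbest_top_k[0])
    return [gbest_top_k[random.randrange(k)][d] for d in range(dim)]
```

Because each dimension draws independently, the guiding position may mix coordinates from different top-k solutions, which is what keeps the social attractor diverse.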

Proposed Approach.
Based on Sections 3.1 and 3.2, the velocity update equation (1) can be rewritten as equation (4):

v_i^d(t + 1) = w * v_i^d(t) + c_1 * rand_1^d * (pbest_si^d(t) - x_i^d(t)) + c_2 * rand_2^d * (gbestk^d(t) - x_i^d(t)),   (4)

where pbest_si^d(t) is obtained through the self-learning strategy and gbestk^d(t) is obtained through the multilearning strategy.
The position update of the ith particle on dimension d still uses equation (2). The iteration procedure of each particle within a generation of the ILSPSO algorithm is illustrated in Figure 3 [16]. The proposed ILSPSO algorithm employs the improved learning strategies (self-learning strategy and multilearning strategy) to improve the performance of the basic PSO algorithm.
The self-learning strategy aims to increase the convergence speed, and the multilearning strategy aims to enhance the global exploration capability. Algorithm 2 shows the detailed steps of the ILSPSO algorithm (Max_gen is the maximum evolution number).
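Putting the two strategies together, the main loop of the algorithm can be sketched compactly in Python. This is a hedged sketch, not the authors' implementation: the linear 0.3-to-0.9 probability schedule, the velocity/position clamping, and the use of the k best pbest solutions as the top-k pool are simplifying assumptions, since equation (3) and Algorithm 1 are not fully reproduced in the text.

```python
import random

def ilspso(fitness, dim, pop=40, max_gen=200, bounds=(-10.0, 10.0),
           w=0.729, c1=1.494, c2=1.494, k=3):
    """Compact sketch of the ILSPSO loop: self-learning on pbest plus
    multilearning on the top-k gbest pool, for minimization."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [list(x) for x in X]                     # pbest positions
    Pf = [fitness(x) for x in X]                 # pbest fitness values
    for gen in range(max_gen):
        prob = 0.3 + 0.6 * gen / max_gen         # assumed schedule, 0.3 -> 0.9
        order = sorted(range(pop), key=lambda i: Pf[i])
        topk = [list(P[i]) for i in order[:k]]   # top-k pool for multilearning
        for rank, i in enumerate(order):
            for d in range(dim):
                # self-learning: learn from a better pbest with probability prob
                ps = (P[order[random.randrange(rank)]][d]
                      if rank > 0 and random.random() < prob else P[i][d])
                # multilearning: guided by a random member of the top-k pool
                gk = topk[random.randrange(k)][d]
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (ps - X[i][d])
                           + c2 * random.random() * (gk - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp position
            f = fitness(X[i])
            if f < Pf[i]:                        # update personal best
                P[i], Pf[i] = list(X[i]), f
    best = min(range(pop), key=lambda i: Pf[i])
    return P[best], Pf[best]
```

On a simple unimodal function such as the 2-D sphere, this sketch converges quickly toward the origin, illustrating the intended combination of fast exploitation and diverse social guidance.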

Experimental Settings and Simulation Results
In this section, firstly, the experimental settings used in the performance evaluation of the proposed ILSPSO algorithm are described. Subsequently, the simulation results are presented.

Benchmark Functions.
A total of 16 well-known benchmark functions are used to extensively test the performance of the algorithms. The involved benchmark functions, listed in Table 1, can be divided into three classes. F1-F4 are unimodal functions, called conventional problems. They only have a single optimum. Hence, the exploitation capability of the algorithms can be tested through these unimodal functions. F5-F8 are complex multimodal functions, called complex problems. They have one global optimum and many local optima, and the number of local optima increases exponentially with the problem dimension. Thus, they are suitable for testing the exploration capability of the algorithms. Moreover, obtaining good results on multimodal functions is very important for optimization algorithms, because such results show that an algorithm can jump out of local optima and gradually move toward the global optimum. Finally, F9-F16 are the rotated versions of F1-F8, called rotated problems [51]. These rotated functions are complex and can be used to test the exploitation and exploration capabilities of the algorithms at the same time. Table 1 lists the involved benchmark functions with their formulae, feasible search ranges, global optimal values, and accuracy levels (ε) [33,52,53]. The parameter settings of the tested algorithms are given in Table 2.
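As illustrative stand-ins for the two unrotated classes (the exact functions of Table 1 are not reproduced here, so these are representative examples rather than the paper's F1 and F5): the sphere function is a classic unimodal benchmark, and the Rastrigin function is a classic multimodal one whose local optima grow with dimension.

```python
import math

def sphere(x):
    """Unimodal benchmark: a single optimum, f(0) = 0."""
    return sum(t * t for t in x)

def rastrigin(x):
    """Multimodal benchmark: global optimum f(0) = 0 surrounded by a grid of
    local optima whose count grows exponentially with the dimension."""
    return 10 * len(x) + sum(t * t - 10 * math.cos(2 * math.pi * t) for t in x)
```

Both reach their global minimum of 0 at the origin, which is why reported errors on such functions directly measure how close an algorithm gets to the optimum.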
To ensure a fair comparison, all PSO variants are tested using the same population size (M), search dimensionality (D), and maximum evolution number (Max_gen) in each run for each test function; their values are 40, 30, and 1 × 10^4, respectively. Moreover, to minimize statistical errors, all the tested PSO variants are run 30 times independently for every function. The experiments have been conducted on a PC with an Intel Core i5-6600 3.3 GHz CPU, Microsoft Windows 10 64-bit operating system, and MATLAB 2015a.

Comparison of ILSPSO with Other Representative PSO Variants.
In this study, the performance of the PSO variants is tested in terms of accuracy, reliability, and efficiency through the mean fitness value (F_mean) and the algorithm's rank (Rank). The F_mean and Rank results on the 16 benchmark functions are listed in Table 3, and the best result obtained for each function is expressed in boldface. Besides, if F_mean statistically significantly outperforms the accuracy level, a "+" is labeled below the algorithm; if there is no significant difference between F_mean and the accuracy level, a "=" is labeled; and if F_mean is outperformed, a "−" is labeled. A summary of the total numbers of "+," "=," and "−," the average rank, and the overall rank is given at the bottom of Table 3. The convergence charts of the tested PSO variants are shown in Figure 4. Table 3 and Figure 4 show that the proposed ILSPSO algorithm achieves better performance on the majority of the test functions.
(1) Conventional problems: as shown in Table 3, for the four conventional problems F1-F4, the proposed ILSPSO algorithm achieved the best performance on all four functions, and the h_c values also indicate that this dominance is statistically significant. Furthermore, the convergence charts of the tested PSO variants in Figures 4(c) and 4(d) show that the proposed ILSPSO algorithm has a faster convergence speed than the other tested algorithms. In Figures 4(a) and 4(b), the ILSPSO algorithm ranks second, and the result is also satisfactory. (2) Complex problems: the proposed ILSPSO algorithm also performs well, as the results for F7 in Table 3 and Figure 4(g) illustrate. All of the above shows that the proposed ILSPSO algorithm exhibits a satisfactory exploration capability on the complex multimodal functions. This is mainly because the novel learning strategy enhances the global search capability, and the improvement strategy is effective in solving most complex problems.

[Recovered fragment of Algorithm 2, the pseudocode of the proposed ILSPSO algorithm:
(11) For d = 1:D do
(12) /* Self-learning strategy, refer to Algorithm 1 */
(13) Calculate the pbest_si^d of the ith particle according to Algorithm 1;
(14) /* Multilearning strategy */
(15) Calculate the gbestk^d of the ith particle according to the top k of gbest;
(16) /* Update the velocity and position of the ith particle */
(17) Calculate the velocity v_i^d of the ith particle using equation (4);
(18) Limit the velocity;
(19) Update the position x_i^d of the ith particle using equation (2);
(20) Limit the position;
(21) End For
(22) Calculate the fitness value f(t);
(23) Update the personal best solution pbest(t), fpbest(t);
(24) Update the global best solution gbest(t), fgbest(t);
(25) End For
(26) gen++;
(27) End While
(28) Output the final optimal solution.]

[Table 2 lists the parameter settings of the tested algorithms, e.g., PSO-W [22]: w_max = 0.9, w_min = 0.4.]

In summary, the analysis of Table 3 and Figure 4 shows that the proposed ILSPSO algorithm significantly improves PSO's performance and is better than the other tested PSO algorithms on most benchmark functions, owing to the balance between the global and local search capabilities of the algorithm. The proposed self-learning strategy of the ILSPSO algorithm increases the convergence speed while preserving population diversity and promotes the exploitation capability. The multilearning strategy of the ILSPSO algorithm enhances the exploration capability, assists the algorithm in performing global search, and effectively prevents premature convergence.

Vehicle Path Planning Using the ILSPSO Algorithm
Path planning is one of the basic links of vehicle navigation [54]. It searches for an optimal or near-optimal collision-free path from the initial state to the target state according to a certain performance index. In Section 4, the performance of the ILSPSO algorithm was compared with five representative PSO variants in the experiments. The test results on benchmark functions demonstrate that the proposed ILSPSO algorithm achieves significantly better overall performance and outperforms the other tested PSO variants. In this section, the ILSPSO algorithm is applied to vehicle path planning. In order to simplify the problem, the path planning is divided into two steps: (1) using the MAKLINK graph theory method and the Dijkstra algorithm to find a near-optimal collision-free path and (2) optimizing the near-optimal collision-free path with the ILSPSO algorithm to obtain the global optimal collision-free path.

Solution of Near-Optimal Collision-Free Path Based on the Dijkstra Algorithm.
Through the environmental model established above, the path planning problem of the vehicle is transformed into the shortest path problem of an undirected network map. The Dijkstra algorithm is a typical single-source shortest path algorithm, used to calculate the shortest paths from one node to all other nodes in a graph with nonnegative weights. Therefore, the Dijkstra algorithm can be used to generate a near-optimal collision-free path through the path nodes S, D_1, D_2, ..., D_n, E on the MAKLINK map. The free connections corresponding to the nodes are named L_i (i = 1, 2, ..., n). The flow of the Dijkstra algorithm is as follows [55].
Step 1. Initialize a set of nodes V whose shortest paths have not yet been determined and a set of nodes W whose shortest paths have been determined. Then, the weighted adjacency matrix arcs is used to initialize the shortest path lengths F from the source point to the other nodes. If the source point has an arc to another node, the corresponding value is the weight of that arc; otherwise, the corresponding value is set to the maximum value.
Step 2. Assuming that the minimum value in F is F[i], F[i] is the shortest path length from the source point to point i, and point i is taken out of the set V and placed in the set W.
Step 3. Modify the path length values in the array F from the source point to each node k in the set V according to node i.
Step 4. Repeat Step 2 and Step 3 until the shortest paths from the source to all nodes are found. Finally, the shortest path length obtained with the Dijkstra algorithm is 229.0611 m, and the result is shown in Figure 7.
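Steps 1-4 above can be sketched in Python; this version uses a priority queue instead of the explicit V/W sets and adjacency matrix, but it computes the same single-source shortest path lengths F (the node names and the adjacency-list format are illustrative assumptions):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest path lengths on a nonnegative weighted graph,
    as used to find the near-optimal path on the MAKLINK node graph.
    adj: {node: [(neighbor, weight), ...]} (list both directions for
    undirected edges)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already settled
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # relax edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist
```

Popping a node from the queue corresponds to moving it from set V into set W in Step 2, and the relaxation loop corresponds to Step 3.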

Solution of Global Optimal Collision-free Path Based on the ILSPSO Algorithm.
The Dijkstra algorithm described above implements the shortest path solution for the undirected network map in Figure 6. However, the vehicle can follow the boundary of the 2-dimensional path planning feasible space constructed by the MAKLINK lines, rather than having to follow the undirected network path in Figure 6. Therefore, the above path is not necessarily the optimal solution over the entire planning space. Subsequently, the near-optimal collision-free path is optimized using the ILSPSO algorithm to obtain a global optimal collision-free path.
In order to ensure that the vehicle does not come close to obstacles when operating, the boundary of each obstacle is extended outward by 1/2 of the maximum dimension of the vehicle body in the length and width directions plus the minimum sensing distance of the sensor [56]. The vehicle is then considered a mass point whose size is negligible. As shown in Figure 8, each path point slides along its MAKLINK line according to a proportional parameter h_i, where n is the number of MAKLINK lines on the near-optimal collision-free path.
In the near-optimal collision-free path shown in Figure 7, there are 6 free-sliding path points. Therefore, the search space of the particles in the ILSPSO algorithm has 6 dimensions. In the optimization process, the population size is 30, the maximum evolution number is 500, the inertia weight w is 0.729, the acceleration parameters c_1 and c_2 are both 1.494, and the parameter k is 3. The optimization results of the global collision-free path based on the ILSPSO algorithm are shown in Figure 9. From Figure 9(a), the ILSPSO algorithm converges to the optimized solution after about 200 evolution iterations (the shortest path length is 169.9 m), 25.83% shorter than the initial path. The proportional parameters h_i of the 6 free-sliding path points are shown in Figure 9(b). Finally, the comparison between the global optimal collision-free path and the near-optimal path is shown in Figure 10.
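The quantity that the ILSPSO particles optimize in this step, the length of the path through the free-sliding points, can be sketched as follows. The linear parameterization P_i = A_i + h_i (B_i − A_i) of a point on its MAKLINK line is an assumption consistent with the proportional parameter h_i described above, and the coordinates in the example are illustrative, not taken from the map.

```python
import math

def path_length(h, start, end, links):
    """Fitness for the path-optimization step: each h[i] in [0, 1] slides a
    path point along its MAKLINK line, given by endpoints (A_i, B_i); the
    fitness is the length of the polyline S -> P_1 -> ... -> P_n -> E."""
    pts = [start]
    for hi, (a, b) in zip(h, links):
        # assumed linear parameterization: P_i = A_i + h_i * (B_i - A_i)
        pts.append((a[0] + hi * (b[0] - a[0]), a[1] + hi * (b[1] - a[1])))
    pts.append(end)
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))
```

A particle's position is then the vector [h_1, ..., h_6], and minimizing path_length with the ILSPSO settings above (w = 0.729, c_1 = c_2 = 1.494, k = 3) would correspond to the optimization summarized in Figure 9.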

Conclusions
This paper presents a novel particle swarm optimization (ILSPSO) which employs improved learning strategies, namely, a self-learning strategy and a multilearning strategy. In the self-learning strategy, each particle's pbest, except for the best one, stochastically learns from any better particle among the current pbest solutions according to the sorting of the particles. At the same time, in order to ensure population diversity during the search process, an empirical formula is used to adjust the learning probability along with the evolution iterations. This strategy aims to increase the convergence speed on the premise of population diversity. Moreover, the gbest is replaced with a solution randomly chosen from the top k gbest (gbestk) on the corresponding dimension according to the multilearning strategy, which enhances the global search capability of the algorithm. Subsequently, the proposed ILSPSO algorithm is compared with five other representative PSO variants on the 16 well-known benchmark test functions belonging to three classes. The experimental and statistical analyses demonstrate that the proposed ILSPSO algorithm shows significantly better overall performance and outperforms the other tested PSO variants on most benchmark functions, whether unrotated or rotated. The vehicle's path is initially planned by the MAKLINK graph theory method and the Dijkstra algorithm. Subsequently, the proposed ILSPSO algorithm is employed for the quadratic programming of the initial path. The vehicle path obtained by the quadratic programming of the ILSPSO algorithm is 25.83% shorter than the initial plan, which shows that the ILSPSO algorithm has satisfactory performance in vehicle path planning. All the results above prove that the proposed ILSPSO algorithm is a feasible and effective way to solve real-life application problems.
Data Availability
The simulation code and data used to support the findings of this study have been deposited on GitHub at https://github.com/jsluen/PSO.git.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Figure 10: Comparison between the global optimal collision-free path and the near-optimal collision-free path.