New Ant Colony Optimization Algorithm for the Traveling Salesman Problem

As a suitable optimization method implementing computational intelligence, ant colony optimization (ACO) can be used to solve the traveling salesman problem (TSP). However, traditional ACO has many shortcomings, including slow convergence and low efficiency. By enlarging the ants' search space and diversifying the potential solutions, a new ACO algorithm is proposed. In this new algorithm, to diversify the solution space, a strategy of combining pairs of searching ants is used. Additionally, to reduce the influence of having a limited number of meeting ants, a threshold constant is introduced. Based on applying the algorithm to 20 typical TSPs, the good performance of the new algorithm is verified. Moreover, comparison with 16 state-of-the-art algorithms shows that the proposed new algorithm is a highly suitable method to solve the TSP and that its performance is better than that of most of these algorithms. Finally, by solving eight TSPs, the performance of the new algorithm is analyzed more comprehensively against that of the typical traditional ACO. The results show that the new algorithm can attain better solutions with higher accuracy and less effort.


INTRODUCTION
As a computational intelligence optimization method, ant colony optimization (ACO) [1] is a swarm intelligence algorithm inspired by the foraging behavior of real ants. The first ACO algorithm, called the ant system (AS) [2], was developed by three Italian scholars (Dorigo, Colorni, and Maniezzo) in 1991. As the original version of the various ACO algorithms, the AS has several obvious merits, including its robustness and its status as a constructive greedy heuristic [1]. Therefore, the AS has been a heavily researched topic since its creation and has been used to solve difficult combinatorial optimization problems, such as the traveling salesman problem (TSP). However, as a heuristic algorithm, the AS has several shortcomings [1], including slow convergence and difficulty in expanding the search space. To solve the TSP well, several scholars have proposed many corrective ACO algorithms, such as the elitist AS [3], the ant colony system [4], the rank-based AS [5], the max-min AS [6], the novel max-min AS [7], the adaptive dynamic AS [8], the moderate AS [9], an improved ACO algorithm [10], the cooperative genetic AS [11], a hybrid method of ant colony optimization and the genetic algorithm (ACO-GA) [12], a hybrid method of ant colony optimization and the cuckoo search algorithm (ACO-CS) [12], a hybrid max-min ant system integrated with an inequality constraint on four vertices and three lines (HMMAS) [13], a modified AS [14], a nearest neighbor ant colony system (NNACS) [15], a hybrid elitist-ant system (HEAS) [16], a hybrid method based on ant colony optimization and the 3-Opt algorithm (ACO-3Opt) [17], and a parallel ACO algorithm based on a quantum dynamic mechanism [18]. The details of these algorithms are as follows. The elitist AS [3] introduced the elitist strategy into the AS to give extra emphasis to the best path found so far and thereby improve the global search of the AS. However, the elitist AS does not remedy the slow convergence of the AS. The ant colony system [4] was built on the AS to improve efficiency by using a local pheromone update in addition to the pheromone update applied at the end of the construction process. On the other hand, it cannot greatly decrease the probability of being trapped in a local minimum. To avoid premature convergence, the rank-based AS [5] introduced a ranking strategy into the AS by weighting each ant's contribution to the trail-level update. However, this system also does not remedy the slow convergence of the AS. The max-min AS [6] improved the pheromone update method of the AS by allowing only the best ant to update the pheromone trails and by limiting the pheromone values on the paths to avoid premature convergence of the search. Nevertheless, this system also does not solve the problem of slow convergence, and more parameters need to be determined. To select suitable parameters for the max-min AS, the novel max-min AS [7] used the objective function value to set the pheromone trail value. However, its convergence rate needs to be improved. The adaptive dynamic AS [8] improved the AS by modifying the pheromone updating rule and the transition rule with measures of solution evenness and acceleration to avoid premature convergence. However, it does not solve the problem of slow convergence. Inspired by adaptive behavior, the moderate AS [9] was developed with a new transition rule that enhances the capability of the AS to find new solutions and avoid premature convergence, but its convergence rate needs to be improved. An improved version of the ACO algorithm [10] was proposed based on a candidate list strategy for rapid convergence and a dynamic updating rule for heuristic parameters using entropy to improve performance. However, this algorithm is very complex, and its performance has not been verified well. The cooperative genetic AS [11] combined the genetic algorithm (GA) and ACO in a cooperative manner to improve performance: in this system, mutual information is exchanged between the two algorithms at the end of each iteration. However, this algorithm is also very complex, and more parameters are required. In the hybrid method of ACO and the GA (ACO-GA) [12], the GA utilizes all effective paths of ACO and then identifies an effective and efficient way to obtain a better solution in the search space. Similarly, in the hybrid method of ACO and the cuckoo search (CS) algorithm (ACO-CS) [12], the CS was used within ACO to search for a better solution. Nevertheless, these two hybrid algorithms are very complex, and more verification of their performance is needed. In the hybrid max-min AS integrated with an inequality constraint on four vertices and three lines (HMMAS) [13], the max-min AS was used to find an approximate result, and starting from that approximation, the inequality constraint on four vertices and three lines was used in a local search to obtain a better one. However, the performance of this algorithm is not very good, and it cannot be used for large-scale problems. In the modified AS [14], adaptive tour construction and adaptive pheromone updating strategies are embedded into the AS to achieve better solutions. Nonetheless, deep verification of it is lacking, and its convergence rate should be improved. To reduce computing time without sacrificing the optimality properties of the solutions, the NNACS [15] was proposed by implementing enhanced pheromone updating rules in the ant colony system, in which a large number of inefficient solutions are eliminated at the outset on the basis of proximity-based neighborhoods. However, the NNACS currently focuses only on problems of small to moderate size. The HEAS [16] introduced an external memory into an elitist AS, called an elite pool, which contains high-quality and diverse solutions to maintain a balance between the diversity and quality of the search and thereby enhance the performance of the algorithm. However, this algorithm is very complex, and more parameters are required. In the hybrid ACO-3Opt [17], the 3-Opt algorithm was combined with ACO to avoid local minima; it can improve the quality of the obtained solutions and the robustness of the algorithm. However, the ACO-3Opt is very complex, and its performance is affected by many factors. Finally, in the parallel ACO algorithm based on a quantum dynamic mechanism [18], an improved 3-opt operator was used to provide superior local search ability, and several antibody diversification schemes were incorporated to improve the balance between exploitation and exploration. However, this algorithm is very complex, and finding a suitable parameter setting for it is hard.
Because the TSP is a well-known NP-hard combinatorial optimization problem that is computationally difficult, many other new metaheuristic optimization algorithms, in addition to ACO, have been applied to solve it, such as the quantum heuristic algorithm (QHA) [19], the discrete artificial bee colony algorithm with a neighborhood operator (DABC-NO) [20], the shrinking blob algorithm (SBA) [21], the discrete cuckoo search algorithm (DCSA) [22], the random-key cuckoo search (RKCS) [23], the African buffalo optimization (ABO) [24], the discrete bat algorithm (DBA) [25], the fruit fly optimization algorithm (FFOA) [26], a hybrid algorithm using a GA and a multiagent reinforcement learning heuristic (GA-MRLH) [27], the artificial atom algorithm (AAA) [28], the greedy flower pollination algorithm (GFPA) [29], the imperial competitive algorithm (ICA) [30], the black hole algorithm (BHA) [31], the simulated annealing-based symbiotic organisms search optimization algorithm (SA-SOSOA) [32], the discrete symbiotic organisms search algorithm (DSOSA) [33], the hybrid discrete artificial bee colony algorithm with a threshold acceptance criterion (DABC-TAC) [34], a minimum spanning tree-based heuristic (MSTH) [35], a genetic algorithm with local operators (GAL) [36], a new hybrid optimization algorithm based on wolf pack search and local search (WPS-LS) [37], discrete spider monkey optimization (DSMP) [38], discrete pigeon-inspired optimization (DPIO) [39], and the parthenogenetic algorithm (PGA) [40]. Many of these are newly proposed metaheuristic algorithms, such as the QHA, SBA, DCSA, RKCS, ABO, DBA, FFOA, AAA, GFPA, ICA, BHA, DSOSA, MSTH, DSMP, DPIO, and PGA. Most of them have a simple structure and are easy to apply. Using these new metaheuristic algorithms, the results for solving TSPs are very good in terms of solution quality and speed for most algorithms, except the GFPA, which mainly considers the robustness of the solutions. However, the theoretical bases of these new metaheuristic algorithms are all lacking. Moreover, their verification is not sufficient, and more experiments should be conducted. For example, some algorithms (including the SBA, RKCS, ABO, FFOA, AAA, ICA, BHA, MSTH, and DSMP) have been applied only to small-scale problems, while others (including the DPIO and PGA) have been applied only to large-scale problems. Furthermore, some algorithms have specific shortcomings. For example, for the QHA [19], it remains an open question whether a quadratic or faster speedup can still be achieved when employing structural or geometrical information on city-pair costs, and the performance of the GFPA [29] decreases quickly as the number of cities increases. Among the hybrid algorithms, the DABC-NO [20] combined a neighborhood operator with a newly proposed metaheuristic (the discrete artificial bee colony algorithm) to improve solution quality for the TSP, and its results are very good in terms of solution quality and speed. However, the DABC-NO has been applied only to small-scale problems. The GA-MRLH [27] combines a genetic algorithm with a multiagent reinforcement learning heuristic, and its results for solving TSPs are good in terms of solution quality and speed. However, the algorithm is very complex, and more parameters are needed. The SA-SOSOA [32] is a new hybrid algorithm based on simulated annealing and the symbiotic organisms search optimization algorithm, and its results for solving TSPs are good in terms of solution quality and speed. However, its theoretical basis is lacking, and more experiments should be conducted. The DABC-TAC [34] is a hybrid discrete artificial bee colony algorithm that uses the acceptance criterion of the threshold accepting method, and its results for solving TSPs are also good in terms of solution quality and speed. However, this algorithm is very complex, and more parameters are needed. The GAL [36] is a hybrid genetic algorithm with local operators, and its results for solving TSPs are good in terms of solution quality and speed as well. However, this algorithm is also very complex, and more parameters are needed. Finally, the WPS-LS [37] is a new hybrid algorithm based on wolf pack search and local search, and its results for solving TSPs are good in terms of solution quality and robustness. However, this algorithm is very complex and has been applied only to small-scale problems.
As mentioned previously, ACO was introduced by means of a proof-of-concept application to the TSP. Since then, ACO has been applied to many combinatorial optimization problems [41]. First, classical problems other than the TSP, such as assignment problems, scheduling problems, graph coloring, the maximum clique problem, and vehicle routing problems, were addressed. More recent applications include electrical engineering problems, clustering problems, and civil engineering problems [42].
Intuitively, the most important theme in improvement algorithms for ACO is the balance between intensification and diversification. Excessive emphasis on intensification can make the ants converge to a local optimum, while excessive emphasis on diversification can lead to an unstable state [43]. To overcome the shortcomings of traditional ACO methods, a meeting strategy is introduced to favor search diversification while maintaining intensification at an appropriate level. In other words, this new strategy adjusts the trade-off between intensification and diversification. Thus, a new ACO algorithm is proposed. To verify the new algorithm, its results for 20 TSPs are compared to the best known solutions and the results of 16 state-of-the-art algorithms. Furthermore, the computing effectiveness and efficiency of the new algorithm are compared with those of the typical traditional ACO when both are used to solve eight commonly used TSPs.
The paper is organized as follows. After this introduction, the typical traditional ACO is introduced in Section 2. The details of the proposed new ACO algorithm are given in Section 3. Section 4 presents the experimental results obtained with the new ACO algorithm and compares them to those of certain state-of-the-art heuristic methods. Section 5 contains the statistical analysis of the two algorithms (traditional ACO and the new ACO algorithm). Finally, the study's conclusions are presented in Section 6.

TRADITIONAL ACO
By application to the TSP, the main steps of traditional ACO are introduced as follows. First, when t = 0, the m ants are randomly positioned at n different cities. At this time, the intensity of pheromone τ_ij on all paths is the same; this state can be described as τ_ij(0) = c, where c is a constant.
The next step for each ant is to select another city that it has not yet visited using certain rules. There are two main sources of information followed by the ants:
1. The pheromone intensity on the path between cities i and j at time t, τ_ij(t), which is information provided by the algorithm.
2. The heuristic information controlling the move from city i to j, η_ij, which can be determined by a heuristic algorithm according to the problem to be solved. Generally, it can be given as η_ij = 1/d_ij, where d_ij is the length of the path between cities i and j.
By using these two sources of information, at time t, ant k at city i selects the next city j, which it has not yet visited, according to the following probability:

p_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{l ∈ allowed_k} [τ_il(t)]^α [η_il]^β, if j ∈ allowed_k, and p_ij^k(t) = 0 otherwise,

where α and β are two parameters that determine the relative influence of the pheromone trail and the heuristic information, and allowed_k is the feasible neighborhood of ant k when it is at city i, that is, the set of cities that ant k has not yet visited.
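As an illustration, the transition rule can be sketched in C++ (the language the authors used for their implementation). All names here are hypothetical, not taken from the paper's code; the selection is a roulette wheel over the weights τ_ij^α · η_ij^β of the unvisited cities.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Sketch of the ACO transition rule: ant k at city i picks the next city j
// from its feasible neighborhood with probability proportional to
// tau[i][j]^alpha * eta[i][j]^beta. Names and structure are illustrative.
int selectNextCity(int i,
                   const std::vector<int>& feasible,            // cities not yet visited
                   const std::vector<std::vector<double>>& tau, // pheromone matrix
                   const std::vector<std::vector<double>>& eta, // heuristic info, 1/d_ij
                   double alpha, double beta, std::mt19937& rng) {
    std::vector<double> weights;
    weights.reserve(feasible.size());
    for (int j : feasible)
        weights.push_back(std::pow(tau[i][j], alpha) * std::pow(eta[i][j], beta));
    // discrete_distribution normalizes the weights into probabilities
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    return feasible[pick(rng)];
}
```

The feasible neighborhood plays the role of allowed_k; in a full implementation it would be derived from the ant's tabu list.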
However, to avoid visiting a city more than once, each ant k maintains a tabu list, tabu(k), which records the cities visited by this ant.
After an ant completes one cycle (visiting all cities), the pheromone intensity on all paths should be updated. That is, after the paths of the ants have been built, the pheromones for all ants are updated using the following equation:

τ_ij(t+1) = ρ · τ_ij(t) + Σ_{k=1}^{m} Δτ_ij^k,

where ρ is the residual ratio of the pheromone. To avoid infinite accumulation of the pheromone, ρ must be less than 1.
Δτ_ij^k is the increase of the trail level on edge (i, j) caused by ant k. Depending on the problem, there are three descriptions of Δτ_ij^k, as follows:

1. Δτ_ij^k = Q / T_k if ant k used edge (i, j) in its tour, and 0 otherwise;
2. Δτ_ij^k = Q / d_ij if ant k moved from city i to city j, and 0 otherwise;
3. Δτ_ij^k = Q if ant k moved from city i to city j, and 0 otherwise;

where Q is the quantity of pheromone laid by an ant per tour and T_k is the length of the tour that ant k has found. In the three descriptions above, global information is used in the first, while local information is used in the other two. Generally, the first description is used.
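A minimal sketch of the cycle-level pheromone update with the first description, assuming a symmetric TSP; the function and variable names are illustrative, not from the paper's implementation.

```cpp
#include <vector>

// Global pheromone update (first description): evaporate all trails by the
// residual ratio rho, then deposit Q / T_k on every edge of each ant's tour.
void updatePheromone(std::vector<std::vector<double>>& tau,
                     const std::vector<std::vector<int>>& tours, // tours[k] = city order of ant k
                     const std::vector<double>& tourLength,      // T_k for each ant
                     double rho, double Q) {
    for (auto& row : tau)
        for (double& t : row) t *= rho;                 // evaporation, rho < 1
    for (std::size_t k = 0; k < tours.size(); ++k) {
        double delta = Q / tourLength[k];
        const auto& tour = tours[k];
        for (std::size_t s = 0; s < tour.size(); ++s) {
            int i = tour[s], j = tour[(s + 1) % tour.size()]; // closed tour
            tau[i][j] += delta;
            tau[j][i] += delta;                         // symmetric TSP
        }
    }
}
```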
The procedure of traditional ACO is a kind of iterative process, and its pseudocode is as follows.
Begin
  Initialize the parameters α, β, ρ, Q, c, m, and n
  While the termination condition is not met do
    Place the m ants on cities and let each ant construct a tour using the transition probability
    Update the pheromone intensity on all paths
  End while
End

By using C++, the traditional ACO was implemented in this study.

NEW ACO ALGORITHM
The new ACO algorithm, as applied to the TSP, is described in detail as follows. In the traditional ACO, for each ant k, at the beginning, the number of cities in the set tabu(k) is small; thus, the ant selects the next cities with high flexibility. In contrast, at the end of the search process, the number of cities in the set tabu(k) is large, and the ant selects the next cities without any diversification. Thus, the ants will move along certain paths and lose the ability to explore other potential paths. Additionally, the convergence of the algorithm, driven by positive feedback, will be accelerated, but diversification will be weakened. To improve the search diversification and maintain the necessary intensification, a meeting strategy is introduced into the traditional ACO, and a new ACO algorithm is proposed. The meeting strategy is as follows.
When approximately half of the cities have been visited by the ants, using the same operations as in the traditional ACO, all ants should be evaluated to determine whether they can meet with other ants. This determination can be made easily based on the number of cities stored in the ants' tabu lists. Two ants are a pair of meeting ants if the union of the cities in their tabu lists covers all the cities in the problem. The meeting ants then quit the current search iteration, and a new tour is created by the meeting ants; this tour can be easily realized by combining the two ants' tabu lists. If the number of cities in the combined tabu list is larger than the total number of cities, the repeated cities in the new tabu list should be eliminated to guarantee that the number of cities in the new tabu list equals the total number of cities. Thus, according to the cities in the combined tabu list, the new tour can be constructed. Additionally, pheromone will be added to the new combined tours. Unfortunately, in practice, the number of ants in each iteration is finite, and the number of meeting ants is also limited. To normalize the search process, a threshold constant v is applied. If the number of meeting ants surpasses the threshold constant, the search process stops, and pheromone is deposited on the new tours. If the number of meeting ants does not exceed the threshold constant, the search process continues until all tours are generated, and the pheromone update law is the same as that of the traditional ACO.
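The meeting test and tour combination described above can be sketched as follows; this is an illustrative reading of the strategy with hypothetical names, not the authors' code. A pair of ants meet when their tabu lists jointly cover all n cities, and the combined tour keeps the first ant's partial tour and appends the second ant's cities, skipping repeats.

```cpp
#include <unordered_set>
#include <vector>

// Two ants form a meeting pair if the union of their tabu lists covers all n cities.
bool isMeetingPair(const std::vector<int>& tabuA,
                   const std::vector<int>& tabuB, int n) {
    std::unordered_set<int> covered(tabuA.begin(), tabuA.end());
    covered.insert(tabuB.begin(), tabuB.end());
    return static_cast<int>(covered.size()) == n;
}

// Combine the two tabu lists into one tour, eliminating repeated cities so
// that every city appears exactly once.
std::vector<int> combineTours(const std::vector<int>& tabuA,
                              const std::vector<int>& tabuB) {
    std::unordered_set<int> seen(tabuA.begin(), tabuA.end());
    std::vector<int> tour = tabuA;
    for (int city : tabuB)
        if (seen.insert(city).second)   // true only for cities not yet in the tour
            tour.push_back(city);
    return tour;
}
```

In a full implementation, each meeting ant would be removed from the current iteration and the combined tour would receive the pheromone deposit described below.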
In this algorithm, the pheromone is updated as in the traditional ACO, with additional pheromone deposited on the new tours created by the meeting ants. To diversify the search process, the pheromone values on all paths are limited to an interval [τ_min, τ_max], that is,

τ_min ≤ τ_ij(t) ≤ τ_max,

where τ_min and τ_max are the pheromone bounds, which can be determined from experiments. A detailed description of τ_min and τ_max can be found in reference [6].
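The max-min limiting step can be sketched as a simple clamp applied to every entry of the pheromone matrix after each update; the names here are illustrative.

```cpp
#include <algorithm>
#include <vector>

// Clamp every pheromone value to [tauMin, tauMax], as in the max-min AS [6].
void clampPheromone(std::vector<std::vector<double>>& tau,
                    double tauMin, double tauMax) {
    for (auto& row : tau)
        for (double& t : row)
            t = std::clamp(t, tauMin, tauMax); // requires C++17
}
```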
Because the cities on the new tour generated by a pair of meeting ants are visited by two ants rather than one, as in the traditional ACO, the time to create a new tour is reduced by approximately half. Moreover, a limiting strategy for the pheromone values is used. Because the meeting strategy is used, at the end of the search process, the ant selects the next cities with a certain amount of diversification, and its ability to explore other potential paths is not lost. Additionally, the convergence of the algorithm, driven by positive feedback, is accelerated. Therefore, in this new ACO algorithm, the search time is greatly reduced, and the search diversification of the ant colony is expanded enormously. The flow chart of the new ACO algorithm is shown in Figure 1. In this new algorithm, the termination condition is that the length difference between the optimal paths of neighboring iterations is less than 10^-5. In addition, to avoid infinite iteration, a maximum number of iterations N is given.
The proposed new ACO algorithm was also implemented in C++ in this study.

SIMULATION EXPERIMENTS
To verify the computing performance of the proposed new ACO algorithm, a number of TSP instances are used.All of these TSPs are available from the TSPLIB benchmark library, which can be found on the web at www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/.
In numerous studies on ACO for the TSP, the selection of algorithm parameters is performed through a series of experiments. By means of these experiments, a law for selecting parameters can be obtained easily. Based on this selection law and using trial calculations, the parameters of the new algorithm were determined. In this configuration, the number of ants is set equal to the number of cities, and the threshold constant is set to 1 to achieve peak performance for the algorithm. Because the new ACO algorithm is a random search algorithm, the average results from 30 runs are reported. The experiments were conducted on a laptop with an Intel Core i5-4200U 3.4 GHz CPU and 32 GB of RAM. The results are summarized in Table 1.
In Table 1, the first column shows the names of the instances, the second column shows the number of cities, and the third column shows the optimal solution taken from the TSPLIB. The column "best" shows the best solution found by the algorithm, the column "average" gives the average solution over the 30 independent runs, the column "PD_best (%)" gives the percentage deviation of the best solution from the optimal solution over 30 runs, and the column "PD_avg (%)" denotes the percentage deviation of the average solution from the optimal solution over 30 runs. PD_best and PD_avg are given by the formulas:

PD_best (%) = (best solution − optimal solution) / optimal solution × 100    (10)

PD_avg (%) = (average solution − optimal solution) / optimal solution × 100    (11)

The column "iterations" gives the average number of iterations over 30 runs, and the column "times" gives the average CPU computing time over 30 runs.
As shown in Table 1, for all 20 selected TSP instances, most of the best solutions found by the new algorithm equal the given optimal solutions, except for seven instances (Bier127, Pr136, Ch150, A280, Lin318, Rd400, and Rat575), which are shown in bold in column PD_best (%). However, the percentage deviations of the best solutions for these seven instances are all very small, less than 0.5% (except for that of Rat575, which is 1.62%). In other words, 95% of the values of PD_best (%) are less than 0.5%, which means that the best solution found in the 30 trials approximates the given optimal solution with a percentage deviation of less than 0.5%. Moreover, most of the average solutions found by the new algorithm over the 30 trials approximate the given optimal solutions, and for five instances (Att48, Eil51, Berlin52, Eil76, and Eil101), the average solutions equal the given optimal solutions. Furthermore, most of the values of PD_avg (%) are less than 1%, except for three instances (KroA200, Rd400, and Rat575). In other words, 85% of the values of PD_avg (%) are less than 1%, which means that the average solution found in the 30 trials approximates the given optimal solution with a deviation of less than 1%. The value of 0 shown in bold in column PD_avg (%) indicates that all solutions found in the 30 trials have the same length as the given optimal solution. Moreover, according to the computing times and numbers of iterations, the computing speed of the new algorithm is fast. The longest computing time and the largest number of iterations are 68.7 s and 865.7, respectively. Only five instances have computing times longer than 30 s, and only six instances require more than 500 iterations. Therefore, the numerical values presented in Table 1 show that the new ACO algorithm can indeed provide good solutions for TSPs at a fast speed.
Finally, from Table 1, it can be seen that as the scale of the TSP instance increases, the corresponding computing time also increases. However, the rates of increase differ. For example, when the problem scale increases from 51 to 200 cities (roughly a factor of four), the corresponding computing time increases from 7.3 s to 27.1 s (a factor of only about 3.7). Over this range, the scale grows by 149 cities while the computing time grows by 19.8 s, a ratio of 0.13 s per city. Moreover, when the problem scale increases from 400 to 575 cities, the corresponding computing time increases from 55.5 s to 68.7 s, a ratio of only 0.07 s per city, which is much less than 0.13. That is to say, as the problem scale increases, the growth of the computing time slows. Therefore, the computing efficiency of the new ACO algorithm improves as the scale of the TSP instance increases, which is an advantage of the new ACO algorithm.
To verify the computational performance of the new ACO algorithm proposed in this paper, the computational results of the algorithm have been compared with those of 16 state-of-the-art algorithms from the literature, most of which have been proposed in the past three years, as summarized in Table 2.
From Table 2, according to PD_best and PD_avg, of the 17 algorithms (including the new ACO algorithm proposed in this study), the BHA and HMMAS are the worst-performing algorithms. Their best solutions and average solutions are both poor. Furthermore, due to its limited results, the BHA performs more poorly than the HMMAS. Moreover, the SA-SOSOA, DBA, DCSA, and the algorithm proposed here are the four best-performing ones, and most of their best solutions are equal to the optimal solutions. Furthermore, the average solutions of these algorithms approach the optimal solutions; in fact, a number of their average solutions are equal to the optimal solutions. Therefore, the algorithm proposed here outperforms most state-of-the-art algorithms.
As shown in Figure 2, for PD_best, the proposed new algorithm is almost the best, and its PD_best is zero in most instances; the SA-SOSOA is the best, with PD_best values that are zero or negative for all instances except one. However, for PD_avg, the proposed new algorithm is not among the better ones, and the SA-SOSOA is again the best. Moreover, as shown in Table 3, for the 15 TSPs, using the algorithm proposed here, only three best results are not the optimal solutions, and the average value of PD_best is 0.13. However, only four average solutions are equal to the optimal solutions, and the average value of PD_avg is 0.44. For the DCSA, four best results are not the optimal solutions, and the average value of PD_best is 0.21, which is larger than that of the algorithm proposed here. In addition, eight of its average solutions are equal to the optimal solutions, and the average value of PD_avg is 0.42, which is similar to that of the algorithm proposed here. In other words, although the DCSA obtains more average solutions equal to the optimal solutions than the new algorithm does, its average value of PD_avg is not smaller, because some of its PD_avg values are large. Therefore, the algorithm proposed here is better than the DCSA. Moreover, for the DBA, three best results are likewise not the optimal solutions, and the average value of PD_best is 0.13, the same as that of the algorithm proposed here. Six of its average solutions are equal to the optimal solutions, and the average value of PD_avg is 0.34, which is smaller than that of the DCSA or the algorithm proposed here. Therefore, the algorithm proposed here performs more poorly than the DBA. Finally, for the SA-SOSOA, three best results are not the optimal solutions, and the average value of PD_best is only 0.005, which is much less than that of the DBA. Moreover, only five of its average solutions differ from the optimal solutions, and the average value of PD_avg is only 0.16, which is also much less than that of the DBA. Therefore, the SA-SOSOA is the best algorithm for these 15 TSPs. Nevertheless, the new algorithm proposed in this study is a highly suitable method to solve the TSP, and its performance is better than that of most state-of-the-art algorithms, with the exception of a few, such as the DBA and the SA-SOSOA.

DISCUSSION
To verify the performance of the new ACO algorithm, a comprehensive comparison between the new ACO algorithm and the traditional ACO is presented.
Based on testing and experience, the parameters of the traditional ACO were determined, and for comparison, the same parameter values were used for the new ACO algorithm. It must be noted that, to make the comparison fairer, the same parameters are used for both algorithms; however, these values may not be the most suitable for either algorithm. For the comparison, eight TSPs selected from Table 1 are used in this study: Eil51, St70, Pr76, KroA100, Lin105, Pr124, Bier127, and Ch150. The best, worst, and average results of 30 runs are obtained.
The results are shown in Table 4.
In Table 4, "RE" denotes the percentage deviation of the solution obtained by the new ACO algorithm from the solution obtained by the traditional ACO, which is defined as

RE (%) = (solution from traditional algorithm − solution from new algorithm) / solution from traditional algorithm × 100    (12)

As shown in Table 4, for all eight TSPs, the lengths found by the new ACO algorithm (best, worst, and average) are all shorter than those found by the traditional ACO. For the best results, the RE is always less than 1%, and its average value is only 0.13%. That is, the differences between the best results of the two algorithms are not very large; in other words, both algorithms can find relatively good tour lengths for these eight TSPs. For the worst results, the RE is generally less than 10%, with one exception, and its average value is 5.27%, which is much larger than that of the best results.
In other words, in the worst case, the new ACO algorithm finds much shorter tours than the traditional ACO. Moreover, for the average results, the RE is always less than 3%, and its average value is 1.77%, which is larger than that of the best results but much smaller than that of the worst results. However, for the difference between the best results and the worst results, the RE is very large, generally greater than 70%, with two exceptions, and its average value is 81.56%. In other words, the results found by the new ACO algorithm are more tightly clustered than those found by the traditional ACO; that is, the new ACO algorithm is more stable. Therefore, the new ACO algorithm significantly improves on the traditional ACO.
Moreover, the statistical variation of the results for the eight TSPs solved by the two algorithms over the 30 independent runs is shown in Table 5, and a comparison of the results is shown in Figure 3. In this study, two types of statistical indexes are used, namely, the coefficient of variation and the relative error. The coefficient of variation, defined as the ratio between the standard deviation and the mean value, gives information on the uniformity of the solutions. The relative error, defined as the percentage deviation of the computed solution from the optimal solution over 30 runs, gives information on the precision of the solutions. Here, to study the problem comprehensively, three relative error indexes are used: PD_best, PD_worst, and PD_avg. These denote the percentage deviations of the best, worst, and average solutions, respectively, from the optimal solution over 30 runs.
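The coefficient of variation can be computed directly from the 30 run lengths of an instance; this sketch uses the population standard deviation over the runs, and its names are illustrative.

```cpp
#include <cmath>
#include <vector>

// Coefficient of variation: standard deviation of the run lengths divided
// by their mean. Smaller values indicate more uniform solutions.
double coefficientOfVariation(const std::vector<double>& runs) {
    double mean = 0.0;
    for (double x : runs) mean += x;
    mean /= runs.size();
    double var = 0.0;
    for (double x : runs) var += (x - mean) * (x - mean);
    var /= runs.size();                 // population variance over the runs
    return std::sqrt(var) / mean;
}
```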
As shown in Figure 3, for all eight TSPs, the results of the new ACO algorithm (including the best, worst, and average ones) are better than those of traditional ACO. Moreover, from Table 5, for traditional ACO, the average values of PD_best, PD_worst, and PD_avg are 0.14%, 6.48%, and 2.07%, respectively, whereas those for the new ACO algorithm are 0.0075%, 0.82%, and 0.98%. Thus, the relative errors of the new ACO algorithm are much smaller than those of traditional ACO; in other words, its computational precision is much better. Moreover, the coefficients of variation of the new ACO algorithm, with an average value of 0.98, are much smaller than those of traditional ACO, whose average value is 1.62, so the uniformity of its solutions is also much better. Therefore, the new ACO algorithm greatly improves both the computational results and the computational stability for TSPs. Furthermore, as the TSPs become more complex (longer tours or more cities), the advantage of the new ACO algorithm over traditional ACO becomes more pronounced. A likely reason is that for more complex TSPs, the meeting strategy of the new ACO algorithm is invoked more frequently, so its benefit over traditional ACO becomes more noticeable.
To compare the computing speeds of the two algorithms, the iterations at which the two algorithms reached the best results reported in Table 4 are shown in Figure 4.
As shown in Figure 4, for all TSPs, the new ACO algorithm converges faster than traditional ACO. Moreover, as the number of cities increases, the gap between the computing times of the two algorithms also increases. In other words, the more complicated the problem, the greater the computational-efficiency advantage of the new ACO algorithm.
To study the computational efficiency more comprehensively, statistics of the computing times for the eight TSPs solved by the two algorithms over 30 independent runs are shown in Table 6. Two statistical indexes are used: the average computing time and the coefficient of variation of the computing times. The average computing time indicates the overall computing speed, while the coefficient of variation, defined as the ratio between the standard deviation and the mean value, characterizes the uniformity of the computing speed.
As shown in Table 6, for all eight TSPs, the computing times of the new ACO algorithm are much shorter than those of traditional ACO, and its coefficients of variation are also smaller. To analyze the degree to which the computing time is reduced, the reduction rate, defined as the ratio of the time saved to the computing time of traditional ACO, is computed. For the eight TSPs from Eil51 to Ch150, the reduction rates are 41.1%, 43.6%, 44.9%, 46.1%, 46.8%, 48.9%, 50.2%, and 52.4%, respectively. Therefore, as the complexity of the TSPs increases, the reduction rate of the computing time also increases. Moreover, for traditional ACO, as the problem size grows from 51 to 100 cities and from 100 to 150 cities, the corresponding computing time increases from 12.4 to 19.3 and from 19.3 to 41.4, respectively, giving rates of increasing time per added city of 0.14 and 0.44. However, for the new ACO algorithm, in the same increasing ranges of the
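The two quantities used in this comparison, the reduction rate and the rate of increasing time per added city, can be sketched in Python (function names are illustrative; the numbers below are the ones reported for traditional ACO):

```python
def reduction_rate(t_traditional: float, t_new: float) -> float:
    """Time saved by the new algorithm as a percentage of the
    traditional ACO computing time."""
    return (t_traditional - t_new) / t_traditional * 100.0

def growth_rate(t1: float, t2: float, n1: int, n2: int) -> float:
    """Increase in computing time per added city between two problem sizes."""
    return (t2 - t1) / (n2 - n1)

# Rates of increasing time for traditional ACO, reproducing the reported values:
print(round(growth_rate(12.4, 19.3, 51, 100), 2))   # -> 0.14
print(round(growth_rate(19.3, 41.4, 100, 150), 2))  # -> 0.44
```

The superlinear growth (0.14 versus 0.44) reflects how traditional ACO's computing time scales with problem size.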

Figure 1 Flow chart of new ant colony optimization algorithm.

Figure 2 Comparison of four good state-of-the-art algorithms for 15 TSPs.

Figure 3 Comparison of new ant colony optimization algorithm and traditional ant colony optimization for eight TSPs.
Figure 4 Iterations at which the two algorithms reached the best results in Table 4.

Table 1 Results for 20 TSPs by new ant colony optimization algorithm.

Table 2 Comparison with 16 state-of-the-art algorithms.

Table 3 Comparison results of four good algorithms.

Table 4 Comparison results of the new ant colony optimization algorithm and traditional ant colony optimization for eight TSPs.

Table 5 Comparison of statistical results for the new ant colony optimization algorithm and traditional ant colony optimization for eight TSPs.

Table 6 Comparison of statistical results for computing time by the new ant colony optimization algorithm and traditional ant colony optimization for eight TSPs.