Article

The Discrete Carnivorous Plant Algorithm with Similarity Elimination Applied to the Traveling Salesman Problem

College of Engineering, Northeast Agricultural University, Harbin 150030, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2022, 10(18), 3249; https://doi.org/10.3390/math10183249
Submission received: 24 August 2022 / Revised: 3 September 2022 / Accepted: 5 September 2022 / Published: 7 September 2022

Abstract

The traveling salesman problem (TSP) widely exists in real-life practical applications and remains an active research topic with unsolved challenges. Existing solutions still face challenges in convergence speed, iteration time, and avoiding local optima. In this work, a new method, called the discrete carnivorous plant algorithm (DCPA) with similarity elimination, is introduced to tackle the TSP. The approach combines six steps: first, the algorithm redefines the subtraction, multiplication, and addition operations, which ensures that it can switch from continuous space to discrete space without losing information; second, a simple sorting grouping method is proposed to reduce the chance of being trapped in a local optimum; third, a similarity-eliminating operation is added, which helps to maintain population diversity; fourth, an adaptive attraction probability is proposed to balance exploration and exploitation; fifth, an iterative local search (ILS) strategy is employed, which increases the search precision; finally, to evaluate its performance, the DCPA is compared with nine algorithms. The results demonstrate that the DCPA is significantly better in terms of accuracy, average optimal solution error, and iteration time.

1. Introduction

Combinatorial optimization problems [1] explore extreme values in discrete domains and commonly exist in numerous fields such as agriculture, industry, national defense, engineering, transportation, finance, energy, and communication. Many of these problems are NP-hard [2], meaning that the spatial scale, time scale, and complexity of the solution increase exponentially as the scale of the problem grows. Therefore, many current methods are not sufficient for solving such problems. The traveling salesman problem (TSP) is one of the most extensively studied combinatorial optimization problems in operations research. The TSP and its variants are not only famous theoretical problems but also important practical application problems. It can be applied to various fields, including scheduling and drilling holes in printed circuit boards in manufacturing [3]; designing reasonable traffic roads to avoid traffic congestion [4]; planning the best transportation route to maximize the benefits to the company [5]; choosing the appropriate location for a router so that information transmission is most efficient [6]; and many other areas [7,8]. Usually, a minor improvement in solution quality or a reduction in execution time in these areas can save millions of dollars or significantly improve productivity, bringing enormous economic benefits to enterprises and society. In addition, the TSP is commonly adopted as a benchmark of algorithms' performance. Therefore, research on the TSP has high theoretical value and practical significance, and its solution methods have attracted the attention of scholars in recent years.
The TSP has only simple loop constraints, so it is easy to state. However, as a typical NP-hard problem, its calculation difficulty increases exponentially as the number of cities grows. Methods for the TSP can be broadly divided into exact algorithms and heuristic algorithms. Exact algorithms, such as linear programming [9], dynamic programming [10], and branch and bound [11], find the optimal solution by traversing the entire solution space with exponential time complexity. Thus, exact algorithms are insufficient for large TSP instances. Heuristic algorithms, such as the nearest neighbor [12] and local search algorithms [13,14], can find a suboptimal solution, or the optimal solution with a certain probability. However, they are prone to falling into local optima. To address such problems, some scholars have proposed an important branch of heuristic algorithms, called metaheuristic algorithms, which includes the ant colony algorithm (ACO) [15,16], particle swarm optimization (PSO) algorithm [17], genetic algorithm (GA) [18,19,20], differential evolution (DE) [21], sparrow search algorithm [22], artificial bee algorithm [23], etc. These algorithms solve large-scale instances by drawing inspiration from behaviors, principles, and mechanisms in biology, physics, chemistry, and other fields, and by designing heuristic rules to search the solution space. Metaheuristic algorithms are widely applied in optimization, as they can obtain a high-quality satisfactory solution in a reasonable time, which greatly enriches the algorithmic toolbox.
Most metaheuristic algorithms were created for continuous optimization problems and cannot be directly applied to solve TSPs with discrete properties [24]. To explore the performance of such algorithms on the TSP, many scholars have proposed a variety of discretization methods. References [25,26] employed the swap sequence and swap operator, reference [27] adopted swap, shift, and symmetry operators to alter the update method of the algorithm, and references [28,29] introduced the Hamming distance and a local search algorithm for updating individuals. However, among these three update methods, the first two lack heuristics and exhibit poor convergence speed, while the last lacks exploration ability and easily falls into stagnation. References [30,31] applied rank-order and rounding decoding methods, respectively. Although these two methods are simple to implement, they are not heuristic and may degrade solution quality because of their randomness. Researchers such as Kenneth Sörensen [32] argue that, in the field of optimization computing, the priority should not be proposing ever more new algorithms but rather establishing universal rules and strategies for applying optimization algorithms and studying problems common to optimization problems and algorithms. Therefore, it is necessary to design a new universal discretization method, or improve existing methods, to enhance algorithms' performance when solving the TSP.
The carnivorous plant algorithm (CPA) is a new swarm intelligence optimization algorithm proposed by Meng et al. [33], and it has been verified to successfully solve high-dimensional continuous problems. To the best of our knowledge, the CPA has not been adopted to solve discrete optimization problems. In addition, the grouping method of the CPA easily results in an imbalance in search ability among subgroups, which negatively influences the search performance of the algorithm. The growth of carnivorous plants and the updates of prey depend on the attraction probability, which in the CPA is a constant and therefore cannot improve the balance between exploration and exploitation. As a result, the CPA converges prematurely and is easily trapped in a local optimum. For optimization problems, when the algorithm finds a region with an extreme value, individuals continue to approach this extreme point. As the number of iterations grows, the number of identical or similar individuals in the population increases. If the two individuals selected for an update are highly similar or identical, the chance of generating excellent offspring is reduced, which affects the convergence speed. The limitations of discretization methods for TSPs and the promising results achieved by the CPA on continuous problems form the main motivation for this paper.
To address the above problems, a new type of individual generation method is designed in this work, using redesigned subtraction, multiplication, and addition operators to guarantee the legitimacy of TSP solutions. The method combines a set difference operation, an optimal operation, three crossover operators, and a symmetry transformation to maintain the exploration and exploitation abilities of the algorithm. Then, a simple sorting grouping method is proposed, which speeds up grouping, reduces the rate of assimilation of the population, and heightens the exploitation ability. After that, an adaptive attraction probability is designed, which enables the prey to update their positions with high probability in the early stage, heightening the exploration ability, while the carnivorous plants grow with high probability in the late stage, strengthening the exploitation ability. In addition, to maintain population diversity and avoid updates that reduce the efficiency of the algorithm, a similarity-eliminating operation is designed, based on the number of identical city sequences and on route length. The former is applied to eliminate highly similar individuals participating in an update, and the latter is applied to eliminate identical individuals in the population. Finally, the iterative local search (ILS) algorithm is introduced to jump out of local optima and find the global theoretical optimum.
The primary contributions of this work are:
(1)
A new method of carnivorous plant growth and prey updating for TSP is designed to ensure that the algorithm can switch from continuous space to discrete space without losing information.
(2)
A simple sorting grouping method is proposed to reduce the computational complexity of the algorithm.
(3)
An adaptive attraction probability is presented to balance the exploitation and exploration ability.
(4)
The similarity-eliminating operation is added to maintain population diversity and extend the search space.
(5)
ILS is adopted to improve search precision and reduce the probability of individual stagnation.
(6)
To verify the validity of the discrete carnivorous plant algorithm (DCPA), nine algorithms are employed for comparison, and the results on 34 instances from TSPLIB show the superior performance of the proposed algorithm.
The remainder of this work proceeds as follows: Section 2 gives a brief overview of the TSP and the CPA; Section 3 describes the formulation of the TSP; Section 4 presents the discrete framework of the proposed DCPA; Section 5 reports the simulation experiments and data analysis; and Section 6 concludes.

2. Related Works

2.1. Literature Review

Researchers have proposed various algorithms over the years to solve the TSP, which can be broadly grouped into permutation-coded algorithms and real-coded algorithms.
Among the permutation-coded algorithms, the ACO and GA are suitable for solving discrete optimization problems, as their update methods can directly generate legitimate offspring. Stodola et al. [34] proposed an adaptive ant colony optimization algorithm with node clustering (AACO-NC). In AACO-NC, three techniques are adopted to enhance the algorithm's performance: first, the node clustering principle is adopted to decrease the optimization time; second, an adaptive pheromone evaporation coefficient based on the population's diversity is employed to extend the search space; and third, a new termination condition based on entropy is proposed to decrease the inaccuracy of setting the iteration number. Yong W et al. [35] presented a hybrid genetic algorithm with two local optimization algorithms: one is applied to local Hamiltonian paths to enhance solution quality, and the other is applied to prevent falling into a local optimum. Q. M et al. [36] proposed a hybrid genetic algorithm with a splitting algorithm for the traveling salesman problem with a drone.
Several algorithms are only applicable to continuous optimization problems, so it is necessary to modify their update methods to maintain the discrete characteristics when solving TSPs. Wang et al. [37] improved a discrete symbiotic organism search with an excellence coefficient and a self-escape strategy; the former helps to enhance the exploitation capability, and the latter helps to maintain population diversity. Eneko et al. [28] proposed a discrete water cycle algorithm where the Hamming distance is adopted to measure the difference between two individuals, and the insert and 2-Opt operators are adopted as movement operators to avoid the local optimum. Kóczy et al. [38] presented a discrete version of the bacterial memetic evolutionary algorithm (DBMEA). In the DBMEA, three nearest neighbor heuristic methods and a random creation method are employed to generate the initial population, two local search algorithms are introduced to improve the search precision, and a combined gene transfer operation maintains population diversity. Zhong et al. [39] developed a discrete pigeon-inspired optimization algorithm (DPIO). New map and landmark operators with learning ability are employed in the DPIO; the former helps to heighten the exploration ability, and the latter helps to enhance the exploitation ability. The Metropolis acceptance criterion in that algorithm helps to avoid stagnation. A discrete bat algorithm with Lévy flight (DBAL) was designed by Saji et al. [40], where a new updating method is proposed to solve the TSP, and a crossover operator and three local search algorithms are employed to enhance the search precision. A. Benyamin et al. [41] designed a discrete farmland fertility optimization algorithm (DFFA).
In the DFFA, the swap, inversion, and insertion operators are employed to expand the search space, the Metropolis acceptance strategy is employed to prevent prematurely accepting a solution, and a crossover operator is adopted to maintain the features of the standard algorithm. G. H. Al-Gaphari et al. [42] proposed a crow-inspired algorithm with three new discrete methods to map continuous variables into discrete variables: modular arithmetic and set theory, basic operators, and a dissimilarity operation. Z. Zhang and J. Yang [43] developed a discrete cuckoo search algorithm with a random walk and a local adjustment operator for the TSP; the former is utilized to maintain population diversity, and the latter is adopted to improve the convergence rate.
Some algorithms need a mapping method to transform continuous variables into discrete variables. Samanlioglu et al. [44] improved a random-key genetic algorithm with ranked-order value decoding for the TSP. Ezugwu et al. [45] adopted the rounding method and restructured symbiotic organism search by incorporating swap, insert, and inverse operators. Ali et al. [46] adopted rank-order and best-matched decoding in a novel discrete differential evolution, where a k-means clustering-based repairing method is employed to improve the individuals' quality, and a combined mutation is introduced to maintain population diversity. F. S. Gharehchopogh and B. Abdollahzadeh [47] employed random-key encoding in the Harris hawk optimization algorithm, where a mutation operator, the Metropolis acceptance strategy, and a local search algorithm are adopted to maintain population diversity, prevent prematurely accepting a solution, and enhance solution quality, respectively. Zhang et al. [30] applied an order-based arrangement to map continuous variables into discrete ones in the discrete sparrow search algorithm (DSSA). In the DSSA, Gaussian mutation and a swap operator are adopted to maintain population diversity, and the 2-Opt algorithm is adopted to enhance search precision.

2.2. Standard Carnivorous Plant Algorithm

CPA is a metaheuristic algorithm that simulates the whole process of carnivorous plants’ attraction, predation, and digestion. The algorithm is mainly composed of four stages: the grouping, growth, reproduction, and recombination phases.

2.2.1. Grouping Phase

Let the population size be n and rank the individuals by fitness value from small to large. The n1 best individuals are regarded as carnivorous plants, and the remaining n2 individuals are regarded as prey (n1 can be chosen arbitrarily, as long as n1 < n2, n1 + n2 = n, and n2 is divisible by n1). The population is divided into n1 groups, and each subgroup comprises one carnivorous plant and n2/n1 prey. Table 1 presents the grouping process of the CPA for a population size of 12, where the population before ordering is X = (X1, X2, …, Xn1+n2), the population after ordering is X′ = (X′1, X′2, …, X′n1+n2), and the fitness values satisfy F(X′1) ≤ F(X′2) ≤ … ≤ F(X′n1+n2). The number of carnivorous plants (n1) is set to 3 in this example, so the three best individuals in the population are regarded as carnivorous plants, and the remaining individuals are prey. In Table 1, prey X′4, X′5, and X′6 are attracted by carnivorous plants X′1, X′2, and X′3, respectively; then, prey X′7, X′8, and X′9 are attracted by carnivorous plants X′1, X′2, and X′3, respectively. The process is repeated until the n2-th prey is attracted by the n1-th carnivorous plant.
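As a concrete illustration, the grouping rule above can be sketched in Python (an illustrative sketch; the function and variable names are ours, not from the paper):

```python
def group_population(population, fitness, n1):
    """Sort by fitness, take the n1 best as carnivorous plants, and assign
    the remaining prey to the n1 subgroups in round-robin order."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    ranked = [population[i] for i in order]
    plants, prey = ranked[:n1], ranked[n1:]
    groups = [{"plant": p, "prey": []} for p in plants]
    for k, pr in enumerate(prey):      # prey k joins subgroup k mod n1
        groups[k % n1]["prey"].append(pr)
    return groups
```

With 12 individuals and n1 = 3, subgroup 1 receives the 4th, 7th, and 10th best individuals, matching the Table 1 example.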

2.2.2. Growth Phase

The attraction probability is adopted in the CPA. When the attraction probability γ (γ = 0.8) is greater than a random number in [0, 1], the individual executes the growth formula of the carnivorous plants:
Xnew,i = Piv + α ⊗ (Xi − Piv)    (1)
α = gr · rand    (2)
where ⊗ indicates that two vectors are multiplied element by element at the same positions, Xi indicates the carnivorous plant in the i-th subgroup, Piv indicates the v-th prey in the i-th subgroup, rand indicates an m-dimensional random vector whose components are uniformly distributed in the range [0, 1], m is the dimension of an individual, and gr indicates the growth rate, which is equal to 2.
When γ is less than the random number, the individual executes the update formula of the prey:
Pnew,iv = Piv + α ⊗ (Piu − Piv)    (3)
α = gr · rand, if f(Piv) > f(Piu); α = 1 − gr · rand, if f(Piv) < f(Piu)    (4)
where ⊗ indicates that two vectors are multiplied element by element at the same positions, Piu and Piv indicate the u-th and v-th prey in the i-th subgroup, respectively, rand indicates an m-dimensional random vector whose components are uniformly distributed in the range [0, 1], m is the dimension of an individual, and f(Piu) and f(Piv) indicate the fitness values of the u-th and v-th prey, respectively.
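In the continuous search space of the standard CPA, Equations (1)-(4) can be sketched as follows (a minimal illustration; the function name and plain-list vector representation are ours):

```python
import random

GR = 2.0      # growth rate from the paper
GAMMA = 0.8   # attraction probability

def growth_step(plant, prey_v, prey_u, f):
    """One growth-phase update in the continuous CPA (illustrative sketch).
    f is the fitness function; smaller is better."""
    m = len(plant)
    if random.random() < GAMMA:
        # the plant grows toward the prey it attracts, Eqs. (1)-(2)
        alpha = [GR * random.random() for _ in range(m)]
        return [prey_v[j] + alpha[j] * (plant[j] - prey_v[j]) for j in range(m)]
    # otherwise the prey updates, Eqs. (3)-(4)
    if f(prey_v) > f(prey_u):
        alpha = [GR * random.random() for _ in range(m)]
    else:
        alpha = [1 - GR * random.random() for _ in range(m)]
    return [prey_v[j] + alpha[j] * (prey_u[j] - prey_v[j]) for j in range(m)]
```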

2.2.3. Reproduction Phase

In the CPA, only the best individual can execute the reproduction operation. Its mathematical model is as follows:
Xnew,i = X1 + β ⊗ (Xv − Xi), if f(Xi) > f(Xv); Xnew,i = X1 + β ⊗ (Xi − Xv), if f(Xi) < f(Xv)    (5)
β = μ · rand    (6)
where ⊗ indicates that two vectors are multiplied element by element at the same positions, X1 indicates the best individual, Xi and Xv indicate the carnivorous plants in the i-th and v-th subgroups, respectively, rand is an m-dimensional random vector whose components are uniformly distributed in the range [0, 1], m is the dimension of an individual, f(Xi) and f(Xv) indicate the fitness values of the i-th and v-th carnivorous plants, respectively, and μ indicates the reproduction rate, which is equal to 1.8.
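Equations (5) and (6) can likewise be sketched in Python (an illustrative reading of the continuous reproduction step; names are ours):

```python
import random

MU = 1.8  # reproduction rate from the paper

def reproduction_step(x_best, x_i, x_v, f):
    """Reproduction around the best individual, Eqs. (5)-(6) (sketch).
    f is the fitness function; smaller is better."""
    m = len(x_best)
    beta = [MU * random.random() for _ in range(m)]   # Eq. (6)
    if f(x_i) > f(x_v):
        return [x_best[j] + beta[j] * (x_v[j] - x_i[j]) for j in range(m)]
    return [x_best[j] + beta[j] * (x_i[j] - x_v[j]) for j in range(m)]
```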

2.2.4. Recombination Phase

Recombination refers to merging the populations before and after updating into one large population, calculating and ranking the fitness values from small to large, and selecting the n best individuals from the large population.
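The recombination rule amounts to elitist truncation selection over the merged population; a one-line sketch (names are ours):

```python
def recombine(old_pop, new_pop, f, n):
    """Merge parent and offspring populations and keep the n fittest
    individuals (smaller fitness is better)."""
    merged = old_pop + new_pop
    return sorted(merged, key=f)[:n]
```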
The procedure of the standard carnivorous plant algorithm is summarized in Algorithm 1, where n is the population size, n1 is the number of carnivorous plants, n2 is the number of prey, iter is the current iteration number, and Maxiter is the maximum iteration number.
Algorithm 1. The standard CPA
1: Initialize the relevant parameters;
2: Generate n initial individuals in the population;
3: Calculate and sort the fitness values;
4: iter = 1;
5: While iter < Maxiter
6:  iter = iter + 1;
7:  Regard the n1 best individuals as carnivorous plants and the remaining n2 individuals as prey, and sort them into subgroups as shown in Table 1;
8:  Update Xnew and Pnew with Equations (1) and (3);
9:  Update Xnew with Equation (5);
10:  Combine Xnew, Pnew, X, and P to form a new population;
11:  Sort the new population by fitness and select the n best individuals;
12: End While
13: Output the best solution and its fitness value;

3. Problem Formulation

The TSP refers to a salesman starting from a city, passing through cities 1, 2, …, m to deliver goods, and eventually returning to the starting city. Since the distance from each city to the others differs, different visiting sequences give different total path lengths; it is therefore necessary to design the sequence of cities so that each city is visited exactly once and the salesman's final journey is the shortest. If the number of cities is small, the problem is easy to calculate, but it becomes complex as the number of cities grows. The TSP can also be defined on an undirected weighted complete graph, G = (V, A), where V is the set of m vertices and A is the set of edges, with each edge (i, j) ∈ A connecting vertices i, j ∈ V. The TSP can be modeled as:
Min f(c) = Σ_{i=1}^{m} Σ_{j=1}^{m} z_ij c_ij    (7)
where f indicates the distance traveled by the salesman, cij represents the distance between city i and city j, and zij is the decision variable, defined as follows:
z_ij = 1, if city i is visited next to city j; z_ij = 0, otherwise    (8)
The calculation formula of cij is:
c_ij = √((h_i − h_j)² + (y_i − y_j)²)    (9)
where (hi, yi) and (hj, yj) indicate the coordinates of the i-th and j-th city, respectively.
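Equations (7)-(9) amount to summing Euclidean edge lengths along a closed tour; a small Python sketch (the function name and coordinate-dictionary layout are ours):

```python
import math

def tour_length(route, coords):
    """Total length of a closed TSP tour, per Eqs. (7)-(9).
    coords maps city number -> (h, y) coordinates."""
    total = 0.0
    m = len(route)
    for t in range(m):
        i, j = route[t], route[(t + 1) % m]    # wrap: return to the start city
        (hi, yi), (hj, yj) = coords[i], coords[j]
        total += math.hypot(hi - hj, yi - yj)  # Euclidean distance c_ij
    return total
```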

4. The DCPA for TSP

A discrete carnivorous plant algorithm (DCPA) for the TSP is proposed in this section, where a novel individual-generated method and several improved components are introduced.

4.1. Individual-Generated Method

The method of generating initial solutions in the CPA is unsuitable for the TSP: variables in the CPA are continuous, while solutions of the TSP are discrete. For a TSP with m cities, each route is composed of m non-repeating random integers in [1, m]. Therefore, the DCPA employs permutation encoding.
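A permutation-encoded initial population can be sketched as follows (an illustrative snippet; names are ours):

```python
import random

def init_population(n, m):
    """Generate n random permutation-encoded routes over cities 1..m."""
    population = []
    for _ in range(n):
        route = list(range(1, m + 1))
        random.shuffle(route)      # each city appears exactly once per route
        population.append(route)
    return population
```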
The CPA is suitable for continuous optimization problems. For combinatorial optimization problems such as the TSP, the solution update method should be redesigned so that it not only conforms to the properties of the TSP but also retains the good characteristics of the standard update method. Therefore, the subtraction, multiplication, and addition operations of the CPA are redefined. The details are presented in Section 4.1.1, Section 4.1.2, and Section 4.1.3.

4.1.1. Redefining the Subtraction Operation

From Section 2.2.1, the population has been divided into carnivorous plants and prey. Randomly select an individual from the carnivorous plants and denote it Xi; randomly select an individual from the prey and denote it Piv. Let U = Xi − Piv, Xi = (xi,1, xi,2, …, xi,m), and Piv = (xiv,1, xiv,2, …, xiv,m). t is the index into Xi (t ∈ {1, 2, …, m}), xi,m indicates the m-th city in Xi, xiv,m indicates the m-th city in Piv, and initially t = 1. The main steps of Xi − Piv are:
Step 1: Xi(t) = xi,t, and the index ε of xi,t in Piv is found. If ε = m, then ε = 0;
Step 2: Xi(t + 1) = xi,t+1, and the city xiv,u at position Piv(ε + 1) is found in Piv;
Step 3: If xi,t+1 = xiv,u, st = (0, 0); otherwise, st = (xi,t, xi,t+1), i.e., the edge (xi,t, xi,t+1) of Xi does not appear in Piv and is kept;
Step 4: Let t = t + 1. If t > m − 1, turn to Step 5; otherwise, Steps 1-4 are repeated;
Step 5: Let t = m. Then, Xi(t) = xi,m, and the index θ of xi,m in Piv is found. If θ = m, then θ = 0. Xi(1) = xi,1, and the city xiv,r at Piv(θ + 1) is found in Piv. If xi,1 = xiv,r, st = (0, 0); otherwise, st = (xi,m, xi,1);
Step 6: Let S = {s1, s2, …, sm};
Step 7: The cities in Xi that do not appear in S are set to 0 to obtain a new vector, Xi0, and U = Xi0;
For a clearer explanation, set an example with Xi = (6, 3, 5, 1, 2, 4) and Piv = (5, 1, 2, 3, 4, 6). From Step 1 to Step 5, s1 = (6, 3), s2 = (3, 5), s3 = (0, 0), s4 = (0, 0), s5 = (2, 4), and s6 = (0, 0) are obtained. Then, S = (6, 3, 3, 5, 0, 0, 0, 0, 2, 4, 0, 0); the cities appearing in S are {2, 3, 4, 5, 6}, so city 1 in Xi is set to 0 and U = (6, 3, 5, 0, 2, 4), according to Steps 6-7.
It can be seen from the above description that the redefined subtraction operation employs the concept of the set difference operation, which allows the offspring to inherit the characteristics of the better parent and gives the subtraction an inheritance property.
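The steps above can be condensed into a short sketch; the edge-based reading below is our interpretation of Steps 1-7 (it reproduces the worked example), and all names are ours:

```python
def subtract(xi, piv):
    """Redefined subtraction Xi - Piv: treat both routes as cycles, keep the
    edges of Xi that do not also appear in Piv, then zero out the cities of
    Xi not covered by any kept edge."""
    m = len(xi)
    succ = {piv[t]: piv[(t + 1) % m] for t in range(m)}  # successor map of Piv
    kept = set()
    for t in range(m):
        a, b = xi[t], xi[(t + 1) % m]
        if succ[a] != b:          # edge (a, b) of Xi is absent from Piv: keep it
            kept.update((a, b))
    return [c if c in kept else 0 for c in xi]
```

On the worked example, `subtract([6, 3, 5, 1, 2, 4], [5, 1, 2, 3, 4, 6])` yields (6, 3, 5, 0, 2, 4), matching U above.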

4.1.2. Redefining the Multiplication Operation

r is a random number in [0, 2]. If r ≥ 1, U′ = U, where U is obtained as in Section 4.1.1; otherwise, U′ is calculated as follows:
Step 1: Let U′ be an m-dimensional zero vector;
Step 2: V′ = (l1, l2, …, lcc) is the vector obtained by deleting the zero elements in U, where cc is the number of nonzero components in U. The distances between adjacent cities in V′ are denoted d = (dl1,l2, dl2,l3, …, dlcc−1,lcc, dlcc,l1), and the vector D is calculated with Equation (10):
D = d ⊗ q    (10)
where ⊗ indicates that two vectors are multiplied element by element, and q is a cc-dimensional random vector whose components are uniformly distributed in the range [0, 2] when the offspring are generated by Equations (1) and (3), and in the range [0, 1.8] when the offspring are generated by Equation (5).
Step 3: The smallest ⌊r · cc⌋ elements of D are selected (⌊ ⌋ denotes rounding down), and their positions are taken as indices. The components of V′ at these indices are kept and assigned to U′ at the same indices.
For a clearer explanation, set an example with U = (6, 3, 5, 0, 2, 4). If the random number r = 1.5, then U′ = (6, 3, 5, 0, 2, 4); if r = 0.43, then V′ = (6, 3, 5, 2, 4) and cc = 5. Let U′ = (0, 0, 0, 0, 0, 0), d = (15, 20, 31, 11, 35), and the random vector q = (1.91, 0.97, 1.60, 0.28, 0.84). Then, D = (28.65, 19.40, 49.60, 3.08, 29.40) and ⌊r · cc⌋ = 2; the indices of the two smallest elements of D are 2 and 4, and the values at indices 2 and 4 of V′ are 3 and 2. Then, 3 and 2 are assigned to U′ at indices 2 and 4, giving U′ = (0, 3, 0, 2, 0, 0).
From the above description, it can be seen that the redefined multiplication operation enables the offspring to inherit good characteristics from the parent. In Equation (10), a random vector and the distances between cities are both introduced into D: the former reduces the probability of falling into a local optimum, and the latter enhances the exploitation ability.
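The operation can be sketched as follows; to stay self-contained, the distance vector d and random vector q are passed in directly rather than computed from city coordinates, and all names are ours:

```python
import math

def multiply(u, r, d, q):
    """Redefined multiplication (Section 4.1.2 sketch). u is the vector U
    from the subtraction step; d and q are the adjacent-city distance and
    random vectors over the nonzero part of u."""
    if r >= 1:
        return list(u)                       # U' = U unchanged
    v = [c for c in u if c != 0]             # V': nonzero cities of U
    cc = len(v)
    D = [d[k] * q[k] for k in range(cc)]     # Eq. (10), element-wise product
    keep = math.floor(r * cc)                # number of components to keep
    idx = sorted(range(cc), key=lambda k: D[k])[:keep]
    u_new = [0] * len(u)
    for k in idx:
        u_new[k] = v[k]                      # same index as in V'
    return u_new
```

On the worked example (r = 0.43), this reproduces U′ = (0, 3, 0, 2, 0, 0).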

4.1.3. Redefining the Addition Operation

Crossover is an operation that generates new potential offspring by reconstructing parents, which helps to enhance the global search ability. The addition operation in the CPA is replaced by a crossover operation so that the offspring can not only satisfy the characteristics of the TSP but also retain the attraction of carnivorous plants to prey to improve the exploitation ability. If there is no zero vector in the parents who participate in the crossover, three crossover operators are applied to realize the addition operation; otherwise, the addition operation is realized by a symmetry transformation.
For the convenience of describing the addition operation, the Piv and U′ (Piv is randomly selected in the population, and U′ is obtained by multiplication) with eight cities are set as an example, and the distances between eight cities are shown in Table 2.
1. Partial heuristic crossover operator
When ||U′|| ≠ 0 and U′ contains zero elements, the offspring generated by a traditional crossover operator may not be a legal TSP route. To fix this problem, a partial heuristic crossover operator is proposed. Suppose Piv = (5, 7, 4, 6, 1, 8, 3, 2) and U′ = (4, 5, 0, 0, 0, 0, 2, 8); the main steps to generate offspring O are as follows:
Step 1: O = U′;
Step 2: The nonzero cities u1 and u2 that are nearest to a 0 element are found in O. Suppose u1 is randomly selected; taking the position of u1 in Piv as the starting point, traverse clockwise and counterclockwise, respectively, and two new individuals, a1 and a2, with first component u1, are generated;
Step 3: The cities v1 and v2, which are nearest to u1, are found in a1 and a2;
Step 4: Determine whether cities v1 and v2 are included in U′, and three situations may exist at this time:
Case 1: If v1 and v2 are not included in U′, the distances between u1 and v1, and u1 and v2 are compared, and the city with the shortest distance is selected to replace element 0 after u1 in O. Then, turn to Step 5.
Case 2: If only one of v1 and v2 is included in U′, the included city is deleted in a1 and a2 and the nonexistent city is selected to replace the element 0 after u1 in O. Then, turn to Step 5.
Case 3: If both v1 and v2 are included in U′, v1 and v2 are deleted in a1 and a2, and cities e1 and e2 adjacent to u1 are found in a1 and a2. Let v1 = e1 and v2 = e2. Then, Step 4 is repeated.
Step 5: If element 0 still exists in O, turn to Step 2; otherwise, output O.
2. Bidirectional heuristic crossover operator
For the TSP, if the distance between the t-th (t = 1, 2, …, m − 1) visited city and the (t + 1)-th visited city is the smallest, the chances of a short route will increase. The bidirectional heuristic crossover operator [48] can select the city with the shortest distance from the cities next to the t-th city in the parents as the (t + 1)-th visited city, which can enhance the search precision. Piv = (5, 7, 6, 1, 3, 2, 8, 4) and U′ = (7, 4, 6, 1, 8, 3, 2, 5) are randomly generated and set as an example. The main steps of this operator are:
Step 1: Assuming that the integer 4 is randomly generated between [1, 8], traverse Piv and U′ clockwise and counterclockwise, respectively, starting from city 4; four new individuals, a1, a2, a3, and a4, are generated;
Step 2: The cities 5, 8, 6, and 7 adjacent to city 4 are found in a1, a2, a3, and a4, and calculate the distances between city 4 and cities 5, 8, 6, and 7. City 5, with the shortest distance, is selected as the second visited city of O;
Step 3: City 4 is deleted in Piv and U′. Step 2 is repeated with city 5 as the starting city to obtain b1, b2, b3, and b4. The cities 8, 2, and 6 adjacent to city 5 are found in b1, b2, b3, and b4, and calculate the distances between city 5 and cities 8, 2, and 6. City 8, with the shortest distance, is regarded as the third visited city of O;
Step 4: The determined city in O is deleted in Piv and U′, and Step 3 is repeated until the city size of O is 8;
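A compact reading of the bidirectional heuristic crossover is sketched below; instead of materializing the four traversal lists a1-a4, the four cyclic neighbours of the current city in the two parents are taken directly as candidates. The distance function is supplied by the caller (Table 2 of the paper is not reproduced here), and all names are ours:

```python
def bidirectional_crossover(p1, p2, dist, start):
    """Bidirectional heuristic crossover (sketch). From the current city,
    the clockwise and counterclockwise neighbours in both parents are the
    candidates, and the nearest unvisited one becomes the next city."""
    r1, r2, tour = list(p1), list(p2), [start]
    cur = start
    while len(tour) < len(p1):
        cands = set()
        for r in (r1, r2):
            k = r.index(cur)
            cands.add(r[(k - 1) % len(r)])   # counterclockwise neighbour
            cands.add(r[(k + 1) % len(r)])   # clockwise neighbour
        cands.discard(cur)
        nxt = min(cands, key=lambda c: dist(cur, c))
        r1.remove(cur)                       # delete the determined city
        r2.remove(cur)                       # from both parents
        tour.append(nxt)
        cur = nxt
    return tour
```

Because determined cities are deleted from both parents, every candidate is unvisited and the result is always a legal route.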
3. Completely mapped crossover operator
Zahid [49] proposed a completely mapped crossover operator in 2020. Suppose Piv = (1, 2, 3, 4, 5, 6, 7, 8) and U′ = (6, 1, 5, 3, 8, 7, 2, 4) in this part. These two individuals are set as an example, and 1-8 represent different city numbers. In Piv, the indices of cities 1-8 are 1-8. In U′, the index of city 6 is 1, the index of city 1 is 2, and so on. The main steps of the completely mapped crossover operator are:
Step 1: if Piv = U′, O = Piv. Otherwise, Step 2 is executed.
Step 2: Compare two cities corresponding to the same index in Piv and U′ from the first component to the eighth component until the two cities are different so that W1 is equal to the city corresponding to the index in U′.
W1 = (6)
Step 3: The index k1 with city 6 is found in Piv, and the city corresponding to k1 in U′ is 7. The index k2 with city 7 is found in Piv, and the city corresponding to k2 in U′ is 2. Then, W2 is:
W2 = (2)
Step 4: The index k3 with city 2 (the last city in W2) is found in Piv, and the city corresponding to k3 in U′ is 1, which is appended to W1. Then, the index with city 1 is found in Piv and its corresponding city in U′ is 6; the index k4 with city 6 is found in Piv, and the city corresponding to k4 in U′ is 7, which is appended to W2. Then, W1 and W2 are:
W1 = (6 1)
W2 = (2 7)
Step 5: If the first city in Piv appears in W2, turn to Step 6; otherwise, Step 4 is repeated.
W1 = (6 1 2 7)
W2 = (2 7 6 1)
Step 6: Let O1 = Piv and O2 = U′. The cities of O1 that appear in W1 and the cities of O2 that appear in W2 are replaced by *, where * indicates that the city is unknown. O1 and O2 are:
O1 = (* * 3 4 5 * * 8)
O2 = (* * 5 3 8 * * 4)
Step 7: The * in O1 and O2 are replaced by the cities in W2 and W1 in sequence, respectively.
O1 = (2 7 3 4 5 6 1 8)
O2 = (6 1 5 3 8 2 7 4)
Step 8: The individual with the better fitness value among O1 and O2 is selected as the newly generated offspring O.
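The chain-building and filling steps above can be sketched in Python. The mapping f sends a city in Piv to the city at the same index in U′; the loop bound guarding against non-terminating chains is an added assumption, not part of the paper's description.

```python
def completely_mapped_crossover(piv, u):
    """Sketch of Steps 1-7: build the chains W1/W2, then cross-fill the
    starred positions of the two parents."""
    if piv == u:
        return piv[:], u[:]
    m = len(piv)
    f = {c: u[piv.index(c)] for c in piv}            # same-index mapping Piv -> U'
    i = next(k for k in range(m) if piv[k] != u[k])  # first differing index
    w1 = [u[i]]
    w2 = [f[f[w1[-1]]]]
    while piv[0] not in w2:
        w1.append(f[w2[-1]])
        w2.append(f[f[w1[-1]]])
        if len(w1) > m:                              # safety guard (assumption)
            break
    o1, o2 = piv[:], u[:]
    # positions of W1 cities in Piv are filled with W2 in sequence, and vice versa
    for pos, city in zip(sorted(piv.index(c) for c in w1), w2):
        o1[pos] = city
    for pos, city in zip(sorted(u.index(c) for c in w2), w1):
        o2[pos] = city
    return o1, o2
```

On the worked example this reproduces W1 = (6 1 2 7), W2 = (2 7 6 1) and the offspring O1 = (2 7 3 4 5 6 1 8), O2 = (6 1 5 3 8 2 7 4); Step 8 then keeps whichever of the two has the better fitness.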
Among the three crossover operators, the partial heuristic crossover operator applies to the case where element 0 is included in the parent; otherwise, the bidirectional heuristic crossover or the completely mapped crossover operator is selected. The former considers the distances between cities and exhibits a fast convergence speed but easily falls into local optima; the latter exhibits a strong exploration ability but a slow convergence speed. To better balance the search ability, the completely mapped crossover operator is employed with high probability in the early iterations and the bidirectional heuristic crossover operator with high probability in the late iterations. Thus, an adaptive parameter ρ is designed with Equation (11):
ρ = 0.7 − 0.4(T/Tmax)^0.8
where T is the current running time and Tmax is the maximum running time.
When ||U′|| ≠ 0, the selection methods of the three crossover operators are as follows. In the growth phase, if element 0 is not included in U′ and the random number k ≤ ρ (k ∈ [0, 1]), the completely mapped crossover operator is selected; otherwise, the bidirectional heuristic crossover operator is selected. If element 0 is included in U′, the partial heuristic crossover operator is selected. In the reproduction phase, if element 0 is not included in U′, the bidirectional heuristic crossover operator is selected; otherwise, the partial heuristic crossover operator is selected.
4. Symmetry transformation
A zero vector may be generated after the subtraction and multiplication operations, that is, ||U′|| = 0. The three crossover operators do not apply when a zero vector is included in the parent individuals. To realize the addition operation in this case, a symmetry transformation is applied to Piv. The specific steps of the symmetry transformation are as follows:
Step 1: Four distinct random integers, N1, N2, N3, and N4, are generated in [1, m] and sorted from small to large, where m is the city size;
Step 2: The cities between N1 and N2, and N3 and N4 are flipped in Piv;
Step 3: The city sequences between N1 and N2, and N3 and N4 are swapped in Piv to generate the offspring O.
Suppose Piv = 5 7 4 6 1 8 3 2, N1 = 2, N2 = 3, N3 = 5, and N4 = 6. The individual after symmetry transformation is O = 5 8 1 6 4 7 3 2.
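The flip-and-swap steps can be sketched in Python. Indices are 1-based as in the text, and the two segments may have different lengths, so the swap in Step 3 rebuilds the tour from five pieces.

```python
def symmetry_transform(piv, n1, n2, n3, n4):
    """Steps 2-3: flip the segments [n1, n2] and [n3, n4], then swap them.
    Assumes 1-based indices with n1 < n2 < n3 < n4, as in the example."""
    p = piv[:]
    p[n1 - 1:n2] = reversed(p[n1 - 1:n2])   # flip first segment
    p[n3 - 1:n4] = reversed(p[n3 - 1:n4])   # flip second segment
    # swap the two (possibly unequal-length) segments
    a, b, c, d, e = p[:n1 - 1], p[n1 - 1:n2], p[n2:n3 - 1], p[n3 - 1:n4], p[n4:]
    return a + d + c + b + e
```

Running it on the example in the text, Piv = 5 7 4 6 1 8 3 2 with (N1, N2, N3, N4) = (2, 3, 5, 6) yields O = 5 8 1 6 4 7 3 2.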
The completely mapped crossover operator exhibits inheritance but does not exhibit a heuristic. Compared with the other two crossover operators, it has a strong global search ability. Partial heuristic crossover operators have some heuristics and inheritance. Compared with the other two crossover operators, the global search ability and local search ability are relatively balanced. The bidirectional heuristic crossover operator exhibits heuristic and strong local search abilities compared with the other two crossover operators. In addition, symmetry transformation can maintain population diversity. Therefore, the addition operation takes into account global and local search capabilities, which is beneficial for improving the performance of the algorithm.

4.2. Simple Sorting Grouping

The CPA divides the population into n1 subgroups, each composed of one carnivorous plant and n2/n1 prey; the details are shown in Table 1. It can be found that: (1) the implementation is complex; (2) the quality of the subgroups decreases with the ranking of the carnivorous plants, which leads to an imbalance in search ability among the different subgroups; and (3) the quality and search ability of the individuals in the first subgroup are significantly better than those in the other subgroups. After the offspring individuals are grouped, individuals carrying the information of the first subgroup may become the local optimum of the other subgroups, which increases the assimilation of individuals and the probability of falling into a local optimum.
To address the above problems, a simple sorting grouping method is proposed, which sorts the individuals according to the fitness value and divides them into two groups for updating with randomly selected individuals. The specific operations are as follows:
Step 1: The n individuals are sorted by fitness value in ascending order, where n is the population size;
Step 2: The population is divided into two groups. The first group, the carnivorous plants, consists of the first n1 individuals, and the second group, the prey, consists of the remaining n2 individuals (n1 can be chosen arbitrarily as long as n2 > n1 and n1 + n2 = n are satisfied and n2 is divisible by n1).
Step 3: In the update method of the growth phase and reproduction phase, the carnivorous plants are randomly selected from the first group, and the prey are randomly selected from the second group.
Simple sorting grouping divides the population into two groups according to the fitness values of the individuals, which makes grouping simpler and faster. Carnivorous plants and prey are randomly selected from the two groups for updating, which reduces the probability of becoming stuck in a local optimum and improves the exploration ability.
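A minimal Python sketch of the grouping, assuming that a smaller fitness value (shorter route) is better:

```python
def simple_sorting_grouping(population, fitness, n1):
    """Sort by fitness (ascending) and split into carnivorous plants
    (best n1 individuals) and prey (the remaining n2 = n - n1)."""
    ranked = sorted(population, key=fitness)
    return ranked[:n1], ranked[n1:]
```

During the growth and reproduction phases, carnivorous plants are then drawn at random from the first returned group and prey from the second.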

4.3. Adaptive Attraction Probability

The growth of carnivorous plants or the update of prey in the CPA depends on the attraction probability, γ. The update of prey can enhance the exploration ability, and the growth of carnivorous plants can enhance the exploitation ability. However, γ in the CPA is a constant 0.8, which does not improve the balance between the exploration and exploitation ability. Therefore, an adaptive attraction probability is designed with Equation (12):
γ = 0.45 + 0.45(T1/Tmax)^0.4
where T1 is the current running time and Tmax is the maximum running time.
When γ is greater than the random number, λ (λ ∈ [0, 1]), the carnivorous plant grows; otherwise, the prey updates its position. It can be seen from Equation (12) that γ is small in the early stage of iteration, and the prey has a high probability of updating, which exhibits a strong exploration ability. With the increase in time, the probability of the carnivorous plant growing increases, which exhibits a strong exploitation ability.
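Equation (12) and the grow-or-update decision can be sketched as:

```python
import random

def gamma(t1, t_max):
    """Adaptive attraction probability of Equation (12):
    grows from 0.45 toward 0.9 as running time increases."""
    return 0.45 + 0.45 * (t1 / t_max) ** 0.4

def phase_choice(t1, t_max):
    """Carnivorous plant grows when gamma exceeds the random number lambda."""
    lam = random.random()
    return "grow carnivorous plant" if gamma(t1, t_max) > lam else "update prey"
```

Early in the run, γ ≈ 0.45, so prey updates (exploration) dominate; near Tmax, γ ≈ 0.9 and carnivorous plant growth (exploitation) dominates.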

4.4. Iterated Local Search Algorithm

The idea of ILS [50] is to perturb the local optimal solution obtained by the local search algorithm. Then, the local search algorithm is conducted on the perturbed individual. Three parts are mainly included in ILS: a local search algorithm, a perturbation method, and an acceptance criterion. In ILS, the local search algorithm is first applied to the population. Then, the optimal individual is selected, and the double-bridge perturbation is executed to obtain an intermediate solution. Finally, the intermediate solution is searched locally, and the newly generated solution is used to replace the optimal solution if its fitness value is better than the optimal solution. The details of ILS are shown below.
  • Local search algorithm
The local search process searches the neighborhood of the current solution to find a better solution and updates the solution until it cannot be improved. The 2-Opt algorithm [51] is an efficient method for the TSP, which can quickly eliminate the intersecting edges in each path and enhance search precision. Its main idea is that, for each route, two nonadjacent edges are exchanged in turn, and exchanges that shorten the route are retained.
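A first-improvement 2-Opt sketch in Python; dist is a symmetric distance matrix, and a segment reversal is kept whenever it shortens the closed tour:

```python
def two_opt(route, dist):
    """Repeatedly reverse segments while an improving 2-Opt move exists."""
    n = len(route)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = route[i], route[i + 1]
                c, d = route[j], route[(j + 1) % n]
                if a in (c, d):   # skip moves on adjacent edges
                    continue
                # replacing edges (a,b) and (c,d) by (a,c) and (b,d)
                if dist[a][b] + dist[c][d] > dist[a][c] + dist[b][d]:
                    route[i + 1:j + 1] = reversed(route[i + 1:j + 1])
                    improved = True
    return route
```

On a unit square with the crossing tour [0, 1, 2, 3], one pass uncrosses the edges and returns the perimeter tour of length 4.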
2. Perturbation
Four nonadjacent edges are selected in the double-bridge [52] perturbation to be disconnected and reconnected, aiming to reduce the probability of becoming stuck in a local optimum. X1 = (x1^1, x1^2, …, x1^m) is taken as an example to illustrate the process of the double-bridge, where m is the city size and x1^m indicates the m-th city in X1.
Step 1: Integers J1, J2, J3, and J4 are randomly generated in [2, m − 6], [2 + J1, m − 4], [2 + J2, m − 2], and [2 + J3, m], respectively;
Step 2: Let G1 = J1 − 1, G2 = J2 − 1, G3 = J3 − 1, and G4 = J4 − 1;
Step 3: The cities corresponding to indexes J1, J2, J3, J4, G1, G2, G3, and G4 in X1 are regarded as aa1, aa2, aa3, aa4, bb1, bb2, bb3, and bb4, respectively.
Step 4: The edges (aa1, bb1), (aa2, bb2), (aa3, bb3), and (aa4, bb4) are disconnected, and the edges (aa1, bb3), (aa2, bb4), (aa3, bb1), and (aa4, bb2) are reconnected.
An example with X1 = (3, 2, 1, 6, 5, 4, 7, 10, 9, 11, 8) is provided in Figure 1a, and the path after the double-bridge perturbation is provided in Figure 1b.
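The four-cut perturbation can be sketched in Python. Reconnecting the edges as in Step 4 is equivalent to cutting the cyclic tour before each J_k and reordering the four segments as S1, S4, S3, S2; this reading of the cut points is an interpretation of the text, not a verbatim transcription.

```python
import random

def double_bridge(x):
    """Double-bridge perturbation with four 1-based cut points (requires m >= 8)."""
    m = len(x)
    j1 = random.randint(2, m - 6)
    j2 = random.randint(j1 + 2, m - 4)
    j3 = random.randint(j2 + 2, m - 2)
    j4 = random.randint(j3 + 2, m)
    # cut the cyclic tour just before each j_k (1-based -> 0-based slicing)
    s1 = x[j1 - 1:j2 - 1]
    s2 = x[j2 - 1:j3 - 1]
    s3 = x[j3 - 1:j4 - 1]
    s4 = x[j4 - 1:] + x[:j1 - 1]          # segment wrapping past the tour end
    return s1 + s4 + s3 + s2              # reconnected order
```

Whatever cut points are drawn, the result visits every city exactly once, so it remains a valid tour over the same cities.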
The pseudo-code of ILS is summarized in Algorithm 2. In Algorithm 2, XX indicates the individual in the population.
Algorithm 2. ILS
1: Execute the 2-Opt algorithm on XX and calculate the fitness of the population after the 2-Opt algorithm;
2: Find the best individual and record it as Xi;
3: Execute the double-bridge perturbation and the 2-Opt algorithm on Xi and regard the individual after the two operations as X*;
4: If f(X*) < f(Xi)
5:  Replace Xi with X*;
6: End
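Algorithm 2 reduces to a small driver once the local search and perturbation are passed in as functions; a minimal sketch follows, with the population-replacement detail simplified to returning the surviving best solution.

```python
def ils_step(population, f, local_search, perturb):
    """One ILS pass: improve every individual, perturb the best,
    re-improve it, and keep the perturbed solution only if it is better."""
    pop = [local_search(x) for x in population]
    best = min(pop, key=f)                 # incumbent after local search
    cand = local_search(perturb(best))     # double-bridge + local search
    return cand if f(cand) < f(best) else best
```

In the DCPA, local_search would be the 2-Opt procedure and perturb the double-bridge move; here they are plug-in callables so the control flow stands on its own.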

4.5. Similarity-Eliminating Operation

For a multi-extremum optimization problem, the individuals in the population continually move toward extreme points and finally gather near these points as the iteration time grows. Therefore, the number of similar and identical individuals increases with the iteration time, which leads to poor population diversity and reduces the possibility of generating excellent individuals.
To avoid the above phenomenon, the similarity-eliminating operation is added to DCPA with two suboperations: suboperation 1 is based on the same quantity of city sequences, and suboperation 2 is based on the route length.
  • The similarity-eliminating operation based on the same quantity of city sequences:
The solution of the TSP is a sequence of m nonrepeating integers, where m is the city size. The number of same city sequences shared by different individuals increases as the iterations proceed. If the quantity of the same city sequences in two individuals is greater than m^0.8, these individuals are regarded as similar. X1 = (1 2 3 4 5 6 7 8 9) and X2 = (7 9 1 2 3 4 8 5 6) are randomly generated and taken as an example to illustrate the similarity-eliminating operation based on the same quantity of city sequences. Each component in X1 and X2 represents a city number. X1 and X2 are depicted in Figure 2.
Step 1: From Figure 2, it can be found that the quantity of the same city sequences of X1 and X2 is 6, which are (1, 2), (2, 3), (3, 4), (5, 6), (6, 7), and (9, 1).
Step 2: X1 and X2 are regarded as similar individuals because the quantity of the same city sequences satisfies 6 > 9^0.8 ≈ 5.80, where 9 is the city size.
Step 3: The individual X1, with the shorter route length, is preserved, and X2 is removed. Then, a new X2 is randomly generated to maintain the same population size.
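Suboperation 1 can be sketched by comparing the sets of directed adjacent city pairs (including the wrap-around pair) of two tours:

```python
def is_similar(x1, x2):
    """Two tours are similar if they share more than m^0.8 directed
    adjacent city pairs, counting the wrap-around pair (last, first)."""
    m = len(x1)
    pairs = lambda x: {(x[i], x[(i + 1) % m]) for i in range(m)}
    return len(pairs(x1) & pairs(x2)) > m ** 0.8
```

For the example above, the two tours share 6 pairs and 9^0.8 ≈ 5.80, so they are flagged as similar, whereas a tour and its reversal share no directed pairs at all.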
2. The similarity-eliminating operation based on the route length:
Individuals with the same route length in the population are considered identical. The steps of the similarity-eliminating operation based on the route length are as follows:
Step 1: Calculate and find individuals with the same route length in the population;
Step 2: One of the same individuals is kept and the rest are removed;
Step 3: The removed individuals are replaced with randomly generated individuals to maintain the same population size.
An example with population size n = 10 and city size m = 6 is given in Table 3, where I is the initial population, f is the route length of the individuals, I1 is the population after removing the identical individuals, Y is the randomly generated individuals, and I′ is the population after the similarity-eliminating operation based on the route length. Bold values in f indicate individuals with the same route length.
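Suboperation 2 can be sketched as a deduplication pass keyed on route length; route_len and random_individual are placeholder callables standing in for the TSP objective and the random-tour generator.

```python
def eliminate_identical(population, route_len, random_individual):
    """Keep one individual per distinct route length; replace the
    rest with freshly generated individuals to preserve population size."""
    seen, out = set(), []
    for x in population:
        length = route_len(x)
        if length in seen:
            out.append(random_individual())  # refill slot with a new random tour
        else:
            seen.add(length)
            out.append(x)
    return out
```

The first individual at each route length survives (Step 2), and every duplicate slot is refilled (Step 3), so the returned population has the same size as the input.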
According to the description, identical individuals must be similar individuals, but the converse is not true. For example, 1 2 3 4 and 4 1 2 3 are identical and similar individuals; 1 2 3 4 5 6 7 8 9 and 7 9 1 2 3 4 8 5 6 are similar individuals but not identical individuals. Suboperation 1 performs the similarity-eliminating operation on the individuals that execute the update method; therefore, identical individuals may still exist in the recombination phase. To maintain population diversity and reduce the possibility of becoming trapped in a local optimum, suboperation 2 is added after the recombination phase.

4.6. The Framework of DCPA

The framework of the DCPA is shown in Figure 3. It can be observed that the performance of the DCPA is mainly determined by the offspring update method, the grouping method, the local search strategy, and the similarity-eliminating operation. A new generation method that redefines the subtraction, multiplication, and addition operations is designed in the DCPA. The method not only satisfies the legitimacy of the TSP but also enhances the search precision. In addition, a simple sorting grouping method is proposed, which improves the grouping speed and exploration ability compared with the method in the CPA. At the same time, an adaptive attraction probability is proposed to better balance the search ability, which fixes the defect that γ is a constant. Then, ILS is introduced to heighten the exploitation ability and reduce the probability of solution stagnation. Finally, the similarity-eliminating operation composed of two suboperations is added, which can identify not only identical individuals but also individuals with high similarity in the update operation, so that the DCPA can maintain population diversity during the iteration. Therefore, the DCPA can better balance the exploration and exploitation abilities.
The pseudo-code of the DCPA is summarized in Algorithm 3, where n is the population size, T is the iteration time of the proposed algorithm, and Tmax is the maximum iteration time.
Algorithm 3. The DCPA
1: Let T = 0;
2: Randomly generate n initial individuals, record the population as A, and calculate its fitness;
3: Divide the individuals into two groups according to Section 4.2;
4: While T < Tmax
5:  Perform the eliminating operation based on the same city sequence on the involved individuals and the offspring generated in the growth phase by Equations (1) and (4) with the redesigned subtraction, multiplication, and addition operators;
6:  Perform the eliminating operation based on the same city sequence on the involved individuals and the offspring generated in the reproduction phase by Equation (5) with the redesigned operators, and record the population as B;
7:  Perform Algorithm 2 on B and record the population as C;
8:  Combine A, B, and C to form a new population, recorded as I;
9:  Perform the eliminating operation based on the route length on I, which is discussed in Section 4.5;
10: Select the n best chromosomes from I according to the objective function;
11: Record the running time and update T;
12: End While
13: Output the best solution and the optimal value;

5. Experimental Results and Discussions

The algorithms involved in this work were developed in Matlab R2019b. To ensure the fairness of the experiments, all experiments were performed on the same computer, which ran Windows 10 and operated at 3.09 GHz with 32 GB of RAM. Several comparisons were conducted to evaluate the performance of the proposed algorithm on various instances from TSPLIB: (1) self-comparison, where the algorithm was compared with other versions of the DCPA to evaluate the significance of the improvements; (2) the DCPA without a local search algorithm was compared with other algorithms without local search; and (3) the DCPA was compared with other algorithms with local search.

5.1. Experiment Termination Conditions

Due to the different complexities of the algorithms participating in the comparison, the traditional method of setting the maximum number of iterations as the termination condition could not fairly measure the performance. Thus, the maximum iteration time with different city sizes was set as the termination condition in this work, which is shown in Table 4. In addition, if the length of the current optimal route was less than or equal to the best known optimum before the maximum iteration time, the iteration was ended. Each experiment was performed with 20 independent runs, and the best solution of each dataset for each run was recorded.

5.2. Comparisons and Analysis

To evaluate the performance of the proposed algorithm, the minimum (Best), maximum (Worst), average (Avg), standard deviation (SD), and deviation percentage of the Avg (PDA) values were adopted as a measurement. The Friedman test [53,54] and Holm’s procedure [55] were employed to verify the significant differences among the participating algorithms.

5.2.1. Validation of the Improved Component

Several versions of the proposed algorithm, starting from the CPA with the swap sequence and with ILS up to the final version of the DCPA, which incorporates a new individual-generation method, the similarity-eliminating operation, the simple sorting grouping method, the adaptive attraction probability, and a local search algorithm, are described in Table 5. For the seven algorithms in Table 5, the population size was n = 100, the number of carnivorous plants was 25, and the number of prey was 75. For each algorithm, each instance was run 20 times independently, and the maximum running times are summarized in Table 4.
  • Comparison with the other individual-generated methods
Three algorithms, Version 1–Version 3, were adopted to illustrate the effectiveness of our proposed individual generation method, and the results are shown in Table 6. The description of the participating algorithms is shown in Table 5.
It can be observed from Table 6 that the PDA of Version 3 was less than 3.35% on 11 instances and more than 3.35% on 5 instances, namely, rat99, eil101, bier127, ch130, and d198. However, the PDA of the other two methods was more than 3.35% on 16 instances, and compared with the other two methods, our method could obtain better solutions on 15 instances, except on bay29. In addition, the mean values of Version 3 were superior to those of the other two methods on 15 instances, except on d198. The above analyses verified that our individual-generated method is suitable for the TSP and can direct the produced discrete solutions toward optimality compared with the other two methods.
2. Self-comparisons
Several self-comparisons between different versions of the DCPA were conducted to illustrate the effect of each improvement component (simple sorting grouping, similarity-eliminating operation, and adaptive attraction probability) on the overall performance of the proposed DCPA, and the results are summarized in Table 7. The description of the participating algorithms is shown in Table 5.
Compared with Version 3 in Table 7, it can be noticed that the PDA obtained by Version 4 lost only on pr107 and improved on 21 instances. The PDA obtained by Version 5 won on 22 instances compared with Version 4. With regard to the PDA, Version 6 lost on 6 instances and performed better on 16 instances compared with Version 5. The analyses above indicate that adding the new components increases the performance of the proposed algorithm.
The mean rank and final rank of the Friedman rank test are depicted in Figure 4, where the mean rank gradually decreased and the final rank gradually increased among Version 3, Version 4, Version 5, and Version 6, which verified that each added component enhanced the algorithm’s performance.
The Friedman statistic, χ², with a significance level of 0.05 is summarized in Table 8, where χ² is 59.62. The critical value χ²(0.05, 3) is 7.82 with s − 1 = 3 degrees of freedom. Thus, the null hypothesis is rejected at α = 0.05.
Holm’s procedure was used to further verify the significant impact of a single component on the algorithm performance. The results of all pairwise comparisons are summarized in Table 9, where v is the rank value corresponding to the unadjusted p-value, sorted from small to large.
As presented in Table 9, it is noted that the statistical test does not show a significant difference between Version 6 and Version 5 for the adjusted p-value and unadjusted p-value, which are more than 0.05 at a 95% confidence level, and this is because the adaptive attraction rate focuses on enhancing the exploration rather than the exploitation ability of the algorithm. However, compared to Version 5, Version 6 enhanced the mean PDA, which verified that the adaptive attraction rate plays a positive role in the algorithm’s performance. The rest of the adjusted p-values are all lower than 0.05, which confirms that the similarity-eliminating operation and the simple sorting group can significantly enhance the algorithm’s performance. The analyses above confirmed the efficiency of the introduced components in improving the performance of the proposed algorithm.

5.2.2. Validation of the Proposed Algorithm

  • Compared with the algorithms without the local search algorithm:
In this paper, three algorithms without local search were compared with DCPA without iterated local search (Version 6), which include DJAYA [27], the agglomerative greedy brain storm optimization algorithm (AGBSO3) [56], and DSMO [25]. The parameters of the participating algorithms are summarized in Table 10, and the comparison results on 10 instances from TSPLIB are shown in Table 11.
According to the results in Table 11, it can be observed that Version 6 achieved a smaller PDA (%) on 8 out of 10 instances. For Version 6, the PDA (%) was no more than 2.45% on all 10 instances, whereas this held for only six instances for DSMO, nine for AGBSO3, and one for DJAYA. The experimental results illustrate that Version 6 has a strong optimization capability and stably computes a satisfactory solution in each experiment. Thus, our proposed method without the local search algorithm is robust and effective in solving the TSP compared with the other three algorithms without local search.
The mean rank and final rank of the Friedman rank test are depicted in Figure 5, where the mean rank and final rank of Version 6 are better than the other three algorithms.
The Friedman statistic, χ², with a significance level of 0.05 is summarized in Table 12, where χ² is 23.3. The critical value χ²(0.05, 3) is 7.82 with s − 1 = 3 degrees of freedom. Thus, the null hypothesis is rejected at α = 0.05.
Holm’s procedure was used to find the concrete pairwise comparisons that produced significant differences among the four participating algorithms. The results of Holm’s procedure are summarized in Table 13.
As shown in Table 13, the unadjusted p-values and adjusted p-values are all lower than 0.05, which illustrates that there are significant differences between Version 6 and DJAYA, Version 6 and DSMO, and Version 6 and AGBSO3. From the analysis in Figure 5 and Table 12 and Table 13, it can be confirmed that Version 6 performed significantly better than DJAYA, DSMO, and AGBSO3.
2. Compared with the algorithms with the local search algorithm:
To demonstrate the competitiveness of the DCPA, 27 instances with 29 up to 1577 cities were selected, and the DCPA was compared with six algorithms from the literature, namely, DSSA [30], DSFLA [57], DBAL [40], D-GWO [29], ABC [26], and PACO-3Opt [58]. The parameters of the participating algorithms are shown in Table 14. Among them, the DSSA, DSFLA, and D-GWO use the 2-Opt local search algorithm; the ABC and PACO-3Opt use the 3-Opt local search algorithm; and the DBAL uses the 2-Opt, 2.5-Opt, and 3-Opt local search algorithms.
The comparison results in Table 15 show that our proposed method outperformed the most competitive methods in solving TSPs, as it obtained better Best values and MPDA (%) among the seven participating algorithms. The DCPA obtained the theoretical optimal solutions of 15 of the 27 instances, and 4 of the remaining 12 instances were close to the optimal solutions (their percentage deviation of the best found solution from the theoretical optimal solution was no more than 0.26%). In general, the seven algorithms are effective for the TSP, but the performance of the comparison algorithms was challenged by increasing problem scales. On some relatively large instances, such as d657, u724, pr1002, rl1323, and fl1577, the percentage deviation of the best found solution from the theoretical optimal solution of our proposed algorithm was no more than 2.31%, whereas the values of the comparison algorithms were all more than 2.37%. In addition, the MPDA (%) improvements achieved by the DCPA over the DSSA, DSFLA, DBAL, D-GWO, ABC, and PACO-3Opt were 0.69, 1.32, 0.83, 2.39, 2.18, and 0.9, respectively.
Then, the Friedman rank test was used to order the participating algorithms based on their performance in solving TSPs. The order of the seven algorithms is shown in Figure 6.
The Friedman statistic, χ², with a significance level of 0.05 is summarized in Table 16, where χ² is 105.32. The critical value χ²(0.05, 6) is 12.59 with s − 1 = 6 degrees of freedom. Thus, the null hypothesis is rejected at α = 0.05.
Lastly, Holm’s procedure was used to find the concrete pairwise comparisons that produced significant differences among the seven participating algorithms. The results of Holm’s procedure are summarized in Table 17.
The results obtained from Holm’s procedure show that the DCPA did not show a significant difference from the DSSA for the adjusted p-value and unadjusted p-value, which are more than 0.05 at a 95% confidence level. However, compared to the DSSA, the DCPA significantly enhanced the MPDA and MPDB, which verified the algorithm’s superior performance. The statistical test shows that the DCPA was significantly different from the other five comparison algorithms, with all p-values less than 0.05 at a 95% confidence level. The analyses above demonstrate the outstanding performance of the DCPA.
3. Comparison of convergence speed
To demonstrate the convergence performance of the DCPA, the convergence of the participating algorithms was analyzed with average computation times and convergence curves. The average computation times on 16 instances over 20 runs are summarized in Table 18, and a graphical representation of the convergence analysis is depicted in Figure 7. Tavg in Table 18 denotes the average time over the 16 instances. The x-axis in Figure 7 is the running time, and the y-axis is the length of the route.
The analyses above show that the DCPA can achieve higher solution quality than the comparison algorithms, and the results in Table 18 confirm that the DCPA is also faster in computational time, as it won on 11 of 16 instances. The Tavg of the DCPA was the shortest among the seven algorithms, which means that the proposed algorithm can deliver superior-quality solutions in much less computational time than the other participating algorithms.
As depicted in Figure 7, the convergence speed of the DCPA was slower than that of the DSSA on pr299 and d657 and slower than those of ABC and D-GWO on fl417, rat783, and rl1323 in the early stage of iteration. ABC and D-GWO call the local optimization algorithm several times in each iteration; therefore, these two algorithms converge faster in the early stage, but their global search ability decreases with the iteration time, so they easily fall into a local optimum. The DSSA employs order-based decoding, which exhibits strong global search ability in the early stage and converges quickly; however, as the iteration time increases, the local search ability of the DSSA becomes weak and it converges prematurely. The DCPA takes into account both global and local search ability during the iteration and maintains a high convergence speed among all participating algorithms.

6. Conclusions and Future Work

This work proposes a new discrete carnivorous plant algorithm with similarity elimination for the TSP, which considers intelligence and heuristics in both the intensification and diversification stages and thus increases the overall performance of the algorithm. The DCPA presents a new individual generation method that redefines the addition, subtraction, and multiplication operators. The newly generated individual can not only satisfy the discrete properties of the TSP but also maintain the good characteristics of the better parent individual, which helps to enhance the search precision. After that, a simple sorting grouping method is proposed, which randomly selects carnivorous plants and prey for updating and reduces the computational complexity and the assimilation speed. Then, an adaptive attraction probability is proposed: a high probability of prey updating enhances the exploration capability in the early stage of iteration, and a high probability of carnivorous plant growth enhances the exploitation capability in the late stage. Finally, the similarity-eliminating operation based on the same quantity of city sequences and on the route length is added to the DCPA, which effectively reduces the number of similar and identical individuals, helps to maintain population diversity, and reduces the probability of stagnation.
To verify the effectiveness of the improvements in the DCPA, two sets of experiments were designed. The first experiment compared the new generation method with the two generation methods in the literature. Then, the self-comparison of different versions of the DCPA was carried out in the second experiment. From the results, the following conclusions can be drawn: (1) the new generation method can solve the TSP effectively; and (2) simple sorting grouping, adaptive attraction probability, and the similarity-eliminating operation play positive roles in the enhancement of the algorithm’s performance.
To assess the performance of the proposed algorithm, the algorithm was compared with nine algorithms. The nonparametric statistical tests proved that the proposed algorithm has superior performance in solution quality. In addition, the convergence curves show that the DCPA converges more quickly than the other comparison algorithms. The above analyses demonstrate that the DCPA is an effective and competitive choice for solving the TSP.
The DCPA was only verified on instances of the TSP, and its parameters were determined by trial and error; it is not yet suitable for other optimization problems with discrete domains. In future work, the proposed DCPA will be tested on more complex discrete problems, such as multi-objective traveling salesman problems and multiple traveling salesman problems. In addition, research on the optimal parameter combination, the running time, the accuracy of the algorithm, and applied studies can be carried out.

Author Contributions

Conceptualization, P.-L.Z., X.-B.S. and J.-Q.W.; methodology, P.-L.Z. and X.-B.S.; software, P.-L.Z. and X.-B.S.; validation, P.-L.Z., X.-B.S. and J.-Q.W.; formal analysis, H.-H.S. and J.-L.B.; investigation, P.-L.Z. and X.-B.S.; resources, P.-L.Z., X.-B.S. and J.-Q.W.; data curation, H.-H.S., J.-L.B. and H.-Y.Z.; writing—original draft preparation, P.-L.Z. and X.-B.S.; writing—review and editing, P.-L.Z., X.-B.S. and J.-Q.W.; visualization, P.-L.Z., X.-B.S. and J.-Q.W.; supervision, J.-Q.W.; project administration, P.-L.Z. and X.-B.S.; funding acquisition, J.-Q.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Social Science Fund of China, grant number 21BGL17.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the anonymous reviewers for their valuable and constructive comments that greatly improved the quality and completeness of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Papadimitriou, C.H.; Steiglitz, K. Combinatorial Optimization: Algorithms and Complexity; Courier Corporation: Chelmsford, MA, USA, 1998. [Google Scholar]
  2. Hartmanis, J. Computers and intractability: A guide to the theory of NP-completeness (Michael R. Garey and David S. Johnson). SIAM Rev. 1982, 24, 90. [Google Scholar] [CrossRef]
  3. Eldos, T.; Kanan, A.; Nazih, W.; Khatatbih, A. Adapting the Ant Colony Optimization Algorithm to the Printed Circuit Board Drilling Problem. World Comput. Sci. Inf. Technol. J. 2013, 3, 100–104. [Google Scholar]
  4. An, H.; Li, W. Synthetically improved genetic algorithm on the traveling salesman problem in material transportation. In Proceedings of the 2011 International Conference on Electronic & Mechanical Engineering and Information Technology, Harbin, China, 12–14 August 2011; pp. 3368–3371. [Google Scholar]
  5. Savla, K.; Frazzoli, E.; Bullo, F. Traveling Salesperson Problems for the Dubins Vehicle. IEEE Trans. Autom. Control 2008, 53, 1378–1391. [Google Scholar] [CrossRef]
  6. Cheng, H.; Yang, S. Genetic algorithms with elitism-based immigrants for dynamic shortest path problem in mobile ad hoc networks. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 3135–3140. [Google Scholar]
  7. Gharehchopogh, F.S. Advances in tree seed algorithm: A comprehensive survey. Arch. Comput. Methods Eng. 2022, 29, 3281–3304. [Google Scholar] [CrossRef]
  8. Ghafori, S.; Gharehchopogh, F.S. Advances in spotted hyena optimizer: A comprehensive survey. Arch. Comput. Methods Eng. 2021, 29, 1569–1590. [Google Scholar] [CrossRef]
  9. Dantzig, G.; Fulkerson, R.; Johnson, S. Solution of a Large-Scale Traveling-Salesman Problem. J. Oper. Res. Soc. Am. 1954, 2, 393–410. [Google Scholar] [CrossRef]
  10. Bellman, R.E.; Dreyfus, S.E. Applied Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 2015. [Google Scholar]
  11. Padberg, M.; Rinaldi, G. Optimization of a 532-city symmetric traveling salesman problem by branch and cut. Oper. Res. Lett. 1987, 6, 1–7. [Google Scholar] [CrossRef]
  12. Kizilateş, G.; Nuriyeva, F. On the nearest neighbor algorithms for the traveling salesman problem. In Advances in Computational Science, Engineering and Information Technology; Springer: Cham, Switzerland, 2013; pp. 111–118. [Google Scholar]
  13. Kanellakis, P.-C.; Papadimitriou, C.H. Local search for the asymmetric traveling salesman problem. Oper. Res. 1980, 28, 1086–1099. [Google Scholar] [CrossRef]
  14. Gu, J.; Huang, X. Efficient local search with search space smoothing: A case study of the traveling salesman problem (TSP). IEEE Trans. Syst. Man Cybern. 1994, 24, 728–735. [Google Scholar]
  15. Wang, Y.; Han, Z. Ant colony optimization for traveling salesman problem based on parameters optimization. Appl. Soft Comput. 2021, 107, 107439. [Google Scholar] [CrossRef]
  16. Shahadat, A.S.B.; Akhand, M.A.H.; Kamal, M.A.S. Visibility Adaptation in Ant Colony Optimization for Solving Traveling Salesman Problem. Mathematics 2022, 10, 2448. [Google Scholar] [CrossRef]
  17. Liu, Q.; Du, S.; Wyk, B.; Sun, Y. Niching particle swarm optimization based on Euclidean distance and hierarchical clustering for multimodal optimization. Nonlinear Dyn. 2020, 99, 2459–2477. [Google Scholar] [CrossRef]
  18. Deng, Y.; Xiong, J.; Wang, Q. A Hybrid Cellular Genetic Algorithm for the Traveling Salesman Problem. Math. Probl. Eng. 2021, 2021, 6697598. [Google Scholar] [CrossRef]
  19. Wang, J.; Ersoy, O.K.; He, M.; Wang, F. Multi-offspring genetic algorithm and its application to the traveling salesman problem. Appl. Soft Comput. 2016, 43, 415–423. [Google Scholar] [CrossRef]
  20. Nagata, Y.; Soler, D. A new genetic algorithm for the asymmetric traveling salesman problem. Expert Syst. Appl. 2012, 39, 8947–8953. [Google Scholar] [CrossRef]
  21. Mi, M.; Xue, H.; Ming, Z.; Yu, G. An Improved Differential Evolution Algorithm for TSP Problem. In Proceedings of the Intelligent Computation Technology and Automation, International Conference, Changsha, China, 11–12 May 2010. [Google Scholar]
  22. Gharehchopogh, F.S.; Namazi, M.; Ebrahimi, L.; Abdollahzadeh, B. Advances in Sparrow Search Algorithm: A Comprehensive Survey. Arch. Comput. Methods Eng. 2022, 1–29. [Google Scholar] [CrossRef]
  23. Zhong, Y.; Lin, J.; Wang, L.; Zhang, H. Hybrid discrete artificial bee colony algorithm with threshold acceptance criterion for traveling salesman problem. Inf. Sci. 2017, 421, 70–84. [Google Scholar] [CrossRef]
  24. Gharehchopogh, F.S. An Improved Tunicate Swarm Algorithm with Best-random Mutation Strategy for Global Optimization Problems. J. Bionic Eng. 2022, 19, 1177–1202. [Google Scholar] [CrossRef]
  25. Akhand, M.; Ayon, S.I.; Shahriyar, S.A.; Siddique, N.; Adeli, H. Discrete Spider Monkey Optimization for Traveling Salesman Problem. Appl. Soft Comput. 2019, 86, 105887. [Google Scholar] [CrossRef]
  26. Khan, I.; Maiti, M.K. A swap sequence based Artificial Bee Colony algorithm for Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 428–438. [Google Scholar] [CrossRef]
  27. Gunduz, M.; Aslan, M. DJAYA: A discrete Jaya algorithm for solving traveling salesman problem. Appl. Soft Comput. 2021, 105, 107275. [Google Scholar] [CrossRef]
  28. Osaba, E.; Del Ser, J.; Sadollah, A.; Bilbao, M.N.; Camacho, D. A discrete water cycle algorithm for solving the symmetric and asymmetric traveling salesman problem. Appl. Soft Comput. 2018, 71. [Google Scholar]
  29. Panwar, K.; Deep, K. Discrete Grey Wolf Optimizer for symmetric travelling salesman problem. Appl. Soft Comput. 2021, 105, 107298. [Google Scholar] [CrossRef]
  30. Zhang, Z.; Han, Y. Discrete sparrow search algorithm for symmetric traveling salesman problem. Appl. Soft Comput. 2022, 118, 108469. [Google Scholar] [CrossRef]
  31. Ezugwu, A.E.-S.; Adewumi, A.O.; Frîncu, M.E. Simulated annealing based symbiotic organisms search optimization algorithm for traveling salesman problem. Expert Syst. Appl. 2017, 77, 189–210. [Google Scholar] [CrossRef]
  32. Sörensen, K. Metaheuristics—The metaphor exposed. Int. Trans. Oper. Res. 2015, 22, 3–18. [Google Scholar] [CrossRef]
  33. Ong, K.M.; Ong, P.; Sia, C.K. A carnivorous plant algorithm for solving global optimization problems. Appl. Soft Comput. 2021, 98, 106833. [Google Scholar] [CrossRef]
  34. Stodola, P.; Otřísal, P.; Hasilová, K. Adaptive Ant Colony Optimization with node clustering applied to the Travelling Salesman Problem. Swarm Evol. Comput. 2022, 70, 101056. [Google Scholar] [CrossRef]
  35. Yong, W. The hybrid genetic algorithm with two local optimization strategies for traveling salesman problem. Comput. Ind. Eng. 2014, 70, 124–133. [Google Scholar]
  36. Ha, Q.M.; Deville, Y.; Pham, Q.D.; Hà, M. A hybrid genetic algorithm for the traveling salesman problem with drone. J. Heuristics 2020, 26, 219–247. [Google Scholar] [CrossRef]
  37. Wang, Y.; Wu, Y.W.; Xu, N. Discrete symbiotic organism search with excellence coefficients and self-escape for traveling salesman problem. Comput. Ind. Eng. 2019, 131, 269–281. [Google Scholar] [CrossRef]
  38. Kóczy, L.T.; Földesi, P.; Tüű-Szabó, B. Enhanced discrete bacterial memetic evolutionary algorithm—An efficacious metaheuristic for the traveling salesman optimization. Inf. Sci. 2018, 460–461, 389–400. [Google Scholar] [CrossRef]
  39. Zhong, Y.; Wang, L.; Lin, M.; Zhang, H. Discrete pigeon-inspired optimization algorithm with Metropolis acceptance criterion for large-scale traveling salesman problem. Swarm Evol. Comput. 2019, 48, 134–144. [Google Scholar] [CrossRef]
  40. Saji, Y.; Barkatou, M. A discrete bat algorithm based on Lévy flights for Euclidean Traveling Salesman Problem. Expert Syst. Appl. 2021, 172, 114639. [Google Scholar] [CrossRef]
  41. Benyamin, A.; Farhad, S.G.; Saeid, B. Discrete farmland fertility optimization algorithm with metropolis acceptance criterion for traveling salesman problems. Int. J. Intell. Syst. 2021, 36, 1270–1303. [Google Scholar] [CrossRef]
  42. Al-Gaphari, G.H.; Al-Amry, R.; Al-Nuzaili, A.S. Discrete crow-inspired algorithms for traveling salesman problem. Eng. Appl. Artif. Intell. 2021, 97, 104006. [Google Scholar] [CrossRef]
  43. Zhang, Z.; Yang, J. A discrete cuckoo search algorithm for traveling salesman problem and its application in cutting path optimization. Comput. Ind. Eng. 2022, 169, 108157. [Google Scholar] [CrossRef]
  44. Samanlioglu, F.; Ferrell, W.G., Jr.; Kurz, M.E. A memetic random-key genetic algorithm for a symmetric multi-objective traveling salesman problem. Comput. Ind. Eng. 2008, 55, 439–449. [Google Scholar] [CrossRef]
  45. Ezugwu, A.E.-S.; Adewumi, A.O. Discrete symbiotic organisms search algorithm for travelling salesman problem. Expert Syst. Appl. 2017, 87, 70–78. [Google Scholar] [CrossRef]
  46. Ali, I.M.; Essam, D.; Kasmarik, K. A novel design of differential evolution for solving discrete traveling salesman problems. Swarm Evol. Comput. 2020, 52, 100607. [Google Scholar] [CrossRef]
  47. Gharehchopogh, F.S.; Abdollahzadeh, B. An efficient harris hawk optimization algorithm for solving the travelling salesman problem. Clust. Comput. 2021, 25, 1981–2005. [Google Scholar] [CrossRef]
  48. Zhang, P.; Wang, J.; Tian, Z.; Sun, S.; Li, J.; Yang, J. A genetic algorithm with jumping gene and heuristic operators for traveling salesman problem. Appl. Soft Comput. 2022, 127, 109339. [Google Scholar] [CrossRef]
  49. Iqbal, Z.; Bashir, N.; Hussain, A.; Cheema, S.A. A novel completely mapped crossover operator for genetic algorithm to facilitate the traveling salesman problem. Comput. Math. Methods 2020, 2, e1122. [Google Scholar] [CrossRef]
  50. Lourenço, H.R.; Martin, O.; Stützle, T. Iterated Local Search. In Handbook of Metaheuristics; Springer: Boston, MA, USA, 2003; Volume 57. [Google Scholar]
  51. Croes, G.A. A Method for Solving Traveling-Salesman Problems. Oper. Res. 1958, 6, 791–812. [Google Scholar] [CrossRef]
  52. Lin, S.; Kernighan, B.W. An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 1973, 21, 498–516. [Google Scholar] [CrossRef]
  53. Friedman, M. The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  54. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
  55. Holm, S. A Simple Sequentially Rejective Multiple Test Procedure. Scand. J. Stat. 1979, 6, 65–70. [Google Scholar]
  56. Wu, C.; Fu, X. An Agglomerative Greedy Brain Storm Optimization Algorithm for Solving the TSP. IEEE Access 2020, 8, 201606–201621. [Google Scholar] [CrossRef]
  57. Huang, Y.; Shen, X.-N.; You, X. A discrete shuffled frog-leaping algorithm based on heuristic information for traveling salesman problem. Appl. Soft Comput. 2021, 102, 107085. [Google Scholar] [CrossRef]
  58. Gülcü, A.; Mahi, M.; Baykan, M.K.; Kodaz, H. A parallel cooperative hybrid method based on ant colony optimization and 3-Opt algorithm for solving traveling salesman problem. Soft Comput. 2018, 22, 1669–1685. [Google Scholar] [CrossRef]
Figure 1. The double-bridge perturbation. (a) Initial path; (b) The path after double-bridge perturbation. The number in the circle represents the city, and the blue dotted line represents the path after reconnection.
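The double-bridge move of Figure 1 cuts the tour into four segments and reconnects them in a different order, a perturbation commonly used in iterated local search [50] because a single 2-opt move cannot undo it. A minimal sketch (the random choice of cut points is an illustrative assumption):

```python
import random

def double_bridge(tour, rng=random):
    """Cut the tour at three random positions into segments A|B|C|D and
    reconnect them as A-C-B-D, a 4-opt move that 2-opt cannot reverse
    in one step."""
    i, j, k = sorted(rng.sample(range(1, len(tour)), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]
```

The move always returns a valid permutation of the cities, so it can be applied between local-search phases without any repair step.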
Figure 2. The routes of X1 and X2.
Figure 3. The framework of the DCPA.
Figure 4. The ranks of the five participating algorithms.
Figure 5. The ranks of the four participating algorithms.
Figure 6. The ranks of the seven participating algorithms.
Figure 7. The convergence curves of the participating algorithms.
Table 1. The grouping process at size 12 in the CPA.
The population, sorted by fitness (X1 best, X12 worst), is divided into three groups:
Group 1: carnivorous plant X1; prey X4, X7, X10
Group 2: carnivorous plant X2; prey X5, X8, X11
Group 3: carnivorous plant X3; prey X6, X9, X12
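The assignment pattern of Table 1 (the best individuals become carnivorous plants; the remaining prey are dealt out to the groups in round-robin order) can be sketched as follows. The function name and the shorter-is-better fitness convention are illustrative assumptions:

```python
def sort_and_group(population, tour_length, n_groups):
    """Sort the population by tour length (shorter is better), take the
    best n_groups individuals as carnivorous plants, and deal the
    remaining prey to the groups in round-robin order, as in Table 1."""
    ranked = sorted(population, key=tour_length)
    plants, prey = ranked[:n_groups], ranked[n_groups:]
    return [(plants[g], prey[g::n_groups]) for g in range(n_groups)]
```

With 12 individuals and 3 groups this reproduces exactly the grouping of Table 1.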
Table 2. The distances between eight cities.
12345678
1811210741
29736213
312751679
41433865
575121061
653681142
73145851
82296121
Table 3. The similarity-eliminating operation based on the route length.
IfI1fYfIf
63512433635124336354122063512433
51234646512346461642353451234646
45236128452361282135644345236128
4315624143156241 43156241
6451323864513238 64513238
4523612824531629 24531629
2453162942651340 42651340
12346546 63541220
42651340 16423534
36145228 21356443
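A minimal sketch of the route-length part of the similarity-eliminating operation illustrated in Table 3, assuming that a duplicated route length triggers replacement by a fresh random tour. This is a simplification: the full operation also compares the number of identical city positions, and the replacement's length is not re-checked here:

```python
import random

def eliminate_similar(population, tour_length, n_cities, rng=random):
    """Keep the first individual seen for each route length; replace any
    later individual whose route length duplicates one already seen with
    a fresh random tour, preserving the population size."""
    seen, result = set(), []
    for tour in population:
        key = round(tour_length(tour), 6)
        if key in seen:
            tour = rng.sample(range(n_cities), n_cities)  # random restart
        else:
            seen.add(key)
        result.append(tour)
    return result
```

Replacing duplicates with random tours is what keeps the population diverse without shrinking it.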
Table 4. The maximum iteration times with different city sizes.
City Number | Run Time (s)
m < 50 | 10
50 ≤ m < 100 | 20
100 ≤ m < 200 | 50
200 ≤ m < 300 | 100
300 ≤ m < 600 | 160
600 ≤ m < 1000 | 250
m ≥ 1000 | 600
Table 5. Description of different versions of the CPA.
Algorithm | Reference | Description
Version 1 | [26] | CPA with swap sequence and swap operator
Version 2 | [30] | CPA with the order-based decoding
Version 3 | - | CPA with a new type of individual generation method
Version 4 | - | Version 3 + simple sorting group
Version 5 | - | Version 4 + similarity-eliminating operation
Version 6 | - | Version 5 + adaptive attraction rate
DCPA | - | Version 6 + iterated local search
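The adaptive attraction rate added in Version 6 shifts the update probability from prey (exploration) toward carnivorous-plant growth (exploitation) as the run progresses. The exact schedule appears in the full paper; the linear form and endpoint values below are placeholders for illustration only:

```python
def attraction_probability(t, t_max, p_start=0.9, p_end=0.3):
    """Illustrative linear schedule: early in the run (small t) a high
    value favours prey updates (exploration); late in the run a low
    value favours carnivorous-plant growth (exploitation). The linear
    form and the endpoints are placeholder assumptions, not the DCPA's
    exact formula."""
    return p_start - (p_start - p_end) * t / t_max
```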
Table 6. Comparison of the proposed individual-generated method with the other two individual-generated methods.
Name | BKS | Evaluation Indicators | Best | Mean | SD | PDA (%) (the columns repeat for the right half of the table)
bayg299073Version 190739219.25177.283.36eil101629Version 1736748.358.0521.30
Version 2907710,127.51091.1111.62Version 212611701.35274.05170.48
Version 39073911742.780.48Version 36386536.923.79
att4833,522Version 135,39136,165.95439.7310.12lin10514,379Version 115,18715,944.65356.4214.97
Version 239,57749,339.77817.547.19Version 241,81253,678.97589.82273.31
Version 333,522 34,206530.072.04Version 3144,58146,41102.911.82
eil51426Version 1478481.95113.38pr10744,303Version 145,17445,852.6337.534.47
Version 2525705.05112.0765.50Version 210,8219171,205.542,135.77286.44
Version 34294364.862.41Version 344,71545,7431370.783.25
berlin527542Version 179248064.25112.478.47pr12459,030Version 161,87964,7351537.7812.91
Version 2974611,709.051395.8855.25Version 2201,876329,922.449,473.11458.91
Version 375427716154.192.30Version 359,24660,513.85775.802.51
st70675Version 1771794.857.1318.96bier127118,282Version 1127,565133,797.42118.5615.34
Version 210171554.55281.25130.3Version 2315,939368,448.525,343.56211.5
Version 367769313.512.62Version 3120,557123,923.42214.124.77
eil76538Version 1599608.52.9114.13ch1306110Version 170257111.5538.2717.71
Version 29141189.5124.17121.1Version 220,27424,971.052122.1308.69
Version 354255410.433.00Version 362176386.15109.194.52
rat991211Version 114231441.459.1219.98ch1506528Version 170537116.9515.059.38
Version 229353660.3471.74202.3Version 219,44930,320.053536.21364.46
Version 31224126426.264.37Version 365896714.696.172.86
kroA10021,282Version 123,22424,032.4417.1516.30d19815,780Version 117,62017,780.25128.9413.92
Version 251,82267,556.6512,309.2217.4Version 267,59593,606.9513,727.66493.2
Version 321,28221,650285.891.73Version 317,35518,823564.7019.29
Note: the best is set in bold.
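The PDA column reports the percentage deviation of the average solution from the best known solution (BKS). For example, for bayg29 under Version 3, a mean of 9117 against a BKS of 9073 gives a PDA of 0.48%:

```python
def pda(mean_length, best_known):
    """Percentage deviation of the average solution from the best known
    solution: PDA = 100 * (mean - BKS) / BKS."""
    return 100.0 * (mean_length - best_known) / best_known
```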
Table 7. Experimental results of the DCPA and its different versions.
Name | BKS | Evaluation Indicators | Best | Worst | Mean | SD | PDA (%) (the columns repeat for the right half of the table)
bayg299073Version 3907392379116.742.780.48lin10514,379Version 314,45814,82614,641.3102.911.82
Version 4907392139107.3042.910.38Version 414,49015,29114,640.8184.951.82
Version 5907390949074.854.70.02Version 514,37914,71514,570.184.741.33
Version 6907390779073.401.200.00Version 614,37914,62514,500.7585.960.85
att4833,522Version 333,52234,92934,205.65530.072.04pr10744,303Version 344,71549,12645,7431370.783.25
Version 433,52235,10734,093.50394.831.70Version 444,35848,06446,058.51131.313.96
Version 533,52234,46533,944.05329.511.26Version 544,35445,11644,632.4222.290.74
Version 633,52234,82833,837.403180.94Version 644,48645,07244,773.65167.881.06
eil51426Version 3429448436.254.862.41pr12459,030Version 359,24661,67160,513.85775.82.51
Version 4428444436.003.602.35Version 459,21061,01159,847.8623.211.39
Version 5426436429.553.290.83Version 559,08760,63959,597.75522.620.96
Version 64264364293.350.79Version 659,07660,41359,786.4442.271.28
berlin527542Version 3754282037715.65154.192.30bier127118,282Version 3120,557128,935123,923.42214.124.77
Version 4754281597703.85187.202.15Version 4120,035129,044123,524.12533.84.43
Version 5754279447689.45164.41.96Version 5119,315130,820122,036.52267.633.17
Version 6754279947662.45152.951.60Version 6119,534124,323121,398.21131.272.63
st70675Version 3677724692.713.512.62ch1306110Version 3621766146386.15109.194.52
Version 4675725692.5011.582.59Version 4614365276297.2588.843.06
Version 5676703687.37.851.82Version 5617464326272.0559.482.65
Version 6675708683.806.801.30Version 6615963516258.256.352.43
eil76538Version 3542570554.1510.433.00pr14458,537Version 359,13663,28760,308.31286.943.03
Version 4540570553.857.932.95Version 458,65860,88259,550.8749.641.73
Version 5541555550.23.912.27Version 558,67359,44859,009.6282.10.81
Version 6541561550.7562.37Version 658,62059,39558,846.6228.590.53
pr76108,159Version 3109,809115,790112,139.71532.743.68kroB15026,130Version 326,56127,46127,011.05297.033.37
Version 4108,856116,962111,818.52119.33.38Version 426,48427,55026,957.35451.153.17
Version 5109,118113,585111,1901257.322.80Version 526,38027,54926,688.65311.212.14
Version 6108,644113,296111,014.51322.672.64Version 626,18027,40926,687.8303.452.13
rat991211Version 3122413041263.9526.264.37ch1506528Version 3658969636714.696.172.86
Version 4122812961256.6519.993.77Version 4658169446713.7107.032.84
Version 5121412691240.915.92.47Version 5656467276640.9547.381.73
Version 6121312841240.3521.282.42Version 665556758664253.521.75
kroA10021,282Version 321,28222,17321,650.15285.891.73d19815,780Version 317,35520,24318,823.25564.719.29
Version 421,28222,00021,541.4190.341.22Version 417,70218,91618,292.2361.0115.92
Version 521,28221,90121,478.35183.970.92Version 516,38417,55716,884.75283.947.00
Version 621,28221,75821,480.3130.230.93Version 616,46317,45416,847.5269.846.76
kroB10022,141Version 322,40423,34622,820.65259.053.07kroA20029,368Version 329,71831,83530,708.55647.384.56
Version 422,27023,16122,584.35221.262.00Version 429,79131,38030,439.35435.43.65
Version 522,24922,93022,482.5210.131.54Version 529,53330,60930,058.8263.782.35
Version 622,22022,67022,434.75198.771.33Version 629,45530,59529,997.2299.182.14
eil101629Version 3638667652.856.923.79pr22680,369Version 382,50693,96486,059.552469.647.08
Version 4640671652.77.163.77Version 481,68187,22484,539.71533.335.19
Version 5630664645.37.942.59Version 580,88284,03481,824.95662.41.81
Version 6632660644.46.692.45Version 681,03183,80081,936.4645.21.95
Note: the best is set in bold.
Table 8. The results of the Friedman statistic with s = 4 algorithms and n = 22 standard instances.
α = 0.05; χ² = 59.6; χ²_α(s − 1) = 7.82. Null hypothesis: rejected; alternative hypothesis: accepted.
Table 9. The unadjusted and adjusted p-values for multiple comparisons among the five algorithms.
v | Comparison | Unadjusted p-Value | Adjusted p-Value
1 | Version 6 vs. Version 3 | 5.59 × 10−12 | 3.34 × 10−11
2 | Version 5 vs. Version 3 | 1.05 × 10−8 | 5.25 × 10−8
3 | Version 6 vs. Version 4 | 5.00 × 10−6 | 2.00 × 10−5
4 | Version 5 vs. Version 4 | 7.08 × 10−4 | 2.12 × 10−3
5 | Version 4 vs. Version 3 | 1.95 × 10−2 | 3.90 × 10−2
6 | Version 6 vs. Version 5 | 0.24 | 0.24
Table 10. Parameter settings of the participating algorithms.
Algorithm | Year | Reference | Parameters
Version 6 | 2022 | - | population size: 100, carnivorous plant: 25, prey: 75
DJAYA | 2021 | [27] | N = 20; ST1 = 0.5; ST2 = 0.5
AGBSO3 | 2020 | [56] | cluster-num: 5, p-replace: 0.3, p-one: 0.6, p-two: 0.4, p-one-center: 0.45, p-two-center: 0.5, MM: 100
DSMO | 2019 | [25] | MG: 5, LLL: 50–100, GLL: 50–100, pr: 0.1, population size: 200
Table 11. Comparison results of Version 6 and the other algorithms without a local search algorithm.
Name | BKS | Evaluation Indicators | Best | Worst | Mean | SD | PDA (%) (the columns repeat for the right half of the table)
bayg299073Version 6907390779073.41.20.00kroA10021,282Version 621,28221,75821,480.3130.230.93
DSMO907390779073.61.430.00DSMO21,48321,89221,676.61481.85
AGBSO3907391209083.116.160.11AGBSO321,28222,09221,495.55238.71.00
DJAYA907393329208.5154.861.49DJAYA21,54922,92622,237.1370.194.49
eil51426Version 6426436429.353.350.79kroB10022,141Version 622,22022,67022,434.75198.771.33
DSMO427432429.351.960.79DSMO22,23222,72222,512.75166.71.68
AGBSO3426438430.153.780.97AGBSO322,28422,90522,525.45280.531.74
DJAYA429446438.358.342.90DJAYA22,55123,52922,960.2370.193.70
st70675Version 6675708683.86.81.30eil101629Version 6632660644.46.692.45
DSMO678696685.453.751.55DSMO648664658.154.994.63
AGBSO3675711685.658.261.58AGBSO3637652645.154.242.57
DJAYA687749716.2519.896.11DJAYA651681664.74.245.68
eil76538Version 6541561550.755.662.37lin10514,379Version 614,37914,62514,500.7585.960.85
DSMO551561557.053.183.54DSMO14,47114,81214,639.15103.591.81
AGBSO3542559550.054.652.24AGBSO314,37914,99214,535.5166.091.09
DJAYA554585564.47.194.91DJAYA14,68314,65315,081307.674.88
rat991211Version 6121312841240.3521.282.42ch1506528Version 665556758664253.521.75
DSMO122012981276.6515.935.42DSMO666469186773.856.863.77
AGBSO3122012761240.615.672.44AGBSO3657868356667.6732.14
DJAYA127013341299.9519.97.35DJAYA656469636762.7123.323.60
Note: the best is set in bold.
Table 12. The results of the Friedman statistic with s = 4 algorithms and n = 10 standard instances.
α = 0.05; χ² = 23.3; χ²_α(s − 1) = 7.82. Null hypothesis: rejected; alternative hypothesis: accepted.
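The χ² value above is the Friedman statistic [53,54], computed from the rank sums of the s algorithms over the n instances; a self-contained sketch of the standard formula:

```python
def friedman_statistic(ranks):
    """Friedman chi-square for an n-by-s table of ranks (n instances,
    s algorithms):
        chi2 = 12 / (n*s*(s+1)) * sum(Rj^2) - 3*n*(s+1)
    where Rj is the rank sum of algorithm j over the n instances."""
    n, s = len(ranks), len(ranks[0])
    rank_sums = [sum(row[j] for row in ranks) for j in range(s)]
    return 12.0 / (n * s * (s + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (s + 1)
```

When the statistic exceeds the critical value χ²_α(s − 1) of the chi-square distribution with s − 1 degrees of freedom, the null hypothesis of equal performance is rejected.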
Table 13. P-values by Holm post hoc test (Version 6 is the control method).
Algorithm | Unadjusted p-Value | Adjusted p-Value
DJAYA | 2.00 × 10−6 | 6.00 × 10−6
DSMO | 9.38 × 10−3 | 1.88 × 10−2
AGBSO3 | 4.64 × 10−2 | 4.64 × 10−2
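The adjusted p-values in Table 13 follow the Holm step-down procedure [55]: the i-th smallest of the m unadjusted p-values (0-based i) is multiplied by (m − i), and a running maximum enforces monotonicity. A sketch that reproduces the table:

```python
def holm_adjust(pvalues):
    """Holm step-down adjustment of a dict of unadjusted p-values: the
    i-th smallest (0-based) of the m values is multiplied by (m - i);
    a running maximum enforces monotonicity and values are capped at 1."""
    m = len(pvalues)
    adjusted, running = {}, 0.0
    for i, (name, p) in enumerate(sorted(pvalues.items(), key=lambda kv: kv[1])):
        running = max(running, min(1.0, (m - i) * p))
        adjusted[name] = running
    return adjusted
```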
Table 14. Parameter settings of algorithms.
Algorithm | Year | Reference | Parameters
DCPA | 2022 | This study | population size: 100, carnivorous plant: 25, prey: 75
DSSA | 2022 | [30] | population size: 50, PD: 0.4, alarm value: 0.8, PV: 0.2
DSFLA | 2021 | [57] | α = [1,5], β = [1,5], Q = 1000, ρ = 0.1, N = 20, m = 10
DBAL | 2021 | [40] | population size: 50, ri: 0.5, Ai: 0.5
D-GWO | 2021 | [29] | population size: 50
ABC | 2018 | [26] | Eb = 20, Ob = 0.5Eb + 1, Mait3 = 10, lim = 5
PACO-3Opt | 2018 | [58] | Q = 30, ρ = 0.1, I = 5, α = 3, β = 2, m = 10
Table 15. Experiment results of the DCPA and the six other participating algorithms.
Name | BKS | Evaluation Indicators | DCPA | DSSA | DSFLA | DBAL | D-GWO | ABC | PACO-3Opt
att4833,522Best33,52233,52233,52233,52233,52233,52233,522
Worst33,52233,52233,52233,52233,70033,96633,606
Mean33,52233,52233,52233,52233,560.533,632.3533,541
SD0.000.000.000.0047.28126.7833.32
PDA (%)0.000.000.000.000.110.330.06
eil51426Best426426426426426426426
Worst428427428427428436428
Mean426.5426.2427.35426.1426.8428.8426.95
SD0.600.400.960.300.772.370.77
PDA (%)0.120.050.320.020.190.660.22
berlin527542Best7542754275427542754275427542
Worst7542754275427542754277597542
Mean754275427542754275427572.77542
SD0.000.000.000.000.0066.480.00
PDA (%)0.000.000.000.000.000.410.00
eil76538Best538538538538539538538
Worst540543543538550571544
Mean538.35541.05540.65538545.35545.65540.15
SD0.661.341.190.002.927.951.61
PDA (%)0.070.570.490.001.371.420.40
pr76108,159Best108,159108,159108,159108,159108,159108,159108,159
Worst108,159108,159108,961109,186109,043114,574109,043
Mean108,159108,159108,564.9108,571.90108,462.9109,053.7108,327.7
SD0.000.00250.35341.54348.811507.54303.48
PDA (%)0.000.000.380.380.280.830.16
rat991211Best1211121112111211121312111211
Worst1220121412151214124612921232
Mean12121211.451212.951211.41229.31233.551215.65
SD2.690.871.050.87.7818.915.72
PDA (%)0.080.040.160.031.511.860.38
kroC10020,749Best20,74920,74920,74920,74920,74920,74920,749
Worst20,74920,74920,87520,76920,98521,47120,983
Mean20,74920,74920,819.5520,75020,802.7520,948.0520,792.25
SD0.000.0036.24.3674.95190.1563.39
PDA (%)0.000.000.340.000.260.960.21
pr10744,303Best44,30344,30344,30344,30344,34244,38744,387
Worst44,38744,35844,38144,38744,62148,55344,577
Mean44,309.444,316.4544,317.8544,311.444,453.645,117.544,492
SD26.6419.9322.5630.3487.11995.3667.09
PDA (%)0.010.030.030.020.341.840.43
pr12459,030Best59,03059,03059,03059,03059,03059,03059,030
Worst59,03059,03059,16459,29359,46363,07259,087
Mean59,03059,03059,045.959,047.7559,175.560,477.359,048.95
SD0.000.0032.8858.06132.941209.6623.7
PDA (%)0.000.000.030.03 0.252.450.03
ch1306110Best6110611061286110614261106121
Worst6172613861836147624563026193
Mean6134.66120.86170.056125.456199.56197.356157.75
SD20.358.0414.9411.5428.9748.8319.91
PDA (%)0.40 0.180.980.251.461.430.78
pr14458,537Best58,53758,53758,57058,53758,53758,53758,537
Worst58,53758,53758,63658,55458,79762,19258,590
Mean58,53758,53758,611.2558,537.8558,617.459,376.358,539.65
SD0.000.0023.113.7189.07992.5311.57
PDA (%)0.000.000.130.000.141.430.00
kroB15026,130Best26,13026,13026,22426,13026,30426,13226,130
Worst26,22926,26926,49226,26126,57628,43326,438
Mean26,149.4526,200.326,413.9526,197.5526,43226,781.726,251.55
SD28.8531.4959.1444.86103.62609.9597.99
PDA (%)0.070.271.090.261.162.490.47
pr15273,682Best73,68273,68273,81873,68273,78073,78073,682
Worst73,81873,68674,22774,10574,42477,12274,215
Mean73,756.873,682.273,961.473,866.7574,080.574,877.8573,889.25
SD69.030.87131.35144.91246.86873.65131.26
PDA (%)0.100.000.380.250.541.620.28
rat1952323Best2329234023442336244823622333
Worst2347237123592379252624662366
Mean2339.42360.252350.152362.72398.252408.552349.4
SD7.267.164.058.9417.1729.098.89
PDA (%)0.711.601.171.713.243.681.14
kroA20029,368Best29,36829,47929,50229,40129,60529,45129,411
Worst29,45929,62729,65329,55830,13430,83529,802
Mean29,399.129,545.5529,567.729,443.7529,867.830,134.329,572.95
SD26.0639.3446.5940.39161.07336.5111.47
PDA (%)0.110.600.680.261.702.610.70
tsp2253916Best3916391639643930396139713933
Worst3963398740153995406641753981
Mean3936.83970.23999.23966.64020.44035.73956.95
SD17.117.1611.2117.726.9350.5313
PDA (%)0.531.382.121.292.673.061.05
pr29948,191Best48,22948,64849,41648,67050,60648,90948,887
Worst48,62049,14950,08249,06351,87652,62649,665
Mean48,373.548,917.2549,716.248,865.4551,46849,830.4549,191.25
SD94.62133.29170.26127.63330.01884.1218.58
PDA (%)0.381.513.161.406.803.402.08
lin31842,029Best42,07142,40243,02842,31243,98042,92042,475
Worst42,55442,98643,45142,80145,18044,11343,099
Mean42,351.2542,726.843,272.942,563.4544,738.2543,606.9542,810.9
SD153.61135.28122.61116.89304.42358.82193.01
PDA (%)0.771.662.961.276.453.751.86
fl41711,861Best11,87711,99211,98712,00912,16611,92911,906
Worst11,90612,05112,07512,17712,44013,62812,050
Mean11,891.7512,021.8512,032.612,069.612,273.6512,528.511,960.75
SD10.0117.1122.445.1572.54593.4835.16
PDA (%)0.261.361.451.763.485.630.84
pcb44250,778Best50,99851,95553,03051,82852,30952,34052,234
Worst51,87552,38753,79652,53653,48554,07153,153
Mean51,376.0552,187.453,460.2552,113.8552,786.7552,818.5552,664.65
SD193.49121.12216.41165.17339.6446306.49
PDA (%)1.182.785.282.633.964.023.72
d49335,002Best35,32735,69136,50035,80335,76835,69136,148
Worst36,04536,10537,03836,24936,29137,06236,651
Mean35,685.8535,904.3536,806.9535,971.536,124.536,361.6536,365.8
SD164.8195.92132.71113.38132.3348.15141.33
PDA (%)1.952.585.162.773.213.883.90
d65748,912Best49,48350,49351,46150,48152,85450,46950,558
Worst50,26850,96052,33451,24454,09554,43151,432
Mean49,954.850,736.551,865.150,935.4553,482.851,451.4550,981.35
SD259.18133.79224.87157.9325.46867.34246.45
PDA (%)2.133.736.044.149.345.194.23
u72441,910Best42,51443,49843,77643,71843,24243,19143,586
Worst43,34044,03644,52044,28043,92944,73344,160
Mean42,859.6543,822.5544,19244,008.843,610.8543,885.9543,849.25
SD223.26161.46182.92151.69199.12357.65163
PDA (%)2.274.565.455.014.064.714.63
rat7838806Best9102922092629194919092329158
Worst9240934393849406937996039271
Mean9171.159291.159324.69337.359246.459335.859241.3
SD48.5832.5832.7254.5953.42100.1931.26
PDA (%)4.155.515.896.035.006.024.94
pr1002259,045Best265,037271,315274,076271,780284,870270,993269,286
Worst268,393274,092277,273274,884289,243284,976273,567
Mean266,654.7272,733275,562.5273,650.6287,131.7274,699.1271,607.6
SD861.72697.16853.03838.311332.263986.591195.93
PDA (%)2.945.286.385.6410.846.044.85
rl1323270,199Best273,796279,779281,554280,997297,682279,315280,227
Worst278,664284,327286,376286,261304,697295,711286,598
Mean276,211.5282,183283,997.3284,064.8301,421.3285,740.6283,804.3
SD1272.41125.221508.61421.582085.743683.211682.25
PDA (%)2.234.445.115.1311.565.75 5.04
fl157722,249Best22,58622,77622,91423,14123,81123,24222,967
Worst23,00023,17023,19423,57524,11024,52023,723
Mean22,780.523,036.2523,008.0523,370.923,926.923,649.223,309.5
SD84.63106.1297.8126.36100.06323.09244.68
PDA (%)2.393.54 3.41 5.04 7.54 6.29 4.77
MPDB / MPDA (%)0.48/0.851.14/1.541.67/2.171.23/1.682.57/3.241.35/3.031.21/1.75
Note: the best is set in bold, the MPDB means the mean percentage deviation of the best found solution to the best known solution, and the MPDA (%) means the mean of PDA.
Table 16. The results of the Friedman statistic with the number of algorithms s = 7 and the number of standard instances n = 27.

| α | χ² | χ²α(s − 1) | Null Hypothesis | Alternative Hypothesis |
|---|---|---|---|---|
| 0.05 | 105.32 | 12.59 | Reject | Accept |
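The critical value in Table 16 is the 0.95 quantile of the chi-square distribution with s − 1 = 6 degrees of freedom. A minimal sketch of the rejection decision, assuming SciPy is available:

```python
# Reproduce the decision in Table 16: under H0 the Friedman statistic is
# chi-square distributed with s - 1 degrees of freedom, so it is compared
# against the alpha = 0.05 critical value. Assumes SciPy is installed.
from scipy.stats import chi2

s, alpha = 7, 0.05          # 7 algorithms, significance level 0.05
friedman_stat = 105.32      # statistic reported in Table 16

critical = chi2.ppf(1 - alpha, df=s - 1)
print(f"critical value = {critical:.2f}")   # 12.59
print("Reject H0" if friedman_stat > critical else "Fail to reject H0")
```

Since 105.32 far exceeds 12.59, the null hypothesis of equal algorithm performance is rejected, which is what justifies the post hoc test in Table 17.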
Table 17. p-values by the Holm post hoc test (DCPA is the control method).

| Algorithm | Unadjusted p-Value | Adjusted p-Value |
|---|---|---|
| ABC | 0 | 0 |
| DGWO | 5.28 × 10⁻¹² | 1.70 × 10⁻¹¹ |
| DSFLA | 4.24 × 10⁻⁸ | 8.24 × 10⁻⁸ |
| PACO-3Opt | 8.20 × 10⁻⁵ | 2.46 × 10⁻⁴ |
| DBAL | 3.07 × 10⁻³ | 6.14 × 10⁻³ |
| DSSA | 5.88 × 10⁻² | 5.88 × 10⁻² |
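Holm's step-down procedure multiplies the i-th smallest of the m = 6 unadjusted p-values by (m − i + 1) and enforces monotonicity with a running maximum. A minimal sketch applied to the unadjusted column of Table 17 (the adjusted values for the two smallest p-values are sensitive to the rounding of the inputs shown here, so only the larger entries reproduce the published column exactly):

```python
# Holm step-down adjustment of a family of p-values.

def holm_adjust(pvalues):
    """Return Holm-adjusted p-values in the original input order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):          # rank is 0-based
        # multiplier is (m - rank), i.e. (m - i + 1) for 1-based rank i
        running_max = max(running_max, min(1.0, (m - rank) * pvalues[i]))
        adjusted[i] = running_max
    return adjusted

unadjusted = {
    "ABC": 0.0, "DGWO": 5.28e-12, "DSFLA": 4.24e-8,
    "PACO-3Opt": 8.20e-5, "DBAL": 3.07e-3, "DSSA": 5.88e-2,
}
adjusted = dict(zip(unadjusted, holm_adjust(list(unadjusted.values()))))
for alg, p in adjusted.items():
    print(f"{alg}: {p:.3g}")
```

The three largest adjusted values (2.46 × 10⁻⁴, 6.14 × 10⁻³, 5.88 × 10⁻²) match Table 17, confirming the multipliers 3, 2, and 1 for the last three ranks.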
Table 18. The computation times of the participating algorithms on instances with fewer than 250 cities.

| Instance | DCPA (s) | DSSA (s) | DSFLA (s) | DBAL (s) | D-GWO (s) | ABC (s) | PACO-3Opt (s) |
|---|---|---|---|---|---|---|---|
| att48 | **0.73** | 1.04 | 0.98 | 8.43 | 7.18 | 7.48 | 3.75 |
| eil51 | 12.31 | **1.88** | 17.94 | 6.50 | 16.21 | 18.40 | 14.89 |
| berlin52 | 0.42 | **0.23** | 0.46 | 1.33 | 2.19 | 10.14 | 0.82 |
| eil76 | 9.67 | **4.56** | 20.11 | 7.90 | 21.08 | 19.10 | 17.89 |
| pr76 | **2.33** | 2.73 | 19.98 | 19.74 | 14.88 | 16.53 | 11.23 |
| rat99 | **8.71** | 11.06 | 19.32 | 13.38 | 20.52 | 19.82 | 19.39 |
| kroC100 | **4.75** | 4.87 | 47.73 | 19.66 | 37.76 | 44.83 | 32.23 |
| pr107 | **13.52** | 31.66 | 29.92 | 16.40 | 51.13 | 50.08 | 50.49 |
| pr124 | **2.22** | 2.56 | 28.97 | 21.04 | 47.08 | 47.17 | 32.00 |
| ch130 | **42.52** | 46.99 | 50.21 | 46.69 | 52.90 | 49.38 | 50.46 |
| pr144 | 4.23 | **1.81** | 50.22 | 18.49 | 40.30 | 45.11 | 23.28 |
| kroB150 | **41.82** | 48.21 | 50.32 | 49.40 | 54.87 | 50.21 | 48.79 |
| pr152 | 23.60 | **17.38** | 50.34 | 44.32 | 53.47 | 50.16 | 47.78 |
| rat195 | **50.04** | 50.09 | 50.49 | 50.51 | 52.37 | 50.32 | 51.46 |
| kroA200 | **83.20** | 100.08 | 100.59 | 100.43 | 100.92 | 100.44 | 101.60 |
| tsp225 | **89.90** | 100.10 | 100.52 | 100.66 | 103.01 | 100.36 | 100.04 |
| Tavg | **24.37** | 26.58 | 39.88 | 32.80 | 42.24 | 42.47 | 37.88 |
Note: the best (shortest) time in each row is set in bold.

Share and Cite

MDPI and ACS Style

Zhang, P.-L.; Sun, X.-B.; Wang, J.-Q.; Song, H.-H.; Bei, J.-L.; Zhang, H.-Y. The Discrete Carnivorous Plant Algorithm with Similarity Elimination Applied to the Traveling Salesman Problem. Mathematics 2022, 10, 3249. https://doi.org/10.3390/math10183249
