The Hamiltonian Cycle and Travelling Salesperson problems with traversal-dependent edge deletion

Variants of the well-known Hamiltonian Cycle and Travelling Salesperson problems have been studied for decades. Existing formulations assume either a static graph or a temporal graph in which edges are available based on some function of time. In this paper, we introduce a new variant of these problems, inspired by applications such as open-pit mining, harvesting and painting, in which some edges are deleted or become untraversable depending on the vertices that are visited. We formally define these problems and analyse them, both theoretically and experimentally, in comparison with the conventional versions. We also propose two solvers, an exact backward search and a metaheuristic solver, and provide an extensive experimental evaluation.


Introduction
Finding a closed loop on a graph where every vertex is visited exactly once is the Hamiltonian Cycle Problem (HCP), and its corresponding optimization problem in a weighted graph is the Travelling Salesperson Problem (TSP). Variants of the HCP and the TSP have been studied for decades. However, the wealth of research on this topic does not cover problems where the availability of an edge in a graph depends on the vertices already visited. This specific type of dynamic graph is relevant to many real-world applications, such as open-pit mining, harvesting and painting.
For instance, consider the mining-inspired example shown in Fig. 1, where the graph depicts a representation of a mining field and each vertex is a place to be drilled by a drilling machine. The problem is to find a route such that each vertex is visited and drilled exactly once, i.e., an instance of the HCP/TSP. However, in this problem, drilling at a vertex creates a pile of rubble, which not only makes traversing that vertex again impossible but also affects the availability of some edges around it. For example, as depicted in Fig. 1(a), when a vertex is drilled, indicated by a red circle, the rubble obstructs three incident edges, which are all deleted, whereas a different traversal only results in the removal of a single edge, as shown in Fig. 1(b).
To model a graph that changes due to the path of already visited vertices, as exemplified in the scenario above, we introduce a new class of graphs, which we call self-deleting graphs.

Related work
There is a large body of research on the HCP, the TSP and their variants. As mentioned, this paper focuses on a particular type of HCP and TSP where edges are deleted or become untraversable depending on the vertices visited. None of the existing variants of these problems with dynamic graphs has this property. In a TSP on temporal networks, an edge's weight and/or availability changes with respect to some notion of time [1,2], and the unavailable edges can reappear, as opposed to the HCP-SD, where the deleted edges are never re-enabled. The other difference is that the weight or availability of an edge in a temporal network changes with time and not due to the way the graph is traversed.
The Covering Canadian Traveller Problem (CCTP) [3] is to find the shortest tour visiting all vertices where the availability of an edge is not known in advance. The traveller only discovers whether an edge is available upon reaching one of its end vertices. The availability of an edge is set in advance and does not depend on the traversal.
The Sequential Ordering Problem (SOP), sometimes known as the precedence-constrained TSP [4], is the problem of finding a minimal-cost tour through a graph subject to certain precedence constraints [5]. These constraints are given as a separate directed acyclic graph. In the SOP, the precedence relation is solely between vertices; in our problem, however, we have precedence relations between vertices and edges. Therefore, the SOP is a special case of our problem, and we prove this formally in Lemma 4.
The Minimum Latency Problem (MLP) [6,7] is a variant of the TSP where the cost of visiting a node depends on the path that a traveller takes. Given a weighted graph and a path, the latency of a vertex v on that path is defined as the distance travelled on that path until arriving at v for the first time. The goal of the MLP is to find a tour over all vertices such that the total latency is minimal. Similarly, in our problem the availability of an edge depends on the path taken. However, in an MLP, the graph never changes and the latencies are the result of a simple sum.
On the HCP, some theoretical analysis focuses on investigating conditions, e.g., on vertex degree [8,9], under which a graph contains a Hamiltonian cycle. For instance, Pósa [10] and Komlós and Szemerédi [11] proved that there is a sharp threshold for Hamiltonicity in random graphs as the edge density increases. An intuitive approach to finding a Hamiltonian cycle is to use a depth-first search (DFS). Rubin [12] introduced some rules to prune the search tree. His rules do not improve the worst-case computation time of O(n!), where n is the number of vertices; however, statistical analysis has shown that using such criteria improves the average computation time [13,14].
In terms of applications of the TSP in automated planning, different variants have been used in coverage route planning [15], e.g., for an autonomous lawnmower [16] or for autonomous drilling of a PCB [17]. Those most relevant to this paper are coverage planning problems whose environments change due to the coverage actions of the agents, e.g., robots, that operate within them. The open-pit mining scenario described earlier is an example of such a coverage planning problem, for which a specialized solver for the mining case is proposed by [18]. Autonomous harvesting is another instance, where heavy vehicles should not pass through areas already harvested to avoid soil compaction. The harvested areas also limit the mobility of harvesting machines, hence affecting the reachability among the nodes representing areas to be harvested. Ullrich, Hertzberg, and Stiene [19] formulate this application as an optimization problem for which a specialized solver is also proposed. In both cases described above, the authors neither studied the theoretical underpinning of the problem nor provided a general solution that can easily be employed for other instances of problems with traversal-dependent edge deletion.

Problem statement
In this section, we formally define self-deleting graphs and introduce the corresponding notions of walks and paths. We then proceed to give a formal definition of the HCP-SD and the TSP-SD problems.

Definition 1. A self-deleting graph S is a tuple S = (G, f) where G = (V, E) is a simple, undirected graph and f : V → 2^E. The function f specifies for every vertex v ∈ V which edges f(v) are deleted from G if the vertex v is processed. We refer to f as the delete-function.
If a vertex v is processed, we delete the edges f(v) from G. For a self-deleting graph S and a set W ⊂ V of vertices, the residual graph G_W of S after processing W is defined as G_W = (V, E \ ⋃_{w∈W} f(w)). We call a simple path p = (v_1, …, v_k) in a self-deleting graph conforming if for every 1 ≤ i < k the edge e_i = {v_i, v_{i+1}} is in the residual graph G_{{v_1,…,v_i}}. An f-conforming simple path p traverses the graph S while processing every vertex on p when it is visited.
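As an illustration, the residual graph and path conformity can be checked with a short sketch. The set-based encoding and the toy triangle instance are assumptions of this sketch, not the paper's notation:

```python
from itertools import chain

def residual_edges(edges, f, processed):
    """Edges of the residual graph after processing the vertex set W."""
    deleted = set(chain.from_iterable(f[v] for v in processed))
    return edges - deleted

def is_conforming_path(path, edges, f):
    """Each step {v_i, v_{i+1}} must survive processing the prefix v_1..v_i."""
    return all(
        frozenset((path[i], path[i + 1]))
        in residual_edges(edges, f, path[: i + 1])
        for i in range(len(path) - 1)
    )

# Toy triangle: processing 'a' deletes the edge {b, c}.
E = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]}
f = {"a": {frozenset(("b", "c"))}, "b": set(), "c": set()}
print(is_conforming_path(["b", "c", "a"], E, f))  # True: {b,c} used before 'a'
print(is_conforming_path(["a", "b", "c"], E, f))  # False: {b,c} already gone
```

Note that the same three vertices admit a conforming traversal in one order but not in another, which is exactly the traversal dependence the definition captures.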
In contrast to a path, vertices on a walk can be visited more than once. For a walk on a self-deleting graph, a vertex is processed when it is visited for the last time. Formally, we call a walk w = (v_1, …, v_k) f-conforming if for every 1 ≤ i < k the edge e_i = {v_i, v_{i+1}} is in the residual graph G_{{v_1,…,v_i} \ {v_{i+1},…,v_k}}.
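The last-visit rule can be sketched as follows; the encoding and the toy instance are illustrative assumptions of the sketch:

```python
def is_conforming_walk(walk, edges, f):
    """Check f-conformity of a walk: a vertex is processed at its LAST visit."""
    for i in range(len(walk) - 1):
        # vertices seen so far that do NOT reappear later are processed
        processed = set(walk[: i + 1]) - set(walk[i + 1:])
        deleted = set().union(set(), *(f[v] for v in processed))
        if frozenset((walk[i], walk[i + 1])) not in edges - deleted:
            return False
    return True

# Toy triangle: processing 'a' deletes the edge {b, c}.
E = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]}
f = {"a": {frozenset(("b", "c"))}, "b": set(), "c": set()}
# The walk (a, b, c, a) is conforming: 'a' counts as processed only at its
# last visit, so the edge {b, c} is still present when it is traversed.
print(is_conforming_walk(["a", "b", "c", "a"], E, f))  # True
```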
Following standard terminology, we call a sequence of vertices c = (v_1, …, v_k, v_1) an f-conforming cycle if (v_1, …, v_k) is an f-conforming path and the edge {v_k, v_1} exists in the residual graph G_V. Then, a Hamiltonian cycle of a self-deleting graph S is an f-conforming cycle that contains all vertices of S exactly once.

Problem 1. Given a self-deleting graph S = (G, f), the Hamiltonian Cycle Problem on Self-Deleting graphs (HCP-SD) is to find a Hamiltonian cycle on S.

Problem 2. Given a self-deleting graph S = (G, f), the weak Hamiltonian Cycle Problem on Self-Deleting graphs (weak HCP-SD) is to find an (f-conforming) closed walk on S that contains every vertex at least once.

Observation 1. Every Hamiltonian cycle of 𝑆 is a Hamiltonian cycle of 𝐺.
This implies that the HCP-SD is at least as hard as finding a Hamiltonian path.
Using a weighted graph as the underlying graph of a self-deleting graph we can define optimization problems on self-deleting graphs as follows.
Problem 3. Given a self-deleting graph S = (G, f), where G is a weighted graph, the Travelling Salesperson Problem on self-deleting graphs (TSP-SD) is to find a shortest Hamiltonian cycle on S.

Problem 4. Given a self-deleting graph S = (G, f), the weak Travelling Salesperson Problem on self-deleting graphs (weak TSP-SD) is to find a shortest (f-conforming) closed walk on S that contains every vertex at least once.

Properties of self-deleting graphs
In this section, we provide some formal analysis of self-deleting graphs in comparison to static graphs. First, we analyse path segments in self-deleting graphs. Lemma 2 indicates the inherent difference between static and self-deleting graphs. In static graphs, every segment of a shortest path is a shortest path. This fact is exploited by different algorithms for path finding in static graphs, often based on dynamic programming, e.g., Dijkstra's algorithm [20]. As a consequence, these types of algorithms cannot easily be applied to self-deleting graphs.

Lemma 3. Let S = (G, f) be a self-deleting graph where for every vertex v ∈ V(G), f(v) deletes only edges that are incident to v. Then the Hamiltonian path problem on self-deleting graphs is equivalent to the Hamiltonian path problem on directed graphs.
(Here (u, v) describes the directed arc from u to v, while {u, v} describes the undirected edge between u and v.) Another way to explain this construction is as follows. We make G a directed graph in which each edge is replaced by two arcs in opposite directions. For every vertex v we then delete all outgoing arcs corresponding to an edge in f(v).
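When f(v) deletes only edges incident to v, this construction can be sketched directly. The encoding below is an illustrative assumption:

```python
def to_digraph(vertices, edges, f):
    # Replace each undirected edge {u, v} by the arcs (u, v) and (v, u), then
    # drop every outgoing arc of w that corresponds to an edge in f(w).
    arcs = set()
    for e in edges:
        u, v = tuple(e)
        arcs.update([(u, v), (v, u)])
    for w in vertices:
        for e in f[w]:
            assert w in e, "f(w) may only delete edges incident to w here"
            other = next(iter(e - {w}))
            arcs.discard((w, other))
    return arcs

# Toy example: a single edge {a, b} that vanishes once 'a' is processed,
# leaving only the arc that ENTERS 'a'.
arcs = to_digraph({"a", "b"}, {frozenset(("a", "b"))},
                  {"a": {frozenset(("a", "b"))}, "b": set()})
print(arcs)  # {('b', 'a')}
```

The surviving arc (b, a) reflects that the edge may still be used to arrive at a, but not to leave it after a has been processed.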
We now prove that a path p is f-conforming in S if and only if p is a path in the constructed directed graph. This shows that any valid path in a SOP instance is f-conforming in the corresponding self-deleting graph and vice versa. This holds in particular for Hamiltonian paths. □

Exact and heuristic solvers
Next, we describe two solvers for the HCP-SD and TSP-SD problems: one that produces an exact solution and one which relies on heuristics.

Exact solvers
An intuitive approach to solving the HCP on a self-deleting graph S is to employ a DFS in a forward-search manner: starting with some vertex v_1, we delete all edges in f(v_1) from G, then choose a neighbour v_2 of v_1 as the next vertex on the path and repeat until the path is a Hamiltonian cycle or the current path cannot be extended, in which case we backtrack. This approach can be improved with methods used in algorithms for Hamiltonian cycles in conventional graphs, namely graph/search-tree pruning, as introduced by [12,21]. Their algorithms identify edges that must be in a Hamiltonian cycle, e.g., edges incident to a vertex of degree 2, and employ these required edges to improve the average runtime of a forward DFS. However, even with these pruning rules, the algorithm fails to detect early those paths that cannot be extended to a Hamiltonian cycle. This is due to the fact that the edge deletion is traversal-dependent.
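A minimal sketch of this forward DFS, without the pruning rules; the set-based encoding and the toy instances are assumptions of the sketch:

```python
def forward_hcp_sd(vertices, edges, f, start):
    """Forward DFS for the HCP-SD: process each vertex on arrival, delete
    f(v), extend by a surviving neighbour, and backtrack on dead ends."""
    n = len(vertices)

    def dfs(path, avail):
        v = path[-1]
        avail = avail - f[v]  # processing v deletes the edges f(v)
        if len(path) == n:
            # close the cycle if the return edge survived all deletions
            return path + [start] if frozenset((v, start)) in avail else None
        for e in avail:
            if v in e:
                u = next(iter(e - {v}))
                if u not in path:
                    cycle = dfs(path + [u], avail)
                    if cycle:
                        return cycle
        return None

    return dfs([start], set(edges))

# A 4-cycle with no deletions is trivially Hamiltonian...
square = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]}
no_del = {v: set() for v in "abcd"}
print(forward_hcp_sd(set("abcd"), square, no_del, "a") is not None)  # True
# ...but not if processing the start vertex deletes both of its edges.
f_blk = {v: set() for v in "abcd"}
f_blk["a"] = {frozenset(("a", "b")), frozenset(("d", "a"))}
print(forward_hcp_sd(set("abcd"), square, f_blk, "a"))  # None
```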
Since failures occurring at a late stage are often due to choices made at an earlier stage of the search, we propose a backward search algorithm, shown in Algorithm 1, which takes advantage of this observation to greatly reduce the size of the search tree. Instead of exploring the path from a start vertex and deleting edges subsequently, Algorithm 1 starts by deleting all edges that would get deleted at some point. It then explores the graph in a backward fashion, adding edges according to the visited vertices. Because edges are added during this backward exploration of the graph, searching for required edges, as is done in a conventional forward DFS for Hamiltonian cycles, is not possible.
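The backward idea can be sketched as follows. This is a simplified, pruning-free reading of the approach; the encoding, names and toy instance are assumptions of the sketch, not Algorithm 1 itself:

```python
def backward_hcp_sd(vertices, edges, f, v1):
    """Backward search sketch: fix v1 as the cycle's start, then choose the
    remaining vertices from the END of the cycle towards the front.  With a
    partial suffix (v_i, ..., v_k, v1), the processed set is everything
    except {v_{i+1}, ..., v_k}, so extending backwards ADDS edges back."""
    V, n = set(vertices), len(vertices)

    def residual(processed):
        return edges - set().union(set(), *(f[v] for v in processed))

    def search(path):
        if len(path) == n:
            # goal: close the cycle with the edge to path[0] in G_{{v1}}
            if frozenset((path[-1], path[0])) in residual({path[-1]}):
                return list(path)  # cycle order: (v_2, ..., v_k, v1)
            return None
        avail = residual(V - set(path[:-1]))
        head = path[0]
        for e in avail:
            if head in e:
                u = next(iter(e - {head}))
                if u not in path:
                    cycle = search((u,) + path)
                    if cycle:
                        return cycle
        return None

    return search((v1,))

# Toy triangle: processing 'a' deletes {b, c}; a conforming Hamiltonian
# cycle exists starting from 'b' (b -> c -> a -> b) but not from 'a'.
E = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]}
f = {"a": {frozenset(("b", "c"))}, "b": set(), "c": set()}
print(backward_hcp_sd("abc", E, f, "b"))  # ['c', 'a', 'b']
print(backward_hcp_sd("abc", E, f, "a"))  # None
```

The first call operates on the fully deleted residual graph, so edges reappear as the suffix grows, mirroring the description above.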
The first call of the recursive search procedure receives a single start vertex as the path, together with the self-deleting graph. During the repeated calls, the path grows backwards: the first call will be with path = (v_1), the next with path = (v_k, v_1), then path = (v_{k-1}, v_k, v_1) and so forth. During each call, the residual graph G' with respect to path is calculated (line 1). In line 2 follows a goal check, where it is first verified whether the path has the correct length and, if so, whether the missing edge between both end vertices exists (line 3). If the initial check fails, the algorithm calculates, in line 9, the set of vertices that are candidates for the second vertex in the Hamiltonian path. If all the candidates are already on path, the path cannot be extended to a Hamiltonian cycle; we check this condition in line 10. In line 13, the set N of neighbours in G' of the first vertex of the current path is calculated. For every neighbour, the search is called with an extended path until one of them returns a Hamiltonian cycle. For correctness, assume for contradiction that the algorithm returns failure although an f-conforming Hamiltonian cycle c = (v_1, v_2, …, v_k, v_1) exists. The algorithm checks whether the edge {v_1, v_2} exists in the residual graph. If so, c is returned, which is a contradiction, since we assumed failure is returned. If, however, there is no edge {v_1, v_2} in the residual graph, then c is not a Hamiltonian cycle, contradicting the assumption.
We continue in line 13. Here, the list N of neighbours of the current first vertex in G' that are not already on the path is calculated. We now consider two cases: (a) v_i ∉ N: N contains all neighbours of v_{i+1} in G'. So if v_i ∉ N, then there is no edge between v_i and v_{i+1} in the residual graph after processing v_1, …, v_i. Thus, c is not a Hamiltonian cycle, a contradiction. So (b) must hold.
Since we always arrive at a contradiction, the assumption does not hold. Thus, if S is Hamiltonian, the algorithm finds a Hamiltonian cycle. □

In order to investigate the behaviour of both exact algorithms, we first need to define the Average Vertex Degree (AVD) for a self-deleting graph. The AVD is a metric commonly used in the analysis of static graphs for the HCP. Let d_e be the number of times an edge e appears in the delete-function f. The probability δ_e(k) that the edge e will be deleted after processing any k vertices from V in arbitrary order is given by δ_e(k) = 1 − C(n − d_e, k) / C(n, k), where C(·, ·) denotes the binomial coefficient. Then, the expected ''static'' AVD of the residual graph after processing any k vertices can be determined as

AVD(k) = (n − 1) − (2/n) ∑_{e∈E} δ_e(k).
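The expected AVD can be computed with a short sketch. We assume here that the deletion probability has the hypergeometric form 1 − C(n − d_e, k)/C(n, k) and that the underlying graph is complete, as for the TSPLIB-derived instances; both are assumptions of this sketch:

```python
from math import comb

def delete_prob(n, d_e, k):
    # P(edge e is deleted after processing k of the n vertices uniformly at
    # random): e survives iff none of the d_e deleting vertices is chosen.
    return 1 - comb(n - d_e, k) / comb(n, k)

def expected_avd(n, multiplicities, k):
    # Expected AVD of the residual of a COMPLETE graph on n vertices; each
    # deleted edge lowers the total degree by 2, spread over n vertices.
    return (n - 1) - (2 / n) * sum(delete_prob(n, d, k) for d in multiplicities)

# Complete graph on 4 vertices (6 edges), nothing ever deleted:
print(expected_avd(4, [0] * 6, 2))  # 3.0
# An edge listed by every vertex is gone after the first processing step:
print(delete_prob(4, 4, 1))  # 1.0
```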
A dataset random24-100 of 14 400 random self-deleting graphs with 24 vertices was generated in order to compare both exact algorithms. The delete-function f was sampled uniformly at random, with overlapping of f(v) for two distinct v allowed. In terms of AVD, the dataset uniformly covers the interval from 0 to 12. In an experimental comparison between the backward and forward search, both solving the same dataset random24-100 and capped at 10 000 expanded search nodes, the backward search performs much better. It was able to solve all instances and, on average, identified a Hamiltonian instance after 27.9 explored nodes and a non-Hamiltonian instance after 1.6 explored nodes. The forward search failed to find a solution within the limit for most instances. The diagrams in Fig. 3 show the average number of explored nodes by which either algorithm was able to decide the instance or the limit was reached.
Fig. 4(a) shows the percentage of infeasible instances decided by the backward search at various search depths while using the same random24-100 dataset. Infeasible instances with an AVD of less than 3 are detected instantly at depth 1. The hardest instances to detect are located between AVDs 6 and 7; above 7, the dataset does not contain any infeasible instances. Finally, more than 80% of infeasible instances are detected at depth 10, less than half of |V|.
Fig. 4(b) illustrates how the percentage of detected infeasible instances at various depths depends on |V|. At a fixed depth, the percentage unsurprisingly decreases with increasing |V|, but even for |V| = 200 about 50% of the instances are detected at depth 10. Interestingly, the percentage increases when using a relative depth, and close to 100% of infeasible instances are detected at depth 0.2|V| when |V| > 100. This experiment indicates that the backward search algorithm's ability to detect infeasible instances of the HCP-SD early on in the search improves with increasing |V| and, consequently, the algorithm may be scalable enough to find feasible solutions even for instances with |V| of practical interest.

Heuristic solver
The proposed exact solver is likely to provide limited scalability when addressing optimization problems due to its exhaustive nature. Also, finding near-optimal solutions is often sufficient in practical applications; therefore, heuristic algorithms may be the only computationally feasible approach to obtain them. A common procedure is to design a problem-specific metaheuristic algorithm that is tailored to a particular application. Various heuristic approaches have been successfully applied to problems related to the TSP-SD, such as metaheuristics based on local search [22], evolutionary optimization [23] or swarm optimization [24].
In this paper, we use a generic metaheuristic solver for problems with permutative representation [25], so that we can remain application-agnostic regarding multiple variants of the TSP-SD. The solver implements several high-level metaheuristics and also a bank of low-level local search operators, perturbations and construction procedures.
These can be readily applied to various problems whose solution can be encoded as a sequence of potentially recurring nodes. The only user requirement is to specify a set of nodes; lower and upper bounds l, u on the frequency of their occurrence in a solution sequence x = (x_1, x_2, …, x_k); a fitness function F(x); and an aggregation of penalty functions P(x). The bounds are always respected by the solver, whereas the penalty functions are treated as soft constraints. Their purpose is to direct the search process towards valid solutions. The TSP-SD can be described in the solver formalism as follows: the set of nodes to visit corresponds to the set of vertices V(G).
Each node x_i has to be processed exactly once, thus l_i = u_i = 1. Then, e_i is the edge {x_i, x_{(i+1) mod k}}, G_{{x_1, x_2, …, x_i}} is the residual graph after processing the first i nodes in x, and M is a large constant introduced to penalize using an already deleted edge e_i in x. The goal is to minimize the total length of the cycle given by F and force all penalties P_i(x) to zero, if possible.
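A penalized evaluation of this kind can be sketched as follows; the encoding, the value of the constant M and the toy instance are illustrative assumptions, not the solver's internals:

```python
M = 10**6  # large penalty constant for traversing a deleted edge (assumed)

def tsp_sd_fitness(seq, weight, f):
    """Evaluate a permutation: sum edge weights along the cyclic sequence,
    penalizing by M every edge already deleted by the processed prefix."""
    n, total, penalty = len(seq), 0.0, 0.0
    deleted = set()
    for i, v in enumerate(seq):
        deleted |= f[v]  # processing v deletes f(v)
        e = frozenset((v, seq[(i + 1) % n]))
        if e in deleted:
            penalty += M
        else:
            total += weight[e]
    return total, penalty

# Toy triangle with unit weights; processing 'a' deletes {b, c}.
E = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]]
w = {e: 1.0 for e in E}
f = {"a": {frozenset(("b", "c"))}, "b": set(), "c": set()}
print(tsp_sd_fitness(["b", "c", "a"], w, f))  # (3.0, 0.0) -- valid tour
print(tsp_sd_fitness(["a", "b", "c"], w, f))  # (2.0, 1000000.0) -- invalid
```

The non-zero penalty steers the search away from sequences that traverse deleted edges, while valid tours compete on length alone.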
For the weak TSP-SD, both the set of nodes and the respective bounds l, u are defined in the same way as in the TSP-SD, but the definitions of F(x) and P_i(x) differ: here, p_i is the shortest path from x_i to x_{(i+1) mod k} in the residual graph G_{{x_1, x_2, …, x_i}}, which is found using the A* algorithm [26]. The fitness evaluation for the weak TSP-SD is therefore more expensive than for the TSP-SD by the cost of these shortest-path queries. Only the first and last vertex of p_i are processed. If p_i does not exist, a large constant M is added to the penalty P(x) via P_i(x). The goal is to minimize the total length of the closed walk given by F.

Statistical analysis of HCP-SD
In this section, we investigate properties analogous to those previously studied in the literature for the HCP, since they are crucial for understanding the behaviour and evaluating the performance of the proposed solvers. For the HCP, the probability density function of a randomly generated graph being Hamiltonian was experimentally shown to be sigmoidally shaped around a certain threshold point [14]. This threshold corresponds to the graph's AVD for which the probability is approximately 0.5. Their experiments indicate that HCP instances close to this boundary are the hardest to decide for various exact algorithms in terms of computational cost, although isolated clusters of hard instances were also identified far away from it. The location of this threshold has been proved to be ln(n) + ln(ln(n)), which is called the Komlós-Szemerédi bound [11].
First, we replicated the experiment from [14], showing the probability density function of Hamiltonicity for a randomly generated graph with 24 vertices. For this purpose, we generated a dataset of 100 random graphs for every number of edges from 1 to 144, resulting in 14 400 graphs with AVDs ranging from 0 to 12. The HCP was decided for the whole dataset using the Concorde TSP solver, and the result of the experiment is shown in Fig. 5(a) - HCP (exact). The dataset random24-100 of 14 400 random self-deleting graphs with 24 vertices was created analogously, covering the same range of AVDs. On this dataset, the HCP-SD was decided with both an exact and a heuristic solver, and the weak HCP-SD with the heuristic solver described in Section 5. The exact solver was always terminated after successfully deciding the problem, whereas the heuristic solver was terminated either after finding a feasible solution or after reaching a time budget of |V| seconds. Therefore, the heuristic solver's results are suitable for assessing the solver's properties rather than for reasoning about the problem itself. Fig. 5(a) indicates that the probability density function of the HCP-SD is shaped similarly to that of the HCP but is steeper, and the threshold point is located further to the right.
The weak HCP-SD appears to have similar properties, but there is no exact solver available, and using the heuristic solver may affect the location of the threshold point, as it may label a feasible instance as infeasible. We can see that instances with an AVD of less than 3, which were shown to be easy to decide for the exact solver in Fig. 4(a), actually have zero probability of being Hamiltonian. Instances with an AVD between 6 and 7, which were shown to be the hardest to decide, are located close to the HCP-SD Hamiltonicity threshold point. Thus, in a similar fashion to the HCP, HCP-SD instances close to the threshold point are computationally harder for the exact solver.
Second, 12 more datasets of random self-deleting graphs with 10 to 200 vertices and a uniformly randomly sampled f were generated to investigate the Hamiltonicity bound w.r.t. |V| for both variants of the HCP-SD. Each of these datasets was generated to cover an interval that contains the threshold point of both problems and consisted of 2500 instances, evenly distributed across the interval into groups of 50 instances with the same AVD. Again, the HCP-SD was decided with an exact and a heuristic solver and the weak HCP-SD with a heuristic solver, and the location of the threshold point was determined for each dataset and problem. The locations of the threshold points are shown in Fig. 5(b), thus showing a bound analogous to the Komlós-Szemerédi bound. The bound HCP-SD (exact) follows a sublinear, presumably logarithmic trend, similar to the Komlós-Szemerédi bound but faster growing. As for the weak HCP-SD, the heuristic data evidently do not provide an accurate estimate of the bound.
The threshold points of the weak HCP-SD should never be higher than those of the HCP-SD, because all self-deleting graphs feasible in the HCP-SD are also feasible in the weak HCP-SD. The bound HCP-SD (heuristic) illustrates that the heuristic solver consistently struggles to find feasible solutions close to the real Hamiltonicity bound found by the exact solver.

TSP-SD solvers evaluation
So far, we have focused only on the results relevant to the decision problems, but both proposed solvers are designed to address the formulated optimization problems as well. Each solver has unique properties that are investigated in a series of eight experiments on a newly created dataset.¹ The dataset consists of 11 instances of self-deleting graphs with sizes ranging from 14 to 1084 vertices. The instances are selected from the TSPLIB library [28], but a uniformly randomly generated delete-function f is added. To give an idea about the delete-function, Fig. 6 shows the sets of edges deleted by processing four different nodes in the instance berlin52-13.2. In terms of the AVD, most of the instances are generated close to the HCP-SD Hamiltonicity bound of the heuristic solver so that they could be solved by the heuristic solver alone. The following naming format is used: original_name|V|-AVD.
The heuristic solver offers a portfolio of alternative components, each suitable for a different set of problems with permutative representation. The solver must be tuned to achieve the best performance for a specific problem. The tuning was carried out using the irace package [29] with a tuning budget of 2500 experiments. The configuration obtained is shown in Table 1. The tuner selected the Basic Variable Neighborhood Search (VNS) [30] as the high-level metaheuristic and the Pipe Variable Neighborhood Descent (VND) [31] to control the local search. The results of the exact solver were generated on a dedicated machine with Ubuntu 18.04 OS and an Intel Core i7-7700 CPU. Experiments using the heuristic solver were run on an AMD EPYC 7543 CPU cluster. Each instance was solved once by the exact solver and 50 times by the heuristic solver, since the latter is stochastic. The heuristic solver always had a time budget of 10|V| seconds per single run. We present the results in Tables 2, 3 and 4. Individual experiments are referred to by the column letter of the corresponding table. Finally, the relative improvement brought by an experiment B relative to an earlier experiment A on a particular instance i is calculated as 100 × (1 − f_B(i)/f_A(i)), where f_X(i) is the objective value on i in experiment X. This value is eventually averaged across the entire dataset.
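As a small sanity check, the improvement metric can be computed as follows (names are illustrative):

```python
def relative_improvement(f_a, f_b):
    # 100 * (1 - f_B(i) / f_A(i)) per instance, averaged over the dataset
    per_instance = [100.0 * (1.0 - b / a) for a, b in zip(f_a, f_b)]
    return sum(per_instance) / len(per_instance)

# Experiment B halves one objective and leaves the other unchanged:
print(relative_improvement([200.0, 80.0], [100.0, 80.0]))  # 25.0
```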
The proposed backward search is introduced as a decision algorithm for the HCP-SD in Algorithm 1. To address the optimization problem TSP-SD, only a slight modification is required. The algorithm does not stop when the first valid solution is found (line 4); instead, it continues to search until a given time limit is reached, while storing the best solution found so far. Another minor modification concerns the order of expansion at line 14. In the default variant, the nodes v ∈ N are traversed in arbitrary order, determined by the iterator implementation of the set N. In the following experiments, a greedy expansion is also tested. In this variant, the nodes v ∈ N are sorted according to their distance from the current head of the path and expanded from closest to farthest.
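The greedy expansion order can be sketched in a few lines, assuming a hypothetical coordinate encoding for the vertices:

```python
from math import dist

def greedy_order(head, candidates, coords):
    # Sort candidates by Euclidean distance to the current head of the path,
    # so the backward search tries the closest extension first.
    return sorted(candidates, key=lambda u: dist(coords[u], coords[head]))

coords = {"h": (0.0, 0.0), "x": (3.0, 0.0), "y": (1.0, 0.0), "z": (2.0, 0.0)}
print(greedy_order("h", ["x", "y", "z"], coords))  # ['y', 'z', 'x']
```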
Table 2 documents the performance of the exact TSP-SD solver. The backward search performs the path expansion in default order in the experiments in columns A and B, whereas greedy expansion is used in the experiments in columns C and D. Column A presents the objective values and computation times needed to find the first valid solution of the TSP-SD while using the default expansion. A solution is found within one second for instances with up to |V| = 202 and within one minute for all instances in the dataset. The dataset contains two variants of the berlin52 instance with different values of AVD, of which the berlin52-10.4 instance is closer to the Hamiltonicity bound. Finding a valid solution for berlin52-10.4 requires 10 times more time than for berlin52-13.2. Thus, the AVD seems to be an important factor working against the backward search. The scalability of the exact solver in this experiment is surprisingly good, as was already indicated in Fig. 4(b).
In Table 2, column B, the exact solver was given a budget of 12 h to solve the TSP-SD for each instance. The first three were solved to optimality, but the remaining eight reached the time limit. On average, the first valid solution was improved by 9.75%, but the improvement decreases with increasing instance size. In the case of the three largest instances, the improvement is only 1%. This experiment only confirms the expectation of poor scalability when using an exact approach for an optimization problem, due to its exhaustive nature. Unlike in the previous experiment, the berlin52-10.4 variant was actually easier to solve when addressing the optimization problem, as the backward search tree is presumably pruned more with a lower AVD.
Table 2, column C, depicts the benefit of using the greedy expansion in the backward search. The computation times needed to find the first valid solution are slightly, but consistently, better than with the default expansion. More importantly, the objective values are frequently more than ten times better than with the default expansion, which is a considerable improvement brought by a simple heuristic rule. On average, the first valid solutions found with the greedy expansion are better by 56% than those found with the default expansion. The improvement increases with increasing instance size and is around 90% for the four largest instances. Fig. 7(a) shows the best solution obtained by the exact solver with the default expansion, and Fig. 7(b) the one obtained with the greedy expansion. The figures illustrate that using the default expansion is equivalent to generating a random valid solution, whereas the greedy solution behaves reasonably in less dense areas. As shown in Table 2, column D, increasing the time budget to 12 h further improves the objective by 6% on average relative to the first valid greedy solutions. Similarly to the random expansion, this improvement decreases with increasing instance size and is less than 1% for the largest instance.
Table 3, column A, presents the results of the heuristic solver alone on the TSP-SD. Each instance was solved 50 times with a time budget of 10|V| seconds, e.g., 140 s for the burma14-3.1 instance. The optimal solution was found for the two smallest instances. However, the solver cannot find a valid solution every time and fails entirely to provide any valid solution in all 50 runs for the berlin52-10.4 instance. In terms of solution quality, the best solutions found by the heuristic solver alone are worse by 26% on average than the first valid solutions found by the greedy exact solver. Furthermore, the mean success rate is only 62%. The heuristic solver is expected to converge faster than the exact solver, but presumably spends a large portion of the time budget on finding a valid initial solution instead. This assumption is confirmed in Table 3, column B, where the heuristic solver is initialized with the first valid solution found by the exact solver (Table 2, column C). Here, the best solutions found by the warm-started heuristic solver in 10|V| seconds are better by 5% on average than those obtained by the greedy exact solver in 12 h and by 11.3% than the first valid solutions. Most importantly, the improvement does not decrease with increasing instance size and is consistent across the entire dataset. The previous two experiments reveal the drawbacks of both approaches: the exact solver scales poorly in the optimization problem, whereas the penalty-based heuristic solver does not provide a valid solution reliably. On the other hand, the exact solver provides valid solutions to all instances very fast, and the heuristic solver is much better at refining good-quality solutions. Therefore, using both solvers sequentially, i.e., implementing a warm-started optimization, combines the advantages of both. Fig. 7(c) shows the best solution of berlin52-13.2 obtained by the heuristic solver alone, while Fig.
7(d) shows the best-known solution, obtained by the warm-started heuristic solver. Both solutions remain entangled in the centre area with the most vertices, which may be attributed to the naturally denser randomly generated delete-function f in this area, as indicated in Fig. 6. Table 4 illustrates the benefit of relaxing the TSP-SD to the weak TSP-SD. Every solution to the TSP-SD is also valid for the weak TSP-SD, but the weak formulation might yield a better optimal value. On the other hand, the fitness evaluation in the weak TSP-SD calculates the shortest paths p_i instead of reading the edge weights. The evaluation is thus considerably more expensive, and the heuristic solver is drastically slower when solving the weak TSP-SD. The performance of the heuristic alone is shown in Table 4, column A. Regarding the success rate, the heuristic is significantly more successful than with the TSP-SD, as the space of valid solutions in the weak TSP-SD formulation is much larger. In Table 4, column B, the best-known TSP-SD solution from the initialized heuristic solver (Table 3, column B) was used as the initial solution. The experiment shows that only the TSP-SD solution of the smallest instance was not improved in the weak TSP-SD formulation. In the remaining instances, the weak TSP-SD solution is better by 7% on average than the best-known TSP-SD solution, so the relaxation is highly beneficial.

Conclusions
We introduced new variants of the Hamiltonian Cycle and the Travelling Salesperson Problems with self-deleting graphs, for which we proposed formal definitions, theoretical analyses and two solvers. In the future, we intend to investigate general heuristics for the proposed backward search. We also want to develop a new solver which works in

Fig. 1. Representation of a mining example, where due to different traversals, indicated with thicker edges, different edges are deleted in (a) and (b).

Fig. 2. Illustrations for the proof of Lemma 2: If the path q is shorter than the path p̄, then p was not a shortest path.

v_x) be an f-conforming path from the vertex v_1 to the vertex v_x and let |p| denote the length of the path p. We call p a shortest f-conforming path from v_1 to v_x if for every other f-conforming path p' = (v_1, …, v_x) from v_1 to v_x it holds that |p| ≤ |p'|.

Lemma 2. Let p = (v_1, …, v_x) be a shortest f-conforming path from v_1 to v_x on a self-deleting graph 𝔊 = (G, f). The following two statements hold:
1. For every 1 < i < x it holds that the path p_i = (v_1, …, v_i) is not necessarily a shortest f-conforming path in 𝔊.
2. It further holds that the path p̄ = (v_i, …, v_x) is a shortest conforming path from v_i to v_x in the self-deleting graph 𝔊' = (G_{v_1,…,v_i}, f).

Proof. Let p = (v_1, …, v_x) be a shortest f-conforming path from v_1 to v_x on a self-deleting graph 𝔊 = (G, f). For any 1 < i < x we denote the path segment of p from v_1 to v_i by p_i and the path segment from v_i to v_x by p̄. Due to Lemma 1, the path segments p_i and p̄ are f-conforming. We now prove the two statements separately.
1. A shortest f-conforming path from v_1 to v_i could contain a vertex v_j for which f(v_j) deletes an edge needed in the second part p̄ of the f-conforming path p. So, p_i is not necessarily a shortest f-conforming path from v_1 to v_i.
2. We now prove that the path p̄ = (v_i, …, v_x) is a shortest conforming path from v_i to v_x in the self-deleting graph 𝔊' = (G_{v_1,…,v_i}, f). For a contradiction, assume there is a vertex v_i, with 1 < i < x, such that there is an f-conforming path q from v_i to v_x in 𝔊' that is shorter than p̄. We consider the following two cases. (a) The paths q and p_i do not share a vertex, as depicted in Fig. 2(a). If this is the case, then the path from v_1 to v_x that consists of the path p_i and the path q is shorter than the path p. This is a contradiction. (b) The paths q and p_i share a vertex v_j, as depicted in Fig. 2(b). Since by assumption q is f-conforming and |q| < |p̄|, the walk from v_1 to v_x consisting of p_i and q is shorter than p. We can create an even shorter simple path from v_1 to v_x by omitting the cycle that is created by going from v_j to v_i via p_i and then returning to v_j via q. This is a contradiction to the assumption that p is a shortest path from v_1 to v_x. □
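Both statements can be checked on a small instance with a brute-force search for shortest f-conforming paths (an illustrative sketch under assumed data structures, not one of the paper's solvers):

```python
def shortest_conforming_path(edges, delete, s, t):
    """Brute-force shortest f-conforming path from s to t.

    edges maps frozenset({u, v}) -> weight; delete maps a vertex v to the
    edges removed once v is visited. Returns (length, path) or (inf, None).
    """
    best = (float("inf"), None)

    def dfs(v, visited, removed, length, path):
        nonlocal best
        if v == t:
            if length < best[0]:
                best = (length, path)
            return
        for e, w in edges.items():
            # an edge is usable only if it survived every deletion triggered
            # by the vertices visited so far (up to and including v)
            if v in e and e not in removed:
                u = next(x for x in e if x != v)
                if u not in visited:
                    dfs(u, visited | {u},
                        removed | {frozenset(d) for d in delete.get(u, ())},
                        length + w, path + [u])

    dfs(s, {s}, {frozenset(d) for d in delete.get(s, ())}, 0.0, [s])
    return best

# Toy instance: vertex 2 deletes the edge (1, 3) that is needed later.
E = {frozenset((0, 1)): 5.0, frozenset((0, 2)): 1.0,
     frozenset((1, 2)): 1.0, frozenset((1, 3)): 1.0}
f = {2: [(1, 3)]}
```

In this instance the shortest f-conforming path from 0 to 3 is (0, 1, 3) with length 6, while the shortest f-conforming path from 0 to 1 is (0, 2, 1) with length 2: the cheaper prefix visits vertex 2, whose deletion of (1, 3) blocks the continuation, which matches statement 1.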
path through G is an f-conforming path through 𝔊 and vice versa. So the Hamiltonian path problem on G is equivalent to the Hamiltonian path problem on 𝔊. □

A sequential ordering problem (SOP) is defined as a graph G = (V, E) accompanied by a precedence graph P. The precedence graph P is a directed graph defined on the same set of vertices V. It represents the precedence relation between the vertices of G. An edge from v_i to v_j in P implies that v_i must precede v_j in any path through G. The problem is to find a Hamiltonian path in G that does not violate the precedence relation given by P.

Theorem. For every sequential ordering problem S there is a corresponding self-deleting graph 𝔊_S such that a path p is a solution to S if and only if p is a Hamiltonian path of 𝔊_S.

Proof. Let a SOP be given by the graph G and the precedence graph P. Let pre(v) ⊆ V(G) be the set of vertices that precede v in P, formally pre(v) = {u | (u, v) ∈ E(P)}. We construct the corresponding self-deleting graph 𝔊_S = (G, f) with f(v) = {(u, w) ∈ E | u ∈ pre(v) or w ∈ pre(v)}, i.e., visiting v deletes every edge incident to a vertex that must precede v.
⇒: Let p = (v_1, …, v_n) be a path in G that satisfies the precedence relations given in P. So, for every 1 ≤ i ≤ j < n the vertices v_j and v_{j+1} are not required to precede v_i. Thus, the edges (v_j, v_i) and (v_{j+1}, v_i) are not in P and v_j, v_{j+1} ∉ pre(v_i). So by construction of f the edge (v_j, v_{j+1}) does not get deleted by any f(v_i) for 1 ≤ i ≤ j. This implies that the edge (v_j, v_{j+1}) is in the residual graph G_{v_1,…,v_j}. Thus, p is f-conforming in 𝔊_S.
⇐: If the path p = (v_1, …, v_n) is f-conforming in 𝔊_S, it holds per definition that for every 1 ≤ j < n the edge (v_j, v_{j+1}) is in the residual graph G_{v_1,…,v_j}. Thus, the edge (v_j, v_{j+1}) has not been deleted by any vertex v_i with 1 ≤ i ≤ j. It follows that v_j and v_{j+1} are not in pre(v_i) with 1 ≤ i ≤ j. Thus, v_j and v_{j+1} are not required to be visited before v_i with 1 ≤ i ≤ j, and the path p satisfies the precedence conditions in P. It is therefore a valid path in G. □
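The reduction from a SOP to a self-deleting graph can be sketched programmatically (hypothetical helper names; the delete function follows the construction in the proof: visiting v deletes every edge incident to a vertex that must precede v, so unvisited predecessors become unreachable):

```python
def sop_to_delete_function(edges, precedence):
    """edges: set of frozenset({u, v}); precedence: set of pairs (u, v)
    meaning u must precede v. Returns f mapping v -> set of deleted edges."""
    pre = {}
    for u, v in precedence:
        pre.setdefault(v, set()).add(u)
    # f(v) deletes every edge incident to a predecessor of v
    return {v: {e for e in edges if e & preds} for v, preds in pre.items()}

def is_conforming(path, f):
    """Check that each edge of the path survives all deletions triggered by
    the vertices visited up to (and including) the edge's start vertex."""
    removed = set()
    for i in range(len(path) - 1):
        removed |= set(f.get(path[i], ()))
        if frozenset((path[i], path[i + 1])) in removed:
            return False
    return True

# Complete graph on {0, 1, 2} with the precedence "0 before 2":
E = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2)]}
f = sop_to_delete_function(E, {(0, 2)})
```

The Hamiltonian path (0, 1, 2) respects the precedence and is f-conforming, whereas (1, 2, 0) visits 2 before 0; visiting 2 deletes the edges incident to 0, so the path is not f-conforming, exactly the correspondence claimed by the theorem.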