Understanding Sample Generation Strategies for Learning Heuristic Functions in Classical Planning

We study the problem of learning good heuristic functions for classical planning tasks with neural networks based on samples represented by states with their cost-to-goal estimates. The heuristic function is learned for a state space and goal condition with the number of samples limited to a fraction of the size of the state space, and must generalize well for all states of the state space with the same goal condition. Our main goal is to better understand the influence of sample generation strategies on the performance of a greedy best-first heuristic search (GBFS) guided by a learned heuristic function. In a set of controlled experiments, we find that two main factors determine the quality of the learned heuristic: the algorithm used to generate the sample set and how close the sample estimates are to the perfect cost-to-goal. These two factors are dependent: having perfect cost-to-goal estimates is insufficient if the samples are not well distributed across the state space. We also study other effects, such as adding samples with high-value estimates. Based on our findings, we propose practical strategies to improve the quality of learned heuristics: three strategies that aim to generate more representative states and two strategies that improve the cost-to-goal estimates. Our practical strategies result in a learned heuristic that, when guiding a GBFS algorithm, increases the mean coverage by more than 30% compared to a baseline learned heuristic.


Introduction
A planning task is defined by the initial state that describes the starting conditions of the environment, the goal condition that specifies the conditions to be achieved, and the operators that can be applied to states of the planning task. Each operator has preconditions that must be satisfied for it to be applicable and effects that are the changes that occur in the state after the operator is applied. Planning is finding a sequence of operators that transforms an initial state into one that satisfies the goal condition. Applying an operator has a cost, and the cost of a plan is the total cost of its operators. A plan is optimal if its cost is minimal among all plans. One effective approach for solving planning tasks is to use algorithms from the best-first search family, which rely on a guiding function f that estimates the quality of a state s. Best-first search expands the state with the best quality first. A heuristic function h(s) estimates the cost-to-goal from state s and may be used as a quality measure by f. One algorithm within the best-first search family is greedy best-first search (GBFS) (Doran and Michie 1966), which is guided only by the heuristic function h(s). Another example is the A* search algorithm, guided by f(s) = g(s) + h(s) (Hart et al. 1968), where g(s) is the current cost to reach the state s from the initial state. Generally, best-first search algorithms are more effective when the heuristic function h better estimates the optimal cost-to-goal, although this is not always the case (Holte 2010).
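For concreteness, GBFS can be sketched in a few lines. This is a minimal Python sketch over a hypothetical black-box successor function, not the planner implementation used in the experiments:

```python
import heapq

def gbfs(initial, is_goal, successors, h):
    """Greedy best-first search: always expand the state with the lowest h-value."""
    counter = 0  # tie-breaker so heapq never compares states directly
    frontier = [(h(initial), counter, initial)]
    seen = {initial}
    parent = {initial: None}
    while frontier:
        _, _, s = heapq.heappop(frontier)
        if is_goal(s):
            plan = []  # reconstruct the state sequence back to the initial state
            while parent[s] is not None:
                plan.append(s)
                s = parent[s]
            return list(reversed(plan))
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                parent[t] = s
                counter += 1
                heapq.heappush(frontier, (h(t), counter, t))
    return None  # goal unreachable
```

On a toy number-line task (goal 5, successors s ± 1, h(s) = |5 − s|), `gbfs(0, ...)` returns the state sequence `[1, 2, 3, 4, 5]`. A* would differ only in ordering the frontier by g(s) + h(s).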
Some of the most successful heuristic functions h are based on methods that solve a simplified version of the planning task and use the cost of solving the simplified task to produce the cost-to-goal estimate. Examples of methods used to produce heuristic functions include delete relaxation (Bonet and Geffner 2001; Hoffmann and Nebel 2001), landmarks (Helmert and Domshlak 2009; Hoffmann et al. 2004; Karpas and Domshlak 2009), critical paths (Haslum and Geffner 2000), constraint-based methods (Bonet 2013; van den Briel et al. 2007), and abstractions (Culberson and Schaeffer 1998; Edelkamp 2001; Helmert et al. 2007). Many of these heuristics come with additional properties, such as admissibility. A heuristic is admissible if it does not overestimate the optimal cost-to-goal. A* guided by an admissible heuristic is guaranteed to find an optimal plan if one exists.
Increasing interest in learning heuristic functions with neural networks (NNs) has been driven by rapid progress in other application areas. Some works in this area include those by Agostinelli et al. (2019), Arfaee et al. (2011), Ferber et al. (2020), Ferber et al. (2022), O'Toole et al. (2022), Samadi et al. (2008), Shen et al. (2020), Toyer et al. (2020), and Yu et al. (2020). The basic approach is simple: generate a set of states along with their estimated cost-to-goal values, then train a supervised model on these pairs. The model learns to predict cost-to-goal values for new, unseen states. Since learning heuristics that match the true cost-to-goal may not be optimal, an alternative method involves learning to rank states by optimizing the Kendall rank correlation coefficient during training, as in Garrett et al. (2016). Nonetheless, a successful approach to planning has to solve several challenges: C1) State spaces are implicitly defined and mostly exponential in the size of a compact description. Therefore, random samples are hard to generate and may be infeasible, unreachable from an initial state, or unable to reach the goal. Samples are usually generated by expanding the state space through forward (progression) or backward (regression) search.
C2) Estimates of cost-to-goal are typically hard to obtain and finding the perfect cost-to-goal amounts to solving the task on the samples.
C3) Planning domains are very different. Logic-based heuristics, which leverage the logical relationships encoded in the description of the planning task, apply to any domain; learned heuristics, in contrast, face the problem of transferring to new domains, tasks, or state spaces.
C4) Planners, which are systems designed to find solutions for planning tasks, rely on evaluating many states per second, so computing the heuristic function must be fast, or the learned heuristic must be more informed. However, there is a trade-off between a more informed learned heuristic and the complexity of the model.
In this paper, we are interested in strategies for generating samples, which we introduce in Section 3, and in particular in what characterizes samples that lead to heuristic functions that perform well. To this end, Section 4 presents a systematic study of the contributions of each strategy when solving distinct initial states of a single state space. We learn state-space-specific heuristics using a feedforward neural network (FNN), focusing mainly on the quality of the learned heuristic and its influence on the number of expanded states and on coverage. We aim to better understand how to learn high-quality heuristics. In experiments on small state spaces in Section 5, we investigate the effect of different sampling strategies, the quality of the learned heuristic with an increasing number of samples, and the effect of which subset of states is part of the sample set on the learned heuristic; we also evaluate how the quality of the cost-to-goal estimates influences the effectiveness of the learned heuristic in guiding a search algorithm. Then, in Section 6, we compare the best proposed strategy with a baseline and with traditional logic-based heuristics on large state spaces. Furthermore, we qualitatively compare existing methods from the literature in Section 6.1. Finally, we summarize the main findings in Section 7.

Contributions
Through controlled experiments on planning tasks with small state spaces, we identify several techniques that improve the quality of the samples used for training. The contributions include:
• A sample generation algorithm that generates a more representative subset of the state space through a combination of breadth-first search (expanding states close to the goal) followed by random walks from the breadth-first search's leaves (Section 3.1.1).
• State space-based estimations to limit the sampling regression depth to avoid large cost-to-goal overestimates (Section 3.1.2).
• Two methods to improve cost-to-goal estimates based on detecting samples from the same or neighboring states (Section 3.2).
• A systematic study on sampling quality (Section 4).

Preliminaries
This section introduces the foundational concepts and notation necessary for the discussions that follow. We also provide an overview of the existing literature related to our work.

Background and Notation
A SAS+ planning task (Bäckström and Nebel 1995) is a tuple Π = ⟨V, O, s0, s*, cost⟩, where V is a set of variables, O a set of operators, s0 an initial state, s* the goal condition, and cost : O → R+ a function mapping operators to costs. A variable v ∈ V has a finite domain D(v), and a partial state s is a partial function s : V → ⋃v∈V D(v) with s(v) ∈ D(v) for every variable v on which s is defined; the set of these variables is dom(s) ⊆ V. We also write s|U for the restriction of s to a subset U ⊆ V, i.e. the state that agrees with s on U and is undefined otherwise. A (complete) state s is a partial state with dom(s) = V. The initial state s0 is a state, and the goal condition s* is a partial state. A partial state s can also be interpreted as a set of facts {(v, s(v)) | v ∈ dom(s)}. We use these interpretations interchangeably, and their use will be clear from the context. We further write S(s) for the set of all complete states that agree with partial state s, namely S(s) = {t | dom(t) = V, t|dom(s) = s}. Each operator o ∈ O has a precondition pre(o) and an effect eff(o), both partial states, and we write effr = dom(eff(o)) for the set of variables the effect modifies. For partial states p and q, the composition p • q agrees with p where p is defined and with q otherwise. An operator o is applicable in state s if s agrees with pre(o) on dom(pre(o)), and produces the successor succ(s, o) = eff(o) • s. In regression, operators are applied backwards to partial states. Relevance requires that at least one variable is defined both in the effect and in the partial state to be regressed, and consistency requires an agreement on such variables, as well as on preconditions that are maintained. An operator o then is backward applicable in partial state s if it is relevant and consistent with s, and leads to the predecessor pred(s, o) = pre(o) • (s|dom(s)\effr). Note that succ(pred(s, o), o) ⊇ s, but may differ from s. Similar to progression, a partial state s has predecessors pred(s) = {pred(s, o) | o ∈ O, o backward applicable to s}. A regression sequence (o1, . . ., ok) from a partial state s0 is valid if each oi is backward applicable to si−1 and produces si = pred(si−1, oi); every complete state in S(si) can then reach a state in S(s0). Since the goal is a partial state, a valid regression sequence ρ = (o1, . . ., ok) starting from the goal generates a partial state whose states can reach the goal in at most k steps and with a cost of at most Σi∈[k] cost(oi).
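The backward application above can be made concrete by representing partial states as dictionaries from variables to values. This is an illustrative sketch of the relevance and consistency checks and of pred(s, o), under the interpretation of the definitions given here, not the implementation used in the paper:

```python
def relevant(s, eff):
    # at least one variable is defined both in the effect and in s
    return any(v in s for v in eff)

def consistent(s, eff, pre):
    # the effect must agree with s on shared variables, and preconditions of
    # variables not touched by the effect must not contradict s
    return (all(s[v] == eff[v] for v in eff if v in s)
            and all(s[v] == pre[v] for v in pre if v in s and v not in eff))

def pred(s, pre, eff):
    """Regress partial state s through an operator with precondition pre and
    effect eff: pred(s, o) = pre(o) composed over s restricted to dom(s) \ effr.
    Returns None if the operator is not backward applicable."""
    if not (relevant(s, eff) and consistent(s, eff, pre)):
        return None
    r = {v: val for v, val in s.items() if v not in eff}  # drop effect variables
    r.update(pre)                                          # pre(o) takes precedence
    return r
```

For example, regressing the partial state {a: 1} through an operator with precondition {b: 0} and effect {a: 1} yields the predecessor {b: 0}.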
Given a planning task, the set of all states over V is the state space. The forward state space (FS) is the set of all states reachable from the initial state s0 by applying a sequence of operators. Similarly, the backward state space (BS) is the set of all partial states reachable from the goal s* by applying a sequence of backward applicable operators. Note that the backward state space may contain partial states that cannot be reached by a forward search (also called spurious states, e.g. by Alcazar et al. (2013)). This problem is addressed in Section 3.

Representing and Generating Samples for Learning Heuristic Functions
A heuristic function h : S → R+ ∪ {∞} maps each state in the state space S to a non-negative cost-to-goal estimate (h-value), or to ∞ when the heuristic concludes that the state cannot reach the goal. For example, the goal-count heuristic for a state s and goal condition s* computes the number of facts of the goal condition not satisfied by the state, i.e. |{v ∈ dom(s*) | s(v) ≠ s*(v)}|; the resulting count indicates how far s is from satisfying the goal condition.
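With states and goal conditions represented as variable-to-value dictionaries (a hypothetical representation for this sketch), the goal-count heuristic reduces to a one-line count:

```python
def goal_count(s, goal):
    """Number of goal facts not satisfied by state s.

    s and goal map variables to values; an undefined variable of s
    (missing key) never satisfies a goal fact."""
    return sum(1 for v, d in goal.items() if s.get(v) != d)
```

For instance, with goal {a: 1, c: 0}, the state {a: 1, b: 2, c: 3} satisfies one of the two goal facts, so its goal-count is 1.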
Many heuristics for classical planning are derived from a model of the task, such as the SAS+ model introduced in the previous section. An obvious alternative is to learn to map a state s to its heuristic value h(s). We focus on learning with NNs, although other supervised learning methods could be used. To learn a heuristic function, an NN is trained on pairs of states s and cost-to-goal estimates c. The learned heuristic functions are usually not admissible, so traditional optimality guarantees are lost.
A propositional representation of a state is more suitable for learning functions over states, as the variables in a planning task are categorical. To this end, consider a planning task Π = ⟨V, O, s0, s*, cost⟩, and let V = {v1, . . ., vn} and D(vi) = {di1, . . ., di,zi}, i ∈ [n], be some order of the variables and their domains. We represent a state s by a sequence of binary facts F(s) = (F1, . . ., Fn), where Fi contains one indicator [s(vi) = dij] for each value dij ∈ D(vi). Since each variable assumes at most one value, Σf∈Fi f ≤ 1, and Σf∈Fi f = 0 only if vi is undefined. More generally, for any set of propositions P we write mutex(P) if Σp∈P [p] ≤ 1 must be satisfied in all states of Π. Many planning systems can deduce mutexes from the description of the planning task Π (Helmert 2009); we will discuss and analyze their utility for sampling states later. The target output for training may be the cost-to-goal estimates directly or some encoding of them.
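This fact encoding can be sketched as follows, assuming a fixed order of variables and of their domain values (all names illustrative). A partial state simply leaves all indicators of its undefined variables at 0:

```python
def encode(s, domains):
    """One-hot encode a (possibly partial) state as a flat list of fact indicators.

    domains is an ordered list of (variable, list_of_values) pairs; s maps
    variables to values. All indicators of an undefined variable are 0."""
    bits = []
    for v, dom in domains:
        bits.extend(1 if s.get(v) == d else 0 for d in dom)
    return bits
```

With domains [(a, [0, 1]), (b, [0, 1, 2])], the partial state {a: 1} encodes to [0, 1, 0, 0, 0]: one indicator set for a, none for the undefined b.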
An important aspect of sample generation is the degree of dependency on the domain model or the planning task. We would like to learn in a black-box setting where we interact with the planning task only by functions that allow accessing the initial state s0, the goal condition s*, and the applicable operators in a partial state. In this setting, we do not have access to the logical description of operators, but only to functions that return the successors or predecessors of a partial state (Sturtevant and Helmert 2019).
For several reasons, the black-box setting is interesting for studying the problem of learning heuristic functions. For example, it can still be applied when the domain model is unavailable because a simulator or a learned model generates successors and predecessors. Also, a black-box setting can be adapted more easily than traditional logic-based heuristics to domain models with richer descriptions. Finally, we can address a question of Frances et al. (2017), who ask if a planner that has access only to the state structure and goals can approach the performance of planners that also have access to the logical description of the domain, in the specific case of learning heuristic functions.
Another aspect of sample generation is the cost of generating the samples, which depends on the number of samples and the cost of generating each one. In particular, we must decide how many samples are required since, generally, only a very small part of the state space can be sampled. Also, we have to decide which label to assign to each sample. The perfect heuristic h*, which maps each state s to the cost of an optimal s-plan, or ∞ if no such plan exists, would provide useful labels. However, even with samples labeled with the perfect heuristic h*, we expect the search performance to also depend on the region of the state space represented by the samples and on how well the NN generalizes. In general, labeling samples with h* is impractical. Therefore, we are mainly interested in good heuristic estimates that can be generated fast. We analyze the influence of sample size and quality experimentally later. The generation of samples to produce heuristic functions is not restricted to learning heuristics with NNs. For example, Haslum et al. (2007) generate samples by random walks from the initial state to evaluate which patterns should be used to produce a pattern database heuristic.
Additionally, the network architecture and sample generation depend on the range of tasks over which the learner intends to generalize. This may be a state space with a fixed goal condition, a planning domain, or an entire planning formalism. In the first case, the learned function has to generalize over the set of planning tasks defined by any initial state s0 in a fixed FS and a fixed goal s*. In the second case, the learned function has to generalize over all tasks of a domain. Finally, a learned heuristic that generalizes over a planning formalism is domain-independent.

Related Work
Two main approaches have been used to define the architecture of a neural network for learning heuristic functions. In the first, the architecture depends on the domain model (e.g. it uses information about preconditions and effects of operators); in the second, the architecture is independent of the model.
The usual setting in the first approach (Gehring et al. 2022; Shen et al. 2020; Ståhlberg et al. 2022; Toyer et al. 2018; Toyer et al. 2020) is to train an NN with samples of small tasks of a domain and evaluate it on larger tasks of the same domain. The trained networks can be general networks such as neural logic machines (Dong et al. 2019) and graph neural networks (Gori et al. 2005; Rivlin et al. 2020; Scarselli et al. 2008), or networks proposed explicitly in the context of planning such as Hypergraph Neural Networks (Shen et al. 2020) and Action Schema Networks (Toyer et al. 2018). These networks require the logical description of the domain and the task to be instantiated, and can typically generalize across different state spaces of a domain, and sometimes across domains, as is the case for Hypergraph Neural Networks. These approaches also help in understanding learned heuristics. For example, the main goal of Ståhlberg et al. (2022) is to understand the expressive power and limitations of learned heuristics. The main limitation of these approaches is the strong dependence on the domain model and task description.
The second approach (Ferber et al. 2020; Ferber et al. 2022; O'Toole et al. 2022; Yu et al. 2020) typically trains an FNN and evaluates the learned heuristic on a state space using tasks with the same goal and different initial states. These networks are trained with pairs of states and cost-to-goal estimates. Ferber et al. (2020) systematically studied hyperparameters of the FNN architecture and found that their influence is secondary. They found that for a fixed architecture, two aspects significantly influence how informed the heuristic is: the subset of selected samples and the size of the sample set.
Furthermore, Yu et al. (2020) and O'Toole et al. (2022) perform backward searches from the goal, the former with depth-first search and the latter with random walks. Both use the lowest depth at which a state was generated as its cost-to-goal estimate. Ferber et al. (2022) use a combination of backward and forward searches (Arfaee et al. 2011). First, they generate new initial states with backward random walks and then solve them with a GBFS guided by a learned heuristic. The plans found provide the samples, and each sample is a state in the plan with its distance to the goal as the cost-to-goal estimate. O'Toole et al. (2022) also proposed a method to generate samples without expansions. This method includes randomly generated states in the sample set with a cost-to-goal estimate equal to the maximum value in the sample set plus one. O'Toole et al. (2022) showed that this method substantially increases coverage. The methods of the second approach are highly independent of the domain model and planning task description and require low computational resources to generate samples and train the FNN. Also, despite having results competitive with logic-based heuristics, they are still unable to surpass the goal-count heuristic.
In particular, this work follows the second approach, since we aim to use minimal information from the task description. In Section 6.1, we compare the proposed approach to O'Toole et al. (2022) and Ferber et al. (2022), as they share the same NN architecture and dataset, with variations in the sampling and training procedures.

Sample Generation
We aim to investigate sample generation systematically. Learning a heuristic function requires a set of samples (s1, h1), . . ., (sN, hN), where each sample (si, hi), i ∈ [N], consists of a state si and a cost-to-goal estimate (or h-value) hi. We therefore focus on how two aspects of sample generation influence the performance of the learned heuristic in guiding a search algorithm: which states si of the state space are included in the sample set, and the quality of the estimates hi with respect to the h*-value.
We restrict this study to generalizing over planning tasks whose initial states are part of the same forward state space (FS) with a fixed goal condition. We study black-box approaches with access to predecessors and successors of partial states through a black-box function, to the goal condition, and to the domain of each variable. We also study approaches that have access to mutexes derived from a SAS+ model. We first address, in Section 3.1, the generation of states, and then, in Section 3.2, the estimation of the cost-to-goal. In both sections, we discuss approaches from the literature and introduce new methods. The new methods are a novel sampling strategy combining regression by breadth-first search with random walks, an adaptive regression limit, and two improvement methods for cost-to-goal estimates.

Generation of States
Unlike other domains of machine learning, where datasets are often collected in real-world experiments and need to be manually annotated and curated, sample generation here is an algorithmic problem, since we can generate the state space and compute cost-to-goal estimates. Approaches from the literature to generate the states include methods based on progression from one or more initial states, random sampling of the state space, or regression from the goal. In both progression and regression, one can apply different expansion strategies such as random walks, breadth- or depth-first searches, or combinations of more than one strategy such as bootstrapping (Arfaee et al. 2011). A problem in progression and random sampling is obtaining the cost-to-goal estimates. Without access to efficiently computable heuristic functions, or in black-box approaches, these values have to be obtained by search, which can have a cost exponential in the size of the task. To remain less dependent on models than logic-based methods and also more general, we focus on regression, for which an upper bound on the cost-to-goal is readily available, as discussed in Sections 3.1.1 and 3.1.2. Since regression leads to partial states, the problem of generating complete states is addressed in Section 3.1.3. Random sampling is also discussed in Section 3.1.4.

Sampling by Regression
To generate samples, we expand partial states through regression in the backward state space. Expansion methods include breadth-first search (BFS), depth-first search (DFS), and random walks (RW), as well as a combination of BFS with random walks. We discuss the completion of these partial states in Section 3.1.3. A regression rollout is a series of partial state expansions that stops when the last expanded partial state has no predecessors or is at the depth limit L. The sample generation process stops when the number of required samples N has been reached. Note that random walks can have multiple rollouts due to the depth limit L, while BFS and DFS only have one.
During expansion, we optionally use mutexes obtained from an analysis of the planning task, in this case as computed by Fast Downward (Helmert 2006), to discard partial states that cannot be completed into complete states without violating a mutex, as described in Section 3.1.3. We also discard repeated partial states within random walk rollouts, such that a single rollout never cycles, although the same partial state may be sampled several times in different rollouts. Starting from h(s*) = 0, a partial state s′ = pred(s, o) obtained by applying an operator o backwards to a partial state s has the cost-to-goal estimate h(s′) = h(s) + cost(o). For samples s that satisfy the goal condition (S(s) ⊆ S(s*)) we reset the cost-to-goal estimate to h(s) = 0. Partial states are added to the sample set when generated in random walks and when expanded in BFS and DFS. In all methods, operators backward applicable to a partial state are applied in random order.
Different expansion strategies generate sample sets with varying frequencies of optimal distances from the goal. In our experience, good coverage of states close to the goal, such as obtained by BFS or random walks, is useful, as is the greater depth obtained by DFS or random walks. However, random walks from the goal often sample states close to the goal multiple times, and DFS can lead to a concentration of samples distant from the goal. Based on these observations, we propose a novel combination of BFS and random walks called FSM that aims to have good coverage close to the goal and a diverse set of samples from the remaining state space. FSM has two phases. In the first phase, a fixed percentage pFSM of the N samples is generated by BFS. (The value of pFSM also serves as a constraint on computational resources: the BFS phase terminates when either the desired number of samples or the resource limit is reached, whichever comes first.) When the BFS expands a partial state from layer k that generates n partial states from layer k + 1, these partial states are sampled only if the current total number of samples plus n is within pFSM·N; otherwise, no partial states are sampled and the BFS expands another partial state. Let Q be the partial states of the sample set that have not been expanded. The second phase generates multiple random walk rollouts, each starting from a partial state in Q chosen randomly without replacement; Q is replenished only after all its partial states have been selected once. This is repeated until the sample set reaches N samples. During a random walk, partial states sampled in the BFS phase are not resampled.
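A simplified sketch of the two FSM phases follows, assuming unit operator costs, a black-box predecessors function, and uniform random choice of rollout start states (the actual implementation selects start states without replacement, allows resampling across rollouts, and tracks costs per operator):

```python
import random
from collections import deque

def fsm_sample(goal, predecessors, n, p_fsm, depth_limit, max_rollouts=1000):
    """Sketch of FSM: a BFS phase from the goal collects up to p_fsm * n samples,
    then random-walk rollouts continue from the BFS leaves until n samples."""
    samples = {goal: 0}             # partial state -> cost-to-goal estimate
    queue = deque([goal])
    budget = int(p_fsm * n)
    leaves = []
    # Phase 1: breadth-first search samples states close to the goal.
    while queue and len(samples) < budget:
        s = queue.popleft()
        new = [t for t in predecessors(s) if t not in samples]
        if len(samples) + len(new) > budget:
            leaves.append(s)        # expanding s would exceed the BFS budget
            continue
        for t in new:
            samples[t] = samples[s] + 1
            queue.append(t)
    leaves.extend(queue)            # states sampled but never expanded
    # Phase 2: random-walk rollouts starting from the BFS leaves.
    for _ in range(max_rollouts):
        if len(samples) >= n or not leaves:
            break
        s = random.choice(leaves)
        for _ in range(depth_limit):
            cands = [t for t in predecessors(s) if t not in samples]
            if not cands or len(samples) >= n:
                break
            t = random.choice(cands)
            samples[t] = samples[s] + 1
            s = t
    return samples
```

On a chain-shaped backward state space (state i has the single predecessor i + 1), half the samples come from BFS and the walk extends the rest to depth n − 1.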

Maximum Regression Limit
A simple strategy to limit the sample generation depth is to use a fixed maximum limit L. Yu et al. (2020) and O'Toole et al. (2022) used this strategy with respective limits of L = 200 and L = 500. However, since tasks have state spaces with different maximum distances to the goal, a fixed limit is undesirable for generating a representative part of the state space. If the maximum regression limit L overestimates the maximum distance-to-goal, the corresponding cost-to-goal estimates could be considerably larger than their respective h*-values because states could be generated through extremely non-optimal paths. If the maximum regression limit L underestimates the maximum distance-to-goal, some part of the state space may never be generated.
The ideal regression limit for a given algorithm should allow it to generate a representative part of the state space without reaching states by extremely non-optimal paths. Let d* denote the distance from the goal to its farthest state in the state space. For BFS, d* is an upper bound on the regression limit because at depth d* the whole state space has been explored; for DFS and random walks, higher limits are required since they do not follow optimal paths. Since d* is, in general, unknown and hard to compute, we propose two practical methods to define state-space-dependent maximum regression limits that aim to estimate d*.
The first method estimates d* by the number of facts F = |F(s0)|. Since each state is represented by a set of facts, if one assumes that all operators have no preconditions and change exactly one fact, one can reach any state in the state space from any other state by applying at most F operators. However, operators can modify more than one fact at a time. The second method refines this approximation by considering the average number of facts changed by the task operators: it uses the limit F̄ = F/ēff, where ēff = (Σo∈O |eff(o)|)/|O| is the mean number of effects per operator, i.e., the number of facts divided by the mean number of effects of the operators.
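Both limits can be read directly off the task description. In this sketch, domains is a list of domain-value lists and operators a list of effect dictionaries (illustrative representations):

```python
def regression_limits(domains, operators):
    """Two practical estimates of the maximum regression depth:
    F    = total number of facts (sum of domain sizes),
    Fbar = F divided by the mean number of effects per operator."""
    F = sum(len(dom) for dom in domains)
    mean_eff = sum(len(eff) for eff in operators) / len(operators)
    return F, F / mean_eff
```

For a task with two binary variables (F = 4) whose operators each change two facts, the refined limit is 4 / 2 = 2.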

Sample Completion
Regression sampling generates a set of partial states, while the NN is trained on, and receives as input during the search, complete states. Therefore, we evaluate three approaches to complete the undefined variables U = V \ dom(s) of a partial state s. The first approach is a random assignment: the method assigns to each undefined variable v ∈ U of a partial state a random value in D(v). This completion strategy can be applied in a black-box setting since it only uses information about the structure of a state.
The second method is mutex-based and aims to avoid states that are impossible to reach during the search. For each partial state, the method processes the variables in U in a random order and assigns to each a random value in D(v) that does not violate the mutexes. We repeat this method for each partial state at most 10K times. If after 10K attempts this method does not produce a complete state that respects the mutexes, we still include the partial state in the sample set. In this case, we keep all variables v ∈ U undefined, which sets all facts Fv for v ∈ U to false.¹
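A simplified sketch of mutex-based completion follows. It rejects whole completions that violate a mutex group, whereas the method described above checks mutexes variable by variable; the dictionary representations are illustrative:

```python
import random

def complete_with_mutexes(s, domains, mutexes, tries=10_000):
    """Assign random values to the undefined variables of partial state s,
    rejecting completions in which more than one fact of a mutex group holds;
    fall back to the partial state after `tries` failed attempts.

    domains maps variables to value lists; mutexes is a list of fact groups,
    each group a list of (variable, value) pairs."""
    undefined = [v for v in domains if v not in s]
    for _ in range(tries):
        t = dict(s)
        random.shuffle(undefined)
        for v in undefined:
            t[v] = random.choice(domains[v])
        violated = any(
            sum(1 for (var, val) in group if t.get(var) == val) > 1
            for group in mutexes)
        if not violated:
            return t
    return dict(s)  # keep the partial state; its undefined facts stay false
```

With the mutex group {(a, 1), (b, 1)} and the partial state {a: 1}, the only completion that survives the check is {a: 1, b: 0}.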
Since the set of mutexes we use is incomplete, the mutex-based method can still generate complete states that are impossible to reach during the forward search. Thus, we evaluate an ideal completion method to investigate the influence of generating only complete states that are reachable during the search. For each partial state s, this method samples a random complete state of S(s) in the forward state space. Only for this method, if during the sample generation a partial state s is generated such that no complete state of S(s) is in the forward state space, then s is considered an invalid predecessor. This ideal method can only be applied to small tasks where we can enumerate the complete FS of the initial state s0.

1. Empirically this case is negligible since it occurs in about 0.1% of the samples, in four of nine domains.
In all three methods, we do not check for repeated states after completion, i.e., two partial states can generate the same complete state.As a result, it is possible for two complete states to have different h-values, and we address this in Section 3.2.1.

Randomly Generated Samples
O'Toole et al. (2022) have shown that adding randomly generated samples to a set of samples generated by expansion improves the performance of the search algorithm guided by the learned heuristic. They propose to set the cost-to-goal estimate of randomly generated samples to L + 1 for a maximum regression limit of L. To study the effect of randomly generated samples, we include this method in our study. These samples are generated from fully undefined states using the mutex-based completion technique. If the generated state s is already part of the sample set, i.e., s = si for some i ∈ [N], it receives the cost-to-goal estimate hi; otherwise, it receives the estimate 1 + maxi∈[N] hi, which is larger than all sample estimates.
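This augmentation can be sketched as follows, with a hypothetical generate_random_state function standing in for mutex-based completion of a fully undefined state:

```python
def add_random_samples(samples, generate_random_state, k):
    """Append k randomly generated states to a sample set.

    samples maps (hashable) states to cost-to-goal estimates. A generated state
    already in the set keeps its estimate; a new state gets max estimate + 1."""
    known = dict(samples)
    h_max = max(known.values()) + 1
    out = list(samples.items())
    for _ in range(k):
        s = generate_random_state()
        out.append((s, known.get(s, h_max)))
    return out
```

For example, with samples {a: 0, b: 2}, a regenerated b keeps estimate 2, while a fresh state c receives 3.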

Improving Cost-to-Goal Estimates
We start by observing that cost-to-goal estimates never underestimate the true cost-to-goal h*, as follows.

Property 3.1. The cost-to-goal estimate h(s) of a sample s obtained by regression satisfies h(s) ≥ h*(s).

Proof. This follows because each estimate is witnessed by a plan. As observed in Section 2, a valid regression sequence ρ = (o1, . . ., ok) generates a sequence of partial states si = pred(si−1, oi), i ∈ [k], starting from the goal s0, such that every state in S(sk) can reach the goal in at most k steps and with cost at most Σi∈[k] cost(oi). Furthermore, if r = pred(t, o) and r′ ∈ S(r) is a complete state, we have succ(r′, o) ∈ S(t). Therefore, for any complete state s sampled from sk, the sequence (ok, ok−1, . . ., o1) is a valid plan. Thus, h(s) cannot be lower than the optimal cost h*(s).
In general, we expect better h-value estimates to lead to better learned heuristics, and in turn to fewer expanded states during a search, but as previously noted this is not necessarily the case (Holte 2010). Therefore, we apply two procedures that improve the cost-to-goal estimates while maintaining Property 3.1. The first, dubbed SAI, minimizes estimates over repeated samples, and the second, SUI, over successors of samples.

Improvement of Repeated Samples
In most state spaces, it is common for a state to be sampled in more than one random walk rollout, ending up with multiple duplicates with different estimates. Thus, for all sampled states s we update each cost-to-goal estimate to the best estimate h(s) = min{hi | s = si, i ∈ [N]}. Since different partial states can generate identical complete states, the improvement is applied to identical partial states as well as to identical complete states. We call this procedure sample improvement (SAI). Choosing the minimum h-value is clearly sound since, in all cases, we have valid plans from a regression that witness these distances; for the same reason, Property 3.1 still holds.
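SAI amounts to a single pass that collects the minimum estimate per state and rewrites all duplicates with it; a minimal sketch over (state, estimate) pairs:

```python
def sample_improvement(samples):
    """SAI: give every duplicate of a state the minimum cost-to-goal estimate.

    samples is a list of (hashable_state, estimate) pairs; duplicates are kept
    so the sample set size is unchanged, only the estimates improve."""
    best = {}
    for s, h in samples:
        if s not in best or h < best[s]:
            best[s] = h
    return [(s, best[s]) for s, _ in samples]
```

For example, [(x, 5), (y, 2), (x, 3)] becomes [(x, 3), (y, 2), (x, 3)]: both occurrences of x now carry the better estimate 3.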

Improvement over Successors
Besides sampling the same states, it is common to sample states that are neighbors in the state space, particularly for states close to the goal. Information from neighboring states can be used to improve the cost-to-goal estimates, as follows. First, for fast subset testing, we create an empty trie T, and for each partial state s_i, i ∈ [N], we insert s_i into T keyed by its facts (s(v))_{v ∈ V}. Next, we build a graph G = (V, A) with all sampled partial states V = {s_i | i ∈ [N]} and A = ∅. For every pair of partial states s, t ∈ V such that for some operator o ∈ O applicable to s we have S(succ(s, o)) ⊆ S(t), we add an arc (s, t) of weight w(s, t) = cost(o) to A. (Unlike in regression, if pre(o) mentions a variable undefined in s, then o is not applicable.) We use the trie T to search, for each successor, the partial states that are supersets. For partial states generated by regression, by construction, at least one such successor exists, except for the goal s*. Using graph G, we propagate the cost-to-goal estimate from each sampled state to its sampled predecessors: we iterate over each arc (s, t) ∈ A and update the cost-to-goal estimate h(s) = min(h(s), h(t) + w(s, t)). The process continues as long as there are updates. We call this procedure successor improvement (SUI). As for SAI, we only add valid transitions among partial states, so all distances are still witnessed by plans, and Property 3.1 is maintained.
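Once the arcs of G are known, the propagation step of SUI is a fixed-point relaxation. A minimal sketch, assuming the arc set has already been computed (in the paper it is found with a trie over partial states):

```python
# Sketch of successor improvement (SUI): relax estimates over arcs
# (s, t) of weight cost(o) until no update changes any estimate.

def successor_improvement(h, arcs):
    """h: dict state -> cost-to-goal estimate;
    arcs: list of (s, t, weight) with t a sampled successor of s.
    Applies h(s) = min(h(s), h(t) + w) until a fixed point."""
    changed = True
    while changed:
        changed = False
        for s, t, w in arcs:
            if h[t] + w < h[s]:
                h[s] = h[t] + w
                changed = True
    return h

h = {"s0": 0, "s1": 9, "s2": 9}
arcs = [("s2", "s1", 1), ("s1", "s0", 1)]
print(successor_improvement(h, arcs))  # {'s0': 0, 's1': 1, 's2': 2}
```

Since every arc corresponds to a real transition, each update is still witnessed by a plan, so the relaxation preserves Property 3.1.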

Training Set Generation Workflow
The process described in this paper follows the workflow illustrated in Figure 1. It begins with the generation of samples using a sampling algorithm such as FSM. The next steps improve the sample estimates using SAI, followed by SUI, both of which are applied on partial states. The second and third steps are optional and can be applied independently; if both are applied, they are applied in this order. The fourth step uses a state completion technique to produce complete states from partial states. Then, optionally, we add randomly generated samples to the sample set. Finally, if SAI was used previously, it is applied again to the complete states.
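The workflow can be summarized as a small pipeline. In this sketch the step functions are placeholders passed in as callables, standing in for the procedures named in the text:

```python
# Sketch of the training-set generation workflow (Figure 1). The step
# functions (sampling, completion, SAI, SUI, random-sample addition)
# are assumed to be provided; only the ordering is encoded here.

def build_training_set(task, sample, complete,
                       sai=None, sui=None, add_random=None):
    partial = sample(task)      # 1. sampling (e.g., FSM)
    if sai:                     # 2. optional SAI on partial states
        partial = sai(partial)
    if sui:                     # 3. optional SUI on partial states
        partial = sui(partial)
    full = complete(partial)    # 4. mutex-based state completion
    if add_random:              # 5. optional random samples
        full = add_random(full)
    if sai:                     # 6. SAI again, on complete states
        full = sai(full)
    return full
```

The sketch makes the ordering constraint explicit: SAI and SUI operate on partial states before completion, and SAI runs once more at the end only if it was used before.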

Experimental Setup
In this section, we present the experimental settings used throughout the experiments in small state spaces (Section 5), where we can enumerate the complete forward state space with associated perfect cost-to-goal estimates h*, and in large state spaces (Section 6), where we validate our findings in a practical setting with large planning tasks.
Common Settings. We use a residual neural network (He et al. 2016) to learn a heuristic for a state space. The network's input is a Boolean representation of the states, where a propositional fact is set to 1 if it is true in the state and 0 otherwise, as explained in Section 2.2, and its output is a single neuron with the predicted h-value. The network has two hidden layers followed by a residual block with two hidden layers. Each hidden layer has 250 neurons with ReLU activation, initialized as proposed by He et al. (2015). We select the domains and tasks from Ferber et al. (2022), namely: Blocks World, Depot, Grid, N-Puzzle, Pipesworld-NoTankage, Rovers, Scanalyzer, Storage, Transport, and VisitAll. All domains have unit costs except for Scanalyzer and Transport, for which we consider the variant with unit costs. All methods are implemented on the Neural Fast Downward planning system with PyTorch 1.9.0 (Ferber et al. 2020; Paszke et al. 2019). The source code, planning tasks, and experiments are available online. All experiments were run on a PC with an AMD Ryzen 9 3900X 12-core processor running at 4.2 GHz with 32 GB of main memory, using a single core per process, with processes distributed among 12 cores for small planning tasks and 10 cores for large planning tasks. We solve all tasks with GBFS guided by the learned heuristic ĥ. When multiple states have the same heuristic value, the state that was generated earlier (i.e., has a lower generation order) is expanded first (FIFO).
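The described architecture can be sketched in plain numpy. The layer width (250), ReLU activations, He-style initialization, and the residual block follow the text; the exact placement of the skip connection is an assumption of this sketch:

```python
import numpy as np

# Sketch of the described network: Boolean input, two hidden layers,
# a residual block with two hidden layers, and one output neuron
# predicting the h-value. Weights use He-style initialization
# (normal with std sqrt(2 / fan_in)); biases start at zero.

def relu(x):
    return np.maximum(x, 0.0)

def init(n_facts, width=250, seed=0):
    rng = np.random.default_rng(seed)
    shapes = [(n_facts, width), (width, width),   # hidden layers 1-2
              (width, width), (width, width),     # residual block
              (width, 1)]                         # output neuron
    params = []
    for n_in, n_out in shapes:
        params.append(rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)))
        params.append(np.zeros(n_out))
    return params

def forward(x, params):
    W1, b1, W2, b2, W3, b3, W4, b4, Wo, bo = params
    h = relu(x @ W1 + b1)
    h = relu(h @ W2 + b2)
    r = relu(h @ W3 + b3)
    r = relu(r @ W4 + b4)
    h = relu(h + r)             # skip connection of the residual block
    return h @ Wo + bo          # predicted h-value
```

The input x is a batch of Boolean fact vectors; in the actual experiments the network is trained with PyTorch rather than evaluated with hand-rolled numpy.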
Initialization. An NN may fail to train if, after initialization, it outputs zero for all training samples (called "born dead" by Lu et al. 2020). This condition arises when the weights and biases of the NN are initialized in a way that consistently maps the ReLU activation region to negative values, resulting in a zero gradient and no weight updates during training. Among all small state space experiments using neural networks, about 5 % of the networks were born dead; the Blocks World domain accounted for 84 % of the born-dead networks, followed by VisitAll (15 %) and Transport (about 1 %). In the large state space experiments, less than 1 % of the networks were born dead; Pipesworld-NoTankage accounted for 98 % of them, and Storage for 2 %. In the case of a born-dead NN, we reinitialize it with a different seed until the NN outputs a non-zero value for some sample. In our experiments, no NN required more than one reinitialization.
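The reinitialization loop can be sketched as follows; `init` and `predict` are abstract placeholders for network construction and evaluation, not functions from the actual code:

```python
# Sketch of the "born dead" check and reinitialization: try new seeds
# until the network outputs a non-zero value for some training sample.

def ensure_alive(init, predict, samples, max_tries=100):
    """init: seed -> model; predict: (model, sample) -> output.
    Returns the first (model, seed) that is not born dead."""
    for seed in range(max_tries):
        model = init(seed)
        if any(predict(model, s) != 0 for s in samples):
            return model, seed  # alive: some sample maps to non-zero
    raise RuntimeError("all initializations born dead")
```

In the reported experiments a single retry always sufficed, so the loop terminates quickly in practice.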
Baseline. We compare distinct configurations to a baseline ĥ_0 similar to previous approaches from the literature. For the baseline, the NN is trained on samples generated by random walks with a regression limit of 200 backward steps. In addition, mutexes are used during regression and for sample completion, but resetting the h-value to 0 in goal states and the improvement strategies SAI and SUI are turned off.

Experiments in Small State Spaces
In this section, we study the behavior of different sampling methods on small state spaces. In particular, we analyze the effects of the following factors on the number of expanded states during search: the algorithms used to generate samples (partial states and their respective cost-to-goal estimates) (Section 5.1), the partial state completion methods (Section 5.2), and the accuracy of cost-to-goal estimates and methods to improve them (Section 5.3). Finally, we compare the proposed learning-based approaches to commonly used heuristics during search (Section 5.4).
For each domain, we select the task with the largest state space that can be enumerated completely to obtain h*-values. We only select tasks with state spaces of 30 K states or more and fewer than 1 M states. Table 1 shows the tasks and their state space sizes. For the domains Grid, Rovers, Scanalyzer, and Transport, the best task found had fewer than 30 K states, and for VisitAll more than 1 M, so we manually modified these tasks. We could not find a non-trivial task within the limits for Depot, Pipesworld-NoTankage, and Storage, so they were excluded from the experiments. We generate the initial states for the small state spaces by performing a random walk of length 200 from the original initial state of a task. Rovers, Scanalyzer, and VisitAll had duplicated initial states or states that satisfied the goal condition; thus, we generate the initial states for these domains with random walks of length 25, 50, and 8, respectively.
In the small state space experiments, the coverage for all methods is 100 %. Therefore, we use the number of expanded states to evaluate the quality of the heuristic function. In these experiments, we conducted tests using a total of 25 combinations of sample seeds and network seeds (5 sample seeds × 5 network seeds); specifically, we trained five networks for each sample seed, ranging from sample seed 1 to sample seed 5, and repeated the process for each network seed. Additionally, we evaluated each network over 50 different initial states. Furthermore, if not stated otherwise, for the fixed parameters we use the baseline setting defined in Section 4 (regression limit L = 200, mutexes, no cost-to-goal improvement methods).

Sample Generation
Our first set of experiments aims to analyze the influence of the sample generation methods on the quality of the learned heuristics. Section 5.1.1 compares the different sampling algorithms, Section 5.1.2 compares the adaptive regression limits, and Section 5.1.3 investigates the effect of randomly generated samples.
In all the experiments below, when comparing several treatments, we apply the nonparametric Mack-Skillings test for blocked designs with an equal number of replications in each experimental cell, with a confidence level α = 0.01, to see if the treatments are significantly different. If not specified otherwise, the test is applied to each domain separately, with initial states as blocking factors. If treatments are significantly different, we apply a corresponding post-hoc test to find the different treatments, again with a family-wise error rate of α = 0.01. For details about the tests, see Hollander et al. (2014, ch. 7.9 and 7.10). In all following tables, the statistically best values are highlighted in bold.

Generation Algorithms
This experiment compares the four sample generation algorithms BFS, DFS, RW, and FSM and shows that a sample set including diverse regions of the forward state space (such as those generated by RW and FSM) yields fewer expansions in the geometric mean.
To control for the effect of the cost-to-goal estimates on the quality of the learned heuristic, we replace sample estimates with optimal values h* before training. The left side of Table 2 shows the geometric mean of the number of expanded states of a GBFS guided by the learned heuristics; the right side, in column FS, shows the mean h*-value of the whole forward state space. We now compare these results to those shown in Table 3, obtained on exactly the same states but using the cost-to-goal estimates obtained during sampling for training the NN. Note that the results for BFS with estimated costs-to-goal differ from those with exact values in Table 2. This happens because, during regression with BFS, the cost-to-goal estimates are only exact on partial states; when turning them into complete states, the estimates can be larger than h*. Thus, ĥ_bfs with the estimates obtained during regression is less informed.
We can see that the relative order of the methods concerning the number of expanded states remains the same, although all methods expand more states. Again, all algorithms have a different performance for α = 0.01. The increase in the number of expanded states is highest for ĥ_dfs, which expands a mean of 177.02 states using h-values instead of 40.12 when using h*, i.e., about four times more states. In contrast, the other methods expand about twice as many states, meaning that the estimates produced by DFS during regression are inferior to those produced by the other methods.
Comparing the mean h*-values from Table 2 to the mean h-values in Table 3, we can see that the methods generate samples that overestimate the h*-values, and that DFS overestimates much more than the others. In an additional experiment, we verified that not only the absolute predicted h-values but also their ranking agree well with the h*-values. The mean correlations between these values, over all domains, are 0.302, 0.237, 0.464, and 0.521 for methods BFS, DFS, RW, and FSM, respectively. Except for BFS, this agrees with the ranking of the methods according to the number of expanded states, and shows that a better sample distribution leads to predicted h-values that guide better.
Although BFS has an estimation quality close to the h*-values, and a better correlation, it expands more states than the other methods. These results suggest that sampling more states in localized regions of the state space is not sufficient to achieve good results during search with GBFS. Furthermore, ĥ_fsm expands fewer states than ĥ_rw and is the best in five of seven domains. Because ĥ_fsm had a lower increase in expansions compared to ĥ_rw, we focus on FSM in the remaining experiments.

Regression Limit
In this experiment, we analyze the influence of the regression limit on the number of expanded states with the sample generation strategy FSM. The experiments show that regression limits slightly larger than d*, the maximum distance to the goal in the state space, tend to yield fewer expansions in the geometric mean.
We compare the fixed regression limits L = 40 (a good fixed limit for the chosen small state spaces, found by observing d*) and L = 200 (used by Yu et al. (2020)) with two state-space-dependent strategies that aim to estimate d*: setting the regression limit to the number of facts F, or to the number of facts divided by the mean number of operator effects, denoted F̄. We also present results for the limits d*/2, d*, and 2d*. The limit F̄ slightly overestimates d* (except in Blocks), which is desirable since random walks do not follow the optimal paths. Finally, F̄ estimates d* better than F for all domains.
The right-hand side of Table 4 gives the number of expanded states for GBFS guided by ĥ trained on FSM samples with different regression limits L and no h-value improvements. We see that the limits d*, 2d*, L = 40, F, and F̄ perform better than d*/2 and the fixed limit 200. The results with d*/2 show that underestimating d* generally degrades performance; note also that F̄ underestimates d* in Blocks, and there the performance substantially degrades. The proposed strategies F and F̄ yield results similar to d*, 2d*, and L = 40. However, we do not have access to d* in general. Therefore, the proposed strategies provide good limits in practice and perform better than the previously used fixed limit of L = 200.

Random Samples
In this experiment, we evaluate the effect of adding randomly generated samples (as explained in Section 3.1.4) to a sample set generated using the sampling strategy FSM and regression limit F̄. Unlike the previous experiments, we additionally apply the improvement strategies SAI and SUI to the states sampled by regression (SAI and SUI are discussed in Section 5.3.1).
The following experiments show that randomly generated samples are beneficial up to about 60 % of the total sample set, provided their cost-to-goal estimates are larger than those of the existing samples. We generate samples S = {(s_1, h_1), ..., (s_N, h_N)} where 10 %, 20 %, ..., 100 % are random samples (to which SAI and SUI are not applied) and the rest is sampled with FSM and a regression limit F̄. Random samples get an h-value of H + 1, where H = max_{i ∈ [N]} h_i is the largest h-value in the samples S, except when they are already part of the samples, in which case they receive the corresponding estimate (this happens in fewer than 1 % of the states). Note that when using 100 % random samples, each sample has a cost-to-goal estimate equal to the regression limit L + 1 instead of H + 1, as there are no regression samples in S.
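The assignment of estimates to random samples can be sketched as follows; this is an illustrative sketch, with `regression_limit` corresponding to L:

```python
# Sketch of adding randomly generated samples with estimate H + 1,
# where H is the largest estimate among the regression samples. A
# random state that already occurs in the sample set keeps that
# sample's estimate; with no regression samples, L + 1 is used.

def add_random_samples(samples, random_states, regression_limit):
    if samples:
        default = max(h for _, h in samples) + 1   # H + 1
    else:
        default = regression_limit + 1             # 100 % random case
    known = {s: h for s, h in samples}
    return samples + [(s, known.get(s, default)) for s in random_states]

samples = [("s1", 3), ("s2", 7)]
print(add_random_samples(samples, ["s2", "r1"], 200))
# [('s1', 3), ('s2', 7), ('s2', 7), ('r1', 8)]
```

The key point is that the random samples only ever receive an estimate at least as large as every regression sample, which is what the experiments below identify as the relevant factor.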
Table 5 shows the performance with up to 70 % random samples. We have omitted 80 %, 90 %, and 100 %, since the numbers of expansions are higher (45.12, 56.85, and 12081.97, respectively). The number of expansions is considerably reduced when using random samples, with 20 % random samples performing slightly better than the other percentages.

Table 5: Expanded states of GBFS with a learned heuristic over samples generated by FSM with regression limit F̄, all cost-to-goal improvement strategies, and a varying percentage of randomly generated samples.
To better understand the effect of random samples, we performed four additional experiments with 20 % of random samples. The first focuses on the cost-to-goal estimates: we keep the samples but replace H + 1 by small values, namely either a random h-value from the sample set S, or a random value drawn from U[1, 5]. This leads to overall geometric means of 173.91 and 3231.64 expanded states, respectively. The second experiment forces the random samples to be part of the FS, which leads to a geometric mean of 31.11 expanded states. The third experiment does not apply mutexes and leads to a geometric mean of 30.27 expanded states. The fourth experiment generates random samples with true noise (each Boolean fact of a state has a 50 % chance of being 0 or 1), yielding a geometric mean of 45.01 expanded states. From these additional experiments, it is clear that the most relevant factor is a high h-value, although generating random state samples with true noise yields more expansions than not using random samples at all. The most probable explanation for the effect of random samples is that they enhance the likelihood of guiding the search toward samples where the network has learned good cost-to-goal estimates, but confirming this would require further experiments.

Partial State Completion
Now we focus on how sampled partial states are converted to complete states. In this experiment, we use FSM, regression limit F̄, and all samples have optimal cost-to-goal estimates h*. We compare three different sample completion strategies for a partial state s. All of them select a random state from the set of states S(s) represented by s, or from a restriction of it: either all states represented by s (no restrictions), only those states that satisfy the mutexes, or only states from the forward state space (a perfect baseline). Note that even without restrictions, state completion is subject to the mutexes implicit in the SAS+ encoding. Table 6 presents the expanded states and the probability of a completed state being in the FS for these approaches. We can see that applying mutexes leads to a moderate reduction in the number of expanded states and is very close to an ideal completion of the states. However, completing randomly also presents competitive results, except in N-Puzzle and Blocks World. In precisely these two domains, completing with mutexes is much better than completing randomly, since randomly generating a valid combination of "clear" facts in Blocks World, or assigning the empty tile to the correct position in N-Puzzle, is improbable. For Rovers, Scanalyzer, Transport, and VisitAll, there is no distinction between the "None" and "Mutex" approaches because either there are no mutexes or the mutexes rarely apply to the sampled states.
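A mutex-respecting completion can be sketched by rejection sampling. Variables, domains, and mutexes use a simplified SAS+-like encoding here; this illustrates the idea, not the planner's actual implementation:

```python
import random

# Sketch of mutex-respecting state completion: assign a random value
# to each undefined variable and reject completions that violate a
# mutex (a pair of facts that cannot co-occur in a reachable state).

def complete_state(partial, domains, mutexes=(), rng=None):
    """partial: dict var -> value (undefined variables absent);
    domains: dict var -> list of possible values;
    mutexes: pairs of (var, value) facts that cannot co-occur."""
    rng = rng or random.Random(0)
    while True:
        state = dict(partial)
        for var, dom in domains.items():
            if var not in state:
                state[var] = rng.choice(dom)
        if all(not (state.get(v1) == a1 and state.get(v2) == a2)
               for (v1, a1), (v2, a2) in mutexes):
            return state

s = complete_state({"x": 1}, {"x": [0, 1], "y": [0, 1]},
                   mutexes=[(("x", 1), ("y", 1))])
print(s)  # the mutex forces y = 0, so s == {"x": 1, "y": 0}
```

The "None" strategy in the text corresponds to calling this with an empty mutex set, and the perfect baseline to additionally rejecting states outside the forward state space.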

Estimates of Cost-to-Goal
This set of experiments aims to analyze how the techniques used to improve cost-to-goal estimates influence sample quality. To this end, in Section 5.3.1, we directly compare the improved estimates (with SAI and SUI) to the h*-values. Then, to see how well the learned heuristics generalize, in Section 5.3.2 we evaluate them over the complete forward state spaces from Table 1.

Quality of Estimates
First, we compare the quality of the cost-to-goal estimates to h*, with and without h-value improvement techniques and with distinct regression limits. We generate the samples with FSM and varying regression depth limits, and complete the partial states with mutexes. We demonstrate that using adaptive regression limits and cost-to-goal improvements (SAI and SUI) consistently brings the estimates closer to h*. The results in Table 7 show the mean absolute difference between the sample estimates and h*, so smaller means indicate better approximations. Note that we are not evaluating the output of an NN, but the cost-to-goal estimates of the sample sets. The improvement strategies SAI and SUI substantially reduce the estimates for all regression limiting methods. For regression limits L of 200, F, and F̄, using only SAI reduces the mean differences to 31.28, 13.95, and 4.92, respectively, and using only SUI reduces them to 11.10, 5.48, and 1.94, respectively. Thus, SUI has a stronger effect on improving the cost-to-goal estimates than SAI. Also, the adaptive regression limiting methods are better than the fixed default of 200, with F̄ having the best results. When comparing 200 to F̄ without h-value improvements, the difference of the cost-to-goal estimates to h* decreases by about 6 times in the geometric mean, with Blocks World improving the most, by more than 25 times. Finally, using both h-value improvements and the regression limit F̄ reduces the difference to h* from 33.45 to only 1.60.

Evaluation Over The Forward State Spaces
We now analyze the quality of the proposed learned heuristics and of the logic-based heuristics fast-forward h_FF (Hoffmann and Nebel 2001) and goal-count h_GC over all states from the forward state space FS of each task. In summary, h_FF better approximates the h* estimates over the whole forward state space, but the proposed approaches come close. Table 8 shows the results. Except for the baseline ĥ_0, the samples are generated with FSM limited by an L of 200, F, or F̄, and improved with SAI and SUI. The learned heuristic ĥ_F̄^20% is the same as ĥ_F̄, but with 20 % of the samples randomly generated. We see that ĥ_F̄ reduces the difference between the predicted and the true values by about 11 times when compared to ĥ_0, and compared to h_GC it has the smallest difference in all but two domains. Also, ĥ_F̄ presents a mean difference similar to h_FF. Note that due to the randomly generated samples in the sample set, ĥ_F̄^20% doubles the difference compared to ĥ_F̄. Blocks World is the only domain with a lower value, because around two-thirds of the FS states have an h*-value within two or less of the value assigned to random samples, which improves the average.
Comparing Tables 7 and 8, we observe that the relative order between 200, F, and F̄ is preserved. The mean difference of the samples' estimates to h* for F̄ is 1.60 in Table 7, and when the corresponding ĥ_F̄ is required to generalize over the entire FS, the mean difference is 3.57. We now compare the NN-based heuristics with logic-based heuristics and show that the learned heuristics are competitive with commonly used heuristics. The number of expanded states of GBFS guided by different heuristic functions is shown in Table 9. The NNs are trained with samples obtained with FSM, all cost-to-goal improvement strategies, and regression depth limited by F̄.
First, we see that ĥ_0 expands fewer states than h_GC in most domains except Scanalyzer and VisitAll, but it is far worse than h_FF except in Blocks World and VisitAll, where the learned heuristic has particularly good results. We also see that ĥ_F̄ expands fewer states than ĥ_0 in five domains. In turn, h_FF has better results than ĥ_F̄ in five domains. However, ĥ_F̄ surpasses h_FF if 20 % of the samples are random, or if we increase the budget of ĥ_F̄ to 5 % (instead of 1 %) of the number of states in the forward state space, as shown in Table 10. This table also indicates that after increasing the budget to 50 % of the number of states in the forward state space, the gains in quality of the learned heuristic are negligible.
Comparing Tables 8 and 9, we see that for the NN-based heuristics, the order of approaches in terms of the difference between h and h* remains consistent for ĥ_0 and ĥ_F̄, but not for ĥ_F̄^20%, which has a higher mean difference than ĥ_F̄ but presents the fewest state expansions, even compared to h_FF. From these results, we conclude that, for samples obtained by regression, better generalization over the forward state space translates into better search guidance. In contrast, random samples, despite worsening the mean difference over the FS, are added after the regression procedure and can still be helpful.

Experiments in Large State Spaces
The main goal of the following experiments is to verify the findings from the previous sections on large state spaces, so we compare different configurations of the improved methods with logic-based heuristics and a baseline. In summary, the following experiments confirm that the previous findings also apply to large state spaces. Then, in Section 6.1, we compare the proposed approach with other learning-based approaches from the literature. We use the benchmark defined by Ferber et al. (2020) and Ferber et al. (2022). Ferber et al. (2020) select, for each domain, IPC planning tasks with their original initial states that are solved within 1 and 900 seconds by GBFS with h_FF. Each domain has the following number of selected tasks: Blocks World, 5; Depot, 6; Grid, 2; N-Puzzle, 8; Pipesworld-NoTankage, 10; Rovers, 8; Scanalyzer, 6; Storage, 4; Transport, 8; VisitAll, 6. For each selected task, Ferber et al. (2020) generate new initial states with random walks from the original initial state. These new initial states with the original goal conditions of the selected tasks define the test planning tasks. For each selected task, Ferber et al. (2022) experiment with 50 moderate initial states, resulting in a benchmark with 3,150 planning tasks. We use the same benchmark as Ferber et al. (2022).
We generate samples within 1 hour and set 1 hour as the maximum training time. Each of the 50 initial states must be solved separately with GBFS within 5 minutes and 2 GB of RAM. We fix the number of samples at N = 16M/|V|, such that domains with more variables V receive proportionally fewer samples. This results in a mean of 500 MB of RAM during sampling and 2 GB during the h-value improvement. First, we reassess the previous results using the regression limits F and F̄ on large state spaces, since the previous experiments (Section 5.1) produced similar results for both. Table 11 shows the mean coverage and number of expanded states for the methods using F or F̄, with all h-value improvements, and with and without mutexes (the latter denoted by ĥ′).
When comparing the learned heuristic ĥ_F̄ to ĥ_F, we see a mean coverage improvement of about 9 %. All domains are improved or have very similar results, except VisitAll, where limiting the regression by F is better, as also observed in the small state space experiments. Without mutexes, the coverage improvements of ĥ′_F̄ over ĥ′_F are minor. The smaller number of expanded states in the approaches with F̄ indicates samples of higher quality. With or without mutexes, using F̄ has the highest positive effect in N-Puzzle, increasing its coverage by about four times. Also, we see that not using mutexes improves the results in Blocks World, Depot, and Transport, while having a minimal effect in Pipesworld-NoTankage, Rovers, Scanalyzer, and VisitAll. From these results, we conclude that F̄ performs better than F for large state spaces; therefore, the following experiments use F̄.
Next, we compare the logic-based heuristics h_FF and h_GC, the baseline ĥ_0, and the best approach ĥ_F̄^20%. The results are presented in Table 12. We see that h_FF dominates in most domains, achieving twice the mean coverage of the baseline ĥ_0. However, ĥ_F̄^20% has only 12 % less mean coverage than h_FF, improving over ĥ_0 by about 31 %, with competitive coverage in most domains. Note that ĥ_F̄^20% achieves better mean coverage than h_GC, with higher or equal coverage in 6 out of 10 domains. Also, in all domains except Pipesworld-NoTankage and Transport, the best learned heuristic expands fewer states on the same initial states than h_FF, indicating that the learned heuristic is more informed and that its inferior coverage is an effect of the slower expansion speed with the NN-based heuristics. Note, however, that since only tasks solved by all methods are considered, this evaluation is biased towards easier tasks. Furthermore, when h_FF is limited to the same number of expansions as the learned heuristic, it achieves a coverage of 81.20, meaning that it still excels in most states. Because the dataset was generated from tasks that are solvable by h_FF within 900 seconds, the results are also biased towards better performance of a search guided by h_FF. When comparing Tables 11 and 12, we notice that all NN-based heuristics have similarly poor results in Rovers, independent of configuration. Considering only the learned heuristics, using 20 % of random samples (ĥ_F̄^20%) instead of 0 % (ĥ_F̄) yields improvements of about 15 % in Storage, Transport, and VisitAll, and a significant improvement in Pipesworld-NoTankage, from approximately 18 % to 80 % coverage.

Comparison to Other Sample-Based Methods
In this section, we compare the learned heuristics to those of Ferber et al. (2022) and O'Toole et al. (2022). All methods share the same benchmark and NN architecture, use a Boolean representation for the samples, and use mutexes to complete unassigned state variables. They differ, however, in batch size, patience value, NN initialization functions, and the percentages of data split into training and validation sets. Furthermore, the machine configurations, libraries, and resource limits are different, and the existing results cannot be easily reproduced. Thus, we limit ourselves to a qualitative comparison. Even within these constraints, the following experiments indicate that the proposed approach achieves better coverage in less time.

Ferber et al. (2022) learn using bootstrapping (Arfaee et al. 2011), which iteratively improves the learned heuristic by training with samples whose cost-to-goal estimates have increasing distances from the goal. For a given state space, the bootstrap method generates new initial states with backward random walks from the goal. For each generated initial state, the method attempts to find a plan with GBFS guided by the current learned heuristic. The plans found, with their respective states and cost-to-goal estimates, are used as training samples. If the method finds a plan for more than 95 % of the initial states, it generates initial states with longer random walks. Ferber et al. (2022) perform sampling and training for up to 28 hours, with a search time limit of 10 hours.

O'Toole et al. (2022) generate samples using random walks from the goal. They perform 5 rollouts with a regression limit of L = 500 and use the current depth as the cost-to-goal estimate; they use the Tarski framework (Francés et al. 2018) to perform the regression procedure and develop a cost-to-goal improvement strategy equivalent to SAI, and 50 % of their samples are randomly generated as described in Section 3.1.4. O'Toole et al. (2022) spend an unreported amount of time to generate 100 K samples and an average of 23 minutes in training, with a search time limit of 6 minutes.
We now compare the coverage results between all methods over the same tasks. In the comparison, we consider only the best configurations of both Ferber et al. (2022) and O'Toole et al. (2022). We also use 100 K samples to make our results more comparable to O'Toole et al. (2022). In these experiments, our sampling and training procedures can take a combined time of up to 2 hours, and we use a search time limit of 5 minutes.
Table 13 shows the coverage results of our best method using 0 %, 20 %, and 50 % of random samples, together with the results as reported by Ferber et al. (2022) (ĥ_Boot) and O'Toole et al. (2022) (ĥ_N-RSL). Considering coverage, we notice that the heuristics ĥ_F̄^•% have results more similar to ĥ_N-RSL than to ĥ_Boot, and higher coverage, except in Grid, Rovers, Scanalyzer, and Storage. Heuristic ĥ_Boot is best in Grid, Rovers, and Storage; particularly in Storage, ĥ_Boot has 89 % coverage, while the coverage of the other approaches is close to 20 %. The heuristics ĥ_F̄^•% have the best coverage in six out of ten domains, namely Blocks World, Depot, N-Puzzle, Pipesworld-NoTankage, Transport, and VisitAll. All approaches have comparatively low coverage in Rovers, and ĥ_Boot and ĥ_N-RSL have low coverage in N-Puzzle compared to the proposed approaches.
Generating samples only through regression (i.e., without solving states) and training afterward is faster than bootstrapping, since in bootstrapping the states generated by the backward random walks must be solved with the currently learned heuristic to produce the plans used as samples. Both ĥ_N-RSL and the proposed methods suggest that sampling using regression with improvement strategies (such as SAI, SUI, and random samples) gives competitive results on most domains.
According to O'Toole et al. (2022), the proportion of random samples in the final sample set has the most positive effect on coverage, approximately doubling it when going from 0 % to 50 % of random samples (34.7 vs. 59.9, from their supplementary material). As seen in Table 13, we also notice an improvement from using random samples, although a smaller one. The experiments show that all domains either improve or have similar results, and the mean coverage improves by about 10 %. Also, except for Pipesworld-NoTankage, we saw no improvements above 5 % when using 50 % of random samples compared to 20 %. This means that, despite the highest coverage being achieved with 20 %, using 50 % of random samples still maintains comparable results.

Conclusion
We have presented a study of sample generation and improvement strategies for training NNs to learn heuristic functions for classical planning. We have reviewed existing approaches to sample generation and proposed a new strategy that uses regression with breadth-first search and random walks, as well as several techniques that improve cost-to-goal estimates.
A systematic analysis using complete information of small state spaces indicates that regression sampling with states covering diverse portions of the state space, and without repeated samples close to the goal, works best, and that enough samples of good quality translate into search performance comparable to that of logic-based heuristics. We also confirm the results of O'Toole et al. (2022) showing that including randomly generated samples, up to a limit, in the final sample set is beneficial. Among our contributions, the h-value improvement strategy SUI and the adaptive regression limit F have the most positive effects on sample quality. The former improves the accuracy of cost-to-goal estimates by analyzing the successors of a state, while the latter avoids overestimates by bounding the maximum regression depth.
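The successor-based improvement just mentioned can be sketched in a few lines. One plausible minimal reading, under our assumptions of unit operator costs and a table mapping sampled states to their estimates, is a fixpoint update that never lets a state's estimate exceed one plus the estimate of a sampled successor (the function name and data layout are ours, and the paper's SUI may differ in details):

```python
def successor_update(estimates, succ_pairs, unit_cost=1):
    """Tighten cost-to-goal estimates: if sampled state s has a sampled
    successor t, then h(s) should not exceed unit_cost + h(t).
    `estimates` maps state -> estimate; `succ_pairs` lists (s, t) with
    t a successor of s. Iterates to a fixpoint and updates in place."""
    changed = True
    while changed:
        changed = False
        for s, t in succ_pairs:
            if s in estimates and t in estimates:
                bound = unit_cost + estimates[t]
                if bound < estimates[s]:
                    estimates[s] = bound
                    changed = True
    return estimates
```

For example, a sample "a" with estimate 5 whose successor "b" has estimate 1 is corrected to 2, propagating the better-informed estimate backward through the sample set.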

Figure 1 :
Figure 1: Training set generation workflow; shaded steps are optional.
An operator o ∈ O is a pair of preconditions and effects (pre(o), eff(o)), both partial states. Operator o can be applied to state s if s ⊇ pre(o), and its application produces the successor state s′ = succ(s, o) := eff(o) • s, where s′ = t • s is defined by s′(v) = t(v) for all v ∈ dom(t), and s′(v) = s(v) otherwise. The set of all successor states of state s is succ(s) = {succ(s, o) | o ∈ O, o applicable to s}. A sequence of operators π = (o1, …, ok) with oi ∈ O is valid for initial state s0 if for all i ∈ [k] operator oi can be applied to si−1 and produces si = succ(si−1, oi). A plan is a valid sequence π for s0 such that sk ⊇ s*, and the cost of plan π is Σi∈[k] cost(oi). For regression, we consider only operators o ∈ O that are relevant for partial state s. The predecessor pred(s, o) of a partial state s under a relevant operator o is obtained by regression; it is again a partial state, and all complete states in S(pred(s, o)) can reach s by applying operator o.
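These definitions map directly to code. A minimal sketch with partial states as variable-assignment dictionaries, where `compose` implements the composition t • s and `applicable` implements s ⊇ pre(o) (both helper names are ours):

```python
def compose(t, s):
    """s' = t • s : take t's value where t is defined, s's value otherwise."""
    out = dict(s)
    out.update(t)
    return out

def applicable(s, pre):
    """s ⊇ pre : every variable assigned by pre has the same value in s."""
    return all(s.get(v) == val for v, val in pre.items())

def succ(s, op):
    """Successor succ(s, o) = eff(o) • s, or None if o is not applicable."""
    pre, eff = op
    if not applicable(s, pre):
        return None
    return compose(eff, s)

def successors(s, ops):
    """succ(s) = all successor states of s under the applicable operators."""
    return [t for t in (succ(s, op) for op in ops) if t is not None]
```

For instance, applying the operator ({"x": 0}, {"x": 1}) to the state {"x": 0, "y": 1} yields {"x": 1, "y": 1}: variables mentioned in the effect are overwritten and all others are carried over unchanged.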

Table 1 :
Size of the forward state spaces for the selected small tasks in seven domains. Tasks marked with * were modified. When we compare the number of expanded states, each cell in the tables represents the geometric mean of 25 networks over 50 test tasks, i.e., 1250 searches with GBFS. The training time is limited to 30 minutes. If not stated otherwise, methods BFS, DFS, RW and FSM use mutexes and the improvement strategies SAI and SUI, and the number of samples is equal to 1 % of the state space size. Under these conditions, more than 98.5 % of the NNs converge.

Table 2 :
Comparison of sampling strategies BFS, DFS, RW, and FSM on h*-values. Expanded states of GBFS with learned heuristics, and mean h*-values over the entire forward state space (FS) and over the generated sample sets.
space, and columns BFS, DFS, RW, and FSM show the mean h*-values of the sample sets representing 1 % of the forward state space FS. We see that heuristic ĥbfs leads to more expanded states than ĥdfs, which in turn expands about 40 % more states than ĥrw and ĥfsm, which perform similarly. All algorithms are significantly different for α = 0.01. Using ĥbfs is significantly worse and leads, in all domains, to the highest or close to the highest number of expansions. Heuristic ĥdfs has a high number of expansions in Blocks World, N-Puzzle and Transport. Looking at the mean h*-values, we see that samples generated by BFS have the lowest, and those generated by DFS the highest, geometric mean estimates in all domains. Although the mean h*-value of DFS is closest to that of the whole forward state space FS, the resulting heuristic expands more states than RW and FSM, which generate states closer to the goal. Therefore, multiple random walk rollouts seem better than a single rollout with BFS or DFS.

Table 3 :
Comparison of sampling strategies BFS, DFS, RW, and FSM on estimated h-values. Expanded states of GBFS with learned heuristics, mean h*-values over the entire forward state space (FS), and mean estimated h-values over the generated sample sets.
Comparing the samples generated by BFS and DFS with those generated by RW and FSM, we find that in both tables BFS generates samples closer to the goal and DFS samples more distant from it.

Table 4 :
State space information and expanded states of GBFS guided by ĥ trained on FSM samples with different regression limits L ∈ {d*/2, d*, 2d*} and no h-value improvements. The mean values for F, F, and d* are shown on the left-hand side of Table 4. Both F and F overestimate the mean largest distance d* (except for F in some domains).

Table 6 :
Expanded states of GBFS with ĥ trained with FSM, F, h* cost-to-goal estimates, and different state completion approaches; and the probability P(s ∈ FS) of a completed state being in the forward state space.

Table 7 :
Mean difference between the estimates of the samples in the sample set and h*.

Table 8 :
Mean difference of h FF, h GC, and ĥ to h*, evaluated over the FS.

Table 9 :
Expanded states of GBFS with different heuristic functions.

Table 10 :
Expanded states of GBFS with ĥF trained with a number of samples corresponding to some percentage of the number of states in the FS of each task.

Table 11 :
Mean coverages and expanded states of the learned heuristics with both regression limits, and of their respective variants not using mutexes (ĥ′). Expanded states consider only the initial states solved by all heuristics; Grid, N-Puzzle and Storage had no common solved initial state. The geometric mean is used for the overall mean of expanded states.

Table 12 :
Mean coverages and expanded states of the logic-based heuristics h FF and h GC compared to the baseline learned heuristic ĥ0 and the best learned heuristic ĥ20%F, obtained by training on samples generated with FSM, F, 20 % of random samples, and all h-value improvement strategies. Expanded states consider only the initial states solved by all heuristics; N-Puzzle and Storage had no common solved initial state. For expanded states we use the geometric mean.