Permutation group algorithms based on directed graphs

We introduce a new framework for solving an important class of computational problems involving finite permutation groups, which includes calculating set stabilisers, intersections of subgroups, and isomorphisms of combinatorial structures. Our techniques are inspired by and generalise 'partition backtrack', which is the current state-of-the-art algorithm introduced by Jeffrey Leon in 1991. But, instead of ordered partitions, we use labelled directed graphs to organise our backtrack search algorithms, which allows for a richer representation of many problems while often resulting in smaller search spaces. In this article we present the theory underpinning our framework, we describe our algorithms, and we show the results of some experiments. An implementation of our algorithms is available as free software in the GraphBacktracking package for GAP.


Introduction
Many of the most important problems in computational permutation group theory can be phrased as search problems, where we typically search for the intersection of subsets of a symmetric group. Standard problems that match this description are the computation of point stabilisers or set stabilisers, of transporter sets, or of normalisers or centralisers of subgroups. Searching for automorphisms and isomorphisms of a wide range of combinatorial structures can be done in this way, as can deciding whether or not combinatorial objects are in the same orbit under some group action, as is the case for element and subgroup conjugacy.
For many of these problems, the best known way to solve them is based on Leon's partition backtrack algorithm (see [12]), which often performs excellently, but has exponential worst-case complexity. Leon's algorithm conducts a backtrack search through the elements of the symmetric group, which it organises around a collection of ordered partitions (see [9] for more details). By encoding information about the given problem into those partitions, it is possible to cleverly prune (i.e., omit superfluous parts of) the search space. There have already been some extensions and improvements, inspired by graph-based ideas of McKay (see for example [14] and [8]). This leads us to believe that more powerful pruning, and ultimately better performance, could be obtained by using graphs directly, at the expense of the increased computation required at each node of the remaining search. In the present paper, we therefore place labelled directed graphs at the heart of backtrack search algorithms.
The basic idea is parallel to that described above for Leon's algorithm: When we search for an intersection of subsets of the symmetric group, we find suitable labelled digraph stacks such that the intersection can be viewed as the set of isomorphisms (induced from the symmetric group) from the first labelled digraph stack to the second (see Section 3). We encode information about the subsets into the labelled digraph stacks with refiners (see Section 4), and we just remark here that this generalises partition backtrack, because partition backtrack can be viewed as using vertex-labelled digraphs without arcs, where the vertex labels are in one-to-one correspondence with the cells of the ordered partitions. Approximators capture the fact that we typically overestimate the set of isomorphisms rather than calculate it exactly (see Section 5). The last ingredient comes into play when our approximation indicates that the refiners have encoded as much information as possible in a given moment and cannot restrict the search space further. Then we divide the search into smaller areas by defining new labelled digraph stacks, which is known as splitting (see Section 6). We discuss algorithms based on the method just described and prove their correctness in Sections 7 and 8, and in Section 9 we give details of various experiments that compare our algorithms with the current state-of-the-art techniques. We conclude, in Section 10, with brief comments on the results of this paper and the directions that they suggest for further investigation.
Finally, we would like to mention that we expect the reader to be familiar with basic concepts of permutation group theory and graph theory and that we only briefly explain our notation before moving to the main content. There is an extended version of this article [9], where we give proofs that are omitted here, along with much more detail and background information. We also include additional examples there and new material that is currently in preparation for a separate publication.
Acknowledgements. The first and third authors are supported by the Royal Society (grant codes RGF\EA\181005 and URF\R\180015). Special thanks go to Paula Hähndel and Ruth Hoffmann for frequent discussions on topics related to this work, and for suggestions on how to improve this paper. Finally, the authors thank the referees for reading our drafts carefully and for making many helpful suggestions.

Preliminaries
Throughout this paper, Ω denotes some finite totally-ordered set on which we define all of our groups, digraphs, and related objects. For example, every group in this paper is a finite permutation group on Ω, i.e., a subgroup of Sym(Ω), the symmetric group on Ω. We follow the standard group-theoretic notation and terminology from the literature, such as that used in [2], and write · for the composition of maps in Sym(Ω), or we omit a symbol for this binary operation altogether. We write N for the set {1, 2, 3, . . .} of all natural numbers, and N_0 := N ∪ {0}. If n ∈ N, then S_n := Sym({1, . . . , n}). For many types of objects that we define on Ω, for example lists, sets, or graphs, we give a way of applying elements of Sym(Ω) to them (denoted by exponentiation) in a structure-preserving way.
Let Γ and ∆ be digraphs (which is short for directed graphs) with vertex set Ω. Then we say that a permutation g ∈ Sym(Ω) induces an isomorphism from Γ to ∆ if and only if it defines a structure-preserving map from Γ to ∆, in which case we write Γ g = ∆.
We use the notation Iso(Γ, ∆) for the set of isomorphisms from Γ to ∆ that are induced by elements of Sym(Ω). If Iso(Γ, ∆) is non-empty, then we call Γ and ∆ isomorphic. Similarly, we write Aut(Γ) := Iso(Γ, Γ) for the subgroup of Sym(Ω) consisting of all elements that induce automorphisms of Γ.

Labelled digraphs
Our techniques for searching in Sym(Ω) are built around digraphs in which each vertex and arc (i.e. directed edge) is given a label from a set of labels L. We define a vertex- and arc-labelled digraph, or labelled digraph for short, to be a triple (Ω, A, Label), where (Ω, A) is a digraph and Label is a function from Ω ∪ A to L. More precisely, for any δ ∈ Ω and (α, β) ∈ A, the label of the vertex δ is Label(δ) ∈ L, and the label of the arc (α, β) is Label(α, β) ∈ L. We call such a function a labelling function. We point out that our notion of digraphs is very general and that loops are allowed. (For more details and some examples see [9].) We fix L as some non-empty set that contains every label that we require and serves as the codomain of every labelling function. For the equitable vertex labelling algorithm discussed in Section 5.2, we require some arbitrary but fixed total ordering on L.
For a labelled digraph Γ := (Ω, A, Label) and a permutation g ∈ Sym(Ω), we define Γ^g to be the labelled digraph (Ω, A^g, Label^g), where A^g := {(α^g, β^g) : (α, β) ∈ A}, and where Label^g(δ^g) := Label(δ) for all δ ∈ Ω and Label^g(α^g, β^g) := Label(α, β) for all (α, β) ∈ A. In other words, the arcs are mapped according to g, and the label of a vertex or arc in Γ^g is the label of its preimage in Γ. This gives rise to a group action of Sym(Ω) on LabelledDigraphs(Ω, L).
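The action just described can be illustrated with a small sketch, which is our own encoding and not from the paper: a labelled digraph is a pair of dicts (vertex labels, and arc labels keyed by ordered pairs), and a permutation is a dict of images.

```python
def apply_perm(digraph, g):
    """Return Gamma^g: arcs are mapped according to g, and every vertex
    and arc of Gamma^g carries the label of its preimage in Gamma."""
    vlab, alab = digraph
    return ({g[v]: lab for v, lab in vlab.items()},
            {(g[a], g[b]): lab for (a, b), lab in alab.items()})

def compose(g, h):
    """The product g*h, acting on points as alpha^(g*h) = (alpha^g)^h."""
    return {v: h[g[v]] for v in g}

gamma = ({1: 'white', 2: 'white', 3: 'black'}, {(1, 2): 'solid'})
g = {1: 2, 2: 3, 3: 1}
h = {1: 3, 2: 1, 3: 2}
# The group-action law (Gamma^g)^h = Gamma^(g*h):
assert apply_perm(apply_perm(gamma, g), h) == apply_perm(gamma, compose(g, h))
```

The assertion checks the defining law of a group action for one sample pair of permutations on Ω = {1, 2, 3}.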

Stacks of labelled digraphs
In this section we introduce labelled digraph stacks, with the rough idea in mind that we use them to approximate the set of permutations we search for. More precisely, we attempt to choose suitable labelled digraph stacks in such a way that the set of isomorphisms from one to the other approximates the set we search for as closely as possible.
A labelled digraph stack on Ω is a finite (possibly empty) list of labelled digraphs on Ω. We denote the collection of all labelled digraph stacks on Ω by DigraphStacks(Ω). The length of a labelled digraph stack S, written as |S|, is the number of entries that it contains. A labelled digraph stack of length 0 is called empty, and is denoted by EmptyStack(Ω). We use notation typical for lists, whereby if i ∈ {1, . . . , |S|}, then S[i] denotes the i th labelled digraph in the stack S.
We allow any labelled digraph stack on Ω to be appended onto the end of another. If S, T ∈ DigraphStacks(Ω) have lengths k and l, respectively, then we define S T to be the labelled digraph stack [S[1], . . . , S[k], T[1], . . . , T[l]] of length k + l.
We define an action of Sym(Ω) on DigraphStacks(Ω) via the action of Sym(Ω) on the set of all labelled digraphs on Ω. More specifically, for all S ∈ DigraphStacks(Ω) and g ∈ Sym(Ω), we define S g to be the labelled digraph stack of length |S| with S g [i] = S[i] g for all i ∈ {1, . . . , |S|}. An isomorphism from S to another labelled digraph stack T (induced by Sym(Ω)) is therefore a permutation g ∈ Sym(Ω) such that S g = T .
In particular, only digraph stacks of equal lengths can be isomorphic. We note that every permutation in Sym(Ω) induces an automorphism of EmptyStack(Ω). As we do with digraphs, we use the notation Iso(S, T ) for the set of isomorphisms from the stack S to the stack T induced by elements of Sym(Ω), and Aut(S) for the group of automorphisms of S induced by elements of Sym(Ω).
Remark 3.1. Let S, T, U, V ∈ DigraphStacks(Ω). It follows from the definitions that Aut(S U) ≤ Aut(S), and that if |S| = |T|, then Iso(S U, T V) ⊆ Iso(S, T). Roughly speaking, the automorphism group of a labelled digraph stack, and the set of isomorphisms from one labelled digraph stack to another one of equal length, become potentially smaller as new entries are added to the stacks.
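The containment in Remark 3.1 can be observed directly by brute force on a tiny example; the following sketch (with our own encoding of stacks as Python lists of label-dict pairs, which is an assumption of this illustration) computes Iso by testing all six permutations of Ω = {1, 2, 3}.

```python
from itertools import permutations

OMEGA = (1, 2, 3)

def apply_perm(digraph, g):
    vlab, alab = digraph
    return ({g[v]: l for v, l in vlab.items()},
            {(g[a], g[b]): l for (a, b), l in alab.items()})

def iso(S, T):
    """Brute-force Iso(S, T): all g in Sym(OMEGA) with S^g = T."""
    found = set()
    for img in permutations(OMEGA):
        g = dict(zip(OMEGA, img))
        if len(S) == len(T) and all(apply_perm(d, g) == e for d, e in zip(S, T)):
            found.add(img)
    return found

white = {v: 'white' for v in OMEGA}
edge = (white, {(1, 2): 'solid', (2, 1): 'solid'})   # arcs between 1 and 2
point = ({1: 'black', 2: 'white', 3: 'white'}, {})   # distinguishes vertex 1
S, T, U, V = [edge], [edge], [point], [point]
assert iso(S + U, T + V) <= iso(S, T)   # Iso(S U, T V) is contained in Iso(S, T)
assert iso(S + U, T + V) != iso(S, T)   # and here the containment is proper
```

Appending the entry that distinguishes vertex 1 cuts the isomorphism set from {id, (1 2)} down to {id}, exactly the shrinking described in the remark.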
Example 3.3. We define a labelled digraph stack S of length two. The first entry of S is defined via the orbital graph of K := ⟨(1 2)(3 4)(5 6), (2 4 6)⟩ with base-pair (1, 3). The automorphism group of this orbital graph (as always, induced by Sym(Ω)) is K itself; in other words, this orbital graph perfectly represents K via its automorphism group. In order to define S[1], we convert this orbital graph into a labelled digraph by assigning the label white to each vertex and assigning the label solid to each arc. This does not change the automorphism group of the digraph.
We define the second entry of S to be the labelled digraph on Ω without arcs, whose vertices 1 and 2 are labelled black, and whose remaining vertices are labelled white. The automorphism group of this labelled digraph is the setwise stabiliser of {1, 2} in Sym(Ω).
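The claims of this example can be checked by brute force over Sym({1, . . . , 6}). The following self-contained sketch (permutations encoded as tuples of images, an encoding of our own) computes K, the orbit of the base-pair (1, 3), the automorphism group of the resulting orbital graph, and its intersection with the setwise stabiliser of {1, 2}.

```python
from itertools import permutations

OMEGA = tuple(range(1, 7))
ID = OMEGA                      # the identity, as a tuple of images

def cycles_to_perm(cycles):
    g = {i: i for i in OMEGA}
    for cyc in cycles:
        for i, v in enumerate(cyc):
            g[v] = cyc[(i + 1) % len(cyc)]
    return tuple(g[i] for i in OMEGA)

def compose(g, h):              # apply g first, then h
    return tuple(h[g[i - 1] - 1] for i in OMEGA)

def closure(gens):              # naive closure under multiplication
    group, frontier = {ID}, set(gens)
    while frontier:
        group |= frontier
        frontier = {compose(x, s) for x in group for s in gens} - group
    return group

a = cycles_to_perm([(1, 2), (3, 4), (5, 6)])
c = cycles_to_perm([(2, 4, 6)])
K = closure([a, c])

# The orbital graph with base-pair (1, 3): the orbit of the arc (1, 3)
# under the componentwise action of K.
arcs = {(g[0], g[2]) for g in K}
aut = {g for g in permutations(OMEGA)
       if {(g[x - 1], g[y - 1]) for (x, y) in arcs} == arcs}
assert aut == K                 # the orbital graph perfectly represents K

# Intersecting with the setwise stabiliser of {1, 2} (the automorphism
# group of the second entry of S) leaves only <(1 2)(3 4)(5 6)>.
assert {g for g in aut if {g[0], g[1]} == {1, 2}} == {ID, a}
```

The final assertion matches the discussion of Aut(S) that follows Lemma 3.6.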

The squashed labelled digraph of a stack
For our exposition in Section 5, it is convenient to have a labelled digraph whose automorphism group is equal to that of a given labelled digraph stack. This is analogous to the final entry of an ordered partition stack [12,Section 4]. This special labelled digraph is a new object defined from the stack, but it is not part of the stack itself.
For this we fix a symbol # that is never to be used as the label of a vertex or an arc in any labelled digraph.
Note that the labelling function of a squashed labelled digraph of a stack can be used to reconstruct all information about the stack from which it was created. We also point out that Squash(S)^g = Squash(S^g) for all S ∈ DigraphStacks(Ω) and g ∈ Sym(Ω). Therefore the following lemma holds.
Lemma 3.6. Aut(Squash(S)) = Aut(S) for all S ∈ DigraphStacks(Ω).
Returning to the stack S of Example 3.3: there are ten arcs in Squash(S), which in total have five different labels. Since automorphisms of labelled digraphs preserve the sets of vertices with any particular label, it is clear that Aut(Squash(S)) ≤ ⟨(1 2), (3 4), (5 6)⟩. This containment is proper, since Aut(Squash(S)) = Aut(S) by Lemma 3.6, and Aut(S) = ⟨(1 2)(3 4)(5 6)⟩, as discussed in Example 3.3. Indeed, inspection of the arc labels in Squash(S) shows that any automorphism that interchanges the pair of points in any of {1, 2}, {3, 4}, or {5, 6} also interchanges the other pairs.
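The precise definition of Squash is given in [9] and omitted here; the following sketch uses one plausible construction (an assumption of this illustration, not the paper's definition): each vertex and arc of the squashed digraph is labelled by the tuple of its labels across the entries of the stack, with the reserved symbol # recording that an arc is absent from a particular entry. For this construction, Aut(Squash(S)) = Aut(S) can be checked by brute force on a small stack.

```python
from itertools import permutations

OMEGA = (1, 2, 3)
PAD = '#'   # the reserved symbol, never a genuine label

def squash(stack):
    """Combine a stack into one labelled digraph: tuple-of-labels vertex
    and arc labels, padded with PAD where an entry lacks the arc."""
    all_arcs = {arc for _, alab in stack for arc in alab}
    vlab = {v: tuple(entry[0][v] for entry in stack) for v in OMEGA}
    alab = {arc: tuple(entry[1].get(arc, PAD) for entry in stack)
            for arc in all_arcs}
    return (vlab, alab)

def apply_perm(digraph, g):
    vlab, alab = digraph
    return ({g[v]: l for v, l in vlab.items()},
            {(g[a], g[b]): l for (a, b), l in alab.items()})

def aut_digraph(d):
    return {img for img in permutations(OMEGA)
            if apply_perm(d, dict(zip(OMEGA, img))) == d}

def aut_stack(stack):
    return {img for img in permutations(OMEGA)
            if all(apply_perm(d, dict(zip(OMEGA, img))) == d for d in stack)}

white = {v: 'white' for v in OMEGA}
stack = [(white, {(1, 2): 'solid', (2, 1): 'solid'}), (white, {})]
# Aut(Squash(S)) = Aut(S), as in Lemma 3.6:
assert aut_digraph(squash(stack)) == aut_stack(stack)
```

Because the tuple labels determine each entry's labels, no information about the stack is lost, which is the point of the construction.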

Adding information to stacks with refiners
In this section we introduce and discuss refiners for labelled digraph stacks. We use refiners to encode information about a search problem into the stacks around which the search is organised, in order to prune the search space.
Leon introduces the concept of refiners in [12]. Although the word "refiner" might seem counter-intuitive from the definition, Remark 3.1 makes it clear that the stacks S f_L(S) and T f_R(T) do indeed give rise to a closer (or "finer") approximation of the set we search for.
While a refiner depends on a subset of Sym(Ω), we do not include this in our notation in order to make it less complicated. As a trivial example, every pair of functions from DigraphStacks(Ω) to itself is a refiner for the empty set, and it is in fact relevant for practical applications to be able to search for the empty set.
In the following lemma, we formulate additional equivalent definitions of refiners.
Lemma 4.2. Let U ⊆ Sym(Ω), and let f_L and f_R be functions from DigraphStacks(Ω) to itself. Then the following are equivalent:
(i) (f_L, f_R) is a refiner for U.
(ii) For all isomorphic S, T ∈ DigraphStacks(Ω): U ∩ Iso(S, T) ⊆ Iso(S f_L(S), T f_R(T)).
(iii) For all S, T ∈ DigraphStacks(Ω) and g ∈ U: if S^g = T, then f_L(S)^g = f_R(T).
Proof. (i) ⇒ (ii). Let S, T ∈ DigraphStacks(Ω) be isomorphic, and let g ∈ U ∩ Iso(S, T). Then g ∈ Iso(f_L(S), f_R(T)) by assumption, and since S and T have equal lengths, it follows that g ∈ Iso(S f_L(S), T f_R(T)).
(ii) ⇒ (iii). Let S, T ∈ DigraphStacks(Ω) and let g ∈ U. If S^g = T, then g ∈ Iso(S, T) by definition, and so g ∈ Iso(S f_L(S), T f_R(T)) by assumption. Since S and T have equal lengths, and S f_L(S) and T f_R(T) have equal lengths, it follows that so too do f_L(S) and f_R(T). Then f_L(S)^g = f_R(T), since for each i ∈ {1, . . . , |f_L(S)|} we have f_L(S)[i]^g = (S f_L(S))[|S| + i]^g = (T f_R(T))[|T| + i] = f_R(T)[i].
(iii) ⇒ (i). This implication is immediate.
Perhaps Lemma 4.2(ii) most clearly indicates the relevance of refiners to search. Suppose we wish to search for the intersection U 1 ∩· · ·∩U n of some subsets of Sym(Ω). Let i ∈ {1, . . . , n}, let (f L , f R ) be a refiner for U i , and let S and T be isomorphic labelled digraph stacks on Ω, such that Iso(S, T ) overestimates (i.e., contains) U 1 ∩ · · · ∩ U n .
We may use the refiner (f L , f R ) to refine the pair of stacks (S, T ): we apply the functions f L and f R , respectively, to the stacks S and T and obtain an extended pair of stacks (S f L (S), T f R (T )). We call this process refinement. Note that a refiner for U i need not consider the other sets in the intersection.
By Lemma 4.2(ii), the set of induced isomorphisms Iso(S f L (S), T f R (T )) contains the elements of U i that belonged to Iso(S, T ). Since U i contains U 1 ∩ · · · ∩ U n , it follows that Iso(S f L (S), T f R (T )) is a (possibly smaller) new overestimate for U 1 ∩ · · · ∩ U n which is contained in the previous overestimate by Remark 3.1.
The following straightforward example illustrates how the condition in Lemma 4.2(iii) is useful for showing that a pair of functions is a refiner for some set.
Example 4.3. Let Γ and ∆ be labelled digraphs on Ω, and define functions f_Γ and f_∆ from DigraphStacks(Ω) to itself by f_Γ(S) := [Γ] and f_∆(S) := [∆] for all S ∈ DigraphStacks(Ω). Since the permutations of Ω that induce isomorphisms from [Γ] to [∆] are exactly those that induce isomorphisms from Γ to ∆, it follows by Lemma 4.2(iii) that (f_Γ, f_∆) is a refiner for Iso(Γ, ∆). In particular, (f_Γ, f_Γ) is a refiner for Aut(Γ).
Example 4.3 illustrates the principle that the two functions of a refiner for a subgroup can be chosen to be equal. The next lemma states a slightly stronger observation. We omit the proofs of the next few results and refer to [9].
Lemma 4.5. Let G ≤ Sym(Ω), and let f be a function from DigraphStacks(Ω) to itself. Then (f, f) is a refiner for G if and only if f(S)^g = f(S^g) for all S ∈ DigraphStacks(Ω) and g ∈ G.
Next, we see that any refiner for a non-empty set can be derived from a function that satisfies the condition in Lemma 4.5.
Lemma 4.6. Let U be a non-empty subset of Sym(Ω), fix x ∈ U, and let f_L and f_R be functions from DigraphStacks(Ω) to itself. Then the following are equivalent:
(i) (f_L, f_R) is a refiner for U.
(ii) (f_L, f_L) is a refiner for Ux^{-1}, and f_R(S) = f_L(S^{x^{-1}})^x for all S ∈ DigraphStacks(Ω).
In particular, if U is a right coset of a subgroup G ≤ Sym(Ω), then (f_L, f_R) is a refiner for the coset U = Gx if and only if (f_L, f_L) is a refiner for the group G, and f_R(S) = f_L(S^{x^{-1}})^x for all S ∈ DigraphStacks(Ω).
For some pairs of functions, such as those in the upcoming Example 4.12, one may use the following results to show that a pair of functions gives a refiner.
Lemma 4.7. Let U ⊆ Sym(Ω), and let f_L and f_R be functions from DigraphStacks(Ω) to itself such that U ⊆ Iso(f_L(S), f_R(T)) for all isomorphic S, T ∈ DigraphStacks(Ω). Then (f_L, f_R) is a refiner for U.

Examples of refiners
Here we give several further examples of refiners for subgroups and their cosets, for typical group theoretic problems. We use refiners from Example 4.10 in our experiments of Section 9.1. The refiners given in Examples 4.8 and 4.10 have in common that they perfectly capture all the information about the set that we search for. This is also the case for the refiners given in Example 4.12 for sets of pairwise disjoint subsets of Ω, and for sets of subsets of Ω with pairwise distinct sizes.
As we saw in Lemmas 4.5 and 4.6, the crucial step when creating a refiner for a subgroup G ≤ Sym(Ω) or one of its cosets is to define a function f from DigraphStacks(Ω) to itself such that f(S^g) = f(S)^g for all S ∈ DigraphStacks(Ω) and g ∈ G.
Example 4.8 (Permutation centraliser and conjugacy). For every g ∈ Sym(Ω), let Γ_g be the labelled digraph on Ω whose set of arcs is {(α, β) ∈ Ω × Ω : α^g = β}, and in which all labels are defined to be 0. For every S ∈ DigraphStacks(Ω), define f_g(S) := [Γ_g]. Let g, h ∈ Sym(Ω) be arbitrary. Then (f_g, f_g) is a refiner for the centraliser of g in Sym(Ω), and (f_g, f_h) is a refiner for the set of conjugating elements {x ∈ Sym(Ω) : g^x = h}.
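The key fact behind this refiner, namely that the automorphisms of Γ_g are exactly the permutations centralising g, can be verified by brute force. The following sketch (our own encoding, with permutations as tuples of images) does so for g = (1 2)(3 4) in S_4.

```python
from itertools import permutations

OMEGA = (1, 2, 3, 4)

def perm_digraph(g):
    """The labelled digraph Gamma_g: an arc (alpha, alpha^g) for each
    point alpha, with every label equal to 0."""
    return ({v: 0 for v in OMEGA}, {(v, g[v - 1]): 0 for v in OMEGA})

def apply_perm(digraph, x):
    vlab, alab = digraph
    return ({x[v - 1]: l for v, l in vlab.items()},
            {(x[a - 1], x[b - 1]): l for (a, b), l in alab.items()})

g = (2, 1, 4, 3)                # the permutation (1 2)(3 4)
gamma = perm_digraph(g)

centraliser, auto = set(), set()
for img in permutations(OMEGA):
    if all(img[g[i - 1] - 1] == g[img[i - 1] - 1] for i in OMEGA):
        centraliser.add(img)    # x(g(i)) == g(x(i)) for all i
    if apply_perm(gamma, img) == gamma:
        auto.add(img)           # Gamma_g^x = Gamma_g
# The automorphisms of Gamma_g are exactly the centralising elements.
assert auto == centraliser and len(auto) == 8
```

The count 8 is the order of the centraliser of a double transposition in S_4 (a dihedral group).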
Let V and W be arbitrary lists of subsets of Ω with notation as explained above.
For unordered partitions, the following example is applicable.
Example 4.12 (Sets of subsets). Let V := {V_1, . . . , V_k} be a set of subsets of Ω. We define Γ_V to be the labelled digraph on Ω whose set of arcs is {(α, β) ∈ Ω × Ω : α ≠ β, and α, β ∈ V_i for some i ∈ {1, . . . , k}}, where the label of a vertex α is a list of length max{|V_i| : i ∈ {1, . . . , k}} with i-th entry |{j ∈ {1, . . . , k} : α ∈ V_j and |V_j| = i}|, and where the label of an arc is defined analogously, counting the sets in V of each size that contain both of its endpoints. The connected components of Γ_V with at least two vertices are the sets in V that are not singletons. The label of a vertex (or arc) encodes, for each size of subset, the number of subsets in V that have that size and contain that vertex (or arc). For every S ∈ DigraphStacks(Ω), we define f_V(S) to be the length-one stack [Γ_V]. In addition, for all g ∈ Sym(Ω), we define V^g := {V_1^g, . . . , V_k^g}. Let V and W be arbitrary sets of subsets of Ω. Since the labelled digraphs Γ_V and Γ_W were defined so that {g ∈ Sym(Ω) : V^g = W} ⊆ Iso([Γ_V], [Γ_W]), it follows by Lemma 4.7 that (f_V, f_W) is a refiner for the transporter set {g ∈ Sym(Ω) : V^g = W}. As a specific example, we consider the sets V := {{1}, {1, 2, 3}, {2, 4}} and W := {{5}, {2, 3, 4}, {3, 4}}. Both V and W contain three subsets, which have sizes 1, 2 and 3, so it seems superficially plausible that there exist elements of S_5 that map V to W.
In order to search for the set {g ∈ S_5 : V^g = W}, we can use the refiner (f_V, f_W) (with all the following notation as defined above), since Iso([Γ_V], [Γ_W]) contains this transporter set. The labelled digraphs Γ_V and Γ_W are depicted in Figure 4.13; although we do not give the correspondence explicitly, two vertices or two arcs have the same visual style if and only if they have the same label. There are many ways to show that Γ_V and Γ_W are non-isomorphic: for example, they have different numbers of arcs. Hence no element of S_5 maps V to W.
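The arc counts can be computed directly; the following sketch assumes (as sketched above) that the arcs of Γ_V are the ordered pairs of distinct points lying together in some member of V, and that vertex labels count containing members by size.

```python
def arcs_of(sets):
    """Ordered pairs of distinct points lying together in some member."""
    return {(a, b) for s in sets for a in s for b in s if a != b}

def vertex_label(sets, alpha, max_size):
    """i-th entry: the number of members of size i that contain alpha."""
    return tuple(sum(1 for s in sets if len(s) == i and alpha in s)
                 for i in range(1, max_size + 1))

V = [{1}, {1, 2, 3}, {2, 4}]
W = [{5}, {2, 3, 4}, {3, 4}]
# Gamma_V has 8 arcs while Gamma_W has only 6 (the pairs from {3, 4} are
# already contributed by {2, 3, 4}), so the digraphs are non-isomorphic.
assert len(arcs_of(V)) == 8 and len(arcs_of(W)) == 6
assert vertex_label(V, 2, 3) == (0, 1, 1)   # 2 lies in one 2-set, one 3-set
```

Distinct arc counts immediately rule out an isomorphism, matching the conclusion of the example.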

Approximating isomorphisms and fixed points of stacks
When searching with labelled digraph stacks, it might be too expensive to compute the set of isomorphisms exactly, which is why we choose only to approximate this set instead. Our methods always lead to an overestimate of the set, and worse approximations typically lead to larger searches. As a consequence, there is a compromise to be made between the accuracy of such overestimates and the amount of effort spent in computing them.
In Definition 5.1, we introduce the concept of an isomorphism approximator for pairs of labelled digraph stacks, which is a vital component of the algorithms in Section 7. Later, we define the approximators that we use in our experiments.
Definition 5.1. An isomorphism approximator for labelled digraph stacks is a function Approx that maps each pair of labelled digraph stacks on Ω to either the empty set ∅, or a right coset of a subgroup of Sym(Ω), such that the following statements hold for all S, T ∈ DigraphStacks(Ω) (we usually abbreviate Approx(S, S) as Approx(S)):
(i) Iso(S, T) ⊆ Approx(S, T);
(ii) if |S| ≠ |T|, then Approx(S, T) = ∅; and
(iii) if Approx(S, T) ≠ ∅, then Approx(S, T) is a right coset of Approx(S).
Let Approx be an isomorphism approximator and let S, T ∈ DigraphStacks(Ω). The set Iso(S, T) of isomorphisms induced by Sym(Ω) is either empty, or it is a right coset of the induced automorphism group Aut(S). Since id_Ω ∈ Iso(S, S) = Aut(S), it follows by definition that Approx(S) is a subgroup of Sym(Ω) that contains Aut(S).
The value of Approx(S, T ) should be interpreted as follows. By Definition 5.1(i), Approx(S, T ) gives a true overestimate for Iso(S, T ). Hence if Approx(S, T ) = ∅, then the approximator has correctly determined that S and T are non-isomorphic. By Definition 5.1(ii), an isomorphism approximator correctly determines that stacks of different lengths are non-isomorphic. Otherwise, the approximator returns a right coset in Sym(Ω) of its overestimate for Aut(S).
In Section 8.1, we need the ability to compute fixed points of the automorphism group (induced by Sym(Ω)) of any labelled digraph stack. A point ω ∈ Ω is a fixed point of a subgroup G ≤ Sym(Ω) if and only if ω g = ω for all g ∈ G. Computing fixed points is particularly useful when it comes to using orbits and orbital graphs in our search techniques. However, it can be computationally expensive to compute the fixed points exactly, and so we introduce the following definition.
Definition 5.2. A fixed-point approximator for labelled digraph stacks is a function Fixed that maps each labelled digraph stack on Ω to a finite list of points in Ω, such that for each S ∈ DigraphStacks(Ω): (i) each entry in Fixed(S) is a fixed point of Aut(S), and (ii) Fixed(S)^g = Fixed(S^g) for all g ∈ Sym(Ω).

Computing automorphisms and isomorphisms exactly
One way to approximate isomorphisms and fixed points of labelled digraph stacks is simply to compute them exactly. For example, we can convert labelled digraph stacks into their squashed labelled digraphs in order to take advantage of existing tools for computing with digraphs.
To describe this formally, we require the concept of a canoniser of labelled digraphs: a function Canon that maps each labelled digraph on Ω to a permutation in Sym(Ω), such that for all labelled digraphs Γ and ∆ on Ω, we have Γ^{Canon(Γ)} = ∆^{Canon(∆)} if and only if Γ and ∆ are isomorphic. We can use the software bliss [10] or nauty [13] to canonise labelled digraphs, after converting them into vertex-labelled digraphs in a way that preserves isomorphisms.
Definition 5.4 (Canonising and computing automorphisms exactly). Let Canon be a canoniser of labelled digraphs. We define functions Fixed_C and Approx_C as follows: for all S, T ∈ DigraphStacks(Ω), let g := Canon(Squash(S)) and h := Canon(Squash(T)), let L be the list [i ∈ Ω : i is fixed by Aut(Squash(S)^g)], ordered as in Ω, and define Fixed_C(S) := L^{g^{-1}}, together with Approx_C(S, T) := Aut(S) · gh^{-1} if Squash(S)^g = Squash(T)^h, and Approx_C(S, T) := ∅ otherwise.
Lemma 5.5. Let the functions Approx_C and Fixed_C be given as in Definition 5.4. Then Approx_C is an isomorphism approximator, and Fixed_C is a fixed-point approximator. Furthermore, for all S, T ∈ DigraphStacks(Ω), Approx_C(S, T) = Iso(S, T).
To see that Fixed_C is a fixed-point approximator, define L := [i ∈ Ω : i is fixed by Aut(Squash(S)^g)], ordered as usual in Ω. Since Aut(S)^g = Aut(Squash(S))^g = Aut(Squash(S)^g), it follows that L consists of fixed points of Aut(S)^g, and so Fixed_C(S) (which equals L^{g^{-1}}) consists of fixed points of Aut(S). Therefore Definition 5.2(i) holds. To show that Definition 5.2(ii) holds, let x ∈ Sym(Ω) be arbitrary and define r := Canon(Squash(S^x)). Since Squash(S) and Squash(S^x) are isomorphic, it follows that Squash(S)^g = Squash(S^x)^r. In particular, g^{-1}xr is an automorphism of Squash(S)^g, and so g^{-1}xr fixes L pointwise. Thus Fixed_C(S)^x = L^{g^{-1}x} = L^{r^{-1}} = Fixed_C(S^x), as required.

Approximations via equitable labelled digraphs
We can use vertex labels to overestimate the set of isomorphisms from one labelled digraph to another, because these isomorphisms map the set of vertices with any particular label onto a set of vertices with the same label. In this section we use the term vertex labelling as an abbreviation for the restriction of a digraph labelling function to the set of vertices.
In order to present the following approximator functions, we require the notion of an equitable labelled digraph.

Equitable labelled digraphs
Definition 5.6. A labelled digraph (Ω, A, Label) is equitable if and only if, for all vertices α, β ∈ Ω with Label(α) = Label(β), and for all labels y and z: |{(α, δ) ∈ A : Label(δ) = y and Label(α, δ) = z}| = |{(β, δ) ∈ A : Label(δ) = y and Label(β, δ) = z}|, and |{(δ, α) ∈ A : Label(δ) = y and Label(δ, α) = z}| = |{(δ, β) ∈ A : Label(δ) = y and Label(δ, β) = z}|. In other words, the labelled digraph is equitable if and only if, for all labels x, y, and z, every vertex with label x has some common number of out-neighbours with label y via arcs with label z, and similarly, every vertex with label x has some common number of in-neighbours with label y via arcs with label z.
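The two counting conditions can be packaged as one signature per vertex; the following small checker (an illustration of our own, not the paper's algorithm) tests whether a labelled digraph, given as a vertex-label dict and an arc-label dict, is equitable.

```python
def is_equitable(vlab, alab):
    """Vertices with equal labels must have equal multisets of
    (vertex label, arc label) pairs over their out-neighbours, and
    likewise over their in-neighbours."""
    def signature(v):
        out = sorted((vlab[b], l) for (a, b), l in alab.items() if a == v)
        inn = sorted((vlab[a], l) for (a, b), l in alab.items() if b == v)
        return (tuple(out), tuple(inn))
    seen = {}
    for v in vlab:
        sig = signature(v)
        if seen.setdefault(vlab[v], sig) != sig:
            return False
    return True

cycle = ({1: 'w', 2: 'w', 3: 'w'}, {(1, 2): 0, (2, 3): 0, (3, 1): 0})
path = ({1: 'w', 2: 'w', 3: 'w'}, {(1, 2): 0})
assert is_equitable(*cycle) and not is_equitable(*path)
```

A directed 3-cycle with constant labels is equitable, whereas a single arc on three same-labelled vertices is not, since its endpoints have different neighbourhood counts.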
Definition 5.6 extends the well-known concepts of equitable colourings [13, Section 3.1] and partitions [8,Definition 29] of vertex-labelled graphs and digraphs. The traditional notion requires that for all labels y and z, there are constants for the number of arcs from each vertex with label y to vertices with label z, and for the number of arcs in the other direction. Definition 5.6 additionally takes arc labels into account.
It is possible to define a procedure that takes a labelled digraph Γ := (Ω, A, Label) and 'refines' the vertex labelling to obtain a labelled digraph Γ′ := (Ω, A, Label′), which uses the fewest possible labels such that: arc labels are unchanged, vertices with the same label in Γ′ have the same label in Γ, and Γ′ is equitable. Moreover, this can be done consistently between labelled digraphs Γ and ∆, such that the overestimate of Iso(Γ, ∆) that can be obtained from the equitable labelled digraphs is contained in the overestimate from the original vertex labels. We present an example of such a procedure, phrased as an algorithm, as Algorithm 4.8 in [9]. Here we just describe the idea of such an "equitable vertex labelling algorithm" and abbreviate it as EVLA.
Given a labelled digraph, EVLA repeatedly tests whether each set of vertices with the same label satisfies the condition in Definition 5.6. For each such set, either the condition is satisfied, and a new label for this set is devised that encodes information about how the condition was satisfied, or the condition is not satisfied, and the vertices are given new labels accordingly, which encode information about why they were created.
We define Equitable to be a function defined by EVLA that maps each labelled digraph to a list of pairs of the form (x, W ), for some label x ∈ L and non-empty W ⊆ Ω, sorted by first component (recall that L is totally ordered). This list encodes that the vertices in W are those with label x in the equitable digraph given by EVLA.
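The core refinement idea behind EVLA can be sketched as an iterated signature computation. The following simplified illustration is not the algorithm of [9]; in particular, here the "meaningful new label" of a vertex is simply taken to be its full signature.

```python
def refine_vertex_labels(vlab, alab):
    """Repeatedly replace each vertex label by a signature built from its
    current label and its labelled in- and out-neighbourhoods, until the
    induced partition of the vertices stops getting finer."""
    labels = dict(vlab)
    while True:
        new = {}
        for v in labels:
            out = tuple(sorted((labels[b], l)
                               for (a, b), l in alab.items() if a == v))
            inn = tuple(sorted((labels[a], l)
                               for (a, b), l in alab.items() if b == v))
            new[v] = (labels[v], out, inn)
        if len(set(new.values())) == len(set(labels.values())):
            return new
        labels = new

# In the spirit of Example 5.8 below (here without loops): a digraph with
# all arcs between distinct vertices versus one with no arcs at all.
V3 = (1, 2, 3)
complete = ({v: 'x' for v in V3},
            {(a, b): 'x' for a in V3 for b in V3 if a != b})
empty = ({v: 'x' for v in V3}, {})
# Both are regular, yet the refined labels differ, witnessing
# non-isomorphism in the manner of Lemma 5.7(ii).
assert set(refine_vertex_labels(*complete).values()) != \
       set(refine_vertex_labels(*empty).values())
```

Because the new label contains the old one, the partition can only get finer, and it stabilises after at most |Ω| rounds.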
In the following lemma, we present some properties of Equitable. For a more detailed discussion we refer to [9], and we omit the proof of the lemma because it is mathematically straightforward.
Lemma 5.7. Let Γ and ∆ be labelled digraphs on Ω, and suppose that Equitable(Γ) = [(x_1, V_1), . . . , (x_k, V_k)] and Equitable(∆) = [(y_1, W_1), . . . , (y_l, W_l)]. Then the following hold:
(i) Equitable(Γ^g) = [(x_1, V_1^g), . . . , (x_k, V_k^g)] for all g ∈ Sym(Ω).
(ii) Iso(Γ, ∆) = ∅, if k ≠ l, or k = l and x_i ≠ y_i for some i; and Iso(Γ, ∆) ⊆ {g ∈ Sym(Ω) : [V_1^g, . . . , V_k^g] = [W_1, . . . , W_k]}, otherwise.
By choosing meaningful new vertex labels as described, we can distinguish more pairs of labelled digraphs as non-isomorphic via Lemma 5.7(ii) than we can by defining new labels arbitrarily. The next example illustrates this principle.
Example 5.8. Let Γ be the labelled digraph on Ω with all possible arcs, and let ∆ be the labelled digraph on Ω without arcs, where every vertex and arc in Γ and ∆ has the label x, for some arbitrary but fixed label x ∈ L.
Then we may use EVLA to deduce that Γ and ∆ are non-isomorphic, even though both are regular (i.e. every vertex has a common number of in-neighbours, and a common number of out-neighbours). The new labels encode that each vertex in Γ has |Ω| in- and out-neighbours, while each vertex in ∆ has zero in- and out-neighbours. Therefore, the labels given by Equitable(Γ) and Equitable(∆) are different, and so Γ and ∆ are non-isomorphic by Lemma 5.7(ii).
A note of warning: the choice of new labels plays a role! If new labels were instead, say, chosen to be incrementally increasing integers starting at 1, then we would have Equitable(Γ) = Equitable(∆), and the above deduction would not be possible.
In the previous example it is obvious to us that the digraphs are non-isomorphic, but for many more complicated examples, Lemma 5.7(ii) can still be used to detect less obvious non-isomorphism.

Strong and weak approximations via equitable labelled digraphs
We describe two strategies for using EVLA (which operates on labelled digraphs) to approximate isomorphisms and fixed points of stacks of labelled digraphs. In the first approach, we first combine the entries of a stack into a single digraph, namely the squashed labelled digraph of the stack, and then apply EVLA; in the other, we first apply EVLA to each of the entries in the stack, and then combine the information that we obtain. We call these approaches strong and weak equitable approximation, respectively, and we give an example of their use in Section 5.3.
Definition 5.9 (Strong equitable approximation). We define functions Approx_S and Fixed_S as follows. Let S, T ∈ DigraphStacks(Ω), and suppose that Equitable(Squash(S)) = [(x_1, V_1), . . . , (x_k, V_k)] and Equitable(Squash(T)) = [(y_1, W_1), . . . , (y_l, W_l)]. If |S| ≠ |T|, or if k ≠ l, or if x_i ≠ y_i or |V_i| ≠ |W_i| for some i ∈ {1, . . . , k}, then we define Approx_S(S, T) := ∅. Otherwise, we let G denote the stabiliser of [V_1, . . . , V_k] in Sym(Ω) and define Approx_S(S, T) := Gh, where h ∈ Sym(Ω) is any permutation such that V_i^h = W_i for all i ∈ {1, . . . , k}. This is well-defined because, for all g, h ∈ Sym(Ω), we have that V_i^g = V_i^h for all i if and only if g and h represent the same right coset of G in Sym(Ω). Finally, we define Fixed_S(S) := [v_{i_1}, . . . , v_{i_m}], where i_1 < · · · < i_m and the sets V_{i_j} = {v_{i_j}} for each j ∈ {1, . . . , m} are exactly the singletons amongst V_1, . . . , V_k.
Definition 5.10 (Weak equitable approximation). We define functions Approx_W and Fixed_W as follows. Let S, T ∈ DigraphStacks(Ω). For each i ∈ {1, . . . , |S|} and j ∈ {1, . . . , |T|}, there exist k_i, l_j ∈ N_0, labels x_{i,1}, . . . , x_{i,k_i} and y_{j,1}, . . . , y_{j,l_j}, and partitions {V_{i,1}, . . . , V_{i,k_i}} and {W_{j,1}, . . . , W_{j,l_j}} of Ω such that Equitable(S[i]) = [(x_{i,1}, V_{i,1}), . . . , (x_{i,k_i}, V_{i,k_i})] and Equitable(T[j]) = [(y_{j,1}, W_{j,1}), . . . , (y_{j,l_j}, W_{j,l_j})]. If |S| ≠ |T|, or if k_i ≠ l_i or x_{i,j} ≠ y_{i,j} for some i and j, then we define Approx_W(S, T) := ∅. Suppose otherwise. We define functions f and g that map vertices to lists of length |S| with entries in N. For each α ∈ Ω, the list entry f(α)[i] is the unique j ∈ {1, . . . , k_i} such that α ∈ V_{i,j}, and g(α)[i] is the unique j ∈ {1, . . . , k_i} such that α ∈ W_{i,j}. Thus f and g encode the 'equitable' label of a vertex in each entry of S and T, respectively. Then we partition Ω into subsets A_1, . . . , A_m according to, and ordered lexicographically by, f-value, and similarly we partition Ω into subsets B_1, . . . , B_n via g.
Given all of this, we let G denote the stabiliser of [A_1, . . . , A_m] in Sym(Ω). If m ≠ n, or if there is no h ∈ Sym(Ω) with A_i^h = B_i for all i ∈ {1, . . . , m}, then we define Approx_W(S, T) := ∅; otherwise we define Approx_W(S, T) := Gh, where h ∈ Sym(Ω) is any such permutation. We also define Fixed_W(S) := [a_{i_1}, . . . , a_{i_t}], where i_1 < · · · < i_t and the sets A_{i_j} = {a_{i_j}} for each j ∈ {1, . . . , t} are exactly the singletons amongst A_1, . . . , A_m (so that a_{i_j} = min(A_{i_j}), where min(A_i) is the minimum vertex in A_i with respect to the ordering of Ω).
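The combination step of weak equitable approximation amounts to forming the common refinement of the per-entry vertex partitions via the lists f(α). The following sketch (our own encoding: each per-entry partition is a dict from vertex to class index) illustrates why combining entries can separate vertices that no single entry separates.

```python
def combine_partitions(partitions, omega):
    """Group the vertices of omega by their lists of per-entry class
    indices (the lists f(alpha)), ordering the resulting classes
    lexicographically by those lists."""
    f = {v: tuple(p[v] for p in partitions) for v in omega}
    classes = {}
    for v in omega:
        classes.setdefault(f[v], []).append(v)
    return [sorted(classes[key]) for key in sorted(classes)]

# Two entries whose partitions each leave pairs of vertices together,
# but whose combination pins down every vertex.
parts = [{1: 1, 2: 1, 3: 2, 4: 2},   # entry 1: {1,2} / {3,4}
         {1: 1, 2: 2, 3: 1, 4: 2}]   # entry 2: {1,3} / {2,4}
assert combine_partitions(parts, (1, 2, 3, 4)) == [[1], [2], [3], [4]]
```

Here every combined class is a singleton, so in this situation Fixed_W would report all four points as fixed.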
The following lemma holds by Lemma 5.7.
Lemma 5.11. The functions Approx_S and Approx_W from Definitions 5.9 and 5.10 are isomorphism approximators, and the functions Fixed_S and Fixed_W are fixed-point approximators.

Comparing approximators
In this section, we give a simple example to compare the isomorphism approximators from Sections 5.1 and 5.2. We present the example in more detail in [9].
Weak equitable approximations should be the least accurate but cheapest to compute, whereas computing isomorphisms exactly should be the most expensive. Weak equitable approximation distinguishes vertices by distinguishing them in the individual entries of the stacks. Strong equitable approximation sometimes gives better results than this, because it considers the entire stacks simultaneously.
Example 5.12. Let Γ_1, Γ_2, ∆_1, and ∆_2 be labelled digraphs on {1, . . . , 6} whose arcs and arc labels are defined as in Figure 5.13, and where each vertex is labelled white. We approximate the isomorphisms from the stack S := [Γ_1, Γ_2] to the stack T := [∆_1, ∆_2].
Weak equitable approximation. The labelled digraphs Γ 1 , Γ 2 , ∆ 1 , and ∆ 2 are equitable, and their vertices are all white. Therefore EVLA makes no progress, and so weak equitable approximation gives the worst possible result Approx W (S, T ) = S 6 .
Strong equitable approximation. To see that the labelled digraphs Squash(S) and Squash(T ) are not equitable, note for example that there are vertices in each of these digraphs with different numbers of out-neighbours, yet all vertices have the same label. There exist labels x and y such that the EVLA assigns

Distributing stack isomorphisms across new stacks
In a backtrack search, when it is not clear how to prune a search space further, we divide the search across a number of smaller areas that can be searched more easily. We call this process splitting, and in this section we define the notion of a splitter for labelled digraph stacks. A splitter takes a pair of stacks that represents a (potentially large) search space, and defines new stacks that divide the space in a sensible way.

For this paragraph and the following remark, we keep the notation from Definition 6.1, with |Approx(S, T )| ≥ 2. The search space corresponding to the pair (S, T ) is Approx(S, T ). By Definition 6.1(i), the splitter produces the search spaces Approx(S S 1 , T T i ) for each i ∈ {1, . . . , m}; note that the left-hand stack S S 1 does not vary here. Each of these new search spaces is smaller than Approx(S, T ), by Definition 6.1(ii); this is required to show that our algorithms terminate. Definition 6.1(iii) means that the first stack given by a splitter is independent of the given right-hand stack, which is required by the technique in Section 8.

Remark 6.2. If S = T , then it follows by Definition 6.1(i) that S 1 = T i for some i. Thus we may assume without loss of generality that S 1 = T 1 in this case.
The following lemma shows a way of giving a splitter by specifying its behaviour on the left stack that it is given. The proof is straightforward and therefore omitted; see [9].
In the following definition, we present a splitter that can be obtained with Lemma 6.3. We use a version of this splitter for our experiments in Section 9. Note that (Γ α ) g = Γ α g for all g ∈ Sym(Ω).

Definition 6.4. Let Approx be any isomorphism approximator such that Approx(U [Γ α ]) ≤ Approx(U ) ∩ {g ∈ Sym(Ω) : α g = α} for all α ∈ Ω and U ∈ DigraphStacks(Ω). We define a function σ from DigraphStacks(Ω) to itself by for all S ∈ DigraphStacks(Ω). Finally, define Split σ as in Lemma 6.3, for the isomorphism approximator Approx and the function σ.
The following corollary holds by Lemma 6.3, bearing in mind that the function σ has the required property by our choice of isomorphism approximator in Definition 6.4.

Corollary 6.5. The function Split σ from Definition 6.4 is a splitter for any isomorphism approximator that satisfies the condition in Definition 6.4.
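A splitter of this kind branches over the candidate images of a single vertex: it appends [Γ α ] to the left-hand stack, and appends [Γ β ] to a copy of the right-hand stack for each candidate image β of α. The following toy Python sketch is our own modelling, not the paper's data structures: a stack is a list of entries, the length-one stack [Γ α ] is represented as ('point', α), and the approximation is an explicit set of permutation tuples.

```python
def split(S, T, approx):
    """Branch over candidate images of one vertex, as a splitter does.

    S, T    -- stacks, modelled as lists of ('point', vertex) entries
    approx  -- a finite over-approximation of Iso(S, T), as a set of
               permutations given as tuples
    Returns the extended left-hand stack and one extended right-hand stack
    per candidate image; the corresponding child spaces partition approx.
    """
    n = len(next(iter(approx)))
    # Choose the branch vertex from the left-hand stack alone, in the spirit
    # of Definition 6.1(iii): the least vertex not yet pinned in S.
    pinned = {alpha for kind, alpha in S if kind == 'point'}
    alpha = min(a for a in range(n) if a not in pinned)
    images = sorted({g[alpha] for g in approx})
    left = S + [('point', alpha)]
    rights = [T + [('point', beta)] for beta in images]
    return left, rights
```

Each child space {g ∈ approx : α g = β} is strictly smaller than approx whenever the image of α is genuinely undetermined, matching Definition 6.1(ii).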

The search algorithm
Let U 1 , . . . , U k ⊆ Sym(Ω). In this section, we present our main algorithms, which combine the tools of Sections 3-6 to compute the intersection U 1 ∩· · ·∩U k . In Section 7.1, we show how to perform a backtrack search for one or all of the elements of U 1 ∩ · · · ∩ U k . In Section 7.2, when the result is known to form a group, we show how to search for a base and strong generating set instead (see [2, p. 101] for a definition). We explain these algorithms in further detail in [9]. A version of our algorithms is implemented in the GraphBacktracking package [6] for GAP [3].

Algorithm 7.1 A recursive algorithm using labelled digraph stacks to search in Sym(Ω).
Input: a sequence of subsets U 1 , . . . , U k ⊆ Sym(Ω); a sequence (f L,1 , f R,1 ), . . . , (f L,m , f R,m ), where each pair is a refiner for some U j ; an isomorphism approximator Approx and a splitter Split for Approx.
Output: all elements of the intersection U 1 ∩ · · · ∩ U k , which we refer to as solutions.
The basic method
We begin with a high-level description of Algorithm 7.1, which comprises the Search and Refine procedures. We say that an algorithm backtracks when it finishes executing a recursive call to a procedure, and continues executing from where the call was initiated.
The algorithm begins with a call to the Search procedure on line 21. This procedure, when given labelled digraph stacks S and T , finds those elements of U 1 ∩ · · · ∩ U k that induce isomorphisms from S to T (Lemma 7.4). It does so by searching in Approx(S, T ) rather than in Iso(S, T ), because we do not necessarily wish to compute Iso(S, T ) exactly.
The Search procedure first calls Refine. This applies the refiners in turn, aiming to prune the search space (Lemma 7.3). Then, if the remaining search space contains at most one element, the Search procedure backtracks, having potentially returned an element, if appropriate. Otherwise, it divides the search with a splitter, and recurses.
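Schematically, the control flow just described has the following shape. This Python sketch is our own illustration, not the GAP implementation: the approximator, refiner, splitter, and the membership test for U 1 ∩ · · · ∩ U k are injected as callables, and the toy instantiation models a stack by the list of points it pins down, so that the approximator happens to be exact.

```python
from itertools import permutations
from types import SimpleNamespace

def search(S, T, ctx):
    """Schematic form of the Search procedure of Algorithm 7.1."""
    S, T = ctx.refine(S, T)            # apply the refiners (Refine)
    A = ctx.approx(S, T)               # over-approximate Iso(S, T)
    if len(A) == 0:
        return set()                   # dead branch: backtrack
    if len(A) == 1:
        h = next(iter(A))              # sole candidate: verify, then backtrack
        return {h} if ctx.is_solution(h) else set()
    left, rights = ctx.split(S, T)     # otherwise divide the search space
    found = set()
    for right in rights:
        found |= search(left, right, ctx)
    return found

def make_ctx(n, is_solution):
    """Toy instantiation: a 'stack' is the list of points it pins down."""
    def approx(S, T):
        return {g for g in permutations(range(n))
                if all(g[a] == b for a, b in zip(S, T))}
    def split(S, T):
        alpha = min(set(range(n)) - set(S))      # next undetermined point
        images = sorted(set(range(n)) - set(T))  # its candidate images
        return S + [alpha], [T + [beta] for beta in images]
    return SimpleNamespace(refine=lambda S, T: (S, T),  # trivial refiner
                           approx=approx, split=split,
                           is_solution=is_solution)
```

For example, with n = 3 and the intersection modelled as the even permutations that fix the point 0, search([], [], ctx) returns only the identity permutation.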
We assume that the given approximator, splitter, and refiners are computable. We thus claim that, given the specified inputs, Algorithm 7.1 returns the intersection U 1 ∩ · · · ∩ U k after a finite number of steps (Theorem 7.2). The proof of Theorem 7.2 relies on the following lemmas; in particular, it follows from Lemma 7.4 by setting S = T = EmptyStack(Ω).
Proof. The Refine procedure performs only finitely many iterations of its while loop, and so it terminates, since a new iteration occurs only if the previous one yields a smaller search space. It is evident from the definition of Refine that (i) holds. To prove (ii), note that Refine(S, T ) is obtained from (S, T ) by the repeated application of refiners to stacks of equal length (line 17). Thus it suffices to show that if i ∈ {1, . . . , m} and |S| = |T |, then

Proof. We proceed by induction on |Approx(Refine(S, T ))|, bearing in mind Definition 5.1(i) and Lemma 7.3(ii). If |Approx(Refine(S, T ))| ∈ {0, 1}, then it is straightforward to verify that the Search procedure terminates with the correct value.
Let n ∈ N with n ≥ 2, and assume that the statement holds for all stacks S and T with |Approx(Refine(S, T ))| < n. If |Approx(Refine(S, T ))| = n, then on line 11 the splitter gives a finite list of stacks, and hence a finite number of recursive calls to Search. By Definition 6.1(i) and (ii), by Lemma 7.3(i), and by the inductive hypothesis, these recursive calls terminate with values whose union is (U 1 ∩ · · · ∩ U k ) ∩ Iso(Refine(S, T )). The result follows by Lemma 7.3(ii), and by induction.
Remark 7.5. Algorithm 7.1 finds all elements of U 1 ∩ · · · ∩ U k . To search for a single element, if one exists, one can modify the Search procedure to return a result on line 7, as soon as the first solution is found. We name this modified procedure SearchSingle. Thus SearchSingle(S, T ) gives a single element of Iso(S, T ) ∩ (U 1 ∩ · · · ∩ U k ), if one exists, else ∅. This is especially useful when one wishes to find an isomorphism from one combinatorial structure to another, or to prove that they are non-isomorphic.

Searching for a generating set of a subgroup
It is usually most efficient to compute with a permutation group via a base and strong generating set (BSGS). In the standard definition (see [2, p. 101]), a base for a subgroup G of Sym(Ω) is a list of points in Ω whose stabiliser in G is trivial. Here, we use a broader definition, where the list may contain any objects on which G acts.
In this section we present Algorithm 7.6, which, given subsets U 1 , . . . , U k ⊆ Sym(Ω) whose intersection is a subgroup, returns a base and strong generating set for U 1 ∩ · · · ∩ U k . The base is given as a list of labelled digraph stacks. This algorithm is derived from Algorithm 7.1: the first two cases simplify since Approx(S, S) is a subgroup, and the recursive case turns into a search for a stabiliser and coset representatives. Note that Algorithm 7.6 uses the partially-constructed generating set to prune the search on line 10. It therefore usually performs a smaller search than Algorithm 7.1 does with the same input.
Input: as in Algorithm 7.1, plus the assumption that U 1 ∩ · · · ∩ U k is a subgroup.
Output: a base and strong generating set of the subgroup U 1 ∩ · · · ∩ U k .

Remark 7.8. Algorithm 7.6 is useful when searching for an intersection of cosets. Let the notation of Algorithm 7.6 hold, and suppose that each set U i is a coset of a subgroup of Sym(Ω). We can use the SearchSingle procedure to find some g ∈ U 1 ∩ · · · ∩ U k , or to prove that no such element exists. In the former case, for all i ∈ {1, . . . , k}, the pair (f L,i , f L,i ) is a refiner for the group U i g −1 by Lemma 4.6. Therefore we may use Algorithm 7.6 to search for a base and strong generating set of U 1 g −1 ∩ · · · ∩ U k g −1 , which in combination with g compactly describes U 1 ∩ · · · ∩ U k .
Lemma 7.9. Let the notation of Algorithm 7.6 hold. Then the SearchBSGS procedure, given S, terminates with a base and strong generating set of Aut(S)∩(U 1 ∩· · ·∩U k ).
Proof. The SearchBSGS procedure first calls the Refine procedure to prune the search (see Lemma 7.3). This returns a pair of equal stacks, since it is given equal stacks, and each refiner is a pair of equal functions (see Lemma 4.4). That one of the conditions on line 3 or line 5 is satisfied follows by Definition 5.1(i). The procedure accordingly returns a trivial solution, or searches recursively.

The SearchBSGS procedure only differs significantly from the Search procedure of Algorithm 7.1 in its recursive step. Let Split(S, S) = [S 1 , S 1 , S 2 , . . . , S t ] as on line 6. Then Aut(S S 1 ) is the stabiliser of S 1 in Aut(S) by Remark 3.1, and by Definition 6.1 and Remark 6.2, its right cosets in Aut(S) are the non-empty sets Iso(S S 1 , S S i ) for i ∈ {2, . . . , t}. Analogous statements hold for the stabiliser Aut(S S 1 ) ∩ (U 1 ∩ · · · ∩ U k ) of S 1 in Aut(S) ∩ (U 1 ∩ · · · ∩ U k ), and for the sets Iso(S S 1 , S S i ) ∩ (U 1 ∩ · · · ∩ U k ). It is thus possible to build a BSGS of Aut(S) ∩ (U 1 ∩ · · · ∩ U k ) recursively from a BSGS of the stabiliser of S 1 in Aut(S) ∩ (U 1 ∩ · · · ∩ U k ), along with representatives of a sufficient selection of the right cosets of the stabiliser, as is done in lines 7-11.
The validity of the recursion in the SearchBSGS procedure can be shown by induction on |Approx(Refine(S, S))|, as in the proof of Lemma 7.4. It follows by Remark 7.5 that, on line 11, the SearchSingle procedure returns the desired representatives of the sets Iso(S S 1 , S S i ) ∩ (U 1 ∩ · · · ∩ U k ).

Proof. Set S = T = EmptyStack(Ω) in Lemma 7.9.
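The recursive structure just described, building a BSGS from a BSGS of the stabiliser of S 1 together with coset representatives, can be illustrated on a toy instance. The following Python sketch is our own illustration, not the GAP code: a stack is modelled by the list of points it fixes, the approximator is exact, and the target intersection U 1 ∩ · · · ∩ U k is the alternating group A 3 inside S 3 .

```python
from itertools import permutations

N = 3
EVEN = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}     # toy target subgroup: A_3

def approx(S, T):
    # Exact in this toy model: permutations mapping the pinned points of S
    # to the corresponding pinned points of T.
    return {g for g in permutations(range(N))
            if all(g[a] == b for a, b in zip(S, T))}

def search_single(S, T):
    # Stands in for the SearchSingle procedure of Remark 7.5.
    return next((g for g in sorted(approx(S, T)) if g in EVEN), None)

def search_bsgs(S):
    """Return (base, strong generators) for the stabiliser of S in A_3."""
    if len(approx(S, S)) <= 1:
        return [], []                        # trivial group: empty BSGS
    alpha = min(set(range(N)) - set(S))      # split point, chosen from S alone
    left = S + [alpha]
    base, gens = search_bsgs(left)           # BSGS of the stabiliser of alpha
    reps = [g for beta in range(N)           # one representative per coset
            if beta not in S and beta != alpha
            and (g := search_single(left, S + [beta])) is not None]
    return [left] + base, gens + reps
```

The returned base records the stabiliser chain, and the returned permutations generate A 3 . The pruning by the partially-built generating set on line 10 of Algorithm 7.6 is omitted from this sketch.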

Searching with a fixed sequence of left-hand stacks
In this section, we discuss a consequence of our definitions and the setup of our algorithms, which enables a significant performance optimisation, and which allows us to use a special kind of refiner. This idea was inspired by, and is closely related to, the R-base technique of Jeffrey Leon [12, Section 6] for partition backtrack search, although we present the idea quite differently.
Recall that Algorithms 7.1 and 7.6 are organised around a pair of labelled digraph stacks (the stacks are equal in the SearchBSGS procedure), with both stacks initially equal to EmptyStack(Ω). We observe that when each of these algorithms is executed with a particular input, then in each branch of the search, the left-hand stack is modified by appending the same sequence of stacks to it, up to the end of the branch. (Note that different branches can have different lengths.) This is because the stacks in Algorithms 7.1 and 7.6 are only modified by appending stacks produced by refiners and splitters, and because decisions about the progression of the algorithm are made according to the size of the value of the isomorphism approximator. By the definition of a refiner as a pair of functions of one variable (Definition 4.1), the left-hand stack returned by a refiner depends only on the left-hand stack it is given; by Definition 6.1(iii), the left-hand stack defined by a splitter depends only on the given left-hand stack; and by Definition 5.1(iii), the size of the value of an isomorphism approximator is either zero, in which case the current branch immediately ends, or it depends only on the left-hand stack that is given.
Therefore we can store the new stacks that are appended to the left-hand stack as they are constructed, and simply recall them as they are needed later. This means that, on most occasions, when applying a refiner, we recall the value for the left stack, and compute only the value for the right-hand stack. This optimisation significantly improves the performance of the Refine procedure.
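This caching can be sketched as a per-depth memo: the first branch to reach a given depth computes and stores the left-hand extension, and every later branch replays it. The class and names in the following minimal Python illustration are ours, not the implementation's.

```python
class LeftCache:
    """Computes each left-hand refiner extension once, then replays it.

    This is valid because, in every branch of the search, the left-hand
    stack is extended by the same sequence of stacks (Section 8).
    """

    def __init__(self, refiner_left):
        self.refiner_left = refiner_left   # the left-hand function of a refiner
        self.stored = {}                   # depth -> cached extension

    def extend_left(self, depth, left_stack):
        if depth not in self.stored:       # first branch to reach this depth
            self.stored[depth] = self.refiner_left(left_stack)
        return self.stored[depth]          # every later branch replays it
```

With this in place, applying a refiner at a given depth costs one fresh computation for the right-hand stack only.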

Constructing and applying refiners via the fixed sequence of left-hand stacks
We saw in Lemmas 4.5 and 4.6 that any refiner for a non-empty set is derived from a function f from DigraphStacks(Ω) to itself satisfying f (S g ) = f (S) g for certain g ∈ Sym(Ω). In this section, we give an example that demonstrates a difficulty in satisfying this condition, and a general method for giving refiners that overcome this difficulty. We use such refiners for groups and cosets in our experiments.

Suppose that we are searching for an intersection D of subsets of Sym(Ω), one of which is a group G, and suppose that Iso(S, T ) overestimates the solution. We wish to give a refiner (f, f ) for G, where the function f takes into account that the elements of D respect the orbit structure of G.
Example 8.1. One option is to refine with the stacks [Γ U ] and [Γ V ], where U and V are the unordered sets of orbits of G [1,2] and G [3,4] on Ω. This is valid, but not ideal, since a permutation could map [Γ U ] to [Γ V ] while mapping an orbit in U to any orbit of the same size in V. However, every element of G [1,2] · h maps the orbit O ∈ U to O h . Therefore, this refiner does not eliminate some elements that, to us, are obviously not in D.
This is unsatisfactory, and so we would like to define f (S) = f U (S) as in Example 4.10 for some ordered list U of the orbits of G [1,2] . But then how should we order the orbits of G [3,4] in the corresponding way, to obtain a stack f (T ) such that D ⊆ Iso(f (S), f (T ))?
To address the problem discussed in Example 8.1, we use a technique similar to that of Leon [12]. In essence, we specify a refiner iteratively during search, when applying it to a new version of the left-hand stack. We can make choices as we do so (about the ordering of orbits, for example). Then, for all right-hand stacks that we encounter, we consult the choice made for the current left-hand stack, and remain consistent with that.
In more detail, we create such a refiner (f, f ) for a subgroup G of Sym(Ω) as follows. Let Fixed be a fixed point approximator, and initially let V i and F i be empty lists for all i ∈ N 0 . We describe how to apply (f, f ) to stacks (S, T ) of length i := |S| = |T |.
If V i is still empty, then we redefine F i to be Fixed(S) and V i to be a non-empty labelled digraph stack on Ω whose automorphism group contains G F i , the stabiliser of F i in G. For example, V i could be a list of orbital graphs of G F i on Ω, represented as labelled digraphs, or it could be the length-one stack [Γ U ] from Example 4.10, for some arbitrarily-ordered list U of the orbits of G F i on Ω.
If V i is no longer empty, then we have already applied the refiner to stacks of length i. Since a search has at most one left-hand stack of any particular length, this means that we have already seen the left-hand stack S, and defined F i and V i in terms of it.
The refiner gives f (S) = V i and either f (T ) = V a i (if Fixed(S) a = Fixed(T ), for some a ∈ G) or f (T ) = EmptyStack(Ω). Note that, by Definition 5.2(ii), if no such element a exists, then S and T are not isomorphic via G, and so there are no solutions in the current branch. Therefore the algorithm should backtrack.
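This lazily-specified refiner can be modelled in a toy Python setting. The modelling and names below are ours: a stack of length i is summarised by the list of points it fixes, G is an explicit set of permutation tuples, and V i is taken to be the orbit partition of the stabiliser G F i .

```python
class LazyGroupRefiner:
    """Refiner (f, f) for a subgroup G, specified during search (Section 8.1)."""

    def __init__(self, G, n):
        self.G, self.n = set(G), n
        self.F = {}                  # depth i -> the fixed points F_i
        self.V = {}                  # depth i -> the stack V_i (here modelled
                                     #            as an orbit partition of G_{F_i})

    def left(self, fixed_S):
        i = len(fixed_S)
        if i not in self.V:          # first left-hand stack of this length:
            self.F[i] = list(fixed_S)            # record F_i := Fixed(S)
            stab = {g for g in self.G
                    if all(g[a] == a for a in fixed_S)}
            part, seen = [], set()
            for x in range(self.n):              # orbit partition of G_{F_i}
                if x not in seen:
                    orbit = frozenset(g[x] for g in stab)
                    seen |= orbit
                    part.append(orbit)
            self.V[i] = part
        return self.V[i]             # f(S) = V_i

    def right(self, fixed_T):
        i = len(fixed_T)             # assumes left() was called at this depth
        a = next((g for g in self.G
                  if all(g[x] == y
                         for x, y in zip(self.F[i], fixed_T))), None)
        if a is None:
            return None              # f(T) = EmptyStack(Ω): backtrack
        return [frozenset(a[x] for x in cell) for cell in self.V[i]]
```

For instance, with G the cyclic group of order 4 acting on four points, the left-hand value at depth 1 is the singleton partition, and right-hand stacks are translated by any a ∈ G mapping F 1 to the given fixed points, or rejected when no such a exists.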
The mathematical foundation of this kind of refiner is given in Lemma 8.2. The notation in this lemma corresponds to the notation of the preceding paragraphs.
We may use this lemma with Lemma 4.6 to give refiners for cosets of subgroups.
Lemma 8.2. Let G ≤ Sym(Ω) and let Fixed be a fixed-point approximator. For all i ∈ N 0 , let V i ∈ DigraphStacks(Ω) be a labelled digraph stack on Ω, and let F i be a list of points in Ω whose stabiliser in G is a subgroup of Aut(V i ).
We define a function f from DigraphStacks(Ω) to itself as follows. For each S ∈ DigraphStacks(Ω), let f (S) = (V |S| ) a , where a ∈ G is any element mapping F |S| to Fixed(S), and let f (S) = EmptyStack(Ω) if no such element a ∈ G exists.
Then (f, f ) is a refiner for G.
Proof. Note that f is well-defined: if S ∈ DigraphStacks(Ω) and a, b ∈ G both map F |S| to Fixed(S), then ab −1 stabilises F |S| , and so ab −1 ∈ Aut(V |S| ) by assumption, whence (V |S| ) a = (V |S| ) b .

Let S ∈ DigraphStacks(Ω) and g ∈ G. By Lemma 4.5, it suffices to show that f (S g ) = f (S) g . There exists an element a ∈ G mapping F |S| to Fixed(S) if and only if there exists an element b ∈ G mapping F |S g | = F |S| to Fixed(S g ), since g maps Fixed(S) to Fixed(S g ) by Definition 5.2(ii). In the case that no such a and b exist, f (S g ) = EmptyStack(Ω) = f (S) g . Otherwise f (S) = (V |S| ) a and f (S g ) = (V |S| ) b . In this case, agb −1 fixes F |S| pointwise, and so agb −1 ∈ Aut(V |S| ) by assumption. Therefore f (S g ) = (V |S| ) b = (V |S| ) ag = (f (S)) g , as required.

Experiments
In this section, we provide experimental data comparing the behaviour of our algorithms against partition backtrack, in order to highlight the potential of our techniques. We repeat the experiments of [8, Section 6], which demonstrated improvements from using orbital graphs, and we also investigate some additional challenging problems. In many cases we observe a significant improvement with our new techniques. We decided not to investigate classes of problems where partition backtrack already performs very well, or problems where we would expect all techniques (including ours) to perform badly. Instead we have chosen problems that are interesting and important in their own right, including ones that we expect to be hard for many search techniques.
At the time of writing, we have focused on the mathematical theory of our algorithms, but not on the speed of our implementations. Therefore, we only analyse the size of the search required by an algorithm to solve a problem, and not the time required. We define a search node of a search to be an instance of the main searching procedure being called recursively during its execution; the size of a search is then its number of search nodes. If an algorithm requires zero search nodes to solve a problem, then it solves the problem without entering recursion, which in our situation implies that the problem has either no solutions, or exactly one.
The size of a search should depend only on the mathematical foundation of the algorithm, rather than on the proficiency of the programmer who implements it, and so it allows a fair basis for comparisons. That said, we expect that where our algorithms require significantly smaller searches, then with an optimised implementation, the increased time spent at each node will be outweighed by the smaller number of nodes in total, giving faster searches than partition backtrack. This is because, in general, a backtrack search algorithm spends time at each search node to prune the search tree and organise the search. The computations at each node of our algorithms are largely digraph-based, and the very high performance of digraph-based computer programs such as bliss [10] and nauty [13] suggests that, in practice, these kinds of computations should be cheap.
For the problems that we investigate in Sections 9.1-9.3, we compare the following techniques:
(i) Leon: Standard partition backtrack search, as described by Jeffrey Leon [11,12].
(ii) Orbital: Partition backtrack search augmented with orbital graphs, as described in [8].
(iii) Strong: Backtrack search with labelled digraphs, using the isomorphism and fixed-point approximators from Definition 5.9 and the splitter from Definition 6.4.
(iv) Full: Backtrack search with labelled digraphs, using the isomorphism and fixed-point approximators from Definition 5.4 and the splitter from Definition 6.4.
The Leon technique is roughly the same as backtrack search with labelled digraphs, where the digraphs are not allowed to have arcs. The Orbital technique is roughly the same as backtrack search with labelled digraphs using the 'weak equitable approximation' isomorphism and fixed-point approximators from Definition 5.10.
The Strong technique considers all labelled digraphs in the stack simultaneously to make its approximations, while the Full technique computes isomorphisms and fixed points exactly, rather than approximating them, and so in principle it is the most expensive of the four methods.
We require refiners for groups given by generators, for cosets of such groups, for set stabilisers, and for unordered partition stabilisers. We describe the refiners for set and unordered partition stabilisers in Section 9.1. For Leon we use the group and coset refiner described in [11]. For Orbital we use the DeepOrbital group and coset refiner from [8], although we get similar results for all the refiners described in that paper. For the Strong and Full techniques, we use refiners of the kind described in Section 8.1 for groups and cosets, using orbits and orbital graphs. These algorithms are similar to DeepOrbital, except they return the created digraphs instead of filtering them internally. For Strong and Full, we use the splitter given in Definition 6.4.
We performed our experiments using the GraphBacktracking [6] and BacktrackKit [7] packages for GAP [3]. BacktrackKit provides a simple implementation of the algorithms in [8,11,12], and provides a base for GraphBacktracking. We note that where we reproduce experiments from [8], we obtain the same-sized searches.

Set stabilisers and partition stabilisers in grid groups
We first explore the behaviour of the four techniques on stabiliser problems in grid groups. This setting was previously considered in [8, Section 6.1], and as mentioned there, these kinds of problems arise in numerous real-world situations. Let n ∈ N and Ω = {1, . . . , n}, and let G ≤ Sym(Ω × Ω) be the n × n grid group. If we consider Ω × Ω to be an n × n grid, where the sets of the form {(α, β) : β ∈ Ω} and {(β, α) : β ∈ Ω} for each α ∈ Ω are the rows and columns of the grid, respectively, then G is the subgroup of Sym(Ω × Ω) that preserves the set of rows and the set of columns.
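Concretely, membership in the grid group as defined here can be tested by checking that a permutation of the n 2 cells maps rows onto rows and columns onto columns. The following Python sketch is our own illustration, with the cell (r, c) numbered nr + c; if one takes the grid group to include the transposition of the grid, one would additionally allow rows and columns to interchange.

```python
def preserves_blocks(perm, blocks):
    """Does perm (a list: cell -> image cell) map every block onto a block?"""
    block_set = {frozenset(b) for b in blocks}
    return all(frozenset(perm[x] for x in b) in block_set for b in blocks)

def grid_blocks(n):
    """The rows and columns of the n x n grid, with cell (r, c) = n*r + c."""
    rows = [[n * r + c for c in range(n)] for r in range(n)]
    cols = [[n * r + c for r in range(n)] for c in range(n)]
    return rows, cols

def in_grid_group(perm, n):
    rows, cols = grid_blocks(n)
    return preserves_blocks(perm, rows) and preserves_blocks(perm, cols)
```

For example, permuting whole rows of the grid preserves both block systems, whereas swapping two individual cells breaks a row and so fails the test.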
We repeat the experiments in [8], and add an unordered partition stabiliser problem:
(i) Compute the stabiliser in G of a subset of Ω × Ω of size n 2 /2 .
(ii) Compute the stabiliser in G of a subset of Ω×Ω with n/2 entries in each grid-row.
(iii) If 2 | n, then compute the stabiliser in G of an unordered partition of Ω × Ω that has two cells, each of size n 2 /2.
As in [8, Section 6.1], we compute with the n×n grid group as a subgroup of S n 2 , and the algorithms have no prior knowledge of the grid structure that the group preserves.
For Leon and Orbital, we refine for a set stabiliser as in [11]. For Strong and Full, our refiner for set stabiliser is the one from Example 4.10. The stabiliser in S n 2 of an unordered partition with two parts of size n 2 /2 is a subgroup isomorphic to the wreath product S n 2 /2 S 2 . For each technique, for an unordered partition stabiliser, we directly use the group refiner for this subgroup.
In [8, Section 6.1], the Orbital algorithm was much faster than the classical Leon algorithm at solving problems of types (i) and (ii). In Table 9.2, we see why: Orbital typically requires no search for these problems. Leon used a total of 65,834 nodes to solve all problems in Problem (i), and 37,882,616 nodes for Problem (ii), while Orbital required 567 for Problem (i) and 1073 for Problem (ii). The same numbers of nodes were also required for both Strong and Full, since there is no possible improvement.
In Table 9.3 for Problem (iii), however, we clearly see the benefits of our new techniques. Partition backtrack (both Leon and Orbital) requires an increasing number of search nodes, with 140,177 nodes required for Leon and 57,120 nodes for Orbital to solve all instances. But Strong is powerful enough in almost all cases to solve these same problems without search, requiring only 450 nodes to solve all problem instances.

Intersections of primitive groups with symmetric wreath products
Next, as in [8, Section 6.2], we consider intersections of primitive groups with wreath products of symmetric groups. To construct these problems, we use the primitive groups library, which is included in the PrimGrp [5] package for GAP.
For a given composite n ∈ {6, . . . , 80}, we create the following problems: for each primitive subgroup G ≤ S n that is neither S n nor the natural alternating subgroup of S n , and for each proper divisor d of n, we construct the wreath product S n/d S d as a subgroup of S n , which we then conjugate by a randomly chosen element of S n . Finally, we use each algorithm in turn to compute the intersection of G with the conjugated wreath product. We create 50 such intersection problems for each n, G, and d.
For each k ∈ {6, . . . , 80}, we record the cumulative number of search nodes that each technique needs to solve all of the intersection problems for all composite n ∈ {6, . . . , k}. We show these cumulative totals in Figures 9.4 and 9.5, separating the groups that are 2-transitive from those that are primitive but not 2-transitive, as in [8, Section 6.2]. Note that the number of problems increases with the numbers of divisors of n and of primitive groups of degree n; this explains the step-like structure in these figures.

For the primitive but not 2-transitive groups, the total number of search nodes required by the Leon algorithm is 3,239,403. The Orbital algorithm reduces this total search size to 2,079,356, and the cumulative search sizes for Strong (with 3,248 nodes) and Full (with 2,140 nodes) are even smaller.
This huge reduction happens because the Strong and Full algorithms solve almost every problem without search. Out of 40,150 experiments, the Strong algorithm required search for only 703, and the Full algorithm required search for just 654. On the other hand, the Leon and Orbital algorithms required search for every problem.

For the intersection problems involving groups that are at least 2-transitive, the improvement of the new techniques over the partition backtrack algorithms is much smaller, and all of the algorithms required a non-zero search size to solve every problem. This was to be expected: a 2-transitive group has a unique orbital graph, which is a complete digraph.

Intersections of cosets of intransitive groups
In this section, we go beyond the experiments of [8,Section 6], with a problem that we expect to be difficult for all search techniques: intersecting cosets of intransitive groups that have identical orbits, and where all orbits have the same size.
More precisely, we intersect right cosets of subdirect products of transitive groups of equal degree. Given k, n ∈ N, we randomly choose k transitive subgroups of S n from the transitive groups library TransGrp [4], each of which we conjugate by a random element of S n , and we create their direct product, G, which we regard as a subgroup of S kn . Then, we randomly sample elements of G until the subgroup that they generate is a subdirect product of G. If this subdirect product is equal to G, then we abandon the process and start again. Otherwise, the result is a generating set for what we call a proper (k, n)-subdirect product.
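The construction of a (k, n)-subdirect product can be sketched in Python for a tiny deterministic instance. This is our own illustration: real instances draw random transitive groups from the TransGrp library, which we replace here by the cyclic group C 3 and its diagonal subgroup in C 3 × C 3 .

```python
def compose(p, q):
    """Apply p first, then q (permutations as tuples)."""
    return tuple(q[p[i]] for i in range(len(p)))

def generated(gens, n):
    """Closure of the generated subgroup; suitable for tiny groups only."""
    seen, frontier = {tuple(range(n))}, [tuple(range(n))]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        frontier = nxt
    return seen

def dsum(p, q):
    """Embed (p, q) into Sym({0,...,2n-1}), acting on two blocks of size n."""
    n = len(p)
    return tuple(p) + tuple(n + x for x in q)

def is_subdirect(H, factor, n):
    """Do both block projections of H map onto the given degree-n factor?"""
    proj1 = {g[:n] for g in H}
    proj2 = {tuple(x - n for x in g[n:]) for g in H}
    return proj1 == factor and proj2 == factor
```

Here the diagonal subgroup generated by dsum(c, c), for c a 3-cycle, is a proper (2, 3)-subdirect product: both projections are all of C 3 , yet the subgroup has order 3 rather than 9. The cosets intersected in the experiments are right cosets of such subgroups.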
In our experiments, for various k, n ∈ N, we explore the search space required to determine whether or not the intersections of pairs of right cosets of different (k, n)-subdirect products are empty. To make the problems as hard as possible, we choose coset representatives that preserve the orbit structure of the (k, n)-subdirect product.
We performed 50 random instances for each pair (k, n), for all k, n ∈ {2, . . . , 10}, and we show a representative sample of this data in Tables 9.6 and 9.7 and Figure 9.8. Table 9.6 shows results for each n for all k combined, and Table 9.7 gives a more in-depth view for two values of k. The tables omit data for the Full algorithm, because it was mostly identical to the data for the Strong algorithm.

The only problems for which Strong does not perform significantly better are those involving orbits of size 2 (n = 2). This is not surprising, as there are very few possible orbital graphs for such groups. We note that the problems with n = 5 and n = 7 seem particularly difficult. This is because transitive groups of prime degree are primitive, and sometimes even 2-transitive, in which case they do not have useful orbital graphs.
On the other hand, Orbital solved far fewer problems without search, and Leon solved none in this way. Although the relatively low medians show that all of the algorithms performed quite small searches for many of the problems, we see a much starker difference in the mean search sizes. These means are typically dominated by a few problems; see Figure 9.8.

To give a more complete picture of how the algorithms perform, Figure 9.8 shows the search sizes for all 50 intersection problems that we considered for n = k = 7, sorted by difficulty. The data that we collected in this case was fairly typical. Figure 9.8 shows that Strong solves almost all problems with very little or no search, and it only requires more than 50 search nodes for the three hardest problems. On the other hand, Leon and Orbital need more than 50 nodes for the 18 hardest problems. All algorithms found around 30% of the (randomly generated) problems easy to solve.

Conclusions and directions for further work
We have discussed a new search technique for a large range of group and coset problems in Sym(Ω), building on the partition backtrack framework of Leon [11,12], but using stacks of labelled digraphs instead of ordered partitions.
Our new algorithms often reduce problems that previously involved searches of hundreds of thousands of nodes to problems that require no search, and can instead be solved by applying strong equitable approximation to a pair of stacks. There already exists a significant body of work on efficiently implementing equitable partitioning and automorphism finding on digraphs [10,13], which we believe can be generalised to work incrementally with labelled digraph stacks that grow in length.

We have not yet concerned ourselves with the time complexity or speed of our algorithms in this paper, nor have we discussed their implementation details. However, we intend for our algorithms to be practical, and we expect that with sufficient further development of their implementations, they should perform competitively against, and even beat, partition backtrack for many classes of problems; this requires optimising the implementation of the algorithms that we have presented here. In future work, we plan to show how the algorithms described in this paper can be implemented efficiently, and to compare the speed of various methods on hard search problems. In particular, we aim for a better understanding of when partition backtrack is already the best method available, and when it is worth using our methods. Further, earlier work which used orbital graphs [8] showed that there are often significant practical benefits to using only some of the possible orbital graphs in a problem, rather than all of them. We will investigate whether a similar effect occurs in our methods.
Another direction of research is the development and analysis of new types of refiners, along with an extension of our methods. For example, we have seen refiners in our examples that perfectly capture all of the information about the set that we search for, and this is worth investigating further; see [9, Section 5.1] for first steps in this direction. We could also allow more substantial changes to the digraphs, such as adding new vertices outside of Ω. One obvious major area not addressed in this paper is normaliser and group conjugacy problems. These problems, as well as a notion of the quality of refiners, are addressed in ongoing work that builds on the present paper.
While the step from ordered partitions to labelled digraphs already adds some difficulty, we still think that it is worth considering even more intricate structures. Why not generalise our ideas to stacks of more general combinatorial structures defined on a set Ω? The definitions of a splitter, of an isomorphism approximator, and of a refiner were essentially independent of the notion of a labelled digraph, and so they, and therefore the algorithms, could work for more general objects around which a search method could be organised.