Polynomial-delay generation of functional digraphs up to isomorphism

We describe a procedure for the generation of functional digraphs up to isomorphism; these are digraphs with uniform outdegree 1, also called mapping patterns, finite endofunctions, or finite discrete-time dynamical systems. This procedure is based on a reverse search algorithm for the generation of connected functional digraphs, which is then applied as a subroutine for the generation of arbitrary ones. Both algorithms output solutions with $O(n^2)$ delay and require linear space with respect to the number $n$ of vertices.


Introduction and motivation
A finite, discrete-time dynamical system (A, f), called in the following just a dynamical system for brevity, is simply a finite set A of states together with a function f : A → A describing the evolution of the system in time. A dynamical system can be equivalently described by its transition digraph, which has V = A as its set of vertices and E = {(a, f(a)) : a ∈ A} as its set of arcs; that is, each state has an outgoing edge pointing to the next state. Since the dynamical systems we are dealing with are deterministic, their transition digraphs are all and only the digraphs having uniform outdegree 1, that is, functional digraphs.
The synchronous execution of two dynamical systems A and B gives a dynamical system A ⊗ B, whose transition digraph is the direct product [15] of the transition digraphs of A and B. This product, together with a disjoint union operation of sum, gives a semiring structure over dynamical systems up to isomorphism [10] with some interesting algebraic properties, notably the lack of unique factorisation into irreducible digraphs. In order to develop the theory of the semiring of dynamical systems, it is useful to be able to find examples and counterexamples to our conjectures, and this often amounts to being able to efficiently generate all functional digraphs over a given number n of vertices, up to isomorphism. Remark that the number of non-isomorphic functional digraphs over n vertices (sequence A001372 on the OEIS [23]) is exponential, asymptotically c · d^n / √n for some constants c and d > 1 [21]. As a consequence, there is no hope of devising an algorithm listing these objects in time polynomial in n. Rather, the kind of efficiency we must aim for is either guaranteeing a small exponential running time, reducing the base of the exponent as much as possible, which is referred to as the input-sensitive approach [12], or the ability to generate all solutions in time polynomial in the size of the input plus the output, known as the output-sensitive approach [16]. Within this second framework, algorithms running with polynomial delay (requiring polynomial time in the input size between consecutive outputs) are considered among the most efficient in algorithmic enumeration. In this paper, we place ourselves in the output-sensitive approach and refer the reader to [16,24] for more details on enumeration algorithms and their complexity.
Since the outdegree of each vertex is exactly 1 and the number of vertices is finite, the general shape of a functional digraph is a disjoint union of connected components, each consisting of a limit cycle ⟨v_1, ..., v_k⟩, where the vertices v_1, ..., v_k are the roots of k directed unordered rooted trees (simply referred to as trees in the following), with the arcs pointing towards the root.
Enumeration problems for several classes of graphs have been analysed in the literature. For instance, efficient isomorphism-free generation algorithms for rooted, unordered trees are well known, even requiring only amortised constant time per solution [2], and there exist polynomial-delay algorithms for the isomorphism-free generation of undirected graphs [13]. More general techniques for the generation of combinatorial objects have been described by McKay [19]. For practical implementations of generators for several classes of graphs we refer the reader to software such as nauty and Traces [20].
However, the class of functional digraphs does not seem to have been considered yet from the point of view of efficient generation algorithms. Here we first propose an O(n^2)-delay, linear-space algorithm for the generation of connected n-vertex functional digraphs (sequence A002861 on the OEIS [22]), based on an isomorphism code which avoids generating multiple isomorphic digraphs. This assumes the word RAM model with word size O(log n) [14]. The algorithm is based on the reverse search technique [1], which has proved efficient for the generation of a wide range of objects, including spanning trees of a graph, triangulations in the plane, or cells in a hyperplane arrangement, to cite a few [1,7,27,29]. In a nutshell, this technique amounts to traversing in a depth-first search manner an implicit solution tree whose nodes are solutions, and whose edges are defined by some parent-child relation between solutions. During the traversal, children are obtained by merging trees having adjacent roots along the limit cycle. A notable feature of this algorithm is that it can moreover be adapted in order to produce the successor (or predecessor) of any given solution in O(n^2) time as well, still using only linear space. This procedure is then used as a subroutine in order to generate all, not necessarily connected, functional digraphs with the same delay and space.

Isomorphism codes for connected functional digraphs
In order to avoid generating multiple isomorphic functional digraphs, we first define a canonical representation based on an isomorphism code, which also allows us to check in polynomial time whether two given functional digraphs are isomorphic when given in another representation (e.g., adjacency lists or matrices). Isomorphism codes for unordered rooted trees (which can be taken as directed with the arcs pointing towards the root, as is needed in our case) are well known in the literature; for instance, level sequences (sequences of node depths given by a preorder traversal of the tree, arranged in lexicographic order) can be used for this purpose [2]. Here we adopt a solution due to Jack Edmonds and described by Busacker and Saaty [6, pp. 196-199] and Valiente [28, p. 118], which has the useful property that the isomorphism code of a tree directly contains as subsequences the isomorphism codes of its subtrees.
Definition 1 (code of a tree). Let T = (V, E) be a tree. Then, the isomorphism code of T is the sequence of integers code T = ⟨|V|⟩ ⌢ code T_1 ⌢ ··· ⌢ code T_k, where T_1, ..., T_k are the immediate subtrees of T, i.e., the subtrees having as roots the predecessors of the root of T, arranged in lexicographically nondecreasing order of code, and ⌢ denotes sequence concatenation. In particular, if T is trivial, i.e., if |V| = 1, then code T = ⟨1⟩.
See Fig. 1 for an example of the isomorphism code of a tree. For simplicity, in the rest of the paper, we identify a tree with its own code, i.e., we often write "T" instead of "code T" where no ambiguity arises. We also denote the lexicographic order on tree isomorphism codes by the symbol ≤. As a consequence, a sentence such as "T_1 ≤ T_2" is to be interpreted as "the isomorphism code of the tree T_1 is less than or equal to that of T_2 in lexicographic order". This order has the trivial tree ⟨1⟩ as its minimum. In the following, we denote by |T| the length of the code of T; note that this value coincides with the number of vertices of T, as well as with the first integer of its code.
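To make Definition 1 concrete, the code can be computed recursively from a parent-pointer representation of the tree. The following is a minimal sketch; the function name `tree_code` and the parent-map input format are our own choices, not from the paper.

```python
def tree_code(parent, root):
    # Build children lists from the parent (successor) map; arcs point
    # towards the root, so `parent[v]` is the head of v's unique out-arc.
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)

    def code(v):
        # Codes of the immediate subtrees, in lexicographically
        # nondecreasing order (Python compares lists lexicographically).
        subcodes = sorted(code(w) for w in children.get(v, []))
        flat = [x for sub in subcodes for x in sub]
        # The first integer is the number of vertices, which equals
        # the length of the code itself.
        return [1 + len(flat)] + flat

    return code(root)
```

For instance, the path on 3 vertices yields ⟨3, 2, 1⟩, while the star with two leaves yields ⟨3, 1, 1⟩.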
Since a connected functional digraph consists of a sequence of trees arranged along a cycle, and all rotations of the sequence are equivalent (i.e., they correspond to isomorphic digraphs), we choose a canonical one as its isomorphism code.
Definition 2 (code of a connected functional digraph). The isomorphism code of a connected functional digraph C = (V, E) is the lexicographically minimal rotation code C = ⟨code T_1, ..., code T_k⟩ of the sequence of isomorphism codes of its trees taken in order along the cycle, i.e., such that for all integers r we have ⟨code T_1, ..., code T_k⟩ ≤ ⟨code T_{1+((1+r) mod k)}, ..., code T_{1+((k+r) mod k)}⟩. For brevity, we refer to connected functional digraphs as components and, as with trees, we identify a component C with its own code. A valid code for a component C is also called a canonical form of C; unless otherwise specified, in the rest of the paper we consider all components to be in canonical form. As for trees, we denote the lexicographic order on components (more precisely, on their isomorphism codes) by ≤, and by |C| the number of trees along the cycle, i.e., the integer k in the definition above.
Isomorphism codes for arbitrary (i.e., not necessarily connected) functional digraphs will be defined later, in Section 4, since they will be based on the order of generation of components.
Notice that the space required for the isomorphism code of a tree or a connected functional digraph (and later, of an arbitrary functional digraph) of n vertices is linear on the word RAM model, although the actual number of bits is O(n log n), since the codes consist of n integers ranging from 1 to n.

Generation of connected functional digraphs
We describe an algorithm based on reverse search [1] for the enumeration of connected functional digraphs. This technique is a particular case of a technique called the supergraph method (or solution graph traversal), which amounts to traversing in a depth-first search (DFS) manner an implicit solution graph where nodes are solutions, and edges are defined by some reconfiguration relation between solutions [16,18]. In the framework of reverse search, the solution graph is required to be a tree, where the edges are defined by a parent relation. The time and space complexities of the enumeration then essentially boil down to the time and space needed to generate the children of a given node and to compute its parent. When finding a child of a solution, we continue the traversal on this node. When all children have been found, we backtrack, find the next child, and continue on it. Thus, in general, reverse search only needs memory space that is linear in the height of the solution tree times the space needed to generate children. As for the delay, it depends only on the time needed to compute the parent and the delay needed to generate children, when using folklore tricks such as the alternating output technique [26]. We refer the reader to [1,11,25,27] for more details on this technique.
The parent relation we will consider will be based on the following tree merging operation, to be applied to the isomorphism codes of two adjacent trees along the cycle.
Definition 3 (tree merging). Let T_1 and T_2 be trees with isomorphism codes ⟨x_1, ..., x_k⟩ and ⟨y_1, ..., y_ℓ⟩, respectively. Then merge(T_1, T_2) = ⟨x_1 + ℓ, x_2, ..., x_k, y_1, ..., y_ℓ⟩; that is, the merge is obtained by the concatenation of the codes of the two trees, with the first element updated in order to reflect the new total length.
The following is an immediate consequence of the definition of the merging operation: Remark 4. If T 1 and T 2 are trees and merge(T 1 , T 2 ) is a valid tree isomorphism code, then it represents the tree obtained by connecting T 2 as an immediate subtree of T 1 (i.e., by connecting T 2 to the root of T 1 ).
However, notice that, depending on T 1 and T 2 , the result of merge(T 1 , T 2 ) is not necessarily a valid tree isomorphism code, as it may not be lexicographically nondecreasing.
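The merging operation of Definition 3, together with the validity check just mentioned, can be sketched as follows. The function names are ours; `is_valid_merge` assumes, as in the text, that its input is the merge of two valid tree codes, so that only the top-level subtree order needs checking.

```python
def immediate_subtrees(t):
    # Split off the immediate subtree codes; each starts with its length.
    subs, i = [], 1
    while i < len(t):
        subs.append(t[i:i + t[i]])
        i += t[i]
    return subs

def merge(t1, t2):
    # Definition 3: concatenate the two codes, with the first element
    # updated to the new total length (= number of vertices).
    return [t1[0] + len(t2)] + t1[1:] + t2

def is_valid_merge(t):
    # A merge of two valid tree codes is itself a valid tree code iff the
    # immediate subtree codes appear in nondecreasing lexicographic order.
    subs = immediate_subtrees(t)
    return all(subs[j] <= subs[j + 1] for j in range(len(subs) - 1))
```

For example, merge(⟨3, 2, 1⟩, ⟨1⟩) = ⟨4, 2, 1, 1⟩ is not a valid code, since the subtree ⟨1⟩ would follow the larger subtree ⟨2, 1⟩.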
We define an inverse of the merging operation on trees that we call unmerging.
Notice that there is only one way to unmerge a tree, i.e., detaching the code of its rightmost immediate subtree T_2 (which is already a valid tree isomorphism code) and updating the first element of the remaining tree code in order to reflect the new, shorter length; this is done in linear time and produces a valid tree isomorphism code T_1, since the remaining subtrees are still in nondecreasing order. Thus unmerge is indeed a well-defined function. Remark that unmerge(T) = (T_1, T_2) implies T_1, T_2 < T, since T_1 and T_2 are proper subtrees of T and hence have strictly fewer vertices (their first elements are smaller than the first element of T).
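A sketch of the unmerging operation just described, assuming a valid nontrivial tree code as input (the function name is ours):

```python
def unmerge(t):
    # Walk the immediate subtrees to locate where the rightmost one
    # starts; each subtree code begins with its own length.
    assert len(t) > 1, "the trivial tree cannot be unmerged"
    i, last = 1, 1
    while i < len(t):
        last = i
        i += t[i]
    t2 = t[last:]
    # Shorten the first element of the remaining code to its new length.
    t1 = [t[0] - len(t2)] + t[1:last]
    return t1, t2
```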
Notice that a lexicographically minimal rotation always begins with a minimum element of the sequence (with respect to the ordering on elements); more generally, one of the longest runs of minimum elements must appear as the prefix of a minimal rotation. Furthermore, a maximal-length run of one or more minimum elements can never appear as a proper suffix (otherwise, by rotating to the right, we would obtain a smaller rotation); nonetheless, further copies of the same minimum might appear in intermediate positions of the sequence. These basic observations will be of use in the proof of the next lemma.

Lemma 8. Let C = ⟨T_1, ..., T_k⟩ be a component in canonical form with at least one nontrivial tree, and let T_h be the leftmost such nontrivial tree. Furthermore, let (U_1, U_2) = unmerge(T_h), and let V_1 ≤ V_2 be the trees U_1, U_2 arranged in nondecreasing order of code. Then C* = ⟨T_1, ..., T_{h−1}, V_1, V_2, T_{h+1}, ..., T_k⟩ is also in canonical form; we call C* the parent of C and denote it by Parent(C).
Proof. As discussed above, U_1 and U_2, and as a consequence V_1 and V_2, are always valid tree isomorphism codes. Thus, we only need to show that C* is a minimal rotation. We analyse three sub-cases.
1. If T_1 is the leftmost nontrivial tree of C (i.e., if h = 1), then C* = ⟨V_1, V_2, T_2, ..., T_k⟩. The tree T_1 is a minimal one in C, and V_1 ≤ V_2 < T_1, which implies that having V_1 and V_2 in the first and second position is both necessary and sufficient for C* to be a minimal rotation.
2. If the leftmost nontrivial tree of C is the last one (i.e., if h = k), then C* = ⟨T_1, ..., T_{k−1}, V_1, V_2⟩ begins with a prefix of k − 1 trivial trees. This is always a minimal rotation since ⟨1⟩ ≤ V_1 ≤ V_2.
3. Finally, suppose that 1 < h < k, i.e., the leftmost nontrivial tree is neither the first nor the last. Then C* begins with a sequence of trivial trees of length h − 1, h, or h + 1, depending on whether V_1 and V_2 are trivial, followed by a nontrivial tree (resp., V_1, V_2, or T_{h+1}). We distinguish cases accordingly:
– If at least one of V_1 and V_2 is trivial, then the maximal prefix of trivial trees of C* has length strictly larger than h − 1. If C* is not a minimal rotation, the actual minimal one would include a sequence of at least h trivial trees as a prefix, to be found as a subsequence of ⟨T_{h+1}, ..., T_{k−1}⟩. But this sequence would also exist in C and would lead to a smaller minimal rotation, since C begins with only h − 1 trivial trees, contradicting the hypothesis that C is a canonical form.
– If both V_1 and V_2 are nontrivial, then C* has a prefix of trivial trees of length h − 1, followed by V_1. Suppose, by contradiction, that C* is not a minimal rotation. Then a sequence of length h of the form P = ⟨⟨1⟩, ..., ⟨1⟩, T⟩ exists, as a subsequence of ⟨T_{h+1}, ..., T_k⟩, with the property that P ≤ ⟨T_1, ..., T_{h−1}, V_1⟩, i.e., P is the prefix of the actual minimal rotation (P and the prefix of C* could be elementwise identical, the difference only appearing later). This implies that T ≤ V_1. But V_1 ≤ T_h and thus P ≤ ⟨T_1, ..., T_h⟩; since P is also a subsequence of C, this implies that C is not a minimal rotation, once again a contradiction.
A key property given by Lemma 8 is the following.

Proposition 9. If C* = Parent(C), then |C*| = |C| + 1.

Proof. This is a consequence of the fact that the parent of a component is obtained by unmerging its leftmost nontrivial tree, and hence increases its number of trees by exactly one.
Note that the n-vertex simple cycle is the unique n-vertex component maximising the length of its code, which is n. As a consequence, together with Proposition 9, it is easily proved that the Parent relation defines a rooted tree of height at most n − 1, having as nodes the n-vertex components, with the n-vertex cycle as the root, and with an edge between two components C and C* whenever C* = Parent(C). It now suffices to show how to generate all C such that C* = Parent(C) for a given component C*, in order to obtain an efficient generation algorithm by reverse search: all solutions will be obtained by a traversal of the solution tree initiated at the root, the n-vertex simple cycle, and duplicates are inherently avoided by the acyclicity of the solution tree; see, e.g., [11] for a formalisation of the technique. In the following, we shall denote by Children(C*) the family of such components C. In order to list this family, we define the set of candidate merges of a given component, which we will show to contain every child.

Definition 10. Let C* = ⟨T_1, ..., T_k⟩ be a component with k ≥ 2 and, for 1 ≤ i < k, let L_i(C*) = ⟨T_1, ..., T_{i−1}, merge(T_i, T_{i+1}), T_{i+2}, ..., T_k⟩ and R_i(C*) = ⟨T_1, ..., T_{i−1}, merge(T_{i+1}, T_i), T_{i+2}, ..., T_k⟩. Then Candidate-Merges(C*) = {L_i(C*) : 1 ≤ i < k} ∪ {R_i(C*) : 1 ≤ i < k}.

The candidate merges of C* thus consist of all components obtained by merging each pair of adjacent trees in both directions. We note that the candidate merges of two distinct components could intersect, a point that will not be important in the following, as we will only explore actual children among these candidates; for instance, both ⟨⟨1⟩, ⟨1⟩, ⟨1⟩, ⟨3, 2, 1⟩⟩ and ⟨⟨1⟩, ⟨2, 1⟩, ⟨1⟩, ⟨2, 1⟩⟩ would share an element among their candidate merges.

Lemma 11. Given a sequence C = ⟨T_1, ..., T_k⟩ of sequences of integers of total length n, we can check in O(n) time whether C is a valid component isomorphism code.

Proof. First of all, it is necessary to check whether each T_i is a valid isomorphism code for a tree. Since either T_i = ⟨1⟩, or it is obtained by merging two valid tree codes, it suffices to check that the codes of the subtrees of T_i are in lexicographic order (see Example 5). The root of each subtree is the length of the corresponding subsequence, hence the subtrees can be extracted in linear time, and the resulting sequence of tree codes can also be checked for lexicographic order in linear time.
If the previous test is satisfied for all T_i, then it suffices to check whether C is a lexicographically minimal rotation (see Example 6). On "flat" sequences of integers, linear-time algorithms such as Booth's [3,4] are well known. We can solve our problem (which involves sequences of sequences of integers) by reduction to this simpler case.
Let P = ⟨0⟩ ⌢ T_1 ⌢ ⟨0⟩ ⌢ T_2 ⌢ ··· ⌢ ⟨0⟩ ⌢ T_k be the concatenation of the codes of the given trees, each prefixed with an extra ⟨0⟩; remark that 0 is strictly less than any integer appearing in the code of a tree. Let P′ be the minimal rotation of P; then P′ must begin with 0, since 0 is the minimum element of the sequence, and thus P′ = ⟨0⟩ ⌢ U_1 ⌢ ⟨0⟩ ⌢ U_2 ⌢ ··· ⌢ ⟨0⟩ ⌢ U_k, where each U_i is one of the original trees T_j and ⟨U_1, ..., U_k⟩ is a rotation of ⟨T_1, ..., T_k⟩.
We claim that ⟨U_1, ..., U_k⟩ is, more specifically, the minimal rotation of ⟨T_1, ..., T_k⟩. Otherwise, by contradiction, there would exist another rotation ⟨V_1, ..., V_k⟩ < ⟨U_1, ..., U_k⟩. Let V_i and U_i be the leftmost trees such that V_i ≠ U_i; then the rotation ⟨0⟩ ⌢ V_1 ⌢ ··· ⌢ ⟨0⟩ ⌢ V_k of P is strictly smaller than P′, since the two prefixes up to position i − 1 are identical and V_i < U_i. But this contradicts the assumption that P′ is the minimal rotation of P. Then P is its own minimal rotation if and only if ⟨T_1, ..., T_k⟩ is a minimal rotation. Since P is a sequence of integers, the former property can be checked in linear time as mentioned above.
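The reduction above can be sketched as follows. For clarity, this sketch tests minimality of the flattened rotation naively, in quadratic time; Booth's algorithm [3,4] would make the test linear, as in the paper. The function name is ours.

```python
def is_canonical(trees):
    # Flatten the component, prefixing each tree code with a 0, which is
    # strictly smaller than any entry of a tree code.
    flat = []
    for t in trees:
        flat += [0] + t
    doubled = flat + flat
    n = len(flat)
    # The component is in canonical form iff no rotation of the flat
    # sequence is strictly smaller than the sequence itself.
    return all(flat <= doubled[i:i + n] for i in range(n))
```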
We are now ready to describe how to efficiently compute the children of a component.
By definition of unmerge, we have that at least one of L_h(C*) = C or R_h(C*) = C holds, depending on whether U_1 or U_2 is minimal. We conclude that at least one of L_i(C*) or R_i(C*) equals C for i = h, hence that C belongs to Candidate-Merges(C*). We note that L_i(C*) = R_i(C*) whenever U_1 = U_2; even though this has no bearing on the proof, as Candidate-Merges(C*) is defined as a set, one should take care of this situation by considering only one of L_i(C*) or R_i(C*) in the implementation.
Regarding the delay, note that the number of merge operations to perform in order to generate Candidate-Merges(C*) is 2(k − 1), and that each merge takes O(n) time. When performing a merge, we additionally check whether the obtained code is valid, which can be done in O(n) time by Lemma 11, and whether it is a child of C*, by performing one unmerge operation, once again in O(n) time. Since k ≤ n, we obtain an O(n^2)-delay algorithm generating all children of C* as desired. The space is linear, since we only store a constant number of components. Now, if Children(C*) is empty, then the first part of the statement trivially holds, and the enumeration amounts to deciding whether Children(C*) = ∅, which is done by seeking children as described above, in O(n^2) time and linear space as well.
Let us mention for completeness that situations can occur where, for a given component C* and integer i, the components L_i(C*) and R_i(C*) define two distinct children; for example, this happens for C* = ⟨⟨2, 1⟩, ⟨3, 2, 1⟩⟩, n = 5 and i = 1. On the other hand, root-to-leaf paths of length less than n − 1 can occur in the solution tree, with components not having any children despite consisting of more than one tree; an example is given by C* = ⟨⟨2, 1⟩, ⟨2, 1⟩, ⟨2, 1⟩⟩ for n = 6. In situations of this kind, every valid candidate merge is actually the child of another component.
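Putting the pieces together, the children computation of Lemma 12 can be sketched as follows. This is an illustration using a quadratic-time canonicity test per candidate, not the paper's O(n)-per-candidate routine; all function names are ours.

```python
def merge(t1, t2):
    return [t1[0] + len(t2)] + t1[1:] + t2

def unmerge(t):
    i, last = 1, 1
    while i < len(t):
        last = i
        i += t[i]
    t2 = t[last:]
    return [t[0] - len(t2)] + t[1:last], t2

def valid_tree(t):
    # Immediate subtree codes must be lexicographically nondecreasing.
    subs, i = [], 1
    while i < len(t):
        subs.append(t[i:i + t[i]])
        i += t[i]
    return all(subs[j] <= subs[j + 1] for j in range(len(subs) - 1))

def canonical(comp):
    flat = []
    for t in comp:
        flat += [0] + t
    d, n = flat + flat, len(flat)
    return all(flat <= d[i:i + n] for i in range(n))

def parent(comp):
    # Lemma 8: unmerge the leftmost nontrivial tree, sort the pair.
    h = next(i for i, t in enumerate(comp) if len(t) > 1)
    u1, u2 = unmerge(comp[h])
    v1, v2 = sorted([u1, u2])
    return comp[:h] + [v1, v2] + comp[h + 1:]

def children(cstar):
    # Try the 2(k - 1) candidate merges of adjacent trees, in both
    # directions, keeping the candidates whose parent is cstar.
    k, out = len(cstar), []
    for i in range(k - 1):
        for a, b in ((i, i + 1), (i + 1, i)):
            t = merge(cstar[a], cstar[b])
            cand = cstar[:i] + [t] + cstar[i + 2:]
            if (valid_tree(t) and canonical(cand)
                    and parent(cand) == cstar and cand not in out):
                out.append(cand)
    return out
```

On the examples above, `children([[2, 1], [3, 2, 1]])` yields the two distinct children, while `children([[2, 1], [2, 1], [2, 1]])` is empty.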
We note that the general framework of reverse search [1], equipped with the alternating output technique [26], yields a natural polynomial-time algorithm to produce the solution that comes after any given solution C in the enumeration, provided that we are able to decide in polynomial time whether C lies at even or odd depth in the solution tree, and to resume the children enumeration of an arbitrary node at child i for any integer i. This comes from the fact that the next solution is at distance at most 3 from C in the solution tree; hence, resuming the DFS at C, we obtain the next solution after at most 3 recursive calls and/or backtracks, in a time which is bounded by 3 times the delay needed to produce the next child. In our case, as the worst-case delay to generate one child is the same as that of generating all children, we do not even need to argue that we can resume the enumeration from the i-th child as long as we are interested in the asymptotic delay, though this can be done to speed up the implementation. As for deciding whether the current solution C lies at even or odd depth in the solution tree, it can be done by checking the parity of n − |C|, that is, of the number of merges performed starting from the root in order to obtain C. By the same arguments, it is easily seen that we can compute the solution that comes right before any given solution C within the same time and space. In other words, our implementation of reverse search can be made so that it only uses the memory of the current node during the DFS of the solution tree, within the same delay. We conclude with the following.
Theorem 13. There is an O(n^2)-delay, linear-space algorithm generating all connected n-vertex functional digraphs. Moreover, given any such functional digraph, we can generate its successor (resp., predecessor) in the enumeration in O(n^2) time and using linear space.
As an example, the generation of all components of 4 vertices is depicted in Fig. 2.

Generation of arbitrary functional digraphs
We can now exploit the algorithms of Theorem 13, and more specifically our ability to generate the successor of a given component, as a subroutine for the efficient generation of arbitrary (not necessarily connected) functional digraphs. In order to avoid generating multiple isomorphic digraphs, we first define an appropriate isomorphism code.

Definition 14 (code of a functional digraph). Let G = (V, E) be an arbitrary functional digraph having m connected components. Then, the isomorphism code of G is the sequence ⟨C_1, ..., C_m⟩ of the codes of its components, where, for each pair of consecutive components C_i and C_{i+1}, either C_i has fewer vertices than C_{i+1}, or both C_i and C_{i+1} have the same number of vertices and their codes are in nondecreasing order of generation by the algorithm of Theorem 13.
An example code for a disconnected functional digraph is depicted in Fig. 1.As usual, we identify a functional digraph G with its own code and refer to it as a canonical form for G.
Notice how the isomorphism code for a functional digraph resembles a PQ-tree, a data structure representing permutations of a given set of elements which, incidentally, is used to efficiently check isomorphism for graphs of certain classes [5]. However, for our application we need to represent the equivalence of all permutations of the components C_1, ..., C_m, as well as the equivalence of all rotations of the trees T_1, ..., T_k of a component C_i, and this latter condition is not represented directly in a PQ-tree.
Definition 15. We denote by C_n the set of components of n vertices, and by C_n^m the set of functional digraphs having m components of n vertices each.
In terms of isomorphism codes, the elements of C_n^m have the form ⟨C_1, ..., C_m⟩ with all C_i ∈ C_n and C_1 ≤ ··· ≤ C_m in generation order. Remark that C_n^0 = {⟨⟩} as a base case, i.e., it only contains the empty functional digraph, and that C_n^m is not simply the Cartesian product of m copies of C_n: due to the ordering requirement, it is rather the set of multisets of m elements of C_n. This set can be generated efficiently as follows.
that is, by replacing C_i and every other component on its right by C′_i. The resulting functional digraph code G′ is still in nondecreasing order of generation of components, thus a valid isomorphism code, and G′ > G in the induced lexicographic order, which avoids generating the same code multiple times. Furthermore, any valid functional digraph code can be constructed from G_0 by applying the successor operation enough times. This is thus an exhaustive generation without duplicates. Intuitively, it amounts to generating all nondecreasing words of length m over the alphabet C_n, where nondecreasing is to be interpreted with respect to the order of generation of C_n.
Computing G′ requires applying the successor operation to at most m components, and replicating at most m times the successor component C′_i thus obtained, which can be carried out in O(mn^2) time. The space bound is due to the fact that we only store one functional digraph.
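Abstracting each component by its index in the generation order, the successor rule of Lemma 16 on nondecreasing words can be sketched as follows (the function name and the integer abstraction are ours):

```python
def multiset_successor(word, last):
    # `word` is a nondecreasing list over {0, ..., last}, standing for a
    # code ⟨C_1, ..., C_m⟩ with components abstracted by generation index.
    # Find the rightmost letter that has a successor, increment it, and
    # replace it and everything to its right by that successor value.
    w = list(word)
    for i in range(len(w) - 1, -1, -1):
        if w[i] < last:
            return w[:i] + [w[i] + 1] * (len(w) - i)
    return None  # the word was the last one in the enumeration
```

Starting from [0, 0] with last = 2, repeated application yields the 6 multisets of size 2 over 3 symbols.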
Recall that an integer partition of a natural number n is an (unordered) multiset of positive integers having sum n. Partitions can alternatively be represented as a vector of n natural numbers (including zero) giving the multiplicities of each term of the sum.

Definition 17. A partition of the natural number n is an n-tuple p = (p_1, ..., p_n) of natural numbers such that Σ_{i=1}^{n} p_i · i = n. We denote by P_n the set of partitions of n.

Definition 18. We denote by G_n the whole set of functional digraphs over n vertices. Furthermore, for each partition p = (p_1, ..., p_n) ∈ P_n, we denote by G_p the set of functional digraphs having exactly p_i components of i vertices for all 1 ≤ i ≤ n.
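For illustration, the multiplicity vectors of Definition 17 can be enumerated by a simple recursive sketch (the paper instead relies on a linear-delay partition generator [17]; the function name is ours):

```python
def partitions(n):
    # Yield all n-tuples (p_1, ..., p_n) with sum(i * p_i) = n, choosing
    # part sizes in decreasing order so each partition appears once.
    def rec(rem, maxpart, acc):
        if rem == 0:
            yield tuple(acc)
            return
        if maxpart == 0:
            return
        for size in range(min(rem, maxpart), 0, -1):
            for mult in range(rem // size, 0, -1):
                nxt = acc[:]
                nxt[size - 1] = mult
                yield from rec(rem - size * mult, size - 1, nxt)
    yield from rec(n, n, [0] * n)
```

For n = 4 this yields the 5 partitions 4, 3+1, 2+2, 2+1+1, and 1+1+1+1 as multiplicity vectors.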
By definition we have G_n = ⋃_{p ∈ P_n} G_p, and G_p and G_q are disjoint whenever p and q are distinct partitions of n; in other words, {G_p : p ∈ P_n} is a (set) partition of G_n. Since integer partitions can be generated efficiently, we only need to show that each G_p can be generated efficiently. By "grouping together" all components of the same size, we can analyse an arbitrary functional digraph G ∈ G_p as the disjoint union of (possibly empty) digraphs G_i ∈ C_i^{p_i} for 1 ≤ i ≤ n.

Theorem 19. The set of functional digraphs over n vertices up to isomorphism can be generated with delay O(n^2) and using linear space.
Proof. Since integer partitions can be generated with linear delay [17], it suffices to show that each set G_p can be generated with delay O(n^2) in order to prove the theorem. The set G_p can be generated by enumerating the n-tuples ⟨G_1, ..., G_n⟩ of the Cartesian product ∏_{i=1}^{n} C_i^{p_i} and then outputting G_1 ⌢ ··· ⌢ G_n. Generating the next element of a Cartesian product may require, in the worst case, trying to compute the successor of each coordinate G_i ∈ C_i^{p_i}, and resetting that coordinate to the first element of C_i^{p_i} (that is, the functional digraph consisting of p_i cycles of length i) if there is no successor. By Lemma 16, each C_i^{p_i} can be generated with delay O(p_i i^2), which means that the generation delay for G_p is asymptotically proportional to Σ_{i=1}^{n} p_i i^2 ≤ n Σ_{i=1}^{n} p_i i. By recalling that Σ_{i=1}^{n} p_i i = n (Definition 17), we obtain the required quadratic upper bound on the delay. Since we never store more than one functional digraph at a time, and since the successor of each component can be computed in linear space (Theorem 13), the overall space requirement for this algorithm is also linear.

Fig. 3 shows an example generation of disconnected functional digraphs with a fixed partition of vertices into components, and highlights the similarity with the increment operation on a mixed-radix integer (with the extra constraint that components of the same size must be in nondecreasing order of generation).

Fig. 3. Generation (top to bottom, left to right) of the set of functional digraphs G_p over 8 vertices with partition p = (0, 1, 2, 0, 0, 0, 0, 0), corresponding to components of 2, 3, 3 vertices respectively. Each component is labelled by its generation order according to the algorithm of Theorem 13. As an example, the functional digraph in grey (the 7th generated one in G_p) has isomorphism code ⟨⟨⟨1⟩, ⟨1⟩⟩, ⟨⟨3, 2, 1⟩⟩, ⟨⟨1⟩, ⟨2, 1⟩⟩⟩.
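The Cartesian-product traversal in the proof above behaves exactly like incrementing a mixed-radix integer; a sketch with each coordinate abstracted as an integer counter (the function name and the abstraction are ours):

```python
def product_successor(tup, radices):
    # Coordinate i ranges over 0 .. radices[i] - 1, standing for the
    # generation order of the group C_i^{p_i}. Find the rightmost
    # coordinate having a successor; increment it and reset every
    # coordinate to its right to its first element.
    t = list(tup)
    for i in range(len(t) - 1, -1, -1):
        if t[i] + 1 < radices[i]:
            t[i] += 1
            return t[:i + 1] + [0] * (len(t) - i - 1)
    return None  # the tuple was the last element of the product
```

Starting from [0, 0] with radices [2, 3], repeated application enumerates all 2 · 3 = 6 tuples without repetition.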

Conclusions
We have described the first polynomial-delay generation algorithm for the class of functional digraphs, both connected and arbitrary, which proves that these classes of graphs can be generated with O(n^2) delay and linear space. It is, of course, an open problem to establish whether functional digraphs can be generated with a smaller delay. That would require us to somehow avoid testing O(n) possible merges in order to construct the next candidate digraph, or to avoid spending linear time in order to check for valid isomorphism codes. Otherwise, is an amortised constant-time delay possible?
Another interesting line of research is to find variations of the tree-merging approach suitable for the efficient generation either of restricted classes of functional digraphs (for instance, with cycles of given lengths or trees of given heights, which is sometimes useful in applications related to the decomposition of dynamical systems [10]), or of more general classes of graphs, without the uniform outdegree 1 constraint, possibly by means of a "functional digraph decomposition".

Fig. 1 .
Fig. 1. Isomorphism codes for a tree (Definition 1), a connected (Definition 2), and a disconnected functional digraph (Definition 14). Notice the different nesting depths of the angled brackets in the three codes.

Fig. 1 also contains an example of a connected functional digraph code.

Lemma 12.
Let C* be a component. Then every C ∈ Children(C*) is a candidate merge of C*. Furthermore, the set Children(C*) can be enumerated with delay O(n^2) and linear space.

Proof. Let us first assume that Children(C*) is nonempty, and let C ∈ Children(C*), with T_h the leftmost nontrivial tree of C and (U_1, U_2) = unmerge(T_h).

Fig. 2 .
Fig. 2. Reverse search tree for the generation of components of 4 vertices, represented both in graphical form (top) and as isomorphism codes (bottom); the order of generation by the algorithm of Theorem 13 is displayed on the top left. Note that the actual ordering of the children of each node depends on the (arbitrary) order we chose for the generation of the candidate merges; in the picture we first compute L_i(C*) for i from 1 to k − 1, then the same for the R_i(C*): this is the ordering that has been adopted in our implementation and that will be considered in the remainder of the paper.

Lemma 16.
C_n^m can be generated with delay O(mn^2) and in space O(mn).

Proof. We generate C_n^m starting from the functional digraph G_0 = ⟨⟨⟨1⟩, ..., ⟨1⟩⟩, ..., ⟨⟨1⟩, ..., ⟨1⟩⟩⟩ (m copies of the code consisting of n trivial trees), that is, the functional digraph consisting of m cycles of length n, which are the first elements (the roots of the solution trees) in the generation order of components. Now let G = ⟨C_1, ..., C_m⟩ be an arbitrary element of C_n^m. The successor G′ of G is computed by taking the rightmost element C_i that possesses a successor component C′_i, if any, and letting G′ = ⟨C_1, ..., C_{i−1}, C′_i, ..., C′_i⟩,