On Interval Decomposability of 2D Persistence Modules

In the persistent homology of filtrations, the indecomposable decompositions provide the persistence diagrams. However, in almost all cases of multidimensional persistence, the classification of all indecomposable modules is known to be a wild problem. One direction is to consider the subclass of interval-decomposable persistence modules, which are direct sums of interval representations. We introduce the definition of pre-interval representations, a more natural algebraic definition, and study the relationships between pre-interval, interval, and indecomposable thin representations. We show that over the ``equioriented'' commutative $2$D grid, these concepts are equivalent. Moreover, we provide a criterion for determining whether or not an $n$D persistence module is interval/pre-interval/thin-decomposable without having to explicitly compute decompositions. For $2$D persistence modules, we provide an algorithm together with a worst-case complexity analysis that uses the total number of intervals in an equioriented commutative $2$D grid. We also propose several heuristics to speed up the computation.


INTRODUCTION
In recent years, the use of topological data analysis to understand the shape of data has become popular, with persistent homology [18] as one of its leading tools. Persistent homology is used to study the persistence (the lifetime) of topological features such as holes, voids, etc., in a filtration, a one-parameter increasing family of spaces. These features are summarized in a persistence diagram, a compact descriptor of the birth and death parameter values of the topological features. This is enabled by the algebraic result that any 1D persistence module can be decomposed into intervals [10,20]. The endpoints of these intervals are precisely the birth and death values of topological features.
The focus on one-parameter families is a limitation of the current theory. While there is a need for practical tools applying the ideas of persistence to multiparametric data, multidimensional persistence [11] is known to be difficult to apply practically and in full generality. More precisely, there does not exist a complete discrete invariant that captures all the indecomposable modules in this setting. This is unlike the 1D case, where all indecomposables are guaranteed to be intervals and where the persistence diagram is a complete descriptor. In terms of representation theory, this difficulty is expressed by the fact that the commutative nD grid is of wild representation type (see [13, Definition 6.4]) for n ≥ 2 (and grids large enough).
One way to avoid these difficulties is to consider persistence modules that decompose into indecomposables contained in a restricted set. A promising candidate is the class of interval-decomposable persistence modules, which decompose into the so-called interval representations (see Definition 2.3). For example, the paper [14] provides a polynomial-time algorithm for computing the bottleneck distance between two 2D interval-decomposable persistence modules. The paper [5] studies stability for certain subclasses of interval-decomposable modules. In this work, we focus not only on interval representations, but also study some other related classes of indecomposable persistence modules. One reason is that the definition of interval representations used in the literature [5,6,14] depends on a choice of bases and seems to be overly restrictive. For example, being an interval representation is not closed under isomorphisms. This is unsatisfying from an algebraic/category-theoretic point of view. We review the definition of thin representations, introduce the new notion of pre-interval representations, and study the relationship among thin, pre-interval, and interval representations.
As one contribution of this work, we answer the following question in Section 3: Given an nD persistence module, is there a way to determine, without explicitly computing its indecomposable decomposition, whether or not it is (pre-)interval-decomposable or thin-decomposable? Given some set S of indecomposable persistence modules, we provide in Theorem 3.1 equivalent conditions for determining S-decomposability. In the case that S is a finite set, this translates into an implementable criterion.
In Section 4, we focus on the equioriented commutative 2D grid. It is clear that over a 1D grid (i.e. the quiver $\vec{A}_n$; see Section 2), being a thin indecomposable is equivalent to being isomorphic to an interval representation, since each indecomposable is isomorphic to an interval [20], and conversely, interval representations are automatically thin and indecomposable in general. In subsection 4.1, we show that this relationship also holds in the equioriented commutative 2D grid: any thin indecomposable is isomorphic to an interval representation. In subsection 4.2, we give examples for a non-equioriented commutative 2D grid and for an equioriented commutative 3D grid showing that this relationship does not hold in general. Finally, we provide a count of the total number of intervals in an equioriented commutative 2D grid in Theorem 4.11 by relating intervals in this setting to the so-called parallelogram polyominoes.
In Section 5, we provide a detailed algorithm (Algorithm 1) for determining interval-decomposability, based on Theorem 3.1, and give its computational complexity. In particular, we give detailed descriptions of the computation of almost split sequences ending at interval representations and of dimensions of homomorphism spaces, which are used to compute multiplicities of interval summands. Furthermore, we propose several heuristics to reduce the number of interval representations to be checked.
Related to the question of determining interval-decomposability, we note the following results. Previous works [12,8] show that a pointwise finite-dimensional persistence module satisfies a certain local property called exactness if and only if it is rectangle-decomposable. Rectangles are intervals, and thus this result gives a criterion for a restricted class of interval-decomposables. Unfortunately, no such local criterion exists for interval-decomposability [7]. We note that our criterion in Theorem 3.1 is not local, as it relies on the computation of dimensions of certain homomorphism spaces.
In their paper [15], Dey and Xin gave an algorithm to decompose a restricted class of nD persistence modules M. Their algorithm proceeds on the assumption that the module is "distinctly graded". One formulation of this condition is that there exists a projective presentation P_1 → P_0 → M → 0 of M with both P_0 and P_1 square-free modules. Furthermore, in version 5 of their arXiv preprint, they claim that their "algorithm can be applied to determine whether a persistence module is interval decomposable" [15]. When the module is not distinctly graded, one can arbitrarily fix an order on the grades. The number of possible orders is finite, and they claim that at least one of those orders provides a full decomposition of the module, so that it is enough to test all possible orders. However, we argue that this claim is erroneous. We provide a counterexample by giving an interval-decomposable module M that is not distinctly graded and such that no order on the grades leads to a full decomposition by applying their algorithm.
BACKGROUND

2.1. Quivers and their representations. We use the language of the representation theory of bound quivers. For more details, we refer the reader to the book [2], for example. Let us recall some basic definitions.
A quiver is a quadruple Q = (Q_0, Q_1, s, t) of sets Q_0, Q_1 and maps s, t : Q_1 → Q_0. If we draw each a ∈ Q_1 as an arrow a : s(a) → t(a), then Q can be presented as a directed graph. We call the elements of Q_0 (resp. the elements of Q_1, s(a), and t(a)) the vertices of Q (resp. the arrows of Q, the source of a, and the target of a, for each a ∈ Q_1). Let n be a positive integer. We denote by $\vec{A}_n$ the quiver presented as the directed graph 1 → 2 → ⋯ → n. The quiver $\vec{A}_n$ plays a central role in persistence theory. A subquiver Q′ of a quiver Q is a quiver Q′ = (Q′_0, Q′_1, s′, t′) such that Q′_0 ⊆ Q_0, Q′_1 ⊆ Q_1, and s′(a) = s(a), t′(a) = t(a) for all a ∈ Q′_1. A subquiver Q′ is said to be full if it contains all arrows of Q between all pairs of vertices of Q′.
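As a concrete illustration (ours, not from the paper), the quadruple (Q_0, Q_1, s, t) can be encoded directly; the names `Quiver` and `a_n` below are our own choices:

```python
from collections import namedtuple

# A quiver is a quadruple (Q0, Q1, s, t): vertices, arrows, source map, target map.
Quiver = namedtuple("Quiver", ["Q0", "Q1", "s", "t"])

def a_n(n):
    """The equioriented quiver A_n, presented as 1 -> 2 -> ... -> n."""
    Q0 = list(range(1, n + 1))
    Q1 = ["a%d" % i for i in range(1, n)]            # arrow a_i : i -> i+1
    s = {"a%d" % i: i for i in range(1, n)}
    t = {"a%d" % i: i + 1 for i in range(1, n)}
    return Quiver(Q0, Q1, s, t)

Q = a_n(4)
# A_4 has 4 vertices and 3 arrows; consecutive arrows are composable: t(a_i) = s(a_{i+1}).
```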
A quiver morphism from a quiver Q to a quiver Q′ is a pair (f_0, f_1) of maps f_0 : Q_0 → Q′_0 and f_1 : Q_1 → Q′_1 such that s′(f_1(a)) = f_0(s(a)) and t′(f_1(a)) = f_0(t(a)) for all a ∈ Q_1. A path from a vertex x to a vertex y of length n (≥ 1) in Q is a sequence α_n ⋯ α_2 α_1 of arrows α_1, α_2, …, α_n of Q such that s(α_1) = x, t(α_n) = y, and s(α_{i+1}) = t(α_i) for all 1 ≤ i ≤ n − 1. Here we call x and y the source and the target of this path, respectively. Note that such a path can be viewed as a quiver morphism f : $\vec{A}_{n+1}$ → Q with f(1) = x and f(n + 1) = y. Next, we give some definitions concerning convexity and connectedness in quivers.
Definition 2.1 ([2, p. 303], Convex subquiver). Let Q be a quiver. A full subquiver Q′ of Q is said to be convex in Q if and only if for all vertices x, y in Q′_0 and for all paths p from x to y in Q, all vertices of p are in Q′_0 (and thus p is a path in Q′).

Definition 2.2 (Connected). A quiver Q is said to be connected if it is connected as an undirected graph, namely, if for each pair x, y of vertices of Q there exist a quiver W with underlying graph of the form 1 — 2 — ⋯ — n for some n (≥ 1) and a quiver morphism f : W → Q such that f(1) = x and f(n) = y.

We give the following definition of intervals in a quiver.

Definition 2.3 (Interval subquiver). Let Q be a quiver. An interval of Q is a convex and connected subquiver of Q.
This definition is a generalization of the one in [6,14] for the commutative grids used in persistence theory. This in turn generalizes intervals of $\vec{A}_n$ in the usual sense: it is clear that an interval subquiver of $\vec{A}_n$ is a full subquiver containing exactly the vertices b, b + 1, …, d for some 1 ≤ b ≤ d ≤ n. The interest in intervals comes mainly from the intuition about $\vec{A}_n$ in persistence theory: they form the building blocks of representations of $\vec{A}_n$, are simple to describe (by the two parameters b and d), and have a useful interpretation as the births and deaths of topological features.
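The identification of intervals of $\vec{A}_n$ with pairs b ≤ d can be made concrete by brute force; a small sketch (the helper name `intervals_of_An` is ours):

```python
def intervals_of_An(n):
    """Intervals of the A_n quiver, encoded as pairs (b, d) with 1 <= b <= d <= n:
    the full subquiver on the vertices b, b+1, ..., d."""
    return [(b, d) for b in range(1, n + 1) for d in range(b, n + 1)]

# There are n(n+1)/2 such intervals; each pair (b, d) is read as the
# birth and death parameters of a topological feature.
```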
Throughout this work, we let K be a field, and Q a quiver. Paths in Q are said to be parallel if they have the same source and the same target. A relation is a K-linear combination of parallel paths of length at least 2. In what follows, we need the concept of bound quivers, which we denote by (Q, R) for a quiver Q with a set of relations R. First, we define the following special set of relations.
Definition 2.4 (Full commutativity relations). Let Q be a quiver. The set of full commutativity relations of Q is the set of all differences μ − ν of parallel paths μ, ν in Q of length at least 2.

A representation V of Q is said to be a representation of the bound quiver (Q, R) if V satisfies the relations in R, that is, ∑_i λ_i V(μ_i) = 0 for each relation ∑_i λ_i μ_i ∈ R (where V(μ) := V(α_m) ⋯ V(α_1) for a path μ = α_m ⋯ α_1). The category of finite-dimensional K-representations of (Q, R) will be denoted by rep_K(Q, R). In this work, we consider only finite-dimensional representations.
Below, we define the equioriented grid by taking a product of $\vec{A}_n$ quivers. First, we give the general definition of products of quivers.

Definition 2.6 (Products of quivers). Let Q = (Q_0, Q_1, s, t) and Q′ = (Q′_0, Q′_1, s′, t′) be quivers.
• The Cartesian product Q × Q′ is the quiver with set of vertices Q_0 × Q′_0 and set of arrows (Q_1 × Q′_0) ⊔ (Q_0 × Q′_1), where the sources and targets are determined by (a, x′) : (s(a), x′) → (t(a), x′) for a ∈ Q_1 and x′ ∈ Q′_0, and (x, a′) : (x, s′(a′)) → (x, t′(a′)) for x ∈ Q_0 and a′ ∈ Q′_1.
• The tensor product Q ⊗ Q′ is the bound quiver Q × Q′ with the commutativity relations (a, y′)(x, a′) − (y, a′)(a, x′) for all arrows a : x → y in Q and a′ : x′ → y′ in Q′.

Definition 2.7 (Equioriented commutative grid). Let m, n be positive integers. The bound quiver $\vec{G}_{m,n} = \vec{A}_m \otimes \vec{A}_n$, which is the 2D grid of size m × n with all arrows in the same direction and with full commutativity relations, is called the equioriented commutative grid of size m × n.
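To illustrate the Cartesian product construction underlying Definitions 2.6 and 2.7, the vertices and arrows of the grid $\vec{A}_m \times \vec{A}_n$ can be enumerated directly (a sketch of ours; the function name `grid_quiver` is an assumption):

```python
from itertools import product

def grid_quiver(m, n):
    """Cartesian product A_m x A_n: vertices are pairs (i, j); each arrow copies
    an arrow of one factor while fixing a vertex of the other, as in Definition 2.6."""
    Q0 = list(product(range(1, m + 1), range(1, n + 1)))
    # horizontal arrows (i, j) -> (i+1, j) and vertical arrows (i, j) -> (i, j+1)
    Q1 = ([((i, j), (i + 1, j)) for i in range(1, m) for j in range(1, n + 1)]
          + [((i, j), (i, j + 1)) for i in range(1, m + 1) for j in range(1, n)])
    return Q0, Q1

Q0, Q1 = grid_quiver(3, 2)
# 3*2 = 6 vertices; (3-1)*2 = 4 horizontal arrows plus 3*(2-1) = 3 vertical arrows.
```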
In this work, we use the convention of displaying $\vec{G}_{m,n}$ as a 2D grid with m columns and n rows, with arrows pointing right or up. The bound quiver $\vec{G}_{m,n} = (Q, R)$, where R is the set of full commutativity relations, can be understood using the tensor product of path algebras. That is, we have KQ/⟨R⟩ ≅ K$\vec{A}_m$ ⊗_K K$\vec{A}_n$, where KQ/⟨R⟩ denotes the quotient of the path algebra by the two-sided ideal generated by R. While not a focus of this paper, we define the equioriented commutative nD grid of size m_1 × ⋯ × m_n as the iterated tensor product $\vec{A}_{m_1} \otimes \cdots \otimes \vec{A}_{m_n}$. Similarly, non-equioriented versions of the commutative nD grid can be defined by taking the tensor product of A_{m_i}-type quivers where, for at least one i, the arrows in the ith factor do not all point in the same direction.

2.2. Representations of interest. Throughout this section, we let (Q, R) be a bound quiver. We start with the following straightforward definition.

Definition 2.8 (Thin representation). A representation V ∈ rep_K(Q, R) is said to be thin if dim_K V(x) ≤ 1 for all vertices x ∈ Q_0.
Note that we do not require indecomposability for thin representations. If V is thin and indecomposable, we say that V is a thin indecomposable. Next, we provide our definition of (pre-)interval representations of a general bound quiver.

Definition 2.9 (Interval and pre-interval representations).
(1) A representation V ∈ rep K (Q, R) is an interval representation if and only if • (Thinness) it is thin, and • (Interval support) its support supp(V ) is an interval of Q, and • (Identity over support) for all arrows α ∈ supp(V ), V (α) is an identity map. Note that this definition is not stable under isomorphism (see Remark 2.10). Thus, in this work, by interval representation we also mean "isomorphic to an interval representation" if there is no risk of confusion.
(2) If, instead of the third condition (Identity), V satisfies the condition • (Nonzero over support) for all arrows α ∈ supp(V ), V (α) is nonzero, then V is said to be a pre-interval representation.
Recall that the support of a representation V is the full subquiver on the vertices x with V(x) ≠ 0. Thus, the "identity/nonzero over support" condition means that if V(x) and V(y) are nonzero, then every arrow α : x → y has V(α) equal to the identity or nonzero, respectively.

Remark 2.10.
(1) The condition "identity over support" implies that V(x) and V(y) are equal as (one-dimensional) vector spaces. This condition is not stable under isomorphisms. For example, consider $\vec{A}_2$ and its ℝ-representations $V = (\mathbb{R} \xrightarrow{1} \mathbb{R})$ and $V' = (\mathbb{R} \xrightarrow{f} \mathbb{R}a)$, where a is a nonzero vector of some real vector space and f is the linear map determined by taking f(1) = a. Then V ≅ V′, and both are pre-interval. Clearly, V is interval, but V′ is not: the one-dimensional vector spaces ℝ and ℝa are not equal, only isomorphic. (2) In Section 4.2 we give examples where the three classes (thin indecomposable, pre-interval, isomorphic to an interval) are not equal.
(3) Under the thinness and interval support conditions, a representation V is isomorphic to an interval representation if and only if there exist bases v_x ∈ V(x) for all x ∈ supp(V)_0 such that the following holds:
(2.1) V(α) v_{s(α)} = v_{t(α)} for all arrows α in supp(V).
This condition will be used to show that a representation is isomorphic to an interval representation. (4) If the coefficient field is K = F_2, then every pre-interval representation is isomorphic to an interval representation. We note that in topological data analysis it is indeed common to choose the base field F_2. Thus, it may seem that there is no need to consider pre-interval representations. However, we note the following two reasons for considering fields other than F_2. First, homology over F_2 does not capture topological torsion; therefore, working with other fields provides more information. Second, decomposition of representations over F_2 presents some deep algebraic complications in the representation-infinite setting. An intuition into these complications can be obtained by contrasting the following two canonical forms arising in matrix decompositions. The Jordan canonical form, available over algebraically closed fields, is relatively simple compared to the rational canonical form, which involves irreducible polynomials in general. In this setting, decomposition over an infinite field (the algebraic closure) involves simpler summands.
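Remark 2.10(3) suggests a simple rescaling procedure over $\vec{A}_n$: propagate a basis along the quiver so that every structure map becomes the identity. A minimal sketch over the rationals (the function name is ours):

```python
from fractions import Fraction

def rescale_to_interval(scalars):
    """Given the nonzero scalars c_1, ..., c_{n-1} of a pre-interval representation
    K --c1--> K --c2--> ... of the A_n quiver, return bases v_1, ..., v_n
    satisfying V(a_i) v_i = v_{i+1} (Condition (2.1)), exhibiting an
    isomorphism to the interval representation."""
    v = [Fraction(1)]
    for c in scalars:
        v.append(Fraction(c) * v[-1])   # define v_{i+1} := V(a_i) v_i
    return v

v = rescale_to_interval([2, 3])   # K --2--> K --3--> K
# In the bases v = [1, 2, 6], every structure map sends basis vector to
# basis vector, i.e. acts as the identity.
```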
We note that our definition generalizes the usual definition of intervals and interval representations in the literature. For example, [5] and [6] define intervals and interval representations over posets in general, and [14] over the poset $\overline{\mathbb{R}}^n := (\mathbb{R} \cup \{\infty\})^n$. It is clear that, given a poset P, we can construct an acyclic quiver with full commutativity relations (Q, R), and vice versa, such that rep_K(Q, R) is equivalent to the category of pointwise finite-dimensional K-linear representations of P.
Then, it can be checked that an interval ∅ ≠ I ⊆ P in the sense of [5,6] corresponds to a nonempty interval I in the sense of our Definition 2.3. In this setting, convexity corresponds to the condition that a, c ∈ I and a ≤ b ≤ c imply b ∈ I. On the other hand, connectedness corresponds to the condition that for any a, c ∈ I, there is a sequence a = x_0, x_1, …, x_ℓ = c in I with x_i and x_{i+1} comparable for all 0 ≤ i ≤ ℓ − 1. Similarly, given an interval J, the interval module I_J as defined in Definition 2.1 of [6] is precisely an interval representation in the sense of our Definition 2.9 with support J.

Lemma 2.11 (See also [6, Prop. 2.2]). Let V be a nonzero representation of (Q, R). If V is an interval or pre-interval representation, then V is indecomposable.
Proof. The proof is similar to the proof of Prop. 2.2 in [6]. If V is an interval or pre-interval representation, then it is also thin, so without loss of generality we may assume that the vector spaces of V are K or 0. Then endomorphisms of V act at each vertex by multiplication by some scalar. By the commutativity requirements on endomorphisms together with the "nonzero over support" condition, the scalars over vertices in the same connected component of the support are all equal. Thus, by connectedness, End(V) ≅ K, and hence V is indecomposable.
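The poset-theoretic convexity and connectedness conditions above can be checked mechanically; a brute-force sketch for finite subsets of $\mathbb{Z}^2$ with the componentwise order (all helper names ours):

```python
def leq(a, b):
    """Componentwise partial order on Z^2."""
    return all(x <= y for x, y in zip(a, b))

def is_convex(I):
    """a, c in I and a <= b <= c implies b in I; every b between a and c
    lies in the bounding box, so it suffices to scan the box."""
    for a in I:
        for c in I:
            if leq(a, c):
                for b in ((x, y) for x in range(a[0], c[0] + 1)
                                 for y in range(a[1], c[1] + 1)):
                    if b not in I:
                        return False
    return True

def is_connected(I):
    """Connectedness via chains of pairwise-comparable elements."""
    I = set(I)
    if not I:
        return False
    seen, todo = set(), [next(iter(I))]
    while todo:
        x = todo.pop()
        if x in seen:
            continue
        seen.add(x)
        todo += [y for y in I if y not in seen and (leq(x, y) or leq(y, x))]
    return seen == I

def is_interval(I):
    return is_convex(I) and is_connected(I)
```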
In general, we have the following hierarchy of these classes of indecomposable representations:
(2.2) {interval representations} ⊆ {pre-interval representations} ⊆ {thin indecomposables} (up to isomorphism).
Later, we shall show that for the equioriented commutative 2D grid, these three collections are equal. We shall also provide examples where the inclusions are strict in the general case.
Finally, we provide the following definitions concerning these special classes of indecomposables.
Definition 2.12. Let (Q, R) be a bound quiver.
(1) A representation V ∈ rep_K(Q, R) is said to be interval-decomposable (resp. pre-interval-decomposable, thin-decomposable) if and only if each direct summand in some indecomposable decomposition of V is an interval representation (resp. pre-interval representation, thin representation).
(2) The bound quiver (Q, R) itself is said to be interval-finite (resp. pre-interval-finite, thin-finite) if and only if the number of isomorphism classes of its interval representations (resp. pre-interval representations, thin indecomposables) is finite.
In the rest of this work, we consider only bound quivers (Q, R) such that KQ/⟨R⟩ is a finite-dimensional K-algebra. This holds, for example, if ⟨R⟩ is an admissible ideal, or if Q is a finite acyclic quiver. With this assumption, we can use the Auslander–Reiten theory needed for the next section. Furthermore, we fix a complete set L of representatives of isomorphism classes of (finite-dimensional) indecomposable representations of (Q, R), which we identify with the set of vertices of the Auslander–Reiten quiver of (Q, R). For more details on Auslander–Reiten theory, we refer the reader to the books [2, 3].

2.3. Decomposition theory. First recall the Krull–Schmidt theorem, which can be stated as follows.

Theorem 2.13 (Krull–Schmidt). Any M ∈ rep_K(Q, R) has a decomposition M ≅ ⊕_{L∈L} L^{d_M(L)} into indecomposables, where the multiplicities d_M(L) ≥ 0 are uniquely determined by M.

In this subsection, let us review decomposition theory [1,17], which gives an algorithm to compute the multiplicity d_M(L) for all L ∈ L by using Auslander–Reiten theory. For the details of Auslander–Reiten theory, we refer the reader to [2, Chapter IV] or [3, Chapter V].
Here, we briefly provide the definitions required for Theorem 2.16 and its dual. For a representation M, recall that the sum of all simple submodules of M is called the socle of M, denoted by soc M, and that the intersection of the kernels of all homomorphisms from M to simple modules is called the radical of M, denoted by rad M. We set top M := M/rad M and call it the top of M. Note that the top of an indecomposable projective representation P and the socle of an indecomposable injective representation I are simple.

Definition 2.14. Let f : X → Y be a morphism of representations of (Q, R).
(1) f is said to be left minimal (resp. right minimal) if for any morphism h ∈ End(Y) (resp. h ∈ End(X)), h f = f (resp. f h = f) implies that h is an automorphism.
(2) A non-section (resp. non-retraction) f is said to be left almost split (resp. right almost split) if every non-section u : X → Z (resp. every non-retraction u : Z → Y) factors through f, that is, u = h f for some h : Y → Z (resp. u = f h for some h : Z → X).
(3) f is called a source map (resp. sink map) if f is both left minimal and left almost split (resp. right minimal and right almost split).
A short exact sequence 0 → L →^f E →^g N → 0 is an almost split sequence if f is a source map and g is a sink map. Source maps and sink maps from and to every indecomposable representation are given by a fundamental theorem of Auslander–Reiten theory (Theorem 2.16). In particular, (2) if L is not injective, then the source map f is given by the composite of α followed by an isomorphism, and it fits into an almost split sequence starting at L; in particular, U ≅ L_E. For each indecomposable representation L of (Q, R) we can decompose L_E and E_L as direct sums of indecomposables L_E ≅ ⊕_{X∈J_L} X^{a_L(X)} and E_L ≅ ⊕_{X∈K_L} X^{a'_L(X)} for a unique subset J_L (resp. K_L) of L and unique functions a_L : J_L → Z_{>0} (resp. a'_L : K_L → Z_{>0}). The function s_X is called the starting function from X; dually, t_X is called the stopping function to X. Using these, the formulae (2.4) and (2.6) have the following forms. Note that the value of s_X(M) (or t_X(M)) can be computed as the rank of a certain matrix defined by M for each X (see [1] for details). For completeness we have added the dual versions (2.5) and (2.6), which were not presented in [1, Thm. 3]. Later we will use formula (2.5) to examine the computational complexity of our algorithm for determining interval-decomposability.

DETERMINING S-DECOMPOSABILITY
To state our theorem, we first generalize the idea of interval-decomposability and thin-decomposability in the following way. Let S be a subset of the chosen complete set L of representatives of isomorphism classes of indecomposable representations. Then a representation M ∈ rep_K(Q, R) is said to be S-decomposable if and only if each direct summand in some indecomposable decomposition of M is isomorphic to an element of S. In this section, we use the decomposition theory to determine whether or not a given persistence module is S-decomposable, provided S is finite.
Theorem 3.1. Let S be a subset of L, and M ∈ rep(Q, R). Then the following are equivalent:
(1) M is S-decomposable;
(2) dim M = ∑_{X∈S} d_M(X) dim X;
(3) condition (2) with each multiplicity d_M(X) expressed, via the starting functions, in terms of the values s_L(M) for L ∈ S ∪ (∪_{L∈S} J_L).
In the case that S is finite, Theorem 3.1 gives us a criterion to determine the S-decomposability of a given M ∈ rep(Q, R). In particular, we only need to compute a finite number of values d_M(X) for X ∈ S and then compare dim M with ∑_{X∈S} d_M(X) dim X.
If these values are equal, then the given M ∈ rep(Q, R) is S-decomposable by the implication (2) ⇒ (1). The formula in (3) gives a criterion for S-decomposability using the function dim and the values s_X(M) of the starting functions from the indecomposable representations X ∈ S ∪ (∪_{L∈S} J_L), on which the computation of d_M(L) depends.
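As a toy illustration of this comparison (our own sketch; the multiplicities d_M(X) are assumed to have been precomputed by the decomposition-theory formulae):

```python
def is_S_decomposable(dim_M, summands):
    """Criterion of Theorem 3.1, implication (2) => (1): M is S-decomposable iff
    its dimension vector is exhausted by the summands found in it.
    dim_M: dimension vector of M as a tuple (one entry per vertex);
    summands: list of (d_M(X), dim X) pairs for X in S."""
    total = tuple(sum(d * dx[i] for d, dx in summands) for i in range(len(dim_M)))
    return total == dim_M

# A toy check on a 1D grid with 3 vertices:
# M = I[1,2] + I[2,3] has dimension vector (1, 2, 1).
ok = is_S_decomposable((1, 2, 1), [(1, (1, 1, 0)), (1, (0, 1, 1))])
```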
Thus, it is important to determine whether or not a particular bound quiver is thin-finite or (pre-)interval-finite. In Section 4, we study the equioriented commutative 2D grid. Here, we give the following trivial observations on some settings where the criterion given by Theorem 3.1 can be immediately applied.

Lemma 3.2. Let Q be a finite (bound) quiver, and K a finite field. Then Q is thin-finite.
Proof. Consider the number of possible thin representations of Q. Since Q has a finite number of arrows, and over each arrow a thin representation V can only have (up to isomorphism) a map f : K → K, K → 0, or 0 → K, where there are only finitely many possibilities for f ∈ Hom(K, K) ≅ K, the number of possible thin representations (up to isomorphism) of Q is finite.
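The counting in the proof can be carried out explicitly for $\vec{A}_2$; a brute-force sketch over a field with q elements (the encoding and names are ours):

```python
from itertools import product

def thin_reps_A2(q):
    """All thin representations of the A_2 quiver over a field with q elements,
    encoded as triples (dim at vertex 1, dim at vertex 2, arrow scalar).
    The scalar only matters when both dimensions are 1."""
    reps = []
    for d1, d2 in product([0, 1], repeat=2):
        if d1 == 1 and d2 == 1:
            reps += [(d1, d2, c) for c in range(q)]   # f in Hom(K, K) = K
        else:
            reps.append((d1, d2, None))
    return reps

# Over F_2 there are 5 such triples (including the zero representation).
```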
Note that because of the hierarchy in Ineq. (2.2), thin-finiteness implies pre-interval-finiteness. For finite quivers, interval-finiteness is automatic, as the next lemma shows.

Lemma 3.3. Let Q be a finite (bound) quiver. Then Q is interval-finite.
Proof. This follows by a counting argument for interval representations V similar to that of the previous lemma, but this time the only possibility for f : K → K is the identity, since V is an interval representation. Note that K being a finite field is not required.

EQUIORIENTED COMMUTATIVE 2D GRID
In this section, we focus our attention on the equioriented commutative 2D grid $\vec{G}_{m,n}$. We show that each thin indecomposable of $\vec{G}_{m,n}$ is isomorphic to an interval representation, and enumerate all interval representations of $\vec{G}_{m,n}$.
4.1. 2D thin indecomposables are interval representations. First, let us show that interval subquivers of $\vec{G}_{m,n}$ can only have a "staircase" shape. To make this precise, we define the following.
Let m and n be fixed positive integers, and let I_{m,n} be the set of all nonempty interval subquivers of $\vec{G}_{m,n}$. For 1 ≤ b ≤ d ≤ m and 1 ≤ j ≤ n, the slice [b, d] × {j} is the set of vertices {(i, j) | b ≤ i ≤ d}. We define I′_{m,n} to be the set of all sets of slices {[b_j, d_j] × {j}}_{s ≤ j ≤ t} such that 1 ≤ s ≤ t ≤ n, 1 ≤ b_j ≤ d_j ≤ m for all s ≤ j ≤ t, and b_{j+1} ≤ b_j ≤ d_{j+1} ≤ d_j for all s ≤ j < t. To make explicit the constants m and n, we say that such a set of slices is a staircase of $\vec{G}_{m,n}$.

Proposition 4.1. There is a bijection between I_{m,n} and I′_{m,n}.

Proof. We construct a set bijection f : I_{m,n} → I′_{m,n} together with its inverse f^{-1}. For each interval subquiver I ∈ I_{m,n}, we define f(I) to be the set of slices {[b_j, d_j] × {j}}_{s ≤ j ≤ t}, where s and t are the minimal and maximal row indices of vertices of I, and b_j := min{i | (i, j) ∈ I} and d_j := max{i | (i, j) ∈ I}. Note that since I is nonempty, 1 ≤ s ≤ t ≤ n. Then, for each j with s ≤ j ≤ t, the set {i | (i, j) ∈ I} is nonempty by the connectedness condition, and thus 1 ≤ b_j ≤ d_j ≤ m. Similarly, b_j ≤ d_{j+1} follows from the connectedness of I.
The correctness of the conditions b_{j+1} ≤ b_j and d_{j+1} ≤ d_j follows from the convexity of I. To see this, suppose to the contrary that b_{j+1} > b_j. Then we have a path from (b_j, j) to (b_{j+1}, j + 1) passing through the vertex (b_j, j + 1), which is not in I. This contradicts convexity. A similar argument shows that d_{j+1} ≤ d_j. The above arguments show that f(I) is indeed a staircase.
In the opposite direction, given a staircase {[b_j, d_j] × {j}}_{s ≤ j ≤ t}, we define f^{-1} of it to be the full subquiver on the vertices {(i, j) | s ≤ j ≤ t, b_j ≤ i ≤ d_j}, which can be checked to be an interval subquiver. It is clear that f and f^{-1} are inverses of each other.
When we display dimension vectors, we position each number dim_K V(x) at the position where the vertex x ∈ Q_0 is graphically displayed (see Example 4.2). By definition, each interval representation M of $\vec{G}_{m,n}$ can be uniquely expressed by its dimension vector, since it is uniquely determined by its support.
By Proposition 4.1, we identify interval subquivers of $\vec{G}_{m,n}$ with staircases of $\vec{G}_{m,n}$. Thus, we shall also denote an interval by writing it as a set of slices forming a staircase from s to t. We can visualize the correspondence f : I_{m,n} → I′_{m,n} in the proof of Proposition 4.1 using the dimension vector notation and the staircase notation. Below, we illustrate some examples under this correspondence for $\vec{G}_{6,4}$.
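The staircase description also makes the intervals of $\vec{G}_{m,n}$ easy to enumerate by brute force; a sketch (the function name is ours) that generates each staircase as a starting row s together with its list of slices:

```python
def staircases(m, n):
    """Enumerate the intervals of the equioriented commutative grid G_{m,n} as
    staircases (Proposition 4.1): a starting row s and slices [b_j, d_j] for
    rows j = s, ..., t, subject to b_{j+1} <= b_j <= d_{j+1} <= d_j."""
    result = []
    for s in range(1, n + 1):
        for t in range(s, n + 1):
            chains = [[(b, d)] for b in range(1, m + 1) for d in range(b, m + 1)]
            for _ in range(t - s):     # extend the staircase one row at a time
                chains = [c + [(b2, d2)]
                          for c in chains
                          for b2 in range(1, c[-1][0] + 1)           # b_{j+1} <= b_j
                          for d2 in range(c[-1][0], c[-1][1] + 1)]   # b_j <= d_{j+1} <= d_j
            result += [(s, c) for c in chains]
    return result

# For example, this enumeration gives 11 intervals for G_{2,2}, and the count is
# symmetric in m and n, as expected from transposing the grid.
```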
Using this staircase shape, we are able to prove the following lemma.

Lemma 4.3. Let m, n be positive integers. Any pre-interval representation of $\vec{G}_{m,n}$ is isomorphic to an interval representation.
Proof. The support supp(V) is an interval by definition, and thus a staircase by Proposition 4.1. Set B to be the quiver supp(V) with the full commutativity relations on it. Then V can be regarded as a representation of B.
Let B′ be the bound quiver obtained from supp(V) by flipping all of its vertical arrows, together with the full commutativity relations. Thus the quiver of B′ is a subquiver of a non-equioriented 2D grid. Then, by replacing all maps of V associated to the vertical arrows in supp(V) (which are nonzero by definition) by their inverses, we obtain a representation V′ of the bound quiver B′. To see that the commutativity relations in B′ are satisfied by V′, we note that a square of nonzero linear maps commutes if and only if the square obtained by inverting its vertical maps commutes. We illustrate the construction in Diagram (4.1), showing the quivers of B and B′, respectively. We view V′ as a representation of B′ and not of the full grid; so, for example, there is no problem with the upper right portion of the quiver of B′ in Diagram (4.1) not satisfying a zero relation.
In general, let x be the upper left corner of the quiver of B′, and take a nonzero element v_x of K = V′(x). For each vertex y of B′ there exists a path μ from x to y in the quiver of B′, because supp(V) has a staircase shape. Take v_y := V′(μ)v_x as the basis of V′(y). Since B′ is defined by the full commutativity relations, v_y does not depend on the choice of μ. In this way we can find bases v_y of V′(y) for all vertices y in supp(V′) that satisfy Condition (2.1) in Remark 2.10. Now the v_y are also bases of V(y) = V′(y) for all y ∈ supp(V)_0 and satisfy Condition (2.1) for V. Thus V is isomorphic to an interval representation.
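The basis propagation in this proof can be sketched in code. To keep the example self-contained we generate the arrow scalars from a vertex potential phi (so commutativity holds automatically), and we work on a full m × n rectangle instead of a general staircase; all names are ours:

```python
from fractions import Fraction

def rescale_rectangle(phi):
    """Sketch of the basis construction in Lemma 4.3 on a full rectangle.
    phi[(i, j)] are nonzero integers; each arrow x -> y acts by the scalar
    phi(y)/phi(x), which makes every square commute. We then propagate a basis
    v from the corner (1, 1) along paths, as in the proof."""
    scal = lambda x, y: Fraction(phi[y], phi[x])     # arrow x -> y acts by this
    v = {(1, 1): Fraction(1)}
    for (i, j) in sorted(phi):
        if (i, j) in v:
            continue
        x = (i - 1, j) if (i - 1, j) in v else (i, j - 1)   # predecessor on a path
        v[(i, j)] = scal(x, (i, j)) * v[x]                  # v_y := V(mu) v_x
    # in the bases v, every structure map sends basis vector to basis vector:
    ident = all(scal(x, y) * v[x] == v[y]
                for x in phi
                for y in ((x[0] + 1, x[1]), (x[0], x[1] + 1)) if y in phi)
    return v, ident
```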
Finally, we prove the main result of this subsection: every thin indecomposable representation of $\vec{G}_{m,n}$ is isomorphic to an interval representation.

Proof. The proof proceeds by contradiction, in two steps. First we show that any thin indecomposable representation that is not pre-interval has two nonzero vector spaces with a path containing a zero map between them. Then we show that this implies that the representation is decomposable. Lemma 4.3 then allows us to conclude. Assume for contradiction that M is a thin indecomposable that is not a pre-interval representation. As M is an indecomposable representation, its support is connected. Therefore, either the convexity condition on the support of M fails, or the "nonzero maps over support" condition fails. In the first case, there exist vertices x, y, z ∈ Q_0 such that there is a path from x through y to z with M(x) ≠ 0, M(y) = 0, and M(z) ≠ 0. In the second case, there exists an arrow α : x → z with M(x) ≠ 0, M(z) ≠ 0, and M(α) = 0. In either case, we have a path p from x to z with M(x) ≠ 0 and M(z) ≠ 0 such that p contains an arrow α with M(α) = 0.
Let us consider the representation M on a square with one corner at (i, j) ∈ Q_0 in the grid, with vertical maps l, r and horizontal maps b, t given by the values of the representation M on the arrows (for example, r := M(β), where β is the arrow β : (i + 1, j) → (i + 1, j + 1)). By full commutativity, the two paths (compositions of maps) from M(i, j) to M(i + 1, j + 1) are equal: rb = tl. Since the vector spaces have dimension at most 1 because M is thin, we can conclude the following: if at least one map on one of these paths is zero, then there is a zero map on the other path.
We use the above observation to build a line L across the grid that intersects only zero maps of M and separates M. We start with the arrow α with M(α) = 0 found previously and inductively extend the line using the following observation: at each square of the grid, at least one of four patterns is possible, where in each pattern the line (colored red) crosses a pair of arrows β_1, β_2 with M(β_1) = 0 and M(β_2) = 0. Note that if more maps are zero, we simply ignore them and choose to extend our line using only one of the four given patterns. As we are working over a finite 2D grid, this line cannot form a circle. Therefore it goes from one boundary of the grid to another, and divides the grid into two regions, whose vertices we denote by V_ℓ and V_r, for "left/bottom" and "right/top", respectively. Furthermore, both regions are nontrivial: by construction, x ∈ V_ℓ and z ∈ V_r with M(x) ≠ 0 and M(z) ≠ 0, since the arrow α was found as part of a path from vertex x to z with those properties. Let Q_ℓ = (V_ℓ, E(V_ℓ)) and Q_r = (V_r, E(V_r)) be the full subquivers generated by V_ℓ and V_r respectively, and let E(L) be the set of the arrows intersecting the line L constructed above. Then the arrows of the grid are partitioned as E(V_ℓ) ⊔ E(L) ⊔ E(V_r). To see this, we note that by construction E(V_ℓ) and E(V_r) are disjoint. Furthermore, E(L) is by definition the set of arrows going from a vertex of V_ℓ to a vertex of V_r, and is disjoint from E(V_ℓ) and E(V_r). Finally, each arrow of the grid is in one of these three sets. In Figure 1, we illustrate this partitioning.
Consider the representations M_ℓ and M_r obtained by setting M to be zero outside of Q_ℓ and Q_r, respectively. The support of M_ℓ is contained in Q_ℓ. Note that by construction the arrows exiting Q_ℓ are exactly the arrows in E(L), which all carry a zero map in M. Hence M_ℓ is a subrepresentation of M. Clearly, M_r is a subrepresentation of M, since there are no arrows exiting Q_r. Furthermore, as M restricted to E(L) is zero, we conclude that M = M_ℓ ⊕ M_r. Since M_ℓ(x) ≠ 0 and M_r(z) ≠ 0, this decomposition is nontrivial, and thus M is decomposable, a contradiction. Therefore M is a pre-interval representation, and Lemma 4.3 implies that M is isomorphic to an interval representation.

4.2. Interesting examples. In this subsection, we give some interesting examples where a thin indecomposable may not be isomorphic to an interval representation.
Over the equioriented commutative 3D grid, we provide the following example. Let λ be any element of K, and define the representation M(λ) as follows. This indecomposable, and its higher-dimensional versions, were studied in the paper [9], where topological realizations were also given for λ = 0. It is easy to see that M(λ) is indecomposable, and for any λ ≠ 1 it is not an interval representation, nor isomorphic to one. Moreover, M(0) is not a pre-interval representation and is not isomorphic to one, but is still a thin indecomposable. Next, if the arrows are not oriented in the same direction, some thin indecomposables may not be interval representations. An example is the representation of a non-equioriented commutative 2D grid with λ ≠ 1. If λ is neither 0 nor 1, this also gives an example of a pre-interval representation that is not an interval representation (and not isomorphic to one). The above are variations on the same theme: we have an example of a thin indecomposable that is not a pre-interval representation (when λ = 0), and an example of a pre-interval representation that is not isomorphic to an interval representation (when λ is neither 0 nor 1). Hence the inclusions in the hierarchy of Ineq. (2.2) are strict.
Next, let us provide an example of a bound quiver (Q, R) for which pre-interval representations are always isomorphic to interval representations, but thin indecomposables are not always pre-interval representations. The quiver (Q, R) below admits a thin indecomposable representation that is not a pre-interval representation. Now suppose that V ∈ rep_K(Q, R) is a pre-interval representation. In the case that V is a simple representation, it is automatically an interval representation. Otherwise, V is isomorphic to a representation of the form below, with structure maps f and g. Then, the relations R imply that fg − fgfg = 0 and gf − gfgf = 0. Together with the fact that f and g are nonzero because V is a pre-interval representation, we see that f and g are mutually inverse isomorphisms. Thus, V is isomorphic to an interval representation.

4.3. Listing all 2D intervals. By definition, an interval representation is uniquely identified by its support, an interval subquiver. Recall that I_{m,n} is the set of all nonempty interval subquivers of the equioriented commutative 2D grid #-G m,n. In this subsection, we count the elements of I_{m,n}. Recall that by Proposition 4.1, we identify interval subquivers of #-G m,n with staircases of #-G m,n, and denote an interval I by writing it as a set of slices {[b_j, d_j]_j | s ≤ j ≤ t}. We define the size of I as follows.

Definition 4.5 (Size of interval). For an interval I = {[b_j, d_j]_j | s ≤ j ≤ t}, the size of I is the pair (w, h) given by the width w = max_j d_j − min_j b_j + 1 and height h = t − s + 1 of the bounding box of I.
Thus, by Equation (4.5), to calculate #I_{m,n} it is enough to calculate the numbers #R(w, h). Next we give an explicit form for the value of #R(w, h) by relating it to a well-known concept in combinatorics. Equivalently, a parallelogram polyomino with a w × h bounding box is a pair of non-increasing lattice paths P, Q from (0, h) to (w, 0) such that P lies entirely above Q, and P and Q intersect only at (0, h) and (w, 0). In the definition above, h is taken to be the height and w the width. An example of a parallelogram polyomino having a 6 × 4 bounding box is given below.
By interpreting a staircase I as a set of filled-in boxes on the lattice (not the grid lines!), it is clear that staircases in R(w, h) are in one-to-one correspondence with parallelogram polyominoes with a w × h bounding box. The example above is identified with a staircase in the following way. The Narayana number N(a, b) is defined using binomial coefficients as N(a, b) := (1/a) C(a, b) C(a, b − 1).
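The correspondence between staircases and parallelogram polyominoes can be checked computationally. The following sketch (ours, not from the paper) brute-forces the staircases with an exact w × h bounding box and compares the count with the Narayana number N(w + h − 1, w), the classical count of parallelogram polyominoes with a w × h bounding box.

```python
from math import comb

def narayana(a, b):
    # N(a, b) = (1/a) C(a, b) C(a, b - 1)
    return comb(a, b) * comb(a, b - 1) // a

def count_staircases(w, h):
    """Count staircases with exact w x h bounding box: slices (b_j, d_j)
    for j = 1 (bottom) to h (top), shifting left going up, i.e.
    b_{j+1} <= b_j <= d_{j+1} <= d_j, with d_1 = w (bottom slice touches
    the right edge) and b_h = 1 (top slice touches the left edge)."""
    def extend(b, d, rows_left):
        if rows_left == 0:
            return 1 if b == 1 else 0   # top slice must reach column 1
        return sum(extend(b2, d2, rows_left - 1)
                   for b2 in range(1, b + 1)       # b_{j+1} <= b_j
                   for d2 in range(b, d + 1))      # b_j <= d_{j+1} <= d_j
    return sum(extend(b1, w, h - 1) for b1 in range(1, w + 1))
```

For instance, `count_staircases(2, 2)` returns 3, matching N(3, 2) = 3; summing N(s − 1, w) over a fixed semiperimeter s recovers the Catalan numbers, as expected for parallelogram polyominoes.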
Narayana numbers are closely related to counting problems for parallelogram polyominoes. In particular, the following fact is well known (see, for example, [4]). Hence we obtain the following formulas.
Proof. We use Lemma 4.8 and Proposition 4.10 to obtain the first formula; the second formula follows from Equation (4.5).
In particular, for an equioriented commutative 2D grid of size m × 2 (an equioriented commutative ladder [19]), we obtain the following formula. Remark 4.13. We can apply Theorem 3.1 to a given representation M of #-G m,n in order to determine whether or not it is interval-decomposable. Theorem 4.11 then gives the cardinality of the set S of intervals over which we need to compute multiplicities. This cardinality grows rapidly with m and n. To mitigate this, we may replace the original quiver #-G m,n by the smallest equioriented commutative 2D grid containing the support of M.

ALGORITHMS AND COMPUTATIONAL COMPLEXITY
In this section, we provide a detailed algorithm for determining interval-decomposability, based on Theorem 3.1. In the final subsection, we also give a remark concerning the use of the decomposition algorithm given in [15], for computing interval-decomposability. Here, we let ω < 2.373 be the matrix multiplication exponent [16,23].
Given a 2D persistence module M over Q = #-G m,n, the following procedure can be used to determine whether or not M is interval-decomposable. Let us first give an overview of Algorithm 1. We initialize dimVecRemaining, which holds the dimensions of the vector spaces not yet processed by the algorithm. In particular, we let dimVecRemaining_{x,y} hold the dimension at (x, y), i.e. column x and row y, counting from the bottom. For example, below is the underlying quiver of #-G 4,3, which has 4 columns and 3 rows. For clarity, the (x, y) coordinates of the corner points are labelled.
The main action happens in Line 10, where we decrement dimVecRemaining by the dimension vector of some interval L multiplied by its multiplicity d M (L) in M. Ignoring for a moment all the places where the algorithm can terminate early, if we simply iterate through all intervals L of the grid #-G m,n , then by Theorem 3.1 M is interval-decomposable if and only if dimVecRemaining x,y is 0 for all (x, y), at the end of the algorithm.
Algorithm 1 orders the processing of the intervals L so that there is a possibility of stopping early. In particular, we order the intervals by their lower-right corners (x, y), in order of decreasing x and increasing y (the two outer for-loops in Algorithm 1). The procedure GET-CANDIDATES(x, y) (in Algorithm 2) generates the intervals with lower-right corner given by (x, y). If, after processing all such candidates for some fixed lower-right corner (x, y), dimVecRemaining_{x,y} is nonzero, then we know that M cannot be interval-decomposable. Indeed, the way we iterate over all possible lower-right corners ensures that once we finish processing (x, y), the value of dimVecRemaining_{x,y} can no longer change. Recall that we use the following notation for a staircase (an interval, by Proposition 4.1).
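The control flow just described can be sketched as follows (our illustration, not the paper's listing; `get_candidates` and `multiplicity` are hypothetical stand-ins for Algorithms 2 and 3, and intervals are encoded as sets of vertices (x, y)).

```python
def is_interval_decomposable(dim_vec, m, n, get_candidates, multiplicity):
    """Test the criterion of Theorem 3.1: M is interval-decomposable
    iff the multiplicities of the intervals exactly account for the
    dimension vector of M.  dim_vec[(x, y)] is dim M at vertex (x, y)."""
    remaining = dict(dim_vec)
    for x in range(m, 0, -1):                 # lower-right corners, x decreasing
        for y in range(1, n + 1):             # and y increasing
            for L in get_candidates(x, y):    # intervals with corner (x, y)
                d = multiplicity(L)           # d_M(L), via Algorithm 3
                if d:
                    for v in L:               # subtract d times dim vector of L
                        remaining[v] -= d
                        if remaining[v] < 0:  # overshoot: cannot be decomposable
                            return False
            # corner (x, y) is now fully processed and can no longer change
            if remaining[(x, y)] != 0:
                return False
    return True
```

The two inner early exits realize the early-stopping opportunities discussed above; without them, one would simply check that all entries of `remaining` are zero at the end.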
In Algorithm 2, we enumerate the candidate intervals with the coordinates of the "lower-right" corner fixed: d_s = x and s = y. Starting with the lower-right corner, we progressively build up taller and taller intervals.
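The enumeration can be sketched as follows (our illustration; the function name and vertex encoding are ours). Each interval is built as a staircase of row slices [b_j, d_j] whose endpoints do not increase going upward, starting from the fixed lower-right corner (x, y).

```python
def get_candidates(x, y, n):
    """Enumerate staircases with lower-right corner (x, y): the bottom
    slice sits on row y with right endpoint d = x, and each slice
    (b2, d2) stacked on top of (b, d) satisfies b2 <= b <= d2 <= d
    (rows shift left going up).  Intervals are returned as frozensets
    of vertices (column, row); n is the number of rows of the grid."""
    results = []
    stack = [[(b, x)] for b in range(1, x + 1)]   # choose b_s for the bottom slice
    while stack:
        rows = stack.pop()
        results.append(frozenset(
            (c, y + j) for j, (b, d) in enumerate(rows) for c in range(b, d + 1)))
        if y + len(rows) - 1 < n:                  # room to grow one row taller
            b, d = rows[-1]
            stack.extend(rows + [(b2, d2)]
                         for b2 in range(1, b + 1)
                         for d2 in range(b, d + 1))
    return results
```

Summing the counts over all corners (x, y) of a grid recovers #I_{m,n}; for the 2 × 2 grid this gives 11 intervals.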
Next, we also write down Algorithm 3 for computing the multiplicity, to be used in Line 10 of Algorithm 1. The correctness of Algorithm 3 follows from formula (2.5) of Theorem 2.16. The major components of Algorithm 3 are the computation of the terms of almost split sequences ending at nonprojective intervals, which we provide as the function ALMOSTSPLITSEQUENCETERMS (Algorithm 4), and the computation of dim Hom(M, −); if L is projective, Algorithm 3 instead sets τL ← 0 and E_L ← rad L. We devote the next few pages to the discussion of Algorithm 4.
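For reference, formula (2.5) of Theorem 2.16, which Algorithm 3 implements, presumably takes the following standard shape from Auslander–Reiten theory (our transcription into this section's notation; the theorem itself is the authoritative statement):

```latex
d_M(L) =
\begin{cases}
\dim \operatorname{Hom}(M, V_L) - \dim \operatorname{Hom}(M, E_L)
  + \dim \operatorname{Hom}(M, \tau V_L), & V_L \text{ non-projective},\\[2pt]
\dim \operatorname{Hom}(M, V_L) - \dim \operatorname{Hom}(M, \operatorname{rad} V_L),
  & V_L \text{ projective},
\end{cases}
```

where E_L is the middle term of the almost split sequence ending at V_L. This matches the quantities dim Hom(M, Y) for Y ∈ {L, rad L, τL, E_L} whose computation is analyzed later in this section.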

Proposition 5.1 (Section 3.2 of [21]). Let A be a finite-dimensional algebra. Then, given a non-projective indecomposable A-module Z with End Z ∼= K, the almost split sequence ending at Z can be computed by the procedure described below.
In Algorithm 4, we use the following basic concepts. We recall that D is the K-dual functor Hom_K(−, K). We note that Section 3.2 of [21] in fact provides the procedure for any non-projective indecomposable module Z. Here, we restrict our attention to Z with End Z ∼= K, since all interval representations Z satisfy this condition, and this simplifies the choice of S in Line 5 (in general, another condition needs to be imposed on S).
Below, we go through the steps of Algorithm 4 given an interval representation Z = V_L and analyze its complexity. Furthermore, this restriction to intervals simplifies some of the computation, as we can provide explicit forms for the minimal projective presentation and the map ν f_1. (Line 2 of Algorithm 4). Let V_L be the interval representation associated with the interval subquiver L = {[b_j, d_j]_j | s ≤ j ≤ t}. Below, we explain the computation of a minimal projective presentation P_1 → P_0 → V_L → 0. First, let us review some basic concepts. Recall that #-G m,n can be regarded as a subposet of Z × Z via the order (x_1, y_1) ≤ (x_2, y_2) ⇐⇒ x_1 ≤ x_2 and y_1 ≤ y_2 for vertices (x_1, y_1), (x_2, y_2) of #-G m,n. For any pair of vertices a = (x_1, y_1) and b = (x_2, y_2) of #-G m,n, we denote their join (resp. meet) by a ∨ b (resp. a ∧ b). These always exist in #-G m,n and are given by a ∨ b = (max(x_1, x_2), max(y_1, y_2)) and a ∧ b = (min(x_1, x_2), min(y_1, y_2)).
For each interval L, we fix a representation V_L, isomorphic to the interval representation associated to L, as follows. For each a ∈ L_0, we set V_L(a) to be the K-vector space of multiples of the vertex a itself (with a as fixed basis), and for each arrow α of L, V_L(α) sends the basis element of its source to that of its target. We recall fundamental facts on representations of quivers and modules over algebras by specializing to our case. Set A := K #-G m,n, and let J be the ideal of A generated by all arrows in #-G m,n. It is well known that J is the Jacobson radical of A. For each a, b ∈ (#-G m,n)_0 we denote by p_{a,b} the element of the algebra A represented by a path from b to a, and we set e_a := p_{a,a}. Note that p_{a,b} is uniquely determined by a and b because #-G m,n has the full commutativity relations. Then there exists a well-known equivalence between the category of representations of #-G m,n and the category of (left) A-modules, sending each representation V to the corresponding A-module; as is easily verified, this equivalence is compatible with the bases fixed above. To compute a minimal projective presentation of V_L, we need the concept of an upset, a special type of interval. The overall strategy is to first compute minimal projective presentations of V_U for upsets U (Proposition 5.6). Then, as V_L has the form V_L ∼= V_U / V_{U′} for some upsets U, U′ (see Lemma 5.7), we piece together the minimal projective presentations of V_U, V_{U′} to obtain that of V_L. Definition 5.4 (upsets and upset representations). A subset U of (#-G m,n)_0 is called an upset if the conditions x ≤ y in (#-G m,n)_0 and x ∈ U imply y ∈ U. Obviously, the intersection of any family of upsets is again an upset. Therefore, for each subset S of (#-G m,n)_0 there exists a minimum upset U of (#-G m,n)_0 such that S ⊆ U, which we denote by U(S). When S = {a} for some a ∈ (#-G m,n)_0, we simply write U(a) := U({a}).
Since for any upset U, the full subquiver full(U) of #-G m,n with full(U) 0 = U turns out to be an interval (see the lemma below), the interval representation V U := V full(U) is defined, and is called an upset representation.
The following is obvious. We now give a minimal projective presentation of an upset representation.
Proof. We first show that the equality holds. It is enough to show the equality at all vertices p ∈ (#-G m,n)_0. If p ∉ U, then dim V_U(p) = 0 by definition, and we have p ≱ p_c for all c = 1, . . . , l, and hence p ≱ q_d for all d = 1, . . . , l − 1. Thus the left-hand side is also zero, and Eq. (5.7) holds. Assume p ∈ U. We set {p_{j_1}, . . . , p_{j_t}} := {p_c | p_c ≤ p} with j_1 > · · · > j_t. Note that the indices j_i are contiguous integers. Then ∑_{c=1}^{l} dim P(p_c)(p) = #{p_c | p_c ≤ p} = t, and ∑_{d=1}^{l−1} dim P(q_d)(p) = #{q_d | q_d ≤ p} = #{q_{j_1}, . . . , q_{j_{t−1}}} = t − 1. Since dim V_U(p) = 1 = t − (t − 1), Eq. (5.7) holds also in this case, and hence the equality (5.6) is verified. Now since all ε_{q_c, p_c} are monomorphisms, f_U is also a monomorphism. On the other hand, π_U is an epimorphism because Im π_U = ∑_{c=1}^{l} Im ε_{p_c, V_U} = V_U by Lemma 5.5(3). Furthermore, since ε_{p_c, V_U} ε_{q_c, p_c} = ε_{p_{c+1}, V_U} ε_{q_c, p_{c+1}}, we have π_U f_U = 0. These facts, together with the equality (5.6), show that the sequence (5.5) above is exact.
Obviously π U induces an isomorphism between the tops, and hence it is a projective cover of V U . The exactness of the sequence (5.5) shows that f U is a projective cover of Ker π U . Therefore the sequence (5.5) is a minimal projective presentation of V U .
where b_j := b_t for j = t + 1, . . . , n. Then U and U′ are upsets satisfying V_L ∼= V_U / V_{U′}. Proof. Both U and U′ are upsets by Lemma 5.5(5). The statement follows by a direct calculation. Write Source(U) = {p_1, . . . , p_l} with p_c = (b_{j_c}, j_c) (c = 1, . . . , l), where n ≥ j_1 > · · · > j_l ≥ 1. Set also c(r) := min{c | p_c ≤ r} for all r ∈ Source(U′). Then a minimal projective presentation of V_L is given by ⊕_{r ∈ Source(U′)} P(r) ⊕ ⊕_{d=1}^{l−1} P(q_d) → ⊕_{c=1}^{l} P(p_c) → V_L → 0, where π_L := (ε_{p_c, V_L})_{c=1}^{l}, and f′_L := (δ_{c, c(r)} ε_{r, p_{c(r)}})_{c,r}. Here, δ_{ij} is the Kronecker delta.
Proof. For simplicity we put P_0 := ⊕_{c=1}^{l} P(p_c) and P_1 := ⊕_{d=1}^{l−1} P(q_d). Then we have an exact sequence which is a minimal projective presentation of V_U by Proposition 5.6. In the same way, we construct a minimal projective presentation of V_{U′} of the form given below. Then vπ_U induces an isomorphism between the tops, and hence π_L := vπ_U : P_0 → V_L is a projective cover of V_L. Set ΩV_L := Ker π_L and let µ : ΩV_L → P_0 and u : V_{U′} → V_U be the inclusions. Then there exist unique morphisms g, g′ that make the following diagram commute, with exact rows and columns: Consider the following diagram of solid arrows with exact rows: By the projectivity of P′_0, this is completed to a commutative diagram with h, h′. We may take h in such a way that µh : P′_0 → P_0 is given by the matrix f′_L := (δ_{c,c(r)} ε_{r,p_{c(r)}})_{c,r}. Indeed, since u is a monomorphism, the equality u g′ h = π_U µ h = π_U (δ_{c,c(r)} ε_{r,p_{c(r)}})_{c,r} (∗)= u π_{U′} shows that g′h = π_{U′}, where the equality (∗) holds because the restrictions of both sides to P(r) coincide for all r ∈ Source(U′).
Since the left square of the diagram (5.10) is a pushout and pullback diagram, we have the following exact sequence: Here (h, g) is a projective cover of ΩV_L. Indeed, since π_{U′} is a projective cover of V_{U′}, we have Im f_{U′} ⊆ rad P′_0, and by the form of µh = f′_L we have Im h′ ⊆ rad P_1. Therefore (h, g) is a projective cover, as required. By connecting this sequence with the upper horizontal short exact sequence in the diagram (5.9), we obtain a minimal projective presentation of V_L.
Complexity analysis for Line 2 of Algorithm 4. We let l = |Source(L)| = |Source(U)| and l′ := |Source(U′)|. Furthermore, we set z := min{m, n} and assume without loss of generality that n = z. Note that l, l′ ≤ z. We give the cost of calculating (symbolically) the minimal projective presentation of V_L as given by Proposition 5.8. For this, we need to compute U, U′ (Lemma 5.7) and their source vertices Source(U′) and Source(U) = {p_1, . . . , p_l}, where p_c = (b_{j_c}, j_c) for c = 1, . . . , l, with n ≥ j_1 > · · · > j_l ≥ 1. Then we need to compute q_d := p_d ∨ p_{d+1} for all d = 1, . . . , l − 1, and c(r) = min{c | p_c ≤ r} for each r ∈ Source(U′).
• First, the computation of U and U′ from L follows using Lemma 5.7. This costs O(z) by an obvious iteration over rows.
• Computing the source vertices, the joins q_d, and the indices c(r) costs O(l + l′ log(l)).
Thus, overall we have a cost of O(z + l + l′ log(l)) ≤ O(z log(z)). This ends our discussion and analysis of Line 2 of Algorithm 4, the computation of a minimal projective presentation ending at an interval representation V_L. Let us move on to the next line.
(Line 3 of Algorithm 4). By Proposition 5.8, the morphism f_1 in the minimal projective presentation of V_L has the form (f′_L, f_U). By ν, this is sent to (5.11), using the remark in Definition 5.3. Note that there is no need for new computations here: in the previous step we have symbolically calculated the minimal projective presentation by calculating Source(U) = {p_1, . . . , p_l}, Source(U′), q_d = p_d ∨ p_{d+1} for d = 1, . . . , l − 1, and c(r) = min{c | p_c ≤ r} for each r ∈ Source(U′), at a total cost of O(z log(z)). In this step we have essentially only replaced each ε_{q,p} by ε′_{q,p}. (Line 4 of Algorithm 4). In this part, we need to compute τL := Ker(ν f_1 : νP_1 → νP_0). First, we need to express ν f_1 = (ν f′_L, ν f_U) : νP_1 → νP_0, so far computed only symbolically, in terms of vector spaces and linear maps (as a representation).
The entries of ν f_1 = (ν f′_L, ν f_U) involve morphisms of the form ε′_{q,p} (see Equation (5.11)). Fix one such ε′_{q,p}. For each vertex v ∈ (#-G m,n)_0, we compare v with p and q. If v ≤ p and v ≤ q, then we put the scalar 1_K in the appropriate entry. Over all vertices, this operation costs O(mn) in total. Then, since ν f_U contains 2(l − 1) entries and ν f′_L contains l′ = |Source(U′)| entries involving ε′_{q,p}, expressing ν f_1 as a collection of matrices costs O(mn(l + l′)) ≤ O(mnz).
For the computation of the kernel, we also need the internal maps (νP_1)(α) for all α ∈ (#-G m,n)_1. We let S_1 := Source(U′) ⊔ {q_1, . . . , q_{l−1}}, so that νP_1 = ⊕_{r ∈ S_1} I(r).
For a fixed arrow α in #-G m,n, let a = #{r ∈ S_1 | s(α) ≤ r} and b = #{r ∈ S_1 | t(α) ≤ r}. We have (νP_1)(α) = ⊕_{r ∈ S_1} I(r)(α) : K^a → K^b, where for each r ∈ S_1 the map I(r)(α) is the identity 1_K if s(α) ≤ r and t(α) ≤ r, and zero otherwise (5.12). Then for each r ∈ S_1, we determine the row and column in the b × a matrix corresponding to r. Note that only in the case s(α) ≤ r and t(α) ≤ r will there be a corresponding entry; in that case, we put a 1 in the matrix. The rest of the entries of the matrix are 0. Since #S_1 = l′ + l − 1, this costs O(l′ + l) for each α. Then, since there are O(mn) arrows, we get a total cost of O(mn(l′ + l)) ≤ O(mnz).
Having expressed ν f_1 : νP_1 → νP_0 in terms of vector spaces and linear maps, we next discuss the computation of Ker ν f_1. In general, for a linear map φ : K^p → K^q, we can obtain an injection σ_φ : Ker φ → K^p by performing column operations on the augmented matrix [φ; I_p], where I_p denotes the identity matrix of size p. Since σ_φ is a section, there exists a retraction σ′_φ such that σ′_φ σ_φ = I_{rank σ_φ}; this σ′_φ is obtained by similar elementary transformations. Hence, for a morphism F : M → N in rep(Q, R), we can compute Ker F. For each vertex v ∈ Q_0, we have (Ker F)(v) := Ker(F_v) = K^{rank σ_{F_v}}, and for each arrow α : u → v in Q, we have (Ker F)(α) := σ′_{F_v} M(α) σ_{F_u}. Namely, Ker F is constructed to make the following diagram commutative.
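The column-operation recipe for σ_φ can be sketched as follows (a minimal illustration, ours, over the rationals with exact arithmetic; the paper works over an arbitrary field K). Column-reducing the augmented matrix [φ; I_p] makes kernel vectors visible: any column whose φ-part becomes zero carries a kernel vector of φ in its identity part.

```python
from fractions import Fraction

def kernel_basis(phi):
    """Basis of Ker(phi) for a q x p matrix phi, via column operations
    on the augmented matrix [phi; I_p].  After reduction, each column
    whose phi-part is zero stores a kernel vector in its I_p-part
    (these columns assemble into the section sigma_phi of the text)."""
    q = len(phi)
    p = len(phi[0]) if q else 0
    A = [[Fraction(x) for x in row] for row in phi]
    A += [[Fraction(1) if i == j else Fraction(0) for j in range(p)]
          for i in range(p)]
    col = 0
    for r in range(q):                        # clear row r with column operations
        piv = next((c for c in range(col, p) if A[r][c] != 0), None)
        if piv is None:
            continue
        for row in A:                         # swap columns col <-> piv
            row[col], row[piv] = row[piv], row[col]
        for c in range(p):                    # eliminate row r in other columns
            if c != col and A[r][c] != 0:
                f = A[r][c] / A[r][col]
                for row in A:
                    row[c] -= f * row[col]
        col += 1
    return [[A[q + i][c] for i in range(p)]
            for c in range(p)
            if all(A[r][c] == 0 for r in range(q))]
```

The rank of φ is recovered as p minus the number of kernel columns, matching the echelon-form computations used in the complexity estimates.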
Note that in our setting of computing Ker ν f_1, we have q = dim νP_0(u) ≤ z and p = dim νP_1(u) ≤ 2z for all u ∈ (#-G m,n)_0. Then the computation of (Ker ν f_1)(v) := Ker((ν f_1)_v) for all vertices v costs O(z^ω mn) via column echelon form computations. Furthermore, the computation of the internal maps (Ker ν f_1)(α) = σ′_{(ν f_1)_{t(α)}} (νP_1)(α) σ_{(ν f_1)_{s(α)}} for all arrows α costs O(z^ω mn) in total via matrix multiplications. (Line 5 of Algorithm 4). For Z = V_L, recall that θ_L := θ_Z is given by a composite through a simple direct summand S of top Z, where π : top Z → S and top P_0 ∼→ soc νP_0 are the canonical maps. For each vertex u ∈ L_0, the entry ε′_{V_L, p_1}(u) is 1 if p_1 ≥ u and 0 otherwise. Computing over all vertices, we have a total cost of O(mn). (Line 6 of Algorithm 4). Note that the middle term E_L := E_Z, a pullback, can be computed as a kernel. We can compute this kernel by the method explained above, or instead, we can build it using information we already have.
Obviously we have E_L ⊇ Ker ν f_1 ⊕ Ker θ_L = τV_L ⊕ Ker θ_L. Let S = S(p_1) be the simple direct summand of top Z chosen above (in Line 5 of Algorithm 4), and let w := (p_1, p_1) ∈ νP_1 ⊕ V_L, where the first entry p_1 is the basis element p_1 ∈ I(q_1) ⊆ νP_1, and the second entry is the obvious basis element p_1 ∈ V_L. From the exact sequences above, we have dim E_L = dim τV_L + dim Ker θ_L + 1. Therefore, noting that w ∉ τV_L ⊕ Ker θ_L, we have E_L = τV_L ⊕ Ker θ_L ⊕ Kw as a vector space. Let L′ be the interval full(L_0 \ {p_1}). Then we have Ker θ_L = V_{L′}, and hence we finally have E_L = τV_L ⊕ V_{L′} ⊕ Kw.
The representation structure of E_L is determined by those of τV_L, V_{L′}, and that of Kw, defined by E_L(α_{p_1})w := (0, V_L(α_{p_1})(p_1)) and E_L(β_{p_1})w := (τV_L(β_{p_1})(p_1), V_L(β_{p_1})(p_1)), where α_{p_1}, β_{p_1} are the horizontal arrow and the vertical arrow of #-G m,n starting from p_1, respectively. Namely, for each arrow α : s → t in #-G m,n, E_L(α) is given componentwise: for arrows α : s → t with s ≠ p_1 and t ≠ p_1; for arrows ending at t = p_1; and finally for arrows starting at s = p_1 (namely α_{p_1} and β_{p_1}). Here we set π_1 : νP_1 ⊕ V_L → νP_1 and π_2 : νP_1 ⊕ V_L → V_L to be the canonical projections; since π_1(E_L) ≤ π_1(τV_L) + π_1(Ker θ_L) + π_1(Kw) ≤ τV_L, they restrict to the morphisms π′_1 := π_1|_{E_L} : E_L → τV_L and π′_2 := π_2|_{E_L} : E_L → V_L. Note that we have already computed the maps τV_L(α), and we only need to copy the known information to create E_L. Thus, for the computational complexity, we only estimate the size of E_L, which is given by ∑_{α ∈ (#-G m,n)_1} dim E_L(s(α)) dim E_L(t(α)) ≤ mnl² ≤ mnz².
The above arguments show the following complexity bounds. Proof. In the case that L is projective, L has support consisting of all vertices that admit a directed path from some fixed vertex g. The module rad L is simply the interval whose support is the support of L with g excluded. We then set E_L = rad L and τL = 0. Otherwise, if L is not projective, we use Algorithm 4 to compute τL and E_L. Next, we need to compute dim Hom(M, Y) for the modules Y appearing in formula (2.5); in particular, dim Hom(M, Y) = D_0 − rank B. Let us analyze the size of B, which depends on Y. Let ϒ = max_{i ∈ Q_0} dim Y(i). Then D_0 ≤ ϒ dim M and D_1 ≤ 2ϒ dim M, where the factor 2 for D_1 comes from the fact that in #-G m,n, for each vertex i, there are at most 2 arrows starting from i. We then note that, using Gaussian elimination, rank B can be computed in time O(2ϒ^ω (dim M)^ω) [22].
• In the case that Y = rad L or Y = L, ϒ = 1.
• In the case that Y = τL, we give an upper bound for ϒ as follows. Note that τL = Ker(ν f_1 : νP_1 → νP_0), so that dim τL(i) ≤ dim νP_1(i) for all i. Furthermore, the maximum of dim νP_1 occurs at the bottom-left corner (1, 1) since νP_1 is injective, and dim νP_1(1, 1) = dim P_1(m, n); the final equality follows from the definition of ν. Then, since P_1 = ⊕_{r ∈ Source(U′)} P(r) ⊕ ⊕_{d=1}^{l−1} P(q_d) by Proposition 5.8, we see that dim P_1(m, n) = l′ + l − 1, where l′ = #Source(U′) and l = #Source(U). Thus ϒ ≤ l′ + l − 1 for Y = τL.
• Finally, for the case Y = E_L, the middle term of the almost split sequence, we have ϒ ≤ l′ + l. To see this, note that dim E_L(i) = dim τL(i) + dim L(i), so that ϒ ≤ (l′ + l − 1) + 1 = l′ + l using the previous case.
Recall that l′ ≤ z and l ≤ z, where z = min{m, n}. Combining the above, we obtain the claimed time complexity, in which the dominant contribution for nonprojective L is the cost of computing the terms of the almost split sequence as given in Proposition 5.9, and O(z^ω (dim M)^ω) if L is projective.
In the worst case, we need to test all #I_{m,n} intervals, and we obtain the following. Theorem 5.11. Algorithm 1 can be performed in a time complexity bounded in terms of #I_{m,n}, z, ω, and dim M, where dim M is the total dimension of M.

5.1. Interval selection heuristic. An important complexity drawback of the above is the number of intervals that need to be checked. Given a module M, we need, in the worst case, to compute the multiplicities with respect to all intervals, which are #I_{m,n} in number. The heuristics explained below do not change this worst-case analysis, but in particular cases we can hope that not all intervals appear in the decomposition; using adapted heuristics, we can reduce the number of intervals to be checked.
Contained-support heuristic. We note that if an interval is a summand of a module M, then its support is included in the support of M. Thus, the number of intervals to check can be reduced by considering only intervals included in the support of M. For example, the procedure GETCANDIDATES(x, y) in Algorithm 2 can be improved by including this heuristic, checking inclusion in the support of dimVecRemaining at each step, as dimVecRemaining represents the part of M still unprocessed.
Line-restriction heuristic. We can further reduce the number of intervals to be tested by the following heuristic, which builds up candidate intervals by stacking 1D intervals to form 2D intervals, the 1D intervals being obtained by decomposing the restrictions of M to horizontal lines.
Suppose that M = ⊕_{i=1}^{ℓ} T_i ⊕ X, where the T_i are all interval representations and X has no interval summands. That is, ⊕_{i=1}^{ℓ} T_i is the interval-decomposable part of M. We wish to create a set of candidate intervals containing {T_i | i = 1, . . . , ℓ}, without knowing the decomposition given above.
The restriction of the above to a horizontal line L on the commutative grid gives M|_L = ⊕_{i=1}^{ℓ} T_i|_L ⊕ X|_L, a decomposition of the 1D persistence module M|_L. Note that since the T_i are 2D interval representations, they restrict to 1D interval representations T_i|_L. The indecomposable decomposition of M|_L necessarily contains all of these 1D intervals (together with intervals coming from X|_L). Since M|_L is a 1D persistence module, it decomposes into a set of 1D intervals, which we denote by C(L).
We stack valid combinations of intervals from C(L) over all horizontal lines L to produce a set of 2D intervals that necessarily contains {T_i | i = 1, . . . , ℓ}. By a valid stacking, we mean that the resulting 2D object should be a valid interval, i.e. one with staircase shape. This procedure is described in Algorithm 5. In Algorithm 5, if T is nonzero, the stacking of S on T (extended by 0 on the whole grid) is a 2D interval on the grid defined up to the ith line. On the other hand, if T is 0, then the module T → S is also well defined and is simply the 2D interval that is nonzero only on the ith line.
We need to check that all 2D interval modules that are part of the decomposition of M are in P. This is a direct consequence of the following observation.
If M = ⊕_{j=1}^{ℓ} T_j ⊕ X, where ⊕_{j=1}^{ℓ} T_j is the interval-decomposable part of M, then for each j the restriction of T_j to the ith line, T_j|_{L_i}, is an interval in C(L_i). Moreover, T_j can be rewritten as the stacking T_j|_{L_1} → · · · → T_j|_{L_n}. Since T_j is connected, the 1D intervals T_j|_{L_i} that are nonzero correspond to a contiguous sequence of indices i (line heights), say s ≤ i ≤ t for some s and t. Thus, T_j is formed in Algorithm 5 by using the zero module up to the (s − 1)th line (Line 6 of Algorithm 5 with i = s − 1), stacking T_j|_{L_i} ∈ C(L_i) for s ≤ i ≤ t (always valid stackings), and finishing with zeros on the rest of the grid. That is, T_j ∈ P at the end of the algorithm.
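The stacking construction can be sketched as follows (our illustration, not the paper's listing of Algorithm 5; the 1D intervals in C[i] are assumed given as pairs (b, d), and the validity test b′ ≤ b ≤ d′ ≤ d encodes the staircase shape used throughout this section).

```python
def stack_candidates(C):
    """Build candidate 2D intervals by stacking 1D intervals row by row
    (line-restriction heuristic).  C[i] is the list of 1D intervals
    (b, d) appearing in the decomposition of M restricted to row i
    (0-indexed, bottom to top).  Each candidate is recorded as a pair
    (s, rows): the starting row s and the tuple of stacked slices."""
    def valid(low, high):              # staircase condition: rows shift left
        (b, d), (b2, d2) = low, high
        return b2 <= b <= d2 <= d
    n = len(C)
    out = set()
    for s in range(n):                 # first nonzero row of the candidate
        frontier = [[iv] for iv in C[s]]
        while frontier:
            rows = frontier.pop()
            out.add((s, tuple(rows)))  # implicitly extended by 0 elsewhere
            i = s + len(rows)
            if i < n:                  # try to grow one row taller
                frontier.extend(rows + [iv]
                                for iv in C[i] if valid(rows[-1], iv))
    return out
```

By the argument above, every interval summand T_j of M appears among the candidates, since its row restrictions lie in the sets C[i] and stack validly.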
Image-based heuristic. Another approach, given in Algorithm 6, is to use the ranks of maps to choose which intervals to test. We start from the zero module L and iteratively add interval summands of M, with their multiplicities, to L. At the end, if M is interval-decomposable, then M ∼= L. We greedily work towards equalizing the dimension vectors of M and L. If we reach a point where the greedy procedure fails, the module M is not interval-decomposable.
In lines 4 and 5, the algorithm selects the leftmost vertex s on the lowest possible line where the dimensions of the vector spaces of L and M disagree. We then look for a rectangle B, as large as possible, that must be contained in the support of at least one indecomposable summand of M that does not yet appear in L. In lines 8 and 9, we achieve this by selecting a maximal element t such that the rank of the map M(s → t) is greater than the rank of L(s → t). Then B is a subset of the support of each of the intervals we want to find.
To reduce the number of candidates, we remark that those intervals must interact with the map M(s → t) in the following way. We first initialize F = M_s, and progressively consider smaller and smaller subspaces of F as we process intervals. At each iteration of the inner while loop, we consider the subspace not accounted for previously, via a basis complementary in F to the kernel of M(s → t), in line 10. We exclude the kernel because we want the intervals that contain the rectangle B from s to t in their support. Then the supports of the intervals of interest must be contained in the set of vertices reachable by images and preimages along walks starting at those basis elements. This is encoded in the sets C_i, as defined in line 13. We can compute each set C_i independently, and the intervals must be contained in the union ∪_{i=k+1}^{f} C_i. Having obtained candidate intervals, in line 15 we then compute their true multiplicities d_M(I) in M, for example via Algorithm 3 as discussed above. If M is interval-decomposable, then the intervals I under consideration (which contain the rectangle from s to t), together with the already-processed L(s → t), should account for all of the rank of M(s → t); thus, in line 16 we use this condition to determine whether or not to stop early. If we are not yet sure that M is not interval-decomposable, in line 17 we update L and F and continue with the iteration.

Algorithm 6 (Image-based decomposition).

Recall that B is the rectangle with lower-left corner s and upper-right corner t, and that for each i = k + 1, . . . , f, the set C_i (line 13) consists of the vertices r reachable from the basis element x_i along walks (r_0, r_1, . . . , r_j) with r_0 = s, r_1 = t, r_j = r, such that r_l and r_{l+2} are incomparable for all 0 < l < j − 3 and no r_l lies below the line of s. Proof. First, if the algorithm returns a module L, then it is necessarily the interval decomposition of M. Indeed, by construction, L is a direct sum of intervals, and every such interval appears exactly the same number of times in L as it does in M. So M is isomorphic to the direct sum of L with another module L′. Moreover, dim M = dim L if L is returned, and so L′ = 0 and M ∼= L.
Second, we need to show that the algorithm always terminates. If dim M ≠ dim L, then S ≠ ∅ and s is well defined. For the second while loop, if dim M_s ≠ dim L_s, then T is not empty because M(s → s) is the identity with rank equal to dim M_s > dim L_s, and so s ∈ T. Thus, a maximal element t of T exists. Therefore, for every round of this loop, either the algorithm returns that M is not interval-decomposable, or the dimension of F decreases and T is reduced.
Finally, we must show that if M is interval-decomposable, Algorithm 6 will always return a module L. Assume that M is interval-decomposable. Arriving at line 16, we have the following properties. We have picked two elements s and t, and we already know an interval-decomposable module L that appears in the decomposition of M. As M is interval-decomposable, there are at most rank M(s → t) distinct interval modules I_1, . . . , I_r in the decomposition of M such that rank I_i(s → t) = 1. Note that ∑_{i=1}^{r} d_M(I_i) = rank M(s → t). We separate the set {I_1, . . . , I_r} into two sets depending on their lowest-left corner, i.e. the leftmost vertex on the lowest line of the support.
As the support of I is convex and contains no element below the line of s, we can build an alternating walk of paths by extracting a subsequence (r_l)_{l=1}^{j} of (v_1, . . . , v_p) such that no r_l lies on a line lower than s, r_l and r_{l+2} are incomparable for l < j − 3, r_1 = t, and r_j = v. Then fix r_0 = s. For ease of notation, we require that j be even; if the length is odd, we simply put r_{j+1} := r_j and use the subsequence up to r_{j+1}.
The map I(r_j → r_{j−1})^{−1} I(r_{j−1} → r_{j−2}) · · · I(r_2 → r_1)^{−1} I(r_0 → r_1) is an isomorphism. Translated into M, this gives a corresponding relation among the maps of M along the walk. As x can be expressed as a linear combination of {x_1, . . . , x_f}, there exists at least one i ∈ {1, . . . , f} whose component contributes nontrivially. Since all elements x_j for j ≤ k belong to Ker M(s → t), we have i ≥ k + 1 and v ∈ C_i. Moreover, rank I(s → t) ≠ 0, and thus the support of I also contains the rectangle B. Note also that there is no double counting. Therefore rank M(s → t) = rank L(s → t) + ∑ d_M(I), and the algorithm does not incorrectly stop early.

5.2. Interval decomposability with the decomposition algorithm of [15].
Dey and Xin proposed in [15] a generalization of the persistence algorithm to decompose multidimensional persistence modules. Once an indecomposable decomposition is computed, testing for interval-decomposability is straightforward: one only needs to check that every summand of the decomposition is an interval module. The generalized persistence algorithm, however, is limited to a specific case: in the matrix encoding the minimal presentation of the module, no two columns and no two rows may have the same grade. Translated into the language of this paper, this means that the generators of the projective modules appearing in the minimal projective presentation of the module are all distinct.
It was suggested in [15] that, for modules not satisfying this property, an easy workaround can be implemented. In this case, the generalized persistence algorithm does not always provide a full decomposition; it nonetheless returns a direct sum decomposition of the module, the only limitation being that some of the summands might not be indecomposable. The suggested workaround is to arbitrarily fix an order on the rows and columns that have the same grades, in essence artificially breaking ties in the grades. By exhaustively checking all such tie-breaking orders on the rows and columns, it was claimed that the algorithm will eventually provide a full decomposition.
This workaround is valid only if some order allows for a full decomposition. Unfortunately, this is not always the case, as we show with the following example: no matter which order is chosen for elements with the same grade, no full decomposition can be obtained through the algorithm of [15]. In fact, we show the stronger statement that, for any order chosen for elements with the same grade, no algorithm using only "matrix operations in one direction" can obtain a full decomposition.
Example 5.14. We consider the field K = F_{2²} = F_2(λ) = {0, 1, λ, λ²}, where λ satisfies λ² + λ + 1 = 0. Let #-G_{2,2,2} be the 2 × 2 × 2 commutative cube, with vertices labeled a, b, c, x, y, z, α, ω, and let M be a K-representation of #-G_{2,2,2} (the diagrams of the cube and of M are not reproduced here). Then one can calculate the following minimal projective presentation for M:
P(a)² ⊕ P(b)² ⊕ P(c)² → P(x)² ⊕ P(y)² ⊕ P(z)² → M → 0,
where P(v) is the indecomposable projective representation with source v, and where the morphism p_1 can be given in block matrix form (Equation (5.13), not reproduced here). First, let us show that M is decomposable. We note that the vertices x, y, and z have no arrows between them, and similarly for a, b, c. Thus, allowable matrix operations are restricted to within each block row or column in Equation (5.13). For ease of notation, let X be the matrix X = (1 1 ; 1 0), rows separated by a semicolon. For P any invertible 2 × 2 matrix, the matrix form of p_1 can be transformed by conjugating X by P, via alternating row and column operations with respect to P, without affecting the other block entries. By letting P = (1 λ ; λ 1), we have P⁻¹ = (λ² 1 ; 1 λ²), since λ² + λ = −1 = 1 and λ³ + 1 = 0 in the base field K = F_2(λ). Thus p_1 decomposes as a direct sum of two maps (Equation (5.14)), with the two summands each inducing a nontrivial summand of M ≅ Coker p_1 ≅ Coker p′_1 ⊕ Coker p″_1, where p′_1 (respectively p″_1) is the first (respectively second) direct summand of p_1 in the decomposition above.
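As a quick sanity check (not part of the original argument; all names below are ours), the field arithmetic in this example can be verified in a few lines of Python. The sketch encodes K = F_2(λ) as pairs (c0, c1) meaning c0 + c1·λ, and checks both that P⁻¹ is as claimed and that conjugating X by P yields a diagonal matrix, which is what splits p_1.

```python
# Arithmetic in K = F_2(lambda) with lambda^2 + lambda + 1 = 0.
# Elements are encoded as pairs (c0, c1) meaning c0 + c1*lambda, mod 2.
def add(a, b):
    return (a[0] ^ b[0], a[1] ^ b[1])

def mul(a, b):
    # (a0 + a1 L)(b0 + b1 L) = a0 b0 + (a0 b1 + a1 b0) L + a1 b1 L^2,
    # with L^2 = L + 1.
    return ((a[0] * b[0] + a[1] * b[1]) % 2,
            (a[0] * b[1] + a[1] * b[0] + a[1] * b[1]) % 2)

ZERO, ONE = (0, 0), (1, 0)
LAM = (0, 1)            # lambda
LAM2 = mul(LAM, LAM)    # lambda^2 = lambda + 1

def mat_mul(A, B):
    # product of 2x2 matrices over K
    return [[add(mul(A[i][0], B[0][j]), mul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

X = [[ONE, ONE], [ONE, ZERO]]
P = [[ONE, LAM], [LAM, ONE]]
Pinv = [[LAM2, ONE], [ONE, LAM2]]

# P^{-1} is indeed the inverse of P
assert mat_mul(Pinv, P) == [[ONE, ZERO], [ZERO, ONE]]

# Conjugating X by P gives the diagonal matrix diag(lambda^2, lambda).
D = mat_mul(Pinv, mat_mul(X, P))
assert D == [[LAM2, ZERO], [ZERO, LAM]]
```

The characteristic polynomial of X is x² + x + 1, which is irreducible over F_2, so X only diagonalizes after extending scalars to F_4; this is why the example is stated over F_{2²}.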
Next, let us consider the workaround proposed by Dey and Xin for the case when generators of the projectives appearing in the projective presentation of M have equal grades. This is exactly the case here, as we have two copies of P(v) for each v ∈ {a, b, c} and each v ∈ {x, y, z}.
In general, without any restrictions, p_1 given in Equation (5.13) can be transformed into the form of Equation (5.15) (not reproduced here), where X is as defined above, A_1, A_2, A_3, B_1, B_2, B_3 are invertible 2 × 2 K-matrices, and 1_{P(v)} is the identity morphism of P(v). Note that since there are no nonzero morphisms among P(x), P(y), P(z), nor among P(a), P(b), P(c), the off-diagonal blocks are always zero.
The workaround involves arbitrarily fixing an order on the rows and columns that have the same grades, and running their algorithm. Their algorithm only performs row/column operations in "one direction" with respect to the fixed order. This amounts to restricting each of A_1, A_2, A_3, B_1, B_2, B_3 in the transformation matrices to be either upper or lower triangular. We note that we do not a priori require the matrices to be all upper triangular or all lower triangular. We show that it is impossible to compute a decomposition of p_1 under this restriction.
First, let us study the product AB of two upper or lower triangular invertible 2 × 2 matrices A and B, since p_1 after transformation (Equation (5.15)) contains blocks of that form. Let A = (a_1 a_u ; a_l a_2) and B = (b_1 b_u ; b_l b_2), rows separated by semicolons. Since we impose that A be upper or lower triangular and invertible, we have a_u = 0 or a_l = 0, and a_1 ≠ 0, a_2 ≠ 0, with similar conditions on B. Furthermore, we know one particular decomposition of p_1, as given in Equation (5.14), in which each block row and block column decomposes into two. Thus, any other nontrivial decomposition of p_1 must have its blocks of the form AB be diagonal matrices. Note that, given the restrictions on A and B, AB cannot be anti-diagonal.
Requiring the diagonality of AB = (a_1 b_1 + a_u b_l , a_1 b_u + a_u b_2 ; a_l b_1 + a_2 b_l , a_l b_u + a_2 b_2) is equivalent to requiring that a_1 b_u + a_u b_2 = 0 and a_l b_1 + a_2 b_l = 0. Since a_1, a_2, b_1, b_2 are all nonzero, we conclude that a_u = 0 if and only if b_u = 0, and a_l = 0 if and only if b_l = 0. That is, the "shape" (upper or lower) of A is the same as the "shape" of B. The transformed p_1 in Equation (5.15) has blocks A_1 B_2, A_2 B_1, A_2 B_3, A_3 B_2, and A_3 B_3. Requiring that they all be diagonal implies that the shapes of A_1, A_2, A_3, B_1, B_2, B_3 are all the same; that is, the transformation blocks must all be upper triangular, or all lower triangular.
Finally, we consider the remaining block A_1 X B_1 in Equation (5.15), where X = (1 1 ; 1 0) as before. Suppose that all the transformation blocks are upper triangular; in particular, A_1 and B_1 are upper triangular (a_l = 0, b_l = 0). Then the lower-left entry of A_1 X B_1 is a_2 b_1, which cannot be zero, so A_1 X B_1 cannot be diagonal. Similarly, in the case that all the transformation blocks are lower triangular, A_1 X B_1 cannot be diagonal. By the arguments above, there are no other possibilities for obtaining a nontrivial decomposition of p_1. Thus, given the restrictions on the transformations of p_1, one cannot obtain a full decomposition of p_1.
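The case analysis above is finite and can be verified exhaustively. The following Python sketch (ours, not from the paper) enumerates all invertible upper or lower triangular 2 × 2 matrices over F_4 and checks two claims: (i) whenever AB is diagonal, A and B have the same "shape" (a_u = 0 iff b_u = 0, and a_l = 0 iff b_l = 0), and (ii) when A and B have the same shape, A X B is never diagonal.

```python
from itertools import product

# Field K = F_4: elements as pairs (c0, c1) meaning c0 + c1*lambda,
# with lambda^2 = lambda + 1 and coefficients mod 2.
def add(a, b):
    return (a[0] ^ b[0], a[1] ^ b[1])

def mul(a, b):
    return ((a[0] * b[0] + a[1] * b[1]) % 2,
            (a[0] * b[1] + a[1] * b[0] + a[1] * b[1]) % 2)

K = [(0, 0), (1, 0), (0, 1), (1, 1)]
ZERO, ONE = (0, 0), (1, 0)

def mat_mul(A, B):
    return [[add(mul(A[i][0], B[0][j]), mul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def is_diag(M):
    return M[0][1] == ZERO and M[1][0] == ZERO

def triangulars():
    # all invertible upper or lower triangular 2x2 matrices over K
    # (triangular invertible <=> both diagonal entries nonzero)
    for d1, d2, off in product(K, K, K):
        if d1 == ZERO or d2 == ZERO:
            continue
        yield ('upper', [[d1, off], [ZERO, d2]])
        yield ('lower', [[d1, ZERO], [off, d2]])

X = [[ONE, ONE], [ONE, ZERO]]

for (sa, A), (sb, B) in product(triangulars(), triangulars()):
    if is_diag(mat_mul(A, B)):
        # (i) "same shape": a_u = 0 iff b_u = 0, and a_l = 0 iff b_l = 0
        assert (A[0][1] == ZERO) == (B[0][1] == ZERO)
        assert (A[1][0] == ZERO) == (B[1][0] == ZERO)
    if sa == sb:
        # (ii) with matching shapes, A X B is never diagonal
        assert not is_diag(mat_mul(A, mat_mul(X, B)))
```

Incidentally, with mixed shapes A X B can be diagonal; for instance A = (1 0 ; 1 1) and B = (1 1 ; 0 1) give A X B equal to the identity. This is why the shape-matching constraint coming from the other blocks is essential to the argument.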

CONCLUSION
In this paper we presented an algorithm for testing S-decomposability for any finite set S of indecomposables, based on the procedure of [1] for computing the multiplicity of a given indecomposable in the decomposition of a module. We specifically studied the case of interval-decomposability by first providing a characterization and an enumeration method for interval modules in the 2D equioriented commutative grid case. To the best of our knowledge, this is the first algorithm to test interval-decomposability of a module without computing its full decomposition when the answer is negative.
Interval modules have a very specific structure that made computation easier, especially the fact that their endomorphism rings are isomorphic to the underlying field K. This slightly simplified Algorithm 4, an essential component of the algorithm, compared to the general procedure (see Section 3.2 of [21]). When considering a different class S of indecomposables, the aforementioned simplification in Algorithm 4 may no longer hold, but the general procedure is still valid.
Another generalization is to consider interval modules of nD commutative grids with n > 2. More generally, in any finite bound quiver, we can still define and enumerate interval modules by using a brute-force approach. Then we can apply our interval-decomposability algorithm. However, the brute-force enumeration comes with an additional cost as we do not have an easy characterization of intervals in the general case, in contrast to the staircase shape of 2D intervals.
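As a toy illustration of such a brute-force enumeration (ours, not from the paper), the following Python sketch lists all interval supports of a small equioriented commutative m × n grid, taking an interval to be a nonempty, connected, convex subset of the grid poset with componentwise order. It makes no use of the 2D staircase characterization, so in principle the same approach applies to other finite posets, at exponential cost in the number of vertices.

```python
from itertools import combinations, product

# Vertices of the m x n equioriented commutative grid, as the poset
# {0..m-1} x {0..n-1} with componentwise order.
def leq(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

def is_convex(S):
    # p <= q <= r with p, r in S forces q in S
    return all(q in S
               for p in S for r in S if leq(p, r)
               for q in product(range(p[0], r[0] + 1),
                                range(p[1], r[1] + 1)))

def is_connected(S):
    # connectivity in the underlying grid graph (edges between
    # vertices differing by 1 in exactly one coordinate)
    S = set(S)
    seen, stack = set(), [next(iter(S))]
    while stack:
        p = stack.pop()
        if p in seen:
            continue
        seen.add(p)
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + d[0], p[1] + d[1])
            if q in S:
                stack.append(q)
    return seen == S

def intervals(m, n):
    # brute force over all nonempty subsets of the grid
    verts = list(product(range(m), range(n)))
    return [set(S)
            for k in range(1, len(verts) + 1)
            for S in combinations(verts, k)
            if is_connected(S) and is_convex(set(S))]
```

For example, `intervals(1, n)` recovers the n(n + 1)/2 classical 1D intervals, while `intervals(2, 2)` yields the 11 interval supports of the commutative square; the grid sizes here are toy choices, as the subset enumeration grows exponentially.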
For the case of 2D equioriented commutative grids considered in this paper, we also provided several heuristics to speed up the enumeration of interval modules and the testing of interval-decomposability. It would be interesting to implement these heuristics and conduct an in-depth comparison on practical instances to evaluate their performance.