On the poset and asymptotics of Tesler Matrices

Tesler matrices are certain integral matrices counted by the Kostant partition function and have appeared recently in Haglund's study of diagonal harmonics. In 2014, Drew Armstrong defined a poset on such matrices and conjectured that the characteristic polynomial of this poset is a power of $(q-1)$. We use a method of Hallam and Sagan to prove a stronger version of this conjecture for posets of a certain class of generalized Tesler matrices. We also study bounds for the number of Tesler matrices and how they compare to the number of parking functions, the dimension of the space of diagonal harmonics.


1. Introduction
Tesler matrices were introduced by Glenn Tesler to study Macdonald polynomials. They have recently received attention due to their relationship with diagonal harmonics: Haglund proved in [9] that the bigraded Hilbert series of the space of diagonal harmonics, denoted $DH_n$, is the sum over Tesler matrices of a bivariate weight,

(1) $\mathrm{Hilb}(DH_n;q,t) = \sum_{A \in \mathcal{T}(1^n)} \mathrm{wt}_{q,t}(A).$
In (1), the Hilbert series is of the space $DH_n$, which has dimension $(n+1)^{n-1}$. For more on this space, see [6,8]. Although the enumeration and asymptotics of Tesler matrices are not known, there are some nice product formulas for specializations of the alternating weight $\mathrm{wt}_{q,t}(\cdot)$. For instance, it was shown in [3] that

(3) $q^{\binom{n}{2}} \sum_{A \in \mathcal{T}(1^n)} \mathrm{wt}_{q,q^{-1}}(A) = [n+1]_q^{\,n-1},$

where $[n]_q = 1 + q + \cdots + q^{n-1}$; a companion formula, (4), was shown in [14]. Equations (3) and (4) give product formulas involving alternating sums over Tesler matrices. In this paper, we prove another such result, initially conjectured by Armstrong in [1], using a different alternating sum. He defines a poset on the set of Tesler matrices, which we denote $P(1^n)$ and refer to as the Tesler poset. Recall that the characteristic polynomial of a poset $(P, \le)$, denoted $\chi(P;q)$, is the Möbius-function-weighted rank generating function

$\chi(P;q) = \sum_{A \in P} \mu(\hat{0}, A)\, q^{\rho(P)-\rho(A)},$

where we use the terminology and notation of [20, Ch. 3] for the Möbius function $\mu(\cdot)$, write $\rho(A)$ and $\rho(P)$ for the rank of an element $A \in P$ and of the poset $P$ respectively, and write $\hat{0}$ for the unique least element.
We will look at the characteristic polynomial of the Tesler poset, but first we need to give the necessary definitions and conventions to discuss Tesler matrices precisely.
We denote by $T(\alpha_1, \ldots, \alpha_n)$ the number of matrices in $U_n$ (the set of $n \times n$ upper-triangular matrices with nonnegative integer entries) with hook sum vector $(\alpha_1, \ldots, \alpha_n)$, that is, whose $i$th hook sum $a_{ii} + \sum_{j>i} a_{ij} - \sum_{k<i} a_{ki}$ equals $\alpha_i$; we denote the set of such matrices by $\mathcal{T}(\alpha_1, \ldots, \alpha_n)$ and refer to them as generalized Tesler matrices. We often use the shorthand $T(1^n)$ and $\mathcal{T}(1^n)$ for the number and the set of Tesler matrices, respectively.

Conjecture 1.2 (Armstrong [1]). Let $P(1^n)$ be the poset on the Tesler matrices $\mathcal{T}(1^n)$. Then

$\chi(P(1^n);q) = (q-1)^{\binom{n}{2}}.$
The method that we use in this paper extends to the larger class of generalized Tesler matrices with binary hook sums and settles Armstrong's conjecture with a simple calculation.

Theorem 1.3. Let $\alpha = (\alpha_{n-1}, \ldots, \alpha_0) \in \{0,1\}^n$ and let $P(\alpha)$ be the poset on the generalized Tesler matrices $\mathcal{T}(\alpha)$. Then, letting $w(\alpha) = \sum_{i=0}^{n-1} i\,\alpha_i$, we have

$\chi(P(\alpha);q) = (q-1)^{w(\alpha)}.$

To see why this theorem settles Armstrong's conjecture, note that $w(1,1,\ldots,1) = \binom{n}{2}$. In addition, this theorem is consistent with a well-known result on the Boolean lattice (see Prop. 3.5). To prove this theorem, we adapt a method [11] of Joshua Hallam and Bruce Sagan. We also show that certain powers of $(q-1)$ divide the characteristic polynomial of the Tesler poset corresponding to a hook sum vector with either a trailing or a leading binary word (see Corollary 4.13).

Although Tesler matrices have been connected in [8] to diagonal harmonics via a bivariate weight and were shown in [15] to be counted by the Kostant partition function, there are still many enumerative questions about Tesler matrices that have yet to be answered. For example, prior to this work, the known bounds for $T(1^n)$ were $n! \le T(1^n) \le 2^{\binom{n}{2}}$ [15, §4]. In Section 5, through simple observations on an enumerative tool that we call the Armstrong polynomial, we improve the lower bound (see (13)); in addition, we can similarly get a tighter upper bound.

There are also interesting enumerative results when considering generalized Tesler matrices. Let $C_n = \frac{1}{n+1}\binom{2n}{n} \sim 4^n/\sqrt{\pi n^3}$ be the $n$th Catalan number. Zeilberger [23] showed that $T(1,2,\ldots,n) = C_1 C_2 \cdots C_n$. Thus $T(1,2,\ldots,n) = e^{\Theta(n^2)}$, which motivated the following question.

Question 1.4. Is $T(1^n) = e^{\Theta(n^2)}$?

Remark. Note that even the improved lower bound would need to be strengthened significantly further to give an affirmative answer to Question 1.4. However, the existing data in the OEIS A008608 suggests that $\log(T(1^n)) = O(n^{1.6})$, as noted in [17].
We denote the hook sum vector $(1,\ldots,1,0,\ldots,0)$ with $k$ 1's and $(n-k)$ 0's by $(1^k, 0^{n-k})$. This set of generalized Tesler matrices has previously been studied in [12], and we analyze the set $\mathcal{T}(1^k, 0^{n-k})$ in Section 6 to get some insight into Tesler matrices. We will show that for each $k$ there exists $N_k$ such that $T(1^k, 0^{n-k}) \ge (k+1)^{n-1}$ for all $n \ge N_k$ (Proposition 6.4). This leads us to conjecture that the number of Tesler matrices is eventually bounded below by the dimension of $DH_n$, which is $(n+1)^{n-1}$ (also the number of parking functions of size $n$). We also find generating functions $T_k(x)$ for particular values of $k$. When $k = 1$, $T(1,0^{n-1}) = 2^{n-1}$, so this generating function is trivial. When $k = 2$, we find the generating function in Proposition 6.3 [12]. While the case $k = 3$ is still open, these generating functions could provide insight into a generating function for the number of Tesler matrices.
Outline: In Section 2, we highlight some previous results and methods that will be pertinent to this paper. Then, in Section 3, we introduce the Tesler poset and some of its properties, and show that a specific hook sum vector yields a poset isomorphic to the well-known Boolean lattice, a fact first noticed by Alejandro H. Morales [16]. Using these results, we prove Theorem 1.3 in Section 4 and explore some of its corollaries. Finally, in the last two sections, we explore asymptotics and other enumerative questions regarding generalized Tesler matrices, as well as the significance of settling Conjecture 1.2 with respect to the asymptotics of Tesler matrices.
2. Background

2.1. Tesler Generating Algorithm. We will discuss a method for generating generalized Tesler matrices given by Drew Armstrong [1]. Fix a generalized Tesler matrix $A = (a_{ij})$ of size $n$ with hook sum vector $(\alpha_1, \ldots, \alpha_n)$, and consider the main-diagonal entries of $A$ as an $n$-tuple $(d_1, \ldots, d_n)$ with $d_i := a_{ii}$. We create a generalized Tesler matrix $A' = (a'_{ij})$ with hook sum vector $(\alpha_1, \ldots, \alpha_n, \alpha_{n+1})$ by first constructing its main diagonal $(d'_1, \ldots, d'_{n+1})$. For each entry $d_i$, we replace it with some $d'_i$ where $0 \le d'_i \le d_i$ and set $a'_{i,n+1} = d_i - d'_i$, so that the $i$th hook sum remains unchanged. Then we let $d'_{n+1}$ be the value that makes the newly constructed main-diagonal $(n+1)$-tuple sum to $\sum_{k=1}^{n+1} \alpha_k$, and we let the other entries of the matrix remain unchanged.
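Since the generating step touches only the main diagonal, the numbers $T(\alpha)$ can be computed by tracking diagonals with multiplicities. Below is a minimal Python sketch of this idea (the function names are ours, not the paper's); its output for $T(1^n)$ matches the opening terms of OEIS A008608.

```python
from itertools import product
from collections import Counter

def diagonal_counts(alpha):
    """Multiset of main diagonals of generalized Tesler matrices with
    hook sum vector alpha, built by iterating the generating algorithm."""
    diags = Counter({(alpha[0],): 1})
    for a in alpha[1:]:
        nxt = Counter()
        for d, mult in diags.items():
            # Replace each d_i with any 0 <= d'_i <= d_i; the amount removed
            # moves into the new last column, so the first n hook sums are kept.
            for dp in product(*(range(di + 1) for di in d)):
                d_last = sum(d) - sum(dp) + a  # diagonal must sum to sum(alpha)
                nxt[dp + (d_last,)] += mult
        diags = nxt
    return diags

def T(alpha):
    """Number of generalized Tesler matrices with hook sum vector alpha."""
    return sum(diagonal_counts(alpha).values())

print([T((1,) * n) for n in range(1, 7)])  # [1, 2, 7, 40, 357, 4820]
```

The same sketch reproduces Zeilberger's product of Catalan numbers, e.g. $T(1,2,3) = 1 \cdot 2 \cdot 5 = 10$.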
Example 2.1. Our initial Tesler matrix has main-diagonal tuple $(0,1,2)$. The algorithm generates six ($1 \cdot 2 \cdot 3$) main-diagonal 4-tuples and hence six Tesler matrices of size 4. (See Figure 1; the red triangle is constant, and the blue rectangle corresponds to what was subtracted from the original main diagonal.)

Proposition 2.2. Iterating the Tesler Generating Algorithm yields all Tesler matrices.
Proof. Seeking a contradiction, suppose that there exists a least integer $z$ such that some Tesler matrix $A$ of size $z$ is not generated by this process. By reversing the process, we obtain a Tesler matrix of size $z-1$, which, being smaller than $A$, must be generated by the process. But then $A$ can be generated from this smaller matrix by one step of the process, a contradiction. Hence the process generates all of the Tesler matrices.
Fixing $A = (a_{ij})$ with hook sum vector $(\alpha_1, \ldots, \alpha_n)$, we now consider the number of generalized Tesler matrices of size $(n+1)$ that $A$ generates.

Definition 2.3. Let $A = (a_{ij})$ be an $n \times n$ generalized Tesler matrix and let $d_i := a_{ii}$ be its $i$th main-diagonal entry. We define the diagonal product of $A$, or $\mathrm{dpro}(A)$, as

$\mathrm{dpro}(A) := \prod_{i=1}^{n} (d_i + 1).$

Note that

$T(\alpha_1, \ldots, \alpha_n, \alpha_{n+1}) = \sum_{A \in \mathcal{T}(\alpha_1,\ldots,\alpha_n)} \mathrm{dpro}(A).$

2.2. Integral Flow Representation. A Tesler matrix of size $n$ can also be represented as an integral flow on the complete directed graph on $(n+1)$ vertices with net flows $(1,1,\ldots,1,-n)$ [15]. More generally, a generalized Tesler matrix with hook sum vector $(\alpha_1,\ldots,\alpha_n)$ corresponds to an integral flow with net flows $(\alpha_1,\ldots,\alpha_n,-\sum_k \alpha_k)$. The bijection in [15] shows that these are equivalent notions: the main-diagonal entry in row $i$ is the flow sent from the $i$th vertex to the $(n+1)$st vertex (the rightmost vertex), and for each entry with $i < j$, $a_{ij}$ is the flow from the $i$th vertex to the $j$th. See Figure 2 for an example of this bijection, with the net flows depicted underneath the complete directed graph.
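The identity in the note above can be checked by brute force: the sketch below (helper names ours) enumerates the upper-triangular matrices with prescribed hook sums directly from the definition and confirms that $\sum_{A \in \mathcal{T}(1,1)} \mathrm{dpro}(A) = T(1,1,1) = 7$.

```python
def tesler_matrices(alpha):
    """Enumerate all upper-triangular nonnegative integer matrices whose
    i-th hook sum (a_ii plus the row to its right, minus the column above)
    equals alpha[i]."""
    n = len(alpha)
    def comps(total, parts):  # weak compositions of `total` into `parts`
        if parts == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in comps(total - first, parts - 1):
                yield (first,) + rest
    def build(i, above, rows):
        if i == n:
            yield tuple(rows)
            return
        # row i must satisfy a_ii + sum_{j>i} a_ij = alpha[i] + (column sum above)
        for row in comps(alpha[i] + above[i], n - i):
            new_above = list(above)
            for j, v in enumerate(row[1:], start=i + 1):
                new_above[j] += v
            yield from build(i + 1, new_above, rows + [(0,) * i + row])
    yield from build(0, [0] * n, [])

def dpro(A):
    """Diagonal product: one factor (d_i + 1) per main-diagonal entry."""
    out = 1
    for i, row in enumerate(A):
        out *= row[i] + 1
    return out

# Each A in T(1,1) generates dpro(A) matrices of size 3, and every size-3
# Tesler matrix arises exactly once this way.
print(sum(dpro(A) for A in tesler_matrices((1, 1))))   # 7
print(len(list(tesler_matrices((1, 1, 1)))))           # 7
```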

2.3. Method of Hallam and Sagan.
Sagan [18] has previously done work on why the characteristic polynomial of a poset factors. More recently, Hallam and Sagan [11] introduced a method for showing that the characteristic polynomial of a poset factors; we will apply their method, with the Boolean lattice, to prove Theorem 1.3. Their method is to take ranked posets $P_1, \ldots, P_k$ whose characteristic polynomials are known, and to consider the product $Q = P_1 \times \cdots \times P_k$. Recall that for such a product, $\chi(Q;q) = \prod_{i=1}^{k} \chi(P_i;q)$.
Then they define an equivalence relation $\sim$ that identifies elements of $Q$ so that $Q/\!\sim\ \cong P$. The process of identifying elements leaves the characteristic polynomial unchanged if the equivalence relation satisfies certain conditions. First, the relation must be homogeneous. Next, $\sim$ must preserve rank, so that $x \sim y$ implies $\rho(x) = \rho(y)$. Lastly, letting $\mu(\cdot)$ be the Möbius function on $Q/\!\sim$ and considering any nonzero $X \in Q/\!\sim$ with lower order ideal $L(X) \subseteq Q/\!\sim$, a third condition, (5), must hold; Hallam and Sagan refer to (5) as the summation condition, and we adopt this same terminology.

Lemma 2.4 (Hallam and Sagan [11]). Let $P, Q$ be posets as above and let $\sim$ be an equivalence relation on $Q$ which is homogeneous, preserves rank, and satisfies the summation condition. Then

$\chi(Q/\!\sim\,;q) = \chi(Q;q) = \prod_{i=1}^{k} \chi(P_i;q).$
Remark. Hence, given suitable $P_i$, we see that $\chi(P;q)$ factors. In their paper [11], Hallam and Sagan use claws $CL_n$ to construct their products; we will use the Boolean lattice instead.

3. The Tesler Poset
We first define the cover relation, introduced by Drew Armstrong [1], and then use this definition to prove a couple of useful facts that yield some intuition about the Tesler poset.

3.1. Definition of the Tesler Poset.
There are two cases of the cover relation for the matrix representation, depending on the location of the entries involved; see the example in Figure 3. The poset has a least element, $\hat{0}$, whose main diagonal is the hook sum vector and whose other entries are all zero. Hence, in the case of the hook sum vector $(1,1,\ldots,1)$, the minimal element is the identity matrix of size $n$.

Definition 3.1. Fix a hook sum vector $\alpha$. For $A, B \in \mathcal{T}(\alpha)$, we say that $A$ covers $B$ if $A$ is obtained from $B$ by choosing indices $i < j$ and either, for some $k > j$, decreasing $b_{ik}$ by 1 while increasing $b_{ij}$ and $b_{jk}$ by 1, or decreasing the diagonal entry $b_{ii}$ by 1 while increasing $b_{ij}$ and $b_{jj}$ by 1. The Tesler poset $P(\alpha)$ is the poset on $\mathcal{T}(\alpha)$ obtained as the transitive closure of this cover relation.
Remark. With the equivalent notion of a Tesler matrix as an integral flow on the complete directed graph, the cover relation for the Tesler poset can also be described in terms of integral flows. Abusing notation, let $A, B$ denote the integral flows corresponding to the Tesler matrices $A$ and $B$, respectively. Then the integral flow $A$ covers $B$ if there exist vertices $i < j < k$ such that the flow from $i$ to $k$ is 1 more in $B$ than in $A$, and the flows from $j$ to $k$ and from $i$ to $j$ are each 1 more in $A$ than in $B$. (See Figure 4.)

Example 3.2. In the poset of Figure 5, we see that Armstrong's conjecture is true for the case $n = 3$: collecting terms from the bottom up, we get $\chi(P(1^3);q) = q^3 - 3q^2 + 3q - 1 = (q-1)^3$.

Remark. By looking at the Hasse diagram of the Tesler poset $P(1^3)$ in Figure 5, we see that it is not a lattice.
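Example 3.2 can be replayed by computer. The sketch below (all names ours) generates $\mathcal{T}(1^3)$, applies the cover relation described in the remark above, computes Möbius values bottom-up, and recovers $\chi(P(1^3);q) = (q-1)^3$; it takes the rank of a matrix to be the sum of its off-diagonal entries, as justified by Proposition 3.3 below.

```python
def tesler_matrices(alpha):
    """Brute-force enumeration of T(alpha) (helper names are ours)."""
    n = len(alpha)
    def comps(total, parts):
        if parts == 1:
            yield (total,); return
        for first in range(total + 1):
            for rest in comps(total - first, parts - 1):
                yield (first,) + rest
    def build(i, above, rows):
        if i == n:
            yield tuple(rows); return
        for row in comps(alpha[i] + above[i], n - i):
            new_above = list(above)
            for j, v in enumerate(row[1:], start=i + 1):
                new_above[j] += v
            yield from build(i + 1, new_above, rows + [(0,) * i + row])
    return list(build(0, [0] * n, []))

def covers(B):
    """All matrices covering B: reroute one unit of flow i -> k through j."""
    n, out = len(B), []
    for i in range(n):
        for j in range(i + 1, n):
            if B[i][i] >= 1:  # diagonal case: flow to the sink rerouted via j
                A = [list(r) for r in B]
                A[i][i] -= 1; A[i][j] += 1; A[j][j] += 1
                out.append(tuple(tuple(r) for r in A))
            for k in range(j + 1, n):  # off-diagonal case
                if B[i][k] >= 1:
                    A = [list(r) for r in B]
                    A[i][k] -= 1; A[i][j] += 1; A[j][k] += 1
                    out.append(tuple(tuple(r) for r in A))
    return out

def char_poly(alpha):
    """Coefficient list of chi(P(alpha); q); index d = coefficient of q^d."""
    elems = tesler_matrices(alpha)
    n = len(alpha)
    rank = {A: sum(A[i][j] for i in range(n) for j in range(i + 1, n))
            for A in elems}
    top = max(rank.values())
    below = {A: set() for A in elems}  # strict lower order ideal of each A
    mu, chi = {}, [0] * (top + 1)
    for A in sorted(elems, key=rank.get):
        mu[A] = 1 if not below[A] else -sum(mu[B] for B in below[A])
        chi[top - rank[A]] += mu[A]
        for C in covers(A):
            below[C] |= below[A] | {A}
    return chi

print(char_poly((1, 1, 1)))  # [-1, 3, -3, 1], i.e. (q - 1)^3
```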

Proposition 3.3. The rank of a matrix in the Tesler poset $P(\alpha)$ equals the sum of its non-main-diagonal entries. That is, for $A \in \mathcal{T}(\alpha)$,

$\rho(A) = \sum_{i<j} a_{ij}.$
Proof. As we see in the definition of the cover relation, for any $A, B \in \mathcal{T}(\alpha)$, if $A$ covers $B$, then the sum of the non-main-diagonal entries of $A$ is one more than that of $B$. The minimal element has a non-main-diagonal sum of 0, and we get the desired result.

Corollary 3.4. The rank of the Tesler poset $P(\alpha)$ is

$\rho(P(\alpha)) = \sum_{i=1}^{n-1} \sum_{k=1}^{i} \alpha_k = \sum_{k=1}^{n-1} (n-k)\,\alpha_k.$

Proof. The maximal element $M \in P(\alpha)$ has entries $M_{i,i+1} = \sum_{k=1}^{i} \alpha_k$ for $1 \le i \le n-1$ and $M_{n,n} = \alpha_n$, with all other entries zero. This is easy to see from the integral flow representation. The result then follows from the previous proposition.
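Corollary 3.4 is easy to confirm for small hook sum vectors: brute-force $\mathcal{T}(\alpha)$ and compare the largest off-diagonal sum with that of the maximal element $M$ described in the proof. Helper names in the sketch below are ours.

```python
def tesler_matrices(alpha):
    """Brute-force enumeration of T(alpha) (helper names are ours)."""
    n = len(alpha)
    def comps(total, parts):
        if parts == 1:
            yield (total,); return
        for first in range(total + 1):
            for rest in comps(total - first, parts - 1):
                yield (first,) + rest
    def build(i, above, rows):
        if i == n:
            yield tuple(rows); return
        for row in comps(alpha[i] + above[i], n - i):
            new_above = list(above)
            for j, v in enumerate(row[1:], start=i + 1):
                new_above[j] += v
            yield from build(i + 1, new_above, rows + [(0,) * i + row])
    return list(build(0, [0] * n, []))

def rank(A):
    """Rank in the Tesler poset: sum of non-main-diagonal entries (Prop. 3.3)."""
    n = len(A)
    return sum(A[i][j] for i in range(n) for j in range(i + 1, n))

for alpha in [(1, 1, 1), (1, 0, 1), (1, 1, 0, 1)]:
    n = len(alpha)
    # off-diagonal sum of the maximal element M: M_{i,i+1} = alpha_1 + ... + alpha_i
    expected = sum(sum(alpha[:i]) for i in range(1, n))
    assert max(rank(A) for A in tesler_matrices(alpha)) == expected
    print(alpha, "has poset rank", expected)
```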
Remark. By the algorithm in Section 2.1, the last entry of the hook sum vector does not impact the poset. Hence $P(1,0,\ldots,0,1) \cong P(1,0,\ldots,0)$.

Proposition 3.5 ([16]). Let $a_n = (1,0,\ldots,0,1)$ be a hook sum vector of size $n$. Then $P(a_n) \cong B_{n-1}$.

Proof. We construct a map $\Phi$ from $\mathcal{T}(a_n)$ to subsets of $[n-1]$. (See Figure 6 for an example of the map $\Phi$.) We show that $\Phi$ is a bijection via induction and the Tesler generating algorithm described in Section 2.1. This clearly holds when $n = 1$, so suppose it holds for size $n$. The size $n+1$ generalized Tesler matrices with hook sum vector $a_{n+1}$ are then constructed from the size $n$ generalized Tesler matrices via the Tesler generating algorithm.
Consider an arbitrary generalized Tesler matrix with hook sum vector $a_n$; by the inductive hypothesis, it corresponds to some subset of $[n-1]$. If we choose to subtract the one nonzero main-diagonal entry, and hence add it to the $(n+1)$st column, this amounts to adding the element $n$ to our set. If we do not subtract the nonzero main-diagonal entry, then the $(n+1)$st column is all zeroes, and we do not add the element $n$ to our set. Therefore, this map is a bijection. Observe that the bijection is order preserving, since every cover of a generalized Tesler matrix with hook sum vector $a_n$ corresponds to a set containing the initial set: if there is a nonzero entry in the $i$th column with $i \ne 1$, then after applying the cover relation there must still be a nonzero entry in that column, since the hook sum there must remain 0; hence applying the cover relation does not remove any elements of the set. If $i = 1$, then there is only one matrix satisfying the hook sum vector $a_n$ with that column nonzero, and it corresponds to the empty set. As a result, $\Phi$ is order preserving, and we have the desired result.

Corollary 3.6. The characteristic polynomial of the poset $P(a_n)$ is $(q-1)^{n-1}$.

Proof. The characteristic polynomial of $B_n$ is known to be $(q-1)^n$; hence the previous proposition, $P(a_n) \cong B_{n-1}$, gives the desired result.

4. Application of Hallam–Sagan to the Tesler Poset

4.1. Initial Case. We can now use the Hallam–Sagan method discussed in Section 2.3 to calculate the characteristic polynomial of the Tesler poset. In this subsection, we consider the initial case, which serves as a motivating example. Let $\alpha \in \{0,1\}^n$ be such that $\alpha_{n-1} = 0$, where $e_i$ denotes the $i$th elementary vector. In Figure 7, for instance, we have $\alpha = (1,0,1)$ and $\alpha + e_2 = (1,1,1)$. We want to compute the characteristic polynomial of the poset $P(\alpha + e_{n-1})$ using the characteristic polynomials of $P(\alpha)$ and $B_1$.
We construct our product poset by considering a pair of maps between $\mathcal{T}(\alpha)$ and $\mathcal{T}(\alpha + e_{n-1})$. Let $\phi_1$ be the map that adds 1 to the $(n-1)$st main-diagonal entry, and let $\phi_2$ be the map that adds 1 to each of the entries in positions $(n-1,n)$ and $(n,n)$. It is easy to check that these maps are well defined and that they form a poset isomorphic to $B_1$, with $\phi_1 \lessdot \phi_2$. In this motivating example, we define our equivalence relation $\sim$ on the product poset $P(\alpha) \times B_1$. As we will show in Section 4.2, $\sim$ satisfies all of the conditions of Lemma 2.4, so $\chi(P(\alpha + e_{n-1});q) = \chi(P(\alpha);q)\,(q-1)$. Figure 9 illustrates our method in the case $n = 3$, with the equivalent elements enclosed in a green rectangle.

4.2. General Case.
We will now generalize the idea from the previous subsection, which will lead to our main theorem. Fix $n, r \in \mathbb{N}$ with $r < n$, and let $\alpha \in \{0,1\}^n$ be such that $\alpha + e_{n-r+1} \in \{0,1\}^n$. The previous subsection considered the case $r = 2$. We seek to show that

$\chi(P(\alpha + e_{n-r+1});q) = \chi(P(\alpha);q)\,(q-1)^{r-1}.$

We will consider a poset of maps from $\mathcal{T}(\alpha)$ to $\mathcal{T}(\alpha + e_{n-r+1})$. While there are certainly other such maps, we consider a natural, intuitive set of maps which have a nice structure and turn out to be sufficient. In order for $\phi : \mathcal{T}(\alpha) \to \mathcal{T}(\alpha + e_{n-r+1})$ to be well defined, it must increase the $(n-r+1)$st hook sum by 1 while not changing the other hook sums. As a result, we consider maps which can be thought of as adding an $r \times r$ upper-triangular matrix with hook sum vector $(1, 0^{r-1})$ — that is, an element of $\mathcal{T}(1,0^{r-1})$ — in the lower-right corner. We previously showed that the poset of these matrices is isomorphic to a Boolean lattice, so we often label these maps by their corresponding subsets. (In Figure 8, $*$ indicates no change to the entry.) Let $Q_A$ be the subposet of $P(\alpha + e_{n-r+1})$ on the matrices $\phi_A(\mathcal{T}(\alpha))$.

Proposition 4.2. We have the following facts:

(1) For each $A \subseteq [r-1]$, the subposet $Q_A$ is isomorphic to $P(\alpha)$.

(2) $\bigcup_{A \subseteq [r-1]} \phi_A(\mathcal{T}(\alpha)) = \mathcal{T}(\alpha + e_{n-r+1})$.

Proof. (1) Clearly $\phi_A$ is an injective, order-preserving map, so the posets are isomorphic.

(2) Since all of these maps are well defined, we clearly have $\bigcup_{A \subseteq [r-1]} \phi_A(\mathcal{T}(\alpha)) \subseteq \mathcal{T}(\alpha + e_{n-r+1})$. Now consider the other direction. Let $A \in \mathcal{T}(\alpha + e_{n-r+1})$; then there must be a nonzero entry in the $(n-r+1)$st row. If this nonzero entry is also in the $(n-r+1)$st column, one can check that $A \in \phi_\emptyset(\mathcal{T}(\alpha))$. Otherwise, by considering the columns with nonzero entries, we can construct a set $B \subseteq [r-1]$ in the same manner as in Proposition 3.5 such that $A \in \phi_B(\mathcal{T}(\alpha))$: the element $i$ is in $B$ if and only if there is a nonzero entry in the $(n-i+1)$st column. As a result, $\mathcal{T}(\alpha + e_{n-r+1}) \subseteq \bigcup_{A \subseteq [r-1]} \phi_A(\mathcal{T}(\alpha))$.
We can form a poset of the maps $\phi_A$ that is isomorphic to the Boolean lattice, as we did in our motivating example, and consider the product poset $P(\alpha) \times B_{r-1}$. Since the maps $\phi_A$ are additive, we often view $\phi_A$ as a matrix $S_A \in \mathcal{T}(1, 0^{r-1})$.

Definition 4.3. We define the equivalence relation $\sim$ on $P(\alpha) \times B_{r-1}$ by

$(A, \phi_i) \sim (B, \phi_j) \iff A + S_i = B + S_j,$

where the equality is matrix equality. We have to be careful with how we define the addition of these matrices, as the dimensions of the square matrices do not match. We extend the $r \times r$ matrix $S$ to an $n \times n$ matrix as follows: the entry $S_{i,j}$ becomes the entry $S_{i+(n-r),\,j+(n-r)}$, and all other entries of $S$ are zero. Essentially, we place the matrix in the lower-right corner in order to make the addition of matrices defined.

Clearly this is a homogeneous equivalence relation which preserves rank, as it satisfies the conditions discussed in Section 2.3. Therefore, $(P(\alpha) \times B_{r-1})/\!\sim$ is a valid poset. We now seek to show that the summation condition (5) holds. In order to do this, we first need some technical lemmas. The first lemma restricts which elements can be in the same equivalence class.

Lemma 4.4. Let $A_0$ be the minimal element of $P(\alpha)$ and $\phi_0$ the minimal element of $B_{r-1}$. If $\phi_d \ne \phi_0$, then $(A_0, \phi_d) \not\sim (A, \phi_0)$ for every $A \in P(\alpha)$.

Proof. We show that $A_0 + S_d \ne A + S_0$ by showing that they differ in the $(n-r+1)$st entry along the main diagonal; that is, the values $(A_0 + S_d)_{(n-r+1,n-r+1)}$ and $(A + S_0)_{(n-r+1,n-r+1)}$ are different. On the left, this entry equals 0, as it is 0 in both matrices being added: it is 0 in $A_0$ since $\alpha_{n-r+1} = 0$, and it is 0 in $S_d$ since otherwise $\phi_d$ would be the minimal element. On the right, $S_0$ has a 1 in this particular entry, so the nonnegativity of the entries of $A$ gives that this entry is at least 1. Hence we do not have matrix equality, and the two elements are not equivalent under $\sim$. See Figure 10 below for a visual representation of this argument in a particular case.
Figure 10. The case when $n = 5$ and $r = 3$. Note that $*$ indicates that we do not know the particular entry.

Now we fix an element $X \in (P(\alpha) \times B_{r-1})/\!\sim$. The next lemma dictates which elements can be in the lower order ideal $L(X)$. For the rest of the section, we let $A_0$ be the minimal element of $P(\alpha)$ and $\phi_0$ the minimal element of $B_{r-1}$. We say that $A \in P(\alpha)$ is first coordinate isolated if $(A, \phi_0) \in L(X)$ but $(A, \phi) \notin L(X)$ for every $\phi \ne \phi_0$; second coordinate isolated is defined symmetrically.

Lemma 4.6. The following cannot hold simultaneously:

(1) There exists a non-minimal $A \in P(\alpha)$ that is first coordinate isolated.

(2) There exists a non-minimal $\phi_d \in B_{r-1}$ that is second coordinate isolated.

Proof. We proceed by contradiction: suppose that $\phi_d \in B_{r-1}$ is second coordinate isolated and $A \in P(\alpha)$ is first coordinate isolated. Since $(A, \phi_0)$ and $(A_0, \phi_d)$ are in $L(X)$, there exists a path in $L(X)$ from each of them to a member of the equivalence class $[X]$. By a path, we mean a sequence of covers in the poset; such paths stay in $L(X)$ and are guaranteed to exist precisely because we are working in a lower order ideal. (See Figure 11 for a pictorial representation of these paths.) Consider such a path $\Gamma_1 : (A, \phi_0) \to \cdots \to (M_l, \phi_l) \sim X$. In the first cover of the path, we must change the first coordinate: if we changed the second coordinate, we would get $(A, \phi') \in L(X)$ for some $\phi' \ne \phi_0$, contradicting the hypothesis that $A$ is first coordinate isolated. Therefore, the first cover in $\Gamma_1$ is $(A, \phi_0) \to (A_1, \phi_0)$ for some $A_1 > A$ in $P(\alpha)$. Now suppose the second cover took $\Gamma_1$ to $(A_1, \phi')$ for some $\phi' \ne \phi_0$. This would imply $(A_1, \phi') \in L(X)$ and hence $(A, \phi') \in L(X)$, since $L(X)$ is a lower order ideal — again contradicting that $A$ is first coordinate isolated. Hence the path continues $\Gamma_1 : (A, \phi_0) \to (A_1, \phi_0) \to (A_2, \phi_0)$ for some $A_2 > A_1$. Continuing this argument, we see that the second coordinate along $\Gamma_1$ is constantly $\phi_0$, so $\Gamma_1$ ends at an element $(M, \phi_0) \sim X$ with $M$ non-minimal. By a similar argument, $\Gamma_2$ ends at an element $(A_0, \phi_e) \sim X$ with $\phi_e$ non-minimal. In Lemma 4.4, we showed that such elements are necessarily in different equivalence classes; hence we have reached our contradiction.

Figure 11. Pictorial representation of the paths $\Gamma_1, \Gamma_2$ in the proof of Lemma 4.6.

Observe that the left-hand side of the summation condition (5) can be written in two ways, grouping the sum over $L(X)$ by the second coordinate, as in (6), or by the first coordinate, as in (7). Since we are considering the lower order ideal of a product, each grouping is straightforward to evaluate.
By the product structure of the lower order ideal $L(X)$, for each second coordinate $S_i$ there is a unique maximum $Y_k \in P(\alpha)$ among the elements of $L(X)$ with second coordinate $S_i$. This follows by supposing that there are at least two incomparable relative maximal elements and using an argument very similar to the previous lemmas. By the recursive nature of the Möbius function, so long as $Y_k$ is not the minimal element of $P(\alpha)$, the inner sum in (7) is always 0; similarly, so long as $S_i$ is not the minimal element of $B_{r-1}$, the inner sum in (6) is always 0. Thus, it suffices to show that we cannot have both an $A \in P(\alpha)$ which is first coordinate isolated and a $\phi_d \in B_{r-1}$ which is second coordinate isolated. We showed this precise statement in Lemma 4.6. Hence (5) holds, as we are adding up all zeroes in (6) or in (7).
We are now ready to prove the lemma that we use in our main theorem.

Lemma 4.8. Let $\alpha \in \{0,1\}^n$ with $\alpha_{n-r+1} = 0$. Then

$\chi(P(\alpha + e_{n-r+1});q) = \chi(P(\alpha);q)\,(q-1)^{r-1}.$
Proof. It suffices to show that $P(\alpha + e_{n-r+1}) \cong (P(\alpha) \times B_{r-1})/\!\sim$. We have already shown a bijection between the elements of the posets; we must now show that this bijection is order preserving. This follows from the fact that both posets carry the same Tesler cover relation, together with the definition of cover in a product poset. In the forward direction, this is immediate from the definition of cover in a product poset. In the other direction, a cover in $P(\alpha + e_{n-r+1})$ changes a nonzero entry lying in one of the two coordinates, and hence can be realized as a cover in that coordinate. Thus $P(\alpha + e_{n-r+1}) \cong (P(\alpha) \times B_{r-1})/\!\sim$, and using Lemma 2.4 we get

$\chi(P(\alpha + e_{n-r+1});q) = \chi(P(\alpha);q)\,\chi(B_{r-1};q) = \chi(P(\alpha);q)\,(q-1)^{r-1}.$

We are now ready to state and prove our main theorem. Note that we have a slight modification in our notation for the hook sum vector $\alpha$, for a cleaner result.

Theorem 4.9. Let $\alpha = (\alpha_{n-1}, \ldots, \alpha_0) \in \{0,1\}^n$ and let $w(\alpha) = \sum_{i=0}^{n-1} i\,\alpha_i$. Then $\chi(P(\alpha);q) = (q-1)^{w(\alpha)}$.

Proof. We iterate Lemma 4.8 for each $\alpha_i = 1$ where $i \in [2, n-1]$. Note that if $\alpha_i = 0$, we do not change the poset, so the characteristic polynomial is unchanged; one way of representing this using Lemma 4.8 is to multiply by $(q-1)^{\alpha_i(n-i)}$, which multiplies the characteristic polynomial of the unchanged poset by 1 when $\alpha_i = 0$ and by the desired amount when $\alpha_i = 1$. We start with the hook sum vector determined by $\alpha_{n-1}$ and $\alpha_0$, then apply Lemma 4.8 to get the characteristic polynomial for the vector including $\alpha_1$, as in our motivating example. We then do the same to include $\alpha_2$, and iterate until we have the characteristic polynomial of the poset corresponding to the full hook sum vector $\alpha$. Collecting powers, we obtain $(q-1)^{w(\alpha)}$, as desired.

Corollary 4.10. Conjecture 1.2 holds: $\chi(P(1^n);q) = (q-1)^{\binom{n}{2}}$.

Proof. Since $w(1^n) = \binom{n}{2}$, the result follows by Theorem 4.9.

Note that Theorem 4.9 also generalizes the well-known result on the Boolean lattice, as the Boolean lattice is isomorphic to the Tesler poset $P(1,0,\ldots,0)$; indeed, $w(1,0,\ldots,0) = w(1,0,\ldots,0,1) = n-1$.

Remark. This result also gives another method of generating the Tesler matrices $\mathcal{T}(1^n)$, different from the Tesler generating algorithm discussed in Section 2.1. While this method is certainly less efficient than the Tesler generating algorithm, it is possible to construct the set $\mathcal{T}(1^n)$ in this manner without knowledge of the sets $\mathcal{T}(1^1), \ldots, \mathcal{T}(1^{n-1})$, using only the well-known Boolean lattice.
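Theorem 4.9 can be verified by direct computation for small binary hook sum vectors; the following sketch (names ours) computes $\chi(P(\alpha);q)$ from the cover relation and compares it with the coefficients of $(q-1)^{w(\alpha)}$.

```python
from math import comb

def tesler_matrices(alpha):
    """Brute-force enumeration of T(alpha) (helper names are ours)."""
    n = len(alpha)
    def comps(total, parts):
        if parts == 1:
            yield (total,); return
        for first in range(total + 1):
            for rest in comps(total - first, parts - 1):
                yield (first,) + rest
    def build(i, above, rows):
        if i == n:
            yield tuple(rows); return
        for row in comps(alpha[i] + above[i], n - i):
            new_above = list(above)
            for j, v in enumerate(row[1:], start=i + 1):
                new_above[j] += v
            yield from build(i + 1, new_above, rows + [(0,) * i + row])
    return list(build(0, [0] * n, []))

def covers(B):
    """Covers of B: reroute one unit of flow i -> k through a middle vertex j."""
    n, out = len(B), []
    for i in range(n):
        for j in range(i + 1, n):
            if B[i][i] >= 1:  # diagonal case
                A = [list(r) for r in B]
                A[i][i] -= 1; A[i][j] += 1; A[j][j] += 1
                out.append(tuple(tuple(r) for r in A))
            for k in range(j + 1, n):  # off-diagonal case
                if B[i][k] >= 1:
                    A = [list(r) for r in B]
                    A[i][k] -= 1; A[i][j] += 1; A[j][k] += 1
                    out.append(tuple(tuple(r) for r in A))
    return out

def char_poly(alpha):
    """Coefficients of chi(P(alpha); q); index d holds the coefficient of q^d."""
    elems = tesler_matrices(alpha)
    n = len(alpha)
    rank = {A: sum(A[i][j] for i in range(n) for j in range(i + 1, n))
            for A in elems}
    top = max(rank.values())
    below = {A: set() for A in elems}
    mu, chi = {}, [0] * (top + 1)
    for A in sorted(elems, key=rank.get):
        mu[A] = 1 if not below[A] else -sum(mu[B] for B in below[A])
        chi[top - rank[A]] += mu[A]
        for C in covers(A):
            below[C] |= below[A] | {A}
    return chi

def w(alpha):
    # alpha is read as (alpha_{n-1}, ..., alpha_0) from left to right
    n = len(alpha)
    return sum((n - 1 - i) * a for i, a in enumerate(alpha))

for alpha in [(1, 1), (1, 0, 1), (1, 1, 1), (1, 1, 0, 1)]:
    W = w(alpha)
    assert char_poly(alpha) == [(-1) ** (W - d) * comb(W, d) for d in range(W + 1)]
    print(alpha, "->", "(q-1)^%d" % W)
```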
A natural question is whether this result extends to all generalized Tesler matrices. In the general case, Lemma 4.4 and Lemma 4.6 do not hold. For other $\alpha \in \mathbb{N}^n$, we get factors besides $(q-1)$, as we see in (8); moreover, the characteristic polynomial need not factor over $\mathbb{Z}$, as we see in (9). (See Figure 13 in the Appendix.) However, we do have the following divisibility results as corollaries of Theorem 4.9. The question of $(q-1)$ divisibility in the Tesler poset was initially considered by Drew Armstrong and then communicated in [1].

Corollary 4.12. Let $\alpha \in \mathbb{N}^k$ and consider the Tesler poset $P(1, \alpha)$. Then

$\chi(P(1,\alpha);q) = (q-1)^{k}\,\chi(P(\alpha);q).$

Proof. We start with the posets $P(\alpha)$ and $B_k$, consider the product $P(\alpha) \times B_k$, and apply the same equivalence relation from Definition 4.3, noting that the results of Lemma 4.4 and Lemma 4.6 also hold in this case. As a result, we can use Lemma 2.4 and Proposition 3.5 to get the stated factorization.

We can now use Corollary 4.12 and Lemma 4.8 to get some results about factors of the characteristic polynomial when there are leading and trailing binary words in the hook sum vector.
Collecting powers of $(q-1)$ and then reordering the sum gives the desired factor of $(q-1)^{w_1(\beta)}$. Next, we consider the statement in (11). We iterate through the binary word $\beta$, starting with $\beta_1$ and ending with $\beta_k$, and use the result of Lemma 4.8. After collecting powers, we get a factor of $(q-1)^{w_2(\beta)}$.
Using the Tesler generating algorithm discussed in Section 2.1, the only way to obtain such a diagonal is to start with any main diagonal of size $(n-1)$ and subtract everything from all entries of the original diagonal. As a result, each Tesler matrix of size $(n-1)$ yields a unique Tesler matrix of size $n$ with diagonal $(0,\ldots,0,n)$, thus proving the second statement. Finally, for the last part of the proposition, let $a_n$ be the coefficient of the term of degree $3 \cdot 2^{n-2}$ in $A_n(q)$. We simply need to show that $a_n$ satisfies the same recurrence relation as the sequence $\{2^n - n - 1\}$, namely $a_n = (n-1) + 2a_{n-1}$. One can check that the terms of degree $3 \cdot 2^{n-2}$ in $A_n(q)$ come from the diagonal $(2,1,\ldots,1,0)$ and valid rearrangements of it. Starting with diagonals of the form $(2,1,\ldots,1,0)$ of the previous size, we can either do nothing or subtract 1 from the entry equal to 2; this accounts for the $2a_{n-1}$. We get the $(n-1)$ from noting that we can also generate a diagonal of this form by starting from the unique Tesler matrix with main diagonal $(1,\ldots,1)$ and subtracting any one of its $(n-1)$ main-diagonal entries equal to 1.
Note that given $k \in \mathbb{N}$ and the Armstrong polynomial $A_k(q)$, it is possible to read off $T(1^{k-1})$, $T(1^k)$, and $T(1^{k+1})$ from this polynomial, as we show in the following proposition.
Proposition 5.5. The Armstrong polynomial has the following characteristics: the coefficient of its lowest-degree term, $q^{n+1}$, is $T(1^{n-1})$; its value $A_n(1)$ is $T(1^n)$; and its derivative at 1, $A_n'(1)$, is $T(1^{n+1})$.

Proof. First, note that $\mathrm{dpro}(A) = \prod_i (d_i + 1) \ge 1 + \sum_i d_i = n+1$ for every $A \in \mathcal{T}(1^n)$, with equality exactly for the diagonal $(0,\ldots,0,n)$; by the previous proposition, these matrices are counted by $T(1^{n-1})$. The second statement is immediate, and the third follows from the identity $T(1^{n+1}) = \sum_{A \in \mathcal{T}(1^n)} \mathrm{dpro}(A)$ of Section 2.1.
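Reading the Armstrong polynomial as $A_n(q) = \sum_{A \in \mathcal{T}(1^n)} q^{\mathrm{dpro}(A)}$ — our interpretation of the definition, consistent with Section 6 — Proposition 5.5, together with the coefficient claim of the preceding proof, can be checked for small $n$:

```python
from itertools import product
from collections import Counter

def diagonal_counts(alpha):
    # multiset of main diagonals of T(alpha), via the generating algorithm
    diags = Counter({(alpha[0],): 1})
    for a in alpha[1:]:
        nxt = Counter()
        for d, mult in diags.items():
            for dp in product(*(range(di + 1) for di in d)):
                nxt[dp + (sum(d) - sum(dp) + a,)] += mult
        diags = nxt
    return diags

def armstrong_poly(n):
    """A_n(q) as a dict {exponent: coefficient}, assuming
    A_n(q) = sum over T(1^n) of q^dpro(A)."""
    poly = Counter()
    for d, mult in diagonal_counts((1,) * n).items():
        e = 1
        for di in d:
            e *= di + 1  # dpro of any matrix with this diagonal
        poly[e] += mult
    return dict(poly)

def T(n):
    return sum(diagonal_counts((1,) * n).values())

for n in range(2, 6):
    A = armstrong_poly(n)
    assert min(A) == n + 1 and A[n + 1] == T(n - 1)      # lowest term
    assert sum(A.values()) == T(n)                        # A_n(1)
    assert sum(e * c for e, c in A.items()) == T(n + 1)   # A_n'(1)
    assert A[3 * 2 ** (n - 2)] == 2 ** n - n - 1          # coefficient a_n
print("checked n = 2..5")
```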
We can now use the observations in Proposition 5.4 regarding the Armstrong polynomial to get the following bounds on the number of Tesler matrices.

Theorem 5.6.
Proof. We use a method similar to our first approximation. This time, however, we know by Proposition 5.5 that exactly $T(1^{n-1})$ of the terms have diagonal product $(n+1)$. We now assume that the remaining Tesler matrices have diagonal product $2n$, the second-lowest possible value. Using this, we note that

$T(1^{n+1}) = \sum_{A \in \mathcal{T}(1^n)} \mathrm{dpro}(A) \ \ge\ (n+1)\,T(1^{n-1}) + 2n\left(T(1^n) - T(1^{n-1})\right).$

We now use the previously known bound in (12), $T(1^n) \ge n\,T(1^{n-1})$, and iterate to get our desired lower bound. We get the upper bound by the same method and further reductions.
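The inequality at the heart of this proof — in our reading, $T(1^{n+1}) \ge (n+1)\,T(1^{n-1}) + 2n\,(T(1^n) - T(1^{n-1}))$ — can be checked numerically for small $n$:

```python
from itertools import product
from collections import Counter

def T(n):
    """T(1^n) via the diagonal-tracking version of the generating algorithm."""
    diags = Counter({(1,): 1})
    for _ in range(n - 1):
        nxt = Counter()
        for d, mult in diags.items():
            for dp in product(*(range(di + 1) for di in d)):
                nxt[dp + (sum(d) - sum(dp) + 1,)] += mult
        diags = nxt
    return sum(diags.values())

vals = {n: T(n) for n in range(1, 8)}
print([vals[n] for n in range(1, 8)])  # [1, 2, 7, 40, 357, 4820, 96030]
for n in range(2, 7):
    lhs = vals[n + 1]
    rhs = (n + 1) * vals[n - 1] + 2 * n * (vals[n] - vals[n - 1])
    # the claimed inequality, plus the weaker consequence via T(1^n) >= n T(1^{n-1})
    assert lhs >= rhs >= (2 * n - 1) * vals[n]
```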
Remark. We note that the lower bound in (13) is better than $n!$ and is $O\!\left(\left(\frac{2n}{e}\right)^{n-1}\right)$. Note that this still does not give an affirmative answer to Question 1.4, and that the upper bound in (13), while slightly tighter, is still $e^{\Theta(n^2)}$.

6. Understanding Different Hook Sum Vectors
Recall that $\mathcal{T}(1^k, 0^{n-k})$ denotes the set of generalized Tesler matrices with hook sum vector $(1,\ldots,1,0,\ldots,0)$, with $k$ 1's and $(n-k)$ 0's, and that $T(1^k, 0^{n-k})$ denotes the number of such matrices. In this section, we write $A_n(\alpha, q)$ for the Armstrong polynomial of generalized Tesler matrices with hook sum vector $\alpha$, so that the Armstrong polynomial of the previous section is $A_n(q) := A_n(1,1,\ldots,1,q)$.
First, we consider $T(1, 0^{n-1})$. The set $\mathcal{T}(1, 0^{n-1})$ can be generated by the method discussed in Section 2.1. There is only one possible diagonal up to reordering, namely $(1,0,\ldots,0)$, so all elements have the same diagonal product and, as a result, the Armstrong polynomial is always of the form $A_{n-1}(1, 0^{n-2}, q) = T(1, 0^{n-2})\,q^2$, which yields $T(1, 0^{n-1}) = 2^{n-1}$ by Proposition 5.5. Hence, letting $T_1(x)$ be the generating function for the number of generalized Tesler matrices with hook sum vector $(1, 0^{n-1})$, we get

$T_1(x) = \sum_{n \ge 1} 2^{n-1} x^n = \frac{x}{1-2x}.$

Now let us consider $T(1^2, 0^{n-2})$. These matrices have recently been studied in [5,12]. For the same reason as above, we can consider the corresponding Armstrong polynomial. There are only two possible diagonals up to reordering, $(2,0,\ldots,0)$ and $(1,1,0,\ldots,0)$, with diagonal products 3 and 4 respectively.

Proposition 6.1. Let $A_{n-1}(1^2, 0^{n-3}, q) = a_{n-1}q^3 + b_{n-1}q^4$. Then

$A_n(1^2, 0^{n-2}, q) = (2a_{n-1} + b_{n-1})\,q^3 + (a_{n-1} + 3b_{n-1})\,q^4.$

Proof. We only need to prove that the coefficient of $q^3$ is as stated, since the other coefficient is then determined by the total number of matrices in the set, which we know from $A_{n-1}(1^2, 0^{n-3}, q)$. Thus, we consider the ways to get a diagonal of type $(2,0,\ldots,0)$ from the previous set. First, we can do nothing in the diagonal part of the Tesler generating process, appending a zero to each diagonal of type $(2,0,\ldots,0)$ of the previous size. Second, for every matrix of the previous size, we can subtract everything from the diagonal, yielding the diagonal $(0,\ldots,0,2)$. As a result, we generate $2a_{n-1} + b_{n-1}$ distinct matrices with a diagonal of type $(2,0,\ldots,0)$.

Proposition 6.2. Let $t_n := T(1^2, 0^{n-2})$. Then $t_n \ge 3^{n-1}$ for $n \ge 5$.
Proof. Generating these matrices with a computer program, we find that $t_5 = 90$. Thus, since 3 is the smallest possible diagonal product, we have

$t_n \ge 3t_{n-1} \ge \cdots \ge 3^{n-5}t_5 = 90 \cdot 3^{n-5} \ge 3^{n-1}.$

Proposition 6.3 (See also [12]). The ordinary generating function for $t_n$ is

$\sum_{n \ge 2} t_n x^n = \frac{x^2(2-3x)}{1-5x+5x^2}.$

Proof. By Proposition 6.1, $t_n$ satisfies the recurrence relation $t_{n+1} = 5t_n - 5t_{n-1}$. From this difference equation and the initial values $t_2 = 2$ and $t_3 = 7$, we can find the generating function for $t_n$ via standard methods.

Proposition 6.4. For all $k$, there exists some $N_k \in \mathbb{N}$ such that for all $n \ge N_k$ we have $T(1^k, 0^{n-k}) \ge (k+1)^{n-1}$.

Proof. For a given $k$, the smallest possible diagonal product in $\mathcal{T}(1^k, 0^{n-k})$ is $(k+1)$. Using methods similar to those for generating diagonals of the form $(0,0,\ldots,0,k)$, we can see that fewer than half of the elements of $\mathcal{T}(1^k, 0^{n-k})$ have diagonal product $(k+1)$. Hence, noting that the next-lowest diagonal product is $2k$, the expected value of the diagonal product is at least $(3k+1)/2$. Since $(3k+1)/(2k+2) > 1$ for $k \ge 2$, we will eventually have an $N_k$ such that $T(1^k, 0^{N_k-k}) \ge (k+1)^{N_k-1}$.

6.1. Conjectures and Future Work. The sequence $\{T(1^n)\}$ appears in the OEIS as A008608. Based on the 25 entries of this sequence and the insight from Proposition 6.4, we make the following conjecture.

Conjecture 6.5. Let $n, k \in \mathbb{Z}$ be such that $n \ge k \ge 11$. Then $T(1^k, 0^{n-k}) \ge (k+1)^{n-1}$.

Remark. This conjecture would prove that for $n \ge 11$ we have $T(1^n) \ge (n+1)^{n-1}$, which is significant because $(n+1)^{n-1}$ is the value of (1) at $t = 1$ and $q = 1$ (i.e., the dimension of $DH_n$). We note that for $k = 11$ the conjecture holds, as $T(1^{11}) = 515{,}564{,}231{,}770$, which is bigger than $12^{10}$. Thus, if we can show that $T(1^{n+1}) \ge e \cdot (n+2)\,T(1^n)$ for $n \ge 11$, then we have proven the conjecture.
Here the number $e$ comes from looking at the ratio of consecutive target values, $\frac{(n+2)^n}{(n+1)^{n-1}} = \left(\frac{n+2}{n+1}\right)^{n-1}(n+2)$, whose first factor is bounded above by $e$. The statistics dinv and area, discussed in more detail in [8], are used in the now-settled Haglund–Loehr conjecture [10]: Carlsson and Mellit show in [4] that

(14) $\mathrm{Hilb}(DH_n; q, t) = \sum_{\pi} q^{\mathrm{dinv}(\pi)} t^{\mathrm{area}(\pi)},$

where the sum is over parking functions $\pi$ of size $n$.
It was shown in [3] that plugging $t = 1$ and $q = 1$ into (15) gives

(16) $(n+1)^{n-1} = \sum_{A=(a_{i,j}) \in \mathcal{PT}(1^n)}\ \prod_{a_{i,j}>0} a_{i,j},$

since each weight $[a_{i,j}]_{q,t}$ specializes to $a_{i,j}$ at $q = t = 1$. We note that the only terms that survive on the RHS of (15) after plugging in $t = 1$ and $q = 1$ are Tesler matrices with exactly one nonzero element in each row; these are called permutation Tesler matrices, and we write $\mathcal{PT}(1^n)$ for the set of them. This relationship between parking functions and Tesler matrices adds intrigue to having the number of parking functions eventually be a lower bound for the number of Tesler matrices, since this would imply a great deal of cancellation in the alternating sum on the RHS of (15).

We will now explore a way to affirmatively answer Question 1.4 using $\chi(P(1^n);q)$.

Proposition 6.6. Let $\mu(\cdot)$ be the Möbius function of the Tesler poset $P(1^n)$. If $|\mu(\hat{0}, A)| \le f(n)$ for all $A \in \mathcal{T}(1^n)$, then

$T(1^n) \ge \frac{2^{\binom{n}{2}}}{f(n)}.$

Proof. By Corollary 4.10, $\chi(P(1^n);q) = (q-1)^{\binom{n}{2}}$, so evaluating at $q = -1$ gives

$2^{\binom{n}{2}} = \left|\chi(P(1^n);-1)\right| = \Big|\sum_{A} \mu(\hat{0},A)\,(-1)^{\rho(P)-\rho(A)}\Big| \le \sum_{A \in \mathcal{T}(1^n)} \big|\mu(\hat{0},A)\big|.$

Hence, if $|\mu(\hat{0},A)| \le f(n)$ for all $A \in \mathcal{T}(1^n)$, then $T(1^n) \cdot f(n) \ge 2^{\binom{n}{2}}$, which gives the desired result.
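Identity (16) can be verified directly for small $n$: enumerate $\mathcal{T}(1^n)$, keep the permutation Tesler matrices (exactly one nonzero entry per row), and sum the products of their nonzero entries. Helper names below are ours.

```python
def tesler_matrices(alpha):
    """Brute-force enumeration of T(alpha) (helper names are ours)."""
    n = len(alpha)
    def comps(total, parts):
        if parts == 1:
            yield (total,); return
        for first in range(total + 1):
            for rest in comps(total - first, parts - 1):
                yield (first,) + rest
    def build(i, above, rows):
        if i == n:
            yield tuple(rows); return
        for row in comps(alpha[i] + above[i], n - i):
            new_above = list(above)
            for j, v in enumerate(row[1:], start=i + 1):
                new_above[j] += v
            yield from build(i + 1, new_above, rows + [(0,) * i + row])
    yield from build(0, [0] * n, [])

def permutation_tesler_sum(n):
    """Sum over permutation Tesler matrices of the product of nonzero entries."""
    total = 0
    for A in tesler_matrices((1,) * n):
        rows = [[v for v in row if v > 0] for row in A]
        if all(len(r) == 1 for r in rows):  # exactly one nonzero per row
            p = 1
            for r in rows:
                p *= r[0]
            total += p
    return total

print([permutation_tesler_sum(n) for n in range(2, 5)])  # [3, 16, 125]
```

The printed values are $(n+1)^{n-1}$ for $n = 2, 3, 4$, as (16) predicts.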
Remark. One could find such a bound on the Möbius function of the Tesler poset $P(1^n)$ by analyzing the sizes of the equivalence classes that arise in the Hallam–Sagan method of Section 2.3: in their Lemma 2.4, they show that the Möbius function of an equivalence class $[X]$ equals the sum of the Möbius function evaluated at the elements of $[X]$.