Integer matrix factorisations, superalgebras and the quadratic form obstruction

We identify and analyse obstructions to factorisation of integer matrices into products $N^T N$ or $N^2$ of matrices with rational or integer entries. The obstructions arise as quadratic forms with integer coefficients and raise the question of the discrete range of such forms. They are obtained by considering matrix decompositions over a superalgebra. We further obtain a formula for the determinant of a square matrix in terms of adjugates of these matrix decompositions, as well as identifying a \textit{co-Latin} symmetry space.


Introduction
The question of whether a given square integer matrix $M$ can be factorised into a product of two integer matrices, either in the form of a square $M = N^2$ or (in the case of a symmetric positive definite matrix) in the form $M = N^T N$, has a long history in number theory. It is known that if $n \le 7$ and $M$ is an $n \times n$ symmetric positive definite matrix with integer entries and determinant 1, then a factorisation $M = N^T N$ with $N$ an $n \times n$ matrix with integer entries exists. However, there are examples of such matrices of dimension $n = 8$ which cannot be factorised in this way. This result is mentioned by Taussky (see [14, p. 812]) and goes back to Hermite, Minkowski, and Mordell [11].
The number theoretic properties relating to the factorisation of symmetric positive definite n × n integer matrices M with fixed determinant have classical connections to the theory of positive definite quadratic forms in n variables, see e.g. [12] and the above references.
In particular, Mordell considered the similarity classes of $n \times n$ matrices with determinant 1, where two such matrices $L, M$ are in the same class if there exists a unimodular integral matrix $N$ such that $M = N^T L N$. The number of such similarity classes is denoted by $h_n$. A matrix $M$ is in the similarity class of $I_n$ (the identity matrix) if and only if there exists a factorisation $M = N^T N$ with an integer matrix $N$. This implies that the quadratic form classically associated with the symmetric matrix $M$ can be written as
$$q(x) = x^T M x = (Nx)^T (Nx) = y^T y = \sum_{i=1}^n y_i^2, \qquad (1)$$
where $y = Nx$, and $N$ has determinant 1. Thus, the factorisation can be used to write the quadratic form $q(x)$ as a sum of squares of $n$ linear forms. When $n = 8$, such a factorisation may not exist, as Minkowski proved in 1911 that $h_n \ge [1 + n/8]$, so $h_n \ge 2$ if $n = 8$. Mordell showed that $h_8 = 2$ [12], and Ko showed that $h_9 = 2$ as well [9].
In the present paper, we revisit the question of integer matrix factorisation in the light of recent general results on matrix decompositions [7], [8]. We establish in Corollary 3.1 that the existence of integer solutions to a certain quadratic equation is a necessary condition for a matrix factorisation of the type $M = N^2$ or $M = N^T N$ (for symmetric positive definite $M$) to exist. It is interesting to note that solutions of this new type of quadratic equation associated with a given integer matrix $M$ can also lead to rational matrix factors $N$ with entries in $\frac{1}{n^2}\mathbb{Z}$. Throughout the paper, we use the classical example of the Wilson matrix [4], [6], [10], [13],
$$W = \begin{pmatrix} 5 & 7 & 6 & 5 \\ 7 & 10 & 8 & 7 \\ 6 & 8 & 10 & 9 \\ 5 & 7 & 9 & 10 \end{pmatrix}, \qquad (2)$$
to demonstrate the methodologies under consideration. This integer matrix has determinant 1 and hence an integer inverse matrix, but is moderately ill conditioned, despite its small size and entries. It has the integer factorisation $W = Z^T Z$ discovered in [6], with
$$Z = \begin{pmatrix} 2 & 3 & 2 & 2 \\ 1 & 1 & 2 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 1 & 1 \end{pmatrix}. \qquad (3)$$
We note that the entries of $Z$ are nonnegative and, although the matrix is not triangular, it has a block upper triangular structure and can be thought of as a block Cholesky factor of $W$.
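As a quick sanity check (an illustrative Python sketch, not part of the original derivation), one can verify numerically that the block upper triangular factor $Z$ of eq. (3) satisfies $Z^T Z = W$ and is unimodular:

```python
import numpy as np

# Wilson matrix W of eq. (2)
W = np.array([[5, 7, 6, 5],
              [7, 10, 8, 7],
              [6, 8, 10, 9],
              [5, 7, 9, 10]])

# block upper triangular integer factor Z of eq. (3)
Z = np.array([[2, 3, 2, 2],
              [1, 1, 2, 1],
              [0, 0, 1, 2],
              [0, 0, 1, 1]])

assert (Z.T @ Z == W).all()              # W = Z^T Z entrywise
assert round(float(np.linalg.det(Z))) == 1   # Z is unimodular
print("W = Z^T Z verified; det Z = 1")
```

Since $Z$ is integer and unimodular, its inverse is again an integer matrix, which is what the universality argument below relies on.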
The quadratic form associated with the Wilson matrix can be written, using (1) and (3), as a sum of four squares:
$$q(x) = x^T W x = (2x_1 + 3x_2 + 2x_3 + 2x_4)^2 + (x_1 + x_2 + 2x_3 + x_4)^2 + (x_3 + 2x_4)^2 + (x_3 + x_4)^2. \qquad (4)$$
As $Z$ is a unimodular integer matrix and hence has an integer inverse, it follows by Lagrange's four-square theorem that the quadratic form $q$ generated by the Wilson matrix is universal [1] in the sense that it generates all positive integers as $x$ ranges over $\mathbb{Z}^4$.
This shows that integer matrix factorisation is a valuable tool in studying the quadratic form generated by an integer matrix. As a result of our considerations in Section 3 below, the question of factorising the Wilson matrix in the form $W = Z^T Z$ is associated with the solutions of the quadratic equation
$$x_1^2 + x_2^2 + x_3^2 + x_1 x_2 + x_1 x_3 + x_2 x_3 + 2w^2 = 952. \qquad (5)$$
Indeed, a necessary (but not sufficient) condition for $W$ to factorise is that integer solutions $(w, x_1, x_2, x_3)$ to the quadratic equation (5) exist. Thus this equation can be considered a quadratic form obstruction to integer factorisability. This approach to integer matrix factorisation was briefly alluded to, but not fully worked out, in [6].
In Section 2 we derive a useful S + V decomposition of square matrices, first identified in [8], and give explicit formulas for constructing it that were not given in [8]. In Section 3, we use the concept of matrix weight, associated with the S factor, to establish the quadratic form obstruction to integer matrix factorisation and show how a solution of the corresponding quadratic equation can be used to calculate the matrix factors. In Section 4, we discuss adjugate matrices in view of the matrix decomposition and in particular show that the type S part of a matrix is characterised by having an adjugate with all equal entries. Finally, in Section 5 we identify the type V part of a matrix as belonging to a space of co-Latin squares, defined as square matrices with constant sum over all entries carrying the same symbol in any Latin square.

The S + V decomposition of matrices
The following symmetries of n × n matrices were considered in [8].
(S) A matrix $M = (m_{i,j})_{i,j=1}^n \in \mathbb{R}^{n\times n}$ has the constant sum property (or is of type S) if there is a number $w \in \mathbb{R}$, called the weight of the matrix, such that
$$\sum_{j=1}^n m_{i,j} = \sum_{j=1}^n m_{j,i} = nw \qquad (i \in \{1, \dots, n\}).$$
The vector subspace of $\mathbb{R}^{n\times n}$ of matrices having the constant sum property with some weight is denoted by $S_n$ and can be characterised as
$$S_n = \{M \in \mathbb{R}^{n\times n} : M\,\mathrm{span}\{1_n\} \subseteq \mathrm{span}\{1_n\} \text{ and } M\{1_n\}^\perp \subseteq \{1_n\}^\perp\},$$
where $1_n \in \mathbb{R}^n$ is the column vector with all entries equal to 1 and orthogonality is with respect to the standard inner product, $\{1_n\}^\perp = \{u \in \mathbb{R}^n : u^T 1_n = 0\}$ (cf. [8, Thm. 2.6 (a)]).
(V) A matrix $M = (m_{i,j})_{i,j=1}^n \in \mathbb{R}^{n\times n}$ has the vertex cross sum property (or is of type V) if
$$m_{i,j} + m_{k,l} = m_{i,l} + m_{k,j} \qquad (i, j, k, l \in \{1, \dots, n\}), \qquad \sum_{i,j=1}^n m_{i,j} = 0, \qquad (6)$$
i.e. opposite vertex sums of any rectangle of entries agree and the matrix entries sum to zero. The vector subspace of $\mathbb{R}^{n\times n}$ of matrices having the vertex cross sum property is denoted by $V_n$ and can be characterised as
$$V_n = \{M \in \mathbb{R}^{n\times n} : M\,\mathrm{span}\{1_n\} \subseteq \{1_n\}^\perp \text{ and } M\{1_n\}^\perp \subseteq \mathrm{span}\{1_n\}\}.$$
The spaces $S_n$ and $V_n$ only have the null matrix in common; in fact, they complement each other and give $\mathbb{R}^{n\times n}$ a superalgebra structure in the following way (cf. [8, Thm. 2.5 (a)]).

Theorem 2.1. $\mathbb{R}^{n\times n} = S_n \oplus V_n$. Moreover, $S_n$ is a subalgebra of $\mathbb{R}^{n\times n}$, and
$$S_n V_n \subseteq V_n, \qquad V_n S_n \subseteq V_n, \qquad V_n V_n \subseteq S_n.$$
We show below in Corollary 2.1 that this decomposition of the matrix algebra is orthogonal.
We begin by showing in Theorem 2.2 that every element $M$ of $V_n$ can be written in the form $M = a 1_n^T + 1_n b^T$, before deriving a formula for obtaining the vectors $a$ and $b$ in Theorem 2.3.

Theorem 2.2. A matrix $M \in \mathbb{R}^{n\times n}$ is an element of $V_n$ if and only if there exist vectors $a, b \in \{1_n\}^\perp$ such that $M = a 1_n^T + 1_n b^T$.

Proof. Let $M \in V_n$. Then $M$ maps $\{1_n\}^\perp$ into $\mathrm{span}\{1_n\}$, so there is a vector $b \in \{1_n\}^\perp$ such that $Mv = (b^T v)\, 1_n$ for all $v \in \{1_n\}^\perp$. Now every $x \in \mathbb{R}^n$ can be written in the form $x = \alpha 1_n + v$ with suitable $v \in \{1_n\}^\perp$ and $\alpha \in \mathbb{R}$, so
$$Mx = \alpha M 1_n + Mv = \alpha n\, a + (b^T v)\, 1_n = (a 1_n^T + 1_n b^T)\, x,$$
where $a := \frac{1}{n} M 1_n$, bearing in mind that $1_n^T 1_n = n$ and $b^T 1_n = 0$. It follows from the last equation in (6) that $1_n^T a = 0$. Conversely, for any $a, b \in \{1_n\}^\perp$ the matrix $a 1_n^T + 1_n b^T$ lies in $V_n$.
Clearly $M \in V_n$ is symmetric if and only if $a = b$ in the above representation. Theorem 2.2 shows that $\dim V_n = 2n - 2$, so $\dim S_n = n^2 - 2n + 2$ (see also [8, p. 14]), and that the rank of elements of $V_n$ cannot exceed 2. Moreover, this theorem makes the spectral decomposition of any matrix $M \in V_n$ very transparent. Indeed, the range of $M$ is spanned by the orthogonal vectors $a$ and $1_n$, so any eigenvector for a non-zero eigenvalue must be of the form $u = \alpha a + \beta 1_n$ with numbers $\alpha, \beta \in \mathbb{C}$. Then, bearing in mind that $1_n^T a = 0$ and $b^T 1_n = 0$, and denoting the eigenvalue by $\lambda$, we find
$$Mu = a 1_n^T u + 1_n b^T u = n\beta\, a + (b^T a)\alpha\, 1_n = \lambda(\alpha a + \beta 1_n),$$
giving $n\beta = \lambda\alpha$ and $(b^T a)\,\alpha = \lambda\beta$. Hence any nonzero eigenvalue of $M$ is an eigenvalue of the $2 \times 2$ matrix
$$\begin{pmatrix} 0 & n \\ b^T a & 0 \end{pmatrix}$$
and vice versa. This gives the characteristic polynomial $\det(M - \lambda I_n) = (-\lambda)^{n-2}(\lambda^2 - n\, b^T a)$, and furthermore the eigenvectors of $M$ as $u_\pm = n a \pm \sqrt{n\, b^T a}\; 1_n$ for the eigenvalues $\pm\sqrt{n\, b^T a}$, which are real if $b^T a > 0$ and purely imaginary if $b^T a < 0$, and the single eigenvector $a$ for eigenvalue 0 if $b^T a = 0$ (and $a \neq 0$; otherwise any non-null vector orthogonal to $b$ will do).
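The spectral description above is easy to test numerically; the following Python sketch (an illustration under randomly chosen data, not part of the paper) builds a generic element of $V_n$ and checks the predicted rank bound, eigenvalues and eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
one = np.ones(n)

# random vectors a, b orthogonal to 1_n (subtracting the mean enforces this)
a = rng.standard_normal(n); a -= a.mean()
b = rng.standard_normal(n); b -= b.mean()

M = np.outer(a, one) + np.outer(one, b)   # generic element of V_n: M = a 1^T + 1 b^T

# rank of a type V matrix cannot exceed 2
assert np.linalg.matrix_rank(M) <= 2

# predicted non-zero eigenvalues ±sqrt(n b^T a), eigenvectors u = n a ± sqrt(n b^T a) 1_n
lam = np.sqrt(complex(n * (b @ a)))
for s in (+1, -1):
    u = n * a + s * lam * one
    assert np.allclose(M.astype(complex) @ u, s * lam * u)
print("verified eigenpairs for eigenvalues", lam, "and", -lam)
```

The complex square root covers both signs of $b^T a$; for $b^T a < 0$ the two non-zero eigenvalues come out purely imaginary, as stated above.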
Finally, Theorem 2.2 easily yields the orthogonality, with respect to the Frobenius inner product, of the direct sum decomposition of the space of $n \times n$ matrices into $S_n$ and $V_n$. The Frobenius inner product of two matrices $A, B \in \mathbb{R}^{n\times n}$ is defined as $\langle A, B \rangle = \operatorname{tr}(A^T B)$.
Corollary 2.1. The decomposition R n×n = S n ⊕ V n is orthogonal with respect to the Frobenius inner product.
We can calculate the weight of any matrix in $S_n$ as the mean of all matrix entries. In fact, it is meaningful to define the linear form
$$\operatorname{wt} : \mathbb{R}^{n\times n} \to \mathbb{R}, \qquad \operatorname{wt} M = \frac{1}{n^2} \sum_{i,j=1}^n m_{i,j}.$$
Any $M \in S_n$ can be uniquely decomposed into
$$M = M_0 + (\operatorname{wt} M)\, E_n,$$
where $M_0 \in S_n$ has weight 0 and $E_n = 1_n 1_n^T \in S_n$ is the $n \times n$ matrix with all entries equal to 1. In conjunction with the previous observations, this gives rise to the following unique decomposition of general $n \times n$ matrices, including an explicit formula for the calculation of the parts.
Theorem 2.3. Every matrix $M \in \mathbb{R}^{n\times n}$ can be decomposed uniquely as
$$M = M_0 + (\operatorname{wt} M)\, E_n + M_V,$$
where $M_0 \in S_n$ with weight 0 and $M_V = a 1_n^T + 1_n b^T \in V_n$, and the entries of the vectors $a$ and $b$ are given by
$$a_i = \frac{1}{n} \sum_{j=1}^n m_{i,j} - \operatorname{wt} M, \qquad b_j = \frac{1}{n} \sum_{i=1}^n m_{i,j} - \operatorname{wt} M \qquad (i, j \in \{1, \dots, n\}).$$
In particular, if $M$ is an integer matrix then the vectors $n^2 a$ and $n^2 b$ have integer entries.
Proof. Let $\{\varepsilon_j : j \in \{1, \dots, n-1\}\}$ be an orthonormal basis of $\{1_n\}^\perp$. Then
$$\Big\{ \tfrac{1}{\sqrt{n}}\, \varepsilon_j 1_n^T,\ \tfrac{1}{\sqrt{n}}\, 1_n \varepsilon_j^T : j \in \{1, \dots, n-1\} \Big\}$$
is an orthonormal basis, with respect to the Frobenius inner product, of the $(2n-2)$-dimensional space $V_n$; indeed,
$$\langle \varepsilon_j 1_n^T, \varepsilon_k 1_n^T \rangle = \operatorname{tr}(1_n \varepsilon_j^T \varepsilon_k 1_n^T) = n\, \delta_{j,k}, \qquad \langle \varepsilon_j 1_n^T, 1_n \varepsilon_k^T \rangle = 0,$$
and analogously for $1_n \varepsilon_j^T$, $1_n \varepsilon_k^T$.
These observations enable us to find $M_V$, the $V_n$ part of $M \in \mathbb{R}^{n\times n}$, by orthogonal projection onto the above orthonormal basis. We have
$$M_V = \frac{1}{n} \sum_{j=1}^{n-1} \big( \langle M, \varepsilon_j 1_n^T \rangle\, \varepsilon_j 1_n^T + \langle M, 1_n \varepsilon_j^T \rangle\, 1_n \varepsilon_j^T \big).$$
Hence
$$M_V = a 1_n^T + 1_n b^T \quad \text{with} \quad a = \tfrac{1}{n} M 1_n - (\operatorname{wt} M)\, 1_n, \qquad b = \tfrac{1}{n} M^T 1_n - (\operatorname{wt} M)\, 1_n.$$

Example: the Wilson matrix. For the Wilson matrix $W$ this decomposition takes the form
$$W = W_0 + a 1_4^T + 1_4 a^T + \tfrac{119}{16}\, E_4, \qquad a = \tfrac{1}{16}(-27, 9, 13, 5)^T, \qquad (7)$$
and for the integer matrix factor $Z$ of eq. (3),
$$Z = Z_0 + Z_V + \tfrac{19}{16}\, E_4, \qquad Z_V = a' 1_4^T + 1_4 (b')^T, \quad a' = \tfrac{1}{16}(17, 1, -7, -11)^T, \quad b' = \tfrac{1}{16}(-7, -3, 5, 5)^T. \qquad (8)$$
The last terms on the right-hand side of these formulae correctly yield $\operatorname{wt} W = \tfrac{119}{16}$ and $\operatorname{wt} Z = \tfrac{19}{16}$.
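The formulas of Theorem 2.3 translate directly into code. The following Python sketch (illustrative; function name and structure are our own) decomposes the Wilson matrix and checks the weight, the type S property of the weightless part, and the Frobenius orthogonality asserted in Corollary 2.1:

```python
import numpy as np

def s_plus_v_parts(M):
    """Decomposition of Theorem 2.3: M = M0 + wt(M)*E_n + M_V with (M_V)_ij = a_i + b_j."""
    n = M.shape[0]
    wt = M.sum() / n**2                 # weight: mean of all entries
    a = M.sum(axis=1) / n - wt          # row means minus overall mean
    b = M.sum(axis=0) / n - wt          # column means minus overall mean
    MV = np.add.outer(a, b)             # a 1^T + 1 b^T
    M0 = M - wt - MV                    # weightless type S part (wt broadcasts as wt*E_n)
    return M0, wt, MV

W = np.array([[5, 7, 6, 5], [7, 10, 8, 7], [6, 8, 10, 9], [5, 7, 9, 10]], dtype=float)
W0, wt, WV = s_plus_v_parts(W)

assert wt == 119 / 16                   # wt W = 119/16
# W0 is of type S with weight 0: all row and column sums vanish
assert np.allclose(W0.sum(axis=0), 0) and np.allclose(W0.sum(axis=1), 0)
# Frobenius orthogonality of the S part and the V part (Corollary 2.1)
S_part = W0 + wt * np.ones_like(W)
assert abs(np.trace(S_part.T @ WV)) < 1e-9
print("wt W =", wt)   # 7.4375 = 119/16
```

The same routine applied to the factor $Z$ of eq. (3) yields $\operatorname{wt} Z = 19/16$.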

Integer factorisation of matrices and the quadratic form obstruction
On the basis of the matrix decomposition established in the preceding section, we now derive the quadratic equation arising from balancing the weights in a matrix factorisation.
Theorem 3.1. Let $N \in \mathbb{R}^{n\times n}$ have the decomposition $N = N_0 + w_N E_n + a 1_n^T + 1_n b^T$ of Theorem 2.3, where $w_N = \operatorname{wt} N$, and let $M = M_0 + (\operatorname{wt} M)\, E_n + M_V$ be the corresponding decomposition of $M$.
(i) If $M = N^T N$, then
$$\operatorname{wt} M = a^T a + n w_N^2, \qquad M_0 = N_0^T N_0 + n\, b b^T, \qquad M_V = y 1_n^T + 1_n y^T \ \text{ with } \ y = N_0^T a + n w_N\, b.$$
(ii) If $M = N^2$, then
$$\operatorname{wt} M = b^T a + n w_N^2, \qquad M_0 = N_0^2 + n\, a b^T, \qquad M_V = \big((N_0 + n w_N I_n) a\big) 1_n^T + 1_n \big((N_0^T + n w_N I_n) b\big)^T.$$

Proof. (i) Substituting the decomposition of $N$ and multiplying out, we obtain
$$M = N^T N = \big( (N_0^T + w_N E_n)(N_0 + w_N E_n) + (1_n a^T + b 1_n^T)(a 1_n^T + 1_n b^T) \big) + \big( (N_0^T + w_N E_n)(a 1_n^T + 1_n b^T) + (1_n a^T + b 1_n^T)(N_0 + w_N E_n) \big),$$
where, by the superalgebra property, the matrices in the first bracket lie in $S_n$, the matrices in the second bracket in $V_n$. Writing out these products, using $N_0 1_n = N_0^T 1_n = 0$, $1_n^T a = 1_n^T b = 0$ and $1_n^T 1_n = n$, the first bracket equals $N_0^T N_0 + n\, b b^T + (a^T a + n w_N^2)\, E_n$ and the second bracket equals $y 1_n^T + 1_n y^T$ with $y = N_0^T a + n w_N b$. Hence the uniqueness of the decomposition of $M$ by Theorem 2.3 gives the claimed identities.
(ii) Similarly, multiplying out $N^2 = (N_0 + w_N E_n + a 1_n^T + 1_n b^T)^2$ and collecting terms, we find that the $S_n$ part equals $N_0^2 + n\, a b^T + (b^T a + n w_N^2)\, E_n$ and the $V_n$ part equals $\big((N_0 + n w_N I_n) a\big) 1_n^T + 1_n \big((N_0^T + n w_N I_n) b\big)^T$. This gives the claimed identities by uniqueness of the decomposition.
The above theorem gives the following quadratic form obstructions to the factorisation of an integer matrix $M$ into either $M = N^T N$ or $M = N^2$ with an integer matrix $N$.

Corollary 3.1. Let $M \in \mathbb{Z}^{n\times n}$. A necessary condition for a factorisation $M = N^T N$ (resp. $M = N^2$) with an integer matrix $N$ is that the quadratic equation $\operatorname{wt} M = a^T a + n w_N^2$ (resp. $\operatorname{wt} M = b^T a + n w_N^2$) has a solution with $n^2 w_N \in \mathbb{Z}$ and $n^2 a, n^2 b \in \mathbb{Z}^n$.

Suppose we are given a symmetric integer matrix $M \in \mathbb{Z}^{n\times n}$ and have found a solution $(w_N, a_1, \dots, a_n) \in \frac{1}{n^2}\mathbb{Z}^{n+1}$ of the quadratic equation associated with the factorisation $M = N^T N$,
$$\operatorname{wt} M = \sum_{i=1}^{n} a_i^2 + n w_N^2. \qquad (10)$$
To complete the factorisation, we need to identify a vector $b$ and a matrix $N_0$ satisfying the equations in Theorem 3.1 (i). Using the decomposition $M = y 1_n^T + 1_n y^T + M_0 + (\operatorname{wt} M)\, E_n$, the vector $b$ is recovered from $b = \frac{1}{n w_N}(y - N_0^T a)$, so we only need to find a solution $N_0$ of the quadratic matrix equation
$$N_0^T (a a^T + n w_N^2 I_n) N_0 - N_0^T a y^T - y a^T N_0 = n w_N^2 M_0 - y y^T, \qquad (11)$$
or, setting $L = N_0 - \frac{1}{a^T a + n w_N^2}\, a y^T$, the simpler quadratic
$$L^T (a a^T + n w_N^2 I_n) L = n w_N^2 \Big( M_0 - \frac{1}{a^T a + n w_N^2}\, y y^T \Big). \qquad (12)$$
The right-hand side and the middle factor on the left-hand side of these equations are determined in terms of the $S + V$ decomposition of the given matrix $M$ and the particular solution of the factorisation quadratic form considered. Although determining $N_0$ or $L$ from these equations is a factorisation problem of similar type to the original equation $M = N^T N$, we found that their solution was computationally more effective.
Remark 1. We remark in passing that the matrix $a a^T + n w_N^2 I_n$ appearing as the middle factor on the left-hand side of eq. (12) is symmetric positive definite and can be written as $A^T A$; however, due to the presence of square roots, this is unlikely to give a rational solution $N_0$.
Example: the Wilson matrix. The quadratic equation arising from balancing the weights in the assumed factorisation of the Wilson matrix $W = Z^T Z$ is
$$\operatorname{wt} W = \frac{119}{16} = a_1^2 + a_2^2 + a_3^2 + a_4^2 + 4 w_Z^2 = 2(a_1^2 + a_2^2 + a_3^2 + a_1 a_2 + a_1 a_3 + a_2 a_3) + 4 w_Z^2,$$
as $a_4 = -a_1 - a_2 - a_3$. Multiplying the equation by $2^7$ and setting $x_i = 16 a_i$, $w = 16 w_Z$, we find that a necessary condition for the integer factorisation of the Wilson matrix is that there are integer solutions to the quadratic equation (5),
$$x_1^2 + x_2^2 + x_3^2 + x_1 x_2 + x_1 x_3 + x_2 x_3 + 2w^2 = 952.$$
Solving this equation for $w, x_1, x_2, x_3$ in Mathematica 11.0 on a PC with an Intel Core i7 6500 CPU gave the 1728 solutions in just under 6 seconds. Exactly one third (576) of these solutions lead to rational matrix factorisations $W = Z^T Z$ with $Z \in \frac{1}{16}\mathbb{Z}^{4\times 4}$.
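Equation (5) can also be checked independently of Mathematica; the following Python sketch (an illustration, not the paper's computation) verifies one explicit solution, $(w, x_1, x_2, x_3) = (19, 17, 1, -7)$, which corresponds via $x_i = 16 a_i$, $w = 16 w_Z$ to the $S + V$ data of an integer factor of $W$, and enumerates all integer solutions by brute force:

```python
import math
from itertools import product

def q(x1, x2, x3):
    """Ternary part of the quadratic form in equation (5)."""
    return x1*x1 + x2*x2 + x3*x3 + x1*x2 + x1*x3 + x2*x3

# one explicit solution of (5): 230 + 2*19^2 = 952
assert q(17, 1, -7) + 2 * 19**2 == 952

# brute-force enumeration: q(x) >= |x|^2/2 (smallest eigenvalue 1/2),
# so |x_i|^2 <= 2*952 and the search box |x_i| <= 43 suffices
solutions = []
for x1, x2, x3 in product(range(-43, 44), repeat=3):
    r = 952 - q(x1, x2, x3)
    if r >= 0 and r % 2 == 0:
        w = math.isqrt(r // 2)
        if 2 * w * w == r:
            solutions.extend({(w, x1, x2, x3), (-w, x1, x2, x3)})
print(len(solutions), "integer solutions (w, x1, x2, x3) found")
```

The set-based `extend` avoids double-counting solutions with $w = 0$; the enumeration reproduces the full solution list in a few seconds.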
The process of converting solutions into matrix factors, i.e. of finding suitable vectors $b$ and matrices $N_0$ satisfying the equations of Theorem 3.1 (i) as outlined after Corollary 3.1, took considerably longer, at 34 minutes. Our approach utilised (11), in which the vector $b$ is eliminated and the right-hand side is completely determined for a given factor weight $w_N$. Substituting potential solutions for the vector $a$ and weight $w_N$ thus reduces the general problem of finding the matrix $N_0$ to that of an $(n-1) \times (n-1)$ unknown matrix. For the Wilson matrix this is a 9-dimensional problem; for each of the 9-dimensional solutions for $N_0$, the vector $b$ can then be quickly constructed. Adding together the elements of the decomposition, we recover the rational matrix factors $N$.
To a large part, the multiplicity of solutions to the factorisation problem is expected. Indeed, it is clear that if M = N T N and U is an orthogonal matrix, then UN is another solution of the factorisation problem; conversely, if det M = 1 and N and N ′ are solutions with integer entries, then N ′ = UN, where U = N ′ N −1 is an integer orthogonal matrix. It is also not hard to see that any integer orthogonal matrix is a signed permutation matrix, i.e. a matrix which has exactly one non-zero entry, either 1 or −1, in each row and in each column.
It is therefore natural to classify the factorisation matrices (integer or rational) modulo left multiplication with integer orthogonal matrices. For the factorisations of Wilson's matrix obtained through the above procedure, this gives three distinct classes, represented by the matrix $Z$ of (3) and two further matrices.


Adjugate matrices

The adjugate of a matrix appears naturally when calculating the determinant of a rank-1 perturbation of a matrix, as shown in the following lemma (cf. [5]).

Lemma 4.1. Let $M \in \mathbb{C}^{n\times n}$ and $x, y \in \mathbb{C}^n$. Then $\det(M + x y^T) = \det M + y^T (\operatorname{adj} M)\, x$.

Proof. Suppose $M$ is a nonsingular $n \times n$ matrix. Then
$$\det(M + x y^T) = \det M\, \det(I_n + M^{-1} x y^T) = \det M\, (1 + y^T M^{-1} x) = \det M + y^T (\operatorname{adj} M)\, x.$$
This gives the stated identity for regular matrices. The general case follows from the facts that the set of regular matrices is dense in $\mathbb{C}^{n\times n}$ and that the determinant and adjugate are continuous functions of the matrix.

In particular, the adjugate of the weightless type S part of the Wilson matrix is a multiple of $E_4$. As we show in the following theorem, the adjugate of a weightless type S matrix is in fact always a scalar multiple of $E_n$. Moreover, the converse holds in the sense that a matrix whose adjugate is a non-zero multiple of $E_n$ must be a weightless type S matrix. Of course, any matrix of rank $n - 2$ or less has adjugate $0 = 0 E_n$.

Theorem 4.1. Let $M \in \mathbb{C}^{n\times n}$. If $M$ is of type S with weight 0, then $\operatorname{adj} M = w E_n$ for some $w \in \mathbb{C}$; conversely, if $\operatorname{adj} M = w E_n$ with $w \neq 0$, then $M$ is of type S with weight 0.

Proof. Suppose $M$ is of type S with weight 0, and let $j \in \{1, \dots, n\}$ and $k \in \{1, \dots, n-1\}$. Since the rows of $M$ add up to 0, one row in the minor defining the $(j, k)$ entry of $\operatorname{adj} M$ can be replaced by the negative sum of the remaining rows of $M$; expanding the determinant over these rows $l$ and noting that all the determinants in the sum vanish except for the term $l = k + 1$, this shows that the $(j, k + 1)$ entry and the $(j, k)$ entry of $\operatorname{adj} M$ are equal. Since this holds for all $k \in \{1, \dots, n-1\}$ and for all $j \in \{1, \dots, n\}$, it follows that $\operatorname{adj} M$ has constant columns.
Applying the same argument to $M^T$ (which also has row sums 0), we find that $\operatorname{adj} M$ also has constant rows, and hence $\operatorname{adj} M = w E_n$ for some $w \in \mathbb{C}$.

For the converse, suppose that $\operatorname{adj} M = w E_n$ with $w \neq 0$. Since $\operatorname{adj} M$ has rank 1, the matrix $M$ has rank $n - 1$, and in particular $\det M = 0$. From $M \operatorname{adj} M = (\operatorname{adj} M)\, M = (\det M)\, I_n = 0$ we obtain $w\, (M 1_n) 1_n^T = 0$ and $w\, 1_n (1_n^T M) = 0$, and hence $M 1_n = 0$ and $1_n^T M = 0$; that is, the rows of $M$ add up to 0, and likewise the columns. Thus $M$ is of type S with weight 0.
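Both Lemma 4.1 and Theorem 4.1 are easy to confirm in exact arithmetic; the following Python sketch (illustrative, using SymPy for exact rationals) computes the adjugate of the weightless type S part $W_0$ of the Wilson matrix and checks the rank-1 perturbation identity on an arbitrarily chosen integer matrix:

```python
import sympy as sp

n = 4
W = sp.Matrix([[5, 7, 6, 5], [7, 10, 8, 7], [6, 8, 10, 9], [5, 7, 9, 10]])
E = sp.ones(n, n)
one = sp.ones(n, 1)

# weightless type S part W0 of W (Theorem 2.3); W is symmetric, so a = b
wt = sum(W) / n**2
a = W * one / n - wt * one
W0 = W - wt * E - (a * one.T + one * a.T)
assert W0 * one == sp.zeros(n, 1) and one.T * W0 == sp.zeros(1, n)  # type S, weight 0

# the adjugate of a weightless type S matrix has all entries equal (Theorem 4.1)
adjW0 = W0.adjugate()
assert len(set(adjW0)) == 1
print("adj(W0) is the constant multiple", adjW0[0, 0], "of E_4")

# rank-1 perturbation lemma: det(M + x y^T) = det M + y^T adj(M) x  (Lemma 4.1)
M = sp.Matrix(3, 3, [2, 7, 1, 8, 2, 8, 1, 8, 3])
x = sp.Matrix([1, -2, 5]); y = sp.Matrix([3, 0, -1])
assert (M + x * y.T).det() == M.det() + (y.T * M.adjugate() * x)[0, 0]
```

Exact rational arithmetic matters here: floating point adjugates of nearly singular matrices would obscure the equality of the entries.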

Co-Latin Matrices
An n × n Latin square (or Latin matrix) is an n × n matrix with entries from {1, . . . , n} such that the entries of each row and of each column are distinct, i.e. each number in {1, . . . , n} appears exactly once in each row and in each column. Evidently any Latin matrix is of type S. If $L = (\ell_{p,q})_{p,q=1}^n$ is a Latin square and $k \in \{1, \dots, n\}$, we define
$$L^{(-1)}(k) = \{(p, q) \in \{1, \dots, n\}^2 : \ell_{p,q} = k\}.$$
We call a matrix $M = (m_{p,q})_{p,q=1}^n \in \mathbb{R}^{n\times n}$ a co-Latin matrix if
$$\sum_{(p,q) \in L^{(-1)}(k)} m_{p,q} = 0$$
for every $n \times n$ Latin square $L$ and every $k \in \{1, \dots, n\}$. Clearly the set of $n \times n$ co-Latin matrices forms a subspace of $\mathbb{R}^{n\times n}$. The co-Latin property can be considered an extreme opposite of the weightless type S property. A matrix in $S_n$ with weight 0 has the property that its entries in any one row or column add to 0; a co-Latin matrix has the property that any selection of $n$ entries such that no two selected entries lie in the same row or the same column add to 0. We show in the following that these properties are indeed complementary in the sense of unique decomposability of any given weightless $n \times n$ matrix into a weightless type S matrix and a co-Latin matrix; this is a direct consequence of the following theorem, which identifies the co-Latin matrices with the space $V_n$ of (6).

Theorem 5.1. For every $n \in \mathbb{N}$, the set of $n \times n$ co-Latin matrices coincides with $V_n$.

To prepare the proof of Theorem 5.1, we first show that there exists a Latin square which has the entries 1 and 2 pairwise on the diagonally opposite corners of a rectangle. By suitable row and column permutations, we can assume without loss of generality that this rectangle is the top left 2 × 2 square. The existence of such a Latin square is not trivial; in fact there is none in dimension 3, as the arrangement
$$\begin{pmatrix} 1 & 2 & \cdot \\ 2 & 1 & \cdot \\ \cdot & \cdot & \cdot \end{pmatrix}$$
cannot be completed to a 3 × 3 Latin square.

Lemma 5.1. For every $n \in \mathbb{N} \setminus \{1, 3\}$ there exists an $n \times n$ Latin square $L = (\ell_{p,q})_{p,q=1}^n$ with $\ell_{1,1} = \ell_{2,2} = 1$ and $\ell_{1,2} = \ell_{2,1} = 2$.

Proof. We consider the cases of even and odd n separately.
to create the Latin square L.
(ii) Odd n. Here we start off the first three antidiagonals of L in a prescribed way and then fill up the antidiagonals up to and including the main antidiagonal in the Hankel Latin manner described above. We then fill the next three antidiagonals with
$$3\ 1\ 2\ 1\ 2 \cdots 1\ 2\ 1\ 2\ 3, \qquad 2\ 3\ 3\ 3 \cdots 3\ 3\ 3\ 1, \qquad 1\ 2\ 1 \cdots 2\ 1\ 2,$$
respectively, and complete the remaining antidiagonals in the standard Hankel Latin manner. For example, for n = 9 this gives a 9 × 9 Latin square of the required form. This construction works from n = 5 onwards.
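The "standard Hankel Latin manner" can be illustrated by the basic antidiagonal-constant square $L_{i,j} = ((i + j) \bmod n) + 1$ (our reading of the construction; the corner conditions of Lemma 5.1 are obtained by modifying its first few antidiagonals). A short Python sketch verifies the Latin and type S properties:

```python
import numpy as np

def hankel_latin(n):
    """Antidiagonal-constant ('Hankel') Latin square: entry depends only on i + j mod n."""
    return np.fromfunction(lambda i, j: (i + j) % n + 1, (n, n), dtype=int)

def is_latin(L):
    n = L.shape[0]
    symbols = set(range(1, n + 1))
    return all(set(L[i, :]) == symbols and set(L[:, i]) == symbols for i in range(n))

L = hankel_latin(5)
assert is_latin(L)
# any Latin square is of type S: every row and column sums to n(n+1)/2 = 15 for n = 5
assert (L.sum(axis=0) == 15).all() and (L.sum(axis=1) == 15).all()
print(L)
```

The same check passes for any order n, even or odd, since each antidiagonal value occurs exactly once per row and column.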
We can now prove the following.
Lemma 5.2. Let $n \in \mathbb{N}$, and let $M = (m_{i,j})_{i,j=1}^n$ be an $n \times n$ co-Latin square. Then $M \in V_n$.
Proof. The statement is trivial in the case n = 1. Now consider $n \in \mathbb{N} \setminus \{1, 3\}$. Let $i, j, k, l \in \{1, \dots, n\}$ such that $i \neq k$ and $j \neq l$. By suitable permutation of the rows and columns of the Latin square constructed above, there exists a Latin square $L = (\ell_{p,q})_{p,q=1}^n$ such that $\ell_{i,j} = \ell_{k,l} = 1$ and $\ell_{i,l} = \ell_{k,j} = 2$. By the co-Latin property, we know that
$$m_{i,j} + m_{k,l} + \sum_{(p,q) \in L^{(-1)}(1) \setminus \{(i,j), (k,l)\}} m_{p,q} = 0. \qquad (15)$$
Now consider the matrix $L' = (\ell'_{p,q})_{p,q=1}^n$ which arises from $L$ by keeping all the same entries except that $\ell'_{i,j} = \ell'_{k,l} = 2$ and $\ell'_{i,l} = \ell'_{k,j} = 1$. Then $L'$ is still a Latin square, and by the co-Latin square property we find that
$$m_{i,l} + m_{k,j} + \sum_{(p,q) \in L'^{(-1)}(1) \setminus \{(i,l), (k,j)\}} m_{p,q} = 0. \qquad (16)$$
Noting that the index sets of the sums in (15) and (16), and hence the values of these sums, are the same, we conclude that $m_{i,j} + m_{k,l} = m_{i,l} + m_{k,j}$. Moreover, summing the co-Latin identity over all n symbols of any fixed Latin square shows that the entries of $M$ add up to 0. Hence $M \in V_n$.
Finally, to see that the statement holds true in the case n = 3, we note that, up to permutations of the symbols {1, 2, 3}, there are only two different 3 × 3 Latin squares, namely of the forms
$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \\ 3 & 1 & 2 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \\ 2 & 3 & 1 \end{pmatrix}.$$
To show that a 3 × 3 co-Latin square M has the vertex cross sum property, without loss of generality we can consider the case i = j = 1, k = l = 2 (as the other cases can be reduced to this by suitable row and column permutations). Then the first of the above Latin matrices shows that $m_{2,1} + m_{1,2} + m_{3,3} = 0$, the second Latin matrix shows that $m_{1,1} + m_{2,2} + m_{3,3} = 0$, and it follows that $m_{1,1} + m_{2,2} = m_{2,1} + m_{1,2}$.
We can now complete the proof of Theorem 5.1.
Proof of Theorem 5.1. By Lemma 5.2, any n × n co-Latin matrix is an element of V n .
Conversely, by Theorem 2.2 any element of $V_n$ is of the form $a 1_n^T + 1_n b^T$ with suitable $a, b \in \{1_n\}^\perp$. Since a Latin square takes each value in {1, . . . , n} exactly once in each row and each column, every symbol class $L^{(-1)}(k)$ contains exactly one entry from each row and from each column; hence the corresponding sums for $a 1_n^T$ and $1_n b^T$ are $\sum_{p=1}^n a_p = 0$ and $\sum_{q=1}^n b_q = 0$, respectively, so $a 1_n^T$ and $1_n b^T$ are co-Latin squares.
Example: the Wilson matrix. Using the V part of our integer factorisation matrix $Z$, given by $Z_V$ in (8), it can be easily verified numerically that this type V matrix satisfies all $4! = 24$ Latin selections summing to 0, and so is a co-Latin matrix.
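This verification is a one-liner once $Z_V$ is computed; the following Python sketch (illustrative) reconstructs $Z_V$ from the factor $Z$ of eq. (3) via Theorem 2.3 and checks all 24 selections in exact arithmetic:

```python
from fractions import Fraction
from itertools import permutations

# integer factor Z of the Wilson matrix, eq. (3)
Z = [[2, 3, 2, 2], [1, 1, 2, 1], [0, 0, 1, 2], [0, 0, 1, 1]]
n = 4

# V part of Z (Theorem 2.3): (Z_V)_{i,j} = a_i + b_j with
# a_i = row mean - overall mean, b_j = column mean - overall mean
wt = Fraction(sum(map(sum, Z)), n * n)                       # = 19/16
a = [Fraction(sum(Z[i]), n) - wt for i in range(n)]
b = [Fraction(sum(Z[i][j] for i in range(n)), n) - wt for j in range(n)]
ZV = [[a[i] + b[j] for j in range(n)] for i in range(n)]

# every selection of n entries, no two in the same row or column, sums to 0
assert all(sum(ZV[i][p[i]] for i in range(n)) == 0 for p in permutations(range(n)))
print("Z_V is co-Latin: all 4! = 24 Latin selections sum to 0")
```

The assertion holds identically, since each selection sum equals $\sum_i a_i + \sum_j b_j = 0$ for any matrix of the form $a 1_n^T + 1_n b^T$.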

Conclusions
Matrices with integer entries play an important role in many modern applications of mathematics. In numerical analysis, for example, they make convenient test matrices because they are exactly representable. Families of matrices with bounded integer entries have recently been termed Bohemian matrices and various aspects of them have been studied [2], [3], [4]. The factorisation of integer matrices in the form M = N T N with integer N is related to a classical topic in number theory, but there has not been much work on finding such factorisations. In this work we have provided some new ideas to determine under what circumstances integer factorisations exist and to develop an approach to computing them. Our results are founded upon the orthogonal decomposition of the algebra of square matrices into two parts, of which one part is the subalgebra of constant (row and column) sum matrices, while we identified the other part as a space of co-Latin square matrices with symmetries determined by the properties of Latin squares.