Matrices in companion rings, Smith forms, and the homology of 3-dimensional Brieskorn manifolds

We study the Smith forms of matrices of the form $f(C_g)$ where $f(t),g(t)\in R[t]$, $C_g$ is the companion matrix of the (monic) polynomial $g(t)$, and $R$ is an elementary divisor domain. Prominent examples of such matrices are circulant matrices, skew-circulant matrices, and triangular Toeplitz matrices. In particular, we reduce the calculation of the Smith form of the matrix $f(C_g)$ to that of the matrix $F(C_G)$, where $F,G$ are quotients of $f(t),g(t)$ by some common divisor. This allows us to express the last non-zero determinantal divisor of $f(C_g)$ as a resultant. A key tool is the observation that a matrix ring generated by $C_g$ -- the companion ring of $g(t)$ -- is isomorphic to the quotient polynomial ring $Q_g=R[t]/\langle g(t)\rangle$. We relate several features of the Smith form of $f(C_g)$ to the properties of the polynomial $g(t)$ and the equivalence classes $[f(t)]\in Q_g$. As an application we let $f(t)$ be the Alexander polynomial of a torus knot and $g(t)=t^n-1$, and calculate the Smith form of the circulant matrix $f(C_g)$. By appealing to results concerning cyclic branched covers of knots and cyclically presented groups, this provides the homology of all Brieskorn manifolds $M(r,s,n)$ where $r,s$ are coprime.


Introduction
Let R be a commutative ring (with unity) other than the trivial ring, fix a monic polynomial $g(t) = t^n + \sum_{k=0}^{n-1} g_k t^k \in R[t]$, and let $C_g$ be the companion matrix of g(t). For n ≥ 2, the subset of $R^{n \times n}$ consisting of matrices $f(C_g)$ that are polynomials in $C_g$ with coefficients in R forms a commutative ring, which we call the companion ring of g(t) and denote by $R_g$. Important and well studied rings of matrices arise as special cases: if $g(t) = t^n$, then $R_g$ is the commutative ring of lower triangular n × n Toeplitz matrices [2] with entries in R; if $g(t) = t^n - 1$, then $R_g$ is the commutative ring of n × n circulant matrices [10] with entries in R; if $g(t) = t^n + 1$, then $R_g$ is the commutative ring of n × n skew-circulant matrices [10] with entries in R.
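To make the construction concrete, here is a minimal pure-Python sketch (the helper names `companion` and `poly_at_matrix` are our own, not from the paper), using the convention that $C_g$ carries the coefficients $-g_k$ in its first row: for $g(t) = t^4 - 1$, evaluating any polynomial at $C_g$ produces a circulant matrix.

```python
def companion(g):
    """First-row companion matrix of the monic polynomial
    t^n + g[n-1] t^{n-1} + ... + g[0]; `g` lists g_0, ..., g_{n-1}."""
    n = len(g)
    C = [[0] * n for _ in range(n)]
    C[0] = [-g[n - 1 - j] for j in range(n)]   # first row: -g_{n-1}, ..., -g_0
    for i in range(1, n):
        C[i][i - 1] = 1                        # identity subdiagonal
    return C

def poly_at_matrix(f, C):
    """Evaluate f(t) = f[0] + f[1] t + ... at the matrix C by Horner's rule."""
    n = len(C)
    M = [[0] * n for _ in range(n)]
    for c in reversed(f):
        M = [[sum(M[i][k] * C[k][j] for k in range(n)) + (c if i == j else 0)
              for j in range(n)] for i in range(n)]
    return M

# g(t) = t^4 - 1: every element of the companion ring R_g is a circulant.
Cg = companion([-1, 0, 0, 0])
F = poly_at_matrix([5, 7, 2, 3], Cg)   # f(t) = 3t^3 + 2t^2 + 7t + 5
# each row of F is a cyclic shift of the previous one
assert all(F[i][j] == F[(i + 1) % 4][(j + 1) % 4]
           for i in range(4) for j in range(4))
```

The same two helpers, with $g(t) = t^4$ or $t^4 + 1$, produce lower triangular Toeplitz and skew-circulant matrices respectively.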
When R is an integral domain, g(t) has n roots (counted with multiplicities) in some appropriate extension of R and, for $f(t) \in R[t]$, the determinant of $f(C_g)$ can be expressed as the resultant
$$\det f(C_g) = \prod_{\theta : g(\theta)=0} f(\theta) =: \operatorname{Res}(f, g). \qquad (1.1)$$
Note that in the last equality we are implicitly fixing the normalization in the definition of the resultant; we will follow this choice throughout.
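Formula (1.1) can be checked numerically in a short sketch (helper names are ours): we compute $\det f(C_g)$ exactly over $\mathbb{Q}$ and compare it with the product of $f$ over the roots of $g$, here the fourth roots of unity.

```python
import cmath
from fractions import Fraction

def companion(g):  # first-row companion of monic t^n + g[n-1] t^{n-1} + ... + g[0]
    n = len(g)
    return [[-g[n - 1 - j] if i == 0 else int(j == i - 1) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(f, C):  # Horner evaluation of f(t) at the matrix C
    n = len(C)
    M = [[0] * n for _ in range(n)]
    for c in reversed(f):
        M = [[sum(M[i][k] * C[k][j] for k in range(n)) + (c if i == j else 0)
              for j in range(n)] for i in range(n)]
    return M

def det(M):  # exact determinant over Q by fraction Gaussian elimination
    A = [[Fraction(x) for x in row] for row in M]
    n, res = len(A), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k]), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            res = -res
        res *= A[k][k]
        for i in range(k + 1, n):
            r = A[i][k] / A[k][k]
            A[i] = [a - r * b for a, b in zip(A[i], A[k])]
    return res

# g(t) = t^4 - 1, f(t) = t^2 - 3: det f(C_g) equals Res(f, g).
Cg = companion([-1, 0, 0, 0])
d = det(poly_at_matrix([-3, 0, 1], Cg))
res = 1 + 0j
for k in range(4):
    th = cmath.exp(2j * cmath.pi * k / 4)   # a root of g
    res *= th ** 2 - 3
assert abs(res - int(d)) < 1e-9             # both sides agree
```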
(See Section 2 for definitions of undefined terms and notation used in this Introduction, together with relevant background). Restricting to the case that R is an elementary divisor domain, such as the ring of integers, one may seek to study the Smith forms of matrices f (C g ). Our first main result shows that f (C g ) is equivalent to the direct sum of a matrix F (C G ) and a zero matrix, where F, G are quotients of f, g by any of their (monic) common divisors, and so relates the Smith forms of f (C g ) and F (C G ).

Theorem A. Let g(t) ∈ R[t] be monic of degree n, and let f (t) ∈ R[t]
where R is an elementary divisor domain. Suppose that g(t) = G(t)z(t), f (t) = F (t)z(t) where z(t) is a monic common divisor of f (t) and g(t).
Then $f(C_g) \sim F(C_G) \oplus 0_{m \times m}$, where m = deg z(t). In particular, $F(C_G)$ has invariant factors $s_1, \dots, s_r$ if and only if $f(C_g)$ has invariant factors $s_1, \dots, s_r, 0$ (repeated m times).
An immediate corollary (Corollary B) expresses the last non-zero determinantal divisor as the resultant of F (t) and G(t). This therefore generalizes the expression (1.1) to the case of singular matrices f (C g ).
Corollary B. In the notation of Theorem A, suppose that z(t) is the monic greatest common divisor of f(t) = z(t)F(t) and g(t) = z(t)G(t). Then the last non-zero determinantal divisor of $f(C_g)$ is (up to units of R)
$$\gamma_r = \prod_{\theta : G(\theta)=0} F(\theta) = \operatorname{Res}(F, G).$$
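Corollary B can be illustrated on a small singular example (helper names, including `determinantal_divisor`, are ours): with $f(t) = (t-1)(t+2)$ and $g(t) = t^4 - 1$ we have $z = t - 1$, $F = t + 2$, $G = t^3 + t^2 + t + 1$, and $\operatorname{Res}(F, G) = (i+2)(-1+2)(-i+2) = 5$, which should be the gcd of all $3 \times 3$ minors of $f(C_g)$.

```python
from fractions import Fraction
from functools import reduce
from itertools import combinations
from math import gcd

def companion(g):  # first-row companion of monic t^n + g[n-1] t^{n-1} + ... + g[0]
    n = len(g)
    return [[-g[n - 1 - j] if i == 0 else int(j == i - 1) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(f, C):  # Horner evaluation of f(t) at the matrix C
    n = len(C)
    M = [[0] * n for _ in range(n)]
    for c in reversed(f):
        M = [[sum(M[i][k] * C[k][j] for k in range(n)) + (c if i == j else 0)
              for j in range(n)] for i in range(n)]
    return M

def det(M):  # exact determinant over Q by fraction Gaussian elimination
    A = [[Fraction(x) for x in row] for row in M]
    n, res = len(A), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k]), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            res = -res
        res *= A[k][k]
        for i in range(k + 1, n):
            r = A[i][k] / A[k][k]
            A[i] = [a - r * b for a, b in zip(A[i], A[k])]
    return res

def determinantal_divisor(M, k):
    """gcd of all k x k minors of the integer matrix M (0 if all vanish)."""
    idx = range(len(M))
    minors = [abs(int(det([[M[i][j] for j in cols] for i in rows])))
              for rows in combinations(idx, k) for cols in combinations(idx, k)]
    return reduce(gcd, minors)

# f(t) = (t - 1)(t + 2) = t^2 + t - 2, g(t) = t^4 - 1
F4 = poly_at_matrix([-2, 1, 1], companion([-1, 0, 0, 0]))
assert determinantal_divisor(F4, 4) == 0   # det f(C_g) = 0: the rank is 3
assert determinantal_divisor(F4, 3) == 5   # gamma_3 = |Res(F, G)|
```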
As an application, in Theorem C, we calculate the Smith form of the integer matrix f(C_g) where f(t) is the Alexander polynomial of the torus knot K(r, s), i.e.
$$f(t) = \frac{(t^{rs} - 1)(t - 1)}{(t^r - 1)(t^s - 1)}, \qquad (1.2)$$
and $g(t) = t^n - 1$. As we explain in Section 2.2 this allows us, in Corollary D, to calculate the homology of all 3-dimensional Brieskorn manifolds M(r, s, n) where r, s are coprime. This generalizes (part of) [6, Proposition 5], which deals with the case r = 2.
Theorem C. Let r, s be coprime positive integers and n ≥ 2 be such that x := (r, n) ≤ y := (s, n), and let f(t) ∈ Z[t] be the Alexander polynomial of the torus knot K(r, s) as in (1.2). We note that there is no loss of generality in assuming, as in the statement of Theorem C, that (r, n) ≤ (s, n): if not, we may simply swap the roles of r and s.
Corollary D. Let r, s, n ≥ 2 where r and s are coprime. Then setting x := (r, n) and y := (s, n), the homology of the 3-dimensional Brieskorn manifold M = M (r, s, n) is

Smith Forms and Elementary Divisor Domains
Given a GCD domain R, we denote the greatest common divisor of the n-tuple $a_1, \dots, a_n \in R$ by $(a_1, \dots, a_n)$. An elementary divisor domain (EDD) [12, p. 16] is an integral domain R such that, for any triple of elements a, b, c ∈ R, there exist x, y, z, w ∈ R satisfying (a, b, c) = zxa + zyb + wyc. By choosing c = 0 in this definition, it follows that every EDD is a Bézout domain; it is conjectured, but to our knowledge still an open problem, that the converse is false [20,21]. Every principal ideal domain (PID) is an EDD: see, for example, [12, Theorem 1.5.3]; a classical example of an EDD that is not a PID is the ring of functions that are holomorphic on a simply connected domain [16,32].
The following classical theorem is named after H. J. S. Smith, who studied the case R = Z [34]. Frobenius proved the Smith Theorem in [13] assuming that R is a ring of univariate polynomials with coefficients in a field. For a proof of the theorem when R is an EDD (the weakest permissible assumption under which it can hold) see, for example, [12,Theorem 1.14.1].
To state the Smith Theorem, recall [29, p. 12] that a square matrix $U \in R^{n \times n}$ is called unimodular if det U is a unit of the base commutative ring R. Equivalently, unimodular matrices are precisely those matrices that are invertible over R, i.e., whose inverse exists and also belongs to $R^{n \times n}$.

Theorem 2.1 (Smith Theorem). Let R be an EDD and $M \in R^{m \times n}$. Then there exist unimodular matrices $U \in R^{m \times m}$, $V \in R^{n \times n}$ such that U M V = S where S is diagonal and satisfies $S_{i,i} \mid S_{i+1,i+1}$ for all i = 1, …, min(m, n) − 1. Further, let $\gamma_0 = 1 \in R$, and for i = 1, …, min(m, n) define the ith determinantal divisor $\gamma_i$ to be the greatest common divisor (gcd) of all minors of M of order i. Then $S_{i,i} = s_i := \gamma_i / \gamma_{i-1}$, the ith invariant factor of M.

To make the Smith form S uniquely determined by M, one might consider an appropriate normalization of the determinantal divisors, or equivalently of the invariant factors. This "appropriate normalization" is a conventional, albeit arbitrary, choice that depends on the base ring R. For instance, typical requirements are that the invariant factors are non-negative integers when R = Z, or that the invariant factors are monic polynomials when R = F[x] (univariate polynomials with coefficients in a field F). To avoid pedantic repetition of phrases like "up to multiplication by units of R", we assume that one such normalization is tacitly agreed on.
Generally, if there are two unimodular U ∈ R m×m , V ∈ R n×n such that U M V = N , we say that M and N are equivalent (over R) and write M ∼ N . If furthermore V = U −1 then M and N are said to be similar over R. The Smith Theorem can therefore be stated as follows: every matrix with entries in an EDD is equivalent to a diagonal matrix whose diagonal elements form a divisor chain. This is, in fact, a characterization of an EDD, usually taken as a definition [20,21,32]: an EDD is an integral domain R over which the Smith Theorem holds. We mention two immediate consequences of the Smith Theorem that will be useful to us: firstly, any pair of m × n matrices with entries in R are equivalent if and only if they have the same invariant factors; secondly, since rank is preserved by multiplication by invertible matrices, M has rank r if and only if its invariant factors satisfy s i (M ) = 0 precisely when i > r.
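Over R = Z the reduction guaranteed by the Smith Theorem can be carried out by elementary row and column operations. The following pure-Python sketch (the function name `smith_normal_form` is ours; this is an illustration for small matrices, not an efficient algorithm) shows the two essential ingredients: Euclidean pivoting, and the extra folding step that enforces the divisor chain $S_{i,i} \mid S_{i+1,i+1}$.

```python
def smith_normal_form(M):
    """Invariant factors of an integer matrix, normalized non-negative,
    by row/column reduction (a sketch for small matrices)."""
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        entries = [(i, j) for i in range(t, m) for j in range(t, n) if A[i][j]]
        if not entries:
            break                          # trailing block is zero
        i, j = min(entries, key=lambda p: abs(A[p[0]][p[1]]))
        A[t], A[i] = A[i], A[t]            # move a smallest entry to the pivot
        for row in A:
            row[t], row[j] = row[j], row[t]
        changed = True
        while changed:                     # Euclidean clearing of row/column t
            changed = False
            for i in range(t + 1, m):
                if A[i][t]:
                    q = A[i][t] // A[t][t]
                    A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                    if A[i][t]:            # smaller remainder becomes the pivot
                        A[t], A[i] = A[i], A[t]
                        changed = True
            for j in range(t + 1, n):
                if A[t][j]:
                    q = A[t][j] // A[t][t]
                    for row in A:
                        row[j] -= q * row[t]
                    if A[t][j]:
                        for row in A:
                            row[t], row[j] = row[j], row[t]
                        changed = True
        bad = next((i for i in range(t + 1, m)
                    for j in range(t + 1, n) if A[i][j] % A[t][t]), None)
        if bad is None:
            t += 1                         # pivot divides the remaining block
        else:                              # fold the offending row in and redo
            A[t] = [a + b for a, b in zip(A[t], A[bad])]
    return [abs(A[i][i]) for i in range(min(m, n))]

S = smith_normal_form([[2, 4, 4], [-6, 6, 12], [10, 4, 16]])
```

Here the determinantal divisors are $\gamma_1 = 2$, $\gamma_2 = 4$, $\gamma_3 = 624$, so the invariant factors are $2 \mid 2 \mid 156$, a divisor chain as the theorem requires.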

Cyclically presented groups and Brieskorn manifolds
Any finitely generated abelian group A is isomorphic to a group of the form $A_0 \oplus \mathbb{Z}^\beta$ where $A_0$ is a finite abelian group and β ≥ 0. The number β = β(A) is called the Betti number (or torsion-free rank) of A. Clearly A is infinite if and only if β(A) ≥ 1, and A is a free abelian group if and only if $A_0$ is trivial.
Given a group presentation $P = \langle x_0, \dots, x_{n-1} \mid R_0, \dots, R_{m-1} \rangle$ (n, m ≥ 1), the relation matrix of P is the n × m integer matrix M whose (i, j) entry is the exponent sum of the generator $x_i$ in the relator $R_j$. If the rank of M is r and the non-zero invariant factors of the Smith form of M are $s_1, \dots, s_r$, then the abelianization of the group G defined by the presentation P is $\mathbb{Z}_{s_1} \oplus \dots \oplus \mathbb{Z}_{s_r} \oplus \mathbb{Z}^{n-r}$.

A cyclic presentation is a group presentation of the form $P_n(w) = \langle x_0, \dots, x_{n-1} \mid w, \theta(w), \dots, \theta^{n-1}(w) \rangle$, where $w = w(x_0, x_1, \dots, x_{n-1})$ is some fixed element of the free group $F(x_0, \dots, x_{n-1})$, $\theta$ is the shift $\theta(x_i) = x_{i+1}$, and the subscripts are taken mod n; the group $G_n(w)$ it defines is called a cyclically presented group. If, for each 0 ≤ i < n, the exponent sum of $x_i$ in $w(x_0, \dots, x_{n-1})$ is $a_i$, then the relation matrix of $P_n(w)$ is the circulant matrix C whose first row is $(a_0, a_1, \dots, a_{n-1})$. The representer polynomial of C is the polynomial $f_w(t) = \sum_{i=0}^{n-1} a_i t^i$ and, setting $g(t) = t^n - 1 \in \mathbb{Z}[t]$, R = Z, the relation matrix of $P_n(w)$ is the circulant matrix $f_w(C_g)$. Thus, results concerning the Smith forms of such matrices $f(C_g)$ provide information about the abelianization of the cyclically presented group $G_n(w)$. This, in turn, allows us to calculate the homology of certain 3-dimensional manifolds, as we now describe. For a 3-manifold M, the first homology $H_1(M)$ is isomorphic to the abelianization of its fundamental group (see, for example, [15, Theorem 2A.1]). Thus, given a 3-manifold whose fundamental group has a cyclic presentation $P_n(w)$ with representer polynomial $f_w(t)$, the Smith form of the integer circulant $f_w(C_g)$ provides the homology of M. Suitable manifolds include, for example, all Dunwoody manifolds [11].
Brieskorn manifolds were introduced in [3] and the 3-dimensional Brieskorn manifolds M(r, s, n) (r, s, n ≥ 2) were studied by Milnor in [24]. As noted in [24, p. 176], the order of the homology of M(r, s, n) was computed by Brieskorn [3], who showed that the homology is trivial if and only if r, s, n are pairwise relatively prime. An algorithm for computing the homology itself, conjectured in [31], was proved in [33]. Further, the homology was calculated for the case r = 2 in [6]. The manifolds M(r, s, n) can be described as n-fold cyclic branched covers of the 3-sphere $S^3$ branched over the torus link K(r, s), or the torus knot K(r, s) when (r, s) = 1 [24, Lemma 1.1]. Torus knots lie in a very general class of knots called (1, 1)-knots. A special case of [25, Theorem 3.1] is that if a manifold M is an n-fold cyclic branched cover of $S^3$ branched over a (1, 1)-knot then its fundamental group $\pi_1(M)$ has a cyclic presentation $P_n(w)$ for some w. Moreover, by [5, Proposition 7] (see also [4, Theorem 4]), w can be chosen so that the representer polynomial $f_w(t)$ of $P_n(w)$ is equal to the projection of the Alexander polynomial of the knot. Hence, for coprime r, s, the calculation of the Smith form of $f(C_g)$ given in Theorem C provides the homology of M(r, s, n), as in Corollary D.
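By the discussion above, for coprime r, s the relation matrix of $\pi_1(M(r,s,n))$ is the circulant $f(C_g)$ with f as in (1.2) and $g(t) = t^n - 1$, so by (1.1) the order of $H_1$ (when finite) is $|\operatorname{Res}(f, t^n - 1)|$. A pure-Python sketch (helper names are ours; the resultant is evaluated numerically over the n-th roots of unity and rounded, with 0 signalling infinite homology):

```python
import cmath

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_divexact(p, q):
    """Exact division of integer polynomials (low order first), q monic."""
    p, out = p[:], [0] * (len(p) - len(q) + 1)
    for k in range(len(out) - 1, -1, -1):
        c = p[k + len(q) - 1]
        out[k] = c
        for j, b in enumerate(q):
            p[k + j] -= c * b
    assert all(x == 0 for x in p)      # division must be exact
    return out

def alexander_torus_knot(r, s):
    """Coefficients of (t^{rs}-1)(t-1)/((t^r-1)(t^s-1)), low order first."""
    num = poly_mul([-1] + [0] * (r * s - 1) + [1], [-1, 1])
    den = poly_mul([-1] + [0] * (r - 1) + [1], [-1] + [0] * (s - 1) + [1])
    return poly_divexact(num, den)

def homology_order(r, s, n):
    """|Res(f, t^n - 1)| for f the Alexander polynomial of K(r, s);
    0 signals a singular relation matrix, i.e. infinite homology."""
    f = alexander_torus_knot(r, s)
    prod = 1 + 0j
    for k in range(n):
        th = cmath.exp(2j * cmath.pi * k / n)
        prod *= sum(c * th ** e for e, c in enumerate(f))
    return round(abs(prod))

assert alexander_torus_knot(2, 3) == [1, -1, 1]   # trefoil: t^2 - t + 1
assert homology_order(2, 3, 5) == 1               # Poincare sphere: trivial H_1
assert homology_order(2, 3, 6) == 0               # 2, 3, 6 not pairwise coprime
```

This reproduces Brieskorn's criterion on small cases: the homology is trivial exactly when r, s, n are pairwise relatively prime.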

Quotient polynomial rings as a ring of matrices
As in the Introduction, let R be a commutative ring (with unity) other than the trivial ring and fix a monic polynomial $g(t) = t^n + \sum_{k=0}^{n-1} g_k t^k \in R[t]$. The quotient ring $Q_g = R[t]/\langle g(t) \rangle$ is the ring of the equivalence classes of polynomials in R[t] modulo the ideal generated by g(t). Here and below, $h(t) \equiv f(t) \bmod g(t)$ is a compact notation to mean that there exists $q(t) \in R[t]$ such that $f(t) = h(t) + q(t)g(t)$. On the other hand, associated with g(t) is its companion matrix
$$C_g = \begin{bmatrix} -g_{n-1} & \cdots & -g_1 & -g_0 \\ 1 & & & \\ & \ddots & & \\ & & 1 & \end{bmatrix} \in R^{n \times n}$$
(where, as throughout this paper, entries not explicitly displayed are assumed to be 0); observe that the matrix $C_g$ is a representation of the multiplication-by-[t] operator in the quotient ring $Q_g$. For more details on this viewpoint on companion matrices, as well as some generalizations, see, for example, [27, Section 2], [30, Section 9], and the references therein. It is well known in matrix theory that the characteristic polynomial of $C_g$ is precisely g(t): see, for example, the proof of [14, Theorem 1.1]; although there it is assumed that R = C, the proof is purely algebraic and only requires that R is a commutative ring. Theorem 3.1 below is a special case of the First Isomorphism Theorem for rings, but we will give an elementary matrix theoretical proof. It illustrates how to expand the idea of mapping [t] to $C_g$, and why it generates a matrix algebra. Given an equivalence class $[f(t)] \in Q_g$, it is natural to consider the coefficients $f_k \in R$ of its expansion in the monomial basis of $Q_g$, i.e., $[f(t)] = \sum_{k=0}^{n-1} f_k [t^k]$; equivalently, $f(t) \equiv \sum_{k=0}^{n-1} f_k t^k \bmod g(t)$. Before stating Theorem 3.1 we recall that the Cayley-Hamilton theorem holds for matrices over any commutative ring [18]. It is easy to show that, under the usual matrix addition and multiplication, the subset of $R^{n \times n}$ consisting of matrices that are polynomials in $C_g$ with coefficients in R forms a commutative ring, which we will denote by $R_g$. Theorem 3.1 shows that this is isomorphic to the quotient ring $Q_g$, and is fundamental to our methods.
Theorem 3.1. The map $M : Q_g \to R_g$ defined by $M([f(t)]) = f(C_g)$ is a ring isomorphism.
Proof. The map M is clearly bijective, maps [0] to 0 and [1] to I, and satisfies $M([f] + [h]) = f(C_g) + h(C_g) = M([f]) + M([h])$ and $M([f][h]) = f(C_g)h(C_g) = M([f])M([h])$.

Theorem 3.1 shows that, for every monic polynomial g(t) ∈ R[t], we can define a matrix algebra $R_g$ that satisfies the important property of being isomorphic to the quotient ring $Q_g$. It is also useful to observe that, for all j = 0, …, n − 1, the (n − j)-th row of $f(C_g)$ contains the coefficients of $[t^j f(t)]$ in the monomial basis $[1], \dots, [t^{n-1}]$. In particular, the last row of $f(C_g)$ is precisely $(f_{n-1} \ \cdots \ f_1 \ f_0)$. As mentioned in the Introduction, the commutative rings of lower triangular n×n Toeplitz matrices, of circulant matrices, and of skew-circulant matrices all arise as special cases of $R_g$.
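The multiplicativity of M can be checked directly in a small sketch (helper names are ours): multiplying two classes in $Q_g$, i.e. multiplying polynomials and reducing modulo the monic g(t), gives the same matrix as multiplying the two matrices in $R_g$.

```python
def companion(g):  # first-row companion of monic t^n + g[n-1] t^{n-1} + ... + g[0]
    n = len(g)
    return [[-g[n - 1 - j] if i == 0 else int(j == i - 1) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(f, C):  # Horner evaluation of f(t) at the matrix C
    n = len(C)
    M = [[0] * n for _ in range(n)]
    for c in reversed(f):
        M = [[sum(M[i][k] * C[k][j] for k in range(n)) + (c if i == j else 0)
              for j in range(n)] for i in range(n)]
    return M

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def polymulmod(f, h, g):
    """Coefficients of [f(t)h(t)] in Q_g: the product reduced mod monic g(t)."""
    n = len(g)
    prod = [0] * (len(f) + len(h) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(h):
            prod[i + j] += a * b
    prod += [0] * max(0, n - len(prod))
    for k in range(len(prod) - 1, n - 1, -1):   # t^k = t^{k-n} t^n, t^n = -sum g_j t^j
        c, prod[k] = prod[k], 0
        for j, gj in enumerate(g):
            prod[k - n + j] -= c * gj
    return prod[:n]

g = [-1, 0, 0, 0]                               # g(t) = t^4 - 1
Cg = companion(g)
f, h = [5, 7, 2, 3], [1, 2, 0, 1]
lhs = poly_at_matrix(polymulmod(f, h, g), Cg)   # M([f][h])
rhs = mat_mul(poly_at_matrix(f, Cg), poly_at_matrix(h, Cg))  # M([f]) M([h])
assert lhs == rhs
```

Note that the equality relies on the Cayley-Hamilton theorem ($g(C_g) = 0$), which is exactly why reducing modulo g(t) does not change the matrix.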
If we now specialize to the case where R is an EDD then, given g(t) ∈ R[t] (monic) and $[f(t)] \in Q_g$, it makes sense to study the Smith canonical form of $f(C_g)$. An important example of an EDD is the ring of integers Z. In this setting, g(t) is a monic integer polynomial, [f(t)] is an equivalence class of integer polynomials modulo g(t), and $f(C_g)$ is an integer matrix whose Smith form is sought. In the next sections, we derive results describing some features of the Smith canonical forms of $f(C_g)$ in terms of [f(t)] and g(t).

On the Smith form of f (C g )
If the base ring R is an integral domain, it can be embedded in an algebraically closed field F, namely, the algebraic closure of the field of fractions of R. Hence, the matrix $C_g$ has n eigenvalues (counted with multiplicities) in F. In particular, these eigenvalues are the roots of g(t). Moreover, it is well known [2,14,27,28,30] that the eigenvectors of the companion matrix $C_g$ associated with an eigenvalue θ have the form, up to a nonzero constant,
$$v_\theta = \begin{bmatrix} \theta^{n-1} & \cdots & \theta & 1 \end{bmatrix}^T \in F^n, \qquad \theta : g(\theta) = 0.$$
If we assume that g(t) has n distinct roots, then this implies that $C_g$ is sent to its Jordan canonical form (over F) via similarity by a Vandermonde matrix. If g(t) has multiple roots, the similarity matrix is a confluent Vandermonde. For more details on these classical facts see, for example, [2,14,27,28] and the references therein. The matrix $f(C_g) = M([f(t)])$ therefore has eigenpairs of the form $(f(\theta), v_\theta)$, and in particular, its rank r is equal to the number of roots of g(t) that are not also roots of f(t), counted with multiplicity. Furthermore, this simple argument proves the determinant formula (1.1) of the Introduction. Although typically not stated in this generality, this result is well known at least for some popular choices of R and g(t), for example, when R is any subring of C and $g(t) = t^n - 1$ (so $R_g$ is the ring of circulant matrices) [2,10]. We now focus on the case where R is an EDD (note that this implies that R is an integral domain, as in the discussion above), with the goal of studying the Smith form of $f(C_g)$. Recall that every EDD is a Bézout domain, and therefore a GCD domain. This implies that R[t] is also a GCD domain; that is, given any pair of polynomials f(t), g(t) ∈ R[t] their gcd exists in R[t]. In the following, we will use this fact without further justification.
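The eigenpair claim is easy to verify numerically (helper names are ours): for $g(t) = t^5 - 1$ the roots θ are the fifth roots of unity, and $f(C_g) v_\theta = f(\theta) v_\theta$ holds up to floating-point error.

```python
import cmath

def companion(g):  # first-row companion of monic t^n + g[n-1] t^{n-1} + ... + g[0]
    n = len(g)
    return [[-g[n - 1 - j] if i == 0 else int(j == i - 1) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(f, C):  # Horner evaluation of f(t) at the matrix C
    n = len(C)
    M = [[0] * n for _ in range(n)]
    for c in reversed(f):
        M = [[sum(M[i][k] * C[k][j] for k in range(n)) + (c if i == j else 0)
              for j in range(n)] for i in range(n)]
    return M

n = 5
Cg = companion([-1, 0, 0, 0, 0])      # g(t) = t^5 - 1
f = [2, 0, 1, 3]                      # f(t) = 3t^3 + t^2 + 2
F = poly_at_matrix(f, Cg)
errs = []
for k in range(n):
    th = cmath.exp(2j * cmath.pi * k / n)          # a root of g
    v = [th ** (n - 1 - i) for i in range(n)]      # v_theta = (th^{n-1},...,th,1)^T
    Fv = [sum(F[i][j] * v[j] for j in range(n)) for i in range(n)]
    fth = sum(c * th ** e for e, c in enumerate(f))
    errs.append(max(abs(a - fth * b) for a, b in zip(Fv, v)))
assert max(errs) < 1e-9               # F v_theta = f(theta) v_theta
```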

Proving Theorem A
In this section we prove Theorem A. The first step is the technical Lemma 5.1, which shows that if a(t), b(t) ∈ R[t] are two monic polynomials yielding the factorization g(t) = a(t)b(t) then $C_g$ is similar over R to another matrix $X_{a,b} \in R^{n \times n}$ which explicitly displays the factorization of g(t); furthermore, the similarity can be expressed via a special matrix $U_a$: a unimodular matrix in $R^{n \times n}$ that is completely determined by a(t).
Lemma 5.1. Assume that the degree of a(t) is m, define r := n − m, and write $a(t) = t^m + \sum_{k=0}^{m-1} a_k t^k$, $b(t) = t^r + \sum_{k=0}^{r-1} b_k t^k$. Denote by $C_g, C_a, C_b$ the companion matrices of the polynomials g(t), a(t), b(t) respectively, and let $U_a$ be the n × n unimodular upper triangular Toeplitz matrix whose first row is $(1 \ a_{m-1} \ \cdots \ a_0 \ 0 \ \cdots \ 0)$. Then $U_a C_g = X_{a,b} U_a$.

Before proving Lemma 5.1, we observe that $(C_b, e_r^T, e_1)$ and, respectively, $(L_m C_a^T L_m, e_m^T, e_1)$ are standard triples [14] for, respectively, b(t) and a(t). It is therefore a consequence of (a minor modification of) [14, Theorem 3.1] that, for all t and for some matrix S invertible over the field of fractions of R,
$$g(t)^{-1} = e_n^T (C_g - tI)^{-1} e_1 = e_n^T (S X_{a,b} S^{-1} - tI)^{-1} e_1;$$
see also [7, Theorem 2] for a related result, also stated for R = C but discussing more general pencils. It follows in particular that $C_g$ and $X_{a,b}$ are similar over the field of fractions of R. Lemma 5.1 goes a step further by showing that one can take $S = U_a$, i.e., $U_a C_g = X_{a,b} U_a$; this is crucial in our context, as it implies that $C_g$ and $X_{a,b}$ are actually similar over R, and hence, equivalent.
Proof of Lemma 5.1. Introduce the vectors $\alpha = (a_{m-1} \ \cdots \ a_0 \ 0 \ \cdots \ 0)^T \in R^{n-1}$, … Partition first … On the other hand, partition … Expanding g(t) = a(t)b(t) in the monomial basis gives $(1 \ \beta^T) U_a = (1 \ \gamma^T)$, and hence, … Moreover, since by construction the last m − 1 entries of the vector β and the last r − 1 entries of the vector α are zero, if i ≠ r then $\beta_i (L_{n-1}\alpha)_i = 0$. Hence, $\beta^T L_{n-1} \alpha = \beta_r (L_{n-1}\alpha)_r = b_0 a_0 = g_0$.
Remark 5.2. Combining Lemma 5.1 with Lemma 5.4 below and known divisibility relations between invariant factors of a submatrix and a matrix (see e.g. [32]) yields a potentially interesting consequence. Namely, if b(t) is any monic polynomial that divides g(t), one can write down divisibility relations between the invariant factors of f (C g ) and those of f (C b ). This is a useful property when deg b(t) ≪ deg g(t), as in this situation the size of f (C b ) is much smaller than the size of f (C g ) and so it is easier to compute the invariant factors of f (C b ) than those of f (C g ). A full discussion is beyond the scope of the present paper.
We now exhibit explicitly the Smith form of a(C g ), where a(t) is any monic divisor (in R[t]) of g(t) having degree m.

Lemma 5.3. If a(t) is a monic divisor (in R[t]) of g(t) having degree m = n − r, then there exists a unimodular $U \in R^{n \times n}$ such that $U a(C_g) U_a^{-1} = I_r \oplus 0_{m \times m}$.

Proof. We can partition … However, the invertibility of U and $U_a$ implies that $r + \operatorname{rank} K = \operatorname{rank} a(C_g) = r$, and hence, K = 0.
The next technical lemma is useful to reduce the amount of explicit matrix calculations in other proofs; it is well known in matrix theory at least for the case where R is a field [17, Theorem 1.13(f)], and it can be proved similarly for a general R.

Lemma 5.4. Suppose that f(t) ∈ R[t] and A, B are square matrices over R. Then
$$f\left(\begin{bmatrix} A & X \\ 0 & B \end{bmatrix}\right) = \begin{bmatrix} f(A) & \star \\ 0 & f(B) \end{bmatrix}.$$
Here ⋆ denotes a, possibly nonzero, block of the same size as X.
We now have all the ingredients to prove Theorem A.
Proof of Theorem A. In this proof, we specialize the notation of Lemmata 5.1 and 5.3 to the choice a(t) = z(t), b(t) = G(t). Then $U z(C_g) U_z^{-1} = I_r \oplus 0$,
where we used Lemma 5.4 and ⋆ denotes a block element whose precise nature is unimportant. Hence, using Lemma 5.3, $f(C_g) \sim F(C_G) \oplus 0_{m \times m}$.

Factors of f(t) and of g(t)

In this section we consider factors of f(t) and of g(t). Our first result considers a factorization $f(t) = f_1(t) f_2(t)$ of f(t) and relates the Smith form of $f(C_g)$ to the Smith forms of $f_1(C_g)$, $f_2(C_g)$. It is known [29, Theorem II.15] that if A and B have coprime determinants then the Smith form of AB is the product of the Smith forms of A and B. This immediately proves the following theorem as a special case.
Theorem 6.1. Let $f(t) = f_1(t) f_2(t)$ where $f_1(t), f_2(t) \in R[t]$, and suppose that $\operatorname{Res}(f_1, g)$ and $\operatorname{Res}(f_2, g)$ are coprime. Denote by $S, S_1, S_2$ the Smith forms of, respectively, $f(C_g)$, $f_1(C_g)$, $f_2(C_g)$. Then $S = S_1 S_2$.

The following result is a corollary of Theorem 6.1; however, we provide a more elementary proof that does not rely on this theorem.
Corollary 6.2. Let $f(t) = f_1(t) f_2(t)$ where $\operatorname{Res}(f_2, g)$ is a unit of R. Then $f(C_g) \sim f_1(C_g)$.

Proof. By (1.1) the determinant $\det(f_2(C_g))$ is a unit, so $f_2(C_g)$ is unimodular and hence $f(C_g) = f_1(C_g) f_2(C_g) \sim f_1(C_g)$.

Our next result considers a factorization $g(t) = g_1(t) g_2(t)$ of g(t) and relates the matrix $f(C_g)$ to the matrices $f(C_{g_1})$, $f(C_{g_2})$.
Theorem 6.3. Let $g(t) = g_1(t) g_2(t)$ with $g_1(t), g_2(t)$ monic, and suppose that $\operatorname{Res}(f, g_1)$ and $\operatorname{Res}(f, g_2)$ are coprime. Then $f(C_g) \sim f(C_{g_1}) \oplus f(C_{g_2})$.

Proof. It follows from Lemma 5.1 and Lemma 5.4 that … By [22, Lemma 6.11] (which is stated for R = F[x], but in fact only relies on the existence of the Smith form and on Bézout's identity, both valid over every EDD), if the determinants of A and B are coprime then $\begin{bmatrix} A & X \\ 0 & B \end{bmatrix} \sim A \oplus B$. To conclude the proof note that $L_m f(C_{g_1})^T L_m \sim f(C_{g_1})^T \sim f(C_{g_1})$.
Corollary 6.4. Let $g(t) = g_1(t) g_2(t)$ where $\operatorname{Res}(f, g_2)$ is a unit of R. Then $f(C_g) \sim f(C_{g_1}) \oplus I$.

Proof. Since $\operatorname{Res}(f, g_2)$ is a unit, (1.1) implies that $f(C_{g_2})$ is unimodular and hence is equivalent to the identity matrix. The result then follows from Theorem 6.3.

Application to cyclically presented groups and Brieskorn manifolds
The polynomial $g(t) = t^n - 1$ and the Alexander polynomial f(t) of the torus knot K(r, s) -- see (1.2) -- can each be written as a product of cyclotomic polynomials. Before we calculate the Smith form of $f(C_g)$ in Theorem C, we first calculate the Smith form of $\Phi_m(C_{\Phi_n})$ in Theorem 7.4: this simpler case is potentially useful per se, and it serves the purpose of illustrating some of the basic ideas that we will also use later. Theorem 7.4 can be viewed as a considerable generalization of [1, Theorems 2 and 3], which assert that if m ≥ n ≥ 1 then the resultant $\operatorname{Res}(\Phi_m, \Phi_n)$ is zero if m = n, is $p^{\varphi(n)}$ if $m = np^k$ where k ≥ 1 and p is prime, and is 1 otherwise. In [9, Theorem 3] one step further is taken to derive an expression for $\operatorname{Res}(\Phi_m, t^n - 1)$. We make repeated use of both these resultant formulae in the proofs of Theorem 7.4 and Theorem C. The first step is to characterize the first determinantal divisor, which we do in Lemma 7.1. Recall that the content of a polynomial $f(t) = \sum_{k=0}^{m} f_k t^k \in R[t]$ is $\operatorname{cont}(f) = (f_0, f_1, \dots, f_m)$.
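The cyclotomic resultant values quoted from [1] are easy to spot-check in a pure-Python sketch (helper names are ours): $\Phi_n$ is computed by exact division of $t^n - 1$ by the cyclotomic polynomials of the proper divisors of n, and the resultant is evaluated as the product of one polynomial over the primitive roots of unity of the other, matching the normalization (1.1).

```python
import cmath
from math import gcd

def poly_divexact(p, q):
    """Exact division of integer polynomials (low order first), q monic."""
    p, out = p[:], [0] * (len(p) - len(q) + 1)
    for k in range(len(out) - 1, -1, -1):
        c = p[k + len(q) - 1]
        out[k] = c
        for j, b in enumerate(q):
            p[k + j] -= c * b
    assert all(x == 0 for x in p)      # division must be exact
    return out

def cyclotomic(n, _cache={}):
    """Coefficients of Phi_n(t) (low order first), via
    Phi_n = (t^n - 1) / prod of Phi_d over proper divisors d of n."""
    if n not in _cache:
        poly = [-1] + [0] * (n - 1) + [1]
        for d in range(1, n):
            if n % d == 0:
                poly = poly_divexact(poly, cyclotomic(d))
        _cache[n] = poly
    return _cache[n]

def res_with_cyclotomic(f, n):
    """Res(f, Phi_n): the product of f over the primitive n-th roots of
    unity, evaluated numerically and rounded to the nearest integer."""
    prod = 1 + 0j
    for k in range(n):
        if gcd(k, n) == 1:
            th = cmath.exp(2j * cmath.pi * k / n)
            prod *= sum(c * th ** e for e, c in enumerate(f))
    return round(prod.real)

# [1]: for m > n, |Res(Phi_m, Phi_n)| = p^{phi(n)} if m = n p^k, else 1.
assert res_with_cyclotomic(cyclotomic(6), 2) == 3    # 6 = 2 * 3, phi(2) = 1
assert res_with_cyclotomic(cyclotomic(12), 4) == 9   # 12 = 4 * 3, phi(4) = 2
assert res_with_cyclotomic(cyclotomic(15), 4) == 1   # 15/4 is not an integer
```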
Lemma 7.1. Let g(t) ∈ R[t] be monic of degree n, and let f (t) ∈ R[t] where R is a GCD domain; moreover let h(t) be the unique polynomial of degree less than n such that f (t) ≡ h(t) mod g(t). Then the first determinantal divisor of f (C g ) is (up to units of R) γ 1 = cont(h).
Proof. Write $h(t) = \sum_{k=0}^{n-1} h_k t^k$. Recalling Theorem 3.1 and the remarks before it, we have $f(C_g) = h(C_g)$. It is readily verified, by finite induction, that the bottom row of $C_g^k$ is equal to $e_{n-k}^T$ for all k = 0, …, n − 1 (see also the remarks after Theorem 3.1). It follows that the bottom row of $h(C_g)$ contains as entries precisely the $h_k$, and hence, $\gamma_1 \mid \operatorname{cont}(h)$. On the other hand, $h(C_g) = \sum_{k=0}^{n-1} h_k C_g^k$, and therefore all the entries of $h(C_g)$ are R-linear combinations of the coefficients of h(t). This implies $\operatorname{cont}(h) \mid \gamma_1$, and concludes the proof.
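Lemma 7.1 can be checked directly over R = Z (helper names are ours): the gcd of the entries of $f(C_g)$ coincides with the content of the residue $h(t)$ of f(t) modulo g(t).

```python
from functools import reduce
from math import gcd

def companion(g):  # first-row companion of monic t^n + g[n-1] t^{n-1} + ... + g[0]
    n = len(g)
    return [[-g[n - 1 - j] if i == 0 else int(j == i - 1) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(f, C):  # Horner evaluation of f(t) at the matrix C
    n = len(C)
    M = [[0] * n for _ in range(n)]
    for c in reversed(f):
        M = [[sum(M[i][k] * C[k][j] for k in range(n)) + (c if i == j else 0)
              for j in range(n)] for i in range(n)]
    return M

def polymod(f, g):
    """Residue of f(t) modulo monic g(t) = t^n + sum g[k] t^k (low order first)."""
    n = len(g)
    f = f[:] + [0] * max(0, n - len(f))
    for k in range(len(f) - 1, n - 1, -1):
        c, f[k] = f[k], 0
        for j, gj in enumerate(g):
            f[k - n + j] -= c * gj
    return f[:n]

g = [-1, 0, 0]                   # g(t) = t^3 - 1
f = [6, 4, 0, 0, 2]              # f(t) = 2t^4 + 4t + 6
h = polymod(f, g)                # h(t) = 6t + 6, so cont(h) = 6
gamma1 = reduce(gcd, (x for row in poly_at_matrix(f, companion(g)) for x in row))
assert gamma1 == reduce(gcd, h) == 6   # first determinantal divisor = cont(h)
```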
The next steps are Lemmata 7.2 and 7.3: two simple properties of polynomials that will also be handy in proving Theorem C. Lemma 7.2 is a simple consequence of the fact that the p-th power map is a ring homomorphism on integers modulo p.
Proof of Theorem C. It is convenient to split the proof into two cases: (1) x = 1; (2) x > 1.
Case 1: x = 1. We have … Note that G ∩ F = ∅, so (f(t), g(t)) = 1 and hence the Smith form of $f(C_g)$ has no zero invariant factors. Let $r = p_1^{\alpha_1} \cdots p_\ell^{\alpha_\ell}$ be the prime factorization of r and let d ∈ F. Then $|\operatorname{Res}(g, \Phi_d)| = 1$ unless D = d/(d, n) is a positive prime power. This can only happen if D is a positive power of a prime factor $p_i$ of r, for if D divides s then, since (r, n) = 1, d = D(d, n) is coprime with r, so d | rs implies d | s, a contradiction. In turn, this implies that $d = p_i^\beta k$ with 1 ≤ β ≤ α_i and 1 ≠ k | y. Indeed, it cannot be k = 1, for otherwise d would divide r. Hence, in view of Theorem 6.1, the sought Smith form is the product of the Smith forms of $f_i(C_g)$, i = 1, …, ℓ, with … Moreover, by Corollary 6.4, … Hence, $f_i(C_h)$ has size precisely $\sum_{1 \neq k \mid y} \varphi(k) = y - 1$. Furthermore, its determinant is in absolute value … We now claim that $f_i(t) \equiv \Psi_i(t) \bmod h(t)$, with $p_i^{\alpha_i} \mid \Psi_i(t)$. This implies, following an argument analogous to that of Case 2 in the proof of Theorem 7.4, that $p_i^{\alpha_i}$ divides the first invariant factor of $f_i(C_h)$, and hence, $f_i(C_h) \sim p_i^{\alpha_i} I_{y-1}$. Since we can repeat this argument for all i = 1, …, ℓ, we conclude by Theorem 6.1 that $f(C_g) \sim I_{n+1-y} \oplus r I_{y-1}$.
We now prove the claim. Observe that if $d = p_i^\beta k$ then, letting $q_{i\beta} = p_i^{\beta-1}$, …
for any variable u; dividing by h(u), this in turn implies that $p_i$ also divides the polynomial $h(u)^{p_i - 1} - h(u^{p_i})/h(u)$. This in particular holds when $u = t^{q_{i\beta}}$, for all 1 ≤ β ≤ α_i. In this case $q_{i\beta}$ is a prime power coprime with y. Hence, for all δ | y, $\Phi_\delta(t)$ divides $\Phi_\delta(u)$, and thus h(t) divides h(u). Therefore, for all β = 1, …, α_i, there exists a polynomial $\Psi_{i\beta}(t)$ such that $f_{i\beta}(t) \equiv p_i \Psi_{i\beta}(t) \bmod h(t)$. Now let $\Psi_i(t) := p_i^{\alpha_i} \prod_{\beta=1}^{\alpha_i} \Psi_{i\beta}(t)$. Manifestly $p_i^{\alpha_i}$ divides $\Psi_i(t)$, and by the above remarks it follows that $f_i(t) \equiv \Psi_i(t) \bmod h(t)$; this proves the claim.

Case 2: x > 1. Here $z(t) = (f(t), g(t)) = \prod_{d \in \Sigma} \Phi_d(t)$ where Σ consists of all divisors of (n, rs) that are divisors of neither (n, r) nor (n, s). It follows that $\deg(z) = \sum_{d \in \Sigma} \varphi(d) = (x-1)(y-1)$ and so by Lemma 5.3 the Smith form has (x − 1)(y − 1) zero invariant factors. Since (r, s) = 1 we have (n, rs) = xy, so after having removed common factors, as well as the trivial factor t − 1 in g(t), we are left with the index sets … Let d ∈ F and suppose δ | n but δ ∤ xy. Then δ/d cannot be a positive or negative prime power, as otherwise d | xy or δ | xy, respectively; thus $|\operatorname{Res}(\Phi_d, \Phi_\delta)| = 1$ and by Corollary 6.4 we can effectively (up to neglecting some trivial invariant factors) replace g(t) with h(t), the product of the cyclotomic polynomials over the set … Suppose $\delta \in G_1$ and d ∈ F. If δ | x then, since d ∤ r and d ∤ xy, the only possibility for $\Phi_\delta(t)$ and $\Phi_d(t)$ to have a nontrivial resultant is for d to be of the form $\delta\hat{s}$ where $\hat{s}$ is the power of a prime factor of s with $\hat{s} \mid s$, $\hat{s} \nmid y$. A similar argument holds if $\delta \in G_2$, so we can replace F with $F_1 \cup F_2$ where $F_1 := \{\delta\hat{s} : 1 \neq \delta \mid x,\ \hat{s}\text{ prime power},\ \hat{s} \mid s,\ \hat{s} \nmid y\}$ and $F_2 := \{\delta\hat{r} : 1 \neq \delta \mid y,\ \hat{r}\text{ prime power},\ \hat{r} \mid r,\ \hat{r} \nmid x\}$. Moreover, observing that s and r are coprime, if $d \in F_i$, $\delta \in G_j$ ({i, j} = {1, 2}) then $|\operatorname{Res}(\Phi_d, \Phi_\delta)| = 1$.
Thus, invoking Theorems 6.1 and 6.3, we also see that the Smith form of $f(C_h)$ is the product of the Smith forms of $I \oplus f_1(C_{h_1})$ and $I \oplus f_2(C_{h_2})$, where the sizes of the identity matrices are clear from the context and, for i = 1, 2, $f_i(t)$ and $h_i(t)$ are products of cyclotomic polynomials whose indices vary in $F_i$ and $G_i$, respectively. We can now, in essence, follow the first part of this proof to show that the Smith form of $f_1(C_{h_1})$ has non-unit invariant factors s/y (x − 1 times), and the Smith form of $f_2(C_{h_2})$ has non-unit invariant factors r/x (y − 1 times). More precisely, a slight modification is needed to take into account that, when writing (for example) $f_i(t) = \prod_\beta f_{i\beta}(t)$, the exponents β no longer vary between 1 and $\alpha_i$ but between $\gamma_i + 1$ and $\alpha_i$, where $\gamma_i$ is the power of $p_i$ in the prime factorization of x. This is a consequence of the fact that, as in the definition of $F_1$, we must select $\hat{s}$ as a prime power dividing s but not y. However, as apart from this subtlety the argument is completely analogous, we omit the details.
Finally, the statement follows by multiplying two diagonal matrices.