On the Endomorphism Algebra of Specht Modules in Even Characteristic

Over fields of characteristic $2$, Specht modules may decompose and there is no upper bound for the dimension of their endomorphism algebra. A classification of the (in)decomposable Specht modules and a closed formula for the dimension of their endomorphism algebra remain two important open problems in the area. In this paper, we introduce a novel description of the endomorphism algebra of the Specht modules and provide infinite families of Specht modules with one-dimensional endomorphism algebra.


Introduction
Let k be an algebraically closed field of characteristic p ≥ 0 and r a positive integer. We write S_r for the symmetric group on r letters and kS_r for its group algebra over k. For each partition λ of r we have the Specht module Sp(λ), and for each composition α of r we have the permutation module M(α). Recall that Sp(λ) may be viewed as a submodule of M(λ). A fundamental result of James states that unless the characteristic of k is 2 and λ is 2-singular, the space of homomorphisms Hom_{kS_r}(Sp(λ), M(λ)) is one-dimensional [J, Corollary 13.17]. It follows that the endomorphism algebra of Sp(λ) is one-dimensional, and so in particular that Sp(λ) is indecomposable.
In contrast, if the characteristic of k is 2 and λ is a 2-singular partition, that is, λ has a repeated term, then Sp(λ) may certainly decompose. The first example of a decomposable Specht module was discovered by James in the late 70s, thereby setting in motion the investigation of the (in)decomposability of Specht modules, a problem that has attracted a lot of attention over the years. In a recent paper [DG1], Donkin and the first author considered partitions of the form λ = (a, m − 1, m − 2, …, 2, 1^b) and obtained precise decompositions of Sp(λ) in the case where a − m is even and b is odd. An interesting feature arising in these decompositions is that there is no upper bound for the number of indecomposable summands of Sp(λ), and so in turn for the dimension of its endomorphism algebra [DG1, Example 6.3]. Almost half a century after James' first example, a classification of the (in)decomposable Specht modules remains to be found, and there is no known formula describing the dimension of their endomorphism algebras. In this paper, we provide a new characterisation of End_{kS_r}(Sp(λ)) as a subset of the homomorphism space Hom_{kS_r}(M(λ′), M(λ)), where λ′ is the transpose partition of λ. Our description allows one to realise an endomorphism of Sp(λ) as an element of Hom_{kS_r}(M(λ′), M(λ)) that satisfies certain concrete relations. In this way, we are able to show that for λ = (a, m − 1, …, 2, 1^b) with a − m ≡ b (mod 2), the endomorphism algebra of Sp(λ) is one-dimensional.
We do so by taking inspiration from the category of polynomial representations of the general linear groups. More precisely, for a partition λ, we compare two different constructions of the induced module ∇(λ) for GL_n(k): the first introduced by Akin, Buchsbaum, and Weyman [ABW, Theorem II.2.11] and the second by James [J, Theorem 26.3(ii)]. By applying the Schur functor [G, §6.3], we then obtain two characterisations of the Specht module Sp(λ): first as a quotient of M(λ′) and then as a submodule of M(λ). This leads to a concrete description of the endomorphism algebra of Sp(λ), which we shall then investigate in detail for partitions of the form λ = (a, m − 1, …, 2, 1^b).

Date: September 12, 2023. The authors gratefully acknowledge the support of The Royal Society through the research grant RGF\R1\181015a.
The paper is arranged in the following way. Section 2 provides the necessary background on polynomial representations of GL_n(k) and on kS_r-modules. In Section 3 we explore the connection between these two categories via the Schur functor f and its right-inverse g. As a by-product of our considerations, we provide a new short proof of the fact that gSp(λ) ≅ ∇(λ) for p ≠ 2. Then, we focus on homomorphisms, and in Lemma 3.3 we obtain the desired description of End_{kS_r}(Sp(λ)) in characteristic 2. In Section 4 we utilise more tools from the representation theory of GL_n(k) to obtain a reduction technique that will be instrumental to our investigation of the case λ = (a, m − 1, …, 2, 1^b) in Section 5.

Preliminaries
We write N for the set of non-negative integers.
2.2. Representations of general linear groups. We consider the general linear group G := GL_n(k) and its coordinate algebra k[G] = k[c_{11}, …, c_{nn}, det^{−1}], where det is the determinant function. We write A_k(n) := k[c_{11}, …, c_{nn}] for the polynomial subalgebra of k[G] generated by the functions c_{ij} with 1 ≤ i, j ≤ n. The algebra A_k(n) has an N-grading of the form A_k(n) = ⊕_{r∈N} A_k(n, r), where A_k(n, r) consists of the homogeneous degree r polynomials in the c_{ij}. Given a rational G-module V, we shall denote by cf(V) the coefficient space of V, that is, the subspace of k[G] spanned by the coefficient functions of V, and we call V a polynomial representation of G of degree r if cf(V) ⊆ A_k(n, r). We write M_k(n) for the category of polynomial representations of G and M_k(n, r) for its subcategory of representations of degree r. Recall that the category M_k(n, r) is naturally equivalent to the category of S_k(n, r)-modules, where S_k(n, r) := A_k(n, r)* is the corresponding Schur algebra [G, §2.3, §2.4]. For V ∈ M_k(n) we write V° for its contravariant dual, in the sense of [G, §2.7].
We fix T to be the maximal torus of G consisting of the diagonal matrices in G. An element α ∈ Λ(n) may be identified with the multiplicative character of T that takes t = diag(t_1, …, t_n) to t^α := t_1^{α_1} ⋯ t_n^{α_n}. We denote by k_α the one-dimensional rational T-module on which t ∈ T acts by multiplication by α(t). Then, given a rational G-module V, we write V_α := {v ∈ V | tv = α(t)v for all t ∈ T} for the α-weight space of V. We write E := k^{⊕n} for the natural G-module and S^r E (resp. Λ^r E, D^r E) for the corresponding rth symmetric power (resp. exterior power, divided power) of E. For ℓ ≥ 1 and an ℓ-tuple α = (α_1, …, α_ℓ) of non-negative integers, we define the polynomial G-modules S^α E := S^{α_1}E ⊗ ⋯ ⊗ S^{α_ℓ}E, Λ^α E := Λ^{α_1}E ⊗ ⋯ ⊗ Λ^{α_ℓ}E, and D^α E := D^{α_1}E ⊗ ⋯ ⊗ D^{α_ℓ}E. For λ ∈ Λ^+(n) we write ∇(λ) for the corresponding induced module; its contravariant dual satisfies ∇(λ)° ≅ ∆(λ), where ∆(λ) is the Weyl module corresponding to λ [Jan, §II.2.13(1)].
Here, we shall review a construction of the induced module by Akin, Buchsbaum, and Weyman. In [ABW, §II.1], the authors associate to a partition λ with λ_1 ≤ n a G-module denoted L_λ(E), which they call the Schur functor of E. For simplicity, we shall refer to the module L_λ(E) as the Schur module associated to λ. Further, in [ABW, §II.2] the authors provide a description of L_λ(E) by generators and relations. More precisely, in [ABW, Theorem II.2.16], the authors identify L_λ(E) with the cokernel of a G-homomorphism between a pair of (direct sums of) tensor products of exterior powers of E. By [D_3, §2.7(5)], we have that L_λ(E) is isomorphic to an induced module.

Their construction is as follows. Recall that the exterior algebra Λ(E) of E enjoys a Hopf algebra structure [ABW, §I.2]. We write ∆ and µ for the comultiplication and multiplication of Λ(E) respectively. Let λ be a partition with ℓ := ℓ(λ). For 1 ≤ i < j ≤ ℓ and 1 ≤ s ≤ λ_j, we write λ^{(i,j,s)} for the composition obtained from λ by replacing λ_i with λ_i + s and λ_j with λ_j − s, and we construct the G-homomorphism φ^{(i,j,s)}_λ : Λ^λ E → Λ^{λ^{(i,j,s)}} E as the composition

(2.1)  Λ^λ E → Λ^{λ_1}E ⊗ ⋯ ⊗ Λ^{λ_i}E ⊗ ⋯ ⊗ (Λ^s E ⊗ Λ^{λ_j−s}E) ⊗ ⋯ ⊗ Λ^{λ_ℓ}E → Λ^{λ_1}E ⊗ ⋯ ⊗ (Λ^{λ_i}E ⊗ Λ^s E) ⊗ ⋯ ⊗ Λ^{λ_j−s}E ⊗ ⋯ ⊗ Λ^{λ_ℓ}E → Λ^{λ^{(i,j,s)}} E,

given by 1 ⊗ ⋯ ⊗ ∆ ⊗ ⋯ ⊗ 1, followed by 1 ⊗ ⋯ ⊗ σ ⊗ ⋯ ⊗ 1, followed by 1 ⊗ ⋯ ⊗ µ ⊗ ⋯ ⊗ 1, where σ denotes the isomorphism that permutes the corresponding tensor factors, and each 1 refers to the identity map on the corresponding tensor factor. Now, set:

(2.3)  φ_λ := Σ_{1≤i<j≤ℓ} φ^{(i,j,1)}_λ.

Then L_λ(E) ≅ coker φ_λ [ABW, Theorem II.2.16], and hence coker φ_{λ′} ≅ ∇(λ) [D_3, §2.7(5)]. We shall refer to this description as the ABW-construction of ∇(λ).

Now, we review an alternative description of ∇(λ) due to James [J, §26]. Although James refers to this module as the "Weyl module", it is not to be confused with the usual Weyl module ∆(λ) that we discussed above [G, Theorem (4.8f)]. James' construction is as follows. Recall that the symmetric algebra S(E) of E also has a Hopf algebra structure [ABW, §I.2]. As a slight abuse of notation, we shall once again use the symbols ∆ and µ for the corresponding comultiplication and multiplication of S(E) respectively. Let λ be a partition with ℓ := ℓ(λ). For 1 ≤ i < j ≤ ℓ and 1 ≤ t ≤ λ_j, we construct the G-homomorphism ψ^{(i,j,t)}_λ : S^λ E → S^{λ^{(i,j,t)}} E as the composition

(2.4)  S^λ E → S^{λ_1}E ⊗ ⋯ ⊗ S^{λ_i}E ⊗ ⋯ ⊗ (S^t E ⊗ S^{λ_j−t}E) ⊗ ⋯ ⊗ S^{λ_ℓ}E → S^{λ_1}E ⊗ ⋯ ⊗ (S^{λ_i}E ⊗ S^t E) ⊗ ⋯ ⊗ S^{λ_j−t}E ⊗ ⋯ ⊗ S^{λ_ℓ}E → S^{λ^{(i,j,t)}} E,

given by the maps coming from ∆ and µ as above, where σ denotes the isomorphism that permutes the corresponding tensor factors, and each 1 refers to the identity map on the corresponding tensor factor. Now, set:

(2.6)  ψ_λ := Σ_{1≤i<j≤ℓ} ψ^{(i,j,1)}_λ.

For λ ∈ Λ^+(n), we have that ∇(λ) ≅ ker ψ_λ [J, Theorem 26.5]. We shall refer to this description as the James-construction of ∇(λ).
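As a quick sanity check on the James-construction (this worked example is ours, not taken from the paper), take λ = (2, 1), so that the only map involved is ψ^{(1,2,1)}_λ, which is simply the multiplication µ : S^2E ⊗ S^1E → S^3E:

```latex
\[
\nabla(2,1)\;\cong\;\ker\bigl(\mu\colon S^{2}E\otimes S^{1}E\longrightarrow S^{3}E\bigr),
\qquad
\dim\nabla(2,1)\;=\;\binom{n+1}{2}\,n-\binom{n+2}{3}\;=\;\frac{n(n+1)(n-1)}{3}.
\]
```

Since µ is surjective, the dimension count above agrees with the Weyl dimension formula for the partition (2, 1), as expected.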

Endomorphism algebras
3.1. General Results. From now on we fix n ≥ r. Note that for λ ∈ Λ^+(n, r) we have that λ′ ∈ Λ^+(n, r). First we point out some new properties of the functor g, and then we utilise the two different descriptions of the Specht module Sp(λ) to introduce a new description of the endomorphism algebra of Sp(λ).
Proposition 3.1. Assume that p ≠ 2. Then gM_s(α) ≅ Λ^α E for every α ∈ Λ(n, r).

Proof. Since p ≠ 2, the dimension of Hom_{kS_r}(M_s(α), M(β)) does not depend on the value of p [DJ, Theorem 3.3(ii)], and so in order to calculate the dimension of gM_s(α)_β, we may assume that p = 0. However, in characteristic 0, the functors f and g are inverse equivalences of categories, and so gM_s(α) ≅ Λ^α E. Therefore, for p ≠ 2, we deduce that dim gM_s(α)_β = dim (Λ^α E)_β for all β ∈ Λ(n, r). Now, recall that for V ∈ M_k(n, r) we have the weight space decomposition V = ⊕_{β∈Λ(n,r)} V_β [G, (3.2c)], and so it follows that, for p ≠ 2, we have dim gM_s(α) = dim Λ^α E. Now, we have that M(1^r) ≅ eSe, and so gM(1^r) ≅ Se ⊗_{eSe} eSe ≅ Se ≅ E^{⊗r} [G, (6.4f)]. For α ∈ Λ(n, r) we have a surjective G-homomorphism E^{⊗r} → Λ^α E, and so, via the Schur functor, we get a surjective kS_r-homomorphism M(1^r) → M_s(α). The functor g, being right-exact, preserves surjections, and so the G-homomorphism gM(1^r) → gM_s(α) is surjective. We consider the commutative diagram whose horizontal maps are induced from the kS_r-inclusions: the top horizontal map is an isomorphism and the right-hand vertical map is surjective, and so the bottom horizontal map is surjective too. Since dim gM_s(α) = dim Λ^α E away from characteristic 2, we obtain gM_s(α) ≅ Λ^α E for p ≠ 2.

Lemma 3.4. Let λ ∈ Λ^+(n, r). Then:
(i) There is a k-isomorphism: (ii) In particular, when p = 2, there is a k-isomorphism:

Proof. Part (i) follows immediately from the two descriptions of the Specht module, Sp(λ) ≅ coker φ_{λ′} and Sp(λ) ≅ ker ψ_λ, from §2.3. Part (ii) then follows from part (i) and the fact that the permutation module and the signed permutation module coincide in characteristic 2.
Remark 3.5. We may view any partition λ ∈ Λ^+(n, r) as an n-tuple by appending an appropriate number of zeros to λ. Accordingly, we may relax the dependence on ℓ(λ) of the maps φ_λ and ψ_λ by setting φ^{(i,j,s)}_λ := 0 and ψ^{(i,j,t)}_λ := 0 whenever ℓ(λ) < j ≤ n.

By Lemma 3.3(ii) and Lemma 3.4, we obtain the following corollary:

Corollary 3.6. Assume that char k = 2 and let λ ∈ Λ^+(n, r). Then the endomorphism algebra of Sp(λ) may be identified with the k-subspace of Hom_{kS_r}(M(λ′), M(λ)) consisting of those elements h that satisfy:

A concrete description.
From now on we shall assume that the underlying field k has characteristic 2. We write [r] := {1, …, r} and, as always, we assume that n ≥ r. First, we provide a matrix description of a k-basis of Hom_{kS_r}(M(α), M(β)) for α, β ∈ Λ(n, r), and then we shall utilise this description to obtain some crucial information regarding the endomorphism algebra of Sp(λ).
We write M_{n×n}(N) for the set of (n × n)-matrices with non-negative integer entries. Let α ∈ Λ(n, r) and consider S^α E = S^{α_1}E ⊗ ⋯ ⊗ S^{α_n}E, where the ith tensor factor is defined to be 1 if α_i = 0 for some 1 ≤ i ≤ n. This module has a k-basis consisting of tensor products of monomials in e_1, …, e_n, and we may parametrise this k-basis by the set of all elements of M_{n×n}(N) whose sequence of row-sums is equal to α: the ith row of a matrix records the exponents of the monomial in the ith tensor factor. Accordingly, for β ∈ Λ(n, r), the β-weight space (S^α E)_β has a k-basis parametrised by the set of all matrices in M_{n×n}(N) whose sequence of row-sums is equal to α and whose sequence of column-sums is equal to β. On the other hand, the permutation module M(α) has a k-basis consisting of all ordered sequences of the form (x_1 | … | x_n), where each x_i = (x_{i1}, x_{i2}, …, x_{iα_i}) is an unordered sequence with terms from [r], satisfying the property that for each k ∈ [r] there is a unique pair (i, j) with x_{ij} = k. Here x_i denotes the zero sequence whenever α_i = 0.
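As a brute-force illustration (our own sketch, not part of the paper; the function names `compositions` and `tab` are our own), the following Python snippet enumerates the matrices in M_{n×n}(N) with prescribed row-sums α and column-sums β, which by the discussion above parametrise a k-basis of the weight space (S^α E)_β:

```python
from itertools import product

def compositions(total, parts):
    """All tuples of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def tab(alpha, beta):
    """Matrices over N with row-sums alpha and column-sums beta."""
    n = len(alpha)
    out = []
    for rows in product(*(list(compositions(a, n)) for a in alpha)):
        if all(sum(row[j] for row in rows) == beta[j] for j in range(n)):
            out.append(rows)
    return out

# e.g. alpha = beta = (2, 1): the two matrices ((2,0),(0,1)) and ((1,1),(1,0))
print(len(tab((2, 1), (2, 1))))  # → 2
```

The same count reappears below as the number of basis homomorphisms ρ[A] with A ∈ Tab(α, β).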
We set Tab(α, β) := {A = (a_{ij}) ∈ M_{n×n}(N) | Σ_j a_{ij} = α_i and Σ_i a_{ij} = β_j}. We associate to each A ∈ Tab(α, β) a homomorphism ρ[A] ∈ Hom_{kS_r}(M(α), M(β)), as follows: given a basis element x := (x_1 | … | x_n) ∈ M(α), we set ρ[A](x) to be the sum of all basis elements of M(β) that are obtained from x by moving, in concert, a_{ij} entries from its ith-position x_i to its jth-position x_j in all possible ways, for every 1 ≤ i, j ≤ n.
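To make the definition of ρ[A] concrete, here is a small Python model (ours; the encoding of basis elements as tuples of disjoint sets and all function names are our own choices, and coefficients should be read mod 2 in the characteristic-2 setting of the paper):

```python
from itertools import combinations
from collections import Counter

def ordered_set_partitions(items, sizes):
    """All ways to split the set `items` into an ordered tuple of disjoint
    blocks with the prescribed sizes (the sizes must sum to len(items))."""
    items = set(items)
    if not sizes:
        yield ()
        return
    first, rest = sizes[0], sizes[1:]
    for block in combinations(sorted(items), first):
        for tail in ordered_set_partitions(items - set(block), rest):
            yield (frozenset(block),) + tail

def rho(A, x):
    """Apply rho[A] to the basis element x = (x_1 | ... | x_n) of M(alpha).
    Returns a Counter mapping basis elements of M(beta) to coefficients."""
    n = len(A)
    result = Counter()
    # choose, independently for each row i, how the entries of x_i are
    # distributed: a_ij of them move to position j
    def rec(i, blocks_so_far):
        if i == n:
            y = tuple(frozenset().union(*(blocks_so_far[k][j] for k in range(n)))
                      for j in range(n))
            result[y] += 1
            return
        for part in ordered_set_partitions(x[i], A[i]):
            rec(i + 1, blocks_so_far + [part])
    rec(0, [])
    return result

# alpha = (2, 1), beta = (1, 2); A moves one entry from position 1 to position 2
A = [[1, 1], [0, 1]]
x = ({1, 2}, {3})
img = rho(A, x)  # two summands: ({1},{2,3}) and ({2},{1,3})
```

Each summand arises from one choice of which entry of x_1 moves, exactly as in the "in all possible ways" clause above.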

To see that the ρ[A]s are linearly independent, take any linear combination of the ρ[A]s, say h = Σ_A h[A]ρ[A] (h[A] ∈ k), along with any basis element x of M(α), and then consider the coefficients of the basis elements of M(β) in h(x). The linear independence of the ρ[A]s along with Lemma 3.2(i) give that the set {ρ[A] | A ∈ Tab(α, β)} is a k-basis of Hom_{kS_r}(M(α), M(β)). We write E_{i,j} for the matrix with a 1 in its (i,j)th-position and 0s elsewhere. Notice that:

Proof. This is a simple calculation, which we leave to the reader.
For A = (a_{ij})_{i,j} ∈ M_{n×n}(Z) and 1 ≤ k, l ≤ n, we shall write A^{(k,l)} for the element of M_{n×n}(Z) with entries given by a^{(k,l)}_{ij} := a_{ij} + δ_{(i,j),(k,l)}, and A_{(k,l)} for the element of M_{n×n}(Z) with entries given by a_{ij} − δ_{(i,j),(k,l)}. Henceforth, we denote by , where the sum is over all l such that a_{jl} ≠ 0. (ii) ψ^{(i,j,1)} , where the sum is over all k such that a_{kj} ≠ 0.

Proof. We shall only prove part (i), since part (ii) is similar. We may assume that j ≤ ℓ(λ′). Fix 1 ≤ i < j ≤ ℓ(λ′), and we denote by Σ_k x^k, where x^k denotes the basis element of M(λ′) that is obtained from x by omitting the entry x_{ik} from the sequence x_i and placing it in the (unordered) sequence x_j, where the z[t] are the basis elements of M(λ) and the c_{kt} are constants with c_{kt} ∈ {0, 1}. Then ρ[A] ∘ φ^{(i,j,1)} for some k with 1 ≤ k ≤ λ′_i + 1 and some s with c_{ks} = 1. Then, suppose that the entry x_{ik} appears in the lth-position z[s]_l of z[s], and hence a_{jl} ≠ 0. Note that the sequence z then c_{vs} = 1. On the other hand, given 1 ≤ q ≤ λ′_i + 1, if x_{iq} does not appear as an entry in z[s]_l, then c_{qs} = 0. It follows that c_s = a_{il} + 1. Meanwhile, given 1 ≤ l′ ≤ n, z[s] appears in ρ[A^{(i,l′)}_{(j,l′)}](x) if and only if l′ = l, in which case it appears with a coefficient of 1. The result follows.

and only if the coefficients h[A] of the ρ[A] in h satisfy:
(i) For all 1 ≤ i < j ≤ n, 1 ≤ k ≤ n, and all A ∈ T λ with a jk = 0, we have: and all A ∈ T λ with a kj = 0, we have: Recall that through the ABW-construction of the induced module, we see that namely by the submodule im φ λ ′ (2.3).We claim that we can replace the factor E ⊗b ′ with the symmetric power S b ′ E. This process is in fact independent of the characteristic of the field k.To this end, we construct from the multiplication map µ : follows from the definition of φ λ ′ .Then, note that by the definition of the symmetric power S b ′ E, the k-space ker µ is generated by elements of the form e

[k]
i for 1 ≤ k < b ′ and sequences i := (i 1 , . . ., i b ′ ) with terms in [n], where e Then, it follows that the k-space ker(1 ⊗ µ) is generated by elements of the form x ⊗ e i , and so x ⊗ e Moreover, it follows from part (i) that ker π = im φ λ ′ / ker(1 ⊗ µ), and so we deduce that On the other hand, recall that through the James-construction of the induced module, we see that that ∇(λ) is isomorphic to a submodule of S λ E, namely as the kernel of the G-homomorphism ψ λ (2.6).We claim that we may replace the factor E ⊗b with the exterior power Λ b E. Once again, this process is independent of the characteristic of k.For this, we construct from the comultiplication map ∆ : Lemma 4.2.For m ≥ 2 and λ = (a, m − 1, m − 2, . . ., 2, 1 b ), we have: Proof.(i) Firstly, it follows from the definition of ψ λ that ker ψ λ ⊆ ker ψ (m+k−1,m+k,1) λ for 1 ≤ k < b.Then, the k-space ker ψ (m+k−1,m+k,1) λ is generated by elements of the form x⊗e λ is generated by elements of the form: from which part (i) follows.
(ii) Now, the map 1 ⊗ ∆ : Moreover, it follows from part (i) that ν is surjective, and so we have a G-isomorphism ker( Remark 4.3.We shall consider the constructions of this section from the perspective of the Specht module Sp(λ).
Then, by combining the results of Lemma 4.4, Lemma 4.5, and Lemma 4.6, we obtain the following description of the endomorphism algebra of Sp(λ): Corollary 4.7.The endomorphism algebra of Sp(λ) may be identified with the k-subspace of Hom kSr (M (α), M (β)) consisting of those elements h that satisfy: (ii) We say that an element h ∈ Hom kSr (M (α), M (β)) is relevant if h • φ(i,j,1) α = 0 and ψ(i,j,1) Proof.The maps ω and ω are clearly injective.Now, Lemma 4.1(i) and Lemma 4.2(i) give that both maps are surjective.
Remark 4.10. Let γ ∈ Λ(n, r) with ℓ := ℓ(γ). Then:
(i) , where the sum is over those A ∈ Tab(λ′, γ) whose first (m − 1) rows agree with those of B, and also Σ_{i=m}^{a} a_{ij} = b_{mj} for 1 ≤ j ≤ ℓ. Informally, these A are obtained from B by distributing, along columns, each non-zero entry within the mth-row of B into rows m through a of A in such a way that these rows of A contain exactly one non-zero, and hence equal to 1, entry.
(ii) Now, let B ∈ Tab(γ, β). Then ι_β ∘ ρ[B] ∈ Hom_{kS_r}(M(γ), M(λ)), and one can easily check that , where the sum is over those A ∈ Tab(γ, λ) whose first (m − 1) columns agree with those of B, and also Σ_{j=m}^{a′} a_{ij} = b_{im} for 1 ≤ i ≤ ℓ. Informally, these A are obtained from B by distributing, along rows, each non-zero entry within the mth-column of B into columns m through a′ of A in such a way that these columns of A contain exactly one non-zero, and hence equal to 1, entry.

and only if the coefficients h[B] of the ρ[B] in h satisfy:
(i) For all 1 ≤ i < j ≤ m, 1 ≤ k ≤ m, and all B ∈ Tab(α, β) with b jk = 0, we have: and all B ∈ Tab(α, β) with b kj = 0, we have: Proof.For B ∈ Tab(α, β), we denote by Ω(B) the subset of matrices in Tab(λ ′ , λ) with: and we shall set Then, it follows from Remark 4.10 that the coefficients h[A] of the ρ[A] in h satisfy: Now, suppose that h is relevant and we shall show that the coefficients h[B] of the ρ[B] in h satisfy the relations stated in (i), and it may be shown in a similar manner that they also satisfy the relations stated in (ii).Firstly, note that h is relevant by Lemma 4.9(ii).We fix 1 ≤ i < j ≤ m, 1 ≤ k ≤ m, and B ∈ Tab(α, β) with b jk = 0.Then, there exists A ∈ Ω(B) with a jk = 0.For such an A, since h is relevant, the relation R k i,j (A) of Corollary 3.17(ii) gives that: ( Now, take any 1 ≤ l ≤ n with l = k such that a il = 0.If l < m, then a il = b il and A (i,k)(j,l) (j,k)(i,l) ∈ Ω(B (i,k)(j,l) (j,k)(i,l) ), so that h A (i,k)(j,l) (j,k)(i,l) = h B (i,k)(j,l) (j,k)(i,l) .On the other hand, if l ≥ m, then a il = 1 with A (i,k)(j,l) (j,k)(i,l) ∈ Ω(B (i,k)(j,m) (j,k)(i,m) ) so that h A (i,k)(j,l) (j,k)(i,l) = h B (i,k)(j,m) (j,k)(i,m) .Therefore, we may rewrite (4.15) as: Now, if k < m, then a ik = b ik and l≥m a il = b im .Thus, (4.16) becomes: which is precisely the relation R k i,j (B).On the other hand, if k = m, then a im = 0, since a jm = 0, and so l>m a il = b im .Moreover, B (i,k)(j,m) (j,k)(i,m) = B, and so (4.16) becomes: which in turn gives the relation R m i,j (B):

Conversely, suppose that the coefficients h[B] of the ρ[B] in h satisfy the relations stated in the Lemma. Note that, by Lemma 4.9(ii), in order to show that h is relevant, it suffices to show that the induced homomorphism h̄ is relevant. To this end, we shall show that h̄ ∘ φ^{(i,j,1)} = 0 for all 1 ≤ i < j ≤ n, and it shall follow similarly that ψ^{(i,j,1)}_λ ∘ h̄ = 0 for such i, j. Note that h̄ is semirelevant by Lemma 4.9(i), and so h̄ ∘ φ^{(i,j,1)} = 0 for i ≥ m. Therefore, we may assume that i < m.
Accordingly, fix some 1 ≤ i < j ≤ n with i < m.Then, as in the proof of Lemma 3.14, we have: (4.17) Let C ∈ Tab(λ ′ (i,j,1) , λ), and we wish to show that the coefficient of ρ is equal to 0. According to (4.14) and (4.17), we may assume that there exists some 1 ≤ k ≤ n with c ik = 0 such that A := C (j,k)  (i,k) ∈ Ω(B) for some B ∈ Tab(α, β), where Ω(B) is as in (4.13), since otherwise, each h C (j,l)  (i,l) appearing in the coefficient of ρ[C] in (4.17) is equal to zero.Then, it follows from (4.17) that the coefficient of ρ We split our consideration into the following cases: ,m) .Note that there are precisely b im such values of l.Hence, we may rewrite (4.18) as: Note that there are precisely b im such values of l.Hence, we may rewrite (4.18) as: ,m) .Note that there are precisely b im such values of l.Hence, we may rewrite (4.18) as: Finally, in this case, we have c ik = 1 and also b mm = 0 since Note that there are precisely b im such values of l.Hence, we may rewrite (4.18) as: is zero in all possible cases, and so we are done.Now, since α and β both have length m, we may ignore the final (n − m) rows and columns of each matrix in Tab(α, β) and Tab(β, α).Accordingly, we identify Tab(α, β) with the set T := {A ∈ M m×m (N) | j a ij = α i and i a ij = β j }, and Tab(β, α) with the set Remark 4.19.Note that λ and its transpose λ ′ are of the same form.That is to say, the swap λ which in turn is equivalent to the swap α ↔ β.Therefore, after defining the notion of relevance for elements h ∈ Hom kSr (M (β), M (α)), similarly to Definition 4.8(ii), and also swapping T with T ′ , we obtain the following analogue of Lemma 4.12: Remark 4.22.Let h ∈ Hom kSr (M (α), M (β)) and consider its transpose homomorphism h ′ ∈ Hom kSr (M (β), M (α)).We have:

A critical relation.
Here, we shall highlight a new relation, occurring as a combination of the relations R^k_{i,j}(A) and C^k_{i,j}(A) of Lemma 4.12, that will play an important role in our considerations below.

Lemma 4.23. Suppose that h ∈ Hom_{kS_r}(M(α), M(β)) is a relevant homomorphism.

Then the coefficients h[A] of the ρ[A] in h satisfy the relations:
for all 1 ≤ j, k ≤ m and A ∈ T with a_{jk} ≠ 0, where z_{j,k}(A) :

Proof. Since h is relevant, the coefficients h[A] of the ρ[A] in h satisfy the relations of Lemma 4.12, and so in particular, given 1 ≤ j, k ≤ m, the coefficients satisfy the relation Σ_{i<j} R^k_{i,j}(A) + Σ_{l<k} C^j_{l,k}(A) for all A ∈ T with a_{jk} ≠ 0. But the left-hand side of this relation is given by: by definition of z_{j,k}(A). On the other hand, the right-hand side of this relation is: and so, after cancelling those terms that appear twice, we may rewrite (4.25) as: which, along with (4.24), gives the required expression.

One-dimensional endomorphism algebra
Given integers s and t, we write s ≡ t to mean that s is congruent to t modulo 2, so that, in particular, s and t are equal as elements of the field k. From here on, we shall assume that the parameters a, b, and m satisfy the parity condition a − m ≡ b (mod 2). Note that this condition is preserved by the swap (a, b) ↔ (a′, b′), where a′ := b + m − 1 and b′ := a − m + 1 are the corresponding parameters of the transpose partition λ′ = (a′, m − 1, …, 2, 1^{b′}). Firstly, we highlight some basic properties of the coefficients z_{j,k}(A) from Lemma 4.23.
Lemma 5.1. Let A ∈ T. Then:

Proof. Part (i) follows from substituting the two expressions Σ_{i<j} a_{ik} = β_k − Σ_{i≥j} a_{ik} and Σ_{l<k} a_{jl} = α_j − Σ_{l≥k} a_{jl} into the definition of z_{j,k}(A). Parts (ii)–(v) then follow immediately from part (i) along with the forms of α and β.

Definition 5.2. Let A, B ∈ T. Then:
(i) We write A <_R B to mean that B follows A under the induced lexicographical order on rows, reading left to right and bottom to top. This is a total order and we call it the row-order.
(ii) We write A <_C B to mean that B follows A under the induced lexicographical order on columns, reading top to bottom and right to left. This is a total order and we call it the column-order.
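The two total orders of Definition 5.2 can be modelled as key functions; the following is a small Python sketch of ours (matrices are 0-indexed lists of rows, and the example matrices are hypothetical):

```python
def row_key(A):
    # read the rows bottom to top, each row left to right
    return tuple(tuple(row) for row in reversed(A))

def col_key(A):
    # read the columns right to left, each column top to bottom
    n = len(A[0])
    return tuple(tuple(A[i][j] for i in range(len(A))) for j in reversed(range(n)))

def lt_R(A, B):
    """A <_R B in the row-order of Definition 5.2(i)."""
    return row_key(A) < row_key(B)

def lt_C(A, B):
    """A <_C B in the column-order of Definition 5.2(ii)."""
    return col_key(A) < col_key(B)

# hypothetical 2x2 matrices with equal row-sums and column-sums
A = [[1, 1], [1, 0]]
B = [[2, 0], [0, 1]]
print(lt_R(B, A), lt_C(B, A))  # → True True
```

Here B precedes A in both orders, illustrating that the two orders need not disagree even though they read the matrix along different axes.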
Remark 5.3. Let 1 ≤ j, k ≤ m and let A ∈ T with a_{jk} ≠ 0. Then any B = A^{(i,k)(j,l)}_{(j,k)(i,l)} that appears in the relation Z_{j,k}(A) of Lemma 4.23 satisfies both B <_R A and B <_C A.
Lemma 5.4. Let A ∈ T and suppose that a_{mm} ≠ 0. Then h[A] = 0.

Proof. Firstly, z_{m,m}(A) = 1 by Lemma 5.1(iv), and the result follows from the relation Z_{m,m}(A).
Lemma 5.6. Let A ∈ T and suppose that there exist some 1 < j, k < m such that a_{jm} ≠ 0 and a_{mk} ≠ 0. Then h[A] = 0.
Proof. Suppose for contradiction that the claim is false, and let A ∈ T be a counterexample that is minimal with respect to the column-order <_C. We choose 1 < j, k < m to be maximal such that a_{jm}, a_{mk} ≠ 0. We may assume that a_{mm} = 0 by Lemma 5.4. Now, by Lemma 5.1(iii) we have z_{j,m}(A) + z_{m,k}(A) = 1, and so the relations Z_{j,m}(A) and Z_{m,k}(A) combine to express h[A] in terms of the matrices B^{[i,l]} := A^{(i,m)(j,l)}_{(j,m)(i,l)} for i > j, l < m with a_{il} ≢ 0, and matrices D^{[i,l]} for i < m, l > k with a_{il} ≢ 0. Suppose that i > j, l < m are such that a_{il} ≢ 0, and consider the matrix B^{[i,l]}. If i = m, then b^{[m,l]}_{mm} ≠ 0, and so h[B^{[m,l]}] = 0 by Lemma 5.4. On the other hand, if i < m then b^{[i,l]}_{im}, b^{[i,l]}_{mk} ≠ 0, and notice also that B^{[i,l]} <_C A by Remark 5.3. Therefore, by minimality of A, we have that h[B^{[i,l]}] = 0. Similarly, one may show that h[D^{[i,l]}] = 0 for i < m, l > k with a_{il} ≢ 0, and so we deduce that h[A] = 0.

Definition 5.7. We define the sets:
(i) T^R := {A ∈ T | a_{i1} = 1 for 1 ≤ i < m, and a_{mk} = 0 for 1 < k ≤ m};
(ii) T^C := {A ∈ T | a_{1k} = 1 for 1 ≤ k < m, and a_{jm} = 0 for 1 < j ≤ m}.
Lemma 5.8. Let A ∈ T and suppose that A ∉ T^R ∪ T^C. Then h[A] = 0.
Proof. By Lemma 5.4 we may assume that a_{mm} = 0. Suppose first that a_{mk} ≠ 0 for some k with 1 < k < m. Then, by Lemma 5.6, we may assume that a_{jm} = 0 for 1 < j < m. But then a_{1m} = b, and so Σ_{l<m} a_{1l} = m − 1. Since A ∉ T^C, we deduce that there exists some 1 ≤ l < m with a_{1l} = 0. Now, the relation C^1_{l,m}(A) gives that h[A] = Σ_{j>1} a_{jl} h[B^{[j]}], where B^{[j]} := A^{(1,l)(j,m)}_{(1,m)(j,l)} for j > 1 with a_{jl} ≢ 0. Suppose that j > 1 is such that a_{jl} ≢ 0. If j = m, then b^{[m]}_{mm} ≠ 0, and so h[B^{[m]}] = 0 by Lemma 5.4. Moreover, for 1 < j < m we have that b^{[j]}_{mk}, b^{[j]}_{jm} ≠ 0, and so h[B^{[j]}] = 0 by Lemma 5.6. Therefore, we deduce that h[A] = 0. Hence, we may assume that a_{mk} = 0 for all 1 < k ≤ m, and so it follows that a_{m1} = a − m + 1 and that Σ_{j<m} a_{j1} = m − 1. However, since A ∉ T^R, we must have that a_{j1} = 0 for some j with 1 ≤ j < m. Now, the relation R^1_{j,m}(A) gives h[A] = Σ_{l>1} a_{jl} h[D^{[l]}], where D^{[l]} := A^{(j,1)(m,l)}_{(m,1)(j,l)} for l > 1 with a_{jl} ≢ 0. Suppose that l > 1 is such that a_{jl} ≢ 0. If l = m, then d^{[m]}_{mm} ≠ 0, and so h[D^{[m]}] = 0 by Lemma 5.4. On the other hand, if 1 < l < m then d^{[l]}_{ml} ≠ 0. Now, if d^{[l]}_{um} ≠ 0 for some 1 < u < m, then h[D^{[l]}] = 0 by Lemma 5.6. Hence, we may assume that d^{[l]}_{um} = 0 for all 1 < u < m, and so we deduce that d^{[l]}_{1m} = a_{1m} = b. Since A ∉ T^C, we have that there exists some 1 ≤ k < m with a_{1k} = 0 and hence d^{[l]}_{1k} = 0. Then, the relation C^1_{k,m}(D^{[l]}) expresses h[D^{[l]}] as a linear combination of h[F]s where either f_{mm} ≠ 0, or f_{ml} ≠ 0 and f_{vm} ≠ 0 for some v with 1 < v < m. Once again, Lemma 5.4 and Lemma 5.6 give that h[F] = 0 for all such F, and so h[D^{[l]}] = 0. Hence h[A] = 0.
Definition 5.9. We shall require some additional notation, which we introduce here:
(i) In order to assist with counting in reverse, set τ(i) := m − (i − 1) for 1 ≤ i ≤ m.
(ii) For 1 < i < m, we define: T^R_i := {A ∈ T^R | the τ(j)th-row of A contains j odd entries for 1 < j ≤ i}.
(iii) For 1 < i < m, we define T^R_i := T^R_i \ T^R_{i+1}, where we set T^R_m := ∅.

Remark 5.10. Let A ∈ T. Recall that Σ_l a_{τ(i)l} = i for 1 < i < m. Therefore, if A ∈ T^R_i for some 1 < i < m, then the τ(j)th-row of A consists entirely of ones and zeros for all 1 < j ≤ i.
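Definitions 5.7(i) and 5.9 are straightforward to model; the sketch below (ours, with 0-indexed Python lists and a hypothetical example matrix) tests membership in T^R and in T^R_i:

```python
def tau(i, m):
    # counting rows in reverse: tau(1) = m, tau(2) = m - 1, ...
    return m - (i - 1)

def in_TR(A, m):
    """Membership in T^R: first-column entries equal to 1 in rows 1..m-1,
    and zero entries in the last row away from the (m,1) position."""
    return (all(A[i][0] == 1 for i in range(m - 1))
            and all(A[m - 1][k] == 0 for k in range(1, m)))

def in_TRi(A, m, i):
    """Membership in T^R_i: A in T^R whose tau(j)th row has exactly j odd
    entries for every 1 < j <= i (rows indexed from 1 as in the paper)."""
    if not in_TR(A, m):
        return False
    return all(sum(x % 2 for x in A[tau(j, m) - 1]) == j
               for j in range(2, i + 1))

# a hypothetical m = 3 example: in T^R, but row tau(2) has only one odd entry
A = [[1, 0, 4],
     [1, 2, 0],
     [5, 0, 0]]
print(in_TR(A, 3), in_TRi(A, 3, 2))  # → True False
```

Replacing the middle row by [1, 1, 0] yields a matrix lying in T^R_2, since that row then has exactly two odd entries.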
Definition 5.11.Let 1 < i < m and A ∈ T R i .Then: we denote by w j (A) := (w j 1 (A), w j 2 (A), . ..) the decreasing sequence of column-indices within the final τ (k A ) columns of A that satisfy a τ (j)w j s (A) = 1 for s ≥ 1.Notice that the sequence w j (A) has j − k A + 1 terms.
Lemma 5.13.Let 2 < i < m and let A ∈ T R i with k A ≤ i. Suppose that there exists some index k with k A < k ≤ i such that w j t (A) = w j−1 t−1 (A) for all k A < j ≤ k and all even t.Then for l ≥ k A , k A ≤ j ≤ k, we have u≥τ (j) a ul ≡ 1 if and only if l = w j s (A) for some odd s.
Proof.We proceed by induction on j.The case j = k A is clear and so we may assume that j > k A and that the claim holds for all smaller values of j in the given range.Let l ≥ k A and suppose that u≥τ (j) a ul ≡ 1. Suppose, for the moment, that a τ (j)l = 0. Then u≥τ (j) a ul = u≥τ (j−1) a ul , and so l = w j−1 s (A) for some odd s by the inductive hypothesis.However, w j s+1 (A) = w j−1 s (A) = l and so a τ (j)l = 1, contradicting that a τ (j)l = 0. Hence, a τ (j)l = 1 and so l = w j s (A) for some s.Moreover, u≥τ (j) a ul ≡ 1 if and only if u≥τ (j−1) a ul ≡ 0 and so by the inductive hypothesis l = w j−1 s ′ (A) for any odd s ′ .Now, if s is even then w j s (A) = w j−1 s−1 (A), leading to a contradiction.Hence, s must be odd.Conversely, suppose that l = w j s (A) for some odd s, and suppose, for the sake of contradiction, that u≥τ (j) a ul ≡ 0.Then, there exists some k A ≤ j ′ < j such that a τ (j ′ )l = 1, and we choose j ′ to be maximal with this property.Therefore, a ul = 0 for τ (j) < u < τ (j ′ ) and u≥τ (j ′ ) a ul ≡ 1.Then, by the inductive hypothesis, 1 (B [l] ).We shall proceed by induction on j A , decreasing from j A = i.
Firstly, suppose that j A = i.If l > u and a lw = 0, then B [l] ∈ T R i ′ for some i ′ < i, and so B [l] is as described in case (i).Now, if l > u with a lw = 0, then k [l] = k, B [l] < C A, and the final column in which B [l] and A differ is the wth-column.Hence, here B [l] is as described in case (iii).On the other hand, if l < u, then k [l] > k and B [l] is as described in case (ii).Now, suppose that j A < i and that the claim holds for all D ∈ T R i with j A < j D ≤ i.We split our consideration into steps: Step 1: If l > u and a lw = 0, then B [l] ∈ T R i ′ for some i ′ < i, and so B [l] is as described in case (i).On the other hand, if l > u and a lw = 0, then B [l] ∈ T R i with k [l] = k and B [l] < C A. Moreover, the final column in which B [l] and A differ in this case is the wth-column and so B [l] is as described in case (iii).
Step 2: Now, if τ (i) ≤ l < u with a lw = 0. Then B [l] ∈ T R m−l with m − l < i since l ≥ τ (i) = m − i + 1, and so B [l] is as described as in case (i).
Step 3: On the other hand, if τ (i) ≤ l < u and a lw = 0, then B [l] ∈ T R i with k [l] = k and j [l] > j A .Moreover, the final column in which A and B differ is the wth-column, and so w j 1 (B [l] ) = w j 1 (A) for all j A < j ≤ i, and so in particular w j 1 (B [l] ) > w j−1 1 (B [l] ) for each j [l] < j ≤ i.Hence, by the inductive hypothesis, B [l] must satisfy the claim, and so h[B [l] ] may be written as a linear combination of h[D]s for some D ∈ T where either: (iv l] and D < C B [l] , which is witnessed within the final τ (w [l] ) columns of B [l] and D. Any such D as in (iv) is as described in case (i), whereas any such D as in (v) is as described in case (ii) since k [l] = k A .Now, notice that the final τ (w [l] ) columns of A and B [l] match since w [l] > w, and so any such D as in (vi) also satisfies D < C A (witnessed within the final τ (w) columns of A and D), and so is as described in case (iii).
Step 4: Finally, if l < τ (i), then B [l] ∈ T R i .Moreover, if a tk = 1 for all τ (i) ≤ t < τ (j A ), then k [l] > k and so B [l] is as described in case (ii).On the other hand, if a tk = 0 for some t in this range, then k [l] = k with j [l] > j A and then one may proceed as in Step 3 above.Now, suppose that A ∈ T C but B [l] ∈ T C for some l = u with a lk ≡ 0. Notice that this forces l = 1 and a lk = 2, which contradicts that a lk ≡ 0. Hence if A ∈ T C, then B [l] ∈ T C for all l = u with a lk ≡ 0. By applying this argument recursively, it follows that if A ∈ T C, then all such B produced by this procedure satisfy B ∈ T C as well.

Then we may express h[A] as a linear combination of h[B]s for some B ∈ T where either:
Proof. Firstly, recall that the sum of the entries in the τ(i + 1)th-row of A is i + 1. Now, since A ∉ T^R_{i+1}, we deduce that the τ(i + 1)th-row of A contains at most i − 1 odd entries. Hence, there exists some 1 < s ≤ i such that a_{τ(i+1)s} is even, and we choose s to be minimal with this property. To ease notation, we set q := τ(i + 1) and u := τ(s). Note that a_{us} = 1. The relation R^s_{q,u}(A) gives that h[A] = Σ_{l≠s} a_{ql} h[B^{[l]}], where B^{[l]} := A^{(q,s)(u,l)}_{(u,s)(q,l)} for l ≠ s with a_{ql} ≢ 0. If l = 1, then B^{[1]} ∉ T^R, and so B^{[1]} is as described in case (ii). Now, if 1 < l < s, then B^{[l]} ∈ T^R_{s−1} with s − 1 < i, and so B^{[l]} is as described in case (i). Meanwhile, if l > s, then B^{[l]} ∈ T^R_i, and there exists some s < t ≤ i (depending on l) such that b^{[l]}_{qt} is even, and we take t to be minimal with this property. The relation R^t_{q,τ(t)}(B^{[l]}) expresses h[B^{[l]}] as a linear combination of h[D]s for some D ∈ T that must either fit into one of the cases described in the statement of the claim, or otherwise once again D ∈ T^R_i and there exists some t < v ≤ i such that d_{qv} is even, and we take v to be minimal with this property. Noting that v > t > s, it is clear that this process must terminate, hence providing the desired expression for h[A]. Now, suppose that A ∉ T^C but B^{[l]} ∈ T^C for some l ≠ s with a_{ql} ≢ 0. Then, notice that B^{[l]} agrees with A outside the τ(i + 1)th and τ(s)th rows, and so in particular they agree in the first row, since i < m − 1. Hence a_{1v} = b^{[l]}_{1v} = 1 for 1 ≤ v < m, since B^{[l]} ∈ T^C. Now, by considering the first row-sum and the last column-sum of A, we deduce that a_{1m} = b and a_{vm} = 0 for 1 < v ≤ m. However, this implies that A ∈ T^C, which is a contradiction. Once again, by applying this argument recursively, it follows that if A ∉ T^C, then all such B produced by this procedure satisfy B ∉ T^C as well.
Lemma 5.17. Let $1 < i < m-1$ and let $A \in TR_i$. Then we may express $h[A]$ as a linear combination of $h[B]$s for some $B \in T \setminus TR$; moreover, if $A \notin TC$, then each such $B$ also satisfies $B \notin TC$.

Proof. We proceed by induction on $i \ge 2$. Firstly, suppose that $i = 2$. Since $A \in T \setminus TR_3$ with $\sum_l a_{\tau(3)l} = 3$, the $(m-2)$th row of $A$ must consist of a single odd entry, which must then be equal to 1, and be located in the first column of $A$. On the other hand, since $A \in TR_2$, there exists a unique $l > 1$ such that $a_{\tau(2)l}$ is odd, and the corresponding relation expresses $h[A]$ in terms of a single $h[B]$. Evidently, $B \notin TR$, and so the claim holds for $i = 2$. Now, we suppose that $i > 2$ and that the claim holds for all $B \in T$ such that $B \in TR_{i'}$ for some $2 \le i' < i$. Suppose, for the sake of contradiction, that the claim fails for this particular value of $i$, and consider the set of counterexamples $A \in TR_i$ whose value of $k_A$ is maximal amongst all counterexamples. Now, we choose $A$ to be the element of this set that is minimal with respect to the column-ordering. Suppose first that $k_A > i$. Then Lemma 5.16 states that we may express $h[A]$ as a linear combination of some $h[B]$s for some $B \in T$ where either $B \in TR_{i'}$ with $i' < i$, or $B \notin TR$. In the first case the inductive hypothesis states that $h[B]$ can be expressed as a linear combination of some $h[D]$s with $D \in T \setminus TR$, whilst in the second case we have $B \in T \setminus TR$ already. Thus, $h[A]$ satisfies the statement of the claim, which contradicts that $A$ was chosen to be a counterexample.
Hence, we may assume that $k_A \le i$. Suppose, for the sake of contradiction, that there exist $k_A \le j \le i$ and $k_A \le k < m$ such that $a_{\tau(j)k} = 1$ and $z_{\tau(j),k}(A) = 1$. The relation $Z_{\tau(j),k}(A)$ gives the expression:
$$h[A] = \sum_{u < \tau(j),\, l > k} a_{ul}\, h[B^{[u,l]}] + \sum_{u > \tau(j),\, l < k} a_{ul}\, h[B^{[u,l]}], \qquad (5.18)$$
where $B^{[u,l]} := A^{(u,k)(\tau(j),l)}_{(\tau(j),k)(u,l)}$ for all such $(u,l)$ satisfying $a_{ul} \not\equiv 0$. Now, set $B := B^{[u,l]}$, where $(u,l)$ is as in (5.18) with $a_{ul} \not\equiv 0$. We claim that $B$ fits into one of the following cases: $B \notin TR$, $B \in TR_{i'}$ for some $i' < i$, or $B \in TR_i$ with $k_B = k_A$ and $B <_C A$. We provide full details for the case where $u > \tau(j)$, $l < k$, with the other case, that is $u < \tau(j)$, $l > k$, being similar.
If $l = 1$, then $B \notin TR$ and so $B$ is of the desired form. Now, if $1 < l < k_A$, then either $u \ge \tau(k_A)$ or $\tau(j) < u < \tau(k_A)$. In the first case, we have $B \in TR_{j-1}$, whilst in the second case we have $B \in TR_{\tau(u-1)}$ if $a_{uk} = 1$ and $B \in TR_{j-1}$ if $a_{uk} = 0$. Hence, in either case, we deduce that $B \in TR_{i'}$ for some $i' < i$. Suppose now that $k_A \le l < k$; then we must have $\tau(j) < u \le \tau(k_A)$, since $a_{ul} \not\equiv 0$. Now, if $a_{uk} = 1$, then $B \in TR_{\tau(u-1)}$, whilst if $a_{uk} = 0$ and $a_{\tau(j)l} = 1$, then $B \in TR_{j-1}$. Finally, if $a_{uk} = 0$ and $a_{\tau(j)l} = 0$, then $B \in TR_i$ with $k_B = k_A$ and $B <_C A$. But then, either by the inductive hypothesis on $i$, or by the minimality of $A$, all such $B$ produced in this procedure must satisfy the statement of the claim, and hence so must $A$, which contradicts that $A$ was chosen to be a counterexample.
Therefore, we may assume that $z_{\tau(j),k}(A) = 0$ for all $k_A \le j \le i$ and $k_A \le k < m$ such that $a_{\tau(j)k} = 1$. Then, by Lemma 5.14 and Lemma 5.15, we may express $h[A]$ as a linear combination of $h[B]$s for some $B \in T$ where either: $B \in TR_{i'}$ for some $i' < i$, $B \in TR_i$ with $k_B > k_A$, or $B \in TR_i$ with $k_B = k_A$ and $B <_C A$. But then, either by the inductive hypothesis on $i$, the maximality of $k_A$, or the minimality of $A$, each such $B$ must satisfy the statement of the claim, and hence so must $A$, which contradicts that $A$ was chosen to be a counterexample. Thus, no such counterexample may exist. Finally, once again, it is clear from the steps taken above that if $A \notin TC$, then all such $B$ produced by this procedure satisfy $B \notin TC$ as well.

Proof of Lemma 5.20. Suppose, for the sake of contradiction, that the claim is false, and let $A \in T$ be a counterexample that is minimal with respect to the column-ordering of Definition 5.2(ii). By Corollary 5.19, we may assume that $A \notin TR_i$ for any $i < m-1$, and so we must have that $A \in TR_{m-1} \setminus TC$, since $A \in TR$. Hence, for each $1 < u < m$, either $a_{um} = 0$ or $a_{um} = 1$, and we claim that there exists at least one $u$ in this range with $a_{um} = 1$. Indeed, suppose otherwise; then there exists some $1 < v < m$ with $a_{1v}$ even, since $A \notin TC$. But then the relation $C^1_{vm}(A)$ expresses $h[A]$ as a linear combination of $h[B]$s for some $B \in T$ with $B <_C A$ and $B \in TR \setminus TC$. But $h[B] = 0$ for all such $B$ by the minimality of $A$, which contradicts that $A$ was chosen to be a counterexample. We hence write $(u_1, \ldots, u_s)$ for the increasing sequence whose terms are given by all such $u$. Firstly, suppose that $s > 1$ and set $u := u_{s-1}$ and $u' := u_s$. By Lemma 5.1(iii), we have that $z_{u,m}(A) + z_{u',m}(A) = 1$, and so the relation $Z_{u,m}(A) + Z_{u',m}(A)$ is given by:
$$h[A] = \sum_{v > u,\, l < m} a_{vl}\, h[B^{[v,l]}] + \sum_{v > u',\, l < m} a_{vl}\, h[D^{[v,l]}], \qquad (5.21)$$
where $B^{[v,l]} := A^{(v,m)(u,l)}_{(u,m)(v,l)}$ and $D^{[v,l]} := A^{(v,m)(u',l)}_{(u',m)(v,l)}$ for all such $(v,l)$ with $a_{vl} \not\equiv 0$. Now, let $(v,l)$ be as in (5.21) with $a_{vl} \not\equiv 0$.
Hence, we may assume that $s = 1$, or in other words that there exists a unique $u$ in the range $1 < u < m$ such that $a_{um} = 1$, and so $z_{1,m}(A) = 1$ by Lemma 5.1(v). By applying similar considerations to the above to the relation $Z_{1,m}(A)$, we once again reach a contradiction, and so no such counterexample may exist.

Definition 5.22. For $1 < i < m$, similarly to $TR_i$ of Definition 5.9(ii), we define:
$$TC_i := \{A \in TC \mid \text{the } \tau(j)\text{th column of } A \text{ contains } j \text{ odd entries for } 1 < j \le i\}.$$
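The defining condition of $TC_i$ (and, analogously, of $TR_i$) is a simple parity count on the columns of $A$. A minimal illustrative check in Python, assuming $\tau$ is the re-indexing map fixed earlier in the paper (passed here as a function returning 1-based column indices) and checking only the counting condition, not the ambient membership $A \in TC$:

```python
def odd_entries_in_column(A, c):
    """Count the odd entries in (0-indexed) column c of the integer matrix A."""
    return sum(1 for row in A if row[c] % 2 == 1)

def satisfies_TC_i_condition(A, i, tau):
    """Check the counting condition of Definition 5.22: the tau(j)th column
    of A contains exactly j odd entries for all 1 < j <= i.
    (Columns are 1-indexed in the paper; tau returns a 1-based index.)"""
    return all(odd_entries_in_column(A, tau(j) - 1) == j for j in range(2, i + 1))

# Hypothetical example with tau taken to be the identity map: column 2 has
# exactly 2 odd entries and column 3 has exactly 3, so the condition holds
# for i = 3.
A = [[0, 1, 1],
     [0, 1, 1],
     [0, 0, 1]]
print(satisfies_TC_i_condition(A, 3, lambda j: j))
```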
and such $k$ and $i$. But given such $x$, $k$ and $i$, the image of the element $x$

Now, we shall return to the situation where the underlying field $k$ has characteristic 2. We fix the sequences $\alpha := (a', m-1, \ldots, 2, b')$ and $\beta := (a, m-1, \ldots, 2, b)$.

Corollary 5.19. Let $2 \le i < m-1$ and let $A \in TR_i$ with $A \notin TC$. Then $h[A] = 0$.

Proof. By Lemma 5.17, we may express $h[A]$ as a linear combination of $h[B]$s for some $B \in T$ with $B \notin TR \cup TC$. But $h[B] = 0$ for all such $B$ by Lemma 5.8, and so the result follows.

Lemma 5.20. Let $A \in TR \setminus TC$. Then $h[A] = 0$.

Remark 5.23. Firstly, note that by Remark 4.22, the transpose homomorphism $h' \in \mathrm{Hom}_{kS_r}(M(\beta), M(\alpha))$ of $h$ is relevant. Now, the results proven above are independent of the values of $a$ and $b$, provided that they satisfy the parity condition $a - m \equiv b \pmod 2$. In particular, note that this condition is preserved under the swap $(a, b) \leftrightarrow (a', b')$, where $a' := b + m - 1$ and $b' := a - m + 1$. But, as in Remark 4.19, this swap is equivalent to the swap $\lambda \leftrightarrow \lambda'$, and accordingly $\alpha \leftrightarrow \beta$ and $T \leftrightarrow T'$. Therefore, by defining the subsets $TR', TC' \subseteq T'$ analogously to $TR, TC \subseteq T$, we obtain the analogues of the results shown in this section for the coefficients $h'[A']$ of the $\rho[A']$ in $h'$.

Proposition 5.24. Let $A \in T$ and suppose that $A \notin TR_{m-1} \cap TC_{m-1}$. Then $h[A] = 0$.

Proof. Suppose that $D \in T$ is such that $h[D] \ne 0$. Then, we may assume that $D \in TR \cup TC$, since otherwise $h[D] = 0$ by Lemma 5.8. Moreover, we may assume that $D \notin TR \setminus TC$, since otherwise $h[D] = 0$ by Lemma 5.20. On the other hand, if $D \in TC \setminus TR$, then $D' \in TR' \setminus TC'$, where $TR', TC' \subseteq T'$ are as defined in Remark 5.23. But then we have $h[D] = h'[D'] = 0$ à la Lemma 5.20, which contradicts our choice of $D$, and so we may assume that $D \notin TC \setminus TR$. In sum, we have shown that $h[D] = 0$ for all $D \in T$ with $D \notin TR \cap TC$. In particular, to prove the Proposition, we may assume that
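As a quick sanity check of the parity claim in Remark 5.23, one can verify directly that the condition $a - m \equiv b \pmod 2$ is preserved under the swap $(a, b) \leftrightarrow (a', b')$:

```latex
\[
  a' - m = (b + m - 1) - m = b - 1, \qquad b' = a - m + 1,
\]
\[
  a' - m \equiv b' \pmod 2
  \iff b - 1 \equiv a - m + 1 \pmod 2
  \iff a - m \equiv b \pmod 2 .
\]
```

So the swapped pair $(a', b')$ satisfies the parity condition exactly when $(a, b)$ does.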