The Center of the Partition Algebra

In this paper we show that the center of the partition algebra $\mathcal{A}_{2k}(\delta)$, in the semisimple case, is given by the subalgebra of supersymmetric polynomials in the normalised Jucys-Murphy elements. For the non-semisimple case, such a subalgebra is shown to be central, and in particular it is large enough to recognise the block structure of $\mathcal{A}_{2k}(\delta)$. This allows one to give an alternative description for when two simple $\mathcal{A}_{2k}(\delta)$-modules belong to the same block.


Introduction
Background. Let $\mathbb{C}S_n$ be the group algebra of the symmetric group on $n$ letters over the field of complex numbers $\mathbb{C}$. The representation theory of $\mathbb{C}S_n$ has been extensively studied and is well understood. A particularly illuminating approach to its study was given by Okounkov and Vershik in [OV96], where they focused on two main features. The first of these was the chain of algebras $\mathbb{C} = \mathbb{C}S_0 \subset \mathbb{C}S_1 \subset \mathbb{C}S_2 \subset \cdots$. The idea was to study the representation theory of $\mathbb{C}S_n$ for all $n$ simultaneously by understanding the restriction rules in this chain. Notably, the chain was shown to be multiplicity-free, meaning that for any simple $\mathbb{C}S_n$-module $M$, the multiplicity of any simple $\mathbb{C}S_{n-1}$-module in the restriction $M{\downarrow}_{\mathbb{C}S_{n-1}}$ is at most one. The information of the restriction rules in the above chain can be encoded in the form of a directed graph, called the branching graph of the chain. From such a graph one can construct a basis, unique up to scalars, for any simple $\mathbb{C}S_n$-module indexed by certain paths in the branching graph, referred to as the Gelfand-Zetlin basis, or GZ-basis for short. The branching graph thus gives a useful framework for describing the representation theory of all the symmetric groups, and the GZ-basis allows one to fit any simple module within this framework in a compatible way.

The second main feature in the work of Okounkov and Vershik is the role played by the Jucys-Murphy elements, or JM-elements for short. These are a sequence of commuting elements defined by $L_1 = 0$ and, for each $k \geq 2$,
$$L_k := (1,k) + (2,k) + \cdots + (k-1,k),$$
where $(j,k)$ denotes the transposition exchanging $j$ and $k$. These elements are well adapted to the multiplicity-free chain above. For example, we obtain a new JM-element for each step in the chain, and this element commutes with the smaller algebras; that is, $L_k \in \mathbb{C}S_k$ for all $k \geq 1$ and $L_k \pi = \pi L_k$ for all $\pi \in \mathbb{C}S_{k-1}$. Moreover, the GZ-basis of any simple $\mathbb{C}S_n$-module diagonalises the action of the JM-elements $L_1, L_2, \ldots, L_n \in \mathbb{C}S_n$.
Hence to each GZ-basis element of any simple $\mathbb{C}S_n$-module we may associate an $n$-tuple of eigenvalues, one for each JM-element. Giving an explicit description of such tuples in turn allows one to give an explicit description of the branching graph, from which many of the classical results on the representation theory of $\mathbb{C}S_n$ are recovered. This is precisely what was done in [OV96], which gave an alternative means of showing that the branching graph is isomorphic to Young's lattice, that the paths indexing the GZ-basis correspond to standard Young tableaux, and that the eigenvalues of the JM-elements on a given GZ-basis element are the ordered contents of the boxes in the associated standard Young tableau. This approach thus gives quite a natural explanation of the appearance of Young diagrams and Young tableaux within the theory.
A result of particular interest for this paper is the fact that the JM-elements of $\mathbb{C}S_n$ can be used to give an alternative description of the center $Z(\mathbb{C}S_n)$. Since we are working with a group algebra, it is well known that the center is spanned by the class sums; however, it was also shown in [Jucys74] and [Murphy83] that $Z(\mathbb{C}S_n)$ equals the subalgebra of symmetric polynomials in the JM-elements. That is, for any $k \geq 1$ let
$$p_k(x_1, \ldots, x_n) := x_1^k + x_2^k + \cdots + x_n^k$$
be the $k$-th power-sum symmetric polynomial in $\mathbb{C}[x_1, \ldots, x_n]$. These generate the algebra of symmetric polynomials in $n$ variables, and we have
$$Z(\mathbb{C}S_n) = \langle p_k(L_1, \ldots, L_n) \mid k \geq 1 \rangle.$$
This result on the center was later generalised to the degenerate affine Hecke algebras (see for example [Kleshchev05]).
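This description of the center can be verified directly in small cases. The following Python sketch is our own illustration (the names `mul`, `jm`, and so on are not from the paper): it realises elements of $\mathbb{C}S_4$ as dictionaries mapping permutations to coefficients, and checks that the JM-elements commute pairwise and that $p_2(L_1, \ldots, L_4)$ commutes with the Coxeter generators.

```python
from itertools import product

def compose(p, q):
    # permutations as tuples of images of 0..n-1; (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def mul(a, b):
    # multiply two group-algebra elements, stored as {permutation: coefficient}
    out = {}
    for (p, x), (q, y) in product(a.items(), b.items()):
        r = compose(p, q)
        out[r] = out.get(r, 0) + x * y
    return {k: v for k, v in out.items() if v}

def transposition(n, i, j):
    img = list(range(n))
    img[i], img[j] = img[j], img[i]
    return tuple(img)

def jm(n, k):
    # L_k = (1,k) + (2,k) + ... + (k-1,k); 0-indexed internally, so L_1 is the empty sum
    return {transposition(n, j, k - 1): 1 for j in range(k - 1)}

n = 4
L = [jm(n, k) for k in range(1, n + 1)]

# the JM-elements commute pairwise
assert all(mul(L[a], L[b]) == mul(L[b], L[a]) for a in range(n) for b in range(n))

# p_2(L_1, ..., L_n) = L_1^2 + ... + L_n^2 is central: it commutes with the
# Coxeter generators s_1, ..., s_{n-1}, which generate S_n
p2 = {}
for Lk in L:
    for perm, c in mul(Lk, Lk).items():
        p2[perm] = p2.get(perm, 0) + c
p2 = {k: v for k, v in p2.items() if v}
for i in range(n - 1):
    s = {transposition(n, i, i + 1): 1}
    assert mul(s, p2) == mul(p2, s)
```

The same check works for any $p_k$; we use $p_2$ here since $p_1(L_1,\ldots,L_n)$ is just the sum of all transpositions, whose centrality is clear.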
As discussed above, the centers of the algebras $\mathbb{C}S_n$, $B_r(\delta)$, and $B_{r,s}(\delta)$ are generated by certain (super)symmetric polynomials in their respective JM-elements. In this paper we give an analogous result for the center of $A_{2k}(\delta)$. The main result is Theorem 4.2.6, where we show that, in the semisimple case, the supersymmetric polynomials (see Definition 3.1.1) in the normalised JM-elements generate the center of $A_{2k}(\delta)$. For the previously discussed algebras, the strategy for proving the result on the center can be broken into two broad steps: (1) Show that the desired polynomials in the JM-elements are central.
(2) When the algebra is semisimple, use the action of the JM-elements to show that the correct number of such polynomials are linearly independent.
For each algebra the details of these steps differ, but the overall approach is the same. The first main result of the paper is Theorem 3.2.6, where we establish step (1); this step is independent of whether the partition algebra is semisimple or not. For this result we make heavy use of the definitions and relations established in the work of Enyang in [Eny12] and [Eny13]. It is worth mentioning that for the other algebras step (1) is quite straightforward to show. For the partition algebra the situation is much more complicated; in particular, showing that the Coxeter generators commute with such polynomials does not reduce to checking a few relations as was the case in [JK17] (see Remark 3.2.3). In the semisimple case, we use the action of the JM-elements on the GZ-basis first described in [HR05], along with the elementary supersymmetric polynomials (see Definition 3.1.4), to show that the center consists precisely of these polynomials. In the non-semisimple case, we only establish that the subalgebra of such polynomials is central; we do not know whether it constitutes the entire center. However, we can show that this central subalgebra is large enough to recognise the block structure of the partition algebra, which allows us to give an alternative criterion for when two simple modules of $A_{2k}(\delta)$ belong to the same block.

The structure of the paper is as follows. Section 2 recalls the definition of the partition algebra and the JM-elements, as described by Enyang in [Eny13]. We also collect some of the relations established in [Eny12] and [Eny13], and prove some further relations involving the JM-elements. In Section 3 we begin by recalling the definition of supersymmetric polynomials, and the result of [Stem85] showing that the elementary supersymmetric polynomials generate all such polynomials.
We then use the relations established in Section 2 to show that the supersymmetric polynomials in the normalised JM-elements are central in the partition algebra. In Section 4 we will turn our attention to the semisimple case. We recall some of the representation theory of the partition algebras. We are mainly interested in the branching graph given in Definition 4.1.4, and the GZ-basis for simple modules. Then using the action of the JM-elements on such a basis we will show that the action of the supersymmetric polynomials in the normalised JM-elements can distinguish between the simple modules (Proposition 4.2.4). From this, and a result of linear algebra, we will be able to implicitly produce a basis for the subalgebra of such polynomials in the normalised JM-elements, and then a dimension check will confirm that it is the center (Theorem 4.2.6). Finally in Section 5 we will recall the block structure of the partition algebra established by Martin in [Martin96]. We will show that the block structure can be recovered from the action of the subalgebra of supersymmetric polynomials in the normalised JM-elements. Knowing this we then conclude with an alternative criterion for when two simple modules of the partition algebra belong to the same block.

Definitions
Throughout this paper, all algebras are considered over the field $\mathbb{C}$ of complex numbers unless otherwise stated; any algebraically closed field of characteristic $0$ would do equally well. In this section we begin by recalling the definition of the partition algebras and the Jucys-Murphy elements. We will also collect and prove various relations which will be used in the next section.
Let $X$ be a finite set. Recall that a partition of $X$ is a collection $\pi = \{U_1, \ldots, U_n\}$ of subsets of $X$ such that $U_i \cap U_j = \emptyset$ for all $i \neq j$, and $\bigcup_{1 \leq i \leq n} U_i = X$. We refer to any $U_i \in \pi$ as a block of $\pi$. Let $\Pi(X)$ denote the set of all partitions of $X$.
For any $k \in \mathbb{N}$ we set $[k] := \{1, 2, \ldots, k\}$ and $[k'] := \{1', 2', \ldots, k'\}$. We view $[k] \cup [k']$ as a formal set of $2k$ elements and let $\Pi_{2k} := \Pi([k] \cup [k'])$. Any partition $\pi \in \Pi_{2k}$ can be represented by a graph consisting of two rows of $k$ vertices, where the top row of vertices is labelled $1$ to $k$ and the bottom row is labelled $1'$ to $k'$. If $v, w \in [k] \cup [k']$ belong to the same block of $\pi$, then they are connected by a path in the graph, i.e. they lie in the same connected component. Many such graphs can represent the same partition $\pi$, since vertices within the same block may be connected in many different ways. We do not distinguish between such representations; that is, two finite graphs on $2k$ (labelled) vertices are equivalent if and only if they have the same connected components. The equivalence classes of such graphs are then in one-to-one correspondence with $\Pi_{2k}$. We refer to these classes as partition diagrams, and in this way we may identify any partition $\pi \in \Pi_{2k}$ with its partition diagram.

From this we can define a multiplication of partitions $\pi, \gamma \in \Pi_{2k}$ by concatenation of their respective diagrams as follows: the product $\pi \circ \gamma$ is the partition diagram obtained by placing $\pi$ on top of $\gamma$, identifying the bottom vertices of $\pi$ with the top vertices of $\gamma$, removing any connected components that lie completely within the middle row, and then reading off the resulting connected components between the top row of $\pi$ and the bottom row of $\gamma$. This is best understood by example.
Example 2.1.1. Let $\pi, \gamma \in \Pi_{10}$ be partition diagrams; their product $\pi \circ \gamma$ is computed by concatenation as described above. (The diagrams for this example are omitted here.) This multiplication respects the equivalence relation imposed previously and can be shown to be associative. For $\pi, \gamma \in \Pi_{2k}$ let $n(\pi, \gamma)$ denote the number of connected components removed in the process of obtaining the product $\pi \circ \gamma$. For the above example we have $n(\pi, \gamma) = 1$.

Let $\delta \in \mathbb{C}$ and $k \in \mathbb{N}$. The partition algebra, denoted $A_{2k}(\delta)$, is the associative unital algebra with basis $\Pi_{2k}$ and multiplication $A_{2k}(\delta) \times A_{2k}(\delta) \to A_{2k}(\delta)$ given by $(\pi, \gamma) \mapsto \delta^{n(\pi,\gamma)} \pi \circ \gamma$ for any $\pi, \gamma \in \Pi_{2k}$, extended linearly to all of $A_{2k}(\delta)$. An arbitrary element of $A_{2k}(\delta)$ is thus a formal linear combination of partition diagrams in $\Pi_{2k}$. We will denote the multiplication of basis elements $\pi$ and $\gamma$ by the concatenation of symbols $\pi\gamma$. The identity element of $A_{2k}(\delta)$ is the basis vector given by the partition $\{\{1, 1'\}, \{2, 2'\}, \ldots, \{k, k'\}\}$.

For $i \in [k-1]$ and $j \in [k]$, we define the diagrams $s_i$, $e_{2j-1}$, and $e_{2i}$ as follows: $s_i$ is the diagram which joins $i$ to $(i+1)'$ and $i+1$ to $i'$, with all other blocks of the form $\{j, j'\}$; $e_{2j-1}$ is the diagram in which $j$ and $j'$ form singleton blocks, with all other blocks of the form $\{i, i'\}$; and $e_{2i}$ is the diagram with $\{i, i+1, i', (i+1)'\}$ as a block, with all other blocks of the form $\{j, j'\}$. The elements $s_1, \ldots, s_{k-1}, e_1, e_2, \ldots, e_{2k-1}$ generate $A_{2k}(\delta)$. The subalgebra generated by the elements $s_i$ for $i \in [k-1]$ is precisely the symmetric group algebra $\mathbb{C}S_k$. There is an algebra anti-automorphism $* : A_{2k}(\delta) \to A_{2k}(\delta)$ defined by flipping a partition diagram upside-down and extending linearly over $A_{2k}(\delta)$. We denote the image of $a \in A_{2k}(\delta)$ under this anti-automorphism by $a^*$. In particular, the generators above are invariant under $*$. Furthermore, restricting this anti-automorphism to the subalgebra $\mathbb{C}S_k$ yields the usual anti-automorphism of $\mathbb{C}S_k$ given by inversion. In [HR05, Theorem 1.1] a presentation for $A_{2k}(\delta)$ in terms of the above generators was given. We recall some of the relations in this presentation for later use; they are easily verified diagrammatically.
(3) $e_i e_{i \pm 1} e_i = e_i$.
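The concatenation product and the count $n(\pi, \gamma)$ can be computed mechanically with a union-find over the three rows of the stacked diagrams. The following Python sketch is our own illustration (the name `diagram_mul` and the signed-integer encoding are assumptions, not from the paper); it encodes a block as a set of integers, with $1, \ldots, k$ the top row and $-1, \ldots, -k$ the bottom row, and checks relation (3) together with $e_1^2 = \delta e_1$ for $k = 2$.

```python
def diagram_mul(pi, gamma, k):
    """Concatenation product of partition diagrams on {1..k, 1'..k'}.
    A diagram is a frozenset of blocks; a block is a frozenset of ints,
    with 1..k the top row and -1..-k the bottom row.
    Returns (pi o gamma, n(pi, gamma))."""
    # three layers after stacking: 't' = top of pi, 'm' = middle, 'b' = bottom of gamma
    parent = {(l, i): (l, i) for l in 'tmb' for i in range(1, k + 1)}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for block in pi:                       # pi spans the top and middle layers
        vs = [('t', v) if v > 0 else ('m', -v) for v in block]
        for v in vs[1:]:
            union(vs[0], v)
    for block in gamma:                    # gamma spans the middle and bottom layers
        vs = [('m', v) if v > 0 else ('b', -v) for v in block]
        for v in vs[1:]:
            union(vs[0], v)

    comps = {}
    for v in parent:
        comps.setdefault(find(v), []).append(v)

    blocks, removed = [], 0
    for comp in comps.values():
        outer = [i if l == 't' else -i for (l, i) in comp if l != 'm']
        if outer:
            blocks.append(frozenset(outer))
        else:
            removed += 1                   # component lived entirely in the middle row
    return frozenset(blocks), removed

# generators for k = 2, encoded as described in the text
e1 = frozenset({frozenset({1}), frozenset({-1}), frozenset({2, -2})})   # e_1
e2 = frozenset({frozenset({1, 2, -1, -2})})                             # e_2

# relation (3): e_1 e_2 e_1 = e_1, with no factor of delta picked up
d1, n1 = diagram_mul(e1, e2, 2)
d2, n2 = diagram_mul(d1, e1, 2)
assert d2 == e1 and n1 + n2 == 0

# e_1^2 = delta * e_1: the middle-row singleton is removed, contributing delta^1
d, n = diagram_mul(e1, e1, 2)
assert d == e1 and n == 1
```

The path-halving `find` keeps the union-find near-linear; for the small diagrams appearing here any implementation would do.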
The partition algebra A 2k (δ) has a subalgebra, denoted by A 2k−1 (δ), which is spanned by the partitions π ∈ Π 2k such that k and k ′ belong to the same block of π. In turn, the algebra A 2k−1 (δ) contains A 2k−2 (δ) as a subalgebra by identifying A 2k−2 (δ) with the span of all partitions π ∈ Π 2k with {k, k ′ } as a block. From such identifications we obtain a chain of algebras

In terms of the above generators, $A_{2k-1}(\delta) = \langle s_1, \ldots, s_{k-2}, e_1, \ldots, e_{2k-2} \rangle$; that is, we have dropped the generators $s_{k-1}$ and $e_{2k-1}$. To obtain $A_{2k-2}(\delta)$, we further drop the generator $e_{2k-2}$. Whenever we refer to a subalgebra $A_r(\delta) \subset A_{2k}(\delta)$, we will be using the identification described in the above chain. Furthermore, we will use the variable $r$ in the index when the parity is arbitrary, and $2k$ or $2k \pm 1$ otherwise.

The Jucys-Murphy Elements.
In this section we will give the definition of the Jucys-Murphy elements (JM-elements) of the partition algebra, and describe some properties and relations regarding them. For each inclusion $A_{r-1}(\delta) \subset A_r(\delta)$ we obtain a new JM-element $L_r \in A_r(\delta)$, and this element belongs to the centraliser subalgebra
$$Z(A_{r-1}(\delta), A_r(\delta)) := \{ z \in A_r(\delta) \mid za = az \text{ for all } a \in A_{r-1}(\delta) \}.$$
These elements were first introduced in [HR05], where they were described diagrammatically. It proves quite difficult to establish relations between the JM-elements and the generators of $A_r(\delta)$ using this diagrammatic definition. Fortunately, Enyang gave a recursive definition of the JM-elements in [Eny12], alongside new elements $\sigma_i$; we recall both below.
Remark 2.2.1. This subsection relies on the work of Enyang from both [Eny12] and [Eny13]; however, there is a change of notation between these two papers. We will be adopting the notation used in [Eny13], and in particular the definition below is from [Eny13]. To ease reference checking, the conversion of notation from [Eny12] to [Eny13] is given by $p_i \sim e_{2i-1}$, $p_{i+\frac{1}{2}} \sim e_{2i}$, $\sigma_i \sim \sigma_{2i-1}$, $\sigma_{i+\frac{1}{2}} \sim \sigma_{2i}$, $L_i \sim L_{2i}$, and $L_{i+\frac{1}{2}} \sim L_{2i+1}$.

Definition 2.2.2. Let $L_1 = 0$, $L_2 = e_1$, $\sigma_2 = 1$, and $\sigma_3 = s_1$. The higher elements $L_i$ and $\sigma_i$ are then defined by the recursions of [Eny13, Section 2.3]. (The displayed recursions are omitted here.) In Section 2.3 of [Eny13] there were a few typos in the definition of the JM-elements due to the change in notation, which we have corrected in the definition above.

A straightforward proof by induction yields that $L_i$ belongs to $A_i(\delta)$ and $\sigma_i$ belongs to $A_{i+1}(\delta)$. Enyang showed that the elements $e_1, e_2, \ldots, e_{2k-1}, \sigma_2, \sigma_3, \ldots, \sigma_{2k-1}$ generate $A_{2k}(\delta)$; moreover, a presentation of $A_{2k}(\delta)$ in terms of these generators was given (see Theorem 4.1 of [Eny12]). Remarkably, although the diagrammatic description of the elements $\sigma_i$ gets quite complicated, the relations in Enyang's presentation are very simple. We will need a variety of the relations and facts regarding the elements $L_i$ and $\sigma_i$ established in both [Eny12] and [Eny13]; we collect such results below.
We will mainly be interested in working with the normalised JM-elements, defined to be N i := L i −δ/2. These elements are better suited to describe the center of A 2k (δ). We will now establish some relations regarding the elements N i which will be used extensively in the next section. We first show which generators commute with the normalised JM-elements.
Lemma 2.2.4. We have the following commuting relations:
(1) $e_i N_j = N_j e_i$ for all $j \neq i, i+1$.
(2) $s_i N_j = N_j s_i$ for all $j \neq 2i-1, 2i, 2i+1, 2i+2$.
(3) $\sigma_{2i+1} N_j = N_j \sigma_{2i+1}$ for all $j \neq 2i, 2i+1, 2i+2$.
We would like to know what interaction occurs between the Coxeter generators and the normalised JM-elements when they do not commute. The lemma below can be used to give us relations of the form $s_i N_{2i} = N_{2i+2} s_i + C$ and $s_i N_{2i-1} = N_{2i+1} s_i + D$, where $C$ and $D$ are linear combinations of diagrams; these relations will come in handy in the next section.
Once again replacing L 2i+2 with N 2i+2 and L 2i with N 2i gives a valid equality.
The next lemma tells us how the normalised JM-elements interact with the odd indexed generators σ i , when they do not commute.
Lastly, by Lemma 2.1.2 we have that s i e 2i−1 e 2i = e 2i+1 s i e 2i = e 2i+1 e 2i , and similarly e 2i e 2i−1 s i = e 2i e 2i+1 . Hence the above equation reduces to the desired one.
(2): The sum $\sum_{i=1}^{r} N_i$ is central in $A_r(\delta)$ by Lemma 2.2.3 (2)(iii), and by Lemma 2.2.4 (3) we know that $\sigma_{2i+1} N_j = N_j \sigma_{2i+1}$ for all $j \neq 2i, 2i+1, 2i+2$. From these two facts, after rearranging, we obtain a relation (Eq1) (display omitted here). We now focus on the bracketed terms in (Eq1): the first bracketed term is given by (1), while for the second, multiplying the equality from (1) by $\sigma_{2i+1}$ on both the left and the right and rearranging gives a matching expression; the sum of the two bracketed terms then simplifies as required. For our purposes, we do not need an analogous result regarding the even indexed generators $\sigma_i$. The next lemma tells us how the generators $e_i$ interact with the normalised JM-elements whenever they do not commute.
Lemma 2.2.7. We have the following relations:
(1) $e_i N_i = -e_i N_{i+1}$.
(2) $N_i e_i = -N_{i+1} e_i$.
Proof. Using the relation $e_i(L_i + L_{i+1}) = \delta e_i$ and the definition $N_i = L_i - \delta/2$, we have $e_i N_i + e_i N_{i+1} = e_i(L_i + L_{i+1}) - \delta e_i = 0$. Hence $e_i N_i = -e_i N_{i+1}$, giving (1). Relation (2) follows by applying the anti-automorphism $*$ to (1).

Central Subalgebra
For this section we will show that the subalgebra of supersymmetric polynomials in the normalised JM-elements belongs to the center of the partition algebra. We begin by recalling the definition of the algebra of supersymmetric polynomials and some of its properties. Then we will use the relations given in the previous section to show that the generators of the partition algebra commute with any such polynomial in the normalised JM-elements.

Supersymmetric Polynomials.
What is covered here can be found in [Stem85] and [Moens07]; we remodel the definitions slightly to better align with our situation. Let $r$ be a non-negative integer. We denote by $r_e$ the largest even integer with $r_e \leq r$, and by $r_o$ the largest odd integer with $r_o \leq r$. We also define the sets $E(r) = \{2, 4, \ldots, r_e\}$ and $O(r) = \{1, 3, \ldots, r_o\}$. We set $P_r := \mathbb{C}[x_1, \ldots, x_r]$, the algebra of polynomials in $r$ commuting variables. By convention, when $r = 0$ we set $r_e = r_o = 0$, $E(r) = O(r) = \emptyset$, and $P_0 := \mathbb{C}$.
Definition 3.1.1. Let r be a non-negative integer and p ∈ P r . We say that p is supersymmetric if (1) p is parity symmetric: p is symmetric in x 1 , x 3 , . . . , x ro , and symmetric in x 2 , x 4 , . . . , x re .
(2) p satisfies the cancellation property: substituting x 1 = −x 2 = y yields a polynomial in x 3 , x 4 , . . . , x r which is independent of y. We will denote by SS r [x] the set of all supersymmetric polynomials in r commuting variables.
When r is even, then the number of odd index variables agrees with the number of even index variables, while when r is odd, there is one more odd index variable than even. We will often suppress the arguments of a polynomial p, but when we want to be clearer about the number of variables in play we will write p(x 1 , x 2 , . . . , x r ). It is not difficult to see that the two properties in the above definition respect addition and multiplication of polynomials, and so SS r [x] is in fact a subalgebra of P r . For convention we set SS 0 [x] = C, and we have that SS 1 [x] = C[x], the polynomial algebra in one variable x.
Remark 3.1.2. In [JK17], they consider supersymmetric polynomials in the two sets of variables X = {x 1 , . . . , x r } and Y = {y 1 , . . . , y s }. That is a supersymmetric polynomial is one which is symmetric in the X variables, symmetric in the Y variables, and satisfies an analogous cancellation property to (2) above. We are working with the specialisation X = {x 1 , x 3 , . . . , x ro } and Y = {x 2 , x 4 , . . . , x re }.
(1) Let $n, r \in \mathbb{Z}_{\geq 0}$. The $n$-th power-sum supersymmetric polynomial $q_n$ in $r$ commuting variables is given by
$$q_n(x_1, \ldots, x_r) := \sum_{i \in O(r)} x_i^n + (-1)^{n+1} \sum_{j \in E(r)} x_j^n.$$
It is immediate that any permutation among the odd indexed variables leaves $q_n$ invariant, and similarly for the even indexed variables. The sign $(-1)^{n+1}$ also ensures that the cancellation property of Definition 3.1.1 is upheld, hence $q_n \in SS_r[x]$. It was shown in [Stem85] that the algebra of supersymmetric polynomials is generated by the power-sum supersymmetric polynomials, that is $SS_r[x] = \langle q_n \mid n \geq 0 \rangle$.
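Both defining properties of $q_n$ are easy to confirm numerically. The short Python sketch below is our own illustration (the function name `q` is an assumption); it evaluates $q_n$ at sample points and checks parity symmetry and the cancellation property $x_1 = -x_2 = y$.

```python
def q(n, xs):
    """n-th power-sum supersymmetric polynomial, evaluated at a point
    xs = (x_1, ..., x_r); odd/even refers to the 1-indexed position."""
    odd = sum(x ** n for x in xs[0::2])    # x_1, x_3, ...
    even = sum(x ** n for x in xs[1::2])   # x_2, x_4, ...
    return odd + (-1) ** (n + 1) * even

# parity symmetry: permuting the odd-indexed (or even-indexed) entries changes nothing
assert q(3, (2, 5, 7, 11)) == q(3, (7, 5, 2, 11)) == q(3, (2, 11, 7, 5))

# cancellation: substituting x_1 = -x_2 = y gives a value independent of y
for n in range(1, 6):
    vals = {q(n, (y, -y, 5, 7)) for y in (0, 1, 2, 9)}
    assert len(vals) == 1
```

The cancellation check makes the role of the sign visible: the contributions $y^n$ and $(-1)^{n+1}(-y)^n = -y^n$ cancel exactly.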
(2) Let $r = 4$ and consider the polynomial
$$l_2 := x_1 x_3 + (x_1 + x_3)(x_2 + x_4) + x_2^2 + x_2 x_4 + x_4^2.$$
One can see that permuting the variables $x_2$ and $x_4$, or permuting $x_1$ and $x_3$, leaves $l_2$ unchanged, hence it is parity symmetric. Furthermore, setting $x_1 = -x_2 = y$ gives $x_4(x_3 + x_4)$, which is independent of $y$. Thus $l_2$ also satisfies the cancellation property, and so $l_2 \in SS_4[x]$.
We will be particularly interested in what are called the elementary supersymmetric polynomials, of which l 2 from above is an example. To define these elements we will work in the algebra P r [[t]] of formal power series in the commuting variable t with coefficients in P r .
Definition 3.1.4. Let $n, r \geq 0$ be non-negative integers. The elementary supersymmetric polynomials $l_n$ in $r$ commuting variables are defined to be the coefficients in the generating function
$$\sum_{n=0}^{\infty} l_n t^n := \frac{\prod_{i \in O(r)} (1 + x_i t)}{\prod_{j \in E(r)} (1 - x_j t)}.$$
The properties of supersymmetry are immediately seen to hold from this definition, showing that $l_n \in SS_r[x]$. Noting that $|E(r)| = \lfloor r/2 \rfloor$ and $|O(r)| = \lceil r/2 \rceil$, where $\lfloor - \rfloor$ and $\lceil - \rceil$ are the floor and ceiling functions respectively, we may alternatively write
$$\sum_{n=0}^{\infty} l_n t^n = \prod_{i=1}^{\lceil r/2 \rceil} (1 + x_{2i-1} t) \prod_{j=1}^{\lfloor r/2 \rfloor} (1 - x_{2j} t)^{-1}.$$

Example 3.1.5. We give a general expression for all $l_n \in SS_4[x]$. We have that
$$\sum_{n=0}^{\infty} l_n t^n = \frac{(1 + x_1 t)(1 + x_3 t)}{(1 - x_2 t)(1 - x_4 t)} = \big(1 + (x_1 + x_3)t + x_1 x_3 t^2\big) \sum_{m \geq 0} h_m(x_2, x_4) t^m,$$
where $h_m$ denotes the complete homogeneous symmetric polynomial of degree $m$. Therefore, for $n \geq 2$,
$$l_n = h_n(x_2, x_4) + (x_1 + x_3) h_{n-1}(x_2, x_4) + x_1 x_3 h_{n-2}(x_2, x_4).$$
The first two cases are $l_0 = 1$ and $l_1 = x_1 + x_2 + x_3 + x_4$. In particular, the polynomial $l_2$ is precisely the one given in Examples 3.1.3 (2).
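The generating function can also be expanded mechanically. The Python sketch below is our own illustration (the name `elem_supersym` is an assumption): it expands the product as a truncated power series in $t$ with exact rational coefficients, evaluated at a chosen point, and checks that $l_1$ is the sum of the variables and that the substitution $x_1 = -x_2 = y$ leaves every coefficient unchanged.

```python
from fractions import Fraction

def elem_supersym(xs, N):
    """Values l_0(xs), ..., l_N(xs) of the elementary supersymmetric polynomials,
    read off as coefficients of prod_{i odd}(1 + x_i t) / prod_{j even}(1 - x_j t)."""
    series = [Fraction(1)] + [Fraction(0)] * N

    def mul_series(a, b):
        # product of two power series truncated at degree N
        return [sum(a[i] * b[m - i] for i in range(m + 1)) for m in range(N + 1)]

    for idx, x in enumerate(xs, start=1):
        x = Fraction(x)
        if idx % 2 == 1:
            factor = [Fraction(1), x] + [Fraction(0)] * (N - 1)   # (1 + x t)
        else:
            factor = [x ** m for m in range(N + 1)]               # 1/(1 - x t)
        series = mul_series(series, factor)
    return series

xs = (2, 3, 5, 7)
ls = elem_supersym(xs, 6)
assert ls[0] == 1 and ls[1] == sum(xs)        # l_0 = 1 and l_1 = x_1 + ... + x_r
assert all(ls[m] != 0 for m in range(1, 7))   # l_n does not vanish at this point

# cancellation: x_1 = -x_2 = y leaves every l_n unchanged
assert elem_supersym((4, -4, 5, 7), 6) == elem_supersym((9, -9, 5, 7), 6)
```

The cancellation check is transparent here: the factors $(1 + yt)$ and $(1 - (-y)t)^{-1}$ cancel exactly in the generating function.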
The polynomials $l_n$ are the supersymmetric counterparts of the usual elementary symmetric polynomials. From the above example one observes that in $r$ commuting variables $l_1 = x_1 + x_2 + \cdots + x_r$, and so by Lemma 2.2.3 (2)(iii) the element $l_1(N_1, \ldots, N_r)$ is central in $A_r(\delta)$. Unlike the usual elementary symmetric polynomials, when $r \geq 2$ we have $l_n \neq 0$ for all $n > 0$. In [Stem85] the following was proven:

Theorem 3.1.6 (Theorem 2 and its Corollary of [Stem85]). The elementary supersymmetric polynomials in $r$ commuting variables generate $SS_r[x]$. That is, $SS_r[x] = \langle l_n \mid n \geq 0 \rangle$.

3.2. The elements $l_n(N_1, \ldots, N_r)$ are central in $A_r(\delta)$.
We will prove that SS r [N 1 , . . . , N r ] ⊆ Z(A r (δ)) directly by showing that each generator of A r (δ) commutes with l n (N 1 , . . . , N r ), for any n ≥ 0. This will be done by using the above generating function and the various relations in A r (δ) that we established previously. Hence we will be working within A r (δ) [[t]], the algebra of formal power series in the commuting variable t with coefficients in A r (δ). We begin with the generators e i . Lemma 3.2.1. In A r (δ), we have that e i l n (N 1 , . . . , N r ) = l n (N 1 , . . . , N r )e i for all 1 ≤ i ≤ r − 1 and n ≥ 0.
Proof. We prove this for $e_{2i}$; the case of $e_{2i-1}$ follows in the same manner. From Lemma 2.2.4 (1) we know that $e_{2i} N_j = N_j e_{2i}$ for all $j \neq 2i, 2i+1$, hence it suffices to show that
$$e_{2i} (1 + N_{2i+1}t)(1 - N_{2i}t)^{-1} = (1 + N_{2i+1}t)(1 - N_{2i}t)^{-1} e_{2i}.$$
By Lemma 2.2.7 we know that $e_{2i} N_{2i} = -e_{2i} N_{2i+1}$ and $N_{2i} e_{2i} = -N_{2i+1} e_{2i}$. Repeatedly applying these relations, the series on each side telescopes, and the above equality holds since both sides are equal to $e_{2i}$.
We now wish to show that $s_i l_n(N_1, \ldots, N_r) = l_n(N_1, \ldots, N_r) s_i$. However, it turns out that we first need to establish that $l_n(N_1, \ldots, N_r)$ commutes with $\sigma_{2i+1}$.
Proof. By Lemma 2.2.4 (3) we have that $\sigma_{2i+1} N_j = N_j \sigma_{2i+1}$ for all $j \neq 2i, 2i+1, 2i+2$. Hence to prove this proposition it suffices to show that $\sigma_{2i+1}$ commutes with the factors of the generating function involving $N_{2i}$, $N_{2i+1}$, and $N_{2i+2}$. To do this we start with the left hand side and use the relations of Lemma 2.2.6 to pull $\sigma_{2i+1}$ to the right. In doing so we pick up many additional terms, and showing that these terms all cancel out will complete the proof. [The displayed computations are omitted here. The second equality in each display follows from Lemma 2.2.6 (1) or (2) together with the relations $\sigma_{2i+1} e_{2i} = e_{2i}$, $e_{2i} \sigma_{2i+1} = e_{2i}$, and $\sigma_{2i+1}^2 = 1$, after multiplying on the left and right by the appropriate factors $(1 - N_{2i}t)^{-1}$ and $(1 - N_{2i+2}t)^{-1}$ and rearranging.] Collecting terms, the result will follow if we can show that $E + D + C = 0$. We do this by showing that the terms $C_i$, $D_i$, $E_i$ pair up into additive complements; for instance $C_1 = -D_1$ by Lemma 2.2.4 (1). Altogether we obtain $E + D + C = 0$, as required.
Since $A_r(\delta) = \langle e_1, e_2, \ldots, e_{r-1}, \sigma_2, \sigma_3, \ldots, \sigma_{r-1} \rangle$, we could attempt to show that $\sigma_{2i}$ commutes with any $l_n(N_1, \ldots, N_r)$ in the same manner as above. However, it turns out to be very difficult to show that the additional terms which appear cancel out. So instead we prove that $s_i$ and $l_n(N_1, \ldots, N_r)$ commute. We will show this in the exact same manner as above for $\sigma_{2i+1}$, and we will use the above proposition only once. This may seem excessive, but it appears necessary if one wants to prove it in this manner (see Remark 3.2.5).
Remark 3.2.3. In [JK17] it was shown that the Coxeter generators commute with supersymmetric polynomials in the JM-elements by showing that they commute with symmetric polynomials in $L_1, \ldots, L_r$, and that they commute with symmetric polynomials in $L_{r+1}, \ldots, L_{r+s}$. This was done by checking that $s_i$ commutes with both $L_i + L_{i+1}$ and $L_i L_{i+1}$ for all $i \neq r$. Since the supersymmetric power-sum polynomials are sums of symmetric polynomials in the two sets of variables, it then follows that $s_i$ commutes with them. So in fact the Coxeter generators commute with the larger algebra of polynomials symmetric in both sets of variables. For our situation, the analogous approach would be to show that symmetric polynomials in the odd indexed normalised JM-elements commute with the Coxeter generators, and similarly for the even indexed normalised JM-elements. However, this is not true: from Lemma 2.2.5 one can show that the commutators $[s_i, L_{2i-1} + L_{2i+1}]$ and $[s_i, L_{2i} + L_{2i+2}]$ are non-zero. In our setting, the cancellation property is genuinely needed for the Coxeter generators to commute.

Proof. From Lemma 2.2.4 (2) we know that $s_i N_j = N_j s_i$ for all $j \neq 2i-1, 2i, 2i+1, 2i+2$, hence to prove the result it suffices to show that $s_i$ commutes with the factors of the generating function involving $N_{2i-1}$, $N_{2i}$, $N_{2i+1}$, and $N_{2i+2}$. We start from the right hand side, pull $s_i$ through using the relations of Lemma 2.2.5, and then show that the additional terms which appear all cancel out. [The displayed computations, whose second equalities follow from Lemma 2.2.5 (1) and (2), are omitted here.] Collecting terms, the result will follow if we can show that $F + E + D + C = 0$. We do this by showing that the terms $C_i$, $D_i$, $E_i$, $F_i$ pair up into additive complements; for instance $C_1 = -F_1$ by Lemma 2.2.4 (1).

Altogether we have that $F + E + D + C = 0$, as required.
Remark 3.2.5. We only used the relation σ 2i+1 l n (N 1 , . . . , N r ) = l n (N 1 , . . . , N r )σ 2i+1 when showing that C 1 = −F 1 . The issue here was the fact that the term σ 2i+1 e 2i−1 e 2i = s i L 2i e 2i was present, and so moving the generating function from left to right would require us to pass through either σ 2i+1 or s i . Since the proposition itself is about s i , it proved easier to check first that σ 2i+1 and l n (N 1 , . . . , N r ) commute.
Theorem 3.2.6. All supersymmetric polynomials in the normalised Jucys-Murphy elements are central in the partition algebra. That is SS r [N 1 , . . . , N r ] ⊆ Z(A r (δ)).

Semisimple Case
For this section we will show that in the semisimple case the center of the partition algebra $A_{2k}(\delta)$ is precisely $SS_{2k}[N_1, \ldots, N_{2k}]$. To do this we will need to utilise some of the representation theory of $A_{2k}(\delta)$. We begin by setting up some notation and definitions in order to describe the branching graph $\hat{A}$. This graph is to the partition algebra what Young's lattice is to the symmetric group algebra $\mathbb{C}S_k$. In particular, this graph gives us a means to construct a unique basis for any simple $A_{2k}(\delta)$-module indexed by certain paths in $\hat{A}$, called the Gelfand-Zetlin basis. The action of the JM-elements on such a basis was first described in [HR05], and is given by the contents of paths. Knowing this tells us how the subalgebra $SS_{2k}[N_1, \ldots, N_{2k}]$ acts on any simple module. Using this action, specifically the action of the elementary supersymmetric polynomials, and a result from linear algebra, we will be able to show the existence of a basis for $SS_{2k}[N_1, \ldots, N_{2k}]$, from which a dimension check concludes that it agrees with the center.

Representation theory.
It was shown in [MS93] (see also [Martin96]) that the partition algebra $A_{2k}(\delta)$ is semisimple if and only if $\delta \notin \{0, 1, \ldots, 2k-2\}$. Until stated otherwise, we will assume that $A_{2k}(\delta)$ is semisimple. In this situation the chain of algebras $A_0(\delta) \subset A_1(\delta) \subset \cdots \subset A_{2k}(\delta)$ is multiplicity free (see [Martin00, Proposition 7]). That is, let $M$ be a simple $A_r(\delta)$-module for some $1 \leq r \leq 2k$; then the multiplicity of any simple $A_{r-1}(\delta)$-module as a summand of $\mathrm{Res}^r_{r-1}(M)$ is at most one, where $\mathrm{Res}^r_{r-1}$ is the restriction functor from $A_r(\delta)$-mod to $A_{r-1}(\delta)$-mod. It is this property which allows one to construct a unique (up to scalars) basis for each simple module. We now set up the definitions required to describe the branching graph $\hat{A}$.
Definition 4.1.1. A partition of a positive integer $k$ is an $n$-tuple $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n) \in \mathbb{N}^n$, for some $n \geq 1$, such that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ and $\sum_{i=1}^n \lambda_i = k$. We write $\lambda \vdash k$ and $|\lambda| = k$. Given such a partition we define its associated Young diagram as the set
$$[\lambda] := \{(i, j) \mid 1 \leq i \leq n,\ 1 \leq j \leq \lambda_i\}.$$
We view this set as a collection of left-justified boxes, with $n$ rows and $\lambda_i$ boxes in the $i$-th row. We will often not distinguish between a partition $\lambda$ and its associated Young diagram $[\lambda]$. In particular, we will call an element $a = (i, j)$ of $[\lambda]$ a box and write $a \in \lambda$. For a box $a = (i, j) \in \lambda$, we say that $i$ is the row index of $a$ and $j$ is the column index of $a$.
Given a partition λ = (λ 1 , . . . , λ n ) ⊢ k, we say that the height of λ, written h(λ), is the number of rows of λ minus 1. Similarly we say that the width of λ, written w(λ), is the number of columns of λ minus 1. Given a box a = (i, j) ∈ λ, we say the content of a is c(a) = j − i. We say that a box a ∈ λ is removable if [λ]\{a} is a Young diagram for a partition of |λ| − 1. Similarly, we say a box a ∈ N × N is an addable box of λ if [λ] ∪ {a} is a Young diagram of a partition of |λ| + 1.
By convention the empty set $\emptyset$ is a partition of $0$, and we set $h(\emptyset) = w(\emptyset) = 0$. In general, two boxes $a, b \in \lambda$ belong to the same diagonal (top-left to bottom-right) of the Young diagram if and only if their contents agree, that is $c(a) = c(b)$. Thus we can index the diagonals of $[\lambda]$ by content. Note that the multi-set of contents of $\lambda$ determines the partition $\lambda$. We will denote the multi-set of contents of $\lambda$ by $MC(\lambda)$, and we set $MC(\emptyset) = \emptyset$. We will use superscripts to denote the multiplicity of an element in a multi-set. For example, let $\lambda = (7, 5, 4, 3)$ as in Example 4.1.2 above; then $MC(\lambda) = \{-3, -2^2, -1^3, 0^3, 1^3, 2^2, 3^2, 4, 5, 6\}$.
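The multi-set $MC(\lambda)$ is easy to compute. The short Python sketch below is our own illustration (the name `contents` is an assumption); it recovers the multi-set above for $\lambda = (7, 5, 4, 3)$ and checks that the contents fill exactly the interval $\{-h(\lambda), \ldots, w(\lambda)\}$.

```python
from collections import Counter

def contents(lam):
    """Multi-set of contents MC(lam): the box in row i, column j (1-indexed)
    has content j - i."""
    return Counter(j - i for i, row in enumerate(lam, start=1)
                         for j in range(1, row + 1))

lam = (7, 5, 4, 3)
mc = contents(lam)
assert mc == Counter({-3: 1, -2: 2, -1: 3, 0: 3, 1: 3, 2: 2, 3: 2, 4: 1, 5: 1, 6: 1})

# the distinct contents are exactly {-h(lam), ..., w(lam)}
h, w = len(lam) - 1, lam[0] - 1
assert sorted(mc) == list(range(-h, w + 1))
```

The multiplicities record the lengths of the diagonals, which is why $MC(\lambda)$ determines $\lambda$.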
Given a box a ∈ λ, it is clear that −h(λ) ≤ c(a) ≤ w(λ). Moreover, for any integer c with −h(λ) ≤ c ≤ w(λ), there exists a box a ∈ λ such that c(a) = c. Hence the set {−h(λ), . . . , w(λ)} is precisely the indexing set for the diagonals of [λ] via content. It will be helpful to express the information of the multi-set MC(λ) in the following manner. We may now define the branching graph, where we adopt the notation used within [Eny13]. Let Λ_k denote the set of all partitions λ ⊢ k. Then we set Â_2k = Â_{2k+1} = {(λ, l) | λ ∈ Λ_{k−l} for some l = 0, 1, . . . , k}.
That is, given (λ, l) ∈ Â_2k, λ is a partition of k − l for some 0 ≤ l ≤ k. Knowing k and λ one can recover l as l = k − |λ|. With this in mind, when it is clear we are working in Â_2k = Â_{2k+1}, we will occasionally suppress the l in (λ, l) and just write λ. The first 7 levels of Â are given in Figure 1 below. Figure 1. The branching graph Â truncated at level 6. The path T^{((2,1),0)} has been expressed in bold. Hence going from an even level to an odd level, one can either "do nothing" or remove a box, while going from an odd level to an even level, one can do nothing or add a box. A path in Â is defined in the natural way one would for a directed graph. We will be interested in paths which start at level 0 and reach a given vertex (λ, l).
Truncating the graph Â at level 2k yields the branching graph associated with the multiplicity-free chain described above. We summarise what this means in the following theorem (see [HR05, Theorem 2.24] and [Martin00, Proposition 7]): Theorem 4.1.6. Assume that A_2k(δ) is semisimple. Then for all r ≤ 2k: (1) The vertices Â_r on the r-th level of Â give an indexing set for the simple A_r(δ)-modules. We will denote by ∆^{(λ,l)}_r the simple A_r(δ)-module indexed by (λ, l) ∈ Â_r. (2) For each (λ, l) ∈ Â_r, the restriction Res^r_{r−1}(∆^{(λ,l)}_r) decomposes as the direct sum of the simple modules ∆^{(µ,m)}_{r−1}, where the sum runs over all edges (µ, m) → (λ, l) in Â. (3) For each (λ, l) ∈ Â_r we have dim(∆^{(λ,l)}_r) = |Path(λ, l)|.
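To make the combinatorics of Â concrete, one can generate the levels of the branching graph and count paths by dynamic programming. The following Python sketch (ours, not part of the paper's formal development) verifies Theorem 4.1.6(3) numerically for k = 3: the vertex ((2, 1), 0) at level 6 admits exactly 2 paths, matching the bold path of Figure 1 being one of two, and the squares of the path counts sum to 203, the Bell number B(6), which is the dimension of A_6(δ).

```python
from collections import Counter

def remove_options(lam):
    """Partitions reachable from lam by doing nothing or removing one removable box."""
    out = [lam]
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            new = list(lam)
            new[i] -= 1
            out.append(tuple(x for x in new if x > 0))
    return out

def add_options(lam):
    """Partitions reachable from lam by doing nothing or adding one addable box."""
    out = [lam]
    for i in range(len(lam) + 1):
        new = list(lam) + [0]
        new[i] += 1
        if i == 0 or new[i] <= new[i - 1]:
            out.append(tuple(x for x in new if x > 0))
    return out

def path_counts(level):
    """Number of paths in the branching graph from the empty partition at
    level 0 to each vertex at the given level (even -> odd removes,
    odd -> even adds, as described above)."""
    counts = Counter({(): 1})
    for r in range(1, level + 1):
        step = remove_options if r % 2 == 1 else add_options
        new_counts = Counter()
        for lam, c in counts.items():
            for mu in step(lam):
                new_counts[mu] += c
        counts = new_counts
    return counts
```

In the semisimple case these path counts are the dimensions of the simple modules at that level.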
It will be helpful to pick out a particular path in each Path(λ, l) to work with in later sections. The definition below is precisely the path described in [Eny13, Lemma 3.9].
Intuitively the path T^{(λ,l)} is the one which never removes any boxes, only adds a box when it is forced to, and prioritises adding boxes in the highest row, i.e. the row with the lowest index. In Figure 1 the path T^{(λ,l)} for (λ, l) = ((2, 1), 0) ∈ Â_6 has been given in bold. In general the path T^{(λ,l)} is maximal with respect to a particular ordering on the paths Path(λ, l) (see [Eny13, Definition 3.8]), however we will not employ this fact here. For us, this path will be useful since it is easy to calculate its contents, defined as follows.
Definition 4.1.8. Let (λ, l) ∈ Â_r and T = ((λ^{(i)}, l_i))_{i=0}^{r} ∈ Path(λ, l). Then the contents of T are the r-tuple (cont(T, i))_{i=1}^{r}, defined by one formula when i is even and another when i is odd. The next lemma follows from the definition of the standard path.
We now define the Gelfand-Zetlin basis for any simple A_r(δ)-module ∆^{(λ,l)}_r with (λ, l) ∈ Â_r. From Theorem 4.1.6 (2) we have the canonical decomposition of Res^r_{r−1}(∆^{(λ,l)}_r), where the sum runs over all edges (µ, m) → (λ, l) in Â. If we iterate this process on the summands, and continue as such down to A_0(δ), then we obtain a unique decomposition of the simple A_r(δ)-module ∆^{(λ,l)}_r into simple A_0(δ)-modules (i.e. 1-dimensional C-vector spaces) indexed by paths in Â from ∅ ∈ Â_0 to (λ, l) ∈ Â_r. That is, $\mathrm{Res}^r_0(\Delta^{(\lambda,l)}_r) = \bigoplus_{T \in \mathrm{Path}(\lambda,l)} V_T$, where Res^r_0 = Res^1_0 ∘ Res^2_1 ∘ · · · ∘ Res^r_{r−1}. Picking a vector v_T ∈ V_T for each T ∈ Path(λ, l) gives a unique (up to scalars) basis {v_T | T ∈ Path(λ, l)} for the simple A_r(δ)-module ∆^{(λ,l)}_r. We refer to such a basis as the Gelfand-Zetlin basis, or GZ-basis for short. Lastly, Halverson and Ram showed that the GZ-basis of a given simple module ∆

The Center.
We will now prove that whenever A_2k(δ) is semisimple we have SS_2k[N_1, . . . , N_2k] = Z(A_2k(δ)). This will be done by employing a result from linear algebra (Lemma 4.2.2 below), from which one can implicitly produce a collection of linearly independent elements of SS_2k[N_1, . . . , N_2k]. The number of such elements will be |Â_2k|, the number of isomorphism classes of simple A_2k(δ)-modules, which in the semisimple case agrees with the dimension of the center, hence giving the desired equality. To begin we establish some small results.
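The linear-algebra principle underlying Lemma 4.2.2 is that a family of polynomials separating a family of points yields an invertible evaluation matrix. As a generic illustration (a sketch of ours, not the paper's construction, and ignoring the extra requirement of supersymmetry, which is exactly what Proposition 4.2.4 below supplies), pairwise distinct points in C^n always admit interpolating polynomials whose evaluation matrix is the identity:

```python
def separating_polynomials(points):
    """Given pairwise distinct points in C^n (tuples of numbers), return
    functions p_0, ..., p_{m-1}, each a product of affine factors (hence a
    polynomial), with p_i(points[i]) = 1 and p_i(points[j]) = 0 for j != i."""
    def make(i):
        def p(x):
            val = 1.0
            for j, xj in enumerate(points):
                if j == i:
                    continue
                # choose a coordinate where points[i] and points[j] differ
                c = next(c for c in range(len(xj)) if xj[c] != points[i][c])
                val *= (x[c] - xj[c]) / (points[i][c] - xj[c])
            return val
        return p
    return [make(i) for i in range(len(points))]
```

In the paper the role of the points is played by the content vectors of the standard paths, so separation of content vectors is the key remaining step.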
The above lemma tells us that to calculate the action of p(N_1, . . . , N_2k) on any simple module ∆^{(λ,l)}_{2k}, we can evaluate p at the contents of any path T ∈ Path(λ, l). We choose T^{(λ,l)} ∈ Path(λ, l) as given above, where (λ, l) runs over all of Â_2k; that is, m = |Â_2k| and n = 2k in the notation of Lemma 4.2.2. Thus we need to show that for any (λ, l), (ρ, r) ∈ Â_2k with (λ, l) ≠ (ρ, r), there exists a supersymmetric polynomial p ∈ SS_2k[x] such that p(λ) ≠ p(ρ). We will prove this by showing the contrapositive: if p(λ) = p(ρ) for all p ∈ SS_2k[x], then (λ, l) = (ρ, r). Recall that the elementary supersymmetric polynomials l_n (given in Definition 3.1.4) generate SS_2k[x], so this is equivalent to showing that if l_n(λ) = l_n(ρ) for all n ≥ 1, then (λ, l) = (ρ, r). Lemma 4.1.9 expresses the generating function of the values l_n(λ), for any (λ, l) ∈ Â_2k, as a rational function in t, so it suffices to show that equality of these generating functions forces (λ, l) = (ρ, r). In order to show this, it will be helpful to understand when the rational functions in t involved in (Eq3) are reduced, that is, when the numerator and denominator share no common irreducible factors. Also, when they are not reduced, we would like to know which factors cancel. This is all described in the following lemma. Recall the diagonal datum (D(λ), m_λ) of a partition λ from Definition 4.1.3; then for any integer δ we let D(λ)_{≤δ} = D(λ) ∩ Z_{≤δ} and D(λ)_{>δ} = D(λ) ∩ Z_{>δ}.
Proof. We will prove this by showing the contrapositive, that is, if p(λ) = p(ρ) for all p ∈ SS_2k[x], then (λ, l) = (ρ, r). As mentioned above, having p(λ) = p(ρ) for all p ∈ SS_2k[x] is equivalent to the equality of the generating functions of (Eq3) associated with λ and ρ. Using Lemma 4.2.3 we will break this equality into four cases, and for each we will show that either (λ, l) = (ρ, r), or the case is impossible. In the first case, both sides of the equality are reduced. Since they are reduced, we may equate the numerators and denominators. Equating the numerators gives (Eq4). Assume one of the factors on the left-hand side is trivial, that is, i = δ/2 for some 0 ≤ i ≤ k − l − 1. This would imply that 0 ≤ δ ≤ 2(k − l − 1), which contradicts the assumption that δ ∉ {0, 1, . . . , 2k − 2}.
As such no factor on the left-hand side of (Eq4) is trivial, and similarly no factor on the right is trivial. Therefore (Eq4) implies that l = r and hence |λ| = |ρ|. Now equating the denominators gives an equality from which one recovers the multi-set of contents of λ and ρ; since the multi-set of contents determines the partition, we conclude that λ = ρ, and hence (λ, l) = (ρ, r).
Remark 4.2.5. It is worth mentioning that the above proposition follows from the semisimple case of Corollary 5.0.12, proven in the next section, together with a description of the blocks of the partition algebra given by Martin (see Proposition 5.0.6). We have included the proof above for completeness, and to show that the result on the semisimple center (Theorem 4.2.6 below) can be proven without any knowledge of the block theory of A_2k(δ), but just from knowing that l_n(N_1, . . . , N_2k) is central.

Suppose now that a linear combination $\sum_{(\lambda,l)} c_\lambda\, p_\lambda(N_1, \ldots, N_{2k})$ is zero. Evaluating its action on the simple module ∆^{(µ,m)}_{2k} gives $\sum_{(\lambda,l)} c_\lambda\, p_\lambda(\mu) = 0$ for any (µ, m) ∈ Â_2k. However, since the column vectors of the matrix (p_λ(µ))_{(λ,l),(µ,m)∈Â_2k} are linearly independent, we must have c_λ = 0 for all (λ, l) ∈ Â_2k. Therefore the set {p_λ(N_1, . . . , N_2k) : (λ, l) ∈ Â_2k} is linearly independent in SS_2k[N_1, . . . , N_2k]. Since A_2k(δ) is semisimple, we know that the dimension of the center Z(A_2k(δ)) equals |Â_2k|, which equals the size of the above linearly independent set. Hence this set is a basis, which shows that SS_2k[N_1, . . . , N_2k] = Z(A_2k(δ)).

Non-semisimple Case
In this section we will recall some of the block theory of A_2k(δ) for arbitrary δ ∈ C, which was developed by P. Martin in [Martin96], and later by W. Doran and D. Wales in [DW00]. We will conclude by giving an equivalent condition for when two partitions belong to the same block, in terms of the generating function of the values l_n(λ), which played a pivotal role in the previous section.
Let A be any finite dimensional C-algebra, and let Λ be an indexing set for the isomorphism classes of simple A-modules. The algebra A has a unique decomposition as a direct sum of indecomposable 2-sided ideals A = e_1A ⊕ e_2A ⊕ · · · ⊕ e_nA, where 1 = e_1 + e_2 + · · · + e_n is a decomposition of unity as a sum of primitive central idempotents e_i ∈ A. The direct summands in the above decomposition are called the blocks of A. We say that an A-module M belongs to the block e_iA if e_iM = M and e_jM = 0 for all j ≠ i. Any simple module of A belongs to a particular block. Also one can deduce that M belongs to the block e_iA if and only if all the composition factors of M belong to e_iA. We can equip the indexing set Λ with the equivalence relation λ ∼ µ if and only if the simple modules indexed by λ and µ belong to the same block. Let B_A(λ) be the equivalence class of λ in Λ with respect to this equivalence relation. We will refer to B_A(λ) as a block of Λ. Whenever A is semisimple, we have B_A(λ) = {λ} for all λ ∈ Λ.
For any λ ∈ Λ let A^λ denote the simple A-module corresponding to λ. Let z belong to the center Z(A) of A. Then by Schur's lemma the element z acts by a scalar on A^λ. Let χ_λ(z) ∈ C denote this scalar. Then we obtain a C-algebra homomorphism χ_λ : Z(A) → C, which we call the central character induced by λ. It is well known that λ and µ belong to the same block of Λ if and only if the central characters χ_λ and χ_µ equal one another. In this sense the center Z(A) can distinguish between the blocks of A.
We return now to the partition algebra A_2k(δ). Whenever δ ≠ 0, the indexing set for the simple A_2k(δ)-modules is Â_2k, that is, the set of partitions λ ⊢ k − l where 0 ≤ l ≤ k. When δ = 0, the indexing set is Â_2k \ {∅} (see [DW00, Corollary 2.3]). We set Λ_k(δ) := Â_2k when δ ≠ 0, and Λ_k(δ) := Â_2k \ {∅} when δ = 0. For any λ ∈ Λ_k(δ) set B_k(λ) := B_{A_2k(δ)}(λ). The blocks of Λ_k(δ) were first described by P. Martin in [Martin96, Proposition 9] for δ ≠ 0, where he gave an elegant combinatorial condition for when two partitions belong to the same block. These results were later extended to the case δ = 0 by W. Doran and D. Wales in [DW00]. We recall here the combinatorial description for the blocks.
In Example 5.0.1, the skew diagram λ/µ_1 is a skew hook, while λ/µ_2 is not a skew hook since there is no box with content −1 present. If λ/µ is a skew hook, then as a diagram it is one connected piece with one box in each of its diagonals. If every box in a skew diagram λ/µ has the same row index, i.e. all of its boxes lie in the same row, then we call such a skew diagram a horizontal strip. Notably, any horizontal strip is a skew hook.
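These combinatorial conditions are straightforward to test mechanically. The following Python sketch (ours; Example 5.0.1 itself is not reproduced in this excerpt, so the test shapes below are our own) checks the one-box-per-diagonal and connectivity conditions for a skew hook, and the single-row condition for a horizontal strip.

```python
def boxes(partition):
    """Boxes of the Young diagram, 1-indexed as (row, column)."""
    return {(i, j) for i, row in enumerate(partition, start=1)
                   for j in range(1, row + 1)}

def is_skew_hook(lam, mu):
    """lam/mu is a skew hook: edge-connected, with exactly one box on each
    of its diagonals, the occupied diagonals (contents) being consecutive."""
    skew = boxes(lam) - boxes(mu)
    if not skew:
        return False
    cs = sorted(j - i for (i, j) in skew)
    if cs != list(range(cs[0], cs[0] + len(cs))):  # one box per content, consecutive
        return False
    todo, seen = [next(iter(skew))], set()         # flood fill for connectivity
    while todo:
        b = todo.pop()
        if b in seen:
            continue
        seen.add(b)
        i, j = b
        todo += [n for n in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
                 if n in skew]
    return seen == skew

def is_horizontal_strip(lam, mu):
    """All boxes of lam/mu lie in a single row."""
    return len({i for (i, j) in boxes(lam) - boxes(mu)}) == 1
```

The last assertion below reflects the observation that any horizontal strip is a skew hook.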
Example 5.0.4. Let δ = 1. Then the chain ∅ ⊂ (2) ⊂ (2, 1) of elements in Λ_3(1) is a 1-chain. We see that ∅ and (2) differ in the first row, and the content of the last box in this row is 1 = δ − |∅|. Also (2) and (2, 1) differ in the second row, and the content of the last box is −1 = δ − |(2)|.
Note that if δ ∈ C is not an integer, then no δ-pairs exist. Let τ be a partition with n ≥ 1 rows. Let 1 ≤ i ≠ j ≤ n + 1, and let R_i and R_j denote two horizontal strips which could be added to τ in the i-th and j-th row respectively to give new partitions. Set c(R_i) to be the set of contents of the horizontal strip R_i, and similarly for R_j. Then one can observe that c(R_i) ∩ c(R_j) = ∅. This tells us that if there exists a partition λ such that τ ⊂ λ and (τ, λ) is a δ-pair, then λ is the unique partition to do so. Similarly, if µ exists such that µ ⊂ τ and (µ, τ) is a δ-pair, then µ is unique.
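The uniqueness observation above also gives a procedure for locating the δ-pair partner: if λ ⊃ τ differ only in row i, then the new row length m must satisfy m − i = δ − |τ|, leaving at most one feasible row. A Python sketch of ours (assuming δ ∈ Z, since no δ-pairs exist otherwise):

```python
def delta_pair_above(tau, delta):
    """The unique partition lam with tau ⊂ lam such that (tau, lam) is a
    delta-pair, or None; tau is a weakly decreasing tuple, delta an integer."""
    size = sum(tau)
    candidates = []
    for i in range(1, len(tau) + 2):               # candidate row, 1-indexed
        cur = tau[i - 1] if i <= len(tau) else 0
        prev = tau[i - 2] if i >= 2 else float("inf")
        m = delta - size + i                       # forces c(last box) = delta - |tau|
        if cur < m <= prev:                        # strip non-empty, shape stays a partition
            lam = list(tau) + [0] * (i - len(tau))
            lam[i - 1] = m
            candidates.append(tuple(x for x in lam if x > 0))
    assert len(candidates) <= 1                    # uniqueness, as observed above
    return candidates[0] if candidates else None
```

For δ = 1 this recovers the chain of Example 5.0.4: ∅ ↦ (2) ↦ (2, 1).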
Lemma 5.0.5. Let τ be a partition and δ ∈ Z. Suppose there exists a partition λ such that τ ⊂ λ, (τ, λ) is a δ-pair, and λ differs from τ in the row indexed by i, for some i ≠ 1. Then there exists a partition µ ⊂ τ such that (µ, τ) is a δ-pair, and τ differs from µ in the row indexed by i − 1. Furthermore λ/µ is a skew hook.
Proof. Let λ/τ = R_i be the horizontal strip in the i-th row of λ, and let the right-most box of R_i be (i, j). Since (τ, λ) is a δ-pair, we have j − i = δ − |τ|. Consider the box just above (i, j) in λ, that is, the box (i − 1, j). Note that (i − 1, j) also belongs to τ. Let (i − 1, k) ∈ τ be the last (right-most) box in the (i − 1)-th row of τ. Then consider the horizontal strip of boxes R_{i−1} = {(i − 1, j), (i − 1, j + 1), . . . , (i − 1, k)} of size |R_{i−1}| = k − j + 1. Let µ be the partition such that µ ∪ R_{i−1} = τ; in particular µ ⊂ τ, τ/µ = R_{i−1} is a horizontal strip, and R_{i−1} has been chosen such that λ/µ is a skew hook. What remains is to show that the last box of R_{i−1}, that is, the box (i − 1, k), has content equal to δ − |µ|. Well, δ − |µ| = δ − |τ| + |R_{i−1}| = (j − i) + (k − j + 1) = k − (i − 1) = c((i − 1, k)).
The above lemma, along with the fact that δ-pairings are unique, implies that if we have a chain of partitions τ^{(x)} ⊂ τ^{(x+1)} ⊂ · · · ⊂ τ^{(y)} such that (τ^{(i)}, τ^{(i+1)}) is a δ-pair for each i, then this chain must be a δ-chain, i.e. the horizontal strips by which consecutive pairs differ occur in consecutive rows. Moreover, given such a δ-chain, the skew diagram τ^{(y)}/τ^{(x)} is in fact a skew hook. We say that a δ-chain τ^{(x)} ⊂ τ^{(x+1)} ⊂ · · · ⊂ τ^{(y)} is maximal if no partition µ can be added to form a new δ-chain, that is, the chain is maximal in size. The above lemma tells us that if τ^{(x)} ⊂ τ^{(x+1)} ⊂ · · · ⊂ τ^{(y)} is maximal, then x = 0.
Due to the uniqueness of δ-pairings, the maximal δ-chains partition the indexing set Λ k (δ) (also see [Martin96, Proposition 8]). Let ∼ C k denote the corresponding equivalence relation, and let C k (λ) denote the equivalence class of λ with respect to this relation. For the following result see [Martin96,Proposition 9] and [DW00, Theorem 1.1].
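Since maximal δ-chains partition Λ_k(δ), the equivalence classes C_k(λ) can be computed directly from the δ-pair rule. The sketch below (ours; it repeats the pair-finding helper so as to be self-contained, assumes δ ∈ Z, and treats a chain as maximal within Λ_k(δ)) recovers the class {∅, (2), (2, 1)} of Example 5.0.4 inside Λ_3(1).

```python
def delta_pair_above(tau, delta):
    # the unique lam ⊃ tau with (tau, lam) a delta-pair, or None (delta an integer)
    size = sum(tau)
    for i in range(1, len(tau) + 2):
        cur = tau[i - 1] if i <= len(tau) else 0
        prev = tau[i - 2] if i >= 2 else float("inf")
        m = delta - size + i
        if cur < m <= prev:
            lam = list(tau) + [0] * (i - len(tau))
            lam[i - 1] = m
            return tuple(x for x in lam if x > 0)
    return None

def partitions_of(n):
    """All partitions of n as weakly decreasing tuples."""
    result = []
    def rec(remaining, maximum, prefix):
        if remaining == 0:
            result.append(tuple(prefix))
            return
        for part in range(min(remaining, maximum), 0, -1):
            rec(remaining - part, part, prefix + [part])
    rec(n, n, [])
    return result

def chain_classes(k, delta):
    """Split Lambda_k(delta) into maximal delta-chains (the classes C_k)."""
    universe = [p for n in range(k + 1) for p in partitions_of(n)]
    if delta == 0:
        universe.remove(())                 # delta = 0 excludes the empty partition
    classes = {p: {p} for p in universe}
    for p in universe:                      # universe is ordered by increasing size
        q = delta_pair_above(p, delta)
        if q in classes:                    # link p to its partner inside Lambda_k(delta)
            merged = classes[p] | classes[q]
            for r in merged:
                classes[r] = merged
    return classes
```

Because δ-pairs strictly increase the size of the partition, processing the partitions in order of increasing size links each chain bottom-up.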
We now wish to give an alternative criterion describing the block structure of Λ_k(δ). The partition algebras A_2k(δ) were shown in [Xi99] to be cellular algebras in the sense of Graham and Lehrer [GL96]. For each (λ, l) ∈ Â_2k we have a cell module ∆^{(λ,l)}_{2k}, and for each (λ, l) ∈ Λ_k(δ) the cell module ∆^{(λ,l)}_{2k} has a simple head D^{(λ,l)}_{2k}; the set {D^{(λ,l)}_{2k} | (λ, l) ∈ Λ_k(δ)} gives a complete set of pairwise non-isomorphic simple A_2k(δ)-modules. In the semisimple case the cell modules are precisely the simple modules discussed in the previous section. Also we know by Theorem 4.2.6 that in the semisimple case SS_2k[N_1, . . . , N_2k] = Z(A_2k(δ)), and by Theorem 4.1.10 and Lemma 4.2.1 the characters χ_λ act on polynomials p ∈ SS_2k[N_1, . . . , N_2k] by evaluating them at the contents (c_λ(1), . . . , c_λ(2k)), that is, χ_λ(p) = p(c_λ(1), . . . , c_λ(2k)) = p(λ), using the notation established in the previous section. From the discussion at the start of this section, in general we know that λ, µ ∈ Λ_k(δ) are in the same block if and only if χ_λ = χ_µ. In the semisimple case B_k(λ) = {λ}, and we proved in Proposition 4.2.4 that the evaluations p ↦ p(λ) separate the elements of Â_2k. When A_2k(δ) is non-semisimple, we know that SS_2k[N_1, . . . , N_2k] ⊆ Z(A_2k(δ)). Let χ_λ|_SS denote the restriction of χ_λ to SS_2k[N_1, . . . , N_2k]. It turns out that λ and µ belong to the same block if and only if χ_λ|_SS = χ_µ|_SS. To prove this we first need to show that χ_λ|_SS acts by evaluation at contents, just like in the semisimple case.
Let R = C[x] with indeterminate x. We can define the partition algebras over R by letting x play the role of δ. Denote these R-algebras by A^R_2k(x). The R-algebra A^R_2k(x) is cellular with cell modules ∆^{(λ,l)}_{2k,R} for each (λ, l) ∈ Â_2k. In [Eny13, Section 3] it was shown that each cell module ∆^{(λ,l)}_{2k,R} has a Murphy-type basis {m_T | T ∈ Path(λ, l)} on which the normalised Jucys-Murphy elements act upper-triangularly (see Proposition 3.15 of [Eny13]). As an operator on ∆^{(λ,l)}_{2k,R}, the eigenvalues of N_i are the contents cont_R(T, i) for T ∈ Path(λ, l). Here cont_R(T, i) is the same as in Definition 4.1.8 except that δ is replaced by x. Now let δ ∈ C and let I(δ) be the ideal of R generated by the polynomial x − δ. Set C(δ) := R/I(δ) ≅ C and let ∆^{(λ,l)}_{2k,C(δ)} = C(δ) ⊗_R ∆^{(λ,l)}_{2k,R}.
As C-algebras A^{C(δ)}_{2k}(x) ≅ A_2k(δ), and the cell modules agree, that is, ∆^{(λ,l)}_{2k,C(δ)} ≅ ∆^{(λ,l)}_{2k}, with the eigenvalues cont_R(T, i) specialising to cont(T, i) for all T ∈ Path(λ, l). As such χ_λ|_SS acts by evaluation at contents even in the non-semisimple case. To show that χ_λ|_SS = χ_µ|_SS if and only if λ and µ belong to the same block, we first define a generating function to express the information of the character χ_λ|_SS.
Recall from the previous section that l_n(λ) denotes the evaluation of the n-th elementary supersymmetric polynomial at the content vector (c_λ(1), . . . , c_λ(2k)), where c_λ(i) := cont(T^{(λ,l)}, i). From the above we know that χ_λ(l_n) = l_n(λ). Then λ(t) in the above definition is the generating function whose t^n coefficient is χ_λ(l_n) (see Definition 3.1.4 and the discussion preceding (Eq3)). Since the l_n generate all of SS_2k[x], the generating function λ(t) carries the same data as χ_λ|_SS; in particular, λ(t) = µ(t) if and only if χ_λ|_SS = χ_µ|_SS. Hence ∅(t) = (2)(t) = (2, 1)(t).
The above example tells us that, for any (λ, l) ∈ {(∅, 3), ((2), 1), ((2, 1), 0)}, we have l_n(λ) = 0 for all n ≥ 1, and as such all non-constant elements of SS_6[N_1, . . . , N_6] act on D^{(λ,l)}_6 by 0. Example 5.0.4 showed that the partitions in the above example belong to the same maximal 1-chain, hence belong to the same block. We seek to show that two partitions λ, µ ∈ Λ_k(δ) belong to the same block of A_2k(δ) if and only if λ(t) = µ(t). This will tell us that SS_2k[N_1, . . . , N_2k] can distinguish between the blocks of A_2k(δ). We will prove this by showing that λ(t) = µ(t) is equivalent to λ and µ belonging to the same maximal δ-chain.
Since R = {b 1 , . . . , b n } consists of consecutive boxes in the same row, and c(b n ) = δ − k + m, we see that where the last equality follows since m − n = l. This gives (Eq9)
The other direction will follow from the next two lemmas.
Corollary 5.0.12. The partitions λ and µ are within the same block of Λ k (δ) if and only if λ(t) = µ(t).