Jucys-Murphy elements and Grothendieck groups for generalized rook monoids

We consider a tower of generalized rook monoid algebras over the field $\mathbb{C}$ of complex numbers and observe that the Bratteli diagram associated to this tower is a simple graph. We construct simple modules and describe Jucys-Murphy elements for generalized rook monoid algebras. Over an algebraically closed field $\Bbbk$ of positive characteristic $p$, utilizing Jucys-Murphy elements of rook monoid algebras, for $0\leq i\leq p-1$ we define the corresponding $i$-restriction and $i$-induction functors along with two extra functors. On the direct sum $\mathcal{G}_{\mathbb{C}}$ of the Grothendieck groups of module categories over rook monoid algebras over $\Bbbk$, these functors induce an action of the tensor product of the universal enveloping algebra $U(\hat{\mathfrak{sl}}_p(\mathbb{C}))$ and the monoid algebra $\mathbb{C}[\mathcal{B}]$ of the bicyclic monoid $\mathcal{B}$. Furthermore, we prove that $\mathcal{G}_{\mathbb{C}}$ is isomorphic to the tensor product of the basic representation of $U(\hat{\mathfrak{sl}}_{p}(\mathbb{C}))$ and the unique infinite-dimensional simple module over $\mathbb{C}[\mathcal{B}]$, and also exhibit that $\mathcal{G}_{\mathbb{C}}$ is a bialgebra. Under some natural restrictions on the characteristic of $\Bbbk$, we outline the corresponding result for generalized rook monoids.


Introduction
The aim of this paper is to prove several results on the representation theory of certain inverse semigroups called generalized rook monoids and on the structure of their semigroup algebras. These results are motivated by the corresponding results for the wreath products of symmetric and cyclic groups. The latter groups appear as maximal subgroups in generalized rook monoids. Below we explain our motivation and results in more detail.
Let R n be the set consisting of all n × n matrices with entries from {0, 1} and with the further condition that each row and each column contains at most one non-zero entry. Matrix multiplication defines on R n the structure of a monoid, called the rook monoid, cf. [Sol02]. The monoid R n is alternatively known as the symmetric inverse semigroup, see [Li96,GM09]. It is very well-known, see for example [Mun57], that the rook monoid algebra C[R n ] is semisimple; moreover, all simple modules over this algebra are well-understood, see [GM09,Ste16,Gr02].
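As an illustrative sanity check (not part of the paper), the following Python sketch enumerates R n for n = 2 and verifies that it is closed under matrix multiplication; the well-known count |R n | = Σ k C(n,k)² k! gives 7 for n = 2.

```python
from itertools import product
from math import comb, factorial

def is_rook(m):
    """Each row and each column contains at most one non-zero entry."""
    return all(sum(row) <= 1 for row in m) and all(sum(col) <= 1 for col in zip(*m))

def mult(a, b):
    """Ordinary matrix multiplication for square 0/1 matrices given as tuples of rows."""
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

n = 2
all_01 = (tuple(tuple(bits[i * n:(i + 1) * n]) for i in range(n))
          for bits in product((0, 1), repeat=n * n))
R = [m for m in all_01 if is_rook(m)]

# |R_n| = sum_k C(n,k)^2 k!  (choose nonzero rows, choose nonzero columns, match them)
assert len(R) == sum(comb(n, k) ** 2 * factorial(k) for k in range(n + 1)) == 7
# closed under matrix multiplication, hence a submonoid of all n x n matrices
assert all(mult(a, b) in set(R) for a in R for b in R)
```

The same enumeration works for any small n, the cardinality growing as the number of partial injections on an n-element set.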
For a positive integer r, let C r denote the multiplicative cyclic group of order r. We can consider the wreath product C r ≀ R n , called the generalized rook monoid in [Ste08], whose elements are all n × n matrices with entries from C r ∪ {0} and with the condition that each row and each column contains at most one non-zero entry. Many of the results on the representations of the rook monoid obtained in [Sol02] were extended to the case of the generalized rook monoid in [Ste08].
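To make the wreath-product description concrete, here is a hypothetical check (not from the paper) that enumerates C 2 ≀ R 2 by realizing C 2 as {1, −1} inside the complex numbers; the count is Σ k C(n,k)² k! r^k, and the matrices without zero rows form the generalized symmetric group C 2 ≀ S 2 of order r^n n!.

```python
from itertools import product
from math import comb, factorial

n, r = 2, 2  # sketch: realize the cyclic group C_2 as {1, -1}

def is_gen_rook(m):
    # each row and each column contains at most one non-zero entry
    return (all(sum(1 for x in row if x) <= 1 for row in m)
            and all(sum(1 for x in col if x) <= 1 for col in zip(*m)))

mats = (tuple(tuple(e[i * n:(i + 1) * n]) for i in range(n))
        for e in product((0, 1, -1), repeat=n * n))
G = [m for m in mats if is_gen_rook(m)]

# |C_r wr R_n| = sum_k C(n,k)^2 k! r^k
assert len(G) == sum(comb(n, k) ** 2 * factorial(k) * r ** k for k in range(n + 1)) == 17
# matrices with no zero row are exactly the invertible ones, i.e. C_r wr S_n
assert sum(1 for m in G if all(any(row) for row in m)) == r ** n * factorial(n) == 8
```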
Motivated by the construction of the irreducible representations as seminormal representations in the case of symmetric groups and generalized symmetric groups, in this article, we give a similar construction of the irreducible representations of C r ≀ R n in Theorem 3.2. The set of elements of C r ≀ R n whose (n, n)-th entry is equal to 1 is a submonoid of C r ≀ R n and this submonoid is isomorphic to C r ≀ R n−1 . Viewing C r ≀ R n−1 as a submonoid of C r ≀ R n in this way, we have the following tower of generalized rook monoid algebras:
(1) C[C r ≀ R 0 ] ⊆ C[C r ≀ R 1 ] ⊆ C[C r ≀ R 2 ] ⊆ · · ·
For r = 1, the branching rule for the restriction of an irreducible representation along each successive inclusion of algebras in (1) is multiplicity-free by [Hal04, Section 3]. This means that, in this case, the Bratteli diagram of (1) is a simple graph. In this article, we prove a similar result for an arbitrary positive integer r in Corollary 3.4. In particular, this gives a natural basis of each irreducible representation of an algebra in the tower (1) indexed by certain paths in the Bratteli diagram, usually called a Gelfand-Zeitlin basis. If we replace, in (1), C by an algebraically closed field k of positive characteristic, then our method gives a modular branching rule as well. We also construct a Gelfand model for C[C r ≀ R n ] in Proposition 3.5, which generalizes the case r = 1 considered in [KM09], see also [Maz13] and [HRe15].
The construction of seminormal representations of the symmetric group S n is closely connected to the existence of some special elements, called Jucys-Murphy elements, in the group algebra C[S n ], see the introduction of [Ram97]. Jucys-Murphy elements for C[R n ] were constructed in [MS21]. In Section 4, we construct Jucys-Murphy elements for C[C r ≀ R n ] (these elements are defined over any field in which r is non-zero). Moreover, we observe that the expression for Jucys-Murphy elements of C[R n ] given in Section 4 is simpler than the one in [MS21]. We also show that these Jucys-Murphy elements satisfy fundamental properties similar to the ones from the classical setup of symmetric groups. In particular, we have:
(a) Proposition 4.1 shows that these elements commute with each other.
(b) Theorem 4.2 proves that these elements act as scalars on all elements of the Gelfand-Zeitlin basis of every simple C[C r ≀ R n ]-module.
(c) Corollary 4.4 states that the eigenvalues of the action of Jucys-Murphy elements on elements of the Gelfand-Zeitlin basis distinguish non-isomorphic simple modules.
Let now k be an algebraically closed field of positive characteristic p. For a finite-dimensional associative k-algebra A, let A -mod denote the category of finite-dimensional left A-modules. Consider the Grothendieck group K 0 (A -mod) of A -mod and the complexified Grothendieck group G 0 (A) = C ⊗ Z K 0 (A -mod), where Z denotes the ring of integers.
Let N denote the set of all non-negative integers. A classical result, proved in [LLT96], asserts that ⊕ n∈N G 0 (k[S n ]) has the natural structure of a module over the universal enveloping algebra U (ŝl p (C)) of the affine Lie algebra ŝl p (C) of type A (1) p−1 . Moreover, this module can be identified with the basic representation V (Λ 0 ) of U (ŝl p (C)). This result was also established in [Gro99] in a more general setting with different techniques. One of the ways to obtain these results is to define the i-restriction and i-induction functors, for 0 ≤ i ≤ p − 1, using Jucys-Murphy elements of k[S n ]. Then one can show that, at the level of Grothendieck groups, these functors satisfy the relations for the Chevalley generators of ŝl p (C).
Motivated by these classical results, we use our Jucys-Murphy elements for rook monoid algebras to define, for 0 ≤ i ≤ p − 1, the i-restriction functor res i and the i-induction functor ind i in the rook monoid setup, see (19) and (20). We also define two extra functors A and B which correspond to the additional edges in the Bratteli diagram for rook monoids, see (20). In Theorem 6.16, we show that, at the level of the direct sum (2) of the Grothendieck groups, the functors res i and ind i , for 0 ≤ i ≤ p − 1, satisfy the relations for the Chevalley generators of ŝl p (C). Additionally, the functors A and B satisfy the relation of the generators of the bicyclic monoid B and commute with all res i and ind i . Furthermore, we show that the Grothendieck group (2) is isomorphic to the tensor product of the basic representation V (Λ 0 ) with the unique simple infinite-dimensional module over the monoid algebra of B. Assume that r is non-zero in k. Then, using results for the generalized symmetric group algebras similar to the ones proved in [Tsu07] and also in [WW08], in Subsection 6.4 we outline the corresponding results for the generalized rook monoid algebras. It is well known that the direct sum of the Grothendieck groups of module categories over all symmetric group algebras is a Hopf algebra, where the multiplication and the comultiplication are obtained by using appropriate induction and restriction functors, respectively, e.g. see [Mac15, Chapter I]. In Theorem 6.20, we prove that (2) is a bialgebra where the multiplication and the comultiplication are again obtained by using certain induction and restriction functors, respectively.
Acknowledgments. The authors thank Weiqiang Wang for bringing the reference [WW08] to our attention. The second author also thanks Arun Ram for fruitful discussions. The first author is partially supported by the Swedish Research Council and Göran Gustafsson Stiftelse.

Generalized rook monoids
In what follows, k is an algebraically closed field.
Recall that C r ≀ R n denotes the generalized rook monoid. For 0 ≤ i ≤ n, let f i ∈ C r ≀ R n be the diagonal matrix whose (k, k)-th entry is 0, when i + 1 ≤ k ≤ n, and whose remaining diagonal entries are equal to 1. Note that f 0 and f n are the zero matrix and the identity matrix in C r ≀ R n , respectively.
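The basic properties of the idempotents f i can be checked in a small illustrative script (not from the paper); in particular, f 0 is the zero matrix, f n is the identity, and f i f j = f min(i,j) , so each f i is idempotent.

```python
n = 3

def f(i):
    """Diagonal 0/1 matrix: (k,k)-entry is 1 for k <= i and 0 for k > i (1-indexed)."""
    return tuple(tuple(int(a == b and a <= i) for b in range(1, n + 1))
                 for a in range(1, n + 1))

def mult(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

assert f(0) == tuple(tuple(0 for _ in range(n)) for _ in range(n))            # zero matrix
assert f(n) == tuple(tuple(int(a == b) for b in range(n)) for a in range(n))  # identity
for i in range(n + 1):
    for j in range(n + 1):
        assert mult(f(i), f(j)) == f(min(i, j))  # in particular, each f_i is idempotent
```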
Green's left cell L n i of C r ≀ R n corresponding to the idempotent f i consists, by definition, of all σ ∈ C r ≀ R n which generate the same left ideal as f i . Concretely, L n i consists of all rank i matrices in C r ≀ R n whose j-th column is zero, for all i + 1 ≤ j ≤ n. The maximal subgroup of C r ≀ R n corresponding to f i is the subgroup consisting of matrices in C r ≀ R n whose non-zero entries lie in the top-left i × i block. This subgroup is evidently isomorphic to the generalized symmetric group C r ≀ S i , where S i is the symmetric group on i letters. Unless stated otherwise, we use this identification throughout the manuscript. Note that C r ≀ S i acts on L n i from the right in the obvious way.
Let [n] := {1, 2, . . . , n} and let S i denote the set of all subsets of [n] of cardinality i. For Z = {r 1 < r 2 < · · · < r i } ∈ S i , let h n Z ∈ C r ≀ R n be the matrix whose non-zero entries are equal to 1 and are at the coordinates (r 1 , 1), . . . , (r i , i). Note that h n Z ∈ L n i and, moreover, these matrices form a cross-section of the orbits of the right action of C r ≀ S i on L n i . In other words, kL n i is a free right k[C r ≀ S i ]-module with a k[C r ≀ S i ]-basis consisting of all matrices of the form h n Z , where Z ∈ S i (we use this basis often in what follows).
The space kL n i is also naturally a left k[C r ≀ R n ]-module where, for τ ∈ C r ≀ R n and σ ∈ L n i , the action is given by τ · σ = τ σ, if τ σ ∈ L n i , and τ · σ = 0, otherwise.
These two actions on kL n i obviously commute, making kL n i a (k[C r ≀ R n ], k[C r ≀ S i ])-bimodule.
Proof. This follows by combining the standard facts that, for 0 ≤ i ≤ n, the right k[C r ≀ S i ]-module kL n i is free with the general theory of module categories over inverse semigroup algebras, see [Ste16, Section 10.2].
Generators. For 1 ≤ j ≤ n − 1, let s j denote the simple transposition (j, j + 1) in S n . Fix a primitive r-th root of unity ξ in C r . Denote by P ∈ C r ≀ R n the diagonal matrix whose (1, 1)-th entry is 0 and the remaining diagonal entries are equal to 1. Denote by Q ∈ C r ≀ S n the diagonal matrix whose (1, 1)-th entry is ξ and the remaining diagonal entries are equal to 1. Then it is easy to check that C r ≀ R n is generated by P , Q and all s j , where 1 ≤ j ≤ n − 1.
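The generation claim can be spot-checked computationally. The following sketch (illustrative, not from the paper) closes {P, Q, s 1 } (together with the identity) under matrix multiplication for n = 2, r = 2, realizing ξ as −1, and recovers all 17 elements of C 2 ≀ R 2 .

```python
n = 2
xi = -1  # assumption of this sketch: r = 2, so xi is a primitive 2nd root of unity

def mult(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

I = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))
s1 = ((0, 1), (1, 0))    # simple transposition
P = ((0, 0), (0, 1))     # (1,1)-entry 0, remaining diagonal entries 1
Q = ((xi, 0), (0, 1))    # (1,1)-entry xi, remaining diagonal entries 1

# close the generating set under multiplication
M = {I, P, Q, s1}
while True:
    new = {mult(a, b) for a in M for b in M} - M
    if not new:
        break
    M |= new

assert len(M) == 17  # = |C_2 wr R_2|, so P, Q and s_1 indeed generate the whole monoid
```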

Seminormal representations
3.1. Bases of irreducible representations. In this section, we construct the irreducible representations of C r ≀ R n over C, give a basis of an irreducible representation of C r ≀ R n and describe the actions of generators of C r ≀ R n . We also give the branching rule for the restriction of an irreducible representation of C r ≀ R n to C r ≀ R n−1 . To obtain these results we need the following notation and definitions.
Let P denote the set of all partitions of all non-negative integers. Given a partition λ = (λ 1 , . . . , λ ℓ ) of a positive integer, its Young diagram [λ] is given as:
[λ] := {(s, t) | 1 ≤ s ≤ ℓ and 1 ≤ t ≤ λ s }.
We use the usual English notation for Young diagrams. The elements of [λ] are called boxes. By convention, the Young diagram of 0 is denoted ∅. For λ ∈ P, let |λ| denote the number of boxes in [λ]. For λ ∈ P with |λ| ≤ n, let Y(λ, n) denote the set of all fillings of boxes of [λ] with different elements from [n] such that the entries increase along the rows from left to right and along the columns from top to bottom. Let
Λ r (n) := { λ (r) = (λ (1) , . . . , λ (r) ) | λ (i) ∈ P, for 1 ≤ i ≤ r, and |λ (1) | + · · · + |λ (r) | = n }.
Let λ (r) ∈ Λ r (n) and let m be a non-negative integer such that n ≤ m. Define Y(λ (r) , m) as the set
{ (L 1 , . . . , L r ) | L i ∈ Y(λ (i) , m), for 1 ≤ i ≤ r; L i and L j do not have common entries, for 1 ≤ i ≠ j ≤ r }.
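As an illustrative check of the definition of Y(λ, n) (not from the paper), one can enumerate the fillings by brute force; for λ = (2, 1) and n = 4 the count is C(4, 3) · f^λ = 4 · 2 = 8, where f^λ is the number of standard Young tableaux of shape λ.

```python
from itertools import combinations, permutations
from math import comb

lam = (2, 1)  # the partition; its boxes (s, t) are 1-indexed
boxes = [(s, t) for s, length in enumerate(lam, 1) for t in range(1, length + 1)]
n = 4

def increasing(fill):
    """Entries increase along rows (left to right) and columns (top to bottom)."""
    return (all(fill[(s, t)] < fill[(s, t + 1)] for (s, t) in boxes if (s, t + 1) in fill)
            and all(fill[(s, t)] < fill[(s + 1, t)] for (s, t) in boxes if (s + 1, t) in fill))

count = 0
for vals in combinations(range(1, n + 1), len(boxes)):  # choose the entries
    for perm in permutations(vals):                     # place them in the boxes
        if increasing(dict(zip(boxes, perm))):
            count += 1

# f^(2,1) = 2 standard tableaux, times C(4,3) choices of entries
assert count == comb(n, len(boxes)) * 2 == 8
```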
Let L = (L 1 , . . . , L r ) ∈ Y(λ (r) , m) and 1 ≤ b ≤ m. We write b ∈ L if b appears in one of L k , for 1 ≤ k ≤ r, and we also say "b ∈ L at the position k", if b appears in L k . Define the sign of b in L as sgn L (b) := ξ k−1 , where k is the position of b in L. Let b ∈ L be at the position k and, further, assume that b is in the box (s 1 , s 2 ) in L k . Define the content of b as ct(L(b)) := s 2 − s 1 . If both i and i + 1 appear in L at the position k (in particular, sgn L (i) = sgn L (i + 1)), define
a L (i) := 1/(ct(L(i + 1)) − ct(L(i))).
Given L ∈ Y(λ (r) , n), let s i L be obtained from L by replacing i by i + 1 if i ∈ L, and by replacing i + 1 by i if i + 1 ∈ L. Note that it may happen that s i L does not lie in Y(λ (r) , n). For the next statement, we refer e.g. to [HRa98,Page 169], see also [AK94].
Theorem 3.1. (a) The elements of Λ r (n) index the isomorphism classes of irreducible representations of C[C r ≀ S n ].
(b) For λ (r) ∈ Λ r (n), the corresponding irreducible representation W n λ (r) of C r ≀ S n has a basis {w L | L ∈ Y(λ (r) , n)} on which the generators s j , for 1 ≤ j ≤ n − 1, and Q act as follows:
(3) s j w L = w sj L , if sgn L (j) ≠ sgn L (j + 1), and s j w L = a L (j)w L + (1 + a L (j))w sj L , if sgn L (j) = sgn L (j + 1);
(4) Qw L = sgn L (1)w L .
Here, by convention, w sj L := 0 whenever s j L ∉ Y(λ (r) , n).
The next claim is a generalization of Theorem 3.1 to the case of C r ≀ R n .
Theorem 3.2. (a) The elements of Λ r (≤ n) := ⊔ n i=0 Λ r (i) index the irreducible representations of C[C r ≀ R n ] in the following way: for λ (r) ∈ Λ r (i), the corresponding irreducible representation is V n λ (r) := CL n i ⊗ C[C r ≀S i ] W i λ (r) .
(b) V n λ (r) has a basis {v L | L ∈ Y(λ (r) , n)} on which the generators P , Q, and s j , for 1 ≤ j ≤ n − 1, act as follows:
(5) s j v L = v sj L , if j ∉ L or j + 1 ∉ L, and also if j, j + 1 ∈ L and sgn L (j) ≠ sgn L (j + 1); s j v L = a L (j)v L + (1 + a L (j))v sj L , if j, j + 1 ∈ L and sgn L (j) = sgn L (j + 1).

(6) P v L = v L , if 1 ∉ L, and P v L = 0, otherwise; Qv L = sgn L (1)v L , if 1 ∈ L, and Qv L = v L , otherwise.
Here, by convention, v sj L := 0 whenever s j L ∉ Y(λ (r) , n).
Proof. The first claim follows directly from the general theory, see [Ste16], so we only prove the second claim. (One can also see it by combining Lemma 2.1 and Theorem 3.1.) Recall that, as a right k[C r ≀ S i ]-module, kL n i has a basis consisting of matrices of the form h n Z , where Z ∈ S i . Fix λ (r) ∈ Λ r (i). For Z = {r 1 < r 2 < · · · < r i } ⊆ [n] and L ′ ∈ Y(λ (r) , i), define L ∈ Y(λ (r) , n) by replacing each entry l of L ′ by r l , for 1 ≤ l ≤ i. Conversely, given L ∈ Y(λ (r) , n), let Z be the set of the entries in L. We can arrange these entries in the increasing order to get Z = {r 1 < r 2 < · · · < r i }. Now, replacing r l ∈ L by l, we obtain an element L ′ ∈ Y(λ (r) , i). Then {v L := h n Z ⊗ w L ′ | L ∈ Y(λ (r) , n)} is, by construction, a basis of V n λ (r) . Next we compute the action of s j . For 1 ≤ j ≤ n − 1, we have s j v L = s j h n Z ⊗ w L ′ = h n sj (Z) (h n sj (Z) ) tr s j h n Z ⊗ w L ′ , where (h n sj (Z) ) tr denotes the transpose of h n sj (Z) .
Case 1. Suppose that we have j ∉ L or j + 1 ∉ L. This means that j ∉ Z or j + 1 ∉ Z, respectively. In this case, (h n sj (Z) ) tr s j h n Z = f i ∈ C r ≀ S i , which is the identity of C r ≀ S i . Therefore, we have s j v L = h n sj (Z) ⊗ w L ′ = v sj L . This completes the description of the action of s j for the first three cases in (5).
Case 2. Suppose that j ∈ L and j + 1 ∈ L or, equivalently, j ∈ Z and j + 1 ∈ Z. Then (h n sj (Z) ) tr s j h n Z is a simple transposition in C r ≀ S i . Hence the remaining two cases in (5) follow from (3).
To compute the action of P , we start with P v L = P (h n Z ⊗ w L ′ ) = P h n Z ⊗ w L ′ . Note that P h n Z ∈ L n i if and only if 1 ∉ Z. In particular, we have P v L = v L , if 1 ∉ Z, and P v L = 0, otherwise.
This implies the formula for the action of P in (6).
The action of Q in (6) can be computed similarly using (4).
3.2. The restriction functor. As we already mentioned, the set consisting of all matrices in C r ≀ R n whose (n, n)-th entry is 1 is a submonoid of C r ≀ R n and it is isomorphic to C r ≀ R n−1 . This defines an embedding of C r ≀ R n−1 into C r ≀ R n . Denote by F the corresponding restriction functor from C[C r ≀ R n ] -mod to C[C r ≀ R n−1 ] -mod.
Theorem 3.3. The following diagram commutes up to a natural isomorphism of functors.
Proof. Recall that, as a right k[C r ≀ S i ]-module, kL n i has a basis consisting of matrices of the form h n Z , where Z ∈ S i . We need to consider several cases.
Case 1. Suppose that 0 < i < n. It is easy to see that this C[C r ≀ R n−1 ]-module is isomorphic to L n−1 i−1 (V ).
Case 2. In the case i = n, we have that kL n n is the right regular C[C r ≀ S n ]-module and the necessary claim follows from the construction.
Case 3. In the case i = 0, both kL n 0 and the group algebra of C r ≀S 0 are isomorphic to k and the claim is trivial. Now we give some applications of Theorem 3.3.
For λ (r) ∈ Λ r (≤ n), define λ (r) ↓ as the set consisting of λ (r) and all elements in Λ r (≤ n − 1) which are obtained from λ (r) by removing an outer corner in one of the Young diagrams which constitute λ (r) .
Corollary 3.4 (Branching rule over C). For λ (r) ∈ Λ r (≤ n), we have:
F (V n λ (r) ) ∼ = ⊕ µ (r) ∈λ (r) ↓ V n−1 µ (r) .
Proof. It is a consequence of the branching rule for generalized symmetric groups and Theorem 3.3.
The Bratteli diagram of the tower (1) is an undirected graph whose vertices at the level n are given by the elements of Λ r (≤ n). For two vertices λ (r) ∈ Λ r (≤ n) and µ (r) ∈ Λ r (≤ n − 1), there is an edge between µ (r) and λ (r) if and only if µ (r) ∈ λ (r) ↓. By construction, the Bratteli diagram encodes the branching rule in Corollary 3.4. When exhibiting the Bratteli diagram, it is often more intuitive to represent a partition by the corresponding Young diagram, and we follow this convention below.
In Figure 1, we illustrate the branching rule in the case r = 2 by the corresponding Bratteli diagram for the tower of generalized rook monoid algebras, up to the second level. Observe that the branching rule in Corollary 3.4 is multiplicity-free. Therefore there is a basis of V n λ (r) , defined uniquely up to rescaling of its elements, which is indexed by the paths from the vertex at the level m = 0 to a vertex, say λ (r) , at the level m = n in the Bratteli diagram, see e.g. [OV96, Page 585], where such a basis is called a Gelfand-Zeitlin basis. We note that the set of all such paths is in a bijective correspondence with Y(λ (r) , n). From Theorem 4.2, which will be proved later, it follows that the basis constructed in Theorem 3.2 coincides, up to rescaling, with a Gelfand-Zeitlin basis of V n λ (r) . For L ∈ Y(λ (r) , n), the vector v L given in part (b) of Theorem 3.2 is called a Gelfand-Zeitlin basis vector.
Now we outline the modular branching rule for generalized rook monoids as a consequence of Theorem 3.3 and the corresponding rule for generalized symmetric groups. Let p be a prime number and assume that k is of characteristic p. Recall that a partition is called p-regular if it does not have more than p − 1 parts that are equal. Let Λ p (n) denote the set of all p-regular partitions of n. It is known that the simple modules of k[S n ] are indexed by the elements of Λ p (n), see [Jam76] or [Jam78, Theorem 11.5]. For λ ∈ Λ p (n), let D λ denote the corresponding simple k[S n ]-module. In [Kle95a, Kle95b] one can find a modular branching rule for symmetric groups, that is, a description of the socle of the restriction of D λ to k[S n−1 ]. In particular, this description asserts that the socle of the restriction is multiplicity-free. The necessary combinatorial background can be found in [JK81]. A modular branching rule for generalized symmetric groups, under the assumption that r is non-zero in k, was obtained in [Tsu07].
Combining these results with Theorem 3.3, one obtains a modular branching rule for the generalized rook monoid algebras; in particular, it follows that this branching is multiplicity-free.
3.3. Gelfand model. Recall that a Gelfand model for an algebra is a module isomorphic to the direct sum of all its simple modules, each appearing with multiplicity one; see [KM09, Maz13, HRe15] for further details.
For σ ∈ C r ≀ S n , let Inv(σ) be the set consisting of all (i, j) ∈ [n] × [n] such that i < j and the non-zero entry in the i-th column of σ appears in a later row than the non-zero entry in the j-th column. Similarly, for σ ∈ C r ≀ S n , let Pair(σ) be the set consisting of all pairs (i, j) ∈ [n] × [n] such that i < j, the non-zero entry in the i-th column of σ appears in row j and the non-zero entry in the j-th column appears in row i. For a matrix A, let A tr denote its transpose. Let W n be the C-span of J = {σ ∈ C r ≀ S n | σ = σ tr }. For ω ∈ C r ≀ S n and σ ∈ J , set ω · σ := (−1) |Inv(ω)∩Pair(σ)| ωσω −1 .
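The sets Inv(σ) and Pair(σ) can be computed directly from the column-to-row map of the matrix. Here is a small illustrative computation (not from the paper) for r = 1, i.e. for permutation matrices; the matrix of the transposition (1, 3) in S 3 is symmetric with a single 2-cycle.

```python
def col_to_row(sigma):
    """sigma: permutation matrix as a tuple of rows; return map column -> row (1-indexed)."""
    n = len(sigma)
    return {j + 1: i + 1 for i in range(n) for j in range(n) if sigma[i][j]}

def inv_set(sigma):
    m = col_to_row(sigma)
    return {(i, j) for i in m for j in m if i < j and m[i] > m[j]}

def pair_set(sigma):
    m = col_to_row(sigma)
    return {(i, j) for i in m for j in m if i < j and m[i] == j and m[j] == i}

# the matrix of the transposition (1,3) in S_3
t13 = ((0, 0, 1), (0, 1, 0), (1, 0, 0))
assert t13 == tuple(zip(*t13))           # symmetric: sigma = sigma^tr
assert pair_set(t13) == {(1, 3)}         # one 2-cycle
assert inv_set(t13) == {(1, 2), (1, 3), (2, 3)}
```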
In [APR10], it is shown that W n is a Gelfand model for C[C r ≀ S n ], see also [MS16, Section 2.7].
Let I := {M ∈ C r ≀ R n | M = M tr } and let V be the C-span of I. For M ∈ I, let Pair(M ) be the set consisting of all pairs (i, j) ∈ [n] × [n] such that i < j, the non-zero entry in the i-th column of M appears in row j and the non-zero entry in the j-th column of M appears in row i. We define the action of the generators of C[C r ≀ R n ] on V as in (7) and (8).
Proposition 3.5. The module V is a Gelfand model for C[C r ≀ R n ].
Proof. For 0 ≤ m ≤ n, let V m be the C-span of I m = {M ∈ I | rank(M ) = m}. From (7) and (8), we see that each V m is closed under the action; moreover, directly by construction, we have V m ∼ = L n m (W m ).

Jucys-Murphy elements
Given any subset A of [n], let e A be the diagonal matrix which has 1 at the (i, i)-th entry, for i ∈ [n] \ A, and zeros elsewhere. Consider the element
E A := Σ B : A⊆B⊆[n] (−1) |B\A| e B .
Then E A ∈ k[R n ] is an idempotent and, moreover, any two such idempotents commute with each other (since all idempotents of R n commute). Assume that r is non-zero in k. Consider the elements X j and Y j , for 1 ≤ j ≤ n, in k[C r ≀ R n ] given in (9), where ξ l m ξ −l j ∈ C r ≀ S n denotes the diagonal matrix with 1's on the diagonal except for ξ l at the (m, m)-th entry and ξ −l at the (j, j)-th entry, and (m, j) is the usual transposition in S n . We will call the elements X j and Y j the Jucys-Murphy elements of k[C r ≀ R n ].
Proposition 4.1. For 1 ≤ i, j ≤ n, we have X i X j = X j X i , Y i Y j = Y j Y i and X i Y j = Y j X i .
Proof. Recall that Q j is the diagonal matrix whose (j, j)-th diagonal entry is equal to ξ and the remaining diagonal entries are equal to 1. Without loss of generality, we may assume that i < j. We can write Y j as in (10). Now, Y i commutes with the first term of (10). The product of Y i with the middle term of (10) can be written as in (11). Also, we can write Y i as in (12). In the above, we note that the elements in the first term commute with each other and the second term simplifies to an expression which is equal to (12) using, for 1 ≤ p ≤ i − 1, that (p, i)(p, j) = (i, j)(p, i). Similarly to (13), we can write the product in the opposite order, where the elements in the first term commute with each other and the second term is equal to (11). All of the above, finally, yields that Y i Y j = Y j Y i .
If i > j, then X i = Q i − P i commutes with each term in Y j , which implies that X i Y j = Y j X i . Let us assume that i ≤ j. Both Q i and P i commute with every element in the first and the last terms in (10). Now Q i commutes with the middle term as well. This completes the proof.
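The commuting family of idempotents e A introduced above can be checked in a small illustrative script (not from the paper): multiplication of these diagonal matrices obeys e A e B = e A∪B , which gives both idempotency and commutativity at once.

```python
from itertools import combinations

n = 3

def e(A):
    """Diagonal 0/1 matrix with 1 at (i,i) for i not in A (1-indexed)."""
    return tuple(tuple(int(i == j and i not in A) for j in range(1, n + 1))
                 for i in range(1, n + 1))

def mult(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

subsets = [frozenset(c) for k in range(n + 1) for c in combinations(range(1, n + 1), k)]
for A in subsets:
    for B in subsets:
        # e_A e_B = e_B e_A = e_{A ∪ B}; in particular e_A is idempotent (take B = A)
        assert mult(e(A), e(B)) == mult(e(B), e(A)) == e(A | B)
```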
Theorem 4.2. Let λ (r) ∈ Λ r (≤ n), let L ∈ Y(λ (r) , n) and let 1 ≤ i ≤ n. Then:
(14) X i v L = sgn L (i)v L , if i ∈ L, and X i v L = 0, otherwise;
(15) Y i v L = ct(L(i))v L , if i ∈ L, and Y i v L = 0, otherwise.
Proof. We first prove (14) by induction on i. From (6), the claim holds for i = 1. Assume now that (14) is true for 2 ≤ i < m ≤ n, and let us prove it for i = m. By definition, X m = s m−1 X m−1 s m−1 . We need to consider several cases.
Case 4. m − 1 ∈ L, m ∈ L and sgn L (m − 1) = sgn L (m) = ξ k−1 . In this case, m − 1 ∈ L k and m ∈ L k , for some 1 ≤ k ≤ r. Now we have to consider two subcases.
Subcase 4.1. s m−1 L ∉ Y(λ (r) , n). Then m − 1 and m appear adjacent to each other either in the same row of L k or in the same column of L k . This implies that a L (m − 1) = ±1. Also, recall that, by convention, in this case we have v sm−1L = 0. Now, applying the inductive assumption, we obtain (14) in this case.
Subcase 4.2. s m−1 L ∈ Y(λ (r) , n). Using the inductive assumption, we again obtain (14). This proves (14).
To prove (15), we again use induction on i. For i = 1, the claim is obvious. Assume now that (15) is true for 2 ≤ i < m ≤ n, and let us prove it for i = m.
We have an expression for Y m in terms of Y m−1 , see (9). We need to consider several cases. Note that, in all the cases below, the action of ξ l m−1 ξ −l m is computed using (5), (6) and (9); in addition, we also use the following fact: for an integer s, we have
Σ l=0 r−1 ξ ls = r, if r divides s, and Σ l=0 r−1 ξ ls = 0, otherwise.
Case 4. m − 1 ∈ L, m ∈ L and sgn L (m − 1) = sgn L (m) = ξ k−1 . This means that m − 1 ∈ L k and m ∈ L k , for some 1 ≤ k ≤ r. We now have to consider two subcases.
Subcase 4.1. s m−1 L ∉ Y(λ (r) , n). Then m − 1 and m appear adjacent to each other either in the same row of L k or in the same column of L k . This implies that a L (m − 1) = ±1. Also, recall our convention that v sm−1L = 0 in this case. Now, by the inductive assumption, we compute the action of Y m on v L . Since a L (m − 1) + ct(L(m − 1)) = ct(L(m)), we get the desired answer.
Subcase 4.2. s m−1 L ∈ Y(λ (r) , n). Using the inductive assumption, we compute the action of the first term on v L , and then the action of the second term on v L . Combining the coefficients at v L and v sm−1L , we get the desired formula. This proves (15).
It is easy to show, using induction, that the elements in [MS21] and the elements in (15) are the same.
As an immediate consequence of Theorem 4.2, we have the following: Corollary 4.4. The eigenvalues of the action of X i and Y i , for 1 ≤ i ≤ n, on Gelfand-Zeitlin basis vectors distinguish the latter and, consequently, the isomorphism classes of simple C[C r ≀ R n ]-modules.
In particular, it follows that the Gelfand-Zeitlin subalgebra of C[C r ≀R n ] is generated by Jucys-Murphy elements.

Bicyclic monoid
The results of this section should be known to experts. However, we could not trace an explicit reference, so we provide all proofs, for completeness.
Recall, cf. [CP61], that the bicyclic monoid B is an infinite monoid generated by two elements a and b and given by the following presentation: B := ⟨ a, b | ab = 1 ⟩.
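Every element of B has the normal form b^m a^n with m, n ∈ N, and multiplication cancels middle factors using ab = 1. The following illustrative sketch (not from the paper) encodes elements as pairs (m, n) and verifies the defining relation, the infiniteness witness ba ≠ 1, and associativity on a small range.

```python
# Elements of the bicyclic monoid in normal form b^m a^n, encoded as pairs (m, n).
def mult(x, y):
    """(b^m a^n)(b^p a^q): cancel a^n against b^p using ab = 1."""
    m, n_, p, q = *x, *y
    t = min(n_, p)
    return (m + p - t, q + n_ - t)

one, a, b = (0, 0), (0, 1), (1, 0)
assert mult(a, b) == one   # ab = 1
assert mult(b, a) != one   # ba != 1, so B is not a group (and is infinite)

# associativity spot-check on a finite set of normal forms
elems = [(m, n) for m in range(4) for n in range(4)]
assert all(mult(mult(x, y), z) == mult(x, mult(y, z))
           for x in elems for y in elems for z in elems)
```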
Consider the C-vector space V N with N as a basis. Define the actions of b and a on V N as follows: for n ∈ N, set b · n := n + 1 and a · n := n − 1, for n ≥ 1, while a · 0 := 0. In the above, note that a acts as zero on the basis vector 0 ∈ N. Then it is easy to check that V N becomes a simple C[B]-module.
For a non-zero λ ∈ C, let V λ be the 1-dimensional C-vector space with basis v λ . Define the actions of b and a on V λ as follows: bv λ = λv λ and av λ = λ −1 v λ .
Then V λ is a simple C[B]-module.
Proposition 5.1. Let V be a simple C[B]-module. Then either V ∼ = V N or V ∼ = V λ , for some non-zero λ ∈ C.
Proof. Let L a and L b be linear operators on V representing the actions of a and b. Since ab = 1, we have L a • L b = Id V , in particular, L a is surjective and L b is injective. We need to consider two cases.
Case 1. Suppose L a is injective and hence invertible. Then L b = L −1 a and, in particular, L a L b = L b L a , hence L a commutes with the action of C[B]. By the Schur-Dixmier lemma, it then follows that L a = λId V , for some λ ∈ C. Obviously, this λ must be non-zero. Consequently, L b = λ −1 Id V . In this case any subspace of V is, clearly, a submodule. Therefore V must have dimension one by simplicity and hence is isomorphic to V λ .
Case 2. Suppose L a is not injective. Then there exists a non-zero v ∈ V such that av = 0. Consider the subspace W spanned by {b n v | n ∈ N}. Since L b is injective, the set {b n v | n ∈ N} is linearly independent and, therefore, W is infinite-dimensional. Clearly, W is stable under the action of C[B] and is isomorphic to V N . Since V is simple, we must have V = W and, finally, V ∼ = V N .

Grothendieck groups for rook monoid algebras
In this section, we assume that the characteristic of k is p > 0. We identify the prime subfield k p of k with the additive cyclic group of order p. We start by recalling the classical results related to the tower of symmetric groups.
6.1. The case of symmetric groups. We have the following Jucys-Murphy elements for the algebra k[S n ]:
(16) Y k := Σ i=1 k−1 (i, k), for 1 ≤ k ≤ n.
Here, (i, k) ∈ S n is a transposition.
For V ∈ k[S n ] -mod, the eigenvalues of the operator Y k on V lie in k p (e.g., see [BK03, Lemma 2.2]). Since the elements in (16) commute with each other, for r = (r 1 , r 2 , . . . , r n ) ∈ k n p , the common generalized eigenspace of V with respect to the elements in (16) is
V r := {v ∈ V | (Y k − r k ) N v = 0, for all k ∈ [n] and N ≫ 0}.
We then have the decomposition V = ⊕ r∈k n p V r .
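The commutativity of the elements in (16) can be verified symbolically in the integral group ring. The following illustrative Python sketch (not from the paper) multiplies formal Z[S 4 ]-elements, with permutations in one-line notation, and checks that the Jucys-Murphy elements pairwise commute.

```python
from collections import defaultdict

n = 4

def compose(p, q):
    """(p*q)(i) = p(q(i)); permutations as 0-indexed tuples in one-line notation."""
    return tuple(p[q[i]] for i in range(n))

def transposition(i, k):
    """The transposition (i, k), 1-indexed, as a one-line tuple."""
    t = list(range(n))
    t[i - 1], t[k - 1] = t[k - 1], t[i - 1]
    return tuple(t)

def alg_mult(x, y):
    """Multiply two elements of the group algebra, given as {permutation: coefficient}."""
    z = defaultdict(int)
    for p, cp in x.items():
        for q, cq in y.items():
            z[compose(p, q)] += cp * cq
    return dict(z)

def Y(k):
    """Jucys-Murphy element Y_k = sum_{i < k} (i, k)."""
    return {transposition(i, k): 1 for i in range(1, k)}

for k in range(2, n + 1):
    for l in range(2, n + 1):
        assert alg_mult(Y(k), Y(l)) == alg_mult(Y(l), Y(k))
```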
For r ∈ k n p and l ∈ k p , let µ l := |{k ∈ [n] | r k = l}|. Then Σ l∈kp µ l = n and the tuple wt(r) = (µ 0 , . . . , µ p−1 ) ∈ N p is called the weight of r. Let T n denote the set of all tuples in N p whose coordinates sum to n. For γ ∈ T n and V ∈ k[S n ] -mod, set V [γ] := ⊕ r : wt(r)=γ V r . The following lemma is [BK03, Lemma 2.4] and the key point in its proof is that the center of k[S n ] is generated by the elementary symmetric polynomials (see [Mac15, Section 2]) in {Y k | k ∈ [n]}, which can be found in [FH59] and [Juc74].
Lemma 6.1. For V ∈ k[S n ] -mod and γ ∈ T n , the subspace V [γ] is a k[S n ]-submodule of V ; in particular, V = ⊕ γ∈T n V [γ] is a decomposition of k[S n ]-modules.
6.1.1. Decompositions of induction and restriction functors. For i ∈ k p and γ = (γ 0 , . . . , γ p−1 ) ∈ T n , denote by γ + i the element in N p whose i-th coordinate is γ i + 1 and the remaining coordinates are the same as those of γ. Similarly, if γ i ≠ 0, denote by γ − i the element in N p whose i-th coordinate is γ i − 1 and the remaining coordinates are the same as those of γ.
For γ ∈ T n and V = V [γ] ∈ k[S n ] -mod, define the functors res i and ind i as follows:
res i (V ) := (Res V )[γ − i ] and ind i (V ) := (Ind V )[γ + i ],
where Res and Ind denote the restriction functor to k[S n−1 ] -mod and the induction functor to k[S n+1 ] -mod, respectively. This definition extends to any object in k[S n ] -mod and hence completely defines res i and ind i due to Lemma 6.1. The functors res i and ind i are called the i-restriction and the i-induction functors, respectively. Using these definitions, we get the following decomposition of the restriction and induction functors in terms of the i-restriction and i-induction functors:
Res = ⊕ i∈k p res i and Ind = ⊕ i∈k p ind i .
The functors res i and ind i are exact and hence they induce linear maps [ res i ] : G 0 (k[S n ]) → G 0 (k[S n−1 ]) and [ ind i ] : G 0 (k[S n ]) → G 0 (k[S n+1 ]). For λ ∈ Λ p (n), we denote by D λ the corresponding simple module in k[S n ] -mod. Then the set {[D λ ] | λ ∈ Λ p (n)} is a basis of G 0 (k[S n ]). In order to describe the actions of [ res i ] and [ ind i ] in this basis, we need to recall some terminology.
Definition 6.2. Let λ be a partition and i ∈ k p .
(1) For a box b ∈ [λ], the residue of b, denoted by res(b), is defined as the content of b modulo p.
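To illustrate Definition 6.2(1), here is a small illustrative Python computation (not from the paper) of the residues of the boxes of a partition; the removable boxes used below are the classical outer corners of the Young diagram, which is the standard notion entering the description of [ res i ] and [ ind i ].

```python
p = 3
lam = (4, 2, 1)  # a sample partition; boxes (s, t) are 1-indexed

# content of the box (s, t) is t - s; its residue is the content modulo p
residues = {(s, t): (t - s) % p
            for s, length in enumerate(lam, 1) for t in range(1, length + 1)}
assert residues[(1, 4)] == (4 - 1) % p == 0
assert residues[(3, 1)] == (1 - 3) % p == 1

# removable boxes: the last box of a row whose removal leaves a partition
removable = [(s, lam[s - 1]) for s in range(1, len(lam) + 1)
             if s == len(lam) or lam[s] < lam[s - 1]]
assert removable == [(1, 4), (2, 2), (3, 1)]
```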
Proof. This is a straightforward computation.
From Theorem 4.2, we have that the eigenvalues of the actions of X i and Y i on all elements of the Gelfand-Zeitlin basis of simple C[R n ]-modules are integers. It follows that the eigenvalues of the actions of X i and Y i on all C[R n ]-modules are integers. The following lemma describes an analogue of the latter statement over k.
Lemma 6.6. Let M ∈ k[R n ] -mod. Then, for 1 ≤ i ≤ n, the eigenvalues of the linear operators X i , Y i ∈ End k (M ) lie in k p .
Proof. As we just mentioned, from Theorem 4.2 we know that the eigenvalues of the actions of X i and Y i on all C[R n ]-modules are integers. In particular, this applies to the regular C[R n ]-module. This means that the characteristic polynomial of X i (or Y i ) on the regular C[R n ]-module has integral roots over C.
Since Jucys-Murphy elements are in Z[R n ], over the field k we just need to reduce the above characteristic polynomial modulo p. Since the original polynomial had integral roots, the reduction modulo p will have roots in k p . The claim follows.
As an immediate consequence of Lemma 6.6, for M ∈ k[R n ] -mod, i ∈ k n p and j ∈ {0, 1} n , we have the common generalized eigenspace
M (i,j) := {v ∈ M | (Y k − i k ) N v = 0 and (X k − j k ) N v = 0, for all k ∈ [n] and N ≫ 0},
and M decomposes into the direct sum of these subspaces.
For γ ∈ N p , define M [γ] := ⊕ i∈k n p , j∈{0,1} n : wt(i,j)=γ M (i,j) . The next lemma is motivated by the classical result that the center of k[S n ] is generated by the elementary symmetric polynomials in Jucys-Murphy elements, see e.g. [FH59] and [Juc74].
Lemma 6.8. The center of k[R n ] is generated by the elementary symmetric polynomials in {X i | i ∈ [n]} and the elementary symmetric polynomials in {Y i | i ∈ [n]}.
Before we prove Lemma 6.8, we need to introduce some notation that we will use in the proof. For σ ∈ R n , let C(σ) and R(σ) be the sets of indices of all non-zero columns and rows of σ, respectively. When C(σ) = R(σ) and |R(σ)| = r, there is a unique order preserving bijection [r] → C(σ), and σ can be thought of as an element σ ′ in S r . We define the cycle type of σ ∈ R n as the cycle type of σ ′ ∈ S r , see [GM09] for details. For example, the element σ ∈ R 2 with non-zero entries at the positions (1, 2) and (2, 1) can be realized, by the above, as the transposition (1, 2) ∈ S 2 . So, the cycle type of this σ is (2), i.e. it has one cycle of length two.
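The cycle type of an element σ ∈ R n with C(σ) = R(σ) can be computed algorithmically. The following illustrative sketch (not from the paper) extracts the induced permutation via the order preserving bijection and reads off its cycle lengths, reproducing the example above inside R 3 .

```python
def cycle_type(sigma):
    """Cycle type of sigma in R_n with C(sigma) = R(sigma); sigma as a tuple of 0/1 rows."""
    n = len(sigma)
    col_to_row = {j: i for i in range(n) for j in range(n) if sigma[i][j]}
    support = sorted(col_to_row)                  # = C(sigma)
    assert sorted(col_to_row.values()) == support  # requires C(sigma) = R(sigma)
    rank = {c: k for k, c in enumerate(support)}  # order preserving bijection [r] -> C(sigma)
    perm = {rank[c]: rank[col_to_row[c]] for c in support}
    lengths, seen = [], set()
    for start in perm:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

# non-zero entries at (2,1) and (1,2) (1-indexed): the induced permutation is (1,2) in S_2
sigma = ((0, 1, 0), (1, 0, 0), (0, 0, 0))
assert cycle_type(sigma) == (2,)
```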
Proof. Let us denote by Z the subalgebra of k[R n ] spanned by all elementary symmetric polynomials in the X i and all elementary symmetric polynomials in the Y i . It is a straightforward exercise to check that Z is contained in the center of k[R n ] (in fact, below we explicitly compute the elementary symmetric polynomials in the X i , and the fact that these are central follows, e.g., from [Ste08]). Below we show the converse inclusion, i.e. that the center of k[R n ] is contained in Z.
For 0 ≤ r ≤ n and λ ⊢ r, let M λ = {σ ∈ R n | C(σ) = R(σ) and the cycle type of σ is λ} and set c λ := Σ σ∈M λ σ. Note that c ∅ = e [n] is the zero element of R n .
Our first claim is that {c λ | λ ⊢ r, 0 ≤ r ≤ n} is a basis of the center of k[R n ]. This follows easily from the construction by combining the following three well-known facts: • The center of k[S n ] has the obvious basis indexed by the cycle types for S n , in which the basis element corresponding to a fixed cycle type is just the sum of all elements in S n which have this cycle type.
• For any m, the center of the algebra of m × m matrices over k[S n ] has the obvious basis indexed by cycle types for S n , in which the basis element corresponding to a fixed cycle type is just the identity matrix times the corresponding basis element for the center of k[S n ].
• Since R n is an inverse monoid, the monoid algebra k[R n ] is isomorphic to a direct sum of matrix algebras corresponding to the equivalence classes (with respect to Green's D-relation) of the maximal subgroups in R n . The latter subgroups are of the form S k , for 0 ≤ k ≤ n. For each such S k , the rows and columns in the corresponding matrix algebra are naturally indexed by all k-element subsets of [n] and the idempotents cutting out the k[S k ]-parts are exactly the elements E R(σ) , see e.g. [Ste08,Ste16].
Next, by induction on r, one shows that the element e_{i_1} e_{i_2} ··· e_{i_r} is a linear combination of d_0, d_1, ..., d_r and hence belongs to Z.
Further, for λ ⊢ r, let λ̃ be the partition of n obtained by appending 1^(n−r) to λ at the end. Using (18) and the classical results for symmetric groups, there exists a symmetric polynomial f_λ in n variables such that c_λ = d_r f_λ(Y_1, ..., Y_n) g_{n−r} ∈ Z.
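For a concrete instance of this construction (the values n = 5, r = 3 are our own illustration):

```latex
\[
  \lambda = (2,1) \vdash 3
  \quad\rightsquigarrow\quad
  \widetilde{\lambda} = (2,1,1,1) \vdash 5 ,
\]
```

where the two appended parts come from $1^{(n-r)} = 1^{2}$.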
Note that, over C, an analogue of Lemma 6.8 is also true for the generalized rook monoid algebras.

Proposition 6.10. The center of C[C_r ≀ R_n] is generated by the symmetric polynomials in {X_i | i ∈ [n]} and the symmetric polynomials in {Y_i | i ∈ [n]}.

Proof. Indeed, from Theorem 4.2 it is easy to see that all symmetric polynomials in the X_i and all symmetric polynomials in the Y_i act as scalars on all simple modules and hence are central. At the same time, they separate the isomorphism classes of simple modules and hence generate the center.
Over an arbitrary field, we only have the following.

Proposition 6.11. The symmetric polynomials in {X_i | i ∈ [n]} and the symmetric polynomials in {Y_i | i ∈ [n]} are central in k[C_r ≀ R_n].

Proof. To prove the result, it is enough to show that the elementary symmetric polynomials in {X_i | i ∈ [n]} as well as the elementary symmetric polynomials in {Y_i | i ∈ [n]} belong to the center of k[C_r ≀ R_n]. Note that both P and Q commute with each X_i and Y_i. Furthermore, for j ∈ [n − 1], we have the analogous commutation relations. From this it follows that the symmetric polynomials in question commute with the generators of k[C_r ≀ R_n] (see the end of Section 2) and hence are central.
To avoid excessive notation, in the remaining part of this subsection we assume r = 1; the corresponding results for arbitrary r are stated in Subsection 6.4.
As in the case of symmetric groups, the fact that the symmetric polynomials in the X_i and the symmetric polynomials in the Y_i are central (see Lemma 6.8) gives the following statement.

We want to use Lemma 6.12 to define, for i ∈ k_p, the following functors.

For r = (r_1, r_2, ..., r_j) with wt(r) = γ, let v ∈ V_r. Let Z = {β_1 < ··· < β_j} ⊆ [n]. Then, using the decomposition (17), h^n_Z ⊗ v ∈ L^n_j(V[γ]), and a general element is a linear combination of such elements. In the following, we show that h^n_Z ⊗ v ∈ L^n_j(V[γ])[γ]. Define l = (l_1, ..., l_n), where l_{β_1} = l_{β_2} = ··· = l_{β_j} = 1 and the remaining coordinates are equal to 0. Also, let m = (m_1, ..., m_n), where m_{β_1} = r_1, ..., m_{β_j} = r_j and the remaining coordinates are equal to 0. Then wt((l, m)) = γ.
In particular, (X_k − l_k) h^n_Z ⊗ v = 0. For q_1, q_2 ∈ [n], observe that

If k ∈ Z, then there exists s ∈ [j] such that

The following corollary is a consequence of Theorem 3.3 and Lemma 6.13.
Corollary 6.14. For i ∈ k_p and 0 ≤ j ≤ n, we have

As an application of Lemma 2.1 and Corollary 6.14, we obtain the following corollary.
Corollary 6.15. For i ∈ k_p, we have

Let λ ∈ ∪_{j=0}^{n} Λ_p(j). For i ∈ k_p, in (22) and (23) below, the first equality is due to Corollary 6.14 and the second equality is due to Theorem 6.3, where α_λμ and β_λν are as given in Theorem 6.3. Once again from Corollary 6.14, we obtain

Define

(c) The vector space G_C is a module over U(ŝl_p(C)) ⊗_C C[B]; moreover,

Proof. Claim (a) follows from Theorem 6.3, Theorem 6.4(a), and formulae (22) and (23). Under the isomorphism in Theorem 6.4(b), we may consider

as a basis of V(Λ_0). Define the map Φ([L^n_j(D_λ)]) = [D_λ] ⊗ (n − j). It follows from the above discussion and the constructions that this map is an isomorphism of U(ŝl_p(C)) ⊗_C C[B]-modules.
6.3. Bialgebra structure on G_C.

6.3.1. Preliminaries. It is well known that the analogue of G_C for symmetric groups has the natural structure of a Hopf algebra, see [Mac15, Chapter I]. In this section, we prove that G_C has the natural structure of a bialgebra.
Lemma 6.17. For n, n 1 , n 2 ∈ N with n = n 1 + n 2 , the following diagram commutes up to isomorphism of functors.

Then the map
is an isomorphism of k[R (n1,n2) ]-modules which is functorial in V . The claim follows.
The following statement follows from Lemma 6.17 using Frobenius reciprocity.
Corollary 6.18. For n, n_1, n_2 ∈ N with n = n_1 + n_2, the following diagram, where the functor F′ is given by

commutes up to isomorphism of functors.
An immediate consequence of Corollary 6.18 is the following statement.
Corollary 6.19. For n, n_1, n_2 ∈ N with n = n_1 + n_2, the functor Ind

For n_1, n_2, ..., n_s ∈ N, let R_(n_1,n_2,...,n_s) := R_{n_1} × R_{n_2} × ··· × R_{n_s}. Then, as usual, we have the following decomposition involving the corresponding monoid algebras:

Now we are ready to discuss the bialgebra structure on G_C.

6.3.2. Multiplication. Since we have to deal with modules over k[R_n] for all n ∈ N in the course of a single proof or statement, for the sake of clarity we decorate a module over k[R_n] with a superscript n. This notational convention applies to modules over symmetric group algebras as well. For V^n ∈ k[R_n]-mod and W^m ∈ k[R_m]-mod, we have that V^n ⊗_k W^m ∈ k[R_(n,m)]-mod. Define:

Since the functor Ind is exact, (27) gives rise to a well-defined multiplication on G_C. Associativity of tensor products and also of the induction functor implies the associativity of (27). Let k_0 denote the trivial k[R_0]-module. Then [k_0] ∈ G_0(k[R_0]) is the unit with respect to this multiplication. Thus G_C becomes a unital algebra with respect to the multiplication given by (27).
6.3.3. Comultiplication. Using the identification over all n_1, n_2 ∈ N, both sides of the coassociativity condition for (28) reduce, essentially, to the computation of Res_{k[R_(n_1,n_2,n_3)]}(V^n), for n = n_1 + n_2 + n_3. Consider the map ε : G_C → k which sends the basis element [k_0] ∈ G_0(k[R_0]) to 1 ∈ k and is zero on all other basis elements. It is straightforward to check that ε is a counit for G_C, and so G_C becomes a coalgebra with respect to Δ and ε.
6.3.4. Compatibility. The vector space V_N is isomorphic to the monoid algebra C[N] of the monoid (N, +) of natural numbers. Therefore V_N inherits from C[N] the structure of a bialgebra, where the multiplication is given by the monoid operation (addition) and the value of the comultiplication on i ∈ N is

Σ_{i_1, i_2 ∈ N, i_1 + i_2 = i} i_1 ⊗ i_2.

It is well-known that V(Λ_0) is a Hopf algebra, where the multiplication and comultiplication are given by replacing R_n by S_n in (27) and in (28), respectively. As a consequence, we obtain that V(Λ_0) ⊗_C V_N is, naturally, a bialgebra.
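To make the coalgebra structure on V_N tangible, here is a small Python sketch (our own illustration; all function names are ours) of the comultiplication Δ(i) = Σ_{i_1+i_2=i} i_1 ⊗ i_2 and the counit on C[N], together with a check of one counit axiom, (ε ⊗ id)Δ = id.

```python
from collections import Counter


def comult(vec):
    """Comultiplication on V_N = C[N]: send i to the sum of i1 (x) i2 over i1 + i2 = i.

    vec is a Counter {basis element i: coefficient}; the result is a Counter
    on pairs, representing an element of V_N (x) V_N.
    """
    out = Counter()
    for i, c in vec.items():
        for i1 in range(i + 1):
            out[(i1, i - i1)] += c
    return out


def counit(vec):
    """The counit: the coefficient of the basis element 0."""
    return vec[0]


def counit_tensor_id(tensor):
    """(counit (x) id) applied to an element of V_N (x) V_N."""
    out = Counter()
    for (i1, i2), c in tensor.items():
        if i1 == 0:
            out[i2] += c
    return out
```

For any finitely supported vector, counit_tensor_id(comult(vec)) returns vec again, which is the left counit axiom.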
Next we prove that the respective multiplication and comultiplication maps are preserved under the isomorphism (26). In particular, this implies that Δ is compatible with the multiplication, and thereby G_C possesses the structure of a bialgebra.
Proof. For the comultiplication maps Δ_{V(Λ_0)} and Δ_{V_N} of V(Λ_0) and V_N, respectively, the comultiplication on V(Λ_0) ⊗_C V_N is given by

(all but finitely many summands are zero). Now, using Lemma 6.17, the left hand side of (29) can be computed as follows:

On the other hand, the right hand side of (29) can be computed as follows:

This implies (29) and we are done.
6.4. The case of generalized rook monoids. Suppose that p does not divide r. For i ∈ [n], we have X_i^{r+1} = X_i, and hence the eigenvalues of each X_i, considered as an operator on a finite-dimensional k[C_r ≀ R_n]-module, are either r-th roots of unity or 0. Similarly, the eigenvalues of each Jucys-Murphy element Y_i, as an operator on a finite-dimensional module over k[C_r ≀ R_n], lie in k_p. Using this and Proposition 6.11, we get a decomposition of every object in k[C_r ≀ R_n]-mod as in Lemma. This allows us to define the i-induction and i-restriction functors as well as the functors corresponding to the two generators of the bicyclic monoid B. Using the results for the generalized symmetric groups C_r ≀ S_n from [Tsu07] (see also [WW08] for a more general setting), one shows that the direct sum, over all n, of G_0(k[C_r ≀ R_n]) is a U(ŝl_p(C))^{⊗r} ⊗ C[B]-module isomorphic to V(Λ_0)^{⊗r} ⊗ V_N.
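The bicyclic monoid B and its shift representation are easy to experiment with. The following Python sketch is our own illustration; we fix the convention B = ⟨p, q | pq = 1⟩ with normal forms q^a p^b (the paper's choice of generators may differ). The module implemented below is the standard shift module with basis indexed by N, on which q raises and p lowers the index.

```python
def b_mult(u, v):
    """Multiply two elements of the bicyclic monoid B = < p, q | p q = 1 >.

    Every element has a unique normal form q^a p^b, encoded as the pair (a, b);
    the identity element is (0, 0).
    """
    (a, b), (c, d) = u, v
    m = min(b, c)                      # cancel p^b against q^c using p q = 1
    return (a + c - m, b + d - m)


def act(u, vec):
    """Action of q^a p^b on the shift module with basis {v_n : n in N}.

    vec is a dict {n: coefficient}; p lowers the index (annihilating v_0)
    and q raises it, so q^a p^b sends v_n to v_{n - b + a} when n >= b.
    """
    a, b = u
    return {n - b + a: c for n, c in vec.items() if n >= b}
```

One can check directly that b_mult is associative, that pq = 1 while qp is a proper idempotent (so B is not a group), and that act respects the multiplication.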
Remark 6.21. The results of Section 6 have the obvious analogues in characteristic zero (with the same proofs), where the field k_p is replaced by the ring Z of integers and, consequently, the Lie algebra ŝl_p(C) is replaced by sl_∞(C). Also, the basic representation V(Λ_0) now becomes the Fock space representation of sl_∞(C).