Universal rings of invariants

Let $K$ be an algebraically closed field of characteristic zero. Algebraic structures of a specific type (e.g. algebras or coalgebras) on a given vector space $W$ over $K$ can be encoded as points in an affine space $U(W)$. This space is equipped with a $\text{GL}(W)$ action, and two points define isomorphic structures if and only if they lie in the same orbit. This leads us to study the ring of invariants $K[U(W)]^{\text{GL}(W)}$. We describe this ring by generators and relations. We then construct combinatorially a commutative ring $K[X]$ which specializes to all rings of invariants of the form $K[U(W)]^{\text{GL}(W)}$. We show that the commutative ring $K[X]$ has the richer structure of a Hopf algebra with an additional coproduct, a grading, and an inner product, which make it into a rational PSH-algebra, generalizing a structure introduced by Zelevinsky. We finish with a detailed study of $K[X]$ in the case of an algebraic structure consisting of a single endomorphism, and show how the rings of invariants $K[U(W)]^{\text{GL}(W)}$ can be calculated explicitly from $K[X]$ in this case.


Introduction
In [Pr76] Procesi studied tuples $(T_1, \dots, T_r)$ of endomorphisms of a finite dimensional vector space up to simultaneous conjugation by studying the corresponding ring of invariants. He showed that for a field $K$ of characteristic zero the algebra of invariants $C_{d,r} = K[(T_i)_{j,k}]^{\text{GL}_d(K)}$ can be generated by traces of monomials in $T_1, \dots, T_r$, and that all the polynomial relations among these traces arise from the Cayley-Hamilton Theorem. This gives an infinite presentation for the algebra of invariants $C_{d,r}$. This algebra, however, is known to be finitely generated by a theorem of Nagata. Describing explicitly a finite presentation for this algebra when $r > 1$ is a difficult task. See [Te86], [Nak02], [ADS06], [BD08] and [Ho12] for results about such presentations for $C_{3,r}$ and $C_{2,r}$ for various values of $r$.
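The first half of Procesi's theorem is easy to check numerically: every trace of a monomial in the $T_i$ is unchanged under simultaneous conjugation. A minimal sketch (with hypothetical random data, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 3, 2
Ts = [rng.standard_normal((d, d)) for _ in range(r)]  # a tuple (T_1, T_2)
g = rng.standard_normal((d, d)) + np.eye(d) * d       # a generic invertible g

def trace_of_word(mats, word):
    """Trace of the monomial T_{w_1} T_{w_2} ... for a word of indices."""
    prod = np.eye(mats[0].shape[0])
    for i in word:
        prod = prod @ mats[i]
    return np.trace(prod)

# Simultaneous conjugation: T_i -> g T_i g^{-1}
g_inv = np.linalg.inv(g)
Ts_conj = [g @ T @ g_inv for T in Ts]

# Tr(g M g^{-1}) = Tr(M), so every trace monomial is an invariant:
for word in [(0,), (1,), (0, 1), (0, 0, 1), (0, 1, 0, 1)]:
    assert np.isclose(trace_of_word(Ts, word), trace_of_word(Ts_conj, word))
```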
The main tool used in the work of Procesi to describe the invariants is Schur-Weyl duality. This tool was used further to study algebraic structures more complicated than a vector space equipped with endomorphisms. In [DKS03] and [Me17] Datt, Kodiyalam and Sunder, and the author of this paper, applied invariant theory to the study of finite dimensional semisimple Hopf algebras. In the second paper invariant theory was also used to prove that a semisimple Hopf algebra admits at most finitely many Hopf orders over a number field. Invariant theory was also applied to other Hopf-algebra-related structures such as Hopf two-cocycles and Nichols algebras in [Me19] and [Me20-1]. See also [Me20-2] for the study of linear endomorphisms of the tensor product of vector spaces using invariant theory.
The first goal of the present paper is to generalize the above results to any type of algebraic structure based on a finite dimensional vector space over an algebraically closed field of characteristic zero. The second goal of this paper is to introduce a uniform approach for studying these rings of invariants, which does not depend on the dimension of the vector space. We will show that this results in an infinitely generated polynomial algebra which specializes to invariant rings arising in all possible dimensions. This universal invariant ring, which we will denote by $K[X]$, is further equipped with a natural pairing and a self-dual Hopf algebra structure. This kind of structure resembles in many ways the PSH-algebras introduced by Zelevinsky in [Ze81].
To give the precise details, let $K$ be an algebraically closed field of characteristic zero. We define an algebraic structure over $K$ to be a finite dimensional vector space $W$ equipped with structure tensors $x_i \in W^{p_i, q_i}$ for $i = 1, \dots, r$. So for example an algebra is described by a tensor in $W^{1,2}$, and a unit element can be thought of as a map $K \to W$, or an element in $W^{1,0}$. The tuple $((p_1, q_1), \dots, (p_r, q_r)) = ((p_i, q_i)) \in (\mathbb{N}^2)^r$ will be referred to as the type of $W$ and will be fixed throughout the paper.
The set of all possible such structures on $W$ gives rise to a vector space (or an affine space, from the algebraic geometry point of view) $U(W)$. If we are interested only in structures which satisfy some set of axioms T then we will restrict our attention to the subset $Y(W) \subseteq U(W)$ of all points which satisfy these axioms. In most cases $Y(W)$ is a Zariski closed subset of $U(W)$ (see Section 7). The group $\text{GL}(W)$ acts on $U(W)$ and on $Y(W)$, and two points in $Y(W)$ define isomorphic structures if and only if they lie in the same orbit. This leads us to study the quotient $Y(W)/\text{GL}(W)$ and its algebraic counterpart $K[Y(W)]^{\text{GL}(W)}$. From here on we will write $Y_d = Y(K^d)$ and $U_d = U(K^d)$ (see Subsection 2.3).
R3: Relations arising from the axioms. If we write $I_{T,d} := \text{Ker}(K[U_d] \to K[Y_d])$, these are the polynomials in $I_{T,d}$.

Definition 1.2 (The algebra $K[X]$, first definition). The algebra $K[X]$ is the commutative algebra generated by the symbols $p(n, \sigma, n_1, \dots, n_r)$ modulo the relations R0 and R1 of Theorem 1.1.
Notice that every element in $K[X]$ can be evaluated on every element in $X$. This is the reason for the notation $K[X]$. The second definition of $K[X]$ will be based on diagrams. A string diagram is a diagram which represents a linear map between tensor powers of the algebraic structure $W$. It is made of boxes labeled by $x_1, \dots, x_r$ and $\text{Id}_W$, and input and output strings. Some of the input strings may be connected to the output strings. We consider diagrams to be equivalent if they represent the same linear map for every algebraic structure (see Definition 4.1). A diagram with $q$ free input strings and $p$ free output strings represents a linear map from $W^{\otimes q}$ to $W^{\otimes p}$ and is said to have degree $(p, q)$. An example of a diagram of degree $(2, 1)$, representing the linear map $\text{ev}(L^{(2)}_{(12)}(x_2 \otimes x_1))$, is the following (see Subsections 2.2 and 2.4 for the relevant terminology). [Figure omitted: a diagram with boxes labeled $x_2$ and $x_1$.] A closed diagram is a diagram with no free input or output strings. In other words, it is a diagram of degree $(0, 0)$. If $\text{Di}_1$ and $\text{Di}_2$ are two diagrams then we denote by $\text{Di}_1 \star \text{Di}_2$ the diagram resulting from placing $\text{Di}_1$ to the left of $\text{Di}_2$ (see Section 4). This defines an associative multiplication on the set of all diagrams, which is also commutative on the set of closed diagrams. This leads us to the second definition of $K[X]$:

Definition 1.3. We define $K[X]$ to be the free vector space on the set of all equivalence classes of closed diagrams in which no boxes are labeled by $\text{Id}_W$. We define $K[X]^{\text{aug}}$ to be the free vector space on the set of all equivalence classes of closed diagrams. The $\star$-product of diagrams equips both these vector spaces with an algebra structure.
We will show in Section 5 that the two definitions of $K[X]$ are indeed equivalent. The reason for the notation $K[X]^{\text{aug}}$ is that we refer to diagrams which contain $\text{Id}_W$ as augmented, see Definition 5.5. Closed diagrams can be thought of as representing linear maps from $W^{\otimes 0} = K$ to itself, or in other words, as scalars. Since the product corresponds to taking disjoint unions and every diagram splits uniquely as the disjoint union of its connected components, we have the following result about the structure of $K[X]$ (see Corollary 5.9):

Proposition 1.4. The algebra $K[X]$ is a polynomial algebra on the set of all connected diagrams which do not contain $\text{Id}_W$.
Since the equivalence relation for diagrams becomes more complicated once $\text{Id}_W$ is involved, we should be more careful about a similar statement for $K[X]^{\text{aug}}$. We will show that in fact $K[X]^{\text{aug}} \cong K[X] \otimes K[D]$, where $D$ is an extra variable corresponding to the dimension of $W$ (see Corollary 5.9).
Beyond giving a uniform description of the algebras $K[Y_d]$ and $K[U_d]$, the algebra $K[X]$ has the advantage of having a much richer structure than just that of a commutative algebra. We will show in Section 6 that the operations of forming direct sums and tensor products of structures induce two coproducts $\Delta, \Delta^{\otimes} : K[X] \to K[X] \otimes K[X]$. We will prove the following:

Theorem 1.5. The coproduct $\Delta$ equips $K[X]$ with a Hopf algebra structure. The connected diagrams are primitive with respect to this coproduct. That is, they satisfy the equations $\Delta(p) = p \otimes 1 + 1 \otimes p$ and $\epsilon(p) = 0$. The coproduct $\Delta^{\otimes}$ equips $K[X]$ with a bialgebra structure. All the diagrams are group-like elements with respect to this coproduct. That is, they satisfy the equations $\Delta^{\otimes}(p) = p \otimes p$ and $\epsilon(p) = 1$.
The algebra $K[X]$ is graded by $\mathbb{N}^r$, where $(K[X])_{n_1,\dots,n_r}$ is spanned by the diagrams corresponding to basic invariants of the form $p(n, \sigma, n_1, \dots, n_r)$ where $n = \sum_i n_i p_i = \sum_i n_i q_i$ and $\sigma \in S_n$ (if $\sum_i n_i p_i \neq \sum_i n_i q_i$ then $(K[X])_{n_1,\dots,n_r} = 0$). The multiplication and the coproduct $\Delta$ respect this grading (the other comultiplication $\Delta^{\otimes}$ does not). Thus, $K[X]$ has the structure of a graded Hopf algebra. We will show that if $\sum_i n_i p_i = \sum_i n_i q_i = n$ then $(K[X])_{n_1,\dots,n_r}$ is equipped with an inner product $\langle -, - \rangle$ arising from the natural inner product on the group algebra $KS_n$. We will then show that with respect to this inner product, the multiplication is adjoint to the comultiplication. In other words, if we use the Sweedler notation $\Delta(z) = z_1 \otimes z_2$ then $\langle z, xy \rangle = \langle z_1, x \rangle \langle z_2, y \rangle$.
Zelevinsky studied a similar family of Hopf algebras which he called positive self-adjoint Hopf algebras (or PSH-algebras). PSH-algebras are defined over $\mathbb{Z}$ and have a very rigid structure: there is only one "simple" (or universal) PSH-algebra, which encodes the representation theory of all the symmetric groups, and every other PSH-algebra decomposes as a tensor product of copies of this universal PSH-algebra. Our Hopf algebras, however, are defined over a field of characteristic zero and not over $\mathbb{Z}$. In Section 8 we will define rational PSH-algebras as a certain generalization of PSH-algebras. We will then show the following:

Theorem 1.6. The Hopf algebra $K[X]$ is a rational PSH-algebra.
In Section 10 we will study in detail the algebra $K[X]$ for the structure of a vector space with a single endomorphism. We will show that in this case the algebra $K[X]$ is the extension of scalars from $\mathbb{Z}$ to $K$ of the simple PSH-algebra. This enables us to give a very clean description of the ideals $I_d := \text{Ker}(K[X] \to K[U_d]^{\text{GL}_d(K)})$. Indeed, we will show that $K[X] \cong K[y_1, y_2, \dots]$ and that $I_d = (y_{d+1}, y_{d+2}, \dots)$. This recovers the well known result that the algebra of invariants $K[M_d(K)]^{\text{GL}_d(K)}$ is a polynomial ring in $d$ variables. Zelevinsky introduced PSH-algebras in order to study representations of families of finite groups, such as $S_n$ or $\text{GL}_n(\mathbb{F}_q)$. This brings us to the following question:

Question 1.7. Does the rational PSH-algebra $K[X]$ have a natural basis which defines a PSH-algebra over $\mathbb{Z}$? In other words, can we always find inside $K[X]$ a PSH-algebra $R$ such that $K[X] \cong R \otimes_{\mathbb{Z}} K$? Moreover, is there a representation theoretic interpretation for this PSH-algebra?
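The phenomenon behind $I_d = (y_{d+1}, y_{d+2}, \dots)$ is that for a $d \times d$ matrix the higher power traces are polynomials in the first $d$ of them, via the Cayley-Hamilton theorem. A quick numerical sanity check for $d = 2$ (a sketch, not the paper's argument):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 2))

p1 = np.trace(T)
p2 = np.trace(T @ T)
# For d = 2, Cayley-Hamilton gives T^2 = p1*T - det(T)*I, with
# det(T) = (p1^2 - p2)/2.  Multiplying by T and taking traces
# expresses Tr(T^3) as a polynomial in p1 and p2:
det_T = (p1**2 - p2) / 2
p3_from_relation = p1 * p2 - det_T * p1
assert np.isclose(np.trace(T @ T @ T), p3_from_relation)
```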
In Section 9 we will give a formula for the Hilbert function of the multi-graded algebra $K[X]$ and its finitely generated quotients in terms of the Littlewood-Richardson coefficients and the Kronecker coefficients. A similar concrete calculation was done for the invariant ring of an endomorphism of the tensor product of two vector spaces of dimension 2, see [Me20-2]. While the formula we get in Section 9 seems to be quite complicated, it establishes a connection between our rings of invariants and central themes in the representation theory of the symmetric groups, such as the Kronecker and the Littlewood-Richardson coefficients.

Geometric invariant theory.
Recall that a linear algebraic group $\Gamma$ is called reductive if the category of rational representations of $\Gamma$ is semisimple. In other words, every short exact sequence of rational $\Gamma$-representations splits. In particular, this means that for a subrepresentation $V' \subseteq V$ we can naturally identify $(V/V')^{\Gamma}$ and $V^{\Gamma}/(V')^{\Gamma}$. In this paper we will focus on the group $\Gamma = \text{GL}_d(K)$. The field $K$ is algebraically closed and of characteristic zero, and $\text{GL}_d(K)$ is thus reductive. Assume now that $\Gamma$ acts on an affine variety $Y$. Geometric invariant theory deals with studying possible quotients of $Y$ by the action of $\Gamma$ by considering the corresponding action of $\Gamma$ on the coordinate ring $K[Y]$. We summarize in the following theorem well known results from geometric invariant theory which we will use in this paper:

Theorem 2.1. Let $Y$ be an affine variety, and let $\Gamma$ be a reductive group acting on $Y$.
(1) If $W_1$ and $W_2$ are two disjoint closed $\Gamma$-stable subsets of $Y$ then there is $f \in K[Y]^{\Gamma}$ such that $f(W_1) = 0$ and $f(W_2) = 1$.
(2) The ring of invariants $K[Y]^{\Gamma}$ is finitely generated, and therefore its maximal spectrum $Z$ is an affine variety.
(3) The inclusion $K[Y]^{\Gamma} \subseteq K[Y]$ induces a surjective morphism $\pi : Y \to Z$.
(4) The points in $Z$ are in one to one correspondence with the closed $\Gamma$-orbits in $Y$.

Proof. The first statement is Lemma 3.3 in [Ne78]. The second statement is the celebrated theorem of Nagata, see Theorem 3.4 in [Ne78]. The third and fourth statements follow from Theorem 3.5 in [Ne78]. The correspondence between closed orbits in $Y$ and points in $Z$ is induced by $\Gamma \cdot y \mapsto \pi(\Gamma \cdot y)$.

2.2.
Linear algebra and natural identifications. As before, assume that $K$ is an algebraically closed field of characteristic zero. Let $W$ be a finite dimensional $K$-vector space. We write $W^{p,q} := W^{\otimes p} \otimes (W^*)^{\otimes q}$.
Notice that in this setting $\text{ev}_n : W^{n,n} \to W^{0,0} = K$ is the map $T \mapsto \text{Tr}(T)$.
If $U$ is any finite dimensional vector space then $T(U)$ denotes the tensor algebra on $U$ and $K[U]$ the algebra of polynomial functions on $U$. Both algebras are graded, and we have a natural identification $K[U]_n \cong (T(U^*)_n)_{S_n}$, where the action of $S_n$ on $T(U^*)_n = (U^*)^{\otimes n}$ is given by permuting the tensor factors, and where for a group $G$ and a $G$-representation $V$, we write $V_G$ for the $G$-coinvariants, the quotient of $V$ by the subspace spanned by $\{v - gv : g \in G, v \in V\}$. The following lemma will be useful for comparing invariants and coinvariants:

Lemma 2.2. (1) Let $G$ be a reductive group acting on a vector space $V$. The composition of the natural inclusion $V^G \to V$ and the natural surjection $V \to V_G$ gives an isomorphism from the $G$-invariants to the $G$-coinvariants of $V$.
(2) Let $G$ and $V$ be as in the previous part, and let $H$ be another reductive group acting on $V$ in such a way that $h(gv) = g(hv)$ for $h \in H$, $g \in G$ and $v \in V$. Then the group $G \times H$ acts on $V$, and the invariants and coinvariants for $G$, $H$ and $G \times H$ are related by natural isomorphisms.

Proof.
For the first assertion we use the fact that the reductivity of $G$ implies that the inclusion $V^G \to V$ splits. We can thus write $V = V^G \oplus V'$ for some $G$-subrepresentation $V'$. It holds that $(V')^G = 0$ and thus $(V')_G = 0$, since otherwise we would have a surjection from $V'$ onto the representation $(V')_G$ with trivial $G$-action, which must split, contradicting the fact that $(V')^G = 0$. Taking $G$-coinvariants of $V$ we get $V_G = (V^G)_G \oplus (V')_G = V^G$. For the second assertion, we notice that under the assumptions we made, $H$ acts on $V^G$ and $G$ acts on $V^H$ in a natural way. Moreover, the isomorphism $V^H \to V \to V_H$ is an isomorphism of $G$-representations. Thus, applying the first assertion to each group in turn yields the desired natural isomorphisms.

Algebraic structures.
Recall from the introduction that an algebraic structure of type $((p_i, q_i))$ is given by a finite dimensional vector space $W$ and a collection of structure tensors $x_i \in W^{p_i, q_i}$. We will also refer to $(W, (x_i))$ as an algebraic structure in the sequel. A basis $B = \{e_i\}$ for $W$ gives us a dual basis $\{f_i\}$ for $W^*$. The tensor products of these bases then give us bases for $W^{p,q}$ for every $(p, q) \in \mathbb{N}^2$, and we can write each $x_i$ in terms of these bases. The resulting scalars $a^i_{j_1,\dots,j_{p_i},k_1,\dots,k_{q_i}}$ will be referred to as the structure constants of $x_i$ with respect to the basis $B$. We will refer to them just as $a^{\bullet}_{\bullet}$ to ease notation. The group $\text{GL}(W)$ acts on $U(W)$ in a natural way. The following result holds:

Lemma 2.3. The $\text{GL}(W)$-orbits in $U(W)$ correspond exactly to the isomorphism classes of structures of type $((p_i, q_i))$ on $W$.

Proof. Indeed, two sets of structure tensors $(x_i)$ and $(y_i)$ define isomorphic structures if and only if there is an invertible linear map $g \in \text{GL}(W)$ that intertwines $x_i$ and $y_i$ for every $i$; in other words, if and only if for every $i$ the corresponding diagram commutes. But this is the same as saying that $g(x_i) = y_i$ for every $i$. See also Lemma 3.2 in [Me19] for a particular instance of this.
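The $\text{GL}(W)$-action on structure tensors can be made concrete with structure constants. The sketch below (our conventions, not taken verbatim from the paper) acts on the multiplication tensor of the algebra $K[t]/(t^2)$, a type-$(1,2)$ tensor, and checks that associativity is preserved along the orbit, so isomorphic points satisfy the same axioms:

```python
import numpy as np

# Structure constants m[k, i, j] of K[t]/(t^2) in the basis (1, t):
# the type-(1,2) tensor of the multiplication map W ⊗ W -> W.
m = np.zeros((2, 2, 2))
m[0, 0, 0] = 1  # 1 * 1 = 1
m[1, 0, 1] = 1  # 1 * t = t
m[1, 1, 0] = 1  # t * 1 = t
# (t * t = 0, so no further entries)

def act(g, m):
    """The GL(W)-action on a (1,2)-tensor: g . m = g ∘ m ∘ (g^{-1} ⊗ g^{-1})."""
    g_inv = np.linalg.inv(g)
    return np.einsum('ka,abc,bi,cj->kij', g, m, g_inv, g_inv)

def is_associative(m):
    lhs = np.einsum('aij,lak->lijk', m, m)  # (x*y)*z
    rhs = np.einsum('ajk,lia->lijk', m, m)  # x*(y*z)
    return np.allclose(lhs, rhs)

g = np.array([[1.0, 2.0], [0.0, 3.0]])  # an invertible change of basis
assert is_associative(m) and is_associative(act(g, m))
```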
In most cases we will be interested only in algebraic structures which satisfy a certain set of axioms T. These axioms translate into polynomial equations in the structure constants $a^{\bullet}_{\bullet}$, which must hold for every $i, j, k, b$.
We will study in detail the defining ideal of Y (W ) in Section 7. In the terminology of the introduction we have Y d = Y (K d ) and U d = U(K d ).

2.4.
Schur-Weyl duality. Schur-Weyl duality describes the link between invariants with respect to general linear groups and representations of the symmetric groups. We recall here the details. For a finite dimensional vector space $V$ and $n \in \mathbb{N}$, $V^{\otimes n}$ is a representation of $S_n$ in a natural way. A permutation $\sigma \in S_n$ acts via the formula $\sigma \cdot (v_1 \otimes \cdots \otimes v_n) = v_{\sigma^{-1}(1)} \otimes \cdots \otimes v_{\sigma^{-1}(n)}$. We denote the resulting linear map $V^{\otimes n} \to V^{\otimes n}$ by $L^{(n)}_{\sigma} \in \text{End}(V^{\otimes n})$. This map commutes with the natural diagonal action of $\text{GL}(V)$. To state Schur-Weyl duality, we first recall that a partition $\lambda$ of a natural number $n$ is a non-increasing sequence $n_1 \geq n_2 \geq \cdots \geq n_r > 0$ such that $\sum_i n_i = n$. We write $\lambda \vdash n$. Every partition can be thought of geometrically as a Young diagram. There is a one-to-one correspondence between partitions of $n$ and irreducible representations of $S_n$, which we shall write here as $\lambda \leftrightarrow S^{\lambda}$. Following [Sa01] we call $S^{\lambda}$ the Specht module corresponding to $\lambda$. We will write $r(\lambda)$ for the number of non-zero rows in the partition $\lambda$. Schur-Weyl duality is the following statement (see also the discussion in I.1 and Theorem 4.3):
Theorem 2.4. (1) The linear map $\Phi_V : KS_n \to \text{End}_{\text{GL}(V)}(V^{\otimes n})$, $\sigma \mapsto L^{(n)}_{\sigma}$, is a surjective ring homomorphism.
(2) Consider the Wedderburn decomposition $KS_n \cong \bigoplus_{\lambda \vdash n} \text{End}(S^{\lambda})$ of the group algebra of $S_n$. The kernel of $\Phi_V$ is the sum of the factors $\text{End}(S^{\lambda})$ with $r(\lambda) > \dim(V)$, where $r(\lambda)$ is the number of non-zero rows in $\lambda$. As a result, we have an isomorphism of algebras $\text{End}_{\text{GL}(V)}(V^{\otimes n}) \cong \bigoplus_{r(\lambda) \leq \dim(V)} \text{End}(S^{\lambda})$.

Remark 2.5. Another way to describe the kernel of $\Phi_V$ is the following: If $\dim(V) \geq n$ then $\Phi_V$ is injective, and if $\dim(V) < n$ then $\text{Ker}(\Phi_V)$ is the two-sided ideal of $KS_n$ generated by the antisymmetrizing idempotent $\frac{1}{(\dim(V)+1)!} \sum_{\sigma \in S_{\dim(V)+1}} \text{sgn}(\sigma)\,\sigma$.
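The starting point of Schur-Weyl duality, that $L_{\sigma}$ commutes with the diagonal $\text{GL}(V)$-action, can be verified directly on tensors. A small numerical sketch (our index conventions, with a random $g$ and tensor):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 2, 3
g = rng.standard_normal((d, d))
t = rng.standard_normal((d,) * n)  # a tensor in V ⊗ V ⊗ V

def L(sigma, t):
    """L_σ permutes the tensor factors; sigma is a 0-indexed tuple k -> σ(k)."""
    return np.transpose(t, axes=sigma)

def g_diag(g, t):
    """The diagonal action g ⊗ g ⊗ ... ⊗ g, applying g along every axis."""
    for axis in range(t.ndim):
        t = np.tensordot(g, t, axes=([1], [axis]))
        t = np.moveaxis(t, 0, axis)
    return t

sigma = (1, 2, 0)  # a 3-cycle
# L_σ is GL(V)-equivariant: it commutes with g ⊗ g ⊗ g.
assert np.allclose(L(sigma, g_diag(g, t)), g_diag(g, L(sigma, t)))
```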

2.5. Block decomposition permutations.
For two integers $a \leq b$ we use the notation $[a, b] = \{a, a+1, \dots, b\}$. Since we will not use the real numbers in this paper this will cause no confusion with intervals in the real line. An unordered partition $\lambda = (n_1, \dots, n_k)$ of $n \in \mathbb{N}$ is a sequence of integers which satisfies $\sum_i n_i = n$ (for regular partitions we also require that $n_1 \geq n_2 \geq \cdots \geq n_k$). We write $S_{\lambda} = S_{n_1,\dots,n_k} := S_{n_1} \times \cdots \times S_{n_k}$ and $I^{\lambda}_i := [n_1 + \cdots + n_{i-1} + 1, \; n_1 + \cdots + n_i]$. The unordered partition $\lambda$ gives rise to two maps $\Pi_{\lambda} : S_{\lambda} \to S_n$ and $\Omega_{\lambda} : S_k \to S_n$ which will be used in this paper.
Definition 2.6. We define $\Pi_{\lambda} : S_{\lambda} \to S_n$ to be the group embedding for which $\Pi_{\lambda}(S_{n_i})$ permutes the elements of $I^{\lambda}_i$. In places where it will cause no confusion we will identify $(\sigma_i) \in S_{\lambda}$ with its image under $\Pi_{\lambda}$ in $S_n$.
Next we give the definition of block permutations.

Definition 2.7. We define $\Omega_{\lambda} : S_k \to S_n$ to be the map which sends $\sigma \in S_k$ to the permutation that rearranges the blocks $I^{\lambda}_1, \dots, I^{\lambda}_k$ according to $\sigma$, keeping the inner order of the elements of each block.

For example, for $\lambda = (2, 2, 1)$ we get in cycle notation $\Omega_{\lambda}((13)) = (14325)$, which already shows that $\Omega_{\lambda}$ is in general not a group homomorphism, as it sends an element of order 2 to an element of order 5.
In fact $\Omega_{\lambda}$ is a group homomorphism if and only if $n_1 = n_2 = \cdots = n_k$. In this special case $\Omega_{\lambda}$ can be written explicitly. Write first $m = n_1 = \cdots = n_k$ and $\beta : [1, k] \times [1, m] \to [1, n]$, $\beta(i, j) = (i-1)m + j$. The map $\beta$ is a bijection and it induces a natural group homomorphism $\overline{\beta} : S_k \times S_m \to S_n$ given by $\overline{\beta}(\sigma, \tau)(x) = \beta(\sigma(i), \tau(j))$, where $\beta^{-1}(x) = (i, j)$. The map $\Omega_{\lambda}$ is then given by the composition $S_k \to S_k \times S_m \to S_n$, where the first map is given by $\sigma \mapsto (\sigma, \text{Id})$.
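The block permutation $\Omega_{\lambda}$ is easy to implement directly. The sketch below uses our own indexing conventions; $\lambda = (2, 2, 1)$ is one choice of unequal block sizes reproducing the example $\Omega_{\lambda}((13)) = (14325)$ above, and the code also exhibits the failure of $\Omega_{\lambda}$ to be a homomorphism:

```python
def omega(lam, sigma):
    """Ω_λ(σ): rearrange the consecutive blocks I_1,...,I_k of sizes lam
    according to σ, keeping the inner order of each block.
    sigma is a dict {i: σ(i)} on {1,...,k}; returns a dict on {1,...,n}."""
    k = len(lam)
    inv = {sigma[i]: i for i in sigma}
    # slot p (the p-th block position after rearranging) has size lam[σ^{-1}(p)]
    offsets, off = [], 0
    for p in range(1, k + 1):
        offsets.append(off)
        off += lam[inv[p] - 1]
    result, start = {}, 1
    for i in range(1, k + 1):
        for j in range(lam[i - 1]):
            # the j-th element of block i lands in slot σ(i)
            result[start + j] = offsets[sigma[i] - 1] + j + 1
        start += lam[i - 1]
    return result

# λ = (2, 2, 1), σ = (13): gives the 5-cycle (14325)
w = omega((2, 2, 1), {1: 3, 2: 2, 3: 1})
assert w == {1: 4, 2: 5, 3: 2, 4: 3, 5: 1}
# Ω_λ is not a homomorphism here: σ has order 2, but Ω_λ(σ)^2 ≠ Id.
w2 = {x: w[w[x]] for x in w}
assert w2 != {x: x for x in w}
```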
Remark 2.8. Up to conjugation in $S_n$ this map is also given by the composition $S_k \to (S_k)^m \to S_n$, where the first map is the diagonal embedding and the second is the embedding $\Pi$. This will be used in Section 9 to derive formulas for the Hilbert function of the universal ring of invariants.
Definition 2.9. We write $\alpha^{(p_i)}_{(n_i)} : S_{n_1} \times \cdots \times S_{n_r} \to S_{\sum_i n_i p_i}$ for the following group homomorphism: apply $\Omega_{(p_i^{n_i})}$ to each factor $S_{n_i}$ and combine the resulting permutations via the embedding $\Pi$.

2.6.
Littlewood-Richardson and Kronecker coefficients. When $n = a + b$ we have an embedding $\Pi_{(a,b)} : S_a \times S_b \to S_n$. We can restrict representations of $S_n$ along $\Pi_{(a,b)}$ and write $[\text{Res}_{S_a \times S_b} S^{\lambda}] = \sum_{\lambda_a \vdash a, \, \lambda_b \vdash b} c^{\lambda}_{\lambda_a, \lambda_b} [S^{\lambda_a} \otimes S^{\lambda_b}]$, where $[V]$ stands for the class of the representation $V$ in the relevant Grothendieck group, and where we use the identification $K_0(\text{Rep}(S_a \times S_b)) \cong K_0(\text{Rep}(S_a)) \otimes K_0(\text{Rep}(S_b))$. The coefficients $c^{\lambda}_{\lambda_a, \lambda_b}$ are called the Littlewood-Richardson coefficients. Following [Me20-2] we will give the following generalization when the unordered partition which appears in $\Pi$ has more than two components.
Definition 2.10. Let $n = n_1 + n_2 + \cdots + n_k$, and let $\Pi_{(n_1,\dots,n_k)} : S_{n_1} \times \cdots \times S_{n_k} \to S_n$ be the group homomorphism defined in 2.5. We write $[\text{Res}_{S_{n_1} \times \cdots \times S_{n_k}} S^{\lambda}] = \sum c^{\lambda}_{\lambda_1, \dots, \lambda_k} [S^{\lambda_1} \otimes \cdots \otimes S^{\lambda_k}]$.

For the Kronecker coefficients, let $\lambda$ and $\mu$ be two partitions of $n$. The tensor product of the Specht modules $S^{\lambda} \otimes S^{\mu}$ with the diagonal $S_n$-action splits as a direct sum of simple $S_n$-representations and we can write $S^{\lambda} \otimes S^{\mu} \cong \bigoplus_{\nu \vdash n} g(\lambda, \mu, \nu) \, S^{\nu}$. The natural numbers $g(\lambda, \mu, \nu)$ are known as the Kronecker coefficients. For more on Kronecker coefficients and Littlewood-Richardson coefficients see [BVO15]. Since all the representations of $S_n$ are self-dual, the Kronecker coefficients are also given by the formula $g(\lambda, \mu, \nu) = \dim_K (S^{\lambda} \otimes S^{\mu} \otimes S^{\nu})^{S_n}$. If $\lambda_1, \dots, \lambda_k$ are partitions of $n$ we define the iterated Kronecker coefficients to be $g(\lambda_1, \dots, \lambda_k) = \dim_K (S^{\lambda_1} \otimes \cdots \otimes S^{\lambda_k})^{S_n}$. Another way to describe the iterated Kronecker coefficients is the following: if we write $\text{diag} : S_n \to (S_n)^k$ for the diagonal embedding, then $[\text{Res}_{\text{diag}} (S^{\lambda_1} \otimes \cdots \otimes S^{\lambda_k})] = \sum_{\nu \vdash n} g(\lambda_1, \dots, \lambda_k, \nu) [S^{\nu}]$ in the relevant Grothendieck rings. We will use this identification in Section 9 in calculating the Hilbert functions of $K[X]$.
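The formula $g(\lambda_1, \dots, \lambda_k) = \dim$ of invariants can be computed directly from characters for small $n$: the dimension of the invariant subspace is the inner product of the product character with the trivial character. A sketch for $S_3$, using its standard (hardcoded) character table:

```python
# Character table of S_3: rows = partitions λ, columns = conjugacy classes
# [1,1,1], [2,1], [3], with class sizes 1, 3, 2.
class_sizes = [1, 3, 2]
chars = {
    (3,):      [1,  1,  1],   # trivial
    (2, 1):    [2,  0, -1],   # standard
    (1, 1, 1): [1, -1,  1],   # sign
}

def kron(*lams):
    """Iterated Kronecker coefficient g(λ_1,...,λ_k): the dimension of the
    S_3-invariants in S^{λ_1} ⊗ ... ⊗ S^{λ_k}, computed via characters."""
    order = sum(class_sizes)  # |S_3| = 6
    total = 0
    for c, size in enumerate(class_sizes):
        term = size
        for lam in lams:
            term *= chars[lam][c]
        total += term
    return total // order

assert kron((2, 1), (2, 1), (2, 1)) == 1
assert kron((2, 1), (2, 1)) == 1          # trivial appears once in S^λ ⊗ S^λ
assert kron((3,), (1, 1, 1), (1, 1, 1)) == 1
```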

2.7.
Commutative bialgebras and Hopf algebras. The $K$-algebras which we will consider in this paper are commutative and graded by the monoid $\mathbb{N}^r$. Recall that such an algebra $A$ is said to be a bialgebra if it is equipped with maps $\epsilon : A \to K$ and $\Delta : A \to A \otimes A$ such that $(A, \Delta, \epsilon)$ is a coalgebra, which means that the dual axioms to those of a unital algebra are satisfied, and such that $\epsilon$ and $\Delta$ are algebra maps. This bialgebra is said to be graded if $\Delta$ and $\epsilon$ preserve the grading, where it is understood that $K$ has degree $(0, 0, \dots, 0)$ and $A_{n_1,\dots,n_r} \otimes A_{m_1,\dots,m_r}$ has degree $(n_1 + m_1, \dots, n_r + m_r)$. This means in particular that $\epsilon(A_{n_1,\dots,n_r}) = 0$ unless $(n_1, \dots, n_r) = (0, 0, \dots, 0)$. A bialgebra is said to be a Hopf algebra if it admits an antipode. This is a linear map $S : A \to A$ which satisfies $m(S \otimes \text{Id})\Delta = m(\text{Id} \otimes S)\Delta = u\epsilon$, where $m$ and $u$ are the multiplication and unit of $A$. The antipode $S$, if it exists, can be understood as the inverse of $\text{Id}_A \in \text{Hom}_K(A, A)$ under the convolution product induced by $\Delta$ and $m$.
A group-like element in a bialgebra A is an element g ∈ A which satisfies ∆(g) = g ⊗ g and ǫ(g) = 1. The set of group-like elements in A forms a monoid which we denote by G(A). In case A is a Hopf algebra this monoid is in fact a group, where the inverse of g ∈ G(A) is S(g).
A primitive element in a bialgebra $A$ is an element $x \in A$ which satisfies the equations $\Delta(x) = x \otimes 1 + 1 \otimes x$ and $\epsilon(x) = 0$. The set of primitive elements in $A$ is a $K$-subspace which we will denote by $P(A)$. All the bialgebras we will encounter in this paper will be either generated by group-like elements or by primitive elements. For more on Hopf algebras see [Sw69], in particular Chapters III and IV.
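On a polynomial algebra with a primitive generator, the coproduct is forced on all of $K[x]$: $\Delta(x^n) = \sum_k \binom{n}{k} x^k \otimes x^{n-k}$, and multiplicativity of $\Delta$ on monomials is exactly the Vandermonde identity. A small stdlib-only sketch, encoding $A \otimes A$ as dicts on the basis $x^i \otimes x^j$ (our encoding, for illustration only):

```python
from math import comb
from collections import defaultdict

def delta_monomial(n):
    """Δ(x^n) for a primitive x: Σ_k binom(n,k) x^k ⊗ x^{n-k}."""
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def tensor_mult(u, v):
    """Multiply two elements of A ⊗ A, represented as {(i, j): coeff}."""
    out = defaultdict(int)
    for (a1, b1), c1 in u.items():
        for (a2, b2), c2 in v.items():
            out[(a1 + a2, b1 + b2)] += c1 * c2
    return dict(out)

# Δ is an algebra map on monomials: Δ(x^a)Δ(x^b) = Δ(x^{a+b}).
a, b = 3, 4
assert tensor_mult(delta_monomial(a), delta_monomial(b)) == delta_monomial(a + b)
```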
We will write U = U(W ) when W is fixed.
We write an element in $U$ as $(x_i)_{i=1}^r$. We thus think of points of $U$ as possible algebraic structures of type $((p_i, q_i))$ on $W$. Write $U_{(i)} = W^{p_i, q_i}$. Then $U = \bigoplus_i U_{(i)}$ and we have a natural isomorphism $K[U] \cong K[U_{(1)}] \otimes \cdots \otimes K[U_{(r)}]$. For every $i$ the degree of polynomials gives a grading on $K[U_{(i)}]$ by $\mathbb{N}$. This gives a grading on $K[U]$ by $\mathbb{N}^r$. Using the isomorphism of Subsection 2.2 we have $K[U]_{n_1,\dots,n_r} \cong (U_{n_1,\dots,n_r})_{S_{n_1} \times \cdots \times S_{n_r}}$, where $U_{n_1,\dots,n_r} := (U_{(1)}^*)^{\otimes n_1} \otimes \cdots \otimes (U_{(r)}^*)^{\otimes n_r}$. This vector space is isomorphic to $W^{n', n}$ where $n' = \sum_i q_i n_i$ and $n = \sum_i n_i p_i$.
We begin by studying the action of $S_{n_1,\dots,n_r} = S_{n_1} \times \cdots \times S_{n_r}$ on this vector space. For this, it will be enough to study the action of $S_n$ on $(V^*)^{\otimes n}$ where $V = W^{p,q}$. The action of $\sigma \in S_n$ is given by permuting the tensor factors: $\sigma \cdot (t_1 \otimes \cdots \otimes t_n) = t_{\sigma^{-1}(1)} \otimes \cdots \otimes t_{\sigma^{-1}(n)}$. If every $t_i$ is a basic tensor of the form $t_i = w_{i1} \otimes \cdots \otimes w_{iq} \otimes f_{i1} \otimes \cdots \otimes f_{ip}$, then we see that after applying the natural isomorphism $(V^*)^{\otimes n} \cong W^{nq, np}$ given by grouping all the $W$ tensor factors before all the $W^*$ tensor factors, the permutation $\sigma \in S_n$ permutes the $W$ factors in blocks of size $q$ and the $W^*$ factors in blocks of size $p$. Identifying $W^{nq, np}$ with $\text{Hom}_K(W^{\otimes np}, W^{\otimes nq})$, this is the same as $T \mapsto L^{(nq)}_{\Omega_{(q^n)}(\sigma)} \circ T \circ (L^{(np)}_{\Omega_{(p^n)}(\sigma)})^{-1}$, where we use the terminology of Definition 2.7.
Going now back to the general case, the permutations $\Omega_{(p_i^{n_i})}(\sigma_i)$ glue together to give the permutation $\alpha^{(p_i)}_{(n_i)}$, and similarly when we replace $(p_i)$ by $(q_i)$, see Definition 2.9. The conclusion of this is the following: write $n = \sum_i p_i n_i$ and $n' = \sum_i q_i n_i$.
Lemma 3.1. The action of $S_{n_1,\dots,n_r}$ on $U_{n_1,\dots,n_r}$ is given by the formula $(\sigma_i) \cdot T = L^{(n')}_{\alpha^{(q_i)}_{(n_i)}((\sigma_i))} \circ T \circ (L^{(n)}_{\alpha^{(p_i)}_{(n_i)}((\sigma_i))})^{-1}$, under the identification $U_{n_1,\dots,n_r} \cong \text{Hom}_K(W^{\otimes n}, W^{\otimes n'})$.

The action of $\Gamma$ on $K[U]$ and the algebra of invariants. Since the action of $S_{n_1} \times \cdots \times S_{n_r}$ commutes with the action of $\Gamma = \text{GL}(W)$, and since finite groups are reductive in characteristic zero, Lemma 2.2 gives $(K[U]_{n_1,\dots,n_r})^{\Gamma} \cong ((U_{n_1,\dots,n_r})^{\Gamma})_{S_{n_1} \times \cdots \times S_{n_r}}$. We have the following isomorphism of $\Gamma$-representations: $U_{n_1,\dots,n_r} \cong W^{n', n}$, where $n' = \sum_i n_i q_i$ and $n = \sum_i n_i p_i$. Assume that $n = n'$ (otherwise there are no nonzero invariants). By Theorem 2.4 we get the following description of $(U_{n_1,\dots,n_r})^{\Gamma}$: it is spanned by the maps $(L^{(n)}_{\sigma})_{\sigma \in S_n}$, where we identify $U_{n_1,\dots,n_r} \cong W^{n,n} \cong \text{End}_K(W^{\otimes n})$. Moreover, the linear relations between the elements $L^{(n)}_{\sigma}$ are described by Theorem 2.4 and Remark 2.5. By Lemma 3.1 the action of $(\sigma_i) \in S_{n_1,\dots,n_r}$ on $(U_{n_1,\dots,n_r})^{\Gamma}$ is given by conjugation by the permutations of Lemma 3.1. We can now describe explicitly the space $(K[U]_{n_1,\dots,n_r})^{\Gamma}$: by following the duality isomorphisms and using Equation 2.2, we see that the image of the element $L^{(n)}_{\sigma}$ is a polynomial which we denote by $p(n, \sigma, n_1, \dots, n_r)$. In the sequel we will evaluate these polynomials on algebraic structures of different dimensions. When it will be necessary to indicate the specific vector space we will also write $p(n, \sigma, n_1, \dots, n_r)(W, (x_i))$. This discussion can be summarized in the following proposition:

Proposition 3.2. The space $(K[U]_{n_1,\dots,n_r})^{\Gamma}$ is spanned by the polynomials $p(n, \sigma, n_1, \dots, n_r)$ for $\sigma \in S_n$. The linear relations between these polynomials are spanned by two types of relations, described in Theorem 1.1.

Definition 3.3. We call the polynomials $p(n, \sigma, n_1, \dots, n_r)$ the basic polynomial invariants of multi-degree $(n_1, \dots, n_r)$.
Proof of Theorem 1.1. Proposition 3.2 gives a linear description of the ring of invariants $K[U_d]^{\text{GL}_d(K)}$. The product of two basic invariants is given in Proposition 5.10 and follows from the fact that if $T_i : W_i \to W_i$ for $i = 1, 2$ then $\text{Tr}(T_1 \otimes T_2) = \text{Tr}(T_1)\text{Tr}(T_2)$. For relations of type R3, we write $I_{T,d} := \text{Ker}(K[U_d] \to K[Y_d])$. We thus have a short exact sequence $0 \to I_{T,d} \to K[U_d] \to K[Y_d] \to 0$, which by the reductivity of $\text{GL}_d(K)$ gives the short exact sequence $0 \to I_{T,d}^{\text{GL}_d(K)} \to K[U_d]^{\text{GL}_d(K)} \to K[Y_d]^{\text{GL}_d(K)} \to 0$. This finishes the proof of Theorem 1.1. We will give in Section 7 a uniform description of the ideals $I_{T,d}$.

Recall that $X = \coprod_{d \geq 0} X_d$, where $X_d$ is the set of algebraic structures of type $((p_i, q_i))$ on $K^d$. For every $d$, $X_d$ has the structure of an affine variety, described in the beginning of Subsection 3.1. The first definition of $K[X]$ (Definition 1.2) gives us a natural surjective homomorphism $\Phi_d : K[X] \to K[U_d]^{\text{GL}_d(K)}$.

Definition 3.4. We write $I_d$ for the kernel of $\Phi_d$. It is spanned by elements of the form $p(n, \tau_1 e \tau_2, n_1, \dots, n_r)$, where $e$ is the idempotent of Remark 2.5, $\tau_1, \tau_2 \in S_n$, $n = \sum_i p_i n_i = \sum_i q_i n_i$ is bigger than $d$, and $p(n, -, n_1, \dots, n_r)$ is extended linearly from $S_n$ to $KS_n$.

Write $K^X$ for the commutative algebra of all functions from $X$ to $K$ with pointwise addition and multiplication. The maps $\{\Phi_d\}$ for $d \geq 0$ glue together to give a map $\Xi : K[X] \to K^X$. Since $\bigcap_d \text{Ker}(\Phi_d) = 0$, the map $\Xi$ is injective and enables us to think of elements of $K[X]$ as functions from $X$ to $K$. One might ask if the image of $\Xi$ separates points in $X$, that is, if for every two distinct elements $x_1, x_2 \in X$ there is some $f \in \Xi(K[X])$ with $f(x_1) \neq f(x_2)$. This is almost the case. From invariant theory (see Subsection 2.1) we know that elements of $\Xi(K[X])$ can separate two different points in $X_d$ for a given $d$. However, the image might fail to separate $x_1 \in X_{d_1}$ and $x_2 \in X_{d_2}$ for $d_1 \neq d_2$.
Consider for example the case of an algebraic structure which is a vector space with a single linear endomorphism. More specifically, consider the two structures $(K, T_1)$ and $(K^2, T_2)$, where $T_1 = \text{Id}_K$ and $T_2$ is a rank one projection, say $T_2 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. In this case all the invariants are generated by traces of powers of the linear endomorphism (see Section 10), and the fact that $\text{Tr}(T_1^n) = \text{Tr}(T_2^n) = 1$ for every $n > 0$ means that $\Xi(K[X])$ cannot distinguish these two structures.
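This failure of separation is easy to see numerically. Assuming the representatives $T_1 = \text{Id}_K$ and $T_2 = \text{diag}(1, 0)$ (one choice realizing $\text{Tr}(T_2^n) = 1$ for all $n > 0$), all power traces agree, while the dimension invariant $D = \text{Tr}(\text{Id}_W)$ of the next paragraph does separate the two structures:

```python
import numpy as np

T1 = np.array([[1.0]])        # the structure (K, T_1)
T2 = np.diag([1.0, 0.0])      # the structure (K^2, T_2)

# All trace invariants agree:
for n in range(1, 10):
    assert np.isclose(np.trace(np.linalg.matrix_power(T1, n)),
                      np.trace(np.linalg.matrix_power(T2, n)))

# The dimension invariant D = Tr(Id_W) separates them:
assert np.trace(np.eye(1)) != np.trace(np.eye(2))
```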
To overcome this, we will introduce in Section 5 another commutative algebra $K[X]^{\text{aug}}$, and we will show that $K[X]^{\text{aug}} \cong K[X] \otimes K[D]$. The map $\Xi$ can be extended to $K[X]^{\text{aug}}$ by sending $D$ to the function $(W, (x_i)) \mapsto \dim(W)$. This extension of $\Xi$ is still injective, and it has the advantage that it also separates points in $X$.
Definition 3.5. If $(W, (x_i))$ is any algebraic structure then we will denote by $\chi_{(W,(x_i))} : K[X]^{\text{aug}} \to K$ the ring homomorphism arising from evaluating closed diagrams on $W$. We call $\chi_{(W,(x_i))}$ the character of invariants of $W$.

String diagrams
Let $(W, (x_i))$ be an algebraic structure of type $((p_i, q_i))$. Throughout this paper we will visualise some linear maps and scalar invariants using a certain type of string diagrams. To this end, a diagram contains
(1) A collection of boxes labeled by elements of $\{x_1, \dots, x_r, \text{Id}_W\}$. A box with label $x_i$ has $q_i$ input strings at the bottom, and $p_i$ output strings at the top. A box labeled with $\text{Id}_W$ has one input string and one output string.
(2) Connections between some of the output strings and some of the input strings (but no connections between two output strings or two input strings).
(3) A permutation of the free (non-connected) input strings and a permutation of the free output strings.
Each such diagram can be interpreted as a linear transformation. A box with label $x_i$ corresponds to the linear transformation given by $x_i$, and a box with label $\text{Id}_W$ corresponds to the identity map of $W$. Placing boxes one next to the other corresponds to taking the tensor product of the corresponding linear transformations. Permuting the strings corresponds to permuting the tensor factors of $W$, and connecting an input string with an output string corresponds to applying the evaluation map $\text{ev} : W \otimes W^* \to K$ on two particular copies of $W$ and $W^*$. If a diagram has $q$ free (non-connected) input strings and $p$ free output strings then it represents a linear map in $W^{p,q} \cong \text{Hom}_K(W^{\otimes q}, W^{\otimes p})$. Such a diagram will be referred to as a diagram of degree $(p, q)$. Diagrams of degree $(0, 0)$ will be referred to as closed diagrams. They represent linear maps $W^{\otimes 0} = K \to W^{\otimes 0} = K$, or scalars. For example, the diagram in (4.1) [figure omitted: boxes $x_3$, $x_2$, $x_1$] represents $x_1 \otimes x_2 \otimes x_3 \in W^{4,4}$ and so has degree $(4, 4)$, while the diagram in (4.2) [figure omitted] represents an element of $W^{3,3}$ involving the permutation $(12)(34)$. Notice that the order of the free input and free output strings is important. This is the reason we include the permutations as part of the data of the diagram.
Definition 4.1. Two diagrams $\text{Di}_1, \text{Di}_2$ of degree $(p, q)$ are equivalent if they describe the same linear transformation in $W^{p,q}$ for every finite dimensional vector space $W$ and every collection $(x_i)$ of structure tensors. This equivalence relation is generated by the following two relations:
(1) Permutation relation: $\text{Di}_1$ can be obtained from $\text{Di}_2$ by permuting the boxes while keeping the free input and output strings fixed. An example of such an equivalence, exchanging boxes labeled $x_1$ and $x_2$, is given in (4.3) [figure omitted].
(2) Identity reduction: $\text{Di}_1$ can be obtained from $\text{Di}_2$ (or vice versa) by using the fact that composition with $\text{Id}_W$ is the identity morphism. More precisely, we have the equivalences between diagrams shown in (4.5) [figure omitted].
As the notations above suggest, we will consider equivalent diagrams as equal in what follows.
The product $\text{Di}_1 \star \text{Di}_2$ of two diagrams is formed by placing $\text{Di}_1$ to the left of $\text{Di}_2$. By considering the interpretation of diagrams as morphisms, if $\text{Di}_1 : W^{\otimes a} \to W^{\otimes b}$ and $\text{Di}_2 : W^{\otimes c} \to W^{\otimes d}$ then the product $\text{Di}_1 \star \text{Di}_2$ represents the tensor product $\text{Di}_1 \otimes \text{Di}_2 : W^{\otimes a+c} \to W^{\otimes b+d}$. By rearranging the boxes, which is possible by the permutation relation, every diagram is equivalent to a diagram in which the boxes appear in order, where $x_1 < x_2 < \cdots < x_r < \text{Id}_W$. More precisely, every diagram of degree $(p, q)$ can be brought to a normal form $c(j, \sigma, \tau, n_1, \dots, n_r, m)$ [figure omitted], where $\sigma \in S_{p+j}$, $\tau \in S_{q+j}$, $j$ is the number of connections between input and output strings, $n_i$ is the number of boxes labeled $x_i$, and $m$ is the number of boxes labeled $\text{Id}_W$.

Second definition of K[X] and the algebra K[X] aug
Let $\text{Con}_{p,q}$ be the free vector space generated by all the equivalence classes of diagrams of degree $(p, q)$. The $\star$-product equips the direct sum $\text{Con} = \bigoplus_{p,q} \text{Con}_{p,q}$ with the structure of an $\mathbb{N}^2$-graded algebra. This algebra will come into play in Section 7, where we will study the ideal arising from axioms. We will concentrate now on the subalgebra of trivial degree.
The algebra $K[X]^{\text{aug}}$ is a subalgebra of $\text{Con}$. It is commutative, as can be seen by a direct calculation using Formula 4.7 for the $\star$-product of two diagrams. The intuitive explanation for this is that if $\text{Di}_1$ and $\text{Di}_2$ are two closed diagrams then $\text{Di}_1 \star \text{Di}_2$ can be obtained from $\text{Di}_2 \star \text{Di}_1$ just by moving the boxes which appear in $\text{Di}_2$ to the right, as there are no free input or output strings that should be fixed. This is not true for general diagrams. The algebra $K[X]^{\text{aug}}$ has an extra grading arising from the number of boxes.
When we concentrate on $K[X]^{aug}$ instead of $Con$, we will refer to the box-degree simply as the degree. We do not include the number $m$ in the grading, as it can vary between equivalent diagrams. Definition 4.1 shows that the notion of box-degree is well defined.
We say that the closed diagram p(n, σ, n 1 , . . . , n r , m) is a polynomial invariant of structures of type ((p i , q i )).
Notice that every $p(n, \sigma, n_1, \ldots, n_r, m)$ can indeed be evaluated on any structure of type $((p_i, q_i))$, regardless of its dimension, to give a scalar invariant. This scalar is $\mathrm{Tr}(L^{(n)}_{\sigma}(x_1^{\otimes n_1} \otimes \cdots \otimes x_r^{\otimes n_r} \otimes \mathrm{Id}_W^{\otimes m}))$. In case $m = 0$, if we restrict our attention to structures of a fixed dimension $d$, we get the basic polynomial invariants of Section 3. We will think of the symbol $p(n, \sigma, n_1, \ldots, n_r)$ as representing both a diagram and a polynomial invariant.
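For the case of a single endomorphism ($r = 1$, a single structure tensor $T$ of type $(1,1)$, and $m = 0$), this evaluation can be made completely explicit: the invariant attached to $\sigma \in S_n$ is $\mathrm{Tr}(L^{(n)}_{\sigma} \circ T^{\otimes n})$, and it factors as a product of traces of powers of $T$, one factor per cycle of $\sigma$. The following minimal Python sketch (the function names are ours, not notation from the paper) checks this factorization by brute force:

```python
from itertools import product

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def mat_pow(A, k):
    R = [[int(i == j) for j in range(len(A))] for i in range(len(A))]
    for _ in range(k):
        R = mat_mul(R, A)
    return R

def invariant_bruteforce(T, sigma):
    # Tr(L_sigma o T^{(x)n}) = sum over all multi-indices (i_1,...,i_n)
    # of the product over k of T[i_k][i_{sigma^{-1}(k)}]
    n, d = len(sigma), len(T)
    inv = [0] * n
    for k, v in enumerate(sigma):
        inv[v] = k
    total = 0
    for idx in product(range(d), repeat=n):
        term = 1
        for k in range(n):
            term *= T[idx[k]][idx[inv[k]]]
        total += term
    return total

def cycle_lengths(sigma):
    seen, out = set(), []
    for s in range(len(sigma)):
        if s in seen:
            continue
        k, c = s, 0
        while k not in seen:
            seen.add(k)
            k = sigma[k]
            c += 1
        out.append(c)
    return out

def invariant_by_cycles(T, sigma):
    # the factorization into connected invariants: one factor Tr(T^len) per cycle
    result = 1
    for length in cycle_lengths(sigma):
        result *= trace(mat_pow(T, length))
    return result
```

For instance, a $3$-cycle gives $\mathrm{Tr}(T^3)$, while a transposition times a fixed point gives $\mathrm{Tr}(T^2)\mathrm{Tr}(T)$.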
Definition 5.4. An invariant $p(n, \sigma, n_1, \ldots, n_r, m)$ is called connected, or irreducible, if the corresponding diagram is connected. Equivalently, an invariant is connected if it cannot be written non-trivially as the product of two invariants.
Definition 5.5. We define the dimension invariant as p(1, Id, 0, 0, . . . , 0, 1). We denote it by D. We define an invariant to be unaugmented if it is equal to an invariant of the form p(n, σ, n 1 , . . . , n r , 0). Otherwise it is augmented.
The dimension invariant is represented by a single closed string passing through one $\mathrm{Id}_W$ box [diagram omitted]. We call this invariant the dimension as it represents the trace of the identity map of $W$, which is indeed the dimension of $W$.
Proposition 5.6. Every invariant in $K[X]^{aug}$ can be written uniquely (up to reordering) as a product of connected invariants. The only augmented connected invariant is $D$.
Proof. The first part of the proposition follows by considering the connected components of the diagram of an invariant, as they are uniquely defined. Notice that reducing invariants using the identity-reduction move in Definition 4.1 does not change the number of connected components of the diagram. Let now $p(n, \sigma, n_1, \ldots, n_r, m)$ be a connected invariant. We will show that if this invariant is not $D$ and $m \neq 0$ then it is equivalent to an invariant with a smaller value of $m$. By induction this proves the result. For this, consider the diagram of $p(n, \sigma, n_1, \ldots, n_r, m)$, and consider any of the $\mathrm{Id}_W$ boxes. If the input string and the output string of this box are connected to each other then they form a connected component of the diagram which looks like Figure 5.1. We assumed that the invariant is connected and different from $D$, so this is impossible. We can thus use the identity-reduction move of Definition 4.1 to remove this $\mathrm{Id}_W$-box from the diagram, reducing the value of $m$ by one, as required.
We recall now Definition 1.3.
Definition 5.7 (The algebra $K[X]$, second definition). We write $K[X] \subseteq K[X]^{aug}$ for the subalgebra spanned by the unaugmented invariants. This is indeed a subalgebra, as the product of two diagrams which do not contain any appearance of $\mathrm{Id}_W$ is again such a diagram. For unaugmented invariants we write $p(n, \sigma, n_1, \ldots, n_r) := p(n, \sigma, n_1, \ldots, n_r, 0)$.
Lemma 5.8. The two definitions of $K[X]$ are equivalent.
Proof. The isomorphism between the two algebras is the tautological one, given by sending $p(n, \sigma, n_1, \ldots, n_r)$ to $p(n, \sigma, n_1, \ldots, n_r)$ and extending by linearity. By the equivalence relation on diagrams from Lemma 4.2 and by Relation R1 in Proposition 3.2, this is well defined. The isomorphism indeed preserves the multiplication, since taking the product of diagrams corresponds to taking the tensor product of the linear maps they represent. For maps $K \to K$ this is just multiplication of scalars, which is the product in the first definition of $K[X]$.
The discussion about augmented and unaugmented invariants has the following corollary:
Corollary 5.9. The algebra $K[X]$ is a polynomial algebra whose variables are the unaugmented connected invariants. The algebra $K[X]^{aug}$ is isomorphic to $K[X] \otimes K[D]$.
We will think of both algebras as algebras of invariants of structures of type ((p i , q i )) i . The only difference is that K[X] "cannot see" the dimension of W , while K[X] aug can.
We summarize the structure of K[X] in the following proposition. All of the claims follow from the previous claims about the elements c(j, σ, τ, n 1 , . . . , n r , m).

The bialgebra structures on K[X]
The fact that elements of $K[X]$ are invariant polynomials which can be evaluated on structures of all dimensions enriches the structure of $K[X]$. We exhibit here two bialgebra structures on $K[X]$, one coming from forming the direct sum of structures, the other from forming the tensor product of structures. We begin with the following definition:
Definition 6.1. Let $(W_1, (x_i))$, $(W_2, (y_i))$ be two algebraic structures of type $((p_i, q_i))$. The direct sum of these structures is the structure $(W_1 \oplus W_2, (x_i \oplus y_i))$. The tensor product of the structures is the structure $(W_1 \otimes W_2, (x_i \otimes y_i))$. If the structure we are considering is that of associative algebras then the direct sum and tensor product we defined here are the usual direct sum and tensor product of algebras.
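For structures consisting of a single endomorphism, the behavior of the basic invariants $\mathrm{Tr}(T^k)$ under these two operations is easy to verify numerically: they are additive on direct sums and multiplicative on tensor products (which is the source of the primitive and group-like behavior discussed below). A small Python sketch, with helper names of our own choosing:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_pow(A, k):
    R = [[int(i == j) for j in range(len(A))] for i in range(len(A))]
    for _ in range(k):
        R = mat_mul(R, A)
    return R

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def direct_sum(A, B):
    # block-diagonal matrix diag(A, B)
    n, m = len(A), len(B)
    return [row + [0] * m for row in A] + [[0] * n + row for row in B]

def tensor(A, B):
    # Kronecker product of two square matrices
    return [[A[i][j] * B[k][l] for j in range(len(A)) for l in range(len(B))]
            for i in range(len(A)) for k in range(len(B))]
```

Since $(A \oplus B)^k = A^k \oplus B^k$ and $(A \otimes B)^k = A^k \otimes B^k$, one gets $\mathrm{Tr}((A \oplus B)^k) = \mathrm{Tr}(A^k) + \mathrm{Tr}(B^k)$ and $\mathrm{Tr}((A \otimes B)^k) = \mathrm{Tr}(A^k)\mathrm{Tr}(B^k)$.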
If $(W_1, (x_i))$ and $(W_2, (y_i))$ are two structures with closed orbits, then we can consider the unique closed orbits in the closures of the orbits of $(W_1 \oplus W_2, (x_i \oplus y_i))$ and of $(W_1 \otimes W_2, (x_i \otimes y_i))$. This induces operations $\oplus : X \times X \to X$ and $\otimes : X \times X \to X$. The following lemma holds:
Lemma 6.2. The operations $\oplus$ and $\otimes$ define associative operations $\oplus, \otimes : X \times X \to X$. Both operations have a neutral element. The neutral element for $\oplus$ is the zero-dimensional structure with the zero tensors. The neutral element for $\otimes$ is the structure $(K, (x_i))$ where $x_i = 1 \in K^{p,q} \cong K$. We denote this structure by $(K, (1))$.
This shows that $\otimes$ defines a coproduct, which we denote by $\Delta_\otimes$. Moreover, it shows that with respect to this coproduct all the basic invariants are group-like elements (i.e. they satisfy the equation $\Delta_\otimes(g) = g \otimes g$). Thus $\Delta_\otimes$ defines on $K[X]$ a bialgebra structure which is not a Hopf algebra structure, assuming that $K[X]$ is not trivial. Indeed, if $K[X]$ had an antipode $S$ with respect to $\Delta_\otimes$ then the antipode equation would imply that $S(g) = g^{-1}$ for every group-like element $g$.
Since $K[X]$ is graded and all the basic invariants are of positive degree, this is impossible. The counit $\epsilon_\otimes$ for this bialgebra structure is given by evaluation on the one dimensional structure $(K, (1))$ from Lemma 6.2. We have $\epsilon_\otimes(p(n, \sigma, n_1, \ldots, n_r)) = 1$ for every basic invariant. Consider now the direct sum operation. We will show that it defines another bialgebra structure on $K[X]$. We begin with the following lemma:
Lemma 6.3. Let $p(n, \sigma, n_1, \ldots, n_r)$ be a basic invariant, where $n = \sum_i p_i n_i = \sum_i q_i n_i$ and $\sigma \in S_n$. The diagram of this basic invariant contains $n_1 + n_2 + \cdots + n_r$ boxes and $n$ strings connecting them. Number these strings by $\{1, \ldots, n\}$. Denote by $I_i$ the set of input strings of the $i$-th box and by $J_i$ the set of output strings of the $i$-th box. Then $p(n, \sigma, n_1, \ldots, n_r)$ is irreducible if and only if the following condition holds: there is no proper partition $\{1, \ldots, n\} = X \sqcup Y$ such that for every $i$, $I_i \cup \sigma(J_i) \subseteq X$ or $I_i \cup \sigma(J_i) \subseteq Y$.
Proof. If the diagram of $p(n, \sigma, n_1, \ldots, n_r)$ is not connected, write $\{1, \ldots, n\} = X \sqcup Y$ where $X$ contains the numbers of the strings of one connected component and $Y = \{1, \ldots, n\} \setminus X$. This gives the desired decomposition. In the other direction, if we have such a partition, we get a non-trivial decomposition of the diagram of $p(n, \sigma, n_1, \ldots, n_r)$ into different connected components.
Since the irreducible basic invariants generate K[X] it will be enough to prove that every irreducible basic invariant p(n, σ, n 1 , . . . , n r ) has image inside β(Ξ ⊗ Ξ)(K[X] ⊗ K[X]). We will show that in fact all the irreducible basic invariants are primitive with respect to ∆.
We will think of the tuple $(j_1, \ldots, j_n)$ as a labeling of the strings which appear in the diagram corresponding to $T$. It holds that [formula omitted]. This implies that if two strings go into the same box in the diagram representing the morphism $T$ then they must have the same label, as otherwise $T = 0$ and $\mathrm{Tr}(T) = 0$, contradicting the assumption. Similarly, two strings going out of the same box must have the same label. This already implies that the labels of all the strings in the same connected component of the diagram representing $\mathrm{Tr}(T)$ must be equal. Since we have assumed that $p(n, \sigma, n_1, \ldots, n_r)$ is irreducible, there is a single connected component, and thus $\mathrm{Tr}(T)$ can be non-zero only if $(j_1, \ldots, j_n) = (1, 1, \ldots, 1)$ or $(2, 2, \ldots, 2)$.
when $p(n, \sigma, n_1, \ldots, n_r)$ is an irreducible basic invariant. This means that $\Delta : K[X] \to K[X] \otimes K[X]$ is well defined and gives another bialgebra structure on $K[X]$. It also means that the irreducible basic invariants are primitive with respect to $\Delta$. It is known that such a bialgebra is in fact a Hopf algebra. Indeed, since $K[X]$ is generated by the irreducible basic invariants, the antipode $S$ is defined by the formula $S(p_1 p_2 \cdots p_t) = (-1)^t p_1 p_2 \cdots p_t$ where the $p_i$ are irreducible basic invariants. We have used here the fact that $S(p) = -p$ for a primitive element $p$, and that $S$ is multiplicative (and not just anti-multiplicative, since the algebra $K[X]$ is commutative).
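This Hopf structure can be checked in small cases by direct computation. The sketch below uses our own encoding (not notation from the paper): an element of $K[X]$ is a dictionary sending an exponent vector of a monomial in the primitives $p_1, \ldots, p_t$ to its coefficient, and we verify the antipode identity $m \circ (S \otimes \mathrm{id}) \circ \Delta = u \circ \epsilon$ on monomials:

```python
from itertools import product
from math import comb

def coproduct_monomial(m):
    # Delta is an algebra map with Delta(p_i) = p_i (x) 1 + 1 (x) p_i, so the
    # coefficient of the split (a, m - a) in Delta(prod p_i^{m_i}) is prod C(m_i, a_i)
    out = {}
    for a in product(*(range(e + 1) for e in m)):
        b = tuple(x - y for x, y in zip(m, a))
        coeff = 1
        for e, x in zip(m, a):
            coeff *= comb(e, x)
        out[(a, b)] = coeff
    return out

def antipode_sign(m):
    # S(p_{i_1} ... p_{i_t}) = (-1)^t p_{i_1} ... p_{i_t}, where t is the
    # number of irreducible factors counted with multiplicity
    return (-1) ** sum(m)

def counit(m):
    # the counit kills every monomial except the empty one
    return 1 if all(e == 0 for e in m) else 0

def convolution_S_id(m):
    # (mult o (S (x) id) o Delta)(monomial m); every split multiplies back to m,
    # so the result is the signed sum of the coproduct coefficients
    return sum(coeff * antipode_sign(a) for (a, b), coeff in coproduct_monomial(m).items())
```

For a nonzero monomial the signed sum telescopes to $\prod_i \sum_{a_i} (-1)^{a_i}\binom{m_i}{a_i} = 0$, matching the counit.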

The axioms ideal
The goal of this section is to discuss the ideals $I_{T,d}$ and $I$ which appear in the R3 relations in Theorem 1.1. Axioms for an algebraic structure are often given by stating an equality of two diagrams or, more generally, by stating that a certain linear combination of constructible maps is zero (e.g. the Jacobi identity for Lie algebras). For example, assume that $x_1 \in W^{1,2}$. The tensor $x_1$ can be thought of as a multiplication on $W$. The associativity of $x_1$ is equivalent to the vanishing of a linear combination of diagrams, interpreted as elements in $W^{1,3}$ [diagrams omitted]. If $x_2 \in W^{1,0}$ then the statement that $x_2 : K \to W$ defines a left unit for the multiplication $x_1$ is equivalent to the vanishing of a linear combination of diagrams of degree $(1, 1)$, interpreted as an element in $W^{1,1}$ [diagrams omitted]. Similarly, commutativity of $x_1$ is equivalent to the vanishing of a suitable linear combination of diagrams. We thus make the following definition:
Definition 7.1. A theory for structures of type $((p_i, q_i))$ is a subset $T \subseteq \sqcup_{p,q} Con_{p,q}$.
We will think of the elements of the theory as those linear combinations that should vanish in models of the theory. To state this explicitly, we give the following definition:
Definition 7.2. If $(W, (x_i))$ is an algebraic structure of type $((p_i, q_i))$ we define the $(p, q)$-realization map $\mathrm{Re}_{p,q} : Con_{p,q} \to W^{p,q}$ for $p, q \in \mathbb{N}$ to be the linear map which sends every diagram to its interpretation as a linear map in $W^{p,q}$. In particular, $\mathrm{Re}_{0,0} : K[X]^{aug} \to W^{0,0} = K$ is just the character of invariants of $W$.
Since equivalent diagrams define identical linear maps the realization maps are well defined.
Definition 7.3. We say that $W$ is a model for a theory $T$ if for every $x \in T \cap Con_{p,q}$ it holds that $\mathrm{Re}_{p,q}(x) = 0$ in $W^{p,q}$.
The concepts of "model" and "theory" here differ slightly from those of first-order logic. In first-order logic only functions $W^{\times a} \to W$ are allowed, whereas here we consider maps between different tensor powers of $W$ which are assumed to be linear. Fix now a theory $T$.
Lemma 7.4. Let $W = K^d$ and let $U = U(W) = \bigoplus_{i=1}^{r} W^{p_i,q_i}$ be as defined in Subsection 3.1. The set of all tuples $(x_i) \in U$ which are models of $T$ forms a Zariski closed subset of $U$ which is stable under the action of $\Gamma = \mathrm{GL}_d(K)$.
Proof. We can easily reduce to the case where $T$ contains a single axiom $x \in Con_{p,q}$. Fix a basis $\{e_i\}$ of $W$ and a dual basis $\{f_i\}$ of $W^*$. If $Di$ is a diagram in $Con_{p,q}$ then $\mathrm{Re}_{p,q}(Di)$ can be written as $\sum_{j_1,\ldots,j_p,k_1,\ldots,k_q} f_{j_1,\ldots,j_p,k_1,\ldots,k_q}\, e_{j_1} \otimes \cdots \otimes e_{j_p} \otimes f_{k_1} \otimes \cdots \otimes f_{k_q}$, where the $f_{j_1,\ldots,j_p,k_1,\ldots,k_q}$ are polynomials in the coordinates $a^{\bullet}_{\bullet}$ defined in Subsection 2.3 (see also the discussion on associativity of multiplication at the end of that subsection). In other words, every $r \in W^{q,p}$ gives a polynomial function $f_r := \langle \mathrm{Re}_{p,q}(x), r \rangle$ which should vanish on models of $T$, and the zero set of the polynomials $f_r$ is exactly the set of all models of $T$ in $U$. This set is thus Zariski closed. It is easy to see that it is stable under $\Gamma$, as a change of basis does not change the validity of axioms defined purely by diagrams.
We are thus interested in the Zariski closed set defined by $J_{T,d}$, whose coordinate ring is $K[U_d]/\mathrm{rad}(J_{T,d})$. It thus holds that $I_{T,d} = \mathrm{rad}(J_{T,d})$. It might be the case that $I_{T,d} \neq J_{T,d}$. This happens, for example, when we have an algebraic structure containing a single endomorphism $T : W \to W$. If we take the theory $T = \{T^2\}$ then $\mathrm{Tr}(T)$ is in $I_{T,d}$ but not in $J_{T,d}$. Indeed, $\mathrm{Tr}(T) = 0$ whenever $T^2 = 0$, so this polynomial vanishes on every model of the theory and is therefore in $I_{T,d}$; but it cannot be contained in $J_{T,d}$, since this ideal is generated in degree 2 while $\mathrm{Tr}(T)$ has degree 1. The passage between the ideal and its radical will not make a big difference here, since we have the equality of ideals $\mathrm{rad}(J_{T,d}^{\Gamma}) = (\mathrm{rad}(J_{T,d}))^{\Gamma}$ in $K[U_d]^{\Gamma}$. If $\dim_K W = d$, the ideal $J_{T,d}^{\Gamma}$ defines the subset of $X_d$ of all isomorphism classes with closed orbits of structures which are models for $T$.
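The example above is easy to check concretely: any square-zero matrix is nilpotent, hence has trace zero. A quick numerical sketch, where the rank-one construction of a square-zero matrix is our own illustration:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# A rank-one matrix N = u v^T satisfies N^2 = (v . u) N, so choosing v . u = 0
# gives a model of the theory T = {T^2}
u = [1, 2, 3]
v = [3, 0, -1]            # chosen so that sum(u_i * v_i) = 3 + 0 - 3 = 0
N = [[ui * vj for vj in v] for ui in u]
N2 = mat_mul(N, N)
```

Here $\mathrm{Tr}(N) = v \cdot u = 0$ as well, illustrating that $\mathrm{Tr}(T)$ vanishes on every model of $\{T^2\}$ even though it has degree 1.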
We will now describe an ideal $I_T$ of $K[X]^{aug}$, and show that its image in $K[U_d]^{\Gamma}$ is exactly $J_{T,d}^{\Gamma}$. We will use the fact that the ideal $J_{T,d}$ is defined using the pairing of the axioms with elements in the dual space $W^{q,p}$. We begin with the following definition:
Definition 7.5. We define a pairing $\mathrm{pair}_{p,q} : Con_{p,q} \otimes Con_{q,p} \to Con_{0,0} = K[X]^{aug}$ on the basis elements by a pictorial description [diagram omitted]. In other words, the pairing of $x \in Con_{p,q}$ with $y \in Con_{q,p}$ is given by connecting the $p$ free output strings of $x$ to the $p$ free input strings of $y$, and the $q$ free input strings of $x$ to the $q$ free output strings of $y$. We could give a rigorous formula using the formalism of Section 4, but it would not be very enlightening.
Since connecting strings in diagrams corresponds to applying the evaluation $W \otimes W^* \to K$ iteratively, the following diagram is commutative [diagram omitted]. Here $\mathrm{Re}_{0,0} = \chi_{(W,(x_i))} : K[X]^{aug} \to K$ is the character of invariants of $W$ (see Definition 3.5). The following lemma is immediate from the commutativity of this diagram.
Definition 7.7. We define $I_T \subseteq K[X]^{aug}$ to be the ideal generated by all elements of the form $\mathrm{pair}(x, y)$ where $x \in T \cap Con_{p,q}$ and $y \in Con_{q,p}$, for some $(p, q) \in \mathbb{N}^2$.
Proposition 7.8. The image of $I_T$ in $K[U_d]^{\Gamma}$ is exactly $J_{T,d}^{\Gamma}$.
Proof. (See also the proof of Theorem 1.4 in [Me17].) Throughout the proof we will assume for simplicity that $T$ contains a single axiom $x$ of degree $(p, q)$. The general case follows from the following argument: if $T = \{t_i\}_i$ and we write $T_i$ for the singleton $\{t_i\}$, then $I_T = \sum_i I_{T_i}$ and $J_{T,d} = \sum_i J_{T_i,d}$. Since $\Gamma$ is reductive, surjectivity is preserved after taking invariants, so it is enough to prove the proposition in case $T$ is a singleton.
We can also think of $\Phi_d$ as a homomorphism from $K[X]^{aug}$ to $K[U_d]$ with image in the $\Gamma$-invariants. To show the inclusion in the first direction, it is enough to show that $\Phi_d(I_T) \subseteq J_{T,d}$.
To do so, we use the fact that $J_{T,d}$ is generated by elements of the form $f_z := \langle x, z \rangle$ where $z \in W^{q,p}$. Consider $\Phi_d(\mathrm{pair}(x, y))$. If we write $\{\epsilon_i\}$ for the basis of $W^{p,q}$ arising from the original basis of $W$ and $\{\delta_i\}$ for the dual basis of $W^{q,p}$, then $\mathrm{Re}_{p,q}(x)$ can be written as $\sum_i f_i \epsilon_i$ where $f_i \in K[U_d]$, and $y$ can similarly be written as $\sum_i g_i \delta_i$. It then follows that $\Phi_d(\mathrm{pair}(x, y)) = \sum_i f_i g_i \in K[U_d]$ is contained in the ideal generated by the $f_i$. But this is exactly the ideal $J_{T,d}$.
For the other direction, consider the ideal $J_{T,d}^{\Gamma}$. Using the same terminology as above, a general element of $J_{T,d}$ is of the form $\sum_i f_i g_i$. To phrase it differently, write $V = \mathrm{span}\{f_i\}$. Then $V$ is a $\Gamma$-representation, and we have a surjective map of $\Gamma$-representations onto the span of such elements. Since $\Gamma$ is reductive, taking $\Gamma$-invariants preserves surjective maps; taking $\Gamma$-invariants also commutes with direct sums. We have an isomorphism $V \cong (W^{p,q})^* = W^{q,p}$ as $\Gamma$-representations, and $V'$ is a quotient of $W^{q',p'}$ where $p' = \sum_i n_i p_i$ and $q' = \sum_i n_i q_i$. If $p + p' \neq q + q'$ then $(V \otimes W^{q',p'})^{\Gamma} = 0$, and as a result $(V \otimes V')^{\Gamma} = 0$ and there is nothing to prove (again, we use here the reductivity of $\Gamma$). Assume then that $p + p' = q + q' = n$. Schur-Weyl duality tells us that in this case all the resulting $\Gamma$-invariants in $V \otimes V'$ arise from permutations $\sigma \in S_n$. In other words, following back all the identifications we have used, the resulting element of $J_{T,d}^{\Gamma}$ can be written in terms of such a permutation. This element can further be simplified to be of the form $\Phi_d(\mathrm{pair}(x, y))$ in the following way: write $Di = x_1^{\otimes n_1} \otimes \cdots \otimes x_r^{\otimes n_r}$. The element $f$ is then represented by a diagram built out of $x$, $L^{(n)}_{\sigma}$ and $Di$ [diagram omitted]. If all the output strings of $x$, after being shuffled by $L^{(n)}_{\sigma}$, are connected to output strings of $Di$, then what we get is a pairing of $x$ with the diagram resulting from $Di$ by closing some of its input strings with some of its output strings. If, on the other hand, some output strings of $x$ are connected to input strings of $x$ after shuffling, we can use Equation 4.5 and add some copies of $\mathrm{Id}_W$ to $Di$ to break these strings into two: the output string from $x$ will go to the input string of an $\mathrm{Id}_W$ box, and the output string of this box will go to the relevant input string of $x$. By taking the diagram formed from $Di \star (\mathrm{Id}_W)^{\star m}$ and closing it accordingly, we still get that the invariant is of the form $\mathrm{pair}(x, y)$ for some $y \in Con_{q,p}$, and we are done.

where $\mathbb{Z}$ has the canonical pairing. (4) The algebra $A$ is connected, that is, $A_0 = \mathbb{Z}$.
(5) All the structure constants of m, ∆, u, ǫ with respect to the basis B are nonnegative integers.
By a graded basis we mean that B = ⊔ n B n , where B n is a basis for A n .
Our algebra K[X] is not defined over Z but over a field K of characteristic zero. Nevertheless, all the structure constants for m and ∆ with respect to the basis B of monomials in the irreducible basic invariants are non-negative integers. We now adjust the definition of Zelevinsky to fit in our framework.
Definition 8.2. A rational $K$-PSH-algebra is an $\mathbb{N}^r$-graded $K$-Hopf algebra $A$ equipped with a graded basis $B$ which satisfies the following conditions: (1) The basis $B$ is orthogonal and positive with respect to $\langle -, - \rangle$. In other words, for every $x, y \in B$ we have $\langle x, y \rangle = \delta_{x,y} c(x)$ for some $c(x) \in \mathbb{Q}_{+}$.
(2) The multiplication is adjoint to the comultiplication with respect to $\langle -, - \rangle$, where $A \otimes_K A$ has the tensor product pairing. The number $r$ appearing in the grading is some positive integer.
In the sequel we will refer simply to a rational PSH-algebra when the field $K$ is clear from context. The main relaxation we made is in the first axiom: instead of an orthonormal basis we now require only an orthogonal one. It turns out that the basis of monomials we have for $K[X]$ furnishes a natural structure of a rational PSH-algebra, but one for which the constants $c(x)$ will often be integers $\neq 1$.
We define the following inner product on $(K[X])_{n_1,\ldots,n_r}$ for every tuple $(n_1, \ldots, n_r) \in \mathbb{N}^r$:
$$\langle p(n, \sigma, n_1, \ldots, n_r), p(n, \tau, n_1, \ldots, n_r) \rangle = |\{g \in S_{n_1,\ldots,n_r} \mid \alpha_1(g)\sigma\alpha_2(g^{-1}) = \tau\}|,$$
where we write $\alpha_1 = \alpha^{(q_i)}_{(n_i)}$ and $\alpha_2 = \alpha^{(p_i)}_{(n_i)}$. Since equality between basic invariants is defined using the action of $S_{n_1,\ldots,n_r}$, it holds that if $p(n, \sigma, n_1, \ldots, n_r) \neq p(n, \tau, n_1, \ldots, n_r)$ then their pairing is zero, and the squared norm of $p(n, \sigma, n_1, \ldots, n_r)$ is the cardinality of its stabilizer in $S_{n_1,\ldots,n_r}$, where the action of this group is given by $(g, \sigma) \mapsto \alpha_1(g)\sigma\alpha_2(g^{-1})$. We will think of this stabilizer as the automorphism group of the diagram of $p(n, \sigma, n_1, \ldots, n_r)$, as it corresponds to permutations of the boxes which do not change the diagram. This pairing can also be interpreted in the following way: we have a natural identification and inclusion of $(K[X])_{n_1,\ldots,n_r}$ inside $K[S_n]$, and $K[S_n]$ has a natural inner product given by $\langle \sigma, \tau \rangle = \delta_{\sigma,\tau}$. The inner product described here is just the restriction of this inner product, rescaled by $\prod_i n_i!$ (see also Lemma 10.1). The rest of this section will be devoted to proving the following:
Theorem 8.3. The algebra $K[X]$ with the basis of basic invariants and the above inner product is a rational PSH-algebra.
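Specializing to a single endomorphism ($r = 1$, $p_1 = q_1 = 1$), the action $\alpha_1(g)\sigma\alpha_2(g^{-1})$ becomes ordinary conjugation, so the pairing counts $\{g \in S_n : g\sigma g^{-1} = \tau\}$: it is zero unless $\sigma$ and $\tau$ are conjugate, in which case it equals the order of the centralizer of $\sigma$. A brute-force Python sketch (our encoding of permutations as tuples):

```python
from itertools import permutations

def compose(g, h):
    # (g o h)(k) = g[h[k]]
    return tuple(g[h[k]] for k in range(len(g)))

def inverse(g):
    inv = [0] * len(g)
    for k, v in enumerate(g):
        inv[v] = k
    return tuple(inv)

def pairing(sigma, tau):
    # |{g in S_n : g sigma g^{-1} = tau}|: nonzero iff sigma and tau are
    # conjugate, in which case it equals the order of the centralizer of sigma
    n = len(sigma)
    return sum(1 for g in permutations(range(n))
               if compose(compose(g, sigma), inverse(g)) == tau)
```

For $\sigma$ a product of two disjoint 2-cycles in $S_4$, the centralizer has order $2^2 \cdot 2! = 8$, which is the squared norm $|\mathrm{Aut}|$ of the corresponding basic invariant.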
Since all basic invariants can be written uniquely as monomials in the irreducible basic invariants, and since the irreducible basic invariants are primitive with respect to $\Delta$, it is easy to see that most of the axioms of a rational PSH-algebra hold in $K[X]$. The only non-trivial part is the fact that $m$ is adjoint to $\Delta$ with respect to $\langle -, - \rangle$. The proof of the theorem will thus be complete with the proof of the following lemma:
Lemma 8.4. For every $a, b, c \in K[X]$ it holds that $\langle a \otimes b, \Delta(c) \rangle = \langle ab, c \rangle$.
Proof. By linearity it is enough to prove this in case $a$, $b$ and $c$ are monomials in the irreducible basic invariants. Write $a = p_1^{a_1} \cdots p_t^{a_t}$, $b = p_1^{b_1} \cdots p_t^{b_t}$ and $c = p_1^{c_1} \cdots p_t^{c_t}$ where $p_1, \ldots, p_t$ are irreducible basic invariants. Since the monomials form an orthogonal basis, we get that if $a_i + b_i \neq c_i$ for some $i$ then both sides of the equation are zero. So we assume that $a_i + b_i = c_i$ for $i = 1, \ldots, t$. It follows that $ab = c$ and the right hand side is equal to $\langle c, c \rangle = |\mathrm{Aut}(c)|$. On the other hand, since all the elements $p_i$ are primitive, the coefficient of $a \otimes b$ in $\Delta(c)$ is $\prod_i \binom{c_i}{a_i}$. We thus need to prove the equation $\prod_i \binom{c_i}{a_i} \cdot |\mathrm{Aut}(a)| \cdot |\mathrm{Aut}(b)| = |\mathrm{Aut}(c)|$. This will follow once we prove that $|\mathrm{Aut}(a)| = \prod_i a_i!\, |\mathrm{Aut}(p_i)|^{a_i}$. This follows easily from the following argument: every automorphism of $a$ permutes the connected components of the diagram which corresponds to $a$. The diagram of $a$ has $a_1 + a_2 + \cdots + a_t$ connected components which correspond to the $p_i$ constituents.
Since an automorphism of $a$ sends a connected component to an equivalent connected component, any automorphism of $a$ permutes the connected components of type $p_1$, the connected components of type $p_2$, and so on. We thus have a surjective group homomorphism $\mathrm{Aut}(a) \to \prod_i S_{a_i}$. The kernel of this homomorphism consists of those automorphisms of $a$ which fix all the connected components, and is thus isomorphic to $\prod_i \mathrm{Aut}(p_i)^{a_i}$. Calculating the cardinality of $\mathrm{Aut}(a)$ now gives the desired result.
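The counting identity underlying this proof can be sanity-checked numerically: with $|\mathrm{Aut}(a)| = \prod_i a_i!\,|\mathrm{Aut}(p_i)|^{a_i}$, the bookkeeping $\prod_i \binom{c_i}{a_i}\cdot|\mathrm{Aut}(a)|\cdot|\mathrm{Aut}(b)| = |\mathrm{Aut}(c)|$ holds for every splitting $a_i + b_i = c_i$. A sketch in Python, where the values standing for $|\mathrm{Aut}(p_i)|$ are arbitrary placeholders:

```python
from math import comb, factorial
from itertools import product

def aut_size(exps, aut_p):
    # |Aut(prod p_i^{e_i})| = prod e_i! * |Aut(p_i)|^{e_i}
    result = 1
    for e, m in zip(exps, aut_p):
        result *= factorial(e) * m ** e
    return result

def check_adjointness(c, aut_p):
    # verify prod C(c_i, a_i) * |Aut(a)| * |Aut(b)| == |Aut(c)| for all a + b = c
    for a in product(*(range(ci + 1) for ci in c)):
        b = tuple(ci - ai for ci, ai in zip(c, a))
        lhs = aut_size(a, aut_p) * aut_size(b, aut_p)
        for ci, ai in zip(c, a):
            lhs *= comb(ci, ai)
        if lhs != aut_size(c, aut_p):
            return False
    return True
```

The identity reduces to $\binom{c}{a}\, a!\, b! = c!$ in each coordinate, with the $|\mathrm{Aut}(p_i)|$ factors matching up by $a_i + b_i = c_i$.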
So we indeed get a rational PSH-algebra structure on $K[X]$. PSH-algebras play an important role in the representation theory of finite groups. In [Ze81] Zelevinsky proved that every PSH-algebra decomposes uniquely into a tensor product of what he called universal PSH-algebras. The universal PSH-algebra is the polynomial algebra $\mathbb{Z}[x_1, x_2, \ldots]$ where $\deg(x_n) = n$ and where $\Delta(x_n) = \sum_{a+b=n} x_a \otimes x_b$. This algebra arises in the study of the representation theory of the symmetric groups in the following way: define $A = \bigoplus_{n \geq 0} R(S_n)$, the direct sum of the Grothendieck groups of all the symmetric groups. For $[V] \in R(S_n)$ and $[W] \in R(S_m)$ the product is defined by induction, $[V] \cdot [W] = [\mathrm{Ind}_{S_n \times S_m}^{S_{n+m}} V \otimes W]$, and the coproduct by restriction. These operations, together with the basis given by the irreducible representations of the groups $S_n$, define a structure of a PSH-algebra. In [Ze81] Zelevinsky described other PSH-algebras arising from other families of finite groups such as wreath products or general linear groups over finite fields. In Section 10 we will show that when our algebraic structure contains a single endomorphism, the algebra $K[X]$ is just the extension of scalars of the universal PSH-algebra of Zelevinsky. I do not know, however, whether the rational PSH-algebra $K[X]$ always has a representation theoretic interpretation. See also Question 10.6.

The Hilbert function of K[X] and of its finitely generated quotients
We explain now how to calculate the Hilbert function of $K[X]$ and also of the finitely generated quotients $K[X]/I_d$. The algebra $K[X]$ is graded by $\mathbb{N}^r$, and we define the Hilbert function $H$ of $K[X]$ by $H(n_1, \ldots, n_r) = \dim_K (K[X])_{n_1,\ldots,n_r}$.
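As a concrete instance, for the structure of a single endomorphism the graded piece $(K[X])_n$ has a basis parametrized by the partitions $\lambda \vdash n$ (see Section 10), so $H(n)$ is the partition function $p(n)$, which also counts the conjugacy classes (cycle types) of $S_n$. A sketch verifying this for small $n$:

```python
from itertools import permutations

def partition_count(n):
    # standard dynamic program for the partition function p(n)
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for k in range(part, n + 1):
            p[k] += p[k - part]
    return p[n]

def cycle_type(g):
    seen, lens = set(), []
    for s in range(len(g)):
        if s in seen:
            continue
        k, c = s, 0
        while k not in seen:
            seen.add(k)
            k = g[k]
            c += 1
        lens.append(c)
    return tuple(sorted(lens))

def conjugacy_class_count(n):
    # conjugacy classes of S_n are exactly the cycle types
    return len({cycle_type(g) for g in permutations(range(n))})
```

The first values of $H$ are $1, 2, 3, 5, 7, 11, \ldots$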

Now
To calculate the dimension of this space we will first calculate $\alpha_i^{*}(S^{\lambda})$ for a partition $\lambda \vdash n$ and $i = 1, 2$.
We will now show that this algebra has a different basis, parametrized by the partitions of $n$ for every $n \in \mathbb{N}$. This will enable us to show that this algebra has a $\mathbb{Z}$-lattice which is the universal PSH-algebra of Zelevinsky, and it will enable us to give a clean description of the ideals $I_d$. For this, recall that since $S_n$ is a finite group, the natural map $\varphi : (KS_n)^{S_n} \to (KS_n)_{S_n}$, $z \mapsto \bar{z}$, is an isomorphism, with inverse given by $\varphi^{-1}(\bar{\sigma}) = \frac{1}{n!} \sum_{\tau \in S_n} \tau \sigma \tau^{-1}$.
Also, since the action of $S_n$ on $KS_n$ here is by conjugation, the invariant subspace $(KS_n)^{S_n}$ is just the center of $KS_n$. The center of $KS_n$ has a natural basis given by $\{e_\lambda\}_{\lambda \vdash n}$, where $e_\lambda$ is the central idempotent which corresponds to the Specht module $S^{\lambda}$. We calculate here the squared norm of $e_\lambda$. For this, define first an inner product on $KS_n$ by $\langle \sigma, \tau \rangle = n!\, \delta_{\sigma,\tau}$. This inner product is the same as $\langle \sigma, \tau \rangle = \chi_{reg}(\sigma\tau^{-1})$, where $\chi_{reg}$ is the character of the regular representation of $S_n$. We claim the following:
Lemma 10.1. The map $K[X]_n \cong (KS_n)_{S_n} \to (KS_n)^{S_n} \hookrightarrow KS_n$ preserves the inner product.
Remark 10.2. As mentioned when defining the inner product in Section 8, this lemma is the reason we defined the inner product on the coinvariants space the way we did.
Proof. The image of $\bar{\sigma} \in K[X]_n$ in $(KS_n)^{S_n}$ is $\frac{1}{n!} \sum_{x \in S_n} x \sigma x^{-1}$. We calculate: $\langle \varphi^{-1}(\bar{\sigma}), \varphi^{-1}(\bar{\tau}) \rangle = \frac{1}{n!^2} \sum_{x,y \in S_n} \langle x\sigma x^{-1}, y\tau y^{-1} \rangle$. But this is the same as $\langle \bar{\sigma}, \bar{\tau} \rangle$, so we are done.
Now we can calculate the squared norm of $\varphi(e_\lambda)$. Since every element of $S_n$ is conjugate to its inverse, and since $e_\lambda$ is an idempotent, we get that $\langle \varphi(e_\lambda), \varphi(e_\lambda) \rangle = \chi_{reg}(e_\lambda) = d_\lambda^2$, where $d_\lambda$ is the dimension of the Specht module $S^{\lambda}$.
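The dimensions $d_\lambda$ can be computed by the hook length formula, and the squared norms then satisfy $\sum_{\lambda \vdash n} d_\lambda^2 = n!$, the dimension of the regular representation. A short sketch:

```python
from math import factorial

def partitions(n, max_part=None):
    # generate the partitions of n as weakly decreasing tuples
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def specht_dim(lam):
    # hook length formula: d_lambda = n! / (product of the hook lengths)
    n = sum(lam)
    conj = [sum(1 for row in lam if row > j) for j in range(lam[0])]
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(n) // hooks
```

For example $d_{(2,1)} = 2$, and for $n = 5$ the seven squared dimensions sum to $120 = 5!$.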
We use this terminology to be consistent with Zelevinsky (see Chapter II.6 in [Ze81]). We thus have a new basis for $K[X]$, given by $\{\lambda\}_{\lambda \vdash n, n \in \mathbb{N}}$. This basis is orthonormal. Let $A \subseteq K[X]$ be the $\mathbb{Z}$-lattice generated by the elements $\{\lambda\}$. We can already identify $A$ with $\bigoplus_{n \geq 0} R(S_n)$, since both of them have a basis parametrized by the partitions of all natural numbers. In addition, the basis here is also orthonormal, as in the universal PSH-algebra. Our goal now is to show that the algebra $A$ has the same multiplication and comultiplication as the algebra of Zelevinsky. It will be enough to show that the product is the same, since we know that in both algebras the comultiplication is adjoint to the multiplication, and so the multiplication determines it uniquely.
So $\{\lambda\} \cdot \{\mu\} = \sum_{\nu} a_\nu \{\nu\}$, where the $a_\nu \in \mathbb{Q}$ are some scalars. The following lemma proves that these scalars are exactly the Littlewood-Richardson coefficients.
Lemma 10.4. Let $G$ be a finite group and let $H$ be a subgroup. Assume that $e \in KH$ is the idempotent which corresponds to an irreducible representation $V$ of $H$. Write $W_1, \ldots, W_r$ for the isomorphism classes of the irreducible representations of $G$, and let $f_i$ be their corresponding central idempotents. Then $\frac{1}{|G|}\sum_{g \in G} geg^{-1} = \sum_i \frac{\dim(V)\, a_i}{\dim(W_i)} f_i$, where $a_i = \dim \mathrm{Hom}_H(V, \mathrm{Res}^G_H(W_i))$.
Proof. Denote the left hand side by $Z$ and write $\chi_i$ for the character of $W_i$. Since $\chi_i(f_j) = \dim(W_i)\delta_{i,j}$, we get $\frac{1}{\dim(V)}\chi_i(Z) = a_i$, so it will be enough to evaluate this expression. Using the fact that $\chi_i$ is invariant under conjugation in $G$, we get that $\chi_i(Z) = \chi_i(e)$.

Now, if $\dim \mathrm{Hom}_H(V, \mathrm{Res}^G_H(W_i)) = t_i$ then $\mathrm{Res}^G_H(W_i) \cong V^{\oplus t_i} \oplus V'$, where $V'$ is a direct sum of irreducible $H$-representations which are not isomorphic to $V$. The action of $e$ on $W_i$ is then given by the projection onto $V^{\oplus t_i}$ with kernel $V'$. Since this is a projection with image of dimension $\dim(V) t_i$, we get that $a_i = \frac{1}{\dim(V)} \dim(V) t_i = t_i$, and we are done.
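The multiplicities $a_i = t_i$ in the lemma can be computed by ordinary character theory, $t_i = \langle \chi_V, \mathrm{Res}^G_H \chi_i \rangle_H$. The toy computation below is our own illustration, with $G = S_3$, $H = S_2$ and $V$ the sign representation of $S_2$, using the well-known character table of $S_3$:

```python
from fractions import Fraction

# character table of S_3; columns: identity, transpositions, 3-cycles
chi_S3 = {
    "trivial": [1, 1, 1],
    "sign": [1, -1, 1],
    "standard": [2, 0, -1],
}

# H = S_2 = {e, (12)}; the identity lies in class 0 of S_3, (12) in class 1
def restriction_multiplicity(chi_V, name):
    # t = <chi_V, Res chi>_H = (1/|H|) sum over h in H of chi_V(h) chi(h)
    # (all characters here are real, so no complex conjugation is needed)
    chi = chi_S3[name]
    restricted = [chi[0], chi[1]]
    return Fraction(sum(a * b for a, b in zip(chi_V, restricted)), 2)

sign_S2 = [1, -1]   # character of the sign representation of S_2 at e and (12)
mults = {name: restriction_multiplicity(sign_S2, name) for name in chi_S3}
```

This recovers the branching rule: the sign representation of $S_2$ occurs once in the sign and once in the standard representation of $S_3$, and not at all in the trivial one.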