Idempotent systems

In this paper we introduce the notion of an idempotent system. This linear algebraic object is motivated by the structure of an association scheme. We focus on a family of idempotent systems, said to be symmetric. A symmetric idempotent system is an abstraction of the primary module for the subconstituent algebra of a symmetric association scheme. We describe the symmetric idempotent systems in detail. We also consider a class of symmetric idempotent systems, said to be $P$-polynomial and $Q$-polynomial. In the topic of orthogonal polynomials there is an object called a Leonard system. We show that a Leonard system is essentially the same thing as a symmetric idempotent system that is $P$-polynomial and $Q$-polynomial.


Introduction
In this paper we introduce the notion of an idempotent system. This linear algebraic object is motivated by the structure of an association scheme. Before summarizing the contents of this paper, we briefly recall the notion of an association scheme. A (symmetric) association scheme is a pair $(X, \{R_i\}_{i=0}^d)$, where $X$ is a finite nonempty set and $\{R_i\}_{i=0}^d$ is a sequence of nonempty subsets of $X \times X$ such that: (i) $R_0 = \{(y,y) \mid y \in X\}$; (ii) $X \times X = R_0 \cup R_1 \cup \cdots \cup R_d$ (disjoint union); (iii) $R_i^t = R_i$ for $0 \le i \le d$, where $R_i^t = \{(z,y) \mid (y,z) \in R_i\}$; (iv) there exist integers $p^h_{ij}$ $(0 \le h,i,j \le d)$ such that for any $(x,y) \in R_h$ the number of $z \in X$ with $(x,z) \in R_i$ and $(z,y) \in R_j$ is equal to $p^h_{ij}$.
The integers $p^h_{ij}$ are called the intersection numbers. By (iii) they satisfy $p^h_{ij} = p^h_{ji}$ for $0 \le h,i,j \le d$. The concept of an association scheme first arose in design theory [2-4, 10] and group theory [15]. A systematic study began with [6, 7]. A comprehensive treatment is given in [1, 5].
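The axioms above can be checked directly on a small example. The following sketch (our toy construction, not taken from the paper) builds the $d=2$ scheme on the 4-cycle, where $R_i$ consists of the vertex pairs at graph distance $i$, and computes intersection numbers from $A_iA_j$:

```python
import numpy as np

# Toy example: the 4-cycle gives a symmetric association scheme with d = 2,
# where R_i is the set of vertex pairs at graph distance i.
n, d = 4, 2
dist = lambda y, z: min((y - z) % n, (z - y) % n)
A = [np.array([[1.0 if dist(y, z) == i else 0.0 for z in range(n)]
               for y in range(n)]) for i in range(d + 1)]

# Axioms (i)-(iii): R_0 is the diagonal, the R_i partition X x X,
# and each R_i is symmetric.
assert np.array_equal(A[0], np.eye(n))
assert np.array_equal(sum(A), np.ones((n, n)))
assert all(np.array_equal(Ai, Ai.T) for Ai in A)

# Axiom (iv): p^h_ij = (A_i A_j)_{y,z} is the same for every (y,z) in R_h.
def p(h, i, j):
    vals = {(A[i] @ A[j])[y, z] for y in range(n) for z in range(n)
            if dist(y, z) == h}
    assert len(vals) == 1          # well defined: independent of (y, z)
    return vals.pop()

assert p(0, 1, 1) == 2.0           # k_1 = p^0_11: each vertex has 2 neighbours
assert p(1, 1, 2) == p(1, 2, 1)    # p^h_ij = p^h_ji, reflecting axiom (iii)
```
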
Let $(X, \{R_i\}_{i=0}^d)$ denote an association scheme. As we study this object, the following concepts and notation will be useful. Let $\mathbb{R}$ denote the real number field. Let $\mathrm{Mat}_X(\mathbb{R})$ denote the $\mathbb{R}$-algebra consisting of the matrices with rows and columns indexed by $X$ and all entries in $\mathbb{R}$. Let $I$ (resp. $J$) denote the identity matrix (resp. all 1's matrix) in $\mathrm{Mat}_X(\mathbb{R})$. Let $V$ denote the vector space over $\mathbb{R}$ consisting of the column vectors with coordinates indexed by $X$ and all entries in $\mathbb{R}$. The algebra $\mathrm{Mat}_X(\mathbb{R})$ acts on $V$ by left multiplication. We define a bilinear form $\langle\,,\,\rangle : V \times V \to \mathbb{R}$ such that $\langle u,v\rangle = \sum_{y\in X} u_yv_y$ for $u,v \in V$. We have $\langle Bu,v\rangle = \langle u,B^tv\rangle$ for $B \in \mathrm{Mat}_X(\mathbb{R})$ and $u,v \in V$. Here $B^t$ denotes the transpose of $B$. For $y \in X$ define $\hat{y} \in V$ that has $y$-entry 1 and all other entries 0. Note that $\{\hat{y} \mid y \in X\}$ form an orthonormal basis of $V$.
We now recall the Bose-Mesner algebra. For $0 \le i \le d$ define $A_i \in \mathrm{Mat}_X(\mathbb{R})$ that has $(y,z)$-entry 1 if $(y,z) \in R_i$ and 0 if $(y,z) \notin R_i$ ($y,z \in X$). The matrix $A_i$ is symmetric. We have $A_0 = I$, $\sum_{i=0}^d A_i = J$, and $A_iA_j = \sum_{h=0}^d p^h_{ij}A_h$ for $0 \le i,j \le d$. The $\{A_i\}_{i=0}^d$ form a basis for a commutative subalgebra $M$ of $\mathrm{Mat}_X(\mathbb{R})$. We call $M$ the Bose-Mesner algebra of the scheme. Each matrix in $M$ is symmetric. By [5, Section 2.2] there exists a basis $\{E_i\}_{i=0}^d$ for $M$ such that $E_0 = |X|^{-1}J$, $E_iE_j = \delta_{i,j}E_i$ $(0 \le i,j \le d)$, and $\sum_{i=0}^d E_i = I$. For $0 \le i \le d$, $E_iV$ is the $i$th common eigenspace for $M$, and $E_i$ is the orthogonal projection from $V$ onto $E_iV$. There exist real numbers $p_i(j)$, $q_i(j)$ $(0 \le i,j \le d)$ such that $A_i = \sum_{j=0}^d p_i(j)E_j$ and $E_i = |X|^{-1}\sum_{j=0}^d q_i(j)A_j$. We now recall the Krein parameters. Note that $A_i \circ A_j = \delta_{i,j}A_i$ $(0 \le i,j \le d)$, where $\circ$ denotes entry-wise multiplication. Therefore $M$ is closed under $\circ$. Consequently there exist real numbers $q^h_{ij}$ $(0 \le h,i,j \le d)$ such that $E_i \circ E_j = |X|^{-1}\sum_{h=0}^d q^h_{ij}E_h$. By [1, Theorem 3.8], $q^h_{ij} \ge 0$ for $0 \le h,i,j \le d$. The $q^h_{ij}$ are called the Krein parameters of the scheme.
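The primitive idempotents and Krein parameters above can be computed numerically. A minimal sketch, again on the 4-cycle (our toy example): the $E_i$ are the orthogonal projectors onto the eigenspaces of $A_1$, and the $q^h_{ij}$ are read off from $E_i \circ E_j = |X|^{-1}\sum_h q^h_{ij}E_h$ via the trace form:

```python
import numpy as np

# Primitive idempotents of the 4-cycle's Bose-Mesner algebra (toy example).
n = 4
A1 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
evals, U = np.linalg.eigh(A1)                  # eigenvalues -2, 0, 0, 2
E = []
for t in (2.0, 0.0, -2.0):                     # order so that E_0 = J/|X|
    cols = U[:, np.isclose(evals, t)]
    E.append(cols @ cols.T)                    # orthogonal projector

assert np.allclose(E[0], np.ones((n, n)) / n)  # E_0 = |X|^{-1} J
assert np.allclose(sum(E), np.eye(n))          # sum_i E_i = I
assert np.allclose(E[1] @ E[2], 0)             # E_i E_j = delta_ij E_i

# Krein parameters via the trace form:
# q^h_ij = |X| tr((E_i o E_j) E_h) / tr(E_h), since tr(E_g E_h) = delta_gh tr(E_h).
def q(h, i, j):
    return n * np.trace((E[i] * E[j]) @ E[h]) / np.trace(E[h])

assert np.isclose(q(0, 1, 1), 2.0)             # q^0_ii = m_i, here m_1 = 2
assert all(q(h, i, j) >= -1e-9                 # Krein parameters nonnegative
           for h in range(3) for i in range(3) for j in range(3))
```
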
We now recall the dual Bose-Mesner algebra. For the rest of this section fix $x \in X$. For $B \in M$ let $B^\rho$ denote the diagonal matrix in $\mathrm{Mat}_X(\mathbb{R})$ that has $(y,y)$-entry $B_{x,y}$ for $y \in X$. Roughly speaking, $B^\rho$ is obtained by turning column $x$ of $B$ at a 45 degree angle. For $0 \le i \le d$ define $E^*_i = A_i^\rho$. For $y \in X$ the $(y,y)$-entry of $E^*_i$ is 1 if $(x,y) \in R_i$ and 0 if $(x,y) \notin R_i$. Note that $E^*_0$ has $(x,x)$-entry 1 and all other entries 0. The matrices $\{E^*_i\}_{i=0}^d$ satisfy $E^*_iE^*_j = \delta_{i,j}E^*_i$ $(0 \le i,j \le d)$ and $\sum_{i=0}^d E^*_i = I$. Therefore $\{E^*_i\}_{i=0}^d$ form a basis for a commutative subalgebra $M^*$ of $\mathrm{Mat}_X(\mathbb{R})$. We call $M^*$ the dual Bose-Mesner algebra with respect to $x$. For $0 \le i \le d$, $E^*_iV$ has basis $\{\hat{y} \mid y \in X, (x,y) \in R_i\}$. Moreover $E^*_iV$ is the $i$th common eigenspace for $M^*$, and $E^*_i$ is the orthogonal projection from $V$ onto $E^*_iV$. The map $\rho : M \to M^*$, $B \mapsto B^\rho$ is $\mathbb{R}$-linear and bijective. For $0 \le i \le d$ define $A^*_i = |X|E_i^\rho$. The $\{A^*_i\}_{i=0}^d$ form a basis of $M^*$, and $A^*_i = \sum_{j=0}^d q_i(j)E^*_j$, $E^*_i = |X|^{-1}\sum_{j=0}^d p_i(j)A^*_j$ for $0 \le i \le d$.
We now recall the subconstituent algebra $T$ and the primary $T$-module. Let $T$ denote the subalgebra of $\mathrm{Mat}_X(\mathbb{R})$ generated by $M$ and $M^*$. We call $T$ the subconstituent algebra (or Terwilliger algebra) with respect to $x$. The algebra $T$ is closed under the transpose map. By [12, Lemma 3.4] the algebra $T$ is semisimple. Moreover by [12, Lemma 3.4] the $T$-module $V$ decomposes into an orthogonal direct sum of irreducible $T$-modules. Among these modules there is a distinguished one, said to be primary. We now describe the primary $T$-module. Let $\mathbf{1}$ denote the vector in $V$ that has all entries 1. So $\mathbf{1} = \sum_{y\in X}\hat{y}$. For $0 \le i \le d$, $E^*_i\mathbf{1} = A_i\hat{x}$. Therefore $M\hat{x} = M^*\mathbf{1}$; denote this common vector space by $\mathbf{V}$. By construction $\mathbf{V}$ is a $T$-module with dimension $d+1$. By [12, Lemma 3.6] the $T$-module $\mathbf{V}$ is irreducible. The $T$-module $\mathbf{V}$ is said to be primary. For $0 \le i \le d$ define $\mathbf{1}_i = E^*_i\mathbf{1}$ and $\mathbf{1}^*_i = |X|E_i\hat{x}$. The bases $\{\mathbf{1}_i\}_{i=0}^d$ and $\{\mathbf{1}^*_i\}_{i=0}^d$ are related by $\mathbf{1}^*_i = \sum_{j=0}^d q_i(j)\mathbf{1}_j$ and $\mathbf{1}_i = |X|^{-1}\sum_{j=0}^d p_i(j)\mathbf{1}^*_j$. The following bases for $\mathbf{V}$ are of interest: (i) $\{\mathbf{1}_i\}_{i=0}^d$; (ii) $\{k_i^{-1}\mathbf{1}_i\}_{i=0}^d$; (iii) $\{E_i\hat{x}\}_{i=0}^d$; (iv) $\{m_i^{-1}\mathbf{1}^*_i\}_{i=0}^d$. Here $k_i = p^0_{ii}$ denotes the $i$th valency and $m_i = \mathrm{rank}(E_i)$ the $i$th multiplicity of the scheme. The bases (i), (ii) are dual with respect to $\langle\,,\,\rangle$. Moreover the bases (iii), (iv) are dual with respect to $\langle\,,\,\rangle$. The algebras $M$ and $M^*$ are related as follows. For $0 \le h,i,j \le d$, $E^*_hA_iE^*_j = 0$ if and only if $p^h_{ij} = 0$, and $E_hA^*_iE_j = 0$ if and only if $q^h_{ij} = 0$. We summarize the above description with four statements about $\mathbf{V}$: (i) the $\{E_i\}_{i=0}^d$ act on $\mathbf{V}$ as a system of mutually orthogonal rank 1 idempotents; (ii) the $\{E^*_i\}_{i=0}^d$ act on $\mathbf{V}$ as a system of mutually orthogonal rank 1 idempotents; (iii) $E_iE^*_0$ is nonzero on $\mathbf{V}$ for $0 \le i \le d$; (iv) $E^*_iE_0$ is nonzero on $\mathbf{V}$ for $0 \le i \le d$. The above statements (i)-(iv) have the following significance. We will show that (i)-(iv), together with the symmetry of the matrices $\{E_i\}_{i=0}^d$, $\{E^*_i\}_{i=0}^d$, are sufficient to recover the $T$-module $\mathbf{V}$ at an algebraic level.
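The key identity $E^*_i\mathbf{1} = A_i\hat{x}$ and the dimension of the primary module can be verified numerically. A sketch on the 4-cycle with base vertex $x = 0$ (our toy example):

```python
import numpy as np

# Toy example (4-cycle, base vertex x = 0): E*_i is the diagonal 0/1 matrix
# supported on the vertices at distance i from x, and E*_i 1 = A_i x_hat,
# so M x_hat = M* 1 is the (d+1)-dimensional primary module.
n, d = 4, 2
dist = lambda y, z: min((y - z) % n, (z - y) % n)
A = [np.array([[1.0 if dist(y, z) == i else 0.0 for z in range(n)]
               for y in range(n)]) for i in range(d + 1)]
Estar = [np.diag([1.0 if dist(0, y) == i else 0.0 for y in range(n)])
         for i in range(d + 1)]
one = np.ones(n)
xhat = np.eye(n)[0]

for i in range(d + 1):
    assert np.allclose(Estar[i] @ one, A[i] @ xhat)   # 1_i = E*_i 1 = A_i x_hat

# The vectors 1_i are linearly independent, so they span a module of dim d + 1.
ones_i = np.column_stack([Estar[i] @ one for i in range(d + 1)])
assert np.linalg.matrix_rank(ones_i) == d + 1
```
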
We now turn our attention to idempotent systems. An idempotent system is defined as follows. Let $\mathbb{F}$ denote a field. Let $d$ denote a nonnegative integer, and let $V$ denote a vector space over $\mathbb{F}$ with dimension $d+1$. Let $\mathrm{End}(V)$ denote the $\mathbb{F}$-algebra consisting of the $\mathbb{F}$-linear maps from $V$ to $V$. By an idempotent system on $V$ we mean a sequence $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ of elements in $\mathrm{End}(V)$ such that $\{E_i\}_{i=0}^d$ and $\{E^*_i\}_{i=0}^d$ are each a system of mutually orthogonal rank 1 idempotents, and $E^*_iE_0 \ne 0$, $E_iE^*_0 \ne 0$ for $0 \le i \le d$. The above idempotent system $\Phi$ is said to be symmetric whenever there exists an antiautomorphism $\dagger$ of $\mathrm{End}(V)$ that fixes each of $E_i$, $E^*_i$ for $0 \le i \le d$. Let $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ denote a symmetric idempotent system on $V$. Using $\Phi$ we will define some elements $\{A_i\}_{i=0}^d$, $\{A^*_i\}_{i=0}^d$ in $\mathrm{End}(V)$ and some scalars in $\mathbb{F}$. The scalar $\nu$ corresponds to $|X|$. We will endow $V$ with a nondegenerate symmetric bilinear form $\langle\,,\,\rangle$. We will define four orthogonal bases of $V$ that correspond to the four earlier bases of interest. We will show that the resulting construction matches the primary $T$-module at an algebraic level. Our definitions are summarized as follows. Note that $\{E_i\}_{i=0}^d$ form a basis for a commutative subalgebra $M$ of $\mathrm{End}(V)$. We show that for $0 \le i \le d$ there exists a unique element $A_i$ of $M$ with a certain defining property given in the main body; the $\{A_i\}_{i=0}^d$ form a basis for the vector space $M$. Similarly, the $\{E^*_i\}_{i=0}^d$ form a basis for a commutative subalgebra $M^*$ of $\mathrm{End}(V)$, and we show that for $0 \le i \le d$ there exists a unique corresponding element $A^*_i$ of $M^*$; the $\{A^*_i\}_{i=0}^d$ form a basis for the vector space $M^*$. Concerning the scalars, we show that $\mathrm{tr}(E_0E^*_0) \ne 0$. The scalar $\nu$ is defined by $\nu = (\mathrm{tr}(E_0E^*_0))^{-1}$. The scalars $k_i$, $k^*_i$ are defined by $A_iE_0 = k_iE_0$ and $A^*_iE^*_0 = k^*_iE^*_0$. We show that $\sum_{i=0}^d k_i = \nu = \sum_{i=0}^d k^*_i$, and each of $k_i$, $k^*_i$ is nonzero for $0 \le i \le d$. The scalars $p^h_{ij}$, $q^h_{ij}$ are defined by $A_iA_j = \sum_{h=0}^d p^h_{ij}A_h$ and $A^*_iA^*_j = \sum_{h=0}^d q^h_{ij}A^*_h$. The scalars $p_i(j)$, $q_i(j)$ are defined by $A_i = \sum_{j=0}^d p_i(j)E_j$ and $A^*_i = \sum_{j=0}^d q_i(j)E^*_j$. We define a bilinear form $\langle\,,\,\rangle$ on $V$ as follows. By linear algebra, there exists a nondegenerate bilinear form $\langle\,,\,\rangle$ on $V$ such that $\langle Bu,v\rangle = \langle u,B^\dagger v\rangle$ for all $B \in \mathrm{End}(V)$ and $u,v \in V$. The bilinear form $\langle\,,\,\rangle$ is unique up to multiplication by a nonzero scalar in $\mathbb{F}$. The bilinear form $\langle\,,\,\rangle$ is symmetric.
Fix nonzero $\xi, \zeta$ in $E_0V$ and nonzero $\xi^*, \zeta^*$ in $E^*_0V$. We show that each of the following (i)-(iv) is an orthogonal basis for $V$: (i) $\{E^*_i\xi\}_{i=0}^d$; (ii) $\{k_i^{-1}E^*_i\zeta\}_{i=0}^d$; (iii) $\{E_i\xi^*\}_{i=0}^d$; (iv) $\{(k^*_i)^{-1}E_i\zeta^*\}_{i=0}^d$. The bases (i), (ii) are dual if and only if $\langle\xi,\zeta\rangle = \nu$, and the bases (iii), (iv) are dual if and only if $\langle\xi^*,\zeta^*\rangle = \nu$. We just summarized our definitions. In the main body of the paper, we show that the resulting defined objects are related in a manner that matches the primary $T$-module. To describe this relationship, we use some equations involving the above scalars, called the reduction rules. Near the end of the paper we introduce the $P$-polynomial and $Q$-polynomial properties for symmetric idempotent systems. We show that a symmetric idempotent system that is $P$-polynomial and $Q$-polynomial is essentially the same thing as a Leonard system in the sense of [13, Definition 4.1].
The paper is organized as follows. In Section 2 we recall some basic results from linear algebra. In Section 3 we introduce the concept of an idempotent system. In Section 4 we introduce the scalar $\nu$ and discuss some related topics. In Section 5 we introduce the symmetric idempotent systems. In Sections 6, 7 we introduce a certain linear bijection $\rho : M \to M^*$ and use it to define the elements $A_i$, $A^*_i$. In Sections 8, 9 we introduce the scalars $k_i$, $k^*_i$ and obtain some reduction rules involving these scalars. In Sections 10, 11 we introduce the scalars $p^h_{ij}$, $q^h_{ij}$ and obtain some reduction rules involving these scalars. In Sections 12, 13 we introduce the scalars $p_i(j)$, $q_i(j)$ and obtain some reduction rules involving these scalars. In Section 14 we put some of our earlier results in matrix form. In Sections 15-17 we introduce the four bases of interest and discuss their properties.
In Section 18 we obtain the transition matrices between these four bases, and the inner products between these four bases. We also obtain the matrices representing $A_i$, $A^*_i$, $E_i$, $E^*_i$ with respect to these four bases. In Section 19 we introduce the $P$-polynomial and $Q$-polynomial properties. In Section 20 we recall the notion of a Leonard pair and a Leonard system. In Section 21 we show that a Leonard system is essentially the same thing as a symmetric idempotent system that is $P$-polynomial and $Q$-polynomial.
The reader might wonder how the concept of a symmetric idempotent system is related to the concept of a character algebra [8]. Roughly speaking, a symmetric idempotent system is obtained by gluing together a character algebra and its dual; we will discuss this in a future paper.

Preliminaries
In this section we fix some notation and recall some basic concepts. Throughout this paper $\mathbb{F}$ denotes a field. By a scalar we mean an element of $\mathbb{F}$. All algebras and vector spaces discussed in this paper are over $\mathbb{F}$. All algebras discussed in this paper are associative and have a multiplicative identity. For an algebra $\mathcal{A}$, by an automorphism of $\mathcal{A}$ we mean an algebra isomorphism $\mathcal{A} \to \mathcal{A}$, and by an antiautomorphism of $\mathcal{A}$ we mean an $\mathbb{F}$-linear bijection $\tau : \mathcal{A} \to \mathcal{A}$ such that $(BC)^\tau = C^\tau B^\tau$ for all $B, C \in \mathcal{A}$. For the rest of this paper, fix an integer $d \ge 0$ and let $V$ denote a vector space with dimension $d+1$. Let $\mathrm{End}(V)$ denote the algebra consisting of the $\mathbb{F}$-linear maps from $V$ to $V$. Let $\mathrm{Mat}_{d+1}(\mathbb{F})$ denote the algebra consisting of the $d+1$ by $d+1$ matrices that have all entries in $\mathbb{F}$. We index the rows and columns by $0, 1, \ldots, d$. The identity of $\mathrm{End}(V)$ or $\mathrm{Mat}_{d+1}(\mathbb{F})$ is denoted by $I$. For $A \in \mathrm{End}(V)$, the dimension of $AV$ is called the rank of $A$. A matrix $M \in \mathrm{Mat}_{d+1}(\mathbb{F})$ is said to be tridiagonal whenever the $(i,j)$-entry $M_{i,j} = 0$ if $|i-j| > 1$ $(0 \le i,j \le d)$. Assume for the moment that $M$ is tridiagonal. Then $M$ is said to be irreducible whenever $M_{i,j} \ne 0$ if $|i-j| = 1$ $(0 \le i,j \le d)$. We recall how each basis $\{v_i\}_{i=0}^d$ of $V$ gives an algebra isomorphism $\mathrm{End}(V) \to \mathrm{Mat}_{d+1}(\mathbb{F})$. For $A \in \mathrm{End}(V)$ and $M \in \mathrm{Mat}_{d+1}(\mathbb{F})$, we say that $M$ represents $A$ with respect to $\{v_i\}_{i=0}^d$ whenever $Av_j = \sum_{i=0}^d M_{i,j}v_i$ for $0 \le j \le d$. The isomorphism sends $A$ to the unique matrix in $\mathrm{Mat}_{d+1}(\mathbb{F})$ that represents $A$ with respect to $\{v_i\}_{i=0}^d$. Next we recall the transition matrix between two bases of $V$. Let $\{u_i\}_{i=0}^d$ and $\{v_i\}_{i=0}^d$ denote bases of $V$. By the transition matrix from $\{u_i\}_{i=0}^d$ to $\{v_i\}_{i=0}^d$ we mean the matrix $T \in \mathrm{Mat}_{d+1}(\mathbb{F})$ such that $v_j = \sum_{i=0}^d T_{i,j}u_i$ for $0 \le j \le d$. Then $T$ is invertible, and $T^{-1}$ is the transition matrix from $\{v_i\}_{i=0}^d$ to $\{u_i\}_{i=0}^d$. A subspace $W \subseteq V$ is called an eigenspace of $A \in \mathrm{End}(V)$ whenever $W \ne 0$ and there exists a scalar $\theta$ such that $W = \{v \in V \mid Av = \theta v\}$; in this case $\theta$ is the eigenvalue of $A$ associated with $W$. We say that $A$ is diagonalizable whenever $V$ is spanned by the eigenspaces of $A$.
We say that A is multiplicity-free whenever A is diagonalizable and its eigenspaces all have dimension one.
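The two conventions above (representing matrix and transition matrix) can be checked numerically. A small sketch with assumed example matrices:

```python
import numpy as np

# Convention check: M represents A with respect to the basis {v_i} when
# A v_j = sum_i M_ij v_i, i.e. M = S^{-1} A S where S has the v_i as columns.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
S = np.array([[1.0, 1.0], [0.0, 1.0]])        # basis v_0 = (1,0), v_1 = (1,1)
M = np.linalg.inv(S) @ A @ S
for j in range(2):
    assert np.allclose(A @ S[:, j], S @ M[:, j])   # A v_j = sum_i M_ij v_i

# Transition matrix T from {u_i} to {v_i}: v_j = sum_i T_ij u_i, so V = U T.
U = np.eye(2)                                  # standard basis u_0, u_1
T = np.linalg.inv(U) @ S
assert np.allclose(S, U @ T)
assert np.allclose(U, S @ np.linalg.inv(T))    # T^{-1} goes the other way
```
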

Definition 2.1 By a decomposition of $V$ we mean a sequence $\{V_i\}_{i=0}^d$ of one-dimensional subspaces of $V$ such that $V = \sum_{i=0}^d V_i$ (direct sum).
Definition 2.2 By a system of mutually orthogonal rank 1 idempotents in $\mathrm{End}(V)$ we mean a sequence $\{E_i\}_{i=0}^d$ of elements in $\mathrm{End}(V)$ such that $E_iE_j = \delta_{i,j}E_i$ and $\mathrm{rank}(E_i) = 1$ for $0 \le i,j \le d$. The next lemma is routinely verified.
Lemma 2.3 The following hold. (i) Let $\{V_i\}_{i=0}^d$ denote a decomposition of $V$. For $0 \le i \le d$ define $E_i \in \mathrm{End}(V)$ such that $(E_i - I)V_i = 0$ and $E_iV_j = 0$ for $j \ne i$ $(0 \le j \le d)$. Then $\{E_i\}_{i=0}^d$ is a system of mutually orthogonal rank 1 idempotents in $\mathrm{End}(V)$. (ii) Let $\{E_i\}_{i=0}^d$ denote a system of mutually orthogonal rank 1 idempotents in $\mathrm{End}(V)$. Then $\{E_iV\}_{i=0}^d$ is a decomposition of $V$.
Definition 2.4 Let $A$ denote a multiplicity-free element in $\mathrm{End}(V)$, and let $\{V_i\}_{i=0}^d$ denote an ordering of the eigenspaces of $A$. Let $\{E_i\}_{i=0}^d$ denote the corresponding system of mutually orthogonal rank 1 idempotents from Lemma 2.3(i). We call $\{E_i\}_{i=0}^d$ the primitive idempotents of $A$.
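The primitive idempotents of a multiplicity-free map admit an explicit formula, which we can use to check the definition numerically. A sketch (our toy example; the Lagrange-interpolation formula is a standard fact, not quoted from the paper):

```python
import numpy as np

# Primitive idempotents of a multiplicity-free A by Lagrange interpolation:
# E_i = prod_{j != i} (A - theta_j I)/(theta_i - theta_j).
A = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
theta = np.linalg.eigvalsh(A)                  # -sqrt(2), 0, sqrt(2): distinct
d = len(theta) - 1
I = np.eye(d + 1)

def idem(i):
    E = I.copy()
    for j in range(d + 1):
        if j != i:
            E = E @ (A - theta[j] * I) / (theta[i] - theta[j])
    return E

E = [idem(i) for i in range(d + 1)]
assert np.allclose(sum(E), I)                        # sum_i E_i = I
assert all(np.isclose(np.trace(Ei), 1) for Ei in E)  # each E_i has rank 1
assert np.allclose(E[0] @ E[1], 0)                   # mutual orthogonality
assert np.allclose(A, sum(t * Ei for t, Ei in zip(theta, E)))
```
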
For the rest of this section, let $\{E_i\}_{i=0}^d$ denote a system of mutually orthogonal rank 1 idempotents in $\mathrm{End}(V)$. The next two lemmas are routinely verified.
Lemma 2.5 The following hold: (i) $\mathrm{tr}(E_i) = 1$ for $0 \le i \le d$; (ii) $\sum_{i=0}^d E_i = I$. Here tr means trace.

Idempotent systems
Recall the vector space $V$ with dimension $d+1$. In this section we introduce the notion of an idempotent system on $V$.
Definition 3.1 By an idempotent system on $V$ we mean a sequence $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ of elements in $\mathrm{End}(V)$ such that: (i) $\{E_i\}_{i=0}^d$ is a system of mutually orthogonal rank 1 idempotents in $\mathrm{End}(V)$; (ii) $\{E^*_i\}_{i=0}^d$ is a system of mutually orthogonal rank 1 idempotents in $\mathrm{End}(V)$; (iii) $E^*_iE_0 \ne 0$ for $0 \le i \le d$; (iv) $E_iE^*_0 \ne 0$ for $0 \le i \le d$.

Let $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ denote an idempotent system on $V$. Define $\Phi^* = (\{E^*_i\}_{i=0}^d; \{E_i\}_{i=0}^d)$. Then $\Phi^*$ is an idempotent system on $V$, called the dual of $\Phi$. We have $(\Phi^*)^* = \Phi$. For an object $f$ attached to $\Phi$, the corresponding object attached to $\Phi^*$ is denoted by $f^*$. Let $\Phi'$ denote an idempotent system on a vector space $V'$. By an isomorphism of idempotent systems from $\Phi$ to $\Phi'$ we mean an algebra isomorphism $\mathrm{End}(V) \to \mathrm{End}(V')$ that sends the idempotents of $\Phi$ to the corresponding idempotents of $\Phi'$. We say that $\Phi$ and $\Phi'$ are isomorphic whenever there exists an isomorphism of idempotent systems from $\Phi$ to $\Phi'$. By the Skolem-Noether theorem (see [11, Corollary 7.125]), a map $\sigma : \mathrm{End}(V) \to \mathrm{End}(V')$ is an algebra isomorphism if and only if there exists an $\mathbb{F}$-linear bijection $S : V \to V'$ such that $B^\sigma = SBS^{-1}$ for all $B \in \mathrm{End}(V)$.

Definition 3.2 Let $M$ denote the subalgebra of $\mathrm{End}(V)$ generated by $\{E_i\}_{i=0}^d$. Note that $M$ is commutative, and $\{E_i\}_{i=0}^d$ form a basis of the vector space $M$.
The scalars $m_i$, $\nu$

Let $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ denote an idempotent system on $V$. In this section we use $\Phi$ to introduce some scalars $\{m_i\}_{i=0}^d$, $\nu$.

Definition 4.1 For $0 \le i \le d$ define $m_i = \mathrm{tr}(E_iE^*_0)$.
Lemma 4.2 For $0 \le i \le d$ the following hold:

Proof. (i) Abbreviate $\mathcal{A} = \mathrm{End}(V)$. By Lemma 2.6(ii), $E^*_0$ is a basis for the vector space $E^*_0\mathcal{A}E^*_0$. So there exists a scalar $\alpha_i$ such that $E^*_0E_iE^*_0 = \alpha_iE^*_0$. In this equation, take the trace of each side and simplify the result using Lemma 2.5(i) and $\mathrm{tr}(MN) = \mathrm{tr}(NM)$ to obtain $\alpha_i = m_i$. The result follows.
(ii) By Lemma 2.5(ii), $\sum_{i=0}^d E_i = I$. In this equation, multiply each side on the left by $E^*_0$. In the resulting equation, take the trace of each side, and evaluate the result using Lemma 2.5(i) and Definition 4.1. ∎

Definition 4.5 Setting $i = 0$ in (2) we find that $m_0 = m^*_0$; let $\nu$ denote the multiplicative inverse of this common value. We emphasize $\nu = \nu^*$ and $\nu = (\mathrm{tr}(E_0E^*_0))^{-1}$.

Lemma 4.6 We have (4).

Proof. To get the equation on the left in (4), set $i = 0$ in Lemma 4.2(ii) and use Definition 4.5. Applying this to $\Phi^*$ we get the equation on the right in (4). ∎

Lemma 4.7 Each of the following is a basis of the vector space $\mathrm{End}(V)$:

Proof. (i) In view of Lemma 2.6, it suffices to show that $E_iE^*_0E_j \ne 0$ for $0 \le i,j \le d$. Let $i, j$ be given, and suppose $E_iE^*_0E_j = 0$. Using Lemmas 4.2(i) and 4.4(i), (ii) Apply (i) to $\Phi^*$. (iii) By (i) above and Definition 3.2. ∎
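As a numerical sanity check on the scalar $\nu = (\mathrm{tr}(E_0E^*_0))^{-1}$, consider again the 4-cycle scheme (our toy example, not from the paper), where $\nu$ should recover $|X| = 4$:

```python
import numpy as np

# nu = (tr E_0 E*_0)^{-1} for the 4-cycle scheme with base vertex x = 0.
n = 4
E0 = np.ones((n, n)) / n                 # E_0 = |X|^{-1} J
E0star = np.diag([1.0, 0.0, 0.0, 0.0])   # E*_0 for base vertex x = 0
nu = 1.0 / np.trace(E0 @ E0star)
assert np.isclose(nu, n)

# Consistency with the sum rule sum_i k_i = nu: the valencies of the
# 4-cycle are k_0, k_1, k_2 = 1, 2, 1.
assert np.isclose(sum([1.0, 2.0, 1.0]), nu)
```
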

Symmetric idempotent systems
We continue to discuss an idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$.

Definition 5.1 We say that $\Phi$ is symmetric whenever there exists an antiautomorphism $\dagger$ of $\mathrm{End}(V)$ that fixes each of $E_i$, $E^*_i$ for $0 \le i \le d$. Recall the algebra $M$ from Definition 3.2.

Lemma 5.2 Assume that $\Phi$ is symmetric, and let $\dagger$ denote an antiautomorphism of $\mathrm{End}(V)$ from Definition 5.1. Then the following hold: (i) the antiautomorphism $\dagger$ is unique; (ii) the composition $\dagger \circ \dagger$ is the identity map; (iii) $\dagger$ fixes every element in $M$ and every element in $M^*$.
(ii) The composition $\dagger \circ \dagger$ is an automorphism of $\mathrm{End}(V)$ that fixes everything in $M$ and everything in $M^*$. This automorphism is the identity in view of Lemma 4.8(iii).
(i) Let $\mu$ denote an antiautomorphism of $\mathrm{End}(V)$ that fixes each of $E_i$, $E^*_i$ for $0 \le i \le d$. We show $\mu = \dagger$. The composition $\dagger \circ \mu$ is an automorphism of $\mathrm{End}(V)$ that fixes everything in $M$ and everything in $M^*$. So this automorphism is the identity. We have $\dagger = \dagger^{-1}$ by (ii) above, so $\mu = \dagger$. ∎

The map $\rho$

Let $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ denote a symmetric idempotent system on $V$. Recall the algebra $M$ from Definition 3.2. In this section we introduce a certain map $\rho : M \to M^*$ that will play an essential role in our theory. As we will see, $\rho$ is an isomorphism of vector spaces but not algebras.
Proof. (i) By Lemmas 2.5(ii) and 2.6(i) the sum $\sum_{i=0}^d E_iE^*_0$ spans the space in question. We show that these elements are linearly independent. Suppose that for scalars $\{\alpha_i\}_{i=0}^d$ we have $0 = \sum_{i=0}^d \alpha_iE_iE^*_0$. For $0 \le r \le d$, multiply each side of this equation on the left by $E_r$ to obtain $0 = \alpha_rE_rE^*_0$. We have $E_rE^*_0 \ne 0$ by Definition 3.1(iv), so $\alpha_r = 0$. We have shown that $\{E_iE^*_0\}_{i=0}^d$ are linearly independent, and hence form a basis.

Proof. (i) Clearly the map is $\mathbb{F}$-linear. By Lemma 6.1(i), the map sends the basis to a basis.

Proof. Abbreviate $\mathcal{A} = \mathrm{End}(V)$. Concerning existence, consider the $\mathbb{F}$-linear map $g$ that satisfies (5). We have shown that $\rho$ exists. The map $\rho$ is unique by Lemma 6.2(ii). ∎

Lemma 6.4 The maps $\rho$ and $\nu\rho^*$ are inverses. In particular, the maps $\rho$, $\rho^*$ are bijective.

Proof. By Lemma 5.2(iii) and since
Proof. (i) Use Lemmas 6.3, 7.

We continue to discuss a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$. In this section we use $\Phi$ to define some scalars $k_i$ that will play a role in our theory.
Proof. (i) By Definition 8.1. (ii) Apply (i) to $\Phi^*$. ∎

Lemma 8.3 For $0 \le i \le d$ the following hold:

In this equation, evaluate the left-hand side using Lemmas 4.6, 8.2(i), and evaluate the right-hand side using Lemma 4.2(ii). This gives the following:

Proof. (i) Apply Lemma 4.4(i) to $\Phi^*$ and use Lemma 8.3(i).
(iii) By Definition 4.5 and Lemma 8.3(i). ∎

Some reduction rules

We continue to discuss a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$. In this section we obtain some reduction rules for $\Phi$. Recall the antiautomorphism $\dagger$ of $\mathrm{End}(V)$ from Definition 5.1.
Lemma 9.1 For $0 \le i \le d$ the following hold:

Proof. (i) Set $Y = E_i$ in (5) and use Lemma 7.2. (ii) Apply (i) to $\Phi^*$. (iii), (iv) For the equations in (i) and (ii), apply $\dagger$ to each side. ∎

Lemma 9.2 For $0 \le i,j \le d$ the following hold:

Proof. (i) For the equation in Lemma 9.1(ii), multiply each side on the left by the appropriate idempotent. In this equation, evaluate the left-hand side using Lemma 9.1(ii).
We call these scalars the Krein parameters of Φ.
Proof. Apply Lemma 10.1 to $\Phi^*$ and use Definition 10.3. ∎

Lemma 10.5 For $0 \le h,i,j \le d$ the following hold: (i) $p^h_{ij} = p^h_{ji}$; (ii) $q^h_{ij} = q^h_{ji}$.
(i) Expand $A_h(A_iA_j) = (A_hA_i)A_j$ in two ways using (7), and compare the coefficients using Lemma 7.7.
(ii) Apply (i) to $\Phi^*$. ∎

Lemma 10.8 For $0 \le h,i \le d$ the following hold:

Proof. (i) Using Lemmas 7.6 and 8.2(i), and by (7), compare the resulting two equations using Lemma 7.7. (ii) Apply (i) to $\Phi^*$. ∎

Lemma 10.9 For $0 \le i,j \le d$ the following hold:

Proof. (i) For the equation (7), multiply each side on the left by $E_0E^*_0$ and on the right by $E^*_0E_0$. Evaluate the result using Lemma 7.4(i), (iii) along with Lemmas 4.2, 8.3. (ii) Apply (i) to $\Phi^*$. ∎

Lemma 10.10 For $0 \le i,j \le d$ the following hold:

Proof. (i) In (7), multiply each side by $E_0$, and simplify the result using Lemma 8.2(i). (ii) Apply (i) to $\Phi^*$. ∎

Lemma 10.11 For $0 \le h,i,j \le d$ the following hold:

Proof. (i) In view of Lemma 10.5(i), it suffices to show that $k_hp^h_{ij} = k_jp^j_{hi}$. To obtain this equation, set $t = 0$ in Lemma 10.7(i), and evaluate the result using Lemma 10.9(i).
(ii) Apply (i) to $\Phi^*$. ∎

Reduction rules involving $p^h_{ij}$, $q^h_{ij}$

We continue to discuss a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$. In this section we give some reduction rules for $\Phi$ that involve the intersection numbers and Krein parameters.
(ii) Apply (i) to $\Phi^*$. ∎

Lemma 11.4 For $0 \le h,i,j \le d$ the following hold:

Proof. By Lemmas 4.7 and 11.3. ∎

Lemma 11.5 For $0 \le h,i,j \le d$ the following hold:

Proof. (i) In Lemma 11.2(iii), take the trace of each side, and simplify the result using Definition 4.1. (ii) Apply (i) to $\Phi^*$. ∎

The scalars $p_i(j)$, $q_i(j)$

We continue to discuss a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$. In this section we use $\Phi$ to define some scalars $p_i(j)$, $q_i(j)$ that will play a role in our theory. Recall the algebra $M$ from Definition 3.2.

Lemma 12.1 There exist scalars $p_i(j)$ $(0 \le i,j \le d)$ such that $A_i = \sum_{j=0}^d p_i(j)E_j$ for $0 \le i \le d$.

Proof. By Definition 3.2 the elements $\{E_i\}_{i=0}^d$ form a basis of $M$. By Definition 7.1, $A_i \in M$ for $0 \le i \le d$. The result follows. ∎

Definition 12.2 For $0 \le i,j \le d$ define $q_i(j) = (p_i(j))^*$.
Proof. (i) In (9), apply $\rho$ to each side and use Lemma 7.2. (ii) Apply (i) to $\Phi^*$. ∎

Lemma 12.6 For $0 \le i,j \le d$ the following hold:

Proof. (i) By (9). In this equation, eliminate $E_h$ using Lemma 12.5(ii), and compare the coefficients of each side.
Proof. (i) Set $i = 0$ in (9) and recall that $A_0 = I$. (ii) Apply (i) to $\Phi^*$. ∎

Lemma 12.8 For $0 \le i \le d$ the following hold:

Proof. (i) Set $j = 0$ in Lemma 12.4(i) and compare the result with Lemma 8.2(i).
Proof. (i) In (7), multiply each side by $E_r$, and simplify the result using Lemma 12.4(i).
(ii) Apply (i) to $\Phi^*$. ∎

Lemma 12.12 For $0 \le h,i,j \le d$ the following hold:

Proof. (i) Expand the sum $\sum_{r=0}^d p_i(r)p_j(r)q_r(h)$ using Lemma 12.11(i), and simplify the result using Lemma 12.6(i).

Some matrices
We continue to discuss a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$. In the previous sections we used $\Phi$ to define several kinds of scalars, and we described how these scalars are related. In this section we express these relationships in matrix form.

Definition 14.1 Let $K$ (resp. $K^*$) denote the diagonal matrix in $\mathrm{Mat}_{d+1}(\mathbb{F})$ that has $(i,i)$-entry $k_i$ (resp. $k^*_i$) for $0 \le i \le d$. Let $P$ (resp. $Q$) denote the matrix in $\mathrm{Mat}_{d+1}(\mathbb{F})$ that has $(i,j)$-entry $p_j(i)$ (resp. $q_j(i)$) for $0 \le i,j \le d$.

Lemma 14.2 The following hold:

Proof. (i) By Lemma 12.6.
(ii), (iii) By Lemma 14.4(i), (iii) and Lemma 14.2(i). ∎

Definition 14.7 For $0 \le i \le d$ let $B_i$ and $B^*_i$ denote the matrices in $\mathrm{Mat}_{d+1}(\mathbb{F})$ that have $(h,j)$-entries $p^h_{ij}$ and $q^h_{ij}$ respectively, for $0 \le h,j \le d$. We call $B_i$ (resp. $B^*_i$) the $i$th intersection matrix (resp. $i$th dual intersection matrix) of $\Phi$.

Definition 14.8 For $0 \le i \le d$ let $H_i$ and $H^*_i$ denote the diagonal matrices in $\mathrm{Mat}_{d+1}(\mathbb{F})$ that have $(j,j)$-entries $p_i(j)$ and $q_i(j)$ respectively, for $0 \le j \le d$.

Lemma 14.9 For $0 \le r \le d$,

Proof. To get the equation on the left in (13), compare the entries of each side using Lemma 12.11(i). In the equation on the left in (13), multiply each side on the left and on the right by $Q$ and simplify the result using Lemma 14.2(i). This gives the equation on the left in (14). To obtain the equation on the left in (15), compare the entries of each side using Lemma 10.11(i). The equation on the left in (16) follows from $QH_r = B_rQ$ and Lemma 14.4(iii), together with the fact that $H_r$ and $K^*$ commute since they are both diagonal.
(iii), (iv) By Lemma 12.11. ∎

The $\Phi$-standard basis

We continue to discuss a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$. In this section we introduce the notion of a $\Phi$-standard basis.
Proof. The vector space $E^*_iV$ has dimension 1 and contains $E^*_i\xi$.

Proof. Let the integer $i$ be given. We show $E^*_i\xi \ne 0$. The vector space $E_0V$ has dimension 1, and $\xi$ is a nonzero vector in $E_0V$, so $\xi$ spans $E_0V$.

We give a characterization of a $\Phi$-standard basis.
Lemma 15.4 Let $\{u_i\}_{i=0}^d$ denote a sequence of vectors in $V$, not all 0. Then this sequence is a $\Phi$-standard basis if and only if both (i), (ii) hold below: (i) $u_i \in E^*_iV$ for $0 \le i \le d$; (ii) $\sum_{i=0}^d u_i \in E_0V$.

Proof. To prove the lemma in one direction, assume that $\{u_i\}_{i=0}^d$ is a $\Phi$-standard basis of $V$. By Definition 15.3 there exists a nonzero $\xi \in E_0V$ such that $u_i = E^*_i\xi$ for $0 \le i \le d$; (i) follows. Summing $\sum_{i=0}^d E^*_i = I$ and applying each side to $\xi$, we find that $\xi = \sum_{i=0}^d u_i$, and (ii) follows. We have now proved the lemma in one direction. To prove the lemma in the other direction, assume that $\{u_i\}_{i=0}^d$ satisfy (i) and (ii). Define $\xi = \sum_{i=0}^d u_i$ and observe $\xi \in E_0V$. Using (i) we find that $E^*_i\xi = u_i$ for $0 \le i \le d$.

Bilinear forms
In this section we recall some basic facts concerning bilinear forms on $V$. See [11, Section 8.5] for more information. By a bilinear form on $V$ we mean a map $\langle\,,\,\rangle : V \times V \to \mathbb{F}$ that satisfies the following four conditions for $u, v, w \in V$ and $\alpha \in \mathbb{F}$: (i) $\langle u+v,w\rangle = \langle u,w\rangle + \langle v,w\rangle$; (ii) $\langle \alpha u,v\rangle = \alpha\langle u,v\rangle$; (iii) $\langle u,v+w\rangle = \langle u,v\rangle + \langle u,w\rangle$; (iv) $\langle u,\alpha v\rangle = \alpha\langle u,v\rangle$. Let $\langle\,,\,\rangle$ denote a bilinear form on $V$. We abbreviate $\|v\|^2 = \langle v,v\rangle$ for $v \in V$. The following are equivalent: (i) there exists a nonzero $u \in V$ such that $\langle u,v\rangle = 0$ for all $v \in V$; (ii) there exists a nonzero $v \in V$ such that $\langle u,v\rangle = 0$ for all $u \in V$. The form $\langle\,,\,\rangle$ is said to be degenerate whenever (i), (ii) hold and nondegenerate otherwise.
Let $\gamma$ denote an antiautomorphism of $\mathrm{End}(V)$. Then there exists a nonzero bilinear form $\langle\,,\,\rangle$ on $V$ such that $\langle Au,v\rangle = \langle u,A^\gamma v\rangle$ for $u,v \in V$ and $A \in \mathrm{End}(V)$. The form is unique up to multiplication by a nonzero scalar. The form is nondegenerate. We refer to this form as a bilinear form on $V$ associated with $\gamma$.
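A concrete instance of the association between antiautomorphisms and bilinear forms (a sketch with assumed random data, not from the paper): on $V = \mathbb{R}^3$ the transpose map is an antiautomorphism of $\mathrm{End}(V)$, and the standard dot product is an associated form:

```python
import numpy as np

# The transpose map is an antiautomorphism: (BC)^t = C^t B^t, and the
# standard dot product satisfies <Bu, v> = <u, B^t v>.
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
u = rng.standard_normal(3)
v = rng.standard_normal(3)

assert np.allclose((B @ C).T, C.T @ B.T)                 # antiautomorphism
assert np.isclose(np.dot(B @ u, v), np.dot(u, B.T @ v))  # associated form
```
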
For the rest of this section let , denote a nondegenerate bilinear form on V .
Definition 16.1 For bases {u i } d i=0 and {v i } d i=0 of V , the inner product matrix from {u i } d i=0 to {v i } d i=0 is the matrix in Mat d+1 (F) that has (i, j)-entry u i , v j for 0 ≤ i, j ≤ d.
Referring to Definition 16.1, the inner product matrix from $\{u_i\}_{i=0}^d$ to $\{v_i\}_{i=0}^d$ is invertible, since the form is nondegenerate.

Definition 16.2 The form $\langle\,,\,\rangle$ is said to be symmetric whenever $\langle u,v\rangle = \langle v,u\rangle$ for $u,v \in V$.

Definition 16.3 Assume that $\langle\,,\,\rangle$ is symmetric. Then two bases $\{u_i\}_{i=0}^d$, $\{v_i\}_{i=0}^d$ of $V$ are said to be dual with respect to $\langle\,,\,\rangle$ whenever $\langle u_i,v_j\rangle = \delta_{i,j}$ for $0 \le i,j \le d$.
Lemma 16.4 Assume that $\langle\,,\,\rangle$ is symmetric. Then each basis of $V$ has a unique dual with respect to $\langle\,,\,\rangle$.
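For the standard symmetric form the dual basis can be written down explicitly, which illustrates both existence and uniqueness. A sketch (our toy example): if the basis vectors $u_i$ are the columns of $U$, the condition $\langle u_i,v_j\rangle = \delta_{i,j}$ reads $U^TV = I$, forcing $V = (U^T)^{-1}$:

```python
import numpy as np

# Dual basis for the standard dot product on R^3: U^T V = I, so V = (U^T)^{-1}.
U = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
V = np.linalg.inv(U.T)
assert np.allclose(U.T @ V, np.eye(3))   # the bases are dual; V is unique
```
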
Lemma 16.5 Assume that $\langle\,,\,\rangle$ is symmetric. Let $\{u_i\}_{i=0}^d$ and $\{v_i\}_{i=0}^d$ denote bases of $V$. Then the following are the same: (i) the inner product matrix from $\{u_i\}_{i=0}^d$ to $\{v_i\}_{i=0}^d$; (ii) the inner product matrix from $\{u_i\}_{i=0}^d$ to $\{u_i\}_{i=0}^d$, times the transition matrix from $\{u_i\}_{i=0}^d$ to $\{v_i\}_{i=0}^d$.

Proof. Routine linear algebra. ∎

The dual $\Phi$-standard basis

We return our attention to a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$. In this section we introduce the notion of a dual $\Phi$-standard basis of $V$. Recall the antiautomorphism $\dagger$ of $\mathrm{End}(V)$ from Definition 5.1. For the rest of the paper $\langle\,,\,\rangle$ denotes a bilinear form on $V$ associated with $\dagger$. By the construction, for $A \in \mathrm{End}(V)$ we have $\langle Au,v\rangle = \langle u,A^\dagger v\rangle$ for $u,v \in V$. Recall the algebra $M$ from Definition 3.2.
Shortly we will describe the dual $\Phi$-standard bases. We will use the following definition.
Definition 17.6 Note that for nonzero $\xi, \zeta \in E_0V$ the following are equivalent: We say that $\xi$, $\zeta$ are partners whenever they satisfy (i)-(iii).
Lemma 17.7 For nonzero $\xi, \zeta$ in $E_0V$ the following are equivalent: (i) the bases $\{E^*_i\xi\}_{i=0}^d$ and $\{k_i^{-1}E^*_i\zeta\}_{i=0}^d$ are dual with respect to $\langle\,,\,\rangle$; (ii) $\xi$, $\zeta$ are partners.
Proof. The vector space $E_0V$ has dimension 1, so there exists a scalar $\alpha$ such that $\zeta = \alpha\xi$. By this and Lemma 17.2, (i) holds if and only if $\alpha\|\xi\|^2 = \nu$. By this and Definition 17.6 we obtain the result. ∎

Lemma 17.8 A given basis of $V$ is a dual $\Phi$-standard basis if and only if it has the form $\{k_i^{-1}E^*_i\zeta\}_{i=0}^d$, where $\zeta$ is a nonzero vector in $E_0V$.

Proof. Use Lemma 17.7. ∎

We mention a result for later use.
Lemma 17.9

Proof. Using $E_0\xi = \xi$, $E^*_0\xi^* = \xi^*$ and Lemma 13.4(i),

In this section we display the matrices that represent $A_i$, $A^*_i$, $E_i$, $E^*_i$ with respect to these bases. We display the inner product matrices between these bases. We display the transition matrices between these bases. We introduce some notation. For $0 \le i,j \le d$ define $\Delta_{i,j} \in \mathrm{Mat}_{d+1}(\mathbb{F})$ that has $(i,j)$-entry 1 and all other entries 0.
Proposition 18.1 In the table below we give some matrix representations. For $0 \le r \le d$, each entry in the table is the matrix that represents the map in the given column with respect to the basis in the given row:

Proof (of Proposition 18.2). Note that $\zeta$ (resp. $\zeta^*$) is a nonzero scalar multiple of $\xi$ (resp. $\xi^*$). Using this and Lemmas 17.2, 17.9 we represent the inner products in terms of $P$, $Q$, $K$, $K^*$. Now eliminate $P$, $Q$ using Lemma 14.4 to get the result. ∎

Proposition 18.3 In the table below we give the transition matrices between the four bases in (21). Each entry of the table is the transition matrix from the basis in the given row to the basis in the given column:

Proof. Use Lemma 16.5 and Proposition 18.2. ∎

We continue to discuss a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$.
Definition 19.1 We say that $\Phi$ is $P$-polynomial whenever $p^h_{ij}$ is zero (resp. nonzero) if one of $h, i, j$ is greater than (resp. equal to) the sum of the other two $(0 \le h,i,j \le d)$.
For the moment, assume that $d \ge 1$ and $\Phi$ is $P$-polynomial. Then the first intersection matrix $B_1$ is tridiagonal, with $(i,i-1)$-entry $c_i$ $(1 \le i \le d)$, $(i,i)$-entry $a_i$ $(0 \le i \le d)$, and $(i,i+1)$-entry $b_i$ $(0 \le i \le d-1)$. Moreover $c_i \ne 0$ for $1 \le i \le d$ and $b_i \ne 0$ for $0 \le i \le d-1$. So $B_1$ is irreducible tridiagonal. Shortly we will show that this feature of $B_1$ characterizes the $P$-polynomial property.
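The irreducible tridiagonal shape of $B_1$ can be seen on the 4-cycle scheme (our toy example, which is $P$-polynomial since $R_i$ is defined by graph distance):

```python
import numpy as np

# B_1 for the 4-cycle scheme, with (h,j)-entry p^h_1j.
n, d = 4, 2
dist = lambda y, z: min((y - z) % n, (z - y) % n)
A = [np.array([[1.0 if dist(y, z) == i else 0.0 for z in range(n)]
               for y in range(n)]) for i in range(d + 1)]

def p(h, i, j):
    y, z = next((y, z) for y in range(n) for z in range(n) if dist(y, z) == h)
    return (A[i] @ A[j])[y, z]

B1 = np.array([[p(h, 1, j) for j in range(d + 1)] for h in range(d + 1)])
assert np.array_equal(B1, [[0.0, 2.0, 0.0], [1.0, 0.0, 1.0], [0.0, 2.0, 0.0]])
# Tridiagonal, with nonzero sub- and superdiagonals: irreducible tridiagonal.
assert all(B1[i, j] == 0 for i in range(3) for j in range(3) if abs(i - j) > 1)
assert all(B1[i, j] != 0 for i in range(3) for j in range(3) if abs(i - j) == 1)
```
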
Lemma 19.2 Assume that $d \ge 1$ and $\Phi$ is $P$-polynomial. Then

Proof. By Lemma 10.1 and the comments below Definition 19.1. ∎

For elements $A$, $B$ in any algebra, we say that $B$ is an affine transformation of $A$ whenever there exist scalars $\alpha, \beta$ such that $\alpha \ne 0$ and $B = \alpha A + \beta I$.
Proposition 19.3 Assume that $d \ge 1$. Then for $A \in \mathrm{End}(V)$ the following are equivalent: (i) $\Phi$ is $P$-polynomial and $A$ is an affine transformation of $A_1$; (ii) for $0 \le i \le d$ there exists $f_i \in \mathbb{F}[x]$ such that $\deg(f_i) = i$ and $A_i = f_i(A)$.
(ii) $\Rightarrow$ (i) The elements $\{A_i\}_{i=0}^d$ are linearly independent by Lemma 7.7, so the elements $\{A^i\}_{i=0}^d$ are linearly independent. Pick integers $i, j$ $(0 \le i,j \le d)$ such that $i + j \le d$. We show (22). Define a polynomial $g = f_if_j - \sum_{h=0}^d p^h_{ij}f_h$. The degree of $g$ is at most $d$, and $g(A) = 0$. Therefore $g = 0$. We have shown (22). In (22) we examine the degrees; by this and Lemma 10.11(i), we find that $\Phi$ is $P$-polynomial. Since $A_1 = f_1(A)$ and $\deg(f_1) = 1$, $A$ is an affine transformation of $A_1$. ∎

Leonard pairs and Leonard systems
In this section we recall the notion of a Leonard pair and a Leonard system.
Definition 20.1 [13, Definition 1.1] By a Leonard pair on $V$ we mean an ordered pair $A$, $A^*$ of elements in $\mathrm{End}(V)$ that satisfy the following (i), (ii).
(i) There exists a basis of $V$ with respect to which the matrix representing $A$ is irreducible tridiagonal and the matrix representing $A^*$ is diagonal.
(ii) There exists a basis of $V$ with respect to which the matrix representing $A^*$ is irreducible tridiagonal and the matrix representing $A$ is diagonal.
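Definition 20.1 can be illustrated concretely. The following sketch (our toy example, a Leonard pair arising from the Hamming scheme $H(2,2)$) exhibits both conditions: $A$ is irreducible tridiagonal and $A^*$ diagonal in the standard basis, and the roles swap in a suitably ordered eigenbasis of $A$:

```python
import numpy as np

# A toy Leonard pair: A irreducible tridiagonal, A* = diag(2, 0, -2).
A = np.array([[0.0, 2.0, 0.0], [1.0, 0.0, 1.0], [0.0, 2.0, 0.0]])
Astar = np.diag([2.0, 0.0, -2.0])

# Columns of S are eigenvectors of A for eigenvalues 2, 0, -2.
S = np.array([[1.0, 1.0, 1.0], [1.0, 0.0, -1.0], [1.0, -1.0, 1.0]])
assert np.allclose(A @ S, S @ np.diag([2.0, 0.0, -2.0]))

# In this eigenbasis of A, the matrix representing A* is irreducible
# tridiagonal, so condition (ii) of Definition 20.1 holds as well.
M = np.linalg.inv(S) @ Astar @ S
assert np.allclose(M, [[0.0, 1.0, 0.0], [2.0, 0.0, 2.0], [0.0, 1.0, 0.0]])
```
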
Let $A$, $A^*$ denote a Leonard pair on $V$. By [13, Lemma 1.3] each of $A$, $A^*$ is multiplicity-free. Note that $(A^*; \{E^*_i\}_{i=0}^d; A; \{E_i\}_{i=0}^d)$ is a Leonard system on $V$.
Lemma 20.3 [14, Lemma 9.2] The following hold:

Lemma 20.4 [14, Theorem 6.1 and Lemma 6.3] There exists a unique antiautomorphism $\dagger$ of $\mathrm{End}(V)$ that fixes each of $A$, $A^*$. Moreover $\dagger$ fixes each of $E_i$, $E^*_i$ for $0 \le i \le d$.
$= 0$, for a contradiction. The result follows. (ii) Apply (i) to $\Phi^*$. ∎

Lemma 4.8 Each of the following is a generating set for the algebra $\mathrm{End}(V)$: (i) $E^*_0$ and $M$; (ii) $E_0$ and $M^*$; (iii) $M$ and $M^*$.

Proof. (i) By Definition 3.2 and Lemma 4.7(i).

(ii) Apply (i) to $\Phi^*$. (iii), (iv) For the equations in (i) and (ii), apply $\dagger$ to each side and use Lemma 7.3. ∎

Lemma 7.5 We have $A_0 = I$.

Proof. By Lemma 6.5, $I^\rho = E^*_0$. In this equation, apply $\rho^*$ to each side and evaluate the result using Lemma 6.4 and Definition 7.1. ∎

Lemma 7.6 We have $\sum_{i=0}^d A_i = \nu E_0$.

Proof. In the equation $\sum_{i=0}^d E^*_i = I$, apply $\rho^*$ to each side and evaluate the result using Definition 7.1 along with Lemma 6.5 applied to $\Phi^*$. ∎

Lemma 7.7 The elements $\{A_i\}_{i=0}^d$ form a basis of the vector space $M$.

Proof. By Lemmas 6.4, 7.2 and since $\{E^*_i\}_{i=0}^d$ form a basis of the vector space $M^*$. ∎

The scalars $k_i$

Definition 8.1 For $0 \le i \le d$ let $k_i$ denote the eigenvalue of $A_i$ corresponding to $E_0$.

Lemma 8.2 For $0 \le i \le d$ the following hold:
$E^*_iV$ has dimension 1 by Lemma 15.1, so $E^*_i\xi$ is nonzero. The remaining assertions are clear. ∎

Definition 15.3 By a $\Phi$-standard basis of $V$ we mean a sequence $\{E^*_i\xi\}_{i=0}^d$, where $\xi$ is a nonzero vector in $E_0V$.

By this and Lemma 8.3(ii) we obtain the result. ∎

Four bases of $V$

We continue to discuss a symmetric idempotent system $\Phi = (\{E_i\}_{i=0}^d; \{E^*_i\}_{i=0}^d)$ on $V$. Recall the elements $A_i$ from Definition 7.1. Recall the matrices $K$, $K^*$, $U$, $U^*$ from Definitions 14.1, 14.3, and the matrices $B_i$, $B^*_i$, $H_i$, $H^*_i$ from Definitions 14.7, 14.8. Recall the bilinear form $\langle\,,\,\rangle$ from above Lemma 17.1. Throughout this section, we fix nonzero vectors $\xi, \zeta \in E_0V$ and $\xi^*, \zeta^* \in E^*_0V$, and consider the following four bases of $V$: (i) $\{E^*_i\xi\}_{i=0}^d$; (ii) $\{k_i^{-1}E^*_i\zeta\}_{i=0}^d$; (iii) $\{E_i\xi^*\}_{i=0}^d$; (iv) $\{(k^*_i)^{-1}E_i\zeta^*\}_{i=0}^d$. (21)

Proof (of Proposition 18.1). We first consider the matrices representing $A_r$. The matrix representing $A_r$ with respect to $\{E^*_i\xi\}_{i=0}^d$ is obtained using Lemma 11.1(i) and Definition 14.7. The matrix representing $A_r$ with respect to $\{k_i^{-1}E^*_i\zeta\}_{i=0}^d$ is obtained using Lemmas 10.11(i) and 11.1(i). The matrices representing $A_r$ with respect to $\{E_i\xi^*\}_{i=0}^d$ and $\{(k^*_i)^{-1}E_i\zeta^*\}_{i=0}^d$ are obtained using Lemma 12.4(i) and Definition 14.8. Applying these results to $\Phi^*$ we obtain the matrices representing $A^*_r$. Next we consider the matrices representing $E_r$. The matrix representing $E_r$ with respect to $\{E^*_i\xi\}_{i=0}^d$ is obtained using Lemmas 13.2(iii), 12.3, 14.4(i), (iii). Multiply this matrix on the left (resp. right) by $K$ (resp. $K^{-1}$) and use Lemma 14.6(iii) to obtain the matrix representing $E_r$ with respect to $\{k_i^{-1}E^*_i\zeta\}_{i=0}^d$. The matrices representing $E_r$ with respect to $\{E_i\xi^*\}_{i=0}^d$ and $\{(k^*_i)^{-1}E_i\zeta^*\}_{i=0}^d$ are routinely obtained. Applying these results to $\Phi^*$ we obtain the matrices representing $E^*_r$. ∎

Proposition 18.2 In the table below we give the inner product matrices between the bases in (21). Each entry of the table is the inner product matrix from the basis in the given row to the basis in the given column:

For $0 \le i \le d$ pick a nonzero $v_i \in E_iV$. Then $\{v_i\}_{i=0}^d$ form a basis of $V$. We say that the ordering $\{E_i\}_{i=0}^d$ is standard whenever $\{v_i\}_{i=0}^d$ satisfies Definition 20.1(ii). In this case, the ordering $\{E_{d-i}\}_{i=0}^d$ is standard and no further ordering is standard. A standard ordering of the primitive idempotents of $A^*$ is similarly defined.

Definition 20.2 [13, Definition 1.4] By a Leonard system on $V$ we mean a sequence $(A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ (23) of elements in $\mathrm{End}(V)$ that satisfy the following (i)-(iii): (i) $A$, $A^*$ is a Leonard pair on $V$; (ii) $\{E_i\}_{i=0}^d$ is a standard ordering of the primitive idempotents of $A$; (iii) $\{E^*_i\}_{i=0}^d$ is a standard ordering of the primitive idempotents of $A^*$.

For the rest of this section let $(A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ denote a Leonard system on $V$.