Skew polynomial rings

Skew polynomial rings are considered with a multiplication defined by the commutation formula xa = a_1x + a_2x^2 + ... + a_rx^r, where K is a (skew) field and the a_i depend on a ∈ K. Under certain conditions these rings turn out to be non-commutative principal ideal rings with unique factorization.

The elements are the polynomials (I) f = k_0 + k_1x + ... + k_nx^n (k_i ∈ K, polynomial of degree n), and the multiplication is completely determined by the commutation formula. This ring was first described by ORE [8], who showed that the ring K[x; α, δ] satisfies a right division algorithm, hence every left ideal of this ring is principal (left principal ideal domain, left PID for short). Because every element (polynomial) has a finite prime factorization, we conclude by [4], Theorem 5.5, Corollary 1, that this ring is a unique factorization domain (UFD for short). The notion of a UFD is recalled by the following definitions: 1. Two elements a, b of a ring R are said to be similar if R/aR ≅ R/bR as right R-modules; in an integral domain (ring with a unit element and without zero-divisors) this implies R/Ra ≅ R/Rb (cf. JACOBSON [7], p. 33 or COHN [4], p. 314). 2. A UFD is an integral domain such that every non-unit has a factorization into primes; two different factorizations of the same element have the same number of prime factors, and the factors are similar in pairs (cf. COHN [4], p. 317).
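Ore's linear commutation rule xa = (aα)x + aδ already determines the whole multiplication, and it can be made concrete in a few lines. The following sketch is ours, not the paper's: for a runnable illustration we take K = F_4 (a commutative field, although the paper allows K to be skew), α the Frobenius map a → a^2, and δ the α-derivation a → a^2 + a; all function names are hypothetical.

```python
def gmul(a, b):                  # multiply in F_4 = {0, 1, w, w^2}, coded 0..3, with w^2 = w + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    return (r ^ 0b111 if r & 4 else r) & 3

alpha = lambda a: gmul(a, a)     # Frobenius a -> a^2, an automorphism of F_4
delta = lambda a: alpha(a) ^ a   # an alpha-derivation of F_4 (our toy choice)

def padd(f, g):                  # coefficientwise sum; characteristic 2, so XOR
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def trim(f):                     # drop trailing zero coefficients
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def xmul(f):
    """x * f for f = sum f[i] x^i (coefficients on the left), using x a = (a alpha) x + a delta."""
    g = [0] * (len(f) + 1)
    for i, c in enumerate(f):
        g[i + 1] ^= alpha(c)
        g[i] ^= delta(c)
    return g

def pmul(f, g):
    """Product in K[x; alpha, delta]: expand f = sum c_i x^i and push each x^i through g."""
    res, xig = [0], list(g)
    for c in f:
        res = padd(res, [gmul(c, t) for t in xig])
        xig = xmul(xig)
    return trim(res)

prod = pmul([0, 1], [2])         # x * w  =  (w alpha) x + w delta  =  w^2 x + 1
```

Polynomials are coefficient lists [c_0, c_1, ...] with coefficients written on the left, matching the form (I) above.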

The term "field" will be used in the sense of "skew field", i.e. "not necessarily commutative division ring". In this paper we consider the ring R of polynomials over the field K in a single indeterminate x with the commutation formula xa = a_1x + a_2x^2 + ... + a_rx^r (r = 2, 3, ...), where the a_i depend on a. By the associative and distributive laws we obtain many relations between the mappings δ_i: a → a_i (i = 1, 2, ..., r). If we assume that δ_2, δ_3, ..., δ_r are right K-independent, then these relations can be simplified.
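The commutation formula can be tried out in the smallest case r = 2, where xa = a_1x + a_2x^2 with a_1 = aα and a_2 = aδα (as derived below). A minimal runnable sketch, again over F_4 with α the Frobenius map and δ: a → a^2 + a; this toy data and all identifiers are ours, not the paper's.

```python
def gmul(a, b):                  # multiply in F_4 = {0, 1, w, w^2}, coded 0..3, with w^2 = w + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    return (r ^ 0b111 if r & 4 else r) & 3

alpha = lambda a: gmul(a, a)     # Frobenius a -> a^2
delta = lambda a: alpha(a) ^ a   # a nilpotent alpha-derivation (index 2), our toy choice

def padd(f, g):                  # coefficientwise sum (characteristic 2: XOR)
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def xmul(f):
    """x * f using the paper's rule for r = 2:  x a = (a alpha) x + (a delta alpha) x^2."""
    g = [0] * (len(f) + 2)
    for i, c in enumerate(f):
        g[i + 1] ^= alpha(c)              # a_1 = a alpha
        g[i + 2] ^= alpha(delta(c))       # a_2 = a delta alpha
    return g

def pmul(f, g):
    """Product in R: expand f = sum c_i x^i and push each x^i through g."""
    res, xig = [0], list(g)
    for c in f:
        res = padd(res, [gmul(c, t) for t in xig])
        xig = xmul(xig)
    return trim(res)

xw = pmul([0, 1], [2])           # x * w  =  w^2 x + x^2  in this model
```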
In § 6 we derive that R satisfies a right Euclidean algorithm, hence R is a left PID. Because of the finite prime factorization, R is a UFD (Theorem 4, § 6). From the relations obtained by the associative law it can be derived that if α is the mapping a → a_1, i.e. a_1 = aα, then α is an endomorphism of K and δ_2: a → a_2 is an (α^2, α)-derivation of K.
Assume further that α is an automorphism of K with inverse β and put a_2 = aδα; then δ is an α-derivation of K. In § 2 we derive a_{k+1} = aδ^kα (k = 1, 2, ...), hence from 0 = a_{r+1} = aδ^rα we observe that δ is a nilpotent α-derivation of K, and the index of nilpotence of δ is r.
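That the toy map δ: a → a^2 + a on F_4 really is a nilpotent α-derivation of index 2 (so r = 2 in this model) can be checked exhaustively; we use the convention (ab)δ = a(bδ) + (aδ)(bα). The data below are our own illustration, not the paper's.

```python
def gmul(a, b):                  # multiply in F_4 = {0, 1, w, w^2}, coded 0..3, with w^2 = w + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    return (r ^ 0b111 if r & 4 else r) & 3

alpha = lambda a: gmul(a, a)     # Frobenius a -> a^2, an automorphism
delta = lambda a: alpha(a) ^ a   # candidate alpha-derivation

K4 = range(4)

# Leibniz rule in the convention (ab)delta = a (b delta) + (a delta)(b alpha)
ok_leibniz = all(delta(gmul(a, b)) == gmul(a, delta(b)) ^ gmul(delta(a), alpha(b))
                 for a in K4 for b in K4)

# nilpotent of index exactly 2: delta o delta = 0 but delta != 0
ok_nilpotent = all(delta(delta(a)) == 0 for a in K4) and any(delta(a) for a in K4)
```

With a_1 = aα and a_2 = aδα this gives a_3 = aδ^2α = 0, as the text requires for r = 2.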
It is rather remarkable that y = x^{-1} again satisfies a linear Ore-rule, ya = (aβ)y - aβδ. In the sections 3, 4 and 5 we investigate the polynomial units of R = K[x]. First we give examples of splitting up units as products of units of lower degree. In § 5 we derive in a few lines the general decomposition theorem for units. Every unit can be written as a product of units of degree ≤ r-1, for r ≠ 2 even as a product of units of degree ≤ r-2. The group G of units is multiplicatively generated by r isomorphic fields of units K_i of the form x^iKy^i (i = 0, 1, 2, ..., r-1). For r = 2 the ring R is the free product of K and K_1 = xKx^{-1} (§ 5, Theorem 3). If r ≥ 3, then the ring R is a proper homomorphic image of the free product of the fields K and K_1 over the constant field O (Theorem 3).

We shall now define a multiplication on the additive group formed by the polynomials (I), so that this group becomes a ring. We assume that the multiplication of polynomials is associative and two-sided distributive.
It is clear that, due to the distributive property, it suffices to define the product x·a. We assume the commutation formula
(3) xa = a_1x + a_2x^2 + ... + a_rx^r.
This leads to the relations
(4) (a+b)δ_i = aδ_i + bδ_i, i = 1, 2, ..., r,
hence the mapping δ_i: a → a_i (i = 1, 2, ..., r) is an endomorphism of the additive group of the field K.
In the special case of a skew polynomial ring K[x; α, 0] we have δ_i = 0 (i ≥ 2) and δ_1 = α.
Now the special properties of the mappings δ_i will be discussed further. The principal formula (3) yields, by induction on k,
(5) x^k a = Σ_{i=k}^{kr} a_{(k,i)} x^i, k = 1, 2, ....
Because of
(6) a_{(k,i)} = 0 for i > kr and for i < k,
formula (5) can be rewritten in the form
(7) x^k a = Σ_{i=1}^{∞} a_{(k,i)} x^i.
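The support condition (6) can be observed numerically: iterating the r = 2 rule x a = (aα)x + (aδα)x^2 in the F_4 toy model produces the coefficients a_{(k,i)}, and the entries with i < k indeed vanish. A sketch under these assumptions (helper names are ours):

```python
def gmul(a, b):                  # multiply in F_4 = {0, 1, w, w^2}, coded 0..3, with w^2 = w + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    return (r ^ 0b111 if r & 4 else r) & 3

alpha = lambda a: gmul(a, a)     # Frobenius a -> a^2
delta = lambda a: alpha(a) ^ a   # nilpotent alpha-derivation of index 2

def xmul(f):                     # left-multiply by x, rule for r = 2
    g = [0] * (len(f) + 2)
    for i, c in enumerate(f):
        g[i + 1] ^= alpha(c)
        g[i + 2] ^= alpha(delta(c))
    return g

def xpow_a(k, a):
    """Coefficient list of x^k * a, i.e. the coefficients a_(k,i) of formula (5)."""
    f = [a]
    for _ in range(k):
        f = xmul(f)
    return f

coeffs = xpow_a(2, 2)            # x^2 * w in the toy model
```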
Important relations are found from the associative law (xa)b = x(ab), all a, b ∈ K. One has by (7)
(xa)b = Σ_k a_k (x^k b) = Σ_i (Σ_k a_k b_{(k,i)}) x^i and x(ab) = Σ_i (ab)_i x^i.
Equate coefficients:
(8) (ab)_i = Σ_{k=1}^{r} a_k b_{(k,i)},
e.g.
(9) (ab)_1 = a_1 b_1,
(10) (ab)_2 = a_1 b_2 + a_2 b_{(2,2)}.
For i > r the left-hand side of (8) vanishes, hence Σ_{k=1}^{r} a_k b_{(k,i)} = 0 for i > r. To be able to continue the calculations we require two assumptions (B) and (C).
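The relations forced by the associative law can be verified exhaustively in the F_4 model: (xa)b = x(ab) for all a, b, the multiplicativity of α from (9), and the (α^2, α)-derivation property of δ_2 = δα from (10) (in F_4 we have α^2 = id). All names below are our own.

```python
def gmul(a, b):                  # multiply in F_4 = {0, 1, w, w^2}, coded 0..3, with w^2 = w + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    return (r ^ 0b111 if r & 4 else r) & 3

alpha = lambda a: gmul(a, a)
delta = lambda a: alpha(a) ^ a

def padd(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def xmul(f):                     # rule for r = 2:  x a = (a alpha) x + (a delta alpha) x^2
    g = [0] * (len(f) + 2)
    for i, c in enumerate(f):
        g[i + 1] ^= alpha(c)
        g[i + 2] ^= alpha(delta(c))
    return g

def pmul(f, g):
    res, xig = [0], list(g)
    for c in f:
        res = padd(res, [gmul(c, t) for t in xig])
        xig = xmul(xig)
    return trim(res)

x = [0, 1]
ok_assoc = all(pmul(pmul(x, [a]), [b]) == pmul(x, pmul([a], [b]))
               for a in range(4) for b in range(4))
ok9 = all(alpha(gmul(a, b)) == gmul(alpha(a), alpha(b))          # (ab)alpha = (a alpha)(b alpha)
          for a in range(4) for b in range(4))
d2 = lambda a: alpha(delta(a))                                    # delta_2 = delta alpha
ok10 = all(d2(gmul(a, b)) == gmul(alpha(a), d2(b)) ^ gmul(d2(a), b)
           for a in range(4) for b in range(4))                   # b alpha^2 = b here
```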
ASSUMPTION (B). If δ_i: a → a_i, then δ_2, ..., δ_r are right K-independent, i.e. Σ_{k=2}^{r} (uδ_k)c_k = 0 (all u ∈ K) implies c_2 = ... = c_r = 0. This assumption is satisfied for instance if for each l there exists an element v^{(l)} as in (14). In fact, if (14) holds, let c^{(l)} be the first non-zero coefficient and put u = v^{(l)}. Because of assumption (B), relation (13) immediately yields the relations (15), which are thus a consequence of the matrix relation (16) (k = 2, ..., r; i = r+1, ..., 2r).
If α = δ_1 were the zero endomorphism, then it would easily follow from (8) that δ_2 = δ_3 = ... = δ_r = 0. This contradicts assumption (B), so aα = 0 only for a = 0, i.e. α is necessarily a monomorphism of K. The inverse β of α is defined on Kα.
A further assumption (C) will prove to be important.

ASSUMPTION (C). δ_1 = α is an automorphism of K, with inverse β.
We shall also introduce the mapping δ by a_2 = aδα. In virtue of (10), δ satisfies the condition
(ab)δ = a(bδ) + (aδ)(bα),
in other words, δ is an α-derivation of K. The constant field O is defined as the subfield of δ-constants, so O = {a ∈ K | aδ = 0}.
Combining (20) and (29), after cancelling an application of α, and using formula (28), we conclude: in virtue of a_{r+1} = aδ^rα = 0 we see δ^r = 0, i.e. δ is a nilpotent α-derivation of index r. Note that (31) is a consequence of the nilpotence of δ, cf. [9], Theorem 1. The results obtained may now be summed up as Theorem 1.

We observe that (x+a) with aδ = -1 is a linear unit. The equation aδ = -1 had been considered in [9]; we found a = (bδ^{r-1})^{-1}(bδ^{r-2}) for all b ∈ K with bδ^{r-1} ≠ 0. To find quadratic units we write them in the form ax^2 + bx + c. After long calculations we find a quadratic unit in which g ∈ K satisfies the stated relation; the last equation has a solution g = -(tδ)^{-1}t for all t ∈ K with tδ ≠ 0 and tδ^3 = 0.
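The linear units can be exhibited concretely in the F_4 model, where r = 2: the field has characteristic 2, so aδ = -1 means aδ = 1, which holds for a = w and a = w^2 (and agrees with a = (bδ)^{-1}b for bδ ≠ 0). One then checks (x+w)(x+w^2) = (x+w^2)(x+w) = 1. A sketch under the same toy assumptions as before:

```python
def gmul(a, b):                  # multiply in F_4 = {0, 1, w, w^2}, coded 0..3, with w^2 = w + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    return (r ^ 0b111 if r & 4 else r) & 3

alpha = lambda a: gmul(a, a)
delta = lambda a: alpha(a) ^ a

def padd(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def xmul(f):                     # rule for r = 2
    g = [0] * (len(f) + 2)
    for i, c in enumerate(f):
        g[i + 1] ^= alpha(c)
        g[i + 2] ^= alpha(delta(c))
    return g

def pmul(f, g):
    res, xig = [0], list(g)
    for c in f:
        res = padd(res, [gmul(c, t) for t in xig])
        xig = xmul(xig)
    return trim(res)

w, w2 = 2, 3
# a delta = -1 = 1 in characteristic 2 holds for a = w:
inv1 = pmul([w, 1], [w2, 1])     # (x + w)(x + w^2)
inv2 = pmul([w2, 1], [w, 1])     # (x + w^2)(x + w)
```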
It is surprising that the quadratic unit in (34) can be decomposed into a linear unit and a constant term; after complicated calculations we obtain the decomposition. Of course one might be inclined to believe that all quadratic units can be written as the product of a linear unit and a constant term. However, quadratic units with quadratic complementary units cannot in general be decomposed into linear units and constant terms. In § 5 we derive a general decomposition theorem on units; in the following section we make some preparations for this theorem. This expresses the fact that the mapping a → xay is an isomorphism of the field K; in fact it is the automorphism generated by the derivation δ (the infinitesimal automorphism) by the Taylor formula.
Unless stated otherwise we always assume that the ring R satisfies all the assumptions of Theorem 1 (especially Assumption (C)).
Because of y = x^{-1} we have xay = a_1 + a_2x + ... + a_rx^{r-1}, and it is interesting to consider the units of this form. It is important to notice that all units of the form x^n a y^n (where n is a fixed integer) constitute a (skew) field K_n isomorphic with the coefficient field K. Of course K_r = K, so that the polynomial ring R contains (r-1) isomorphic copies K_1, K_2, ..., K_{r-1} of K.
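The fields K_n can be watched at work in the F_4 model: x a x^{-1} = aα + (aδα)x is a polynomial of degree ≤ 1, it is a unit with inverse x a^{-1} x^{-1}, and conjugation by x^2 is the identity, illustrating K_r = K for r = 2. A sketch (all identifiers are ours):

```python
def gmul(a, b):                  # multiply in F_4 = {0, 1, w, w^2}, coded 0..3, with w^2 = w + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    return (r ^ 0b111 if r & 4 else r) & 3

alpha = lambda a: gmul(a, a)
delta = lambda a: alpha(a) ^ a

def padd(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def xmul(f):                     # rule for r = 2
    g = [0] * (len(f) + 2)
    for i, c in enumerate(f):
        g[i + 1] ^= alpha(c)
        g[i + 2] ^= alpha(delta(c))
    return g

def pmul(f, g):
    res, xig = [0], list(g)
    for c in f:
        res = padd(res, [gmul(c, t) for t in xig])
        xig = xmul(xig)
    return trim(res)

def xpow_a(k, a):
    f = [a]
    for _ in range(k):
        f = xmul(f)
    return f

def conj(a):
    """x a x^{-1} = a_1 + a_2 x = (a alpha) + (a delta alpha) x, of degree <= r - 1."""
    return trim([alpha(a), alpha(delta(a))])

x = [0, 1]
INV = [0, 1, 3, 2]               # multiplicative inverses in F_4 (index 0 unused)
ok_conj = all(pmul(conj(a), x) == pmul(x, [a]) for a in range(4))      # (x a x^-1) x = x a
ok_inv = all(pmul(conj(a), conj(INV[a])) == [1] for a in (1, 2, 3))    # units in K_1
ok_period = all(trim(xpow_a(2, a)) == trim([0, 0, a]) for a in range(4))  # x^2 a = a x^2
```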
Finally we remark that the quadratic unit U_{r-2}(a) cannot in general be written as the product of a linear unit and a constant term. By (40) and (43) we observe that in the case r = 2 all the E_i(e^{(i)}) are elements of K or linear units. For r ≥ 3 we found in the preceding section that E_0, E_1, ..., E_{r-2} were units of degree 0, 1, 2, ..., r-2, and E_{r-1} was a unit of degree r-1 that could be written as the product of a linear unit and an element of K (cf. (42)). The results of § 4 and § 5 may be stated as

THEOREM 3. Let R be the polynomial ring satisfying all the assumptions of Theorem 1. Let K_n ⊂ R be the (skew) field of all units x^n a y^n (a ∈ K; n = 0, 1, 2, ..., r-1; K_0 = K). Then K_n is an isomorphic copy of K, and further: (i) every unit of R can be written as a product of units from the fields K_n; (ii) for r > 2 every unit of R can be written as a product of units of degree ≤ r-2, for r = 2 as a product of linear units; (iii) the fields K and K_1 = xKx^{-1} generate the whole ring R; (iv) if r = 2, then the ring R is the free product over the constant field O of the fields K and K_1; (v) if r > 2, then the ring R is a proper homomorphic image of the free product of the fields K and K_1 over the constant field O.
Proof. In the beginning of this section we proved (i) and (ii). Statement (iii) follows from the fact that there are polynomial units of every degree, so that an arbitrary polynomial of R can be written as a sum of units, hence as a sum of products of elements from the K_i (i = 0, 1, ..., r-1). However, the elements of K_2, K_3, ..., K_{r-1} can also be written as sums of products of elements of K and K_1. To show this, let g ∈ K satisfy gδ = 1 (such an element exists, cf. § 3); then we obtain the required expressions with certain coefficients a_{(m,m+k)} ∈ K. Consequently R is generated by the fields K and K_1, and the polynomial ring R is a homomorphic image of the free product of K and K_1 over the common (constant) field O. This image may be proper or not. To prove statement (iv) we remark in the first place that K possesses a nilpotent α-derivation δ of index 2, hence by Theorem 3 of [9] every element a ∈ K can be written uniquely in the forms a = η + ϱz = σ + zτ (η, ϱ, σ, τ ∈ O, zδ = 1), i.e. K/O is a left and right quadratic extension of the constant field O with basis 1 and z (zδ = 1). Now statement (iv) follows from the remark that if R were a proper homomorphic image of the free product of K and K_1, then R would be a field (COHN [3], Theorem 8), which is of course a contradiction; so R is the free product itself. Statement (v), however, can be proved by the inverse method. If R were the free product itself of K and K_1, then every unit in R would be a monomial unit, i.e. every unit would be a product of elements of K and K_1 (COHN [2], Theorem 2.6). By (55) this is a contradiction, thus R is not the free product itself but a proper homomorphic image (r ≥ 3). This remark completes the proof of Theorem 3.
COROLLARY 3.1. In the case r = 2 or 3 every unit can be written as a product of linear units; in the case r = 4 every unit can be written as a product of linear and quadratic units, etc.

§ 6. THE PRIME FACTORIZATION

The quadratic commutation formula (r = 2) was first met by P. M. COHN ([3], p. 548) in a free product of two quadratic extensions. He proved that every left ideal is principal. It is not difficult to generalize his proof to an arbitrary r. In the proof we do not need the fact that the monomorphism α is also an automorphism of K.

LEMMA 1. Every left ideal of R is principal.

Proof. We note first that the results (vi) up to (ix) of Theorem 1 can no longer be used; in fact we only need formula (17). Now let I ≠ 0 be a left ideal of R, and let f = x^n + ax^{n-1} + bx^{n-2} + ... be a monic polynomial of least degree in I. Of course f is unique. Now we have Rf ⊆ I and
(59) xf = a_r x^{n+r-1} + (a_{r-1} + b_r) x^{n+r-2} + ...,
x^2 f = a_{(2,2r)} x^{n+2r-1} + (a_{(2,2r-1)} + b_{(2,2r)}) x^{n+2r-2} + ....
If there were a linear relation between the right-hand sides of (59), then there would also be one between the left-hand sides of (59), which is impossible because these are left K-independent. Hence the system (59) has one and only one solution (mod R_n) for x^{n+r-k}, k = 1, 2, ..., r-1. E.g.

hence
We can even obtain a relation of the form (60). So finally it follows that Rf contains monic polynomials of degree n, n+1, n+2, ..., n+r-1. After multiplication of those polynomials on the left by powers of x we conclude that Rf contains a monic polynomial of degree N for every N ≥ n, and an arbitrary element h of R can be written as
(61) h = qf + t, t ∈ R_{n-1}.
In particular, if h ∈ I, then t ∈ I, so t = 0 and h = qf ∈ Rf, which yields I ⊆ Rf. Because of Rf ⊆ I we conclude I = Rf, which proves the lemma.
From (61) it is obvious how to develop a right Euclidean algorithm in R. If we want to divide h_l ∈ R of degree l by g_m ∈ R of degree m, we look in the ideal Rg_m for the monic polynomial of least degree, say f_n, so Rg_m = Rf_n with n ≤ m. Clearly we have f_n = ug_m and g_m = vf_n, where u and v are units, so f_n and g_m are left associated. By (61) we can write h_l in the form h_l = qf_n + t, t ∈ R_{n-1}, thus h_l = (qu)g_m + t. We observe that deg t < n ≤ m = deg g_m. Notice that it is possible that l < m.
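The division step h = qg + t with deg t < deg g can be sketched for Ore's classical linear rule xa = (aα)x + aδ, the case the division-algorithm argument generalizes; the F_4 toy data and all names are ours, and g is assumed to have an invertible leading coefficient.

```python
def gmul(a, b):                  # multiply in F_4 = {0, 1, w, w^2}, coded 0..3, with w^2 = w + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    return (r ^ 0b111 if r & 4 else r) & 3

alpha = lambda a: gmul(a, a)
delta = lambda a: alpha(a) ^ a

def padd(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def xmul(f):                     # linear Ore rule:  x a = (a alpha) x + a delta
    g = [0] * (len(f) + 1)
    for i, c in enumerate(f):
        g[i + 1] ^= alpha(c)
        g[i] ^= delta(c)
    return g

def pmul(f, g):
    res, xig = [0], list(g)
    for c in f:
        res = padd(res, [gmul(c, t) for t in xig])
        xig = xmul(xig)
    return trim(res)

INV = [0, 1, 3, 2]               # multiplicative inverses in F_4

def rdiv(h, g):
    """Right division h = q*g + t with deg t < deg g."""
    h, g = trim(list(h)), trim(list(g))
    q = [0]
    while h != [0] and len(h) >= len(g):
        d = len(h) - len(g)
        lead = g[-1]
        for _ in range(d):       # leading coefficient of x^d * g is alpha^d(lead g)
            lead = alpha(lead)
        c = gmul(h[-1], INV[lead])
        mono = [0] * d + [c]
        q = padd(q, mono)
        h = trim(padd(h, pmul(mono, g)))   # characteristic 2: subtraction is addition
    return trim(q), h

q, t = rdiv([1, 2, 0, 1], [2, 1])          # divide x^3 + w x + 1 by x + w
```

Repeating the step on (g, t) yields the right Euclidean algorithm and hence one-sided greatest common divisors.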
LEMMA 2. Let the polynomial ring R be defined by the assumptions (A) and (B) of Theorem 1; then R is a unique factorization domain (UFD for short).
Proof. First we show that the descending chain condition holds for left ideals having intersection ≠ 0 (restricted descending chain condition). Let I_1 ⊇ I_2 ⊇ ... be such a chain of left ideals.

Proof. This merely depends on the fact that if the monomorphism α is an automorphism of K (with inverse β), every polynomial in R with left-hand coefficients can also be represented as a polynomial with right-hand coefficients.
The last term becomes zero and we have the right-hand commutation formula
ax = x(aβ) - x^2(aβδβ) + ... + (-1)^{r-1} x^r (a(βδ)^{r-1}β).