Constructions of MDS convolutional codes using superregular matrices

Maximum distance separable (MDS) convolutional codes are the codes with the best error-correcting performance among all convolutional codes of a given rate and degree. In this paper, we show that by taking the constant matrix coefficients of a polynomial matrix to be submatrices of a superregular matrix, we obtain a column reduced generator matrix of an MDS convolutional code with a given rate and degree. We then present two novel constructions that fulfill these conditions by considering two types of superregular matrices.


Introduction
The (free) distance of a code measures its capability of detecting and correcting errors introduced during information transmission through a noisy channel. Maximum distance separable (MDS) codes are the ones that have maximum distance among all codes with the same parameters. MDS block codes of rate k/n are the block codes with distance equal to the Singleton bound n − k + 1. The class of MDS block codes is very well understood and there exist relevant families of MDS block codes, such as the Reed-Solomon codes [12]. The theory of convolutional codes is more involved and there are not many known constructions of MDS convolutional codes. MDS convolutional codes have maximum free distance in the class of convolutional codes of a given rate k/n and degree δ, i.e., they are the ones with free distance equal to the generalized Singleton bound (n − k)(⌊δ/k⌋ + 1) + δ + 1 [13]. The first construction of MDS convolutional codes was obtained by Justesen in [9] for codes of rate 1/n and restricted degrees. In [16], Smarandache and Rosenthal presented constructions of convolutional codes of rate 1/n and arbitrary degree using linear systems representations. However, these constructions require a larger field size than the constructions obtained in [9]. Gluesing-Luerssen and Langfeld introduced in [6] a new construction of convolutional codes of rate 1/n that requires the same field sizes as the ones obtained in [9], but again with a restriction on the degree of the code. Finally, Smarandache, Gluesing-Luerssen and Rosenthal [15] constructed MDS convolutional codes for arbitrary parameters. We will define a new construction of convolutional codes of any degree and sufficiently low rate using superregular matrices with a specific property. We then provide explicit constructions of these codes using Cauchy circulant matrices [14] and superregular matrices as defined in [2]. A similar procedure was used to construct two-dimensional MDS convolutional codes [3,4].
The paper is organized as follows: In the next section we start by introducing some preliminaries on superregular matrices. We give the definition of these matrices and two different types of superregular matrices. Then we give some definitions and results on convolutional codes. In Section 3, we present a procedure to construct MDS convolutional codes using superregular matrices. We show that generator matrices whose entries have coefficients fulfilling certain conditions are generator matrices of MDS convolutional codes. In Section 4, we give two different constructions of MDS convolutional codes of arbitrary degree and rate smaller than some upper bound. Finally, in Section 5, we compare the necessary field size and the restrictions on the parameters of our constructions with those of already known constructions.

Superregular matrices
The following lemma is easy to see and we will use it several times to derive our conditions for MDS convolutional codes.

Lemma 2.2.
(i) Let A ∈ F^{r×ℓ} be superregular. Then, each vector which is a linear combination of s columns of A has at most s − 1 zeros.
(ii) Let A ∈ F^{r×ℓ} with r ≥ ℓ be such that all its fullsize minors are nonzero. Then, each vector which is a linear combination of ℓ columns of A has at most ℓ − 1 zeros.
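Part (i) of the lemma can be checked by brute force on a small example. The following Python sketch (an illustrative aside, not from the paper) uses a 3 × 3 Cauchy matrix over GF(7), a standard example of a superregular matrix, and verifies that every nontrivial linear combination of s columns has at most s − 1 zero entries:

```python
# Brute-force check of Lemma 2.2 (i) over the prime field GF(7):
# every nontrivial linear combination of s columns of a superregular
# matrix has at most s - 1 zeros.  The 3x3 Cauchy matrix below has all
# minors nonzero, hence is superregular.
from itertools import combinations, product

P = 7
x, y = [1, 2, 3], [4, 5, 6]
# Cauchy matrix entries 1/(x_i - y_j) mod P (inverse via Fermat's little theorem)
A = [[pow(xi - yj, P - 2, P) for yj in y] for xi in x]

def check_lemma(A, p):
    n_rows, n_cols = len(A), len(A[0])
    for s in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), s):
            for coeffs in product(range(p), repeat=s):
                if all(c == 0 for c in coeffs):
                    continue  # only nontrivial combinations
                v = [sum(c * A[i][j] for c, j in zip(coeffs, cols)) % p
                     for i in range(n_rows)]
                if v.count(0) > s - 1:
                    return False
    return True

print(check_lemma(A, P))  # expect True
```

If some vector in the span of s columns had s zeros, the corresponding s × s submatrix would be singular, contradicting superregularity; the exhaustive search above confirms this on the example.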
There are many examples of superregular matrices. We will present two types of superregular matrices that we will use later for the constructions introduced in this paper. The first one is the class of Cauchy circulant matrices.

Theorem 2.3. [14] Let F be a finite field with |F| = q, where q is odd. Furthermore, let α be an element of order (q − 1)/2 and let b be a nonsquare element in F. Then the Cauchy circulant matrix built from α and b as in [14] is superregular.
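Ordinary Cauchy matrices, with entries 1/(x_i − y_j) for distinct points x_i and y_j, are the classical examples of matrices all of whose minors are nonzero. As a computational aside (the matrix, field, and points below are chosen arbitrarily for illustration), the defining property can be verified by brute force for small sizes:

```python
# Verify by brute force that a 4x4 Cauchy matrix over GF(11) is
# superregular in the "all minors nonzero" sense: every square
# submatrix is nonsingular.
from itertools import combinations

P = 11
x, y = [1, 2, 3, 4], [5, 6, 7, 8]
A = [[pow(xi - yj, P - 2, P) for yj in y] for xi in x]  # 1/(x_i - y_j) mod P

def det_mod(M, p):
    # determinant over GF(p) via Gaussian elimination
    M = [row[:] for row in M]
    n, d = len(M), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] % p), None)
        if piv is None:
            return 0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        inv = pow(M[i][i], p - 2, p)
        for r in range(i + 1, n):
            f = M[r][i] * inv % p
            for c in range(i, n):
                M[r][c] = (M[r][c] - f * M[i][c]) % p
    for i in range(n):
        d = d * M[i][i] % p
    return d % p

superregular = all(
    det_mod([[A[r][c] for c in cols] for r in rows], P) != 0
    for s in range(1, 5)
    for rows in combinations(range(4), s)
    for cols in combinations(range(4), s)
)
print(superregular)  # expect True
```

Every square submatrix of a Cauchy matrix is again a Cauchy matrix, so its determinant is nonzero by the Cauchy determinant formula; the exhaustive check above confirms this on the example.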
The matrix considered in the above theorem is a Cauchy circulant matrix. Another type of superregular matrix is given in the next theorem.
Theorem 2.4. [2] Let α be a primitive element of a finite field F = F_{p^N} and let B = [ν_{iℓ}] be a matrix over F whose nonzero entries are powers of α satisfying the properties required in [2]. Suppose N is greater than any exponent of α appearing as a nontrivial term of any minor of B. Then B is superregular.

Convolutional codes
Let F be a finite field and F[z] the ring of polynomials with coefficients in F. A convolutional code of rate k/n is an F[z]-submodule of F[z]^n of rank k. A generator matrix of a convolutional code C of rate k/n is any n × k matrix whose columns constitute a basis of C, i.e., it is a full column rank matrix G(z) such that C = {G(z)u(z) : u(z) ∈ F[z]^k}. If G(z) ∈ F[z]^{n×k} is a generator matrix of a convolutional code C, then all generator matrices of C are of the form G(z)U(z) for some unimodular matrix U(z) ∈ F[z]^{k×k}. Two generator matrices of the same code are said to be equivalent generator matrices.
Since two equivalent generator matrices differ by right multiplication of a unimodular matrix, they have the same full size minors, up to multiplication by a nonzero constant. The complexity or degree of a convolutional code is defined as the maximum degree of the full size minors of a generator matrix of the code.
Define the j-th column degree ν_j of a polynomial matrix G(z) ∈ F[z]^{n×k} to be the maximum degree of the entries of the j-th column of G(z). Obviously, the sum of the column degrees of G(z) is greater than or equal to the maximum degree of its full size minors. If the sum of the column degrees of G(z) equals the maximum degree of its full size minors, G(z) is said to be column reduced. A convolutional code always admits column reduced generator matrices, and two column reduced generator matrices have the same column degrees up to a column permutation [5,10]. Therefore, column reduced generator matrices are the ones with minimal sum of column degrees, and the sum of their column degrees is equal to the degree of the code.
For each entry g_{ij}(z) of G(z), let [g_{ij}]_{hc} denote the coefficient of z^{ν_j} in g_{ij}(z). Then, the highest column degree coefficient matrix [G]_{hc} ∈ F^{n×k} is defined as the matrix consisting of the entries [g_{ij}]_{hc}.

It holds that G(z) is column reduced if and only if [G]_{hc} ∈ F^{n×k} has full rank.
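This criterion is easy to implement. The following Python sketch (an illustrative aside; the 3 × 2 polynomial matrix over GF(5) is a made-up toy example) computes the column degrees and [G]_hc and tests column reducedness via the rank of [G]_hc:

```python
# Column-reducedness test: G(z) is column reduced iff the highest
# column degree coefficient matrix [G]_hc has full column rank.
# Polynomials are coefficient lists over GF(5) (index = power of z).
P = 5

# toy 3x2 generator matrix G(z) over GF(5)
G = [[[1, 2], [3]],        # row 1: 1 + 2z , 3
     [[0, 1], [1, 4]],     # row 2: z     , 1 + 4z
     [[2],    [0, 0, 0]]]  # row 3: 2     , 0

def col_degree(G, j):
    # maximum degree over the entries of column j
    return max(max((i for i, c in enumerate(g[j]) if c % P), default=0)
               for g in G)

def hc_matrix(G):
    degs = [col_degree(G, j) for j in range(len(G[0]))]
    return [[g[j][degs[j]] % P if degs[j] < len(g[j]) else 0
             for j in range(len(G[0]))] for g in G]

def rank_mod(M, p):
    # rank over GF(p) via Gauss-Jordan elimination
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)
        for r in range(len(M)):
            if r != rank and M[r][col] % p:
                f = M[r][col] * inv % p
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

hc = hc_matrix(G)
print(rank_mod(hc, P) == len(G[0]))  # full column rank -> column reduced
```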
The weight of a vector c ∈ F^n, wt(c), is the number of its nonzero entries, and the weight of a polynomial vector v(z) = Σ_i v_i z^i ∈ F[z]^n is wt(v(z)) = Σ_i wt(v_i). The free distance of a convolutional code C is the minimum weight of the nonzero codewords of the code, i.e., d_free(C) = min{wt(v(z)) : v(z) ∈ C, v(z) ≠ 0}. In [13], Smarandache and Rosenthal obtained an upper bound for the free distance of a convolutional code C of rate k/n and degree δ, given by d_free(C) ≤ (n − k)(⌊δ/k⌋ + 1) + δ + 1. This bound is called the generalized Singleton bound. A convolutional code of rate k/n and degree δ with free distance equal to the generalized Singleton bound is called a maximum distance separable (MDS) convolutional code. If C is such a code and G(z) ∈ F[z]^{n×k} is a column reduced generator matrix of C, its column degrees are equal to ⌊δ/k⌋ + 1 with multiplicity t := δ − k⌊δ/k⌋ and to ⌊δ/k⌋ with multiplicity k − t; see [13].
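The bound and the column degrees quoted above are simple to compute. The following Python sketch (an illustrative aside) evaluates the generalized Singleton bound and the column-degree multiset for given (n, k, δ):

```python
# Generalized Singleton bound and column degrees of a column reduced
# generator matrix of an (n, k, delta) MDS convolutional code,
# following the formulas from [13] quoted above.
def singleton_bound(n, k, delta):
    return (n - k) * (delta // k + 1) + delta + 1

def column_degrees(k, delta):
    t = delta - k * (delta // k)          # multiplicity of floor(delta/k)+1
    return [delta // k + 1] * t + [delta // k] * (k - t)

# delta = 0 recovers the classical block-code bound n - k + 1
print(singleton_bound(7, 3, 0))   # -> 5
print(singleton_bound(2, 1, 3))   # (2, 1, 3) code: bound = 8
print(column_degrees(3, 4))       # -> [2, 1, 1], summing to delta = 4
```

Note that the column degrees always sum to δ, as they must for a column reduced generator matrix of a code of degree δ.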

Conditions to obtain MDS convolutional codes
Let C be a convolutional code of rate k/n and degree δ. In this section, we will derive conditions on a column reduced generator matrix G(z) of C that ensure that the code is an MDS convolutional code.
To this end, we assume that G(z) has non-increasing column degrees with values ⌊δ/k⌋ + 1 and ⌊δ/k⌋. We write G(z) = Σ_{i=0}^{µ} G_i z^i and u(z) = Σ_i u_i z^i, and arrange the coefficient matrices G_i into a matrix G as in (1). As wt(G(z)u(z)) = wt(G(z)u(z)z^r) for r ∈ N, throughout this paper, we assume, without loss of generality, that u_0 ≠ 0.
Proof. Since the highest column degree coefficient matrix of G(z) is a submatrix of the superregular matrix G, it has full rank. Consequently, G(z) is column reduced. Therefore, the degree of the code generated by G(z) is equal to the sum of the column degrees of G(z), which is νt + (ν − 1)(k − t) = δ. The generated code is an MDS convolutional code if and only if equation (4) is fulfilled for each nonzero u(z) ∈ F[z]^k. Next, we will show that, under certain conditions, equation (4) is fulfilled by considering different cases for the value of δ. In each case, one of the conditions will always be the superregularity of G. However, this condition is not necessary to obtain an MDS convolutional code, as the following example shows.

Conditions for the case δ < k
In this case, we have to prove that wt(v(z)) ≥ n − k + δ + 1.
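This value is just the generalized Singleton bound specialized to δ < k, since then ⌊δ/k⌋ = 0:

```latex
% For \delta < k we have \lfloor \delta/k \rfloor = 0, so the
% generalized Singleton bound specializes as claimed:
\[
  (n-k)\left(\left\lfloor \tfrac{\delta}{k} \right\rfloor + 1\right)
    + \delta + 1
  = (n-k)\cdot 1 + \delta + 1
  = n - k + \delta + 1 .
\]
```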
Theorem 3.3. Assume that δ < k and let G be superregular. If n ≥ δ + k − 1, then G(z) is the generator matrix of an (n, k, δ) MDS convolutional code.
Proof. Here, one has v(z) = G(z)u(z).

Conditions for the case δ ≥ k
For this subsection, we need some additional definitions. It holds that Ḡ = G_1 for k ∤ δ and Ḡ = G_2 for k | δ, and G(z) = [I_n  I_n z  ···  I_n z^µ] Ḡ.
Theorem 3.4. Assume that δ ≥ k and let G defined in (1) be superregular. Moreover, assume that all fullsize minors ofḠ are nonzero. If n ≥ 2δ +k −ν, then G(z) is the generator matrix of an (n, k, δ) MDS convolutional code.
Proof. We distinguish several cases.
Case 1: l = 0.
Case 1.1: k | δ. In this case, the generalized Singleton bound is equal to nν − k + 1. If we define v = G_2 u, we obtain that v is a nonzero linear combination of the columns of a matrix with nonzero fullsize minors and hence wt(v) ≥ nν − k + 1.
Case 1.2: k ∤ δ. Extend u by 0 ∈ F^{k−t} and set v = G_1 u. Then v^{(1)} is a nontrivial linear combination of columns of an nν × k matrix with nonzero fullsize minors and v^{(2)} is a linear combination of columns of an n × t matrix with nonzero fullsize minors. We distinguish two further subcases.

Constructions of MDS convolutional codes
In this section, we will use the results of the preceding section to obtain two different constructions for MDS convolutional codes.
Then, G(z) = Σ_{i=0}^{µ} G_i z^i with G_i = [g_{i,1} ··· g_{i,k}] is the generator matrix of an (n, k, δ) MDS convolutional code.
Proof. The statement is an immediate consequence of Theorem 2.3.
is the generator matrix of an (n, k, δ) MDS convolutional code over F_{p^N}.
Proof. According to Theorem 2.4, G is superregular over F_{p^N}. For the last inequality, we used the formula for a geometric sum.

Constructions for δ ≥ k
Theorem 4.3 (Construction 1). Assume δ ≥ k, with t and ν as defined before, and n ≥ k + 2δ − ν. Moreover, let F be a finite field with |F| = q, where q is an odd number such that q ≥ 2n(ν + 1) + 1, and let C = [c_{ij}] be the Cauchy circulant matrix defined in Theorem 2.3. Then the polynomial matrix G(z) built from the entries of C as prescribed by the construction is the generator matrix of an (n, k, δ) MDS convolutional code.
Proof. By Theorem 2.3, C is a superregular matrix. Then the matrix Ḡ is superregular because it is a submatrix of C. Since α^{(q−1)/2} = 1, we obtain, for 0 ≤ u, v ≤ (q − 3)/2, the corresponding identities among the entries of C and, hence, after an appropriate rearrangement of the columns of G, we obtain a submatrix of the Cauchy matrix C. Therefore, the matrix G is also superregular.
is the generator matrix of an (n, k, δ) MDS convolutional code over F_{p^N}.

Comparison of constructions for MDS convolutional codes
In this section, we want to compare the new constructions for MDS convolutional codes in this paper with the already known constructions. The comparison should be in terms of conditions on the parameters n, k and δ and in terms of the necessary field size. Throughout this section, we refer to the new constructions of the preceding section as Construction 1 and Construction 2.
The constructions in [9], [16] and [6], which we already mentioned in the introduction, work only for k = 1, but in this case the required field sizes are smaller than the field sizes required for Construction 1 and Construction 2.
For nearly all parameters with k > 1, the construction of [15] leads to the smallest field size of all known constructions. However, this construction has the drawback that it only works for |F| ≡ 1 mod n. Moreover, Construction 1 obtained in this paper can improve on the necessary field size of [15] in some particular cases; e.g., it leads to smaller field sizes for (17, 2, 1) and (17, 2, 4) convolutional codes. However, this construction also has restrictions, i.e., it works only for odd field sizes and if n is larger than a particular lower bound.
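To make the field-size condition of Construction 1 concrete, the following Python sketch (an aside; it assumes ν = ⌈δ/k⌉ and searches only over primes, although prime powers are also admissible) computes the smallest odd prime q with q ≥ 2n(ν + 1) + 1:

```python
# Smallest odd prime satisfying the field-size condition of
# Theorem 4.3 (Construction 1): q >= 2n(nu + 1) + 1.
# Assumption: nu = ceil(delta/k); only primes are searched here,
# although odd prime powers would also qualify.
from math import ceil, isqrt

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, isqrt(m) + 1))

def min_field_size(n, k, delta):
    nu = ceil(delta / k)
    q = 2 * n * (nu + 1) + 1
    while not (is_prime(q) and q % 2 == 1):
        q += 1
    return q

print(min_field_size(17, 2, 1))  # -> 71
print(min_field_size(17, 2, 4))  # -> 103
```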
Maximum distance profile (MDP) convolutional codes are convolutional codes whose so-called column distances increase as rapidly as possible for as long as possible; see [8] or [7] for more explanation. As each (n, k, δ) MDP convolutional code with (n − k) | δ is an MDS convolutional code [7], for comparison, one also has to consider constructions for MDP convolutional codes if (n − k) | δ. In [1] and [11, Theorem 3.2], one can find such constructions that have no requirements on the parameters other than (n − k) | δ. There, the required field sizes are larger than the field size from [15], but again this construction has the drawback that it only works for |F| ≡ 1 mod n. Theorem 3.2 of [11] provides a construction for MDP convolutional codes where the required field size is smaller than the field size in [1]. However, it only works for fields of very large characteristic, while the construction in [1] and also Construction 2 work for arbitrary characteristic. If n is sufficiently large, so that the conditions for Construction 2 are fulfilled, it depends on the parameters whether it is better than the construction in [1] or not. For example, for a (5, 2, 2) code the construction from [1] is better, and for a (5, 1, 5) code, Construction 2 is better.