Input-state-output representations and constructions of finite-support 2D convolutional codes

Two-dimensional convolutional codes are considered, with codewords having compact support indexed in N^2 and taking values in F^n, where F is a finite field. Input-state-output representations of these codes are introduced and several aspects of such representations are discussed. Constructive procedures for such codes with a designed distance are also presented.


Introduction
Multidimensional (mD) convolutional codes are the higher dimensional (nontrivial) generalizations of one-dimensional (1D) convolutional codes. These codes may prove useful in the transmission of m-dimensional data, such as 2D pictures, 3D animation, etc. (see [4,13,17]), or for the storage of digital information [14]. While 1D convolutional codes have been thoroughly understood, the literature on mD convolutional codes is quite limited. Moreover, most of the existing research is focused on algebraic aspects and fundamental issues. Fornasini and Valcher [2] introduced the general theory for the study of two-dimensional (2D) convolutional codes constituted by sequences indexed on Z^2, and discussed issues such as the characterization of such codes in terms of their internal properties and input-output representations. In [15] the same authors considered 2D convolutional codes in which the codewords have compact support in Z^2, and presented several properties of their encoders and syndrome formers (parity-check matrices) under different hypotheses on the code structure. They also introduced the dual codes of such codes. Multidimensional (mD) convolutional codes having finite support in N^m were first studied by Weiner in [16], where he explored some connections of mD convolutional codes with commutative algebra and algebraic geometry. In [3], Gluesing-Luerssen, Rosenthal and Weiner analyzed the relation between multidimensional convolutional codes and systems. In the same line, a different approach is given by Kitchens in [6], where he set out a concept of mD convolutional code from the symbolic dynamical point of view, establishing five different equivalent notions. More recently, for the purpose of studying mD convolutional codes from a more practical point of view, R. Lobo, in his dissertation [8], introduced the concept of locally invertible encoders, which lead to a particular class of basic convolutional codes. However, crucial aspects of the theory of convolutional codes, such as the construction of good convolutional codes and of minimal representations of a code, have not been deeply studied by these authors. In particular, only Weiner studied related problems in [16], and only for a very particular and simple class of multidimensional codes, the unit memory codes, which are the ones whose encoders are constituted by polynomials of degree at most one.
In this paper, we consider two-dimensional (2D) convolutional codes over a finite field F, constituted by polynomials in two indeterminates with coefficients in F^n,

v(z_1, z_2) = Σ_{(i,j) ∈ N^2} v(i, j) z_1^i z_2^j.
These codes are called 2D finite support convolutional codes. We study such codes by means of state space representations. Rosenthal and collaborators [10,12] defined input-state-output (ISO) representations for 1D finite support convolutional codes and used these representations in the construction of good codes. We introduce such representations for 2D finite support convolutional codes, considering the Fornasini-Marchesini state space model for 2D linear systems [1]. Minimality of such representations is very important not only for an efficient implementation of the code but also for the construction of good codes. However, minimality of the Fornasini-Marchesini state space models is not completely characterized, as it is for 1D state space models [1,5]. In this paper, we define the complexity of a 2D finite support convolutional code, similarly to the 1D case, and show that it is a lower bound on the dimension of such representations. In addition, we give a sufficient condition to construct minimal ISO representations of 2D finite support convolutional codes. The advantage of this approach is that ISO representations show how the encoding algorithm proceeds by explicitly displaying the corresponding evolution of the state space (the memory of the code). In particular, we take advantage of the nature of such representations to construct 2D finite support convolutional codes with a designed distance.
The structure of the paper is as follows. We begin by introducing, in Section 2, some necessary background on polynomial matrices in two indeterminates and on the Fornasini-Marchesini state space models, centering around concepts such as reachability and observability. Section 3 is devoted to providing an overview of the theory of 2D finite support convolutional codes already available in the literature. In Section 4 we introduce the ISO representations for 2D finite support convolutional codes, study minimality and characterize the properties of such representations needed to obtain non-catastrophic 2D finite support convolutional codes. We then consider, in Section 5, the restriction of 2D finite support convolutional codes to the semi-axes N × {0} and {0} × N, and show that these restrictions constitute 1D finite support convolutional codes. In Section 6 we relate the distance of a type of 2D codes with the distance of its restrictions to the axes. This allows us to present a construction of 2D finite support convolutional codes with a designed distance.

Preliminaries
Denote by F[z_1, z_2] the ring of polynomials in two indeterminates with coefficients in F, by F(z_1, z_2) the field of fractions of F[z_1, z_2], and by F[[z_1, z_2]] the ring of formal power series in two indeterminates with coefficients in F.
In this section we start by giving some preliminaries on matrices over F[z_1, z_2] that will be needed in the sequel. For more details see [7,9].
A full column rank matrix G(z_1, z_2) ∈ F[z_1, z_2]^{n×k} is right factor prime (rFP) if in every factorization G(z_1, z_2) = Ḡ(z_1, z_2)T(z_1, z_2), with T(z_1, z_2) ∈ F[z_1, z_2]^{k×k}, the factor T(z_1, z_2) is unimodular, and it is right zero prime (rZP) if the ideal generated by its k × k minors is the whole ring F[z_1, z_2]. A matrix is left factor prime (ℓFP) / left zero prime (ℓZP) if its transpose is rFP / rZP, respectively. The following propositions give characterizations of right factor primeness and right zero primeness.
Proposition 2.1 Let G(z_1, z_2) ∈ F[z_1, z_2]^{n×k} be a full column rank matrix. Then the following are equivalent: 1. G(z_1, z_2) is right factor prime; 2. for every û(z_1, z_2) ∈ F(z_1, z_2)^k, if G(z_1, z_2)û(z_1, z_2) is polynomial, then û(z_1, z_2) is polynomial.

Proposition 2.2 Let G(z_1, z_2) ∈ F[z_1, z_2]^{n×k} be a full column rank matrix. Then the following are equivalent: 1. G(z_1, z_2) is right zero prime; 2. G(z_1, z_2) admits a polynomial left inverse.
It is well known that, given a full column rank polynomial matrix G(z_1, z_2), there exist a right factor prime matrix Ḡ(z_1, z_2) and a square polynomial matrix T(z_1, z_2) such that G(z_1, z_2) = Ḡ(z_1, z_2)T(z_1, z_2). The following lemma will be needed in the sequel.
Lemma 2.1 Let H(z_1, z_2) and G(z_1, z_2) be an ℓFP and an rFP matrix, respectively, such that H(z_1, z_2)G(z_1, z_2) = 0. Then the corresponding maximal order minors of H(z_1, z_2) and G(z_1, z_2) are equal, modulo a unit of the ring F[z_1, z_2].

Next, we consider 2D linear systems and analyze their state space model representations. In particular we consider the Fornasini-Marchesini state space model [1], which we will use to construct 2D finite support convolutional codes. In this model a first quarter plane 2D linear system, denoted by Σ = (A_1, A_2, B_1, B_2, C, D), is given by the updating equations

x(i+1, j+1) = A_1 x(i, j+1) + A_2 x(i+1, j) + B_1 u(i, j+1) + B_2 u(i+1, j)
y(i, j) = C x(i, j) + D u(i, j),    (1)

where x(i, j) ∈ F^δ, u(i, j) ∈ F^k and y(i, j) ∈ F^{n−k}, with n > k, and with past finite support of the input and of the state (u(i, j) = x(i, j) = 0 for i < 0 or j < 0) and zero initial conditions (x(0, 0) = 0). We say that Σ = (A_1, A_2, B_1, B_2, C, D) has dimension δ, local state x(i, j), input u(i, j) and output y(i, j), at (i, j).
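The updating equations above are easy to run directly. The following is a minimal sketch of the recursion over F = GF(2); the function name, the grid-based layout and the 1-dimensional example matrices are our own illustrative choices, not taken from the paper.

```python
# A sketch of the first-quarter-plane Fornasini-Marchesini recursion over GF(2):
#   x(i+1, j+1) = A1 x(i, j+1) + A2 x(i+1, j) + B1 u(i, j+1) + B2 u(i+1, j)
#   y(i, j)     = C x(i, j) + D u(i, j)
# with zero initial conditions and past finite support (negative indices vanish).
# All names and matrices below are illustrative, not taken from the paper.

def mat_vec(M, v):
    """Matrix-vector product over GF(2)."""
    return [sum(M[r][c] * v[c] for c in range(len(v))) % 2 for r in range(len(M))]

def vec_add(*vs):
    """Componentwise sum over GF(2)."""
    return [sum(t) % 2 for t in zip(*vs)]

def simulate_fm(A1, A2, B1, B2, C, D, u, T):
    """Run the 2D recursion on the square [0, T] x [0, T].
    u maps (i, j) -> input vector; entries outside its support are zero."""
    delta, k = len(A1), len(B1[0])
    zero_x, zero_u = [0] * delta, [0] * k
    x, y = {}, {}
    get_x = lambda i, j: x.get((i, j), zero_x)   # zero for negative indices
    get_u = lambda i, j: u.get((i, j), zero_u)
    for i in range(T + 1):
        for j in range(T + 1):
            if (i, j) == (0, 0):
                x[(i, j)] = zero_x               # zero initial condition
            else:
                x[(i, j)] = vec_add(mat_vec(A1, get_x(i - 1, j)),
                                    mat_vec(A2, get_x(i, j - 1)),
                                    mat_vec(B1, get_u(i - 1, j)),
                                    mat_vec(B2, get_u(i, j - 1)))
            y[(i, j)] = vec_add(mat_vec(C, x[(i, j)]), mat_vec(D, get_u(i, j)))
    return x, y

# Example: dimension 1, one input, one output, excited by a single input pulse.
A1, A2, B1, B2, C, D = [[1]], [[1]], [[1]], [[0]], [[1]], [[1]]
x, y = simulate_fm(A1, A2, B1, B2, C, D, {(0, 0): [1]}, T=3)
```

In this example the local state satisfies x(i, 0) = [1] for every i ≥ 1, i.e., a finite support input keeps the system indefinitely excited; this illustrates the distinction between finite support and finite-weight trajectories discussed in Section 3.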

Given an input trajectory û(z_1, z_2) ∈ F^k[z_1, z_2], the equations (1), together with the zero initial conditions, determine the corresponding state and output trajectories x̂(z_1, z_2) and ŷ(z_1, z_2). Moreover, we have that the set of input-state-output trajectories of Σ is given by ker_{F[[z_1,z_2]]} X(z_1, z_2), where

X(z_1, z_2) = [ I − A_1 z_1 − A_2 z_2    −B_1 z_1 − B_2 z_2    0 ]
              [ −C                       −D                    I ].    (3)

Reachability and observability properties of the system have been introduced for local states and global states [1]. We will only consider local states, which are the ones that will be important for our study. Next we present the reachability and observability properties that we will need later. To simplify terminology, we use state instead of local state.
In the 1D case, the notions of local and modal reachability are equivalent; such equivalence is stated in the PBH test [5]. However, in the 2D case there are systems which are locally reachable but not modally reachable and vice-versa (see [1]).
Input-state-output representations of 2D finite support convolutional codes

In this section we recall the definition and properties of 2D finite support convolutional codes and introduce the input-state-output representations of such codes, considering the Fornasini-Marchesini state space models. We characterize properties of such representations in order to obtain a noncatastrophic 2D finite support convolutional code. We conclude the section by discussing minimality issues. In particular, we prove that the dimension of these input-state-output representations is lower bounded by the complexity of the corresponding code and we give conditions to reach such a bound. Finally, we show that if the representation is not minimal, we can reduce its dimension via the Kalman reachability form.
Definition 3.1 A 2D finite support convolutional code C of rate k/n is a free F[z_1, z_2]-submodule of F^n[z_1, z_2] of rank k. A full column rank matrix G(z_1, z_2) ∈ F[z_1, z_2]^{n×k} whose columns constitute a basis for C, i.e., such that

C = Im_{F[z_1,z_2]} G(z_1, z_2) = { G(z_1, z_2)û(z_1, z_2) : û(z_1, z_2) ∈ F^k[z_1, z_2] },

is called an encoder of C. The elements of C are called codewords.

Two full column rank matrices G(z_1, z_2), G'(z_1, z_2) ∈ F[z_1, z_2]^{n×k} are equivalent encoders if they generate the same code, which happens if and only if there exists a unimodular matrix U(z_1, z_2) ∈ F[z_1, z_2]^{k×k} such that G'(z_1, z_2) = G(z_1, z_2)U(z_1, z_2) [15,16]. Considering the usual definition of the degree of a polynomial, deg(p(z_1, z_2)) = max{i + j : p(i, j) ≠ 0}, where p(i, j) denotes the coefficient of z_1^i z_2^j, we can introduce the notion of complexity of C as follows.
Definition 3.2 Let C be a 2D finite support convolutional code with an encoder G(z_1, z_2). The complexity of C, represented by δ_C, is defined as the maximal degree of the full size minors of G(z_1, z_2).
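As a sanity check of this definition, the complexity of a small encoder can be computed directly: represent each entry of G(z_1, z_2) as a dictionary of monomials over GF(2) and take the maximal total degree of the full size minors. The matrix in the example below is purely illustrative, not an encoder from the paper, and the helper names are our own.

```python
# Sketch: compute the complexity (Definition 3.2) of a 3 x 2 polynomial matrix
# over GF(2).  A polynomial is a dict {(i, j): coeff} of nonzero monomials.
from itertools import combinations

def poly_add(p, q):
    """Sum over GF(2); cancelled monomials are removed."""
    r = dict(p)
    for m, c in q.items():
        r[m] = (r.get(m, 0) + c) % 2
        if r[m] == 0:
            del r[m]
    return r

def poly_mul(p, q):
    """Product over GF(2)."""
    r = {}
    for (a, b), c in p.items():
        for (a2, b2), c2 in q.items():
            m = (a + a2, b + b2)
            r[m] = (r.get(m, 0) + c * c2) % 2
            if r[m] == 0:
                del r[m]
    return r

def degree(p):
    """Total degree max{i + j : p(i, j) != 0}; -1 for the zero polynomial."""
    return max((i + j for (i, j) in p), default=-1)

def complexity(G, k):
    """Max degree over all k x k minors of the n x k matrix G (here k = 2)."""
    best = -1
    for rows in combinations(range(len(G)), k):
        # 2 x 2 determinant; over GF(2), subtraction coincides with addition
        a, b = G[rows[0]]
        c, d = G[rows[1]]
        minor = poly_add(poly_mul(a, d), poly_mul(b, c))
        best = max(best, degree(minor))
    return best

# Example matrix: rows (1, z1), (z2, 1), (z1*z2, 0).
one, z1, z2, z1z2, zero = {(0, 0): 1}, {(1, 0): 1}, {(0, 1): 1}, {(1, 1): 1}, {}
G = [[one, z1], [z2, one], [z1z2, zero]]
```

The three full size minors are 1 + z_1 z_2, z_1^2 z_2 and z_1 z_2, so the complexity of this matrix is 3.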
Note that the fact that two equivalent encoders differ by a unimodular matrix also implies that the primeness properties of the encoders of a code are preserved, i.e., if C admits an rFP (rZP) encoder then all its encoders are rFP (rZP). A 2D finite support convolutional code that admits rFP encoders is called noncatastrophic, and it is called basic if all its encoders are rZP.
An important measure of the robustness of a code is its distance. We define the notion of distance as in [16]. The weight of v(z_1, z_2) = Σ_{(i,j)∈N^2} v(i, j) z_1^i z_2^j is given by wt(v) = Σ_{(i,j)∈N^2} wt(v(i, j)), where wt(v(i, j)) is the number of nonzero elements of v(i, j), and the distance of a code C is d(C) = min{wt(v) : v ∈ C, v ≠ 0}.
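The weight just defined is straightforward to compute when a codeword is stored as a sparse map from support points to coefficient vectors; a minimal sketch (the data layout and names are our own choices):

```python
def wt_vector(v):
    """Number of nonzero entries of a vector over F."""
    return sum(1 for c in v if c != 0)

def wt(codeword):
    """Weight of a 2D codeword stored as {(i, j): v(i, j)} with v(i, j) in F^n:
    the sum of the weights of all coefficient vectors."""
    return sum(wt_vector(v) for v in codeword.values())

# Example: a codeword of F^3[z1, z2] supported on two points of N^2.
v = {(0, 0): [1, 0, 1], (1, 2): [0, 1, 0]}
```

For this example wt(v) = 2 + 1 = 3; the distance of a code is then the minimum of this quantity over its nonzero codewords.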
Let us now consider a first quarter plane 2D system Σ defined in (1). For (i, j) ∈ N^2, define v(i, j) = (u(i, j), y(i, j)) ∈ F^n to be the code vector. In this work we are interested in the finite support input-output trajectories of (1), i.e., we want to consider the pairs (û(z_1, z_2), ŷ(z_1, z_2)) as codewords. It is worth mentioning that this approach is different from the one adopted in [2], where the codewords are constituted by the output trajectories ŷ(z_1, z_2) of a system. Another important point to note here is that if the system (1) produces a finite support input-output trajectory whose corresponding state trajectory x̂(z_1, z_2) has infinite support, the system remains indefinitely excited, which is a situation that we want to avoid. Thus, we restrict ourselves to finite support input-output trajectories (û(z_1, z_2), ŷ(z_1, z_2)) whose corresponding state x̂(z_1, z_2) also has finite support. We call such trajectories (û(z_1, z_2), ŷ(z_1, z_2)) finite-weight input-output trajectories and the triples (x̂(z_1, z_2), û(z_1, z_2), ŷ(z_1, z_2)) finite-weight trajectories. Note that not all finite support input-output trajectories have finite weight. The following result asserts that the set of finite-weight input-output trajectories of (1) forms a 2D finite support convolutional code and therefore justifies its use for representing such codes.
Theorem 3.1 The set of finite-weight input-output trajectories of (1) is a 2D finite support convolutional code of rate k/n.
Proof Let us denote by S and S_io the set of finite-weight trajectories and the set of finite-weight input-output trajectories of (1), respectively. Then one has that S = ker_{F[z_1,z_2]} X(z_1, z_2), where X(z_1, z_2) is defined in (3). Since ker_{F(z_1,z_2)} X(z_1, z_2) has dimension k, there exists an rFP matrix L(z_1, z_2) such that ker_{F(z_1,z_2)} X(z_1, z_2) = Im_{F(z_1,z_2)} L(z_1, z_2), and as L(z_1, z_2) is rFP, we use Proposition 2.1 to conclude that S = Im_{F[z_1,z_2]} L(z_1, z_2). Partition L(z_1, z_2) = [L_1(z_1, z_2); L_2(z_1, z_2)] in such a way that L_1(z_1, z_2) has δ rows and L_2(z_1, z_2) has n rows, so that S_io = Im_{F[z_1,z_2]} L_2(z_1, z_2). Since det(I − A_1 z_1 − A_2 z_2) ≠ 0, the maximal order minor of X(z_1, z_2) corresponding to its first δ columns and its last n − k columns is nonzero, and, by Lemma 2.1, the corresponding maximal order minor of L_2(z_1, z_2) is also nonzero, which implies that L_2(z_1, z_2) is full column rank, and therefore S_io is a 2D finite support convolutional code with rate k/n.
We denote by C(A_1, A_2, B_1, B_2, C, D) the 2D finite support convolutional code whose codewords are the finite-weight input-output trajectories of the system Σ = (A_1, A_2, B_1, B_2, C, D). The following theorem provides a condition to obtain a noncatastrophic 2D finite support convolutional code.
Theorem 3.2 If the system Σ = (A_1, A_2, B_1, B_2, C, D) is modally observable, then the code C(A_1, A_2, B_1, B_2, C, D) is noncatastrophic and its codewords are the finite support input-output trajectories of Σ.
Proof Suppose that Σ has k inputs, n − k outputs and dimension δ. It follows from the proof of Theorem 3.1 that there exist polynomial matrices L_1(z_1, z_2) and L_2(z_1, z_2) such that L(z_1, z_2) = [L_1(z_1, z_2); L_2(z_1, z_2)] satisfies X(z_1, z_2)L(z_1, z_2) = 0 and G(z_1, z_2) = L_2(z_1, z_2) is an encoder of C(A_1, A_2, B_1, B_2, C, D). Since the system is modally observable, we have that L(z_1, z_2) is rFP. Suppose by contradiction that G(z_1, z_2) is not rFP. Then, by Proposition 2.1, there exists a nonpolynomial û(z_1, z_2) ∈ F(z_1, z_2)^k such that G(z_1, z_2)û(z_1, z_2) is polynomial. So, by (5), we obtain that the corresponding state trajectory x̂(z_1, z_2) is also polynomial. Then, since L(z_1, z_2)û(z_1, z_2) is polynomial, the right factor primeness of L(z_1, z_2) implies that û(z_1, z_2) is polynomial, which is a contradiction.

For a given code C with ISO representation Σ, the following result establishes a lower bound on the dimension of Σ, which, in turn, yields a sufficient condition for the minimality of such representations.

Proposition 3.1 Let C be a 2D finite support convolutional code of complexity δ_C and let Σ be an ISO representation of C of dimension δ. Then δ ≥ δ_C.

Proof Observe that every maximal order minor of X(z_1, z_2), defined in (3), has degree smaller than or equal to δ, since the elements of the first δ rows of X(z_1, z_2) have degree smaller than or equal to 1 and the elements of its last n − k rows are constant.
If δ̄ is the maximal degree of the maximal order minors of X(z_1, z_2), then δ ≥ δ̄. Moreover, by Lemma 2.1, the maximal order minors of G(z_1, z_2) have the same degree as the maximal order minors of X(z_1, z_2) which include the first δ columns of X(z_1, z_2). Thus δ ≥ δ_C.
The next corollary shows how to obtain a minimal ISO representation Σ of the corresponding code C. It also proves that the dimension of Σ coincides with the complexity of C. We omit its proof since it follows immediately from the arguments used in the proof of Proposition 3.1.
However, for general 2D convolutional codes we do not know how to obtain a minimal ISO representation. But, if the ISO representation is not locally reachable, it is possible to obtain one with smaller dimension, as we show in the next proposition. To this end, observe that if S is an invertible constant matrix, the systems Σ = (A_1, A_2, B_1, B_2, C, D) and Σ̃ = (SA_1S^{−1}, SA_2S^{−1}, SB_1, SB_2, CS^{−1}, D) represent the same code. These systems are said to be algebraically equivalent [1]. Also in [1], Fornasini and Marchesini generalized the Kalman reachability form for 2D systems, considered in the next definition, and showed that every 2D system is algebraically equivalent to a system in the Kalman reachability form.
A system Σ̃ = (Ã_1, Ã_2, B̃_1, B̃_2, C̃, D) is in the Kalman reachability form if

Ã_i = [ Ã_11^(i)  Ã_12^(i) ; 0  Ã_22^(i) ],   B̃_i = [ B̃_1^(i) ; 0 ],   C̃ = [ C̃_1  C̃_2 ],   i = 1, 2,

with the remaining matrices of suitable dimensions, where Σ_1 = (Ã_11^(1), Ã_11^(2), B̃_1^(1), B̃_1^(2), C̃_1, D) is a locally reachable system, which is the largest reachable subsystem of Σ̃.
Let Σ_1 = (Ã_11^(1), Ã_11^(2), B̃_1^(1), B̃_1^(2), C̃_1, D) be the largest reachable subsystem of Σ, of dimension δ_1. Then C(A_1, A_2, B_1, B_2, C, D) = C(Ã_11^(1), Ã_11^(2), B̃_1^(1), B̃_1^(2), C̃_1, D). Indeed, (û(z_1, z_2), ŷ(z_1, z_2)) is a finite-weight input-output trajectory of Σ, given by (6), if and only if there exists a finite support state trajectory x̃_1(z_1, z_2) of Σ_1 compatible with it, which happens if and only if (û(z_1, z_2), ŷ(z_1, z_2)) is a finite-weight input-output trajectory of (Ã_11^(1), Ã_11^(2), B̃_1^(1), B̃_1^(2), C̃_1, D).

Constructions of 2D finite support convolutional codes
In this section we construct 2D finite support convolutional codes with a designed distance. We start by studying the relation between the properties of a 2D finite support convolutional code and the properties of its projections onto the semi-axes ℓe_1 and ℓe_2, ℓ ∈ N, for e_1 = (1, 0) and e_2 = (0, 1). Then we build ISO representations Σ = (A_1, A_2, B_1, B_2, C, D) in such a way that the projections of the resulting 2D code onto the semi-axes are 1D codes with a specified distance. In this way we obtain a lower bound on the distance of the 2D code. Given a 2D finite support convolutional code C, define

C_1 = {v(z_1, 0) : v(z_1, z_2) ∈ C}  and  C_2 = {v(0, z_2) : v(z_1, z_2) ∈ C}.    (7)

It is easy to check that C_i is a (free) submodule of F^n[z_i], i = 1, 2, and therefore a 1D finite support convolutional code [10]. The next lemma is immediate.
Thus, to determine the distance of a 2D finite support convolutional code with ISO representation Σ = (A_1, A_2, B_1, B_2, C, D) we only have to consider the codewords whose restrictions to the semi-axes are nonzero. This, together with the fact that wt(v(0, 0)) ≤ n, leads readily to the following result.
Proposition 4.1 Let C be a 2D finite support convolutional code with an ISO representation Σ = (A_1, A_2, B_1, B_2, C, D) and let C_1 and C_2 be as defined in (7). Then d(C) ≥ d(C_1) + d(C_2) − n.

Let G(z_1, z_2) ∈ F[z_1, z_2]^{n×k} be an encoder of C and define the polynomial matrices G(z_1, 0) and G(0, z_2). Note however that the fact that G(z_1, z_2) is an encoder of C does not imply, in general, that G(z_1, 0) and G(0, z_2) are also encoders of C_1 and C_2, respectively. Also, the condition of factor primeness on G(z_1, z_2) is too weak to ensure that G(z_1, 0) and G(0, z_2) are right prime matrices over F[z_1] and F[z_2], respectively. Thus the noncatastrophicity of C does not imply the noncatastrophicity of C_1 and C_2. But, if G(z_1, z_2) is zero prime, then C_1 and C_2 are noncatastrophic.

Let {(x(i, j), u(i, j), y(i, j))}_{(i,j)∈N^2} be a trajectory of (1) and let us consider its restriction to the semi-axis ℓe_1, ℓ ∈ N, i.e., {(x(i, 0), u(i, 0), y(i, 0))}_{i∈N}. By the past finite support property of the input and of the state and by the zero initial condition of (1), it follows that this restriction is a 1D trajectory that fulfills the equations

x̄_1(i + 1) = A_1 x̄_1(i) + B_1 ū_1(i)
ȳ_1(i) = C x̄_1(i) + D ū_1(i),

with x̄_1(0) = 0, and x̄_1(i) = x(i, 0), ū_1(i) = u(i, 0) and ȳ_1(i) = y(i, 0), for i ∈ N. Thus, the 1D system Σ_1 = (A_1, B_1, C, D) generates the restrictions to the semi-axis ℓe_1, ℓ ∈ N, of all trajectories of (1), which means that Σ_1 is a realization of C_1. Analogously, the 1D system Σ_2 = (A_2, B_2, C, D) is a realization of C_2.

A particular type of 1D convolutional codes will be very important in our construction of 2D codes. The properties of these codes are presented in Theorem 3.1 of [12]. Below, we restate this theorem in a revised form by taking an additional component of the codeword into account, which will help us to construct 2D codes with a designed distance. Such an ingredient consists in also taking into account the weight of the output. Indeed, studying [12, Th. 3.1] and its proof, one can obtain the following result.
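The projections in (7) act on a sparse codeword simply by discarding the monomials off the corresponding semi-axis (setting z_2 = 0 keeps only the monomials with j = 0, and symmetrically for z_1 = 0). A small sketch, using the same dictionary layout as before (our own choice, not the paper's notation):

```python
def restrict_to_axis(p, axis):
    """Restrict a 2D polynomial {(i, j): coeff} to a semi-axis:
    axis=1 sets z2 = 0 (keeps monomials with j = 0), axis=2 sets z1 = 0."""
    keep = (lambda m: m[1] == 0) if axis == 1 else (lambda m: m[0] == 0)
    return {m: c for m, c in p.items() if keep(m)}

# Example: p(z1, z2) = 1 + z1 + z1*z2 over GF(2).
p = {(0, 0): 1, (1, 0): 1, (1, 1): 1}
p_axis1 = restrict_to_axis(p, 1)   # p(z1, 0) = 1 + z1
p_axis2 = restrict_to_axis(p, 2)   # p(0, z2) = 1
```

Applying this componentwise to every codeword of C yields the 1D codes C_1 and C_2 of (7).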
Here, Φ_{dν}(A, B), defined in (9), forms the parity-check matrix of a block code of distance d (see Theorem 4.1). Using the previous result and the projections onto the semi-axes, we establish a lower bound on the distance of a 2D finite support convolutional code.

Theorem 4.2 Let Σ_1 = (A_1, B_1, C, D) and Σ_2 = (A_2, B_2, C, D) be two 1D systems of dimension δ, with k inputs and n − k outputs, which are observable with observability indices ν_1 and ν_2, respectively. Suppose that Φ_{d_i ν_i}(A_i, B_i), as defined in (9), is the parity-check matrix of a block code of distance d_i, for i = 1, 2. Then d(C(A_1, A_2, B_1, B_2, C, D)) ≥ d_1 + d_2 − n. In the case where D contains the k × k identity matrix I_k up to row permutation, a sharper bound is obtained by also taking the weights of the outputs into account, as in Theorem 4.1.

Proof By Lemma 4.1 it is enough to analyze the weight of the codewords (û(z_1, z_2), ŷ(z_1, z_2)) with û(z_1, 0) ≠ 0 and û(0, z_2) ≠ 0.
The distance of a rate k/n 1D convolutional code of degree δ is always upper-bounded by the generalized Singleton bound (n − k)(⌊δ/k⌋ + 1) + δ + 1, see [11]. The 1D convolutional codes whose distance reaches such a bound are called maximum-distance separable (MDS) convolutional codes. However, in the 2D case very few results exist on the distance of 2D convolutional codes (see for instance [16, Proposition 4.2.3]) and no upper bound is known.
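The generalized Singleton bound is simple to evaluate; the following one-function sketch (the function name is ours) makes the arithmetic explicit.

```python
def generalized_singleton_bound(n, k, delta):
    """Upper bound (n - k)(floor(delta / k) + 1) + delta + 1 on the distance
    of a rate k/n 1D convolutional code of degree delta, see [11]."""
    return (n - k) * (delta // k + 1) + delta + 1

# Example: rate 2/3 and degree 5, the parameters of the construction that follows.
bound = generalized_singleton_bound(3, 2, 5)
```

For n = 3, k = 2 and δ = 5 the bound evaluates to 1 · (2 + 1) + 5 + 1 = 9, matching the value quoted below for the rate 2/3, complexity 5 example.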
We are now ready to construct 2D finite support convolutional codes by means of an ISO representation in such a way that the code is noncatastrophic, its distance has a designed value and the ISO representation has the minimal possible dimension. Here we propose the following example, making particular choices of the parameters A_1, A_2, B_1, B_2, C and D to construct a 2D convolutional code of rate 2/3 and complexity 5. Since the system Σ = (A_1, A_2, B_1, B_2, C, D) is modally observable, by Theorem 3.2 we have that the code C(A_1, A_2, B_1, B_2, C, D) is noncatastrophic. Moreover, as Σ is modally observable and satisfies the conditions of Corollary 3.1, the system Σ is a minimal ISO representation of C(A_1, A_2, B_1, B_2, C, D), with complexity 5. Finally, it is worth pointing out that the distance of C(A_1, A_2, B_1, B_2, C, D) is larger than that of any 1D convolutional code of rate 2/3 and complexity 5, since the generalized Singleton bound in this case is equal to 9.

Conclusions
In this paper we have established a framework to represent 2D finite support convolutional codes by means of linear systems. To that purpose we have introduced input-state-output representations, which have several advantages with respect to the representations used by Fornasini and Valcher in [2]. In particular, we have addressed the important issue of minimality, namely, our input-state-output representations allowed us to construct minimal representations of 2D convolutional codes, whereas such constructions are not known for the representations used in [2]. We have characterized catastrophicity of these representations and shown how it is possible to make constructions with a designed distance. We think that these representations are a good tool for the construction of optimal 2D finite support convolutional codes. It is a topic of future research to obtain an upper bound for the maximal distance of 2D convolutional codes and to construct codes that reach such a bound. Moreover, we believe that this approach will lead to a future algebraic decoding algorithm.

Theorem 4.1
Let C be a 1D finite support convolutional code of rate k/n with ISO representation Σ = (A, B, C, D) of dimension δ, which is observable with observability index ν, and such that Φ_{dν}(A, B), defined in (9), is the parity-check matrix of a block code of distance d. If v̂(z) = (û(z), ŷ(z)) is a nonzero codeword of C (where û(z) is an input and ŷ(z) the corresponding output of Σ) and ℓ = min{i | v(i) ≠ 0}, where v̂(z) = Σ_{i∈N} v(i)z^i = Σ_{i∈N} (u(i), y(i))z^i, then d_f(C) ≥ d + wt(y(ℓ)).