An Algebraic Approach for Decoding Spread Codes

In this paper we study spread codes: a family of constant-dimension codes for random linear network coding. The codewords are subspaces represented by full-rank matrices of size k × n with entries in a finite field F_q. Spread codes are a family of optimal codes with maximal minimum distance. We give a minimum-distance decoding algorithm which requires O((n−k)k^3) operations over an extension field F_{q^k}. Our algorithm is more efficient than the previous ones in the literature when the dimension k of the codewords is small with respect to n. The decoding algorithm takes advantage of the algebraic structure of the code, and it uses original results on minors of a matrix and on the factorization of polynomials over finite fields.

This work focuses on spread codes, a family of constant-dimension codes first introduced in [MGR08]. A spread of F_q^n is a collection of subspaces of F_q^n, all of the same dimension, which partition the ambient space. Such a family of subspaces of F_q^n exists if and only if the dimension of the subspaces divides n. The construction of spread codes is based on the F_q-algebra F_q[P], where P ∈ GL_k(F_q) is the companion matrix of a monic irreducible polynomial of degree k. Concretely, we define spread codes as

S_r := { rowsp(A_1 · · · A_r) : A_1, . . . , A_r ∈ F_q[P], rank(A_1 · · · A_r) = k } ⊂ G_{F_q}(k, n),

where G_{F_q}(k, n) is the Grassmannian of all subspaces of F_q^n of dimension k. Since spreads partition the ambient space, spread codes are optimal. More precisely, they have the maximum possible minimum distance 2k, and the largest possible number of codewords for a code with minimum distance 2k. Indeed, they achieve the anticode bound from [EV08]. This family is closely related to the family of Reed-Solomon-like codes introduced in [KK08b]. We discuss the relation in detail in Section 2.2. In Lemma 17, we show how to extend to spread codes the existing decoding algorithms for Reed-Solomon-like codes and rank-metric codes.
The structure of the spreads that we use in our construction helps us devise a minimum-distance decoding algorithm, which can correct up to half the minimum distance of S_r. In Lemma 28 we reduce the decoding algorithm for a spread code S_r to at most r − 1 instances of the decoding algorithm for the special case r = 2. Therefore, we focus on the design of a decoding algorithm for the spread code S_2 ⊂ G_{F_q}(k, 2k).

The paper is structured as follows. In Section 2 we give the construction of spread codes and discuss their main properties. In Subsection 2.1 we introduce the main notations. In Subsection 2.2 we discuss the relation between spread codes and Reed-Solomon-like codes, which is given explicitly in Proposition 15. Proposition 18 shows how to apply a minimum-distance decoding algorithm for Reed-Solomon-like codes to spread codes, and estimates the complexity of decoding a spread code using such an algorithm.
The main results of the paper are contained in Section 3. In Subsection 3.1 we prove some results on matrices, which will be needed for our decoding algorithm. Our main result is a new minimum-distance decoding algorithm for spread codes, which is given in pseudocode as Algorithm 2. The decoding algorithm is based on Theorem 34, where we explicitly construct the output of the decoder. Our algorithm can be made more efficient when the first k columns of the received word are linearly independent. Proposition 35 and Corollary 36 contain the theoretical results behind this simplification, and the algorithm in pseudocode is given in Algorithm 3. Finally, in Section 4 we compute the complexity of our algorithm. Using the results from Subsection 2.2, we compare it with the complexity of the algorithms in the literature. It turns out that our algorithm is more efficient than all the known ones, provided that k ≪ n. Recall that in [MGR08] we give a construction of spreads, based on companion matrices, which is suitable for use in Random Linear Network Coding (RLNC).

Preliminaries and notations
Definition 3. Let F_q be a finite field and p = Σ_{i=0}^{k} p_i x^i ∈ F_q[x] a monic polynomial. The companion matrix of p is the matrix

P := ( 0 1 0 · · · 0 ; 0 0 1 · · · 0 ; ⋮ ; 0 0 0 · · · 1 ; −p_0 −p_1 −p_2 · · · −p_{k−1} ) ∈ F_q^{k×k}.

Let n = rk with r > 1, p ∈ F_q[x] a monic irreducible polynomial of degree k, and P ∈ F_q^{k×k} its companion matrix.
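Definition 3 is easy to make concrete in code. The sketch below is purely illustrative (q prime, and the convention that the coefficients occupy the last row of the companion matrix is assumed); it builds P and checks the defining identity p(P) = 0 over F_q:

```python
def companion_matrix(coeffs, q):
    """Companion matrix of the monic polynomial p = x^k + p_{k-1} x^{k-1} + ... + p_0
    over F_q (q prime), with coeffs = [p_0, ..., p_{k-1}]."""
    k = len(coeffs)
    P = [[0] * k for _ in range(k)]
    for i in range(k - 1):
        P[i][i + 1] = 1                      # superdiagonal of ones
    for j in range(k):
        P[k - 1][j] = (-coeffs[j]) % q       # last row: -p_0, ..., -p_{k-1}
    return P

def mat_mul(A, B, q):
    return [[sum(a * b for a, b in zip(row, col)) % q for col in zip(*B)]
            for row in A]

def poly_eval_matrix(coeffs, P, q):
    """Evaluate p(P) = P^k + p_{k-1} P^{k-1} + ... + p_0 I over F_q."""
    k = len(P)
    acc = [[int(i == j) for j in range(k)] for i in range(k)]  # identity
    powers = []
    for _ in range(k + 1):                   # I, P, ..., P^k
        powers.append(acc)
        acc = mat_mul(acc, P, q)
    out = [[0] * k for _ in range(k)]
    for c, M in zip(coeffs + [1], powers):   # monic: leading coefficient 1
        for i in range(k):
            for j in range(k):
                out[i][j] = (out[i][j] + c * M[i][j]) % q
    return out

# p(x) = x^2 + x + 1 is irreducible over F_2; its companion matrix satisfies p(P) = 0
P = companion_matrix([1, 1], 2)
assert poly_eval_matrix([1, 1], P, 2) == [[0, 0], [0, 0]]
```

Since p is the characteristic (and minimal) polynomial of its companion matrix, the final assertion holds for any monic p.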
Remark 8. Notice that the matrix representing a codeword of S_r is not unique: multiplying (A_1 · · · A_r) on the left by an invertible matrix does not change its row space. In order to have a unique representative for the elements of S_r, we bring the matrices (A_1 · · · A_r) into row reduced echelon form.
Lemma 9 ([MGR08, Theorem 1]). Let S_r ⊂ G_{F_q}(k, n) be a spread code. Then:
1. d(U, V) = 2k for all U, V ∈ S_r distinct, i.e., the code has maximal minimum distance, and
2. |S_r| = (q^n − 1)/(q^k − 1), i.e., the code has maximal cardinality with respect to the given minimum distance.
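Both properties of Lemma 9 can be checked numerically in small cases. The sketch below is illustrative only: it computes the subspace distance d(U, V) = dim U + dim V − 2 dim(U ∩ V) as 2·rank of the stacked matrix minus the individual ranks, verifies that the two codewords rowsp(I 0) and rowsp(0 I) of S_2 (with q = k = 2) are at distance 2k, and checks the cardinality formula:

```python
def rank_mod_p(M, p):
    """Rank of a matrix over F_p (p prime) via Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, p)            # modular inverse (Python 3.8+)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        r += 1
    return r

def subspace_distance(U, V, p):
    """d(U, V) = 2 rank([U; V]) - rank(U) - rank(V) over F_p."""
    return 2 * rank_mod_p(U + V, p) - rank_mod_p(U, p) - rank_mod_p(V, p)

q, k, r = 2, 2, 2
n = r * k
U = [[1, 0, 0, 0], [0, 1, 0, 0]]   # rowsp(I 0)
V = [[0, 0, 1, 0], [0, 0, 0, 1]]   # rowsp(0 I)
assert subspace_distance(U, V, q) == 2 * k        # maximal minimum distance 2k
assert (q**n - 1) // (q**k - 1) == q**k + 1       # |S_2| = (q^n - 1)/(q^k - 1)
```

The distance formula used here follows from dim(U ∩ V) = dim U + dim V − dim(U + V).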
Remark 10. In [TMR10] the authors show that spread codes are an example of orbit codes. Moreover, in [TR11] it is shown that some spread codes are cyclic orbit codes under the action of the cyclic group generated by the companion matrix of a primitive polynomial.
In Section 3 we devise a minimum-distance decoding algorithm for uniquely decodable received spaces.

Further notations
We introduce in this subsection the notation we use in the paper.
Definition 12. Let s ∈ N with s < k, and denote by L^s_{F_{q^n}} ⊂ F_{q^n}[x] the set of linearized polynomials of degree less than q^s. Equivalently, f ∈ L^s_{F_{q^n}} if and only if f = Σ_{i=0}^{s−1} f_i x^{q^i} with f_0, . . . , f_{s−1} ∈ F_{q^n}.

In the rest of the work we denote q-th power exponents by bracketed integers, i.e., x^{[i]} := x^{q^i}. Let F_q be a finite field with q elements, and let p ∈ F_q[x] be a monic irreducible polynomial of degree k > 1. P ∈ GL_k(F_q) denotes the companion matrix of p, and S ∈ GL_k(F_{q^k}) is a matrix which diagonalizes P.
Let M be a matrix of size k × k and let J = (j_1, . . . , j_s), L = (l_1, . . . , l_s) ∈ {1, . . . , k}^s. We denote by (J; L)_M the submatrix of M with row indices j_1, . . . , j_s and column indices l_1, . . . , l_s, and by [J; L]_M the corresponding minor, i.e., its determinant. We drop the suffix M when the matrix is clear from the context.
Given a tuple K = (k_1, . . . , k_s) with entries in {1, . . . , k}, we use the following notation:
• |K| := s is the length of the tuple.
• K ∩ J denotes the tuple L ⊂ K, J of maximal length |L|.
• If J ⊂ K, then K \ J denotes the tuple L ⊂ K of maximal length such that J ∩ L = ∅, where ∅ denotes the empty tuple.
• min K = min{i | i ∈ K}, with the convention that min ∅ > min K for any K.
We define the non-diagonal rank ndrank(M) of a matrix M as the maximal size of a non-zero minor of M that involves no diagonal entry of M. Finally, algorithms' complexities are expressed as O(F; p(n, k)), which corresponds to performing O(p(n, k)) operations over a field F, where n, k are given parameters.

Relation with Reed-Solomon-like codes
Reed-Solomon-like codes, also called lifted rank-metric codes, are a class of constant-dimension codes introduced in [KK08b]. They are closely related to the maximal rank distance codes introduced in [Gab85]. We give here an equivalent definition of these codes.
Definition 14. Let F_q ⊂ F_{q^n} be finite fields. Fix some F_q-linearly independent elements α_1, . . . , α_k ∈ F_{q^n}, and let ψ : F_{q^n} → F_q^n be an isomorphism of F_q-vector spaces. A Reed-Solomon-like (RSL) code is defined as

C := { rowsp(I A_f) : f ∈ L^s_{F_{q^n}} } ⊂ G_{F_q}(k, k + n),

where the i-th row of the matrix A_f ∈ F_q^{k×n} is ψ(f(α_i)) for i = 1, . . . , k.

The following proposition establishes a relation between spread codes and RSL codes. The proof is easy, but rather technical, hence we omit it.
Proposition 15. Let n = rk, let F_q ⊆ F_{q^k} ⊆ F_{q^n} be finite fields, and let P ∈ GL_k(F_q) be the companion matrix of a monic irreducible polynomial p ∈ F_q[x] of degree k > 0. Let λ ∈ F_{q^k} be a root of p and µ_1, . . . , µ_r ∈ F_{q^n} a basis of F_{q^n} over F_{q^k}. Moreover, let ψ : F_{q^n} → F_q^n be the isomorphism of F_q-vector spaces which maps the basis (λ^j µ_i)_{0≤j≤k−1, 1≤i≤r} to the standard basis of F_q^n over F_q. Then for every choice of A_0, . . . , A_{r−1} ∈ F_q[P] there exists a unique linearized polynomial of the form f = ax with a ∈ F_{q^n} whose associated RSL codeword is rowsp(A_0 · · · A_{r−1}). The constant a is a = ψ^{−1}(v), where v ∈ F_q^n is the first row of (A_0 · · · A_{r−1}).

The proposition allows us to relate our spread codes to some RSL codes. The following corollary makes the connection explicit. We use the notation of Proposition 15.
Corollary 16. For each 1 ≤ i ≤ r − 1, let µ_{1,i}, . . . , µ_{r−i,i} be a basis of F_{q^{(r−i)k}} over F_{q^k}, and let ψ_i denote the isomorphism of vector spaces that maps the basis (λ^j µ_{l,i})_{0≤j≤k−1, 1≤l≤r−i} to the standard basis. Then the codewords of S_r whose first i − 1 blocks are zero and whose i-th block is the identity correspond, via ψ_i, to the codewords of an RSL code.

Corollary 16 readily follows from Proposition 15. The connection that we have established with RSL codes allows us to extend any minimum-distance decoding algorithm for RSL codes to a minimum-distance decoding algorithm for spread codes. We start with a key lemma.
Lemma 17. Let S_r be a spread code, and let R = rowsp(R_1 · · · R_r) ∈ G_{F_q}(k̃, n) for some k̃ ≤ k. Assume there exists a C = rowsp(C_1 · · · C_r) ∈ S_r such that d(R, C) < k, and let i be the minimal index such that rank(R_i) > (k̃ − 1)/2. It holds that C_j = 0 for j < i and C_i = I.

Proof. The result follows from Lemma 28 and the observation that the first non-zero block of a codeword of S_r in row reduced echelon form is the identity matrix.

In the next proposition, we use Corollary 16 and Lemma 17 to adapt to spread codes any decoding algorithm for RSL codes. In particular, we apply our results to the algorithms contained in [KK08b] and [SKK08], and we give the complexity of the resulting algorithms for spread codes.
Proposition 18. Any minimum-distance decoding algorithm for RSL codes may be extended to a minimum-distance decoding algorithm for spread codes. In particular, the algorithms described in [KK08b] and [SKK08] can be extended to minimum-distance decoding algorithms for spread codes, with complexities O(F_{q^{n−k}}; n^2) for the former and O(F_{q^{n−k}}; k(n − k)) for the latter.
Proof. Suppose we are given a minimum-distance decoding algorithm for RSL codes. We construct a minimum-distance decoding algorithm for spread codes as follows. Let R = rowsp(R_1 · · · R_r) ∈ G_{F_q}(k̃, n) be the received word, and assume that there exists a C = rowsp(C_1 · · · C_r) ∈ S_r such that d(R, C) < k. First, one computes the rank of R_1, R_2, . . . until one finds an i such that rank(R_i) > (k̃ − 1)/2 and rank(R_j) ≤ (k̃ − 1)/2 for j < i. Thanks to Lemma 17, one knows that C_j = 0 for j < i and C_i = I. Moreover, one has

d(rowsp(R_i R_{i+1} · · · R_r), rowsp(I C_{i+1} · · · C_r)) < k.

Therefore, one can apply the minimum-distance decoding algorithm for RSL codes to the received word rowsp(R_i R_{i+1} · · · R_r) in order to compute C_{i+1}, . . . , C_r.
Assume now that one uses as minimum-distance decoder for RSL codes either the decoding algorithm from [KK08b] or the one from [SKK08]. The complexity of computing the ranks of R_1, . . . , R_i via row reduced echelon forms is O(F_q; nk^2). The complexity of the decoding algorithm for RSL codes is O(F_{q^{n−k}}; n^2) for the one in [KK08b] and O(F_{q^{n−k}}; k(n − k)) for the one in [SKK08]. In both cases, the RSL decoding step is the dominant term in the complexity estimate.
It is well known that RSL codes are closely related to the rank-metric codes introduced in [Gab85]. Although the rank metric on rank-metric codes is equivalent to the subspace distance on RSL codes, the minimum-distance decoding problem for the former is not equivalent to the one for the latter. In [SKK08] the authors introduced the Generalized Decoding Problem for Rank-Metric Codes, which is equivalent to the minimum-distance decoding problem of RSL codes. Decoding algorithms for rank-metric codes such as the ones contained in [Gab85, Loi06, RP04] must be generalized in order to be effective for the Generalized Decoding Problem for Rank-Metric Codes, and consequently, to be applicable to RSL codes.
Another interesting application of Lemma 17 allows us to improve the efficiency of the decoding algorithm for the codes proposed in [Ska10]. For the relevant definitions, we refer the interested reader to the original article.

Corollary 19. There is an algorithm which decodes the codes from [Ska10] and has complexity
The algorithm is a combination of Lemma 17 and the decoding algorithm contained in [SKK08]. First, by Lemma 17, one finds the position of the identity matrix. This reduces the minimum-distance decoding problem to decoding an RSL code, so one can use the algorithm from [SKK08].

The Minimum-Distance Decoding Algorithm
In this section we devise a new minimum-distance decoding algorithm for spread codes. In the next section, we show that our algorithm is more efficient than the ones present in the literature, when n ≫ k.
We start by proving some results on matrices, which will be used to design the decoding algorithm and to prove its correctness.

Preliminary results on matrices
Let F be a field and let m ∈ F[y_1, . . . , y_s] be a polynomial of the form m = Σ_{U⊆(1,...,s)} a_U y_U, where y_U := Π_{u∈U} y_u and a_{(1,...,s)} ≠ 0.

Lemma 20. The polynomial m factors into linear factors m = Π_{i=1}^s (y_i + µ_i) with µ_1, . . . , µ_s ∈ F if and only if its coefficients satisfy

a_U · a_V = a_{U∩V} · a_{U∪V}    (2)

for all U, V such that |V| = s − 1.

Proof. We proceed by induction on s.
⇒ If s = 1, m is a linear polynomial and there is nothing to prove. Let us now suppose that the thesis is true for s − 1, and write m = (y_s + µ_s) · m̃ with m̃ = Σ_{U⊆(1,...,s−1)} ã_U y_U, where ã_{(1,...,s−1)} = 1 and the coefficients ã_U with U ⊆ (1, . . . , s − 1) satisfy condition (2) by the induction hypothesis. The coefficients of m are a_U = ã_{U\(s)} if s ∈ U, and a_U = µ_s ã_U otherwise. Therefore we only need to prove that (2) holds for U ⊆ (1, . . . , s − 1), which follows by a direct computation distinguishing whether s belongs to U and V or not.
⇐ The thesis is trivial for s = 1. Let us assume that the thesis holds for s − 1. Condition (2) allows us to explicitly extract a linear factor of the polynomial; the remaining factor satisfies (2) in s − 1 variables, and the thesis follows by induction.
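For a product of linear factors m = Π_{i=1}^s (y_i + µ_i), the coefficients are a_U = Π_{i∉U} µ_i, and the multiplicative relation a_U a_V = a_{U∩V} a_{U∪V} (a relation of the kind underlying condition (2)) can be verified numerically. The sketch below is illustrative only, with arbitrary values for the µ_i:

```python
from itertools import combinations

def coeff(U, mu):
    """Coefficient a_U of y_U in m = prod_i (y_i + mu_i): a_U = prod_{i not in U} mu_i."""
    out = 1
    for i, m_i in enumerate(mu):
        if i not in U:
            out *= m_i
    return out

mu = [2, 3, 5, 7]                 # arbitrary illustrative values of mu_i
s = len(mu)
subsets = [frozenset(c) for r in range(s + 1) for c in combinations(range(s), r)]
for U in subsets:
    for V in subsets:
        # complements satisfy U^c ⊎ V^c = (U∩V)^c ⊎ (U∪V)^c as multisets
        assert coeff(U, mu) * coeff(V, mu) == coeff(U & V, mu) * coeff(U | V, mu)
```

The assertion holds for all pairs U, V because the multiset of indices outside U together with those outside V coincides with the indices outside U ∩ V together with those outside U ∪ V.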
Let F[x_{i,j}]_{1≤i,j≤k} be a polynomial ring with coefficients in a field F, and consider the generic matrix M = (x_{i,j})_{1≤i,j≤k} of size k × k. Denote by I_{s+1} ⊂ F[x_{i,j}]_{1≤i,j≤k} the ideal generated by all minors of size s + 1 of M which do not involve entries on the diagonal, i.e.,

I_{s+1} := ( [J; L]_M : |J| = |L| = s + 1, J ∩ L = ∅ ).

We establish some relations on the minors of M, modulo the ideal I_{s+1}.
Assume that the thesis is true for s − 1. Let us now focus on the factor x_{j_r,l_{s+1}} [J_s; L_s] for r ≥ s + 1. By substitution, the minor can be rewritten in terms of [J; L̃], where L̃ = (l_1, . . . , l_s, l_s, l_{s+2}, . . . , l_k). Since the column l_s appears twice in L̃, we have [J; L̃] = 0. The last equality follows from the induction hypothesis.
The following is an easy consequence of Lemma 21.
We now study the minors of matrices of the form S^{−1}NS, where N ∈ F_q^{k×k} and S has a special form, which we describe in the next lemma.
Lemma 23. Let P ∈ GL_k(F_q) be the companion matrix of a monic irreducible polynomial p ∈ F_q[x] of degree k > 0, and let λ ∈ F_{q^k} be a root of p. Then the matrix

S := (λ^{(i−1)[j−1]})_{1≤i,j≤k} = (λ^{(i−1)q^{j−1}})_{1≤i,j≤k}    (3)

diagonalizes P.
Proof. The eigenvalues of the matrix P correspond to the roots λ, λ^q, . . . , λ^{q^{k−1}} of the irreducible polynomial p. It is enough to show that the columns of S correspond to the eigenvectors of P.
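The diagonalization is easy to verify numerically by representing F_{q^k} as F_q[x]/(p). The sketch below is an illustration with q = 2 and p = x^2 + x + 1 (and assumes the companion-matrix convention with the coefficients in the last row); it checks, column by column, that each column of S is an eigenvector of P for a conjugate of λ:

```python
q = 2
p = [1, 1]        # p(x) = x^2 + x + 1, irreducible over F_2; k = deg p = 2
k = len(p)

def fadd(a, b):
    return tuple((x + y) % q for x, y in zip(a, b))

def fmul(a, b):
    """Multiply in F_{q^k} = F_q[x]/(p); elements are coefficient tuples (c_0, ..., c_{k-1})."""
    prod = [0] * (2 * k - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    for d in range(2 * k - 2, k - 1, -1):    # reduce with x^k = -(p_0 + ... + p_{k-1} x^{k-1})
        c, prod[d] = prod[d], 0
        for i in range(k):
            prod[d - k + i] = (prod[d - k + i] - c * p[i]) % q
    return tuple(prod[:k])

def fpow(a, e):
    r = (1,) + (0,) * (k - 1)
    for _ in range(e):
        r = fmul(r, a)
    return r

lam = (0, 1) + (0,) * (k - 2)                # λ = x, a root of p in F_{q^k}

# S_{i,j} = (λ^i)^{q^j} for 0 <= i, j <= k-1   (1-based: λ^{(i-1) q^{j-1}})
S = [[fpow(fpow(lam, i), q ** j) for j in range(k)] for i in range(k)]

def embed(c):
    return ((c % q,) + (0,) * (k - 1))       # embed F_q into F_{q^k}

# companion matrix of p (coefficients in the last row), with entries in F_{q^k}
P = [[embed(0) for _ in range(k)] for _ in range(k)]
for i in range(k - 1):
    P[i][i + 1] = embed(1)
for j in range(k):
    P[k - 1][j] = embed(-p[j])

# check column by column that P s_j = λ^{q^j} s_j
for j in range(k):
    eig = fpow(lam, q ** j)
    for i in range(k):
        Ps = embed(0)
        for l in range(k):
            Ps = fadd(Ps, fmul(P[i][l], S[l][j]))
        assert Ps == fmul(eig, S[i][j])
```

Replacing p by any monic irreducible polynomial over a prime field gives the corresponding check for larger k.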
We now establish some properties of S.
Lemma 24. The matrices S and S^{−1}, with S defined by (3), satisfy the following properties:
1. the entries of the first column of S (respectively, the first row of S^{−1}) form a basis of F_{q^k} over F_q, and
2. the entries of the (i + 1)-th column of S (respectively, row of S^{−1}) are the q-th powers of the ones of the i-th column (respectively, row), for i = 1, . . . , k − 1.
Proof. The two properties for the matrix S come directly from its definition. By [LN94, Definition 2.30] we know that there exists a unique basis {γ_0, . . . , γ_{k−1}} of F_{q^k} over F_q such that

Tr_{F_{q^k}/F_q}(γ_i λ^j) = δ_{i,j} for 0 ≤ i, j ≤ k − 1,

where Tr_{F_{q^k}/F_q}(α) := α + α^{[1]} + · · · + α^{[k−1]} for α ∈ F_{q^k}. We have S^{−1} = (γ_{l−1}^{[j−1]})_{1≤j,l≤k}: indeed, (S · S^{−1})_{i,l} = Σ_j (λ^{i−1} γ_{l−1})^{[j−1]} = Tr_{F_{q^k}/F_q}(λ^{i−1} γ_{l−1}) = δ_{i,l}. The two properties for S^{−1} follow.

The next theorem and corollary will be used in Subsection 3.3 to devise a simplified minimum-distance decoding algorithm, under the assumption that the first k columns of the received vector space are linearly independent.
Theorem 25. Let t ≤ k, and let N ∈ F_q^{t×k} and S ∈ F_{q^k}^{k×t} be two matrices satisfying the following properties:
• N has full rank,
• the entries of the first column of S form a basis of F_{q^k} over F_q, and
• the entries of the (i + 1)-th column of S are the q-th powers of the ones of the i-th column, for i = 1, . . . , t − 1.
Then det(NS) ≠ 0.

Proof. Write S = (s_l^{[j−1]})_{1≤l≤k, 1≤j≤t}, where s_1, . . . , s_k ∈ F_{q^k} form a basis of F_{q^k} over F_q. Setting τ_i := Σ_{l=1}^k n_{i,l} s_l ∈ F_{q^k}, we have (NS)_{i,j} = τ_i^{[j−1]}, since the entries of N are in F_q. The elements τ_1, . . . , τ_t ∈ F_{q^k} are linearly independent over F_q. Indeed, the linear combination Σ_{i=1}^t α_i τ_i is zero only when Σ_{i=1}^t α_i n_{i,l} = 0 for l = 1, . . . , k. Since N has full rank, it follows that α_1, . . . , α_t must all be zero, leading to the linear independence of τ_1, . . . , τ_t. Now let a_0, . . . , a_{t−1} ∈ F_{q^k} be such that NS · (a_0, . . . , a_{t−1})^T = 0, and set f := Σ_{j=0}^{t−1} a_j x^{[j]}. The elements τ_1, . . . , τ_t are then roots of f. Since f is a linear map, the kernel of f contains the subspace ⟨τ_1, . . . , τ_t⟩ ⊂ F_{q^k}, which has q^t elements. Hence f would be a nonzero polynomial of degree at most q^{t−1} with q^t different roots; therefore a_0 = · · · = a_{t−1} = 0 and det(NS) ≠ 0.

Corollary 26. Let N ∈ F_q^{k×k} be a matrix of rank t, and let S be the matrix from (3). Then [J; L]_{S^{−1}NS} ≠ 0 for all tuples J, L of t consecutive indices.

Proof. Let t = rank(N) and J, L ⊂ (1, . . . , k) with |J| = |L| = t, and let H = (1, . . . , t). Let N_1 ∈ F_q^{k×t} and N_2 ∈ F_q^{t×k} be matrices of full rank such that N = N_1 N_2. One has [J; L]_{S^{−1}NS} = [J; H]_{S^{−1}N_1} · [H; L]_{N_2 S}. We can now focus on the characterization of the maximal minors of the matrix N_2 S; the same considerations apply to the matrix S^{−1}N_1 by considering its transpose.
The minor [H; L]_{N_2 S} is the determinant of a square matrix obtained by multiplying N_2 with the submatrix consisting of the columns of S indexed by L. Let L consist of consecutive indices. By Lemma 24, this submatrix of S together with N_2 satisfies the conditions of Theorem 25. It follows that [H; L]_{N_2 S} ≠ 0.

As a consequence, we have that [J; L]_{S^{−1}NS} ≠ 0 when both J and L are tuples of consecutive indices.
The following is a reformulation of Corollary 26 for small rank matrices. In particular, ndrank(S −1 N S) = rank(N ).

The Decoding Algorithm
In this subsection we devise an efficient minimum-distance decoding algorithm for spread codes, and establish some closely related mathematical results.
We start by reducing the minimum-distance decoding algorithm for S r to at most r−1 instances of the minimum-distance decoding algorithm for S 2 . Notice that the minimum-distance decoders for the case r = 2 can be run in parallel.
Let R = rowsp(R_1 · · · R_r) be a received space. We assume that 1 ≤ k̃ := rank(R) ≤ k.
Algorithm 1 on page 13 is based on the following lemma.
Lemma 28. Let S_r be a spread code, and let R = rowsp(R_1 · · · R_r) ∈ G_{F_q}(k̃, rk). Assume there exists a C = rowsp(C_1 · · · C_r) ∈ S_r such that d(R, C) < k. Then, for every i ∈ {1, . . . , r},

C_i = 0 if and only if rank(R_i) ≤ (k̃ − 1)/2.

Proof. ⇒ Let i ∈ {1, . . . , r} be an index such that C_i = 0. By the construction of a spread code there exists a j ∈ {1, . . . , r} with C_j = I. We claim that dim(C ∩ R) > k̃/2. In fact, since d(C, R) = k + k̃ − 2 dim(C ∩ R) < k, we obtain dim(C ∩ R) > k̃/2. From the claim it follows that rank(R_i) ≤ k̃ − dim(C ∩ R) < k̃/2, since every vector of C ∩ R has zero i-th block. This proves that rank(R_i) ≤ (k̃ − 1)/2.

⇐ Let i ∈ {1, . . . , r} be such that rank(R_i) ≤ (k̃ − 1)/2, and assume by contradiction that C_i ∈ F_q[P]^*. Since C_i is invertible, the projection onto the i-th block is injective on C, hence dim(C ∩ R) ≤ rank(R_i) ≤ (k̃ − 1)/2. It follows that d(C, R) = k + k̃ − 2 dim(C ∩ R) ≥ k + 1, which contradicts the assumption that d(C, R) = k + k̃ − 2 dim(C ∩ R) < k.
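The rank criterion of Lemma 28 is straightforward to apply. The following sketch is illustrative (q prime): it splits a received matrix into its r blocks and returns the indices i for which rank(R_i) ≤ (k̃ − 1)/2, i.e., the blocks that must satisfy C_i = 0 in the sent codeword:

```python
def rank_mod_p(M, p):
    """Rank of a matrix over F_p (p prime) via Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, p)            # modular inverse (Python 3.8+)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        r += 1
    return r

def zero_blocks(R, k, r, p):
    """Indices i (1-based) with rank(R_i) <= (k_tilde - 1)/2, where k_tilde = rank(R);
    by Lemma 28 these are exactly the blocks with C_i = 0 in the sent codeword."""
    k_tilde = rank_mod_p(R, p)
    out = []
    for i in range(r):
        block = [row[i * k:(i + 1) * k] for row in R]
        if 2 * rank_mod_p(block, p) <= k_tilde - 1:   # rank <= (k_tilde - 1)/2
            out.append(i + 1)
    return out

# a received word equal to the codeword rowsp(0 I) of S_2, with q = k = 2:
R = [[0, 0, 1, 0], [0, 0, 0, 1]]
assert zero_blocks(R, 2, 2, 2) == [1]   # the first block of the sent codeword is 0
```

The comparison 2·rank ≤ k̃ − 1 avoids fractional arithmetic when k̃ is even.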
Algorithm 1: Minimum-distance decoding algorithm: n = rk, r > 2
input : R = rowsp(R_1 · · · R_r) ∈ G_{F_q}(k̃, rk), r > 2, P ∈ GL_k(F_q) the companion matrix of p ∈ F_q[x], and S ∈ GL_k(F_{q^k}) its diagonalizing matrix.
output: C ∈ S_r ⊂ G_{F_q}(k, rk) such that d(R, C) < k, if such a C exists.
let r_i = rank(R_i) for i = 1, . . . , r;
if r_i ≤ (k̃ − 1)/2 for all i ∈ {1, . . . , r} then
    return "there exists no C ∈ S_r such that d(R, C) < k";
end
for i ∈ {1, . . . , r} with r_i ≤ (k̃ − 1)/2 do C_i = 0 ∈ F_q^{k×k}; end
let j = min{i : r_i > (k̃ − 1)/2} and set C_j = I;
for j < i ≤ r with r_i > (k̃ − 1)/2 do
    run a minimum-distance decoding algorithm for r = 2 with input rowsp(R_j R_i), P and S;
    if the minimum-distance decoding algorithm returns no codeword of S_2 then
        return "there exists no C ∈ S_r such that d(R, C) < k";
    else
        let C_i ∈ F_q[P] be such that the returned codeword is rowsp(I C_i);
    end
end
return C = rowsp(C_1 · · · C_r).
Because of Lemma 28, we may now focus on designing a minimum-distance decoding algorithm for the case where n = 2k. For the remainder of this subsection, we consider the spread code

S := { rowsp(I A) : A ∈ F_q[P] } ∪ { rowsp(0 I) } ⊂ G_{F_q}(k, 2k),

where I and 0 are respectively the identity and the zero matrix of size k × k.
Since a minimum-distance decoding algorithm decodes uniquely up to half the minimum distance, we are interested in writing an algorithm with the following specifications.

Input: R = rowsp(R_1 R_2) ∈ G_{F_q}(k̃, 2k), P ∈ GL_k(F_q) the companion matrix of p ∈ F_q[x], and S ∈ GL_k(F_{q^k}) its diagonalizing matrix.
Output: C ∈ S ⊂ G_{F_q}(k, 2k) such that d(R, C) < d(S)/2 = k, if such a C exists.

We first give a membership criterion for spread codes. We follow the notation given at the beginning of this section.
From the proposition, we get an efficient algorithm to test whether a received vector space is error-free.
The following is an easy consequence of Lemma 28. It allows us to efficiently test whether the sent codeword was rowsp(0 I) or rowsp(I 0).
Corollary 31. Let R = rowsp(R_1 R_2) ∈ G_{F_q}(k̃, 2k) be a received space, and assume that it is uniquely decodable. The following are equivalent:
• rank(R_1) ≤ (k̃ − 1)/2, and
• the output of a minimum-distance decoding algorithm is rowsp(0 I).

The analogous statement holds for R 2 .
Because of Corollary 31, we can restrict our decoding algorithm to look for codewords of the form C = rowsp(I A), where A ∈ F_q[P]. Since there is an obvious symmetry in the construction of a spread code, we assume without loss of generality that rank(R_1) ≥ rank(R_2) > (k̃ − 1)/2.
With the following theorem we translate the unique decodability condition into a rank condition, and then into a greatest common divisor condition.
Theorem 32. Let R = rowsp(R_1 R_2) ∈ G_{F_q}(k̃, n) be a subspace with rank(R_1) ≥ rank(R_2) > (k̃ − 1)/2. The following are equivalent:
• R is uniquely decodable.
• There exists a unique µ ∈ F_{q^k} such that

rank(S^{−1}R_1S∆(µ) − S^{−1}R_2S) ≤ ⌊(k̃ − 1)/2⌋,    (4)

where ∆(µ) := diag(µ, µ^{[1]}, . . . , µ^{[k−1]}).

Proof. R is uniquely decodable if and only if there exists a unique matrix X ∈ F_q[P] such that d(R, rowsp(I X)) < k, that is, rank(R_1X − R_2) ≤ ⌊(k̃ − 1)/2⌋. Furthermore, we get that rank(R_1X − R_2) = rank(S^{−1}R_1S · S^{−1}XS − S^{−1}R_2S), and the fact that S^{−1}XS = ∆(µ) for some µ ∈ F_{q^k} is a consequence of Lemma 29. The existence of a unique solution X ∈ F_q[P] is then equivalent to the existence of a unique µ ∈ F_{q^k} satisfying (4). This is equivalent to the condition that all minors of size ⌊(k̃ + 1)/2⌋ of S^{−1}R_1S∆(µ) − S^{−1}R_2S are zero. This leads to a nonempty system of polynomial equations in the variable x having the unique solution µ ∈ F_{q^k}. Therefore the greatest common divisor of these minors, viewed as polynomials in x, has µ as its unique root in F_{q^k}; this follows from the uniqueness of µ.
Theorem 32 has the following immediate consequence, which constitutes a step forward towards the design of our decoding algorithm.
Corollary 33. Assume that the received space R ∈ G_{F_q}(k̃, n) is uniquely decodable. Then it decodes to rowsp(I S∆(µ)S^{−1}), where µ ∈ F_{q^k} is the unique element satisfying condition (4).

Under the unique decodability assumption, decoding a received space R corresponds to computing the µ from Corollary 33. However, computing the greatest common divisor of all the minors of the appropriate size of S^{−1}R_1S∆(x) − S^{−1}R_2S does not constitute an efficient algorithm.
The following theorem provides a significant computational simplification of this approach. In the proof, we give a procedure to construct one minor of the appropriate size whose factorization we can explicitly describe. In particular, we give explicit formulas for its roots. In practice, one proceeds as follows: first, find such a minor and write down all of its roots; second, for each root µ check whether µ satisfies the rank condition (4).
Theorem 34. Let R = rowsp(R_1 R_2) ∈ G_{F_q}(k̃, 2k) be uniquely decodable with rank(R_1) ≥ rank(R_2) > (k̃ − 1)/2, let S ∈ GL_k(F_{q^k}) be a matrix diagonalizing P, and let M ∈ GL_{k̃}(F_{q^k}) be such that M(S^{−1}R_1S S^{−1}R_2S) is in row reduced echelon form. Let R(x) := MS^{−1}R_1S∆(x) − MS^{−1}R_2S. Then there exist J, L ⊂ I := (1, . . . , k) with |J| = |L| = ⌊(k̃ + 1)/2⌋ − (k − rank(R_1)) such that the minor [J; L]_{R(x)} is a nonzero polynomial whose roots can be written down explicitly.

Proof. We first focus on the form of the matrix R(x). Let r_i := rank(R_i) for i = 1, 2. We deduce by Corollary 26 that the pivots of the matrix M(S^{−1}R_1S S^{−1}R_2S) are contained in the first r_1 columns and, since dim R = k̃, in a choice of k̃ − r_1 of the first r_2 columns of MS^{−1}R_2S. Figure 1 and Figure 2 on page 16 depict respectively the matrix M(S^{−1}R_1S S^{−1}R_2S) and R(x).
Here (l_1, . . . , l_{k̃−r_1}) ⊂ I is the tuple of indices of the columns corresponding to the pivots of MS^{−1}R_2S. Hence, for all i ∈ {1, . . . , k̃ − r_1}, the entries of column l_i of R(x) are all zero except for the entry in row l_i, which is x^{[l_i−1]}, and the entry in row r_1 + i, which is 1.
The matrix R′(x) contains unknowns only in its diagonal entries. Let (J; L)_{R′(x)} be a submatrix of R′(x); its minor [J; L]_{R′(x)} then factors according to the structure described above. Let µ ∈ F_{q^k} be the unique element satisfying condition (4); by the previous relation, µ is a root of all minors [J; L]_{R′(x)} with |J| = |L| = ⌊(k̃ + 1)/2⌋ − (k − r_1). Let J′, L′ ⊂ I′ be tuples of indices of this size such that [J′; L′]_{R′(x)} is not identically zero; the existence of a pair of tuples satisfying these conditions is ensured by the definition of ndrank(R′(x)). The roots of the resulting minor can be written down explicitly in terms of the minors [J \ K; L \ K]_{R(0)}, and µ is among them.

Summarizing, the decoding algorithm that we obtain exploiting Theorem 34 is as follows:
1. Find tuples J, L satisfying the assumptions (6) of the theorem. Algorithm 4 in Section 4 gives an efficient way to find such tuples.

2. Write down the roots of the minor [J; L]_{R(x)}, where R(x) is the matrix in the statement of the theorem. Theorem 34 gives explicit formulas for the roots, so this step requires a negligible amount of computation.
3. For each root µ found in the previous step, check whether the rank of R(µ) is smaller than or equal to ⌊(k̃ − 1)/2⌋.
4. If the unique decodability assumption is satisfied, exactly one root µ will satisfy the rank condition in the previous step. In this case, we decode to rowsp(I S∆(µ)S^{−1}).
5. Else, none of the roots will. In this case, we have a decoding failure.
We now give the detailed minimum-distance decoding algorithm in pseudocode.
if either r_1 = k and S^{−1}R_1^{−1}R_2S is diagonal, or r_1 = 0 and r_2 = k, then return R ∈ S; end
We start by establishing the mathematical background. Under the assumption that the matrix R 1 is invertible, an alternative form of Theorem 32 holds.
Proposition 35. Let R = rowsp(R_1 R_2) ∈ G_{F_q}(k, n) be a subspace with R_1 ∈ GL_k(F_q). The following are equivalent:
• R is uniquely decodable.
• There exists a unique µ ∈ F_{q^k} such that rank(S^{−1}R_1^{−1}R_2S − ∆(µ)) ≤ ⌊(k − 1)/2⌋.

Proof. By Theorem 32, R is uniquely decodable if and only if there exists a unique µ ∈ F_{q^k} such that rank(S^{−1}R_1S∆(µ) − S^{−1}R_2S) ≤ ⌊(k − 1)/2⌋. Let A = S∆(µ)S^{−1}; since R_1 is invertible, this rank equals rank(S^{−1}(R_1^{−1}R_2 − A)S) = rank(S^{−1}R_1^{−1}R_2S − ∆(µ)), and the statement follows by Corollary 27.

Our improved decoding algorithm relies on the following corollary.
Proof. By Proposition 35, there exists a unique µ for which the rank condition holds. Hence it suffices to consider minors of R(x) of size ndrank(S^{−1}R_1^{−1}R_2S) + 1. By Corollary 27, such a minor is not identically zero. Hence its root µ makes rank(R(µ)) = ndrank(S^{−1}R_1^{−1}R_2S). By Proposition 35, µ yields the unique solution to the decoding problem.
Remark 37. The previous corollary allows us to design a more efficient decoding algorithm than the one presented in [MGR08], since it does not require the use of the Euclidean Algorithm. More precisely, it allows us to find a minor (in fact, many of them) whose roots can be computed directly via an explicit formula. In practice, this makes the decoding complexity negligible.
Algorithm 3: Minimum-distance decoding algorithm: n = 2k, R_1 non-singular
input : R = rowsp(R_1 R_2) ∈ G_{F_q}(k, 2k) with either rank(R_1) = k or rank(R_2) = k, P ∈ GL_k(F_q) the companion matrix of p ∈ F_q[x], and S ∈ GL_k(F_{q^k}) its diagonalizing matrix.
output: C ∈ S ⊂ G_{F_q}(k, n) such that d(R, C) < k, if such a C exists.
if either r_1 = k and S^{−1}R_1^{−1}R_2S is diagonal, or r_1 = 0 and r_2 = k, then return R ∈ S; end

Algorithms Complexities
In this section, we compute the complexity of some algorithms that we gave in the previous section.
We start by specifying an algorithm for finding the tuples J′, L′ ⊂ I′ needed in Step 4 of Algorithm 2. The algorithm performs only row operations. The pseudocode is given in Algorithm 4, while its correctness is proved in the next lemma.

Proof. We start by setting K = (1, . . . , k). The algorithm terminates since |K| strictly decreases after every iteration of the while loop. Moreover, its complexity is bounded by the complexity of the Gaussian elimination algorithm, which computes the row reduced echelon form of a matrix in F_q^{n×n} in O(F_q; n^3) operations. We have to prove that the returned tuples J, L ⊂ (1, . . . , k) satisfy the output conditions. Since M is not diagonal, J, L ≠ ∅. The disjointness J ∩ L = ∅ follows from the fact that J, L are initialized to ∅, and each time we modify them we replace them by J ∪ (j) and L ∪ (l), where j ≠ l and j, l are not elements of J ∪ L.
In order to continue, we have to characterize the matrix N. The matrix changes as soon as we find coordinates j, l ∈ I with j ≠ l for which n_{j,l} ≠ 0. The multiplication P N consists of the following row operations, where N = (n_{j,l})_{1≤j,l≤k}:
• the i-th row of P N is the i-th row of N for i ≤ j, and
• the i-th row of P N is the i-th row of N minus n_{i,l}/n_{j,l} times the j-th row of N, for i > j.
It follows that the entries of the l-th column of P N are zero as soon as the row index is bigger than j.
We claim that after each iteration of the while loop it holds that [J; L]_N ≠ 0. We prove it by induction on the cardinality of J and L. Since the matrix M is not diagonal, the while loop will eventually produce tuples J = (j) and L = (l) with j ≠ l such that [J; L]_M ≠ 0. Now suppose that we have J, L such that J, L ≠ ∅, J ∩ L = ∅ and [J; L]_N ≠ 0, and that there exist, following the algorithm, entries j, l ∈ I with j ≠ l such that n_{j,l} ≠ 0. From the previous paragraph, the only nonzero entry of the row with index j of the submatrix (J ∪ (j); L ∪ (l))_N, which by construction is the last one, is n_{j,l}; hence [J ∪ (j); L ∪ (l)]_N = ± n_{j,l} · [J; L]_N ≠ 0.

For simplicity, in the following comparisons we give the minimum-distance decoding complexity only for the case when the received space R ∈ G_{F_q}(k, n). This is an upper bound for the complexity in the general case. The precise complexity for the case when R ∈ G_{F_q}(k̃, n), k̃ < k, may be obtained via an easy adaptation of our arguments.
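The greedy search just proved correct can be sketched as follows. This is a simplified illustration over a prime field, not the paper's exact pseudocode: pick a row j, find a column l ≠ j with a nonzero entry, record the pair, clear the rest of column l below row j, and repeat; a determinant check confirms that [J; L]_M ≠ 0.

```python
def find_tuples(M, p):
    """Greedy sketch of the idea behind Algorithm 4: build disjoint tuples J, L
    with [J; L]_M != 0 for a non-diagonal matrix M over F_p (p prime)."""
    k = len(M)
    N = [[x % p for x in row] for row in M]
    J, L, K = [], [], list(range(k))
    while K:
        j = K[0]
        l = next((c for c in K if c != j and N[j][c]), None)
        if l is None:
            K.remove(j)
            continue
        J.append(j); L.append(l)
        K.remove(j); K.remove(l)
        inv = pow(N[j][l], -1, p)             # modular inverse (Python 3.8+)
        for i in range(j + 1, k):             # clear column l below row j
            f = N[i][l] * inv % p
            N[i] = [(a - f * b) % p for a, b in zip(N[i], N[j])]
    return J, L

def minor_det(M, J, L, p):
    """Determinant over F_p of the submatrix of M with rows J and columns L."""
    A = [[M[j][l] % p for l in L] for j in J]
    det, m = 1, len(A)
    for c in range(m):
        piv = next((i for i in range(c, m) if A[i][c]), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det = det * A[c][c] % p
        inv = pow(A[c][c], -1, p)
        for i in range(c + 1, m):
            f = A[i][c] * inv % p
            A[i] = [(a - f * b) % p for a, b in zip(A[i], A[c])]
    return det % p

M = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]                            # a non-diagonal matrix over F_2
J, L = find_tuples(M, 2)
assert set(J).isdisjoint(L)
assert minor_det(M, J, L, 2) != 0             # [J; L]_M is a non-zero minor
```

Note that this sketch makes no claim of maximizing |J|; it only illustrates how row operations preserve the non-vanishing of the accumulated minor.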

Complexity of the decoding algorithm
Algorithm 2 consists of matrix operations over the extension field F_{q^k} ⊇ F_q. The most expensive of these operations is the computation of the rank of matrices of size k × k, which can be performed via the Gaussian elimination algorithm. The complexities are then as follows:
• The complexity of step 4 is O(F_{q^k}; k^3), which corresponds to the computation of rank(R(µ)).
• The complexity of step 5 is O(F_{q^k}; k^4), which corresponds to the computation of rank(R(µ_i)) for all i ∈ K, where |K| ≤ ⌊(k − 1)/2⌋.
The overall complexity of Algorithm 2 is then O(F_{q^k}; k^4). This makes the complexity of Algorithm 1 O(F_{q^k}; (n − k)k^3). Notice that computing the ranks of the matrices R_i has complexity O(F_q; (n − k)k^2), which is dominated by O(F_{q^k}; (n − k)k^3).

Algorithm 4: Finding the tuples J, L (main loop)
for l ∈ K with l ≠ j do
    if n_{j,l} ≠ 0 and t = 0 then
        J = J ∪ (j), L = L ∪ (l) and K = K \ (j, l);
        P = (p_{j′,l′})_{1≤j′,l′≤k} such that p_{i,i} = 1 for any i ∈ {1, . . . , k}, p_{i,j} = −n_{i,l}/n_{j,l} for any i ∈ I with i > j, and p_{j′,l′} = 0 otherwise;
        N = P N; t = 1;
    end
end
if t = 0 then K = K \ (j); end
j = min K;
return J, L;

Comparison with other algorithms and conclusions
We compare the complexity of Algorithm 1 with other algorithms present in the literature, specifically with the algorithms discussed in Proposition 18. The complexity of the decoding algorithm contained in [KK08b] is O(F_{q^{n−k}}; n^2). In order to compare the two complexity estimates, we use the fact that the complexity of the operations in an extension field F_{q^s} ⊇ F_q is O(F_q; s^2). This is a crude upper bound, and the complexity may be improved in some cases (see, e.g., [GPS07]). Nevertheless, under this assumption the decoding algorithm from [KK08b] has complexity O(F_q; n^2(n − k)^2).
Following similar reasoning, the complexity of the decoding algorithm contained in [SKK08] is O(F_{q^{n−k}}; k(n − k)), i.e., O(F_q; k(n − k)^3). We conclude that the minimum-distance decoding algorithm presented in this paper has lower complexity than the algorithms in [KK08b] and [SKK08] whenever k ≪ n. Since this is the relevant case for the applications, the decoding algorithm that we propose usually constitutes a faster option for decoding spread codes.
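The comparison is easy to tabulate. The toy computation below is illustrative only: constants and lower-order terms are ignored, and each extension-field operation is charged s^2 base-field operations, so our O(F_{q^k}; (n − k)k^3) becomes (n − k)k^5 over F_q, against n^2(n − k)^2 for [KK08b] and k(n − k)^3 for [SKK08]:

```python
def costs(n, k):
    """Proportional operation counts over F_q (constants ignored); each extension-field
    operation over F_{q^s} is charged s^2 base-field operations."""
    ours  = (n - k) * k**5        # this paper: O(F_{q^k}; (n-k) k^3) * k^2
    kk08  = n**2 * (n - k)**2     # [KK08b]:    O(F_{q^{n-k}}; n^2) * (n-k)^2
    skk08 = k * (n - k)**3        # [SKK08]:    O(F_{q^{n-k}}; k(n-k)) * (n-k)^2
    return ours, kk08, skk08

for n, k in [(100, 4), (100, 10), (40, 8)]:
    ours, a, b = costs(n, k)
    print((n, k), ours, a, b, "ours smallest:", ours < min(a, b))
```

For (n, k) = (100, 4) our estimate is the smallest of the three, while for (100, 10) the [SKK08] estimate wins: under this crude cost model the advantage requires k to be small relative to n, consistent with the condition k ≪ n above.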

Conclusions
In this paper we exhibit a minimum-distance decoding algorithm for spread codes which performs better than other known decoding algorithms for RSL codes when the dimension of the codewords is small with respect to the dimension of the ambient space.
The problem of extending our decoding algorithm to the case when the dimension of the received space is bigger than the dimension of the codewords remains open. Another natural question arising from this work is finding a generalization of the decoding algorithm to a list decoding algorithm. Theorem 32 can easily be extended for this purpose. Yet finding a way to solve the list decoding problem which requires neither the computation of a gcd nor the factorization of a minor is a non-trivial task.