The Multilinear Rank and Core of Trifocal Grassmann Tensors

Closed formulas for the multilinear rank of trifocal Grassmann tensors are obtained. An alternative process to the standard HOSVD is introduced for the computation of the core of trifocal Grassmann tensors. Both of these results are obtained, under natural genericity conditions, leveraging the canonical form for these tensors, obtained by the same authors in a previous work. A gallery of explicit examples is also included.


Introduction
Tensors, either as multidimensional arrays of data in applied settings or, more classically, as representations of multilinear maps among vector spaces, have recently attracted renewed attention: see, for instance, [4], [19]. Among the many fascinating and intricate problems in the study of tensors, the calculation of any of the various established notions of their rank marries theoretical interest and practical applications. In particular, the determination of the multilinear rank of a tensor, i.e. the ranks of all its flattening matrices, is part of the process needed to arrive at a core of a tensor, see Section 2.3. Being able to successfully and efficiently compute a core of large tensors can be a crucial step for concrete applications in image processing, computer engineering, and data management.
The authors have been interested for a while in a class of tensors that arise naturally in computer vision. In the classical case of reconstruction of a three-dimensional static scene from two, three, or four two-dimensional images, these tensors are known as the fundamental matrix, the trifocal tensor, and the quadrifocal tensor, respectively, and have been studied extensively; see for example [1,2,3,6,16,18,20]. In a more general setting, these tensors, called Grassmann tensors, were introduced by Hartley and Schaffalitzky, [17], and were studied by three of the authors in several articles [7,8,9,10,11,13], as well as by two of the authors and other collaborators, [12].
In [5], the authors leveraged the possibility of obtaining a canonical form for a general trifocal Grassmann tensor to compute its rank with a closed formula.
In this work we turn our attention to the multilinear rank of trifocal Grassmann tensors and to the related problem of computing their core. Under the same natural genericity assumption used in [5] (see Assumption 2.1), and similarly leveraging the resulting canonical form, in Section 3 the multilinear rank of a trifocal Grassmann tensor is computed, again with closed formulas.
A standard approach for the computation of a core C of a tensor T is to utilize the so-called Tucker decomposition [21], often in the form of a higher order singular value decomposition (HOSVD), [14,22]. The Tucker decomposition combines the singular value decompositions T_i = U_i Σ_i W_i^* of all the flattenings of the tensor in a multilinear multiplication C = (U_1^*, . . ., U_i^*, . . .) • T, where * denotes the adjoint matrix. Leveraging once again the canonical form for a trifocal Grassmann tensor, Section 4.2 shows how to compute a core in a simpler alternative way. Properties of the canonical form of trifocal Grassmann tensors allow for a direct, immediate computation of its core. This canonical core can then appropriately be pulled back to produce a core for the original tensor. As part of this process, singular values of appropriate matrices still need to be computed, but the sizes of the matrices involved are, in general, significantly smaller than in the standard Tucker decomposition or HOSVD.
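As a concrete illustration of the procedure just described, the following short sketch (in Python with NumPy; not part of the original paper) computes a Tucker/HOSVD core of a small order-3 tensor by taking the compact SVD of each flattening and applying the adjoints in a multilinear multiplication:

```python
import numpy as np

def flatten(T, mode):
    """Mode-r flattening: an n_r x (product of the other dims) matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multilinear(Ms, T):
    """Multilinear multiplication (M1, M2, M3) . T along the three modes."""
    return np.einsum('ia,jb,kc,abc->ijk', Ms[0], Ms[1], Ms[2], T)

def hosvd_core(T, tol=1e-10):
    """HOSVD: compact SVD T_r = U_r Sigma_r W_r^* of each flattening,
    then C = (U_1^*, U_2^*, U_3^*) . T."""
    Us = []
    for r in range(3):
        U, s, _ = np.linalg.svd(flatten(T, r), full_matrices=False)
        Us.append(U[:, s > tol])          # keep rank(T_r) = r_r columns
    C = multilinear([U.conj().T for U in Us], T)
    return C, Us

# A 4 x 4 x 4 tensor of multilinear rank (2, 2, 2):
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
A, B, Cfac = (rng.standard_normal((4, 2)) for _ in range(3))
T = multilinear([A, B, Cfac], G)

C, Us = hosvd_core(T)
assert C.shape == (2, 2, 2)                # core has the F-rank sizes
assert np.allclose(multilinear(Us, C), T)  # T = (U_1, U_2, U_3) . C
```

The column-ordering convention inside each flattening is irrelevant here, since it does not affect ranks or the resulting core sizes.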
Examples of the explicit computation of the multilinear rank and the core are provided in Section 5.

Notation and Background Material
2.1. Notation. Throughout this work we assume that the underlying field is the field C of complex numbers. Given a matrix A with complex entries, A^* denotes its adjoint matrix. For any positive integer k, I_k denotes the k × k identity matrix. A vector space V of dimension r over C is sometimes referred to as an r-space, and V^* denotes its dual, i.e. V^* = Hom_C(V, C).

2.2. Preliminaries on tensors. Notation and definitions of tensors and their ranks (rank, multilinear rank or F-rank, P-rank) used in this work are relatively standard in the literature. They are all contained in [19] and briefly summarized below.
Given vector spaces V_i, i = 1, . . ., t, the rank of a tensor T ∈ V_1 ⊗ V_2 ⊗ · · · ⊗ V_t, denoted by R(T), is the minimum number of decomposable tensors needed to write T as a sum. Recall that R(T) is invariant under changes of bases in the vector spaces V_i (see, for example, [19], Section 2.4).
This work focuses on a special class of trilinear tensors. For the convenience of the reader, and to fix our notation, it is useful to recall the explicit construction of the flattening matrices of a three-dimensional tensor. Let T = [T_{i,j,k}] ∈ V_1 ⊗ V_2 ⊗ V_3, with dim V_r = n_r, and consider the corresponding matrix of size n_1 × (n_2 n_3), which is the flattening T_1, with the block structure

T_1 = [A_1 | A_2 | · · · | A_{n_3}],   A_k = [T_{i,j,k}]_{i,j},

i.e. the juxtaposition of the n_3 blocks of size n_1 × n_2 obtained by fixing the third index. In the same way, paying attention to the cyclic nature of the indices i, j, k, one can define the flattenings T_2 and T_3. The multilinear rank, or F-rank, of T is then the triple F-rk(T) = (rk(T_1), rk(T_2), rk(T_3)).
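In coordinates, the flattenings and the multilinear rank can be computed as in this short NumPy sketch (an illustration, not from the paper; the column order within a flattening does not affect its rank):

```python
import numpy as np

def flattening(T, r):
    """Mode-r flattening of an n1 x n2 x n3 array T."""
    return np.moveaxis(T, r, 0).reshape(T.shape[r], -1)

def f_rank(T):
    """Multilinear rank F-rk(T) = (rk T_1, rk T_2, rk T_3)."""
    return tuple(int(np.linalg.matrix_rank(flattening(T, r))) for r in range(3))

# T_1 as the juxtaposition of the n3 blocks [T_{i,j,k}] for fixed k:
T = np.arange(24, dtype=float).reshape(2, 3, 4)   # n1, n2, n3 = 2, 3, 4
T1_blocks = np.hstack([T[:, :, k] for k in range(4)])
assert np.linalg.matrix_rank(T1_blocks) == f_rank(T)[0]

# A decomposable tensor u x v x w has F-rank (1, 1, 1):
u, v, w = np.array([1., 2.]), np.array([1., 0., 1.]), np.array([2., 1., 1., 0.])
assert f_rank(np.einsum('i,j,k->ijk', u, v, w)) == (1, 1, 1)
```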
Remark 2.1. Let M_r ∈ GL(n_r), r = 1, 2, 3, be invertible matrices, and let T_r be the r-th flattening of a tensor T as above. Then F-rk(T) is invariant under the left action of GL(n_r) and the right action of GL(n_s n_t) for {s, t} = {1, 2, 3} \ {r}. In particular, F-rk(T) is invariant under right multiplication by M_s ⊗ M_t ∈ GL(n_s n_t).
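Remark 2.1 can be verified numerically: with the flattening convention above, the mode-1 flattening of (M_1, M_2, M_3) • T equals M_1 T_1 (M_2 ⊗ M_3)^T, so its rank matches that of T_1 (illustrative sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, n3 = 3, 4, 5
T = rng.standard_normal((n1, n2, n3))
T1 = T.reshape(n1, n2 * n3)                         # mode-1 flattening

# Random square Gaussian matrices are invertible with probability 1.
M1, M2, M3 = (rng.standard_normal((n, n)) for n in (n1, n2, n3))

S = np.einsum('ia,jb,kc,abc->ijk', M1, M2, M3, T)   # (M1, M2, M3) . T
S1 = S.reshape(n1, n2 * n3)

# Left action of GL(n1), right action of M2 (x) M3 in GL(n2 n3):
assert np.allclose(S1, M1 @ T1 @ np.kron(M2, M3).T)
assert np.linalg.matrix_rank(S1) == np.linalg.matrix_rank(T1)
```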

2.3. Core of a Tensor. Let T ∈ V_1 ⊗ V_2 ⊗ V_3, where, as before, V_1, V_2, V_3 are vector spaces of dimension, respectively, n_1, n_2, n_3, with fixed bases, and assume that F-rk(T) = (r_1, r_2, r_3). Standard procedures in applications associate a core tensor C to T. In this paper, following [21], by a core tensor of T we mean a tensor C that satisfies the following properties:
1. C ∈ Z_1 ⊗ Z_2 ⊗ Z_3, where Z_1, Z_2, Z_3 are vector spaces of dimension, respectively, r_1, r_2, r_3;
2. there exist semi-orthogonal matrices U_j, i.e. U_j^* U_j = I_{r_j}, of size n_j × r_j for j = 1, 2, 3, such that:
a. the multilinear multiplication (U_1, U_2, U_3) • C recovers T, i.e. T = (U_1, U_2, U_3) • C;
b. C = (U_1^*, U_2^*, U_3^*) • T.
We recall here the higher-order singular value decomposition (HOSVD) procedure, which is the standard approach to the computation of a core of a tensor. It generalizes to tensors the standard (compact) singular value decomposition of matrices. Let T be a tensor of order 3 with flattening matrices T_1, T_2, T_3 and F-rk(T) = (r_1, r_2, r_3). Then T_1 is an n_1 × (n_2 n_3) matrix and one can perform the (compact) SVD of T_1:

T_1 = U_1 Σ_1 W_1^*,

where Σ_1 is an r_1 × r_1 square diagonal matrix and U_1 and W_1 are, respectively, n_1 × r_1 and (n_2 n_3) × r_1 matrices such that U_1^* U_1 = W_1^* W_1 = I_{r_1}. Similarly, one can consider the SVDs of T_2 and T_3, namely T_2 = U_2 Σ_2 W_2^* and T_3 = U_3 Σ_3 W_3^*. The HOSVD procedure for the construction of a core C of T then consists of the following multilinear multiplication:

C = (U_1^*, U_2^*, U_3^*) • T.

2.4. Multiview Geometry and Grassmann Tensors. For the convenience of the reader we recall standard facts and notation in the context of projective reconstruction in computer vision. A scene is a set of N points {X_i}, i = 1, . . ., N, in P^k = P(W), where W is a vector space of dimension k + 1. A camera P is a projection from P^k onto the target space (view) P^h = P(V), where V is a vector space of dimension h + 1, h < k, from a linear center C_P = P(K), where K is a vector space of dimension k − h. Once bases have been chosen in W and V, P can be identified with an (h + 1) × (k + 1) matrix of maximal rank, defined up to a constant, for
which we use the same symbol P. With this notation, K is the right annihilator of P and, using the same notation X for the point's homogeneous coordinates in the chosen bases, P(X) denotes the image P • X of a point X in P^k.
In the context of multiple view geometry, one considers a set of multiple images of the same scene, obtained from a set of cameras P_j : P^k \ C_j → P^{h_j}, where P^k = P(W), P^{h_j} = P(V_j), and C_j = P(K_j). Two different images P_l(X) and P_m(X) of the same point X are corresponding points and, more generally, r linear subspaces S_j ⊂ P^{h_j}, j = 1, . . ., r, are said to be corresponding if there exists at least one point X ∈ P^k such that P_j(X) ∈ S_j for j = 1, . . ., r. In [17] Hartley and Schaffalitzky introduced Grassmann tensors, which encode the relations between sets of corresponding subspaces in the various views. We recall here the basic elements of their construction.
Consider, as above, a set of projections P_j : P^k \ C_j → P^{h_j}, j = 1, . . ., r, h_j ≥ 2, and a profile, i.e. a partition (α_1, α_2, . . ., α_r) of k + 1, where 1 ≤ α_j ≤ h_j for all j and Σ_j α_j = k + 1. Let {S_j}, j = 1, . . ., r, where S_j ⊂ P^{h_j}, be a set of general s_j-spaces, with s_j = h_j − α_j, and let S_j be a maximal rank (h_j + 1) × (s_j + 1) matrix whose columns are a basis for S_j. By definition, if all the S_j are corresponding subspaces there exists a point X ∈ P^k such that P_j(X) ∈ S_j for j = 1, . . ., r. In other words, there exist r vectors v_j ∈ C^{s_j+1}, j = 1, . . ., r, such that:

(2)  P_j X = S_j v_j,   j = 1, . . ., r.

The existence of a non-trivial solution {X, v_1, . . ., v_r} of system (2) implies that the system matrix has zero determinant. This determinant can be thought of as an r-linear form, i.e. a tensor, in the Plücker coordinates of the spaces S_j. This tensor is called the Grassmann tensor T with profile (α_1, . . ., α_r).
More explicitly, the entries of the Grassmann tensor are computed as maximal minors of the matrix

(3)  (P_1^T | P_2^T | · · · | P_r^T),

obtained by selecting α_j columns from P_j^T, for j = 1, . . ., r. Notice that each column of P_j^T can be thought of as an element of P((W/K_j)^*); setting V_j = ∧^{s_j+1}((W/K_j)^*), a vector space of dimension n_j = C(h_j+1, s_j+1), the tensor T lives in V_1 ⊗ · · · ⊗ V_r. Indeed, for each j = 1, . . ., r, an s_j-dimensional subspace S_j can be described as the intersection of α_j = h_j − s_j hyperplanes of P^k containing P(K_j); in other words, the columns of each P_j^T may be viewed as hyperplanes of P^k containing the center C_j. Moreover, the choice of α_j columns of P_j^T gives an element of Gr(α_j − 1, h_j) ⊂ P(∧^{α_j}(W/K_j)), which is the dual Grassmannian of Gr(s_j, h_j).
It is useful to observe that a right action of GL(k + 1) on (3), i.e. a change of coordinates in the ambient space P^k, does not alter the tensor, as all the entries are multiplied by the same nonzero constant.
As far as the effect of changes of coordinates in each of the views is concerned, we have the following remark:

Remark 2.2. The F-rk(T) is invariant under changes of coordinates in each of the views P^{h_j}. By Remark 2.1, it is enough to show that any left action of GL(h_j + 1) on P_j^T, i.e. a change of coordinates in the corresponding view, induces a linear invertible transformation on V_j. Indeed, any transformation H_j ∈ GL((W/K_j)) yields the transformation ∧^{α_j} H_j on the Plücker coordinates of Gr(α_j − 1, h_j). Since the tensor is expressed in terms of the Plücker coordinates of the Grassmannian Gr(s_j, h_j), the transformation induced on the tensor by the matrix H_j is (∧^{s_j+1} H_j^*)^{-1}, where the adjoint (transpose) is needed because of the dual coordinates and the inverse appears because of the action on the coefficients of the tensor T.
2.5. Canonical Form of Trifocal Grassmann Tensors. In [5] the authors showed that, under some generality assumptions, one can obtain a canonical form T^c for a trifocal Grassmann tensor T that leads to a direct computation of its rank. It turns out that the same canonical form allows us to successfully compute the multilinear rank of T as well, under the same genericity assumption. For the convenience of the reader we summarize here the construction of the canonical form T^c for T, referring the reader to [5] for additional details.
We consider, for each triplet of distinct integers r, s, t ∈ {1, 2, 3}, the integers i and j_{r,s} introduced in [5] (see (5) and (6)). Notice that the definition of j_{r,s} is independent of the order of the indices, i.e. j_{r,s} = j_{s,r}. Our generality assumption is the following:

Assumption 2.1. For any choice of r, s, t with {r, s, t} = {1, 2, 3}, L_t and the intersection Λ_{rs} = L_r ∩ L_s span C^{k+1}; equivalently, the linear span of each pair of centers does not intersect the third one.
This assumption implies, in particular, that for any choice of a pair r, s, the span of L_r and L_s is the whole C^{k+1}; in other words, the two centers C_r and C_s do not intersect.
Under Assumption 2.1, applying the Grassmann formula, one sees that the three numbers above have a precise geometric meaning, for which we refer to [5]. In [5] it is shown that, under Assumption 2.1, a suitable choice of bases, realized by H_j ∈ GL(h_j + 1), for j = 1, 2, 3, and K ∈ GL(k + 1), transforms the matrix (3) into the canonical matrix (8), denoted Φ^k_{h_1,h_2,h_3}. As described above, the entries of T^c are given by the maximal minors of (8), obtained by selecting α_j columns from (H_j P_j K)^T, for j = 1, 2, 3. More precisely, as in [5], let (a_1, a_2, a_3) be a partition of α_1 and let (b_1, b_2, b_3) and (c_1, c_2, c_3) be partitions of α_2 and α_3, respectively. Each entry of T^c is a maximal minor T^c_{I,J,K} of (8) built by choosing a_1 columns from the sub-block I_i, a_2 columns from I_{j_{1,2}}, and a_3 columns from I_{j_{1,3}}, appropriately completing them with zero vectors to obtain entire columns of (8), and proceeding analogously with b_1, b_2, b_3 and the second block of (8), and with c_1, c_2, c_3 and the third block of (8); here I, J, K are the multi-indices of the columns of the three blocks of (8) that were not chosen. As already recalled in [5], the entries T^c_{I,J,K} of the tensor T^c are indexed with respect to the lexicographical order of the families of multi-indices {I}, {J}, and {K}. If we consider the first flattening T^c_1 of T^c, one then sees that a row of T^c_1 corresponds to one specific choice of a_1 columns from I_i, a_2 columns from I_{j_{1,2}}, and a_3 columns from I_{j_{1,3}}, with a_1 + a_2 + a_3 = α_1. Similarly one sees the role of specific choices of b_u columns and c_u columns from the corresponding submatrices of Φ^k_{h_1,h_2,h_3} in determining a chosen row of T^c_2 and T^c_3, respectively.
Remark 2.3. In the following section we will make use of the canonical form (8) in order to determine the multilinear rank of a Grassmann tensor satisfying Assumption 2.1. As mentioned in [5], if Assumption 2.1 does not hold, we cannot obtain a canonical form depending only on the dimensions of the various spaces; indeed, even the rank of the Grassmann tensor then depends also on the geometric configuration of the three projections. This observation remains true as far as the multilinear rank is concerned; this is the reason why in this paper we always assume that Assumption 2.1 is satisfied.
As an example, consider the case of three projections from P^4 to P^2, with profile (2, 2, 1). Notice that in this case i = −1, so that Assumption 2.1 is not satisfied. One can choose projection matrices P_j, j = 1, 2, 3, depending on parameters e, f, h, k, for which the first flattening of the corresponding trifocal tensor has rank generically equal to 3; the rank drops to at most 2 if ek = fh.

The multilinear rank of trifocal Grassmann tensors
In this section, T will always be a trifocal Grassmann tensor of dimension n_1 × n_2 × n_3, with profile (α_1, α_2, α_3), satisfying Assumption 2.1, and T^c will be its canonical form introduced in Section 2.5. Leveraging properties of T^c, we will obtain results on F-rk(T).

Lemma 3.1. With notation as above, F-rk(T^c) = (n_1 − v_1, n_2 − v_2, n_3 − v_3), where v_r is the number of zero rows in the flattening matrix T^c_r.

Proof. In the proof of [5, Theorem 5.2] the authors showed that, with our assumptions on T, if T^c_{î,ĵ,k̂} ≠ 0 then T^c_{î,ĵ,k} = 0 for all k ≠ k̂. Considering the cyclic role of the indices i, j, k, the above observation also says that T^c_{i,ĵ,k̂} = 0 for all i ≠ î, and T^c_{î,j,k̂} = 0 for all j ≠ ĵ. Assume T^c_{î,ĵ,k̂} ≠ 0; then the above observation can be visualized in T^c_1.

  
While T^c_1 can have more than one nonzero element on the same row, it cannot contain two nonzero elements on the same column. Hence any two rows containing nonzero elements are linearly independent. Therefore, if v_1 is the number of zero rows of T^c_1, then rk(T^c_1) = n_1 − v_1. A similar argument can be carried out for T^c_2 and T^c_3.

Lemma 3.2. Let T^c be the canonical form of a trifocal Grassmann tensor T of dimension n_1 × n_2 × n_3, with profile (α_1, α_2, α_3), satisfying Assumption 2.1. Let (r, s, t) be any permutation of {1, 2, 3} and let j_{r,s} be defined as in (6). Then the flattening matrix T^c_r contains zero rows if and only if

(9)  j_{r,s} − α_s − 1 ≥ max(0, α_r − i − j_{r,t}) for some choice of s, t with {s, t} = {1, 2, 3} \ {r}.

Moreover, conditions (9) are mutually exclusive.

Proof. For simplicity, let us fix (r, s, t) = (1, 2, 3) and conduct the proof for T^c_1, noticing that the proof is identical for T^c_2 and T^c_3, with a cyclic adjustment of the roles of the three indices and of the parameters {a_j, b_j, c_j} introduced in Section 2.5. Recall that T^c_1 is the first flattening matrix of T^c = [T^c_{ℓ,j,k}], of dimension n_1 × (n_2 n_3), obtained by juxtaposing n_3 blocks of dimension n_1 × n_2, where ℓ runs over the rows of T^c_1, j runs over the columns of each block, and k runs over the blocks. Let i be as defined in (5), and let {a_j, b_j, c_j}, j = 1, 2, 3, be as in Section 2.5. As noted in Section 2.5, a row of T^c_1 corresponds to one specific choice of a_1 columns from the first block of (8), a_2 columns from its second block, and a_3 columns from its third block, with a_ℓ ≥ 0, a_1 + a_2 + a_3 = α_1, a_1 ≤ i, a_2 ≤ j_{1,2}, and a_3 ≤ j_{1,3}. Assume first that j_{1,3} − α_3 − 1 ≥ max(0, α_1 − i − j_{1,2}), so that j_{1,3} ≥ α_3 + 1, and let a_1, a_2, a_3 be such that 0 ≤ a_3 ≤ j_{1,3} − α_3 − 1. Notice that the assumption j_{1,3} − α_3 − 1 ≥ max(0, α_1 − i − j_{1,2}) implies that at least one such triplet exists. Recalling the canonical structure of the matrix (8), it follows from (10) that all elements of the row of T^c_1 corresponding to a choice of a_1, a_2, a_3 as above are zero, as all the maximal minors corresponding to elements of this row
are now forced to contain at least one duplicate column coming from I_i, I_{j_{1,2}}, or I_{j_{2,3}}. The proof can be carried out with the obvious adjustments if condition (9) holds for the other choice of s, t. Assume now that neither of the conditions in (9) holds. We will show that every row in T^c_1 contains at least one non-zero element. Let us fix a row of T^c_1 by fixing non-negative values for (a_1, a_2, a_3) with Σ_ℓ a_ℓ = α_1, a_1 ≤ i, a_2 ≤ j_{1,2}, and a_3 ≤ j_{1,3}. As observed in [5], this row contains a non-zero element if the system of linear equations (13) has at least one set of integer solutions in the unknowns (b_1, b_2, b_3, c_1, c_2, c_3) satisfying the conditions (14). Our assumptions on (a_1, a_2, a_3) imply that the second and third equations of (13) are already solved, satisfying the relevant conditions in (14). Therefore it remains to show that it is possible to choose 0 ≤ c_3 ≤ j_{2,3} such that b_1 = α_2 − j_{1,2} + a_2 − j_{2,3} + c_3 and c_1 = α_3 − j_{1,3} + a_3 − c_3 satisfy the relevant conditions in (14), i.e. (15). Recalling that i = Σ_ℓ α_ℓ − j_{1,2} − j_{1,3} − j_{2,3}, inequalities (15) give:
Case 4. In this case we need to further consider the possible relative sign of LB and UB − j_{2,3}, generating four possible sub-cases as in the table below. In each case one can choose c_3 as indicated in the fourth column.
Arguments similar to the ones used above in previous cases show that (18) are satisfied.
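The rank count used above (Lemma 3.1: rk(T^c_1) = n_1 − v_1) can be checked on a toy matrix sharing the canonical column structure; the matrix below is illustrative, not an actual canonical flattening:

```python
import numpy as np

# Every column has at most one nonzero entry, as in the proof of
# Lemma 3.1; rows may well contain several nonzero entries.
Tc1 = np.array([
    [2., 0., 0., 5., 0.],
    [0., 0., 0., 0., 0.],   # zero row
    [0., 3., 0., 0., 1.],
    [0., 0., 0., 0., 0.],   # zero row
])
v1 = int(np.sum(~Tc1.any(axis=1)))              # number of zero rows
assert v1 == 2
# Nonzero rows have pairwise disjoint supports, hence are independent:
assert np.linalg.matrix_rank(Tc1) == Tc1.shape[0] - v1
```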
Claim 3.1. With the notation of this section, let i and j_{u,v} for u, v ∈ {1, 2, 3}, u ≠ v, be defined as in (5) and (6), and let T^c_1 be the first flattening matrix of T^c. Assume that rk(T^c_1) < n_1, i.e. rk(T^c_1) is not maximum, so that, by Lemma 3.2, j_{1,s} − α_s − 1 ≥ max(0, α_1 − i − j_{1,t}) for some s, t ∈ {2, 3}, t ≠ s. Let A be the set of triplets of non-negative integers (a_1, a_2, a_3) with a_1 + a_2 + a_3 = α_1, a_1 ≤ i, a_2 ≤ j_{1,2}, a_3 ≤ j_{1,3}, and a_s ≤ j_{1,s} − α_s − 1; by Lemma 3.2 and its proof, the zero rows of T^c_1 correspond exactly to the triplets in A. The cardinality |A| of the set A can be computed as follows. For each (a_1, a_2, a_3) ∈ A set m_1 = min(i, α_1 − a_s) and m_2 = min(j_{1,t}, α_1 − a_s); the cardinality |A| is then obtained by summing, over the admissible values of a_s, the number of ways of completing a_s to a triplet in A, which is determined by m_1 and m_2. Similarly, one can define sets B and C, respectively, for the flattenings T^c_2 and T^c_3, if their ranks are not maximum. In those cases b_u and c_u play the role of a_u and the j_{u,v} are adjusted accordingly, taking into consideration their role in (8) and in Lemma 3.2.
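A minimal enumeration sketch of the set A, checked against the 35 × 10 × 10 example of Section 5 (profile (3, 3, 2), i = 1, j_{1,2} = j_{1,3} = 3, condition (9) holding for (r, s, t) = (1, 3, 2)); the description of A follows the reconstruction above:

```python
from itertools import product
from math import comb

alpha1, alpha3 = 3, 2
i, j12, j13 = 1, 3, 3

# Condition (9) for (r, s, t) = (1, 3, 2):
assert j13 - alpha3 - 1 >= max(0, alpha1 - i - j12)

# A: triplets (a1, a2, a3) indexing the zero rows of T^c_1.
A = [(a1, a2, a3)
     for a1, a2, a3 in product(range(i + 1), range(j12 + 1), range(j13 + 1))
     if a1 + a2 + a3 == alpha1 and a3 <= j13 - alpha3 - 1]
assert A == [(0, 3, 0), (1, 2, 0)]

# Each triplet accounts for C(i,a1) C(j12,a2) C(j13,a3) zero rows (Remark 3.3):
v1 = sum(comb(i, a1) * comb(j12, a2) * comb(j13, a3) for (a1, a2, a3) in A)
n1 = comb(7, 3)                        # 35 rows of T^c_1
assert (v1, n1 - v1) == (4, 31)        # first entry of F-rk(T) = (31, 10, 10)
```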
Theorem 3.1. Let T be a trifocal Grassmann tensor of dimension n_1 × n_2 × n_3, with profile (α_1, α_2, α_3), satisfying Assumption 2.1. Let i and j_{u,v} for u, v ∈ {1, 2, 3}, u ≠ v, be defined as in (5) and (6). Then the rank of the first flattening T_1 is

rk(T_1) = n_1 − Σ_{(a_1,a_2,a_3) ∈ A} (i choose a_1)(j_{1,2} choose a_2)(j_{1,3} choose a_3),

where A is the set introduced in Claim 3.1 and the sum is zero when A is empty.

Proof. Let T^c be the canonical form of T and T^c_1 be its first flattening matrix. Remark 2.2 shows that rk(T^c_1) = rk(T_1); therefore from now on we will focus on rk(T^c_1). From Lemma 3.1, the rank of the flattening matrix T^c_1 is n_1 − v_1, where v_1 is the number of its zero rows. Assume first that condition (9) holds for r = 1, and let A be the corresponding set defined in Claim 3.1, which, under our last assumption, is non-empty. As noted above, choosing a row of T^c_1 is equivalent to choosing a_1 columns from I_i, a_2 columns from I_{j_{1,2}}, and a_3 columns from I_{j_{1,3}}, appropriately completing them with zero vectors to obtain entire columns of (8), where a_ℓ ≥ 0, a_1 + a_2 + a_3 = α_1, a_1 ≤ i, a_2 ≤ j_{1,2}, and a_3 ≤ j_{1,3}. From Lemma 3.2 and Claim 3.1 it follows that the zero rows in T^c_1 are exactly the rows that correspond to triplets of non-negative integers (a_1, a_2, a_3) ∈ A, hence v_1 is as claimed. If condition (9) does not hold for r = 1, then A is the empty set, thus v_1 = 0 and rk(T^c_1) is maximum, i.e. rk(T_1) = n_1.

Remark 3.1. While Theorem 3.1 gives the result for the first flattening, one can easily obtain the ranks of the second and third flattening matrices by simply switching the order of the views and proceeding accordingly.
The following observation shows that at least one of the three flattenings of T always has maximal rank.

Proof. Suppose that rk(T_r) < n_r for all r = 1, 2, 3. From Lemma 3.2 it follows that three of the six conditions

(21)  (r, s, t) : j_{r,s} − α_s − 1 ≥ max(0, α_r − i − j_{r,t}), where r, s, t ∈ {1, 2, 3},

must hold, one with r = 1, one with r = 2, and one with r = 3. First observe that, for fixed values of r, s, t, if (r, s, t) holds then neither (s, t, r) nor (t, r, s) can hold. Indeed, assume (r, s, t) holds and α_r − i − j_{r,t} = j_{r,s} − α_s + j_{s,t} − α_t < 0. Then max(0, α_r − i − j_{r,t}) = 0 and j_{r,s} − α_s − 1 ≥ 0, which in turn gives j_{s,t} − α_t < 0, and thus (s, t, r) cannot hold. Assume instead that (r, s, t) holds and α_r − i − j_{r,t} = j_{r,s} − α_s + j_{s,t} − α_t ≥ 0. Then (r, s, t) gives j_{s,t} − α_t ≤ −1, and thus (s, t, r) is not possible in this case either. Further observe that (r, s, t) implies j_{r,s} − α_s ≥ 1, and if (t, r, s) held it would give j_{t,r} − α_r − 1 ≥ j_{t,r} − α_r + j_{r,s} − α_s, and thus j_{r,s} − α_s ≤ −1, which is impossible. Hence if (r, s, t) holds, neither (s, t, r) nor (t, r, s) can hold. Now assume (r, s, t) is one of the conditions that hold. From the above observation it follows that (s, r, t) must hold. But the same observation then implies that (t, s, r) must hold, which is incompatible with (s, r, t), again from the above observation.
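The combinatorial heart of this argument, namely that conditions (21) cannot hold for all three values of r at once, can be brute-force checked over a range of integer parameters (using the relation i = Σ_ℓ α_ℓ − j_{1,2} − j_{1,3} − j_{2,3} recalled above; an illustrative check, not a proof):

```python
from itertools import product, permutations

def cond(alpha, j, i, r, s, t):
    """Condition (21): j_{r,s} - alpha_s - 1 >= max(0, alpha_r - i - j_{r,t})."""
    return (j[frozenset((r, s))] - alpha[s] - 1
            >= max(0, alpha[r] - i - j[frozenset((r, t))]))

for a1, a2, a3, j12, j13, j23 in product(range(1, 5), repeat=6):
    alpha = {1: a1, 2: a2, 3: a3}
    j = {frozenset((1, 2)): j12, frozenset((1, 3)): j13, frozenset((2, 3)): j23}
    i = a1 + a2 + a3 - j12 - j13 - j23
    held = {r for (r, s, t) in permutations((1, 2, 3)) if cond(alpha, j, i, r, s, t)}
    # Some flattening always keeps maximal rank:
    assert held != {1, 2, 3}
```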
Remark 3.2. Let T be a trifocal Grassmann tensor of dimension n_1 × n_2 × n_3, with profile (α_1, α_2, α_3), satisfying Assumption 2.1. As its canonical form T^c is obtained via successive invertible transformations in the ambient space and in the views, we have rk(T) = rk(T^c).
Remark 3.3. Proposition 3.1 and Claim 3.1 show how to count the number of zero rows in a flattening matrix T^c_r of a tensor T^c in canonical form. Here we describe a procedure that identifies exactly which rows of the flattening matrix vanish. For simplicity we will set r = 1, as similar arguments work for r = 2, 3. Let T^c_1 be a flattening matrix of a tensor T^c as above, and let a = (a_1, a_2, a_3) ∈ A, where A is as in Claim 3.1. Recall that the rows of T^c_1 are indexed by the multi-indices I with respect to the lexicographic order.
First, choose any a_1 columns among the first i columns of the first block of (8), a_2 columns from the next j_{1,2} columns, and a_3 columns from the last j_{1,3} columns. Each such choice produces entries T^c_{I_a,J,K} of the tensor T^c, where I_a is the multi-index containing the indices of the i − a_1 + j_{1,2} − a_2 + j_{1,3} − a_3 = s_1 + 1 non-chosen columns of the canonical form, and J, K are any multi-indices of length, respectively, s_2 + 1 and s_3 + 1, as described above. Therefore, for any triplet a = (a_1, a_2, a_3) ∈ A, the (i choose a_1)(j_{1,2} choose a_2)(j_{1,3} choose a_3) rows with indices I_a of the flattening T^c_1 are zero.

The core of trifocal Grassmann tensors

4.1. Core of Grassmann Tensors in canonical form. Let T be a trifocal Grassmann tensor and denote by T^c its canonical form. Results from the previous section allow one to directly find the core of T^c. This approach is similar to the HOSVD (see Section 2.3), but the canonical form of a tensor makes it easier to compute the matrices U_j involved in the process.
In Section 3 we computed the multilinear rank (r_1, r_2, r_3) of T^c. As seen before, it is given by r_j = n_j − v_j, where v_j is the number of zero rows of T^c_j. Moreover, Remark 3.3 gives an effective method to list the zero rows r_{h_1}, . . ., r_{h_{v_j}} of T^c_j. Denote by r_{k_1}, . . ., r_{k_{r_j}}, with r_{k_1} < r_{k_2} < · · · < r_{k_{r_j}}, the non-zero rows of T^c_j. As remarked in the proof of Lemma 3.1, the columns of T^c_j are either zero or elements of the canonical basis {e_1, . . ., e_{n_j}} of C^{n_j}; in particular, among the columns of T^c_j we can find all the vectors e_{k_t} for t = 1, . . ., r_j. Hence it is straightforward to find an orthonormal basis for the image of each flattening, and therefore the matrices U_j introduced in Section 2.3: indeed, U_j is the matrix whose columns are the vectors e_{k_1}, . . ., e_{k_{r_j}}. Notice that deleting the zero rows r_{h_1}, . . ., r_{h_{v_j}} from U_j we get the identity matrix. What is more, the multiplication of U_j^* by T^c_j deletes the zero rows of T^c_j. As a consequence, the core tensor C^c of T^c is obtained from T^c by deleting all zero faces in each of the three directions.

4.2. Core of Grassmann Tensors in the general case. Let T be a trifocal Grassmann tensor and denote by T^c its canonical form. Recall that T^c can be obtained from T via multilinear multiplication, i.e., T^c = (V_1, V_2, V_3) • T, where the V_j are invertible matrices obtained from the matrices H_j and K of Section 2.5; more precisely, V_j = (∧^{s_j+1} H_j^{-1})^* for j = 1, 2, 3. As shown in the previous subsection, our construction of the canonical tensor allows us to introduce suitable matrices U_1, U_2, U_3 such that the core C^c of T^c can be obtained as C^c = (U_1^*, U_2^*, U_3^*) • T^c. In order to find a core C of T, we proceed as follows. First, we define an invertible matrix B_j of size r_j × r_j for j = 1, 2, 3 as B_j = E_j D_j^{-1}, where D_j is the diagonal matrix of the singular values of V_j^{-1} U_j and E_j is the matrix whose columns are the eigenvectors of (V_j^{-1} U_j)^* (V_j^{-1} U_j). Second, we define the tensor C as (B_1^{-1}, B_2^{-1}, B_3^{-1}) • C^c and introduce the matrices S_j = V_j^{-1} U_j B_j for j = 1, 2, 3, which are semi-orthogonal. Third, we verify that C is a core of T, i.e., T = (S_1, S_2, S_3) • C, because C = (S_1^*, S_2^*, S_3^*) • T and the corresponding diagram, relating T, T^c, C^c, and C, commutes. The matrices involved are computed in the following concrete example.
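Before turning to the example, the deletion step of Section 4.1 can be sketched numerically on a toy tensor with zero slices; the tensor below is illustrative and not an actual Grassmann tensor (for a canonical trifocal Grassmann tensor, Lemma 3.1 guarantees that the nonzero slices in each direction match the flattening ranks):

```python
import numpy as np

def canonical_core(Tc):
    """Core by deletion: drop the zero slices in each mode and record the
    selection matrices U_j with columns e_{k_1}, ..., e_{k_{r_j}}."""
    C, Us = Tc, []
    for mode in range(3):
        keep = [r for r in range(Tc.shape[mode]) if np.moveaxis(Tc, mode, 0)[r].any()]
        Us.append(np.eye(Tc.shape[mode])[:, keep])
        C = np.take(C, keep, axis=mode)   # multiplying by U_j^* removes zero rows
    return C, Us

rng = np.random.default_rng(2)
Tc = rng.standard_normal((3, 2, 4))
Tc[1, :, :] = 0.0                         # one zero face in the first direction
Tc[:, :, 2] = 0.0                         # one zero face in the third direction
Cc, (U1, U2, U3) = canonical_core(Tc)

assert Cc.shape == (2, 2, 3)
assert np.allclose(U1.T @ U1, np.eye(2))  # the U_j are semi-orthogonal
# T^c = (U_1, U_2, U_3) . C^c:
assert np.allclose(np.einsum('ia,jb,kc,abc->ijk', U1, U2, U3, Cc), Tc)
```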

Examples

In this example we consider three projections from P^7 to, respectively, P^6, P^4, and P^4, with profile (3, 3, 2). T is a tensor of dimension 35 × 10 × 10 and the values of the quantities in (5) and (6) are as in the table above. Notice that Assumption 2.1 is satisfied, as i = 1, and hence the construction of T^c can be performed. The only values of r, s that satisfy one of the conditions (9) are r = 1, s = 3, as j_{1,3} − (α_3 + 1) = 0. Hence rk(T^c_2) and rk(T^c_3) are both maximal, while rk(T^c_1) drops. For (r, s, t) = (1, 3, 2), Claim 3.1 shows that A = {(0, 3, 0), (1, 2, 0)}. According to Proposition 3.1, the contribution to the rank deficiency given by the first triplet in A is 1 and the one given by the second triplet is 3. More specifically, the entries of the first row of T_1 are given by T_{1234,J,K}, where J and K are multi-indices as in Section 2.5.
In correspondence with the first triplet (0, 3, 0), as a_2 = j_{1,2} = 3, we are forced to choose the second, third, and fourth columns of the first block of (8) to compute the entries of the tensor. These entries T^c_{I,J,K} correspond to the multi-index I = 1567, i.e. the 20th row of T^c_1. On the other hand, the triplet (1, 2, 0) gives three possible row multi-indices. As i = a_1 = 1, we are forced to choose the first column of the first block of (8). As j_{1,2} = 3 and a_2 = 2, we have three possible choices, {(2, 3), (2, 4), (3, 4)}, for two out of the next three columns of the same block. Hence we obtain, respectively, the three rows T^c_{4567,J,K}, T^c_{3567,J,K}, T^c_{2567,J,K}, i.e. rows 35, 34, and 30.
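These row numbers can be double-checked by listing the length-4 multi-indices in lexicographic order (35 = C(7,4) rows of T^c_1); a small sketch:

```python
from itertools import combinations
from math import comb

rows = list(combinations(range(1, 8), 4))      # row multi-indices, lex order
assert len(rows) == comb(7, 4)                 # 35 rows

def row_number(I):
    """1-based position of the multi-index I in lexicographic order."""
    return rows.index(tuple(sorted(I))) + 1

assert row_number((1, 5, 6, 7)) == 20          # triplet (0, 3, 0)
# Triplet (1, 2, 0): non-chosen columns 4567, 3567, 2567
assert [row_number(I) for I in [(4, 5, 6, 7), (3, 5, 6, 7), (2, 5, 6, 7)]] == [35, 34, 30]
```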
Example 5.7. In this last case we have an example of a tensor with a relatively small core. We consider three projections from P^9 to P^3, P^8, and P^8, with profile (3, 3, 4). T is a tensor of dimension 4 × 84 × 126 and i = 1. Proceeding as in the previous examples in this section, one gets F-rk(T) = (4, 65, 75).


Going back to the 35 × 10 × 10 example above, rk(T^c_1) therefore drops by 4, and F-rk(T) = (31, 10, 10). Following Remark 3.3, one can identify the 4 zero rows of T^c_1. Notice that the first block of (8) is a submatrix of dimension 8 × 7, while α_1 = 3; hence the multi-indices of the sets of columns that are not chosen in the calculation of each maximal minor, i.e. the row multi-indices of T^c_1, have length 4 and run, in lexicographic order, from 1234 to 4567, each labelling the corresponding row index from 1 to 35.