Subspace structure of some operator and Banach spaces

We construct a family of separable Hilbertian operator spaces such that the relation of complete isomorphism between the subspaces of each member of this family is complete $K_\sigma$. We also investigate some interesting properties of completely unconditional bases of the spaces from this family. In the Banach space setting, we construct a space for which the relation of isometry of subspaces is Borel bireducible to the equality of real numbers.


Introduction and main results
Recently, there has been much progress in describing the complexity of various relations between subspaces of a given separable Banach space. The reader is referred to [4,5,6] for the known results on the relations of isomorphism, biembeddability, and more. Isometry and local equivalence (finite representability) are handled in [12] and [7], respectively.
In this paper, we consider an operator space analogue of this problem (see Section 2 for a brief introduction to operator spaces). The Effros-Borel structure on the set S(Z) of infinite-dimensional subspaces of a separable operator space Z is defined in the same way as for Banach spaces; see e.g. Chapter 12 of [8], or [6]. Reasoning as in Section 2 of [6], we show that the relations of complete isomorphism, complete biembeddability, and so on, defined on S(Z), are analytic equivalence relations. How "simple" can these relations get, without being trivial? The main result of this paper provides a partial answer to this question.

Theorem 1.1. There exists a family F of operator spaces such that, for any operator space X ∈ F, the following is true: (1) X is isometric to ℓ_2.
(2) The relations of complete isomorphism and complete biembeddability on S(X) are Borel bireducible to the complete K_σ relation. The family F contains a continuum of operator spaces, not completely isomorphic to each other.
It is possible to prove that the relation of complete isometry on S(X) (X ∈ F) is Borel bireducible to the equality on R. The proof proceeds along the same lines as Theorem 6.1, but is exceedingly technical, and not very illuminating. We therefore omit it, and present a related Theorem 1.4 instead.
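For the reader's convenience, we recall the standard descriptive-set-theoretic terminology used throughout (this is the usual definition, not specific to our setting). For analytic equivalence relations E and F on Polish spaces P and Q:

```latex
\[
  E \le_B F \iff \text{there is a Borel map } f\colon P \to Q
  \text{ such that } x \mathrel{E} y \Longleftrightarrow f(x) \mathrel{F} f(y),
\]
```

and E and F are Borel bireducible if both E ≤_B F and F ≤_B E hold.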
Each space from F has a canonical basis (defined in Section 4), which is 1-completely unconditional. It turns out that these bases have interesting properties of their own.

Theorem 1.2. Any subspace Y of an operator space X from the family F has a 1-completely unconditional canonical basis. Any C-completely unconditional basis in such a Y is φ(C)-equivalent (up to a permutation) to the canonical basis of Y, with φ(C) polynomial in C.
For certain spaces X, a stronger result holds.

Theorem 1.3. For any a > 1 there exists an operator space X, belonging to the family F, such that the canonical basis in any subspace of X is a-equivalent to a subsequence of the canonical basis of X. Consequently, every C-completely unconditional basic sequence in X is φ(C)-equivalent (up to a permutation) to a subsequence of the canonical basis, with φ(C) polynomial in C.
As part of our motivation lies in the field of Banach spaces, we mention a few classical results related to the three theorems above.
No "commutative" counterpart of Theorem 1.1 has yet been obtained: for a Banach space E, very little is known about the "upper" estimates on the complexity of the isomorphism relation on S(E). It is possible that the relation of isomorphism on S(E) is complete analytic whenever E is a separable Banach space, not isomorphic to ℓ 2 . Theorem 1.2 shows (among other things) that every subspace of X has an unconditional basis. One can raise a related question: suppose every infinite dimensional subspace of a separable Banach space E has an unconditional basis. Must E be isomorphic to ℓ 2 ? The answer to this question appears to be unknown. We mention a few partial results. First, any subspace of E has the Approximation Property, therefore, by [11,Theorem 1.g.6], E has to have type 2 − ε and cotype 2 + ε for any ε > 0. Furthermore, by [16,Theorem 10.13], E has weak cotype 2. By [9], E is ℓ 2 -saturated. Finally, if E = ℓ 2 (F ) for some Banach space F , then E is isomorphic to a Hilbert space [10].
Searching for a "commutative" analogues of Theorem 1.3, we pose the following question: suppose a Banach space E has an unconditional basis (e i ), such that every subspace of E has an unconditional basis, equivalent to a subsequence of (e i ). Must E be isomorphic to ℓ 2 ?
The rest of the paper is organized as follows: Section 2 provides a brief introduction to operator spaces and c.b. maps. In Section 3 we construct, for each contraction A ∈ B(ℓ_2), a subspace X(A) of R ⊕ C, and study the properties of the spaces X(A). Further investigation is carried out in Section 4, where we concentrate on the case when A is compact. Unconditional bases in the spaces X(A) are described in Section 5. In Section 6 we show that, for the "right" compact contractions A, the relations of complete isomorphism and complete biembeddability on S(X(A)) are Borel bireducible to the complete K_σ relation. Having gathered all the preliminary results, we prove Theorems 1.1, 1.2, and 1.3 in Section 7. Finally, in Section 8 we prove Theorem 1.4.

Introduction to operator spaces
As our paper deals primarily with operator spaces, we are compelled to recall some basic definitions and facts about the topic. The interested reader is referred to [3,15,17] for more information. A (concrete) operator space X is, for us, just a closed subspace of B(H) (H a Hilbert space). If X and Y are operator spaces, embedded into B(H) and B(K) respectively, we define the minimal tensor product of X and Y (denoted simply by X ⊗ Y) as the closure of the algebraic tensor product X ⊙ Y in B(H ⊗_2 K). It is common to denote M_n ⊗ X by M_n(X). Here, M_n = B(ℓ_2^n) is the space of n × n matrices. We view M_n(X) as the space of X-valued n × n matrices, with the norm ‖·‖_n. It is easy to see that the sequence of matricial norms ‖·‖_n satisfies two properties (Ruan's axioms): (i) for any v ∈ M_n(X), α ∈ M_{n,k}, and β ∈ M_{k,n}, ‖(β ⊗ I_X)v(α ⊗ I_X)‖_k ≤ ‖β‖ ‖v‖_n ‖α‖, and (ii) for any v ∈ M_n(X) and w ∈ M_k(X), ‖v ⊕ w‖_{n+k} = max{‖v‖_n, ‖w‖_k}. It turns out that the converse is also true. Suppose X is a matricially normed space -- that is, a Banach space for which the spaces M_n(X) of n × n X-valued matrices are equipped with norms ‖·‖_n satisfying (i) and (ii) above. Then the norms ‖·‖_n arise from an isometric embedding of X into B(H), for some Hilbert space H. The spaces X as above are sometimes called abstract operator spaces.
It is easy to see that a subspace of an operator space is, again, an operator space (we use the notation ֒→ to denote one operator space being a subspace of another). Moreover, a quotient and the dual of an operator space can be again equipped with an operator space structure. Once again, the reader is referred to [3,15,17] for details.
A map u from an operator space X to an operator space Y is called completely bounded (c.b. for short) if its c.b. norm ‖u‖_cb = sup_n ‖id_{M_n} ⊗ u : M_n(X) → M_n(Y)‖ is finite. The set of all c.b. maps from X to Y is denoted by CB(X, Y). Clearly, ‖u‖ ≤ ‖u‖_cb, and CB(X, Y) ⊂ B(X, Y) (the inclusion may be strict). The operator spaces X and Y are called completely isomorphic if there exists a c.b. bijection u : X → Y whose inverse is also c.b. We shall use the notation ≃ for complete isomorphism. Moreover, we say that X and Y are C-completely isomorphic if u can be chosen so that ‖u‖_cb ‖u^{-1}‖_cb ≤ C. Complete embeddability and biembeddability are defined in the obvious way.
Suppose the operator spaces X and Y are embedded into B(H) and B(K), respectively. We define the direct sum of operator spaces X and Y (denoted by X ⊕_∞ Y, or simply X ⊕ Y) by viewing X ⊕ Y as embedded into B(H ⊕_2 K). Note that any u ∈ M_n(X ⊕ Y) has a unique expansion as v ⊕ w, with v ∈ M_n(X) and w ∈ M_n(Y). Then ‖u‖ = max{‖v‖, ‖w‖}. Throughout this paper, we work with the row and column spaces. Recall that a Hilbert space H can be equipped with the row and column operator space structures, denoted by H_R and H_C, respectively. The space H_R is defined as the linear space of operators ξ ⊗ ξ_0, where ξ_0 is a fixed unit vector, and ξ runs over H. Here, for ξ ∈ H and η ∈ K, ξ ⊗ η denotes the operator in B(H, K) defined by (ξ ⊗ η)ζ = ⟨ζ, ξ⟩η. Similarly, the space H_C is defined as the space of operators ξ_0 ⊗ ξ (ξ ∈ H).
It is easy to see that, if K is a subspace of H, then K_C (K_R) is a subspace of H_C (resp. H_R). For simplicity of notation, we write R and C instead of (ℓ_2)_R and (ℓ_2)_C, respectively. One can use matrix units to describe these spaces. We denote by E_ij ∈ B(ℓ_2) the infinite matrix with 1 at the intersection of the i-th row and the j-th column, and zeroes elsewhere. Then R (C) is the closed linear span of (E_1j)_{j=1}^∞ (respectively, (E_i1)_{i=1}^∞). Below we list a few useful properties of row and column spaces. Here, H and K are Hilbert spaces.
(1) H R and H C are isometric to H (as Banach spaces).
(3) If (ξ_i) is an orthonormal system in H, then, for any finite sequence (a_i) of elements of M_n, R is a complete contraction, and ‖id^{-1}‖_cb ≤ √n. The same is also true for column spaces. (5) For any u ∈ CB(H_R, K_C) or u ∈ CB(H_C, K_R), ‖u‖_2 = ‖u‖_cb (here, ‖·‖_2 is the Hilbert-Schmidt norm). (6) Duality: H_R* = H_C, and H_C* = H_R. (7) For any operator space X and any u ∈ B(X, H_C) (u ∈ B(X, H_R)), ‖u‖_cb = ‖I_C ⊗ u‖_{B(C⊗X, C⊗H_C)} (resp. ‖u‖_cb = ‖I_R ⊗ u‖_{B(R⊗X, R⊗H_R)}). This result follows from the proof of Smith's lemma -- see e.g. Proposition 2.2.2 of [3]. (8) If H is separable infinite dimensional, then H_R (H_C) is completely isometric to R (resp. C).
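The norm identity presumably intended in item (3) is the standard one for row and column spaces (we record it here, as it is used repeatedly below): for an orthonormal system (ξ_i) in H and a finite sequence (a_i) ⊂ M_n,

```latex
\[
  \Bigl\| \sum_i a_i \otimes \xi_i \Bigr\|_{M_n(H_R)}
    = \Bigl\| \sum_i a_i a_i^{*} \Bigr\|^{1/2},
  \qquad
  \Bigl\| \sum_i a_i \otimes \xi_i \Bigr\|_{M_n(H_C)}
    = \Bigl\| \sum_i a_i^{*} a_i \Bigr\|^{1/2}.
\]
```

In particular, taking scalar coefficients a_i = α_i recovers item (1): ‖Σ_i α_i ξ_i‖ = (Σ_i |α_i|^2)^{1/2} in both H_R and H_C.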
A sequence (x_i)_{i∈I} in an operator space X is called normalized if ‖x_i‖ = 1 for every i ∈ I. (x_i) is said to be a c-completely unconditional basic sequence (c ≥ 1) if, for any finite sequence of matrices (a_i), and any sequence of scalars λ_i ∈ {λ ∈ C : |λ| ≤ 1}, we have ‖Σ_i λ_i a_i ⊗ x_i‖ ≤ c ‖Σ_i a_i ⊗ x_i‖. It is easy to see that any such sequence (x_i) is linearly independent. A c-completely unconditional basic sequence (x_i) ⊂ X is called a c-completely unconditional basis in X if X = span[x_i : i ∈ I] (here and below, span[F] refers to the closed linear span of the family F).
A convenient example of a normalized 1-completely unconditional basis is provided by an orthonormal basis in R or C. For future reference, we observe that any completely unconditional basis is "similar to" an orthogonal one. More precisely, suppose X is an operator space which is isometric to a Hilbert space (this is the setting we are concerned with in this paper), and suppose (x_i) is a normalized c-completely unconditional basis in X. Then (x_i) is "similar to" an orthonormal basis: for any finite sequence of scalars (α_i), c^{-2} Σ_i |α_i|^2 ≤ ‖Σ_i α_i x_i‖^2 ≤ c^2 Σ_i |α_i|^2. Thus, there exists U : X → ℓ_2(I) s.t. (Ux_i)_{i∈I} is an orthonormal basis, and ‖U‖, ‖U^{-1}‖ ≤ c. Families (x_i)_{i∈I} and (y_i)_{i∈I} in operator spaces X and Y, respectively, are called c-equivalent if there exists a map u : span[x_i : i ∈ I] → span[y_i : i ∈ I] such that ux_i = y_i, and ‖u‖_cb ‖u^{-1}‖_cb ≤ c. Two sequences are equivalent if they are c-equivalent, for some c. Furthermore, (x_i)_{i∈I} is c-equivalent to a subfamily of (y_j)_{j∈J} if there exists a subset I′ ⊂ J such that |I| = |I′|, and (x_i)_{i∈I} is c-equivalent to (y_j)_{j∈I′}. Finally, (x_i)_{i∈I} is c-equivalent to (y_i)_{i∈I} up to a permutation if there exists a bijection π : I → I such that (x_i)_{i∈I} is c-equivalent to (y_{π(i)})_{i∈I}.
A sequence (x i ) i∈I is called completely unconditional if it is c-completely unconditional, for some c. A completely unconditional basis, equivalence of sequences etc. are defined by dropping the c, in a similar manner.

Subspaces of R ⊕ C: basic facts
Suppose H and K are separable Hilbert spaces, and A ∈ B(H, K) is a contraction. Denote by X R (H, K, A) the subspace of H R ⊕ K C , spanned by (e, Ae) (e ∈ H). If there is no confusion as to the spaces H and K, we simply write X R (A). X C (A) is defined in the same way. Often, we write X(A) (X(H, K, A)) instead of X R (A) (X R (H, K, A)). Note that the formal identity map id : H → X(H, K, A) : ξ → ξ ⊕ Aξ is an isometry. Thus, we identify subspaces of X(H, K, A) with those of H. This identification gives meaning to the notation A| Y , where Y ֒→ X(A).
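Since X(A) sits inside H_R ⊕_∞ K_C, its matricial norms can be written out explicitly. Assuming the standard row/column norm identities recalled in Section 2, for any orthonormal system (ξ_i) in H and any finite sequence (a_i) ⊂ M_n one gets:

```latex
\[
  \Bigl\| \sum_i a_i \otimes (\xi_i \oplus A\xi_i) \Bigr\|_{M_n(X(A))}
  = \max\Bigl\{ \Bigl\| \sum_i a_i a_i^{*} \Bigr\|^{1/2},\;
                \Bigl\| \sum_i a_i \otimes A\xi_i \Bigr\|_{M_n(K_C)} \Bigr\},
\]
```

the maximum of the row norm of the first coordinate and the column norm of the image under A, by the definition of the ⊕_∞ direct sum.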
Remark 3.1. Although the spaces R and C are "simple", the structure of their direct sum R ⊕ C is rather rich. For instance, it was shown in [21] that the "operator Hilbert space" OH is a subspace of a quotient of R ⊕ C (actually, the results of that paper are much more general). It follows from [21] that the spaces X(A) (A ∈ B(ℓ 2 )) defined as above are natural "building blocks" of subspaces of R ⊕ C.
We begin by stating a simple lemma.

Lemma 3.2. (1) Suppose U ∈ B(K, K′) is such that ‖Uξ‖ = ‖ξ‖ for any ξ ∈ ran A. Then X(H, K, A) is completely isometric to X(H, K′, UA).
In particular, X(H, K, A) is completely isometric to X(H, H, |A|), where |A| = (A*A)^{1/2}. (2) Suppose P_1, . . . , P_m and Q_1, . . . , Q_m are families of mutually orthogonal projections on H and K, respectively, such that Σ_k P_k = I_H, Σ_k Q_k = I_K, and A = Σ_{k=1}^m Q_k A P_k. Set H_k = P_k(H) and K_k = Q_k(K). Then the formal identity operator id from X(H, K, A) to X(H_1,

Proof. We only establish part (2). A Gram-Schmidt orthogonalization shows that any element x ∈ M_n(H) can be written as x = x_1 + . . . + x_m, with x_k = Σ_i a_ki ⊗ ξ_ki, a_ki ∈ M_n, and (ξ_ki)_i a finite orthonormal system in H_k. Then By the basic properties of row and column spaces (listed at the end of the previous section), Furthermore, the vectors Aξ_ki belong to the mutually orthogonal spaces Note that We complete the proof by noting that

Next, we establish our key tool for computing c.b. norms. Then To estimate the first term, note that id : X(A) → R is a complete contraction, hence However, ‖T‖ ≤ ‖T‖_{CB(X(A),R)}, hence ‖T‖ = ‖T‖_{CB(X(A),R)}. Next we estimate ‖BT‖_{CB(X(A),C)}. We know that Identifying elements of C ⊗ X(A) and C ⊗ C with operators from R to X(A) and C, respectively, we see that

Next we examine the exactness of X(A)*. Recall that an operator space X is called exact if there exists c > 0 such that, for any finite dimensional subspace E of X, there exists an operator u from E to a subspace F of M_n (n ∈ N), such that ‖u‖_cb ‖u^{-1}‖_cb < c. The infimum of all such c is called the exactness constant ex(X) of X. It is easy to see that R ⊕ C is 1-exact, hence so is X(A). The case of the dual is different.
Proof. Let c = 2 5/2 ex(X(A) * ). By Corollary 0.7 of [18], there exist operators , and a simple calculation yields Therefore, which implies the desired estimate for c.
Proof. The estimates on the c.b. norm of id and id −1 follow from On the other hand, if A is not Hilbert-Schmidt, then, by Proposition 3.4, X(A) * is not exact. However, R * = C is exact, thus X(A) is not completely isomorphic to R.
We say that A ∈ B(H, K) (H and K Hilbert spaces) is diagonalizable if the eigenvectors of |A| = (A*A)^{1/2} span H. Equivalently, there exist orthonormal systems (ξ_i) and (η_i) in H and K, respectively, and a sequence (λ_i) of non-negative numbers, such that A = Σ_i λ_i ξ_i ⊗ η_i (that is, Aζ = Σ_i λ_i ⟨ζ, ξ_i⟩ η_i for ζ ∈ H). When working with X(A), it is often convenient to have A diagonalizable. While every compact operator is diagonalizable, a non-compact one need not have this property. However, we have:

Lemma 3.6. Suppose H and K are separable Hilbert spaces, A ∈ B(H, K) is a contraction, and ε > 0. Then there exists a diagonalizable contraction B ∈ B(H, K) such that ‖A − B‖_2 ≤ ε, and X(A) is (1 + ε)^2-completely isomorphic to X(B). If A is non-negative, B can be selected to be non-negative, too.
Proof. By Lemma 3.2(1), it suffices to consider the case A = |A| and H = K. By [20], there exists a selfadjoint diagonalizable C ∈ B(H) such that ‖A − C‖_2 < ε/2. Let (c_i) be the eigenvalues of C, and (ξ_i) the corresponding norm 1 eigenvectors. Define the operator B by setting By the triangle inequality, ‖A − B‖_2 < ε. Moreover, B is a non-negative contraction.
Denote the formal identity map from X(A) to X(B) by U. It remains to show that ‖U‖_cb, ‖U^{-1}‖_cb ≤ 1 + ε. As U is an isometry, Lemma 3.3 implies By the triangle inequality,

Proof. We can assume B = |B|. By Lemma 3.6, there exists a non-negative diagonalizable contraction A such that A − B is Hilbert-Schmidt, and X(A) As the essential spectrum is stable under compact perturbations, 0 ∈ σ_ess(A). Write A = diag(α), where α = (α_i)_{i∈I}. Then 0 is a cluster point of the set (α_i). Denote the norm 1 eigenvectors corresponding to α_i by ξ_i. Find I_0 ⊂ I such that Σ_{i∈I_0} α_i^2 < 1. Let I_1 = I\I_0, and let P_0 and P_1 be the orthogonal projections onto H_0 = span[ξ_i : i ∈ I_0] and H_1 = span[ξ_i : i ∈ I_1]. To summarize,

Corollary 3.8. Suppose H and K are separable infinite dimensional Hilbert spaces, and a contraction A ∈ B(H, K) satisfies 0 ∈ σ_ess(|A|).
Proof. Let P be the orthogonal projection from H̃ onto H.

Classification of subspaces using sequences
In this section, we study the spaces X(A) when A is compact, and establish a connection between such spaces and a certain family of sequences. We denote by C the space of compact contractions A ∈ B(ℓ 2 ), which are not Hilbert-Schmidt. We denote by F the set of all spaces X(A), with A ∈ C.
We start by defining the canonical basis in X(A), where A ∈ B(H, K) is a compact contraction. As X(A) = X(|A|), we assume henceforth that A = |A| and H = K. Let (ξ_i)_{i∈I_1} be the normalized eigenvectors of A corresponding to the positive eigenvalues of A. Furthermore, set H′ = ker A, and let (ξ_i)_{i∈I_0} be an orthonormal basis in H′ (we assume that the index sets I_1 and I_0 are disjoint). Moreover, ‖e_i‖ = 1 for each i, and, for any finite sequence (a_i) ⊂ M_n, We say that the vectors e_i = e_i[A] form the canonical basis of X(A). Now suppose (α_i)_{i∈N} is a sequence of numbers in [0, 1]. In an effort to link operator spaces with certain sequences of scalars, we define the operator space X_d(α) = X(diag(α)). To describe the operator space structure of X_d(α), denote by (ξ_i) the canonical orthonormal basis of ℓ_2, for any finite sequence of matrices (a_i). We call (e_i(α))_{i∈N} the canonical basis of X_d(α). If α = (α_i) is a finite sequence, we define the finite dimensional space X_d(α) in the same way.
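For A = diag(α), the matricial norm on the span of the canonical basis specializes as follows; this display is our reconstruction of the formula referred to below as (4.1), obtained by combining the direct-sum description of X(A) with the standard row/column identities: for any finite sequence (a_i) ⊂ M_n,

```latex
\[
  \Bigl\| \sum_i a_i \otimes e_i(\alpha) \Bigr\|_{M_n(X_d(\alpha))}
  = \max\Bigl\{ \Bigl\| \sum_i a_i a_i^{*} \Bigr\|^{1/2},\;
                \Bigl\| \sum_i \alpha_i^{2}\, a_i^{*} a_i \Bigr\|^{1/2} \Bigr\},
\]
```

the row norm of the first coordinate against the column norm of the coordinates scaled by α_i.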
To reduce ourselves to working with the spaces X_d(α) (and hence with sequences of scalars), define, for a compact A ∈ B(H, K), the sequence α = D(A): if A has rank n < ∞, let α_1 ≥ . . . ≥ α_n > 0 be the non-zero singular values of A, and set α_i = 0 for i > n. If the rank of A is infinite, let (α_i) be the singular values of A, listed in non-increasing order. We have: Proof. Let α = (α_i) = D(A). By Lemma 3.2(1), we can assume that A = |A|; denote by (ξ_i) the eigenvectors corresponding to the positive eigenvalues of A. Let P be the orthogonal projection onto K = span[ξ_i : i ∈ N]. Set Q = I − P, and L = Q(H). By Lemma 3.2(2), X(A) is √2-completely isomorphic to X(K, K, PAP) ⊕ X(L, L, QAQ). Furthermore, QAQ = 0, hence X(L, L, QAQ) = L_R. It is easy to see that X(K, K, PAP) = X_d(α). By Proposition 3.7, the latter space is 4 We will also use a related observation. Proof. Indeed, let β = D(A). If A is finite rank, then X(A) is completely isometric to X_d(β). If rank A = ∞, assume (by Lemma 3.2) that A = |A|. Then β_1 ≥ β_2 ≥ . . . > 0 is the list of all positive eigenvalues of A. Denote the corresponding norm 1 eigenvectors by ξ_i. Let H = span[ξ_i : i ∈ N], and K = ker A. Clearly, K and H are mutually orthogonal subspaces of ℓ_2. Let (η_j)_{j∈J} be an orthonormal basis of K. Find a sequence (γ_j)_{j∈J} of positive numbers satisfying Σ_{j∈J} γ_j^2 < ε^2. Consider the compact contraction Ã ∈ B(ℓ_2), defined by Ãξ_i = β_i ξ_i and Ãη_j = γ_j η_j. Then ‖A − Ã‖_2 < ε. By (4.1), the formal identity map id : X(Ã) → X(A) is a complete contraction, and ‖id^{-1}‖_cb < 1 + ε. We complete the proof by identifying X(Ã) with the space X_d(α), where the sequence α = (α_i) is the "join" of the sequences β and γ (that is, any number c ∈ [0, 1] occurs in α as many times as it occurs in the sequences β and γ combined). Now denote by S the set of all sequences (α_i)_{i∈N} satisfying 1 ≥ α_1 ≥ α_2 ≥ . . . ≥ 0 and lim_i α_i = 0. The rest of this section is devoted to the spaces X_d(α) (α ∈ S).
We translate relations between sequences α, β ∈ S into relations between the corresponding spaces X_d(α) and X_d(β). We say that the sequence α dominates β (α ≻ β) if there exist a set S ⊂ N and K > 0 s.t. Σ_{i∈S} β_i^2 < ∞, and β_i ≤ Kα_i for any i ∉ S. We say that α is equivalent to β (α ∼ β) if α ≻ β and β ≻ α. Clearly, the relation ≻ is reflexive and transitive. The relation ∼ is, in addition, symmetric. For instance, to establish the transitivity of ≻, suppose α ≻ β and β ≻ γ; we show that α ≻ γ. Note that there exist sets S_1 and S_2, and constants K_1 and K_2, s.t. β_i ≤ K_1 α_i for i ∉ S_1, Σ_{i∈S_1} β_i^2 < ∞, γ_i ≤ K_2 β_i for i ∉ S_2, and Σ_{i∈S_2} γ_i^2 < ∞. Then γ_i ≤ K_1 K_2 α_i for any i ∉ S_1 ∪ S_2, while Σ_{i∈S_1∪S_2} γ_i^2 < ∞,
which is what we need. The other properties are proved in a similar fashion.
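The definition of ≻ and the transitivity argument above can be checked mechanically on finite truncations. The following Python sketch is purely illustrative (the function name and sample sequences are ours; the summability condition Σ_{i∈S} β_i^2 < ∞ is automatic for finite data, so only the pointwise bound is tested):

```python
# Finite-truncation check of the domination relation of this section:
# alpha ≻ beta iff there are S ⊂ N and K > 0 with sum_{i in S} beta_i^2 < ∞
# and beta_i <= K * alpha_i for all i outside S.

def dominates_witnessed(alpha, beta, K, S):
    """Return True if (K, S) witnesses alpha ≻ beta on this truncation."""
    return all(beta[i] <= K * alpha[i] for i in range(len(alpha)) if i not in S)

# Transitivity, as argued above: witnesses (K1, S1) for alpha ≻ beta and
# (K2, S2) for beta ≻ gamma combine into (K1 * K2, S1 ∪ S2) for alpha ≻ gamma.
alpha = [1.0, 0.5, 0.25, 0.125, 0.0625]
beta  = [0.9, 0.9, 0.20, 0.100, 0.0500]   # violates beta_i <= 1.5*alpha_i only at i = 1
gamma = [0.8, 0.2, 0.15, 0.090, 0.0400]

assert dominates_witnessed(alpha, beta, K=1.5, S={1})
assert dominates_witnessed(beta, gamma, K=1.5, S=set())
assert dominates_witnessed(alpha, gamma, K=1.5 * 1.5, S={1})
```

The combined witness (K_1K_2, S_1 ∪ S_2) is exactly the one produced in the transitivity proof above.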
Proof. If S and K with the properties described above exist, then they witness the fact that α ≺ β and α ≻ β, hence α ∼ β. Conversely, suppose α ≺ β and α ≻ β. Then there exist a constant K and sets S_1 and S_2, s.t. α_i ≤ Kβ_i for i ∉ S_1, β_i ≤ Kα_i for i ∉ S_2, Σ_{i∈S_1} α_i^2 < ∞, and Σ_{i∈S_2} β_i^2 < ∞. By reducing S_1 and S_2 further, we can assume that α_i > Kβ_i for each i ∈ S_1, and β_i > Kα_i for each i ∈ S_2. Then Σ_{i∈S_1} β_i^2 < ∞, and Σ_{i∈S_2} α_i^2 < ∞. Therefore, S = S_1 ∪ S_2 has the required properties.
The main result of this section is: From this we immediately obtain Corollary 4.5. Suppose α, β ∈ S. The following three statements are equivalent.
Proof. By Corollary 3.8, let (ξ_i) and (ξ′_i) be orthonormal bases in the first and second copies of ℓ_2, respectively. Then X(A) is the closed linear span of the vectors

Lemma 4.7. Suppose α, β ∈ S, E is a subspace of X_d(α), and a completely bounded map U : E → X_d(β) has a bounded inverse (in the terminology of [14], E is completely semi-isomorphic to X_d(β)). Then β ≺ α.
where the supremum is taken over all subspaces E_1 ֒→ . . . ֒→ E_k of the domain of T, with dim E_j = i_j for 1 ≤ j ≤ k. Actually, the theorem is stated in [2] for operators on finite dimensional spaces, but the generalization to compact operators is easy to obtain. Applying the above identity to T = S*S, where S is a compact operator with singular numbers s_1 ≥ s_2 ≥ . . . ≥ 0, we obtain: In our situation, assume ‖U‖_cb = 1. Let c = ‖U^{-1}‖, A = diag(α_i), B = diag(β_i), B′ = BU. Denote the singular values of B′ by (β′_i). Clearly, β′_i ≤ β_i ≤ cβ′_i for every i ∈ N. Pick an orthonormal system (x_j)_{j=1}^k in E (the domain of U). Let u be the formal identity from R_k = (ℓ_2^k)_R (the k-dimensional row space) to span[x_j : 1 ≤ j ≤ k]. Then (compare with the proof of Lemma 3.3) and (since U is a complete contraction) Thus, Σ_{j=1}^k ‖B′x_j‖^2 ≤ Σ_{j=1}^k ‖Ax_j‖^2 + 1 for any orthonormal family (x_j)_{j=1}^k. By (4.3), Σ_{j=1}^k β′^2_{i_j} ≤ Σ_{j=1}^k α^2_{i_j} + 1 for any i_1 < . . . < i_k (indeed, when computing Σ_{j=1}^k α^2_{i_j}, we are taking the supremum over a larger family of subspaces (E_j) than when computing Σ_{j=1}^k β′^2_{i_j}).
Developing the ideas of this proof, we obtain:

Theorem 4.8. Suppose α, β ∈ S, and there exists an isomorphism U :

Proof. As in the proof of Lemma 4.7, let A = diag(α_i), B = diag(β_i), and B′ = BU. By Lemma 3.3, ‖B′u‖_2 ≤ C max{‖Au‖_2, ‖u‖} for any u : ℓ_2 → X_d(α). Denote the singular numbers of B′ by (β′_i), and note that β_i/C ≤ β′_i ≤ Cβ_i for every i. Reasoning as in the proof of Lemma 4.7, we see that As in the preceding proof, Σ_{i∈I} β′^2_i < 2C^2. Therefore, Σ_{i∈I} β_i^2 < 2C^4, and β_i ≤ 2C^2 α_i for i ∉ I. Next we show that ‖id : X_d(α) → X_d(β)‖_cb < 4C^2. As before, denote the canonical bases in X_d(α) and X_d(β) by (e_i(α))_{i∈N} and (e_i(β))_{i∈N}, respectively. By (4.2), Y_1 = span[e_i(α) : i ∈ I] and Y_0 = span[e_i(α) : i ∉ I] are completely contractively complemented subspaces of X_d(α). Moreover, (here, V is the formal identity from (Y_1)_R to span[e_i(β) : i ∈ I]). By Lemma 3.3 and (4.2), ‖id|_{Y_0}‖_cb < 2C^2. Therefore, The norm of id : X_d(β) → X_d(α) is computed in the same way.

Completely unconditional bases
In this section, we further investigate bases in spaces X(A), where A ∈ B(H, K) is a compact contraction. A subspace E of X(A) is isometric to X(A| E ). As A| E is a compact contraction, (4.1) implies that E has a 1-completely unconditional basis. The key result of this section is Proposition 5.2, establishing the uniqueness of a completely unconditional basis in E (the existence of such a basis has been established by Proposition 4.1). We also show that the canonical basis (and therefore, every completely unconditional basis) in a completely complemented subspace of X d (α) is equivalent to a subsequence of the canonical basis of X d (α). Moreover, there exists α ∈ S such that the canonical basis in every complemented subspace of X d (α) is equivalent to a subsequence of the canonical basis of X d (α) (Theorem 5.6). In general, the last statement need not be true (Remark 5.5).
First we show that any completely unconditional basic sequence in an X(A) space corresponds to the canonical basis of X_d(β), for some β. Proof. As noted in Section 2, for any finite sequence of scalars (α_i). Thus, ‖T‖, ‖T^{-1}‖ ≤ C. Let B = diag(β) (note that B = B*). By Lemma 3.3, it suffices to show that for any u : ℓ_2 → X_d(β). By Lemma 3.3, the complete unconditionality of (e′_i) implies: , with λ_i = ±1 for each i, and u ∈ B(ℓ_2, Y). Note that ‖Au‖_2^2 = tr(A*Av), where v = uu*.
Proposition 5.2. For α ∈ c_0, the completely unconditional basis in X_d(α) is unique (up to permutative equivalence). More precisely: if (g_i) is a C-completely unconditional basis in X_d(α), then it is 16C^{11}-equivalent (up to a permutation) to the canonical basis in X_d(α).
Proof. Let (e_i(α)) be the canonical basis of X_d(α). Set β_i = ‖Ag_i‖, and let (e_i(β)) be the canonical basis of X_d(β). By Proposition 5.1, the map where α′ is a subsequence of α.
Remark 5.4. By [13], the completely unconditional basis in R ⊕ C is unique up to a permutation.
Remark 5.5. In general, the canonical basis of a subspace of X_d(α) (α ∈ S) need not be equivalent to a subsequence of the canonical basis of X_d(α). For instance, suppose the sequences α = (α_i) and β = (β_i) are defined by setting α_i = 2^{-n^2}, β_i = 2^{-n^2-n} for 4^{n^2} ≤ i < 4^{(n+1)^2} (n ∈ {0} ∪ N). By Lemma 4.6, X_d(β) embeds completely isomorphically into X_d(α). However, (e_i(β)) (the canonical basis of X_d(β)) is not equivalent to any subsequence of the canonical basis (e_i(α)) of X_d(α). Indeed, suppose, for the sake of contradiction, that there exists a complete isomorphism T from X_d(β) to a subspace of X_d(α), mapping e_i(β) to e_{k_i}(α). Fix n ∈ N with max{‖T‖_cb, ‖T^{-1}‖_cb} < 2^{n/2}. Consider the sets By the pigeonhole principle, either I_n or J_n has cardinality greater than 4^{n^2+2n}. If |I_n| > 4^{n^2+2n}, consider (recall that E_{i1} is the "matrix unit" with 1 at the intersection of the first column and the i-th row, and zeroes everywhere else). By (4.2), However, by (4.2) again, As before, ‖x‖^2 = |J_n| · 2^{-n^2-n}, while hence ‖T^{-1}‖^2_cb ≥ 2^{n+1}. Thus, max{‖T‖_cb, ‖T^{-1}‖_cb} ≥ 2^{n/2}, which yields the desired contradiction.
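The cardinality count behind the pigeonhole step is elementary; assuming (as the garbled display suggests) that I_n and J_n together cover the index block {i : 4^{n²} ≤ i < 4^{(n+1)²}},

```latex
\[
  4^{(n+1)^2} - 4^{n^2}
  = 4^{n^2}\bigl(4^{2n+1} - 1\bigr)
  \;\ge\; 3 \cdot 4^{n^2+2n},
\]
```

so at least one of the two sets must have cardinality greater than half of this, and in particular greater than 4^{n²+2n}.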
In certain situations, the canonical basis for every subspace of X d (α) is equivalent to a subsequence of the canonical basis of X d (α).
Theorem 5.6. For any a > 1 there exists α ∈ S\ℓ 2 such that any subspace Y of X d (α) (finite or infinite dimensional) has a 1-completely unconditional basis, a-equivalent to a subsequence of the canonical basis of X d (α).
Combining this result with Proposition 5.2, we obtain

Corollary 5.7. There exists α ∈ S\ℓ_2 such that any C-completely unconditional basic sequence in X_d(α) is AC^B-equivalent to a subsequence of the canonical basis of X_d(α) (here, A and B are positive constants).
Proof of Theorem 5.6. We may assume a < 2. Pick a sequence of integers 1 = N_0 < N_1 < . . ., s.t. N_k > 2N_{k-1} for each k. Define a sequence α = (α_i) by setting α_{2i} = a^{-k} for N_k ≤ i < N_{k+1}, and α_{2i-1} = 0 for any i ∈ N. Clearly, α ∈ c_0\ℓ_2. Let A = diag(α). For a subspace Y of X(A), let β = (β_i) be the sequence of singular values of A|_Y. Define the set I_1 by setting I_1 = {1, . . . , M} if rank(A|_Y) = M < ∞, and I_1 = N if rank(A|_Y) = ∞. We can also assume that the elements of (β_i)_{i∈I_1} are listed in the non-increasing order. Then β_i ≤ α_{2i} for each i.
Denote the normalized eigenvectors of (A|_Y)*A|_Y, corresponding to the eigenvalues β_i^2 (i ∈ I_1), by η_i. Furthermore, find vectors (η_i)_{i∈I_0} forming an orthonormal basis in ker(A|_Y). For i ∈ I_0, set β_i = 0. Let I = I_1 ∪ I_0 (we assume that this union is disjoint). Then the family (η_i)_{i∈I} is the canonical basis for Y.
For each positive integer k, let M_k be the smallest value of i s.t. β_i ≤ a^{-k}. Set M_0 = 1. In this notation, a^{1-k} ≥ β_i > a^{-k} iff M_{k-1} ≤ i < M_k. As noted above, β_i ≤ α_{2i}, hence M_k ≤ N_k. By our choice of the sequence (N_k), Thus, there exists an injective map π : Define the operator T : Y → span[e_{π(i)} : i ∈ I] ֒→ X_d(α) by T η_i = e_{π(i)}. By (4.1), T is a complete contraction, and ‖T^{-1}‖_cb ≤ a.

The equations Qf_i = f_i and (5.8) yield a_i β_i + b_i √(1 − β_i^2) = 1. Then sup_i max{|a_i|, |b_i|} ≤ ‖Q‖. As lim β_i = 0, there exists K ∈ N such that |b_i| > 1/2 for all i ≥ K. Then ‖x‖ = 1, and therefore, which is impossible.
Remark 5.9. Suppose a Banach space E is such that every infinite dimensional subspace of E is isomorphic to a complemented subspace of E. We do not know whether E is necessarily isomorphic to a Hilbert space.

Completely isomorphic classification of subspaces of X(A)
The main goal of this section is to prove Theorem 6.1 and Corollary 6.2 below. Recall that C is the set of compact contractions which are not Hilbert-Schmidt. Theorem 6.1. If A ∈ B(ℓ 2 ) belongs to C, then (S(X(A)), ≃) is Borel bireducible to the complete K σ relation.
Together with Corollary 4.5, this theorem immediately implies Corollary 6.2. If A ∈ B(ℓ 2 ) belongs to C, then the relation of complete biembeddability on S(X(A)) is Borel bireducible to the complete K σ relation.
The proof of Theorem 6.1 proceeds in two steps. First, we introduce the space S_A of sequences of non-negative generalized integers, with an equivalence relation *∼, and show that the latter is Borel bireducible with (S(X(A)), ≃). Then we prove that (S_A, *∼) is, in fact, a complete K_σ relation.
Suppose A ∈ B(ℓ 2 ) is of class C. By Lemma 3.2, we can assume that Denote by (ξ ′ i ) i∈N the canonical orthonormal basis in the second copy of ℓ 2 . Then the canonical basis of X(A) is the collection of vectors e i = ξ i ⊕ s o i ξ i and f i = ξ ′ i ⊕ 0. As in (4.1), we have, for n × n matrices a 1 , b 1 , a 2 , b 2 , . . ., Define S A as the set of all elements β = (β i ) i∈N ∈ N N * , such that (1) β k α k for any k, and (2) β 1 β 2 . . .. Equipping N N * with its product topology, we see that S A is closed.
Define the relation *∼ on S_A as follows: β *∼ γ if there exist K ∈ N and I ⊂ N s.t. |β_i − γ_i| ≤ K for any i ∉ I, and Σ_{i∈I}(4^{-β_i} + 4^{-γ_i}) ≤ K. Here, the vectors (g_i) come from the definition of Y. Note that, for each i, g_i depends solely (and continuously) on β_i. Therefore, for each m-tuple  Proof. We have to show that the set F = {(β, γ) ∈ S_A × S_A : β *∼ γ} is a K_σ set, that is, a countable union of compact sets. To this end, we define a family of subsets of S_A × S_A, described below. For K, n ∈ N and I_n ⊂ {1, . . . , n}, define F(K, n, I_n) as the set of all pairs (β, γ) (β = (β_i), γ = (γ_i)) with the property that |β_i − γ_i| ≤ K for i ∈ {1, . . . , n}\I_n, and Σ_{i∈I_n}(4^{-β_i} + 4^{-γ_i}) ≤ K. Let F(K, n) = ∪_{I_n⊂{1,...,n}} F(K, n, I_n), and F(K) = ∩_n F(K, n). It suffices to show that F = ∪_{K∈N} F(K). Indeed, F(K, n, I_n) is a compact subset of S_A × S_A, hence so is F(K, n) (as a finite union of compact sets). Furthermore, F(K) is also compact, and We first show that F ⊂ ∪_K F(K). By definition, β *∼ γ if there exist K ∈ N and I ⊂ N s.t. |β_i − γ_i| ≤ K for any i ∉ I, and Σ_{i∈I}(4^{-β_i} + 4^{-γ_i}) ≤ K. Letting I_n = I ∩ {1, . . . , n}, we see that (β, γ) ∈ F(K, n, I_n) for each n, hence (β, γ) ∈ F(K).
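The witness sets F(K, n, I_n) admit a direct finite check. The following Python sketch is illustrative only (names are ours; indices are 0-based and sequences are finite truncations):

```python
# Membership test for the sets F(K, n, I_n) defined above: coordinates of
# beta and gamma must agree within K off the exceptional set I_n, and the
# exceptional coordinates must be summably small in the 4^{-x} scale.

def in_F(beta, gamma, K, n, I_n):
    off_ok = all(abs(beta[i] - gamma[i]) <= K for i in range(n) if i not in I_n)
    exc_ok = sum(4.0 ** (-beta[i]) + 4.0 ** (-gamma[i]) for i in I_n) <= K
    return off_ok and exc_ok

beta  = [0, 1, 2, 3, 4, 5]
gamma = [7, 2, 3, 4, 5, 6]   # differs from beta by 1, except at i = 0
# A single global witness (K = 3, I = {0}) puts (beta, gamma) into
# F(3, n, I ∩ {0, ..., n-1}) for every n, mirroring the inclusion F ⊂ ∪_K F(K).
assert all(in_F(beta, gamma, 3, n, {0} & set(range(n))) for n in range(1, 7))
```

This is exactly the mechanism of the proof: one global witness (K, I) restricts to a witness I_n = I ∩ {1, . . . , n} at every finite level.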
Suppose, on the contrary, that b′ *∼ c′, where b′ = φ(b) and c′ = φ(c). Then there exist a set I ⊂ N and K ∈ N s.t. |b′_j − c′_j| ≤ K for j ∉ I, and Σ_{j∈I}(4^{-b′_j} + 4^{-c′_j}) ≤ K. We shall show that |b_k − c_k| ≤ K for all but finitely many k's. Indeed, otherwise there exist infinitely many k's s.t. I_k ⊂ I (this follows from the fact that

Conclusion of the proof of Theorem 6.1. By Corollary 6.4, (S(X(A)), ≃) and (S_A, *∼) are Borel bireducible to each other. By Lemmas 6.5 and 6.6, (S_A, *∼) is Borel bireducible to a complete K_σ relation.

Remark 6.7. For many separable Banach spaces X, it is known that the isomorphism relation on S(X) reduces certain "classical" relations, such as E_{K_σ} (see e.g. [1,4,5,6]).

Recall that the class C consists of all compact contractions which are not Hilbert-Schmidt, and the family F is the set of all operator spaces X(A), where A ∈ B(ℓ_2) belongs to C. Clearly, all these spaces are isometric to ℓ_2.
Proof of Theorem 1.1. Suppose X(A) ∈ F. By Theorem 6.1 and Corollary 6.2, the relations of complete isomorphism and complete biembeddability on S(X(A)) are complete K_σ. To show that F contains a continuum of spaces not completely isomorphic to each other, pick . By the results of Section 6, there exists a Borel map Φ : To this end, write N as a disjoint union of infinite sets I_k (k ∈ N). For Clearly, this family (b_ε) has the desired properties.

We handle the real case. Begin by introducing a numerical invariant of subspaces of X = R ⊕_1 ℓ_2. Denote by P the "natural" projection onto R. For Y ∈ S(X), define c(Y) = ‖P|_Y‖.
Next consider c(Y) ∈ (0, 1). Suppose, for the sake of contradiction, that there is no x as in the statement of the lemma. Then for every n ∈ N there exist t_n ∈ (c(Y) − 1/n, c(Y)) and ξ_n ∈ ℓ_2 s.t. ‖ξ_n‖ = 1 and t_n ⊕ (1 − t_n)ξ_n ∈ Y. Passing to a subsequence if necessary, we can assume that (ξ_n) is a Cauchy sequence in ℓ_2. Indeed, otherwise there exist n_1 < n_2 < . . . and α > 0 such that, for any i, ‖ξ_{n_{i+1}} − ξ_{n_i}‖ > α. By the uniform convexity of Hilbert spaces (which follows, for instance, from the parallelogram identity), there exists β > 0 s.t.
Lemma 8.4. If Y and Z are infinite dimensional subspaces of X, and c(Y ) = c(Z), then Y is isometric to Z.
To handle Φ, consider an open ball U ⊂ X with center α ⊕ Σ_{i=0}^N β_i e_i and radius r. Then Φ(t) ∩ U ≠ ∅ iff there exist λ_0, . . . , λ_N ∈ Q s.t.
This inequality describes a Borel subset of [0, 1]. As any open subset of X is a countable union of open balls, the map Φ is Borel.