Some points of view on Grothendieck's inequalities

Haagerup's proof of the non commutative little Grothendieck inequality raises some questions on the commutative little inequality, and it offers a new result on scalar matrices with non negative entries. The theory of completely bounded maps implies that the commutative Grothendieck inequality follows from the little commutative inequality, and that this passage may be given a geometric form as a relation between a pair of compact convex sets of positive matrices, which, in turn, characterizes the little constant in the complex case.


Introduction
Grothendieck's work on tensor products of normed spaces [7] has influenced mathematics in several ways, some of which are very surprising. This is described in Pisier's survey article [14] and in the book [4] by Diestel, Fourie and Swart. Here we will focus on the inequalities named Grothendieck's inequality and Grothendieck's little inequality in the setting of complex m × n matrices. This is a continuation of our recent articles [2], [3], where we showed that the theory of operator spaces and completely bounded maps provides a setup which, in our opinion, fits very well with the existing results related to Grothendieck's inequalities. An important aspect of Grothendieck's work deals with a bounded operator between Banach spaces which factors through a Hilbert space or through the space of continuous complex functions on a compact topological space. In this article the Hilbert spaces are finite dimensional and the compact spaces we meet have only finitely many points, so our results deal with complex m × n matrices, the set of which we denote M_(m,n)(C). This set of matrices is in a canonical way isomorphic to the algebraic tensor product C^m ⊗ C^n, where the isomorphism is described via the canonical bases (δ_i)_{1≤i≤m} for C^m and (γ_j)_{1≤j≤n} for C^n and the matrix units {e_(i,j) : 1 ≤ i ≤ m, 1 ≤ j ≤ n} for M_(m,n)(C) by the linear map ϕ : C^m ⊗ C^n → M_(m,n)(C) which satisfies ϕ(δ_i ⊗ γ_j) := e_(i,j). In several spots we will use that the image ϕ(a ⊗ b) is the rank one matrix with entries ϕ(a ⊗ b)_(i,j) = a_i b_j, and also that this matrix is the product of the one column matrix a^| := (a_1, ..., a_m) and the one row matrix b^−.

For each real p with 1 ≤ p < ∞ or p = ∞ and any natural number k we let ‖·‖_p denote the usual p-norm on C^k. Given a couple of normed spaces such as (C^m, ‖·‖_p) and (C^n, ‖·‖_r), we recall that Schatten [15] introduced the concept of a cross norm on the tensor product (C^m, ‖·‖_p) ⊗ (C^n, ‖·‖_r): a norm |||·||| on the tensor product of the normed spaces is called a cross norm if it satisfies

∀η ∈ C^m ∀ξ ∈ C^n : |||η ⊗ ξ||| = ‖η‖_p ‖ξ‖_r .
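The rank one description of ϕ(a ⊗ b) and the cross norm property can be checked numerically. The following sketch is our own illustration, not from the paper: it forms the outer product a^| b^− with numpy and verifies that the operator norm, i.e. the injective norm ∨(2,2), of the resulting rank one matrix equals ‖a‖_2 ‖b‖_2.

```python
import numpy as np

# phi(a (x) b) is the rank one matrix with entries a_i * b_j,
# i.e. the product of the column a^| and the row b^-.
a = np.array([1.0, 2.0, 2.0])          # a vector in C^m (real here for simplicity)
b = np.array([3.0, 0.0, 4.0, 0.0])     # a vector in C^n
X = np.outer(a, b)                     # X_(i,j) = a_i * b_j

# X has rank one, and its operator norm ||X||_oo = ||X||_v(2,2)
# equals ||a||_2 * ||b||_2, the cross norm property for the injective norm.
rank = np.linalg.matrix_rank(X)
op_norm = np.linalg.norm(X, 2)
cross = np.linalg.norm(a, 2) * np.linalg.norm(b, 2)
print(rank, op_norm, cross)            # rank 1, both norms equal 15.0
```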
Schatten proved that there is a minimal and a maximal cross norm. Today the minimal cross norm is called the injective cross norm and denoted ‖·‖_∨. The maximal cross norm is called the projective cross norm and is denoted ‖·‖_∧. In the situation with (C^m, ‖·‖_p) ⊗ (C^n, ‖·‖_r) we can then define norms ‖·‖_∨(p,r) and ‖·‖_∧(p,r) on M_(m,n)(C) by transporting the injective and projective norms on the tensor product to norms on M_(m,n)(C) via the isomorphism ϕ defined above.
There are many well known norms on M_(m,n)(C); amongst them we mention right now the operator norm, which we denote ‖X‖_∞ = ‖X‖_∨(2,2), the Hilbert-Schmidt norm, which we denote ‖X‖_2, and the Schur multiplier norm, which we denote ‖X‖_S. We recall the Schur product of matrices: given two complex matrices X = (X_(i,j)) and A = (A_(i,j)) in M_(m,n)(C), their Schur product X • A is the matrix in M_(m,n)(C) given by the equation (X • A)_(i,j) := X_(i,j) A_(i,j). We can then formulate Grothendieck's inequality in a way which is close to the original one, except, of course, for the use of Grothendieck's name.
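As an illustration of the Schur product, the following sketch is our own check, not from the paper. For a rank one multiplier X with X_(i,j) = μ_i ν_j the Schur product factors as X • A = Δ_m(μ)AΔ_n(ν), which immediately gives the bound ‖X • A‖_∞ ≤ ‖μ‖_∞ ‖ν‖_∞ ‖A‖_∞; the code verifies this bound numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.standard_normal(4)
nu = rng.standard_normal(5)
X = np.outer(mu, nu)          # rank one multiplier with X_(i,j) = mu_i * nu_j
A = rng.standard_normal((4, 5))

schur = X * A                 # entrywise (Schur) product: (X o A)_(i,j) = X_(i,j) A_(i,j)

# For rank one X we have X o A = Delta_m(mu) A Delta_n(nu), hence
# ||X o A||_oo <= ||mu||_oo * ||nu||_oo * ||A||_oo.
bound = np.max(np.abs(mu)) * np.max(np.abs(nu)) * np.linalg.norm(A, 2)
lhs = np.linalg.norm(schur, 2)
print(lhs <= bound + 1e-12)   # True
```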

Theorem 1.1. There exists a positive real K_G^C ≤ sinh(π/2) such that for any complex m × n matrix X we have

‖X‖_∧(∞,∞) ≤ K_G^C ‖X‖_S .

The exact value of K_G^C is unknown, but after the combined efforts of several authors, Pisier reports in Section 4 of [14] that 1.338 < K_G^C ≤ 1.4049. The Grothendieck inequality is most often described as a property of bilinear forms on the product (C^m, ‖·‖_∞) × (C^n, ‖·‖_∞), and we will return to this formulation of the inequality later on.
There is also an inequality which is often named Grothendieck's little inequality. To formulate it we recall the norms ‖X‖_F and ‖X‖_cbF from [2], which we defined on M_(m,n)(C). The norm ‖X‖_F is defined as the norm of the linear operator F_X with the matrix X acting as an operator from (C^n, ‖·‖_∞) to (C^m, ‖·‖_2), and then ‖X‖_cbF is the completely bounded norm of F_X. It follows from the definition of the injective norm that ‖X‖_F = ‖X‖_∨(2,1). The little inequality may then be formulated as follows.

Theorem 1.2. There exists a positive constant k_G^C such that for any complex m × n matrix X we have

‖X‖_cbF^2 ≤ k_G^C ‖X‖_F^2 .

It is known, see Section 5 of [14], that k_G^C = 4/π. The normed space (C^n, ‖·‖_∞) may be considered as the n-dimensional abelian C*-algebra A_n := C({1, ..., n}, C), the continuous complex functions on the set {1, ..., n} equipped with the sup-norm, and in this way Grothendieck's inequality contains a statement on bounded bilinear forms on a product of two abelian C*-algebras. This raises the natural question whether Grothendieck's inequality has an extension to bounded bilinear forms on a pair of non commutative C*-algebras. This was solved by Pisier in [12], where he shows what the non commutative inequality ought to be and also shows that this inequality holds if a certain approximation property holds. This approximation restriction was not a serious problem, and it was removed by Haagerup in [9]. We will not go into any discussion of the content of the non commutative Grothendieck inequality here, but mention that Haagerup in an appendix to [8] gives a proof of the non commutative little Grothendieck inequality which actually seems to contain new information when applied in the finite dimensional and abelian situation we are studying here. This aspect is discussed in Section 3 below. Another aspect of this application of Haagerup's work is that we can prove Grothendieck's little inequality in the form of Theorem 1.2 with elementary mathematics, but at the cost of the rather bad inequality k_G^C ≤ 2.
Our research in the articles [2] and [3] studied relations between some norms on M_(m,n)(C), and it gave some characterizations of the norms in terms of certain factorization properties. The norms we studied are not new, but the perspective is to look at them as completely bounded norms of some linear or bilinear operators and then investigate their minimal Stinespring representations, as defined in Definition 2.1 of [1]. The concepts named completely bounded and Stinespring representation come from the theory of operator spaces and completely bounded maps, but we will not give an introduction to that theory here. The connections to the theory of Schur multipliers [17] and module maps [16] were noticed long ago. We gave a short description of the facts we need most in [2], on page 546, right after Proposition 1.5 of that article, and there are fine text books by Paulsen [11], Pisier [13] and Effros and Ruan [5] which describe this subject.
Grothendieck already introduced many norms in his résumé [7], and Pisier in [14] and Diestel, Fourie and Swart in [4] list more norms than we will discuss here. We will look at some of the norms mentioned in Section 3 of Pisier's survey, but there is also the norm ‖·‖_T defined in Definition 1.2 of [3], which may exist under a different name in the literature.
In Section 2 we recall the norms we studied in [2] and [3] as factorization norms on matrices in M_(m,n)(C). It is not obvious, we think, that they are all cross norms, but the factorization result of [3] gives an easy way to verify this. The results from Section 3 of Pisier's survey make it easy to identify all but one of the norms we have introduced with norms from [14].
In Section 3 we go back to Haagerup's proof in [8] of the little Grothendieck inequality for non commutative C*-algebras, and we find that when his construction is applied in the finite dimensional and abelian setting we are studying, the objects he investigates become quite concrete and in principle computable. This raises the question: will this method of Haagerup's actually produce the optimal cbF-factorization, or operator factorization, of X which we studied in item (i) of Theorem 2.1 of [3]? We guess that the answer in general is no, but we prove that the answer is yes for matrices with non negative entries only. For such matrices we actually show that Grothendieck's little inequality holds with the constant equal to 1.
In Section 4 we return to [3], where we showed that for a positive matrix P in M_n(C) we have ‖P‖_cbB ≤ k_G^C ‖P‖_B, or in words, that for positive matrices the constant K_G^C may be replaced by k_G^C. When applying this result to a certain positive matrix in M_(m+n)(C) we can obtain bounds for the constant K_G^C. The Theorem 1.1 is dual to the Grothendieck inequality, and it is really a statement on the convex hull of the rank one matrices ϕ(η ⊗ ξ). This may be seen as a geometrical formulation of Grothendieck's inequality. We pursue this aspect, but for positive matrices, and we show in Theorem 5.3 that the mentioned result for positive matrices, ‖P‖_cbB ≤ k_G^C ‖P‖_B, implies a certain property for the closed convex hull of the positive rank one matrices with diagonal equal to the identity. That geometrical result turns out to characterize k_G^C, and it has equation (1.1) as an easy corollary.

Cross norms and different names
Our identification of C^m ⊗ C^n with M_(m,n)(C) goes via the linear map ϕ given by ϕ(η ⊗ ξ) = Σ_{i,j} η_i ξ_j e_(i,j). From here one sees that the image by ϕ of the non vanishing elementary tensors η ⊗ ξ consists of the rank one matrices. Hence, in order to establish that a norm on (C^m, ‖·‖_p) ⊗ (C^n, ‖·‖_r) is a cross norm, we only have to look at rank 1 matrices. There are quite a few norms involved in this section. From Section 3 of [14] we get some projective and some injective norms, the norms ‖·‖_H and ‖·‖_H', the Schur multiplier norm and the Haagerup tensor norm, which we denote ‖·‖_h. From Definition 1.2 of [3] we get norms with subscripts {F, cbF, B, cbB, S, T}. Section 3 of [2] gives a characterization of the norms ‖·‖_cbF, ‖·‖_cbB, ‖·‖_S and ‖·‖_T in terms of certain factorization properties, and based on these properties we can obtain concrete factorizations of rank one matrices, which show that all the norms are cross norms on tensor products of the form (C^m, ‖·‖_p) ⊗ (C^n, ‖·‖_r) for some choices of p and r in the set {1, 2, ∞}. When possible, we will identify some of the norms Pisier mentions with some of the norms we have given different names. The reason why we have introduced the new names is that they seem to fit well with our completely bounded approach to the problems under investigation.
Theorem 2.1. Let X be a non-zero complex m × n matrix of rank 1 with X_(i,j) = μ_i ν_j for vectors μ in C^m and ν in C^n. (v) With the notation from Section 3 of [14], ‖·‖_S = ‖·‖_H' = ‖·‖_h. (vi) With the notation from Section 3 of [14], ‖·‖_H = ‖·‖_cbB. The different factorizations of X, which show that the cb-norms are as claimed, are described in the proof of the proposition.
Proof. It follows from a direct computation that ‖X‖_F = ‖μ‖_2 ‖ν‖_1. We will show that the vector ξ in the cbF-factorization of X is given by ξ_j = (|ν_j| / ‖ν‖_1)^(1/2). By construction this ξ is a positive unit vector in (C^n, ‖·‖_2). We may define Δ_n(ξ)^inv as the diagonal matrix with entries equal to ξ_j^{-1} if ξ_j > 0 and 0 if ξ_j = 0. Then we have ‖XΔ_n(ξ)^inv‖_∞ = ‖XΔ_n(ξ)^inv‖_2 since the rank is one, and this norm is given as ‖μ‖_2 ‖ν‖_1. Then by Theorem 1.3 item (i) of [3] we have ‖X‖_cbF = ‖μ‖_2 ‖ν‖_1 = ‖X‖_F, so the norms are cross norms as claimed. The very definition of the norm ‖·‖_F tells that it is the minimal cross norm ∨(2,1). The inequality then follows from Theorem 2.1 of [2]. The duality statement in item (i) is a consequence of equation (4.4) of [2], and item (i) follows.
With respect to item (ii), we find by direct computation that ‖X‖_B = ‖μ‖_1 ‖ν‖_1. We define two positive unit vectors ξ and η, to be used in a bilinear factorization of X, by η_i = (|μ_i| / ‖μ‖_1)^(1/2) and ξ_j = (|ν_j| / ‖ν‖_1)^(1/2).

The matrix B in the factorization X = Δ_m(η)BΔ_n(ξ) is then given by B_(i,j) = μ_i ν_j / (η_i ξ_j) when η_i ξ_j > 0, and 0 otherwise. Then, since the rank of B is one, ‖B‖_∞ = ‖B‖_2 = ‖μ‖_1 ‖ν‖_1, and we see from Theorem 1.3 item (ii) of [3] that ‖X‖_cbB = ‖μ‖_1 ‖ν‖_1, so the norms are cross norms as claimed. By the definition of a minimal cross norm it follows that ‖·‖_B is the minimal cross norm ∨(1,1), so the result follows from [2]. The duality statement of item (ii) follows from equation (3.3) of [2], and item (ii) follows.
With respect to item (iii), the equations X_(i,j) = μ_i ν_j describe X as the product of a column matrix μ^| by a row matrix ν^−, so by Theorem 1.3 item (iii) of [3] we have ‖X‖_S ≤ ‖μ‖_∞ ‖ν‖_∞. On the other hand, a certain matrix unit will give the opposite inequality, and item (iii) follows together with a factorization. The duality statement follows from that of item (ii), and the inequality ‖·‖_∧(∞,∞) ≤ K_G^C ‖·‖_S is the dual of the one presented in item (ii), so item (iii) follows.
In the case of item (iv) we remark that ‖·‖_T is conjugate dual to the cross norm ‖·‖_cbF, so it is a cross norm, and then by duality ‖X‖_T = ‖μ‖_2 ‖ν‖_∞. The concrete factorization may be obtained as follows. We consider again X as the product of a column matrix μ^| and a row matrix ν^−. We have ‖Δ_m(μ)‖_2 = ‖μ‖_2, and for L := (Ω_m)^− we obtain a factorization which realizes this norm; item (iv) follows also from the conjugate duality between the cbF-norm and the T-norm.
The Haagerup tensor product is given by a cross norm ‖·‖_h on the tensor product of the two abelian C*-algebras (A_m, ‖·‖_∞) and (A_n, ‖·‖_∞) in the following way for a matrix X in M_(m,n)(C).
If you are given an expression X = ϕ(Σ_k a_k ⊗ b_k), then, based on Theorem 1.3 item (iii) of [2], we obtain a proof of the identity ‖·‖_S = ‖·‖_h, which is a part of item (v). The equation (3.9) of [14] and Theorem 1.3 item (iii) of [3] imply that the norm ‖·‖_H' of [14] equals ‖·‖_S, and item (v) follows.
The equality ‖·‖_H' = ‖·‖_S of item (v) implies by duality that ‖·‖_H = ‖·‖_cbB. The norm γ_2 from equation (2.2) of [14] is known to be equal to ‖·‖_H, and the entire theorem follows. Following Pisier's survey [14] we see that all of the norms except the norm ‖·‖_T are explicitly present in that survey, and, as the dual of one of these, the T-norm is implicitly mentioned too. The duality results mentioned in items (ii) and (iii) were well known and used by Grothendieck. The really new thing here is, in our opinion, that some of the norms now have a characterization as completely bounded norms of some linear or bilinear operators between operator spaces, and the theory of completely bounded linear and multilinear maps yields concrete optimal factorizations of the operators under investigation.

The Haagerup factorization
Haagerup gives in an appendix to [8] a proof of the little Grothendieck inequality for non commutative C*-algebras. That proof may also be applied to the case of the finite dimensional commutative C*-algebra A_n, and then to a linear operator F_X from (C^n, ‖·‖_∞) to (C^m, ‖·‖_2). In this way Haagerup provides an elementary way to obtain a concrete factorization of a complex m × n matrix X which qualitatively is close to the optimal cbF-factorization of X; see Theorem 3.1 item (i) of [2]. Haagerup's construction gives the best upper bound in the non commutative setting, but in the abelian case, which we study here, his method gives the upper bound 2 for the little Grothendieck constant k_G^C, and this aspect is not impressive, since we know that k_G^C = 4/π < 1.274. The impressive thing is that his proof is constructive and only uses elementary analysis. Furthermore, the construction shows that if we only look at matrices with non-negative entries, then, in that world, k_G^C = 1 and Haagerup's factorization is the optimal one. There is a possibility that Haagerup's factorization is the optimal one even when we get a bad upper bound for k_G^C, the reason being that A_n is abelian, and in this case the inequality (3.2) below might have an extension to a version of the inequality (3.3) with the constant k_G^C instead of 2. Even if Haagerup's construction does not give the optimal cbF-factorization in general, it still raises some new questions in this setting, and it gives an elementary proof of Grothendieck's little inequality, although it does not provide the right constant.
It is quite easy to describe Haagerup's factorization, and we will do that by following the first arguments in his proof of the non commutative little Grothendieck inequality. To this end, let Ω denote the vector in C^n consisting of 1's only, and let X be a complex m × n matrix such that ‖X‖_F = 1. Then there exists a unitary u in A_n such that ‖F_X(u)‖_2 = 1, and we can define a functional φ on A_n of norm 1 by φ(a) := ⟨F_X(ua), F_X(u)⟩. This functional of norm 1 on A_n = C({1, ..., n}, C) takes the value 1 at the unit I_n, so it is a state, i.e. given as the integral with respect to a probability measure on the set {1, ..., n}. Then the vector λ in C^n defined by λ_j := φ(δ_j) has non negative entries with sum 1, and these real numbers are the masses of the points with respect to the mentioned measure. We can then define a unit vector ξ with nonnegative entries in (C^n, ‖·‖_2) by ξ_j := λ_j^(1/2). If we follow Haagerup's proof, we find the inequalities (3.2) and (3.3). Equation (3.3) shows that there exists a complex m × n matrix Z such that ‖Z‖_∞ ≤ √2, XΔ_n(u) = ZΔ_n(ξ) and X = (ZΔ_n(u*))Δ_n(ξ). This gives the rather bad upper bound k_G^C ≤ 2 for the general cbF-factorization, but a quite explicit construction. We have made some simple experiments with this vector in the mathematical software package Maple, but our knowledge of the powerful tools of Maple is limited, so no new information showed up, except for the case of matrices with non-negative entries. This case is much easier to deal with, because here the optimal unitary u which is part of Haagerup's construction is simply the unit I_n of A_n. The experiments for matrices with non-negative entries indicated that for such matrices we have ‖X‖_cbF = ‖X‖_F. If this experimental result is true, then it may be combined with the results from [2] to show that the Haagerup vector is the optimal one for the cbF-factorization in this case. The following easy proposition and our reformulation of Grothendieck's inequality as a linear program in Theorem 3.5 of [3] tell that the results of the experiments are based on a mathematical theorem. This is yet another example which demonstrates that the points of view presented in the book [6] by Eilers and Johansen are very fruitful.
Proposition 3.1. Let X be a complex m × n matrix. Then the 1-norms ‖X_(i,·)‖_1 of the rows in X satisfy the inequality

(3.4) ‖X‖_F^2 ≤ Σ_i ‖X_(i,·)‖_1^2 .

If the entries in X are all non negative then (3.4) becomes an equality.

Proof. Let z in C^n be given with ‖z‖_∞ ≤ 1; then ‖Xz‖_2^2 = Σ_i |Σ_j X_(i,j) z_j|^2 ≤ Σ_i ‖X_(i,·)‖_1^2, and if all entries are non negative the choice z = Ω gives equality, so the proposition follows.
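Proposition 3.1 is easy to test numerically. The sketch below is our own check, assuming only the reading of ‖X‖_F as the supremum of ‖Xz‖_2 over ‖z‖_∞ ≤ 1: it samples random unimodular vectors z, confirms that the bound (3.4) holds, and checks that z = Ω attains it for a matrix with non negative entries.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((3, 4))                               # non negative entries
row_bound = np.sum(np.sum(np.abs(X), axis=1) ** 2)   # sum_i ||row_i||_1^2

# ||X||_F is the sup of ||Xz||_2 over ||z||_oo <= 1; sample random unimodular z.
best = 0.0
for _ in range(2000):
    z = np.exp(1j * 2 * np.pi * rng.random(4))       # |z_j| = 1 for every j
    best = max(best, np.linalg.norm(X @ z) ** 2)

omega = np.ones(4)                                   # z = Omega attains the bound
at_omega = np.linalg.norm(X @ omega) ** 2
print(best <= row_bound + 1e-9, np.isclose(at_omega, row_bound))
```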
Theorem 3.2. Let X be a real m × n matrix with non negative entries. Then ‖X‖_cbF = ‖X‖_F, and Haagerup's vector becomes the optimal positive unit vector from the cbF-factorization of X.

Proof. We define a positive matrix P = X*X and a vector λ in R^n with non negative entries via the formula λ_j := Σ_t Σ_s x_(s,j) x_(s,t), and we will show that P ≤ Δ_n(λ). Then let γ be a vector in C^n and we get

⟨γ, Pγ⟩ ≤ Σ_{j,t} P_(j,t) |γ_j||γ_t| ≤ Σ_{j,t} P_(j,t) (|γ_j|^2 + |γ_t|^2)/2 = Σ_j λ_j |γ_j|^2 = ⟨γ, Δ_n(λ)γ⟩ .

We see by the proposition and the definition of λ that Σ_j λ_j = ‖X‖_F^2; then by Theorem 3.5 of [3] and the computations above we get that ‖P‖_cbB ≤ Σ_j λ_j. By Theorem 3.2 of [3] we know that ‖P‖_cbB = ‖X‖_cbF^2, so ‖X‖_cbF = ‖X‖_F. By Theorem 3.5 of [3] we know that the optimal positive unit vector ξ for the cbF-factorization is given by the formula ξ_j = (λ_j / ‖X‖_F^2)^(1/2), and that is Haagerup's vector, so the theorem follows.
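The proof of Theorem 3.2 can be replayed numerically. In the sketch below, which is our own experiment, λ consists of the row sums of P = X*X; we check that Δ_n(λ) − P is positive semidefinite and that the factorization X = ZΔ_n(ξ) built from Haagerup's vector ξ satisfies ‖Z‖_∞^2 = ‖X‖_F^2, so that the little inequality holds with constant 1 for this X.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((5, 4))                 # non negative entries
P = X.T @ X
lam = P.sum(axis=1)                    # lambda_j = sum_t P_(j,t), the row sums of P
normF_sq = lam.sum()                   # = ||X||_F^2 by Proposition 3.1 (equality case)

# P <= Delta_n(lambda): the difference must be positive semidefinite.
eigs = np.linalg.eigvalsh(np.diag(lam) - P)

# Haagerup's unit vector xi and the factorization X = Z Delta_n(xi).
xi = np.sqrt(lam / normF_sq)
Z = X / xi                             # Z = X Delta_n(xi)^{-1}, columnwise division
opZ = np.linalg.norm(Z, 2)
print(eigs.min() >= -1e-9, np.isclose(opZ ** 2, normF_sq))
```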
The results above also give an upper bound, which is easy to compute, for the cbB-norm of a positive matrix.

Corollary 3.3. Let P be a positive n × n complex matrix and X a complex m × n matrix with X*X = P. Then

‖P‖_cbB ≤ Σ_j Σ_t |P_(j,t)| ≤ (Σ_j P_(j,j)^(1/2))^2 ,

with equality in the first inequality if all the entries X_(i,j) are non negative.

Proof. We return to (3.5), and we find that for the non negative λ in R^n defined by λ_j := Σ_t |P_(j,t)| we will get P ≤ Δ_n(λ), and then by Theorem 3.5 of [3], ‖P‖_cbB ≤ Tr_n(Δ_n(λ)), which is the stated sum. To get the next inequality we recall that the column norm of the j-th column X_j equals P_(j,j)^(1/2). We can then continue with the inequality we have already established and use the Cauchy-Schwarz inequality once more to get Σ_j Σ_t |P_(j,t)| ≤ Σ_j Σ_t P_(j,j)^(1/2) P_(t,t)^(1/2) = (Σ_j P_(j,j)^(1/2))^2, and the corollary follows.
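The first step of Corollary 3.3, the domination P ≤ Δ_n(λ) with λ_j = Σ_t |P_(j,t)|, holds for any positive matrix and is easy to verify numerically; the following sketch, our own check, does this for a random positive complex matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
P = A.conj().T @ A                      # a positive 4 x 4 complex matrix

lam = np.abs(P).sum(axis=1)             # lambda_j = sum_t |P_(j,t)|
eigs = np.linalg.eigvalsh(np.diag(lam) - P)   # Hermitian, so eigvalsh applies
bound = lam.sum()                       # the computable upper bound for ||P||_cbB
print(eigs.min() >= -1e-9, bound >= np.trace(P).real)
```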
It seems natural to ask whether the results above can be extended to matrices with real entries only. Since it is known that k_G^R > 1, this is not possible, but one may ask why. One reason, we anticipate, is that for some matrices with real entries the maximal value of ‖F_X(u)‖_2 must be attained at a unitary u = (u_1, ..., u_n) where the entries are not all real. In the general case it might still be that Haagerup's vector continues to be the one from the optimal cbF-factorization, but we think that this is not the case. We do not have a definite result which proves this, but we have made some observations which indicate it and also give some more information on the problem of finding the optimal vector ξ for the cbF-factorization.
Suppose we are given a complex m × n matrix X with optimal unitary u in A_n and a Haagerup vector ξ such that all ξ_j > 0. Then there is a vector γ which is an eigenvector corresponding to the eigenvalue ‖X‖_F^2 for the matrix Δ_n(ξ)^{-1}X*XΔ_n(ξ)^{-1}. By the factorization result in Theorem 3.1 item (i) of [2] we know that ‖Δ_n(ξ)^{-1}X*XΔ_n(ξ)^{-1}‖_∞ ≥ ‖X‖_cbF^2, with equality exactly if ξ is the positive unit vector from the cbF-factorization of X. For a general matrix X we can then use Haagerup's recipe to construct a positive unit vector ξ such that ‖X‖_F^2 is an eigenvalue for the matrix Δ_n(ξ)^{-1}X*XΔ_n(ξ)^{-1}. If the recipe actually gave the optimal cbF-factorization from Theorem 3.1 of [2], then both ‖X‖_F^2 and ‖X‖_cbF^2 would be eigenvalues for Δ_n(ξ)^{-1}X*XΔ_n(ξ)^{-1}, and that may be too much to ask for. We have obtained a way to reformulate this problem, and in order to describe it we return to the question of computing the cbB-norm of X*X. Let X*X = Δ_n(η)BΔ_n(η) be its cbB-factorization; we can then define a probability distribution μ on {1, ..., n} by μ_j = η_j^2. Suppose all η_j > 0; then from Theorem 2.4 of [2] we know that ‖X*X‖_cbB is an eigenvalue of B, and then the following determinants both vanish:

det(X*X − ‖X‖_F^2 Δ_n(λ)) = 0 and det(X*X − ‖X*X‖_cbB Δ_n(μ)) = 0,

and we have obtained the following proposition.
Proposition 3.4. Let X be a complex m × n matrix, λ the probability distribution associated to Haagerup's vector and μ the probability distribution associated to the cbB-factorization of X*X. If all λ_j > 0 and all μ_j > 0, then both ‖X‖_F^2 Δ_n(λ) and ‖X*X‖_cbB Δ_n(μ) belong to the set of positive diagonal matrices D which satisfy det(X*X − D) = 0.
We do not know if the equation det(X*X − D) = 0 of Proposition 3.4 has been studied in the literature, but the structure of its set of solutions might offer some new insights into the questions we are looking at here.

From the little inequality to the Grothendieck inequality. The analytic approach
In Theorem 3.2 of [3] we showed that for any complex m × n matrix X we have ‖X*X‖_cbB = ‖X‖_cbF^2 and ‖X*X‖_B = ‖X‖_F^2; hence the following inequality is valid for every positive matrix P, and k_G^C is the smallest possible constant for this inequality:

(4.1) ‖P‖_cbB ≤ k_G^C ‖P‖_B .
Then, in the world of positive square matrices, the Grothendieck inequality has the constant k_G^C. In particular this implies the well known inequality k_G^C ≤ K_G^C. To try to understand how K_G^C depends on k_G^C for non positive matrices, we will in Theorem 4.1 present a factorization result which improves some of the non self-adjoint aspects of Theorem 3.2 of [3]. Unfortunately we were not able to make direct use of this result in the way we hoped for, but some steps in its proof are actually used in the proof of Theorem 4.2, where we show that the general Grothendieck inequality does follow from the inequality for positive matrices.

Theorem 4.1. Let X be a complex m × n matrix and let the cbB-factorization of X be denoted X = Δ_m(η)BΔ_n(ξ). Let B = WP be the polar decomposition of B. Then the complex n × n matrix C and the complex m × n matrix D given as C := P^(1/2)Δ_n(ξ) and D := Δ_m(η)WP^(1/2) satisfy X = DC.

Proof. Suppose for simplicity that ‖X‖_cbB = 1. By Theorems 1.8 and 1.3 item (iii) in [3], choose a matrix Y = L*R with ‖Y‖_S = 1 and Tr_n(Y*X) = 1, where every column in L and R is a unit vector. We can see, as in equation (2.1) from [3], that ‖LΔ_m(η)WP^(1/2)‖_2 ≤ 1 and ‖RΔ_n(ξ)P^(1/2)‖_2 ≤ 1; then both inequalities are equalities, and we can define an n × n complex matrix T with ‖T‖_2 = 1 by the equations in that computation. Since ‖P‖_∞ = 1 we get from Theorem 1.3 item (i) of [3] the required norm estimates for C and D. The theorem then follows from Theorem 3.6 of [2].
We will now apply the theorem above and parts of its proof to get an upper bound for K_G^C.

Theorem 4.2. Grothendieck's complex constants satisfy K_G^C ≤ k_G^C / (2 − k_G^C).

Proof. As mentioned in front of Theorem 4.1, that theorem, when applied to positive matrices, implies that k_G^C ≤ K_G^C. Let X and Y be complex m × n matrices such that ‖X‖_cbB = 1, ‖Y‖_S = 1 and Tr_n(Y*X) = 1. We will also assume that no row and no column in X vanishes. Let the cbB-decomposition of X be denoted X = Δ_m(η)BΔ_n(ξ) and an elementary Schur decomposition of Y be given as Y = L*R, where every column in L and R is a unit vector. The non-vanishing of rows and columns in X implies that all η_i > 0 and all ξ_j > 0. We will now construct a positive matrix P in M_(m+n)(C), to which we will apply equation (4.1), and we will also define a positive matrix Q in M_(m+n)(C) with Schur multiplier norm 1 and other nice properties. Let γ be the positive unit vector in C^(m+n) given as γ = 2^(-1/2)(η_1, ..., η_m, ξ_1, ..., ξ_n); then we define P and Q as the block matrices

(4.5) P := 2Δ_(m+n)(γ) (I_m, B ; B*, I_n) Δ_(m+n)(γ) and Q := (L*L, L*R ; R*L, R*R).

The operator B has operator norm 1 since ‖X‖_cbB = 1. Then the matrix (I_m, B ; B*, I_n) is positive and of operator norm 2. This implies that P is positive and ‖P‖_cbB ≤ 4. The operator Q is clearly positive, and diag(Q) ≤ I_(m+n) since ‖L‖_c = 1 and ‖R‖_c = 1. Since ‖L‖_c = 1 we know by Schur's result [17] that ‖Q‖_S = 1. The last decomposition of Q also shows that ‖Q‖_S = 1, since the column norm of the matrix (L R) is 1. As in the proof of Theorem 4.1, the equality Tr_n(Y*X) = 1 implies that the analogies of the equations (4.3) and (4.4) hold here too. Since ‖B‖_∞ = 1 we get from (4.3) that Tr_m(Δ_m(η)L*LΔ_m(η)) = 1. Similarly, (4.4) implies that Tr_n(Δ_n(ξ)R*RΔ_n(ξ)) = 1, and combined we get that Tr_(m+n)(QP) = 4, so ‖P‖_cbB = 4. We will then find the value of ‖P‖_B. It is not hard to see that we get ‖P‖_B = 2 + 2‖X‖_B. We already mentioned that Theorem 4.1 of this article, or more explicitly Theorem 3.2 of [3], implies that for the positive matrix P we have

4 = ‖P‖_cbB ≤ k_G^C ‖P‖_B = k_G^C (2 + 2‖X‖_B).

This holds for any X with ‖X‖_cbB = 1, so for a general X we get

‖X‖_cbB ≤ (k_G^C / (2 − k_G^C)) ‖X‖_B ,

and the theorem follows.

A geometrical characterization of k_G^C and a geometrical proof of Grothendieck's inequality
The result (4.1) for positive P, that ‖P‖_cbB ≤ k_G^C ‖P‖_B, and the fact that k_G^C is the smallest possible constant which satisfies this inequality, must clearly be based on geometrical properties of some unit balls with respect to some norms, but it is not obvious what the relevant relations may be. We are missing a lot in our understanding of the geometrical aspects, but we have found a relation between some convex sets which reflects a geometrical property of the positive part of the unit ball of the Schur multipliers. Our result gives a version of Grothendieck's geometrical formulation of his inequality as presented in Theorem 1.1, where we obtain the constant k_G^C / (2 − k_G^C). This approach also leads to a geometrical characterization of the value of k_G^C. In this situation we look at positive Schur multipliers and try to write them as linear combinations of positive rank one multipliers in a fashion which is close to the optimal one from Theorem 1.1. If we look at a positive n × n matrix P with ‖P‖_S = 1, the natural desire would then be to write it as a positive linear combination of rank one positive matrices. This hope is a fake dream, as far as we can see, and the extension of Theorem 1.1 to a neater result for positive matrices is much more complicated. The complications can be dealt with, but at the cost of a new point of view on the geometrical problem. The following Theorem 5.4 shows that k_G^C may be seen as a constant which expresses a relation between certain compact convex sets. This point of view reproduces (4.6) as an easy corollary.

Definition 5.1.
It is worth keeping in mind that R_n is the closed convex hull of the rank one matrices in Q_n, so it is clearly a subset of Q_n. The geometrical statement in this section is the following Theorem 5.3, but we will need the little Lemma 5.2 first. In its proof, the norm ‖P^(1/2)‖_F is the operator norm of P^(1/2) as an operator from the C*-algebra A_n into the Hilbert space C^n. The extreme points in the unit ball of A_n are the unitaries, so there exists a unitary at which this norm is attained, and the first statement of the lemma follows. By Theorem 1.8 of [3] we know that there exists a self-adjoint Q_0 in M_n(C) such that ‖Q_0‖_S = 1 and Tr_n(PQ_0) = ‖P‖_cbB. Proposition 2.4 of [3] shows that Q_0 has a factorization Q_0 = GSG such that G is positive with diag(G^2) ≤ I_n and S is a self-adjoint partial isometry, which in this case means an orthogonal projection. We may then define a positive diagonal operator D by D := I_n − diag(G^2) and an element Q in Q_n by Q := G^2 + D, so the diagonal of Q is I_n and ‖Q‖_S = 1. Hence Tr_n(PQ) ≥ Tr_n(PQ_0) = ‖P‖_cbB, and since ‖Q‖_S = 1 the reverse inequality holds as well, so the lemma follows.
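The sets Q_n and R_n are concrete enough to sample from. The sketch below is our own illustration, reading R_n as the closed convex hull of the rank one matrices u u* with u a unitary in A_n, i.e. |u_j| = 1 for every j; it builds a random element of R_n and confirms that it is positive with diagonal I_n, so that it lies in Q_n.

```python
import numpy as np

rng = np.random.default_rng(4)
n, terms = 4, 6

# An element of R_n: a convex combination of rank one matrices u u*
# with u a unitary in A_n (unimodular entries).
weights = rng.random(terms)
weights /= weights.sum()
R = np.zeros((n, n), dtype=complex)
for w in weights:
    u = np.exp(1j * 2 * np.pi * rng.random(n))   # |u_j| = 1 for every j
    R += w * np.outer(u, u.conj())               # rank one, diagonal I_n

diag_ok = np.allclose(np.diag(R), 1.0)           # diag(R) = I_n
psd_ok = np.linalg.eigvalsh(R).min() >= -1e-9    # R is positive
print(diag_ok, psd_ok)                           # True True: R lies in Q_n
```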
Proposition 5.3. Let Q be a matrix in Q_n. Then there exist matrices R in R_n and P in M_n(C)^+ such that Q = k_G^C R − P.

Proof. We will use the duality between the norms ‖·‖_cbB and ‖·‖_S. It follows from the computations in [3] that this duality also holds if we restrict to the real subspace M_n(C)_sa consisting of the self-adjoint complex n × n matrices. The convex cone M_n(C)^+ consists of the positive matrices in M_n(C). We will work inside the real vector space consisting of the self-adjoint matrices, and here we define the polar of a set S contained in M_n(C)_sa by

S° := {T ∈ M_n(C)_sa : ∀S ∈ S : Tr_n(TS) ≤ 1}.

We define two convex subsets S and T of M_n(C)_sa; we remark that both sets contain the zero matrix, and they are both closed since both R_n and Q_n are compact. Hence both sets equal their bipolars. In order to compute their polars we need a little observation based on Theorems 1.3 and 1.8 of [3].
Based on standard techniques and the lemma above we can compute these polars, and then by the bipolar theorem the proposition follows.
We will use equation (5.3) to obtain two results. First, the equation (5.3) characterizes k_G^C in a geometrical way, and secondly it can give a geometrical proof of Theorem 1.1, although with a constant larger than K_G^C.
Theorem 5.4. The constant k_G^C is the smallest positive real β such that for any natural number n and any real α with α ≥ β we have

(5.4) Q_n ⊆ {αR − P : R ∈ R_n, P ∈ M_n(C)^+}.

Proof. We know from (5.3) that (5.4) holds for α = k_G^C, so we will first remark that if (5.4) holds for an α > 0, then it holds for all γ ≥ α. To see this we just make the following rearrangement for a Q in Q_n. By the assumption there exist an R in R_n and a P in M_n(C)^+ such that Q = αR − P = γR − (P + (γ − α)R), and since P + (γ − α)R is positive, the claim follows. We can then define α_n as the infimum over all the possible α's which, for a given n, satisfy (5.4), and because R_n is compact this α_n will also satisfy that equation. Finally we define β as the supremum over all the α_n's. By the equation (5.3) we get β ≤ k_G^C. On the other hand, let S be a positive n × n matrix; then by Lemma 5.2 there exists a Q in Q_n such that Tr_n(QS) = ‖S‖_cbB. To this Q we can find R in R_n and P in M_n(C)^+ such that Q = βR − P, and then

(5.5) ‖S‖_cbB = Tr_n(QS) ≤ β Tr_n(RS) ≤ β ‖S‖_B .

By the statements in front of the inequality (4.1), or at the beginning of this section, we get β ≥ k_G^C, and the theorem follows.

The following proposition is an application of Theorem 5.3 which will serve to get a geometrical proof of Grothendieck's inequality for Schur multipliers.

Proposition 5.5. Suppose a natural number n is given and let Q be a matrix in Q_n. Then there exist matrices R_+ and R_− in R_n such that

(5.7) Q = (1 / (2 − k_G^C)) R_+ − ((k_G^C − 1) / (2 − k_G^C)) R_− .

Proof. To a matrix Q in Q_n there exist by (5.3) a matrix R_1 in R_n and a positive matrix P_1 such that Q = k_G^C R_1 − P_1. From this we see that (k_G^C − 1)^{-1} P_1 belongs to Q_n, and we can use the equation (5.3) once more to obtain the existence of a matrix R_2 in R_n and a positive matrix P_2 such that (k_G^C − 1)^{-1} P_1 = k_G^C R_2 − P_2. We can then continue by induction, and since 0 < k_G^C − 1 < 1 we can obtain convergent sums which describe Q as

Q = k_G^C Σ_{j≥0} (−(k_G^C − 1))^j R_{j+1} ,

and then, by convexity and closedness of R_n, there exist matrices R_+ and R_− in R_n such that (5.7) holds, and the proposition follows.
It is possible to obtain the Schur multiplier variant of Theorem 4.2 as a corollary to this proposition, so we will present this proof too. First we define V as the closed convex hull of all rank 1 matrices in which all entries have modulus equal to 1. This set may be described as the closed convex hull of the matrices which are given as the product of a unitary m-column vector by a unitary n-row vector:

(5.8) V := conv{u^| v^− : u unitary in A_m, v unitary in A_n}.

It is worth remarking that although V and R_n seem to have similar definitions, they are very different as sets, the reason being that in the definition of V the variables u and v are independent. For instance, 0 is in V, while for any R in R_n we have diag(R) = I_n.
We can now state the application of Proposition 5.5.
Theorem 5.6. Let X be a complex m × n matrix such that ‖X‖_S = 1. Then X belongs to the set (k_G^C / (2 − k_G^C)) V.
Proof. By the proof of Proposition 2.6 of [3] we see that the condition ‖X‖_S = 1 implies that there exist a natural number k, a k × m matrix L and a k × n matrix R such that X = L*R and every column in both L and R is a unit vector. Then the matrix Q in M_(m+n)(C) which is defined in (4.5) is positive and has diagonal equal to I_(m+n), so we may apply Proposition 5.5. If we look at the (1,2) corner of the block matrix Q, we find that X = L*R sits there, and the corners of the decomposition (5.7) of Q show that X belongs to (k_G^C / (2 − k_G^C)) V, so the theorem follows.
The theorem raises the natural question whether the equation (5.7) is valid with another constant, different from 1/(2 − k_G^C), such that the corollary would give Theorem 1.1 with the right constant K_G^C. This is not possible, at least if one follows the obvious path. The following lines contain the analysis which leads to this conclusion.
Remark that if for some positive real α ≥ 1 we have

(5.9) Q_n ⊆ {αR_1 − (α − 1)R_2 : R_1, R_2 ∈ R_n},

then this relation holds with α replaced by any real β ≥ α. This follows because R_n is convex, and it is seen in the following way. Let Q in Q_n and R_1, R_2 in R_n be such that Q = αR_1 − (α − 1)R_2; then we define R_3 in R_n by

R_3 := (β − 1)^{-1} ((β − α)R_1 + (α − 1)R_2),

and we get Q = βR_1 − (β − 1)R_3. Since R_n is compact there exists a minimal α, say α_n, such that (5.9) is valid for α_n and all reals larger than α_n. The question is then which relation there is between α_s := sup{α_n : n ∈ N} and K_G^C or k_G^C. We have some partial answers. By Proposition 5.5 we get α_s ≤ 1/(2 − k_G^C) < 1.38. To obtain a lower bound for α_s we choose a P in M_n(C)^+ with ‖P‖_cbB = 1; then, by Lemma 5.2, there exists a Q in Q_n such that Tr_n(QP) = 1. By assumption Q = α_n R_1 − (α_n − 1)R_2 ≤ α_n R_1, and we get ‖P‖_cbB = 1 = Tr_n(PQ) ≤ α_n Tr_n(R_1 P) ≤ α_n ‖P‖_B ≤ α_s ‖P‖_B. Hence k_G^C ≤ α_s, and we have

(5.10) 1.27 < k_G^C ≤ α_s ≤ 1/(2 − k_G^C) < 1.38.

If the methods from above are applied we get the estimate K_G^C ≤ 2α_s − 1, so at its very best we would get K_G^C ≤ 2k_G^C − 1 < 1.55, which is far from Haagerup's upper bound from [10], which states that K_G^C is at most 1.41. On the other hand, the inequality above naturally raises the question whether α_s = k_G^C. We think the answer is no, because Theorem 5.4 gives a characterization of k_G^C which, together with the definition of α_s, seems to indicate that k_G^C < α_s.

Declaration of competing interest
No competing interests.

Lemma 5.2.

Let P be a positive n × n matrix. Then there exists an R in R_n and a Q in Q_n such that Tr_n(PR) = ‖P‖_B and Tr_n(PQ) = ‖P‖_cbB. Proof. By Theorem 3.2 of [3] we know ‖P^(1/2)‖_B = ‖P^(1/2)‖_F.