MATHEMATICAL PROPERTIES OF THE REGULAR *-REPRESENTATION OF MATRIX *-ALGEBRAS WITH APPLICATIONS TO SEMIDEFINITE PROGRAMMING

Abstract. In this paper we give a proof of the special structure of the Wedderburn decomposition of the regular *-representation of a given matrix *-algebra. This result was stated without proof in: de Klerk, E., Dobre, C. and Pasechnik, D.V.: Numerical block diagonalization of matrix *-algebras with application to semidefinite programming, Mathematical Programming-B, 129 (2011), 91-111; and is used in applications of semidefinite programming (SDP) to structured combinatorial optimization problems. In order to provide the proof of this special structure we derive several other mathematical properties of the regular *-representation.

Semidefinite programming problems can be written in the standard primal form
min_X { trace(A_0 X) : trace(A_k X) = b_k (k = 1, . . . , m), X ⪰ 0 },
where X ⪰ 0 means that X must be symmetric positive semidefinite.
The SDPs for which one can use the results in this paper have large data matrices A_0, . . . , A_m, and they are not tractable without exploiting the structure in this data. These instances motivated the study of the Wedderburn decomposition of the regular *-representation of a given matrix *-algebra.
Of particular interest is a structure called algebraic symmetry, where the SDP data matrices are contained in a low-dimensional matrix *-algebra. (Recall that a matrix *-algebra is a linear subspace of C^{n×n} that is closed under multiplication and under taking complex conjugate transposes.) Although this structure may seem exotic, it arises in a surprising number of applications, and first appeared in a paper by Schrijver [28] in 1979 on bounds for binary code sizes. (Another early work on algebraic symmetry in SDP is by Kojima et al. [22].) More recent applications are surveyed in [14,8,30] and include bounds on kissing numbers [1], bounds on crossing numbers in graphs [20,18], bounds on code sizes [29,7,23], truss topology design [13,3], quadratic assignment problems [21], the k-partitioning problem [16], the traveling salesman problem [17], etc.
Algebraic symmetry may be exploited since matrix *-algebras have a canonical block diagonal structure after a suitable unitary transform. This result is due to Wedderburn [31] and dates back to 1907. Block diagonal structure may in turn be exploited by interior point algorithms. For some SDP instances with algebraic symmetry, the required unitary transform is known beforehand, e.g. as in [29]. For others, like the instances in [20,18], it is not; in that case one may perform numerical pre-processing in order to obtain the required unitary transformation. Murota et al. [27] presented a practical randomized algorithm that may be used for pre-processing of SDP instances with algebraic symmetry; this work was later extended by Maehara and Murota [25].
A nice survey on invariant semidefinite programs (finite and infinite dimensional) is given in the chapter "Invariant semidefinite programs" of the Handbook on Semidefinite, Conic and Polynomial Optimization, written by Bachoc, Gijswijt, Schrijver and Vallentin [2]. They show how to reduce the matrix sizes both by the regular *-representation and by block diagonalization. Since the latter approach gives the finest decomposition of a matrix *-algebra, the authors do not consider combining the two techniques.
However, if we do not have an analytical expression for the block diagonalization, it is not always possible to apply numerical computations directly to the original matrix *-algebra, due to the size of the data matrices. In such situations it makes sense to first use an isomorphic representation (i.e. the regular *-representation) of the matrix *-algebra, which reduces the size of the data matrices to the dimension of the algebra (i.e. the cardinality of its basis), and then perform the canonical block decomposition. For example, in [19], the ϑ′-number of the so-called Erdős–Rényi graphs was studied. These graphs, denoted by ER(q), are determined by a single prime parameter q > 2. The number of vertices (which gives the size of the matrices in the corresponding SDP relaxation) is n = q^2 + q + 1, but the dimension of the algebra is only 2q + 11. Note that, for example, if q = 157, one has n = 24807, making it impossible to solve the problem numerically without exploiting the symmetry. Moreover, the Wedderburn decomposition of the algebra is not known in closed form [19]. The ϑ′-number of a graph was introduced in [26] as a strengthening of the Lovász ϑ-number [24] upper bound on the co-clique number of a graph. The ϑ′-number was also studied in detail for Hamming graphs in the seminal paper by Schrijver [28].
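As a quick sanity check on the size reduction quoted above, the following sketch evaluates both counts for q = 157 (the formulas are taken from the text; the variable names are ours):

```python
# Size comparison for the Erdos-Renyi graph SDP with prime parameter q = 157.
q = 157
n = q**2 + q + 1   # order of the matrices in the original SDP relaxation
d = 2 * q + 11     # dimension of the underlying matrix *-algebra
print(n, d)        # prints 24807 325
```

So the regular *-representation replaces matrices of order 24807 by matrices of order 325.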
Another structured combinatorial optimization problem where the regular *-representation has proven useful in reducing the size of the corresponding SDP relaxation is computing the crossing number of complete bipartite graphs. Recall that the crossing number cr(G) of a graph G is the minimum number of intersections of edges in a drawing of G in the plane.
The crossing number of the complete bipartite graph K_{r,s} is only known in a few special cases (like min{r, s} ≤ 6), and it is therefore interesting to obtain lower bounds on cr(K_{r,s}). (There is a well known upper bound on cr(K_{r,s}) via a drawing which is conjectured to be tight.) De Klerk et al. [20] showed that one may obtain a lower bound on cr(K_{r,s}) via the optimal value of a suitable SDP problem involving a certain (given) matrix Q of order n = (r−1)! and the all-ones matrix J of the same size. In [20] it was proven that one can restrict the optimization to the centralizer algebra of aut(Q), say A_SDP. For this SDP problem the algebra A_SDP is a coherent algebra, and an orthogonal basis B_1, . . . , B_d of zero-one matrices of A_SDP is available. For r = 9, for example, using the regular *-representation of A_SDP, the dimension of the SDP constraint was reduced from n = 40320 to d = 2438. Further, the Wedderburn decomposition of the regular *-representation of A_SDP yields linear matrix inequalities involving matrices of size at most 12. This significantly reduced the computational time for the underlying SDP problem, see [15].
Outline. The paper is structured as follows. In Section 2 we review some basic properties of matrix *-algebras. In particular, the canonical block decomposition is described. Section 3 introduces the regular *-representation of a given matrix *-algebra and includes an extension of a theorem due to de Klerk et al. [18]. Section 4 proves that the regular *-representation is invariant, up to an orthogonal transformation, under a change of basis of the matrix *-algebra. Finally, using this result, in Section 5 we prove a special block structure of the Wedderburn decomposition of the regular *-representation.

2. Basic properties of matrix *-algebras.
In what follows we give a review of decompositions of matrix * -algebras over C, with an emphasis on the constructive (algorithmic) aspects.
Definition 2.1. A nonempty set A ⊆ C^{n×n} is called a matrix C*-algebra if, for all X, Y ∈ A and all α ∈ C:
• αX ∈ A;
• X + Y ∈ A;
• X* ∈ A;
• XY ∈ A.
A matrix C*-subalgebra of A is said to be maximal if it is not contained in any proper C*-subalgebra of A. (Here a C*-subalgebra of A is called proper if it is neither the zero algebra nor A itself.) In applications one often encounters matrix C*-algebras with the following additional structure.
Definition 2.2. Assume that a given set of zero-one n × n matrices {A_1, . . . , A_d} has the following properties:
(1) Σ_{i∈I} A_i = I for some index set I ⊂ {1, . . . , d};
(2) Σ_{i=1}^d A_i = J, the all-ones matrix;
(3) A_i^T ∈ {A_1, . . . , A_d} for i = 1, . . . , d;
(4) A_iA_j ∈ span{A_1, . . . , A_d} for all i, j = 1, . . . , d.
Then {A_1, . . . , A_d} is called a coherent configuration.
Thus, a coherent configuration is a basis of zero-one matrices of a (possibly non-commutative) matrix *-algebra. Such an algebra is called a coherent algebra. Moreover, when the elements of the set {A_1, . . . , A_d} commute and I ∈ {A_1, . . . , A_d}, the basis of zero-one matrices is called an association scheme. Proposition 1 (see e.g., Section 1.5 in [9]). The elements of a commutative matrix C*-algebra have a common set of orthonormal eigenvectors. These may be viewed as the columns of a unitary matrix Q, i.e., Q*Q = I.
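The four properties can be checked mechanically. The sketch below (our own illustration, not from the paper) verifies them for the cyclic scheme on 5 points, with classes given by shifts by ±1 and ±2; this is also an association scheme, since the matrices commute and the identity is one of them:

```python
import numpy as np

n = 5
A1 = np.eye(n)
A2 = np.zeros((n, n)); A3 = np.zeros((n, n))
for v in range(n):
    for w in range(n):
        diff = (v - w) % n
        if diff in (1, 4):
            A2[v, w] = 1.0      # cyclic shifts by +-1
        elif diff in (2, 3):
            A3[v, w] = 1.0      # cyclic shifts by +-2
basis = [A1, A2, A3]

# (1) a subset sums to I (here A1 alone), (2) everything sums to J
assert np.array_equal(sum(basis), np.ones((n, n)))
# (3) closed under transposition, (4) products stay in the linear span;
# the span coefficients are recovered via the orthogonal 0/1 basis
for Ai in basis:
    assert any(np.array_equal(Ai.T, Ak) for Ak in basis)
    for Aj in basis:
        P = Ai @ Aj
        coeffs = [np.trace(Ak.T @ P) / np.trace(Ak.T @ Ak) for Ak in basis]
        assert np.allclose(P, sum(c * Ak for c, Ak in zip(coeffs, basis)))
# the basis commutes and contains I: an association scheme
assert all(np.allclose(Ai @ Aj, Aj @ Ai) for Ai in basis for Aj in basis)
```

The same three matrices reappear in Example 3.1, where they span the centralizer algebra of aut(C_5).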
More information on coherent configurations and related structures may be found in [11] and [4].
As a consequence of Proposition 1, any element of a commutative matrix C*-algebra can be diagonalized using the same unitary matrix Q. If the algebra is not commutative, one can still block diagonalize its elements, as we will see further in this section.
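In the commutative case, a common diagonalizing Q can be obtained in practice from a generic linear combination of the algebra elements. The following sketch (an illustration under our own choice of matrices; the simple recipe is not numerically robust when eigenvalues nearly collide) demonstrates this for two commuting symmetric matrices:

```python
import numpy as np

# Two commuting symmetric matrices: the 5-cycle adjacency matrix and its square.
n = 5
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[v, (v - 1) % n] = 1.0
B = A @ A
assert np.allclose(A @ B, B @ A)

# Eigenvectors of a generic combination diagonalize both matrices at once.
_, Q = np.linalg.eigh(0.618 * A + 1.414 * B)
for M in (A, B):
    T = Q.T @ M @ Q
    assert np.allclose(T, np.diag(np.diag(T)), atol=1e-8)
```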
For matrices A_1, A_2, the direct sum is defined as
A_1 ⊕ A_2 := [ A_1  0 ; 0  A_2 ],
and we will denote the iterated direct sum of A_1, . . . , A_n by ⊕_{i=1}^n A_i. Let A and B be two matrix C*-algebras. Then the direct sum of A and B is
A ⊕ B := { A ⊕ B : A ∈ A, B ∈ B }.
We say that A is a zero algebra if A consists only of the zero matrix.
Definition 2.3. A matrix C*-algebra A is called simple if its only ideals are {0} and A itself. (An ideal of A is a *-subalgebra that is closed under both left and right multiplication by elements of A.)
Definition 2.4. A matrix C*-algebra is called basic if it is of the form
t C^{s×s} := { I_t ⊗ M : M ∈ C^{s×s} }
for some integers s, t.
Definition 2.5. Two matrix C*-algebras A, B ⊂ C^{n×n} are called equivalent if there exists a unitary matrix Q ∈ C^{n×n} such that
Q*AQ := { Q*AQ : A ∈ A } = B.
Proposition 2 (see e.g., Section 2.2 in [6]). Every matrix C*-algebra A containing the identity is equivalent to a direct sum of simple matrix C*-algebras.
Proposition 3 (see e.g., Section 2.2 in [6]). Every simple matrix C * -algebra A containing the identity is equivalent to a basic matrix C * -algebra.
Propositions 2 and 3 imply the so-called fundamental structure theorem for matrix C*-algebras, which is as follows: Theorem 2.6 (see [31]). If A ⊆ C^{n×n} is a matrix *-algebra that contains the identity, then there exist a unitary matrix Q and positive integers p and n_i, t_i (i = 1, . . . , p) such that
Q*AQ = ⊕_{i=1}^p t_i C^{n_i×n_i}.
If the identity does not belong to A, then in view of Definition 2.5, each matrix *-algebra over C is equivalent to a direct sum of basic algebras and possibly a zero algebra. A detailed proof of this result is given e.g., in [6] (Theorem 1 there).
3. Regular *-representation of a matrix *-algebra. Definition 3.1 (see e.g., Section 1 in [5]). A representation of an algebra A is a vector space V together with a homomorphism of algebras ϕ : A → End(V), where End(V) denotes the set of endomorphisms of V. A *-representation additionally satisfies ϕ(A*) = ϕ(A)* for all A ∈ A. Note that in the definition above A* is the involution of A, and ϕ(A)* is the adjoint of the linear operator ϕ(A). We will use the trace inner product ⟨X, Y⟩ := trace(X*Y). Assume now that A has an orthogonal basis of real matrices B_1, . . . , B_d ∈ R^{n×n}, with B_i* ∈ {B_1, . . . , B_d} for any i = 1, . . . , d. This situation is not fully general, but it is typical of the semidefinite programming applications discussed in the introduction.
We normalize this basis with respect to the Frobenius norm:
D_i := B_i / √(trace(B_i^T B_i)), i = 1, . . . , d,
and define multiplication parameters γ^k_{ij} via
D_i D_j = Σ_k γ^k_{ij} D_k, (3)
and subsequently define the d × d matrices L_k (k = 1, . . . , d) via
(L_k)_{ij} := γ^i_{kj}, i, j, k = 1, . . . , d. (4)
For D_k ∈ A, consider the linear operator ϕ_{D_k} : A → A of left multiplication, ϕ_{D_k}(X) := D_k X.
Lemma 3.3. L_k is the matrix of the operator ϕ_{D_k} with respect to the basis D_1, . . . , D_d.
Proof. Since D_k ∈ A, for any k = 1, . . . , d we have
ϕ_{D_k}(D_j) = D_k D_j = Σ_i γ^i_{kj} D_i,
which completes the proof. Therefore, we will work with the matrix representation of the linear operator ϕ_{D_k}. The matrices L_k form the basis of a matrix *-algebra, say A_reg. We will abuse terminology slightly by calling A_reg the regular *-representation of A (with respect to the basis {D_1, . . . , D_d}).
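The construction of the matrices L_k from an orthonormal basis can be sketched in a few lines; the function name below and the toy algebra are our own illustration, assuming a real orthonormal basis as in the text:

```python
import numpy as np

def regular_star_representation(basis):
    """Return the matrices L_1, ..., L_d of the regular *-representation for
    a real orthonormal (Frobenius) basis D_1, ..., D_d, using the relations
    gamma^i_{kj} = <D_i, D_k D_j> and (L_k)_{ij} = gamma^i_{kj}."""
    d = len(basis)
    Ls = []
    for k in range(d):
        Lk = np.zeros((d, d))
        for j in range(d):
            P = basis[k] @ basis[j]                 # D_k D_j stays in the algebra
            for i in range(d):
                Lk[i, j] = np.trace(basis[i].T @ P)  # gamma^i_{kj}
        Ls.append(Lk)
    return Ls

# toy usage: the algebra of 2x2 diagonal matrices with its standard basis
E1, E2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
L1, L2 = regular_star_representation([E1, E2])
```

For this diagonal toy algebra, left multiplication by E_k acts on the basis exactly as E_k itself, so L_1 and L_2 reproduce the basis matrices.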
The following result was proven by de Klerk, Pasechnik and Schrijver [18] in the case where A is the centralizer algebra of a group. However, their arguments go through for any matrix *-algebra, and we present the extended proof here for the completeness of this section.
Theorem 3.4. The linear map Φ : A → A_reg defined by Φ(Σ_k x_k D_k) := Σ_k x_k L_k is a *-isomorphism between A and A_reg.
Proof. For any Y ∈ A we define as before the linear operator ϕ_Y : A → A by ϕ_Y(X) := YX. Using Lemma 3.3 we have that L_k := Φ(D_k) is the matrix corresponding to the linear operator ϕ_{D_k} in the basis D_1, . . . , D_d. Thus, for any Y = Σ_k y_k D_k ∈ A, Φ(Y) is the matrix corresponding to the linear operator
ϕ_Y = Σ_k y_k ϕ_{D_k} (5)
in the basis D_1, . . . , D_d.
Using (5) we have, for any Y, Z ∈ A,
ϕ_{YZ}(X) = (YZ)X = Y(ZX) = (ϕ_Y ∘ ϕ_Z)(X) for all X ∈ A.
Therefore, for any Y, Z ∈ A we have Φ(YZ) = Φ(Y)Φ(Z); thus Φ is an algebra homomorphism. Moreover, Φ(Y) = 0 implies that YX = 0 for all X ∈ A, and in particular YY* = 0, which implies Y = 0. Therefore Φ is injective, and since it is surjective by construction, it is a bijection.
We still need to show that Φ is a *-isomorphism, i.e., that it preserves the involution. To do so, we need to show that Φ(Y*) = Φ(Y)*.
On the one hand, by definition of ϕ_Y we have ϕ_Y(D_j) = Y D_j; on the other hand, since Φ(Y) is the matrix of the operator ϕ_Y in the basis D_1, . . . , D_d, we obtain
Y D_j = Σ_i Φ(Y)_{ij} D_i.
Using the orthonormality of the basis D_1, . . . , D_d, we take the inner product with the matrix D_i and use linearity to get
Φ(Y)_{ij} = trace(D_i* Y D_j).
In the same way, starting from Y* D_i = Σ_j Φ(Y*)_{ji} D_j and taking the inner product with the matrix D_j,
Φ(Y*)_{ji} = trace(D_j* Y* D_i).
Notice that if A ∈ C^{n×n} then trace(A*) = \overline{trace(A)}. Hence
Φ(Y*)_{ji} = trace((D_i* Y D_j)*) = \overline{trace(D_i* Y D_j)} = \overline{Φ(Y)_{ij}},
i.e., Φ(Y*) = Φ(Y)*, and therefore the preservation of the involution is proved.
Since Φ is a homomorphism, A and Φ(A) have the same eigenvalues (up to multiplicities) for all A ∈ A. As a consequence, we have the following theorem.
Theorem 3.5. Let L_k be as defined in (4), and x ∈ R^d. We have
Σ_{k=1}^d x_k D_k ⪰ 0 if and only if Σ_{k=1}^d x_k L_k ⪰ 0.
Example 3.1. Consider the 5-cycle (pentagon), denoted C_5. The automorphism group of C_5 is the so-called dihedral group on 5 elements and has order |aut(C_5)| = 10. The centralizer algebra of this group has as basis the identity B_1 = I, the adjacency matrix B_2 of C_5, and the adjacency matrix B_3 of its complement. The purpose of this example is to illustrate the regular *-representation of a matrix *-algebra; for details about centralizer algebras the reader is referred to Section 4 in [21]. Further, we normalize the basis B_1, B_2, B_3 to get
D_1 = B_1/√5, D_2 = B_2/√10, D_3 = B_3/√10.
Then, Table 1 gives the coefficients γ^k_{ij}, i, j, k = 1, . . . , 3. Table 1. Multiplication table of the normalized matrices.
Further, using (3) and (4), we can easily compute by hand the matrices L_1, L_2, L_3 that form the basis of the regular *-representation:
L_1 = (1/√5) I_3,
L_2 = [ 0  1/√5  0 ; 1/√5  0  1/√10 ; 0  1/√10  1/√10 ],
L_3 = [ 0  0  1/√5 ; 0  1/√10  1/√10 ; 1/√5  1/√10  0 ].
Notice that in this toy example we have reduced the size of the basis matrices from n = 5 to d = 3.
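The hand computation in Example 3.1 is easy to reproduce numerically. The following sketch (our own companion code, not from the paper) builds the centralizer-algebra basis for C_5, normalizes it, forms the 3 × 3 matrices L_k, and checks that Φ is multiplicative on random algebra elements:

```python
import numpy as np

n = 5
B1 = np.eye(n)
B2 = np.zeros((n, n)); B3 = np.zeros((n, n))
for v in range(n):
    for w in range(n):
        diff = (v - w) % n
        if diff in (1, 4):      # adjacent on the 5-cycle
            B2[v, w] = 1.0
        elif diff in (2, 3):    # adjacent in the complement
            B3[v, w] = 1.0

# Frobenius normalization and the 3x3 matrices L_k, (L_k)_{ij} = <D_i, D_k D_j>
D = [B / np.linalg.norm(B) for B in (B1, B2, B3)]
d = len(D)
L = [np.array([[np.trace(D[i].T @ (D[k] @ D[j])) for j in range(d)]
               for i in range(d)]) for k in range(d)]

# Phi is multiplicative on random elements Y, Z of the algebra
rng = np.random.default_rng(1)
y, z = rng.normal(size=d), rng.normal(size=d)
Y = sum(c * Dk for c, Dk in zip(y, D))
Z = sum(c * Dk for c, Dk in zip(z, D))
PhiY = sum(c * Lk for c, Lk in zip(y, L))
PhiZ = sum(c * Lk for c, Lk in zip(z, L))
yz = [np.trace(Dk.T @ (Y @ Z)) for Dk in D]   # coefficients of YZ
assert np.allclose(sum(c * Lk for c, Lk in zip(yz, L)), PhiY @ PhiZ)
```

In particular, L_1 comes out as (1/√5) I_3, since left multiplication by D_1 = I/√5 simply rescales every basis element.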

4. A change of basis. By Wedderburn's theorem, any matrix C*-algebra A that contains the identity takes the form
Q*AQ = ⊕_{i=1}^t t_i C^{n_i×n_i} (6)
for some integers t, t_i, and n_i (i = 1, . . . , t), and some unitary Q.
Our further goal is to show that the Wedderburn decomposition of A_reg has a special structure that does not depend on the values t_i (i = 1, . . . , t). To this end, the lemmas in this section show how the regular *-representation behaves when the orthonormal basis of the matrix *-algebra A is changed.
Lemma 4.1. The regular *-representation of A is invariant under a unitary transformation of A, i.e., A and Q*AQ have the same regular *-representation. Proof. Denote by A_Q := Q*AQ the algebra after block diagonalization.
We have that {Q * D 1 Q, ..., Q * D d Q} is a basis for A Q . We will prove that applying the regular *-representation to both A and A Q yields the same matrices denoted earlier in this section by L 1 , ..., L d .
If we denote D̃_i := Q*D_iQ, then from (3), by multiplying with Q* and Q on the left and right respectively, we obtain:
D̃_i D̃_j = Q*D_iD_jQ = Σ_k γ^k_{ij} Q*D_kQ = Σ_k γ^k_{ij} D̃_k.
Further, since Q is unitary, we have
trace(D̃_i* D̃_j) = trace(Q*D_i*QQ*D_jQ) = trace(D_i*D_j),
so the basis D̃_1, . . . , D̃_d is again orthonormal, and using the earlier notation the multiplication parameters of this basis are again the γ^k_{ij}. This proves that we have the same values γ^k_{ij}, so we obtain the same regular *-representation for both A and A_Q.
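The invariance just proved is easy to confirm numerically. The sketch below (our own illustration, using a small commutative algebra of polynomials in a symmetric matrix with three distinct eigenvalues) checks that conjugating the whole basis by a random orthogonal Q leaves every γ^k_{ij} unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# a 3-dimensional commutative *-algebra: polynomials in a symmetric S
# whose minimal polynomial has degree 3 (three distinct eigenvalues)
W = rng.normal(size=(n, n)); _, U = np.linalg.eigh(W + W.T)
S = U @ np.diag([1.0, 2.0, 3.0, 3.0]) @ U.T

# Gram-Schmidt in the Frobenius inner product -> orthonormal basis D
D = []
for B in (np.eye(n), S, S @ S):
    for Ek in D:
        B = B - np.trace(Ek.T @ B) * Ek
    D.append(B / np.linalg.norm(B))

def gammas(basis):
    """Multiplication parameters gamma^k_{ij} = <D_k, D_i D_j>."""
    return np.array([[[np.trace(Bk.T @ (Bi @ Bj)) for Bj in basis]
                      for Bi in basis] for Bk in basis])

# conjugating the basis by a fixed orthogonal Q leaves every gamma^k_{ij},
# hence the whole regular *-representation, unchanged
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
DQ = [Q.T @ Dk @ Q for Dk in D]
assert np.allclose(gammas(D), gammas(DQ))
```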
This implies that, when studying A_reg, we may assume without loss of generality that A takes the form
A = ⊕_{i=1}^t t_i C^{n_i×n_i}.
Lemma 4.2. The regular *-representation of A is independent of the choice of orthonormal basis, up to an orthogonal transformation: if {D_1, . . . , D_d} and {D'_1, . . . , D'_d} are two orthonormal bases of A, with corresponding regular *-representations given by Φ and Φ', then Φ'(Y) = Q^T Φ(Y) Q for all Y ∈ A, for some orthonormal matrix Q.
Proof. Define as before the linear mappings Φ : A → A_reg such that Φ(D_k) = L_k (k = 1, . . . , d), and Φ' : A → A'_reg such that Φ'(D'_k) = L'_k (k = 1, . . . , d). Since Φ(Y) and Φ'(Y) are the matrices of the same operator ϕ_Y in the two bases, we have
Φ'(Y) = Q^T Φ(Y) Q for all Y ∈ A,
where Q is the (orthonormal) change-of-basis matrix, which does not depend on Y. This concludes the proof.

5. Wedderburn decomposition of the regular *-representation. In this section we will prove the main result of this paper, namely: when constructing the Wedderburn decomposition of the regular *-representation, one obtains in (6) the number of identical blocks (i.e. t_i) equal to the size of the identical blocks (i.e. n_i). To this end we will prove three lemmas, and for the first lemma we need a basic property of the Kronecker product of two matrices. We do not go into details; for further information on this topic the reader is referred to [10].
For matrices A and B, A ⊗ B denotes the block matrix with block ij given by A_{ij}B (the Kronecker product). One has
(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
whenever the products AC and BD are defined. Lemma 5.1. Let t and n be given integers. The regular *-representation of t C^{n×n} is equivalent to n C^{n×n}, for the standard basis.
Proof. The standard basis of t C^{n×n} clearly has n^2 elements, since the t diagonal blocks are repeated. Let
D^{(i_1i_2)} := (1/√t) I_t ⊗ e_{i_1}e_{i_2}^T, i_1, i_2 = 1, . . . , n,
denote the normalized basis matrices. The regular *-representation will consist of n^2-dimensional matrices, say L_{i_1i_2} (i_1, i_2 = 1, . . . , n). We will show that for all i_1, i_2 we have L_{i_1i_2} = (1/√t) P^T (I_n ⊗ (e_{i_2}e_{i_1}^T)) P for some permutation matrix P, and the lemma will therefore be proved.
To this end, for i_1, i_2 ∈ {1, . . . , n} let us define E^{(i_1i_2)} := e_{i_1}e_{i_2}^T. Then, using (3), we have for i_1, i_2, j_1, j_2 = 1, . . . , n
D^{(i_1i_2)} D^{(j_1j_2)} = Σ_{k_1,k_2} γ^{(k_1k_2)}_{(i_1i_2),(j_1j_2)} D^{(k_1k_2)}
for some scalars γ^{(k_1k_2)}_{(i_1i_2),(j_1j_2)}. This is equivalent to
(1/t) I_t ⊗ (E^{(i_1i_2)} E^{(j_1j_2)}) = Σ_{k_1,k_2} γ^{(k_1k_2)}_{(i_1i_2),(j_1j_2)} (1/√t) I_t ⊗ E^{(k_1k_2)},
which, since E^{(i_1i_2)}E^{(j_1j_2)} = δ_{i_2j_1} E^{(i_1j_2)}, yields:
γ^{(k_1k_2)}_{(i_1i_2),(j_1j_2)} = (1/√t) δ_{i_2j_1} δ_{k_1i_1} δ_{k_2j_2}. (7)
Since, by (4), (L_{i_1i_2})_{(k_1k_2),(j_1j_2)} = γ^{(k_1k_2)}_{(i_1i_2),(j_1j_2)}, following (7) we obtain, for a suitable permutation matrix P, L_{i_1i_2} = (1/√t) P^T (I_n ⊗ (e_{i_2}e_{i_1}^T)) P, and this concludes the proof. Lemma 5.2. Let t and n be given integers. The regular *-representation of t C^{n×n} is equivalent to n C^{n×n}, for any choice of orthonormal basis.
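The computation in the proof of Lemma 5.1 can be checked numerically. The following sketch (our own illustration, for t = 2 and n = 3) builds the normalized standard basis of t C^{n×n}, forms the matrices L_{i_1i_2}, and verifies that each is a permuted copy of (1/√t) I_n ⊗ (a matrix unit), i.e. has exactly n nonzero entries, all equal to 1/√t:

```python
import numpy as np

t, n = 2, 3

def E(i, j):
    """Matrix unit e_i e_j^T in C^{n x n}."""
    M = np.zeros((n, n)); M[i, j] = 1.0
    return M

# normalized standard basis of t C^{n x n}
D = {(i1, i2): np.kron(np.eye(t), E(i1, i2)) / np.sqrt(t)
     for i1 in range(n) for i2 in range(n)}
keys = sorted(D)

# regular *-representation: (L_{i1 i2})_{ab} = <D_a, D_{i1 i2} D_b>
L = {}
for key in keys:
    Lk = np.zeros((n * n, n * n))
    for a, ka in enumerate(keys):
        for b, kb in enumerate(keys):
            Lk[a, b] = np.trace(D[ka].T @ (D[key] @ D[kb]))
    L[key] = Lk

# each L_{i1 i2} has exactly n nonzero entries, all equal to 1/sqrt(t)
for Lk in L.values():
    nz = Lk[np.abs(Lk) > 1e-12]
    assert len(nz) == n and np.allclose(nz, 1 / np.sqrt(t))
```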
Lemma 5.3. The regular *-representation of a direct sum A ⊕ B of matrix *-algebras is the direct sum of the regular *-representations of A and B (with respect to the concatenated orthonormal bases).
Using the last two lemmas, we can readily prove the following theorem.
Theorem 5.4. The regular *-representation of A := ⊕_{i=1}^t t_i C^{n_i×n_i} is equivalent to ⊕_{i=1}^t n_i C^{n_i×n_i}. The Wedderburn decomposition of A_reg therefore takes the form
Q* A_reg Q = ⊕_{i=1}^t n_i C^{n_i×n_i} (8)
for some suitable unitary matrix Q.
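The eigenvalue picture behind Theorem 5.4 can be observed directly. In the following sketch (our own illustration, for a single summand A = t C^{n×n}), a generic symmetric element Y = I_t ⊗ M is mapped to Φ(Y), and each eigenvalue of M appears in Φ(Y) with multiplicity n, independently of t — exactly as if A_reg were n C^{n×n}:

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 4, 3

# a generic symmetric element Y = I_t kron M of A = t C^{n x n}
M = rng.normal(size=(n, n)); M = M + M.T
Y = np.kron(np.eye(t), M)

# orthonormal standard basis of A and the matrix Phi(Y) of left multiplication
keys = [(i, j) for i in range(n) for j in range(n)]
def unit(i, j):
    B = np.zeros((n, n)); B[i, j] = 1.0
    return np.kron(np.eye(t), B) / np.sqrt(t)
basis = {k: unit(*k) for k in keys}
PhiY = np.array([[np.trace(basis[ka].T @ (Y @ basis[kb])) for kb in keys]
                 for ka in keys])

# Phi(Y) carries each eigenvalue of M with multiplicity n, independent of t
eig_M = np.sort(np.linalg.eigvalsh(M))
eig_Phi = np.sort(np.linalg.eigvalsh(PhiY))
assert np.allclose(eig_Phi, np.repeat(eig_M, n))
```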
To conclude, comparing (6) and (8), we may informally say that the t_i and n_i values are equal for all i in the Wedderburn decomposition of a regular *-representation.