Matrix regularization of embedded 4-manifolds

We consider products of two 2-manifolds, such as S^2 x S^2, embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by the tensor product SU(N) ⊗ SU(N), i.e. functions on the manifold are approximated by Kronecker products of two SU(N) matrices. A regularization of the 4-sphere is also performed, by constructing N^2 x N^2 matrix representations of the corresponding 4-algebra (and, as a byproduct, of a 3-algebra, which makes the regularization of S^3 possible as well).


Motivation
Matrix regularization of surfaces is a remarkable statement which says that there exists a map from functions on a (closed, orientable) surface to SU(N) matrices such that the structure constants of the area-preserving diffeomorphisms (APD) of that surface can be approximated by the structure constants of SU(N). The statement was put forward in the early 1980s by Goldstone and Hoppe [1] in the case of a sphere. Later on it was generalized to the case of a torus [2,3,4], higher-genus Riemann surfaces [5,6] and arbitrary Kähler manifolds [7,8]. A very important property of matrix regularization is that among closed, connected and orientable surfaces the underlying matrix group is SU(N) regardless of the topology. The topological information is instead encoded in the way one performs the N → ∞ limit, i.e. the global information about the surface resides in the limit. At finite but large N one can extract topological information about the surface by analyzing the spectral properties of the matrices, as observed in [9].
Originally, matrix regularization was invented in order to consistently quantize a membrane [1]. However it is clear that the framework itself a priori has nothing to do with quantization: it is a mathematical framework in which one can consider functions on surfaces as a limiting case of certain N × N matrices. As such it can be applied both to classical physics, by considering the regularized classical equations of motion [10,11,12], and to quantum physics, by regularizing the quantum theory of fields on the surface, as a result obtaining a tractable quantum-mechanical system [1,13]. In the last few years it has been discovered that many aspects of the differential geometry of (embedded) Riemannian 2-manifolds can be formulated within this framework [14,15,16].
A matrix description of a quantum membrane is clearly interesting from the particle-physics point of view. If elementary particles are extended objects, then it is natural to consider membranes and their excitations as good candidates - an idea put forward by Dirac in 1962 [17,18,19], where he introduced what is now called the Nambu-Goto type action for membranes and considered the possibility that leptons may be understood as excitations of the membrane ground state (the electron).
On the other hand, it is quite natural to search for generalizations of matrix-like regularization to higher-dimensional manifolds. This problem, although very well known [20], is still unsolved. Moreover, from the point of view of quantum gravity one is more interested in a possible regularized description of space-time itself, which favors 4-manifolds among others. If, for simplicity, one assumes that the 4-manifold under consideration is compact and has Euclidean signature, then one arrives at a question as simple as: what is the regularized description of S^2 × S^2? In this paper we shall prove that the group structure underlying this manifold is the tensor product SU(N) ⊗ SU(N), i.e. a group of N^2 × N^2 matrices obtained as Kronecker products of two SU(N) matrices. The result generalizes to an arbitrary product N × M of two 2-manifolds N and M whose regularizations are given by SU(N) matrices.
A way to regularize the 4-sphere is also presented, by constructing N^2 × N^2 matrix representations of the coordinates of S^4 embedded in R^5. We then show that the Nambu 4-bracket evaluated on the coordinates can be replaced by a 4-commutator of those matrices. This construction can then be used to prove analogous statements for S^3.

Matrix regularization of surfaces
Let us start by reviewing matrix regularization for surfaces. There are several excellent reviews of that subject, e.g. [21,22]; our objective here is merely to fix the notation and conventions.
For the case of a single sphere it is convenient to expand functions on S^2 in terms of spherical harmonics Y_{lm}(ϕ_1, ϕ_2),

F = Σ_{l,m} a_{lm} Y_{lm}(ϕ_1, ϕ_2),

where the a_{lm} are real. The corresponding structure constants c_{ABC} of the APD of S^2 are given by

{Y_A, Y_B} = c_{ABC} Y_C,

where we used the double-index notation A = (l, m) and where {·, ·} is the Poisson bracket

{f, g} := (1/ρ) (∂_{ϕ_1} f ∂_{ϕ_2} g − ∂_{ϕ_2} f ∂_{ϕ_1} g).   (1)

The factor ρ is conventional at this stage; however, it turns out to be the unique one (up to a constant) for which the regularization of the membrane Hamiltonian can be performed. By analyzing the membrane equations of motion one finds that, up to a constant, ρ = √(det g_rs), where g_rs is the metric of the embedded surface.
Another reason why that factor is preferred is the identity for the coordinates of the unit sphere,

{x_i, x_j} = ε_{ijk} x_k,

i.e. under the Poisson bracket defined as in (1) the coordinates of the unit sphere satisfy relations similar to those for the spin matrices.
This observation turns out to be crucial for matrix regularization of S 2 . Note also that the Poisson bracket satisfies all the axioms of a Lie bracket (i.e. the antisymmetry and the Jacobi identity) hence it provides the Lie algebraic structure for functions on S 2 .
Matrix regularization maps spherical harmonics Y_A to matrices T_A in such a way that, when the Poisson bracket {·, ·} is replaced by the commutator −i[·, ·], the structure constants f_{ABC} defined by −i[T_A, T_B] = f_{ABC} T_C converge in the large-N limit to c_{ABC}. Let us give more details of that construction for S^2.
The first step is to use the fact that spherical harmonics can be expressed in terms of homogeneous polynomials in the coordinates,

Y_{lm}(ϕ_1, ϕ_2) = c^m_{a_1...a_l} x_{a_1} · · · x_{a_l},

where in the last equality we introduced the coefficients c^m_{a_1...a_l}, which are totally symmetric. The second step is to replace the coordinates by spin matrices,

T_{lm} := γ_{lN} c^m_{a_1...a_l} J_{a_1} · · · J_{a_l},   (2)

with the constant γ_{lN} to be determined (see below). The choice of spin matrices J_i is motivated by the algebraic identity Σ_i J_i^2 = 1, which mimics the corresponding identity Σ_i x_i^2 = 1 for the coordinates (however one can use other conventions, e.g. J_i = S_i as in the original work by Hoppe [1]). At this stage there is no reason yet to think that this choice will give rise to structure constants which in the large-N limit converge to the structure constants of the APD of S^2. Therefore at this stage it would be premature to refer to the above construction as the construction of a matrix-regularized S^2.
The key observations are that the Poisson bracket of the x_i's has the same structure as the commutator of the spin matrices (which was mentioned above), and that both the Poisson bracket and the commutator satisfy the Leibniz rule,

{F, GH} = {F, G}H + G{F, H},   [A, BC] = [A, B]C + B[A, C].   (4)

As a consequence, the algebraic manipulations needed to evaluate {Y_A, Y_B} and −i[T_A, T_B] are the same modulo factors of 1/N, and the structure constants f_{ABC} converge to c_{ABC}. What remains is to calculate the constant γ_{lN} in (2). It must be fixed so that all the factors of 1/N cancel when calculating an arbitrary commutation relation [T_A, T_B]. The correct choice turns out to be γ_{lN} ∝ N for large N, with the proportionality factor depending on the conventions (see [23] for a detailed discussion).
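The exact finite-N analogue of the sphere relation {x_i, x_j} = ε_{ijk} x_k can be checked numerically. The sketch below is our illustration (not part of the original derivation): the helper `spin_matrices` is a name we introduce, and the J_i are normalized so that Σ_i J_i^2 = 1.

```python
import numpy as np

def spin_matrices(N):
    """Standard N x N spin matrices S_i for spin s = (N-1)/2, via ladder operators."""
    s = (N - 1) / 2
    m = s - np.arange(N)                      # magnetic quantum numbers s, s-1, ..., -s
    # <s, m+1| S_+ |s, m> = sqrt(s(s+1) - m(m+1)) on the first superdiagonal
    sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    return [(sp + sp.conj().T) / 2, (sp - sp.conj().T) / 2j, np.diag(m)]

N = 7
s = (N - 1) / 2
# J_i := S_i / sqrt(s(s+1)) mimics the unit-sphere coordinates: sum_i J_i^2 = 1 ...
J = [Si / np.sqrt(s * (s + 1)) for Si in spin_matrices(N)]
assert np.allclose(sum(Ji @ Ji for Ji in J), np.eye(N))
# ... and -i sqrt(s(s+1)) [J_i, J_j] = eps_ijk J_k, the analogue of {x_i, x_j} = eps_ijk x_k
for (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    lhs = -1j * np.sqrt(s * (s + 1)) * (J[i] @ J[j] - J[j] @ J[i])
    assert np.allclose(lhs, J[k])
print("sphere relations hold exactly at N =", N)
```

The rescaling factor √(s(s+1)) grows like N/2, consistent with γ_{lN} ∝ N above.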

General considerations
Generalizing the construction outlined in the previous section to higher-dimensional manifolds is a challenging task. Let us point out some of the most important difficulties related to such a construction.

The N^d scaling
First, let us note that the number N^2 − 1 of real parameters of an SU(N) matrix is closely related to the topological dimension of the surface, i.e. for a d-dimensional manifold we expect the SU(N) matrices to be replaced by objects parametrized by ∼N^d real numbers for large N. The simplest way to see this is to consider a d-dimensional torus T^d. Functions on T^d can be expanded in terms of the convenient basis

Y_{n_1...n_d}(ϕ_1, . . . , ϕ_d) = e^{i(n_1 ϕ_1 + ... + n_d ϕ_d)},   (5)

where the n_k's are integers. Because none of the ϕ_k's is distinguished, we expect that a regularization based on truncating the mode numbers n_k should be such that n_k ≤ N for some N and all k. Therefore there would be ∼N^d parameters corresponding to a single expansion in terms of (5).
A similar conclusion is obtained when analyzing the expansion on S^d in terms of the corresponding S^d-spherical harmonics Y^l_{m_1...m_{d−1}}(ϕ_1, . . . , ϕ_d). These functions can be classified by homogeneous polynomials of order l in d + 1 variables [24,25]. The number of such polynomials for fixed l is given by

N(l, d) = (2l + d − 1)(l + d − 2)! / (l! (d − 1)!)   (6)

(e.g. 2l + 1 for d = 2, (l + 1)^2 for d = 3), i.e. it scales as (2/(d−1)!) l^{d−1} for large l. Therefore the sum over the first N modes, for large N, gives

Σ_{l<N} N(l, d) ≈ 2N^d / d! .   (7)

Another way to argue the N^d scaling is to view the regularized d-manifold as a lattice of interacting points (i.e. vertices and links). Here one immediately arrives at the conclusion that there should be about N^d such points. However this argument is less rigorous than the previous one, since already for surfaces it is not clear what the geometrical meaning of the N^2 − 1 parameters is. Moreover, as observed by Nicolai and Helling [22], matrix regularization differs significantly from lattice approaches in that it does not introduce any dimensionful parameters (unlike lattice approaches, where one introduces a lattice spacing).
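The counting above can be verified directly. The following sketch (our illustration; `harm_dim` is a name we introduce) computes the dimension of the space of degree-l harmonics on S^d as the difference of two binomial coefficients, which reproduces the quoted formula:

```python
from math import comb, factorial

def harm_dim(l, d):
    """Dimension of the degree-l spherical harmonics on S^d, i.e. of harmonic
    homogeneous polynomials of degree l in d+1 variables."""
    n = d + 1
    if l == 0:
        return 1
    # all degree-l monomial coefficients minus those removed by harmonicity
    return comb(l + n - 1, n - 1) - comb(l + n - 3, n - 1)

# the familiar special cases: 2l+1 on S^2 and (l+1)^2 on S^3
assert all(harm_dim(l, 2) == 2 * l + 1 for l in range(10))
assert all(harm_dim(l, 3) == (l + 1) ** 2 for l in range(10))

# first N levels on S^d carry ~ 2 N^d / d! modes for large N
d, N = 4, 200
total = sum(harm_dim(l, d) for l in range(N))
print(total / (2 * N ** d / factorial(d)))   # approaches 1 as N grows
```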
Finally we note that the scaling N^d (for large N) also passes consistency checks, e.g. for S^2 × S^2 one should obtain ∼N^4 real parameters, as for S^4 or T^4, which is indeed the case: there are N^2 modes corresponding to each S^2 factor of S^2 × S^2.

d-arrays
The above considerations are simple but suggest something less trivial, namely that the algebraic objects corresponding to functions on d-manifolds should be certain generalizations of square matrices. A natural proposal is to consider d-dimensional matrices, or d-arrays, i.e. matrices carrying more than two indices, each ranging from 1 to N, together with the related concept of d-algebras (for a review see [26]). There are, however, elementary problems with such objects, e.g. for (2n+1)-arrays it is difficult to define a product of two arrays. If a product of two arrays involves only index contractions, then it is not possible to multiply two (2n+1)-arrays in such a way that the result is again a (2n+1)-array. For example, a product F̂ • Ĝ of two 3-arrays F̂ and Ĝ (corresponding to functions F and G respectively, with • denoting a product of d-arrays) can be a 4-array, a 2-array or a 0-array, but never a 3-array. To overcome this difficulty one considers a multiplication rule in which some indices are fixed and equal but not summed over [27,28]. A similar problem does not appear for 2n-arrays, i.e. it is possible to define a product of two 2n-arrays which involves only index contractions and whose result is again a 2n-array. This apparent dichotomy between odd and even dimensions is the reason why the regularization of d-manifolds has to be carried out in a conceptually different manner in the two cases. It also seems that even-dimensional manifolds should be easier to deal with.
A similar distinction between odd and even dimensions occurs in the topological properties of Lie groups: the Betti numbers of Lie-group manifolds are given by the Betti numbers of certain products of odd-dimensional spheres [29]. Because the structure underlying the regularization of manifolds is expected to be a Lie group, it seems plausible that Lie-group cohomology is related to that distinction.
Let us also note that in the case of 2n dimensions the scaling N^{2n} can also be obtained by considering N^n × N^n matrices instead of 2n-arrays. There exists a natural operation, the Kronecker product of matrices, which produces such matrices from the usual N × N ones. Therefore there is a possibility to regularize 2n-manifolds by means of usual matrices, obtained as a Kronecker product of n ordinary N × N matrices. We shall use this observation in Section 4.
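The reason Kronecker-product matrices close under multiplication is the mixed-product property, which a two-line numpy sketch (our illustration) makes explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

# mixed-product property: (A ⊗ C)(B ⊗ D) = (AB) ⊗ (CD),
# so products of Kronecker matrices are again Kronecker-structured
assert np.allclose(np.kron(A, C) @ np.kron(B, D), np.kron(A @ B, C @ D))

# the Kronecker product of two N x N matrices carries N^4 entries
print(np.kron(A, C).size)   # 81 = 3^4
```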

d-brackets
For d-manifolds the counterpart of the membrane APD algebra is the dVPD (d-volume-preserving diffeomorphism) algebra. The counterpart of the Poisson bracket is the Nambu d-bracket [30],

{F_1, . . . , F_d} := (1/ρ) ε^{r_1...r_d} ∂_{r_1} F_1 · · · ∂_{r_d} F_d,   ∂_r := ∂/∂ϕ_r,   (8)

where the ϕ_i parameterize the manifold and ρ is a certain scalar density to be determined (in this paper we choose ρ as in the case of surfaces, i.e. ρ is the square root of the determinant of the metric of the embedded manifold). Technically this is no longer an algebra in the usual sense (the Nambu d-bracket defines a product among d functions, not just 2 functions) but a different algebraic structure, called a d-algebra. A natural way to proceed is to replace the Nambu d-bracket by an antisymmetric product of d-arrays (modulo some numerical factors, e.g. the factor of −i in {·, ·} → −i[·, ·] for the case of surfaces). Functions on the manifold can be expanded in modes,

F = Σ_A c_A Y_A .

Although we do not write the range of A explicitly (the index A is a multi-index and is different for different manifolds), the above sum is over all the modes Y_A. Let us assume now that we have d-array-like objects T_A corresponding to the Y_A's; the counterpart of F is then

F̂ = Σ'_A c_A T_A ,

where the prime indicates that the sum is over a finite number of modes Λ = Λ(N) (Λ ∝ N^d for large N). In order to show that such a regularization is indeed correct one has to verify that the structure constants c_{A_1...A_{d+1}} of the dVPD d-algebra, given by

{Y_{A_1}, . . . , Y_{A_d}} = c_{A_1...A_{d+1}} Y_{A_{d+1}} ,

can be approximated by the structure constants f^{(N)}_{A_1...A_{d+1}} of the corresponding antisymmetric product of the T_A's. In the following section we shall prove that assertion for particular 4-manifolds.
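For square matrices, the natural antisymmetric d-fold product is the totally antisymmetrized d-commutator. A minimal sketch (our illustration; `d_commutator` is a name we introduce):

```python
import numpy as np
from itertools import permutations
from functools import reduce

def d_commutator(*mats):
    """Totally antisymmetrized product:
    [A_1,...,A_d] = sum over permutations sigma of sgn(sigma) A_sigma(1) ... A_sigma(d)."""
    d = len(mats)
    out = np.zeros_like(mats[0], dtype=complex)
    for perm in permutations(range(d)):
        # sign of the permutation via inversion count
        inv = sum(1 for a in range(d) for b in range(a + 1, d) if perm[a] > perm[b])
        out += (-1) ** inv * reduce(np.matmul, (mats[p] for p in perm))
    return out

rng = np.random.default_rng(1)
A, B, C, D = (rng.standard_normal((4, 4)) for _ in range(4))
# d = 2 reduces to the ordinary commutator
assert np.allclose(d_commutator(A, B), A @ B - B @ A)
# total antisymmetry: swapping two arguments flips the sign ...
assert np.allclose(d_commutator(A, B, C, D), -d_commutator(B, A, C, D))
# ... and a repeated argument annihilates the bracket
assert np.allclose(d_commutator(A, A, C, D), 0)
```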

Regularization of 4-manifolds
When searching for an appropriate regularization of 4-manifolds, products of two 2-manifolds such as S^2 × S^2 or S^2 × T^2 are the simplest candidates, given that one knows how to regularize surfaces. From this perspective a manifold such as S^1 × S^3 seems much more difficult to deal with. Because the algebra behind surfaces is SU(N), it is reasonable to expect that the algebra underlying regularized 4-manifolds contains two copies of SU(N), e.g. the tensor product SU(N) ⊗ SU(N). On the other hand, this statement seems highly nontrivial for manifolds such as S^1 × S^3.
Functions on S^2 × S^2 can be expanded in terms of products of spherical harmonics for each sphere,

F = Σ_{A_1, A_2} a_{A_1 A_2} Y_{A_1}(ϕ_1, ϕ_2) Y_{A_2}(ϕ_3, ϕ_4),   (10)

where we again use the double-index notation A_1 = (l_1, m_1), A_2 = (l_2, m_2). If we now replace the Y_A's with matrices T_A as in (2), then the question arises what matrix operation is the counterpart of the product Y_{A_1} Y_{A_2}. The naive choice, the ordinary product T_{A_1} T_{A_2}, is not good. The reason is simple: T_{A_1} T_{A_2} is still an N × N matrix, hence the number of independent parameters describing T_{A_1} T_{A_2} scales as N^2, while we require the N^4 scaling. However, the result of the Kronecker product of two matrices, (A ⊗ B)_{ijkl} := A_{ij} B_{kl}, has the correct scaling; therefore we consider

Y_{A_1} Y_{A_2} → γ_{l_1 l_2 N} T_{A_1} ⊗ T_{A_2},   (11)

where γ_{l_1 l_2 N} is a constant to be determined later on (a counterpart of γ_{lN} in (2)). That this prescription may work follows from the bilinearity of the Kronecker product and the mixed-product property, (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), which, together with the fact that the T_A's are given by linear combinations of products of spin matrices J_i, implies that we can expand T_{A_1} ⊗ T_{A_2} in terms of (ordinary) products of matrices. Concretely, defining J^x_i := J_i ⊗ 1 and J^y_i := 1 ⊗ J_i, we have

T_{A_1} ⊗ T_{A_2} = γ_{l_1 N} γ_{l_2 N} c^{m_1}_{a_1...a_{l_1}} c^{m_2}_{b_1...b_{l_2}} J^x_{a_1} · · · J^x_{a_{l_1}} J^y_{b_1} · · · J^y_{b_{l_2}} .   (13)

The double-Poisson bracket
Note that the matrices J^x_i and J^y_i form two independent sets of N^2 × N^2 spin matrices, i.e.

[J^x_i, J^x_j] = (i/√(s(s+1))) ε_{ijk} J^x_k,   [J^y_i, J^y_j] = (i/√(s(s+1))) ε_{ijk} J^y_k,   [J^x_i, J^y_j] = 0,   (14)

with 2s = N − 1, which in particular implies that there is no ordering ambiguity in (13). Moreover, let us observe that the commutator −i[·, ·] now corresponds to a double-Poisson bracket: defining

{F, G} := {F, G}_{(ϕ_1, ϕ_2)} + {F, G}_{(ϕ_3, ϕ_4)},   (15)

where each term is a Poisson bracket (1) in the respective pair of variables, we obtain relations for the coordinates x_i, y_i of the two spheres in S^2 × S^2 which are similar to (14),

{x_i, x_j} = ε_{ijk} x_k,   {y_i, y_j} = ε_{ijk} y_k,   {x_i, y_j} = 0.

Therefore any further construction must take into account the fact that the commutator −i[·, ·] is a regularization of the double-Poisson bracket (15).
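The independence of the two sets can be checked directly. The sketch below (our illustration; `spin_matrices` is a helper name we introduce, and here we use the unnormalized spin matrices S_i for brevity):

```python
import numpy as np

def spin_matrices(n):
    """n x n spin matrices for spin s = (n-1)/2, via ladder operators."""
    s = (n - 1) / 2
    m = s - np.arange(n)
    sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    return [(sp + sp.conj().T) / 2, (sp - sp.conj().T) / 2j, np.diag(m)]

n = 4
S = spin_matrices(n)
one = np.eye(n)
Jx = [np.kron(Si, one) for Si in S]   # J^x_i := S_i ⊗ 1
Jy = [np.kron(one, Si) for Si in S]   # J^y_i := 1 ⊗ S_i

comm = lambda A, B: A @ B - B @ A
for (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(comm(Jx[i], Jx[j]), 1j * Jx[k])   # first su(2) copy
    assert np.allclose(comm(Jy[i], Jy[j]), 1j * Jy[k])   # second su(2) copy
for i in range(3):
    for j in range(3):
        assert np.allclose(comm(Jx[i], Jy[j]), 0)        # the two sets commute
print("two independent su(2) algebras inside n^2 x n^2 matrices")
```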

4-arrays
The advantage of writing everything as in (13) is that we have gotten rid of the explicit Kronecker product by "moving" it inside the definition of the building blocks J^x_i, J^y_i. These matrices can now be manipulated in the usual fashion; however, since they are N^2 × N^2, the resulting regularization will be quite different from the one for surfaces. This is a realization of what was discussed in Section 3.2: one can view the Kronecker product f̂ ⊗ ĝ either as a 4-array with indices ranging from 1 to N,

(f̂ ⊗ ĝ)_{i_1 i_2 j_1 j_2} := f̂_{i_1 i_2} ĝ_{j_1 j_2},   i_1, i_2, j_1, j_2 = 1, . . . , N,

or as a 2-array (a matrix) with indices from 1 to N^2,

(f̂ ⊗ ĝ)_{IJ},   I, J = 1, . . . , N^2

(we use the notation F̂ or f̂ ⊗ ĝ for matrices corresponding to functions F(ϕ_1, ϕ_2, ϕ_3, ϕ_4) or f(ϕ_1, ϕ_2) g(ϕ_3, ϕ_4) on S^2 × S^2).

General prescription
Up to now we have defined a map which gives the matrix counterparts of the basis Y_{A_1} Y_{A_2} via (11) and (13). For a Cartesian product of arbitrary 2-manifolds, however, this prescription will not work. On the other hand, we could have obtained the same result by simply defining T_{A_1} → T_{A_1} ⊗ 1, T_{A_2} → 1 ⊗ T_{A_2}. This prescription will work, since (T_{A_1} ⊗ 1)(1 ⊗ T_{A_2}) = T_{A_1} ⊗ T_{A_2}, and hence can be used in the general case. Let us therefore consider two 2-manifolds N and M and their product N × M. Functions on N and M can be expanded in terms of modes Y_{A_1}(ϕ_1, ϕ_2) and Ỹ_{A_2}(ϕ_3, ϕ_4) respectively. Suppose their matrix counterparts are T_{A_1} and T̃_{A_2}, and define the map

Y_{A_1} Ỹ_{A_2} → γ_{l_1 l_2 N} T_{A_1} ⊗ T̃_{A_2} .   (17)

Having done that, we define the counterpart of the Nambu 4-bracket by

{F_1, F_2, F_3, F_4} → c_4 [F̂_1, F̂_2, F̂_3, F̂_4] := c_4 Σ_{σ ∈ S_4} sgn(σ) F̂_{σ(1)} F̂_{σ(2)} F̂_{σ(3)} F̂_{σ(4)},   (19)

where the F̂_i's are the matrix counterparts of the F_i's and c_4 is a number to be determined. The product F̂_i F̂_j F̂_k F̂_l in (19) is the usual product of four N^2 × N^2 matrices. In the following we will be interested in calculating the Nambu 4-bracket for the basis functions Y_{A_1} Ỹ_{A_2} and their matrix counterparts T_{A_1} ⊗ T̃_{A_2}; therefore all F_i's considered here will have that product form (and all F̂_i's will have the Kronecker-product form). We now use the observation that the Nambu 4-bracket can be expanded as [31]

{F_1, F_2, F_3, F_4} = {F_1, F_2}{F_3, F_4} − {F_1, F_3}{F_2, F_4} + {F_1, F_4}{F_2, F_3},   (20)

where {·, ·} is the double-Poisson bracket, and that the quantum 4-bracket (the 4-commutator) resolves as

[F̂_1, F̂_2, F̂_3, F̂_4] = [F̂_1, F̂_2][F̂_3, F̂_4] + [F̂_3, F̂_4][F̂_1, F̂_2] − [F̂_1, F̂_3][F̂_2, F̂_4] − [F̂_2, F̂_4][F̂_1, F̂_3] + [F̂_1, F̂_4][F̂_2, F̂_3] + [F̂_2, F̂_3][F̂_1, F̂_4],   (21)

i.e. it has twice as many terms due to the noncommutativity. Therefore it is reasonable to expect that −(1/2)[·, ·, ·, ·] is the correct quantum counterpart of the Nambu 4-bracket (the minus sign comes from the i factor in {·, ·} → −i[·, ·], since the 4-commutator is a sum of "squares" of commutators, and the factor 1/2 compensates the doubling of terms, implying c_4 = −1/2, cf. (19)). For the case of S^2 × S^2 we have shown that the double-Poisson bracket corresponds to the commutator −i[·, ·]. Let us verify that this rule holds for general product manifolds.
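The six-term resolution of the 4-commutator into products of ordinary commutators can be verified numerically for arbitrary matrices. A sketch (our illustration; `four_commutator` is a name we introduce):

```python
import numpy as np
from itertools import permutations
from functools import reduce

def four_commutator(*mats):
    """[A,B,C,D] = sum over the 24 permutations, weighted by signs."""
    out = np.zeros_like(mats[0], dtype=complex)
    for perm in permutations(range(4)):
        inv = sum(1 for a in range(4) for b in range(a + 1, 4) if perm[a] > perm[b])
        out += (-1) ** inv * reduce(np.matmul, (mats[p] for p in perm))
    return out

comm = lambda X, Y: X @ Y - Y @ X

rng = np.random.default_rng(2)
A, B, C, D = (rng.standard_normal((5, 5)) for _ in range(4))

# resolution of the 4-commutator into six products of ordinary commutators
resolved = (comm(A, B) @ comm(C, D) + comm(C, D) @ comm(A, B)
            - comm(A, C) @ comm(B, D) - comm(B, D) @ comm(A, C)
            + comm(A, D) @ comm(B, C) + comm(B, C) @ comm(A, D))
assert np.allclose(four_commutator(A, B, C, D), resolved)
print("4-commutator = sum of commutator 'squares'")
```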
Considering functions in the product form and their matrix counterparts,

F_1 := f_1(ϕ_1, ϕ_2) g_1(ϕ_3, ϕ_4) → f̂_1 ⊗ ĝ_1 =: F̂_1,   F_2 := f_2(ϕ_1, ϕ_2) g_2(ϕ_3, ϕ_4) → f̂_2 ⊗ ĝ_2 =: F̂_2,

we find that the double-Poisson bracket and the commutator are

{F_1, F_2} = {f_1, f_2} g_1 g_2 + f_1 f_2 {g_1, g_2},   [F̂_1, F̂_2] = [f̂_1, f̂_2] ⊗ ĝ_1 ĝ_2 + f̂_2 f̂_1 ⊗ [ĝ_1, ĝ_2],   (22)

i.e. they resolve in the same way (note the ordering of the f̂_i's). This, together with (20) and (21), verifies that the Nambu 4-bracket can be replaced by −(1/2)[·, ·, ·, ·], while keeping in mind that the double-Poisson bracket is replaced by −i[·, ·].
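The commutator resolution for Kronecker-product matrices, including the reversed ordering of the f̂_i's in the second term, can be checked in a few lines (our illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
f1, f2, g1, g2 = (rng.standard_normal((3, 3)) for _ in range(4))

lhs = np.kron(f1, g1) @ np.kron(f2, g2) - np.kron(f2, g2) @ np.kron(f1, g1)
# [f1 ⊗ g1, f2 ⊗ g2] = [f1, f2] ⊗ (g1 g2) + (f2 f1) ⊗ [g1, g2]  (note the order f2 f1)
rhs = (np.kron(f1 @ f2 - f2 @ f1, g1 @ g2)
       + np.kron(f2 @ f1, g1 @ g2 - g2 @ g1))
assert np.allclose(lhs, rhs)
print("Kronecker commutator resolves as claimed")
```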

Leibniz rules
Let us now note that the Leibniz rule for the double-Poisson bracket,

{F_1, F_2 F_3} = {F_1, F_2} F_3 + F_2 {F_1, F_3},

resolves in the same way as that for the commutator,

[F̂_1, F̂_2 F̂_3] = [F̂_1, F̂_2] F̂_3 + F̂_2 [F̂_1, F̂_3]   (23)

(where F_3 = f_3(ϕ_1, ϕ_2) g_3(ϕ_3, ϕ_4) → f̂_3 ⊗ ĝ_3 =: F̂_3). Eqs. (22) and (23) can be used to evaluate {F_1, F_2, F_3, F_4 F_5} and its quantum counterpart −(1/2)[F̂_1, F̂_2, F̂_3, F̂_4 F̂_5] via the identities (20) and (21) respectively. As a result one obtains a Leibniz-like rule, analogous to (4) for surfaces, which can be used to evaluate the Nambu 4-bracket and the 4-commutator for the basis functions Y_{A_1} Ỹ_{A_2} and their matrix counterparts T_{A_1} ⊗ T̃_{A_2} respectively. Just as in the case of S^2, we observe that in performing that calculation one makes the same algebraic manipulations modulo factors of 1/N (for large N) coming from the commutators [f̂_i, f̂_j] and [ĝ_i, ĝ_j] (cf. (14) for the case of S^2 × S^2). Therefore, to leading order in N, the structure constants f^{(N)} converge to the structure constants of the 4VPD algebra. The large-N expression for the factor γ_{l_1 l_2 N} is given by the product γ^N_{l_1 N} γ^M_{l_2 N}, i.e. it is of order N^2. Note that the identities (22) and (23) are useful here only because the basis functions on N × M and their matrix counterparts are in the product form and the Kronecker-product form respectively. The basis corresponding to e.g. S^4 will not have that property, and therefore one has to use/argue a different mechanism containing a Leibniz-like rule. Unfortunately, the Leibniz rule for the Nambu 4-bracket, i.e.

{F_1, F_2, F_3, F_4 F_5} = {F_1, F_2, F_3, F_4} F_5 + F_4 {F_1, F_2, F_3, F_5},
does not hold for the 4-commutator. On the other hand, from the matrix-regularization point of view one may relax the Leibniz rule to allow 1/N corrections, i.e. the following identity would still work:

[F̂_1, F̂_2, F̂_3, F̂_4 F̂_5] = [F̂_1, F̂_2, F̂_3, F̂_4] F̂_5 + F̂_4 [F̂_1, F̂_2, F̂_3, F̂_5] + Ô,   (24)

where Ô collects F̂_i-dependent terms which vanish in the large-N limit. Using (21) and the Leibniz rule for the commutator one finds that all the terms in Ô contain three commutators; hence they are of order 1/N^3 (assuming that each commutator is of order 1/N), while the 4-commutator is of order 1/N^2. We thus verify that Ô is indeed subleading in N.
Therefore, in calculating the structure constants one can use the Leibniz rule for the 4-commutator, keeping in mind that there will be additional 1/N terms. This is another way of proving that the structure constants of the 4VPD algebra are recovered as N → ∞.
S^4

Functions on S^4 can be expanded in terms of S^4-hyperspherical harmonics Y^l_{m_1 m_2 m_3}(ϕ_1, ϕ_2, ϕ_3, ϕ_4). These harmonics can be expressed in terms of homogeneous polynomials in the variables x_1, . . . , x_5,

Y^l_{m_1 m_2 m_3} = c^{m_1 m_2 m_3}_{a_1...a_l} x_{a_1} · · · x_{a_l},

where the coefficients c^{m_1 m_2 m_3}_{a_1...a_l} are symmetric in the indices a_k. Because the coordinates x_i of S^4 do not separate as in (10), a map similar to (17) will not work. On the other hand, the Nambu 4-bracket can be evaluated on the S^4 coordinates: we have

{x_i, x_j, x_k, x_l} = ε_{ijklm} x_m ,

where the scalar density used here is ρ = sin^3 ϕ_1 sin^2 ϕ_2 sin ϕ_3 (cf. (8)). Therefore a matrix counterpart of x_i, call it Γ_i, should satisfy

−(1/2) [Γ_i, Γ_j, Γ_k, Γ_l] = d_4 ε_{ijklm} Γ_m ,   (25)

where d_4 may depend on N (d_4 should scale like 1/N^2, based on the arguments of the previous subsection). Here we have assumed that the formula for the quantum Nambu 4-bracket should be independent of the topology of the manifold (as it is in the case of surfaces); therefore we use (19) with c_4 = −1/2. An obvious choice of matrices satisfying (25) is Γ_i ∝ γ_i, where the γ_i's are the 5D Dirac matrices corresponding to Euclidean signature, {γ_i, γ_j} = 2δ_{ij} 1; identity (25) then follows directly from the algebraic properties of the γ_i's. However, Dirac matrices in 5D are 4 × 4 and there are no other irreducible representations of a different dimension, while we are looking for N^2 × N^2 matrices Γ_i for any N. It follows that the matrices Γ_i cannot satisfy the Clifford algebra for N > 2, while for N = 2 they are given by the 5D gamma matrices. The last remark suggests the following construction: define

Γ_a := a σ_1 ⊗ S_a (a = 1, 2, 3),   Γ_4 := b σ_2 ⊗ 1_n,   Γ_5 := b σ_3 ⊗ 1_n,   (26)

where the S_i's are n × n spin matrices, [S_i, S_j] = iε_{ijk} S_k, the σ's are Pauli matrices, and a and b are normalization factors chosen so that Σ_i Γ_i^2 = 1. These matrices are 2n × 2n, hence to obtain the correct N^4 scaling one should take 2n ∝ N^2 for large N. To argue once more the importance of the N^4 scaling, let us note that since there will be about 2N^4/4! matrix counterparts of the harmonics Y^l_{m_1 m_2 m_3} (cf. (7)), the majority of them would be linearly dependent if e.g. 2n ∼ N.
The scaling 2n ∼ N^2 guarantees that they are independent. For n = 2 (i.e. 2n = 4) these matrices are proportional to the 5D Dirac matrices in the chiral representation; however, for n > 2 they no longer satisfy the Clifford algebra (e.g. {Γ_1, Γ_2} ≠ 0 for n > 2). On the other hand, as we shall now show, the Γ_i's satisfy identity (25) with

a = √(3/(5 s(s+1))),   b = 1/√5,   d_4 = 6ab^2 = (6/5) √(3/(5 s(s+1))).   (28)
For the proof one should a priori calculate the twenty-four orderings of the 4-commutators in (25); however, due to the complete antisymmetry it is enough to evaluate only 5 of them, one per choice of the four indices. Using identity (21) we find

[Γ_1, Γ_2, Γ_3, Γ_4] = −4a^3 s(s+1) Γ_5 ,   [Γ_i, Γ_j, Γ_4, Γ_5] = −12ab^2 ε_{ijk} Γ_k   (i, j, k = 1, 2, 3),

where s(s+1) is the eigenvalue of the n × n matrix representation of the Casimir operator S^2, with 2s = n − 1. The normalization condition Σ_i Γ_i^2 = 1 and the fact that d_4 in (25) should be common to all the 4-commutators imply the equations

a^2 s(s+1) + 2b^2 = 1 ,   −12ab^2 = −4a^3 s(s+1) = −2d_4 ,

which are solved by (28) (note that other branches are possible, e.g. a → −a, b → −b, d_4 → −d_4). Therefore we have found a 2n × 2n irreducible matrix representation of the 4-algebra (25). As anticipated, the factor d_4 scales like 1/n ∼ 1/N^2; therefore in the large-N limit the 4-commutator is zero, which is analogous to the case of S^2 (cf. (3)), i.e. as N → ∞ the matrices Γ_i 4-commute. They will not commute, however, since

[Γ_4, Γ_5] = 2i b^2 σ_1 ⊗ 1_n ,   (29)

which holds for all n.
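The representation of the 4-algebra can be tested numerically. The sketch below is our illustration: it assumes the block realization Γ_a = a σ_1 ⊗ S_a, Γ_4 = b σ_2 ⊗ 1, Γ_5 = b σ_3 ⊗ 1 (one choice consistent with the coefficients quoted above) and checks (25) with the solution (28) for n = 3 (spin s = 1):

```python
import numpy as np
from itertools import permutations
from functools import reduce

def spin_matrices(n):
    s = (n - 1) / 2
    m = s - np.arange(n)
    sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    return [(sp + sp.conj().T) / 2, (sp - sp.conj().T) / 2j, np.diag(m)]

def four_commutator(*mats):
    out = np.zeros_like(mats[0], dtype=complex)
    for perm in permutations(range(4)):
        inv = sum(1 for a in range(4) for b in range(a + 1, 4) if perm[a] > perm[b])
        out += (-1) ** inv * reduce(np.matmul, (mats[p] for p in perm))
    return out

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

n = 3                                  # spin s = 1
s = (n - 1) / 2
S = spin_matrices(n)
a = np.sqrt(3 / (5 * s * (s + 1)))     # solution (28): a^2 s(s+1) = 3 b^2
b = 1 / np.sqrt(5)
d4 = 6 * a * b**2

G = [a * np.kron(s1, Si) for Si in S]                          # Gamma_1..3
G += [b * np.kron(s2, np.eye(n)), b * np.kron(s3, np.eye(n))]  # Gamma_4, Gamma_5

# normalization: sum_i Gamma_i^2 = 1
assert np.allclose(sum(Gi @ Gi for Gi in G), np.eye(2 * n))
# the 4-algebra (25): 4-commutators close on the remaining Gamma, with -2 d4
assert np.allclose(four_commutator(G[0], G[1], G[2], G[3]), -2 * d4 * G[4])
assert np.allclose(four_commutator(G[0], G[1], G[3], G[4]), -2 * d4 * G[2])
print("4-algebra representation verified for n =", n)
```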
Having defined the matrix counterparts of Y^l_{m_1 m_2 m_3} by (27), we can now evaluate the corresponding structure constants. Once again the scaling 2n ∼ N^2 turns out to be necessary; otherwise most of the structure constants would be zero (for 2n ∼ N) or there would be too many degrees of freedom (for 2n ∼ N^k, k > 2). One can now use the Leibniz rule with 1/N corrections (24)^1 to prove that the structure constants given by the T^l_{m_1 m_2 m_3}'s converge to the structure constants of the 4VPD of S^4. Finally, let us observe that from (6) it follows that there are about 2N^4/4! matrices T^l_{m_1 m_2 m_3}, which for large N is less than (N^2 − 1)^2; hence the group underlying S^4 is not SU(N) ⊗ SU(N).

S^3
Although in this paper we are mostly interested in 4-manifolds, let us observe that the matrices (26) can be used to regularize S^3. For the coordinates of a unit S^3 embedded in R^4 we take

x_1 = cos ϕ_1,   x_2 = sin ϕ_1 cos ϕ_2,   x_3 = sin ϕ_1 sin ϕ_2 cos ϕ_3,   x_4 = sin ϕ_1 sin ϕ_2 sin ϕ_3,   (30)

with the scalar density ρ = sin^2 ϕ_1 sin ϕ_2. Just as in the case of S^4, in order to find a matrix counterpart of (30) it is natural to start with the 4D Dirac matrices γ_i, owing to the identity [γ_i, γ_j, γ_k] ∝ ε_{ijkl} γ_5 γ_l. However, because of the factor of γ_5 one cannot identify the x_i's with the γ_i's yet. A way to proceed is to alter the definition of the quantum 3-bracket and define it using the 4-commutator as

[A, B, C]_{γ_5} := [A, B, C, γ_5],

which gives [γ_i, γ_j, γ_k]_{γ_5} ∝ ε_{ijkl} γ_l, and hence the x_i can be replaced by the γ_i's. This approach was used in the context of multiple-membrane theories [32], where it is also shown that it arises naturally from non-associative algebras. Let us now generalize this construction to 2n × 2n matrices. Using the same Γ_i's as for S^4 we find that

[Γ_i, Γ_j, Γ_k]_{Γ_5} := [Γ_i, Γ_j, Γ_k, Γ_5] = 2d_4 ε_{ijkl} Γ_l ,   i, j, k, l = 1, . . . , 4;

therefore one may use these matrices, for negative d_4 (i.e. the branch a → −a, b → −b, d_4 → −d_4), to represent the quantum counterparts of the coordinates of S^3, as long as we use [·, ·, ·]_{Γ_5} as the definition of the quantum 3-bracket. We observe that in the large-n limit the 3-bracket of the Γ_i's vanishes, in analogy with the 4-commutator. The spherical harmonics of S^3 can now be mapped to 2n × 2n matrices via

T^l_{m_1 m_2} := γ_{ln} c^{m_1 m_2}_{a_1...a_l} Γ_{a_1} · · · Γ_{a_l} ,   a_k = 1, . . . , 4,   (32)

^1 Since the commutator (29) does not scale like 1/N, one may think that the matrix Ô in (24) may, in a generic case, be of order 1/N^2 (instead of 1/N^3) and hence not subleading. However, Ô will in fact scale like 1/n^2 ∼ 1/N^4, due to the Γ_a's, a = 1, 2, 3, carrying a factor of 1/n.
and therefore, to maintain the N^3 scaling of the number of modes, we will take 2n ∼ N^2. As for S^4, the matrices Γ_i are N^2 × N^2, but the resulting matrix harmonics (32) contain only ∼N^3 degrees of freedom (more precisely, it follows from (6) that there are N(N + 1)(2N + 1)/6 degrees of freedom corresponding to the first N modes). That the structure constants coming from the T^l_{m_1 m_2}'s converge to the structure constants of the VPD of S^3 follows immediately from the fact that the quantum 3-bracket is in fact given by a 4-commutator; therefore the reasoning presented in the case of S^4 applies here.
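The γ_5 trick for the 3-bracket can be checked at the smallest size with 4D Euclidean Dirac matrices. The sketch below (our illustration) uses a chiral-type representation built from Pauli matrices and verifies that [γ_i, γ_j, γ_k]_{γ_5} is proportional to ε_{ijkl} γ_l:

```python
import numpy as np
from itertools import permutations
from functools import reduce

def four_commutator(*mats):
    out = np.zeros_like(mats[0], dtype=complex)
    for perm in permutations(range(4)):
        inv = sum(1 for a in range(4) for b in range(a + 1, 4) if perm[a] > perm[b])
        out += (-1) ** inv * reduce(np.matmul, (mats[p] for p in perm))
    return out

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2)

# Euclidean 4D Dirac matrices (chiral-type representation) and gamma_5
g = [np.kron(s1, s) for s in (s1, s2, s3)] + [np.kron(s2, one)]
g5 = np.kron(s3, one)   # proportional to g1 g2 g3 g4

# quantum 3-bracket via the 4-commutator: [A,B,C]_{g5} := [A,B,C,g5]
triples = {(0, 1, 2): 3, (0, 1, 3): 2, (0, 2, 3): 1, (1, 2, 3): 0}
for (i, j, k), l in triples.items():
    R = four_commutator(g[i], g[j], g[k], g5)
    lam = np.trace(R @ g[l]) / 4      # proportionality constant
    assert abs(lam) > 1
    assert np.allclose(R, lam * g[l]) # [g_i, g_j, g_k]_{g5} ∝ eps_ijkl g_l
print("3-bracket closes on the Dirac matrices")
```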

Summary and Outlook
Matrix regularization of embedded surfaces is a procedure that at first sight seems easily generalizable to d-dimensional manifolds: one simply performs a Fourier expansion of functions on a manifold and then replaces the modes by suitably chosen matrices - why should it be more complicated? The problem, however, is that it is much more difficult to find concrete representations of the algebraic structures (different from Lie algebras) that should replace the Nambu d-bracket. An even more basic problem concerns the noncommutative objects that replace functions on the manifold. These objects should depend on roughly N^d real parameters, where N is the number of Fourier modes; a natural choice would be to use d-arrays. Although we find this path appealing, in this paper we focused on matrix representations, due to the ambiguity in defining the product of two d-arrays (d > 2). For 2n-manifolds we simply consider N^n × N^n matrices, while for d = 2n − 1 a subset of N^n × N^n matrices.
In this paper we focused on d = 4 manifolds due to the possible relevance of this case to quantum gravity. By finding a matrix regularization of space-time one may in principle be able to quantize the resulting system with finitely many degrees of freedom. We first considered manifolds that are Cartesian products of two 2-manifolds and showed that functions on such manifolds can be approximated by N^2 × N^2 matrices from SU(N) ⊗ SU(N). This result shows that one does not have to resort to 4-arrays in order to regularize 4-manifolds. We then considered a more complicated case, S^4, which can also be regularized by N^2 × N^2 matrices, although this time we were unable to identify the underlying group. Finally, we used the construction for S^4 to find a regularized description of S^3; in particular we used the observation that one can define a quantum 3-algebra by means of the quantum 4-bracket (the 4-commutator).
Let us now discuss some generalizations. First, for product manifolds N × M with arbitrary closed manifolds N, M of dimension > 1, we conjecture that the resulting group of matrices underlying the regularization is G_N ⊗ G_M, where the groups G_N, G_M correspond to N and M respectively. For the proof, the construction presented in Section 4.4 will work provided the corresponding Leibniz rule (with 1/N corrections) also holds.
Second, the regularization of S^d, d > 4, seems straightforward by an appropriate generalization of the matrices Γ_i in (26). The strategy is to first regularize S^{2n} with the usual 2n-commutator as the quantum counterpart of the Nambu 2n-bracket, and then apply that construction to S^{2n−1}. For example, S^6 embedded in R^7 can be regularized by taking 4n × 4n matrices Γ̃_i (i = 1, . . . , 7), with the Nambu 6-bracket replaced by the 6-commutator. After this is done, the regularization of S^5 can be carried out using the same matrices (excluding Γ̃_7) and defining the quantum 5-bracket by [·, ·, ·, ·, ·]_{Γ̃_7} := [·, ·, ·, ·, ·, Γ̃_7]. Third, as observed in [28], there exist n × n × n 3-array representations of the 3-bracket for S^3. That construction has the virtue of using the usual definition of the quantum 3-bracket (i.e. the 3-commutator), so one does not need to resort to the 4-commutator [·, ·, ·, γ_5]. On the other hand, working with 3-arrays requires introducing a non-standard multiplication rule. It seems plausible that these two approaches are equivalent.