Global unitary fixing and matrix-valued correlations in matrix models

We consider the partition function for a matrix model with a global unitary invariant energy function. We show that the averages over the partition function of global unitary invariant trace polynomials of the matrix variables are the same when calculated with any choice of a global unitary fixing, while averages of such polynomials without a trace define matrix-valued correlation functions that depend on the choice of unitary fixing. The unitary fixing is formulated within the standard Faddeev-Popov framework, in which the squared Vandermonde determinant emerges as a factor of the complete Faddeev-Popov determinant. We give the ghost representation for the FP determinant, and the corresponding BRST invariance of the unitary-fixed partition function. The formalism is relevant for deriving Ward identities obeyed by matrix-valued correlation functions.


Over the years there has been considerable interest in matrix models from various points of view. Matrix models are used to approximate quantum many-body systems and quantum field theories [1], and have deep connections with string theories [2]. They have also been studied as classical statistical mechanical systems, from which quantum behavior emerges under certain conditions [3]. A common issue that arises in all of these applications is how to deal with the overall global unitary invariance of the partition function.
Typically, in matrix model calculations this overall invariance is partially integrated out as a first step, thus eliminating a U(N)/U(1)^N coset of the global unitary group. Our aim in this paper is to proceed in an alternative fashion, by using the Faddeev-Popov framework to impose a set of unitary invariance fixing conditions that completely break the SU(N) subgroup of the global unitary invariance group U(N). One can think of our construction as a type of polar decomposition, based on modding out the action of the SU(N) subgroup. This allows one to define matrix-valued correlation functions, which give additional structural information about the system, but which (like gauge potentials in gauge field theory) depend on the choice of unitary fixing. A complete global unitary fixing is needed for the application of matrix models to the emergent quantum theory developed in Ref. [3], so as to be able to construct matrix ensembles that do not integrate over the spacetime translation group of the emergent theory. The formalism developed here may well find other matrix model applications.
Let M_1, ..., M_D be a set of N × N complex self-adjoint matrices, and let us take as the energy function a trace polynomial H = H[M_1, ..., M_D], self-adjoint and constructed using only c-number coefficients (i.e., no fixed, non-dynamical matrices appear as coefficients in constructing H). The corresponding partition function Z is then defined by

Z = ∫ ∏_{d=1}^{D} d[M_d] exp(−H) ,    (2)

with the integration measure d[M] for the self-adjoint matrix M defined in terms of the real and imaginary parts of the matrix elements M_ij of M by

d[M] = ∏_i dM_ii ∏_{i<j} dRe M_ij dIm M_ij .    (3)

As is well known, the measure d[M] is unitary invariant; in other words, if U is a fixed N × N unitary matrix, then

d[U M U†] = d[M] .    (4a)

If we make the same unitary transformation U on all of the matrices M_d, d = 1, ..., D, then by our assumption that H involves no fixed matrix coefficients, H is invariant by virtue of the cyclic property of the trace,

H[{U M U†}] = H[{M}] .    (4b)

Thus, Eqs. (4a) and (4b) together imply that the partition function Z has a global unitary invariance, which must be taken into account in calculating correlations of the various matrices M_d averaged over the partition function. Correspondingly, let Q[{M}] be an arbitrary polynomial in the matrices M_1, ..., M_D constructed using only c-number coefficients, so that under global unitary transformations Q transforms covariantly,

Q[{U M U†}] = U Q[{M}] U† ,    (5a)

so that Tr Q is a global unitary invariant. One can now consider the calculation of averages of Tr Q and of Q respectively over the ensemble defined by Eq. (2). In the case of the trace polynomial Tr Q one has

⟨Tr Q⟩_AV = Z^{-1} ∫ ∏_{d=1}^{D} d[M_d] exp(−H) Tr Q ,    (6a)

which because of the global unitary invariance involves an overall structure-independent unitary integration that is typically done as the first step, by using Mehta's change of variables [4] for one of the matrix arguments on which Q depends. Let us now consider the corresponding average of the polynomial Q over the ensemble,

⟨Q⟩_AV = Z^{-1} ∫ ∏_{d=1}^{D} d[M_d] exp(−H) Q .    (7a)

Making a global unitary transformation on all of the matrix integration variables, and using the invariance of d[M] and of H given in Eqs. (4a,b) and the covariance of Q given in Eq. (5a), we then find that

⟨Q⟩_AV = U ⟨Q⟩_AV U†

for all unitary matrices U. Thus by Schur's lemma (which applies since U(N) acts irreducibly on the complex N-dimensional vector space) ⟨Q⟩_AV must be a c-number multiple of the unit matrix, so that by taking the trace we learn that

⟨Q⟩_AV = (1/N) ⟨Tr Q⟩_AV 1 ,    (7b)

and all nontrivial matrix information (e.g., the unitary orientation and nontrivial operator properties) contained in Q has been lost.
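As a quick numerical illustration of the Schur's-lemma argument (our own sketch, not part of the paper; the matrix size, sample count, and test matrix Q are arbitrary choices), one can Monte Carlo average U Q U† over Haar-distributed unitaries and watch the average collapse onto (Tr Q/N) times the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix, with the R phases fixed, gives a
    # Haar-distributed unitary (Mezzadri's recipe).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

# Any fixed Hermitian matrix stands in for the value of the polynomial Q[{M}].
Q = np.array([[1.0, 2.0, 0.5],
              [2.0, -1.0, 1j],
              [0.5, -1j, 0.0]])

# Monte Carlo estimate of the Haar average of U Q U^dagger.
samples = 5000
avg = np.zeros((N, N), dtype=complex)
for _ in range(samples):
    U = haar_unitary(N, rng)
    avg += U @ Q @ U.conj().T
avg /= samples

expected = (np.trace(Q) / N) * np.eye(N)    # Schur's lemma prediction
deviation = np.max(np.abs(avg - expected))  # shrinks like 1/sqrt(samples)
```

Note that the trace is preserved exactly sample by sample, while the off-diagonal structure of Q is washed out, which is precisely the loss of matrix information described above.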
In order to retain access to the matrix information contained in Q, let us proceed in an alternative fashion. Let us define a measure d̂[M] in which the SU(N) subgroup of the global unitary invariance group has been fixed. (The full global unitary invariance group is the product of this SU(N) with a global U(1) that is an overall phase times the unit matrix; since this U(1) commutes with Q, averaging over it causes no loss of the matrix information contained in Q, and so fixing the overall U(1) is not necessary.) We then define the average of Q over the unitary-fixed ensemble as

⟨Q⟩_ÂV = Ẑ^{-1} ∫ d̂[M] exp(−H) Q ,    (8a)

with

Ẑ = ∫ d̂[M] exp(−H)    (8b)

the partition function in which the global unitary invariance has been broken, and an orientation on the N-dimensional vector space has been fixed. Clearly, the procedure just described is a global unitary analog of the gauge fixing customarily employed in the case of local gauge invariances. If we change the recipe for fixing the global unitary invariance, then the average defined by Eq. (8a) will change in a manner that is in general complicated. However, we will show that the average of Tr Q in the unitary-fixed ensemble is independent of the fixing and is equal to that defined in Eq. (6a) by averaging over the original ensemble, so that

⟨Tr Q⟩_ÂV = ⟨Tr Q⟩_AV .    (9)

In other words, the average of the trace of Q takes the same value for any choice of unitary fixing. To make an analogy with local gauge fixing in gauge theories, the trace polynomials Tr Q are analogs of gauge invariant functions, while polynomials Q without a trace are analogs of gauge-variant quantities. Just as the gauge-variant potentials contain useful information in gauge theories, the unitary fixing-variant averages of polynomials Q contain useful structural information about matrix models.
To prove Eq. (9), we proceed by analogy with the standard Faddeev-Popov procedure used for local gauge fixing. Let us write an infinitesimal SU(N) transformation in generator form as U = exp(G), with G anti-self-adjoint and traceless. We take as the N²−1 infinitesimal parameters of the SU(N) transformation the real numbers g_j, j = 1, ..., N²−1, with those for j = 1, ..., N(N−1) given by the real and imaginary parts of the off-diagonal matrix elements of G, that is, by Re G_ij and Im G_ij for i < j. The remaining ones for j = N(N−1)+1, ..., N²−1 are given by the differences of the imaginary parts of the diagonal matrix elements of G, that is, by Im(G_11 − G_22), ..., Im(G_11 − G_NN). Let f_j({M}), j = 1, ..., N²−1, be a set of functions of the matrices M_1, ..., M_D with the property that the equations f_j({M}) = 0, j = 1, ..., N²−1, completely break the SU(N) invariance group, so that once they are imposed the only SU(N) solution of f_j({U† M U}) = 0 is U = 1. We consider now the integral

J = ∫ ∏_{d=1}^{D} d[M_d] Δ[{M}] K[{f_j}] G[{M}] ,    (10a)

with Δ[{M}] the Faddeev-Popov determinant det(∂f_j/∂g_k) evaluated at g = 0, and with the function K[{f_j}] taken as

K[{f_j}] = ∏_{j=1}^{N²−1} δ(f_j) .    (10b)

Here G is a global unitary invariant function of the matrices M_1, ..., M_D, such as a trace polynomial Tr Q or any function of trace polynomials (for example the partition function weight exp(−H)). Equation (10a) has the standard form of the Faddeev-Popov analysis, as formulated for example in the text of Weinberg [5] (except that when one is dealing with a non-compact local gauge invariance, where the limits of integration lie at infinity, one can take the function K to be a general function of the gauge-variant functions f_j; in the compact case considered here, the delta functions of Eq. (10b) must be used in order to make the integration limits irrelevant). The standard FP argument then shows that the integral in Eq. (10a) is independent of the constraints f_j. Briefly, the argument proceeds by replacing the dummy variable of integration d[M] by d[M_V], where M_V = V† M V, and integrating over the SU(N) matrix V. The group property of unitary transformations together with the chain rule then converts the determinant in Eq. (10a) into a Jacobian transforming the V integration into an integration over the constraints f_j, permitting the delta functions in Eq. (10b) to be integrated to give unity. This shows that the result is independent of the constraints, and that it is the same as the result obtained by integrating over the original unfixed ensemble, thus establishing Eq. (9). Clearly, this argument works only when the function G is a unitary invariant, so that it has no dependence on V. For example, if G is replaced by a polynomial in the matrices without an overall trace, then the unitary fixing constraints cannot be eliminated by integrating over V, and the result depends on the unitary fixing in a complicated way.
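Schematically (in our own notation; the paper's displayed equations at this point were lost in extraction), the Faddeev-Popov step amounts to inserting unity in the form

```latex
1 \;=\; \Delta[\{M\}]\int_{SU(N)} dV\,\prod_{j=1}^{N^{2}-1}\delta\!\Big(f_j\big(\{V^{\dagger} M V\}\big)\Big)
```

into the unfixed average of a unitary-invariant G[{M}]; changing variables M_d → V M_d V† then removes all V dependence from G, the measure, and Δ, so the V integral factors out as the finite group volume of SU(N), leaving the fixed integral of Eq. (10a) independent of the choice of the f_j.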
A specific realization of the general unitary fixing can be given when D ≥ 2, so that the set of matrices M_1, ..., M_D contains at least two independent self-adjoint matrices A = M_1 and B = M_2. We take the functions f_j, j = 1, ..., N²−1, to be linear functions of A and B, constructed as follows. As the f_j for j = 1, ..., N(N−1) we take the real and imaginary parts of the off-diagonal matrix elements of A, that is, the functions Re A_ij and Im A_ij for i < j. Equating these functions to zero forces the matrix A to be diagonal. The N remaining diagonal unitary transformations then commute with A, so that no further conditions can be furnished by use of A alone. However, the diagonal SU(N) transformations can always be used to make the off-diagonal matrix elements in the first row of the second matrix B have vanishing imaginary parts, leaving a residual Z_2^{N−1} symmetry that is broken by requiring these matrix elements to have positive semidefinite real parts. So for the remaining conditions f_j, j = N(N−1)+1, ..., N²−1, we take the N−1 functions Im B_1j, j > 1, and we restrict the integrations over Re B_1j, j > 1, to run from 0 to ∞. Since the function K chosen in Eq. (10b) enforces the conditions f_j = 0 in a sharp manner, they can be used to simplify the expression for the Faddeev-Popov determinant. A simple calculation now shows that when the f_j all vanish, the matrix elements of the commutator [G, M] needed in Eq. (10a) are given by

[G, A]_ij = (A_jj − A_ii) G_ij ,  Im [G, B]_1j = Im(G_11 − G_jj) Re B_1j + R ,    (11a)

with R a remainder containing only off-diagonal elements G_{i≠j} of the matrix G. Since Eq. (11a) shows that the matrix ∂f_j/∂g_k is triangular (its upper off-diagonal matrix elements are all zero because R has no dependence on the diagonal parameters of G), its determinant is given by the product of its diagonal matrix elements. Thus we have

Δ = ∏_{i<j} (A_ii − A_jj)² ∏_{j=2}^{N} Re B_1j ,    (12a)

the first factor of which is the familiar squared Vandermonde determinant. Substituting Eqs. (10b) and (12a) into Eq. (10a), we thus arrive at the formula for the unitary-fixed integral

J = ∫ ∏_{i=1}^{N} dA_ii ∏_{j=2}^{N} dRe B_1j d[B]_rest ∏_{d=3}^{D} d[M_d] ∏_{i<j} (A_ii − A_jj)² ∏_{j=2}^{N} Re B_1j G[{M}] ,    (12b)

where d[B]_rest denotes the measure over the matrix elements of B not involved in the fixing conditions, and with the integrals over Re B_1j, j = 2, ..., N, in Eq. (12b) running over positive values only.
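The two-step fixing just described is easy to realize numerically (our own sketch with random Hermitian inputs; the size N = 4 and the random seed are arbitrary choices): diagonalize A, then use the residual diagonal phases to make the first row of B real and non-negative, and evaluate the Faddeev-Popov factor of Eq. (12a):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

def random_hermitian(n, rng):
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (z + z.conj().T) / 2

A = random_hermitian(N, rng)
B = random_hermitian(N, rng)

# Step 1: diagonalize A, using up the off-diagonal directions of SU(N)
# (the conditions Re A_ij = Im A_ij = 0 for i < j).
eigvals, V = np.linalg.eigh(A)
A_f = V.conj().T @ A @ V
B_f = V.conj().T @ B @ V

# Step 2: residual diagonal unitaries send B_1j -> e^{i(phi_j - phi_1)} B_1j;
# choose the phases so the first row of B becomes real and non-negative.
phases = np.ones(N, dtype=complex)
phases[1:] = np.exp(-1j * np.angle(B_f[0, 1:]))
D = np.diag(phases)
A_f = D.conj().T @ A_f @ D   # diagonal A is unchanged by diagonal unitaries
B_f = D.conj().T @ B_f @ D

# Faddeev-Popov factor of Eq. (12a): squared Vandermonde times Re B_1j factors.
vandermonde_sq = np.prod([(eigvals[i] - eigvals[j]) ** 2
                          for i in range(N) for j in range(i + 1, N)])
fp_factor = vandermonde_sq * np.prod(B_f[0, 1:].real)
```

Only the phase differences φ_j − φ_1 act on B_1j, so the diagonal transformation can always be taken inside SU(N) by shifting the irrelevant overall phase; generically all Re B_1j come out strictly positive and the FP factor is nonzero.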
The part of this analysis involving only a single matrix A is well known in the literature [6]; what has been added here is the complete SU(N) fixing obtained by imposing a condition on the second matrix B as well. (The part of Eqs. (12a,b) involving each B_1j is just a planar radial integral ∫_0^∞ ρ dρ, with ρ = |B_1j| = Re B_1j, where the associated angular integral ∫_0^{2π} dφ has been omitted because it corresponds to a U(1) factor that has been fixed by the condition φ = 0.)

With this choice of unitary fixing, the unitary-fixed average ⟨Q⟩_ÂV defined in Eq. (8a) has a characteristic form that is dictated by the symmetries of the unitary-fixed ensemble. Since the unitary fixing conditions are symmetric under permutation of the basis states with labels 2, 3, ..., N, and since this permutation is also a symmetry of the unfixed measure d[M], the matrix ⟨Q⟩_ÂV must be symmetric under this permutation of basis states. Thus, there are only five independent matrix elements,

(⟨Q⟩_ÂV)_11 = α ,  (⟨Q⟩_ÂV)_jj = β , j = 2, ..., N ,  (⟨Q⟩_ÂV)_1j = γ , j = 2, ..., N ,  (⟨Q⟩_ÂV)_j1 = δ , j = 2, ..., N ,  (⟨Q⟩_ÂV)_jk = ε , j ≠ k , 2 ≤ j, k ≤ N .

In this notation, the original unfixed average ⟨Q⟩_AV defined by Eq. (7b) is given by

⟨Q⟩_AV = (1/N) [α + (N−1)β] 1 ,

showing explicitly that there is a loss of structural information in using the unfixed average. But even the unitary-fixed average has a structure that is greatly restricted as compared with a general N × N matrix. (Similar reasoning applies to the partial unitary fixing in which one only imposes the condition that A should be diagonal. Since this condition is symmetric under permutation of the basis states with labels 1, ..., N, the partially unitary-fixed average of a polynomial Q, defined by integrating with the measure ∏_{i=1}^{N} dA_ii ∏_{i<j} (A_ii − A_jj)² ∏_{d=2}^{D} d[M_d], must also have this permutation symmetry, and thus must be a c-number times the unit matrix.)

We now introduce ghost integrals to represent the determinant Δ. Let ω_ij and ω̄_ij be the matrix elements of independent N × N complex anti-self-adjoint Grassmann matrices ω and ω̄. We take ω to be traceless, Tr ω = 0, while we take ω̄ to have a vanishing 11 matrix element, ω̄_11 = 0. The integration measure for ω is defined by

dω = ∏_{i<j} dRe ω_ij dIm ω_ij ∏_{j=2}^{N} dIm(ω_jj − ω_11) ,

while the integration measure for ω̄ is taken as

dω̄ = ∏_{i<j} dRe ω̄_ij dIm ω̄_ij ∏_{j=2}^{N} dIm ω̄_jj .

We can now use these Grassmann matrices to give a ghost representation of the factors in Eq. (12a) involving the matrices A and B.
Since the matrix A is diagonal, we have

Tr ω̄ [ω, A] = ∑_{i≠j} ω̄_ij (A_ii − A_jj) ω_ji .

Hence, up to an overall sign, the square of the Vandermonde determinant ∏_{i<j} (A_ii − A_jj)² is given by the ghost integral

∫ d′ω d′ω̄ exp( Tr ω̄ [ω, A] ) ,

with the diagonal factors dIm(ω_jj − ω_11), dIm ω̄_jj, j = 2, ..., N omitted from the primed integration measures d′ω and d′ω̄. To represent the second factor in Eq. (12a) as a ghost integral, we use the diagonal matrix elements of ω and ω̄ in an analogous fashion. Thus, up to a phase, the factor ∏_{j=2}^{N} Re B_1j is given by the ghost integral

∫ ∏_{j=2}^{N} dIm(ω_jj − ω_11) dIm ω̄_jj exp( ∑_{j=2}^{N} ω̄_jj i(ω_jj − ω_11) B_1j ) .

By defining a matrix X with X_11 = 0 whose only nonvanishing elements are X_j1 = i ω̄_jj (ω_jj − ω_11), j = 2, ..., N, so that Tr XB = ∑_{j=2}^{N} ω̄_jj i(ω_jj − ω_11) B_1j, the two ghost integrals combine to give, up to a constant phase,

Δ = ∫ dω dω̄ exp( Tr ω̄ [ω, A] + Tr XB ) .    (15)

Yet another equivalent form is obtained by noting that

[B, ω]_1j = B_1j (ω_jj − ω_11) + S ,    (16a)

with the remainder S denoting terms that only involve matrix elements ω_ij with i ≠ j. The remainder S makes a vanishing contribution to the Grassmann integrals when Eq. (16a) is substituted for B_1j (ω_jj − ω_11) in Eq. (15), since one factor of (ω_jj − ω_11) for each j = 2, ..., N is needed to give a nonvanishing integral, and each such term in the exponent is already accompanied by a factor ω̄_jj, so that terms with additional such factors vanish inside the Grassmann integrals. (We are just using here the fact that with ζ, ζ̄ Grassmann variables, ∫ dζ dζ̄ exp(ζ̄ W ζ + U ζ) = W up to normalization, with no dependence on U.) Since the diagonal matrix elements of ω are pure imaginary, so that i(ω_jj − ω_11) = −Im(ω_jj − ω_11), the substitution of Eq. (16a) into Eq. (15) gives the alternative formula

Δ = ∫ dω dω̄ exp( Tr ω̄ [ω, A] + ∑_{j=2}^{N} ω̄_jj i [B, ω]_1j ) ,    (17)

again up to a constant phase. This formula will be used to establish a BRST [7] symmetry, the topic to which we now turn.
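The ghost representation rests on the standard complex Grassmann Gaussian integral; as a reminder (our own normalization, not one of the paper's displays):

```latex
\int \prod_{k} d\bar\eta_{k}\, d\eta_{k}\;
\exp\!\Big(\textstyle\sum_{k,l}\bar\eta_{k}\,W_{kl}\,\eta_{l}\Big)
\;\propto\;\det W .
```

For the kernel generated by Tr ω̄[ω, A] with A diagonal, the modes ω_ij and ω_ji carry the eigenvalue-difference weights A_ii − A_jj and A_jj − A_ii respectively, so each pair i < j contributes −(A_ii − A_jj)², reproducing the squared Vandermonde determinant up to the overall sign noted in the text.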
To formulate a BRST invariance transformation corresponding to Eq. (17), we rewrite the product of δ functions in Eq. (10b) and the half-line restriction on the integrals over Re B_1j in terms of their Fourier representations, by introducing three sets of Nakanishi-Lautrup [8] variables. One set are the elements h_ij of a self-adjoint N × N matrix h with vanishing diagonal matrix elements, so that h_ii = 0, i = 1, ..., N. The integration measure for this set is defined as

dh = ∏_{i<j} dRe h_ij dIm h_ij .

The second set are N−1 real numbers H_j, j = 2, ..., N, with integration measure

dH = ∏_{j=2}^{N} dH_j .

In terms of these variables, the product of δ functions of Eq. (10b) can be represented (up to an overall constant factor) as

∏_{j} δ(f_j) ∝ ∫ dh dH exp( i Tr hA + i ∑_{j=2}^{N} H_j Im B_1j ) .    (19a)

The third set are N−1 complex numbers k_j, j = 2, ..., N, integrated along a contour on the real axis with integration measure

dk = ∏_{j=2}^{N} dk_j / [2πi (k_j − iǫ)] ,    (19b)

with infinitesimal positive ǫ. These can be used to insert a product of step functions,

∏_{j=2}^{N} θ(Re B_1j) = ∫ dk exp( i ∑_{j=2}^{N} k_j Re B_1j ) ,    (19c)

allowing the integrals over the Re B_1j in Eq. (12b) to be taken from −∞ to ∞.
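The contour prescription behind Eq. (19c) is the standard Fourier representation of the step function; as a reminder (our normalization):

```latex
\theta(x)\;=\;\lim_{\epsilon\to 0^{+}}\;
\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{dk\;e^{ikx}}{k-i\epsilon}.
```

For x > 0 the contour closes in the upper half plane and picks up the pole at k = iǫ, giving 1; for x < 0 it closes in the lower half plane, encircles no pole, and gives 0.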
The first term in the exponent in Eq. (19a), i Tr hA = Tr(ihA), is already of trace form, while the second term can be rewritten as i ∑_{j=2}^{N} H_j Im B_1j = Tr YB, with Y the matrix whose only nonvanishing elements are Y_1j = −Y_j1 = −(1/2) H_j, j = 2, ..., N; an alternative form of Eq. (19a) is therefore

∏_{j} δ(f_j) ∝ ∫ dh dH exp( Tr(ihA) + Tr YB ) .

Similarly, defining a matrix Z by Z_11 = 0; Z_ij = 0, 2 ≤ i, j ≤ N; Z_1j = Z_j1 = (1/2) i k_j, the exponent in Eq. (19c) can be rewritten as ∑_{j=2}^{N} i k_j Re B_1j = Tr ZB, and so an alternative form of Eq. (19c) is

∏_{j=2}^{N} θ(Re B_1j) = ∫ dk exp( Tr ZB ) .

These equations allow us to write Eq. (12b) in terms of the unrestricted measure dM and the ghost representation of Δ as

J = C ∫ dM dh dH dk dω dω̄ exp( Tr[(ih + {ω, ω̄})A] + ∑_{j=2}^{N} ω̄_jj i [B, ω]_1j + Tr[(Y + Z)B] ) G[{M}]
  = C ∫ dM dh dH dk dω dω̄ exp( Tr[(ih + {ω, ω̄})A] + Tr[(X + Y + Z)B] ) G[{M}] ,    (20)

with C an overall constant factor. The first representation of J in Eq. (20) will be used to establish a BRST invariance, while the second will be used to discuss Ward identities obeyed by the matrix-valued correlations.
We now show that the first integral in Eq. (20) is invariant under the nilpotent BRST transformation

δA = [A, ω]θ ,  δB = [B, ω]θ ,  δM_d = [M_d, ω]θ , d = 3, ..., D ,
δω = ω² θ ,  δω̄_ij = −i h_ij θ , i ≠ j ,  δω̄_jj = −i H_j θ , j = 2, ..., N ,
δh = 0 ,  δH_j = 0 ,  δk_j = 0 ,    (21)

with θ a c-number Grassmann parameter. (The part of this transformation involving ω is patterned after the BRST transformation for the local operator gauge invariant case studied by Adler [9].) We first remark that since Eq. (21) has the form of an infinitesimal unitary transformation with generator ωθ acting on the matrix variables M_d, the global unitary invariant function G[{M}] and the matrix integration measure dM are both invariant. We consider next the terms in the exponent in Eq. (20). From Eq. (21) we have

δ[A, ω] = [[A, ω]θ, ω] + [A, ω²θ] = −[A, ω²]θ + [A, ω²]θ = 0 .    (22a)

Hence for the terms in the exponent of Eq. (20) involving A, we get (using the fact that A is diagonal) a vanishing total variation, the variations generated by δA, δω, and δω̄ cancelling among themselves. For the terms in the exponent of Eq. (20) involving B but not involving the parameters k_j, the variations inside the summation over j likewise cancel, since δ[B, ω] = 0 by the same argument as in Eq. (22a). So the entire exponent of the first representation in Eq. (20) is BRST invariant, apart from the k_j Re B_1j terms. But the shifts in the Re B_1j are linear in ω while not involving ω̄. Thus (since, as we shall see shortly, the integration measures are invariant), the shifts in the terms in the exponent involving the products k_j Re B_1j make a vanishing contribution to the Grassmann integrals, by an argument similar to that used to justify the neglect of S in Eq. (16a).

An alternative method of including the step functions, which leads to a manifestly BRST invariant integrand, is to include in the exponent in the first representation of Eq. (20) an additional term −∑_{j=2}^{N} κ_j Re[B, ω]_1j, with κ_j auxiliary Grassmann parameters that are not integrated over. This term is linear in ω but does not involve ω̄, and so again makes a vanishing contribution to the Grassmann integrals in Eq. (20). The BRST transformation of Eq. (21) is then augmented by the rule δκ_j = −i k_j θ, with the result that the combination

∑_{j=2}^{N} ( i k_j Re B_1j − κ_j Re[B, ω]_1j )

is BRST invariant. Continuing the BRST analysis, since Tr στ = −Tr τσ for any two Grassmann odd grade matrices σ and τ, we have Tr ω² = −Tr ω² = 0, and so the condition that ω should be traceless is preserved by Eq. (21). (On the other hand, (ω²)_11 is nonzero even when ω_11 is zero, which is why we must use a traceless condition for ω, rather than a condition ω_11 = 0 as for ω̄.) Also, letting * denote complex conjugation, one verifies directly that the property that ω is anti-self-adjoint is preserved by Eq. (21). The integration measures dω, dω̄, dh, dH, and dk are likewise invariant. Putting everything together, and using the cyclic property of the trace for Grassmann odd matrices to rewrite Tr ω̄[ω, A] as Tr{ω, ω̄}A, we have

⟨Q⟩_ÂV = Ẑ^{-1} ∫ dM dh dH dk dω dω̄ exp( Tr[(ih + {ω, ω̄})A] + Tr[(X + Y + Z)B] ) exp(−H) Q ,    (24)

with Ẑ given by the expression on the right-hand side of Eq. (24) with Q replaced by unity.

Ward identities follow from the fact that the unrestricted measure dM is invariant under a shift of any matrix M_d by a constant δM_d, which, under the assumption that surface terms related to the shift vanish, implies

0 = ∫ dM dh dH dk dω dω̄ δ_{M_d} [ exp( Tr[(ih + {ω, ω̄})A] + Tr[(X + Y + Z)B] ) exp(−H) Q ] .    (25a)

When H and Q are varied with respect to M_d, the factor δM_d can be cyclically permuted to the right in each term of the varied trace polynomials, giving the formulas

δ_{M_d} H = Tr[ (δH/δM_d) δM_d ] ,  δ_{M_d} Q = Tr[ (δQ/δM_d) δM_d ] ,    (25b)

which [3] define the variational derivatives of the trace polynomials with respect to the operator M_d. Carrying through the variations of all terms of Eq. (25a), and dividing by Ẑ, we are left with an expression of the form

0 = Tr ⟨W_d⟩_ÂV δM_d .    (26a)

However, since δM_d is an arbitrary self-adjoint matrix, the vanishing of the real and imaginary parts of Eq. (26a) implies the matrix identity

0 = ⟨W_d⟩_ÂV .    (26b)

For d = 3, ..., D, the variation δ_{M_d} in Eq. (25a) acts only on the product exp(−H) Q. For d = 1 and d = 2, corresponding to M_1 = A and M_2 = B, there are additional contributions to the Ward identities arising from variations of the traces involving A and B in the first exponential on the right-hand side of Eq. (25a), which arose from the unitary fixing procedure. Hence from Eq. (20) we are able to get explicit forms of all of the Ward identities, including those obtained by varying the matrices singled out in the unitary invariance fixing. Note that were we to employ the original ensemble average of Eq. (6a), which has no unitary fixing, in deriving the Ward identities, then Eq. (7b) implies that we would only obtain the trace of the matrix relation of Eq. (26b). In other words, unitary fixing is essential for extracting the full content of the Ward identities; without it, all nontrivial matrix structure is averaged out.
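As a consistency check on the nilpotency of the BRST transformation of Eq. (21) (our own verification, not one of the paper's displays), act twice on a matrix variable with independent Grassmann parameters θ₁, θ₂:

```latex
\delta_{2}\delta_{1}A
\;=\;[\,\delta_{2}A,\;\omega\,]\theta_{1}+[A,\;\delta_{2}\omega\,]\theta_{1}
\;=\;-[A,\omega^{2}]\theta_{2}\theta_{1}+[A,\omega^{2}]\theta_{2}\theta_{1}
\;=\;0 ,
```

where the minus sign in the first term arises from anticommuting θ₂ through the Grassmann odd matrix ω; the same cancellation underlies the vanishing of δ[A, ω] and δ[B, ω] used in the invariance argument.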