On the real spectrum of a product of Gaussian matrices

Let X_m = G_1 ⋯ G_m denote the product of m independent random matrices of size N × N, with each matrix in the product consisting of independent standard Gaussian variables. Denoting by N_R(m) the total number of real eigenvalues of X_m, we show that for m fixed, as N → ∞,

E(N_R(m)) = \sqrt{2mN/\pi}\,(1 + o(1)).


Introduction
The subject of non-Hermitian random matrix theory can be said to originate in Ginibre's 1965 paper [16], which introduced three basic random matrix ensembles of interest. These ensembles consist of N × N matrices of Gaussian variables over the real, complex or quaternion number systems respectively. For Hermitian matrices, these number systems had been shown earlier by Dyson [8] to relate to global time reversal symmetry in complex quantum systems. In the present paper we are exclusively interested in the real Ginibre ensemble, which is defined by the probability density function

P(G) = (2\pi)^{-N^2/2} e^{-\frac{1}{2}\mathrm{Tr}(GG^T)},   (1.1)

acting on the set of all N × N real matrices¹ with respect to the Lebesgue measure. An intriguing feature of the real case is that a certain number of the eigenvalues are real with non-zero probability. Such real eigenvalues were studied by Edelman, Kostlan and Shub [9,10], who calculated their expected number and asymptotic density in the limit N → ∞.
Since then it was realized that the real eigenvalues form an interesting point process in their own right, with connections to random dynamical systems [15], annihilating and coalescing Brownian motions [26,27], and intermediate spectral statistics in quantum chaos [2]. The purpose of the present paper is to study the real eigenvalues of the random matrix product G_1 ⋯ G_m, where for each i = 1, …, m the G_i are independent and distributed according to (1.1). Products of random matrices are currently a rapidly evolving field, with many new developments occurring in the last three or four years; see [1] for an overview.

* Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. E-mail: n.simm@warwick.ac.uk
¹ For ease of presentation we will assume throughout that N is even; the case of N odd can be dealt with similarly and does not affect the final results.
Our first result answers one of the most basic questions regarding a random product matrix: how many of the eigenvalues are real?

Theorem 1.1. For fixed m, as N → ∞,

E(N_R(m)) = \sqrt{2mN/\pi}\,(1 + o(1)).   (1.2)
An obvious yet intriguing feature of (1.2) is that for m > 1 more eigenvalues congregate on the real axis. This phenomenon was also discussed recently in the context of the probability that all eigenvalues are real, which was observed to increase monotonically to 1 as m → ∞ with N fixed [12,13,22]. These works were motivated by a recent application of the real eigenvalues of products to quantum entanglement [18]. The expected value (1.2) has been studied numerically in the physically motivated works [18,17,7], but the value of the pre-factor 2m/π is not discussed there and, despite its simplicity, appears to be absent from the literature. Such a concise result is quite surprising, given that the spectrum of a product typically depends in a complicated way on the spectra of the individual factors. When m is fixed, the asymptotic order of magnitude in (1.2) is √N, as is the case for m = 1 [10]. Still for m = 1, Tao and Vu [25] have shown that this order of magnitude persists for matrices with more general distributions on the entries. The same √N order of magnitude also holds for the expected number of real roots of certain random polynomials [3]. The universality of this so-called '√N law' and its physical applications are discussed in [2].

Next we give a more precise result regarding the global spectral distribution of the product matrix. Let λ_1, …, λ_{N_R(m)} denote the real eigenvalues of G_1 ⋯ G_m. The averaged empirical spectral density of these points, rescaled to the natural spectral scale N^{m/2}, is defined by the quantity

\rho^R_{N,m}(x) = E \sum_{i=1}^{N_R(m)} \delta(x - \lambda_i N^{-m/2}),   (1.3)

and its appropriately normalized version

\bar\rho^R_{N,m}(x) = \rho^R_{N,m}(x) / E(N_R(m)).   (1.4)

Theorem 1.2. For fixed m, the normalized density (1.4) converges weakly, as N → ∞, to the probability density

\frac{1}{2m}\,|x|^{1/m - 1}, \qquad x \in [-1, 1].   (1.5)

This result proves a conjecture of Forrester and Ipsen in [13]. It states that the global empirical spectral distribution of the real eigenvalues of the product is asymptotically, for large N, the same as that of G_1^m, appropriately symmetrized.
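The √N growth and the pre-factor 2m/π in (1.2) are easy to observe numerically. The following sketch is an illustration of ours, not part of the paper's argument; the matrix size, trial count and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_real_eigenvalues(N, m, trials=200):
    """Monte Carlo estimate of E[N_R(m)] for a product of m
    independent N x N real Ginibre matrices."""
    counts = []
    for _ in range(trials):
        X = np.eye(N)
        for _ in range(m):
            X = X @ rng.standard_normal((N, N))
        eig = np.linalg.eigvals(X)
        # for a real input matrix, LAPACK reports real eigenvalues
        # with an exactly zero imaginary part
        counts.append(np.count_nonzero(eig.imag == 0))
    return float(np.mean(counts))

for m in (1, 2, 3):
    N = 50
    est = mean_real_eigenvalues(N, m)
    pred = np.sqrt(2 * m * N / np.pi)
    print(f"m={m}: simulated {est:.2f}, sqrt(2mN/pi) = {pred:.2f}")
```

At moderate N the agreement is only approximate, consistent with the undetermined lower-order corrections discussed later in the paper.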
For the distribution of all real and complex eigenvalues this type of relation between product and power matrices is known due to techniques coming from free probability [5,6] and was generalized to non-Gaussian matrices in [21]. However, it is apparently unclear how such techniques can apply to the real spectrum itself or to obtain (1.5).
Our proofs of Theorems 1.1 and 1.2 follow a unified approach based on the method of moments. Namely, the strategy is to compute the moments

M_{k,N}(m) = \int_{\mathbb{R}} x^k \rho^R_{N,m}(x)\, dx   (1.6)

to leading order as N → ∞. In the particular case k = 0 we obtain M_{0,N}(m) = E(N_R(m)) and consequently (1.2). Then it will be proved that

\lim_{N \to \infty} \frac{M_{2k,N}(m)}{M_{0,N}(m)} = \frac{1}{2km + 1},

which are the moments of the density on the right-hand side of (1.5). Since this density is uniquely determined by its moments, we establish the weak convergence (1.5). This is a popular strategy in random matrix theory and goes back to Wigner, who used it to derive the famous semi-circle law for Hermitian random matrices with independent entries. A subtlety here is that one normally obtains the moments by computing the expected traces of powers of the matrix. The problem is that such traces necessarily involve contributions from the complex eigenvalues and thus are not obviously related to (1.6). For the same reason, techniques coming from free probability do not seem to be of help. Instead, our strategy is based on a recent computation of Forrester and Ipsen [13], showing that (1.3) can be written exactly in terms of Meijer G-functions. After obtaining a suitable integral representation for (1.6), an asymptotic analysis of this exact formula leads to our main results, Theorems 1.1 and 1.2.
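To illustrate the moment method in the classical setting just mentioned, the sketch below (standard Wigner matrices, nothing product-specific; the sizes and trial counts are arbitrary) estimates the empirical moments and compares them with the Catalan numbers, the even moments of the semi-circle law.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(42)

def wigner_moment(N, k, trials=30):
    """Estimate E[N^{-1} Tr (W / sqrt(N))^k] for real symmetric
    Wigner matrices W with independent Gaussian entries."""
    total = 0.0
    for _ in range(trials):
        A = rng.standard_normal((N, N))
        W = (A + A.T) / np.sqrt(2)  # symmetric, off-diagonal variance 1
        M = np.linalg.matrix_power(W / np.sqrt(N), k)
        total += np.trace(M) / N
    return total / trials

# the limiting 2k-th moments are the Catalan numbers C_k
for k in (1, 2, 3):
    catalan = comb(2 * k, k) // (k + 1)
    print(2 * k, wigner_moment(300, 2 * k), catalan)
```

For the product matrix itself this trace-based route is unavailable, precisely because traces mix in the complex eigenvalues; the simulation is only meant to show what "moments determine the limit law" looks like in practice.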

Moments and Meijer G-functions
In this section we review the results of [13] for the finite-N density (1.3) and use them to give an exact formula for the moments (1.6). As this necessarily involves working with Meijer G-functions, we begin with a definition.

Definition 2.1. For a set of real parameters a_1, …, a_p, b_1, …, b_q and z ∈ C, the Meijer G-function is defined by the following contour integral:

G^{m,n}_{p,q}\left(z \,\middle|\, \begin{matrix} a_1, \ldots, a_p \\ b_1, \ldots, b_q \end{matrix}\right) = \frac{1}{2\pi i} \int_{\gamma} \frac{\prod_{j=1}^{m} \Gamma(b_j - s) \prod_{k=1}^{n} \Gamma(1 - a_k + s)}{\prod_{j=m+1}^{q} \Gamma(1 - b_j + s) \prod_{k=n+1}^{p} \Gamma(a_k - s)}\, z^s \, ds.   (2.1)

The contour γ goes from −i∞ to i∞, with all poles of Γ(b_j − s) lying to the right of γ for all j = 1, …, m, and all poles of Γ(1 − a_k + s) lying to the left of γ for all k = 1, …, n.
Functions of Meijer G type appear very frequently in the study of random matrix products. This might be expected from results in the scalar case: the density function for a product of m independent standard Gaussian variables is proportional to

w_m(x) = G^{m,0}_{0,m}\left(\frac{x^2}{2^m} \,\middle|\, 0, \ldots, 0\right),   (2.2)

which plays the same fundamental role that w_1(x) = e^{−x²/2} plays in the analysis of a single Ginibre matrix. In that case it is known that the real eigenvalues form a Pfaffian point process, meaning that all p-point correlation functions of real eigenvalues can be written as p × p Pfaffians involving an explicit 2 × 2 matrix kernel (see e.g. [23,14,4]). Recently this has been shown to extend to products of random matrices.
Theorem 2.2 (Forrester and Ipsen [13]). The real eigenvalues of the matrix product G_1 ⋯ G_m form a Pfaffian point process; in particular, the p-point correlation function of the real eigenvalues can be written as a p × p Pfaffian involving an explicit 2 × 2 matrix correlation kernel. The results in [13] also include correlations involving purely complex eigenvalues, but we will not need them here. From Theorem 2.2, we can extract the moments (1.6) explicitly.
Lemma 2.3. The even moments (1.6) admit an exact evaluation of the form

M_{2k,N}(m) = N^{-mk} \sum_{j} c_{j,k}(m)\, a_{j,k},   (2.9)

where a_{j,k} is a particular case of the Meijer G-function (2.10).

Proof. By Theorem 2.2, the unscaled density of real eigenvalues is given by (2.8) with p = 1; noting that I(x, x) = 0, we obtain an explicit formula for ρ_1(x). Since w_m(x) in (2.2) is an even function of x, so too is ρ_1(x), and therefore all odd moments vanish identically. To compute the even moments, consider the coefficients c_{j,k}(m) defined in (2.11). We multiply both sides of (2.11) by x^{2k} and integrate over R, leading to (2.14). In obtaining (2.14) we have used that the 2k-th moments of ρ^R_{N,m}(x) in (1.3) are N^{-mk} times those of the unscaled density ρ_1(x). The quantity c_{j,k}(m) is skew-symmetric and depends on the parity of j and k, as shown in [13].

We now deduce a useful integral representation for a_{j,k}; this appears without proof in [13].
Lemma 2.4. For j, k ≥ 1, the coefficients a_{j,k} in (2.10) admit the integral representation (2.19), where u := \prod_{l=1}^{m} t_l.

Proof. If u > 1 we close the contour γ_R to the right and pick up the pole at s = 0. The error is given by the integral over the large semicircular contour C_R := {−1/4 − iRe^{iφ} : 0 < φ < π}, which we bound as in (2.20). If u > 1 + ε, the right-hand side of (2.20) is uniformly bounded by u^{1/4}/R, and inserting this into (2.19) gives zero in the limit R → ∞. If 1 < u < 1 + ε, we change variables t_m = u/(t_1 ⋯ t_{m−1}) and use the bound (1 + u/t)^{−k−j+1/2} ≤ t to show that this contribution also vanishes. If instead u < 1, we close the contour to the left, where the integrand is analytic; the only contribution then comes from the integral over C̃_R := {−1/4 + iRe^{iφ} : 0 < φ < π}, which tends to zero by an identical argument.

Asymptotic analysis
In this section we prove our main results, Theorems 1.1 and 1.2. We begin by showing that both theorems are consequences of Lemmas 2.3 and 2.4, together with j → ∞ asymptotics for the coefficients a_{j,k} in the sums (2.9), which we present below as Proposition 3.1. Due to the relation (2.10), this yields an apparently previously unknown asymptotic expansion of a Meijer G-function with large parameters. Such expansions were also studied in [11], but only for a specific family of indices which does not cover the situation considered here. Before giving the proof of Proposition 3.1, we show how its conclusion quickly implies the main results of the paper.
Proof of Theorems 1.1 and 1.2. First note that we may restrict to the contribution to the sums (2.9) from sufficiently large j > j_0(m, k), where the asymptotics (3.2) hold, since the sum from j = 0 to j_0 is O(N^{-mk}). Inserting the asymptotics (3.2) into (2.9) and summing over j then yields the claimed limits. We remark that the subleading term in the asymptotics (1.2) is left undetermined. Although we also do not compute the log(N) correction, it seems reasonable that it will cancel out of the final results, as we know happens for m = 1 [10]. However, proving this requires looking at the next-order term in (3.2), which is left for future investigation.
The only remaining task is to deduce the asymptotics (3.2) of Proposition 3.1. This turns out to be a standard application of the saddle point method for multi-dimensional integrals. However, in this case the analysis is complicated by the fact that the saddle point lies on the boundary of the m-dimensional domain of integration. We remark that recent works analysing the asymptotic form of the averaged characteristic polynomial for products of complex Gaussian random matrices also face this task [19,20]. A boundary saddle point requires a slightly different analysis compared with the typical case of an interior critical point, and leads to an expansion in half-integer powers of j instead of the integer powers one might normally expect. Hence we give a self-contained exposition for our particular application.
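A toy one-dimensional Laplace integral (our own illustration, not taken from the paper) shows why a maximiser on the boundary of the integration domain contributes only half a Gaussian peak and produces an expansion in powers of j^{−1/2}.

```python
import mpmath as mp

def boundary_laplace(j):
    """I(j) = int_0^inf exp(-j*(x^2 + x^3)) dx: the exponent is
    maximised at the endpoint x = 0, a boundary saddle point."""
    return mp.quad(lambda x: mp.exp(-j * (x ** 2 + x ** 3)), [0, mp.inf])

for j in (10, 100, 1000):
    half_gaussian = mp.sqrt(mp.pi / j) / 2  # only half the peak lies in the domain
    correction = -mp.mpf(1) / (2 * j)       # first correction, from the cubic term
    print(j, boundary_laplace(j), half_gaussian + correction)
```

The leading term is of order j^{−1/2} and the corrections proceed through j^{−1}, j^{−3/2}, and so on; an interior maximum would instead give the full Gaussian peak and an expansion in integer powers of j^{−1}.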
Proof of Proposition 3.1. By definition we have an m-fold integral representation for a_{j,k}, with phase function Φ(x) for which the quantity (3.13) is clearly negative, so that p is the unique maximum. It will be useful in what follows to have the formulae (3.14), which follow from known facts regarding the determinant and inverse of a symmetric tri-diagonal matrix. The analysis begins by localizing the integral near the saddle point, so that (3.15) holds up to exponentially small errors. This is justified because away from the unique maximum the integrand admits an exponential bound, which gives rise to an exponentially small contribution for any ε > 0. By Taylor's theorem, for sufficiently small ε > 0, we may expand Φ(x) near the saddle point.
where T_3(x_0, p) is the third-order term in the Taylor expansion of Φ(x) near x = p.
Combined with a simple bound on the exponential function, we obtain from (3.18) the estimate (3.20), where the error is uniform in x ∈ [1 − ε, 1 + ε]^m and in all j > 0 (but not uniform in m). Note that e^{jΦ(p)} = 4^{−mj}, which is the factor appearing in the claimed asymptotics. The quantity T_3(u, p) can be calculated explicitly, as in (3.22). Inserting (3.22) and (3.21) into (3.20) shows that it remains to compute of order m Gaussian integrals with low-order polynomials in the integrand. All such integrals can be computed in terms of a Gaussian generating function.
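The facts about symmetric tri-diagonal matrices invoked around (3.14) include the classical continuant three-term recurrence for the determinant; a minimal numerical check (with an arbitrary example matrix of ours, not the one from the proof) is:

```python
import numpy as np

def tridiag_det(diag, off):
    """Determinant of a symmetric tridiagonal matrix with diagonal
    `diag` and off-diagonal `off`, via the continuant recurrence
    d_k = diag[k] * d_{k-1} - off[k-1]**2 * d_{k-2}."""
    d_prev, d = 1.0, diag[0]
    for k in range(1, len(diag)):
        d_prev, d = d, diag[k] * d - off[k - 1] ** 2 * d_prev
    return d

a = [2.0, 2.0, 2.0, 2.0]
b = [-1.0, -1.0, -1.0]
M = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(tridiag_det(a, b), np.linalg.det(M))
```

The same recurrence, run backwards, yields closed forms for the entries of the inverse, which is the second fact used in the proof.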