Universal K-matrix distribution in beta=2 Ensembles of Random Matrices

The K-matrix, also known as the "Wigner reaction matrix" in nuclear scattering or the "impedance matrix" in electromagnetic wave scattering, is essentially given by an M x M diagonal block of the resolvent (E-H)^{-1} of a Hamiltonian H. For chaotic quantum systems the Hamiltonian H can be modelled by random Hermitian N x N matrices taken from invariant ensembles with the Dyson symmetry index beta=1,2,4. For beta=2 we prove by explicit calculation a universality conjecture due to P. Brouwer, which is equivalent to the claim that the probability distribution of K, for a broad class of invariant ensembles of random Hermitian matrices H, converges to a matrix Cauchy distribution with density ${\cal P}(K)\propto \left[\det{({\lambda}^2+(K-{\epsilon})^2)}\right]^{-M}$ in the limit $N\to \infty$, provided the parameter M is fixed and the spectral parameter E is taken within the support of the eigenvalue distribution of H. In particular, we show that for a broad class of unitary invariant ensembles of random matrices finite diagonal blocks of the resolvent are Cauchy distributed. The cases beta=1 and beta=4 remain outstanding.

The phenomenon of chaotic resonance scattering of quantum waves (or their classical analogues) has attracted considerable theoretical and experimental interest for more than two decades, see e.g. the articles in [1]. The resonances manifest themselves via fluctuating structures in scattering or transport observables, and the statistical properties of such objects can be successfully described within the framework of Random Matrix Theory [2,3]. The most important object in such an approach is the energy-dependent M × M random unitary scattering matrix S(E), with $S^{\dagger}(E)S(E)=\mathbf{1}_M$, which relates the amplitudes of incoming and outgoing waves. Here the integer M stands for the number of open channels at the given energy, the dagger denotes Hermitian conjugation and $\mathbf{1}_M$ is the M × M identity matrix. Statistical properties of scattering observables considered at a fixed energy E of the incoming waves can be inferred from the corresponding probability density of S = S(E), derived starting from rather general physical principles. Those include unitarity, causality and (if relevant) time-reversal invariance imposed on S, combined with the assumption of maximal entropy (minimum information). The procedure yields the so-called Poisson kernel distribution with density [4]
$$ P(S)\;=\;C_{\beta}\,\frac{\left[\det\left(\mathbf{1}_M-\bar{S}\bar{S}^{\dagger}\right)\right]^{(\beta M+2-\beta)/2}}{\left|\det\left(\mathbf{1}_M-S\bar{S}^{\dagger}\right)\right|^{\beta M+2-\beta}}\,, \qquad (1)$$
where $\bar{S}$ stands for the mean of the scattering matrix, β = 1, 2, 4 is the parameter related to the underlying symmetries with respect to time reversal, and $C_{\beta}$ is a normalization constant. The mean $\bar{S}$ is determined by the details of the coupling of the system to the continuum and thus contains all the information which needs to be specified for a given scattering system. In particular, in the simplest, yet most fundamentally important "perfect coupling" case $\bar{S} = 0$, the density in (1) is constant, implying that the S-matrix is uniformly distributed over the unitary matrices of the given symmetry.
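For orientation, the simplest instance of (1) explains the name: for M = 1 and β = 2 the density reduces to the classical Poisson kernel of the unit disc (a standard fact, sketched here with $S=e^{i\theta}$):

```latex
% M = 1, beta = 2: with S = e^{i\theta} on the unit circle and mean \bar S,
% the density (1) reduces to the classical Poisson kernel of the disc,
P(\theta) \;=\; \frac{1}{2\pi}\,
   \frac{1-|\bar S|^{2}}{\bigl|\,1-\bar S\, e^{-i\theta}\,\bigr|^{2}}\,,
\qquad \int_{0}^{2\pi} P(\theta)\, d\theta \;=\; 1 ,
% and perfect coupling \bar S = 0 gives the uniform phase distribution
% P(\theta) = 1/(2\pi).
```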
Although the above method has proved to be very successful in the statistical description of scattering characteristics at a fixed energy [2], it cannot be used to study the statistics of fluctuations of scattering observables over an energy interval comparable with the typical separation between resonances. The latter task can be most successfully achieved in an alternative powerful approach going back to the pioneering work [5], which is based on the paradigm of random matrix properties of the underlying Hamiltonian H describing the quantum chaotic behaviour of the closed counterpart of the scattering system. In such an approach the resonance part of the S-matrix is expressed in terms of the resolvent of that Hamiltonian as
$$ S(E)\;=\;\frac{\mathbf{1}_M-i\,K(E)}{\mathbf{1}_M+i\,K(E)}\,, \qquad K(E)\;=\;W^{\dagger}\,(E-H)^{-1}\,W, \qquad (2)$$
where W is an N × M matrix of energy-independent coupling amplitudes between the N energy levels of the closed system and the M open scattering channels. To study quantum chaos-induced fluctuations of S one then replaces the Hamiltonian H with a random matrix taken from one of the standard random matrix ensembles, usually the Gaussian Unitary Ensemble (GUE, β = 2) if one is interested in systems with broken time-reversal invariance, or the Gaussian Orthogonal Ensemble (GOE, β = 1) if such invariance is preserved, the case β = 4 being relevant for systems with spin-orbit scattering. The approach proved to be extremely successful, and quite a few scattering characteristics were thoroughly investigated in that framework over the last two decades, mainly by the supersymmetry method [5]–[7]. The results of such calculations are found in general to be in good agreement with the available experiments in chaotic electromagnetic resonators ("microwave billiards") and acoustic reverberation chambers, see e.g. [8,9,10] and most recently [11], as well as with numerical simulations of scattering in such paradigmatic models as quantum chaotic graphs [12].
The two random matrix approaches described above look very different in their formulation, yet they are meant to describe precisely the same object: the S-matrix of a chaotic system. Consistency therefore requires that the Poisson kernel distribution (1) for S must follow from the law of distribution of H entering the relation (2). Surprisingly, a direct verification of such a correspondence turns out to be a rather challenging task. The difficulty is that the two objects are related via the resolvent-like K-matrix, and converting the law of distribution of H into that of the resolvent is not at all trivial. A very elegant indirect way around this problem was discovered by P. Brouwer [13], who proposed to choose H from the Cauchy ensemble of random matrices with density $\mathcal{P}(H)\propto\det\left[\lambda^2\mathbf{1}_N+(H-\epsilon\,\mathbf{1}_N)^2\right]^{-(\beta N+2-\beta)/2}$, characterized by a width parameter λ and a mean ε. The main advantage of such a choice is that the resolvent (E − H)^{−1} is Cauchy distributed as well, albeit with modified parameters; moreover, diagonal blocks of Cauchy matrices again have closely related distributions. Using these facts Brouwer was indeed able to demonstrate the validity of the Poisson kernel for such a choice of H for all values of β = 1, 2, 4. He then showed that in the large-N limit the eigenvalue correlation functions in the Cauchy ensemble (called the "Lorentzian" ensemble by Brouwer) have the standard Dyson form, and conjectured that such an equivalence of eigenvalue correlation functions should be enough to ensure that the same S-matrix distribution is shared by all representatives of the corresponding universality class. Although such a conjecture sounds very natural, the particular mechanism by which the generic spectral properties of H are translated into the universality of the probability density of the K-matrix, and then of P(S), remained unclear.
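The mechanism behind Brouwer's choice is transparent already in the one-dimensional caricature N = M = 1 (a standard contour-integral argument, sketched here for illustration):

```latex
% One-dimensional caricature (N = M = 1): let h be Cauchy distributed,
%   p(h) = (\lambda/\pi) \,/\, [\, (h-\epsilon)^2 + \lambda^2 \,].
% For any f analytic and bounded in the upper half-plane, closing the
% contour there picks up only the pole at h = \epsilon + i\lambda:
\langle f(h)\rangle
  \;=\; \int_{-\infty}^{\infty}
        \frac{\lambda/\pi}{(h-\epsilon)^2+\lambda^2}\; f(h)\, dh
  \;=\; f(\epsilon + i\lambda).
% Applied to f(h) = e^{\,i x / (E-h)} with x > 0, this yields the
% characteristic function of g = 1/(E-h); hence g is again Cauchy, with
% parameters read off from the image of \epsilon + i\lambda:
\epsilon' + i\lambda' \;=\; \frac{1}{E - (\epsilon + i\lambda)}\, .
```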
To the best of our knowledge, no further attempts to verify the universality of the S-matrix distribution have been undertaken in the literature, apart from (i) the simplest case M = 1 with H ∈ GUE considered in [7], and (ii) the recent work [14], which however concentrated on the universality of two-point spectral correlations of individual S-matrix entries rather than on the one-point matrix distribution.
In the present work we establish the universality of the Poisson kernel distribution for the S-matrix under the condition of equivalent coupling to the continuum in all scattering channels. Since the K- and S-matrices are related via the Cayley transformation (2), the claim of the Poisson kernel distribution for the S-matrix is equivalent to claiming that the K-matrix is Cauchy distributed with density
$$ \mathcal{P}(K)\;\propto\;\left[\det\left(\lambda^2\mathbf{1}_M+(K-\epsilon\,\mathbf{1}_M)^2\right)\right]^{-M}, \qquad (3)$$
where the width λ and the mean ε of the Cauchy distribution are determined by the mean $\bar{S}$ of the S-matrix and vice versa. Such an equivalence can be verified by transforming to the eigenvalues of the S- and K-matrices.
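For M = 1 this equivalence can be made explicit (an illustrative sketch, with the scalar Cayley map written out):

```latex
% M = 1: the Cayley map S = (1 - iK)/(1 + iK) sends the real K-axis to
% the unit circle and the upper half-plane into the unit disc.
% A Cauchy density on R is the harmonic measure seen from the point
% \epsilon + i\lambda; its pushforward under a Moebius map is therefore
% the harmonic measure of the image point, i.e. the Poisson kernel with
\bar S \;=\; \frac{1 - i(\epsilon + i\lambda)}{1 + i(\epsilon + i\lambda)}
       \;=\; \frac{1+\lambda-i\epsilon}{1-\lambda+i\epsilon}\, ;
% equivalently, evaluating at \epsilon - i\lambda for the opposite
% analyticity convention gives
\bar S \;=\; \frac{1-\lambda-i\epsilon}{1+\lambda+i\epsilon}\, ,
% and in either convention \epsilon = 0, \lambda = 1 gives \bar S = 0,
% i.e. a uniformly distributed scattering phase (perfect coupling).
```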
In the present work, under fairly generic assumptions on H, we verify that the law of distribution of the K-matrix is indeed Cauchy, and we relate its parameters λ and ε to the strength of the coupling amplitudes W, to the density of states of the underlying matrix H, and to the details of its (invariant) distribution. We note that the statistical characteristics of the K-matrix are directly accessible in microwave experiments [9], where the K-matrix is related to the real part of the impedance in the regime of small losses. Our paper stems from attempts to better understand the mechanism behind the universality of the probability distribution of finite blocks of random matrix resolvents, and to provide an ab initio explicit derivation of this distribution for generic invariant ensembles of random matrices.
We start by showing that the universality of the K-matrix in question follows from the universal limit of a very general spectral object: the product of ratios of powers of characteristic polynomials det(E − H) of random matrices H.
To that end it is necessary to mention that in the literature there exist two alternative choices of the matrix W of coupling amplitudes in (2). The standard choice is to follow the original paper [5] and to consider the columns $w_1,\dots,w_M$ of W as mutually orthogonal N-component vectors, real for β = 1 and complex for β = 2. The case of equivalent channels then corresponds to $(w_a, w_b)=\gamma\,\delta_{ab}$ for all $a, b = 1,\dots,M$, or, equivalently,
$$ W^{\dagger} W \;=\; \gamma\,\mathbf{1}_M, \qquad (4)$$
where γ > 0 is a coupling constant. An alternative choice, suggested originally in [15], is to consider the columns of W to be independent Gaussian vectors with the joint probability density function (5). Both choices are expected to lead to the same results in the limit N → ∞ as long as the number of channels M remains fixed. Such an equivalence was explicitly verified in [16] for particular scattering characteristics (Wigner delay times), but is expected to hold generally. We shall first consider the random amplitude case (5), and show its equivalence to the fixed amplitude case (4) for β = 2 at the end of the paper. For notational convenience we work with the rescaled K-matrix $\tilde{K}$ of equation (6) rather than with the K-matrix itself, where ρ(E) is the large-N limit of the mean eigenvalue density of H at a point E inside the support of ρ(E) (so that ρ(E) > 0). We start with the representation
$$ \mathcal{P}(\tilde{K})\;=\;\int F_{\beta,N}(X)\,\exp\!\left(i\,\tfrac{\beta}{2}\,\mathrm{Tr}\,X\tilde{K}\right) dX $$
of the probability density function of $\tilde{K}$ in terms of the characteristic function $F_{\beta,N}(X)=\left\langle \exp\!\left(-i\,\tfrac{\beta}{2}\,\mathrm{Tr}\,X\tilde{K}\right)\right\rangle$, where the matrix X has the same dimensions and symmetries as the matrix $\tilde{K}$, and the angular brackets stand for averaging over all the random variables on which the matrix $\tilde{K}$ depends.
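The scalar prototype of this Fourier pair, recalled here for orientation, is the one-dimensional Cauchy density and the two-sided exponential:

```latex
% M = 1 prototype of the pair P(K) <-> F(x): for a centred Cauchy
% variable of width \lambda,
F(x) \;=\; \int_{-\infty}^{\infty}
   \frac{\lambda/\pi}{\lambda^{2}+K^{2}}\; e^{-ixK}\, dK
   \;=\; e^{-\lambda|x|},
\qquad
P(K) \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty}
   e^{-\lambda|x|}\, e^{\,ixK}\, dx
   \;=\; \frac{\lambda/\pi}{\lambda^{2}+K^{2}}\, .
```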
We first perform the averaging over the coupling matrix W in (6), which amounts to a Gaussian integral over the vectors $w_c$ and results in (7). Here $x_1,\dots,x_M$ are the eigenvalues of the Hermitian M × M matrix X, and dW stands for the appropriately normalized Lebesgue measure on the space of complex or real N × M matrices W. The easiest way to verify (7) is by diagonalizing $X = T\,\mathrm{diag}(x_1,\dots,x_M)\,T^{\dagger}$, where T is orthogonal for β = 1 and unitary for β = 2. One then changes $(TW)\to W$ and exploits the invariance of $W^{\dagger}W$ and of the measure dW with respect to such a transformation. At the next step we bring the characteristic function $F_{\beta,N}(X)$ to the form (8), in which the angular brackets now stand for averaging over the N × N matrices H. The relation (8) is exact in the random amplitude model (5) for any choice of N and M. We will show at the end of the paper that for β = 2 the same equation (8) is valid asymptotically in the fixed amplitude model (4) in the limit N ≫ M, provided the probability density of H is rotationally invariant. For β = 2 and β = 4 the object on the right-hand side of (8) is well studied in Random Matrix Theory [17,18,19]. In particular, in the simplest case β = 2 the formula (2.14) from [18] appears to be most useful for our goals. Namely, for N × N matrices H distributed according to an invariant ensemble density $\mathcal{P}(H)\propto\exp\left[-N\,\mathrm{Tr}\,V(H)\right]$ (9) with polynomial potential V, the universal asymptotic relation (10) holds, where $\Delta\{\eta\}=\prod_{i<j}(\eta_i-\eta_j)$ is the Vandermonde determinant. An analogous result for averaged products of ratios of characteristic polynomials with β = 4 is also known [19], but has a more complex structure, with Pfaffians replacing determinants. Unfortunately, for β = 1 no result of comparable generality seems to be known for products of square roots of characteristic polynomials, though for M = 1, 2 it can in fact be evaluated in closed form, see e.g. [21] and references therein.
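The way the W-average produces ratios of characteristic polynomials can be sketched for a single channel with β = 2; the covariance $\langle w_i\bar{w}_j\rangle=(\gamma/N)\,\delta_{ij}$ used below is an illustrative normalization of ours, not necessarily that of (5):

```latex
% One-channel sketch of the mechanism behind (7)-(8), beta = 2, assuming
% the covariance <w_i \bar w_j> = (\gamma/N) \delta_{ij}: for a complex
% Gaussian vector w and any matrix A for which the integral converges,
\bigl\langle e^{\, i\, w^{\dagger} A\, w}\bigr\rangle_{w}
 \;=\; \det\!\Bigl(\mathbf{1}_{N} - \tfrac{i\gamma}{N}\,A\Bigr)^{-1} .
% Taking A = x\,(E-H)^{-1} turns this into a ratio of characteristic
% polynomials of H at shifted spectral parameters:
\det\!\Bigl(\mathbf{1}_{N}-\tfrac{i\gamma x}{N}(E-H)^{-1}\Bigr)^{-1}
 \;=\; \frac{\det(E-H)}{\det\!\bigl(E - \tfrac{i\gamma x}{N} - H\bigr)}\, .
% This is why the H-average in (8) involves products of ratios of
% det(E - H) at infinitesimally shifted arguments.
```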
Below we consider in full generality only the case of Hermitian ensembles with β = 2; the cases β = 4 and especially β = 1 remain a challenge and are currently under investigation. With the asymptotic relation (10) in hand, one can evaluate the characteristic function (8) of the rescaled K-matrix in the limit N → ∞ with M fixed.
Proposition 1. Assume that the N × N matrix H has the invariant distribution (9). Then in the random amplitude model (5) the characteristic function $F_{\beta=2}(X)$ converges, as N → ∞ with M fixed, to the limit given in equation (12).

Proof. We start by adjusting equation (10) to the invariant density (9), keeping the notation introduced there for convenience. The asymptotic relation (10), and as a consequence our Proposition 1, hold for invariant ensembles of random matrices under fairly general conditions on the matrix measure, see the recent paper [20]. Here
$$ g(\zeta,\eta)\;=\;\frac{\exp\left[-i\pi\,\mathrm{sgn}(\mathrm{Im}\,\zeta)\,\eta\right]}{\zeta-\eta}\,. $$
The limits are now performed successively by applying L'Hôpital's rule; this yields the final result for the second line of equation (14), in which we have defined the functions $\tilde{g}_n(\zeta)$. Finally, by redefining $g_n(\zeta)=e^{i\pi\zeta\,\mathrm{sgn}(\mathrm{Im}\,\zeta)}\,\zeta^{M}\,\tilde{g}_n(\zeta)/n!$, several factors in front of the determinant can be absorbed into it. After the identification $\zeta_c\to i\gamma x_c/\pi$ we arrive at (12). This completes the proof of Proposition 1.
At the next step we observe that achieving our main goal is equivalent to verifying that $F_{\beta=2}(X)$ is the characteristic function of a matrix Cauchy distribution.
Proposition 2. The limiting characteristic function of Proposition 1 satisfies
$$ F_{\beta=2}(X)\;\propto\;\int \exp\left(i\,\mathrm{Tr}\,XK\right)\,\left[\det\left(\gamma^2\mathbf{1}_M+K^2\right)\right]^{-M} dK, \qquad (18)$$
where the integral is over the set of all Hermitian M × M matrices K.
Proof. A standard random matrix calculation, which involves changing the variables of integration in (18) to the eigenvalues and eigenvectors of K, then applying the Itzykson–Zuber–Harish-Chandra (IZHC) formula, see e.g. [22], and the Andréief–de Bruijn integration formula, yields (19)–(20), where $K_{\nu}(x)$ is the modified Bessel (Macdonald) function and the constant is given by (21). In particular, for M = 1 we have $f(x)=c_{\gamma}\,e^{-\gamma|x|}$, while for higher M we have the representation in which we add a subscript to indicate the M-dependence. Using a recursive relation for the derivative of the Macdonald function one can show that $f'_M(x)=-\gamma\,x\,f_{M-1}(x)$. Inductively one gets for the higher derivatives the expression (22), where ⌊·⌋ denotes the floor function. This enables us to simplify the determinantal structure of (19). By successively adding to the n-th row appropriate linear combinations of all preceding rows 1, 2, ..., n − 1, and exploiting yet another recursive relation for the Macdonald function, one can remove all terms in equation (22) but the one for l = 0, leading to (23), where the proportionality constant is given there and the combination involved in the n-th row on the right-hand side is given explicitly by (24). The equations (13) and (24) have a very similar structure, though the coefficients of the terms in the sum are still different. In fact this similarity can be exploited further to show that the determinants in equations (12) and (19) (or, equivalently, (23)) are proportional to each other, thus verifying equation (18). We start the demonstration of this fact by bringing the first row of the determinant in equation (12) into the form coinciding with the first row of the determinant in (23).
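At the lowest level M = 1 the structure of these formulas can be checked directly (the constants below are schematic, up to the normalization convention of (19)):

```latex
% M = 1 check: the eigenvalue integral reduces to
f_{1}(x) \;=\; \int_{-\infty}^{\infty}
   \frac{e^{\,i x K}}{\gamma^{2}+K^{2}}\; dK
 \;=\; \frac{\pi}{\gamma}\; e^{-\gamma |x|},
% obtained by closing the contour around the pole at
% K = i\gamma\,\mathrm{sgn}(x); this reproduces f(x) = c_\gamma
% e^{-\gamma|x|}. The pure exponential is also consistent with the
% Macdonald-function form, since at half-integer index
K_{1/2}(z) \;=\; \sqrt{\tfrac{\pi}{2z}}\;\, e^{-z} .
```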
Since the zeroth and first order coefficients of $g_{M-1}(x)$ are both equal to unity, and the two corresponding coefficients are also equal to each other in the expression for $f_M(x)$ (though different from unity), we can safely change those coefficients in $g_{M-1}(x)$ to the coefficients in $f_M(x)$, as such a change gives rise only to a constant proportionality factor in front of the determinant.
The main observation is that the adjustment of both coefficients $a_n$ and $a_{n+1}$, given that all previous coefficients are already adjusted, can be done simultaneously by adding the (2n + 1)-th row multiplied by the factor (25). For this procedure to work we need to verify, for any integer n, the identity (26) with δ = 0 or δ = 1. The left-hand side of (26) is what becomes of the (n + δ)-th order coefficient of $g_{M-1}(x)$ after adding multiples of all odd rows up to the (2n + 1)-th, with the multiplication factors chosen according to (25). The right-hand side equals the corresponding (n + δ)-th order coefficient of $f_M(x)$. Both equations can be conveniently combined into the single relation (27), where $\widetilde{M} = M - 1$ and m = 2n or m = 2n + 1. To verify (27) we first express the first binomial coefficient on its left-hand side as a contour integral, using the generating function and Cauchy's residue theorem. The summation over l is then performed inside the integrand using the binomial theorem, and the resulting contour integral can again be evaluated by residues, yielding precisely the right-hand side of (27). We conclude that it is indeed possible to transform $g_{M-1}(x)$ into $f_M(x)$ by adding multiples of the odd rows to the first row. Note that in each step two coefficients get adjusted simultaneously, and this is precisely the mechanism that makes the whole procedure work. Had it not been for that property, we would only be able to bring half of the coefficients to the required form, since adding even rows to odd rows, or vice versa, is meaningless due to their rather different structure. All remaining odd rows, as well as all even rows, can be treated by exactly the same procedure, since the coefficients involved are essentially the same as before. Note also that, since the very last row contains on both sides the function $e^{-\gamma|x|}x^{n-1}$, the coincidence there is ensured automatically. This completes the proof of Proposition 2, except for the proportionality constant.
The constant can be found by considering the case X = 0: there $F_{\beta=2}(0) = 1$, while the integral on the left-hand side yields the stated constant.
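The contour-integral step used in the verification of (27) above can be sketched schematically as follows (the symbols a, b, z below are placeholders, not the exact combinations of the text):

```latex
% Represent a binomial coefficient by Cauchy's theorem,
\binom{a}{l} \;=\; \frac{1}{2\pi i}\oint_{|u|=\rho}
      \frac{(1+u)^{a}}{u^{\,l+1}}\; du ,
% then carry the finite sum over l inside the integral and perform it
% with the binomial theorem,
\sum_{l}\binom{b}{l}\, z^{l}\binom{a}{l}
 \;=\; \frac{1}{2\pi i}\oint_{|u|=\rho} \frac{(1+u)^{a}}{u}
      \Bigl(1+\frac{z}{u}\Bigr)^{\!b} du ,
% and finally evaluate the remaining integral by the residue at u = 0,
% which produces a single closed-form binomial expression.
```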
Since the characteristic function uniquely determines the law of distribution, one concludes from Propositions 1 and 2 that the distribution of the K-matrix (2) converges in the limit N → ∞ to the matrix Cauchy distribution with density $\mathcal{P}_{\beta=2}(K)$ of (3), with mean $\epsilon=\gamma V'(E)/2$ and width $\lambda=\pi\gamma\rho(E)$. This corresponds to the Poisson kernel distribution (1) for the S-matrix with mean
$$ \bar{S}\;=\;\frac{1-\pi\gamma\rho(E)-i\,\alpha(E)}{1+\pi\gamma\rho(E)+i\,\alpha(E)}\;\mathbf{1}_M, \qquad \alpha(E)=\gamma V'(E)/2. \qquad (28)$$
The case of perfect coupling is then obtained for $\alpha(E_{max}) = 0$ and $\pi\gamma\rho(E_{max}) = 1$, where $E_{max}$ denotes the point at which ρ(E) attains its maximum. Thus indeed, the Poisson kernel distribution for the S-matrix is universal in the random amplitude model (5), in that it does not depend on the choice of the random matrix ensemble for the underlying matrix H.
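As an illustration (not in the original text), consider the GUE with potential V(x) = x²/2 in the standard normalization of (9):

```latex
% GUE example: V(x) = x^2/2 gives the semicircular density of states
\rho(E) \;=\; \frac{1}{2\pi}\sqrt{4-E^{2}}\,, \qquad |E|\le 2 ,
% with maximum \rho(E_{max}) = 1/\pi at E_{max} = 0. There
\epsilon \;=\; \gamma V'(0)/2 \;=\; 0, \qquad
\lambda \;=\; \pi\gamma\rho(0) \;=\; \gamma ,
% so the perfect-coupling conditions \alpha(E_{max}) = 0 and
% \pi\gamma\rho(E_{max}) = 1 are met precisely at \gamma = 1.
```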
Finally, we would like to demonstrate that the fixed amplitude model (4) yields the same universal behaviour of the K-matrix in the limit N → ∞. Let us again consider the characteristic function, now written in terms of the eigen-decomposition $H = U\Lambda U^{\dagger}$. Here U is the unitary matrix of eigenvectors of H and $\Lambda=\mathrm{diag}\{\lambda_1,\dots,\lambda_N\}$ stands for the diagonal matrix of the corresponding eigenvalues. The averaging over H can then be performed in two steps, the first being the average over the Haar measure on the unitary group U(N). As this is again a special case of the IZHC integral, it can be done explicitly. The important new feature, however, is that the N × N matrix $\Gamma_X$ has reduced rank: its M ≪ N nonzero eigenvalues coincide with the eigenvalues $\gamma x_c$, c = 1, ..., M, of the matrix $XW^{\dagger}W=\gamma X$, the remaining N − M eigenvalues being exactly zero. At the same time the resolvent matrix $R_{\Lambda}$ has full rank N. The IZHC integral for two matrices of different rank can be performed most efficiently by employing equation (A4) of Appendix A in the paper [23] (which is in fact closely related to the so-called IZHC duality relation, see equation (17.3.8) in [22]). In our case it takes the form (29). In the limit N → ∞ the integrals over $y_c$ can be straightforwardly evaluated by the saddle-point method, the saddle-point values being $y_c^{(\mathrm{s.p.})}=i/(\gamma x_c)$. This is justified, as equation (10) ensures that the expected value in the integrand tends, for N → ∞, to a well-defined limit of order unity along contours in the vicinity of the chosen saddle point. Moreover, one can show that the saddle point can be reached by deforming the original contours without crossing any singularities of the integrand. Furthermore, $\Delta\{Y^{(\mathrm{s.p.})}\}\propto\Delta\{X\}\,\det X^{-(M-1)}$, and the Gaussian fluctuations around the saddle-point values yield the factor $\det X^{N-1}$.
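The factor $\Delta\{X\}\,\det X^{-(M-1)}$ arising at saddle points proportional to $1/x_c$ follows from the elementary identity for a Vandermonde determinant of reciprocal variables, recalled here for completeness:

```latex
% Vandermonde determinant of reciprocal variables:
\Delta\{1/x\} \;=\; \prod_{i<j}\Bigl(\frac{1}{x_i}-\frac{1}{x_j}\Bigr)
 \;=\; \prod_{i<j}\frac{x_j-x_i}{x_i\, x_j}
 \;=\; (-1)^{M(M-1)/2}\;
   \frac{\Delta\{x\}}{\prod_{c=1}^{M} x_c^{\,M-1}}
 \;\propto\; \Delta\{X\}\,\bigl(\det X\bigr)^{-(M-1)} ,
% since each x_c appears in exactly M-1 of the products x_i x_j.
```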
Taking all these facts together we see that (29) indeed reproduces the expression for $F_{\beta=2}(X)$ from (8), and hence in the fixed amplitude model (4) the K-matrix in the limit N → ∞ also has the Cauchy distribution with density (3). This result has an interesting corollary. If the $w_c$ are chosen to be the first M columns of the N × N identity matrix, then $W^{\dagger}(E-H)^{-1}W$ is nothing other than the M × M diagonal block of the resolvent $(E-H)^{-1}$. Therefore, for invariant ensembles of Hermitian random matrices H, finite diagonal blocks of the resolvent of H are Cauchy distributed in the limit of large matrix dimension.
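This corollary invites a direct numerical sanity check. The sketch below (not part of the original) samples GUE matrices in the normalization where the limiting eigenvalue density is the semicircle ρ(x) = √(4 − x²)/(2π); at E = 0 the real part of the (1,1) resolvent entry should then be asymptotically Cauchy with mean 0 and width πρ(0) = 1:

```python
import numpy as np

def gue(n, rng):
    """Draw an n x n GUE matrix normalized so that the eigenvalue density
    converges to the semicircle rho(x) = sqrt(4 - x^2)/(2*pi) on [-2, 2]."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / (2.0 * np.sqrt(n))

def resolvent_entry_samples(n, n_samples, energy, seed=0):
    """Sample Re[(E - H)^{-1}]_{00}, i.e. the real part of the 1 x 1
    resolvent block (M = 1, W = first column of the identity matrix)."""
    rng = np.random.default_rng(seed)
    out = np.empty(n_samples)
    for k in range(n_samples):
        h = gue(n, rng)
        # Solve (E - H) g = e_1 instead of forming the full inverse.
        g = np.linalg.solve(energy * np.eye(n) - h, np.eye(n, 1))
        out[k] = g[0, 0].real
    return out

if __name__ == "__main__":
    # At E = 0 the limiting law of Re G_00 should be Cauchy with
    # mean eps = V'(0)/2 = 0 and width lam = pi * rho(0) = 1,
    # so the median is near 0 and the interquartile range near 2*lam = 2.
    vals = resolvent_entry_samples(n=100, n_samples=400, energy=0.0)
    q25, q50, q75 = np.percentile(vals, [25, 50, 75])
    print(f"median = {q50:.3f}, IQR = {q75 - q25:.3f}")
```

Because of the heavy Cauchy tails the sample mean and variance do not converge, so robust statistics (median, interquartile range) are the appropriate diagnostics here.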