Concentration of the spectral measure of large Wishart matrices with dependent entries

We derive concentration inequalities for the spectral measure of large random matrices, allowing for certain forms of dependence. Our main focus is on empirical covariance (Wishart) matrices, but general symmetric random matrices are also considered.


Introduction
In this short paper, we study concentration of the spectral measure of large random matrices whose elements need not be independent. In particular, we derive a concentration inequality for Wishart matrices of the form X ′ X/m in the important setting where the rows of the m × n matrix X are independent but the elements within each row may depend on each other; see Theorem 1. We also obtain similar results for other random matrices with dependent entries; see Theorem 4, Theorem 5, and the attendant examples, which include a random graph with dependent edges and vector time series.
Large random matrices have been the focus of intense research in recent years; see Bai [2] and Guionnet [7] for surveys. While most of this literature deals with the case where the underlying matrix has independent entries, comparatively little is known in dependent cases. Götze and Tikhomirov [6] show that the expected spectral distribution of an empirical covariance matrix X ′ X/m converges to the Marčenko-Pastur law under conditions that allow for some form of dependence among the entries of X. Bai and Zhou [1] analyze the limiting spectral distribution of X ′ X/m when the row-vectors of X are independent (allowing for certain forms of dependence within the row-vectors of X). Mendelson and Pajor [13] consider X ′ X/m in the case where the row-vectors of X are independent and identically distributed (i.i.d.); under some additional assumptions, they derive a concentration result for the operator norm of X ′ X/m − E(X ′ X/m). Boutet de Monvel and Khorunzhy [4] study the limiting behavior of the spectral distribution and of the operator norm of symmetric Gaussian matrices with dependent entries.
For large random matrices similar to those considered here, concentration of the spectral measure is also studied by Guionnet and Zeitouni [8], who consider Wishart matrices X ′ X/m where the entries X i,j of X are independent, as well as Hermitian matrices with independent entries on and above the diagonal, and by Houdré and Xu [9], who obtain concentration results for random matrices with stable entries, thus allowing for certain forms of dependence. For matrices with dependent entries, we find that concentration of the spectral measure can be less pronounced than in the independent case. Technically, our results rely on a slight extension of a result of Talagrand [14] and on McDiarmid's bounded difference inequality [12].

Results
Throughout, the eigenvalues of a symmetric n × n matrix M are denoted by λ 1 (M ) ≤ · · · ≤ λ n (M ), and we write F M (λ) for the cumulative distribution function of the empirical spectral measure of M , i.e., F M (λ) = (1/n) #{ i : λ i (M ) ≤ λ }; for a function f , we set F M (f ) = (1/n) [ f (λ 1 (M )) + · · · + f (λ n (M )) ]. For certain classes of random matrices M and certain classes of functions f , we will show that F M (f ) is concentrated around its expectation EF M (f ) or around any median med F M (f ). For a Lipschitz function g, we write ||g|| L for its Lipschitz constant. Moreover, we also consider functions f : (a, b) → R, −∞ ≤ a < b ≤ ∞, whose total variation V f (a, b) on (a, b) is finite; cf. Section X.1 in [10].
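As a concrete illustration of these definitions, the spectral CDF F M (λ) and the linear eigenvalue statistic F M (f ) are straightforward to compute numerically. The following Python sketch (the helper names spectral_cdf and spectral_statistic are ours, not from the paper) evaluates both for a small Wishart matrix.

```python
import numpy as np

def spectral_cdf(M, lam):
    """F^M(lam): fraction of eigenvalues of the symmetric matrix M that are <= lam."""
    eigs = np.linalg.eigvalsh(M)
    return np.mean(eigs <= lam)

def spectral_statistic(M, f):
    """F^M(f) = (1/n) * sum_i f(lambda_i(M))."""
    eigs = np.linalg.eigvalsh(M)
    return np.mean(f(eigs))

# Example: a Wishart matrix S = X'X/m with bounded i.i.d. entries.
rng = np.random.default_rng(0)
m, n = 200, 50
X = rng.uniform(-1.0, 1.0, size=(m, n))
S = X.T @ X / m
cdf_val = spectral_cdf(S, 0.5)          # F^S(0.5)
stat_val = spectral_statistic(S, np.abs)  # F^S(f) with f(x) = |x|
```

Both quantities are averages over the n eigenvalues, which is why the concentration results below are stated on the scale of the empirical spectral measure rather than of individual eigenvalues.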
[A function f is of bounded variation on (a, b) if and only if it can be written as the difference of two bounded monotone functions on (a, b), as is easy to see. Note that an indicator function of the form g(x) = 1{x ≤ λ} is of bounded variation on R with V g (R) = 1.]

The following result establishes concentration of F S (f ) for Wishart matrices S of the form S = X ′ X/m, where we only require that the rows of X are independent (while allowing for dependence within each row of X). See also Example 8 and Example 9, which follow, for scenarios that also allow for some dependence among the rows of X.

Theorem 1. Let X be an m × n matrix whose row-vectors are independent, set S = X ′ X/m, and fix f : R → R.
(i) Suppose that f is such that the mapping x → f (x²) is convex and Lipschitz, and suppose that |X i,j | ≤ 1 for each i and j. For each ǫ > 0, we then have

P( |F S (f ) − med F S (f )| ≥ ǫ ) ≤ 4 exp[ −nmǫ² / ( 8(n + m) ||f (·²)||² L ) ]. (1)

[From the upper bound (1) one can also obtain a similar bound for P(|F S (f ) − EF S (f )| ≥ ǫ) using standard methods.]

(ii) Suppose that f is of bounded variation on R. For each ǫ > 0, we then have

P( |F S (f ) − EF S (f )| ≥ ǫ ) ≤ 2 exp[ −2n²ǫ² / ( m V f (R)² ) ]. (2)

In particular, for each λ ∈ R and each ǫ > 0, the probability P(|F S (λ) − EF S (λ)| ≥ ǫ) is bounded by the right-hand side of (2) with V f (R) replaced by 1.
The upper bounds in Theorem 1 are of the form

P( |F S (f ) − A| ≥ ǫ ) ≤ B exp[−C], (3)

where A, B, and C equal med F S (f ), 4, and nmǫ²/(8(n + m)||f (·²)||² L) in part (i), and EF S (f ), 2, and 2n²ǫ²/(m V f (R)²) in part (ii), respectively. For the interesting case where n and m both go to infinity at the same rate, the next example shows that these bounds cannot be improved qualitatively without imposing additional assumptions.
Example 2. Let n = m = 2^k for some integer k ≥ 1, and let X be the n × n matrix whose i-th row equals δ i v i ′, where δ 1 , . . . , δ n are i.i.d. Bernoulli(1/2) random variables and v 1 , . . . , v n are orthogonal vectors in {−1, 1}^n. [The v i 's can be obtained, say, from the first n binary Walsh functions; cf. [15].] Note that the eigenvalues of S = X ′ X/m are exactly δ 1 , . . . , δ n . Fix a function f on {0, 1} with f (0), f (1) ∈ {0, 1} and f (0) ≠ f (1). Then nF S (f ) is binomially distributed with parameters n and 1/2, i.e., nF S (f ) ∼ B(n, 1/2). By Chernoff's method (cf. Theorem 1 of [5]), we hence obtain that

P( |F S (f ) − EF S (f )| ≥ ǫ ) ≥ exp[ −nD(ǫ) + o(n) ] (4)

for each 0 < ǫ < 1/2, where D(ǫ) denotes the relative entropy of B(1, 1/2 + ǫ) with respect to B(1, 1/2). These statements continue to hold with med F S (f ) replacing EF S (f ), because the mean coincides with the median here. To apply Theorem 1(i), take f (0) = 0, f (1) = 1 and extend f by setting f (x) = |x| for x ∈ R; to apply Theorem 1(ii), take f (0) = 1, f (1) = 0 and extend f as f (x) = 1{x ≤ 1/2}. (Either choice of f on {0, 1} leads to the same distribution of nF S (f ).) Theorem 1(i) and Theorem 1(ii) give us that the left-hand side of (4) is bounded by terms of the form 4 exp[−nC 1 (ǫ)] and 2 exp[−nC 2 (ǫ)], respectively, for some functions C 1 (ǫ) and C 2 (ǫ) that do not depend on n. Hence, both parts of Theorem 1 give upper bounds with the correct rate (−n) in the exponent. The constants C i (ǫ), i = 1, 2, are both sub-optimal, i.e., they are too small, but the constant C 2 (ǫ), which is obtained from Theorem 1(ii), is close to the optimal constant for small ǫ.
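This example is easy to simulate. The sketch below is a sketch under our reading of the construction: rows of X of the form δ i v i ′ with δ i i.i.d. Bernoulli(1/2) and v i orthogonal ±1 vectors obtained from a Sylvester-type Hadamard matrix (whose rows realize binary Walsh functions); the helper hadamard is ours. It verifies that the eigenvalues of S are exactly the Bernoulli draws, so that nF S (f ) is indeed binomial.

```python
import numpy as np

def hadamard(k):
    """Sylvester construction: 2^k mutually orthogonal {-1,+1} row vectors."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

k = 3
n = m = 2 ** k                       # n = m = 8
rng = np.random.default_rng(1)
delta = rng.integers(0, 2, size=n)   # i.i.d. Bernoulli(1/2)
V = hadamard(k)                      # rows are orthogonal +-1 vectors
X = delta[:, None] * V               # i-th row of X equals delta_i * v_i'
S = X.T @ X / m

# The eigenvalues of S are exactly delta_1, ..., delta_n (zeros and ones),
# so n * F^S(f) is Binomial(n, 1/2) for any f on {0,1} with |f(0) - f(1)| = 1.
assert np.allclose(np.sort(np.linalg.eigvalsh(S)), np.sort(delta))
```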
Under additional assumptions on the law of X, F S (f ) can concentrate faster than indicated by (3). In particular, in the setting of Theorem 1(i) and for the case where all the elements X i,j of X are independent, Guionnet and Zeitouni [8] obtained bounds of the same form as (3) but with n² replacing n in the exponent, for functions f such that x → f (x²) is convex and Lipschitz. (This should be compared with Example 9 below.) However, if f does not satisfy this requirement but is of bounded variation on R, so that Theorem 1(ii) applies, then the upper bound in (2) cannot be improved qualitatively without additional assumptions, even in the case when all the elements X i,j of X are independent. This is demonstrated by the following example.
Theorem 1 can also be used to get concentration inequalities for the empirical distribution of the singular values of a non-symmetric n × m matrix X with independent rows. Indeed, the i-th singular value of X/ √ m is just the square root of the i-th eigenvalue of X ′ X/m.
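This correspondence between the singular values of X/√m and the eigenvalues of X ′ X/m is easy to confirm numerically; a minimal check:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 4
X = rng.standard_normal((m, n))

# Singular values of X/sqrt(m) versus eigenvalues of X'X/m.
sv = np.linalg.svd(X / np.sqrt(m), compute_uv=False)
ev = np.linalg.eigvalsh(X.T @ X / m)

# The i-th singular value is the square root of the i-th eigenvalue.
assert np.allclose(np.sort(sv), np.sort(np.sqrt(np.clip(ev, 0.0, None))))
```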
Both parts of Theorem 1 are in fact special cases of more general results that are presented next. The following two theorems, the first of which should be compared with Theorem 1.1(a) of [8], apply to a variety of random matrices besides those considered in Theorem 1; some examples are given later in this section.
Theorem 5. Let M be a random symmetric n × n matrix that is a function of m independent random quantities Y 1 , . . . , Y m , i.e., M = M (Y 1 , . . . , Y m ). Write M (i) for the matrix obtained from M after replacing Y i by an independent copy, i.e., M (i) = M (Y 1 , . . . , Y i−1 , Y i ′ , Y i+1 , . . . , Y m ), where Y i ′ is an independent copy of Y i . Assume that

||F M − F M (i) || ∞ ≤ r/n (6)

holds (almost surely) for each i = 1, . . . , m and for some (fixed) integer r. Finally, assume that f : R → R is of bounded variation on R. For each ǫ > 0, we then have

P( |F M (f ) − EF M (f )| ≥ ǫ ) ≤ 2 exp[ −2n²ǫ² / ( m r² V f (R)² ) ]. (7)

Also, if a and b, −∞ ≤ a < b ≤ ∞, are such that P( a < λ 1 (M ) and λ n (M ) < b ) = 1, then (7) holds for each function f : (a, b) → R of bounded variation on (a, b), provided that V f (a, b) replaces V f (R) on the right-hand side of (7).
To apply Theorem 5, one needs to establish the inequality in (6) for each i = 1, . . . , m. This can often be accomplished by using the following lemma, which is taken from Bai [2], Lemmas 2.2 and 2.6, and which is a simple consequence of the interlacing theorem. [Consider a symmetric n × n matrix A and denote its upper-left (n − 1) × (n − 1) principal submatrix by B. The interlacing theorem, a direct consequence of the Courant-Fischer formula, states that λ i (A) ≤ λ i (B) ≤ λ i+1 (A) for i = 1, . . . , n − 1.]

Lemma 6. Let A and B be symmetric n × n matrices and let X and Y be m × n matrices. Then the following inequalities hold:

||F A − F B || ∞ ≤ rank(A − B)/n and ||F X ′ X − F Y ′ Y || ∞ ≤ rank(X − Y )/n.

We now give some examples where Theorem 4 or Theorem 5 can be applied, the latter with the help of Lemma 6.

The following two examples deal with the sample covariance matrix of vector moving average (MA) processes. For the sake of simplicity, we only consider MA processes of order 2. Our arguments can be extended to also handle MA processes of any fixed and finite order. In Example 8, we consider an MA(2) process with independent innovations, allowing for arbitrary dependence within each innovation, and obtain concentration inequalities of the form (3). In Example 9, we consider the case where each innovation has independent components (up to a linear function) and obtain a concentration inequality of the form (3), but with n² replacing n in the exponent.
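Both the interlacing theorem and the rank inequality ||F A − F B || ∞ ≤ rank(A − B)/n of Lemma 6 can be checked numerically. In the sketch below (the helper sup_cdf_dist, which computes the Kolmogorov distance between two empirical spectral CDFs, is ours), a symmetric matrix is perturbed by a rank-r matrix.

```python
import numpy as np

def sup_cdf_dist(eigs1, eigs2):
    """Kolmogorov distance between the empirical CDFs of two eigenvalue lists."""
    grid = np.concatenate([eigs1, eigs2])  # the sup is attained at a jump point
    F1 = np.searchsorted(np.sort(eigs1), grid, side="right") / len(eigs1)
    F2 = np.searchsorted(np.sort(eigs2), grid, side="right") / len(eigs2)
    return np.max(np.abs(F1 - F2))

rng = np.random.default_rng(3)
n, r = 40, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric
u = rng.standard_normal((n, r))
B = A + u @ u.T                                      # rank-r perturbation

# Rank inequality: ||F^A - F^B||_inf <= rank(A - B)/n.
d = sup_cdf_dist(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B))
assert d <= r / n + 1e-12

# Interlacing for the (n-1) x (n-1) principal submatrix.
lamA = np.linalg.eigvalsh(A)
lamB = np.linalg.eigvalsh(A[:-1, :-1])
assert np.all(lamA[:-1] <= lamB + 1e-8) and np.all(lamB <= lamA[1:] + 1e-8)
```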
(ii) Suppose that f is of bounded variation on R. For each ǫ > 0, we then have

P( |F S (f ) − EF S (f )| ≥ ǫ ) ≤ 2 exp[ −n²ǫ² / ( 2(m + 1) V f (R)² ) ]. (9)

The proofs of (8) and (9) follow essentially the same argument as used in the proof of Theorem 1, using the particular structure of the matrix X as considered here.
Example 9. As in Example 8, consider an m × n matrix X whose row-vectors follow a vector MA(2) process (X i,· )′ = Y i+1 + BY i for some fixed n × n matrix B, i = 1, . . . , m. For the innovations Y i , we now assume that Y i = U Z i , where U is a fixed n × n matrix, and where the Z i,j , i = 1, . . . , m + 1, j = 1, . . . , n, are independent and satisfy |Z i,j | ≤ 1. Set S = X ′ X/m. For a function f such that the mapping x → f (x²) is convex and Lipschitz, we then obtain that

P( |F S (f ) − med F S (f )| ≥ ǫ ) ≤ 4 exp[ −n²mǫ² / ( 8(n + m) C² ||f (·²)||² L ) ] (10)

for each ǫ > 0, where C is shorthand for C = (1 + ||B||) ||U || with ||B|| and ||U || denoting the operator norms of the indicated matrices. The relation (10) is derived by essentially repeating the proof of Theorem 1(i) and by employing the particular structure of the matrix X as considered here.
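The MA(2) construction of Example 9 can be written out in a few lines; the sketch below is an illustration only, with arbitrary choices of B, U, and the dimensions, and it checks the defining row identity (X i,· )′ = Y i+1 + BY i .

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 30, 10
B = 0.5 * np.eye(n)                            # fixed matrix driving the MA(2) dependence
U = rng.standard_normal((n, n)) / np.sqrt(n)   # fixed mixing matrix for the innovations
Z = rng.uniform(-1.0, 1.0, size=(m + 1, n))    # independent entries, bounded by 1

Y = Z @ U.T                                    # innovations: Y_i = U Z_i (stored as rows)
X = Y[1:] + Y[:-1] @ B.T                       # row i of X: Y_{i+1} + B Y_i
S = X.T @ X / m                                # sample covariance matrix

# Check the defining row identity for one index.
assert np.allclose(X[2], Y[3] + B @ Y[2])
```

The operator norms ||B|| and ||U || entering the constant C in (10) can be computed as np.linalg.norm(B, 2) and np.linalg.norm(U, 2).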
We note that the statement in the previous paragraph reduces to Corollary 1.8(a) in [8] if one sets B to the zero matrix and U to the identity matrix. Moreover, we note that Theorem 5 can also be applied here (similarly to Example 8(ii)), but the resulting upper bound does not improve upon (9).

A Proofs
We first prove Theorem 4 and Theorem 5 and then use these results to deduce Theorem 1. The proof of Theorem 4 is modeled after the proof of Theorem 1.1(a) in Guionnet and Zeitouni [8]. It rests on a slight modification of Theorem 6.6 of Talagrand [14], which is given as Theorem 10 below, and on Lemma 1.2 of Guionnet and Zeitouni [8], which is restated as Lemma 11 below.
Lemma 11. Let A n denote the set of all real symmetric n × n matrices and let u : R → R be a fixed function. Let us denote by Λ n u the functional A → F A (u) on A n . Then (i) If u is convex, then so is Λ n u .
(ii) If u is Lipschitz, then so is Λ n u (when A n is endowed with the Euclidean norm on R n(n+1)/2 obtained by collecting the entries on and above the diagonal). Moreover, the Lipschitz constant of Λ n u satisfies ||Λ n u || L ≤ (2/n)^{1/2} ||u|| L .

Also, since f is assumed to be convex and Lipschitz, Lemma 11 entails that T 2 is convex and Lipschitz with ||T 2 || L ≤ (2/n)^{1/2} ||f || L . It follows that T is convex (and hence quasi-convex) and Lipschitz with ||T || L ≤ (2/(nm))^{1/2} C M ||f || L . The proof is complete.
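The Lipschitz bound ||Λ n u || L ≤ (2/n)^{1/2} ||u|| L of Lemma 11(ii) (it can be deduced from the Hoffman-Wielandt inequality) can be checked numerically. Below, u(x) = |x| with ||u|| L = 1, and the helper upper_euclid (ours) computes the Euclidean norm over the entries on and above the diagonal.

```python
import numpy as np

def upper_euclid(A):
    """Euclidean norm of the entries of A on and above the diagonal."""
    iu = np.triu_indices(A.shape[0])
    return np.linalg.norm(A[iu])

rng = np.random.default_rng(5)
n = 25
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

u = np.abs  # Lipschitz with ||u||_L = 1

# |Lambda^n_u(A) - Lambda^n_u(B)| <= (2/n)^{1/2} ||u||_L * upper_euclid(A - B)
lhs = abs(np.mean(u(np.linalg.eigvalsh(A))) - np.mean(u(np.linalg.eigvalsh(B))))
rhs = np.sqrt(2.0 / n) * 1.0 * upper_euclid(A - B)
assert lhs <= rhs + 1e-12
```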
To prove Theorem 5, we recall McDiarmid's bounded difference inequality ([12]; see also Proposition 12 in [3]):

Proposition 13. Consider independent random quantities Y 1 , . . . , Y m and a (measurable) function Z = f (Y 1 , . . . , Y m ). For each i = 1, . . . , m, define Z (i) like Z, but with Y i replaced by an independent copy; that is, Z (i) = f (Y 1 , . . . , Y i−1 , Y i ′ , Y i+1 , . . . , Y m ), where Y i ′ is an independent copy of Y i . If |Z − Z (i) | ≤ c i holds (almost surely) for each i = 1, . . . , m, then, for each ǫ > 0, both P(Z − EZ ≥ ǫ) and P(EZ − Z ≥ ǫ) are bounded by exp[ −2ǫ² / (c 1 ² + · · · + c m ²) ].

Proof of Theorem 5. It suffices to prove the second claim. Hence assume that a and b, −∞ ≤ a < b ≤ ∞, are such that P( a < λ 1 (M ) and λ n (M ) < b ) = 1 and that f : (a, b) → R is of bounded variation on (a, b). We shall now show that |F M (f ) − F M (i) (f )| ≤ r V f (a, b)/n holds for each i = 1, . . . , m; indeed, writing F M (f ) as an integral with respect to dF M and integrating by parts, we obtain |F M (f ) − F M (i) (f )| ≤ V f (a, b) ||F M − F M (i) || ∞ ≤ r V f (a, b)/n, where the last inequality holds by (6). With this, we can use the bounded difference inequality, i.e., Proposition 13, with Z, Z (i) , and c i (1 ≤ i ≤ m) replaced by F M (f ), F M (i) (f ), and r V f (a, b)/n, respectively, to obtain (7), completing the proof.
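McDiarmid's inequality as used above can be illustrated with the simplest bounded-difference function, the mean of m independent [0, 1]-valued variables (so that c i = 1/m). The Monte Carlo sketch below (illustration only; the parameter choices are arbitrary) compares the empirical deviation probability with the two-sided bound 2 exp[−2ǫ²/(c 1 ² + · · · + c m ²)].

```python
import numpy as np

rng = np.random.default_rng(7)
m, eps, trials = 50, 0.1, 2000

# Z = mean of m independent uniform [0,1] variables; replacing a single
# coordinate moves Z by at most c_i = 1/m.
Y = rng.uniform(0.0, 1.0, size=(trials, m))
Z = Y.mean(axis=1)

freq = np.mean(np.abs(Z - 0.5) >= eps)                   # empirical deviation probability
bound = 2 * np.exp(-2 * eps ** 2 / (m * (1 / m) ** 2))   # 2 exp(-2 eps^2 / sum c_i^2)
assert freq <= bound
```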
Proof of Theorem 1. Our reasoning is similar to that used in the proof of Corollary 1.8 of Guionnet and Zeitouni [8]. Set ñ = m + n and write M̃ as shorthand for the ñ × ñ matrix

M̃ = ( 0 n×n   X ′ n×m )
     ( X m×n   0 m×m ) .
Moreover, set S̃ = M̃ /√m, and write Y i for the i-th row of X, 1 ≤ i ≤ m, i.e., Y i = (X i,· )′. We view M̃ as a function of Y 1 , . . . , Y m . Also let f̃ (x) = f (x²).
It is easy to check that

P( |F S (f ) − µ| ≥ ǫ ) = P( |F S̃ (f̃ ) − µ̃| ≥ (2n/ñ)ǫ ),

where µ (µ̃) can be either EF S (f ) (EF S̃ (f̃ )) or med F S (f ) (med F S̃ (f̃ )). [This follows from the identity ñ F S̃ (f̃ ) = 2n F S (f ) + (ñ − 2n)f (0), which is easily verified by relating the eigenvalues of S̃ to the singular values of X/√m.]
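The identity ñ F S̃ (f̃ ) = 2n F S (f ) + (ñ − 2n)f (0), which drives this equivalence, can be verified numerically (the test function f below is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 7, 4
X = rng.standard_normal((m, n))
nt = n + m                                       # n-tilde = n + m

# Block matrix M-tilde = [[0, X'], [X, 0]] and its rescaling S-tilde.
Mt = np.block([[np.zeros((n, n)), X.T],
               [X, np.zeros((m, m))]])
St = Mt / np.sqrt(m)
S = X.T @ X / m

f = lambda x: np.minimum(np.abs(x), 1.0)         # any bounded test function
ft = lambda x: f(x ** 2)                         # f-tilde(x) = f(x^2)

lhs = nt * np.mean(ft(np.linalg.eigvalsh(St)))   # nt * F^{S-tilde}(f-tilde)
rhs = 2 * n * np.mean(f(np.linalg.eigvalsh(S))) + (nt - 2 * n) * f(0.0)
assert np.isclose(lhs, rhs)
```

The identity holds because the nonzero eigenvalues of S̃ come in pairs ± the singular values of X/√m, while all remaining eigenvalues of S̃ vanish.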
To prove (i), it suffices to note that Theorem 4 applies with M̃ , S̃, ñ, n, f̃ , and 1 replacing M , S, n, p, f , and C M , respectively. Using Theorem 4 with these replacements and with (2n/ñ)ǫ replacing ǫ, we see that the left-hand side of (1) is bounded as claimed.
To prove (ii), we first note that ||F S̃ − F S̃ (i) || ∞ ≤ 2/ñ in view of Lemma 6 (where S̃ (i) is defined as S̃ but with Y i replaced by an independent copy). Also, note that f̃ is of bounded variation on R with V f̃ (R) ≤ V f (R). Hence, Theorem 5 applies with M̃ , S̃, ñ, 2, and f̃ replacing M , S, n, r, and f , respectively, and (2) follows after elementary simplifications.