Joint Estimation of Precision Matrices in Heterogeneous Populations

We introduce a general framework for estimation of inverse covariance, or precision, matrices from heterogeneous populations. The proposed framework uses a Laplacian shrinkage penalty to encourage similarity among estimates from disparate, but related, subpopulations, while allowing for differences among matrices. We propose an efficient alternating direction method of multipliers (ADMM) algorithm for parameter estimation, as well as its extension for faster computation in high dimensions by thresholding the empirical covariance matrix to identify the joint block diagonal structure in the estimated precision matrices. We establish both variable selection and norm consistency of the proposed estimator for distributions with exponential or polynomial tails. Further, to extend the applicability of the method to settings with unknown population structure, we propose a Laplacian penalty based on hierarchical clustering, and discuss conditions under which this data-driven choice results in consistent estimation of precision matrices in heterogeneous populations. Extensive numerical studies and applications to gene expression data from subtypes of cancer with distinct clinical outcomes indicate the potential advantages of the proposed method over existing approaches.


Introduction
Estimation of large inverse covariance, or precision, matrices has received considerable attention in recent years. This interest is in part driven by the advent of high-dimensional data in many scientific areas, including high throughput omics measurements, functional magnetic resonance images (fMRI), and applications in finance and industry. Applications of various statistical methods in such settings require an estimate of the (inverse) covariance matrix.
Examples include dimension reduction using principal component analysis (PCA), classification using linear or quadratic discriminant analysis (LDA/QDA), and discovering conditional independence relations in Gaussian graphical models (GGM).
In high-dimensional settings, where the data dimension p is often comparable to or larger than the sample size n, regularized estimation procedures often result in more reliable estimates.
Sparse estimation is particularly relevant in the setting of GGMs, where conditional independencies among variables correspond to zero off-diagonal elements of the precision matrix (Lauritzen, 1996). The majority of existing approaches for estimation of high-dimensional precision matrices, including those cited in the previous paragraph, assume that the observations are identically distributed and correspond to a single population. However, data sets in many application areas include observations from several distinct subpopulations. For instance, gene expression measurements are often collected for healthy subjects as well as patients diagnosed with different subtypes of cancer. Despite increasing evidence for differences among genetic networks of cancer and healthy subjects (Ideker and Krogan, 2012; Sedaghat et al., 2014), the networks are also expected to share many common edges. Separate estimation of graphical models for each of the subpopulations would ignore the common structure of the precision matrices, and may thus be inefficient; this inefficiency can be particularly significant in high-dimensional, low-sample settings, where $p \gg n$.
To address the need for estimation of graphical models in related subpopulations, a few methods have recently been proposed for joint estimation of $K$ precision matrices $\Omega^{(k)} = (\omega^{(k)}_{ij})_{i,j=1}^p \in \mathbb{R}^{p \times p}$, $k = 1, \ldots, K$ (Guo et al., 2011; Danaher et al., 2014). These methods extend the penalized maximum likelihood approach by combining the Gaussian likelihoods for the $K$ subpopulations,
$$\ell_n(\Omega) = \frac{1}{n} \sum_{k=1}^K n_k \left\{ \log\det\big(\Omega^{(k)}\big) - \mathrm{tr}\big(\hat\Sigma^{(k)}_n \Omega^{(k)}\big) \right\}.$$
Here, $n_k$ and $\hat\Sigma^{(k)}_n$ are the number of observations and the sample covariance matrix for the $k$th subpopulation, respectively, $n = \sum_{k=1}^K n_k$ is the total sample size, and $\mathrm{tr}(\cdot)$ and $\det(\cdot)$ denote the matrix trace and determinant.
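For concreteness, the combined likelihood can be evaluated as in the following Python/NumPy sketch (the function name is ours):

```python
import numpy as np

def joint_loglik(Omegas, Sigma_hats, n_ks):
    """Combined Gaussian log-likelihood:
    (1/n) * sum_k n_k { log det(Omega^(k)) - tr(Sigma_hat^(k) @ Omega^(k)) }."""
    n = sum(n_ks)
    total = 0.0
    for Omega, S, nk in zip(Omegas, Sigma_hats, n_ks):
        _, logdet = np.linalg.slogdet(Omega)  # numerically stable log-determinant
        total += nk * (logdet - np.trace(S @ Omega))
    return total / n
```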
To encourage similarity among estimated precision matrices, Guo et al. (2011) modeled the $(i,j)$-element of $\Omega^{(k)}$ as the product of a common factor $\theta_{ij}$ and group-specific parameters $\gamma^{(k)}_{ij}$, i.e., $\omega^{(k)}_{ij} = \theta_{ij} \gamma^{(k)}_{ij}$. Identifiability of the estimates is ensured by assuming $\theta_{ij} \ge 0$. A zero common factor $\theta_{ij} = 0$ induces sparsity across all subpopulations, whereas $\gamma^{(k)}_{ij} = 0$ results in condition-specific sparsity for $\omega^{(k)}_{ij}$. This reparametrization results in a non-convex optimization problem based on the Gaussian likelihood with $\ell_1$-penalties $\sum_{i \ne j} \theta_{ij}$ and $\sum_{i \ne j} \sum_{k=1}^K |\gamma^{(k)}_{ij}|$. Danaher et al. (2014) proposed two alternative estimators by adding an additional convex penalty to the graphical lasso objective function: either a fused lasso penalty $\sum_{i \ne j} \sum_{k < k'} |\omega^{(k)}_{ij} - \omega^{(k')}_{ij}|$ (FGL), or a group lasso penalty $\sum_{i \ne j} \big\{ \sum_{k=1}^K (\omega^{(k)}_{ij})^2 \big\}^{1/2}$ (GGL). The fused lasso penalty has also been used by Kolar et al. (2009) for joint estimation of multiple graphical models over multiple time points. The fused lasso penalty strongly encourages the values of $\omega^{(k)}_{ij}$ to be similar across all subpopulations, both in values and in sparsity patterns. The group lasso penalty, on the other hand, results in similar estimates by shrinking all $\omega^{(k)}_{ij}$ across subpopulations to zero if $\sum_{k=1}^K (\omega^{(k)}_{ij})^2$ is small. Despite their differences, the methods of Guo et al. (2011) and Danaher et al. (2014) inherently assume that precision matrices in the $K$ subpopulations are equally similar to each other, in that they encourage $\omega^{(k)}_{ij}$ and $\omega^{(k')}_{ij}$ to be equally similar for every pair of subpopulations $(k, k')$. However, when $K > 2$, some subpopulations are expected to be more similar to each other than others. For instance, genetic networks of two subtypes of cancer are expected to be more similar to each other than to the network of normal cells. Similarly, differences among genetic networks of various strains of a virus or bacterium are expected to correspond to the evolutionary lineages of their phylogenetic trees. Unfortunately, existing methods for joint estimation of multiple graphical models ignore this heterogeneity among subpopulations. Furthermore, existing methods assume that subpopulation memberships are known, which limits their applicability in settings with complex but unknown population structures; an important example is estimation of genetic networks of cancer cells with unknown subtypes.
In this paper, we propose a general framework for joint estimation of multiple precision matrices that captures the heterogeneity among subpopulations. In this framework, similarities among disparate subpopulations are represented using a subpopulation network $G(V, E, W)$, a weighted graph whose node set $V$ is the set of subpopulations. The edges in $E$ and the weights $W_{kk'}$ for $(k, k') \in E$ represent the degree of similarity between any two subpopulations $k, k'$. In the special case where $W_{kk'} = 1$ for all $k, k'$, the subpopulation similarities are captured only by the structure of the graph $G$. An example of such a subpopulation network is the line graph corresponding to observations over multiple time points, which is used in estimation of time-varying graphical models (Kolar et al., 2009). As we will show in Section 2.3, other existing methods for joint estimation of multiple graphical models, e.g. the proposals of Danaher et al. (2014), can also be seen as special cases of this general framework.
Our proposed estimator is the solution to a convex optimization problem based on the Gaussian likelihood with both $\ell_1$ and graph Laplacian penalties. The graph Laplacian has been used in other applications: for incorporating a priori knowledge in classification (Rapaport et al., 2007), for principal component analysis on network data (Shojaie and Michailidis, 2010), and for penalized linear regression with correlated covariates (Huang et al., 2011; Weinberger et al., 2006; Liu et al., 2014; Zhao and Shojaie, 2015). The Laplacian penalty encourages similarity among estimated precision matrices according to the subpopulation network $G$. The $\ell_1$-penalty, on the other hand, encourages sparsity in the estimated precision matrices. Together, these two penalties capture both unique patterns specific to each subpopulation and common patterns shared among different subpopulations.
We first discuss the setting where $G(V, E, W)$ is known from external information, e.g. known phylogenetic trees (Section 2), and later discuss the estimation of the subpopulation memberships and similarities using hierarchical clustering (Section 4). We propose an alternating direction method of multipliers (ADMM) algorithm (Boyd et al., 2011) for parameter estimation, as well as its extension for efficient computation in high dimensions by decomposing the problem into block-diagonal matrices. Although we use the Gaussian likelihood, our theoretical results also hold for non-Gaussian distributions. We establish model selection and norm consistency of the proposed estimator under different model assumptions (Section 3), with improved rates of convergence over existing methods based on penalized likelihood. We also establish the consistency of the proposed algorithm for the estimation of multiple precision matrices in settings where the subpopulation network $G$ or the subpopulation memberships are unknown. To achieve this, we establish the consistency of hierarchical clustering in high dimensions, by generalizing recent results of Borysov et al. (2014) to the setting of arbitrary covariance matrices, which is of independent interest. The rest of the paper is organized as follows. In Section 2 we describe the formal setup of the problem and present our estimator. Theoretical properties of the proposed estimator are studied in Section 3, and Section 4 discusses the extension of the method to the setting where the subpopulation network is unknown. The ADMM algorithm for parameter estimation and its extension for efficient computation in high dimensions are presented in Section 5.
Results of the numerical studies, using both simulated and real data examples, are presented in Section 6. Section 7 concludes the paper with a discussion. Technical proofs are collected in the Appendix.

Problem Setup
Consider $K$ subpopulations with distributions $P^{(k)}$, $k = 1, \ldots, K$. Let $X^{(k)} = (X^{(k),1}, \ldots, X^{(k),p})^T \in \mathbb{R}^p$ be a random vector from the $k$th subpopulation with mean $\mu_k$ and covariance matrix $\Sigma^{(k)}_0$. Suppose that an observation comes from the $k$th subpopulation with probability $\pi_k > 0$.
Our goal is to estimate the precision matrices $\Omega^{(k)}_0 = (\Sigma^{(k)}_0)^{-1}$. To this end, we use the Gaussian log-likelihood based on the correlation matrix (see Rothman et al. (2008)) as a working model for estimation of the true $\Omega^{(k)}_0$. Let $X^{(k)}_i$, $i = 1, \ldots, n_k$, be independent and identically distributed (i.i.d.) copies from $P^{(k)}$, $k = 1, \ldots, K$. We denote the correlation matrices and their inverses by $\Psi^{(k)}_0$ and $\Theta^{(k)}_0 = (\theta^{(k)}_{0,ij})_{i,j=1}^p$, $k = 1, \ldots, K$, respectively. The Gaussian log-likelihood based on the correlation matrix can then be written as
$$\tilde\ell_n(\Theta) = \frac{1}{n} \sum_{k=1}^K n_k \left\{ \log\det\big(\Theta^{(k)}\big) - \mathrm{tr}\big(\hat\Psi^{(k)}_n \Theta^{(k)}\big) \right\}, \qquad (2)$$
where $\hat\Psi^{(k)}_n$, $k = 1, \ldots, K$, is the sample correlation matrix for subpopulation $k$.
Examining the derivative of (2), which involves $\hat\Psi^{(k)}_n$, $k = 1, \ldots, K$, justifies its use as a working model for non-Gaussian data: the stationary point of (2) satisfies $(\Theta^{(k)})^{-1} = \hat\Psi^{(k)}_n$, which gives a consistent estimate of $\Psi^{(k)}_0$. Thus, we do not, in general, need to assume multivariate normality. However, in certain applications, for instance LDA/QDA and GGMs, the resulting estimate is useful only if the data follow a multivariate normal distribution.

The Laplacian Shrinkage Estimator
Let $\Theta = (\Theta^{(1)}, \ldots, \Theta^{(K)})$ and write $\Theta_{ij} = (\theta^{(1)}_{ij}, \ldots, \theta^{(K)}_{ij})^T \in \mathbb{R}^K$, $i, j = 1, \ldots, p$, for the vector of $(i,j)$-elements across subpopulations. Our proposed estimator, Laplacian Shrinkage for Inverse Covariance matrices from Heterogeneous populations (LASICH), first estimates the inverses of the correlation matrices for each of the $K$ subpopulations, and then transforms them into estimates of the inverse covariance matrices, as in Rothman et al. (2008). In particular, we first obtain the estimate $\hat\Theta$ of the true inverse correlation matrices by solving the following optimization problem:
$$\hat\Theta_{\rho_n} = \operatorname*{arg\,min}_{\Theta = \Theta^T, \; \Theta \succ 0} \; -\tilde\ell_n(\Theta) + \rho_n \sum_{k=1}^K \sum_{i \ne j} |\theta^{(k)}_{ij}| + \rho_n \rho_2 \sum_{i \ne j} \|\Theta_{ij}\|_L, \qquad (3)$$
where $\Theta = \Theta^T$ enforces the symmetry of the individual inverse correlation matrices, i.e. $\Theta^{(k)} = (\Theta^{(k)})^T$, and $\Theta \succ 0$ requires that $\Theta^{(k)}$ be positive definite for $k = 1, \ldots, K$. The $\ell_1$-penalty in (3) encourages sparsity of the off-diagonal entries $\theta^{(k)}_{ij}$, while the graph Laplacian penalty $\|\Theta_{ij}\|_L$, defined below, encourages similarity across subpopulations. The tuning parameters $\rho_n$ and $\rho_n \rho_2$ control the size of each penalty term.

Figure 1: Hypothetical sparsity patterns of the precision matrices of three related subpopulations; nonzero and zero off-diagonal entries are colored in black and white, respectively, while diagonal entries are colored in gray. The associated subpopulation network $G$ reflects the similarities between the precision matrices of subpopulations 1 and 2, and of 1 and 3. The simulation experiments in Section 6.1 use a similar subpopulation network in a high-dimensional setting.

Figure 1 illustrates the motivation for the graph Laplacian penalty $\|\Theta_{ij}\|_L$ in (3). The gray-scale images in the figure show the hypothetical sparsity patterns of precision matrices $\Theta^{(1)}, \Theta^{(2)}, \Theta^{(3)}$ for three related subpopulations. Here, $\Theta^{(1)}$ consists of two blocks with one "hub" node in each block; in $\Theta^{(2)}$ and $\Theta^{(3)}$, one of the blocks is changed into a "banded" structure. It can be seen that one of the two blocks in each of $\Theta^{(2)}$ and $\Theta^{(3)}$ has a sparsity pattern similar to $\Theta^{(1)}$. However, $\Theta^{(2)}$ and $\Theta^{(3)}$ are not similar to each other. The subpopulation network $G$ in this figure captures the relationship among the precision matrices of the three subpopulations.
Such complex relationships cannot be captured using existing approaches, e.g. those of Guo et al. (2011) and Danaher et al. (2014), which encourage all precision matrices to be equally similar to each other. More generally, $G$ can be a weighted graph, $G(V, E, W)$, whose nodes represent the subpopulations $1, \ldots, K$. The edge weights $W : E \to \mathbb{R}^+$ represent the similarity among pairs of subpopulations, with larger values of $W_{kk'} \equiv W(k, k') > 0$ corresponding to greater similarity between the precision matrices of subpopulations $k$ and $k'$.
In this section, we assume that the weighted graph $G$ is externally available, and defer the discussion of data-driven choices of $G$, based on hierarchical clustering, to Section 4. Given $G$, the (unnormalized) graph Laplacian penalty $\|\Theta_{ij}\|_L$ is defined as
$$\|\Theta_{ij}\|_L = \Big\{ \sum_{k < k'} W_{kk'} \big( \theta^{(k)}_{ij} - \theta^{(k')}_{ij} \big)^2 \Big\}^{1/2}, \qquad (4)$$
where $W_{kk'} = 0$ if $k$ and $k'$ are not connected. The Laplacian shrinkage penalty can alternatively be written as
$$\|\Theta_{ij}\|_L = \big( \Theta_{ij}^T L \, \Theta_{ij} \big)^{1/2}, \qquad L = D - W,$$
where $d_k = \sum_{k' \ne k} W_{kk'}$ is the degree of node $k$ in $G$ and $D = \mathrm{diag}(d_1, \ldots, d_K)$ is the diagonal degree matrix. The Laplacian shrinkage penalty can also be defined in terms of the normalized graph Laplacian, $I - D^{-1/2} W D^{-1/2}$. The normalized Laplacian penalty,
$$\Big\{ \sum_{k < k'} W_{kk'} \Big( \frac{\theta^{(k)}_{ij}}{\sqrt{d_k}} - \frac{\theta^{(k')}_{ij}}{\sqrt{d_{k'}}} \Big)^2 \Big\}^{1/2},$$
which we also denote by $\|\Theta_{ij}\|_L$, imposes smaller shrinkage on coefficients associated with highly connected subpopulations. We henceforth focus primarily on the normalized penalty.
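The penalty is straightforward to compute from a weighted adjacency matrix, as in the following sketch (our illustration; the function name is ours):

```python
import numpy as np

def laplacian_penalty(theta_ij, W, normalized=True):
    """||Theta_ij||_L = sqrt(Theta_ij' L Theta_ij), where W is a weighted
    adjacency matrix with W[k, k'] = 0 when k and k' are not connected."""
    theta_ij = np.asarray(theta_ij, dtype=float)
    d = W.sum(axis=1)                      # node degrees d_k
    if normalized:
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        L = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt
    else:
        L = np.diag(d) - W
    return float(np.sqrt(theta_ij @ L @ theta_ij))

# Identical entries across subpopulations incur no unnormalized penalty:
W = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # network as in Figure 1
print(laplacian_penalty([0.5, 0.5, 0.5], W, normalized=False))  # 0.0
```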
Given estimates of the inverse correlation matrices $\hat\Theta^{(1)}, \ldots, \hat\Theta^{(K)}$ from (3), we obtain estimates of the precision matrices $\Omega^{(k)}$ by noting that $\Omega^{(k)} = (V^{(k)})^{-1} \Theta^{(k)} (V^{(k)})^{-1}$, where $V^{(k)} = \mathrm{diag}(\sigma^{(k)}_1, \ldots, \sigma^{(k)}_p)$ and $\sigma^{(k)}_i$ denotes the standard deviation of the $i$th element in the $k$th subpopulation.
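The back-transformation is the usual correlation-to-covariance rescaling; in practice, the standard deviations would be estimated from the data. A minimal sketch:

```python
import numpy as np

def theta_to_omega(Theta_hat, sds):
    """Map an inverse correlation estimate to a precision estimate:
    Omega = V^{-1} Theta V^{-1}, with V = diag(marginal standard deviations)."""
    V_inv = np.diag(1.0 / np.asarray(sds, dtype=float))
    return V_inv @ Theta_hat @ V_inv
```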
A number of alternative strategies could be used instead of the graph Laplacian penalty in (3). First, similarity among coefficients of precision matrices can also be imposed using a ridge-type penalty, $\|\Theta_{ij}\|_L^2$. The main difference is that our penalty $\|\Theta_{ij}\|_L$ discourages the inclusion of edges $\theta^{(k)}_{ij}$ if they are very different across the $K$ subpopulations.
Another option is to use graph trend filtering, which imposes a fused lasso penalty over the subpopulation graph $G$. Finally, ignoring the weights $W_{kk'}$ in (4), the Laplacian shrinkage penalty resembles the Markov random field (MRF) prior used in Bayesian variable selection with structured covariates (Li and Zhang, 2010). Our penalized estimation framework can thus be seen as an alternative to using an MRF prior to estimate the precision matrices in a mixture of Gaussian distributions.

Connections to Other Estimators
To connect our proposed estimator to existing methods for joint estimation of multiple graphical models, we first give an alternative interpretation of the graph Laplacian penalty as a norm of a transformed version of the $\theta^{(k)}_{ij}$'s. More specifically, consider the mapping $g_G$ defined based on the Laplacian matrix of $G$ as
$$g_G(\Theta_{ij}) = L^{1/2}\, \Theta_{ij}$$
if $G$ has at least one edge. For a graph with no edges, define $g_G(\Theta_{ij})$ to be the $K \times K$ diagonal matrix with diagonal elements $\theta^{(k)}_{ij}$ (which can be written compactly using the $K$-identity matrix $I_K$ and the Kronecker product $\otimes$). It can then be seen that the graph Laplacian penalty can be rewritten as
$$\|\Theta_{ij}\|_L = \|g_G(\Theta_{ij})\|_F,$$
where $\|\cdot\|_F$ is the Frobenius norm.
Using the above interpretation, other methods for joint estimation of multiple graphical models can be seen as penalties on transformations $g_G(\Theta_{ij})$ corresponding to different graphs $G$. We illustrate this connection using the hypothetical subpopulation network shown in Figure 2a. Consider first the FGL penalty of Danaher et al. (2014), applied to the elements of the inverse correlation matrices, $\sum_{k < k'} |\theta^{(k)}_{ij} - \theta^{(k')}_{ij}|$. Let $G_C$ be a complete unweighted graph ($W_{kk'} = 1$ for all $k \ne k'$), in which all $\binom{K}{2}$ node pairs are connected to each other (Figure 2b). It is then easy to see that the Laplacian penalty for $G_C$ involves the same pairwise differences $\theta^{(k)}_{ij} - \theta^{(k')}_{ij}$ as the FGL penalty, where the factor of $2(K-1)$ can be absorbed into the tuning parameter for the FGL penalty. A similar argument can also be applied to the GGL penalty of Danaher et al. (2014), $\|\Theta_{ij}\|$, by considering instead an empty graph $G_e$ with no edges between nodes (Figure 2c).
In this case, the mapping $g_{G_e}$ gives a diagonal matrix with elements $\theta^{(k)}_{ij}$, and hence $\|\Theta_{ij}\| = \|g_{G_e}(\Theta_{ij})\|_F$. Unlike the proposals of Danaher et al. (2014), the estimator of Guo et al. (2011) is based on a non-convex penalty and does not naturally fit into the above framework. However, Lemma 2 in Guo et al. (2011) establishes a connection between the optimal solutions of their original optimization problem and those obtained by considering a single penalty of the form $\big( \sum_{k=1}^K |\theta^{(k)}_{ij}| \big)^{1/2} \equiv \|\Theta_{ij}\|_{1,2}$. Similar to GGL, the connection with the method of Guo et al. (2011) can be built based on the above alternative formulation by considering again the empty graph $G_e$ (Figure 2c), but with the $\|\cdot\|_{1,2}$ penalty instead, which is a member of the CAP family of penalties (Zhao et al., 2009). Using the above framework, it is also easy to see the connection between our proposed estimator and the proposal of Kolar et al. (2009): the total variation penalty in Kolar et al. (2009) is closely related to FGL, with summation over differences at consecutive time points.
It is therefore clear that the penalty of Kolar et al. (2009) can (up to constant multipliers) be obtained by applying the graph Laplacian penalty defined for a line graph connecting the time points (Figure 2d).
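The identity $\|\Theta_{ij}\|_L = \|g_G(\Theta_{ij})\|_F$ and the empty-graph case are easy to verify numerically. The following Python sketch (our illustration; the matrix square root is computed via eigendecomposition) checks both on a small example:

```python
import numpy as np

def psd_sqrt(L):
    """Matrix square root of a positive semidefinite matrix."""
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T

K = 3
W = np.ones((K, K)) - np.eye(K)      # complete graph G_C
L = np.diag(W.sum(axis=1)) - W       # unnormalized Laplacian
theta_ij = np.array([0.5, -0.2, 0.3])

lap = np.sqrt(theta_ij @ L @ theta_ij)         # ||Theta_ij||_L
frob = np.linalg.norm(psd_sqrt(L) @ theta_ij)  # ||g_G(Theta_ij)||_F
assert np.isclose(lap, frob)

# Empty graph G_e: g_G(Theta_ij) = diag(Theta_ij), so the penalty is the
# Euclidean norm of Theta_ij, recovering the GGL penalty.
assert np.isclose(np.linalg.norm(np.diag(theta_ij)), np.linalg.norm(theta_ij))
```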
The above discussion highlights the generality of the proposed estimator and its connection to existing methods. In particular, while FGL and GGL/Guo et al. (2011) consider the extreme cases of isolated or fully connected nodes, one can obtain more flexibility in the estimation of multiple precision matrices by defining the penalty based on a known subpopulation network, e.g. based on phylogenetic trees or spatio-temporal similarities between fMRI samples.
The clustering-based approach of Section 4 further extends the applicability of the proposed estimator to settings where the subpopulation network is not known a priori. The simulation results in Section 6 show that the additional flexibility of the proposed estimator can result in significant improvements in the estimation of multiple precision matrices when $K > 2$.
The above discussion also suggests that other variants of the proposed estimator can be defined, by considering other norms. We leave such extensions to future work.

Theoretical Properties
In this section, we establish norm and model selection consistency of the LASICH estimator.
We consider a high-dimensional setting, $p \gg n_k$, $k = 1, \ldots, K$, where both $n$ and $p$ go to infinity. As mentioned in the Introduction, the normality assumption is not required for establishing these results. We instead require conditions on the tails of the random vectors $X^{(k)}$ for each $k = 1, \ldots, K$. We consider two cases, exponential tails and polynomial tails, which both allow for distributions other than multivariate normal.
Since we adopt the correlation-based Gaussian log-likelihood, we require boundedness of the true variances in order to control the error between the true and sample correlation matrices.
(ii) (Polynomial tails). Let $C_2 = \sup_n \{\rho_n \sqrt{n/\log p}\} = O(1)$, where $\rho_n$ is given in Lemma 1 in the Appendix, and let $c_7 > 0$ be some constant.

Condition 4 determines the sample size $n = \sum_k n_k$ sufficient for consistent estimation of the precision matrices $\Theta^{(1)}, \ldots, \Theta^{(K)}$, in relation to, among other quantities, the number of variables $p$, the sparsity level $s$, and the spectral norm $\|L\|_2$ of the Laplacian matrix of the subpopulation network $G$. While a general characterization of $\|L\|_2$ is difficult, investigating its value in special cases provides insight into the effect of the underlying population structure on the required sample size. Consider, for instance, two extreme cases: for a fully connected graph $G$ associated with $K$ subpopulations, $\|L\|_2 = 1 + 1/(K-1)$, whereas for a minimally connected "line" graph, corresponding to e.g. multiple time points, $\|L\|_2 = 2$; with $K = 5$, roughly 30% more samples are needed for the line graph compared to the fully connected network. These calculations match our intuition that fewer samples are needed to consistently estimate the precision matrices of $K$ subpopulations that share greater similarities. This makes sense, as information can be better shared when estimating parameters of similar subpopulations. Note that here $L$ represents the Laplacian matrix of the true subpopulation network, capturing the underlying population structure. The above conditions thus do not provide any insight into the effect of misspecifying the relationships between subpopulations, i.e., when an incorrect $L$ is used. This is an important issue that warrants additional investigation; see Zhao and Shojaie (2015) for some insight in the context of inference for high-dimensional regression. In Section 4, we discuss a data-driven choice of $L$ that results in consistent estimation of precision matrices.
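The two spectral norms quoted above are easy to check directly. The following sketch (Python/NumPy; our illustration) computes $\|L\|_2$ for the normalized Laplacians of the fully connected and line graphs with $K = 5$:

```python
import numpy as np

def normalized_laplacian(W):
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt

K = 5
W_complete = np.ones((K, K)) - np.eye(K)    # fully connected graph
W_line = np.eye(K, k=1) + np.eye(K, k=-1)   # line graph

for name, W in [("complete", W_complete), ("line", W_line)]:
    lam_max = np.linalg.eigvalsh(normalized_laplacian(W)).max()
    print(name, lam_max)
# complete: 1.25 (= 1 + 1/(K-1));  line: 2.0 (the line graph is bipartite)
```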
Before presenting the asymptotic results, we introduce some additional notation. For a matrix $A = (a_{ij})_{i,j=1}^p \in \mathbb{R}^{p \times p}$, we denote the spectral norm by $\|A\|_2 = \max_{x \in \mathbb{R}^p, \|x\| = 1} \|Ax\|$ and the element-wise $\ell_\infty$-norm by $\|A\|_\infty = \max_{i,j} |a_{ij}|$, where $\|x\|$ is the Euclidean norm of a vector $x$. We also write $\|A\|_{\infty/\infty} = \max_i \sum_j |a_{ij}|$ for the induced $\ell_\infty$-matrix norm. For ease of presentation, the results in this section are stated in asymptotic form; non-asymptotic results and proofs are deferred to the Appendix.
Theorem 1. Suppose Conditions 3 and 4 hold. Then, under Condition 1 or 2, as $n, p \to \infty$, $\hat\Theta_{\rho_n}$ is consistent in the Frobenius and spectral norms, where $\rho_n$ is given in Lemma 1 in the Appendix with $\gamma = \min_k \pi_k / 2$; the precise non-asymptotic bounds are stated in the Appendix.
Theorem 1 is proved in the Appendix. The proof builds on tools from Negahban et al. (2012a). However, our estimation procedure does not match their general framework: first, we do not penalize the diagonal elements of the inverse correlation matrices, so our penalty is not a norm; second, the Laplacian matrix is positive semidefinite but not positive definite, so the Laplacian shrinkage penalty is not strictly convex. The results of Negahban et al. (2012a) are thus not directly applicable to our problem. To establish estimation consistency, we first show, in Lemma 3, that the function $r(\cdot) = \|\cdot\|_1 + \rho_2 \|\cdot\|_L$ is a seminorm, and is moreover convex and decomposable. We also characterize the subdifferential of this seminorm in Lemma 6.

Model Selection Consistency
For $a \in \mathbb{R}$, let $\mathrm{sign}(a)$ be $1$ if $a > 0$, $0$ if $a = 0$, and $-1$ if $a < 0$. We say that an estimator $\hat\Omega_{\rho_n}$ of $\Omega_0$ is model selection consistent if $\mathrm{sign}(\hat\omega^{(k)}_{ij}) = \mathrm{sign}(\omega^{(k)}_{0,ij})$ for all $i, j, k$, with probability tending to one. We begin by discussing an irrepresentability condition for estimation of multiple graphical models. This restrictive condition is commonly assumed to establish model selection consistency of lasso-type estimators, and is known to be almost necessary (Meinshausen and Bühlmann, 2006; Zhao and Yu, 2006). For the graphical lasso, Ravikumar et al. (2011) showed that the irrepresentability condition amounts to a constraint on the correlation between entries of the Hessian matrix $\Gamma = \Omega^{-1} \otimes \Omega^{-1}$ in the set $S$ corresponding to nonzero elements of $\Omega$, and those outside this set. Our irrepresentability condition is motivated by that in Ravikumar et al. (2011); however, we adjust the index set $S$ to also account for covariances of "non-edge variables" that are correlated with each other. More specifically, the description of the irrepresentability condition in Ravikumar et al. (2011) involves $\Gamma_{SS}$ consisting only of elements $\sigma_{ij}\sigma_{kl}$ with $(i,j) \in S$ and $(k,l) \in S$. However, $\sigma_{ij} \ne 0$ for $(i,j) \notin S$ is not taken into account by this definition. We thus adjust the index set $S$ to a set $\bar S$ so that $\Gamma_{\bar S \bar S}$ also includes elements $\sigma_{ij}\sigma_{kl}$ with $\sigma_{ij} \ne 0$ or $\sigma_{kl} \ne 0$. This definition is based on the crucial observations that $\Gamma = \Sigma \otimes \Sigma$ involves the covariance matrix $\Sigma$ instead of the precision matrix $\Omega$, and that some variables are correlated (i.e., $\sigma_{ij} \ne 0$) even though they may be conditionally independent (i.e., $\omega_{ij} = 0$). Defining $\bar S^{(k)}$ for $k = 1, \ldots, K$ in this way, we assume the following condition.
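To make the adjusted index set concrete, the sketch below (our reading of the construction; the formal statement is in terms of $\Gamma^{(k)} = \Sigma^{(k)} \otimes \Sigma^{(k)}$) builds the mask of pairs retained in $\bar S^{(k)}$ from the precision and covariance matrices of one subpopulation:

```python
import numpy as np

def adjusted_index_set(Omega, Sigma, tol=1e-12):
    """Adjusted support: pairs (i, j) with omega_ij != 0, augmented by pairs
    with sigma_ij != 0 (variables that are marginally correlated even though
    conditionally independent). Returns a boolean p x p mask."""
    return (np.abs(Omega) > tol) | (np.abs(Sigma) > tol)
```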
In addition to the irrepresentability condition, we require lower bounds on the magnitude of the nonzero $\theta^{(k)}_{ij}$ and their normalized differences.
Condition 6 (Lower bounds for the inverse correlation matrices). There exists a constant $c_8 > 0$ bounding the magnitude of the nonzero entries from below. Moreover, for $\Omega_{0,ij}$ with $L\Omega_{0,ij} \ne 0$, we have $L\Theta_{0,ij} \ne 0$, and there exists a constant $c_9 > 0$ bounding the corresponding normalized differences from below. The first lower bound in Condition 6 is the usual "min-beta" condition for model selection consistency of lasso-type estimators. The second lower bound, stated here for the normalized Laplacian penalty, is a mild condition ensuring that estimates based on inverse correlation matrices can be mapped to precision matrices. For any pair of subpopulations $k$ and $k'$ connected in $G$, it requires that if the difference in the (normalized) entries of the precision matrices is nonzero, then the difference in the (normalized) entries of the inverse correlation matrices is bounded away from zero. In other words, the bound guarantees that $\Theta_{0,ij}$ is not in the null space of $L$ whenever $\Omega_{0,ij}$ is outside the null space. This bound can be relaxed if we use a positive definite matrix $L_\epsilon = L + \epsilon I$ for small $\epsilon > 0$.
Our last condition for establishing model selection consistency concerns the minimum sample size and the tuning parameter for the graph Laplacian penalty. This condition is needed to control the $\ell_\infty$-bound of the error $\hat\Theta_{\rho_n} - \Theta_0$, as in Ravikumar et al. (2011).
Our minimum sample size requirement is related to the irrepresentability condition. Let $\kappa_\Gamma$ be the maximum of the absolute column sums of the matrices $\{(\Gamma^{(k)})^{-1}\}_{\bar S^{(k)} \bar S^{(k)}}$, $k = 1, \ldots, K$, and let $\kappa_\Psi$ be the maximum of the absolute column sums of the matrices $\Psi^{(k)}_0$, $k = 1, \ldots, K$. The minimum sample size in Ravikumar et al. (2011) is also a function of the irrepresentability constant; in particular, their analogue of $\kappa_\Gamma$ is defined via $\{(\Gamma^{(k)})_{SS}\}^{-1}$. There is, therefore, a subtle difference between our definition and theirs: in our definition, the matrix is first inverted and then partitioned, while in Ravikumar et al. (2011), the matrix is first partitioned and then inverted. Corollary 2 establishes model selection consistency under a weaker sample size requirement, by exploiting instead the control of the spectral norm in Theorem 1.

Additional Results
In this section, we establish norm and variable selection consistency of LASICH under alternative assumptions. Our first result gives better rates of convergence in the $\ell_\infty$-, spectral, and Frobenius norms, under the conditions for model selection consistency. The rates in Corollary 1 improve the previous results of Ravikumar et al. (2011), and are comparable to those of Cai et al. (2011) in the $\ell_\infty$- and spectral norms under both tail conditions. Corollary 1. Suppose the conditions in Theorem 2 hold. Then, under Condition 1 or 2, the improved rates hold; the precise statements are given in the Appendix. Our next result, Corollary 2, establishes model selection consistency under a weaker version of the irrepresentability condition (Condition 6). Aside from the difference in the index sets $\bar S^{(k)}$, our requirements differ from those of Ravikumar et al. (2011) in the assumed invertibility of $(\Psi^{(k)}_0)_{\bar S^{(k)} \bar S^{(k)}}$, which Ravikumar et al. (2011) do not require. However, their proof is based on an application of Brouwer's fixed point theorem, which does not hold for the corresponding function (Eq. (70) on page 973), since that function involves a matrix inverse and is hence not continuous on its range. The additional invertibility assumption in Condition 6 is used to address this issue in Lemma 11.
The condition can be relaxed if we assume an alternative scaling of the sample size stated in Condition 8 below instead of Condition 7.

Laplacian Shrinkage based on Hierarchical Clustering
Our proposed LASICH approach utilizes the information in the subpopulation network G. In practice, however, similarity between subpopulations may be difficult to ascertain or quantify. In this section, we present a modified LASICH framework, called HC-LASICH, which utilizes hierarchical clustering to learn the relationships among subpopulations. The information from hierarchical clustering is then used to define the weighted subpopulation network.
Importantly, HC-LASICH can even be used in settings where the subpopulation membership is unavailable, for instance, to learn the genetic network of cancer patients, where cancer subtypes may be unknown.
We use hierarchical clustering with complete, single, or average linkage to estimate both the subpopulation memberships and the weighted subpopulation network $G$. Specifically, the length of the path between two subpopulations in the dendrogram is used as a measure of dissimilarity between the two subpopulations; the weights for the subpopulation network are then defined as the inverses of these lengths. Throughout this section, we assume that the number of subpopulations $K$ is known. While a number of methods have been proposed for estimating the number of subpopulations in hierarchical clustering (see e.g. Borysov et al. (2014) and the references therein), this problem is beyond the scope of this paper.
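A minimal sketch of this construction is given below (Python/SciPy; the function name is ours, and we use the cophenetic distance as the dendrogram path length between observations, which is one natural formalization of the description above):

```python
import numpy as np
from scipy.cluster.hierarchy import cophenet, fcluster, linkage
from scipy.spatial.distance import squareform

def estimate_subpopulation_network(X, K, method="complete"):
    """Cluster the rows of X into K subpopulations and weight each pair of
    clusters by the inverse of its dendrogram (cophenetic) distance."""
    Z = linkage(X, method=method)
    labels = fcluster(Z, t=K, criterion="maxclust")
    coph = squareform(cophenet(Z))  # pairwise merge heights in the dendrogram
    W = np.zeros((K, K))
    for k in range(1, K + 1):
        for kp in range(k + 1, K + 1):
            # cross-cluster pairs merge at a common dendrogram node, so the
            # mean equals the shared path length between the two clusters
            d = coph[np.ix_(labels == k, labels == kp)].mean()
            W[k - 1, kp - 1] = W[kp - 1, k - 1] = 1.0 / d
    return labels, W
```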
Let $I_i$, $i = 1, \ldots, n$, be i.i.d. copies of $I$, and let $\hat I_i = (\hat I^1_i, \ldots, \hat I^K_i)$ be the estimated subpopulation indicator for the $i$th observation obtained via hierarchical clustering. Based on the estimated subpopulation memberships and subpopulation network $\hat G$, we apply our method to obtain the estimator HC-LASICH, $\hat\Omega_{HC,\rho_n} = (\hat\Omega^{(1)}_{HC,\rho_n}, \ldots, \hat\Omega^{(K)}_{HC,\rho_n})$. Interestingly, HC-LASICH enjoys the same theoretical properties as LASICH under the normality assumption. To show this, we first establish the consistency of hierarchical clustering in high dimensions, which is of independent interest. Our result is motivated by the recent work of Borysov et al. (2014), who study the consistency of hierarchical clustering for independent normal variables $X^{(k)} \sim N(\mu^{(k)}, \sigma^{(k)} I)$; we establish similar results for multivariate normal distributions with arbitrary covariance structures. We make the following assumption.
Under the normality assumption, the following result shows that the probability of successful clustering converges to 1 as $p, n \to \infty$.
The proof of Theorem 3 generalizes recent results of Borysov et al. (2014) to the case of arbitrary covariance structures. A key component of the proof is a new bound on the $\ell_2$-norm of a multivariate normal random variable with arbitrary mean and covariance matrix, established in Lemma 14. The proof of the lemma uses new concentration inequalities for high-dimensional problems from Boucheron et al. (2013), and may be of independent interest.
Note that the consistent estimation of subpopulation memberships (7) implies that the estimated hierarchy among clusters also matches the true hierarchy. Thus, with successful clustering established in Theorem 3, theoretical properties ofΩ HC,ρn naturally follow.

Algorithms
We develop an alternating direction method of multipliers (ADMM) algorithm to efficiently solve the convex optimization problem (3).
To facilitate the computation, we consider instead a perturbed graph Laplacian $L_\epsilon = L + \epsilon I$, where $I$ is the identity matrix and $\epsilon > 0$ is a small perturbation. The difference between the solutions of the original and modified optimization problems is largely negligible for small $\epsilon$; however, the positive definiteness of $L_\epsilon$ results in more efficient computation. A similar idea was used in Guo et al. (2011) and Rothman et al. (2008) to avoid division by zero. The optimization problem (3) with $L$ replaced by $L_\epsilon$ can then be rewritten in a form amenable to ADMM, where $\epsilon > 0$ is a regularization parameter and $L_\epsilon^{1/2}$ is the square root of $L_\epsilon$, satisfying $L_\epsilon = (L_\epsilon^{1/2})^T L_\epsilon^{1/2}$. A key subproblem in the resulting iterations is sketched below.
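For concreteness, the following sketch shows the closed-form solve of the likelihood block that arises in ADMM splittings of problems such as (3); this is a standard ingredient of ADMM algorithms for penalized likelihood estimation (cf. Boyd et al., 2011; Danaher et al., 2014), not a complete implementation of our algorithm, which additionally cycles through the $\ell_1$- and Laplacian-penalty blocks and the dual updates:

```python
import numpy as np

def admm_likelihood_update(Psi_hat, A, rho):
    """Closed-form solution of the ADMM likelihood subproblem
        min_Theta  -log det(Theta) + tr(Psi_hat @ Theta) + (rho/2)*||Theta - A||_F^2.
    Setting the gradient to zero gives rho*Theta - Theta^{-1} = rho*A - Psi_hat,
    which is solved in the eigenbasis of the right-hand side."""
    lam, U = np.linalg.eigh(rho * A - Psi_hat)
    theta_eig = (lam + np.sqrt(lam ** 2 + 4.0 * rho)) / (2.0 * rho)
    return U @ np.diag(theta_eig) @ U.T
```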
The proposed ADMM algorithm then alternates the primal updates for each block with the dual updates:

• Repeat the iterations until the maximum of the errors $r^{(k)}_{n,ij}$ across blocks falls below a convergence tolerance.

For faster computation in high dimensions, the empirical covariance matrix is thresholded to identify a joint block diagonal structure in the estimated precision matrices; the entries $\hat\psi^{(k)}_{n,ij}$ are then set to zero for all $i, j$ such that the $(i,j)$ element lies outside the blocks.

Numerical Results

Simulation Experiments
We compare our method with four existing approaches: the graphical lasso, the method of Guo et al. (2011), and the FGL and GGL estimators of Danaher et al. (2014). For the graphical lasso, estimation was carried out separately for each group with the same regularization parameter.
Our simulation setting is motivated by the estimation of gene networks for healthy subjects and patients with two similar diseases caused by inactivation of certain biological pathways. We consider $K = 3$ groups with sample sizes $n = (50, 100, 50)$ and dimension $p = 100$. Data are generated from multivariate normal distributions $N(\mu^{(k)}, (\Omega^{(k)}_0)^{-1})$, $k = 1, 2, 3$; all precision matrices $\Omega^{(k)}_0$ are block diagonal with 4 blocks of equal size. To create the precision matrices, we first generated a graph with 4 components of equal size, each either an Erdős–Rényi or a scale-free graph, with 95 edges in total. We randomly assigned values from $\mathrm{Unif}((-.7, -.5) \cup (.5, .7))$ to the nonzero entries of the corresponding adjacency matrix $A$ to obtain a matrix $\tilde A$. We then added 0.1 to the diagonal of $\tilde A$ to obtain a positive definite matrix $\Omega^{(k)}_0$. This setting is similar to that of Danaher et al. (2014), where the graph included more components but no perturbation was added. We consider two simulation settings, with known and unknown subpopulation network $G$.
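A sketch of this data-generating mechanism is given below (Python/NumPy; the function name and the extra $|\lambda_{\min}|$ safeguard for positive definiteness are ours):

```python
import numpy as np

def block_precision_matrix(p=100, n_blocks=4, n_edges=95, seed=0):
    """Block-diagonal precision matrix with support from random graphs within
    equally sized blocks, edge values from Unif((-.7,-.5) U (.5,.7)), and a
    diagonal adjustment for positive definiteness."""
    rng = np.random.default_rng(seed)
    b = p // n_blocks
    within = [(i, j) for blk in range(n_blocks)
              for i in range(blk * b, (blk + 1) * b)
              for j in range(i + 1, (blk + 1) * b)]
    A = np.zeros((p, p))
    for t in rng.choice(len(within), size=n_edges, replace=False):
        i, j = within[t]
        A[i, j] = A[j, i] = rng.choice([-1.0, 1.0]) * rng.uniform(0.5, 0.7)
    # ensure positive definiteness: the paper adds 0.1 to the diagonal; here we
    # also shift by |lambda_min| in case the off-diagonal entries are too large
    lam_min = np.linalg.eigvalsh(A).min()
    np.fill_diagonal(A, max(0.0, -lam_min) + 0.1)
    return A
```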

Known subpopulation network G
In this case, we set $\mu^{(k)} = 0$, $k = 1, 2, 3$, and use the graph in Figure 1 as the subpopulation network $G$.

Unknown subpopulation network G
In this case, the subpopulation memberships and the subpopulation network G are estimated based on hierarchical clustering. We randomly generated µ (1) from a multivariate normal distribution with a covariance matrix σ 2 I. For subpopulations 2 and 3, the elements of µ (1) corresponding to the empty components of the graph were set to zero to obtain µ (2) and µ (3) .
Hierarchical clustering with complete linkage was applied to the data to obtain the dendrogram; we took the inverses of the distances in the dendrogram as the similarity weights used in the graph Laplacian.
Figure 4 compares the performance of HC-LASICH, in terms of support recovery, with that of competing methods in the setting where the subpopulation memberships and network are estimated from data (Section 4). Here, the differences in subpopulation means $\mu^{(k)} - \mu^{(k')}$ are set up to evaluate the effect of clustering accuracy. The four settings considered correspond to average Rand indices of .6, .7, .8, and .9 across 50 data sets, respectively. The second tuning parameter for HC-LASICH, GGL, and FGL is chosen according to the best performing model in Figure 3. As expected, changing the mean structure, and correspondingly the Rand index, does not affect the performance of the other methods. The results indicate that, as long as observations can be clustered in a meaningful way, HC-LASICH can deliver improved support recovery. Data-adaptive choices of the tuning parameter corresponding to the Laplacian shrinkage penalty may further improve the performance of HC-LASICH.
However, we do not pursue such choices here.

Genetic Networks of Cancer Subtypes
To facilitate the comparison, tuning parameters were selected such that the estimated networks of the three subtypes using each method contained a total of 150 edges. For methods with two tuning parameters, pairs of tuning parameters were determined using the Bayesian information criterion (BIC), as described in Guo et al. (2011). Estimated genetic networks of the three cancer subtypes are shown in Figure 5. For each method, edges common to all three subtypes, edges common to the Luminal subtypes, and subtype-specific edges are distinguished.

In this example, the results from separate graphical lasso estimation and from FGL/GGL represent two extremes. Estimated network topologies from the graphical lasso vary from subtype to subtype, and common structures are obscured; this variability may arise because similarities among subtypes are not incorporated into the estimation. In contrast, FGL and GGL give largely identical networks for the three subtypes, obscuring subtype-specific differences.
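A BIC-type criterion of the kind used above can be sketched as follows (our rendering, in the spirit of Guo et al. (2011); the exact form used in that paper may differ in the degrees-of-freedom term):

```python
import numpy as np

def joint_bic(Omegas, Sigma_hats, n_ks):
    """BIC-type criterion for jointly estimated precision matrices:
    sum_k [ n_k * (tr(S_k @ W_k) - log det W_k) + log(n_k) * #{edges in W_k} ]."""
    bic = 0.0
    for Omega, S, nk in zip(Omegas, Sigma_hats, n_ks):
        _, logdet = np.linalg.slogdet(Omega)
        off_diag = Omega[np.triu_indices_from(Omega, k=1)]
        n_edges = int((np.abs(off_diag) > 1e-10).sum())
        bic += nk * (np.trace(S @ Omega) - logdet) + np.log(nk) * n_edges
    return bic
```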

Discussion
We introduced a flexible method for joint estimation of multiple precision matrices, called LASICH, which is particularly suited for settings where observations belong to three or more subpopulations. In the proposed method, the relationships among heterogeneous subpopulations are captured by a weighted network whose nodes correspond to subpopulations and whose edges capture their similarities. As a result, LASICH can model complex relationships among subpopulations, defined, for example, based on hierarchical clustering of samples.
We established asymptotic properties of the proposed estimator in the setting where the relationship among subpopulations is externally defined. We also extended the method to the setting of unknown relationships among subpopulations, by showing that clusters estimated from the data can accurately capture the true relationships. The proposed method generalizes existing convex penalties for joint estimation of graphical models, and can be particularly advantageous in settings with multiple subpopulations.
A particularly appealing feature of the proposed extension of LASICH is that it can also be applied in settings where the subpopulation memberships are unknown. The latter setting is closely related to estimation of precision matrices for mixture of Gaussian distributions.
Both approaches have limitations: on the one hand, the extension of LASICH to unknown subpopulation memberships requires certain assumptions on differences of population means (Section 4). On the other hand, estimation of precision matrices for mixtures of Gaussians is computationally challenging, and known rates of convergence for parameter estimation in mixture distributions (e.g. in Städler et al. (2010)) are considerably slower.
Throughout this paper we assumed that the number of subpopulations is known. Extensions of this method to estimation of graphical models in populations with an unknown number of subpopulations would be particularly interesting for analysis of genetic networks associated with heterogeneity in cancer samples, and are left for future research.

Appendix: Proofs and Technical Details
We denote the true inverse correlation matrices by $\Theta_0 = (\Theta^{(1)}_0, \ldots, \Theta^{(K)}_0)$. For a vector $x$ and an index set $J$, $x_J = (x_j, j \in J)^T$. For a matrix $A$, $\lambda_k(A)$ is the $k$th smallest eigenvalue and $\bar A$ is the vectorization of $A$. For $J \subset \{(i,j) : i, j = 1, \ldots, p\}$ and $A \in \mathbb{R}^{p \times p}$, $A_J$ is the vector in $\mathbb{R}^{|J|}$ obtained by removing the elements corresponding to $(i,j) \notin J$ from $\bar A$. A zero-filled matrix $A^J \in \mathbb{R}^{p \times p}$ is obtained from $A$ by replacing $a_{ij}$ by $0$ for $(i,j) \notin J$.

Consistency in Matrix Norms
Theorem 1 is a direct consequence of the following result.

(ii) Suppose that Condition 2 holds with $p \le c_7 n^{c_2}$, $c_2, c_3, c_7 > 0$. For $\rho_n = C_1 K \delta_n$ satisfying $2^4 3^2 C_1 \rho_n^2 \gamma^{-2} s (1 + \rho_2 \|L^{1/2}\|_2)^2 \lambda_\Theta^4 \le 1/4$ and $\tau > 2^7 + 2^3 + 2^4 3^2 c_4 \max_{k,i} \{\sigma_{ii}\}^2$, the same conclusion holds.

Our proofs adopt several tools from Negahban et al. (2012a). Note, however, that our penalty does not penalize the diagonal elements, and is hence a seminorm; thus, their results do not apply to our case. We first introduce some notation. To treat multiple precision matrices in a unified way, our parameter space is defined to be the set $\bar{\mathbb{R}}^{(pK) \times (pK)}$ of $(pK) \times (pK)$ symmetric block diagonal matrices, where the $k$th diagonal block is a $p \times p$ matrix corresponding to the precision matrix of subpopulation $k$. We write $A \in \bar{\mathbb{R}}^{(pK) \times (pK)}$ for a $K$-tuple $(A^{(1)}, \ldots, A^{(K)})$ equipped with the inner product $\langle A, B \rangle = \sum_{k=1}^K \langle A^{(k)}, B^{(k)} \rangle_p$, where $\langle \cdot, \cdot \rangle_p$ is the trace inner product on $\mathbb{R}^{p \times p}$. On this parameter space, we study the following map from $\bar{\mathbb{R}}^{(pK) \times (pK)}$ to $\mathbb{R}$:
$$f(\Delta) = -\tilde\ell_n(\Theta_0 + \Delta) + \tilde\ell_n(\Theta_0) + \rho_n \{ r(\Theta_0 + \Delta) - r(\Theta_0) \},$$
where $r : \bar{\mathbb{R}}^{(pK) \times (pK)} \to \mathbb{R}$ is given by $r(\Theta) = \|\Theta\|_1 + \rho_2 \|\Theta\|_L$. This map provides information on the behavior of our criterion function in the neighborhood of $\Theta_0$. A similar map with a different penalty was studied in Rothman et al. (2008). A key observation is that $f(0) = 0$ and $f(\hat\Delta_n) \le 0$, where $\hat\Delta_n = \hat\Theta_{\rho_n} - \Theta_0$.
The following lemma provides a non-asymptotic bound on the Frobenius norm of $\hat\Delta_n$: if $f(\Delta) > 0$ for all $\Delta \in \mathcal{C} \cap \{\Delta \in \bar{\mathbb{R}}^{(pK) \times (pK)} : \|\Delta\|_F = \epsilon\}$, then $\|\hat\Delta_n\|_F \le \epsilon$.
Proof. We first show that $\hat\Delta_n \in \mathcal{C}$. By the convexity of $-\tilde\ell_n(\Theta)$, the first term of $f(\hat\Delta_n)$ is bounded below by its linear approximation. It then follows from Lemma 3(iv), with our choice of $\rho_n$, that the right-hand side of the inequality is further bounded below by $-2^{-1}\rho_n \{ r(\hat\Delta_{n,\bar{\mathcal{M}}}) + r(\hat\Delta_{n,\bar{\mathcal{M}}^\perp}) \}$. Applying Lemma 3(iii), we conclude that $r(\hat\Delta_{n,\bar{\mathcal{M}}^\perp}) \le 3 r(\hat\Delta_{n,\bar{\mathcal{M}}})$. This verifies $\hat\Delta_n \in \mathcal{C}$. Note that $f$, as a function of $\Delta$, is the sum of the two convex functions $-\tilde\ell_n$ and $\rho_n r$, and is hence convex. Thus, the rest of the proof follows exactly as in Negahban et al. (2012a).
(ii) Rothman et al. (2008) (pages 500–502) showed how to bound this term from below. Since $\|A\|_2 \le \|A\|_F$, $n_k/n \ge \gamma$, and $\|\Delta\|_F \le c$, this is further bounded below as required. (iii) Because the graph Laplacian $L$ is a positive semidefinite matrix, the triangle inequality $r(\Theta_1 + \Theta_2) \le r(\Theta_1) + r(\Theta_2)$ holds. To see this, let $L = \tilde L \tilde L^T$ be any Cholesky decomposition of $L$. Then $\|\Theta_{ij}\|_L = \|\tilde L^T \Theta_{ij}\|$, and the triangle inequality for the Euclidean norm yields the claim. It is clear that $r(c\Theta) = |c|\, r(\Theta)$ for any constant $c$. Thus, given that $r$ does not penalize the diagonal elements, it is a seminorm. The decomposability follows from the definition of $r$.
The convexity follows from the same argument as the triangle inequality. Since $\Theta_0 + \Delta = \Theta_0 + \Delta_{\bar{\mathcal{M}}} + \Delta_{\bar{\mathcal{M}}^\perp}$, the triangle inequality and the decomposability of $r$ yield the claim. (iv) We show that, for $A, B \in \bar{\mathbb{R}}^{(pK) \times (pK)}$ with $\mathrm{diag}(B) = 0$, $\langle A, B \rangle \le r(A) \|B\|_\infty$. If $A$ is a diagonal matrix (or if $A = 0$), the inequality trivially holds since $\langle A, B \rangle = 0$. If not, $r(A) \ne 0$, and the bound follows. Since the diagonal elements of $\nabla \tilde\ell_n(\Theta_0)$ are all zero, the result follows.
(v) For $s \ne 0$, the stated bound holds; in the last inequality we used the concavity of the square root function. For $s = 0$, we trivially have $0 = r(\Theta_{\bar{\mathcal{M}}}) \le s^{1/2}\{1 + \rho_2 \|L^{1/2}\|_2\} \|\Theta_{\bar{\mathcal{M}}}\|_F$. Combining the two cases yields the desired result.
Next, we obtain an upper bound for $\max_{1 \le k \le K} \|\hat\Psi^{(k)}_n - \Psi^{(k)}_0\|_\infty$, which holds with high probability under the tail conditions on the random vectors.
Lemma 4. Suppose that $n_k/n \ge \gamma > 0$ for all $k$ and $n$.
(ii) We first evaluate the probability in (14) for the term involving $n_k^{-1} \sum_i X^{(k)}_i$. It follows from Bernstein's inequality that $\nu_{n,2} \to 0$ as $p \to \infty$ for $\tau > 2^7 + 2^3 + 2^4 3^2 c_4 \max_{k,i} \{\sigma_{ii}\}^2$. To see this, note that the power of $p$ in the exponent must be strictly negative; this requirement reduces to a quadratic inequality in $\tau$, which we can solve since the $\tau$ of interest is positive.
Since $b_n \to 0$, we have $b_n \le c_5/2$ for $n$ sufficiently large by Condition 3, and on this event the stated bounds follow. So far we have assumed $n_k/n \ge \gamma$ in the lemmas; we now evaluate the probability of this event, noting that $n_k \sim \mathrm{Binom}(n, \pi_k)$.
Computing appropriate probabilities using Lemmas 4 and 5 completes the proof.
Proof of Theorem 1. The estimation error $\|\hat\Omega^{(k)}_{\rho_n} - \Omega^{(k)}_0\|_2$ in the spectral norm can be bounded and evaluated in the same way as in the proof of Theorem 2 of Rothman et al. (2008), together with Lemma 1.

Model Selection Consistency
Our proof is based on the primal-dual witness approach of Ravikumar et al. (2011), with some modifications to overcome a difficulty in their proof when applying the fixed point theorem to a discontinuous function. First, we define the oracle estimator $\tilde\Theta_{\rho_n} = (\tilde\Theta^{(1)}, \ldots, \tilde\Theta^{(K)})$ as the solution to the restricted problem (21), where the constraint $\Theta^{(k)}_{(\bar S^{(k)})^c} = 0$ indicates that $\theta^{(k)}_{ij} = 0$ for $(i,j) \notin \bar S^{(k)}$.
Lemma 6. (i) Let $A \in \mathbb{R}^{p \times p}$ be a positive semidefinite matrix with eigenvalues $0 \le \lambda_1 \le \lambda_2 \le \cdots \le \lambda_p$ and corresponding eigenvectors $u_i$ satisfying $u_i \perp u_j$, $i \ne j$, and $\|u_i\| = 1$. The subdifferential of $f(x) = (x^T A x)^{1/2}$ can be characterized in terms of $U \Lambda^{1/2}$, where $U \in \mathbb{R}^{p \times p}$ has $u_i$ as its $i$th column and $\Lambda^{1/2}$ is the diagonal matrix with $\lambda_i^{1/2}$, $i = 1, \ldots, p$, as diagonal elements. Furthermore, the subgradients are bounded above in the $\ell_\infty$-norm.
(ii) Let $A \in \mathbb{R}^{p \times p}$ be a positive semidefinite matrix and $S \subset \{1, \ldots, p\}$. Suppose $A_{SS}$ has eigenvalues $0 \le \lambda_{1,S} \le \lambda_{2,S} \le \cdots \le \lambda_{|S|,S}$ and corresponding eigenvectors $u_{i,S}$ satisfying $u_{i,S} \perp u_{j,S}$, $i \ne j$, and $\|u_{i,S}\| = 1$. Let $g_S : \mathbb{R}^{|S|} \to \mathbb{R}^p$ be the map that embeds a vector supported on $S$ into $\mathbb{R}^p$, and let $U_S \in \mathbb{R}^{|S| \times |S|}$ have $u_{i,S}$ as its $i$th column, with $\Lambda_S^{1/2}$ the diagonal matrix with $\lambda_{i,S}^{1/2}$, $i = 1, \ldots, |S|$, as diagonal elements. For $x$ with $A_{SS} x \ne 0$, there is a relationship between $\partial (x^T A_{SS} x)^{1/2}$ and $\partial (y^T A y)^{1/2}$ at $y = g_S(x)$, and the subgradients are bounded above.

Proof. (i) For $x$ with $Ax \ne 0$, $f(x)$ is differentiable and the subgradient of $f$ at $x$ is simply the matrix derivative. By definition, for $x$ with $Ax = 0$, the subgradient $v$ of $f$ at $x$ satisfies $f(y) \ge f(x) + \langle y - x, v \rangle$ for all $y$. Choosing $y = 2x$ and $y = 0$ yields $0 \ge \langle x, v \rangle$ and $0 \ge -\langle x, v \rangle$, implying $\langle x, v \rangle = 0$.
The inequality (22) thus reduces to $(y^T A y)^{1/2} \ge \langle y, v \rangle$ for any $y$. If $Ay = 0$, a similar argument implies that $\langle y, v \rangle = 0$. Hence $v \perp y$ for every $y$ with $Ay = 0$.
Let $j_0$ be the smallest index such that $\lambda_{j_0} > 0$. Because the $u_j$'s form an orthonormal basis, any vector $y$ can be written as $y = \sum_{j=1}^p \beta_j u_j$. Moreover, the null space of $A$ is the span of $u_1, \ldots, u_{j_0 - 1}$. Thus, the subgradient $v$ can be written as $v = \sum_{j=j_0}^p \alpha_j u_j$. Using the spectral decomposition $A = \sum_{j=j_0}^p \lambda_j u_j u_j^T$, we can write $f(y) = \{\sum_{j=j_0}^p \lambda_j \beta_j^2\}^{1/2}$. On the other hand, $\langle y, v \rangle = \sum_{j=j_0}^p \alpha_j \beta_j$. Thus, the inequality (22) further reduces to $\{\sum_{j=j_0}^p \lambda_j \beta_j^2\}^{1/2} \ge \sum_{j=j_0}^p \alpha_j \beta_j$. It follows from the Cauchy–Schwarz inequality that the left-hand side dominates the right-hand side whenever $\sum_{j=j_0}^p \alpha_j^2 / \lambda_j \le 1$; it is easy to see that this set is the image of the map $U \Lambda^{1/2}$ on the closed ball of radius 1.
Given that $\|x\|_\infty \le \|x\|$, to establish the bound in the $\ell_\infty$-norm we compute the bound in the Euclidean norm. We use the same notation as in (i). For $x$ with $Ax \ne 0$, the bound holds for every $y$; because of the form of the subdifferential and the fact that $\|Ux\| = \|x\|$, the result follows.
(ii) Let $B_S$ be a product of elementary matrices for row and column exchange such that $B_S g_S(x) = (x, 0)$. Notice that $B_S = B_S^{-1}$ and that $B_S = B_S^T$, since $B_S$ only rearranges the elements of vectors and exchanges rows by multiplication from the left. Note also that $\|B_S\|_2 \le \|B_S\|_{\infty/\infty} = 1$, since $\|C\|_2 \le \|C\|_{\infty/\infty}$ for $C = C^T$ and each row of $B_S$ has only one element with value 1. The subdifferential of $h_{A,S}(x)$ then follows from (i): for $x$ with $A_{SS} x \ne 0$ and $y = g_S(x)$, the claimed relationship between the subdifferentials holds, and the $\ell_\infty$-bound follows from (i).

Lemma 7. For any $\rho_n > 0$ and sample correlation matrices $\hat\Psi_n = (\hat\Psi^{(1)}_n, \ldots, \hat\Psi^{(K)}_n)$, the solution of the restricted problem (21) is unique, and there exist subgradients $(\hat U^{(1)}_{2,ij}, \ldots, \hat U^{(K)}_{2,ij})^T \in \partial \|\tilde\Theta_{\rho_n,ij}\|_L$ for every $i \ne j$ and $k = 1, \ldots, K$. Moreover, $\hat U^{(k)}_{2,ii} = 0$ for every $i = 1, \ldots, p$ and $k = 1, \ldots, K$.
Proof. The proof of the uniqueness of the solution is similar to the proof of Lemma 3 of Ravikumar et al. (2011). The rest follows from the KKT conditions, using Lemma 6.
The main idea of the proof is to show that $(\tilde\Theta_{\rho_n}, \tilde U)$ satisfies the optimality conditions of the original problem with probability tending to 1. In particular, we show that the following equation, which holds by construction of $\tilde U_1$ and $\tilde U_2$, is in fact the KKT condition of the original problem (3). To this end, we show that $\tilde U_1$ and $\tilde U_2$ are both subgradients for the original problem. We can then conclude that the oracle estimator of the restricted problem (21) is a solution to the original problem (3). It then follows from the uniqueness of the solution that $\hat\Theta_{\rho_n} = \tilde\Theta_{\rho_n}$.
The following lemma is similar to the statement of Lemma 6 of Ravikumar et al. (2011).
Lemma 11. Suppose that the assumptions above hold.

Proof. We apply Schauder's fixed point theorem on the event $\min_k \pi_k / 2 \le n_k/n$, which holds with probability $1 - 2K \exp(-n \min_k \pi_k^2 / 2)$ by Lemma 5 with $\epsilon = \min_k \pi_k / 2$. We first define the function $f_k$ and its domain $D_k$ to which the fixed point theorem applies. Let $\tilde S^{(k)} = \bar S^{(k)} \cup \{(i,i) : 1 \le i \le p\}$, and define $D_k$ accordingly; this set is a convex, compact subset of the set of all symmetric matrices.

Let $\check U^{(k)}_l \in \mathbb{R}^{p \times p}$, $l = 1, 2$, be zero-filled matrices whose $(i,j)$-element is $\check U^{(k)}_{l,ij}$ from Lemma 7 if $(i,j) \in \tilde S^{(k)}$ and zero otherwise. Define the map $g_k$ on the set of invertible matrices so that $g_k(\Theta^{(k)}) = 0$ is the KKT condition for the restricted problem (21). Let $\delta > 0$ be a constant such that $\delta < \min\{1/2, 1/\{10(4dr + 1)\}\} r$ and $\delta + r \le 1/\{6d \max\{\kappa_\Psi, \kappa_\Psi^3 \kappa_\Gamma\}\}$. Define a continuous function $f_k : D_k \to D_k$ accordingly. We now verify the conditions of Schauder's fixed point theorem; once the conditions are established, the theorem yields a fixed point of $f_k$ in $D_k$. In the following, we write $\bar A = \mathrm{vec}(A)$ for a matrix $A$, for notational convenience. The function $f_k$ is continuous on $D_k$: note first that $A + \Theta^{(k)}_0 + \delta I$ is positive definite for every $A \in D_k$, so that the matrix inversion is continuous. Note also that all elements of the matrices involved in the eigenvalue computations in $h_k(A)$ are uniformly bounded on $D_k$, and hence the eigenvalues are also uniformly bounded.
To show that $f_k(A) \in D_k$, we first show that $f_k(A) + \Theta^{(k)}_0$ is positive semidefinite, by considering the associated quadratic form for any $x \in \mathbb{R}^p$: if the relevant eigenvalue $\lambda_A$ is positive, the inequality easily follows; on the other hand, if $\lambda_A < -1$, the required bound follows from Lemma 9. Thus, adding and subtracting $\Psi^{(k)}_0$, vectorizing, and restricting to $\tilde S^{(k)}$ give the decomposition in (29), where we used $h_k(A) \le (1/4 + 1/2)/1 = 3/4$. For the first term of the upper bound in (29), it follows from the inequality $\|Ax\|_\infty \le \|A\|_{\infty/\infty} \|x\|_\infty$ for $A \in \mathbb{R}^{p \times p}$ and $x \in \mathbb{R}^p$, Lemma 9, and the choice of $\delta$ satisfying $\delta + r \le 1/\{6d \max\{\kappa_\Psi, \kappa_\Psi^3 \kappa_\Gamma\}\}$ that this term is suitably bounded. For the second term, it follows from the assumption, the same inequality, and Lemma 6 that it is bounded by $(\min_k \pi_k) r / 4 \le (n_k/n) r / 2$.
Since $(f_k(A))_{(\tilde S^{(k)})^c} = 0$ by definition, all the conditions of the fixed point theorem are established. This completes the proof.
Proof of Theorem 2. We prove that the oracle estimator $\tilde\Theta_{\rho_n}$ satisfies (I) the model selection consistency and (II) the KKT conditions of the original problem (3) with $(\tilde\Theta_{\rho_n}, \tilde U_1, \tilde U_2)$. The model selection consistency of $\hat\Theta_{\rho_n} = \tilde\Theta_{\rho_n}$ then follows from the uniqueness of the solution to the original problem. The following discussion is on the event that $\min_k \pi_k / 2 \le n_k/n$, $k = 1, \ldots, K$, and $\max_k \|\Xi^{(k)}\|_\infty \le \alpha/8$. Note that this event has probability approaching 1 by Lemmas 4 and 5.
Next, we show that the oracle estimator satisfies the KKT condition of the original problem (3). As the first step, we prove that $\tilde U^{(k)}_{1,ij} \in \partial |\tilde\theta^{(k)}_{\rho_n,ij}|$ for every $i, j, k$, with probability approaching 1.
For the $(i,j)$-elements with $\omega^{(k)}_{0,ij} \ne 0$ for every $k = 1, \ldots, K$, the equation holds for every $k = 1, \ldots, K$, because it is the KKT condition for the corresponding element in the restricted problem (21). For the $(i,j)$-elements with $\Omega_{0,ij} \ne 0$ and $\omega^{(k')}_{0,ij} = 0$ for some $k'$, note that $\tilde\Theta_{\rho_n,ij} \ne 0$ with probability approaching 1, and that a rearrangement of $\Theta_{ij}$ and the corresponding exchange of rows and columns of $L$ for each $i, j$ do not change the original and restricted optimization problems (3) and (21). Thus, with the appropriate rearrangement of elements and exchange of rows and columns, $\tilde U^{(k)}_{1,ij}$ is a valid subgradient.

Proof of Corollary 1. In the proof of Theorem 2, the $\ell_\infty$-bound of the error yields the stated rates. Note that if one of two matrices $A$ and $B$ is diagonal, then $\|AB\|_\infty \le \|A\|_\infty \|B\|_\infty$. Thus, we can proceed in the same way as in the proof of Theorem 2 of Rothman et al. (2008) to bound $\|\hat\Omega^{(k)}_n - \Omega^{(k)}_0\|_\infty$. The result then follows from an argument similar to the proof of Corollary 3 in Ravikumar et al. (2011).
The following lemma is an extension of Lemma 2 in Borysov et al. (2014).
The following is an extension of Lemma 3 in Borysov et al. (2014).