1 Introduction

Genetic association studies have emerged as an important branch of statistical genetics (Manolio et al. 2009; Vattikuti et al. 2014). The goal of this field is to find causal associations between high-dimensional vectors of genotypes, such as single nucleotide polymorphisms (SNPs), and observable outcomes (phenotypes, or traits). For various phenotypes, such as heritable diseases, it is assumed that these associations manifest themselves in only a small number of genes. This leads to the challenging problem of identifying a few relevant positions along the genome among tens of thousands of irrelevant genes. For various complex diseases, such as bipolar disorder or type 2 diabetes (Craddock et al. 2010), these sparse associations are largely unknown (Manolio et al. 2009), which is why the missing associations have been termed the Dark Matter of Genomic Associations (NHGR Institute 2009).

Genetic associations can be unreliable and unreproducible when the data are subject to spurious correlations due to confounding (Imbens and Rubin 2015; Pearl 2009; Morgan and Winship 2014). Confounding can stem from varying experimental conditions and demographics such as age, ethnicity, or gender (Li et al. 2011). Perhaps the most important types of confounding in statistical genetics arise from population structure (Astle and Balding 2009), as well as from similarities between closely related samples (Li et al. 2011; Lippert et al. 2011; Fusi et al. 2012). Ignoring such confounders often leads to spurious false-positive findings that cannot be replicated on independent data (Kraft et al. 2009). Correcting for such confounding dependencies is considered one of the greatest challenges in statistical genetics (Vilhjálmsson and Nordborg 2013).

Our approach is inspired by linear mixed models (LMMs) for genome-wide association studies (Lippert et al. 2011), which model the effects of confounding in terms of correlated noise on the traits. A related tool for feature selection is the LMM-Lasso (Rakitsch et al. 2013). In this paper, we extend the idea of LMMs to binary labels. The LMM and its Lasso version are restricted to the linear regression case where the output variable is continuous, but in many important applications the phenotype is binary, such as the presence or absence of a heritable disease. To this end, we threshold the output through a Probit likelihood (Bliss 1934). This makes parameter learning challenging since the model becomes a Bayesian latent variable model with an intractable likelihood. Drawing on the tools of approximate Bayesian inference, we propose two scalable inference algorithms that allow us to fit this model to high-dimensional data.

In an experimental study on genetic data, we show that our approach beats several baselines. Compared to sparse probit regression, our features are less correlated with the first principal component of the noise covariance that represents the confounder. Furthermore, compared to the LMM-Lasso (Rakitsch et al. 2013), sparse probit regression, and Gaussian Process (GP) classification (Rasmussen and Williams 2006), our approach yields up to 5 percentage points higher prediction accuracies. We show that our approach generalizes beyond statistical genetics in a computer malware experiment.

This paper is organized as follows. In Sect. 2 we introduce our model and discuss related work. Section 3 then contains the mathematical details of the inference procedure. In Sect. 4 we apply our method to extract features associated with diseases and traits from confounded genetic data. We also test our method on a data set that contains a mix of different types of malicious computer software. Finally, in Sect. 5 we draw our conclusions.

2 Sparse probit linear mixed model

We first review the problem of confounding by population structure in statistical genetics in Sect. 2.1. In Sect. 2.2, we review LMMs and introduce a corresponding Probit model. We discuss the choice of the noise kernel in Sect. 2.3 and related approaches in Sect. 2.4.

2.1 Confounding and similarity kernels

The problem of confounding is fundamental in statistics. A confounder is a common cause both of the genotypes and the traits. When it is unobserved, it induces spurious correlations that have no causal interpretation: we say that the genotypes and traits are confounded (Imbens and Rubin 2015; Pearl 2009; Morgan and Winship 2014).

In statistical genetics, a major source of confounding originates from population structure (Astle and Balding 2009). Population structure implies that due to common ancestry, individuals that are related co-inherit a large number of genes, making them more similar to each other, whereas individuals of unrelated ancestry obtain their genes independently, making them more dissimilar. For this reason, collecting genetic data has to be done carefully. For example, when data are collected only in selected geographical areas (such as in specific hospitals), one introduces a selection bias into the sample which can induce spurious associations between phenotypes and common genes in the population. It is an active area of research to find models that are less prone to confounding (Vilhjálmsson and Nordborg 2013). In this paper, we present such a model for the setup of binary classification.

A popular approach to correcting for confounding relies on similarity kernels, also called kinship matrices (Astle and Balding 2009). Given n samples, we can construct an \(n\times n\) matrix K that quantifies the similarity between samples based on some arbitrary measure. In the case of confounding by population structure, one typically chooses \(K_{ij} = X^\top _i X_j\), where \(X_i \in {\mathbb {R}}^{d}\) is a vector of genetic features of individual i. As \(K \in {\mathbb {R}}^{n\times n}\) contains the scalar products between the genetic vectors of all individuals, it is a sensible measure of genetic similarity. As another example, when correcting for confounding by age, then we can choose K to be a matrix that contains 1 if two individuals have the same age, and zero otherwise. Details of constructing similarity kernels and other examples can be found in (Astle and Balding 2009). Next, we explain how the similarity matrix can be used to correct for confounding.
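
To make this concrete, the following sketch (in Python, with illustrative sizes and values) assembles a linear kinship kernel and an age-indicator kernel of the kind described above; the function names are ours and not part of any particular genetics toolbox.

```python
import numpy as np

def linear_kinship(X):
    """Linear (realized relationship) kernel K_ij = X_i^T X_j.

    X is an (n x d) array whose rows hold the genetic features
    (e.g. standardized SNP counts) of the n individuals.
    """
    return X @ X.T

def age_kernel(age):
    """Kernel that is one if two individuals share the same age, zero otherwise."""
    age = np.asarray(age)
    return (age[:, None] == age[None, :]).astype(float)

# toy usage with random genotypes and ages (illustrative values only)
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))          # n = 5 individuals, d = 20 features
K_pop = linear_kinship(X)                 # population-structure kernel
K_age = age_kernel([30, 30, 45, 52, 45])  # age-similarity kernel
```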

2.2 Generalizing linear mixed models

We first review the LMM (Henderson 1950), which has been widely applied in the field of statistical genetics (Fisher 1919; Yu et al. 2006; Lippert et al. 2011; Rakitsch et al. 2013). LMMs are linear regression models that capture dependencies between the data points in terms of correlated noise. They are a special case of generalized multivariate regression models of the following type,

$$\begin{aligned} y_i = f \left( X_i^\top w + \epsilon _i \,\right) , \qquad \epsilon =(\epsilon _1,\ldots ,\epsilon _n)^\top \sim \mathcal{N}(0,\varSigma ), \end{aligned}$$
(1)

where f is an inverse link function. For LMMs, f is the identity. The outputs \(y_i\) may be continuous or discrete, and \(X_i \in {\mathbb {R}}^{d}\) is the vector of input features of sample i. The variables \(\epsilon _i\) are noise variables. Crucially, they are correlated and have a covariance \(\varSigma \),

$$\begin{aligned} \varSigma = \lambda _1\mathbf{I} + \lambda _2 K. \end{aligned}$$
(2)

The noise kernel K is a modeling choice and will be discussed in Sect. 2.3. The noise contribution proportional to the identity matrix \(\mathbf{I}\) is necessary to regularize the problem in case K has small eigenvalues. The parameters \(\lambda = (\lambda _1,\lambda _2)\) may be found by restricted maximum likelihood (Patterson and Thompson 1971) or, as done in this work, by cross-validation. Depending on the application, we may use multiple similarity kernels.

The crucial idea behind the model in Eq. 1 is that parts of the observed labels can be explained away by the correlated noise; thus not all observed phenotypes are linear effects of X. By construction, the noise covariance \(\varSigma \) contains information about similarities between the samples and may be systematically used to model spurious correlations due to relatedness between samples. The computational goal is to distinguish between these two effects.

LMMs allow for efficient inference by preprocessing the data matrix by means of a rotation, but this trick does not generalize beyond regression. We therefore need new inference algorithms when generalizing this modeling paradigm to non-linear link functions. In this paper, we tackle inference for the important case of binary classification (Bliss 1934; Fahrmeir et al. 2013). In the following, we assume \(f \equiv \mathrm{sign}\), the sign (or Probit) function, so that the labels are binary, \(y_i\in \{+1,-1\}\). As before, we break the independence of the label noises. This leads to the following model:

$$\begin{aligned} y_i = \mathrm{sign} \left( X_i^\top w + \epsilon _i \,\right) , \qquad \epsilon =(\epsilon _1,\ldots ,\epsilon _n)^\top \sim \mathcal{N}(0,\varSigma ). \end{aligned}$$
(3)

In the special case of \(\varSigma = \mathbf{I}\), this is just the Probit model for classification. When the noise covariance is not simply the identity but displays some non-trivial correlations, we call this modified linear mixed model the Probit Linear Mixed Model, or Probit-LMM for short.

Our next goal is to derive a likelihood function for our model. For notational simplicity and without loss of generality, we will assume that all observed binary labels \(y_i\) are 1. This assumption is no constraint because we can always perform a linear transformation that absorbs the signs of the labels into the data matrix and noise covariance (this transformation is shown in “Appendix A”). When working with the transformed data matrix and noise covariance, our assumption is satisfied.
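
For concreteness, one way to write this transformation (the precise construction is given in “Appendix A”) is the following. Observing that

$$\begin{aligned} y_i^\mathrm{obs} = \mathrm{sign} \left( X_i^\top w + \epsilon _i \right) \quad \Longleftrightarrow \quad y_i^\mathrm{obs} \left( X_i^\top w + \epsilon _i \right) > 0, \end{aligned}$$

we may set \(D=\mathrm{diag}(y^\mathrm{obs})\), \(\tilde{X} = XD\) (so that \(\tilde{X}_i = y_i^\mathrm{obs} X_i\)), and \(\tilde{\epsilon } = D\epsilon \sim \mathcal{N}(0, D\varSigma D)\). The condition then reads \(\tilde{X}_i^\top w + \tilde{\epsilon }_i > 0\) for all i, i.e., all labels of the transformed problem equal 1.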

Under our assumption, the likelihood function is the probability that all transformed labels are 1. This is satisfied when \(X_i^\top w + \epsilon _i >0\). When integrating over all realizations of noise, the resulting (marginal) likelihood is

$$\begin{aligned} {\mathbb {P}}(\forall i:y_i=1|w) \; = \; {\mathbb {P}}(\forall i:X_i^\top w + \epsilon _i > 0|w )\; = \; \int _{{\mathbb {R}}_+^n} \mathcal{N}(\epsilon ; X^\top w,\varSigma ) \, d^n \epsilon . \end{aligned}$$
(4)

The marginal likelihood is hence an integral of the multivariate Gaussian over the positive orthant. In Sect. 3, we will present efficient approximations of this integral. Before we get there, we further characterize the model.
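
As a quick numerical illustration of Eq. 4 (not part of the training procedure), the orthant probability can be evaluated for a small toy problem either by Monte Carlo or via the multivariate normal CDF, using the identity \({\mathbb {P}}(\epsilon > 0) = {\mathbb {P}}(-\epsilon < 0)\); all sizes and parameter values below are placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, d = 4, 3                        # tiny toy problem (illustrative sizes)
X = rng.standard_normal((d, n))    # columns X_i are the samples, as in the text
w = rng.standard_normal(d)
Sigma = 0.5 * np.eye(n) + 0.5 * (X.T @ X)   # Eq. 2 with a linear kernel (placeholder lambdas)

mu = X.T @ w                       # mean of the Gaussian in Eq. 4

# Monte Carlo estimate of P(eps > 0 component-wise), eps ~ N(mu, Sigma)
eps = rng.multivariate_normal(mu, Sigma, size=200_000)
p_mc = np.mean(np.all(eps > 0, axis=1))

# Same quantity via the CDF: P(eps > 0) = P(-eps < 0), with -eps ~ N(-mu, Sigma)
p_cdf = multivariate_normal(mean=-mu, cov=Sigma).cdf(np.zeros(n))

print(p_mc, p_cdf)                 # the two estimates agree up to Monte Carlo error
```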

We turn the Probit-LMM into a model for feature selection where we are interested in a point estimate of the weight vector w that is sparse, i.e. most elements are zero. This is well motivated in statistical genetics, because generally only a small number of genes are believed to be causally associated with a phenotype such as a disease. Sparsity is achieved using the Lasso (Tibshirani 1996), where we add an \(\ell _1\)-norm regularizer to the negative marginal likelihood:

$$\begin{aligned} \mathcal{L}(w)= & {} -\log \int _{{\mathbb {R}}_+^n} \mathcal{N}(\epsilon ; X^\top w,\varSigma )\,d^n\epsilon \, + \, \lambda _0 ||w||^1_1. \end{aligned}$$
(5)

The fact that the noise variable \(\epsilon \) and the weight vector w have different priors or regularizations makes the model identifiable and lets us distinguish between linear effects and effects of correlated noise. In “Appendix B” we prove that the objective function in Eq. 5 is convex. This concludes the model; inference will be discussed in Sect. 3. Next, we discuss an approximation of this model and related methods.

2.3 Linear kernel and MAP approximation

We now specify the noise covariance and explore an equivalent formulation of the model. We consider the simplest and most widely used covariance matrix \(\varSigma \), which is a combination of diagonal noise and a linear kernel of the data matrix,

$$\begin{aligned} \varSigma= & {} \lambda _1 \mathbf{I} + \lambda _2 X^\top X. \end{aligned}$$
(6)

The linear kernel \(X^\top X\) measures similarities between individuals. Since the scalar product measures the overlap between all genetic features, it models the dense effect of genetic similarity between samples due to population structure. To further motivate this kernel, we use a Gaussian integral identity:

$$\begin{aligned} \mathcal{L}(w)= & {} -\log \int _{{\mathbb {R}}_+^n} \mathcal{N}(\epsilon ; X^\top w, \lambda _1 \mathbf{I} + \lambda _2 X^\top X)\,d^n\epsilon \; + \; \lambda _0 ||w||^1_1\nonumber \\= & {} - \log \int _{{\mathbb {R}}^d} dw^{\prime } \; \mathcal{N}(w^{\prime };0,\lambda _2 \mathbf{I}) \int _{{\mathbb {R}}_+^n} d^n \epsilon \; \mathcal{N}(\epsilon ; X^\top (w+w^{\prime }), \lambda _1\mathbf{I}) \; + \; \lambda _0 ||w||^1_1. \nonumber \\= & {} - \log \int _{{\mathbb {R}}^d} dw^{\prime } \; \mathcal{N}(w^{\prime };0,\lambda _2 \mathbf{I}) \prod _{i=1}^n \varPhi \left( \frac{X_i^\top (w+w^{\prime })}{\sqrt{\lambda _1}}\right) \; + \; \lambda _0 ||w||^1_1. \nonumber \\=: & {} \mathcal{L}_0(w) \; + \; \lambda _0 ||w||^1_1. \end{aligned}$$
(7)

Above, \(\varPhi \) is the Gaussian cumulative distribution function. We have introduced the new Gaussian noise variable \(w^{\prime }\). Conditioned on \(w^{\prime }\), the remaining integrals factorize over n. However, since \(w^{\prime }\) is unobserved (hence marginalized out), it correlates the samples. As such, we interpret \(w^{\prime }\) as a confounding variable which models the effect of the overall population on the phenotype of interest.

The simplest approximation to the log-likelihood in Eq. 7 is to substitute the integral over \(w^{\prime }\) by its maximum a posteriori (MAP) value:

$$\begin{aligned} \mathcal{L}(w,w^{\prime })= & {} - \sum _{i=1}^n \log \varPhi \left( \frac{X_i^\top (w+w^{\prime })}{\sqrt{\lambda _1}}\right) + \frac{1}{2 \lambda _2} ||w^{\prime }||_2^2 + \lambda _0 ||w||_1^1. \end{aligned}$$
(8)

Under the MAP approximation, the likelihood contribution to the objective function becomes completely symmetric in w and \(w^{\prime }\): only the sum \(w+w^{\prime }\) enters. The difference between the two weight vectors w and \(w^{\prime }\) in this approximation is only due to the different regularizers: while \(w^{\prime }\) has an \(\ell _2\)-norm regularizer and is therefore dense, w is \(\ell _1\)-norm regularized and therefore sparse. Every feature gets a small non-zero weight from \(w^{\prime }\), and only selected features get a stronger weight from w. The idea is that \(w^{\prime }\) models the population structure, which affects all genes. In contrast, we are interested in learning the sparse weight vector w, which has a causal interpretation because it involves only a small number of features.

The MAP approximation objective in Eq. 8 is convex (proof in “Appendix B”) and computationally more convenient, but it is prone to overfitting. Under the MAP approximation we additionally optimize over \(w^{\prime }\), so that we can exploit the factorized form of the objective (Eq. 7) over n for efficient computation. In contrast, in the original Probit-LMM in Eq. 3, \(w^{\prime }\) is marginalized out. This is more expensive, but may generalize better to unseen data. (The corresponding inference algorithm is the subject of Sect. 3.) We compare both approaches in Sect. 4.
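
For reference, a minimal sketch of evaluating the MAP objective of Eq. 8 (assuming, as in Sect. 2.2, that the label signs have already been absorbed into X); the data and the \(\lambda \) values in the usage example are placeholders.

```python
import numpy as np
from scipy.stats import norm

def map_objective(w, w_prime, X, lam0, lam1, lam2):
    """MAP-approximated objective of Eq. 8.

    X has shape (d, n): column X_i holds the features of sample i, with the
    label signs already absorbed as described in Sect. 2.2.
    """
    scores = X.T @ (w + w_prime) / np.sqrt(lam1)   # shape (n,)
    log_lik = np.sum(norm.logcdf(scores))          # sum_i log Phi(.)
    return (-log_lik
            + 0.5 / lam2 * np.sum(w_prime ** 2)    # l2 penalty on w'
            + lam0 * np.sum(np.abs(w)))            # l1 penalty on w

# illustrative call with random data and placeholder regularization strengths
rng = np.random.default_rng(1)
d, n = 30, 50
X = rng.standard_normal((d, n))
val = map_objective(np.zeros(d), np.zeros(d), X, lam0=1.0, lam1=1.0, lam2=1.0)
```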

2.4 Related methods and prior work

There is a large body of literature on linear mixed models for genome-wide association studies; for a review see Price et al. (2010), Astle and Balding (2009) and Lippert (2013). Our approach most closely relates to the LMM-Lasso (Rakitsch et al. 2013). Compared to feature selection in a simple linear regression model, the LMM-Lasso improves the selection of true non-zero effects as well as prediction quality (Rakitsch et al. 2013). Our model is a natural extension of this model to binary outcomes, such as the disease status of a patient. While one could also use the LMM-Lasso to model such binary labels, we show in our experimental section that this leads to lower predictive accuracies. As we explain in this paper, inference in our model is, however, more challenging than in Rakitsch et al. (2013).

Our model furthermore captures two limiting cases: sparse probit regression and GP classification (Rasmussen and Williams 2006). To obtain sparse probit regression, we set the parameters \(\lambda _i = 0\) for \(i \ge 2\), thereby eliminating the non-diagonal covariance structure. To obtain GP classification, we omit the fixed effect (i.e., we set \(w=0\)), so that our model likelihood becomes \({\mathbb {P}}(Y=Y^\mathrm{obs}|w) = \int _{{\mathbb {R}}_+^n} \mathcal{N}(\epsilon ; 0,\varSigma ) \, d^n \epsilon \), where the noise variable \(\epsilon \) plays the role of the latent function f in GPs (Rasmussen and Williams 2006). When properly trained, our model is thus expected to outperform both approaches in terms of accuracy. We compare our method to all three related methods in the experimental part of the paper and show enhanced accuracy.

A common generalized linear model for classification is the logistic regression model (Cox 1958). Accounting for correlations in the data is not straightforward in this setting (Ragab 1991); one has to resort to approximate inference techniques, including the Laplace and mean-field approximations that have been proposed in the context of GP classification (Rasmussen and Williams 2006), or the pseudo-likelihood method, which has been proposed in the context of generalized LMMs (Breslow and Clayton 1993). To our knowledge, feature selection has not been studied in a correlated logistic setup. On the other hand, without correlations, there is a large body of work on feature selection in Lasso regression (Tibshirani 1996). Alternative sparse priors to the Lasso have been suggested in Mohamed et al. (2011) for unsupervised learning (again, without compensating for confounders). The joint problem of sparse estimation in a correlated noise setup has been restricted to the linear regression case (Seeger and Nickisch 2011; Vattikuti et al. 2014; Rakitsch et al. 2013), whereas we are interested in classification. For classification, we remark that the ccSVM (Li et al. 2011) deals with confounding in a different way and does not yield a sparse solution. Finally, our algorithm builds on EP for GP classification (Rasmussen and Williams 2006; Cunningham et al. 2011), but note that GP classification does not yield sparse estimates and therefore does not allow us to select predictive features.

Several alternatives to the LMM have recently been proposed and are briefly discussed here. Song et al. (2015) developed a new statistical association test between traits and genetic markers. The approach reverses the placement of trait and genotype in the model and thus regresses the genotypes conditioned on the trait and an adjustment based on a fitted population structure model. Klasen et al. (2016) propose a new hierarchical testing procedure, where one searches for highly correlated clusters of genotypes and tests them for significant associations with the response variable. The significant clusters in the lowest hierarchy (or individual genotypes) are then considered as the causal genotypes of interest. Finally, in the context of GWAS, spike-and-slab priors (Carbonetto and Stephens 2012) have been proposed as alternatives to \(\ell _1\) regularizers for variable selection. In contrast to our model, where the feature weights are modeled as the sum of a dense vector \(w^{\prime }\) and a sparse vector w contributing a small number of large effects (see Eq. (7)), spike-and-slab models draw each weight from exactly one of several different effect priors. While this is scalable, the approach typically results in a non-convex optimization problem. Our approach has a convex optimization objective and is robust under bootstrapping, as we show in our experiments.

3 Training procedure

In this section, we lay out two efficient inference algorithms to train our model. Both algorithms rely on approximations of the truncated Gaussian integral, which is intractable to compute in closed-form. While the first algorithm relies on a point estimate for the auxiliary variable \(w^{\prime }\) of Eq. 7, the second algorithm uses techniques from approximate Bayesian inference to estimate the truncated Gaussian integral. While the MAP approximation algorithm is faster and easier to use in practice, the Bayesian algorithm is more precise as we show in our experimental section.

3.1 Prelude: ADMM algorithm

In both objective functions given in Eqs. 7 and 8, we encounter the problem of minimizing a convex function in the presence of an additional \(\ell _1\) regularizer:

$$\begin{aligned} \mathcal{L}(w) = \tilde{\mathcal{L}}(w) + \lambda ||w||_1^1. \end{aligned}$$
(9)

(In Eq. 8, the objective also depends on the additional variable \(w^{\prime }\), in which it is smooth and which we therefore suppress here). The \(\ell _1\)-norm in the objective function is not differentiable and thus prevents us from applying standard gradient-based methods such as Newton’s method. This is a well-known problem, and several alternative solutions have been developed; one of these is the alternating direction method of multipliers (ADMM) (Boyd et al. 2011). In ADMM we augment the objective with the additional parameters z and \(\eta \),

$$\begin{aligned} \begin{aligned} \mathcal{L}(w,z,\eta ) := \tilde{\mathcal{L}}(w) + \lambda ||z||_1^1 + \eta ^\top (w-z) + \textstyle {\frac{1}{2}} c ||w-z||^2_2. \end{aligned} \end{aligned}$$
(10)

This objective can be viewed as the Lagrangian associated with the problem

$$\begin{aligned}&\min _{w,z} \, \tilde{\mathcal{L}}(w) + \lambda ||z||_1^1 + \textstyle {\frac{1}{2}} c ||w-z||^2_2\\&\text {s. th. }\, z = w, \end{aligned}$$

which is equivalent to the original problem, Eq. 9. Since strong duality holds, we can solve the primal problem in Eq. 9 by solving the dual problem, Eq. 10. This is done by an iterative scheme in which we alternate between minimization updates for w and z and a gradient step in \(\eta \). Note that the term \(\textstyle {\frac{1}{2}} c ||w-z||^2_2\) is optional but grants better numerical stability and faster convergence. Details on the ADMM algorithm can be found in Boyd et al. (2011). Other optimization methods that can handle non-smooth objectives such as ours, in particular subgradient methods, are also possible. The benefit of the ADMM approach, though, is that it allows us to use second-order information because the objective is now smooth in w. This will be used in both of the following algorithms.
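
The following sketch shows the generic ADMM iterations for Eq. 10 in Python. The smooth part \(\tilde{\mathcal{L}}\) is passed in through its gradient, and plain gradient steps stand in for the Newton updates used later; step sizes, iteration counts, and the quadratic placeholder loss in the usage example are illustrative choices, not the paper's settings.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1 (the closed-form z-update)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_l1(grad_smooth, d, lam, c=1.0, lr=1e-2, n_iter=200, n_inner=20):
    """Minimize  L_tilde(w) + lam * ||w||_1  with the ADMM splitting of Eq. 10.

    grad_smooth(w) must return the gradient of the smooth part L_tilde.
    """
    w = np.zeros(d)
    z = np.zeros(d)
    eta = np.zeros(d)
    for _ in range(n_iter):
        # w-update: a few gradient steps on the augmented Lagrangian in w
        for _ in range(n_inner):
            w -= lr * (grad_smooth(w) + eta + c * (w - z))
        # z-update: soft-thresholding
        z = soft_threshold(w + eta / c, lam / c)
        # dual (gradient) step in eta
        eta += c * (w - z)
    return z   # sparse solution

# usage sketch on a smooth quadratic placeholder loss 0.5*||w - b||^2
b = np.linspace(-1.0, 1.0, 10)
w_hat = admm_l1(lambda w: w - b, d=10, lam=0.3)
```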

3.2 Maximum a posteriori approach

The simplest approximation to tackle the intractable integral relies on simply optimizing the MAP approximated objective function of Eq. 8. To this end, we minimize the objective function jointly in \((w,w^{\prime })\), where we alternate between updates in w and \(w^{\prime }\). Cast in the form suitable for the ADMM algorithm, the objective function becomes

$$\begin{aligned} \mathcal{L}(w,w^{\prime },z,\eta ) = - \sum _{i=1}^n \log \varPhi \left( \textstyle {\frac{X_i^\top (w+w^{\prime })}{\sqrt{\lambda _1}}}\right) + \frac{1}{2 \lambda _2} ||w^{\prime }||_2^2 + \lambda _0 ||z||_1^1 + \eta ^\top (z-w). \end{aligned}$$
(11)

It is straightforward to calculate the gradient in w and \(w^{\prime }\). We do an alternating gradient descent in these variables and carry out the additional ADMM updates in z and \(\eta \).

3.3 Approximate expectation-maximization

Another solution is to approximate the truncated Gaussian distribution by a simpler distribution that allows us to solve the integral approximately. With this approach, we found consistent improvements in predictive accuracy in all of our experiments. On the downside, the proposed algorithm is slightly slower in practice.

We interpret the correlated noise \(\epsilon \) as a latent variable, and the sparse weights w as global parameters. Latent variable models of this type are most conveniently solved using expectation-maximization (EM) algorithms (Dempster et al. 1977) that alternate between a gradient step in the global parameters (M-step) and a Bayesian inference step (E-step) to infer the distribution over latent variables. In our case, the E-step relies on approximate inference, which is why our approach can be called an approximate EM algorithm.

In more detail, to follow the gradients and optimize the objective, we employ ADMM in the M-step. Below, we derive analytic expressions for the Hessian and the gradient of the marginal likelihood in terms of moments of the posterior distribution over the latent noise. The inner loop (the E-step) then consists of approximating these moments by means of approximate Bayesian inference, which we describe next. Prediction in our model is addressed in “Appendix C”.

Each Newton step of the EM algorithm requires the gradient and Hessian of \(\mathcal{L}(w,z,\eta )\). These are not available in closed form, but can be expressed in terms of the first and second moments of a truncated Gaussian density. Computing the derivatives of the linear and quadratic terms is straightforward. We therefore focus on \(\mathcal{L}_0(w)\equiv -\log \int _{{\mathbb {R}}_+^n}\mathcal{N}(\epsilon ; X^\top w,\varSigma )d^n\epsilon \), which contains the intractable integral. In the following, we use the shorthand notation

$$\begin{aligned} \mu \; \equiv \; \mu (w) \; =\; X^\top w. \end{aligned}$$
(12)

It is convenient to introduce the following probability distribution:

$$\begin{aligned} p(\epsilon | \mu ,\varSigma )= & {} \frac{{\mathbbm {1}}[\epsilon \in {\mathbb {R}}_+^n]\,\mathcal{N}(\epsilon ; \mu ,\varSigma ) }{ \int _{{\mathbb {R}}_+^n} \mathcal{N}(\gamma ; \mu ,\varSigma ) \, d^n \gamma }. \end{aligned}$$
(13)

Above, \({\mathbbm {1}}[\cdot ]\) is the indicator function. This is just the multivariate Gaussian, truncated and normalized to the positive orthant. It can be considered as the Bayesian posterior of the latent multivariate noise distribution. We furthermore introduce

$$\begin{aligned} \mu _p(w)= & {} {\mathbb E}_{p(\epsilon |\mu (w),\varSigma )} \left[ \epsilon \right] , \nonumber \\ \varSigma _p(w)= & {} {\mathbb E}_{p(\epsilon |\mu (w),\varSigma )} \left[ (\epsilon - \mu _p(w))(\epsilon - \mu _p(w))^\top \right] . \end{aligned}$$
(14)

These are just the mean and covariance of the truncated multivariate Gaussian, as opposed to \(\mu ,\varSigma \), which are the mean and covariance of the non-truncated Gaussian. In general, these expectations do not have a closed-form solution. However, we develop suitable approximations for them in the following.

We abbreviate \(\mu _p \equiv \mu _p(w)\) and \(\varSigma _p\equiv \varSigma _p(w)\), and write \(\varDelta \mu = \mu _p - \mu \) for the difference between the means of the posterior (the truncated Gaussian) and the un-truncated Gaussian. The gradient and Hessian of \(\mathcal{L}_0(w)\) are given by

$$\begin{aligned} \begin{aligned} \nabla _{w}\mathcal{L}_0(w)&= \varDelta \mu \varSigma ^{-1} X^\top , \\ H_0(w)&= -X [\varSigma ^{-1}(\varSigma _p - \varDelta \mu \varDelta \mu ^{\top })\varSigma ^{-1} - \varSigma ^{-1}]X^\top . \end{aligned} \end{aligned}$$
(15)

Proofs are given in “Appendix D”. Note that the variable w enters through \(\varSigma _p(w)\) and \(\varDelta \mu (w)\).
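
For reference, the expressions in Eq. 15 can be transcribed directly into code, taking the (approximate) truncated-Gaussian moments \(\mu _p\) and \(\varSigma _p\) as inputs; the sign and ordering conventions below simply follow Eq. 15 together with the definitions in Eqs. 12–14, and X is stored with shape (d, n) so that its columns are the samples \(X_i\).

```python
import numpy as np

def grad_hess_L0(X, Sigma, mu, mu_p, Sigma_p):
    """Gradient and Hessian of L_0(w), transcribed from Eq. 15.

    mu      : mean X^T w of the un-truncated Gaussian       (shape (n,))
    mu_p    : mean of the truncated (posterior) Gaussian    (shape (n,))
    Sigma_p : covariance of the truncated Gaussian          (shape (n, n))
    """
    Sigma_inv = np.linalg.inv(Sigma)          # np.linalg.solve is preferable numerically
    delta_mu = mu_p - mu                      # as defined in the text
    grad = X @ Sigma_inv @ delta_mu           # (d,)
    inner = Sigma_inv @ (Sigma_p - np.outer(delta_mu, delta_mu)) @ Sigma_inv
    hess = -X @ (inner - Sigma_inv) @ X.T     # (d, d)
    return grad, hess
```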

The next step is to approximate the quantities \(\mu _p\) and \(\varSigma _p\) in Eq. 14, which we need for computing Eq. 15. These are intractable, involving expectations over the full posterior. Hence, we use approximate Bayesian inference methods to obtain estimates of these expectations.

A popular method for approximate Bayesian inference is Expectation Propagation (EP) (Minka 2001), which we use in our experimental study. In particular, we employ EP to approximate the moments of truncated Gaussian integrals (Cunningham et al. 2011). EP approximates the posterior \(p(\epsilon |\mu ,\varSigma )\) in terms of a variational distribution \(q(\epsilon )\), aiming to minimize the Kullback-Leibler divergence,

$$\begin{aligned} q^*(\epsilon |\mu _{q^*},\varSigma _{q^*})= & {} \mathrm{arg} \min _{q} \left( {\mathbb E}_p [\log p(\epsilon | \mu ,\varSigma )] - {\mathbb E}_p [\log q(\epsilon |\mu _q,\varSigma _q)] \right) . \end{aligned}$$
(16)

The variational distribution \(q^*(\epsilon )\) is an un-truncated Gaussian \(q^*(\epsilon ; \mu _{q^*},\varSigma _{q^*}) = \mathcal{N}(\epsilon ;\mu _{q^*},\varSigma _{q^*})\), characterized by the variational parameters \(\mu _{q^*}\) and \(\varSigma _{q^*}\). We approximate the posterior p by this variational distribution and use its mean and covariance as plug-in estimates, \(\mu _p \approx \mu _{q^*}\) and \(\varSigma _p \approx \varSigma _{q^*}\). We warm-start each gradient computation with the optimal parameters of the previous iteration. As a remark, instead of computing the first and second moments of the integral to obtain the gradient and Hessian, the objective in Eq. 5 could also be optimized numerically using BFGS, where the integral is still approximated using EP. This is less efficient, as it requires many evaluations of the integral for a single gradient estimate.

Algorithm 1 summarizes our procedure. We denote the expectation propagation algorithm for approximating the first and second moments of the truncated Gaussian by \(\text {EP}(\mu , \varSigma )\). Here, \(\mu \) and \(\varSigma \) are the mean and covariance matrix of the un-truncated Gaussian. The subroutine returns the first and second moments \(\mu _q\) and \(\varSigma _q\) of the truncated distribution. When initialized with the outcomes of earlier iterations, this subroutine typically converges within a single EP loop.


Our algorithm thus consists of two nested loops: the outer ADMM loop, containing the Newton update, and the inner EP loop, which computes the moments of the posterior. We choose stopping criterion 1 to be the convergence criterion proposed by Boyd et al. (2011) and choose criterion 2 to be always fulfilled, i.e., we perform only one Newton optimization step per ADMM iteration. Our experiments showed that doing only one Newton step, instead of optimizing until convergence, is stable and leads to significant improvements in speed. ADMM is known to converge even when the minimizations in the ADMM scheme are not carried out exactly (see e.g. Eckstein and Bertsekas 1992).
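
A structural sketch of Algorithm 1 is given below. The gradient/Hessian computation and the EP moment routine are passed in as callables (the latter standing in for \(\text {EP}(\mu ,\varSigma )\) of Cunningham et al. 2011); stopping criteria are replaced by a fixed iteration count, so this is a schematic of the nested loops rather than a tuned implementation.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa*||.||_1 (ADMM z-update)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def approx_em(grad_hess_L0, ep_moments, X, Sigma, lam0, c=1.0, n_iter=50):
    """Structural sketch of Algorithm 1.

    grad_hess_L0(X, Sigma, mu, mu_p, Sigma_p) -> (grad, hess)   (cf. Eq. 15)
    ep_moments(mu, Sigma) -> (mu_p, Sigma_p)                    (EP subroutine)
    """
    d = X.shape[0]
    w, z, eta = np.zeros(d), np.zeros(d), np.zeros(d)
    for _ in range(n_iter):
        mu = X.T @ w
        mu_p, Sigma_p = ep_moments(mu, Sigma)          # inner EP loop (E-step)
        g, H = grad_hess_L0(X, Sigma, mu, mu_p, Sigma_p)
        g_aug = g + eta + c * (w - z)                  # augmented-Lagrangian terms
        H_aug = H + c * np.eye(d)
        w = w - np.linalg.solve(H_aug, g_aug)          # one Newton step per ADMM iteration
        z = soft_threshold(w + eta / c, lam0 / c)      # ADMM z-update
        eta = eta + c * (w - z)                        # dual update
    return z
```

The Eq. 15 transcription shown earlier can serve as grad_hess_L0, and any estimator of the truncated-Gaussian moments (EP in our case) as ep_moments.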

4 Empirical analysis and applications

We study the performance of our proposed methods in experiments on both artificial and real-world data. We consider the two versions of our model: Probit-LMM (which minimizes Eq. 7 with respect to w) and Probit-LMM MAP (that minimizes Eq. 8 with respect to both w and \(w^{\prime }\)). Our data are taken from the domains of statistical genetics and computer malware prediction.

We compare our algorithms against three competing methods, including sparse probit regression, GP classification and the LMM-Lasso. In all considered cases, the Probit-LMM achieves higher classification performance. Also, the features that our algorithms find are less affected by spurious correlations induced by population structure. We find that the Probit-LMM outperforms its MAP approximation across all considered datasets. Yet, in many cases the MAP approximation is a cheap alternative to the full model.

4.1 General experimental setup

For the real-world and synthetic experiments, we first need to make a choice for the class of kernels that we use for the covariance matrix. We choose a combination of three contributions,

$$\begin{aligned} \varSigma = \lambda _1 \mathbf{I} + \lambda _2 X^\top X + \lambda _3 \varSigma _{\text {side}}. \end{aligned}$$
(17)

The third term is optional and depends on the context; it is a kernel representing any side information provided in an auxiliary feature matrix \(X^{\prime }\). Here, we compute \(\varSigma _{\text {side}}\) as an RBF kernel from the side information \(X^{\prime }\). Note that, this way, the data matrix enters the model both through the linear effect and through the linear kernel. We evaluate the methods by using n individuals from the dataset for training and splitting the remaining dataset equally into validation and test sets. This process is repeated 50 times, over which we report average accuracies or areas under the ROC curve (AUCs), as well as standard errors (Fawcett 2006).
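
A sketch of how the composite covariance of Eq. 17 can be assembled; the RBF bandwidth and the \(\lambda \) values are placeholders that would in practice be chosen on the validation grid described below.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rbf_kernel(X_side, bandwidth=1.0):
    """RBF kernel on the side-information matrix X_side (n x d')."""
    sq_dists = cdist(X_side, X_side, metric="sqeuclidean")
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def composite_covariance(X, X_side, lam1, lam2, lam3, bandwidth=1.0):
    """Eq. 17: Sigma = lam1*I + lam2*X^T X + lam3*Sigma_side.

    X has shape (d, n) with columns X_i; X_side has shape (n, d')."""
    n = X.shape[1]
    Sigma = lam1 * np.eye(n) + lam2 * (X.T @ X)
    if X_side is not None:                   # the side-information term is optional
        Sigma += lam3 * rbf_kernel(X_side, bandwidth)
    return Sigma
```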

The hyperparameters of the kernels, together with the regularization parameter \(\lambda _0\), were determined on the validation set, using grid search over a sufficiently large parameter space (optimal values are attained inside the grid; in most cases \(\lambda _i \in [0.1, 1000]\)). For all datasets, the features were centered and scaled to unit standard deviation, except in experiment 4.4, where the features are binary.

In Sects. 4.3 and 4.4, we show that including a linear kernel in the covariance matrix leads to top-ranked features that are less correlated with the population structure than the top-ranked features of sparse probit regression. The correlation plots in Fig. 6 show the mean correlation of the top features with population structure and the corresponding standard errors. All experiments were performed on a Linux machine with 48 CPU cores (each 2.4 GHz) and 368 GB RAM.

4.2 Simulated data

To test the properties of our model in a controlled setup, we first generated synthetic data as follows. We generated a weight vector \(w \in \mathbb {R}^{d}\) with \(1 \le k\le d\) entries being 1 and the other \(d-k\) entries being 0. We chose \(d=50\) and varied k. We then created a random covariance matrix \(\varSigma _{\text {side}} \in \mathbb {R}^{n\times n}\), which serves as side information matrix. We chose \(n=200\), drew 200 points \(X = \{x_1,\dots ,x_{n}\}\) independently from a uniform distribution over the unit cube \([-1,1]^{d}\), and created the labels according to the Probit model, Eq. 3, using \(\varSigma _{\text {side}}\) as covariance matrix. We reserve 100 samples for training and 50 each for validation and testing.
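
The following sketch reproduces this data-generating procedure; since the exact construction of \(\varSigma _{\text {side}}\) was specified in a footnote not reproduced here, the random positive-definite matrix below is one plausible stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 5                       # samples, features, non-zero weights

w_true = np.zeros(d)
w_true[:k] = 1.0                           # k entries equal to 1, the rest 0

# random positive-definite side-information covariance (one plausible choice)
A = rng.standard_normal((n, n))
Sigma_side = A @ A.T / n + 1e-3 * np.eye(n)

X = rng.uniform(-1.0, 1.0, size=(d, n))    # columns drawn from the unit cube
eps = rng.multivariate_normal(np.zeros(n), Sigma_side)
y = np.sign(X.T @ w_true + eps)            # labels from the Probit model, Eq. 3

# 100 samples for training, 50 each for validation and test
idx = rng.permutation(n)
train, val, test = idx[:100], idx[100:150], idx[150:]
```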

The synthetic data allowed us to control the number k of non-zero features. We then fit various models to the data to predict the binary labels: the Probit-LMM (proposed) as well as Probit-LMM MAP (proposed), GP classification, the LMM-Lasso, and standard \(\ell _1\)-norm regularized (sparse) Probit regression. As a benchmark we introduce the oracle classifier, where we use the Probit-LMM (with covariance matrix \(\varSigma _{\text {side}}\)) but skip the training and instead use the true underlying w for prediction. Fig. 1 shows the resulting accuracies. The horizontal axis shows the fraction k/d of non-zero features in the artificial data. Note that the accuracies of all methods fluctuate due to the finite size of the different data sets that we generated.

Fig. 1 Toy: average accuracies as a function of the number of true non-zero features in the generating model (proposed methods: Probit-LMM and MAP approximation)

The observed performances of the methods depend on the varying level of sparsity of the data: if the true linear effect is sparse, sparsely regularized models should be expected to work better. The opposite can be expected from models that include all features in a dense way, such as GP classification. These models are good when the true effects are dense. Our plot indeed reveals this tendency. \(\ell _1\)-norm regularized (sparse) Probit regression performs well for small k, whereas GP classification works well for large k. The Probit-LMM and its MAP approximation outperform both methods, because they contain both a dense kernel as well as a sparse linear effect. Interestingly, even though the LMM-Lasso also has a sparse effect and a dense kernel, its performance is not very compelling on our experimental dataset. This may be explained by its output being continuous (and not binary), and therefore not well suited for classification tasks.

We also compared the runtimes across different methods, shown in Fig. 5. The Probit-LMM and Probit regression have an approximately constant runtime in all scenarios whereas the latter is around 2.5 times faster. As expected, the runtime of Probit-LMM MAP lies between the other methods and slightly decreases in the more dense scenarios. It can be considered a cheap alternative to the Probit-LMM, but predicts slightly worse.

Finally, we analyzed the importance of the \(\ell _1\)-norm regularizer in the Probit-LMM and compared it against a model that is \(\ell _2\)-regularized. We generated an artificial data set with \(k=10\) non-zero features and tried to recover these non-zero feature weights with both algorithms. Figure 2 shows the results of this analysis. The blue solid line represents the truly non-zero weights, while the green dots show our estimates when using \(\ell _1\)-norm (left) and \(\ell _2\)-norm (right) regularization on w, respectively. We observe that the \(\ell _1\)-norm regularized Probit model finds better estimates of the linear weight vectors that were used to generate the data.

Fig. 2 Toy: effects of the regularizer on the model's ability to select features. Ground truth (blue solid line) and feature weights (green dots) of \(\ell _1\)-norm (left) and \(\ell _2\)-norm (right) regularized Probit-LMM (Color figure online)

4.3 Tuberculosis disease outcome prediction

In our first real-world experiment, we predicted the outcome of tuberculosis from gene expression levels. We obtained the dataset of Berry et al. (2010) from the National Center for Biotechnology Information website; it includes 40 blood samples from patients with active tuberculosis as well as 103 healthy controls, together with the transcriptional signature of blood samples measured in a microarray experiment with 48,803 gene expression levels, which serve as features for our purposes. Also available is the age of the subjects when the blood sample was taken, from which we compute \(\varSigma _{\text {side}}\). All competing methods are trained using various training set sizes \(n\in [40,80]\). To be consistent with previous studies (e.g. Li et al. 2011), we report the area under the ROC curve (AUC) rather than accuracy. The results are shown in Fig. 4, left.

We observe that the Probit-LMM achieves a consistent improvement over sparse probit regression (by up to 12 percentage points), GP classification (by up to 3 percentage points), the LMM-Lasso (by up to 7 percentage points), and its MAP approximation (by up to 7 percentage points). In Fig. 5 we show the runtime of the Probit-LMM, its MAP version, and sparse probit regression with respect to the dataset size. Note that both the prediction performance of the MAP approximation and its runtime lie between the full model (Probit-LMM) and sparse probit regression. In Fig. 6, left, we show the correlation between the top features and the population structure (as confounding factor) for the Probit-LMM and sparse probit regression. The plot was created as explained in Sect. 4.1. Inspecting the correlation coefficients of the top 100 features of both methods, we find that the features obtained by the Probit-LMM are less correlated with the confounder than those of sparse probit regression. This is because population structure was built into our model as a source of correlated noise.

To make sure that our selected features are reliable, we investigate their stability under bootstrapping. We considered stability selection (Meinshausen and Bühlmann 2010), where we randomly subsample \(90\%\) of the data 100 times (to accommodate the limited sample size, we follow Rakitsch et al. (2013) and do not use \(50\%\) of the samples for each draw as proposed in the original article). We define a feature to be selected if its absolute weight exceeds the threshold of 0.001. In Fig. 3 we show the selection probability for each feature. For the Probit-LMM, the top 7 features are selected in every single one of the 100 runs, indicating that they are very stable. In contrast, in standard sparse probit regression (Lasso) these features only get selected with about \(90\%\) probability. Also, the total number of selected features over all runs is 294 for our approach, whereas for sparse probit regression it is 1837, which indicates less variability. The Probit-LMM thus leads to more stable features than the standard Lasso approach since it also includes a dense effect, as explained in Sect. 2.3.
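
Schematically, the stability-selection protocol used above looks as follows; the model-fitting routine is abstracted as a callable, and the subsampling fraction, number of runs, and threshold match the values reported in the text.

```python
import numpy as np

def stability_selection(fit_sparse_weights, X, y, n_runs=100,
                        subsample=0.9, threshold=1e-3, seed=0):
    """Selection frequency of each feature under random subsampling.

    fit_sparse_weights(X_sub, y_sub) -> weight vector of length d
    (e.g. the Probit-LMM or sparse probit regression trained on the subsample).
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    counts = np.zeros(d)
    m = int(round(subsample * n))
    for _ in range(n_runs):
        idx = rng.choice(n, size=m, replace=False)     # 90% subsample
        w = fit_sparse_weights(X[:, idx], y[idx])
        counts += (np.abs(w) > threshold)              # feature counts as selected
    return counts / n_runs                             # selection probabilities
```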

Furthermore, we test the significance of the selected features of the Probit-LMM, where we construct a test statistic based on the likelihood ratio of our model and a reference model without fixed effect (Neyman and Pearson 1933). Our null hypothesis is thus that these features do not influence the disease outcome, i.e., that a model in which all corresponding feature weights are zero is equally powerful. We train our method on \(75\%\) of the data, evaluate the likelihoods of both models on the remaining \(25\%\), and repeat this procedure 10 times for random test-training splits. In each run, our algorithm selects between 32 and 37 features based on the aforementioned criterion that the feature weights exceed 0.001. We obtain a log-likelihood ratio of \(2.7 \pm 0.3\). Note that to construct a p-value from this likelihood ratio, further assumptions about the distribution of model parameters would be required.

Fig. 3 TBC: stability of selected features for the Probit-LMM and sparse probit regression. The plot shows the selection probabilities for each feature. Ideally, we want these to be 0 or 1. The Probit-LMM (proposed) leads to more stable top features and has less variability under bootstrapping

4.4 Malicious computer software (Malware) detection

We experiment on the Drebin dataset (Arp et al. 2014), which contains 5560 Android software applications from 179 different malware families. There are 545,333 binary features; each feature denotes the presence or absence of a certain source code string (such as a permission, an API call or a network address). It makes sense to look for sparse representations (Arp et al. 2014), as only a small number of strings are truly characteristic of malware. The idea is that we consider populations of different families of malware when training, and hence correct for the analogue of genetic population structure in this new context, which we call “malware structure”.

Fig. 4 Left: average AUC in the tuberculosis (TBC) experiment with respect to the training set size. Right: average ROC curves for the computer malware detection experiment

Fig. 5 Training time with respect to the dataset size in the tuberculosis experiment (left) and with respect to the number of true non-zero features in the generating model of the toy experiment (right)

Fig. 6 Correlation between the selected features and population structure as described in the main text (low values are better). The tuberculosis experiment is shown left, and computer malware right. The x-axis is sorted by descending absolute weights. Light-red/light-blue areas indicate standard errors (Color figure online)

We concentrate on the 10 most frequently occurring malware families in the dataset. We took 10 instances from each family, forming a malicious set of 100 instances and a benign set of another 100 instances (i.e., 200 samples in total). We employ \(n=80\) instances for training and stratify in the sense that each training/validation/test set contains \(50\%\) benign samples and an equal number of malware instances from each family. Since no side information is available, we only use a linear kernel and the identity matrix as components of the correlation matrix. We report the (normalized) area under the Receiver Operating Characteristic (ROC) curve over the interval [0, 0.1] and denote this performance measure by \(\text {AUC}_{0.1}\). In Fig. 4, right, we show the ROC curves; Table 1 reports the achieved \(\text {AUC}_{0.1}\), and Table 2 the runtimes of the Probit-LMM, its MAP approximation, and sparse probit regression.
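
For clarity, \(\text {AUC}_{0.1}\) can be computed from the empirical ROC curve as sketched below; normalizing by the interval length is one common convention and may differ in detail from the one used to produce Table 1.

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, max_fpr=0.1):
    """Area under the ROC curve over FPR in [0, max_fpr], normalized to [0, 1]."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    # restrict the curve to fpr <= max_fpr, interpolating the right endpoint
    mask = fpr <= max_fpr
    fpr_cut = np.concatenate([fpr[mask], [max_fpr]])
    tpr_cut = np.concatenate([tpr[mask], [np.interp(max_fpr, fpr, tpr)]])
    return np.trapz(tpr_cut, fpr_cut) / max_fpr
```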

Table 1 Malware: \(\text {AUC}_{0.1}\) and corresponding standard deviations attained on the malware dataset
Table 2 Malware: Average training time on the malware dataset

We observe that the Probit-LMM achieves a consistent improvement in terms of \(\text {AUC}_{0.1}\) over sparse probit regression (by approximately 7.5 percentage points), GP classification (by approximately 5 percentage points), LMM-Lasso (by approximately 8.4 percentage points), and over its MAP approximation (by approximately 2 percentage points). Furthermore, in Fig. 6, right, we plot the correlation of the top features of Probit-LMM and sparse probit regression with population structure. We observe that the Probit-LMM leads to features which are less correlated with the malware structure.

4.5 Flowering time prediction from single nucleotide polymorphisms

We experiment on genotype and phenotype data consisting of 199 genetically different accessions (instances) of the model plant Arabidopsis thaliana (Atwell et al. 2010). The genotype of each accession comprises 216,130 single nucleotide polymorphism (SNP) features. The phenotype that we aim to predict is early or late flowering of a plant when grown at ten degrees centigrade. The original dataset contains the flowering time for each of the 199 genotypes. We split the dataset into the lower and upper \(45\%\)-quantiles of the flowering time and binarized the labels, resulting in a set of 180 accessions, of which we use \(n=150\) for training. The results are reported in Table 3 and show that the Probit-LMM has a slight advantage of at least 0.5 percentage points in AUC over the competitors. The MAP approximation can be considered a cheap alternative to the Probit-LMM, since its prediction performance is only slightly worse but it is substantially faster (see Table 4).

Table 3 Flowering: AUCs and corresponding standard errors in the flowering time prediction experiment
Table 4 Flowering: Average training time in the flowering time experiment

An analysis restricted to the ten SNPs with the largest absolute regression weights in our model showed that they lie within four well-annotated genes that can all convincingly be related to flowering, structure and growth: the gene AT2G21930 encodes a growth protein that is expressed during flowering, AT4G27360 is involved in microtubule motor activity, AT3G48320 is a membrane protein involved in plant structure, and AT5G28040 is a DNA binding protein that is expressed during flowering.

5 Conclusion

We presented a novel algorithm for sparse feature selection in binary classification where the training data show spurious correlations, e.g., due to confounding. Our approach generalizes the LMM modeling paradigm to binary classification, which poses technical challenges since exact inference becomes intractable. Our solution relies on approximate Bayesian inference. We demonstrated our approach on a synthetic dataset, two datasets from the field of statistical genetics, and a third dataset from the domain of computer malware detection.

Our approximate Bayesian EM-algorithm can be seen as a hybrid between an \(\ell _1\)-norm regularized Probit classifier (enforcing sparsity) and a GP classifier that takes as input an arbitrary noise kernel. It is able to disambiguate between sparse linear effects and correlated Gaussian noise and thereby explains away spurious correlations due to confounding. We showed empirically that our model selects features which show less correlation with the first principal components of the noise covariance, and which are therefore closer to the truly underlying sparsity pattern.

While sparsity by itself is not the ultimate virtue to strive for, we showed that the combination of sparsity-inducing regularization and dense-type probabilistic modeling (as in the proposed method) may improve over purely sparse models such as \(\ell _1\)-norm regularized (sparse) Probit regression. The corresponding theoretical exploration is left for future work. We note that a good starting point to this end will be to study the existing literature on compressed sensing, as pioneered by Candès and Tao (2006) and Donoho (2006) and put forward by Boufounos and Baraniuk (2008) in the context of 1-bit compressed sensing. For the latter case, such theory has recently been developed by Plan and Vershynin (2012), but under the assumption of independent noise variables, an assumption that is violated in the Probit-LMM.

A shortcoming of the model is that the noise covariance kernel is fixed in advance and is not learned from the data. As a possible extension, one could treat the design matrix X, which is used to compute the similarity kernel K(X,X), as a free parameter and optimize it according to a maximum likelihood criterion. For a linear kernel this would essentially yield probabilistic PCA; for a non-linear kernel, as in deep Gaussian processes or Gaussian process latent variable models, this can yield interesting forms of dimensionality reduction. However, these models are typically used to analyze higher-dimensional data where multiple outputs (phenotypes) per training example are available. Trying to estimate a covariance of size \(n\times n\) with only n training examples, we would run the risk of overfitting. This is also the reason why linear kernels of the feature matrix are still standard in genetics and are used in most LMM applications.

In the future, several paths are viable. An interesting extension of our approach would be a fully Bayesian one that also captures parameter uncertainty over w. To obtain the posterior on w, it might be easier to use sparsity-inducing hierarchical priors, e.g., an automatic relevance determination prior or Gaussian scale mixture, instead of the Laplace prior. Second, multi-class versions of the model are possible. And third, even more scalable approaches could be explored. To this end, one can make use of the formulation of the model in Eq. 7 and employ Stochastic Variational Inference, a scalable Bayesian algorithm based on stochastic optimization (Hoffman et al. 2013). We will leave these aspects for future studies.