Fast Simulation of Hyperplane-Truncated Multivariate Normal Distributions

We introduce a fast and easy-to-implement simulation algorithm for a multivariate normal distribution truncated on the intersection of a set of hyperplanes, and further generalize it to efficiently simulate random variables from a multivariate normal distribution whose covariance (precision) matrix can be decomposed as a positive-definite matrix minus (plus) a low-rank symmetric matrix. Example results illustrate the correctness and efficiency of the proposed simulation algorithms.


Introduction
We investigate the problem of simulation from a multivariate normal (MVN) distribution whose samples are restricted to the intersection of a set of hyperplanes, which is shown to be inherently related to simulating from the conditional distribution of a MVN distribution. A naive approach, which linearly transforms a random variable drawn from the conditional distribution of a related MVN distribution, requires a large number of intermediate variables that are often computationally expensive to instantiate. To address this issue, we propose a fast and exact simulation algorithm that directly projects a MVN random variable onto the intersection of a set of hyperplanes. We further show that sampling from a MVN distribution, whose covariance (precision) matrix can be decomposed as the difference (sum) of a positive-definite matrix, whose inversion is known or easy to compute, and a low-rank symmetric matrix, may also be made significantly faster by exploiting this newly proposed simulation algorithm for hyperplane-truncated MVN distributions, avoiding the need for a Cholesky decomposition, whose computational complexity is O(k^3) (Golub and Van Loan 2012), where k is the dimension of the MVN random variable.
Related to the problems under study, the simulation of MVN random variables subject to certain constraints (Gelfand et al. 1992) has been investigated in many other settings, such as multinomial probit and logit models (Albert and Chib 1993; McCulloch et al. 2000; Imai and van Dyk 2005; Train 2009; Holmes and Held 2006; Johndrow et al. 2013), Bayesian isotonic regression (Neelon and Dunson 2004), the Bayesian bridge (Polson et al. 2014), blind source separation (Schmidt 2009), and unmixing of hyperspectral data (Altmann et al. 2014; Dobigeon et al. 2009a). A typical example arising in these different settings is to sample a random vector x ∈ R^k from a MVN distribution subject to k inequality constraints as

x ∼ N_S(µ, Σ), S = {x : l ≤ Gx ≤ u},

where N_S(µ, Σ) represents a MVN distribution truncated on the sample space S, µ ∈ R^k is the mean, Σ ∈ R^{k×k} is the covariance matrix, G ∈ R^{k×k} is a full-rank matrix, l ∈ R^k, u ∈ R^k, and l < u. If the elements of l and u are permitted to be −∞ and +∞, respectively, then both single-sided and fewer than k inequality constraints are allowed. Equivalently, as in Geweke (1991, 1996), one may let x = µ + G^{-1}z and use Gibbs sampling (Geman and Geman 1984; Gelfand and Smith 1990) to sample the k elements of z one at a time, each conditioned on all the others, from a univariate truncated normal distribution, for which efficient algorithms exist (Robert 1995; Damien and Walker 2001; Chopin 2011). To deal with the case where the number of linear constraints imposed on x exceeds its dimension k, and to obtain better mixing, one may consider the Gibbs sampling algorithm for truncated MVN distributions proposed in Rodriguez-Yam et al. (2004). In addition to Gibbs sampling, one may also sample truncated MVN random variables with Hamiltonian Monte Carlo (Pakman and Paninski 2014; Lan et al. 2014) or the minimax tilting method proposed in Botev (2016).
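As a concrete illustration of the one-at-a-time Gibbs strategy described above, the following sketch (not from the papers cited; it assumes numpy/scipy and considers only the box-constrained case G = I, with the function name and iteration count our own) samples each coordinate from its univariate truncated normal full conditional:

```python
# One-at-a-time Gibbs sampling for a MVN truncated to the box l <= x <= u
# (the special case G = I). A sketch, not the paper's algorithm.
import numpy as np
from scipy.stats import truncnorm

def gibbs_truncated_mvn(mu, Sigma, l, u, n_iter=100, rng=None):
    rng = np.random.default_rng(rng)
    k = len(mu)
    Lam = np.linalg.inv(Sigma)           # precision matrix
    x = np.clip(mu, l, u).astype(float)  # feasible starting point
    idx = np.arange(k)
    for _ in range(n_iter):
        for i in range(k):
            others = idx[idx != i]
            # univariate full conditional of x_i given the other coordinates
            var_i = 1.0 / Lam[i, i]
            mean_i = mu[i] - var_i * Lam[i, others] @ (x[others] - mu[others])
            s = np.sqrt(var_i)
            a, b = (l[i] - mean_i) / s, (u[i] - mean_i) / s
            x[i] = truncnorm.rvs(a, b, loc=mean_i, scale=s, random_state=rng)
    return x
```

Each inner step is a one-dimensional draw, which is why such samplers can mix slowly when the dimension or the number of constraints is large, motivating the direct projection approach developed below.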

Hyperplane-truncated and conditional MVNs
For the problem under study, we express a k-dimensional MVN distribution truncated on the intersection of k_2 < k hyperplanes as x ∼ N_S(µ, Σ), S = {x : Gx = r}, where G ∈ R^{k_2×k}, r ∈ R^{k_2}, and Rank(G) = k_2. The probability density function can be expressed as p(x | µ, Σ, G, r) = Z^{-1} N(x; µ, Σ) δ(Gx = r), where Z is a constant ensuring ∫ p(x | µ, Σ, G, r) dx = 1, and δ(x) = 1 if the condition x is satisfied and δ(x) = 0 otherwise. Let us partition G, x, µ, Σ, and Λ = Σ^{-1} as G = (G_1, G_2), x = (x_1^T, x_2^T)^T, µ = (µ_1^T, µ_2^T)^T, Σ = [Σ_11, Σ_12; Σ_21, Σ_22], and Λ = [Λ_11, Λ_12; Λ_21, Λ_22], where G_1 and G_2 are of sizes k_2 × k_1 and k_2 × k_2, x_1 and µ_1 are k_1 × 1, x_2 and µ_2 are k_2 × 1, Σ_11 and Λ_11 are k_1 × k_1, Σ_12 and Λ_12 are k_1 × k_2, k = k_1 + k_2, Σ_21 = Σ_12^T, and Λ_21 = Λ_12^T. A special case that frequently arises in real applications is when G_1 = 0_{k_2×k_1} and G_2 = I_{k_2}, which means (0_{k_2×k_1}, I_{k_2}) x = x_2 = r and the need is to simulate x_1 given x_2 = r. For a MVN random variable x ∼ N(µ, Σ), it is well known, e.g., in Tong (2012), that the conditional distribution of x_1 given x_2 = r, i.e., the distribution of x restricted to S = {x : (0_{k_2×k_1}, I_{k_2}) x = r}, can be expressed as

x_1 | x_2 = r ∼ N[µ_1 + Σ_12 Σ_22^{-1} (r − µ_2), Σ_11 − Σ_12 Σ_22^{-1} Σ_21]. (4)

Alternatively, applying the Woodbury matrix identity to relate the entries of the covariance matrix Σ to those of the precision matrix Λ, one may obtain the equivalent expression

x_1 | x_2 = r ∼ N[µ_1 − Λ_11^{-1} Λ_12 (r − µ_2), Λ_11^{-1}].

In a general setting where G is not of the form (0_{k_2×k_1}, I_{k_2}), let us define a full-rank linear transformation matrix H ∈ R^{k×k}, with (H_1, H_2) as the (k × k_1, k × k_2) partition of H, where the columns of H_1 ∈ R^{k×k_1} span the null space of the k_2 rows of G, making GH = (GH_1, GH_2) = (0_{k_2×k_1}, GH_2), where GH_2 is a k_2 × k_2 full-rank matrix. For example, a linear transformation matrix H that makes GH = (0_{k_2×k_1}, I_{k_2}) can be constructed using the command H = inv([null(G)'; G]) in Matlab or H <- solve(rbind(t(Null(t(G))), G)) in R. With H and H^{-1}, one may re-express the constraints as S = {x : (0_{k_2×k_1}, GH_2)(H^{-1} x) = r}.
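The equivalence of the two conditional forms above can be checked numerically; a minimal sketch, assuming numpy and randomly generated parameters:

```python
# Numerical check of the two equivalent conditional forms: the covariance
# form N[mu1 + S12 S22^{-1}(r - mu2), S11 - S12 S22^{-1} S21] and the
# precision form N[mu1 - L11^{-1} L12 (r - mu2), L11^{-1}], L = Sigma^{-1}.
import numpy as np

rng = np.random.default_rng(0)
k1, k2 = 3, 2
k = k1 + k2
M = rng.standard_normal((k, k))
Sigma = M @ M.T + k * np.eye(k)       # random positive-definite covariance
mu = rng.standard_normal(k)
r = rng.standard_normal(k2)

S11, S12 = Sigma[:k1, :k1], Sigma[:k1, k1:]
S21, S22 = Sigma[k1:, :k1], Sigma[k1:, k1:]
Lam = np.linalg.inv(Sigma)
L11, L12 = Lam[:k1, :k1], Lam[:k1, k1:]

mean_cov = mu[:k1] + S12 @ np.linalg.solve(S22, r - mu[k1:])
mean_prec = mu[:k1] - np.linalg.solve(L11, L12 @ (r - mu[k1:]))
cov_cov = S11 - S12 @ np.linalg.solve(S22, S21)
cov_prec = np.linalg.inv(L11)

assert np.allclose(mean_cov, mean_prec)
assert np.allclose(cov_cov, cov_prec)
```

The covariance identity Λ_11^{-1} = Σ_11 − Σ_12 Σ_22^{-1} Σ_21 is exactly the Schur-complement form of the block matrix inverse, which is what the Woodbury manipulation above exploits.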
Denote z = H^{-1} x; then we can generate x by letting x = Hz, where z is drawn from the conditional distribution of N[H^{-1} µ, H^{-1} Σ (H^{-1})^T] given (0_{k_2×k_1}, GH_2) z = r, i.e., given z_2 = (GH_2)^{-1} r. Hence x truncated on S can be naively generated using the following algorithm, whose computational complexity is described in Table 1 of the Appendix.

Algorithm 1 Simulation of the hyperplane-truncated MVN distribution x ∼ N_S(µ, Σ), where S = {x : Gx = r}, by transforming a random variable drawn from the conditional distribution of another MVN distribution.
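The construction of H used by Algorithm 1 can be sketched in Python (assuming numpy/scipy; scipy's `null_space` plays the role of Matlab's `null`, and the function name is ours):

```python
# Construct H with G H = (0, I_{k2}), mirroring the Matlab command
# inv([null(G)'; G]): the first k1 columns of H span the null space of G.
import numpy as np
from scipy.linalg import null_space

def build_H(G):
    k2, k = G.shape
    N = null_space(G)                        # k x (k - k2), with G @ N = 0
    return np.linalg.inv(np.vstack([N.T, G]))
```

By construction [N^T; G] H = I_k, so N^T H = (I_{k_1}, 0) and G H = (0, I_{k_2}); forming and inverting this k × k matrix is itself an O(k^3) operation, which is part of the overhead the fast algorithm below avoids.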
For high-dimensional problems, however, Algorithm 1 in general requires a large number of intermediate variables that can be computationally expensive to instantiate. In the following discussion, we show how to avoid instantiating these intermediate variables altogether.

Fast and exact simulation of MVN distributions
Instead of using Algorithm 1, we first provide a theorem that shows how to efficiently and exactly simulate from a hyperplane-truncated MVN distribution.

Theorem 1. A random variable x ∼ N_S(µ, Σ), where S = {x : Gx = r}, can be generated with the following algorithm.

Algorithm 2 Simulation of the hyperplane-truncated MVN distribution x ∼ N_S(µ, Σ), where S = {x : Gx = r}:
• Sample y ∼ N(µ, Σ);
• Return x = y + Σ G^T (G Σ G^T)^{-1} (r − G y).

In the Appendix, we provide two different proofs. The first proof facilitates the derivations by employing an existing algorithm of Hoffman and Ribak (1991) and Doucet (2010), which describes how to simulate from the conditional distribution of a MVN distribution shown in (4) without computing Σ_11 − Σ_12 Σ_22^{-1} Σ_21 or its Cholesky decomposition. Note it is straightforward to verify that the algorithm in Hoffman and Ribak (1991) and Doucet (2010), as shown in the Appendix, is a special case of the proposed algorithm with G = [0, I].
The above algorithm and theorem, whose computational complexity is described in Table 2 of the Appendix, show that one may draw y from the unconstrained MVN as y ∼ N(µ, Σ) and directly map it to a vector x on the intersection of hyperplanes using x = y + Σ G^T (G Σ G^T)^{-1} (r − G y).
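A minimal sketch of Algorithm 2, assuming numpy (the function name `sample_htmvn` is ours, not from the paper):

```python
# Algorithm 2 / Theorem 1: draw an unconstrained MVN sample and project
# it onto the intersection of hyperplanes {x : G x = r}.
import numpy as np

def sample_htmvn(mu, Sigma, G, r, rng=None):
    rng = np.random.default_rng(rng)
    y = rng.multivariate_normal(mu, Sigma)     # unconstrained draw
    SGt = Sigma @ G.T                          # k x k2
    return y + SGt @ np.linalg.solve(G @ SGt, r - G @ y)
```

Every returned sample satisfies Gx = r up to floating point, and only a k_2 × k_2 linear system is solved; when Σ is diagonal, forming Σ G^T costs only O(k k_2), which is why the method benefits so strongly from structured covariance matrices.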

Fast simulation of MVN distributions with structured covariance or precision matrices
For fast simulation of MVN distributions with structured covariance or precision matrices, our idea is to relate them to higher-dimensional hyperplane-truncated MVN distributions with block-diagonal covariance matrices, which can be efficiently simulated with Algorithm 2. We first introduce an efficient algorithm for the simulation of a MVN distribution whose covariance matrix is a positive-definite matrix minus a low-rank symmetric matrix. Such covariance matrices commonly arise in the conditional distributions of MVN distributions, as shown in (4). We then further extend this algorithm to the simulation of a MVN distribution whose precision (inverse covariance) matrix is the sum of a positive-definite matrix and a low-rank symmetric matrix. Such precision matrices commonly arise in the conditional posterior distributions of the regression coefficients in both linear regression and generalized linear models.
Theorem 2. The probability density function (PDF) of the MVN distribution x_1 ∼ N(µ_1, Σ_11 − Σ_12 Σ_22^{-1} Σ_21) is the same as the PDF of the marginal distribution of x_1 in x = (x_1^T, x_2^T)^T = (x_1, . . . , x_k)^T, whose PDF is expressed as p(x) = Z^{-1} N(x; µ̃, Σ̃) δ(Gx = r), where Z is a normalization constant, G = (G_1, G_2), G_1 = Σ_21 Σ_11^{-1} is a matrix of size k_2 × k_1, G_2 is a user-specified full-rank invertible matrix of size k_2 × k_2, r ∈ R^{k_2} is a user-specified vector, and

µ̃ = (µ_1^T, [G_2^{-1}(r − Σ_21 Σ_11^{-1} µ_1)]^T)^T, Σ̃ = blockdiag[Σ_11, G_2^{-1}(Σ_22 − Σ_21 Σ_11^{-1} Σ_12)(G_2^{-1})^T].

The above theorem shows how the simulation of a MVN distribution, whose covariance matrix is a positive-definite matrix minus a low-rank symmetric matrix, can be realized by the simulation of a higher-dimensional hyperplane-truncated MVN distribution. By construction, the covariance matrix Σ̃ of the truncated MVN is block diagonal, while the flexibility to customize the full-rank matrix G_2 and the vector r is preserved. While there are infinitely many choices for G_2 and r, in the following discussion we remove that flexibility by specifying G_2 = I_{k_2}, leading to G = (G_1, G_2) = (Σ_21 Σ_11^{-1}, I_{k_2}), and r = Σ_21 Σ_11^{-1} µ_1. This specific setting of G_2 and r leads to the following corollary, a special case of Theorem 2. Note that while we choose this specific setting in the paper, depending on the problems under study, other settings may lead to even more efficient simulation algorithms.
Corollary 3. The PDF of the MVN distribution x_1 ∼ N(µ_1, Σ_11 − Σ_12 Σ_22^{-1} Σ_21) is the same as the PDF of the marginal distribution of x_1 in x = (x_1^T, x_{k_1+1}, . . . , x_k)^T, whose PDF is expressed as p(x) = Z^{-1} N[x; (µ_1^T, 0^T)^T, Σ̃] δ(Σ_21 Σ_11^{-1} x_1 + x_2 = Σ_21 Σ_11^{-1} µ_1), where Σ̃ = blockdiag(Σ_11, Σ_22 − Σ_21 Σ_11^{-1} Σ_12). Further applying Theorem 1 to Corollary 3, as described in detail in the Appendix, a MVN random variable x_1 with a structured covariance matrix can be generated as in Algorithm 3, where there is no need to compute Σ_11 − Σ_12 Σ_22^{-1} Σ_21 or its Cholesky decomposition. Suppose the covariance matrix Σ_11 admits some special structure that makes it easy to invert and computationally efficient to simulate from N(0, Σ_11); then Algorithm 3 can lead to significant computational savings if k_2 ≪ k_1. On the other hand, when k_2 ≫ k_1 and Σ_22 − Σ_21 Σ_11^{-1} Σ_12 admits no special structure, Algorithm 3 may not bring any computational advantage, and hence one may resort to the naive Cholesky decomposition based procedure. Detailed computational complexity analyses for both methods are provided in Tables 3 and 4 of the Appendix, respectively.
Algorithm 3 Simulation of the MVN distribution x_1 ∼ N(µ_1, Σ_11 − Σ_12 Σ_22^{-1} Σ_21):
• Sample y_1 ∼ N(µ_1, Σ_11) and y_2 ∼ N(0, Σ_22 − Σ_21 Σ_11^{-1} Σ_12);
• Return x_1 = y_1 + Σ_12 Σ_22^{-1} [Σ_21 Σ_11^{-1} (µ_1 − y_1) − y_2].

The efficient simulation algorithm for a MVN distribution with a structured covariance matrix can be further extended to a MVN distribution with a structured precision matrix, β ∼ N[µ_β, (A + Φ^T Ω Φ)^{-1}], as described below, where β ∈ R^p, µ_β ∈ R^p, Φ ∈ R^{n×p}, and both A ∈ R^{p×p} and Ω ∈ R^{n×n} are positive-definite matrices. Computational complexity analyses for both the naive Cholesky decomposition based implementation and Algorithm 4 are provided in Tables 5 and 6 of the Appendix, respectively. Similar to Algorithm 3, Algorithm 4 may bring significant computational savings when p ≫ n and A admits some special structure that makes it easy to invert and computationally efficient to simulate y_1.
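A sketch of Algorithm 3, assuming numpy (the function name is ours): the structured covariance Σ_11 − Σ_12 Σ_22^{-1} Σ_21 is never formed or factorized.

```python
# Algorithm 3: simulate x1 ~ N(mu1, S11 - S12 S22^{-1} S21) by sampling
# the two blocks independently and applying a low-rank correction.
import numpy as np

def sample_structured_cov(mu1, S11, S12, S21, S22, rng=None):
    rng = np.random.default_rng(rng)
    y1 = rng.multivariate_normal(mu1, S11)
    S22_tilde = S22 - S21 @ np.linalg.solve(S11, S12)
    y2 = rng.multivariate_normal(np.zeros(S22.shape[0]), S22_tilde)
    # x1 = y1 + S12 S22^{-1} [ S21 S11^{-1} (mu1 - y1) - y2 ]
    return y1 + S12 @ np.linalg.solve(
        S22, S21 @ np.linalg.solve(S11, mu1 - y1) - y2)
```

Since x_1 is an affine map of (y_1, y_2), its covariance can be verified analytically: with C_1 = I − Σ_12 Σ_22^{-1} Σ_21 Σ_11^{-1} and C_2 = −Σ_12 Σ_22^{-1}, one has C_1 Σ_11 C_1^T + C_2 (Σ_22 − Σ_21 Σ_11^{-1} Σ_12) C_2^T = Σ_11 − Σ_12 Σ_22^{-1} Σ_21.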

Algorithm 4 Simulation of the MVN distribution β ∼ N[µ_β, (A + Φ^T Ω Φ)^{-1}]:
• Sample y_1 ∼ N(0, A^{-1}) and y_2 ∼ N(0, Ω^{-1});
• Return β = µ_β + y_1 − A^{-1} Φ^T (Ω^{-1} + Φ A^{-1} Φ^T)^{-1} (Φ y_1 + y_2).

Corollary 5. The random variable β obtained with Algorithm 4 is distributed as β ∼ N[µ_β, (A + Φ^T Ω Φ)^{-1}].

Illustrations
Below we provide several examples to illustrate Theorem 1, which shows how to efficiently simulate from a hyperplane-truncated MVN distribution, and Corollary 4 (Corollary 5), which shows how to efficiently simulate from a MVN distribution with a structured covariance (precision) matrix. We run all our experiments on a 2.9 GHz computer.

Simulation of hyperplane-truncated MVNs
We first compare Algorithms 1 and 2, whose generated random samples follow the same distribution, as suggested by Theorem 1, to highlight the advantages of Algorithm 2 over Algorithm 1. We then employ Algorithm 2 for a real application whose data dimension is high and sample size is large.

Comparison of Algorithms 1 and 2
We compare Algorithms 1 and 2 in a wide variety of settings, varying the data dimension k and the number of hyperplane constraints k_2, and choosing either a diagonal covariance matrix Σ or a non-diagonal one. We generate random diagonal covariance matrices using the MATLAB command diag(0.05 + rand(k, 1)) and random non-diagonal ones using U * diag(0.05 + rand(k, 1)) * U', where rand(k, 1) is a vector of k uniform random numbers and U consists of a set of k orthogonal basis vectors. The elements of µ, r, and G are all sampled from N(0, 1), with the singular value decomposition applied to G to verify that Rank(G) = k_2.
First, to verify Theorem 1, we conduct an experiment with data dimension k = 5000, k_2 = 20 hyperplanes, and a diagonal Σ. Contour plots of two randomly selected dimensions of the 10,000 random samples simulated with Algorithms 1 and 2 are shown in the top and bottom rows of Figure 3, respectively. The close match between the contour plots of the two algorithms supports the correctness of Theorem 1.
To demonstrate the efficiency of Algorithm 2, we first carry out a series of experiments with the number of hyperplane constraints fixed at k_2 = 20 and the data dimension increased from k = 50 to k = 5000. The computation time of simulating 10,000 samples, averaged over five random trials, is shown in Figure 4(a) for non-diagonal Σ's and in Figure 4(d) for diagonal ones. It is clear that, when the data dimension k is high, Algorithm 2 has a clear advantage over Algorithm 1 by avoiding the computation of unnecessary intermediate variables, which is especially evident when Σ is diagonal. We then carry out a series of experiments in which we vary not only k but also k_2, from 0.1k to 0.9k for each k. As shown in Figure 4, Algorithm 2 dominates Algorithm 1 in all scenarios, which can be explained by the fact that Algorithm 2 computes far fewer intermediate variables. Also observed is that a larger k_2 leads to slower simulation for both algorithms, but to a much lesser extent for Algorithm 2. Moreover, the curvatures of those curves indicate that Algorithm 2 is more practical in a high-dimensional setting. Note that since Algorithm 2 can naturally exploit the structure of the covariance matrix Σ for fast simulation, it is clearly more capable of benefiting from a diagonal or block-diagonal Σ, as demonstrated by comparing Figures 4(b) and 4(c) with Figures 4(e) and 4(f). All these observations agree with our computational complexity analyses for Algorithms 1 and 2, as shown in Tables 1 and 2 of the Appendix, respectively.

Figure 3: Comparison of the contour plots of two randomly selected dimensions of the 10,000 k = 5000 dimensional random samples simulated with Algorithm 1 (top row) and Algorithm 2 (bottom row). Each of the five columns corresponds to a random trial.

A practical application of Algorithm 2
In what follows, we extend Algorithm 2 to facilitate simulation from a MVN distribution truncated on the probability simplex S_V = {φ : φ_v ≥ 0 for all v, ∑_{v=1}^V φ_v = 1}. This problem frequently arises when unknown parameters can be interpreted as fractions or probabilities, for instance, in topic models (Blei et al. 2003), admixture models (Pritchard et al. 2000; Dobigeon et al. 2009b; Bazot et al. 2013), and discrete directed graphical models (Heckerman 1998). With Algorithm 2, one may remove the equality constraint to greatly simplify the problem.
More specifically, we focus on a big-data setting in which the globally shared simplex-constrained model parameters could be linked to some latent counts via the multinomial likelihood. When there are tens of thousands or millions of observations in the dataset, scalable Bayesian inference for the simplex-constrained globally shared model parameters is highly desired, for example, for inferring the topics' distributions over words in latent Dirichlet allocation (Blei et al. 2003; Hoffman et al. 2010) and Poisson factor analysis (Zhou et al. 2012, 2016). Let us denote the κth model parameter vector constrained on a V-dimensional simplex by φ_κ ∈ S_V, which could be linked to the latent counts n_{vjκ} ∈ Z of the jth document under a multinomial likelihood as (n_{1jκ}, . . . , n_{Vjκ}) ∼ Mult(n_{·jκ}, φ_κ), where Z = {0, 1, 2, . . .}, v ∈ {1, . . . , V}, κ ∈ {1, . . . , K}, and j ∈ {1, . . . , N}. In topic modeling, one may consider K as the total number of latent topics and n_{vjκ} as the number of words at the vth vocabulary term in the jth document that are associated with the κth latent topic. Note that the dimension V in real applications is often large, such as tens of thousands in topic modeling. Given the observed counts n_{vj} for the whole dataset, in a batch-learning setting, one typically iteratively updates the latent counts n_{vjκ} conditioning on φ_κ, and updates φ_κ conditioning on n_{vjκ}.
However, this batch-learning inference procedure would become inefficient and even impractical when the dataset size N grows to a level that makes it too time consuming to finish even a single iteration of updating all local variables n vjκ . To address this issue, we consider constructing a mini-batch based Bayesian inference procedure that could make substantial progress in posterior simulation while the batch-learning one may still be waiting to finish a single iteration.
Without loss of generality, in the following discussion, we drop the latent factor/topic index κ to simplify the notation, focusing on the update of a single simplex-constrained global parameter vector. More specifically, we let the latent local count vector n_j = (n_{1j}, . . . , n_{Vj})^T be linked to the simplex-constrained global parameter vector φ ∈ S_V via the multinomial likelihood as n_j ∼ Mult(n_{·j}, φ), and impose a Dirichlet prior on φ as φ ∼ Dir(η 1_V).
Instead of waiting for all n j to be updated before performing a single update of φ, we develop a mini-batch based Bayesian inference algorithm under a general framework for constructing stochastic gradient Markov chain Monte Carlo (SG-MCMC) (Ma et al. 2015), allowing φ to be updated every time a mini-batch of n j are processed. For the sake of completeness, we concisely describe the derivation for a SG-MCMC algorithm, as outlined below, for simplex-constrained globally shared model parameters. We refer the readers to Cong et al. (2017) for more details on the derivation and its application to scalable inference for topic modeling.
Using the reduced-mean parameterization of the simplex-constrained vector φ, namely ϕ = (φ_1, . . . , φ_{V−1})^T, where ϕ ∈ R_+^{V−1} is constrained with ϕ_· ≤ 1, we develop a SG-MCMC algorithm that updates ϕ for the tth mini-batch as in (16), where ε_t are annealed step sizes, ρ is the ratio of the dataset size N to the mini-batch size, n_{:·} = (n_{1·}, . . . , n_{V·})^T = ∑_{j∈I_t} n_j with first V − 1 elements n̄_{:·} = (n_{1·}, . . . , n_{(V−1)·})^T, [·] denotes the constraint that ϕ ∈ R_+^{V−1} and ϕ_· ≤ 1, and M := E[∑_{j=1}^N n_{·j}] is approximated along the updating using M ← (1 − ε_t) M + ε_t ρ E[n_{··}]. Alternatively, we have an equivalent update equation for φ, given in (17), where [·]_∠ represents the constraint that φ ∈ R_+^V and 1^T φ = 1. It is clear that (16) corresponds to simulation of a (V − 1)-dimensional truncated MVN distribution with V inequality constraints. Since the number of constraints is larger than the dimension, previously proposed simulation methods such as the one in Botev (2016) are often inappropriate. Note that, by omitting the non-negativity constraints, the update in (17) corresponds to simulation of a hyperplane-truncated MVN distribution with a diagonal covariance matrix, which can be efficiently sampled as described in the following example.
The sampling steps in Example 1 directly follow Algorithm 2 and Theorem 1, with the distribution parameters specified as Σ = a diag(φ), G = 1^T, and r = 1. Accordingly, we present the following fast sampling procedure for (16).
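With Σ = a diag(φ), G = 1^T, and r = 1, the projection of Theorem 1 reduces to adding a multiple of φ that restores the sum-to-one constraint; a sketch assuming numpy (function name ours):

```python
# Example 1 setting: Sigma = a * diag(phi), G = 1^T, r = 1.
# Theorem 1 gives x = y + Sigma G^T (G Sigma G^T)^{-1} (r - G y)
#               = y + phi * (1 - sum(y)) / sum(phi).
import numpy as np

def sample_simplex_hyperplane(mu, phi, a, rng=None):
    rng = np.random.default_rng(rng)
    # y ~ N(mu, a diag(phi)), drawn elementwise since Sigma is diagonal
    y = mu + np.sqrt(a * phi) * rng.standard_normal(len(mu))
    return y + phi * (1.0 - y.sum()) / phi.sum()   # enforce 1^T x = 1
```

Note that the non-negativity constraints are not enforced here, matching the relaxation of (17) discussed above; when φ itself lies on the simplex, φ.sum() = 1 and the correction is simply φ (1 − ∑_v y_v).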
Example 2: Simulation from (16) can be approximately but rapidly realized as follows.

To verify Example 2, we conduct an experiment using multinomial-distributed data vectors of V = 2000 dimensions, generated as follows: considering that the simplex-constrained vector φ is usually sparse in a high-dimensional application, we sample a V = 2000 dimensional vector f whose elements are uniformly distributed between 0 and 1, randomly select 40 dimensions and reset their values to 100, and set φ = f / ∑_{i=1}^V f_i; we then simulate N = 10,000 samples, each n_j of which is generated from the multinomial distribution Mult(n_{·j}, φ), where the number of trials is random and generated as n_{·j} ∼ Pois(50). We set ε_t = t^{-0.99} and use mini-batches, each consisting of 10 data samples, to stochastically update the global parameters via SG-MCMC.
For comparison, we choose the same SG-MCMC inference procedure but consider simulating (16), as performed every time a mini-batch of data samples is provided, either as in Example 2 or with the Gibbs sampler of Rodriguez-Yam et al. (2004). Simulating (16) with the Gibbs sampler of Rodriguez-Yam et al. (2004) is realized by updating all V dimensions, one dimension at a time, in each Gibbs sampling iteration. We set the total number of Gibbs sampling iterations for (16) in each mini-batch based update to 1, 5, or 10. Note that in practice, the n_j belonging to the current mini-batch are often latent and are updated conditioning on the data samples in the mini-batch and φ. For simplicity, all n_j here are simulated once and then fixed.
Using φ*_post = (∑_{j=1}^N n_j + η 1_V) / (∑_{j=1}^N n_{·j} + ηV), the posterior mean of φ in a batch-learning setting, as the reference, we show in Figure 5 how the residual errors of the estimated φ*, defined as ||φ* − φ*_post||_2, change both as a function of the number of processed mini-batches and as a function of computation time under various settings of the mini-batch based SG-MCMC algorithm. The curves shown in Figure 5 suggest that, for each mini-batch, simulating (16) with the Gibbs sampler of Rodriguez-Yam et al. (2004) requires more than one Gibbs sampling iteration to achieve satisfactory results. It is clear from Figure 5(a) that the Gibbs sampler with 5 or 10 iterations per mini-batch, even though each mini-batch has only 10 data samples, provides residual errors that quickly approach that of the batch posterior mean with a tiny gap, indicating the effectiveness of the SG-MCMC updating in (16). While simulating (16) with Gibbs sampling could in theory lead to unbiased samples if the number of Gibbs sampling iterations is large enough, it is much more efficient to simulate (16) with the procedure described in Example 2, which provides performance indistinguishable from that of the Gibbs sampler with as many as 5 or 10 iterations per mini-batch, at the cost of only a tiny fraction of a single Gibbs sampling iteration.

Simulation of MVNs with structured covariance matrices
To illustrate Corollary 4, we mimic the truncated MVN simulation in (16) and present the following simulation example with a structured covariance matrix.
Example 3: Simulation of the MVN distribution x_1 ∼ N[µ_1, a diag(φ_1) − a φ_1 φ_1^T] can be realized as follows.
Defining µ_k = 1 − 1^T µ_1, the above sampling steps can also be equivalently expressed as follows.
The distribution parameters are randomly generated, and the computation time averaged over 100 random trials is displayed.
Directly following Algorithm 3 and Corollary 4, the first sampling approach for the above example can be derived by specifying the distribution parameters as Σ_11 = a diag(φ_1), Σ_12 = φ_1, Σ_21 = φ_1^T, and Σ_22 = a^{-1}, while the second approach can be derived by specifying Σ_11 = a diag(φ_1), Σ_12 = a φ_1, Σ_21 = a φ_1^T, and Σ_22 = a. To illustrate the efficiency of the proposed algorithms in Example 3, we simulate from the MVN distribution x_1 ∼ N[µ_1, a diag(φ_1) − a φ_1 φ_1^T] using both a naive implementation via Cholesky decomposition of the covariance matrix and the fast simulation algorithm for a hyperplane-truncated MVN random variable described in Example 3. We set the dimension from k = 10^2 up to k = 10^4, and set µ = (1/k, . . . , 1/k)^T and a = 0.5. For each k and each simulation algorithm, we perform 100 independent random trials, in each of which φ is sampled from the Dirichlet distribution Dir(1, . . . , 1) and 10,000 independent random samples are simulated using that same φ.
As shown in Figure 6, for the proposed Algorithm 3, the average time of simulating 10,000 random samples increases linearly in the dimension k. By contrast, for the naive Cholesky decomposition based simulation algorithm, whose computational complexity is O(k^3) (Golub and Van Loan 2012), the average simulation time increases at a significantly faster rate as the dimension k increases.
For explicit verification, with the 10,000 simulated k = 10^4 dimensional random samples in a random trial, we randomly choose two dimensions and display their joint distribution using a contour plot. As shown in Figure 7, the first row gives the contour plots of five different random trials for the naive Cholesky implementation, and the second row gives the corresponding ones for the proposed Algorithm 3. As expected, the contour lines of the two plots in the same column closely match each other. To further examine when to apply Algorithm 3 instead of the naive Cholesky decomposition based implementation in a general setting, we present the computational complexity analyses in Tables 3 and 4 of the Appendix for the naive approach and Algorithm 3, respectively. In addition, we mimic the settings in Section 4.1 to conduct a set of experiments with randomly generated Σ_12, diagonal Σ_11, and diagonal Σ_22. We fix k_1 = 4000 and vary k_2 from 1 to 8000. The computation time for one sample, averaged over 50 random trials, is presented in Figure 8. It is clear from Tables 3 and 4 and Figure 8 that, as a general guideline, one may choose Algorithm 3 when k_2 is smaller than k_1 and Σ_11 admits some special structure that makes it easy to invert and computationally efficient to simulate from N(0, Σ_11).

Simulation of MVNs with structured precision matrices
To examine when to apply Algorithm 4 instead of the naive Cholesky decomposition based procedure, we first consider a series of random simulations in which the sample size n is fixed while the data dimension p varies. We then show that Algorithm 4 can be applied to high-dimensional regression, where p is often much larger than n. We fix n = 4000, vary p from 1 to 8000, and mimic the settings in Section 4.1 to randomly generate Φ, diagonal A, and diagonal Ω. As a function of the dimension p, the computation time for one sample, averaged over 50 random trials, is shown in Figure 9. It is evident that, consistent with the complexity analyses in Tables 5 and 6, Algorithm 4 has linear complexity with respect to p under these settings, which brings significant acceleration in a high-dimensional setting with p ≫ n. If the sample size n is large enough that n > p, then one may directly apply the naive Cholesky decomposition based implementation.
Algorithm 4 can be slightly modified to apply to high-dimensional regression, where the main objective is to efficiently sample from the conditional posterior of β ∈ R^p in the linear regression model

t = Φβ + ε, ε ∼ N(0, Ω^{-1}), β ∼ N(0, A^{-1}),

where Φ ∈ R^{n×p}, Ω ∈ R^{n×n}, and different constructions of A ∈ R^{p×p} lead to a wide variety of regression models (Caron and Doucet 2008; Carvalho et al. 2010; Polson et al. 2014). The conditional posterior of β is directly derived and shown in the following example, and its simulation algorithm is obtained by further generalizing Corollary 5.
Example 4: Simulation of the MVN distribution β ∼ N(Σ Φ^T Ω t, Σ), where Σ = (A + Φ^T Ω Φ)^{-1}, can be realized as follows.
• Sample y_1 ∼ N(0, A^{-1}) and y_2 ∼ N(0, Ω^{-1});
• Return β = y_1 + A^{-1} Φ^T (Ω^{-1} + Φ A^{-1} Φ^T)^{-1} (t − Φ y_1 − y_2).
Note that if Ω = I_n, then the simulation algorithm in Example 4 reduces to the one in Proposition 2.1 of Bhattacharya et al. (2016), which is shown there to be significantly more efficient than that of Rue (2001) for high-dimensional regression when p ≫ n.
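A sketch of the structured-precision sampler in Example 4, assuming numpy; we write t for the observation vector (the symbol is our assumption, as the source is garbled here) and the function name is ours:

```python
# Draw beta ~ N(Sigma Phi^T Omega t, Sigma), Sigma = (A + Phi^T Omega Phi)^{-1},
# inverting only A (assumed structured, e.g. diagonal) and an n x n matrix,
# never the p x p posterior precision itself.
import numpy as np

def sample_structured_precision(t, Phi, Omega, A, rng=None):
    rng = np.random.default_rng(rng)
    n, p = Phi.shape
    A_inv = np.linalg.inv(A)             # cheap when A is structured
    Omega_inv = np.linalg.inv(Omega)
    y1 = rng.multivariate_normal(np.zeros(p), A_inv)
    y2 = rng.multivariate_normal(np.zeros(n), Omega_inv)
    M = Omega_inv + Phi @ A_inv @ Phi.T  # n x n, small when n << p
    return y1 + A_inv @ Phi.T @ np.linalg.solve(M, t - Phi @ y1 - y2)
```

The correctness of the covariance rests on the Woodbury identity (A + Φ^T Ω Φ)^{-1} = A^{-1} − A^{-1} Φ^T (Ω^{-1} + Φ A^{-1} Φ^T)^{-1} Φ A^{-1}, so the dominant cost is the n × n solve rather than a p × p Cholesky factorization.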

Conclusions
A fast and exact simulation algorithm is developed for a multivariate normal (MVN) distribution whose sample space is constrained on the intersection of a set of hyperplanes, which is shown to be inherently related to the conditional distribution of an unconstrained MVN distribution. The proposed simulation algorithm is further generalized to efficiently simulate from a MVN distribution, whose covariance (precision) matrix can be decomposed as the difference (sum) of a positive-definite matrix and a low-rank symmetric matrix, using a higher-dimensional hyperplane-truncated MVN distribution whose covariance matrix is block diagonal.
Rodriguez-Yam, G., Davis, R. A., and Scharf, L. L. (2004). "Efficient Gibbs sampling of truncated multivariate normal with application to constrained linear regression." Technical report.

where z = (z_1^T, z_2^T)^T and z_2 ∈ R^{k_2} is a vector whose values can be chosen arbitrarily. In addition, since M is an arbitrary full-rank matrix, we can choose M to make GΣG^T (M z_2) = GHz. Moreover, if z ∼ N[H^{-1} µ, H^{-1} Σ (H^{-1})^T], then y = Hz ∼ N(µ, Σ). Therefore, without the need to compute any intermediate variables, one may use Algorithm 2 to generate x from the hyperplane-truncated MVN distribution.

Computational Complexity
We present the computational complexities of all proposed algorithms in the following tables, where we highlight with bold the lowest complexity in each row.

Brief derivation of SG-MCMC for a simplex-constrained vector
Based on a comprehensive framework for constructing SG-MCMC algorithms in Ma et al. (2015), we have the mini-batch update rule for a global variable z, where ε_t are annealed step sizes, D(z) is a positive semidefinite diffusion matrix, Q(z) is a skew-symmetric curl matrix, B̂_t is an estimate of the stochastic gradient noise variance satisfying the positive-definite constraint 2D(z_t) − ε_t B̂_t ⪰ 0, and Γ_i(z), the ith element of the compensation vector Γ(z), is defined as Γ_i(z) = ∑_j ∂/∂z_j [D_ij(z) + Q_ij(z)]. The mini-batch estimate of the energy function is defined as H̃(z) = − ln p(z) − ρ ∑_{x∈X̃} ln p(x | z), with X̃ the mini-batch and ρ the ratio of the dataset size to the mini-batch size.
With the multinomial likelihood n_j ∼ Mult(n_{·j}, φ) and the reduced-mean parameterization ϕ ∈ R_+^{V−1}, where j ∈ {1, . . . , N} with N the dataset size, it is straightforward to derive the Fisher information matrix (FIM), where M := E[∑_{j=1}^N n_{·j}] is approximated along the updating as M ← (1 − ε_t) M + ε_t ρ E[n_{··}]. Further, with the Dirichlet prior φ ∼ Dir(η 1_V), we have the conditional