A sequential reduction method for inference in generalized linear mixed models

The likelihood for the parameters of a generalized linear mixed model involves an integral which may be of very high dimension. Because of this intractability, many approximations to the likelihood have been proposed, but all can fail when the model is sparse, in that there is only a small amount of information available on each random effect. The sequential reduction method described in this paper exploits the dependence structure of the posterior distribution of the random effects to reduce substantially the cost of finding an accurate approximation to the likelihood in models with sparse structure.


Introduction
Generalized linear mixed models are a natural and widely used class of models, but one in which the likelihood often involves an integral of very high dimension. Because of this intractability, many alternative methods have been developed for inference in these models.
One class of approaches involves replacing the likelihood with some approximation, for example using Laplace's method or importance sampling. However, these approximations can fail in cases where the structure of the model is sparse, in that only a small amount of information is available on each random effect, especially when the data are binary.
If there are n random effects in total, the likelihood may always be written as an n-dimensional integral over these random effects. If there are a large number of random effects, then it will be computationally infeasible to obtain an accurate approximation to this n-dimensional integral by direct numerical integration. However, it is not always necessary to compute this n-dimensional integral to find the likelihood. In a two-level random intercept model, independence between clusters may be exploited to write the likelihood as a product of n one-dimensional integrals, so it is relatively easy to obtain a good approximation to the likelihood, even for large n. In more complicated situations it is often not immediately obvious whether any such simplification exists.
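As an illustration of this cluster-by-cluster simplification, the following Python sketch (hypothetical data and parameter values; the paper's own software is in R) approximates the log-likelihood of a two-level random-intercept logistic model as a sum of one-dimensional Gauss-Hermite integrals, one per cluster.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, m_per = 50, 5          # n clusters, m_per observations each
beta, sigma = 0.5, 1.0             # invented parameter values

# simulate a two-level random-intercept logistic model
x = rng.normal(size=(n_clusters, m_per))
u = rng.normal(size=n_clusters)
p = 1.0 / (1.0 + np.exp(-(beta * x + sigma * u[:, None])))
y = rng.binomial(1, p)

def log_lik(beta, sigma, n_quad=30):
    """Log-likelihood as a sum of n one-dimensional Gauss-Hermite integrals."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    u_nodes = np.sqrt(2.0) * nodes             # change of variables for N(0, 1)
    w = weights / np.sqrt(np.pi)
    total = 0.0
    for i in range(n_clusters):
        eta = beta * x[i][:, None] + sigma * u_nodes[None, :]
        pr = 1.0 / (1.0 + np.exp(-eta))
        lik_given_u = np.prod(np.where(y[i][:, None] == 1, pr, 1.0 - pr), axis=0)
        total += np.log(np.sum(w * lik_given_u))
    return total

ll = log_lik(beta, sigma)
```

Each cluster contributes one one-dimensional integral, so the cost is linear in the number of clusters; no 50-dimensional integral is ever formed.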
The 'sequential reduction' method developed in this paper exploits the structure of the integrand to simplify computation of the likelihood, and as a result allows a fast and accurate approximation to the likelihood to be found in many cases where existing approximation methods fail. Examples are given to demonstrate the new method, including pairwise competition models and a model with nested structure.
2 The generalized linear mixed model

2.1 The model

A generalized linear model (Nelder and Wedderburn, 1972) allows the distribution of a response Y = (Y_1, . . . , Y_m) to depend on observed covariates through a linear predictor η, where η = Xβ for some known design matrix X. Conditional on the linear predictor, the components of Y are independent. The distribution of Y is assumed to have exponential family form, with mean µ = E(Y|η) = g^{-1}(η), for some known link function g(.).
An assumption implicit in the generalized linear model is that the distribution of the response is entirely determined by the values of the observed covariates. In practice, this assumption is rarely believed: there may be other information, not encoded in the observed covariates, which affects the response. A generalized linear mixed model allows for this extra heterogeneity by modeling the linear predictor as η = Xβ + Z(ψ)u, where u = (u_1, . . . , u_n) and the u_i are independent samples from some known distribution. This paper concentrates on the case u_i ∼ N(0, 1), which allows Z(ψ)u to have any multivariate normal distribution with mean zero.
The non-zero elements of each column of Z(ψ) identify the observations which involve the corresponding random effect. We say that the generalized linear mixed model has 'sparse structure' if most of these columns have few non-zero elements, so that most random effects are involved in only a few observations. These sparse models are particularly problematic for inference, especially when the data are binary, because the amount of information available on each random effect is small.

Example: pairwise competition models
Consider a tournament among n players, consisting of contests between pairs of players. For each contest, we observe a binary outcome: either i beats j or j beats i. We suppose that each player i has some ability λ_i, and that conditional on all the abilities, the outcomes of the contests are independent, with distribution depending on the difference in abilities of the players i and j, so that Pr(i beats j|λ) = g^{-1}(λ_i − λ_j) for some link function g(.). If g(x) = logit(x), then this describes a Bradley-Terry model (Bradley and Terry, 1952). If g(x) = Φ^{-1}(x) (the probit link), then it describes a Thurstone-Mosteller model (Thurstone, 1927; Mosteller, 1951).
If covariate information x_i is available for each player, then interest may lie in the effect of the observed covariates on ability, rather than in the individual abilities λ_i themselves. We allow the ability of player i to depend on the covariates x_i through λ_i = β^T x_i + σ u_i, where the u_i are independent N(0, 1) samples. This gives a generalized linear mixed model, depending on a linear predictor η with components η_r = λ_{p_1(r)} − λ_{p_2(r)}, where p_1(r) and p_2(r) are the first and second players involved in match r. The model will have sparse structure if each player competes in only a small number of matches, which is a common scenario in practice.
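A minimal simulation sketch of such a model, assuming for illustration a complete round-robin schedule and a logit link (the Bradley-Terry case); all names and values here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_players = 8
beta, sigma = 0.5, 1.5             # invented parameter values

x = rng.normal(size=n_players)     # one covariate per player
u = rng.normal(size=n_players)
ability = beta * x + sigma * u     # lambda_i = beta * x_i + sigma * u_i

# each pair plays once: eta_r = lambda_{p1(r)} - lambda_{p2(r)}
pairs = [(i, j) for i in range(n_players) for j in range(i + 1, n_players)]
eta = np.array([ability[i] - ability[j] for i, j in pairs])
p_win = 1.0 / (1.0 + np.exp(-eta))  # logit link: Bradley-Terry
wins = rng.binomial(1, p_win)       # 1 if the first-listed player wins
```

With only a handful of matches per player, the resulting model has the sparse structure described above.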

The likelihood
Let f(.|η_i) be the density of Y_i, conditional on the value of η_i, and write θ = (β, ψ) for the full set of model parameters. Conditional on η, the components of Y are independent, so that

L(θ) = ∫_{R^n} ∏_{i=1}^m f(y_i | X_i β + Z_i(ψ)u) ∏_{j=1}^n φ(u_j) du,    (1)

where X_i is the ith row of X, Z_i(ψ) is the ith row of Z(ψ), and φ(.) is the N(0, 1) density. Unless n is very small, it will not be possible to approximate the likelihood well by direct computation of this n-dimensional integral.

Existing approximations to the likelihood

Pinheiro and Bates (1995) suggest using a Laplace approximation to the integral (1). Write

g(u|y, θ) = ∏_{i=1}^m f(y_i | X_i β + Z_i(ψ)u) ∏_{j=1}^n φ(u_j)

for the integrand of the likelihood. This may be thought of as a non-normalized version of the posterior density for u, given y and θ. For each fixed θ, the Laplace approximation relies on a normal approximation to this posterior density. To find this normal approximation, let µ_θ maximize log g(u|y, θ) over u, and write Σ_θ = −H_θ^{-1}, where H_θ is the Hessian resulting from this optimization. The normal approximation to g(.|y, θ) will be proportional to a N_n(µ_θ, Σ_θ) density. Writing g_na(.|y, θ) for this normal approximation,

g_na(u|y, θ) = g(µ_θ|y, θ) φ_n(u; µ_θ, Σ_θ) / φ_n(µ_θ; µ_θ, Σ_θ),

where we write φ_n(.; µ, Σ) for the N_n(µ, Σ) density. When we integrate over u, only the normalizing constant remains, so that

L_Laplace(θ) = g(µ_θ|y, θ) / φ_n(µ_θ; µ_θ, Σ_θ) = (2π)^{n/2} (det Σ_θ)^{1/2} g(µ_θ|y, θ).
In the case of a linear mixed model, the approximating normal density is exact, and there is no error in the Laplace approximation to the likelihood. In other cases, and particularly when the response is discrete and may take only a few values, the error in the Laplace approximation may be large. If n is fixed and m → ∞, the relative error in the Laplace approximation may be shown to tend to zero. However, in the type of model we consider here, n is not fixed, but grows with m, and the validity of the Laplace approximation depends on the rate of this growth. Shun and McCullagh (1995) study this problem, and conclude that the Laplace approximation should be reliable provided that n = o(m^{1/3}). However, the Laplace approximation to the difference in the log-likelihood at two nearby points tends to be much more accurate than the approximation to the log-likelihood itself. The phenomenon that ratios of Laplace approximations to similar functions tend to be more accurate than the individual approximations has been noted before, for example by Tierney and Kadane (1986) in the context of computing posterior moments. Nonetheless, in models with very sparse structure (where we might have n = O(m)), even the shape of the Laplace approximation to the log-likelihood surface may be inaccurate, so another method is required.
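The construction above can be checked on a one-dimensional toy version of the problem: a single binary observation with η = σu and u ∼ N(0, 1). The sketch below (illustrative only) finds the mode by Newton's method and compares L_Laplace = (2π)^{1/2} Σ^{1/2} g(µ) with an accurate quadrature value.

```python
import numpy as np

# one binary observation y = 1 with eta = sigma * u, u ~ N(0, 1)
sigma, y = 1.5, 1

def log_g(u):
    """Log integrand: log f(y|eta) + log phi(u), with a logit link."""
    eta = sigma * u
    return y * eta - np.log1p(np.exp(eta)) - 0.5 * u**2 - 0.5 * np.log(2 * np.pi)

# Newton's method for the mode of log g
u = 0.0
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-sigma * u))
    grad = sigma * (y - p) - u
    hess = -sigma**2 * p * (1 - p) - 1.0
    u -= grad / hess
mode, Sigma = u, -1.0 / hess

L_laplace = (2 * np.pi) ** 0.5 * Sigma ** 0.5 * np.exp(log_g(mode))

# accurate quadrature value for comparison
grid = np.linspace(-10, 10, 20001)
L_quad = np.exp(log_g(grid)).sum() * (grid[1] - grid[0])
rel_err = abs(L_laplace - L_quad) / L_quad
```

Here the Laplace value is within a few percent of the truth; in sparse binary models the per-observation errors do not average away, which is why a more accurate approximation is sought.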
In cases where the Laplace approximation fails, Pinheiro and Bates (1995) suggest constructing an importance sampling approximation to the likelihood, based on samples u^{(1)}, . . . , u^{(N)} from the normal distribution N_n(µ_θ, Σ_θ). Writing

w(u; θ) = g(u|y, θ) / φ_n(u; µ_θ, Σ_θ)

for the importance weights, the likelihood may be approximated by

L_IS(θ) = (1/N) ∑_{i=1}^N w(u^{(i)}; θ).

Unfortunately, there is no guarantee that the variance of the importance weights w(u^{(i)}; θ) will be finite. In such a situation, the importance sampling approximation will still converge to the true likelihood as N → ∞, but the convergence may be slow and erratic, and estimates of the variance of the approximation may be unreliable.
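A sketch of this importance sampling scheme for the same kind of one-dimensional toy integral, with a normal proposal playing the role of N_n(µ_θ, Σ_θ) (the proposal moments below are rounded illustrative values, not computed within the snippet):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, y = 1.5, 1

def g(u):
    """Non-normalized posterior: f(y=1 | sigma*u) * phi(u), logit link."""
    pr = 1.0 / (1.0 + np.exp(-sigma * u))
    return pr * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

# normal proposal in the role of N(mu_theta, Sigma_theta);
# mu and Sig are rounded illustrative values close to the Laplace fit
mu, Sig = 0.49, 0.67
N = 100_000
u = rng.normal(mu, np.sqrt(Sig), size=N)
phi_prop = np.exp(-0.5 * (u - mu) ** 2 / Sig) / np.sqrt(2 * np.pi * Sig)
w = g(u) / phi_prop                 # importance weights
L_is = w.mean()                     # importance sampling estimate of L
se = w.std(ddof=1) / np.sqrt(N)     # naive standard error of the estimate
```

Because the proposal variance here is smaller than the prior variance, the weights are unbounded in the tails, illustrating why the weight variance need not be finite.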

Bayesian inference
From a Bayesian perspective, Markov chain Monte Carlo methods could be used to sample from the posterior distribution. However, such methods are computationally intensive, and it can be difficult to detect whether the Markov chain has converged to the correct distribution. Rue et al. (2009) suggest the integrated nested Laplace approximation (INLA) to approximate the marginal posterior distribution of each parameter. INLA is computationally efficient, but Fong et al. (2010) note that the approximation may perform poorly in models for binary data. In situations where the Laplace approximation to the likelihood fails, INLA may also be unreliable.
We do not consider these methods further, and instead focus on those methods which provide a direct approximation to the (marginal) likelihood (1).

3 The sequential reduction method

3.1 Conditional independence structure
Before observing the data y, the random effects u are independent. The information provided by y about the value of combinations of those random effects induces dependence between them. If there is no observation involving both u_i and u_j, then u_i and u_j will be conditionally independent in the posterior distribution, given the values of all the other random effects.
It is possible to represent this conditional independence structure graphically. Consider a graph G constructed to have:

1. A vertex for each random effect.
2. An edge between two vertices if there is at least one observation involving both of the corresponding random effects.
By construction of G, there is an edge between i and j in G only if y contains an observation involving both u_i and u_j. So if there is no edge between i and j in G, then u_i and u_j are conditionally independent in the posterior distribution, given the values of all the other random effects; that is, the posterior distribution of the random effects has the pairwise Markov property with respect to G. We call G the posterior dependence graph for u given y.
In a pairwise competition model, the posterior dependence graph simply consists of a vertex for each player, with an edge between two vertices if those players compete in at least one contest. For models in which each observation depends on more than two random effects, an observation is represented not by a single edge but by edges between every pair of the random effects involved.
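Constructing the posterior dependence graph is mechanical given Z(ψ): two random effects are joined whenever some row of Z has non-zero entries in both of the corresponding columns. A small sketch, with an invented Z for a four-player tournament:

```python
import numpy as np

# Z: m observations by n random effects; a non-zero entry in row r,
# column j means observation r involves random effect u_j
Z = np.array([
    [1, -1,  0,  0],    # match between players 0 and 1
    [1,  0, -1,  0],    # match between players 0 and 2
    [0,  0,  1, -1],    # match between players 2 and 3
])

edges = set()
for row in Z:
    involved = np.flatnonzero(row)
    for a in range(len(involved)):
        for b in range(a + 1, len(involved)):
            edges.add((int(involved[a]), int(involved[b])))
# players 1 and 3 never meet, so u_1 and u_3 are conditionally
# independent given the other random effects
```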
The problem of computing the likelihood has now been transformed to that of finding a normalizing constant of a density associated with an undirected graphical model. In order to see how the conditional dependence structure can be used to enable a simplification of the likelihood, we first need a few definitions. A complete graph is one in which there is an edge from each vertex to every other vertex. A clique of a graph G is a complete subgraph of G, and a clique is said to be maximal if it is not itself contained within a larger clique. For any graph G, the set of all maximal cliques of G is unique, and we write M (G) for this set.
The Hammersley-Clifford theorem (Besag, 1974) implies that g(.|y, θ) factorizes over the maximal cliques of G, so that we may write

g(u|y, θ) = ∏_{C ∈ M(G)} g_C(u_C)

for some functions g_C(.). A condition needed to obtain this result from the Hammersley-Clifford theorem is that g(u|y, θ) > 0 for all u, which holds here because φ(u_i) > 0 for all u_i. In fact, we may show directly that such a factorization exists: one particular factorization is constructed in Section 3.4, and would remain valid even under a random effects density f_u(.) with f_u(u_i) = 0 for some u_i.

Jordan (2004) reviews methods for finding the marginals of a density which factorizes over the maximal cliques of a graph. While these methods are well known, their use is typically limited to certain special classes of distribution, such as discrete or Gaussian distributions. We will use the same ideas, combined with a method for approximate storage of functions, to approximate the marginals of the distribution with density proportional to g(.|y, θ), and so approximate the likelihood L(θ) = ∫_{R^n} g(u|y, θ) du. We take an iterative approach, first integrating out u_1 to find the non-normalized marginal posterior density of {u_2, . . . , u_n}. We start with a factorization of g(.|y, θ) over the maximal cliques of the posterior dependence graph of {u_1, . . . , u_n}, and the idea is to write the marginal posterior density of {u_2, . . . , u_n} as a product over the maximal cliques of a new marginal posterior dependence graph. Repeating this process n times gives the likelihood. We write G_i for the posterior dependence graph of {u_i, . . . , u_n}, so we start with posterior dependence graph G_1 = G.

Exploiting the clique factorization
Factorizing g(.|y, θ) over the maximal cliques of G_1 gives

g(u|y, θ) = ∏_{C ∈ M_1} g^1_C(u_C).

To integrate over u_1, it is only necessary to integrate over the maximal cliques containing vertex 1, leaving the functions on other cliques unchanged. Let N_1 be the set of neighbors of vertex 1 in G_1 (including vertex 1 itself). Then

∫ g(u|y, θ) du_1 = [ ∏_{C ∈ M_1 : C ⊄ N_1} g^1_C(u_C) ] ∫ ∏_{C ∈ M_1 : C ⊆ N_1} g^1_C(u_C) du_1.

Thus g^1_{N_1}(.) is obtained by multiplying together all the functions on cliques which are subsets of N_1,

g^1_{N_1}(u_{N_1}) = ∏_{C ∈ M_1 : C ⊆ N_1} g^1_C(u_C),

and this is then integrated over u_1 to give

g^2_{N_1 \ 1}(u_{N_1 \ 1}) = ∫ g^1_{N_1}(u_{N_1}) du_1.

The functions on all cliques C which are not subsets of N_1 remain unchanged, with g^2_C(u_C) = g^1_C(u_C). This defines a new factorization of g(u_2, . . . , u_n|y, θ) over the maximal cliques M_2 of the posterior dependence graph for {u_2, . . . , u_n}, where M_2 contains N_1 \ 1 and all the remaining cliques in M_1 which are not subsets of N_1. The same process may then be followed to remove each u_i in turn.

The sequential reduction method
We now give the general form of a sequential reduction method for approximating the likelihood. We highlight the places where choices must be made to use this method in practice. The following sections then discuss each of these choices in detail.
1. The u_i may be integrated out in any order. Section 3.6 discusses how to choose a good order, with the aim of minimizing the cost of approximating the likelihood. Reorder the random effects so that we integrate out u_1, . . . , u_n in that order.
2. Factorize g(u|y, θ) over the maximal cliques M_1 of the posterior dependence graph, as g(u|y, θ) = ∏_{C ∈ M_1} g^1_C(u_C). This factorization is not unique, so we must choose one particular factorization {g^1_C(.) : C ∈ M_1}. Section 3.4 gives the factorization we use in practice.
3. Once u_1, . . . , u_{i−1} have been integrated out (using some approximate method), we have the factorization g̃(u_i, . . . , u_n|y, θ) = ∏_{C ∈ M_i} g̃^i_C(u_C) of the (approximated) non-normalized posterior for u_i, . . . , u_n. Write

g̃^i_{N_i}(u_{N_i}) = ∏_{C ∈ M_i : C ⊆ N_i} g̃^i_C(u_C),

where N_i is the set of neighbors of vertex i in G_i (including i itself). We then integrate over u_i (using a quadrature rule), and store an approximate representation g̃_{N_i \ i}(.) of the resulting function g_{N_i \ i}(.). In Section 3.5 we discuss the construction of this approximate representation.
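On a fixed quadrature grid, the elimination steps above can be written out directly. The following toy sketch computes the normalizing constant of a three-variable chain (cliques {1,2} and {2,3}) by eliminating one variable at a time, and checks the result against brute-force integration over the full grid; the pairwise factor is an invented logit term:

```python
import numpy as np

grid = np.linspace(-6, 6, 121)      # quadrature grid for each u_i
h = grid[1] - grid[0]
phi = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)

def f(a, b):
    """Invented pairwise clique factor, e.g. a logit win probability."""
    return 1.0 / (1.0 + np.exp(-(a - b)))

F = f(grid[:, None], grid[None, :])  # factor tabulated on the grid

# eliminate u1: m1(u2) = int f(u1, u2) phi(u1) du1
m1 = h * (F * phi[:, None]).sum(axis=0)
# eliminate u2: m2(u3) = int f(u2, u3) phi(u2) m1(u2) du2
m2 = h * (F * (phi * m1)[:, None]).sum(axis=0)
# eliminate u3 to obtain the normalizing constant
L_seq = h * (m2 * phi).sum()

# brute-force integration over the full three-dimensional grid
G = (F[:, :, None] * F[None, :, :] *
     phi[:, None, None] * phi[None, :, None] * phi[None, None, :])
L_direct = h**3 * G.sum()
```

The two computations agree, but the sequential version only ever handles low-dimensional tables; the paper replaces the fixed grid with sparse grid storage of each intermediate function.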

A specific clique factorization
The general method described in Section 3.3 is valid for an arbitrary factorization of g(u|y, θ) over the maximal cliques M 1 of the posterior dependence graph. To use the method in practice, we must first define the factorization used.
Given an ordering of the vertices, order the cliques in M_1 lexicographically according to the set of vertices contained within them. The observation vector y is partitioned over the cliques in M_1 by including in y_C all the observations which only involve items in the clique C and which have not already been included in y_B for some earlier clique B in the ordering. Write a(C) for the set of vertices appearing for the first time in clique C. Let

g^1_C(u_C) = ∏_{r ∈ y_C} f(y_r | X_r β + Z_r(ψ)u) ∏_{j ∈ a(C)} φ(u_j),

where the first product is over the observations included in y_C. Then g(u|y, θ) = ∏_{C ∈ M_1} g^1_C(u_C), so g^1_C(.) does define a factorization of g(.|y, θ).

A modified function for storage
A key choice in the sequential reduction algorithm is the method used to 'store' the function g_{N_i \ i}(.). The storage consists of a set of points S_i at which to evaluate g_{N_i \ i}(.), and a method of interpolation between those points, to be used later in the algorithm whenever we need to evaluate g_{N_i \ i}(u_{N_i \ i}) for some u_{N_i \ i} ∉ S_i. We would like to minimize the absolute error in the interpolation at those points u_{N_i \ i} at which we will later interpolate. The quality of the interpolation may be far more important at some points u_{N_i \ i} than at others. We therefore transform to a new function

r_{N_i \ i}(.) = g_{N_i \ i}(.) / h_{N_i \ i}(.),

chosen so that the size of the absolute interpolation error for r_{N_i \ i}(.) is of roughly equal concern across the whole space. Given an interpolation method for r_{N_i \ i}(.), we obtain interpolated values g̃_{N_i \ i}(.) = r̃_{N_i \ i}(.) h_{N_i \ i}(.), so we must ensure that h_{N_i \ i}(.) is easy to compute.
Recall that we may think of the original integrand g(.|y, θ) as the non-normalized posterior density for u|y, θ. The region where we will interpolate at a large number of points corresponds to the region where the marginal posterior density of u_{N_i \ i}|y, θ is large. Ideally, we would choose h_{N_i \ i}(.) proportional to the density of u_{N_i \ i}|y, θ, but this density is difficult to compute.
To solve this problem, we make use of the normal approximation to g(.|y, θ) used to construct the Laplace approximation to the likelihood, which approximates the posterior distribution of u|y, θ as N_n(µ, Σ). The marginal posterior distribution of u_{N_i \ i}|y, θ may therefore be approximated as N(µ_{N_i \ i}, Σ_{N_i \ i}). That is, we choose log h_{N_i \ i}(.) to be a quadratic function, with coefficients chosen to match this approximate marginal normal density.

Storing a function with a normal approximation
Suppose that f(.) is a non-negative function on R^d for which we want to store an approximate representation, and that we may approximate f(.) by f_na(x) ∝ φ_d(x; µ, Σ), for some µ and Σ. In our case, the function f(.) which we store is r_{N_i \ i}(.), of dimension d = |N_i \ i| and with normal approximation N_d(µ_{N_i \ i}, Σ_{N_i \ i}). We transform to a new basis: let z = A^{-1}(x − µ), where A is chosen so that AA^T = Σ. More specifically, we choose A = PD, where P is a matrix whose columns are the normalized eigenvectors of Σ, and D is a diagonal matrix whose diagonal entries are the square roots of the corresponding eigenvalues of Σ. Write f_z(z) = f(Az + µ), and let c(z) = log f_z(z) − log φ_d(z; 0, I), so that c(.) will be constant if the normal approximation is precise. We store c(.) by evaluating it at a fixed set of points z, and specifying a method of interpolation between them; the choice of these points and the interpolation method is discussed in the next section. Given the interpolation method for c(.), we may define

f_interp(x) = exp{c_interp(A^{-1}(x − µ))} φ_d(A^{-1}(x − µ); 0, I),

to give an interpolation method for f(.).
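The change of basis and the standardized modifier c(.) can be sketched as follows (toy dimension and an invented covariance); when f(.) is exactly proportional to its normal approximation, c(.) is identically zero:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3                                # toy dimension
mu = rng.normal(size=d)
B = rng.normal(size=(d, d))
Sigma = B @ B.T + d * np.eye(d)      # an invented positive-definite covariance

# A = P D, with P the normalized eigenvectors of Sigma and D the
# square roots of the eigenvalues, so that A A^T = Sigma
evals, P = np.linalg.eigh(Sigma)
A = P @ np.diag(np.sqrt(evals))

def log_phi_d(z):
    """Log standard normal density on R^d."""
    return -0.5 * z @ z - 0.5 * d * np.log(2 * np.pi)

def log_f_normal(x):
    """A function exactly proportional to its normal approximation."""
    return log_phi_d(np.linalg.solve(A, x - mu))

def c_normal(z):
    """Standardized modifier c(z) = log f(Az + mu) - log phi_d(z)."""
    return log_f_normal(A @ z + mu) - log_phi_d(z)
```

Any departure of f(.) from normality shows up as variation in c(.), which is what the interpolation scheme then has to capture.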
If g(u|y, θ) ∝ φ n (u, µ, Σ), there will be no error in the Laplace approximation to the likelihood. In this situation, c(.) will be constant, and the sequential reduction approximation will also be exact. In situations where the normal approximation is imprecise, c(.) will no longer be constant, and we may improve on the baseline (Laplace) approximation to the likelihood by increasing the number of points used for storage.

Sparse grid interpolation
In order to store an approximate representation of the standardized modifier function c(.), we will compute values of c(.) at a fixed set of evaluation points, and specify a method of interpolation between these points. We now give a brief overview of interpolation methods based on sparse grids of evaluation points. Some of the notation we use is taken from Barthelmann et al. (2000), although there are some differences: notably, we assume c(.) to be a function on R^d, rather than on the d-dimensional hypercube [−1, 1]^d, and we use cubic splines, rather than (global) polynomials, for interpolation.
First we consider a method for interpolation of a one-dimensional function c : R → R. We evaluate c(.) at m_l points s_1, . . . , s_{m_l} and write

U^l(c)(x) = ∑_{j=1}^{m_l} c(s_j) a^l_j(x),

where the a^l_j are basis functions. The approximate interpolated value of c(.) at any point x is then given by U^l(c)(x).
Here l denotes the level of approximation, and we suppose that the set of evaluation points is nested, so that at level l we simply use the first m_l points of a fixed sequence of evaluation points S = {s_1, s_2, . . .}. We assume that m_1 = 1, so at the first level of approximation only one point is used, and m_l = 2^l − 1 for l > 1, so the number of points approximately doubles when the level of approximation is increased by one.
The full grid method of interpolation takes m_{l_j} points in dimension j, and computes

(U^{l_1} ⊗ · · · ⊗ U^{l_d})(c)(x) = ∑_{j_1=1}^{m_{l_1}} · · · ∑_{j_d=1}^{m_{l_d}} c(s_{j_1}, . . . , s_{j_d}) (a^{l_1}_{j_1} ⊗ · · · ⊗ a^{l_d}_{j_d})(x).

Thus, in the full grid method, we must evaluate c(.) at ∏_{j=1}^d m_{l_j} = O(∏_{j=1}^d 2^{l_j}) = O(2^{∑_j l_j}) points. This will not be possible if ∑_{j=1}^d l_j is too large. In order to construct an approximate representation of c(.) in reasonable time, we could limit the sum ∑_{j=1}^d l_j used in a full grid to be at most d + k, for some k ≥ 0. If k > 0, there are many possibilities for 'small full grids' indexed by the levels l = (l_1, . . . , l_d) which satisfy this constraint. A natural question is how to combine the information given by each of these small full grids to give a good representation overall.
For a univariate function c(.), let

∆^l = U^l − U^{l−1}

for l > 1, and ∆^1 = U^1. Then ∆^l gives the quantity we should add to the approximate storage of c(.) at level l − 1 to incorporate the new information given by the knots added at level l.
Returning to the multivariate case, the sparse grid interpolation of c(.) at level k is given by

A(d + k, d)(c) = ∑_{|l| ≤ d+k} (∆^{l_1} ⊗ · · · ⊗ ∆^{l_d})(c).

To store c(.) on a sparse grid at level k, we must evaluate c(.) at O(d^{k+1}) points, which allows approximate storage in much larger dimension d than is possible using a full grid method. Barthelmann et al. (2000) use global polynomial interpolation for a function defined on a hypercube, with Chebyshev knots. We prefer to use cubic splines for interpolation, since the positioning of the knots is then less critical. Since we have already standardized the function we wish to store, we use the same knots in each direction, and choose these standard knots at level l to be the m_l equally spaced quantiles of a N(0, τ_k^2) distribution. As k increases, we choose larger τ_k, so that the size of the region covered by the sparse grid increases with k. However, the rate at which τ_k increases should be sufficiently slow to ensure that the distance between neighboring knots decreases with k. Somewhat arbitrarily, we choose τ_k = 1 + k/2, which appears to work reasonably well in practice.
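The growth in the number of evaluation points can be checked by enumerating the nested grids directly. The sketch below counts the distinct points of a sparse grid with the nesting m_1 = 1, m_l = 2^l − 1 (integer indices stand in for the actual knot positions) and compares with the corresponding full grid:

```python
from itertools import product

def m(l):
    """Nested one-dimensional knot counts: m_1 = 1, m_l = 2^l - 1."""
    return 1 if l == 1 else 2**l - 1

def sparse_grid_indices(d, k):
    """Union of the 'small full grids' with l_1 + ... + l_d <= d + k,
    where level l uses the first m_l knots of a fixed nested sequence."""
    pts = set()
    for l in product(range(1, d + k + 1), repeat=d):
        if sum(l) <= d + k:
            for idx in product(*(range(m(li)) for li in l)):
                pts.add(idx)
    return pts

n_sparse = len(sparse_grid_indices(3, 3))   # d = 3, level k = 3
n_full = m(1 + 3) ** 3                      # full grid with level 1 + k throughout
```

For d = 3 and k = 3 this gives 111 sparse grid evaluations against 3375 for the full grid, and the gap widens rapidly with d.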

Bounded interpolation
To ensure that g̃_{N_i \ i}(.) remains integrable at each stage, we impose an upper bound M on the interpolated value of c(.). In practice, we choose M to be the largest value of c(z) observed at any of the evaluation points.

Computational complexity
Using sparse grid storage at level k, the cost of stage i of the sequential reduction algorithm is at most O(|N_i|^{2k}). The overall cost of approximating the likelihood will be large if max_i |N_i| is large. The random effects may be removed in any order, so it makes sense to use an ordering that allows approximation of the likelihood at minimal cost. This problem may be reduced to a problem in graph theory: to find an ordering of the vertices of a graph such that, when these vertices are removed in order, joining together all neighbors of the vertex to be removed at each stage, the largest clique obtained at any stage is as small as possible. This is known as the triangulation problem, and the smallest possible value, over all possible orderings, of the largest clique obtained at some stage is known as the treewidth of the graph.
Unfortunately, algorithms available to calculate the treewidth of a graph on n vertices can take at worst O(2 n ) operations, so to find the exact treewidth may be too costly for n at all large. However, there are special structures of graph which have known treewidth, and algorithms exist to find upper and lower bounds on the treewidth in reasonable time (see Koster, 2008, 2010). We use a constructive algorithm for finding an upper bound on the treewidth, which outputs an elimination ordering achieving that upper bound, to find a reasonably good (though not necessarily optimal) ordering.
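A simple constructive bound of this kind can be obtained with a greedy elimination heuristic. The sketch below uses the min-degree rule (one common heuristic; better algorithms exist in the treewidth literature) and reports the elimination ordering together with the largest neighborhood encountered, which bounds the treewidth:

```python
def greedy_elimination(adj):
    """Min-degree elimination ordering. Returns (order, width), where width,
    the largest eliminated neighborhood, is an upper bound on the treewidth."""
    adj = {v: set(nb) for v, nb in adj.items()}
    order, width = [], 0
    while adj:
        v = min(adj, key=lambda w: len(adj[w]))   # vertex of smallest degree
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for a in nbrs:
            adj[a].discard(v)
            adj[a] |= nbrs - {a}                  # join neighbors of v pairwise
        order.append(v)
    return order, width

# a tree has treewidth 1, and min-degree elimination achieves it
tree = {0: {1, 2}, 1: {0, 3, 4}, 2: {0}, 3: {1}, 4: {1}}
order, width = greedy_elimination(tree)
```

Running the bound on the posterior dependence graph yields both a cost estimate and the elimination ordering used by the algorithm.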

An R package for sequential reduction
The sequential reduction method is implemented in R (R Core Team, 2014) by the package glmmsr, which may be found at warwick.ac.uk/heogden/code. The code for sparse grid interpolation is based on the efficient storage schemes suggested by Murarasu et al. (2011). Code to reproduce the examples of Section 4 is also provided.

Examples
We give some examples to compare the performance of the proposed sequential reduction method with existing methods to approximate the likelihood. The first two examples here are of pairwise competition models (a simple tree tournament with simulated data, and a more complex, real-data example); the third is a mixed logit model with two nested layers of random effects.

Tree tournament
Consider observing a tree tournament, with structure as shown in Figure 1a. Suppose that there is a single observed covariate x_i for each player, where λ_i = β x_i + σ u_i and u_i ∼ N(0, 1). We consider one particular tournament with this tree structure, simulated from the model with β = 0.5 and σ = 1.5. We suppose that we observe two matches between each pair of competing players. The covariates x_i are independent draws from a standard normal distribution.

We fit the model using the Laplace approximation, and using the sequential reduction approximations for k = 1, 2, 3, 4 and 5. The posterior dependence graph of a tree tournament is a tree, which has treewidth 2. Using the sequential reduction method with sparse grid storage at level k, the cost of approximating the likelihood at each point is O(n 4^k). In practice, the computation time does not quadruple each time k is increased, since the computation is dominated by fixed operations whose cost does not depend on k. Computing the approximation to the likelihood at a single point took about 0.02 seconds for the Laplace approximation, 0.22 seconds for k = 1, 0.24 seconds for k = 2, 0.24 seconds for k = 3, 0.27 seconds for k = 4 and 0.30 seconds for k = 5.

Table 1 gives the estimates of β and σ resulting from each approximation to the likelihood. The estimates of β are similar for all the approximations, but the estimate of σ found by maximizing the Laplace approximation to the likelihood is smaller than the true maximum likelihood estimate. We also consider the quality of an importance sampling approximation to the log-likelihood, as described in Section 2.4. We are interested in the shape of the log-likelihood surface, rather than the pointwise quality of the approximation, so we consider approximations to the difference between the log-likelihood at two points: the maximum likelihood estimate (0.46, 1.30), and the point (0.60, 2.00).
We consider the quality of each approximation relative to the time taken to compute it. Figure 2 shows trace plots of the importance sampling and sequential reduction approximations to this difference in log-likelihoods, plotted against the time taken to find each approximation, on a log scale.

[Figure 2: Importance sampling and sequential reduction approximations to ℓ(0.46, 1.30) − ℓ(0.60, 2.00), plotted against the time taken to find the approximation, on a log scale. The sequential reduction approximation converges in less than a second, but the importance sampling approximation has still not converged after over 14 hours.]

In well under a second, the sequential reduction approximation converges to such an extent that differences in the approximations are not visible on this scale. By contrast, after more than 14 hours, the importance sampling approximation has still not converged.
4.2 An animal behavior "tournament": Augrabies flat lizards

Whiting et al. (2006) conducted an experiment to determine the factors affecting the fighting ability of male Augrabies flat lizards, Platysaurus broadleyi. They captured n = 77 lizards, recorded various measurements on each, then released them and recorded the outcomes of fights between pairs of animals. The tournament structure is shown in Figure 1b. The data are available in R as part of the BradleyTerry2 package (Turner and Firth, 2012). There are several covariates x_i available for each lizard. Turner and Firth (2012) suggest modeling the ability of each lizard as λ_i = β^T x_i + σ u_i, where u_i ∼ N(0, 1). The data are binary, and we assume a Thurstone-Mosteller model, so that Pr(i beats j|λ_i, λ_j) = Φ(λ_i − λ_j).
In order to find the sequential reduction approximation to the likelihood, we must first find an ordering in which to remove the players, chosen to minimize the cost of the algorithm. Algorithms giving upper and lower bounds on the treewidth show that the treewidth of the posterior dependence graph is either 4 or 5, and we use an elimination ordering corresponding to the upper bound.
To demonstrate the performance of the sequential reduction approximation, we consider the cut across the log-likelihood surface at β = 0, as σ varies. The various approximations to this curve are shown in Figure 3. It becomes harder to obtain a good approximation to the log-likelihood as σ increases. The case k = 0 corresponds to the Laplace approximation, and gives a poor-quality approximation for σ > 0.5. As k increases, the approximation improves. All values of k ≥ 3 give an excellent approximation to the log-likelihood, and the approximations for k = 4 and k = 5 are indistinguishable at this scale.

If we include all covariates suggested by Turner and Firth (2012) in the model, the maximum likelihood estimator is not finite. A penalized version of the likelihood could be used to obtain a finite estimate. In a generalized linear model, the bias-reduction penalty of Firth (1993) may be used for this purpose. Further work is required to obtain a good penalty for use with generalized linear mixed models.

A model with nested structure

We now consider a model with two nested levels of grouping: 100 second-level groups, each containing two first-level groups, which themselves each contain two items. The posterior dependence graph of this model is shown in Figure 1c, and has treewidth 2. The treewidth of the posterior dependence graph for a similarly defined L-level model is L − 1. We suppose that y_i ∼ Bernoulli(p_i), where p_i = logit^{-1}(η_i) and

η_i = α + β x_i + σ_1 u_{a(i)} + σ_2 v_{b(i)},

where a(i) and b(i) are the first-level and second-level groups containing item i, and the u_j and v_k are independent N(0, 1) random effects. We simulate from this model, with α = −0.5, β = 0.5, σ_1 = 1 and σ_2 = 0.5. The fitted values found using the sequential reduction method with various different values of k are shown in Table 2. The parameter estimates found from the Laplace approximation to the likelihood are some distance from the maximum likelihood estimates, especially for the variance parameter of the level-1 grouping.

Conclusions
Many common approaches to inference in generalized linear mixed models rely on approximations to the likelihood which may be of poor quality if there is little information available on each random effect. There are many situations in which it is unclear how good an approximation to the likelihood will be, and how much impact the error in the approximation will have on the statistical properties of the resulting estimator. It is therefore very useful to be able to obtain an accurate approximation to the likelihood at reasonable cost.
The sequential reduction method outlined in this paper allows a good approximation to the likelihood to be found in many models with sparse structure -precisely the situation where currently-used approximation methods perform worst. By using sparse grid interpolation methods to store modifications to the normal approximation used to construct the Laplace approximation, it is possible to get an accurate approximation to the likelihood for a wide range of models.