mixtools: An R Package for Analyzing Finite Mixture Models

The mixtools package for the R statistical software (R Development Core Team, 2008) provides a set of functions for analyzing a variety of finite mixture models. These functions include both traditional methods, such as EM algorithms for univariate and multivariate normal mixtures, and newer methods that reflect some recent research in finite mixture models. In the latter category, mixtools provides algorithms for estimating parameters in a wide range of different mixture-of-regression contexts, in multinomial mixtures such as those arising from discretizing continuous multivariate data, in nonparametric situations where the multivariate component densities are completely unspecified, and in semiparametric situations such as a univariate location mixture of symmetric but otherwise unspecified densities. Many of the algorithms of the mixtools package are EM algorithms or are based on EM-like ideas, so this article includes an overview of EM algorithms for finite mixture models.


Introduction to finite mixtures and mixtools
Populations of individuals may often be divided into subgroups. Yet even when we observe characteristics of these individuals that provide information about their subgroup memberships, we may not actually observe these memberships per se. The basic goal of the tools in the mixtools package (version 0.4.0, as of this writing) is to examine a sample of measurements to discern and describe subgroups of individuals, even when there is no observable variable that readily indexes into which subgroup an individual properly belongs. This task is sometimes referred to as "unsupervised clustering" in the literature, and in fact mixture models may be generally thought of as comprising the subset of clustering methods known as "model-based clustering".
Finite mixture models may also be used in situations beyond those for which clustering of individuals is of interest. For one thing, finite mixture models give descriptions of entire subgroups, rather than assignments of individuals to those subgroups (though the latter may be accomplished using mixture models). Indeed, even the subgroups may not necessarily be of interest; sometimes finite mixture models merely provide a means for adequately describing a particular distribution, such as the distribution of residuals in a linear regression model where outliers are present.
Whatever the goal of the modeler when employing mixture models, much of the theory of these models involves the assumption that the subgroups are distributed according to a particular parametric form, and quite often this form is univariate or multivariate normal. While mixtools does provide tools for traditional fitting of finite mixtures of univariate and multivariate normal distributions, it goes well beyond this well-studied realm. Arising from recent research whose goal is to relax or modify the assumption of multivariate normality, mixtools provides computational techniques for finite mixture model analysis in which components are regressions, multinomial vectors arising from discretization of multivariate data, or even distributions that are almost completely unspecified.
To make the mixture model framework more concrete, suppose the possibly vector-valued random variables X_1, . . . , X_n are a simple random sample from a finite mixture of m > 1 arbitrary distributions, which we will call components throughout this article. The density of each X_i may be written

g_θ(x_i) = Σ_{j=1}^m λ_j φ_j(x_i),   (1)

where θ = (λ, φ) = (λ_1, . . . , λ_m, φ_1, . . . , φ_m) denotes the parameter and the λ_j are positive and sum to unity. We assume that the φ_j are drawn from some family F of multivariate density functions absolutely continuous with respect to, say, Lebesgue measure. The representation (1) is not identifiable if no restrictions are placed on F, where by "identifiable" we mean that g_θ has a unique representation of the form (1) and we do not consider that "label-switching", i.e., reordering the m pairs (λ_1, φ_1), . . . , (λ_m, φ_m), produces a distinct representation.
In the next sections we will sometimes have to distinguish between parametric and more general nonparametric situations. This distinction is related to the structure of the family F of distributions to which the component densities φ_j in model (1) belong. We say that the mixture is parametric if F is a parametric family, F = {φ(·|ξ), ξ ∈ R^d}, indexed by a d-dimensional Euclidean parameter ξ. A parametric family often used is the univariate Gaussian family F = {φ(·|µ, σ²) = density of N(µ, σ²), (µ, σ²) ∈ R × R⁺}, in which case the model parameter reduces to θ = (λ, (µ_1, σ²_1), . . . , (µ_m, σ²_m)). For the multivariate case, a possible parametric model is the conditionally i.i.d. normal model, for which F = {φ(x) = ∏_{k=1}^r f(x_k), f(t) the density of N(µ, σ²)} (this model is included in mixtools; see Section 6.1). An example of a (multivariate) nonparametric situation is F = {φ(x) = ∏_{k=1}^r f(x_k), f(t) a univariate density on R}, in which case θ consists of a Euclidean part (λ) and a nonparametric part (f_1, . . . , f_m).
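As a quick illustration of model (1), consider a hedged sketch, not part of mixtools, in which the density of a two-component univariate normal mixture is evaluated directly (the function name dnormmix and all parameter values are our illustrative choices):

R> dnormmix <- function(x, lambda, mu, sigma) {
+    ## computes sum_j lambda_j * phi_j(x) with normal component densities
+    rowSums(sapply(seq_along(lambda),
+                   function(j) lambda[j] * dnorm(x, mu[j], sigma[j])))
+  }
R> dnormmix(c(50, 70, 90), lambda = c(.35, .65), mu = c(55, 80), sigma = c(6, 6))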
As a simple example of a dataset to which mixture models may be applied, consider the sample depicted in Figure 1. In the Old Faithful dataset, measurements give time in minutes between eruptions of the Old Faithful geyser in Yellowstone National Park, USA. These data are included as part of the datasets package in R; type help("faithful") in R for more details. For the Old Faithful eruption data, a two-component mixture model is clearly a reasonable model based on the bimodality evident in the histogram. This example is analyzed by Hunter et al. (2007), who compare a standard normal-mixture method for fitting it with a novel semiparametric approach. Both approaches are included in mixtools; see Sections 2.3 and 4.2 of this article.

In Section 2 of the current article we review the well-known class of EM algorithms for finite mixture models, a common thread that runs throughout much of the rest of the article. The remaining sections discuss various categories of functions found in the mixtools package, from cutpoint methods that relax distributional assumptions for multivariate data by discretizing the data (Section 3), to semi- and nonparametric methods that eliminate distributional assumptions almost entirely depending on what the identifiability of the model allows (Section 4), to methods that handle various mixtures of regressions (Section 5). Finally, Section 6 describes several miscellaneous features of the mixtools package.

EM algorithms for finite mixtures

Missing data setup
Much of the general methodology used in mixtools involves the representation of the mixture problem as a particular case of maximum likelihood estimation (MLE) when the observations can be viewed as incomplete data. This setup implies consideration of two sample spaces: the sample space of the (incomplete) observations, and a sample space of some "complete" observations, characterized by the fact that the estimation can be performed explicitly at this level. For instance, in parametric situations, the MLE based on the complete data may exist in closed form. Among the numerous reference papers and monographs on this subject are, e.g., the original EM algorithm paper by Dempster et al. (1977) and the finite mixture model book by McLachlan and Peel (2000) and references therein. We now give a brief description of this setup as it applies to finite mixture models in general.
The (observed) data consist of n i.i.d. observations x = (x_1, . . . , x_n) from a density g_θ given by (1). It is common to denote the density of the sample by g_θ, the n-fold product of g_θ, so that we write simply x ∼ g_θ. In the missing data setup, g_θ is called the incomplete-data density, and the associated log-likelihood is L_x(θ) = Σ_{i=1}^n log g_θ(x_i). The (parametric) ML estimation problem consists of finding θ̂_x = argmax_{θ∈Φ} L_x(θ), or at least finding a local maximum; there are certain well-known cases in which a finite mixture model likelihood is unbounded (McLachlan and Peel, 2000), but we ignore these technical details for now. Calculating θ̂_x even for a parametric finite mixture model is known to be a difficult problem, and considering x as incomplete data resulting from non-observed complete data helps.
The associated complete data is denoted by c = (c_1, . . . , c_n), with density h_θ(c) = ∏_{i=1}^n h_θ(c_i) (there exists a many-to-one mapping from c to x, representing the loss of information). In the model for complete data associated with model (1), each c_i = (x_i, z_i), where z_i = (z_{i1}, . . . , z_{im}) and Z_{ij} ∈ {0, 1} is a Bernoulli random variable indicating that individual i comes from component j. Since each individual comes from exactly one component, this implies Σ_{j=1}^m Z_{ij} = 1, and P(Z_{ij} = 1) = λ_j for j = 1, . . . , m. The complete-data density for one observation is thus

h_θ(c_i) = h_θ(x_i, z_i) = Σ_{j=1}^m I_{z_{ij} = 1} λ_j φ_j(x_i).

In the parametric situation, i.e., when F is a parametric family, the complete-data MLE θ̂_c based on maximizing log h_θ(c) is easy to find, provided that this is the case for the family F.
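To illustrate the complete-data structure, one can simulate the pairs c_i = (x_i, z_i) directly; the following is a hedged sketch with illustrative parameter values, not package code:

R> set.seed(1)
R> n <- 500; lambda <- c(.4, .6); mu <- c(0, 3)
R> z <- sample(2, n, replace = TRUE, prob = lambda)  # latent labels Z_i
R> x <- rnorm(n, mean = mu[z], sd = 1)               # observed X_i; z is then "lost"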

EM algorithms
An EM algorithm iteratively maximizes, instead of the observed log-likelihood L_x(θ), the operator

Q(θ | θ^(t)) = E[log h_θ(c) | x, θ^(t)],

where θ^(t) is the current value at iteration t, and the expectation is with respect to the distribution k_θ(c|x) of c given x, for the value θ^(t) of the parameter. The iteration θ^(t) → θ^(t+1) is defined in the above general setup by:

1. E-step: compute Q(θ | θ^(t));
2. M-step: set θ^(t+1) = argmax_{θ∈Φ} Q(θ | θ^(t)).

For finite mixture models, the E-step does not depend on the structure of F, since the missing data part is only related to the z's. The z's are discrete, and their distribution is given via Bayes' theorem. The M-step itself can be split in two parts: the maximization related to λ, which does not depend on F, and the maximization related to φ, which has to be handled specifically (say, parametrically, semi- or nonparametrically) for each model. Hence the EM algorithms for the models handled by the mixtools package share the following common features:

1. E-step: Calculate the "posterior" probabilities (conditional on the data and θ^(t)) of component inclusion,

p^(t)_{ij} := P_{θ^(t)}(Z_{ij} = 1 | x_i) = λ^(t)_j φ^(t)_j(x_i) / Σ_{j'=1}^m λ^(t)_{j'} φ^(t)_{j'}(x_i),   (2)

for all i = 1, . . . , n and j = 1, . . . , m. Numerically, it can be dangerous to implement equation (2) exactly as written due to the possibility of the indeterminate form 0/0 in cases where x_i is so far from any of the components that all φ^(t)_{j'}(x_i) values result in a numerical underflow to zero. Thus, many of the routines in mixtools actually use the equivalent expression

p^(t)_{ij} = [ 1 + Σ_{j'≠j} λ^(t)_{j'} φ^(t)_{j'}(x_i) / (λ^(t)_j φ^(t)_j(x_i)) ]^(−1)   (3)

or some variant thereof.

2. M-step for λ: Set

λ^(t+1)_j = (1/n) Σ_{i=1}^n p^(t)_{ij},  j = 1, . . . , m.   (4)
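As an illustration of how the E-step and the λ update might be coded directly, here is a hedged sketch, not the package's implementation (the function name estep and the data are our illustrative choices; the log-scale computation plays the same stabilizing role as equation (3)):

R> estep <- function(x, lambda, mu, sigma) {
+    ## log of lambda_j * phi_j(x_i) for a univariate normal mixture
+    ld <- sapply(seq_along(lambda),
+      function(j) log(lambda[j]) + dnorm(x, mu[j], sigma[j], log = TRUE))
+    p <- exp(ld - apply(ld, 1, max))  # subtract row maxima to avoid underflow
+    p / rowSums(p)                    # posterior probabilities p_ij of (2)
+  }
R> x <- c(rnorm(100, 55, 5), rnorm(100, 80, 5))   # illustrative data
R> p <- estep(x, lambda = c(.5, .5), mu = c(50, 85), sigma = c(5, 5))
R> lambda.new <- colMeans(p)                      # M-step update (4)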

An EM algorithm example
As an example, we consider the univariate normal mixture analysis of the Old Faithful waiting data depicted in Figure 1. This fully parametric situation corresponds to a mixture from the univariate Gaussian family described in Section 1, where the jth component density φ_j(x) in (1) is normal with mean µ_j and variance σ²_j. The M-step for the parameters (µ_j, σ²_j), j = 1, . . . , m, of this EM algorithm for such mixtures of univariate normals is straightforward, and can be found, e.g., in McLachlan and Peel (2000). The function normalmixEM implements it in mixtools. Code for the Old Faithful example, using most of the default values (e.g., stopping criterion, maximum number of iterations), is simply

R> data("faithful")
R> attach(faithful)
R> wait1 <- normalmixEM(waiting, lambda = .5, mu = c(55, 80), sigma = 5)
number of iterations= 9

The code above will fit a 2-component mixture (because mu is a vector of length two) in which the standard deviations are assumed equal (because sigma is a scalar instead of a vector). See help("normalmixEM") for details about specifying starting values for this EM algorithm.

Cutpoint methods
Traditionally, most literature on finite mixture models has assumed that the density functions φ_j(x) of equation (1) come from a known parametric family. However, some authors have recently considered the problem in which φ_j(x) is unspecified except for some conditions necessary to ensure the identifiability of the parameters in the model. One such set of conditions is as follows: Hettmansperger and Thomas (2000), Cruz-Medina et al. (2004), and Elmore et al. (2004) treat the case in which φ_j(x) equals the product f_j(x_1) · · · f_j(x_r) for some univariate density function f_j. Thus, conditional on knowing that X comes from the jth mixture component, the coordinates of X are independent and identically distributed. For this reason, this case is called the conditionally i.i.d. model.
The authors named above have developed an estimation method for the conditionally i.i.d. model. This method, the cutpoint approach, discretizes the continuous measurements by replacing each r-dimensional observation, say X_i = (x_{i1}, . . . , x_{ir}), by the p-dimensional multinomial vector (n_1, . . . , n_p), where p ≥ 2 is chosen by the experimenter along with a set of cutpoints −∞ = c_0 < c_1 < · · · < c_p = ∞, so that for a = 1, . . . , p,

n_a = #{k : c_{a−1} < x_{ik} ≤ c_a}.

Note that the multinomial distribution is guaranteed by the conditional i.i.d. assumption, and the multinomial probability of the ath category is equal to θ_a ≡ P(c_{a−1} < X_{ik} ≤ c_a).
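A hedged sketch of this discretization for a single observation (the helper name bin.counts is ours; the function makemultdata, used later in this section, performs the same operation for a whole dataset):

R> bin.counts <- function(xrow, cuts) {
+    ## n_a = #{k : c_{a-1} < x_ik <= c_a}, with c_0 = -Inf and c_p = Inf
+    table(cut(xrow, breaks = c(-Inf, cuts, Inf)))
+  }
R> bin.counts(c(-80, -20, 0, 2, 5, 10, 44, 60), cuts = 10.5 * (-6:6))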
The cutpoint approach is completely general in the sense that it can be applied to any number of components m and any number of repeated measures r, just as long as r ≥ 2m−1, a condition that guarantees identifiability (Elmore and Wang, 2003). However, some information is lost in the discretization step, and for this reason it becomes difficult to obtain density estimates of the component densities. Furthermore, even if the assumption of conditional independence is warranted, the extra assumption of identically distributed coordinates may not be; and the cutpoint method collapses when the coordinates are not identically distributed.
As an illustration of the cutpoint approach applied to a dataset, we show here how to use mixtools to reconstruct (almost) an example from Elmore et al. (2004). The dataset is Waterdata, a description of which is available by typing help("Waterdata"). This dataset contains 8 observations on each of 405 subjects, where the observations are angle degree measurements ranging from −90 to 90 that describe the subjects' answers to a series of 8 questions related to a conceptual task about how the surface of a liquid would be oriented if the vessel containing it were tipped to a particular angle. The correct answer is 0 degrees in all cases, yet the subjects showed a remarkable variety of patterns of answers. Elmore et al. (2004) assumed the conditionally i.i.d. model (see Benaglia et al. (2009a) for an in-depth discussion of this assumption and this dataset) with both m = 3 and m = 4 mixture components. Elmore et al. (2004) summarized their results by providing plots of estimated empirical distribution functions for the component distributions, where these functions are given by the posterior-weighted empirical CDF

F̂_j(x) = Σ_{i=1}^n p_{ij} Σ_{k=1}^r I{x_{ik} ≤ x} / (r Σ_{i=1}^n p_{ij}).   (5)

In equation (5), the p_{ij} are the posterior component-membership probabilities after convergence of the EM algorithm. We cannot obtain the exact results of Elmore et al. (2004) because those authors do not state specifically which cutpoints c_a they use; they merely state that they use thirteen cutpoints. It appears from their Figures 1 and 2 that these cutpoints occur approximately at intervals of 10.5 degrees, starting at −63 and going through 63; these are the cutpoints that we adopt here. The function makemultdata will create a multinomial dataset from the original data, as follows:

R> data("Waterdata")
R> cutpts <- 10.5 * (-6:6)
R> watermult <- makemultdata(Waterdata, cuts = cutpts)

Once the multinomial data have been created, we may apply the multmixEM function to estimate the multinomial parameters via an EM algorithm. Finally, compCDF calculates and plots the estimated distribution functions of equation (5). Figure 3 gives plots for both a 3-component and a 4-component solution; these plots are very similar to the corresponding plots in Figures 1 and 2 of Elmore et al. (2004).
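For completeness, a hedged sketch of these last two steps for the 3-component solution (the seed, the use of the k argument rather than explicit starting values, and the graphical parameters are our illustrative choices, not those of the original analysis):

R> set.seed(15)
R> mult3 <- multmixEM(watermult, k = 3)
R> cdf3 <- compCDF(Waterdata, mult3$posterior, xlab = "Angle in degrees",
+    ylab = "Component CDFs")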

Nonparametric and semiparametric methods
In this section we consider nonparametric multivariate finite mixture models. The first algorithm presented here was introduced by Benaglia et al. (2009a) as a generalization of the stochastic semiparametric EM algorithm of Bordes et al. (2007). Both algorithms are implemented in mixtools.

EM-like algorithms for mixtures of unspecified densities
Consider the mixture model described by equation (1). If we assume that the coordinates of the X_i vector are conditionally independent, i.e., they are independent conditional on the subpopulation or component (φ_1 through φ_m) from which X_i is drawn, the density in (1) can be rewritten as

g_θ(x_i) = Σ_{j=1}^m λ_j ∏_{k=1}^r f_{jk}(x_{ik}),   (6)

where the function f(·), with or without subscripts, will always denote a univariate density function. Here we do not assume that f_{jk}(·) comes from a family of densities that may be indexed by a finite-dimensional parameter vector, and we estimate these densities using nonparametric density techniques. That is why we say that this algorithm is a fully nonparametric approach. The density in equation (6) allows for a different distribution for each component and each coordinate of X_i. Notice that if the density f_{jk}(·) does not depend on k, we have the case in which the X_i are not only conditionally independent but identically distributed as well. These are the two extreme cases. In order to encompass both the conditionally i.i.d. case and the more general case (6) simultaneously in one model, we allow that the coordinates of X_i are conditionally independent and there exist blocks of coordinates that are also identically distributed. If we let b_k denote the block to which the kth coordinate belongs, where 1 ≤ b_k ≤ B and B is the total number of such blocks, then equation (6) is replaced by

g_θ(x_i) = Σ_{j=1}^m λ_j ∏_{k=1}^r f_{j b_k}(x_{ik}).   (7)

The indices i, j, k, and ℓ will always denote a generic individual, component (subpopulation), coordinate (repeated measurement), and block, respectively, so that 1 ≤ i ≤ n, 1 ≤ j ≤ m, 1 ≤ k ≤ r, and 1 ≤ ℓ ≤ B. The EM algorithm to estimate model (7) has the E-step and M-step described in Section 2.2, where in equation (2) we take φ^(t)_j(x_i) = ∏_{k=1}^r f^(t)_{j b_k}(x_{ik}). The update f^(t+1)_{jℓ}(·) is obtained by a weighted nonparametric (kernel) density estimate, given by:

3. Nonparametric (kernel) density estimation step: For any real u, define for each component j ∈ {1, . . . , m} and each block ℓ ∈ {1, . . . , B}

f^(t+1)_{jℓ}(u) = (1 / (n C_ℓ h_{jℓ} λ^(t+1)_j)) Σ_{i=1}^n Σ_{k : b_k = ℓ} p^(t)_{ij} K((u − x_{ik}) / h_{jℓ}),   (8)

where K(·) is a kernel density function, h_{jℓ} is the bandwidth for the jth component and ℓth block density estimate, and C_ℓ is the number of coordinates in the ℓth block.
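To make the density estimation step concrete, here is a hedged sketch of equation (8) coded outside the package (the function name wkde and its arguments are ours, not part of mixtools; a Gaussian kernel is assumed):

R> wkde <- function(u, xblock, pj, h) {
+    ## xblock: n x C_l matrix of the coordinates belonging to block l
+    ## pj: length-n vector of posterior probabilities p_ij for component j
+    num <- sapply(u, function(v) sum(pj * rowSums(dnorm((v - xblock) / h))))
+    num / (ncol(xblock) * h * sum(pj))  # uses n * lambda_j^(t+1) = sum_i p_ij^(t)
+  }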
The function npEM implements this algorithm in mixtools. This function has an argument samebw which, when set to TRUE (the default), takes h_{jℓ} = h, for all 1 ≤ j ≤ m and 1 ≤ ℓ ≤ B, that is, the same bandwidth for all components and blocks, while samebw = FALSE allows a different bandwidth for each component and each block, as detailed in Benaglia et al. (2009b). This function will, if called using stochastic = TRUE, replace the deterministic density estimation step (8) by a stochastic density estimation step of the type proposed by Bordes et al. (2007): first, generate Z^(t)_i = (Z^(t)_{i1}, . . . , Z^(t)_{im}) as a multivariate random vector with a single trial and success probability vector p^(t)_i = (p^(t)_{i1}, . . . , p^(t)_{im}); then, in the M-step for λ^(t+1)_j in equation (4) and in the density estimation step (8), replace each p^(t)_{ij} by Z^(t)_{ij}. In other words, the stochastic versions of these algorithms re-assign each observation randomly at each iteration, according to the p_{ij} values at that iteration, to one of the m components, then the density estimate for each component is based only on those observations that have been assigned to it. Because the stochastic algorithms do not converge the way a deterministic algorithm often does, the output of npEM is slightly different when stochastic = TRUE than when stochastic = FALSE, the default. See the corresponding help file for details.

Benaglia et al. (2009a) also discuss specific cases of model (7) in which some of the f_{j b_k}(·) densities are assumed to be the same except for a location and scale change. They refer to such cases as semiparametric since estimating each f_{j b_k}(·) involves estimating an unknown density as well as multiple location and scale parameters. For instance, equation (17) of Benaglia et al. (2009a) sets

f_{jℓ}(x) = (1/σ_{jℓ}) f((x − µ_{jℓ})/σ_{jℓ}),   (9)

where ℓ = b_k for a generic k.
The mixtools package implements an algorithm for fitting model (9) in a function called spEM. Details on the use of this function may be obtained by typing help("spEM"). Implementation of this algorithm and of that of the npEM function requires updating the values of f_{j b_k}(x_{ik}) for all i, j, and k for use in the E-step (2). To do this, the spEM algorithm keeps track of an n × m matrix, called Φ here, where

Φ^(t)_{ij} := φ^(t)_j(x_i) = ∏_{k=1}^r f^(t)_{j b_k}(x_{ik}).

The density estimation step of equation (8) updates the Φ matrix for the (t+1)th iteration based on the most recent values of all of the parameters. For instance, in the case of model (9), the update takes the form

Φ^(t+1)_{ij} = ∏_{ℓ=1}^B ∏_{k : b_k = ℓ} (1/σ^(t+1)_{jℓ}) f^(t+1)((x_{ik} − µ^(t+1)_{jℓ})/σ^(t+1)_{jℓ}).

A univariate symmetric, location-shifted semiparametric example
Both Hunter et al. (2007) and Bordes et al. (2006) study a particular case of model (1) in which x is univariate and

g_θ(x) = Σ_{j=1}^m λ_j φ(x − µ_j),   (10)

where φ(·) is a density that is assumed to be completely unspecified except that it is symmetric about zero. Because each component distribution has both a nonparametric part φ(·) and a parametric part µ_j, we refer to this model as semiparametric.
Under the additional assumption that φ(·) is absolutely continuous with respect to Lebesgue measure, Bordes et al. (2007) propose a stochastic algorithm for estimating the model parameters, namely, (λ, µ, φ). This algorithm is implemented by the mixtools function spEMsymloc. This function also implements a nonstochastic version of the algorithm, which is the default and which is a special case of the general algorithm described in Section 4.1.
As noted in Figure 1, model (10) appears to be an appropriate model for the Old Faithful waiting times dataset. Here, we provide code that applies the spEMsymloc function to these data. First, we display the normal mixture solution of Figure 2 with a semiparametric solution superimposed, in Figure 4(a):

R> plot(wait1, which = 2, cex.axis = 1.4, cex.lab = 1.4, cex.main = 1.8,
+   main2 = "Time between Old Faithful eruptions", xlab2 = "Minutes")
R> wait2 <- spEMsymloc(waiting, mu0 = c(55, 80))
R> plot(wait2, lty = 2, newplot = FALSE, addlegend = FALSE)

Because the semiparametric version relies on a kernel density estimation step (8), it is necessary to select a bandwidth for this step. By default, spEMsymloc uses a fairly simplistic approach: it applies "Silverman's rule of thumb" (Silverman, 1986) to the entire dataset using the bw.nrd0 function in R. For the Old Faithful waiting time dataset, this bandwidth is about 4:

R> bw.nrd0(waiting)
[1] 3.987559

But the choice of bandwidth can make a big difference, as seen in Figure 4(b). We find that with a bandwidth near 2, the semiparametric solution looks quite close to the normal mixture solution of Figure 2. Reducing the bandwidth further results in the "bumpiness" exhibited by the solid line in Figure 4(b). On the other hand, with a bandwidth of 8, the semiparametric solution completely breaks down in the sense that the algorithm tries to make each component look similar to the whole mixture distribution. We encourage the reader to experiment by changing the bandwidth in the above code.
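For instance, a hedged sketch of such an experiment (the bw argument of spEMsymloc is an assumption based on its help file; the specific values 2 and 8 mirror the discussion above):

R> wait2a <- spEMsymloc(waiting, mu0 = c(55, 80), bw = 2)   # close to normal fit
R> wait2b <- spEMsymloc(waiting, mu0 = c(55, 80), bw = 8)   # oversmoothed fit
R> plot(wait2a, addlegend = FALSE)
R> plot(wait2b, lty = 2, newplot = FALSE, addlegend = FALSE)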

A trivariate gaussian example
As a first simple, nonparametric example, we simulate a Gaussian trivariate mixture with independent repeated measures and a shift of location between the two components in each coordinate, i.e., m = 2, r = 3, and b_k = k for k = 1, 2, 3. The individual densities f_{jk} are the densities of N(µ_{jk}, 1), with component means µ_1 = (0, 0, 0) and µ_2 = (3, 4, 5). This example was introduced by Hall et al. (2005) and later reused by Benaglia et al. (2009a) for comparison purposes. Note that the parameters in this model are identifiable, since Hall and Zhou (2003) showed that for two components (m = 2), identifiability holds in model (1) under mild assumptions as long as r ≥ 3, even in the most general case in which b_k = k for all k.
A function ise.npEM has been included in mixtools for numerically computing the integrated squared error (ISE) relative to a user-specified true density for a selected estimated density f̂_{jk} from npEM output. Each density f̂_{jk} is computed using equation (8) together with the posterior probabilities after convergence of the algorithm, i.e., the final values of the p^(t)_{ij} (when stochastic = FALSE). We illustrate the usage of ise.npEM in this example by running a Monte Carlo simulation for S replications, then computing the square root of the mean integrated squared error (MISE) for each density, where

MISE = (1/S) Σ_{s=1}^S ∫ ( f̂^(s)_{jk}(u) − f_{jk}(u) )² du,  j = 1, 2 and k = 1, 2, 3.
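The simulation and estimation code is not reproduced in full here; the following hedged sketch shows how a single replication might be generated and fit, producing the npEM object a that is plotted below (the sample size, seed, mixing proportions, and component labeling are our illustrative assumptions):

R> set.seed(100)
R> n <- 300; lambda <- c(0.4, 0.6)
R> mu <- matrix(c(0, 0, 0, 3, 4, 5), nrow = 2, byrow = TRUE)
R> z <- sample(2, n, replace = TRUE, prob = lambda)   # latent component labels
R> x <- mu[z, ] + matrix(rnorm(3 * n), n, 3)          # trivariate observations
R> a <- npEM(x, mu0 = 2, blockid = 1:3)               # b_k = k: one block per coordinate
R> ## ISE of the estimate of f_11 against the true N(0,1) density,
R> ## assuming the component labeled 1 corresponds to mu_1 = (0, 0, 0):
R> ise.npEM(a, component = 1, block = 1, truepdf = dnorm, mean = 0, sd = 1)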

R> plot(a)
The resulting plots are given in Figure 5.

A more general multivariate nonparametric example
In this section, we fit a more difficult example, with non-multimodal mixture densities (in block #2), heavy-tailed distributions, and different scales among the coordinates. The model is multivariate with r = 5 repeated measures and m = 2 components (hence identifiability holds; cf. Hall and Zhou (2003)), and the coordinates are grouped into B = 2 blocks. Block 1 corresponds to a mixture of two noncentral Student t distributions, t′(2, 0) and t′(10, 8), where the first parameter is the number of degrees of freedom, and the second is the non-centrality. Block 2 corresponds to a mixture of Beta distributions, B(1, 1) (which is actually the uniform distribution over [0, 1]) and B(1, 5). The first component weight is λ_1 = 0.4. The true mixtures are depicted in Figure 6.
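The data generation and fit are not shown in full here; a hedged sketch producing the npEM object b plotted below (the sample size, seed, and the assignment of three coordinates to block 1 and two to block 2 are our illustrative assumptions):

R> set.seed(1234)
R> n <- 300
R> z <- sample(2, n, replace = TRUE, prob = c(0.4, 0.6))   # lambda_1 = 0.4
R> blk1 <- replicate(3, ifelse(z == 1, rt(n, df = 2), rt(n, df = 10, ncp = 8)))
R> blk2 <- replicate(2, ifelse(z == 1, rbeta(n, 1, 1), rbeta(n, 1, 5)))
R> x <- cbind(blk1, blk2)
R> b <- npEM(x, mu0 = 2, blockid = c(1, 1, 1, 2, 2), samebw = FALSE)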

R> plot(b, breaks = 15)
Finally, we can compute the ISE of the estimated density relative to the truth for each block and component. The corresponding output is depicted in Figure 8.

Mixtures of linear regressions
Consider a mixture setting where we now assume X i is a vector of covariates observed with a response Y i . The goal of mixtures of regressions is to describe the conditional distribution of Y i |X i . Mixtures of regressions have been extensively studied in the econometrics literature and were first introduced by Quandt (1972) as the switching regimes (or switching regressions) problem. A switching regimes system is often compared to structural change in a system (Quandt and Ramsey, 1978). A structural change assumes the system depends deterministically on some observable variables, but switching regimes implies one is unaware of what causes the switch between regimes. In the case where it is assumed there are two heterogeneous classes, Quandt (1972) characterized the switching regimes problem "by assuming that nature chooses between regimes with probabilities λ and 1 − λ".
Suppose we have n independent univariate observations, y_1, . . . , y_n, each with a corresponding vector of predictors, x_1, . . . , x_n, with x_i = (x_{i,1}, . . . , x_{i,p})^T for i = 1, . . . , n. We often set x_{i,1} = 1 to allow for an intercept term. Let y = (y_1, . . . , y_n)^T and let X be the n × p matrix consisting of the predictor vectors.
Suppose further that each observation (y_i, x_i) belongs to one of m classes. Conditional on membership in the jth component, the relationship between y_i and x_i is the normal regression model

y_i = x_i^T β_j + ε_i,   (11)

where ε_i ∼ N(0, σ²_j) and β_j and σ²_j are the p-dimensional vector of regression coefficients and the error variance for component j, respectively.
Accounting for the mixture structure, the conditional density of y_i | x_i is

g_θ(y_i | x_i) = Σ_{j=1}^m λ_j φ(y_i | x_i^T β_j, σ²_j),   (12)

where φ(· | x_i^T β_j, σ²_j) is the normal density with mean x_i^T β_j and variance σ²_j. Notice that the model parameter for this setting is θ = (λ, (β_1, σ²_1), . . . , (β_m, σ²_m)). The mixture of regressions model (12) differs from the well-known mixture of multivariate normals model in that (12) makes no assertion about the marginal distribution of X_i, whereas the mixture of multivariate normals specifies that X_i itself has a mixture of multivariate normals distribution.

Figure 9: 1996 data on gross national product (GNP) per capita and estimated carbon dioxide (CO2) emissions per capita. Note that "CH" stands for Switzerland, not China.

GNP and Emissions Data
As a simple example of a dataset to which a mixture of regressions model may be applied, consider the sample depicted in Figure 9. In this dataset, the measurements of carbon dioxide (CO2) emissions are plotted versus the gross national product (GNP) for n = 28 countries. These data are included in mixtools; type help("CO2data") in R for more details. Hurn et al. (2003) analyzed these data using a mixture of regressions from the Bayesian perspective, pointing out that "there do seem to be several groups for which a linear model would be a reasonable approximation." They further point out that identification of such groups could clarify potential development paths of lower GNP countries.

EM algorithms for mixtures of regressions
A standard EM algorithm, as described in Section 2, may be used to find a local maximum of the likelihood surface. The E-step is the same as for any finite mixture model EM algorithm; i.e., the p^(t)_{ij} values are updated according to equation (2), or, in reality, equation (3), where the component densities are those of model (12):

p^(t)_{ij} = [ 1 + Σ_{j'≠j} λ^(t)_{j'} φ(y_i | x_i^T β^(t)_{j'}, σ²^(t)_{j'}) / (λ^(t)_j φ(y_i | x_i^T β^(t)_j, σ²^(t)_j)) ]^(−1).   (13)

The update to the λ parameters in the M-step, equation (4), is also the same. Letting W^(t)_j = diag(p^(t)_{1j}, . . . , p^(t)_{nj}), the additional M-step updates to the β and σ parameters are given by

β^(t+1)_j = (X^T W^(t)_j X)^(−1) X^T W^(t)_j y   (14)

and

σ²^(t+1)_j = ‖(W^(t)_j)^(1/2) (y − X β^(t+1)_j)‖² / tr(W^(t)_j),   (15)

where ‖A‖² = A^T A and tr(A) means the trace of the matrix A. Notice that equation (14) is a weighted least squares (WLS) estimate of β_j and equation (15) resembles the variance estimate used in WLS.
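A hedged sketch of the updates (14) and (15) coded directly (the function name mstep.reg and its arguments are ours, not part of mixtools):

R> mstep.reg <- function(X, y, pj) {
+    ## pj: posterior probabilities p_ij for component j, the diagonal of W_j
+    W <- diag(pj)
+    beta.j <- solve(t(X) %*% W %*% X, t(X) %*% W %*% y)   # equation (14): WLS
+    sigma2.j <- sum(pj * (y - X %*% beta.j)^2) / sum(pj)  # equation (15)
+    list(beta = beta.j, sigma2 = sigma2.j)
+  }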
Allowing each component to have its own error variance σ²_j results in the likelihood surface having no maximizer, since the likelihood may be driven to infinity if one component gives a regression surface passing through one or more points exactly and the variance for that component is allowed to go to zero. A similar phenomenon is well known in the finite mixture-of-normals model where the component variances are allowed to be distinct (McLachlan and Peel, 2000). However, in practice we observe this behavior infrequently, and the mixtools functions automatically force their EM algorithms to restart at randomly chosen parameter values when it occurs. A local maximum of the likelihood function, a consistent version of which is guaranteed to exist by the asymptotic theory as long as the model is correct and all λ_j are positive, usually results without any restarts.
The function regmixEM implements the EM algorithm for mixtures of regressions in mixtools. This function has arguments that control options such as adding an intercept term, addintercept = TRUE; forcing all β_j estimates to be the same, arbmean = FALSE (for instance, to model outlying observations as having a separate error variance from the non-outliers); and forcing all σ²_j estimates to be the same, arbvar = FALSE. For additional details, type help("regmixEM").
As an example, we fit a 2-component model to the GNP data shown in Figure 9. Hurn et al. (2003) and Young (2007) selected 2 components for this dataset using model selection criteria, Bayesian approaches to selecting the number of components, and a bootstrapping approach. The function regmixEM will be used for fitting a 2-component mixture of regressions by an EM algorithm:

R> data("CO2data")
R> attach(CO2data)
R> CO2reg <- regmixEM(CO2, GNP, lambda = c(1, 3) / 4,
+   beta = matrix(c(8, -1, 1, 1), 2, 2), sigma = c(2, 1))
number of iterations= 10

We can then pull out the final observed log-likelihood as well as estimates for the 2-component fit, which include λ̂, β̂_1, β̂_2, σ̂_1, and σ̂_2:

R> summary(CO2reg)

The reader is encouraged to alter the starting values or let the internal algorithm generate random starting values. However, this fit seems appropriate and the solution is displayed in Figure 10 along with 99% Working-Hotelling confidence bands, which are constructed automatically by the plot.mixEM function in this case by assigning each point to its most probable component and then fitting two separate linear regressions:

R> plot(CO2reg, density = TRUE, alpha = 0.01)

Predictor-dependent mixing proportions
Suppose that in model (12), we replace λ_j by λ_j(x_i) and assume that the mixing proportions vary as a function of the predictors x_i. Allowing this type of flexibility in the model might be useful for a number of reasons. For instance, sometimes it is the proportions λ_j that are of primary scientific interest, and in a regression setting it may be helpful to know whether these proportions appear to vary with the predictors. As another example, consider a regmixEM model using arbmean = FALSE in which the mixture structure only concerns the error variance. In this case, λ_j(x) would give some sense of the proportion of outliers in various regions of the predictor space.
One may assume that λ_j(x) has a particular parametric form, such as a logistic function, which introduces new parameters requiring estimation. This is the idea of the hierarchical mixtures of experts (HME) procedure (Jacobs et al., 1991), which is commonly used in neural networks. This procedure is a variant on tree-based methods, a context somewhat different from mixtures of regressions. However, a parametric form of λ_j(x) may be too restrictive; in particular, the logistic function is monotone, which may not realistically capture the pattern of change of λ_j as a function of x. As an alternative, Young and Hunter (2009) propose a nonparametric estimate of λ_j(x_i) that uses ideas from kernel density estimation.
The intuition behind the approach of Young and Hunter (2009) is as follows: the M-step estimate (4) of λ_j at each iteration of a finite mixture model EM algorithm is simply an average of the "posterior" probabilities p_{ij} = E(Z_{ij} | data). As a substitute, the nonparametric approach uses an idea from nonparametric regression, taking a locally weighted average in which a kernel function provides the weights.
Thus, considering the case of univariate x for simplicity, we take

λ_j(x) = Σ_{i=1}^n p_{ij} K_h(x − x_i) / Σ_{i'=1}^n K_h(x − x_{i'}),   (16)

where K_h(·) is a kernel density function with scale parameter (i.e., bandwidth) h. It is straightforward to generalize equation (16) to the case of vector-valued x by using a multivariate kernel function. Young and Hunter (2009) give an iterative algorithm for estimating mixture of regression parameters that replaces the standard λ_j updates (4) by the kernel-weighted version (16). The algorithm is otherwise similar to a standard EM; thus, like the algorithm in Section 4.1 of this article, the resulting algorithm is an EM-like algorithm. Because only the λ_j parameters depend on x (and are thus "locally estimated"), whereas the other parameters (the β_j and σ_j) can be considered to be globally estimated, Young and Hunter (2009) call this algorithm an iterative global/local estimation (IGLE) algorithm. Naturally, it replaces the usual E-step (13) by a modified version in which each λ_j is replaced by λ_j(x_i).
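A hedged rendering of (16) in R (the function name and arguments are ours, not part of mixtools; a Gaussian kernel is assumed):

R> lambda.local <- function(x0, x, pj, h) {
+    w <- dnorm((x0 - x) / h)   # kernel weights K_h centered at x0
+    sum(w * pj) / sum(w)       # locally weighted average of the p_ij
+  }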
The function regmixEM.loc implements the IGLE algorithm in mixtools. Like the regmixEM function, regmixEM.loc has the flexibility to include an intercept term by using addintercept = TRUE. Moreover, this function has the argument kern.l to specify the kernel used in the local estimation of the λ_j(x_i). Kernels the user may specify include "Gaussian", "Beta", "Triangle", "Cosinus", and "Optcosinus". Further numeric arguments relating to the chosen kernel include kernl.g to specify the shape parameter for when kern.l = "Beta" and kernl.h to specify the bandwidth which controls the size of the window used in the local estimation of the mixing proportions. See the corresponding help file for additional details.
For the GNP and emissions dataset, Figure 10 indicates that the assumption of constant weights for the component regressions across all values of the covariate space may not be appropriate. The countries with higher GNP values appear to have a greater probability of belonging to the first component (i.e., the red line in Figure 10). We will therefore apply the IGLE algorithm to this dataset.
For this implementation of the IGLE algorithm, we set the parameter estimates obtained from the mixture of regressions EM algorithm as starting values for β̂_1, β̂_2, σ̂_1, and σ̂_2, and set the starting values for λ(x_i) to be 0.5 for all x_i.

R> CO2igle <- regmixEM.loc(CO2, GNP, beta = CO2reg$beta, sigma = CO2reg$sigma,
+   lambda = matrix(.5, 28, 2), kern.l = "Beta", kernl.h = 20, kernl.g = 3)

We can view the estimates for β̂_1, β̂_2, σ̂_1, and σ̂_2, and notice that the estimates are comparable to those obtained for the mixture of regressions EM output, while the log-likelihood value is slightly higher. We can then plot the estimates of λ(x_i) from the IGLE fit against the predictor; this plot is given in Figure 11. Notice the curvature provided by the estimates from the IGLE fit. These fits indicate an upward trend in the posteriors. The predictor-dependent mixing proportions model provides a viable way to reveal this trend, since the regular mixture of regressions fit simply provides the same estimate of λ for all x_i.
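A hedged sketch of code that would produce a plot like Figure 11 (the posterior and lambda components of the regmixEM.loc output, the latter an n × 2 matrix of local mixing proportion estimates, are assumptions based on the help file):

R> plot(GNP, CO2igle$posterior[, 1], xlab = "GNP",
+   ylab = "Final posterior probabilities")
R> lines(sort(GNP), CO2igle$lambda[order(GNP), 1], col = 2)  # lambda_1(x_i) curve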

Parametric bootstrapping for standard errors
With likelihood methods for estimation in mixture models, it is possible to obtain standard error estimates by using the inverse of the observed information matrix when implementing a Newton-type method. However, this may be computationally burdensome. An alternative way to report standard errors in the likelihood setting is by implementing a parametric bootstrap. Efron and Tibshirani (1993) claim that the parametric bootstrap should provide similar standard error estimates to the traditional method involving the information matrix. In a mixture-of-regressions context, a parametric bootstrap scheme may be outlined as follows:

1. Use regmixEM to find a local maximizer θ̂ of the likelihood.
2. Simulate a bootstrap sample y*_1, . . . , y*_n, where each y*_i is drawn from the fitted conditional density g_θ̂(y | x_i) of equation (12), keeping the original x_i fixed.
3. Find a parameter estimate θ̂* for the bootstrap sample using regmixEM.
4. Check whether label-switching appears to have occurred between the components and, if so, correct it.
5. Repeat steps 2 through 4 B times, and use the sample standard deviations of the B bootstrap estimates as approximate standard errors.

The mixtools package implements such a parametric bootstrap algorithm in the boot.se function. We may apply it to the regression example of this section, which assumes the same estimate of λ for all x_i, as follows:

R> set.seed(123)
R> CO2boot <- boot.se(CO2reg, B = 100)

This output consists of both the standard error estimates and the parameter estimates obtained at each bootstrap replicate. An examination of the slope and intercept parameter estimates of the bootstrap replicates reveals that no label-switching is likely to have occurred. For instance, the intercept terms of component one range from 4 to 11, whereas the intercept terms of component two are all tightly clumped around 0.
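A hedged sketch of steps 2 and 3 for a single replicate, using the fitted object from above (the x, lambda, beta, and sigma components of the regmixEM output are assumptions based on its help file; beta is taken to be the p × m coefficient matrix):

R> z <- sample(2, nrow(CO2reg$x), replace = TRUE, prob = CO2reg$lambda)  # labels
R> ystar <- rowSums(CO2reg$x * t(CO2reg$beta)[z, ]) +                    # x_i' beta_{z_i}
+    rnorm(nrow(CO2reg$x), sd = CO2reg$sigma[z])                         # component noise
R> bootfit <- regmixEM(ystar, GNP, lambda = CO2reg$lambda,
+    beta = CO2reg$beta, sigma = CO2reg$sigma)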
Selecting the number of components

Determining the number of mixture components m is a difficult problem. Two approaches implemented in mixtools are the use of information criteria and parametric bootstrapping of the likelihood ratio test statistic for testing

H_0 : m = k_0 versus H_1 : m = k_0 + 1   (18)

for some positive integer k_0 (McLachlan, 1987).

The mixtools package has functions to employ each of these methods using EM output from various mixture models. The information criterion functions calculate An Information Criterion (AIC) of Akaike (1973), the Bayesian Information Criterion (BIC) of Schwarz (1978), the Integrated Completed Likelihood (ICL) of Biernacki et al. (2000), and the consistent AIC (CAIC) of Bozdogan (1987). The functions for performing parametric bootstrapping of the likelihood ratio test statistics sequentially test k = k_0 versus k = k_0 + 1 for k_0 = 1, 2, . . ., terminating after the bootstrapped p-value for one of these tests exceeds a specified significance level.
Currently, mixtools has functions for calculating information criteria for mixtures of multinomials (multmixmodel.sel), mixtures of multivariate normals under the conditionally i.i.d. assumption (repnormmixmodel.sel), and mixtures of regressions (regmixmodel.sel). Output from various mixture model fits available in mixtools can also be passed to the function boot.comp for the parametric bootstrapping approach. The parameter estimates from these EM fits are used to simulate data from the null distribution for the test given in (18). For example, the following application of the multmixmodel.sel function to the water-level multinomial data from Section 3 indicates that either 3 or 4 components seems like the best option (no more than 4 are allowed here since there are only 8 multinomial trials per observation and the mixture of multinomials requires 2m ≤ r + 1 for identifiability):

R> set.seed(10)
R> multmixmodel.sel(watermult, comps = 1:4, epsilon = 0.001)
[information criteria table garbled in extraction; the reported log-likelihood values for 1 through 4 components were -7209.967, -3082.434, -2924.748, and -2881.278]

Young (2007) gives more applications of these functions to real datasets.
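Similarly, a hedged sketch of the bootstrap approach applied to the same data (the argument names follow the boot.comp help file; the seed and B = 100 replicates are illustrative choices):

R> set.seed(10)
R> waterboot <- boot.comp(y = watermult, max.comp = 4, B = 100,
+    mix.type = "multmix")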

Bayesian methods
Currently, there are only two mixtools functions relating to Bayesian methodology and they both pertain to analyzing mixtures of regressions as described in Hurn et al. (2003). The regmixMH function performs a Metropolis-Hastings algorithm for fitting a mixture of regressions model where a proper prior has been assumed. The sampler output from regmixMH can then be passed to regcr in order to construct credible regions of the regression lines. Type help("regmixMH") and help("regcr") for details and an illustrative example.