A comparison of parameter covariance estimation methods for item response models in an expectation-maximization framework

The Expectation-Maximization (EM) algorithm is a method for finding the maximum likelihood estimate of a model in the presence of missing data. Unfortunately, EM does not produce a parameter covariance matrix for standard errors. Both Oakes and Supplemented EM (SEM) are methods for obtaining the parameter covariance matrix. SEM was discovered in 1991 and is implemented in both open-source and commercial item response model estimation software. Oakes, a more recent method discovered in 1999, had not been implemented in item response model software until now. Convergence properties, accuracy, and elapsed time of Oakes and Supplemented EM family algorithms are compared for a diverse selection of item factor analysis (IFA) models. Oakes exhibits the best accuracy and elapsed time among the algorithms compared. We recommend that Oakes be made available in item response model estimation software.


Introduction
Once a model is fit to data, it is routine practice to examine the degree of confidence we ought to have in the parameter estimates. One approximation of this information is found in the parameter covariance matrix V and, in summary form, in the standard errors (SEs), SE = diag(V)^(1/2).

ABOUT THE AUTHOR
Joshua N. Pritikin is a postdoctoral fellow at Virginia Commonwealth University in Richmond, Virginia. Joshua is interested in developing software for the applied statistician. He has made contributions at all levels, from the layout of user interface elements that are directly manipulated by applied researchers to the underlying mathematics and algorithms that implement a statistical idea.

PUBLIC INTEREST STATEMENT
Item models are a family of statistical models frequently used for high-stakes testing (the SAT, GRE, or MCAT) and for data obtained from psychological questionnaires (like personality assessments). For any statistical model, an important consideration is how precisely the model parameters are determined by the data. For this particular kind of item model, obtaining information about parameter precision is difficult. This report runs the leading methods against each other, ranking them in terms of accuracy and performance. The method described by Oakes, which is also the simplest, outperforms all the other methods in both accuracy and performance. We recommend that Oakes be made available more widely in statistical software that deals with item models.

http://dx.doi.org/10.1080/23311908.2017.1279435

The Expectation-Maximization (EM) algorithm (Dempster, Laird, & Rubin, 1977) is a method for finding the maximum likelihood estimate (MLE, θ̂) of a model in the presence of missing data. For example, one EM algorithm of importance to psychologists and educators is the implementation of Item Factor Analysis (IFA) (Bock & Aitkin, 1981). Unfortunately, the parameter covariance matrix is not an immediate output of the EM algorithm. Before exploring methods to obtain the parameter covariance matrix, the EM approach will be informally outlined.
Following traditional notation, let Y_o be the observed data. We want to find the MLE θ̂ of parameter vector θ for model L(Y_o | θ). Unfortunately, L(Y_o | θ) is intractable or cumbersome to optimize. The EM approach is to start with an initial parameter vector θ_0 and fill in the missing data Y_m as the expectation E(Y_m | Y_o, θ_t) (E-step). In the case of Bock and Aitkin (1981), the missing data are the examinee latent scores (as determined by the item parameters). Together, the observed data Y_o and made-up data Y_m constitute the completed data Y_c. With the parameter vector θ_t at iteration t, we can use a complete data method to optimize L(θ | Y_c) and find θ_{t+1} (M-step). With an improved parameter vector θ_{t+1}, the process is repeated until θ_t ≈ θ_{t+1} ≈ θ̂. As a memory aid, the reader may prefer to associate the m in Y_m with made up (not missing).
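As a concrete illustration (not part of the original study), the E-step/M-step loop just outlined can be sketched on the classic multinomial example from Dempster, Laird, and Rubin (1977), where one observed count is a mixture of two latent counts and the split between them is the "made up" data:

```python
# Classic multinomial example (Dempster, Laird, & Rubin, 1977):
# counts y = (125, 18, 20, 34) with cell probabilities
# (1/2 + p/4, (1-p)/4, (1-p)/4, p/4). The first cell mixes a "1/2"
# component with a "p/4" component; splitting it is the missing data.
y1, y2, y3, y4 = 125, 18, 20, 34

def em_step(p):
    """One EM cycle: E-step fills in the latent split of y1, M-step maximizes."""
    # E-step: expected count of y1 falling in the p/4 component (the made-up data)
    x2 = y1 * (p / 4) / (1 / 2 + p / 4)
    # M-step: closed-form MLE of p given the completed counts
    return (x2 + y4) / (x2 + y2 + y3 + y4)

p = 0.5  # starting value
for t in range(100):
    p_new = em_step(p)
    if abs(p_new - p) < 1e-10:
        break
    p = p_new

print(round(p, 4))  # converges to the MLE, approximately 0.6268
```

Each cycle is cheap because the completed-data likelihood has a closed-form maximizer; this is exactly the trade the EM approach makes in the IFA setting as well.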
In exponential family models, the parameter covariance matrix V is often estimated using the observed information matrix. The negative M-step Hessian,

I(θ̂; Y_c) = −∂² log L(θ | Y_c) / ∂θ ∂θᵀ evaluated at θ = θ̂,    (1)

is usually easy to evaluate, but it asymptotically underestimates the variability of θ̂ because it treats the made-up data Y_m as if they had been observed.
A better estimate is the negative Hessian of the log-likelihood of only the observed data Y_o,

I(θ̂; Y_o) = −∂² log L(θ | Y_o) / ∂θ ∂θᵀ evaluated at θ = θ̂.    (2)

Usually I(θ̂; Y_o) is difficult to evaluate; one benefit of the EM method is the ability to optimize L(θ | Y_o) efficiently without evaluation of Equation (2).
To estimate the parameter covariance matrix in an EM context, many methods have been proposed. Some methods require problem specific apparatus such as the covariance of the row-wise gradients (Mislevy, 1984) or a sandwich estimate (e.g. Louis, 1982; Yuan, Cheng, & Patton, 2013). For IFA models, the Fisher information matrix can be computed analytically. However, a sum is required over all possible response patterns (Bock & Aitkin, 1981). Since such a sum is impractical for as few as 20 dichotomous items, no further consideration of this method will be given. Here we will focus on methods that are less reliant on problem specific features.
Finite differences with Richardson extrapolation have been advocated (Jamshidian & Jennrich, 2000). This method evaluates the observed data log-likelihood ℓ(Y_o | θ) at a grid of points in the θ space to approximate the Hessian. For example, for a single parameter function f, the second derivative can be approximated by

f''(x) ≈ [f(x + ε) − 2 f(x) + f(x − ε)] / ε²    (3)

for some small ε > 0. For Richardson extrapolation, the perturbation distance ε is reduced on every iteration. Precision is enhanced by extrapolating the change in curvature between iterations (Richardson, 1911). Unfortunately, the number of points required to approximate the Hessian is 1 + r(N² + N), where r is the number of iterations and N is the number of parameters in vector θ (Gilbert & Varadhan, 2012). This limits the practical applicability of Richardson extrapolation to models with a modest number of parameters. We are aware of only two algorithms that offer performance that scales linearly with the number of parameters and require little problem specific apparatus: Supplemented EM (SEM) (Meng & Rubin, 1991) and the direct method (Oakes, 1999). Although the direct method is simpler than SEM, Oakes has not been implemented in IFA software until recently (Pritikin, 2015) and has not been compared with other parameter covariance estimation methods for IFA models. We describe SEM in some detail so that the reader may appreciate its relationship with Oakes.
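A minimal single-parameter sketch of the central-difference scheme of Equation (3) with Richardson refinement follows; the `hessian_1d` helper and its halving step schedule are illustrative, not the implementation benchmarked later in this article:

```python
import math

def hessian_1d(f, x, h=1e-3, iterations=2):
    """Central-difference estimate of f''(x), refined by Richardson extrapolation.
    Each iteration halves the step; extrapolation cancels the leading O(h^2) error."""
    # First column: plain central differences with successively halved steps
    est = []
    for r in range(iterations + 1):
        step = h / (2 ** r)
        est.append((f(x + step) - 2 * f(x) + f(x - step)) / step ** 2)
    # Richardson triangle: combine adjacent estimates to cancel error terms
    for k in range(1, iterations + 1):
        factor = 4 ** k
        est = [(factor * est[i + 1] - est[i]) / (factor - 1)
               for i in range(len(est) - 1)]
    return est[0]

# Example: second derivative of cos at x = 0.7 (true value is -cos(0.7))
f = lambda x: math.cos(x)
approx = hessian_1d(f, 0.7)
print(abs(approx - (-math.cos(0.7))))  # error is tiny
```

The point cost is visible here even for one parameter: each refinement iteration adds fresh function evaluations, which is why the full-Hessian cost grows as 1 + r(N² + N).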
In IFA software, SEM grew to popularity because it was superior to the methods commonly available at the time (Cai, 2008). SEM is based on the observation that the information matrix of the completed data I(θ̂; Y_c) is the sum of the information matrices of the observed data I(θ̂; Y_o) and the made-up data I(θ̂; Y_m) (Orchard & Woodbury, 1972),

I(θ̂; Y_c) = I(θ̂; Y_o) + I(θ̂; Y_m).    (4)

With some algebraic manipulation we can rearrange the terms,

I(θ̂; Y_o) = I(θ̂; Y_c) − I(θ̂; Y_m)    (5)

(Dempster et al., 1977). One cycle of the EM algorithm can be regarded as a mapping θ → M(θ). In this notation, the EM algorithm is

θ_{t+1} = M(θ_t),  t = 0, 1, 2, ….    (6)

If θ_t converges to some point θ̂ and M(θ) is continuous, then θ̂ must satisfy θ̂ ≈ M(θ̂). In the neighborhood of θ̂, by Taylor series expansion, θ_{t+1} − θ̂ ≈ (θ_t − θ̂) Δ̂, where Δ̂ is the Jacobian of M evaluated at the MLE θ̂,

Δ̂ = ∂M(θ)/∂θ evaluated at θ = θ̂.    (7)

Dempster et al. (1977) showed that the rate of convergence is determined by the fraction of information that Y_m contributes to Y_c. In particular, in the neighborhood of θ̂,

Δ̂ = I(θ̂; Y_m) I(θ̂; Y_c)^(−1).    (8)

Combining Equations (5) and (8),

V = I(θ̂; Y_o)^(−1) = I(θ̂; Y_c)^(−1) (I − Δ̂)^(−1).    (9)

The rate matrix Δ̂ from Equation (7) can be approximated using a forward difference method. Let d be the number of elements in vector θ so we can refer to it as θ = {θ_1, …, θ_d}. Column j of Δ̂ is approximated by

Δ̂_{·j} ≈ [M(θ̂ + ε e_j) − M(θ̂)] / ε,    (10)

where e_j is the jth elementary vector and |ε| > 0. That is, we run 1 cycle of EM with θ set to the MLE θ̂ except for the jth parameter, which is set to θ̂_j + ε. Then we subtract M(θ̂) ≈ θ̂ from the result and divide by the scalar ε. This amounts to numerically differentiating the EM map M.
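The SEM recipe can be demonstrated end to end on a one-parameter problem, again the multinomial example of Dempster, Laird, and Rubin (1977): numerically differentiate the EM map at the MLE to get the rate Δ̂, then recover the observed information as I(θ̂; Y_o) = (1 − Δ̂) I(θ̂; Y_c). This is only a sketch; the IFA implementations compared later are matrix-valued and far more involved:

```python
# One-parameter illustration of the SEM idea on the classic multinomial
# EM example of Dempster, Laird, & Rubin (1977).
y1, y2, y3, y4 = 125, 18, 20, 34

def em_map(p):
    x2 = y1 * (p / 4) / (1 / 2 + p / 4)     # E-step: expected latent split of y1
    return (x2 + y4) / (x2 + y2 + y3 + y4)  # M-step

# Find the MLE by running EM to convergence
p = 0.5
for _ in range(200):
    p = em_map(p)

# Complete-data information: -Hessian of the completed-data log-likelihood,
# (x2 + y4)/p^2 + (y2 + y3)/(1 - p)^2, with x2 at its expectation
x2 = y1 * (p / 4) / (1 / 2 + p / 4)
info_c = (x2 + y4) / p ** 2 + (y2 + y3) / (1 - p) ** 2

# Forward-difference the EM map: one extra EM cycle from a perturbed MLE
eps = 1e-5
delta = (em_map(p + eps) - p) / eps  # em_map(p) equals p at the MLE

info_o_sem = (1 - delta) * info_c

# Analytic observed information for comparison:
# l_o(p) = 125*log(2 + p) + 38*log(1 - p) + 34*log(p) + const
info_o_true = y1 / (2 + p) ** 2 + (y2 + y3) / (1 - p) ** 2 + y4 / p ** 2
print(info_o_sem, info_o_true)  # the two agree closely
```

Note that `delta` (about 0.13 here) is also the linear convergence rate of the EM iterations, which is exactly the fraction-of-missing-information interpretation given above.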
Theoretically, accuracy improves as ε → 0. In practice, however, this is arithmetic on a computer using a floating-point representation. We cannot take ε → 0 but must pick a particular |ε| > 0. The original formulation proposed to use the EM convergence history θ_t (where θ_t is the parameter vector at iteration t), taking the perturbations from the history and computing a series of column estimates r_{·j}^t as t advances. This procedure may initially seem appealing, but note that the history of θ is a function of the starting parameter vector θ_0, and no guidance was provided about appropriate starting values. Regardless of starting values, Meng and Rubin (1991) suggested that r_{·j} could be declared stable if no element changed by more than the square root of the tolerance of an EM cycle. For example, if the EM tolerance for absolute change in log-likelihood is 10^−8, then the SEM tolerance would be 10^−4. Hence, the jth column r_{·j} is converged when

|r_{ij}^{t+1} − r_{ij}^{t}| < 10^−4  for all i.

But they remarked that the stopping criterion deserved further investigation.
SEM as originally described does not perform well in some circumstances. Its disappointing performance prompted at least two refinements. One refinement proposed a heuristic for the search of the parameter trajectory history (Tian-SEM; Tian, Cai, Thissen, & Xin, 2013) and was reinforced by a report of promising performance in simulation studies of unidimensional and multidimensional item response models (Paek & Cai, 2014). The idea of Tian-SEM is that parameter estimates θ_t typically start far from the MLE θ̂ and approach it closely only after a number of EM cycles. Starting SEM from θ_0 is usually wasteful because Δ̂ does not stabilize until t is close to convergence. During an EM run, the log-likelihood ℓ typically changes rapidly at first and then slowly as the parameter values are fine tuned. The proportion of the total change in ℓ completed by iteration t was proposed as a standardized measure of closeness to convergence, with the suggestion that the best opportunity for SEM is the subset of the history for which this measure lies in a window just short of convergence (with upper endpoint 0.999). This refinement helps in many cases. However, lingering weaknesses in Tian-SEM prompted another, more drastic refinement (Agile-SEM; Pritikin, 2016). Agile-SEM will be shown to perform better than the other SEM family algorithms, but not as well as the best method. A full description of Agile-SEM is lengthy and beyond the scope of this article. We include the original algorithm (MR-SEM) and these two refinements in our comparison.
Recall that the goal of SEM family algorithms is to estimate I(θ̂; Y_m) in Equation (4). Oakes gave a remarkably direct way to obtain this quantity,

I(θ̂; Y_m) = ∂² Q(θ′ | θ) / ∂θ′ ∂θ evaluated at θ′ = θ = θ̂,    (11)

where Q(θ′ | θ) = E[log L(θ′ | Y_c) | Y_o, θ] is the expected completed data log-likelihood. That is, Equation (11) is the Jacobian of the completed data gradient with respect to the parameters θ that generate the made-up data Y_m (Oakes, 1999). We are not aware of an analytic expression for this quantity for an item response model, but little additional computer programming is needed to estimate it using finite differences.
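The same one-parameter multinomial example (Dempster, Laird, & Rubin, 1977) shows how little machinery the Oakes approach needs: forward-differencing the completed data gradient with respect to the E-step parameter recovers the missing information directly. The `q_gradient` helper is illustrative, written for this toy model only:

```python
# One-parameter illustration of the Oakes (1999) identity on the classic
# multinomial EM example: the missing information is the cross derivative
# of the E-step objective Q(p_new | p_old), estimated by forward differences.
y1, y2, y3, y4 = 125, 18, 20, 34

def em_map(p):
    x2 = y1 * (p / 4) / (1 / 2 + p / 4)
    return (x2 + y4) / (x2 + y2 + y3 + y4)

p = 0.5
for _ in range(200):
    p = em_map(p)  # run EM to the MLE

def q_gradient(p_new, p_old):
    """Gradient in p_new of Q(p_new | p_old) = E[l(p_new; Y_c) | Y_o, p_old]."""
    x2 = y1 * (p_old / 4) / (1 / 2 + p_old / 4)  # E-step expectation
    return (x2 + y4) / p_new - (y2 + y3) / (1 - p_new)

eps = 1e-5
# Cross derivative d^2 Q / (dp_new dp_old) at the MLE = missing information
info_m = (q_gradient(p, p + eps) - q_gradient(p, p)) / eps

# Check against I_m = I_c - I_o computed analytically
x2 = y1 * (p / 4) / (1 / 2 + p / 4)
info_c = (x2 + y4) / p ** 2 + (y2 + y3) / (1 - p) ** 2
info_o = y1 / (2 + p) ** 2 + (y2 + y3) / (1 - p) ** 2 + y4 / p ** 2
print(info_m, info_c - info_o)  # the two agree closely
```

Only N + 1 gradient evaluations are needed for N parameters (one per perturbed parameter plus the baseline), which is the source of the linear scaling claimed above.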

Models
We introduce a set of conditions designed to present a challenge to parameter covariance matrix estimators. We included underidentified models, models with bounds, and latent distribution parameters. Underidentified models do not contain enough data to uniquely identify the most likely model parameters. IFA models consist of a collection of response models, with each item response model associated with a single item. To some extent, the number of possible response outcomes determines the choice of item model. The dichotomous or graded response model is suitable for items with only two possible outcomes (e.g. correct/incorrect). In contrast, the graded response or nominal model is suitable for items with more than two possible outcomes. Many other response models are available, but we focus on these three because of their enduring popularity. The definitions of these item response models are given in the Appendix and detailed in the rpf package (Pritikin, 2015). The structure of Models m2pl5, m3pl15, grm20, and cyh1 will be described. Model m2pl5 contained 5 2PL items. Slopes were 0.5, 1.4, 2.2, 3.1, and 4. Intercepts were −1.5, −0.75, 0, 0.75, and 1.5. Data were generated with a sample size of 1,000 and all parameters were estimated. Model m2pl5 is not always identified at this sample size. This allowed us to examine the extent to which algorithms agreed on whether a given model was identified or not.
Model m3pl15 contained 15 3PL items. Slopes were set to 2 and items were divided into 3 groups of 5. Each group had the intercepts set as in Model m2pl5 and the lower bound parameters set to logit((1 + g)^(−1)) with g as the group number (1-3). A sample size of 250 was used. For estimation, all slopes were equated to a single slope parameter. To stabilize the model, a Gaussian Bayesian prior on the lower bound (in logit units) with a standard deviation of 0.5 was used (see Cai, Yang, & Hansen, 2011, Appendix A).
Model grm20 contained 20 graded response items with 3 outcomes. Slopes were equally spaced from 0.5 to 4. The first intercepts were equally spaced from −1.5 to 1.5, changing every 5 items. The second intercept was 0.1 less than the first intercept. A sample size of 2,000 was used and all parameters were estimated. In the graded model, intercepts must be strictly ordered (Samejima, 1969). The placement of intercepts so close together should boost the curvature of the information matrix.
The first simulation study from Cai et al. (2011) was included. Model cyh1 was a bifactor model with 2 groups of 1,000 samples each. Group 1 had 16 2PL items with the latent distribution fixed to standard Normal. Group 2 had the first 12 of the items from Group 1. All item parameters appearing in both groups were constrained equal. Data generating parameters for the items are given in Table 1. The latent distribution of Group 2 was estimated. Latent distribution generating parameters were 1, -0.5, 0, 0.5 and 0.8, 1.2, 1.5, 1, for means and variances respectively.
In addition, a 20 item 2PL model and the model from the second simulation study of Cai et al. (2011) were examined. Little additional insight was gained from these models and we do not report them here in detail. However, this work indicated that our results generalize to the nominal response model (see Appendix A). All item response models used a multidimensional parameterization (slope-intercept form instead of discrimination-difficulty form). Hence, intercepts were multiplied by slopes in Models m2pl5, m3pl15, and grm20. Both MR-SEM and Tian-SEM strongly depend on the parameter convergence trajectory. Therefore, it is crucial to report optimization starting values. In general, all slopes were started at 1, intercepts at 0, means at 0, and variances at 1. For Model m3pl15, all lower bounds were started at their true values. Since the intercepts of the graded model cannot be set equal, for Model grm20 the intercepts were started at 0.5 and −0.5, respectively.

Ground truth
All models were subjected to 500 Monte Carlo trials to obtain the ground truth for the parameter covariance matrix. For each trial, data were generated with the rpf.sample function from the rpf package (Pritikin, 2015). Models were fit with the Bock and Aitkin (1981) EM algorithm as implemented in the IFA module of OpenMx with EM acceleration enabled (Pritikin, 2015; Varadhan & Roland, 2008). For the multidimensional model, Cai (2010b) was used for analytic dimension reduction. The EM and M-step tolerances for relative change in log-likelihood were set to 10^−9 and 10^−12, respectively. The use of relative change removes the influence of the magnitude of the log-likelihood on the precision of the convergence test. In models where the latent distribution was fixed, numerical integration was performed using a standard Normal prior. Single dimensional models used an equal interval quadrature of 49 points from Z score −6 to 6. The multidimensional model used an equal interval quadrature of 21 points from Z score −5 to 5. The computer used was running GNU/Linux with a 2.40 GHz Intel i7-3630QM CPU and ample RAM. Table 2 summarizes the results.
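To make the quadrature setup concrete, here is a small sketch (with made-up item parameters, not those of any model above) of how the marginal probability of a response pattern is computed on an equal-interval grid of 49 points from Z = −6 to 6 under a standard Normal prior:

```python
import math

# Sketch: marginal probability of a response pattern for two 2PL items,
# integrating over the latent trait with equal-interval quadrature.
slopes = [1.4, 2.2]         # hypothetical item slopes
intercepts = [-0.75, 0.75]  # hypothetical item intercepts

def p_correct(theta, a, c):
    return 1 / (1 + math.exp(-(a * theta + c)))

quad = [-6 + 12 * k / 48 for k in range(49)]  # 49 equal-interval points
weights = [math.exp(-z * z / 2) for z in quad]
total = sum(weights)
weights = [w / total for w in weights]  # normalize the Normal prior on the grid

def pattern_prob(pattern):
    """Marginal probability of a response pattern (tuple of 0/1)."""
    out = 0.0
    for z, w in zip(quad, weights):
        like = 1.0
        for resp, a, c in zip(pattern, slopes, intercepts):
            p = p_correct(z, a, c)
            like *= p if resp else 1 - p
        out += w * like
    return out

# All four response patterns must account for all probability mass
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(sum(pattern_prob(pt) for pt in patterns))  # ~1.0
```

The same marginalization is what the Bock and Aitkin (1981) E-step performs for every observed response pattern, which is why the quadrature grid size matters for both speed and precision.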
The condition number of the information matrix is the maximum singular value divided by the minimum singular value and provides a rough gauge of the stability of a solution (Luenberger & Ye, 2008, p. 239). For example, models that are amply identified have a log condition number close to 0, whereas barely identified models will have a large positive log condition number. When the information matrix is not positive definite, the MLE is unstable and may be a saddle point (Luenberger & Ye, 2008, p. 190). For reference, bias is defined as θ − θ̂ (columns 4 and 5) and the Monte Carlo parameter covariance matrix is simply the covariance computed with each trial's MLE θ̂ as the rows of data (column 6).
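For intuition, the condition number of a small symmetric information matrix can be computed by hand from its eigenvalues (which equal the singular values for a symmetric positive semidefinite matrix); the 2×2 values below are hypothetical:

```python
import math

def cond_2x2_sym(a, b, c):
    """Condition number (max/min singular value) of the symmetric 2x2
    matrix [[a, b], [b, c]]. For symmetric positive semidefinite matrices
    the singular values coincide with the eigenvalues."""
    mean = (a + c) / 2
    radius = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    hi, lo = mean + radius, mean - radius
    # A non-positive minimum eigenvalue signals a non-PD information matrix
    return float("inf") if lo <= 0 else hi / lo

# A well-conditioned information matrix vs. a nearly singular one
print(cond_2x2_sym(2.0, 0.1, 2.0))    # close to 1: stable solution
print(cond_2x2_sym(2.0, 1.999, 2.0))  # huge: nearly unidentified
```

In practice one would use an SVD routine for larger matrices; the 2×2 closed form is shown only to keep the example dependency-free.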

Measures of quality
In theory, SEs approach 0 in proportion to N^(−1/2), where N is the sample size.

Table 2. Descriptive summary of the Monte Carlo simulation studies
Notes: The first column (#P) is the number of free parameters in the model. Where the Unidentified column is 0, all trials were included. Trials were considered unidentified if the iteration limit was reached or if the log condition number computed using the covariance of the gradients was greater than log(CondNum). V is the Monte Carlo parameter covariance matrix. To summarize relative differences (RDs) for a set of parameters, the Euclidean or l2-norm, ||RD||2, is used.

SEs are an incomplete measure of information matrix estimation quality because they only reflect parameter variances. Accurate parameter covariances can be regarded as evidence that the estimation algorithm will generalize to other models. Kullback-Leibler (KL) divergence was used to assess the quality of the variance-covariance matrix as a whole. For zero-mean multivariate Normal distributions with covariance matrices Σ1 and Σ0, KL divergence was defined as

KL(Σ1 ‖ Σ0) = ½ [tr(Σ0^(−1) Σ1) − K + ln(det Σ0 / det Σ1)],

where K is the dimension of Σ.
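The KL divergence between zero-mean Gaussians can be sketched for 2×2 covariance matrices (with a hand-rolled inverse and determinant so the example stays self-contained; the matrices are hypothetical):

```python
import math

# KL divergence between two zero-mean bivariate Normal distributions:
#   KL = (tr(S0^{-1} S1) - K + ln(det S0 / det S1)) / 2,  K = dimension.

def kl_zero_mean_2x2(s1, s0):
    """KL(N(0, s1) || N(0, s0)) for 2x2 covariance matrices (nested lists)."""
    det0 = s0[0][0] * s0[1][1] - s0[0][1] * s0[1][0]
    det1 = s1[0][0] * s1[1][1] - s1[0][1] * s1[1][0]
    # Inverse of s0, then the trace of inv(s0) @ s1
    inv0 = [[s0[1][1] / det0, -s0[0][1] / det0],
            [-s0[1][0] / det0, s0[0][0] / det0]]
    trace = sum(inv0[i][0] * s1[0][i] + inv0[i][1] * s1[1][i] for i in range(2))
    K = 2
    return 0.5 * (trace - K + math.log(det0 / det1))

identical = [[1.0, 0.2], [0.2, 1.0]]
print(kl_zero_mean_2x2(identical, identical))  # 0.0: identical matrices
print(kl_zero_mean_2x2([[1.5, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]))
```

Because the divergence is 0 only when the two covariance matrices agree exactly, it penalizes errors in the off-diagonal covariances that SEs alone cannot detect.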

Procedure
We evaluated convergence properties, accuracy, and elapsed time of Oakes, MR-SEM, Tian-SEM, and Agile-SEM with 500 Monte Carlo replications. The completed data information matrix (Equation (1)) and central difference Richardson extrapolation with an initial step size of 10^−3 and 2 iterations were included as low and high accuracy benchmarks, respectively. A relative EM tolerance of 10^−11 was used without EM acceleration. This relative tolerance roughly corresponds to an absolute tolerance of 10^−6 for the models of interest. The quantity I(θ̂; Y_m) required by Oakes (Equation (11)) was estimated by forward difference with a step size of 10^−5. Richardson extrapolation was not used, so only N + 1 evaluations of the M-step gradient were required.
Since both MR-SEM and Tian-SEM depend on the parameter convergence trajectory, EM acceleration was disabled for trials of these algorithms. Without EM acceleration, the EM iteration limit was raised to 750 from the default of 500 to protect many replications of Model cyh1 from early termination. The SEM tolerance was set to the square root of the nominal absolute EM tolerance, 10^(−6/2) = 10^−3 (Meng & Rubin, 1991, p. 907). Other EM and SEM tolerances could have been selected, but would involve a trade-off: either accuracy would be improved at the cost of speed or vice versa. As will be seen, Oakes outperforms all SEM family algorithms in both accuracy and speed. Table 3 exhibits the percentage of models for which each algorithm converged. MR-SEM and Tian-SEM failed to converge for a substantial number of trials. For these algorithms, a failure to converge not only squanders the time spent on SEM, but if SEM is to be reattempted, then the model must be re-fit from starting values. Although Agile-SEM performs well, Oakes exhibits the best performance.

Application
A subset of data from the 2009 Program for International Student Assessment (Bryer, 2012; Organisation for Economic Co-operation and Development, 2009) was used to illustrate the performance of Oakes. Responses to 35 mathematics items by students in the United States were analyzed. A total of 2,981 examinees were represented, but non-missing responses per item ranged from 1,040 to 1,618. Items were scored in 2 or 3 outcome categories. Item calibration used the graded response model, consisting of 73 parameters. IFA Model Builder for OpenMx (Pritikin, 2016) was used to create the analysis script.
Both Oakes and Richardson extrapolation were used to estimate standard errors of the item parameters. As before, Richardson extrapolation used central difference with an initial step size of 10^−3 and 2 iterations, and Oakes used forward difference with a step size of 10^−5. The EM algorithm converged with a maximum absolute gradient of 1.25 × 10^−2. For both algorithms, the condition number of the information matrix was estimated at about 28. The maximum absolute difference between the SE estimates was 1.31 × 10^−3. The SEs are plotted in Figure 1. Richardson extrapolation took 20.08 s and Oakes took 0.17 s.

Discussion and conclusion
We compared the convergence properties, accuracy, and elapsed time of Oakes and Supplemented EM family algorithms for a diverse selection of IFA models. Oakes exhibited superior accuracy and speed. In the present article, only four models were examined in detail. More research is needed to firmly establish whether the superior accuracy of Oakes generalizes. However, we argue that the evidence is already persuasive. When an algorithm is implemented optimally according to theoretical considerations, the deciding factor between algorithms may be the parsimony of the theory. By virtue of its theoretical simplicity, we suggest that the accuracy of Oakes cannot be surpassed by other numerical approaches.
Although SEs are a useful tool, they are not the most accurate way to assess the variability of estimated parameters. If any parameters are close to a boundary of the feasible set, then likelihood-based confidence intervals should be used instead (e.g. Pek & Wu, 2015). Likelihood-based confidence intervals are comparatively slow to compute, but offer higher accuracy than a Wald test and are well supported by OpenMx.
Complete source code for all algorithms discussed is part of the OpenMx source distribution available from http://openmx.psyc.virginia.edu/. The OpenMx website additionally contains documentation and user support to assist users in analysis of their own data using item response models. OpenMx is a package for the R statistical programming environment (R Core Team, 2014).