Markov Chain Monte Carlo Estimation of Normal Ogive IRT Models in MATLAB

By modeling the interaction between persons and items at the item level for binary response data, item response theory (IRT) models have proved useful in a wide variety of applications in various fields. This paper provides the requisite information and a description of software that implements the Gibbs sampling procedures for the one-, two- and three-parameter normal ogive models. The software is implemented as the MATLAB package IRTuno. The package is flexible enough to allow a user to simulate binary response data, set the numbers of total and burn-in iterations, specify starting values or prior distributions for model parameters, check convergence of the Markov chain, and obtain Bayesian fit statistics. Illustrative examples are provided to demonstrate and validate the use of the software package. The m-file v25i08.m is also provided as a guide for the user of the MCMC algorithms with the three dichotomous IRT models.


Introduction
Item response theory (IRT) provides a collection of models that describe how items and persons interact to yield probabilistic correct/incorrect responses. The influence of items and persons on the responses is modeled by distinct sets of parameters, so that the probability of a correct response to an item is a function of the person's latent trait, θ_i, and the item's characteristics, ξ_j, i.e., P(y = correct) = f(θ_i, ξ_j).
The model assumes one θ_i parameter for each person and is commonly referred to as the unidimensional model, signifying that each test item measures some facet of the unified latent trait. Much research has been conducted on the development and application of such IRT models in educational and psychological measurement (e.g., Bock and Aitkin 1981; Mislevy 1985; Patz and Junker 1999b; Tsutakawa and Lin 1986).
Simultaneous estimation of both item and person parameters in IRT models introduces statistical complexities into the estimation task, as consistent estimates are not available. This problem has made estimation procedures a primary focus of psychometric research for decades (Birnbaum 1969; Bock and Aitkin 1981; Molenaar 1995). Recent attention has focused on Markov chain Monte Carlo (MCMC; see e.g., Chib and Greenberg 1995) simulation techniques, which have been influential in modern Bayesian analysis, where they are used to summarize the posterior distributions that arise in the Bayesian prior-posterior framework (Carlin and Louis 2000; Chib and Greenberg 1995; Gelfand and Smith 1990; Gelman, Carlin, Stern, and Rubin 2004; Tanner and Wong 1987). MCMC methods have proved useful in practically all aspects of Bayesian inference, such as parameter estimation and model comparison. A key reason for the widespread interest in them is that they are extremely general and flexible, and hence can be used to sample univariate and multivariate distributions when other methods (e.g., marginal maximum likelihood) either fail or are difficult to implement. In addition, MCMC allows one to model the dependencies among parameters and sources of uncertainty (Tsutakawa and Johnson 1990; Tsutakawa and Soltys 1988). Albert (1992; see also Baker 1998) was the first to apply an MCMC algorithm, known as Gibbs sampling (Chib and Greenberg 1995; Gelfand and Smith 1990; Geman and Geman 1984), to the two-parameter normal ogive (2PNO; Lord and Novick 1968) IRT model, using the data augmentation idea of Tanner and Wong (1987). Johnson and Albert (1999) further generalized the approach to the three-parameter normal ogive (3PNO; Lord 1980) IRT model. With no prior information concerning the item parameters, noninformative priors were adopted in their implementation so that inference was based solely on the data. However, in some applications, informative priors are preferred over noninformative priors. This is especially the case with the 3PNO model, where improper noninformative priors are to be avoided for the reason described later in this paper. Moreover, when comparing several candidate models, Bayes factors are commonly adopted in the Bayesian framework, but they are not defined under noninformative priors (Gelman et al. 2004). Studies have also shown that, by incorporating prior information about item parameters, model parameters can be estimated more accurately with smaller sample sizes (Mislevy 1986; Swaminathan and Gifford 1983, 1985, 1986).
In view of the above, the objective of this paper is to provide a MATLAB (The MathWorks, Inc. 2007) package that implements Gibbs sampling procedures for the one-, two-, and three-parameter normal ogive IRT models, with the option of specifying noninformative or informative priors for the item parameters. Section 2 reviews the three unidimensional models. Section 3 briefly describes the MCMC algorithms implemented in the package IRTuno. In Section 4, a brief illustration is given of a Bayesian model selection technique for testing the fit of the model. The package IRTuno is introduced in Section 5, where a description is given of common input and output variables. In Section 6, illustrative examples are provided to demonstrate the use of the source code. Finally, a few summary remarks are given in Section 7.
It has to be noted that more complicated MCMC procedures have to be adopted for the logistic form of IRT models. For example, Patz and Junker (1999a,b) adopted Metropolis-Hastings within Gibbs (Chib and Greenberg 1995) for the two-parameter logistic (2PL) and the three-parameter logistic (3PL) models. As Gibbs sampling is relatively easier to implement, and the logistic and normal ogive forms of the IRT model are essentially indistinguishable in model fit or parameter estimates given proper scaling (Birnbaum 1968; Embretson and Reise 2000), the MCMC procedures for logistic models are not considered in this paper.

IRT models
The unidimensional IRT model provides a fundamental framework for modeling the person-item interaction by assuming one latent trait. Suppose a test consists of k dichotomous (0-1) items, each measuring a single unified trait, θ. Let y = [y_ij]_{n×k} represent a matrix of n examinees' responses to the k items, so that

    y_ij = 1, if person i answers item j correctly,
    y_ij = 0, if person i answers item j incorrectly,

for i = 1, ..., n and j = 1, ..., k.
The probability of person i obtaining a correct response to item j can be defined as

    P(y_ij = 1 | θ_i, β_j) = Φ(θ_i − β_j)    (1)

for the one-parameter normal ogive (1PNO) model, where Φ denotes the standard normal cumulative distribution function, β_j is associated with item difficulty, θ_i is a scalar latent trait parameter, and the term "one-parameter" indicates that there is one item parameter β_j in the model; as

    P(y_ij = 1 | θ_i, α_j, β_j) = Φ(α_j θ_i − β_j)    (2)

for the 2PNO model, where α_j is a positive scalar parameter describing the item discrimination; and as

    P(y_ij = 1 | θ_i, α_j, β_j, γ_j) = γ_j + (1 − γ_j) Φ(α_j θ_i − β_j)    (3)

for the 3PNO model, where γ_j is a pseudo-chance-level parameter, indicating that the probability of a correct response is greater than zero even for those with very low trait levels. The 3PNO model is applicable to objective items, such as multiple-choice or true-or-false items, where an item may be too difficult for some examinees. As one may note, the three models are increasingly more general and hence more complex.
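As a language-agnostic illustration of the three response functions, the following sketch uses only the Python standard library; it is not part of the MATLAB package, and the function names are our own:

```python
# Response probabilities for the 1PNO, 2PNO and 3PNO models.
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_1pno(theta, b):
    """1PNO: one item parameter, the difficulty b."""
    return Phi(theta - b)

def p_2pno(theta, a, b):
    """2PNO: adds a positive discrimination a."""
    return Phi(a * theta - b)

def p_3pno(theta, a, b, g):
    """3PNO: adds a pseudo-chance-level (lower asymptote) g."""
    return g + (1.0 - g) * Phi(a * theta - b)
```

Note the nesting: setting g = 0 in the 3PNO function recovers the 2PNO model, and further setting a = 1 recovers the 1PNO model, which is why the three models are increasingly more general.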

MCMC algorithms
Gibbs sampling is one of the simplest MCMC algorithms used to obtain item and person parameter estimates simultaneously. The method is straightforward to implement when each full conditional distribution associated with a particular multivariate posterior distribution is a known distribution that is easy to sample. Its general underlying strategy is to iteratively sample each item parameter, e.g., ξ_j, where ξ_j = (α_j, β_j)', and person parameter, θ_i, from their respective posterior distributions, conditional on the sampled values of all other person and item parameters, starting from initial values ξ_j^(0) and θ_i^(0). This iterative process continues for a sufficient number of samples after the posterior distributions converge to stationary distributions (a phase known as burn-in).
As with standard Monte Carlo, the posterior means of all the samples collected after the burn-in stage are taken as estimates of the true parameters ξ_j and θ_i. Similarly, their posterior standard deviations are used to describe the statistical uncertainty. However, Monte Carlo standard errors cannot be calculated directly from the sample standard deviations, because subsequent samples in each Markov chain are autocorrelated (e.g., Patz and Junker 1999b). Among the standard methods for estimating them (Ripley 1987), batching is said to be a crude but effective method (Verdinelli and Wasserman 1995) and hence is the one considered in this paper.
Here, a long chain of samples is separated into contiguous batches of equal length, and the mean of each batch is computed. The Monte Carlo standard error of the estimate is then the standard deviation of these batch means divided by the square root of the number of batches. More sophisticated methods for estimating standard errors can be found in Gelman and Rubin (1992).
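The batching scheme just described can be sketched as follows; this is a hypothetical helper for illustration, not an IRTuno routine:

```python
import numpy as np

def mcse_batch(chain, n_batches=50):
    """Batch-means Monte Carlo standard error of the posterior mean.

    The chain is cut into contiguous batches of equal length; the MCSE of
    the estimate is sd(batch means) / sqrt(number of batches)."""
    chain = np.asarray(chain, dtype=float)
    m = len(chain) // n_batches                       # batch length
    means = chain[:m * n_batches].reshape(n_batches, m).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(n_batches)
```

Batching is crude because it ignores correlation between adjacent batches, but with batches long relative to the chain's autocorrelation time the batch means are approximately independent, which is what makes the method effective in practice.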
The Gibbs sampler for each IRT model is now briefly described. For ease of illustration, Section 3.1 starts with the more general 3PNO model.

Gibbs sampling for the 3PNO model
To implement Gibbs sampling for the 3PNO model defined in (3), two latent variables, Z and W, are introduced (Albert 1992; Tanner and Wong 1987): Z_ij ∼ N(η_ij, 1), where η_ij = α_j θ_i − β_j, and W_ij, an indicator of whether or not person i knows the correct answer to item j. With prior distributions assumed for θ_i, ξ_j and γ_j, the joint posterior distribution of (θ, ξ, γ, W, Z) is hence proportional to the product of the likelihood function f(y | θ, ξ, γ) and the priors, with p_ij being the probability function for the 3PNO model as defined in (3).
In the literature, noninformative priors for the item parameters have been adopted in descriptions of the model specification (see e.g., Béguin and Glas 2001; Glas and Meijer 2003). However, they result in an improper posterior distribution, because the 3PNO model, as defined in (3), can be viewed as a mixture model with two components, namely 1 and Φ(α_j θ_i − β_j), where the probability of an observation coming from each component is γ_j or 1 − γ_j, respectively. Although the prior for the non-component parameter of the mixture (γ_j in this context) can be chosen in a typical fashion (Richardson and Green 1997), improper noninformative priors for the component-specific parameters are not recommended, as the joint posterior distribution is then not defined and parameter estimates are likely to be unstable (Diebolt and Robert 1994). Hence, with a normal prior for θ_i, a conjugate Beta(s_j, t_j) prior for γ_j, and informative conjugate priors for α_j and β_j, the full conditional distributions of W_ij, Z_ij, θ_i, ξ_j and γ_j can be derived in closed form, where s_j and t_j are the two parameters of the specified Beta prior distribution, a_j denotes the number of persons who do not know the correct answer to item j, and b_j denotes the number of correct responses obtained by guessing. It has to be noted that when s_j = t_j = 1, the prior distribution for γ_j is uniform. Higher values of these parameters lead to more precise prior information. In practice, these parameters should be chosen so that E(γ_j) = s_j / (s_j + t_j) equals some pre-specified value (see e.g., Swaminathan and Gifford 1986). It is also noted that a proper normal prior for θ_i with specific µ and σ² values (e.g., µ = 0 and σ² = 1) ensures unique scaling and hence is essential for resolving a particular identification problem in the model (see e.g., Albert 1992, for a description of the problem).
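The W_ij and γ_j updates, which are the steps that distinguish the 3PNO sampler from the 2PNO one, can be sketched as follows. This is an illustrative Python sketch consistent with the description above, not IRTuno source code, and the function names are our own:

```python
# W and gamma steps of the 3PNO Gibbs sampler (data-augmentation treatment,
# cf. Beguin and Glas 2001).
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_knows_given_correct(eta, gamma):
    """P(W_ij = 1 | y_ij = 1): a correct answer arose from knowing the item
    with probability Phi(eta) / (Phi(eta) + gamma * (1 - Phi(eta))).
    An incorrect answer implies W_ij = 0 with certainty."""
    P = Phi(eta)
    return P / (P + gamma * (1.0 - P))

def gamma_posterior_params(s, t, a_j, b_j):
    """Beta full conditional for gamma_j: among the a_j persons who do not
    know item j, b_j guessed correctly, so
    gamma_j | W, y ~ Beta(s + b_j, t + a_j - b_j)."""
    return s + b_j, t + a_j - b_j
```

A draw of γ_j is then obtained as, e.g., `numpy.random.default_rng().beta(*gamma_posterior_params(s, t, a_j, b_j))`, and W_ij is a Bernoulli draw with the conditional probability above.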

Gibbs sampling for the 2PNO model
Assuming that no guessing is involved in the item responses (i.e., γ_j = 0), the 2PNO model, which is not a mixture model, is much simpler than the 3PNO model, and hence the prior distributions can be chosen in a typical fashion. The Gibbs sampler involves updating samples for fewer model parameters, namely θ, ξ, and the augmented continuous variable Z. The implementation of the Gibbs sampling procedure thus involves three sampling processes: a sampling of the augmented Z parameters from their truncated normal full conditionals, a sampling of the person traits θ from (9), and a sampling of the item parameters ξ from their normal full conditional, assuming either noninformative uniform priors with α_j > 0 and p(β_j) ∝ 1, or informative priors as in (10). See Albert (1992) for a detailed illustration of the procedure.
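For readers who prefer working code to prose, the following is a self-contained NumPy sketch of Albert's (1992) three-step sampler for the 2PNO model under flat priors with α_j > 0. It is not the IRTuno source, and all names here are illustrative; the standard-library `statistics.NormalDist` supplies the normal CDF and inverse CDF:

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()  # standard normal cdf / inv_cdf from the stdlib

def sample_Z(y, eta, rng):
    """Draw Z_ij ~ N(eta_ij, 1) truncated to (0, inf) if y_ij = 1 and to
    (-inf, 0] if y_ij = 0, via the inverse-CDF method."""
    n, k = y.shape
    U = rng.uniform(size=(n, k))
    Z = np.empty((n, k))
    for i in range(n):
        for j in range(k):
            m = eta[i, j]
            lo = nd.cdf(-m)                             # P(Z_ij <= 0)
            p = lo + U[i, j] * (1 - lo) if y[i, j] else U[i, j] * lo
            p = min(max(p, 1e-12), 1 - 1e-12)           # guard extreme tails
            Z[i, j] = m + nd.inv_cdf(p)
    return Z

def gibbs_2pno(y, iters=150, burn=75, seed=7):
    rng = np.random.default_rng(seed)
    n, k = y.shape
    theta = rng.standard_normal(n)
    a, b = np.ones(k), np.zeros(k)
    th_sum, a_sum, b_sum, kept = np.zeros(n), np.zeros(k), np.zeros(k), 0
    for it in range(iters):
        Z = sample_Z(y, np.outer(theta, a) - b, rng)
        # theta_i | Z, xi with a N(0, 1) prior:
        prec = 1.0 + np.sum(a ** 2)
        theta = (Z + b) @ a / prec + rng.standard_normal(n) / np.sqrt(prec)
        # xi_j = (alpha_j, beta_j)' | Z, theta: normal linear regression of
        # Z[:, j] on X = [theta, -1], with alpha_j restricted to be positive.
        X = np.column_stack([theta, -np.ones(n)])
        cov = np.linalg.inv(X.T @ X)
        L = np.linalg.cholesky(cov)
        for j in range(k):
            hat = cov @ (X.T @ Z[:, j])
            xi = hat + L @ rng.standard_normal(2)
            while xi[0] <= 0:                           # keep alpha_j > 0
                xi = hat + L @ rng.standard_normal(2)
            a[j], b[j] = xi
        if it >= burn:
            th_sum += theta; a_sum += a; b_sum += b; kept += 1
    return th_sum / kept, a_sum / kept, b_sum / kept
```

Each sweep is exactly the three steps in the text: a truncated normal draw for Z, a normal draw for each θ_i, and a bivariate normal draw for each ξ_j restricted to positive discrimination.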

Gibbs sampling for the 1PNO model
Likewise, the MCMC algorithm for the 1PNO model, a special case of the 2PNO model, involves only θ, β, and Z, where β = (β_1, ..., β_k)'. The implementation of the Gibbs sampling procedure again involves three sampling processes: a sampling of the augmented Z parameters from (13), a sampling of the person traits θ from (9), and a sampling of each item difficulty parameter β_j from its normal full conditional, which under a uniform prior is N((1/n) Σ_i (θ_i − Z_ij), 1/n), since Z_ij ∼ N(θ_i − β_j, 1).
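The β step is the only piece that differs from the 2PNO sampler, and under a uniform prior it can be sketched in a few lines (an illustrative sketch, not IRTuno code):

```python
import numpy as np

def sample_beta_1pno(Z, theta, rng):
    """beta_j | Z, theta ~ N( mean_i(theta_i - Z_ij), 1/n ) under a flat
    prior, since Z_ij ~ N(theta_i - beta_j, 1)."""
    n = Z.shape[0]
    mean = (theta[:, None] - Z).mean(axis=0)
    return mean + rng.standard_normal(Z.shape[1]) / np.sqrt(n)
```

With an informative normal prior, the same update holds with the usual conjugate-normal adjustment of the mean and precision.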

Bayesian model choice technique
In the Bayesian framework, the adequacy of the fit of a given model is evaluated using several model choice techniques, among which the Bayesian deviance is considered and briefly illustrated here. It should be noted that this measure provides a model comparison criterion: it evaluates the fit of a model in a relative, not absolute, sense.
The Bayesian deviance information criterion (DIC) was introduced by Spiegelhalter, Best, and Carlin (1998; see also Carlin and Louis 2000), who generalized the classical information criteria to one that is based on the posterior distribution of the deviance. The criterion is defined as DIC = D̄ + p_D, where D̄ ≡ E_{ϑ|y}(D) = E(−2 log L(y|ϑ)) is the posterior expectation of the deviance (with L being the likelihood function and ϑ denoting all model parameters), and p_D = E_{ϑ|y}(D) − D(E_{ϑ|y}(ϑ)) = D̄ − D(ϑ̄) is the effective number of parameters, with D(ϑ̄) = −2 log(L(y|ϑ̄)) evaluated at the posterior mean ϑ̄. To compute the DIC, MCMC samples of the parameters, ϑ^(1), ..., ϑ^(G), can be drawn with the Gibbs sampling procedure, and D̄ is then approximated by the average of −2 log L(y|ϑ^(g)) over the G draws. Small values of the deviance suggest a better-fitting model. Generally, more complicated models tend to provide better fit; penalizing for the effective number of parameters (p_D) hence makes the DIC a more reasonable measure to use. Indeed, this measure has been found useful for model comparisons in the IRT literature (e.g., Sheng and Wikle 2007, 2008).
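Given the log-likelihood evaluated at each retained draw and at the posterior mean, the arithmetic above is a one-liner per quantity; the following hypothetical helper (not a gsIRTuno routine) makes the definitions concrete:

```python
import numpy as np

def dic_from_samples(loglik_draws, loglik_at_post_mean):
    """Approximate the DIC from MCMC output.

    loglik_draws: log L(y | theta^(g)) for each retained draw g.
    loglik_at_post_mean: log L(y | posterior mean of the parameters).
    Returns (Dbar, Dhat, pD, DIC)."""
    D = -2.0 * np.asarray(loglik_draws, dtype=float)
    Dbar = D.mean()                        # posterior expected deviance
    Dhat = -2.0 * loglik_at_post_mean      # deviance at the posterior mean
    pD = Dbar - Dhat                       # effective number of parameters
    return Dbar, Dhat, pD, Dbar + pD
```

Since DIC = D̄ + p_D = 2D̄ − D(ϑ̄), a model is penalized both for fitting poorly on average and for using many effective parameters.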

Package IRTuno
The package IRTuno contains two major user-callable routines: a function for generating unidimensional IRT data using the 1PNO, 2PNO or 3PNO model titled, simIRTuno, and a function that implements MCMC to obtain posterior samples, estimates, convergence statistics, or model fit statistics, gsIRTuno.Both functions require the user to have access to the MATLAB Statistics Toolbox.
The function simIRTuno has input arguments N, K, iparm, and p for the number of respondents, the number of items, user-specified item parameters, and the IRT model from which the binary response data are generated (p=1 for the 1PNO model, p=2 for the 2PNO model, and p=3 for the 3PNO model), respectively. The optional iparm argument allows the user either to input the respective item parameters for each model, or to generate them randomly from uniform distributions so that α_j ∼ U(0, 2), β_j ∼ U(−2, 2), and/or γ_j ∼ U(.05, .6). The user can further choose to store the simulated person (theta) and item (item) parameters.
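To clarify what such a simulation entails, here is a hypothetical Python analogue of simIRTuno following the defaults described above; the MATLAB function itself may differ in its details, and this sketch is not part of the package:

```python
import numpy as np
from math import erf, sqrt

def sim_irt_uno(N, K, p=2, iparm=None, seed=0):
    """Simulate an N-by-K binary matrix from the 1PNO (p=1), 2PNO (p=2) or
    3PNO (p=3) model.  If iparm is None, item parameters are drawn as
    a ~ U(0,2), b ~ U(-2,2), g ~ U(.05,.6), mirroring the stated defaults."""
    rng = np.random.default_rng(seed)
    if iparm is None:
        a = rng.uniform(0, 2, K) if p >= 2 else np.ones(K)
        b = rng.uniform(-2, 2, K)
        g = rng.uniform(0.05, 0.6, K) if p == 3 else np.zeros(K)
    else:
        a, b, g = iparm
    theta = rng.standard_normal(N)                  # N(0, 1) person traits
    Phi = np.vectorize(lambda x: 0.5 * (1 + erf(x / sqrt(2))))
    prob = g + (1 - g) * Phi(np.outer(theta, a) - b)
    y = (rng.uniform(size=(N, K)) < prob).astype(int)
    return y, theta, (a, b, g)
```

Passing `iparm` plays the role of the optional argument: fixed item parameters are used instead of the random defaults, which is how the parameter-recovery study in Section 6 is set up.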
The function gsIRTuno initially reads in the starting values for the parameters (th0, item0), or sets them to the default values suggested in the literature (Albert 1992) for the 1PNO and/or 2PNO model, with γ_j = 0.2 (Béguin and Glas 2001) additionally for the 3PNO model. It then applies the Gibbs sampler to a user-specified IRT model (parm), iteratively drawing random samples of the parameters from their respective full conditional distributions. The normal prior distribution for θ_i (tprior) can assume µ = 0, σ² = 1 (the default) or any values of interest. The prior distributions for the item parameters in the 1PNO and 2PNO models can be noninformative (flat=1) or informative (flat=0, the default). In the latter case, the user can specify any values of interest or use the default values, namely µ_α = 0 and σ²_α = 1 for α_j (aprior), and µ_β = 0 and σ²_β = 1 for β_j (gprior). For the reason provided in Section 3.1, only informative priors can be specified for ξ_j (flat=0) in the 3PNO model, where the two parameters of the Beta prior distribution for γ_j (cprior) can be s_j = 5, t_j = 7 (the defaults) or any values of interest. The algorithm continues until all the (kk) samples are simulated, with the early burn-in samples (burnin) being discarded, where kk and burnin can be 10,000 and kk/2 (the defaults) or any values of interest. It then computes the posterior estimates, posterior standard deviations, and Monte Carlo standard errors of the person and item estimates (iparm, pparm). Posterior samples of these parameters can also be stored (samples) for further analysis.
In addition to Monte Carlo standard errors, convergence can be evaluated using the Gelman-Rubin R statistic (Gelman et al. 2004) for each model parameter. The usual practice is to use multiple Markov chains started from different points. Alternatively, a single chain can be divided into sub-chains, so that convergence is assessed by comparing the between- and within-sub-chain variances. Since a single chain is less wasteful in the number of iterations needed, the latter approach is adopted to compute the R statistic (R) in gsIRTuno. Moreover, the Bayesian deviance estimates, including D̄, D(ϑ̄), p_D and DIC, can be obtained (deviance) to measure the relative fit of a model. The function's input and output arguments are completely specified in the m-file.
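The single-chain variant of the R statistic can be sketched as follows (an illustrative helper, not the gsIRTuno implementation):

```python
import numpy as np

def rhat_split(chain, n_splits=4):
    """Gelman-Rubin R from one chain, split into n_splits sub-chains and
    assessed by comparing between- and within-sub-chain variances."""
    chain = np.asarray(chain, dtype=float)
    m = len(chain) // n_splits
    sub = chain[:m * n_splits].reshape(n_splits, m)
    W = sub.var(axis=1, ddof=1).mean()            # within-sub-chain variance
    B = m * sub.mean(axis=1).var(ddof=1)          # between-sub-chain variance
    var_plus = (m - 1) / m * W + B / m            # pooled variance estimate
    return np.sqrt(var_plus / W)
```

Values close to 1 indicate that the sub-chains agree, i.e., that the chain has reached its stationary distribution; a chain that is still drifting produces sub-chains with very different means and hence R well above 1.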

Illustrative examples
To demonstrate the use of the IRTuno package, simulated and real data examples are provided in this section to illustrate item parameter recovery as well as model goodness of fit.

Parameter recovery
For parameter recovery, three 1000-by-10 (i.e., n = 1000 and k = 10) dichotomous data matrices were simulated from the 1PNO, 2PNO, and 3PNO models, respectively, using the item parameters from Béguin and Glas (2001, p. 551), which are shown in the first column of Tables 1, 2 and 3. The Gibbs sampler was implemented to recover the item parameters for the respective 1PNO and 2PNO models assuming the informative or noninformative prior distributions described previously, and those for the 3PNO model assuming informative priors. The posterior means and standard deviations of the item parameters α, β, and/or γ, as well as their Monte Carlo standard errors and Gelman-Rubin R statistics, were obtained and are displayed in the remaining columns of the tables.
It has to be pointed out again that improper priors for component-specific parameters in mixture models tend to result in unstable parameter estimates, as the joint posterior distribution is not defined, and hence they are to be avoided for the 3PNO model. On the other hand, as informative priors lead to a proper posterior distribution for the model, valid estimates of item and person parameters can be obtained with sufficient iterations. Indeed, as Table 1 suggests, the Markov chain did reach stationarity with a run length of 30,000 iterations and a burn-in period of 25,000 iterations. Moreover, the posterior estimates of ξ_j are fairly close to the true parameters.
For the 1PNO and 2PNO models, stationarity was reached within 10,000 iterations, and the item parameters were estimated with sufficient accuracy (see Tables 2 and 3). In addition, the two sets of posterior estimates, resulting from the different prior distributions, differ only slightly from each other, signifying that the posterior estimates are not sensitive to the choice between noninformative and informative priors for α and/or β.

Model goodness of fit
To illustrate the use of the Bayesian deviance measure for assessing the fit of a model, a 500-by-30 dichotomous data matrix was simulated from the 2PNO model using item parameters randomly drawn from the uniform distributions described in Section 5. The Gibbs sampler was subsequently implemented for each of the 1PNO, 2PNO and 3PNO models, so that 10,000 samples were simulated with the first 5,000 set to be burn-in. It has to be noted that noninformative priors were adopted for the item parameters in the 1PNO and 2PNO models, whereas informative priors were adopted for the 3PNO model. The Gelman-Rubin R statistics suggest that the chains converged to their stationary distributions within 10,000 iterations. Hence, the Bayesian deviance estimates, including D̄, D(ϑ̄), p_D and DIC, were obtained from each implementation and are displayed in Table 4. It is clear from the table that both the posterior deviance (column 2) and the Bayesian DIC (column 5) estimates pointed to the correct 2PNO model, indicating that it provided the best fit among the three candidate models.

Empirical example
A subset of the College Basic Academic Subjects Examination (CBASE; Osterlind 1997) English data was further used to illustrate the Bayesian approach to checking model goodness of fit. The data contain binary responses of 600 independent college students to a total of 41 English multiple-choice items. It is noted that in real test situations the actual structure is not readily known; hence, model comparison is necessary for establishing the model that provides a relatively better representation of the data. The 1PNO, 2PNO and 3PNO models were each fit to the CBASE data using Gibbs sampling with a run length of 10,000 iterations and a burn-in period of 5,000, which was sufficient for the chains to converge. The Bayesian deviance estimates for each model, shown in Table 5, suggest that the 3PNO model provided a slightly better description of the data, even after taking model complexity into consideration.

Discussion
With functions for generating dichotomous response data and implementing the Gibbs sampler for a user-specified unidimensional IRT model, IRTuno allows the user to set the numbers of total and burn-in samples, specify starting values and prior distributions for model parameters, check convergence of the Markov chain, as well as obtain Bayesian fit statistics. The package leaves it to the user to choose between noninformative and informative priors for the item parameters in the 1PNO and 2PNO models. In addition, the user can set the location and scale parameters of the prior normal distributions for θ_i, α_j and β_j, or the two parameters of the Beta prior for γ_j, to reflect different prior beliefs about their distributions. For example, if there is a stronger prior opinion that the item difficulties should be centered around 0, a smaller σ²_β can be specified in the gsIRTuno function, such as gprior=[0,0.5]. It is noted that if, during an implementation of the Gibbs sampler, a Markov chain does not converge within a given run length, additional iterations can be obtained by invoking the gsIRTuno function with the starting values th0 and item0 set to the posterior samples drawn on the last iteration of the previous chain. This is demonstrated in the example of assessing parameter recovery for the 3PNO model in v25i08.m.
In the CBASE example, one may note the small difference between the two DIC estimates for the 2PNO and 3PNO models in Table 5. Given the complexity of the 3PNO model, it is, however, not clear whether this small difference is of practical significance. Consequently, one may also want to consider Bayes factors, which provide more reliable and powerful results for model comparisons in the Bayesian framework. However, they are difficult to calculate, owing to the difficulty of evaluating the marginal density of the data analytically (Kass and Raftery 1995), and hence are not considered in this paper. In addition, this paper adopts the Gelman-Rubin R statistic for assessing convergence. Its multivariate extension, the Brooks-Gelman multivariate potential scale reduction factor (Brooks and Gelman 1998), may be considered as well.

Table 1 :
Posterior estimate, standard deviation (SD), Monte Carlo standard error of the estimate (MCSE) and Gelman-Rubin R statistic for each item parameter (α_j, β_j, γ_j) in the 3PNO model assuming informative priors (chain length = 30,000, burn-in = 25,000).

Table 3 :
Posterior estimate, standard deviation (SD), Monte Carlo standard error of the estimate (MCSE) and Gelman-Rubin R statistic for each item parameter (β_j) in the 1PNO model assuming informative and noninformative priors (chain length = 10,000, burn-in =

Table 5 :
Bayesian deviance estimates for the 1PNO, 2PNO and 3PNO models with the CBASE data.