
A Modeling Framework to Examine Psychological Processes Underlying Ordinal Responses and Response Times of Psychometric Data

  • Theory & Methods
  • Published in Psychometrika

An Erratum to this article was published on 20 June 2023.

Abstract

This article presents a joint modeling framework of ordinal responses and response times (RTs) for the measurement of latent traits. We integrate cognitive theories of decision-making and confidence judgments with psychometric theories to model individual-level measurement processes. The model development starts from the sequential sampling framework, which assumes that when an item is presented, a respondent accumulates noisy evidence over time to respond to the item. Several cognitive and psychometric theories are reviewed and integrated, leading to three psychometric process models with different representations of the cognitive processes underlying the measurement. We provide simulation studies that examine parameter recovery and show the relationships between latent variables and data distributions. We further test the proposed models with empirical data measuring three traits related to motivation. The results show that all three models provide reasonably good descriptions of observed response proportions and RT distributions. Moreover, different traits favor different process models, which implies that psychological measurement processes may have heterogeneous structures across traits. Our process of model building and examination illustrates how cognitive theories can be incorporated into psychometric model development to shed light on the measurement process, which has received little attention in traditional psychometric models.
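The evidence-accumulation idea in the abstract can be sketched as a generic two-boundary diffusion process. This is only a minimal illustration, not any of the article's three proposed models; the function name and all parameter values below are invented for exposition.

```python
import random

def diffusion_trial(drift, boundary=1.0, dt=0.001, sigma=1.0, ndt=0.3):
    """One two-boundary diffusion trial: evidence starts at 0 and accumulates
    noisily until it crosses +boundary ('agree') or -boundary ('disagree').
    ndt adds a constant nondecision time (encoding, motor) to the RT."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        # Euler step of a Wiener process with drift
        evidence += drift * dt + sigma * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    choice = "agree" if evidence >= boundary else "disagree"
    return choice, t + ndt

random.seed(1)
trials = [diffusion_trial(drift=1.5) for _ in range(200)]
p_agree = sum(choice == "agree" for choice, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

A positive drift (e.g., a respondent whose trait exceeds the item's strength) pushes the process toward the "agree" boundary, so a single mechanism generates both the response probabilities and the RT distribution, which is the core property the proposed models build on.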



Notes

  1. We use the term ‘item strength’ in a sense similar to item difficulty in IRT analyses of test data, in that a respondent whose latent trait exceeds the item strength has a higher probability of endorsing the personality/attitude item. The parameter can also be called ‘item attractiveness’ in that it represents how attractive the item is to respondents.

  2. An alternative to this choice would be item-wise nondecision time. However, modeling both person-wise and item-wise nondecision times is not feasible because their effects on the outcome variables are confounded: person-wise (item-wise) nondecision time parameters can also account for inter-item (inter-person) differences. We consider person-wise nondecision time in this article because measurement data typically have more respondents than items, so models with person-wise nondecision time parameters can better decompose RTs into decision and nondecision times.

  3. The simulated respondent in the Model 3 result, mentioned in the text, has \(\log (\gamma _p) = 1.520\), the largest value in the simulation study. The mean and SD of the data-generating \(\log (\gamma _p)\) values for Model 3 were \(-0.002\) and 0.498, respectively, and the second largest value was 1.162. Having more items can better constrain nondecision time parameters (Kang et al., 2022b). In particular, including an item with a strong inclination (i.e., one with a large positive or negative \(b_i\)) could be helpful, as the RT for such an item would be close to the respondent's minimum RT, which is spent mostly on nondecision processes.

  4. This was also the case in our simulation study. For example, when Model 2 was the data-generating model, Model 3 beat Model 2 in 3 out of 25 repetitions when judged by DIC. Moreover, there are at least two ways to compute the effective number of parameters and DIC (Gelman et al., 2013), and they can produce different results. For example, when Model 3 was the data-generating model, one way of calculating the effective number of parameters and DIC (Equation 7.10 in Gelman et al.) identified Model 3 as the best-fitting model in all repetitions, whereas another (Equation 7.8 in Gelman et al.) identified Model 1 as the best-fitting model in all repetitions. Thus, we could not draw consistent conclusions about the proposed models with DIC.
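The two DIC variants at issue can be made concrete with a toy conjugate-normal example. This sketch uses invented data and makes no claim about which computation corresponds to which equation number in Gelman et al. (2013); it only shows the mean-based and variance-based definitions of the effective number of parameters, both of which should be near 1 for a one-parameter model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy setup (invented data): y ~ Normal(mu, 1) with a flat prior on mu,
# so the posterior of mu is Normal(ybar, 1/n).
y = rng.normal(0.5, 1.0, size=50)
n, ybar = len(y), y.mean()
mu_draws = rng.normal(ybar, 1.0 / np.sqrt(n), size=4000)

def loglik(mu):
    """Total log-likelihood log p(y | mu) under the Normal(mu, 1) model."""
    return float(np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y - mu) ** 2))

ll_draws = np.array([loglik(m) for m in mu_draws])
ll_at_mean = loglik(mu_draws.mean())           # plug-in: log p(y | posterior mean)

p_dic1 = 2.0 * (ll_at_mean - ll_draws.mean())  # mean-based effective n. of parameters
p_dic2 = 2.0 * ll_draws.var()                  # variance-based alternative
dic1 = -2.0 * ll_at_mean + 2.0 * p_dic1
dic2 = -2.0 * ll_at_mean + 2.0 * p_dic2
```

The two penalties agree asymptotically for well-behaved posteriors but diverge in finite samples or for skewed posterior log-likelihoods, which is how the two DIC computations can rank competing models differently.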

  5. For example, adapt_delta and max_treedepth in Stan.

  6. Unlike our models, Ranger and Kuhn’s model does not consider nondecision time.


Author information


Corresponding author

Correspondence to Inhan Kang.

Ethics declarations

Code Availability

The Stan code used to fit the proposed models is available online at https://osf.io/76jb4/.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kang, I., Molenaar, D. & Ratcliff, R. A Modeling Framework to Examine Psychological Processes Underlying Ordinal Responses and Response Times of Psychometric Data. Psychometrika 88, 940–974 (2023). https://doi.org/10.1007/s11336-023-09902-z

