
Empirical Performance of a New User Cohort Method: Lessons for Developing a Risk Identification and Analysis System

  • Original Research Article
  • Published in Drug Safety

Abstract

Background

Observational healthcare data offer the potential to identify risks of medical products, but appropriate methodology has not yet been defined. The new user cohort method, which compares the post-exposure outcome rate among new users of a target drug with that in a referent comparator group, is the prevailing approach for many pharmacoepidemiology evaluations and has been proposed as a promising approach for risk identification, but its performance in this context has not been fully assessed.
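
For intuition only, the following minimal sketch shows the kind of crude post-exposure rate comparison the design is built around: an incidence rate ratio between a target and a comparator new-user cohort, with a Wald-style confidence interval. The function name and the aggregate counts are hypothetical, and the method evaluated in this study additionally adjusts for confounding (for example, through propensity scores) rather than relying on an unadjusted contrast.

```python
import math

def crude_incidence_rate_ratio(target_events, target_person_time,
                               comparator_events, comparator_person_time):
    """Crude post-exposure incidence rate ratio (target vs. comparator)
    with a Wald-style 95 % confidence interval on the log scale.
    Inputs are hypothetical aggregate counts, not data from the study."""
    rate_target = target_events / target_person_time
    rate_comparator = comparator_events / comparator_person_time
    irr = rate_target / rate_comparator
    # Standard error of log(IRR) for two independent Poisson counts
    se_log_irr = math.sqrt(1.0 / target_events + 1.0 / comparator_events)
    lower = math.exp(math.log(irr) - 1.96 * se_log_irr)
    upper = math.exp(math.log(irr) + 1.96 * se_log_irr)
    return irr, (lower, upper)

# Example with made-up counts: 30 events in 10,000 person-years
# among target new users vs. 20 events in 12,000 person-years among comparators
print(crude_incidence_rate_ratio(30, 10_000, 20, 12_000))
```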

Objectives

To evaluate the performance of the new user cohort method as a tool for risk identification in observational healthcare data.

Research Design

The method was applied to 399 drug-outcome scenarios (165 positive controls and 234 negative controls across 4 health outcomes of interest) in 5 real observational databases (4 administrative claims databases and 1 electronic health record database) and in 6 simulated datasets: one with no injected effect and five with injected relative risks of 1.25, 1.5, 2, 4, and 10, respectively.
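
As a rough illustration of how such simulated datasets can be constructed, the sketch below injects a chosen relative risk by scaling the outcome rate applied to exposed person-time; the baseline rate, cohort sizes, and function name are illustrative assumptions and do not reproduce the study's actual simulation procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cohort(n_exposed, n_unexposed, baseline_rate, injected_rr, years=1.0):
    """Draw outcome counts for exposed and unexposed person-time.
    baseline_rate is events per person-year; injected_rr multiplies the
    exposed rate (injected_rr = 1.0 reproduces the no-effect dataset).
    All parameter values are illustrative assumptions."""
    exposed_events = rng.poisson(baseline_rate * injected_rr * n_exposed * years)
    unexposed_events = rng.poisson(baseline_rate * n_unexposed * years)
    return exposed_events, unexposed_events

# One no-effect dataset and five with injected relative risks
for rr in (1.0, 1.25, 1.5, 2.0, 4.0, 10.0):
    print(rr, simulate_cohort(50_000, 50_000, baseline_rate=0.002, injected_rr=rr))
```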

Measures

Method performance was evaluated through the area under the receiver operating characteristic curve (AUC), bias, and coverage probability.
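
These three metrics can be computed from each test case's estimate, confidence interval, and ground-truth status roughly as follows; the rank-based AUC and log-scale bias shown here are standard formulations, while the array names and the assumption that estimates are log relative risks are illustrative rather than the study's exact implementation.

```python
import numpy as np

def auc_from_estimates(estimates, is_positive_control):
    """Rank-based AUC: probability that a randomly chosen positive control
    receives a larger estimate than a randomly chosen negative control."""
    labels = np.asarray(is_positive_control, bool)
    pos = np.asarray(estimates)[labels]
    neg = np.asarray(estimates)[~labels]
    # Pairwise comparisons; ties count as half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bias_and_coverage(log_rr_hat, log_rr_true, ci_lower, ci_upper):
    """Mean error of the log relative risk estimates, and the fraction of
    scenarios whose confidence interval contains the true value."""
    bias = np.mean(np.asarray(log_rr_hat) - np.asarray(log_rr_true))
    covered = (np.asarray(ci_lower) <= np.asarray(log_rr_true)) & \
              (np.asarray(log_rr_true) <= np.asarray(ci_upper))
    return bias, covered.mean()
```

Applied across the test cases, AUC summarizes how well the estimates discriminate positive from negative controls, while bias and coverage summarize the accuracy of the effect estimates themselves.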

Results

The new user cohort method achieved modest predictive accuracy across the outcomes and databases under study, with the top-performing analysis achieving an AUC greater than 0.70 in most scenarios. The performance of the method was particularly sensitive to the choice of comparator population. For almost all drug-outcome pairs there was a large difference, either positive or negative, between the true effect size and the estimate produced by the method, although this error was near zero on average. Simulation studies showed that in the majority of cases, the true effect size was not contained within the 95 % confidence interval produced by the method.

Conclusion

The new user cohort method can contribute useful information toward a risk identification system, but it should not be considered definitive evidence given the degree of error observed in the effect estimates. Careful consideration of comparator selection and appropriate calibration of the effect estimates are required to interpret study findings properly.
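
As a loose illustration of what calibration against negative controls might look like, the sketch below fits a normal empirical null distribution to negative-control log relative risk estimates and recomputes a two-sided p-value for a new estimate against that null. This simplified version ignores the sampling error of each individual estimate, and the variable names and example values are hypothetical; it is not the specific calibration procedure used or recommended by the authors.

```python
import math

def fit_empirical_null(negative_control_log_rrs):
    """Fit a normal empirical null to log relative risk estimates obtained
    for negative controls (drug-outcome pairs believed to have no effect)."""
    n = len(negative_control_log_rrs)
    mean = sum(negative_control_log_rrs) / n
    var = sum((x - mean) ** 2 for x in negative_control_log_rrs) / (n - 1)
    return mean, math.sqrt(var)

def calibrated_p_value(log_rr_hat, null_mean, null_sd):
    """Two-sided p-value of a new estimate under the empirical null,
    rather than under the theoretical null centered at zero."""
    z = (log_rr_hat - null_mean) / null_sd
    # Two-sided tail probability of a standard normal
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical negative-control estimates showing systematic upward bias
nulls = [0.15, 0.05, 0.22, -0.02, 0.18, 0.09, 0.30, 0.12]
mean, sd = fit_empirical_null(nulls)
print(calibrated_p_value(math.log(1.4), mean, sd))
```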

Acknowledgements

The Observational Medical Outcomes Partnership is funded by the Foundation for the National Institutes of Health (FNIH) through generous contributions from the following: Abbott, Amgen Inc., AstraZeneca, Bayer Healthcare Pharmaceuticals, Inc., Biogen Idec, Bristol-Myers Squibb, Eli Lilly & Company, GlaxoSmithKline, Janssen Research and Development, Lundbeck, Inc., Merck & Co., Inc., Novartis Pharmaceuticals Corporation, Pfizer Inc, Pharmaceutical Research Manufacturers of America (PhRMA), Roche, Sanofi-aventis, Schering-Plough Corporation, and Takeda. Drs. Ryan and Schuemie are employees of Janssen Research and Development. Dr. Schuemie received a fellowship from the Office of Medical Policy, Center for Drug Evaluation and Research, Food and Drug Administration. Drs. Schuemie and Madigan have previously received funding from FNIH. Susan Gruber and Ivan Zorych have no conflicts of interest to declare.

This article was published in a supplement sponsored by the Foundation for the National Institutes of Health (FNIH). The supplement was guest edited by Stephen J.W. Evans. It was peer reviewed by Olaf H. Klungel, who received a small honorarium to cover out-of-pocket expenses. S.J.W.E. has received travel funding from the FNIH to travel to the OMOP symposium and received a fee from FNIH for the review of a protocol for OMOP. O.H.K. has received funding for the IMI-PROTECT project from the Innovative Medicines Initiative Joint Undertaking (http://www.imi.europa.eu) under Grant Agreement no 115004, the resources of which are composed of financial contributions from the European Union’s Seventh Framework Programme (FP7/2007-2013) and in-kind contributions from EFPIA companies.

Author information

Corresponding author

Correspondence to Patrick B. Ryan.

Additional information

The OMOP research used data from Truven Health Analytics (formerly the Health Business of Thomson Reuters), including the MarketScan® Research Databases: MarketScan Lab Supplemental (MSLR, 1.2 m persons), MarketScan Medicare Supplemental Beneficiaries (MDCR, 4.6 m persons), MarketScan Multi-State Medicaid (MDCD, 10.8 m persons), and MarketScan Commercial Claims and Encounters (CCAE, 46.5 m persons). Data were also provided by the Quintiles® Practice Research Database (formerly General Electric’s Electronic Health Record database, 11.2 m persons). GE is an electronic health record database, while the other four databases contain administrative claims data.

Appendix

Fig. a

Incident user cohort design estimates for all test cases, by database. MSLR MarketScan Lab Supplemental, MDCD MarketScan Multi-State Medicaid, MDCR MarketScan Medicare Supplemental Beneficiaries, CCAE MarketScan Commercial Claims and Encounters, GE GE Centricity. Blue: negative controls; orange: positive controls; each line represents the point estimate and 95 % confidence interval for a drug-outcome pair in a particular database.


About this article

Cite this article

Ryan, P.B., Schuemie, M.J., Gruber, S. et al. Empirical Performance of a New User Cohort Method: Lessons for Developing a Risk Identification and Analysis System. Drug Saf 36 (Suppl 1), 59–72 (2013). https://doi.org/10.1007/s40264-013-0099-6
