Journal of Clinical Epidemiology

Volume 82, February 2017, Pages 119-127
Original Article
The Utrecht questionnaire (U-CEP) measuring knowledge on clinical epidemiology proved to be valid

https://doi.org/10.1016/j.jclinepi.2016.08.009

Abstract

Objectives

Knowledge of clinical epidemiology is crucial for practicing evidence-based medicine. We describe the development and validation of the Utrecht questionnaire on knowledge on Clinical epidemiology for Evidence-based Practice (U-CEP), an assessment tool to be used in the training of clinicians.

Study Design and Setting

The U-CEP was developed in two formats: two sets of 25 questions and a combined set of 50. The validation was performed among postgraduate general practice (GP) trainees, hospital trainees, GP supervisors, and experts. Internal consistency, internal reliability (item-total correlation [ITC]), item discrimination index, item difficulty, content validity, construct validity, responsiveness, test–retest reliability, and feasibility were assessed. The questionnaire was externally validated.
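As an illustration of how such item statistics are conventionally computed, the sketch below derives Cronbach's alpha, corrected item-total correlations, item difficulty, and an upper-versus-lower-group discrimination index from a respondents-by-items matrix of 0/1 scores. This is not the authors' analysis code; the function names, the 27% group split, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    # scores: (n_respondents, n_items) matrix of 0/1 item scores.
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total_correlations(scores: np.ndarray) -> np.ndarray:
    # Correlate each item with the rest score (total minus that item),
    # so an item is not correlated with itself.
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, i], total - scores[:, i])[0, 1]
        for i in range(scores.shape[1])
    ])

def item_difficulty(scores: np.ndarray) -> np.ndarray:
    # Proportion of respondents answering each item correctly.
    return scores.mean(axis=0)

def discrimination_index(scores: np.ndarray, frac: float = 0.27) -> np.ndarray:
    # Per-item difference in proportion correct between the highest- and
    # lowest-scoring groups (the conventional 27% split is an assumption).
    order = np.argsort(scores.sum(axis=1))
    n = max(1, int(frac * scores.shape[0]))
    low, high = scores[order[:n]], scores[order[-n:]]
    return high.mean(axis=0) - low.mean(axis=0)
```

For example, `cronbach_alpha(score_matrix)` on the full response matrix would yield the alpha reported in the Results, under these assumptions about the scoring.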

Results

Internal consistency was good, with a Cronbach's alpha of 0.8. The median item-total correlation and mean item discrimination index were satisfactory. Both sets were perceived as relevant to clinical practice. Construct validity was good. Both sets were responsive but failed on test–retest reliability. One set took 24 minutes and the other 33 minutes to complete, on average. GP trainees in the external validation sample had comparable results.

Conclusion

The U-CEP is a valid questionnaire for assessing knowledge of clinical epidemiology, a prerequisite for practicing evidence-based medicine in daily clinical practice.

Introduction

Knowledge of clinical epidemiology is crucial to be able to practice evidence-based medicine (EBM) in daily clinical practice [1]. Practicing EBM implies the ability to combine the best available evidence with the clinician's expertise and the patient's preferences [2]. Clinical epidemiology focuses on four important challenges clinicians face: first, how to accurately diagnose a patient's illness (diagnosis, D); second, how to determine what causes the disease (etiology, E); third, how to predict the natural history of the disease in an individual patient (prognosis, P); and fourth, how to estimate the effect of interventions on a patient's prognosis (therapy, Th). In routine clinical practice, these four domains are incorporated into medical decision making, following the so-called DEPTh model [1]. Clinical epidemiology provides the framework, knowledge, and skills for practitioners to critically appraise research evidence and translate research outcomes into daily clinical practice. Given its importance for adequate evidence-based practice in the future, monitoring theoretical knowledge of clinical epidemiology is important in the training of clinicians.

Testing the knowledge clinicians need to practice EBM is essential [3] and should focus on those aspects useful in clinical practice. The second Sicily Statement pointed out that, for a useful evaluation of EBM training, it should be clear which aspect(s) an assessment instrument intends to measure [4]. A number of questionnaires for testing the knowledge needed to practice EBM already exist [5], [6], [7], but in our view, these do not prioritize clinical relevance, are time-consuming to score, or assess therapeutic issues only. Importantly, the developers of those questionnaires often provide only minimal data on validation [5], [6], [7].

We previously developed an EBM training program for the vocational training of general practitioners. The focus of the program is the decision process in primary care, and we aim to integrate the training as much as possible into daily clinical practice [8]. The EBM training is strongly based on dilemmas derived from clinical practice and focuses on outcomes relevant to patients. It covers all clinical domains because many clinical queries pertain not only to therapeutic but also to diagnostic or prognostic topics [8].

We report on the development and validation of the Utrecht questionnaire on knowledge on Clinical epidemiology for Evidence-based Practice (U-CEP), a questionnaire suitable for the evaluation of EBM training, with a focus on those aspects relevant to clinical practice.

Section snippets

Development of the U-CEP

We postulated that an optimal questionnaire should address the content of EBM training, cover as many different aspects of EBM (ask, acquire, appraise, apply, and assess) as possible, contain questions on clinically relevant aspects with an equal distribution across the different types of clinically relevant research (DEPTh), and test the minimal required methodological knowledge to be able to translate research results to clinical practice. At first, we used our experiences as teachers of EBM

Final format of the U-CEP

The shortening process was based on the results of the individual item analysis of internal reliability and consistency, derived from the scores of the respondents (N = 154; 49 trainees [29 first-year GP trainees and 20 hospital trainees] and 19 supervisors after EBM training, 8 experts, and 78 GP trainees who studied medicine at the same university as the postgraduate training program). We started with a 95-item questionnaire and removed the question with the lowest ITC first (−0.073). We then
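As a sketch of the iterative shortening procedure described here, the loop below repeatedly drops the item with the lowest corrected ITC, recomputing the statistics after each removal. It reuses the hypothetical corrected_item_total_correlations helper from the earlier sketch and automates only the ITC criterion; the actual shortening also weighed content validity, consistency, and other item statistics.

```python
import numpy as np

def shorten_by_itc(scores: np.ndarray, target_items: int) -> list[int]:
    # Greedily drop the item with the lowest corrected ITC, one at a
    # time, until target_items remain. Returns indices of kept items.
    keep = list(range(scores.shape[1]))
    while len(keep) > target_items:
        itcs = corrected_item_total_correlations(scores[:, keep])
        keep.pop(int(np.argmin(itcs)))
    return keep
```

Recomputing the ITCs after every removal matters: dropping one item changes the total score, and with it the remaining items' correlations with that total.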

Discussion

The U-CEP measures knowledge of clinical epidemiology, focusing on aspects relevant to daily clinical practice. This is important because it has become clear that practitioners face challenges in incorporating biomedical knowledge from research into practice [17], [18], [19]. Other questionnaires measuring this knowledge focus on calculating and interpreting biostatistics (Berlin questionnaire), are more time-consuming to fill in and score, or consist of open-ended questions only

Acknowledgments

The authors thank all the participants who filled in the questionnaire(s) as well as Pieter Jan van der Schoot and Erwin Veltman for their technical support with the online version of the questionnaire.

Authors' contributions: M.F.K. was the primary investigator of this study and designed the study, acquired the data, analyzed and interpreted the data, drafted the article, and approved the final version to be published. M.E.L.B. was the supervisor of the study and designed the study, assisted in

References (22)

  • L. Fritsche et al. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ (2002).
Funding: None to report.

Competing interests: None to report.