Data Note

Dataset: Knowledge and attitude retention following an implicit bias classroom workshop

[version 1; peer review: 3 approved]
PUBLISHED 11 Jan 2022

This article is included in the Research on Research, Policy & Culture gateway.

Abstract

Background: Baylor College of Medicine provides a classroom-based implicit bias workshop to all third-year medical students to increase students’ awareness of their unconscious bias and develop strategies for reducing health care disparities. The workshop meets our immediate goals and objectives. However, we are unsure whether the benefit is long-term or diminishes over time.
Methods: To examine the concept retention from the implicit bias classroom workshop, we administered a self-developed seven-item seven-point Likert-scale survey to our medical students at pre-, post-, and one-year post-workshop attendance.
Results: The data set comprised survey results from two cohorts of our third- and fourth-year medical students from 2018 to 2020 and included 289 completed records at three measurement points. The data included: Student Identifiers, Sex, Race/Ethnicity, Student Enrollment Type, Cohort, and three repeated-measures results for each of the seven items, documented in wide format. The data may be of interest to those who wish to examine how factors including elapsed time, race, and sex may be associated with attitudes and understandings of implicit bias following related training, and to those interested in analytical methods for longitudinal research in general.

Keywords

implicit bias, classroom workshop, repeated measures, Likert scales survey, unconscious bias

Introduction

Baylor College of Medicine (BCM) has a long history of educating medical students to develop awareness of their potential inherent bias towards specific racial/ethnic groups that might influence their medical decision-making.1–4 Educators and researchers at BCM developed a classroom-based implicit bias workshop, which has been part of the third-year medical student curriculum since 2008.5 The workshop is currently part of a third-year course on social determinants of health and includes administration of the Implicit Association Test6 (IAT), student review of two articles7,8 about implicit bias in medical education, and small-group discussions. The IAT was used to examine students’ inherent predispositions toward certain demographic groups (e.g., race, weight), with the aim of prompting students’ self-awareness and self-reflection on their unconscious bias. The students were told to read the two articles before the session and to discuss them during the small-group session. The literature review allowed examination of how such unconscious bias may influence physicians’ clinical decision-making and was followed by active small-group discussion which fostered engagement with management strategies for reducing health care disparities (a schematic overview of the research design can be found in Figure 1).


Figure 1. Implicit bias workshop schematic overview and timeline.

Such educational practices have demonstrated an effective immediate influence on students’ awareness and management of implicit bias.4 However, we were uncertain whether this immediate impact following the one-session workshop would be sustained over time. If the educational effects of the IAT workshop diminish over time, a one-time classroom workshop may not be sufficient to prevent health care disparities among certain social groups. Offering more training in this area to our medical students would require additional time and resources in an already compressed curriculum. A better understanding of how the effects of implicit bias training persist following a classroom workshop would enable medical schools to design and develop curricula which maximize training efficiency and students’ future development as physicians.

The research manuscript generated from this study has been accepted for publication by Medical Teacher and is in press. However, the data might be valuable for those wishing to pursue additional secondary analyses of related research topics, or a meta-analysis combining multiple independent studies of similar topics. These data might also be valuable for those interested in using other analytical methods to investigate the same research question, or for new longitudinal research focusing on the analytical methods in general, beyond the topic of implicit bias.

Methods

Study design

This data set was collected as part of an ongoing curriculum evaluation and improvement process. However, when aggregated together for the purposes of this project, the data comprise a longitudinal observational study that measures students’ attitudinal change over three repeated measures: immediately before participating in the workshop, immediately after participating in the workshop, and one year after participating in the workshop (measurement timing can be found in Figure 1).

The implicit bias workshop was administered within the required Determinants, Disparities, and Social Health of Populations (D-DASH) course. All third-year students who took the D-DASH course were considered as potential participants in this study. They were informed of the purpose of this study prior to agreeing to complete the survey and learned that information collected through the survey would be used as part of a research protocol. They received the pre- and post-surveys through the web-based course evaluation platform e*Value.

Approximately one year later, we conducted a one-year post-survey of the students during their fourth year as part of their required capstone course, Advanced Physician Experience (APEX). All students in APEX courses were informed again that the survey was part of a research protocol and that they were voluntarily agreeing to participate in the study.

Study population

The study was based on data collected between 2018 and 2020 from two cohorts of medical students (Class of 2019: n = 168; Class of 2020: n = 185). Each class of students first received the pre- and post-surveys in their third-year (MS3) spring D-DASH course and then completed the same survey the following spring in their fourth year (MS4). Due to merged curricular tracks, a small number of students did not receive the implicit bias surveys and training in their MS3 spring but still received the one-year post-workshop survey in their MS4 spring term. These off-track students did not meet the research criterion and were excluded from the data analysis.

Data collection

Members of the BCM research team created the survey for this study in 2008, when the implicit bias workshop was first offered to students; it has since been revised and refined several times based on pilot data and students’ feedback. The final version included seven questions evaluating students’ thoughts on a physician’s or individual’s perceptions and management of unconscious bias. The survey used a Likert-style format, with seven options ranging from strongly disagree (1) to strongly agree (7) (the survey can be found in the extended data9).

Surveys were distributed through the course evaluation platform e*Value and remained active for two weeks to allow students to complete them. The response rates ranged from 92.2% to 100.0% (see Table 1 for complete response rates). Survey data were exported as Excel data files and were saved in a secure/encrypted web portal within the BCM Medicine network. The original data were identifiable, but we deidentified the data for subsequent analysis.

Table 1. Response rate for each data-collection period, by cohort.

                 Completed during D-DASH course           Completed during APEX course
Cohort           Pre-workshop survey  Post-workshop survey  One-year post-workshop survey
Class of 2019    166a/168b (98.8%)    162a/168b (96.4%)     155a/167b (92.8%)
Class of 2020    185a/185b (100.0%)   184a/185b (99.5%)     177a/192b (92.2%)

a Number of students completing the survey.

b Number of students receiving the survey.

The data file included 26 variables: Student Identifier, Gender, Race/Ethnicity, Student Enrollment Type, Cohort, and 21 variables for the survey results. Student Identifier is a unique number for each student. We used randomly generated 10-digit numbers to replace the original school identification numbers to protect the confidentiality of respondent data. Gender was self-defined by students. We allowed students to identify a gender other than male or female with an ‘other’ option and space to write their own answer, but all students identified themselves as either male or female. Students also selected their race/ethnicity. We merged ethnicities into more general and commonly used categories: White, Black or African American, Hispanic, Asian, other, or ‘prefer not to answer’. Student Enrollment Type was a dichotomous variable used to differentiate those who had regular curriculum scheduling for the period under study (“0”) from those who had an irregular curriculum schedule (“1”). There were 17 students in the latter category who completed the one-year post-survey two years after the workshop. These 17 students did not meet the criterion for inclusion in the primary analysis. However, this small sub-sample provided a less robust but still intriguing opportunity to examine retention of the implicit bias training over an even longer period after the classroom workshop.
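
The identifier replacement described above can be sketched in a few lines; the following is a minimal illustration only (the function name and the sample school IDs are hypothetical, not the actual procedure used):

```python
import secrets

def assign_random_ids(school_ids):
    """Map each original school ID to a unique random 10-digit number."""
    mapping, used = {}, set()
    for sid in school_ids:
        # Draw from [1_000_000_000, 9_999_999_999] so every ID has 10 digits.
        rid = secrets.randbelow(9_000_000_000) + 1_000_000_000
        while rid in used:  # re-draw on the (rare) collision
            rid = secrets.randbelow(9_000_000_000) + 1_000_000_000
        used.add(rid)
        mapping[sid] = rid
    return mapping

# Hypothetical school IDs; real identifiers are not reproduced here.
id_map = assign_random_ids(["A001", "A002", "A003"])
```

Keeping the mapping table separate from the released data file preserves the link for the research team while the shared dataset stays deidentified.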

The Cohort variable distinguished the Class of 2019, who took the pre- and post-survey in 2018 and the one-year post-survey in 2019, from the Class of 2020, who took the pre- and post-survey in 2019 and the one-year post-survey in 2020. The survey results comprised 21 variables because the survey had seven Likert-style items, each measured three times. The Likert-style items were coded numerically from 1 to 7 (1 for Strongly Disagree, 7 for Strongly Agree).
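
For readers working with the wide-format file, the 21 item-by-time columns can be reshaped into long format for repeated-measures analysis. A sketch using pandas, with illustrative column names and scores (the actual column names are given in the code book):

```python
import pandas as pd

# Illustrative wide-format rows mirroring the described layout:
# one column per item x time point (e.g. Q1_Pre, Q1_Post, Q1_OneYearPost).
wide = pd.DataFrame({
    "StudentID": [1234567890, 2345678901],
    "Q1_Pre": [6, 7], "Q1_Post": [7, 7], "Q1_OneYearPost": [6, 7],
    "Q2_Pre": [5, 6], "Q2_Post": [7, 6], "Q2_OneYearPost": [7, 7],
})

# Reshape to long format: one row per student x item x time point.
long = wide.melt(id_vars="StudentID", var_name="item_time", value_name="score")
long[["item", "time"]] = long["item_time"].str.split("_", n=1, expand=True)
long = long.drop(columns="item_time")
print(long.head())
```

The long format is what most mixed-model or repeated-measures ANOVA routines expect as input.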

We merged the three longitudinal surveys’ results into one file using the unique identifiers assigned to each student. The dataset can be found in the underlying data.9 We deleted 94 records that did not meet the research criterion: 37 records without the pre-survey, three records without the post-survey, and 54 records without the one-year post-survey. Those without the pre- or one-year post-survey were likely to have irregular curriculum scheduling due to a leave of absence or enrollment in an extended-time, off-cycle dual-degree program. Those without the post-survey likely failed to submit responses to it.
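
The merge-and-filter step described above can be reproduced with inner joins on the student identifier, which automatically drop any record missing one of the three waves. A minimal sketch with toy data (file contents and column names are illustrative):

```python
import pandas as pd

# Hypothetical per-wave tables keyed by the (random) student identifier.
pre = pd.DataFrame({"StudentID": [1, 2, 3], "Q1_Pre": [5, 6, 7]})
post = pd.DataFrame({"StudentID": [1, 2], "Q1_Post": [6, 7]})
one_year = pd.DataFrame({"StudentID": [1, 3], "Q1_OneYearPost": [7, 6]})

# Inner joins keep only students with records at all three time points,
# mirroring the deletion of incomplete records described above.
merged = pre.merge(post, on="StudentID").merge(one_year, on="StudentID")
print(merged)  # only StudentID 1 has all three waves
```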

A total of 289 records were available with completed measures at all three measurement points: 272 students from the regular curriculum tracks (130 from the Class of 2019, and 142 from the Class of 2020) and the remaining 17 with irregular curriculum scheduling (see Table 2). The resulting data were then merged with race/ethnicity and gender data self-identified by students, collected previously during the medical school admissions process. Students were aware that the study might report demographics to describe the sample composition.

Table 2. Measurement timing for each cohort and valid sample collected.

                 Measurement timing                    Valid sample
Cohort           Pre     Post    One-year post
Class of 2019    2018    2018    2019                  n = 130
                 2018    2018    2020                  n = 17*
Class of 2020    2019    2019    2020                  n = 142
* These 17 students on irregular curriculum schedules actually completed the “One-Year Post” survey two years after the workshop.

Data validation

The data set has been proofread for human error. A frequency summary table (Table 3) was created using the Pivot function in Microsoft Excel. Measurement results were negatively skewed toward the more positive response options, even after application of the sampling criteria. There were no missing data for any of the three repeated measures. We evaluated the survey quality using this dataset. Cronbach’s alpha, which measures the internal consistency of survey items, ranged from 0.86 to 0.87 across the pre-, post-, and one-year post surveys. The seven items had moderate to high inter-item correlations, with consistent patterns across the pre-, post-, and one-year post surveys. The first factor extracted by Exploratory Factor Analysis explained more than half of the variance, providing preliminary evidence of a unidimensional structure for the survey. These results provide evidence that the survey was sound and reliable. More detailed results for survey quality can be found in Table 4.
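
As one illustration of the reliability check, Cronbach’s alpha can be computed directly from a respondents-by-items score matrix. A minimal sketch with toy data (not the actual survey results):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1).sum() # sum of per-item variances
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy data: 5 respondents x 3 items on a 7-point scale (illustrative only).
toy = [[7, 6, 7], [5, 5, 6], [6, 6, 6], [4, 5, 4], [7, 7, 6]]
alpha = cronbach_alpha(toy)
```

The same matrix, restricted to one measurement wave at a time, would yield the per-wave alphas reported in Table 4.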

Table 3. Frequency of choices for each survey question at three measurement time-points.

Question  Timing          SD   MD  SlD    N  SlA   MA   SA  Total
Q1        Pre              8    1    5    9   24   97  145    289
          Post             4    4    1    3   20   82  175    289
          One-year post    4    1    1    9   16   88  170    289
Q2        Pre              2    2    1   10   26   79  169    289
          Post             –    –    3    6   16   71  193    289
          One-year post    –    –    2    8   18   67  194    289
Q3        Pre              –    2    2   10   19   80  176    289
          Post             –    1    1    5   16   71  195    289
          One-year post    –    –    –   10   13   55  211    289
Q4        Pre              2    4    6   13   56   91  117    289
          Post             4    5    5    8   35   86  146    289
          One-year post    –    5    2   14   42  104  122    289
Q5        Pre              2    3    4   16   29   86  149    289
          Post             1    2    2    6   20   80  178    289
          One-year post    –    1    6   10   17   78  177    289
Q6        Pre              –    3    7   18   51  106  104    289
          Post             –    1    2    8   36  107  135    289
          One-year post    –    1    3   20   26  112  127    289
Q7        Pre              –    5    8   32   60  113   71    289
          Post             –    1    2   13   53  109  111    289
          One-year post    –    1    4   16   51  120   97    289

SD = Strongly disagree; MD = Moderately disagree; SlD = Slightly disagree; N = Neither agree nor disagree; SlA = Slightly agree; MA = Moderately agree; SA = Strongly agree; – = no responses recorded.

Table 4. Specification of the survey.

                       Cronbach’s alpha   % of first-factor variance   Inter-item correlations
Pre survey                   0.86                    57%                     0.19–0.89
Post survey                  0.86                    59%                     0.20–0.90
One-year post survey         0.87                    59%                     0.16–0.90

Ethics statement

This study and the publication of the data set were approved by the Institutional Review Board for Human Subject Research at BCM (protocol number: H-45073). Written documentation of consent was waived for this study by the review board due to its low-risk nature. The participants were informed of the purpose of the survey while they took the D-DASH and APEX courses, that taking the survey was optional, and that non-participation would not impact their course grade. They were informed that, by taking the survey, they were voluntarily agreeing to participate in the study. Students were aware that all survey data would be kept confidential and that data would be deidentified for analysis and reporting.

Data availability

Underlying data

DANS: Implicit bias classroom workshop data. https://doi.org/10.17026/dans-zrv-mxsm

This project contains the following underlying data:

  • DataFileImplicitBias.xlsx (dataset and code book for the variables)

Extended data

This project contains the following extended data:

  • Implicit bias survey.pdf (copy of survey).

Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).

How to cite this article: Zhou Y, Purkiss J, Juneja M et al. Dataset: Knowledge and attitude retention following an implicit bias classroom workshop [version 1; peer review: 3 approved]. F1000Research 2022, 11:25 (https://doi.org/10.12688/f1000research.74442.1)
Open Peer Review

Reviewer Report 11 Mar 2022
Courtney West, College of Osteopathic Medicine, Sam Houston State University, Conroe, TX, USA 
Approved
The rationale for creating the dataset is clearly described. The purpose was to “examine concept retention” from an implicit bias classroom workshop using a seven-item seven-point Likert-scale survey. “The data set was comprised of survey results from two cohorts of …”

How to cite this report: West C. Reviewer Report For: Dataset: Knowledge and attitude retention following an implicit bias classroom workshop [version 1; peer review: 3 approved]. F1000Research 2022, 11:25 (https://doi.org/10.5256/f1000research.78200.r123367)
Views
19
Cite
Reviewer Report 24 Feb 2022
Ling Ning, Office of Assessment and Planning, University of Colorado Boulder, Boulder, CO, USA 
Approved
Thank you for the opportunity to review the manuscript Dataset: Knowledge and attitude retention following an implicit bias classroom workshop for F1000Research. The article was very well written with clear and precise language. The authors did a very good job …

How to cite this report: Ning L. Reviewer Report For: Dataset: Knowledge and attitude retention following an implicit bias classroom workshop [version 1; peer review: 3 approved]. F1000Research 2022, 11:25 (https://doi.org/10.5256/f1000research.78200.r122372)
Views
12
Cite
Reviewer Report 04 Feb 2022
Xuejun Ji, Department of Educational and Counselling Psychology, and Special Education, University of British Columbia (UBC), Vancouver, BC, Canada 
Approved
The rationale for the dataset building is fully articulated. The users can access the dataset directly and replicate the results easily. No missing data were found. The codebook is available for users, which allows users to conduct data wrangling and …

How to cite this report: Ji X. Reviewer Report For: Dataset: Knowledge and attitude retention following an implicit bias classroom workshop [version 1; peer review: 3 approved]. F1000Research 2022, 11:25 (https://doi.org/10.5256/f1000research.78200.r119372)
Author Response 04 Feb 2022
Yuanyuan Zhou, Department of Education, Innovation, and Technology, Baylor College of Medicine, One Baylor Plaza, Houston, 77030, USA
Thank you for your time to review this paper and provide feedback.
Competing Interests: No competing interests
