Multi-Institutional Survey of Fourth-Year Students' Self-Assessed Milestone-Based Skill Proficiency and Faculty Expectations During an Emergency Medicine Clerkship: Implications for Curriculum Development

Introduction: Emergency medicine milestones suggest skill performance expectations for graduating medical students. The objective of this study is to examine differences between students' perceived proficiency and faculty expectations relative to Level 1 EM milestones, identifying opportunities for curriculum development. Methods: Using ACGME milestone language, the authors developed a survey measuring students' perceived proficiency with 22 skills, which was administered to fourth-year medical students at 6 institutions. Similar surveys were sent to faculty to determine their expectations of students' skill proficiency. Differences between student and faculty responses were calculated. Results: There were 608 student and 114 faculty responses. There was a statistically significant difference between mean student and faculty responses for 13 of the 22 skills. For 10 of these skills, students rated their own skill proficiency higher than faculty expectations. For 3 of the skills, faculty rated their expectations higher than students' perceived proficiency. Conclusions: For pharmacology skills, student ratings were low, indicating an area on which to focus curriculum development. Items where student ratings are higher than faculty expectations may reflect overconfidence or a lack of faculty understanding of students' abilities. Formal assessment of skills in these areas would help clarify the reason and direct faculty and curriculum development.


Introduction
The Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Emergency Medicine (ABEM) released the Emergency Medicine (EM) Milestones in 2012 (Beeson et al., 2013). Incoming interns are expected to have achieved a Level 1 on each EM Milestone prior to medical school graduation.
Although the Level 1 Milestones are expected of an incoming intern, these objectives are not uniformly used for teaching and assessing students during their undergraduate medical education (Santen et al., 2013). In 2014, the AAMC released the Core Entrustable Professional Activities for Entering Residency (CEPAERs), a common set of core behaviors that should be expected of all graduates. While the CEPAERs provide guidance, medical schools still maintain the flexibility to set their own educational objectives and graduation requirements (Colleges, 2015). This variability in clinical performance expectations does not ensure that Level 1 Milestones are consistently met prior to graduation, and it sets the stage for varying skill levels among medical students and interns, as well as varying expectations among faculty and program directors.
Incongruent expectations of clinical abilities between faculty and learners have been demonstrated across different specialties (Dickson et al., 2013; Wenrich et al., 2010) as well as outside of United States training programs (Sicaja et al., 2006). The literature has also demonstrated differing expectations regarding the learner's involvement in patient care (Quillen et al., 2013) and procedural skills (Dickson et al., 2013). These varied expectations potentially yield inconsistent student supervision, with possible patient safety implications.
Unique characteristics common to most EM clerkships introduce even more inconsistency in expectations. The nature of EM scheduling results in medical students interacting with many different faculty during the clerkship, often within a single shift. When the increased autonomy experienced on many EM clerkships is layered on top of inconsistent faculty expectations, students are placed at a disadvantage.
Given that the EM Milestones are an expected starting point for incoming EM interns, the authors assessed students' self-perceived skill proficiency and comfort as well as faculty expectations of EM Milestone-based skill proficiency. Our objective was to measure the differences between faculty expectations and student confidence with respect to student performance of Milestone-derived behaviors.

Methods
We conducted a prospective observational study at six academic institutions throughout the United States. The study was approved by the Institutional Review Boards of the participating institutions. These institutions were a convenience sample based on participation interest expressed through the Clerkship Directors in Emergency Medicine listserv. All institutions had a required emergency medicine clerkship at the time of the study. Four of the institutions identify as university affiliated, and four identify as urban.
Fourth-year medical students enrolled in a 4-week emergency medicine course were approached during emergency medicine clerkship orientation and provided written consent. EM faculty at all six institutions were provided a study information sheet, and faculty participation in the online survey served as consent.
The survey instrument was designed by the authors and contained language taken from the Level 1 EM Milestones (see Figure 1S in the supplementary files). It contained 22 items, each rated on a 5-point Likert scale. Fifteen of these items asked students to rate their skill proficiency, and seven asked students to rate their comfort in performing a skill. Students completed the survey at the beginning of the clerkship and a second time after completing the clerkship.
Faculty were randomized to receive electronic surveys at one of three separate times during the study: July, November, or March. These time frames were chosen to represent expectations of students in the early, middle, and later parts of the year. The faculty survey was similar to the medical student survey (see Figure 2S in the supplementary files), differing only in language to reflect expectations rather than self-assessed proficiency or comfort. Faculty were instructed to complete the survey based on their expectations for fourth-year students at the beginning of an EM clerkship at the time of year the survey was completed.
Data from the faculty surveys were entered automatically into REDCap, an electronic data capture tool hosted at the home institution (Harris et al., 2009). REDCap (Research Electronic Data Capture) is a secure, web-based application designed to support data capture for research studies, providing: 1) an intuitive interface for validated data entry; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for importing data from external sources. A Master of Public Health student was responsible for obtaining medical student consent, administering the surveys, and entering the data into REDCap. From this point, data were viewed only in aggregate, and individual student data could not be identified.

Data Analysis
All item comparisons between faculty and student responses were tested using Wilcoxon rank-sum tests, since the items were measured on an ordinal Likert scale. For each set of comparisons, p-values were adjusted using the step-down Bonferroni (Holm) method. Since the surveys were collected at different times of the year, we examined the effect of survey collection time on faculty responses and students' pre-clerkship responses. Specifically, we tested whether the difference between the two sets of responses varied over time by testing the interaction between time period and response type (student pre-clerkship vs. faculty) in a proportional odds model for each item. We did not have identifiers that enabled linking of a student's pre-clerkship and post-clerkship responses; therefore, we were not able to perform a paired analysis and used the Wilcoxon rank-sum test for all item comparisons. We also performed analyses dichotomizing scores of 4/5 versus all other scores. Results were similar; therefore, we report only results using mean scores. All analyses were performed using SAS V9.3.
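The per-item comparison described above can be sketched as follows. This is a minimal illustration in Python rather than the SAS code used in the study; scipy's `mannwhitneyu` is equivalent to the Wilcoxon rank-sum test, and the `"holm"` method in statsmodels implements the step-down Bonferroni adjustment. The item names and Likert responses below are hypothetical.

```python
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def compare_items(student_responses, faculty_responses):
    """Wilcoxon rank-sum test per item, with step-down Bonferroni
    (Holm) adjustment across the set of comparisons."""
    items = list(student_responses)
    raw_p = []
    for item in items:
        # Two-sided rank-sum comparison of ordinal Likert scores
        _, p = mannwhitneyu(student_responses[item],
                            faculty_responses[item],
                            alternative="two-sided")
        raw_p.append(p)
    # Adjust the family of p-values; "holm" is the step-down Bonferroni method
    reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
    return {item: (p_adj, sig)
            for item, p_adj, sig in zip(items, adj_p, reject)}

# Hypothetical 5-point Likert responses for two illustrative items
students = {"ultrasound":   [4, 5, 4, 4, 5, 4, 5, 4, 4, 5],
            "pharmacology": [2, 3, 2, 2, 3, 2, 3, 2, 2, 3]}
faculty  = {"ultrasound":   [2, 3, 2, 3, 2, 3, 2, 2, 3, 2],
            "pharmacology": [4, 4, 5, 4, 4, 5, 4, 4, 5, 4]}

results = compare_items(students, faculty)
for item, (p_adj, sig) in results.items():
    print(f"{item}: adjusted p = {p_adj:.4f}, significant = {sig}")
```

Because responses could not be linked across time points, each comparison here treats the two groups as independent samples, mirroring the unpaired analysis in the study.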

Results/Analysis
Surveys were returned by 114 faculty: 33 during the early time period (July), 57 during the middle time period (November), and 24 during the late time period (March). A total of 426 faculty were sent surveys across all six participating sites, for an overall response rate of 27%. A total of 608 students returned pre-clerkship surveys and 533 students returned post-clerkship surveys; surveys were given to 908 students, yielding a pre-clerkship response rate of 67% and a post-clerkship response rate of 59%. Differences in students' self-assessed proficiency at the beginning of the EM clerkship compared with faculty expectations, expressed as mean scores, can be seen in Table 1.
There was a statistically significant difference for 13 of the 22 skills. For 10 of the statistically significant skills, students rated their own proficiency higher than faculty expectations: eliciting patient expectations, planning therapeutic interventions, performing procedures, understanding available resources, multi-tasking, using ultrasound, establishing rapport with patients, participating in peer teaching, navigating the healthcare system, and using the electronic health record (EHR). For 3 of the statistically significant skills, faculty rated their expectations higher than students' proficiency: ordering medications correctly, pharmacologic knowledge, and asking about allergies.
Changes in student responses between pre-clerkship and post-clerkship surveys can be seen in Table 2.
For all 22 skills there was a statistically significant improvement in student comfort from the beginning to the end of the clerkship.
Table 3 presents student self-assessed proficiency at the conclusion of the EM clerkship compared with faculty expectations as mean scores. For 19 skills, students rated their comfort significantly higher than faculty expectations. For 3 skills (ordering medications correctly, pharmacologic knowledge, and asking about allergies) there was no significant difference. At the beginning of the clerkship, students rated their skill proficiency in these areas significantly lower than faculty expectations, but this difference disappeared by the end of the clerkship. This suggests that students' skill proficiency in the area of pharmacology improves over the course of the clerkship.
Data comparing student self-assessed skill proficiency at the beginning of the EM clerkship with faculty expectations divided by the early, middle, and late academic year can be found in Tables 4-6.
The interaction between time and respondent type revealed 3 items with marginally significant interactions (diagnostic studies, p = 0.09; teaching, p = 0.06; and CPOE, p = 0.08). While not significant, this suggests there may be differences in the way students and faculty respond throughout the year. Additionally, several variables showed that faculty and student pre-clerkship ratings changed consistently over time: history and physical exam (p = 0.04), demonstrating empathy (p = 0.04), and using electronic health records (p = 0.02). Specifically, scores for both faculty and students were lowest in period 1 for history and physical exam and for using electronic health records, while demonstrating empathy was highest in period 2. It is reasonable to assume that expectations and self-assessed skill proficiency would be lower at the beginning of students' final year and higher towards the end. It is less clear why empathy would be highest in the middle of the academic year.

Discussion
Our study provides a quantitative analysis of differences between student self-assessed skill proficiency and faculty expectations across a range of skills considered mandatory for incoming interns in EM programs. The expectation is that students going into EM are proficient in these skills by medical school graduation. Previous studies have revealed significant differences between student and faculty expectations of skills at the start of medical school clinical years (Wenrich et al., 2010) and at the beginning of a family practice residency (Dickson et al., 2013). While we examined learners at a different stage of training and assessed a different set of skills, our study confirmed that there is a significant difference between student self-assessed proficiency and faculty expectations.
In most areas, students beginning their EM clerkship rated their skill proficiency higher than faculty expectations. In particular, students reported increased skill proficiency related to systems-based care and technical proficiency. This raises the question of whether faculty underestimate the skill proficiency of incoming students or whether students overestimate their own skill proficiency. The former may represent an opportunity for faculty development. Aligning faculty expectations with actual student proficiencies could prevent redundancy in teaching and make more efficient use of instructional time. The latter possibility, that students overestimate their own skill proficiency, is more concerning, as it could potentially compromise patient safety if students attempt to carry out tasks for which they are not proficient without seeking adequate faculty supervision.
Students in our study were significantly less comfortable with medications than faculty expected them to be. With the exception of entering orders in CPOE, which one might surmise would include medication orders, all of the skills in which students felt less proficient than faculty expectations had specifically to do with medications. This may represent a curricular gap in the preclinical and early clinical years. However, even the late-year comparison indicated students felt less proficient with pharmacologic knowledge than faculty expected. This raises the possibility that pharmacologic education may be a shortcoming throughout medical school and an important area for further curricular development.

Limitations
There are some potential limitations to our study. We surveyed students at multiple institutions to provide a broader and more generalizable assessment than could be obtained from a single-center study, but this does not guarantee that the data are applicable to all institutions across the country. Furthermore, we did not record the institution for each returned survey, so we cannot assess for differences from one site to another. Additionally, since we did not record individual student identifiers and students frequently do away rotations at multiple institutions, it is possible that a small number of students were administered the survey twice. Finally, not all senior medical students in this survey were applying for an emergency medicine residency. The EM Milestones were specifically designed to quantify skills for EM residents, and faculty may have different expectations of students based on their intended field of practice. Our study did not address this, and we did not collect information regarding students' future training plans.
Some educators may suggest using tools other than the EM Milestones to generate a list of skills of interest. Recently, EM physicians have proposed an additional milestone list specifically targeted to medical students (Santen et al., 2014). The Association of American Medical Colleges has also released a list of Entrustable Professional Activities that medical students are expected to demonstrate prior to advancing to residency training (Colleges, 2015). At the time of our data collection, neither of these lists had been published. Nevertheless, when developing our survey we did not use every action included in the Level 1 EM Milestones. Rather, we extracted skills that are broadly applicable across multiple specialties and that we believe represent a reasonable expected skill set for any new resident. More importantly, surveys generated from the EM student milestones or CEPAERs may have missed the pharmacologic concerns discovered in our study, as both lists mention therapeutics but neither specifically references medications.
Future work in this area should include direct assessment of student skill proficiency to determine whether actual student performance aligns more closely with student self-assessment or faculty expectations. Additional research should also focus on our finding of higher faculty expectations regarding pharmacologic education and the possible implications for patient safety.

Conclusion
Across a range of core clinical skills, medical student self-assessed proficiency differs significantly from faculty expectations. Specifically, in the area of pharmacology, students feel significantly less proficient than faculty expect them to be, and this may be an area for focused curriculum development. For items where student self-assessed proficiency is significantly higher than faculty expectations, formal assessment of students' abilities may help clarify whether the discrepancy is due to student overconfidence or a lack of faculty understanding of students' abilities, and may provide direction for both future curriculum development and faculty development.

Take Home Messages
There are significant differences between medical students' perception of their skill level and faculty expectations for student skill level across a wide range of skills.

Medical students rate themselves as having low proficiency in pharmacology.
Further research is required to clarify whether items where student self-assessed proficiency is significantly higher than faculty expectations are due to student overconfidence or a lack of faculty understanding of students' abilities.
Notes On Contributors
Katie E. Pettit, MD, is the Emergency Medicine Residency Associate Program Director at Indiana University, Indianapolis, Indiana.

Table 1 .
Mean student comfort pre-clerkship compared with mean faculty expectations. Light grey shading indicates higher student comfort; dark grey shading indicates higher faculty expectations.
* P-value has been adjusted using the Stepdown Bonferroni Method

Table 2 .
Difference between student comfort pre and post-clerkship.
* P-value has been adjusted using the Stepdown Bonferroni Method

Table 3 .
Faculty expectations compared with post-clerkship student comfort. Shading indicates higher student comfort.
* P-value has been adjusted using the Stepdown Bonferroni Method

Table 4 .
Pre-clerkship student comfort compared to early academic year faculty expectations.

Table 5 .
Pre-clerkship student comfort compared to mid-academic year faculty expectations.