Are medical student results affected by allocation to different sites in a dispersed rural medical school?

Background
The use of multiple clinical sites is a strategy that many medical schools adopt in order to provide clinical resources to support student learning across large healthcare systems.
Typically, a large urban medical school would allocate students to clinical placements in one of several teaching hospital sites, often called clinical schools. These sites are generally not far apart, provide care for parts of similar urban communities, and can be reached by students and faculty without too much difficulty. It is relatively easy to apply common assessment practices at all sites, and the equivalence of student experience and outcomes is rarely challenged. However, with the recent expansion of medical education, and particularly the establishment of medical education in rural and remote regions, clinical schools are now more commonly separated by substantial distances or travel times, in facilities caring for populations with different characteristics, and even functioning within different healthcare systems 1. The greater distances between sites often limit the movement of students and teachers between them. As a result of these differences, there is likely to be greater variation in student learning opportunities than in more traditional models. Despite these differences, the underlying principle of the varied models of dispersed learning is the same: students are placed where they can access sufficient clinical learning resources to support the curriculum and facilitate achievement of identical learning outcomes, usually with common assessment approaches. Indeed, this is a requirement for accreditation of medical schools in Australia and New Zealand 2. This poses an important question: are learning outcomes related to the clinical school site?
The literature provides little research evidence about this question. Two reported studies are noteworthy. The first found that students in community-immersed rural medical education in Australia obtained assessment results similar to those of students in an urban environment 3. The other found that the results of students studying in a dispersed clinical school model in Canada appeared not to be related to site 4. However, neither of these studies examined student performance in a much more dispersed clinical school structure, such as that found in one new Australian rural medical school 5, where there were 4 clinical school sites separated by up to 1500 km. This article reports an analysis of assessment data for graduating students from this school to determine whether assessment outcomes were related to clinical school site.

The setting
The communities associated with this medical school are relatively small, with populations of less than 200 000. None of these individual centres provides sufficient clinical resources for the entire student cohort, so senior students have to be allocated to 4 clinical school environments (including hospitals and primary care practices) that are up to 2000 km from the main base. The allocation pattern evolved from the first to subsequent cohorts as student numbers increased and more dispersed sites were developed. While none of the 4 sites is based on a large urban teaching hospital, there is still variation in the size, capacity and activity of the 4 hospitals, with three offering varied elements of tertiary care and one only secondary care. All provide for dispersed populations that have somewhat different characteristics, with marked variations in the proportion of Indigenous and immigrant populations. The most distant site is not in the same State and has a different healthcare system. Hence, it is likely that students have a combination of similar and different learning opportunities at each of the 4 hospitals and their surrounding primary care practices, as has been found elsewhere 6, and is currently being investigated locally. Students express preferences for the clinical school allocation for the last 2 years of the course. The penultimate year (Year 5) has workplace clinical assessment and an end-of-year battery of written papers and an objective structured clinical examination (OSCE). All students sit identical written examinations that are scored centrally, and are brought into the two larger hospitals for the OSCE, where trained examiners are randomly assigned, providing a combination of local and 'visiting' examiners. The final year has only workplace-based assessment. Given the variation in clinical site capacity, clinical experience and assessment locations, students and faculty have naturally wondered if there is any impact on learning outcomes, with many students in the early cohorts believing that staying at the main base was likely to result in higher academic achievement.

Methods
Summative assessment results of the first 5 graduating cohorts were available for analysis, but the first cohort was excluded from the study because this was a smaller cohort that was taught predominantly at the 2 more central sites.
Table 1 lists the assessment data for the second to fifth graduating cohorts. The effect of clinical site location on assessment results was examined through analysis of variance of mean scores in both Years 5 and 6. The effect of moving to different clinical schools on the rank order of student test performances was analysed by applying the Kruskal-Wallis test to the interquartile ranges of scores in each of Years 3-6 of the course, from before dispersal to after dispersal at the 4 clinical sites. This period also spanned the move from predominantly campus-based to predominantly workplace-based assessment. Ethics approval was granted by the James Cook University Ethics Committee.
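The two analyses described above can be sketched in a few lines of Python. The site labels and scores below are hypothetical placeholders for illustration only, not the study's data; `scipy` is assumed to be available.

```python
from scipy.stats import f_oneway, kruskal

# Hypothetical Year 5 assessment scores grouped by clinical school site
# (invented numbers, not the study's data)
site_scores = {
    "Site A": [68.2, 71.5, 74.0, 66.8, 70.1],
    "Site B": [69.4, 72.3, 67.9, 73.1, 70.6],
    "Site C": [70.0, 68.5, 72.8, 71.2, 69.9],
    "Site D": [67.5, 73.0, 70.4, 69.8, 72.1],
}
groups = list(site_scores.values())

# One-way analysis of variance of mean scores across the 4 sites
f_stat, anova_p = f_oneway(*groups)

# Kruskal-Wallis test on the same grouped scores (the study applied it
# to interquartile ranges of ranked scores across Years 3-6)
h_stat, kw_p = kruskal(*groups)

print(f"ANOVA: F = {f_stat:.2f}, p = {anova_p:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {kw_p:.3f}")
```

A p value above the chosen significance threshold (conventionally 0.05) for each test would, as in the study, indicate no detectable effect of site on scores or rankings.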

Results
Tables 2 and 3, respectively, provide the mean test scores and test rankings for students completing Years 5 and 6 at the four clinical school sites in each of the 4 cohorts. There were no significant differences in the mean scores of students studying at each site (p values = 0.15-0.63). There were also no significant differences overall between interquartile rankings across years as students dispersed to the different clinical sites (p values = 0.27-0.78). There were, however, some small changes in the rank order of students within sites, with some slightly improving their relative position, particularly at the smaller sites, and others slightly worsening their relative position, particularly at the larger sites; these changes had no effect on pass decisions at the end of the course. In general, the workplace-based assessment scores from the final year were higher than the more examination-based scores in the penultimate year.

Discussion
These results demonstrate that there was no significant effect of clinical site location on mean examination scores and rank order of students at the 4 sites. This is reassuring for students, faculty and regulators because it indicates that the learning objectives required by the curriculum (as approved by the Australian Medical Council 2) are achievable in each of the clinical school locations, even though they offer somewhat different clinical learning opportunities. The slight changes in rank order after dispersal are not significant but invite speculation. Such differences may happen in any curriculum, as students approach graduation and are assessed against endpoint learning objectives.
However, in this medical school the final year assessment is more workplace-based, raising the possibility that different attributes are being assessed. Information bias cannot be excluded because different sites may have assessed students differently, and therefore Year 6 results have to be interpreted with caution. The focus of student learning has been shown in associated research to be different in the 2 years (Sen Gupta TK, Hays RB, Kelly G, Jacobs H; unpubl. data; 2010). In summary, students in the penultimate year focus on learning to pass examinations, whereas those in the final year focus on learning to be junior doctors and preparing for longer term career objectives. Hence, those students who improve their scores in the final year may be better able to make the transition from student to workplace learning and assessment. It is interesting that the smaller centres are associated with the improvement in scores and rankings. These sites may offer a better workplace experience due to lower staff-to-patient ratios and a more general clinical case mix 7, and so may be more appropriate for workplace immersion models 8. However, the possibility of less robust supervisory structures in the smaller centres may mean that weaker students receive less support.
Until the differences in performance at the different sites are explored further, it may be prudent to retain weaker students at the larger, more central sites, closer to more formal educational support.
The higher mean scores derived from workplace assessment are also worthy of comment. It is possible that workplace-based assessment, which is conducted by clinicians who may form stronger relationships with students during longer workplace immersion placements, simply inflates scores artificially. However, because the mean score rises at all 4 sites, there is little direct effect on the rank order of students.
The effect on pass/fail decisions is more difficult to measure, because the number of students failing the final year should be low. To date, only one student has failed and repeated the final year, and with the relatively small numbers of students it is not possible to know whether this indicates a 'normal' rate, a lenient system, or a system that allows through to the final year only students who are ready for that transition. The subsequent performance and career choice of graduates, and possible correlations with these student performance data, are currently being investigated in a longitudinal cohort study.

Limitations
This study involved relatively small numbers of students in only 4 graduating cohorts from one dispersed rural medical school.The effect of selection bias cannot be discounted, because most students were able to choose the clinical school site they attended.

Conclusion
The choice of clinical school site for the final 2 years of an undergraduate rural medical school had no significant effect on mean assessment scores and only a minor effect on the rank order of student scores. It may be that workplace-immersed placements suit some students better than others, but at this school they appear to provide a valuable transition experience between undergraduate and postgraduate learning.