Profiling medical school learning environments in Malaysia: a validation study of the Johns Hopkins Learning Environment Scale

Purpose: While a strong learning environment is critical to medical student education, the assessment of medical school learning environments has confounded researchers. Our goal was to assess the validity and utility of the Johns Hopkins Learning Environment Scale (JHLES) for preclinical students at three Malaysian medical schools with distinct educational and institutional models. Two schools were new international partnerships, and the third was a school-leaver program established without an international partnership. Methods: First- and second-year students responded anonymously to surveys at the end of the academic year. The surveys included the JHLES, a 28-item survey with five-point Likert scale response options; the Dundee Ready Education Environment Measure (DREEM), the most widely used method to assess learning environments internationally; a personal growth scale; and single-item global learning environment assessment variables. Results: The overall response rate was 369/429 (86%). After adjusting for the medical school year, gender, and ethnicity of the respondents, the JHLES detected differences across institutions in four out of seven domains (57%), with each school having a unique domain profile. The DREEM detected differences in one out of five categories (20%). The JHLES was more strongly correlated than the DREEM with two thirds of the single-item variables and with the personal growth scale. The JHLES showed high internal reliability for the total score (α=0.92) and the seven domains (α=0.56-0.85). Conclusion: The JHLES detected variation between learning environment domains across three educational settings, thereby creating unique learning environment profiles. Interpretation of these profiles may allow schools to understand how they are currently supporting trainees and identify areas needing attention.


INTRODUCTION
Despite sustained interest from medical education researchers for over 50 years, evaluating medical school learning environments remains challenging. At least 15 tools have been used to assess undergraduate medical school learning environments, but none have strong evidence for validity [1]. Our team recently developed a new learning environment scale, the Johns Hopkins Learning Environment Scale (JHLES), at the Johns Hopkins University School of Medicine (JHUSOM); exploratory factor analysis on data from JHUSOM students yielded a learning environment assessment scale with 28 items spanning seven domains [2]. The goal of this study was to determine the validity of the JHLES in Malaysia and to ascertain its utility in detecting variation across medical school learning environments. We selected Malaysia because its rapid growth in medical schools (three accredited schools in 2000, and 21 accredited schools and up to 30 in operation today [3,4]), often in partnership with a foreign school, has uncertain implications for learning environment quality and makes the valid measurement of learning environments imperative.

We studied pre-clerkship learning environments at three medical schools, each representing one of the three models of undergraduate medical education currently operating in Malaysia. Two schools were new international partnerships in Malaysia: Perdana University Graduate School of Medicine (PUGSOM), a graduate-entry program established with JHUSOM, and Perdana University-Royal College of Surgeons in Ireland School of Medicine (PURCSI), a school-leaver program run by the Royal College of Surgeons in Ireland. The third medical school, Cyberjaya University College of Medical Sciences (CUCMS), is a private school-leaver program that was started in Malaysia without an international partner.

METHODS

Study design, subjects, and setting
This study involved the analysis of a cross-sectional survey given to all first- and second-year medical students at three medical schools at the end of the academic year. The characteristics of the three medical schools are summarized in Table 1. The students at CUCMS were surveyed at the end of the 2011-2012 academic year, and the students at PUGSOM and PURCSI were surveyed at the end of the 2012-2013 academic year. The surveys were administered electronically at PUGSOM and PURCSI, and given on paper, using optical mark reader sheets, at CUCMS. The responses were anonymous, and the data were de-identified and analyzed by one author, who had no role in teaching or evaluating medical students at any of the schools.

Ethical approval
Institutional review board approval was obtained from Perdana University in Malaysia.

Survey composition
The student questionnaire was developed by members of our study team from institutions in the USA and Malaysia with extensive experience in medical education and educational research. All questionnaires included the JHLES [2], the Dundee Ready Education Environment Measure (DREEM) [5], a personal growth scale [6], several questions yielding an overall assessment of the learning environment, and questions about demographics.
The JHLES was developed over a series of iterative steps from 2010 to 2012 at JHUSOM [2]. Briefly, a 2010 survey of fourth-year JHUSOM students assessed events that impacted their perceptions of the learning environment [7]. Those results informed follow-up surveys of all JHUSOM students in 2011 and 2012; more than 30 items were used for exploratory factor analysis, which led to the final scale that cumulatively accounted for 57% of variance. The JHLES has 28 items, each with five-point response options, spanning seven domains: community of peers (six items, 14% variance, α = 0.91), faculty relationships (six items, 12% variance, α = 0.80), academic climate (five items, 9% variance, α = 0.86), meaningful engagement (four items, 9% variance, α = 0.82), mentorship (two items, 5% variance, α = 0.74), acceptance and safety (three items, 4% variance, α = 0.58), and physical space (two items, 4% variance, α = 0.66). Each item is scored from 1 to 5, meaning that total scores on the JHLES can range from 28 to 140. In February 2012, prior to survey administration in Malaysia, the JHLES was modified to remove language that was specific to JHUSOM in Baltimore, Maryland, and was piloted on paper with focus groups of preclinical students from PURCSI. To provide response process validity, the survey was discussed with these students to ensure that the meaning of items was clear and that their understanding of the learning environment was consistent with that theorized in previous research. This process led to only minor wording changes to items. For example, the JHLES item "The School of Medicine encourages 'scholarship' and innovation" was altered to read "The School of Medicine encourages 'scholarly learning' and innovation," because students commonly thought of 'scholarship' as tuition support.
The DREEM is a 50-item survey in which students respond with their level of agreement across a five-point scale. Items were grouped by its developers into five categories: perceptions of teachers, perceptions of teaching, academic self-perception, perceptions of atmosphere, and social self-perception. Each item is scored from 0 to 4, such that composite DREEM scores can range from 0 to 200.
The personal growth scale was originally a nine-item survey developed for internal medicine residents that asked about changes in attitudes and behaviors over time on a five-point Likert scale (1, much worse; 2, worse; 3, no change; 4, better; 5, much better). We altered the survey slightly to make it applicable to preclinical medical students, framing the questions about changes in personal growth relative to when respondents started medical school. This resulted in a seven-item scale. We scored items from 1 to 5, such that total scores ranged from 7 to 35.
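The scoring rules for the three instruments above can be summarized in a short sketch. The helper below is hypothetical (not part of the published instruments); it simply sums item scores after checking the item count and per-item range stated in the text (JHLES: 28 items, 1-5; DREEM: 50 items, 0-4; personal growth scale: 7 items, 1-5).

```python
def scale_total(responses, n_items, lo, hi):
    """Hypothetical helper: sum item scores after validating the
    number of items and the allowed per-item score range."""
    if len(responses) != n_items:
        raise ValueError(f"expected {n_items} items, got {len(responses)}")
    if any(not (lo <= r <= hi) for r in responses):
        raise ValueError(f"item scores must lie in [{lo}, {hi}]")
    return sum(responses)

# JHLES: 28 items scored 1-5, so totals range from 28 to 140
jhles_total = scale_total([3] * 28, n_items=28, lo=1, hi=5)

# DREEM: 50 items scored 0-4, so totals range from 0 to 200
dreem_total = scale_total([4] * 50, n_items=50, lo=0, hi=4)

# Personal growth scale: 7 items scored 1-5, so totals range from 7 to 35
pgs_total = scale_total([5] * 7, n_items=7, lo=1, hi=5)
```

With all-neutral JHLES responses (every item scored 3) the total is 84, the midpoint of the 28-140 range.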
We also included three single-item global learning environment assessment variables that we have used in prior studies [2,9]. We asked students to rate their overall perception of the learning environment as (1) terrible, (2) poor, (3) fair, (4) good, or (5) exceptional, and their agreement with two statements: "The overall quality of the educational experience at the School of Medicine is excellent," and "Based on my sense of the learning environment at the School of Medicine, I would recommend it to a close friend as a great school to attend."

Statistical analysis
The basic descriptive statistics for the study population were tabulated, with tests for significant differences applied as appropriate. The DREEM and JHLES total scores and domain scores were compared across schools using bivariate analysis with analysis of variance or the Kruskal-Wallis test, as appropriate. Multivariate analysis adjusted for students' year in medical school, gender, and ethnicity. For institutional pairwise comparisons, we used t-tests with the Bonferroni correction. To establish relation to other variables validity for the JHLES, we used Pearson correlation coefficients to determine associations among JHLES total scores, DREEM total scores, and personal growth scale total scores. Spearman correlation coefficients were calculated for associations between learning environment scale totals and global learning environment assessment variables. The internal structure of the JHLES was assessed by calculating values of Cronbach's alpha to establish internal reliability and by calculating corrected item-total correlations. Stata ver. 13 (StataCorp, College Station, TX, USA) was used for all data analysis. Significance was set at P < 0.05.
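The two internal-structure statistics described above, Cronbach's alpha and corrected item-total correlations, can be sketched directly from their definitions. This is a minimal illustration of the standard formulas, not the authors' Stata code; the function names and the toy response matrix are our own.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the *remaining* items
    (the item itself is excluded to avoid inflating the correlation)."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])
```

An item whose corrected item-total correlation falls below 0.30 (the threshold used in the Results) contributes little shared variance with the rest of the scale and is a candidate for review.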

RESULTS

Learning environment differences as reflected by the JHLES, DREEM, and global learning environment assessment variables
In comparisons across all three schools, no differences were detected in the total JHLES scores, but statistically significant differences were found between schools in five of seven (71%) JHLES domains, with four (57%) remaining significant after adjusting for medical school year and student gender and ethnicity (all P < 0.05) (Table 2). PUGSOM had the highest ratings for 'faculty relationships' and 'acceptance and safety,' while CUCMS had the highest for 'mentorship.' The schools had nearly identical ratings for 'academic climate.' Differences were seen in two of five (40%) DREEM categories, with one (20%) remaining significant after multivariate adjustment (Table 2). None of the single-item global learning environment assessment variables detected differences in the learning environments across the three schools (Table 3).

Evidence for relation to other variables validity
The total JHLES score was highly correlated with the total DREEM score (r = 0.78) across schools, with little difference at each school (PUGSOM, r = 0.82; PURCSI, r = 0.82; CUCMS, r = 0.78).
The JHLES total was moderately to highly correlated with the three single-item global learning environment assessment variables and with the personal growth scale total (Table 4). The JHLES was more strongly correlated than the DREEM with two of the three single-item global assessment variables and with the total score of the personal growth scale.

Evidence for internal structure validity
The JHLES showed high internal consistency for its total across all schools and at each school (Table 5). Each of its seven domains had values of Cronbach's alpha within acceptable limits across schools, albeit with some site-to-site variability.
The corrected item-total correlations for the JHLES showed that all but two items had correlation coefficients above the acceptable level of 0.30 ("I am concerned that students are mistreated at the SOM" and "I feel concerned at times for my personal safety at the SOM").

DISCUSSION
In this study of students' perceptions of the preclinical learning environments at three Malaysian medical schools, we found that the JHLES was able to create unique learning environment domain profiles for each school. The JHLES also demonstrated sufficient internal reliability and correlation with external variables across the various student populations that were studied.
A systematic review of learning environment assessment tools showed that most did not have evidence for content, response process, internal structure, or relation to other variables validity [1]. The DREEM is by far the most published instrument, but only content and internal structure validity evidence have been demonstrated for it. The JHLES was recently developed according to guidelines for scale development, including exploratory factor analysis. Before this study, it had already been validated for face, content, response process, internal structure, and relation to other variables validity at JHUSOM [2]. In this study, we further substantiated the content, response process, internal structure, and relation to other variables validity by studying a new medical student population at three distinct medical schools in Malaysia.
The JHLES not only discriminated between schools but also created unique learning environment profiles for each institution, while maintaining similar levels of validity evidence at each site. This suggests that the scale may be suitable for a range of educational settings and particularly useful in assuring the quality of learning environments at new schools in Malaysia. Institutional profiles provide an opportunity for benchmarking, which is becoming a preferred approach for schools to identify areas of relative strength and weakness [10]. Moreover, such information may provide opportunities for institutions to collaborate and learn from one another. For example, PUGSOM may seek advice from CUCMS regarding their methods for fostering a 'community of peers.'

Few studies of medical school learning environments have provided multi-institutional comparisons. We identified two studies in which the DREEM was used for multi-institution comparisons. In 2008, differences were observed in total DREEM scores and in three of its five categories between two schools in Saudi Arabia; a school that used problem-based learning had higher ratings than one that did not [11]. In 2004, the DREEM found that a Scottish medical school had a higher total score and higher ratings in every category than three other medical schools (two in Saudi Arabia and one in Yemen) that had not been using modern teaching techniques [12]. In contrast to these studies, the DREEM found differences in only one category in this study of three recently opened Malaysian medical schools. All three schools in this study use modern curricula and interactive teaching methods. The DREEM contains several items that ask about interactive teaching methods, while many similar items were not retained during JHLES development because they did not vary among students at JHUSOM.
This may suggest that, as teaching methods have advanced and the formal and informal structures of institutions have changed, the DREEM may no longer detect differences as effectively as it once did.
As in our previous research, we found the JHLES and DREEM to be strongly correlated, suggesting that they are measuring a similar construct [9]. Also consistent with previous work, we found the JHLES to have superior concurrent criterion validity compared with the DREEM. After adjusting for covariates, the JHLES was able to detect differences in four of seven domains.