2020 rankings for US PharmD programs, research, and overall quality

Background: US News and World Report (USNWR) publishes well-known rankings of graduate health programs. Medicine and nursing programs are ranked using weighted, multi-criteria metrics, and medical schools are ranked separately according to their focus (research or primary care). USNWR pharmacy school rankings, by contrast, are based on a single-question peer perception survey.

Objective: The objective of this study was to develop a simple, transparent framework to rank US colleges and schools of pharmacy in overall quality and separately by program quality and research quality, using data readily available to the academy.

Methods: Data for three education quality and four research quality metrics were obtained for 2020. Each metric was standardized and ranked, and each set was then summed to determine separate ranks for education and research. Education and research scores were combined using equal weights to provide a single rank for overall quality. A sensitivity analysis was performed to determine the effect of assigning proportionately greater weight to education, similar to USNWR medical school rankings.

Results: Distinct ranks were produced for education, research, overall (education:research) 50:50, and overall 60:40. The sensitivity analysis suggests that the more disproportionately the education and research factors are weighted, the more the ranks change. Mid-ranked schools were most affected when weightings changed, owing to relative strength in one factor and relative weakness in the other. When weighted 60:40, nine (7%) mid-ranked programs improved in rank, while 11 (11%) worsened in rank compared with the 50:50 model.

Conclusion: Separately ranking education and research can highlight the diverse strengths of pharmacy schools. The proposed model is based on easily obtainable data and is easily reproducible, allowing for annual rankings. These rankings may be used by PharmD and PhD applicants when selecting schools and by pharmacy schools to benchmark true and aspirational peers.


Introduction
Nearly 25,000 individuals signed a change.org petition 1 circulated in 2018 to protect the pharmacist profession, including tightening accreditation requirements such as a minimum 80% pass rate on the North American Pharmacist Licensure Examination (NAPLEX) for schools to maintain accreditation. The petitioners cited accreditation status, US News and World Report (USNWR) rankings, and NAPLEX pass rates as indicators of pharmacy school quality, all of which have their own limitations. 2 USNWR rankings can be frustrating for colleges and schools of pharmacy and for the general public because peer perception is the only criterion; the rankings rest on an unvalidated rating scale with no criteria provided for the rating. 2,3 The USNWR rankings also do not align with other studies of quality. Nau and colleagues 2 found that when NAPLEX pass rates and USNWR rankings were compared side by side, many of the schools ranked in the USNWR top 10 were not in the top 50 for NAPLEX, and schools with very high NAPLEX pass rates were not as highly ranked. Flawed rankings of academic programs and institutions may also lead members of those programs and institutions to question their core identities and to react by developing inappropriate strategies and tactics in an effort to improve their rankings. [4][5][6][7]

Exploratory Research in Clinical and Social Pharmacy 7 (2022) 100169

Many published rankings for universities and academic programs use inconsistent methodologies and quality criteria, and may not be based on objective data, or on any data at all, as is the case with USNWR pharmacy program rankings. 8,9 However, USNWR medicine and nursing program rankings include objective metrics such as program selectivity and faculty resources (Table 1). 10,11 A study by Ried and Ried 12 found that USNWR program rankings were higher if programs were older, were affiliated with an academic health center, were classified as research-intensive, or were members of a Power 5 athletic conference.
The number of full-time faculty equivalents, pharmacy practice h-index, and research funding were also predictors of a program's USNWR ranking. Lastly, student PCAT comprehensive percentile and first-time NAPLEX pass rates were also found to influence rankings. Another study by Ried and Ried 13 found that faculty and student attributes significantly impacted pharmacy school rankings. Faculty metrics included full-time faculty equivalents and research productivity, which were stronger predictors than student academic preparation or NAPLEX scores. The models in their study demonstrate the possibility of creating rankings using more objective data. However, compiling data from individual schools for this purpose is laborious.
Multiple available data sources reflect quality and could be used to determine pharmacy school rankings, including the American Association of Colleges of Pharmacy (AACP) Office of Institutional Research, 14 the National Association of Boards of Pharmacy (NABP), 15 and the American Society of Health-System Pharmacists (ASHP). 16 AACP gathers information annually about pharmacy programs and students, full-time faculty, and external funding, and makes it available to its members upon request. NABP publishes annual NAPLEX pass rates, and ASHP disseminates data annually to pharmacy school deans on PharmD graduates' placement in ASHP-accredited Postgraduate Year 1 (PGY1) residency programs.
The objective of this study was to create a simple model for ranking pharmacy schools that improves upon the USNWR pharmacy school rankings by utilizing metrics and data available without additional surveys, calculating ranks with a transparent and easily reproducible method, and considering educational and research strengths separately to reflect the breadth and variety of strengths among all pharmacy schools in the academy.

Methods
Seven indicators attributed to pharmacy school quality were identified from readily available sources. Three indicators represented PharmD program educational quality: student-to-faculty ratio (total PharmD students enrolled divided by the number of full-time faculty), NAPLEX pass rate for first-time candidates, and percentage of graduates matched to an ASHP-accredited residency program (number of PGY1 residency matches across both phases divided by the number of PharmD graduates). The total PharmD students enrolled and the number of PharmD graduates were obtained from the AACP Profile of Pharmacy Students and Degrees Conferred tables. 17,18 The number of full-time faculty was taken from the AACP Full-time Pharmacy Faculty Interactive Dashboard, 19 the NAPLEX pass rates from NABP, 20 and residency matches from an ASHP email sent to pharmacy school deans.

The other four indicators pertained to research: total research funding dollars, average award amount (total funding dollars divided by the number of funded faculty), the total number of principal investigators on NIH grants, and the number of PhDs conferred. The first three research variables were obtained from the AACP Funded Research Grant Institutional Rankings, 21 while the fourth was drawn from the AACP Profile of Pharmacy Students, Degrees Conferred. 18

While the USNWR medical school and nursing school rankings helped to inform our selection of education and research quality indicators, a number of indicators used by USNWR are not readily available for pharmacy schools, such as standardized admission test scores (the PCAT rather than the MCAT); undergraduate GPA; admissions selectivity (number of applicants offered admission); clinical practice participation; graduate outcomes; and other measures of faculty achievement. 10,11 The dataset was cleaned using Microsoft Excel (Version 16.0.11126.20192; Microsoft, 2019) and IBM SPSS (Version 27; IBM, 2020).
All variables were converted to a standard score (Z score) to place them on a common scale for calculating the rankings. One variable, student-to-faculty ratio, was reverse-coded so that the direction of the scale was consistent with the other variables (i.e., larger values would be associated with higher quality). When calculating the education rankings, schools missing one or more education variables were deleted listwise. Those schools that reported research funding to AACP but had no NIH investigators or PhDs conferred, or those schools with no research funding or program, were included in the research analysis but assigned zero values for those variables as appropriate.
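As a rough sketch, the standardization, reverse-coding, and education-ranking steps described above might look like the following. All school names and metric values here are hypothetical illustrations, not data from the study, and the actual analysis was performed in Excel and SPSS rather than in code:

```python
import statistics

def zscores(values):
    """Standardize raw metric values to Z scores (common scale)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical education metrics for three schools: NAPLEX first-time pass
# rate (%), PGY1 residency match rate (%), and student-to-faculty ratio
# (smaller is better, so it is reverse-coded below).
schools = ["A", "B", "C"]
naplex = [92.0, 85.0, 78.0]
match = [40.0, 30.0, 20.0]
ratio = [8.0, 10.0, 12.0]

z_naplex = zscores(naplex)
z_match = zscores(match)
z_ratio = [-z for z in zscores(ratio)]  # reverse-code: lower ratio -> higher score

# Education score = sum of standardized metrics; rank schools descending.
edu_score = {s: z_naplex[i] + z_match[i] + z_ratio[i]
             for i, s in enumerate(schools)}
edu_rank = sorted(schools, key=lambda s: edu_score[s], reverse=True)
```

Schools missing any education variable would simply be dropped from the lists before standardization, mirroring the listwise deletion described above.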
A sensitivity analysis was performed to compare the effects of applying different weights to the education score and the research score when calculating the overall school ranks, similar to USNWR, which weights the metrics in its medicine and nursing rankings differently depending on school or program focus. The rank for each school was recalculated using 10-point increments in the weight of the education factor. For example, each school's total rank was calculated using a weight of 0 for the education factor and 100 for the research factor, then a weight of 10 for education and 90 for research, and so forth. A difference in rank (absolute value) was calculated for each school at each increment relative to the rank produced by the 50:50 weighting scheme. Schools were divided into three groups containing approximately equal numbers of schools, based on their overall rank under the 50:50 weighting scheme (i.e., group 1, highest ranked third; group 2, middle ranked third; group 3, lowest ranked third). The mean difference in rank was then compared for each group of schools, and overall, for the rank produced by each weighting scheme relative to the 50:50 approach. Results from the sensitivity analysis were also used to identify a second weighting scheme that could be useful and appropriate.
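The weight sweep described above can be sketched as follows. The education and research scores here are hypothetical placeholders (the study used standardized composite scores for 130 schools), and the function names are illustrative only:

```python
def overall_rank(schools, edu, res, w_edu):
    """Rank schools by a weighted combination of education and research
    scores. w_edu is the education weight out of 100 (60 means 60:40)."""
    w = w_edu / 100.0
    combined = {s: w * edu[s] + (1 - w) * res[s] for s in schools}
    ordered = sorted(schools, key=lambda s: combined[s], reverse=True)
    return {s: i + 1 for i, s in enumerate(ordered)}

# Hypothetical composite scores for four schools.
schools = ["A", "B", "C", "D"]
edu = {"A": 1.5, "B": 0.5, "C": -0.5, "D": -1.5}
res = {"A": 1.0, "B": -1.0, "C": 1.2, "D": -1.2}

# Baseline is the 50:50 weighting; sweep the education weight in
# 10-point increments and record the mean absolute change in rank.
baseline = overall_rank(schools, edu, res, 50)
mean_shifts = {}
for w_edu in range(0, 101, 10):
    ranks = overall_rank(schools, edu, res, w_edu)
    mean_shifts[w_edu] = sum(abs(ranks[s] - baseline[s])
                             for s in schools) / len(schools)
```

Grouping schools into thirds by baseline rank and averaging the shifts within each group reproduces the group-level comparison described above.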

Results
Of the 141 US colleges and schools of pharmacy, four were excluded for incomplete or missing variables in both education and research, two were excluded because their accreditation had been withdrawn, and five had incomplete education and research data. This left 130 schools with complete education data for that ranking and 112 with complete data for the research ranking. By assigning zero values for missing research variables, education, research, and overall ranks were calculated for 130 schools in the final dataset.
The sensitivity analysis (Fig. 1) suggests that the further one deviates from the 50:50 weighting of the education and research factors, the more dramatic the difference in school rankings. For example, the mean difference in overall rank between the 50:50 scheme and the 60:40 scheme is only 3.3 positions, whereas the difference between the 50:50 scheme and the 90:10 scheme is 11.1 positions. Additionally, the effects of changes in the weights applied to the education and research factors are not equal across the three groups of institutions. The greatest reshuffling of institutions occurs within the second (middle) group. An average change of 14.2 positions was observed for these institutions when education received a weight of 20, compared with changes of 3.2 and 9.6 for groups one and three, respectively. Institutions in the second group also changed an average of 11.4 positions when education received a weight of 80, compared with 5.6 and 5.0 for groups one and three. Calculated rankings for each pharmacy school included in this study, alongside their USNWR rankings, are presented in Table 2.

Discussion
The USNWR ranks nursing graduate programs 10 and medical schools 11 more objectively than pharmacy programs, using several objective indicators of program quality in addition to a peer assessment score. Separate rankings are calculated for research-focused medical schools using a weighted average of 12 indicators, including research activity, and for primary care schools using seven indicators, including the proportion of medical graduates entering primary care specialties. Both rankings include admissions selectivity and student-to-faculty ratios (Table 1). Doctor of Nursing Practice (DNP) and Master of Nursing (MS) rankings use a weighted average of 14 indicators; seven are used in both frameworks (four research activity and three faculty quality), and the other seven indicators are specific to each degree (Table 1).
Although USNWR uses quality metrics for medicine, nursing, and undergraduate rankings, pharmacy programs are ranked purely on peer perception. [9][10][11] Every four years, a limited number of surveys are sent to each fully accredited pharmacy program in good standing. The USNWR pharmacy program survey asks respondents to consider all factors that relate to excellence in each program, such as curriculum, scholarship and research, and quality of faculty and graduates. Respondents rate each school with a single checkmark (1 = marginal, 2 = adequate, 3 = good, 4 = strong, 5 = outstanding), or "don't know" if the respondent lacks enough knowledge to rate a program. 9 Popular rankings may influence the perceptions of potential student applicants, dean and faculty applicants, preceptors, patients, funding agencies, donors, collaborators and partners, and other entities. Therefore, it is important to align ranking systems with measures of program quality that address the interests of those using the results to make decisions. Studies have found that USNWR pharmacy program rankings correlate strongly with total grant funding, NIH and non-NIH grant funding, years in existence, and association with an academic medical center. 22,23 Faculty publication rates were also significantly correlated in one study. Therefore, perceptions in the USNWR pharmacy program rankings appear to favor the longer-established and research-intensive schools while potentially failing to recognize educational quality across the academy.
In this novel study, education and research were initially assigned equal weight in the overall ranking calculation. The authors then debated whether to assign slightly greater weight to education in the overall calculation; education is the primary goal of all schools, and the main audience of program ranking is the prospective applicants interested in educational quality. The study team explored this issue using sensitivity analysis of equal versus unequal weights between the two categories.
The sensitivity analysis revealed the impact of the weightings of the academic and research components on the overall rankings. The baseline analysis used education-research weightings of 50:50. As noted in the sensitivity analysis, an education-research weighting of 60:40 would have resulted in some changes in the rankings. However, further deviation from the 50:50 weighting resulted in a higher level of deviation in the ranks. This deviation was lowest in the schools initially categorized in group 1 (highest ranked at 50:50) and highest for those schools initially determined to fall in group 2. Schools in group 1 displayed more relative strengths in both major components (education and research); schools in group 2 displayed strength in one component but weakness in the other, and schools in group 3 displayed more relative weakness in both major components.
To that end, an education-research weighting of 50:50 or 60:40 is recommended for determining the overall ranks. Those weightings maintain the importance of research to the academy. Additionally, we observe that the 60:40 education-research weighting is consistent with the USNWR process for ranking research-intensive medical schools, and that USNWR calculates rankings for primary care medical schools, nursing master's, and nursing DNP programs with even greater proportionate weight placed on the education variables (Table 1). 10,11

This research aimed to develop a simple, data-driven ranking of pharmacy schools, similar to those for other healthcare professions, using readily available metrics that reflect the quality of research and education. An empirical framework was developed using objective data obtained from AACP, ASHP, or the public domain; the metrics selected were similar to those used in USNWR rankings for medical and nursing programs (Table 1). 10,11 Other educational quality measures identified by deans were not included, partly because of the difficulty of obtaining reliable data: public and patient care service, stakeholder feedback, testing, student success, and curriculum. 24 This framework also excluded other factors correlated with the USNWR rankings, such as the number of years in existence and association with an academic medical center, 23 which may underestimate academic program quality in newer pharmacy schools.
Using standardized objective measures of pharmacy school quality, such as those used in this paper, could inform the development and implementation of strategic plan goals and aid in selecting true and aspirational peers for benchmarking. Pharmacy schools could then target specific metrics for improvement and resource allocation to appeal to potential student or faculty applicants. Further, because the data used for the analysis are updated annually, rankings can be recalculated annually, providing a real-time metric of success for pharmacy institutions.
The methods to develop these rankings provide a general framework that can be easily replicated or adapted for future data. Unlike the reputational scores used in USNWR rankings that may be slow to change, calculated rankings will reflect significant system-wide changes in current and future quality measures (i.e., board passage rates, NIH and other funding).
There are potentially numerous limitations to any ranking system. The researchers attempted to minimize those limitations by carefully selecting quality-based metrics with data available to pharmacy schools or in the public domain. However, not all possible metrics associated with academic program or research quality were included. For example, several schools received a research score of zero in funding metrics, even though there are other forms of scholarship, such as publications in peer-reviewed journals. Additional indicators would strengthen the results, such as measures of the provision of patient care or community service, the pursuit of fellowships or graduate education, tuition or debt burden, entry-level salaries, and types of employment, although gathering these data would be laborious because they are not readily available.
Using a single year of data to calculate rankings presents another limitation because the results are sensitive to year-to-year fluctuations, although this is common practice for USNWR and others. This may be mitigated by including multiple indicators; however, a multi-year average may be preferable in future ranking calculations. Another concern is the age of the data used. Most organizations, such as AACP and ASHP, compile data for the preceding academic year and then clean, analyze, and publish those data. As such, even a simple ranking model is based on data that are close to two years old.

Conclusion
This framework suggests a relatively easy and more objective approach to pharmacy school rankings using distinct quality dimensions in education and research. A focus on both academic program quality and research-based quality may be useful to the academy because of its inclusivity. Future researchers may consider how much emphasis should be assigned to each dimension and are encouraged to identify additional data sources and quality metrics, including those that are proprietary or collected through surveys or open records requests. Pharmacy schools may benefit from using this study's metrics to develop strategic plans for improvement and to benchmark with peer institutions. Given the discrepancies between this model-driven approach and the USNWR peer perception scoring system, deans and academy leaders should advocate for a new ranking system or changes to the existing USNWR Best Pharmacy Schools rankings.